From xen-devel-bounces@lists.xenproject.org Mon May 01 02:05:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 02:05:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527919.820530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptIuB-0004Fa-HU; Mon, 01 May 2023 02:04:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527919.820530; Mon, 01 May 2023 02:04:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptIuB-0004FG-BV; Mon, 01 May 2023 02:04:43 +0000
Received: by outflank-mailman (input) for mailman id 527919;
 Mon, 01 May 2023 02:04:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptIu9-0004F6-SL; Mon, 01 May 2023 02:04:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptIu9-0005Bi-IV; Mon, 01 May 2023 02:04:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptIu9-0008R5-02; Mon, 01 May 2023 02:04:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptIu8-000347-Vx; Mon, 01 May 2023 02:04:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MCsuZLpjSHay1yPzhRisZK7v858VGeFvg6FhF3GFmog=; b=OYm8PiiBfj7J2HIhWv/pHlwzt7
	7VWcHFvKvVHqqkb8VMaq769hIbrYQvmjgeqlX9kb7gsiSiT/6ame+ZYF3GFQ0xaYVo9RL9OBrNbgv
	pCZIsy2YlLToygW78wgay7C41CR/IPRpt990AHPYAmauwdGd6DSwdaagEDtKMJx5oZyA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180489-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180489: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=825a0714d2b3883d4f8ff64f6933fb73ee3f1834
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 May 2023 02:04:40 +0000

flight 180489 linux-linus real [real]
flight 180491 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180489/
http://logs.test-lab.xenproject.org/osstest/logs/180491/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                825a0714d2b3883d4f8ff64f6933fb73ee3f1834
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   14 days
Failing since        180281  2023-04-17 06:24:36 Z   13 days   23 attempts
Testing same since   180486  2023-04-30 05:33:05 Z    0 days    2 attempts

------------------------------------------------------------
2087 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 247816 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 01 09:42:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 09:42:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527994.820557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptQ3L-0002Fo-8s; Mon, 01 May 2023 09:42:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527994.820557; Mon, 01 May 2023 09:42:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptQ3L-0002Fh-4Y; Mon, 01 May 2023 09:42:39 +0000
Received: by outflank-mailman (input) for mailman id 527994;
 Mon, 01 May 2023 09:42:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ExGp=AW=citrix.com=prvs=4789cc51e=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ptQ3J-0002Fb-CX
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 09:42:37 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7eca9566-e804-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 11:42:34 +0200 (CEST)
Received: from mail-mw2nam12lp2048.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 01 May 2023 05:42:23 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5357.namprd03.prod.outlook.com (2603:10b6:208:1e0::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Mon, 1 May
 2023 09:42:20 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.030; Mon, 1 May 2023
 09:42:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7eca9566-e804-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1682934154;
  h=message-id:date:subject:to:references:from:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=TNw1JkG/WJ6X19gEA6XgkNzFyZFAGhk3BSAduUwCFLA=;
  b=OUygsJlZb2OGP34PelYcMqv62nQgyVIorh7WCyjJTcgS30QZjGxa8gIa
   D35G8qephknyB/ZhuMxkgs3N5jEUKGLbTgL1K85uF4DpWI8uKrV6BY55y
   IYrHmmuuykJpFsGLeHi86uEXRxmerzT/zbJbts7s/NIjLqbQR5cZVe9jh
   0=;
X-IronPort-RemoteIP: 104.47.66.48
X-IronPort-MID: 107315830
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,239,1677560400"; 
   d="scan'208";a="107315830"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kXzP6HIFMQPBp74qp/G4GHvduk45yOCKN9kVp+vTZTP2WVRSelQeKaBsfzA88usjBXjMRW6WK/8He/i5XPp19ncpljmTmaMbsiYEGXzAJXQk97Fh6+s1dz2eb70fe9FS3Yu7eAanGQTfHKJdPybhJWTvD/3TqhgtFhzo6yLySZSqDn+JitjBJjPaTy70tq/U6Rf1+xoG+Z//cFd1wAZLofS4XckGzpIqVsXkc64AITY/iP7dBonPlzoHMC5QwOlnODZlieNozzylsuJcpwz505f3sxHfgv5b2IirduOx7NLG2EkOGTe3RQDnjBiNHOSA1OXiadkcv7/f0oo50Ivgqg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5PXg4EtINe7QjykmwunbB7ueYRJ0hS8xhvdoz9Z7xrA=;
 b=ZND+0D7vGQ4pwQtCuPIBqX+fRuvS3lOb5vGkpsk0UN35ofLrSvte8LIg26UP6odQCajvxgbj4mCXAg2zskjQ95vSyfYQ3CFcsGXn/OJ7K6IYc37kc+1QTuDREh98b0G4YRSJZCRLhtV0KIghhUM9yRpNVTm87SWLqsW+awm84fmKQbpNjT8KRba4zBggzdYphqRVxRtKsMJYP6D1tWh/AHdRLLXEVN4cfAfNI48Zt6gXm7i4o+b36DdCI5F42sIG3R64xmPxmcmNDofaxGI4nQtDP4MjJVaFaO2Ok9GJv7CDSMYfvjcs1MwxPjtDhZJIy0TE6+Iw+KDetvP57Jun2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5PXg4EtINe7QjykmwunbB7ueYRJ0hS8xhvdoz9Z7xrA=;
 b=e2Luj439KGL8n+PQTY3QGsLZ0GtiA+hONQMbyn5XxUGAnJo440qnyhQ+LaeLaIC60IlSvRswlAUVARH4Z9BLiOFknr0pbPOVPDYKq/BOGFYzmLAiNUymoKVW7YvG+SqGxHpYc/ZxdcFjlvpo1CiIGAuw7SdjQlH0f+SrWVixUEk=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <5b17e7c8-3c52-799f-f995-73b3b30ee5ad@citrix.com>
Date: Mon, 1 May 2023 10:42:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2 0/2] automation: optimize build jobs order
Content-Language: en-GB
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org
References: <cover.ff073811df470285fc1011952c6cc28e9e77607e.1682894502.git-series.marmarek@invisiblethingslab.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <cover.ff073811df470285fc1011952c6cc28e9e77607e.1682894502.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0157.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c7::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB5357:EE_
X-MS-Office365-Filtering-Correlation-Id: 2bbad587-85e4-4a41-4b2c-08db4a285a51
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2bbad587-85e4-4a41-4b2c-08db4a285a51
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 09:42:18.4194
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5357

On 30/04/2023 11:42 pm, Marek Marczykowski-Górecki wrote:
> This made the pipeline complete within 45 minutes. This isn't a big
> improvement on its own, but it should make adding more runners more
> beneficial. Watching it in real time, most jobs were waiting for
> available runners rather than being stuck on dependencies.
>
> Marek Marczykowski-Górecki (2):
>   automation: move test artifacts jobs to the top
>   automation: optimize build jobs order

Much easier to follow.  Thanks.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon May 01 10:06:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 10:06:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.527998.820567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptQQS-0004tC-3g; Mon, 01 May 2023 10:06:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 527998.820567; Mon, 01 May 2023 10:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptQQS-0004t5-0J; Mon, 01 May 2023 10:06:32 +0000
Received: by outflank-mailman (input) for mailman id 527998;
 Mon, 01 May 2023 10:06:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptQQP-0004su-Qg; Mon, 01 May 2023 10:06:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptQQP-0001ZE-Fo; Mon, 01 May 2023 10:06:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptQQO-0005Wj-VP; Mon, 01 May 2023 10:06:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptQQO-0000JG-V0; Mon, 01 May 2023 10:06:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MOrsYD36vFgY3q15wNpKNU6adowB2fcfAjCDaP+KNsU=; b=Yr9I9WTmUAUvR4UK1NGGwjczRZ
	g+DwqzUbtAuBJsC2Pwc/V2pMAQ42KY0ipNt5l7/t1E+O+IXYux9NLBrXnai2FZLZZICHicuDTVtdo
	Rjt52aAdhDfoWJaGOd3k4sbpXU4QuG2vRJf6Qq/jFzeU03wTcK2w+iqazmqHHt29A9ao=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180492-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180492: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6a47ba2f7855f8fc094ec4f837e71a34ededb77b
X-Osstest-Versions-That:
    xen=6a47ba2f7855f8fc094ec4f837e71a34ededb77b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 May 2023 10:06:28 +0000

flight 180492 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180492/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail in 180484 pass in 180492
 test-amd64-i386-freebsd10-i386  7 xen-install              fail pass in 180484

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install   fail in 180484 like 180481
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180481
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180481
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180484
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180484
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180484
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180484
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180484
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180484
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180484
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180484
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180484
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180484
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180484
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  6a47ba2f7855f8fc094ec4f837e71a34ededb77b
baseline version:
 xen                  6a47ba2f7855f8fc094ec4f837e71a34ededb77b

Last test of basis   180492  2023-05-01 01:52:02 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 01 13:26:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 13:26:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528024.820577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptTXm-0007zv-Bx; Mon, 01 May 2023 13:26:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528024.820577; Mon, 01 May 2023 13:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptTXm-0007zo-8L; Mon, 01 May 2023 13:26:18 +0000
Received: by outflank-mailman (input) for mailman id 528024;
 Mon, 01 May 2023 13:26:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptTXl-0007zd-Bp; Mon, 01 May 2023 13:26:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptTXk-0005wH-V4; Mon, 01 May 2023 13:26:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptTXk-0001jN-B6; Mon, 01 May 2023 13:26:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptTXk-0001bK-Ag; Mon, 01 May 2023 13:26:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=33eSr4Ud25s2d+ZXDV+Y0W/bwEw4xfk8MfQ9ZDWE3xA=; b=cCuakuRSJjBXdUljjJnmVvRpCt
	SjJEJqzcSycPNr2wIJAmszNbvlqW53ODj2Qu+1SngrckFafHcfFstNhF1vsTX6ABEGFyMfEEX0mxe
	i8fvZ7i4yZbdVzo1j6XPeuJiXEmF1Z09RRXGtnsHVt6HHcp4L+j8Q4W5aeACHnMMKAaI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180493-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180493: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=58390c8ce1bddb6c623f62e7ed36383e7fa5c02f
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 May 2023 13:26:16 +0000

flight 180493 linux-linus real [real]
flight 180495 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180493/
http://logs.test-lab.xenproject.org/osstest/logs/180495/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 test-amd64-amd64-xl-vhd     21 guest-start/debian.repeat fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-vhd       8 xen-boot            fail pass in 180495-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 180495 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 180495 never pass
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                58390c8ce1bddb6c623f62e7ed36383e7fa5c02f
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   14 days
Failing since        180281  2023-04-17 06:24:36 Z   14 days   24 attempts
Testing same since   180493  2023-05-01 02:06:50 Z    0 days    1 attempts

------------------------------------------------------------
2116 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 254448 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 01 13:33:07 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180494-mainreport@xen.org>
Subject: [xen-unstable-smoke test] 180494: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ef841d2a2377f5297add27e637b725426bb4840a
X-Osstest-Versions-That:
    xen=6a47ba2f7855f8fc094ec4f837e71a34ededb77b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 May 2023 13:33:00 +0000

flight 180494 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180494/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ef841d2a2377f5297add27e637b725426bb4840a
baseline version:
 xen                  6a47ba2f7855f8fc094ec4f837e71a34ededb77b

Last test of basis   180471  2023-04-28 15:02:00 Z    2 days
Testing same since   180494  2023-05-01 11:03:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6a47ba2f78..ef841d2a23  ef841d2a2377f5297add27e637b725426bb4840a -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 01 15:10:31 2023
Date: Mon, 1 May 2023 11:09:34 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 04/20] virtio-scsi: stop using aio_disable_external()
 during unplug
Message-ID: <20230501150934.GA14869@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-5-stefanha@redhat.com>
 <ZEvWv8dF78Jpb6CQ@redhat.com>
In-Reply-To: <ZEvWv8dF78Jpb6CQ@redhat.com>



On Fri, Apr 28, 2023 at 04:22:55PM +0200, Kevin Wolf wrote:
> On 25.04.2023 at 19:27, Stefan Hajnoczi wrote:
> > This patch is part of an effort to remove the aio_disable_external()
> > API because it does not fit in a multi-queue block layer world where
> > many AioContexts may be submitting requests to the same disk.
> >
> > The SCSI emulation code is already in good shape to stop using
> > aio_disable_external(). It was only used by commit 9c5aad84da1c
> > ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
> > disk") to ensure that virtio_scsi_hotunplug() works while the guest
> > driver is submitting I/O.
> >
> > Ensure virtio_scsi_hotunplug() is safe as follows:
> >
> > 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
> >    device_set_realized() calls qatomic_set(&dev->realized, false) so
> >    that future scsi_device_get() calls return NULL because they exclude
> >    SCSIDevices with realized=false.
> >
> >    That means virtio-scsi will reject new I/O requests to this
> >    SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
> >    virtio_scsi_hotunplug() is still executing. We are protected against
> >    new requests!
> >
> > 2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
> >    that in-flight requests are cancelled synchronously. This ensures
> >    that no in-flight requests remain once qdev_simple_device_unplug_cb()
> >    returns.
> >
> > Thanks to these two conditions we don't need aio_disable_external()
> > anymore.
> >
> > Cc: Zhengui Li <lizhengui@huawei.com>
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> qemu-iotests 040 starts failing for me after this patch, with what looks
> like a use-after-free error of some kind.
>
> (gdb) bt
> #0  0x000055b6e3e1f31c in job_type (job=0xe3e3e3e3e3e3e3e3) at ../job.c:238
> #1  0x000055b6e3e1cee5 in is_block_job (job=0xe3e3e3e3e3e3e3e3) at ../blockjob.c:41
> #2  0x000055b6e3e1ce7d in block_job_next_locked (bjob=0x55b6e72b7570) at ../blockjob.c:54
> #3  0x000055b6e3df6370 in blockdev_mark_auto_del (blk=0x55b6e74af0a0) at ../blockdev.c:157
> #4  0x000055b6e393e23b in scsi_qdev_unrealize (qdev=0x55b6e7c04d40) at ../hw/scsi/scsi-bus.c:303
> #5  0x000055b6e3db0d0e in device_set_realized (obj=0x55b6e7c04d40, value=false, errp=0x55b6e497c918 <error_abort>) at ../hw/core/qdev.c:599
> #6  0x000055b6e3dba36e in property_set_bool (obj=0x55b6e7c04d40, v=0x55b6e7d7f290, name=0x55b6e41bd6d8 "realized", opaque=0x55b6e7246d20, errp=0x55b6e497c918 <error_abort>)
>     at ../qom/object.c:2285
> #7  0x000055b6e3db7e65 in object_property_set (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", v=0x55b6e7d7f290, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1420
> #8  0x000055b6e3dbd84a in object_property_set_qobject (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=0x55b6e74c1890, errp=0x55b6e497c918 <error_abort>)
>     at ../qom/qom-qobject.c:28
> #9  0x000055b6e3db8570 in object_property_set_bool (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=false, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1489
> #10 0x000055b6e3daf2b5 in qdev_unrealize (dev=0x55b6e7c04d40) at ../hw/core/qdev.c:306
> #11 0x000055b6e3db509d in qdev_simple_device_unplug_cb (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/qdev-hotplug.c:72
> #12 0x000055b6e3c520f9 in virtio_scsi_hotunplug (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/scsi/virtio-scsi.c:1065
> #13 0x000055b6e3db4dec in hotplug_handler_unplug (plug_handler=0x55b6e81c3630, plugged_dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/hotplug.c:56
> #14 0x000055b6e3a28f84 in qdev_unplug (dev=0x55b6e7c04d40, errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:935
> #15 0x000055b6e3a290fa in qmp_device_del (id=0x55b6e74c1760 "scsi0", errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:955
> #16 0x000055b6e3fb0a5f in qmp_marshal_device_del (args=0x7f61cc005eb0, ret=0x7f61d5a8ae38, errp=0x7f61d5a8ae40) at qapi/qapi-commands-qdev.c:114
> #17 0x000055b6e3fd52e1 in do_qmp_dispatch_bh (opaque=0x7f61d5a8ae08) at ../qapi/qmp-dispatch.c:128
> #18 0x000055b6e4007b9e in aio_bh_call (bh=0x55b6e7dea730) at ../util/async.c:155
> #19 0x000055b6e4007d2e in aio_bh_poll (ctx=0x55b6e72447c0) at ../util/async.c:184
> #20 0x000055b6e3fe3b45 in aio_dispatch (ctx=0x55b6e72447c0) at ../util/aio-posix.c:421
> #21 0x000055b6e4009544 in aio_ctx_dispatch (source=0x55b6e72447c0, callback=0x0, user_data=0x0) at ../util/async.c:326
> #22 0x00007f61ddc14c7f in g_main_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:3454
> #23 g_main_context_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:4172
> #24 0x000055b6e400a7e8 in glib_pollfds_poll () at ../util/main-loop.c:290
> #25 0x000055b6e400a0c2 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313
> #26 0x000055b6e4009fa2 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:592
> #27 0x000055b6e3a3047b in qemu_main_loop () at ../softmmu/runstate.c:731
> #28 0x000055b6e3dab27d in qemu_default_main () at ../softmmu/main.c:37
> #29 0x000055b6e3dab2b8 in main (argc=24, argv=0x7ffec55196a8) at ../softmmu/main.c:48
> (gdb) p jobs
> $4 = {lh_first = 0x0}

I wasn't able to reproduce this with gcc 13.1.1 or clang 16.0.1:

  $ tests/qemu-iotests/check -qcow2 040

Any suggestions on how to reproduce the issue?

Thanks,
Stefan
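
For what it's worth, the two-step safety argument from the commit message
(reject new requests once realized is cleared, then purge in-flight
requests synchronously) can be modeled in a few lines. This is an
illustrative Python sketch of the invariant only, not QEMU code; the
class and method names are made up:

```python
import threading

class ScsiDeviceModel:
    """Toy model: 'realized' plays the role of dev->realized, which
    scsi_device_get() checks before accepting a new request."""

    def __init__(self):
        self.realized = True
        self.in_flight = []
        self.lock = threading.Lock()

    def submit(self, req):
        # Step 1 of the argument: once realized is False, new requests
        # are rejected (VIRTIO_SCSI_S_BAD_TARGET in the real code).
        with self.lock:
            if not self.realized:
                return False
            self.in_flight.append(req)
            return True

    def unrealize(self):
        # Step 2: clear the flag first, then cancel in-flight requests
        # synchronously (scsi_device_purge_requests in the real code),
        # so nothing remains in flight when unrealize returns.
        with self.lock:
            self.realized = False
            cancelled = len(self.in_flight)
            self.in_flight.clear()
        return cancelled

dev = ScsiDeviceModel()
dev.submit("req-1")
dev.submit("req-2")
cancelled = dev.unrealize()  # cancels the two in-flight requests
late = dev.submit("req-3")   # rejected: device no longer realized
```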




From xen-devel-bounces@lists.xenproject.org Mon May 01 19:21:06 2023
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH v3 00/14] Intel Hardware P-States (HWP) support
Date: Mon,  1 May 2023 15:20:31 -0400
Message-Id: <20230501192045.87377-1-jandryuk@gmail.com>

Hi,

This patch series adds Hardware-Controlled Performance States (HWP) for
Intel processors to Xen.

v2 was only partially reviewed, so v3 is mostly a reposting of v2.  In v2 &
v3, I think I addressed all comments for v1.  I kept patch 11 "xenpm:
Factor out a non-fatal cpuid_parse variant", with a v2 comment
explaining why I keep it.

v3 adds "xen/x86: Tweak PDC bits when using HWP".  Qubes testing revealed
an issue where enabling HWP can crash firmware code (maybe SMM).  This
requires a Linux change to get the PDC bits from Xen and pass them to
ACPI.  Roger has a patch [0] to set the PDC bits.  Roger's 3-patch
series was tested with "xen/x86: Tweak PDC bits when using HWP" on
affected hardware and allowed proper operation.

Previous cover letter:

With HWP, the processor makes its own determinations for frequency
selection, though users can set some parameters and preferences.  There
is also Turbo Boost which dynamically pushes the max frequency if
possible.

The existing governors don't work with HWP since they select frequencies
and HWP doesn't expose those.  Therefore a dummy hwp-internal governor is
used that doesn't do anything.

xenpm get-cpufreq-para is extended to show HWP parameters, and
set-cpufreq-hwp is added to set them.

A lightly loaded OpenXT laptop showed ~1W power savings according to
powertop.  A mostly idle Fedora system (dom0 only) showed a more modest
power savings.

This is for a 10th gen 6-core CPU with a 1600 MHz base and 4900 MHz max
frequency.  In the default balance mode, Turbo Boost doesn't exceed 4GHz.
Tweaking the energy_perf preference with `xenpm set-cpufreq-hwp balance
ene:64`, I've seen the CPU hit 4.7GHz before throttling down and
bouncing around between 4.3 and 4.5 GHz.  Curiously, the other cores
read ~4GHz when Turbo Boost takes effect.  This was done after pinning
all dom0 cores, and using taskset to pin to vCPU/pCPU 11 and running a
bash tight loop.
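
To illustrate the `name:value` token syntax used in the
`xenpm set-cpufreq-hwp balance ene:64` example above, here is a
hypothetical parser sketch.  The real option names and matching rules
live in tools/misc/xenpm.c; the KNOWN_PARAMS list and the prefix
matching here are assumptions based on "ene" standing in for the
energy/performance preference, not the actual implementation:

```python
# Hypothetical model of "name:value" parameter tokens such as "ene:64".
KNOWN_PARAMS = ("energy-perf", "act-window", "min-freq", "max-freq")

def parse_hwp_token(token):
    """Split a token into (full-parameter-name, integer value),
    resolving an unambiguous prefix like "ene" to "energy-perf"."""
    name, sep, value = token.partition(":")
    if not sep:
        raise ValueError(f"expected name:value, got {token!r}")
    matches = [p for p in KNOWN_PARAMS if p.startswith(name)]
    if len(matches) != 1:
        raise ValueError(f"ambiguous or unknown parameter {name!r}")
    return matches[0], int(value)

print(parse_hwp_token("ene:64"))  # ('energy-perf', 64)
```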

HWP defaults to disabled; when disabled, Xen keeps running with the
existing HWP configuration and doesn't reconfigure it.  HWP can be
enabled with cpufreq=xen:hwp.

Hardware Duty Cycling (HDC) is another feature that autonomously powers
down components.  It defaults to enabled when HWP is enabled, but HDC
can be disabled on the command line with cpufreq=xen:hwp,no-hdc.

I've only tested on 8th gen and 10th gen systems with activity window
and energy_perf support, so the code paths for CPUs lacking those
features are untested.

Fast MSR support was removed in v2.  The model-specific checking was not
done properly, and I don't have hardware to test with.  Since writes are
expected to be infrequent, I just removed the code.

This changes the sysctl_pm_op hypercall, so that part wants review.

Regards,
Jason

[0] https://lore.kernel.org/xen-devel/20221121102113.41893-3-roger.pau@citrix.com/

Jason Andryuk (14):
  cpufreq: Allow restricting to internal governors only
  cpufreq: Add perf_freq to cpuinfo
  cpufreq: Export intel_feature_detect
  cpufreq: Add Hardware P-State (HWP) driver
  xenpm: Change get-cpufreq-para output for internal
  xen/x86: Tweak PDC bits when using HWP
  cpufreq: Export HWP parameters to userspace
  libxc: Include hwp_para in definitions
  xenpm: Print HWP parameters
  xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
  libxc: Add xc_set_cpufreq_hwp
  xenpm: Factor out a non-fatal cpuid_parse variant
  xenpm: Add set-cpufreq-hwp subcommand
  CHANGELOG: Add Intel HWP entry

 CHANGELOG.md                              |   1 +
 docs/misc/xen-command-line.pandoc         |   8 +-
 tools/include/xenctrl.h                   |   6 +
 tools/libs/ctrl/xc_pm.c                   |  18 +
 tools/misc/xenpm.c                        | 355 +++++++++++-
 xen/arch/x86/acpi/cpufreq/Makefile        |   1 +
 xen/arch/x86/acpi/cpufreq/cpufreq.c       |  15 +-
 xen/arch/x86/acpi/cpufreq/hwp.c           | 633 ++++++++++++++++++++++
 xen/arch/x86/acpi/lib.c                   |   5 +
 xen/arch/x86/cpu/mcheck/mce_intel.c       |   6 +
 xen/arch/x86/include/asm/cpufeature.h     |  13 +-
 xen/arch/x86/include/asm/msr-index.h      |  14 +
 xen/drivers/acpi/pmstat.c                 |  23 +
 xen/drivers/cpufreq/cpufreq.c             |  40 ++
 xen/drivers/cpufreq/utility.c             |   1 +
 xen/include/acpi/cpufreq/cpufreq.h        |  14 +
 xen/include/acpi/cpufreq/processor_perf.h |   4 +
 xen/include/acpi/pdc_intel.h              |   1 +
 xen/include/public/sysctl.h               |  57 ++
 19 files changed, 1187 insertions(+), 28 deletions(-)
 create mode 100644 xen/arch/x86/acpi/cpufreq/hwp.c

-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:21:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:21:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528062.820617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZ56-00034r-75; Mon, 01 May 2023 19:21:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528062.820617; Mon, 01 May 2023 19:21:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZ56-00034k-3J; Mon, 01 May 2023 19:21:04 +0000
Received: by outflank-mailman (input) for mailman id 528062;
 Mon, 01 May 2023 19:21:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZ55-0002np-CQ
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:21:03 +0000
Received: from mail-qk1-x730.google.com (mail-qk1-x730.google.com
 [2607:f8b0:4864:20::730])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4f87efd6-e855-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:21:02 +0200 (CEST)
Received: by mail-qk1-x730.google.com with SMTP id
 af79cd13be357-75131c2997bso1448687285a.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:21:02 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 x20-20020a0ce0d4000000b0061927ddb043sm2012307qvk.80.2023.05.01.12.21.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:21:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f87efd6-e855-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682968862; x=1685560862;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7VFwxnOVM/nUJWo2+kFJfbzF53GX8dfaVA+R5Uyazpc=;
        b=fxt+FrecoN8NG22EJ/cAh7e8No5T6QNfq9reDEVzQ7EdqNL/B6JjLggYYDL8/2v9gw
         rqD/WLcw8nkbQ1SUKcI+20o+Tu/W9Uiz5MTfrNtY9z5uam7qXqGBya8azNCJXqFE08Xk
         kbLvDoQ58GUkDwEw8Oc527I9EEPhPMQfZU+xCLzjQ1YGwAqpbGUbQaOEkMYGsHJIXH1L
         ySUDH5HHSwLUWEP9aC89yK74HssrIkNPdaGDzaXZuAOB0Sej6K3TDScZC/zTqYlii53O
         rIDA4MvVDlT7jnNZ9kdGxQgRPZdyo56FQJRUOrLxOE1Wr3NdZYkx/ZvxbQriRsaVfWG3
         ku3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682968862; x=1685560862;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=7VFwxnOVM/nUJWo2+kFJfbzF53GX8dfaVA+R5Uyazpc=;
        b=ZIfSwEFvebKWOzBAKg16cOISPs987j4sF+asXIsJaX/a5mlyTPTWd+X8ZoA6SrYOWf
         RzfId430GKKBf3oLhNDV5YfRkXqf77NBIu/Bk6zjHlia6YmFOCPiFds5Us1Ep95BS+xw
         MrJ3JFpv/RnpYAUowWU86riJkJjaf/u/oc6NXicqFwfGmb+q9VhC/P2uWoWcYpf7nj13
         Jzdq6TYSz+tTbxcwAozk6lwaVv86Q1qh4XeRezihMiXXUi35K++sCcabjuaE1migZfoF
         SGGEvtQPZVcTY993ZxjD8CD7JnUHGLqTTi7WMWzzto1b/cgQ6iIdwzZx+amYYGzgGwm6
         m+Qg==
X-Gm-Message-State: AC+VfDyN6RQlOMvETukC083AAehwduDtpfEQDdC9aKGyygQh2JdYznoE
	GYqNURPzOfOg9nGzTv3iYR4=
X-Google-Smtp-Source: ACHHUZ7GV+m0SaqcaVilE9mZSkNpsz96YrsDMORzgTpHu95CZ/am2ZJKlVW9lI3rEQAcEmBuuBN5wQ==
X-Received: by 2002:a05:6214:c8c:b0:5ef:512d:2d47 with SMTP id r12-20020a0562140c8c00b005ef512d2d47mr1231450qvr.19.1682968861656;
        Mon, 01 May 2023 12:21:01 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproj,
	xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 01/14] cpufreq: Allow restricting to internal governors only
Date: Mon,  1 May 2023 15:20:32 -0400
Message-Id: <20230501192045.87377-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501192045.87377-1-jandryuk@gmail.com>
References: <20230501192045.87377-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For hwp, the standard governors are not usable, and only the internal
one is applicable.  Add the cpufreq_governor_internal boolean to
indicate when an internal governor, like hwp-internal, will be used.
This is set during presmp_initcall, so that it can suppress governor
registration during initcall.  Only a governor with a name containing
"-internal" will be allowed in that case.

This way, the unusable governors are not registered, so the internal
one is the only one returned to userspace.  This means incompatible
governors won't be advertised to userspace.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3:
Switch to initdata
Add Jan Acked-by
Commit message s/they/the/ typo
Don't register hwp-internal when running non-hwp - Marek

v2:
Switch to "-internal"
Add blank line in header
---
 xen/drivers/cpufreq/cpufreq.c      | 8 ++++++++
 xen/include/acpi/cpufreq/cpufreq.h | 2 ++
 2 files changed, 10 insertions(+)

diff --git a/xen/drivers/cpufreq/cpufreq.c b/xen/drivers/cpufreq/cpufreq.c
index 2321c7dd07..7bd81680da 100644
--- a/xen/drivers/cpufreq/cpufreq.c
+++ b/xen/drivers/cpufreq/cpufreq.c
@@ -56,6 +56,7 @@ struct cpufreq_dom {
 };
 static LIST_HEAD_READ_MOSTLY(cpufreq_dom_list_head);
 
+bool __initdata cpufreq_governor_internal;
 struct cpufreq_governor *__read_mostly cpufreq_opt_governor;
 LIST_HEAD_READ_MOSTLY(cpufreq_governor_list);
 
@@ -121,6 +122,13 @@ int __init cpufreq_register_governor(struct cpufreq_governor *governor)
     if (!governor)
         return -EINVAL;
 
+    if (cpufreq_governor_internal &&
+        strstr(governor->name, "-internal") == NULL)
+        return -EINVAL;
+
+    if (!cpufreq_governor_internal && strstr(governor->name, "-internal"))
+        return -EINVAL;
+
     if (__find_governor(governor->name) != NULL)
         return -EEXIST;
 
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 35dcf21e8f..0da32ef519 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -114,6 +114,8 @@ extern struct cpufreq_governor cpufreq_gov_userspace;
 extern struct cpufreq_governor cpufreq_gov_performance;
 extern struct cpufreq_governor cpufreq_gov_powersave;
 
+extern bool cpufreq_governor_internal;
+
 extern struct list_head cpufreq_governor_list;
 
 extern int cpufreq_register_governor(struct cpufreq_governor *governor);
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:21:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:21:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528063.820627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZ5A-0003MU-D4; Mon, 01 May 2023 19:21:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528063.820627; Mon, 01 May 2023 19:21:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZ5A-0003ML-AI; Mon, 01 May 2023 19:21:08 +0000
Received: by outflank-mailman (input) for mailman id 528063;
 Mon, 01 May 2023 19:21:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZ58-0002np-O4
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:21:06 +0000
Received: from mail-qk1-x730.google.com (mail-qk1-x730.google.com
 [2607:f8b0:4864:20::730])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 518ff36a-e855-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:21:06 +0200 (CEST)
Received: by mail-qk1-x730.google.com with SMTP id
 af79cd13be357-74fb8677a36so133678885a.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:21:06 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 x20-20020a0ce0d4000000b0061927ddb043sm2012307qvk.80.2023.05.01.12.21.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:21:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 518ff36a-e855-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682968865; x=1685560865;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=txBzmZlgf2a5ZApasbIDD2P8VUrWanxh/ZCZKapkSd4=;
        b=hnzzQkqw6iB4TnpMVolsM/d8tA0cLpu0rYq2RmzA5nHVBJa6VHBWbPjqIW0uCmQTUp
         sTHTPdiaSED7oKPvQeLQsKAATgCljdsn+MCz/RG6kH2ypD7ZqbfMoY6R0HGTi4r8sbJa
         vLXsFsL7O59YaDIhrpdY8gTEK7MrfC07APLJ70wAJkXqatxHLhrAJYYx5RT+v3jM+9mF
         9rPC6jdrwvxBkIzcvDFk0czEXPAxtnVqYbEwo01XNdS6hA8AhE7gWz0u4GUIzitKoSib
         slFFJidmyr7rrVuZnj5teD6sjEbvDofZD+sEdBlTlhwKt1Ce80m/sho+QJMXIot4qpnK
         l7sQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682968865; x=1685560865;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=txBzmZlgf2a5ZApasbIDD2P8VUrWanxh/ZCZKapkSd4=;
        b=V8vp370i1PknhVmIhkzfuvNks4vnELh2lTUgXZVLmBlvtvuGO6lL8vZLUmez9X2CT6
         2Vf54AU+OJhx1xUUySHYqZSrVBCn4OGXYIxaP5hAIJbJTpIb4pFfBYY9Mo7OEllaMAZc
         Tx5gAFITi17+vSIu5NAt4VNtsEEJjvLMlRzueRxVb59RU/WQ1cQZq1N4SnCf5hRiK4L3
         442dV7SbSJRs4uldLLv31Tx+UYFfUuvG8jHGJyp77jV5V9Xx8DNfOUPUjt7+86JCTDwy
         PcYYcMc8BgNCQZ/XFhg2B2rlXygXyraPU5S5MLz9yxFhAwUL7g3MBT6snvLI8IBTLemp
         6IAQ==
X-Gm-Message-State: AC+VfDwluYVv3DIRT/UivgmMNO+rRj8OeSLjyF5l6jSsZ3OBWjzFWqi/
	TsAwtVnwhLWdbAcI079ZKQI=
X-Google-Smtp-Source: ACHHUZ417cxCd/vqsbSp93kURkQVEsMZjpcjAt0WtKYiNB7H5wmUcdd/cQhBwm4ZtrgjNj2q+ozWzw==
X-Received: by 2002:a05:6214:c4c:b0:5ef:83cf:91c2 with SMTP id r12-20020a0562140c4c00b005ef83cf91c2mr1729663qvj.45.1682968865210;
        Mon, 01 May 2023 12:21:05 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproj,
	xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 02/14] cpufreq: Add perf_freq to cpuinfo
Date: Mon,  1 May 2023 15:20:33 -0400
Message-Id: <20230501192045.87377-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501192045.87377-1-jandryuk@gmail.com>
References: <20230501192045.87377-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

acpi-cpufreq scales the aperf/mperf measurements by max_freq, but HWP
needs to scale by base frequency.  Setting max_freq to base_freq
"works", but the code is not obvious, and returning values to userspace
is tricky.  Add an additional perf_freq member which is used for scaling
aperf/mperf measurements.
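To illustrate the scaling difference, a standalone sketch of the
get_measured_perf() arithmetic is below.  The struct only loosely
mirrors cpufreq_cpuinfo, and all numbers are hypothetical example
values, not taken from the real code.

```c
#include <stdint.h>

/* Sketch of how perf_freq changes the aperf/mperf scaling.  Frequencies
 * are in kHz; values are examples for a hypothetical 1600 MHz base /
 * 4900 MHz max CPU. */
struct cpuinfo {
    unsigned int max_freq;  /* kHz */
    unsigned int perf_freq; /* kHz: max_freq for acpi-cpufreq, base_freq for HWP */
};

/* perf_percent = delta_aperf * 100 / delta_mperf, then scale by the
 * reference frequency - perf_freq instead of the former max_freq. */
static unsigned int measured_freq(const struct cpuinfo *c,
                                  uint64_t delta_aperf, uint64_t delta_mperf)
{
    unsigned int perf_percent =
        delta_mperf ? (unsigned int)(delta_aperf * 100 / delta_mperf) : 0;

    return c->perf_freq * perf_percent / 100;
}
```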

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3:
Add Jan's Ack

I don't like this, but it seems the best way to re-use the common
aperf/mperf code.  The other option would be to add wrappers that then
do the acpi vs. hwp scaling.
---
 xen/arch/x86/acpi/cpufreq/cpufreq.c | 2 +-
 xen/drivers/cpufreq/utility.c       | 1 +
 xen/include/acpi/cpufreq/cpufreq.h  | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index 2e0067fbe5..6c70d04395 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -316,7 +316,7 @@ unsigned int get_measured_perf(unsigned int cpu, unsigned int flag)
     else
         perf_percent = 0;
 
-    return policy->cpuinfo.max_freq * perf_percent / 100;
+    return policy->cpuinfo.perf_freq * perf_percent / 100;
 }
 
 static unsigned int cf_check get_cur_freq_on_cpu(unsigned int cpu)
diff --git a/xen/drivers/cpufreq/utility.c b/xen/drivers/cpufreq/utility.c
index 9eb7ecedcd..6831f62851 100644
--- a/xen/drivers/cpufreq/utility.c
+++ b/xen/drivers/cpufreq/utility.c
@@ -236,6 +236,7 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
 
     policy->min = policy->cpuinfo.min_freq = min_freq;
     policy->max = policy->cpuinfo.max_freq = max_freq;
+    policy->cpuinfo.perf_freq = max_freq;
     policy->cpuinfo.second_max_freq = second_max_freq;
 
     if (policy->min == ~0)
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 0da32ef519..a06aa92f62 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -37,6 +37,9 @@ extern struct acpi_cpufreq_data *cpufreq_drv_data[NR_CPUS];
 struct cpufreq_cpuinfo {
     unsigned int        max_freq;
     unsigned int        second_max_freq;    /* P1 if Turbo Mode is on */
+    unsigned int        perf_freq; /* Scaling freq for aperf/mperf.
+                                      acpi-cpufreq uses max_freq, but HWP uses
+                                      base_freq. */
     unsigned int        min_freq;
     unsigned int        transition_latency; /* in 10^(-9) s = nanoseconds */
 };
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:21:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:21:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528064.820637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZ5D-0003eO-Nh; Mon, 01 May 2023 19:21:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528064.820637; Mon, 01 May 2023 19:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZ5D-0003eF-K7; Mon, 01 May 2023 19:21:11 +0000
Received: by outflank-mailman (input) for mailman id 528064;
 Mon, 01 May 2023 19:21:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZ5C-0002np-B7
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:21:10 +0000
Received: from mail-qk1-x735.google.com (mail-qk1-x735.google.com
 [2607:f8b0:4864:20::735])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 53a88cf9-e855-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:21:09 +0200 (CEST)
Received: by mail-qk1-x735.google.com with SMTP id
 af79cd13be357-7515631b965so673368285a.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:21:09 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 x20-20020a0ce0d4000000b0061927ddb043sm2012307qvk.80.2023.05.01.12.21.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:21:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53a88cf9-e855-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682968869; x=1685560869;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=H9MXL508WAYhYYKIHs8d77jkKwZyoTe/WztjxZedAQ8=;
        b=rq0DvyOB5DfwmfNuwRvzir/PEUW4Q7Y7tqorFvzYUK9HQUsaW9yTveOdaSqHQszeGL
         jiHsrz5gRbbu3FwfmQLHNbPYDkloR8iXpYmZVMUNv2hNSzj2hJXttzAZO8x07fI705gn
         jEjQ0VogRFH3rkilyFMTeqorC6HRMiwrrdxRzNpgb/+UO1MRDIOj5z9QG6Z4oXYiBO19
         pjtMBMhUGnFJh8mxInOdnvtei4CDIzG9sBwjryXnb6D0yA5/VK4QjvlalXEMDkEC8QcR
         LDYdb/AzkiKUFK1+YA8RUoc8TmDoG93/ulnV+j4qPQ5kzJhPSQCdGnQXYqSlLZjtIUXl
         Qzeg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682968869; x=1685560869;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=H9MXL508WAYhYYKIHs8d77jkKwZyoTe/WztjxZedAQ8=;
        b=U0ezniXveMf1m1cRbCjB7AkO45yWgq1pvxcN7SuYUWKwpXLcAtAaY1g9hyikCgNUi/
         dF0vqvLQyxeUvQXK3nlRyxMnQr157eU4/xRetceDVFptrZfjHY66M3SjHajrXCvqjle8
         DIC8pvOFxbY9+vhTd2+qxMPFZ1k6Az4nkDNpWVz9rnqwKunBpmYSCg3xrYXh+0j3YZBQ
         6vwVGpS1ejaRZVw6KV2mY8q+HRxoIqsV/TJHulx46RJJJvDW1xXlgzUwcMyShQ1a8x/K
         N+UXwH9C0Ca5UmWhfu7Ush7FmaVc6bMMhxIAkTdwfS7nPTIziAB+H/ku0Ru1zSDX99FB
         I0Zg==
X-Gm-Message-State: AC+VfDxvdOinF02xc+MIvacBMiF9feDAUoDYFTTTXtNMAwSd6bBNkhZV
	WpOfR5ekxOLY4jg3G0SquUo=
X-Google-Smtp-Source: ACHHUZ52fJMB7p3+z00UEHeNT6H5lQjNgXa9HJUz0dTy7X0df51pnNMHSDMMqDjFDxxrpToBKZMY8g==
X-Received: by 2002:a05:6214:23cc:b0:572:6e81:ae9c with SMTP id hr12-20020a05621423cc00b005726e81ae9cmr1857350qvb.1.1682968868683;
        Mon, 01 May 2023 12:21:08 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproj,
	xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 03/14] cpufreq: Export intel_feature_detect
Date: Mon,  1 May 2023 15:20:34 -0400
Message-Id: <20230501192045.87377-4-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501192045.87377-1-jandryuk@gmail.com>
References: <20230501192045.87377-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Export feature_detect as intel_feature_detect so it can be re-used by
HWP.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
v3:
Remove void * cast when calling intel_feature_detect

v2:
export intel_feature_detect with typed pointer
Move intel_feature_detect to acpi/cpufreq/cpufreq.h since the
declaration now contains struct cpufreq_policy *.
---
 xen/arch/x86/acpi/cpufreq/cpufreq.c | 8 ++++++--
 xen/include/acpi/cpufreq/cpufreq.h  | 2 ++
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index 6c70d04395..f1cc473b4f 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -339,9 +339,8 @@ static unsigned int cf_check get_cur_freq_on_cpu(unsigned int cpu)
     return extract_freq(get_cur_val(cpumask_of(cpu)), data);
 }
 
-static void cf_check feature_detect(void *info)
+void intel_feature_detect(struct cpufreq_policy *policy)
 {
-    struct cpufreq_policy *policy = info;
     unsigned int eax;
 
     eax = cpuid_eax(6);
@@ -353,6 +352,11 @@ static void cf_check feature_detect(void *info)
     }
 }
 
+static void cf_check feature_detect(void *info)
+{
+    intel_feature_detect(info);
+}
+
 static unsigned int check_freqs(const cpumask_t *mask, unsigned int freq,
                                 struct acpi_cpufreq_data *data)
 {
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index a06aa92f62..0f334d2a43 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -243,4 +243,6 @@ int write_userspace_scaling_setspeed(unsigned int cpu, unsigned int freq);
 void cpufreq_dbs_timer_suspend(void);
 void cpufreq_dbs_timer_resume(void);
 
+void intel_feature_detect(struct cpufreq_policy *policy);
+
 #endif /* __XEN_CPUFREQ_PM_H__ */
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:21:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:21:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528065.820647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZ5K-00041W-2F; Mon, 01 May 2023 19:21:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528065.820647; Mon, 01 May 2023 19:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZ5J-00041N-TM; Mon, 01 May 2023 19:21:17 +0000
Received: by outflank-mailman (input) for mailman id 528065;
 Mon, 01 May 2023 19:21:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZ5I-0003yM-91
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:21:16 +0000
Received: from mail-qv1-xf2d.google.com (mail-qv1-xf2d.google.com
 [2607:f8b0:4864:20::f2d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 559e417b-e855-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:21:13 +0200 (CEST)
Received: by mail-qv1-xf2d.google.com with SMTP id
 6a1803df08f44-61b5a6865dfso8720006d6.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:21:13 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 x20-20020a0ce0d4000000b0061927ddb043sm2012307qvk.80.2023.05.01.12.21.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:21:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 559e417b-e855-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682968872; x=1685560872;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Hym2Jx5Z8r7rrhqAfu+8xxIV/nnrCdXHwFgph7V69FQ=;
        b=fGoIO+6qsbtKMFPSgN0i4Zavvb5MTnhhdqp8CeKrIMwrWVUoGzsS5GUMU3+Vf/BqaK
         vwExJ3rn+DHp4vhn7EgJIimziO/yBOwa9nYWdkh9s2/iGTkofXrSa2J3+pJQVNEi70vQ
         9NoO/vP3pduE9AZow9+Q+ioOJRnST09uH6SvlQQOhIftyDKMET4CLUxaeV0aQPuOYnHW
         JByxXuaAkZf8rc6KIGViImcm3TCKJwiKT/wjVmLGWLl4npF/XU3/cPhAo3jQTwctqHfj
         XAeyll9kEcP3rK6l0/CWTUG24Y3NObIfxdL0RryyhEGH41d8fWxJ+/xDoXWUs3qOj3ux
         1qkA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682968872; x=1685560872;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Hym2Jx5Z8r7rrhqAfu+8xxIV/nnrCdXHwFgph7V69FQ=;
        b=eLUNg4WDMCt7yT6IdEEbWtdKHNXmPUOZRmtqnsm7MCn+qZm3U+rTyyXXQBAKtbNOUH
         IWBL2WqgEbHaGK45z55iqv4vFctMC0YrgcZmlaUvxgXAeomHoxQrJK9LFTGl07TCJ9+i
         n4LJ1gLlvoKKmTjP1FJRVD/n9oqjvDLyhSHZOfn/cy+5vGOVRpm8TyEFui74vWzbs/pq
         Qvune/P06c7AHGuwrPlIvrlt6gjBY0BWySQARKY71byPzgM3/RuG/lg4nrK1lA3XrPdX
         p1i/kB2RToR44Pj0jExsiuRxE9Lut3CF8Oz8C4lXKkRnPxEU6ppmb6xmFli64n+3pc5D
         GFbQ==
X-Gm-Message-State: AC+VfDxLNGdowL6tHAhU5RdiRVqmcDCJNfA/uWZxkyfZA/Q0EtqRci7g
	RMMAk7boPh/OnXejiq3qhPQ=
X-Google-Smtp-Source: ACHHUZ681CkprnN+PnfFwou1vy3oW/3SYrpP7yKOkM9VljtTjTR/9pSDCq4Yp2i9BWpKWV0sPzdLVQ==
X-Received: by 2002:ad4:5aeb:0:b0:61a:43f0:7305 with SMTP id c11-20020ad45aeb000000b0061a43f07305mr1682385qvh.35.1682968871742;
        Mon, 01 May 2023 12:21:11 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproj,
	xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 04/14] cpufreq: Add Hardware P-State (HWP) driver
Date: Mon,  1 May 2023 15:20:35 -0400
Message-Id: <20230501192045.87377-5-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501192045.87377-1-jandryuk@gmail.com>
References: <20230501192045.87377-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From the Intel SDM: "Hardware-Controlled Performance States (HWP), which
autonomously selects performance states while utilizing OS supplied
performance guidance hints."

Enable HWP to run in autonomous mode by poking the correct MSRs.
cpufreq=xen:hwp enables and cpufreq=xen:hwp=0 disables.  The same for
hdc.

There is no interface to configure it yet - xen_sysctl_pm_op/xenpm will
be extended in subsequent patches.  Until then it runs with the hardware
default values, which should be the default 0x80 (out of 0x0-0xff)
energy/performance preference.
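For reference, the 0x80 preference lives in the energy/performance
preference field of the IA32_HWP_REQUEST MSR.  The sketch below packs a
request value with field offsets as documented in the Intel SDM; the
helper and constant names are made up for illustration and are not the
names used in this series.

```c
#include <stdint.h>

/* Illustrative packing of an IA32_HWP_REQUEST value.  Per the Intel
 * SDM: bits 7:0 minimum performance, 15:8 maximum performance,
 * 23:16 desired performance, 31:24 energy/performance preference,
 * 41:32 activity window.  Names here are hypothetical. */
#define HWP_ENERGY_PERF_BALANCE 0x80 /* 0x00 = performance .. 0xff = powersave */

static uint64_t hwp_request(uint8_t min_perf, uint8_t max_perf,
                            uint8_t desired, uint8_t energy_perf,
                            uint16_t act_window)
{
    return (uint64_t)min_perf |
           ((uint64_t)max_perf << 8) |
           ((uint64_t)desired << 16) |
           ((uint64_t)energy_perf << 24) |
           ((uint64_t)(act_window & 0x3ff) << 32);
}
```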

Unscientific powertop measurements of a mostly idle, customized OpenXT
install:
A 10th gen 6-core laptop showed battery discharge drop from ~9.x to
~7.x watts.
An 8th gen 4-core laptop dropped from ~10 to ~9 watts.

Power usage depends on many factors, especially display brightness, but
this does show a power saving in balanced mode when CPU utilization is
low.

HWP isn't compatible with an external governor - it doesn't take
explicit frequency requests.  Therefore a minimal internal governor,
hwp-internal, is also added as a placeholder.

While adding to the xen-command-line.pandoc entry, un-nest verbose from
minfreq.  They are independent.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---

We disable on cpuid_level < 0x16.  cpuid(0x16) is used to get the CPU
frequencies for scaling the APERF/MPERF measurements.  Without it,
things would still work, but the average CPU frequency output would be
wrong.
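For context, a sketch of how CPUID leaf 0x16 output is interpreted is
below.  Per the Intel SDM, EAX[15:0] is the base frequency, EBX[15:0]
the maximum frequency, and ECX[15:0] the bus/reference frequency, all in
MHz.  The register values in the test are example numbers, not read from
real hardware, and the function name is hypothetical.

```c
#include <stdint.h>

/* Interpret the registers returned by CPUID leaf 0x16 (Processor
 * Frequency Information).  Only the low 16 bits of each register are
 * defined; the rest are reserved. */
struct cpu_freqs {
    unsigned int base_mhz;
    unsigned int max_mhz;
    unsigned int bus_mhz;
};

static struct cpu_freqs parse_leaf_0x16(uint32_t eax, uint32_t ebx,
                                        uint32_t ecx)
{
    struct cpu_freqs f = {
        .base_mhz = eax & 0xffff,
        .max_mhz  = ebx & 0xffff,
        .bus_mhz  = ecx & 0xffff,
    };
    return f;
}
```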

My 8th & 10th gen test systems both report:
(XEN) HWP: 1 notify: 1 act_window: 1 energy_perf: 1 pkg_level: 0 peci: 0
(XEN) HWP: Hardware Duty Cycling (HDC) supported
(XEN) HWP: HW_FEEDBACK not supported

IA32_ENERGY_PERF_BIAS has not been tested.

For cpufreq=xen:hwp, placing the option inside the governor wouldn't
work.  Users would have to select the hwp-internal governor to turn off
hwp support.  hwp-internal isn't usable without hwp, and users wouldn't
be able to select a different governor.  That doesn't matter while hwp
defaults off, but it would if or when hwp defaults to enabled.

We can't use parse_boolean() since it requires a single name=val string
and cpufreq_handle_common_option is provided two strings.  Use
parse_bool() and manually handle no-hwp.

Write to disable the interrupt - the Linux intel_pstate driver does
this.  We don't use the interrupts and aren't ready to handle them, so
we just turn them off.  It's unclear whether this is necessary - the SDM
says interrupts are disabled by default.

FAST_IA32_HWP_REQUEST was removed in v2.  The check in v1 was wrong,
it's a model specific feature and the CPUID bit is only available
after enabling via the MSR.  Support was untested since I don't have
hardware with the feature.  Writes are expected to be infrequent, so
just leave it out.

---
v2:
Alphabetize headers
Re-work driver registration
name hwp_drv_data anonymous union "hw"
Drop hwp_verbose_cont
style cleanups
Condense hwp_governor switch
hwp_cpufreq_target remove .raw from hwp_req assignment
Use typed-pointer in a few functions
Pass type to xzalloc
Add HWP_ENERGY_PERF_BALANCE/IA32_ENERGY_BIAS_BALANCE defines
Add XEN_HWP_GOVERNOR define for "hwp-internal"
Capitalize CPUID and MSR defines
Change '_' to '-' for energy-perf & act-window
Read-modify-write MSRs updates
Use FAST_IA32_HWP_REQUEST_MSR_ENABLE define
constify pointer in hwp_set_misc_turbo
Add space after non-fallthrough break in governor switch
Add IA32_ENERGY_BIAS_MASK define
Check CPUID_PM_LEAK for energy bias when needed
Fail initialization with curr_req = -1
Fold hwp_read_capabilities into hwp_init_msrs
Add command line cpufreq=xen:hwp
Add command line cpufreq=xen:hdc
Use per_cpu for hwp_drv_data pointers
Move hwp_energy_perf_bias call into hwp_write_request
energy_perf 0 is valid, so hwp_energy_perf_bias cannot be skipped
Ensure we don't generate interrupts
Remove Fast Write of Uncore MSR
Initialize hwp_drv_data from curr_req
Use SPDX line instead of license text in hwp.c

v3:
Add cf_check to cpufreq_gov_hwp_init() - Marek
Print cpuid_level with %#x - Marek
---
 docs/misc/xen-command-line.pandoc         |   8 +-
 xen/arch/x86/acpi/cpufreq/Makefile        |   1 +
 xen/arch/x86/acpi/cpufreq/cpufreq.c       |   5 +-
 xen/arch/x86/acpi/cpufreq/hwp.c           | 506 ++++++++++++++++++++++
 xen/arch/x86/include/asm/cpufeature.h     |  13 +-
 xen/arch/x86/include/asm/msr-index.h      |  13 +
 xen/drivers/cpufreq/cpufreq.c             |  32 ++
 xen/include/acpi/cpufreq/cpufreq.h        |   3 +
 xen/include/acpi/cpufreq/processor_perf.h |   3 +
 xen/include/public/sysctl.h               |   1 +
 10 files changed, 581 insertions(+), 4 deletions(-)
 create mode 100644 xen/arch/x86/acpi/cpufreq/hwp.c

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d33..aaa31f444b 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -499,7 +499,7 @@ If set, force use of the performance counters for oprofile, rather than detectin
 available support.
 
 ### cpufreq
-> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<maxfreq>][,[<minfreq>][,[verbose]]]]} | dom0-kernel`
+> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<hdc>][,[<hwp>]][,[<maxfreq>]][,[<minfreq>]][,[verbose]]]} | dom0-kernel`
 
 > Default: `xen`
 
@@ -510,6 +510,12 @@ choice of `dom0-kernel` is deprecated and not supported by all Dom0 kernels.
 * `<maxfreq>` and `<minfreq>` are integers which represent max and min processor frequencies
   respectively.
 * `verbose` option can be included as a string or also as `verbose=<integer>`
+* `<hwp>` is a boolean to enable Hardware-Controlled Performance States (HWP)
+  on supported Intel hardware.  HWP is a Skylake+ feature which provides better
+  CPU power management.  The default is disabled.
+* `<hdc>` is a boolean to enable Hardware Duty Cycling (HDC).  HDC enables the
+  processor to autonomously force physical package components into idle state.
+  The default is enabled, but the option only applies when `<hwp>` is enabled.
 
 ### cpuid (x86)
 > `= List of comma separated booleans`
diff --git a/xen/arch/x86/acpi/cpufreq/Makefile b/xen/arch/x86/acpi/cpufreq/Makefile
index f75da9b9ca..db83aa6b14 100644
--- a/xen/arch/x86/acpi/cpufreq/Makefile
+++ b/xen/arch/x86/acpi/cpufreq/Makefile
@@ -1,2 +1,3 @@
 obj-y += cpufreq.o
+obj-y += hwp.o
 obj-y += powernow.o
diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index f1cc473b4f..56816b1aee 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -642,7 +642,10 @@ static int __init cf_check cpufreq_driver_init(void)
         switch ( boot_cpu_data.x86_vendor )
         {
         case X86_VENDOR_INTEL:
-            ret = cpufreq_register_driver(&acpi_cpufreq_driver);
+            if ( hwp_available() )
+                ret = hwp_register_driver();
+            else
+                ret = cpufreq_register_driver(&acpi_cpufreq_driver);
             break;
 
         case X86_VENDOR_AMD:
diff --git a/xen/arch/x86/acpi/cpufreq/hwp.c b/xen/arch/x86/acpi/cpufreq/hwp.c
new file mode 100644
index 0000000000..57f13867d3
--- /dev/null
+++ b/xen/arch/x86/acpi/cpufreq/hwp.c
@@ -0,0 +1,506 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * hwp.c cpufreq driver to run Intel Hardware P-States (HWP)
+ *
+ * Copyright (C) 2021 Jason Andryuk <jandryuk@gmail.com>
+ */
+
+#include <xen/cpumask.h>
+#include <xen/init.h>
+#include <xen/param.h>
+#include <xen/xmalloc.h>
+#include <asm/io.h>
+#include <asm/msr.h>
+#include <acpi/cpufreq/cpufreq.h>
+
+static bool feature_hwp;
+static bool feature_hwp_notification;
+static bool feature_hwp_activity_window;
+static bool feature_hwp_energy_perf;
+static bool feature_hwp_pkg_level_ctl;
+static bool feature_hwp_peci;
+
+static bool feature_hdc;
+
+__initdata bool opt_cpufreq_hwp = false;
+__initdata bool opt_cpufreq_hdc = true;
+
+#define HWP_ENERGY_PERF_BALANCE         0x80
+#define IA32_ENERGY_BIAS_BALANCE        0x7
+#define IA32_ENERGY_BIAS_MAX_POWERSAVE  0xf
+#define IA32_ENERGY_BIAS_MASK           0xf
+
+union hwp_request
+{
+    struct
+    {
+        uint64_t min_perf:8;
+        uint64_t max_perf:8;
+        uint64_t desired:8;
+        uint64_t energy_perf:8;
+        uint64_t activity_window:10;
+        uint64_t package_control:1;
+        uint64_t reserved:16;
+        uint64_t activity_window_valid:1;
+        uint64_t energy_perf_valid:1;
+        uint64_t desired_valid:1;
+        uint64_t max_perf_valid:1;
+        uint64_t min_perf_valid:1;
+    };
+    uint64_t raw;
+};
+
+struct hwp_drv_data
+{
+    union
+    {
+        uint64_t hwp_caps;
+        struct
+        {
+            uint64_t highest:8;
+            uint64_t guaranteed:8;
+            uint64_t most_efficient:8;
+            uint64_t lowest:8;
+            uint64_t reserved:32;
+        } hw;
+    };
+    union hwp_request curr_req;
+    uint16_t activity_window;
+    uint8_t minimum;
+    uint8_t maximum;
+    uint8_t desired;
+    uint8_t energy_perf;
+};
+DEFINE_PER_CPU_READ_MOSTLY(struct hwp_drv_data *, hwp_drv_data);
+
+#define hwp_err(...)     printk(XENLOG_ERR __VA_ARGS__)
+#define hwp_info(...)    printk(XENLOG_INFO __VA_ARGS__)
+#define hwp_verbose(...)                   \
+({                                         \
+    if ( cpufreq_verbose )                 \
+        printk(XENLOG_DEBUG __VA_ARGS__);  \
+})
+
+static int cf_check hwp_governor(struct cpufreq_policy *policy,
+                                 unsigned int event)
+{
+    int ret;
+
+    if ( policy == NULL )
+        return -EINVAL;
+
+    switch ( event )
+    {
+    case CPUFREQ_GOV_START:
+    case CPUFREQ_GOV_LIMITS:
+        ret = 0;
+        break;
+
+    case CPUFREQ_GOV_STOP:
+    default:
+        ret = -EINVAL;
+        break;
+    }
+
+    return ret;
+}
+
+static struct cpufreq_governor hwp_cpufreq_governor =
+{
+    .name          = XEN_HWP_GOVERNOR,
+    .governor      = hwp_governor,
+};
+
+static int __init cf_check cpufreq_gov_hwp_init(void)
+{
+    return cpufreq_register_governor(&hwp_cpufreq_governor);
+}
+__initcall(cpufreq_gov_hwp_init);
+
+bool __init hwp_available(void)
+{
+    unsigned int eax, ecx, unused;
+    bool use_hwp;
+
+    if ( boot_cpu_data.cpuid_level < CPUID_PM_LEAF )
+    {
+        hwp_verbose("cpuid_level (%#x) lacks HWP support\n",
+                    boot_cpu_data.cpuid_level);
+        return false;
+    }
+
+    if ( boot_cpu_data.cpuid_level < 0x16 )
+    {
+        hwp_info("HWP disabled: cpuid_level %#x < 0x16 lacks CPU freq info\n",
+                 boot_cpu_data.cpuid_level);
+        return false;
+    }
+
+    cpuid(CPUID_PM_LEAF, &eax, &unused, &ecx, &unused);
+
+    if ( !(eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE) &&
+         !(ecx & CPUID6_ECX_IA32_ENERGY_PERF_BIAS) )
+    {
+        hwp_verbose("HWP disabled: No energy/performance preference available\n");
+        return false;
+    }
+
+    feature_hwp                 = eax & CPUID6_EAX_HWP;
+    feature_hwp_notification    = eax & CPUID6_EAX_HWP_NOTIFICATION;
+    feature_hwp_activity_window = eax & CPUID6_EAX_HWP_ACTIVITY_WINDOW;
+    feature_hwp_energy_perf     =
+        eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE;
+    feature_hwp_pkg_level_ctl   = eax & CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST;
+    feature_hwp_peci            = eax & CPUID6_EAX_HWP_PECI;
+
+    hwp_verbose("HWP: %d notify: %d act-window: %d energy-perf: %d pkg-level: %d peci: %d\n",
+                feature_hwp, feature_hwp_notification,
+                feature_hwp_activity_window, feature_hwp_energy_perf,
+                feature_hwp_pkg_level_ctl, feature_hwp_peci);
+
+    if ( !feature_hwp )
+        return false;
+
+    feature_hdc = eax & CPUID6_EAX_HDC;
+
+    hwp_verbose("HWP: Hardware Duty Cycling (HDC) %ssupported%s\n",
+                feature_hdc ? "" : "not ",
+                feature_hdc ? opt_cpufreq_hdc ? ", enabled" : ", disabled"
+                            : "");
+
+    feature_hdc = feature_hdc && opt_cpufreq_hdc;
+
+    hwp_verbose("HWP: HW_FEEDBACK %ssupported\n",
+                (eax & CPUID6_EAX_HW_FEEDBACK) ? "" : "not ");
+
+    use_hwp = feature_hwp && opt_cpufreq_hwp;
+    cpufreq_governor_internal = use_hwp;
+
+    if ( use_hwp )
+        hwp_info("Using HWP for cpufreq\n");
+
+    return use_hwp;
+}
+
+static void hdc_set_pkg_hdc_ctl(bool val)
+{
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
+    {
+        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");
+
+        return;
+    }
+
+    if ( val )
+        msr |= IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
+    else
+        msr &= ~IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
+
+    if ( wrmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
+        hwp_err("error wrmsr_safe(MSR_IA32_PKG_HDC_CTL): %016lx\n", msr);
+}
+
+static void hdc_set_pm_ctl1(bool val)
+{
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_IA32_PM_CTL1, msr) )
+    {
+        hwp_err("error rdmsr_safe(MSR_IA32_PM_CTL1)\n");
+
+        return;
+    }
+
+    if ( val )
+        msr |= IA32_PM_CTL1_HDC_ALLOW_BLOCK;
+    else
+        msr &= ~IA32_PM_CTL1_HDC_ALLOW_BLOCK;
+
+    if ( wrmsr_safe(MSR_IA32_PM_CTL1, msr) )
+        hwp_err("error wrmsr_safe(MSR_IA32_PM_CTL1): %016lx\n", msr);
+}
+
+static void hwp_get_cpu_speeds(struct cpufreq_policy *policy)
+{
+    uint32_t base_khz, max_khz, bus_khz, edx;
+
+    cpuid(0x16, &base_khz, &max_khz, &bus_khz, &edx);
+
+    /* aperf/mperf scales base. */
+    policy->cpuinfo.perf_freq = base_khz * 1000;
+    policy->cpuinfo.min_freq = base_khz * 1000;
+    policy->cpuinfo.max_freq = max_khz * 1000;
+    policy->min = base_khz * 1000;
+    policy->max = max_khz * 1000;
+    policy->cur = 0;
+}
+
+static void cf_check hwp_init_msrs(void *info)
+{
+    struct cpufreq_policy *policy = info;
+    struct hwp_drv_data *data = this_cpu(hwp_drv_data);
+    uint64_t val;
+
+    /*
+     * Package level MSR, but we don't have a good idea of packages here, so
+     * just do it every time.
+     */
+    if ( rdmsr_safe(MSR_IA32_PM_ENABLE, val) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_PM_ENABLE)\n", policy->cpu);
+        data->curr_req.raw = -1;
+        return;
+    }
+
+    /* Ensure we don't generate interrupts */
+    if ( feature_hwp_notification )
+        wrmsr_safe(MSR_IA32_HWP_INTERRUPT, 0);
+
+    hwp_verbose("CPU%u: MSR_IA32_PM_ENABLE: %016lx\n", policy->cpu, val);
+    if ( !(val & IA32_PM_ENABLE_HWP_ENABLE) )
+    {
+        val |= IA32_PM_ENABLE_HWP_ENABLE;
+        if ( wrmsr_safe(MSR_IA32_PM_ENABLE, val) )
+        {
+            hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_PM_ENABLE, %lx)\n",
+                    policy->cpu, val);
+            data->curr_req.raw = -1;
+            return;
+        }
+    }
+
+    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
+                policy->cpu);
+        data->curr_req.raw = -1;
+        return;
+    }
+
+    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
+        data->curr_req.raw = -1;
+        return;
+    }
+
+    if ( !feature_hwp_energy_perf ) {
+        if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, val) )
+        {
+            hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
+            data->curr_req.raw = -1;
+
+            return;
+        }
+
+        data->energy_perf = val & IA32_ENERGY_BIAS_MASK;
+    }
+
+    /*
+     * Check for APERF/MPERF support in hardware
+     * also check for boost/turbo support
+     */
+    intel_feature_detect(policy);
+
+    if ( feature_hdc )
+    {
+        hdc_set_pkg_hdc_ctl(true);
+        hdc_set_pm_ctl1(true);
+    }
+
+    hwp_get_cpu_speeds(policy);
+}
+
+static int cf_check hwp_cpufreq_verify(struct cpufreq_policy *policy)
+{
+    struct hwp_drv_data *data = per_cpu(hwp_drv_data, policy->cpu);
+
+    if ( !feature_hwp_energy_perf && data->energy_perf )
+    {
+        if ( data->energy_perf > IA32_ENERGY_BIAS_MAX_POWERSAVE )
+        {
+            hwp_err("energy_perf %d exceeds IA32_ENERGY_PERF_BIAS range 0-15\n",
+                    data->energy_perf);
+
+            return -EINVAL;
+        }
+    }
+
+    if ( !feature_hwp_activity_window && data->activity_window )
+    {
+        hwp_err("HWP activity window not supported\n");
+
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+/* val 0 - highest performance, 15 - maximum energy savings */
+static void hwp_energy_perf_bias(const struct hwp_drv_data *data)
+{
+    uint64_t msr;
+    uint8_t val = data->energy_perf;
+
+    ASSERT(val <= IA32_ENERGY_BIAS_MAX_POWERSAVE);
+
+    if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
+    {
+        hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
+
+        return;
+    }
+
+    msr &= ~IA32_ENERGY_BIAS_MASK;
+    msr |= val;
+
+    if ( wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
+        hwp_err("error wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS): %016lx\n", msr);
+}
+
+static void cf_check hwp_write_request(void *info)
+{
+    struct cpufreq_policy *policy = info;
+    struct hwp_drv_data *data = this_cpu(hwp_drv_data);
+    union hwp_request hwp_req = data->curr_req;
+
+    BUILD_BUG_ON(sizeof(union hwp_request) != sizeof(uint64_t));
+    if ( wrmsr_safe(MSR_IA32_HWP_REQUEST, hwp_req.raw) )
+    {
+        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_HWP_REQUEST, %lx)\n",
+                policy->cpu, hwp_req.raw);
+        rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw);
+    }
+
+    if ( !feature_hwp_energy_perf )
+        hwp_energy_perf_bias(data);
+
+}
+
+static int cf_check hwp_cpufreq_target(struct cpufreq_policy *policy,
+                                       unsigned int target_freq,
+                                       unsigned int relation)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
+    /* Zero everything to ensure reserved bits are zero... */
+    union hwp_request hwp_req = { .raw = 0 };
+
+    /* .. and update from there */
+    hwp_req.min_perf = data->minimum;
+    hwp_req.max_perf = data->maximum;
+    hwp_req.desired = data->desired;
+    if ( feature_hwp_energy_perf )
+        hwp_req.energy_perf = data->energy_perf;
+    if ( feature_hwp_activity_window )
+        hwp_req.activity_window = data->activity_window;
+
+    if ( hwp_req.raw == data->curr_req.raw )
+        return 0;
+
+    data->curr_req = hwp_req;
+
+    hwp_verbose("CPU%u: wrmsr HWP_REQUEST %016lx\n", cpu, hwp_req.raw);
+    on_selected_cpus(cpumask_of(cpu), hwp_write_request, policy, 1);
+
+    return 0;
+}
+
+static int cf_check hwp_cpufreq_cpu_init(struct cpufreq_policy *policy)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data;
+
+    data = xzalloc(struct hwp_drv_data);
+    if ( !data )
+        return -ENOMEM;
+
+    if ( cpufreq_opt_governor )
+        printk(XENLOG_WARNING
+               "HWP: governor \"%s\" is incompatible with hwp. Using default \"%s\"\n",
+               cpufreq_opt_governor->name, hwp_cpufreq_governor.name);
+    policy->governor = &hwp_cpufreq_governor;
+
+    per_cpu(hwp_drv_data, cpu) = data;
+
+    on_selected_cpus(cpumask_of(cpu), hwp_init_msrs, policy, 1);
+
+    if ( data->curr_req.raw == -1 )
+    {
+        hwp_err("CPU%u: Could not initialize HWP properly\n", cpu);
+        XFREE(per_cpu(hwp_drv_data, cpu));
+        return -ENODEV;
+    }
+
+    data->minimum = data->curr_req.min_perf;
+    data->maximum = data->curr_req.max_perf;
+    data->desired = data->curr_req.desired;
+    /* the !feature_hwp_energy_perf case was handled in hwp_init_msrs(). */
+    if ( feature_hwp_energy_perf )
+        data->energy_perf = data->curr_req.energy_perf;
+
+    hwp_verbose("CPU%u: IA32_HWP_CAPABILITIES: %016lx\n", cpu, data->hwp_caps);
+
+    hwp_verbose("CPU%u: rdmsr HWP_REQUEST %016lx\n", cpu, data->curr_req.raw);
+
+    return 0;
+}
+
+static int cf_check hwp_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+    XFREE(per_cpu(hwp_drv_data, policy->cpu));
+
+    return 0;
+}
+
+/*
+ * The SDM reads like turbo should be disabled with MSR_IA32_PERF_CTL and
+ * PERF_CTL_TURBO_DISENGAGE, but that does not seem to actually work, at least
+ * with my HWP testing.  MSR_IA32_MISC_ENABLE and MISC_ENABLE_TURBO_DISENGAGE
+ * is what Linux uses and seems to work.
+ */
+static void cf_check hwp_set_misc_turbo(void *info)
+{
+    const struct cpufreq_policy *policy = info;
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_IA32_MISC_ENABLE, msr) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_MISC_ENABLE)\n", policy->cpu);
+
+        return;
+    }
+
+    if ( policy->turbo == CPUFREQ_TURBO_ENABLED )
+        msr &= ~MSR_IA32_MISC_ENABLE_TURBO_DISENGAGE;
+    else
+        msr |= MSR_IA32_MISC_ENABLE_TURBO_DISENGAGE;
+
+    if ( wrmsr_safe(MSR_IA32_MISC_ENABLE, msr) )
+        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_MISC_ENABLE): %016lx\n",
+                policy->cpu, msr);
+}
+
+static int cf_check hwp_cpufreq_update(int cpuid, struct cpufreq_policy *policy)
+{
+    on_selected_cpus(cpumask_of(cpuid), hwp_set_misc_turbo, policy, 1);
+
+    return 0;
+}
+
+static const struct cpufreq_driver __initconstrel hwp_cpufreq_driver =
+{
+    .name   = "hwp-cpufreq",
+    .verify = hwp_cpufreq_verify,
+    .target = hwp_cpufreq_target,
+    .init   = hwp_cpufreq_cpu_init,
+    .exit   = hwp_cpufreq_cpu_exit,
+    .update = hwp_cpufreq_update,
+};
+
+int __init hwp_register_driver(void)
+{
+    return cpufreq_register_driver(&hwp_cpufreq_driver);
+}
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 4140ec0938..f2ff1d5fde 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -46,8 +46,17 @@ extern struct cpuinfo_x86 boot_cpu_data;
 #define cpu_has(c, bit)		test_bit(bit, (c)->x86_capability)
 #define boot_cpu_has(bit)	test_bit(bit, boot_cpu_data.x86_capability)
 
-#define CPUID_PM_LEAF                    6
-#define CPUID6_ECX_APERFMPERF_CAPABILITY 0x1
+#define CPUID_PM_LEAF                                6
+#define CPUID6_EAX_HWP                               (_AC(1, U) <<  7)
+#define CPUID6_EAX_HWP_NOTIFICATION                  (_AC(1, U) <<  8)
+#define CPUID6_EAX_HWP_ACTIVITY_WINDOW               (_AC(1, U) <<  9)
+#define CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE (_AC(1, U) << 10)
+#define CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST         (_AC(1, U) << 11)
+#define CPUID6_EAX_HDC                               (_AC(1, U) << 13)
+#define CPUID6_EAX_HWP_PECI                          (_AC(1, U) << 16)
+#define CPUID6_EAX_HW_FEEDBACK                       (_AC(1, U) << 19)
+#define CPUID6_ECX_APERFMPERF_CAPABILITY             0x1
+#define CPUID6_ECX_IA32_ENERGY_PERF_BIAS             0x8
 
 /* CPUID level 0x00000001.edx */
 #define cpu_has_fpu             1
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index fa771ed0b5..a2a22339e4 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -151,6 +151,13 @@
 
 #define MSR_PKRS                            0x000006e1
 
+#define MSR_IA32_PM_ENABLE                  0x00000770
+#define  IA32_PM_ENABLE_HWP_ENABLE          (_AC(1, ULL) <<  0)
+
+#define MSR_IA32_HWP_CAPABILITIES           0x00000771
+#define MSR_IA32_HWP_INTERRUPT              0x00000773
+#define MSR_IA32_HWP_REQUEST                0x00000774
+
 #define MSR_X2APIC_FIRST                    0x00000800
 #define MSR_X2APIC_LAST                     0x000008ff
 
@@ -165,6 +172,11 @@
 #define  PASID_PASID_MASK                   0x000fffff
 #define  PASID_VALID                        (_AC(1, ULL) << 31)
 
+#define MSR_IA32_PKG_HDC_CTL                0x00000db0
+#define  IA32_PKG_HDC_CTL_HDC_PKG_ENABLE    (_AC(1, ULL) <<  0)
+#define MSR_IA32_PM_CTL1                    0x00000db1
+#define  IA32_PM_CTL1_HDC_ALLOW_BLOCK       (_AC(1, ULL) <<  0)
+
 #define MSR_UARCH_MISC_CTRL                 0x00001b01
 #define  UARCH_CTRL_DOITM                   (_AC(1, ULL) <<  0)
 
@@ -500,6 +512,7 @@
 #define MSR_IA32_MISC_ENABLE_LIMIT_CPUID  (1<<22)
 #define MSR_IA32_MISC_ENABLE_XTPR_DISABLE (1<<23)
 #define MSR_IA32_MISC_ENABLE_XD_DISABLE	(1ULL << 34)
+#define MSR_IA32_MISC_ENABLE_TURBO_DISENGAGE (1ULL << 38)
 
 #define MSR_IA32_TSC_DEADLINE		0x000006E0
 #define MSR_IA32_ENERGY_PERF_BIAS	0x000001b0
diff --git a/xen/drivers/cpufreq/cpufreq.c b/xen/drivers/cpufreq/cpufreq.c
index 7bd81680da..9470eb7230 100644
--- a/xen/drivers/cpufreq/cpufreq.c
+++ b/xen/drivers/cpufreq/cpufreq.c
@@ -565,6 +565,38 @@ static void cpufreq_cmdline_common_para(struct cpufreq_policy *new_policy)
 
 static int __init cpufreq_handle_common_option(const char *name, const char *val)
 {
+    if (!strcmp(name, "hdc")) {
+        if (val) {
+            int ret = parse_bool(val, NULL);
+            if (ret != -1) {
+                opt_cpufreq_hdc = ret;
+                return 1;
+            }
+        } else {
+            opt_cpufreq_hdc = true;
+            return 1;
+        }
+    } else if (!strcmp(name, "no-hdc")) {
+        opt_cpufreq_hdc = false;
+        return 1;
+    }
+
+    if (!strcmp(name, "hwp")) {
+        if (val) {
+            int ret = parse_bool(val, NULL);
+            if (ret != -1) {
+                opt_cpufreq_hwp = ret;
+                return 1;
+            }
+        } else {
+            opt_cpufreq_hwp = true;
+            return 1;
+        }
+    } else if (!strcmp(name, "no-hwp")) {
+        opt_cpufreq_hwp = false;
+        return 1;
+    }
+
     if (!strcmp(name, "maxfreq") && val) {
         usr_max_freq = simple_strtoul(val, NULL, 0);
         return 1;
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 0f334d2a43..29a712a4f1 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -245,4 +245,7 @@ void cpufreq_dbs_timer_resume(void);
 
 void intel_feature_detect(struct cpufreq_policy *policy);
 
+extern bool opt_cpufreq_hwp;
+extern bool opt_cpufreq_hdc;
+
 #endif /* __XEN_CPUFREQ_PM_H__ */
diff --git a/xen/include/acpi/cpufreq/processor_perf.h b/xen/include/acpi/cpufreq/processor_perf.h
index d8a1ba68a6..b751ca4937 100644
--- a/xen/include/acpi/cpufreq/processor_perf.h
+++ b/xen/include/acpi/cpufreq/processor_perf.h
@@ -7,6 +7,9 @@
 
 #define XEN_PX_INIT 0x80000000
 
+bool hwp_available(void);
+int hwp_register_driver(void);
+
 int powernow_cpufreq_init(void);
 unsigned int powernow_register_driver(void);
 unsigned int get_measured_perf(unsigned int cpu, unsigned int flag);
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd0..b448f13b75 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -292,6 +292,7 @@ struct xen_ondemand {
     uint32_t up_threshold;
 };
 
+#define XEN_HWP_GOVERNOR "hwp-internal"
 /*
  * cpufreq para name of this structure named
  * same as sysfs file name of native linux
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:21:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:21:19 +0000
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 05/14] xenpm: Change get-cpufreq-para output for internal
Date: Mon,  1 May 2023 15:20:36 -0400
Message-Id: <20230501192045.87377-6-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501192045.87377-1-jandryuk@gmail.com>
References: <20230501192045.87377-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When using HWP, some of the returned data is not applicable, so omit it
to avoid confusing the user.  Switch to printing the base and turbo
frequencies, since those are what is relevant to HWP.  Similarly, stop
printing the scaling frequencies, since those do not apply.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
v2:
Use full governor name XEN_HWP_GOVERNOR to change output
Style fixes
---
 tools/misc/xenpm.c | 41 +++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index 1bb6187e56..ce8d7644d0 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -711,6 +711,7 @@ void start_gather_func(int argc, char *argv[])
 /* print out parameters about cpu frequency */
 static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
 {
+    bool internal = strstr(p_cpufreq->scaling_governor, XEN_HWP_GOVERNOR);
     int i;
 
     printf("cpu id               : %d\n", cpuid);
@@ -720,10 +721,15 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
         printf(" %d", p_cpufreq->affected_cpus[i]);
     printf("\n");
 
-    printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
-           p_cpufreq->cpuinfo_max_freq,
-           p_cpufreq->cpuinfo_min_freq,
-           p_cpufreq->cpuinfo_cur_freq);
+    if ( internal )
+        printf("cpuinfo frequency    : base [%u] turbo [%u]\n",
+               p_cpufreq->cpuinfo_min_freq,
+               p_cpufreq->cpuinfo_max_freq);
+    else
+        printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
+               p_cpufreq->cpuinfo_max_freq,
+               p_cpufreq->cpuinfo_min_freq,
+               p_cpufreq->cpuinfo_cur_freq);
 
     printf("scaling_driver       : %s\n", p_cpufreq->scaling_driver);
 
@@ -750,19 +756,22 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
                p_cpufreq->u.ondemand.up_threshold);
     }
 
-    printf("scaling_avail_freq   :");
-    for ( i = 0; i < p_cpufreq->freq_num; i++ )
-        if ( p_cpufreq->scaling_available_frequencies[i] ==
-             p_cpufreq->scaling_cur_freq )
-            printf(" *%d", p_cpufreq->scaling_available_frequencies[i]);
-        else
-            printf(" %d", p_cpufreq->scaling_available_frequencies[i]);
-    printf("\n");
+    if ( !internal )
+    {
+        printf("scaling_avail_freq   :");
+        for ( i = 0; i < p_cpufreq->freq_num; i++ )
+            if ( p_cpufreq->scaling_available_frequencies[i] ==
+                 p_cpufreq->scaling_cur_freq )
+                printf(" *%d", p_cpufreq->scaling_available_frequencies[i]);
+            else
+                printf(" %d", p_cpufreq->scaling_available_frequencies[i]);
+        printf("\n");
 
-    printf("scaling frequency    : max [%u] min [%u] cur [%u]\n",
-           p_cpufreq->scaling_max_freq,
-           p_cpufreq->scaling_min_freq,
-           p_cpufreq->scaling_cur_freq);
+        printf("scaling frequency    : max [%u] min [%u] cur [%u]\n",
+               p_cpufreq->scaling_max_freq,
+               p_cpufreq->scaling_min_freq,
+               p_cpufreq->scaling_cur_freq);
+    }
 
     printf("turbo mode           : %s\n",
            p_cpufreq->turbo_enabled ? "enabled" : "disabled or n/a");
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528079.820666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCQ-0006Ff-83; Mon, 01 May 2023 19:28:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528079.820666; Mon, 01 May 2023 19:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCQ-0006FY-53; Mon, 01 May 2023 19:28:38 +0000
Received: by outflank-mailman (input) for mailman id 528079;
 Mon, 01 May 2023 19:28:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCP-0006FS-8Q
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:37 +0000
Received: from mail-pf1-x42e.google.com (mail-pf1-x42e.google.com
 [2607:f8b0:4864:20::42e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d40efeb-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:36 +0200 (CEST)
Received: by mail-pf1-x42e.google.com with SMTP id
 d2e1a72fcca58-63b5c4c76aaso2032564b3a.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:36 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d40efeb-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969314; x=1685561314;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=ipls6koA+AeH8J6j2WxgUhcq4mmw5a3NH+J2RY9iS6I=;
        b=eztCo5uQgzZ++c4nibUBiIDXpS2LGO24+qSzaafIbpKx+1BibUdX3vQLCCvqCM9QaK
         k2szVsQtO5BsNtu2gEbwQ2R1Cp8SZWTwumicPfkwsEEQtcvZx+KCA7E9E8yQbBLG/+U1
         YuKydYrLelSrUvEFcJOenbExJISoaZaYCzUq7BQHSqP3uoPFqEPYIMH3FrftdGe1VnOk
         18/QJSaB/qqzmIXnj9u6ae7JwxuEe1M8it5S+Q9K46yajD+0ZkqBf7KewtfRcvygNDOo
         LlI0LPcK+WO/JxG3p9SD85FYmZExm1luEC4PEkLlS1jz538f0/tapw7EjRytIfe2E8ce
         OeqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969314; x=1685561314;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=ipls6koA+AeH8J6j2WxgUhcq4mmw5a3NH+J2RY9iS6I=;
        b=KEq7r/fiVwRVpze8Askz1PFxUICXoM21VphjKTszh/jZgSJavPjyuoR6cXZlXQH+D7
         mXswhcMwIO7lOM5yI4EAvsyMZicI57/64IKPAz3SmDHEo3mU8G/0p/OInt/ARjISUOsU
         8Kp7C0sM0N42KgAtmIk+eOvfioKbWZgi8fTIuvr6jl02diG4z8ipSbGMKLDc97+WiMXI
         XJRbAsyPzsPdErLr9B3wJMkdoKJNSEKuuF4h3jBEvU/Hsw0X5H9Dse9K6lvbNu+pOstc
         luDtJf9ZhUPT7twtsPNw9mFe9bZSVLzyEEQmLgBaPx3xGrVcytnva7wIUhgYNEu+xUEN
         6PDA==
X-Gm-Message-State: AC+VfDy4XVkdykWiUNVWKkFNrR6R+eVPTnTyB/NZlij7f8Hq6IhdLjz1
	EKpn1eLTOd350AFhXWLYgbo=
X-Google-Smtp-Source: ACHHUZ4EtcVpfh2GrxC12spi0fGVMtR6JEHjWCZcT25vgipN1a5chaj6S10yX1bBKRrgSrOQ8bpQcQ==
X-Received: by 2002:a17:903:2310:b0:1a6:54ce:4311 with SMTP id d16-20020a170903231000b001a654ce4311mr18428090plh.43.1682969314058;
        Mon, 01 May 2023 12:28:34 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Dinh Nguyen <dinguyen@kernel.org>,
	Jonas Bonn <jonas@southpole.se>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	"David S. Miller" <davem@davemloft.net>,
	Richard Weinberger <richard@nod.at>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Arnd Bergmann <arnd@arndb.de>
Subject: [PATCH v2 00/34] Split ptdesc from struct page
Date: Mon,  1 May 2023 12:27:55 -0700
Message-Id: <20230501192829.17086-1-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The MM subsystem is trying to shrink struct page. This patchset
introduces a memory descriptor for page table tracking - struct ptdesc.

This patchset splits ptdesc out from struct page and converts many
callers of the page table constructors/destructors to use ptdescs.

Ptdesc is a foundation to further standardize page tables, and eventually
allow for dynamic allocation of page tables independent of struct page.
However, the use of pages for page table tracking is quite deeply
ingrained and varied across architectures, so there is still a lot of
work to be done before that can happen.
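
As a rough illustration of the direction, a standalone descriptor could carry the page-table fields that today overlay struct page. This is a hypothetical sketch only; the field names are illustrative and do not claim to match the series' final layout:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical sketch of a dedicated page-table descriptor, mirroring
 * the fields this series routes through ptdesc instead of struct page.
 * Field names are illustrative, not the kernel's actual definition. */
struct ptdesc_sketch {
	unsigned long pt_flags;      /* overlays page->flags */
	void *pt_mm;                 /* owning mm (x86 pgds only) */
	atomic_int pt_frag_refcount; /* fragmented tables (powerpc, s390) */
	void *ptl;                   /* split page-table lock, if boxed */
};
```

While ptdesc still overlays struct page, such a descriptor must not outgrow the struct page footprint; only once allocation is truly independent could it diverge.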

This is rebased on next-20230428.

v2:
  Fix a lot of compiler warning/errors
  Moved definition of ptdesc to outside CONFIG_MMU
  Revert commit 7e25de77bc5ea which had gmap use pmd_pgtable_page()
  Allow functions to preserve const-ness where applicable
  Define folio equivalents for PAGE_TYPE_OPS page functions

Vishal Moola (Oracle) (34):
  mm: Add PAGE_TYPE_OP folio functions
  s390: Use _pt_s390_gaddr for gmap address tracking
  s390: Use pt_frag_refcount for pagetables
  pgtable: Create struct ptdesc
  mm: add utility functions for ptdesc
  mm: Convert pmd_pgtable_page() to pmd_ptdesc()
  mm: Convert ptlock_alloc() to use ptdescs
  mm: Convert ptlock_ptr() to use ptdescs
  mm: Convert pmd_ptlock_init() to use ptdescs
  mm: Convert ptlock_init() to use ptdescs
  mm: Convert pmd_ptlock_free() to use ptdescs
  mm: Convert ptlock_free() to use ptdescs
  mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
  powerpc: Convert various functions to use ptdescs
  x86: Convert various functions to use ptdescs
  s390: Convert various gmap functions to use ptdescs
  s390: Convert various pgalloc functions to use ptdescs
  mm: Remove page table members from struct page
  pgalloc: Convert various functions to use ptdescs
  arm: Convert various functions to use ptdescs
  arm64: Convert various functions to use ptdescs
  csky: Convert __pte_free_tlb() to use ptdescs
  hexagon: Convert __pte_free_tlb() to use ptdescs
  loongarch: Convert various functions to use ptdescs
  m68k: Convert various functions to use ptdescs
  mips: Convert various functions to use ptdescs
  nios2: Convert __pte_free_tlb() to use ptdescs
  openrisc: Convert __pte_free_tlb() to use ptdescs
  riscv: Convert alloc_{pmd, pte}_late() to use ptdescs
  sh: Convert pte_free_tlb() to use ptdescs
  sparc64: Convert various functions to use ptdescs
  sparc: Convert pgtable_pte_page_{ctor, dtor}() to ptdesc equivalents
  um: Convert {pmd, pte}_free_tlb() to use ptdescs
  mm: Remove pgtable_{pmd, pte}_page_{ctor, dtor}() wrappers

 Documentation/mm/split_page_table_lock.rst    |  12 +-
 .../zh_CN/mm/split_page_table_lock.rst        |  14 +-
 arch/arm/include/asm/tlb.h                    |  12 +-
 arch/arm/mm/mmu.c                             |   6 +-
 arch/arm64/include/asm/tlb.h                  |  14 +-
 arch/arm64/mm/mmu.c                           |   7 +-
 arch/csky/include/asm/pgalloc.h               |   4 +-
 arch/hexagon/include/asm/pgalloc.h            |   8 +-
 arch/loongarch/include/asm/pgalloc.h          |  27 ++-
 arch/loongarch/mm/pgtable.c                   |   7 +-
 arch/m68k/include/asm/mcf_pgalloc.h           |  41 ++--
 arch/m68k/include/asm/sun3_pgalloc.h          |   8 +-
 arch/m68k/mm/motorola.c                       |   4 +-
 arch/mips/include/asm/pgalloc.h               |  31 +--
 arch/mips/mm/pgtable.c                        |   7 +-
 arch/nios2/include/asm/pgalloc.h              |   8 +-
 arch/openrisc/include/asm/pgalloc.h           |   8 +-
 arch/powerpc/mm/book3s64/mmu_context.c        |  10 +-
 arch/powerpc/mm/book3s64/pgtable.c            |  32 +--
 arch/powerpc/mm/pgtable-frag.c                |  46 ++--
 arch/riscv/include/asm/pgalloc.h              |   8 +-
 arch/riscv/mm/init.c                          |  16 +-
 arch/s390/include/asm/pgalloc.h               |   4 +-
 arch/s390/include/asm/tlb.h                   |   4 +-
 arch/s390/mm/gmap.c                           | 222 +++++++++++-------
 arch/s390/mm/pgalloc.c                        | 126 +++++-----
 arch/sh/include/asm/pgalloc.h                 |   9 +-
 arch/sparc/mm/init_64.c                       |  17 +-
 arch/sparc/mm/srmmu.c                         |   5 +-
 arch/um/include/asm/pgalloc.h                 |  18 +-
 arch/x86/mm/pgtable.c                         |  46 ++--
 arch/x86/xen/mmu_pv.c                         |   2 +-
 include/asm-generic/pgalloc.h                 |  62 +++--
 include/asm-generic/tlb.h                     |  11 +
 include/linux/mm.h                            | 138 +++++++----
 include/linux/mm_types.h                      |  14 --
 include/linux/page-flags.h                    |  20 +-
 include/linux/pgtable.h                       |  61 +++++
 mm/memory.c                                   |   8 +-
 39 files changed, 648 insertions(+), 449 deletions(-)

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528081.820687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCU-0006m7-NJ; Mon, 01 May 2023 19:28:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528081.820687; Mon, 01 May 2023 19:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCU-0006ly-KR; Mon, 01 May 2023 19:28:42 +0000
Received: by outflank-mailman (input) for mailman id 528081;
 Mon, 01 May 2023 19:28:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCS-0006PY-Jr
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:40 +0000
Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com
 [2607:f8b0:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5f278727-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:28:38 +0200 (CEST)
Received: by mail-pl1-x62f.google.com with SMTP id
 d9443c01a7336-1aafa41116fso8341125ad.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:38 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f278727-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969317; x=1685561317;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=M8hiVfs4v+/D1xXViLOX+gYSSXi/GMS2KWwieBJlAWc=;
        b=AY7R5i89uxro5jd56WDlKU8Fja36ryrUk0S6Q16X+U0ZmNOWNsYwM9UD5V7eDDQ7CL
         2Tp8zG1LpWxUjW4nysKBNDYzREXSeT3FlM9ngsN5B+/pdzX19qHXjQsHi+3L0FpuArC6
         DX51z91UL4SAH0JFTTT8iwQUyEwSGDC+CwqD9d2dvVW7YCy7XaJzjb77A+6boEThDAuR
         yo/RflA9LSDAu/9t9COb7jqY02Est+ouWn7H3Pdtze5JZEAvjpVjvI8vLLYbYiOL7/Bq
         zRTj3vIBIMrvJGHdN6ZHUGjTTTFWg+U9ypFlp6tV+iF0iJAPNZuT4obK8hP4ZcyyaPpG
         zH0w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969317; x=1685561317;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=M8hiVfs4v+/D1xXViLOX+gYSSXi/GMS2KWwieBJlAWc=;
        b=ImwDrM8Z9+Awy0nlSj55dU86LsBTKk/Bw4LzY71tD8h2j/TfeaFh1X7OgHkPJTB6Va
         YVN1Fsy2LtWg7kCPVd4/SlC/AEVc5IW/M+PeH+44E+Br1DSmzeSyhOZnXLdH72gGdPQ/
         iPZtBiF9uqHXnlM42iUTYfgK8DMehFEkI7ucEzjn69oNjSwAwZTMVP6IWh4lLAflBZMh
         HMHUGteNTK0ahqC+pZ3W0XID2ZHFgpJxXfDZh3dGK46+yWIHE5rFmjfgD6WUKPsMoGdB
         Ijm2/82cT6XyJoJ+npUe0VytlAMlGLdAleKwB8cfUxWewp5sjqqwmbpzL+Fyg+gG8nZ0
         zncA==
X-Gm-Message-State: AC+VfDyuSr1EQ3aX2QohvrgS4SCHtye70lQc2CkfjNj+SJVNuAzvUGXc
	7QdENA6oDdV8jV3Vw5hZ0B8=
X-Google-Smtp-Source: ACHHUZ4Ql//l/srSqMxjTIGMvYKmafNUcwRHhPOehsj6F0oVzwyh2L3O7KuPKaj/60mvEgHluv+34Q==
X-Received: by 2002:a17:902:9347:b0:1a6:e564:6046 with SMTP id g7-20020a170902934700b001a6e5646046mr15359610plp.46.1682969317238;
        Mon, 01 May 2023 12:28:37 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>
Subject: [PATCH v2 02/34] s390: Use _pt_s390_gaddr for gmap address tracking
Date: Mon,  1 May 2023 12:27:57 -0700
Message-Id: <20230501192829.17086-3-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

s390 uses page->index to keep track of page tables for the guest address
space. To consolidate the usage of page fields in s390, rename _pt_pad_2
to _pt_s390_gaddr and use it in place of page->index in gmap.

This will help with the splitting of struct ptdesc from struct page, as
well as allow s390 to use _pt_frag_refcount for fragmented page table
tracking.

Since page->_pt_s390_gaddr aliases page->mapping, also ensure it is set
to NULL before freeing the pages.
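
The aliasing hazard can be sketched in miniature (illustrative only, not the kernel's real struct page): because the gmap address shares storage with the mapping pointer, a stale nonzero value would be misinterpreted as a mapping once the page is freed and reused, hence the clearing the patch adds before every free:

```c
#include <assert.h>

/* Illustrative sketch: _pt_s390_gaddr occupies the slot that normally
 * holds page->mapping, so the two fields alias in memory. */
struct page_sketch {
	unsigned long flags;
	union {
		void *mapping;                /* normal file/anon pages */
		unsigned long _pt_s390_gaddr; /* s390 gmap page tables */
	};
};

/* Clear the aliased field before handing the page back, mirroring the
 * pattern this patch adds ahead of each __free_pages() call. */
static void gmap_release_sketch(struct page_sketch *page)
{
	page->_pt_s390_gaddr = 0;
	/* ... __free_pages(page, CRST_ALLOC_ORDER) would follow ... */
}
```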

This also reverts commit 7e25de77bc5ea ("s390/mm: use pmd_pgtable_page()
helper in __gmap_segment_gaddr()") which had s390 use
pmd_pgtable_page() to get a gmap page table, as pmd_pgtable_page()
should be used for more generic process page tables.
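
The open-coded replacement's arithmetic can be sketched as follows (illustrative, using s390's PTRS_PER_PMD = 2048 and PMD_SIZE = 1 MiB; the real function then adds page->_pt_s390_gaddr to the offset):

```c
#include <stdint.h>

/* Sketch of the pointer arithmetic in __gmap_segment_gaddr(): a pmd
 * entry pointer is masked down to the start of its segment table, and
 * its slot index is scaled up to a guest-address offset. */
#define PTRS_PER_PMD 2048UL
#define PMD_SIZE     (1UL << 20)

static uintptr_t table_base(uintptr_t entry)
{
	/* Mask off the low bits covering one 16 KiB segment table. */
	uintptr_t mask = ~(PTRS_PER_PMD * sizeof(uint64_t) - 1);
	return entry & mask;
}

static uintptr_t segment_offset(uintptr_t entry)
{
	/* Slot index within the table, scaled to guest address space. */
	uintptr_t idx = (entry / sizeof(uint64_t)) & (PTRS_PER_PMD - 1);
	return idx * PMD_SIZE;
}
```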

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/gmap.c      | 56 +++++++++++++++++++++++++++-------------
 include/linux/mm_types.h |  2 +-
 2 files changed, 39 insertions(+), 19 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index dfe905c7bd8e..a9e8b1805894 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -70,7 +70,7 @@ static struct gmap *gmap_alloc(unsigned long limit)
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		goto out_free;
-	page->index = 0;
+	page->_pt_s390_gaddr = 0;
 	list_add(&page->lru, &gmap->crst_list);
 	table = page_to_virt(page);
 	crst_table_init(table, etype);
@@ -187,16 +187,20 @@ static void gmap_free(struct gmap *gmap)
 	if (!(gmap_is_shadow(gmap) && gmap->removed))
 		gmap_flush_tlb(gmap);
 	/* Free all segment & region tables. */
-	list_for_each_entry_safe(page, next, &gmap->crst_list, lru)
+	list_for_each_entry_safe(page, next, &gmap->crst_list, lru) {
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
+	}
 	gmap_radix_tree_free(&gmap->guest_to_host);
 	gmap_radix_tree_free(&gmap->host_to_guest);
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
 		/* Free all page tables. */
-		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
+		list_for_each_entry_safe(page, next, &gmap->pt_list, lru) {
+			page->_pt_s390_gaddr = 0;
 			page_table_free_pgste(page);
+		}
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
 		gmap_put(gmap->parent);
@@ -318,12 +322,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 		list_add(&page->lru, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
-		page->index = gaddr;
+		page->_pt_s390_gaddr = gaddr;
 		page = NULL;
 	}
 	spin_unlock(&gmap->guest_table_lock);
-	if (page)
+	if (page) {
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
+	}
 	return 0;
 }
 
@@ -336,12 +342,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 static unsigned long __gmap_segment_gaddr(unsigned long *entry)
 {
 	struct page *page;
-	unsigned long offset;
+	unsigned long offset, mask;
 
 	offset = (unsigned long) entry / sizeof(unsigned long);
 	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
-	page = pmd_pgtable_page((pmd_t *) entry);
-	return page->index + offset;
+	mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
+	page = virt_to_page((void *)((unsigned long) entry & mask));
+
+	return page->_pt_s390_gaddr + offset;
 }
 
 /**
@@ -1351,6 +1359,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	/* Free page table */
 	page = phys_to_page(pgt);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	page_table_free_pgste(page);
 }
 
@@ -1379,6 +1388,7 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		/* Free page table */
 		page = phys_to_page(pgt);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		page_table_free_pgste(page);
 	}
 }
@@ -1409,6 +1419,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	/* Free segment table */
 	page = phys_to_page(sgt);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1437,6 +1448,7 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 		/* Free segment table */
 		page = phys_to_page(sgt);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1467,6 +1479,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	/* Free region 3 table */
 	page = phys_to_page(r3t);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1495,6 +1508,7 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 		/* Free region 3 table */
 		page = phys_to_page(r3t);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1525,6 +1539,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	/* Free region 2 table */
 	page = phys_to_page(r2t);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1557,6 +1572,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 		/* Free region 2 table */
 		page = phys_to_page(r2t);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1762,9 +1778,9 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = r2t & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_r2t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1814,6 +1830,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -1846,9 +1863,9 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = r3t & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_r3t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1898,6 +1915,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -1930,9 +1948,9 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = sgt & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_sgt = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1982,6 +2000,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -2014,9 +2033,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
 		/* Shadow page tables are full pages (pte+pgste) */
 		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE;
+		*pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
 		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE);
+		*fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
 		rc = 0;
 	} else  {
 		rc = -EAGAIN;
@@ -2054,9 +2073,9 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	page = page_table_alloc_pgste(sg->mm);
 	if (!page)
 		return -ENOMEM;
-	page->index = pgt & _SEGMENT_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_pgt = page_to_phys(page);
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
@@ -2101,6 +2120,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	page_table_free_pgste(page);
 	return rc;
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 306a3d1a0fa6..6161fe1ae5b8 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -144,7 +144,7 @@ struct page {
 		struct {	/* Page table pages */
 			unsigned long _pt_pad_1;	/* compound_head */
 			pgtable_t pmd_huge_pte; /* protected by page->ptl */
-			unsigned long _pt_pad_2;	/* mapping */
+			unsigned long _pt_s390_gaddr;	/* mapping */
 			union {
 				struct mm_struct *pt_mm; /* x86 pgds only */
 				atomic_t pt_frag_refcount; /* powerpc */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528080.820677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCS-0006V7-FV; Mon, 01 May 2023 19:28:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528080.820677; Mon, 01 May 2023 19:28:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCS-0006Uy-CV; Mon, 01 May 2023 19:28:40 +0000
Received: by outflank-mailman (input) for mailman id 528080;
 Mon, 01 May 2023 19:28:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCR-0006PY-34
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:39 +0000
Received: from mail-pf1-x42c.google.com (mail-pf1-x42c.google.com
 [2607:f8b0:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e33d582-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:28:37 +0200 (CEST)
Received: by mail-pf1-x42c.google.com with SMTP id
 d2e1a72fcca58-63b51fd2972so1993042b3a.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:37 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e33d582-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969316; x=1685561316;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=9i6mX6BOTgb2TGO86QCloW3rCrq78sBOKu0dxXwRHVs=;
        b=pYnEspIgQ/94L4B7QgE7HxUaLyMe6HQnu5gVv7XKe7wrHmtTwaLadWDte57s2SBwdQ
         8bePLJ5Ai3/AmYhH4RiAfK8IjGb+8BxlohLn4+W6ycJ9Pg3WMvHLkkZOviW7YYTD/mmK
         GtZnk734H4mZvUq9Iut38ZejJnfrw2oYgWeCZ6PRbVlgoGyBpRA67FKhICj+7yH7j8HF
         a3LIP2IDGGOTftrbrsZRq8EQiQ5R1WymqweCw9yu0t92fclq88jf6kqwtf3FextynpJY
         IncHO/7Qw6ErlKR7f7HeTzuXXgCmLczBJBgXHLIbesev1rm2mkGmxXqiRqgTqpx3oce1
         ZBnQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969316; x=1685561316;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=9i6mX6BOTgb2TGO86QCloW3rCrq78sBOKu0dxXwRHVs=;
        b=jO2Y331sc6XE+lr0s6rvDbUuy+l8h49ZoORwGByQG2XkBC6dPc7ksnrfjOpp+4D4u3
         lg2mzuBL4+17G0EsmSwZuWUytazyHETIcMH4BIh+ebCn5EmwBDPTl3AlQPs5nDpDJ3UD
         CSbQTL+1IpZhf2dWIC6P7DU9gN6/y8YRqMysa9bA/Z0WWacrVLJ68qpNrrXp/OrSGaHj
         5bGa9qcLz6x+2L9btERh3HqE4qLugZrs0HzpJQMqV9Zc5YGXUQN8fKjg1JsGhp6pJR+B
         EVfc2CKGfp68qEdHOT2aXPk/xPMGRURr0BFn6dh+/fl/GudJGAZ3Fyo3XlLdPNapoXVF
         JT5g==
X-Gm-Message-State: AC+VfDxxpxJCQYTqCmMZiVJUbD/dwNSh5VyMLf133NznVPa+qGQsnFSA
	CeruaxMILdCZfnQiXIJA4CQ=
X-Google-Smtp-Source: ACHHUZ4H6i4St7ld1wD0WuHlGapRQhff/8tSAzzBd7XutE+/MiRiNA1zmkp/wHjGquMcgELZnMsxdQ==
X-Received: by 2002:a17:903:1c3:b0:1a9:80a0:47dc with SMTP id e3-20020a17090301c300b001a980a047dcmr15039374plh.3.1682969315645;
        Mon, 01 May 2023 12:28:35 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 01/34] mm: Add PAGE_TYPE_OP folio functions
Date: Mon,  1 May 2023 12:27:56 -0700
Message-Id: <20230501192829.17086-2-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

No folio equivalents for the page type operations have been defined, so
define them for use in later folio conversions.

Also change the Page##uname macros to take a const struct page *, since
we only read the memory here.
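The PageType()/folio_test_type() test relies on the kernel's inverted
page-type encoding: page_type doubles as a mapcount, so "has a type" is
signalled by the top bits matching PAGE_TYPE_BASE, and individual type
bits are *cleared* to set a type. A minimal self-contained mock (the
`mock_*` names are invented for illustration; the constants mirror
include/linux/page-flags.h) sketches why "set" clears a bit and "clear"
sets it back:

```c
#include <stdbool.h>

/* Illustrative constants modelled on include/linux/page-flags.h. */
#define PAGE_TYPE_BASE	0xf0000000u
#define PG_buddy	0x00000080u

struct mock_page { unsigned int page_type; };

/* Mirrors PageType(): base bits intact AND the flag bit cleared. */
static bool mock_page_type(const struct mock_page *page, unsigned int flag)
{
	return (page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE;
}

/* __SetPage##uname clears the (inverted) bit... */
static void mock_set_type(struct mock_page *page, unsigned int flag)
{
	page->page_type &= ~flag;
}

/* ...and __ClearPage##uname sets it again. */
static void mock_clear_type(struct mock_page *page, unsigned int flag)
{
	page->page_type |= flag;
}
```

Because a page with no type has page_type of all-ones (a negative
mapcount), an ordinary mapped page can never accidentally "have" a type.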

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/page-flags.h | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 1c68d67b832f..607b495d1b57 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -902,6 +902,8 @@ static inline bool is_page_hwpoison(struct page *page)
 
 #define PageType(page, flag)						\
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
+#define folio_test_type(folio, flag)					\
+	((folio->page.page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
 
 static inline int page_type_has_type(unsigned int page_type)
 {
@@ -914,20 +916,34 @@ static inline int page_has_type(struct page *page)
 }
 
 #define PAGE_TYPE_OPS(uname, lname)					\
-static __always_inline int Page##uname(struct page *page)		\
+static __always_inline int Page##uname(const struct page *page)		\
 {									\
 	return PageType(page, PG_##lname);				\
 }									\
+static __always_inline int folio_test_##lname(const struct folio *folio)\
+{									\
+	return folio_test_type(folio, PG_##lname);			\
+}									\
 static __always_inline void __SetPage##uname(struct page *page)		\
 {									\
 	VM_BUG_ON_PAGE(!PageType(page, 0), page);			\
 	page->page_type &= ~PG_##lname;					\
 }									\
+static __always_inline void __folio_set_##lname(struct folio *folio)	\
+{									\
+	VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio);		\
+	folio->page.page_type &= ~PG_##lname;				\
+}									\
 static __always_inline void __ClearPage##uname(struct page *page)	\
 {									\
 	VM_BUG_ON_PAGE(!Page##uname(page), page);			\
 	page->page_type |= PG_##lname;					\
-}
+}									\
+static __always_inline void __folio_clear_##lname(struct folio *folio)	\
+{									\
+	VM_BUG_ON_FOLIO(!folio_test_##lname(folio), folio);		\
+	folio->page.page_type |= PG_##lname;				\
+}
 
 /*
  * PageBuddy() indicates that the page is free and in the buddy system
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528082.820693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCV-0006pt-52; Mon, 01 May 2023 19:28:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528082.820693; Mon, 01 May 2023 19:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCU-0006p9-TO; Mon, 01 May 2023 19:28:42 +0000
Received: by outflank-mailman (input) for mailman id 528082;
 Mon, 01 May 2023 19:28:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCS-0006FS-U7
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:40 +0000
Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com
 [2607:f8b0:4864:20::436])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6002b4cd-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:40 +0200 (CEST)
Received: by mail-pf1-x436.google.com with SMTP id
 d2e1a72fcca58-63b5465fc13so2109652b3a.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:40 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6002b4cd-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969319; x=1685561319;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bmbsS8DkaHNij/v9bdafzBlktB7f+f7NAeK2Q02GvbI=;
        b=B4azYqsU+PZdNT5sPYRuRynEP/yG+nnQZzCwWNiJuBYZMp7zK7WPiFNkyYATyiWLs9
         KXu3O4UXjROzHmg5BH+SgHHBIK4HaE73RnqEPc9ZoVRSl4Uh4CCXlTE9MpnCo1Ir9cR5
         1tT0IRumxWPV12suNOkG+VwHWOIH3Na7v0j8RESqGnAScgYPrOMUDDIHfUYdpmcQFIY7
         vF1i0igsbLRBnZPi2HEPlzfFt+R0Rd+c2dVX9RoHx/GLiWbMDfrtbVlk0iN4TeEGJT4d
         1bO8trvKRg+pq0Q4NgCg3JPagFS7MkJj/4qfBc8P2r4TUch12xgUcUg4F69rFfadNKuw
         8vIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969319; x=1685561319;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=bmbsS8DkaHNij/v9bdafzBlktB7f+f7NAeK2Q02GvbI=;
        b=YDyIjqtPwkXaf/Su4Y2scYaL76uyY6jRVZxhFW6wKY/GmFWzS0Wb2gGlflWEqsxAdr
         /+vYXPTKJ9XfH03f5tfMMpbhPXbSWqFm5ZGG9FK3BA0QZNbVyAVJL/53JJJN0yuhjgcP
         okHH5Jxe76qKyOW0P3GTV5VWiVXj1lO5xKykUVxwlhnfypPgNM9lE1ciIWNMo5O0BwI4
         qLYhFgtScBHQQyP7oFXhJsSU/ZV+isphtEdSxs4KPY7+VQCtSj8o7DgAUqIfya5x3pgx
         g2KG1PJQsFH9P1BxvlgnA5oBn/88DwYlSR49xowbN0K9RM7LGqPz2GabKC4LYvDb00vZ
         Rhog==
X-Gm-Message-State: AC+VfDw7hiV/MlpVWw4LVvZj/HlqHHGzJPcLUiHyLMDeDjYeY9YaHNm2
	knhQc44ukAkVRL7VpX+DoQQ=
X-Google-Smtp-Source: ACHHUZ4E/JVh2DFvUW/bEZgIHPiMJ+Zfpw7pvXWZYNeWA42FHZxnOhQrWiIO3kxV0xxzCmUatMgqYw==
X-Received: by 2002:a17:902:8c91:b0:1a9:2b7f:a594 with SMTP id t17-20020a1709028c9100b001a92b7fa594mr13894676plo.29.1682969318714;
        Mon, 01 May 2023 12:28:38 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>
Subject: [PATCH v2 03/34] s390: Use pt_frag_refcount for pagetables
Date: Mon,  1 May 2023 12:27:58 -0700
Message-Id: <20230501192829.17086-4-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

s390 currently uses _refcount to identify fragmented page tables.
The page table struct already has a member pt_frag_refcount used by
powerpc, so have s390 use that instead of the _refcount field.
This improves the safety of _refcount and of the page table tracking.

This also allows us to simplify the tracking, since we can once again use
the lower byte of pt_frag_refcount instead of the upper byte of _refcount.
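The PP/AA bit protocol this patch moves into the low byte can be
sketched with C11 atomics. This is a hypothetical standalone mock
(`xor_bits`, `alloc_frag`, `free_frag` are invented names); only the bit
arithmetic — 0x01/0x02 for "allocated", 0x10/0x20 for "pending removal"
— mirrors arch/s390/mm/pgalloc.c:

```c
#include <stdatomic.h>

/* Low byte of pt_frag_refcount, per the comment in pgalloc.c:
 *   0x01 / 0x02 (AA bits) - lower / upper 2KB-pgtable allocated
 *   0x10 / 0x20 (PP bits) - lower / upper 2KB-pgtable pending removal */
static atomic_uint pt_frag_refcount;

/* Analogue of the kernel's atomic_xor_bits(): xor, return new value. */
static unsigned int xor_bits(atomic_uint *v, unsigned int bits)
{
	return atomic_fetch_xor(v, bits) ^ bits;
}

/* Allocate the 2KB fragment selected by bit (0 = lower, 1 = upper). */
static unsigned int alloc_frag(unsigned int bit)
{
	return xor_bits(&pt_frag_refcount, 0x01u << bit);
}

/* Free a fragment: flip allocated -> pending in one xor (0x11 << bit),
 * as page_table_free() does before handing the table to the TLB code. */
static unsigned int free_frag(unsigned int bit)
{
	return xor_bits(&pt_frag_refcount, 0x11u << bit);
}
```

Because allocated and pending bits are mutually exclusive per fragment,
a single xor moves a fragment between states without a compare loop.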

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/pgalloc.c | 38 +++++++++++++++-----------------------
 1 file changed, 15 insertions(+), 23 deletions(-)

diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 66ab68db9842..6b99932abc66 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -182,20 +182,17 @@ void page_table_free_pgste(struct page *page)
  * As follows from the above, no unallocated or fully allocated parent
  * pages are contained in mm_context_t::pgtable_list.
  *
- * The upper byte (bits 24-31) of the parent page _refcount is used
+ * The lower byte (bits 0-7) of the parent page pt_frag_refcount is used
  * for tracking contained 2KB-pgtables and has the following format:
  *
  *   PP  AA
- * 01234567    upper byte (bits 24-31) of struct page::_refcount
+ * 01234567    lower byte (bits 0-7) of struct page::pt_frag_refcount
  *   ||  ||
  *   ||  |+--- upper 2KB-pgtable is allocated
  *   ||  +---- lower 2KB-pgtable is allocated
  *   |+------- upper 2KB-pgtable is pending for removal
  *   +-------- lower 2KB-pgtable is pending for removal
  *
- * (See commit 620b4e903179 ("s390: use _refcount for pgtables") on why
- * using _refcount is possible).
- *
  * When 2KB-pgtable is allocated the corresponding AA bit is set to 1.
  * The parent page is either:
  *   - added to mm_context_t::pgtable_list in case the second half of the
@@ -243,11 +240,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		if (!list_empty(&mm->context.pgtable_list)) {
 			page = list_first_entry(&mm->context.pgtable_list,
 						struct page, lru);
-			mask = atomic_read(&page->_refcount) >> 24;
+			mask = atomic_read(&page->pt_frag_refcount);
 			/*
 			 * The pending removal bits must also be checked.
 			 * Failure to do so might lead to an impossible
-			 * value of (i.e 0x13 or 0x23) written to _refcount.
+			 * value (e.g. 0x13 or 0x23) written to
+			 * pt_frag_refcount.
 			 * Such values violate the assumption that pending and
 			 * allocation bits are mutually exclusive, and the rest
 			 * of the code unrails as result. That could lead to
@@ -259,8 +257,8 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->_refcount,
-							0x01U << (bit + 24));
+				atomic_xor_bits(&page->pt_frag_refcount,
+							0x01U << bit);
 				list_del(&page->lru);
 			}
 		}
@@ -281,12 +279,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 	table = (unsigned long *) page_to_virt(page);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_xor_bits(&page->_refcount, 0x03U << 24);
+		atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_xor_bits(&page->_refcount, 0x01U << 24);
+		atomic_xor_bits(&page->pt_frag_refcount, 0x01U);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
 		list_add(&page->lru, &mm->context.pgtable_list);
@@ -323,22 +321,19 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		 * will happen outside of the critical section from this
 		 * function or from __tlb_remove_table()
 		 */
-		mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
 		if (mask & 0x03U)
 			list_add(&page->lru, &mm->context.pgtable_list);
 		else
 			list_del(&page->lru);
 		spin_unlock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit);
 		if (mask != 0x00U)
 			return;
 		half = 0x01U << bit;
 	} else {
 		half = 0x03U;
-		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
 	}
 
 	page_table_release_check(page, table, half, mask);
@@ -368,8 +363,7 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	 * outside of the critical section from __tlb_remove_table() or from
 	 * page_table_free()
 	 */
-	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
-	mask >>= 24;
+	mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
 	if (mask & 0x03U)
 		list_add_tail(&page->lru, &mm->context.pgtable_list);
 	else
@@ -391,14 +385,12 @@ void __tlb_remove_table(void *_table)
 		return;
 	case 0x01U:	/* lower 2K of a 4K page table */
 	case 0x02U:	/* higher 2K of a 4K page table */
-		mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24));
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4);
 		if (mask != 0x00U)
 			return;
 		break;
 	case 0x03U:	/* 4K page table with pgstes */
-		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
 		break;
 	}
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528083.820701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCV-00073I-Ov; Mon, 01 May 2023 19:28:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528083.820701; Mon, 01 May 2023 19:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCV-000705-F8; Mon, 01 May 2023 19:28:43 +0000
Received: by outflank-mailman (input) for mailman id 528083;
 Mon, 01 May 2023 19:28:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCU-0006FS-4M
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:42 +0000
Received: from mail-pf1-x430.google.com (mail-pf1-x430.google.com
 [2607:f8b0:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 60cc3f41-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:41 +0200 (CEST)
Received: by mail-pf1-x430.google.com with SMTP id
 d2e1a72fcca58-63b7b54642cso2058874b3a.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:41 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60cc3f41-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969320; x=1685561320;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=B0Od9Inc5y6h3dSbsHmdGR20xSLs7HxIKZ/fFcl7OHg=;
        b=BSfSX498+nRsOwkOZCv/c3TfiIzHvFrBHqOPXzZnUzdOTpraA1SFYJQ7S9OlNSy+oo
         SgKJwDZ3qC6C//XEAcYljRcEjgHnyeaNoyhEAyOx7Ys6/3k09Ik8CgCQWIiXukHyuWI8
         JqooqMgdLQ1gpcR/L/mcYK+KZEjEg+InGInQqZs8g+xQwi3fiMtnPj+o8uf54dHCG0nD
         x8PpX8tpE3nfYMsr6vBPGSrBk7IqcdW9rAQJxVfKMF33+mjDorfNig4J/P1XioZbWwxD
         uXByeI5tsaD8qrYP2W7mncKrp6aQydR1tjw1T6sz0jLjo8cvDXE3/58iahOtRXOU+sGT
         jXsQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969320; x=1685561320;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=B0Od9Inc5y6h3dSbsHmdGR20xSLs7HxIKZ/fFcl7OHg=;
        b=b5tAbdM7h1XPki587llQvmhrUefAIRRLA8nqPUv2w26QJ4inEkJyAhesY8a+7eolFi
         0pPE9bo4E+e63zMElTs8nM05PQdJKUAlUUqo8kvWDLEpWf5O7kHXRqq1FoMlj2WHyigw
         mzEpq2ccpMsw/jv19njxNQT6GDgY8EUH3ULpMsHJ4TGY56nOuiFdvhpv/hyUM6/noONs
         19dfA8wYFluS0giTk1sgyVLB7W+OrVd4z7oSd/s+jiyWUNo1+w2YXMJBPL/jOvgBfNp5
         iNbeL+hPSj8ay/7ISm4gYSvmhPJovUrJ4jOOl3KS3oE8IGeHS/H4XsrsEtdX41E0bJnO
         ATkQ==
X-Gm-Message-State: AC+VfDwyYrTNdvTJtRUyt0j7fE+n4IJQghaLDpDFQ6EYZ3cQNarh+ubf
	Njuzr0grhYPsQMZtcFUNr9I=
X-Google-Smtp-Source: ACHHUZ5c/2Ko6MMnlJILdIHFSiqsi7woH2pp9LXJUwyBETBH//ZWHg6e+tUlm1aX3nzTckKQE2QL/Q==
X-Received: by 2002:a17:902:d2cf:b0:1aa:ffe1:de13 with SMTP id n15-20020a170902d2cf00b001aaffe1de13mr2810716plc.5.1682969320162;
        Mon, 01 May 2023 12:28:40 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 04/34] pgtable: Create struct ptdesc
Date: Mon,  1 May 2023 12:27:59 -0700
Message-Id: <20230501192829.17086-5-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, page table information is stored within struct page. As part
of simplifying struct page, create struct ptdesc for page table
information.
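The overlay approach — and the TABLE_MATCH-style compile-time checks
the patch adds to keep it safe — can be sketched with toy structs. All
names below are invented stand-ins; the real code compares struct page
against struct ptdesc:

```c
#include <stddef.h>

/* The "real" descriptor being overlaid (stand-in for struct page). */
struct base_desc {
	unsigned long flags;
	unsigned long head;
	void *mapping;
};

/* A type-restricted view of it (stand-in for struct ptdesc). */
struct view_desc {
	unsigned long __flags;	/* aliases base_desc.flags */
	unsigned long _pad_1;	/* aliases base_desc.head */
	void *gaddr;		/* aliases base_desc.mapping */
};

/* Like TABLE_MATCH(): fail the build if any aliased member moves. */
#define OVERLAY_MATCH(b, v)						\
	_Static_assert(offsetof(struct base_desc, b) ==			\
		       offsetof(struct view_desc, v), "offset mismatch")

OVERLAY_MATCH(flags, __flags);
OVERLAY_MATCH(head, _pad_1);
OVERLAY_MATCH(mapping, gaddr);
#undef OVERLAY_MATCH
_Static_assert(sizeof(struct view_desc) <= sizeof(struct base_desc),
	       "view must not outgrow base");
```

Any later reordering of either struct breaks the build rather than
silently corrupting the overlaid fields.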

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/pgtable.h | 52 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 023918666dd4..5e0f51308724 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -989,6 +989,58 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
 #endif /* __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION */
 #endif /* CONFIG_MMU */
 
+
+/**
+ * struct ptdesc - Memory descriptor for page tables.
+ * @__page_flags: Same as page flags. Unused for page tables.
+ * @pt_list: List of used page tables. Used for s390 and x86.
+ * @_pt_pad_1: Padding that aliases with page's compound head.
+ * @pmd_huge_pte: Protected by ptdesc->ptl, used for THPs.
+ * @_pt_s390_gaddr: Aliases with page's mapping. Used for s390 gmap only.
+ * @pt_mm: Used for x86 pgds.
+ * @pt_frag_refcount: For fragmented page table tracking. Powerpc and s390 only.
+ * @ptl: Lock for the page table.
+ *
+ * This struct overlays struct page for now. Do not modify without a good
+ * understanding of the issues.
+ */
+struct ptdesc {
+	unsigned long __page_flags;
+
+	union {
+		struct list_head pt_list;
+		struct {
+			unsigned long _pt_pad_1;
+			pgtable_t pmd_huge_pte;
+		};
+	};
+	unsigned long _pt_s390_gaddr;
+
+	union {
+		struct mm_struct *pt_mm;
+		atomic_t pt_frag_refcount;
+		unsigned long index;
+	};
+
+#if ALLOC_SPLIT_PTLOCKS
+	spinlock_t *ptl;
+#else
+	spinlock_t ptl;
+#endif
+};
+
+#define TABLE_MATCH(pg, pt)						\
+	static_assert(offsetof(struct page, pg) == offsetof(struct ptdesc, pt))
+TABLE_MATCH(flags, __page_flags);
+TABLE_MATCH(compound_head, pt_list);
+TABLE_MATCH(compound_head, _pt_pad_1);
+TABLE_MATCH(pmd_huge_pte, pmd_huge_pte);
+TABLE_MATCH(mapping, _pt_s390_gaddr);
+TABLE_MATCH(pt_mm, pt_mm);
+TABLE_MATCH(ptl, ptl);
+#undef TABLE_MATCH
+static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
+
 /*
  * No-op macros that just return the current protection value. Defined here
  * because these macros can be used even if CONFIG_MMU is not defined.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528084.820716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCX-0007XI-27; Mon, 01 May 2023 19:28:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528084.820716; Mon, 01 May 2023 19:28:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCW-0007Vs-U1; Mon, 01 May 2023 19:28:44 +0000
Received: by outflank-mailman (input) for mailman id 528084;
 Mon, 01 May 2023 19:28:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCV-0006FS-Je
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:43 +0000
Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com
 [2607:f8b0:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 61a6acfa-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:43 +0200 (CEST)
Received: by mail-pl1-x62f.google.com with SMTP id
 d9443c01a7336-1a6762fd23cso23588215ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:42 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61a6acfa-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969321; x=1685561321;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AVVNU1/Z0e3e15xLpxnmg1YI8sj9yavBZMAv1HCyq4s=;
        b=LLYF3e7E/amT0QC9X83jr8STEh2LEyhiKwXOflj4vOGUUQ80STkux/B3tsBPRR+iZs
         Z8PEklKtnrf1ANY0yS+7B3mEyOHWcYHngOK95uUUv6xkoIQMHojULkOYhoKI4Ni0FrP2
         S5NHFGXLQHvwvsG30gws5NVKvTRiKsyEh3YkA8qvjSZ3/KlMdDYEuevU5tP8u4iNYhu8
         1xlMd7KFd1vYjOGrW/3tOSP+nbu4BvTDU3KzWxrFnBRpLAae4H3q3HG2nt2iYhftpMs0
         U6/yXrewj3zrB5xufQ+HtnH4MYxc4pcXaT4tSeTPWcq6jVXUA+PWNym9caGo+Q16VqPL
         FcqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969321; x=1685561321;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=AVVNU1/Z0e3e15xLpxnmg1YI8sj9yavBZMAv1HCyq4s=;
        b=BKnhzzmvCN/Amaj/D9Nt0ZySP6FnPX53PaBviAcESuScggmKPzvQUAfvVxn38nL9Q6
         hgbfus573AFC7/X593C6Gkx93mueeEc4hDOG7istpgeV2mXQaYe/ZvYI8QwuveIvn9NI
         O17ZjXPOR6eitKLagHdzcVdJdIjQNVwKJHFN4wm1lbMwCtBj44tNJmiYWhI+KOX2RII/
         ZGjELyhu+5nY0pAkp/lEhkjt3YZ3zCz6LMVqZ2kEX+fup+vnJ6QL8KkCtNT3zVrH0a4+
         eAskB6BCcNjQX4CzQYqnyAxSL4ZP2RUxGMOH9wGwZcBNkLqE1Ny7jdzRGyrvbHK4V/k3
         uh8Q==
X-Gm-Message-State: AC+VfDztfPtpebbzDkZ0uz2WIHW+jsVa9KR6ECrHJMfCM9H0f+p2izv0
	q++dWIAa8oixpgr4wLhgZJ0=
X-Google-Smtp-Source: ACHHUZ5YKwOgPLgh0+tsSaYrcdi28QhyEeU0Z2KklUkfZ7L+H63PFxAPQAkXwR8Vmhp1bd6sZvpxeg==
X-Received: by 2002:a17:902:f814:b0:1a0:50bd:31a8 with SMTP id ix20-20020a170902f81400b001a050bd31a8mr14734719plb.26.1682969321461;
        Mon, 01 May 2023 12:28:41 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 05/34] mm: add utility functions for ptdesc
Date: Mon,  1 May 2023 12:28:00 -0700
Message-Id: <20230501192829.17086-6-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce utility functions that set the foundation for ptdescs. These
will also assist in splitting ptdesc out of struct page.

ptdesc_alloc() allocates new ptdesc pages as compound pages. This
standardizes ptdescs on one allocation function and one free function,
in contrast to the current two of each.
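The conversion helpers in this patch (page_ptdesc()/ptdesc_page()) use
C11 _Generic so that converting a const pointer yields a const pointer.
A minimal sketch with hypothetical stand-in types (not the kernel's
structs) shows the pattern:

```c
/* Stand-ins for struct page and struct ptdesc. */
struct mock_page { unsigned long flags; };
struct mock_ptdesc { unsigned long __page_flags; };

/* _Generic selects the cast from the argument's type, so constness
 * is preserved across the conversion — same shape as page_ptdesc(). */
#define mock_page_ptdesc(p) (_Generic((p),				\
	const struct mock_page *: (const struct mock_ptdesc *)(p),	\
	struct mock_page *:	  (struct mock_ptdesc *)(p)))
```

Passing a `const struct mock_page *` through the macro produces a
`const struct mock_ptdesc *`, so callers cannot launder away const by
converting between the two views.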

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/asm-generic/tlb.h | 11 ++++++++++
 include/linux/mm.h        | 44 +++++++++++++++++++++++++++++++++++++++
 include/linux/pgtable.h   | 12 +++++++++++
 3 files changed, 67 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b46617207c93..6bade9e0e799 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
 }
 
+static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
+{
+	tlb_remove_table(tlb, pt);
+}
+
+/* Like tlb_remove_ptdesc, but for page-like page directories. */
+static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
+{
+	tlb_remove_page(tlb, ptdesc_page(pt));
+}
+
 static inline void tlb_change_page_size(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b18848ae7e22..258f3b730359 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2744,6 +2744,45 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
 }
 #endif /* CONFIG_MMU */
 
+static inline struct ptdesc *virt_to_ptdesc(const void *x)
+{
+	return page_ptdesc(virt_to_head_page(x));
+}
+
+static inline void *ptdesc_to_virt(const struct ptdesc *pt)
+{
+	return page_to_virt(ptdesc_page(pt));
+}
+
+static inline void *ptdesc_address(const struct ptdesc *pt)
+{
+	return folio_address(ptdesc_folio(pt));
+}
+
+static inline bool ptdesc_is_reserved(struct ptdesc *pt)
+{
+	return folio_test_reserved(ptdesc_folio(pt));
+}
+
+static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
+{
+	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+
+	return page_ptdesc(page);
+}
+
+static inline void ptdesc_free(struct ptdesc *pt)
+{
+	struct page *page = ptdesc_page(pt);
+
+	__free_pages(page, compound_order(page));
+}
+
+static inline void ptdesc_clear(void *x)
+{
+	clear_page(x);
+}
+
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
@@ -2970,6 +3009,11 @@ static inline void mark_page_reserved(struct page *page)
 	adjust_managed_page_count(page, -1);
 }
 
+static inline void free_reserved_ptdesc(struct ptdesc *pt)
+{
+	free_reserved_page(ptdesc_page(pt));
+}
+
 /*
  * Default method to free all the __init memory into the buddy system.
  * The freed pages will be poisoned with pattern "poison" if it's within
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 5e0f51308724..b067ac10f3dd 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1041,6 +1041,18 @@ TABLE_MATCH(ptl, ptl);
 #undef TABLE_MATCH
 static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
 
+#define ptdesc_page(pt)			(_Generic((pt),			\
+	const struct ptdesc *:		(const struct page *)(pt),	\
+	struct ptdesc *:		(struct page *)(pt)))
+
+#define ptdesc_folio(pt)		(_Generic((pt),			\
+	const struct ptdesc *:		(const struct folio *)(pt),	\
+	struct ptdesc *:		(struct folio *)(pt)))
+
+#define page_ptdesc(p)			(_Generic((p),			\
+	const struct page *:		(const struct ptdesc *)(p),	\
+	struct page *:			(struct ptdesc *)(p)))
+
 /*
  * No-op macros that just return the current protection value. Defined here
  * because these macros can be used even if CONFIG_MMU is not defined.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528085.820727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCY-0007rl-EP; Mon, 01 May 2023 19:28:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528085.820727; Mon, 01 May 2023 19:28:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCY-0007rb-Ak; Mon, 01 May 2023 19:28:46 +0000
Received: by outflank-mailman (input) for mailman id 528085;
 Mon, 01 May 2023 19:28:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCW-0006FS-Qz
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:44 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6270271e-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:44 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id
 d9443c01a7336-1aaf91ae451so11829955ad.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:44 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6270271e-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969323; x=1685561323;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0Ii9PvRdhTMRqBop+JFdKAWa6G/XpZkmB9p8iHmrwUc=;
        b=lFLswDg0HvrPkF+dtbJhg/asaCBgi3e00w4opfv2mri4GHWG5ahGHxm9sLAnqC4qbq
         cQ+9MPTjwHhlk51h4w8lOE5zFHDICMVPmNnmwWW2l0krumAZ2PDeWC1WEjJOX3LShLcV
         g0uvqMJBbig8RslrcZc6EAJCiGxazADIIfKt9bAt8yGDMo+5nIH4mdhXs+LaHQV1OVik
         U1EjmbMLOLA/7Pwt5TKTUK+w95+W73jVDYdL+HuxsLXipLy8KET4HB50FeUWTlZzXbc3
         qFSctTNSvV8Q7v0NapgoBkUZ/cCAkZoFPz8ver9uL6iYj4+bfmoOnnABcA11qHjnQcrq
         /yHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969323; x=1685561323;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=0Ii9PvRdhTMRqBop+JFdKAWa6G/XpZkmB9p8iHmrwUc=;
        b=SgrQb5ed8EXUqfeBjnrlY8KNetBLwNG92i12AEztZMKx2YEZCFv5+0PdyT+NLFrCUD
         xW7oVXy4J4j67VFoF0RmEL8Gjd/5yLRPoMA1T/R1QkxiAb52V6RZlv2cKE8+dGl1h5bu
         iIJGhrk56PzAI1/hRRiffzQrTdOwY2HSUHsiET2umF9sjxYubGn72NnYky0aMV5fjoNO
         10aZSlcygCQKeFZ4azDhEppGPR3HrhsyKi3i0tVkKx9nmmWyHw45UPOmS5odIxBobA/v
         PDgeevov+JdyNE7Cu46q8gqajcbVux2wWyJUNpzXXQiGGbcSKNjPkIe3Hcs1WwawCJwC
         9sLQ==
X-Gm-Message-State: AC+VfDxtWc5NoG8KDquUEPZWw44CSMjGKqwEIKl7+nOXFxtuPyXlW6tW
	OAetJDRRPYpMw/xnLpqGfhM=
X-Google-Smtp-Source: ACHHUZ4gWlsAH/48zhDdvicQeallszzZORdrHEFL4BiABQ6zj3x4/LHvqIQGrX0uRjqorCk+ulIxBg==
X-Received: by 2002:a17:902:e805:b0:1aa:f6c3:ba24 with SMTP id u5-20020a170902e80500b001aaf6c3ba24mr5259994plg.4.1682969322807;
        Mon, 01 May 2023 12:28:42 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 06/34] mm: Convert pmd_pgtable_page() to pmd_ptdesc()
Date: Mon,  1 May 2023 12:28:01 -0700
Message-Id: <20230501192829.17086-7-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Convert pmd_pgtable_page() to pmd_ptdesc(), and update all its callers.
This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 258f3b730359..62c1635a9d44 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2892,15 +2892,15 @@ static inline void pgtable_pte_page_dtor(struct page *page)
 
 #if USE_SPLIT_PMD_PTLOCKS
 
-static inline struct page *pmd_pgtable_page(pmd_t *pmd)
+static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
 {
 	unsigned long mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
-	return virt_to_page((void *)((unsigned long) pmd & mask));
+	return virt_to_ptdesc((void *)((unsigned long) pmd & mask));
 }
 
 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_pgtable_page(pmd));
+	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
 }
 
 static inline bool pmd_ptlock_init(struct page *page)
@@ -2919,7 +2919,7 @@ static inline void pmd_ptlock_free(struct page *page)
 	ptlock_free(page);
 }
 
-#define pmd_huge_pte(mm, pmd) (pmd_pgtable_page(pmd)->pmd_huge_pte)
+#define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
 
 #else
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528086.820737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCb-0008Ec-0T; Mon, 01 May 2023 19:28:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528086.820737; Mon, 01 May 2023 19:28:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCa-0008EM-SH; Mon, 01 May 2023 19:28:48 +0000
Received: by outflank-mailman (input) for mailman id 528086;
 Mon, 01 May 2023 19:28:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCZ-0006PY-7Y
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:47 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 633ee4c7-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:28:45 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id
 d9443c01a7336-1aaf70676b6so9588205ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:45 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 633ee4c7-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969324; x=1685561324;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=+5AaOHEPf+h0pPpRdylr/G5bcmt7vW9RVYKqYfsRH1I=;
        b=roKi10VJs9dRiilqVeW0uT+zBJLQBRTUlVbgvCysCoLkLcBaFtSDJFWEbNs2bywIGo
         H3vsreWJ1ZpYCtBmz7ULTfsdWW+yedXBXQs/IwteaEsgFryEWzy4p352hRLq6aefxJE1
         6kRZZ9T8eU7cXFjWBj8Pc1GfIk6+0XR5y+mT0ED8uGwr4lyimQw2Z0aiiwvL0/TUCnLz
         1YAzhf+9hpLKk/t8M1Z7w1Ox1h7cTnYF1yo9NW5nVgHqgkVIivMOhZpN29c9g57SmIug
         sEztd2i0kQBJZgz4HOMkQSUf4qdi6lxWXkVGt+hsJgx7i60d4BQr0JdQ/AyxEv6LKtIb
         /zeQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969324; x=1685561324;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=+5AaOHEPf+h0pPpRdylr/G5bcmt7vW9RVYKqYfsRH1I=;
        b=Myj+F+dUyBf1v8Fii5xxIGvNRrna+BUMNmxg2Hd8mIq9c0gK07Qr2U8Suvgllt/285
         Be9bgp2qhaU9XbigIRze2mtPariYjKapyktuoaFdZFNi5S8B2Zkn5OfnLdEczQ28NJ2o
         YGhACrOqxg4q5pgHxD9OPftyc2yzWqAl1IwtSn4/H1a5WBaJaLMeUybk+pTID+B3j+Ki
         BLLeXugq4AnQNXTqjfiBDCcC/zOUhU7zdBs+9D5THIau5ogkwx1QC+u6GB70VeLtNCgy
         aDsDcwp8kwxlWzn6zYlUP59XbaYS8xqmL9LgvrtpOAYVeAIjbcY7jpkMYPqh9Ar+kbJ+
         CJ2Q==
X-Gm-Message-State: AC+VfDzM0njJXwerqMGN+Fo9hEcNvQPHApTYITwaZwXzbzXK1j6aOGb/
	x72MERJhR33F7LqsTvFtXg4=
X-Google-Smtp-Source: ACHHUZ4PPvDcr1ZSfGKsFuoX96n8X09PGID8nEMj55TFVd8R4rz9suoKbv6A+ENi8FZ5wQLN727S7w==
X-Received: by 2002:a17:902:ca0d:b0:1a6:5274:c1b0 with SMTP id w13-20020a170902ca0d00b001a65274c1b0mr12751445pld.60.1682969324101;
        Mon, 01 May 2023 12:28:44 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 07/34] mm: Convert ptlock_alloc() to use ptdescs
Date: Mon,  1 May 2023 12:28:02 -0700
Message-Id: <20230501192829.17086-8-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 6 +++---
 mm/memory.c        | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 62c1635a9d44..565da5f39376 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2786,7 +2786,7 @@ static inline void ptdesc_clear(void *x)
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
-extern bool ptlock_alloc(struct page *page);
+bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);
 
 static inline spinlock_t *ptlock_ptr(struct page *page)
@@ -2798,7 +2798,7 @@ static inline void ptlock_cache_init(void)
 {
 }
 
-static inline bool ptlock_alloc(struct page *page)
+static inline bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	return true;
 }
@@ -2828,7 +2828,7 @@ static inline bool ptlock_init(struct page *page)
 	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
-	if (!ptlock_alloc(page))
+	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
 	spin_lock_init(ptlock_ptr(page));
 	return true;
diff --git a/mm/memory.c b/mm/memory.c
index 5e2c6b1fc00e..ba0dd1b2d616 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5939,14 +5939,14 @@ void __init ptlock_cache_init(void)
 			SLAB_PANIC, NULL);
 }
 
-bool ptlock_alloc(struct page *page)
+bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	spinlock_t *ptl;
 
 	ptl = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
 	if (!ptl)
 		return false;
-	page->ptl = ptl;
+	ptdesc->ptl = ptl;
 	return true;
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528087.820743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCb-0008KS-Jl; Mon, 01 May 2023 19:28:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528087.820743; Mon, 01 May 2023 19:28:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCb-0008Ip-9O; Mon, 01 May 2023 19:28:49 +0000
Received: by outflank-mailman (input) for mailman id 528087;
 Mon, 01 May 2023 19:28:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCa-0006PY-Jj
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:48 +0000
Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com
 [2607:f8b0:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6416506e-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:28:47 +0200 (CEST)
Received: by mail-pl1-x62f.google.com with SMTP id
 d9443c01a7336-1aafa03f541so12193395ad.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:47 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6416506e-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969325; x=1685561325;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=dsNph+IH8G9kAdlfpPDvdsVRvR3y4GWwc24n4bCHoNE=;
        b=ORzYAt4fgaepA0jj/nxqG5QCQUnwh7Hd86B8NHpvgQ9XS5bxCw6FjAddYAhSWVBGhs
         RguloM+rsX2sX5k8L8jQmcK3uoCnItaXGukxO2sNFKQkHNbJapaDcmG0iMAuRZwI8f8t
         xe4tIz8OOyBZAi/AlwagFAtmsldgUDQdHIa2vdidJh9lpo3fAW1D+2jPZJhQXOxhvdZs
         c4hk+WfixJsxNAANDMRcuNKXIxHiA6rEY6QaL0IGeoKtL1JoS6dBJBARbulTJ6NVKxdj
         8lac9J2KQ3hHBoBfb7fQ//alN+wtsMCP7K7b5tVdh9EZXnnMD2pBGWAPXyEldDpkm7ax
         zAvw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969325; x=1685561325;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=dsNph+IH8G9kAdlfpPDvdsVRvR3y4GWwc24n4bCHoNE=;
        b=ZaiErvj8TPLWpxsSqn9Dgzlq/HirYyxLyNcGFr8PQvMHYRGG8yrgNCecvT11dSLtsd
         pK5OuzCwzI3JhZXo98VzwVqLhi1bRMa67P7mxybf2cqaNm2cxjSAFy3eXD/oCd6YierZ
         yV7NsQGbOSPLan0UwWAytGFNhymXdLGMXnBfE7eDRHVHBWwJiiOUl9QqsU1LWKtxkHf9
         CKF4WvTjdo+oQRcbDqgDBObqz9f3ouyBB88ajY8iFLLxeJ8tYxhk/fm0qsLMbqR45uzL
         vQWlTykP1pC4+9ROj5rnVwrqhLxIHppgJZrW01qQWrKO0xtfwZXWGV+R57UG8X/DDTQO
         sX+Q==
X-Gm-Message-State: AC+VfDz845i7BpAdJX4TFQSS7VMFvjEPbvq4r03d0QAM9Dh2IC8Cwwx9
	DPHeqjufSKa2kcgUI3xXXtg=
X-Google-Smtp-Source: ACHHUZ6Hyp/uTZNVOfbVle27YVii8Y9q7eyatfM6y5VDboeAGhZWSkyjhlYeOczpnOUzLbM+Y5y88w==
X-Received: by 2002:a17:902:da90:b0:1a9:b902:84b9 with SMTP id j16-20020a170902da9000b001a9b90284b9mr18090785plx.24.1682969325399;
        Mon, 01 May 2023 12:28:45 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: [PATCH v2 08/34] mm: Convert ptlock_ptr() to use ptdescs
Date: Mon,  1 May 2023 12:28:03 -0700
Message-Id: <20230501192829.17086-9-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/xen/mmu_pv.c |  2 +-
 include/linux/mm.h    | 14 +++++++-------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index fdc91deece7e..a1c9f8dcbb5a 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -651,7 +651,7 @@ static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm)
 	spinlock_t *ptl = NULL;
 
 #if USE_SPLIT_PTE_PTLOCKS
-	ptl = ptlock_ptr(page);
+	ptl = ptlock_ptr(page_ptdesc(page));
 	spin_lock_nest_lock(ptl, &mm->page_table_lock);
 #endif
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 565da5f39376..49fdc1199bd4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2789,9 +2789,9 @@ void __init ptlock_cache_init(void);
 bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);
 
-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return page->ptl;
+	return ptdesc->ptl;
 }
 #else /* ALLOC_SPLIT_PTLOCKS */
 static inline void ptlock_cache_init(void)
@@ -2807,15 +2807,15 @@ static inline void ptlock_free(struct page *page)
 {
 }
 
-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return &page->ptl;
+	return &ptdesc->ptl;
 }
 #endif /* ALLOC_SPLIT_PTLOCKS */
 
 static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_page(*pmd));
+	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }
 
 static inline bool ptlock_init(struct page *page)
@@ -2830,7 +2830,7 @@ static inline bool ptlock_init(struct page *page)
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
 	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
-	spin_lock_init(ptlock_ptr(page));
+	spin_lock_init(ptlock_ptr(page_ptdesc(page)));
 	return true;
 }
 
@@ -2900,7 +2900,7 @@ static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
 
 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
+	return ptlock_ptr(pmd_ptdesc(pmd));
 }
 
 static inline bool pmd_ptlock_init(struct page *page)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528088.820748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCc-0008Sh-42; Mon, 01 May 2023 19:28:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528088.820748; Mon, 01 May 2023 19:28:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCb-0008Rj-TB; Mon, 01 May 2023 19:28:49 +0000
Received: by outflank-mailman (input) for mailman id 528088;
 Mon, 01 May 2023 19:28:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCa-0006FS-M8
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:48 +0000
Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com
 [2607:f8b0:4864:20::436])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64bdc27d-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:48 +0200 (CEST)
Received: by mail-pf1-x436.google.com with SMTP id
 d2e1a72fcca58-63b4bf2d74aso1995444b3a.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:48 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64bdc27d-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969327; x=1685561327;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Qr6GrFg3nf2tpOD8Orpodgx1J1bZoReYY/j4MOttH8M=;
        b=UIGt4Rxs2HkrsHutpAYaiWsjpykN6YUUd9E7lUyM0xl9pxC5slr+PivAMCf0kJWRtN
         +uixoPZwp4Vkl4YHEUToQjjzNjvIathQteky2ye14eE7NbHDFgUdPJKRNVXQAhEugOmA
         SnKo0+DWLOVbHHUZvAiFOaLnHaj2/1PYZhJmFGNiABpmjR+0nby25eJKrDAKj84VfLsp
         mCPKXur9uRrX02YRe2GBv91KFh4j6FQNTVZMtFBm8+b26dYihW/DQX+FZ7CAta9Gp4QP
         rZEIiymi83HRxeSWeuBi69l3Fo9OQWMSSxYcww0ZzqBAzl9SsUUSi153JEf/pt3dBwk3
         OFrA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969327; x=1685561327;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Qr6GrFg3nf2tpOD8Orpodgx1J1bZoReYY/j4MOttH8M=;
        b=mBDn8X5cnHuoFXjjcT8BsjiLmWtWg+ByKxzF6EUM+fSRwMQDuO4CS/rjVKUbh2ItGZ
         a7Wc6b3cxy4t3fcsypcmjaP5IoKADDpqZ80DClLJ6I/NnxwssAdqmDhtkTb3GCIaPfFr
         LeFUPm5jWv6uPBds+UKJADXhEiIgD5tP992zbl0wJZzKV+Ts0RXR/o4p5VrAJ36YrDBm
         7ze0WfDMlikksEsf72DFemxeB0bxwrdNFxiC83YKYRkti+NsP4s1Tfpp/FWTQgne6L44
         qw9JkIMw+TJeI8SLRniyRKY5RQowN/+W4Fw0S7iLwMcorV5XZfOW5XPtNRU88XxzVzub
         c/yw==
X-Gm-Message-State: AC+VfDwb2Oqs9t/6QfBA50tLhUOh0ELrfhF+sMsNGvou5e00mmnRcy/d
	kNHrLqfPxy37ZNDt9BrDvCQ=
X-Google-Smtp-Source: ACHHUZ7U6exPykPvKxCgekHmX9KTHTAx49eIcN0vG+aTw+3qX+uTyRXMYpekTZ9RZqGTjPxZ9ZewtA==
X-Received: by 2002:a17:902:ecca:b0:1a6:4127:857 with SMTP id a10-20020a170902ecca00b001a641270857mr17426348plh.5.1682969326811;
        Mon, 01 May 2023 12:28:46 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 09/34] mm: Convert pmd_ptlock_init() to use ptdescs
Date: Mon,  1 May 2023 12:28:04 -0700
Message-Id: <20230501192829.17086-10-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 49fdc1199bd4..044c9f874b47 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2903,12 +2903,12 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(pmd_ptdesc(pmd));
 }
 
-static inline bool pmd_ptlock_init(struct page *page)
+static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	page->pmd_huge_pte = NULL;
+	ptdesc->pmd_huge_pte = NULL;
 #endif
-	return ptlock_init(page);
+	return ptlock_init(ptdesc_page(ptdesc));
 }
 
 static inline void pmd_ptlock_free(struct page *page)
@@ -2928,7 +2928,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 
-static inline bool pmd_ptlock_init(struct page *page) { return true; }
+static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void pmd_ptlock_free(struct page *page) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
@@ -2944,7 +2944,7 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 
 static inline bool pgtable_pmd_page_ctor(struct page *page)
 {
-	if (!pmd_ptlock_init(page))
+	if (!pmd_ptlock_init(page_ptdesc(page)))
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528089.820763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCd-0000X6-U5; Mon, 01 May 2023 19:28:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528089.820763; Mon, 01 May 2023 19:28:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCd-0000VZ-Iy; Mon, 01 May 2023 19:28:51 +0000
Received: by outflank-mailman (input) for mailman id 528089;
 Mon, 01 May 2023 19:28:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCc-0006FS-4y
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:50 +0000
Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com
 [2607:f8b0:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6592313f-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:49 +0200 (CEST)
Received: by mail-pl1-x633.google.com with SMTP id
 d9443c01a7336-1aaec6f189cso12397105ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:49 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6592313f-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969328; x=1685561328;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wHprEtJ9ence/xzqPBz4IJf2z5UW2zl5tMMosQGe/i0=;
        b=PY19zV8eUjDo/XMGG1Fc34r7BzCEZH74vj9MdJuOaMu+i1F5EHabpfUbH5/UkV5r7u
         xDPodd9s+AIKUKlEgX8AIGrVTqPfTCCV19VkI5iB53+1hWJANYdEVCUSRnMghYWfpO7M
         cjlz4mTJKg6lVp3kGic5xzqQRsLBSUE+d02rlXiSiKdYRR/2g0lktCdvpXk87E+WP0bW
         PyWsq2WmVkZLx+/D9I3aZnTNSq5u2Ks8EcfSEB3bP53mV0saqVpK3cKwl8Vm7ptPZTKs
         sMCaD1Y/vK3rtRXkw0c64x6M8sUBtgZ+i4WFX1gLvu6Qtr7y6/c6crVIx9ttRb0VBr94
         CwWQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969328; x=1685561328;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=wHprEtJ9ence/xzqPBz4IJf2z5UW2zl5tMMosQGe/i0=;
        b=KXyuKi4bo/iCqBPVF8ZCJGKSGjpRpl1vCadgEPYiKguLrX69jnD/aDP2BkgnyusoFd
         5mOWc1ArvHAqZSINYzz6j/wR3c5i9usJdyVuuPFzjz8nAHrpqm0MIeapW+aiorH9xiIB
         ZcKif6SbLDmqUqlLtkRy4ERv9gmGqa6xrycjcLlbRuRHKotNLZ+CSm9ydgfHvWQZfRdM
         kfcPUWb4So32O4isg2IrHVc4tSJnZdJVr6wokrQeTbuNPgSCAfIkNMEeWb/bwjR6z1u1
         GEktbBnDjfwYSyOmKX0V0qkITXTU2QsjATcRyNIGtjvGwnM7Oe4MMszLrfwkpAYCqRf+
         zKmQ==
X-Gm-Message-State: AC+VfDxeQXuNopKQ9iEmfMRh87PvxaSfCur+0z89v6dtogbJsWGABBQ/
	NaY0jDL6br606Ykj9vAvGKM=
X-Google-Smtp-Source: ACHHUZ7Z//gnFWu5Wh7IVebCi2ECb1TkUFb7V74C+bMLVwpZ6jGbtiv1kPkjLZLQcQGCrs+Dltn26Q==
X-Received: by 2002:a17:902:c745:b0:1a6:6fe3:df91 with SMTP id q5-20020a170902c74500b001a66fe3df91mr11845223plq.50.1682969328214;
        Mon, 01 May 2023 12:28:48 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 10/34] mm: Convert ptlock_init() to use ptdescs
Date: Mon,  1 May 2023 12:28:05 -0700
Message-Id: <20230501192829.17086-11-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 044c9f874b47..bbd44f43e375 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2818,7 +2818,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }
 
-static inline bool ptlock_init(struct page *page)
+static inline bool ptlock_init(struct ptdesc *ptdesc)
 {
 	/*
 	 * prep_new_page() initialize page->private (and therefore page->ptl)
@@ -2827,10 +2827,10 @@ static inline bool ptlock_init(struct page *page)
 	 * It can happen if arch try to use slab for page table allocation:
 	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
-	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
-	if (!ptlock_alloc(page_ptdesc(page)))
+	VM_BUG_ON_PAGE(*(unsigned long *)&ptdesc->ptl, ptdesc_page(ptdesc));
+	if (!ptlock_alloc(ptdesc))
 		return false;
-	spin_lock_init(ptlock_ptr(page_ptdesc(page)));
+	spin_lock_init(ptlock_ptr(ptdesc));
 	return true;
 }
 
@@ -2843,13 +2843,13 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 static inline void ptlock_cache_init(void) {}
-static inline bool ptlock_init(struct page *page) { return true; }
+static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void ptlock_free(struct page *page) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
 static inline bool pgtable_pte_page_ctor(struct page *page)
 {
-	if (!ptlock_init(page))
+	if (!ptlock_init(page_ptdesc(page)))
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
@@ -2908,7 +2908,7 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	ptdesc->pmd_huge_pte = NULL;
 #endif
-	return ptlock_init(ptdesc_page(ptdesc));
+	return ptlock_init(ptdesc);
 }
 
 static inline void pmd_ptlock_free(struct page *page)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528090.820777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCg-0001BU-Bc; Mon, 01 May 2023 19:28:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528090.820777; Mon, 01 May 2023 19:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCg-0001A9-3s; Mon, 01 May 2023 19:28:54 +0000
Received: by outflank-mailman (input) for mailman id 528090;
 Mon, 01 May 2023 19:28:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCe-0006PY-IW
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:52 +0000
Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com
 [2607:f8b0:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66769ded-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:28:51 +0200 (CEST)
Received: by mail-pl1-x62f.google.com with SMTP id
 d9443c01a7336-1aaf706768cso10981755ad.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:51 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66769ded-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969329; x=1685561329;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qTcYn70jypAxoGiZWNhXW/f3wJ7PRfT+UBBqvcdtR7Y=;
        b=lB3Z6fcbmN3PCDkgSSAUdLyGGv/N+aWg88Xq/3YBAOLHjjeKvsEOyPJ7qOvQJsTwm/
         r41UWIzlQ/5Ax+ha7xRNy8kdIkTNxjA3q1QX8n9bXAh3z/8wB6GEn84UnvlT4jKe1qmu
         sQyc9OGXZjTePH+r4Y6e6ozh0UMlJ+VYjsnu2YRncxwQaVJ+Cf7Nqth6cXAsabwTRyMX
         v6eeeBBdHDjqYX6OiT4X4zQkg5l42dCb/DNQDsMWC1atimcSE8JSmejQfjKp+N8Ip6uu
         zd2ZsR9jcfUX1/JB5xCLWPO6fdSMIACYocvg6A/MPAxkpU5nLiRN3WCB2A8Uz52//cN+
         kz+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969329; x=1685561329;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=qTcYn70jypAxoGiZWNhXW/f3wJ7PRfT+UBBqvcdtR7Y=;
        b=CHyRK/wx8jeHpdFXyIRmr/Wovzk2W1cxA2z6GMP/HyrEGKcf4VRhpScUDUK9JcP6AY
         q3a9H82+Y/KzV0n2dpO+DDdK6hLuQTPmqTdAlN+d3nfEYNcNudOQ9oN73f/8spaBMZNe
         TfRHcEjT+6bZ5tSXq+YcCX9SsgelYhyM8SwdchCxzPMMF45XUVMc5aYvmaUyxdwo/hg0
         D+6gS0NQiFw/zyG1l53NaMPzbX4mJyaJsvBZUGC7pn5TrEQ579/66B6xx5xPtufOcy71
         fs6rxGEBxZU+OW0+xTO2M4PGcj4meHAugB9/IVcLvfB60eRcWntH182G0evw/49pk0hJ
         CICA==
X-Gm-Message-State: AC+VfDwuO6MNGd4F/dsO2Ew6uGt/FBh7xWxwhvZynkf+OQUGA32qaXkC
	VJrd2EBOvIQSEwDoaABqOUw=
X-Google-Smtp-Source: ACHHUZ6RkKtjORSVVTbvqSb/nEXqQaQ9JTIasvSEW6m9mcivHjmudekwdvS8zsaYUeAaqoCEzf8v6w==
X-Received: by 2002:a17:903:24c:b0:1a6:4a64:4d27 with SMTP id j12-20020a170903024c00b001a64a644d27mr17386318plh.40.1682969329458;
        Mon, 01 May 2023 12:28:49 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 11/34] mm: Convert pmd_ptlock_free() to use ptdescs
Date: Mon,  1 May 2023 12:28:06 -0700
Message-Id: <20230501192829.17086-12-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bbd44f43e375..a2a1bca84ada 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2911,12 +2911,12 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 	return ptlock_init(ptdesc);
 }
 
-static inline void pmd_ptlock_free(struct page *page)
+static inline void pmd_ptlock_free(struct ptdesc *ptdesc)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
+	VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc));
 #endif
-	ptlock_free(page);
+	ptlock_free(ptdesc_page(ptdesc));
 }
 
 #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
@@ -2929,7 +2929,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 }
 
 static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; }
-static inline void pmd_ptlock_free(struct page *page) {}
+static inline void pmd_ptlock_free(struct ptdesc *ptdesc) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
 
@@ -2953,7 +2953,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)
 
 static inline void pgtable_pmd_page_dtor(struct page *page)
 {
-	pmd_ptlock_free(page);
+	pmd_ptlock_free(page_ptdesc(page));
 	__ClearPageTable(page);
 	dec_lruvec_page_state(page, NR_PAGETABLE);
 }
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528091.820782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCh-0001In-3W; Mon, 01 May 2023 19:28:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528091.820782; Mon, 01 May 2023 19:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCg-0001HT-NY; Mon, 01 May 2023 19:28:54 +0000
Received: by outflank-mailman (input) for mailman id 528091;
 Mon, 01 May 2023 19:28:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCe-0006FS-S4
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:52 +0000
Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com
 [2607:f8b0:4864:20::436])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 67394254-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:52 +0200 (CEST)
Received: by mail-pf1-x436.google.com with SMTP id
 d2e1a72fcca58-64115e652eeso29038905b3a.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:52 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67394254-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969331; x=1685561331;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Mt3HE6L47sPJBYnAKPMz0/DxNRKvA6oRz72+XbjGX9E=;
        b=bRmqBPTZgCa5IrhecvrTPyLDuab4M/2pWmm67Ij9kB88cqyNYvHpKZf2c822tlLINs
         V7g4wokwjyA5yoaxqgkTwqtzjgfghwoIfl0NbPXtXdg2Gw3C4PIS2FYz/KLko7etiChc
         s5Y++dzV39ZBew8o5WYNHoWZR+E9Lx1p8UATOni2Z3kTd3MdcR3ji+KJhcCE5jLMM79j
         4rcgjvbB4U2t2rwTpoiUyWgvBTuTWbg65E+Konk9GYsk7NMgNhoUvQ9jMEmXFDz/6lzW
         5y5R3ETL6C6d8IdLyMxau5jgOeNqS9ewzuT1HexEBx4VHGUimuQrmfTb2llKzaIBnzUu
         BV2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969331; x=1685561331;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Mt3HE6L47sPJBYnAKPMz0/DxNRKvA6oRz72+XbjGX9E=;
        b=ffSG/0GCjz3zNdpHkJ+G7LNMN/+dvWjxSkdN4x5RiI+r/2S+eMtgsl3TQp1RyOtiLe
         hg8g6ucqu5VfHXT/25A57f+zzWMiS+Xf75vQm/3T2WvtcQ5KpioDt8DY63qDOX3QEDbV
         X0AYWmzTZILiglR/ay9LPdG2h9dIo/yOE00IOYbftmb1TM8LJIXjs1s5lG/qQeGu9DFS
         HA21nB/uHrVXBniNFKG3h61zDj3Yow/ewgu6kZxqBZh+HpCXeMMh43iO/dPjhmBMZWwN
         5GkAPeCG2zLq6V9cbOJ4+c7jdoRog47YQiusHCAxaNxMytlY+z0L7Ze8G0+Kn90xlN2I
         +NyA==
X-Gm-Message-State: AC+VfDzBTqFrUT3Lx7SRF6c6cst6CUfVfX+Y6PGos6vPRotq5tWvaCDk
	fMNHSu2T8pM1C5S53UklrGQ=
X-Google-Smtp-Source: ACHHUZ6xNyoMsdtkeuJYwr4JFA16W7bSDCfft0wt/spwymptVQy+yHqySIROmads3b5d2VYC5oAliA==
X-Received: by 2002:a17:902:ced1:b0:1a9:3c1d:66de with SMTP id d17-20020a170902ced100b001a93c1d66demr18699403plg.15.1682969330907;
        Mon, 01 May 2023 12:28:50 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 12/34] mm: Convert ptlock_free() to use ptdescs
Date: Mon,  1 May 2023 12:28:07 -0700
Message-Id: <20230501192829.17086-13-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 10 +++++-----
 mm/memory.c        |  4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a2a1bca84ada..58c911341a33 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2787,7 +2787,7 @@ static inline void ptdesc_clear(void *x)
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
 bool ptlock_alloc(struct ptdesc *ptdesc);
-extern void ptlock_free(struct page *page);
+void ptlock_free(struct ptdesc *ptdesc);
 
 static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
@@ -2803,7 +2803,7 @@ static inline bool ptlock_alloc(struct ptdesc *ptdesc)
 	return true;
 }
 
-static inline void ptlock_free(struct page *page)
+static inline void ptlock_free(struct ptdesc *ptdesc)
 {
 }
 
@@ -2844,7 +2844,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 }
 static inline void ptlock_cache_init(void) {}
 static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
-static inline void ptlock_free(struct page *page) {}
+static inline void ptlock_free(struct ptdesc *ptdesc) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
 static inline bool pgtable_pte_page_ctor(struct page *page)
@@ -2858,7 +2858,7 @@ static inline bool pgtable_pte_page_ctor(struct page *page)
 
 static inline void pgtable_pte_page_dtor(struct page *page)
 {
-	ptlock_free(page);
+	ptlock_free(page_ptdesc(page));
 	__ClearPageTable(page);
 	dec_lruvec_page_state(page, NR_PAGETABLE);
 }
@@ -2916,7 +2916,7 @@ static inline void pmd_ptlock_free(struct ptdesc *ptdesc)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc));
 #endif
-	ptlock_free(ptdesc_page(ptdesc));
+	ptlock_free(ptdesc);
 }
 
 #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
diff --git a/mm/memory.c b/mm/memory.c
index ba0dd1b2d616..7a0b36560e28 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5950,8 +5950,8 @@ bool ptlock_alloc(struct ptdesc *ptdesc)
 	return true;
 }
 
-void ptlock_free(struct page *page)
+void ptlock_free(struct ptdesc *ptdesc)
 {
-	kmem_cache_free(page_ptl_cachep, page->ptl);
+	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528092.820794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCj-0001wY-Gp; Mon, 01 May 2023 19:28:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528092.820794; Mon, 01 May 2023 19:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCj-0001uS-33; Mon, 01 May 2023 19:28:57 +0000
Received: by outflank-mailman (input) for mailman id 528092;
 Mon, 01 May 2023 19:28:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCg-0006FS-PO
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:54 +0000
Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com
 [2607:f8b0:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 68d01922-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:54 +0200 (CEST)
Received: by mail-pl1-x62f.google.com with SMTP id
 d9443c01a7336-1a6762fd23cso23590835ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:54 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68d01922-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969333; x=1685561333;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZNft63XW9H/x5zK6HuCJK1zZ20bn1woHqcJTdinQF34=;
        b=B5BQ0PUPuJOe03g2Zj4SmcYFdZoHhvQEhjHjXIrDV/cPLgWmHOnGLph27PMtDy+gUk
         vrb8aMyrjxs7+rH+bt4rcFQy1jeum2KTcgeZne4eID9YQx0p8xFIvsQDPlXa9T02F5bm
         XDuuEHyhAFLF/N4HXmu9lHkWjBlAxtCK794Vd1UTAVcPxs1s/UJDMyYjBVgG5TX0KFIE
         Ex1W8v+MyjG1XFcX2NmMKnrKsIeNEQB+OMbajwZPymWFDvUYoyj1bmkkKVL9eBJZnPmo
         n2i7pNy6eK/amTh+JnM4F05ImhBvIH/eExR134Xtgm68Uv7no3ItCNvBtZlMVAsp6dYE
         5mrA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969333; x=1685561333;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ZNft63XW9H/x5zK6HuCJK1zZ20bn1woHqcJTdinQF34=;
        b=GJRWvVXqAfm2Q3W6BrbcOXznaet/bQRSDTNp+s1ufU6N0PY2IIIsMLTyImcRrscl2J
         7YdOhzS0vXYS9IPptz6/hbUOBpY4lC0tdE/qBeT7EaUvdJMMCu/hr3PhS8HvZ1Hqb9Fy
         i6yg+CfnKrAdMOEfkcy82zUAhMfMOiNRadoItvyWusYLDz0ehuQ1ihe0Vqb8k1FSUSSa
         kesfoEC/xymiM15VebVNGfU+j1MpAI+31+BSgdU2yXGQDN5POk8pMfVzW/wgns8a+qut
         N38dkRKTpoOHOLgEDnKtJfR84/RSoDmXUAlC8TtkYLey+lWSJ3piiNx0hYxj45Vmc/GI
         fqQQ==
X-Gm-Message-State: AC+VfDwURP2mL+gOsyuOSBKfuVvlR8vlFxA1aOFfnaZoCqXGSngItES7
	UZW73UxQFUvAskYfx8m42xM=
X-Google-Smtp-Source: ACHHUZ5/VRaG2onpIf15XR/3p6i4VphUhlpU8l14iUMmxT/o+LeZ6COrGbtS8DM/yhDOMNorlLAd2g==
X-Received: by 2002:a17:902:d48b:b0:1a6:ef75:3c53 with SMTP id c11-20020a170902d48b00b001a6ef753c53mr18457119plg.11.1682969333517;
        Mon, 01 May 2023 12:28:53 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 14/34] powerpc: Convert various functions to use ptdescs
Date: Mon,  1 May 2023 12:28:09 -0700
Message-Id: <20230501192829.17086-15-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to split struct ptdesc from struct page, convert various
functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/powerpc/mm/book3s64/mmu_context.c | 10 +++---
 arch/powerpc/mm/book3s64/pgtable.c     | 32 +++++++++---------
 arch/powerpc/mm/pgtable-frag.c         | 46 +++++++++++++-------------
 3 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
index c766e4c26e42..b22ad2839897 100644
--- a/arch/powerpc/mm/book3s64/mmu_context.c
+++ b/arch/powerpc/mm/book3s64/mmu_context.c
@@ -246,15 +246,15 @@ static void destroy_contexts(mm_context_t *ctx)
 static void pmd_frag_destroy(void *pmd_frag)
 {
 	int count;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
-	page = virt_to_page(pmd_frag);
+	ptdesc = virt_to_ptdesc(pmd_frag);
 	/* drop all the pending references */
 	count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT;
 	/* We allow PTE_FRAG_NR fragments from a PTE page */
-	if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) {
-		pgtable_pmd_page_dtor(page);
-		__free_page(page);
+	if (atomic_sub_and_test(PMD_FRAG_NR - count, &ptdesc->pt_frag_refcount)) {
+		ptdesc_pmd_dtor(ptdesc);
+		ptdesc_free(ptdesc);
 	}
 }
 
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 85c84e89e3ea..da46e3efc66c 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -306,22 +306,22 @@ static pmd_t *get_pmd_from_cache(struct mm_struct *mm)
 static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
 {
 	void *ret = NULL;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO;
 
 	if (mm == &init_mm)
 		gfp &= ~__GFP_ACCOUNT;
-	page = alloc_page(gfp);
-	if (!page)
+	ptdesc = ptdesc_alloc(gfp, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pmd_page_ctor(page)) {
-		__free_pages(page, 0);
+	if (!ptdesc_pmd_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	atomic_set(&page->pt_frag_refcount, 1);
+	atomic_set(&ptdesc->pt_frag_refcount, 1);
 
-	ret = page_address(page);
+	ret = ptdesc_address(ptdesc);
 	/*
 	 * if we support only one fragment just return the
 	 * allocated page.
@@ -331,12 +331,12 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
 
 	spin_lock(&mm->page_table_lock);
 	/*
-	 * If we find pgtable_page set, we return
+	 * If we find ptdesc_page set, we return
 	 * the allocated page with single fragment
 	 * count.
 	 */
 	if (likely(!mm->context.pmd_frag)) {
-		atomic_set(&page->pt_frag_refcount, PMD_FRAG_NR);
+		atomic_set(&ptdesc->pt_frag_refcount, PMD_FRAG_NR);
 		mm->context.pmd_frag = ret + PMD_FRAG_SIZE;
 	}
 	spin_unlock(&mm->page_table_lock);
@@ -357,15 +357,15 @@ pmd_t *pmd_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr)
 
 void pmd_fragment_free(unsigned long *pmd)
 {
-	struct page *page = virt_to_page(pmd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
 
-	if (PageReserved(page))
-		return free_reserved_page(page);
+	if (ptdesc_is_reserved(ptdesc))
+		return free_reserved_ptdesc(ptdesc);
 
-	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
-	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
-		pgtable_pmd_page_dtor(page);
-		__free_page(page);
+	BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0);
+	if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) {
+		ptdesc_pmd_dtor(ptdesc);
+		ptdesc_free(ptdesc);
 	}
 }
 
diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
index 20652daa1d7e..b53e18fab74a 100644
--- a/arch/powerpc/mm/pgtable-frag.c
+++ b/arch/powerpc/mm/pgtable-frag.c
@@ -18,15 +18,15 @@
 void pte_frag_destroy(void *pte_frag)
 {
 	int count;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
-	page = virt_to_page(pte_frag);
+	ptdesc = virt_to_ptdesc(pte_frag);
 	/* drop all the pending references */
 	count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;
 	/* We allow PTE_FRAG_NR fragments from a PTE page */
-	if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) {
-		pgtable_pte_page_dtor(page);
-		__free_page(page);
+	if (atomic_sub_and_test(PTE_FRAG_NR - count, &ptdesc->pt_frag_refcount)) {
+		ptdesc_pte_dtor(ptdesc);
+		ptdesc_free(ptdesc);
 	}
 }
 
@@ -55,25 +55,25 @@ static pte_t *get_pte_from_cache(struct mm_struct *mm)
 static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
 {
 	void *ret = NULL;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
 	if (!kernel) {
-		page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT);
-		if (!page)
+		ptdesc = ptdesc_alloc(PGALLOC_GFP | __GFP_ACCOUNT, 0);
+		if (!ptdesc)
 			return NULL;
-		if (!pgtable_pte_page_ctor(page)) {
-			__free_page(page);
+		if (!ptdesc_pte_ctor(ptdesc)) {
+			ptdesc_free(ptdesc);
 			return NULL;
 		}
 	} else {
-		page = alloc_page(PGALLOC_GFP);
-		if (!page)
+		ptdesc = ptdesc_alloc(PGALLOC_GFP, 0);
+		if (!ptdesc)
 			return NULL;
 	}
 
-	atomic_set(&page->pt_frag_refcount, 1);
+	atomic_set(&ptdesc->pt_frag_refcount, 1);
 
-	ret = page_address(page);
+	ret = ptdesc_address(ptdesc);
 	/*
 	 * if we support only one fragment just return the
 	 * allocated page.
@@ -82,12 +82,12 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
 		return ret;
 	spin_lock(&mm->page_table_lock);
 	/*
-	 * If we find pgtable_page set, we return
+	 * If we find ptdesc_page set, we return
 	 * the allocated page with single fragment
 	 * count.
 	 */
 	if (likely(!pte_frag_get(&mm->context))) {
-		atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR);
+		atomic_set(&ptdesc->pt_frag_refcount, PTE_FRAG_NR);
 		pte_frag_set(&mm->context, ret + PTE_FRAG_SIZE);
 	}
 	spin_unlock(&mm->page_table_lock);
@@ -108,15 +108,15 @@ pte_t *pte_fragment_alloc(struct mm_struct *mm, int kernel)
 
 void pte_fragment_free(unsigned long *table, int kernel)
 {
-	struct page *page = virt_to_page(table);
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
-	if (PageReserved(page))
-		return free_reserved_page(page);
+	if (ptdesc_is_reserved(ptdesc))
+		return free_reserved_ptdesc(ptdesc);
 
-	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
-	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+	BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0);
+	if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) {
 		if (!kernel)
-			pgtable_pte_page_dtor(page);
-		__free_page(page);
+			ptdesc_pte_dtor(ptdesc);
+		ptdesc_free(ptdesc);
 	}
 }
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:28:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:28:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528093.820800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCk-0002Aw-80; Mon, 01 May 2023 19:28:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528093.820800; Mon, 01 May 2023 19:28:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCj-00027g-Tu; Mon, 01 May 2023 19:28:57 +0000
Received: by outflank-mailman (input) for mailman id 528093;
 Mon, 01 May 2023 19:28:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCh-0006PY-Am
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:55 +0000
Received: from mail-pf1-x42b.google.com (mail-pf1-x42b.google.com
 [2607:f8b0:4864:20::42b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 680fa245-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:28:53 +0200 (CEST)
Received: by mail-pf1-x42b.google.com with SMTP id
 d2e1a72fcca58-64115eef620so29038012b3a.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:53 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 680fa245-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969332; x=1685561332;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=L/HbzXYZlFs8Glr3mCgcQ6yZjaq6zFAfxmTJm9brDXY=;
        b=YU74FvpHQCK1VqAuVX8r2BjduNpUerOh1yATlR8mKQ/tJLpS2WI1LKB7NUnPZev+ku
         rfgntGzBluAMKuHfK/jms2NgctRzUuZHcxtFDuNiPPeXkbQ7Z48v/aLnoLGhmcKi5okk
         NTHNilz0HthKY/XHg97rKwZvMnLAYPI7U1eBqYo3DUIS7zYhW0tg5v9W5kNg2nZAJHx+
         0W0xkqlDIUsVkMuesra65fGttpYRxCYfqsHDDsc8zlGh06W2A2UnpmXTtrAlRE/I54n/
         9b61gxsFscmWDNDgpJxss4pOhFtz/pn5R4Qs2mfASivX3bx3H5gbzbMc69UUOvWi91zv
         pr8Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969332; x=1685561332;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=L/HbzXYZlFs8Glr3mCgcQ6yZjaq6zFAfxmTJm9brDXY=;
        b=WZDCwAS8xBgIuJCEK4N3WVf5afNcrTU0AXu6zk/hHJJT6rosFUsGnutOKmr3ROh10H
         nnOAs5tWWE9SrfUY5oizOyqg+KYOlUwWYnM43v0DqY8cMx38t8t/9j4kcLWjgoXfUNEX
         NoV52KXTY38WRv87tw72XNhPiBO+WJKnt+BPo66PIZr2I8B3VIAjz35VkrZbrKghwAph
         NUrFgSeXUShksc9FGob6BVkC1d43yJIiSxDlpNEfhV/pmYGw0YYk+9NLhkU0vkFtM3VD
         GLUQNNd1GTcTvelqj4ONAEKHJjrFfG9pYXf72RrqdkddFr27ExR0TV1qjs5hlWmYrzKU
         Oeyg==
X-Gm-Message-State: AC+VfDwwY+seaNk1o5lNspdnvcwx206zlIqJgtLITgrovFTAhJb5sj1H
	aA6cQfQSLwpoaFhygEIXmFs=
X-Google-Smtp-Source: ACHHUZ5tlVMWyUaJdG9fqFVZnpLPEvgTINU205p9ECnUgFr/jtV6hdVvJcRfvr33OgCYwfP6dTnMLg==
X-Received: by 2002:a17:902:c94f:b0:19a:727e:d4f3 with SMTP id i15-20020a170902c94f00b0019a727ed4f3mr22987004pla.5.1682969332204;
        Mon, 01 May 2023 12:28:52 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 13/34] mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
Date: Mon,  1 May 2023 12:28:08 -0700
Message-Id: <20230501192829.17086-14-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Create ptdesc_pte_ctor(), ptdesc_pmd_ctor(), ptdesc_pte_dtor(), and
ptdesc_pmd_dtor() and make the original pgtable constructors/destructors
wrappers around them.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 56 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 58c911341a33..dc61aeca9077 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2847,20 +2847,34 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void ptlock_free(struct ptdesc *ptdesc) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
-static inline bool pgtable_pte_page_ctor(struct page *page)
+static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc)
 {
-	if (!ptlock_init(page_ptdesc(page)))
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	if (!ptlock_init(ptdesc))
 		return false;
-	__SetPageTable(page);
-	inc_lruvec_page_state(page, NR_PAGETABLE);
+	__folio_set_table(folio);
+	lruvec_stat_add_folio(folio, NR_PAGETABLE);
 	return true;
 }
 
+static inline bool pgtable_pte_page_ctor(struct page *page)
+{
+	return ptdesc_pte_ctor(page_ptdesc(page));
+}
+
+static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc)
+{
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	ptlock_free(ptdesc);
+	__folio_clear_table(folio);
+	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
+}
+
 static inline void pgtable_pte_page_dtor(struct page *page)
 {
-	ptlock_free(page_ptdesc(page));
-	__ClearPageTable(page);
-	dec_lruvec_page_state(page, NR_PAGETABLE);
+	ptdesc_pte_dtor(page_ptdesc(page));
 }
 
 #define pte_offset_map_lock(mm, pmd, address, ptlp)	\
@@ -2942,20 +2956,34 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 	return ptl;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
+static inline bool ptdesc_pmd_ctor(struct ptdesc *ptdesc)
 {
-	if (!pmd_ptlock_init(page_ptdesc(page)))
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	if (!pmd_ptlock_init(ptdesc))
 		return false;
-	__SetPageTable(page);
-	inc_lruvec_page_state(page, NR_PAGETABLE);
+	__folio_set_table(folio);
+	lruvec_stat_add_folio(folio, NR_PAGETABLE);
 	return true;
 }
 
+static inline bool pgtable_pmd_page_ctor(struct page *page)
+{
+	return ptdesc_pmd_ctor(page_ptdesc(page));
+}
+
+static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc)
+{
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	pmd_ptlock_free(ptdesc);
+	__folio_clear_table(folio);
+	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
+}
+
 static inline void pgtable_pmd_page_dtor(struct page *page)
 {
-	pmd_ptlock_free(page_ptdesc(page));
-	__ClearPageTable(page);
-	dec_lruvec_page_state(page, NR_PAGETABLE);
+	ptdesc_pmd_dtor(page_ptdesc(page));
 }
 
 /*
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:29:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:29:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528094.820810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCm-0002lX-FN; Mon, 01 May 2023 19:29:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528094.820810; Mon, 01 May 2023 19:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCl-0002je-SJ; Mon, 01 May 2023 19:28:59 +0000
Received: by outflank-mailman (input) for mailman id 528094;
 Mon, 01 May 2023 19:28:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCk-0006PY-7V
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:58 +0000
Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com
 [2607:f8b0:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69c3c029-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:28:56 +0200 (CEST)
Received: by mail-pl1-x635.google.com with SMTP id
 d9443c01a7336-1aad5245632so16554975ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:56 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69c3c029-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969335; x=1685561335;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/Q5rpHIbLtPdsxzDzaO3buqXOd2yqrT0Z007lfiZkpI=;
        b=AxAga4aoXPNlXE7+DeUiTXETzfeeCHbm4zGppPqr1MDTZFxhWAViYrGbWyE6GLWt+9
         IB0rpQEMaD7N9ypKiOPbsaINcslUNsYjGQTG5q5vWeCnFZgA7dcIKUsRPd84RGXai5h1
         l16CjSDmsfosD9KMmdT6xq8VRCXKZ9G6YKbNHAZjIZoU/NfYHkwEKIBnMm9mpAHGxSXg
         2woYY+DxDFTOnXL4g3fftw/6LCcjY+gzdniMaWURcfFDuyLcW5IhoWI0SL4yBh+tQBiv
         tdoOcx5iHNlwMY/jUOXeVBpbt3BPtJyBIMif2r3nZba5xvX4tcuGQGviD0d/uLbj9ohj
         JJbQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969335; x=1685561335;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/Q5rpHIbLtPdsxzDzaO3buqXOd2yqrT0Z007lfiZkpI=;
        b=iIDxCuXmkCtevPZmrt/SpgfbY2BurQRE/8Ac/hZroXM0zFPO5b/szZ0/rwmQrgIDZZ
         VAG9QaqiWBSipFB7JokAaaL4R8D5al4GXJgcHmCN4bcforAd1mCD9JNbt2/SQa/QxY04
         9oqH3pjLsGoFc08Wi4r/yxzDW0fnQxyWqEp59HwsoxyhQsisQSbbbqfiV9pKUmbCANtF
         2ZaiBAYybP3MxiIYoOSwJjmj/vCYIayQTefMuhAXiDHZMLqCQU7gldZrAgOwQ9pcmdki
         sa8I9ao21QUHS53ZEiovlUmTrO/h/8Kjm8hw7D+nO9f75pD29l587nBKyxHC5mMr77cb
         Kq8w==
X-Gm-Message-State: AC+VfDwsby8YXMbikQLVUKvh7muDEZzigsprrSxxoQiClz9FhBOk+baL
	eN8M4919d0BTCOOYpU0PwfM=
X-Google-Smtp-Source: ACHHUZ70tPtnwybaj+jIE0QGoYPcdHCYjv99qqOX/zc6uUXZ1o0XGtKwgumEUC29ZoX/UBhdbXOt+g==
X-Received: by 2002:a17:903:41d0:b0:1ab:12a:bd2e with SMTP id u16-20020a17090341d000b001ab012abd2emr2887995ple.37.1682969334945;
        Mon, 01 May 2023 12:28:54 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: [PATCH v2 15/34] x86: Convert various functions to use ptdescs
Date: Mon,  1 May 2023 12:28:10 -0700
Message-Id: <20230501192829.17086-16-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to split struct ptdesc from struct page, convert various
functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/mm/pgtable.c | 46 +++++++++++++++++++++++++------------------
 1 file changed, 27 insertions(+), 19 deletions(-)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index afab0bc7862b..9b6f81c8eb32 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -52,7 +52,7 @@ early_param("userpte", setup_userpte);
 
 void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 {
-	pgtable_pte_page_dtor(pte);
+	ptdesc_pte_dtor(page_ptdesc(pte));
 	paravirt_release_pte(page_to_pfn(pte));
 	paravirt_tlb_remove_table(tlb, pte);
 }
@@ -60,7 +60,7 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 #if CONFIG_PGTABLE_LEVELS > 2
 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 {
-	struct page *page = virt_to_page(pmd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
 	paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT);
 	/*
 	 * NOTE! For PAE, any changes to the top page-directory-pointer-table
@@ -69,8 +69,8 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 #ifdef CONFIG_X86_PAE
 	tlb->need_flush_all = 1;
 #endif
-	pgtable_pmd_page_dtor(page);
-	paravirt_tlb_remove_table(tlb, page);
+	ptdesc_pmd_dtor(ptdesc);
+	paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc));
 }
 
 #if CONFIG_PGTABLE_LEVELS > 3
@@ -92,16 +92,16 @@ void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d)
 
 static inline void pgd_list_add(pgd_t *pgd)
 {
-	struct page *page = virt_to_page(pgd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgd);
 
-	list_add(&page->lru, &pgd_list);
+	list_add(&ptdesc->pt_list, &pgd_list);
 }
 
 static inline void pgd_list_del(pgd_t *pgd)
 {
-	struct page *page = virt_to_page(pgd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgd);
 
-	list_del(&page->lru);
+	list_del(&ptdesc->pt_list);
 }
 
 #define UNSHARED_PTRS_PER_PGD				\
@@ -112,12 +112,12 @@ static inline void pgd_list_del(pgd_t *pgd)
 
 static void pgd_set_mm(pgd_t *pgd, struct mm_struct *mm)
 {
-	virt_to_page(pgd)->pt_mm = mm;
+	virt_to_ptdesc(pgd)->pt_mm = mm;
 }
 
 struct mm_struct *pgd_page_get_mm(struct page *page)
 {
-	return page->pt_mm;
+	return page_ptdesc(page)->pt_mm;
 }
 
 static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd)
@@ -213,11 +213,14 @@ void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd)
 static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count)
 {
 	int i;
+	struct ptdesc *ptdesc;
 
 	for (i = 0; i < count; i++)
 		if (pmds[i]) {
-			pgtable_pmd_page_dtor(virt_to_page(pmds[i]));
-			free_page((unsigned long)pmds[i]);
+			ptdesc = virt_to_ptdesc(pmds[i]);
+
+			ptdesc_pmd_dtor(ptdesc);
+			ptdesc_free(ptdesc);
 			mm_dec_nr_pmds(mm);
 		}
 }
@@ -232,16 +235,21 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count)
 		gfp &= ~__GFP_ACCOUNT;
 
 	for (i = 0; i < count; i++) {
-		pmd_t *pmd = (pmd_t *)__get_free_page(gfp);
-		if (!pmd)
+		pmd_t *pmd = NULL;
+		struct ptdesc *ptdesc = ptdesc_alloc(gfp, 0);
+
+		if (!ptdesc)
 			failed = true;
-		if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd))) {
-			free_page((unsigned long)pmd);
-			pmd = NULL;
+		if (ptdesc && !ptdesc_pmd_ctor(ptdesc)) {
+			ptdesc_free(ptdesc);
+			ptdesc = NULL;
 			failed = true;
 		}
-		if (pmd)
+		if (ptdesc) {
 			mm_inc_nr_pmds(mm);
+			pmd = (pmd_t *)ptdesc_address(ptdesc);
+		}
+
 		pmds[i] = pmd;
 	}
 
@@ -838,7 +846,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 
 	free_page((unsigned long)pmd_sv);
 
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));
 	free_page((unsigned long)pmd);
 
 	return 1;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:29:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:29:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528096.820817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCn-00032u-F9; Mon, 01 May 2023 19:29:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528096.820817; Mon, 01 May 2023 19:29:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCn-0002yS-0A; Mon, 01 May 2023 19:29:01 +0000
Received: by outflank-mailman (input) for mailman id 528096;
 Mon, 01 May 2023 19:28:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCl-0006FS-FY
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:28:59 +0000
Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com
 [2607:f8b0:4864:20::436])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6b82f5ab-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:58 +0200 (CEST)
Received: by mail-pf1-x436.google.com with SMTP id
 d2e1a72fcca58-63b5465fc13so2109913b3a.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:58 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b82f5ab-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969338; x=1685561338;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=DfBzvZT8j9y8kX+XY5vccXNkzfj+khuHcHL5hyCh86c=;
        b=ZGC4cCywpFatHbTovw1WlKjZ1/JpIzLm49ZnB+qALtN8+uBzAmOEyQhoqTiZAL9Lov
         1//5Ifkg8A25NAtuG/zaT6BZSC/OxKKSrYkP8TeZX+kUNfD8Xntdlqk4M/TUk6HLakv1
         nCrmIlVJ/lAy/lFljijZzj8/LpWBCQqPALZ1y/GGQ8i7Rpm01GpW0tHuORtTXujkRAt2
         duA4cTRp6tLOiXSxedEuorGvbfQ8keU9JobrzJ6OWtv8cF/fNSyFKrwRMOsU31de5FQU
         2Q0V2AuDWJPTZbnRyTcSGfBvM15PDA9NYPwDO/VpLGYUHfCtfkvJIisembrIccrMdBVE
         w3TQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969338; x=1685561338;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=DfBzvZT8j9y8kX+XY5vccXNkzfj+khuHcHL5hyCh86c=;
        b=Wv/XwDEef3LPDk5YaGOjjMdaUwg57u2+VDj74qCNa/Ckzps6P4WwjG4Zs5XHYA+nIU
         JY+0QbUVTEEWbhomN5XGYW4mQM97fn+wp9WEWk8jMa6sm5MzN3qMidXur80UTLF3XS5c
         IGEAzWqKjxGSSRxWRPeotuhhqG/lnuvuuiBscWsVHNYLibs9kma3+bOFmeKrLf7bOSuE
         HlFlIBbbcUmuuKP824R+HREaqizF8ZP/cPdO76PiIRjNbGhVq64tTjPlaBP1SQ/DiC8V
         QyJbOEq1SQnncp659YkGvgKVVjHLCEQ4bcrMZ6pqhX37mpS74kiSMgdN+JFelF5/7cBR
         hAlA==
X-Gm-Message-State: AC+VfDwuj1vsxxVS55fislnj035r7V/ChTDAvwnk5kq7+q+6jXq8fBkC
	JsAY1xsQeaw9VoRVvhY+JjE=
X-Google-Smtp-Source: ACHHUZ6bNcHkqBfUbTar64QETUpHtB7bbYe1QXkuNIjeJUzlYTez90IJdWZAdk9hI9pBXzKuPuEYjw==
X-Received: by 2002:a17:902:ea09:b0:1a9:79e7:2ba with SMTP id s9-20020a170902ea0900b001a979e702bamr18843060plg.23.1682969338010;
        Mon, 01 May 2023 12:28:58 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>
Subject: [PATCH v2 17/34] s390: Convert various pgalloc functions to use ptdescs
Date: Mon,  1 May 2023 12:28:12 -0700
Message-Id: <20230501192829.17086-18-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructors/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/include/asm/pgalloc.h |   4 +-
 arch/s390/include/asm/tlb.h     |   4 +-
 arch/s390/mm/pgalloc.c          | 108 ++++++++++++++++----------------
 3 files changed, 59 insertions(+), 57 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 17eb618f1348..9841481560ae 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -86,7 +86,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr)
 	if (!table)
 		return NULL;
 	crst_table_init(table, _SEGMENT_ENTRY_EMPTY);
-	if (!pgtable_pmd_page_ctor(virt_to_page(table))) {
+	if (!ptdesc_pmd_ctor(virt_to_ptdesc(table))) {
 		crst_table_free(mm, table);
 		return NULL;
 	}
@@ -97,7 +97,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
 	if (mm_pmd_folded(mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));
 	crst_table_free(mm, (unsigned long *) pmd);
 }
 
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index b91f4a9b044c..1388c819b467 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -89,12 +89,12 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
 {
 	if (mm_pmd_folded(tlb->mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));
 	__tlb_adjust_range(tlb, address, PAGE_SIZE);
 	tlb->mm->context.flush_mm = 1;
 	tlb->freed_tables = 1;
 	tlb->cleared_puds = 1;
-	tlb_remove_table(tlb, pmd);
+	tlb_remove_ptdesc(tlb, pmd);
 }
 
 /*
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 6b99932abc66..e740b4c76665 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -43,17 +43,17 @@ __initcall(page_table_register_sysctl);
 
 unsigned long *crst_table_alloc(struct mm_struct *mm)
 {
-	struct page *page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
-	arch_set_page_dat(page, CRST_ALLOC_ORDER);
-	return (unsigned long *) page_to_virt(page);
+	arch_set_page_dat(ptdesc_page(ptdesc), CRST_ALLOC_ORDER);
+	return (unsigned long *) ptdesc_to_virt(ptdesc);
 }
 
 void crst_table_free(struct mm_struct *mm, unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	ptdesc_free(virt_to_ptdesc(table));
 }
 
 static void __crst_table_upgrade(void *arg)
@@ -140,21 +140,21 @@ static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits)
 
 struct page *page_table_alloc_pgste(struct mm_struct *mm)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	u64 *table;
 
-	page = alloc_page(GFP_KERNEL);
-	if (page) {
-		table = (u64 *)page_to_virt(page);
+	ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
+	if (ptdesc) {
+		table = (u64 *)ptdesc_to_virt(ptdesc);
 		memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	}
-	return page;
+	return ptdesc_page(ptdesc);
 }
 
 void page_table_free_pgste(struct page *page)
 {
-	__free_page(page);
+	ptdesc_free(page_ptdesc(page));
 }
 
 #endif /* CONFIG_PGSTE */
@@ -230,7 +230,7 @@ void page_table_free_pgste(struct page *page)
 unsigned long *page_table_alloc(struct mm_struct *mm)
 {
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned int mask, bit;
 
 	/* Try to get a fragment of a 4K page as a 2K page table */
@@ -238,9 +238,9 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		table = NULL;
 		spin_lock_bh(&mm->context.lock);
 		if (!list_empty(&mm->context.pgtable_list)) {
-			page = list_first_entry(&mm->context.pgtable_list,
-						struct page, lru);
-			mask = atomic_read(&page->pt_frag_refcount);
+			ptdesc = list_first_entry(&mm->context.pgtable_list,
+						struct ptdesc, pt_list);
+			mask = atomic_read(&ptdesc->pt_frag_refcount);
 			/*
 			 * The pending removal bits must also be checked.
 			 * Failure to do so might lead to an impossible
@@ -253,13 +253,13 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			 */
 			mask = (mask | (mask >> 4)) & 0x03U;
 			if (mask != 0x03U) {
-				table = (unsigned long *) page_to_virt(page);
+				table = (unsigned long *) ptdesc_to_virt(ptdesc);
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->pt_frag_refcount,
+				atomic_xor_bits(&ptdesc->pt_frag_refcount,
 							0x01U << bit);
-				list_del(&page->lru);
+				list_del(&ptdesc->pt_list);
 			}
 		}
 		spin_unlock_bh(&mm->context.lock);
@@ -267,27 +267,27 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			return table;
 	}
 	/* Allocate a fresh page */
-	page = alloc_page(GFP_KERNEL);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!ptdesc_pte_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
-	arch_set_page_dat(page, 0);
+	arch_set_page_dat(ptdesc_page(ptdesc), 0);
 	/* Initialize page table */
-	table = (unsigned long *) page_to_virt(page);
+	table = (unsigned long *) ptdesc_to_virt(ptdesc);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_xor_bits(&page->pt_frag_refcount, 0x01U);
+		atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x01U);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
-		list_add(&page->lru, &mm->context.pgtable_list);
+		list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		spin_unlock_bh(&mm->context.lock);
 	}
 	return table;
@@ -309,9 +309,8 @@ static void page_table_release_check(struct page *page, void *table,
 void page_table_free(struct mm_struct *mm, unsigned long *table)
 {
 	unsigned int mask, bit, half;
-	struct page *page;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
-	page = virt_to_page(table);
 	if (!mm_alloc_pgste(mm)) {
 		/* Free 2K page table fragment of a 4K page */
 		bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
@@ -321,39 +320,38 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		 * will happen outside of the critical section from this
 		 * function or from __tlb_remove_table()
 		 */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit);
 		if (mask & 0x03U)
-			list_add(&page->lru, &mm->context.pgtable_list);
+			list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		else
-			list_del(&page->lru);
+			list_del(&ptdesc->pt_list);
 		spin_unlock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x10U << bit);
 		if (mask != 0x00U)
 			return;
 		half = 0x01U << bit;
 	} else {
 		half = 0x03U;
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 	}
 
-	page_table_release_check(page, table, half, mask);
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 			 unsigned long vmaddr)
 {
 	struct mm_struct *mm;
-	struct page *page;
 	unsigned int bit, mask;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	mm = tlb->mm;
-	page = virt_to_page(table);
 	if (mm_alloc_pgste(mm)) {
 		gmap_unlink(mm, table, vmaddr);
 		table = (unsigned long *) ((unsigned long)table | 0x03U);
-		tlb_remove_table(tlb, table);
+		tlb_remove_ptdesc(tlb, table);
 		return;
 	}
 	bit = ((unsigned long) table & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
@@ -363,11 +361,11 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	 * outside of the critical section from __tlb_remove_table() or from
 	 * page_table_free()
 	 */
-	mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
+	mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit);
 	if (mask & 0x03U)
-		list_add_tail(&page->lru, &mm->context.pgtable_list);
+		list_add_tail(&ptdesc->pt_list, &mm->context.pgtable_list);
 	else
-		list_del(&page->lru);
+		list_del(&ptdesc->pt_list);
 	spin_unlock_bh(&mm->context.lock);
 	table = (unsigned long *) ((unsigned long) table | (0x01U << bit));
 	tlb_remove_table(tlb, table);
@@ -377,7 +375,7 @@ void __tlb_remove_table(void *_table)
 {
 	unsigned int mask = (unsigned long) _table & 0x03U, half = mask;
 	void *table = (void *)((unsigned long) _table ^ mask);
-	struct page *page = virt_to_page(table);
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	switch (half) {
 	case 0x00U:	/* pmd, pud, or p4d */
@@ -385,18 +383,18 @@ void __tlb_remove_table(void *_table)
 		return;
 	case 0x01U:	/* lower 2K of a 4K page table */
 	case 0x02U:	/* higher 2K of a 4K page table */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, mask << 4);
 		if (mask != 0x00U)
 			return;
 		break;
 	case 0x03U:	/* 4K page table with pgstes */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 		break;
 	}
 
-	page_table_release_check(page, table, half, mask);
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 /*
@@ -424,16 +422,20 @@ static void base_pgt_free(unsigned long *table)
 static unsigned long *base_crst_alloc(unsigned long val)
 {
 	unsigned long *table;
+	struct ptdesc *ptdesc;
 
-	table =	(unsigned long *)__get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
-	if (table)
-		crst_table_init(table, val);
+	ptdesc = ptdesc_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);
+	if (!ptdesc)
+		return NULL;
+	table = ptdesc_address(ptdesc);
+
+	crst_table_init(table, val);
 	return table;
 }
 
 static void base_crst_free(unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	ptdesc_free(virt_to_ptdesc(table));
 }
 
 #define BASE_ADDR_END_FUNC(NAME, SIZE)					\
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:29:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:29:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528097.820831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCr-00040s-4Z; Mon, 01 May 2023 19:29:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528097.820831; Mon, 01 May 2023 19:29:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCq-0003sz-Ap; Mon, 01 May 2023 19:29:04 +0000
Received: by outflank-mailman (input) for mailman id 528097;
 Mon, 01 May 2023 19:29:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCo-0006PY-Ld
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:02 +0000
Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com
 [2607:f8b0:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6c740127-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:01 +0200 (CEST)
Received: by mail-pl1-x634.google.com with SMTP id
 d9443c01a7336-1ab032d9266so6158805ad.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:01 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c740127-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969339; x=1685561339;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=mrqntkjJd8BO8YfASMnosk0HIl+t/0EXXW8sSx0dJVg=;
        b=YynIc3MWOn+7eKGjmZARCn/IjCANfe8gTHKhI/1ubKTs4ZbiubHG1P9aimD9KzNrIR
         eunJCIzqiXfZGl0CCMrWEsO+nwpbu9UTA+2coBncf0PpX1ohouYsG2DEdDJZPuTXi/ir
         yR5oJPT4K2awb6eBfuzacZRfwPXKDUNKWynDv8oNvVCBXhLYj3JhwMiPBRO+Fg6hjdSs
         FM9xgZDHU6Sc+urW7voaFKEt/7m6LjDuxZZp6qL4zZI9dwEcGtT8oaSDqFIY7TNV4Oop
         NDnPsASY2UskW6usELO5k/OqVJv807YApzHQHt4bSD2BLXySmWIPAbRFeQIxlm3YwdkY
         qcpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969339; x=1685561339;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=mrqntkjJd8BO8YfASMnosk0HIl+t/0EXXW8sSx0dJVg=;
        b=KDGHEdTMGScv5ImkdHYZA5CK74uIGtCrxIv4hXFxwr9ZkycUjbi/whn3tflEetbYPE
         ATu3n2Pl1oWLmPZNqAzPzVWm0rqS4xD0veeQqG4awStEnMTKeqH2SNxDK0GLbxTMeDS4
         G93JkLX1IZY+19nsStYzucTLiCxp4K2nmtKV2KbqIBzDJ6VzqB5BRThx2oQI+QUOvm8B
         oYrFk1j9W2cGR2AUq4Z1ZcbFRTMITn/lKSd7RHxyqpNt/Sql9XCQLLuJXAontzzEmQ+e
         Y2DpYyVSjXqIc2rfoULgvIgyAnzfJ6tJ7CV9PQeePXgsSG0/1TiDmjxPuiytvId4ZZyW
         V1hQ==
X-Gm-Message-State: AC+VfDw2riYNBT5BqyDE1ECCexdIksoRI0f8z35IxBytBLYb6PZOf8jt
	/jqvI/btTxvg91/HUOYGaHg=
X-Google-Smtp-Source: ACHHUZ77ujqEA4Fn1OLOb7e5ecCGVNnJtAoTp2B3mAkadiAfSXyelwvqP5mlNEo41vojA+MW1rmqbg==
X-Received: by 2002:a17:902:ce8e:b0:1aa:efad:f2d4 with SMTP id f14-20020a170902ce8e00b001aaefadf2d4mr5633464plg.63.1682969339499;
        Mon, 01 May 2023 12:28:59 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 18/34] mm: Remove page table members from struct page
Date: Mon,  1 May 2023 12:28:13 -0700
Message-Id: <20230501192829.17086-19-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The page table members are now split out into their own ptdesc struct.
Remove them from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm_types.h | 14 --------------
 include/linux/pgtable.h  |  3 ---
 2 files changed, 17 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6161fe1ae5b8..31ffa1be21d0 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -141,20 +141,6 @@ struct page {
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
 		};
-		struct {	/* Page table pages */
-			unsigned long _pt_pad_1;	/* compound_head */
-			pgtable_t pmd_huge_pte; /* protected by page->ptl */
-			unsigned long _pt_s390_gaddr;	/* mapping */
-			union {
-				struct mm_struct *pt_mm; /* x86 pgds only */
-				atomic_t pt_frag_refcount; /* powerpc */
-			};
-#if ALLOC_SPLIT_PTLOCKS
-			spinlock_t *ptl;
-#else
-			spinlock_t ptl;
-#endif
-		};
 		struct {	/* ZONE_DEVICE pages */
 			/** @pgmap: Points to the hosting device page map. */
 			struct dev_pagemap *pgmap;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index b067ac10f3dd..90fa73a896db 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1034,10 +1034,7 @@ struct ptdesc {
 TABLE_MATCH(flags, __page_flags);
 TABLE_MATCH(compound_head, pt_list);
 TABLE_MATCH(compound_head, _pt_pad_1);
-TABLE_MATCH(pmd_huge_pte, pmd_huge_pte);
 TABLE_MATCH(mapping, _pt_s390_gaddr);
-TABLE_MATCH(pt_mm, pt_mm);
-TABLE_MATCH(ptl, ptl);
 #undef TABLE_MATCH
 static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:29:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:29:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528099.820845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCt-0004t2-L2; Mon, 01 May 2023 19:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528099.820845; Mon, 01 May 2023 19:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZCt-0004rw-BG; Mon, 01 May 2023 19:29:07 +0000
Received: by outflank-mailman (input) for mailman id 528099;
 Mon, 01 May 2023 19:29:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCq-0006PY-Mg
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:04 +0000
Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com
 [2607:f8b0:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6d8d0f37-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:02 +0200 (CEST)
Received: by mail-pl1-x62f.google.com with SMTP id
 d9443c01a7336-1aaf706768cso10983245ad.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:01 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d8d0f37-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969341; x=1685561341;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=dfen1Ya6fa7BCRlN1BPOcmTNp3Sy/DALCm51X/wHZX4=;
        b=NGlfJzK3SLIyrzVeCW+cy/LENuF7AIYcpLjneyCln8/dN8tsSa+TUgx6Zs0xZt09V/
         AUoO/9vVboMIkkiAJKzWw3zNTkXyZHbg5i+Uv3Zqtu+rX4tuOiaEx2sdsDxoZuaBDGgr
         vod9dLgUvGamVrCZ2EGOCdE8ToYnRSXk/vhm78x+dP7MR2nyb1QcOC5STzovq8Qe7Hqo
         h0w/rTHUO6l4DW/cRdBZ54cZqeORpVG+YoPGSUP9GNzfvkM/15zFtG0hwm7rzS97mp6T
         8K3nb/fOTHh3fKdVDbkoQtqh9GEw5zSK4hVf63htcA2MUZQPuMJk5JXlIyYiKdaNTnZr
         +1fQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969341; x=1685561341;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=dfen1Ya6fa7BCRlN1BPOcmTNp3Sy/DALCm51X/wHZX4=;
        b=dQANMYcFTypk+0dw2i8Pr8QpO5rIvlzvisICD9U11IGWs8UWTA22JUzbvfUU68M6fS
         CJ1bJyg7L/sppHsJHAIjRfamfauXi4rVuGlWU7xGQDBNRYoxGy3Ekndeuax7iS+oOxJL
         kkOt/FJg9+bmb8KF83pkhO5sEn+eTnSzqb/sHd0b9duCO/i1TdZP4xlUesl/jGyT3Mlz
         6Ix3bN9v8bwCvZTFi+UARLTId7v0yee12DvBEhD4ioYR+W9ycNvJWSB6bMHLZKirFfUF
         CL2HWRCOFtYFI3fBOQV4qlX6ueyUg4Bfmiz/ZjZ3XM6s/SB5oyMdIuTLSgG4SNRXrI50
         g/ow==
X-Gm-Message-State: AC+VfDyPcafaDaxArqUqHxJ9qnZdUALzzhKZ/fJMTbYQXy8u2VihkMdc
	13w4LoPTFj2sa0ot1wFpVt4=
X-Google-Smtp-Source: ACHHUZ4bym3aGkcEwh3pzEy9AQ1AIJMbPocjKSt/BA7y0pQIbB80eZOWxLwavvxYdF9801tFQUA10w==
X-Received: by 2002:a17:903:124b:b0:1a2:8c7e:f315 with SMTP id u11-20020a170903124b00b001a28c7ef315mr17565749plh.21.1682969341343;
        Mon, 01 May 2023 12:29:01 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Arnd Bergmann <arnd@arndb.de>
Subject: [PATCH v2 19/34] pgalloc: Convert various functions to use ptdescs
Date: Mon,  1 May 2023 12:28:14 -0700
Message-Id: <20230501192829.17086-20-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/asm-generic/pgalloc.h | 62 +++++++++++++++++++++--------------
 1 file changed, 37 insertions(+), 25 deletions(-)

diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index a7cf825befae..7d4a1f5d3c17 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -18,7 +18,11 @@
  */
 static inline pte_t *__pte_alloc_one_kernel(struct mm_struct *mm)
 {
-	return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL);
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_PGTABLE_KERNEL, 0);
+
+	if (!ptdesc)
+		return NULL;
+	return (pte_t *)ptdesc_address(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
@@ -41,7 +45,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
  */
 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	free_page((unsigned long)pte);
+	ptdesc_free(virt_to_ptdesc(pte));
 }
 
 /**
@@ -49,7 +53,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
  * @mm: the mm_struct of the current context
  * @gfp: GFP flags to use for the allocation
  *
- * Allocates a page and runs the pgtable_pte_page_ctor().
+ * Allocates a ptdesc and runs the ptdesc_pte_ctor().
  *
  * This function is intended for architectures that need
  * anything beyond simple page allocation or must have custom GFP flags.
@@ -58,17 +62,17 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
  */
 static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
 {
-	struct page *pte;
+	struct ptdesc *ptdesc;
 
-	pte = alloc_page(gfp);
-	if (!pte)
+	ptdesc = ptdesc_alloc(gfp, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(pte)) {
-		__free_page(pte);
+	if (!ptdesc_pte_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	return pte;
+	return ptdesc_page(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PTE_ALLOC_ONE
@@ -76,7 +80,7 @@ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
  * pte_alloc_one - allocate a page for PTE-level user page table
  * @mm: the mm_struct of the current context
  *
- * Allocates a page and runs the pgtable_pte_page_ctor().
+ * Allocates a ptdesc and runs the ptdesc_pte_ctor().
  *
  * Return: `struct page` initialized as page table or %NULL on error
  */
@@ -98,8 +102,10 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
  */
 static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
 {
-	pgtable_pte_page_dtor(pte_page);
-	__free_page(pte_page);
+	struct ptdesc *ptdesc = page_ptdesc(pte_page);
+
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 
@@ -110,7 +116,7 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
  * pmd_alloc_one - allocate a page for PMD-level page table
  * @mm: the mm_struct of the current context
  *
- * Allocates a page and runs the pgtable_pmd_page_ctor().
+ * Allocates a ptdesc and runs the ptdesc_pmd_ctor().
  * Allocations use %GFP_PGTABLE_USER in user context and
  * %GFP_PGTABLE_KERNEL in kernel context.
  *
@@ -118,28 +124,30 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
  */
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	gfp_t gfp = GFP_PGTABLE_USER;
 
 	if (mm == &init_mm)
 		gfp = GFP_PGTABLE_KERNEL;
-	page = alloc_page(gfp);
-	if (!page)
+	ptdesc = ptdesc_alloc(gfp, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pmd_page_ctor(page)) {
-		__free_page(page);
+	if (!ptdesc_pmd_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
-	return (pmd_t *)page_address(page);
+	return (pmd_t *)ptdesc_address(ptdesc);
 }
 #endif
 
 #ifndef __HAVE_ARCH_PMD_FREE
 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
+
 	BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
-	free_page((unsigned long)pmd);
+	ptdesc_pmd_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 #endif
 
@@ -149,11 +157,15 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 
 static inline pud_t *__pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-	gfp_t gfp = GFP_PGTABLE_USER;
+	gfp_t gfp = GFP_PGTABLE_USER | __GFP_ZERO;
+	struct ptdesc *ptdesc;
 
 	if (mm == &init_mm)
 		gfp = GFP_PGTABLE_KERNEL;
-	return (pud_t *)get_zeroed_page(gfp);
+	ptdesc = ptdesc_alloc(gfp, 0);
+	if (!ptdesc)
+		return NULL;
+	return (pud_t *)ptdesc_address(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PUD_ALLOC_ONE
@@ -175,7 +187,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 static inline void __pud_free(struct mm_struct *mm, pud_t *pud)
 {
 	BUG_ON((unsigned long)pud & (PAGE_SIZE-1));
-	free_page((unsigned long)pud);
+	ptdesc_free(virt_to_ptdesc(pud));
 }
 
 #ifndef __HAVE_ARCH_PUD_FREE
@@ -190,7 +202,7 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 #ifndef __HAVE_ARCH_PGD_FREE
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_page((unsigned long)pgd);
+	ptdesc_free(virt_to_ptdesc(pgd));
 }
 #endif
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:30:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528101.820857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZER-0000mt-91; Mon, 01 May 2023 19:30:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528101.820857; Mon, 01 May 2023 19:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZER-0000mm-5e; Mon, 01 May 2023 19:30:43 +0000
Received: by outflank-mailman (input) for mailman id 528101;
 Mon, 01 May 2023 19:30:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEP-0000m4-9y
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:30:41 +0000
Received: from mail-qt1-x831.google.com (mail-qt1-x831.google.com
 [2607:f8b0:4864:20::831])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a7c2d785-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:30:40 +0200 (CEST)
Received: by mail-qt1-x831.google.com with SMTP id
 d75a77b69052e-3ef6e8493ebso14045741cf.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:30:40 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.30.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:30:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7c2d785-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969438; x=1685561438;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=J4I9pwZJc+JjJXrrhRllN1qzGBzavb5IgPCBPwFplUc=;
        b=fmiPSYc2nGsg78ceE6+kbaB+qUy1PF1cvWxSlRh2fDCshuHC0BS7N7CUoDCcKsUWNI
         d3qriqn5pdIkDU3LuxpIjij2MwV09x7y7mk1aOtTW2/4tuFmYPTSpknyx+8ASI3Kzhex
         Nm8AHOMc5dViFo+xWPkFnMLzYT5hZ3SDOH3sH18iQuvRDgrP7nKV7c7Xg7Y5jaxqqZBg
         H4I7v+SMWoVXGDasH1UTZnVj9C0UWEpNlUkqwmJD0mn7go6wgLCkn5dGaD3M2P5dK2A8
         PHTHrwO7yBQh0uxa0ka3Fdoy9HovpfZCViu5tV2DutmBJ63X0eNKV2aI3fZHndCBEvZQ
         YaQw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969438; x=1685561438;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=J4I9pwZJc+JjJXrrhRllN1qzGBzavb5IgPCBPwFplUc=;
        b=AkwP21x1YMHz+VDLjyXWd8TYSwJAFfHE/le+wxlnZH4Z/+BxCM4mSJoF7yDNQpM4o1
         Yx2Y2c0TZSR+lqstxRXbxfrAAZX2dZHe+xaQ0M9hWkiK+NPezLtVfmdjz21VZz7FbwNp
         87IKrmaUTXA1U0zTN8cTFBB9ukaxDSYLMl/xDgy7WJhc+KHDe7PmE5XBxGI8aKO17VvZ
         yVh8aff8ZQAh+CyFd7nMajNkocoOGv1QWpg6XSusYyAbkjdzIMpRGq5rVqkMT6W8MidC
         CamCeibj3MFFYtapovFoN2RhM01DE6OldzGOFs14tUpjtBEP6ae/UjnvW1VJaexKI5b6
         EvGQ==
X-Gm-Message-State: AC+VfDxL6NsLG1fagJBnwdkcaegE+M7uwg3rdIGfZD4snFyz4v4gYTh5
	rz5Fb05apbPKSOE+0bxKyz+HxaLYgSo=
X-Google-Smtp-Source: ACHHUZ6JC6uoyLOBqmP1EpZkLhQWLiit8irznw5BGejX4UnZK6DMIMe/egnWLI5OCWF/y+K3B+ujnQ==
X-Received: by 2002:ac8:5846:0:b0:3ef:3511:f494 with SMTP id h6-20020ac85846000000b003ef3511f494mr24788647qth.16.1682969438585;
        Mon, 01 May 2023 12:30:38 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH v3 00/14 RESEND] Intel Hardware P-States (HWP) support
Date: Mon,  1 May 2023 15:30:20 -0400
Message-Id: <20230501193034.88575-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

Resending since I messed up the To: line.  Sorry about that.

This patch series adds Hardware-Controlled Performance States (HWP) for
Intel processors to Xen.

v2 was only partially reviewed, so v3 is mostly a reposting of v2.  In v2 &
v3, I think I addressed all comments from v1.  I kept patch 11 "xenpm:
Factor out a non-fatal cpuid_parse variant", with a v2 comment
explaining why I kept it.

v3 adds "xen/x86: Tweak PDC bits when using HWP".  Qubes testing revealed
an issue where enabling HWP can crash firmware code (maybe SMM).  This
requires a Linux change to get the PDC bits from Xen and pass them to
ACPI.  Roger has a patch [0] to set the PDC bits.  Roger's 3-patch
series was tested with "xen/x86: Tweak PDC bits when using HWP" on
affected hardware and allowed proper operation.

Previous cover letter:

With HWP, the processor makes its own determinations for frequency
selection, though users can set some parameters and preferences.  There
is also Turbo Boost which dynamically pushes the max frequency if
possible.

The existing governors don't work with HWP since they select frequencies
and HWP doesn't expose those.  Therefore a dummy hwp-internal governor
is used that doesn't do anything.

xenpm get-cpufreq-para is extended to show HWP parameters, and
set-cpufreq-hwp is added to set them.

A lightly loaded OpenXT laptop showed ~1W power savings according to
powertop.  A mostly idle Fedora system (dom0 only) showed a more modest
power savings.

This is for a 10th gen 6-core CPU, 1600 MHz base, 4900 MHz max.  In the
default balance mode, Turbo Boost doesn't exceed 4GHz.  Tweaking the
energy_perf preference with `xenpm set-cpufreq-hwp balance ene:64`,
I've seen the CPU hit 4.7GHz before throttling down and bouncing around
between 4.3 and 4.5 GHz.  Curiously the other cores read ~4GHz when
turbo boost takes effect.  This was done after pinning all dom0 cores,
and using taskset to pin to vCPU/pCPU 11 and running a bash tightloop.

HWP defaults to disabled, running with the existing HWP configuration
- it doesn't reconfigure anything by default.  It can be enabled with
cpufreq=xen:hwp.

Hardware Duty Cycling (HDC) is another feature that autonomously powers
down parts of the processor.  It defaults to enabled when HWP is
enabled, but HDC can be
disabled on the command line.  cpufreq=xen:hwp,no-hdc
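As a usage sketch (hypothetical session; the options are the ones described above and the xenpm commands require a Xen host with this series applied):

```shell
# Xen boot command line (append to the xen.gz entry in grub):
#   cpufreq=xen:hwp          # enable HWP, HDC on by default
#   cpufreq=xen:hwp,no-hdc   # enable HWP but keep HDC disabled

# From dom0, after boot:
xenpm get-cpufreq-para 0                 # show HWP parameters for CPU 0
xenpm set-cpufreq-hwp balance ene:64     # bias energy_perf toward performance
```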

I've only tested on 8th gen and 10th gen systems with activity window
and energy_perf support.  So the code paths for CPUs lacking those
features are untested.

Fast MSR support was removed in v2.  The model specific checking was not
done properly, and I don't have hardware to test with.  Since writes are
expected to be infrequent, I just removed the code.

This changes the sysctl_pm_op hypercall, so that wants review.

Regards,
Jason

[0] https://lore.kernel.org/xen-devel/20221121102113.41893-3-roger.pau@citrix.com/

Jason Andryuk (14):
  cpufreq: Allow restricting to internal governors only
  cpufreq: Add perf_freq to cpuinfo
  cpufreq: Export intel_feature_detect
  cpufreq: Add Hardware P-State (HWP) driver
  xenpm: Change get-cpufreq-para output for internal
  xen/x86: Tweak PDC bits when using HWP
  cpufreq: Export HWP parameters to userspace
  libxc: Include hwp_para in definitions
  xenpm: Print HWP parameters
  xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
  libxc: Add xc_set_cpufreq_hwp
  xenpm: Factor out a non-fatal cpuid_parse variant
  xenpm: Add set-cpufreq-hwp subcommand
  CHANGELOG: Add Intel HWP entry

 CHANGELOG.md                              |   1 +
 docs/misc/xen-command-line.pandoc         |   8 +-
 tools/include/xenctrl.h                   |   6 +
 tools/libs/ctrl/xc_pm.c                   |  18 +
 tools/misc/xenpm.c                        | 355 +++++++++++-
 xen/arch/x86/acpi/cpufreq/Makefile        |   1 +
 xen/arch/x86/acpi/cpufreq/cpufreq.c       |  15 +-
 xen/arch/x86/acpi/cpufreq/hwp.c           | 633 ++++++++++++++++++++++
 xen/arch/x86/acpi/lib.c                   |   5 +
 xen/arch/x86/cpu/mcheck/mce_intel.c       |   6 +
 xen/arch/x86/include/asm/cpufeature.h     |  13 +-
 xen/arch/x86/include/asm/msr-index.h      |  14 +
 xen/drivers/acpi/pmstat.c                 |  23 +
 xen/drivers/cpufreq/cpufreq.c             |  40 ++
 xen/drivers/cpufreq/utility.c             |   1 +
 xen/include/acpi/cpufreq/cpufreq.h        |  14 +
 xen/include/acpi/cpufreq/processor_perf.h |   4 +
 xen/include/acpi/pdc_intel.h              |   1 +
 xen/include/public/sysctl.h               |  57 ++
 19 files changed, 1187 insertions(+), 28 deletions(-)
 create mode 100644 xen/arch/x86/acpi/cpufreq/hwp.c

-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:30:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:30:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528102.820866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEX-00015W-GU; Mon, 01 May 2023 19:30:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528102.820866; Mon, 01 May 2023 19:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEX-00015P-Dp; Mon, 01 May 2023 19:30:49 +0000
Received: by outflank-mailman (input) for mailman id 528102;
 Mon, 01 May 2023 19:30:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEW-000149-0Q
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:30:48 +0000
Received: from mail-qk1-x731.google.com (mail-qk1-x731.google.com
 [2607:f8b0:4864:20::731])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ab36cae7-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:30:46 +0200 (CEST)
Received: by mail-qk1-x731.google.com with SMTP id
 af79cd13be357-74e0180b7d3so129184485a.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:30:46 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.30.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:30:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab36cae7-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969444; x=1685561444;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7VFwxnOVM/nUJWo2+kFJfbzF53GX8dfaVA+R5Uyazpc=;
        b=r9j9BPVz3/laZA8dsDmxZe7TRv0yNCtj/Xbsb4WpqDXgx/fA0HOTxNUrvpQf+C4t2T
         YNYN206A0eGrKe1qAym9I5p7tG8XdEGiJaLW8/t6JkTJFkIT0pzBFXS/iIkCO56wJn3J
         pUjquCuW3n9+l6STbY/dk/Ilm9zT/scjBTbtK0COm2KbSl8FAiXw+kNZWbpx0W2WhcG0
         xkKJVYBUTb1RNwa7L8EPhULHZE9DzKHmCK095rERWUvFMoy1jZyxhh3f0Ve6Bc7Dd9u0
         RXDpYSvLxEtoWzrauTp25svhJmvdF5OfkwS+uKrmB1iBB7Lcit4b94RhLAlA38eqUI8l
         eUEg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969444; x=1685561444;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=7VFwxnOVM/nUJWo2+kFJfbzF53GX8dfaVA+R5Uyazpc=;
        b=e5YLgoL599yP+Xdv0THAgss77WW4BQBhNDGUt27NFu/yV8d6Hk2Xd9U7aftqQ9dfzF
         oxkCghMUyRzel1clzQJRcVM5vahCRhg5CezxVDGDhNsiIXrczwnAnja7up9ZnNSqsirz
         Xk0Vz8cjOfJwbJDcv0EbU8WPQS4pz0PV5AmBG55CQXJwBZo4R77v1SRPl7B8hHOGyuux
         fW7j/gmJRqw+qC1Kd8bmqB+lyEOOsb8WaTgPQNv/o086SviJ2xaEg6yIef5ubaSMo0/4
         rYTqB6y/H3gTJVZ2si5QbqwIoOXZfqyUHXaWgQ8j2LM2Pri5RCygi/gtiVwLfOhr2+Op
         pcqA==
X-Gm-Message-State: AC+VfDwFiDMZoBVws863mnqB/YZLQslfknsbY4EPLNA1QijkGfK4cKVZ
	0hNoj1dGBrCCfasguw8ZeKCvACa/Zlk=
X-Google-Smtp-Source: ACHHUZ73X7XhEV+VKk/WhUuhmrdce/HnekpsrCHDFejO2dmdia7/TP7F2pse7XFmNwI5RC0Ffs/PlA==
X-Received: by 2002:ac8:7c50:0:b0:3f0:df4d:40b7 with SMTP id o16-20020ac87c50000000b003f0df4d40b7mr24378132qtv.7.1682969444425;
        Mon, 01 May 2023 12:30:44 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 01/14 RESEND] cpufreq: Allow restricting to internal governors only
Date: Mon,  1 May 2023 15:30:21 -0400
Message-Id: <20230501193034.88575-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For hwp, the standard governors are not usable, and only the internal
one is applicable.  Add the cpufreq_governor_internal boolean to
indicate when an internal governor, like hwp-internal, will be used.
This is set during presmp_initcall, so that it can suppress governor
registration during initcall.  Only a governor with a name containing
"-internal" will be allowed in that case.

This way, the unusable governors are not registered, so the internal
one is the only one returned to userspace.  This means incompatible
governors won't be advertised to userspace.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3:
Switch to initdata
Add Jan Acked-by
Commit message s/they/the/ typo
Don't register hwp-internal when running non-hwp - Marek

v2:
Switch to "-internal"
Add blank line in header
---
 xen/drivers/cpufreq/cpufreq.c      | 8 ++++++++
 xen/include/acpi/cpufreq/cpufreq.h | 2 ++
 2 files changed, 10 insertions(+)

diff --git a/xen/drivers/cpufreq/cpufreq.c b/xen/drivers/cpufreq/cpufreq.c
index 2321c7dd07..7bd81680da 100644
--- a/xen/drivers/cpufreq/cpufreq.c
+++ b/xen/drivers/cpufreq/cpufreq.c
@@ -56,6 +56,7 @@ struct cpufreq_dom {
 };
 static LIST_HEAD_READ_MOSTLY(cpufreq_dom_list_head);
 
+bool __initdata cpufreq_governor_internal;
 struct cpufreq_governor *__read_mostly cpufreq_opt_governor;
 LIST_HEAD_READ_MOSTLY(cpufreq_governor_list);
 
@@ -121,6 +122,13 @@ int __init cpufreq_register_governor(struct cpufreq_governor *governor)
     if (!governor)
         return -EINVAL;
 
+    if (cpufreq_governor_internal &&
+        strstr(governor->name, "-internal") == NULL)
+        return -EINVAL;
+
+    if (!cpufreq_governor_internal && strstr(governor->name, "-internal"))
+        return -EINVAL;
+
     if (__find_governor(governor->name) != NULL)
         return -EEXIST;
 
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 35dcf21e8f..0da32ef519 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -114,6 +114,8 @@ extern struct cpufreq_governor cpufreq_gov_userspace;
 extern struct cpufreq_governor cpufreq_gov_performance;
 extern struct cpufreq_governor cpufreq_gov_powersave;
 
+extern bool cpufreq_governor_internal;
+
 extern struct list_head cpufreq_governor_list;
 
 extern int cpufreq_register_governor(struct cpufreq_governor *governor);
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528103.820877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEa-0001Nv-Ou; Mon, 01 May 2023 19:30:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528103.820877; Mon, 01 May 2023 19:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEa-0001NW-Lb; Mon, 01 May 2023 19:30:52 +0000
Received: by outflank-mailman (input) for mailman id 528103;
 Mon, 01 May 2023 19:30:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEZ-0000m4-Bd
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:30:51 +0000
Received: from mail-qt1-x836.google.com (mail-qt1-x836.google.com
 [2607:f8b0:4864:20::836])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id adfbb034-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:30:50 +0200 (CEST)
Received: by mail-qt1-x836.google.com with SMTP id
 d75a77b69052e-3ef302a642eso13479981cf.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:30:50 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.30.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:30:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adfbb034-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969449; x=1685561449;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=txBzmZlgf2a5ZApasbIDD2P8VUrWanxh/ZCZKapkSd4=;
        b=ruWWxubqxWEfgknzCeHfuioXix3neQCmP3+a25TdXUf2lY/OXFPt8Fw7bOACTmeMj9
         W6lzbRO4CQm8VUxPYOZhHZE9MAixtoaCcKrnatX2CtjIGctgcqhiDy42JfMp7IiwQsJ2
         pJORAvC5p1oFLI9ZU2fPomLTuylha83HfbB0PRFixRqg6vC0BYsOt3O8hjTzKeInVEdB
         xETcc6syUMfOhifQLASnke2KU3HfyTwJkOGtQEYC2WBE1K5fATNr2/dHtxgsu261Kbce
         +eSDtdfkJ8r4k8f/c73OadVqX97fNky9tSHYk/+MBR2Gh0dmKpPjUK3U/1WOM4cjGqT8
         qyEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969449; x=1685561449;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=txBzmZlgf2a5ZApasbIDD2P8VUrWanxh/ZCZKapkSd4=;
        b=lRDxUe0u80tEBUo92bxCtRWM89Hy/Fw3aGEIRyMdP2PTUrIr2heCD6Tw0J1ELx2T+k
         kvzOWfk7ttslKi9QWHDo17AiiosT1bsSntJo1bjoA9d/M7UH9ItDYQf60f+qtFKtQfBJ
         a1sUADMWndydb1dJS44dp6IdNuPVzkmFshlM1JUM4MSQCDDoq6OyIPR4btn9Rsja0vUU
         TmWsF3DWL6kgbprf/92dHaQaifLGalJ3WXJ380y8hthwzRKi7Uhukrq+llE5lhGWYUMx
         3LI0cKVLhbRJwBWb3VZuvoDju4ALAyYcbpj6q64dLqyRBgOMWnaZXnIWcB26PyJMCMTo
         DNZQ==
X-Gm-Message-State: AC+VfDz2ZyXgNv5NVJHbm+h+AJUvo12iPoBEfAROIVdOAPKtibh8Xq7Q
	E1q2aR2C8nus3W2KoD2ACU8ntOmiLfQ=
X-Google-Smtp-Source: ACHHUZ6w3eawFGc/l9nZiVmnOOvz/C0aG06cvY9N0n+NkCEj6Vg7LQZ4xbldKhhvalrOwWvRbT6aDw==
X-Received: by 2002:ac8:7c50:0:b0:3f0:df4d:40b7 with SMTP id o16-20020ac87c50000000b003f0df4d40b7mr24378669qtv.7.1682969449211;
        Mon, 01 May 2023 12:30:49 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 02/14 RESEND] cpufreq: Add perf_freq to cpuinfo
Date: Mon,  1 May 2023 15:30:22 -0400
Message-Id: <20230501193034.88575-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

acpi-cpufreq scales the aperf/mperf measurements by max_freq, but HWP
needs to scale by base frequency.  Setting max_freq to base_freq
"works", but the code is not obvious, and returning values to userspace
is tricky.  Add an additional perf_freq member which is used for scaling
aperf/mperf measurements.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v3:
Add Jan's Ack

I don't like this, but it seems the best way to re-use the common
aperf/mperf code.  The other option would be to add wrappers that then
do the acpi vs. hwp scaling.
---
 xen/arch/x86/acpi/cpufreq/cpufreq.c | 2 +-
 xen/drivers/cpufreq/utility.c       | 1 +
 xen/include/acpi/cpufreq/cpufreq.h  | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index 2e0067fbe5..6c70d04395 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -316,7 +316,7 @@ unsigned int get_measured_perf(unsigned int cpu, unsigned int flag)
     else
         perf_percent = 0;
 
-    return policy->cpuinfo.max_freq * perf_percent / 100;
+    return policy->cpuinfo.perf_freq * perf_percent / 100;
 }
 
 static unsigned int cf_check get_cur_freq_on_cpu(unsigned int cpu)
diff --git a/xen/drivers/cpufreq/utility.c b/xen/drivers/cpufreq/utility.c
index 9eb7ecedcd..6831f62851 100644
--- a/xen/drivers/cpufreq/utility.c
+++ b/xen/drivers/cpufreq/utility.c
@@ -236,6 +236,7 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
 
     policy->min = policy->cpuinfo.min_freq = min_freq;
     policy->max = policy->cpuinfo.max_freq = max_freq;
+    policy->cpuinfo.perf_freq = max_freq;
     policy->cpuinfo.second_max_freq = second_max_freq;
 
     if (policy->min == ~0)
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 0da32ef519..a06aa92f62 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -37,6 +37,9 @@ extern struct acpi_cpufreq_data *cpufreq_drv_data[NR_CPUS];
 struct cpufreq_cpuinfo {
     unsigned int        max_freq;
     unsigned int        second_max_freq;    /* P1 if Turbo Mode is on */
+    unsigned int        perf_freq; /* Scaling freq for aperf/mperf.
+                                      acpi-cpufreq uses max_freq, but HWP uses
+                                      base_freq. */
     unsigned int        min_freq;
     unsigned int        transition_latency; /* in 10^(-9) s = nanoseconds */
 };
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528104.820886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEe-0001iY-1N; Mon, 01 May 2023 19:30:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528104.820886; Mon, 01 May 2023 19:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEd-0001iM-U2; Mon, 01 May 2023 19:30:55 +0000
Received: by outflank-mailman (input) for mailman id 528104;
 Mon, 01 May 2023 19:30:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEd-000149-7d
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:30:55 +0000
Received: from mail-qt1-x82b.google.com (mail-qt1-x82b.google.com
 [2607:f8b0:4864:20::82b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id afbb5f8e-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:30:53 +0200 (CEST)
Received: by mail-qt1-x82b.google.com with SMTP id
 d75a77b69052e-3ee339e8c2fso12897371cf.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:30:53 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.30.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:30:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afbb5f8e-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969452; x=1685561452;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=H9MXL508WAYhYYKIHs8d77jkKwZyoTe/WztjxZedAQ8=;
        b=CxdCiWP1SUv+Yp9hcpkzl92fy/yJPA8ybi4F68aLbr7SImDVwdbkjA8fYrnKQtXs+V
         pyIBxunNJdDUmJu3GU08SVoJY6+AUIzlYw8Hp/YQNccxMy5ld9JuVsWUxj5x49H+ujHO
         VIieK+/4WpGVxK+rnYj1gYs2kAL8VpqO7ajvJjsKCiAZUeKosYnkXATmvGwpKqOi5mx6
         COc3zbTQyMnNL4+dvc51sNW/sIZgrauQWsGX/+84BZCDmKNjVqvNfj1VCJo+X01cSs32
         pAT8wfI6COmTC1J5jd8FIwODj9GF/tVGjc1pKqiQ8bgdYYRdSKgJeaG6kECUY3aWLl2j
         bwQw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969452; x=1685561452;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=H9MXL508WAYhYYKIHs8d77jkKwZyoTe/WztjxZedAQ8=;
        b=epUQ+HPRXrzEXbAqQbc+o/XsL9i7CUbgvGl+IgjQMp1ZpotUerBNXs1DdNq2iF2cEc
         a6ozq+PnBexrR5+OFSKztpOFa7T0ejWrrhMK/nezT5DwLrwK8/G7HCaW6+NZ4AghBlba
         1d1ZUHfTBovnkM8JwtayqRZfqc1aTpPjkvWgyFvvUPUJyh921R0LFiJuFXX6/o7TeU1P
         8jjlrQ9p5wxMZKx1PVal+s71elFTKmAGVm9ou5RNdL6syiPmAz0UO0/d2OALP35GaKVm
         1TMblTve55P+1EQae2MfiZ4CmBd7KhKD2c42sD1MSqCrrwvZlF4oHKPXKzDi4fWDTdAR
         g7YA==
X-Gm-Message-State: AC+VfDyqOUuNo9XQTsPtnA31+l+MJGd9w/hQBW0ytFVLfd/uvpZp56UG
	mPcFNF+96U+vtgJ2M0MjM4loGWCsvVI=
X-Google-Smtp-Source: ACHHUZ61Lw9FNIzq0S8Unejn3bdDcd2bR/GRbFtZ1Rd6OIkz4KwanSmye/RkOvAVRjQPpb2hLgwA8Q==
X-Received: by 2002:ac8:5d89:0:b0:3e8:1903:ab05 with SMTP id d9-20020ac85d89000000b003e81903ab05mr22968522qtx.64.1682969452035;
        Mon, 01 May 2023 12:30:52 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 03/14 RESEND] cpufreq: Export intel_feature_detect
Date: Mon,  1 May 2023 15:30:23 -0400
Message-Id: <20230501193034.88575-4-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Export feature_detect as intel_feature_detect so it can be re-used by
HWP.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
v3:
Remove void * cast when calling intel_feature_detect

v2:
export intel_feature_detect with typed pointer
Move intel_feature_detect to acpi/cpufreq/cpufreq.h since the
declaration now contains struct cpufreq_policy *.
---
 xen/arch/x86/acpi/cpufreq/cpufreq.c | 8 ++++++--
 xen/include/acpi/cpufreq/cpufreq.h  | 2 ++
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index 6c70d04395..f1cc473b4f 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -339,9 +339,8 @@ static unsigned int cf_check get_cur_freq_on_cpu(unsigned int cpu)
     return extract_freq(get_cur_val(cpumask_of(cpu)), data);
 }
 
-static void cf_check feature_detect(void *info)
+void intel_feature_detect(struct cpufreq_policy *policy)
 {
-    struct cpufreq_policy *policy = info;
     unsigned int eax;
 
     eax = cpuid_eax(6);
@@ -353,6 +352,11 @@ static void cf_check feature_detect(void *info)
     }
 }
 
+static void cf_check feature_detect(void *info)
+{
+    intel_feature_detect(info);
+}
+
 static unsigned int check_freqs(const cpumask_t *mask, unsigned int freq,
                                 struct acpi_cpufreq_data *data)
 {
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index a06aa92f62..0f334d2a43 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -243,4 +243,6 @@ int write_userspace_scaling_setspeed(unsigned int cpu, unsigned int freq);
 void cpufreq_dbs_timer_suspend(void);
 void cpufreq_dbs_timer_resume(void);
 
+void intel_feature_detect(struct cpufreq_policy *policy);
+
 #endif /* __XEN_CPUFREQ_PM_H__ */
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528105.820897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEi-0002AI-Ce; Mon, 01 May 2023 19:31:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528105.820897; Mon, 01 May 2023 19:31:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEi-0002A1-8b; Mon, 01 May 2023 19:31:00 +0000
Received: by outflank-mailman (input) for mailman id 528105;
 Mon, 01 May 2023 19:30:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEg-0000m4-EJ
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:30:58 +0000
Received: from mail-qk1-x736.google.com (mail-qk1-x736.google.com
 [2607:f8b0:4864:20::736])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b1a01d17-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:30:56 +0200 (CEST)
Received: by mail-qk1-x736.google.com with SMTP id
 af79cd13be357-7515631b965so676444985a.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:30:56 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.30.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:30:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1a01d17-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969455; x=1685561455;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Hym2Jx5Z8r7rrhqAfu+8xxIV/nnrCdXHwFgph7V69FQ=;
        b=eFLLq7533G3AnlY/uK7JfettQ+iRAVTRvVGoCC70QyQnroXyxJDlomnil6MLU0E2DA
         jhQTA/f36QeSqDRdqwjwFcxcqn4JCYAQkwHADfatxLrVUFBiDC+/w78PmV+e7fs7jzJT
         FNxKneHnD7iFgYBzeJc8Cg4MxNqtOBNDA+1+2FQGd737UubPzGypVrZ18DfT02/HqgNu
         GVN1Nf9WZVBrPbiAfuRkiETrycnYVgCKKBo9wWVsx4Huq9l2oroPhemhECPYaBplzCT3
         Hh1OOYVk78PzWfR+CEZTvVDb9pcGrar29yGM8SyXw2Fogtp0xoASga9uWVcqCr+dERqD
         IBUQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969455; x=1685561455;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Hym2Jx5Z8r7rrhqAfu+8xxIV/nnrCdXHwFgph7V69FQ=;
        b=O78QZNNAzyIBvRSi66VQB3uMnz2d1zEWIsJJNglMeh0rkKGAa7llo11z4r5vQKBWZd
         gLnd1x89gMxGYdr1s2dmu2nTf2spBT8Rtxm/sEBdNI6WK8VsaONnEU3L+V9Z2LHN+R39
         mG6s6VoVm9Yyb0etDXeXdb7rvLIlCc0qn3kB1nydN/FOYV//J2qHliRaZJmN+qEO9Ywc
         9rWLBw7tH1L0IxPXgAUnCEVeXh/wF3BvDDp6yjF+GsTOWcEvz5R+LPpq7rYcURTb7gkt
         iAcWPM4PzVjVhaSXMDWbnenj7i4p+byA2rmR+Vip4t/rcsOpw3r5g4krm5TyT4YSTgzx
         RMRQ==
X-Gm-Message-State: AC+VfDzplqNVmnRYhCmpDBbWfBb0MUQm815JOD1LQ3m7PxZtrABzIPt2
	ywRNCX91rsgqUuwIhgr6UFNUONdWDNk=
X-Google-Smtp-Source: ACHHUZ6QbNWTkiVCscld5Z5s1zw5gwOZxJEuSKsuwgvQq7Nmmk/GhBWAz0RBbxsfRy3SwJJxRchdFA==
X-Received: by 2002:ac8:5c85:0:b0:3e8:e986:b20a with SMTP id r5-20020ac85c85000000b003e8e986b20amr30735751qta.16.1682969455231;
        Mon, 01 May 2023 12:30:55 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP) driver
Date: Mon,  1 May 2023 15:30:24 -0400
Message-Id: <20230501193034.88575-5-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From the Intel SDM: "Hardware-Controlled Performance States (HWP), which
autonomously selects performance states while utilizing OS supplied
performance guidance hints."

Enable HWP to run in autonomous mode by poking the correct MSRs.
cpufreq=xen:hwp enables and cpufreq=xen:hwp=0 disables.  The same for
hdc.

There is no configuration interface yet - xen_sysctl_pm_op/xenpm will
be extended to allow configuration in subsequent patches.  Until then,
HWP runs with the default values, which should be the 0x80 (out of
0x0-0xff) energy/performance preference.

Unscientific powertop measurement of a mostly idle, customized OpenXT
install:
A 10th gen 6-core laptop showed battery discharge drop from ~9.x to
~7.x watts.
An 8th gen 4-core laptop dropped from ~10 to ~9 watts.

Power usage depends on many factors, especially display brightness, but
this does show a power saving in balanced mode when CPU utilization is
low.

HWP isn't compatible with an external governor - it doesn't take
explicit frequency requests.  Therefore a minimal internal governor,
hwp-internal, is also added as a placeholder.

While adding to the xen-command-line.pandoc entry, un-nest verbose from
minfreq.  They are independent.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---

We disable on cpuid_level < 0x16.  cpuid(0x16) is used to get the CPU
frequencies for the APERF/MPERF calculation.  Without it, things would
still work, but the average CPU frequency output would be wrong.

My 8th & 10th gen test systems both report:
(XEN) HWP: 1 notify: 1 act_window: 1 energy_perf: 1 pkg_level: 0 peci: 0
(XEN) HWP: Hardware Duty Cycling (HDC) supported
(XEN) HWP: HW_FEEDBACK not supported

IA32_ENERGY_PERF_BIAS has not been tested.

For cpufreq=xen:hwp, placing the option inside the governor wouldn't
work.  Users would have to select the hwp-internal governor to turn off
hwp support.  hwp-internal isn't usable without hwp, and users wouldn't
be able to select a different governor.  That doesn't matter while hwp
defaults off, but it would if or when hwp defaults to enabled.

We can't use parse_boolean() since it requires a single name=val string
and cpufreq_handle_common_option is provided two strings.  Use
parse_bool() and manually handle no-hwp.

Write to disable the interrupt - the Linux intel_pstate driver does
this.  We don't use the interrupts and aren't ready to handle them, so
just turn them off.  It's unclear whether this is necessary - the SDM
says they default to disabled.

FAST_IA32_HWP_REQUEST was removed in v2.  The check in v1 was wrong,
it's a model specific feature and the CPUID bit is only available
after enabling via the MSR.  Support was untested since I don't have
hardware with the feature.  Writes are expected to be infrequent, so
just leave it out.

---
v2:
Alphabetize headers
Re-work driver registration
name hwp_drv_data anonymous union "hw"
Drop hwp_verbose_cont
style cleanups
Condense hwp_governor switch
hwp_cpufreq_target remove .raw from hwp_req assignment
Use typed-pointer in a few functions
Pass type to xzalloc
Add HWP_ENERGY_PERF_BALANCE/IA32_ENERGY_BIAS_BALANCE defines
Add XEN_HWP_GOVERNOR define for "hwp-internal"
Capitalize CPUID and MSR defines
Change '_' to '-' for energy-perf & act-window
Read-modify-write MSRs updates
Use FAST_IA32_HWP_REQUEST_MSR_ENABLE define
constify pointer in hwp_set_misc_turbo
Add space after non-fallthrough break in governor switch
Add IA32_ENERGY_BIAS_MASK define
Check CPUID_PM_LEAK for energy bias when needed
Fail initialization with curr_req = -1
Fold hwp_read_capabilities into hwp_init_msrs
Add command line cpufreq=xen:hwp
Add command line cpufreq=xen:hdc
Use per_cpu for hwp_drv_data pointers
Move hwp_energy_perf_bias call into hwp_write_request
energy_perf 0 is valid, so hwp_energy_perf_bias cannot be skipped
Ensure we don't generate interrupts
Remove Fast Write of Uncore MSR
Initialize hwp_drv_data from curr_req
Use SPDX line instead of license text in hwp.c

v3:
Add cf_check to cpufreq_gov_hwp_init() - Marek
Print cpuid_level with %#x - Marek
---
 docs/misc/xen-command-line.pandoc         |   8 +-
 xen/arch/x86/acpi/cpufreq/Makefile        |   1 +
 xen/arch/x86/acpi/cpufreq/cpufreq.c       |   5 +-
 xen/arch/x86/acpi/cpufreq/hwp.c           | 506 ++++++++++++++++++++++
 xen/arch/x86/include/asm/cpufeature.h     |  13 +-
 xen/arch/x86/include/asm/msr-index.h      |  13 +
 xen/drivers/cpufreq/cpufreq.c             |  32 ++
 xen/include/acpi/cpufreq/cpufreq.h        |   3 +
 xen/include/acpi/cpufreq/processor_perf.h |   3 +
 xen/include/public/sysctl.h               |   1 +
 10 files changed, 581 insertions(+), 4 deletions(-)
 create mode 100644 xen/arch/x86/acpi/cpufreq/hwp.c

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d33..aaa31f444b 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -499,7 +499,7 @@ If set, force use of the performance counters for oprofile, rather than detectin
 available support.
 
 ### cpufreq
-> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<maxfreq>][,[<minfreq>][,[verbose]]]]} | dom0-kernel`
+> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<hdc>][,[<hwp>]][,[<maxfreq>]][,[<minfreq>]][,[verbose]]]} | dom0-kernel`
 
 > Default: `xen`
 
@@ -510,6 +510,12 @@ choice of `dom0-kernel` is deprecated and not supported by all Dom0 kernels.
 * `<maxfreq>` and `<minfreq>` are integers which represent max and min processor frequencies
   respectively.
 * `verbose` option can be included as a string or also as `verbose=<integer>`
+* `<hwp>` is a boolean to enable Hardware-Controlled Performance States (HWP)
+  on supported Intel hardware.  HWP is a Skylake+ feature which provides better
+  CPU power management.  The default is disabled.
+* `<hdc>` is a boolean to enable Hardware Duty Cycling (HDC).  HDC enables the
+  processor to autonomously force physical package components into idle state.
+  The default is enabled, but the option only applies when `<hwp>` is enabled.
+  enabled.
 
 ### cpuid (x86)
 > `= List of comma separated booleans`
diff --git a/xen/arch/x86/acpi/cpufreq/Makefile b/xen/arch/x86/acpi/cpufreq/Makefile
index f75da9b9ca..db83aa6b14 100644
--- a/xen/arch/x86/acpi/cpufreq/Makefile
+++ b/xen/arch/x86/acpi/cpufreq/Makefile
@@ -1,2 +1,3 @@
 obj-y += cpufreq.o
+obj-y += hwp.o
 obj-y += powernow.o
diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index f1cc473b4f..56816b1aee 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -642,7 +642,10 @@ static int __init cf_check cpufreq_driver_init(void)
         switch ( boot_cpu_data.x86_vendor )
         {
         case X86_VENDOR_INTEL:
-            ret = cpufreq_register_driver(&acpi_cpufreq_driver);
+            if ( hwp_available() )
+                ret = hwp_register_driver();
+            else
+                ret = cpufreq_register_driver(&acpi_cpufreq_driver);
             break;
 
         case X86_VENDOR_AMD:
diff --git a/xen/arch/x86/acpi/cpufreq/hwp.c b/xen/arch/x86/acpi/cpufreq/hwp.c
new file mode 100644
index 0000000000..57f13867d3
--- /dev/null
+++ b/xen/arch/x86/acpi/cpufreq/hwp.c
@@ -0,0 +1,506 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * hwp.c - cpufreq driver for Intel Hardware P-States (HWP)
+ *
+ * Copyright (C) 2021 Jason Andryuk <jandryuk@gmail.com>
+ */
+
+#include <xen/cpumask.h>
+#include <xen/init.h>
+#include <xen/param.h>
+#include <xen/xmalloc.h>
+#include <asm/io.h>
+#include <asm/msr.h>
+#include <acpi/cpufreq/cpufreq.h>
+
+static bool feature_hwp;
+static bool feature_hwp_notification;
+static bool feature_hwp_activity_window;
+static bool feature_hwp_energy_perf;
+static bool feature_hwp_pkg_level_ctl;
+static bool feature_hwp_peci;
+
+static bool feature_hdc;
+
+__initdata bool opt_cpufreq_hwp = false;
+__initdata bool opt_cpufreq_hdc = true;
+
+#define HWP_ENERGY_PERF_BALANCE         0x80
+#define IA32_ENERGY_BIAS_BALANCE        0x7
+#define IA32_ENERGY_BIAS_MAX_POWERSAVE  0xf
+#define IA32_ENERGY_BIAS_MASK           0xf
+
+union hwp_request
+{
+    struct
+    {
+        uint64_t min_perf:8;
+        uint64_t max_perf:8;
+        uint64_t desired:8;
+        uint64_t energy_perf:8;
+        uint64_t activity_window:10;
+        uint64_t package_control:1;
+        uint64_t reserved:16;
+        uint64_t activity_window_valid:1;
+        uint64_t energy_perf_valid:1;
+        uint64_t desired_valid:1;
+        uint64_t max_perf_valid:1;
+        uint64_t min_perf_valid:1;
+    };
+    uint64_t raw;
+};
+
+struct hwp_drv_data
+{
+    union
+    {
+        uint64_t hwp_caps;
+        struct
+        {
+            uint64_t highest:8;
+            uint64_t guaranteed:8;
+            uint64_t most_efficient:8;
+            uint64_t lowest:8;
+            uint64_t reserved:32;
+        } hw;
+    };
+    union hwp_request curr_req;
+    uint16_t activity_window;
+    uint8_t minimum;
+    uint8_t maximum;
+    uint8_t desired;
+    uint8_t energy_perf;
+};
+DEFINE_PER_CPU_READ_MOSTLY(struct hwp_drv_data *, hwp_drv_data);
+
+#define hwp_err(...)     printk(XENLOG_ERR __VA_ARGS__)
+#define hwp_info(...)    printk(XENLOG_INFO __VA_ARGS__)
+#define hwp_verbose(...)                   \
+({                                         \
+    if ( cpufreq_verbose )                 \
+        printk(XENLOG_DEBUG __VA_ARGS__);  \
+})
+
+static int cf_check hwp_governor(struct cpufreq_policy *policy,
+                                 unsigned int event)
+{
+    int ret;
+
+    if ( policy == NULL )
+        return -EINVAL;
+
+    switch ( event )
+    {
+    case CPUFREQ_GOV_START:
+    case CPUFREQ_GOV_LIMITS:
+        ret = 0;
+        break;
+
+    case CPUFREQ_GOV_STOP:
+    default:
+        ret = -EINVAL;
+        break;
+    }
+
+    return ret;
+}
+
+static struct cpufreq_governor hwp_cpufreq_governor =
+{
+    .name          = XEN_HWP_GOVERNOR,
+    .governor      = hwp_governor,
+};
+
+static int __init cf_check cpufreq_gov_hwp_init(void)
+{
+    return cpufreq_register_governor(&hwp_cpufreq_governor);
+}
+__initcall(cpufreq_gov_hwp_init);
+
+bool __init hwp_available(void)
+{
+    unsigned int eax, ecx, unused;
+    bool use_hwp;
+
+    if ( boot_cpu_data.cpuid_level < CPUID_PM_LEAF )
+    {
+        hwp_verbose("cpuid_level (%#x) lacks HWP support\n",
+                    boot_cpu_data.cpuid_level);
+        return false;
+    }
+
+    if ( boot_cpu_data.cpuid_level < 0x16 )
+    {
+        hwp_info("HWP disabled: cpuid_level %#x < 0x16 lacks CPU freq info\n",
+                 boot_cpu_data.cpuid_level);
+        return false;
+    }
+
+    cpuid(CPUID_PM_LEAF, &eax, &unused, &ecx, &unused);
+
+    if ( !(eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE) &&
+         !(ecx & CPUID6_ECX_IA32_ENERGY_PERF_BIAS) )
+    {
+        hwp_verbose("HWP disabled: No energy/performance preference available\n");
+        return false;
+    }
+
+    feature_hwp                 = eax & CPUID6_EAX_HWP;
+    feature_hwp_notification    = eax & CPUID6_EAX_HWP_NOTIFICATION;
+    feature_hwp_activity_window = eax & CPUID6_EAX_HWP_ACTIVITY_WINDOW;
+    feature_hwp_energy_perf     =
+        eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE;
+    feature_hwp_pkg_level_ctl   = eax & CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST;
+    feature_hwp_peci            = eax & CPUID6_EAX_HWP_PECI;
+
+    hwp_verbose("HWP: %d notify: %d act-window: %d energy-perf: %d pkg-level: %d peci: %d\n",
+                feature_hwp, feature_hwp_notification,
+                feature_hwp_activity_window, feature_hwp_energy_perf,
+                feature_hwp_pkg_level_ctl, feature_hwp_peci);
+
+    if ( !feature_hwp )
+        return false;
+
+    feature_hdc = eax & CPUID6_EAX_HDC;
+
+    hwp_verbose("HWP: Hardware Duty Cycling (HDC) %ssupported%s\n",
+                feature_hdc ? "" : "not ",
+                feature_hdc ? opt_cpufreq_hdc ? ", enabled" : ", disabled"
+                            : "");
+
+    feature_hdc = feature_hdc && opt_cpufreq_hdc;
+
+    hwp_verbose("HWP: HW_FEEDBACK %ssupported\n",
+                (eax & CPUID6_EAX_HW_FEEDBACK) ? "" : "not ");
+
+    use_hwp = feature_hwp && opt_cpufreq_hwp;
+    cpufreq_governor_internal = use_hwp;
+
+    if ( use_hwp )
+        hwp_info("Using HWP for cpufreq\n");
+
+    return use_hwp;
+}
+
+static void hdc_set_pkg_hdc_ctl(bool val)
+{
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
+    {
+        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");
+
+        return;
+    }
+
+    if ( val )
+        msr |= IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
+    else
+        msr &= ~IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
+
+    if ( wrmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
+        hwp_err("error wrmsr_safe(MSR_IA32_PKG_HDC_CTL): %016lx\n", msr);
+}
+
+static void hdc_set_pm_ctl1(bool val)
+{
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_IA32_PM_CTL1, msr) )
+    {
+        hwp_err("error rdmsr_safe(MSR_IA32_PM_CTL1)\n");
+
+        return;
+    }
+
+    if ( val )
+        msr |= IA32_PM_CTL1_HDC_ALLOW_BLOCK;
+    else
+        msr &= ~IA32_PM_CTL1_HDC_ALLOW_BLOCK;
+
+    if ( wrmsr_safe(MSR_IA32_PM_CTL1, msr) )
+        hwp_err("error wrmsr_safe(MSR_IA32_PM_CTL1): %016lx\n", msr);
+}
+
+static void hwp_get_cpu_speeds(struct cpufreq_policy *policy)
+{
+    uint32_t base_khz, max_khz, bus_khz, edx;
+
+    cpuid(0x16, &base_khz, &max_khz, &bus_khz, &edx);
+
+    /* aperf/mperf scales base. */
+    policy->cpuinfo.perf_freq = base_khz * 1000;
+    policy->cpuinfo.min_freq = base_khz * 1000;
+    policy->cpuinfo.max_freq = max_khz * 1000;
+    policy->min = base_khz * 1000;
+    policy->max = max_khz * 1000;
+    policy->cur = 0;
+}
+
+static void cf_check hwp_init_msrs(void *info)
+{
+    struct cpufreq_policy *policy = info;
+    struct hwp_drv_data *data = this_cpu(hwp_drv_data);
+    uint64_t val;
+
+    /*
+     * Package level MSR, but we don't have a good idea of packages here, so
+     * just do it every time.
+     */
+    if ( rdmsr_safe(MSR_IA32_PM_ENABLE, val) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_PM_ENABLE)\n", policy->cpu);
+        data->curr_req.raw = -1;
+        return;
+    }
+
+    /* Ensure we don't generate interrupts */
+    if ( feature_hwp_notification )
+        wrmsr_safe(MSR_IA32_HWP_INTERRUPT, 0);
+
+    hwp_verbose("CPU%u: MSR_IA32_PM_ENABLE: %016lx\n", policy->cpu, val);
+    if ( !(val & IA32_PM_ENABLE_HWP_ENABLE) )
+    {
+        val |= IA32_PM_ENABLE_HWP_ENABLE;
+        if ( wrmsr_safe(MSR_IA32_PM_ENABLE, val) )
+        {
+            hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_PM_ENABLE, %lx)\n",
+                    policy->cpu, val);
+            data->curr_req.raw = -1;
+            return;
+        }
+    }
+
+    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
+                policy->cpu);
+        data->curr_req.raw = -1;
+        return;
+    }
+
+    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
+        data->curr_req.raw = -1;
+        return;
+    }
+
+    if ( !feature_hwp_energy_perf ) {
+        if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, val) )
+        {
+            hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
+            data->curr_req.raw = -1;
+
+            return;
+        }
+
+        data->energy_perf = val & IA32_ENERGY_BIAS_MASK;
+    }
+
+    /*
+     * Check for APERF/MPERF support in hardware
+     * also check for boost/turbo support
+     */
+    intel_feature_detect(policy);
+
+    if ( feature_hdc )
+    {
+        hdc_set_pkg_hdc_ctl(true);
+        hdc_set_pm_ctl1(true);
+    }
+
+    hwp_get_cpu_speeds(policy);
+}
+
+static int cf_check hwp_cpufreq_verify(struct cpufreq_policy *policy)
+{
+    struct hwp_drv_data *data = per_cpu(hwp_drv_data, policy->cpu);
+
+    if ( !feature_hwp_energy_perf && data->energy_perf )
+    {
+        if ( data->energy_perf > IA32_ENERGY_BIAS_MAX_POWERSAVE )
+        {
+            hwp_err("energy_perf %d exceeds IA32_ENERGY_PERF_BIAS range 0-15\n",
+                    data->energy_perf);
+
+            return -EINVAL;
+        }
+    }
+
+    if ( !feature_hwp_activity_window && data->activity_window )
+    {
+        hwp_err("HWP activity window not supported\n");
+
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+/* val 0 - highest performance, 15 - maximum energy savings */
+static void hwp_energy_perf_bias(const struct hwp_drv_data *data)
+{
+    uint64_t msr;
+    uint8_t val = data->energy_perf;
+
+    ASSERT(val <= IA32_ENERGY_BIAS_MAX_POWERSAVE);
+
+    if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
+    {
+        hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
+
+        return;
+    }
+
+    msr &= ~IA32_ENERGY_BIAS_MASK;
+    msr |= val;
+
+    if ( wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
+        hwp_err("error wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS): %016lx\n", msr);
+}
+
+static void cf_check hwp_write_request(void *info)
+{
+    struct cpufreq_policy *policy = info;
+    struct hwp_drv_data *data = this_cpu(hwp_drv_data);
+    union hwp_request hwp_req = data->curr_req;
+
+    BUILD_BUG_ON(sizeof(union hwp_request) != sizeof(uint64_t));
+    if ( wrmsr_safe(MSR_IA32_HWP_REQUEST, hwp_req.raw) )
+    {
+        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_HWP_REQUEST, %lx)\n",
+                policy->cpu, hwp_req.raw);
+        rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw);
+    }
+
+    if ( !feature_hwp_energy_perf )
+        hwp_energy_perf_bias(data);
+
+}
+
+static int cf_check hwp_cpufreq_target(struct cpufreq_policy *policy,
+                                       unsigned int target_freq,
+                                       unsigned int relation)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
+    /* Zero everything to ensure reserved bits are zero... */
+    union hwp_request hwp_req = { .raw = 0 };
+
+    /* .. and update from there */
+    hwp_req.min_perf = data->minimum;
+    hwp_req.max_perf = data->maximum;
+    hwp_req.desired = data->desired;
+    if ( feature_hwp_energy_perf )
+        hwp_req.energy_perf = data->energy_perf;
+    if ( feature_hwp_activity_window )
+        hwp_req.activity_window = data->activity_window;
+
+    if ( hwp_req.raw == data->curr_req.raw )
+        return 0;
+
+    data->curr_req = hwp_req;
+
+    hwp_verbose("CPU%u: wrmsr HWP_REQUEST %016lx\n", cpu, hwp_req.raw);
+    on_selected_cpus(cpumask_of(cpu), hwp_write_request, policy, 1);
+
+    return 0;
+}
+
+static int cf_check hwp_cpufreq_cpu_init(struct cpufreq_policy *policy)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data;
+
+    data = xzalloc(struct hwp_drv_data);
+    if ( !data )
+        return -ENOMEM;
+
+    if ( cpufreq_opt_governor )
+        printk(XENLOG_WARNING
+               "HWP: governor \"%s\" is incompatible with hwp. Using default \"%s\"\n",
+               cpufreq_opt_governor->name, hwp_cpufreq_governor.name);
+    policy->governor = &hwp_cpufreq_governor;
+
+    per_cpu(hwp_drv_data, cpu) = data;
+
+    on_selected_cpus(cpumask_of(cpu), hwp_init_msrs, policy, 1);
+
+    if ( data->curr_req.raw == -1 )
+    {
+        hwp_err("CPU%u: Could not initialize HWP properly\n", cpu);
+        XFREE(per_cpu(hwp_drv_data, cpu));
+        return -ENODEV;
+    }
+
+    data->minimum = data->curr_req.min_perf;
+    data->maximum = data->curr_req.max_perf;
+    data->desired = data->curr_req.desired;
+    /* the !feature_hwp_energy_perf case was handled in hwp_init_msrs(). */
+    if ( feature_hwp_energy_perf )
+        data->energy_perf = data->curr_req.energy_perf;
+
+    hwp_verbose("CPU%u: IA32_HWP_CAPABILITIES: %016lx\n", cpu, data->hwp_caps);
+
+    hwp_verbose("CPU%u: rdmsr HWP_REQUEST %016lx\n", cpu, data->curr_req.raw);
+
+    return 0;
+}
+
+static int cf_check hwp_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+    XFREE(per_cpu(hwp_drv_data, policy->cpu));
+
+    return 0;
+}
+
+/*
+ * The SDM reads like turbo should be disabled with MSR_IA32_PERF_CTL and
+ * PERF_CTL_TURBO_DISENGAGE, but that does not seem to actually work, at least
+ * with my HWP testing.  MSR_IA32_MISC_ENABLE and MISC_ENABLE_TURBO_DISENGAGE
+ * is what Linux uses and seems to work.
+ */
+static void cf_check hwp_set_misc_turbo(void *info)
+{
+    const struct cpufreq_policy *policy = info;
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_IA32_MISC_ENABLE, msr) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_MISC_ENABLE)\n", policy->cpu);
+
+        return;
+    }
+
+    if ( policy->turbo == CPUFREQ_TURBO_ENABLED )
+        msr &= ~MSR_IA32_MISC_ENABLE_TURBO_DISENGAGE;
+    else
+        msr |= MSR_IA32_MISC_ENABLE_TURBO_DISENGAGE;
+
+    if ( wrmsr_safe(MSR_IA32_MISC_ENABLE, msr) )
+        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_MISC_ENABLE): %016lx\n",
+                policy->cpu, msr);
+}
+
+static int cf_check hwp_cpufreq_update(int cpuid, struct cpufreq_policy *policy)
+{
+    on_selected_cpus(cpumask_of(cpuid), hwp_set_misc_turbo, policy, 1);
+
+    return 0;
+}
+
+static const struct cpufreq_driver __initconstrel hwp_cpufreq_driver =
+{
+    .name   = "hwp-cpufreq",
+    .verify = hwp_cpufreq_verify,
+    .target = hwp_cpufreq_target,
+    .init   = hwp_cpufreq_cpu_init,
+    .exit   = hwp_cpufreq_cpu_exit,
+    .update = hwp_cpufreq_update,
+};
+
+int __init hwp_register_driver(void)
+{
+    return cpufreq_register_driver(&hwp_cpufreq_driver);
+}
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 4140ec0938..f2ff1d5fde 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -46,8 +46,17 @@ extern struct cpuinfo_x86 boot_cpu_data;
 #define cpu_has(c, bit)		test_bit(bit, (c)->x86_capability)
 #define boot_cpu_has(bit)	test_bit(bit, boot_cpu_data.x86_capability)
 
-#define CPUID_PM_LEAF                    6
-#define CPUID6_ECX_APERFMPERF_CAPABILITY 0x1
+#define CPUID_PM_LEAF                                6
+#define CPUID6_EAX_HWP                               (_AC(1, U) <<  7)
+#define CPUID6_EAX_HWP_NOTIFICATION                  (_AC(1, U) <<  8)
+#define CPUID6_EAX_HWP_ACTIVITY_WINDOW               (_AC(1, U) <<  9)
+#define CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE (_AC(1, U) << 10)
+#define CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST         (_AC(1, U) << 11)
+#define CPUID6_EAX_HDC                               (_AC(1, U) << 13)
+#define CPUID6_EAX_HWP_PECI                          (_AC(1, U) << 16)
+#define CPUID6_EAX_HW_FEEDBACK                       (_AC(1, U) << 19)
+#define CPUID6_ECX_APERFMPERF_CAPABILITY             0x1
+#define CPUID6_ECX_IA32_ENERGY_PERF_BIAS             0x8
 
 /* CPUID level 0x00000001.edx */
 #define cpu_has_fpu             1
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index fa771ed0b5..a2a22339e4 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -151,6 +151,13 @@
 
 #define MSR_PKRS                            0x000006e1
 
+#define MSR_IA32_PM_ENABLE                  0x00000770
+#define  IA32_PM_ENABLE_HWP_ENABLE          (_AC(1, ULL) <<  0)
+
+#define MSR_IA32_HWP_CAPABILITIES           0x00000771
+#define MSR_IA32_HWP_INTERRUPT              0x00000773
+#define MSR_IA32_HWP_REQUEST                0x00000774
+
 #define MSR_X2APIC_FIRST                    0x00000800
 #define MSR_X2APIC_LAST                     0x000008ff
 
@@ -165,6 +172,11 @@
 #define  PASID_PASID_MASK                   0x000fffff
 #define  PASID_VALID                        (_AC(1, ULL) << 31)
 
+#define MSR_IA32_PKG_HDC_CTL                0x00000db0
+#define  IA32_PKG_HDC_CTL_HDC_PKG_ENABLE    (_AC(1, ULL) <<  0)
+#define MSR_IA32_PM_CTL1                    0x00000db1
+#define  IA32_PM_CTL1_HDC_ALLOW_BLOCK       (_AC(1, ULL) <<  0)
+
 #define MSR_UARCH_MISC_CTRL                 0x00001b01
 #define  UARCH_CTRL_DOITM                   (_AC(1, ULL) <<  0)
 
@@ -500,6 +512,7 @@
 #define MSR_IA32_MISC_ENABLE_LIMIT_CPUID  (1<<22)
 #define MSR_IA32_MISC_ENABLE_XTPR_DISABLE (1<<23)
 #define MSR_IA32_MISC_ENABLE_XD_DISABLE	(1ULL << 34)
+#define MSR_IA32_MISC_ENABLE_TURBO_DISENGAGE (1ULL << 38)
 
 #define MSR_IA32_TSC_DEADLINE		0x000006E0
 #define MSR_IA32_ENERGY_PERF_BIAS	0x000001b0
diff --git a/xen/drivers/cpufreq/cpufreq.c b/xen/drivers/cpufreq/cpufreq.c
index 7bd81680da..9470eb7230 100644
--- a/xen/drivers/cpufreq/cpufreq.c
+++ b/xen/drivers/cpufreq/cpufreq.c
@@ -565,6 +565,38 @@ static void cpufreq_cmdline_common_para(struct cpufreq_policy *new_policy)
 
 static int __init cpufreq_handle_common_option(const char *name, const char *val)
 {
+    if (!strcmp(name, "hdc")) {
+        if (val) {
+            int ret = parse_bool(val, NULL);
+            if (ret != -1) {
+                opt_cpufreq_hdc = ret;
+                return 1;
+            }
+        } else {
+            opt_cpufreq_hdc = true;
+            return 1;
+        }
+    } else if (!strcmp(name, "no-hdc")) {
+        opt_cpufreq_hdc = false;
+        return 1;
+    }
+
+    if (!strcmp(name, "hwp")) {
+        if (val) {
+            int ret = parse_bool(val, NULL);
+            if (ret != -1) {
+                opt_cpufreq_hwp = ret;
+                return 1;
+            }
+        } else {
+            opt_cpufreq_hwp = true;
+            return 1;
+        }
+    } else if (!strcmp(name, "no-hwp")) {
+        opt_cpufreq_hwp = false;
+        return 1;
+    }
+
     if (!strcmp(name, "maxfreq") && val) {
         usr_max_freq = simple_strtoul(val, NULL, 0);
         return 1;
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 0f334d2a43..29a712a4f1 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -245,4 +245,7 @@ void cpufreq_dbs_timer_resume(void);
 
 void intel_feature_detect(struct cpufreq_policy *policy);
 
+extern bool opt_cpufreq_hwp;
+extern bool opt_cpufreq_hdc;
+
 #endif /* __XEN_CPUFREQ_PM_H__ */
diff --git a/xen/include/acpi/cpufreq/processor_perf.h b/xen/include/acpi/cpufreq/processor_perf.h
index d8a1ba68a6..b751ca4937 100644
--- a/xen/include/acpi/cpufreq/processor_perf.h
+++ b/xen/include/acpi/cpufreq/processor_perf.h
@@ -7,6 +7,9 @@
 
 #define XEN_PX_INIT 0x80000000
 
+bool hwp_available(void);
+int hwp_register_driver(void);
+
 int powernow_cpufreq_init(void);
 unsigned int powernow_register_driver(void);
 unsigned int get_measured_perf(unsigned int cpu, unsigned int flag);
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd0..b448f13b75 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -292,6 +292,7 @@ struct xen_ondemand {
     uint32_t up_threshold;
 };
 
+#define XEN_HWP_GOVERNOR "hwp-internal"
 /*
  * cpufreq para name of this structure named
  * same as sysfs file name of native linux
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528107.820907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEl-0002dr-Tn; Mon, 01 May 2023 19:31:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528107.820907; Mon, 01 May 2023 19:31:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEl-0002dg-OU; Mon, 01 May 2023 19:31:03 +0000
Received: by outflank-mailman (input) for mailman id 528107;
 Mon, 01 May 2023 19:31:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEk-000149-UP
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:02 +0000
Received: from mail-qt1-x833.google.com (mail-qt1-x833.google.com
 [2607:f8b0:4864:20::833])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b44a29e3-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:31:01 +0200 (CEST)
Received: by mail-qt1-x833.google.com with SMTP id
 d75a77b69052e-3ef64d8b2b4so13236131cf.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:01 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.30.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:30:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b44a29e3-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969460; x=1685561460;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=hA8U/InM9EEJxsTtEJ4pQE8Dle8MV2Sm7m6DjaGLFBw=;
        b=cqYdElAgv017E/RPw2IfPj2RWa7HBP5wPUoivkXxfKl7Wcrhcl0iwOuzZphMkXGrmq
         wXCkHA8JltP+awApYh8wW61giM/5P2zyNIoHj3q9YmwRfaTUXFVZakg8TrC8FV/WoDGZ
         mo9IaeB6VUPChzAspWP9VcAsfV+TlO8Y/3cNML4q+Pu36JsDdpapVSvhvzxmzur3X5kb
         iYsf3odUJqBk7a9Ou/Bf4vu/nG3w22mo0+ol5vCDLc5dYSOaknjfd3TwNd4KjydszEDR
         jyy1FovHqAsAjtuqX9giS6oKxRxgZXnd68neZX3LzXl3USX2N/zFujA6OIMOGGjtAGZz
         xCkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969460; x=1685561460;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=hA8U/InM9EEJxsTtEJ4pQE8Dle8MV2Sm7m6DjaGLFBw=;
        b=lymf5mbhwTrLPRooCSX7O3oPwZlGnMHiAdCT5aleAen8xbkQ3ahlWEqgWp2HASkWKZ
         Tf6aTDH/6VnBlmZG1Oum6xHimWPpMW9AulbOs+TJXMfpxhg8Rps/oM6HPJdJGmoOLd5W
         zJ1iXp6Wg2/4P9chlZD0SfusTNpqEDDLhRbBmnrUEFQML//lMBmgrfnmRBNlNCgz8A7J
         10htRuiX5GmIJMtq9fDJinNlyt69NjVjb4GOsGOHKfuH4z5KPvZfvA0iRIy9v4F9oZ57
         wxlxmvF/1yjJL192Hmt0oG+DXa+q48XuDm1tVUerdQ9A0Dt6Lxxo6D6HJBzX409YbeO7
         zIDw==
X-Gm-Message-State: AC+VfDxn7qAj80dvYOzVdxfsbUfBEGwBYyBRr4iwtY5XgUxrH4NivQVW
	MVKSSioemicMgE0WJBn/SdZg6aIbCPA=
X-Google-Smtp-Source: ACHHUZ56JSvdUTnYPJIqqSMnTeyqiLw1eNi/3MYuwdvmwUt+ZTaf7S1mFfukjjdCf4lp2bgDkUEqQQ==
X-Received: by 2002:ac8:5f54:0:b0:3e4:3f79:9d7b with SMTP id y20-20020ac85f54000000b003e43f799d7bmr23571957qta.55.1682969459739;
        Mon, 01 May 2023 12:30:59 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 05/14 RESEND] xenpm: Change get-cpufreq-para output for internal
Date: Mon,  1 May 2023 15:30:25 -0400
Message-Id: <20230501193034.88575-6-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When using HWP, some of the returned data is not applicable, so omit it to
avoid confusing the user.  Switch the cpuinfo line to printing the base and
turbo frequencies, since those are what is relevant to HWP.  Similarly, stop
printing the scaling frequencies, since those do not apply.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
v2:
Use full governor name XEN_HWP_GOVERNOR to change output
Style fixes
---
 tools/misc/xenpm.c | 41 +++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index 1bb6187e56..ce8d7644d0 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -711,6 +711,7 @@ void start_gather_func(int argc, char *argv[])
 /* print out parameters about cpu frequency */
 static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
 {
+    bool internal = strstr(p_cpufreq->scaling_governor, XEN_HWP_GOVERNOR);
     int i;
 
     printf("cpu id               : %d\n", cpuid);
@@ -720,10 +721,15 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
         printf(" %d", p_cpufreq->affected_cpus[i]);
     printf("\n");
 
-    printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
-           p_cpufreq->cpuinfo_max_freq,
-           p_cpufreq->cpuinfo_min_freq,
-           p_cpufreq->cpuinfo_cur_freq);
+    if ( internal )
+        printf("cpuinfo frequency    : base [%u] turbo [%u]\n",
+               p_cpufreq->cpuinfo_min_freq,
+               p_cpufreq->cpuinfo_max_freq);
+    else
+        printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
+               p_cpufreq->cpuinfo_max_freq,
+               p_cpufreq->cpuinfo_min_freq,
+               p_cpufreq->cpuinfo_cur_freq);
 
     printf("scaling_driver       : %s\n", p_cpufreq->scaling_driver);
 
@@ -750,19 +756,22 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
                p_cpufreq->u.ondemand.up_threshold);
     }
 
-    printf("scaling_avail_freq   :");
-    for ( i = 0; i < p_cpufreq->freq_num; i++ )
-        if ( p_cpufreq->scaling_available_frequencies[i] ==
-             p_cpufreq->scaling_cur_freq )
-            printf(" *%d", p_cpufreq->scaling_available_frequencies[i]);
-        else
-            printf(" %d", p_cpufreq->scaling_available_frequencies[i]);
-    printf("\n");
+    if ( !internal )
+    {
+        printf("scaling_avail_freq   :");
+        for ( i = 0; i < p_cpufreq->freq_num; i++ )
+            if ( p_cpufreq->scaling_available_frequencies[i] ==
+                 p_cpufreq->scaling_cur_freq )
+                printf(" *%d", p_cpufreq->scaling_available_frequencies[i]);
+            else
+                printf(" %d", p_cpufreq->scaling_available_frequencies[i]);
+        printf("\n");
 
-    printf("scaling frequency    : max [%u] min [%u] cur [%u]\n",
-           p_cpufreq->scaling_max_freq,
-           p_cpufreq->scaling_min_freq,
-           p_cpufreq->scaling_cur_freq);
+        printf("scaling frequency    : max [%u] min [%u] cur [%u]\n",
+               p_cpufreq->scaling_max_freq,
+               p_cpufreq->scaling_min_freq,
+               p_cpufreq->scaling_cur_freq);
+    }
 
     printf("turbo mode           : %s\n",
            p_cpufreq->turbo_enabled ? "enabled" : "disabled or n/a");
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528109.820917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEp-00036m-5I; Mon, 01 May 2023 19:31:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528109.820917; Mon, 01 May 2023 19:31:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEp-00036X-1N; Mon, 01 May 2023 19:31:07 +0000
Received: by outflank-mailman (input) for mailman id 528109;
 Mon, 01 May 2023 19:31:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEo-000149-3A
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:06 +0000
Received: from mail-qt1-x82f.google.com (mail-qt1-x82f.google.com
 [2607:f8b0:4864:20::82f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b61e0f36-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:31:04 +0200 (CEST)
Received: by mail-qt1-x82f.google.com with SMTP id
 d75a77b69052e-3ef38bea86aso13223701cf.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:04 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.31.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:31:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b61e0f36-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969463; x=1685561463;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AClKrOg8M97t4+1HhqAtzZODxTQ5H9SUOQWpW1JlGKM=;
        b=YaGuiAS9skGtNqMMYIs2izeJbZk9tjGRToKR7izraEPPKc5z3vDnPbYPZTMX9SPIu2
         9hEoyC+hQrBI0Au9ULyOsD0lzBzqhfpe3fmOUvtwsKoqOyiyRWgcObksuugY0iJ/6oJo
         Gr2bq0+Qr7BpDyDnjjtIaOqmt+9IImpmB/tmGF3TWsDBKlNezlhRfW5UV6vIAOF420WE
         K+ll91Lxlyg8ArMyp5vI/yJi3DnXpPzdn3IEmd4Xpp6BBLZCDpX5lPNO2kh1XAl2iheK
         /NBX+Vgt+eOocF/A2c26q8nSO5AerNn+zRDoWvx8Ptzxx08z1cJ6vk8JpTjfJXFP3xLA
         1Brg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969463; x=1685561463;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=AClKrOg8M97t4+1HhqAtzZODxTQ5H9SUOQWpW1JlGKM=;
        b=JQzUmKQCSJEEMvwUz/pBRyC3rWy0SooUxZfQOsmzq78sB0DNGnstGgUcK4ZvhKpE6v
         RwNwYzaRcJDHwYBxp7ZaCCMMVzxdEGNAi5Fksz8pjbRFcjux9pdkpwaUNZv0qDyh/vOq
         jcnhi5Oxjhd5mvtLG5WCwZg6TLXuWzwHRwLK91kLiZ/dvzQIhVH82+AmVMGOASXdyDim
         4F6WWOZke0sutPS/ZqgXLGbzwcmv3ufBWxZu54iIa+F1AZ0Q+eaxZ0XnXM4lTYIFL/X1
         VzCS+Rr4YUWa4cI80KZzmelrAU4UU9FBxNtBFRpp7uu1S3RtNuSuygaZHTABq51vSe9/
         52xQ==
X-Gm-Message-State: AC+VfDy0j+9cdFFfsGn6qypEnAf4IS7d6+rCvHmk8VQTPyW3QOk1M0B6
	2K2LI5Ium/ZBSfxOelbyczUIVxXCLiA=
X-Google-Smtp-Source: ACHHUZ5cQ2cgJdSAh0zgGf+/+ZF/Ml22GYGgHkn6nhGuZflecOxzamQB3CxZ4cxbFlh83Usj9WeWAw==
X-Received: by 2002:a05:622a:199d:b0:3ef:3bad:6d12 with SMTP id u29-20020a05622a199d00b003ef3bad6d12mr20428194qtc.5.1682969462760;
        Mon, 01 May 2023 12:31:02 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 06/14 RESEND] xen/x86: Tweak PDC bits when using HWP
Date: Mon,  1 May 2023 15:30:26 -0400
Message-Id: <20230501193034.88575-7-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Qubes testing of HWP support produced a report of a laptop, a Thinkpad X1
Carbon Gen 4 with a Skylake processor, locking up during boot when HWP
is enabled.  A user found a kernel bug that appears to be the same issue:
https://bugzilla.kernel.org/show_bug.cgi?id=110941.

That bug was fixed by Linux commit a21211672c9a ("ACPI / processor:
Request native thermal interrupt handling via _OSC").  The tl;dr is SMM
crashes when it receives thermal interrupts, so Linux calls the ACPI
_OSC method to take over interrupt handling.

The Linux fix looks at the CPU features to decide whether or not to call
_OSC with bit 12 set to take over native interrupt handling.  Xen needs
some way to communicate HWP to Dom0 for making an equivalent call.

Xen exposes modified PDC bits via the platform_op set_pminfo hypercall.
Expand that to set bit 12 when HWP is present and in use.

Any generated interrupt would be handled by Xen's thermal driver, which
clears the status.

Bit 12 isn't named in the Linux header and is open-coded at Linux's call
site.

This will need a corresponding Linux patch to pick up and apply the PDC
bits.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
New in v3

 xen/arch/x86/acpi/cpufreq/hwp.c           | 16 +++++++++++-----
 xen/arch/x86/acpi/lib.c                   |  5 +++++
 xen/arch/x86/cpu/mcheck/mce_intel.c       |  6 ++++++
 xen/arch/x86/include/asm/msr-index.h      |  1 +
 xen/include/acpi/cpufreq/processor_perf.h |  1 +
 xen/include/acpi/pdc_intel.h              |  1 +
 6 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/acpi/cpufreq/hwp.c b/xen/arch/x86/acpi/cpufreq/hwp.c
index 57f13867d3..f84abe1386 100644
--- a/xen/arch/x86/acpi/cpufreq/hwp.c
+++ b/xen/arch/x86/acpi/cpufreq/hwp.c
@@ -13,6 +13,8 @@
 #include <asm/msr.h>
 #include <acpi/cpufreq/cpufreq.h>
 
+static bool hwp_in_use;
+
 static bool feature_hwp;
 static bool feature_hwp_notification;
 static bool feature_hwp_activity_window;
@@ -117,10 +119,14 @@ static int __init cf_check cpufreq_gov_hwp_init(void)
 }
 __initcall(cpufreq_gov_hwp_init);
 
+bool hwp_active(void)
+{
+    return hwp_in_use;
+}
+
 bool __init hwp_available(void)
 {
     unsigned int eax, ecx, unused;
-    bool use_hwp;
 
     if ( boot_cpu_data.cpuid_level < CPUID_PM_LEAF )
     {
@@ -173,13 +179,13 @@ bool __init hwp_available(void)
     hwp_verbose("HWP: HW_FEEDBACK %ssupported\n",
                 (eax & CPUID6_EAX_HW_FEEDBACK) ? "" : "not ");
 
-    use_hwp = feature_hwp && opt_cpufreq_hwp;
-    cpufreq_governor_internal = use_hwp;
+    hwp_in_use = feature_hwp && opt_cpufreq_hwp;
+    cpufreq_governor_internal = hwp_in_use;
 
-    if ( use_hwp )
+    if ( hwp_in_use )
         hwp_info("Using HWP for cpufreq\n");
 
-    return use_hwp;
+    return hwp_in_use;
 }
 
 static void hdc_set_pkg_hdc_ctl(bool val)
diff --git a/xen/arch/x86/acpi/lib.c b/xen/arch/x86/acpi/lib.c
index 43831b92d1..20d6115ba9 100644
--- a/xen/arch/x86/acpi/lib.c
+++ b/xen/arch/x86/acpi/lib.c
@@ -26,6 +26,8 @@
 #include <asm/fixmap.h>
 #include <asm/mwait.h>
 
+#include <acpi/cpufreq/processor_perf.h>
+
 u32 __read_mostly acpi_smi_cmd;
 u8 __read_mostly acpi_enable_value;
 u8 __read_mostly acpi_disable_value;
@@ -140,5 +142,8 @@ int arch_acpi_set_pdc_bits(u32 acpi_id, u32 *pdc, u32 mask)
 	    !(ecx & CPUID5_ECX_INTERRUPT_BREAK))
 		pdc[2] &= ~(ACPI_PDC_C_C1_FFH | ACPI_PDC_C_C2C3_FFH);
 
+	if (hwp_active())
+		pdc[2] |= ACPI_PDC_CPPC_NTV_INT;
+
 	return 0;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index 2f23f02923..d430342924 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -15,6 +15,9 @@
 #include <asm/p2m.h>
 #include <asm/mce.h>
 #include <asm/apic.h>
+
+#include <acpi/cpufreq/processor_perf.h>
+
 #include "mce.h"
 #include "x86_mca.h"
 #include "barrier.h"
@@ -64,6 +67,9 @@ static void cf_check intel_thermal_interrupt(struct cpu_user_regs *regs)
 
     ack_APIC_irq();
 
+    if ( hwp_active() )
+        wrmsr_safe(MSR_IA32_HWP_STATUS, 0);
+
     if ( NOW() < per_cpu(next, cpu) )
         return;
 
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index a2a22339e4..f5269022da 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -157,6 +157,7 @@
 #define MSR_IA32_HWP_CAPABILITIES           0x00000771
 #define MSR_IA32_HWP_INTERRUPT              0x00000773
 #define MSR_IA32_HWP_REQUEST                0x00000774
+#define MSR_IA32_HWP_STATUS                 0x00000777
 
 #define MSR_X2APIC_FIRST                    0x00000800
 #define MSR_X2APIC_LAST                     0x000008ff
diff --git a/xen/include/acpi/cpufreq/processor_perf.h b/xen/include/acpi/cpufreq/processor_perf.h
index b751ca4937..dd8ec36ba7 100644
--- a/xen/include/acpi/cpufreq/processor_perf.h
+++ b/xen/include/acpi/cpufreq/processor_perf.h
@@ -8,6 +8,7 @@
 #define XEN_PX_INIT 0x80000000
 
 bool hwp_available(void);
+bool hwp_active(void);
 int hwp_register_driver(void);
 
 int powernow_cpufreq_init(void);
diff --git a/xen/include/acpi/pdc_intel.h b/xen/include/acpi/pdc_intel.h
index 4fb719d6f5..e8332898fc 100644
--- a/xen/include/acpi/pdc_intel.h
+++ b/xen/include/acpi/pdc_intel.h
@@ -17,6 +17,7 @@
 #define ACPI_PDC_C_C1_FFH		(0x0100)
 #define ACPI_PDC_C_C2C3_FFH		(0x0200)
 #define ACPI_PDC_SMP_P_HWCOORD		(0x0800)
+#define ACPI_PDC_CPPC_NTV_INT		(0x1000)
 
 #define ACPI_PDC_EST_CAPABILITY_SMP	(ACPI_PDC_SMP_C1PT | \
 					 ACPI_PDC_C_C1_HALT | \
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528111.820927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEs-0003ZG-Fd; Mon, 01 May 2023 19:31:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528111.820927; Mon, 01 May 2023 19:31:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEs-0003Z5-C5; Mon, 01 May 2023 19:31:10 +0000
Received: by outflank-mailman (input) for mailman id 528111;
 Mon, 01 May 2023 19:31:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEq-000149-Vx
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:08 +0000
Received: from mail-qk1-x734.google.com (mail-qk1-x734.google.com
 [2607:f8b0:4864:20::734])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7cb34ef-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:31:07 +0200 (CEST)
Received: by mail-qk1-x734.google.com with SMTP id
 af79cd13be357-74e12e93384so121163085a.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:07 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.31.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:31:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7cb34ef-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969465; x=1685561465;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=dJbA+BcuMsWkD5WxDhYLUpTvdE3qh/C4Qmxcz85bGeo=;
        b=nPWlTQA1AjIKcrvqph4r2lWQaecwmL0x3jr52qhY18Eh/tj6AwisM64Uf5AQI6tDP6
         +SHB6kT3hgPFzYFuPSiZdnezeuvRqRWklk6JbJwPnYXYJbiqijAAN73gfC6F2KQ72EM6
         S/WRuCg+Qj79DNx629ZznAmgIMCLFtAaTKnc/saoTH6BdXF/uSDpTQLtAL19IYtku7Qr
         TfzVmWYYvO2v2vCS6L9GfUYjTluKmU+tKekrWsmI0v37A1B6cVvfI5NwNK4xxvxBpOt8
         BlRjgQ7IhaWZBBqlScTZy3R7VDxW6QfZxKvbgKIcen7kq3fhlMEwWUIHssUGtO3tUD4/
         FUSA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969465; x=1685561465;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=dJbA+BcuMsWkD5WxDhYLUpTvdE3qh/C4Qmxcz85bGeo=;
        b=cQx7lRyvtUz+bVqmhLqdnDXUM6eiKzVugVDf1Hcc8mqD0sJKGxtISdyegXLIGi9LT/
         egKYG4eKcRb0BNrBPoNBmvPt7OqWq+yTzbFoq5G90dnCERGupYN8tYhXGGdIe0CzBJ0T
         5UTEWvoRVmvg4Ii3ouaoeK5E8s8WIrgvrMEwU95iHdBjPeEN5mf6ckd+HzDmoHJvR/Mn
         gGc5aaBtbytUqEFedN3sg+HPTZGmLPgkSvDslA2ajeLiQvg+O3BEO4q7FE8w7cex9xrv
         JDilodbUH/M7JwiQyPum3O7tntvxqFyRtl8IfIAX+xh415bviSXs8JqaglpJGtVKhVBH
         LUsg==
X-Gm-Message-State: AC+VfDyVNDBtQv17JRHMam2wb/9ykLhrIk1dgkggIoSVtv8ezS7te8hG
	a5PZBod1xAPZlDIQk4kzU+nvnpAjTGA=
X-Google-Smtp-Source: ACHHUZ4yIAxtLHY8htulS6SPC+QNZq8i1LbhxAtAihfEzMKjPNr6hbZgwyHOFtp1cGuiuvKuMhWGew==
X-Received: by 2002:ac8:5e10:0:b0:3ef:34e1:d380 with SMTP id h16-20020ac85e10000000b003ef34e1d380mr24529806qtx.11.1682969465465;
        Mon, 01 May 2023 12:31:05 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to userspace
Date: Mon,  1 May 2023 15:30:27 -0400
Message-Id: <20230501193034.88575-8-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Extend xen_get_cpufreq_para to return hwp parameters.  These match the
hardware rather closely.

The features bitmask is needed to indicate which fields are supported by
the actual hardware.

The use of uint8_t parameters matches the hardware size.  uint32_t
entries would grow the sysctl struct past the build assertion in
setup.c.  The uint8_t ranges are supported across multiple generations,
so hopefully they won't change.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
v2:
Style fixes
Don't bump XEN_SYSCTL_INTERFACE_VERSION
Drop cpufreq.h comment divider
Expand xen_hwp_para comment
Add HWP activity window mantissa/exponent defines
Handle union rename
Add const to get_hwp_para
Remove hw_ prefix from xen_hwp_para members
Use XEN_HWP_GOVERNOR
Use per_cpu for hwp_drv_data
---
 xen/arch/x86/acpi/cpufreq/hwp.c    | 25 +++++++++++++++++++++++++
 xen/drivers/acpi/pmstat.c          |  5 +++++
 xen/include/acpi/cpufreq/cpufreq.h |  2 ++
 xen/include/public/sysctl.h        | 26 ++++++++++++++++++++++++++
 4 files changed, 58 insertions(+)

diff --git a/xen/arch/x86/acpi/cpufreq/hwp.c b/xen/arch/x86/acpi/cpufreq/hwp.c
index f84abe1386..cb52918799 100644
--- a/xen/arch/x86/acpi/cpufreq/hwp.c
+++ b/xen/arch/x86/acpi/cpufreq/hwp.c
@@ -506,6 +506,31 @@ static const struct cpufreq_driver __initconstrel hwp_cpufreq_driver =
     .update = hwp_cpufreq_update,
 };
 
+int get_hwp_para(const struct cpufreq_policy *policy,
+                 struct xen_hwp_para *hwp_para)
+{
+    unsigned int cpu = policy->cpu;
+    const struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
+
+    if ( data == NULL )
+        return -EINVAL;
+
+    hwp_para->features        =
+        (feature_hwp_activity_window ? XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  : 0) |
+        (feature_hwp_energy_perf     ? XEN_SYSCTL_HWP_FEAT_ENERGY_PERF : 0);
+    hwp_para->lowest          = data->hw.lowest;
+    hwp_para->most_efficient  = data->hw.most_efficient;
+    hwp_para->guaranteed      = data->hw.guaranteed;
+    hwp_para->highest         = data->hw.highest;
+    hwp_para->minimum         = data->minimum;
+    hwp_para->maximum         = data->maximum;
+    hwp_para->energy_perf     = data->energy_perf;
+    hwp_para->activity_window = data->activity_window;
+    hwp_para->desired         = data->desired;
+
+    return 0;
+}
+
 int __init hwp_register_driver(void)
 {
     return cpufreq_register_driver(&hwp_cpufreq_driver);
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 1bae635101..67fd9dabd4 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -290,6 +290,11 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
             &op->u.get_para.u.ondemand.sampling_rate,
             &op->u.get_para.u.ondemand.up_threshold);
     }
+
+    if ( !strncasecmp(op->u.get_para.scaling_governor, XEN_HWP_GOVERNOR,
+                      CPUFREQ_NAME_LEN) )
+        ret = get_hwp_para(policy, &op->u.get_para.u.hwp_para);
+
     op->u.get_para.turbo_enabled = cpufreq_get_turbo_status(op->cpuid);
 
     return ret;
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 29a712a4f1..92b4c7e79c 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -247,5 +247,7 @@ void intel_feature_detect(struct cpufreq_policy *policy);
 
 extern bool opt_cpufreq_hwp;
 extern bool opt_cpufreq_hdc;
+int get_hwp_para(const struct cpufreq_policy *policy,
+                 struct xen_hwp_para *hwp_para);
 
 #endif /* __XEN_CPUFREQ_PM_H__ */
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index b448f13b75..bf7e6594a7 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -292,6 +292,31 @@ struct xen_ondemand {
     uint32_t up_threshold;
 };
 
+struct xen_hwp_para {
+    /*
+     * bits 6:0   - 7bit mantissa
+     * bits 9:7   - 3bit base-10 exponent
+     * bits 15:10 - Unused - must be 0
+     */
+#define HWP_ACT_WINDOW_MANTISSA_MASK  0x7f
+#define HWP_ACT_WINDOW_EXPONENT_MASK  0x7
+#define HWP_ACT_WINDOW_EXPONENT_SHIFT 7
+    uint16_t activity_window;
+    /* energy_perf range is 0-255 if the feature bit is set; otherwise 0-15 */
+#define XEN_SYSCTL_HWP_FEAT_ENERGY_PERF (1 << 0)
+    /* activity_window supported if 1 */
+#define XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  (1 << 1)
+    uint8_t features; /* bit flags for features */
+    uint8_t lowest;
+    uint8_t most_efficient;
+    uint8_t guaranteed;
+    uint8_t highest;
+    uint8_t minimum;
+    uint8_t maximum;
+    uint8_t desired;
+    uint8_t energy_perf;
+};
+
 #define XEN_HWP_GOVERNOR "hwp-internal"
 /*
  * cpufreq para name of this structure named
@@ -324,6 +349,7 @@ struct xen_get_cpufreq_para {
     union {
         struct  xen_userspace userspace;
         struct  xen_ondemand ondemand;
+        struct  xen_hwp_para hwp_para;
     } u;
 
     int32_t turbo_enabled;
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528113.820937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEu-0003zl-Qb; Mon, 01 May 2023 19:31:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528113.820937; Mon, 01 May 2023 19:31:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEu-0003zc-M2; Mon, 01 May 2023 19:31:12 +0000
Received: by outflank-mailman (input) for mailman id 528113;
 Mon, 01 May 2023 19:31:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEt-000149-19
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:11 +0000
Received: from mail-qt1-x836.google.com (mail-qt1-x836.google.com
 [2607:f8b0:4864:20::836])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b922eb57-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:31:09 +0200 (CEST)
Received: by mail-qt1-x836.google.com with SMTP id
 d75a77b69052e-3eab1f2ba18so13396801cf.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:09 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.31.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:31:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b922eb57-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969468; x=1685561468;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7SMitIr3/lPLI3ESf+JT4clE3jKHCc578aryTjCi/TM=;
        b=fiGDFAOgA/yRwd4eCcCOYq1b+8La77hlcXxxVYY11Jk3ndSYohpbOgDWzMc+ef2i7y
         MZUw4a2bVpYlLbvPg0AkKIkzn/jZOC0/fSkrMmVCbF5iNv+gD5CzIpL9eKIXO4A4IWOY
         zO5I+Gppui1NvJrXDFx5WHy4gH9DsTROqREvqQVl0rpr2FiWZ8JRj9AD29fcdwqL7mOu
         UenAFx6F2pAc7I+Z0qFd2qyUwx2BXVG4ZsRItUDTqLk6dI9yNwsqHk3OWC0hmzQW/cDz
         klfjzF7uS9KkQhdVoDE3D2XHzi//AoVJg5Gt783IE/hqKELm00ios7PJhaABv269uSGX
         nNlQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969468; x=1685561468;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=7SMitIr3/lPLI3ESf+JT4clE3jKHCc578aryTjCi/TM=;
        b=Jin5OuPOQdz8Jh887hbY6gzPZGH5y9Cc095NUQiEse0T7JJLcMiOnGZj+ZVgIx75Ke
         6MedX/5QdFyFcUiftjLOMjR6w8Dahp9PLyNjygRya4Tdb9U1ym0JcSTJKUHkpl3lu6rl
         QmU8gVvrFuqtVbVur1E32AzCR8S3P90g4cVVdsncJw0Qt+N1jAkA359EwHv+ZD80XIaW
         oRdCim4VZ4fycer0ZphWR5XkSr8/jPjpHVz2jUyr2BCFcMDEWz04qrGG1LYduza7pF88
         db8F+zcTViI+amomQkJLr9pLrpyUOWeQk6i1aHd8D3Wv6kJzm4ZhhHtCazPmdzmLsqsT
         De8Q==
X-Gm-Message-State: AC+VfDzdwcu5opoyq/xs7T86F7RJgDc8HIq1+aQba8XAHRQ6IlwPVyHO
	iba1AHNuw+Ka7hQNk1ahK8IjsqBkC2g=
X-Google-Smtp-Source: ACHHUZ4zfsPnNVWoI6gOx5jVThON5YPlKizbp+7vW0T78d3g1rUxzlLc/WN/8uL2RyHM2Eexervnuw==
X-Received: by 2002:a05:622a:1a8c:b0:3a8:a84:7ffa with SMTP id s12-20020a05622a1a8c00b003a80a847ffamr24778833qtc.57.1682969467897;
        Mon, 01 May 2023 12:31:07 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 08/14 RESEND] libxc: Include hwp_para in definitions
Date: Mon,  1 May 2023 15:30:28 -0400
Message-Id: <20230501193034.88575-9-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Expose the hwp_para fields through libxc.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/include/xenctrl.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 05967ecc92..437001d713 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1903,6 +1903,7 @@ int xc_smt_disable(xc_interface *xch);
  */
 typedef struct xen_userspace xc_userspace_t;
 typedef struct xen_ondemand xc_ondemand_t;
+typedef struct xen_hwp_para xc_hwp_para_t;
 
 struct xc_get_cpufreq_para {
     /* IN/OUT variable */
@@ -1930,6 +1931,7 @@ struct xc_get_cpufreq_para {
     union {
         xc_userspace_t userspace;
         xc_ondemand_t ondemand;
+        xc_hwp_para_t hwp_para;
     } u;
 
     int32_t turbo_enabled;
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528115.820947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEw-0004Nk-DC; Mon, 01 May 2023 19:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528115.820947; Mon, 01 May 2023 19:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEw-0004Md-73; Mon, 01 May 2023 19:31:14 +0000
Received: by outflank-mailman (input) for mailman id 528115;
 Mon, 01 May 2023 19:31:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEu-000149-Vf
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:12 +0000
Received: from mail-qt1-x836.google.com (mail-qt1-x836.google.com
 [2607:f8b0:4864:20::836])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bab86444-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:31:11 +0200 (CEST)
Received: by mail-qt1-x836.google.com with SMTP id
 d75a77b69052e-3eab1f2ba18so13397041cf.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:11 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.31.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:31:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bab86444-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969470; x=1685561470;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=eu+1+LIcVcy6gjOVYvjWRf1phVWRYdUosVQFp9Yv6+Y=;
        b=ql4ZCfHL4hQ0YAqSbKsTQKwnIs1cococeDaD457Nd/sfN5/9p3XqlZSXQSmhYvdmBP
         W7yppqkFqdkjvLPtUOedeKRhze+niKiPC42xH8rLzDf/TDD9lhegO79YtMMwfXGQ3iwR
         5ZIQu0fUlo+SGW4BCPKFqLK9ZeNE4ntJZrtDe8B2UGjfD9ViFOSsQmWsesd0/Ag6TOuu
         ryHfod/UGi6YB9XTT8mlVX2YJGkndLYiBrAyEJO3/P92P4qKqWpUGoCRGSVjuNSNPPfR
         GoijsvrKi55fyZrP9J0BYmDW6CFG3rS3pA9ysF+snUGXJjIOq4jupDAkynLScU7AaUAQ
         jpXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969470; x=1685561470;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=eu+1+LIcVcy6gjOVYvjWRf1phVWRYdUosVQFp9Yv6+Y=;
        b=KVsx/qHi083GcvYFA+XFXNhb1GbDqJmB8x6KuPxeLc/TydkvfAjkYzRlv5Z0Q2DJrP
         DwD2xTHERN9bmliu94BHjsU90jQsvc7zqIn8Wj8Ze0whcYB4isCoDLA4sZljYX1Ksp9w
         qGIj8mWUnG0fIcM6+LoAE8FXsAO+d+q/Mjkh9YjxghkQAc7/kAUqQ8hI+gWL7y+92WVP
         8YRZh8+E6s8BZH+Vpzc/yrTYlZZP6Yr9C7oMy2NkuMlVglZhmmtH6L2eK5+l8YK8TkUv
         ABOyE5IXAhe9zNiYt3tc/jRGPuIghjpUidB/mIYeVp95hkIrJxd6/CSbCl2hiyqveVY3
         NxOQ==
X-Gm-Message-State: AC+VfDyh/+midfWynqigiu/+cviRjgubegJM7Hl97EWRcyJ81oSAGLiD
	KzGIbfC1smZYySf0DKhI3JIqianQDLc=
X-Google-Smtp-Source: ACHHUZ75umoQyE6/G5AilWu+IH7PSPRSc9wk3YctqTMZOoMg4HPknC2FZ+DYgYcgnIQ1PIyrcoZ8AA==
X-Received: by 2002:a05:622a:646:b0:3eb:9b03:b5ba with SMTP id a6-20020a05622a064600b003eb9b03b5bamr23146173qtb.37.1682969470417;
        Mon, 01 May 2023 12:31:10 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 09/14 RESEND] xenpm: Print HWP parameters
Date: Mon,  1 May 2023 15:30:29 -0400
Message-Id: <20230501193034.88575-10-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Print HWP-specific parameters.  Some are always present, but others
depend on hardware support.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
v2:
Style fixes
Declare i outside loop
Replace repeated hardware/configured limits with spaces
Fixup for hw_ removal
Use XEN_HWP_GOVERNOR
Use HWP_ACT_WINDOW_EXPONENT_*
Remove energy_perf hw autonomous - 0 doesn't mean autonomous
---
 tools/misc/xenpm.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index ce8d7644d0..b2defde0d4 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -708,6 +708,44 @@ void start_gather_func(int argc, char *argv[])
     pause();
 }
 
+static void calculate_hwp_activity_window(const xc_hwp_para_t *hwp,
+                                          unsigned int *activity_window,
+                                          const char **units)
+{
+    unsigned int mantissa = hwp->activity_window & HWP_ACT_WINDOW_MANTISSA_MASK;
+    unsigned int exponent =
+        (hwp->activity_window >> HWP_ACT_WINDOW_EXPONENT_SHIFT) &
+            HWP_ACT_WINDOW_EXPONENT_MASK;
+    unsigned int multiplier = 1;
+    unsigned int i;
+
+    if ( hwp->activity_window == 0 )
+    {
+        *units = "hardware selected";
+        *activity_window = 0;
+
+        return;
+    }
+
+    if ( exponent >= 6 )
+    {
+        *units = "s";
+        exponent -= 6;
+    }
+    else if ( exponent >= 3 )
+    {
+        *units = "ms";
+        exponent -= 3;
+    }
+    else
+        *units = "us";
+
+    for ( i = 0; i < exponent; i++ )
+        multiplier *= 10;
+
+    *activity_window = mantissa * multiplier;
+}
+
 /* print out parameters about cpu frequency */
 static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
 {
@@ -773,6 +811,33 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
                p_cpufreq->scaling_cur_freq);
     }
 
+    if ( strcmp(p_cpufreq->scaling_governor, XEN_HWP_GOVERNOR) == 0 )
+    {
+        const xc_hwp_para_t *hwp = &p_cpufreq->u.hwp_para;
+
+        printf("hwp variables        :\n");
+        printf("  hardware limits    : lowest [%u] most_efficient [%u]\n",
+               hwp->lowest, hwp->most_efficient);
+        printf("                     : guaranteed [%u] highest [%u]\n",
+               hwp->guaranteed, hwp->highest);
+        printf("  configured limits  : min [%u] max [%u] energy_perf [%u]\n",
+               hwp->minimum, hwp->maximum, hwp->energy_perf);
+
+        if ( hwp->features & XEN_SYSCTL_HWP_FEAT_ACT_WINDOW )
+        {
+            unsigned int activity_window;
+            const char *units;
+
+            calculate_hwp_activity_window(hwp, &activity_window, &units);
+            printf("                     : activity_window [%u %s]\n",
+                   activity_window, units);
+        }
+
+        printf("                     : desired [%u%s]\n",
+               hwp->desired,
+               hwp->desired ? "" : " hw autonomous");
+    }
+
     printf("turbo mode           : %s\n",
            p_cpufreq->turbo_enabled ? "enabled" : "disabled or n/a");
     printf("\n");
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528117.820957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEy-0004nF-QO; Mon, 01 May 2023 19:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528117.820957; Mon, 01 May 2023 19:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZEy-0004mj-Jh; Mon, 01 May 2023 19:31:16 +0000
Received: by outflank-mailman (input) for mailman id 528117;
 Mon, 01 May 2023 19:31:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZEx-0000m4-7L
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:15 +0000
Received: from mail-qt1-x834.google.com (mail-qt1-x834.google.com
 [2607:f8b0:4864:20::834])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bc136e52-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:31:14 +0200 (CEST)
Received: by mail-qt1-x834.google.com with SMTP id
 d75a77b69052e-3ef588dcf7aso30763471cf.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:14 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.31.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:31:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc136e52-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969473; x=1685561473;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=YO0WsYuSOYLWQPRYhV/xiXSnog1IJZfbXjjJjaPzhyY=;
        b=p6B+B6XpWfwKWnoxvpW5gtX5oGjDvEZdCLriLQQadIJV+J3ljmoPJcH4bPIuL2ucfz
         mtHlnETg0UUF5jByhu/mM9kCyYkjt9P0hrQo1BqQXzxoyHqAN/zU/4cLUegdhqJGFW3t
         QztH/fVaAxhlryJfCG8yyT2BR+q8MjaDHQ1k8RBkJvq7VwlkvwAzt64TqlMQRtaJLSlX
         B9OciZJscFFBbj8sJxWeij9ItpbyraeDBE61NstR0lmm+WYl7PVusJFMbrY+siLj8pf9
         Ra5w3MhcVc+FspJuTOmCPDMSjYTp6lK70LZtyG2s2KTPyCZWrz8TqB30M8y+BTPNtocN
         BQzQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969473; x=1685561473;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=YO0WsYuSOYLWQPRYhV/xiXSnog1IJZfbXjjJjaPzhyY=;
        b=QpuBVmnais08zJSPDCUqVlgC/Hx8zXpAmue0WlvQCqLNznlb47OM5qWwpvtHhJKgI3
         pNDIu5QQM5CZHmZV3zPIaAxs72097gv3hm9V/mMg8YO70MtNHHE7VJ16zpQBayUWkJf2
         wqYOcH7phRpoVtDvYEUeCqYruaVFGkKlCrSTIAXa2dBArn9H6lbKzTA8JtBkVRjiQV7I
         W05zriV0EnE09HlAoU5K58Ba9BMUrw+d7XiVXv8HhwF6b8Xo984uv722mJxNVoX1v6K6
         sy1ePIFiAeIONA0US/X9ZV0nffTXvml64BP8530FXkSbbzYxS4TXLgZi19kuxzvaKtPX
         DNaw==
X-Gm-Message-State: AC+VfDxcJS26HCAvfpwbsNFu1ysfH3FWzlwba98dHqpLmrzDOfv3NicB
	DOaBH6h6IRhYu4h37lRpPKWMuMHN0hE=
X-Google-Smtp-Source: ACHHUZ6b2qw5XxkOLyjOcqjlFfogIYKVNHV7rj6p+aoYwVx36V4SPAB0ZWkXmKy5FGmMsIXEVVUykQ==
X-Received: by 2002:a05:622a:13cc:b0:3ef:6c09:edcc with SMTP id p12-20020a05622a13cc00b003ef6c09edccmr23837085qtk.22.1682969472810;
        Mon, 01 May 2023 12:31:12 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 10/14 RESEND] xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
Date: Mon,  1 May 2023 15:30:30 -0400
Message-Id: <20230501193034.88575-11-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add SET_CPUFREQ_HWP xen_sysctl_pm_op to set HWP parameters.  The sysctl
supports setting multiple values simultaneously as indicated by the
set_params bits.  This allows atomically applying new HWP configuration
via a single wrmsr.

XEN_SYSCTL_HWP_SET_PRESET_BALANCE/PERFORMANCE/POWERSAVE provide three
common presets.  Applying them relies on hardware limits which the
hypervisor is already caching, so using a preset avoids a hypercall to
query the limits (lowest/highest) only to set those same values back.
The code is organized to allow a preset to be refined with additional
parameters if desired.

"most_efficient" and "guaranteed" could be additional presets in the
future, but they are not added now.  Those levels can change at runtime,
and we don't yet have code in place to monitor and react to those events.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---
v3:
Remove cpufreq_governor_internal from set_cpufreq_hwp

v2:
Update for naming anonymous union
Drop hwp_err for invalid input in set_hwp_para()
Drop uint16_t cast in XEN_SYSCTL_HWP_SET_PARAM_MASK
Drop parens for HWP_SET_PRESET defines
Reference activity_window format comment
Place SET_CPUFREQ_HWP after SET_CPUFREQ_PARA
Add {HWP,IA32}_ENERGY_PERF_MAX_{PERFORMANCE,POWERSAVE} defines
Order defines before fields in sysctl.h
Use XEN_HWP_GOVERNOR
Use per_cpu for hwp_drv_data
---
 xen/arch/x86/acpi/cpufreq/hwp.c    | 96 ++++++++++++++++++++++++++++++
 xen/drivers/acpi/pmstat.c          | 18 ++++++
 xen/include/acpi/cpufreq/cpufreq.h |  2 +
 xen/include/public/sysctl.h        | 30 ++++++++++
 4 files changed, 146 insertions(+)

diff --git a/xen/arch/x86/acpi/cpufreq/hwp.c b/xen/arch/x86/acpi/cpufreq/hwp.c
index cb52918799..3d15875dc1 100644
--- a/xen/arch/x86/acpi/cpufreq/hwp.c
+++ b/xen/arch/x86/acpi/cpufreq/hwp.c
@@ -27,7 +27,9 @@ static bool feature_hdc;
 __initdata bool opt_cpufreq_hwp = false;
 __initdata bool opt_cpufreq_hdc = true;
 
+#define HWP_ENERGY_PERF_MAX_PERFORMANCE 0
 #define HWP_ENERGY_PERF_BALANCE         0x80
+#define HWP_ENERGY_PERF_MAX_POWERSAVE   0xff
 #define IA32_ENERGY_BIAS_BALANCE        0x7
 #define IA32_ENERGY_BIAS_MAX_POWERSAVE  0xf
 #define IA32_ENERGY_BIAS_MASK           0xf
@@ -531,6 +533,100 @@ int get_hwp_para(const struct cpufreq_policy *policy,
     return 0;
 }
 
+int set_hwp_para(struct cpufreq_policy *policy,
+                 struct xen_set_hwp_para *set_hwp)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
+
+    if ( data == NULL )
+        return -EINVAL;
+
+    /* Validate all parameters first */
+    if ( set_hwp->set_params & ~XEN_SYSCTL_HWP_SET_PARAM_MASK )
+        return -EINVAL;
+
+    if ( set_hwp->activity_window & ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK )
+        return -EINVAL;
+
+    if ( !feature_hwp_energy_perf &&
+         (set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF) &&
+         set_hwp->energy_perf > IA32_ENERGY_BIAS_MAX_POWERSAVE )
+        return -EINVAL;
+
+    if ( (set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED) &&
+         set_hwp->desired != 0 &&
+         (set_hwp->desired < data->hw.lowest ||
+          set_hwp->desired > data->hw.highest) )
+        return -EINVAL;
+
+    /*
+     * minimum & maximum are not validated as hardware doesn't seem to care
+     * and the SDM says CPUs will clip internally.
+     */
+
+    /* Apply presets */
+    switch ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_PRESET_MASK )
+    {
+    case XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE:
+        data->minimum = data->hw.lowest;
+        data->maximum = data->hw.lowest;
+        data->activity_window = 0;
+        if ( feature_hwp_energy_perf )
+            data->energy_perf = HWP_ENERGY_PERF_MAX_POWERSAVE;
+        else
+            data->energy_perf = IA32_ENERGY_BIAS_MAX_POWERSAVE;
+        data->desired = 0;
+        break;
+
+    case XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE:
+        data->minimum = data->hw.highest;
+        data->maximum = data->hw.highest;
+        data->activity_window = 0;
+        data->energy_perf = HWP_ENERGY_PERF_MAX_PERFORMANCE;
+        data->desired = 0;
+        break;
+
+    case XEN_SYSCTL_HWP_SET_PRESET_BALANCE:
+        data->minimum = data->hw.lowest;
+        data->maximum = data->hw.highest;
+        data->activity_window = 0;
+        if ( feature_hwp_energy_perf )
+            data->energy_perf = HWP_ENERGY_PERF_BALANCE;
+        else
+            data->energy_perf = IA32_ENERGY_BIAS_BALANCE;
+        data->desired = 0;
+        break;
+
+    case XEN_SYSCTL_HWP_SET_PRESET_NONE:
+        break;
+
+    default:
+        return -EINVAL;
+    }
+
+    /* Further customize presets if needed */
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_MINIMUM )
+        data->minimum = set_hwp->minimum;
+
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_MAXIMUM )
+        data->maximum = set_hwp->maximum;
+
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF )
+        data->energy_perf = set_hwp->energy_perf;
+
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED )
+        data->desired = set_hwp->desired;
+
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_ACT_WINDOW )
+        data->activity_window = set_hwp->activity_window &
+                                XEN_SYSCTL_HWP_ACT_WINDOW_MASK;
+
+    hwp_cpufreq_target(policy, 0, 0);
+
+    return 0;
+}
+
 int __init hwp_register_driver(void)
 {
     return cpufreq_register_driver(&hwp_cpufreq_driver);
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 67fd9dabd4..12c76f5e57 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -398,6 +398,20 @@ static int set_cpufreq_para(struct xen_sysctl_pm_op *op)
     return ret;
 }
 
+static int set_cpufreq_hwp(struct xen_sysctl_pm_op *op)
+{
+    struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_policy, op->cpuid);
+
+    if ( !policy || !policy->governor )
+        return -EINVAL;
+
+    if ( strncasecmp(policy->governor->name, XEN_HWP_GOVERNOR,
+                     CPUFREQ_NAME_LEN) )
+        return -EINVAL;
+
+    return set_hwp_para(policy, &op->u.set_hwp);
+}
+
 int do_pm_op(struct xen_sysctl_pm_op *op)
 {
     int ret = 0;
@@ -470,6 +484,10 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
         break;
     }
 
+    case SET_CPUFREQ_HWP:
+        ret = set_cpufreq_hwp(op);
+        break;
+
     case GET_CPUFREQ_AVGFREQ:
     {
         op->u.get_avgfreq = cpufreq_driver_getavg(op->cpuid, USR_GETAVG);
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 92b4c7e79c..b8831b2cd3 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -249,5 +249,7 @@ extern bool opt_cpufreq_hwp;
 extern bool opt_cpufreq_hdc;
 int get_hwp_para(const struct cpufreq_policy *policy,
                  struct xen_hwp_para *hwp_para);
+int set_hwp_para(struct cpufreq_policy *policy,
+                 struct xen_set_hwp_para *set_hwp);
 
 #endif /* __XEN_CPUFREQ_PM_H__ */
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index bf7e6594a7..3242472cbe 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -317,6 +317,34 @@ struct xen_hwp_para {
     uint8_t energy_perf;
 };
 
+/* Set multiple values simultaneously when the corresponding set_params bit is set */
+struct xen_set_hwp_para {
+#define XEN_SYSCTL_HWP_SET_DESIRED              (1U << 0)
+#define XEN_SYSCTL_HWP_SET_ENERGY_PERF          (1U << 1)
+#define XEN_SYSCTL_HWP_SET_ACT_WINDOW           (1U << 2)
+#define XEN_SYSCTL_HWP_SET_MINIMUM              (1U << 3)
+#define XEN_SYSCTL_HWP_SET_MAXIMUM              (1U << 4)
+#define XEN_SYSCTL_HWP_SET_PRESET_MASK          0xf000
+#define XEN_SYSCTL_HWP_SET_PRESET_NONE          0x0000
+#define XEN_SYSCTL_HWP_SET_PRESET_BALANCE       0x1000
+#define XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE     0x2000
+#define XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE   0x3000
+#define XEN_SYSCTL_HWP_SET_PARAM_MASK ( \
+                                  XEN_SYSCTL_HWP_SET_PRESET_MASK | \
+                                  XEN_SYSCTL_HWP_SET_DESIRED     | \
+                                  XEN_SYSCTL_HWP_SET_ENERGY_PERF | \
+                                  XEN_SYSCTL_HWP_SET_ACT_WINDOW  | \
+                                  XEN_SYSCTL_HWP_SET_MINIMUM     | \
+                                  XEN_SYSCTL_HWP_SET_MAXIMUM     )
+    uint16_t set_params; /* bitflags for valid values */
+#define XEN_SYSCTL_HWP_ACT_WINDOW_MASK          0x03ff
+    uint16_t activity_window; /* See comment in struct xen_hwp_para */
+    uint8_t minimum;
+    uint8_t maximum;
+    uint8_t desired;
+    uint8_t energy_perf; /* 0-255 or 0-15 depending on HW support */
+};
+
 #define XEN_HWP_GOVERNOR "hwp-internal"
 /*
  * cpufreq para name of this structure named
@@ -379,6 +407,7 @@ struct xen_sysctl_pm_op {
     #define SET_CPUFREQ_GOV            (CPUFREQ_PARA | 0x02)
     #define SET_CPUFREQ_PARA           (CPUFREQ_PARA | 0x03)
     #define GET_CPUFREQ_AVGFREQ        (CPUFREQ_PARA | 0x04)
+    #define SET_CPUFREQ_HWP            (CPUFREQ_PARA | 0x05)
 
     /* set/reset scheduler power saving option */
     #define XEN_SYSCTL_pm_op_set_sched_opt_smt    0x21
@@ -405,6 +434,7 @@ struct xen_sysctl_pm_op {
         struct xen_get_cpufreq_para get_para;
         struct xen_set_cpufreq_gov  set_gov;
         struct xen_set_cpufreq_para set_para;
+        struct xen_set_hwp_para     set_hwp;
         uint64_aligned_t get_avgfreq;
         uint32_t                    set_sched_opt_smt;
 #define XEN_SYSCTL_CX_UNLIMITED 0xffffffff
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:31:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528120.820966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZF2-0005Mw-4d; Mon, 01 May 2023 19:31:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528120.820966; Mon, 01 May 2023 19:31:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZF2-0005MZ-1F; Mon, 01 May 2023 19:31:20 +0000
Received: by outflank-mailman (input) for mailman id 528120;
 Mon, 01 May 2023 19:31:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZF0-000149-N3
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:18 +0000
Received: from mail-qt1-x82c.google.com (mail-qt1-x82c.google.com
 [2607:f8b0:4864:20::82c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bd953b45-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:31:17 +0200 (CEST)
Received: by mail-qt1-x82c.google.com with SMTP id
 d75a77b69052e-3ef38864360so32531821cf.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:17 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.31.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:31:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd953b45-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969475; x=1685561475;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0JAKWHRmes/OZ2QqxthbDdVSth+8gV1gmiT1eY6OQAU=;
        b=agUYa27f+iQLWMNxlFEc2K+VW9/TDwM3m+jQfvXTxvXZbmzGAjWm0wsvu8JqtR207U
         t1t5hRwO5XCaNJeBc88seuaxK9iWobMhqaMFalduGXrSd1fuN4jpHX1uq0hiw73GxLxa
         QbJ0eOzdWZOowX39nQau708W4gepwI7smAEBwsHyb73JAshCTh2+k0inXyBLfFQROQjn
         kVZLmA8d83RtY1EFNGFyqPmD8NoRnV7padsi+zu+hjkMVIvAnO4R1lui3cpMG8T7Y8DP
         FqjAkbwmR1Ss3yV18+dB6lolo03B2Ai2r90qUQENivonfb8k4P3/JMrwXX/E19b/M6vA
         YtxQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969475; x=1685561475;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=0JAKWHRmes/OZ2QqxthbDdVSth+8gV1gmiT1eY6OQAU=;
        b=UlaklaRnKMH45rH2nnOqbKi5d+lnyRb0FeK2gNj7bSMajod8f/mQ29nKj5PAhsOB8k
         4CePo6DrzAjDo0DVPvQcKhMiulxc/uIuGRf0aXI5Prq4HOLVSrpX5iTxYrBGatHRW/bK
         X6APhppKC6Lnfd/Jj9HouwnH6yLs9hNij2buwRLX2S6qNl5Lb6djIxcgvr+K8GAoLPSW
         nO6A2AXanDtZOaLgSVxtHudXr4WUFXiVikWssmeG1MxKoU1zuBiEK5ZNkWWktcX+sI5Y
         7Xp+2t91IaVMhz3azgV4TSfyeNtEa+4dTqdZlcFQm1XgGU8Qva5nTpBmuhaQe9BRJ5rd
         V0SQ==
X-Gm-Message-State: AC+VfDx45hYpx6vN6LN+G7gg5A2qoI+uqewsi5UXTQcy/BWmlJstqCgt
	d7FAT32sa+7T5HlYEdmPRXHdTtLleIw=
X-Google-Smtp-Source: ACHHUZ7q7E2b3NRT8r2PP7bqWrRHuqumQRwO8VPJV/bucabXWhGvM2lctIFgM6saOuOeuyBtdclxKQ==
X-Received: by 2002:ac8:5c08:0:b0:3ef:7975:99d0 with SMTP id i8-20020ac85c08000000b003ef797599d0mr24307476qti.31.1682969475143;
        Mon, 01 May 2023 12:31:15 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 11/14 RESEND] libxc: Add xc_set_cpufreq_hwp
Date: Mon,  1 May 2023 15:30:31 -0400
Message-Id: <20230501193034.88575-12-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add xc_set_cpufreq_hwp to allow calling xen_sysctl_pm_op
SET_CPUFREQ_HWP.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---
v2:
Mark xc_set_hwp_para_t const
---
 tools/include/xenctrl.h |  4 ++++
 tools/libs/ctrl/xc_pm.c | 18 ++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 437001d713..cd367d9d8f 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1937,11 +1937,15 @@ struct xc_get_cpufreq_para {
     int32_t turbo_enabled;
 };
 
+typedef struct xen_set_hwp_para xc_set_hwp_para_t;
+
 int xc_get_cpufreq_para(xc_interface *xch, int cpuid,
                         struct xc_get_cpufreq_para *user_para);
 int xc_set_cpufreq_gov(xc_interface *xch, int cpuid, char *govname);
 int xc_set_cpufreq_para(xc_interface *xch, int cpuid,
                         int ctrl_type, int ctrl_value);
+int xc_set_cpufreq_hwp(xc_interface *xch, int cpuid,
+                       const xc_set_hwp_para_t *set_hwp);
 int xc_get_cpufreq_avgfreq(xc_interface *xch, int cpuid, int *avg_freq);
 
 int xc_set_sched_opt_smt(xc_interface *xch, uint32_t value);
diff --git a/tools/libs/ctrl/xc_pm.c b/tools/libs/ctrl/xc_pm.c
index c3a9864bf7..a747ab053c 100644
--- a/tools/libs/ctrl/xc_pm.c
+++ b/tools/libs/ctrl/xc_pm.c
@@ -330,6 +330,24 @@ int xc_set_cpufreq_para(xc_interface *xch, int cpuid,
     return xc_sysctl(xch, &sysctl);
 }
 
+int xc_set_cpufreq_hwp(xc_interface *xch, int cpuid,
+                       const xc_set_hwp_para_t *set_hwp)
+{
+    DECLARE_SYSCTL;
+
+    if ( !xch )
+    {
+        errno = EINVAL;
+        return -1;
+    }
+    sysctl.cmd = XEN_SYSCTL_pm_op;
+    sysctl.u.pm_op.cmd = SET_CPUFREQ_HWP;
+    sysctl.u.pm_op.cpuid = cpuid;
+    sysctl.u.pm_op.u.set_hwp = *set_hwp;
+
+    return xc_sysctl(xch, &sysctl);
+}
+
 int xc_get_cpufreq_avgfreq(xc_interface *xch, int cpuid, int *avg_freq)
 {
     int ret = 0;
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528136.820977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMY-0007qg-UN; Mon, 01 May 2023 19:39:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528136.820977; Mon, 01 May 2023 19:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMY-0007qX-RK; Mon, 01 May 2023 19:39:06 +0000
Received: by outflank-mailman (input) for mailman id 528136;
 Mon, 01 May 2023 19:39:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZDH-0006PY-PU
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:31 +0000
Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com
 [2607:f8b0:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7864ac7c-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:20 +0200 (CEST)
Received: by mail-pl1-x62d.google.com with SMTP id
 d9443c01a7336-1aaea3909d1so17755805ad.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:20 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7864ac7c-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969359; x=1685561359;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=uIsUNYOwSPNxjYFxvF4LgB4Ah8JH2T7Nmtlw8Cz+gsw=;
        b=JvPLlmC/XzUWqdECLyDwvfeLFLPFT450E92A3QXyvZyOZFiHjs0SqrfxeBMPzeb8LI
         K2VYtEv3r+e+UcHKuqN8TTRgEDMk5QiHg2w8LWe4aiFQNNV8Hh6df9gt7z6D8qxOouWJ
         wLPmS/k/ifaj+fNqHq4KBrLkalomexLgsPcDr4xMzpwyPwPn0hyVLwNcBBGjvlXZkwRl
         lPtqWGJrBs3wn3DJU9zWYI2bb/1KFvoVhNYahmv1oY0+idoipKM5ABGGY8tIdglpIS4E
         WqBtNig6OCJrivJLAJr7mSv5nUheWG1j4pFaa5KN6zFjtfYkKhLunWXNp9G65rPn/E55
         zhpg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969359; x=1685561359;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=uIsUNYOwSPNxjYFxvF4LgB4Ah8JH2T7Nmtlw8Cz+gsw=;
        b=fuovOtMeRxE3+1W0Hee6wxwIlrpb4uUU5MkUD2lhnrUKD0PpjdPUu6F3gHm5uniYNh
         P6tIc4sy2YLUpH3fD/kRrU2RHfGbfQdxHoxPBgWCLxRd2UR8F70KEfg9j46VvkYQMAHr
         PUIPywKWf6FAWgMtS2dfv9yYhG9b5By1NCfpc1h/GLARy9RnCmPuvb7fKYXsrjnip5sM
         ZPhIWoEh17jTEcV1Ln2DnyvNmwvKcKmgYwY19f8wngEA/J9sHScGE0TWVPeaYi3h9PCd
         ugkqW3DAZZTKJkXmEWKEJD1uxqbhG5uMgO553esVSrSp6bhG763OfnSeupeZXShA1roT
         /mdg==
X-Gm-Message-State: AC+VfDyx5cnbZztC5QeKkjjVJ+9Q/D17zXLudHKXATehKf6JfHTiwLNX
	Kk8kkAHpnoWLyA2rnTQKsJClzwfoMfO8nhMc
X-Google-Smtp-Source: ACHHUZ4XGLMgkIB9Q6wCsJwFAYUMJfuCSptJDhnDjc9kz87hQFDk4CAG77uXolw+VTSUtUndXFrgrA==
X-Received: by 2002:a17:903:120c:b0:1aa:f53a:5e45 with SMTP id l12-20020a170903120c00b001aaf53a5e45mr5020547plh.39.1682969359666;
        Mon, 01 May 2023 12:29:19 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: [PATCH v2 31/34] sparc64: Convert various functions to use ptdescs
Date: Mon,  1 May 2023 12:28:26 -0700
Message-Id: <20230501192829.17086-32-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/sparc/mm/init_64.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 04f9db0c3111..eedb3e03b1fe 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2893,14 +2893,15 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
 
 pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-	if (!page)
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!ptdesc_pte_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
-	return (pte_t *) page_address(page);
+	return (pte_t *) ptdesc_address(ptdesc);
 }
 
 void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
@@ -2910,10 +2911,10 @@ void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 
 static void __pte_free(pgtable_t pte)
 {
-	struct page *page = virt_to_page(pte);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pte);
 
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 void pte_free(struct mm_struct *mm, pgtable_t pte)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528139.820987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMa-00086X-8U; Mon, 01 May 2023 19:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528139.820987; Mon, 01 May 2023 19:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMa-00086Q-52; Mon, 01 May 2023 19:39:08 +0000
Received: by outflank-mailman (input) for mailman id 528139;
 Mon, 01 May 2023 19:39:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCm-0006FS-Fl
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:00 +0000
Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com
 [2607:f8b0:4864:20::535])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6aa9bdc0-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:28:58 +0200 (CEST)
Received: by mail-pg1-x535.google.com with SMTP id
 41be03b00d2f7-52c3ffc8d13so158497a12.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:28:58 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.28.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:28:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6aa9bdc0-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969337; x=1685561337;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Ij5AkRCa+yootiVPqcVahGB0vu642P/kX8VFrblz4TQ=;
        b=UkuGYyJfBaWkUOY2u3PlvKhnoh02w7kKy6Yo+3jgJq0jrvHddqf+dLyHTj9abI3kjB
         +3BvMKKzDVLKdOt8tjDXhylN3ftFB2Dv4naACQiv7JfxIAQ+eWqFsM2f729a64J6CKIo
         OlCeiwuLQwZBmOQ0K2+KuoVUh+iT8KM5gemKLrwEoCjwq4V7nKjRwu9ccr7I3jE8KUCP
         /cwHlob/NeEwLBmBQ0+XtF7e7T+SHAaH9GHToGx8cp5jdfXxZIxid4S+cFW1MgiFv2tg
         PqSj02mK1VvkeR5635etrkR82//haDgUAAf4WRjR+9CVW5EPPVO9DB3jlHc5FET3LbQo
         Yxrg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969337; x=1685561337;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Ij5AkRCa+yootiVPqcVahGB0vu642P/kX8VFrblz4TQ=;
        b=d3z5tjHMAXE/De1vW79ylUciwSrskYULH4/E0fVFzHOvH+JSVGdo0iD6uoSEv+hG1j
         qZFgM3lQpLzwAdhV6/liSq0+JIVMK9AP5Wlxc/10g8HgLeRX4LaMvmElg0lrVBm9Tmpi
         0nLjt7aSxWuf/cfxZIBGsOR/kx2t47HSsfEZ5XWFCj3oUrIeex3Pb111AEMH9vfy2GYT
         CC8FU2/QkmBmdWwOeeAEI90klqJ7KVY9H1EuJYGM32HgMxy2P1StOlqwEGtkIR71eYNe
         WixwIbdsDlrVzAefXSYQQJQZh6NMTZh2hfyk41er8QXOE+QT9rtv9BcAySkmFApabXbe
         ZWnw==
X-Gm-Message-State: AC+VfDzpMwWmaXKKZ5RQDiYgl+NuiXSnCx+Y6TFEGsdU6mQcKT3jRy/E
	rpq1oONPb6D0/AkW8UvH7fM=
X-Google-Smtp-Source: ACHHUZ56JMfJkN+zfbveEwGUizMU0dJC7BXLEE13YfG4Ba6DaXws7RlJ55LFBdOT1As72Xg5aIUi5g==
X-Received: by 2002:a17:903:2343:b0:1a6:b971:faf8 with SMTP id c3-20020a170903234300b001a6b971faf8mr18128456plh.53.1682969336571;
        Mon, 01 May 2023 12:28:56 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 16/34] s390: Convert various gmap functions to use ptdescs
Date: Mon,  1 May 2023 12:28:11 -0700
Message-Id: <20230501192829.17086-17-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to split struct ptdesc from struct page, convert various
functions to use ptdescs.

Some of these functions use the *get*page*() helpers. Convert them to
use ptdesc_alloc() and ptdesc_to_virt() instead, to help standardize
page table handling further.
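The shape of the conversion can be shown as a compilable sketch. Note these are stand-in types and helper names (ptdesc_alloc_sketch() etc.), NOT the kernel definitions: the point is only that table memory is tracked through a ptdesc with its own pt_list linkage and _pt_s390_gaddr field instead of overloading page->lru, and is allocated and freed through ptdesc helpers rather than alloc_pages()/__free_pages().

```c
/* Compilable sketch of the conversion pattern; all types and helpers
 * here are stand-ins, not the kernel's definitions. */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head->prev = head;
}

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

/* Stand-in for struct ptdesc. */
struct ptdesc {
	struct list_head pt_list;	/* replaces page->lru */
	unsigned long _pt_s390_gaddr;	/* guest address for this table */
	void *table;			/* backing table memory */
};

/* Sketch of ptdesc_alloc(): hand back a descriptor plus the table it
 * describes (the kernel gets both from the page allocator instead). */
static struct ptdesc *ptdesc_alloc_sketch(size_t table_size)
{
	struct ptdesc *ptdesc = malloc(sizeof(*ptdesc));

	if (!ptdesc)
		return NULL;
	ptdesc->table = calloc(1, table_size);
	if (!ptdesc->table) {
		free(ptdesc);
		return NULL;
	}
	ptdesc->_pt_s390_gaddr = 0;
	return ptdesc;
}

/* Sketch of ptdesc_to_virt(): map descriptor to usable table memory. */
static void *ptdesc_to_virt_sketch(struct ptdesc *ptdesc)
{
	return ptdesc->table;
}

/* Sketch of ptdesc_free(). */
static void ptdesc_free_sketch(struct ptdesc *ptdesc)
{
	free(ptdesc->table);
	free(ptdesc);
}
```

The gmap_alloc()/gmap_free() hunks in the diff follow this shape: allocate a ptdesc, link its pt_list into gmap->crst_list, and on teardown walk the list, clear _pt_s390_gaddr, and free each ptdesc.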

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/gmap.c | 230 ++++++++++++++++++++++++--------------------
 1 file changed, 128 insertions(+), 102 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index a9e8b1805894..e833a7e81fbd 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -34,7 +34,7 @@
 static struct gmap *gmap_alloc(unsigned long limit)
 {
 	struct gmap *gmap;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long *table;
 	unsigned long etype, atype;
 
@@ -67,12 +67,12 @@ static struct gmap *gmap_alloc(unsigned long limit)
 	spin_lock_init(&gmap->guest_table_lock);
 	spin_lock_init(&gmap->shadow_lock);
 	refcount_set(&gmap->ref_count, 1);
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		goto out_free;
-	page->_pt_s390_gaddr = 0;
-	list_add(&page->lru, &gmap->crst_list);
-	table = page_to_virt(page);
+	ptdesc->_pt_s390_gaddr = 0;
+	list_add(&ptdesc->pt_list, &gmap->crst_list);
+	table = ptdesc_to_virt(ptdesc);
 	crst_table_init(table, etype);
 	gmap->table = table;
 	gmap->asce = atype | _ASCE_TABLE_LENGTH |
@@ -181,25 +181,25 @@ static void gmap_rmap_radix_tree_free(struct radix_tree_root *root)
  */
 static void gmap_free(struct gmap *gmap)
 {
-	struct page *page, *next;
+	struct ptdesc *ptdesc, *next;
 
 	/* Flush tlb of all gmaps (if not already done for shadows) */
 	if (!(gmap_is_shadow(gmap) && gmap->removed))
 		gmap_flush_tlb(gmap);
 	/* Free all segment & region tables. */
-	list_for_each_entry_safe(page, next, &gmap->crst_list, lru) {
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+	list_for_each_entry_safe(ptdesc, next, &gmap->crst_list, pt_list) {
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 	gmap_radix_tree_free(&gmap->guest_to_host);
 	gmap_radix_tree_free(&gmap->host_to_guest);
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
-		/* Free all page tables. */
-		list_for_each_entry_safe(page, next, &gmap->pt_list, lru) {
-			page->_pt_s390_gaddr = 0;
-			page_table_free_pgste(page);
+		/* Free all ptdesc tables. */
+		list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list) {
+			ptdesc->_pt_s390_gaddr = 0;
+			page_table_free_pgste(ptdesc_page(ptdesc));
 		}
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
@@ -308,27 +308,27 @@ EXPORT_SYMBOL_GPL(gmap_get_enabled);
 static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 			    unsigned long init, unsigned long gaddr)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long *new;
 
 	/* since we dont free the gmap table until gmap_free we can unlock */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	new = page_to_virt(page);
+	new = ptdesc_to_virt(ptdesc);
 	crst_table_init(new, init);
 	spin_lock(&gmap->guest_table_lock);
 	if (*table & _REGION_ENTRY_INVALID) {
-		list_add(&page->lru, &gmap->crst_list);
+		list_add(&ptdesc->pt_list, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
-		page->_pt_s390_gaddr = gaddr;
-		page = NULL;
+		ptdesc->_pt_s390_gaddr = gaddr;
+		ptdesc = NULL;
 	}
 	spin_unlock(&gmap->guest_table_lock);
-	if (page) {
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+	if (ptdesc) {
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 	return 0;
 }
@@ -341,15 +341,15 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
  */
 static unsigned long __gmap_segment_gaddr(unsigned long *entry)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long offset, mask;
 
 	offset = (unsigned long) entry / sizeof(unsigned long);
 	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
 	mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
-	page = virt_to_page((void *)((unsigned long) entry & mask));
+	ptdesc = virt_to_ptdesc((void *)((unsigned long) entry & mask));
 
-	return page->_pt_s390_gaddr + offset;
+	return ptdesc->_pt_s390_gaddr + offset;
 }
 
 /**
@@ -1345,6 +1345,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	unsigned long *ste;
 	phys_addr_t sto, pgt;
 	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	ste = gmap_table_walk(sg, raddr, 1); /* get segment pointer */
@@ -1358,9 +1359,11 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 	/* Free page table */
 	page = phys_to_page(pgt);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	page_table_free_pgste(page);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	page_table_free_pgste(ptdesc_page(ptdesc));
 }
 
 /**
@@ -1374,9 +1377,10 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 				unsigned long *sgt)
 {
-	struct page *page;
 	phys_addr_t pgt;
 	int i;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	for (i = 0; i < _CRST_ENTRIES; i++, raddr += _SEGMENT_SIZE) {
@@ -1387,9 +1391,11 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 		/* Free page table */
 		page = phys_to_page(pgt);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		page_table_free_pgste(page);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		page_table_free_pgste(ptdesc_page(ptdesc));
 	}
 }
 
@@ -1405,6 +1411,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	unsigned long r3o, *r3e;
 	phys_addr_t sgt;
 	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r3e = gmap_table_walk(sg, raddr, 2); /* get region-3 pointer */
@@ -1418,9 +1425,11 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 	/* Free segment table */
 	page = phys_to_page(sgt);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 }
 
 /**
@@ -1434,9 +1443,10 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 				unsigned long *r3t)
 {
-	struct page *page;
 	phys_addr_t sgt;
 	int i;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION3_SIZE) {
@@ -1447,9 +1457,11 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 		/* Free segment table */
 		page = phys_to_page(sgt);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 }
 
@@ -1465,6 +1477,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	unsigned long r2o, *r2e;
 	phys_addr_t r3t;
 	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r2e = gmap_table_walk(sg, raddr, 3); /* get region-2 pointer */
@@ -1478,9 +1491,11 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 	/* Free region 3 table */
 	page = phys_to_page(r3t);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 }
 
 /**
@@ -1495,8 +1510,9 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 				unsigned long *r2t)
 {
 	phys_addr_t r3t;
-	struct page *page;
 	int i;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION2_SIZE) {
@@ -1507,9 +1523,11 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 		/* Free region 3 table */
 		page = phys_to_page(r3t);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 }
 
@@ -1525,6 +1543,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	unsigned long r1o, *r1e;
 	struct page *page;
 	phys_addr_t r2t;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r1e = gmap_table_walk(sg, raddr, 4); /* get region-1 pointer */
@@ -1538,9 +1557,11 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_r2t(sg, raddr, __va(r2t));
 	/* Free region 2 table */
 	page = phys_to_page(r2t);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 }
 
 /**
@@ -1558,6 +1579,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 	struct page *page;
 	phys_addr_t r2t;
 	int i;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	asce = __pa(r1t) | _ASCE_TYPE_REGION1;
@@ -1571,9 +1593,11 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 		r1t[i] = _REGION1_ENTRY_EMPTY;
 		/* Free region 2 table */
 		page = phys_to_page(r2t);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		ptdesc_free(ptdesc);
 	}
 }
 
@@ -1770,18 +1794,18 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_r2t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	/* Allocate a shadow region second table */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_r2t = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_r2t = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 4); /* get region-1 pointer */
@@ -1802,7 +1826,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 		 _REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r2t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1830,8 +1854,8 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_r2t);
@@ -1855,18 +1879,18 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_r3t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	/* Allocate a shadow region second table */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_r3t = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_r3t = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 3); /* get region-2 pointer */
@@ -1887,7 +1911,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 		 _REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r3t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1915,8 +1939,8 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_r3t);
@@ -1940,18 +1964,18 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_sgt;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg) || (sgt & _REGION3_ENTRY_LARGE));
 	/* Allocate a shadow segment table */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_sgt = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_sgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 2); /* get region-3 pointer */
@@ -1972,7 +1996,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 		 _REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= sgt & _REGION_ENTRY_PROTECT;
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -2000,8 +2024,8 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc->_pt_s390_gaddr = 0;
+	ptdesc_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_sgt);
@@ -2024,8 +2048,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 			   int *fake)
 {
 	unsigned long *table;
-	struct page *page;
 	int rc;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	spin_lock(&sg->guest_table_lock);
@@ -2033,9 +2058,10 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
 		/* Shadow page tables are full pages (pte+pgste) */
 		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
+		ptdesc = page_ptdesc(page);
+		*pgt = ptdesc->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
 		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
+		*fake = !!(ptdesc->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
 		rc = 0;
 	} else  {
 		rc = -EAGAIN;
@@ -2064,19 +2090,19 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 {
 	unsigned long raddr, origin;
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t s_pgt;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg) || (pgt & _SEGMENT_ENTRY_LARGE));
 	/* Allocate a shadow page table */
-	page = page_table_alloc_pgste(sg->mm);
-	if (!page)
+	ptdesc = page_ptdesc(page_table_alloc_pgste(sg->mm));
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_pgt = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_pgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */
@@ -2094,7 +2120,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	/* mark as invalid as long as the parent table is not protected */
 	*table = (unsigned long) s_pgt | _SEGMENT_ENTRY |
 		 (pgt & _SEGMENT_ENTRY_PROTECT) | _SEGMENT_ENTRY_INVALID;
-	list_add(&page->lru, &sg->pt_list);
+	list_add(&ptdesc->pt_list, &sg->pt_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_SEGMENT_ENTRY_INVALID;
@@ -2120,8 +2146,8 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	page_table_free_pgste(page);
+	ptdesc->_pt_s390_gaddr = 0;
+	page_table_free_pgste(ptdesc_page(ptdesc));
 	return rc;
 
 }
@@ -2814,11 +2840,11 @@ EXPORT_SYMBOL_GPL(__s390_uv_destroy_range);
  */
 void s390_unlist_old_asce(struct gmap *gmap)
 {
-	struct page *old;
+	struct ptdesc *old;
 
-	old = virt_to_page(gmap->table);
+	old = virt_to_ptdesc(gmap->table);
 	spin_lock(&gmap->guest_table_lock);
-	list_del(&old->lru);
+	list_del(&old->pt_list);
 	/*
 	 * Sometimes the topmost page might need to be "removed" multiple
 	 * times, for example if the VM is rebooted into secure mode several
@@ -2833,7 +2859,7 @@ void s390_unlist_old_asce(struct gmap *gmap)
 	 * pointers, so list_del can work (and do nothing) without
 	 * dereferencing stale or invalid pointers.
 	 */
-	INIT_LIST_HEAD(&old->lru);
+	INIT_LIST_HEAD(&old->pt_list);
 	spin_unlock(&gmap->guest_table_lock);
 }
 EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
@@ -2851,15 +2877,15 @@ EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
 int s390_replace_asce(struct gmap *gmap)
 {
 	unsigned long asce;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	void *table;
 
 	s390_unlist_old_asce(gmap);
 
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	table = page_to_virt(page);
+	table = ptdesc_to_virt(ptdesc);
 	memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
 
 	/*
@@ -2868,7 +2894,7 @@ int s390_replace_asce(struct gmap *gmap)
 	 * it will be freed when the VM is torn down.
 	 */
 	spin_lock(&gmap->guest_table_lock);
-	list_add(&page->lru, &gmap->crst_list);
+	list_add(&ptdesc->pt_list, &gmap->crst_list);
 	spin_unlock(&gmap->guest_table_lock);
 
 	/* Set new table origin while preserving existing ASCE control bits */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528141.820991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMa-00089n-JE; Mon, 01 May 2023 19:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528141.820991; Mon, 01 May 2023 19:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMa-00089T-Dx; Mon, 01 May 2023 19:39:08 +0000
Received: by outflank-mailman (input) for mailman id 528141;
 Mon, 01 May 2023 19:39:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZDL-0006PY-Q6
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:35 +0000
Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com
 [2607:f8b0:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7ae1c1cf-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:24 +0200 (CEST)
Received: by mail-pl1-x635.google.com with SMTP id
 d9443c01a7336-1aad5245632so16558095ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:24 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ae1c1cf-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969364; x=1685561364;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tVP5uHyitpWmGyZGGmBmuGVwAuiqA5RrNeRETZrKFQk=;
        b=l3gNIC841LsuGqQQ2UpVF+XnHjZ+kiBXZeZy4GTzDMbWbXaEiG5ArSl/ydDvotE01/
         fGJ2+A5T8Q0iCPtlIlo1cxl3Uf1AxNVghNXp2wCgxamPWRN0f7NM89uS/eqg02SKiRak
         np/AOUUvT9Iq32lA1ArHNRu1TUuWBGP97diP1cbyLES/I8a8uq904CtHM5VcyBKmIx8g
         ULh11nFUApu/c5GfuUXZpzFhh6lYLhMsU4hm7q2aFdXWAxNpZKe/ScDOD1gC9EPME0aI
         4wA9wQgJ9NzOXeiB1g9JMqcyyqFI/8Wvj0CTUQfo5zcTDuOrG/ytNAkFcETgxxSn2sbl
         hW5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969364; x=1685561364;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=tVP5uHyitpWmGyZGGmBmuGVwAuiqA5RrNeRETZrKFQk=;
        b=a1EpmSRxzkAtGNwNr9tSZfplIvuedRQ4tUUA02wQwbVRYlA+vSsP+VAHKvqgCzvSjO
         5+b1XfA8aZplzPmn6s5WqyQxFv17r9ZkqXwPvt88pPXMfnXapYXM/72QFE29LyTJADtS
         7zvFvyl6mdhHj8VnwNf9fFUnu+FBvLeLplzEy/ltWWwY11qHZPOzhsRNqEDcsN8Ji1Jc
         Hhe7m8H/WDUyiPp9WnRu+GjItHWVe54ZNXSNZBAFt292WjjcZmWd06PVifksCk/L56lk
         tWAGmcKaRl1VXKcw8gkpKeVSsYxhci2WL6HQD480ctY0GbDGfaHqEgv9kpxoZCj4POcL
         SvCQ==
X-Gm-Message-State: AC+VfDxZgJT/Fl861g0Ucj82a+yoKSHrcVNMu+pxJsZLRvadz1F7Gq7k
	7HDuCOXwcat8SQ8BzBjLHXo=
X-Google-Smtp-Source: ACHHUZ4QjkMy1fp/+Rin9A48k0WWuHM3fOzsdnkuixAb5bcQScYzyO713qNNM5COHanOuLDMosv4jg==
X-Received: by 2002:a17:902:db07:b0:1aa:f203:781c with SMTP id m7-20020a170902db0700b001aaf203781cmr5844122plx.44.1682969363711;
        Mon, 01 May 2023 12:29:23 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 34/34] mm: Remove pgtable_{pmd, pte}_page_{ctor, dtor}() wrappers
Date: Mon,  1 May 2023 12:28:29 -0700
Message-Id: <20230501192829.17086-35-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

These functions are no longer necessary. Remove them and clean up the
Documentation that references them.
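For context, the removed wrappers were pure pass-throughs: a type conversion from struct page to struct ptdesc followed by a call to the ptdesc variant. A compilable sketch with stand-in types (NOT the kernel definitions, where the two structs overlay the same memory) shows why callers can simply call the ptdesc functions directly:

```c
#include <stdbool.h>

/* Stand-in types with identical layout; in the kernel, struct ptdesc
 * overlays struct page, so page_ptdesc() is just a cast. */
struct page { bool ctor_done; };
struct ptdesc { bool ctor_done; };

static struct ptdesc *page_ptdesc(struct page *page)
{
	return (struct ptdesc *)page;
}

/* Stand-in for the real constructor (which sets up the ptlock and
 * page-table accounting and can fail). */
static bool ptdesc_pte_ctor(struct ptdesc *ptdesc)
{
	ptdesc->ctor_done = true;
	return true;
}

/* The wrapper being removed adds nothing beyond the conversion, so
 * callers move to ptdesc_pte_ctor(page_ptdesc(page)) directly. */
static bool pgtable_pte_page_ctor(struct page *page)
{
	return ptdesc_pte_ctor(page_ptdesc(page));
}
```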

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 Documentation/mm/split_page_table_lock.rst    | 12 +++++------
 .../zh_CN/mm/split_page_table_lock.rst        | 14 ++++++-------
 include/linux/mm.h                            | 20 -------------------
 3 files changed, 13 insertions(+), 33 deletions(-)

diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index 50ee0dfc95be..b3c612183135 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -53,7 +53,7 @@ Support of split page table lock by an architecture
 ===================================================
 
 There's no need in special enabling of PTE split page table lock: everything
-required is done by pgtable_pte_page_ctor() and pgtable_pte_page_dtor(), which
+required is done by ptdesc_pte_ctor() and ptdesc_pte_dtor(), which
 must be called on PTE table allocation / freeing.
 
 Make sure the architecture doesn't use slab allocator for page table
@@ -63,8 +63,8 @@ This field shares storage with page->ptl.
 PMD split lock only makes sense if you have more than two page table
 levels.
 
-PMD split lock enabling requires pgtable_pmd_page_ctor() call on PMD table
-allocation and pgtable_pmd_page_dtor() on freeing.
+PMD split lock enabling requires ptdesc_pmd_ctor() call on PMD table
+allocation and ptdesc_pmd_dtor() on freeing.
 
 Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and
 pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing
@@ -72,7 +72,7 @@ paths: i.e X86_PAE preallocate few PMDs on pgd_alloc().
 
 With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK.
 
-NOTE: pgtable_pte_page_ctor() and pgtable_pmd_page_ctor() can fail -- it must
+NOTE: ptdesc_pte_ctor() and ptdesc_pmd_ctor() can fail -- it must
 be handled properly.
 
 page->ptl
@@ -92,7 +92,7 @@ trick:
    split lock with enabled DEBUG_SPINLOCK or DEBUG_LOCK_ALLOC, but costs
    one more cache line for indirect access;
 
-The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in
-pgtable_pmd_page_ctor() for PMD table.
+The spinlock_t allocated in ptdesc_pte_ctor() for PTE table and in
+ptdesc_pmd_ctor() for PMD table.
 
 Please, never access page->ptl directly -- use appropriate helper.
diff --git a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
index 4fb7aa666037..a3323eb9dc40 100644
--- a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
+++ b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
@@ -56,16 +56,16 @@ Hugetlb特定的辅助函数:
 架构对分页表锁的支持
 ====================
 
-没有必要特别启用PTE分页表锁：所有需要的东西都由pgtable_pte_page_ctor()
-和pgtable_pte_page_dtor()完成，它们必须在PTE表分配/释放时被调用。
+没有必要特别启用PTE分页表锁：所有需要的东西都由ptdesc_pte_ctor()
+和ptdesc_pte_dtor()完成，它们必须在PTE表分配/释放时被调用。
 
 确保架构不使用slab分配器来分配页表：slab使用page->slab_cache来分配其页
 面。这个区域与page->ptl共享存储。
 
 PMD分页锁只有在你有两个以上的页表级别时才有意义。
 
-启用PMD分页锁需要在PMD表分配时调用pgtable_pmd_page_ctor()，在释放时调
-用pgtable_pmd_page_dtor()。
+启用PMD分页锁需要在PMD表分配时调用ptdesc_pmd_ctor()，在释放时调
+用ptdesc_pmd_dtor()。
 
 分配通常发生在pmd_alloc_one()中，释放发生在pmd_free()和pmd_free_tlb()
 中，但要确保覆盖所有的PMD表分配/释放路径：即X86_PAE在pgd_alloc()中预先
@@ -73,7 +73,7 @@ PMD分页锁只有在你有两个以上的页表级别时才有意义。
 
 一切就绪后，你可以设置CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK。
 
-注意：pgtable_pte_page_ctor()和pgtable_pmd_page_ctor()可能失败--必
+注意：ptdesc_pte_ctor()和ptdesc_pmd_ctor()可能失败--必
 须正确处理。
 
 page->ptl
@@ -90,7 +90,7 @@ page->ptl用于访问分割页表锁，其中'page'是包含该表的页面struc
    的指针并动态分配它。这允许在启用DEBUG_SPINLOCK或DEBUG_LOCK_ALLOC的
    情况下使用分页锁，但由于间接访问而多花了一个缓存行。
 
-PTE表的spinlock_t分配在pgtable_pte_page_ctor()中，PMD表的spinlock_t
-分配在pgtable_pmd_page_ctor()中。
+PTE表的spinlock_t分配在ptdesc_pte_ctor()中，PMD表的spinlock_t
+分配在ptdesc_pmd_ctor()中。
 
 请不要直接访问page->ptl - -使用适当的辅助函数。
diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc61aeca9077..dfa3e202099a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2858,11 +2858,6 @@ static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc)
 	return true;
 }
 
-static inline bool pgtable_pte_page_ctor(struct page *page)
-{
-	return ptdesc_pte_ctor(page_ptdesc(page));
-}
-
 static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc)
 {
 	struct folio *folio = ptdesc_folio(ptdesc);
@@ -2872,11 +2867,6 @@ static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc)
 	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
-static inline void pgtable_pte_page_dtor(struct page *page)
-{
-	ptdesc_pte_dtor(page_ptdesc(page));
-}
-
 #define pte_offset_map_lock(mm, pmd, address, ptlp)	\
 ({							\
 	spinlock_t *__ptl = pte_lockptr(mm, pmd);	\
@@ -2967,11 +2957,6 @@ static inline bool ptdesc_pmd_ctor(struct ptdesc *ptdesc)
 	return true;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
-{
-	return ptdesc_pmd_ctor(page_ptdesc(page));
-}
-
 static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc)
 {
 	struct folio *folio = ptdesc_folio(ptdesc);
@@ -2981,11 +2966,6 @@ static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc)
 	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
-static inline void pgtable_pmd_page_dtor(struct page *page)
-{
-	ptdesc_pmd_dtor(page_ptdesc(page));
-}
-
 /*
  * No scalability reason to split PUD locks yet, but follow the same pattern
  * as the PMD locks to make it easier if we decide to.  The VM should not be
-- 
2.39.2
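[Editor's illustration, not part of the patch] The documentation hunk above notes that ptdesc_pte_ctor() and ptdesc_pmd_ctor() can fail and that the failure must be handled properly. In arch pgalloc code the usual handling is to free the just-allocated table page and return NULL. The sketch below models that pattern in plain userspace C; every name in it (toy_pte_ctor, toy_pte_alloc_one, the struct layout) is a stand-in, not the kernel API:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Toy stand-in for struct ptdesc -- not the real kernel type. */
struct toy_ptdesc {
	void *ptl;      /* models the dynamically allocated split lock */
	char table[64]; /* pretend PTE table contents */
};

/* Models ptdesc_pte_ctor(): returns false when lock allocation fails. */
static bool toy_pte_ctor(struct toy_ptdesc *pt, bool simulate_oom)
{
	if (simulate_oom)
		return false;   /* caller must unwind */
	pt->ptl = malloc(sizeof(int));
	return pt->ptl != NULL;
}

/* Models ptdesc_pte_dtor(): releases what the ctor allocated. */
static void toy_pte_dtor(struct toy_ptdesc *pt)
{
	free(pt->ptl);
	pt->ptl = NULL;
}

/* Models pte_alloc_one(): on ctor failure, free the table and fail. */
static struct toy_ptdesc *toy_pte_alloc_one(bool simulate_oom)
{
	struct toy_ptdesc *pt = calloc(1, sizeof(*pt));

	if (!pt)
		return NULL;
	if (!toy_pte_ctor(pt, simulate_oom)) {
		free(pt);       /* the "handled properly" part */
		return NULL;
	}
	return pt;
}
```

The point is only the unwind ordering: the table allocation must be undone when the constructor fails, so no half-initialized page table escapes to the caller.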



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528144.820999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMb-0008L6-4Z; Mon, 01 May 2023 19:39:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528144.820999; Mon, 01 May 2023 19:39:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMa-0008Jy-W8; Mon, 01 May 2023 19:39:08 +0000
Received: by outflank-mailman (input) for mailman id 528144;
 Mon, 01 May 2023 19:39:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCz-0006PY-Nh
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:13 +0000
Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com
 [2607:f8b0:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70f6f602-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:08 +0200 (CEST)
Received: by mail-pl1-x630.google.com with SMTP id
 d9443c01a7336-1ab01bf474aso6107295ad.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:08 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70f6f602-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969347; x=1685561347;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=apg836Gn0N7QExyyhUUGNgc2fd2r3iyoAghRwsUjc2o=;
        b=WbTJO1ayjn3HXL/iQMZCmT2+RnFSF2O5WG+VKbNfOxDU8NvbfSgz+Wjs4UVgNNQcMr
         uLj57ayYRa/f+rOhAVP0jHM/2f33b+TbcsUuT8OausfFIQuWjGI50HrH5E0KiKkIgQwa
         0uM85OALkALC3dGk5nJ/tYYANgKg4uu8cRvFIwQEiY4YzwHIJvVidPyhHtwxmNKQYA0W
         tAWz4vl42KgoRYKzs8w7tUocWedsU0Shw+kefBEmySvrYYB7bH8pThcvpYZ6Dpmw316z
         icjhKqrruYJxczjDCcNsGgrwDrViIcKW8h7/U3ESAg45ipYAB08luKD04Ylpo9b00Q6j
         cFOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969347; x=1685561347;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=apg836Gn0N7QExyyhUUGNgc2fd2r3iyoAghRwsUjc2o=;
        b=BsN4KJFQnQgvdpxeGHifI4nYxUqKcgCYB+C3akbEzi9E/HJGpEpk8Rjm4sIM6FFVyT
         LnUD1X1I8pBM5NQeo4YDDP3DXk8mRBjRtMTID0kYDTZZPGqbTrBHMzF9sYDWgcArrE7F
         JGT20/NZc9dBHbTnuObriv6KrDEV3Y3CKYFlI4ttVNHMeRLChV/ve5Hxl6Dt8Up/JtSN
         wHFBcHH0kwCWAOGzAGvhgfqy/X3Ba/oq7oVF8dnfFdeFO62AJ/T4Vq4P253baN3S2LI7
         5Uv4SQBKH9QFat+XIgOJgjqHRlCI8AZaoONxaiCD9FA2+XCicYgUYtgmEpT0xY137g5S
         l/GQ==
X-Gm-Message-State: AC+VfDw90YHZHVbe3X4kmLf3QWuT53JG298vChN/pZewv2sWPj8sKwhc
	HWqVV6KpQTdUMH7XhveDfcQ=
X-Google-Smtp-Source: ACHHUZ6JhHSCQpD21/kJ78iavglhPlPTqvuf+3JEce/L0R67tH5klQ2MwjR0kPXU/JMC0Y1BeGjoZA==
X-Received: by 2002:a17:902:c94b:b0:1a8:626:6d9d with SMTP id i11-20020a170902c94b00b001a806266d9dmr19271440pla.62.1682969347146;
        Mon, 01 May 2023 12:29:07 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 23/34] hexagon: Convert __pte_free_tlb() to use ptdescs
Date: Mon,  1 May 2023 12:28:18 -0700
Message-Id: <20230501192829.17086-24-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/hexagon/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h
index f0c47e6a7427..0f8432430e68 100644
--- a/arch/hexagon/include/asm/pgalloc.h
+++ b/arch/hexagon/include/asm/pgalloc.h
@@ -87,10 +87,10 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
 		max_kernel_seg = pmdindex;
 }
 
-#define __pte_free_tlb(tlb, pte, addr)		\
-do {						\
-	pgtable_pte_page_dtor((pte));		\
-	tlb_remove_page((tlb), (pte));		\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	ptdesc_pte_dtor((page_ptdesc(pte)));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #endif
-- 
2.39.2
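[Editor's illustration, not part of the patch] The converted __pte_free_tlb() macro above keeps the ordering invariant of the old code: the destructor tears down the per-table state before the page is queued on the TLB batch for freeing. A minimal userspace model of that ordering, with purely illustrative names:

```c
/* Records the order in which the two steps of __pte_free_tlb() run. */
static int step;
static int dtor_step, remove_step;

/* Models ptdesc_pte_dtor(): must run while the table still exists. */
static void toy_ptdesc_pte_dtor(void)
{
	dtor_step = ++step;
}

/* Models tlb_remove_page_ptdesc(): hands the page to the TLB batch. */
static void toy_tlb_remove_ptdesc(void)
{
	remove_step = ++step;
}

/* Models the macro body: destructor first, then TLB removal. */
static void toy_pte_free_tlb(void)
{
	toy_ptdesc_pte_dtor();
	toy_tlb_remove_ptdesc();
}
```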



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528145.821017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMl-0000ij-CV; Mon, 01 May 2023 19:39:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528145.821017; Mon, 01 May 2023 19:39:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMl-0000iY-8B; Mon, 01 May 2023 19:39:19 +0000
Received: by outflank-mailman (input) for mailman id 528145;
 Mon, 01 May 2023 19:39:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZF1-0000m4-81
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:19 +0000
Received: from mail-qk1-x732.google.com (mail-qk1-x732.google.com
 [2607:f8b0:4864:20::732])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be9f0d3b-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:31:18 +0200 (CEST)
Received: by mail-qk1-x732.google.com with SMTP id
 af79cd13be357-75131c2997bso1451854885a.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:18 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.31.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:31:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be9f0d3b-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969477; x=1685561477;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Bnmwx4JL4zzfpoAgNv41/x6mI5Sl5Tvy0no2deSD0O4=;
        b=CbOoWqTSk3uKbjzelG8yqnld5byWglvqESK7A5ayA7t7shLwdOeyy9pYr2h0dHZ4sw
         Ioebaoj5c+mUhs5R428HbAlgESy36wct7HvPU+oF2sighIIgNW2S/bMhrwz2G2bU7eAR
         PzmvfvNi5zQlFQbN2MU0tjZW9ZGQ2YrqYPpfUK353KT9AyIsPVIIWXUt0bf5KdeuKy4r
         7ttyjdCx983mA5YglTw1av0sK693taU4SPfn7jm3Xk/tPYsgPA/bWeAXUDACZE5DCRsI
         HPGPu/bDWsw9EvYxBiodD1s8ObxO8fmcp+DI9gRq0PdosiMcfYJh0fkyFQCSwdkoHQTx
         BOCQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969477; x=1685561477;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Bnmwx4JL4zzfpoAgNv41/x6mI5Sl5Tvy0no2deSD0O4=;
        b=OQI8i/AibKzefLly3forRo3Vn51hGG72o0vOyamOejU3/TDG5XhHc/VckUYbFj6Bxv
         v8uxi9H0XekL3O/e6pp31yt9jUCNMTUfHrFrFSr0HjLtCJTMkphOD3KYegnLjsd/XkOL
         Cjz5nBe+yZb7BFLKEYugoUkhXIKNqpuMJR2qgrIPUdWBhbIVZV0sjN1X6zadsi21a8p0
         ffbfONklGA45eTuerewA3aylN8sENt2PdwN/vsvdOCjsrTFcU+rU0ID4Q2l/zIGy4RkI
         rj8z37aEQXLYAiVr1Ljg5aoZ61XxrH/mpf3z0a5PSyexZ78aepW28FmtqeOFAFUwIubi
         t2ig==
X-Gm-Message-State: AC+VfDwQjYN7BdUfKRbPznmeI2FGNy7z7uN618QopAZwL/Rwe7Qiy5Bc
	5RK/Xg1qig/bZkFkPFO3DlAosfsSEks=
X-Google-Smtp-Source: ACHHUZ7BLXGnIebUIbZnncmPO+ryFBqQfenJ7JOTBDj0u5O37zFK/qjVSkLkOTcSG87sWV42QOYX7Q==
X-Received: by 2002:ac8:4e4c:0:b0:3d4:17dc:3fcf with SMTP id e12-20020ac84e4c000000b003d417dc3fcfmr27298811qtw.5.1682969477226;
        Mon, 01 May 2023 12:31:17 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 12/14 RESEND] xenpm: Factor out a non-fatal cpuid_parse variant
Date: Mon,  1 May 2023 15:30:32 -0400
Message-Id: <20230501193034.88575-13-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Allow cpuid_parse to be re-used without terminating xenpm.  HWP will
re-use it to optionally parse a cpuid.  Unlike other uses of
cpuid_parse, parse_hwp_opts will take a variable number of arguments and
cannot just check argc.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
v2:
Retained because cpuid_parse handles numeric cpu numbers and "all".
---
 tools/misc/xenpm.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index b2defde0d4..6e74606970 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -79,17 +79,26 @@ void help_func(int argc, char *argv[])
     show_help();
 }
 
-static void parse_cpuid(const char *arg, int *cpuid)
+static int parse_cpuid_non_fatal(const char *arg, int *cpuid)
 {
     if ( sscanf(arg, "%d", cpuid) != 1 || *cpuid < 0 )
     {
         if ( strcasecmp(arg, "all") )
-        {
-            fprintf(stderr, "Invalid CPU identifier: '%s'\n", arg);
-            exit(EINVAL);
-        }
+            return -1;
+
         *cpuid = -1;
     }
+
+    return 0;
+}
+
+static void parse_cpuid(const char *arg, int *cpuid)
+{
+    if ( parse_cpuid_non_fatal(arg, cpuid) )
+    {
+        fprintf(stderr, "Invalid CPU identifier: '%s'\n", arg);
+        exit(EINVAL);
+    }
 }
 
 static void parse_cpuid_and_int(int argc, char *argv[],
-- 
2.40.0
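[Editor's illustration, not part of the patch] The refactoring above splits parsing into a non-fatal core returning -1 on bad input and a fatal wrapper that prints and exits. The core's parsing rules are plain C and can be exercised stand-alone; this sketch mirrors those rules under an illustrative name (toy_parse_cpuid) rather than reusing the xenpm symbol:

```c
#include <stdio.h>
#include <strings.h>	/* strcasecmp */

/*
 * Same rules as parse_cpuid_non_fatal(): a non-negative integer selects
 * a specific CPU, "all" (case-insensitive) maps to -1, anything else
 * is rejected with -1 so the caller decides whether that is fatal.
 */
static int toy_parse_cpuid(const char *arg, int *cpuid)
{
	if (sscanf(arg, "%d", cpuid) != 1 || *cpuid < 0)
	{
		if (strcasecmp(arg, "all"))
			return -1;

		*cpuid = -1;
	}

	return 0;
}
```

Returning an error code instead of calling exit() is what lets the planned HWP option parser treat an unparseable token as "not a cpuid, try the next kind of argument" rather than a hard failure.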



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528147.821027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMo-0001KK-Nh; Mon, 01 May 2023 19:39:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528147.821027; Mon, 01 May 2023 19:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMo-0001K4-Ht; Mon, 01 May 2023 19:39:22 +0000
Received: by outflank-mailman (input) for mailman id 528147;
 Mon, 01 May 2023 19:39:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCr-0006FS-G0
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:05 +0000
Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com
 [2607:f8b0:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6f497be2-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:29:05 +0200 (CEST)
Received: by mail-pl1-x633.google.com with SMTP id
 d9443c01a7336-1aaec6f189cso12398745ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:04 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f497be2-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969344; x=1685561344;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=oJJ/5JNaAhhn0BtCWyESRHZW29NfaaBB7RT6Dtpfjog=;
        b=dujVT07hfeADlXP8VLeHzHZos19FRJjW/aMFZqcOyBAO37TIyuOPUD4cjiEOtYecqR
         Tiwbs3Jgtkq4Kww3BZLsfUeYGH01HRped6BHBhotJFQfLn1orwI/QFojUqd07/woILZm
         IdwKD4oQO46zQ5FUSiD0ClypRdGvF7AHFnh12n9hs1VGHXuzfiiaf8+D2bQL3TQLnlID
         aypLmgidYgZS67oc/+3avE+c7doqeOxXcTi5kAqysB/3bLqLAcTPHmeArfdREDSIi+5K
         85L7sVL9riiLJDbWejHR83t0dYjCRXFIJ1/0gZmctdGxqkiAMg4Ak+2alOu75j8lTMAw
         rYkg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969344; x=1685561344;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=oJJ/5JNaAhhn0BtCWyESRHZW29NfaaBB7RT6Dtpfjog=;
        b=R+RhZb0vFRSt4YtjISBkUGwPCbBgRzL0FBJIDWUBVxJGuVLP7TH6W3Q030fUEgPTp6
         xDJpeLlr+6rO64nLZJkUuoBdvsj/IQ3VG7AjQ2FNp5gJi8g/AZ5Q8NsEzaUa7NW+sbFV
         rASAMDWNx9FgGqIx6mgyV/cJJAv/S9w2dLkIJKKXOl5aPa/edOLIyRd7W0K7bYK7Umkr
         XrzKkOXTQWWpzLYLQD7gzkq4US16H0WZQrJKeotniURb7tF/U8507EKd8lNCUkqZ10PM
         FFoKAsBgRTtGYJpwDz1njCFTP6i9GS+07MmMBFy6NsyphuqManDOiHz69C8mb7rNPAzA
         gSIw==
X-Gm-Message-State: AC+VfDxydXdzRTR7fqKakz322VeQbMsOkfp5UCjZS+sG6sKt5K6buCpm
	xkkTtqN/LDBejlNQgFfqi+Y=
X-Google-Smtp-Source: ACHHUZ4ZEnjwtlgm3inm4fu27SXTrMWZSABNUt55ss8dOEDonQf7mUoqMAdyKodh48zsDDd3raB2eg==
X-Received: by 2002:a17:902:f68b:b0:1a9:2e3d:fca2 with SMTP id l11-20020a170902f68b00b001a92e3dfca2mr18382952plg.33.1682969344394;
        Mon, 01 May 2023 12:29:04 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>
Subject: [PATCH v2 21/34] arm64: Convert various functions to use ptdescs
Date: Mon,  1 May 2023 12:28:16 -0700
Message-Id: <20230501192829.17086-22-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/arm64/include/asm/tlb.h | 14 ++++++++------
 arch/arm64/mm/mmu.c          |  7 ++++---
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index c995d1f4594f..6cb70c247e30 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -75,18 +75,20 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 				  unsigned long addr)
 {
-	pgtable_pte_page_dtor(pte);
-	tlb_remove_table(tlb, pte);
+	struct ptdesc *ptdesc = page_ptdesc(pte);
+
+	ptdesc_pte_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
 }
 
 #if CONFIG_PGTABLE_LEVELS > 2
 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 				  unsigned long addr)
 {
-	struct page *page = virt_to_page(pmdp);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmdp);
 
-	pgtable_pmd_page_dtor(page);
-	tlb_remove_table(tlb, page);
+	ptdesc_pmd_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
 }
 #endif
 
@@ -94,7 +96,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 				  unsigned long addr)
 {
-	tlb_remove_table(tlb, virt_to_page(pudp));
+	tlb_remove_ptdesc(tlb, virt_to_ptdesc(pudp));
 }
 #endif
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index af6bc8403ee4..5ba005fd607e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -426,6 +426,7 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
 static phys_addr_t pgd_pgtable_alloc(int shift)
 {
 	phys_addr_t pa = __pgd_pgtable_alloc(shift);
+	struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa));
 
 	/*
 	 * Call proper page table ctor in case later we need to
@@ -433,12 +434,12 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 	 * this pre-allocated page table.
 	 *
 	 * We don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK if pmd is
-	 * folded, and if so pgtable_pmd_page_ctor() becomes nop.
+	 * folded, and if so ptdesc_pmd_ctor() becomes nop.
 	 */
 	if (shift == PAGE_SHIFT)
-		BUG_ON(!pgtable_pte_page_ctor(phys_to_page(pa)));
+		BUG_ON(!ptdesc_pte_ctor(ptdesc));
 	else if (shift == PMD_SHIFT)
-		BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa)));
+		BUG_ON(!ptdesc_pmd_ctor(ptdesc));
 
 	return pa;
 }
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528151.821037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMs-0001oL-V6; Mon, 01 May 2023 19:39:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528151.821037; Mon, 01 May 2023 19:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMs-0001o9-RD; Mon, 01 May 2023 19:39:26 +0000
Received: by outflank-mailman (input) for mailman id 528151;
 Mon, 01 May 2023 19:39:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZD1-0006FS-LH
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:15 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74c83574-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:29:14 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id
 d9443c01a7336-1aaf91ae451so11836335ad.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:14 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74c83574-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969354; x=1685561354;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tniPUuVL526Jec1TVIaHRLAUx7OruO8jRIKlA5wqt8Q=;
        b=sCYdFD4O8Had2jReQPDULWIb6jXMWTBVOP1xIsSQDJFrjK4Unzq4xPir2vv9xD60ba
         KhnmERHxh4wjne9PTfzHFe7mUKtGQVGSRxsSTI7oGHQc7fKexIRwy4BhAuhQWGLGw+L4
         jhaBOvefDEjvTltxe1zcg9AOJCenWnwtbQvW8h9UKrmrEmLrYzROLydP0XMIyNyplCDF
         ksvZ9Q2L1rfMSznCSwk+WHikKvzxa7zVAjndA+MUieyvTGKh0Hnkx2VWar6ZLI/SEpj3
         zJ2Emm1btVtqn0B7TpGayRWt9h4mLeUHtWruxn5Hxr1lyas/Ibpq/YOzX0vO1bUnoNNT
         ju/g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969354; x=1685561354;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=tniPUuVL526Jec1TVIaHRLAUx7OruO8jRIKlA5wqt8Q=;
        b=YfoqjohuJkwH/jk3dm0c34FzWmudsx602spF/NN/rcr7AS/1vkCitNDdxcpxwGOmJj
         OT9GG5Iip9tk3YfYP2XomWqI//HJTm7LcAxzJcSIOQXEs2h5dXWxh99xSbOr+RFxDmcQ
         6z7EfHpIHbggm2Gcjzpd7LkOWI405GS+1zQ4LdhCV6oRA5fDMczWagnMJQhJRJqT6NlY
         kK47FPt2ayFfQegSXhIr6wU5gNrNIvGtH4uuquQQi4Y/JsV6I5xtKdgbhUZcueXWQ2an
         ygFXii0G3F1QceYVqF5mp+Y/z5TBaA1KTxeGj7SuvfD/eZCv+80BY0YIVHv6pR4Dnu6I
         E4gA==
X-Gm-Message-State: AC+VfDxeLjT15jXz6A42hvx2aLIXyJwcfSG51uPXdCOyug76Tcs7EFh5
	90sMuSqwJGBZt+jf9k/xCbQ=
X-Google-Smtp-Source: ACHHUZ6fkDxKllaaORZgzXEjR5COvh0uqYIPMfHZHm801Tjdn3OT0QOppKb8hXtDeBwjumbd6cc2Pw==
X-Received: by 2002:a17:902:c411:b0:19c:d309:4612 with SMTP id k17-20020a170902c41100b0019cd3094612mr20785827plk.6.1682969353666;
        Mon, 01 May 2023 12:29:13 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Dinh Nguyen <dinguyen@kernel.org>
Subject: [PATCH v2 27/34] nios2: Convert __pte_free_tlb() to use ptdescs
Date: Mon,  1 May 2023 12:28:22 -0700
Message-Id: <20230501192829.17086-28-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/nios2/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h
index ecd1657bb2ce..ed868f4c0ca9 100644
--- a/arch/nios2/include/asm/pgalloc.h
+++ b/arch/nios2/include/asm/pgalloc.h
@@ -28,10 +28,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
 
-#define __pte_free_tlb(tlb, pte, addr)				\
-	do {							\
-		pgtable_pte_page_dtor(pte);			\
-		tlb_remove_page((tlb), (pte));			\
+#define __pte_free_tlb(tlb, pte, addr)					\
+	do {								\
+		ptdesc_pte_dtor(page_ptdesc(pte));			\
+		tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 	} while (0)
 
 #endif /* _ASM_NIOS2_PGALLOC_H */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528152.821040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMt-0001qn-8V; Mon, 01 May 2023 19:39:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528152.821040; Mon, 01 May 2023 19:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMt-0001qN-54; Mon, 01 May 2023 19:39:27 +0000
Received: by outflank-mailman (input) for mailman id 528152;
 Mon, 01 May 2023 19:39:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCs-0006PY-NB
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:06 +0000
Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com
 [2607:f8b0:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6e774797-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:04 +0200 (CEST)
Received: by mail-pl1-x630.google.com with SMTP id
 d9443c01a7336-1aae5c2423dso20954225ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:04 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e774797-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969343; x=1685561343;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gfpIRfFq9zSU9T7uNfhlcYr/W6dIxhZkZgP28wF3SDA=;
        b=Qau1TBSyVipNmef8PNT07M/JoK5y55d5cqgZDlN4vhhP79tr9yaRETSYXvwSfxkzpQ
         1wzTUy/T5LzAEZJEeUbvFZuE+VickGgRQkGj6paYz1jDOFsUn9ZCDLvNYYI/Zyr2RUmg
         bm3l9m5jLCH0toRw7+aEb6gYmXflBLWLMpuQeA0OVXisZ1A5osNX02KzemiZmkdGT+6d
         H7+k3mn0V3HtUXPo/S8coTYf23tz7uGdOWK0nFPfjqqe58njMrJom/cuk3n4mMysX8dy
         jTZaIlbFQNlLjrIuXwQgtdhVp6dM9pTxTSdYHIHgZYka1AJxhJpIcTRf9fC09oM2h4fK
         Ax0A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969343; x=1685561343;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=gfpIRfFq9zSU9T7uNfhlcYr/W6dIxhZkZgP28wF3SDA=;
        b=kHUc4Hj2Dhsbn2oMdm/AQm20tiMvonMm8unQZzyrbrKlhdAYbTk5im9wlkAdE6N5sS
         HU6sXg37mwT2AwCXHrG3QTso91us/ih2aEF+yozLfDluEcNY7ZGDA6fzk9OaAX6RFBIx
         fBR5DHjW3bODdXyB63rIEScNAho95MaVmGeh2/lhpWeWMEH1c1iI91PKwqJ+yPHvfT0T
         A+PySmSpmEAIi53KEDXjbQwiFaz9ZTtMPZotRuHpaHqsHY4P2KCSTpCX2niDepiAy9mn
         KD/daGp7ghr5yjq6EKEcNSzj06W6Gfs8TLN1Ah5dVbnp5aiWWvBU1aORvwurELCTi8nk
         aH1A==
X-Gm-Message-State: AC+VfDxEbUVQVfTm7HSFzMv9UfBOB5CTnWCl1giBurkgBDNVTZ0jHvfg
	u0vYiL9fZmZ1WKhpzm56ZUY=
X-Google-Smtp-Source: ACHHUZ7gDAee3JEsjrpH2Z6atN7OAzqWZ2l6e0m4gqF7AbuDA0ihHMvj1UGUNCCunfz91CTNrFO0iw==
X-Received: by 2002:a17:902:e547:b0:1ab:8f4:af3a with SMTP id n7-20020a170902e54700b001ab08f4af3amr217267plf.39.1682969342957;
        Mon, 01 May 2023 12:29:02 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>
Subject: [PATCH v2 20/34] arm: Convert various functions to use ptdescs
Date: Mon,  1 May 2023 12:28:15 -0700
Message-Id: <20230501192829.17086-21-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructors/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

late_alloc() also uses the __get_free_pages() helper function. Convert
this to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/arm/include/asm/tlb.h | 12 +++++++-----
 arch/arm/mm/mmu.c          |  6 +++---
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index b8cbe03ad260..9ab8a6929d35 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -39,7 +39,9 @@ static inline void __tlb_remove_table(void *_table)
 static inline void
 __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr)
 {
-	pgtable_pte_page_dtor(pte);
+	struct ptdesc *ptdesc = page_ptdesc(pte);
+
+	ptdesc_pte_dtor(ptdesc);
 
 #ifndef CONFIG_ARM_LPAE
 	/*
@@ -50,17 +52,17 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr)
 	__tlb_adjust_range(tlb, addr - PAGE_SIZE, 2 * PAGE_SIZE);
 #endif
 
-	tlb_remove_table(tlb, pte);
+	tlb_remove_ptdesc(tlb, ptdesc);
 }
 
 static inline void
 __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
 {
 #ifdef CONFIG_ARM_LPAE
-	struct page *page = virt_to_page(pmdp);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmdp);
 
-	pgtable_pmd_page_dtor(page);
-	tlb_remove_table(tlb, page);
+	ptdesc_pmd_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
 #endif
 }
 
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 463fc2a8448f..7add505bd797 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -737,11 +737,11 @@ static void __init *early_alloc(unsigned long sz)
 
 static void *__init late_alloc(unsigned long sz)
 {
-	void *ptr = (void *)__get_free_pages(GFP_PGTABLE_KERNEL, get_order(sz));
+	void *ptdesc = ptdesc_alloc(GFP_PGTABLE_KERNEL, get_order(sz));
 
-	if (!ptr || !pgtable_pte_page_ctor(virt_to_page(ptr)))
+	if (!ptdesc || !ptdesc_pte_ctor(ptdesc))
 		BUG();
-	return ptr;
+	return ptdesc_address(ptdesc);
 }
 
 static pte_t * __init arm_pte_alloc(pmd_t *pmd, unsigned long addr,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528153.821048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMt-0001yd-RG; Mon, 01 May 2023 19:39:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528153.821048; Mon, 01 May 2023 19:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMt-0001vv-IA; Mon, 01 May 2023 19:39:27 +0000
Received: by outflank-mailman (input) for mailman id 528153;
 Mon, 01 May 2023 19:39:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZCv-0006PY-NM
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:09 +0000
Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com
 [2607:f8b0:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7022b5c1-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:07 +0200 (CEST)
Received: by mail-pl1-x62d.google.com with SMTP id
 d9443c01a7336-1aaea3909d1so17752805ad.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:07 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7022b5c1-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969346; x=1685561346;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WInZHFLF0E37CPvH/fyQuQmpONrFi7mPGahLu3Z9gNY=;
        b=q4na0YpY6+HIdR7eCjpOfgl3pLZKXaYPSEQBquag8z/68f0H1ARrXFU5GHdi9YxK8/
         rB/VLGJXy6D4ZufE/2LOOEwnretxgWAB++ZlGVRxVIYCoUOlMN8apcsmkHtbQevtOOJx
         LNKiQlcElPDsuGjD80MSn3zT41r1jijBtepWy2itQQP8kPb/qbuR3Nu9du37XsuFat5b
         tY9vSCaH65FZJZYGzqq6Sev5FEiEarceFI+PkCB4zysu+TPDmYzeupQ3qWrtewRiW+Ef
         pPbEC2E+2Gu9odmO4zWHfrgw5vSTXCdgYdvlBNlEJqZMmLFOH+elBEgtadEGjmhbbSeV
         twkg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969346; x=1685561346;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=WInZHFLF0E37CPvH/fyQuQmpONrFi7mPGahLu3Z9gNY=;
        b=VduIuFvKOXAydoDzd1DVCpZmEjA+SFree7Zz66sH7I296trGPa7gES+Z0BCrj5J8NV
         Eb2nNZ+tuUWPAMHRUVDnpY36ixZBj78IgPB+Zwf+zFU9DEBFdT0h/f+czgPzyYdKn4/i
         R6AfDW5794QW6PX5HWmB/hZx1TThXPcEErd13sPbFpv1YdB1UWkQycnaZIbfwjf7JUJs
         BO76r+9pckbcef3bZO8nJARMQWh26GG3m9WPakH/39ev+gVTtIiHim5La2pFnA43CLHf
         txthxPBCTjkd9w7mCPdZDYPhd8HUXhbbIRufCglBQXWY0z+WqiV+miRmWCeCpF0GtCw9
         uLPA==
X-Gm-Message-State: AC+VfDy9HNVgNrd1QOpZl6R6l1HN7NQ0I09FphW8NUIqdJqnn75w93uv
	fKCzS0v1TaXd2EMAtIkAzww=
X-Google-Smtp-Source: ACHHUZ4G7uSDpj0Az+Rf76NPdw6reyXbUYX/RD5L7uNUC8eMCewkmfrtYB+I6yNLHyDYyReDKoyXCw==
X-Received: by 2002:a17:902:c947:b0:1ab:8f4:af21 with SMTP id i7-20020a170902c94700b001ab08f4af21mr235908pla.42.1682969345737;
        Mon, 01 May 2023 12:29:05 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v2 22/34] csky: Convert __pte_free_tlb() to use ptdescs
Date: Mon,  1 May 2023 12:28:17 -0700
Message-Id: <20230501192829.17086-23-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructors/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/csky/include/asm/pgalloc.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h
index 7d57e5da0914..af26f1191b43 100644
--- a/arch/csky/include/asm/pgalloc.h
+++ b/arch/csky/include/asm/pgalloc.h
@@ -63,8 +63,8 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 #define __pte_free_tlb(tlb, pte, address)		\
 do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page(tlb, pte);			\
+	ptdesc_pte_dtor(page_ptdesc(pte));		\
+	tlb_remove_page_ptdesc(tlb, page_ptdesc(pte));	\
 } while (0)
 
 extern void pagetable_init(void);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528159.821067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMw-0002m0-GA; Mon, 01 May 2023 19:39:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528159.821067; Mon, 01 May 2023 19:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZMw-0002lc-9Z; Mon, 01 May 2023 19:39:30 +0000
Received: by outflank-mailman (input) for mailman id 528159;
 Mon, 01 May 2023 19:39:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZD4-0006PY-Oa
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:18 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 730680a3-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:11 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id
 d9443c01a7336-1aaf70676b6so9592365ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:11 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 730680a3-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969350; x=1685561350;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4OtNgWOkZ3LfZLHJ8GzTdJkHLxMJcmtHKM1yrsNlf/Q=;
        b=Z7q9x2Xo+R2VkP+SAoL4No5SPSbkqDSqv1O/IrhmxSJUfCVapYFLSqvpVv77DdaaNp
         7xLFfeVlS+/Z0T5loFjBwxXBtJMOL0SomvnLpxrMgUE0H3Z2NwMVPWqjWZZaWw/Xtz9H
         bXhAt7jaDl6TAQEvatyLSYOokashb8FOV+mCaWLsgTEKm8FhtMTj/edGzjuLKaTQnJVx
         0eZhRzcKCCgmZxOyeAj8qE7Nk7KdCTP6wyrMnmtMZ4gp104XnMq6z3YcmLmYurGQiyUb
         qQCTGhNGKLl+87vhFPVut1PJmPE/0ePjlrtCLEgAAJx+RTgFbaFyPCMZ3prXkkSGna7E
         KEcQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969350; x=1685561350;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=4OtNgWOkZ3LfZLHJ8GzTdJkHLxMJcmtHKM1yrsNlf/Q=;
        b=Ovv59nhvdFcq84lTwAVBkI4cVibM9bWtFdL33hN8CmYXeYDBr2hIIboEFjVipz/Hkd
         9nigqoRzmsJLT7UNHpQFFrOTO7lVasOjJ3yKupwYYBuGg7xkAXRjkzuBGwZu0RdkbZ7a
         ejOSXqtPjoFrKr4X3+HDNvFwNuSP5BFMpOcLZtcTIOGCUGZhwMbxtL6Us6MoGOymynYA
         pEEAnGPNf7vIRVNskMuwOd75wX50fWoqTddRSZW2eoH6dOg36882cvqmTAvndaAqqgFS
         jiZFBW1tdXNHNILtcaLLQvtG70lAd8SsZ84iYsvKz9/VojpP/xymwsVWN+iNCHSqkhPH
         T77Q==
X-Gm-Message-State: AC+VfDxG0AX1GtMTnEEoJ0LL7AxURdKj/aQXror3BxJwLGxAtr+X/mmf
	MAyLKyMZ+/4Dx7IIqp8HkrlMgyaFieLiVW+c
X-Google-Smtp-Source: ACHHUZ6ZQG7EefBUYeumgjpKENexw5lTdMP6ZEhxCncdw/ecXlC+vhc9H9rF3r/Lw9bkiQ71uazgBQ==
X-Received: by 2002:a17:902:a516:b0:1a6:6b9d:5e0f with SMTP id s22-20020a170902a51600b001a66b9d5e0fmr14561274plq.17.1682969350547;
        Mon, 01 May 2023 12:29:10 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>
Subject: [PATCH v2 25/34] m68k: Convert various functions to use ptdescs
Date: Mon,  1 May 2023 12:28:20 -0700
Message-Id: <20230501192829.17086-26-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructors/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/m68k/include/asm/mcf_pgalloc.h  | 41 ++++++++++++++--------------
 arch/m68k/include/asm/sun3_pgalloc.h |  8 +++---
 arch/m68k/mm/motorola.c              |  4 +--
 3 files changed, 27 insertions(+), 26 deletions(-)

diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h
index 5c2c0a864524..b0e909e23e14 100644
--- a/arch/m68k/include/asm/mcf_pgalloc.h
+++ b/arch/m68k/include/asm/mcf_pgalloc.h
@@ -7,20 +7,19 @@
 
 extern inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	free_page((unsigned long) pte);
+	ptdesc_free(virt_to_ptdesc(pte));
 }
 
 extern const char bad_pmd_string[];
 
 extern inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
 {
-	unsigned long page = __get_free_page(GFP_DMA);
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_DMA | __GFP_ZERO, 0);
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
 
-	memset((void *)page, 0, PAGE_SIZE);
-	return (pte_t *) (page);
+	return (pte_t *) (ptdesc_address(ptdesc));
 }
 
 extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
@@ -35,36 +34,36 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable,
 				  unsigned long address)
 {
-	struct page *page = virt_to_page(pgtable);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
 
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-	struct page *page = alloc_pages(GFP_DMA, 0);
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_DMA, 0);
 	pte_t *pte;
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!ptdesc_pte_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	pte = page_address(page);
-	clear_page(pte);
+	pte = ptdesc_address(ptdesc);
+	ptdesc_clear(pte);
 
 	return pte;
 }
 
 static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
 {
-	struct page *page = virt_to_page(pgtable);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
 
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	ptdesc_pte_dtor(ptdesc);
+	ptdesc_free(ptdesc);
 }
 
 /*
@@ -75,16 +74,18 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_page((unsigned long) pgd);
+	ptdesc_free(virt_to_ptdesc(pgd));
 }
 
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	pgd_t *new_pgd;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_DMA | __GFP_NOWARN, 0);
 
-	new_pgd = (pgd_t *)__get_free_page(GFP_DMA | __GFP_NOWARN);
-	if (!new_pgd)
+	if (!ptdesc)
 		return NULL;
+	new_pgd = (pgd_t *) ptdesc_address(ptdesc);
+
 	memcpy(new_pgd, swapper_pg_dir, PTRS_PER_PGD * sizeof(pgd_t));
 	memset(new_pgd, 0, PAGE_OFFSET >> PGDIR_SHIFT);
 	return new_pgd;
diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h
index 198036aff519..013d375fc239 100644
--- a/arch/m68k/include/asm/sun3_pgalloc.h
+++ b/arch/m68k/include/asm/sun3_pgalloc.h
@@ -17,10 +17,10 @@
 
 extern const char bad_pmd_string[];
 
-#define __pte_free_tlb(tlb,pte,addr)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), pte);			\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));	\
 } while (0)
 
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 911301224078..f7adb86b37fb 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -161,7 +161,7 @@ void *get_pointer_table(int type)
 			 * m68k doesn't have SPLIT_PTE_PTLOCKS for not having
 			 * SMP.
 			 */
-			pgtable_pte_page_ctor(virt_to_page(page));
+			ptdesc_pte_ctor(virt_to_ptdesc(page));
 		}
 
 		mmu_page_ctor(page);
@@ -201,7 +201,7 @@ int free_pointer_table(void *table, int type)
 		list_del(dp);
 		mmu_page_dtor((void *)page);
 		if (type == TABLE_PTE)
-			pgtable_pte_page_dtor(virt_to_page(page));
+			ptdesc_pte_dtor(virt_to_ptdesc((void *)page));
 		free_page (page);
 		return 1;
 	} else if (ptable_list[type].next != dp) {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528188.821077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNF-0004VH-R9; Mon, 01 May 2023 19:39:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528188.821077; Mon, 01 May 2023 19:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNF-0004V6-Nl; Mon, 01 May 2023 19:39:49 +0000
Received: by outflank-mailman (input) for mailman id 528188;
 Mon, 01 May 2023 19:39:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZD3-0006FS-D8
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:17 +0000
Received: from mail-pg1-x532.google.com (mail-pg1-x532.google.com
 [2607:f8b0:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75db82fb-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:29:16 +0200 (CEST)
Received: by mail-pg1-x532.google.com with SMTP id
 41be03b00d2f7-517c01edaaaso1819418a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:16 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75db82fb-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969355; x=1685561355;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=M2PCFvKq9JWcIC4Exd9oDYO0v0PQOHyXRap+yEgbOb4=;
        b=pbOgq641Q1VbKkWYpSKEOBTaJbK7N+gn3tPQxAnQMHVxsbuYS7d6gCFZWWRyAoJPvd
         QWBoalX7ztTkXy1/S77S44wUsaYiw+c9l0pjoAvj83RwanS2FHVDTD9b5yLt3dvSiRkC
         Y6FbuRkXeTez8mU8JwqFdJ8q3udPbLVyCGPniHNdmE3Bvis2G1N5Cf54GBHK19m19dh2
         ECy1qDfhwNI4K2KlZeZKdUsb//pSuTNjtcr9BAZz+ZE7jjtZb0g8W7o+eLshkW2fWRhk
         rjbBHucvZ06kdc4e0Uijz1mCvV4V1RY1P7Hdmh8XbAUQYZvgzIuSpNmzClPaEZvw/eBL
         VKvQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969355; x=1685561355;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=M2PCFvKq9JWcIC4Exd9oDYO0v0PQOHyXRap+yEgbOb4=;
        b=hz/KyNhYBED8qd+/giVzthLgrntYWj0NGtqsCZEfx3EFuQwWXzbw++LUxEipbXyQcx
         6flNl9UHeotwDEBUbsam9nIrfvzhow0K4I688xHu5LfJ0++FqY5nrsRB4tcUETE++Ij6
         R+x8X4Ru5F10T9BodXZVb/4sqmEJ16KnGwmrcXoppGBMKATm71IUReBtX2jerDqnHqc5
         dzvToWkcBZ3zx0N35malvmru+6OzPf0EEIvKuOxMKDtOElE66cLhuRdbBN5XNlZwSsxy
         vOg5PavotJFOUYEGsYVeA0du58l0F4c8J1xQ5HvJ4CaAaWZhRhAgs1Ma6dPz7XY56z+o
         Z2Nw==
X-Gm-Message-State: AC+VfDy3cd6qYtBCCh65FoW0YeplIukOlmPyUxXIK3vR8hikWoHCKMvl
	IVshxyMz56s7Ztib54djUUI=
X-Google-Smtp-Source: ACHHUZ5aUwQpmXFq62IOwkeTiJiFUSz6Wt9PaXTxNdRH/kylJwLUJvf08r71LifeGRMo3EYnyRIKBw==
X-Received: by 2002:a17:902:dacb:b0:1a8:d7a:9255 with SMTP id q11-20020a170902dacb00b001a80d7a9255mr17869996plx.54.1682969355424;
        Mon, 01 May 2023 12:29:15 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Jonas Bonn <jonas@southpole.se>
Subject: [PATCH v2 28/34] openrisc: Convert __pte_free_tlb() to use ptdescs
Date: Mon,  1 May 2023 12:28:23 -0700
Message-Id: <20230501192829.17086-29-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructors/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/openrisc/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h
index b7b2b8d16fad..14e641686281 100644
--- a/arch/openrisc/include/asm/pgalloc.h
+++ b/arch/openrisc/include/asm/pgalloc.h
@@ -66,10 +66,10 @@ extern inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm);
 
-#define __pte_free_tlb(tlb, pte, addr)	\
-do {					\
-	pgtable_pte_page_dtor(pte);	\
-	tlb_remove_page((tlb), (pte));	\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #endif
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528189.821083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNG-0004aD-6T; Mon, 01 May 2023 19:39:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528189.821083; Mon, 01 May 2023 19:39:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNG-0004Zt-1F; Mon, 01 May 2023 19:39:50 +0000
Received: by outflank-mailman (input) for mailman id 528189;
 Mon, 01 May 2023 19:39:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZD2-0006PY-OK
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:16 +0000
Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com
 [2607:f8b0:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7215b18d-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:10 +0200 (CEST)
Received: by mail-pl1-x62e.google.com with SMTP id
 d9443c01a7336-1aaec9ad820so16454975ad.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:10 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7215b18d-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969349; x=1685561349;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Pf7K/iFMD6od7/SRYpuPsYcodkkXMgOdjXv5YJvZldk=;
        b=Ln4mDR/5PqhQgedxrHvQljALDEWoa3b+hRIGv35D/XEQopkojKeYky16q/KJiQP190
         gOXMLkWlsxRYt/qNbMMHduRgO5rwSHkWxXdTawxJdNC4w4aAhizbEp/u5yJIFvo+hvkC
         CzCza/yOOV7wQ3CRXlPMEZ6eMm1zhczAF9AJlp92How5r4l3oEpl8xhmNNQZ5IRuQriJ
         5iq8Jlorps05Qifajbp6RIvsWCW3JV3LIBqvQ7RLXCzoWQn8RP/izc4h1lgdgQ7smZ+o
         cGeex+nu9iv1OEbJv0WRRlF1TRZ+PJHxnItl8vKiR8iqu8w5u+UliAsQ4ZVuON7yHbns
         fmHg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969349; x=1685561349;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Pf7K/iFMD6od7/SRYpuPsYcodkkXMgOdjXv5YJvZldk=;
        b=KhZtg5sjhEFLZeizPlqdgud7VhP47V8DLrB0V/lGqvKm1ehWVsuguphCTLXFRrIDvc
         hWzgrrVCofJTVSIyERppe9fz6zdyC2uzyX+MbC9NPCa7xfUes4why3JN7VsFJmntbD9r
         5nG7afGhRDpnTs3iStct+b/HscbyALIqB1kUNp04PImtOFSDG+4MSnO8Sb0zZXxCbAX7
         drh+R+kurgzvA4lFmGuSbmyTg5Gc9HwYmWAxfYMcBeEnr6f0ECZHeFt+t5yJFmbrFEJJ
         yheCLbiNoBrmH8u0JBFQu4pB+a99ZGrnGZodlEhLxOIr+69LXFK4dX0Dn5uc/Wyn9zOG
         LcGA==
X-Gm-Message-State: AC+VfDybrLnsXvHEtnIsYpYBg1+GzSpLDycKVuiXi1C4s/6hlsOi2tWG
	CuSqUfkcx2AbK53XMKo10o2g2Krei6drKeSA
X-Google-Smtp-Source: ACHHUZ4pGEgDP0ct3SBtbJjK9qWlsOBNKoJkb8hPo4/muNsqGBXZA1NHW06v64HPJmj4Ckit7F5PBA==
X-Received: by 2002:a17:903:2352:b0:1aa:ed6f:29c2 with SMTP id c18-20020a170903235200b001aaed6f29c2mr6827893plh.11.1682969349038;
        Mon, 01 May 2023 12:29:09 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Huacai Chen <chenhuacai@kernel.org>
Subject: [PATCH v2 24/34] loongarch: Convert various functions to use ptdescs
Date: Mon,  1 May 2023 12:28:19 -0700
Message-Id: <20230501192829.17086-25-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/loongarch/include/asm/pgalloc.h | 27 +++++++++++++++------------
 arch/loongarch/mm/pgtable.c          |  7 ++++---
 2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h
index af1d1e4a6965..1fe074f85b6b 100644
--- a/arch/loongarch/include/asm/pgalloc.h
+++ b/arch/loongarch/include/asm/pgalloc.h
@@ -45,9 +45,9 @@ extern void pagetable_init(void);
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
 
 #define __pte_free_tlb(tlb, pte, address)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), pte);			\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));	\
 } while (0)
 
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -55,18 +55,18 @@ do {							\
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pmd_t *pmd;
-	struct page *pg;
+	struct ptdesc *ptdesc;
 
-	pg = alloc_page(GFP_KERNEL_ACCOUNT);
-	if (!pg)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, 0);
+	if (!ptdesc)
 		return NULL;
 
-	if (!pgtable_pmd_page_ctor(pg)) {
-		__free_page(pg);
+	if (!ptdesc_pmd_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	pmd = (pmd_t *)page_address(pg);
+	pmd = (pmd_t *)ptdesc_address(ptdesc);
 	pmd_init(pmd);
 	return pmd;
 }
@@ -80,10 +80,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pud_t *pud;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
 
-	pud = (pud_t *) __get_free_page(GFP_KERNEL);
-	if (pud)
-		pud_init(pud);
+	if (!ptdesc)
+		return NULL;
+	pud = (pud_t *)ptdesc_address(ptdesc);
+
+	pud_init(pud);
 	return pud;
 }
 
diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
index 36a6dc0148ae..ff07b8f1ef30 100644
--- a/arch/loongarch/mm/pgtable.c
+++ b/arch/loongarch/mm/pgtable.c
@@ -11,10 +11,11 @@
 
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-	pgd_t *ret, *init;
+	pgd_t *init, *ret = NULL;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
 
-	ret = (pgd_t *) __get_free_page(GFP_KERNEL);
-	if (ret) {
+	if (ptdesc) {
+		ret = (pgd_t *)ptdesc_address(ptdesc);
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init(ret);
 		memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528192.821089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNG-0004iB-NZ; Mon, 01 May 2023 19:39:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528192.821089; Mon, 01 May 2023 19:39:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNG-0004hC-Ei; Mon, 01 May 2023 19:39:50 +0000
Received: by outflank-mailman (input) for mailman id 528192;
 Mon, 01 May 2023 19:39:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZDB-0006FS-Oh
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:25 +0000
Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com
 [2607:f8b0:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 79e204dc-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:29:22 +0200 (CEST)
Received: by mail-pl1-x633.google.com with SMTP id
 d9443c01a7336-1aaec6f189cso12402295ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:22 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79e204dc-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969362; x=1685561362;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=m8/b+eKq2obnq3wrn4AaChyOhPXbZKipyE2UTOB7XHg=;
        b=Q3jl6YttJDwUuWKyn1dVs6I8W94SFWxxciTJdLpT/LELBYCeGyev+ExIC9w29x2Yt1
         zXHIDz5du52vN2frCHR+hIpcwuJ6c3WNhfC/Gsy1nS14RuXHFXtXj5G9xq8KbaH8upCB
         hoZ+KR9ZQITjkQA540sZmLX6PX4iGNe7NpnWG1G2AsO+NZdypSRuEfFuzqh0wPvQQaGw
         4XpO3Q1D/iCtA92s97XY1Or0L7GqG2SHJxpwKx0yuxfIbjPslNXarskrtkCCtU8UwW5x
         DSxt7iwfG4oFpxWasaAVqe2jz7cH5xxqomQxklMjhuQacqvGLIa7+5cde3ROrtfKL2NP
         oDNA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969362; x=1685561362;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=m8/b+eKq2obnq3wrn4AaChyOhPXbZKipyE2UTOB7XHg=;
        b=P4LA8mL83iGpVPZCMEul+Jt+u50RS0EsV5MtSc1aTqjRnUWWEIiBqyQVVN+a0663xB
         hL7h8nD+zrA3egyVVlfG6WQtjO9EZey0sZdjtyvSKCbgOQoPIeb/jy98bEz2HX6mq7dP
         45tiNu8DI421+Kb08fdzejl1BFaoHoVpDkUKu5I3W3Hi+8Rx2ocAyxXftV1N7wwL1slf
         bP343e1MXwOmWrcTcLcllTMPhUuv6+hmtJtERgYkWqOdB3cWnlhwDB9viHe8kv1jy73x
         sR/h7Xk0F+SFzNVhILsMzF84ttfETUs1OIqRs5NyJga7EQ6o54ZKoMxyliwDELBpDAfy
         zX4w==
X-Gm-Message-State: AC+VfDz9Jlg/ZWgIzmFzkC+6c/P/I+6hRNS03ZSfRJFnkqbnnGLZHLnS
	1NJ59Ky4/TGr8uTWo+0rIBw=
X-Google-Smtp-Source: ACHHUZ7SACaylL3/aSyKhkx7S2g1HYBvFG08B1pme9nPnu6u4pSNBbJS188a/Kq3fbmOV3eKOsv72w==
X-Received: by 2002:a17:902:db0b:b0:1aa:e5cd:647a with SMTP id m11-20020a170902db0b00b001aae5cd647amr7875588plx.23.1682969362180;
        Mon, 01 May 2023 12:29:22 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Richard Weinberger <richard@nod.at>
Subject: [PATCH v2 33/34] um: Convert {pmd, pte}_free_tlb() to use ptdescs
Date: Mon,  1 May 2023 12:28:28 -0700
Message-Id: <20230501192829.17086-34-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert __pte_free_tlb() and __pmd_free_tlb() to use
ptdescs. This also cleans up some spacing issues.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/um/include/asm/pgalloc.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h
index 8ec7cd46dd96..760b029505c1 100644
--- a/arch/um/include/asm/pgalloc.h
+++ b/arch/um/include/asm/pgalloc.h
@@ -25,19 +25,19 @@
  */
 extern pgd_t *pgd_alloc(struct mm_struct *);
 
-#define __pte_free_tlb(tlb,pte, address)		\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb),(pte));			\
+#define __pte_free_tlb(tlb, pte, address)			\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #ifdef CONFIG_3_LEVEL_PGTABLES
 
-#define __pmd_free_tlb(tlb, pmd, address)		\
-do {							\
-	pgtable_pmd_page_dtor(virt_to_page(pmd));	\
-	tlb_remove_page((tlb),virt_to_page(pmd));	\
-} while (0)						\
+#define __pmd_free_tlb(tlb, pmd, address)			\
+do {								\
+	ptdesc_pmd_dtor(virt_to_ptdesc(pmd));			\
+	tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd));	\
+} while (0)
 
 #endif
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528202.821107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNJ-0005Oj-5S; Mon, 01 May 2023 19:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528202.821107; Mon, 01 May 2023 19:39:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNJ-0005OU-2T; Mon, 01 May 2023 19:39:53 +0000
Received: by outflank-mailman (input) for mailman id 528202;
 Mon, 01 May 2023 19:39:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZD8-0006PY-Oh
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:22 +0000
Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com
 [2607:f8b0:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 73d8f659-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:13 +0200 (CEST)
Received: by mail-pl1-x635.google.com with SMTP id
 d9443c01a7336-1aad5245571so16972035ad.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:13 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73d8f659-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969352; x=1685561352;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kyicI3UFSJTH3R33HGjLFGEyUDrTPXlIkqJCgIfzdrc=;
        b=bIC3wiMrkZ043YRG29tisEfS2j+UeOAsmyII3EDzrldtjEb8icghvzgUTYdQYM4doj
         vdu0JKDKQcL03t0/j67MJsK3EMdVvlx3Lc/9NrJJJYh+5KdPrYzRKBiHtTGqpBRhkA5a
         ZboUsEenHMS9VrUn3gKI66CrCL9Z8GCruZLIMyl89zj2oR4Tg7uJGI/BsPQUk9uXq4w7
         IEqVWzOt4YqC8i9AxlAbu1JP49n5XnekAOLER4FSWBBd0M0ka47bttpu+oGYYPDkowKQ
         ep35YwloCTCGYeII4Pobz7D0t4NcJCq50AMppyxrvfN/Wt0IBRqMUSxoDgh1JHSYp5Pm
         g50g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969352; x=1685561352;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=kyicI3UFSJTH3R33HGjLFGEyUDrTPXlIkqJCgIfzdrc=;
        b=FitvQea9s9nu065KvmRCz5AWJvOUpPS8WkEl+3BAjmo92H3mvX9J41C3JVOMuQke/h
         FlTGk4hT2FRi8vZB7nPgX8TA6xJOZySNS0CnyhAGbSsAZRQONchWaRefojRkBvtGU7j5
         fBlZgXIpd+qSSUxp3bYYVR1TSV8hZWkM+FSEVxdu2+OiiJTQ6CxveukDWKgFApblfkAO
         nV1aqU7HhnowR0G1lXVys6x46e/To9Qu1TDkL0fgsen6/BVScJSfbBuujHeL5DJq03wQ
         IzWDBGyNQhDkQwyjm/3wcLOwejXWVvUyEfAA+wKepzUcBanX3JKNAAp1R/6/1Uly3XVg
         2d0g==
X-Gm-Message-State: AC+VfDyAQK0tPFtFavPq6l0hc5PdSHvLlibf7m1K8YjWVS7vEELaIA7n
	THuHl3ZB7MiVKPMU27Kam7U=
X-Google-Smtp-Source: ACHHUZ6JGwiASceIv2N+eQzagLfWHlR6lj5LsM9PIBmbvfqlnJYs1oK6NLyKl7E/zGC8k10AZHOifw==
X-Received: by 2002:a17:902:ea0b:b0:1a6:3b9c:7fea with SMTP id s11-20020a170902ea0b00b001a63b9c7feamr19311944plg.36.1682969351902;
        Mon, 01 May 2023 12:29:11 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Subject: [PATCH v2 26/34] mips: Convert various functions to use ptdescs
Date: Mon,  1 May 2023 12:28:21 -0700
Message-Id: <20230501192829.17086-27-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use ptdesc_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/mips/include/asm/pgalloc.h | 31 +++++++++++++++++--------------
 arch/mips/mm/pgtable.c          |  7 ++++---
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
index f72e737dda21..7f7cc3140b27 100644
--- a/arch/mips/include/asm/pgalloc.h
+++ b/arch/mips/include/asm/pgalloc.h
@@ -51,13 +51,13 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm);
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_pages((unsigned long)pgd, PGD_TABLE_ORDER);
+	ptdesc_free(virt_to_ptdesc(pgd));
 }
 
-#define __pte_free_tlb(tlb,pte,address)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), pte);			\
+#define __pte_free_tlb(tlb, pte, address)			\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));	\
 } while (0)
 
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -65,18 +65,18 @@ do {							\
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pmd_t *pmd;
-	struct page *pg;
+	struct ptdesc *ptdesc;
 
-	pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);
-	if (!pg)
+	ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);
+	if (!ptdesc)
 		return NULL;
 
-	if (!pgtable_pmd_page_ctor(pg)) {
-		__free_pages(pg, PMD_TABLE_ORDER);
+	if (!ptdesc_pmd_ctor(ptdesc)) {
+		ptdesc_free(ptdesc);
 		return NULL;
 	}
 
-	pmd = (pmd_t *)page_address(pg);
+	pmd = (pmd_t *)ptdesc_address(ptdesc);
 	pmd_init(pmd);
 	return pmd;
 }
@@ -90,10 +90,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pud_t *pud;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, PUD_TABLE_ORDER);
 
-	pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_TABLE_ORDER);
-	if (pud)
-		pud_init(pud);
+	if (!ptdesc)
+		return NULL;
+	pud = (pud_t *)ptdesc_address(ptdesc);
+
+	pud_init(pud);
 	return pud;
 }
 
diff --git a/arch/mips/mm/pgtable.c b/arch/mips/mm/pgtable.c
index b13314be5d0e..d626db9ac224 100644
--- a/arch/mips/mm/pgtable.c
+++ b/arch/mips/mm/pgtable.c
@@ -10,10 +10,11 @@
 
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-	pgd_t *ret, *init;
+	pgd_t *init, *ret = NULL;
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, PGD_TABLE_ORDER);
 
-	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER);
-	if (ret) {
+	if (ptdesc) {
+		ret = (pgd_t *) ptdesc_address(ptdesc);
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init(ret);
 		memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:39:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:39:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528205.821117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNP-0005x0-HW; Mon, 01 May 2023 19:39:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528205.821117; Mon, 01 May 2023 19:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNP-0005wp-9m; Mon, 01 May 2023 19:39:59 +0000
Received: by outflank-mailman (input) for mailman id 528205;
 Mon, 01 May 2023 19:39:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZDJ-0006PY-Pw
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:33 +0000
Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com
 [2607:f8b0:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7925230b-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:22 +0200 (CEST)
Received: by mail-pl1-x62e.google.com with SMTP id
 d9443c01a7336-1a9253d4551so21243575ad.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:22 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7925230b-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969361; x=1685561361;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=CSVYEUhM0zGHlDR1yqa5KIpwwPynV1rh1Z3AvIT239o=;
        b=YCg1YMo/xitXPr8EH+7i7/M0wYddD4PRgw+Xim9LG20m92WUdC9kOQT3fMeSkFJ8bz
         2xW9Qv874by/73QMdHiVk+wqz4B7HUYL6Uj2zbe5cOYZkoH3JAXPoFdQlYFWToTwzHn7
         dFfF9W+SZdFlY3dKNimMml0EcpIP5K8qP91gmqtjuJ3sBzpu94RZheVZlyc0ZtXP5Kh9
         Ho0HyuApLfDJnfV0KIWHGVL4xeGycAgKdLMplPIR2oWrmLtc//uxDTAq/yAuqmVSMzMJ
         fvK8MGrkhlBYA+kuhjIoxK7FCwuGLTfgfMGFdrbgTVFBoKNwWCi+6p4VuuSzDnIOEdqU
         o3Sw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969361; x=1685561361;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=CSVYEUhM0zGHlDR1yqa5KIpwwPynV1rh1Z3AvIT239o=;
        b=FW2ILUMdVZEfzO9OyBRhGxo/gVudi1oOXYpMrhc4eQ9XumZCoxaqyKV/FlZv2WEbyn
         w0AY09n0BEG4RdTEfU+DbkkJJXB7wwQpM1BT8gH3eyXc2gKE2pYs4/UxJz/dUyxM7Z8x
         /VzqSCjUyEOsUpM/qtC06yFVA8qpH9y2+MLqQp9zlPcQTaUQxbB40aT45lHD2hP0aH6n
         g+7sk5dOja970DfiZVe42JWHwTAuqMChtQJ8EUqCXuPtqYRIp4mQGoXVRiCyacrl0Hr3
         76QHqXmsb5u355AzA2LavwuwMmhy1JUhnZxh0M3xOdcXKav+MB2u0/Mw7wQ6OmYibl1a
         603g==
X-Gm-Message-State: AC+VfDwg/pEZWM7m/YJETQHyudZsCLRohmjTiMGV7N3HbkfNH1S4AsF/
	F1+eSKAMpYUygl+yjKGuW/A=
X-Google-Smtp-Source: ACHHUZ6yreaxJl7mf5rX62OpE7ktDECZCzzaxE5NqOb8jR7RBUIdxZHYmi631H3wh4Z5BZGAmm3BhA==
X-Received: by 2002:a17:902:c712:b0:1ab:3ba:d2c1 with SMTP id p18-20020a170902c71200b001ab03bad2c1mr1428915plp.44.1682969360924;
        Mon, 01 May 2023 12:29:20 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: [PATCH v2 32/34] sparc: Convert pgtable_pte_page_{ctor, dtor}() to ptdesc equivalents
Date: Mon,  1 May 2023 12:28:27 -0700
Message-Id: <20230501192829.17086-33-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable pte constructor/destructors
with ptdesc equivalents, convert pte_alloc_one() and pte_free() to use
ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/sparc/mm/srmmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 13f027afc875..964938aa7b88 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -355,7 +355,8 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
 		return NULL;
 	page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
 	spin_lock(&mm->page_table_lock);
-	if (page_ref_inc_return(page) == 2 && !pgtable_pte_page_ctor(page)) {
+	if (page_ref_inc_return(page) == 2 &&
+			!ptdesc_pte_ctor(page_ptdesc(page))) {
 		page_ref_dec(page);
 		ptep = NULL;
 	}
@@ -371,7 +372,7 @@ void pte_free(struct mm_struct *mm, pgtable_t ptep)
 	page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
 	spin_lock(&mm->page_table_lock);
 	if (page_ref_dec_return(page) == 1)
-		pgtable_pte_page_dtor(page);
+		ptdesc_pte_dtor(page_ptdesc(page));
 	spin_unlock(&mm->page_table_lock);
 
 	srmmu_free_nocache(ptep, SRMMU_PTE_TABLE_SIZE);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:40:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:40:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528211.821121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNP-00060Y-Sx; Mon, 01 May 2023 19:39:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528211.821121; Mon, 01 May 2023 19:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNP-00060G-ME; Mon, 01 May 2023 19:39:59 +0000
Received: by outflank-mailman (input) for mailman id 528211;
 Mon, 01 May 2023 19:39:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZD4-0006FS-2c
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:18 +0000
Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com
 [2607:f8b0:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 76b9d1fe-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:29:17 +0200 (CEST)
Received: by mail-pl1-x633.google.com with SMTP id
 d9443c01a7336-1aaec6f189cso12401465ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:17 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76b9d1fe-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969357; x=1685561357;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=l1ckLKbyzk3TycCfORfJ/TBkh9kewWK+wgbsmWovkLc=;
        b=QOhINfLsJScPCSo/IfeyIyqWMqeIs48D8yaoqlS6ttjMbmNZXsD0pl2iRSwsAXqRrv
         zUUJOXK9c/ejP/aK8PhIumKGyTM6wx4FMhZyB9B20DcTQunSql+VvjSwHErAI8TxAw4R
         Y99zvhqhTYwxHGbhjSdsPxRy9B+yD0izY9vTuFFo+4wz8Jz2C37zAT+8c8EYp2fDz24U
         oGrB6CUieeF0SbwTG14XhJhloKyt3nzbHvCyZJJU6u4aL5aKMUavEI+jK4qMtp6ZB84A
         TmLf+lTqOkVGEF+wZ1coiBesyT5OIE5Gk6hWNsWKJwTKVv0S28KrIjiosst3fU+17ZhU
         2bkA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969357; x=1685561357;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=l1ckLKbyzk3TycCfORfJ/TBkh9kewWK+wgbsmWovkLc=;
        b=MIL6ONcp6uYLKp3Vakgn64BKNtu+LlgND4f0nSfanXLzKJoRZg7miN07WtxtL5iIpr
         NgMDa3r0ltD9qVSPqwf5jW6NCsOwyZqJUa1MOWWCsi6YrgpNnx8FIIMnEJ2VTIFrEBPg
         gCnD2uAoCx8NYCCi/sM3pB7niEfeWdFMA+88R4OLqwMQYWqBXVTKvzbd5xjHgPPL4dja
         YO6gS4G7gnICIQYsP/glP+rcohFNJXVKBCNFnhBxjotwiQHQsxWZClZd94ZSbw+Ca1JZ
         FyLM3j5YCq5MmHKtPt7Sbjfr/9J129VPmsqY4P8W4jAVKoOT/ViyK4OF0eQhF0EoHtw5
         mQbA==
X-Gm-Message-State: AC+VfDzmhaxaXB5+JArra+RRrj7QQKUjZ1/7W019bq335zSCXyaRyDCx
	cPJNFkoGujtfFK98CCND1Pg=
X-Google-Smtp-Source: ACHHUZ5ad9gvhcfa1XbL+pCFsXQ41wyEuU00A0VqKiT0180mGKgTi8tTwjBbR7r20yn6kHE8l0iQaQ==
X-Received: by 2002:a17:902:ce91:b0:1aa:ef83:34be with SMTP id f17-20020a170902ce9100b001aaef8334bemr6350222plg.47.1682969356855;
        Mon, 01 May 2023 12:29:16 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Paul Walmsley <paul.walmsley@sifive.com>
Subject: [PATCH v2 29/34] riscv: Convert alloc_{pmd, pte}_late() to use ptdescs
Date: Mon,  1 May 2023 12:28:24 -0700
Message-Id: <20230501192829.17086-30-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of these functions use the *get*page*() helpers. Convert them to use
ptdesc_alloc() and ptdesc_address() instead, to help standardize page
tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/riscv/include/asm/pgalloc.h |  8 ++++----
 arch/riscv/mm/init.c             | 16 ++++++----------
 2 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index 59dc12b5b7e8..cb5536403bd8 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -153,10 +153,10 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-#define __pte_free_tlb(tlb, pte, buf)   \
-do {                                    \
-	pgtable_pte_page_dtor(pte);     \
-	tlb_remove_page((tlb), pte);    \
+#define __pte_free_tlb(tlb, pte, buf)			\
+do {							\
+	ptdesc_pte_dtor(page_ptdesc(pte));		\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));\
 } while (0)
 #endif /* CONFIG_MMU */
 
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index eb8173a91ce3..8f1982664687 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -353,12 +353,10 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va)
 
 static phys_addr_t __init alloc_pte_late(uintptr_t va)
 {
-	unsigned long vaddr;
-
-	vaddr = __get_free_page(GFP_KERNEL);
-	BUG_ON(!vaddr || !pgtable_pte_page_ctor(virt_to_page(vaddr)));
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
 
-	return __pa(vaddr);
+	BUG_ON(!ptdesc || !ptdesc_pte_ctor(ptdesc));
+	return __pa((pte_t *)ptdesc_address(ptdesc));
 }
 
 static void __init create_pte_mapping(pte_t *ptep,
@@ -436,12 +434,10 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)
 
 static phys_addr_t __init alloc_pmd_late(uintptr_t va)
 {
-	unsigned long vaddr;
-
-	vaddr = __get_free_page(GFP_KERNEL);
-	BUG_ON(!vaddr || !pgtable_pmd_page_ctor(virt_to_page(vaddr)));
+	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
 
-	return __pa(vaddr);
+	BUG_ON(!ptdesc || !ptdesc_pmd_ctor(ptdesc));
+	return __pa((pmd_t *)ptdesc_address(ptdesc));
 }
 
 static void __init create_pmd_mapping(pmd_t *pmdp,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:40:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:40:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528216.821145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNV-0007K6-0K; Mon, 01 May 2023 19:40:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528216.821145; Mon, 01 May 2023 19:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNU-0007II-PJ; Mon, 01 May 2023 19:40:04 +0000
Received: by outflank-mailman (input) for mailman id 528216;
 Mon, 01 May 2023 19:40:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZF7-0000m4-8s
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:25 +0000
Received: from mail-qk1-x72d.google.com (mail-qk1-x72d.google.com
 [2607:f8b0:4864:20::72d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c1497016-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:31:23 +0200 (CEST)
Received: by mail-qk1-x72d.google.com with SMTP id
 af79cd13be357-74e1745356dso122713385a.0
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:23 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.31.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:31:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1497016-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969481; x=1685561481;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/Y4+dyjBVr8IC4wKpAH33RVK4Uwk38u6mRfHXdnUc88=;
        b=El+j9CYH92/VkKvU0xX/iezFblXzbQBrsLh2hY6wa/mKGGMU40Uw1TuWpMCuy0pgxJ
         5j25wvX86m18bmSl4JWCZOcYJgk4pkihBGHshqS6ZFUQ3VIwp5N3Sx3TYb8/S1ZWaUXX
         gi9MXhzcIjciT/IIbb4Y2gqBo4BOTJvuvt1CgX8wloUt0Cha3xzpy4q068L65lQYB+0O
         goNpAjrDRmeJjVeFBsddApKCxJ5eMSmOy5WS24SN7q1UKc5j3TwAvJKKYLQod5Fyz23R
         p0iYFyedNbgM2llrYVboI5n03letvmTsNNr+ABaInQsOGR85gbcJJeGWwNcalW1N833h
         C7FA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969481; x=1685561481;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/Y4+dyjBVr8IC4wKpAH33RVK4Uwk38u6mRfHXdnUc88=;
        b=cQEWzQZcnc3tyGx2ESF2u1OO5bHyg7fA220AhbA1id46OJ1SZnzdWq8bncFcmOEOXJ
         SY73NsjvdjD2G+uSabkXcTmQYMeh+g+MsPxIAWSQLgxa/RDhxBc+gfZ9iDH369VRACT/
         5Qn8KPOAWV5o8y+OlYWcPGLVqn9g8h/R9hO6DpJ6Wve2jcNs89byEcPsGHKOO3/C7imO
         LNJHu0JRd52Ht6l7Jtr+P1EC2k3XJLkS4RlqMojsUEcpM9Rk/bXBq7/abh2p3JgqynPB
         oVZrdyHx+D/tqQmaCBGnaBkQLhS/O+fvQbiBLWxtXNHTHZzaoLUiJ5XdTdJOxHWFakEG
         VsAA==
X-Gm-Message-State: AC+VfDwdl+hoM7Bsspho8fSWT16oionsMF0VrdstzPl/jE4x3U2w7U6V
	Hyu7LDggm3APUEAIyuEsJvbaSPCqByQ=
X-Google-Smtp-Source: ACHHUZ7oF2OrgJXKm39RGRVXI1vKWuuYqbfsPQUZk89HXKREDFLA2mFb9YF3y3DhBgK9OOK1C/WqCg==
X-Received: by 2002:ac8:5747:0:b0:3f2:1f2f:dc80 with SMTP id 7-20020ac85747000000b003f21f2fdc80mr6028725qtx.9.1682969481554;
        Mon, 01 May 2023 12:31:21 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH v3 14/14 RESEND] CHANGELOG: Add Intel HWP entry
Date: Mon,  1 May 2023 15:30:34 -0400
Message-Id: <20230501193034.88575-15-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Henry Wang <Henry.Wang@arm.com>
---
v3:
Position under existing Added section
Add Henry's Ack

v2:
Add blank line
---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5dbf8b06d7..2eb9e2cfd0 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -18,6 +18,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
      wide impact of a guest misusing atomic instructions.
  - xl/libxl can customize SMBIOS strings for HVM guests.
+ - Add Intel Hardware P-States (HWP) cpufreq driver.
 
 ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:40:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:40:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528215.821137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNT-0006tx-9U; Mon, 01 May 2023 19:40:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528215.821137; Mon, 01 May 2023 19:40:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNT-0006tV-1S; Mon, 01 May 2023 19:40:03 +0000
Received: by outflank-mailman (input) for mailman id 528215;
 Mon, 01 May 2023 19:40:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lARM=AW=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptZF5-0000m4-8M
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:31:23 +0000
Received: from mail-qt1-x835.google.com (mail-qt1-x835.google.com
 [2607:f8b0:4864:20::835])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c00e31df-e856-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 21:31:21 +0200 (CEST)
Received: by mail-qt1-x835.google.com with SMTP id
 d75a77b69052e-3ef3ce7085bso12846761cf.2
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:31:21 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 d6-20020ac80606000000b003bf9f9f1844sm9351784qth.71.2023.05.01.12.31.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:31:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c00e31df-e856-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969479; x=1685561479;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8XlrjFSqYpzQJQGdwHcwomspneq906r3odg2YJyUcgo=;
        b=ZIuYdu87cYcvUsSs9yRHdMP3Pm5GSwQX7w9kuI8lr30khecKw3o+OlElMPhyvABnrJ
         gmluE6uLnPxk7UG7YCcfHZ09VskA++s/5lNwNcL8uBI0ygSsE2yd/FAs/L4wPA/dADmj
         mz/pSaazJpvVfYBRmBPkf3yQIsyEtzTmHyiK6wFKzQIoUuy0eFTX95lH4LryFdZ8QlGc
         mpUhuxMhCZdalTvv135QKyZ5RBBJvgLVLI3EnjIJvrmqcrxpG5dAjwH1fRJm8b+q/mio
         DbsTwq7u5gfWM34SXiQxaAhatFXAMrRKNliQleEENClFijB/4MxYCoKTnPiDtgNawdSN
         yMIg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969479; x=1685561479;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=8XlrjFSqYpzQJQGdwHcwomspneq906r3odg2YJyUcgo=;
        b=BIz8e8BYpRK/c0MmjhLaisW85pkMujWutMchCow0M7by8mI/HpOY0xlmWmESh4jOq+
         j+gL8zqXgX6gXoANRfccvJwB8Z1/xqt6vU8AB2SaFD0MPZvjGxtxZ0NC4VZFhFKLorCH
         wYU5bzQEEbpDranua9UUv2vDehJuIN1JI5qRN/C/YYBocqFsEqq56wchzPlUQt7PmSkr
         56tfKTl7Rx0E8FYlEzxcQZsLdzy2LK4QQSMyT6cU7gag7fnpKf//NENPR5Ps7kXtubL6
         rWtLRD0cyi/nUXnh17jaw5vSIuCe4FJ666nunpdo+ctDuh2KqX1XoNI6vLR1W0AmH2Er
         1C6g==
X-Gm-Message-State: AC+VfDxre9MGUTBmyXcDwfddT5WW4Xxh3VcUZFa4QwQRWFTU78zg2w22
	Foh1aqqiskBsBx2WCEFr7ayGo/kZaF4=
X-Google-Smtp-Source: ACHHUZ6s7TAWu0GGiaRBOmaYwJA7QeeEcnz3hfdXYDb9SCnyiJRNs+k3LepojJZ5B4tKUB1oeC0+xQ==
X-Received: by 2002:ac8:5a54:0:b0:3ef:380a:d7bc with SMTP id o20-20020ac85a54000000b003ef380ad7bcmr22971561qta.54.1682969479398;
        Mon, 01 May 2023 12:31:19 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 13/14 RESEND] xenpm: Add set-cpufreq-hwp subcommand
Date: Mon,  1 May 2023 15:30:33 -0400
Message-Id: <20230501193034.88575-14-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230501193034.88575-1-jandryuk@gmail.com>
References: <20230501193034.88575-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

set-cpufreq-hwp allows setting the Hardware P-State (HWP) parameters.

It can be run on all CPUs or just a single one.  There are presets of
balance, powersave & performance.  Those can be further tweaked by
param:val arguments as explained in the usage description.

Parameter names may be abbreviated to an unambiguous prefix to shorten
typing.

Some options are hardware dependent, and ranges can be found in
get-cpufreq-para.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
v2:
Compare provided parameter name and not just 3 characters.
Use "-" in parameter names
Remove hw_
Replace sscanf with strchr & strtoul.
Remove toplevel error message in favor of lower-level ones.
Help text s/127/128/
Help text mention truncation.
Avoid some truncation rounding down by adding 5 before division.
Help text mention default microseconds
Also comment the limit check written to avoid overflow.
---
 tools/misc/xenpm.c | 230 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 230 insertions(+)

diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index 6e74606970..8d99c78670 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -16,6 +16,7 @@
  */
 #define MAX_NR_CPU 512
 
+#include <limits.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <unistd.h>
@@ -67,6 +68,27 @@ void show_help(void)
             " set-max-cstate        <num>|'unlimited' [<num2>|'unlimited']\n"
             "                                     set the C-State limitation (<num> >= 0) and\n"
             "                                     optionally the C-sub-state limitation (<num2> >= 0)\n"
+            " set-cpufreq-hwp       [cpuid] [balance|performance|powersave] <param:val>*\n"
+            "                                     set Hardware P-State (HWP) parameters\n"
+            "                                     optionally a preset of one of\n"
+            "                                       balance|performance|powersave\n"
+            "                                     an optional list of param:val arguments\n"
+            "                                       minimum:N  lowest ... highest\n"
+            "                                       maximum:N  lowest ... highest\n"
+            "                                       desired:N  lowest ... highest\n"
+            "                                           Set explicit performance target.\n"
+            "                                           non-zero disables auto-HWP mode.\n"
+            "                                       energy-perf:0-255 (or 0-15)\n"
+            "                                                   energy/performance hint\n"
+            "                                                   lower - favor performance\n"
+            "                                                   higher - favor powersave\n"
+            "                                                   128 (or 7) - balance\n"
+            "                                       act-window:N{,m,u}s range 1us-1270s\n"
+            "                                           window for internal calculations.\n"
+            "                                           Defaults to us without units.\n"
+            "                                           Truncates un-representable values.\n"
+            "                                           0 lets the hardware decide.\n"
+            "                                     get-cpufreq-para returns lowest/highest.\n"
             " start [seconds]                     start collect Cx/Px statistics,\n"
             "                                     output after CTRL-C or SIGINT or several seconds.\n"
             " enable-turbo-mode     [cpuid]       enable Turbo Mode for processors that support it.\n"
@@ -1299,6 +1321,213 @@ void disable_turbo_mode(int argc, char *argv[])
                 errno, strerror(errno));
 }
 
+/*
+ * Parse activity_window:NNN{us,ms,s} and validate range.
+ *
+ * Activity window is a 7bit mantissa (0-127) with a 3bit exponent (0-7) base
+ * 10 in microseconds.  So the range is 1 microsecond to 1270 seconds.  A value
+ * of 0 lets the hardware autonomously select the window.
+ *
+ * Return 0 on success
+ *       -1 on error
+ */
+static int parse_activity_window(xc_set_hwp_para_t *set_hwp, unsigned long u,
+                                 const char *suffix)
+{
+    unsigned int exponent = 0;
+    unsigned int multiplier = 1;
+
+    if ( suffix && suffix[0] )
+    {
+        if ( strcasecmp(suffix, "s") == 0 )
+        {
+            multiplier = 1000 * 1000;
+            exponent = 6;
+        }
+        else if ( strcasecmp(suffix, "ms") == 0 )
+        {
+            multiplier = 1000;
+            exponent = 3;
+        }
+        else if ( strcasecmp(suffix, "us") == 0 )
+        {
+            multiplier = 1;
+            exponent = 0;
+        }
+        else
+        {
+            fprintf(stderr, "invalid activity window units: \"%s\"\n", suffix);
+
+            return -1;
+        }
+    }
+
+    /* u * multiplier > 1270 * 1000 * 1000 transformed to avoid overflow. */
+    if ( u > 1270 * 1000 * 1000 / multiplier )
+    {
+        fprintf(stderr, "activity window is too large\n");
+
+        return -1;
+    }
+
+    /* looking for 7 bits of mantissa and 3 bits of exponent */
+    while ( u > 127 )
+    {
+        u += 5; /* Round up to mitigate truncation rounding down
+                   e.g. 128 -> 120 vs 128 -> 130. */
+        u /= 10;
+        exponent += 1;
+    }
+
+    set_hwp->activity_window = (exponent & HWP_ACT_WINDOW_EXPONENT_MASK) <<
+                                   HWP_ACT_WINDOW_EXPONENT_SHIFT |
+                               (u & HWP_ACT_WINDOW_MANTISSA_MASK);
+    set_hwp->set_params |= XEN_SYSCTL_HWP_SET_ACT_WINDOW;
+
+    return 0;
+}
+
+static int parse_hwp_opts(xc_set_hwp_para_t *set_hwp, int *cpuid,
+                          int argc, char *argv[])
+{
+    int i = 0;
+
+    if ( argc < 1 ) {
+        fprintf(stderr, "Missing arguments\n");
+        return -1;
+    }
+
+    if ( parse_cpuid_non_fatal(argv[i], cpuid) == 0 )
+    {
+        i++;
+    }
+
+    if ( i == argc ) {
+        fprintf(stderr, "Missing arguments\n");
+        return -1;
+    }
+
+    if ( strcasecmp(argv[i], "powersave") == 0 )
+    {
+        set_hwp->set_params = XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE;
+        i++;
+    }
+    else if ( strcasecmp(argv[i], "performance") == 0 )
+    {
+        set_hwp->set_params = XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE;
+        i++;
+    }
+    else if ( strcasecmp(argv[i], "balance") == 0 )
+    {
+        set_hwp->set_params = XEN_SYSCTL_HWP_SET_PRESET_BALANCE;
+        i++;
+    }
+
+    for ( ; i < argc; i++)
+    {
+        unsigned long val;
+        char *param = argv[i];
+        char *value;
+        char *suffix;
+        int ret;
+
+        value = strchr(param, ':');
+        if ( value == NULL )
+        {
+            fprintf(stderr, "\"%s\" is an invalid hwp parameter\n", argv[i]);
+            return -1;
+        }
+
+        value[0] = '\0';
+        value++;
+
+        errno = 0;
+        val = strtoul(value, &suffix, 10);
+        if ( (errno && val == ULONG_MAX) || value == suffix )
+        {
+            fprintf(stderr, "Could not parse number \"%s\"\n", value);
+            return -1;
+        }
+
+        if ( strncasecmp(param, "act-window", strlen(param)) == 0 )
+        {
+            ret = parse_activity_window(set_hwp, val, suffix);
+            if ( ret )
+                return -1;
+
+            continue;
+        }
+
+        if ( val > 255 )
+        {
+            fprintf(stderr, "\"%s\" value \"%lu\" is out of range\n", param,
+                    val);
+            return -1;
+        }
+
+        if ( suffix && suffix[0] )
+        {
+            fprintf(stderr, "Suffix \"%s\" is invalid\n", suffix);
+            return -1;
+        }
+
+        if ( strncasecmp(param, "minimum", MAX(2, strlen(param))) == 0 )
+        {
+            set_hwp->minimum = val;
+            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_MINIMUM;
+        }
+        else if ( strncasecmp(param, "maximum", MAX(2, strlen(param))) == 0 )
+        {
+            set_hwp->maximum = val;
+            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_MAXIMUM;
+        }
+        else if ( strncasecmp(param, "desired", strlen(param)) == 0 )
+        {
+            set_hwp->desired = val;
+            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_DESIRED;
+        }
+        else if ( strncasecmp(param, "energy-perf", strlen(param)) == 0 )
+        {
+            set_hwp->energy_perf = val;
+            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_ENERGY_PERF;
+        }
+        else
+        {
+            fprintf(stderr, "\"%s\" is an invalid parameter\n", param);
+            return -1;
+        }
+    }
+
+    if ( set_hwp->set_params == 0 )
+    {
+        fprintf(stderr, "No parameters set in request\n");
+        return -1;
+    }
+
+    return 0;
+}
+
+static void hwp_set_func(int argc, char *argv[])
+{
+    xc_set_hwp_para_t set_hwp = {};
+    int cpuid = -1;
+    int i = 0;
+
+    if ( parse_hwp_opts(&set_hwp, &cpuid, argc, argv) )
+        exit(EINVAL);
+
+    if ( cpuid != -1 )
+    {
+        i = cpuid;
+        max_cpu_nr = i + 1;
+    }
+
+    for ( ; i < max_cpu_nr; i++ )
+        if ( xc_set_cpufreq_hwp(xc_handle, i, &set_hwp) )
+            fprintf(stderr, "[CPU%d] failed to set hwp params (%d - %s)\n",
+                    i, errno, strerror(errno));
+}
+
 struct {
     const char *name;
     void (*function)(int argc, char *argv[]);
@@ -1309,6 +1538,7 @@ struct {
     { "get-cpufreq-average", cpufreq_func },
     { "start", start_gather_func },
     { "get-cpufreq-para", cpufreq_para_func },
+    { "set-cpufreq-hwp", hwp_set_func },
     { "set-scaling-maxfreq", scaling_max_freq_func },
     { "set-scaling-minfreq", scaling_min_freq_func },
     { "set-scaling-governor", scaling_governor_func },
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Mon May 01 19:40:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 19:40:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528226.821157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNf-0000i3-90; Mon, 01 May 2023 19:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528226.821157; Mon, 01 May 2023 19:40:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZNf-0000hr-50; Mon, 01 May 2023 19:40:15 +0000
Received: by outflank-mailman (input) for mailman id 528226;
 Mon, 01 May 2023 19:40:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Chj3=AW=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1ptZDG-0006PY-PR
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 19:29:30 +0000
Received: from mail-pf1-x42b.google.com (mail-pf1-x42b.google.com
 [2607:f8b0:4864:20::42b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 779418c1-e856-11ed-8611-37d641c3527e;
 Mon, 01 May 2023 21:29:18 +0200 (CEST)
Received: by mail-pf1-x42b.google.com with SMTP id
 d2e1a72fcca58-64115eef620so29041064b3a.1
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 12:29:18 -0700 (PDT)
Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::9a2c])
 by smtp.googlemail.com with ESMTPSA id
 u8-20020a170902bf4800b0019c13d032d8sm18175622pls.253.2023.05.01.12.29.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 12:29:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 779418c1-e856-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1682969358; x=1685561358;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=5aCjEZJk/he0B507v8WHbAo+ayyw9u0whNe5hSh1Qc0=;
        b=AK+jJQIndwyipvZ3j74RRrDCnGWb55RfRTFZCAj1QjfEP6JtcEnkxab8VHfUxlnJqD
         nzyDIYAX6jvOrQqc9g/TswPCkcqNA0B0b+SVKLMEmNApHkelPozzZarkoTrP2UobldQU
         E6NqDV1CfueV3wQaPICMNftiMflroUHircwtbGv3hyeRwqO/KqZCxi3BNAP0dk8lYID7
         5OwqBACQpi+nvrf0goZMdOfhVuRwRiXsUHvkFH+vRl0eIOaoeix6IqEc2ZJsYZWlONhr
         qoO4S8rXijotpMJHxa1FWYbjKdY7R5WGEGDzLNmBq3rOIwjAHcvHuIBKVpOdfgv+tSJ1
         RDOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682969358; x=1685561358;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=5aCjEZJk/he0B507v8WHbAo+ayyw9u0whNe5hSh1Qc0=;
        b=Lwm2/iKMyBKzWHFDNT6CQbT9wNIicJcXr9GFV6oipIRBsFPN8R2qwceOk3GDbzvpuQ
         WmvCpykB0c0dgcUhmLcv52tpOH1bUCdbCn53cvy/KZJusFiLvWgpb29VCUukRknAMrFC
         nm5eTm7DtHVWrf2ngjXqqjlNNawv2+HmdplDgDE+iR/TvThw74/rkTOlYmp7BXAo5DU3
         1qxu1ZZkIoJwUfhoJBggXznvm5xkg6pjgSODZUiRUmI0kEF+VnPzTE64vLyKqgGZWHwM
         +Wxci3lrkutYk8P/j5J4NlAYMTaCFnNZu1CBFe8w5lmYdrKRJOAVK9JeIapTftwk9zgX
         IYIQ==
X-Gm-Message-State: AC+VfDz+folEUK9Hc4TJqDjDFc5V9SfY+0DgA2EWzbNc4xwWE+kNeibH
	xjGlR4RNbYlNAh8n7bQc55Q=
X-Google-Smtp-Source: ACHHUZ6t8R71ld5pjw9YJiJ63m1PjvmCLBjw2ONvK02V0vikRtqecj56I+goZm4GypngOeaNmv5J/w==
X-Received: by 2002:a17:903:2307:b0:19a:96ea:3850 with SMTP id d7-20020a170903230700b0019a96ea3850mr18111173plh.17.1682969358236;
        Mon, 01 May 2023 12:29:18 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>
Subject: [PATCH v2 30/34] sh: Convert pte_free_tlb() to use ptdescs
Date: Mon,  1 May 2023 12:28:25 -0700
Message-Id: <20230501192829.17086-31-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents. Also cleans up some spacing issues.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/sh/include/asm/pgalloc.h | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h
index a9e98233c4d4..ce2ba99dbd84 100644
--- a/arch/sh/include/asm/pgalloc.h
+++ b/arch/sh/include/asm/pgalloc.h
@@ -2,6 +2,7 @@
 #ifndef __ASM_SH_PGALLOC_H
 #define __ASM_SH_PGALLOC_H
 
+#include <linux/mm.h>
 #include <asm/page.h>
 
 #define __HAVE_ARCH_PMD_ALLOC_ONE
@@ -31,10 +32,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 	set_pmd(pmd, __pmd((unsigned long)page_address(pte)));
 }
 
-#define __pte_free_tlb(tlb,pte,addr)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), (pte));			\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	ptdesc_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #endif /* __ASM_SH_PGALLOC_H */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:03:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:03:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528260.821176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkC-0005IF-Bg; Mon, 01 May 2023 20:03:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528260.821176; Mon, 01 May 2023 20:03:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkC-0005I8-8d; Mon, 01 May 2023 20:03:32 +0000
Received: by outflank-mailman (input) for mailman id 528260;
 Mon, 01 May 2023 20:03:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nwpa=AW=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1ptZkA-000525-Hn
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:03:30 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20601.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d35e810-e85b-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:03:29 +0200 (CEST)
Received: from MW4PR03CA0268.namprd03.prod.outlook.com (2603:10b6:303:b4::33)
 by PH0PR12MB5646.namprd12.prod.outlook.com (2603:10b6:510:143::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.26; Mon, 1 May
 2023 20:03:26 +0000
Received: from CO1NAM11FT032.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:b4:cafe::2) by MW4PR03CA0268.outlook.office365.com
 (2603:10b6:303:b4::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Mon, 1 May 2023 20:03:25 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT032.mail.protection.outlook.com (10.13.174.218) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.20 via Frontend Transport; Mon, 1 May 2023 20:03:25 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:03:24 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:03:24 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 1 May 2023 15:03:23 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d35e810-e85b-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AqZ/4rh6XbSxYd/ji14LZGcdeDQm7k1Zf5I2jMgWISTCJQ6bOv/XM8vfQah/68DnRhSTvLxCc7W3vokeXrigee51mGOsPEshKNjiwG+eAeoey/MPhBLz6ZBtqUgGBkBLvpIrnuAznADUZuIBvsZsLgCBzaetN7XN7aNi1jUzpFJ6ohzNCVegBqXs6jklO0ZtD4KiG66P03HDUGzpfsGkTW4oQ4P7xJLsEzewlmzmbLSkFLcif/QLNXW273pwYQcFC/ACw7Flxk5sQBioJ3JgfnbpZAfUulSnIsP5XQKICWp7Rm09vfmfAD20JePEEadsPUdpxNm/4Kfw4f7g6AlCUw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EAeerVBnHarRbBFzP8eA8mfPIPjOdFnbNXDV9Rw/NiY=;
 b=QKTFOr2+L6TT9LmB24VAx/aWHmqldaT3JSeJXYWsdZHO7R69uV27VtM2MUpQg+8eZr9yvGW2ZshF2E9SGG1VegUn/Bw4JV7FGHshBhWw71a11ts884Q1urNRq+ZMBHuXUpExRbAeLNzN6H+oJG+bK6oeNc16g+46Y0ab4Wc0WZnDLAweC2nrnLezZir7PxbAt5Coei5rwT7lqlGT3s74XN//E29UzWltuIsfw6YebLMPtknMXMkbik9lBB6P87scy9LdBYjtskVBLzjE2QtGQ/us7gb1IMsuZZ2RGioJY9kjZcNwLGpVClxAC5+3ituUgsHFZPBtTbPtKs2Zt81FBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EAeerVBnHarRbBFzP8eA8mfPIPjOdFnbNXDV9Rw/NiY=;
 b=J+gcXbHaoFZ2ByEWu0VdephSDtAPkF9GFpgvRlAYX+/c1laSPCC0xs0skpQ25VZZgpZpI9BEm1AbB2XJrgfWVRQ55arblnYE/Qrqx/6JwaAP34aZDx7JtOfbqJ2QYUaeY4/wr1SQY+tGnX7Zcx1veKlHWy4GcBD/o5eiRMZLDWs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Rahul Singh <rahul.singh@arm.com>, Stewart Hildebrand
	<stewart.hildebrand@amd.com>
Subject: [PATCH v1 1/6] xen/arm: Move is_protected flag to struct device
Date: Mon, 1 May 2023 16:03:00 -0400
Message-ID: <20230501200305.168058-2-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230501200305.168058-1-stewart.hildebrand@amd.com>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT032:EE_|PH0PR12MB5646:EE_
X-MS-Office365-Filtering-Correlation-Id: 893b28b4-cda2-406b-784b-08db4a7f1f8b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	xdwpARfOl4Si5U24LX0KjYnqGjOBU6slMV5yPGtgU+00+WO8ucRcbsYJEzqWnOBWOzn2GtuwgKgqz82us3QSlrcrT7EQ68Ju2iYj427m2kVjTHekKlTDBsGb9zrRIggi3SSyCLaHYPX6O0Q4CB9xrnvIpEo4x9UI5TNyNa12bIPnknuqIiim94vREsajHTBP11NN8uHnp5YE2IZrXRajtGrrem7g9Fv6rVYKJn3aCi/Fx8tsvE29MvS/Ym07YWuWntsiuS5zkZCutGBBDzrsE4kDocgXXt2R1MR0K3dh0V3VpVDlsIV+I1fVMBtwu3B1NGDIY6GKSlk7FvynXEP+OwODXS8uaXfAlD3Rt0XQ3pFg3kma9oiNX00l1udfv0fUGgbyTlZo51Q2ZFXnEQ3Lu3eJ4Uy2LVtdTNMt87zIMuX0T+oMl+XAurCqySEJVmgH5BCld7/qB2efZiLXUEByPAXejZF9Z5cMWMeXa717K22QlFnDWY6BVVqJZsQ3aUhNtUZ4aEYw6i5sNiztEHYyS0hOl/6ZPWZN+GUbH6oePU9jxDiJVu8plJdE0DOFfl7hsQktn8wHy+LTGZeEFVqvoQkFc1B4laWI0ZOSpfe5rULMGetqeU2nblAiDBgkH6e+09/A6EJif3mxUUmOusE5NiX7y7WWCe6npL5FHO/EKrkApUNNfmGgC8f0lCzGHUqi2mQ7uReD/tHruIXmAbLDMz0Pm6SVGCBgzq4alRGjH7Y=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(39860400002)(376002)(136003)(451199021)(46966006)(40470700004)(36840700001)(70586007)(2906002)(70206006)(44832011)(2616005)(336012)(86362001)(5660300002)(82310400005)(4326008)(8936002)(40460700003)(36756003)(6666004)(41300700001)(6916009)(54906003)(478600001)(316002)(8676002)(40480700001)(356005)(26005)(81166007)(1076003)(966005)(82740400003)(47076005)(426003)(186003)(83380400001)(36860700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 20:03:25.2187
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 893b28b4-cda2-406b-784b-08db4a7f1f8b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT032.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB5646

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This flag will be reused for PCI devices in subsequent patches.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
downstream->v1:
* rebase
* s/dev_node->is_protected/dev_node->dev.is_protected/ in smmu.c
* s/dt_device_set_protected(dev_to_dt(dev))/device_set_protected(dev)/ in smmu-v3.c
* remove redundant device_is_protected checks in smmu-v3.c/ipmmu-vmsa.c

(cherry picked from commit 59753aac77528a584d3950936b853ebf264b68e7 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/arch/arm/domain_build.c              |  4 ++--
 xen/arch/arm/include/asm/device.h        | 13 +++++++++++++
 xen/common/device_tree.c                 |  2 +-
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |  8 +-------
 xen/drivers/passthrough/arm/smmu-v3.c    |  7 +------
 xen/drivers/passthrough/arm/smmu.c       |  2 +-
 xen/drivers/passthrough/device_tree.c    | 15 +++++++++------
 xen/include/xen/device_tree.h            | 13 -------------
 8 files changed, 28 insertions(+), 36 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f80fdd1af206..106f92c65a61 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2507,7 +2507,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
             return res;
         }
 
-        if ( dt_device_is_protected(dev) )
+        if ( device_is_protected(dt_to_dev(dev)) )
         {
             dt_dprintk("%s setup iommu\n", dt_node_full_name(dev));
             res = iommu_assign_dt_device(d, dev);
@@ -3007,7 +3007,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
         return res;
 
     /* If xen_force, we allow assignment of devices without IOMMU protection. */
-    if ( xen_force && !dt_device_is_protected(node) )
+    if ( xen_force && !device_is_protected(dt_to_dev(node)) )
         return 0;
 
     return iommu_assign_dt_device(kinfo->d, node);
diff --git a/xen/arch/arm/include/asm/device.h b/xen/arch/arm/include/asm/device.h
index b5d451e08776..086dde13eb6b 100644
--- a/xen/arch/arm/include/asm/device.h
+++ b/xen/arch/arm/include/asm/device.h
@@ -1,6 +1,8 @@
 #ifndef __ASM_ARM_DEVICE_H
 #define __ASM_ARM_DEVICE_H
 
+#include <xen/types.h>
+
 enum device_type
 {
     DEV_DT,
@@ -20,6 +22,7 @@ struct device
 #endif
     struct dev_archdata archdata;
     struct iommu_fwspec *iommu_fwspec; /* per-device IOMMU instance data */
+    bool is_protected; /* Shows that device is protected by IOMMU */
 };
 
 typedef struct device device_t;
@@ -94,6 +97,16 @@ int device_init(struct dt_device_node *dev, enum device_class class,
  */
 enum device_class device_get_class(const struct dt_device_node *dev);
 
+static inline void device_set_protected(struct device *device)
+{
+    device->is_protected = true;
+}
+
+static inline bool device_is_protected(const struct device *device)
+{
+    return device->is_protected;
+}
+
 #define DT_DEVICE_START(_name, _namestr, _class)                    \
 static const struct device_desc __dev_desc_##_name __used           \
 __section(".dev.info") = {                                          \
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7bda..1d5d7cb5f01b 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1874,7 +1874,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
         /* By default dom0 owns the device */
         np->used_by = 0;
         /* By default the device is not protected */
-        np->is_protected = false;
+        np->dev.is_protected = false;
         INIT_LIST_HEAD(&np->domain_list);
 
         if ( new_format )
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index 091f09b21752..039212a3a990 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -1288,14 +1288,8 @@ static int ipmmu_add_device(u8 devfn, struct device *dev)
     if ( !to_ipmmu(dev) )
         return -ENODEV;
 
-    if ( dt_device_is_protected(dev_to_dt(dev)) )
-    {
-        dev_err(dev, "Already added to IPMMU\n");
-        return -EEXIST;
-    }
-
     /* Let Xen know that the master device is protected by an IOMMU. */
-    dt_device_set_protected(dev_to_dt(dev));
+    device_set_protected(dev);
 
     dev_info(dev, "Added master device (IPMMU %s micro-TLBs %u)\n",
              dev_name(fwspec->iommu_dev), fwspec->num_ids);
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index bfdb62b395ad..4b452e6fdd00 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -1521,13 +1521,8 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
 	 */
 	arm_smmu_enable_pasid(master);
 
-	if (dt_device_is_protected(dev_to_dt(dev))) {
-		dev_err(dev, "Already added to SMMUv3\n");
-		return -EEXIST;
-	}
-
 	/* Let Xen know that the master device is protected by an IOMMU. */
-	dt_device_set_protected(dev_to_dt(dev));
+	device_set_protected(dev);
 
 	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b336..5b6024d579a8 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -838,7 +838,7 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 	master->of_node = dev_node;
 
 	/* Xen: Let Xen know that the device is protected by an SMMU */
-	dt_device_set_protected(dev_node);
+	device_set_protected(dev);
 
 	for (i = 0; i < fwspec->num_ids; ++i) {
 		if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 1c32d7b50cce..b5bd13393b56 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -34,7 +34,7 @@ int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev)
     if ( !is_iommu_enabled(d) )
         return -EINVAL;
 
-    if ( !dt_device_is_protected(dev) )
+    if ( !device_is_protected(dt_to_dev(dev)) )
         return -EINVAL;
 
     spin_lock(&dtdevs_lock);
@@ -65,7 +65,7 @@ int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev)
     if ( !is_iommu_enabled(d) )
         return -EINVAL;
 
-    if ( !dt_device_is_protected(dev) )
+    if ( !device_is_protected(dt_to_dev(dev)) )
         return -EINVAL;
 
     spin_lock(&dtdevs_lock);
@@ -87,7 +87,7 @@ static bool_t iommu_dt_device_is_assigned(const struct dt_device_node *dev)
 {
     bool_t assigned = 0;
 
-    if ( !dt_device_is_protected(dev) )
+    if ( !device_is_protected(dt_to_dev(dev)) )
         return 0;
 
     spin_lock(&dtdevs_lock);
@@ -141,12 +141,15 @@ int iommu_add_dt_device(struct dt_device_node *np)
         return -EINVAL;
 
     /*
-     * The device may already have been registered. As there is no harm in
-     * it just return success early.
+     * This is needed in case a device has both the iommus property and
+     * also appears in the mmu-masters list.
      */
-    if ( dev_iommu_fwspec_get(dev) )
+    if ( device_is_protected(dev) )
         return 0;
 
+    if ( dev_iommu_fwspec_get(dev) )
+        return -EEXIST;
+
     /*
      * According to the Documentation/devicetree/bindings/iommu/iommu.txt
      * from Linux.
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 19a74909cece..c1e4751a581f 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -90,9 +90,6 @@ struct dt_device_node {
     struct dt_device_node *next; /* TODO: Remove it. Only use to know the last children */
     struct dt_device_node *allnext;
 
-    /* IOMMU specific fields */
-    bool is_protected;
-
     /* HACK: Remove this if there is a need of space */
     bool_t static_evtchn_created;
 
@@ -302,16 +299,6 @@ static inline domid_t dt_device_used_by(const struct dt_device_node *device)
     return device->used_by;
 }
 
-static inline void dt_device_set_protected(struct dt_device_node *device)
-{
-    device->is_protected = true;
-}
-
-static inline bool dt_device_is_protected(const struct dt_device_node *device)
-{
-    return device->is_protected;
-}
-
 static inline bool_t dt_property_name_is_equal(const struct dt_property *pp,
                                                const char *name)
 {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:03:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:03:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528259.821167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZk7-00052I-3I; Mon, 01 May 2023 20:03:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528259.821167; Mon, 01 May 2023 20:03:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZk6-00052B-WD; Mon, 01 May 2023 20:03:27 +0000
Received: by outflank-mailman (input) for mailman id 528259;
 Mon, 01 May 2023 20:03:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nwpa=AW=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1ptZk5-000525-M9
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:03:25 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::60d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 39bd0846-e85b-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:03:23 +0200 (CEST)
Received: from MW3PR05CA0005.namprd05.prod.outlook.com (2603:10b6:303:2b::10)
 by IA1PR12MB8465.namprd12.prod.outlook.com (2603:10b6:208:457::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Mon, 1 May
 2023 20:03:20 +0000
Received: from CO1NAM11FT057.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:2b:cafe::6) by MW3PR05CA0005.outlook.office365.com
 (2603:10b6:303:2b::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.19 via Frontend
 Transport; Mon, 1 May 2023 20:03:20 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT057.mail.protection.outlook.com (10.13.174.205) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.19 via Frontend Transport; Mon, 1 May 2023 20:03:19 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:03:17 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:03:17 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 1 May 2023 15:03:16 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39bd0846-e85b-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QQOsvCMqYd8qQjX0QFmzLrSvL9+DDAv4jeMzHToQMYI8CGlCdFDaOahp3pi0Qs5wTYCw2NB4iic//VMqT7mipmTD3VHhGU+wmazb6G7efiTIKt5kgrbE6muiqwNA2Vj+ePz+S/5TTPuLrnU4VwugTBnejlOelCOIn1h5Qu5Ezt7RGse3C+6JI9X3H0Pt2w3Mx0zewYfuRaC36DWt9tlX9WtAS1/pb3O+xoNSr6kTOPOp5L+ctLabSUV8qSjLy0SO2IlyHVjii381WdXHI7s7CyE2QyG1mfECsdLmhfi4soRafxRMVGvbcSR27rb4diXKAtrlfpxGXJ7TynORa09qwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6GirXXEDtjoXkR1fOhQzNZVltrkp/AltBpvuxHRBipc=;
 b=Vbh3HA4PQFKPVxWgMTHavXQecjpUWeTnKhCpDlZpsqnNdskA1ndcxQ/VCR0gB+SopsxT48iM/HUD+Jh/FXF7vWM118oMs6KSHPd5UoFZYAqJ42sI0qDEjCUiy5dCI2CrxhSjgS0ykjA30sSi6oH8/+lxWNCWIvCyZoQkY2EZe+WHRg21ysWM7stB81LTGf9W6VJ0d1UnIXLGX5DYfv6kKCxtTs4MYUgGpUWazGSWCDATE4Ve9YpCbDvz88G+ppAKo+1r5LlLIPajOaaPcR8vagW95WiLgB0DYAeGn7rXX10l9ERCIOy1er8PGyXmlWQpnHRCV4dUuexSs9OLJutmkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6GirXXEDtjoXkR1fOhQzNZVltrkp/AltBpvuxHRBipc=;
 b=wnzxJZE9oSTKtjJVF02QvcRTFBSLq2xhdq/qhYhwRclNtktYCy0vMPlAf2G1tjJJUKeyKgjqPVf4s2/DmPYgred8oQ0RLGUbb0w5n57nguQUl0zwJInxciQUBYpeGrpbLSgOFiJTKoBrhtcvK0UhAvyRj4T7MUFIgsouV3gfa+E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Rahul Singh <rahul.singh@arm.com>, Jan Beulich <jbeulich@suse.com>, "Paul
 Durrant" <paul@xen.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>
Subject: [PATCH v1 0/6] SMMU handling for PCIe Passthrough on ARM
Date: Mon, 1 May 2023 16:02:59 -0400
Message-ID: <20230501200305.168058-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT057:EE_|IA1PR12MB8465:EE_
X-MS-Office365-Filtering-Correlation-Id: dcc40ebc-cf7f-4ede-478e-08db4a7f1c4c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GffGYe5j4IS26DKLP1tkkPygp9s995dyvMsZ0d96T184J2bCdhBmwtjtggEV8uQmVMMok/8kP3/gScPTB2C+EpLYCdaMTeL12H3myegyHbC3v/p1d7morXWTsnwQSXckL080iHWxeQiq/8k8/5TtPyNr9jklkiJ+YfcunzWKlciszxeKUhQSJ1zbrZ5ew8+FxZG/Kd9kmDw9Vkp8r6mR6zDE1Shfq4mNyz3dc/D2p0WuZWbXA9urQBmcgKLdDH8wjepa5/iCuHH59tpx7Tf02fxk3mwsPpF0T/0f886bbjxL/Twf4lA+K6cok1rsIRrE2FJ8dYUF+VO+gqd/VN/0kMrZL7yhI0QEflnjuyqfkd0FDwilrHIIc58yqOYcOXd2o1Kofef/UbLN06KKr6OU9KauXWlBMc5pZ+cjL6oanTDm7A4iBV+zqFbK+RO8Vn0rwDDqZSVhPSu+KvhxjtQqDX28wBWSSg8sx8aycFHh4CyGyV1NYag4ll5dz47X56uWnB0x+UoEPqRxw16K++vt5cAqdUxVvrryThR3FB//l5/h/E/zuf9Pn69nBuhQj4TVs0in7ywlF6KW4MPBgguroVUayOWFI7qY1uZxyx2qejrVGDfPgUhqhm9UkP7vuwcrz6pPokuNgB2in/KQGnPgvsVXkY4j1IO89Jq4LhErE1GzlQxzx6+tXcoI6G6zM1t7nGxQ8yPMazSWhCXqs/b4+xOECIrGnFxmkVbSz9ybbZk=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(346002)(39860400002)(376002)(451199021)(40470700004)(36840700001)(46966006)(2616005)(336012)(426003)(2906002)(36860700001)(26005)(1076003)(8936002)(41300700001)(81166007)(5660300002)(83380400001)(44832011)(8676002)(47076005)(316002)(82740400003)(40460700003)(6916009)(4326008)(70586007)(70206006)(356005)(186003)(36756003)(86362001)(54906003)(40480700001)(6666004)(478600001)(82310400005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 20:03:19.7557
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dcc40ebc-cf7f-4ede-478e-08db4a7f1c4c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT057.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB8465

This series introduces SMMU handling for PCIe passthrough on ARM. These patches
are independent of (and do not depend on) the in-progress vPCI reference
counting/locking work, and can be upstreamed independently. I have
cherry-picked them from various downstream branches, rebased them, and made
minor changes since the cherry-pick; details are in the individual patches. The
changes are designated "downstream->v1" even though this is the first time the
patches appear on the list.

Oleksandr Andrushchenko (1):
  xen/arm: smmuv2: Add PCI devices support for SMMUv2

Oleksandr Tyshchenko (4):
  xen/arm: Move is_protected flag to struct device
  iommu/arm: Add iommu_dt_xlate()
  iommu/arm: Introduce iommu_add_dt_pci_device API
  pci/arm: Use iommu_add_dt_pci_device() instead of arch hook

Rahul Singh (1):
  xen/arm: smmuv3: Add PCI devices support for SMMUv3

 xen/arch/arm/domain_build.c              |   4 +-
 xen/arch/arm/include/asm/device.h        |  13 ++
 xen/common/device_tree.c                 |   2 +-
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |   8 +-
 xen/drivers/passthrough/arm/smmu-v3.c    |  73 +++++++-
 xen/drivers/passthrough/arm/smmu.c       | 109 +++++++++---
 xen/drivers/passthrough/device_tree.c    | 202 ++++++++++++++++++++---
 xen/drivers/passthrough/pci.c            |  19 ++-
 xen/include/xen/device_tree.h            |  38 +++--
 xen/include/xen/iommu.h                  |   6 +-
 10 files changed, 395 insertions(+), 79 deletions(-)

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:03:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:03:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528261.821187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkJ-0005cr-Pz; Mon, 01 May 2023 20:03:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528261.821187; Mon, 01 May 2023 20:03:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkJ-0005ci-MW; Mon, 01 May 2023 20:03:39 +0000
Received: by outflank-mailman (input) for mailman id 528261;
 Mon, 01 May 2023 20:03:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nwpa=AW=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1ptZkI-000525-PU
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:03:38 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2062f.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 41eb799f-e85b-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:03:37 +0200 (CEST)
Received: from BN8PR16CA0024.namprd16.prod.outlook.com (2603:10b6:408:4c::37)
 by CO6PR12MB5409.namprd12.prod.outlook.com (2603:10b6:5:357::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Mon, 1 May
 2023 20:03:34 +0000
Received: from BN8NAM11FT091.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:4c:cafe::db) by BN8PR16CA0024.outlook.office365.com
 (2603:10b6:408:4c::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30 via Frontend
 Transport; Mon, 1 May 2023 20:03:34 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT091.mail.protection.outlook.com (10.13.176.134) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.20 via Frontend Transport; Mon, 1 May 2023 20:03:33 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:03:33 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 13:03:33 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 1 May 2023 15:03:32 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41eb799f-e85b-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LV0L+cf9/+6ujJZdLT8zIve3cMVi3MpH41QJrGA7CHEm4Q4bUgznShMOcJt80isbi9oScwHO1tQ1yyN3pIDsqUe+E7XujEyBxWcPnls5z+p9VQWlKlB70yRnCYYzwqn871TmJIxOy5oJAD66NMnTqB9khf4gHY9KBMLyvXp4XL5Ee1wlPhyptlXCJShRqGTkbLEf8GdJDo9mTysBEVa+lLk38YgZS1ro4EHmMkSO9NBNVvo7m1Fm50VriNcN/N166twvnQqR8dfTvRfX+bv71eowKXkprixLgMQDcnAqoFplhmJD96R7otYw+YqwGqEZyPyOyN5DRxs/iuZLJQGgOA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FH3cbOq7w4Ry4gUimwrCXWO3YUo9mU/EEVtFtBj29+A=;
 b=Twdih1oH9w/ylQfYLwykufELuND7MvGz+L/nnqynK0t/AHIlrRbSbnnDENCuAAxSoEjoypXDVhFIttZhiDZLBETdIcTw6nO0TnVwRpa9jkKPWZW4DCKwl1wjZMpKibXe7M8eAM+MJKUk1FYPGQNs6JksM9ikY26W3FebZPogpwDfAtZgYYAkeNnJDHykR6pIUIaaEmyCr49zLoQ0t5MVim3EKe4TrYcvlOo73xznYS6t7HDiMWvpJb4wOde5rfFMv6YiQOsEX7EIAeXoOhCvGsfAzXmo1aEhxF58RSkQ0ndqxKfkdAy6P9UBNxOqc3BfqeBiRos0SyPNOZEToalGKA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FH3cbOq7w4Ry4gUimwrCXWO3YUo9mU/EEVtFtBj29+A=;
 b=gb1DePfI6A2wEm5sog39kHFHnEzCTWnFg/y79Tfmwd7O9JC+dexz5WD0EsN0L/u8sVvH8KofezLyqcwpLE8qO91o7KxMBBkDCsIyLBGw2t5HgjJm+krxhwy1KoPEjXQAaBhMp3tWIBHwnj45KRfvobpb7zEt3gXCtG99yI2ZRys=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Stewart Hildebrand
	<stewart.hildebrand@amd.com>
Subject: [PATCH v1 2/6] iommu/arm: Add iommu_dt_xlate()
Date: Mon, 1 May 2023 16:03:01 -0400
Message-ID: <20230501200305.168058-3-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230501200305.168058-1-stewart.hildebrand@amd.com>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT091:EE_|CO6PR12MB5409:EE_
X-MS-Office365-Filtering-Correlation-Id: 8cebc486-355f-4c39-d618-08db4a7f24ac
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	haDaEyABepE2+Gd97f0WweYm957fdnjDqu1y9Tqk61FEjA6lT5xKt7LwNaydMWmDZ7I77xuy/Eucn1nzrilsG0x2MRPlZ6GUNpGtsMeapI+R0MoHXR3v4yg5xJqQueT06qoj833NWBAwI62BrhrxtNdOHfto+V82EP0/L/v2ZuytWl7rlVhtXb5byFjLVkqE/KxDbi7zuysm+xt8CvtwkHcoMTWcoypxKsz4vv7NeDQHM5vqtM/+pOwZ7AsqhX3Ny6GugI6iHPgOmNAqZVSCB7x2Z2HFeDhqkfmYy19JhxAcWpRxvc0xIgmPLQhAu88/hP8/U0ThVo73FAtloxfAO5lwm0JcyJCwHnehxPrnAdqyEKkeLbdgG26ng5sro0C/Yt4490vxFu9bsyRG1dhnPYZM+7iG/lB6WcAHLMI2VnJHIZkR9nLJWlqCAkP4a11l3Y/blD/UUAP6avM1NlzZp3hldQIKacjtEzL5HCfdqCV3XztBx2fogf5qh2q+Rq+OEU9nl0jh1fJNf5UOJQi6/3y2crM5H9gQYArHlJlHZynt/H9se1LT72kJtXhog26qf1OZoCGRwLC1whvSvosNTBqOenFL7jlzrr8PY6KFwBbrMJbFV2QrlssW9NUmz/GY5kr/8IaE+0BJuDAzBdRWxPlCk5DmC/vcNQ1jRuHbXBVeLPQhh9Hi6B3i4oTNk231YaSN2ECcCmHuILieZvkqmzW1Fj9Sk6Ec301LZWjMIIo=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(136003)(346002)(376002)(451199021)(46966006)(36840700001)(40470700004)(40480700001)(82310400005)(40460700003)(316002)(2616005)(4326008)(1076003)(336012)(426003)(70586007)(41300700001)(47076005)(83380400001)(36860700001)(26005)(478600001)(54906003)(70206006)(6916009)(6666004)(966005)(186003)(356005)(82740400003)(2906002)(36756003)(81166007)(86362001)(44832011)(8936002)(8676002)(5660300002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 20:03:33.9197
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8cebc486-355f-4c39-d618-08db4a7f24ac
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT091.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO6PR12MB5409

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Move the code for processing a DT IOMMU specifier to a separate helper.
This helper will be re-used for adding PCI devices in subsequent
patches, as we will need exactly the same actions for processing
a DT PCI-IOMMU specifier.

While at it, introduce NO_IOMMU to avoid the magic "1".

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com> # rename
---
downstream->v1:
* trivial rebase
* s/dt_iommu_xlate/iommu_dt_xlate/

(cherry picked from commit c26bab0415ca303df86aba1d06ef8edc713734d3 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/drivers/passthrough/device_tree.c | 42 +++++++++++++++++----------
 1 file changed, 27 insertions(+), 15 deletions(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index b5bd13393b56..1b50f4670944 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -127,15 +127,39 @@ int iommu_release_dt_devices(struct domain *d)
     return 0;
 }
 
+/* This correlation must not be altered */
+#define NO_IOMMU    1
+
+static int iommu_dt_xlate(struct device *dev,
+                          struct dt_phandle_args *iommu_spec)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    int rc;
+
+    if ( !dt_device_is_available(iommu_spec->np) )
+        return NO_IOMMU;
+
+    rc = iommu_fwspec_init(dev, &iommu_spec->np->dev);
+    if ( rc )
+        return rc;
+
+    /*
+     * Provide DT IOMMU specifier which describes the IOMMU master
+     * interfaces of that device (device IDs, etc) to the driver.
+     * The driver is responsible to decide how to interpret them.
+     */
+    return ops->dt_xlate(dev, iommu_spec);
+}
+
 int iommu_add_dt_device(struct dt_device_node *np)
 {
     const struct iommu_ops *ops = iommu_get_ops();
     struct dt_phandle_args iommu_spec;
     struct device *dev = dt_to_dev(np);
-    int rc = 1, index = 0;
+    int rc = NO_IOMMU, index = 0;
 
     if ( !iommu_enabled )
-        return 1;
+        return NO_IOMMU;
 
     if ( !ops )
         return -EINVAL;
@@ -164,19 +188,7 @@ int iommu_add_dt_device(struct dt_device_node *np)
         if ( !ops->add_device || !ops->dt_xlate )
             return -EINVAL;
 
-        if ( !dt_device_is_available(iommu_spec.np) )
-            break;
-
-        rc = iommu_fwspec_init(dev, &iommu_spec.np->dev);
-        if ( rc )
-            break;
-
-        /*
-         * Provide DT IOMMU specifier which describes the IOMMU master
-         * interfaces of that device (device IDs, etc) to the driver.
-         * The driver is responsible to decide how to interpret them.
-         */
-        rc = ops->dt_xlate(dev, &iommu_spec);
+        rc = iommu_dt_xlate(dev, &iommu_spec);
         if ( rc )
             break;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:03:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:03:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528264.821197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkX-0006HD-2d; Mon, 01 May 2023 20:03:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528264.821197; Mon, 01 May 2023 20:03:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkW-0006H5-W9; Mon, 01 May 2023 20:03:52 +0000
Received: by outflank-mailman (input) for mailman id 528264;
 Mon, 01 May 2023 20:03:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nwpa=AW=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1ptZkV-000525-Du
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:03:51 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20627.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 492ecb6f-e85b-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:03:49 +0200 (CEST)
Received: from MW4P223CA0027.NAMP223.PROD.OUTLOOK.COM (2603:10b6:303:80::32)
 by DM4PR12MB8497.namprd12.prod.outlook.com (2603:10b6:8:180::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Mon, 1 May
 2023 20:03:46 +0000
Received: from CO1NAM11FT049.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:80:cafe::d8) by MW4P223CA0027.outlook.office365.com
 (2603:10b6:303:80::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30 via Frontend
 Transport; Mon, 1 May 2023 20:03:45 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT049.mail.protection.outlook.com (10.13.175.50) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.20 via Frontend Transport; Mon, 1 May 2023 20:03:44 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:03:43 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 13:03:43 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 1 May 2023 15:03:42 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 492ecb6f-e85b-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=itY6LV+fQEhdFUOO4+G6drQvRa474rhgPkD3dElQ0n/CBoSn8ViLrJrFWXL0XthXLcqkYQa4m/3lQeMxjBTXOFk45z2VajOni1UCp732CiKiGaTby8K7KeWqVO2xHu+krlZz+goTIIyTXhNBAcrGp0BQAHLb+InkVvU0efz/ydHGhM4UlwII2d2ZwPSMzNqzKjI+SK422/dSJ9qiJR7EKNCWNUNACoDZsH76dL8ABNjQj94+Gw6gOWAMexnLLfFaqWaX0J7untUGwARtqAz0HAJJwfmYRaAIn77UFHJjg9BGWaP8c2EnagZ5yM8FT0VL/l12GEyQp40dIcJ0WmhaAA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tPTXU+0KLiFp2//TpisX/eYI6MvOPIIjwmGHW7rytQA=;
 b=SCNFZJ+Em/zIOeUhRJtm8k7cfah0bZqbMPBJuS0C8mxSAZydfHaDhI5FTp02ltEQ9QZuR5ijMBtTrTYNeZhn6wo5tuPFmHDFS4ogn9ELo8Dl7qs5qV0dow/bpkMG/LwYgi7bTARS+fhF9QZQCNAF7qufAxusRw9W3lNBk54cWHJrj2S5Z3TCys6Mj8q0xCWJvRA2hdRZMf0IHW0Yz3dPuj9wNiHTUdS6qlLCgigpInM/QAp49sF/vTMG8DW5jDKzfXDCwomhNeL6TyGX0OunrJtUPLAKHUSbNt2Jn8Htz1TgrZGpGjubApSmx7Z5SV4F5MtPzDN/FoBRri1x8RMQKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tPTXU+0KLiFp2//TpisX/eYI6MvOPIIjwmGHW7rytQA=;
 b=jNewda7fLrpIonms0NAPfIMB+Z/unDJRI/jFgpuewQo+7kr+Wxp+YpQVJhBtUKHry0KPItf/CDe3kdm6JcTvwq8uLRwTRxba1hvi01bgtncVEnUj2h3S0XSULc8bfTz7NDDxldtMPNx9FFe+Ztvt50Npabv6+E23nvRJhvMJLdc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, "Stewart
 Hildebrand" <stewart.hildebrand@amd.com>
Subject: [PATCH v1 3/6] iommu/arm: Introduce iommu_add_dt_pci_device API
Date: Mon, 1 May 2023 16:03:02 -0400
Message-ID: <20230501200305.168058-4-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230501200305.168058-1-stewart.hildebrand@amd.com>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT049:EE_|DM4PR12MB8497:EE_
X-MS-Office365-Filtering-Correlation-Id: 47adc05a-10e5-4517-7712-08db4a7f2b49
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zETcY6KrC2KwwoH7dFvA7/l0iOb8nsC9CPbbncqRqnkpP6ZYaumC5eRjNFJsaBg+7QwQBFb3n8gHlmr/VJRW2zHNeYWHDJjauwprvXR9kW59I31UxauinjSppIWrZqdvpXCwyXs99UGC98/0XjvJMHMf9OWTl9DyyQKKwLmKxh8QiZ46WgXI10Xo4kdV+KBukgyuXgMj2D4+CFOpeJNbGyUbmRgCwqvLG6fuphV7D5MYHzLw9biDZNTvvR0sKTz2aCZELbgYSHHUuxIFxHlgL66/61mhQD+swFhY9mDXWNuehZkQJIn5zsL3ZQWmyrrR63DrsJD+xH7KCpXJ9Eky88x6s/Kv+mzZl0XwfKmGnJ6Ys80RxFRm7W+Qv6mIgFjkSbAOhQS3jE+L3AJDhu6w/QnPlJmBxkJs+O5PKyeDhOxTaxPfx/c3wy+LmPjmWex5mIVFqFYqeXfgOQyJq31McKkQs2tnw0PIcojYlPVegi61BKuA3tJ6mRqtYYzGd+hTy9g20Z/e191wRRQPw0AI79O1ckbQ/d0mi+i0JpfMmjnMY8Ys4U/B3BsTAzw/eYCQR2NfLQcALrnnxrm5Q9MlSnd0B6VINEF9oSwsByv68zE9yEHxvromlyH+ty8GcxPzDg7BuFnMMdtaXKZtKThOLQywjQter5XpjUlh1YnvGTNAN3uoGMQ7ScYtOQV+1/LO0b9UeGinCtMg3+56aRaNPEBP66TI+6X6rOcGNgsNM4Kr4WMCU/JQHhYZPRPA2zA8RNVUX5Lc75jf/3WoefU3qsYbWHUO0EpQQ/C3LaaLJkQ=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(396003)(346002)(136003)(451199021)(40470700004)(36840700001)(46966006)(356005)(81166007)(8936002)(82310400005)(478600001)(8676002)(82740400003)(6916009)(86362001)(40480700001)(4326008)(70586007)(70206006)(41300700001)(36756003)(54906003)(316002)(40460700003)(36860700001)(2906002)(83380400001)(47076005)(44832011)(966005)(186003)(1076003)(26005)(426003)(336012)(2616005)(5660300002)(6666004)(21314003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 20:03:44.9031
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 47adc05a-10e5-4517-7712-08db4a7f2b49
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT049.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB8497

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The main purpose of this patch is to add a way to register a PCI device
(which is behind the IOMMU) using the generic PCI-IOMMU DT bindings [1]
before assigning that device to a domain.

This behaves in almost the same way as the existing iommu_add_dt_device
API; the difference is in the devices being handled and the DT bindings
being used.

The function of_map_id, which translates an ID through a downstream
mapping (and is also suitable for mapping a Requester ID), was borrowed
from Linux (v5.10-rc6) and adapted to the Xen code base.

XXX: I did not port pci_for_each_dma_alias from Linux, which is part of
the PCI-IOMMU bindings infrastructure, as I don't have a good
understanding of how it is expected to work in the Xen environment.
It is also not completely clear whether we need to distinguish between
different PCI types here (DEV_TYPE_PCI, DEV_TYPE_PCI_HOST_BRIDGE, etc.).
For example, how should we behave if the host bridge doesn't have
a stream ID (i.e. it is not described in the iommu-map property):
simply fail, or bypass translation?

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-iommu.txt

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
downstream->v1:
* rebase
* add const qualifier to struct dt_device_node *np arg in dt_map_id()
* add const qualifier to struct dt_device_node *np declaration in iommu_add_pci_device()
* use stdint.h types instead of u8/u32/etc...
* rename functions:
  s/dt_iommu_xlate/iommu_dt_xlate/
  s/dt_map_id/iommu_dt_pci_map_id/
  s/iommu_add_pci_device/iommu_add_dt_pci_device/
* add device_is_protected check in iommu_add_dt_pci_device
* wrap prototypes in CONFIG_HAS_PCI

(cherry picked from commit 734e3bf6ee77e7947667ab8fa96c25b349c2e1da from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/drivers/passthrough/device_tree.c | 145 ++++++++++++++++++++++++++
 xen/include/xen/device_tree.h         |  25 +++++
 xen/include/xen/iommu.h               |   6 +-
 3 files changed, 175 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 1b50f4670944..ef98f343eef2 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -151,6 +151,151 @@ static int iommu_dt_xlate(struct device *dev,
     return ops->dt_xlate(dev, iommu_spec);
 }
 
+#ifdef CONFIG_HAS_PCI
+int iommu_dt_pci_map_id(const struct dt_device_node *np, uint32_t id,
+                        const char *map_name, const char *map_mask_name,
+                        struct dt_device_node **target, uint32_t *id_out)
+{
+    uint32_t map_mask, masked_id, map_len;
+    const __be32 *map = NULL;
+
+    if ( !np || !map_name || (!target && !id_out) )
+        return -EINVAL;
+
+    map = dt_get_property(np, map_name, &map_len);
+    if ( !map )
+    {
+        if ( target )
+            return -ENODEV;
+        /* Otherwise, no map implies no translation */
+        *id_out = id;
+        return 0;
+    }
+
+    if ( !map_len || map_len % (4 * sizeof(*map)) )
+    {
+        printk(XENLOG_ERR "%pOF: Error: Bad %s length: %d\n", np,
+            map_name, map_len);
+        return -EINVAL;
+    }
+
+    /* The default is to select all bits. */
+    map_mask = 0xffffffff;
+
+    /*
+     * Can be overridden by "{iommu,msi}-map-mask" property.
+     * If of_property_read_u32() fails, the default is used.
+     */
+    if ( map_mask_name )
+        dt_property_read_u32(np, map_mask_name, &map_mask);
+
+    masked_id = map_mask & id;
+    for ( ; (int)map_len > 0; map_len -= 4 * sizeof(*map), map += 4 )
+    {
+        struct dt_device_node *phandle_node;
+        uint32_t id_base = be32_to_cpup(map + 0);
+        uint32_t phandle = be32_to_cpup(map + 1);
+        uint32_t out_base = be32_to_cpup(map + 2);
+        uint32_t id_len = be32_to_cpup(map + 3);
+
+        if ( id_base & ~map_mask )
+        {
+            printk(XENLOG_ERR "%pOF: Invalid %s translation - %s-mask (0x%x) ignores id-base (0x%x)\n",
+                   np, map_name, map_name, map_mask, id_base);
+            return -EFAULT;
+        }
+
+        if ( masked_id < id_base || masked_id >= id_base + id_len )
+            continue;
+
+        phandle_node = dt_find_node_by_phandle(phandle);
+        if ( !phandle_node )
+            return -ENODEV;
+
+        if ( target )
+        {
+            if ( !*target )
+                *target = phandle_node;
+
+            if ( *target != phandle_node )
+                continue;
+        }
+
+        if ( id_out )
+            *id_out = masked_id - id_base + out_base;
+
+        printk(XENLOG_DEBUG "%pOF: %s, using mask %08x, id-base: %08x, out-base: %08x, length: %08x, id: %08x -> %08x\n",
+               np, map_name, map_mask, id_base, out_base, id_len, id,
+               masked_id - id_base + out_base);
+        return 0;
+    }
+
+    printk(XENLOG_ERR "%pOF: no %s translation for id 0x%x on %pOF\n",
+           np, map_name, id, target && *target ? *target : NULL);
+
+    /*
+     * NOTE: Linux bypasses translation without returning an error here,
+     * but should we behave in the same way on Xen? Restrict for now.
+     */
+    return -EFAULT;
+}
+
+int iommu_add_dt_pci_device(uint8_t devfn, struct pci_dev *pdev)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    struct dt_phandle_args iommu_spec = { .args_count = 1 };
+    struct device *dev = pci_to_dev(pdev);
+    const struct dt_device_node *np;
+    int rc = NO_IOMMU;
+
+    if ( !iommu_enabled )
+        return NO_IOMMU;
+
+    if ( !ops )
+        return -EINVAL;
+
+    if ( device_is_protected(dev) )
+        return 0;
+
+    if ( dev_iommu_fwspec_get(dev) )
+        return -EEXIST;
+
+    np = pci_find_host_bridge_node(pdev);
+    if ( !np )
+        return -ENODEV;
+
+    /*
+     * Translate the requester ID as described in
+     * Documentation/devicetree/bindings/pci/pci-iommu.txt from Linux.
+     */
+    rc = iommu_dt_pci_map_id(np, PCI_BDF(pdev->bus, devfn), "iommu-map",
+                             "iommu-map-mask", &iommu_spec.np, iommu_spec.args);
+    if ( rc )
+        return rc == -ENODEV ? NO_IOMMU : rc;
+
+    /*
+     * A driver which supports the generic PCI-IOMMU DT bindings must have
+     * these callbacks implemented.
+     */
+    if ( !ops->add_device || !ops->dt_xlate )
+        return -EINVAL;
+
+    rc = iommu_dt_xlate(dev, &iommu_spec);
+
+    /*
+     * Add the master device to the IOMMU if the latter is present and
+     * available. The driver is responsible for marking it as protected.
+     */
+    if ( !rc )
+        rc = ops->add_device(devfn, dev);
+
+    if ( rc < 0 )
+        iommu_fwspec_free(dev);
+
+    return rc;
+}
+#endif /* CONFIG_HAS_PCI */
+
 int iommu_add_dt_device(struct dt_device_node *np)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index c1e4751a581f..dc40fdfb9231 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -852,6 +852,31 @@ int dt_count_phandle_with_args(const struct dt_device_node *np,
  */
 int dt_get_pci_domain_nr(struct dt_device_node *node);
 
+#ifdef CONFIG_HAS_PCI
+/**
+ * iommu_dt_pci_map_id - Translate an ID through a downstream mapping.
+ * @np: root complex device node.
+ * @id: device ID to map.
+ * @map_name: property name of the map to use.
+ * @map_mask_name: optional property name of the mask to use.
+ * @target: optional pointer to a target device node.
+ * @id_out: optional pointer to receive the translated ID.
+ *
+ * Given a device ID, look up the appropriate implementation-defined
+ * platform ID and/or the target device which receives transactions on that
+ * ID, as per the "iommu-map" and "msi-map" bindings. Either of @target or
+ * @id_out may be NULL if only the other is required. If @target points to
+ * a non-NULL device node pointer, only entries targeting that node will be
+ * matched; if it points to a NULL value, it will receive the device node of
+ * the first matching target phandle, with a reference held.
+ *
+ * Return: 0 on success or a standard error code on failure.
+ */
+int iommu_dt_pci_map_id(const struct dt_device_node *np, uint32_t id,
+                        const char *map_name, const char *map_mask_name,
+                        struct dt_device_node **target, uint32_t *id_out);
+#endif /* CONFIG_HAS_PCI */
+
 struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle);
 
 #ifdef CONFIG_DEVICE_TREE_DEBUG
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 405db59971c5..d1b91ec13056 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -219,7 +219,8 @@ int iommu_dt_domain_init(struct domain *d);
 int iommu_release_dt_devices(struct domain *d);
 
 /*
- * Helper to add master device to the IOMMU using generic IOMMU DT bindings.
+ * Helpers to add master device to the IOMMU using generic (PCI-)IOMMU
+ * DT bindings.
  *
  * Return values:
  *  0 : device is protected by an IOMMU
@@ -228,6 +229,9 @@ int iommu_release_dt_devices(struct domain *d);
  *      (IOMMU is not enabled/present or device is not connected to it).
  */
 int iommu_add_dt_device(struct dt_device_node *np);
+#ifdef CONFIG_HAS_PCI
+int iommu_add_dt_pci_device(uint8_t devfn, struct pci_dev *pdev);
+#endif
 
 int iommu_do_dt_domctl(struct xen_domctl *, struct domain *,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:03:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:03:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528267.821206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkd-0006ht-BY; Mon, 01 May 2023 20:03:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528267.821206; Mon, 01 May 2023 20:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkd-0006hf-8Y; Mon, 01 May 2023 20:03:59 +0000
Received: by outflank-mailman (input) for mailman id 528267;
 Mon, 01 May 2023 20:03:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nwpa=AW=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1ptZkb-000525-TZ
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:03:57 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20621.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4da851d7-e85b-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:03:57 +0200 (CEST)
Received: from MW4PR03CA0347.namprd03.prod.outlook.com (2603:10b6:303:dc::22)
 by PH7PR12MB6762.namprd12.prod.outlook.com (2603:10b6:510:1ac::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Mon, 1 May
 2023 20:03:54 +0000
Received: from CO1NAM11FT009.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:dc:cafe::6f) by MW4PR03CA0347.outlook.office365.com
 (2603:10b6:303:dc::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30 via Frontend
 Transport; Mon, 1 May 2023 20:03:53 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT009.mail.protection.outlook.com (10.13.175.61) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.20 via Frontend Transport; Mon, 1 May 2023 20:03:53 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:03:52 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 1 May 2023 15:03:51 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4da851d7-e85b-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kROiZiIA4cLY5yWmMHra6ZecDER05QnXBno6CzGvKXzhwDgCyKrGVlEPPgn/rNBwAgR1jrCnRxA2hQQOWonC+FtQCrJGCHlGXs6pznkhTTM5jiY6TtIwXK7w8uC8PgowZKMEchEkURt7km7cZs/lDfuZh3PMUiq1VO6/XkxD5HiirYAqh1GN3NDD72RnW7WkhZ4uEnAVKu2FYBl1TavXZNNGLskiOks0LbTYnevVJzklDh6np18XRLv0qm4Xj2Xzavk8pFWVgd4/h5s0et2N6Umxn2gML0nGqzxKxdk/9SkOMwPViUZNlj962I4soEyAcO0DquBlHo5IxPvUcuVz9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eCX9DqGe9db1MXdGpEVHFk+gaS1vUShIki9NWo1vZU8=;
 b=XO7Yk9gqBVOU/nHgCjzj2TFvEhNxvXbSdpfvjGNIgyuCp0TF/Z7ULuNkGAZqor4CUMI9bAt0QR0C4EAL30I5vDnD6adLd+P2NZCW1X0mQaD5NW/7xzab9Lsscv3JcRVEk22CgbmM6rzrPvTOo3juMicDJKU4slGdTMcsQSE/3KtQ+BxLCpEF+MW7MFJUg1KxXhivuwbezGodE/UIpM07M10SuQ4xKE3f1FvCpP6XHyKJNor3AVrfQ+qk5B2LbmK0bO2NrD9IUtSkX4yyigj9EKxyyOQp+DvYSzajyrj+UuRkdk7G5Eweg+f6N5WKfvetMzbo2O7gSWn08PtDelfsVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eCX9DqGe9db1MXdGpEVHFk+gaS1vUShIki9NWo1vZU8=;
 b=u6LVLwI4G8an8UAnAZJc5KrGq37sRtf2keRD7tgB68K13JORbuPahCiJNZ6Ad9RhA0Feglb28tXME2tbuPOJy3ypGAdzflHQx8fWEilft0to8nrEo+78umjhBK8W7CKthmZeAFjagIGilBKXkpZEwnIacYug4vOBoxGGKgpJLnE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, "Stewart
 Hildebrand" <stewart.hildebrand@amd.com>
Subject: [PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of arch hook
Date: Mon, 1 May 2023 16:03:03 -0400
Message-ID: <20230501200305.168058-5-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230501200305.168058-1-stewart.hildebrand@amd.com>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT009:EE_|PH7PR12MB6762:EE_
X-MS-Office365-Filtering-Correlation-Id: 0ca6b476-853d-4ff2-67e4-08db4a7f304c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7g/OblPUtzx8aUQCo+wrnlELHWK8d/3yT/7x8z4fxDDsK6vZilApLcJcT1ONO5Xd36v6KASu6wpfvxrxTHAxnKr8jr99rlNe0xqiOOIexd1PYYUU80stkiJL+eevWuQCTTRCt/2qeVIB0VA+DpSvdxfNiCXkxx64MpjaJ/JwO3tNXNOQmk07zDkS0wdNpv/T2u45o3YnX/2BelJFF/DmVQzCYk43H4ygqUGshEmtRax/1fA5DPXXQ+4MShljL6k8ze+OZDwYb25jJH5n/wEfXZC3Tje2q5/LfKahoH31IXJkg7EnAj907GcBe8gTgpDP7bYSZhmn++zi1ksBozYDaGfyfPHDxpc3MPdzj/7DdbnuHVcfBQ9ZFY+GWbxA6A8a8Vcgjla27SC4iLaMDJflgEsV9jowhF9eC3wcWLBJ9cu+77OojR2MsCE6mtEfF9cqXDuDVDS+dEnueYdLUyjvzyt+VWaMxZIR/omB9qIrvAhO2b/Rvvnlc+yxFSHCprelCo2BCIE033BQkrsECbMPew88JsshxhThwu5yb0FMUmfIO8sH8TgxXQ5jrC8OdXsMAtmazRH+8Ho2M+zY25Gkew2slxPNyVXPQfRwC4a7zVPqeCIp9rY/TWV8yOZfJ+riT0ehFfgbmywyk2YSdQo1VsvRiNAHMuXKSrPx202NDqU+5eRRNUwI3zYKwsIrhMMpX5oRCZKWSF1WL25N8LAnlZq+4WvGtnE80/xeXHi92Zk=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(136003)(396003)(39860400002)(451199021)(46966006)(36840700001)(40470700004)(86362001)(36756003)(81166007)(82740400003)(356005)(8676002)(5660300002)(8936002)(41300700001)(4326008)(70206006)(316002)(6916009)(44832011)(2906002)(40480700001)(70586007)(82310400005)(336012)(2616005)(47076005)(426003)(40460700003)(36860700001)(6666004)(966005)(54906003)(478600001)(186003)(83380400001)(1076003)(26005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 20:03:53.3266
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0ca6b476-853d-4ff2-67e4-08db4a7f304c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT009.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6762

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

On Arm, we need to parse the DT PCI-IOMMU specifier (which describes
the relationship between PCI devices and IOMMUs) and provide it to the
IOMMU driver before adding a device to it.

Also clarify the return value check, as iommu_add_dt_pci_device() can
return >0 if a device doesn't need to be protected by the IOMMU, and
print a warning if iommu_add_dt_pci_device() fails.
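
The three-way return convention described above (negative = hard
failure worth a warning, positive = device doesn't need IOMMU
protection, zero = device added) can be sketched as follows; the
helper and enum names are hypothetical, for illustration only:

```c
#include <assert.h>

/*
 * Hypothetical classification of the iommu_add_dt_pci_device() return
 * value, mirroring the convention in the commit message:
 *   rc < 0  -> hard failure; the caller logs a warning and propagates
 *   rc > 0  -> device does not need IOMMU protection; not an error
 *   rc == 0 -> device successfully added to the IOMMU
 */
enum add_result { ADD_FAILED, ADD_SKIPPED, ADD_OK };

static enum add_result classify_add_rc(int rc)
{
    if ( rc < 0 )
        return ADD_FAILED;

    return rc > 0 ? ADD_SKIPPED : ADD_OK;
}
```

This is why the patch changes `if ( rc ... )` to `if ( rc < 0 ... )`:
only the negative case is an error.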

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
downstream->v1:
* rebase
* add __maybe_unused attribute to const struct domain_iommu *hd;
* Rename: s/iommu_add_pci_device/iommu_add_dt_pci_device/
* guard iommu_add_dt_pci_device call with CONFIG_HAS_DEVICE_TREE instead of
  CONFIG_ARM

(cherry picked from commit 2b9d26badab8b24b5a80d028c4499a5022817213 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/drivers/passthrough/pci.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index b42acb8d7c09..ed5a6ede7847 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1305,7 +1305,7 @@ __initcall(setup_dump_pcidevs);
 
 static int iommu_add_device(struct pci_dev *pdev)
 {
-    const struct domain_iommu *hd;
+    const struct domain_iommu *hd __maybe_unused;
     int rc;
     unsigned int devfn = pdev->devfn;
 
@@ -1318,17 +1318,30 @@ static int iommu_add_device(struct pci_dev *pdev)
     if ( !is_iommu_enabled(pdev->domain) )
         return 0;
 
+#ifdef CONFIG_HAS_DEVICE_TREE
+    rc = iommu_add_dt_pci_device(devfn, pdev);
+#else
     rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
-    if ( rc || !pdev->phantom_stride )
+#endif
+    if ( rc < 0 || !pdev->phantom_stride )
+    {
+        if ( rc < 0 )
+            printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
+                   &pdev->sbdf, rc);
         return rc;
+    }
 
     for ( ; ; )
     {
         devfn += pdev->phantom_stride;
         if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
             return 0;
+#ifdef CONFIG_HAS_DEVICE_TREE
+        rc = iommu_add_dt_pci_device(devfn, pdev);
+#else
         rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
-        if ( rc )
+#endif
+        if ( rc < 0 )
             printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
                    &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
     }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:04:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:04:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528269.821217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkl-0007EO-R2; Mon, 01 May 2023 20:04:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528269.821217; Mon, 01 May 2023 20:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZkl-0007EB-Ni; Mon, 01 May 2023 20:04:07 +0000
Received: by outflank-mailman (input) for mailman id 528269;
 Mon, 01 May 2023 20:04:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nwpa=AW=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1ptZkl-000525-2K
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:04:07 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::60a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 52e8df08-e85b-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:04:06 +0200 (CEST)
Received: from MW4PR04CA0113.namprd04.prod.outlook.com (2603:10b6:303:83::28)
 by MW4PR12MB8611.namprd12.prod.outlook.com (2603:10b6:303:1ed::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Mon, 1 May
 2023 20:04:02 +0000
Received: from CO1NAM11FT094.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:83:cafe::eb) by MW4PR04CA0113.outlook.office365.com
 (2603:10b6:303:83::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30 via Frontend
 Transport; Mon, 1 May 2023 20:04:02 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT094.mail.protection.outlook.com (10.13.174.161) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.20 via Frontend Transport; Mon, 1 May 2023 20:04:02 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:04:01 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 1 May 2023 15:04:00 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52e8df08-e85b-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZOVMIRE2X7DPxKGPPVVzf7wJDjSP53mki8qz8xlrnp5JOFFthx4rcshJAdot3F2v8sraPYe6ZTyzYNF+DXw7GyeEth/skLrL3MSTQhLi4P1ByfBifMQdyMEM0orS0DCijZcriPoSooiyp7c8dAYxiAfvF+KobmA22oVi3PY7dhmzjoibJip1ITuFpRWGqd2a0UiOfG7+89fQa86ZUctZX8DX/kvfc1a7dlnnq7mJjssbnnzzPiOUsSeoPKpwr8lqOk1QvX5N3iHuaRBPgeBDx7u6W8lo5G9dH2yR5jRxNEJ/DR3Ya5CQjiPfsBJ+y6CwEIuWu33EjsvQVQarYxRCGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kw1Pv3wUj5s9KFrgoBEyVCO4rA7mTJkBr2g4lQl2H8w=;
 b=EYGB6wIAiZ9jLGggDl6UEyNcXjvDUmK119KnWwRtAeFoYhlRd3kdoxJGliaklqmMabLqrk3Ks4MQKN+WYzcBpXdf0PToK7Q50kA/4/lV7kc302Z/2nnYJfA0RG+yThRWtlkL3ZPwR2wnUXn/RbrQjdEVjR6MTtO9s0Ym6/8O+ziANJtf+fhQJQOABAKANcWApT6wLsnXotcjVKwvCH/aIW7h3f1wk7fW+N+6az8pAuIbGX6w4qDjck04HGrP6dDeXnjws5Oo1Yg+drdvfhDVnUiOMuBapkvWYHYrmOARp0nyRJblCqK2yPzs1j5L+F6DseDCTFIsoIJLhJhGnQKG5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kw1Pv3wUj5s9KFrgoBEyVCO4rA7mTJkBr2g4lQl2H8w=;
 b=I38jmhxkLb05puhi8QlR6wZNDFyOaw+N1ZD7/elqOyzmNG/cWCNJEXk27AM45lHr7boYHJ14PwP2IJUDgio1r6e2YRUqbZLhIqEDmFcU+KcUf6Ly4nA7g/yAAfWpnwcUW1e1cD1tNGnZ1szY7Vvlemau4UPx3gYZmjFvW3SAEXg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Julien Grall
	<julien@xen.org>, Rahul Singh <rahul.singh@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Stewart Hildebrand
	<stewart.hildebrand@amd.com>
Subject: [PATCH v1 5/6] xen/arm: smmuv2: Add PCI devices support for SMMUv2
Date: Mon, 1 May 2023 16:03:04 -0400
Message-ID: <20230501200305.168058-6-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230501200305.168058-1-stewart.hildebrand@amd.com>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT094:EE_|MW4PR12MB8611:EE_
X-MS-Office365-Filtering-Correlation-Id: ed2ed5ad-8a9a-413d-094a-08db4a7f35af
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+ZUzcqKSKb3/OI8rYGlrEmYKUZY8tcNjGtQFCxHTOLyAGpS6qYwFdkOuL5xEmiDaIC2/OBSuQq8Eo+olxWxjMg5Z/qETCyJi9SoCY/ggxZuCty/Eq8/JziUP97iIPffxLASYoGSachstTALfjB3ZLEoAqgFk11dgWIxLjWayvKpn3u3vmhulbijKxOdeWN2jYaDDJ9PT8KDHc0pfNf9kshGQL3jy+fru5srpfrhlUxKEukpkJfneyl9XUcMr8gEsXVgptqDkn712vFmhdTUJecfLc7HAd/HzX/dnQu8W9pGm7IMq6mSph5gCMbxealJ3jit53ZrAGzbTDtGTgS4y+A/c9om+OG34XGg4K/wPP9foXxE9//4mPvVTe2xf1ShDgFQWAjXlXhuURUvXftraPkYvWJlM0/kAyB9M3t2MMyBUxRLCJj6BZzURVmAMDr8gX+w01V5+HwWX8clRlfoljYBQDMnYWYOz37TkRTLci6O+aMDjnoyx7uetEY1hlTcVc+xRi8lauClr2wXpqf5oE87CM1tyUQm+vqACIoGyPTljpu+1GMgyGF9vHtvKc1OHdLa9U9luW3X3jSLqErARM8l0F0iDdS40QupU7YIZPML3n5XYCKJVq7fwYr5Lx+7zyhPEUr53pthe1RqkGY/cyt/odC2ihj2K5JjZySs/pNMv74yshT5czVNSram4dcDnAXXJMJvQVrW+k3oQiZeqeLwH3toGERurUYlXDbQVwqIMttN/jTwPfdASIQvYfHe3acku1MMWSH/VxEvbS0VCs34pyJtTYcO85n6BdgXWv/E=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(39860400002)(376002)(451199021)(36840700001)(46966006)(40470700004)(4326008)(6916009)(82740400003)(186003)(5660300002)(36756003)(316002)(70586007)(70206006)(2906002)(41300700001)(356005)(81166007)(966005)(86362001)(83380400001)(47076005)(478600001)(426003)(336012)(54906003)(2616005)(1076003)(26005)(8936002)(82310400005)(36860700001)(40460700003)(8676002)(40480700001)(44832011)(21314003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 20:04:02.3460
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ed2ed5ad-8a9a-413d-094a-08db4a7f35af
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT094.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB8611

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
downstream->v1:
* wrap unused functions in #if 0
* remove the remove_device() stub since it was submitted separately to the list
  [XEN][PATCH v5 07/17] xen/smmu: Add remove_device callback for smmu_iommu ops
  https://lists.xenproject.org/archives/html/xen-devel/2023-04/msg00432.html
* arm_smmu_(de)assign_dev: return error instead of crashing system
* update condition in arm_smmu_reassign_dev
* style fixup
* add && !is_hardware_domain(d) into condition in arm_smmu_assign_dev()

(cherry picked from commit 0c11a7f65f044c26d87d1e27ac6283ef1f9cfb7a from
 the downstream branch spider-master from
 https://github.com/xen-troops/xen.git)
---

This is a file imported from Linux with modifications for Xen. What should be
the coding style for Xen modifications?
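
For context on the question, the two styles in play differ mainly in
indentation and spacing; a minimal illustration (the helper function
itself is hypothetical):

```c
#include <stdbool.h>

/*
 * Hypothetical helper written in Xen's CODING_STYLE: 4-space indents
 * and spaces inside the condition parentheses ("if ( cond )").
 * The Linux-style equivalent would use tab indentation and "if (cond)",
 * which is what the imported bulk of smmu.c uses.
 */
static bool in_range(unsigned int val, unsigned int min, unsigned int max)
{
    if ( val < min || val > max )
        return false;

    return true;
}
```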
---
 xen/drivers/passthrough/arm/smmu.c | 107 +++++++++++++++++++++++------
 1 file changed, 86 insertions(+), 21 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 5b6024d579a8..c33f583f424a 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -134,8 +134,20 @@ typedef enum irqreturn irqreturn_t;
 /* Device logger functions
  * TODO: Handle PCI
  */
-#define dev_print(dev, lvl, fmt, ...)						\
-	 printk(lvl "smmu: %s: " fmt, dt_node_full_name(dev_to_dt(dev)), ## __VA_ARGS__)
+#ifndef CONFIG_HAS_PCI
+#define dev_print(dev, lvl, fmt, ...)    \
+    printk(lvl "smmu: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#else
+#define dev_print(dev, lvl, fmt, ...) ({                                \
+    if ( !dev_is_pci((dev)) )                                           \
+        printk(lvl "smmu: %s: " fmt, dev_name((dev)), ## __VA_ARGS__);  \
+    else                                                                \
+    {                                                                   \
+        struct pci_dev *pdev = dev_to_pci((dev));                       \
+        printk(lvl "smmu: %pp: " fmt, &pdev->sbdf, ## __VA_ARGS__);     \
+    }                                                                   \
+})
+#endif
 
 #define dev_dbg(dev, fmt, ...) dev_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
 #define dev_notice(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
@@ -187,6 +199,7 @@ static void __iomem *devm_ioremap_resource(struct device *dev,
  * Xen: PCI functions
  * TODO: It should be implemented when PCI will be supported
  */
+#if 0 /* unused */
 #define to_pci_dev(dev)	(NULL)
 static inline int pci_for_each_dma_alias(struct pci_dev *pdev,
 					 int (*fn) (struct pci_dev *pdev,
@@ -196,6 +209,7 @@ static inline int pci_for_each_dma_alias(struct pci_dev *pdev,
 	BUG();
 	return 0;
 }
+#endif
 
 /* Xen: misc */
 #define PHYS_MASK_SHIFT		PADDR_BITS
@@ -632,7 +646,7 @@ struct arm_smmu_master_cfg {
 	for (i = 0; idx = cfg->smendx[i], i < num; ++i)
 
 struct arm_smmu_master {
-	struct device_node		*of_node;
+	struct device			*dev;
 	struct rb_node			node;
 	struct arm_smmu_master_cfg	cfg;
 };
@@ -724,7 +738,7 @@ arm_smmu_get_fwspec(struct arm_smmu_master_cfg *cfg)
 {
 	struct arm_smmu_master *master = container_of(cfg,
 			                                      struct arm_smmu_master, cfg);
-	return dev_iommu_fwspec_get(&master->of_node->dev);
+	return dev_iommu_fwspec_get(master->dev);
 }
 
 static void parse_driver_options(struct arm_smmu_device *smmu)
@@ -757,7 +771,7 @@ static struct device_node *dev_get_dev_node(struct device *dev)
 }
 
 static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu,
-						struct device_node *dev_node)
+						struct device *dev)
 {
 	struct rb_node *node = smmu->masters.rb_node;
 
@@ -766,9 +780,9 @@ static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu,
 
 		master = container_of(node, struct arm_smmu_master, node);
 
-		if (dev_node < master->of_node)
+		if (dev < master->dev)
 			node = node->rb_left;
-		else if (dev_node > master->of_node)
+		else if (dev > master->dev)
 			node = node->rb_right;
 		else
 			return master;
@@ -803,9 +817,9 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
 			= container_of(*new, struct arm_smmu_master, node);
 
 		parent = *new;
-		if (master->of_node < this->of_node)
+		if (master->dev < this->dev)
 			new = &((*new)->rb_left);
-		else if (master->of_node > this->of_node)
+		else if (master->dev > this->dev)
 			new = &((*new)->rb_right);
 		else
 			return -EEXIST;
@@ -824,18 +838,18 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 	struct arm_smmu_master *master;
 	struct device_node *dev_node = dev_get_dev_node(dev);
 
-	master = find_smmu_master(smmu, dev_node);
+	master = find_smmu_master(smmu, dev);
 	if (master) {
 		dev_err(dev,
 			"rejecting multiple registrations for master device %s\n",
-			dev_node->name);
+			dev_node ? dev_node->name : "");
 		return -EBUSY;
 	}
 
 	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
 	if (!master)
 		return -ENOMEM;
-	master->of_node = dev_node;
+	master->dev = dev;
 
 	/* Xen: Let Xen know that the device is protected by an SMMU */
 	device_set_protected(dev);
@@ -845,7 +859,7 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 		     (fwspec->ids[i] >= smmu->num_mapping_groups)) {
 			dev_err(dev,
 				"stream ID for master device %s greater than maximum allowed (%d)\n",
-				dev_node->name, smmu->num_mapping_groups);
+				dev_node ? dev_node->name : "", smmu->num_mapping_groups);
 			return -ERANGE;
 		}
 		master->cfg.smendx[i] = INVALID_SMENDX;
@@ -912,11 +926,10 @@ static struct arm_smmu_device *find_smmu_for_device(struct device *dev)
 {
 	struct arm_smmu_device *smmu;
 	struct arm_smmu_master *master = NULL;
-	struct device_node *dev_node = dev_get_dev_node(dev);
 
 	spin_lock(&arm_smmu_devices_lock);
 	list_for_each_entry(smmu, &arm_smmu_devices, list) {
-		master = find_smmu_master(smmu, dev_node);
+		master = find_smmu_master(smmu, dev);
 		if (master)
 			break;
 	}
@@ -2006,6 +2019,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
 }
 #endif
 
+#if 0 /* Not used */
 static int __arm_smmu_get_pci_sid(struct pci_dev *pdev, u16 alias, void *data)
 {
 	*((u16 *)data) = alias;
@@ -2016,6 +2030,7 @@ static void __arm_smmu_release_pci_iommudata(void *data)
 {
 	kfree(data);
 }
+#endif
 
 static int arm_smmu_add_device(struct device *dev)
 {
@@ -2023,12 +2038,13 @@ static int arm_smmu_add_device(struct device *dev)
 	struct arm_smmu_master_cfg *cfg;
 	struct iommu_group *group;
 	void (*releasefn)(void *) = NULL;
-	int ret;
 
 	smmu = find_smmu_for_device(dev);
 	if (!smmu)
 		return -ENODEV;
 
+	/* There is no need to distinguish here, thanks to PCI-IOMMU DT bindings */
+#if 0
 	if (dev_is_pci(dev)) {
 		struct pci_dev *pdev = to_pci_dev(dev);
 		struct iommu_fwspec *fwspec;
@@ -2053,10 +2069,12 @@ static int arm_smmu_add_device(struct device *dev)
 				       &fwspec->ids[0]);
 		releasefn = __arm_smmu_release_pci_iommudata;
 		cfg->smmu = smmu;
-	} else {
+	} else
+#endif
+	{
 		struct arm_smmu_master *master;
 
-		master = find_smmu_master(smmu, dev->of_node);
+		master = find_smmu_master(smmu, dev);
 		if (!master) {
 			return -ENODEV;
 		}
@@ -2724,6 +2742,31 @@ static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
 			return -ENOMEM;
 	}
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) && !is_hardware_domain(d) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Assigning device %04x:%02x:%02x.%u to dom%d\n",
+		       pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+		       d->domain_id);
+
+		/*
+		 * XXX What would be the proper behavior? This could happen if
+		 * pdev->phantom_stride > 0
+		 */
+		if ( devfn != pdev->devfn )
+			return -EOPNOTSUPP;
+
+		list_move(&pdev->domain_list, &d->pdev_list);
+		pdev->domain = d;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	if (!dev_iommu_group(dev)) {
 		ret = arm_smmu_add_device(dev);
 		if (ret)
@@ -2773,11 +2816,33 @@ out:
 	return ret;
 }
 
-static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+static int arm_smmu_deassign_dev(struct domain *d, u8 devfn, struct device *dev)
 {
 	struct iommu_domain *domain = dev_iommu_domain(dev);
 	struct arm_smmu_xen_domain *xen_domain;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Deassigning device %04x:%02x:%02x.%u from dom%d\n",
+		       pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+		       d->domain_id);
+
+		/*
+		 * XXX What would be the proper behavior? This could happen if
+		 * pdev->phantom_stride > 0
+		 */
+		if ( devfn != pdev->devfn )
+			return -EOPNOTSUPP;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	xen_domain = dom_iommu(d)->arch.priv;
 
 	if (!domain || domain->priv->cfg.domain != d) {
@@ -2805,13 +2870,13 @@ static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
 	int ret = 0;
 
 	/* Don't allow remapping on other domain than hwdom */
-	if ( t && !is_hardware_domain(t) )
+	if ( t && !is_hardware_domain(t) && t != dom_io )
 		return -EPERM;
 
 	if (t == s)
 		return 0;
 
-	ret = arm_smmu_deassign_dev(s, dev);
+	ret = arm_smmu_deassign_dev(s, devfn, dev);
 	if (ret)
 		return ret;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:10:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:10:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528279.821227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZqJ-00008g-GQ; Mon, 01 May 2023 20:09:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528279.821227; Mon, 01 May 2023 20:09:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZqJ-00008Z-Cv; Mon, 01 May 2023 20:09:51 +0000
Received: by outflank-mailman (input) for mailman id 528279;
 Mon, 01 May 2023 20:09:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nwpa=AW=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1ptZl1-000525-Ba
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:04:23 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20601.outbound.protection.outlook.com
 [2a01:111:f400:7eae::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5caaf7b0-e85b-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:04:22 +0200 (CEST)
Received: from MW4PR03CA0166.namprd03.prod.outlook.com (2603:10b6:303:8d::21)
 by DS7PR12MB6262.namprd12.prod.outlook.com (2603:10b6:8:96::7) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.30; Mon, 1 May 2023 20:04:18 +0000
Received: from CO1NAM11FT016.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:8d:cafe::7b) by MW4PR03CA0166.outlook.office365.com
 (2603:10b6:303:8d::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30 via Frontend
 Transport; Mon, 1 May 2023 20:04:18 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT016.mail.protection.outlook.com (10.13.175.141) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.20 via Frontend Transport; Mon, 1 May 2023 20:04:18 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:04:17 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 1 May 2023 15:04:16 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5caaf7b0-e85b-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jNexJhQ/NuhFiX0DyBzMSeOq62USE0QEjAanpnrNFaavCs7HvVvReW2YWYXN4b4leT+2o9MBbHn14UANlFt5+Jk9Z5LF05ii8SkX2NNDp/iwzhoPif08QRS+OarFwkSJ+jZffXiJ+nzEjDOtuwzZLmQSNtje51xKWLwQZYw/bdN/Bzz5NL+KojeX56qihL6V2x69euXFSOTwTwUhfqe57g7SA5qBBMwKnkvt0QaQpmJ1uH+hwphYsuwZw5+24UjLGIanmLVQYN4WCyKmVnXfUoHWx2MNxepSb6t+CUsw1zFsFurrP9HKaSCKVFLR/klhHVBqZYN/OEpBMiDuN7WS7Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0XJhPVciLUzEu7aYbf6zQpM/bkxm9q36IA+36Y75kWQ=;
 b=iQCskHVZ4giy13gN7Swm67lRuzEkZzgjJVWTw5k6BaNUiKI/UJps7CBtUc4tx9bJPZ9qVsN9awRebpiUIm3DBR8izhIaIeo1pPYMdEmAGZTIK7ye65W/h9FVeYI86J/Hc283Ek9K7QlbG8YPyhBpE3Xbdgx7/eiboOKq66wvmzf2t8GQM1xyMDy66ffUhZgvDXy4fICZUYcbXhJ7aAeYjXhxsl2uU2H7EhDuZF3p3+pAgfQ0NQvfFPZ7Qno5Ajs8hjCwJV/PIT4mE6i7tORRtstYBApHjcDL/W/mhgA8jsZOQKbk18C6GxnqqJksA/xiy/MDblBF7zvnRjLor1qUcQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0XJhPVciLUzEu7aYbf6zQpM/bkxm9q36IA+36Y75kWQ=;
 b=QdlfoD2li1KtoqoJdGnQKPbex1ZfI1cViutgN1oMCJL1kxO9Wf1gU3FnYxEao8MKtW8sHu42CC8KUMoTWqdN2uBz4LnrbkeccIi7c1PCH4K9buMCxYgkwuQ2oCMlbRC3hCxHNhCA99LUrJZH2FTCb8RqLyqtvMRbIzqtcFOaJHU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Rahul Singh <rahul.singh@arm.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stewart Hildebrand <stewart.hildebrand@amd.com>
Subject: [PATCH v1 6/6] xen/arm: smmuv3: Add PCI devices support for SMMUv3
Date: Mon, 1 May 2023 16:03:05 -0400
Message-ID: <20230501200305.168058-7-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230501200305.168058-1-stewart.hildebrand@amd.com>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT016:EE_|DS7PR12MB6262:EE_
X-MS-Office365-Filtering-Correlation-Id: cd0b89df-c04d-4fda-e7fa-08db4a7f3f45
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lHPtqGOP4XC2D/KGuxYseWc2nAyihzMlfG4W1jUl61eKikVKGiiGoZ1iQlhn//G8O8dcCRMRcjA8mRAClYScapDW8t9ETdy10lsXiwRTjjWYOQ9aQ04Q9i4KipxdfewBDxC/W536GYT2JTV0i3o6OfVPQQbQZTZhNKyvmolt/YtxKmGT5PRYocKldF/FUgg9e292RYdkGckuxZzFJ3N8PPsXMsuY0OHNBnlquPgIv3TAZOQK/AbvU/RKqVaSYmnCcJlCDMrEupm0nk+VR0wjwLyf7rDBghKjqzMKcUiUUBVPvUBDU6cxR0OL9I+P1mC8WI18Hj9CfIn8Epo2Z3t2puwsf/zXVjlXN7PSaX0y0CGsWs+xXJx5uCeGgtsUC67sIN4i4LkElMNFU/UTducWV5aAAqKXCk1pFjeOAoGBvjitldGae4rwCu56WSWb7qOHVRkVdHT7ubhA26QSzslPQzSiaj7ZyRVJB8PDIjL6ZJoOL2h3a4VNi51b2/o1NH+bqPIJCudhaUDR1TwAqp/EcOgBRq/pWpuWGYQ7i6X9b3dPS813hf22wv5qas/g+4Nxs1/4VGmdINsRrv381B/D7yMxFe35uOhnvhpDl4++LpdYH44NPNkobZcOCPShuA1jhqlTXTxuySXjjRLkB57R9jBSDyUoiMUNIeM/six4iQYjbmoX4UZ7P91pDFp7QSLQJuioWFEXfbG+r/QFbg0PBx1tevMItVotL214WAWfPRpMzRgXWbszuw+Jc3hmRUhR
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(376002)(136003)(396003)(451199021)(40470700004)(36840700001)(46966006)(82310400005)(40460700003)(8936002)(8676002)(36860700001)(26005)(2616005)(1076003)(44832011)(478600001)(5660300002)(186003)(40480700001)(36756003)(316002)(82740400003)(2906002)(54906003)(426003)(336012)(86362001)(83380400001)(70586007)(4326008)(6916009)(6666004)(41300700001)(70206006)(47076005)(966005)(81166007)(356005)(21314003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 20:04:18.4498
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cd0b89df-c04d-4fda-e7fa-08db4a7f3f45
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT016.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB6262

From: Rahul Singh <rahul.singh@arm.com>

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
downstream->v1:
* rebase
* move 2 replacements of s/dt_device_set_protected(dev_to_dt(dev))/device_set_protected(dev)/
  from this commit to ("xen/arm: Move is_protected flag to struct device")
  so as to not break ability to bisect
* adjust patch title (remove stray space)
* arm_smmu_(de)assign_dev: return error instead of crashing system
* remove arm_smmu_remove_device() stub
* update condition in arm_smmu_reassign_dev
* style fixup

(cherry picked from commit 7ed6c3ab250d899fe6e893a514278e406a2893e8 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---

This is a file imported from Linux with modifications for Xen. What coding
style should be used for the Xen modifications?
---
 xen/drivers/passthrough/arm/smmu-v3.c | 66 +++++++++++++++++++++++++--
 1 file changed, 63 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 4b452e6fdd00..481a35a8b8d4 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -1469,6 +1469,8 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 }
 /* Forward declaration */
 static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev);
+static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
+			struct device *dev, u32 flag);
 
 static int arm_smmu_add_device(u8 devfn, struct device *dev)
 {
@@ -1527,6 +1529,17 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
 	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		ret = arm_smmu_assign_dev(pdev->domain, devfn, dev, 0);
+		if (ret)
+			goto err_free_master;
+	}
+#endif
+
 	return 0;
 
 err_free_master:
@@ -2607,6 +2620,31 @@ static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
 	struct arm_smmu_domain *smmu_domain;
 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) && !is_hardware_domain(d) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Assigning device %04x:%02x:%02x.%u to dom%d\n",
+			pdev->seg, pdev->bus, PCI_SLOT(devfn),
+			PCI_FUNC(devfn), d->domain_id);
+
+		/*
+		 * XXX What would be the proper behavior? This could happen if
+		 * pdev->phantom_stride > 0
+		 */
+		if ( devfn != pdev->devfn )
+			return -EOPNOTSUPP;
+
+		list_move(&pdev->domain_list, &d->pdev_list);
+		pdev->domain = d;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	spin_lock(&xen_domain->lock);
 
 	/*
@@ -2640,7 +2678,7 @@ out:
 	return ret;
 }
 
-static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+static int arm_smmu_deassign_dev(struct domain *d, uint8_t devfn, struct device *dev)
 {
 	struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
@@ -2652,6 +2690,28 @@ static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
 		return -ESRCH;
 	}
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Deassigning device %04x:%02x:%02x.%u from dom%d\n",
+			pdev->seg, pdev->bus, PCI_SLOT(devfn),
+			PCI_FUNC(devfn), d->domain_id);
+
+		/*
+		 * XXX What would be the proper behavior? This could happen if
+		 * pdev->phantom_stride > 0
+		 */
+		if ( devfn != pdev->devfn )
+			return -EOPNOTSUPP;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	spin_lock(&xen_domain->lock);
 
 	arm_smmu_detach_dev(master);
@@ -2671,13 +2731,13 @@ static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
 	int ret = 0;
 
 	/* Don't allow remapping on other domain than hwdom */
-	if ( t && !is_hardware_domain(t) )
+	if ( t && !is_hardware_domain(t) && (t != dom_io) )
 		return -EPERM;
 
 	if (t == s)
 		return 0;
 
-	ret = arm_smmu_deassign_dev(s, dev);
+	ret = arm_smmu_deassign_dev(s, devfn, dev);
 	if (ret)
 		return ret;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:10:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:10:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528280.821237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZqM-0000PC-O1; Mon, 01 May 2023 20:09:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528280.821237; Mon, 01 May 2023 20:09:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptZqM-0000P5-LS; Mon, 01 May 2023 20:09:54 +0000
Received: by outflank-mailman (input) for mailman id 528280;
 Mon, 01 May 2023 20:09:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nwpa=AW=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1ptZqL-0000Of-TV
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:09:53 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 21b20385-e85c-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:09:52 +0200 (CEST)
Received: from MW4PR04CA0323.namprd04.prod.outlook.com (2603:10b6:303:82::28)
 by DM4PR12MB6038.namprd12.prod.outlook.com (2603:10b6:8:ab::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Mon, 1 May
 2023 20:09:50 +0000
Received: from CO1NAM11FT067.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:82:cafe::44) by MW4PR04CA0323.outlook.office365.com
 (2603:10b6:303:82::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30 via Frontend
 Transport; Mon, 1 May 2023 20:09:49 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT067.mail.protection.outlook.com (10.13.174.212) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.19 via Frontend Transport; Mon, 1 May 2023 20:09:49 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:09:49 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:09:48 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 1 May 2023 15:09:47 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21b20385-e85c-11ed-b225-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Rahul Singh <rahul.singh@arm.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stewart Hildebrand <stewart.hildebrand@amd.com>
Subject: [PATCH v1 6/6] xen/arm: smmuv3: Add PCI devices support for SMMUv3
Date: Mon, 1 May 2023 16:09:42 -0400
Message-ID: <20230501200942.168105-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230501200305.168058-1-stewart.hildebrand@amd.com>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT067:EE_|DM4PR12MB6038:EE_
X-MS-Office365-Filtering-Correlation-Id: 3a3272d9-79e4-4a75-ca3b-08db4a800482
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 20:09:49.3600
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3a3272d9-79e4-4a75-ca3b-08db4a800482
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT067.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6038

From: Rahul Singh <rahul.singh@arm.com>

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
downstream->v1:
* rebase
* move 2 replacements of s/dt_device_set_protected(dev_to_dt(dev))/device_set_protected(dev)/
  from this commit to ("xen/arm: Move is_protected flag to struct device")
  so as to not break ability to bisect
* adjust patch title (remove stray space)
* arm_smmu_(de)assign_dev: return error instead of crashing system
* remove arm_smmu_remove_device() stub
* update condition in arm_smmu_reassign_dev
* style fixup

(cherry picked from commit 7ed6c3ab250d899fe6e893a514278e406a2893e8 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---

This is a file imported from Linux with modifications for Xen. What coding
style should be used for the Xen modifications?
---
 xen/drivers/passthrough/arm/smmu-v3.c | 66 +++++++++++++++++++++++++--
 1 file changed, 63 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 4b452e6fdd00..481a35a8b8d4 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -1469,6 +1469,8 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 }
 /* Forward declaration */
 static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev);
+static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
+			struct device *dev, u32 flag);
 
 static int arm_smmu_add_device(u8 devfn, struct device *dev)
 {
@@ -1527,6 +1529,17 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
 	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		ret = arm_smmu_assign_dev(pdev->domain, devfn, dev, 0);
+		if (ret)
+			goto err_free_master;
+	}
+#endif
+
 	return 0;
 
 err_free_master:
@@ -2607,6 +2620,31 @@ static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
 	struct arm_smmu_domain *smmu_domain;
 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) && !is_hardware_domain(d) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Assigning device %04x:%02x:%02x.%u to dom%d\n",
+			pdev->seg, pdev->bus, PCI_SLOT(devfn),
+			PCI_FUNC(devfn), d->domain_id);
+
+		/*
+		 * XXX What would be the proper behavior? This could happen if
+		 * pdev->phantom_stride > 0
+		 */
+		if ( devfn != pdev->devfn )
+			return -EOPNOTSUPP;
+
+		list_move(&pdev->domain_list, &d->pdev_list);
+		pdev->domain = d;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	spin_lock(&xen_domain->lock);
 
 	/*
@@ -2640,7 +2678,7 @@ out:
 	return ret;
 }
 
-static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+static int arm_smmu_deassign_dev(struct domain *d, uint8_t devfn, struct device *dev)
 {
 	struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
@@ -2652,6 +2690,28 @@ static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
 		return -ESRCH;
 	}
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Deassigning device %04x:%02x:%02x.%u from dom%d\n",
+			pdev->seg, pdev->bus, PCI_SLOT(devfn),
+			PCI_FUNC(devfn), d->domain_id);
+
+		/*
+		 * XXX What would be the proper behavior? This could happen if
+		 * pdev->phantom_stride > 0
+		 */
+		if ( devfn != pdev->devfn )
+			return -EOPNOTSUPP;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	spin_lock(&xen_domain->lock);
 
 	arm_smmu_detach_dev(master);
@@ -2671,13 +2731,13 @@ static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
 	int ret = 0;
 
 	/* Don't allow remapping on other domain than hwdom */
-	if ( t && !is_hardware_domain(t) )
+	if ( t && !is_hardware_domain(t) && (t != dom_io) )
 		return -EPERM;
 
 	if (t == s)
 		return 0;
 
-	ret = arm_smmu_deassign_dev(s, dev);
+	ret = arm_smmu_deassign_dev(s, devfn, dev);
 	if (ret)
 		return ret;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:31:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:31:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528289.821247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptaAi-0004GW-KM; Mon, 01 May 2023 20:30:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528289.821247; Mon, 01 May 2023 20:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptaAi-0004GP-Hb; Mon, 01 May 2023 20:30:56 +0000
Received: by outflank-mailman (input) for mailman id 528289;
 Mon, 01 May 2023 20:30:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nwpa=AW=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1ptaAh-0004GJ-DN
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:30:55 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20600.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10c6ba00-e85f-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:30:53 +0200 (CEST)
Received: from DS7PR05CA0077.namprd05.prod.outlook.com (2603:10b6:8:57::17) by
 SA3PR12MB7784.namprd12.prod.outlook.com (2603:10b6:806:317::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Mon, 1 May
 2023 20:30:50 +0000
Received: from DS1PEPF0000E649.namprd02.prod.outlook.com
 (2603:10b6:8:57:cafe::8f) by DS7PR05CA0077.outlook.office365.com
 (2603:10b6:8:57::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.20 via Frontend
 Transport; Mon, 1 May 2023 20:30:49 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DS1PEPF0000E649.mail.protection.outlook.com (10.167.18.39) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6340.16 via Frontend Transport; Mon, 1 May 2023 20:30:49 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 1 May
 2023 15:30:48 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 1 May 2023 15:30:48 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10c6ba00-e85f-11ed-b225-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, George Dunlap
	<george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>, "Juergen
 Gross" <jgross@suse.com>
Subject: [PATCH v1] xen/sched/null: avoid crash after failed domU creation
Date: Mon, 1 May 2023 16:30:46 -0400
Message-ID: <20230501203046.168856-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS1PEPF0000E649:EE_|SA3PR12MB7784:EE_
X-MS-Office365-Filtering-Correlation-Id: d80ca6cd-7382-4db1-4971-08db4a82f387
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2023 20:30:49.4209
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d80ca6cd-7382-4db1-4971-08db4a82f387
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DS1PEPF0000E649.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB7784

When domU creation fails, there is a corner case that may lead to a crash in
the null scheduler when running a debug build of Xen.

(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
(XEN) ****************************************

The events leading to the crash are:

* null_unit_insert() was invoked with the unit offline. Since the unit was
  offline, unit_assign() was not called, and null_unit_insert() returned.
* Later during domain creation, the unit was onlined.
* Eventually, domain creation failed due to bad configuration.
* null_unit_remove() was invoked with the unit still online. Since the unit was
  online, it called unit_deassign() and triggered an ASSERT.

To fix this, only call unit_deassign() in null_unit_remove() when npc->unit is
non-NULL.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
RFC->v1
* Follow Juergen's suggested fix

Link to RFC [1]

[1] https://lists.xenproject.org/archives/html/xen-devel/2023-04/msg01387.html
---
 xen/common/sched/null.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 65a0a6c5312d..2091337fcd06 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -522,6 +522,8 @@ static void cf_check null_unit_remove(
 {
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
+    struct null_pcpu *npc;
+    unsigned int cpu;
     spinlock_t *lock;
 
     ASSERT(!is_idle_unit(unit));
@@ -531,8 +533,6 @@ static void cf_check null_unit_remove(
     /* If offline, the unit shouldn't be assigned, nor in the waitqueue */
     if ( unlikely(!is_unit_online(unit)) )
     {
-        struct null_pcpu *npc;
-
         npc = unit->res->sched_priv;
         ASSERT(npc->unit != unit);
         ASSERT(list_empty(&nvc->waitq_elem));
@@ -549,7 +549,10 @@ static void cf_check null_unit_remove(
         goto out;
     }
 
-    unit_deassign(prv, unit);
+    cpu = sched_unit_master(unit);
+    npc = get_sched_res(cpu)->sched_priv;
+    if ( npc->unit )
+        unit_deassign(prv, unit);
 
  out:
     unit_schedule_unlock_irq(lock, unit);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 01 20:59:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 20:59:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528297.821257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptacM-0006tf-TB; Mon, 01 May 2023 20:59:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528297.821257; Mon, 01 May 2023 20:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptacM-0006tY-PF; Mon, 01 May 2023 20:59:30 +0000
Received: by outflank-mailman (input) for mailman id 528297;
 Mon, 01 May 2023 20:59:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s7rn=AW=dabbelt.com=palmer@srs-se1.protection.inumbo.net>)
 id 1ptacL-0006tS-Cl
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 20:59:29 +0000
Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com
 [2607:f8b0:4864:20::429])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0ea585df-e863-11ed-b225-6b7b168915f2;
 Mon, 01 May 2023 22:59:27 +0200 (CEST)
Received: by mail-pf1-x429.google.com with SMTP id
 d2e1a72fcca58-63b57c49c4cso2218322b3a.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 13:59:27 -0700 (PDT)
Received: from localhost ([50.221.140.188]) by smtp.gmail.com with ESMTPSA id
 x3-20020a628603000000b0063d666566d1sm20322681pfd.72.2023.05.01.13.59.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 13:59:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ea585df-e863-11ed-b225-6b7b168915f2
X-Received: by 2002:a05:6a00:1301:b0:63d:27a1:d578 with SMTP id j1-20020a056a00130100b0063d27a1d578mr19620776pfu.20.1682974765524;
        Mon, 01 May 2023 13:59:25 -0700 (PDT)
Date: Mon, 01 May 2023 13:59:24 -0700 (PDT)
X-Google-Original-Date: Mon, 01 May 2023 13:59:09 PDT (-0700)
Subject: Re: [PATCH v2 29/34] riscv: Convert alloc_{pmd, pte}_late() to use ptdescs
In-Reply-To: <20230501192829.17086-30-vishal.moola@gmail.com>
CC: akpm@linux-foundation.org, willy@infradead.org, linux-mm@kvack.org,
  linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
  linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
  linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
  linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
  sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org,
  kvm@vger.kernel.org, vishal.moola@gmail.com, Paul Walmsley <paul.walmsley@sifive.com>
From: Palmer Dabbelt <palmer@dabbelt.com>
To: vishal.moola@gmail.com
Message-ID: <mhng-e6f12727-9abe-4a93-a361-15a6cd333f51@palmer-ri-x1c9a>
Mime-Version: 1.0 (MHng)
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On Mon, 01 May 2023 12:28:24 PDT (-0700), vishal.moola@gmail.com wrote:
> As part of the conversions to replace pgtable constructor/destructors with
> ptdesc equivalents, convert various page table functions to use ptdescs.
>
> Some of the functions use the *get*page*() helper functions. Convert
> these to use ptdesc_alloc() and ptdesc_address() instead to help
> standardize page tables further.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  arch/riscv/include/asm/pgalloc.h |  8 ++++----
>  arch/riscv/mm/init.c             | 16 ++++++----------
>  2 files changed, 10 insertions(+), 14 deletions(-)
>
> diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
> index 59dc12b5b7e8..cb5536403bd8 100644
> --- a/arch/riscv/include/asm/pgalloc.h
> +++ b/arch/riscv/include/asm/pgalloc.h
> @@ -153,10 +153,10 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
>
>  #endif /* __PAGETABLE_PMD_FOLDED */
>
> -#define __pte_free_tlb(tlb, pte, buf)   \
> -do {                                    \
> -	pgtable_pte_page_dtor(pte);     \
> -	tlb_remove_page((tlb), pte);    \
> +#define __pte_free_tlb(tlb, pte, buf)			\
> +do {							\
> +	ptdesc_pte_dtor(page_ptdesc(pte));		\
> +	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));\
>  } while (0)
>  #endif /* CONFIG_MMU */
>
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index eb8173a91ce3..8f1982664687 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -353,12 +353,10 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va)
>
>  static phys_addr_t __init alloc_pte_late(uintptr_t va)
>  {
> -	unsigned long vaddr;
> -
> -	vaddr = __get_free_page(GFP_KERNEL);
> -	BUG_ON(!vaddr || !pgtable_pte_page_ctor(virt_to_page(vaddr)));
> +	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
>
> -	return __pa(vaddr);
> +	BUG_ON(!ptdesc || !ptdesc_pte_ctor(ptdesc));
> +	return __pa((pte_t *)ptdesc_address(ptdesc));
>  }
>
>  static void __init create_pte_mapping(pte_t *ptep,
> @@ -436,12 +434,10 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)
>
>  static phys_addr_t __init alloc_pmd_late(uintptr_t va)
>  {
> -	unsigned long vaddr;
> -
> -	vaddr = __get_free_page(GFP_KERNEL);
> -	BUG_ON(!vaddr || !pgtable_pmd_page_ctor(virt_to_page(vaddr)));
> +	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);
>
> -	return __pa(vaddr);
> +	BUG_ON(!ptdesc || !ptdesc_pmd_ctor(ptdesc));
> +	return __pa((pmd_t *)ptdesc_address(ptdesc));
>  }
>
>  static void __init create_pmd_mapping(pmd_t *pmdp,

Acked-by: Palmer Dabbelt <palmer@rivosinc.com>


From xen-devel-bounces@lists.xenproject.org Mon May 01 22:05:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 22:05:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528300.821267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptbe9-0005mS-QJ; Mon, 01 May 2023 22:05:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528300.821267; Mon, 01 May 2023 22:05:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptbe9-0005mL-MM; Mon, 01 May 2023 22:05:25 +0000
Received: by outflank-mailman (input) for mailman id 528300;
 Mon, 01 May 2023 22:05:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptbe8-0005mB-C5; Mon, 01 May 2023 22:05:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptbe8-0004ei-1Z; Mon, 01 May 2023 22:05:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptbe7-0000E9-LP; Mon, 01 May 2023 22:05:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptbe7-00067R-Iu; Mon, 01 May 2023 22:05:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Pxdd1GoEDGF9UOeAEK7HxyTadRVBlQ2tPGVxbXwJhaE=; b=Sjt9GUiuTnbiagBRBdCEyN8zpy
	5NOa/VnKqoAOFC3neV50KJ4MLKqJa9C39AQCMBjCxSnnD/CCzpU/uXKRgL9CaAHtSjkIYVu7SqYr0
	9YG9xARQtYyK0bwOjactL/XMyjRG6JpUtBZ/CRKBJWTvpCnmAoJzj0SsZKxTOcZvHWoo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180496-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180496: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ef841d2a2377f5297add27e637b725426bb4840a
X-Osstest-Versions-That:
    xen=6a47ba2f7855f8fc094ec4f837e71a34ededb77b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 01 May 2023 22:05:23 +0000

flight 180496 xen-unstable real [real]
flight 180498 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180496/
http://logs.test-lab.xenproject.org/osstest/logs/180498/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180498-retest
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 180498-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180492
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180492
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180492
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180492
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180492
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180492
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180492
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180492
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180492
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180492
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180492
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180492
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180492
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  ef841d2a2377f5297add27e637b725426bb4840a
baseline version:
 xen                  6a47ba2f7855f8fc094ec4f837e71a34ededb77b

Last test of basis   180492  2023-05-01 01:52:02 Z    0 days
Testing same since   180496  2023-05-01 13:38:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6a47ba2f78..ef841d2a23  ef841d2a2377f5297add27e637b725426bb4840a -> master


From xen-devel-bounces@lists.xenproject.org Mon May 01 23:56:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 01 May 2023 23:56:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528312.821285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptdNB-00006a-UP; Mon, 01 May 2023 23:56:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528312.821285; Mon, 01 May 2023 23:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptdNB-00006T-Pg; Mon, 01 May 2023 23:56:01 +0000
Received: by outflank-mailman (input) for mailman id 528312;
 Mon, 01 May 2023 23:56:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s7rn=AW=dabbelt.com=palmer@srs-se1.protection.inumbo.net>)
 id 1ptdNA-00006N-Bx
 for xen-devel@lists.xenproject.org; Mon, 01 May 2023 23:56:00 +0000
Received: from mail-pf1-x430.google.com (mail-pf1-x430.google.com
 [2607:f8b0:4864:20::430])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b6056f9c-e87b-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 01:55:57 +0200 (CEST)
Received: by mail-pf1-x430.google.com with SMTP id
 d2e1a72fcca58-63b8b19901fso3667234b3a.3
 for <xen-devel@lists.xenproject.org>; Mon, 01 May 2023 16:55:56 -0700 (PDT)
Received: from localhost ([50.221.140.188]) by smtp.gmail.com with ESMTPSA id
 t2-20020a628102000000b0063b1e7ffc5fsm20413877pfd.39.2023.05.01.16.55.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 01 May 2023 16:55:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6056f9c-e87b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=dabbelt-com.20221208.gappssmtp.com; s=20221208; t=1682985354; x=1685577354;
        h=content-transfer-encoding:mime-version:message-id:to:from:cc
         :in-reply-to:subject:date:from:to:cc:subject:date:message-id
         :reply-to;
        bh=QuwyJlMhOzHc7pD19rcBJm7YGGdkgt4st5TSnBpxtCA=;
        b=u7Fbet5GsM/a+tKDG1jIQjylhg8yDIGsEfB3R3METzXxmxTlMfZIrKJcjG1Eg5BKJj
         MlFOPCIdB1Kzok6k4VO3kofHLnjOICQw4gU2tuQOBg05zhDbFsggy6hJcjAmZMRGAXf0
         rUtZb8K8J1C2NtGocYCnTuPL/7q8UuOvuWeleHqtXMOc3hyp4VTgoMKde+PprRxEq3U9
         3NCfHp5mczh1hPm51YYXtT/w0nBzMFK+Xv4D4Xzjfds97FS7WzOT/O5styDeQpozCahx
         aw/N9G2On9yOd5m0BdC1sLbhcBfoO3yIW/1WaAekonzfxEBWX3WbGIXs9NTSCve1lAj1
         n8sA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1682985354; x=1685577354;
        h=content-transfer-encoding:mime-version:message-id:to:from:cc
         :in-reply-to:subject:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=QuwyJlMhOzHc7pD19rcBJm7YGGdkgt4st5TSnBpxtCA=;
        b=AXjVAAOy36CR1B9sJotdZBON9iZBAFosUpB+ZJY1hlIPrzYddyMErlpIm4cSlSMjbL
         cFp4vgqFMD7iO0wKXWKoQLtJwHnkgRKZcNScQVZ01tV3O3S/H7poU2MrXcL39teWSn9X
         9RIf/Ri+pceUMGRyFWJKETAHuqzGr4neA1BswUWsNG1JFurwMnP3523scI1h++gEwVww
         1cuLjEtBRXQ8B6qzlhL0XgouJMLSyiB3Lx3iR6lopyy/U9uzGUOY8n7/F6z6w2CYfn/x
         DtvDLT+zkfWYq0WROgp6vex6YtJo2iQ5BKfTitZkOTAGEG0lTRDVGh7ABJoC884bYLEA
         Tt0w==
X-Gm-Message-State: AC+VfDx+UWpUCP362lultdJX8cp/ZFj2clph+p7ovK17s1psBNpeWVZN
	xAksnb/Xs5NHrlu2Ux4dBDTUmA==
X-Google-Smtp-Source: ACHHUZ4SgBzSoTxFJ9j/mzMx+FBKsdjE/IfAfq6NZWT2hwrYFJsUY4cczelg66bWFkmYBXKaJT4LqQ==
X-Received: by 2002:a05:6a20:6f03:b0:f8:b39b:b24e with SMTP id gt3-20020a056a206f0300b000f8b39bb24emr16225285pzb.11.1682985354255;
        Mon, 01 May 2023 16:55:54 -0700 (PDT)
Date: Mon, 01 May 2023 16:55:53 -0700 (PDT)
X-Google-Original-Date: Mon, 01 May 2023 16:55:31 PDT (-0700)
Subject: Re: [patch 26/37] riscv: Switch to hotplug core state synchronization
In-Reply-To: <20230414232310.817955867@linutronix.de>
CC: linux-kernel@vger.kernel.org, x86@kernel.org, dwmw@infradead.org,
  andrew.cooper3@citrix.com, brgerst@gmail.com, arjan@linux.intel.com, pbonzini@redhat.com,
  paulmck@kernel.org, thomas.lendacky@amd.com, seanjc@google.com, oleksandr@natalenko.name,
  pmenzel@molgen.mpg.de, gpiccoli@igalia.com, lucjan.lucjanov@gmail.com,
  Paul Walmsley <paul.walmsley@sifive.com>, linux-riscv@lists.infradead.org, dwmw@amazon.co.uk, usama.arif@bytedance.com,
  jgross@suse.com, boris.ostrovsky@oracle.com, xen-devel@lists.xenproject.org,
  linux@armlinux.org.uk, Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
  Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, guoren@kernel.org, linux-csky@vger.kernel.org,
  tsbogend@alpha.franken.de, linux-mips@vger.kernel.org, James.Bottomley@HansenPartnership.com,
  deller@gmx.de, linux-parisc@vger.kernel.org, Mark Rutland <mark.rutland@arm.com>,
  sabrapan@amazon.com
From: Palmer Dabbelt <palmer@dabbelt.com>
To: tglx@linutronix.de
Message-ID: <mhng-fd944caa-93db-40e0-8ea8-bc52772a261a@palmer-ri-x1c9a>
Mime-Version: 1.0 (MHng)
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On Fri, 14 Apr 2023 16:44:55 PDT (-0700), tglx@linutronix.de wrote:
> Switch to the CPU hotplug core state tracking and synchronization
> mechanism. No functional change intended.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: linux-riscv@lists.infradead.org
> ---
>  arch/riscv/Kconfig              |    1 +
>  arch/riscv/include/asm/smp.h    |    2 +-
>  arch/riscv/kernel/cpu-hotplug.c |   14 +++++++-------
>  3 files changed, 9 insertions(+), 8 deletions(-)
>
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -116,6 +116,7 @@ config RISCV
>  	select HAVE_RSEQ
>  	select HAVE_STACKPROTECTOR
>  	select HAVE_SYSCALL_TRACEPOINTS
> +	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
>  	select IRQ_DOMAIN
>  	select IRQ_FORCED_THREADING
>  	select MODULES_USE_ELF_RELA if MODULES
> --- a/arch/riscv/include/asm/smp.h
> +++ b/arch/riscv/include/asm/smp.h
> @@ -64,7 +64,7 @@ asmlinkage void smp_callin(void);
>
>  #if defined CONFIG_HOTPLUG_CPU
>  int __cpu_disable(void);
> -void __cpu_die(unsigned int cpu);
> +static inline void __cpu_die(unsigned int cpu) { }
>  #endif /* CONFIG_HOTPLUG_CPU */
>
>  #else
> --- a/arch/riscv/kernel/cpu-hotplug.c
> +++ b/arch/riscv/kernel/cpu-hotplug.c
> @@ -8,6 +8,7 @@
>  #include <linux/sched.h>
>  #include <linux/err.h>
>  #include <linux/irq.h>
> +#include <linux/cpuhotplug.h>
>  #include <linux/cpu.h>
>  #include <linux/sched/hotplug.h>
>  #include <asm/irq.h>
> @@ -48,17 +49,15 @@ int __cpu_disable(void)
>  	return ret;
>  }
>
> +#ifdef CONFIG_HOTPLUG_CPU
>  /*
> - * Called on the thread which is asking for a CPU to be shutdown.
> + * Called on the thread which is asking for a CPU to be shutdown, if the
> + * CPU reported dead to the hotplug core.
>   */
> -void __cpu_die(unsigned int cpu)
> +void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
>  {
>  	int ret = 0;
>
> -	if (!cpu_wait_death(cpu, 5)) {
> -		pr_err("CPU %u: didn't die\n", cpu);
> -		return;
> -	}
>  	pr_notice("CPU%u: off\n", cpu);
>
>  	/* Verify from the firmware if the cpu is really stopped*/
> @@ -75,9 +74,10 @@ void arch_cpu_idle_dead(void)
>  {
>  	idle_task_exit();
>
> -	(void)cpu_report_death();
> +	cpuhp_ap_report_dead();
>
>  	cpu_ops[smp_processor_id()]->cpu_stop();
>  	/* It should never reach here */
>  	BUG();
>  }
> +#endif

Acked-by: Palmer Dabbelt <palmer@rivosinc.com>


From xen-devel-bounces@lists.xenproject.org Tue May 02 01:38:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 01:38:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528317.821295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pteyC-0002zt-Ov; Tue, 02 May 2023 01:38:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528317.821295; Tue, 02 May 2023 01:38:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pteyC-0002zl-Jv; Tue, 02 May 2023 01:38:20 +0000
Received: by outflank-mailman (input) for mailman id 528317;
 Tue, 02 May 2023 01:38:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pteyB-0002zb-L5; Tue, 02 May 2023 01:38:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pteyB-0002Iw-9F; Tue, 02 May 2023 01:38:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pteyA-0004zy-Oe; Tue, 02 May 2023 01:38:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pteyA-0004qW-OC; Tue, 02 May 2023 01:38:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EPxiEZMMtbDkJW73s34MdUe8otFQddET4f+3KSho4yo=; b=YKY40jSd28FN0QRtQO38ph+sUb
	8Y4OGRUPZZ6334KADf0ZLIqwVmXw/5hPRUjhc4ZslwALErlLzhfiJVNV1kY68DIXoyNZ0UFUzbouQ
	VPIU5vf3n/23fW5zfuYJWw/+rkCW6f136s96CvXqJkp0tK+UT4r9OR0AZL9QwW8DAm4o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180497-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180497: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=58390c8ce1bddb6c623f62e7ed36383e7fa5c02f
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 May 2023 01:38:18 +0000

flight 180497 linux-linus real [real]
flight 180499 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180497/
http://logs.test-lab.xenproject.org/osstest/logs/180499/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 test-amd64-amd64-xl-vhd     21 guest-start/debian.repeat fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                58390c8ce1bddb6c623f62e7ed36383e7fa5c02f
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   15 days
Failing since        180281  2023-04-17 06:24:36 Z   14 days   25 attempts
Testing same since   180493  2023-05-01 02:06:50 Z    0 days    2 attempts

------------------------------------------------------------
2116 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 254448 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 02 01:49:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 01:49:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528322.821306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptf96-0004Vm-QD; Tue, 02 May 2023 01:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528322.821306; Tue, 02 May 2023 01:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptf96-0004Vf-LS; Tue, 02 May 2023 01:49:36 +0000
Received: by outflank-mailman (input) for mailman id 528322;
 Tue, 02 May 2023 01:49:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lHQJ=AX=intel.com=lkp@srs-se1.protection.inumbo.net>)
 id 1ptf95-0004VZ-Eq
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 01:49:35 +0000
Received: from mga17.intel.com (mga17.intel.com [192.55.52.151])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9431b320-e88b-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 03:49:32 +0200 (CEST)
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 01 May 2023 18:49:25 -0700
Received: from lkp-server01.sh.intel.com (HELO e3434d64424d) ([10.239.97.150])
 by fmsmga007.fm.intel.com with ESMTP; 01 May 2023 18:49:20 -0700
Received: from kbuild by e3434d64424d with local (Exim 4.96)
 (envelope-from <lkp@intel.com>) id 1ptf8q-0000ke-0J;
 Tue, 02 May 2023 01:49:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9431b320-e88b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1682992172; x=1714528172;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=atYvzhPCCZ/tytx2DE7s23i4jWidElWSRjsH6cc5aEY=;
  b=Dw7biiOHIg+GLNRuChMal0jdKqSwMg8kLWn6ilfwftkDGxLQABrpFYSu
   SjZVqpFslipAljPqIg8E8x7ozL/4sgdnfcpLA4hFRR777CnMcpP+uKPeU
   5VEYiBBgnQk4DASs2EqV/BOe71eHiKhA5eKPYsLJt3U9/iJEbyyvoDKxh
   JTDpVFn3BFRpwOIjQ0UdUHh20OkxUKJWUU7cSD2/vwpiSCJ1K/VZ7Q7zB
   32qLl/eUZlBXuJclleFmghqGPXkGVJduJWK/cNoa2SQ5W53gBUyaH/+g0
   /nXmO3esuerp7Ka86tPeKJg2t3LYI6ZbCWHhyulx0QPAEClQwO6yhgEUk
   w==;
X-IronPort-AV: E=McAfee;i="6600,9927,10697"; a="328651235"
X-IronPort-AV: E=Sophos;i="5.99,242,1677571200"; 
   d="scan'208";a="328651235"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10697"; a="698763727"
X-IronPort-AV: E=Sophos;i="5.99,242,1677571200"; 
   d="scan'208";a="698763727"
Date: Tue, 2 May 2023 09:48:46 +0800
From: kernel test robot <lkp@intel.com>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: oe-kbuild-all@lists.linux.dev,
	Linux Memory Management List <linux-mm@kvack.org>,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>
Subject: Re: [PATCH v2 21/34] arm64: Convert various functions to use ptdescs
Message-ID: <202305020914.OGRWcEG1-lkp@intel.com>
References: <20230501192829.17086-22-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230501192829.17086-22-vishal.moola@gmail.com>

Hi Vishal,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on linus/master next-20230428]
[cannot apply to s390/features powerpc/next powerpc/fixes geert-m68k/for-next geert-m68k/for-linus v6.3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Vishal-Moola-Oracle/mm-Add-PAGE_TYPE_OP-folio-functions/20230502-033042
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230501192829.17086-22-vishal.moola%40gmail.com
patch subject: [PATCH v2 21/34] arm64: Convert various functions to use ptdescs
config: arm64-allyesconfig (https://download.01.org/0day-ci/archive/20230502/202305020914.OGRWcEG1-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/8e9481b63b5773d7c914836dcd7fbec2449902bc
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Vishal-Moola-Oracle/mm-Add-PAGE_TYPE_OP-folio-functions/20230502-033042
        git checkout 8e9481b63b5773d7c914836dcd7fbec2449902bc
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=arm64 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=arm64 SHELL=/bin/bash arch/arm64/

If you fix the issue, kindly add the following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202305020914.OGRWcEG1-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from include/linux/build_bug.h:5,
                    from include/linux/bits.h:21,
                    from include/linux/bitops.h:6,
                    from arch/arm64/include/asm/cache.h:39,
                    from include/linux/cache.h:6,
                    from arch/arm64/mm/mmu.c:9:
   arch/arm64/mm/mmu.c: In function 'pgd_pgtable_alloc':
>> arch/arm64/mm/mmu.c:440:24: error: invalid use of void expression
     440 |                 BUG_ON(!ptdesc_pte_dtor(ptdesc));
         |                        ^
   include/linux/compiler.h:78:45: note: in definition of macro 'unlikely'
      78 | # define unlikely(x)    __builtin_expect(!!(x), 0)
         |                                             ^
   arch/arm64/mm/mmu.c:440:17: note: in expansion of macro 'BUG_ON'
     440 |                 BUG_ON(!ptdesc_pte_dtor(ptdesc));
         |                 ^~~~~~
   arch/arm64/mm/mmu.c:442:24: error: invalid use of void expression
     442 |                 BUG_ON(!ptdesc_pte_dtor(ptdesc));
         |                        ^
   include/linux/compiler.h:78:45: note: in definition of macro 'unlikely'
      78 | # define unlikely(x)    __builtin_expect(!!(x), 0)
         |                                             ^
   arch/arm64/mm/mmu.c:442:17: note: in expansion of macro 'BUG_ON'
     442 |                 BUG_ON(!ptdesc_pte_dtor(ptdesc));
         |                 ^~~~~~


vim +440 arch/arm64/mm/mmu.c

   425	
   426	static phys_addr_t pgd_pgtable_alloc(int shift)
   427	{
   428		phys_addr_t pa = __pgd_pgtable_alloc(shift);
   429		struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa));
   430	
   431		/*
   432		 * Call proper page table ctor in case later we need to
   433		 * call core mm functions like apply_to_page_range() on
   434		 * this pre-allocated page table.
   435		 *
   436		 * We don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK if pmd is
   437		 * folded, and if so ptdesc_pte_dtor() becomes nop.
   438		 */
   439		if (shift == PAGE_SHIFT)
 > 440			BUG_ON(!ptdesc_pte_dtor(ptdesc));
   441		else if (shift == PMD_SHIFT)
   442			BUG_ON(!ptdesc_pte_dtor(ptdesc));
   443	
   444		return pa;
   445	}
   446	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests


From xen-devel-bounces@lists.xenproject.org Tue May 02 02:22:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 02:22:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528327.821314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptfeh-0000pY-Cw; Tue, 02 May 2023 02:22:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528327.821314; Tue, 02 May 2023 02:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptfeh-0000pR-AP; Tue, 02 May 2023 02:22:15 +0000
Received: by outflank-mailman (input) for mailman id 528327;
 Tue, 02 May 2023 02:22:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lHQJ=AX=intel.com=lkp@srs-se1.protection.inumbo.net>)
 id 1ptfef-0000pL-Ka
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 02:22:13 +0000
Received: from mga01.intel.com (mga01.intel.com [192.55.52.88])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2312a6a7-e890-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 04:22:10 +0200 (CEST)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 01 May 2023 19:22:06 -0700
Received: from lkp-server01.sh.intel.com (HELO e3434d64424d) ([10.239.97.150])
 by orsmga003.jf.intel.com with ESMTP; 01 May 2023 19:22:01 -0700
Received: from kbuild by e3434d64424d with local (Exim 4.96)
 (envelope-from <lkp@intel.com>) id 1ptfeS-0000m8-1s;
 Tue, 02 May 2023 02:22:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2312a6a7-e890-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1682994130; x=1714530130;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=85LkbDK+KApuWepVvTrNKkbkOtRXRZZHLuh2xgotSTc=;
  b=nwqKegsP0FYwI/O9UMptNhx52Qh6ZPwaTrFShYhU0H4Nplhblc+73R80
   EbYgIKzfGc4eAj7nFxt1Ro9/zkD8PhgGx1av99XrR8u3ZNVWRPX8UihoY
   rNq6+0VOx9j6ORnr5qUOA2K4szy0KIdtjdD5/FVZ0Qd+NzDHF3xRlutRv
   Xp8DP7t90hfIXfTzDh/ZR6RPyXvacARY8tcmEo9iYV7T/wtQgsYrL6/gI
   /UAQy+ZNunZz85iubO6aXtvtCeu3eoOAj6xHA3rHNGQDfxz2u5F0LDG09
   Nsvaq9F0Zr13eZfMvwvoqRM+Zfp8j4CS79wFM1yapWv/VWq+7I+mgn9eL
   A==;
X-IronPort-AV: E=McAfee;i="6600,9927,10697"; a="376342299"
X-IronPort-AV: E=Sophos;i="5.99,242,1677571200"; 
   d="scan'208";a="376342299"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10697"; a="646346632"
X-IronPort-AV: E=Sophos;i="5.99,242,1677571200"; 
   d="scan'208";a="646346632"
Date: Tue, 2 May 2023 10:21:30 +0800
From: kernel test robot <lkp@intel.com>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev,
	Linux Memory Management List <linux-mm@kvack.org>,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>
Subject: Re: [PATCH v2 21/34] arm64: Convert various functions to use ptdescs
Message-ID: <202305021038.c9jfVDsv-lkp@intel.com>
References: <20230501192829.17086-22-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230501192829.17086-22-vishal.moola@gmail.com>

Hi Vishal,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on linus/master next-20230428]
[cannot apply to s390/features powerpc/next powerpc/fixes geert-m68k/for-next geert-m68k/for-linus v6.3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Vishal-Moola-Oracle/mm-Add-PAGE_TYPE_OP-folio-functions/20230502-033042
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20230501192829.17086-22-vishal.moola%40gmail.com
patch subject: [PATCH v2 21/34] arm64: Convert various functions to use ptdescs
config: arm64-randconfig-r023-20230430 (https://download.01.org/0day-ci/archive/20230502/202305021038.c9jfVDsv-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project b1465cd49efcbc114a75220b153f5a055ce7911f)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm64 cross compiling tool for clang build
        # apt-get install binutils-aarch64-linux-gnu
        # https://github.com/intel-lab-lkp/linux/commit/8e9481b63b5773d7c914836dcd7fbec2449902bc
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Vishal-Moola-Oracle/mm-Add-PAGE_TYPE_OP-folio-functions/20230502-033042
        git checkout 8e9481b63b5773d7c914836dcd7fbec2449902bc
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=arm64 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=arm64 SHELL=/bin/bash arch/arm64/

If you fix the issue, kindly add the following tag where applicable:
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202305021038.c9jfVDsv-lkp@intel.com/

All errors (new ones prefixed by >>):

>> arch/arm64/mm/mmu.c:440:10: error: invalid argument type 'void' to unary expression
                   BUG_ON(!ptdesc_pte_dtor(ptdesc));
                          ^~~~~~~~~~~~~~~~~~~~~~~~
   include/asm-generic/bug.h:71:45: note: expanded from macro 'BUG_ON'
   #define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
                                               ^~~~~~~~~
   include/linux/compiler.h:78:42: note: expanded from macro 'unlikely'
   # define unlikely(x)    __builtin_expect(!!(x), 0)
                                               ^
   arch/arm64/mm/mmu.c:442:10: error: invalid argument type 'void' to unary expression
                   BUG_ON(!ptdesc_pte_dtor(ptdesc));
                          ^~~~~~~~~~~~~~~~~~~~~~~~
   include/asm-generic/bug.h:71:45: note: expanded from macro 'BUG_ON'
   #define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
                                               ^~~~~~~~~
   include/linux/compiler.h:78:42: note: expanded from macro 'unlikely'
   # define unlikely(x)    __builtin_expect(!!(x), 0)
                                               ^
   2 errors generated.


vim +/void +440 arch/arm64/mm/mmu.c

   425	
   426	static phys_addr_t pgd_pgtable_alloc(int shift)
   427	{
   428		phys_addr_t pa = __pgd_pgtable_alloc(shift);
   429		struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa));
   430	
   431		/*
   432		 * Call proper page table ctor in case later we need to
   433		 * call core mm functions like apply_to_page_range() on
   434		 * this pre-allocated page table.
   435		 *
   436		 * We don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK if pmd is
   437		 * folded, and if so ptdesc_pte_dtor() becomes nop.
   438		 */
   439		if (shift == PAGE_SHIFT)
 > 440			BUG_ON(!ptdesc_pte_dtor(ptdesc));
   441		else if (shift == PMD_SHIFT)
   442			BUG_ON(!ptdesc_pte_dtor(ptdesc));
   443	
   444		return pa;
   445	}
   446	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests


From xen-devel-bounces@lists.xenproject.org Tue May 02 03:59:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 03:59:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528330.821325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pthAv-0001pJ-A2; Tue, 02 May 2023 03:59:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528330.821325; Tue, 02 May 2023 03:59:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pthAv-0001pC-7E; Tue, 02 May 2023 03:59:37 +0000
Received: by outflank-mailman (input) for mailman id 528330;
 Tue, 02 May 2023 03:59:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PvCK=AX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pthAu-0001p6-2Y
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 03:59:36 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bdb2bdd0-e89d-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 05:59:32 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 399EC618FE;
 Tue,  2 May 2023 03:59:30 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9E4C1C433D2;
 Tue,  2 May 2023 03:59:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdb2bdd0-e89d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1682999969;
	bh=pmCTwM5JeS/nROPQdqbvEVzgt5qKVs7zZy9Af5KBrC0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=LpV7prXFLMFtaEZrhSeYwjhaMrErPjwtYqkH1CkPrNmwpXPJS3FOOvEFyONbjilRG
	 9N5bgQCO04Mr9HUip9M1I98h/8mi8ArU/EmTRqWi++UzLM+t+fw8J//1huirwUCMoh
	 QCsttQxykjd/PjYjpQKAgcPdeR5wJjpNZYSzaOFaWvsBg0iswwNr5JkUWX0wm1Mw7K
	 7SKRLdMHdjojXXVRa9q1dXgnd3GL3LpAQdg2Fy8zS1ROFvb4bbPgs3tawVFJPnpONL
	 GHgZRSAzatM1T09iHkbRKh9rs1Ef2EaHelrMoKYPRmxWlz0KE8EFv71DUICe7cmwbO
	 9NcVX/7JCmkRQ==
Date: Mon, 1 May 2023 20:59:26 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: andrew.cooper3@citrix.com, alejandro.vallejo@cloud.com, 
    committers@xenproject.org, michal.orzel@amd.com, 
    xen-devel@lists.xenproject.org
Subject: Re: xen | Failed pipeline for staging | 6a47ba2f
In-Reply-To: <alpine.DEB.2.22.394.2304291808420.974517@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2305011835000.974517@ubuntu-linux-20-04-desktop>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail> <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop> <ca0144a6-2c57-0cc3-fd27-5dbe59491ef3@citrix.com>
 <alpine.DEB.2.22.394.2304291808420.974517@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 29 Apr 2023, Stefano Stabellini wrote:
> Your guess was correct. I have done more bisecting today. The culprit is
> the following commit (I reverted only this commit and ran 25 tests
> successfully; usually it fails in fewer than 5):
> 
> e522c98c3    tools: Refactor console/io.c to avoid using xc_domain_getinfo()

I did more debugging. One problem seems to be that
XEN_SYSCTL_getdomaininfolist is buggy in the hypervisor: the field
u.getdomaininfolist.num_domains is not copied back to the guest. It
doesn't look like the hypercall would behave well with more than one
guest. I am appending the fix.

This is not sufficient to fix the failure. On a hunch, I made this
change:


 	/* Fetch info on every valid domain except for dom0 */
-	ret = xc_domain_getinfolist(xc, 1, DOMID_FIRST_RESERVED - 1, domaininfo);
+	ret = xc_domain_getinfolist(xc, 1, 10, domaininfo);
 	if (ret < 0)
 		return;
 
With it, everything works. I have run out of time today for my
investigation.


I would like to take the opportunity to highlight that gitlab-ci did a
very good job spotting an issue. I am glad we are starting to reap the
benefits of all the hard work we put into it.

Cheers,

Stefano

---
xen: fix broken XEN_SYSCTL_getdomaininfolist hypercall

XEN_SYSCTL_getdomaininfolist doesn't actually update the guest
num_domains field, only its local copy. Fix that.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>

diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 02505ab044..0e1097be96 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -107,10 +107,8 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
         
         rcu_read_unlock(&domlist_read_lock);
         
-        if ( ret != 0 )
-            break;
-        
         op->u.getdomaininfolist.num_domains = num_domains;
+        __copy_field_to_guest(u_sysctl, op, u.getdomaininfolist.num_domains);
     }
     break;
 


From xen-devel-bounces@lists.xenproject.org Tue May 02 05:43:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 05:43:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528335.821336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptimk-0004wK-Ub; Tue, 02 May 2023 05:42:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528335.821336; Tue, 02 May 2023 05:42:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptimk-0004wD-Pp; Tue, 02 May 2023 05:42:46 +0000
Received: by outflank-mailman (input) for mailman id 528335;
 Tue, 02 May 2023 05:42:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d5QU=AX=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1ptimi-0004w6-DA
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 05:42:44 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.163]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 27b03e94-e8ac-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 07:42:42 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz425gTaRK
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 2 May 2023 07:42:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27b03e94-e8ac-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683006149; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=hz4P2fYHoTAGVQJxiYF1+wg1eGX4s1T6oLiKnz3J9ijpYzkt+pHRrYGLY9KNIi5wSc
    vwk/W94SFCc5885n0F3vIjEqHm4jdMyEIuOjtwJ+Q2OzZMvSkhkaBQVZ3PZfoFFfnvj9
    yVglKOzNQ1C3ZTIrTqiU9/kQCfkwQxLqKirs/13YydPRb/4Prq4LQMCfKMZJOkzqX9cs
    vhQCqLLL4j6Ja2EtuiQ4kGvrWcd79kVJ38awktFQ68ZExXFNvhIZYzPETIXNvrYVSK6v
    FMxBeC55j1VchQhPXE5KspldBUVi/fpvbjx9Eg7HqWG6EVgkhqEltyoP6ti3RtH9qCHw
    Yw8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683006149;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=GhE5fBfcNwMLwxUGqEDocKD//uK7BAVcWCZfFlhjFms=;
    b=Gde1FB+GfQETrwt+XkxjfOrvcybPN0VAQeR9m+K2cd4mfnDYlxtOnOSquEVUbYxevI
    Dds1Z3Oy2vNgHTeUIyGnMRvxyHDDMYN6mYqmPISUsSWnjeZPE2HZQJHJQumZU47MW4G4
    grW5N/u6w44/Z88lRmrldU4FBErCS1InX/4tTtthsj0ByTWjPvc3908biRdZ2T9Jo3RN
    B88jXkKE0L97zI+vkLcaK/Yh3HhdWw4v9a/NKsxidkF+oVhMHe09wRrYEa3pxrOCuyxW
    t65iRbILnc09mZfd2a6CAEONc+UWobZvMO2OFGn74iAZzPB1bqrw8v1MSEGX6biEw07Y
    cU3A==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683006149;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=GhE5fBfcNwMLwxUGqEDocKD//uK7BAVcWCZfFlhjFms=;
    b=PS3ttY3jAhpm4VF4UrTquU3/BxHgUGsP9e4yt/292Jn9arpNz90rNGgsDIq0qG6hIj
    xVb0cdSgiY3CvTSmatDIM9U6dqOAOrYxLxMaq+t9dGgrkb4Rc68DpVamAt9JpVRtJ4pw
    qaoaDOh6cozDCLuhDVzIxtGV+V/y+jMeupUw8sTqtzWz8mDnNG2ayw8gzIf3nXf4dqfr
    4wUZCT6JXwl1/1av7ceNyPfmRRQVP24LQV2dI6oKGWpetZ3IXJb3bheITpjIQ8wgRRYT
    a22FzeGFw3Z23t1Ty6w5LzK04dxDFwCXMi8IvwnEur7/zjUKgamIoea5Heoz8e1hElKN
    hxPA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683006149;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=GhE5fBfcNwMLwxUGqEDocKD//uK7BAVcWCZfFlhjFms=;
    b=jce+kKB9N9Yw3D4ke3LWxnftnI7FNN8KoVsLRzyBG4JD1+RhZ4FP33vzSxtCde+Uap
    VdFO+886OfHYvfQLSEDA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4wqVv7FZ8tH5EUSbMVU80kUr7f4QlYaI60OjHt/Q=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v1] automation: provide diffutils and ghostscript in opensuse images
Date: Tue,  2 May 2023 05:42:18 +0000
Message-Id: <20230502054218.15303-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

The diffutils package is a hard requirement for building Xen.
It was dropped from the Tumbleweed base image within the past 12 months.

Building with --enable-docs now requires the gs tool.

Add both packages to the suse dockerfiles.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 automation/build/suse/opensuse-leap.dockerfile       | 2 ++
 automation/build/suse/opensuse-tumbleweed.dockerfile | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/automation/build/suse/opensuse-leap.dockerfile b/automation/build/suse/opensuse-leap.dockerfile
index bac9385412..c7973dd6ab 100644
--- a/automation/build/suse/opensuse-leap.dockerfile
+++ b/automation/build/suse/opensuse-leap.dockerfile
@@ -18,11 +18,13 @@ RUN zypper install -y --no-recommends \
         clang \
         cmake \
         dev86 \
+        diffutils \
         discount \
         flex \
         gcc \
         gcc-c++ \
         git \
+        ghostscript \
         glib2-devel \
         glibc-devel \
         # glibc-devel-32bit for Xen < 4.15
diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile b/automation/build/suse/opensuse-tumbleweed.dockerfile
index 3e5771fccd..7e5f22acef 100644
--- a/automation/build/suse/opensuse-tumbleweed.dockerfile
+++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
@@ -18,11 +18,13 @@ RUN zypper install -y --no-recommends \
         clang \
         cmake \
         dev86 \
+        diffutils \
         discount \
         flex \
         gcc \
         gcc-c++ \
         git \
+        ghostscript \
         glib2-devel \
         glibc-devel \
         # glibc-devel-32bit for Xen < 4.15


From xen-devel-bounces@lists.xenproject.org Tue May 02 05:49:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 05:49:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528339.821345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptisz-0005ZZ-JY; Tue, 02 May 2023 05:49:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528339.821345; Tue, 02 May 2023 05:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptisz-0005ZS-Fl; Tue, 02 May 2023 05:49:13 +0000
Received: by outflank-mailman (input) for mailman id 528339;
 Tue, 02 May 2023 05:49:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d5QU=AX=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1ptisy-0005ZM-14
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 05:49:12 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.219]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0fbe4ac5-e8ad-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 07:49:11 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz425nAaSe
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Tue, 2 May 2023 07:49:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fbe4ac5-e8ad-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683006550; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=f+QlJylNgM5LghQjiXyqVyI+qwXVrI44uqcb0hsqHsizeLa/cH85IV4Jb9KoS1F6jc
    f0sqtpTKBWXIsfCommQDn5AY1HNLP0jlASwtjG6IFBmbfx0I9ECfgCwTC0Bhd+9UfzWC
    oRUaLi8HIkcp4Qu11PQvYinXcaAN/1qyHpHSu5ytNX/ntjdepcEu0Q7FOQjqoGACT1De
    T21/tLTEC/G77ivz2/wkfWrCQwGCGifYVtvGyUIT6NOn3VBKe2F4zGCmAU+L3c/LOUsS
    7g4mDiFpNaacogTYJ2dq2UJP07NzzICxbadiiISEPnPGubnyiD4Um7/QgTqabL+CrJK0
    jsIA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683006550;
    s=strato-dkim-0002; d=strato.com;
    h=Message-ID:Subject:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=98qYXeKcPqrWVbf2mQ/Ey9QiiyRrTjLYLEEwS8i5Dso=;
    b=Fkt+ljJpvJh4mYjNgiEBTAxVtmgp8w7mGgtHdGSy6n4Y2rMygslWOEjUgHxjlmYDv+
    Fd+/3j8lo4KeEf4giy7z3kF572B5NsrMkWG0OLDK0B8AYEvAexPUaxdtsV8MzYjJmviP
    PbQ9Q0RNLjVkPKnWWbfpMcPV/CTBHFhUFHcmWuh87jvScWpFQuLY0XVGlz6kUVWR9Di6
    zNVmuvSOBN21e6j0SvMezv28JyNO8CDYZJ15v45U95T3mMiN2iRE6S1fwITg/teCYakw
    tGa3f1huIiKryD3PvfY33zppuOddCm9fWIFnnHmTGifXcK4piV6lnis4DLUTwscJE70w
    9yZQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683006550;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-ID:Subject:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=98qYXeKcPqrWVbf2mQ/Ey9QiiyRrTjLYLEEwS8i5Dso=;
    b=n0pjWuG8dTEZua1BSIDkzictqoaJOZCkusNR225/oYRUqVtLqt2guxtC04rrjYLkkg
    TE9P4fenN45pfzHZhv6PEzNK2jQY1LlyVev2pfYEPDIQhStWV1uBHi4fSNex1RUJAHrO
    gAqRd7i5uh2tugSUdFe77fO5FZVo4ZurApEcZZFqhzLJFbN6oU3ZDEqY5rKna47kqUEc
    9qtfIKLpNTBOipc06ekhUlWj5lUs+qGjSFKPq93a9Z9m9ZsCeVHYhSzqVsYzh093Hu5I
    /Yd8UvAkmzyXd16geme5j+TIMP3bejWbrEVJ2vsibiJG4dL3t7ORdE5RMKgjpQXK3TVL
    MZcw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683006550;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-ID:Subject:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=98qYXeKcPqrWVbf2mQ/Ey9QiiyRrTjLYLEEwS8i5Dso=;
    b=wrJmCI7pkFXzpmEWrquKXphK6J3COdOJufzhdOJvpvfBVGU1h2QtHFmJ6gKiVoripN
    73NrW+GFjSHOgP2QK+DA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR5BxOIbBnsc1fym1gFvNQ7EzMpH+yFJc4aADp/8Q=="
Date: Tue, 2 May 2023 07:48:53 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: HAS_CC_CET_IBT misdetected
Message-ID: <20230502074853.7cd10ee3.olaf@aepfle.de>
X-Mailer: Claws Mail 20220819T065813.516423bc hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/wxBthHcgG/WUfQyJhujuOEU";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/wxBthHcgG/WUfQyJhujuOEU
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

The next push to xen.git#staging will trigger a build failure in the refreshed Leap docker image.

For some reason HAS_CC_CET_IBT will evaluate to true. I think the significant change is the binutils upgrade from 2.37 to 2.39 in November 2022.

The comment indicates the combination of gcc7 and binutils 2.39 is supposed to evaluate HAS_CC_CET_IBT to false.
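Probes like HAS_CC_CET_IBT are typically implemented by compiling a trivial program with the candidate flags and keying the option off the exit status, which is why a binutils upgrade alone can flip the result: the assembler runs as part of the same pipeline. A minimal hypothetical sketch of such a probe (the flag shown is an assumption, not Xen's exact Kconfig test):

```shell
# Hypothetical cc-option style probe: try the flag on an empty program
# and record whether the whole compile pipeline accepts it.
cc="${CC:-cc}"
if printf 'int main(void) { return 0; }\n' | \
    "$cc" -x c -fcf-protection=branch -c -o /dev/null - 2>/dev/null
then
    has_cc_cet_ibt=y
else
    has_cc_cet_ibt=n
fi
echo "HAS_CC_CET_IBT=$has_cc_cet_ibt"
```

The report above suggests the gcc7 + binutils 2.39 combination now passes such a probe where the build system's comment expects it to fail.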


Olaf

--Sig_/wxBthHcgG/WUfQyJhujuOEU
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRQpEUACgkQ86SN7mm1
DoDcZRAAkbewwB0xogP4GGHk8EbSNS98gAt4lceiXyk3zK7g0ig4LqppH1TVNXqv
a1Iz07Vw7RLGj/oQMMkR7oi+fMw6yHnc9Me6yjEGK/EoHaNEVRIL/Qu/nphr9Q/G
Hm2L+X3aYOC1k38lAgu1WFUoarbwdSJMc+5PKmwH/o5AbMgvWQHcIXtzkYaT4JN/
mBNnihgHMa/t9TGkFltz8lRdYe7LMTn2EezFIbSDarjy7oFwqaOS7VlM4A9OJY5Z
14iY8CKV9m/r+vk4+CTY6uIdhP5wl9VkI5IKR+pDebfBsomK68K7YqhsWMPYILJc
BS4Lm8YcSo4c6BLn6r9ARjMoA7ST1QHmzdouOLrp8Q4f7JralPyNJIjqgmtSEgtU
/jJY49RTsad+3Pb30zx0XB0KwhrYYhZWSDD4Pwl7dWdovAIfXyk/66G0NT59qRFM
pnfzAAjEwLFZduRDUZ5Jlb9SP+KSsKeIEx66ruyys+GSX0Iiur/PkY//b1ai/L9E
nJXCiPjqECPnvL8yJDHSZ+9DTaKw2ArdPYiozZNNJg5NbRrpWf7YgdutWD+Ng3f8
eQY0Hy2n2yXiKxF39sjMVU0GeqDEHKjzkqnJG8+5rGOP/HwZhh3gA5k/FlCd2ZfV
Yf/KLoYGU4isCM3OM1mAe+nA8dv6oChhmXXOpHfwE4+PSI1mZLE=
=RLv9
-----END PGP SIGNATURE-----

--Sig_/wxBthHcgG/WUfQyJhujuOEU--


From xen-devel-bounces@lists.xenproject.org Tue May 02 06:00:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 06:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528343.821355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptj3w-000865-IO; Tue, 02 May 2023 06:00:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528343.821355; Tue, 02 May 2023 06:00:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptj3w-00085y-FQ; Tue, 02 May 2023 06:00:32 +0000
Received: by outflank-mailman (input) for mailman id 528343;
 Tue, 02 May 2023 06:00:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptj3u-00085s-P4
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 06:00:31 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2085.outbound.protection.outlook.com [40.107.7.85])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a4191ef1-e8ae-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 08:00:29 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9149.eurprd04.prod.outlook.com (2603:10a6:150:24::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 05:59:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 05:59:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4191ef1-e8ae-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hmBzpeFU5yhVNK2nG3nUwqCaQ9wAtkOdDP7XjUpVNmAxmYX5rjZfd8M2NCy4utLq2w79ffgelFSTivHvQjwTLEpdiNs920d0VaNGQbZgGqKuqNzqQRpabM+E2Yft8CG7MYFAWJb+K36rICMxwIesPCuW2Xx7rFuM96HLmHeGdczpJRXFjJN2mk/dSMHtsKFbBl/CTleh8ORbU0r5fcLI5/EfnkkyQlHBe9d1VGJ7S5bT8JrUyvfqJ4zRD2I+hsnrGWNAMuHjD4rjcR4DMrV7jnHVAalkxgHJkFb5J6oYmvzUvPPMpF0WQN1gFN5McQ4L6wiFDrwNYX8nYfnIQXjquQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=UqUDT48TZ44wEPoY2YLLLr65qaAbK6lEv81awHVxLKE=;
 b=UesVRjxai+9smaFUJa/clDWj6iOI5t+HZrsl6GU4jAcaj2yddayF1w+IEU0y0zitnt0Oo5s/TSF8Z0Z++lPP1eoeC4f2pdEf8kEMmBBrYfRAoPsafsQggX6Krq8+gLV4l3PYGyWlmYNoxW71tDqzDf+s+iPwYOh6OjsTuKlNli7ll5J41aXWVNENaLHCx8C115rS9lYCU2gncavV7p8UwPzORTAMN3nzDnOGAVw3X1Whh9kw+FS9fn92jqbYHD5bcwe5/d7H/u7xNN9XXqkuSAR2JiLn8zaFqOYre5gX1HIKD97Dv/wx2DR8lmudAAgIS9xM7t/49Rb3B+eWlO6FVQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UqUDT48TZ44wEPoY2YLLLr65qaAbK6lEv81awHVxLKE=;
 b=KZDEwZuVNYbydsBXnsQpDm0SxNSB1sJeUh2OKAhiaW0oqgXBrh+WTeSNedjqni5iWS64ewA8sOqyugp77VptoIblb+f1+n8/pVzKW9fbQP62/ZA2RBzvffU1RG/DHZdhY42h6Xx+PBrpkgdHLzFySl1pIBqTgmG8TftZgbRCGMmqevGr/juqT+4cFo/kjCRcOTyTN8PxF2yzn9pHAlB29vajaJQ2DTfpElcYks/qG8bc/2hXuEu0ekNGEu9xR4T39V2R8fx6htW/ZXw9k3WgCky0M+e3BrtISaLXPEcKIQzspp2UU5xxS//vFJQZekk/m2rVOfyW8Ew7T1l+Vk6Y7g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <211ad193-07d3-2e9b-2215-c31858b854f2@suse.com>
Date: Tue, 2 May 2023 07:59:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [BUG] x2apic broken with current AMD hardware
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org
References: <a2e5cb62-9aef-4f91-b5e9-35fee6739fc8@suse.com>
 <ZAkVVhIldUv/xQqt@mattapan.m5p.com>
 <21436010-8212-7b09-a577-09d3f57156bf@suse.com>
 <ZAvGvokloPf+ltr9@mattapan.m5p.com>
 <f33c9b8a-f25d-caab-659d-d34ba21ebc25@suse.com>
 <ZBOSKo+sT/FtWY9C@mattapan.m5p.com>
 <e5b28dae-3699-cb0d-ab7e-42fdd42d3222@suse.com>
 <ZBSi2KfoQXo7hr6z@mattapan.m5p.com>
 <b2eaeacc-de5f-ebe9-a330-fbf9e20626b1@suse.com>
 <a2de5d87-ada8-46b9-090b-00dc43309362@suse.com>
 <ZE6iaaUvScHUjoKy@mattapan.m5p.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZE6iaaUvScHUjoKy@mattapan.m5p.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 30.04.2023 19:16, Elliott Mitchell wrote:
> On Mon, Mar 20, 2023 at 09:28:20AM +0100, Jan Beulich wrote:
>> On 20.03.2023 09:14, Jan Beulich wrote:
>>> On 17.03.2023 18:26, Elliott Mitchell wrote:
>>>> On Fri, Mar 17, 2023 at 09:22:09AM +0100, Jan Beulich wrote:
>>>>> On 16.03.2023 23:03, Elliott Mitchell wrote:
>>>>>> On Mon, Mar 13, 2023 at 08:01:02AM +0100, Jan Beulich wrote:
>>>>>>> On 11.03.2023 01:09, Elliott Mitchell wrote:
>>>>>>>> On Thu, Mar 09, 2023 at 10:03:23AM +0100, Jan Beulich wrote:
>>>>>>>>>
>>>>>>>>> In any event you will want to collect a serial log at maximum verbosity.
>>>>>>>>> It would also be of interest to know whether turning off the IOMMU avoids
>>>>>>>>> the issue as well (on the assumption that your system has less than 255
>>>>>>>>> CPUs).
>>>>>>>>
>>>>>>>> I think I might have figured out the situation in a different fashion.
>>>>>>>>
>>>>>>>> I was taking a look at the BIOS manual for this motherboard and noticed
>>>>>>>> a mention of a "Local APIC Mode" setting.  Four values are listed
>>>>>>>> "Compatibility", "xAPIC", "x2APIC", and "Auto".
>>>>>>>>
>>>>>>>> That is the sort of setting I likely left at "Auto" and that may well
>>>>>>>> result in x2 functionality being disabled.  Perhaps the x2APIC
>>>>>>>> functionality on AMD is detecting whether the hardware is present, and
>>>>>>>> failing to test whether it has been enabled?  (could be useful to output
>>>>>>>> a message suggesting enabling the hardware feature)
>>>>>>>
>>>>>>> Can we please move to a little more technical terms here? What is "present"
>>>>>>> and "enabled" in your view? I don't suppose you mean the CPUID bit (which
>>>>>>> we check) and the x2APIC-mode-enable one (which we drive as needed). It's
>>>>>>> also left unclear what the four modes of BIOS operation evaluate to. Even
>>>>>>> if we knew that, overriding e.g. "Compatibility" (which likely means some
>>>>>>> form of "disabled" / "hidden") isn't normally an appropriate thing to do.
>>>>>>> In "Auto" mode Xen likely should work - the only way I could interpret
>>>>>>> the other modes is "xAPIC" meaning no x2APIC ACPI table entries (and
>>>>>>> presumably the CPUID bit also masked), "x2APIC" meaning x2APIC mode pre-
>>>>>>> enabled by firmware, and "Auto" leaving it to the OS to select. Yet that's
>>>>>>> speculation on my part ...
>>>>>>
>>>>>> I provided the information I had discovered.  There is a setting for this
>>>>>> motherboard (likely present on some similar motherboards) which /may/
>>>>>> affect the issue.  I doubt I've tried "compatibility", but none of the
>>>>>> values I've tried have gotten the system to boot without "x2apic=false"
>>>>>> on Xen's command-line.
>>>>>>
>>>>>> When setting to "x2APIC" just after "(XEN) AMD-Vi: IOMMU Extended Features:"
>>>>>> I see the line "(XEN) - x2APIC".  Later is the line
>>>>>> "(XEN) x2APIC mode is already enabled by BIOS."  I'll guess "Auto"
>>>>>> leaves the x2APIC turned off since neither line is present.
>>>>>
>>>>> When "(XEN) - x2APIC" is absent the IOMMU can't be switched into x2APIC
>>>>> mode. Are you sure that's the case when using "Auto"?
>>>>
>>>> grep -eAPIC\ driver -e-\ x2APIC:
>>>>
>>>> "Auto":
>>>> (XEN) Using APIC driver default
>>>> (XEN) Overriding APIC driver with bigsmp
>>>> (XEN) Switched to APIC driver x2apic_cluster
>>>>
>>>> "x2APIC":
>>>> (XEN) Using APIC driver x2apic_cluster
>>>> (XEN) - x2APIC
>>>>
>>>> Yes, I'm sure.
>>>
>>> Okay, this then means we're running in a mode we don't mean to run
>>> in: When the IOMMU claims to not support x2APIC mode (which is odd in
>>> the first place when at the same time the CPU reports x2APIC mode as
>>> supported), amd_iommu_prepare() is intended to switch interrupt
>>> remapping mode to "restricted" (which in turn would force x2APIC mode
>>> to "physical", not "clustered"). I notice though that there are a
>>> number of error paths in the function which bypass this setting. Could
>>> you add a couple of printk()s to understand which path is taken (each
>>> time; the function can be called more than once)?
>>
>> I think I've spotted at least one issue. Could you give the patch below
>> a try please? (Patch is fine for master and 4.17 but would need context
>> adjustment for 4.16.)
> 
> Given the patch didn't fix the problem, that wasn't the issue.  I did
> though manage to try another variant of BIOS settings for this
> motherboard.  Setting "Local APIC Mode" to "x2APIC" in the BIOS neither
> breaks anything additional, nor fixes issues.  What was in Xen's dmesg
> did change slightly and looks likely better for my purposes.  Some more
> snippets from 4.17 Xen dmesg, with "x2apic_phys=true":
> 
> (XEN) AMD-Vi: IOMMU Extended Features:
> (XEN) - Peripheral Page Service Request
> (XEN) - x2APIC
> (XEN) - NX bit
> (XEN) - Guest APIC Physical Processor Interrupt
> (XEN) - Invalidate All Command
> (XEN) - Guest APIC
> (XEN) - Performance Counters
> (XEN) - Host Address Translation Size: 0x2
> (XEN) - Guest Address Translation Size: 0
> (XEN) - Guest CR3 Root Table Level: 0x1
> (XEN) - Maximum PASID: 0xf
> (XEN) - SMI Filter Register: 0x1
> (XEN) - SMI Filter Register Count: 0x1
> (XEN) - Guest Virtual APIC Modes: 0x1
> (XEN) - Dual PPR Log: 0x2
> (XEN) - Dual Event Log: 0x2
> (XEN) - Secure ATS
> (XEN) - User / Supervisor Page Protection
> (XEN) - Device Table Segmentation: 0x3
> (XEN) - PPR Log Overflow Early Warning
> (XEN) - PPR Automatic Response
> (XEN) - Memory Access Routing and Control: 0x1
> (XEN) - Block StopMark Message
> (XEN) - Performance Optimization
> (XEN) - MSI Capability MMIO Access
> (XEN) - Guest I/O Protection
> (XEN) - Enhanced PPR Handling
> (XEN) - Invalidate IOTLB Type
> (XEN) - VM Table Size: 0x2
> (XEN) - Guest Access Bit Update Disable
> (XEN) AMD-Vi: Disabled HAP memory map sharing with IOMMU
> (XEN) AMD-Vi: IOMMU 0 Enabled.
> 
> 
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Interrupt remapping enabled
> (XEN) nr_sockets: 1
> (XEN) Enabled directed EOI with ioapic_ack_old on!
> (XEN) Enabling APIC mode:  Physical.  Using 2 I/O APICs
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using old ACK method
> 
> 
> (XEN) SVM: Supported advanced features:
> (XEN)  - Nested Page Tables (NPT)
> (XEN)  - Last Branch Record (LBR) Virtualisation
> (XEN)  - Next-RIP Saved on #VMEXIT
> (XEN)  - VMCB Clean Bits
> (XEN)  - DecodeAssists
> (XEN)  - Virtual VMLOAD/VMSAVE
> (XEN)  - Virtual GIF
> (XEN)  - Pause-Intercept Filter
> (XEN)  - Pause-Intercept Filter Threshold
> (XEN)  - TSC Rate MSR
> (XEN)  - NPT Supervisor Shadow Stack
> (XEN)  - MSR_SPEC_CTRL virtualisation
> (XEN) HVM: SVM enabled
> 
> If I'm reading that correctly, everything is there for x2APIC.  As such
> there seem to be one or two bugs:
> 
> The definite bug is that the x2apic_cluster APIC driver fails on recent
> AMD processors.
> 
> I'm unsure whether selecting the x2apic_cluster APIC driver is correct or
> not.  Capabilities you used to find only on multi-socket server
> motherboards are now appearing on desktop motherboards.  My
> understanding is this processor does NUMA within a single die, not merely
> across sockets.  As such it may well need the features of x2apic_cluster;
> perhaps the driver assumes nr_sockets > 1, which is untrue here?

Just to answer this one (I don't think there's much more I can do at
this point, without further information): No, there certainly isn't
such an assumption. Iirc the x2APIC code also predates the existence
of the nr_sockets variable (and the respective log line) by quite a
bit.

Jan

> It does appear "x2apic_phys=true" plus "tsc_mode = 'always_emulate'" are
> adequate workarounds all the way back to 4.14.  Now for the proper
> bugfix.
> 
> 
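For reference, the two workaround settings live in different places; a sketch assuming a GRUB-booted system and an xl guest configuration (file paths are illustrative):

```
# Xen hypervisor command line (e.g. GRUB_CMDLINE_XEN in /etc/default/grub):
#   force physical-destination x2APIC mode instead of clustered
GRUB_CMDLINE_XEN="x2apic_phys=true"

# Per-guest xl configuration (e.g. /etc/xen/guest.cfg):
#   always emulate RDTSC for the guest
tsc_mode = "always_emulate"
```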



From xen-devel-bounces@lists.xenproject.org Tue May 02 06:09:45 2023
Message-ID: <322276ce-2586-4b26-cf1e-ee1b467d3ebc@suse.com>
Date: Tue, 2 May 2023 08:09:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v5 1/4] xen/riscv: add VM space layout
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <f1b5ee8652a20b2043965a4de5c2c64f662724bb.1681918194.git.oleksii.kurochko@gmail.com>
 <34f032df-cbfc-7a97-9a1f-2fa1ce574281@suse.com>
 <f2978c2ddc1872025f4d939187775c21fd90f074.camel@gmail.com>
 <509ba3a2-0b85-d758-6915-7975d31a3437@suse.com>
 <db3a9b3b-63db-89d1-5386-57eb7044b317@xen.org>
 <d157b1e2-cfc5-f7b7-9443-16d1db9a4311@suse.com>
 <5176b0bc-3727-e939-9776-ee4bfd732e32@xen.org>
 <016a95e8cc1be45ce1821aba0570ff87973c4c35.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <016a95e8cc1be45ce1821aba0570ff87973c4c35.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 29.04.2023 12:05, Oleksii wrote:
>>>> For
>>>> RISC-V, I would recommend to make sure the struct page_info will
>>>> never
>>>> cross a cache boundary.
> Do you mean that sizeof(struct page_info) <= cache line size?

I don't think that's what was meant. Instead I expect the goal is for no
struct page_info instance to ever cross a cache line boundary. IOW one
of sizeof(struct page_info) % cachelinesize == 0 or
cachelinesize % sizeof(struct page_info) == 0, or in yet different terms
(with the expectation that cache lines are always a power of two in size)
sizeof(struct page_info) == 2**n. Yet unless you're able to fit everything
in 32 bytes, that'll mean more overhead than strictly necessary.
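That condition can be checked at build time. A minimal sketch, assuming a
64-byte cache line and a purely hypothetical page_info layout (the real
RISC-V structure is still being defined):

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical layout for illustration only; four 8-byte fields,
 * so sizeof(struct page_info) == 32. */
struct page_info {
    uint64_t count_info;
    uint64_t type_info;
    uint64_t v;
    uint64_t tlbflush_timestamp;
};

#define CACHELINE_SIZE 64  /* assumed cache line size */

/* Either a whole number of entries fits in one line, or each entry
 * occupies a whole number of lines.  With power-of-two line sizes this
 * is equivalent to sizeof(struct page_info) being a power of two. */
_Static_assert(sizeof(struct page_info) % CACHELINE_SIZE == 0 ||
               CACHELINE_SIZE % sizeof(struct page_info) == 0,
               "struct page_info may cross a cache line boundary");
_Static_assert((sizeof(struct page_info) &
                (sizeof(struct page_info) - 1)) == 0,
               "sizeof(struct page_info) is not a power of two");
```

With a 32-byte entry both assertions hold; growing the structure to, say,
40 bytes would trip them at compile time rather than at run time.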

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 06:16:05 2023
Message-ID: <6e03a33b-f5df-391a-8c70-ebe4294e6fe6@suse.com>
Date: Tue, 2 May 2023 08:15:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v5 2/4] xen/riscv: introduce setup_initial_pages
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Connor Davis
 <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1681918194.git.oleksii.kurochko@gmail.com>
 <5b27693bcdf6d64381314aeef72cfe03dee8d73a.1681918194.git.oleksii.kurochko@gmail.com>
 <67d8574f-2e0d-4eb6-19aa-67fe7645e35a@suse.com>
 <ea2d5cfabb9ada64eb975369779ca430f38e9eec.camel@gmail.com>
 <53257ae8-d306-8c7e-35ff-f3bc3947849b@suse.com>
 <3d440048717892fe5d3ed7fe3255dc8c9f5d38a3.camel@gmail.com>
 <2c424759-3072-cd07-913d-c45ae6791ce2@suse.com>
 <5316859bf081d2c00dae784e6700f55747a6635d.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5316859bf081d2c00dae784e6700f55747a6635d.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0094.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6980:EE_
X-MS-Office365-Filtering-Correlation-Id: da6376f7-a144-4f30-e057-08db4ad4ab5f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: da6376f7-a144-4f30-e057-08db4ad4ab5f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 06:15:47.3983
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kD+C9xJSh4wHYIcicNpEDAfHylgmPqvWHTyabzJx5tf9tSpBpKmRq3bRy0EAk6s5Il/3CwSVKDglkCcr62+cAQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6980

On 28.04.2023 22:05, Oleksii wrote:
> Hi Jan,
> 
> On Mon, 2023-04-24 at 17:35 +0200, Jan Beulich wrote:
>> On 24.04.2023 17:16, Oleksii wrote:
>>> On Mon, 2023-04-24 at 12:18 +0200, Jan Beulich wrote:
>>>> On 21.04.2023 18:01, Oleksii wrote:
>>>>> On Thu, 2023-04-20 at 16:36 +0200, Jan Beulich wrote:
>>>>>> On 19.04.2023 17:42, Oleksii Kurochko wrote:
>>>>>>> +    csr_write(CSR_SATP,
>>>>>>> +              ((unsigned long)stage1_pgtbl_root >> PAGE_SHIFT) |
>>>>>>> +              satp_mode << SATP_MODE_SHIFT);
>>>>>>> +
>>>>>>> +    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
>>>>>>> +        is_mode_supported = true;
>>>>>>> +
>>>>>>> +    /* Clean MMU root page table and disable MMU */
>>>>>>> +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
>>>>>>> +
>>>>>>> +    csr_write(CSR_SATP, 0);
>>>>>>> +    asm volatile("sfence.vma");
>>>>>>
>>>>>> I guess what you do in this function could do with some more
>>>>>> comments. Looks like you're briefly enabling the MMU to check
>>>>>> that what you wrote to SATP you can also read back. (Isn't there
>>>>>> a register reporting whether the feature is available?)
>>>>> I supposed that it has to be but I couldn't find a register in
>>>>> docs.
>>>>
>>>> Well, yes, interestingly the register is marked WARL, so apparently
>>>> intended to be used for probing like you do. (I find the definition
>>>> of WARL a little odd though, as such writes supposedly aren't
>>>> necessarily value preserving. For SATP this might mean that
>>>> translation is enabled by a write of an unsupported mode, with a
>>>> different number of levels. This isn't going to work very well, I'm
>>>> afraid.)
>>> Agree. It will be an issue in case of a different number of levels.
>>>
>>> Then it looks there is no way to check if SATP mode is supported.
>>>
>>> So we have to rely on the fact that the developer specified
>>> RV_STAGE1_MODE correctly in the config file.
>>
>> Well, maybe the spec could be clarified in this regard. That WARL
>> behavior may be okay for some registers, but as said I think it isn't
>> enough of a guarantee for SATP probing. Alistair, Bob - any thoughts?
> I've re-read the manual regarding CSR_SATP, and the code for detecting
> the SATP mode will work fine.
> From the manual ( 4.1.11 Supervisor Address Translation and Protection
> (satp) Register ):
> “Implementations are not required to support all MODE settings, and if
> satp is written with an unsupported MODE, the entire write has no
> effect; no fields in satp are modified.”

Ah, I see. That's the sentence I had overlooked (and that's unhelpfully
not only not implied to be that way, but actively implied to be different
by figures 4.11 and 4.12 naming [all] the individual fields as WARL).

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 06:53:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 06:53:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528355.821385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptjtK-00067d-2y; Tue, 02 May 2023 06:53:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528355.821385; Tue, 02 May 2023 06:53:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptjtK-00067W-0G; Tue, 02 May 2023 06:53:38 +0000
Received: by outflank-mailman (input) for mailman id 528355;
 Tue, 02 May 2023 06:53:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptjtJ-00067O-7s
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 06:53:37 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20605.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0d014f29-e8b6-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 08:53:32 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7166.eurprd04.prod.outlook.com (2603:10a6:800:121::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 06:53:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 06:53:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d014f29-e8b6-11ed-8611-37d641c3527e
Message-ID: <333df991-58a8-f4e0-b46c-9f480cd34213@suse.com>
Date: Tue, 2 May 2023 08:53:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: xen | Failed pipeline for staging | 6a47ba2f
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, alejandro.vallejo@cloud.com,
 committers@xenproject.org, michal.orzel@amd.com,
 xen-devel@lists.xenproject.org
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
 <ca0144a6-2c57-0cc3-fd27-5dbe59491ef3@citrix.com>
 <alpine.DEB.2.22.394.2304291808420.974517@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2305011835000.974517@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305011835000.974517@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FRYP281CA0006.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::16)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7166:EE_
X-MS-Office365-Filtering-Correlation-Id: 9d591166-a79b-49f1-547b-08db4ad9ef95
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9d591166-a79b-49f1-547b-08db4ad9ef95
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 06:53:29.2757
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: z5jsHjRA6JGkxWYJpi5s7LiWBjhryPWew13PHDaH1HQhzXvwmD4QMadDZK3PmPWbWmoxeViZEy51e6KnQj+FQQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7166

On 02.05.2023 05:59, Stefano Stabellini wrote:
> xen: fix broken XEN_SYSCTL_getdomaininfolist hypercall
> 
> XEN_SYSCTL_getdomaininfolist doesn't actually update the guest
> num_domains field, only its local copy. Fix that.

This isn't true, at least not always / unconditionally. "copyback" is
what controls copying back of the entire struct, and in the success
case this looks to be happening fine. Yet for the failure case it's
unclear whether any copying back is actually intended. (If the op was
to return merely the number of active domains, I think that ought to
be restricted to max_domains == 0 and the handle also being a null
one.)

I'm also having a hard time seeing what failure case the test ended
up encountering: There are only two errors which can occur - one
from the XSM hook (which is mishandled, and I'll make a separate
patch for that) and the other from failing to copy back the info for
the domain being looked at. I hope we can exclude the former, so are
you suggesting the info struct copy-back is failing in your case?

> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>

In any event, if anything needs fixing here, a Fixes: tag would be
nice.

Jan

> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index 02505ab044..0e1097be96 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -107,10 +107,8 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>          
>          rcu_read_unlock(&domlist_read_lock);
>          
> -        if ( ret != 0 )
> -            break;
> -        
>          op->u.getdomaininfolist.num_domains = num_domains;
> +        __copy_field_to_guest(u_sysctl, op, u.getdomaininfolist.num_domains);
>      }
>      break;
>  



From xen-devel-bounces@lists.xenproject.org Tue May 02 07:18:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 07:18:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528362.821395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkHI-0000LK-4t; Tue, 02 May 2023 07:18:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528362.821395; Tue, 02 May 2023 07:18:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkHI-0000LD-2G; Tue, 02 May 2023 07:18:24 +0000
Received: by outflank-mailman (input) for mailman id 528362;
 Tue, 02 May 2023 07:18:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptkHG-0000L7-Kt
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 07:18:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2052.outbound.protection.outlook.com [40.107.7.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 83772843-e8b9-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 09:18:19 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GVXPR04MB10072.eurprd04.prod.outlook.com (2603:10a6:150:117::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 07:17:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 07:17:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83772843-e8b9-11ed-8611-37d641c3527e
Message-ID: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
Date: Tue, 2 May 2023 09:17:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0141.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GVXPR04MB10072:EE_
X-MS-Office365-Filtering-Correlation-Id: 7d484f16-dd8e-4ca2-bcba-08db4add5695
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7d484f16-dd8e-4ca2-bcba-08db4add5695
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 07:17:50.5780
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xawX4/7HxFajo9aw1K0PWV8Bey8UTunv4MBbFItVn+IBfu3Dr6PMuJz9uXjRh0DB+HboQ/qxXDU6zZi8nYG3wQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR04MB10072

Unlike for XEN_DOMCTL_getdomaininfo, where the XSM check is intended to
cause the operation to fail, in the loop here it ought to merely
determine whether information for the domain at hand may be reported
back. Therefore if on the last iteration the hook results in denial,
this should not affect the sub-op's return value.

Fixes: d046f361dc93 ("Xen Security Modules: XSM")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The hook being able to deny access to data for certain domains means
that no caller can assume to have a system-wide picture when holding the
results.

Wouldn't it make sense to permit the function to merely "count" domains?
While racy in general (including in its present, "normal" mode of
operation), within a tool stack this could be used as long as creation
of new domains is suppressed between obtaining the count and then using
it.

In XEN_DOMCTL_getpageframeinfo2 said commit had introduced a 2nd such
issue, but luckily that sub-op and xsm_getpageframeinfo() are long gone.

--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -89,8 +89,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xe
             if ( num_domains == op->u.getdomaininfolist.max_domains )
                 break;
 
-            ret = xsm_getdomaininfo(XSM_HOOK, d);
-            if ( ret )
+            if ( xsm_getdomaininfo(XSM_HOOK, d) )
                 continue;
 
             getdomaininfo(d, &info);


From xen-devel-bounces@lists.xenproject.org Tue May 02 07:20:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 07:20:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528365.821405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkJI-0001kK-Gi; Tue, 02 May 2023 07:20:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528365.821405; Tue, 02 May 2023 07:20:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkJI-0001kD-Dt; Tue, 02 May 2023 07:20:28 +0000
Received: by outflank-mailman (input) for mailman id 528365;
 Tue, 02 May 2023 07:20:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptkJG-0001k5-Sh
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 07:20:26 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2072.outbound.protection.outlook.com [40.107.7.72])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cecac40a-e8b9-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 09:20:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GVXPR04MB10072.eurprd04.prod.outlook.com (2603:10a6:150:117::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 07:19:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 07:19:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cecac40a-e8b9-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bIgnX4Su5oMzTFIjPaDeciXZi6yPfjLIv4l2JxSENMTrlEfskPrHELiL33m7WcJrRTsjWOvNziMn5cDyef+t18cMm0koQ1m1nC2pJjvkemwidO5NuDAydqSGcV4V8WacEzXk3Ni8HMm4CzyJdICm2mpu+RWTT2O6xOT9KU/lm2onRFmALcixsT5Gwq//Jvto+Naey69aT5F3+hHj2jPjiALYoiyvwXEmlCnu7LjOGIF4HBUKA0xTDVJoJN/sa5ftE27lkCkwH/ApJRdT9WTT4rP8xqrTXQnSiER1CK1bYRXNzlvEdqbrzQEVLok4AIvYtirwzizpwf1Z/5JfYbV+2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ge+zmYmWwl76Lvy1vCFAkPgqQs+CeCMIZ3C1bhjNBu4=;
 b=kQRXUKz/nct7A/y1z5cxFiZWGXFm7XFGRfJtklbFb5F1nnM4GSreQD9KDrK2GZ6TR0UnayKKuoEbgJRrXmqnXJ4c8oxtX0GTYws02w/4kE13iCIq8V7wqSj3mcO7PKBmHz71rjmsG+UYuagJIrXZWqDBjXtfTu4hMe3U6dynPb2EO4SpfUobsvTNok4YVCuUdOf19LOm25oyG84B3pqM2BiONwYAj5G7WIMMcy97M03BTNw1vNaC97g3fpSvL1NiIcE5bTR17srrhiy5cqYyeSVIB/4YX8zK/DbLnncmphygUudjfAU8GTOVLA5eypTrVXQJZTIPWuEKV5B662iMaw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ge+zmYmWwl76Lvy1vCFAkPgqQs+CeCMIZ3C1bhjNBu4=;
 b=i0Dj0Myrp9pvA1UsRaEkuRiPvS5/ovi7pQ2P2Wt2YFE325qgOHF3kpJeQyjPiI5B40poXV34nCM4Zx71ujxouXnRUwnRCb7CmZDB4hNbnYlxdnxJvLUUZ8Tu8yNVcbFFcqQcQm+N+2XPfxNaJpaBNbvLHSekgqTuEvJeyIjn4w3WWcLA4cXCcw6ICAmKkvcsTvWAnxHPgRQ6scixyvw28KqohUgJjafej7COZYhRVK8cbKipCepSjhgs8Y5iCMYCfOWnbVDO/K1c86XizEs8Ta/MfUYp4bUxcyIckqz4z8itwcCrXqE5t9PTfY8uqOZXYQo9PqNUuse2hmMfwgbvcA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c6bbf10d-f4a0-decb-d299-ab2093060b46@suse.com>
Date: Tue, 2 May 2023 09:19:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230430144646.13624-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230430144646.13624-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0025.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GVXPR04MB10072:EE_
X-MS-Office365-Filtering-Correlation-Id: 9c97f6c0-4eb8-403d-552f-08db4adda22f
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pItcUnFHYD93FNMHgtH5v7fe//3hS9B4ogemegJqfFles5P+12S5+w3WGSxEgmlrbjChX3yjGRJxcg/aMnfrbRoCfWHIuPQfY6u3KYdkvalVv6S7c7XNsh0XBo6rom9HTb3ZzaYjEnoB3NtC6mDTt2VIscnq1iQoXUk5Wo6005bUAtfuvEIEHP1zf4k74O93HZzgqrp7LqctzOe0okRpTjH+7NJpD43L5sZrCHo9PamGbBLb2EPszwFn2aDB9EFjUVUJ8ZViipXh0t4QAp9wq6GURviQGGgdvvbOiIY5uIevcA/lnYKM+s4jx2QAh7ftyO4LyhsedABZZskHNP5kpLV0M1TrjlrLoX+UNBdawNZSnQpd1NqL7qNmn9a07csQIMo/FRYHk+MPgxkpu7XjKXppOUmj2BoSRnxs6kxfgxOWSKs101fOmZJFCfnSnBa2rnp8oeLtQzs3lGN8p1HCjTT692MCqdwKD0roDJYDVTqeUWFqSIXkkN8V2GGvSQ/3sHsIpUM36ZJHurct34KrnJH67rK8DQwLzdf7Vou30kBI16Wa3OHksP6agtSvrh8u+zS9Uarmy8nVRuElhXG/Qj0MZ86DUrQptJRmNsLfg4zE2Vcx1h0BAnFHd/XD67llLO9BehLGk+SW+TjoPsM6lQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(366004)(136003)(396003)(39860400002)(346002)(451199021)(31686004)(86362001)(4326008)(6636002)(66476007)(66556008)(66946007)(6506007)(6512007)(41300700001)(36756003)(8676002)(8936002)(5660300002)(6862004)(6486002)(316002)(2906002)(4744005)(37006003)(31696002)(54906003)(478600001)(38100700002)(26005)(53546011)(186003)(2616005)(83380400001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NEJhaHkyMDk0cXZ4K2RwcDFJSGZEQXR6OS9xV09qS1ZFZ1AyZG5zM2lBeDlu?=
 =?utf-8?B?ZHJMam5EQkhIMWgxV20yRUpBR2hNaktkVDB4bU5mNGZwY0MyOEttK3hPd3dU?=
 =?utf-8?B?Nm1lZkxCYVQ4cEN2WnBsOVFBQndmZEMwUWJGUzZnZ0NmY3pyMWlycXgyV2tm?=
 =?utf-8?B?QmdwNkVzczZ3c3lBaGw4TXMzYmlJN0p5UTQxRDZmZ3d2cTdHTGl3WlYxYzRT?=
 =?utf-8?B?VjRHN3dMTGhCR3VFRTRWNGtiNmx5aVl3ZzJJTk9xdWNsMlpTR1UyMXAraTNz?=
 =?utf-8?B?Z0pSb3crS0RPTFhLdlRldUpDYWpqRGpDTzNMK2s4SlhvUHVKN2R6QUYwZUNo?=
 =?utf-8?B?TDBib1ArY3JjUmppN0plMHdWL09BK0x5Z0gyUVRvbjBlSUp0cUU3Q1JxU0xC?=
 =?utf-8?B?WVk3c21JdHRrbW8xNkllVE9WOEViY0Vzb0llMEFQN1VydDBVM1NJbUgxWXNi?=
 =?utf-8?B?aW9XQ04zdTIzbUNDNW4rK1Q2N3gwOW93YW5tVnFEMU54ZmlnamR5MXc1YlRJ?=
 =?utf-8?B?NG95S1hGc21uWk9VMTVBN2FKbkx3UEpHOVN0ZE5JRWFKZUpGQzJzRGU2M25m?=
 =?utf-8?B?RmsraThmOGlBb1Bpc2tVd01aNTVkL0F6WnVqYzdXT2VlNGx6S3kxejE3S3ov?=
 =?utf-8?B?WWZZWjRDR3ZTc3ZaWndsTmgzNm1TaHdPVUd1aUJmbitGaUJ4b3BsK2ptcDlO?=
 =?utf-8?B?b2F4S3FIcGFyaDV2RXNMdUFnTU5tWEJXaDlYSWVCdytOR1VvcG1GUm5XVXo0?=
 =?utf-8?B?QXhyS0Rtd1hiNkFhOUJYMzB2WFcxSXFDbUFTcG5EeW1tVFpVYWhRck8rbklJ?=
 =?utf-8?B?L1labzJZbi9mNDkvQTlsWVpzN29vSlhxTGpkcnFjanpZaEVHSG1xS1pLcGVX?=
 =?utf-8?B?K0pYZFhRa0p2bHY5QW1PZHRGbm95cmJnaS9qcFBNQ2h3d0loT0tjclh5U3hv?=
 =?utf-8?B?THFxSlo5ZU9NbkhqWVVROG9FeDM4WFE1ZncwM1VhNVdHR3pDYUtBRE9GVDM3?=
 =?utf-8?B?WkNTTGE1SW5qOWlPM2lWN01sTFFnbDBkR2FTekd2eURidXZIS3VRTWRSeU4y?=
 =?utf-8?B?alI5ejN5VkpsbllDTWdhdWEwK21XTnptWHhtZ3NLb3F0VCtzaUdJMXAvR29C?=
 =?utf-8?B?M1NDU1E3NE5EMnp2dTZ2T0pTazNDVVp0YWNVZjB3UUtJa0I1eFpXMU0zUHc3?=
 =?utf-8?B?eDRWcjJTc3JjMGl2R2oydTIrb3RQVG9wYkYzVEl6T0I1WDQ1T0oxMnJYd0lh?=
 =?utf-8?B?Yjh2eldmVktxQ3lDcThjelRKeDFnK05RQzJrblByQnRSN2p0c09KSDNUcXly?=
 =?utf-8?B?bGhYNVZoV0hOOFphSFFVR2xOUDh4eXhPYmVBWGhvWDl0eC9EUmd2VS9Jdzkx?=
 =?utf-8?B?OHk0bWpRSzBkYTFKcnkvT1BWYWhhRFJrZFhPZS9nTkpwa2xrdkowWW01bUFD?=
 =?utf-8?B?c3NpcWh3c3NaM3gwanZXWjVKQzZnR2diWGc3bmhrTVM5ZW14d2ZkWmVodENS?=
 =?utf-8?B?c3JIUlNFZm9GQXVaSFFIU0FkaTFwU2RBNW4zZi8wRVE4VEN6ZFZTN3F6b1lt?=
 =?utf-8?B?SE1UZ1Vud0UvVldUZTFFT2JqdDN6VUx2N2VWRWFHYWNOdU5CL2x5RDQzUXQ4?=
 =?utf-8?B?RXNlN2tXd1R4ck16aS9Yc3ZvUVNBczBaVUcxUy9kNU9HOXUrTHhDQlFyZEVO?=
 =?utf-8?B?ck1WS3BudHh4N0tXbkdXZWlyK3JmazM0ejNOY2xPcUxYa3l4WFV6NWg3eXJN?=
 =?utf-8?B?Vm1sbDZrbXVPQVBjWHMyWFdKejZ5bDlxRzdhL0FCRm5rUzMwb3phcnAyaTRz?=
 =?utf-8?B?bERDakw2NGVHcHM1Wlh5SGhGMGtEc2hqbjBWSWhLbi90R3E1Y3dRSG9aY2dH?=
 =?utf-8?B?bW5PbVV0Z05MZXZBZENlMVNsRDlSKzN0Vm45Ym83aDBxUmFjL3NwbndDeGdJ?=
 =?utf-8?B?MlUwbVlXT3U5S0RaOEdocVB0dWxIT3FpOHQyVGw1OFNqWVVPVnRnQUtDcUI4?=
 =?utf-8?B?TUtUUS9ndmM4NDhHVWV1a2U3L25DQWNYR3dyK0VDdTNoYVl0VUI0REwxRzh4?=
 =?utf-8?B?ZEZvT0wwUTVjelJxRDFNYWdKc3dkVlF1VCtHdmNzNFZIUk1CT250TFljVkRJ?=
 =?utf-8?Q?rzaFI4WEproGI7vpjMXbGna/A?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9c97f6c0-4eb8-403d-552f-08db4adda22f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 07:19:57.4287
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3VdPxRBuF0fybvU7H6m+DeZgnngmhJR7YjZpTQ14LtVLBQFwCznGEoDfYRK9x2vlROlFGRHuqj7c8Gtw6Me9Mw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR04MB10072

On 30.04.2023 16:46, Juergen Gross wrote:
> In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
> can fail if the last domain scanned isn't allowed to be accessed by
> the calling domain (i.e. xsm_getdomaininfo(XSM_HOOK, d) is failing).
> 
> Fix that by just ignoring scanned domains where xsm_getdomaininfo()
> is returning an error, like it is effectively done when such a
> situation occurs for a domain not being the last one scanned.
> 
> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Should have finished reading the list first, before sending the same
patch again ...

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 07:32:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 07:32:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528368.821415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkUR-0003Ix-G4; Tue, 02 May 2023 07:31:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528368.821415; Tue, 02 May 2023 07:31:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkUR-0003Iq-Ck; Tue, 02 May 2023 07:31:59 +0000
Received: by outflank-mailman (input) for mailman id 528368;
 Tue, 02 May 2023 07:31:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptkUQ-0003Ik-SS
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 07:31:58 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0618.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69f37921-e8bb-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 09:31:55 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8186.eurprd04.prod.outlook.com (2603:10a6:10:25f::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 07:31:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 07:31:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69f37921-e8bb-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VZZE+F/eoqt3NpzStDOYzo5CExw18lv9QMwoWWIb3uS7CBoD/0XDT4IsdadcUzQAKB8xAuaPgFhCuCpbX0J+opiDcQwK4dNq+M6W1vr2pjOIOp6QRN4nHcpU9IZJZG17KgEU/2gzg2/9MPvIls/w+SJdJIWyJlbnyOwl0j+H2cBkwYuoUbEY6MAr0KMyYNln83CxLkXOgqS1NKDc4Q00pXq00iZ61gJWGm/hpTvJ3KdahyF0vmYacSzvLxQIHnCHZPA7mrR6AjQKqaGQVVHTWpgqBw1Qdhflr9v1zYPzD998AZybnWxA/vdMBLAIwMV/cRVQKFTl58b03Qixzd3Xjg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2YMiXAnfAKMyQKqJ13W90DOmfzJnHkhJukbcKuVvems=;
 b=ACbunt98dQnFWJnpOTUkBUJrIWx/uLao/4bN/OIlGP64CKL9srpljmfpV9dCQDgalkF0SzfIBS040xBtQCgYHvk8n467flDpBFEX3OgnPZH/qbiq77hbfnpXQ+YxIJc/IsKeoGqs7qY47wBaBAAKGVquXp3EZyfypE1ACUTx/gCX0SZtgx2JrKExzW8uyXC2EyHpT8FcakZlvyx6stxw+kktqA975TTTExQmFYwkJVODZgvyO2TqVbCQsauC8QOM0wM/NZaxgwqpNwe3dutjVmeEC4I5L+aO3a4OLeO3CMf/eTBuX1IoMOrr8T2WhnwOlWGyn1AA+aAGrsJIQDNL0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2YMiXAnfAKMyQKqJ13W90DOmfzJnHkhJukbcKuVvems=;
 b=34oHAdLWujbDxYjar2Qku9BdAJPWLg2eBjKvHq23js6JxiJYfD/jtgf+twC7FJ2D/CSMqdToTFet2lGxqIW2HNsSGXHqDIQNOjzC+dfjhMrplo9uiN0+Idn2mcv8BCzMyIxQQLdcJrcPVCgwvPKiBzwRSPXs5lo4TSOxSx6267oxntB9TjTijUPqOFEenGjUUWhfXdz7XKqeTtO05M/Kjahb9PJnI3yacBZtRZ1s4NUtJL1dU4F5Sz3+LKQMVDwVus8qmOm7s4sGrQg9KUJPCtbYvhPfYrbIL1LpBgST4QaCZfIJh2c+U0L3hePiTp4MMVPkYc9oAjAqW9pfctpoKg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
Date: Tue, 2 May 2023 09:31:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: HAS_CC_CET_IBT misdetected
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230502074853.7cd10ee3.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0086.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8186:EE_
X-MS-Office365-Filtering-Correlation-Id: cf157e7a-1395-4e2e-4420-08db4adf4d18
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gVTOrYPAarPQbZLQY0XGKyWfZ6+cx6KV6XZaq3Ivi0RKxhBGg+n0AfeUIpHGic731khyUBoGjW3qbh5iYWLePvmk5Sfqfg7mvf+CjYjntoalOj4UKlSyTeK0+uo03mZqhmvdMzXkd0QC4aq0MEk2DsEa5GdiDMwjfG7TngbPrqKquclcWHrnMhzTYfATqXhG5+BpSLmdet9TzwcEYSamPYJkIVgcBNo44ecHQYYIH3VXux0+nkwq0vFHesbXqb5bo8ClbLBehkt6Kw4n3fSctzVuahg1ULqR/MIY33saJAvOFfrkHtKT1IFGoDpNxFlXXQtVlYbEk5NYImcsf5nijPbUimrTIi2ZCYZ2QRKY8FyugRe6Dy6Z8ar3jkRgG3vuwaHCjD6nuGjqvGXEqzH4L6kGU2cEXcrl9Q4/BAZ0wCCtqaudC5WLzSd8fnSEbHY+XEJaC4823sgIr4E3juxD6bb6NV+DEu9Axjo3yr9XdFt1kBzomvFzPC1O5ICNDXBra5W+UFUdlCWQuyi9RPaMJrc9oWgf+ymF+c+UJoL/MprySwHPo5a3wA6yuSaMv0cwQ4yEcVtq+LW5yUaXi3Y4eh/lzW42Poi2mri3aSfaL18i/tsMBioQwnDOeuHaJMW8O0PU3j0SRBFj8m1GmtFXfg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(376002)(346002)(396003)(136003)(366004)(451199021)(478600001)(26005)(6512007)(6506007)(53546011)(6486002)(83380400001)(186003)(31686004)(2616005)(4326008)(38100700002)(66556008)(6916009)(66476007)(66946007)(41300700001)(8676002)(8936002)(5660300002)(7116003)(316002)(86362001)(31696002)(4744005)(2906002)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dW56Qm5Mc1dFQ0FSOTF2b3VNcUZjL3dOcllFcUZkSFRCVWcxbkk0RnhvNzFE?=
 =?utf-8?B?ZDZLUitDQ25CZ1BXeUNzdnREeHJUb1cwKzV1dlRFbStwaGpTUmFYeWowUmJH?=
 =?utf-8?B?MExub1pZTUdUY05sWXUzRis0QlcwQytVRDczZUluM0JCRGZhcmJmeEVwTjVx?=
 =?utf-8?B?TmkvWHlHV0x2ZFovUjNzdThWRzhoZURraW5YbmtpYkhYRi9RK2dibEVRYUlk?=
 =?utf-8?B?dEZqTEFvMHVjVWYxVW5RQ0Z2M1pEaTFnNUZjNHo4bmJRelYzbno3L2d6enVu?=
 =?utf-8?B?SEVDQkFLN1dRTWpqaEZUZkpuNUh2bFVOUld2YitUcUFieVBxMlU3RXBDdFlJ?=
 =?utf-8?B?ZGNhSDRpTVFySzJrU3dkYlJUQ20ySjRzMjlONllKYjFGdHhEc0l4YVAvenRC?=
 =?utf-8?B?NjBobEJVLy91MjVZNW5WWjg5c3YvczRmTjF4UDl4eUtuNUNJay9MaG1wRlZD?=
 =?utf-8?B?eHVCOHVEYjRmVW5ycy83Q0YzVllZSkRTbTNmSjdhNHNJZkZWZGJqNVgrMnhw?=
 =?utf-8?B?Nkw4Qzh6MHh0UVJlTnFOdkt2Y0cvMFRLOEhacFRoWkhrYzV0RlVGYUQ2cGV3?=
 =?utf-8?B?YjhyT0hOc0VyNzQ5OW0vYUpLWUtTQ1NHVTBYMjhTVWk4TXJBZk9GTnhDWXhB?=
 =?utf-8?B?cHlsc0pnK0VvWlZtRTM5enpvamZSSlQzTFh5dFpMcktOZjRiSU9DaVk1ekh1?=
 =?utf-8?B?RXVtK3FOUnVaNVo1MGdpNWhPN3BIaG41d3lJSEU5eUxRQUoyMmJzclYwRVVM?=
 =?utf-8?B?VlNtYTVrM3VwNlhPcjVmQzJ3WXRzTk9PbjFGZG9Kbk1zRUkrWEpzL0ZTSFhH?=
 =?utf-8?B?MnFDVk5Ed1Bqd2ZMUHhZaWFINURESG9mM0ZmZFdnZnZJeXo0YkppZ1VUci9p?=
 =?utf-8?B?WDZxMjh3TU1Xc1VzQUpFVGNqK2J6WGhjYjRjSHkxMExwY0R1a25Ec2E3a21t?=
 =?utf-8?B?ZWdVWW8wYUR3cmhzWjRaWW83MUxzUnk2aGpEQng4NHB0ZFBBYlpCRVh2dUU1?=
 =?utf-8?B?OC9lSytaV0dTZlEwZWFXV3Jab1g5SE1QWklqT0trRTJ5RnYwT2RaSzgyamhZ?=
 =?utf-8?B?YjZxcUd0T25OK1ZZbGF4KzMwMXRNUHF6TVNmd0FzTVVwaXY0T0pHRUdJRnFM?=
 =?utf-8?B?QWlQV3d2M0Z0WUNya3VFTzdSVGRhOVBzOFBoM3NYS0UwcWpyTkNTOSs4U2da?=
 =?utf-8?B?QmR6RVUrYkRyL1MzZFB1eVZ2elgza0dsTEZPU244bjBUSnBFcUExMVJ2bmdj?=
 =?utf-8?B?MjRrMFQ5UFNaV0lJdmYwM05GRU40S0lNY1M4ck0vKzRWcGdiZGxib2hrUUJy?=
 =?utf-8?B?R3pVeTJSSVRMeGg3RjBTdjVCNWJJeFlSWGZmcHMrdmVOMXBPWGlyQUZDSzRv?=
 =?utf-8?B?eVFhV1lsSkduOGFOeWRhUXBBYTJrTjI3RkRERi9zN0R0dzVobG1lVlFkNGVV?=
 =?utf-8?B?S3gzK3JqeWlvN0JMcHN5bDIvby9EeGJERThrNjFCZmxZdEhhRytNRTZxYVpE?=
 =?utf-8?B?RzJhS05YSGRpWmVUWll6MmRJSmp3ck9QbHZKUUNyZXYybkwrbEtndjNEbkFD?=
 =?utf-8?B?S1hmSllPZ0VYSWZ5K1dNeGJIMU02ZURYNmpidVdxMzdTZVdCSE42Nk9KV1dH?=
 =?utf-8?B?YXFNZEIyb1B0Z0JJaE9pKzJsdm9JK2ZCZFliRzI4UEFYMzBEN1FUSVFTd2FK?=
 =?utf-8?B?YzRIUmNzSmFhTEhzZVd2SzR4ZDdRU0kxUUFGM0xkc2p1ZXpSbDZBWTlrbEwy?=
 =?utf-8?B?MlB5dUNRTXk1OGtmQWZ4ZWt2c3oyRFdNQXdWZjV2RHdmVU9CNUNUSjlKQ3c5?=
 =?utf-8?B?eWVheXVleFRkaTZSQ29WQWZWbC90S0YxZ2xVMGxhMUdheWRGTDZxb2E3bGpX?=
 =?utf-8?B?NW92M3BXSkV6eE9GbjlhV2QrdUV6OW0zTkd2QzlRZ2NkRVluTXVqRjVid2dU?=
 =?utf-8?B?TEtSL2QzUHRYQ0N2dndreXA5MzMwb21iVmc3anJrMlZ4cU1GdTlPb2p5Z243?=
 =?utf-8?B?UmZjeUlGQ3MzK1ZPRXl2VTA0cDdxQ1d5SGIyUWl1alVOSkFGemU1eUNJQmNN?=
 =?utf-8?B?SHJGallnMDVaaXRXdndNeHF4K0dveDc1c3V5SFM5RXIrM0I0Q1Q0dEtBTzg4?=
 =?utf-8?Q?hMDCie1SWvc05Cy8Bh24qs/Tb?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cf157e7a-1395-4e2e-4420-08db4adf4d18
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 07:31:53.7087
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rb3Urt6N2fVGBlQ9cihH0sP/Eto7BcZa55DdHO15bCMljOAD3XY7bL1NVb4DxxMfei3vozS845f/oBJbvaEa6g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8186

On 02.05.2023 07:48, Olaf Hering wrote:
> The next push to xen.git#staging will trigger a build failure in the refreshed Leap docker image.
> 
> For some reason HAS_CC_CET_IBT will evaluate to true. I think the significant change is the binutils upgrade from 2.37 to 2.39 in November 2022.
> 
> The comment indicates the combination of gcc7 and binutils 2.39 is supposed to evaluate HAS_CC_CET_IBT to false.

How does 2.37 vs 2.39 matter? CET-IBT support is present in gas as of 2.29.
IOW I think it all ought to be tied to gcc being 7.x when 9.x is the
supposed minimum. Did you / could you check which of the three options
(-fcf-protection=branch -mmanual-endbr -mindirect-branch=thunk-extern)
is/are possibly recognized by the (likely also updated) gcc7 there? That
may provide a hint at what's going wrong ...
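One way to answer that question on the box in question is to feed each option through the compiler with -Werror, so an unrecognized option becomes a hard failure (a sketch, assuming a gcc-compatible driver is on PATH; not the exact Kconfig probe):

```shell
#!/bin/sh
# Report which of the three options the installed compiler accepts.
# -Werror turns "unrecognized command-line option" into a failing exit status.
cc="${CC:-gcc}"

cc_accepts() {
    echo 'int main(void){return 0;}' | \
        "$cc" -Werror "$1" -S -x c - -o /dev/null 2>/dev/null
}

for opt in -fcf-protection=branch -mmanual-endbr -mindirect-branch=thunk-extern
do
    if cc_accepts "$opt"; then
        echo "$opt: accepted"
    else
        echo "$opt: rejected"
    fi
done
```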

Jan



From xen-devel-bounces@lists.xenproject.org Tue May 02 07:34:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 07:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528373.821424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkWx-0003vp-V5; Tue, 02 May 2023 07:34:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528373.821424; Tue, 02 May 2023 07:34:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkWx-0003vi-ST; Tue, 02 May 2023 07:34:35 +0000
Received: by outflank-mailman (input) for mailman id 528373;
 Tue, 02 May 2023 07:34:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptkWw-0003vY-BF
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 07:34:34 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0616.outbound.protection.outlook.com
 [2a01:111:f400:fe02::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c81e4d64-e8bb-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 09:34:33 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8967.eurprd04.prod.outlook.com (2603:10a6:10:2e2::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 07:34:32 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 07:34:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c81e4d64-e8bb-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f7l6SR0ZPOs+u3/kb3E0YxSfxQW8inxOhqW6x23N2JC26Lw/DUAOig7CJ0PH+qwSzIhezzm6FhNP4byfIQl/pz4zJ4XrfQ7vIdAueczcRiwYiHlNVKDj9QOlOvDb+thQz45jhdScBYpnNxarEoQiRcAwkX1Hnw4XfQ/mVjib9e19GjUWW1bFK+rK6r9mIbgV+8qq2JbNN7sYdiNgAOlcr8RGY0MFVCyW71f9oU8hVsL6MpFgmbxGTUzV9RIiAgb7s+fXM0KEbPJh7dkDij8CsPWRq+0XQh80xjOFTaztMUr7yVDyQKV35c5TziA9Cnej9vx/Rpyd6+H+rVCpimIOQQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GQnUS/XI/DAbV78nBE3hwMbN1l5XLfaE3ZjQSab4SE0=;
 b=cfJMZsdXRomUyD1lXSGxYqt4y96W7hGV/6QhU2DJA3mKn0Blt2AsMqa/UhdmYlH0/OAIeS5yJzSDRZgrxLjcswlAmnhN3Ohz4A50v6uzMrPseozC/w1C6y5ROsFj/Jf4HIvdWfRh8VdwBjW+V9UTPxn+bSj2PqeuupldlUORARVY17iQizdeMzV9FF/P5tCnvdhzae1ChrCwXgdZTecQyEQQrEv7AIlAfOGxXi/jFkC5AxCSdZm4zVNnP2USB5rnSSlQd303MjGiolbmco+ej9eU78IwLD05XmL0k1AaygmWTX+Ws2lTQqgAXs3V1/bbd3/p418gK7EWVJiJdRq/vg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GQnUS/XI/DAbV78nBE3hwMbN1l5XLfaE3ZjQSab4SE0=;
 b=Uu0nWfQo1KSxFvDXKYFT2OS6/6PA/shbPplEe7ll+SS98GamSgcOIe1cKzi1QuZgLRXCpPg8lnCfp7hPQmwhgciUnr92UZKi9QF2kXYWVORsR2OamEupFdwFvVykvoFYX/vgHGD/hsnHJYwUal8RxgotvyvBoJskuUe0U2H8g+a4lhMWS4bMzsJ0j1Yc+F3NCiMKv10d1/2Gv7nNZOYSigBkMb+oti2H0q6ER0vH0myBPcXOAAfEHwsfPmaMUlVaeaxdxpf0sF6Xzbxh4vm1upWnIf/2LKy70tWrJC0H6LTGLCJlWJPnLTYSJEJ8iTlLrH5elZybhbk4r4F+OSBHRw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d58e0b29-3b00-a59f-ac49-a876359a299d@suse.com>
Date: Tue, 2 May 2023 09:34:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: HAS_CC_CET_IBT misdetected
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
 <43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
In-Reply-To: <43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0159.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8967:EE_
X-MS-Office365-Filtering-Correlation-Id: adb3385d-2178-4675-b128-08db4adfab75
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	yi1RI/idzBCe7/bsuVg100wZc4lc0SbeuhPtzMkIQ6WlT1i0+tUdbfZm++G8WH1uFncxkowySqvS3wGJQWsUJfLDM/C3vppALfPv1IywrvR3bvF78hKgCZev2btDyFceDJ37Pc1qutZ3kdPgwCz2aKFcbGPI2TxMf9551/OfVIHADOfw4Tx60jme7Ubop6CRM3Iz297fKaCH101hrixEBWtUJuyzhsDALwADUlbnCnkndMBdQokpZks3xB4efDahWVSLIkHtHBh7B2twUN14192t7N0WZfBgs+/Vzn46TCroqum4lBGxI0RvetqGNgOcfswHLB28z+CfgfA4u9JR7VMyALzUzAFquGRL4ERW9D/89/ibI0yKAzFWYuFxTQ7y9//Zs1/T3Pv1IFLoHn0zpt9WXBbcUhTrL4S7iWXHML4HflIJPd66FNk6nt06ZMaiwJsL6JtTTU8fv9hq2P7SeD371d2WabyD09ek9FFf6bCPMBIsXJbXlUC6nP6K+hWEtV5Z3HEtWAZ6cGwfR+LHz3Sj56tJGyL6LI6A+bZSttRDTyPBv3ZDawPynVmVHZbUe9Lg/9I5wseylGH5mSl9IQcoSdR99wwDooqzA5mLYPFQ99VqS3lmqwdxxhM/xMhGXv5Z/MEo5NufVXmho6c8vA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39850400004)(346002)(136003)(366004)(396003)(376002)(451199021)(2906002)(6506007)(6512007)(26005)(53546011)(186003)(38100700002)(86362001)(31696002)(2616005)(36756003)(83380400001)(478600001)(5660300002)(316002)(6486002)(41300700001)(4326008)(6916009)(66476007)(66556008)(8936002)(8676002)(66946007)(31686004)(7116003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?R0I1Y0dOYlFnSkRXZmJ4ZlN4L0VPS1RZdERnSndqdlN0ekxzSVhpbVJpdGZI?=
 =?utf-8?B?dm5yL1pvbjd4WkxqSWdPZm9jUzZoMERqMW1uWG82TFlvVnkwQ0t1VDV3eHhI?=
 =?utf-8?B?OUpZanZzMjhUd0UyU1NxYzFReWVFanlGYXFVQk9yR1JTcENSMkJUZ1FBMHEv?=
 =?utf-8?B?dy9Ya0VnOS9KZUFDNnZHSzJkTTNDdEhKeHRuU0hhcmF1TjlDYlhLbnFwQWxB?=
 =?utf-8?B?NzZLWERWZlFjaFpIc24xb0JscVF1TXFZR1Zob0VjNVF6d054S1lhcTRmaVgv?=
 =?utf-8?B?aXcwSUp2aFNNVnpGNU1vTmxPYmE0WkZHWGk2WmhCK1RpdlRwK2xoeUdKOVhT?=
 =?utf-8?B?MHBUTzNIUHdQWTREeTFxRnM5cDdvQ2ZXZ1Zhb1dqY3FKR1Y0RUxwN3VIeFov?=
 =?utf-8?B?R2RweVlyUE54UWJQUWNiYXRTUUN1TmMxTXhlbCsvUzlMWkZKbUExdXF0WjBZ?=
 =?utf-8?B?WHdaQUtvb2F0VWVUVHZURG9EV3NMMGVyOU5FNS9OV3RWM09reXZlV3RraVY2?=
 =?utf-8?B?SGtkTEpnNW5vZWhtZm1wY2pKQUQ3RHR3Y2tKOEJsUURsTjFXU3NhWjNzM2pi?=
 =?utf-8?B?RkZxbUw0TlA0NmNVZU41VGU5QUllRjQzZnVLeElqMWNFcHFIU2puTUl0M0hG?=
 =?utf-8?B?MWQrOFp1RFI3QTdYUHkxekdTS0wxNTAxWGNsQmcreUNlMDlBek5ES3lRTlpa?=
 =?utf-8?B?V1FpU1VLMDJ3REUxaWJQK0RKM1l0aHRxUVNKMUdCSHpNUTR0eWR6MXJneG9u?=
 =?utf-8?B?cExNYktvRklTdGYxSUxjV0VRYUh2aGI1WiswZmZ2UzhYRkd0bHRjSXE3UXYw?=
 =?utf-8?B?K0xIVE9rUUhiNjE0UUxYcG9kNjVMT1pSYjlORDJubTQxZzQvYWFJTHlvVDJD?=
 =?utf-8?B?eHNWOVdUWGNXUUJ6ckZkdkhTcEE4dHdPem8zclZQYmY5Z3Y1QSsweGVGVGV4?=
 =?utf-8?B?Qlg1cWlnVExvNXRVMWJkaEd0QU9SeWtvMFRQdU9OVVZiM2RqSFQ1N2NHRldl?=
 =?utf-8?B?RFVRdkxVc00reFArR0NyRWczeE1yeWFGTDBNUzZRazJOWitKWUh0VHVkbDli?=
 =?utf-8?B?OEhYVDJRTGVybFBEL2JSZ0JIUERPbmZ3VVR5bTlDak5KaHJPamxMc3oyeVZJ?=
 =?utf-8?B?a25mNFlkYis1c2tialdPM3BNVXh1dzMxSFM4YjVabkFlWlFxYVNGTldlSVps?=
 =?utf-8?B?a2Z3UzBNNW4wSU9XOHZuQWdpcjRmTjJJbDRqYkpqajk3TFAvVG5PUTAxdkRm?=
 =?utf-8?B?cVFtbE8walY2MWZSQ1QzN3BSTW8ydmtWZE5zWTR2T1VXUGtMVnIvMUF4K1Jz?=
 =?utf-8?B?emZIcHpQT3ZOYlNjQ3ZKZ0lzdVp5KzNRajRqckdLUTFwb2dQL3BuOWdQSFlZ?=
 =?utf-8?B?TWN3MG5QQ3lvSGdnelF1elR1d1hBVVlqNFFtS3QreHBFRW8xSUd2K3VJVkJH?=
 =?utf-8?B?cTN0MXcxaUlGQ3Y5R3p0RUR2ajYzMHZpeXY0eGxxdEowYmRwcGZrU0orNXQ2?=
 =?utf-8?B?ZTZMTEZyOE9POE1uZGZicmExWnpldjFxbGNaQnBPOGxJWE5yTE5jL2ppT0Jh?=
 =?utf-8?B?SGJiVjdxLzlrQWhFTXNtcldRMCtaSDJrSGU4eUJ3VkdsNGxOKzlFWE9PS1Jw?=
 =?utf-8?B?ZjJSVUgxVDY4ek9IT0ZHMy9Mb1pFWURzS01PVnNrRkZBak9FN3ZtTVNuS01L?=
 =?utf-8?B?Z3JpNlRNWWVSdTRrQXBoeTBoNThhMGNYK2lPNS93WmVCN2FUdkFqeTZUUk1i?=
 =?utf-8?B?QzVuYWZjakZJU3pWRmtKN0VQZTdIZG1uUXpDZFI2Z3FJMi8wYS9ieFhtTmVs?=
 =?utf-8?B?RTVBN29tTDZpSVNEZjgwTGRDN0FrcjVseHpmUDNhN0xhLzdCVjdLSUpZRWFK?=
 =?utf-8?B?SVVzTmZDWnJtRjhSTUhkL2RCQi9QKy9SYmRMWW81MVFObmdaa0p6OVBWTW5t?=
 =?utf-8?B?NlJuM1ZqK1NlSUlOQ3p4T0w5TjcycFdwUlFoSjFhT0k0QnpsNG9ZbWJtNnk3?=
 =?utf-8?B?SDRKQ1ZJTXNNQmJ1UGgyb01aNkNMbGZuTTg4NHM3OTFtZWxBK3NQR2s0OXBo?=
 =?utf-8?B?VFlhUyttU0hpRjExNlNIWWNwZE5CTG9NdmNPTXdtS0JOR3lzanNtRmZwQTF2?=
 =?utf-8?Q?T+IvlYpjtIr8wlIT/ENTaX2dI?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: adb3385d-2178-4675-b128-08db4adfab75
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 07:34:31.9167
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IbVXx5nVin7kT0/7bjnAxTDJuU7syFwqzyHEA0TZONLcQDN9iVp4JI3tH6Z6KAw6ki4GjPFRDp4sEGV68Cq4kA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8967

On 02.05.2023 09:31, Jan Beulich wrote:
> On 02.05.2023 07:48, Olaf Hering wrote:
>> The next push to xen.git#staging will trigger a build failure in the refreshed Leap docker image.
>>
>> For some reason HAS_CC_CET_IBT will evaluate to true. I think the significant change is the binutils upgrade from 2.37 to 2.39 in November 2022.
>>
>> The comment indicates the combination of gcc7 and binutils 2.39 is supposed to evaluate HAS_CC_CET_IBT to false.
> 
> How does 2.37 vs 2.39 matter? CET-IBT support is present in gas as of 2.29.
> IOW I think it all ought to be tied to gcc being 7.x when 9.x is the
> supposed minimum. Did you / could you check which of the three options
> (-fcf-protection=branch -mmanual-endbr -mindirect-branch=thunk-extern)
> is/are possibly recognized by the (likely also updated) gcc7 there? That
> may provide a hint at what's going wrong ...

Oh, it might further be relevant that Kconfig's cc-option passes -E to the
compiler, yet none of the options actually affect pre-processing (and hence
they might not be evaluated at all when only -E is in effect).

Jan
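
For reference, the probe under discussion is a Kconfig cc-option test of
roughly this shape (a sketch from memory; the exact option list and comment
live in xen/arch/x86/Kconfig, so check the tree for the precise wording):

```kconfig
# Sketch only: the cc-option probe whose outcome is being discussed.
# cc-option invokes the compiler with -E, so options that only affect
# code generation may be accepted even where codegen would later fail.
config HAS_CC_CET_IBT
	# Retpoline-capable gcc is also expected, hence the last option.
	def_bool $(cc-option,-fcf-protection=branch -mmanual-endbr -mindirect-branch=thunk-extern)
```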


From xen-devel-bounces@lists.xenproject.org Tue May 02 07:44:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 07:44:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528376.821435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkgL-0005Rb-Si; Tue, 02 May 2023 07:44:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528376.821435; Tue, 02 May 2023 07:44:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptkgL-0005RU-PK; Tue, 02 May 2023 07:44:17 +0000
Received: by outflank-mailman (input) for mailman id 528376;
 Tue, 02 May 2023 07:44:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptkgK-0005RO-8v
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 07:44:16 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2062e.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 22f6fece-e8bd-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 09:44:15 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8325.eurprd04.prod.outlook.com (2603:10a6:20b:3f6::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 07:44:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 07:44:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22f6fece-e8bd-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=X0MkeRvdBlHcdeXBcm35bwRiLFQPmolCJds6+9mWx3r1nXKu/kRdd90R1j3eENd8HrOqILcnVkUKeneMOFnSwlsDaI0CJcViI9iLwzTGMvb0E9GfmU+aRhFq6VM15RfUZSR5+FH/iAbrocdQPGrsJyIy9DgvhjeG1/0AnOCImfMKyTPkkuoQsc1waUWNy7/3WPgcuh6fxtAj2+1sT7UbXCEteYSwx3pQBdgBknupc6fiZv0qwGtOlZlf5MYJgyJ/8nMgknfbjrsyBPXJ0P7jRFwR+mNtgF9jFmBgMaEJIkrDy4cLr9X/WiJh8QU4PGnuJL8JiiEXe9yNntolXLpxHQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IZhG623XH/gq2heft254aHJTyUmLp/RiNpLTQb7PQSA=;
 b=QNr1C77pYQ1NecvL8LjmdFvoBlb65WbXRgIIktTvGLJah9K/zuzg8OoICWB3FtuLzGc5OINwIXDl1JydzToifnoO0b5EZIlKZvqYTIqhJaxNM8ijnJJyFDY0vSsVcsdJ+QuF9vkJ8pAgJGebQM/W9OYMxiVmmhrHvBHyAGdy6gkOV/23vuuuwl8Xx9an7Qx40nCT1iYSMT13honr5tJ85JqgYw0VT/R7jpZpOelTUwpyTtNdttPlSCoNB/toA/LH0Jt226TnIN2I9IGvvg5vzFwomfK7gOP43/ObTjy/cBPnshzDhnciLQxn4QXty+RXvEh5nkAUK/XkHKSK92zyOw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IZhG623XH/gq2heft254aHJTyUmLp/RiNpLTQb7PQSA=;
 b=JJQAdrVqcfrqN7+eW+q1MJ3sTvKNFz2/krgCg1INt+R8YHP59n1F6p+BdXB8Z+MGC+ImKybsfiIuP7Vg/IAYwIFlc01J8Oso+JULQy43eQG3XVxEywAZHVZWBKGQguFGmbtVs/d7SU93egXBYg/wp/AV7on/Pskti4d4Bo+ZjmciAqpIzLL1PDZjTuWH8CBB8Wu4gXrBbrMkD8B+4c8HLfGoMeLkI7oPgWC4vKoHBkO8q51KgV2JKxSdyMWkplc+dMqPhgALJsLDMQ97f//CVDVMU6tYrHZDLWV1z8tj1bOHKY8hubcMN/I2CWhYG75ingM8UQ9sStr/oRnQKGjecw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0e7404fe-e249-7b3f-0628-b8b8b1925765@suse.com>
Date: Tue, 2 May 2023 09:44:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v1 3/6] iommu/arm: Introduce iommu_add_dt_pci_device API
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-4-stewart.hildebrand@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501200305.168058-4-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0185.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 01.05.2023 22:03, Stewart Hildebrand wrote:
> @@ -228,6 +229,9 @@ int iommu_release_dt_devices(struct domain *d);
>   *      (IOMMU is not enabled/present or device is not connected to it).
>   */
>  int iommu_add_dt_device(struct dt_device_node *np);
> +#ifdef CONFIG_HAS_PCI
> +int iommu_add_dt_pci_device(uint8_t devfn, struct pci_dev *pdev);
> +#endif

Why the first parameter? Doesn't the 2nd one describe the device in full?
If this is about phantom devices, then I'd expect the function to take
care of those (like iommu_add_device() does), rather than its caller.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 07:50:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 07:50:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528379.821445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptklw-0006eL-I1; Tue, 02 May 2023 07:50:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528379.821445; Tue, 02 May 2023 07:50:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptklw-0006df-Dn; Tue, 02 May 2023 07:50:04 +0000
Received: by outflank-mailman (input) for mailman id 528379;
 Tue, 02 May 2023 07:50:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptklv-0006KK-9K
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 07:50:03 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f1c08b63-e8bd-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 09:50:02 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8377.eurprd04.prod.outlook.com (2603:10a6:10:25c::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 07:50:00 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 07:50:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1c08b63-e8bd-11ed-b225-6b7b168915f2
Message-ID: <ced11c6e-caf7-3a19-92c8-5c11b18952b6@suse.com>
Date: Tue, 2 May 2023 09:50:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of
 arch hook
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-5-stewart.hildebrand@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501200305.168058-5-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0164.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 01.05.2023 22:03, Stewart Hildebrand wrote:
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -1305,7 +1305,7 @@ __initcall(setup_dump_pcidevs);
>  
>  static int iommu_add_device(struct pci_dev *pdev)
>  {
> -    const struct domain_iommu *hd;
> +    const struct domain_iommu *hd __maybe_unused;
>      int rc;
>      unsigned int devfn = pdev->devfn;
>  
> @@ -1318,17 +1318,30 @@ static int iommu_add_device(struct pci_dev *pdev)
>      if ( !is_iommu_enabled(pdev->domain) )
>          return 0;
>  
> +#ifdef CONFIG_HAS_DEVICE_TREE
> +    rc = iommu_add_dt_pci_device(devfn, pdev);
> +#else
>      rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
> -    if ( rc || !pdev->phantom_stride )
> +#endif
> +    if ( rc < 0 || !pdev->phantom_stride )
> +    {
> +        if ( rc < 0 )
> +            printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
> +                   &pdev->sbdf, rc);
>          return rc;
> +    }
>  
>      for ( ; ; )
>      {
>          devfn += pdev->phantom_stride;
>          if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
>              return 0;
> +#ifdef CONFIG_HAS_DEVICE_TREE
> +        rc = iommu_add_dt_pci_device(devfn, pdev);
> +#else
>          rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
> -        if ( rc )
> +#endif
> +        if ( rc < 0 )
>              printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>                     &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
>      }

Such #ifdef-ary may be okay at the call site(s), but replacing a per-IOMMU
hook with a system-wide DT function here looks wrong to me. The != 0 => < 0
changes would also want clarifying, after suitable auditing, that they are
indeed no functional change for existing code.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 08:39:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 08:39:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528389.821455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptlXA-0003fK-LF; Tue, 02 May 2023 08:38:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528389.821455; Tue, 02 May 2023 08:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptlXA-0003fD-IF; Tue, 02 May 2023 08:38:52 +0000
Received: by outflank-mailman (input) for mailman id 528389;
 Tue, 02 May 2023 08:38:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptlX9-0003f7-5K
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 08:38:51 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20613.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::613])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c25d6db5-e8c4-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 10:38:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DUZPR04MB9981.eurprd04.prod.outlook.com (2603:10a6:10:4d8::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 08:38:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 08:38:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c25d6db5-e8c4-11ed-b225-6b7b168915f2
Message-ID: <11eb3319-d03b-ef5a-7006-49f0cbb2826a@suse.com>
Date: Tue, 2 May 2023 10:38:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] xen/grant-table: Properly acquire the vCPU maptrack
 freelist lock
Content-Language: en-US
To: Ruben Hakobyan <hakor@amazon.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230428102633.86473-1-hakor@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230428102633.86473-1-hakor@amazon.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0192.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DUZPR04MB9981

On 28.04.2023 12:26, Ruben Hakobyan wrote:
> Introduced as part of XSA-228, the maptrack_freelist_lock is meant to
> protect all accesses to entries in the vCPU freelist as well as the
> head and tail pointers.
> 
> However, this principle is violated twice in get_maptrack_handle(),
> where the tail pointer is directly accessed without taking the lock.
> The first occurrence is when stealing an extra entry for the tail
> pointer, and the second occurrence is when directly setting the tail of
> an empty freelist after allocating its first page.

I don't read the documentation we have as saying this is a violation (the
lock still fully protects the list; it's just that, as you say below, in
two cases taking the lock isn't necessary to achieve that goal), and iirc
the relaxation was done quite deliberately. Did you, as an alternative,
consider making the doc more explicit?

> Make sure to correctly acquire the freelist lock before accessing and
> modifying the tail pointer to fully comply with XSA-228.
> 
> It should be noted that with the current setup, it is not possible for
> these accesses to race with anything. However, it is still important
> to correctly take the lock here to avoid any future possible races. For
> example, a race could be possible with put_maptrack_handle() if the
> maptrack code is modified to allow vCPU freelists to temporarily
> include handles not directly assigned to them in the maptrack.
> 
> Note that the tail and head pointers can still be accessed without
> taking the lock when initialising the freelist in grant_table_init_vcpu()
> as concurrent access will not be possible here.

This "no concurrent accesses" is the justification also for at least the
"set tail directly ..." case, aiui.

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -660,23 +660,27 @@ get_maptrack_handle(
>      if ( !new_mt )
>      {
>          spin_unlock(&lgt->maptrack_lock);
> +        handle = steal_maptrack_handle(lgt, curr);
> +        if ( handle == INVALID_MAPTRACK_HANDLE )
> +            return handle;
> +
> +        spin_lock(&curr->maptrack_freelist_lock);
> +        if ( curr->maptrack_tail != MAPTRACK_TAIL )
> +        {
> +            spin_unlock(&curr->maptrack_freelist_lock);
> +            return handle;
> +        }
>  
>          /*
>           * Uninitialized free list? Steal an extra entry for the tail
>           * sentinel.
>           */
> -        if ( curr->maptrack_tail == MAPTRACK_TAIL )
> -        {
> -            handle = steal_maptrack_handle(lgt, curr);
> -            if ( handle == INVALID_MAPTRACK_HANDLE )
> -                return handle;
> -            spin_lock(&curr->maptrack_freelist_lock);
> -            maptrack_entry(lgt, handle).ref = MAPTRACK_TAIL;
> -            curr->maptrack_tail = handle;
> -            if ( curr->maptrack_head == MAPTRACK_TAIL )
> -                curr->maptrack_head = handle;
> -            spin_unlock(&curr->maptrack_freelist_lock);
> -        }
> +        maptrack_entry(lgt, handle).ref = MAPTRACK_TAIL;
> +        curr->maptrack_tail = handle;
> +        if ( curr->maptrack_head == MAPTRACK_TAIL )
> +            curr->maptrack_head = handle;
> +        spin_unlock(&curr->maptrack_freelist_lock);
> +
>          return steal_maptrack_handle(lgt, curr);
>      }

While this transformation looks to provide the intended guarantees (though
the comment would then need some re-wording), ...

> @@ -696,8 +700,10 @@ get_maptrack_handle(
>      }
>  
>      /* Set tail directly if this is the first page for the local vCPU. */
> +    spin_lock(&curr->maptrack_freelist_lock);
>      if ( curr->maptrack_tail == MAPTRACK_TAIL )
>          curr->maptrack_tail = handle + MAPTRACK_PER_PAGE - 1;
> +    spin_unlock(&curr->maptrack_freelist_lock);
>  
>      lgt->maptrack[nr_maptrack_frames(lgt)] = new_mt;
>      smp_wmb();

... I don't think this change does: It now leaves the free list in a
transiently corrupt state until the freelist-lock is taken again a few
lines down and the list is further manipulated. Ftaod it's not the state
itself that I'm worried about - that remains as before - but the
supposed implication of the list now always being in consistent/correct
state when having successfully acquired the lock. (Iirc one of the goals
with omitting the lock here was to avoid lock nesting, even if this
nesting arrangement is permitted. Plus of course said observation that
acquiring the lock once here and then once again later in the function
doesn't really help. I don't recall though whether there was a reason
why setting the tail needs to be done right here. Yet if it was to be
moved into the locked region further down, it would need to be
clarified that such movement is correct/legitimate.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 09:23:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 09:23:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/2] x86: init improvements
Date: Tue,  2 May 2023 11:22:22 +0200
Message-Id: <20230502092224.52265-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hello,

The following series contains two minor improvements for early boot: the
first is an alignment check when building the initial page tables, the
second a consistency fix for the GDT used by the BSP for the trampoline
code.

Both are a result of some debugging work done on a system with broken
firmware that resulted in the Xen text not being loaded at a 2Mb-aligned
address.  This resulted in corrupted page tables that would manifest as
the ljmp from compatibility mode in trampoline_protmode_entry causing a
triple fault, due to the GDT being located in the Xen text section and
the page table entry for that address being corrupt because Xen was not
loaded at a 2Mb boundary.

The aim of the series (especially the first patch) is not to allow
booting on such broken firmware, but to print an error message instead
of causing a triple fault.

Thanks, Roger.

Roger Pau Monne (2):
  x86/head: check base address alignment
  x86/trampoline: load the GDT located in the trampoline page

 xen/arch/x86/boot/head.S       | 9 +++++++++
 xen/arch/x86/boot/trampoline.S | 6 ++++++
 2 files changed, 15 insertions(+)

-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue May 02 09:23:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 09:23:00 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] x86/head: check base address alignment
Date: Tue,  2 May 2023 11:22:23 +0200
Message-Id: <20230502092224.52265-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230502092224.52265-1-roger.pau@citrix.com>
References: <20230502092224.52265-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1797c685-169a-432c-c6e9-08db4aeec892
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 09:22:43.3618
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9fs8zdPV0c8QrTHn4LxWf/ivDerT69E0d5JZYH65023SADAyWTdXBt6Anf6saefWdvVxgrrzfFuwFl6CWs9Q5g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6348

Ensure that the base address is 2M aligned, or else the page table
entries created would be corrupt as reserved bits on the PDE end up
set.

We have found broken firmware where the loader would end up loading
Xen at a non-2M-aligned region, and that caused a triple fault that
was very difficult to debug.

If the alignment is not as required by the page tables, print an
error message and stop the boot.

The check could be performed earlier, but so far the alignment is
only required by the page tables, and hence it feels more natural for
the check to live next to the piece of code that requires it.

Note that when booted as an EFI application from the PE entry point,
the alignment check is already performed by
efi_arch_load_addr_check(), and hence there's no need to add another
check at the point where page tables get built in
efi_arch_memory_setup().

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/boot/head.S | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 0fb7dd3029f2..ff73c1d274c4 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -121,6 +121,7 @@ multiboot2_header:
 .Lbad_ldr_nst: .asciz "ERR: EFI SystemTable is not provided by bootloader!"
 .Lbad_ldr_nih: .asciz "ERR: EFI ImageHandle is not provided by bootloader!"
 .Lbad_efi_msg: .asciz "ERR: EFI IA-32 platforms are not supported!"
+.Lbad_alg_msg: .asciz "ERR: Xen must be loaded at a 2MB boundary!"
 
         .section .init.data, "aw", @progbits
         .align 4
@@ -146,6 +147,9 @@ bad_cpu:
 not_multiboot:
         add     $sym_offs(.Lbad_ldr_msg),%esi   # Error message
         jmp     .Lget_vtb
+not_aligned:
+        add     $sym_offs(.Lbad_alg_msg),%esi   # Error message
+        jmp     .Lget_vtb
 .Lmb2_no_st:
         /*
          * Here we are on EFI platform. vga_text_buffer was zapped earlier
@@ -670,6 +674,11 @@ trampoline_setup:
         cmp     %edi, %eax
         jb      1b
 
+        /* Check that the image base is aligned. */
+        lea     sym_esi(_start), %eax
+        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
+        jnz     not_aligned
+
         /* Map Xen into the higher mappings using 2M superpages. */
         lea     _PAGE_PSE + PAGE_HYPERVISOR_RWX + sym_esi(_start), %eax
         mov     $sym_offs(_start),   %ecx   /* %eax = PTE to write ^      */
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue May 02 09:23:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 09:23:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528393.821474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmDp-0000jn-7x; Tue, 02 May 2023 09:22:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528393.821474; Tue, 02 May 2023 09:22:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmDp-0000jg-54; Tue, 02 May 2023 09:22:57 +0000
Received: by outflank-mailman (input) for mailman id 528393;
 Tue, 02 May 2023 09:22:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptmDn-0000TZ-RB
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 09:22:55 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e9e39fd0-e8ca-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 11:22:53 +0200 (CEST)
Received: from mail-dm6nam12lp2177.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 05:22:51 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by MW4PR03MB6348.namprd03.prod.outlook.com (2603:10b6:303:11f::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 09:22:49 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 09:22:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9e39fd0-e8ca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683019374;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=Dq+sSd8VpqgctaVXXnnolVpe/f4WOYP+g9T1j8Xl514=;
  b=W2nrzB5pAIN47qGqnNLNLnNTSDa9GCsk0UJe8bNHY3J3Pqt8pnNAFo3w
   cE9hylybGuAuICPVq8YupCwc3A28rt+u7Dsd34VSRQ6iejmKMyua05Gov
   2ubR1lL6yBTw9iXvCgoP2ahOn6Wot5sGZjLpZ0l75KfPauYMZlqKvJpi6
   Q=;
X-IronPort-RemoteIP: 104.47.59.177
X-IronPort-MID: 107567454
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:f7vg4Kx9xmwsU1jbvkF6t+cRxyrEfRIJ4+MujC+fZmUNrF6WrkUCy
 WQXXTuOO/rbYGr9L4wnO43j8k4DsJWHndZgHVBpqyAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRjPaoT5TcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KVB8x
 aY8BAoRVDPd2uW6+ZKAFthvj+12eaEHPKtH0p1h5RfwKK98BLzmHeDN79Ie2yosjMdTG/qYf
 9AedTdkcBXHZVtIJ0sTD5U92uyvgxETcRUB8A7T+fVxvjmVlVMuuFTuGIO9ltiiX8Jak1zev
 mvb12/4HgsbJJqUzj/tHneE37eQwH2kBN9OfFG+3sRMjWCT7VcYMyYtUUrliteor2iQC90Kf
 iT4/QJr98De7neDXtT7GhG1vnOAlhodQMZLVf037hmXzajZ6BrfAXILJhZDYtE7sM49RRQxy
 0SE2djuAFRHr7m9WX+bsLCOoluP1TM9KGYDYWoISFUD6ty6+oUr1EuQEZBkDbK/icDzFXfo2
 TeWoSMihrIVy8kWy6G8+lOBiDWpznTUcjMICszsdjrNxmtEiESNPeRENXCzAS58Ebuk
IronPort-HdrOrdr: A9a23:NkiOOa9R0ApEaksDvItuk+FZdb1zdoMgy1knxilNoENuH/Bwxv
 rFoB1E73TJYW4qKQodcdDpAtjifZquz+8O3WBxB8boYOCCggeVxe5ZnOzfKlHbehEWs9QtrZ
 uIEJIOQuEYb2IK6/oSiTPQe7lP/DDEytHQuQ609QYOcegeUdAF0+4PMHf/LqQZfml7LKt8MK
 DZyttMpjKmd3hSRN+8HGM5U+/KoMCOvI76YDYdbiRXpDWmvHeN0vrXAhKY1hARX3dk2rE561
 XIlAT/++GKr+y78BnBzGXehq4m1+cJi+EzSvBkuPJlagkEuTzYJ7iJnIfy/gzdldvfqWrCVu
 O85ivIcf4Dr085NVvF3CcFkzOQrArGrUWShGNwyEGT3/AQSF8BerV8rJMcfR3D50U6utZglK
 pNwmKCrpJSSQjNhSLn+rHzJmdXf2eP0A4feNQo/gtieJpbbKUUoZ0U/UtTHptFFCXm6Jo/GO
 0rCM3H/v5ZfV6Tcnic5wBUsZSRd2V2Gg3DTlkJu8ST3TQTlHdlz1EAzMhamnsb7poyR5RN+u
 yBOKV1k7NFSNMQcMtGda48aNryDnaITQPHMWqUL1iiHKYbO2jVo5qy+7kx7PHCQu188HLzou
 WybLp1jx9AR6u1M7z+4HRiyGG8fFmA
X-Talos-CUID: =?us-ascii?q?9a23=3Arw0tj2uHNhCcCrv4d4cglXzI6IsUQE2H6FvqP3W?=
 =?us-ascii?q?bVztuRIS1cgOh+6pNxp8=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3AXf9Pbwzglp5q9XzAmyHDDFr2rBCaqJ6IBX0Nuq4?=
 =?us-ascii?q?NgPK/MT0uEBmMqRKZEoByfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,243,1677560400"; 
   d="scan'208";a="107567454"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fyKmkPZc0R2sVFoAByrKZWef/NHL6WziTmFrL/7Uh6BcznhYMwwhrQwMGpXmHgumSCyeV1K2RB+1AGRO9wP3+tgQ+gqRO0VJV8ZOLwT7kCRakQfA/7VJ3Nw9rvwFdd429o8qj/OPtNQo3QC+x9q07ulQHK9Z/PYUp9s5L7JS4ZTz93HaGrEwGzT9lE9S9VDrEPmJy325IzRjU3KVsM9/5lryUbG6w4X06EWTFtm4+UDVWaOYGmDfkmJHaQL4ic0VUlwDQCyCUmKBcYcGjHXFKeEAPDl53ijinV2ja8VK/kbTFMelrp84IuESTWrHe72dxEY7lnk9IgXna8DjLQd9sw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TUjBzIIMR2wZs7nCQYlISDZS3wMRy1R7AX1/v57R7+w=;
 b=LK2zt9zsprfon7bdwxRH7KUx8gVbcoWguVDkDaotou/nzoySMbLoxJRTCrOI41ZVa/KMF9Q150kke9mdXtxumiWHRy9+l8H3o8wUHf8rWdNQEmFDX3HH7L++91woP9RUzXLb819R0jRP2l2WuAiocN/QFshDDDt9l8FNkh5efaYs5IEXaOzGhqRyj6FbII2VKThJ0SansSxoViX4MJpuEkeowTmothcACIoqlNheh05sJjfKDph9TJLx81YWmrQ6wK2LTKMOdpMAH8qjO4QJWy27kFLD3JtiaT83zJsUpa2pQpU81AMuWzm9ew7tOEzAaJI3WkToioija65aPTSBCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TUjBzIIMR2wZs7nCQYlISDZS3wMRy1R7AX1/v57R7+w=;
 b=SF469FBt6Wkq80xe8f2qEoH3EkBcPH0ldLrZdEJEvQ3Kita1OzJOMRPaoLNHgIHbPIBi80w+s+D3iivU3fNywPEA+MM1hyQtEbZQea4a8ZDoGU5sCRpwzjZ2D6yi+mSezSx4nyzw26PhD517BBQQaofdCL75jpC3nHEFrDRizic=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/2] x86/trampoline: load the GDT located in the trampoline page
Date: Tue,  2 May 2023 11:22:24 +0200
Message-Id: <20230502092224.52265-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230502092224.52265-1-roger.pau@citrix.com>
References: <20230502092224.52265-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0101.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2bc::10) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|MW4PR03MB6348:EE_
X-MS-Office365-Filtering-Correlation-Id: e4f46dc4-4c26-4918-231e-08db4aeecc26
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	S9TCLLVijCkiVllTEdNlD6+0xUl3mtMSWMstH+dvQ5/6D8E4pH0RWSBvl4zO2kZphsLwgz/VpwNLHDNMg+UL3QXaHs+7cCljCnBPSHEVJpVDvO6ipmJjTAqkQBl6j2pPgO3MMdL5kMhyiFLC/yYtmk1aHKaSJqgES4AnFDMH6VE68M5VtoqNDm9sOmOPQnX5vfDKog6vdmlqc4YCUXJiRjczRzjvq+iOn7/KfrLu3RoOzWjk+G1d7GtHqXD8lxwJKSEUTcEcZ8L3Unwun7/LFc8EAxRHJ+ZXeACcgI6N6UBDWhPq4KenjDfBlY71o9O8lMfiuouoxKX8NnAwjXy69egSC1KJb/fTSyhMIqPtt/O+iQN/dLmo9kRkZRuM5NKj2e1NxX+BH5K7yI//FID+XJE5jYC0hqcQpx+T2hZk4kDCVQVT0G1mLWrwgsuKELiAcPl26+/GZOquW62dQNTSqo6WRysGvRMSgMn/CTW9tLvV3eI3i72zAghMkwoit0O9UZyK8FAFoOgw8NNRh+ARGO4Ei8KZfQDSkeD0IAPt/gKv+iGe4Pgq1Nn8NGZZXOgm
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(376002)(396003)(136003)(366004)(451199021)(26005)(6486002)(6512007)(6506007)(1076003)(6666004)(186003)(86362001)(54906003)(36756003)(478600001)(2616005)(82960400001)(66946007)(316002)(38100700002)(41300700001)(6916009)(66476007)(4326008)(66556008)(8676002)(8936002)(5660300002)(2906002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cjN2L3BzNlh2YnNvRERsWS96L3NSLzZaR25haFlpYTBmZXQvSnp0QlFFTVp4?=
 =?utf-8?B?RUtJZVhaU0dtdkJZZ3Y5VjlVOFlWa2VIQ1JRUEREbVBESWgyd0NDeUF2YklE?=
 =?utf-8?B?WVc2MGoyM3hjZnV1VS9YZ0pNUjBaOEhhNGxNQTF1VmRhYVRjdVdlZXE4L2VY?=
 =?utf-8?B?TGIrQWc0RFVGcVNsdE5XdjE0YzhuVUQyZnZ0K2t4akFUV2lYMmJZSkoyc2hX?=
 =?utf-8?B?VTBlSTAxazgzRExPekVSWXBMR2JJd3Q4NnB1Q1BPSU44eHlENTVla0gzd0NQ?=
 =?utf-8?B?UTRZeTVXNFhxWTFCQUNLNW54bkdpSkFrUnFvek1NSk4yb2Ztbkp0RXVyRDRX?=
 =?utf-8?B?SjIvWkozdHZZYkU2VmlRei9ScEtkUkhXc2JFazlVZ3BRVExwWElIU0tqQXZW?=
 =?utf-8?B?UWM2a1BtditSKzhSR0l3eEswOXhoUWNGUnF3RGt3UVo3S2Rad1Byem5PTUZO?=
 =?utf-8?B?elk4ZmdzZTNvR2Q1Z3lYTUJSalpBSW5JWEhQSEtUeWxlS0N5R0h0Q0hPcTB2?=
 =?utf-8?B?UE9Edk1QVVpuZHpwa0poK1c3QmtZeXZGekl1OThJb3VBTWhadmx3ZSs4ZzZW?=
 =?utf-8?B?bDNTZndlNFQwS200R3NlQ0N0Um9lbFZMaVNVdHpYRkZYM1ExcTVvMjZxM2NI?=
 =?utf-8?B?SGxsNEJSSmdwV2tSTnpvdWVzV2lpcGw3Rm9yUWcwYlQyOHNTNlhWU1Bpb0Jy?=
 =?utf-8?B?UkFuWFBMSmQ1RHdpVWxQNDBxdWpCNTQrL1VSNlYweDBZSUJFSFRpSjYrNkZI?=
 =?utf-8?B?YXJ4M3JkQWV3MVk2ZHlTclZIYTJtMkhYZG9IZExXaWJORmFqcmphV2tjSW1h?=
 =?utf-8?B?ZnViWW5jRUlLaHNzU1NxT1p1dVhhRWxpYnJLazhCZGNXejc3OWREZk5tYUhW?=
 =?utf-8?B?QjhwSk1DeXI2ZGJuN2FYVm1CMkdnclp4OW5oclFzaHoyUjB3OVVvd1YwNVJH?=
 =?utf-8?B?M0xBekFRMlVhb1puR0p2YiszYUFEL1h4RzJwM2hrY3crTGdUc2ZhQ3ZKWlFY?=
 =?utf-8?B?THBDSUM3RlBERVA5bzcrREtCQlc5MEZucUp6YTRBbXZhSEY3VE9HRWxhemRx?=
 =?utf-8?B?ZnJmWTRUQ0NEN0xMUnplVDlEUlhCeUQ5UzRpMGwvd1h3TnMvazlBZ254Q0k2?=
 =?utf-8?B?RHZEVXF2WkpYZXl6NHdHakQ4SFJHcGlKa2E3MzZHVWpNN1QzSDlGRXBpcDR4?=
 =?utf-8?B?c3VzdzI2UkJRQStEd1pGaFJZR2dtQXRrNktxMzVyTC9OVU5DcWVOSTNiTlVQ?=
 =?utf-8?B?b1BhaVJqeFd2UWx3N2ZUWEFOT2piWU1mcEc4M1Ewc2JoTlVmK1FQdnR5L1Fi?=
 =?utf-8?B?NllXYmEvOG1IWVo4SkQ1UGcvc2xjTlZjNXhubllMREJnOGU5U1lYSGp6N2t6?=
 =?utf-8?B?RkdrT2svend2RUdKcHhFZFhWbE8xQzRhY084bkJNdXRya3NPTHZ2emJ3d3N6?=
 =?utf-8?B?Z3MyWDRieW1pTHQxRkE4NzNYV0tVamxYanhLdGUrL0lvdkNDMjJrS21kcDBR?=
 =?utf-8?B?Slk4MEJLdXBmd3JzOXR2OEo2UHZUUkZyemZmRFpnWWtHTCtBL1podjBmUVVC?=
 =?utf-8?B?ZFhvNTIzazN2Z21JQ0xMR3N5aU1UaDM1THoxWVV0OCtGdVFmQ1lpUkNUMkZk?=
 =?utf-8?B?NlAxVHVDVjRscFd1YnR0emdpL1JYQkthL2gyNFlOOHZGVWROcGFKSnNxekI1?=
 =?utf-8?B?MXBpS2lQV0F0cndsRGRvRkpKRFB6R0tyRWZqZTBuWUUyS21peEdmMTZvRHVJ?=
 =?utf-8?B?Z3c0MFlGZU5JbWR6YTVZbklXWEhpVWNYZUkzUjA4SWwyM3Q0aytPNWFBRk1i?=
 =?utf-8?B?OTRtYUFKRlNxSnZlMXVYU0VwTWQ5TXhuVWMzNTVXN291WnFvd3pqRFpueXpa?=
 =?utf-8?B?SE5zWlVCVExwSnJEbzBwcFNlNEd2VlpOSVdQUEZVMUw3TUtoOGtLWTMydjR1?=
 =?utf-8?B?WVY3UVVjalJYMnNaRjRoc2tMelZXR2EvcDZTeGpyVFNUWTlPbjhlM2pxOXZl?=
 =?utf-8?B?ZjNaaExrZjQzY2ZOQ04vTkhra1pJUndpZnBMNnFYY2wyZXkvOU53S3VSZEl6?=
 =?utf-8?B?cnYrUGttWEdnREhjckJsMDRIMkNTaHNhdmt3Vmt6VFhybmExZHIreUxzSUNM?=
 =?utf-8?Q?qE0+ziXFILmo/VBohgPreWX1Y?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	sXp29qqFBu8mGf2KYSUq1eB+vzHlz6LqIN7qXhYYCy0DJtP+VOhPIV4JIHIQFk+N16KwKBUNkTveJ2Dgg253DStaMhYssDfhso0lYn4qTLXFlaMSSCIg8RXbWN2EL8nqMJk/KIfNwqYmdGBZf6NKlXnA402p/Ur/fEGLFEWnQZgzIqLM0Q2JbJuKEE+kE/4KYzH6wB1X6Kpmi7t9crgII6s+t7nzS/kSOCGOZ1I5bJx9ZXk8DB/OOCA/6YyMwdw1NKH2FTDBRV2VBBxlWRTwHRh30Hv2DRhhy5sNCIn8kGQvX2b+w4x09bMi3H5AtgOkSw8hZjUowgDiHT5kKBGmxzmN6L0yoHEWEMtY/sJVC6I2Q6lxB4eMnWa649ICafOTCz4GYDI6zajvuX2BKFbMy98/Yex08Di5hn9Ynxw39vc2862Cm3SaGVG2emDy37A6HtJ+Vj6fXFviylSdkx7dy64pMCWODO3OIujmZbl7obmZQBdftrg3g+W26/Z8Zuljfal4rNcElY1G2EySQw5WJkCqc4PYcCGSIw8DTx+no/vgCyEKqrTWukU//TLm2Qt5Rqkhz0GEmeisiCRhSol5BlyEQ9RkwbIcdR2F+ZAIwdliyIfKmSfVYLlCu6PdDzf7q2ww0DiLMWbYUe57kn3jF1hdkJkgv21kbXva/jHHeNW2pXxAi8xOBWtS1VhCL/T31rSKKlBDICnjlvmw1gJh6Mg9E1D8RRxaLCTOppP6sHJ/ustLleYwAh0E/doTWplpHpS142qb8XZheHIZCHXPdF6ykBnQ6s965hqIWVF2d4g13tzLQew2/tbDGwRjIqQj
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e4f46dc4-4c26-4918-231e-08db4aeecc26
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 09:22:49.3951
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BVd5TpzupTc1IBOwhbA5aeuPRAq+YVbFO/WOxsGynWEFPsjdbVpeag/5LysloKDx0u/z7c33gv5gn3T63Ragkg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6348

When booting the BSP the portion of the code executed from the
trampoline page will be using the GDT located in the hypervisor
.text.head section rather than the GDT located in the trampoline page.

If skip_realmode is not set, the GDT located in the trampoline page
will be loaded after the BIOS call has been executed; otherwise the
GDT from .text.head will be used for all of the protected-mode
trampoline code execution.

Note that both gdt_boot_descr and gdt_48 contain the same entries, but
the former is located inside the hypervisor .text section, while the
latter lives in the relocated trampoline page.

This is not harmful as-is, as both GDTs contain the same entries, but
for consistency with the APs switch the BSP trampoline code to also
use the GDT on the trampoline page.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/boot/trampoline.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index cdecf949b410..e4b4b9091d0c 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -164,6 +164,12 @@ GLOBAL(trampoline_cpu_started)
 
         .code32
 trampoline_boot_cpu_entry:
+        /*
+         * Load the GDT from the relocated trampoline page rather than the
+         * hypervisor .text section.
+         */
+        lgdt    bootsym_rel(gdt_48, 4)
+
         cmpb    $0,bootsym_rel(skip_realmode,5)
         jnz     .Lskip_realmode
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue May 02 09:28:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 09:28:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528403.821495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmId-0002Ge-BC; Tue, 02 May 2023 09:27:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528403.821495; Tue, 02 May 2023 09:27:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmId-0002GX-8U; Tue, 02 May 2023 09:27:55 +0000
Received: by outflank-mailman (input) for mailman id 528403;
 Tue, 02 May 2023 09:27:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MJpR=AX=citrix.com=prvs=4790f2855=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ptmIb-0002GR-Np
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 09:27:53 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9b821f0e-e8cb-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 11:27:52 +0200 (CEST)
Received: from mail-dm3nam02lp2048.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 05:27:48 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6769.namprd03.prod.outlook.com (2603:10b6:510:120::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.22; Tue, 2 May
 2023 09:27:46 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 09:27:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b821f0e-e8cb-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683019671;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=riX/EATnG8fjIwzTkKAbOEAU7FIgdSiaCVyzPoBYrYI=;
  b=UU2T2QvkPfu/0GNClU7IZXrhXDseTXNPn/Qr0I3MjvORSxlazoczz/HK
   77dkZ+1FekrIPPoo1+ZeVYfebR97Dm2oU/gZDJvnTL9nRKgAM2dgGhg3A
   ltH2OSB9CNQYsp8/fkIREmtMMV2VkOR4sbbCW1jxTcldP+x2apkacdN8j
   w=;
X-IronPort-RemoteIP: 104.47.56.48
X-IronPort-MID: 107956025
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:dvnP2qivJMuHsnZ+3QUtyDLPX161+xEKZh0ujC45NGQN5FlHY01je
 htvCG7QOKyDZGL9cotzO4njoBhTuseAxoBnSFRlqSxnHi8b9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWj0N8klgZmP6sT4QeEzyB94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQJB2wDMSjbg9m354yqQdFLm8lkCNPSadZ3VnFIlVk1DN4AaLWbGeDmwIQd2z09wMdTAfzZe
 swVLyJ1awjNaAFOPVFRD48imOCvhT/0dDgwRFC9/PJrpTSMilEvluS9WDbWUoXiqcF9t0CUv
 G/ZuU/+BQkXLoe3wjuZ6HO8wOTImEsXXapLTOLpq6Ay2gD7Kmo7CwMudV/q+OKDhHG+fIldK
 x0T5jFphP1nnKCsZpynN/Gim1aGtBMBX9tbE8Uh9RqAjKHT5m6xFmUCCzJMdtEinMs3XiAxk
 E+EmcvzAj5iu6HTTmiSnp+WsDezNC49PWIEIygeQmMt+ML/qYs+ihbOSNdLE6OviNDxXzbqz
 FiisywWl7gVy8kR2M2T8UjchjOwprDAVgMv+hjMRWWh8x94Y4i+IYev7DDz5PJNLo+fQkOG+
 mYNn8yT7ucmBpWKiSDLS+IIdJmr7vCJKizBgnZgGpAg83Km/HvLQGxLyDR3JUMsPsNffzbsO
 BXXoVkJuM8VO2a2Z6hqZY73E94t0aXrCdXiULbTc8ZKZZ9yMgSA+UmCeHKt4owkq2B0+YlXB
 HtRWZ/E4aoyYUi/8AeLeg==
IronPort-HdrOrdr: A9a23:QkKfhqg/PDQayhHe/l/tuaPn2nBQX1B13DAbv31ZSRFFG/FwyP
 rCoB1L73XJYWgqM03IwerwQJVoMkmsjqKdgLNhdYtKOTOLhILGFvAH0WKP+Vzd8mjFh5dgPM
 RbAuND4b/LfD9HZK/BiWHWferIguP3lpxA7t2urEuFODsaDp2ImD0JaDpzfHcXeCB2Qb4CUL
 aM7MtOoDStPV4NaN6gO3UDV+/f4/XWiZPPe3c9dlMawTjLqQntxK/xEhCe0BtbeShI260e/W
 /MlBG8zrm/ssu81gTX2wbonthrcZrau5R+7f63+4kowwbX+0aVjUNaKv6/VQUO0a+SAZAR4Z
 vxSlkbToFOAjjqDxuISFPWqnTdOXAVmjXfIBaj8ATeScCVfkNHN+NRwY1eaRfX8EwmoZV117
 9KxXuQs95NAQrHhzmV3am+a/hGrDvAnZMZq59ms1VPFY8FLLNBp40W+01YVJ8GASLh8YgiVO
 1jFtvV6vpaeU6TKymxhBgn/PW8GnAoWhuWSEkLvcKYlzBQgXBi1kMdgMgShG0J+p4xQ4RNo+
 7ELqNrnrdTSdJ+V9MKOM4RBc+sTmDdSxPFN2yfZVzhCaEcInrI74X65b0kjdvaCqDgDKFC66
 gpfGkoxVLaIXied/Fm9Kc7gyzwfA==
X-Talos-CUID: 9a23:QJuI6W1UvQq8eHKVaLElNLxfC+95fCLllHPrImiGGG1Rb6CYDnuKwfYx
X-Talos-MUID: 9a23:tR1l1gjU39DPLBthcqksAMMpN9lQw6D/UVs3n6oGnJKGNX13JyzapWHi
X-IronPort-AV: E=Sophos;i="5.99,243,1677560400"; 
   d="scan'208";a="107956025"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q3fd5wyjhZpZtnTaw8zyv1w9IYxEXpbBA1F7qVmT2VphG69kl2ZpcbcQ2IHV9K1RFeuUltrDBDxA6QXzVqzbk3ekKekrP3zvY/x2ipVWfMwL7IZCL7i7LJ/lXkfH8dwLEMXgtvUNzCTa0nF36NLMIdosFbLmps0GuD6qD0C4Wxi8qdGvh7LF+vitvPR3SSRvrV8pu3pR3zOVUJvTBl56rFbTRRcrCBfjC70OclCOadziPTmD1Yy2py2cpdANllhhOhNzFXdnzQ4m9QcdglyHyk4vwLaCilqFpcoDJBRD0LBwfrDlIkOXWIh5ih0u36Zm59zPsERfS/rqyyOkU/ysuw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=riX/EATnG8fjIwzTkKAbOEAU7FIgdSiaCVyzPoBYrYI=;
 b=bpDd6HgEGwVkXT09d0zvxUYmMktK/s48ccWi9cFQenZyaNk9421fOLdpxwKVnWxixA7jAHD4N47IlbWI4GLjAwsOFWQnXlpQwEXATy31UC93iJHOtljrYTSxbw3OErhsMZxr0mWGE9LOrRSzdEofHk68oM4/2gOUriT/BKdwszNqTq3Lc1PaW2R7w34ytMlZPh/yk8Co4OwWal4FJdbT01gTrA1dgiet6UUsH1SwDk22+WhxGLCMJjrMuXdNCzJ3Ripx5ozdmnkRq9pFLlWfDxo6hYPeC7+ooYc8N334c/P5tRocmpnQwbIN9Az4fdRY432RSBX7jAA3Pxxj+2WirQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=riX/EATnG8fjIwzTkKAbOEAU7FIgdSiaCVyzPoBYrYI=;
 b=iyJllSRnSb6hIKtZLYS9/0lRcIiAsdp+T9FUKvatub9VUBBir4C6laYQTNk+X47WvvJ23L/qRRtT+BilddDTkiHkXB/qpFyaEqWMunM0pYYxWXq5ebxFSK8RPSoZU+yXLMl7TaQkBHLxMzAGgdIvzMrSqbjS4qNmYEgeJ/cDHOc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <afc999f3-fbe0-3b52-2f8a-b8b5a36eda87@citrix.com>
Date: Tue, 2 May 2023 10:27:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Jason Andryuk <jandryuk@gmail.com>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
In-Reply-To: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0186.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a4::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH0PR03MB6769:EE_
X-MS-Office365-Filtering-Correlation-Id: 8a625af4-d1f2-400f-3203-08db4aef7cfb
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PQ+fdx9BLui81JXByME9Hcxz3ybzz5ksKeT8Y3G9GiSvZPSCQYl0QO399qgu2L70dMB2TYXS9+crtzfxYUPSaDqMlHuiuNbeOjQAApEg7xSZ0s1LaecyP9gzax03TYqYM2xV97EdusCyEi0qEnY8Gx2sBh4nH5KnCzb3+uJeinJz7CThOpNRK4mOi/6UtROXNe81qL5SBQ/NsydIdCHccQ6wmZB7fCdBVUT2x4Hg9QUNG8wIm6+1wYgONHg0AjCI8vswl1/+calxlIq8UgmScygJKkjES4C1LbK6Ngf+gH8TmzgV8GZNsVwsE475HH0UjlEvmdISFpmNUf59qbYpwlXVP0dnRFNJ7dXgYOvHTbtK1msCvDIGCF97JrW9x9Cf6XqU2sHI+Zq7BSJE1ObS/gOcVC/eYTIXokcRyxS95cxpEWDCoweZ/VjgWt8YFUt++CyyExBKRjZk1zXDNIYrKaOVp0vIpyQGjQJZ3OtN7QBMihqKrKl0F/UEIVGZ/yWe8+pN7WEMo5vhpzh2hxUYUZt+f0sC+Oi/Fmr3Lg6vZes5JG8T+eDmCubvw6/wNmOClPgmCvavaqZ3nr1QPjBTMkOQl5lusf6qsZ9spnv5B0JmhWlkWj8hmPIRVBiLK/kFmpPn9hgNQc/bKWnP4cZ3fQ==
X-Forefront-Antispam-Report:

On 02/05/2023 8:17 am, Jan Beulich wrote:
> The hook being able to deny access to data for certain domains means
> that no caller can assume to have a system-wide picture when holding the
> results.
>
> Wouldn't it make sense to permit the function to merely "count" domains?
> While racy in general (including in its present, "normal" mode of
> operation), within a tool stack this could be used as long as creation
> of new domains is suppressed between obtaining the count and then using
> it.

This would not be the first example of the XSM hooks being tantamount to
useless.  I doubt it will be the last either.

With the rest of Alejandro's series in place, all requests for a single
domid's worth of info use the domctl, and all requests for all domains
use the sysctl.


As a result, we can retrofit some sanity and change the meaning of the
XSM hook here for the sysctl, to mean "can see a systemwide view" (or
not).  This moves the check out of the loop, and fixes the behaviour.

~Andrew
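[Archive note: a minimal, self-contained C sketch of the idea described above — checking the permission once, before the loop, instead of filtering individual domains. All names here (xsm_can_see_systemwide_view(), the struct layout) are invented for illustration and are not the actual Xen patch.]

```c
/* Hypothetical sketch -- not the actual Xen code.  The point is the
 * structure: one up-front "can see a systemwide view" decision instead
 * of a per-domain XSM check inside the copy loop. */
#include <assert.h>
#include <stdbool.h>

struct domain {
    int domid;
};

/* Stand-in for the proposed "can see a systemwide view" XSM decision. */
static bool xsm_can_see_systemwide_view(void)
{
    return true;
}

/* Copies up to 'max' domids into 'out'; returns the count, or -1
 * (standing in for -EPERM) if the caller may not see the full view. */
static int getdomaininfolist(const struct domain *doms, int n,
                             int *out, int max)
{
    int copied = 0;

    /* One check up front: either the caller sees every domain, or the
     * sysctl fails cleanly -- no silently filtered entries, so the
     * caller can trust the result to be the system-wide picture. */
    if ( !xsm_can_see_systemwide_view() )
        return -1;

    for ( int i = 0; i < n && copied < max; i++ )
        out[copied++] = doms[i].domid;

    return copied;
}
```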


From xen-devel-bounces@lists.xenproject.org Tue May 02 09:34:18 2023
Date: Tue, 2 May 2023 11:33:43 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Daniel Smith <dpsmith@apertussolutions.com>,
	Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Message-ID: <ZFDY9/mXw1gr5HgI@Air-de-Roger>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
 <afc999f3-fbe0-3b52-2f8a-b8b5a36eda87@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <afc999f3-fbe0-3b52-2f8a-b8b5a36eda87@citrix.com>
MIME-Version: 1.0

On Tue, May 02, 2023 at 10:27:39AM +0100, Andrew Cooper wrote:
> On 02/05/2023 8:17 am, Jan Beulich wrote:
> > The hook being able to deny access to data for certain domains means
> > that no caller can assume to have a system-wide picture when holding the
> > results.
> >
> > Wouldn't it make sense to permit the function to merely "count" domains?
> > While racy in general (including in its present, "normal" mode of
> > operation), within a tool stack this could be used as long as creation
> > of new domains is suppressed between obtaining the count and then using
> > it.
> 
> This would not be the first example of the XSM hooks being tantamount to
> useless.  I doubt it will be the last either.
> 
> With the rest of Alejandro's series in place, all requests for a single
> domid's worth of info use the domctl, and all requests for all domains
> use the sysctl.
> 
> 
> As a result, we can retrofit some sanity and change the meaning of the
> XSM hook here for the sysctl, to mean "can see a systemwide view" (or
> not).  This moves the check out of the loop, and fixes the behaviour.

Don't we still need some kind of loop, as the current getdomaininfo()
XSM hook expects a domain parameter in order to check whether the
caller has permissions over it?

Or do we plan to introduce a new hook that reports whether a caller has
permissions over all domains?

Thanks, Roger.
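[Archive note: a hedged sketch of the kind of hook being asked about — a single yes/no "may this caller see all domains?" decision, rather than the existing per-domain xsm_getdomaininfo(d) check. The enum and the hook name are invented for illustration; no such hook existed in Xen when this thread was written.]

```c
/* Hypothetical hook shape only -- not real Xen XSM code. */
#include <assert.h>
#include <stdbool.h>

typedef enum {
    CALLER_CONTROL_DOMAIN,   /* e.g. dom0 / the toolstack */
    CALLER_UNPRIVILEGED,
} caller_priv_t;

/* A default-policy sketch: only a fully privileged caller gets the
 * systemwide view.  A Flask policy could make a finer-grained choice,
 * but the decision is made once per sysctl, not once per domain. */
static bool xsm_sysctl_getdomaininfolist(caller_priv_t caller)
{
    return caller == CALLER_CONTROL_DOMAIN;
}
```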


From xen-devel-bounces@lists.xenproject.org Tue May 02 09:34:42 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180502-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180502: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=23c71536efbebed57942947668f470f934324477
X-Osstest-Versions-That:
    ovmf=56e9828380b7425678a080bd3a08e7c741af67ba
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 May 2023 09:34:41 +0000

flight 180502 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180502/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 23c71536efbebed57942947668f470f934324477
baseline version:
 ovmf                 56e9828380b7425678a080bd3a08e7c741af67ba

Last test of basis   180470  2023-04-28 12:40:43 Z    3 days
Testing same since   180502  2023-05-02 07:42:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  PaytonX Hsieh <paytonx.hsieh@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   56e9828380..23c71536ef  23c71536efbebed57942947668f470f934324477 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 02 09:39:49 2023
Message-ID: <05b09b22-08a5-a2fa-5efa-74845fcc2445@citrix.com>
Date: Tue, 2 May 2023 10:39:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Content-Language: en-GB
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Jason Andryuk <jandryuk@gmail.com>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
 <afc999f3-fbe0-3b52-2f8a-b8b5a36eda87@citrix.com>
 <ZFDY9/mXw1gr5HgI@Air-de-Roger>
In-Reply-To: <ZFDY9/mXw1gr5HgI@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0363.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a3::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH0PR03MB6526:EE_
X-MS-Office365-Filtering-Correlation-Id: c5020920-f78e-4b45-5465-08db4af11b10
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c5020920-f78e-4b45-5465-08db4af11b10
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 09:39:20.7433
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +kBqrQUkG8gSP0RoIpA2tTsCM+AFIwaVmj9faavkyHEjSsxb8fKbTSkE/C/Js4/fKsd+QfK1rxKdTPYpfuPfRT4PmaGubDvlhaMny/DiNis=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6526

On 02/05/2023 10:33 am, Roger Pau Monné wrote:
> On Tue, May 02, 2023 at 10:27:39AM +0100, Andrew Cooper wrote:
>> On 02/05/2023 8:17 am, Jan Beulich wrote:
>>> The hook being able to deny access to data for certain domains means
>>> that no caller can assume to have a system-wide picture when holding the
>>> results.
>>>
>>> Wouldn't it make sense to permit the function to merely "count" domains?
>>> While racy in general (including in its present, "normal" mode of
>>> operation), within a tool stack this could be used as long as creation
>>> of new domains is suppressed between obtaining the count and then using
>>> it.
>> This would not be the first example of the XSM hooks being tantamount to
>> useless.  I doubt it will be the last either.
>>
>> With the rest of Alejandro's series in place, all requests for a single
>> domid's worth of info use the domctl, and all requests for all domains
>> use the sysctl.
>>
>>
>> As a result, we can retrofit some sanity and change the meaning of the
>> XSM hook here for the sysctl, to mean "can see a systemwide view" (or
>> not).  This moves the check out of the loop, and fixes the behaviour.
> Don't we still need some kind of loop, as the current getdomaininfo()
> XSM hook expects a domain parameter in order to check whether the
> caller has permissions over it?
>
> Or we plan to introduce a new hook that reports whether a caller has
> permissions over all domains?

New hook.

The current behaviour of skipping certain entries is fundamentally
broken, and must not stay.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 02 09:41:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 09:41:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528419.821535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmVV-0006NY-9x; Tue, 02 May 2023 09:41:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528419.821535; Tue, 02 May 2023 09:41:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmVV-0006NR-6j; Tue, 02 May 2023 09:41:13 +0000
Received: by outflank-mailman (input) for mailman id 528419;
 Tue, 02 May 2023 09:41:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptmVT-0006NJ-Tj
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 09:41:12 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2043.outbound.protection.outlook.com [40.107.7.43])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 78286036-e8cd-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 11:41:10 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7025.eurprd04.prod.outlook.com (2603:10a6:208:19c::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.25; Tue, 2 May
 2023 09:40:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 09:40:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78286036-e8cd-11ed-b225-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9339daaf-533a-4758-c25f-164c27f56043@suse.com>
Date: Tue, 2 May 2023 11:40:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Jason Andryuk <jandryuk@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
 <afc999f3-fbe0-3b52-2f8a-b8b5a36eda87@citrix.com>
 <ZFDY9/mXw1gr5HgI@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFDY9/mXw1gr5HgI@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0107.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB7025:EE_
X-MS-Office365-Filtering-Correlation-Id: 02054f07-810a-415e-577f-08db4af14a4e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 02054f07-810a-415e-577f-08db4af14a4e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 09:40:39.8579
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: j2yoxfCuPyz2KfyXTCUiAWOkJWNHbZ8Z7fPdU20xwwS8p/8ei8DnZDShG2mZ4zTI/oYNqvCVsvoq4fuar1+T/Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB7025

On 02.05.2023 11:33, Roger Pau Monné wrote:
> On Tue, May 02, 2023 at 10:27:39AM +0100, Andrew Cooper wrote:
>> On 02/05/2023 8:17 am, Jan Beulich wrote:
>>> The hook being able to deny access to data for certain domains means
>>> that no caller can assume to have a system-wide picture when holding the
>>> results.
>>>
>>> Wouldn't it make sense to permit the function to merely "count" domains?
>>> While racy in general (including in its present, "normal" mode of
>>> operation), within a tool stack this could be used as long as creation
>>> of new domains is suppressed between obtaining the count and then using
>>> it.
>>
>> This would not be the first example of the XSM hooks being tantamount to
>> useless.  I doubt it will be the last either.
>>
>> With the rest of Alejandro's series in place, all requests for a single
>> domid's worth of info use the domctl, and all requests for all domains
>> use the sysctl.
>>
>>
>> As a result, we can retrofit some sanity and change the meaning of the
>> XSM hook here for the sysctl, to mean "can see a systemwide view" (or
>> not).  This moves the check out of the loop, and fixes the behaviour.
> 
> Don't we still need some kind of loop, as the current getdomaininfo()
> XSM hook expects a domain parameter in order to check whether the
> caller has permissions over it?
> 
> Or we plan to introduce a new hook that reports whether a caller has
> permissions over all domains?

I'd be inclined to make the existing hook recognize NULL as "global view".

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 09:43:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 09:43:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528424.821545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmXd-00071P-P2; Tue, 02 May 2023 09:43:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528424.821545; Tue, 02 May 2023 09:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmXd-00071I-Li; Tue, 02 May 2023 09:43:25 +0000
Received: by outflank-mailman (input) for mailman id 528424;
 Tue, 02 May 2023 09:43:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MJpR=AX=citrix.com=prvs=4790f2855=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ptmXc-00071C-Th
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 09:43:24 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c6e2a9cb-e8cd-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 11:43:23 +0200 (CEST)
Received: from mail-bn7nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 05:43:20 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6526.namprd03.prod.outlook.com (2603:10b6:510:b6::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 09:43:18 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 09:43:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6e2a9cb-e8cd-11ed-b225-6b7b168915f2
X-IronPort-RemoteIP: 104.47.70.100
X-IronPort-MID: 106877086
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <11b24761-9268-e647-7316-0bffb549ae6d@citrix.com>
Date: Tue, 2 May 2023 10:43:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/2] x86/trampoline: load the GDT located in the
 trampoline page
Content-Language: en-GB
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-3-roger.pau@citrix.com>
In-Reply-To: <20230502092224.52265-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO6P123CA0038.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:2fe::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH0PR03MB6526:EE_
X-MS-Office365-Filtering-Correlation-Id: 5b78b9ed-f411-40b1-feec-08db4af1a8eb
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?N1dRdjhwalpmREo1N29mTWRRdmFSeWIxR3lCL3RxbUxPVzBxVGZ6L01JNkt2?=
 =?utf-8?B?VjNRcVdNa3ZRVGk2dUhhQUlhdkNzckhDZzNyTUhDMS9MLys4QUhaZ3A1aHYx?=
 =?utf-8?B?RzZkTUxwbEpIdW80OEwrZFRHWHhSQUIvQUQ2Y0Z6NGduS2xoWGs2WXBZNEN4?=
 =?utf-8?B?QVQvdVlnYVZvZ1dKaTFRa0RsaGVuaGNGbTdDQ2xPUHRLV1U1c3pyMTN1YWhm?=
 =?utf-8?B?TW1BVXlZYTVZRkJkMGZHRDk2YU44MmxNcTVtYUphSy9lR3M1WGVFVi9xS1N2?=
 =?utf-8?B?Qk9PSk9nWHhYT2w3aW5LZUZObjhJdDRtakhyelRzNUwvVlg2RUsrSExmckQ2?=
 =?utf-8?B?YkJxczBBRkorOGdGYldRZUU3NGNjS05NRHdRVUkwbTY2N3R5K1E5ZWFmTjNS?=
 =?utf-8?B?OWRJckQrVUsvVXZiS3VpUmt1bnNBdG1wYlNLS0FVUmJxSkY0c25iV2M5MXBV?=
 =?utf-8?B?T3YwOHpuMGx0Y2hoS0dDR3ZqSFhZcGhXTHRWZXZoay9iV3lIY1ArSWlMcFhD?=
 =?utf-8?B?aE40K0ZrVmFyNmZ6OEV1QXhtK2ZvaDN5NFVYcnR4OURmd2xGK3FzODU2Yk5u?=
 =?utf-8?B?REMzT2gwUmNZN1BHekRVVEhMRFZkclczZHhISTF5RGQwOUI3dzd2QU56U09Y?=
 =?utf-8?B?dkxIdGowUitKdnk5VWhEZHhwUjUyakJUakc1WUEwUmF2Y3VOS0JYWUNld3lo?=
 =?utf-8?B?SDVMSkE0NXlvZUJYK3QzYnBsTEIweWNlZGlRb2JDZjd3ZDlSSTd5TVhqTlRw?=
 =?utf-8?B?dXc1ZzFUblcxdW5VczZvRHJDT08yVWlPSkY4Y2RuUXpNbG9icC9jQkFmRzNK?=
 =?utf-8?B?NmlJcVcxbHo0MUdiT2VxektWRTFheXl3eFc2N1h3ODR3OTZ5bExJVm1aanhl?=
 =?utf-8?B?YWJ0dFJ6ekZkd29VOHJGWDZSNEF5L0VRL3NNTTZUTzdMekNqUk9KUWx6a3BH?=
 =?utf-8?B?dm96WGVlSkY3bFU0TlRlM29sTnJEU3BTeXErSDJpcmp5TVMxeVFSdGdaUDgv?=
 =?utf-8?B?N09nUDd1R2JTaWFFQnVTd2sxV0tUMnBJTTl2RDlGdzZLVlN2cEpJeWNMNnhy?=
 =?utf-8?B?S1FiTGxqajBZUVR0N3lQVXhXTnFWRHBMZWdUY1J3bXJEeURUY0hhU3lLVWM0?=
 =?utf-8?B?U3VjZy9pK1BkUkh6ZFk1bmNWZWszc25BN1k4dWJMcFZidmIwUkI3SnpBdi8z?=
 =?utf-8?B?aklabjVoWlpjM1hUYXV1bFlET1ZCdTdRSHdJRkdxK3puTEZQTkpCdGo2YjZo?=
 =?utf-8?B?YjdxUG94ekcxVUxSMzArZWs1S0tkR053V2x4WTVsYTJzVnhVeWhTd2o3MU1U?=
 =?utf-8?B?V1ppNk5qUjJKWmpWcm0vT2VGb2U1UHh3bm5Uc1QzOGhuQnA5cmlHd2xyMmQ2?=
 =?utf-8?B?NDlpNzk1WTBPKzBKdkxYTkpSQUl3Q2ZKc2cvOU9kRjNmNk1VSWcxalNUWnc4?=
 =?utf-8?B?NmRsQzdNU3FXR2FFaGk5alhRWHE3dk9VL3ZOWTZsdkdSeThyNHhFdFFVQ3hu?=
 =?utf-8?B?Y082ejd1QjQ3Ty9USjFwY3FiM21FQy9FNHBRa0htQ0tIQnltZlNyeENSeWlM?=
 =?utf-8?B?MG4vdzNCSkRnd0FsUUJpZzBKRVVRaFk1blpnMUkvQlpkcDR6aU16VzRaVGt3?=
 =?utf-8?B?OXBQSWJ2WjFXT0FIbnNpOE9XWURScUpheDRKVWo5aXN6eGhiS3JOZzJBRFhn?=
 =?utf-8?B?em14RkNWdWpxa1Fxd21laHJ6RWY1UTQvZERkb2FLaEV6VUU4NWpZRWFRMDF5?=
 =?utf-8?B?TFFIUFFUY1NSdEpCUGdiZDNLUGFZTXpBYUh1UnJwOUU1NUhWejB2aXoxNldK?=
 =?utf-8?B?SDl2cUNTRmJSQ2dFUHR6ZVcvMW1hZ3lqeXl3YjlEVjI0ODFFMzNzWGx5RXBU?=
 =?utf-8?B?eis4ZXdpOHBsTlZmMEJ5Wk5rVEdpbitIS21yRTZWend0SjZFVzNjMFpuTGQx?=
 =?utf-8?B?VCtJMk1zWVU5U1FNSkc5b2lJaEVnT2c5Y3ZzVXZQbGR0OFBiVzN1M04xVWk3?=
 =?utf-8?B?UytZMWJmSzBZWGExWmhWdURQdmhseEh3NXVTVkZvcmd4VzZROGtFTTU1dHV3?=
 =?utf-8?B?QWRIS0MxOFc3ek5mZml2cVVBZGdLaXYvOGhXQWJ5Qk1SWGlHTzFSMStoRUVY?=
 =?utf-8?B?WmhHWkNIODV5UVRoM2ZhSmk1MytOQTA4UDJQVHZvanF4M1dXVCtkNjgvK0pC?=
 =?utf-8?B?OFE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	mR/ESx0KRPO8Gqdntnpjc+du4dZtK2MZ/dUdT4GKUh6TK7nMP5p/5Tmx+HIjSEGGuGnFgkN3jmvnXMZ/gZd331TILLcC3y2U6Am92+W5+vd/PbgslZgC6JE5Ej2oOJ8L8eG44cZ0KgB8Wz+79x03TXL32OiysL3Y8SwnCUoxv+viQZdd4UaZc+c6s+fMFoKi1vcrkRQ6XmTE6icrn4lBgKE+Jdb3ResAKICcyfy5tIfw9Y9ez0M/NklkqlqZ3BkInPYkA5Q2yYkzeBvjXXjLxGF9Vp4lC4lwPk+4wlVjs7L9stJC0sK2Ht2/OHpSnMiGjnCfpXGD/waiRTq9etMPOTPmscPBH4zVEL4ExbrDpJDw1y3qmHZvD7HEYIjNMjZq2pI04ofMSO7S60hVU41mQdv8iA6P4XjMOV5EhytY4Srht/XfTFsSu1/dduCYsuEKAAdWsAiWJav5NaRprYE30utEzDeerJLa0+4+e3Tu4jVW0RVaKcJsKtr7bI07igfY5wAqN1xdspzy3yZcfaPiXJM8+2gUNKR338cu2IBqRw1PJXGppPJjO5YwaMzqT10B1U3rYwLPiTQdlKzxDimCsscP6WydtCtSMwA8MjLtGC+mnZSiofpjXODI01soAUqtCe3WBzBLKkBEgT2amPtSqjrI5CJ/7ucNKOSmEm5f2KSMLcyJSUoniU1/TSRokEYDWlQ74r/9pxOFadu8bzPYouddz+Ho3gV4nfbem4C72XEa6JNNNDPvvoCzfNGkNGPflXBR9fU1jUANrdiPgjBiXmMadNLdvxBdKXUlFWNssR01iPL2QoIwyDW0FJOGmb40
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b78b9ed-f411-40b1-feec-08db4af1a8eb
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 09:43:18.7160
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xXeHrFU7MO+zflkpkciGSm+fW9XThtRyqHW8rn8WjvylJ5YW2jYFZC2IKT43gYBh9QpAzBa8F92XBjNwTW+2tpAoOcG9lkxwX3Oc4b+xarg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6526

On 02/05/2023 10:22 am, Roger Pau Monne wrote:
> When booting the BSP the portion of the code executed from the
> trampoline page will be using the GDT located in the hypervisor
> .text.head section rather than the GDT located in the trampoline page.

It's more subtle than this.

gdt_boot_descr references the trampoline GDT, but by its position in
the main Xen image.

>
> If skip_realmode is not set the GDT located in the trampoline page
> will be loaded after having executed the BIOS call, otherwise the GDT
> from .text.head will be used for all the protected mode trampoline
> code execution.
>
> Note that both gdt_boot_descr and gdt_48 contain the same entries, but
> the former is located inside the hypervisor .text section, while the
> latter lives in the relocated trampoline page.
>
> This is not harmful as-is, as both GDTs contain the same entries, but
> for consistency with the APs switch the BSP trampoline code to also
> use the GDT on the trampoline page.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>, although ...

> ---
>  xen/arch/x86/boot/trampoline.S | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
> index cdecf949b410..e4b4b9091d0c 100644
> --- a/xen/arch/x86/boot/trampoline.S
> +++ b/xen/arch/x86/boot/trampoline.S
> @@ -164,6 +164,12 @@ GLOBAL(trampoline_cpu_started)
>  
>          .code32
>  trampoline_boot_cpu_entry:
> +        /*
> +         * Load the GDT from the relocated trampoline page rather than the
> +         * hypervisor .text section.
> +         */
> +        lgdt    bootsym_rel(gdt_48, 4)

... I'd suggest rewording this to simply /* Switch to trampoline GDT */,
or perhaps with an "alias" in there somewhere.

The important point here is that we want to shed all pre-trampoline
state, and unexpectedly being on the wrong GDT alias certainly
complicated debugging this...


> +
>          cmpb    $0,bootsym_rel(skip_realmode,5)
>          jnz     .Lskip_realmode
>  



From xen-devel-bounces@lists.xenproject.org Tue May 02 09:55:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 09:55:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528428.821555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmj4-0000AN-RH; Tue, 02 May 2023 09:55:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528428.821555; Tue, 02 May 2023 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmj4-0000AG-Ob; Tue, 02 May 2023 09:55:14 +0000
Received: by outflank-mailman (input) for mailman id 528428;
 Tue, 02 May 2023 09:55:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MJpR=AX=citrix.com=prvs=4790f2855=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ptmj2-00008T-Ui
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 09:55:12 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6c208b24-e8cf-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 11:55:10 +0200 (CEST)
Received: from mail-bn8nam12lp2171.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 05:55:07 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB6991.namprd03.prod.outlook.com (2603:10b6:303:1ba::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 09:55:02 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 09:55:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c208b24-e8cf-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683021310;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Qx8TBmHioxfZ2IfV9j0Zp8QCJ6YrR0h3CkbpgUxzq70=;
  b=JS+WQs+LbidtxCcsJ5tY7QIz6WnKSAi0lDgPaQ+YYAd9pPyZeQsbuHhr
   vcAAWm1ByQcXNdin/9yXJdGNhwmdITtPKBonFfERMbpUt4Pee+4FtRj58
   R1FYSJmm8QW0fKiLNqCFik+PlqZGe7184DvOkE4XcP2fA0/45BZkIRIsY
   E=;
X-IronPort-RemoteIP: 104.47.55.171
X-IronPort-MID: 106310059
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:XROsWaC1SrjXiRVW/wTiw5YqxClBgxIJ4kV8jS/XYbTApGknhjNUz
 GRJWm/Sa66KazGgKN1+PdmxpEtU6MKDxtdnQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G5A5ARnDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIwxN1rWk5A9
 scichcRNjKhvOWs66u3Vbw57igjBJGD0II3nFhFlGucJ9B2BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTI+OxuvDG7IA9ZidABNPL8fNCQSNoTtUGfv
 m/cpEzyAw0ANczZwj2Amp6prraXw3iiANpPRdVU8NZn3lKy5lMeKyZMFlfgheXolkekSdFAf
 hl8Fi0G6PJaGFaQZsnwWVi0rWCJujYYWsFMCKsq5QeV0K3W7g2FQG8eQVZpatYrqcs3TjwCz
 UKSkpXiAjkHmKKRYWKQ8PGTtzzaBMQOBWoLZCtBRw1V5dDm+ds3lkiWEY8lF7OphNroHz222
 yqNsCU1m7QUi4gMyrm/+lfExTmro/AlUzII2+keZUr9hisRWWJvT9XABYTzhRqYELukcw==
IronPort-HdrOrdr: A9a23:h0CH5aFzSeLJsHu+pLqE7MeALOsnbusQ8zAXPhZKOHhom62j9/
 xG885x6faZslwssRIb+OxoWpPufZqGz+8R3WB5B97LYOCBggaVxepZg7cKrQeNJ8VQnNQtsp
 uJ38JFeb7N5fkRt7eZ3DWF
X-Talos-CUID: =?us-ascii?q?9a23=3AlgQC2Gpg1kzjKD4tDjd2qjrmUf4ebibywnTrGRW?=
 =?us-ascii?q?fFT5LaoaSU3SJ9Lwxxg=3D=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3AEU0i8w7lQPPbA/lIUrOfdMWyxoxlvauPKH8Asq5?=
 =?us-ascii?q?c+MXfETAhJz6HnjeoF9o=3D?=
X-IronPort-AV: E=Sophos;i="5.99,243,1677560400"; 
   d="scan'208";a="106310059"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ge3XFzrOvjkaLOW5CW4t4OqEigzFSal0wso+vq/m4JXn4iy7VGM7L4/JeTvePz9wFYlHCQCDHE9fLABtjN4GCDPZaJtoFNyTyehh1PwlCirpWzXmo2k6mZN6wKXmucDLBeEE3lBdoa+8HkTrPacMNGPfstxh2hWfp6XbJB5jmyZWvIIK/Zecv9jY3PNOwdNEhLJMBxI81m+KR3pYDnlsSQyy6lhdNw8m7TLMfrGu6XRnz8GG2hUuxt4SwhGOI17kVwXs3NnxJJdp+NjM4jG4xo/PTAyk90upn0Ll62CNpcz2ru/WiQp+r9+EZz9QZ79PXre3aLGYCHeLdTnqeLTX5g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gCtsbvuU7CzsbkqhJiHIMYUbXQSCF59o1g4jgrRizI4=;
 b=X4Cn1rGD9hcYqH7l0QlbmOgFeP2G9nDUMhCYTXhiLF7tZRpFTeRm1xGMdZJFjpWszCB4bVmuaHGDAVitxyEqv5lnnupSUuzivct2hzKQoid55nye4OLTpDaoxikRZlRuQ3oJ7lEVdMgGcj/mMBOmOa5WJtDHXN+HO45W8ir5SCbM52RK8ZBuRqYdWdJgongZDUFB45vv7Eb+dWwZ4GpPT9Z+9C1prLUTs5ACIp5ELSbXXS1D1GAkfD1h40a7aQ83WeVdwMC0TY1ouJi7mDDX7v2O0A57KJqfHx9oXrBChpLarN+XUEr0arWdQkbKSjwaXBsyU7lMurkT8u7eLVrNDg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gCtsbvuU7CzsbkqhJiHIMYUbXQSCF59o1g4jgrRizI4=;
 b=mn3KEX+Gg4/7NnmdoKU8rM9CoDuV9uphxS7K1/0k5an7Eeujhqkr66OpMJZbt3HGQFgNyY6RMeYuymWtyH9/2cj7LHW+A8OIIloIZAGBPpi3J8CWUKYL0KOEb18qrtK2KKPEFw5wIKKF0QPlGwqxH07S+UVJbY7/mv3HBBhZqIc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
Date: Tue, 2 May 2023 10:54:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Content-Language: en-GB
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
In-Reply-To: <20230502092224.52265-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0548.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:319::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MW4PR03MB6991:EE_
X-MS-Office365-Filtering-Correlation-Id: e5ad664f-73ad-4be4-b6b5-08db4af34c22
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pmGua80B08hRyH53Ko8JNGhgKHDeitU/nI6vzr469AH2pGc/hr+iV8sa1fVBOWgzJl0lc+5qIgvIaVwNRS+ILbfASuQGW+jIuoB2gTB7gOfPDuwUjqT5ZCC+m9o2ZMojn3HMVRyLNSZbJ0rd26kACkhVVQsxsjNJJRJV6z1dl3nqST4QokuD8rz8bs/Cz2Disygs6wsSB2Zo/9gHTXgXQfinIgcltsGBpnZr1NpyrfIMEFHYhRsb1EVTXnsX8FgXF9BEQW6Wl+N7EO4qi2qsTc0iNadcV9ov8DrPbQTyJh6ZNeLDdCKSlyv9XblIsoek1/5FbCZDB8ux+BbXVWf1XLZ9aotf8s1C4UnvRTLPJYtOwOpzRZ5Lz2E0iG3rYAf8Euj7pB/e25rMXT1owhZ5NRZ2+UFqDPMuYw+CZuw+82FmkPvv9MP0UC4w5F47hKVCwISNVKg1Q0wpOAKN0Aza8JT8eX6n/afqUZkIiKkD1ofATKxjqZlSZGY9yE9mBzc4v8WtMSYh9ww+5LI1EAGXyVxjV1159Cbih89E5xRG5paFyl+WxnQ0prHHR1vnOWpiChz0xF9+mudjH7aioPhvFdIw+jjtYvS+93elKZity9szlOmzbyWOSYMf/rK/En0B2JFmskoPEAR8leijXamltA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(376002)(39860400002)(366004)(346002)(451199021)(6666004)(6486002)(83380400001)(2906002)(6512007)(6506007)(26005)(53546011)(186003)(478600001)(2616005)(31686004)(36756003)(54906003)(41300700001)(66476007)(66556008)(66946007)(82960400001)(316002)(38100700002)(8936002)(8676002)(86362001)(4326008)(5660300002)(31696002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cEp6LzZCNjJqMk5BT3Uxdkt0NzIvZ3A1SkVyU3pTVmw2U2lhZEVpZ2t0d1ha?=
 =?utf-8?B?STNqODEvZGJ3VTFxSzN0Zy9Xb0c0QXY2ZGtGQU5IdFFOM29SK3ZJOHVDWWg4?=
 =?utf-8?B?WmxZenI5ZEl0RmF0YmFYU2Y3QjFqK0pMdUVkeWZFWDh5Wmh0akNHUWkwSWps?=
 =?utf-8?B?Sm1lSTkwOVhaZzFOMUl1aGhyb1d2U01vY2Jya3g2ZnB6V1Z2YUh6REtMcTFS?=
 =?utf-8?B?bU9oa0tCQmJlTVgxaTBVeTNXTXlnakkxRlFyT1dxaVdiMituTGNyR2JDQTI1?=
 =?utf-8?B?TUk3VDdEdUlhb25kdFBHMFRhQ25FZkxrUmFzTEQ2VUVZYnI2eGYza054NGt1?=
 =?utf-8?B?T3l0WXl2bDdGQndjUzl3alQrRGVrM0hpWWZZbXhDbjRqbktnazhLOXpXbVBO?=
 =?utf-8?B?OGRDQndrb3JKeHR2aVJLL09FUzBYeTNHY20wYjdDcmZPNTYvZ2xILzRjM1BK?=
 =?utf-8?B?UE9VL3ZHYVhhdkw5VFNwNm5vSGVTRDdnbEgydDZuMVJmK0hFZHowWlp5OFZI?=
 =?utf-8?B?d05nWFBWRXlwUVpIbHRsSERoWTNlUU5SV0tJQ0NyNlJpVGE3NVpVNWo5eTAy?=
 =?utf-8?B?ZnFXQy9mSTI3NFpnY2pCMHJhaWFrVzBadXBMOVg1TjRZNXBwa21kSDZGYU5h?=
 =?utf-8?B?NjFiZ2FnL1hHKzNxS2xyT0U5aDR4UjRVMzZsbHBLMWRITS9pYVN5cFRlRkhK?=
 =?utf-8?B?WG4xb21RT1dKTXB0cmhrYVcxc2NYQTluZVFTR3lPZmZ2a1BtbENQQ2Y2b21N?=
 =?utf-8?B?bmpqdExLUEJhNS8wWWhDRmV1T0FGTDdBL2lmbHFwNmRCT2o0WEFTc09uZW9h?=
 =?utf-8?B?d1N4RFZjaEJTcGVoRDNjVVozd0ZOUHVUVTd5U0x4YnBJQXFiMXE4YXByTTdR?=
 =?utf-8?B?Zk4rNmF5a3Nocm4reDBUMWIxZVdRbUE2dXZuUDAzWStpRHd4VDcvRGZWbDdQ?=
 =?utf-8?B?UUxEOGJwQmF0ZDZmZFFlQ0VEN3E0REtnRS9Wd2QvSThaOUk0S3NyaytCR0tr?=
 =?utf-8?B?TGZpN0dXWDZNVjEwdHNKVFI5QkdHTmhMUmRROUVZcVU3T0lRUjdUbmdHT0xw?=
 =?utf-8?B?NHY5R3E0QUJvVXdPcW1wTk1oNlF3SDd5enhWdnltc1JULzdIVmx5VldneUVO?=
 =?utf-8?B?R2F1MUtuTjhUbllFYlV5SDdsM0dtQkZOa1lLaENabEhIV3BhdXNOQlMyOXhV?=
 =?utf-8?B?bmRkVy94R2VlZ1UrTzdvV3oxcGsyT09HOWFqVGovOXpWbkdzQnZXZWJYVkdz?=
 =?utf-8?B?d0RBMm00eGEyZm1CalhlNU5EV3ZNMHE0SUdYRHhoa2IxaXltelFJZG1lalkx?=
 =?utf-8?B?d0ExQXdTWkx4LzVhUWJUZDVZUWZxVXdlVVBYK1l0ZmZiaDU3WGpHWDVKRHYw?=
 =?utf-8?B?RnNyQ3ZBdzdWNmkvcHpCVW8rV3dQZStrczM4S1NOTklMVzdsMGdGa3JISW9H?=
 =?utf-8?B?Q1RaRmRhTGdiNkFEUnR5NTd6NUdFNzZZUm5aKzc0Vnd3WjBOZERXWXFjSjh2?=
 =?utf-8?B?T2huLzQxWG16bjJmbHBEQ1AwK2dmbm5jTjZvZGpZcTVZaDVlYWZTMkZ4RzE3?=
 =?utf-8?B?SHR1dDJRY3REODdMeXh3amI4OXdpQURDT1hEQThGd2pEb3dITUVKL3ZWdFl2?=
 =?utf-8?B?NXpzbjk1bDlMNEt0eFJWeTB0R1p0cDBBV2ovOElhb1lyeXJ6TFNUQUV5eVo5?=
 =?utf-8?B?UDJyZUFjQUFvdXl5WXAwcVdFRFdyUFNRMkw0TkxaY0R0Z2J1RmVrN0xoZHhM?=
 =?utf-8?B?NUlRRUZPZjVOMDU4eHlidnRWMDI0dnN5R2FSS3owZkIrcTI4QVlCS0lLRERT?=
 =?utf-8?B?TFFicEZKeGowNnpDSjdTak1HeEYxS0swbllXckwrTHZ0NEtESWVYQUpUaGU0?=
 =?utf-8?B?L1NvUXdpY1lQZDRkZnZUaWtsSTMvT0Z2WXJOTHNGYStGU1djdEgrdWtRMFhU?=
 =?utf-8?B?cGNyQ2N4OWVreE5ta0NpVEV1QUkycjBjbzlXdUgvVkF3T0RmV3VTTlVpSGly?=
 =?utf-8?B?em1TVWpGbEcvVVBYVTF0Yk9CN1V5QnVuTTl4MUlXWTFkSXlyWTBXdHNKRVhF?=
 =?utf-8?B?OHh2RU03ZngwdHM0TnFzc0RxbXIzWjF4bUoxTWcrQ3dCUlVTOFUzRWRDSDNR?=
 =?utf-8?B?aWVJWnd4WVZjR2ZtdXl5czVBbzZrWVJDSlZKODg4MzQxdGhyQnBEcmo4Vldr?=
 =?utf-8?B?bGc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	WZshAvxJx+55c6GyYgFZdYANBepGNXYiRauRZAORDSZHTfkEIwHjNK3gfsO0Mxn/UA2l9qGLc2ln3sMLtCxHa7mE7C6cC7O2NyQ4XvJ9HmuJMXesYwDJDVx4jpIM1AcfohG7xCUOmWfU7BatlmM6V81VEP51dUZJiY1M3PmI6ZQnJY9+PmKAfS9iRl8S2/41ZV2aoJe5hnhzm4XqjV6yDmUkiCaPiYrNU8jdiOFcZIsCFfjdiQDHbzvd8tqpNZcgiZuTgv8GvNxOtoMlUGy//mxuAEnfHfkeiiTrCLlzpWz63lT1uuBJuQgVLZcRwGoxk2Weg3FT3A5zTpBfIBD6X9K5tnlXU5pEMS+CcQuiR6sWMH+D5UtamA0xmKHtmeehBlbjRLdOGq35K4jhBs5uNsEzS9jpgT3bwadJr1dRPUiMGVa9QaBw95nbUJ/t+SYVsNYmNb70unS+RrrZxPbr/oYJlkPkhCWap2qWPLa4nBIFaV+svdrwZyQ7c7Z+FvJSbt634+hUjYK9vYc5CYbA9ffACn9y4EXp6ahdDWuvjivGLRdLSqPoyO7tQgy5KkubfIlpomVeo8DlwljhXTJhgqLzT8dpqsg9Up93Zo++2yjzNh+VLksqWrJV7k/JfNd5SX9krP40QlBYwG4Xl0FH3JcyRVpt30vwhvtkKnig3EwZiFTPLjF7ovp2wNzspRsd6QK6s8UnQ7obUQmu8CECVnVopWiTj5LhAcZp90o/DAtSvQxwXTagOG+jdBeuqCVSzV3z7aM2DZR0rR480iR9rLBN7fFGITfXkrw1YAmOtLGsskA14EA1B2iRCi6dGJEN
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e5ad664f-73ad-4be4-b6b5-08db4af34c22
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 09:55:02.0767
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: siEj9Jwjrix2/a+saslokRLzV8SOqROU0XO+eH5owbKWpKmDtjO+6/zfe4kXkuVKIx4YsCkoEc4SUKSNpi10lzKhTnBdaK+nBYsePSY7Rvg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6991

On 02/05/2023 10:22 am, Roger Pau Monne wrote:
> Ensure that the base address is 2M aligned, or else the page table
> entries created would be corrupt as reserved bits on the PDE end up
> set.
>
> We have found broken firmware where the loader would end up loading
> Xen at a non-2M-aligned region, and that caused a very
> difficult-to-debug triple fault.

It's probably worth saying that in this case, the OEM has fixed their
firmware.

>
> If the alignment is not as required by the page tables print an error
> message and stop the boot.
>
> The check could be performed earlier, but so far only the page tables
> require the alignment, and hence it feels more natural for the check
> to live next to the piece of code that requires it.
>
> Note that when booted as an EFI application from the PE entry point
> the alignment check is already performed by
> efi_arch_load_addr_check(), and hence there's no need to add another
> check at the point where page tables get built in
> efi_arch_memory_setup().
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/boot/head.S | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
> index 0fb7dd3029f2..ff73c1d274c4 100644
> --- a/xen/arch/x86/boot/head.S
> +++ b/xen/arch/x86/boot/head.S
> @@ -121,6 +121,7 @@ multiboot2_header:
>  .Lbad_ldr_nst: .asciz "ERR: EFI SystemTable is not provided by bootloader!"
>  .Lbad_ldr_nih: .asciz "ERR: EFI ImageHandle is not provided by bootloader!"
>  .Lbad_efi_msg: .asciz "ERR: EFI IA-32 platforms are not supported!"
> +.Lbag_alg_msg: .asciz "ERR: Xen must be loaded at a 2Mb boundary!"
>  
>          .section .init.data, "aw", @progbits
>          .align 4
> @@ -146,6 +147,9 @@ bad_cpu:
>  not_multiboot:
>          add     $sym_offs(.Lbad_ldr_msg),%esi   # Error message
>          jmp     .Lget_vtb
> +not_aligned:
> +        add     $sym_offs(.Lbag_alg_msg),%esi   # Error message
> +        jmp     .Lget_vtb
>  .Lmb2_no_st:
>          /*
>           * Here we are on EFI platform. vga_text_buffer was zapped earlier
> @@ -670,6 +674,11 @@ trampoline_setup:
>          cmp     %edi, %eax
>          jb      1b
>  
> +        /* Check that the image base is aligned. */
> +        lea     sym_esi(_start), %eax
> +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
> +        jnz     not_aligned

You just want to check the value in %esi, which is the base of the Xen
image.  Something like:

mov %esi, %eax
and ...
jnz

No need to reference the _start label, or use sym_esi().

Otherwise, LGTM.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 02 09:58:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 09:58:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528433.821564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmlw-0000oX-D6; Tue, 02 May 2023 09:58:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528433.821564; Tue, 02 May 2023 09:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmlw-0000oQ-9m; Tue, 02 May 2023 09:58:12 +0000
Received: by outflank-mailman (input) for mailman id 528433;
 Tue, 02 May 2023 09:58:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zdoi=AX=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1ptmlv-0000oI-Db
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 09:58:11 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d75d9f23-e8cf-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 11:58:09 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7373A21D47;
 Tue,  2 May 2023 09:58:08 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3DD3C139C3;
 Tue,  2 May 2023 09:58:08 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 2rIuDbDeUGTnZQAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 02 May 2023 09:58:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d75d9f23-e8cf-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683021488; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=E3qmdns4q/cZ/NWmrzqGY03nxOPP2tMuMNz4jFkuKl4=;
	b=mBLYkkHJcOwd8p316DbLOqa2WVhMjfKHp1xXlIBJDCNwZ2l0o3I6E64A/AEVLOtfHU8RRh
	vjq4VkJniQ85luURpoMiGlxdQoooC9zcBaOwa+gpLJ9w1sVOL5rTB8rW3cjjnvHei9MozT
	iDFQzKooD/a8KZm8XwxVgHxSa0p28ss=
Message-ID: <1d6010d8-ec3b-6b34-f031-5c0e68ebaf6a@suse.com>
Date: Tue, 2 May 2023 11:58:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] xen/blkfront: Only check REQ_FUA for writes
To: Ross Lagerwall <ross.lagerwall@citrix.com>, xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, sstabellini@kernel.org,
 oleksandr_tyshchenko@epam.com, axboe@kernel.dk
References: <20230426164005.2213139-1-ross.lagerwall@citrix.com>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230426164005.2213139-1-ross.lagerwall@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------L7dAGg1Ytvh72l5dMzjFTHrd"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------L7dAGg1Ytvh72l5dMzjFTHrd
Content-Type: multipart/mixed; boundary="------------EA7GBIwmE5skG7fSwXaPF5qz";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Ross Lagerwall <ross.lagerwall@citrix.com>, xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, sstabellini@kernel.org,
 oleksandr_tyshchenko@epam.com, axboe@kernel.dk
Message-ID: <1d6010d8-ec3b-6b34-f031-5c0e68ebaf6a@suse.com>
Subject: Re: [PATCH] xen/blkfront: Only check REQ_FUA for writes
References: <20230426164005.2213139-1-ross.lagerwall@citrix.com>
In-Reply-To: <20230426164005.2213139-1-ross.lagerwall@citrix.com>

--------------EA7GBIwmE5skG7fSwXaPF5qz
Content-Type: multipart/mixed; boundary="------------KMPQhgyXkctIaC4EjZvsJSnt"

--------------KMPQhgyXkctIaC4EjZvsJSnt
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjYuMDQuMjMgMTg6NDAsIFJvc3MgTGFnZXJ3YWxsIHdyb3RlOg0KPiBUaGUgZXhpc3Rp
bmcgY29kZSBzaWxlbnRseSBjb252ZXJ0cyByZWFkIG9wZXJhdGlvbnMgd2l0aCB0aGUNCj4g
UkVRX0ZVQSBiaXQgc2V0IGludG8gd3JpdGUtYmFycmllciBvcGVyYXRpb25zLiBUaGlzIHJl
c3VsdHMgaW4gZGF0YQ0KPiBsb3NzIGFzIHRoZSBiYWNrZW5kIHNjcmliYmxlcyB6ZXJvZXMg
b3ZlciB0aGUgZGF0YSBpbnN0ZWFkIG9mIHJldHVybmluZw0KPiBpdC4NCj4gDQo+IFdoaWxl
IHRoZSBSRVFfRlVBIGJpdCBkb2Vzbid0IG1ha2Ugc2Vuc2Ugb24gYSByZWFkIG9wZXJhdGlv
biwgYXQgbGVhc3QNCj4gb25lIHdlbGwta25vd24gb3V0LW9mLXRyZWUga2VybmVsIG1vZHVs
ZSBkb2VzIHNldCBpdCBhbmQgc2luY2UgaXQNCj4gcmVzdWx0cyBpbiBkYXRhIGxvc3MsIGxl
dCdzIGJlIHNhZmUgaGVyZSBhbmQgb25seSBsb29rIGF0IFJFUV9GVUEgZm9yDQo+IHdyaXRl
cy4NCj4gDQo+IFNpZ25lZC1vZmYtYnk6IFJvc3MgTGFnZXJ3YWxsIDxyb3NzLmxhZ2Vyd2Fs
bEBjaXRyaXguY29tPg0KDQpBY2tlZC1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2Uu
Y29tPg0KDQoNCkp1ZXJnZW4NCg==
--------------KMPQhgyXkctIaC4EjZvsJSnt
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------KMPQhgyXkctIaC4EjZvsJSnt--

--------------EA7GBIwmE5skG7fSwXaPF5qz--

--------------L7dAGg1Ytvh72l5dMzjFTHrd
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRQ3q8FAwAAAAAACgkQsN6d1ii/Ey/N
oAf/Y0NoXQFylmrPidnUfvHRCcY4DCzY0QIQ9+y+7uP5XrvPp9yEpcikk7Oi2pkxcEpFL844n/uT
rtZdjhhYqhTh95OSvyX04XfGWKT+uXoaBndQW3coqiuzneG0RZRzZoxCh8w/xdRqgIIdeSp0mkaM
TlBnBrhOmlyC45Kevr5ih1jPO4pYf4PT/Qa71Iv0o47N2+O0IAOvLt4/N8eX3fgb2KicQoy4v48e
CPFO2kz92vlwRR2IDriK/M7kYwDldrxfBwOOEWdtdcgY/+dorkTuCAmleq+3S413pC4eSh9+cZrq
NzvA19d102W+gRexoEuAqZIb6FvlpaG7Y37nqWUSYg==
=Uish
-----END PGP SIGNATURE-----

--------------L7dAGg1Ytvh72l5dMzjFTHrd--


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:11:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 10:11:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528438.821575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmyX-0003Kh-Hf; Tue, 02 May 2023 10:11:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528438.821575; Tue, 02 May 2023 10:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptmyX-0003Ka-Ed; Tue, 02 May 2023 10:11:13 +0000
Received: by outflank-mailman (input) for mailman id 528438;
 Tue, 02 May 2023 10:11:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptmyV-0003KU-SH
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 10:11:11 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a64ff230-e8d1-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 12:11:07 +0200 (CEST)
Received: from mail-dm6nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 06:11:04 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BL1PR03MB6152.namprd03.prod.outlook.com (2603:10b6:208:319::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 10:11:02 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 10:11:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a64ff230-e8d1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683022267;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=YB1v6eMwKwthpuGEZUBw5DLY5WhZFvqP0KKlCoDLEa0=;
  b=I0Gl62pgNa4Ym33Rapcf3DZKSLE1644n9x0Y4MBFERZLAW25QqPYv29b
   6bx/rHHGa12wBW+1hc8hV3I2id0p/MuTdzm7rbtLF2bxCfzLb7fVLn8VA
   BMzvN9jNrT+maCFtCU/iY1p1hPg6Vhvkr1EGg8Ndo5Y+uQ0uYBJJyGWpS
   A=;
X-IronPort-RemoteIP: 104.47.57.177
X-IronPort-MID: 107440925
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:3jfhTaJOKm1AP97MFE+RHJQlxSXFcZb7ZxGr2PjKsXjdYENS3jxSy
 zBKW22HPf6LYWT3eNx0PoTlpB5Q78LXmtA2HVNlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4wRkPakjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c4pE11/6
 8QiNwsDRQ6e3f+M8eKfFs9V05FLwMnDZOvzu1lG5BSAVLMKZM6GRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/dopTGMk2Sd05C0WDbRUsaNSshP2F6Ru
 0rN/njjAwFcP9uaodaA2iv02bOTxXKhBer+EpWA0ONmm2XPxVcqJyYNWFip/6mwukWhDoc3x
 0s8v3BGQbIJ3EyiTd7ndxS9qWyDuFgXXN84O/037kSBx7TZ5y6dB3MYVXhRZdo+rsg0SDc2k
 FiTkLvBBzN1ubmRYXuY/6WTq3W5Pi19BW0IaDIATAAFy8L+u4x1hRXKJv58FIalg9uzHiv/q
 w1mtwA7jrQXyMIOiaOy+Amehyr2/8eWCAko+g/QQ2SpqBtjY5KobJCp7l6d6utcKIGeTR+Ku
 31sd9Wi0d3ixKqlzESlKNjh1pnzjxpZGFUwWWJSIqQ=
IronPort-HdrOrdr: A9a23:BEak76mVvcyndKpFxTq8erpV/CrpDfLo3DAbv31ZSRFFG/Fw9/
 rCoB17726QtN91YhsdcL+7V5VoLUmzyXcX2/hyAV7BZmnbUQKTRekP0WKL+Vbd8kbFh41gPM
 lbEpSXCLfLfCJHZcSR2njELz73quP3jJxBho3lvghQpRkBUdAF0+/gYDzranGfQmN9dP0EPa
 vZ3OVrjRy6d08aa8yqb0N1JNQq97Xw5fTbiQdtPW9f1DWz
X-Talos-CUID: =?us-ascii?q?9a23=3Apkj6emsJRgvcs5uQjflRdlUw6It7YEHs0k/qPHS?=
 =?us-ascii?q?2Inh0EqCveA+0pIVdxp8=3D?=
X-Talos-MUID: 9a23:HYAp8AnBXRDg4gGhU7/3dnpaEtpk+JzyC3s0vq4dtsqWa3x5CzyC2WE=
X-IronPort-AV: E=Sophos;i="5.99,243,1677560400"; 
   d="scan'208";a="107440925"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cdQ6UwkklcwRH6HDP8BBmJ7VSL0EAyT2Z6SbvPpZnm/Joe0fQRR4uxwRO+3GXQgIETw5uiHULHSCEB/pzwUEFFZQfQ+1zzcCe4kfGwcxI3ux5f9b3mHh9/+m7lFKeBaBRDRw5Wq3NnqQOQd2OieGk03+sDGdKOaSmapxYcVRPsHw/so7EJ9m+8DAr+gj0ea9TjO4dG+CKQkF7xY/FoNFNqn0cMzqVi/cPUuSgkogQD/eZ3uZYh5n7Sx0/gbshg7AE0u6odhmJNkyOsqO0aZk8wYBQI2qwuLobEh+PeZf04zEG/Eyk96sHz7WY4colqtBzJ4pVrYGz9CyAjC1MRg27Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6BE+8aTgQq0j7fMxCMU6txBUZTgnLVdNNaZi1+b/qx8=;
 b=MKzdG40lb+4c8WlZVMdGiPhPaI+GvBZ57ZVfu2Uz4lEjdchAJsqvf8q7ji8EwiJWq9zahPdoi0rd+c3w9cbQeHpLedbLhRc2AvCMB7LxZTL3axJBLj9qVpqEx0S/Zq+J/R+lLk9YHG2Fs08hffJepetEXlY2xc3wgRSGYNWmXm/KTarfLNv3nLg+CJNES+H44U2/BrNEQQUockcZoMoH97DCSfqNpHxhYMYsVCYc7z5IKLBno+8+Yu6kUSCV8SyiXOxcwHY7BtleosuYgx/2iVwLmcwf+1xdiwF/rj2B2FsoFj3wBIaq2PgA3TjdG47T0TnX0hJl+pzGf/NQCrvPwQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6BE+8aTgQq0j7fMxCMU6txBUZTgnLVdNNaZi1+b/qx8=;
 b=okjlOUYa7lBuvRQVrFe70MaxR/1ZUdeiGVyyarlcvSIbfXiYzHHM3kSiTHxTlT+sSUd2TEB6WVNiIyw51A9Dsy6jbbm5N8Kqy/rBiklgXA7ZDpl+AbDxiP7sjz+9km4TcHmqHr7GvHaNKdxQgZvIHvFiLI2vT1MjHPX3hfiJxV4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 2 May 2023 12:10:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Ruben Hakobyan <hakor@amazon.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/msi: dynamically map pages for MSI-X tables
Message-ID: <ZFDhr+IlwjCDPOOC@Air-de-Roger>
References: <20230426145520.40554-1-hakor@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230426145520.40554-1-hakor@amazon.com>
X-ClientProxiedBy: LO4P123CA0153.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:188::14) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|BL1PR03MB6152:EE_
X-MS-Office365-Filtering-Correlation-Id: bc1c8ae6-692d-4ada-29bb-08db4af58824
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	HmSr5/ajl6GJlEH+N7xomL2Yde8SuqG7qyrITcVvrYhblO1K88LKENs1UqafY9mxJkEibscftOI2Ys2BwCnEexkELfESbPc8B6Je0NmzZ4Osxj+p+yMijUKGxEO/A72unArE8S0oB8E9kGDAU4uGtXUyhM/8Q/y7ZhwpuCeglJSK3Be+y1SZKemz1ZC3TV5joYeFF1zcATMSZIj0j8/nEl9X/Bi5v6QDlzX6MLdC4ltsloPohz/fM3BATmIujAdNtCUXTz/AL469lKZzE3T0wPH5QUM7yPhCu0pGjK2Ty4SmPktWgeU31kDslOzHxuCRK/R7eVNr0VPXcfzvnm/c4TPyTlYYz+Au4/7Ka1j8wEWwPWW95WdYx3J6Xb/UrpbNeCb98O9W2q+InmK41DDM8k7FvOboTJmP0v5oukHK3ok9IG2S9D03Zy+5dQ11CIzKQjRM/ygUmu2UmKZ7k/+H82BoqMznTIbnCuy3N01+bQV4k2BRFem/+kRKRtXbJWeWf9KlN6jE/Y59eNdfcHiA0UgTmO7vsv2WASyN97EH6wuvw1qVQcqgPcUM95DRGOSi9z3g31YHl16qlFI4F8FufzFJDadKUgHzVjM+d3J5FF4=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(346002)(366004)(396003)(136003)(39860400002)(376002)(451199021)(8936002)(38100700002)(2906002)(41300700001)(8676002)(5660300002)(85182001)(33716001)(86362001)(6486002)(6666004)(6506007)(9686003)(26005)(6512007)(54906003)(478600001)(83380400001)(186003)(6916009)(66556008)(66946007)(66476007)(316002)(4326008)(82960400001)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SHVNOU9Yenh1eTJnbjl3allENmdKaGdybkMyK0JEenh0bk9TQW00VFdQbWhT?=
 =?utf-8?B?YTNOU3A5KzRBSkRUM2V6ckFLT2NMeERhbS9MMjBhNWR6cXhhVFUwcmR5OTlr?=
 =?utf-8?B?M3J6Z2MwQlhpZnJtWTZyQzFTM1RVODVuQ0IrNWFES3VsS2ZWdUlxNENGLzV0?=
 =?utf-8?B?UHZib0UySFdEQ29aRE9UV2lRUUh1ZTd6Q1dTVWFaVFRDWFYwMFB1YkNpQWpy?=
 =?utf-8?B?TlZCZFZmQlNoZFhiS2pBRk5lMitoT3RhUWRpYWRVZTM4emwzaThRblNpR2lM?=
 =?utf-8?B?bEYrU29CTDB6eWdLVW4rS08weXZCM3ZadmFWeDNtQ1F2UkJUc3poTGt6WDdv?=
 =?utf-8?B?VTdIR3gzRWRBNjlUSm1XclJYdENmTDJjYzNYN2lYK1J6Y1pMZFhiL3h5OVlw?=
 =?utf-8?B?NzNjYWoxNExFRnZLQkVZcVVSVmJRZ0hEWWlJSFB0elpTcVlJdE83RXZpMWMr?=
 =?utf-8?B?bDYvSDVHTW02US9TSk8wM2RHdEdLWGhkQzdQdnVrWTZBOWk5Q2ZoUXhYRHps?=
 =?utf-8?B?bXBoYVlqbW1SRUdGWDJJQi9zM1J0OFJZdE5DYkZKMWRzS0xmVmQwMEF4U1p3?=
 =?utf-8?B?cDg1bW92dG93NzRJZzl2T1ZUSXVENTd1cUJYN1lOZTFWRk9rbFJmZ1Y3dWpt?=
 =?utf-8?B?NFkzNzlXVXFLc2FxemlJV3ZHOGdGd1UzOTFEQ1oxdEdMelFGbjZncDdOaE80?=
 =?utf-8?B?eHU4Nnd4TVlTT0I5ZDdTUTBzLzFoL3BneUV2OUdtNUwwcWZaZDJEMjZoTmpI?=
 =?utf-8?B?a0VTc1pBRkZpTjRlN2JnS005allhb2pDa1JKRWtQditUc3diTEdxOWdwNUZn?=
 =?utf-8?B?WjZwRjY4YktjWlhIVTRFUU5hdmdYaU9DQW84cTRoSERBUEJUemNRanhFNUx2?=
 =?utf-8?B?ajN6SW9Tc0RlTWh0elVERWFQTmk5NXJuZXFhMHFOWUxCcmRIblk1eUJ3czRp?=
 =?utf-8?B?NThuUnF5SjgvWHdKbHYyajNuT2EwdHVyeE1yTmtpQjFSbHRidmVDNUNzZGdP?=
 =?utf-8?B?VFY4NDluTHcrUVBPRTg0Z2VRT3dPdWp2UTZuc0tWalBnRWJSSGFJUzFENXFG?=
 =?utf-8?B?a2QvclRKNG9MYzNTdGhOajZJM0FheVJOcWFlZjZIVkdTN0FzSWRhWUxOMjNJ?=
 =?utf-8?B?N0F3dXdWbVk0b0JVcU44R0lrUGpnNEtBZkw2YlZCTVNtakw1SkVvdkNTUUNv?=
 =?utf-8?B?a09jSExUUGJmaGxwZy9DYTIwRWl3RlpqSHltems5VEk2ZDcyVVgyT1pvQVdl?=
 =?utf-8?B?ajYzVFdUZzgwSVp5Z1BEQWdqM0wyWEp5d0l6NnpTNGRpTE5CUFNTNzBWaTdr?=
 =?utf-8?B?Z2tKbGk1bHgyR0dnMXVSdzVkZ0M3aUJYRTF2S256d0tDUjh3UE1TQU0xditG?=
 =?utf-8?B?cEpxV0ZGaHhPYVFVNXQ3bTlkbGpDY3czbFhIaE5MYlArbUt4NVZkQWxyWE10?=
 =?utf-8?B?akxyWHU3djlkRnE5ZG1CY0lzRXFUZGh3cHlPT1gwdkpEdXdDZUY0M0srZmdY?=
 =?utf-8?B?VEtNb0VwdkJKWUZuMlZyNEREbVRiaFQ1OFdPRWtjOEU0R04zeDdmTzh5ZndO?=
 =?utf-8?B?ajRKclEvdExZL1hrU2VhVXVUa2lma2NzNEVqVHRnMjB5MkRZUWlHa0I1VURF?=
 =?utf-8?B?cUVibVZBMlgwSGZYdEVDRDJJOEJYbVd5RTdpR05FOGNOSkZYVVF2WFJveVlX?=
 =?utf-8?B?bHZ3YWdEaXBmU3VpeUpLN1ozaDFhOEhmOFcwMlBhRGZKTUJRa3BLMjd1citi?=
 =?utf-8?B?YzAwS29xRjBVbU5RMnQrSGtGYkFRdW5wTHdhTitSZHA0dDVsa1JXRzRQWk9z?=
 =?utf-8?B?VXRyd1BUaXBCZUVWc0VNK3drYk1rY2pxQkg5MGZIM3lyNnZ0Y3I3Q1o3U3M1?=
 =?utf-8?B?b2U1YjFmRFA0d1NsRG1pUXdyR3BnU0FyUmwyaTRWSlZlWEhlYTBKSm1nUjls?=
 =?utf-8?B?dlo2bElIVHVuUk1rT0RBNTBjTDIzRytCUElBRU10WkN1Tkdta1NBZ1UyU3FQ?=
 =?utf-8?B?dkhlUE80enZMRStLL3J2cVNRK1V1d0FTY2JITlhITCtURUt6MWUyVHpvVWlj?=
 =?utf-8?B?dHhFZUI3UklVUms4QUhWTFc0V2U3akswOFNJcHZMaHBMQjhPOTRwb2pzRnNw?=
 =?utf-8?B?bE5VZFJuNnJJLzJHSTdRSVZHUGpVSGpuZlZEb0pMY3JzbEV1T2d5ZEc3TVdz?=
 =?utf-8?B?MlE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	pcZGgg7v4i8B5Xx7rE+cYCqVHTSizMlAgRybqlQxFqgEDf1LrhWNlaERK/AK6Pa0JXrFvhvB6PPlLOO+CVpaE0zsbU7tQ9jhPe6zGYIZWMsuACY7HWIKXTs+Gp7n7oC/D7QqbiT2efHPySthhVaH+gf1YYkBsoZVHxLeMiwmzW2XEnvIkzOlDuAmdzqD92NgGtxthcDYIogPurrt6Xv6SrlI6Vaqr/HbYxzqFUltrgebeJKliXZP0HjQ3xjyEW2Y5v4jqxX3NzygjxKOO20aDY+BJb9U8KbQ5oPM++L2wMvD2X9yO9974b+yDUCtTq5xQe2CkR9olLFGaR8qV8Yl4mZ4lxja/rXutLo6DJzzNFGs456zay8RPlMoxcWqkupUH9H6Ync1hS+tfVqMeWVh2+uCFkmjctGU12r3PBMbpyMBNLW7BKo8SWh2MgckIwaSFEjq0wXhUubb+8EwYgvGoYLB5P62ZsilHIGWZv3PFwTi0/Sy5cM2CGUdOYNngu5AJHY3RjO0j5XLeB3q6Om5zt3HNnBepB3UUQmNn3r8EKQUnv13aLiKyW3cO2LQnfwFr64VQOR4KGjc7v2PNLBVOPOlkhBhkbA12zuq3nDKLSzzswucCHU1j114wZROtWTJ9XEogxEH+NuI160+d7vb3HZXOFnI5l1Fhr5/ZoEenu30rSQriMkYrNulO5pN40l7sqTpHm1IR6WhdVM8XEkRoq13R6RsnJ7A0BAuqDSzNOW9W+6LV3k1KUkq8floLpqMr0OP90ch0k7B7E8Vn8ECmv+LtP9RFnxCkU0h5pId73nuGkptR0/wWT+KCOgfujhZ8CtesNGXNE0152zFCtv2eUs9iLS4zPCbliiFgXezpQc=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bc1c8ae6-692d-4ada-29bb-08db4af58824
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 10:11:01.6296
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: d/orvhqYJj8m3AZYWHpyR4VDBfH6rOhT+ZEI36dCkr6KDK+r51VmRv+QCxhR6I/kNcGLoJ+TYMyqsxf2lBhcDQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6152

On Wed, Apr 26, 2023 at 02:55:20PM +0000, Ruben Hakobyan wrote:
> Xen reserves a constant number of pages that can be used for mapping
> MSI-X tables. This limit is defined by FIX_MSIX_MAX_PAGES in fixmap.h.
> 
> Reserving a fixed number of pages could result in an -ENOMEM if a
> device requests a new page when the fixmap limit is exhausted and will
> necessitate manually adjusting the limit before compilation.
> 
> To avoid the issues with the current fixmap implementation, we modify
> the MSI-X page mapping logic to instead dynamically map new pages when
> they are needed by making use of ioremap().

I wonder if Arm plans to reuse this code, and whether arm32 would then
be better off keeping the fixmap implementation to avoid exhausting
virtual address space.

This also has the side effect that ioremap() may now need to allocate
a page in order to populate the page tables for the newly allocated VA.

> Signed-off-by: Ruben Hakobyan <hakor@amazon.com>
> ---
>  xen/arch/x86/include/asm/fixmap.h |  2 -
>  xen/arch/x86/include/asm/msi.h    |  5 +--
>  xen/arch/x86/msi.c                | 69 ++++++++-----------------------
>  3 files changed, 19 insertions(+), 57 deletions(-)
> 
> diff --git a/xen/arch/x86/include/asm/fixmap.h b/xen/arch/x86/include/asm/fixmap.h
> index 516ec3fa6c..139c3e2dcc 100644
> --- a/xen/arch/x86/include/asm/fixmap.h
> +++ b/xen/arch/x86/include/asm/fixmap.h
> @@ -61,8 +61,6 @@ enum fixed_addresses {
>      FIX_ACPI_END = FIX_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1,
>      FIX_HPET_BASE,
>      FIX_TBOOT_SHARED_BASE,
> -    FIX_MSIX_IO_RESERV_BASE,
> -    FIX_MSIX_IO_RESERV_END = FIX_MSIX_IO_RESERV_BASE + FIX_MSIX_MAX_PAGES -1,
>      FIX_TBOOT_MAP_ADDRESS,
>      FIX_APEI_RANGE_BASE,
>      FIX_APEI_RANGE_END = FIX_APEI_RANGE_BASE + FIX_APEI_RANGE_MAX -1,
> diff --git a/xen/arch/x86/include/asm/msi.h b/xen/arch/x86/include/asm/msi.h
> index a53ade95c9..16c80c9883 100644
> --- a/xen/arch/x86/include/asm/msi.h
> +++ b/xen/arch/x86/include/asm/msi.h
> @@ -55,9 +55,6 @@
>  #define	 MSI_ADDR_DEST_ID_MASK		0x00ff000
>  #define  MSI_ADDR_DEST_ID(dest)		(((dest) << MSI_ADDR_DEST_ID_SHIFT) & MSI_ADDR_DEST_ID_MASK)
>  
> -/* MAX fixed pages reserved for mapping MSIX tables. */
> -#define FIX_MSIX_MAX_PAGES              512
> -
>  struct msi_info {
>      pci_sbdf_t sbdf;
>      int irq;
> @@ -213,7 +210,7 @@ struct arch_msix {
>          unsigned long first, last;
>      } table, pba;
>      int table_refcnt[MAX_MSIX_TABLE_PAGES];
> -    int table_idx[MAX_MSIX_TABLE_PAGES];
> +    void __iomem *table_va[MAX_MSIX_TABLE_PAGES];
>      spinlock_t table_lock;
>      bool host_maskall, guest_maskall;
>      domid_t warned;
> diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
> index d0bf63df1d..8128274c07 100644
> --- a/xen/arch/x86/msi.c
> +++ b/xen/arch/x86/msi.c
> @@ -24,7 +24,6 @@
>  #include <asm/smp.h>
>  #include <asm/desc.h>
>  #include <asm/msi.h>
> -#include <asm/fixmap.h>
>  #include <asm/p2m.h>
>  #include <mach_apic.h>
>  #include <io_ports.h>
> @@ -39,75 +38,44 @@ boolean_param("msi", use_msi);
>  
>  static void __pci_disable_msix(struct msi_desc *);
>  
> -/* bitmap indicate which fixed map is free */
> -static DEFINE_SPINLOCK(msix_fixmap_lock);
> -static DECLARE_BITMAP(msix_fixmap_pages, FIX_MSIX_MAX_PAGES);
> -
> -static int msix_fixmap_alloc(void)
> -{
> -    int i, rc = -ENOMEM;
> -
> -    spin_lock(&msix_fixmap_lock);
> -    for ( i = 0; i < FIX_MSIX_MAX_PAGES; i++ )
> -        if ( !test_bit(i, &msix_fixmap_pages) )
> -            break;
> -    if ( i == FIX_MSIX_MAX_PAGES )
> -        goto out;
> -    rc = FIX_MSIX_IO_RESERV_BASE + i;
> -    set_bit(i, &msix_fixmap_pages);
> -
> - out:
> -    spin_unlock(&msix_fixmap_lock);
> -    return rc;
> -}
> -
> -static void msix_fixmap_free(int idx)
> -{
> -    spin_lock(&msix_fixmap_lock);
> -    if ( idx >= FIX_MSIX_IO_RESERV_BASE )
> -        clear_bit(idx - FIX_MSIX_IO_RESERV_BASE, &msix_fixmap_pages);
> -    spin_unlock(&msix_fixmap_lock);
> -}
> -
> -static int msix_get_fixmap(struct arch_msix *msix, u64 table_paddr,
> +static void __iomem *msix_map_table(struct arch_msix *msix, u64 table_paddr,

I think msix_{get,put}_entry() might be better, as you are not mapping
and unmapping the table at every call.

>                             u64 entry_paddr)
>  {
>      long nr_page;
> -    int idx;
> +    void __iomem *va = NULL;
>  
>      nr_page = (entry_paddr >> PAGE_SHIFT) - (table_paddr >> PAGE_SHIFT);
>  
>      if ( nr_page < 0 || nr_page >= MAX_MSIX_TABLE_PAGES )
> -        return -EINVAL;
> +        return NULL;
>  
>      spin_lock(&msix->table_lock);
>      if ( msix->table_refcnt[nr_page]++ == 0 )
>      {
> -        idx = msix_fixmap_alloc();
> -        if ( idx < 0 )
> +        va = ioremap(entry_paddr, PAGE_SIZE);

You are missing an 'entry_paddr & PAGE_MASK' here AFAICT; otherwise
ioremap() won't return a page-aligned address when the entry is not the
first one on the requested page.

> +        if ( va == NULL )
>          {
>              msix->table_refcnt[nr_page]--;
>              goto out;
>          }
> -        set_fixmap_nocache(idx, entry_paddr);
> -        msix->table_idx[nr_page] = idx;
> +        msix->table_va[nr_page] = va;
>      }
>      else
> -        idx = msix->table_idx[nr_page];
> +        va = msix->table_va[nr_page];
>  
>   out:
>      spin_unlock(&msix->table_lock);
> -    return idx;
> +    return va;
>  }
>  
> -static void msix_put_fixmap(struct arch_msix *msix, int idx)
> +static void msix_unmap_table(struct arch_msix *msix, void __iomem *va)

va can be made const here.

>  {
>      int i;
>  
>      spin_lock(&msix->table_lock);
>      for ( i = 0; i < MAX_MSIX_TABLE_PAGES; i++ )
>      {
> -        if ( msix->table_idx[i] == idx )
> +        if ( msix->table_va[i] == va )
>              break;
>      }
>      if ( i == MAX_MSIX_TABLE_PAGES )
> @@ -115,9 +83,8 @@ static void msix_put_fixmap(struct arch_msix *msix, int idx)
>  
>      if ( --msix->table_refcnt[i] == 0 )
>      {
> -        clear_fixmap(idx);
> -        msix_fixmap_free(idx);
> -        msix->table_idx[i] = 0;
> +        vunmap(va);

iounmap()

> +        msix->table_va[i] = NULL;
>      }
>  
>   out:
> @@ -568,8 +535,8 @@ int msi_free_irq(struct msi_desc *entry)
>      }
>  
>      if ( entry->msi_attrib.type == PCI_CAP_ID_MSIX )
> -        msix_put_fixmap(entry->dev->msix,
> -                        virt_to_fix((unsigned long)entry->mask_base));
> +        msix_unmap_table(entry->dev->msix,
> +                       (void*)((unsigned long)entry->mask_base & PAGE_MASK));

Did you consider calling this msix_unmap_entry() and just passing the
entry VA to the function, deriving the page from it there?

round_pgdown() might be helpful here otherwise.

>      list_del(&entry->list);
>      xfree(entry);
> @@ -892,10 +859,10 @@ static int msix_capability_init(struct pci_dev *dev,
>      {
>          /* Map MSI-X table region */
>          u64 entry_paddr = table_paddr + msi->entry_nr * PCI_MSIX_ENTRY_SIZE;
> -        int idx = msix_get_fixmap(msix, table_paddr, entry_paddr);
> +        void __iomem *va = msix_map_table(msix, table_paddr, entry_paddr);
>          void __iomem *base;
>  
> -        if ( idx < 0 )
> +        if ( va == NULL )
>          {
>              if ( zap_on_error )
>              {
> @@ -907,9 +874,9 @@ static int msix_capability_init(struct pci_dev *dev,
>  
>              pci_conf_write16(dev->sbdf, msix_control_reg(pos), control);
>              xfree(entry);
> -            return idx;
> +            return -ENOMEM;
>          }
> -        base = fix_to_virt(idx) + (entry_paddr & (PAGE_SIZE - 1));
> +        base = va + (entry_paddr & (PAGE_SIZE - 1));

Now that msix_map_table() returns a virtual address, you could likely
do the adjustment in there and return the entry VA from
msix_map_table() or equivalent? (see my naming suggestion above)

Otherwise please use ~PAGE_MASK.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:15:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 10:15:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528442.821585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptn2Q-0003wF-1l; Tue, 02 May 2023 10:15:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528442.821585; Tue, 02 May 2023 10:15:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptn2P-0003w2-Ub; Tue, 02 May 2023 10:15:13 +0000
Received: by outflank-mailman (input) for mailman id 528442;
 Tue, 02 May 2023 10:15:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptn2O-0003uZ-UF; Tue, 02 May 2023 10:15:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptn2O-0007dp-Lk; Tue, 02 May 2023 10:15:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptn2O-0002Ue-9S; Tue, 02 May 2023 10:15:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptn2O-00058k-8z; Tue, 02 May 2023 10:15:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2g68SVldXs+rOMPJbKUl2N0VuAvUCUTq6+uUs0fULe8=; b=QwsKgDmQC+CSzqfTo8bvkSa66y
	1CAHgg39kl8tP//BxtIDfOpn/jABZIUmxAkR9mCoppLlFrK4f39zyWCeJkSFJkvDNmaA+nc5HVOzf
	xJAywC6+20nahHqJCucTO56kEJWDUgzh2hjxav9lbdk2Y28ApU1/N/rz5RkQJCwcOe+U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180500-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180500: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=865fdb08197e657c59e74a35fa32362b12397f58
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 May 2023 10:15:12 +0000

flight 180500 linux-linus real [real]
flight 180503 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180500/
http://logs.test-lab.xenproject.org/osstest/logs/180503/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                865fdb08197e657c59e74a35fa32362b12397f58
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   15 days
Failing since        180281  2023-04-17 06:24:36 Z   15 days   26 attempts
Testing same since   180500  2023-05-02 01:41:05 Z    0 days    1 attempts

------------------------------------------------------------
2156 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 261762 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:18:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 10:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528449.821594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptn5h-0004a8-KA; Tue, 02 May 2023 10:18:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528449.821594; Tue, 02 May 2023 10:18:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptn5h-0004a1-HZ; Tue, 02 May 2023 10:18:37 +0000
Received: by outflank-mailman (input) for mailman id 528449;
 Tue, 02 May 2023 10:18:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptn5g-0004Zt-NH
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 10:18:36 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2053.outbound.protection.outlook.com [40.107.7.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b1db14ae-e8d2-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 12:18:34 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9212.eurprd04.prod.outlook.com (2603:10a6:10:2fb::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 10:18:05 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 10:18:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1db14ae-e8d2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CsLNfOW1W2yiuCDCNHqvfGcAigxhKI7kO4AessEpqaxhab5srP/+E8y+Yh4/it4dufy3efTI1scs8ER7ickA5l6D1fqANu8uXIdM1a7Hj+9DHv1qn+2wKku6qKJMS+ZaTWacnoJSRYtCeuM5pbpYO4faYnXfELAtMt2SmKtAFUKeaYozSFH7yZaEPfoM/Fx2xUUXJnoaCS02LhhtPzSoMinLJ71sVx54jFREmPsYHxB8RlTtUBckXNKuyTXvlouyWOQAUvUNe+6v6si2Mw4scsYZ9dbkMNFFfiJaNh+nQPWxj3scBLltxvRvZRjhYhou/qjJzBYuVix6O7kookIL+Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xEep6J3w5rqH6AHqUm58pv5LcYpdYtdRvkslNxU9Hi0=;
 b=lMeeamR4dnJZj/VHRXu8MtVO8ydL97TGdNi5I3NEMy2EoRktlPNgOFnujmAlcIbCamd6EePztNBb6+r/wUcfkFapt1Fvvti1ZHyCb1a6k6OivUYfWV1g6gudpuDAiJNMbWHm9XpDMGSsk68TGG1+MZE21JMJoLytrIKs18koCgIEx0c+1cO/R/OkFEeab00oBMLjJwAT0eSTcy7nTxLEjahnbab4wK5WKpdMVKL5VmufEh8kR9oywbg7FpHv/0e9zGEiSqx7/yevFB+kHdtjLq3X4cDBg6g6YJsP+wz+HdkzBvoFcsyL8mKQZAjZh4bgMq+IiG3XtkQkrZS5L7/AcQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xEep6J3w5rqH6AHqUm58pv5LcYpdYtdRvkslNxU9Hi0=;
 b=NsFcYmJCBrPUKSNjnb4uarYtiTGM0UAYFIYa85CNeR+hA0JMB0fGOVIliOkbWoOOy5Nr1SjLgnpdyUnM/YFUf8KUJXrKybHClaSpapLXnbrD2NujVcBHae/X2lg7K9Md5xMvC9VQtEfQ7+7K/Q72X3TBxH11VmljkjnKpaEEKVYijzTp/1hJTl5TEZ0aamCidRLv0JvhFBnjzavaF/8zuRrLBwolMHMkPP3APf5Ln+zLQqLQjJu7Dgs77CTI4AImAR8IXbmzADJNXf123SrLBTrtkO7WAy0IsneVpX9ZMrHe0gX/gB1rVPI2+HQLpflSfpMyzK4W7imYe58ljUjcHA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d865ebe2-81b8-465b-710c-81b9b07c9fa5@suse.com>
Date: Tue, 2 May 2023 12:18:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/msi: dynamically map pages for MSI-X tables
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ruben Hakobyan <hakor@amazon.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20230426145520.40554-1-hakor@amazon.com>
 <ZFDhr+IlwjCDPOOC@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFDhr+IlwjCDPOOC@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0100.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB9212:EE_
X-MS-Office365-Filtering-Correlation-Id: d4a28759-f6c7-44a3-ed54-08db4af68461
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d4a28759-f6c7-44a3-ed54-08db4af68461
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 10:18:04.8553
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9R72a0Hf6PwzwKaBPqgw+V77gQGse1nWl3+k++TGoTltIJdOXLquUPaHVCbqfa7pUOaKcsffi5JnVNIarsZAGA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB9212

On 02.05.2023 12:10, Roger Pau Monné wrote:
> On Wed, Apr 26, 2023 at 02:55:20PM +0000, Ruben Hakobyan wrote:
>> Xen reserves a constant number of pages that can be used for mapping
>> MSI-X tables. This limit is defined by FIX_MSIX_MAX_PAGES in fixmap.h.
>>
>> Reserving a fixed number of pages can result in -ENOMEM if a device
>> requests a new page once the fixmap limit is exhausted, and raising
>> the limit requires manually adjusting it and recompiling.
>>
>> To avoid the issues with the current fixmap implementation, we modify
>> the MSI-X page mapping logic to instead dynamically map new pages when
>> they are needed by making use of ioremap().
> 
> I wonder if Arm plans to reuse this code, and whether arm32 would then
> be better off keeping the fixmap implementation, to avoid exhausting
> virtual address space in that case.

I think this would then need to be something that 32-bit architectures
do specially. Right now, AIUI, PCI (and hence MSI-X) support on Arm
targets only Arm64.

> This also has the side effect of ioremap() now possibly allocating a
> page in order to fill the page table for the newly allocated VA.

Indeed, but I think the (vague) plan to switch to ioremap() has been
around for a pretty long time (perhaps ever since 32-bit support was
purged).

Jan
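[For readers following the thread: the trade-off being discussed can be
sketched with a toy model. This is illustrative C only — the names
(FIX_MSIX_MAX_PAGES aside) and the allocation stand-ins are hypothetical,
not the actual Xen fixmap or ioremap() interfaces.]

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Fixmap-style mapping: a compile-time pool of VA slots. Once the pool
 * is exhausted, further mappings fail — the -ENOMEM the patch removes. */
#define FIX_MSIX_MAX_PAGES 4

static int fixmap_used;

static intptr_t map_msix_fixmap(void)
{
    if (fixmap_used >= FIX_MSIX_MAX_PAGES)
        return -ENOMEM;               /* fixed limit hit */
    return 0x1000 * ++fixmap_used;    /* fake VA from the static pool */
}

/* ioremap()-style mapping: allocate a fresh mapping on demand. No fixed
 * limit, but each call may itself need memory for page-table pages
 * (modelled here by malloc), which is Roger's point about the side
 * effect of the new approach. */
static intptr_t map_msix_dynamic(void)
{
    void *va = malloc(4096);          /* stand-in for ioremap() */
    return va ? (intptr_t)va : -ENOMEM;
}
```

The sketch only shows the failure-mode difference: the fixed pool fails
deterministically at the compile-time limit, while the dynamic variant
fails only under genuine memory pressure.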


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:20:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 10:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528452.821605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptn7t-00060v-0x; Tue, 02 May 2023 10:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528452.821605; Tue, 02 May 2023 10:20:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptn7s-00060o-UY; Tue, 02 May 2023 10:20:52 +0000
Received: by outflank-mailman (input) for mailman id 528452;
 Tue, 02 May 2023 10:20:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptn7r-00060g-UY
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 10:20:51 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 024749da-e8d3-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 12:20:50 +0200 (CEST)
Received: from mail-dm6nam04lp2044.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.44])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 06:20:41 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB6365.namprd03.prod.outlook.com (2603:10b6:510:b4::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 10:20:36 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 10:20:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 024749da-e8d3-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683022850;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=QthY+tIi41uwnnmCkFKwhmW5dSImhl3K8J/XmWvKTNU=;
  b=YUFqFKdjI4af/+8jL5JnGwkm4QM7zdTiTE5TTtCKU9qsd0nFJcL1mOUZ
   mSmzc1/Dl/krqj1BQnxqOY+eO2uN52CEgt2xxRDfOB7xdyFxOTcWpjqiX
   sVUXFg+aTU6kXQDgfEVSrUJVsH8wZAKfV1xiqhsQgaEWjRQdoD8tDGp4B
   0=;
X-IronPort-RemoteIP: 104.47.73.44
X-IronPort-MID: 109995977
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,243,1677560400"; 
   d="scan'208";a="109995977"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=C6/4yz/mBnEpw2zxwgYZIvgNZtvQ2q5nABzETJDF5P8tK/zRNMPprX+G0C3jvFdb/NuybslwByN1Gp3TXRfHkcE6Ej0GglL1+O39zKNFrBM5MEnXqX/mlhHnaEDKSbOLXqk+1oNjwkbNXOo1uBSq+DkOxo8GivxVjo8t614XlD+NgkSB1wtrNAyRD9kSf1Ny2E0lncQlt5aYOP5dsQSzg++00E2Sc6o7FoshTX63bzfrPUS8C5tSQ+wz3VVN0fb+RJKWQG6Eg2mD+/q0ijpb4bYoqsEdcPvEKXQ7DK7pC1FUWbJEAqxdCPKNuNXBZ56RD32tZKBpOR8MEzVTHJAzjw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pb6iw5cLRiSuesUXK7CeQp6i3v2ou/42Org036m02Yk=;
 b=S8/4/DgxIfVfrL+NibJ1ygKKsLDQqYhXkPYIqyxzIj3coOwiCxii2VYBzt3WfGF1HkiZ/ZOtuPgfIDCBv+qDpbMxisZXrxrNfFLwAnS+3M+SiN1JAgCCmZi4ngtkiwU0AzU1gGyRueJW3giPn+enq98i+/6ZjI0nZg/r9TSAIpq+UeFQ39ZgNeuBCLo5ASuKGu88gdZEL2r5YqBrj1KjbORo5kilfAvEY7idrZwvawUjYYPkr0T32T6KXt62rBXprJWZjoYiiEpULmIu2uB3KLo35kIHjS81sRKpZOrVhuvICYBX4Bfy4zg3fCrk0kHFrj6YsPN3+rKzftwxQGY8yg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pb6iw5cLRiSuesUXK7CeQp6i3v2ou/42Org036m02Yk=;
 b=iZvJhwSOiAp7jcue6DfVWsg/i3htYt3HgYbyxmrlnIMBbo5fhtC1O9EB9B2uIRMa7JpM39j4+G96vhqkXWOuqaroS1hT1l3i7cKGkr+FQnXpCGkK0DhDoLWB/bTBDRQa1baypxki616bqkf3JkypO2gDBZ+wtZl0KfV5nJhXpA8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 2 May 2023 12:20:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH] x86/mm: replace bogus assertion in paging_log_dirty_op()
Message-ID: <ZFDj79NCnfSxnyyN@Air-de-Roger>
References: <104c3456-03bd-37be-627b-45e614a616c1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <104c3456-03bd-37be-627b-45e614a616c1@suse.com>
X-ClientProxiedBy: LO4P123CA0291.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:196::8) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH0PR03MB6365:EE_
X-MS-Office365-Filtering-Correlation-Id: 387bc84d-1553-447c-a90c-08db4af6dec4
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 387bc84d-1553-447c-a90c-08db4af6dec4
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 10:20:36.4575
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: D3BE+uF+eLJjkh5VZPQ1G53u2SryNFd90jhiW9qTPqISdM1ksUWhwc+7/IMvW1Cxsm6uA8K4ROrtOyBsHrXP4g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6365

On Thu, Apr 27, 2023 at 02:29:06PM +0200, Jan Beulich wrote:
> While I was the one to introduce it, I don't think it is correct: A
> bogus continuation call issued by a tool stack domain may find another
> continuation in progress. IOW we've been asserting caller controlled
> state (which is reachable only via a domctl), and the early (lock-less)
> check in paging_domctl() helps in a limited way only.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:22:45 2023
Date: Tue, 2 May 2023 12:22:23 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ruben Hakobyan <hakor@amazon.com>, xen-devel@lists.xenproject.org,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/msi: dynamically map pages for MSI-X tables
Message-ID: <ZFDkXzflH9c2Duon@Air-de-Roger>
References: <20230426145520.40554-1-hakor@amazon.com>
 <ZFDhr+IlwjCDPOOC@Air-de-Roger>
 <d865ebe2-81b8-465b-710c-81b9b07c9fa5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d865ebe2-81b8-465b-710c-81b9b07c9fa5@suse.com>

On Tue, May 02, 2023 at 12:18:06PM +0200, Jan Beulich wrote:
> On 02.05.2023 12:10, Roger Pau Monné wrote:
> > On Wed, Apr 26, 2023 at 02:55:20PM +0000, Ruben Hakobyan wrote:
> >> Xen reserves a constant number of pages that can be used for mapping
> >> MSI-X tables. This limit is defined by FIX_MSIX_MAX_PAGES in fixmap.h.
> >>
> >> Reserving a fixed number of pages could result in an -ENOMEM if a
> >> device requests a new page when the fixmap limit is exhausted and will
> >> necessitate manually adjusting the limit before compilation.
> >>
> >> To avoid the issues with the current fixmap implementation, we modify
> >> the MSI-X page mapping logic to instead dynamically map new pages when
> >> they are needed by making use of ioremap().
> > 
> > I wonder if Arm plans to reuse this code, and whether then arm32 would
> > better keep the fixmap implementation to avoid exhausting virtual
> > address space in that case.
> 
> I think this would then need to be something that 32-bit architectures
> do specially. Right now aiui PCI (and hence MSI-X) work on Arm targets
> only Arm64.
> 
> > This also has the side effect of ioremap() now possibly allocating a
> > page in order to fill the page table for the newly allocated VA.
> 
> Indeed, but I think the (vague) plan to switch to ioremap() has been
> around for a pretty long time (perhaps forever since 32-bit support
> was purged).

Yup, I'm not saying the above should block the patch, but it might be
worth mentioning in the commit message.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:29:10 2023
Message-ID: <068ad06b-d766-4fc7-6bbc-289911441ee7@suse.com>
Date: Tue, 2 May 2023 12:28:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
 <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 02.05.2023 11:54, Andrew Cooper wrote:
> On 02/05/2023 10:22 am, Roger Pau Monne wrote:
>> Ensure that the base address is 2M aligned, or else the page table
>> entries created would be corrupt as reserved bits on the PDE end up
>> set.
>>
>> We have found broken firmware where the loader would end up loading
>> Xen at a non-2M-aligned region, and that caused a very difficult to
>> debug triple fault.
> 
> It's probably worth saying that in this case, the OEM has fixed their
> firmware.

I'm curious: What firmware loads Xen directly? I thought there was
always a boot loader involved (except for xen.efi of course).

I'm further a little puzzled by this talking about alignment and not
xen.efi: xen.gz only specifies alignment for MB2 afaik. For MB1, all
it specifies is the physical address (2Mb) that it wants to be loaded
at. So maybe MB2 wants mentioning here as well, for clarity?

>> @@ -670,6 +674,11 @@ trampoline_setup:
>>          cmp     %edi, %eax
>>          jb      1b
>>  
>> +        /* Check that the image base is aligned. */
>> +        lea     sym_esi(_start), %eax
>> +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
>> +        jnz     not_aligned
> 
> You just want to check the value in %esi, which is the base of the Xen
> image.  Something like:
> 
> mov %esi, %eax
> and ...
> jnz

Or yet more simply "test $..., %esi" and then "jnz ..."?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:29:13 2023
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0ZZalwAl4VL+hS2tkqb5oYm/g4u5h2Oc8shmcdV5Itc=;
 b=umlBnlMJ3Tq6wJXVJcrzAblP7y7JGH0Y1WcL+89fL53RstrMGgPXcIg6ZJA6Z8PGcT0rUcSLIONtlUOGQh78onp4msG/yy7i7AUIVfQbSE6Gk3+mG6NfXxJfhjf8WNzG+wONlDnk4WgWwpVXoaHeXNpjYsshd1K8oTr/LJyEcu0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 2 May 2023 12:28:58 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Message-ID: <ZFDl6rSYRzNEoVX6@Air-de-Roger>
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
 <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
MIME-Version: 1.0

On Tue, May 02, 2023 at 10:54:55AM +0100, Andrew Cooper wrote:
> On 02/05/2023 10:22 am, Roger Pau Monne wrote:
> > Ensure that the base address is 2M-aligned, or else the page table
> > entries created would be corrupt, as reserved bits in the PDE would
> > end up set.
> >
> > We have found broken firmware where the loader would end up loading
> > Xen at a non-2M-aligned region, and that caused a very hard to
> > debug triple fault.
> 
> It's probably worth saying that in this case, the OEM has fixed their
> firmware.
> 
> >
> > If the alignment is not as required by the page tables, print an
> > error message and stop the boot.
> >
> > The check could be performed earlier, but so far the alignment is
> > only required by the page tables, and hence it feels more natural
> > for the check to live next to the piece of code that requires it.
> >
> > Note that when booted as an EFI application from the PE entry point
> > the alignment check is already performed by
> > efi_arch_load_addr_check(), and hence there's no need to add another
> > check at the point where page tables get built in
> > efi_arch_memory_setup().
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> >  xen/arch/x86/boot/head.S | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
> > index 0fb7dd3029f2..ff73c1d274c4 100644
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -121,6 +121,7 @@ multiboot2_header:
> >  .Lbad_ldr_nst: .asciz "ERR: EFI SystemTable is not provided by bootloader!"
> >  .Lbad_ldr_nih: .asciz "ERR: EFI ImageHandle is not provided by bootloader!"
> >  .Lbad_efi_msg: .asciz "ERR: EFI IA-32 platforms are not supported!"
> > +.Lbad_alg_msg: .asciz "ERR: Xen must be loaded at a 2MB boundary!"
> >  
> >          .section .init.data, "aw", @progbits
> >          .align 4
> > @@ -146,6 +147,9 @@ bad_cpu:
> >  not_multiboot:
> >          add     $sym_offs(.Lbad_ldr_msg),%esi   # Error message
> >          jmp     .Lget_vtb
> > +not_aligned:
> > +        add     $sym_offs(.Lbad_alg_msg),%esi   # Error message
> > +        jmp     .Lget_vtb
> >  .Lmb2_no_st:
> >          /*
> >           * Here we are on EFI platform. vga_text_buffer was zapped earlier
> > @@ -670,6 +674,11 @@ trampoline_setup:
> >          cmp     %edi, %eax
> >          jb      1b
> >  
> > +        /* Check that the image base is aligned. */
> > +        lea     sym_esi(_start), %eax
> > +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
> > +        jnz     not_aligned
> 
> You just want to check the value in %esi, which is the base of the Xen
> image.  Something like:
> 
> mov %esi, %eax
> and ...
> jnz
> 
> No need to reference the _start label, or use sym_esi().

The reason for using sym_esi(_start) is that it's exactly the address
used when building the PDE, so it's clearer to keep the two in sync
IMO.

That's also the reason for doing the check here rather than earlier:
it's closer to the point where the value is used, and where a
misaligned base would lead to corrupt entries.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:34:39 2023
Date: Tue, 2 May 2023 12:34:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] x86/trampoline: load the GDT located in the
 trampoline page
Message-ID: <ZFDnGaNXhI7PLOBM@Air-de-Roger>
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-3-roger.pau@citrix.com>
 <11b24761-9268-e647-7316-0bffb549ae6d@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <11b24761-9268-e647-7316-0bffb549ae6d@citrix.com>
MIME-Version: 1.0

On Tue, May 02, 2023 at 10:43:13AM +0100, Andrew Cooper wrote:
> On 02/05/2023 10:22 am, Roger Pau Monne wrote:
> > When booting the BSP, the portion of the code executed from the
> > trampoline page will be using the GDT located in the hypervisor
> > .text.head section rather than the GDT located in the trampoline page.
> 
> It's more subtle than this.
> 
> gdt_boot_descr references the trampoline GDT, but by its position in
> the main Xen image.

Right, the gdt_boot_descr GDTR references gdt_48, but the instance in
the Xen .text section, not the one in the trampoline.

I've tried to explain this in the commit message, but maybe I've
failed to do so.

> >
> > If skip_realmode is not set, the GDT located in the trampoline page
> > will be loaded after the BIOS call has been executed; otherwise the
> > GDT from .text.head will be used for all of the protected mode
> > trampoline code execution.
> >
> > Note that both gdt_boot_descr and gdt_48 contain the same entries, but
> > the former is located inside the hypervisor .text section, while the
> > latter lives in the relocated trampoline page.
> >
> > This is not harmful as-is, as both GDTs contain the same entries,
> > but for consistency with the APs, switch the BSP trampoline code to
> > also use the GDT in the trampoline page.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>, although ...
> 
> > ---
> >  xen/arch/x86/boot/trampoline.S | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
> > index cdecf949b410..e4b4b9091d0c 100644
> > --- a/xen/arch/x86/boot/trampoline.S
> > +++ b/xen/arch/x86/boot/trampoline.S
> > @@ -164,6 +164,12 @@ GLOBAL(trampoline_cpu_started)
> >  
> >          .code32
> >  trampoline_boot_cpu_entry:
> > +        /*
> > +         * Load the GDT from the relocated trampoline page rather than the
> > +         * hypervisor .text section.
> > +         */
> > +        lgdt    bootsym_rel(gdt_48, 4)
> 
> ... I'd suggest rewording this to simply /* Switch to trampoline GDT */,
> or perhaps with an "alias" in there somewhere.

"Switch to the relocated trampoline GDT." maybe?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:35:12 2023
Message-ID: <3154d046-7ba4-63c7-e2c0-9a7e06128ebb@suse.com>
Date: Tue, 2 May 2023 12:34:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
 <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
 <ZFDl6rSYRzNEoVX6@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFDl6rSYRzNEoVX6@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 02.05.2023 12:28, Roger Pau Monné wrote:
> On Tue, May 02, 2023 at 10:54:55AM +0100, Andrew Cooper wrote:
>> On 02/05/2023 10:22 am, Roger Pau Monne wrote:
>>> Ensure that the base address is 2M aligned, or else the page table
>>> entries created would be corrupt as reserved bits on the PDE end up
>>> set.
>>>
>>> We have found a broken firmware where the loader would end up loading
>>> Xen at a non 2M aligned region, and that caused a very difficult to
>>> debug triple fault.
>>
>> It's probably worth saying that in this case, the OEM has fixed their
>> firmware.
>>
>>>
>>> If the alignment is not as required by the page tables print an error
>>> message and stop the boot.
>>>
>>> The check could be performed earlier, but so far the alignment is
>>> required by the page tables, and hence feels more natural that the
>>> check lives near to the piece of code that requires it.
>>>
>>> Note that when booted as an EFI application from the PE entry point
>>> the alignment check is already performed by
>>> efi_arch_load_addr_check(), and hence there's no need to add another
>>> check at the point where page tables get built in
>>> efi_arch_memory_setup().
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>>  xen/arch/x86/boot/head.S | 9 +++++++++
>>>  1 file changed, 9 insertions(+)
>>>
>>> diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
>>> index 0fb7dd3029f2..ff73c1d274c4 100644
>>> --- a/xen/arch/x86/boot/head.S
>>> +++ b/xen/arch/x86/boot/head.S
>>> @@ -121,6 +121,7 @@ multiboot2_header:
>>>  .Lbad_ldr_nst: .asciz "ERR: EFI SystemTable is not provided by bootloader!"
>>>  .Lbad_ldr_nih: .asciz "ERR: EFI ImageHandle is not provided by bootloader!"
>>>  .Lbad_efi_msg: .asciz "ERR: EFI IA-32 platforms are not supported!"
>>> +.Lbag_alg_msg: .asciz "ERR: Xen must be loaded at a 2Mb boundary!"
>>>  
>>>          .section .init.data, "aw", @progbits
>>>          .align 4
>>> @@ -146,6 +147,9 @@ bad_cpu:
>>>  not_multiboot:
>>>          add     $sym_offs(.Lbad_ldr_msg),%esi   # Error message
>>>          jmp     .Lget_vtb
>>> +not_aligned:
>>> +        add     $sym_offs(.Lbag_alg_msg),%esi   # Error message
>>> +        jmp     .Lget_vtb
>>>  .Lmb2_no_st:
>>>          /*
>>>           * Here we are on EFI platform. vga_text_buffer was zapped earlier
>>> @@ -670,6 +674,11 @@ trampoline_setup:
>>>          cmp     %edi, %eax
>>>          jb      1b
>>>  
>>> +        /* Check that the image base is aligned. */
>>> +        lea     sym_esi(_start), %eax
>>> +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
>>> +        jnz     not_aligned
>>
>> You just want to check the value in %esi, which is the base of the Xen
>> image.  Something like:
>>
>> mov %esi, %eax
>> and ...
>> jnz
>>
>> No need to reference the _start label, or use sym_esi().
> 
> The reason for using sym_esi(_start) is because that's exactly the
> address used when building the PDE, so it's clearer to keep those in
> sync IMO.

Hmm, while I see your point, using sym_esi() here merely means
subtracting __XEN_VIRT_START. That value had better remain 2Mb- (and
even 1Gb-)aligned; if you wanted to guard for that, you could add a
build-time check instead of a runtime one, e.g. that sym_esi(0) is
suitably aligned.

Jan

> That's also the reason for doing the check here rather than earlier,
> so it's closer to the point where the value is used and not being
> aligned would lead to corrupt entries.
> 
> Thanks, Roger.



From xen-devel-bounces@lists.xenproject.org Tue May 02 10:35:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 10:35:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528471.821665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnLu-0001tM-KQ; Tue, 02 May 2023 10:35:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528471.821665; Tue, 02 May 2023 10:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnLu-0001tD-GV; Tue, 02 May 2023 10:35:22 +0000
Received: by outflank-mailman (input) for mailman id 528471;
 Tue, 02 May 2023 10:35:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MJpR=AX=citrix.com=prvs=4790f2855=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ptnLt-0000rl-CL
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 10:35:21 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0815b569-e8d5-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 12:35:19 +0200 (CEST)
Received: from mail-mw2nam12lp2049.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 06:35:16 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5507.namprd03.prod.outlook.com (2603:10b6:208:284::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 10:35:14 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 10:35:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0815b569-e8d5-11ed-8611-37d641c3527e
Message-ID: <c49050cb-7c86-41e7-913d-8f03f4f4b156@citrix.com>
Date: Tue, 2 May 2023 11:35:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Content-Language: en-GB
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
 <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
 <ZFDl6rSYRzNEoVX6@Air-de-Roger>
In-Reply-To: <ZFDl6rSYRzNEoVX6@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 02/05/2023 11:28 am, Roger Pau Monné wrote:
> On Tue, May 02, 2023 at 10:54:55AM +0100, Andrew Cooper wrote:
>> On 02/05/2023 10:22 am, Roger Pau Monne wrote:
>>> Ensure that the base address is 2M aligned, or else the page table
>>> entries created would be corrupt as reserved bits on the PDE end up
>>> set.
>>>
>>> We have found a broken firmware where the loader would end up loading
>>> Xen at a non 2M aligned region, and that caused a very difficult to
>>> debug triple fault.
>> It's probably worth saying that in this case, the OEM has fixed their
>> firmware.
>>
>>> If the alignment is not as required by the page tables print an error
>>> message and stop the boot.
>>>
>>> The check could be performed earlier, but so far the alignment is
>>> required by the page tables, and hence feels more natural that the
>>> check lives near to the piece of code that requires it.
>>>
>>> Note that when booted as an EFI application from the PE entry point
>>> the alignment check is already performed by
>>> efi_arch_load_addr_check(), and hence there's no need to add another
>>> check at the point where page tables get built in
>>> efi_arch_memory_setup().
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>>  xen/arch/x86/boot/head.S | 9 +++++++++
>>>  1 file changed, 9 insertions(+)
>>>
>>> diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
>>> index 0fb7dd3029f2..ff73c1d274c4 100644
>>> --- a/xen/arch/x86/boot/head.S
>>> +++ b/xen/arch/x86/boot/head.S
>>> @@ -121,6 +121,7 @@ multiboot2_header:
>>>  .Lbad_ldr_nst: .asciz "ERR: EFI SystemTable is not provided by bootloader!"
>>>  .Lbad_ldr_nih: .asciz "ERR: EFI ImageHandle is not provided by bootloader!"
>>>  .Lbad_efi_msg: .asciz "ERR: EFI IA-32 platforms are not supported!"
>>> +.Lbag_alg_msg: .asciz "ERR: Xen must be loaded at a 2Mb boundary!"
>>>  
>>>          .section .init.data, "aw", @progbits
>>>          .align 4
>>> @@ -146,6 +147,9 @@ bad_cpu:
>>>  not_multiboot:
>>>          add     $sym_offs(.Lbad_ldr_msg),%esi   # Error message
>>>          jmp     .Lget_vtb
>>> +not_aligned:
>>> +        add     $sym_offs(.Lbag_alg_msg),%esi   # Error message
>>> +        jmp     .Lget_vtb
>>>  .Lmb2_no_st:
>>>          /*
>>>           * Here we are on EFI platform. vga_text_buffer was zapped earlier
>>> @@ -670,6 +674,11 @@ trampoline_setup:
>>>          cmp     %edi, %eax
>>>          jb      1b
>>>  
>>> +        /* Check that the image base is aligned. */
>>> +        lea     sym_esi(_start), %eax
>>> +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
>>> +        jnz     not_aligned
>> You just want to check the value in %esi, which is the base of the Xen
>> image.  Something like:
>>
>> mov %esi, %eax
>> and ...
>> jnz
>>
>> No need to reference the _start label, or use sym_esi().
> The reason for using sym_esi(_start) is because that's exactly the
> address used when building the PDE, so it's clearer to keep those in
> sync IMO.

Hmm yeah, fair point.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>, preferably with
the extra note in the commit message.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:43:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 10:43:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528477.821674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnU0-0003ai-Fm; Tue, 02 May 2023 10:43:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528477.821674; Tue, 02 May 2023 10:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnU0-0003ab-Cc; Tue, 02 May 2023 10:43:44 +0000
Received: by outflank-mailman (input) for mailman id 528477;
 Tue, 02 May 2023 10:43:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l/wp=AX=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1ptnTz-0003aV-5U
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 10:43:43 +0000
Received: from sender3-of-o58.zoho.com (sender3-of-o58.zoho.com
 [136.143.184.58]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32b7f95e-e8d6-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 12:43:40 +0200 (CEST)
Received: from [10.10.1.128] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1683024216011767.2876740014466;
 Tue, 2 May 2023 03:43:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32b7f95e-e8d6-11ed-8611-37d641c3527e
Message-ID: <600c8c62-5982-ec7e-7996-5b7fbfb40067@apertussolutions.com>
Date: Tue, 2 May 2023 06:43:33 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 5/2/23 03:17, Jan Beulich wrote:
> Unlike for XEN_DOMCTL_getdomaininfo, where the XSM check is intended to
> cause the operation to fail, in the loop here it ought to merely
> determine whether information for the domain at hand may be reported
> back. Therefore if on the last iteration the hook results in denial,
> this should not affect the sub-op's return value.
> 
> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> The hook being able to deny access to data for certain domains means
> that no caller can assume to have a system-wide picture when holding the
> results.
> 
> Wouldn't it make sense to permit the function to merely "count" domains?
> While racy in general (including in its present, "normal" mode of
> operation), within a tool stack this could be used as long as creation
> of new domains is suppressed between obtaining the count and then using
> it.
> 
> In XEN_DOMCTL_getpageframeinfo2 said commit had introduced a 2nd such
> issue, but luckily that sub-op and xsm_getpageframeinfo() are long gone.
> 
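For context, the loop behaviour the quoted patch description argues for can be sketched as follows. This is an illustrative reimplementation with hypothetical names (`xsm_denied`, `getdomaininfolist_sketch`), not Xen's actual code:

```c
#include <stddef.h>

struct dominfo { int domid; };

/* Hypothetical stand-in for the XSM hook: nonzero means access to this
 * domain's information is denied for the caller. */
static int xsm_denied(int domid)
{
    return domid == 7;   /* arbitrary policy, for illustration only */
}

/* A denial merely filters that domain out of the results; it must not
 * become the sub-op's return value, even on the last iteration. */
static int getdomaininfolist_sketch(const int *domids, size_t n,
                                    struct dominfo *out, size_t *n_out)
{
    size_t i, k = 0;

    for ( i = 0; i < n; i++ )
    {
        if ( xsm_denied(domids[i]) )
            continue;            /* skip, don't fail the whole sub-op */
        out[k++].domid = domids[i];
    }
    *n_out = k;
    return 0;   /* success regardless of which domains were filtered */
}
```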

I understand there is a larger issue at play here, but neutering the 
security control/XSM check is not the answer. This literally changes the 
way a FLASK policy that people currently have would be enforced, and 
runs contrary to how they understand the access control that it provides. 
Even though the code path does not fall under the XSM maintainer, I would 
NACK this patch. IMHO, it is better to find a solution that does not 
abuse, misuse, or invalidate the purpose of the XSM calls.

On a side note, I am a little concerned that only one person thought to 
include the XSM maintainer, or any of the XSM reviewers, on a patch, and 
on the discussion around it, that so clearly relates to XSM, so that we 
could gauge the consequences of the patch. I am not assuming intentions 
here, only wanting to raise the concern.

So for what it is worth, NACK.

V/r,
Daniel P. Smith


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:52:36 2023
Date: Tue, 2 May 2023 12:51:59 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Message-ID: <ZFDrT87RixpOmMfq@Air-de-Roger>
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
 <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
 <068ad06b-d766-4fc7-6bbc-289911441ee7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <068ad06b-d766-4fc7-6bbc-289911441ee7@suse.com>
MIME-Version: 1.0

On Tue, May 02, 2023 at 12:28:55PM +0200, Jan Beulich wrote:
> On 02.05.2023 11:54, Andrew Cooper wrote:
> > On 02/05/2023 10:22 am, Roger Pau Monne wrote:
> >> Ensure that the base address is 2M aligned, or else the page table
> >> entries created would be corrupt as reserved bits on the PDE end up
> >> set.
> >>
> >> We have found a broken firmware where the loader would end up loading
> >> Xen at a non 2M aligned region, and that caused a very difficult to
> >> debug triple fault.
> > 
> > It's probably worth saying that in this case, the OEM has fixed their
> > firmware.
> 
> I'm curious: What firmware loads Xen directly? I thought there was
> always a boot loader involved (except for xen.efi of course).

This was the result of a bug in the firmware plus a bug in grub; there's
also one pending change for grub, see:

https://lists.gnu.org/archive/html/grub-devel/2023-04/msg00157.html

The firmware would return an error for some calls to the Boot Services
allocate_pages method, and that triggered a bug in grub that resulted
in the memory allocated for Xen not being aligned as requested.

> I'm further a little puzzled by this talking about alignment and not
> xen.efi: xen.gz only specifies alignment for MB2 afaik. For MB1 all
> it does specify is the physical address (2Mb) that it wants to be
> loaded at. So maybe MB2 wants mentioning here as well, for clarity?

"We have found a broken firmware where grub2 would end up loading Xen
at a non 2M aligned region when using the multiboot2 protocol, and
that caused a very difficult to debug triple fault."

Would that be better?

> >> @@ -670,6 +674,11 @@ trampoline_setup:
> >>          cmp     %edi, %eax
> >>          jb      1b
> >>  
> >> +        /* Check that the image base is aligned. */
> >> +        lea     sym_esi(_start), %eax
> >> +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
> >> +        jnz     not_aligned
> > 
> > You just want to check the value in %esi, which is the base of the Xen
> > image.  Something like:
> > 
> > mov %esi, %eax
> > and ...
> > jnz
> 
> Or yet more simply "test $..., %esi" and then "jnz ..."?

As replied to Andrew, I would rather keep this in line with the address
used to build the PDE, which is sym_esi(_start).
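For reference, the 2M-alignment test being debated boils down to masking the low L2_PAGETABLE_SHIFT bits of the load address. A minimal C sketch of the same predicate (the constant mirrors Xen's x86 superpage shift; the helper name is made up):

```c
#include <stdint.h>

#define L2_PAGETABLE_SHIFT 21   /* x86 PDEs map 2 MiB superpages */

/* C equivalent of the assembly under discussion:
 *   and $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
 *   jnz not_aligned
 * Any set bit below bit 21 would spill into low PDE bits when the base
 * is OR'ed into a superpage entry, corrupting the mapping. */
static inline int is_2m_aligned(uint64_t base)
{
    return (base & ((1ull << L2_PAGETABLE_SHIFT) - 1)) == 0;
}
```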

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:53:31 2023
X-Inumbo-ID: 8b9147d0-e8d7-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z6uT0owrnYItSFu/uyk+jGZ88jOexV6wNpyIKosz5LRvMk3MPjLOZu8hBYufVM1prSBJ4VByLr2kmnoUw5ZP1fhzdZdcHb8E5Yu49/JBi/diEWbJ4bq2WgRkXBYTj/RlnfK9an09nftBjfiI9mwwN+/1OsxOEl649CcGe8ksg16l+CdepOfsES7luJsKb2Pmf1mkYYLxR4jABet6K5MJfaqYMJi7ZoqaLgO4X2CZh92jwyLHuASTPzwvjNma/1BFDN1EtpUmTuZPFS0u8LV5h7s4UhvEiCNOJzNH/n+aN/koyct611mQgtwCvIQClFg3L2IieiUeHERrAwNyml30Og==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gXbRNs8L0HqQSzzKkVrPVszxsVNglD88jNZQ98h8eug=;
 b=dkmdwbyZ4TVCmhA9uFOUKpt2PH+f+y+J6xEcKTpoiRFtHUZkREq22J8X9yGtnYlABFLYpa1nmHEJ/5fwNhKGw8pwGiyZTxj5St5DhcVB+3cYI8BYYcmo1FyyFzcIZOtdp7l5kZb4fw7zqsxJ5me6eV7RqE6r1fhjkdQAK7NlX4D+vkeImZ+f26Y5khxD54KZPTbUByBEM3nMsK08YBK5cXAGWDIYimL+Ah/9Z2+JRgJFGVXC9tLDt3bVrSLMBWDy6OO4a3AbBErSxWgeyMH2bYgaf+lyZpprI/tMMGGpWoWw9P7QCHOVC9Rx71KRELfCRkys7BuiYY9XLH+cfrPx9A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gXbRNs8L0HqQSzzKkVrPVszxsVNglD88jNZQ98h8eug=;
 b=C4sAanMrp7PQxeG+VeWAGxzwCCf5gEnyPtqPBaSy5HxiMQh6obHymTvaZBoyjJcDTMBf9J9BmQ8FmHGxIdtVO9FHoem27g4Q2P4jwNAqdL5j7ZXFVeuJKk9RUdDEvdfd0O0FKknpV5j+025msewB4zQ7Y59g92nc77WdpQW1+kTPrNGhxmWzZ2mPQZCGgkUX/SGCkgmJCmd419b+7IhjgoNJ6jPPuDxhN1gJOe7g/tuL94kxLR0+adq2cwEMYM3NMC8dS8XV8J/2hmwp2RmCrgH/QNzPQCBkkvMoa0oH6gkemgg5TulGRTkDFf4M9rEd1IGvxVkujwF2SD/Mj8Uuog==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a3f2d048-78c5-9a5d-d44d-3a930ba780fd@suse.com>
Date: Tue, 2 May 2023 12:53:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2] ns16550: enable memory decoding on MMIO-based PCI
 console card
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230425143902.142571-1-marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230425143902.142571-1-marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 25.04.2023 16:39, Marek Marczykowski-Górecki wrote:
> pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
> devices, add setting PCI_COMMAND_MEMORY for MMIO-based UART devices too.

This sentence is odd, as by its grammar it looks to describe the current
situation only. The respective sentence in v1 did not have this issue.

> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -272,7 +272,15 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
>  static void pci_serial_early_init(struct ns16550 *uart)
>  {
>  #ifdef NS16550_PCI
> -    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
> +    if ( uart->bar )
> +    {
> +        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
> +                                  uart->ps_bdf[2]),
> +                         PCI_COMMAND, PCI_COMMAND_MEMORY);
> +        return;
> +    }
> +
> +    if ( !uart->ps_bdf_enable )
>          return;
>  
>      if ( uart->pb_bdf_enable )

While I did suggest using uart->bar, my implication was that the io_base
check would then remain in place. Otherwise, if I'm not mistaken, MMIO-
based devices not specified via "com<N>=...,pci" would then wrongly take
the I/O port path.

Furthermore - you can't use uart->bar alone here, can you? The field is
set equally for MMIO and port based cards in pci_uart_config().
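To make the objection concrete, here is an illustrative sketch (simplified types and names, not the actual ns16550 code) of why uart->bar alone cannot select the command bit: since the BAR is recorded for both kinds of card, something like the io_base range is needed to tell an MMIO BAR from a port BAR:

```c
#include <stdint.h>

#define PCI_COMMAND_IO      0x1
#define PCI_COMMAND_MEMORY  0x2

/* Simplified stand-in for struct ns16550: bar is set by pci_uart_config()
 * for both port-based and MMIO-based cards, so it cannot distinguish
 * them on its own. */
struct uart_sketch {
    uint64_t io_base;   /* port number, or MMIO physical address */
    uint32_t bar;       /* BAR discovered via "com<N>=...,pci" */
};

/* x86 I/O port numbers fit in 16 bits, so a base >= 0x10000 must be an
 * MMIO address and needs memory decoding; anything below it needs I/O
 * port decoding instead. */
static uint16_t command_bit_needed(const struct uart_sketch *u)
{
    return u->io_base >= 0x10000 ? PCI_COMMAND_MEMORY : PCI_COMMAND_IO;
}
```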

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:59:14 2023
X-Inumbo-ID: 5c08f4e2-e8d8-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Zx78q1ZzM2xxCujcPgwQ+6ZG6BElMZMDs60MTwfinDgo2+DKHGpljpZyZHGLiKvk86uJVD/XLXLTqDO6F9jBWWyYOILl7E9uXwlTPMW5bEBrwYW0xG/+11SHQNLXEkX/+8LluV/M2q2vRlFJGlak8oOmtm/OP3bBhbG7tDmnoRPvLixGmTHVDb+RvHbVEt3qykscCITNbuLYMgup3y3ZCQ5oRqa7pPSN8O266jZPyvtpqe9wYFRK2k7sQu0w5HA4G7W5o9fCCfgsffd/wMK5CFCVsSvNbZiWySrdwv5eaJ7lEBmjMWkDLL3Q7N6SERauYX9nvuSVM/l6HmilU/iVIQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=W3fCQlvpDzEPjiM6a2fCovblRteLnyI0+TpO2FGT6zY=;
 b=Lr09JNO3cDo1JUzF+X3OrIzG1wkHh5cvu6ZjfS5QlDyaXZ9h+D9C8xy7jr6jL+v82c1RSiwYFA1xhZeRdOEz0E49wV8X6wmzNCWKmG31Vi1M7P3z5BKteoaGMOFxNdwTVC0Twcq1J264cG/PX8kg8nlE+xCJIiIuVcYnRaVzNC+VpYlzJRQfMy2+O+dfKvCvTfxAs35OeO8hLMwkAeDzttvjUNS53dkkeHB7q382U2bVIX/cNw5rH+uOJrdrzvPZ/mMBRuqtZvPNiK6/ldaUhOWpxssf7b4/hFLFXdr2CYlPOy1wyPdU56ZadBvtzVHkfZV24uvlYMRHpS8XwOjpow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W3fCQlvpDzEPjiM6a2fCovblRteLnyI0+TpO2FGT6zY=;
 b=MQN2nyNvqbSvypviWgvIKv28HjuQU2VPz9XUsO7kKp1cdnY2QzYu47H7+4mtN2DdBQqWQUBlkRHIpVS2YNAeAggiedyMQU8lKEuusdPCoB658VtEXxAonFzFAfH0qULea5x8c4wFSi7NHKJP/r6mlzNuHoSfXubbIM5EfO17SsmOup9MnPEvFi4mnzMVzBMxegceTx/iHBgkQrP/Q17pDJ8oHoWY2KeBS4dsZDqRe5GW83crL35Pb+Mg+HTHqY56hhkVWL08iMzZyCW4rzRP/GP0oRVwS9Zz0u3XUrDrhWR7sjfyVgZBLcbV88FcP2DJJdPU8VFpGbhUTOih4TkIzQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <22b2e03c-ac5e-915a-78a2-0a632b09a53a@suse.com>
Date: Tue, 2 May 2023 12:59:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
 <600c8c62-5982-ec7e-7996-5b7fbfb40067@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <600c8c62-5982-ec7e-7996-5b7fbfb40067@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0087.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8177:EE_
X-MS-Office365-Filtering-Correlation-Id: da721443-6b17-41be-f00b-08db4afc3e6e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8YodDVeluo0qVcFLJrtElMsBqieglQfr5MkiCdmfgv/uQNzkb1cM/avK39Olr0MdPE5Osl3Gz86MhpnlTTzPw37i0HDaS8U/YIMATQLbm8WLCn6CC043x8M1CSdQbJBml9K6IlH8AxfML7w6gvIgcL1HUBM7K4Gc+Rpy++wLDkJTJBxrSHVBu5+4Cpscp0YRn4PHtVKZvdyrf//ac0+EQj4ASk0H83ON145y1mLjmeowgzxYTWn5HuQSbDjgyWADICRELIlMrQf+LFqPnWV1Ho48fMmEQ5JNZEpgAiEEi1TGuCupZUqrcn/4FH4RI46dHadOR5W3GYGhcKqnB3ZRvbkoJ2BEWQeVNPWLFiLRM7EJdYbnCSWX25vHuXExB/fGJuI6K1Z8kFnUyT3sgLEo/7Bv0UdlTuuD9mHK7QLwgvjfN3DXjYU3OeGuiq6T1AClMRb0yNv/L9nxEo3WFDwjuK31kduAZ8p9CMxTP0hBDAxr9h+cF7Prrt7o3aa6FZEJf2BD439iHiwmtajz8cP4wcUE3qIHm6aytA4zJnwhmmRPLwFi9+LrnHLcGpW21ueHNZj2mOp53MNI0BEHfBCcXt9cjgg+UDP8aqq49FIAFyJC6cUP+fD/SBoN6MmM0oEM7CMb77NbogYhdCdm/VV+0g==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(136003)(39860400002)(396003)(346002)(376002)(451199021)(316002)(6486002)(41300700001)(8936002)(8676002)(478600001)(5660300002)(54906003)(31686004)(6916009)(66946007)(66476007)(66556008)(4326008)(26005)(6512007)(6506007)(2906002)(53546011)(186003)(36756003)(2616005)(83380400001)(38100700002)(66574015)(31696002)(86362001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dHhHMGxXMEtDKzRrSC8wSjZzcTlRWGNUeVUrcFdacXBrY05MLzBWb3k2Mi9t?=
 =?utf-8?B?MGtoQ2VvYTFPVWNEM2RxSlh1WXlVcW10RzJQei9uMk1HLzJFV1NSeFY0MmJl?=
 =?utf-8?B?OVUzTHgvOGE3ekxpY1U4Nk4zeEJsN2ozNVBOWnpvUlZsOTRGOTQyR3RkcWVS?=
 =?utf-8?B?dTdqejNCVVVLUy9VeHBpb0pLd3I1dHduOXFGZXR4VHRzRFpGVUExbnI4MjNj?=
 =?utf-8?B?d3ZtWmhMT2NXSEpwYXFsbms5R3BnZWVRUzYwZzJ1bmdtMkNTcVFJam9YL1Ex?=
 =?utf-8?B?TDk4NzAvcGZNSlBzamdySFNXdm9ULzZMbXE3ZXNzd1EzdjVCQk1UZ0R4RG8z?=
 =?utf-8?B?MUt6UzIxRlpnMWNCVGNZSWw2cFNERVVzbWtWaHl6b0tVSm5SRVErSWo4dTBI?=
 =?utf-8?B?NGYzbjFzN2dzOWxZTXgvc1NiVkhSZE91WWJHdkNvS3RjRGdaNXpGamVoU2RI?=
 =?utf-8?B?TjdxSXZnd0VNdmdwS1VVdzQrcFN0eU0zNjAzWjFBbWFORDlHdUFaM2dIQUNF?=
 =?utf-8?B?KzgrZCtwMExCTkgwZ1haRWc5SEIyczZ6RDNlR1Frd0J0QWdEbno0SUgzTjVa?=
 =?utf-8?B?MG93eGtOSzhVZ0N3aEJMUWxPMkVkYWpuVzdPb3J6dE9LQ3VSZklPWllKS3VJ?=
 =?utf-8?B?Rk5KMVc4VDBXQjZrV3diZUlMS2N0U042SlQzSXM4L3hQR09va1QwVHJsRGVQ?=
 =?utf-8?B?bDdFM1djT1VuOUt3V3AwTmRMeEV6bENmdE5RK0JCWHA5YVFIWXNNbjcrV04z?=
 =?utf-8?B?Kytrb29sUFhhcUZ1MDFwM092by9DS0Z3czFUNzdSdXRpUlR3ZERLYVdlYXlt?=
 =?utf-8?B?VGhvUTFQc2FXcHBBYXlzb25qUDZOa1VOV2lUSGV3T3IvNHY0ZjgxNTVkaWFy?=
 =?utf-8?B?OTNlY3JTdFBmbDI4eVNNdHNpUU5VTXlQbFJSaXAwc1lKZkt1U0s5V1NVZVBU?=
 =?utf-8?B?b1laUGt3SG84aXp0emcrTzlGRkhhWWdvd2NHMFl2YjgvdnRFamtBN3RWaEVl?=
 =?utf-8?B?ZktKU2g0THJESlBnVkdSRjNMelNTc0dSNU4ybWVDTWpBdU14WkRPOEEwUDc2?=
 =?utf-8?B?L2pMalltMHVJbG44clR3VVFkSDh6SkdudzJIYVlUakIrWUFLcXNZMkJJbmtX?=
 =?utf-8?B?NzhxdU5ISUE1aXN0bk94MlZmRlZuV1d4ZVk4RzZHcDJWRllyTWc1Qi9QTm0v?=
 =?utf-8?B?N0lCcVRHUmM0MFBsdDFYb1NVM1ZUK1VFT0N2OS9YU3lTU3RwcW1ZTElxM0g4?=
 =?utf-8?B?RENQTjJza2ZxcDZNWG05OW9LRDJteEd3OW16eVZPa21Ca3ArZE5oVHd2Wm9D?=
 =?utf-8?B?clYvSkVGT3BmUE5idEpDcGRzUFROUm5PSFp3ZXU2K0Mvd2NBbmFHbUhWS2hV?=
 =?utf-8?B?NVI4Tm0ralE1V05yb0FFdXRZTzR1dXRrc3FnbndGWkFOSUFxU0NqSzV2d2JF?=
 =?utf-8?B?bDhWQk9yQ3ZGdjIzWnR5NSt3bVlCMngrOFVtelNjMWZ2RTFXSnhPSTJwdGdD?=
 =?utf-8?B?L0RCenF6a1cyY0l4bG00aC80RW14cXpZc2I2RjlXMFN5Ym4wUUFpMHlwdXc0?=
 =?utf-8?B?WmJtRU92QzlGOTgzK21GbHh6bFJvQmE3M1Q1TSttcTNJd0RrY2YxVlBIVmtF?=
 =?utf-8?B?bFY1VjVlZCtpZmpBdFpzdlRXMFZvK0VkKzNMaC9ZTWQ3cDhFRzkxWlMwRDZF?=
 =?utf-8?B?ZFlCOGVBbldLaGFwNVJIbGViT29wdlpNcWpSOVU4bmlNWmNOb3l4bThiTzBK?=
 =?utf-8?B?WG5iTEJXMGNkbEV4eU9CbTU3aS9ZMnpnLzd3ZHljZ0NPcm5hVlV1WE1uYTBj?=
 =?utf-8?B?QWd3RElqSXdqVGVuWFFTUFFJS2pCS2Y0aGEvZ0pxbGJPZzZtS1NwTmNoS00w?=
 =?utf-8?B?Sitkbnh4WmMzSWQyVmdvUTJzd0F2cGNNZ1hsbmwxL2dhNmphM3Q3dXpHRms3?=
 =?utf-8?B?VHI1QWQwazZNMko3QzRHR2ZqOVJIbmlEUTFBV2xaWTRLSXpVQkZ2RmNsZmVw?=
 =?utf-8?B?UlR3SlNXV0dWRFkzZGdEb25VQURCSXIzZWtnM3pNOTdrSzRLb0ZTV1p3aTlR?=
 =?utf-8?B?UDY4Q0V0OUhPZE9rVnEwK0t5UW1tS1Bsc1BndHpjeFlBWDNBUnIvVmNDR2tw?=
 =?utf-8?Q?8wPfg6iPR/5pOamGykcONx8oE?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: da721443-6b17-41be-f00b-08db4afc3e6e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 10:59:04.4328
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Wa5abGoCzdx6yeb8wywLyDU5C9b2K+nJ2SfZeP0VJ5jR+7KBU928de8HzHutaiPyiPA/tNJYjL+QW9r6yGKg4Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8177

On 02.05.2023 12:43, Daniel P. Smith wrote:
> On 5/2/23 03:17, Jan Beulich wrote:
>> Unlike for XEN_DOMCTL_getdomaininfo, where the XSM check is intended to
>> cause the operation to fail, in the loop here it ought to merely
>> determine whether information for the domain at hand may be reported
>> back. Therefore if on the last iteration the hook results in denial,
>> this should not affect the sub-op's return value.
>>
>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> The hook being able to deny access to data for certain domains means
>> that no caller can assume to have a system-wide picture when holding the
>> results.
>>
>> Wouldn't it make sense to permit the function to merely "count" domains?
>> While racy in general (including in its present, "normal" mode of
>> operation), within a tool stack this could be used as long as creation
>> of new domains is suppressed between obtaining the count and then using
>> it.
>>
>> In XEN_DOMCTL_getpageframeinfo2 said commit had introduced a 2nd such
>> issue, but luckily that sub-op and xsm_getpageframeinfo() are long gone.
>>
> 
> I understand there is a larger issue at play here, but neutering the
> security control/XSM check is not the answer. This literally changes the
> way a FLASK policy that people currently have would be enforced, as well
> as being contrary to how they understand the access control it provides.
> Even though the code path does not fall under the XSM maintainer, I would
> NACK this patch. IMHO, it is better to find a solution that does not
> abuse, misuse, or invalidate the purpose of the XSM calls.
> 
> On a side note, I am a little concerned that only one person thought to
> include the XSM maintainer, or any of the XSM reviewers, on a patch that
> clearly relates to XSM, and in the discussion around it, so that we
> could gauge its consequences. I am not assuming intentions here, only
> wanting to raise the concern.

Well, yes, for the discussion items I could have remembered to include
you. The code change itself, otoh, doesn't require your ack, even if it
is the return value of an XSM function which was used wrongly here.

> So for what it is worth, NACK.

I'm puzzled: I hope you don't mean NACK to the patch (or effectively
Jürgen's identical one, which I had noticed only after sending mine).
Yet beyond that I don't see anything here which could be NACKed. I've
merely raised a couple of points for discussion.
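
To illustrate the semantics being argued for (a per-domain denial merely
filters that domain out of the results instead of failing the whole
sub-op), here is a minimal host-side sketch. It uses hypothetical
stand-ins, not the actual Xen hypercall code; xsm_check() below is a
simplified placeholder for the real XSM hook:

```c
#include <assert.h>

/* Hypothetical stand-in for the XSM hook: non-zero means the caller
 * may not see this domain. Here: deny odd-numbered domains. */
static int xsm_check(unsigned int domid)
{
    return (domid & 1) ? -1 : 0;
}

/*
 * Sketch of the intended XEN_SYSCTL_getdomaininfolist semantics: a
 * denial for any individual domain (including the one examined last)
 * merely skips that domain; the sub-op itself still succeeds and
 * reports how many entries were actually filled.
 */
static int getdomaininfolist(const unsigned int *domids, unsigned int n,
                             unsigned int *out, unsigned int *num_out)
{
    unsigned int i, filled = 0;

    for ( i = 0; i < n; i++ )
    {
        if ( xsm_check(domids[i]) )
            continue;            /* filtered out, not a failure */
        out[filled++] = domids[i];
    }

    *num_out = filled;
    return 0;                    /* success even if the last domain was denied */
}
```

Note that the caller only learns how many domains were reported, which
is why a caller holding the results cannot assume a system-wide picture.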

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 10:59:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 10:59:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528490.821715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnjB-0006lP-Jl; Tue, 02 May 2023 10:59:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528490.821715; Tue, 02 May 2023 10:59:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnjB-0006lG-GK; Tue, 02 May 2023 10:59:25 +0000
Received: by outflank-mailman (input) for mailman id 528490;
 Tue, 02 May 2023 10:59:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/mRu=AX=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1ptnjA-0006kf-Fm
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 10:59:24 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::628])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 648248b0-e8d8-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 12:59:22 +0200 (CEST)
Received: from DM6PR05CA0052.namprd05.prod.outlook.com (2603:10b6:5:335::21)
 by PH7PR12MB9101.namprd12.prod.outlook.com (2603:10b6:510:2f9::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 10:59:14 +0000
Received: from DM6NAM11FT031.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:335:cafe::17) by DM6PR05CA0052.outlook.office365.com
 (2603:10b6:5:335::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.20 via Frontend
 Transport; Tue, 2 May 2023 10:59:14 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT031.mail.protection.outlook.com (10.13.172.203) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 10:59:13 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 05:59:13 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 03:59:12 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Tue, 2 May 2023 05:59:11 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 648248b0-e8d8-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WzaL5pvSdR6/FWIl5XJrQcaUB9qFlxhr2tzrHUsg/nQ8B3OETwOrH+Ua1y/AUpABhGJ8nDlNnjiY+TvV7RiWPw5aTl9/6YWtupUcilUIvUMVWemXb/6itFUDHI77Nc5QCTi/yif3Mshe5e+187xkpf8RLXRt9Cui1Z0C/qgpeqiikckh77qxZlCRpjtBh7crLOobsLDjIRj0UMX6v7DfS43FfAJPca8tScG8uXY7fwrfxGKt7VM9FPGGhD5PNk/rbTyOzfeWn0J+EkjLd8BcnsEtQsTl1BEYXKOSQk9Pf8Yl+zlmwvKXrR9W7IYkwa/cKCsNbwMbYXL5Hl8hZpwU6g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6PNfiLfJ10whjs+8iQVR/PSPige/chRgA3zDIEpohM0=;
 b=nBJvrYCkscFgGLIrrQkgi2tqczU2DO5vaWGd3uFUTaQdkJSsgAgb0TB+ajpIqtzX1FOQrCN1Km758PrrF9Xf4/i8Kzpd5cjw38NAhe7aGcaPxuU8TBF3V17687rpFnBrD+1IciJeFr45iP46EiaxrwJq1kH5Gq1H/a/lMFO5dHuaPDaWmquvDTtzXtXYbo7wbOg5CKfr3vafN0ffviEKlkvKbMh6I3268NvKc/SEwQddMuFuZ2TPf/hJJE9ky7XlPdsQPM80M0U+6C+KYQmScUSoZcwUwn6u32owXiKKqJxcur/r0CxgtpcPV6qKVQ1UufE5zKwpvPXSZqm27xeShw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6PNfiLfJ10whjs+8iQVR/PSPige/chRgA3zDIEpohM0=;
 b=z9mGcYRSjjWrIvhFEG2GoM16AhGE/tK0alCTBGkOOp7ROsVx31t+Ru17frXUwHcUMS3Q9JWKRgUuRIeO34eCHN6lHxoLm2lp0ZwdsXU5W4Xqvvmg0qIe9V3vDxH+GMjn+wDya8XZoaLaSy0U+zJSzSGBRQpqJdC4Nz0RorWLHrI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>
Subject: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based systems
Date: Tue, 2 May 2023 11:58:49 +0100
Message-ID: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT031:EE_|PH7PR12MB9101:EE_
X-MS-Office365-Filtering-Correlation-Id: c5ed3dd7-90c7-42a8-c0e9-08db4afc4417
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3ktANM2X2wpzfWF7vop5ZDS74ZOsRE8IF8bCVEKyXE0/s+TLLUnKBXSUs2myd2LIpy7N2BcY95ko1FqBRDIUfowqVvKxnvAmJDSYDuZL4SUzvxhDUbJaNQyR+S5AmlMz4O4KSqtmzvGX8Ckw1gE1SCTdBTURnuuDDBv4sFxRI6Z6H8TQe+d/zniawhuqZe5BrqG6ygWi4Sw32mwQ9hu0+58UBwL/hQSgNUWybr8rcNCL9/8os+n2YnyGhTjqibmfOcHm9ab+8MpyQ4mvQp0Kf1pgzpqu1YFckDpF7ZgB90lo7LYFvPfvpCZOPxvChGs2KiyjlWREUriZX/DbhHBVo0ARTos44N0hsc+EGugWjn6gnYjyd2hv1GOkD9w632V5aKr53yDy7MlcccjDqiU6Jj7jgXZyLVv6tG7SuF0I0IXPkKibjJ24X4uMSm9tcHbw9wthwk7H0rYxcAk25CvLzr0vzQpRwaGteOzM0uRyP7cSZd/yToLVBs91JxVTJAz+y0sGb+l/hC/1RxIM3ooiOYw3cxrlyo8DiuTJ2zDX75batf1bFWhAcP/iSxFkP308El3HnSdznD0e09y/T6Gx1yjek160lEbfXdYXIq5ad7GTXu4Ju63JFh1kw6nrlmAtTPmOMnDtneCEjC8pivT7AgdKuTPKNVUOQfocHQZOLZ0yQKgqIjnbqHpkdBOAIUK8VfWBLJmu9WoYq3xXN2cqU/RfjQj4AE6FHJITX+KYF0w=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(376002)(396003)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(26005)(1076003)(478600001)(36860700001)(47076005)(2616005)(426003)(336012)(186003)(83380400001)(86362001)(40460700003)(966005)(6666004)(40480700001)(82310400005)(8936002)(316002)(2906002)(54906003)(70586007)(6916009)(103116003)(8676002)(36756003)(356005)(70206006)(82740400003)(81166007)(41300700001)(5660300002)(4326008)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 10:59:13.6895
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c5ed3dd7-90c7-42a8-c0e9-08db4afc4417
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT031.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB9101

Some Arm32 based systems (e.g. the Cortex-R52) support SMP boot, but PSCI
may not always be available on them. The Cortex-R52, for instance, has no
EL3 or Secure mode, so PSCI cannot be used there as it requires EL3.

We therefore use the 'spin-table' mechanism to boot the secondary CPUs:
the primary CPU writes the startup address for a secondary core to a
release mailbox, whose location is given by the 'cpu-release-addr'
property of that core's device tree node.

To support smpboot, we have copied the code from xen/arch/arm/arm64/smpboot.c
with the following changes:

1. 'enable-method' is an optional property. Refer to the comment in
https://www.kernel.org/doc/Documentation/devicetree/bindings/arm/cpus.yaml
"      # On ARM 32-bit systems this property is optional"

2. psci is not currently supported as a value for 'enable-method'.

3. update_identity_mapping() is not invoked as we are not sure if it is
required.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---

The dts snippet with which this has been validated is:

    cpus {
        #address-cells = <0x02>;
        #size-cells = <0x00>;

        cpu-map {

            cluster0 {

                core0 {

                    thread0 {
                        cpu = <0x02>;
                    };
                };
                core1 {

                    thread0 {
                        cpu = <0x03>;
                    };
                };
            };
        };

        cpu@0 {
            device_type = "cpu";
            compatible = "arm,armv8";
            reg = <0x00 0x00>;
            phandle = <0x02>;
        };

        cpu@1 {
            device_type = "cpu";
            compatible = "arm,armv8";
            reg = <0x00 0x01>;
            enable-method = "spin-table";
            cpu-release-addr = <0xEB58C010>;
            phandle = <0x03>;
        };
    };

Although I have currently tested this only on the Cortex-R52, I feel it may
also help enable SMP on other Arm32 based systems. Happy to hear opinions.
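
For illustration, the spin-table handshake implemented here can be
sketched as a host-testable simulation. This is hypothetical C for a
single-threaded test, not Xen code: primary_release_secondary() and
secondary_wait_for_release() are made-up names, and the real code uses
the mailbox at 'cpu-release-addr' together with WFE/SEV rather than a
plain spin:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated release mailbox; on real hardware this is the memory at
 * the address named by the DT 'cpu-release-addr' property. */
static volatile uint32_t release_mailbox;

/* Primary CPU side: publish the secondary's entry point in the
 * mailbox, then (on real hardware) issue SEV to wake waiters. */
static void primary_release_secondary(uint32_t entry)
{
    release_mailbox = entry;
    /* sev(); -- omitted in this host-side sketch */
}

/* Secondary CPU side: spin until the mailbox becomes non-zero, then
 * return the entry point to jump to. */
static uint32_t secondary_wait_for_release(void)
{
    uint32_t entry;

    while ( (entry = release_mailbox) == 0 )
        ;   /* real code would execute WFE here */

    return entry;
}
```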

 xen/arch/arm/arm32/smpboot.c | 84 ++++++++++++++++++++++++++++++++++--
 1 file changed, 80 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/arm32/smpboot.c b/xen/arch/arm/arm32/smpboot.c
index 518e9f9c7e..feb249d3f8 100644
--- a/xen/arch/arm/arm32/smpboot.c
+++ b/xen/arch/arm/arm32/smpboot.c
@@ -1,24 +1,100 @@
 #include <xen/device_tree.h>
 #include <xen/init.h>
 #include <xen/smp.h>
+#include <xen/vmap.h>
+#include <asm/io.h>
 #include <asm/platform.h>
 
+struct smp_enable_ops {
+    int (*prepare_cpu)(int);
+};
+
+static uint32_t cpu_release_addr[NR_CPUS];
+static struct smp_enable_ops smp_enable_ops[NR_CPUS];
+
 int __init arch_smp_init(void)
 {
     return platform_smp_init();
 }
 
-int __init arch_cpu_init(int cpu, struct dt_device_node *dn)
+static int __init smp_spin_table_cpu_up(int cpu)
+{
+    uint32_t __iomem *release;
+
+    if ( !cpu_release_addr[cpu] )
+    {
+        printk("CPU%d: No release addr\n", cpu);
+        return -ENODEV;
+    }
+
+    release = ioremap_nocache(cpu_release_addr[cpu], 4);
+    if ( !release )
+    {
+        dprintk(XENLOG_ERR, "CPU%d: Unable to map release address\n", cpu);
+        return -EFAULT;
+    }
+
+    writel(__pa(init_secondary), release);
+
+    iounmap(release);
+
+    sev();
+
+    return 0;
+}
+
+static void __init smp_spin_table_init(int cpu, struct dt_device_node *dn)
 {
-    /* Not needed on ARM32, as there is no relevant information in
-     * the CPU device tree node for ARMv7 CPUs.
+    if ( !dt_property_read_u32(dn, "cpu-release-addr", &cpu_release_addr[cpu]) )
+    {
+        printk("CPU%d has no cpu-release-addr\n", cpu);
+        return;
+    }
+
+    smp_enable_ops[cpu].prepare_cpu = smp_spin_table_cpu_up;
+}
+
+static int __init dt_arch_cpu_init(int cpu, struct dt_device_node *dn)
+{
+    const char *enable_method;
+
+    /*
+     * Refer to Documentation/devicetree/bindings/arm/cpus.yaml; it says on
+     * ARM 32-bit systems this property is optional.
      */
+    enable_method = dt_get_property(dn, "enable-method", NULL);
+    if ( !enable_method )
+    {
+        return 0;
+    }
+
+    if ( !strcmp(enable_method, "spin-table") )
+        smp_spin_table_init(cpu, dn);
+    else
+    {
+        printk("CPU%d has unknown enable method \"%s\"\n", cpu, enable_method);
+        return -EINVAL;
+    }
+
     return 0;
 }
 
+int __init arch_cpu_init(int cpu, struct dt_device_node *dn)
+{
+    return dt_arch_cpu_init(cpu, dn);
+}
+
 int arch_cpu_up(int cpu)
 {
-    return platform_cpu_up(cpu);
+    int ret = 0;
+
+    if ( smp_enable_ops[cpu].prepare_cpu )
+        ret = smp_enable_ops[cpu].prepare_cpu(cpu);
+
+    if ( !ret )
+        return platform_cpu_up(cpu);
+
+    return ret;
 }
 
 void arch_cpu_up_finish(void)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 11:00:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:00:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528497.821724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnk1-0008OV-SY; Tue, 02 May 2023 11:00:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528497.821724; Tue, 02 May 2023 11:00:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnk1-0008OO-Pk; Tue, 02 May 2023 11:00:17 +0000
Received: by outflank-mailman (input) for mailman id 528497;
 Tue, 02 May 2023 11:00:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptnk0-0006Q7-LI
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:00:16 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 81b76432-e8d8-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 13:00:13 +0200 (CEST)
Received: from mail-bn8nam12lp2170.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 07:00:09 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6128.namprd03.prod.outlook.com (2603:10b6:5:397::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 11:00:06 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 11:00:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81b76432-e8d8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683025213;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=KsLAOn3h7ZIfMVeyWt77Gz0Acm9pzqv/sXsIG3autwY=;
  b=dnLnP6+uXLzlrwyxSRa1pZnBb16xI/kbhKwLoXpPuCbfqmOG+BoYUWmB
   u+x8W8Ml1VWvcn1Txx/J4WYGTyh3ZL8U+u5NBaFHufPqo5YzaXWfgG9IE
   ByYFnua4MigPwgp2J/KrdESJMvsbLgRSDCe/M8pN1fK9w72FobFuOlld4
   8=;
X-IronPort-RemoteIP: 104.47.55.170
X-IronPort-MID: 106884776
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:7mCyP60ms8IletlK5fbD5T9wkn2cJEfYwER7XKvMYLTBsI5bp2QHy
 zZLDTjUMq6KNzCmfIx/aoi19hsO75Hcy4c1QQs6pC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+XuDgNyo4GlD5gFmP6gR1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfG2FPr
 8UEKgI3axmu3bvszK+Hdsl+v5F2RCXrFNt3VnBI6xj8VKxja7aTBqLA6JlfwSs6gd1IEbDGf
 c0FZDFzbRPGJRpSJlMQD5F4l+Ct7pX9W2QA9BTJ+uxqvi6Kk1QZPLvFabI5fvSQQspYhACAr
 3/u9GXlGBAKcteYzFJp91r13rGfzX+kB9x6+LuQy9xhonuh/mgvDyY0UEe8jMCc1EGvYocKQ
 6AT0m90xUQoz2SnVsL4XgG4iHecswQARsFLFOkn9ACKzLGS6AGcbkAGRDNcbN0ttOctWCcnk
 FSOmrvBFTFp9bGYV3+Z3rOVti+pfzgYK3cYYi0JRhdD5MPsyKkxkxbOQ9BLAKOzyNrvFlnY2
 CuWpSIzg7ESi88j1Kih+13DxTW2qfDhUQod9gjRGGW/4WtEiJWNYoWp7R3R66ZGJYPAFF2Z5
 iFbw46Z8fwECoyLmGqVWuIREbq15vGDdjrBnVpoGJpn/DOok5K+Qb1tDPhFDB8BGq45lfXBO
 Sc/ZSs5CEdvAUaX
IronPort-HdrOrdr: A9a23:cV404KDrnjN+4c3lHelo55DYdb4zR+YMi2TDt3oddfU1SL38qy
 nKpp4mPHDP5wr5NEtPpTniAtjjfZq/z/5ICOAqVN/PYOCPggCVxepZnOjfKlPbehEX9oRmpN
 1dm6oVMqyMMbCt5/yKnDVRELwbsaa6GLjDv5a785/0JzsaE52J6W1Ce2GmO3wzfiZqL7wjGq
 GR48JWzgDQAkj+PqyAdx84t/GonayzqK7b
X-Talos-CUID: 9a23:h7SKTmM8Qm3O9e5DQilmrncIGPIfd3Twli6KL1KENUZ7cejA
X-Talos-MUID: =?us-ascii?q?9a23=3Ay1nRmQ+GXCFCk7Mog2tbaRyQf/pK6IaIOngJrZI?=
 =?us-ascii?q?9mcyYMgFJHmmSnQ3iFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="106884776"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mVUnNBhYuQztLClAKCXvWb0EkZ02/GwYzCpwEzH0pv0jvtve2nFvzqG0jV+3SdW3PCVqPZ4O/LhAbErA/TPpmL3+DyjVKNzNv+idOIEcaSFigg0MiF/GDlFKK1oAc/DoztiLgTxOku4aU65QXeKUPxj36pGor5K1I9P+oYGGmwUaniiJdXfP6jo14cStLIbyXzdcgu9yLax5IUWJ0C3oAm+M/qoduGeESu0QQfCucmJsVbxEyB15ohs6YctuMzxLZqREfm6MceDn6dR1ChuJb11cIhN8dzJYZFb7QxnifmSnHpdzHO48CJHQeRMHy3ZBt/vX9v+mKjI0yZHJbIWOJw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=njO6siTvEdw9eAhZlRafJ1EmMyH9PhIGPS5uu3T0qFA=;
 b=TOjV+M3hwYwdyTS1NeD6RxzMpaPh7J3oiwl95PE/QHF28XUJAFBDclZ0FxnrFgeIHYil2PCQTm586RsUNpfrzy6MG1GRF+pIXWbvpfpv0YSZFdm2XjjZ7wwRaSc2VBM8peUEBMN6epqX9X6ZX9XPgazr1SoZTsuODXowGmqAWPZO9InA24VYWQuKddA9MLKIaGkYTn7ci/Clmxn3PTVuag0RNP7FjEwNheRmRwHYI99zt70ndHnZVM4/dbobv0cUAbeZ+DRyX0yDTn6ktSgBPkGllblf5LzUlAplII+pOmwrMZCBR+0Rfy9UOSGi3Jc49VfWa6zBmA6Rozojmo2Krw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=njO6siTvEdw9eAhZlRafJ1EmMyH9PhIGPS5uu3T0qFA=;
 b=lq8ZPiqAIprPs94ksrlG+ImM8haJarARWIVGmfAeComHM21kpkdTAfCVjf3lu5qFPjMvRJbSPv3svPGDIQsU5xW4t9K8lGS/j1kbmYu07GGCmlaZw1dDzYJvUTtB7Z+//8oS1TH6JSPRU9zp6u3P883GrKzgkczIBU9dyYwCrJc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 2 May 2023 13:00:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Message-ID: <ZFDtMMUzBGXFZPsQ@Air-de-Roger>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
 <600c8c62-5982-ec7e-7996-5b7fbfb40067@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <600c8c62-5982-ec7e-7996-5b7fbfb40067@apertussolutions.com>
X-ClientProxiedBy: LO4P123CA0087.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:190::20) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM4PR03MB6128:EE_
X-MS-Office365-Filtering-Correlation-Id: 674fda44-a22d-47ff-0478-08db4afc6324
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 674fda44-a22d-47ff-0478-08db4afc6324
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 11:00:06.0664
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: i1kUJTobExAt2oY4ggIv4p0P4U1dJGqGWAahyaTPYIiDQwS4uk6nRmA+kiVxYcXXxgDD5M6ox4vlw9jKn1TFMw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6128

On Tue, May 02, 2023 at 06:43:33AM -0400, Daniel P. Smith wrote:
> 
> 
> On 5/2/23 03:17, Jan Beulich wrote:
> > Unlike for XEN_DOMCTL_getdomaininfo, where the XSM check is intended to
> > cause the operation to fail, in the loop here it ought to merely
> > determine whether information for the domain at hand may be reported
> > back. Therefore if on the last iteration the hook results in denial,
> > this should not affect the sub-op's return value.
> > 
> > Fixes: d046f361dc93 ("Xen Security Modules: XSM")
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > ---
> > The hook being able to deny access to data for certain domains means
> > that no caller can assume to have a system-wide picture when holding the
> > results.
> > 
> > Wouldn't it make sense to permit the function to merely "count" domains?
> > While racy in general (including in its present, "normal" mode of
> > operation), within a tool stack this could be used as long as creation
> > of new domains is suppressed between obtaining the count and then using
> > it.
> > 
> > In XEN_DOMCTL_getpageframeinfo2 said commit had introduced a 2nd such
> > issue, but luckily that sub-op and xsm_getpageframeinfo() are long gone.
> > 
> 
> I understand there is a larger issue at play here, but neutering the
> security control/XSM check is not the answer. This literally changes the
> way a FLASK policy that people currently have would be enforced, and is
> contrary to how they understand the access control that it provides. Even
> though the code path does not fall under the XSM maintainer, I would NACK
> this patch. IMHO, it is better to find a solution that does not abuse,
> misuse, or invalidate the purpose of the XSM calls.
> 
> On a side note, I am a little concerned that only one person thought to
> include the XSM maintainer, or any of the XSM reviewers, on a patch (and
> the discussion around it) that clearly relates to XSM, so that we could
> gauge its consequences. I am not assuming intentions here, only wanting
> to raise the concern.
> 
> So for what it is worth, NACK.

I assume the NACK is to the remarks after the '---'?

The patch itself doesn't change the enforcement of the XSM checks; it
just prevents returning an error when the information for the last
domain in the loop cannot be fetched.

Am I missing something?

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 11:06:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:06:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528500.821735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnpT-0000fH-GP; Tue, 02 May 2023 11:05:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528500.821735; Tue, 02 May 2023 11:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnpT-0000fA-DT; Tue, 02 May 2023 11:05:55 +0000
Received: by outflank-mailman (input) for mailman id 528500;
 Tue, 02 May 2023 11:05:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptnpR-0000ey-Up
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:05:53 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20611.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d167fa8-e8d9-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 13:05:52 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8367.eurprd04.prod.outlook.com (2603:10a6:102:1c7::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 11:05:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 11:05:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d167fa8-e8d9-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A4scwBzn9rvRbQBKU72IRVBfAVDq/7RYW9nTKHVoVFVqr2yXEWGCwVkwWj3TuNPR7VBy7RR6JLD3ex7pgO4GZwGbhOYjUvvJxgEaoM2SEj5fwAERzp+PtT1Tm9zjg/1FUNgFyC/mCv6ZGgqy8xpf5RTIEw+UMu7YLnnqw9qVXzmvkPcCNnRIZM/5aS96pUj/t0brqcx7pv7ZyZ3mt8H5Ysbh5uyWD5F1wI6bmGBddbEqMl13vTOz//ILcHU/4rfFCHNdQJSU310LUGklofTAjO0ASUSnx/ugdeHakdBzxc1lt4AoMq89jkUgLVDmLliBAnJrgW9iGf9NrJmHrbiqxg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EUcaUhaJh2P17DmQn85ZzGyhh7j+KHpKjFu6faQt5hY=;
 b=TT0lK35Wvw0j3cliYl5EqY63NUKorlCnHag7Vx3DSMQbToDObzRXslF5wWOeJQ5G+xkt3HRz7fcPgrUCBC/3EP4vpDAOstGhNGHw30sZ7/o2B/MRCrmpZTJQXPAI6JUFABjr5UH5864NewexBR4wIOpJDQThkSXyNfce/seqo13ddmTroU08TWsu9QXaRdQpmhCgKswc067/n/hAtVWcKAQzBd9i5TaHf2sGkpSiHOBEXYmHmjPYR//Dv3RFQxBFlJlhhhx8wU1GNTnLax/3Lpl1U45D+9VN1gg1rQmlNFBayvlx8dHKN/5fxGnm/t3eXWnDApqs+C2bgjdEBFtEgA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EUcaUhaJh2P17DmQn85ZzGyhh7j+KHpKjFu6faQt5hY=;
 b=4+sK2Jlre2Un0r9Er1pGg2KFndtv+8KxMucRDUDDicTB62ouo4PtWDRTRIwToP3Jl4Lf8EvitW/1kX1BJYLsSQLxjdhSdhQeJCUBuRPIawu11Cng3SOHddu5K8cvQm3psaQhillkE0C8P9q7deaTw1BPw9biMnqT9wHAgmhPDQprfa53UjZ/NTkDZCR85oLIR8JnQPGqHjHdwLPBvQsiYlkYVPztBUZ9RxQVpIGk7vLhJDzROYgYZqLKr9WNJBQxfyfxts3l3vvW3Uh4s7QW7h32wvTf1gR9flSN8aDlKcY63PhCPuknQJQZLxkdkIHMrvNJbFwXTmhT0jEPmh64Mw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6aa9f2a5-bb57-4c56-84b8-5bc63b47cfa4@suse.com>
Date: Tue, 2 May 2023 13:05:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
 <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
 <068ad06b-d766-4fc7-6bbc-289911441ee7@suse.com>
 <ZFDrT87RixpOmMfq@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFDrT87RixpOmMfq@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0120.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8367:EE_
X-MS-Office365-Filtering-Correlation-Id: 5aba1d6c-8d78-42cf-d8d7-08db4afd3070
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5aba1d6c-8d78-42cf-d8d7-08db4afd3070
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 11:05:50.4142
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WPa4b8uij6EdTz6x0/X5WM4Z+3XHmY/a9YAp8kgaD5lIjPgyk887zvaWrx9eCSjqSiBe+JEbDibLKN2bXRWnzg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8367

On 02.05.2023 12:51, Roger Pau Monné wrote:
> On Tue, May 02, 2023 at 12:28:55PM +0200, Jan Beulich wrote:
>> On 02.05.2023 11:54, Andrew Cooper wrote:
>>> On 02/05/2023 10:22 am, Roger Pau Monne wrote:
>>>> Ensure that the base address is 2M aligned, or else the page table
>>>> entries created would be corrupt as reserved bits on the PDE end up
>>>> set.
>>>>
>>>> We have found a broken firmware where the loader would end up loading
>>>> Xen at a non 2M aligned region, and that caused a very difficult to
>>>> debug triple fault.
>>>
>>> It's probably worth saying that in this case, the OEM has fixed their
>>> firmware.
>>
>> I'm curious: What firmware loads Xen directly? I thought there was
>> always a boot loader involved (except for xen.efi of course).
> 
> This was a result of a bug in firmware plus a bug in grub, there's
> also one pending change for grub, see:
> 
> https://lists.gnu.org/archive/html/grub-devel/2023-04/msg00157.html
> 
> The firmware would return error for some calls to Boot Services
> allocate_pages method, and that triggered a bug in grub that resulted
> in the memory allocated for Xen not being aligned as requested.
> 
>> I'm further a little puzzled by this talking about alignment and not
>> xen.efi: xen.gz only specifies alignment for MB2 afaik. For MB1 all
>> it does specify is the physical address (2Mb) that it wants to be
>> loaded at. So maybe MB2 wants mentioning here as well, for clarity?
> 
> "We have found a broken firmware where grub2 would end up loading Xen
> at a non 2M aligned region when using the multiboot2 protocol, and
> that caused a very difficult to debug triple fault."
> 
> Would that be better?

Yes indeed, thanks.

>>>> @@ -670,6 +674,11 @@ trampoline_setup:
>>>>          cmp     %edi, %eax
>>>>          jb      1b
>>>>  
>>>> +        /* Check that the image base is aligned. */
>>>> +        lea     sym_esi(_start), %eax
>>>> +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
>>>> +        jnz     not_aligned
>>>
>>> You just want to check the value in %esi, which is the base of the Xen
>>> image.  Something like:
>>>
>>> mov %esi, %eax
>>> and ...
>>> jnz
>>
>> Or yet more simply "test $..., %esi" and then "jnz ..."?
> 
> As replied to Andrew, I would rather keep this inline with the address
> used to build the PDE, which is sym_esi(_start).

Well, I won't insist, and you've got Andrew's R-b already. (What I would
appreciate though as a minimal change is to switch from AND to TEST. We
really should avoid using AND or SUB when in fact we only care about the
flags output, and hence TEST or CMP can be used. It doesn't matter much
on one-time paths like the one here, but each unnecessary use sets a bad
precedent.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 11:11:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:11:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528505.821745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnuJ-0002E2-5r; Tue, 02 May 2023 11:10:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528505.821745; Tue, 02 May 2023 11:10:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnuJ-0002Dv-1z; Tue, 02 May 2023 11:10:55 +0000
Received: by outflank-mailman (input) for mailman id 528505;
 Tue, 02 May 2023 11:10:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=87GZ=AX=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1ptnuH-0002Do-Uj
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:10:53 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 00942132-e8da-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 13:10:53 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-3f1728c2a57so36242665e9.0
 for <xen-devel@lists.xenproject.org>; Tue, 02 May 2023 04:10:52 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 n3-20020a7bcbc3000000b003f175b360e5sm35359896wmi.0.2023.05.02.04.10.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 02 May 2023 04:10:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00942132-e8da-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683025852; x=1685617852;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=CrppReUeYZVSFZqFBkVwrLemCcLTs/Xn38ozhZ1/RHE=;
        b=hlQEGzHU0E+m+gi964/ghAazVExWws+a7i0EwYHZdziXCzEpskfCCNuI/N/1IuPO1j
         u97I3vSvdyb/Gaj7kds9TqayTtzJi8fQmCjvAfauZIkdeUaF+F9tb0h01En3MXONrMHm
         9DlOdwkzCYHAM3EuPuTu4LLTbc31hsie/yFh8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683025852; x=1685617852;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=CrppReUeYZVSFZqFBkVwrLemCcLTs/Xn38ozhZ1/RHE=;
        b=PbLcCwWBWKomk+YAqS/uEUrWu4Eemmb5jrAcq3DUDUdIKbRgcGrg0UXwcPSPY1AwU3
         9ANoyXkkFc2GRJhaCYsUaG93PTp3BlFySCIkWiL1IioVMCEjpxuUojL5KXq/mmY+3zu4
         uabd6/YBDOMEbO1oD0/XukG+/ES5jmKeHjv3Yabk9RI2SdNlU7r8hr1XygladLtyZL7m
         cJPSYGzfZOi8dwKf4I0Bteub8/9zKeJmdWCUftNNohOfMy1Zx0mjjOw2ld8H3cXT37yY
         U5KMNxe/gGW6TjhrUyhDVgZnvUsXHJOLhfgSZj+1NXz3v3k+A5V+/S2ixcvyoVZNxnyg
         BsMA==
X-Gm-Message-State: AC+VfDy+cdSsMsFA6o3EpXnW4L2lmrLA8rH9KW2wT4pvwvJKGQ567rvC
	5jIqxHgXmSLBjhMdpMZgIw3iNw==
X-Google-Smtp-Source: ACHHUZ7QZc1lIYE4w1r1TuXOZ+DRhv8aFFUNbVFyuIq+VlrY8aE+/DIk30O6TZ/kvStW10oG7gHqIw==
X-Received: by 2002:a1c:f019:0:b0:3f0:49b5:f0ce with SMTP id a25-20020a1cf019000000b003f049b5f0cemr11324696wmb.12.1683025852268;
        Tue, 02 May 2023 04:10:52 -0700 (PDT)
Message-ID: <6450efbb.7b0a0220.9c334.a329@mx.google.com>
X-Google-Original-Message-ID: <ZFDvuHddJlX7W66s@EMEAENGAAD19049.>
Date: Tue, 2 May 2023 12:10:48 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
	committers@xenproject.org, michal.orzel@amd.com,
	xen-devel@lists.xenproject.org
Subject: Re: xen | Failed pipeline for staging | 6a47ba2f
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
 <ca0144a6-2c57-0cc3-fd27-5dbe59491ef3@citrix.com>
 <alpine.DEB.2.22.394.2304291808420.974517@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2305011835000.974517@ubuntu-linux-20-04-desktop>
 <333df991-58a8-f4e0-b46c-9f480cd34213@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <333df991-58a8-f4e0-b46c-9f480cd34213@suse.com>

On Tue, May 02, 2023 at 08:53:31AM +0200, Jan Beulich wrote:
> I'm also having a hard time seeing what failure case the test ended
> up encountering: There are only two errors which can occur - one
> from the XSM hook (which is mishandled, and I'll make a separate
> patch for that) and the other from failing to copy back the info for
> the domain being looked at. I hope we can exclude the former, so are
> you suggesting the info struct copy-back is failing in your case?

Something's off somewhere, that's for sure. I'll post the remainder of
the series (not all of it made it to staging) so it's not left hanging
and start investigating this failure.

Cheers,
Alejandro


From xen-devel-bounces@lists.xenproject.org Tue May 02 11:11:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528507.821755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnv4-0002jn-Ei; Tue, 02 May 2023 11:11:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528507.821755; Tue, 02 May 2023 11:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnv4-0002jg-AM; Tue, 02 May 2023 11:11:42 +0000
Received: by outflank-mailman (input) for mailman id 528507;
 Tue, 02 May 2023 11:11:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptnv3-0002jT-5a
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:11:41 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2081.outbound.protection.outlook.com [40.107.13.81])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c06659d-e8da-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 13:11:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB8059.eurprd04.prod.outlook.com (2603:10a6:10:1e9::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 11:11:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 11:11:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c06659d-e8da-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lSV+nmqpe7tb4fBc6V/h5l9GR1v4OFNloFdlKtBv+U8wvV2gBUBRcM7ItadxZA7FXhU7zse8/iUP4Z1cRDoELrKZn4Gzg43P2sQCvcFfVH3gGr9EpDfAoHM1j+Q7hIfsn51h4d35A7M1RDuVEK+R6eYQK8wHFZQyh4BZ6lH6kPLSUHIGhKNKpSMLnEaAKC6GFq5C0F1BpzK8FTiK21Beh/CiEd5ESaOYx3NaTc3Ju98Q66ghg3FEjy0QOGrNg9u4ISIwZS9TtWCSc8Sy9AcK2OaNONcc/rD8KPPsmZ45zmdSKLAmftAcd/gbh9Ju78K2PaaXVKW7aj61Rn84uzr0Ww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=n75EAu9Yc8oJm1hbgtfHoz+EO3uNctXze4Zfpbc9TFU=;
 b=Z4tD5AIZC/Ptji/ehoWxrAyDuS9l3MgsuP9bSm5vBMJf0JMsOTOYVR7GrzEDZsxt/HCXAyJ1jr9BBrLELynrzaqIxS2DDfSxP5Ieigmy67Ddfix26I3fUTpPiMCN56eM6KCfOSPK7JyWRhHkKsqUlBr266v95CY6RvOa8UFACSJqb23SX7dvlcUe6HgYC+Ahh5UHsrZaTfHWry0MPZ9vga9qhG2Fi2g4jkv7ptma14P+P1tVGzCxG6fdMVXZQfWNy4z6xRJ314AXo/1F8Nqop4u8Auu/meUa3zZJ/M5zWodXGJSCUwgZMb0diUUAtUuHP3XY27KP631KKow+2Z55KA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=n75EAu9Yc8oJm1hbgtfHoz+EO3uNctXze4Zfpbc9TFU=;
 b=JyS0R/ccKwJ0+T/gHZsfjY5+yNJ5jH7MfFad3g14POGNWw/F4xQ9W/sd2b3INy08sZ0Hd6qlCoOW7QGiKpqXUfSdIvSOs2buHvOCRzh9CZz5HhTBNyyqiTkQh2tvvxuXrXasORpiG1cJXXDCBrz1NyxpE37LKHmE1HQx1fLs4qOJ0jCqCaN1NMYQ3TNZ1nTFNtbDKR4HUUsVCT6ve/D/Smet2j47an94XW3EDEqkgcLv5es/KiaakTeSQiZxP4p7syvD9hMcc0A8Z5sBXh43VPi/WvJQRNzl6dPlJOmyID9MIeY4rutNrn6+91yM6HwPRZ/aLEAk8IHJiKc7oN5+yA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <269b0894-5fe6-fd71-9f6d-24e3b08973cf@suse.com>
Date: Tue, 2 May 2023 13:11:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
 <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
 <068ad06b-d766-4fc7-6bbc-289911441ee7@suse.com>
 <ZFDrT87RixpOmMfq@Air-de-Roger>
 <6aa9f2a5-bb57-4c56-84b8-5bc63b47cfa4@suse.com>
In-Reply-To: <6aa9f2a5-bb57-4c56-84b8-5bc63b47cfa4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0142.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB8059:EE_
X-MS-Office365-Filtering-Correlation-Id: 11758879-7dbe-400f-510e-08db4afdef20
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8b6wKl/O6oCEDbj1AgNsg6M/paTeHl34rpYBjJgWXpHxsU31mTN1DkfUe7XH8qd7lmIDiBuX+yj8nLr+0XSQ/8uOHPUTh8G9QsNCyzOkHqEp6+Vhk0RExav21fdXtlNmZSnwluvhMN0sKZhOzgI3EyX6sLrZyeYxiKSVncxhMPMN46/0zvEypF2b8nqDMTYlTJwGTVwVGN5ea8SPtvfXkUG35nuRiBdWl5Gl64tlW1Ec4O4pJKhqoCI3XHoyfCfW3zhzqiFWiX2U+i/GeMX2Lrpvt8ZiJGGe70j1wAF4xv+vyj6g3mqrOLgpZ+ByMxOc7bAuvoSasEw8eNcTtFCeksP/p6QolQsnr4/es3cVt+TGA5+N1gCB/rxY0fbFfmfJj0huT7YciXBnPzAW37CDNS27AgKw/pUCOl0m0H3qa7aX9d4en6Adno2LLKf0WvLXFe9F3JB7LY34Bz5H/CsDSyRrOIWVQDVavr+Qcvkbcl6dSGa7/rwrPBcjblSRmPCknwiNt0zhzzGuUshALpqze4ALbK+moMspi/OsU9eVZjodyy78zyb3w+s40HL5EwAuosG4RtBt39gl4i5R+NWJWM/0a7Oz6QTIIwoBBI2ndc3iwwWTXzib94NfW8MSYGPBQqxp8DBaorhKRJQV657Q3A==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(366004)(346002)(396003)(376002)(39860400002)(451199021)(478600001)(26005)(6506007)(6512007)(53546011)(86362001)(31696002)(6486002)(83380400001)(54906003)(186003)(2616005)(31686004)(36756003)(66946007)(66556008)(66476007)(2906002)(316002)(6916009)(4326008)(38100700002)(8936002)(8676002)(5660300002)(41300700001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OWk2QkxsY0N4NmJoUW5sQjZwOUxHV0pxZWJ6WnV6THErZzBUL1lxUWJxNmRI?=
 =?utf-8?B?V240bWE2bDFHZVMwd1Npc05VZ1VCY3VFTlI5NHFndTE4V1V6cFpMMlBzcklO?=
 =?utf-8?B?a011cEdDdTdVR25aNDR0N0Q4M2grTjhZUjJDUVQ1UGt5N1g1REFpT1FpdExB?=
 =?utf-8?B?YjJDVTAxaFFEUlFaU043NjI0blFsbFZqY1NWTG12SllrTFFmM0xhL1BLd1Fl?=
 =?utf-8?B?Nk9WV0s3aE0zV3h1VExnWkRxNEJyLzl2S00xRS9VRzllSnRWNi9MRGJMdHh4?=
 =?utf-8?B?MjNaVUdkSC8xMjlNVHNSU0RXVnRXU3l2bTJJWHVobmJnYS9qRDBya3JxWTVC?=
 =?utf-8?B?cWRFN0xzUU5WMmFCczZTZU52R3VTVEZ6R2hVNGxwWm5TQnRDREdYRGJFWEdX?=
 =?utf-8?B?QmtZdTBRMTU0VStDc0c3WWNrTC9xd3ZxajhLWnNsbmlmTG1rTk5ycXozTE02?=
 =?utf-8?B?K05JeDBnb2pwZUVyMjVjUmR2RWFSblkzdi9LbGpnMFNjZnI5UXczS2RyVWhw?=
 =?utf-8?B?cUV3QVBnVGdYd3ZDQmt3WFVSR2p5eGxjUkVWWmtBMG4xM2pOY1dIWFF0UlFk?=
 =?utf-8?B?WU5jV09WSFFVUVZ0b2t2aUJ4MWZieHZYL1lmd2dGMTlmZkhPdDZrZGNlVW1Q?=
 =?utf-8?B?eGZzSnppRjJIUDhkdnVKSkRxZW9sZnRFM29WWDlNdVZQclU1M3RlR3dUUm9h?=
 =?utf-8?B?MUpRb3RTakJJV0RIVXJLTERwNTRQQ1NEQnpMK3lLRStKRHFXaGVtR1Y5UTlp?=
 =?utf-8?B?b05uMGdBdE5qY3RFdkVjSmwwVzRvVHViQ0xtcU5aVWYrNHpFdG42dys4UkZp?=
 =?utf-8?B?MU9UY1RSSC80Z0EvRFVhYllER2JTRnJMcWx0Q3NyT1JNSFBRSm5Yd2NpcWR5?=
 =?utf-8?B?R2prTGtaN2xHZ2kxcTZ4dU1GNk9WZWQ1SmdlWmtqYU1EaXNpT1FhUDh2enk0?=
 =?utf-8?B?MzkrdDhyUlpLV281K2YySWJ4Q1VvYVNaYmxTa1dlcnJ0L1hiVXpsT01HQW1B?=
 =?utf-8?B?RzgyNlRTdS9YUVk1OFZjSlVOQTFXVVFVdll0REtPd09ZaVJSODlsY1Q0NXRa?=
 =?utf-8?B?QnFFKzlCUWV5TWNpOFJsQU53dVZYcVpwUzlHSENhRHJEb1pXK2hLWlZCYWFB?=
 =?utf-8?B?WVlubVdNdjB2c3RQdjV0Rm9VK2wzRHVvWGl5aFVBYjBBdkNnVWpzT2tieWhj?=
 =?utf-8?B?R3pmUjlIODlzcEh5dkQ5QkZ2WCtkcVJNSithYzFWU3AyeTh2ak81bUlXcGhq?=
 =?utf-8?B?alRta2trTnJOZE1kV0Q5T2hRcnp0UytTZG9ycGlmV3VKcVJXenFLam1hRnFO?=
 =?utf-8?B?SHcxNmlKNHpidHhQS3p0RTNidHVnajRiVCttYXk4NFAvbUlMeXI1UnJxb0Z0?=
 =?utf-8?B?bWZkK0k5LzRrcjU1OE1FL29ycWY3aXlaN2swREhUYmk1TzZOMjZ4eFN3S0lP?=
 =?utf-8?B?bTBicXRYdEl4R2oyMUlJbnhsd2RVMDJaVWNVUHMwYlYvci9mZlJKUkhqZHdW?=
 =?utf-8?B?NjBwTzRtc09uUFlMOWNadmxwOUhpWUkxN0Vyb3pZNGhiRzNDS2xLQnRFWlBu?=
 =?utf-8?B?M1dqQndNbURTUFRET1YzOU1Ic3c0WXNrOGdpWnIydldpdi9ZSGZEdnQ2NS8z?=
 =?utf-8?B?T09SeEM5SWI0bUcyUm9QMVNFSU1acVlBdWEzZ3c0SnFIbVZqZU82OWg4bW5r?=
 =?utf-8?B?c0w1RDB0UXQzNkNaN3M1VExRcGRXTlpJOW14RmNMUUJjRTF2L2ZzeHpIQUJy?=
 =?utf-8?B?bVRiczZrMkIydTF1UXpBbDk2LzFPT1hzS3VCYzBoNDFrQk1kTFVGSUFXQyti?=
 =?utf-8?B?YWpTRG1VZ24ycThDelZjK2UwR2o2WUl3ODdHL2ZtOXYzTmxzQ1l6ZnR4OW50?=
 =?utf-8?B?VXFxcVpwVTY2MHRDUUZVQjNWMnhxS0xkWDBiNzkwemhEN090RjhhOEh5eVgx?=
 =?utf-8?B?cUlEY3RXTU9ScHpGYXdVUlFpd25yOXV4K2lQSnRRWUNMTzZwZW5Ldy9FbDhG?=
 =?utf-8?B?UlhWM0N5OXNjaFNOK0JFZDNCZXRHRzRwbHNZOC8rSndwU1ZlZWcrb2dCMlNv?=
 =?utf-8?B?dmdsQTlUWkxza1ZoQmt0ZDZCbFRiMG82QXl3cGpiZitqZG1OK2lQVGN1alVE?=
 =?utf-8?Q?eVbnmVnQvSvR1N56ZZKp1GrYE?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 11758879-7dbe-400f-510e-08db4afdef20
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 11:11:10.3460
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fB9eJfc1xatQtFuvw/5kBj2QeaOPYuCbX6A2hpP2pISzTdDT6rEvYIR+A3XEwVTcIxBIdN8Ki/Argld6UICJQQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB8059

On 02.05.2023 13:05, Jan Beulich wrote:
> On 02.05.2023 12:51, Roger Pau Monné wrote:
>> On Tue, May 02, 2023 at 12:28:55PM +0200, Jan Beulich wrote:
>>> On 02.05.2023 11:54, Andrew Cooper wrote:
>>>> On 02/05/2023 10:22 am, Roger Pau Monne wrote:
>>>>> @@ -670,6 +674,11 @@ trampoline_setup:
>>>>>          cmp     %edi, %eax
>>>>>          jb      1b
>>>>>  
>>>>> +        /* Check that the image base is aligned. */
>>>>> +        lea     sym_esi(_start), %eax
>>>>> +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
>>>>> +        jnz     not_aligned
>>>>
>>>> You just want to check the value in %esi, which is the base of the Xen
>>>> image.  Something like:
>>>>
>>>> mov %esi, %eax
>>>> and ...
>>>> jnz
>>>
>>> Or yet more simply "test $..., %esi" and then "jnz ..."?
>>
>> As replied to Andrew, I would rather keep this inline with the address
>> used to build the PDE, which is sym_esi(_start).
> 
> Well, I won't insist, and you've got Andrew's R-b already.

Actually, one more remark here: While using sym_esi() is more in line
with the actual consumer of the data, the check triggering because the
transformation yields a misaligned value (in turn because of a bug
elsewhere) would produce a misleading error message: We might well
have been loaded at a 2Mb-aligned address, and it would instead be the
internal logic that was wrong. (I'm sorry, now you'll get to judge
whether keeping the check in line with the other code or with the
diagnostic is going to be better. Or split things into a build-time
and a runtime check, as previously suggested.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 11:13:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528510.821764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnx5-0003Li-Qe; Tue, 02 May 2023 11:13:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528510.821764; Tue, 02 May 2023 11:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnx5-0003Lb-No; Tue, 02 May 2023 11:13:47 +0000
Received: by outflank-mailman (input) for mailman id 528510;
 Tue, 02 May 2023 11:13:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=87GZ=AX=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1ptnx5-0003LR-0N
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:13:47 +0000
Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com
 [2a00:1450:4864:20::336])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 672ba75a-e8da-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 13:13:45 +0200 (CEST)
Received: by mail-wm1-x336.google.com with SMTP id
 5b1f17b1804b1-3f315735514so163681275e9.1
 for <xen-devel@lists.xenproject.org>; Tue, 02 May 2023 04:13:45 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 e5-20020a5d5005000000b0030469635629sm25685459wrt.62.2023.05.02.04.13.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 02 May 2023 04:13:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 672ba75a-e8da-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683026024; x=1685618024;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=RsC+epa7CsvsdYdUD2vHUSqP2BJfk8zEfXqKTWfPouw=;
        b=WkEwIoz90+GkvJBvSoZcRgsY7VdSW391O/IsBn9N2/D1hnSOavHBtc7clB+iRlJWj7
         fsNjk+Ro+j4lfpQ/3rcdJFrMZ+y20hmsdLDLAPJ9IDi16CrdjtSen4Ewd2qd17fPqum5
         0MCySztnmMDYlfCTAnnB33LrmhEqVRUaSLouI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683026024; x=1685618024;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=RsC+epa7CsvsdYdUD2vHUSqP2BJfk8zEfXqKTWfPouw=;
        b=KEd9GDLGqcz42Z3gqCcDG/Rq6BF5gcFhFpdIU2cP4Q24yIs/OG470cK3y9URaMVmFg
         XBN7hvr2OVBFfRWIKXmhwK8/aLRO79+Wp6bmPm8I6UF05VO31Hqv7BqQOeQmppaDh0D+
         SS8J2ziKpmh+nUz/eYd3vCOW68iO1KwWYvfccTy6UPN4ZD9E68B1aiRQ+JSM9qBV1ZD7
         aO8HYpDlbOAex9N+nOW4CZq4r2LOeOR3+WUy1wqyFqPE8zf1vbr4bQ+6YTSCVYLyc8cv
         8pSyYc+dgW9LiM0MDnm6p52D1ntFImoMWu749EDLtx/MhFurkoKOVHVA0CvKTMw5IzRE
         7SfQ==
X-Gm-Message-State: AC+VfDy4fnoDWXJA3zfmpkdDc49jziZS2BV+/cttD8TDQoJToQ37rViH
	AUpeGm3UCtBIN7UkXn+h1PygvkgF/ATsf/y8dGI=
X-Google-Smtp-Source: ACHHUZ46cHk7Or+b/FgUpaGyCoWKuBRHuMEFrC/bLfmR4PqOpsLEBPTOvww+VO4iHL4HAokusHbg6A==
X-Received: by 2002:a5d:6b01:0:b0:2fa:abcd:59a2 with SMTP id v1-20020a5d6b01000000b002faabcd59a2mr10558235wrw.30.1683026023975;
        Tue, 02 May 2023 04:13:43 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Tim Deegan <tim@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>
Subject: [PATCH v3 0/3] Rationalize usage of xc_domain_getinfo{,list}()
Date: Tue,  2 May 2023 12:13:35 +0100
Message-Id: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The first 4 patches of v2 already made it to staging. This is a corrected
repost of the 3 remaining ones.

Original cover letter:

xc_domain_getinfo() returns the list of domains with domid >= first_domid.
It does so by repeatedly invoking XEN_DOMCTL_getdomaininfo, which leads to
unintuitive behaviour (asking for domid=1 might succeed, returning domid=2).
Furthermore, N hypercalls are required, whereas the equivalent functionality
can be achieved with XEN_SYSCTL_getdomaininfo.

Ideally, we want a DOMCTL interface that operates over a single precisely
specified domain and a SYSCTL interface that can be used for bulk queries.

All callers of xc_domain_getinfo() that are better off using SYSCTL are
migrated to use that instead. That includes callers performing domain
discovery and those requesting info for more than 1 domain per hypercall.

A new xc_domain_getinfo_single() is introduced with stricter semantics than
xc_domain_getinfo() (failing if domid isn't found) to migrate the rest to.

With no callers left the xc_dominfo_t structure and the xc_domain_getinfo()
call itself can be cleanly removed, and the DOMCTL interface simplified to
only use its fastpath.

With the DOMCTL amended, the new xc_domain_getinfo_single() drops its
stricter check, becoming a simple wrapper to invoke the hypercall itself.

Alejandro Vallejo (3):
  tools: Modify single-domid callers of xc_domain_getinfolist()
  tools: Use new xc function for some xc_domain_getinfo() calls
  domctl: Modify XEN_DOMCTL_getdomaininfo to fail if domid is not found

 tools/console/client/main.c             |  7 +--
 tools/debugger/kdd/kdd-xen.c            |  5 +-
 tools/include/xenctrl.h                 | 43 -------------
 tools/libs/ctrl/xc_domain.c             | 82 ++-----------------------
 tools/libs/ctrl/xc_pagetab.c            |  7 +--
 tools/libs/ctrl/xc_private.c            |  9 +--
 tools/libs/ctrl/xc_private.h            |  7 ++-
 tools/libs/guest/xg_core.c              | 23 +++----
 tools/libs/guest/xg_core.h              |  6 +-
 tools/libs/guest/xg_core_arm.c          | 10 +--
 tools/libs/guest/xg_core_x86.c          | 18 +++---
 tools/libs/guest/xg_cpuid_x86.c         | 40 ++++++------
 tools/libs/guest/xg_dom_boot.c          | 16 ++---
 tools/libs/guest/xg_domain.c            |  8 +--
 tools/libs/guest/xg_offline_page.c      | 12 ++--
 tools/libs/guest/xg_private.h           |  1 +
 tools/libs/guest/xg_resume.c            | 20 +++---
 tools/libs/guest/xg_sr_common.h         |  2 +-
 tools/libs/guest/xg_sr_restore.c        | 17 ++---
 tools/libs/guest/xg_sr_restore_x86_pv.c |  2 +-
 tools/libs/guest/xg_sr_save.c           | 27 ++++----
 tools/libs/guest/xg_sr_save_x86_pv.c    |  6 +-
 tools/libs/light/libxl_dom.c            | 17 ++---
 tools/libs/light/libxl_dom_suspend.c    |  7 +--
 tools/libs/light/libxl_domain.c         | 13 ++--
 tools/libs/light/libxl_mem.c            |  4 +-
 tools/libs/light/libxl_sched.c          | 26 ++++----
 tools/libs/light/libxl_x86_acpi.c       |  4 +-
 tools/misc/xen-hvmcrash.c               |  6 +-
 tools/misc/xen-lowmemd.c                |  6 +-
 tools/misc/xen-mfndump.c                | 22 +++----
 tools/misc/xen-vmtrace.c                |  6 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c     |  6 +-
 tools/vchan/vchan-socket-proxy.c        |  6 +-
 tools/xenpaging/xenpaging.c             | 10 +--
 tools/xenstore/xenstored_domain.c       | 15 +++--
 tools/xentrace/xenctx.c                 |  8 +--
 xen/common/domctl.c                     | 32 +---------
 38 files changed, 184 insertions(+), 372 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 11:13:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528512.821785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnxB-0003tY-CZ; Tue, 02 May 2023 11:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528512.821785; Tue, 02 May 2023 11:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnxB-0003tP-9N; Tue, 02 May 2023 11:13:53 +0000
Received: by outflank-mailman (input) for mailman id 528512;
 Tue, 02 May 2023 11:13:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=87GZ=AX=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1ptnx9-0003LR-RI
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:13:51 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a25db82-e8da-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 13:13:50 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-2fddb442d47so3377411f8f.2
 for <xen-devel@lists.xenproject.org>; Tue, 02 May 2023 04:13:50 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 e5-20020a5d5005000000b0030469635629sm25685459wrt.62.2023.05.02.04.13.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 02 May 2023 04:13:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a25db82-e8da-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683026029; x=1685618029;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=6NMka2gY1+NXQR3tdXq6DZrNuT/nqEX+jYw5XJAP3Fg=;
        b=UAWniR9nix166rgK//vtzWkMtmKcU+ns+MqC/YLYxV7C+hhT/UsSmOeYd+IYGb8Wdv
         zu5+PaVOaRWQgkZ+4SYMZjrQ8Ee/SxcjeiobayebmSf/+pedDY0ufpLgzjmb3oB5W76m
         8bVmYn4XnE8wKS61Eh0/2zP58gG7qq550R2no=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683026029; x=1685618029;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=6NMka2gY1+NXQR3tdXq6DZrNuT/nqEX+jYw5XJAP3Fg=;
        b=kNm4l48e/1KAoOfiBA1ggsX18HMg0AXn0qAwF68IE+jcTvJd8pfxiH8M48VS/yq7rf
         2wo/YZihFwM6fkmzh4br7d2gVT4mT4/KOCxUtWkCPLQ+sUk6fjaznDoa1FzyrMMPoHVr
         61LsHV+gg1yxECOQWE36h6qQbS4pI/cnUUdYJpfWXc3lUtwoAOBNA+c+9Lymwf6o1Myg
         VrvlQADz5xNdfaUq+96Afq0KkHa3OutkbImKmQf7EbX+vbwfCO0MttaJCu8veVZC+obC
         GEcYRJfmj7IM01GpSL/wt/Wci8UhqSjYrJMmj5CGtZaMMlRI+SFisDAHbozbxo3ouPYE
         V/Mg==
X-Gm-Message-State: AC+VfDxLelPZts4isipOoLwcVtmJ0TtGpYXf8gU4ErBiEBQOj19vjI2J
	1Jw9KZt/4vzLozgR02ZLHlkMZU3gR8e4SNzpLGI=
X-Google-Smtp-Source: ACHHUZ50euzeKBOB3ItpiDW44fY+K/YB3syuNexDFtOiirahG7OAo2y9/V6qt/kl5T86jzDCw1etcQ==
X-Received: by 2002:a5d:5410:0:b0:305:fbfb:c7d7 with SMTP id g16-20020a5d5410000000b00305fbfbc7d7mr6787591wrv.44.1683026028927;
        Tue, 02 May 2023 04:13:48 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 3/3] domctl: Modify XEN_DOMCTL_getdomaininfo to fail if domid is not found
Date: Tue,  2 May 2023 12:13:38 +0100
Message-Id: <20230502111338.16757-4-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It previously mimicked the getdomaininfo sysctl semantics by returning
the first domid higher than the requested domid that does exist. This
unintuitive behaviour causes quite a few mistakes and makes the call
needlessly slow in its error path.

This patch removes the fallback search, returning -ESRCH if the requested
domain doesn't exist. Domain discovery can still be done through the sysctl
interface as that performs a linear search on the list of domains.

With this modification the xc_domain_getinfo() function is deprecated and
removed to make sure it's not mistakenly used expecting the old behaviour.
The new xc wrapper is xc_domain_getinfo_single().

All previous callers of xc_domain_getinfo() have been updated to use
xc_domain_getinfo_single() or xc_domain_getinfolist() instead. This also
means xc_dominfo_t is no longer used by anything and can be purged.

Resolves: xen-project/xen#105
Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>

v3:
 * No changes

---
 tools/include/xenctrl.h     | 43 ----------------------
 tools/libs/ctrl/xc_domain.c | 73 -------------------------------------
 xen/common/domctl.c         | 32 +---------------
 3 files changed, 2 insertions(+), 146 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 752fc87580..08a15c5911 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -444,28 +444,6 @@ typedef struct xc_core_header {
  * DOMAIN MANAGEMENT FUNCTIONS
  */
 
-typedef struct xc_dominfo {
-    uint32_t      domid;
-    uint32_t      ssidref;
-    unsigned int  dying:1, crashed:1, shutdown:1,
-                  paused:1, blocked:1, running:1,
-                  hvm:1, debugged:1, xenstore:1, hap:1;
-    unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
-    unsigned long nr_pages; /* current number, not maximum */
-    unsigned long nr_outstanding_pages;
-    unsigned long nr_shared_pages;
-    unsigned long nr_paged_pages;
-    unsigned long shared_info_frame;
-    uint64_t      cpu_time;
-    unsigned long max_memkb;
-    unsigned int  nr_online_vcpus;
-    unsigned int  max_vcpu_id;
-    xen_domain_handle_t handle;
-    unsigned int  cpupool;
-    uint8_t       gpaddr_bits;
-    struct xen_arch_domainconfig arch_config;
-} xc_dominfo_t;
-
 typedef xen_domctl_getdomaininfo_t xc_domaininfo_t;
 
 static inline unsigned int dominfo_shutdown_reason(const xc_domaininfo_t *info)
@@ -721,27 +699,6 @@ int xc_domain_getinfo_single(xc_interface *xch,
                              uint32_t domid,
                              xc_domaininfo_t *info);
 
-/**
- * This function will return information about one or more domains. It is
- * designed to iterate over the list of domains. If a single domain is
- * requested, this function will return the next domain in the list - if
- * one exists. It is, therefore, important in this case to make sure the
- * domain requested was the one returned.
- *
- * @parm xch a handle to an open hypervisor interface
- * @parm first_domid the first domain to enumerate information from.  Domains
- *                   are currently enumerate in order of creation.
- * @parm max_doms the number of elements in info
- * @parm info an array of max_doms size that will contain the information for
- *            the enumerated domains.
- * @return the number of domains enumerated or -1 on error
- */
-int xc_domain_getinfo(xc_interface *xch,
-                      uint32_t first_domid,
-                      unsigned int max_doms,
-                      xc_dominfo_t *info);
-
-
 /**
  * This function will set the execution context for the specified vcpu.
  *
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 66179e6f12..724fa6f753 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -357,85 +357,12 @@ int xc_domain_getinfo_single(xc_interface *xch,
     if ( do_domctl(xch, &domctl) < 0 )
         return -1;
 
-    if ( domctl.u.getdomaininfo.domain != domid )
-    {
-        errno = ESRCH;
-        return -1;
-    }
-
     if ( info )
         *info = domctl.u.getdomaininfo;
 
     return 0;
 }
 
-int xc_domain_getinfo(xc_interface *xch,
-                      uint32_t first_domid,
-                      unsigned int max_doms,
-                      xc_dominfo_t *info)
-{
-    unsigned int nr_doms;
-    uint32_t next_domid = first_domid;
-    DECLARE_DOMCTL;
-    int rc = 0;
-
-    memset(info, 0, max_doms*sizeof(xc_dominfo_t));
-
-    for ( nr_doms = 0; nr_doms < max_doms; nr_doms++ )
-    {
-        domctl.cmd = XEN_DOMCTL_getdomaininfo;
-        domctl.domain = next_domid;
-        if ( (rc = do_domctl(xch, &domctl)) < 0 )
-            break;
-        info->domid      = domctl.domain;
-
-        info->dying    = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_dying);
-        info->shutdown = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_shutdown);
-        info->paused   = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_paused);
-        info->blocked  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_blocked);
-        info->running  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_running);
-        info->hvm      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hvm_guest);
-        info->debugged = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_debugged);
-        info->xenstore = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_xs_domain);
-        info->hap      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hap);
-
-        info->shutdown_reason =
-            (domctl.u.getdomaininfo.flags>>XEN_DOMINF_shutdownshift) &
-            XEN_DOMINF_shutdownmask;
-
-        if ( info->shutdown && (info->shutdown_reason == SHUTDOWN_crash) )
-        {
-            info->shutdown = 0;
-            info->crashed  = 1;
-        }
-
-        info->ssidref  = domctl.u.getdomaininfo.ssidref;
-        info->nr_pages = domctl.u.getdomaininfo.tot_pages;
-        info->nr_outstanding_pages = domctl.u.getdomaininfo.outstanding_pages;
-        info->nr_shared_pages = domctl.u.getdomaininfo.shr_pages;
-        info->nr_paged_pages = domctl.u.getdomaininfo.paged_pages;
-        info->max_memkb = domctl.u.getdomaininfo.max_pages << (PAGE_SHIFT-10);
-        info->shared_info_frame = domctl.u.getdomaininfo.shared_info_frame;
-        info->cpu_time = domctl.u.getdomaininfo.cpu_time;
-        info->nr_online_vcpus = domctl.u.getdomaininfo.nr_online_vcpus;
-        info->max_vcpu_id = domctl.u.getdomaininfo.max_vcpu_id;
-        info->cpupool = domctl.u.getdomaininfo.cpupool;
-        info->gpaddr_bits = domctl.u.getdomaininfo.gpaddr_bits;
-        info->arch_config = domctl.u.getdomaininfo.arch_config;
-
-        memcpy(info->handle, domctl.u.getdomaininfo.handle,
-               sizeof(xen_domain_handle_t));
-
-        next_domid = (uint16_t)domctl.domain + 1;
-        info++;
-    }
-
-    if ( nr_doms == 0 )
-        return rc;
-
-    return nr_doms;
-}
-
 int xc_domain_getinfolist(xc_interface *xch,
                           uint32_t first_domain,
                           unsigned int max_domains,
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ad71ad8a4c..24a14996e6 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -314,7 +314,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         /* fall through */
     default:
         d = rcu_lock_domain_by_id(op->domain);
-        if ( !d && op->cmd != XEN_DOMCTL_getdomaininfo )
+        if ( !d )
             return -ESRCH;
     }
 
@@ -534,42 +534,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     case XEN_DOMCTL_getdomaininfo:
     {
-        domid_t dom = DOMID_INVALID;
-
-        if ( !d )
-        {
-            ret = -EINVAL;
-            if ( op->domain >= DOMID_FIRST_RESERVED )
-                break;
-
-            rcu_read_lock(&domlist_read_lock);
-
-            dom = op->domain;
-            for_each_domain ( d )
-                if ( d->domain_id >= dom )
-                    break;
-        }
-
-        ret = -ESRCH;
-        if ( d == NULL )
-            goto getdomaininfo_out;
-
         ret = xsm_getdomaininfo(XSM_HOOK, d);
         if ( ret )
-            goto getdomaininfo_out;
+            break;
 
         getdomaininfo(d, &op->u.getdomaininfo);
 
         op->domain = op->u.getdomaininfo.domain;
         copyback = 1;
-
-    getdomaininfo_out:
-        /* When d was non-NULL upon entry, no cleanup is needed. */
-        if ( dom == DOMID_INVALID )
-            break;
-
-        rcu_read_unlock(&domlist_read_lock);
-        d = NULL;
         break;
     }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 11:13:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:13:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528511.821774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnx8-0003bN-1c; Tue, 02 May 2023 11:13:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528511.821774; Tue, 02 May 2023 11:13:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnx7-0003bG-VD; Tue, 02 May 2023 11:13:49 +0000
Received: by outflank-mailman (input) for mailman id 528511;
 Tue, 02 May 2023 11:13:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=87GZ=AX=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1ptnx6-0003LR-EJ
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:13:48 +0000
Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com
 [2a00:1450:4864:20::42a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 681b89b1-e8da-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 13:13:46 +0200 (CEST)
Received: by mail-wr1-x42a.google.com with SMTP id
 ffacd0b85a97d-306342d7668so790669f8f.1
 for <xen-devel@lists.xenproject.org>; Tue, 02 May 2023 04:13:46 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 e5-20020a5d5005000000b0030469635629sm25685459wrt.62.2023.05.02.04.13.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 02 May 2023 04:13:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 681b89b1-e8da-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683026025; x=1685618025;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=UJB458iJtLEmTYey1jxDUVCpJXePcaStPPsd47NEdBU=;
        b=Q0aQ2HbKVXh5yJ7uUaV235VFVGz5+dWaAOea//8omPCY0h+znmuO0LgbauEITZX4H8
         J2gTW9FAy9o6wI2bHlCz5Mav1cC/GcLcF9vDuGqCfvh+HwP2vMGUIfMYD7Nb+pc33TNa
         OeIRTGNh38yLJADH0wb8vvWi6+eePVWkByR9Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683026025; x=1685618025;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=UJB458iJtLEmTYey1jxDUVCpJXePcaStPPsd47NEdBU=;
        b=ClD6hS61XeuzP1Hg+GBzEhpj3YKUlTLLQfIvUmYFV7oBHb7uRib6Yd+9lmXs3+VZvs
         DFWzUS/n9ZFoWQI1adIRCQKqt2XAix3xexgFnTik9FV8QrCzmAGhW5ynnWydY6YXEzRj
         ceNCTJFTrs59RvsRK2SKroTA+ZDZCU4vkfKp8LYiIpN1vQuLgJCQPlQ9war49+g5gJ6U
         iNm1OPpgxGeZl0ZGV8pSeNXCBxTaJepNUQj0LWYCmghsUtrRRfSGa7byvIEDFOTzZUo3
         N9dUrZZIpWUF4LR9tsHuTBaL+hyg4HLVkIxJLvlZGLYZHrL+rloAdjt5fa4B3rlQAhRQ
         O19Q==
X-Gm-Message-State: AC+VfDw8hQ9T06xb8KkW2rRp3fTXiCySUtPwXU6uBXWiJdb+xYXxzhEx
	ggqoS8v0kplF1R9A719wd+39RMjc9/j7s8JyCy4=
X-Google-Smtp-Source: ACHHUZ60duR1pheaga52MURiXLFcnhkK4Bv1d6HQdAP8fJOZPhDkYfo+dpoMpdaXlaBpau0mcGksNw==
X-Received: by 2002:adf:e944:0:b0:306:284c:8f4c with SMTP id m4-20020adfe944000000b00306284c8f4cmr6164633wrn.64.1683026025509;
        Tue, 02 May 2023 04:13:45 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>
Subject: [PATCH v3 1/3] tools: Modify single-domid callers of xc_domain_getinfolist()
Date: Tue,  2 May 2023 12:13:36 +0100
Message-Id: <20230502111338.16757-2-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xc_domain_getinfolist() internally relies on a sysctl that performs
a linear search for the domids. Callers of xc_domain_getinfolist()
that require information about a precise domid are much better off
calling xc_domain_getinfo_single() instead, which uses the getdomaininfo
domctl and ensures the returned domid matches the requested one. The
domctl also finds the domid faster, because it uses hashed lists.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Christian Lindig <christian.lindig@citrix.com>

v3:
 * Replaced the single-domid xc_domain_getinfolist() call in the OCaml
   stub with xc_domain_getinfo_single()
---
 tools/libs/light/libxl_dom.c         | 17 ++++++-----------
 tools/libs/light/libxl_dom_suspend.c |  7 +------
 tools/libs/light/libxl_domain.c      | 13 +++++--------
 tools/libs/light/libxl_mem.c         |  4 ++--
 tools/libs/light/libxl_sched.c       | 10 +++-------
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  6 ++----
 tools/xenpaging/xenpaging.c          | 10 +++++-----
 7 files changed, 24 insertions(+), 43 deletions(-)

diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index 25fb716084..94fef37401 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -32,9 +32,9 @@ libxl_domain_type libxl__domain_type(libxl__gc *gc, uint32_t domid)
     xc_domaininfo_t info;
     int ret;
 
-    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
-    if (ret != 1 || info.domain != domid) {
-        LOG(ERROR, "unable to get domain type for domid=%"PRIu32, domid);
+    ret = xc_domain_getinfo_single(ctx->xch, domid, &info);
+    if (ret < 0) {
+        LOGED(ERROR, domid, "unable to get dominfo");
         return LIBXL_DOMAIN_TYPE_INVALID;
     }
     if (info.flags & XEN_DOMINF_hvm_guest) {
@@ -70,15 +70,10 @@ int libxl__domain_cpupool(libxl__gc *gc, uint32_t domid)
     xc_domaininfo_t info;
     int ret;
 
-    ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
-    if (ret != 1)
+    ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
+    if (ret < 0)
     {
-        LOGE(ERROR, "getinfolist failed %d", ret);
-        return ERROR_FAIL;
-    }
-    if (info.domain != domid)
-    {
-        LOGE(ERROR, "got info for dom%d, wanted dom%d\n", info.domain, domid);
+        LOGED(ERROR, domid, "get domaininfo failed");
         return ERROR_FAIL;
     }
     return info.cpupool;
diff --git a/tools/libs/light/libxl_dom_suspend.c b/tools/libs/light/libxl_dom_suspend.c
index 4fa22bb739..6091a5f3f6 100644
--- a/tools/libs/light/libxl_dom_suspend.c
+++ b/tools/libs/light/libxl_dom_suspend.c
@@ -332,13 +332,8 @@ static void suspend_common_wait_guest_check(libxl__egc *egc,
     /* Convenience aliases */
     const uint32_t domid = dsps->domid;
 
-    ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
+    ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (ret < 0) {
-        LOGED(ERROR, domid, "unable to check for status of guest");
-        goto err;
-    }
-
-    if (!(ret == 1 && info.domain == domid)) {
         LOGED(ERROR, domid, "guest we were suspending has been destroyed");
         goto err;
     }
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 7f0986c185..5709b3e62f 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -349,16 +349,12 @@ int libxl_domain_info(libxl_ctx *ctx, libxl_dominfo *info_r,
     int ret;
     GC_INIT(ctx);
 
-    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &xcinfo);
+    ret = xc_domain_getinfo_single(ctx->xch, domid, &xcinfo);
     if (ret<0) {
-        LOGED(ERROR, domid, "Getting domain info list");
+        LOGED(ERROR, domid, "Getting domain info");
         GC_FREE;
         return ERROR_FAIL;
     }
-    if (ret==0 || xcinfo.domain != domid) {
-        GC_FREE;
-        return ERROR_DOMAIN_NOTFOUND;
-    }
 
     if (info_r)
         libxl__xcinfo2xlinfo(ctx, &xcinfo, info_r);
@@ -1657,14 +1653,15 @@ int libxl__resolve_domid(libxl__gc *gc, const char *name, uint32_t *domid)
 libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
                                        int *nr_vcpus_out, int *nr_cpus_out)
 {
+    int rc;
     GC_INIT(ctx);
     libxl_vcpuinfo *ptr, *ret;
     xc_domaininfo_t domaininfo;
     xc_vcpuinfo_t vcpuinfo;
     unsigned int nr_vcpus;
 
-    if (xc_domain_getinfolist(ctx->xch, domid, 1, &domaininfo) != 1) {
-        LOGED(ERROR, domid, "Getting infolist");
+    if ((rc = xc_domain_getinfo_single(ctx->xch, domid, &domaininfo)) < 0) {
+        LOGED(ERROR, domid, "Getting dominfo");
         GC_FREE;
         return NULL;
     }
diff --git a/tools/libs/light/libxl_mem.c b/tools/libs/light/libxl_mem.c
index 92ec09f4cf..44e554adba 100644
--- a/tools/libs/light/libxl_mem.c
+++ b/tools/libs/light/libxl_mem.c
@@ -323,8 +323,8 @@ retry_transaction:
     libxl__xs_printf(gc, t, GCSPRINTF("%s/memory/target", dompath),
                      "%"PRIu64, new_target_memkb);
 
-    r = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
-    if (r != 1 || info.domain != domid) {
+    r = xc_domain_getinfo_single(ctx->xch, domid, &info);
+    if (r < 0) {
         abort_transaction = 1;
         rc = ERROR_FAIL;
         goto out;
diff --git a/tools/libs/light/libxl_sched.c b/tools/libs/light/libxl_sched.c
index 7c53dc60e6..21a65442c0 100644
--- a/tools/libs/light/libxl_sched.c
+++ b/tools/libs/light/libxl_sched.c
@@ -219,13 +219,11 @@ static int sched_credit_domain_set(libxl__gc *gc, uint32_t domid,
     xc_domaininfo_t domaininfo;
     int rc;
 
-    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &domaininfo);
+    rc = xc_domain_getinfo_single(CTX->xch, domid, &domaininfo);
     if (rc < 0) {
-        LOGED(ERROR, domid, "Getting domain info list");
+        LOGED(ERROR, domid, "Getting domain info");
         return ERROR_FAIL;
     }
-    if (rc != 1 || domaininfo.domain != domid)
-        return ERROR_INVAL;
 
     rc = xc_sched_credit_domain_get(CTX->xch, domid, &sdom);
     if (rc != 0) {
@@ -426,13 +424,11 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
     xc_domaininfo_t info;
     int rc;
 
-    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
+    rc = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (rc < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         return ERROR_FAIL;
     }
-    if (rc != 1 || info.domain != domid)
-        return ERROR_INVAL;
 
     rc = xc_sched_credit2_domain_get(CTX->xch, domid, &sdom);
     if (rc != 0) {
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 6ec9ed6d1e..f686db3124 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -497,10 +497,8 @@ CAMLprim value stub_xc_domain_getinfo(value xch_val, value domid)
 	xc_domaininfo_t info;
 	int ret;
 
-	ret = xc_domain_getinfolist(xch, Int_val(domid), 1, &info);
-	if (ret != 1)
-		failwith_xc(xch);
-	if (info.domain != Int_val(domid))
+	ret = xc_domain_getinfo_single(xch, Int_val(domid), &info);
+	if (ret < 0)
 		failwith_xc(xch);
 
 	result = alloc_domaininfo(&info);
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 6e5490315d..c7a9a82477 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -169,8 +169,8 @@ static int xenpaging_get_tot_pages(struct xenpaging *paging)
     xc_domaininfo_t domain_info;
     int rc;
 
-    rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1, &domain_info);
-    if ( rc != 1 )
+    rc = xc_domain_getinfo_single(xch, paging->vm_event.domain_id, &domain_info);
+    if ( rc < 0 )
     {
         PERROR("Error getting domain info");
         return -1;
@@ -424,9 +424,9 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
     {
-        rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1,
-                                   &domain_info);
-        if ( rc != 1 )
+        rc = xc_domain_getinfo_single(xch, paging->vm_event.domain_id,
+                                      &domain_info);
+        if ( rc < 0 )
         {
             PERROR("Error getting domain info");
             goto err;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 11:13:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:13:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528513.821795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnxC-0004A8-Ki; Tue, 02 May 2023 11:13:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528513.821795; Tue, 02 May 2023 11:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptnxC-000494-HK; Tue, 02 May 2023 11:13:54 +0000
Received: by outflank-mailman (input) for mailman id 528513;
 Tue, 02 May 2023 11:13:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=87GZ=AX=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1ptnxA-0003LR-Rg
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:13:53 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6930062b-e8da-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 13:13:48 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-2f58125b957so3699105f8f.3
 for <xen-devel@lists.xenproject.org>; Tue, 02 May 2023 04:13:48 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 e5-20020a5d5005000000b0030469635629sm25685459wrt.62.2023.05.02.04.13.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 02 May 2023 04:13:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6930062b-e8da-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683026027; x=1685618027;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=OvwXlyXdL/UMKYGghla11RapD1E2ZkL5ikYlGdKfGoY=;
        b=M4KZeD654X56iUYNnthflmlSakyP88bM68BJVMBaLC+pr/VYeK7oV3ggJK4zjOx0En
         MIeYadbhYp44N5S2w1t7AA1/dwjjiSORkSK8tQGID12MDAeuvclHsoGybfLHQrQZ5oZf
         6jbtsU5vR0mfiPEngVyryFhWdCLX1ltfs1Z6M=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683026027; x=1685618027;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=OvwXlyXdL/UMKYGghla11RapD1E2ZkL5ikYlGdKfGoY=;
        b=JeWNoRNvYyoEDkAoS0cjCr2el5KytqRudXgGjjo9XEId/fG3bbR6J9UlK3SIn4Jk6d
         wLk1kL7EufHqpeOe+97zaJpun+gkXDfiPU05KlS1Od3PsQ569qgIP+Jv/n6Ha71lTusa
         09UHBE9roFst/C9W6ON0sAFMux4A8w4T+AiVnsISODa6ovOR+69osBz1GTOImDdM3zKG
         2vkz4wGh+Qt0XxoCAqp9IYSvh4A8/+QDESSEdM+u/xLFD0qlVXPd8dQjrCnEw1LcMXtd
         pYNwgydrmvZPBzW+W2FpK4xTlCq8j07R9+QBnU+jQizmj6pm+HG1r4A8wGVdz/jNtFQZ
         NxsA==
X-Gm-Message-State: AC+VfDxB8qb/v3aW5oRhVzIBKNDVFX3QuHUXL4LmnkyC8sHaqscfgs5v
	TF4LYR60PeH3qf8kjb3mpwel1cP+9xPD8LPlL5Y=
X-Google-Smtp-Source: ACHHUZ7Ux40yrsaBsU6lLwL5MdUTafOXKH9BTW/XAP17RY+0liZxNg8GViVrO/lKp1wJMHqQo4pjtg==
X-Received: by 2002:a5d:618a:0:b0:2f8:3225:2bc1 with SMTP id j10-20020a5d618a000000b002f832252bc1mr13903769wru.41.1683026027164;
        Tue, 02 May 2023 04:13:47 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v3 2/3] tools: Use new xc function for some xc_domain_getinfo() calls
Date: Tue,  2 May 2023 12:13:37 +0100
Message-Id: <20230502111338.16757-3-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move calls that require information about a single, precisely identified
domain to the new xc_domain_getinfo_single().

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Juergen Gross <jgross@suse.com>

v3:
 * Stylistic changes that fell through the cracks in v2
 * Reinstated the -errno convention from v1 that had been
   removed in v2

---
 tools/console/client/main.c             |  7 ++---
 tools/debugger/kdd/kdd-xen.c            |  5 ++--
 tools/libs/ctrl/xc_domain.c             |  9 +++---
 tools/libs/ctrl/xc_pagetab.c            |  7 ++---
 tools/libs/ctrl/xc_private.c            |  9 +++---
 tools/libs/ctrl/xc_private.h            |  7 +++--
 tools/libs/guest/xg_core.c              | 23 ++++++--------
 tools/libs/guest/xg_core.h              |  6 ++--
 tools/libs/guest/xg_core_arm.c          | 10 +++----
 tools/libs/guest/xg_core_x86.c          | 18 +++++------
 tools/libs/guest/xg_cpuid_x86.c         | 40 +++++++++++++------------
 tools/libs/guest/xg_dom_boot.c          | 16 +++-------
 tools/libs/guest/xg_domain.c            |  8 ++---
 tools/libs/guest/xg_offline_page.c      | 12 ++++----
 tools/libs/guest/xg_private.h           |  1 +
 tools/libs/guest/xg_resume.c            | 20 ++++++-------
 tools/libs/guest/xg_sr_common.h         |  2 +-
 tools/libs/guest/xg_sr_restore.c        | 17 ++++-------
 tools/libs/guest/xg_sr_restore_x86_pv.c |  2 +-
 tools/libs/guest/xg_sr_save.c           | 27 +++++++----------
 tools/libs/guest/xg_sr_save_x86_pv.c    |  6 ++--
 tools/libs/light/libxl_sched.c          | 16 +++++-----
 tools/libs/light/libxl_x86_acpi.c       |  4 +--
 tools/misc/xen-hvmcrash.c               |  6 ++--
 tools/misc/xen-lowmemd.c                |  6 ++--
 tools/misc/xen-mfndump.c                | 22 ++++++--------
 tools/misc/xen-vmtrace.c                |  6 ++--
 tools/vchan/vchan-socket-proxy.c        |  6 ++--
 tools/xenstore/xenstored_domain.c       | 15 +++++-----
 tools/xentrace/xenctx.c                 |  8 ++---
 30 files changed, 158 insertions(+), 183 deletions(-)

diff --git a/tools/console/client/main.c b/tools/console/client/main.c
index 1a6fa162f7..6775006488 100644
--- a/tools/console/client/main.c
+++ b/tools/console/client/main.c
@@ -408,17 +408,16 @@ int main(int argc, char **argv)
 	if (dom_path == NULL)
 		err(errno, "xs_get_domain_path()");
 	if (type == CONSOLE_INVAL) {
-		xc_dominfo_t xcinfo;
+		xc_domaininfo_t xcinfo;
 		xc_interface *xc_handle = xc_interface_open(0,0,0);
 		if (xc_handle == NULL)
 			err(errno, "Could not open xc interface");
-		if ( (xc_domain_getinfo(xc_handle, domid, 1, &xcinfo) != 1) ||
-		     (xcinfo.domid != domid) ) {
+		if (xc_domain_getinfo_single(xc_handle, domid, &xcinfo) < 0) {
 			xc_interface_close(xc_handle);
 			err(errno, "Failed to get domain information");
 		}
 		/* default to pv console for pv guests and serial for hvm guests */
-		if (xcinfo.hvm)
+		if (xcinfo.flags & XEN_DOMINF_hvm_guest)
 			type = CONSOLE_SERIAL;
 		else
 			type = CONSOLE_PV;
diff --git a/tools/debugger/kdd/kdd-xen.c b/tools/debugger/kdd/kdd-xen.c
index e78c9311c4..e63e267023 100644
--- a/tools/debugger/kdd/kdd-xen.c
+++ b/tools/debugger/kdd/kdd-xen.c
@@ -570,7 +570,7 @@ kdd_guest *kdd_guest_init(char *arg, FILE *log, int verbosity)
     kdd_guest *g = NULL;
     xc_interface *xch = NULL;
     uint32_t domid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
 
     g = calloc(1, sizeof (kdd_guest));
     if (!g) 
@@ -590,7 +590,8 @@ kdd_guest *kdd_guest_init(char *arg, FILE *log, int verbosity)
     g->domid = domid;
 
     /* Check that the domain exists and is HVM */
-    if (xc_domain_getinfo(xch, domid, 1, &info) != 1 || !info.hvm)
+    if (xc_domain_getinfo_single(xch, domid, &info) < 0 ||
+        !(info.flags & XEN_DOMINF_hvm_guest))
         goto err;
 
     snprintf(g->id, (sizeof g->id) - 1, 
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index d5f0923088..66179e6f12 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -1960,15 +1960,14 @@ int xc_domain_memory_mapping(
     uint32_t add_mapping)
 {
     DECLARE_DOMCTL;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int ret = 0, rc;
     unsigned long done = 0, nr, max_batch_sz;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get info for domain");
-        return -EINVAL;
+        PERROR("Could not get info for dom%u", domid);
+        return -1;
     }
     if ( !xc_core_arch_auto_translated_physmap(&info) )
         return 0;
diff --git a/tools/libs/ctrl/xc_pagetab.c b/tools/libs/ctrl/xc_pagetab.c
index db25c20247..d9f886633a 100644
--- a/tools/libs/ctrl/xc_pagetab.c
+++ b/tools/libs/ctrl/xc_pagetab.c
@@ -29,17 +29,16 @@
 unsigned long xc_translate_foreign_address(xc_interface *xch, uint32_t dom,
                                            int vcpu, unsigned long long virt)
 {
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     uint64_t paddr, mask, pte = 0;
     int size, level, pt_levels = 2;
     void *map;
 
-    if (xc_domain_getinfo(xch, dom, 1, &dominfo) != 1 
-        || dominfo.domid != dom)
+    if (xc_domain_getinfo_single(xch, dom, &dominfo) < 0)
         return 0;
 
     /* What kind of paging are we dealing with? */
-    if (dominfo.hvm) {
+    if (dominfo.flags & XEN_DOMINF_hvm_guest) {
         struct hvm_hw_cpu ctx;
         if (xc_domain_hvm_getcontext_partial(xch, dom,
                                              HVM_SAVE_CODE(CPU), vcpu,
diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
index 2f99a7d2cf..6293a45531 100644
--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -441,11 +441,12 @@ int xc_machphys_mfn_list(xc_interface *xch,
 
 long xc_get_tot_pages(xc_interface *xch, uint32_t domid)
 {
-    xc_dominfo_t info;
-    if ( (xc_domain_getinfo(xch, domid, 1, &info) != 1) ||
-         (info.domid != domid) )
+    xc_domaininfo_t info;
+
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
         return -1;
-    return info.nr_pages;
+
+    return info.tot_pages;
 }
 
 int xc_copy_to_domain_page(xc_interface *xch,
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index 80dc464c93..8faabaea67 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -16,6 +16,7 @@
 #ifndef XC_PRIVATE_H
 #define XC_PRIVATE_H
 
+#include <inttypes.h>
 #include <unistd.h>
 #include <stdarg.h>
 #include <stdio.h>
@@ -420,12 +421,12 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
 int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);
 
 #if defined (__i386__) || defined (__x86_64__)
-static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+static inline int xc_core_arch_auto_translated_physmap(const xc_domaininfo_t *info)
 {
-    return info->hvm;
+    return info->flags & XEN_DOMINF_hvm_guest;
 }
 #elif defined (__arm__) || defined(__aarch64__)
-static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+static inline int xc_core_arch_auto_translated_physmap(const xc_domaininfo_t *info)
 {
     return 1;
 }
diff --git a/tools/libs/guest/xg_core.c b/tools/libs/guest/xg_core.c
index c52f1161c1..f83436d6cb 100644
--- a/tools/libs/guest/xg_core.c
+++ b/tools/libs/guest/xg_core.c
@@ -349,7 +349,7 @@ elfnote_dump_none(xc_interface *xch, void *args, dumpcore_rtn_t dump_rtn)
 static int
 elfnote_dump_core_header(
     xc_interface *xch,
-    void *args, dumpcore_rtn_t dump_rtn, const xc_dominfo_t *info,
+    void *args, dumpcore_rtn_t dump_rtn, const xc_domaininfo_t *info,
     int nr_vcpus, unsigned long nr_pages)
 {
     int sts;
@@ -361,7 +361,8 @@ elfnote_dump_core_header(
     
     elfnote.descsz = sizeof(header);
     elfnote.type = XEN_ELFNOTE_DUMPCORE_HEADER;
-    header.xch_magic = info->hvm ? XC_CORE_MAGIC_HVM : XC_CORE_MAGIC;
+    header.xch_magic = (info->flags & XEN_DOMINF_hvm_guest) ? XC_CORE_MAGIC_HVM
+                                                            : XC_CORE_MAGIC;
     header.xch_nr_vcpus = nr_vcpus;
     header.xch_nr_pages = nr_pages;
     header.xch_page_size = PAGE_SIZE;
@@ -423,7 +424,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
                                 void *args,
                                 dumpcore_rtn_t dump_rtn)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     shared_info_any_t *live_shinfo = NULL;
     struct domain_info_context _dinfo = {};
     struct domain_info_context *dinfo = &_dinfo;
@@ -468,15 +469,15 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         goto out;
     }
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get info for domain");
+        PERROR("Could not get info for dom%u", domid);
         goto out;
     }
     /* Map the shared info frame */
     live_shinfo = xc_map_foreign_range(xch, domid, PAGE_SIZE,
                                        PROT_READ, info.shared_info_frame);
-    if ( !live_shinfo && !info.hvm )
+    if ( !live_shinfo && !(info.flags & XEN_DOMINF_hvm_guest) )
     {
         PERROR("Couldn't map live_shinfo");
         goto out;
@@ -517,12 +518,6 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         dinfo->guest_width = sizeof(unsigned long);
     }
 
-    if ( domid != info.domid )
-    {
-        PERROR("Domain %d does not exist", domid);
-        goto out;
-    }
-
     ctxt = calloc(sizeof(*ctxt), info.max_vcpu_id + 1);
     if ( !ctxt )
     {
@@ -560,9 +555,9 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
      * all the array...
      *
      * We don't want to use the total potential size of the memory map
-     * since that is usually much higher than info.nr_pages.
+     * since that is usually much higher than info.tot_pages.
      */
-    nr_pages = info.nr_pages;
+    nr_pages = info.tot_pages;
 
     if ( !auto_translated_physmap )
     {
diff --git a/tools/libs/guest/xg_core.h b/tools/libs/guest/xg_core.h
index aaca9e0a8b..ff577dad31 100644
--- a/tools/libs/guest/xg_core.h
+++ b/tools/libs/guest/xg_core.h
@@ -134,15 +134,15 @@ typedef struct xc_core_memory_map xc_core_memory_map_t;
 struct xc_core_arch_context;
 int xc_core_arch_memory_map_get(xc_interface *xch,
                                 struct xc_core_arch_context *arch_ctxt,
-                                xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                                xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                                 xc_core_memory_map_t **mapp,
                                 unsigned int *nr_entries);
 int xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo,
-                         xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                         xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                          xen_pfn_t **live_p2m);
 
 int xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo,
-                                  xc_dominfo_t *info,
+                                  xc_domaininfo_t *info,
                                   shared_info_any_t *live_shinfo,
                                   xen_pfn_t **live_p2m);
 
diff --git a/tools/libs/guest/xg_core_arm.c b/tools/libs/guest/xg_core_arm.c
index de30cf0c31..34276152da 100644
--- a/tools/libs/guest/xg_core_arm.c
+++ b/tools/libs/guest/xg_core_arm.c
@@ -33,14 +33,14 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
 
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
-                            xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                            xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                             xc_core_memory_map_t **mapp,
                             unsigned int *nr_entries)
 {
     xen_pfn_t p2m_size = 0;
     xc_core_memory_map_t *map;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &p2m_size) < 0 )
         return -1;
 
     map = malloc(sizeof(*map));
@@ -59,7 +59,7 @@ xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unus
 }
 
 static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     errno = ENOSYS;
@@ -67,14 +67,14 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                               shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
diff --git a/tools/libs/guest/xg_core_x86.c b/tools/libs/guest/xg_core_x86.c
index c5e4542ccc..dbd3a440f7 100644
--- a/tools/libs/guest/xg_core_x86.c
+++ b/tools/libs/guest/xg_core_x86.c
@@ -49,14 +49,14 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
 
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
-                            xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                            xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                             xc_core_memory_map_t **mapp,
                             unsigned int *nr_entries)
 {
     xen_pfn_t p2m_size = 0;
     xc_core_memory_map_t *map;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &p2m_size) < 0 )
         return -1;
 
     map = malloc(sizeof(*map));
@@ -314,24 +314,24 @@ xc_core_arch_map_p2m_tree_rw(xc_interface *xch, struct domain_info_context *dinf
 }
 
 static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     xen_pfn_t *p2m_frame_list = NULL;
     uint64_t p2m_cr3;
-    uint32_t dom = info->domid;
+    uint32_t dom = info->domain;
     int ret = -1;
     int err;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &dinfo->p2m_size) < 0 )
     {
         ERROR("Could not get maximum GPFN!");
         goto out;
     }
 
-    if ( dinfo->p2m_size < info->nr_pages  )
+    if ( dinfo->p2m_size < info->tot_pages  )
     {
-        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
+        ERROR("p2m_size < nr_pages -1 (%lx < %"PRIx64, dinfo->p2m_size, info->tot_pages - 1);
         goto out;
     }
 
@@ -366,14 +366,14 @@ out:
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                               shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index bd16a87e48..57221ffea8 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -281,7 +281,8 @@ static int xc_cpuid_xend_policy(
     xc_interface *xch, uint32_t domid, const struct xc_xend_cpuid *xend)
 {
     int rc;
-    xc_dominfo_t di;
+    bool hvm;
+    xc_domaininfo_t di;
     unsigned int nr_leaves, nr_msrs;
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     /*
@@ -291,13 +292,13 @@ static int xc_cpuid_xend_policy(
     xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
     unsigned int nr_host, nr_def, nr_cur;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
+    if ( (rc = xc_domain_getinfo_single(xch, domid, &di)) < 0 )
     {
-        ERROR("Failed to obtain d%d info", domid);
-        rc = -ESRCH;
+        PERROR("Failed to obtain d%d info", domid);
+        rc = -errno;
         goto fail;
     }
+    hvm = di.flags & XEN_DOMINF_hvm_guest;
 
     rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
     if ( rc )
@@ -330,12 +331,12 @@ static int xc_cpuid_xend_policy(
     /* Get the domain type's default policy. */
     nr_msrs = 0;
     nr_def = nr_leaves;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
-                                           : XEN_SYSCTL_cpu_policy_pv_default,
+    rc = get_system_cpu_policy(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+                                        : XEN_SYSCTL_cpu_policy_pv_default,
                                &nr_def, def, &nr_msrs, NULL);
     if ( rc )
     {
-        PERROR("Failed to obtain %s def policy", di.hvm ? "hvm" : "pv");
+        PERROR("Failed to obtain %s def policy", hvm ? "hvm" : "pv");
         rc = -errno;
         goto fail;
     }
@@ -428,7 +429,8 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
                           const struct xc_xend_cpuid *xend)
 {
     int rc;
-    xc_dominfo_t di;
+    bool hvm;
+    xc_domaininfo_t di;
     unsigned int i, nr_leaves, nr_msrs;
     xen_cpuid_leaf_t *leaves = NULL;
     struct cpu_policy *p = NULL;
@@ -436,13 +438,13 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
     uint32_t len = ARRAY_SIZE(host_featureset);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
+    if ( (rc = xc_domain_getinfo_single(xch, domid, &di)) < 0 )
     {
-        ERROR("Failed to obtain d%d info", domid);
-        rc = -ESRCH;
+        PERROR("Failed to obtain d%d info", domid);
+        rc = -errno;
         goto out;
     }
+    hvm = di.flags & XEN_DOMINF_hvm_guest;
 
     rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
     if ( rc )
@@ -475,12 +477,12 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
 
     /* Get the domain's default policy. */
     nr_msrs = 0;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
-                                           : XEN_SYSCTL_cpu_policy_pv_default,
+    rc = get_system_cpu_policy(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+                                        : XEN_SYSCTL_cpu_policy_pv_default,
                                &nr_leaves, leaves, &nr_msrs, NULL);
     if ( rc )
     {
-        PERROR("Failed to obtain %s default policy", di.hvm ? "hvm" : "pv");
+        PERROR("Failed to obtain %s default policy", hvm ? "hvm" : "pv");
         rc = -errno;
         goto out;
     }
@@ -514,7 +516,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         p->feat.hle = test_bit(X86_FEATURE_HLE, host_featureset);
         p->feat.rtm = test_bit(X86_FEATURE_RTM, host_featureset);
 
-        if ( di.hvm )
+        if ( hvm )
         {
             p->feat.mpx = test_bit(X86_FEATURE_MPX, host_featureset);
         }
@@ -571,7 +573,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     {
         p->extd.itsc = itsc;
 
-        if ( di.hvm )
+        if ( hvm )
         {
             p->basic.pae = pae;
             p->basic.vmx = nested_virt;
@@ -579,7 +581,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         }
     }
 
-    if ( !di.hvm )
+    if ( !hvm )
     {
         /*
          * On hardware without CPUID Faulting, PV guests see real topology.
diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
index 263a3f4c85..6e0847e718 100644
--- a/tools/libs/guest/xg_dom_boot.c
+++ b/tools/libs/guest/xg_dom_boot.c
@@ -164,7 +164,7 @@ void *xc_dom_boot_domU_map(struct xc_dom_image *dom, xen_pfn_t pfn,
 
 int xc_dom_boot_image(struct xc_dom_image *dom)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int rc;
 
     DOMPRINTF_CALLED(dom->xch);
@@ -174,19 +174,11 @@ int xc_dom_boot_image(struct xc_dom_image *dom)
         return rc;
 
     /* collect some info */
-    rc = xc_domain_getinfo(dom->xch, dom->guest_domid, 1, &info);
-    if ( rc < 0 )
-    {
-        xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
-                     "%s: getdomaininfo failed (rc=%d)", __FUNCTION__, rc);
-        return rc;
-    }
-    if ( rc == 0 || info.domid != dom->guest_domid )
+    if ( xc_domain_getinfo_single(dom->xch, dom->guest_domid, &info) < 0 )
     {
         xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
-                     "%s: Huh? No domains found (nr_domains=%d) "
-                     "or domid mismatch (%d != %d)", __FUNCTION__,
-                     rc, info.domid, dom->guest_domid);
+                     "%s: getdomaininfo failed (errno=%d)",
+                     __func__, errno);
         return -1;
     }
     dom->shared_info_mfn = info.shared_info_frame;
diff --git a/tools/libs/guest/xg_domain.c b/tools/libs/guest/xg_domain.c
index f0e7748449..198f6f904a 100644
--- a/tools/libs/guest/xg_domain.c
+++ b/tools/libs/guest/xg_domain.c
@@ -37,7 +37,7 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
 {
     struct domain_info_context _di;
 
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     shared_info_any_t *live_shinfo;
     xen_capabilities_info_t xen_caps = "";
     unsigned long i;
@@ -49,9 +49,9 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
         return -1;
     }
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get domain info");
+        PERROR("Could not get dominfo for dom%u", domid);
         return -1;
     }
 
@@ -86,7 +86,7 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
                                        info.shared_info_frame);
     if ( !live_shinfo )
     {
-        PERROR("Could not map the shared info frame (MFN 0x%lx)",
+        PERROR("Could not map the shared info frame (MFN 0x%"PRIx64")",
                info.shared_info_frame);
         return -1;
     }
diff --git a/tools/libs/guest/xg_offline_page.c b/tools/libs/guest/xg_offline_page.c
index ccd0299f0f..292f73a3e7 100644
--- a/tools/libs/guest/xg_offline_page.c
+++ b/tools/libs/guest/xg_offline_page.c
@@ -370,7 +370,7 @@ static int clear_pte(xc_interface *xch, uint32_t domid,
  */
 
 static int is_page_exchangable(xc_interface *xch, uint32_t domid, xen_pfn_t mfn,
-                               xc_dominfo_t *info)
+                               xc_domaininfo_t *info)
 {
     uint32_t status;
     int rc;
@@ -381,7 +381,7 @@ static int is_page_exchangable(xc_interface *xch, uint32_t domid, xen_pfn_t mfn,
         DPRINTF("Dom0's page can't be LM");
         return 0;
     }
-    if (info->hvm)
+    if (info->flags & XEN_DOMINF_hvm_guest)
     {
         DPRINTF("Currently we can only live change PV guest's page\n");
         return 0;
@@ -462,7 +462,7 @@ err0:
 /* The domain should be suspended when called here */
 int xc_exchange_page(xc_interface *xch, uint32_t domid, xen_pfn_t mfn)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xc_domain_meminfo minfo;
     struct xc_mmu *mmu = NULL;
     struct pte_backup old_ptes = {NULL, 0, 0};
@@ -477,13 +477,13 @@ int xc_exchange_page(xc_interface *xch, uint32_t domid, xen_pfn_t mfn)
     xen_pfn_t *m2p_table;
     unsigned long max_mfn;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        ERROR("Could not get domain info");
+        PERROR("Could not get domain info for dom%u", domid);
         return -1;
     }
 
-    if (!info.shutdown || info.shutdown_reason != SHUTDOWN_suspend)
+    if (!dominfo_shutdown_with(&info, SHUTDOWN_suspend))
     {
         errno = EINVAL;
         ERROR("Can't exchange page unless domain is suspended\n");
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index e729a8106c..d73947094f 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -16,6 +16,7 @@
 #ifndef XG_PRIVATE_H
 #define XG_PRIVATE_H
 
+#include <inttypes.h>
 #include <unistd.h>
 #include <errno.h>
 #include <fcntl.h>
diff --git a/tools/libs/guest/xg_resume.c b/tools/libs/guest/xg_resume.c
index 77e2451a3c..c85d09a7f5 100644
--- a/tools/libs/guest/xg_resume.c
+++ b/tools/libs/guest/xg_resume.c
@@ -26,28 +26,28 @@
 static int modify_returncode(xc_interface *xch, uint32_t domid)
 {
     vcpu_guest_context_any_t ctxt;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     xen_capabilities_info_t caps;
     struct domain_info_context _dinfo = {};
     struct domain_info_context *dinfo = &_dinfo;
     int rc;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get domain info");
+        PERROR("Could not get info for dom%u", domid);
         return -1;
     }
 
-    if ( !info.shutdown || (info.shutdown_reason != SHUTDOWN_suspend) )
+    if ( !dominfo_shutdown_with(&info, SHUTDOWN_suspend) )
     {
         ERROR("Dom %d not suspended: (shutdown %d, reason %d)", domid,
-              info.shutdown, info.shutdown_reason);
+              info.flags & XEN_DOMINF_shutdown,
+              dominfo_shutdown_reason(&info));
         errno = EINVAL;
         return -1;
     }
 
-    if ( info.hvm )
+    if ( info.flags & XEN_DOMINF_hvm_guest )
     {
         /* HVM guests without PV drivers have no return code to modify. */
         uint64_t irq = 0;
@@ -133,7 +133,7 @@ static int xc_domain_resume_hvm(xc_interface *xch, uint32_t domid)
 static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
 {
     DECLARE_DOMCTL;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int i, rc = -1;
 #if defined(__i386__) || defined(__x86_64__)
     struct domain_info_context _dinfo = { .guest_width = 0,
@@ -146,7 +146,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
     xen_pfn_t *p2m = NULL;
 #endif
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         PERROR("Could not get domain info");
         return rc;
@@ -156,7 +156,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
      * (x86 only) Rewrite store_mfn and console_mfn back to MFN (from PFN).
      */
 #if defined(__i386__) || defined(__x86_64__)
-    if ( info.hvm )
+    if ( info.flags & XEN_DOMINF_hvm_guest )
         return xc_domain_resume_hvm(xch, domid);
 
     if ( xc_domain_get_guest_width(xch, domid, &dinfo->guest_width) != 0 )
diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 36d45ef56f..2f058ee3a6 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -220,7 +220,7 @@ struct xc_sr_context
     /* Plain VM, or checkpoints over time. */
     xc_stream_type_t stream_type;
 
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
 
     union /* Common save or restore data. */
     {
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 7314a24cf9..06231ca826 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -852,6 +852,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
                       xc_stream_type_t stream_type,
                       struct restore_callbacks *callbacks, int send_back_fd)
 {
+    bool hvm;
     xen_pfn_t nr_pfns;
     struct xc_sr_context ctx = {
         .xch = xch,
@@ -887,20 +888,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         break;
     }
 
-    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
+    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
     {
-        PERROR("Failed to get domain info");
-        return -1;
-    }
-
-    if ( ctx.dominfo.domid != dom )
-    {
-        ERROR("Domain %u does not exist", dom);
+        PERROR("Failed to get dominfo for dom%u", dom);
         return -1;
     }
 
+    hvm = ctx.dominfo.flags & XEN_DOMINF_hvm_guest;
     DPRINTF("fd %d, dom %u, hvm %u, stream_type %d",
-            io_fd, dom, ctx.dominfo.hvm, stream_type);
+            io_fd, dom, hvm, stream_type);
 
     ctx.domid = dom;
 
@@ -914,8 +910,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     ctx.restore.p2m_size = nr_pfns;
-    ctx.restore.ops = ctx.dominfo.hvm
-        ? restore_ops_x86_hvm : restore_ops_x86_pv;
+    ctx.restore.ops = hvm ? restore_ops_x86_hvm : restore_ops_x86_pv;
 
     if ( restore(&ctx) )
         return -1;
diff --git a/tools/libs/guest/xg_sr_restore_x86_pv.c b/tools/libs/guest/xg_sr_restore_x86_pv.c
index dc50b0f5a8..eaeb97f4a0 100644
--- a/tools/libs/guest/xg_sr_restore_x86_pv.c
+++ b/tools/libs/guest/xg_sr_restore_x86_pv.c
@@ -903,7 +903,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
         ctx->dominfo.shared_info_frame);
     if ( !guest_shinfo )
     {
-        PERROR("Failed to map Shared Info at mfn %#lx",
+        PERROR("Failed to map Shared Info at mfn %#"PRIx64,
                ctx->dominfo.shared_info_frame);
         goto err;
     }
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 9853d8d846..3b2c5222e4 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -336,19 +336,18 @@ static int suspend_domain(struct xc_sr_context *ctx)
     }
 
     /* Refresh domain information. */
-    if ( (xc_domain_getinfo(xch, ctx->domid, 1, &ctx->dominfo) != 1) ||
-         (ctx->dominfo.domid != ctx->domid) )
+    if ( xc_domain_getinfo_single(xch, ctx->domid, &ctx->dominfo) < 0 )
     {
         PERROR("Unable to refresh domain information");
         return -1;
     }
 
     /* Confirm the domain has actually been paused. */
-    if ( !ctx->dominfo.shutdown ||
-         (ctx->dominfo.shutdown_reason != SHUTDOWN_suspend) )
+    if ( !dominfo_shutdown_with(&ctx->dominfo, SHUTDOWN_suspend) )
     {
         ERROR("Domain has not been suspended: shutdown %d, reason %d",
-              ctx->dominfo.shutdown, ctx->dominfo.shutdown_reason);
+              ctx->dominfo.flags & XEN_DOMINF_shutdown,
+              dominfo_shutdown_reason(&ctx->dominfo));
         return -1;
     }
 
@@ -893,8 +892,7 @@ static int save(struct xc_sr_context *ctx, uint16_t guest_type)
         if ( rc )
             goto err;
 
-        if ( !ctx->dominfo.shutdown ||
-             (ctx->dominfo.shutdown_reason != SHUTDOWN_suspend) )
+        if ( !dominfo_shutdown_with(&ctx->dominfo, SHUTDOWN_suspend) )
         {
             ERROR("Domain has not been suspended");
             rc = -1;
@@ -989,6 +987,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
         .fd = io_fd,
         .stream_type = stream_type,
     };
+    bool hvm;
 
     /* GCC 4.4 (of CentOS 6.x vintage) can' t initialise anonymous unions. */
     ctx.save.callbacks = callbacks;
@@ -996,17 +995,13 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
     ctx.save.debug = !!(flags & XCFLAGS_DEBUG);
     ctx.save.recv_fd = recv_fd;
 
-    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
+    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
     {
         PERROR("Failed to get domain info");
         return -1;
     }
 
-    if ( ctx.dominfo.domid != dom )
-    {
-        ERROR("Domain %u does not exist", dom);
-        return -1;
-    }
+    hvm = ctx.dominfo.flags & XEN_DOMINF_hvm_guest;
 
     /* Sanity check stream_type-related parameters */
     switch ( stream_type )
@@ -1018,7 +1013,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
         assert(callbacks->checkpoint && callbacks->postcopy);
         /* Fallthrough */
     case XC_STREAM_PLAIN:
-        if ( ctx.dominfo.hvm )
+        if ( hvm )
             assert(callbacks->switch_qemu_logdirty);
         break;
 
@@ -1028,11 +1023,11 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     DPRINTF("fd %d, dom %u, flags %u, hvm %d",
-            io_fd, dom, flags, ctx.dominfo.hvm);
+            io_fd, dom, flags, hvm);
 
     ctx.domid = dom;
 
-    if ( ctx.dominfo.hvm )
+    if ( hvm )
     {
         ctx.save.ops = save_ops_x86_hvm;
         return save(&ctx, DHDR_TYPE_X86_HVM);
diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/guest/xg_sr_save_x86_pv.c
index 4964f1f7b8..f3d7a7a71a 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -20,7 +20,7 @@ static int map_shinfo(struct xc_sr_context *ctx)
         xch, ctx->domid, PAGE_SIZE, PROT_READ, ctx->dominfo.shared_info_frame);
     if ( !ctx->x86.pv.shinfo )
     {
-        PERROR("Failed to map shared info frame at mfn %#lx",
+        PERROR("Failed to map shared info frame at mfn %#"PRIx64,
                ctx->dominfo.shared_info_frame);
         return -1;
     }
@@ -943,7 +943,7 @@ static int normalise_pagetable(struct xc_sr_context *ctx, const uint64_t *src,
 #ifdef __i386__
             if ( mfn == INVALID_MFN )
             {
-                if ( !ctx->dominfo.paused )
+                if ( !(ctx->dominfo.flags & XEN_DOMINF_paused) )
                     errno = EAGAIN;
                 else
                 {
@@ -965,7 +965,7 @@ static int normalise_pagetable(struct xc_sr_context *ctx, const uint64_t *src,
 
             if ( !mfn_in_pseudophysmap(ctx, mfn) )
             {
-                if ( !ctx->dominfo.paused )
+                if ( !(ctx->dominfo.flags & XEN_DOMINF_paused) )
                     errno = EAGAIN;
                 else
                 {
diff --git a/tools/libs/light/libxl_sched.c b/tools/libs/light/libxl_sched.c
index 21a65442c0..b87e490d12 100644
--- a/tools/libs/light/libxl_sched.c
+++ b/tools/libs/light/libxl_sched.c
@@ -498,10 +498,10 @@ static int sched_rtds_vcpu_get(libxl__gc *gc, uint32_t domid,
 {
     uint32_t num_vcpus;
     int i, r, rc;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -552,10 +552,10 @@ static int sched_rtds_vcpu_get_all(libxl__gc *gc, uint32_t domid,
 {
     uint32_t num_vcpus;
     int i, r, rc;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -602,10 +602,10 @@ static int sched_rtds_vcpu_set(libxl__gc *gc, uint32_t domid,
     int r, rc;
     int i;
     uint16_t max_vcpuid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -662,11 +662,11 @@ static int sched_rtds_vcpu_set_all(libxl__gc *gc, uint32_t domid,
     int r, rc;
     int i;
     uint16_t max_vcpuid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
     uint32_t num_vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
diff --git a/tools/libs/light/libxl_x86_acpi.c b/tools/libs/light/libxl_x86_acpi.c
index 22eb160659..796b009d0c 100644
--- a/tools/libs/light/libxl_x86_acpi.c
+++ b/tools/libs/light/libxl_x86_acpi.c
@@ -87,14 +87,14 @@ static int init_acpi_config(libxl__gc *gc,
 {
     xc_interface *xch = dom->xch;
     uint32_t domid = dom->guest_domid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct hvm_info_table *hvminfo;
     int i, r, rc;
 
     config->dsdt_anycpu = config->dsdt_15cpu = dsdt_pvh;
     config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_pvh_len;
 
-    r = xc_domain_getinfo(xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(xch, domid, &info);
     if (r < 0) {
         LOG(ERROR, "getdomaininfo failed (rc=%d)", r);
         rc = ERROR_FAIL;
diff --git a/tools/misc/xen-hvmcrash.c b/tools/misc/xen-hvmcrash.c
index 4f0dabcb18..1d058fa40a 100644
--- a/tools/misc/xen-hvmcrash.c
+++ b/tools/misc/xen-hvmcrash.c
@@ -48,7 +48,7 @@ main(int argc, char **argv)
 {
     int domid;
     xc_interface *xch;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     int ret;
     uint32_t len;
     uint8_t *buf;
@@ -66,13 +66,13 @@ main(int argc, char **argv)
         exit(1);
     }
 
-    ret = xc_domain_getinfo(xch, domid, 1, &dominfo);
+    ret = xc_domain_getinfo_single(xch, domid, &dominfo);
     if (ret < 0) {
         perror("xc_domain_getinfo");
         exit(1);
     }
 
-    if (!dominfo.hvm) {
+    if (!(dominfo.flags & XEN_DOMINF_hvm_guest)) {
         fprintf(stderr, "domain %d is not HVM\n", domid);
         exit(1);
     }
diff --git a/tools/misc/xen-lowmemd.c b/tools/misc/xen-lowmemd.c
index a3a2741242..9d5cb549a8 100644
--- a/tools/misc/xen-lowmemd.c
+++ b/tools/misc/xen-lowmemd.c
@@ -38,7 +38,7 @@ void cleanup(void)
 #define BUFSZ 512
 void handle_low_mem(void)
 {
-    xc_dominfo_t  dom0_info;
+    xc_domaininfo_t dom0_info;
     xc_physinfo_t info;
     unsigned long long free_pages, dom0_pages, diff, dom0_target;
     char data[BUFSZ], error[BUFSZ];
@@ -58,13 +58,13 @@ void handle_low_mem(void)
         return;
     diff = THRESHOLD_PG - free_pages; 
 
-    if (xc_domain_getinfo(xch, 0, 1, &dom0_info) < 1)
+    if (xc_domain_getinfo_single(xch, 0, &dom0_info) < 0)
     {
         perror("Failed to get dom0 info");
         return;
     }
 
-    dom0_pages = (unsigned long long) dom0_info.nr_pages;
+    dom0_pages = (unsigned long long) dom0_info.tot_pages;
     printf("Dom0 pages: 0x%llx:%llu\n", dom0_pages, dom0_pages);
     dom0_target = dom0_pages - diff;
     if (dom0_target <= DOM0_FLOOR_PG)
diff --git a/tools/misc/xen-mfndump.c b/tools/misc/xen-mfndump.c
index b32c95e262..8863ece3f5 100644
--- a/tools/misc/xen-mfndump.c
+++ b/tools/misc/xen-mfndump.c
@@ -74,7 +74,7 @@ int dump_m2p_func(int argc, char *argv[])
 int dump_p2m_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     unsigned long i;
     int domid;
 
@@ -85,8 +85,7 @@ int dump_p2m_func(int argc, char *argv[])
     }
     domid = atoi(argv[0]);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -158,7 +157,7 @@ int dump_p2m_func(int argc, char *argv[])
 int dump_ptes_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     void *page = NULL;
     unsigned long i, max_mfn;
     int domid, pte_num, rc = 0;
@@ -172,8 +171,7 @@ int dump_ptes_func(int argc, char *argv[])
     domid = atoi(argv[0]);
     mfn = strtoul(argv[1], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -266,7 +264,7 @@ int dump_ptes_func(int argc, char *argv[])
 int lookup_pte_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     void *page = NULL;
     unsigned long i, j;
     int domid, pte_num;
@@ -280,8 +278,7 @@ int lookup_pte_func(int argc, char *argv[])
     domid = atoi(argv[0]);
     mfn = strtoul(argv[1], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -336,7 +333,7 @@ int lookup_pte_func(int argc, char *argv[])
 
 int memcmp_mfns_func(int argc, char *argv[])
 {
-    xc_dominfo_t info1, info2;
+    xc_domaininfo_t info1, info2;
     void *page1 = NULL, *page2 = NULL;
     int domid1, domid2;
     xen_pfn_t mfn1, mfn2;
@@ -352,9 +349,8 @@ int memcmp_mfns_func(int argc, char *argv[])
     mfn1 = strtoul(argv[1], NULL, 16);
     mfn2 = strtoul(argv[3], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid1, 1, &info1) != 1 ||
-         xc_domain_getinfo(xch, domid2, 1, &info2) != 1 ||
-         info1.domid != domid1 || info2.domid != domid2)
+    if ( xc_domain_getinfo_single(xch, domid1, &info1) < 0 ||
+         xc_domain_getinfo_single(xch, domid2, &info2) < 0)
     {
         ERROR("Failed to obtain info for domains\n");
         return -1;
diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
index 5b688a54af..ba2ce17a17 100644
--- a/tools/misc/xen-vmtrace.c
+++ b/tools/misc/xen-vmtrace.c
@@ -133,15 +133,15 @@ int main(int argc, char **argv)
 
     while ( !interrupted )
     {
-        xc_dominfo_t dominfo;
+        xc_domaininfo_t dominfo;
 
         if ( get_more_data() )
             goto out;
 
         usleep(1000 * 100);
 
-        if ( xc_domain_getinfo(xch, domid, 1, &dominfo) != 1 ||
-             dominfo.domid != domid || dominfo.shutdown )
+        if ( xc_domain_getinfo_single(xch, domid, &dominfo) < 0 ||
+             (dominfo.flags & XEN_DOMINF_shutdown) )
         {
             if ( get_more_data() )
                 goto out;
diff --git a/tools/vchan/vchan-socket-proxy.c b/tools/vchan/vchan-socket-proxy.c
index e1d959c6d1..9c4c336b03 100644
--- a/tools/vchan/vchan-socket-proxy.c
+++ b/tools/vchan/vchan-socket-proxy.c
@@ -222,7 +222,7 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
     struct libxenvchan *ctrl = NULL;
     struct xs_handle *xs = NULL;
     xc_interface *xc = NULL;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     char **watch_ret;
     unsigned int watch_num;
     int ret;
@@ -254,12 +254,12 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
         if (ctrl)
             break;
 
-        ret = xc_domain_getinfo(xc, domid, 1, &dominfo);
+        ret = xc_domain_getinfo_single(xc, domid, &dominfo);
         /* break the loop if domain is definitely not there anymore, but
          * continue if it is or the call failed (like EPERM) */
         if (ret == -1 && errno == ESRCH)
             break;
-        if (ret == 1 && (dominfo.domid != (uint32_t)domid || dominfo.dying))
+        if (ret == 0 && (dominfo.flags & XEN_DOMINF_dying))
             break;
     }
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f62be2245c..aeb7595ae1 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -339,15 +339,14 @@ static int destroy_domain(void *_domain)
 	return 0;
 }
 
-static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
+static bool get_domain_info(unsigned int domid, xc_domaininfo_t *dominfo)
 {
-	return xc_domain_getinfo(*xc_handle, domid, 1, dominfo) == 1 &&
-	       dominfo->domid == domid;
+	return xc_domain_getinfo_single(*xc_handle, domid, dominfo) == 0;
 }
 
 static int check_domain(const void *k, void *v, void *arg)
 {
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 	struct connection *conn;
 	bool dom_valid;
 	struct domain *domain = v;
@@ -360,12 +359,12 @@ static int check_domain(const void *k, void *v, void *arg)
 		return 0;
 	}
 	if (dom_valid) {
-		if ((dominfo.crashed || dominfo.shutdown)
+		if ((dominfo.flags & XEN_DOMINF_shutdown)
 		    && !domain->shutdown) {
 			domain->shutdown = true;
 			*notify = true;
 		}
-		if (!dominfo.dying)
+		if (!(dominfo.flags & XEN_DOMINF_dying))
 			return 0;
 	}
 	if (domain->conn) {
@@ -486,7 +485,7 @@ static struct domain *find_or_alloc_domain(const void *ctx, unsigned int domid)
 static struct domain *find_or_alloc_existing_domain(unsigned int domid)
 {
 	struct domain *domain;
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 
 	domain = find_domain_struct(domid);
 	if (!domain && get_domain_info(domid, &dominfo))
@@ -1010,7 +1009,7 @@ int domain_alloc_permrefs(struct node_perms *perms)
 {
 	unsigned int i, domid;
 	struct domain *d;
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 
 	for (i = 0; i < perms->num; i++) {
 		domid = perms->p[i].id;
diff --git a/tools/xentrace/xenctx.c b/tools/xentrace/xenctx.c
index 85ba0c0fa6..9acb9db460 100644
--- a/tools/xentrace/xenctx.c
+++ b/tools/xentrace/xenctx.c
@@ -92,7 +92,7 @@ static struct xenctx {
     int do_stack;
 #endif
     int kernel_start_set;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
 } xenctx;
 
 struct symbol {
@@ -989,7 +989,7 @@ static void dump_ctx(int vcpu)
 
 #if defined(__i386__) || defined(__x86_64__)
     {
-        if (xenctx.dominfo.hvm) {
+        if (xenctx.dominfo.flags & XEN_DOMINF_hvm_guest) {
             struct hvm_hw_cpu cpuctx;
             xen_capabilities_info_t xen_caps = "";
             if (xc_domain_hvm_getcontext_partial(
@@ -1269,9 +1269,9 @@ int main(int argc, char **argv)
         exit(-1);
     }
 
-    ret = xc_domain_getinfo(xenctx.xc_handle, xenctx.domid, 1, &xenctx.dominfo);
+    ret = xc_domain_getinfo_single(xenctx.xc_handle, xenctx.domid, &xenctx.dominfo);
     if (ret < 0) {
-        perror("xc_domain_getinfo");
+        perror("xc_domain_getinfo_single");
         exit(-1);
     }
 
-- 
2.34.1



 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f635e8cc-e8da-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683026264; x=1685618264;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=VomCUlaJl3m0YlBSN4OcT3ZcH66VA7ob9EV/O6KH2dU=;
        b=LzNHvFazYiCArmtJom+ikLLfmpgZUamKXE8Mx3YK6bgWFZYwgkujK4qWP6VCaiA81/
         Y34bqw3bMIbxAgKEl6fiC9jtK5JurIeQiebK/e5MnX6LsFMR2zwJZXFnSfnmjo+NmpJE
         kCZHUfoiCc5baaKYibF+IiK0X6SCu1ovZ9H9c=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683026264; x=1685618264;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=VomCUlaJl3m0YlBSN4OcT3ZcH66VA7ob9EV/O6KH2dU=;
        b=lKvP6CLFXrjDhffoONiJCYpwkORLWiC5cGlec5zNjRrrWa3QjucNTZKnLaTwmNdSkV
         oD+UpD1mCGuM4ADstrYx5e0ipP9mtytPfip1YM6RWYaMjMuApsBjiiRSq4jy2FsNap7u
         OT4KUAU3rSKtTc3kEKBGwRCmv46g6S6aXqy2w6A+ur0XnSF0biIRqu6P3KYRKQ2obrAK
         x8istrXbX/bPblewME3p/BxXZlqfYZ+lRV2eu3R6YcuVLShY0ugGnFzYNAw6QHw5iZp5
         NTA43eVn1nTGzn4GbvUs8xADPmESrn46MhKmX/G/kaovnNWJQ7mIDqCD1X3YRWNHXj5w
         0HYQ==
X-Gm-Message-State: AC+VfDxNMhcE7sXbtJGWZQLwxsFt3xYx8yDVp/T9PbEHbrmWDW8AFsz1
	KlYTcFbahsztN4zgpaooKUeeA+SeUKD5KJHTG2o=
X-Google-Smtp-Source: ACHHUZ5vjG7WZuFwcmvJG5zNM7aqMQvYRrPW4JI92sg348uThIygvQVzwJ6k7BuBteoso9YXjaM3VQ==
X-Received: by 2002:a5d:67cf:0:b0:306:2bf4:4fe9 with SMTP id n15-20020a5d67cf000000b003062bf44fe9mr4679463wrw.55.1683026264035;
        Tue, 02 May 2023 04:17:44 -0700 (PDT)
Message-ID: <6450f156.df0a0220.8ce3e.2dc6@mx.google.com>
X-Google-Original-Message-ID: <ZFDxVGKFBJFmqezz@EMEAENGAAD19049.>
Date: Tue, 2 May 2023 12:17:40 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>
Subject: Re: [PATCH v3 1/3] tools: Modify single-domid callers of
 xc_domain_getinfolist()
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
 <20230502111338.16757-2-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230502111338.16757-2-alejandro.vallejo@cloud.com>

On Tue, May 02, 2023 at 12:13:36PM +0100, Alejandro Vallejo wrote:
> xc_domain_getinfolist() internally relies on a sysctl that performs
> a linear search for the domids. Many callers of xc_domain_getinfolist()
> that require information about a precise domid are much better off calling
> xc_domain_getinfo_single() instead, which uses the getdomaininfo domctl
> and ensures the returned domid matches the requested one. The domctl
> also finds the domid faster, because it uses hashed lists.
> 
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Christian Lindig <christian.lindig@citrix.com>
> 
> v3:
>  * Replaced single-domid xc_domain_getinfolist() call in ocaml stub with
>    xc_domain_getinfo_single()

My mistake here. It's supposed to have an "R-by: Andrew Cooper"


From xen-devel-bounces@lists.xenproject.org Tue May 02 11:18:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:18:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528529.821815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pto1f-0006FZ-Mn; Tue, 02 May 2023 11:18:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528529.821815; Tue, 02 May 2023 11:18:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pto1f-0006FQ-Jo; Tue, 02 May 2023 11:18:31 +0000
Received: by outflank-mailman (input) for mailman id 528529;
 Tue, 02 May 2023 11:18:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=87GZ=AX=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1pto1e-0006FI-66
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:18:30 +0000
Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com
 [2a00:1450:4864:20::331])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0ff17e9c-e8db-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 13:18:28 +0200 (CEST)
Received: by mail-wm1-x331.google.com with SMTP id
 5b1f17b1804b1-3f1738d0d4cso21527345e9.1
 for <xen-devel@lists.xenproject.org>; Tue, 02 May 2023 04:18:28 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 d3-20020a05600c3ac300b003f19b3d89e9sm28840853wms.33.2023.05.02.04.18.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 02 May 2023 04:18:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ff17e9c-e8db-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683026307; x=1685618307;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=8+X+FRGE1DrLyhmqqDKnng02bF8UXtim7gerRdngijA=;
        b=YLNjwFlD4z/AZul9GNAlrTP92tKk96Jm+U48uA9AODsEIxRxKDmIwNZBViCAnBq7Lr
         O7BiylT+CDdvtc17hRxx+LxuCh6p9aPIkp/qi5bqfHnoS4rFQpi9Ua/Y4B2SZdryp2LI
         +psQj5pzWOy1XckQaxaQeRev1J9uF7E0VRy2U=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683026307; x=1685618307;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8+X+FRGE1DrLyhmqqDKnng02bF8UXtim7gerRdngijA=;
        b=N1XRYzZqoIkrIrTHpyxV498jJ27zLJyKEkpWTQvPFJvn9vOAgBZDHp6hRaDLwJme94
         t93hGfSoYE1Su4Q9SpjSyrzs7oxdJYowubQ0NOsOdlhXm8qcJ1kpWwyTMkDAR2r0szFg
         vzlB3kYRzIw/JASI4If3d2DHTMP703keWt9B2GsqUoDJmmKk8KxAGFPgyRg5CiW+owv7
         yQdI1aL9a/qXWP0RW4yrC2FInbu9QtMYDivkdpqA228MlxxsoDTjnC7PUWjDCJR06ptO
         8dMNO4VEUVGqh7YCh2aaikWZoMNFhF5EXTc4XykB99N+gJpPGS1PdlsYg8p4rar0TY/2
         EYBg==
X-Gm-Message-State: AC+VfDwAqjB4JYnHTvrtCEE9faOSSG7rUh4S5Ns9kpJXmBD5QE+yvrrk
	izgblw5WE8cXEe5MM1sGhZdrDOY8e/UikdYHoGg=
X-Google-Smtp-Source: ACHHUZ7acyPlN8Pg1jZo9IQ6xwFsYu3r965ZVnMjr4/dd1OxIWSeWCn5jKnCNLq0722TvA15V/RZxA==
X-Received: by 2002:a7b:c407:0:b0:3f3:481a:902e with SMTP id k7-20020a7bc407000000b003f3481a902emr823096wmi.15.1683026307281;
        Tue, 02 May 2023 04:18:27 -0700 (PDT)
Message-ID: <6450f182.050a0220.6009b.3ca5@mx.google.com>
X-Google-Original-Message-ID: <ZFDxfwjiP2xv12F5@EMEAENGAAD19049.>
Date: Tue, 2 May 2023 12:18:23 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v3 2/3] tools: Use new xc function for some
 xc_domain_getinfo() calls
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
 <20230502111338.16757-3-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230502111338.16757-3-alejandro.vallejo@cloud.com>

On Tue, May 02, 2023 at 12:13:37PM +0100, Alejandro Vallejo wrote:
> Move calls that require information about a single, precisely identified
> domain to the new xc_domain_getinfo_single().
> 
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Tim Deegan <tim@xen.org>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Juergen Gross <jgross@suse.com>
> 
> v3:
>  * Stylistic changes that fell through the cracks in v2
>  * Reinserted -errno convention from v1 that had been
>    removed in v2

Mistake here. It's _NOT_ supposed to have that "Reviewed-by"


From xen-devel-bounces@lists.xenproject.org Tue May 02 11:19:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:19:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528533.821824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pto2i-0006tT-0F; Tue, 02 May 2023 11:19:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528533.821824; Tue, 02 May 2023 11:19:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pto2h-0006tM-To; Tue, 02 May 2023 11:19:35 +0000
Received: by outflank-mailman (input) for mailman id 528533;
 Tue, 02 May 2023 11:19:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pto2g-0006tC-2p
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:19:34 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060b.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3659de89-e8db-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 13:19:33 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB8059.eurprd04.prod.outlook.com (2603:10a6:10:1e9::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 11:19:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 11:19:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3659de89-e8db-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jID5z2WgZrRDtW+1/yKgjHYLFz2ny9PK9aPg2Kt08wXviM4LEfKr/zRfx0wNgIcj8vKAZqsCaZxt5RoWUI5YL7mTnlZLEX+rva9S0UPWVrb/3Ean5vPCNsHqTxsYzEEaD5WWqZO/sejzO0+IvSgKc+xJnUsniVT2OL5knp8QHVWYcOh8bG5rgteOlCpAWct6xetvWpT2dWpiT3zoNCH+r/jARA3EGQuiJDGOXOi2OWL6s7gJYS3CiRllIBgGbBZIrhxsIlGLu1sFHJ28H7sie/vVbSmobEzBh84iNnyfLPq9hcERJRq5E7KHKbSP6i1s169EAKEyVNQi/2duO9kWfQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dqbuaeoXW+5rsCtxwnigwiHezoJP/GDY0KCe4Lwn3+U=;
 b=bkzry4NfrkR7rO4I3Fj5vuBUw2OKeBvjf3yPhipQtFicznNQ2SWHuwvj7W/+ImA7VZDXbJpJmYHygCFkezNkyk+XfW/z9yZzbjXTYOAc3+/ZfDU4h0DGaIGyfmpa00tudyhOHKhqBgKXhN2UStx921f+SiElAyFD118cdPudfQMEU+8UAs/kWmaUD6SFc5bcJc7NTfD/dlDCp2amPwZ4p5mckori7tmCj4JnVVpSuUQocQcCV2GknZPLCaZMU1FXOEKbXUm2U1r9/7+ymZTArOHbDcCyI80nOzm+pvL1S+fTs8phddpfXrhbtaajdRbGvpTQXkCkZphxpqEP6UuGTg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dqbuaeoXW+5rsCtxwnigwiHezoJP/GDY0KCe4Lwn3+U=;
 b=AlyBTDydX9F3w8Qoz0dXWsSJEmS84f/RtjZUnYZ7VotYpKg/ebrufffMQ6BTckau69QVWipukZhnuwRHlX6WMMUFox6JNrhRfmtGlVRxsrJ2UP9fkhWQqCGrrT3OMS2qXmnSB/v0tjgIID2jn3Sfo3FlTb0PT4IM2lbRmVSilNagrDtbJVFDuTFRZ0O/P/C1HplhWt/z/1QwSgP58DOb3EW1PVB6NyLqDyKnfoQj5ymrK9j54d3sGgXXj5sPoSPVwQTsyg9e1ri1c6ZIYf/2Lks3T9OWA3quNT/apEluj08JjPVoGmgK/nuflG79LkNBIO3Q8v15oPa1ZEaE6CKW9g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b5292073-c675-587e-e19c-cbbeead41a7c@suse.com>
Date: Tue, 2 May 2023 13:19:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH RFC] SUPPORT.md: Make all security support explicit
Content-Language: en-US
To: George Dunlap <george.dunlap@cloud.com>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper@cloud.com>,
 Roger Pau Monne <roger.pau@cloud.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230428081231.2464275-1-george.dunlap@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230428081231.2464275-1-george.dunlap@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0077.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB8059:EE_
X-MS-Office365-Filtering-Correlation-Id: 49b311c6-ddd0-4f5c-7845-08db4aff19b0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8OPGBh78YVedYFLA78k5cbmGYko1lrNg1ewCqx+lq4fjSfbxfscm9R5eKRpmjLo8et6XcEnveif18bM3qdXT1dOnWhi6RDdjcXdBfmk/6JP1+fqDsJXCd6bLbUBDoYLuKEM8bHmc3VNh4IcuvONh83kjxf9DCnb94fG+OtL0Or+vaS/u4rJ9eA93TAbIPY9ttMd3Ssvifg5xUVel2RO01158ygQicP3v+Op4iO0VwEp2nmyrFRFkaeLxXZUuymAGrftN2weN1u7344jgk5Yhj2FsM2joJWobgHNc6D1Ru7HR38DOt9rD9mfvK4Zuy3iFgJjdj5g5ZDWEp0CD3ZDBKo8i8RnTNJ6T7LWab7V4KjTzTu8/Uk7l6saHuhWFIIRXYayWtC6PciA4F+50uiAvT0w6PaVrpAq9mGGiKG/Fww6cx+m4E9h1WSYe9734XcyNrfOWHrRFoaGR4U8bukivfPhR2TWLk+BXmsm7JrmY2YCNez/uKzO9J+69RaH/OUT6XTynEbCbxQto+V21ATIFsSHgTrA9mHp+x4d+DrQJ8FZnzlL0qylJCZZvKofq2AWrfGJ2x834/x/N1oxvIwn6ALc4hSy+ND0dVH+licLPFRqnUM3Flc+BU079/MBSa6WPTmsgQdUXG10Q6Ibq44yIwg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(366004)(346002)(396003)(376002)(39860400002)(451199021)(478600001)(26005)(6506007)(6512007)(53546011)(86362001)(31696002)(6486002)(83380400001)(54906003)(186003)(2616005)(31686004)(36756003)(66946007)(66556008)(66476007)(2906002)(316002)(6916009)(4326008)(38100700002)(8936002)(8676002)(5660300002)(166002)(15650500001)(41300700001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VG1XVEczYjdGOU5zT1hOTDZSS05BMU5ReU50Y2YvdXREL2g4KzJya2VGMmJL?=
 =?utf-8?B?WWlzNHhxK3ROMUZzNVVPdjIvWUhvNW82NTRkQmFrRytuRllTREt5bzB1QnBh?=
 =?utf-8?B?bVo5cS9rNlgxL09mQyt0dFhVMGdSUUdKMndZR2oyam5JbjVhdllMMTRoRS81?=
 =?utf-8?B?cUs3Y3hCZ1VndWMxSTU2ajdCSklaY1lZSmNDWG8vNnFBQUlXOFErNWErc2o5?=
 =?utf-8?B?NTVvZnBUaFhPZXNsWlB4aGtScEFMbEpkcGV3QTFnZndsSy95SGhFQXM4Ujk0?=
 =?utf-8?B?MzhXT3hXWlh1QW9FSkVtVGQ1R3FDWEI4bHlZeXo1VWNyWUhKaElFdTk5UE1h?=
 =?utf-8?B?NVhJK1pEbWVHeCtJQWE1cVd1UXdEaHczSTRPUnN4UkpPWHgzYzBmN0JMclkx?=
 =?utf-8?B?ZERvTWM2NllSU0FqWlpHeWFBOUIwTTNOcjBnenlwZ3UrdExuQkxiT29FU1NK?=
 =?utf-8?B?d2R4dVNnTlowVmZQd1RaYk5RMjk4MDdzMkJKVkx4a2ppNWJUd3hiRFB0VTF4?=
 =?utf-8?B?R0pCTEgyQUpQK1pvak5PVnhHWWN5TTV2M3prdkx6OC9nUEZFQUpEME8rR3k0?=
 =?utf-8?B?MHlkT3NveithRUNPZWhkS1JpZk1YOFNOa2wvaXRadkZ3ZEhpQ1l5YlhKUG13?=
 =?utf-8?B?K0d5Qy96SGZFb1NvV1pranU0TExHWTNHRFV2MisvZ0o3eUttZ0pkQS9GSGIy?=
 =?utf-8?B?aXpPZnVUYU1EY2J1dE81ckdXeVRiZ001Rmo3OExDMHQ3aVdGczBLNTh4S3VQ?=
 =?utf-8?B?cGwvZXZucjI1UFNhSVdRaStYRDBnUTkraUFMVTJPbGMzc3lZZ3dwOW45OXZV?=
 =?utf-8?B?TmJzSzkzSEpWTUJVS3krWTJid2d5SDFsdFZxaFlRNTRvNzA0N0F4cGZlaXVV?=
 =?utf-8?B?bzkzZnQyRXhXalcrR01ZM1dJckhqNUJNcXcyZnMzTU5YbFNWT1dLa0JFNFlt?=
 =?utf-8?B?NXdTKzk5TWdhbWRXZ3RleVFqbWFqN0lmbmJSMmlMbnBVdFFLNDlWS3NvSGoy?=
 =?utf-8?B?Um9aUE16RGtCN1dHbTlmdG5sa09XbnZOL2ovWGdPQnRGL0hlS0VieTRpWGxa?=
 =?utf-8?B?TUVEbElLSHMzc3I1MHorSm5ENWxoZVF6WVN4bHlZckxyNk1WVHVwN2RtSXMy?=
 =?utf-8?B?Tks0SWo3UjU1OXZjcWE2YnZuTjYxQ2M2blBwbjhJdkNTQk9ZNW9iaVZKVTI1?=
 =?utf-8?B?cWYvZlZaSGFnd3puOCtYQWk3T0dNdG9mejFudlBpTHJIWVh5QXhLdjBtYU5y?=
 =?utf-8?B?anVYOVhLeS9PME93ek9QcHprTDFDWDNvTjFVSFdiK3lEZ0JDQVVEUnRJRTd1?=
 =?utf-8?B?bFM0SHhJZ3FvQ213MUI3SWEwSHgyNllVdnFBRXBLa3pyd0Nud1BKeHR5Wkt2?=
 =?utf-8?B?R1BXd1JZS2ZYNjR2U0lTWkFKN01iRlBOKzk4eFBVbHdTdzlTSytLemJuMGl1?=
 =?utf-8?B?R3ZmSUF0cHY2bTVqR2lNT2dpU2FVbk5OVWdJK1V1dFFHaHE0TDJ5dGdHd2Ro?=
 =?utf-8?B?VW51dzc4dnJrT2tHSGFqK3dPaVNoaDNNaVdNSzAza05LcUNuN1hDekJ1ZWVM?=
 =?utf-8?B?S3phZGNKWkZhQmc0Q3UxNU5aaTg1bklrMzNFYjF5TTM5YlZXV3lBYUFZYVZw?=
 =?utf-8?B?L3RFZU8zc0ZKbXlHY2NqRm51TVg0eE0yTDNJaWN5YmgzZnZnellFV04yRlJK?=
 =?utf-8?B?ZkdQNzNOYThlMlhuTlhGTlIycnVJdVhSU2M3QnpwejhXMFNQVkI2M0h3eXVO?=
 =?utf-8?B?TExFcFJlZnNHNktNU0RnbEkvOG5zekhOaEJVQWxKNlFzejdqSkpYZTZEZkxu?=
 =?utf-8?B?dThnTENMVVRhWUp1Z2FpZVpuRnVCOFdNQjkxNnRPd0dBQ0VXbXdEeTJBUEJ1?=
 =?utf-8?B?cDNnRTVrdDFmNFlER1RubUNVeG43NTBqWHlXajd5R2NZdVJwbHJCM1BENlRm?=
 =?utf-8?B?UTg1L2hKZUNaUWdqbmRBUWlGbzkyVGpEY1o5ZEZFeEpmU0poc2Q3d1hQUXdl?=
 =?utf-8?B?cUdSc3ZiMFNPd0V2a29vUFBsQzU4Wi8va3phUkd6RVdUZ0E1RlBqZjRwYWtD?=
 =?utf-8?B?cXNabGNtOXd0bEl4U2g4NnIyN3hlbW9SQjBtNkFQM1BLTnlPMWY5RzJvQlc4?=
 =?utf-8?Q?k23PPNjTdFFL+TBWmvUsIAIXF?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 49b311c6-ddd0-4f5c-7845-08db4aff19b0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 11:19:31.2615
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jEwhyosF+CcD3YKWI3wdlqqOcscKeRzRiALSCbhl+RuVB3kNEzFQGLh7Cgc5jKwRS4U4AiVPFAkRCAyCLKAhMA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB8059

On 28.04.2023 10:12, George Dunlap wrote:
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -17,6 +17,36 @@ for the definitions of the support status levels etc.
>  Release Notes
>  : <a href="https://wiki.xenproject.org/wiki/Xen_Project_X.YY_Release_Notes">RN</a>
>  
> +# General security support
> +
> +An XSA will always be issued for security-related bugs which are
> +present in a "plain vanilla" configuration.  A "plain vanilla"
> +configuration is defined as follows:
> +
> +* The Xen hypervisor is built from a tagged release of Xen, or a
> +  commit which was on the tip of one of the supported stable branches.
> +
> +* The Xen hypervisor was built with the default config for the platform
> +
> +* No Xen command-line parameters were specified
> +
> +* No parameters for Xen-related drivers in the Linux kernel were specified
> +
> +* No modifications were made to the default xl.conf
> +
> +* xl.cfg files use only core functionality
> +
> +* Alternate toolstacks only activate functionality activated by the
> +  core functionality of xl.cfg files.
> +
> +Any system outside this configuration will only be considered security
> +supported if the functionality is explicitly listed as supported in
> +this document.
> +
> +If a security-related bug exists only in a configuration listed as not
> +security supported, the security team will generally not issue an XSA;
> +the bug will simply be handled in public.

In this last paragraph, did you perhaps mean "not listed as security
supported"? Otherwise we wouldn't improve our situation, unless I'm
misunderstanding and word order doesn't matter here in English, in which
case some unambiguous wording would need to be found.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 11:28:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:28:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528536.821835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoAk-0008Qe-Ph; Tue, 02 May 2023 11:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528536.821835; Tue, 02 May 2023 11:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoAk-0008QX-N5; Tue, 02 May 2023 11:27:54 +0000
Received: by outflank-mailman (input) for mailman id 528536;
 Tue, 02 May 2023 11:27:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptoAi-0008QR-Nk
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:27:52 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20628.outbound.protection.outlook.com
 [2a01:111:f400:fe12::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5dd3c46a-e8dc-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 13:27:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6807.eurprd04.prod.outlook.com (2603:10a6:20b:104::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.27; Tue, 2 May
 2023 11:27:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 11:27:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5dd3c46a-e8dc-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f4BbJ2WtFMlwH/xIgqBUosHoNGpHeN4SOvdzvm+y1aqyNI1svP2fkQMVP+ciD9y/MUuADR4cRgTQCzafMKGeAHjWMnAjlKtClFB41q5hpOwQEsVfL89zLFb48IGnHiAlU9Sa+csrGCTho1/vug2JY1GI0BUUzcP1eku0tHHEdSEiTpHcrI6oBLO80zeHCve8r4bR5PMRrYgyzzhBe83iAo7gGMJE+nXDujcvNaMvzj6PIRInR5GsjjervXSJFes1776p+FYz03ZdVO0/SX7/QVkPdHeiqKo5oCs799b9R/AHuUh/YchqNj9igKuyQXvwFWvLNla0XhEp26o5ZAeHpg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dB9bahqQeSO9ULtaIOJCrZaIgtCz4kfEoGdjEEz10DY=;
 b=KJb1NTkiWAfURYKljwQO68ZOOLqd49eSDsjp9w3/WXAW8k6B1hf74xacLa24VO5mHM3oWZfNNukTlFcRFGUFmtGQTWPLRWc9RzFNex0zvI6VBqdz+NOS3GRORsMeNZVNE1iM6rQf2Aaks/5pHPy7xEGTNGLuRc1YIYKpp4LOr3YUqA3l/gJvNd1fGCmYklZEXLvv5IB0sXpVp6Dwz3+u4IuNwWrlUtFF7/VDbiUEkYnJHWnYTc6JUAdx/xUxW7fO0uwKY7eYAkF3jUOGyaTR+63+vRw7RgVxQ/F2tbUay5tJ/eGwxc1LeBLINS3uZ3nz1XGHkEIOuyYSMC5FUwGAVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dB9bahqQeSO9ULtaIOJCrZaIgtCz4kfEoGdjEEz10DY=;
 b=Ywgki5ZhGPL4/Jowj+sKehesK0CYRq+EjpOAgpyeuqW7uQKUtiZOmeuyEVecAOCiOPujEM1GQOhtBafO+Xc5jf/f3anVAGaCBXo8R66umTCUblME5bXYaqiF4EOfABQumzhSeM2USnUzS3THtz+OX5AzhLQAVQro/XMtRt9qDnr9vB30YdW/yGkeMC2KSZviXamFdvhkklJaxmiluwFX2g1QFSblpVlMF2+PDeCobOURAPBgSgtaQgl4eJcof4nTll+1836MskNjMuY3IiRQoPMITtXGTvEum3DR0oxQxl5Z2ekJzDihIitLhNWpzpyClswIFuleSf7GMEH120C5vg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3485b49a-08ad-157e-f47e-1a6826339cfa@suse.com>
Date: Tue, 2 May 2023 13:27:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH RFC] SUPPORT.md: Make all security support explicit
Content-Language: en-US
To: George Dunlap <george.dunlap@cloud.com>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper@cloud.com>,
 Roger Pau Monne <roger.pau@cloud.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230428081231.2464275-1-george.dunlap@cloud.com>
 <CA+zSX=Z3Sr+OOoM3V-oVG6ooGFG7zmpqnAEdBC4q8pnmgfx7JA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CA+zSX=Z3Sr+OOoM3V-oVG6ooGFG7zmpqnAEdBC4q8pnmgfx7JA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0109.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB6807:EE_
X-MS-Office365-Filtering-Correlation-Id: 882adbc0-b881-4fea-9b6a-08db4b004084
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 882adbc0-b881-4fea-9b6a-08db4b004084
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 11:27:45.9082
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UM2yYJ7gwFaho+fziHmSjd44KeljhOHh4D5OWU1Ko3K/ytlDkZ6Of5lpTk9dZuOvc8UU30Scku8GAEx5D1t0FA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6807

On 28.04.2023 10:14, George Dunlap wrote:
> On Fri, Apr 28, 2023 at 9:12 AM George Dunlap <george.dunlap@cloud.com> wrote:
> It occurred to me that in many (most? all?) cases it would be more
> effective to define the security support parameters in the
> documentation itself.

I think I agree; the alternative of needing to look in two places (one
giving the syntax, the other saying whether it's "legitimate" to use)
would be prone to people omitting the second step. And this isn't going
to be meaningfully more work right now: any option we don't mean to
security-support won't need annotating, i.e. as in SUPPORT.md the absence
of an explicit statement would mean "not supported".

While your examples list only command line options, I guess the same
could apply to xl.cfg / xl.conf ones? Albeit I notice that xl.cfg.5.pod.in
specifically says "syntax" in its title right now, which then may want
changing.

For Kconfig items it's not as clear, because I wouldn't consider the
various Kconfig files "documentation", yet I guess we shouldn't require
people to look at source code.
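
One way this could look (purely illustrative; the "Security support"
annotation line and its placement are invented here, not an existing
convention of xen-command-line.pandoc) is an inline support statement
next to each option in the command-line documentation:

```markdown
### example-option (x86)
> `= <boolean>`

> Default: `false`

> Security support: not security supported when enabled

Description of what the option does goes here, as today; the support
statement travels with the syntax so neither can be missed.
```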

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 11:33:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:33:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528543.821845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoG4-0001Yx-J7; Tue, 02 May 2023 11:33:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528543.821845; Tue, 02 May 2023 11:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoG4-0001Yq-Ft; Tue, 02 May 2023 11:33:24 +0000
Received: by outflank-mailman (input) for mailman id 528543;
 Tue, 02 May 2023 11:33:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d5QU=AX=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1ptoG3-0001Yk-Oe
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:33:23 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.163]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 249ce61b-e8dd-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 13:33:22 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz42BXLcdY
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 2 May 2023 13:33:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 249ce61b-e8dd-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683027201; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=OPmksB21DJ/AM3xaWUaFAcKcHc64BbFMB73v/u4wOT3i927uccuD9SukXRIZvaQhID
    DJEq8Q66LYnXBFZ9r1KW/TDnUYcfLUKIjYIIlqGsbIT86770hv04loBQq1ZMDv0H0aVI
    5A3a8fXXz7Woegs3KdX7AYOA4SFOHj7Nuxc2lbFx15slcIAbUH5B049KWqp7UKlu2xai
    Ih4L6i1gKMPj1EzieMpX9aTZPdr13TiAbYdwFVfiloAtUmN+TLeUulzkBZDJXH1bTIjA
    jQ0bVi96oHHncNlTQF+6h4f3YSR1HbRlHMpsvbcSCD9EcMUIOKI3qZf0hRP73eX5nvxP
    gqZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683027201;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=a3cApHRkIUuCYLUFapSTvXuSRmye/cuLETd9DiG0noI=;
    b=ISXeZjZgIr3V0NLFvqCUV1BhAzOmu8OcsmRZBL0rKjuQGIx/TX5eOgxcaxHOsm4w2R
    VKwN5HyFRAGoF+VrdVj/yJMcVYyZCG+MCfpi0RD7QtdQmEPyvZgiZT15j7i7vb2ddZTT
    UFm4u5Kprraq/L/xOPQm8UOyd3n6LhQYHsB8ae/VUUaB2dj0Khjqw19vKjiLpoPgGQCB
    R9lf1hhVkzZxNUi1xVX6FNada8gSJzZaty9RNQ7oD1txhhmeCqjvZxjIpcyLUoMABH+i
    oYxM1MPx3iq6zu2ExwVJC4Yb5bpMFFA3aBuT6GczBiv/mpIheDookkb7Jso7M12eHJCC
    oFcA==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683027201;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=a3cApHRkIUuCYLUFapSTvXuSRmye/cuLETd9DiG0noI=;
    b=jGjW0GErGZEy4WbQqHVMLYT0A9O7hTsDS/qPrMyOBIZW2JH4rOkTvn+swnfYZq7LX8
    lxSSS00N/2pYm10CDYjR5XQrrUQE964BirJjdwH9nKodnqZZr4msGJ/YrngdlBWtQNZ7
    TGFUW6NaoEB470r8V0QpxBkQk4nCD6Qi+IRRmsNVsVavjz/EwndzuZnR442NMgnqBQnr
    nMAF7umO1+8CMIIFu8nO4xMPtiY/kyVczKAzFIkqBpcnQ6a1lDBpiVidPEiijuJjJdZk
    MgWt9DnC7EOT5Cw1zVDwYIaXnMVL8JIBIvytpCDUl5O/FDcsYYCGvO8QK4rcZtujmPu9
    ++Dg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683027201;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=a3cApHRkIUuCYLUFapSTvXuSRmye/cuLETd9DiG0noI=;
    b=Fc8fDeCGR/88HxagNILvsBGC25hJpHyM34J7f+xg5BbWjHbo0PyYaWEW/5FFYqTdCO
    g6wq6q52WXuaTUWbZ3DQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR5BxOIbBnsc1fym1gFvNQ7EzMpH+yFJc4aADp/8Q=="
Date: Tue, 2 May 2023 13:33:13 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: HAS_CC_CET_IBT misdetected
Message-ID: <20230502133313.2192eb99.olaf@aepfle.de>
In-Reply-To: <43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
	<43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
X-Mailer: Claws Mail 20220819T065813.516423bc has a software problem, nothing to be done about it.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/ZQhO5xssfOKW5Bk_p7R4m3I";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/ZQhO5xssfOKW5Bk_p7R4m3I
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Tue, 2 May 2023 09:31:56 +0200 Jan Beulich <jbeulich@suse.com>:

> How does 2.37 vs 2.39 matter? CET-IBT support is present in gas as of 2.29.

I have no idea. It turned out the previous Leap image was based on 15.3,
while the current one will be 15.4.

If I run this manually, it appears the error is produced properly:

gcc -Wall -fcf-protection=branch -mmanual-endbr -mindirect-branch=thunk-extern -c -x assembler -o /dev/null - ; echo $?
gcc: error: unrecognized command line option ‘-fcf-protection=branch’; did you mean ‘-fno-protect-parens’?
gcc: error: unrecognized command line option ‘-mmanual-endbr’
1
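
For comparison, the kind of probe the build system performs boils down to
a tiny helper like this (illustrative only; `cc_supports` is a made-up
name, and an empty C input is used here instead of the assembler stub):

```sh
#!/bin/sh
# Probe whether the compiler accepts a set of flags by compiling an
# empty input; only the exit status matters, as in the $? check above.
cc_supports() {
    echo '' | "${CC:-cc}" "$@" -c -x c -o /dev/null - >/dev/null 2>&1
}

cc_supports -Wall && echo "-Wall: supported"
cc_supports -fcf-protection=branch -mmanual-endbr ||
    echo "CET-IBT flags: not supported by this compiler"
```

A misdetection then means this probe returned 0 in an environment where
the flags are in fact rejected, or the probe ran against a different
compiler than the one later used for the build.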

And for some reason there is no failure with the refreshed image on gitlab:

https://gitlab.com/xen-project/xen/-/jobs/4210269545/artifacts/external_file/build.log

I will investigate why it failed to build for me.


Olaf

--Sig_/ZQhO5xssfOKW5Bk_p7R4m3I
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRQ9PkACgkQ86SN7mm1
DoC9CA//aPfzO636VAjq1LQFiCVnWwcsTYs0KQwbwG9x31em7JznVL0gCwMfHYAW
ICNLXqRWD1h9g8/Yd8O9BquHMJkvW2GUyPDfEP815c8mnUiGoVBMAv+C4oWUNDAS
rpmZYgj4a6qvKDzCEienxUgTyYSamcNGg4sBY7jHuXnb8rSdDVlEE4hVQI2cQZFa
707tI0DQAGct+DbinX7VAO7feLRHm9vYEs9tDC/iaxrbqlcKSDJ/Nam60vSBma0M
esw8xw5+WoXbFCTQ2KVKcL3dfxaDG7vkN6QrCUD26SoDe1wA/a6bVMQ/nauidTZd
QQ/raf5KI65Nt1cSf8Cq7GZ34twmHXt76TRM2HQwPM4kFb7Agj0n1tdGPNxWJnkA
ioOTo7GBWsAwm47UKb/XGU83rFuwHHDVQNQegmMBG8w/M6WSaOSWqiy6M577FnLd
QoeQWoTgCsbwAI83UaYglllxYZBqNiflQI7YxFEOAm7hVkIDEU3fV6skVVgIeMHb
Rjtvx/WiLuARltBrfP2Wewr6qrPsJTI6bNlNoztPnxri50D/TcQUVbVTztQ4cseP
T9wFQ6+tzejC1C2C0joBdn8MsiW5O29cnqcMMDJ4EJLR/3OP6yUxhxAjQjvSr9Aq
JLH1x532mVooANRhpcSpG0kqwLW1Jz1o/2MuuB+nRSjvQ0P+IU0=
=IsJ1
-----END PGP SIGNATURE-----

--Sig_/ZQhO5xssfOKW5Bk_p7R4m3I--


From xen-devel-bounces@lists.xenproject.org Tue May 02 11:54:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 11:54:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528547.821855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoaV-00047n-4v; Tue, 02 May 2023 11:54:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528547.821855; Tue, 02 May 2023 11:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoaV-00047g-1p; Tue, 02 May 2023 11:54:31 +0000
Received: by outflank-mailman (input) for mailman id 528547;
 Tue, 02 May 2023 11:54:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptoaT-00047a-Qw
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 11:54:29 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2064.outbound.protection.outlook.com [40.107.247.64])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16d09ade-e8e0-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 13:54:27 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9066.eurprd04.prod.outlook.com (2603:10a6:10:2f1::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 11:53:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 11:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16d09ade-e8e0-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SiGR+Kn7niNo8TaNJwOseoCdTInLhvHtyqhSW/Kho6TYp8rB3GDWB5unsDBXlW+4/+KMaSDR1vVodJ3ugcUmy00uKkeik5dRoq6uy7n4oZd7iKXGifGiYBE2GjYj3le2WSXR0yp5g+U28YzLCB3fTzqpafRX2yJoSwutohIk5zqEc0z9s4bPIj/rBMmkJvUXaKHBX3FABQ8VyoQ8RNr7wF0cc1ti5Y9ZS440R29J/mkO+NIlEzQSZgSS6ql3mGLnO4723vlyhcdNTxCwA555ajhrNq1XopH7VctesS69b0KdWSAOuKhAWc5nunRJsQBA8YJFFzjB/3jV6kq3Z9OykA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bT7wUUqaHwxjkzXfncf8KWA9QpevS2EclWSY0d65XTo=;
 b=fZ6Ugvc1+pJLSyNnE8qvQ+FeSiCqq3OsRWHhB+yq7XvFeViSTS72/qyOlHWh/tq1Bi+Kfshb6qbhDH+oALBxYWwXFaS+i5wrqwfOY7BW7JTRPwoHZQX2ggeUxcXXNBa1Jdnqz5aLNcR1kZLt+j6MJGFLheAsNY4LqCBsQ5gJz7ZohEL/IVz6agabE//fF9uJsv6Za3d/uUk+uXLCoZbLlJvXyeGgDy1EV8iyIsJ9tFbJAgTzmH2uqN1VZiotaQjOFDTSwBaUa4KOcYSw+XmAKiH3w90Kb8Ie1hR29Onat5wJycoGPWq4FBbddMvrh5QEkBLBnuV1ygDGSH/AYq5iIA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bT7wUUqaHwxjkzXfncf8KWA9QpevS2EclWSY0d65XTo=;
 b=CJWz4+mLeT+IbrvTeMLxHO2t3lPArD5xSilsc8NKdcQ7uHtlgZapFvdq+KvahxISJueU8RVnmbQsRyKwiRnu87k+hir+ReOtyQciAdjuIt91hYYOfbWEIzlnO6pf8+uT8kR05F0zti8SUZ+ht4pK0KEs544gnk5l6+ghZd8tTxyizUKuyawuyY2kAiks53pQqd8pmEOSyUoR3GYeIYTZW8/f6zkiFs9OdMIDipfSpqAww0Vs33bcYW4q32Q8ADVrMZveNkwhJGVbMPbCfd9EAnZK6D0HFMUn+cPx8BvW+purV22qYK3cQNoJg1geZ6S+xL7Sa6IJxmZv6iEffL2Kaw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5516fbf5-dbfc-dcd5-0465-e4757fdc16de@suse.com>
Date: Tue, 2 May 2023 13:54:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 1/2] acpi: Make TPM version configurable.
Content-Language: en-US
To: Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
 <20230425174733.795961-2-jennifer.herbert@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230425174733.795961-2-jennifer.herbert@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0103.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB9066:EE_
X-MS-Office365-Filtering-Correlation-Id: 1179ff0f-d307-4ea2-0caa-08db4b03e9d8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1179ff0f-d307-4ea2-0caa-08db4b03e9d8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 11:53:58.4639
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6Sdj6afygbv1f3YZMtMfKvSDPVa/a5YB7pY5bcmhNJOYKdorGtyutwrvxYCI5VfZp8Oy2MJlrqWiSGZIT8q/Gw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB9066

On 25.04.2023 19:47, Jennifer Herbert wrote:
> This patch makes the TPM version, for which the ACPI library probes, configurable.
> If acpi_config.tpm_version is set to 1, it indicates that 1.2 (TCPA) should be probed.
> I have also added to hvmloader an option to allow setting this new config, which can
> be triggered by setting the platform/tpm_version xenstore key.
> 
> Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
> ---
>  docs/misc/xenstore-paths.pandoc |  9 +++++
>  tools/firmware/hvmloader/util.c | 19 ++++++---
>  tools/libacpi/build.c           | 69 +++++++++++++++++++--------------
>  tools/libacpi/libacpi.h         |  3 +-
>  4 files changed, 64 insertions(+), 36 deletions(-)

Please can you get used to providing a brief rev log somewhere here?

> --- a/tools/firmware/hvmloader/util.c
> +++ b/tools/firmware/hvmloader/util.c
> @@ -994,13 +994,22 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
>      if ( !strncmp(xenstore_read("platform/acpi_laptop_slate", "0"), "1", 1)  )
>          config->table_flags |= ACPI_HAS_SSDT_LAPTOP_SLATE;
>  
> -    config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC |
> -                            ACPI_HAS_WAET | ACPI_HAS_PMTIMER |
> -                            ACPI_HAS_BUTTONS | ACPI_HAS_VGA |
> -                            ACPI_HAS_8042 | ACPI_HAS_CMOS_RTC);
> +    config->table_flags |= (ACPI_HAS_IOAPIC | ACPI_HAS_WAET |
> +                            ACPI_HAS_PMTIMER | ACPI_HAS_BUTTONS |
> +                            ACPI_HAS_VGA | ACPI_HAS_8042 |
> +                            ACPI_HAS_CMOS_RTC);
>      config->acpi_revision = 4;
>  
> -    config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
> +    s = xenstore_read("platform/tpm_version", "1");
> +    config->tpm_version = strtoll(s, NULL, 0);

Due to field width, someone specifying 257 will also get a 1.2 TPM,
if I'm not mistaken.

> +    switch( config->tpm_version )

Nit: Style (missing blank).

> --- a/tools/libacpi/build.c
> +++ b/tools/libacpi/build.c
> @@ -409,38 +409,47 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
>          memcpy(ssdt, ssdt_laptop_slate, sizeof(ssdt_laptop_slate));
>          table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
>      }
> -
> -    /* TPM TCPA and SSDT. */
> -    if ( (config->table_flags & ACPI_HAS_TCPA) &&
> -         (config->tis_hdr[0] != 0 && config->tis_hdr[0] != 0xffff) &&
> -         (config->tis_hdr[1] != 0 && config->tis_hdr[1] != 0xffff) )
> +    /* TPM and its SSDT. */
> +    if ( config->table_flags & ACPI_HAS_TPM )
>      {
> -        ssdt = ctxt->mem_ops.alloc(ctxt, sizeof(ssdt_tpm), 16);
> -        if (!ssdt) return -1;
> -        memcpy(ssdt, ssdt_tpm, sizeof(ssdt_tpm));
> -        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
> -
> -        tcpa = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_tcpa), 16);
> -        if (!tcpa) return -1;
> -        memset(tcpa, 0, sizeof(*tcpa));
> -        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, tcpa);
> -
> -        tcpa->header.signature = ACPI_2_0_TCPA_SIGNATURE;
> -        tcpa->header.length    = sizeof(*tcpa);
> -        tcpa->header.revision  = ACPI_2_0_TCPA_REVISION;
> -        fixed_strcpy(tcpa->header.oem_id, ACPI_OEM_ID);
> -        fixed_strcpy(tcpa->header.oem_table_id, ACPI_OEM_TABLE_ID);
> -        tcpa->header.oem_revision = ACPI_OEM_REVISION;
> -        tcpa->header.creator_id   = ACPI_CREATOR_ID;
> -        tcpa->header.creator_revision = ACPI_CREATOR_REVISION;
> -        if ( (lasa = ctxt->mem_ops.alloc(ctxt, ACPI_2_0_TCPA_LAML_SIZE, 16)) != NULL )
> +        switch ( config->tpm_version )
>          {
> -            tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
> -            tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
> -            memset(lasa, 0, tcpa->laml);
> -            set_checksum(tcpa,
> -                         offsetof(struct acpi_header, checksum),
> -                         tcpa->header.length);
> +        case 0: /* Assume legacy code wanted tpm 1.2 */

Along the lines of what Jason said: Unless this is known to be needed for
anything, I'd prefer if it was omitted.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 12:01:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 12:01:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528552.821865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoh5-0005jJ-W6; Tue, 02 May 2023 12:01:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528552.821865; Tue, 02 May 2023 12:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoh5-0005jC-TN; Tue, 02 May 2023 12:01:19 +0000
Received: by outflank-mailman (input) for mailman id 528552;
 Tue, 02 May 2023 12:01:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MJpR=AX=citrix.com=prvs=4790f2855=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ptoh4-0005j6-7L
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 12:01:18 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0908c1e4-e8e1-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 14:01:15 +0200 (CEST)
Received: from mail-dm6nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 08:01:09 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5888.namprd03.prod.outlook.com (2603:10b6:a03:2d6::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 12:01:07 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 12:01:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0908c1e4-e8e1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683028875;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=MO9YZe9mZayTlgGSpnI5uZXpludFlQY/Ro+jZvMe6yE=;
  b=dIReUgIAzMymFuirSf32CIN/TrUgaRp2EwNSPSHbLwE5oneFSKbnVnHK
   aUmVBzjQiaqA8syN3IF75MqLrHufVJkjO/XaCh/N7wovxDH/jGKnyR3Ll
   nO0RkZpUhS68yeqdyRkEQBeXZL60y5HPJ2jPncaRxh33oRO59Zoa53ukW
   I=;
X-IronPort-RemoteIP: 104.47.57.170
X-IronPort-MID: 107971263
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="107971263"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WjiK3cgwVFpZFETyuqsDf7Z4ZhTFX3IPoKJen1lTGoe0Dar8CTtcTZGY8CsZypQUs2JmAVNje0tG9hBdWl1ymFBB68+jrdRmS8CfRtFDGzuvXgoPd1h019eKmbTJkOzjBc2lTbL9Ny3C65el3K4M3jvPP2PAxF8y8OjvKB8tf+pHvIUoe3bdAPaUFRyT5vab1hZKCUUq0v5EJ290J/fzSf7o9USCvZW6OhzfzKL2uTsMfoZje+/YPxM5UAhDfVvEiy8czuFHLjKMoZsyoxrLl4BdoqIQrhJSMn6MRvUu/PhRM/2khbuI2wmeAnvSfH0ByS5NmVOv49VxvZ58RiU80Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MO9YZe9mZayTlgGSpnI5uZXpludFlQY/Ro+jZvMe6yE=;
 b=mkgTl/uH3ccqFUk3Gd/H22GN6i4SPsQVzlVGHPIvTFYlLaFux5WTNTlAX5zbeuE4FSinafJajwbfaLbnWU4PG2fb4oOSTKLNYQqlsYwND/M0DqzLh7RcnYHYxBx4fYoDLnoR4z0SBk/rek1Ir7J9xNt2wSlrWPbGh/OLPhAQZm2vOW7hLOVYvDhoCU7kcFAJsih5lOEaDcoOp9eGzs3uT2cgGhEjkdHbbevRkCexBpEyEwW8V3/Dci/uc3SkiASt9wFtrX+e2ovbMQfdE19MzqF+MguC/SvJ4+8o64lFma63DWKqh+1tS9Sq2BKaSrnKIuunbMls4TRgtCdGBZMDwg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MO9YZe9mZayTlgGSpnI5uZXpludFlQY/Ro+jZvMe6yE=;
 b=kKzwgn2UEyLvgVVmD1kmiWHuUTE7yXFu6K1f/NOEcOKD49BxSMuKf8ETlV4Oun6TAiNXPgmcZA+4VZO5xYJEvBWCdfiZJNNGMbAv0zRXLSadVQHMXP4LuPzgzO8szGMj7jlng/scj7odMejv8UxmpO6c/FznWtxObPGcTFkqVHU=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <71c0b0a4-caf5-2b60-0e6a-54dfe313e332@citrix.com>
Date: Tue, 2 May 2023 13:01:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: HAS_CC_CET_IBT misdetected
Content-Language: en-GB
To: Olaf Hering <olaf@aepfle.de>, Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
 <43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
 <20230502133313.2192eb99.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230502133313.2192eb99.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0576.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:276::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB5888:EE_
X-MS-Office365-Filtering-Correlation-Id: 5a327ba5-300e-4f1e-6363-08db4b04e943
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6j4PHyszZ79nd8anDnfw2tQDadC4m8CIJPjldIzYZBTaIl+53GbrRcVNbr9IBbykqw+zXUQ8ZwQj8V9DStc71Y2cHIRXCuS97TMUZHpql8AgmiklNi1I8usu//GGO6jNWS8jzdKLT86HqTnZOp7WzsZ6l1wuJqdKDoLHcy0iIv+QL6rhFf/c0W2J0HTyujKpYSAzG7JM1f9QNEYxcV3NTrqmc56NT6FxhambTlY/K7pXqHdBsz2Z3iZBeIqdau42UcrFDUuPXFoxpNLhoQX5NP98V7UYK0i8m6QqVncIGDtmav/T4ewDjTEi3YkyJs5mq+LVhZA6lXZLM0q8xU5al2mZXYKABgjU1fXeKnVsRt8K8hBeY8e803WhzLM+GFcA5BF1L1Mi3d3uM4+A/QTWiGSMwwTHYd22KM9KlPFv324NQvCjAyEUkDDRRfJsty8yztmQ8lgWMVa8GnjS+QoGkmM7w3XrehOqRCiYtURkFTTYPIgdoX1aTO371uEwbOnpU6Y/TqHvsoX501aat6bHy96SdlbD8o0CBDNRf2Jkq6xnoIlJGrSp9wW1RXVimSnzJ5IiErh+rBeEO+ZVKjbrelf67mxVRgcP3fJ4unktETSUb2FOyVjtkT50kazGpyRDy+4wB/HVICQHHlFXrCVSfw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(376002)(346002)(366004)(396003)(39860400002)(451199021)(66556008)(66476007)(82960400001)(38100700002)(6506007)(6512007)(31686004)(186003)(316002)(26005)(4326008)(53546011)(66946007)(110136005)(478600001)(41300700001)(7116003)(36756003)(2616005)(6666004)(966005)(86362001)(6486002)(83380400001)(5660300002)(2906002)(8676002)(8936002)(31696002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TXl0WUY2VUZtTE96d0Q0VzBNNUJNYXNyZGtneHA4SlZnSTJYcTkvSENxc1Ju?=
 =?utf-8?B?U2MrS0xQNnhSemVCaExVdjd0U3lNQ3htaklKTnRBZjJTU3A0SGNBNTlRc0xq?=
 =?utf-8?B?L0tHRFVpTm9meFBkWmRJUzBQYzQrR1dYYlI0bmNpeTlKMzhLcmZVTVd3WUxo?=
 =?utf-8?B?VXRLUVN5aGpCYTNpRjl6OWl4UnhXQVJ3ZmZEMDJQMTJNUHdyN0hnUGQ4UEk0?=
 =?utf-8?B?UTUzZ3ZzNjR4T0lKeXloUG9DaTNwdFpBSk1mR3dJcFhzenBnZXpRTVNicm1o?=
 =?utf-8?B?czE3WGlJTk9ERkZjS09ieTdTT25KQXU3bGRwUU9yNnkrbHJmYzF2UTJ1dW9P?=
 =?utf-8?B?a1lHbWkxTmtOamVrTHBDU3JlQXZGbVk4RVNQdGVSQjJBaHFDS0ZXKzV2Z3dE?=
 =?utf-8?B?ak5UalRteWk0ay94OHNsZG5rc3ppa0h2djVyaTFUWWNPT09UNllTVmVhQ1FE?=
 =?utf-8?B?VDIzS0h5VkJpNjNMTGpmSng5cVdjWnlzdkI1dy90K2RjRnhLY2VqSm03akR0?=
 =?utf-8?B?bjQwcU1OSkQzTVUzbTdFWGRiN1Z6cjh5M0JzNThoaW9nMVVYRVBHZUNGTUVF?=
 =?utf-8?B?V1crRlI1Szlqdkc4NWNMdnp5ZStIMDBSUDBmY3pCOU51K2g1MUJXN2hZTEhY?=
 =?utf-8?B?UDJSbEJLN3pRZXd6VE1ncFFxelJScFNYRGkxVzJmK3YzWUpXekxNc1p6TEIv?=
 =?utf-8?B?OC9iUlJ4OW5zM25Db001TTVVQlJKWit4MUxvZHVRMGpBTEsrNGFoWExsZEdp?=
 =?utf-8?B?dmkyL1pnbURNS21iWVFsdFVFWFJneERJK05kcURvY21ZL01tUVlIbDBrS1Mv?=
 =?utf-8?B?WUlpZDZLNFU0VmtWamxCR1VMVS9VOVp6ZkFTeTNFRithUCtOSnc0TklsWXM2?=
 =?utf-8?B?MTVIWlFQOUo1cnBVTHNKZ0hXczJtbktWLy9HM1Y5WkZ6Tk5MQ1g5aHFWVTZz?=
 =?utf-8?B?YXArMEJDdzBxRHhFbkh1d0huekVCcnJZNlN1bVBvYlhhemFEWnFFQXlUVS9I?=
 =?utf-8?B?Z3N3TzM0Si9JOGJrck80S1IzcjR3amxjR2o2MDFJY05wMS9MRS9Hb1BkbkNy?=
 =?utf-8?B?YU1NOXJDK1dUc21PQjFyaTRZdTRoZ2twcjBzaXBqbDJDdzA5clV5MGNUYmFD?=
 =?utf-8?B?YjRpNGZMNEw3ZzRmRmVaejZvTEZiblUwNmtzczJ6WVUrcGFRdzNuMFpyL2NC?=
 =?utf-8?B?SXBHUVJ6WUpKb29xajlHM3ZmOFdoM3IrNzk4TXJBM1YxdFZLaTdzV2krUFgv?=
 =?utf-8?B?V1RFWUczZk5GN2ZqWnY5d2ZYT2VNbVc3eGtVVTNtZjU5RXVEblgyRWRlSDBn?=
 =?utf-8?B?dkhkY0JvMG0zVmZ6VmlTYjhqMFVGeXJ4S0NucHVuOVBxUllZUVV2TzVlaEZp?=
 =?utf-8?B?VHhwdk42SjJqN09zd0ZlY3dGTExSSXZiSmtXSDM4R1RZSzdySkwwekx0dUZt?=
 =?utf-8?B?VkxWTDk0eXhOMXk2cWVBQVJOS3NoK0lBVUUvdVNOOUFrOHgwNWhKNEk1cGx5?=
 =?utf-8?B?QmlqUndDNHQ0SzRYZmF0L2hWSlQ5VkhiUDdKRkdUWUpQemxvbFdGK1JObWdM?=
 =?utf-8?B?djU1eWFLNlRZQ24wdU5RUjllU05ici9kNlhFNkc5aGhiZGlMQVFaZWN1Q0g2?=
 =?utf-8?B?WW82ZSsxTkd4N2JlT2VpL0NqSzZoWXk2RGNMc3AvSVVYMllJWGIrdWQrL2lo?=
 =?utf-8?B?WCsrV003UE5FMTlDTzIyb09MUk9SNFNLelFTdXBObmpybktLSnFvU01EZXFn?=
 =?utf-8?B?ZHlQWHUvdzJFcmd5WkNCWEt2OW9Ma3loRS8rVWJEdC9ITEhHa1k0MllSOUdU?=
 =?utf-8?B?cUEvZnRTRFVtSE1hWm0rUlkyMHBTcE8vMmdnV1VVbkY1S0xHcTZreDgrVVdK?=
 =?utf-8?B?d3J2Q252TWlQSVFxNkxJRkNZcEtDZklLS2VQQUJDY2tlZWVGU2ZLdFErQzBC?=
 =?utf-8?B?b2VTWmtxN0ZZWXVkUFZmYzI0RTc1Q2pGVGM2cCtILzVUWHNHNUJ1Z2dpd3oz?=
 =?utf-8?B?VTJsdm1xekhDZWY0UDlTUFF0WnVSeVlHNHRXcE1xeE1tVFNyNWtaZHdZWHhD?=
 =?utf-8?B?M2x0SlI1b2xmWndhdDdVV21GWFlQeDlRcWoyQllmaGRTZ3VQcW0vajEyM1Vx?=
 =?utf-8?B?SWFrRlErcXdMT3ZNYlBsT3FNSytUM3pKRXRLWkRnbC80OXRJWm1Zei9LRkY5?=
 =?utf-8?B?WXc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	Csywnk10FwI9jV3cZBOyb5ImAS/eIadGdeW2eLaKwjPLaMfxRpC2HXw+dz8GMBIYFu7U2uiAYSupXElwMKeRIb+gLFTF8g8bfV437O0tbH1vrFWvZ5Xw3QnI/zje3qZzmAqnFNiatMhNpJnk0gqQOq6HwJCAmyYu/sWh/hNdkEqNOMdGgEQg8wWnegkjgOv4b18UpWpbTqpGFbETn9jYL5iud0Lj9Phi2IeR9sjMR3N+e8JV2cIzBj8p05depYTu4jZ4PG/MQLwWww1cZSKHooRU0qsPTnM53Zzbt3vD5HJ3IA6j6qz/prL/HZXGJyeI++9txlP/uemekK/P/EwyAJAmt45bNYMRuo8CvE+1nrrw/ac6D69lTAeBrR2NDXlfKbYGKSZwOwc0b7Qv4vtY/ElLojnUkxleFoDCCRmP1KxiHPs7Fwhn4fPFMEOLzq71/fPARR309HXfxHCkZuY1ZW4Ydc0l/HLotA3Q66iWK+n8HFwJkkoUmg+kOfbnsQszUWA9FjLmmkyjg/gMR2/EH2qCVwq0f4JnTq4oYuEFluKoSNkrJ9/UP4OiNe2NdJOI1NKXy1s8EqhZunZ3I24Mqcvmkm79Y6/ltVM7uInNWuFMfTvNm/KMFLk1yAh9EbZPBX1yVLvH9UpK+gYO35bSJKO77nkil7x7YbpjVtywrLqzaoBuxAmQMBg8GHi6ec26KrxeLEUlU0EJL2/NAFw7+qbw+t05DBCi8wGbEfHgGb/bjUpv4r076e+FYM84qTOXVfPsaSo+z1YT+PJUI3FbMb13rYcYvS7L4YCnrkyiGEtlvbNdiLt9ViitpROhuy28sMwCKNhnYv3w9pWuQh+//g==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5a327ba5-300e-4f1e-6363-08db4b04e943
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 12:01:07.1529
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: g9GZs7zyLzzo8aRF1CoWEumd4iExMffPEqehsC3nuSoi5rhBe1ysjRMepMlFVLN4+SuqI7GyC1clkzkjnyuAWled4XQkpBa2rhv3bhtJqk8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5888

On 02/05/2023 12:33 pm, Olaf Hering wrote:
> Tue, 2 May 2023 09:31:56 +0200 Jan Beulich <jbeulich@suse.com>:
>
>> How does 2.37 vs 2.39 matter? CET-IBT support is present in gas as of 2.29.
> I have no idea. It turned out that the previous Leap image was based on 15.3, while the current one will be based on 15.4.
>
> If I run this manually, it appears the error is produced properly:
>
> gcc -Wall -fcf-protection=branch -mmanual-endbr -mindirect-branch=thunk-extern -c -x assembler -o /dev/null - ; echo $?
> gcc: error: unrecognized command line option ‘-fcf-protection=branch’; did you mean ‘-fno-protect-parens’?
> gcc: error: unrecognized command line option ‘-mmanual-endbr’
> 1
>
> And for some reason there is no failure with the refreshed image on gitlab:
>
> https://gitlab.com/xen-project/xen/-/jobs/4210269545/artifacts/external_file/build.log
>
> I will investigate why it failed to build for me.

CET-IBT is far more dependent on the compiler than it is on binutils.

The minimum version of GCC necessary is 9, but if you've backported the
requisite options then an older GCC will work too.
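For reference, the probe shown earlier in the thread can be wrapped into a small helper, roughly mirroring how a cc-option style toolchain check works. This is an illustrative sketch, not Xen's actual Kconfig test; the helper name is made up, only the flags come from the thread:

```shell
# Probe whether the compiler accepts the CET-IBT related flags by
# assembling empty input; exit status 0 means all flags are accepted.
cc_option() {
    echo '' | "${CC:-gcc}" -Wall "$@" -c -x assembler -o /dev/null - 2>/dev/null
}

if cc_option -fcf-protection=branch -mmanual-endbr \
             -mindirect-branch=thunk-extern; then
    echo "HAS_CC_CET_IBT=y"
else
    echo "HAS_CC_CET_IBT=n"
fi
```

With the GCC from the Leap 15.4 image described above, the first two flags are rejected and the helper reports "n".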

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 02 12:04:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 12:04:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528555.821875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoka-0006IW-GB; Tue, 02 May 2023 12:04:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528555.821875; Tue, 02 May 2023 12:04:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoka-0006IP-Cg; Tue, 02 May 2023 12:04:56 +0000
Received: by outflank-mailman (input) for mailman id 528555;
 Tue, 02 May 2023 12:04:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d5QU=AX=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1ptokZ-0006IJ-G2
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 12:04:55 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.24]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8b8047f9-e8e1-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 14:04:52 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz42C4pcnt
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 2 May 2023 14:04:51 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b8047f9-e8e1-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1683029091; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=EWO7skeoBe4yfkVPVp8E9tGGHEtGucIQH+nn7LMialRWFBTvh03Yjp7YddlGAcDH/P
    UVY3xpxjxfApHPIKEhs9lR58drijThiRQqw4uiwIPyEA/NPbk3a4hRGR9l9+75DmYEGB
    tBj7KA6tJ/TFViW27zmZ+tUyyycPaAjasJaSRnAPQSI5NZL0otasnFA8osNJnkhwqewV
    w6tjhiVXCFWnVSIyZIqy9W8V/gBioQORp80OAKUMiQ4OFnaEvipAhmbr0FQQ+uPYNXNt
    IgRY37a1nhcxm2P5WgK3+zO2z1kUTbQo7S6VYfd0XQhgjcM2jwgo4KovmqGtyr0mIATU
    Km/Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683029091;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=qM5Q7xIHqNoa7V8tuK4qUQi9+8lwqvYj9KvI656lkEU=;
    b=IWBkxbwRO8gW5D0NTN27tqBe7//+rxZBBcqJ5ta8tYVGHHjf4n+o6k3/OZr1FKt+yx
    9wTYCAkBpg0P0owNRv99Tr4enViuSg2gDBI513xzrusBys40tcOvxLIyeKI4ihTcw/mq
    eOskdgw7beZ5/Y+ZBBni6CaApSvg7C6hZjmMY0y0/HAKYw9a6AV+Y4bRdRtzcn/EboA+
    87Sn80X7j3ky5ayF3u3x2Omj43ddxrgWt898tfanf9YOYL1nhtQUw4lhCGwoEwEzuaxM
    /Ok34dCQ22IdEdzLHVARzuZHJcIlSR76knCEnxSlwePBFPyyP0TMTaI5uOs5wrTAztsm
    1WFw==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683029091;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=qM5Q7xIHqNoa7V8tuK4qUQi9+8lwqvYj9KvI656lkEU=;
    b=OEBmRGRRxrpMGETMpt/ESEsFgkbUQ8aeGsU/LWiHnb+YzteC2koBVEhuUNiYY8BqML
    Xs5sB10vk53OoxD+0GEtPZ8NFgOto5D+zytdzti3WtAKlySpki+/STvXJOmzLXeeryGk
    brY4ybGpZEmglEFFVMXbSEYxuVoMOm9+OR1dU1tZPgic+u5I5ik150tiPpVYoJtycMU1
    lnqfz7LodshImI/ZNvr3rVKYwWKF7LmLoXyP+wThyEXZ4GOCHwWbLxS63STyt9HhcuLI
    Xnt8v4F9q2vwUCgqQMTjtIMb9CogEst2PnSlJHOWXxEInq+avu68LE7AERHftk1CJtK1
    sg/w==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683029091;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=qM5Q7xIHqNoa7V8tuK4qUQi9+8lwqvYj9KvI656lkEU=;
    b=C76atAzpxdx41BT+4DfbwOyRKcXi0KcwZJn9OTlz/+ik+v3G00nujNUraDY3uIGbEc
    mW+W0i16b09mfspZ3yDA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR5BxOIbBnsc1fym1gFvNQ7EzMpH+yFJc4aADp/8Q=="
Date: Tue, 2 May 2023 14:04:44 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: HAS_CC_CET_IBT misdetected
Message-ID: <20230502140444.1dacdb33.olaf@aepfle.de>
In-Reply-To: <20230502133313.2192eb99.olaf@aepfle.de>
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
	<43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
	<20230502133313.2192eb99.olaf@aepfle.de>
X-Mailer: Claws Mail 20220819T065813.516423bc hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/HiNIMqJmHC3=du_nGbCKnvP";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/HiNIMqJmHC3=du_nGbCKnvP
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 2 May 2023 13:33:13 +0200 Olaf Hering <olaf@aepfle.de>:

> I will investigate why it failed to build for me.

This happens if one builds first with the Tumbleweed container, and later with the Leap container, without a 'git clean -dffx' in between.

Is there a way to invalidate everything if the toolchain changes?
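One common approach to the question above, sketched here with a made-up stamp file name (this is not an existing Xen mechanism), is to record the toolchain identity and flag a full clean whenever it changes, similar in spirit to the Linux kernel's compiler-version tracking:

```shell
# Record the toolchain identity in a stamp file; if it differs from the
# recorded one, a full clean (e.g. 'git clean -dffx') would be needed.
STAMP=.toolchain-stamp                  # hypothetical file name
current=$("${CC:-gcc}" --version 2>/dev/null | head -n1)
if [ -f "$STAMP" ] && [ "$(cat "$STAMP")" != "$current" ]; then
    echo "toolchain changed, full rebuild required"
fi
printf '%s\n' "$current" > "$STAMP"
```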


Olaf

--Sig_/HiNIMqJmHC3=du_nGbCKnvP
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRQ/FwACgkQ86SN7mm1
DoBBbA/+JpzGbwooQx4pzx5wH+ctKszUF1DHoZUoBlonOR8ZNXfyDkvFV7CfMLsF
ouQe/MzZPIMBXmRQvgdgPoamxOKbSwze+2qNNRsgkqeUMrFafUNT7/UkHqk4cHgG
Yx3zj+lKoJDqU1fybmNUFrIdA+0nYuRS/L/vPZkt3qOQXtu1MEn9vsQKmgSOs7Wj
7GfUJhSKXCN5brtxwWy9Gd9/bK6NemaCxLv9txIxKaySWDMJ16yNiKIhX/dK6tTx
GMq1kuAajxe8xgbSxUYiHVmJ+fbuFmuiPpejQbYurc30AEzXsVaRg04ub8bt37NF
nQxtUuK9E8awKRWtcjZJMs6jJCHtRwWyltROs7JW5R9wtvzu299Kd39NHN+Plu4m
RK45OQC8ypbEWO8boN4I4NJ5A1B2o99y6Wwnyufb3fzgZgSu4mnPlX1LcTGQoebD
miDxtIm2fAybqBTWQEX350GWSaBy6kCyKFYqNNxmKv29GyraY44jo213CWfQRFqD
G757CKQP4BPidjQJpIcarUAmHS3XXSOGbI93oPmLOfPBFt2yJSnnw4WR7uNDyGgR
z1m3cEZokbVX7YsTmzwGOE4k5N5S50UEwFKKl7ly2vg0j76yHtyuXfXTHkual3ko
xipHjguTHl/E08Z4M5cpmyn+FBQa/Tl/n+stFLsmWjqn3/Dails=
=MGIj
-----END PGP SIGNATURE-----

--Sig_/HiNIMqJmHC3=du_nGbCKnvP--


From xen-devel-bounces@lists.xenproject.org Tue May 02 12:09:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 12:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528559.821885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptopG-0006yn-0a; Tue, 02 May 2023 12:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528559.821885; Tue, 02 May 2023 12:09:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptopF-0006yg-UC; Tue, 02 May 2023 12:09:45 +0000
Received: by outflank-mailman (input) for mailman id 528559;
 Tue, 02 May 2023 12:09:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zdoi=AX=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1ptopE-0006ya-SS
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 12:09:44 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 38643c05-e8e2-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 14:09:42 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 23DE021A10;
 Tue,  2 May 2023 12:09:42 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 9635D139C3;
 Tue,  2 May 2023 12:09:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id bPRFI4X9UGRmLwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 02 May 2023 12:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38643c05-e8e2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683029382; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=5isj2N5NCBHlgNEX7kDR8L5F10upbDgKEvNgnzD7pjU=;
	b=lCdfsFMOgUvQOOjznQ29dhIfCPfBATrhqs0ygxXq+WuLpchDR8rfeWRUqFg4gsVq1hAZso
	M6yRbkcUUeSN+OZpt0kdpNk+01NuOD3H4axIM8mN+INEgVDTKNBob6uRoIMwSVTHqsAyiH
	tXY30Kqoaqh0gDh5kUwQuvMJsIBgFwg=
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org,
	x86@kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-doc@vger.kernel.org
Cc: mikelley@microsoft.com,
	Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Date: Tue,  2 May 2023 14:09:15 +0200
Message-Id: <20230502120931.20719-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series tries to fix the rather special case of PAT being available
without MTRRs (either because CONFIG_MTRR is not set, or because the
feature has been disabled, e.g. by a hypervisor).

The main use cases are Xen PV guests and SEV-SNP guests running under
Hyper-V.

Instead of trying to work around all the issues by adding if statements
here and there, this series uses the complete available infrastructure
by setting up a read-only MTRR state when needed.

In the Xen PV case the current MTRR MSR values can be read from the
hypervisor, while in the SEV-SNP case all that is needed is to set the
default caching mode to "WB".

I have added more cleanups, which were discussed while looking into
the most recent failures.

Note that I couldn't test the Hyper-V related change (patch 3).

Running on bare metal and with Xen didn't show any problems with the
series applied.

It should be noted that patches 9+10 replace today's way of looking up
the MTRR cache type for a memory region: instead of inspecting the MTRR
register values on each lookup, a memory map with the cache types is
built once. This should make the lookup much faster and much easier to
understand.

Changes in V2:
- replaced former patches 1+2 with new patches 1-4, avoiding especially
  the rather hacky approach of V1, while making all the MTRR type
  conflict tests available for the Xen PV case
- updated patch 6 (was patch 4 in V1)

Changes in V3:
- dropped patch 5 of V2, as already applied
- split patch 1 of V2 into 2 patches
- new patches 6-10
- addressed comments

Changes in V4:
- addressed comments

Changes in V5:
- addressed comments
- some other small fixes
- new patches 3, 8 and 15

Changes in V6:
- patch 1 replaces patches 1+2 of V5
- new patches 8+12
- addressed comments

Juergen Gross (16):
  x86/mtrr: remove physical address size calculation
  x86/mtrr: replace some constants with defines
  x86/mtrr: support setting MTRR state for software defined MTRRs
  x86/hyperv: set MTRR state when running as SEV-SNP Hyper-V guest
  x86/xen: set MTRR state when running as Xen PV initial domain
  x86/mtrr: replace vendor tests in MTRR code
  x86/mtrr: have only one set_mtrr() variant
  x86/mtrr: move 32-bit code from mtrr.c to legacy.c
  x86/mtrr: allocate mtrr_value array dynamically
  x86/mtrr: add get_effective_type() service function
  x86/mtrr: construct a memory map with cache modes
  x86/mtrr: add mtrr=debug command line option
  x86/mtrr: use new cache_map in mtrr_type_lookup()
  x86/mtrr: don't let mtrr_type_lookup() return MTRR_TYPE_INVALID
  x86/mm: only check uniform after calling mtrr_type_lookup()
  x86/mtrr: remove unused code

 .../admin-guide/kernel-parameters.txt         |   4 +
 arch/x86/hyperv/ivm.c                         |   4 +
 arch/x86/include/asm/mtrr.h                   |  43 +-
 arch/x86/include/uapi/asm/mtrr.h              |   6 +-
 arch/x86/kernel/cpu/mtrr/Makefile             |   2 +-
 arch/x86/kernel/cpu/mtrr/amd.c                |   2 +-
 arch/x86/kernel/cpu/mtrr/centaur.c            |  11 +-
 arch/x86/kernel/cpu/mtrr/cleanup.c            |  22 +-
 arch/x86/kernel/cpu/mtrr/cyrix.c              |   2 +-
 arch/x86/kernel/cpu/mtrr/generic.c            | 677 ++++++++++++------
 arch/x86/kernel/cpu/mtrr/legacy.c             |  90 +++
 arch/x86/kernel/cpu/mtrr/mtrr.c               | 195 ++---
 arch/x86/kernel/cpu/mtrr/mtrr.h               |  18 +-
 arch/x86/kernel/setup.c                       |   2 +
 arch/x86/mm/pgtable.c                         |  24 +-
 arch/x86/xen/enlighten_pv.c                   |  52 ++
 16 files changed, 721 insertions(+), 433 deletions(-)
 create mode 100644 arch/x86/kernel/cpu/mtrr/legacy.c

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 02 12:10:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 12:10:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528561.821894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoph-0008Fd-9o; Tue, 02 May 2023 12:10:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528561.821894; Tue, 02 May 2023 12:10:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptoph-0008FU-5s; Tue, 02 May 2023 12:10:13 +0000
Received: by outflank-mailman (input) for mailman id 528561;
 Tue, 02 May 2023 12:10:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zdoi=AX=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1ptopg-0008EV-Bf
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 12:10:12 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 49720d1c-e8e2-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 14:10:11 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id DFC711F8D6;
 Tue,  2 May 2023 12:10:10 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 92321139C3;
 Tue,  2 May 2023 12:10:10 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id GmhQIqL9UGTMLwAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 02 May 2023 12:10:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49720d1c-e8e2-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683029410; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RMQn8Yiv0jCwQnecK3T9cC+8gstSs8Pnq7WDZyHBVNw=;
	b=UthapKSln5sVxc4P6W4BSa2qf1rvyEffS6HG2itfjchsaqorftKG30uqY4qHygD7foz2Sd
	qDX60C23glOf63Dt33mUz/OjzAGUAEkfM3DRKtIV4jLuqobQl4wBPaZynsRN7Is2PLb0tE
	yKXtXL2TX9qODXdqlLZNvDgVEnbQqZM=
From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org,
	x86@kernel.org
Cc: mikelley@microsoft.com,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v6 05/16] x86/xen: set MTRR state when running as Xen PV initial domain
Date: Tue,  2 May 2023 14:09:20 +0200
Message-Id: <20230502120931.20719-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230502120931.20719-1-jgross@suse.com>
References: <20230502120931.20719-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When running as Xen PV initial domain (aka dom0), MTRRs are disabled
by the hypervisor, but the system should nevertheless use correct
cache memory types. This has always kind of worked, as disabled MTRRs
resulted in disabled PAT, too, so that the kernel avoided code paths
resulting in inconsistencies. This bypassed all of the sanity checks
the kernel is doing with enabled MTRRs in order to avoid memory
mappings with conflicting memory types.

This has been changed recently, leading to PAT being accepted as
enabled while MTRRs stayed disabled. The result is that
mtrr_type_lookup() no longer accepts all memory type requests,
but started to return WB even if UC- was requested. This led to
driver failures during the initialization of some devices.

In reality MTRRs are still in effect, but they are under complete
control of the Xen hypervisor. It is possible, however, to retrieve
the MTRR settings from the hypervisor.

In order to fix those problems, overwrite the MTRR state via
mtrr_overwrite_state() with the MTRR data from the hypervisor, if the
system is running as a Xen dom0.

Fixes: 72cbc8f04fe2 ("x86/PAT: Have pat_enabled() properly reflect state when running on Xen")
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
V2:
- new patch
V3:
- move the call of mtrr_overwrite_state() to xen_pv_init_platform()
V4:
- only call mtrr_overwrite_state() if any MTRR got from Xen
  (Boris Ostrovsky)
---
 arch/x86/xen/enlighten_pv.c | 52 +++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 093b78c8bbec..8732b85d5650 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -68,6 +68,7 @@
 #include <asm/reboot.h>
 #include <asm/hypervisor.h>
 #include <asm/mach_traps.h>
+#include <asm/mtrr.h>
 #include <asm/mwait.h>
 #include <asm/pci_x86.h>
 #include <asm/cpu.h>
@@ -119,6 +120,54 @@ static int __init parse_xen_msr_safe(char *str)
 }
 early_param("xen_msr_safe", parse_xen_msr_safe);
 
+/* Get MTRR settings from Xen and put them into mtrr_state. */
+static void __init xen_set_mtrr_data(void)
+{
+#ifdef CONFIG_MTRR
+	struct xen_platform_op op = {
+		.cmd = XENPF_read_memtype,
+		.interface_version = XENPF_INTERFACE_VERSION,
+	};
+	unsigned int reg;
+	unsigned long mask;
+	uint32_t eax, width;
+	static struct mtrr_var_range var[MTRR_MAX_VAR_RANGES] __initdata;
+
+	/* Get physical address width (only 64-bit cpus supported). */
+	width = 36;
+	eax = cpuid_eax(0x80000000);
+	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
+		eax = cpuid_eax(0x80000008);
+		width = eax & 0xff;
+	}
+
+	for (reg = 0; reg < MTRR_MAX_VAR_RANGES; reg++) {
+		op.u.read_memtype.reg = reg;
+		if (HYPERVISOR_platform_op(&op))
+			break;
+
+		/*
+		 * Only called in dom0, which has all RAM PFNs mapped at
+		 * RAM MFNs, and all PCI space etc. is identity mapped.
+		 * This means we can treat MFN == PFN regarding MTRR settings.
+		 */
+		var[reg].base_lo = op.u.read_memtype.type;
+		var[reg].base_lo |= op.u.read_memtype.mfn << PAGE_SHIFT;
+		var[reg].base_hi = op.u.read_memtype.mfn >> (32 - PAGE_SHIFT);
+		mask = ~((op.u.read_memtype.nr_mfns << PAGE_SHIFT) - 1);
+		mask &= (1UL << width) - 1;
+		if (mask)
+			mask |= MTRR_PHYSMASK_V;
+		var[reg].mask_lo = mask;
+		var[reg].mask_hi = mask >> 32;
+	}
+
+	/* Only overwrite MTRR state if any MTRR could be got from Xen. */
+	if (reg)
+		mtrr_overwrite_state(var, reg, MTRR_TYPE_UNCACHABLE);
+#endif
+}
+
 static void __init xen_pv_init_platform(void)
 {
 	/* PV guests can't operate virtio devices without grants. */
@@ -135,6 +184,9 @@ static void __init xen_pv_init_platform(void)
 
 	/* pvclock is in shared info area */
 	xen_init_time_ops();
+
+	if (xen_initial_domain())
+		xen_set_mtrr_data();
 }
 
 static void __init xen_pv_guest_late_init(void)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 02 12:50:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 12:50:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528574.821905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpSX-0004Zq-Aj; Tue, 02 May 2023 12:50:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528574.821905; Tue, 02 May 2023 12:50:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpSX-0004Zj-6t; Tue, 02 May 2023 12:50:21 +0000
Received: by outflank-mailman (input) for mailman id 528574;
 Tue, 02 May 2023 12:50:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vLcn=AX=tibco.com=clindig@srs-se1.protection.inumbo.net>)
 id 1ptpSW-0004Zd-9l
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 12:50:20 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e4cd1940-e8e7-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 14:50:19 +0200 (CEST)
Received: by mail-wm1-x32b.google.com with SMTP id
 5b1f17b1804b1-3f3331f928cso21879345e9.2
 for <xen-devel@lists.xenproject.org>; Tue, 02 May 2023 05:50:19 -0700 (PDT)
Received: from smtpclient.apple (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 k7-20020adff5c7000000b00306299be5a2sm7086099wrp.72.2023.05.02.05.50.17
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 02 May 2023 05:50:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4cd1940-e8e7-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683031819; x=1685623819;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=s4U/e2ueVkIftF9kI3geNYcSUvjsAJfgMH+focDaLwM=;
        b=UPOW42jY0e1qYe7ptHYUn3nvE7C5mg6br2zNrp98TtVWAIjZhy4iQMly9PBtff3hdX
         sCR4fEysID5v63m1d4qKaGWYFbR6Ni+0AcMXJYgRVDm6H+gzGZSwZ5naDV59fYyhI7Id
         6xplGXvepfZjOlmKvq1xdCSe1EeEYWefmpfUI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683031819; x=1685623819;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=s4U/e2ueVkIftF9kI3geNYcSUvjsAJfgMH+focDaLwM=;
        b=HO2epBH+COFH+PFJg/kMMdpOLP/Ri/AXN2gnxm+0COpvSF3DOoelZxCmjHa6VVor1c
         Z2Vx9K9nQqxx8BsK7AJtXQoGBjeRnE70yQMLVBm5feoRRtUVRH72UWcPEkf2NpI3TF/H
         InrZrF4ncJg6WKMYFnBYFF8CrtsLGIv/ijkyyqHVpcBWHa+2w8YchLkxSh9GHofepIKG
         Fe/tpS9NU4ZH81/AxZaux/WYLZmf8betuzoH4qHwgCk9Nn0M9bxeAmdYhsYqmZWKdNYj
         dvfF+bYan9frYYVV1aPZb5yO1m6ACs4L6UOUbxp0ZU+tPHrW0RADbA/8yDNzR0PZ3OI0
         oKDA==
X-Gm-Message-State: AC+VfDy6Ux+sXPqooeX37IpcLmMF0ttq/GFnF8FxBsnWIkl4iOhg8vmH
	9r3BSXT+zYD3rHiTHpxWliPW3A==
X-Google-Smtp-Source: ACHHUZ5yU5MA+8UsxHIog3VNPIQHEYE6JqjPOldI32Apd5dXnAARkntx6nmRfVrN6gYiWxldQdM3MQ==
X-Received: by 2002:a1c:7c19:0:b0:3f1:8c5f:dfc5 with SMTP id x25-20020a1c7c19000000b003f18c5fdfc5mr12794192wmc.39.1683031818759;
        Tue, 02 May 2023 05:50:18 -0700 (PDT)
Content-Type: text/plain;
	charset=utf-8
Mime-Version: 1.0 (Mac OS X Mail 16.0 \(3696.120.41.1.1\))
Subject: Re: [PATCH v3 1/3] tools: Modify single-domid callers of
 xc_domain_getinfolist()
From: Christian Lindig <christian.lindig@cloud.com>
In-Reply-To: <20230502111338.16757-2-alejandro.vallejo@cloud.com>
Date: Tue, 2 May 2023 13:50:17 +0100
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>,
 Christian Lindig <christian.lindig@citrix.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <A40FFA8F-421C-48E1-B163-9B411D0F59B3@cloud.com>
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
 <20230502111338.16757-2-alejandro.vallejo@cloud.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
X-Mailer: Apple Mail (2.3696.120.41.1.1)



> On 2 May 2023, at 12:13, Alejandro Vallejo <alejandro.vallejo@cloud.com> wrote:
> 
> xc_domain_getinfolist() internally relies on a sysctl that performs
> a linear search for the domids. Many callers of xc_domain_getinfolist()
> who require information about a precise domid are much better off calling
> xc_domain_getinfo_single() instead, which will use the getdomaininfo domctl
> and ensure the returned domid matches the requested one. The domctl
> will find the domid faster too, because it uses hashed lists.
> 
>=20
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>

Acked-by: Christian Lindig <christian.lindig@cloud.com>

I mostly care about the OCaml bindings - looks good to me.

— C


From xen-devel-bounces@lists.xenproject.org Tue May 02 12:54:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 12:54:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528578.821914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpWg-00059x-Qk; Tue, 02 May 2023 12:54:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528578.821914; Tue, 02 May 2023 12:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpWg-00059q-OF; Tue, 02 May 2023 12:54:38 +0000
Received: by outflank-mailman (input) for mailman id 528578;
 Tue, 02 May 2023 12:54:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l/wp=AX=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1ptpWf-00059k-OE
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 12:54:37 +0000
Received: from sender3-of-o58.zoho.com (sender3-of-o58.zoho.com
 [136.143.184.58]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c5b8517-e8e8-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 14:54:35 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1683032061497608.2626436346678;
 Tue, 2 May 2023 05:54:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c5b8517-e8e8-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1683032063; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=l2XLGY8zcqX3D8x898GT/KRbeRTOt8BCSGyucgps7mCnsDadj6hWr0CfO/hz5mK2Q/sckMfz74mskfOoD2g9H8RGYfUzV8PbOcWO7HekAUawfjwnYFUElJ6I2AtD3/bVsDLYMaMNjLy1HcsYuFEHkfWilNF7/23bhOPVwjXh47c=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1683032063; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=xgJzjqiynfOW2SuuA1mzX5zeflmOYIFlHVIvrgPOF+E=; 
	b=YZXVPBMgHa5R5rXlJvRTvpU6EtuCJulZEHyexO3BSj0lMqMib4kWO/FuA+bJTzv5SJVLGfM9nRUYMVn3YmQT4mlShQqu3kqU9Bx5l8KRF28KFs3jzueaJTZn/NH3AA4Z2Pn3FewFZN3yiCmQtiOIAfSW/nTWTcgyDIXl/Qlv0xM=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1683032063;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:To:To:Cc:Cc:References:From:From:Subject:Subject:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=xgJzjqiynfOW2SuuA1mzX5zeflmOYIFlHVIvrgPOF+E=;
	b=s+2EMRcww57hKhnh0PlnkU/6CiOICPQWsr7Y/OzuojHyQG7qGjIhmH8WMS/j2w4L
	vRml+47d2iEIfSscqyxqdDRrRQ7cJEvwC+6wmyHRE9oA1FWLPEN3ORjqqwfPkwQihHD
	IBbDDkdr2aD3WQoT89j5q3HZE8H22XMzGU8f31uE=
Message-ID: <c333f02e-0655-50ec-aee8-7c1449ca267f@apertussolutions.com>
Date: Tue, 2 May 2023 08:54:19 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
 <600c8c62-5982-ec7e-7996-5b7fbfb40067@apertussolutions.com>
 <22b2e03c-ac5e-915a-78a2-0a632b09a53a@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
In-Reply-To: <22b2e03c-ac5e-915a-78a2-0a632b09a53a@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 5/2/23 06:59, Jan Beulich wrote:
> On 02.05.2023 12:43, Daniel P. Smith wrote:
>> On 5/2/23 03:17, Jan Beulich wrote:
>>> Unlike for XEN_DOMCTL_getdomaininfo, where the XSM check is intended to
>>> cause the operation to fail, in the loop here it ought to merely
>>> determine whether information for the domain at hand may be reported
>>> back. Therefore if on the last iteration the hook results in denial,
>>> this should not affect the sub-op's return value.
>>>
>>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> The hook being able to deny access to data for certain domains means
>>> that no caller can assume to have a system-wide picture when holding the
>>> results.
>>>
>>> Wouldn't it make sense to permit the function to merely "count" domains?
>>> While racy in general (including in its present, "normal" mode of
>>> operation), within a tool stack this could be used as long as creation
>>> of new domains is suppressed between obtaining the count and then using
>>> it.
>>>
>>> In XEN_DOMCTL_getpageframeinfo2 said commit had introduced a 2nd such
>>> issue, but luckily that sub-op and xsm_getpageframeinfo() are long gone.
>>>
>>
>> I understand there is a larger issue at play here, but neutering the
>> security control/XSM check is not the answer. This literally changes the
>> way a FLASK policy that people currently have would be enforced, and is
>> contrary to how they understand the access control that it provides.
>> Even though the code path does not fall under the XSM maintainer, I would
>> NACK this patch. IMHO, it is better to find a solution that does not
>> abuse, misuse, or invalidate the purpose of the XSM calls.
>>
>> On a side note, I am a little concerned that only one person thought to
>> include the XSM maintainer, or any of the XSM reviewers, on a patch
>> and the discussion around a patch that clearly relates to XSM, for us to
>> gauge the consequences of the patch. I am not assuming intentions here,
>> only wanting to raise the concern.
> 
> Well, yes, for the discussion items I could have remembered to include
> you. The code change itself, otoh, doesn't require your ack, even if it
> is the return value of an XSM function which was used wrongly here.

I beg to disagree, not that you could have, but that you should have. 
This is now the second XSM issue, at least that I am aware of, that I 
and the XSM reviewers have been left out of. How and where the XSM 
hooks are deployed is critical to how XSM functions, regardless of 
how mundane the change may be. By your logic, as the XSM maintainer I 
could make changes to the XSM code that change how the system behaves for 
x86 and claim you have no Ack/Nack authority since it is XSM code. These 
subsystems are symbiotic, and we owe each other the due respect to 
include the other when these systems touch or influence each other.

>> So for what it is worth, NACK.
> 
> I'm puzzled: I hope you don't mean NACK to the patch (or effectively
> Jürgen's identical one, which I had noticed only after sending mine).
> Yet beyond that I don't see anything here which could be NACKed. I've
> merely raised a couple of points for discussion.

I will comment on Jürgen's patch.


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:02:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:02:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528583.821925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpeD-0006jK-In; Tue, 02 May 2023 13:02:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528583.821925; Tue, 02 May 2023 13:02:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpeD-0006jD-Fo; Tue, 02 May 2023 13:02:25 +0000
Received: by outflank-mailman (input) for mailman id 528583;
 Tue, 02 May 2023 13:02:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MJpR=AX=citrix.com=prvs=4790f2855=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ptpeB-0006j7-Dc
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:02:23 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 924f5aaf-e8e9-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 15:02:21 +0200 (CEST)
Received: from mail-bn7nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 09:02:14 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB5910.namprd03.prod.outlook.com (2603:10b6:510:33::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 13:02:08 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 13:02:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 924f5aaf-e8e9-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683032541;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=MJ3EQEfCQv0MKz4apU7nZmyKA8cw+MX0Adgk9Yd37hU=;
  b=cauv5k62W3DD7EJwjqPC6f0J1u3UjJI/9uu4MRkDQk8EKqIQYdhNH3Kz
   sYw+7HDjdI/aikcJQwIMJk9t92u7bG96zM2qHBgWn30fq+HSPOuYFl11T
   O2GdRlXFs6B/4GCetqlxBeYi79aUycAqfCObYQ9ewLP3soRew4O12NHnq
   c=;
X-IronPort-RemoteIP: 104.47.70.106
X-IronPort-MID: 110012958
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="110012958"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EdvJ2/xy2FrnRNNV1e0isfo+yqma3tE6UiViMjf+Yl34C9XW+QY75mvPzr0dEgK7WmTHgdrGBFZ17llgiJZ2mUxsHvHvWsVkoN1lLaNCPiHHKejfiNQ8ctYtWnndIjqBFPQZsVLtnXaPDF7Gu7Q4cAHHpTzi7Gl9pDEu1m9HgUOFQUXirIbuGaYQyO6ntmdWervTnTYpQL/aUQJV/S+ljUpm87VftsjjuzmdK75GpPbthzbd7QkRPoq39ybMGkXOVxcUj0VflDQhyqlSy1PzaX5r9or7wLXa+uC5j4JHQD2uz4S4JKg4HJmiX3I0M0zM9Kn71qBb4cU6IlQ77dVhJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HoYYC1NfbPSJoii0IlkINfgKg6sGhzrH7DPoIeM7umk=;
 b=ZOH+s7ZHBUiqfAJz98fb8jAfLxHfFNXWRm95iOpJgtEblOVVYwH+tQ656SxZXxM0XekQPvCa/Q9eHYe1HT/tuUuY+d71kPXVSk+4jCXhKo/UGyHBq59SSMMRYWZfm4/oh7yTlVpfGBJWN5IYYs9xuCvUGYgSW3Nws+iJJti1xj9myHi8pyc2OHvXD5XmYmmtQnC0yVqwsGoJFZaRECykNvgvHHrcCbYUOgle2uda0BAHhlFqjuK4vc2BD0R4REfRY/fji7OS7vGyyFxkARLGjUDVvD+GGZOJyIDUILLVta+6w3m1XRRd0iOnCJOtPLK+N+x0XoLU+1DmcxTpD+macQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HoYYC1NfbPSJoii0IlkINfgKg6sGhzrH7DPoIeM7umk=;
 b=nMSWzQJEsWtgWcnrWNDX/6+RagB4AN4xz35D3xFDUrbx2WDQrQJFTrp+zi3wTuQ6ZGpy9UyRbXVDmnoJJdeM09VJcDewEn6yPVSRBBRsjxB57SgThoMPf9+Tan1xqFZ1c9x8S+TALFbYWmenvH8y/soLwD1NJsQf5CNa+Viw6Bc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <ad134ec7-cebf-859c-dcc6-55bb0e90f1ea@citrix.com>
Date: Tue, 2 May 2023 14:02:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 2/3] tools: Use new xc function for some
 xc_domain_getinfo() calls
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Juergen Gross <jgross@suse.com>
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
 <20230502111338.16757-3-alejandro.vallejo@cloud.com>
 <6450f182.050a0220.6009b.3ca5@mx.google.com>
In-Reply-To: <6450f182.050a0220.6009b.3ca5@mx.google.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0667.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:316::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH0PR03MB5910:EE_
X-MS-Office365-Filtering-Correlation-Id: 3bdb1618-61f1-4af7-efc7-08db4b0d6f33
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3bdb1618-61f1-4af7-efc7-08db4b0d6f33
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 13:02:07.8685
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ihLPt+lY4UHNfK/L0uConxJSUceYgoPSg813WrdkE0pD8JZ32+cxTnQPleNgyGkUhjaHhcLmxdTWwFZ7lkYw3MdxjQsyB/LKR4ol3iSOZD0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5910

On 02/05/2023 12:18 pm, Alejandro Vallejo wrote:
> On Tue, May 02, 2023 at 12:13:37PM +0100, Alejandro Vallejo wrote:
>> Move calls that require information about a single, precisely identified
>> domain to the new xc_domain_getinfo_single().
>>
>> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> ---
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>> Cc: Wei Liu <wl@xen.org>
>> Cc: Anthony PERARD <anthony.perard@citrix.com>
>> Cc: Tim Deegan <tim@xen.org>
>> Cc: George Dunlap <george.dunlap@citrix.com>
>> Cc: Juergen Gross <jgross@suse.com>
>>
>> v3:
>>  * Stylistic changes that fell under the cracks on v2
>>  * Reinserted -errno convention from v1 that had been
>>    removed in v2
> Mistake here. It's _NOT_ supposed to have that "Reviewed-by"

Nevertheless, v3 now looks good.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:03:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:03:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528585.821934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpep-0007Ac-S4; Tue, 02 May 2023 13:03:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528585.821934; Tue, 02 May 2023 13:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpep-0007AV-P0; Tue, 02 May 2023 13:03:03 +0000
Received: by outflank-mailman (input) for mailman id 528585;
 Tue, 02 May 2023 13:03:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptpeo-00078T-UW
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:03:02 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a96bedf1-e8e9-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 15:03:00 +0200 (CEST)
Received: from mail-bn8nam04lp2048.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 09:02:57 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB5669.namprd03.prod.outlook.com (2603:10b6:510:33::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 13:02:55 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 13:02:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a96bedf1-e8e9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683032580;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=93UXsglR4TVhZ4ze/hMN0CfXqU1kmOtQG/yfKJ3p2q4=;
  b=OuzXlj2A7M/hJ7GG6Irp1DtyKOJmNOiLOqOof94LYak4l/2FwyYjrVDf
   DM9ZnrgdtP08i7OU0KksZ/1r10RzyLQjy+tu4yuNRVsuXVgFUfgKeEpU/
   57WXj/T50j9+WyD1i26b/gPT5ZnMZfm5CvuCPJtWfHWU1ZZvXBpcRHB5b
   Y=;
X-IronPort-RemoteIP: 104.47.74.48
X-IronPort-MID: 107590529
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="107590529"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EdJA6QaxrgoHJnqycfxY331CoBQSLDPAmChLgX05HhhBiKZ05E0U6CUwAcA9fHO9kMKUtBkPrNE7J9InYr5ojKiI8xp77OL1pDmaCe3LbSqDRPQ0SE/EnojlH23Nplhpr5EOaXW2A/0Jdfzz1ckfqBy8PvIQ2iNgAXOE8RRFvxdGuGoh6ulELC4MZIkYgF4T9w/SaAo6OZC39li4eKBkjseEOzvGPfUCWfQq4nP2UP/UZVB58DQcMpVVReWWU/1akvzgJwmIVrgaOM+tU/478tnRWyUnC01uteEtqkRhOGWfQP0PD7VUtb0eERTdb72dykf0Imtx2sHQT6vkrbZh/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WHooOrtZ455hB4epaiuSklactcjPB2cz/QE9dt8MMoM=;
 b=bqx9oLbUw770c78j8DOGlmy16Dd/pAICUV9YKU06py1+loQLua4rrmTtfqi8GbRxUxwpuJCZ9kc38emPlzsBXyCMx9hpQVO4rQfRUQEEpWaYIr/lztSiGjB075ZYoC7zFMTzjnU/zJ8CoPRagzJHxqPK1cCTpJyPgaPONtZ/L/NtAZo90zChzjTKq33NtuiI0FquL958wRHbg7K90GMefCsB2+U8TIzMSk46L5HhrzP3+P8pI5/n+l67CB9vMWyI3+1eBkvgB64l2ly254DSatg7o0VibYvjOOCtJ97Pq1fnX5dVYGK5bXccSlah/uKb1KgGpfBOob0tzwpxTdsdWQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WHooOrtZ455hB4epaiuSklactcjPB2cz/QE9dt8MMoM=;
 b=IQ34MWdPjYtBK7A6kMxrrpqKnYYkn3sdC55zxdyAr5/j39wcWQ2hEL2vzz/trjOOB9X0WWDkuKLs4Yd0Pa2oSQomEx02P9TsMJMQluvSeqrQHPidCvY3KMM5sOCummqZTH16oLh3kDc2TBPbZxnHBKq8uRDkv9BEcta2UH0KWFY=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 2 May 2023 15:02:48 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Message-ID: <ZFEJ+LqEM2rwOxPG@Air-de-Roger>
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
 <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
 <068ad06b-d766-4fc7-6bbc-289911441ee7@suse.com>
 <ZFDrT87RixpOmMfq@Air-de-Roger>
 <6aa9f2a5-bb57-4c56-84b8-5bc63b47cfa4@suse.com>
 <269b0894-5fe6-fd71-9f6d-24e3b08973cf@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <269b0894-5fe6-fd71-9f6d-24e3b08973cf@suse.com>
X-ClientProxiedBy: LO6P123CA0003.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:338::8) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH0PR03MB5669:EE_
X-MS-Office365-Filtering-Correlation-Id: 22a69cfa-8c4f-47c0-5056-08db4b0d8b6f
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 22a69cfa-8c4f-47c0-5056-08db4b0d8b6f
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 13:02:55.1973
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ilgZf40NPjG5ky+RyuXRS3IwaRRKPBI+toaCpDGos2q5vjJAdSMen7xLsbfOYWNOPaQFQMTiLI+i73yQ4AER3g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5669

On Tue, May 02, 2023 at 01:11:12PM +0200, Jan Beulich wrote:
> On 02.05.2023 13:05, Jan Beulich wrote:
> > On 02.05.2023 12:51, Roger Pau Monné wrote:
> >> On Tue, May 02, 2023 at 12:28:55PM +0200, Jan Beulich wrote:
> >>> On 02.05.2023 11:54, Andrew Cooper wrote:
> >>>> On 02/05/2023 10:22 am, Roger Pau Monne wrote:
> >>>>> @@ -670,6 +674,11 @@ trampoline_setup:
> >>>>>          cmp     %edi, %eax
> >>>>>          jb      1b
> >>>>>  
> >>>>> +        /* Check that the image base is aligned. */
> >>>>> +        lea     sym_esi(_start), %eax
> >>>>> +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
> >>>>> +        jnz     not_aligned
> >>>>
> >>>> You just want to check the value in %esi, which is the base of the Xen
> >>>> image.  Something like:
> >>>>
> >>>> mov %esi, %eax
> >>>> and ...
> >>>> jnz
> >>>
> >>> Or yet more simply "test $..., %esi" and then "jnz ..."?
> >>
> >> As replied to Andrew, I would rather keep this inline with the address
> >> used to build the PDE, which is sym_esi(_start).
> > 
> > Well, I won't insist, and you've got Andrew's R-b already.
> 
> Actually, one more remark here: While using sym_esi() is more in line
> with the actual consumer of the data, the check triggering because of
> the transformation yielding a misaligned value (in turn because of a
> bug elsewhere) would yield a misleading error message: We might well
> have been loaded at a 2MB-aligned boundary, and it would instead be
> the transformation's internal logic that was wrong. (I'm sorry, now
> you'll get to judge whether keeping the check in line with other code
> or with the diagnostic is going to be better. Or split things into a
> build-time and a runtime check, as previously suggested.)

What about adding a build-time check that XEN_VIRT_START is 2MB
aligned, and then just switching to test instead of and? Would that be
acceptable?

I know that using sym_esi(_start) instead of just %esi won't change
the result of the check if XEN_VIRT_START is aligned, but I would
prefer to keep using sym_esi(_start) for consistency with the value
used to build the page tables, as I think it's clearer.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:03:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:03:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528587.821944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpev-0007T2-88; Tue, 02 May 2023 13:03:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528587.821944; Tue, 02 May 2023 13:03:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpev-0007Sv-4w; Tue, 02 May 2023 13:03:09 +0000
Received: by outflank-mailman (input) for mailman id 528587;
 Tue, 02 May 2023 13:03:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l/wp=AX=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1ptpet-0007RC-V6
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:03:07 +0000
Received: from sender3-of-o59.zoho.com (sender3-of-o59.zoho.com
 [136.143.184.59]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ada93777-e8e9-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 15:03:06 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1683032582345892.9308198213303;
 Tue, 2 May 2023 06:03:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ada93777-e8e9-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683032583; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=dcnrjMXMEBqaDs8HEbZpO1CdF6/rgjf5Vvlwc8+apeqQe0DT6bMc+fLmzS123Eqx2HOoPNyZBgGDvjzrDl/k90nzuymYQYfhshju5oMKBv6NAY357rd9y6aWnPqbso9TuKQg7mqE+KJwrhfcwwibJSh+xzjk4RSu3+caj4wulbU=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1683032583; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=+PmM/urMgFqGb0mSi5wPADvNdwp6VGJZKc8t26d4ohA=; 
	b=Zs5kwMKNBZqS8yx+15CNimtqKJkvfKHP9XZv+pmDW/PJ865FwLAYEnbWVdtliui1mkIolBjbush/x8Hchwglss1oJ14AKbEsJiwXnNf2/T4GYFWHAkTyIbEh7ZAmAKvEkITJh0yd0jAGXfJrLokjwR+NKoreib0/KiRHKLAANGI=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1683032583;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:To:To:Cc:Cc:References:From:From:Subject:Subject:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=+PmM/urMgFqGb0mSi5wPADvNdwp6VGJZKc8t26d4ohA=;
	b=BcKrPcPdZbdx2GDypUx5ylnwwgJp3d5gGkboQZl1jK2Bz5jX2wH36yWz0zR28nzB
	A9QbRKDCGhrBoiu93reV+7VwhyqQNZrN/oIQK0+C0fn+dEmj+ngFPiDCeQdcrP8biVi
	xzPpivbblMjvlIoASFGgtSQ/If9WFt3HHWEas9EI=
Message-ID: <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
Date: Tue, 2 May 2023 09:03:00 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230430144646.13624-1-jgross@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
In-Reply-To: <20230430144646.13624-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 4/30/23 10:46, Juergen Gross wrote:
> In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
> can fail if the last domain scanned isn't allowed to be accessed by
> the calling domain (i.e. xsm_getdomaininfo(XSM_HOOK, d) is failing).
> 
> Fix that by just ignoring scanned domains for which xsm_getdomaininfo()
> returns an error, as is effectively done already when such a domain
> is not the last one scanned.
> 
> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   xen/common/sysctl.c | 3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index 02505ab044..0cbfe8bd44 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -89,8 +89,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>               if ( num_domains == op->u.getdomaininfolist.max_domains )
>                   break;
>   
> -            ret = xsm_getdomaininfo(XSM_HOOK, d);
> -            if ( ret )
> +            if ( xsm_getdomaininfo(XSM_HOOK, d) )
>                   continue;
>   
>               getdomaininfo(d, &info);


This change does not match the commit message. The message says it 
fixes an issue, but unless I am totally missing something, this change 
is nothing more than a formatting cleanup that drops the use of an 
intermediate variable. Please feel free to correct me if I am wrong 
here; otherwise I believe the commit message should be changed to 
reflect the code change.
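For reference, here is a stand-alone toy of the two hunks (hypothetical helper names, not the actual Xen code); structurally they differ only in whether the XSM verdict can survive the loop in ret, the variable the surrounding function ultimately returns:

```c
/* Hypothetical stand-in: deny access to domain 2, allow the rest. */
static int xsm_check(int domid) { return domid == 2 ? -1 : 0; }

/* Old form: the last iteration's XSM verdict is left in ret. */
static int scan_old(int ndoms)
{
    int ret = 0;

    for ( int d = 0; d < ndoms; d++ )
    {
        ret = xsm_check(d);
        if ( ret )
            continue;
        /* ... getdomaininfo(d, &info); copy out ... */
    }
    return ret; /* -1 if the denied domain was scanned last */
}

/* New form: the XSM verdict is consumed in the condition. */
static int scan_new(int ndoms)
{
    int ret = 0;

    for ( int d = 0; d < ndoms; d++ )
    {
        if ( xsm_check(d) )
            continue;
        /* ... getdomaininfo(d, &info); copy out ... */
    }
    return ret; /* still 0 after a denial */
}
```

With three domains (0..2) both forms skip domain 2, but scan_old()
returns -1 because the denied domain was scanned last, while scan_new()
returns 0.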

Second, as far as the problem description goes: the *only* time the 
call to xsm_getdomaininfo() at this location will return anything other 
than 0 is when FLASK is in use and the call is made by a domain whose 
type is not allowed getdomaininfo. XSM_HOOK signals a no-op check for 
the default/dummy policy, and the SILO policy does not override the 
default/dummy policy for this check.
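As a rough illustration of that dispatch (a simplified sketch, not Xen's actual xsm_default_action(), whose real signature and action set differ):

```c
/* Simplified model of the default/dummy policy's action dispatch. */
typedef enum { XSM_HOOK, XSM_TARGET, XSM_PRIV } xsm_default_t;

static int dummy_default_action(xsm_default_t action, int caller_is_priv,
                                int caller_is_target)
{
    switch ( action )
    {
    case XSM_HOOK:   /* no-op check: always permitted */
        return 0;
    case XSM_TARGET: /* permitted for the target itself or privileged callers */
        return (caller_is_target || caller_is_priv) ? 0 : -1;
    case XSM_PRIV:   /* privileged callers only */
        return caller_is_priv ? 0 : -1;
    default:
        return -1;
    }
}
```

Under this model a check dispatched with XSM_HOOK can never fail, which
is why only FLASK can make xsm_getdomaininfo() return an error here.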

V/r,
Daniel P. Smith


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:10:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:10:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528596.821955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptplz-0000zJ-05; Tue, 02 May 2023 13:10:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528596.821955; Tue, 02 May 2023 13:10:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptply-0000zC-TQ; Tue, 02 May 2023 13:10:26 +0000
Received: by outflank-mailman (input) for mailman id 528596;
 Tue, 02 May 2023 13:10:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptplx-0000z3-Gd
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:10:25 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b19caf54-e8ea-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 15:10:23 +0200 (CEST)
Received: from mail-dm6nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 09:10:15 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6013.namprd03.prod.outlook.com (2603:10b6:5:388::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.20; Tue, 2 May
 2023 13:10:13 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 13:10:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b19caf54-e8ea-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683033023;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=MLTij1A4AGgMlMo36T9Wt4Qj75z05q5Hq5R1l5hi9LY=;
  b=h6eJw6Q4lvP8JFrhrk5eRk9ydzxSNLRUeKMqg9kLd+lbFaYPBercUMm8
   mhzutk6SBqRX1buUCjC7LT2Mc/0faJFQVh6zZFLNei31k6oTKPOzr3YFM
   n2yiCqCpCSIKyAROnqeFHWQMa4Jie8wyZngWzdslEPeDCrn+ehM84BxMN
   k=;
X-IronPort-RemoteIP: 104.47.57.177
X-IronPort-MID: 106330959
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:slJ1Z2CuWA4nM5v6Ey4g6HArFfIhTmSD8HWOf03/N3xvTbLAHA==
X-Talos-MUID: 9a23:nY37hQqWsBZAQ7tzByEezwp+aO1y2aC/M0YQrJoKseufbiBoMTjI2Q==
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="106330959"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lFyDtxFMI5BcUyRjwWhOm6ZQUx9JZ+CZdN8MukvFFQSjUdtqgCVWlJVlAcDjqNs8/TRjXVzPadrcATJnAd/g21GqAWu5EmKmaKU7VAC/n8D5lnohjg6qm51aYVfWqJPyPmVpCo3ggRyecGpoPa3+igJWiX9/AZR7mMU3r9+hbVDpNTC6yMKjGbjoYaATrc5bDqRMad0I0UswgXA2QIRTZU4xQViMv/yGn1Ib7b/ORydt5iOWkGti8X7gEWqgH9HaQNVtA32VD4HfnGE1jOC1VSSWth1xWOFlbUgnpOYTLpI45NjLZml0KGaEkd77Dq57SSepxf2V0DCMGWcoKeg0sQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YnyaevPHt9hCzAcJesQfdl7YN/uqEVelGTTDFPY2XNk=;
 b=lGuhUQK+M6uSqWMO/N0NnDf5XPXAlt1y4eLybdpUTxRa596ZWpFFtjDnkDCyJMo5v8pxPgnjzY7Uke2VCyHDAsNRlDU3rfiSorBo7Y5iV+ibA4SBJB8OwS08Ttt8npfNbfjuu+ofmwNEaUbYG4MXhcJTgwjn8mATAUS8I57VZazBkrJa2T3iYRHWR03jH4P21GaHWZSkUliJ7bs3bTfUZi4H/vbfdyhp6MRDJBaVCQSWYxTKSQ+M0AIZyiUl8sE8UFba8bDUbSyphzr183xxqV+7FajZBDnqdqcWu58/ulQTac/4okl/X/I2P/p6JnlOg8M9EEY0xs6odCbNabD0tA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YnyaevPHt9hCzAcJesQfdl7YN/uqEVelGTTDFPY2XNk=;
 b=CspUmLAaxzHAmRVoplMQfH/XvFn/MyvB3VoUokVi5FvWPQC6Xm4Abpw3YPGF/O2BfTyDQ2DnEyAwZrISdVrryes8YA2SD0RsgdrtCbq+rR/4wSsGKo8mGU0qgp+D2ieV3vwU772ISeU8/eKh86bziEq3B79MbrrKU7mlx6EjqZs=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 2 May 2023 15:10:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
Message-ID: <ZFELr0BYpMgX9CzR@Air-de-Roger>
References: <20230430144646.13624-1-jgross@suse.com>
 <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
X-ClientProxiedBy: LO4P265CA0280.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:37a::17) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM4PR03MB6013:EE_
X-MS-Office365-Filtering-Correlation-Id: 6b812388-0d10-4fea-4791-08db4b0e90c3
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6b812388-0d10-4fea-4791-08db4b0e90c3
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 13:10:13.5897
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ypz+UToJ0eskFZI3G0Lp0ZN5SWrpq++JqLAd6ETXEb9jPESZSfEvH+LuVfthZc6Z2k7cPnfDWe4KMN0zA9KIhQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6013

On Tue, May 02, 2023 at 09:03:00AM -0400, Daniel P. Smith wrote:
> On 4/30/23 10:46, Juergen Gross wrote:
> > In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
> > can fail if the last domain scanned isn't allowed to be accessed by
> > the calling domain (i.e. xsm_getdomaininfo(XSM_HOOK, d) is failing).
> > 
> > Fix that by just ignoring scanned domains where xsm_getdomaininfo()
> > is returning an error, like it is effectively done when such a
> > situation occurs for a domain not being the last one scanned.
> > 
> > Fixes: d046f361dc93 ("Xen Security Modules: XSM")
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > ---
> >   xen/common/sysctl.c | 3 +--
> >   1 file changed, 1 insertion(+), 2 deletions(-)
> > 
> > diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> > index 02505ab044..0cbfe8bd44 100644
> > --- a/xen/common/sysctl.c
> > +++ b/xen/common/sysctl.c
> > @@ -89,8 +89,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
> >               if ( num_domains == op->u.getdomaininfolist.max_domains )
> >                   break;
> > -            ret = xsm_getdomaininfo(XSM_HOOK, d);
> > -            if ( ret )
> > +            if ( xsm_getdomaininfo(XSM_HOOK, d) )
> >                   continue;
> >               getdomaininfo(d, &info);
> 
> 
> This change does not match the commit message. This says it fixes an issue,
> but unless I am totally missing something, this change is nothing more than
> formatting that drops the use of an intermediate variable. Please feel free
> to correct me if I am wrong here, otherwise I believe the commit message
> should be changed to reflect the code change.

By dropping that intermediate variable, the hypercall no longer returns an
error when xsm_getdomaininfo() fails for the last domain scanned.

Note that xsm_getdomaininfo() failing for domains other than the last one
doesn't cause the hypercall to return an error code, because the variable
holding the error gets overwritten by subsequent loop iterations.

Regards, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:13:13 2023
Message-ID: <ca098773-5428-c97f-87ae-402fffd114bb@suse.com>
Date: Tue, 2 May 2023 15:13:05 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230430144646.13624-1-jgross@suse.com>
 <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
In-Reply-To: <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 02.05.23 15:03, Daniel P. Smith wrote:
> On 4/30/23 10:46, Juergen Gross wrote:
>> In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
>> can fail if the last domain scanned isn't allowed to be accessed by
>> the calling domain (i.e. xsm_getdomaininfo(XSM_HOOK, d) is failing).
>>
>> Fix that by just ignoring scanned domains where xsm_getdomaininfo()
>> is returning an error, like it is effectively done when such a
>> situation occurs for a domain not being the last one scanned.
>>
>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   xen/common/sysctl.c | 3 +--
>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
>> index 02505ab044..0cbfe8bd44 100644
>> --- a/xen/common/sysctl.c
>> +++ b/xen/common/sysctl.c
>> @@ -89,8 +89,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>>               if ( num_domains == op->u.getdomaininfolist.max_domains )
>>                   break;
>> -            ret = xsm_getdomaininfo(XSM_HOOK, d);
>> -            if ( ret )
>> +            if ( xsm_getdomaininfo(XSM_HOOK, d) )
>>                   continue;
>>               getdomaininfo(d, &info);
>
> This change does not match the commit message. This says it fixes an issue, but
> unless I am totally missing something, this change is nothing more than
> formatting that drops the use of an intermediate variable. Please feel free to
> correct me if I am wrong here, otherwise I believe the commit message should be
> changed to reflect the code change.

You are missing the fact that ret getting set by a failing xsm_getdomaininfo()
call might result in the ret value being propagated to the sysctl caller. And
this should not happen. So the fix is to NOT modify ret here.

> Second, as far as the problem description goes. The *only* time the call to
> xsm_getdomaininfo() at this location will return anything other than 0, is when
> FLASK is being used and a domain whose type is not allowed getdomaininfo is
> making the call. XSM_HOOK signals a no-op check for the default/dummy policy,
> and the SILO policy does not override the default/dummy policy for this check.

Your statement sounds as if xsm_getdomaininfo() would always return the same
value for a given caller domain. Isn't that return value also depending on the
domain specified via the second parameter? In case it isn't, why does that
parameter even exist?


Juergen


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:13:48 2023
Message-ID: <ccbd0769-ef20-01ea-2204-ee0c211dcd5d@apertussolutions.com>
Date: Tue, 2 May 2023 09:13:35 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
 <600c8c62-5982-ec7e-7996-5b7fbfb40067@apertussolutions.com>
 <ZFDtMMUzBGXFZPsQ@Air-de-Roger>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
In-Reply-To: <ZFDtMMUzBGXFZPsQ@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 5/2/23 07:00, Roger Pau Monné wrote:
> On Tue, May 02, 2023 at 06:43:33AM -0400, Daniel P. Smith wrote:
>>
>>
>> On 5/2/23 03:17, Jan Beulich wrote:
>>> Unlike for XEN_DOMCTL_getdomaininfo, where the XSM check is intended to
>>> cause the operation to fail, in the loop here it ought to merely
>>> determine whether information for the domain at hand may be reported
>>> back. Therefore if on the last iteration the hook results in denial,
>>> this should not affect the sub-op's return value.
>>>
>>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> The hook being able to deny access to data for certain domains means
>>> that no caller can assume to have a system-wide picture when holding the
>>> results.
>>>
>>> Wouldn't it make sense to permit the function to merely "count" domains?
>>> While racy in general (including in its present, "normal" mode of
>>> operation), within a tool stack this could be used as long as creation
>>> of new domains is suppressed between obtaining the count and then using
>>> it.
>>>
>>> In XEN_DOMCTL_getpageframeinfo2 said commit had introduced a 2nd such
>>> issue, but luckily that sub-op and xsm_getpageframeinfo() are long gone.
>>>
>>
>> I understand there is a larger issue at play here but neutering the security
>> control/XSM check is not the answer. This literally changes the way a FLASK
>> policy that people currently have would be enforced, as well as being
>> contrary to how they understand the access control that it provides. Even
>> though the code path does not fall under the XSM maintainer, I would NACK
>> this patch. IMHO, it is better to find a solution that does not abuse,
>> misuse, or invalidate the purpose of the XSM calls.
>>
>> On a side note, I am a little concerned that only one person thought to
>> include the XSM maintainer, or any of the XSM reviewers, on a patch and
>> the discussion around a patch that clearly relates to XSM, for us to gauge
>> the consequences of the patch. I am not assuming intentions here, only
>> wanting to raise the concern.
>>
>> So for what it is worth, NACK.
> 
> I assume the NACK is to the remarks after the '---'?
> 
> The patch itself doesn't change the enforcement of the XSM checks,
> just prevents returning an error when the information from the last
> domain in the loop cannot be fetched.
> 
> Am I missing something?

Actually, I should have finished my first cup of tea and looked closer 
at the patch in the larger context instead of just the description, as 
the two do not align. You are correct, and provided I am not wrong here, 
the change is a no-op formatting change that removes an intermediate 
variable. I do not see how directly checking the return value in an if 
differs from checking the return stored in a variable. Additionally, 
the claim is that this occurs when XSM is enabled, which is also 
incorrect. The only difference at this location in the code between not 
having XSM enabled and having it enabled is that for the latter, 
xsm_getdomaininfo() is an in-lined version versus a function call. In 
either case, both will return 0 unless you are using FLASK and have a 
policy blocking the domain from making the call.

V/r,
Daniel P. Smith




From xen-devel-bounces@lists.xenproject.org Tue May 02 13:16:44 2023
Message-ID: <0ff2af4f-db6a-71b9-649c-028bf4148eb2@suse.com>
Date: Tue, 2 May 2023 15:16:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
 <600c8c62-5982-ec7e-7996-5b7fbfb40067@apertussolutions.com>
 <22b2e03c-ac5e-915a-78a2-0a632b09a53a@suse.com>
 <c333f02e-0655-50ec-aee8-7c1449ca267f@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <c333f02e-0655-50ec-aee8-7c1449ca267f@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0182.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB10003:EE_
X-MS-Office365-Filtering-Correlation-Id: 042848df-5c30-4928-9fd0-08db4b0f7457
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 042848df-5c30-4928-9fd0-08db4b0f7457
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 13:16:35.3348
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: HZW6ReGWRh6w8qzEM9N4Hfi+rF6PLzaNHEoFkB/eyTML0s/7RtJR3IuUxFeGVQIilTynLc0H2qaFve29am9wjg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB10003

On 02.05.2023 14:54, Daniel P. Smith wrote:
> On 5/2/23 06:59, Jan Beulich wrote:
>> On 02.05.2023 12:43, Daniel P. Smith wrote:
>>> On 5/2/23 03:17, Jan Beulich wrote:
>>>> Unlike for XEN_DOMCTL_getdomaininfo, where the XSM check is intended to
>>>> cause the operation to fail, in the loop here it ought to merely
>>>> determine whether information for the domain at hand may be reported
>>>> back. Therefore if on the last iteration the hook results in denial,
>>>> this should not affect the sub-op's return value.
>>>>
>>>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> The hook being able to deny access to data for certain domains means
>>>> that no caller can assume it has a system-wide picture when holding
>>>> the results.
>>>>
>>>> Wouldn't it make sense to permit the function to merely "count" domains?
>>>> While racy in general (including in its present, "normal" mode of
>>>> operation), within a tool stack this could be used as long as creation
>>>> of new domains is suppressed between obtaining the count and then using
>>>> it.
>>>>
>>>> In XEN_DOMCTL_getpageframeinfo2 said commit had introduced a 2nd such
>>>> issue, but luckily that sub-op and xsm_getpageframeinfo() are long gone.
>>>>
>>>
>>> I understand there is a larger issue at play here but neutering the
>>> security control/XSM check is not the answer. This literally changes the
>>> way a FLASK policy that people currently have would be enforced, as well
>>> as contrary to how they understand the access control that it provides.
>>> Even though the code path does not fall under XSM maintainer, I would
>>> NACK this patch. IMHO, it is better to find a solution that does not
>>> abuse, misuse, or invalidate the purpose of the XSM calls.
>>>
>>> On a side note, I am a little concerned that only one person thought to
>>> include the XSM maintainer, or any of the XSM reviewers, on a patch, and
>>> on the discussion around it, that clearly relates to XSM, so that we
>>> could gauge the consequences of the patch. I am not assuming intentions
>>> here, only wanting to raise the concern.
>>
>> Well, yes, for the discussion items I could have remembered to include
>> you. The code change itself, otoh, doesn't require your ack, even if it
>> is the return value of an XSM function which was used wrongly here.
> 
> I beg to disagree, not that you could have, but that you should have.
> This is now the second XSM issue, that I am aware of at least, that
> the XSM reviewers and I have been left out of. How and where the
> XSM hooks are deployed is critical to how XSM functions, regardless of
> how mundane the change may be. By your logic, as the XSM maintainer I
> could make changes to the XSM code that change how the system behaves for
> x86 and claim you have no Ack/Nack authority since it is XSM code. These
> subsystems are symbiotic, and we owe each other the due respect of
> including the other when these systems touch or influence each other.

No, that's not a proper representation of "my logic". Everyone can comment
on any patch, and pending objections will prevent it from going in. Still,
not everyone needs to be Cc-ed on every patch. If you want to see patches
you're not Cc-ed on, you'll need to be subscribed to the list and look at
(and perhaps comment on) the ones of interest to you.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:17:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:17:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528611.821994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpt0-0003Ir-IM; Tue, 02 May 2023 13:17:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528611.821994; Tue, 02 May 2023 13:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpt0-0003Ik-FL; Tue, 02 May 2023 13:17:42 +0000
Received: by outflank-mailman (input) for mailman id 528611;
 Tue, 02 May 2023 13:17:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l/wp=AX=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1ptpsy-0003IY-V1
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:17:40 +0000
Received: from sender3-of-o59.zoho.com (sender3-of-o59.zoho.com
 [136.143.184.59]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b5f6a33f-e8eb-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 15:17:39 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1683033455258422.6280237200574;
 Tue, 2 May 2023 06:17:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5f6a33f-e8eb-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683033455; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=bKgVqM1cUjQg1RE4Z6Kx69DSsRAw4rMDqvv6/Sy2O23QeM5wbXrA0LI7luRdXAA/4dQw/9Ws5WwK9qMhCm1a8suuRhTktwnWhJwHo9YS5fF+o8M7ojwnViVEQTy17sXfZw7idmwx7uuC6KslLYd3sXcfZ7W/IfaUsqTeBLh39jI=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1683033455; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=Bj4+tCe/Djy4x77OBozlm/+/KUc/w8n08KGpXcqAFa0=; 
	b=PfzrMRF/rh23CiCZobBufEefvheYPU8Fgodc/pFxKVuVUhu406UagzdU9voi/3QdAp0WFl2VVNZ85deMgIjMllE3ilpnJ5aVXrAHgWrA/VRea7cQh0cpmFbxD9LSE3g0PHNQUtnIWio1BOyO7zr7GYMADceR0xa65Z6ppMFBR0I=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1683033455;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:Subject:Subject:To:To:Cc:Cc:References:From:From:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=Bj4+tCe/Djy4x77OBozlm/+/KUc/w8n08KGpXcqAFa0=;
	b=FgKzUjC/XehD1nI7NB2OdpPDvu9CCWiYUInyMEZe5qAO0E4oNWZZzWdt53yj/hyw
	cxHP95XG4PoLfEjXhrRfPJWPpVwZEJcn1F8TnU8b8eGKT9KwbLbZwl7Gm014ZU8K2S/
	m5+W2Le2wwJ9jMaKEbjc04GeUDuX3C1J1mCY6JAw=
Message-ID: <8eeb34f6-0889-c2e4-d2fc-e0319f032a97@apertussolutions.com>
Date: Tue, 2 May 2023 09:17:33 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230430144646.13624-1-jgross@suse.com>
 <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
 <ZFELr0BYpMgX9CzR@Air-de-Roger>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <ZFELr0BYpMgX9CzR@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 5/2/23 09:10, Roger Pau Monné wrote:
> On Tue, May 02, 2023 at 09:03:00AM -0400, Daniel P. Smith wrote:
>> On 4/30/23 10:46, Juergen Gross wrote:
>>> In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
>>> can fail if the last domain scanned isn't allowed to be accessed by
>>> the calling domain (i.e. xsm_getdomaininfo(XSM_HOOK, d) is failing).
>>>
>>> Fix that by just ignoring scanned domains for which xsm_getdomaininfo()
>>> returns an error, as effectively already happens when such a situation
>>> occurs for a domain that is not the last one scanned.
>>>
>>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>    xen/common/sysctl.c | 3 +--
>>>    1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
>>> index 02505ab044..0cbfe8bd44 100644
>>> --- a/xen/common/sysctl.c
>>> +++ b/xen/common/sysctl.c
>>> @@ -89,8 +89,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>>>                if ( num_domains == op->u.getdomaininfolist.max_domains )
>>>                    break;
>>> -            ret = xsm_getdomaininfo(XSM_HOOK, d);
>>> -            if ( ret )
>>> +            if ( xsm_getdomaininfo(XSM_HOOK, d) )
>>>                    continue;
>>>                getdomaininfo(d, &info);
>>
>>
>> This change does not match the commit message. This says it fixes an issue,
>> but unless I am totally missing something, this change is nothing more than
>> formatting that drops the use of an intermediate variable. Please feel free
>> to correct me if I am wrong here, otherwise I believe the commit message
>> should be changed to reflect the code change.
> 
> By dropping that intermediate variable it prevents returning an error
> as the result of the hypercall if xsm_getdomaininfo() for the last
> domain fails.

Ah, understood. I missed that ret is doing state tracking.

> Note that xsm_getdomaininfo() failing for domains other than the last
> one doesn't cause the return value of the hypercall to be an error
> code, because the variable containing the error gets overwritten by
> further loop iterations.

In the end, this is just addressing an issue that had not been seen by
anyone and was only stumbled upon while debugging another issue.

V/r,
DPS


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:20:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:20:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528615.822005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpvO-0004lc-V7; Tue, 02 May 2023 13:20:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528615.822005; Tue, 02 May 2023 13:20:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpvO-0004lV-S5; Tue, 02 May 2023 13:20:10 +0000
Received: by outflank-mailman (input) for mailman id 528615;
 Tue, 02 May 2023 13:20:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dv4+=AX=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1ptpvN-0004lN-E4
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:20:09 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0df6aea5-e8ec-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 15:20:07 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-246-Y31MKCByMBe1jCFod_G6kw-1; Tue, 02 May 2023 09:19:59 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 107073828886;
 Tue,  2 May 2023 13:19:58 +0000 (UTC)
Received: from redhat.com (unknown [10.39.193.211])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C1880C15BAD;
 Tue,  2 May 2023 13:19:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0df6aea5-e8ec-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683033606;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=fGVklx9CdfvOhMG8evo/8EfQSW2qxZF5lAkrucGYcTQ=;
	b=NXq8nynpAATIIptmj2iJd7mKqe/yKxOWXihrIB0I/EJATr2Vvde9rf6/FrvVUVYvHZ01n9
	6WzEntsxzAsY4AgQ68ozxRkQyxOCFPWs0mNqCPkLbfJlEoHB7nEjrTgx/vgsu5avIJfgnB
	vo7aQ6J43ivVol52tfxSNCjWr91vDC8=
X-MC-Unique: Y31MKCByMBe1jCFod_G6kw-1
Date: Tue, 2 May 2023 15:19:52 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 04/20] virtio-scsi: stop using aio_disable_external()
 during unplug
Message-ID: <ZFEN+KY8JViTDtv/@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-5-stefanha@redhat.com>
 <ZEvWv8dF78Jpb6CQ@redhat.com>
 <20230501150934.GA14869@fedora>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="YXC5ZjyIo1QEPFar"
Content-Disposition: inline
In-Reply-To: <20230501150934.GA14869@fedora>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8



On 01.05.2023 at 17:09, Stefan Hajnoczi wrote:
> On Fri, Apr 28, 2023 at 04:22:55PM +0200, Kevin Wolf wrote:
> > On 25.04.2023 at 19:27, Stefan Hajnoczi wrote:
> > > This patch is part of an effort to remove the aio_disable_external()
> > > API because it does not fit in a multi-queue block layer world where
> > > many AioContexts may be submitting requests to the same disk.
> > >
> > > The SCSI emulation code is already in good shape to stop using
> > > aio_disable_external(). It was only used by commit 9c5aad84da1c
> > > ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
> > > disk") to ensure that virtio_scsi_hotunplug() works while the guest
> > > driver is submitting I/O.
> > >
> > > Ensure virtio_scsi_hotunplug() is safe as follows:
> > >
> > > 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
> > >    device_set_realized() calls qatomic_set(&dev->realized, false) so
> > >    that future scsi_device_get() calls return NULL because they exclude
> > >    SCSIDevices with realized=false.
> > >
> > >    That means virtio-scsi will reject new I/O requests to this
> > >    SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
> > >    virtio_scsi_hotunplug() is still executing. We are protected against
> > >    new requests!
> > >
> > > 2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
> > >    that in-flight requests are cancelled synchronously. This ensures
> > >    that no in-flight requests remain once qdev_simple_device_unplug_cb()
> > >    returns.
> > >
> > > Thanks to these two conditions we don't need aio_disable_external()
> > > anymore.
> > >
> > > Cc: Zhengui Li <lizhengui@huawei.com>
> > > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > > Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> >
> > qemu-iotests 040 starts failing for me after this patch, with what looks
> > like a use-after-free error of some kind.
> >
> > (gdb) bt
> > #0  0x000055b6e3e1f31c in job_type (job=0xe3e3e3e3e3e3e3e3) at ../job.c:238
> > #1  0x000055b6e3e1cee5 in is_block_job (job=0xe3e3e3e3e3e3e3e3) at ../blockjob.c:41
> > #2  0x000055b6e3e1ce7d in block_job_next_locked (bjob=0x55b6e72b7570) at ../blockjob.c:54
> > #3  0x000055b6e3df6370 in blockdev_mark_auto_del (blk=0x55b6e74af0a0) at ../blockdev.c:157
> > #4  0x000055b6e393e23b in scsi_qdev_unrealize (qdev=0x55b6e7c04d40) at ../hw/scsi/scsi-bus.c:303
> > #5  0x000055b6e3db0d0e in device_set_realized (obj=0x55b6e7c04d40, value=false, errp=0x55b6e497c918 <error_abort>) at ../hw/core/qdev.c:599
> > #6  0x000055b6e3dba36e in property_set_bool (obj=0x55b6e7c04d40, v=0x55b6e7d7f290, name=0x55b6e41bd6d8 "realized", opaque=0x55b6e7246d20, errp=0x55b6e497c918 <error_abort>)
> >     at ../qom/object.c:2285
> > #7  0x000055b6e3db7e65 in object_property_set (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", v=0x55b6e7d7f290, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1420
> > #8  0x000055b6e3dbd84a in object_property_set_qobject (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=0x55b6e74c1890, errp=0x55b6e497c918 <error_abort>)
> >     at ../qom/qom-qobject.c:28
> > #9  0x000055b6e3db8570 in object_property_set_bool (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=false, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1489
> > #10 0x000055b6e3daf2b5 in qdev_unrealize (dev=0x55b6e7c04d40) at ../hw/core/qdev.c:306
> > #11 0x000055b6e3db509d in qdev_simple_device_unplug_cb (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/qdev-hotplug.c:72
> > #12 0x000055b6e3c520f9 in virtio_scsi_hotunplug (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/scsi/virtio-scsi.c:1065
> > #13 0x000055b6e3db4dec in hotplug_handler_unplug (plug_handler=0x55b6e81c3630, plugged_dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/hotplug.c:56
> > #14 0x000055b6e3a28f84 in qdev_unplug (dev=0x55b6e7c04d40, errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:935
> > #15 0x000055b6e3a290fa in qmp_device_del (id=0x55b6e74c1760 "scsi0", errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:955
> > #16 0x000055b6e3fb0a5f in qmp_marshal_device_del (args=0x7f61cc005eb0, ret=0x7f61d5a8ae38, errp=0x7f61d5a8ae40) at qapi/qapi-commands-qdev.c:114
> > #17 0x000055b6e3fd52e1 in do_qmp_dispatch_bh (opaque=0x7f61d5a8ae08) at ../qapi/qmp-dispatch.c:128
> > #18 0x000055b6e4007b9e in aio_bh_call (bh=0x55b6e7dea730) at ../util/async.c:155
> > #19 0x000055b6e4007d2e in aio_bh_poll (ctx=0x55b6e72447c0) at ../util/async.c:184
> > #20 0x000055b6e3fe3b45 in aio_dispatch (ctx=0x55b6e72447c0) at ../util/aio-posix.c:421
> > #21 0x000055b6e4009544 in aio_ctx_dispatch (source=0x55b6e72447c0, callback=0x0, user_data=0x0) at ../util/async.c:326
> > #22 0x00007f61ddc14c7f in g_main_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:3454
> > #23 g_main_context_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:4172
> > #24 0x000055b6e400a7e8 in glib_pollfds_poll () at ../util/main-loop.c:290
> > #25 0x000055b6e400a0c2 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313
> > #26 0x000055b6e4009fa2 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:592
> > #27 0x000055b6e3a3047b in qemu_main_loop () at ../softmmu/runstate.c:731
> > #28 0x000055b6e3dab27d in qemu_default_main () at ../softmmu/main.c:37
> > #29 0x000055b6e3dab2b8 in main (argc=24, argv=0x7ffec55196a8) at ../softmmu/main.c:48
> > (gdb) p jobs
> > $4 = {lh_first = 0x0}
>
> I wasn't able to reproduce this with gcc 13.1.1 or clang 16.0.1:
>
>   $ tests/qemu-iotests/check -qcow2 040
>
> Any suggestions on how to reproduce the issue?

It happens consistently for me with the same command line, both with gcc
and clang.

gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
clang version 15.0.7 (Fedora 15.0.7-2.fc37)

Maybe there is a semantic merge conflict? I have applied the series on
top of master (05d50ba2d4) and my block branch (88f81f7bc8).

Kevin




From xen-devel-bounces@lists.xenproject.org Tue May 02 13:21:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:21:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528620.822015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpwA-0005Mm-Cm; Tue, 02 May 2023 13:20:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528620.822015; Tue, 02 May 2023 13:20:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptpwA-0005Mf-9W; Tue, 02 May 2023 13:20:58 +0000
Received: by outflank-mailman (input) for mailman id 528620;
 Tue, 02 May 2023 13:20:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptpw8-0005FV-Vv
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:20:56 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2059.outbound.protection.outlook.com [40.107.7.59])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2bcd9ab2-e8ec-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 15:20:56 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7041.eurprd04.prod.outlook.com (2603:10a6:208:19a::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.26; Tue, 2 May
 2023 13:20:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 13:20:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2bcd9ab2-e8ec-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iMU04rxa9l5XTjyZJyUKkTEL1DRsQmF7BdOs/g3wrogTK+ELfNEYR03XiTs5Tk7l6u9OV3s0lyuxDysYUJvxcbQrZdxGTXaiplFUrFXR4eP/qDyIh9HiuBC3OJQ+WLJdErkQzdgMs4YZbT4Vo5iR3rbgaUczx72gGGx/ZQxK2NnzUVbDQQx9Impd5QUCpBAvAepUhZXPZ1wmkxkjIS9hzHpI8gxyd6wuyL00ih8+B3rcAsIDB9YgVT+C4diLQCEHvAXoVX8AU+gdF3mwcAIwUnQK/zhNGdHPK7iZrtPLRqOPZTIWUDievC2meP2llvbmSVPpHG26npcOkXcewJlGsQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zwJg1hIV7/IcKn86O/iBIYSL1qO0b18JDv7CnuQlPJk=;
 b=QWP8KKIfwPxaF5Ee/ayHFxTSg0yo4bOLjBz8TCSFKmvKeE+NK4XbzLL8+DFIGeIw9947t8T0bkYi9Xfs29Hj9sk7SaFow7XDZ9eYAfxzZirbtPA+Fv/LYOqXRIsVBWMhi14QzEp2LX7Kj0gvlo8QaIsiGREAhhqpGAxd1ppiixTzX5uzSAzVtLr0ei5/90uLDa8zuABsIi93uoVNN0uPhCFe/kalDnUoKNm67qem9y6biPi7jayQijwcmyqSQvwXqdZTaYTgKgFJUHURx2XzLo+0BHAlVFFCx3mtDhKCB9ZKPzq9iZGJ9hWbkHNGC716Iqx90wva3Vsd7C3u1D7Nqw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zwJg1hIV7/IcKn86O/iBIYSL1qO0b18JDv7CnuQlPJk=;
 b=LUisP/8IJctGnK+feS7dx02e58UpdCYmMpF6sHslUs6K9mRRGAY5VR6qTC4pNp39od9MvTaJn8/sme7zSVdtsVL96FAY+ezW/lPp2+dIbZawPH3YJOH3ssP59yDbtVMI2eOGz+ab5HBRWhSTmdEZgD4b+EmBQaIjZB8MSi379ThTSXeKrZKMjqXCJHzyUAWSFIOhhxmT95kRLOeuISmGYCuqZdISGEf0Zb6q/TQNvu14kJRneKLh9ZQdnREm3Zm3D05h4uraiPuXdj06RDB+O6IejZgIZYPYYST6FviVBPx4/u8Lsc1pz27pnDta3GE02Sz1LAQP6EOl2vn6mtH60A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e9f78423-b7de-066f-2d62-0c354ea90551@suse.com>
Date: Tue, 2 May 2023 15:20:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] sysctl: XSM hook should not cause
 XEN_SYSCTL_getdomaininfolist to (appear to) fail
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <26064a5a-423d-ded5-745e-61abb0fa601c@suse.com>
 <600c8c62-5982-ec7e-7996-5b7fbfb40067@apertussolutions.com>
 <ZFDtMMUzBGXFZPsQ@Air-de-Roger>
 <ccbd0769-ef20-01ea-2204-ee0c211dcd5d@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ccbd0769-ef20-01ea-2204-ee0c211dcd5d@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0054.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 02.05.2023 15:13, Daniel P. Smith wrote:
> On 5/2/23 07:00, Roger Pau Monné wrote:
>> On Tue, May 02, 2023 at 06:43:33AM -0400, Daniel P. Smith wrote:
>>> On 5/2/23 03:17, Jan Beulich wrote:
>>>> Unlike for XEN_DOMCTL_getdomaininfo, where the XSM check is intended to
>>>> cause the operation to fail, in the loop here it ought to merely
>>>> determine whether information for the domain at hand may be reported
>>>> back. Therefore if on the last iteration the hook results in denial,
>>>> this should not affect the sub-op's return value.
>>>>
>>>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> The hook being able to deny access to data for certain domains means
>>>> that no caller can assume to have a system-wide picture when holding the
>>>> results.
>>>>
>>>> Wouldn't it make sense to permit the function to merely "count" domains?
>>>> While racy in general (including in its present, "normal" mode of
>>>> operation), within a tool stack this could be used as long as creation
>>>> of new domains is suppressed between obtaining the count and then using
>>>> it.
>>>>
>>>> In XEN_DOMCTL_getpageframeinfo2 said commit had introduced a 2nd such
>>>> issue, but luckily that sub-op and xsm_getpageframeinfo() are long gone.
>>>>
>>>
>>> I understand there is a larger issue at play here, but neutering the
>>> security control/XSM check is not the answer. This literally changes the
>>> way a FLASK policy that people currently have would be enforced, and is
>>> contrary to how they understand the access control that it provides. Even
>>> though the code path does not fall under the XSM maintainer, I would NACK
>>> this patch. IMHO, it is better to find a solution that does not abuse,
>>> misuse, or invalidate the purpose of the XSM calls.
>>>
>>> On a side note, I am a little concerned that only one person thought to
>>> include the XSM maintainer, or any of the XSM reviewers, on a patch, and
>>> on the discussion around it, when the patch clearly relates to XSM and we
>>> need to gauge its consequences. I am not assuming intentions here, only
>>> wanting to raise the concern.
>>>
>>> So for what it is worth, NACK.
>>
>> I assume the NACK is to the remarks after the '---'?
>>
>> The patch itself doesn't change the enforcement of the XSM checks,
>> just prevents returning an error when the information from the last
>> domain in the loop can not be fetched.
>>
>> Am I missing something?
> 
> Actually, I should have finished my first cup of tea and looked closer
> at the patch in the larger context instead of the description, as the
> two do not align. You are correct, and provided I am not wrong here, the
> change is a no-op formatting change that removes an intermediate
> variable. I do not see how directly checking the return in an if differs
> from checking the return stored in a variable. Additionally, the claim
> is that this occurs when XSM is enabled, which is also incorrect. The
> only difference at this location in the code between not having XSM
> enabled and having it enabled is that for the latter, xsm_getdomaininfo()
> is an in-lined version versus a function call. In either case, both will
> return 0 unless you are using FLASK and have a policy blocking the
> domain from making the call.

While perhaps imprecise, "XSM enabled" is typically taken to mean "Flask
is in use". Then again, looking back, neither the title nor the
description says "XSM enabled". And it truly was the XSM hook which might
have caused the sub-op to wrongly be reported as failed (given, as you
say, that a policy is in place which can actually cause failure from that
hook).

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:23:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:23:35 +0000
Message-ID: <25876523-8d03-7e7d-e70f-b88d52f2b270@apertussolutions.com>
Date: Tue, 2 May 2023 09:23:21 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230430144646.13624-1-jgross@suse.com>
 <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
 <ca098773-5428-c97f-87ae-402fffd114bb@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
In-Reply-To: <ca098773-5428-c97f-87ae-402fffd114bb@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 5/2/23 09:13, Juergen Gross wrote:
> On 02.05.23 15:03, Daniel P. Smith wrote:
>> On 4/30/23 10:46, Juergen Gross wrote:
>>> In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
>>> can fail if the last domain scanned isn't allowed to be accessed by
>>> the calling domain (i.e. xsm_getdomaininfo(XSM_HOOK, d) is failing).
>>>
>>> Fix that by just ignoring scanned domains where xsm_getdomaininfo()
>>> is returning an error, like it is effectively done when such a
>>> situation occurs for a domain not being the last one scanned.
>>>
>>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   xen/common/sysctl.c | 3 +--
>>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
>>> index 02505ab044..0cbfe8bd44 100644
>>> --- a/xen/common/sysctl.c
>>> +++ b/xen/common/sysctl.c
>>> @@ -89,8 +89,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) 
>>> u_sysctl)
>>>               if ( num_domains == op->u.getdomaininfolist.max_domains )
>>>                   break;
>>> -            ret = xsm_getdomaininfo(XSM_HOOK, d);
>>> -            if ( ret )
>>> +            if ( xsm_getdomaininfo(XSM_HOOK, d) )
>>>                   continue;
>>>               getdomaininfo(d, &info);
>>
>>
>> This change does not match the commit message. This says it fixes an 
>> issue, but unless I am totally missing something, this change is 
>> nothing more than formatting that drops the use of an intermediate 
>> variable. Please feel free to correct me if I am wrong here, otherwise 
>> I believe the commit message should be changed to reflect the code 
>> change.
> 
> You are missing the fact that ret getting set by a failing
> xsm_getdomaininfo() call might result in the ret value being propagated
> to the sysctl caller. And this should not happen. So the fix is to NOT
> modify ret here.

You are correct, my apologies for that.

>> Second, as far as the problem description goes. The *only* time the 
>> call to xsm_getdomaininfo() at this location will return anything 
>> other than 0, is when FLASK is being used and a domain whose type is 
>> not allowed getdomaininfo is making the call. XSM_HOOK signals a no-op 
>> check for the default/dummy policy, and the SILO policy does not 
>> override the default/dummy policy for this check.
> 
> Your statement sounds as if xsm_getdomaininfo() would always return the
> same value for a given caller domain. Isn't that return value also
> dependent on the domain specified via the second parameter? If it isn't,
> why does that parameter even exist?

It would if the default action were something other than XSM_HOOK. Look
at line 82 of include/xsm/dummy.h. XSM_HOOK will always return 0
regardless of the src or dest domains. The function xsm_default_action()
implements the policy for both default/dummy and SILO, with the exception
of the evtchn, grant, and argo checks for SILO.

v/r,
DPS


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:27:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:27:49 +0000
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7a7661e7-b791-85ab-c638-eac471d02cd4@suse.com>
Date: Tue, 2 May 2023 15:27:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] x86/head: check base address alignment
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230502092224.52265-1-roger.pau@citrix.com>
 <20230502092224.52265-2-roger.pau@citrix.com>
 <89389465-f32c-7dfe-f62b-b957e2543cb8@citrix.com>
 <068ad06b-d766-4fc7-6bbc-289911441ee7@suse.com>
 <ZFDrT87RixpOmMfq@Air-de-Roger>
 <6aa9f2a5-bb57-4c56-84b8-5bc63b47cfa4@suse.com>
 <269b0894-5fe6-fd71-9f6d-24e3b08973cf@suse.com>
 <ZFEJ+LqEM2rwOxPG@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFEJ+LqEM2rwOxPG@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0057.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 02.05.2023 15:02, Roger Pau Monné wrote:
> On Tue, May 02, 2023 at 01:11:12PM +0200, Jan Beulich wrote:
>> On 02.05.2023 13:05, Jan Beulich wrote:
>>> On 02.05.2023 12:51, Roger Pau Monné wrote:
>>>> On Tue, May 02, 2023 at 12:28:55PM +0200, Jan Beulich wrote:
>>>>> On 02.05.2023 11:54, Andrew Cooper wrote:
>>>>>> On 02/05/2023 10:22 am, Roger Pau Monne wrote:
>>>>>>> @@ -670,6 +674,11 @@ trampoline_setup:
>>>>>>>          cmp     %edi, %eax
>>>>>>>          jb      1b
>>>>>>>  
>>>>>>> +        /* Check that the image base is aligned. */
>>>>>>> +        lea     sym_esi(_start), %eax
>>>>>>> +        and     $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
>>>>>>> +        jnz     not_aligned
>>>>>>
>>>>>> You just want to check the value in %esi, which is the base of the Xen
>>>>>> image.  Something like:
>>>>>>
>>>>>> mov %esi, %eax
>>>>>> and ...
>>>>>> jnz
>>>>>
>>>>> Or yet more simply "test $..., %esi" and then "jnz ..."?
>>>>
>>>> As replied to Andrew, I would rather keep this inline with the address
>>>> used to build the PDE, which is sym_esi(_start).
>>>
>>> Well, I won't insist, and you've got Andrew's R-b already.
>>
>> Actually, one more remark here: While using sym_esi() is more in line
>> with the actual consumer of the data, the check triggering because of
>> the transformation yielding a misaligned value (in turn because of a
>> bug elsewhere) would yield a misleading error message: we might well
>> have been loaded at a 2MB-aligned boundary, with it instead being the
>> internal logic which was wrong. (I'm sorry, now you'll get to judge
>> whether keeping the check in line with other code or with the
>> diagnostic is going to be better. Or split things into a build-time
>> and a runtime check, as previously suggested.)
> 
> What about adding a build time check that XEN_VIRT_START is 2MB
> aligned, and then just switching to test instead of and, would that be
> acceptable?

Hmm, yes, why not. (Except I would still express it as sym_offs(0)
rather than a direct use of XEN_VIRT_START, once again to better
match surrounding code.)

Jan

> I know that using sym_esi(_start) instead of just esi won't change the
> result of the check if XEN_VIRT_START is aligned, but I would prefer
> to keep the usage of sym_esi(_start) for consistency with the value
> used to build the page tables, as I think it's clearer.
> 
> Thanks, Roger.



From xen-devel-bounces@lists.xenproject.org Tue May 02 13:28:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:28:28 +0000
Message-ID: <f08c67e6-72ab-a99c-b020-37f526efc743@apertussolutions.com>
Date: Tue, 2 May 2023 09:28:15 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
Content-Language: en-US
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230430144646.13624-1-jgross@suse.com>
 <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
 <ca098773-5428-c97f-87ae-402fffd114bb@suse.com>
 <25876523-8d03-7e7d-e70f-b88d52f2b270@apertussolutions.com>
In-Reply-To: <25876523-8d03-7e7d-e70f-b88d52f2b270@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 5/2/23 09:23, Daniel P. Smith wrote:
> On 5/2/23 09:13, Juergen Gross wrote:
>> On 02.05.23 15:03, Daniel P. Smith wrote:
>>> On 4/30/23 10:46, Juergen Gross wrote:
>>>> In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
>>>> can fail if the last domain scanned isn't allowed to be accessed by
>>>> the calling domain (i.e. xsm_getdomaininfo(XSM_HOOK, d) is failing).
>>>>
>>>> Fix that by just ignoring scanned domains where xsm_getdomaininfo()
>>>> is returning an error, like it is effectively done when such a
>>>> situation occurs for a domain not being the last one scanned.
>>>>
>>>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>   xen/common/sysctl.c | 3 +--
>>>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>>>
>>>> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
>>>> index 02505ab044..0cbfe8bd44 100644
>>>> --- a/xen/common/sysctl.c
>>>> +++ b/xen/common/sysctl.c
>>>> @@ -89,8 +89,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>>>>               if ( num_domains == op->u.getdomaininfolist.max_domains )
>>>>                   break;
>>>> -            ret = xsm_getdomaininfo(XSM_HOOK, d);
>>>> -            if ( ret )
>>>> +            if ( xsm_getdomaininfo(XSM_HOOK, d) )
>>>>                   continue;
>>>>               getdomaininfo(d, &info);
>>>
>>>
>>> This change does not match the commit message. This says it fixes an 
>>> issue, but unless I am totally missing something, this change is 
>>> nothing more than formatting that drops the use of an intermediate 
>>> variable. Please feel free to correct me if I am wrong here, 
>>> otherwise I believe the commit message should be changed to reflect 
>>> the code change.
>>
>> You are missing the fact that ret getting set by a failing
>> xsm_getdomaininfo() call might result in the ret value being propagated
>> to the sysctl caller. And this should not happen. So the fix is to NOT
>> modify ret here.
> 
> You are correct, my apologies for that.
> 
>>> Second, as far as the problem description goes. The *only* time the 
>>> call to xsm_getdomaininfo() at this location will return anything 
>>> other than 0, is when FLASK is being used and a domain whose type is 
>>> not allowed getdomaininfo is making the call. XSM_HOOK signals a 
>>> no-op check for the default/dummy policy, and the SILO policy does 
>>> not override the default/dummy policy for this check.
>>
>> Your statement sounds as if xsm_getdomaininfo() would always return the
>> same value for a given caller domain. Isn't that return value also
>> depending on the domain specified via the second parameter? In case it
>> isn't, why does that parameter even exist?
> 
> It would if the default action was something other than XSM_HOOK. Look
> at line 82 of include/xsm/dummy.h. XSM_HOOK will always return 0
> regardless of the src or dest domains. The function xsm_default_action()
> is the policy for both default/dummy and SILO, with the exception of the
> evtchn, grant, and argo checks for SILO.

Sorry, one last clarification. xsm_default_action() is also what is used
when XSM=n. The difference is that with XSM=n, xsm_default_action() is
inlined at the call site, whereas with XSM=y and FLASK not in use it is
reached via an actual call to xsm_default_action().

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:29:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:29:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528637.822055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptq4I-0007l2-30; Tue, 02 May 2023 13:29:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528637.822055; Tue, 02 May 2023 13:29:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptq4H-0007kv-Ul; Tue, 02 May 2023 13:29:21 +0000
Received: by outflank-mailman (input) for mailman id 528637;
 Tue, 02 May 2023 13:29:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptq4G-0007ke-F1
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:29:20 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0617.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::617])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56c89931-e8ed-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 15:29:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8452.eurprd04.prod.outlook.com (2603:10a6:20b:348::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 13:29:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 13:29:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56c89931-e8ed-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dazf2fR6lB2HLePJKTnVMBJpQi0qR9i4H8M8leSrf/pobXazKGYOSwFpSMduMlbmLbtdtUrzofOK2WYu7mTTtjOMaxiHkF3FAigm/MCxy1N2vMB7MuIkqhWkmtUzQDmD3AB/2cR3MrY50KsQ1S4o2wDnOwtDf+vTuqA29bdivgnPi8FfIimPa8WFB9MhYKVVl//8djaAPIaGT6SH3fgxQ2MqjjvtDFk4fI/pIP7HKaTom066Y3MNwIQy6gsUJq77zS9aFxZwUgP0Eq6A3GmP8uvnCAzbN2oknsqfWGjot6eLSsXGAMV9hCn9r2y0b2yN9RE1y49k5+t4BNN+/3VwWg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ME2ZQwWulxZwBXmpwWH9t4GVzAmCDXXPpMuLKWJ28IA=;
 b=Y2NGa++oFOYkzyb70uhKxiAgYnL+Qxahxlgn2BdSW8HLfKMn00MBzhn/7RSCSdbFGvmNXH+/eaphrLnR+Iv5hZwD9V9TU/JkKgAFOLyHnpT/Sd+Tc0qV0kMzXUZLKahzc8VJVOtyLQlKsSbkFtYvYFYeeuXvb7fGJCMb8qZaqvrkOAIvnYxHAjcbLMwtSglGyZNK1M2lGkIhkF13ZuwzWoUgOu+BQfZjR3P3UwbF9UFZJIPmrbs0nr9zOPCeJaF0os2cyXoyUJNqq1ivGtfyDphJg7CN7H64DsZEhIpxF0NcmqHpwk4NNRyj5szF9pQ2wwUclfOTEwTnAYV1+6g9cQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ME2ZQwWulxZwBXmpwWH9t4GVzAmCDXXPpMuLKWJ28IA=;
 b=r+AQIZC0DpdQAzaW4Sa9+lM6UD5QBIJpBe49l/CEGwueWEKUULqqN957GCGULdtAo/UmdA3L0viMIURtM6vp4MLX9q7LL2nbXnm9onzSLM7nUJaWixWlAfh+iMbw0WxNBCIv3mPrS6ZXS0Fr4G1nX0qKHVA9gokQvFF8lU3WKufoPMPWH9CIKfkqQbTXBRZy4ssnYeQNozCfUL8tv0+j9VrMSh70KVxVreFqTJv5jwDQoMCprAHL0ocPYEQG0NgCAf0NuZnDXsiTc2JNo8SlOs33oYy6ZC+mDTH8j/1CBJk/CGs46B7qwnQFojyYLJnRwmOhYG+xNsE8HhBlF1O1aQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9353d3f3-563b-ff88-0b0b-dfa2bb03522c@suse.com>
Date: Tue, 2 May 2023 15:29:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: HAS_CC_CET_IBT misdetected
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
 <43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
 <20230502133313.2192eb99.olaf@aepfle.de>
 <20230502140444.1dacdb33.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230502140444.1dacdb33.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0108.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8452:EE_
X-MS-Office365-Filtering-Correlation-Id: 5647e937-736d-40a3-634d-08db4b113a68
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5647e937-736d-40a3-634d-08db4b113a68
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 13:29:17.1177
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FFJUnxEynLHNCH2wftjkb4h7SriH8FwfPQSBuDc1RsgRclWKckKHnWB3YUjYT/FJVOwnbBlgqaOfPlczhWSBSw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8452

On 02.05.2023 14:04, Olaf Hering wrote:
> Tue, 2 May 2023 13:33:13 +0200 Olaf Hering <olaf@aepfle.de>:
> 
>> I will investigate why it failed to build for me.
> 
> This happens if one builds first with the Tumbleweed container, and later with the Leap container, without a 'git clean -dffx' in between.
> 
> Is there a way to invalidate everything if the toolchain changes?

Getting this to work automatically is a recurring subject of discussion.
Touching xen/.config before starting the build ought to work, though.

Jan
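[Editor's note: a minimal demonstration of why touching xen/.config helps. make decides staleness by comparing timestamps, so bumping the config's mtime marks everything depending on it as out of date. The sketch below uses a throwaway directory and made-up stand-in file names (xen.config, build.out); it is not the Xen build system itself.]

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
echo config > xen.config          # stand-in for xen/.config
sleep 1                           # ensure distinct timestamps
echo built > build.out            # stand-in for a build artifact
[ build.out -nt xen.config ] && echo "artifact is fresh"
sleep 1
touch xen.config                  # the suggested invalidation step
[ xen.config -nt build.out ] && echo "artifact is stale, make would rebuild"
```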



From xen-devel-bounces@lists.xenproject.org Tue May 02 13:30:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:30:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528641.822065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptq53-0000g8-Bk; Tue, 02 May 2023 13:30:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528641.822065; Tue, 02 May 2023 13:30:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptq53-0000fz-7r; Tue, 02 May 2023 13:30:09 +0000
Received: by outflank-mailman (input) for mailman id 528641;
 Tue, 02 May 2023 13:30:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zdoi=AX=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1ptq52-0008BD-FV
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:30:08 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 747642f8-e8ed-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 15:30:07 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 61F521F8D7;
 Tue,  2 May 2023 13:30:07 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 205A5134FB;
 Tue,  2 May 2023 13:30:07 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id JueEBl8QUWQHXQAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 02 May 2023 13:30:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 747642f8-e8ed-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683034207; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=D8QP5oT7WEw35VAPIubP9M44ZYf0Ip836qmoZQvEDUY=;
	b=lqXRg8nSHcy9ZmF7LlAJqfMVeNbMFmHdiLtbk1hnVxn96+P/oS1ew/7SYgmHu5H4Jrzb+J
	w0sK/wgslyq2mZvlWhntQFZTUI6B1Gfbuf0xRpAD7ZWHR/pyV4Ab1MG8w2L90Zn6vrid9G
	5gomh/5y3f1pXNn7opEu6Eu4he3cjrY=
Message-ID: <b6550b56-fcb5-42a2-93c3-7fe486a215ba@suse.com>
Date: Tue, 2 May 2023 15:30:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230430144646.13624-1-jgross@suse.com>
 <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
 <ca098773-5428-c97f-87ae-402fffd114bb@suse.com>
 <25876523-8d03-7e7d-e70f-b88d52f2b270@apertussolutions.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <25876523-8d03-7e7d-e70f-b88d52f2b270@apertussolutions.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------wahdxtmcljEBJZ4CSpgT7zTZ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------wahdxtmcljEBJZ4CSpgT7zTZ
Content-Type: multipart/mixed; boundary="------------yeCflqXKhgW6Hjn7xizoBghT";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
Message-ID: <b6550b56-fcb5-42a2-93c3-7fe486a215ba@suse.com>
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
References: <20230430144646.13624-1-jgross@suse.com>
 <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
 <ca098773-5428-c97f-87ae-402fffd114bb@suse.com>
 <25876523-8d03-7e7d-e70f-b88d52f2b270@apertussolutions.com>
In-Reply-To: <25876523-8d03-7e7d-e70f-b88d52f2b270@apertussolutions.com>

--------------yeCflqXKhgW6Hjn7xizoBghT
Content-Type: multipart/mixed; boundary="------------87nHJ0dLWt40XxySAT8iMUHX"

--------------87nHJ0dLWt40XxySAT8iMUHX
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 02.05.23 15:23, Daniel P. Smith wrote:
> On 5/2/23 09:13, Juergen Gross wrote:
>> On 02.05.23 15:03, Daniel P. Smith wrote:
>>> On 4/30/23 10:46, Juergen Gross wrote:
>>>> In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
>>>> can fail if the last domain scanned isn't allowed to be accessed by
>>>> the calling domain (i.e. xsm_getdomaininfo(XSM_HOOK, d) is failing).
>>>>
>>>> Fix that by just ignoring scanned domains where xsm_getdomaininfo()
>>>> is returning an error, like it is effectively done when such a
>>>> situation occurs for a domain not being the last one scanned.
>>>>
>>>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>   xen/common/sysctl.c | 3 +--
>>>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>>>
>>>> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
>>>> index 02505ab044..0cbfe8bd44 100644
>>>> --- a/xen/common/sysctl.c
>>>> +++ b/xen/common/sysctl.c
>>>> @@ -89,8 +89,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>>>>               if ( num_domains == op->u.getdomaininfolist.max_domains )
>>>>                   break;
>>>> -            ret = xsm_getdomaininfo(XSM_HOOK, d);
>>>> -            if ( ret )
>>>> +            if ( xsm_getdomaininfo(XSM_HOOK, d) )
>>>>                   continue;
>>>>               getdomaininfo(d, &info);
>>>
>>>
>>> This change does not match the commit message. This says it fixes an
>>> issue, but unless I am totally missing something, this change is
>>> nothing more than formatting that drops the use of an intermediate
>>> variable. Please feel free to correct me if I am wrong here, otherwise
>>> I believe the commit message should be changed to reflect the code
>>> change.
>>
>> You are missing the fact that ret getting set by a failing
>> xsm_getdomaininfo() call might result in the ret value being propagated
>> to the sysctl caller. And this should not happen. So the fix is to NOT
>> modify ret here.
>
> You are correct, my apologies for that.

No need to apologize. :-)

>>> Second, as far as the problem description goes. The *only* time the
>>> call to xsm_getdomaininfo() at this location will return anything
>>> other than 0, is when FLASK is being used and a domain whose type is
>>> not allowed getdomaininfo is making the call. XSM_HOOK signals a no-op
>>> check for the default/dummy policy, and the SILO policy does not
>>> override the default/dummy policy for this check.
>>
>> Your statement sounds as if xsm_getdomaininfo() would always return the
>> same value for a given caller domain. Isn't that return value also
>> depending on the domain specified via the second parameter? In case it
>> isn't, why does that parameter even exist?
>
> It would if the default action was something other than XSM_HOOK. Look
> at line 82 of include/xsm/dummy.h. XSM_HOOK will always return 0
> regardless of the src or dest domains. The function xsm_default_action()
> is the policy for both default/dummy and SILO, with the exception of the
> evtchn, grant, and argo checks for SILO.

Ah, okay. I didn't analyze all of the involved xsm code.


Juergen
--------------87nHJ0dLWt40XxySAT8iMUHX
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------87nHJ0dLWt40XxySAT8iMUHX--

--------------yeCflqXKhgW6Hjn7xizoBghT--

--------------wahdxtmcljEBJZ4CSpgT7zTZ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRREF4FAwAAAAAACgkQsN6d1ii/Ey9R
zwgAj1wqv9R7OHSXYkxFA+Ct5IcW2++ciFYuVCoqcZK8qy67jSIVBWxUY9mRDQZ6pXHowls5Cm7r
bJvxVKkJSyTQys0Hiv+SoG8LQQoX5dkfLAqG8AbnCCHyGhH8rMN6ABpYBfLGTDi+gaxDqRRFglEo
bXjW4SJiUS8AKrcUEemjJ/MvuYRfjW0ifi5r9HrdwBDwhI5sVKX9Sz5VRBNxr2yFJmjcp/Uaij1L
/EVVvMn5M8G0M2KuFaV/Oz6tsG+iJeP/MrG+yCY2QmyPXE6mDyRdrrT23o2LOIV7Rum4wZMOKYF1
to/KQQOAVZtm/zoSZk7cPOrnH7Yoc9P4uyJeyAbdsg==
=8Xqb
-----END PGP SIGNATURE-----

--------------wahdxtmcljEBJZ4CSpgT7zTZ--


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:36:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:36:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528646.822075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqB7-0001ST-5f; Tue, 02 May 2023 13:36:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528646.822075; Tue, 02 May 2023 13:36:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqB7-0001SM-2n; Tue, 02 May 2023 13:36:25 +0000
Received: by outflank-mailman (input) for mailman id 528646;
 Tue, 02 May 2023 13:36:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d5QU=AX=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1ptqB5-0001SG-Cx
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:36:23 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.219]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 52adf646-e8ee-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 15:36:20 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz42DaJdEX
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 2 May 2023 15:36:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52adf646-e8ee-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1683034580; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=bWXtk3LaVikoIw3D4mTe++wv8K7YiCNyTJAijZzsl1x//oQlxBWmeA/y839pu1zeve
    2xYSLnDjVzTOp1RYQXmq6lxF3w3E1lroNSWHzKCYUqfnPo/jldjrR+Hj0BkgBVSIabUD
    tGoI0W7UIEodIQUCW56oJhPvTPi8S++AHf6ZbHRO46sNORdB/hozp/S92AQ0fyyAEJrA
    TnH2Wjz7+OQac5BDRcA5KApKDWgnQ8AddGNvOKykaWVTSC3T9WUHZXp0nm6r4N8Auvwi
    SbeV920VjluHHx1cmBeQj7RVohP1HZuoKarFKMevu8XiQvtQBtWAk1CSII2w2rMU8LLG
    Jx7A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683034580;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=cPq2GgRUcxP1kIv+LHSUq3CZsq05cQugJJqfWfYX7Ek=;
    b=CvlpxSVaLXaXOBmei4uvAak3EaNjlJOIk6+omxUK53Ggy63xhHqtGl0xb0Z/mvw526
    /jQ1dObqXoijFN+qix2FdXS89hFPK5hGVdoCQFMmUbQho87olrXCbzYfZBcyMbhN/lTO
    Gpr7JSmCwS/FhGBMtSkVRY6VfeXkCJkliax/c+1hmITUEJb6CY9P3C5iolwxbX4e4RAk
    E9EBZRcPjynuKANdm2QZEVDKmEKhnrKe+3X6BebHvJPz9UIVYXneu/4cXqPiQt8wjjmF
    dml7RKbzk37lvvsjYpaWflT6Exf30K69oadTAPZG8EBtwAkBd4mxdNcwYrYw7AIgHP09
    5O4g==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683034580;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=cPq2GgRUcxP1kIv+LHSUq3CZsq05cQugJJqfWfYX7Ek=;
    b=VezzypDt25o1ZryCvqOiPFZ3/v5jBOqw8o921SZHdYPoR9piaQTWrbuB73uYPjIV6p
    Y7JT1cLzm5g939/z1nkzEfjfoJxaxd4yQ1MXDTud4ZivFyqOIawda4lpw4rskD9Qz9gv
    V++kbzlVGSl0CNqZ+rD0J7jFYnfetRtD5rmG2Dm4l4ymTySkoTn+EY83wN9rBuFp0YK4
    MHdFN7pudVrhhRIeMj83Gii+JKTLW0UDlEsGHYWbmVsh+HVkgDyLIOW7YafD7uIzuKxn
    XtRvClIpgyY96ppJiUzwGU63r0QN4rXIFSFOsK0yALg8Gt1UNLT8JwMXgrv629IARO6e
    rrug==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683034580;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=cPq2GgRUcxP1kIv+LHSUq3CZsq05cQugJJqfWfYX7Ek=;
    b=LniSeSGi2qWNgsEkusvsX0EDSQJjjKmGlOyHkbUV1r8pCv5au062i7mFI61jpscHuy
    ppg6ripXLep36+28rGDw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR5BxOIbBnsc1fym1gFvNQ7EzMpH+yFJc4aADp/8Q=="
Date: Tue, 2 May 2023 15:36:12 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: HAS_CC_CET_IBT misdetected
Message-ID: <20230502153612.431dfc08.olaf@aepfle.de>
In-Reply-To: <9353d3f3-563b-ff88-0b0b-dfa2bb03522c@suse.com>
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
	<43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
	<20230502133313.2192eb99.olaf@aepfle.de>
	<20230502140444.1dacdb33.olaf@aepfle.de>
	<9353d3f3-563b-ff88-0b0b-dfa2bb03522c@suse.com>
X-Mailer: Claws Mail 20220819T065813.516423bc hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/nhNf/G1jtomN+3AIav+pwpF";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/nhNf/G1jtomN+3AIav+pwpF
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 2 May 2023 15:29:19 +0200 Jan Beulich <jbeulich@suse.com>:

> Getting this to work automatically is a continued subject of discussion.

I think the only real solution is an out-of-tree build. Essentially every single component needs to detect a toolchain change. This is unrealistic.


Olaf

--Sig_/nhNf/G1jtomN+3AIav+pwpF
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRREcwACgkQ86SN7mm1
DoCA1Q//QMUkhuk8WFuiQaHSttQhZWEbRnZL/1vY5c0ny8LLPpzWVEYOiMRPSI28
SN9XQJvmcZxuwbko2rrEbouxa1l+KW5lUltjpq6LMBhR5x/ELWJFbOugVWx9oKyN
bSjz32XPQl2ohO8kuzwFwAAo5byIZBrGmGW6KB1TdCwJIAXl4WA1l5qoJ41odqpu
zlWwm1X4PLwA7UlXqclDefXUWsbmVTKtV2amT9TJZdLWcUfBTybymeGUBH+DMWGi
FNSqUWNwROeHym/byCleYzbR/QTxwCDh09LHriJwK+imbvcHrpV8skoNzgaZ+PHZ
KkFohDyY+uIqZP+wIzuDV192M1m18EhV3yq9nM8gMAUq5ZrTWiuHPwS74tuO3yDO
rag0feQ/tSuVAMnv4HJ/DDsho9ntZzD3F+TlJYwohXZPEo+tXb0Zdq0dtuCyTjYU
8CYWSXeOHvsTO096rYzHbye/FaQnT4anfPdp+VaqTBUDxo80L6kzHEmdYwpiX2oF
O7RkaMmSHPnDfn2GZhKD2DKau5bgOTRt+tLaI9mmo+JxKOiq/G+NUymnfO1uXZFu
wvw9skmH+EkpIkRh66dpW1NCMQBwgSsqcVtujYor6mASXDEWq/pKq3Ttu4aZFXxE
D3Q6G62fczLZlPOr5O8t2muJqDLf5h3H0fpzhMCVdSyfO9Di3m4=
=E3nE
-----END PGP SIGNATURE-----

--Sig_/nhNf/G1jtomN+3AIav+pwpF--


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:41:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:41:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528651.822084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqFr-0002xp-Nh; Tue, 02 May 2023 13:41:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528651.822084; Tue, 02 May 2023 13:41:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqFr-0002xi-L4; Tue, 02 May 2023 13:41:19 +0000
Received: by outflank-mailman (input) for mailman id 528651;
 Tue, 02 May 2023 13:41:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptqFp-0002xc-TK
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:41:17 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20619.outbound.protection.outlook.com
 [2a01:111:f400:7d00::619])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0267da2a-e8ef-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 15:41:15 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8169.eurprd04.prod.outlook.com (2603:10a6:10:25d::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 13:41:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 13:41:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0267da2a-e8ef-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TSycrKs8lQBKsfgyb991ekFsr2SrL15ZgbVkZ3jHHTdDClpyiKPhuyjagJB1kRquYw1WldgMrojryzDz8yrhJsZE1SyaKiqVHaaZ1OVU2WJkrNwE2H3XKEUFbIiA5zMOIenTKhlearc/b0CJTPq4XAHCHh1NHEQETom91+0WgQKM/yY1UlGKG/Jo7sROucHsEu+I4gaDQir0VocYthp02QB3iD7zw12UsvjYEa9tkraVYs5ikqe0Z81qdPKDWqBVyYsx61gsHqe58SQ7t0smHtFL5KtbwAN9UcDv5ZM6Q6VL9HXo6dBhAYZnzLBNMxaD0oMOsIyiWqu92JO0D1PWew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=X1EDzMdMtmgMtrxFZhOwm06NyGinNaKMmyaW0YO/3FU=;
 b=J5viAt7NWkC8Syr5cuYg36snwqYtH+X2zwFtjEyY06ay6TQjLkbXhL+4ybMvw5GQ2xKM8WHAeM00cmgLX+O4j7GszJgAtBFlWERWDKeQUeHyILgDthFrgPW8/YXP5DdLnddI4UOFBI1lfO+o+F7phh3yCei8oFlOU0xig1zM0VGFgajjsNvCln309494znxEoraT1gMdKbuaELNpoEWvyju6KKwrHJgPh61jm2mq5/A9x1gr+U6s6ich1o9D5uMx4CM46gpN+dubw7/mPiyRH799IQGLIskYBeqeJWDaSU/C15rRaLjTjY6peAlM0uxTZeIRC0jU1FUbOoZZ/oga/w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X1EDzMdMtmgMtrxFZhOwm06NyGinNaKMmyaW0YO/3FU=;
 b=vDJfhpFAli1ubyMpk19O+7KuHYSRlTzSD7RVENtuMS0J3SLSj+bsUoBAdXKQI+VVeCZOzi2tgPXOZLAhz0e0aeg4Rab4GPB94v0XpMuHLihniCKmz7KJ+ABftbTqZy/foykUvn9t85FJjGllJNoA+9hfot/cnibYRbpyKUwmi2zwbErjzxz7hGRYQwfeMSdMAMRvWm65JwAs47drrTMYtt0ml3Nu7vFXlqBXN9NXMTl7KSb6DCP3x6FJbMx8M14XR1tI8dQ9EFWQ0WcNee0o5Ual9+xcboyptGeewLnEhoMYItwTWDIXL4PJ/3PsbuY7X0yuI7ig0DLuYlC65OM/rQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f9e72c6c-9915-9995-06c0-0a0ac037bde1@suse.com>
Date: Tue, 2 May 2023 15:41:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 2/2] acpi: Add TPM2 interface definition.
Content-Language: en-US
To: Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
 <20230425174733.795961-3-jennifer.herbert@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230425174733.795961-3-jennifer.herbert@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0146.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8169:EE_
X-MS-Office365-Filtering-Correlation-Id: 6102cd90-c77f-484b-0d21-08db4b12e594
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nTwGzFc8XMcwwZecidSVWDlZEpEAa+fdr9SFYoUDciq2gDciM4IPAWFE1YM+BDmVrYYPB4pkRxNOlu/zLTMrpl+4POq88LxJuYswvKDptKsd8hWGQ9llnGSjdxhrxZtVxwdEPC2zAdvkvuy419ol3dzBjaOesSriJqE+pKtA/kIOBQIvohfUzJzfhQ08SDR5kZ1T+9LPH7a8Aom3OMMJJ1ctdfD2O9PaOx5asv1Ega0hVOIt9Zvb/KH7vDydMSzVDZe/4QdBETVJNhgddlSMiUAJWcL4frMrArvZkTr/ol6PqtzMhgPcC9qXo6tJzNwszP97zxfu1vQRkNuIanDAxerM2ZdFXyjjmsptN/NU+p7EDQJdmmj+QD69SRs1XspTCXj7arLVq/I0nIsixl/BFGfOKBIh32YN7hJGiy3c+66BzvyJxeqc8v5HVZlQB84joGAOdsq+fHCsV6EEQT+szDVA2XrN9o2NDWpgOLfNKk18rUuml/QHDjLdZ7mSBvSq5Ptyk/hQAW31TL3CM8aAWpY7azynd0Dn6bf6UbiRp56coLFA7ac1DAo2iEUrsrcWbKyetX91FaaIFMveJ9kAgnpDfZaXLyjSetuFN2HvQgs9LX6aIzgIEHDcFDoapjkymBCXsJxX/N/Wimv4kwv/8NqCav3OPO6DIhOo0GqLPZ4=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(396003)(346002)(376002)(136003)(39860400002)(451199021)(31686004)(54906003)(316002)(4326008)(6916009)(5660300002)(31696002)(478600001)(36756003)(6486002)(86362001)(41300700001)(66476007)(66556008)(66946007)(26005)(186003)(2616005)(6506007)(38100700002)(2906002)(8676002)(8936002)(6512007)(53546011)(2004002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WnlibGwrdUFuRVZtMnNsd0plNDUxelRrVmJDV0taK3N5TDZ6NGd5eE5oTllL?=
 =?utf-8?B?NUVtaVlIZGU5YkFFTXZuTDNqOFg0YnA2T0NiS2ZMTldNV096ZWNCR3dQVjM4?=
 =?utf-8?B?dGdnM1VDL25jdXMxVWt5cjlKaExZNWM0MmRHV0t5U3pOMWk0YmY3bDBUSTRi?=
 =?utf-8?B?MUwrK3M5UkNGWEU3aW5WQ0N5VzY4YkUxNEJhQW42azMrV3dJSVIzd3dCV2do?=
 =?utf-8?B?WXJNK21VZ0QrRWZ4Z0FQK1l2TFIxNTFJcVNtQU5zUG45RnF4L1NFd01IV0Z5?=
 =?utf-8?B?OVJkRFNTVHdGaHlMdVAxZ3pLUEx3dDMzN1gxQTdkRU5uNXhDb0dRM2ZZbklC?=
 =?utf-8?B?WkVGVWkvTzlTcm5BSC9ReEJ1NktLT2ZJUURwTkVJM2U2MjJhT2cvM0FCY0V5?=
 =?utf-8?B?aHpub3ZrTWhvOGtuWUtIWmppNHRNV2kxNnFtTVRvSDBON1NZRG5TR01WeXRz?=
 =?utf-8?B?cE9ocFNKa1ZxbkFxYjZyU3NJSVdMelFHMVhJaG1BQkluTS90bHVRdE4rRmxt?=
 =?utf-8?B?dkxXUEVpallzZVhscFhPMVM1VWswc09mV3kxd0NlNUJMRXBOTEhsTng4aFgv?=
 =?utf-8?B?NUk4L0tVQjRqT2l2VituUE1FMU1TcDFtWkR3MGM2Qm8rOFZMS043bnlhU1I5?=
 =?utf-8?B?WEpud1ZBWnBrTkJEYWl2VmxhdGtwby9lYkYrcVE0R0R1TnhuV1BXaGt1bVE4?=
 =?utf-8?B?ZmtLcmlTVlRoc0NDaFdYRnJLcXJ2RDI2aDJvYURHL2NmN0VxRnhCRXY5a29D?=
 =?utf-8?B?TGE3VUh1UW5rdzR5anZVOXU0WmtKeC9tNGdrQU1HWVcxR3QxQXdmWC94cDd4?=
 =?utf-8?B?aWpZdkdiKy9yWFkwalRVemdiNXFicHdkMUFFWDRIb3dubmt0ejc4dG1ZVUtZ?=
 =?utf-8?B?eTlPNy9FVjRDemR1Q2lWU2VVaDFpZllsOHQ2ZExSMy90eVhTSnhiTS9ZRjBu?=
 =?utf-8?B?Tkx6QStSZ3BxWW90Q3FZbVFTN1lGSGJTblg1dWJPSnlNWTFOZ2NuYlhvY2Q1?=
 =?utf-8?B?UmkwbnU2NVg3cjdGV1U0VTk2dFhmSVFTWXhKU0hDSEM4NmZKNkI4NDJhM2V4?=
 =?utf-8?B?SitxMXdETXRiNm13STVNRkZSSmpTN3huTTJIVnd4dmFtNFY3UWZGdm1iOXFH?=
 =?utf-8?B?SVpSMzBnaElyakdUZFFsT0FYMWJ4YzhNenBZSnQwQzYrR2hndVMyUVUyM2tM?=
 =?utf-8?B?Qi9jVEg3QktjaHlXZ3B5NVBScjg3VG5QSzVCWFRQSEVSU0szNUorMmJtYUxz?=
 =?utf-8?B?RENhUU5LTnB3cDR5QTU0M09EUkhZWHFPRmNKOWZ2MHlTbmkrai8yVC9XTlhB?=
 =?utf-8?B?N1pPbEIxWlhtdHA5Z2xPNXNCRDFQWThuTXVVWXpVMml0OFUxdlFlOHN5Sk1v?=
 =?utf-8?B?OXFEQndxbSs4Wmp4T3g2NHpyTU9DL0lIREttTE5RZDRTMkVZbWducWt2c3g4?=
 =?utf-8?B?bUM0WVk2Umo0TjluWjZpazB3WC9iUENNNEFPZWptR01CNllHVTExbnUzdGVG?=
 =?utf-8?B?bnJ0WS9BRW5JSGc4V2VpRmxsdzk2VHdEdUloemhFWkNiZnFYUzA0U0UzWjBB?=
 =?utf-8?B?U3d1R091SUJJODdHOWxZL0NaNWNwUWRJaGdSa2VZNWVPeldhdkhoLzc1MFNS?=
 =?utf-8?B?ZXorWTNMT2Q0SVdpWHp3R3V0VTduSlA0L2hTRWl6R1JQVy9NcElTMy84WGlJ?=
 =?utf-8?B?Mis5a09TMzFyVzZQSXVhY1c4YTNDTHdTTzhHVzNLWUVES1UyLzZyVDFCanBX?=
 =?utf-8?B?UGtPRm0zTGVaK0picVl0clk3TDB5aFcvRi9Xbk5mV25JbVBYNmpWaEdIdlZY?=
 =?utf-8?B?ZHovek4zSUtiRkV5dXk4c2hHMFZycDZJcHh4MG9xdSt5bC9UT2RVS3REbzYz?=
 =?utf-8?B?K1JZbHRiWGo5VzNmekFmcWJIMDJQSWRuUUorNmZRQmIxMFN6UEx3UThTazNa?=
 =?utf-8?B?Sm5SNUl0RDl3L29PeXhhL0Z5RzVaQkRoMFdzMGdGTkx5WldmUXNudDV6TkhG?=
 =?utf-8?B?TTNWd3ZIcVo1ZG9xSFJCVnVuZXp3emdJMWlrSTB2cmQyODJGUzljTVpWcXlP?=
 =?utf-8?B?L041MXJCRktXYndoQnBvYWNDK0o2R2YrZXBzbTQ0aGFjb2VZNGdMNEQwcnhm?=
 =?utf-8?Q?VC8bocAvUh/8UBZZUoE11hMcj?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6102cd90-c77f-484b-0d21-08db4b12e594
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 13:41:13.7821
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +tVatW8L8TKqLqPcSEIGrPBH3mXSJqj0FCvRx4IBYzYfM1/qW3ggdVOwvR/e9Gw6CETAFjL+seg27PdnxIk6hA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8169

On 25.04.2023 19:47, Jennifer Herbert wrote:
> --- a/tools/libacpi/acpi2_0.h
> +++ b/tools/libacpi/acpi2_0.h
> @@ -121,6 +121,36 @@ struct acpi_20_tcpa {
>  };
>  #define ACPI_2_0_TCPA_LAML_SIZE (64*1024)
>  
> +/*
> + * TPM2
> + */

Nit: While I'm willing to accept the comment style violation here as
(apparently) intentional, ...

> +struct acpi_20_tpm2 {
> +    struct acpi_header header;
> +    uint16_t platform_class;
> +    uint16_t reserved;
> +    uint64_t control_area_address;
> +    uint32_t start_method;
> +    uint8_t start_method_params[12];
> +    uint32_t log_area_minimum_length;
> +    uint64_t log_area_start_address;
> +};
> +#define TPM2_ACPI_CLASS_CLIENT      0
> +#define TPM2_START_METHOD_CRB       7
> +
> +/* TPM register I/O Mapped region, location of which defined in the
> + * TCG PC Client Platform TPM Profile Specification for TPM 2.0.
> + * See table 9 - Only Locality 0 is used here. This is emulated by QEMU.
> + * Definition of Register space is found in table 12.
> + */

... this comment wants adjusting to hypervisor style (/* on its own line),
as that looks to be the aimed-at style in this file.
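
[Archive note: for readers unfamiliar with the style being requested, the hypervisor convention Jan refers to puts the opening /* on its own line. A sketch of the adjusted comment (not part of the original patch or review):]

```c
/*
 * TPM register I/O mapped region, the location of which is defined in the
 * TCG PC Client Platform TPM Profile Specification for TPM 2.0.
 * See table 9 - only Locality 0 is used here. This is emulated by QEMU.
 * The definition of the register space is found in table 12.
 */
```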

> @@ -352,6 +353,7 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
>      struct acpi_20_tcpa *tcpa;
>      unsigned char *ssdt;
>      void *lasa;
> +    struct acpi_20_tpm2 *tpm2;

Could I talk you into moving this up by two lines, such that it'll be
adjacent to "tcpa"?

> @@ -450,6 +452,43 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
>                               tcpa->header.length);
>              }
>              break;
> +
> +        case 2:
> +            /* Check VID stored in bits 37:32 (3rd 16 bit word) of CRB
> +             * identifier register.  See table 16 of TCG PC client platform
> +             * TPM profile specification for TPM 2.0.
> +             */

Nit: This comment again wants a style adjustment.

> --- /dev/null
> +++ b/tools/libacpi/ssdt_tpm2.asl
> @@ -0,0 +1,36 @@
> +/*
> + * ssdt_tpm2.asl
> + *
> + * Copyright (c) 2018-2022, Citrix Systems, Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU Lesser General Public License as published
> + * by the Free Software Foundation; version 2.1 only. with the special
> + * exception on linking described in file LICENSE.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU Lesser General Public License for more details.
> + */

While the full conversion to SPDX was done in the hypervisor only so far,
I think new tool stack source files would better use the much shorter
SPDX equivalent, too.
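
[Archive note: the "much shorter SPDX equivalent" of the LGPL-2.1-only boilerplate above would be a single-line header; the exact identifier string is the standard SPDX one for that license:]

```c
/* SPDX-License-Identifier: LGPL-2.1-only */
```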

Then on top of Jason's R-b,
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:41:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:41:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528655.822095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqGK-0003P7-Vk; Tue, 02 May 2023 13:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528655.822095; Tue, 02 May 2023 13:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqGK-0003P0-Sl; Tue, 02 May 2023 13:41:48 +0000
Received: by outflank-mailman (input) for mailman id 528655;
 Tue, 02 May 2023 13:41:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MJpR=AX=citrix.com=prvs=4790f2855=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1ptqGJ-0003Oi-8Q
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:41:47 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 135ae9a5-e8ef-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 15:41:45 +0200 (CEST)
Received: from mail-sn1nam02lp2042.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 09:41:35 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA1PR03MB6338.namprd03.prod.outlook.com (2603:10b6:806:1b5::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 13:41:31 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 13:41:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 135ae9a5-e8ef-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683034905;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=lKAkKK7RUMeCbc4AD2x5SGJG49S6l7oPERFgFVGQoD4=;
  b=RH3L8AKwlHjJXH3fTPrWKBfXYfMfiheB6l0IQr142o7Ef9g9W1kd9PmG
   rOpTbu6PFBZ1f0qZbwDV5bkWMqJulhCFRkuR6sKHsn16BzQH7riOCQJt9
   k7FngAeq68b8mEoC3QDcGlfQTqQAw9MMxfuLHKgqn985DmVFfedAgM42N
   M=;
X-IronPort-RemoteIP: 104.47.57.42
X-IronPort-MID: 106903496
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:YLOIWq7Dtrzt9L4Xi2w/CgxRtBbGchMFZxGqfqrLsTDasY5as4F+v
 mpJCmzVafmMZ2egedx3Ydy3pB9Q6MLWx95mGgJr/ngzHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7ZwehBtC5gZlPa0T5geE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m6
 sYfdgBQSi+6o827kO68Suxpo+omBZy+VG8fkikIITDxK98DGcqGeIOToNhS0XE3m9xEGuvYa
 4wBcz1zYR/cYhpJfFAKFJY5m+TujX76G9FagAvN+exrvC6Ok0otitABM/KMEjCObexTklyVu
 STt+GPhDwtBHNee1SCE4jSngeqncSbTAdpMReXjqq806LGV7ncKDh4wDlGAm/y0q1ThWP1na
 H0MvQN7+MDe82TuFLERRSaQp3qNsDYVVsJeF+B85Azl4qje7hudB2MEZiVcc9Fgv8gzLRQ62
 1nMk973CDhHtLyOVWnb5rqStSm1OyUeMSkFfyBscOcey9zqoYV2lRSWSN9mSPSxloetRWu2x
 C2Wpi8jgblVldQMy6iw4VHAhXSru4TNSQk2oA7QWwpJ8z9EWWJsXKTwgXCz0BqKBNzxooWp1
 JTcp/Wj0Q==
IronPort-HdrOrdr: A9a23:s7ax1a8e25NgaOudALVuk+AiI+orL9Y04lQ7vn2ZKSY5TiVXra
 CTdZUgpHvJYVMqMk3I9uruBEDtex3hHP1OkOws1NWZLWrbUQKTRekP0WKL+Vbd8kbFh4xgPM
 lbEpSXCLfLfCVHZcSR2njFLz73quP3j5xBho3lvglQpRkBUdAG0+/gYDzraXGfQmN9dPwEPa
 vZ3OVrjRy6d08aa8yqb0N1JdQq97Xw5evbiQdtPW9e1DWz
X-Talos-CUID: 9a23:mfuE2W3bKdM+ukjX8DFrE7xfBfoiLE2A6EjpB2C1WTpuSLiQbHGV0fYx
X-Talos-MUID: =?us-ascii?q?9a23=3A+mq4sQ+7Oy0ZUXCSnGgQDqCQf9xx5aeIU0o2q5A?=
 =?us-ascii?q?LmtPbNgNIeAaFyx3iFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="106903496"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ItTglMvh1w0Z5CY60gA0QjjZDMOjVDrteeoYDqDrLmOpJ498surCDr1Pn4ewDExldSXVM3/5t6nPbwDr3QwepuTG2bI76WZGptKRGFOrB4LrSVmC2RYbZG0tOu2dkCuCY+J2RfmdyH2chYtNz+QxsuOVkckI0bIEvs7RTskA7W6mD66CiXu9VH+42EG/iVB9gHSkpcnNAkToGoJQ9e85CFnKlfZZiNGyt9q6FHCyA8vJ7QutIL1UN0YjmlsnXBt1VRV19KejFyLeXcGUo8RfMONMcQ59cJ7plaJQmNIyaIlX9an8mEbOj+pdKKJYbYrvHFLBvse7F0xQBpRJGPozGA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lKAkKK7RUMeCbc4AD2x5SGJG49S6l7oPERFgFVGQoD4=;
 b=KB3z/f5kxLQjZY/jVvgyShsigzMcuKKHuUERMR1ddyqdec+H8e6yy/l239XYV0HciJOtHT0iPKBRkjHUR8G2eEXikjJEvQMp3fp9PWRo4AvkaXzJD3GsZsDFzuqN3OcUXn2GIi+f1rlnAw8EpL+r2dhERKmJQnhMc+JLGD/GIenAFQ1edXJc6vwnyF41g+vafdrwODwC3PeownjGdkDvAC5VDPvWGyEPSRs0H6KH2x56F2ZCkQDQYkmcyJln6ZlFYLMDNZGjFOfZzX5aFsbpCxYmlYR8dxK/YGRQj36ctKZTzi1uyWY1nqM0Ei7TvULTKlg0SNP3xzGVeCASJGdu6w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lKAkKK7RUMeCbc4AD2x5SGJG49S6l7oPERFgFVGQoD4=;
 b=nNE2IflHMCFcG/t+AaESVLnRct7OqRRHGH+2V9seOzGNoCN0SwV9MMFLBEopuBP0qnoTrfz/VH+0fIvcJmngFYUXWayPWrjXpVM6jxz9agsG7UlgzeWZXzbRE28pD9VHwMiyehxkGR85/oAhoFfYT8YSnX2r474sRQJyejlbXxg=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <bb07dd8a-598a-564c-3dbd-457e982fd5d0@citrix.com>
Date: Tue, 2 May 2023 14:41:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: HAS_CC_CET_IBT misdetected
Content-Language: en-GB
To: Olaf Hering <olaf@aepfle.de>, Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
 <43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
 <20230502133313.2192eb99.olaf@aepfle.de>
 <20230502140444.1dacdb33.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230502140444.1dacdb33.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0204.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a5::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA1PR03MB6338:EE_
X-MS-Office365-Filtering-Correlation-Id: 7b4bd495-ecbb-488b-8c38-08db4b12efb7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TB5yhIKEPrLF4PaxMo8jcPd/qxsI7oiYaNRuUB24yd/tLNgTy4FQzNrbI8r0qS4kSOnYqclIp8JBVLEjNlwr3csDdKytI33JOUvREPLMeSAf1x2W8djcWiZyPAo4tGGTNK+JgbUt3ym89fNpaSSzl4ERBryAg/OZYxP9Joa3O/05HowLHHGhCyBx8Vpq9+rXmTs/tXGQt+gVKx+FwYDow3fyq/+GmQzc60PoDO5WBPLwgRjvwRR0cBFQwwj61Damp7hw88SWfW+3owK4Zv2CMg2CNCKca5yeOYQycotyc95jrQEknmd1liRYsvdsuoRGyQSqTZkxcmdLypAiN+jUpjAPAYe28qTLk2wfCIKU908FhEUfzK1Fz2VQhLsocTP+ExnTQPPpXlWe5sRL7BmL5lECGAbn8rDY3YIyMnUYWYED27i0FmfvVetR9sTfdimffVQcY/WqG34dPAUCHsPLUEPPmK4FXr7abfwF60rxv0d+QDknkBL8UuHdwSiCsChoxoZy0BoVLhuy0180vZcN7O/Uhe2bEwxr0saPNBy4xFmk779gm6aG8GFzit4ys5M4uuHhRbnaIjXn2B9VF6qYXi8kYxG75P+nvGDsJFKmXZF5/OpH4kAOcxLqNL0HqH7azKR/GFvC19xQ7P/2cvJMvw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(136003)(366004)(346002)(396003)(451199021)(66946007)(66476007)(66556008)(82960400001)(316002)(4326008)(6486002)(478600001)(31696002)(110136005)(36756003)(966005)(86362001)(186003)(6666004)(53546011)(6506007)(6512007)(26005)(4744005)(5660300002)(2616005)(31686004)(83380400001)(38100700002)(7116003)(8676002)(8936002)(41300700001)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WlNxaXplblIzRnN3RmJVYXBwVFcyN0M3VG5XaGU4cURyd3V4b2JjMklUWEY1?=
 =?utf-8?B?NFVMVkY4QzJ2KzlyaUtmM0VuQUR3dDljLyt0RmpXSXp3VUkwL1R6V2RUSjV5?=
 =?utf-8?B?U1JiZHVKSzE2ak1mcnN5YzJ3OW1aUWgydVZNaDVjV084WTBIemlpbFg3Zmsw?=
 =?utf-8?B?NENta3FaVkkyV01TeGVaMEZpWHJTZWtycEZ6L0Y4dTgrL2VRMjQ2ZGV3NEFE?=
 =?utf-8?B?NS8rOFVsM2J4NHlOODQyQnpLNjZDWmFPL2hFYnFjUDhCVmloa0dCSGZWQ3M2?=
 =?utf-8?B?YmpuZWhIQ0FpTUlGSWVzTkJyOCsxNGFTNWpkdlZyaEZOY21jYjZHcGszWkUz?=
 =?utf-8?B?SGdUMFJFaCtwaW9HYzFTMWlpVUtLaUVLbjBzSkN3aUloeU1ReTFwakg1d1Bq?=
 =?utf-8?B?NnRrN2hDTjd3RVpXT0hnN29PMnY4VkdyNUVyYTRnSDFOaHFSLzBZai94NDhL?=
 =?utf-8?B?TjcvWkFhWjVzRXdhRzdvYjNQdFpFWmJlcnRWNjdDeVJFSi84ZGJWWmVEVWFp?=
 =?utf-8?B?R1h6TUlRR1FOZDNsMVlxNXJDSHNTMUdBckl6Rmp4NnhuOS9NMDFMbnZMbUhR?=
 =?utf-8?B?eDBKdFZuek8vaEx0Z3R2bmVDRU5QMmsrS0QzNGwzbEV1VnY3ZzVETFVrbDhh?=
 =?utf-8?B?eHhpaktFNnRtanhvYS9GNzlvMFJieitPTkpQQzZMNDlMK3hISEljeVN4VXk5?=
 =?utf-8?B?Tjh1Vk5QM0dTaTFXS1BvLzk1QlVhYmVTZ3lpUEVJbkhtdlp5aURIcUpOb2hQ?=
 =?utf-8?B?RnpsdTJvUHREdlZCZFM4RFhudlV5aE4yQ0xvTFVsNzNOVk9nOVNObW1pYUV1?=
 =?utf-8?B?Yzh4MEhkM2diaVF5KytHUGxtVFZUYnlWZ2JYMTZveUF0TndzcGRsV3dWWFds?=
 =?utf-8?B?R2hoMHVzQjVtMjVKY0NZV3lhN2lXTFBwdnBsZ0U1ZmIzbGlGT0s0QWI2SmNH?=
 =?utf-8?B?STlMRmdmTlNEb1hQeGpyMHp6N084RzFQVUdYeW5sNVZGL2N3bjgrbHFkRmNz?=
 =?utf-8?B?Y1puUGpNUmxMZDZpUDVSL1dQRC9oUWN1eFdlT2R0U0xURVlRdXNxNWFpVkJW?=
 =?utf-8?B?cXRCbS91dHh2MExxT2tYcDdYck54bGVEcDdxN0V6dHk1KzU2Z0QxYXFZamlB?=
 =?utf-8?B?aWc0eWQ2d29KOUVmSlRzSkJRK3RYYWFuaThwcm5EUmI2YldSSjhCSXBDRWx1?=
 =?utf-8?B?TlJQUWhKRGdxZysyS3pieFlEU2g0bVMxVno3QUprVTJsMjdvOXEybUI4SVph?=
 =?utf-8?B?cjM0Rm1pMnk0ejBZc3hvVDN3eXJWd0JoMW5zWWltRzAyWUd4bkxYMzRBaWhO?=
 =?utf-8?B?Q2FMek1RbEp3WTJ5alljRHRmN0E1cFgvZUZ0cWMyckYvN3hxWnNDMzYxblY1?=
 =?utf-8?B?SHc2MXpNaGNDMFNVWDlVdTFXUnM0QjVRM01VcmRMdGlsSjJ5aDdSRCt1R2JT?=
 =?utf-8?B?Y0JNZG5tQ21pYWIyY293eTVDQ0dubElCdCtFeXhoSkdqT2Z5Sm04TklvcFRC?=
 =?utf-8?B?VGEvcnpPTE9VaXBhejNoMnYwOGszWDNRMC9WSTlhSzhrOWY0clFSWHc3R0lH?=
 =?utf-8?B?cGFYdWM4TFB1SHNKSENlNUx2OTFGRzF1VWowWTJNdlFYWk5BUGpGWk5UaGpv?=
 =?utf-8?B?NE9VVitNTFpVWmtENTR6ZGtxZ1h3dGpEMWo4S2FnaHpZeVo1ZndGd0RleWZX?=
 =?utf-8?B?d1IxWG1qY1A4U3p4bS9hQ29rcmFaVHJpdTFOb21uaUl6NXo3RWpxVEdiaURz?=
 =?utf-8?B?bSs1dG4walJ0d2gzYjZiMlRjZXIxbGxYUGNiYUFtR3lQa0R6REJ1NHBxdmdl?=
 =?utf-8?B?dVVTdEtWYjZyTVNYVUJiVFZveW5JN05UTDd2S2pIZzZiejdnNEJHU0NyWW1X?=
 =?utf-8?B?MkhwNVQ5RHZuQlEvY2x1OFpvUitVbWhHK3dhM1llbnJOOFpOMXpFbnZFNktM?=
 =?utf-8?B?NHZxc0h5QzRYTW41Q293c2t0M1h4Y3Z6clV2YXY1Z0xsajFTc2R0Wm5GSUxn?=
 =?utf-8?B?SzVKc0x3NktTLzFua01ZRnVyWm5OejFRWGxxUGx1QVNuSjRYb2NwaGpDSGMz?=
 =?utf-8?B?c1VLcTlZckhFaGhBS3BzMlc4Q2hXbC9xdmloaWtOcVR3anFTMlE3V21DaGtB?=
 =?utf-8?B?VUJZeUdSb0J3RlZXOHlsRGRsMzhFQ2xORnBRV09sTDVXdnNkRW9lTkVic0l6?=
 =?utf-8?B?Z3c9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	uRO9EuvQYu42DO6y07pantXZN/4Azqe3rl0k9wt+YYXJFU1MosxyTvOndg+JnQ/6XpZFX3dyxn7iI6KP33JgJf4hROpgFXq+rMf+fnevTDh688wfkUTjkHQNgw3EtdZGaiNaaP+PcFGrxxpgyZUt6jSk1WHKpB/kiHY05XhYSDwfj/tFKHZ3XdUED80cmt87ixLwdYMgXkMv+tWnjdoCmwc+EmOQZqIRiLxDxY7mDImwHd3Fak8z4C4rQb6ouVdBMVVNY6nkP+R50YWQywA+OQKXvOQtRG8YClfm3Meadf91lBuH9jzTUuhnBUK27FvK8Hxtzaza7/wnAoqxlWWTLAi3zEq0JBVvKy7glNZVb/iwQ3iDknW7sGlXYBpABtcIxy1qnIMiU8odQGCQzYPmTD7tBxAoqW7CZDSf0O5NON2Y7KaTqA3mwJ3lQJRQw1/+2+pvWptTKssFw/An061C8NLuCmrn0ewhnIA0reAtkYZ8SoT/VHIQ1RZOdkEk9r0p1ud7HtRyWaCpt7XleNq7W++yBvUVhRJNQWimIj3azs+ex/Td+pLyK6zMn4iHn1c4lrC9e5PbudLNX+s9EF1TE4h2mBdH90t91d9G2JcppCgsQ8Zjp/IEx/LL3hFEz2lBkGX20SowTVlFICC/bq2IzpUsq8iHKHa0Hw6bF5JRZjfLUaikoI5k9vcFo8v1y2VfbyED2kL3I9dsFYO6Qfw+D5v2qOqPcHUj6M8hcLj+L7l6jlgF9cxqpG/siGL1uoReZfeqvjfXArZ7ZwUke91ZondP9b2C8vRrT2ykMYUrsGgO6SwGf68i/BOSNAtek/GBc0y+ngNp5hvX9Hc9SJU7iQ==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7b4bd495-ecbb-488b-8c38-08db4b12efb7
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 13:41:31.0566
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UVRFQCzvJ4/cdJCs36ZA75p8H0Xt9j+j40Pm1HOI7wtC8cDSfoslalox+Cz/4gkIhUa9m36zFQdbI4duQGHkKa/p450civsQLzHBIzif1rQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6338

On 02/05/2023 1:04 pm, Olaf Hering wrote:
> Tue, 2 May 2023 13:33:13 +0200 Olaf Hering <olaf@aepfle.de>:
>
>> I will investigate why it failed to build for me.
> This happens if one builds first with the Tumbleweed container, and later with the Leap container, without a 'git clean -dffx' in between.
>
> Is there a way to invalidate everything if the toolchain changes?

I thought we had a fix for this.  But it turns out it's still on the list.

https://lore.kernel.org/xen-devel/20230320152836.43205-1-anthony.perard@citrix.com/

Does this improve things for you?

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:41:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528656.822104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqGO-0003hE-A1; Tue, 02 May 2023 13:41:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528656.822104; Tue, 02 May 2023 13:41:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqGO-0003h7-6s; Tue, 02 May 2023 13:41:52 +0000
Received: by outflank-mailman (input) for mailman id 528656;
 Tue, 02 May 2023 13:41:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l/wp=AX=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1ptqGN-0003Oi-CU
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:41:51 +0000
Received: from sender3-of-o58.zoho.com (sender3-of-o58.zoho.com
 [136.143.184.58]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 160e9f32-e8ef-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 15:41:50 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1683034905351328.7299606446121;
 Tue, 2 May 2023 06:41:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 160e9f32-e8ef-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683034906; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=Z8IImE2XPq+A8wp68OcU39T31usWWv0UrcK7QKKniAngTES4Oxsg/CB/EZhvXshxuL0mJDXJXMcJatiKMukhMu11UU73yZbOZufC+YlsZtyhy0dnd6NpVy+w3Vp5DpYW6SWrc4FVRozEZ26hl5udGrCGB4CRZ78iTB24HYSHBYE=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1683034906; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=xNbtRmBKjKSHqEgVtqUXyN+cG3mqZpX02zCxAI/TSZk=; 
	b=mYkQC4foXuhKPg11pWCBLhxMdGeHBs9skFnZ9HWK2EQnEcRu8LcFfvOcVc/BRygctP6jqDQMh6tO6oQaKEdsuyixHDvPW9glqq+arc92kWeDkgKS+Xk0swGFNelKE5WFR/RqI43C/6DyG2Ti8VVrUCUqeLBkRpA1E/bxq+ie2o0=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1683034906;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:To:To:Cc:Cc:References:From:From:Subject:Subject:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=xNbtRmBKjKSHqEgVtqUXyN+cG3mqZpX02zCxAI/TSZk=;
	b=VFEfKsGRoxz1UNAQVVwWgBDSC3v7++0diGGP6xwskp9HmbN4BDPNXRfLA88b0Ih3
	5b9+MMkUD2IN7KfSvw9XwJqWUTMdkdcUDJmt2WFRk2DTQ/K0FCJBlpO37HAgu+Ioo1E
	YoLF+4AkLRc2KMza4NdhSaj3Jra1yNtZKOQK9EGc=
Message-ID: <b358ca18-20c2-67d5-4745-b134a9176804@apertussolutions.com>
Date: Tue, 2 May 2023 09:41:43 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230430144646.13624-1-jgross@suse.com>
 <0c01a75d-8a98-3931-28fa-68ed373e9a2e@apertussolutions.com>
 <ca098773-5428-c97f-87ae-402fffd114bb@suse.com>
 <25876523-8d03-7e7d-e70f-b88d52f2b270@apertussolutions.com>
 <b6550b56-fcb5-42a2-93c3-7fe486a215ba@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH] xen/sysctl: fix XEN_SYSCTL_getdomaininfolist handling
 with XSM
In-Reply-To: <b6550b56-fcb5-42a2-93c3-7fe486a215ba@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 5/2/23 09:30, Juergen Gross wrote:
> On 02.05.23 15:23, Daniel P. Smith wrote:
>> On 5/2/23 09:13, Juergen Gross wrote:
>>> On 02.05.23 15:03, Daniel P. Smith wrote:
>>>> On 4/30/23 10:46, Juergen Gross wrote:
>>>>> In case XSM is active, the handling of XEN_SYSCTL_getdomaininfolist
>>>>> can fail if the last domain scanned isn't allowed to be accessed by
>>>>> the calling domain (i.e. xsm_getdomaininfo(XSM_HOOK, d) is failing).
>>>>>
>>>>> Fix that by just ignoring scanned domains where xsm_getdomaininfo()
>>>>> is returning an error, like it is effectively done when such a
>>>>> situation occurs for a domain not being the last one scanned.
>>>>>
>>>>> Fixes: d046f361dc93 ("Xen Security Modules: XSM")
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>> ---
>>>>>   xen/common/sysctl.c | 3 +--
>>>>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
>>>>> index 02505ab044..0cbfe8bd44 100644
>>>>> --- a/xen/common/sysctl.c
>>>>> +++ b/xen/common/sysctl.c
>>>>> @@ -89,8 +89,7 @@ long 
>>>>> do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>>>>>               if ( num_domains == 
>>>>> op->u.getdomaininfolist.max_domains )
>>>>>                   break;
>>>>> -            ret = xsm_getdomaininfo(XSM_HOOK, d);
>>>>> -            if ( ret )
>>>>> +            if ( xsm_getdomaininfo(XSM_HOOK, d) )
>>>>>                   continue;
>>>>>               getdomaininfo(d, &info);
>>>>
>>>>
>>>> This change does not match the commit message. This says it fixes an 
>>>> issue, but unless I am totally missing something, this change is 
>>>> nothing more than formatting that drops the use of an intermediate 
>>>> variable. Please feel free to correct me if I am wrong here, 
>>>> otherwise I believe the commit message should be changed to reflect 
>>>> the code change.
>>>
>>> You are missing the fact that ret getting set by a failing 
>>> xsm_getdomaininfo()
>>> call might result in the ret value being propagated to the sysctl 
>>> caller. And
>>> this should not happen. So the fix is to NOT modify ret here.
>>
>> You are correct, my apologies for that.
> 
> No need to apologize. :-)

I believe it is proper to admit when you are wrong.

>>>> Second, as far as the problem description goes. The *only* time the 
>>>> call to xsm_getdomaininfo() at this location will return anything 
>>>> other than 0, is when FLASK is being used and a domain whose type is 
>>>> not allowed getdomaininfo is making the call. XSM_HOOK signals a 
>>>> no-op check for the default/dummy policy, and the SILO policy does 
>>>> not override the default/dummy policy for this check.
>>>
>>> Your statement sounds as if xsm_getdomaininfo() would always return 
>>> the same
>>> value for a given caller domain. Isn't that return value also 
>>> depending on the
>>> domain specified via the second parameter? In case it isn't, why does 
>>> that
>>> parameter even exist?
>>
>> It would if the default action was something other than XSM_HOOK. Look 
>> at line 82 of include/xsm/dummy.h. XSM_HOOK will always return 0 
>> regardless of the src or dest domains. The function 
>> xsm_default_action() is the policy for both default/dummy and SILO, 
>> with the exception of the evtchn, grant, and argo checks for SILO.
> 
> Ah, okay. I didn't analyze all of the involved xsm code.

No worries! I am always willing to help in any way that I can. While I 
don't have the bandwidth to be proactive and keep up with everything on 
xen-devel, please do not hesitate to ask me or ping me on anything XSM 
related. I will gladly take a look and provide what insights I might 
have on your query.

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:44:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:44:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528666.822115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqJ9-0004hB-Oo; Tue, 02 May 2023 13:44:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528666.822115; Tue, 02 May 2023 13:44:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqJ9-0004h4-M7; Tue, 02 May 2023 13:44:43 +0000
Received: by outflank-mailman (input) for mailman id 528666;
 Tue, 02 May 2023 13:44:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fQfQ=AX=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1ptqJ8-0004gx-Kc
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:44:42 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on060b.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::60b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c79adbf-e8ef-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 15:44:40 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8453.eurprd04.prod.outlook.com (2603:10a6:20b:410::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.23; Tue, 2 May
 2023 13:44:38 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6340.030; Tue, 2 May 2023
 13:44:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c79adbf-e8ef-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A3VkFcEgpEAKuteAIF5KvAkBLHfZLggxHMosc6dwC0PUDGt1YCV0UAUyYUBSGHfHr28lGFHs1gZuW8laH5/ZxOw0WHtl3iqmJI5KHfohz3RXaKEIw5SkQ/wXFmwvJ73US4Zc0SqMxJGjHmugO54ok4Mq8PqJ/KzIcnazQx5CLgUruNmsgpp2IWV3J/UfCXTrzoyVwew8pQyNephczb2IgzAqLkBk2sX/UwwW26xYq6RLtOoxmv8CU6f+HE2toyRt459/TyktoNHbyAO/gvgmlCsJ+gkPVCNxAdyxU0Mqyf6BuUoAru67XBCh25eUr9RyE+6Rcy43WvciLy4rFX9Meg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2K4kVLOLa5gNCtfHtVS0PlZUKf9AIejVzarpWGB+1X4=;
 b=ahBocm5jBzDvIAwssSxE8pmammmzKGIoGjkCXPGFmk2FBdtEuLVjatYdCfCVGN2JKUU0fBEBY5tc4PwCS/cvUaurqGLAB/2xIDFmu0Xbq0AFoh9ocnBFOXJPjH9F/x9eQsn3FD/2P7Hmx4TPN+sjIBUHGmPytQ9hdmKgPio8WT8a4hnJ1czfF9PUkETYZg97kSAhyA4kLpzJugOIwGMkUsc/YSyBaWVMYenCwKBLUjNrSyTjXCBShoBIirQav3V5W2qQuKdThWHNRVSVgaEwUg2EJAk7wvGOCi5oSsEMVGwan3XV19pGh96fH17pax/tBGCPm6Y67HJluCIkeItmsA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2K4kVLOLa5gNCtfHtVS0PlZUKf9AIejVzarpWGB+1X4=;
 b=pLuaQCoBSxTLGBQxW9BPLyrOMEY8+x9KGISYB/WtIZx3zdMKOalzr8SI0PS62YszwfIDm2TEN+yDszzzfJyGTwKoS7cleFr0rpt2Qn/KUoTfIJ5cmy9jW7/fELfUsJSClfx8UItoWcOjfgjXbgDHpnubUNrpuOrBRGv5hFBJkI9U8bDFFe6ag/jmTwhWpcycLwykbCsYw9nQve8hXesmBXlXNrud3Am15qKuYhlCOe6+u9Fi1fhBA6IzABe6TTbiaDgn/QFwfjlHlXQWEyKQXmwv2RjJtAF4JW5ltXsSnliv/+95+fXyOuaIn5kHpFvVVPqW86h1mxngxt9nsQx12Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ea9ec814-83b8-5545-2ce3-46192525cf17@suse.com>
Date: Tue, 2 May 2023 15:44:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: HAS_CC_CET_IBT misdetected
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
 <43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
 <20230502133313.2192eb99.olaf@aepfle.de>
 <20230502140444.1dacdb33.olaf@aepfle.de>
 <9353d3f3-563b-ff88-0b0b-dfa2bb03522c@suse.com>
 <20230502153612.431dfc08.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230502153612.431dfc08.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0160.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b3::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8453:EE_
X-MS-Office365-Filtering-Correlation-Id: da8d63e4-880e-41a9-e4e1-08db4b135fb1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: da8d63e4-880e-41a9-e4e1-08db4b135fb1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 13:44:38.6958
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kh5Pz7PceY9hdqkmySusWqEZKzAPsyrmihAO8vE/2Yp2JCCvlcCD/8HBOrbkpbJ1kToOpwKbFDfeEbdVGxP/Eg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8453

On 02.05.2023 15:36, Olaf Hering wrote:
> Tue, 2 May 2023 15:29:19 +0200 Jan Beulich <jbeulich@suse.com>:
> 
>> Getting this to work automatically is a continued subject of discussion.
> 
> I think the only real solution is an out-of-tree build. Essentially every single component needs to detect a toolchain change. This is unrealistic.

How would an out-of-tree build help (which we now support for the
hypervisor)? An incremental build there will hit exactly the same
issue afaict.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 13:45:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:45:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528670.822125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqKB-0005Ff-2G; Tue, 02 May 2023 13:45:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528670.822125; Tue, 02 May 2023 13:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqKA-0005FW-Vi; Tue, 02 May 2023 13:45:46 +0000
Received: by outflank-mailman (input) for mailman id 528670;
 Tue, 02 May 2023 13:45:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptqK9-0005FG-Vs; Tue, 02 May 2023 13:45:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptqK9-0004J2-El; Tue, 02 May 2023 13:45:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptqK9-00039J-3V; Tue, 02 May 2023 13:45:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptqK9-0004Gn-33; Tue, 02 May 2023 13:45:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ozha16LYgUogcjmrJ/98dBB/GrTJ7I2mbl/WLcbeX+4=; b=iujQ7yJwcQfibkn3lHCnFEnAOz
	aMr6frJQ9Kt0Gs00gUQiISinTZ1vv8l7PpwGz5rtaXlX9sw+kYYnJ3dvladVvPrTXbZChZMZ3cX09
	/Iim1znuxMxLJ9U7zRbIS8P9AAG/JGsmJIW+vNUiFtetj3ENyk0gHGM+VlhygQq4PSmQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180501-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180501: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-migrupgrade:xen-install/src_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ef841d2a2377f5297add27e637b725426bb4840a
X-Osstest-Versions-That:
    xen=ef841d2a2377f5297add27e637b725426bb4840a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 May 2023 13:45:45 +0000

flight 180501 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180501/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 180496 pass in 180501
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail in 180496 pass in 180501
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 180496 pass in 180501
 test-amd64-i386-migrupgrade  10 xen-install/src_host       fail pass in 180496

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180496
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180496
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180496
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180496
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180496
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180496
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180496
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180496
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180496
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180496
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180496
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180496
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  ef841d2a2377f5297add27e637b725426bb4840a
baseline version:
 xen                  ef841d2a2377f5297add27e637b725426bb4840a

Last test of basis   180501  2023-05-02 01:52:02 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue May 02 13:56:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 13:56:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528677.822135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqTv-0006tb-4O; Tue, 02 May 2023 13:55:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528677.822135; Tue, 02 May 2023 13:55:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptqTv-0006tU-1K; Tue, 02 May 2023 13:55:51 +0000
Received: by outflank-mailman (input) for mailman id 528677;
 Tue, 02 May 2023 13:55:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d5QU=AX=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1ptqTt-0006tO-BS
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 13:55:49 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.161]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 084e93ce-e8f1-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 15:55:44 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz42DthdKB
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 2 May 2023 15:55:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 084e93ce-e8f1-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1683035743; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=XeW4ofbK+AA3guzhnDqpR8OkHutJp/iL36w6+/V1wRW6DBB79cQJZOoqxxJoscU9am
    NgPJQVZMsOd+TozEpbMso2Npo/1l25EE5l/UkEh0j3yMQm3zCEY+QXcH87gz5uoLbA6P
    BWcpM8hNWVA7uNhh6UstMlgpLozcvs7zFjhnbFKIfEvR1H18788PUKAbzLQ+GI27RkAo
    GiltCUI3BxBAZOfXnhax80fjL235hqXqIXrjAFZ9gsirOb95cOr0CF0eSB0HeG6IDMr2
    X6zI+OplFpwzRb8Knht+4D4TUxiVQz2teSVxT2EfDOBgayMRA3+cCfUGDwevDYIW8KGf
    7dpg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683035743;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=ppQjs137PIEaIYcsU7xdieFlAfxpPdqVwfIympx+wZU=;
    b=T2GR49QIZYjFVjUlSOs3RRCQHfyF3/FqUOFambY//D2OPdoM56N+RAJ4rKRbJ+fHHB
    11QX7ssQ/RuWvDjykyyLg7RT1yzL91QOGsADu+bg2IGUHdsfELlodWvlxd+vOwNYzV4z
    dCz4iWeHuZUl4Epl9ZOvjEEYYv4bhUV3rS5Wjpu2BMVIvndE87iR+LvgE+1kWTvwAgVJ
    7lvbpECRAtW/PbIBd9WxVfAZFGuFNpEkYG8tol632VeENdyAyf2mXBRdnt4A7e31+nfI
    F8q94lpoGceVY0L186fY9tPgqiXb+zUn5WRANJYseuTQpfnMmXwU4sZvWmY2yvwoGI0x
    GAmg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683035743;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=ppQjs137PIEaIYcsU7xdieFlAfxpPdqVwfIympx+wZU=;
    b=d2BReKChG41bRnZ9NbI5CKFIqd4LdKboFlmxKz17MU8Ez6ssOyx9RUL1GyrNZaAk39
    kEUOUAFz4EPW4aqKHpdVUD1gqgjnHgQzDsOZcYJKn75O4gsJDgAq+9J+CETMe6MawciB
    mFhZGxY4P0243jfxIxIt4P2JgxTFgMxeEWR7iFDJC5zDHSEChHxhLXMUiEej1dDJbAZj
    6ND/GvAptsZJQMT/6oEHbQpEB2hcgQVYXgCZdFDNUX9ag3mwkutFaeKxlOx6sUP8Jo/6
    4RfhAPEnwaB94INY98DwW7LN3xe6d3xK4A3yLzL4kdVAhjVistpAYy6CWoKEx7sR2y0P
    MJAA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683035743;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=ppQjs137PIEaIYcsU7xdieFlAfxpPdqVwfIympx+wZU=;
    b=iIFIO0oVE5btZ3rW4u9IZ3fr/IB7z7R36u8ZnyC7qbpGhXswZWGDtWYvnq9gJs6iW9
    8ETtO42BMUUahh6dg1DQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR5BxOIbBnsc1fym1gFvNQ7EzMpH+yFJc4aADp/8Q=="
Date: Tue, 2 May 2023 15:55:37 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: HAS_CC_CET_IBT misdetected
Message-ID: <20230502155537.7983f8c9.olaf@aepfle.de>
In-Reply-To: <ea9ec814-83b8-5545-2ce3-46192525cf17@suse.com>
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
	<43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
	<20230502133313.2192eb99.olaf@aepfle.de>
	<20230502140444.1dacdb33.olaf@aepfle.de>
	<9353d3f3-563b-ff88-0b0b-dfa2bb03522c@suse.com>
	<20230502153612.431dfc08.olaf@aepfle.de>
	<ea9ec814-83b8-5545-2ce3-46192525cf17@suse.com>
X-Mailer: Claws Mail 20220819T065813.516423bc has a software problem, nothing to be done about it.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/txqaj7Ga2TsSeTMnuGl7Usi";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/txqaj7Ga2TsSeTMnuGl7Usi
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 2 May 2023 15:44:41 +0200 Jan Beulich <jbeulich@suse.com>:

> How would an out-of-tree build help (which for the hypervisor we now
> have support for)? An incremental build there will hit exactly the same
> issue afaict.

Each container target will use a separate output directory. The Leap container
will only see Leap things, the Tumbleweed container will only see Tumbleweed
things.

A toolchain update within a container will be no different than it is today.
But there will be no unexpected jumps anymore.


Olaf

--Sig_/txqaj7Ga2TsSeTMnuGl7Usi
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRRFlkACgkQ86SN7mm1
DoDc6xAAmT2Bn9OAawtgCy/HNNIyKO1JcjrwHb0s32tyJtABnfXpFYLlqO2zgbcN
j3CfVD9YXzH1V9UE7pPhPFDJcih1elvyzu3iLUy64ZQ+xOcJW2H0sKGhYj9U7uR0
CEcK+2iZASYcW+rIg3AAHXQixDaZjP3P9fYQbvgYb/919TB4AW+Pk/OA9GCOBz5M
OXnMSHsgPlyfm9SpAkI4u5FPN4L28MlHV5K9AWTv0hB7FlYaYlfZcFXipYZAEtp4
ES9UxowQcjLpKb5P5ms2RfHCd6MDiQumRkiM7WCASGGnQbW1fY4UpNo7fZqO4hL0
+uEGVlOxQxl3aL1ZUlAHbfFUoBx1H4ZVA1C20uSnrhSqHKdI+p/Y75KQCsCoxMVT
/Zh9cvBegpDECHJc0XlOEC2nJsZ5SY/S9zNxtjB9mMLYQKnweHC35yv2uebbWSTQ
F1IPG3esTZukhv0I+eVfMKdGjwiWsq4PvYT47Q0l9ranP8nxKqSsuyI+ftyi6dZb
ogG0WVk2Iv+4/DXQY7DeaU+Ff3qh7TgAB5CHvU4nkEjh6mcVtYVyv0zWOIZMjQZM
zPqFPyBhs8Utfyd14dOpQ9Q7EQsodfjTcaGLC4iXJBJ9QeT1oS2fQ28AIb1B40ku
1j2i9fnpbN7ncSEGoWWZe2xI2wMWqFzbY1fqJx87qT0AWD2YHt8=
=LOIW
-----END PGP SIGNATURE-----

--Sig_/txqaj7Ga2TsSeTMnuGl7Usi--


From xen-devel-bounces@lists.xenproject.org Tue May 02 14:29:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 14:29:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528683.822145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptr03-0002Ew-LQ; Tue, 02 May 2023 14:29:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528683.822145; Tue, 02 May 2023 14:29:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptr03-0002Ep-II; Tue, 02 May 2023 14:29:03 +0000
Received: by outflank-mailman (input) for mailman id 528683;
 Tue, 02 May 2023 14:29:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptr02-0002Ef-DJ; Tue, 02 May 2023 14:29:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptr01-0005G0-Tp; Tue, 02 May 2023 14:29:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptr01-0004F1-5p; Tue, 02 May 2023 14:29:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptr01-0000vO-5M; Tue, 02 May 2023 14:29:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y6TEwowuifiYIJCbqKlbvg3zuzqFgR6NMDfoyetv85g=; b=M+C+bK9fiZJRVSSUbHY+8RUiQP
	cQ6g1CC6g2y1A8tki610jp0+2E7tFlpN8CWRq7RL9ouELBt3ZGwWCR3mY2hvO7HkgwXKGUGRIPuhi
	YfjKt3LE8rB/kBk8Sxu6faqElw5SzRTet24XSOcQnsIEG4pbSDSPAZioYvSi92QY4NmE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180505-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180505: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b033eddc9779109c06a26936321d27a2ef4e088b
X-Osstest-Versions-That:
    xen=ef841d2a2377f5297add27e637b725426bb4840a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 May 2023 14:29:01 +0000

flight 180505 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180505/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b033eddc9779109c06a26936321d27a2ef4e088b
baseline version:
 xen                  ef841d2a2377f5297add27e637b725426bb4840a

Last test of basis   180494  2023-05-01 11:03:16 Z    1 days
Testing same since   180505  2023-05-02 11:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ef841d2a23..b033eddc97  b033eddc9779109c06a26936321d27a2ef4e088b -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 02 14:37:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 14:37:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528688.822155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptr8c-0003kj-HF; Tue, 02 May 2023 14:37:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528688.822155; Tue, 02 May 2023 14:37:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptr8c-0003kc-EW; Tue, 02 May 2023 14:37:54 +0000
Received: by outflank-mailman (input) for mailman id 528688;
 Tue, 02 May 2023 14:37:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NQdG=AX=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1ptr8a-0003kW-Nh
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 14:37:52 +0000
Received: from mail-qk1-x730.google.com (mail-qk1-x730.google.com
 [2607:f8b0:4864:20::730])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e9cd0dce-e8f6-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 16:37:50 +0200 (CEST)
Received: by mail-qk1-x730.google.com with SMTP id
 af79cd13be357-74adf6adac6so359821485a.0
 for <xen-devel@lists.xenproject.org>; Tue, 02 May 2023 07:37:50 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com
 (207-172-141-204.s8906.c3-0.slvr-cbr1.lnh-slvr.md.cable.rcncustomer.com.
 [207.172.141.204]) by smtp.gmail.com with ESMTPSA id
 l17-20020a0ce511000000b005dd8b9345dbsm2123688qvm.115.2023.05.02.07.37.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 02 May 2023 07:37:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9cd0dce-e8f6-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683038269; x=1685630269;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=mvVfVNvdUQ+dsVuQLzLVZgrOBwJbAlAHyxtlDPxYDPI=;
        b=lMWbI9liklknEyp0AdJfA4fBntRe1FaP0pj28bEYVRNdksesc9BHCIRW52HokmkTNa
         207rKs0CEVeHnl/UoakvcCupxqMXBb1fboaptwS7g6SIB5qtTCM3As3PLwDbG2T8bPpx
         YAry278fCOasg5uNV66jrPXKjxqSFpVqzUdIUVL3N4uTkKyk7l/pQxYi8Y2ih4HBERpp
         5lj6j+LX4Ke4vyr1hfbGEuqdLVIvv30gHWy8DztM1aBIMy8zB8HT0quLvLxbyatd5nd9
         ZarV8fvQ5oyAIu97M2gY+ZZo7tZsUdL5BcuptsiHKrLZq5vCk7MowTw1mCYIvQxvGCjA
         dKXw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683038269; x=1685630269;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=mvVfVNvdUQ+dsVuQLzLVZgrOBwJbAlAHyxtlDPxYDPI=;
        b=AcpNz2yHkwjE9Lzc4snGv0k38AJSLCxAKKvwb9o3GGrNMCBurcITr3clHmf7yb02lP
         0fepwoN118J13znmQVVBXYIdySwJ/J+p5s5YvMrkUntVEG3OzG9466yP6xDUoPlZRTWn
         kqI3qpOHaJ5Nd3lDpTmLOcOVLVZ9ZHjJYZ/6mORb351O8ajdgpXYVwyNl9u57oiB7ObN
         NbaQ4aby3dWhtvz3bUGuanSBpj8QFVL2IGZ3NfaF+RrgduMZsZCJUDjtZ5Anqp+JDSSw
         dciU/HroCGzuEXzUiCnjIbxMdMKfpNj3MKkleEuy3KzeoUROojKGojxtopnSJNBliDJ0
         dgKg==
X-Gm-Message-State: AC+VfDyEJMlQLGsMn+u/cQgku1tQJOYuPQNIH5P5tzSVPjf3RLLgWEWT
	YKP1uf0/pCAI/HQcPSlO6q4=
X-Google-Smtp-Source: ACHHUZ7zYKwzvzyqRlxFZI4PuMYtfmBEJ9Pyx91MTJn4ENx0QM8Y5we0PQ1VIzxQMHmYfE+nlnND0Q==
X-Received: by 2002:a05:6214:1d23:b0:5f7:8b31:4548 with SMTP id f3-20020a0562141d2300b005f78b314548mr6214915qvd.4.1683038269491;
        Tue, 02 May 2023 07:37:49 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: qemu-devel@nongnu.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Greg Kurz <groug@kaod.org>,
	Christian Schoenebeck <qemu_oss@crudebyte.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [PATCH] 9pfs/xen: Fix segfault on shutdown
Date: Tue,  2 May 2023 10:37:22 -0400
Message-Id: <20230502143722.15613-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_9pfs_free can't use gnttabdev since it is already closed and NULL-ed
out when free is called.  Do the teardown in _disconnect().  This
matches the setup done in _connect().

trace-events are also added for the XenDevOps functions.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 hw/9pfs/trace-events     |  5 +++++
 hw/9pfs/xen-9p-backend.c | 36 +++++++++++++++++++++++-------------
 2 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/hw/9pfs/trace-events b/hw/9pfs/trace-events
index 6c77966c0b..7b5b0b5a48 100644
--- a/hw/9pfs/trace-events
+++ b/hw/9pfs/trace-events
@@ -48,3 +48,8 @@ v9fs_readlink(uint16_t tag, uint8_t id, int32_t fid) "tag %d id %d fid %d"
 v9fs_readlink_return(uint16_t tag, uint8_t id, char* target) "tag %d id %d name %s"
 v9fs_setattr(uint16_t tag, uint8_t id, int32_t fid, int32_t valid, int32_t mode, int32_t uid, int32_t gid, int64_t size, int64_t atime_sec, int64_t mtime_sec) "tag %u id %u fid %d iattr={valid %d mode %d uid %d gid %d size %"PRId64" atime=%"PRId64" mtime=%"PRId64" }"
 v9fs_setattr_return(uint16_t tag, uint8_t id) "tag %u id %u"
+
+xen_9pfs_alloc(char *name) "name %s"
+xen_9pfs_connect(char *name) "name %s"
+xen_9pfs_disconnect(char *name) "name %s"
+xen_9pfs_free(char *name) "name %s"
diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
index 0e266c552b..c646a0b3d1 100644
--- a/hw/9pfs/xen-9p-backend.c
+++ b/hw/9pfs/xen-9p-backend.c
@@ -25,6 +25,8 @@
 #include "qemu/iov.h"
 #include "fsdev/qemu-fsdev.h"
 
+#include "trace.h"
+
 #define VERSIONS "1"
 #define MAX_RINGS 8
 #define MAX_RING_ORDER 9
@@ -337,6 +339,8 @@ static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev)
     Xen9pfsDev *xen_9pdev = container_of(xendev, Xen9pfsDev, xendev);
     int i;
 
+    trace_xen_9pfs_disconnect(xendev->name);
+
     for (i = 0; i < xen_9pdev->num_rings; i++) {
         if (xen_9pdev->rings[i].evtchndev != NULL) {
             qemu_set_fd_handler(qemu_xen_evtchn_fd(xen_9pdev->rings[i].evtchndev),
@@ -345,40 +349,42 @@ static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev)
                                    xen_9pdev->rings[i].local_port);
             xen_9pdev->rings[i].evtchndev = NULL;
         }
-    }
-}
-
-static int xen_9pfs_free(struct XenLegacyDevice *xendev)
-{
-    Xen9pfsDev *xen_9pdev = container_of(xendev, Xen9pfsDev, xendev);
-    int i;
-
-    if (xen_9pdev->rings[0].evtchndev != NULL) {
-        xen_9pfs_disconnect(xendev);
-    }
-
-    for (i = 0; i < xen_9pdev->num_rings; i++) {
         if (xen_9pdev->rings[i].data != NULL) {
             xen_be_unmap_grant_refs(&xen_9pdev->xendev,
                                     xen_9pdev->rings[i].data,
                                     xen_9pdev->rings[i].intf->ref,
                                     (1 << xen_9pdev->rings[i].ring_order));
+            xen_9pdev->rings[i].data = NULL;
         }
         if (xen_9pdev->rings[i].intf != NULL) {
             xen_be_unmap_grant_ref(&xen_9pdev->xendev,
                                    xen_9pdev->rings[i].intf,
                                    xen_9pdev->rings[i].ref);
+            xen_9pdev->rings[i].intf = NULL;
         }
         if (xen_9pdev->rings[i].bh != NULL) {
             qemu_bh_delete(xen_9pdev->rings[i].bh);
+            xen_9pdev->rings[i].bh = NULL;
         }
     }
 
     g_free(xen_9pdev->id);
+    xen_9pdev->id = NULL;
     g_free(xen_9pdev->tag);
+    xen_9pdev->tag = NULL;
     g_free(xen_9pdev->path);
+    xen_9pdev->path = NULL;
     g_free(xen_9pdev->security_model);
+    xen_9pdev->security_model = NULL;
     g_free(xen_9pdev->rings);
+    xen_9pdev->rings = NULL;
+    return;
+}
+
+static int xen_9pfs_free(struct XenLegacyDevice *xendev)
+{
+    trace_xen_9pfs_free(xendev->name);
+
     return 0;
 }
 
@@ -390,6 +396,8 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
     V9fsState *s = &xen_9pdev->state;
     QemuOpts *fsdev;
 
+    trace_xen_9pfs_connect(xendev->name);
+
     if (xenstore_read_fe_int(&xen_9pdev->xendev, "num-rings",
                              &xen_9pdev->num_rings) == -1 ||
         xen_9pdev->num_rings > MAX_RINGS || xen_9pdev->num_rings < 1) {
@@ -499,6 +507,8 @@ out:
 
 static void xen_9pfs_alloc(struct XenLegacyDevice *xendev)
 {
+    trace_xen_9pfs_alloc(xendev->name);
+
     xenstore_write_be_str(xendev, "versions", VERSIONS);
     xenstore_write_be_int(xendev, "max-rings", MAX_RINGS);
     xenstore_write_be_int(xendev, "max-ring-page-order", MAX_RING_ORDER);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 14:45:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 14:45:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528693.822165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrFe-0005I9-9J; Tue, 02 May 2023 14:45:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528693.822165; Tue, 02 May 2023 14:45:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrFe-0005I2-67; Tue, 02 May 2023 14:45:10 +0000
Received: by outflank-mailman (input) for mailman id 528693;
 Tue, 02 May 2023 14:45:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M8xE=AX=citrix.com=prvs=479c33cca=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1ptrFc-0005Hw-KT
 for xen-devel@lists.xen.org; Tue, 02 May 2023 14:45:08 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ebbba320-e8f7-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 16:45:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebbba320-e8f7-11ed-b225-6b7b168915f2
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106346520
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
Date: Tue, 2 May 2023 15:44:50 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
CC: <xen-devel@lists.xen.org>, Juergen Gross <jgross@suse.com>, Julien Grall
	<julien@xen.org>, Vincent Guittot <vincent.guittot@linaro.org>,
	<stratos-dev@op-lists.linaro.org>, Alex =?iso-8859-1?Q?Benn=E9e?=
	<alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, Erik Schilling
	<erik.schilling@linaro.org>
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
Message-ID: <92c7f972-f617-40fc-bc5d-582c8154d03c@perard>
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>

On Thu, Mar 30, 2023 at 02:13:08PM +0530, Viresh Kumar wrote:
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 10f37990be57..4879f136aab8 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -1616,6 +1616,10 @@ properties in the Device Tree, the type field must be set to "virtio,device".
>  Specifies the transport mechanism for the Virtio device, only "mmio" is
>  supported for now.
>  
> +=item B<forced_grant=BOOLEAN>
> +
> +Allows Xen Grant memory mapping to be done from Dom0.

I think this description is missing context. I'm not sure what "from
dom0" would mean without reading the patch. Also, it says "allows",
which maybe doesn't convey the meaning of "forced". How about something
like:

    Always use grant mappings, even when the backend runs in dom0.
    (Grants are already used if the backend is in another domain.)

> +
>  =back
>  
>  =item B<tee="STRING">
> diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c
> index faada49e184e..e1f15344ef97 100644
> --- a/tools/libs/light/libxl_virtio.c
> +++ b/tools/libs/light/libxl_virtio.c
> @@ -48,11 +48,13 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid,
>      flexarray_append_pair(back, "base", GCSPRINTF("%#"PRIx64, virtio->base));
>      flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type));
>      flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport));
> +    flexarray_append_pair(back, "forced_grant", GCSPRINTF("%u", virtio->forced_grant));
>  
>      flexarray_append_pair(front, "irq", GCSPRINTF("%u", virtio->irq));
>      flexarray_append_pair(front, "base", GCSPRINTF("%#"PRIx64, virtio->base));
>      flexarray_append_pair(front, "type", GCSPRINTF("%s", virtio->type));
>      flexarray_append_pair(front, "transport", GCSPRINTF("%s", transport));
> +    flexarray_append_pair(front, "forced_grant", GCSPRINTF("%u", virtio->forced_grant));

This "forced_grant" feels weird to me in the protocol; I feel like the
choice of whether to use grants could be handled by the backend. For
example, the "blkif" protocol has plenty of "feature-*" keys which allow
both front-end and back-end to advertise which features they can or want
to use.
But maybe the fact that the device tree needs to be modified to
accommodate grant mapping means that libxl needs to ask the backend
whether to use grants, and the frontend needs to know if it needs to use
them.

>  
>      return 0;
>  }
> @@ -104,6 +106,15 @@ static int libxl__virtio_from_xenstore(libxl__gc *gc, const char *libxl_path,
>          }
>      }
>  
> +    tmp = NULL;
> +    rc = libxl__xs_read_checked(gc, XBT_NULL,
> +				GCSPRINTF("%s/forced_grant", be_path), &tmp);
> +    if (rc) goto out;
> +
> +    if (tmp) {
> +        virtio->forced_grant = strtoul(tmp, NULL, 0);
> +    }
> +
>      tmp = NULL;
>      rc = libxl__xs_read_checked(gc, XBT_NULL,
>  				GCSPRINTF("%s/type", be_path), &tmp);
> diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
> index 1f6f47daf4e1..3e34da099785 100644
> --- a/tools/xl/xl_parse.c
> +++ b/tools/xl/xl_parse.c
> @@ -1215,6 +1215,8 @@ static int parse_virtio_config(libxl_device_virtio *virtio, char *token)
>      } else if (MATCH_OPTION("transport", token, oparg)) {
>          rc = libxl_virtio_transport_from_string(oparg, &virtio->transport);
>          if (rc) return rc;
> +    } else if (MATCH_OPTION("forced_grant", token, oparg)) {
> +        virtio->forced_grant = strtoul(oparg, NULL, 0);

Maybe store only !!strtoul()?
I don't think having values other than 0 or 1 is going to be good.

>      } else {
>          fprintf(stderr, "Unknown string \"%s\" in virtio spec\n", token);
>          return -1;

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 02 14:59:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 14:59:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528699.822174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrT6-00073A-Gx; Tue, 02 May 2023 14:59:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528699.822174; Tue, 02 May 2023 14:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrT6-000733-Dl; Tue, 02 May 2023 14:59:04 +0000
Received: by outflank-mailman (input) for mailman id 528699;
 Tue, 02 May 2023 14:59:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nbmr=AX=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1ptrT4-00072t-Ul
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 14:59:03 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2060.outbound.protection.outlook.com [40.107.247.60])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dd31d87a-e8f9-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 16:58:57 +0200 (CEST)
Received: from DB8PR04CA0026.eurprd04.prod.outlook.com (2603:10a6:10:110::36)
 by AS8PR08MB7815.eurprd08.prod.outlook.com (2603:10a6:20b:529::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 14:58:50 +0000
Received: from DBAEUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:110:cafe::e0) by DB8PR04CA0026.outlook.office365.com
 (2603:10a6:10:110::36) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 14:58:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT060.mail.protection.outlook.com (100.127.142.238) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.19 via Frontend Transport; Tue, 2 May 2023 14:58:49 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Tue, 02 May 2023 14:58:49 +0000
Received: from d9863d2e0def.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C45C26F2-BA9F-4E86-83E4-71C0489797EC.1; 
 Tue, 02 May 2023 14:58:42 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d9863d2e0def.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 02 May 2023 14:58:42 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by AS8PR08MB9791.eurprd08.prod.outlook.com (2603:10a6:20b:614::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 14:58:41 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb%7]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 14:58:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd31d87a-e8f9-11ed-8611-37d641c3527e
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: a9e3d369f3555d3d
X-CR-MTA-TID: 64aa7808
From: Rahul Singh <Rahul.Singh@arm.com>
To: Ayan Kumar Halder <ayankuma@amd.com>
CC: Xen developer discussion <xen-devel@lists.xenproject.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, Juergen Gross
	<jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Oleksandr
 Tyshchenko <oleksandr_tyshchenko@epam.com>, Samuel Holland
	<samuel@sholland.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Marc
 Zyngier <maz@kernel.org>, Jane Malalane <jane.malalane@citrix.com>, David
 Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [PATCH] xen/evtchn: Introduce new IOCTL to bind static evtchn
Thread-Topic: [PATCH] xen/evtchn: Introduce new IOCTL to bind static evtchn
Thread-Index: AQHZec44jeWWxzG+E02fa4Z9QZ57Yq9At36AgAZiAgA=
Date: Tue, 2 May 2023 14:58:40 +0000
Message-ID: <116F2F64-6262-4DE3-B9AD-ADE21BD54E41@arm.com>
References:
 <48d30a439e37f6917b9a667289792c2b3f548d6d.1682685294.git.rahul.singh@arm.com>
 <a3a0dba5-a231-7cc2-dbad-79df7ad9a136@amd.com>
In-Reply-To: <a3a0dba5-a231-7cc2-dbad-79df7ad9a136@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7158:EE_|AS8PR08MB9791:EE_|DBAEUR03FT060:EE_|AS8PR08MB7815:EE_
X-MS-Office365-Filtering-Correlation-Id: 8c28ac84-2555-449d-a3d8-08db4b1dbcf3
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: multipart/alternative;
	boundary="_000_116F2F6462624DE3B9ADADE21BD54E41armcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9791
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	33f16859-7e54-42f2-3bf8-08db4b1db775
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 14:58:49.9119
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8c28ac84-2555-449d-a3d8-08db4b1dbcf3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7815

--_000_116F2F6462624DE3B9ADADE21BD54E41armcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

 Hi Ayan,

> On 28 Apr 2023, at 2:30 pm, Ayan Kumar Halder <ayankuma@amd.com> wrote:
>
> Hi Rahul,
>
> On 28/04/2023 13:36, Rahul Singh wrote:
>> Xen 4.17 supports the creation of static evtchns. To allow user space
>> application to bind static evtchns introduce new ioctl
>> "IOCTL_EVTCHN_BIND_STATIC". Existing IOCTL doing more than binding
>> that's why we need to introduce the new IOCTL to only bind the static
>> event channels.
>>
>> Also, static evtchns to be available for use during the lifetime of the
>> guest. When the application exits, __unbind_from_irq() end up
>> being called from release() fop because of that static evtchns are
>> getting closed. To avoid closing the static event channel, add the new
>> bool variable "is_static" in "struct irq_info" to mark the event channel
>> static when creating the event channel to avoid closing the static
>> evtchn.
>>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>>  drivers/xen/events/events_base.c |  7 +++++--
>>  drivers/xen/evtchn.c             | 22 +++++++++++++++++-----
>>  include/uapi/xen/evtchn.h        |  9 +++++++++
>>  include/xen/events.h             |  2 +-
>>  4 files changed, 32 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
>> index c7715f8bd452..31f2d3634ad5 100644
>> --- a/drivers/xen/events/events_base.c
>> +++ b/drivers/xen/events/events_base.c
>> @@ -112,6 +112,7 @@ struct irq_info {
>>         unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
>>         u64 eoi_time;           /* Time in jiffies when to EOI. */
>>         raw_spinlock_t lock;
>> +       u8 is_static;           /* Is event channel static */
>
> I think we should avoid u8/u16/u32 and instead use uint8_t/uint16_t/uint32_t.
>
> However in this case, you can use bool.

Makes sense. I will change to bool in the next patch.

Regards,
Rahul

dDogNDAwOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1p
bmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdv
cmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVj
b3JhdGlvbjogbm9uZTsiPg0KPHNwYW4gc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7
IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9y
bWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogNDAwOyBsZXR0ZXIt
c3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4
dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4
OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsg
ZmxvYXQ6IG5vbmU7IGRpc3BsYXk6IGlubGluZSAhaW1wb3J0YW50OyI+SQ0KIHRoaW5rIHdlIHNo
b3VsZCBhdm9pZCB1OC91MTYvdTMyIGFuZCBpbnN0ZWFkIHVzZSB1aW50OF90L3VpbnQxNl90L3Vp
bnQzMl90Ljwvc3Bhbj48YnIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQt
ZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBm
b250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogNDAwOyBsZXR0ZXItc3BhY2lu
Zzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFu
c2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Vi
a2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsiPg0KPGJy
IHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNh
OyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6
IG5vcm1hbDsgZm9udC13ZWlnaHQ6IDQwMDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1h
bGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0
ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13
aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7Ij4NCjxzcGFuIHN0eWxlPSJjYXJldC1j
b2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEy
cHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13
ZWlnaHQ6IDQwMDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRl
eHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFs
OyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0
LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFu
dDsiPkhvd2V2ZXINCiBpbiB0aGlzIGNhc2UsIHlvdSBjYW4gdXNlIGJvb2wuPC9zcGFuPjxiciBz
dHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsg
Zm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBu
b3JtYWw7IGZvbnQtd2VpZ2h0OiA0MDA7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxp
Z246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUt
c3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lk
dGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyI+DQo8L2Rpdj4NCjwvYmxvY2txdW90ZT4N
CjxkaXY+PGJyPg0KPC9kaXY+DQpNYWtlIHNlbnNlLiBJIHdpbGwgY2hhbmdlIHRvIGJvb2wgaW4g
bmV4dCBwYXRjaC48L2Rpdj4NCjxkaXY+PGJyPg0KPC9kaXY+DQo8ZGl2PlJlZ2FyZHMsPC9kaXY+
DQo8ZGl2PlJhaHVsPGJyPg0KPC9kaXY+DQo8YnI+DQo8L2JvZHk+DQo8L2h0bWw+DQo=

--_000_116F2F6462624DE3B9ADADE21BD54E41armcom_--


From xen-devel-bounces@lists.xenproject.org Tue May 02 15:00:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 15:00:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528703.822185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrUf-0008WM-1g; Tue, 02 May 2023 15:00:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528703.822185; Tue, 02 May 2023 15:00:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrUe-0008WF-Uv; Tue, 02 May 2023 15:00:40 +0000
Received: by outflank-mailman (input) for mailman id 528703;
 Tue, 02 May 2023 15:00:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptrUd-0008W9-BS
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 15:00:39 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1859a13a-e8fa-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 17:00:38 +0200 (CEST)
Received: from mail-mw2nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 10:59:38 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SJ0PR03MB6422.namprd03.prod.outlook.com (2603:10b6:a03:396::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.27; Tue, 2 May
 2023 14:59:36 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 14:59:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1859a13a-e8fa-11ed-b225-6b7b168915f2
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] x86/head: check base address alignment
Date: Tue,  2 May 2023 16:59:19 +0200
Message-Id: <20230502145920.56588-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230502145920.56588-1-roger.pau@citrix.com>
References: <20230502145920.56588-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Ensure that the base address is 2M aligned, or else the page table
entries created would be corrupt, as reserved bits in the PDEs would
end up set.

We have encountered broken firmware where grub2 would end up loading
Xen at a non 2M aligned address when using the multiboot2 protocol,
which caused a triple fault that was very difficult to debug.

If the alignment is not as required by the page tables, print an error
message and stop the boot.  Also add a build time check that the
calculation of symbol offsets doesn't break the alignment of passed
addresses.

The check could be performed earlier, but so far the alignment is only
required by the page tables, and hence it feels more natural for the
check to live next to the code that requires it.

Note that when booted as an EFI application from the PE entry point,
the alignment check is already performed by
efi_arch_load_addr_check(), and hence there's no need to add another
check at the point where the page tables get built in
efi_arch_memory_setup().

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Use a test instead of an and instruction.
 - Add a build-time check for sym_offs correctness.
 - Reword part of the commit message.
---
 xen/arch/x86/boot/head.S | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 0fb7dd3029f2..b9c9447df9df 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -1,3 +1,4 @@
+#include <xen/lib.h>
 #include <xen/multiboot.h>
 #include <xen/multiboot2.h>
 #include <public/xen.h>
@@ -121,6 +122,7 @@ multiboot2_header:
 .Lbad_ldr_nst: .asciz "ERR: EFI SystemTable is not provided by bootloader!"
 .Lbad_ldr_nih: .asciz "ERR: EFI ImageHandle is not provided by bootloader!"
 .Lbad_efi_msg: .asciz "ERR: EFI IA-32 platforms are not supported!"
+.Lbad_alg_msg: .asciz "ERR: Xen must be loaded at a 2MB boundary!"

        .section .init.data, "aw", @progbits
        .align 4
@@ -146,6 +148,9 @@ bad_cpu:
 not_multiboot:
        add     $sym_offs(.Lbad_ldr_msg),%esi   # Error message
        jmp     .Lget_vtb
+not_aligned:
+        add     $sym_offs(.Lbad_alg_msg),%esi   # Error message
+        jmp     .Lget_vtb
 .Lmb2_no_st:
         /*
          * Here we are on EFI platform. vga_text_buffer was zapped earlier
@@ -670,6 +675,15 @@ trampoline_setup:
         cmp     %edi, %eax
         jb      1b
 
+        .if !IS_ALIGNED(sym_offs(0), 1 << L2_PAGETABLE_SHIFT)
+        .error "Symbol offset calculation breaks alignment"
+        .endif
+
+        /* Check that the image base is aligned. */
+        lea     sym_esi(_start), %eax
+        test    $(1 << L2_PAGETABLE_SHIFT) - 1, %eax
+        jnz     not_aligned
+
         /* Map Xen into the higher mappings using 2M superpages. */
         lea     _PAGE_PSE + PAGE_HYPERVISOR_RWX + sym_esi(_start), %eax
         mov     $sym_offs(_start),   %ecx   /* %eax = PTE to write ^      */
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue May 02 15:02:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 15:02:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528708.822195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrWF-0000eY-CN; Tue, 02 May 2023 15:02:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528708.822195; Tue, 02 May 2023 15:02:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrWF-0000eP-9W; Tue, 02 May 2023 15:02:19 +0000
Received: by outflank-mailman (input) for mailman id 528708;
 Tue, 02 May 2023 15:02:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptrWD-0000eD-G4
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 15:02:17 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 51e990ee-e8fa-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 17:02:15 +0200 (CEST)
Received: from mail-mw2nam12lp2045.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 10:59:43 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6175.namprd03.prod.outlook.com (2603:10b6:5:39b::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.20; Tue, 2 May
 2023 14:59:41 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 14:59:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51e990ee-e8fa-11ed-8611-37d641c3527e
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] x86/trampoline: load the GDT located in the trampoline page
Date: Tue,  2 May 2023 16:59:20 +0200
Message-Id: <20230502145920.56588-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230502145920.56588-1-roger.pau@citrix.com>
References: <20230502145920.56588-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 17f44865-2af3-4561-258c-08db4b1ddb67
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 14:59:41.3542
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: farOd407qZUcrtAYtbDIio7yzBzSZVOF0SoMAcWNN7zkAYITpZHTI5EMsTMU9RR1Wsger1gY2vGhsG29O2Pfsg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6175

When booting the BSP, the portion of the code executed from the
trampoline page uses the GDT located in the hypervisor .text.head
section rather than the GDT located in the relocated trampoline
page.

If skip_realmode is not set, the GDT located in the trampoline page
will be loaded after the BIOS call has been executed; otherwise the
GDT from .text.head will be used for all of the protected-mode
trampoline code execution.

Note that both gdt_boot_descr and gdt_48 contain the same entries, but
the former is located inside the hypervisor .text section, while the
latter lives in the relocated trampoline page.

This is not harmful as-is, since both GDTs contain the same entries,
but for consistency with the APs, switch the BSP trampoline code to
also use the GDT in the relocated trampoline page.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes since v1:
 - Reword comment.
---
 xen/arch/x86/boot/trampoline.S | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index cdecf949b410..c6005fa33d1f 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -164,6 +164,9 @@ GLOBAL(trampoline_cpu_started)
 
         .code32
 trampoline_boot_cpu_entry:
+        /* Switch to relocated trampoline GDT. */
+        lgdt    bootsym_rel(gdt_48, 4)
+
         cmpb    $0,bootsym_rel(skip_realmode,5)
         jnz     .Lskip_realmode
 
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue May 02 15:02:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 15:02:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528709.822205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrWc-0000yT-Le; Tue, 02 May 2023 15:02:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528709.822205; Tue, 02 May 2023 15:02:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrWc-0000yM-IA; Tue, 02 May 2023 15:02:42 +0000
Received: by outflank-mailman (input) for mailman id 528709;
 Tue, 02 May 2023 15:02:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptrWb-0000xz-Ao
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 15:02:41 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 600b64f5-e8fa-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 17:02:39 +0200 (CEST)
Received: from mail-mw2nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 10:59:33 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SJ0PR03MB6422.namprd03.prod.outlook.com (2603:10b6:a03:396::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.27; Tue, 2 May
 2023 14:59:30 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 14:59:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 600b64f5-e8fa-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683039759;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=oFeo/VJqrFvjet3X+2e5EGDYnRXVjUnCVi3N0p1FJNI=;
  b=UlToa4MX9SkmZkIZEmXAEBjumCRkKRtoyEOUjil7IuRwwJ5GQYkXqTZ/
   KQVNUF8/9xpHSu7aB8PCmatBH/U07Tc9E2d6NmkdmEQKeJ693rSQRlwHS
   LOqjaDZ/bt7AGY1NnSFf2Hyjgoni/um8cGfbYH7j0tfevnUSex1w3Tkik
   c=;
X-IronPort-RemoteIP: 104.47.55.105
X-IronPort-MID: 110031688
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:7yhQdqD5MSjlzRVW/w/iw5YqxClBgxIJ4kV8jS/XYbTApG5x0WYOy
 mMfD2mBa/qDZWqge9kkbti+oEoGv5WHnYQ1QQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G5A5ARkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw5el9G3oU+
 9chMylKfyuPnqGW7ZacVbw57igjBJGD0II3nFhFlGmcKMl8BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTI+OxuuzW7IA9ZidABNPLPfdOHX4NNl1uwr
 WPa5WXpRBodMbRzzBLcqyr03baRzXOTtIQ6PrKk08Z2oWSqzFNDNyI5anH84vqAsxvrMz5YA
 wlOksY0loAi+UruQtTjUhmQpH+fogVaS9dWC/c96gyG1uzT+QnxLmoOQyNFadcmnNQrXjFs3
 ViM9/v2ARR/vbvTTmiSnop4thu3MCkRaGodPykNSFJd58G5+dluyBXSUtxkDai5yMXvHi39y
 CyLqy54gKgPickM1OOw+lWvby+Qm6UlhzUdvm3/Nl9JJCsgDGJ5T+REMWTm0Ms=
IronPort-HdrOrdr: A9a23:HAsQlqE00bQEPJPFpLqE0seALOsnbusQ8zAXPidKJSC9E/b2qy
 nKpp8mPHDP5gr5NEtApTnjAtjifZqsz/5ICOAqVN/JMTUO01HYTr2Kg7GSpAHIKmnT8fNcyL
 clU4UWMqyXMbGit7ee3OBvKadF/OW6
X-Talos-CUID: 9a23:DxuosWNjKbH9OO5DRyZt7lVOFOscaWSa3UzcOgyYJEJOV+jA
X-Talos-MUID: 9a23:1rN1pgk341QFoH4QPVy4dnpIO9tN27iPMXkHrsU5hOa7H2szOA6k2WE=
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="110031688"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i+cWA6A9WKM13hyyjNPxyAFyLMjEmp8toj8HqQ8JA3ZUJ0fneiUB86W0/Gic6lQ3AcaJpS9qriLTBBfj7iInr6bpXozB2bFG4cRio4WJbM5ObI+SJJl9T9XT4XBwWwQiik8FcS1zSpp4oCWeme42DdQkb6ctLJJG/xo5wu6hyYX+Hlg0nxu5O9XEPxnTZb6Zvr4z427LoH97TqNUfSlTpgPFEL6G0sbX0fwgVOL7x2/xM9rI62HrTpz1jQH+XwCViGzXe2/WX3rWPpfsMELFXhEfDZzBtvtCac/s9GBtnzmRY24nyhzILJqeMs/5AVElBN2sVsJWmEOrvaYgaq05RA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tHaYVkPXio4QryvASgtAEsRqT3MTpTQ6WEtH/GK1i0g=;
 b=i12GEvfcX9k5w65irzejnvOXmoylmMpoe8CTXeqM0b4c3W0AYfxkMjYYnnldBqHQ6I3OprOJmFMI3ERZFSFytJicck9yGp6OLYoMVUcNqjDKi1fM2Ujy1IINttvVEvEyWxILhzi0fG7uBa9DHQfgocXG6MoNnb51ErghpKDjfN32zm2Z7nAUab2M8hOfuVNs5x1p93N+G3kvSza/de1i+pPtnXEFkuiPYd2oloZUWjTYAD2i8CRApMzuSb6pN0xcuEFCzI5TB9sBT+vGYyT2LIwRcF3dnWWWngyHF9jubmPx9At0dzAqNXt7k5CrTmCX0rp/H91PiZHo3wQJTk3a4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tHaYVkPXio4QryvASgtAEsRqT3MTpTQ6WEtH/GK1i0g=;
 b=gpiwbs2EKSxfNA9Z14GOWKQBcwNLzfoP5HF17qU9D2VhebrwCxhODx7mTQ7FmkBq3yicxB44jLFfV+gJKR0xF3R8b9whNWBFpeql+E+LAu0XzRYV5LKnXZ/k68lwGaskbKxJWixV5NkSBqeFOaTRC1WrSsg8SrRHtmonOXPRI9E=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] x86: init improvements
Date: Tue,  2 May 2023 16:59:18 +0200
Message-Id: <20230502145920.56588-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LNXP265CA0092.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:76::32) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SJ0PR03MB6422:EE_
X-MS-Office365-Filtering-Correlation-Id: 8e58fd73-7cde-4abd-0ea8-08db4b1dd4ec
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Xs1/A4yX5fSL8O81MCp+aK6m0jubL/WU9kkcJvrCHdSmslsYJsKNINI2FDjslOqJbpZwxb1KEws5p9G0xUcjF+f0FhUafuNs/oc+OTZZUz1TpuBdSUWzm75RdIiqCCAK0/JdUZgqw36w7fVflrlleHew9KORYbylQQyCbxKAdYQZrXx9S8iqSjC23uCjk9fVl9f7MCdo634dicF3GImRecElO3MjsSLxfiFMQM2RHx82CU2vPF6sIY9dAHfC89uip1QVKE5A8fscl59QsmFqOhU5wBAzzhsLO2GhPLw73TpGY5P8XqMZzQtyiZNCFkX0f4bFsx5/Z7EGjmIB/RNFeAoIX57akM79eKnqiYR6Jfq2KEx9h6yvlf9W3p2w97hzH2fValJ7vlrH3fHvj1Rh9li16kP6hWYBAZ0SK1cX+bJdyWBRVfwG/GK1Gf1IUsTCIC6E1r0Gr/uWZ/MHkGqqJaNt4+dWSdPKalPne8bRbdOjrlKMbbA17TIjVcCAUDWk8d+rLgguLVGNwEF/U+2N965KmD2HXm8LZ+Ba/FoxeQ2yPPcFVepCj8QWixpTH+pC
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(346002)(39860400002)(396003)(366004)(451199021)(2616005)(83380400001)(478600001)(186003)(54906003)(4326008)(6916009)(66476007)(66556008)(66946007)(6486002)(6666004)(6512007)(1076003)(6506007)(26005)(316002)(8936002)(8676002)(5660300002)(41300700001)(82960400001)(2906002)(38100700002)(86362001)(36756003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bUJHSmM5ODBoblBvaTBzNk8yL1B2RC9IN0xybVZkbDZKbE1VV1VmY255cDlQ?=
 =?utf-8?B?Q2dhZGVxaWluYklhc1FBMjV0d2lBa0JsTVhGNUdUaVVacDU0QkxvZWhIWXls?=
 =?utf-8?B?TG0wMS9CV09qNEFLYTJlYWtMbmowSGo2eWovOHFQTzJKVzBDSDZtMVRNVFhO?=
 =?utf-8?B?Znk2MGtXV28vMDhtTUgxY1dLakhZTEdBekxjS1FtT3g0S0dCcEJpd2lxWVRG?=
 =?utf-8?B?akljNjd0RDl5Y0hOV29Tdm8vZnVaOWsyRSt0Y2QyZHY1V0ZwbitIT3hiSVF4?=
 =?utf-8?B?ZWMvRmlBeVlHcDNRazJNd1lXMWNEY3ZnWUc3V29hTVdsbzlkblpHMDRHSGtP?=
 =?utf-8?B?VmZDTUVIS0NNa3A3ZXBjbWlPS2czRFZMS2IrVDNHVFQyN1FkcHA5LytBNkxO?=
 =?utf-8?B?dHoySEkwdDBlb0Q3cHRXSW9BeHg2Yytka041ZERKUUxMR1F0cit0MlQ2V1Q4?=
 =?utf-8?B?bVU0MFdKTTNUdzRtMHViTnk1ZHFJRFFLWW1pNzZiVFp1dDc5NE5Eb01RWE9a?=
 =?utf-8?B?WUNlbjRjbEYyTUNvT2I1bTBOVG8yMVJYS3k4dUdLaGZObGZYRU9Zb1VIZWlV?=
 =?utf-8?B?TXEyV0E3VHAvckZUK3V0QkF6OWREQkVDUVhhYmd5OFV4bVRRNHVTN0wyZ0gv?=
 =?utf-8?B?a2I5aThVT3dzaVJqVEk0VmJVbGQySTJXWDh5RGRsbGR6dHYvQjNMM1ZUSkJm?=
 =?utf-8?B?eks1b2I0cFh2VFVaZTN2U3B5Snl1L3RTa3dhTXk3c0xCck1RWTMzWlVpMFBK?=
 =?utf-8?B?dXp0QXIwTTRFQ1pnWXk1QTVlVjFmM1RUK3NvOFRVUlBwUE9sNStKc2o3MGRE?=
 =?utf-8?B?Uzc4R0I3WnpiVEwzRWdzQmRuNE14MW92a1BkcnRvSFhzZWx3dUliaE1pcTE3?=
 =?utf-8?B?a0pmYkRBQjlWNzk5aDY2eHpRb1FsZ3Jib0crL3l3cU5VdFZUQUVFWktqNTBR?=
 =?utf-8?B?RlA5N1NOWkgyc014ZlVrT09pUjdGMFlBSVRyYUZqZ0pJVWpkMDU1SENqK09y?=
 =?utf-8?B?S1V1RE5KNmZUUzhxVVpxZDVqZHpvc281dThHM3oyK2FYZllZdW1sZk9qUTc4?=
 =?utf-8?B?UVFDendqcmhId0NXUjNxQm5zb29oblQ4V0NpajNCdmxraG1aUlFHdkkxdXBS?=
 =?utf-8?B?YitqT0g0YlVhdnVJbWRzSFg2NENZWjBSUzBGRFV0THdtNnNKWUphUk5YZlVB?=
 =?utf-8?B?Z3Q1b3d6eThHUWZOYmNoRVVWOVFHcnFabStjMENDK3djUFRFMVNXMll0RFBY?=
 =?utf-8?B?OHNRZkxhd3lOczB2SXNpbit4YjlGQ2xUNkZEMFZsMU9GR3pSdlorUGJiVkZ0?=
 =?utf-8?B?MlJHRUh4QnRETm12SFlOUU53TVkyOXU0dURObitrbi80N1JEeU5XSEtGMWEx?=
 =?utf-8?B?eUVuREhseWl3b1dBSkZLYkNKREdmZW8wTXk3ejhReWw1Z0RneGFuSDNJaGNs?=
 =?utf-8?B?MHA2QnIzY3NFeFMybm5SNEZ1SXVXdmpZd0dUaUF1eEJqTTFWckRTTWJuRmZZ?=
 =?utf-8?B?L1kzYXNTSjk1K3ZlOG5tSGVlY3ZLYXc5cjFZNGZvTEhaSlIvaTFJZ0lUNzM2?=
 =?utf-8?B?ZHpTNVNtNjQwYzlPRDkyTEFrMlJKTHkxemU1VUgxVUluRFduS1lmYkhjMmR2?=
 =?utf-8?B?NDZVYjFEUTBNanNuQkYrRmlHc0kwOUQ4RnF4MGJrZHpSRmhmWmZhN1kzUmZK?=
 =?utf-8?B?ZVlsTWp3VTRnNm5ESnlhaGNsaVJmMW0weksyclZENGMvNktzUWxtc2RIcDFJ?=
 =?utf-8?B?TnRlb3ZPOE1tQTJaeE5ISjdudUNHZzVnTlhHYVpOYWNWaXFsTFh0UzJvK0N4?=
 =?utf-8?B?aGgvdUFTWVpySjVHS0FCQmdaWjZ1U1Zqamw2QjZ1TkJjTkJOVXNrQzR0eXdv?=
 =?utf-8?B?WXEzN2Zjbzhjb05jTFQwY0Q4S2kxdHRXUWJEK3BiVlAvOGhITC9jY2lmQVJ6?=
 =?utf-8?B?VTh0SkNMNXNhRHNtNGliOGNBQ2NlS0NGUnZnbGFjcVYycngzdGVWdXZiWXIy?=
 =?utf-8?B?S05oTVlEWHpobTh4QXBSMUFqbXRVRjVDN0tZWjJZMXVmYXJOUkJBaElFRlpj?=
 =?utf-8?B?ZGVKQXlsUXhwOVFJTWpjeWtyc0lOV2NRMHlyRnl5YUN5cTYwbFI4MjhXSG5J?=
 =?utf-8?B?WHo5R2FxZGd1S25OaGttS3NtajQzTG1XWm1QbXlvdi8vcEppMTgvL1ZCVXgv?=
 =?utf-8?B?eHc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	GK28bXPhf/JMZFz+2T1n4pUPH1CpRLYAxV310vcIboEf9+TVuruEyP9taE8rQ0c4HMymPrCExltHNwU0BwqW2EBAXdObmMCqRnKj4lJBhHvsqP+m3m1IhC1Gx6XcqBgDMB/SJvRm4aTUJElC7V/hOIZc8GKk2ELwd7b3LLkAZWCi+Wwfjjxuxx7vqQdvBDg3AZIA+36jGqIUCXOFKlgcyTt7NlfCnOFSr+t8MRABU8E/N36KVv3E0hILtkTa5KUbp+MujteBp80xRyl0UUk9mCESViPKPO95hlqi8N2sfCRzavLSSWvzDpUbcFc7SX7jqwgfLDAJ7kAejY2TNC4obaS0ezudzqWaFICQOuemMUOKS/wSwuoxIhUUNSHYSn8NUJQ5vwPDRO47j9xvuZ9EeOYyTdvT1xmYsVH46+as4UFH7xg6uoGJzAvO+ltmOKhCDBGH6XRy9NVKHWw79PRwUAMhBNpG6BIzk14gqr6MEzo32vuQuHXvOAYgaKOKtMZb3PW+PJMerfnZq/iZQnozARLae3r2mk1PPy6Hc00mwCiAVWzaVm0ZxJ3r7OYzh/HcyML8RdmQe4VQKwK7ldfRgVO9BTNAD2XXKf3v5PsS2IYp/ysXZ+Hx4LZbLM5lTjmL0JgrOt+Mk58iSvVlWWzUqLRyjGefUP7unkZiH2EyuUt5xVk8eDQxbsU9kV3/NBGAdcyaWXT17/OG0WVnD0gv6bBNbx3uyppqEpIAL07HcaoHCOKdhNQZfbg8JHHbsdRSJM3JRFNjVB740rkgC5U1C2HphGy7Yi4L30F+rUfiTqH6V+7YpWWha+qKY2E+j3fp
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8e58fd73-7cde-4abd-0ea8-08db4b1dd4ec
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 14:59:30.2942
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: PQoADG3c06z50l0LkQZTxNeFaUTOhkYoO/688yT+CXNcLDDLTfY/lhwr9xqsopdN8zPymj/TZa6ndzsvtAGoxg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6422

Hello,

The following series contains two minor improvements for early boot:
the first is an alignment check when building the initial page tables,
the second a consistency fix for the GDT used by the BSP for the
trampoline code.

Both are the result of some debugging work done on a system with
broken firmware that loaded the Xen text at an address that was not
2Mb aligned.  This resulted in corrupted page tables, which manifested
as the ljmp from compatibility mode in trampoline_protmode_entry
causing a triple fault: the GDT is located in the Xen text section,
and the page table entry for that address was corrupt because Xen was
not loaded at a 2Mb boundary.

The aim of the series (especially the first patch) is not to allow
booting on such broken firmware, but to print an error message instead
of causing a triple fault.

Thanks, Roger.

Roger Pau Monne (2):
  x86/head: check base address alignment
  x86/trampoline: load the GDT located in the trampoline page

 xen/arch/x86/boot/head.S       | 14 ++++++++++++++
 xen/arch/x86/boot/trampoline.S |  3 +++
 2 files changed, 17 insertions(+)

-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue May 02 15:14:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 15:14:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528717.822215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrhR-0002uK-QM; Tue, 02 May 2023 15:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528717.822215; Tue, 02 May 2023 15:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrhR-0002uD-Ni; Tue, 02 May 2023 15:13:53 +0000
Received: by outflank-mailman (input) for mailman id 528717;
 Tue, 02 May 2023 15:13:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M8xE=AX=citrix.com=prvs=479c33cca=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1ptrhQ-0002u7-Gc
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 15:13:52 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f0df85e0-e8fb-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 17:13:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0df85e0-e8fb-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683040430;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=07zf0NeMm2d5BtZmhXtI2pUoiNhPMEhhTEmYaZi+6VM=;
  b=GMAjk58st6hl2mDRuSqNadIMKB6vrLloc/s7G7V/3mXC7y0dYOnUeyiR
   cNVNwriIz6gRR3M2wanbDItqztGNhWXrK1BlzZPFpwgAH7SVMkBu6ts73
   9YFeENaroaK8JKZ/nX9YzkNuC+PY2sHtvaZQl6WWX+PqrDJlPJvCGxD9Z
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110035196
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:s+eLYaA7HJEE5xVW/wjjw5YqxClBgxIJ4kV8jS/XYbTApD0lgTwGm
 GMZD2qCOfyJNzTzed5xPoy39EhQuZKHndJhQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G5A5ARkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIwo/p7JExO0
 cIhBBsKMTqHm+ToxK3iVbw57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTIJs4gOevgGi5azBCoUiZjaE2/3LS3Ep6172F3N/9I4XSHZ4IxxfGz
 o7A1ye+X1IxF9W49TOArmuJq8rFrCfwWJ1HQdVU8dY12QbOlwT/EiY+UFKhpPCjh02WWtRBK
 lcV8C4jsagz8kOwStD3GRa/pRasrhMaHtZdDeA+wAWM0bbPpRaUAHAeSTxMY8Bgs9U5LRQ10
 neZktWvAiZg2JWXRmia7ay8ti6pNG4eKmpqTSYcQBEM+dXLvIA5hRWJRdFmeJNZlfWsR2u2m
 WrT6nFj2fNK15VjO7iHEU7v2i6gg7XJajAMyi7QAUih8gUnYJH8eNn9gbTE1sqsPLp1X3HY4
 ihfw5HEvL9RZX2evHfTGbtQRdlF897AaWSB2gA3QvHN4hz3oxaekZZsDCaSzauDGuINYnfXb
 UDaomu9D7cDbSLxPcebj29cYvnGLJQM9vy/DJg4lvIUPvBMmPavpUmCn3K40WH3i1QLmqoiI
 5qdesvEJS9EWf42kmXvHb9DjOBDKsUC+I8ubcqjk0TPPUS2PRZ5tovpwHPRN7tkvctoUS3e8
 spFNtvi9iizpNbWO3GNmaZKdABiEJTOLcyuwyChXrLZc1UO9aBII6O5/I7NjKQ+wf4Oyr6Zr
 xlQmCZwkTLCuJEOEi3SAlgLVV8ldc0XQa4TVcD0AWuV5g==
IronPort-HdrOrdr: A9a23:RQmW+KkUW7NY1jcm7aKWfWYvGSDpDfIT3DAbv31ZSRFFG/FwWf
 re5cjztCWE8Ar5PUtLpTnuAtjkfZqxz+8W3WBVB8bAYOCEggqVxeNZnO/fKlTbckWUygce78
 ddmsNFebrN5DZB/KDHCcqDf+rIAuPrzEllv4jjJr5WIz1XVw==
X-Talos-CUID: =?us-ascii?q?9a23=3AdEIMBWsbHOkjgbhsdIo8DqpQ6It8bnH44Xf0HXa?=
 =?us-ascii?q?UImE2YrmJTwGU5/p7xp8=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3AE7+Nig39ZiOi1+2vvmHzld7ZgDUjoOPzS2sumLI?=
 =?us-ascii?q?6sszYLAldHj2siD+3Xdpy?=
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="110035196"
Date: Tue, 2 May 2023 16:13:40 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
	<marmarek@invisiblethingslab.com>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>
Subject: Re: [PATCH v3 2/4] tools/xendevicemodel: Introduce
 ..._get_ioreq_server_info_ext
Message-ID: <ZFEopAR3l2HCMK9d@perard>
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <1f6dc87eebe5d1c27ae15ec8f5d8006e5aa1c36d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <e529da7e-0da6-af2f-e5b1-bb8f361a518c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e529da7e-0da6-af2f-e5b1-bb8f361a518c@suse.com>

On Thu, Apr 06, 2023 at 08:05:04AM +0200, Juergen Gross wrote:
> On 06.04.23 05:57, Marek Marczykowski-Górecki wrote:
> > Add xendevicemodel_get_ioreq_server_info_ext() which additionally
> > returns output flags that XEN_DMOP_get_ioreq_server_info can now return.
> > Do not change signature of existing
> > xendevicemodel_get_ioreq_server_info() so existing users will not need
> > to be changed.
> > 
> > This advertises behavior change of "x86/msi: passthrough all MSI-X
> > vector ctrl writes to device model" patch.
> > 
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > ---
> > v3:
> >   - new patch
> > 
> > Should there be some HAVE_* #define in the header? Does this change
> > require soname bump (I hope it doesn't...).
> 
> You need to add version 1.5 to libxendevicemodel.map which should define
> the new function.

And update MINOR in the Makefile.
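For illustration, the new version node in libxendevicemodel.map could
look roughly like the following (the node names and the version being
inherited from are assumptions based on the thread, not verified
against the tree):

```
VERS_1.5 {
	global:
		xendevicemodel_get_ioreq_server_info_ext;
} VERS_1.4;
```

Inheriting from the previous node keeps all existing exported symbols
at their current versions, so only the new function gets the 1.5 tag.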

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 02 15:17:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 15:17:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528720.822225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrky-0003W5-9c; Tue, 02 May 2023 15:17:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528720.822225; Tue, 02 May 2023 15:17:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrky-0003Vw-6r; Tue, 02 May 2023 15:17:32 +0000
Received: by outflank-mailman (input) for mailman id 528720;
 Tue, 02 May 2023 15:17:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d5QU=AX=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1ptrkw-0003Vq-VQ
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 15:17:31 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.162]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 734e5329-e8fc-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 17:17:28 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz42FHRdnY
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 2 May 2023 17:17:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 734e5329-e8fc-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1683040647; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=Wa57NokZYMCCzcN803tea9dgEoWAtmCZHOIBOey296a6pkqxIZXnNjZ8UMJwAV7obC
    Z1Hnptv5PcI4mT7GOtWARy5HQk/0V/Q1pK4D4JWpo3b5YOk9Gs+MBWAFPeZSOblrBPeJ
    JH7JF0ZNFHC0VZcrykd6vkKrd5aRNmZEiJRVSbUPWNfvqSuDjwDy+JNKD6ZkBMdN8eOg
    r0+0MZnnLqyghtBjYmVy0NY9WlWdNnsfGhB3U9TXRkMQ7/2gbUX1g4G7FpV68gl25QN4
    Dpcl+64MnjyvVtG/f9pktdVQynXCujpubHCqqpwwZNSa8q4HuQYg71FQxks8ZNE2IT+Z
    CgTQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683040647;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=Fwogx588Aaq2W17horur++XnjoVqpQdEpobUSP19l0k=;
    b=tCvtVlzx69JaEKJFbncI1ojaEt+7jWmnG27jF90to4daDrBRi8t6lhKZ8ZY4Ofsypk
    Bmvlyu097P/GBwoyJDmZ2HMic9jD/jVrqLogsvY7Vd9EIsNILVPwnHwPhi5vwLDRn/mj
    N2j7DgUUv63jezP5Qoz6KsVimd2eLMlSUFUJ4u4INv6dTsneZl3F5g6cEKkVMSZ8+fx2
    3MRA7yRyWIpDC97RmnxaFMOxPHROclUYDIE8er3jS06LDsDlM0An5qOj8lWdMzXPhodP
    px0FxytJhhv1C2CMZLYgwZPDMHaUOFOMFe8d8cAdxta88Ypd8k4O3ofDAWA9lyx3o8LZ
    dyWQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683040647;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=Fwogx588Aaq2W17horur++XnjoVqpQdEpobUSP19l0k=;
    b=QBncioqE5fe9U6uDbElKvJHsPyNUd9YyBEjutjToQrAPTAD/5/hQU+FEG7xqTUuZ6k
    elo5QLVcEUw743fpp48jdF4KBOfWcjqFT9eg8HN8NTtE1ge4dsOi2XorIW7NkTYPfiZ6
    xriap2gL9ohVosD8PdKNp7eTEJyvVzjvlDkFdjPzh/xUuFXEyMngvmpb0Ir+uFKzoUf7
    isqnU5rzRRziy+4kYhfwhKUb3t2OfTp3zJYAGEldXgJRiHtJ5Je6lEoH6NS+1D0hwWTD
    /Y9zhLfDa/r4iBDIXM4jaGeUJFQK6DGL8qtEgCKGLTEGC/t+27o7bTDkREd2aKGrph7w
    u0sA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683040647;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=Fwogx588Aaq2W17horur++XnjoVqpQdEpobUSP19l0k=;
    b=vNi8eV540a1kJ62UpVgJaLrN32m9t0liS3ULrEqaEBerzfymeoHenJ2FhTwPlTAc1x
    YhFY6ZvI+7mdKxGviZDw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR5BxOIbBnsc1fym1gFvNQ7EzMpH+yFJc4aADp/8Q=="
Date: Tue, 2 May 2023 17:17:20 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Subject: Re: HAS_CC_CET_IBT misdetected
Message-ID: <20230502171720.7970f914.olaf@aepfle.de>
In-Reply-To: <bb07dd8a-598a-564c-3dbd-457e982fd5d0@citrix.com>
References: <20230502074853.7cd10ee3.olaf@aepfle.de>
	<43b1c214-4248-a735-6f8c-9e08bdd2eaf6@suse.com>
	<20230502133313.2192eb99.olaf@aepfle.de>
	<20230502140444.1dacdb33.olaf@aepfle.de>
	<bb07dd8a-598a-564c-3dbd-457e982fd5d0@citrix.com>
X-Mailer: Claws Mail 20220819T065813.516423bc hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/w5sxl/Ob8gw=tMiashUNBs0";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/w5sxl/Ob8gw=tMiashUNBs0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 2 May 2023 14:41:25 +0100 Andrew Cooper <andrew.cooper3@citrix.com>:

> Does this improve things for you?

./checker: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by ./checker)
make[2]: *** [Makefile:24: check-headers] Error 1

I think as soon as tools/ or stubdom/ is built, more issues like that will appear.


Olaf

--Sig_/w5sxl/Ob8gw=tMiashUNBs0
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRRKYAACgkQ86SN7mm1
DoBiQQ//coHthXXsKJimfjYYvq/c2zdS+1YE4v6VW3u1EGTrJvGkIt/HOtLnEuk5
iZm9d/CAigEB998Cg5cZJADDJUhriYkH7b7wmypwfL+Sn5UXZrZwZVzugQ/fwBBI
uHSu5/exah/fV0vz58+WRT8Lg4ka791nBmzWZPd0A10z57RUlZtGez6vgGU/YHw/
7iAIrkPaA+MaQ6iPY7rhGoBBFrYwajRo1AZ6aAqO/790zlTKAuEAn9TOT6AbmkqA
okYhEHQ+iKKEZ+zhIINu4yDcH9+HnC6I6J0lhSdSIAz4RDqbttJ0ZyiSO88o4WW3
0J7+pEMJripV2CgYxAHjTdmrcnWYLtOUasmHtM96bbGYVrZtkMz9vExtIB0Nu3Rc
AcmUEy/JwvJLK69fJ112AGoqNuFiR9QWYtzVdUWPdSFYyVWWo2US6HnimQEKA8gW
aGSs8F/+yPrsENEaRBtQbvNydWiczyAHxi2RFD8UZAcVRl7CZe7MOWYykzWaPfSV
nDf20UbtOXtC27h5gb0Y/kVzb2nlfSN+8fzgMJhsl2pyUDeKWYG1KbXra44BA+ya
5TJqbonsLXWjLMx5HK2JmeGP2pfFYvsOqQoL5Coez4qIkl5LBvv/HnKxL7dphs+s
NWXolq+wz5D9PlRS/ERw+gZsqZyyBng1xcN0JlYr4XSvhWhaBuQ=
=HcC5
-----END PGP SIGNATURE-----

--Sig_/w5sxl/Ob8gw=tMiashUNBs0--


From xen-devel-bounces@lists.xenproject.org Tue May 02 15:21:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 15:21:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528726.822235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrp0-00051T-QA; Tue, 02 May 2023 15:21:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528726.822235; Tue, 02 May 2023 15:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptrp0-00051M-NB; Tue, 02 May 2023 15:21:42 +0000
Received: by outflank-mailman (input) for mailman id 528726;
 Tue, 02 May 2023 15:21:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dv4+=AX=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1ptroy-00051B-LR
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 15:21:40 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 078a6cf0-e8fd-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 17:21:38 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-556-js-j9vUPNr-JzXSEn51u-w-1; Tue, 02 May 2023 11:21:07 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CC73E18E5312;
 Tue,  2 May 2023 15:19:53 +0000 (UTC)
Received: from redhat.com (unknown [10.39.193.211])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C7EB52026D25;
 Tue,  2 May 2023 15:19:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 078a6cf0-e8fd-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683040896;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=O1kjfWbyj3zykPV4mxkot03UoMaYj696sk4Wl0CL1mY=;
	b=ZV0qWZFJvlO+3fqr3yJoSQXriwtRM7ZEl7fCXgrvSyXGOxM3d5zz7QshoCBqzlVSlPhtUn
	LnXCLR0lLfTNCXchNpIH4/j7NMAlfhuCymLiy1UmRIo48lSmZIchjJID4628wtsS1SwtNU
	jQHtkZSWdk2UZghJmRuJzlkU6f6IH54=
X-MC-Unique: js-j9vUPNr-JzXSEn51u-w-1
Date: Tue, 2 May 2023 17:19:46 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 03/20] virtio-scsi: avoid race between unplug and
 transport event
Message-ID: <ZFEqEkG4ktn9bBFN@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-4-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230425172716.1033562-4-stefanha@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

On 25.04.2023 at 19:26, Stefan Hajnoczi wrote:
> Only report a transport reset event to the guest after the SCSIDevice
> has been unrealized by qdev_simple_device_unplug_cb().
> 
> qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
> to false so that scsi_device_find/get() no longer see it.
> 
> scsi_target_emulate_report_luns() also needs to be updated to filter out
> SCSIDevices that are unrealized.
> 
> These changes ensure that the guest driver does not see the SCSIDevice
> that's being unplugged if it responds very quickly to the transport
> reset event.
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

> @@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
>          blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
>          virtio_scsi_release(s);
>      }
> +
> +    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
> +        virtio_scsi_acquire(s);
> +        virtio_scsi_push_event(s, sd,
> +                               VIRTIO_SCSI_T_TRANSPORT_RESET,
> +                               VIRTIO_SCSI_EVT_RESET_REMOVED);
> +        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
> +        virtio_scsi_release(s);
> +    }
>  }

s, sd and s->bus are all unrealized at this point, whereas before this
patch they were still realized. I couldn't find any practical problem
with it, but it made me nervous enough that I thought I should comment
on it at least.

Should we maybe have documentation on these functions that says that
they accept unrealized objects as their parameters?

Kevin



From xen-devel-bounces@lists.xenproject.org Tue May 02 15:43:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 15:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528732.822245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pts9j-0007ca-IS; Tue, 02 May 2023 15:43:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528732.822245; Tue, 02 May 2023 15:43:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pts9j-0007cT-Fi; Tue, 02 May 2023 15:43:07 +0000
Received: by outflank-mailman (input) for mailman id 528732;
 Tue, 02 May 2023 15:43:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dv4+=AX=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1pts9h-0007cN-OJ
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 15:43:05 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0599a117-e900-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 17:43:03 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-647-kgZznv_pOqGm5hBR5_5Thg-1; Tue, 02 May 2023 11:42:57 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 24D3FA0F389;
 Tue,  2 May 2023 15:42:56 +0000 (UTC)
Received: from redhat.com (unknown [10.39.193.211])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 71F17404E4AD;
 Tue,  2 May 2023 15:42:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0599a117-e900-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683042181;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xl8B7Hoag5VEXFozvo4Ia9rRJMwFaCM8mAYKxZI+zJE=;
	b=BF7/GlcHdRmWRANPu3gFbydxQelORZTeHJo9gofW36xmZJUiF2/HuxZfFn6728srGVGeoF
	diygddfxRJGOjq9KW+GrOqc9n0fOMO10Ey6TwDrx6QEZBIime9UyP+qzaD/Li9ve2vDZ0U
	F+qAPzgn2dkxMKuuV/GubBgQrclKbCE=
X-MC-Unique: kgZznv_pOqGm5hBR5_5Thg-1
Date: Tue, 2 May 2023 17:42:51 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 06/20] block/export: wait for vhost-user-blk requests
 when draining
Message-ID: <ZFEve2GfI0TqsItA@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-7-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230425172716.1033562-7-stefanha@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

On 25.04.2023 at 19:27, Stefan Hajnoczi wrote:
> Each vhost-user-blk request runs in a coroutine. When the BlockBackend
> enters a drained section we need to enter a quiescent state. Currently
> any in-flight requests race with bdrv_drained_begin() because it is
> unaware of vhost-user-blk requests.
> 
> When blk_co_preadv/pwritev()/etc returns it wakes the
> bdrv_drained_begin() thread but vhost-user-blk request processing has
> not yet finished. The request coroutine continues executing while the
> main loop thread thinks it is in a drained section.
> 
> One example where this is unsafe is for blk_set_aio_context() where
> bdrv_drained_begin() is called before .aio_context_detached() and
> .aio_context_attach(). If request coroutines are still running after
> bdrv_drained_begin(), then the AioContext could change underneath them
> and they race with new requests processed in the new AioContext. This
> could lead to virtqueue corruption, for example.
> 
> (This example is theoretical, I came across this while reading the
> code and have not tried to reproduce it.)
> 
> It's easy to make bdrv_drained_begin() wait for in-flight requests: add
> a .drained_poll() callback that checks the VuServer's in-flight counter.
> VuServer just needs an API that returns true when there are requests in
> flight. The in-flight counter needs to be atomic.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  include/qemu/vhost-user-server.h     |  4 +++-
>  block/export/vhost-user-blk-server.c | 16 ++++++++++++++++
>  util/vhost-user-server.c             | 14 ++++++++++----
>  3 files changed, 29 insertions(+), 5 deletions(-)
> 
> diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
> index bc0ac9ddb6..b1c1cda886 100644
> --- a/include/qemu/vhost-user-server.h
> +++ b/include/qemu/vhost-user-server.h
> @@ -40,8 +40,9 @@ typedef struct {
>      int max_queues;
>      const VuDevIface *vu_iface;
>  
> +    unsigned int in_flight; /* atomic */
> +
>      /* Protected by ctx lock */
> -    unsigned int in_flight;
>      bool wait_idle;
>      VuDev vu_dev;
>      QIOChannel *ioc; /* The I/O channel with the client */
> @@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
>  
>  void vhost_user_server_inc_in_flight(VuServer *server);
>  void vhost_user_server_dec_in_flight(VuServer *server);
> +bool vhost_user_server_has_in_flight(VuServer *server);
>  
>  void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
>  void vhost_user_server_detach_aio_context(VuServer *server);
> diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
> index 841acb36e3..092b86aae4 100644
> --- a/block/export/vhost-user-blk-server.c
> +++ b/block/export/vhost-user-blk-server.c
> @@ -272,7 +272,20 @@ static void vu_blk_exp_resize(void *opaque)
>      vu_config_change_msg(&vexp->vu_server.vu_dev);
>  }
>  
> +/*
> + * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
> + *
> + * Called with vexp->export.ctx acquired.
> + */
> +static bool vu_blk_drained_poll(void *opaque)
> +{
> +    VuBlkExport *vexp = opaque;
> +
> +    return vhost_user_server_has_in_flight(&vexp->vu_server);
> +}
> +
>  static const BlockDevOps vu_blk_dev_ops = {
> +    .drained_poll  = vu_blk_drained_poll,
>      .resize_cb = vu_blk_exp_resize,
>  };

You're adding a new function pointer to an existing BlockDevOps...

> @@ -314,6 +327,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
>      vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
>                               logical_block_size, num_queues);
>  
> +    blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
>      blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
>                                   vexp);
>  
>      blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);

..but still add a second blk_set_dev_ops(). Maybe a bad merge conflict
resolution with commit ca858a5fe94?

> @@ -323,6 +337,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
>                                   num_queues, &vu_blk_iface, errp)) {
>          blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
>                                          blk_aio_detach, vexp);
> +        blk_set_dev_ops(exp->blk, NULL, NULL);
>          g_free(vexp->handler.serial);
>          return -EADDRNOTAVAIL;
>      }
> @@ -336,6 +351,7 @@ static void vu_blk_exp_delete(BlockExport *exp)
>  
>      blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
>                                      vexp);
> +    blk_set_dev_ops(exp->blk, NULL, NULL);
>      g_free(vexp->handler.serial);
>  }

These two hunks are then probably already fixes for ca858a5fe94 and
should be a separate patch if so.

> diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
> index 1622f8cfb3..2e6b640050 100644
> --- a/util/vhost-user-server.c
> +++ b/util/vhost-user-server.c
> @@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
>  void vhost_user_server_inc_in_flight(VuServer *server)
>  {
>      assert(!server->wait_idle);
> -    server->in_flight++;
> +    qatomic_inc(&server->in_flight);
>  }
>  
>  void vhost_user_server_dec_in_flight(VuServer *server)
>  {
> -    server->in_flight--;
> -    if (server->wait_idle && !server->in_flight) {
> -        aio_co_wake(server->co_trip);
> +    if (qatomic_fetch_dec(&server->in_flight) == 1) {
> +        if (server->wait_idle) {
> +            aio_co_wake(server->co_trip);
> +        }
>      }
>  }
>  
> +bool vhost_user_server_has_in_flight(VuServer *server)
> +{
> +    return qatomic_load_acquire(&server->in_flight) > 0;
> +}
> +

Any reason why you left the server->in_flight accesses in
vu_client_trip() non-atomic?

Kevin



From xen-devel-bounces@lists.xenproject.org Tue May 02 15:58:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 15:58:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528736.822255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptsOI-0000uJ-Ti; Tue, 02 May 2023 15:58:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528736.822255; Tue, 02 May 2023 15:58:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptsOI-0000uC-QH; Tue, 02 May 2023 15:58:10 +0000
Received: by outflank-mailman (input) for mailman id 528736;
 Tue, 02 May 2023 15:58:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OyLC=AX=citrix.com=prvs=47975177b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1ptsOH-0000u6-Gd
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 15:58:10 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 201eb7ae-e902-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 17:58:07 +0200 (CEST)
Received: from mail-dm6nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 11:58:04 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA1PR03MB6498.namprd03.prod.outlook.com (2603:10b6:806:1c5::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 15:58:00 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 15:58:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 201eb7ae-e902-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683043087;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=VKhzaoHzMpkPI2oR0wG+B/3Pp4LdVNlAg8bUSXN0pJ0=;
  b=gwerr6cexusN8GPWctBfh5FzBp6x3ylsDBqCXFyqpwVg62oRb26kM29G
   tDEgB+KeedChZU3vGWWd+2FYNVXn6y7Y2xqi4zFkdZARku/ZF3dpaLvEi
   7dEoDd+dZDzP764j6eTTQcT+31moiYBkmAnMBCufAnJirOCWIWBkR5X5+
   E=;
X-IronPort-RemoteIP: 104.47.58.100
X-IronPort-MID: 108008988
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:L42GzG/Zg9W9tYnDPX6Vv2wdBPA5UWz4923RfUy7N0BGSK+/bHbFrQ==
X-Talos-MUID: 9a23:YJ+q+QlxQOhbxWGimefsdnpcMuNv5piOJ3tQsqggtfCfJS4oZBmC2WE=
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="108008988"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Lo8U/uUvEB2ha2pgKN9vjq0+YRDvxQZVObgj6yJiMjJGwAvBarm+m9xtTOOi1JPhIUJJ/BXiNHofuaH74vKDaOKD2nB2nMN13Q+B6COn/bFnjiqCP7AtGnWdno6wWW49s72hNeRUB+0bA2bKTGTMdVWdAaA3FOVbpi39n7bGg7AlIAK9gkUHZr1Ra1ZEwAjDtcli3s9Dw5yzAE0aOD0KRL5edgUNYMDcUgU0FhsCBp7SLyalwsuxtZcfPxNH9yZnaMX8pvXEUFErg2li7Bk83SRdlc59bschA2zh2pXgPvlZZcVnsPPszxLFNCfE5hx8jPVqKT2bgbbgOd1qj/s+4g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/QaIJbAvvaRh5Q90NiE5lmpweIBxxS9Nnf6nV7wK/SM=;
 b=d9fNLDW7+RpPR9dIp2ln/p3e6qODtN8CwUotzM3u51766kRzXux15W/oRUQm12ecxBTb8mgropqb19+xPP+HV9XEt0Y8lmCQk5ZDfFAX5GJW3HVzpgnWgRCCHKqNuMW0wZewLzABhFVGAOzq/sZwvNqEvaAp/4ReSzjA1AoCHYT/On7WFVL7aYNxskwBIxGvAwc6nNdPdVKu4SQ/Ed6lJxHkNKnE3m3/BPHGUL7NPeBxtBGCLJkZ2uOQ12c5xuavCyv3CR2L90NXnFy+aBEVae+6Wb6ueDRiNrqYOSHGXmPP4Lkvt32HDC5NDf1GRqgt3+mPnlFyO6xx00/5w/Jwyg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/QaIJbAvvaRh5Q90NiE5lmpweIBxxS9Nnf6nV7wK/SM=;
 b=ifWJ3IR985Qh0FmHNHWrEEvj0/y/LWC7L/bct0obu712Of20dmh8lyramXtdHWpVYw3sOpWZXhTt0XUPnrPQzh6/3ZK9T9DTBBsSVYMHLLofsXZZ86+uJr+C0p9V51r5bVlEBWggCrlWbpYG5kghW+CI0RixziqfPXN2D1kaqHQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 2 May 2023 17:57:54 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Ross Lagerwall <ross.lagerwall@citrix.com>
Cc: xen-devel@lists.xenproject.org, jgross@suse.com, sstabellini@kernel.org,
	oleksandr_tyshchenko@epam.com, axboe@kernel.dk
Subject: Re: [PATCH] xen/blkfront: Only check REQ_FUA for writes
Message-ID: <ZFEzAnOskzdb61O4@Air-de-Roger>
References: <20230426164005.2213139-1-ross.lagerwall@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230426164005.2213139-1-ross.lagerwall@citrix.com>
X-ClientProxiedBy: LO4P265CA0153.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c7::16) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA1PR03MB6498:EE_
X-MS-Office365-Filtering-Correlation-Id: 3f5dad1b-c9b4-4298-a16e-08db4b2600d2
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f5dad1b-c9b4-4298-a16e-08db4b2600d2
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 15:58:00.0789
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hMmq9He7APaTLki7WDsWHmuS7BCyNRWf0mFhPtQLJiDzl0WDFJV7yOQzrNGeYHpol5j5ov1lEwi3ba1yaCKSzg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6498

On Wed, Apr 26, 2023 at 05:40:05PM +0100, Ross Lagerwall wrote:
> The existing code silently converts read operations with the
> REQ_FUA bit set into write-barrier operations. This results in data
> loss as the backend scribbles zeroes over the data instead of returning
> it.
> 
> While the REQ_FUA bit doesn't make sense on a read operation, at least
> one well-known out-of-tree kernel module does set it and since it
> results in data loss, let's be safe here and only look at REQ_FUA for
> writes.

Do we know what the intention of the out-of-tree kernel module is with
its usage of FUA for reads?

Should this maybe be translated to a pair of flush cache and read
requests?

> Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
> ---
>  drivers/block/xen-blkfront.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 23ed258b57f0..c1890c8a9f6e 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  		ring_req->u.rw.handle = info->handle;
>  		ring_req->operation = rq_data_dir(req) ?
>  			BLKIF_OP_WRITE : BLKIF_OP_READ;
> -		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
> +		if (req_op(req) == REQ_OP_FLUSH ||
> +		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {

Should we maybe print a warning, perhaps just once, when we receive a
READ request with the FUA flag set, to note that the flag will have no
effect?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 02 16:04:51 2023
Date: Tue, 2 May 2023 18:04:24 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel P. Berrangé <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 07/20] block/export: stop using is_external in
 vhost-user-blk server
Message-ID: <ZFE0iFnbr2ey0A7X@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-8-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-8-stefanha@redhat.com>

On 25.04.2023 at 19:27, Stefan Hajnoczi wrote:
> vhost-user activity must be suspended during bdrv_drained_begin/end().
> This prevents new requests from interfering with whatever is happening
> in the drained section.
> 
> Previously this was done using aio_set_fd_handler()'s is_external
> argument. In a multi-queue block layer world the aio_disable_external()
> API cannot be used since multiple AioContext may be processing I/O, not
> just one.
> 
> Switch to BlockDevOps->drained_begin/end() callbacks.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
>  util/vhost-user-server.c             | 10 +++----
>  2 files changed, 26 insertions(+), 27 deletions(-)
> 
> diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
> index 092b86aae4..d20f69cd74 100644
> --- a/block/export/vhost-user-blk-server.c
> +++ b/block/export/vhost-user-blk-server.c
> @@ -208,22 +208,6 @@ static const VuDevIface vu_blk_iface = {
>      .process_msg           = vu_blk_process_msg,
>  };
>  
> -static void blk_aio_attached(AioContext *ctx, void *opaque)
> -{
> -    VuBlkExport *vexp = opaque;
> -
> -    vexp->export.ctx = ctx;
> -    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
> -}
> -
> -static void blk_aio_detach(void *opaque)
> -{
> -    VuBlkExport *vexp = opaque;
> -
> -    vhost_user_server_detach_aio_context(&vexp->vu_server);
> -    vexp->export.ctx = NULL;
> -}

So for changing the AioContext, we now rely on the fact that the node to
be changed is always drained, so the drain callbacks implicitly cover
this case, too?

>  static void
>  vu_blk_initialize_config(BlockDriverState *bs,
>                           struct virtio_blk_config *config,
> @@ -272,6 +256,25 @@ static void vu_blk_exp_resize(void *opaque)
>      vu_config_change_msg(&vexp->vu_server.vu_dev);
>  }
>  
> +/* Called with vexp->export.ctx acquired */
> +static void vu_blk_drained_begin(void *opaque)
> +{
> +    VuBlkExport *vexp = opaque;
> +
> +    vhost_user_server_detach_aio_context(&vexp->vu_server);
> +}

Compared to the old code, we're losing the vexp->export.ctx = NULL. This
is correct at this point because after drained_begin we still keep
processing requests until we arrive at a quiescent state.

However, if we detach the AioContext because we're deleting the
iothread, won't we end up with a dangling pointer in vexp->export.ctx?
Or can we be certain that nothing interesting happens before drained_end
updates it with a new valid pointer again?

Kevin



From xen-devel-bounces@lists.xenproject.org Tue May 02 16:14:30 2023
Date: Tue, 2 May 2023 17:13:41 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Luca Fancellu <luca.fancellu@arm.com>
CC: <xen-devel@lists.xenproject.org>, <bertrand.marquis@arm.com>,
	<wei.chen@arm.com>, George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Marek Marczykowski-Górecki
	<marmarek@invisiblethingslab.com>, Christian Lindig
	<christian.lindig@cloud.com>
Subject: Re: [PATCH v6 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Message-ID: <4e6758c0-dd81-4963-8989-d941eba2b257@perard>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-10-luca.fancellu@arm.com>
In-Reply-To: <20230424060248.1488859-10-luca.fancellu@arm.com>

On Mon, Apr 24, 2023 at 07:02:45AM +0100, Luca Fancellu wrote:
> diff --git a/tools/include/xen-tools/arm-arch-capabilities.h b/tools/include/xen-tools/arm-arch-capabilities.h
> new file mode 100644
> index 000000000000..ac44c8b14344
> --- /dev/null
> +++ b/tools/include/xen-tools/arm-arch-capabilities.h
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: GPL-2.0 */

Do you mean GPL-2.0-only ?

GPL-2.0 is deprecated by the SPDX project.

https://spdx.org/licenses/GPL-2.0.html
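With the non-deprecated identifier, the header would simply read:

```c
/* SPDX-License-Identifier: GPL-2.0-only */
```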


Besides that, patch looks fine:
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 02 16:21:50 2023
Date: Tue, 2 May 2023 18:21:20 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel P. Berrangé <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 10/20] block: drain from main loop thread in
 bdrv_co_yield_to_drain()
Message-ID: <ZFE4gFFXnu+FSk35@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-11-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-11-stefanha@redhat.com>

On 25.04.2023 at 19:27, Stefan Hajnoczi wrote:
> For simplicity, always run BlockDevOps .drained_begin/end/poll()
> callbacks in the main loop thread. This makes it easier to implement the
> callbacks and avoids extra locks.
> 
> Move the function pointer declarations from the I/O Code section to the
> Global State section in block-backend-common.h.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

If we're updating function pointers, we should probably update them in
BdrvChildClass and BlockDriver, too.

This means that a non-coroutine caller can't run in an iothread, not
even the home iothread of the BlockDriverState. (I'm not sure if it was
allowed previously. I don't think we're actually doing this, but in
theory it could have worked.) Maybe put a GLOBAL_STATE_CODE() after
handling the bdrv_co_yield_to_drain() case? Or would that look too odd?

    IO_OR_GS_CODE();

    if (qemu_in_coroutine()) {
        bdrv_co_yield_to_drain(bs, true, parent, poll);
        return;
    }

    GLOBAL_STATE_CODE();

Kevin



From xen-devel-bounces@lists.xenproject.org Tue May 02 16:40:43 2023
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"jgross@suse.com" <jgross@suse.com>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "oleksandr_tyshchenko@epam.com"
	<oleksandr_tyshchenko@epam.com>, "axboe@kernel.dk" <axboe@kernel.dk>
Subject: Re: [PATCH] xen/blkfront: Only check REQ_FUA for writes
Date: Tue, 2 May 2023 16:40:17 +0000
Message-ID:
 <DM6PR03MB53726982B16DC1FA7B77D143F06F9@DM6PR03MB5372.namprd03.prod.outlook.com>
References: <20230426164005.2213139-1-ross.lagerwall@citrix.com>
 <ZFEzAnOskzdb61O4@Air-de-Roger>
In-Reply-To: <ZFEzAnOskzdb61O4@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
msip_labels:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DM6PR03MB5372:EE_|CO1PR03MB5892:EE_
x-ms-office365-filtering-correlation-id: 0351faf7-db63-42f8-6600-08db4b2be99a
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DM6PR03MB5372.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0351faf7-db63-42f8-6600-08db4b2be99a
X-MS-Exchange-CrossTenant-originalarrivaltime: 02 May 2023 16:40:17.7892
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR03MB5892

> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: Tuesday, May 2, 2023 4:57 PM
> To: Ross Lagerwall <ross.lagerwall@citrix.com>
> Cc: xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>; jgross@suse.com <jgross@suse.com>; sstabellini@kernel.org <sstabellini@kernel.org>; oleksandr_tyshchenko@epam.com <oleksandr_tyshchenko@epam.com>; axboe@kernel.dk <axboe@kernel.dk>
> Subject: Re: [PATCH] xen/blkfront: Only check REQ_FUA for writes
>
> On Wed, Apr 26, 2023 at 05:40:05PM +0100, Ross Lagerwall wrote:
> > The existing code silently converts read operations with the
> > REQ_FUA bit set into write-barrier operations. This results in data
> > loss as the backend scribbles zeroes over the data instead of
> > returning it.
> >
> > While the REQ_FUA bit doesn't make sense on a read operation, at
> > least one well-known out-of-tree kernel module does set it and since
> > it results in data loss, let's be safe here and only look at REQ_FUA
> > for writes.
>
> Do we know what the intention of the out-of-tree kernel module was
> with its usage of FUA for reads?

It was just a plain bug that has now been fixed:

https://github.com/veeam/blksnap/commit/e3b3e7369642b59e01c647934789e5e20b380c62

I think this patch is still worthwhile, since reads becoming writes is
asking for data corruption.

> Should this maybe be translated to a pair of flush cache and read
> requests?
>
> > Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
> > ---
> >  drivers/block/xen-blkfront.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 23ed258b57f0..c1890c8a9f6e 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
> >  		ring_req->u.rw.handle = info->handle;
> >  		ring_req->operation = rq_data_dir(req) ?
> >  			BLKIF_OP_WRITE : BLKIF_OP_READ;
> > -		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
> > +		if (req_op(req) == REQ_OP_FLUSH ||
> > +		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
>
> Should we maybe print a warning, perhaps just once, that we have
> received a READ request with the FUA flag set and that the flag will
> have no effect?
>

I thought of adding something like this, but I couldn't find any other
block layer code doing a similar check (also, it seems more appropriate
in the core block layer):

WARN_ONCE(req_op(req) != REQ_OP_WRITE && (req->cmd_flags & REQ_FUA));

I can add it if the maintainers want it.

Thanks,
Ross


From xen-devel-bounces@lists.xenproject.org Tue May 02 17:06:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 17:06:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528762.822304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pttSL-000359-Vm; Tue, 02 May 2023 17:06:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528762.822304; Tue, 02 May 2023 17:06:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pttSL-000352-TH; Tue, 02 May 2023 17:06:25 +0000
Received: by outflank-mailman (input) for mailman id 528762;
 Tue, 02 May 2023 17:06:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M8xE=AX=citrix.com=prvs=479c33cca=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pttSK-00034w-Ip
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 17:06:24 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a7b4f32b-e90b-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 19:06:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7b4f32b-e90b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683047180;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=X8Ye7JviKJ6lmX7NtPkFlXMM7AfRK4M9QiNKUFj64gg=;
  b=T232EKTWhpDuxPlSfNwf0ZcsLXvPrSVSXBi2EFQOn1hPPa9eQKo9voKv
   l1lma5sDr2jGcSMcDYqi0wGVIoyNAX/Q60GkAr67Emu9R/b2b3SucmsaI
   XZeTXoIBG0H0wJR9o+M+AiZMhzU39XO4Lq3cEM8G8KBYmdgQQJcH8q3PR
   I=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106369293
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,244,1677560400"; 
   d="scan'208";a="106369293"
Date: Tue, 2 May 2023 18:06:08 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Luca Fancellu <luca.fancellu@arm.com>
CC: <xen-devel@lists.xenproject.org>, <bertrand.marquis@arm.com>,
	<wei.chen@arm.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, "Juergen
 Gross" <jgross@suse.com>
Subject: Re: [PATCH v6 10/12] xen/tools: add sve parameter in XL configuration
Message-ID: <996db21b-e963-4259-884d-2131c548ca1e@perard>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-11-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230424060248.1488859-11-luca.fancellu@arm.com>

On Mon, Apr 24, 2023 at 07:02:46AM +0100, Luca Fancellu wrote:
> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index ddc7b2a15975..1e69dac2c4fa 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -211,6 +213,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>          return ERROR_FAIL;
>      }
>  
> +    /* Parameter is sanitised in libxl__arch_domain_build_info_setdefault */
> +    if (d_config->b_info.arch_arm.sve_vl) {
> +        /* Vector length is divided by 128 in struct xen_domctl_createdomain */
> +        config->arch.sve_vl = d_config->b_info.arch_arm.sve_vl / 128U;
> +    }
> +
>      return 0;
>  }
>  
> @@ -1681,6 +1689,26 @@ int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
>      /* ACPI is disabled by default */
>      libxl_defbool_setdefault(&b_info->acpi, false);
>  
> +    /* Sanitise SVE parameter */
> +    if (b_info->arch_arm.sve_vl) {
> +        unsigned int max_sve_vl =
> +            arch_capabilities_arm_sve(physinfo->arch_capabilities);
> +
> +        if (!max_sve_vl) {
> +            LOG(ERROR, "SVE is unsupported on this machine.");
> +            return ERROR_FAIL;
> +        }
> +
> +        if (LIBXL_SVE_TYPE_HW == b_info->arch_arm.sve_vl) {
> +            b_info->arch_arm.sve_vl = max_sve_vl;
> +        } else if (b_info->arch_arm.sve_vl > max_sve_vl) {
> +            LOG(ERROR,
> +                "Invalid sve value: %d. Platform supports up to %u bits",
> +                b_info->arch_arm.sve_vl, max_sve_vl);
> +            return ERROR_FAIL;
> +        }

You still need to check that sve_vl is one of the values from the enum,
or that the value is divisible by 128.

> +    }
> +
>      if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
>          return 0;
>  
> diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
> index fd31dacf7d5a..9e48bb772646 100644
> --- a/tools/libs/light/libxl_types.idl
> +++ b/tools/libs/light/libxl_types.idl
> @@ -523,6 +523,27 @@ libxl_tee_type = Enumeration("tee_type", [
>      (1, "optee")
>      ], init_val = "LIBXL_TEE_TYPE_NONE")
>  
> +libxl_sve_type = Enumeration("sve_type", [
> +    (-1, "hw"),
> +    (0, "disabled"),
> +    (128, "128"),
> +    (256, "256"),
> +    (384, "384"),
> +    (512, "512"),
> +    (640, "640"),
> +    (768, "768"),
> +    (896, "896"),
> +    (1024, "1024"),
> +    (1152, "1152"),
> +    (1280, "1280"),
> +    (1408, "1408"),
> +    (1536, "1536"),
> +    (1664, "1664"),
> +    (1792, "1792"),
> +    (1920, "1920"),
> +    (2048, "2048")
> +    ], init_val = "LIBXL_SVE_TYPE_DISABLED")

I'm not sure if I like that or not. Is there a reason to stop at 2048?
Is it possible that more values will become available in the future?

Also, this means that users of libxl (like libvirt) would be expected to
use e.g. LIBXL_SVE_TYPE_1024, or use libxl_sve_type_from_string().

Also, it feels weird to me to mostly use the numerical values of the enum
rather than the enum itself.

Anyway, hopefully that enum will work fine.

>  libxl_rdm_reserve = Struct("rdm_reserve", [
>      ("strategy",    libxl_rdm_reserve_strategy),
>      ("policy",      libxl_rdm_reserve_policy),

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 02 17:08:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 17:08:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528766.822314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pttUe-0003dG-AR; Tue, 02 May 2023 17:08:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528766.822314; Tue, 02 May 2023 17:08:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pttUe-0003d9-7h; Tue, 02 May 2023 17:08:48 +0000
Received: by outflank-mailman (input) for mailman id 528766;
 Tue, 02 May 2023 17:08:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g1cy=AX=linaro.org=peter.maydell@srs-se1.protection.inumbo.net>)
 id 1pttUc-0003d1-0w
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 17:08:46 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fdcb4104-e90b-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 19:08:44 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-50bc4ba28cbso4763225a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 02 May 2023 10:08:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdcb4104-e90b-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1683047323; x=1685639323;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=wKq68nL7IZe8CxCPKWRpfsYPAqO/kJptEynBmrLXGno=;
        b=wJTz2Lb9P2BIcudGgZCX+y+30ZlzVkLjiFwcmaHxkHtJYW7P5L/qYfTAiZ+i4zqgtg
         0IlRSg4MA1HAqV+D0ba8Ih+KRJ5IYo8McCdQYaAKLoHJ0c0yM/3QuOL+N3HOsRbmwYwx
         YUU/sXI+6PaC+TgDxUDSllY3ZQNDycR8MxfDHtYukm8AT+M2ON6OQQ1vFKjobX/XROMj
         LU2FAObSj+uKhG8pYCmYJCX5Fj6GEMmI/En30Rt2dV3vL3qtYvuJaDb3J756VQ5g+rlh
         iU74ECh20gV+Mkf64cBgq/6oJ7b7yB7X2hG+wTL4we+cWq1KMC3EYdj26hWTxXZn5xCh
         tYlw==
X-Received: by 2002:a05:6402:44a:b0:504:b511:1a39 with SMTP id
 p10-20020a056402044a00b00504b5111a39mr8695641edw.12.1683047322896; Tue, 02
 May 2023 10:08:42 -0700 (PDT)
MIME-Version: 1.0
References: <20230307182707.2298618-1-dwmw2@infradead.org> <20230307182707.2298618-6-dwmw2@infradead.org>
In-Reply-To: <20230307182707.2298618-6-dwmw2@infradead.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 2 May 2023 18:08:32 +0100
Message-ID: <CAFEAcA9gzJGMqsEY5TuNmb74RskgUTMW+XcqGV53n3SsKyVVXg@mail.gmail.com>
Subject: Re: [PULL 05/27] hw/xen: Watches on XenStore transactions
To: David Woodhouse <dwmw2@infradead.org>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, Paul Durrant <paul@xen.org>, 
	Joao Martins <joao.m.martins@oracle.com>, Ankur Arora <ankur.a.arora@oracle.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, vikram.garhwal@amd.com, 
	Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
	Juan Quintela <quintela@redhat.com>, "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, 7 Mar 2023 at 18:27, David Woodhouse <dwmw2@infradead.org> wrote:
>
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> Firing watches on the nodes that still exist is relatively easy; just
> walk the tree and look at the nodes with refcount of one.
>
> Firing watches on *deleted* nodes is more fun. We add 'modified_in_tx'
> and 'deleted_in_tx' flags to each node. Nodes with those flags cannot
> be shared, as they will always be unique to the transaction in which
> they were created.
>
> When xs_node_walk would need to *create* a node as scaffolding and it
> encounters a deleted_in_tx node, it can resurrect it simply by clearing
> its deleted_in_tx flag. If that node originally had any *data*, they're
> gone, and the modified_in_tx flag will have been set when it was first
> deleted.
>
> We then attempt to send appropriate watches when the transaction is
> committed, properly delete the deleted_in_tx nodes, and remove the
> modified_in_tx flag from the others.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Reviewed-by: Paul Durrant <paul@xen.org>

Hi; Coverity's "is there missing error handling?"
heuristic fired for a change in this code (CID 1508359):

>  static int transaction_commit(XenstoreImplState *s, XsTransaction *tx)
>  {
> +    struct walk_op op;
> +    XsNode **n;
> +
>      if (s->root_tx != tx->base_tx) {
>          return EAGAIN;
>      }
> @@ -720,10 +861,18 @@ static int transaction_commit(XenstoreImplState *s, XsTransaction *tx)
>      s->root_tx = tx->tx_id;
>      s->nr_nodes = tx->nr_nodes;
>
> +    init_walk_op(s, &op, XBT_NULL, tx->dom_id, "/", &n);

This is the only call to init_walk_op() which ignores its
return value. Intentional, or missing error handling?

> +    op.deleted_in_tx = false;
> +    op.mutating = true;
> +
>      /*
> -     * XX: Walk the new root and fire watches on any node which has a
> +     * Walk the new root and fire watches on any node which has a
>       * refcount of one (which is therefore unique to this transaction).
>       */
> +    if (s->root->children) {
> +        g_hash_table_foreach_remove(s->root->children, tx_commit_walk, &op);
> +    }
> +
>      return 0;
>  }

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue May 02 17:59:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 17:59:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528770.822325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptuHv-0001IF-53; Tue, 02 May 2023 17:59:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528770.822325; Tue, 02 May 2023 17:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptuHv-0001I8-1l; Tue, 02 May 2023 17:59:43 +0000
Received: by outflank-mailman (input) for mailman id 528770;
 Tue, 02 May 2023 17:59:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptuHt-0001Hy-S1; Tue, 02 May 2023 17:59:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptuHt-0002Oq-Cg; Tue, 02 May 2023 17:59:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ptuHs-0006P0-Th; Tue, 02 May 2023 17:59:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ptuHs-0006rw-RF; Tue, 02 May 2023 17:59:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fM4yXtJL6nBZqLERvzuPa4/Ju0V+pyT4JCiE9KAPLt0=; b=jawDzMgioiTEyZCCf0IjPkzzHR
	B1inYv3vViH8K/j1v1nWbatmKVkPs10iRVO+TGdBd+6KBvWA5zeJWjPxyCSHU1uJnd91qVQ/P/2NS
	46gMdcOOj7YZHZKBR7XAYUbim0gA7/GSvvuHXt0056sIbyeQEKg0NnsnWuUuP2xLQmEQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180504-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180504: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=865fdb08197e657c59e74a35fa32362b12397f58
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 May 2023 17:59:40 +0000

flight 180504 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180504/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2  12 debian-install             fail pass in 180500
 test-amd64-amd64-xl-vhd      21 guest-start/debian.repeat  fail pass in 180500

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 180500 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 180500 never pass
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                865fdb08197e657c59e74a35fa32362b12397f58
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   15 days
Failing since        180281  2023-04-17 06:24:36 Z   15 days   27 attempts
Testing same since   180500  2023-05-02 01:41:05 Z    0 days    2 attempts

------------------------------------------------------------
2156 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 261762 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 02 18:56:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 18:56:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528776.822335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvAL-000842-Bx; Tue, 02 May 2023 18:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528776.822335; Tue, 02 May 2023 18:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvAL-00083v-9D; Tue, 02 May 2023 18:55:57 +0000
Received: by outflank-mailman (input) for mailman id 528776;
 Tue, 02 May 2023 18:55:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ptvAJ-00083p-PY
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 18:55:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ptvAI-0003hh-TJ; Tue, 02 May 2023 18:55:54 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.27.23]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ptvAI-0005Bb-Mq; Tue, 02 May 2023 18:55:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=/ZyE9fpt/78nDThThdz/kDxQb85mcyv/qvVJpj6ieEk=; b=Qzgu9OvVgcAEZYAA0/Y0wse8EM
	2fyoW52R3LXF9ib8fWeQDRaPeuzvLMralupIneW8Q4KK7xKn6+kbRFQl9kQFcqHWghK6wSN7gCCY6
	ZCBoM+t+Vdcq/bdq9vmhM8KIf3c8Io9owrsl45fBkvip4PafUUiTt8/xzG/zlmLHBSUk=;
Message-ID: <c3be0db1-ed47-967b-7f98-6f1569691fb6@xen.org>
Date: Tue, 2 May 2023 19:55:52 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 05/13] tools/xenstore: use accounting buffering for
 node accounting
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-6-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230405070349.25293-6-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 05/04/2023 08:03, Juergen Gross wrote:
> Add the node accounting to the accounting information buffering in
> order to avoid having to undo it in case of failure.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/xenstore/xenstored_core.c   | 21 ++-------------------
>   tools/xenstore/xenstored_domain.h |  4 ++--
>   2 files changed, 4 insertions(+), 21 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 84335f5f3d..92a40ccf3f 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -1452,7 +1452,6 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
>   static int destroy_node(struct connection *conn, struct node *node)
>   {
>   	destroy_node_rm(conn, node);
> -	domain_nbentry_dec(conn, get_node_owner(node));
>   
>   	/*
>   	 * It is not possible to easily revert the changes in a transaction.
> @@ -1797,27 +1796,11 @@ static int do_set_perms(const void *ctx, struct connection *conn,
>   	old_perms = node->perms;
>   	domain_nbentry_dec(conn, get_node_owner(node));

IIRC, we originally said that domain_nbentry_dec() could never fail in a 
non-transaction case. But with your current rework, the function can now 
fail because of an allocation failure.

Therefore, shouldn't we now check the error? (Possibly in a patch 
beforehand).
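[To make the suggestion concrete, here is a minimal, self-contained sketch
of the kind of check being discussed. The stub below merely stands in for
xenstored's domain_nbentry_dec(); the *_stub/*_sketch names and the
simulated failure are hypothetical, not the real xenstored code.]

```c
#include <errno.h>

/* Hypothetical stand-in for domain_nbentry_dec(); with the accounting
 * rework it can fail (e.g. with ENOMEM) when buffering the accounting
 * data.  Returns 0 on success, an errno value on failure. */
static int domain_nbentry_dec_stub(int simulate_failure)
{
    return simulate_failure ? ENOMEM : 0;
}

/* Shape of the check being suggested: propagate the error instead of
 * assuming the decrement cannot fail. */
static int do_set_perms_sketch(int simulate_failure)
{
    int ret = domain_nbentry_dec_stub(simulate_failure);

    if (ret)
        return ret;     /* bail out before touching node->perms */

    /* ... update node->perms, domain_nbentry_inc(), write_node() ... */
    return 0;
}
```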

>   	node->perms = perms;
> -	if (domain_nbentry_inc(conn, get_node_owner(node))) {
> -		node->perms = old_perms;
> -		/*
> -		 * This should never fail because we had a reference on the
> -		 * domain before and Xenstored is single-threaded.
> -		 */
> -		domain_nbentry_inc(conn, get_node_owner(node));
> +	if (domain_nbentry_inc(conn, get_node_owner(node)))
>   		return ENOMEM;
> -	}
>   
> -	if (write_node(conn, node, false)) {
> -		int saved_errno = errno;
> -
> -		domain_nbentry_dec(conn, get_node_owner(node));
> -		node->perms = old_perms;
> -		/* No failure possible as above. */
> -		domain_nbentry_inc(conn, get_node_owner(node));
> -
> -		errno = saved_errno;
> +	if (write_node(conn, node, false))
>   		return errno;
> -	}
>   
>   	fire_watches(conn, ctx, name, node, false, &old_perms);
>   	send_ack(conn, XS_SET_PERMS);
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index 6355ad4f37..e669f57b80 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -25,9 +25,9 @@
>    * a per transaction array.
>    */
>   enum accitem {
> +	ACC_NODES,
>   	ACC_REQ_N,		/* Number of elements per request. */
> -	ACC_NODES = ACC_REQ_N,
> -	ACC_TR_N,		/* Number of elements per transaction. */
> +	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
>   	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
>   };
>   
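[Aside for readers of the quoted hunk: the overlapping enumerator values
encode nested accounting ranges. A standalone copy of the enum as it looks
after the patch, with explanatory comments added here, shows the resulting
values:]

```c
/* Copy of the enum after the quoted hunk; the range comments are
 * added for illustration.  Everything below ACC_REQ_N is accounted
 * per request; below ACC_TR_N per transaction; below ACC_N per
 * domain. */
enum accitem {
    ACC_NODES,              /* node count now lives in the per-request range */
    ACC_REQ_N,              /* number of elements per request (== 1) */
    ACC_TR_N = ACC_REQ_N,   /* per-transaction range adds no extra slots */
    ACC_N = ACC_TR_N,       /* per-domain range adds no extra slots */
};
```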

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 02 18:56:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 18:56:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528780.822345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvAy-00005r-Pb; Tue, 02 May 2023 18:56:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528780.822345; Tue, 02 May 2023 18:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvAy-00005k-MN; Tue, 02 May 2023 18:56:36 +0000
Received: by outflank-mailman (input) for mailman id 528780;
 Tue, 02 May 2023 18:56:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GGWT=AX=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ptvAw-0008UA-N0
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 18:56:34 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0ddd27c4-e91b-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 20:56:33 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-297-hz_-_nNRNHKLUa7zat10Kw-1; Tue, 02 May 2023 14:56:28 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id AF841811E7C;
 Tue,  2 May 2023 18:56:27 +0000 (UTC)
Received: from localhost (unknown [10.39.192.230])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5A29F492B03;
 Tue,  2 May 2023 18:56:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ddd27c4-e91b-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683053792;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JvqiB11pwfyb9mMlyyPXLE6NQZxd1luK1o/19K1fqnk=;
	b=Jl4gsC1KlQMRK0juJOoLPeJbdKQyXa0ar5TmsH51hUb7c8IzyFWx7oVpcQ7KsZ9fLU7e1y
	FFDf3I/nseYyoInDjLMk+EgL4IQGvKr/69RhcjVt0ePFUCGVC9phJK8ASvlRD0ivqxXZWi
	yTiPIzIo+9+GB2mMV5BEfCF3edfSXxs=
X-MC-Unique: hz_-_nNRNHKLUa7zat10Kw-1
Date: Tue, 2 May 2023 14:56:24 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 03/20] virtio-scsi: avoid race between unplug and
 transport event
Message-ID: <20230502185624.GA535070@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-4-stefanha@redhat.com>
 <ZFEqEkG4ktn9bBFN@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Qd9ljEEGWct7+HKe"
Content-Disposition: inline
In-Reply-To: <ZFEqEkG4ktn9bBFN@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10


--Qd9ljEEGWct7+HKe
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, May 02, 2023 at 05:19:46PM +0200, Kevin Wolf wrote:
> On 25.04.2023 at 19:26, Stefan Hajnoczi wrote:
> > Only report a transport reset event to the guest after the SCSIDevice
> > has been unrealized by qdev_simple_device_unplug_cb().
> >
> > qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
> > to false so that scsi_device_find/get() no longer see it.
> >
> > scsi_target_emulate_report_luns() also needs to be updated to filter out
> > SCSIDevices that are unrealized.
> >
> > These changes ensure that the guest driver does not see the SCSIDevice
> > that's being unplugged if it responds very quickly to the transport
> > reset event.
> >
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> > Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> > @@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
> >          blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
> >          virtio_scsi_release(s);
> >      }
> > +
> > +    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
> > +        virtio_scsi_acquire(s);
> > +        virtio_scsi_push_event(s, sd,
> > +                               VIRTIO_SCSI_T_TRANSPORT_RESET,
> > +                               VIRTIO_SCSI_EVT_RESET_REMOVED);
> > +        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
> > +        virtio_scsi_release(s);
> > +    }
> >  }
>
> s, sd and s->bus are all unrealized at this point, whereas before this
> patch they were still realized. I couldn't find any practical problem
> with it, but it made me nervous enough that I thought I should comment
> on it at least.
>
> Should we maybe have documentation on these functions that says that
> they accept unrealized objects as their parameters?

s is the VirtIOSCSI controller, not the SCSIDevice that is being
unplugged. The VirtIOSCSI controller is still realized.

s->bus is the VirtIOSCSI controller's bus; it is also still realized.

You are right that the SCSIDevice (sd) has been unrealized at this
point:
- sd->conf.blk is safe because qdev properties stay alive until the
  Object is deleted, but I'm not sure we should rely on that.
- virtio_scsi_push_event(.., sd, ...) is questionable because the LUN
  that's fetched from sd no longer belongs to the unplugged SCSIDevice.

How about I change the code to fetch sd->conf.blk and the LUN before
unplugging?
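[The "fetch before unplugging" idea can be illustrated with a toy model;
the types and names below are made up for illustration, not QEMU's:]

```c
/* Toy model of "fetch what you need before unplugging": read the
 * fields from the device while it is still valid, then use only the
 * cached copies afterwards.  Types and names are illustrative. */
struct toy_scsi_dev {
    int lun;
    int realized;
};

struct unplug_snapshot {
    int lun;
};

static struct unplug_snapshot snapshot_before_unplug(const struct toy_scsi_dev *d)
{
    struct unplug_snapshot s = { .lun = d->lun };   /* taken while realized */
    return s;
}

static void toy_unrealize(struct toy_scsi_dev *d)
{
    d->realized = 0;
    d->lun = -1;    /* pretend the field is meaningless once unrealized */
}
```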

Stefan

--Qd9ljEEGWct7+HKe
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRRXNgACgkQnKSrs4Gr
c8ipsAf+MYIOWS+2lk+xRt3nEDpVri7B1MaNSrlKDWSw2vK6J34jE0bGdHF3I5kZ
cS6XFEpcK0BRYes0zRpFZyksJFYS1033b2up4HGodKOJp34ahYy7Vg4yNrov6pzO
pZHJAEeEkK0FrHHJkho15qjoOykxt4bib6RzUFN+EdKo3KGQzk1dEyh8fzPEm40x
CwNN7D/FJAOOM3CpgsHLUGu6EvOBlfGkd8kS7It3qOD+/BeLFHKDiLBaU5w/IXtA
WmgFKnZiOEsMQIEIFgcg+m/YrlomPy0kRPFFSFNn7/5yCmhLCU5WNY/1sTiM3LEi
20aCYTqXQJt5E6c1jk1x21IACc9qRA==
=PP5I
-----END PGP SIGNATURE-----

--Qd9ljEEGWct7+HKe--



From xen-devel-bounces@lists.xenproject.org Tue May 02 18:57:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 18:57:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528784.822355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvBq-0000ki-2X; Tue, 02 May 2023 18:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528784.822355; Tue, 02 May 2023 18:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvBp-0000kb-VU; Tue, 02 May 2023 18:57:29 +0000
Received: by outflank-mailman (input) for mailman id 528784;
 Tue, 02 May 2023 18:57:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ptvBo-0000kJ-37
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 18:57:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ptvBn-0003j4-FA; Tue, 02 May 2023 18:57:27 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=[192.168.27.23]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ptvBn-0005E7-98; Tue, 02 May 2023 18:57:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=mUiVQ/ID7Mf9r5d/C6BFsfv6tI7q/JAE0UDgvSyOPsU=; b=WjTQ2BOSDP4ntjMiRL3gHONm9R
	n2Yfob7K1cXcYtRbvpEDozixeJDvHAGGbHov5ntM5ebI7W/OW3XGRG6fcohxFQhCgPxcRtglKnyjq
	bmlipOCZ2jeFBhhijBM86Ht368+KQlbzrDpFzekXjwSYlRVIAJISslizVXJkvhQ8lXDE=;
Message-ID: <4e7f5761-e21c-ee8b-6bb4-cd3753ce9b33@xen.org>
Date: Tue, 2 May 2023 19:57:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 06/13] tools/xenstore: add current connection to
 domain_memory_add() parameters
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-7-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230405070349.25293-7-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 05/04/2023 08:03, Juergen Gross wrote:
> In order to enable switching memory accounting to the generic array
> based accounting, add the current connection to the parameters of
> domain_memory_add().
> 
> This requires to add the connection to some other functions, too.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 02 19:09:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 19:09:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528788.822365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvNl-0002Yt-4L; Tue, 02 May 2023 19:09:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528788.822365; Tue, 02 May 2023 19:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvNl-0002Ym-1V; Tue, 02 May 2023 19:09:49 +0000
Received: by outflank-mailman (input) for mailman id 528788;
 Tue, 02 May 2023 19:09:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ptvNj-0002Yg-V4
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 19:09:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ptvNi-00046d-Un; Tue, 02 May 2023 19:09:46 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=[192.168.27.23]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ptvNi-0005pv-Oo; Tue, 02 May 2023 19:09:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=PQKNilhygwzDZCPuPcBWG5h80MuAqj51hGz16gI1WKI=; b=DMe6v1X6sejoZtew8uhtaR2Uyv
	sA3qEhyiP0+gDqjWtRj+EZZjr0NlZ7gf+k5Pow/h+tBNdpseczojI1WGOYQzUhY9pDYG74sfj1wVe
	4rsJ4pLvNYPtpvADJVpeW38QRE0+LO7qOmcKhNKxGUPSxUuI0A2zT/PigfRcgvd/qqfc=;
Message-ID: <25027287-441a-304c-f035-0d3da3572d3a@xen.org>
Date: Tue, 2 May 2023 20:09:45 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 07/13] tools/xenstore: use accounting data array for
 per-domain values
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-8-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230405070349.25293-8-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 05/04/2023 08:03, Juergen Gross wrote:
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index 5cfd730cf6..0d61bf4344 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -28,7 +28,10 @@ enum accitem {
>   	ACC_NODES,
>   	ACC_REQ_N,		/* Number of elements per request. */
>   	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
> -	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
> +	ACC_WATCH = ACC_TR_N,
> +	ACC_OUTST,
> +	ACC_MEM,
> +	ACC_N,			/* Number of elements per domain. */
>   };
>   
>   void handle_event(void);
> @@ -107,9 +110,8 @@ static inline void domain_memory_add_nochk(struct connection *conn,
>   void domain_watch_inc(struct connection *conn);
>   void domain_watch_dec(struct connection *conn);
>   int domain_watch(struct connection *conn);
> -void domain_outstanding_inc(struct connection *conn);
> -void domain_outstanding_dec(struct connection *conn);
> -void domain_outstanding_domid_dec(unsigned int domid);
> +void domain_outstanding_inc(struct connection *conn, unsigned int domid);

AFAICT, all the callers of domain_outstanding_inc() will pass 'conn->id', 
so it is not entirely clear what the benefit of adding the extra parameter is.

I am not against this change (and same for removing *domid_dec()). But I 
think this ought to be explained in the commit message as this feels 
unrelated.

> +void domain_outstanding_dec(struct connection *conn, unsigned int domid);
>   int domain_get_quota(const void *ctx, struct connection *conn,
>   		     unsigned int domid);
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 02 19:19:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 19:19:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528791.822375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvWm-000476-1W; Tue, 02 May 2023 19:19:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528791.822375; Tue, 02 May 2023 19:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvWl-00046z-Ud; Tue, 02 May 2023 19:19:07 +0000
Received: by outflank-mailman (input) for mailman id 528791;
 Tue, 02 May 2023 19:19:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ptvWk-00046t-Jq
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 19:19:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ptvWk-0004GX-CF
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 19:19:06 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.27.23]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ptvWk-00068z-7D
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 19:19:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:To:Subject:MIME-Version:Date:Message-ID;
	bh=EBnYYM8HG9bUiwwQLUhRg9xb2vnvYthtANZybEVBW+w=; b=OT1Pl1ozn/hkxMrVGH22QAUg6o
	2D2AwFJUyT/w7fWhH1JYZcd/rYPxIsRhaz4ep9xaq2jcXYKEK4wQm6XlTolkp/EZqiON11q+iVTix
	bggIHuaWi1LWGsOQLQFYsLm7RCDbg9nYoJRbEHa+BmmJWOIY/zMLsO6xWFtsu39fFMoQ=;
Message-ID: <5b61ce7e-639c-4f2d-6cb9-421679d30d9d@xen.org>
Date: Tue, 2 May 2023 20:19:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 10/13] tools/xenstore: switch transaction accounting to
 generic accounting
Content-Language: en-US
To: xen-devel@lists.xenproject.org
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-11-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230405070349.25293-11-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 05/04/2023 08:03, Juergen Gross wrote:
> As transaction accounting is active for unprivileged domains only, it
> can easily be added to the generic per-domain accounting.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/xenstore/xenstored_core.c        |  3 +--
>   tools/xenstore/xenstored_core.h        |  1 -
>   tools/xenstore/xenstored_domain.c      | 21 ++++++++++++++++++---
>   tools/xenstore/xenstored_domain.h      |  4 ++++
>   tools/xenstore/xenstored_transaction.c | 12 +++++-------
>   5 files changed, 28 insertions(+), 13 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 2d481fcad9..88c569b7d5 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2083,7 +2083,7 @@ static void consider_message(struct connection *conn)
>   	 * stalled. This will ignore new requests until Live-Update happened
>   	 * or it was aborted.
>   	 */
> -	if (lu_is_pending() && conn->transaction_started == 0 &&
> +	if (lu_is_pending() && conn->ta_start_time == 0 &&

NIT: I know there are some places in the code checking for 
conn->ta_start_time == 0. But "list_empty(...)" feels like a better 
replacement for "conn->transaction_started".

I agree this is going to be more expensive. But you are switching the 
transaction accounting to a generic infrastructure which is pretty heavy 
compared to a simple addition/subtraction. So I think a "list_empty()" 
would be OK here.
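
For illustration, the trade-off can be sketched with a minimal Linux-style intrusive list (a hypothetical reconstruction, not the actual xenstored list.h; the helper names mirror the real ones):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal doubly-linked list head, Linux list.h style. */
struct list_head {
    struct list_head *next, *prev;
};

static inline void init_list_head(struct list_head *h)
{
    h->next = h;
    h->prev = h;
}

static inline void list_add_tail(struct list_head *entry, struct list_head *h)
{
    entry->prev = h->prev;
    entry->next = h;
    h->prev->next = entry;
    h->prev = entry;
}

static inline void list_del(struct list_head *entry)
{
    entry->prev->next = entry->next;
    entry->next->prev = entry->prev;
}

/* An empty list's head points back at itself, so this is O(1). */
static inline bool list_empty(const struct list_head *h)
{
    return h->next == h;
}

/* Hypothetical helper: "is a transaction in progress?" expressed via
 * list_empty() on the per-connection transaction list, instead of a
 * separately maintained counter. */
static inline bool conn_has_transaction(const struct list_head *transaction_list)
{
    return !list_empty(transaction_list);
}
```

Since the check is a single pointer comparison, replacing the counter with list_empty() costs essentially nothing, which is the point being made above.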

>   	    conn->in->hdr.msg.type == XS_TRANSACTION_START) {
>   		trace("Delaying transaction start for connection %p req_id %u\n",
>   		      conn, conn->in->hdr.msg.req_id);
> @@ -2190,7 +2190,6 @@ struct connection *new_connection(const struct interface_funcs *funcs)
>   	new->funcs = funcs;
>   	new->is_ignored = false;
>   	new->is_stalled = false;
> -	new->transaction_started = 0;
>   	INIT_LIST_HEAD(&new->out_list);
>   	INIT_LIST_HEAD(&new->acc_list);
>   	INIT_LIST_HEAD(&new->ref_list);
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 5a11dc1231..3564d85d7d 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -151,7 +151,6 @@ struct connection
>   	/* List of in-progress transactions. */
>   	struct list_head transaction_list;
>   	uint32_t next_transaction_id;
> -	unsigned int transaction_started;
>   	time_t ta_start_time;
>   
>   	/* List of delayed requests. */
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 1caa60bb14..40bcc1dbfa 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -419,12 +419,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>   {
>   	struct domain *d = find_domain_struct(domid);
>   	char *resp;
> -	int ta;
>   
>   	if (!d)
>   		return ENOENT;
>   
> -	ta = d->conn ? d->conn->transaction_started : 0;
>   	resp = talloc_asprintf(ctx, "Domain %u:\n", domid);
>   	if (!resp)
>   		return ENOMEM;
> @@ -435,7 +433,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>   
>   	ent(nodes, d->acc[ACC_NODES]);
>   	ent(watches, d->acc[ACC_WATCH]);
> -	ent(transactions, ta);
> +	ent(transactions, d->acc[ACC_TRANS]);
>   	ent(outstanding, d->acc[ACC_OUTST]);
>   	ent(memory, d->acc[ACC_MEM]);
>   
> @@ -1297,6 +1295,23 @@ void domain_outstanding_dec(struct connection *conn, unsigned int domid)
>   	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
>   }
>   
> +void domain_transaction_inc(struct connection *conn)
> +{
> +	domain_acc_add(conn, conn->id, ACC_TRANS, 1, true);
> +}
> +
> +void domain_transaction_dec(struct connection *conn)
> +{
> +	domain_acc_add(conn, conn->id, ACC_TRANS, -1, true);
> +}
> +
> +unsigned int domain_transaction_get(struct connection *conn)
> +{
> +	return (domain_is_unprivileged(conn))
> +		? domain_acc_add(conn, conn->id, ACC_TRANS, 0, true)
> +		: 0;
> +}
> +
>   static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
>   static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
>   static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index 0d61bf4344..abc766f343 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -31,6 +31,7 @@ enum accitem {
>   	ACC_WATCH = ACC_TR_N,
>   	ACC_OUTST,
>   	ACC_MEM,
> +	ACC_TRANS,
>   	ACC_N,			/* Number of elements per domain. */
>   };
>   
> @@ -112,6 +113,9 @@ void domain_watch_dec(struct connection *conn);
>   int domain_watch(struct connection *conn);
>   void domain_outstanding_inc(struct connection *conn, unsigned int domid);
>   void domain_outstanding_dec(struct connection *conn, unsigned int domid);
> +void domain_transaction_inc(struct connection *conn);
> +void domain_transaction_dec(struct connection *conn);
> +unsigned int domain_transaction_get(struct connection *conn);
>   int domain_get_quota(const void *ctx, struct connection *conn,
>   		     unsigned int domid);
>   
> diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
> index 11c8bcec84..1c14b8579a 100644
> --- a/tools/xenstore/xenstored_transaction.c
> +++ b/tools/xenstore/xenstored_transaction.c
> @@ -479,8 +479,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
>   	if (conn->transaction)
>   		return EBUSY;
>   
> -	if (domain_is_unprivileged(conn) &&
> -	    conn->transaction_started > quota_max_transaction)
> +	if (domain_transaction_get(conn) > quota_max_transaction)
>   		return ENOSPC;
>   
>   	/* Attach transaction to ctx for autofree until it's complete */
> @@ -505,9 +504,9 @@ int do_transaction_start(const void *ctx, struct connection *conn,
>   	list_add_tail(&trans->list, &conn->transaction_list);
>   	talloc_steal(conn, trans);
>   	talloc_set_destructor(trans, destroy_transaction);
> -	if (!conn->transaction_started)
> +	if (!conn->ta_start_time)

I think it would make more sense to move this if just before the 
list_add_tail() and use list_empty() for the check. This would make 
the code more consistent...

>   		conn->ta_start_time = time(NULL);
> -	conn->transaction_started++;
> +	domain_transaction_inc(conn);
>   	wrl_ntransactions++;
>   
>   	snprintf(id_str, sizeof(id_str), "%u", trans->id);
> @@ -533,8 +532,8 @@ int do_transaction_end(const void *ctx, struct connection *conn,
>   
>   	conn->transaction = NULL;
>   	list_del(&trans->list);
> -	conn->transaction_started--;
> -	if (!conn->transaction_started)
> +	domain_transaction_dec(conn);
> +	if (list_empty(&conn->transaction_list))
>   		conn->ta_start_time = 0;

... with this check, and make it easier to spot that they are related.

>   
>   	chk_quota = trans->node_created && domain_is_unprivileged(conn);
> @@ -588,7 +587,6 @@ void conn_delete_all_transactions(struct connection *conn)
>   
>   	assert(conn->transaction == NULL);
>   
> -	conn->transaction_started = 0;
>   	conn->ta_start_time = 0;
>   }
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 02 19:41:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 19:41:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528794.822385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvsB-0007bx-Rc; Tue, 02 May 2023 19:41:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528794.822385; Tue, 02 May 2023 19:41:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptvsB-0007bq-O9; Tue, 02 May 2023 19:41:15 +0000
Received: by outflank-mailman (input) for mailman id 528794;
 Tue, 02 May 2023 19:41:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GGWT=AX=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ptvsA-0007bk-Re
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 19:41:14 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4a9a5fbc-e921-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 21:41:12 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (66.187.233.88 [66.187.233.88]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-335-YMLPGPKMO4SxOjhl3dEpxA-1; Tue, 02 May 2023 15:41:01 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E962D85A588;
 Tue,  2 May 2023 19:40:54 +0000 (UTC)
Received: from localhost (unknown [10.39.192.230])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E85EC4020960;
 Tue,  2 May 2023 19:40:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a9a5fbc-e921-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683056471;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6xENbjVvp+Pts3X84Cy2DsWSBfSYloGA+NzmbgbplRU=;
	b=fOt5e4RU5TCQFUGbDzQYsMURmBV4TTppPLwFTDXuiOX/1HAeWzGUfAZBXzXiWnCsNNQ+6K
	ZzSkLnn2K9rjM23rxRBj5B/SGsNXfh3w0WRwxeaPssy8iGjfAXZmMM26l+zMdFxLzhUNlf
	R+HFjIHQCOB0Hw5mFMP//N1FNQGQAhg=
X-MC-Unique: YMLPGPKMO4SxOjhl3dEpxA-1
Date: Tue, 2 May 2023 15:40:45 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 06/20] block/export: wait for vhost-user-blk requests
 when draining
Message-ID: <20230502194045.GC535070@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-7-stefanha@redhat.com>
 <ZFEve2GfI0TqsItA@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="XRuGd9g1z5oS2ell"
Content-Disposition: inline
In-Reply-To: <ZFEve2GfI0TqsItA@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2


--XRuGd9g1z5oS2ell
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, May 02, 2023 at 05:42:51PM +0200, Kevin Wolf wrote:
> Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > Each vhost-user-blk request runs in a coroutine. When the BlockBackend
> > enters a drained section we need to enter a quiescent state. Currently
> > any in-flight requests race with bdrv_drained_begin() because it is
> > unaware of vhost-user-blk requests.
> >
> > When blk_co_preadv/pwritev()/etc returns it wakes the
> > bdrv_drained_begin() thread but vhost-user-blk request processing has
> > not yet finished. The request coroutine continues executing while the
> > main loop thread thinks it is in a drained section.
> >
> > One example where this is unsafe is for blk_set_aio_context() where
> > bdrv_drained_begin() is called before .aio_context_detached() and
> > .aio_context_attach(). If request coroutines are still running after
> > bdrv_drained_begin(), then the AioContext could change underneath them
> > and they race with new requests processed in the new AioContext. This
> > could lead to virtqueue corruption, for example.
> >
> > (This example is theoretical, I came across this while reading the
> > code and have not tried to reproduce it.)
> >
> > It's easy to make bdrv_drained_begin() wait for in-flight requests: add
> > a .drained_poll() callback that checks the VuServer's in-flight counter.
> > VuServer just needs an API that returns true when there are requests in
> > flight. The in-flight counter needs to be atomic.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  include/qemu/vhost-user-server.h     |  4 +++-
> >  block/export/vhost-user-blk-server.c | 16 ++++++++++++++++
> >  util/vhost-user-server.c             | 14 ++++++++++----
> >  3 files changed, 29 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
> > index bc0ac9ddb6..b1c1cda886 100644
> > --- a/include/qemu/vhost-user-server.h
> > +++ b/include/qemu/vhost-user-server.h
> > @@ -40,8 +40,9 @@ typedef struct {
> >      int max_queues;
> >      const VuDevIface *vu_iface;
> >
> > +    unsigned int in_flight; /* atomic */
> > +
> >      /* Protected by ctx lock */
> > -    unsigned int in_flight;
> >      bool wait_idle;
> >      VuDev vu_dev;
> >      QIOChannel *ioc; /* The I/O channel with the client */
> > @@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
> >
> >  void vhost_user_server_inc_in_flight(VuServer *server);
> >  void vhost_user_server_dec_in_flight(VuServer *server);
> > +bool vhost_user_server_has_in_flight(VuServer *server);
> >
> >  void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
> >  void vhost_user_server_detach_aio_context(VuServer *server);
> > diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
> > index 841acb36e3..092b86aae4 100644
> > --- a/block/export/vhost-user-blk-server.c
> > +++ b/block/export/vhost-user-blk-server.c
> > @@ -272,7 +272,20 @@ static void vu_blk_exp_resize(void *opaque)
> >      vu_config_change_msg(&vexp->vu_server.vu_dev);
> >  }
> >
> > +/*
> > + * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
> > + *
> > + * Called with vexp->export.ctx acquired.
> > + */
> > +static bool vu_blk_drained_poll(void *opaque)
> > +{
> > +    VuBlkExport *vexp = opaque;
> > +
> > +    return vhost_user_server_has_in_flight(&vexp->vu_server);
> > +}
> > +
> >  static const BlockDevOps vu_blk_dev_ops = {
> > +    .drained_poll  = vu_blk_drained_poll,
> >      .resize_cb = vu_blk_exp_resize,
> >  };
>
> You're adding a new function pointer to an existing BlockDevOps...
>
> > @@ -314,6 +327,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
> >      vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
> >                               logical_block_size, num_queues);
> >
> > +    blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
> >      blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
> >                                   vexp);
> >
> >      blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
>
> ..but still add a second blk_set_dev_ops(). Maybe a bad merge conflict
> resolution with commit ca858a5fe94?

Thanks, I probably didn't have ca858a5fe94 in my tree when writing this
code.

> > @@ -323,6 +337,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
> >                                   num_queues, &vu_blk_iface, errp)) {
> >          blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
> >                                          blk_aio_detach, vexp);
> > +        blk_set_dev_ops(exp->blk, NULL, NULL);
> >          g_free(vexp->handler.serial);
> >          return -EADDRNOTAVAIL;
> >      }
> > @@ -336,6 +351,7 @@ static void vu_blk_exp_delete(BlockExport *exp)
> >
> >      blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
> >                                      vexp);
> > +    blk_set_dev_ops(exp->blk, NULL, NULL);
> >      g_free(vexp->handler.serial);
> >  }
>
> These two hunks are then probably already fixes for ca858a5fe94 and
> should be a separate patch if so.

Sure, I can split them out.

hw/ doesn't need to call blk_set_dev_ops(blk, NULL, NULL) because
hw/core/qdev-properties-system.c:release_drive() -> blk_detach_dev()
does it automatically, but block/export does. It's easy to overlook and
that's probably why ca858a5fe94 didn't include it.

> > diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
> > index 1622f8cfb3..2e6b640050 100644
> > --- a/util/vhost-user-server.c
> > +++ b/util/vhost-user-server.c
> > @@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
> >  void vhost_user_server_inc_in_flight(VuServer *server)
> >  {
> >      assert(!server->wait_idle);
> > -    server->in_flight++;
> > +    qatomic_inc(&server->in_flight);
> >  }
> >
> >  void vhost_user_server_dec_in_flight(VuServer *server)
> >  {
> > -    server->in_flight--;
> > -    if (server->wait_idle && !server->in_flight) {
> > -        aio_co_wake(server->co_trip);
> > +    if (qatomic_fetch_dec(&server->in_flight) == 1) {
> > +        if (server->wait_idle) {
> > +            aio_co_wake(server->co_trip);
> > +        }
> >      }
> >  }
> >
> > +bool vhost_user_server_has_in_flight(VuServer *server)
> > +{
> > +    return qatomic_load_acquire(&server->in_flight) > 0;
> > +}
> > +
>
> Any reason why you left the server->in_flight accesses in
> vu_client_trip() non-atomic?

I don't remember if it was a mistake or if there is a reason why it's
safe. I'll replace those accesses with calls to
vhost_user_server_has_in_flight().
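
For reference, the atomic counter pattern under discussion can be sketched with C11 stdatomic as a simplified stand-in for QEMU's qatomic_* helpers (this is not the real VuServer code; names and types are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Simplified stand-in for VuServer's in-flight request tracking. */
typedef struct {
    atomic_uint in_flight;
} Server;

static void server_inc_in_flight(Server *s)
{
    atomic_fetch_add_explicit(&s->in_flight, 1, memory_order_relaxed);
}

/* Returns true when this call retired the last in-flight request,
 * i.e. when the pre-decrement value was 1. This mirrors the
 * qatomic_fetch_dec(...) == 1 check in the patch. */
static bool server_dec_in_flight(Server *s)
{
    return atomic_fetch_sub_explicit(&s->in_flight, 1,
                                     memory_order_acq_rel) == 1;
}

/* Load-acquire so the caller observes the requests' side effects
 * once this returns false, matching the .drained_poll() usage. */
static bool server_has_in_flight(Server *s)
{
    return atomic_load_explicit(&s->in_flight, memory_order_acquire) > 0;
}
```

The point of making every access atomic, including the ones in vu_client_trip(), is that a drained_poll callback may read the counter from the main loop thread while request coroutines update it from the export's AioContext.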

Stefan

--XRuGd9g1z5oS2ell
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRRZz0ACgkQnKSrs4Gr
c8iEjwgAov+Ozs58HYaiYo0b4VnPoOnAM2QqHXYzmVRN4+5mqvFWNVKufgQMDbSG
1o1dgDNOCRU1tpOCUbNRYmvJxvYY6QA9Ho7AdLGPZc6jq2CR4LHarr5MP1Py5ktT
dGAN6GFH3qzsf93j4wEa0HnWax5RvOdFEPxkK2JKgXRA+AesbOLRizK1q2P5p3TH
6I0SfPnLhlTeosVaQ4mRLkZuXNt5/bTeh54lW/NSLP6IpbBoB082Wqr1JqCwjdVO
XxBNAMrMB/0oImJNHh9HSqpB2oHlL2FOa/yu++wNeZ1uzHdt7oRDpgwx2mQpMmiT
/HhN2vFe1XsM621h8uV5aa4HZH7Hiw==
=UcN5
-----END PGP SIGNATURE-----

--XRuGd9g1z5oS2ell--



From xen-devel-bounces@lists.xenproject.org Tue May 02 19:54:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 19:54:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528799.822395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptw59-0000qF-2f; Tue, 02 May 2023 19:54:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528799.822395; Tue, 02 May 2023 19:54:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptw58-0000q8-W9; Tue, 02 May 2023 19:54:38 +0000
Received: by outflank-mailman (input) for mailman id 528799;
 Tue, 02 May 2023 19:54:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=foPY=AX=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1ptw57-0000q2-Lw
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 19:54:37 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2062c.outbound.protection.outlook.com
 [2a01:111:f400:7d00::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28f94b9b-e923-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 21:54:34 +0200 (CEST)
Received: from DB6P18901CA0014.EURP189.PROD.OUTLOOK.COM (2603:10a6:4:16::24)
 by PAXPR08MB7365.eurprd08.prod.outlook.com (2603:10a6:102:225::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 19:54:30 +0000
Received: from DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:16:cafe::ee) by DB6P18901CA0014.outlook.office365.com
 (2603:10a6:4:16::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 19:54:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT019.mail.protection.outlook.com (100.127.142.129) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.20 via Frontend Transport; Tue, 2 May 2023 19:54:30 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Tue, 02 May 2023 19:54:30 +0000
Received: from b54e168d3d85.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E57027C9-C2EC-48B5-AFD4-76F064C796A6.1; 
 Tue, 02 May 2023 19:54:23 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b54e168d3d85.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 02 May 2023 19:54:23 +0000
Received: from AM0PR08MB3745.eurprd08.prod.outlook.com (2603:10a6:208:ff::27)
 by AS8PR08MB7790.eurprd08.prod.outlook.com (2603:10a6:20b:527::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 19:54:20 +0000
Received: from AM0PR08MB3745.eurprd08.prod.outlook.com
 ([fe80::e6af:7fc5:bb80:9406]) by AM0PR08MB3745.eurprd08.prod.outlook.com
 ([fe80::e6af:7fc5:bb80:9406%7]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 19:54:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28f94b9b-e923-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rIauXjVPwCZaMr0emwlQmLTVrakJ4XH4Bg+H0hwJ6Ps=;
 b=Dn+OGWmWdLsk3GfgTNzj4BxB6XYEAtHamV31fGGOKpf34aTy9eCCl/9jv29XAwE0M3KYpn8O0m31qIjN63kAcaLweCGllEYLESWPgghAkgWvtiqofn+FYEdYgesmNIG+KuGPir0ki2pVF+i+0l1u0K7+RRBBhpz3Zweh6PwRMlQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: e32b693e000c3f57
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FpthWXKaINUH8Ffau9FkQTjG/1yDRnGL3JXR6Pjfxo6/LU5+NlNRPOp16Xj32KG2DArUmXeeVZ1/J116fmtmngh7hDunEbvMVHIGeCC0BFt/GXB5Ml/AVWwPu+zn/EDIpU3NFp8VQg59Z/lu6DlMV7V38XAI5VjKLq4mldeESrwJZBmXpHuqfT6bbDQgGYxdStvULFR/NZoEPVaIAJHixe40c2Gq9mxwHCwCwhFHOGATDjm7weTLZ4ZcP0x/AII+DtPbxFefyQVxwGBXtqmoixGxAE7mtJ4wS5RVuhDR99XuNumLIjGwA2i7QhXx08id+bqOvHxw1GmyRgARGEEAeA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rIauXjVPwCZaMr0emwlQmLTVrakJ4XH4Bg+H0hwJ6Ps=;
 b=b3wlaNk70l3h2uMgePNDcwa21sKbJarrPVzhT/TdQpxnK1zhVudO8FIDmp+i37fwixEE7/qfUH+ISMv5PUf5qrhiI6kswttR1aCobhlJwjWoC6+bB7epBLz+CNA/3u1zw626YrfIssy+Ed9hqOWt/YzKoplhObqbBQguXYnHK6OnZacJXfnhemlqbMt/un+X+89J7FK03VJ9yVtxLxcnFSXmAJtSp8x6AQEliLGWrBl88lOPLh6Wg4gLZlbItk+haXSmxyB4xHqinR1wF+ije5TH+U6djs2G5RvfTYabuJ8nZfnNVvKdBSIUyY5W+dynmG7w4iyhpkKaMrwiejqm5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rIauXjVPwCZaMr0emwlQmLTVrakJ4XH4Bg+H0hwJ6Ps=;
 b=Dn+OGWmWdLsk3GfgTNzj4BxB6XYEAtHamV31fGGOKpf34aTy9eCCl/9jv29XAwE0M3KYpn8O0m31qIjN63kAcaLweCGllEYLESWPgghAkgWvtiqofn+FYEdYgesmNIG+KuGPir0ki2pVF+i+0l1u0K7+RRBBhpz3Zweh6PwRMlQ=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v6 10/12] xen/tools: add sve parameter in XL configuration
Thread-Topic: [PATCH v6 10/12] xen/tools: add sve parameter in XL
 configuration
Thread-Index: AQHZdnKI1/ryq6h2kEuVT2yTgC4YMq9HQ+EAgAAu74A=
Date: Tue, 2 May 2023 19:54:19 +0000
Message-ID: <8C3DC6ED-83D8-4DD0-9C99-B34449304373@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-11-luca.fancellu@arm.com>
 <996db21b-e963-4259-884d-2131c548ca1e@perard>
In-Reply-To: <996db21b-e963-4259-884d-2131c548ca1e@perard>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM0PR08MB3745:EE_|AS8PR08MB7790:EE_|DBAEUR03FT019:EE_|PAXPR08MB7365:EE_
X-MS-Office365-Filtering-Correlation-Id: ed8df2ad-d3f4-4f97-8c36-08db4b470b2b
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 nl7jwy/rHwCPApwMIp99tPwC058MrOC+sbuOUGWwMy0B4loMknKMU7rIELwR0U7EexJSfJimqvJN9M9BtR697nk60j/wIHfnE1XIplUMseFxr306WXBD2p6nQhyJOTIzIijiF122dxnzcKMBGVJv0gFZC1B2+HQ5BowBy40GBSl1Q30g+4p3XoROa239QlgdKF7QwcIcZV0H7QmQAIgMC2z2eRSXo9pGO5qg7Nu2ovn/e0oZN0ySxDUuMAb2cpy5t3jeLvgMRyh6L+eW4OXJA4QGxt3lmJ7RN/2HUX6eQrQfnd1XRfrJQcCsHsty/eF3UX+iTUXRSipXkYtlYLqL2BGcHFF58HfFIRghv8suVmdEyY62F9pMUEi2tBshNCGzkxDMs9R5tcAdDdEx2CaVn61et1UtAMw7JjkjQu55YJRMV2e1xhLhm1aIXurVUHQcKNXs3WFMvOdR0Uf7no5u6husfiMlLYaBb/Bo9oYIaC13bAX9ah2nuzR4/9p0ZM4+2s4vfgSAANd64DwyTe0Y2ujtenTuqg4mBvDIiCw6bBfHeZQXD6OSDJFc5Xhc6pqv4v+ZRtZJ3VcJBrZ+cDtVHtHoNKnB9oTWNadrc7HAogB7ZdZntg+5nruBE0h94UDqX1AbfCnxtKdXo6h4Zt+h6A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3745.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(366004)(39860400002)(346002)(396003)(376002)(451199021)(53546011)(26005)(6512007)(6506007)(4326008)(76116006)(64756008)(66446008)(66556008)(6916009)(66476007)(66946007)(2616005)(33656002)(478600001)(40140700001)(86362001)(91956017)(83380400001)(71200400001)(316002)(38070700005)(122000001)(38100700002)(41300700001)(2906002)(6486002)(36756003)(186003)(54906003)(5660300002)(8936002)(8676002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <AE75BDE125C23B43B704AF9D0A60A6BE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7790
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	017bfbe8-4da0-47fe-6519-08db4b4704c0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+gV8+WY3y81bNs8Edn910en4Q0IWtOLF4ggYKDGMXTxrTWrRgTUra+0ePZDtO2Qz+MBTXf2uWQEA0amTLNEfPknFXNN78FEeFzvCeV3tCOiXu9ex75UKe877xNnD0B6uxWEVSg3RoxXGjgUHmNT/pq6iRwhOBQK9GkdzL6Cq3oVvRxr2U53t5VACqNXTSAeZEPY8dipVcHr6H8Ta8oH1kSInH5Rn40/GoHSyr4dqX4W/BBpoFK3JAe7OvHWmzisHP2vh2lmUyqLs1Z9YbD4BKQhjq+YOhHoz/JFMKRyKylRsQv9xDvECtM3DvnRhBxlOM3ZHOkwBVLl8Rq7HjKWvo8LgaOYn397VpdbMJew4a6i9A7nnrxMSKMjx/QVZbO4fv68DMLIAmc+CoKN7QtgVc2Ezen8o0ksGmUUsQre/ny/sxHdTn/DNrmzWVVQCezMySDV355DuRCycU/ZfRnrfYiYjcLOOOMCWRgMFdx7l1SM56liV38U6P6PM18oh8n1+yv0sddGYVdBTiOo5tfDblRa6dIyUht70dYih8i5eOUeoz0yF7VZwm3EeykFObTEJr5TmM8N/3VLRlfbFvufRpco6qHsnqysVBTdlFRqXruSn+pg89cX3eitNlxDwHlH2HihSw79rJcEYy/a+d3icNnnAcW6TRPKHMSQaFSN3x+P+GC+4iGtRxpu0MBGJHfgz5BWzh1pgr8HHFKIed/wscv/LyBbZjH7DXEYDmVrduPg=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(39860400002)(136003)(396003)(451199021)(40470700004)(36840700001)(46966006)(6862004)(86362001)(40460700003)(5660300002)(8676002)(70206006)(8936002)(41300700001)(81166007)(356005)(82740400003)(70586007)(4326008)(316002)(40480700001)(2906002)(6486002)(34020700004)(36860700001)(36756003)(33656002)(2616005)(107886003)(53546011)(186003)(26005)(40140700001)(6506007)(82310400005)(6512007)(83380400001)(54906003)(47076005)(478600001)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 19:54:30.5221
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ed8df2ad-d3f4-4f97-8c36-08db4b470b2b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB7365

Hi Anthony,

Thank you for your review.

> On 2 May 2023, at 18:06, Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> On Mon, Apr 24, 2023 at 07:02:46AM +0100, Luca Fancellu wrote:
>> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
>> index ddc7b2a15975..1e69dac2c4fa 100644
>> --- a/tools/libs/light/libxl_arm.c
>> +++ b/tools/libs/light/libxl_arm.c
>> @@ -211,6 +213,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>>         return ERROR_FAIL;
>>     }
>>
>> +    /* Parameter is sanitised in libxl__arch_domain_build_info_setdefault */
>> +    if (d_config->b_info.arch_arm.sve_vl) {
>> +        /* Vector length is divided by 128 in struct xen_domctl_createdomain */
>> +        config->arch.sve_vl = d_config->b_info.arch_arm.sve_vl / 128U;
>> +    }
>> +
>>     return 0;
>> }
>>
>> @@ -1681,6 +1689,26 @@ int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
>>     /* ACPI is disabled by default */
>>     libxl_defbool_setdefault(&b_info->acpi, false);
>>
>> +    /* Sanitise SVE parameter */
>> +    if (b_info->arch_arm.sve_vl) {
>> +        unsigned int max_sve_vl =
>> +            arch_capabilities_arm_sve(physinfo->arch_capabilities);
>> +
>> +        if (!max_sve_vl) {
>> +            LOG(ERROR, "SVE is unsupported on this machine.");
>> +            return ERROR_FAIL;
>> +        }
>> +
>> +        if (LIBXL_SVE_TYPE_HW == b_info->arch_arm.sve_vl) {
>> +            b_info->arch_arm.sve_vl = max_sve_vl;
>> +        } else if (b_info->arch_arm.sve_vl > max_sve_vl) {
>> +            LOG(ERROR,
>> +                "Invalid sve value: %d. Platform supports up to %u bits",
>> +                b_info->arch_arm.sve_vl, max_sve_vl);
>> +            return ERROR_FAIL;
>> +        }
>
> You still need to check that sve_vl is one of the value from the enum,
> or that the value is divisible by 128.

I have probably missed something: I thought that specifying the input the
way below gave me for free that the value is either 0 or divisible by 128.
Is that not the case? Who can write a value to b_info->arch_arm.sve_vl
other than those in the enum we specified in the .idl?
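For reference, the kind of defensive check Anthony is asking for can be
sketched stand-alone; this is a hypothetical helper (not the actual libxl
code), assuming the 128-bit SVE granule and a platform maximum reported by
the hypervisor:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the validity check under discussion: accept 0
 * (SVE disabled) or a multiple of 128 bits, up to the platform maximum. */
static bool sve_vl_is_valid(unsigned int vl, unsigned int max_vl)
{
    if (vl == 0)
        return true;          /* SVE disabled */
    if (vl % 128u != 0)
        return false;         /* vector lengths come in 128-bit granules */
    return vl <= max_vl;      /* cannot exceed what the platform supports */
}
```

The enum approach in the .idl makes such a check redundant for values that
go through the generated type, which is the point being debated here.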

>
>> +    }
>> +
>>     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
>>         return 0;
>>
>> diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
>> index fd31dacf7d5a..9e48bb772646 100644
>> --- a/tools/libs/light/libxl_types.idl
>> +++ b/tools/libs/light/libxl_types.idl
>> @@ -523,6 +523,27 @@ libxl_tee_type = Enumeration("tee_type", [
>>     (1, "optee")
>>     ], init_val = "LIBXL_TEE_TYPE_NONE")
>>
>> +libxl_sve_type = Enumeration("sve_type", [
>> +    (-1, "hw"),
>> +    (0, "disabled"),
>> +    (128, "128"),
>> +    (256, "256"),
>> +    (384, "384"),
>> +    (512, "512"),
>> +    (640, "640"),
>> +    (768, "768"),
>> +    (896, "896"),
>> +    (1024, "1024"),
>> +    (1152, "1152"),
>> +    (1280, "1280"),
>> +    (1408, "1408"),
>> +    (1536, "1536"),
>> +    (1664, "1664"),
>> +    (1792, "1792"),
>> +    (1920, "1920"),
>> +    (2048, "2048")
>> +    ], init_val = "LIBXL_SVE_TYPE_DISABLED")
>
> I'm not sure if I like that or not. Is there a reason to stop at 2048?
> Is it possible that there will be more values available in the future?

Uhm... possibly there might be some extension; I thought that when that
happens, the only thing to do would be to add another entry. I also used
this approach to get the %128 and maximum-2048 checks for free.

>
> Also this means that users of libxl (like libvirt) would be expected to
> use e.g. LIBXL_SVE_TYPE_1024, or use libxl_sve_type_from_string().
>
> Also, it feels weird to me to mostly use the numerical value of the enum
> rather than the enum itself.
>
> Anyway, hopefully that enum will work fine.
>
>> libxl_rdm_reserve = Struct("rdm_reserve", [
>>     ("strategy",    libxl_rdm_reserve_strategy),
>>     ("policy",      libxl_rdm_reserve_policy),
>
> Thanks,
>
> --
> Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 02 20:03:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 20:03:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528802.822405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptwD9-0002UO-T7; Tue, 02 May 2023 20:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528802.822405; Tue, 02 May 2023 20:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptwD9-0002UH-Pt; Tue, 02 May 2023 20:02:55 +0000
Received: by outflank-mailman (input) for mailman id 528802;
 Tue, 02 May 2023 20:02:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GGWT=AX=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ptwD8-0002UB-MN
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 20:02:54 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 506a762f-e924-11ed-8611-37d641c3527e;
 Tue, 02 May 2023 22:02:50 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-551-8i9sxkcoOrucqmHkxzDdSA-1; Tue, 02 May 2023 16:02:47 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C708A101A531;
 Tue,  2 May 2023 20:02:46 +0000 (UTC)
Received: from localhost (unknown [10.39.192.230])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B65FF63F5B;
 Tue,  2 May 2023 20:02:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 506a762f-e924-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683057769;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7lfhqttiODmbNxuxzmQIGPLL6DVlwheJNNJYYd5PHZI=;
	b=AGFtq8/EGGpm3JUeI6/CRMExUZ6AnZ5zcqDaOdxHzwxRxLC1k41BXmbB7V/2ZQznGsWuQ2
	6UXIAgqnPhyJc54mAl82el0Q/+FQQv8R22QPGyAhZPHmsL7on8f/869spz1FFVqXnORij1
	f1cRVtmlXDbCO5FcsKSFWKc88FvPOTE=
X-MC-Unique: 8i9sxkcoOrucqmHkxzDdSA-1
Date: Tue, 2 May 2023 16:02:43 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 04/20] virtio-scsi: stop using aio_disable_external()
 during unplug
Message-ID: <20230502200243.GD535070@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-5-stefanha@redhat.com>
 <ZEvWv8dF78Jpb6CQ@redhat.com>
 <20230501150934.GA14869@fedora>
 <ZFEN+KY8JViTDtv/@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="KLJLJshWTSqDDVpb"
Content-Disposition: inline
In-Reply-To: <ZFEN+KY8JViTDtv/@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5


--KLJLJshWTSqDDVpb
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, May 02, 2023 at 03:19:52PM +0200, Kevin Wolf wrote:
> Am 01.05.2023 um 17:09 hat Stefan Hajnoczi geschrieben:
> > On Fri, Apr 28, 2023 at 04:22:55PM +0200, Kevin Wolf wrote:
> > > Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > > > This patch is part of an effort to remove the aio_disable_external()
> > > > API because it does not fit in a multi-queue block layer world where
> > > > many AioContexts may be submitting requests to the same disk.
> > > >
> > > > The SCSI emulation code is already in good shape to stop using
> > > > aio_disable_external(). It was only used by commit 9c5aad84da1c
> > > > ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
> > > > disk") to ensure that virtio_scsi_hotunplug() works while the guest
> > > > driver is submitting I/O.
> > > >
> > > > Ensure virtio_scsi_hotunplug() is safe as follows:
> > > >
> > > > 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
> > > >    device_set_realized() calls qatomic_set(&dev->realized, false) so
> > > >    that future scsi_device_get() calls return NULL because they exclude
> > > >    SCSIDevices with realized=false.
> > > >
> > > >    That means virtio-scsi will reject new I/O requests to this
> > > >    SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
> > > >    virtio_scsi_hotunplug() is still executing. We are protected against
> > > >    new requests!
> > > >
> > > > 2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
> > > >    that in-flight requests are cancelled synchronously. This ensures
> > > >    that no in-flight requests remain once qdev_simple_device_unplug_cb()
> > > >    returns.
> > > >
> > > > Thanks to these two conditions we don't need aio_disable_external()
> > > > anymore.
> > > >
> > > > Cc: Zhengui Li <lizhengui@huawei.com>
> > > > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > >
> > > qemu-iotests 040 starts failing for me after this patch, with what looks
> > > like a use-after-free error of some kind.
> > >
> > > (gdb) bt
> > > #0  0x000055b6e3e1f31c in job_type (job=0xe3e3e3e3e3e3e3e3) at ../job.c:238
> > > #1  0x000055b6e3e1cee5 in is_block_job (job=0xe3e3e3e3e3e3e3e3) at ../blockjob.c:41
> > > #2  0x000055b6e3e1ce7d in block_job_next_locked (bjob=0x55b6e72b7570) at ../blockjob.c:54
> > > #3  0x000055b6e3df6370 in blockdev_mark_auto_del (blk=0x55b6e74af0a0) at ../blockdev.c:157
> > > #4  0x000055b6e393e23b in scsi_qdev_unrealize (qdev=0x55b6e7c04d40) at ../hw/scsi/scsi-bus.c:303
> > > #5  0x000055b6e3db0d0e in device_set_realized (obj=0x55b6e7c04d40, value=false, errp=0x55b6e497c918 <error_abort>) at ../hw/core/qdev.c:599
> > > #6  0x000055b6e3dba36e in property_set_bool (obj=0x55b6e7c04d40, v=0x55b6e7d7f290, name=0x55b6e41bd6d8 "realized", opaque=0x55b6e7246d20, errp=0x55b6e497c918 <error_abort>)
> > >     at ../qom/object.c:2285
> > > #7  0x000055b6e3db7e65 in object_property_set (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", v=0x55b6e7d7f290, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1420
> > > #8  0x000055b6e3dbd84a in object_property_set_qobject (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=0x55b6e74c1890, errp=0x55b6e497c918 <error_abort>)
> > >     at ../qom/qom-qobject.c:28
> > > #9  0x000055b6e3db8570 in object_property_set_bool (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=false, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1489
> > > #10 0x000055b6e3daf2b5 in qdev_unrealize (dev=0x55b6e7c04d40) at ../hw/core/qdev.c:306
> > > #11 0x000055b6e3db509d in qdev_simple_device_unplug_cb (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/qdev-hotplug.c:72
> > > #12 0x000055b6e3c520f9 in virtio_scsi_hotunplug (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/scsi/virtio-scsi.c:1065
> > > #13 0x000055b6e3db4dec in hotplug_handler_unplug (plug_handler=0x55b6e81c3630, plugged_dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/hotplug.c:56
> > > #14 0x000055b6e3a28f84 in qdev_unplug (dev=0x55b6e7c04d40, errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:935
> > > #15 0x000055b6e3a290fa in qmp_device_del (id=0x55b6e74c1760 "scsi0", errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:955
> > > #16 0x000055b6e3fb0a5f in qmp_marshal_device_del (args=0x7f61cc005eb0, ret=0x7f61d5a8ae38, errp=0x7f61d5a8ae40) at qapi/qapi-commands-qdev.c:114
> > > #17 0x000055b6e3fd52e1 in do_qmp_dispatch_bh (opaque=0x7f61d5a8ae08) at ../qapi/qmp-dispatch.c:128
> > > #18 0x000055b6e4007b9e in aio_bh_call (bh=0x55b6e7dea730) at ../util/async.c:155
> > > #19 0x000055b6e4007d2e in aio_bh_poll (ctx=0x55b6e72447c0) at ../util/async.c:184
> > > #20 0x000055b6e3fe3b45 in aio_dispatch (ctx=0x55b6e72447c0) at ../util/aio-posix.c:421
> > > #21 0x000055b6e4009544 in aio_ctx_dispatch (source=0x55b6e72447c0, callback=0x0, user_data=0x0) at ../util/async.c:326
> > > #22 0x00007f61ddc14c7f in g_main_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:3454
> > > #23 g_main_context_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:4172
> > > #24 0x000055b6e400a7e8 in glib_pollfds_poll () at ../util/main-loop.c:290
> > > #25 0x000055b6e400a0c2 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313
> > > #26 0x000055b6e4009fa2 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:592
> > > #27 0x000055b6e3a3047b in qemu_main_loop () at ../softmmu/runstate.c:731
> > > #28 0x000055b6e3dab27d in qemu_default_main () at ../softmmu/main.c:37
> > > #29 0x000055b6e3dab2b8 in main (argc=24, argv=0x7ffec55196a8) at ../softmmu/main.c:48
> > > (gdb) p jobs
> > > $4 = {lh_first = 0x0}
> >
> > I wasn't able to reproduce this with gcc 13.1.1 or clang 16.0.1:
> >
> >   $ tests/qemu-iotests/check -qcow2 040
> >
> > Any suggestions on how to reproduce the issue?
>
> It happens consistently for me with the same command line, both with gcc
> and clang.
>
> gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
> clang version 15.0.7 (Fedora 15.0.7-2.fc37)
>
> Maybe there is a semantic merge conflict? I have applied the series on
> top of master (05d50ba2d4) and my block branch (88f81f7bc8).

I can't find 88f81f7bc8 but rebased on repo.or.cz/qemu/kevin.git block
(4514dac7f2e9) and the test passes here.

I rebased on qemu.git/master (05d50ba2d4) and it also passes.

Please let me know if the following tree (a0ff680a72f6) works on your
machine:
https://gitlab.com/stefanha/qemu/-/tree/remove-aio_disable_external

Thanks,
Stefan

--KLJLJshWTSqDDVpb
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRRbGMACgkQnKSrs4Gr
c8iL5Af+MRYfwim9BEWCQ+OFF9rPydT4E+bBnTgMftUJz8XcaxXz5sDzpq3sXhvM
UXDSMHg1/RY38aMEGGT1kPtlxI3zUO3pwNckFeU3rpqa22dI1yxMwI92Rn29LLIk
8txmNTd4eG89Vgvvl3zrqV/budvInTRmy7ZtMgMnNJ+SGAMxWWtakCZlBQzw7WBh
946sfiub2lvQKnvBgQJJLCOfVs3drSqV9+4IdzkpGwjN5fruVe3WhKC98Ha88ACF
L4cpa9MvlMyJvrxmPdB2uL8Eyn1mV3uLVi8Z0nRwiy4/mtahrGMZ3jkEJNi3Zn12
qpT/iMyl9DjsAj5nh2ElduicM7QFgw==
=9x4Q
-----END PGP SIGNATURE-----

--KLJLJshWTSqDDVpb--



From xen-devel-bounces@lists.xenproject.org Tue May 02 20:05:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 20:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528806.822415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptwFp-00034v-B0; Tue, 02 May 2023 20:05:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528806.822415; Tue, 02 May 2023 20:05:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptwFp-00034o-6i; Tue, 02 May 2023 20:05:41 +0000
Received: by outflank-mailman (input) for mailman id 528806;
 Tue, 02 May 2023 20:05:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d5QU=AX=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1ptwFo-00034i-E3
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 20:05:40 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.160]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b5024652-e924-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 22:05:38 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz42K5TeWg
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 2 May 2023 22:05:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5024652-e924-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683057929; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=cG8KjmyrRrTYB5GwSRIc8LwpK66j62bJNlj/2huPJn8NfiXUGLJafuQMCWyQry7VpL
    jsiu4rKFyCiqsQhvULmKEPR2drVTwIlcbcfRqc8uJjiWrHxKkLy+iPfyRcUz6pGKoK6x
    LctszF9rh0ZTALpbHTLgLoN52mjhartSdrK+bZKpVVvkxeKzO8eUt25tXc3CVuMKg832
    WSmu26Xq+fzWNI04vHMr1jOj7IgOmDqSrrXbeCDojL5K+nw1Ac0Wijjxy4kJvElFaPSh
    XedWXrpuA/lazArBvSlvfkfxLFY1W5oeVw4rBp7FxLJUQ6mr1p+4AsQav0qv/sfNQzbO
    LTPg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683057929;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=Jcpwk8O1aNbRAScb4kL6cbWNb5fNbiHkZRl03EzEa2Y=;
    b=R9NQNLPDlwFbs06f7FnhVtVR0BGo0RnuNjeRbXLoxYMyQa12noGM2g2p+s73wzLRH+
    Bszkph2/uBTNc/wnIusfmGYuD7cOVSg+S8jA18RNFLXdzYrfdQkESBKlVTW5kYL/BaGS
    Z340NeHgAqYK6R63zabC4PXJknQyJ8iGQKJoWyqaqE/1JPiN8Rr0EbIxQqTbBe3r+BLr
    IDTjw0BWl2SShIWDgLOhn7Jk4ehL7OEXOHY8AYbUc2H0KeKjVP/KSv+sLvJDHgUDrdq1
    J2y8hW2qI1/nyRs3NhPq05RVajCCFZ3xnI2fc/qvJydIQzLb3SwXQ/81uoJDUayR20rx
    hPvQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683057929;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=Jcpwk8O1aNbRAScb4kL6cbWNb5fNbiHkZRl03EzEa2Y=;
    b=UKZD/btOkhsmuzrQ2ax97YV/4tJajWTvblfrPjEFQne5BfdY4rKA8Cz6n6xbZ0Gdoc
    yus83LWHbtzPEX8fkj7TQy7YAnlDewoqsMElkcSFBd/aRmUc1A5UFE3HZUST+O+08IU4
    ZGaxLts0aJdg+h8X0Eijk+jo+RVM+W/rZdClZUp+yhJpG5UMbOfcLAmlcimLbJSWS1mf
    7K62KuoHrtJz9TINh3P7UT7/pslAUUfYTCp0i+pZpBGYl0NZPTssnriQva8L0hqe0lvV
    8MAjzkRlWNIlZx/+B9LDNUfIWaDcgM/MXxzwCdB/MA+wPviJFMlRbQJufLfHccx3RvPX
    hzCw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683057929;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=Jcpwk8O1aNbRAScb4kL6cbWNb5fNbiHkZRl03EzEa2Y=;
    b=bo7z4PjawhUc+CdXTcaWsyybfHs0y1hAbRsIzFKNpPagK/56U2ateLU4sqT2NpKRYu
    WjuNsz/pTOdiJ62akIBw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4wqVv7FZ8tH5EUSbMVU80kUr7f4QlYaI60OjHt/Q=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v1] automation: remove python2 from opensuse images
Date: Tue,  2 May 2023 20:05:27 +0000
Message-Id: <20230502200527.5365-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

The upcoming Leap 15.5 will come without a binary named 'python'.
Prepare the suse images for that change.

Starting with Xen 4.14, python3 can be used for the build.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 automation/build/suse/opensuse-leap.dockerfile       | 2 --
 automation/build/suse/opensuse-tumbleweed.dockerfile | 1 -
 2 files changed, 3 deletions(-)

diff --git a/automation/build/suse/opensuse-leap.dockerfile b/automation/build/suse/opensuse-leap.dockerfile
index c7973dd6ab..79de83ac20 100644
--- a/automation/build/suse/opensuse-leap.dockerfile
+++ b/automation/build/suse/opensuse-leap.dockerfile
@@ -58,8 +58,6 @@ RUN zypper install -y --no-recommends \
         'pkgconfig(libpci)' \
         'pkgconfig(sdl)' \
         'pkgconfig(sdl2)' \
-        python \
-        python-devel \
         python3-devel \
         systemd-devel \
         tar \
diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile b/automation/build/suse/opensuse-tumbleweed.dockerfile
index 7e5f22acef..abb25c8c84 100644
--- a/automation/build/suse/opensuse-tumbleweed.dockerfile
+++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
@@ -61,7 +61,6 @@ RUN zypper install -y --no-recommends \
         'pkgconfig(libpci)' \
         'pkgconfig(sdl)' \
         'pkgconfig(sdl2)' \
-        python-devel \
         python3-devel \
         systemd-devel \
         tar \


From xen-devel-bounces@lists.xenproject.org Tue May 02 20:07:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 20:07:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528812.822425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptwH1-0003h0-Mu; Tue, 02 May 2023 20:06:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528812.822425; Tue, 02 May 2023 20:06:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptwH1-0003gt-K4; Tue, 02 May 2023 20:06:55 +0000
Received: by outflank-mailman (input) for mailman id 528812;
 Tue, 02 May 2023 20:06:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GGWT=AX=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1ptwH0-0003gj-Er
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 20:06:54 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e0bd3cb8-e924-11ed-b225-6b7b168915f2;
 Tue, 02 May 2023 22:06:52 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-228-EPMPoQX1NfuQ2XU-a2gp2Q-1; Tue, 02 May 2023 16:06:48 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5758A85A588;
 Tue,  2 May 2023 20:06:47 +0000 (UTC)
Received: from localhost (unknown [10.39.192.230])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C37E1C15BAE;
 Tue,  2 May 2023 20:06:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0bd3cb8-e924-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683058011;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LgDhtbpQWSSNsgeEph8esneZJVWEPWcWGLZOnDkeVKE=;
	b=Kdbe6o3RSShoyolv1BJEpZjDP6CP1+4m4XuVtcQZgsQ8LZnWJ0ZRruYghKu6UwyXZPiH6I
	qEb+XNd2k/EysxjXB5O8vTPgiIC9pDlAMRg4iKbCoMn30TV93xjZDaSZ0cHl1p6ydQ5qJg
	fz0cE2caF7n03Xrp0fzPYt9s6obYr9Y=
X-MC-Unique: EPMPoQX1NfuQ2XU-a2gp2Q-1
Date: Tue, 2 May 2023 16:06:45 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 07/20] block/export: stop using is_external in
 vhost-user-blk server
Message-ID: <20230502200645.GE535070@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-8-stefanha@redhat.com>
 <ZFE0iFnbr2ey0A7X@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="O5xAqfWJ9HxCWIfk"
Content-Disposition: inline
In-Reply-To: <ZFE0iFnbr2ey0A7X@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8


--O5xAqfWJ9HxCWIfk
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, May 02, 2023 at 06:04:24PM +0200, Kevin Wolf wrote:
> Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > vhost-user activity must be suspended during bdrv_drained_begin/end().
> > This prevents new requests from interfering with whatever is happening
> > in the drained section.
> >
> > Previously this was done using aio_set_fd_handler()'s is_external
> > argument. In a multi-queue block layer world the aio_disable_external()
> > API cannot be used since multiple AioContext may be processing I/O, not
> > just one.
> >
> > Switch to BlockDevOps->drained_begin/end() callbacks.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
> >  util/vhost-user-server.c             | 10 +++----
> >  2 files changed, 26 insertions(+), 27 deletions(-)
> >
> > diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
> > index 092b86aae4..d20f69cd74 100644
> > --- a/block/export/vhost-user-blk-server.c
> > +++ b/block/export/vhost-user-blk-server.c
> > @@ -208,22 +208,6 @@ static const VuDevIface vu_blk_iface = {
> >      .process_msg           = vu_blk_process_msg,
> >  };
> >
> > -static void blk_aio_attached(AioContext *ctx, void *opaque)
> > -{
> > -    VuBlkExport *vexp = opaque;
> > -
> > -    vexp->export.ctx = ctx;
> > -    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
> > -}
> > -
> > -static void blk_aio_detach(void *opaque)
> > -{
> > -    VuBlkExport *vexp = opaque;
> > -
> > -    vhost_user_server_detach_aio_context(&vexp->vu_server);
> > -    vexp->export.ctx = NULL;
> > -}
>
> So for changing the AioContext, we now rely on the fact that the node to
> be changed is always drained, so the drain callbacks implicitly cover
> this case, too?

Yes.

> >  static void
> >  vu_blk_initialize_config(BlockDriverState *bs,
> >                           struct virtio_blk_config *config,
> > @@ -272,6 +256,25 @@ static void vu_blk_exp_resize(void *opaque)
> >      vu_config_change_msg(&vexp->vu_server.vu_dev);
> >  }
> >
> > +/* Called with vexp->export.ctx acquired */
> > +static void vu_blk_drained_begin(void *opaque)
> > +{
> > +    VuBlkExport *vexp = opaque;
> > +
> > +    vhost_user_server_detach_aio_context(&vexp->vu_server);
> > +}
>
> Compared to the old code, we're losing the vexp->export.ctx = NULL. This
> is correct at this point because after drained_begin we still keep
> processing requests until we arrive at a quiescent state.
>
> However, if we detach the AioContext because we're deleting the
> iothread, won't we end up with a dangling pointer in vexp->export.ctx?
> Or can we be certain that nothing interesting happens before drained_end
> updates it with a new valid pointer again?

If you want, I can add the detach() callback back and set ctx to NULL
there?

Stefan

--O5xAqfWJ9HxCWIfk
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRRbVUACgkQnKSrs4Gr
c8i4OAf+KMBGicSKs6PxKeNThQpFSMAOYw12vnQg0N97hfarjPeaKf9lZmdJLKB7
McPjsLuRThNx0snklzDlcRovZKNZcV5EjEddXKA1ikDqZXDeLue4X717xIIV1RF0
oYAyyPcTXr9V1JKG2nKQzz1fK/zb0oXyEycRI1uPB1YjiL/NnBUZEdAG3DEZmBXK
qjZcFpvChPe6DFydAfunLcRtoSASIb6cBLCeOFHXuTrKixFZeG9QDW87ONOMnG7D
JF3Cjcfsn94R6SFWDOK2TGeKF7IY6kVJa+gOJXyTE2+n2vZhpezwmroM4kprRJGc
TiMUeoBbZl3Oi5FrMOcj7MOeMEdRlw==
=PCPd
-----END PGP SIGNATURE-----

--O5xAqfWJ9HxCWIfk--



From xen-devel-bounces@lists.xenproject.org Tue May 02 20:10:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 20:10:46 +0000
Date: Tue, 2 May 2023 16:10:29 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel P. Berrangé <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 10/20] block: drain from main loop thread in
 bdrv_co_yield_to_drain()
Message-ID: <20230502201029.GF535070@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-11-stefanha@redhat.com>
 <ZFE4gFFXnu+FSk35@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="brpBxr+X1vKHT7XM"
Content-Disposition: inline
In-Reply-To: <ZFE4gFFXnu+FSk35@redhat.com>


--brpBxr+X1vKHT7XM
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, May 02, 2023 at 06:21:20PM +0200, Kevin Wolf wrote:
> Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > For simplicity, always run BlockDevOps .drained_begin/end/poll()
> > callbacks in the main loop thread. This makes it easier to implement the
> > callbacks and avoids extra locks.
> >
> > Move the function pointer declarations from the I/O Code section to the
> > Global State section in block-backend-common.h.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> If we're updating function pointers, we should probably update them in
> BdrvChildClass and BlockDriver, too.

I'll do that in the next revision.

> This means that a non-coroutine caller can't run in an iothread, not
> even the home iothread of the BlockDriverState. (I'm not sure if it was
> allowed previously. I don't think we're actually doing this, but in
> theory it could have worked.) Maybe put a GLOBAL_STATE_CODE() after
> handling the bdrv_co_yield_to_drain() case? Or would that look too odd?
>
>     IO_OR_GS_CODE();
>
>     if (qemu_in_coroutine()) {
>         bdrv_co_yield_to_drain(bs, true, parent, poll);
>         return;
>     }
>
>     GLOBAL_STATE_CODE();

That looks good to me; it makes explicit that IO_OR_GS_CODE() only
applies until the end of the if statement.

Stefan

--brpBxr+X1vKHT7XM
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRRbjUACgkQnKSrs4Gr
c8gcCwf/T65LBfJJcwJAyvlXRfg2oJ/yZvsfJ3BZCAaupLZjx+EeDLlXwOdhSb2S
GKjZnCRY+804xb2asfcWI/aD6+FfB/OtlSQmoNpLQ0rRSl0ySEwoAolto1Z9ekGZ
L+Jfc76d+YxOe+NNrrZ9OzISmnofdrQVvqoJotHO/4bB0pUPvWzJG4f+ji/jy3u/
A+7u0pbN8C5zVOa6Cd3OXliYePtIv3nGH9xL8HmL/6YQD5rcPgFj2HlylhSMjRqy
dY54mvDssnwZL5sI2RfogH00TesZlaXrVwMD5Mr+HJ/T5H3myPLw1Y3RCgiPVLCM
WUaRx6PORhkBp198DzErfTVGl1grqQ==
=G1hQ
-----END PGP SIGNATURE-----

--brpBxr+X1vKHT7XM--



From xen-devel-bounces@lists.xenproject.org Tue May 02 20:15:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 20:15:10 +0000
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v1] automation: provide example for downloading an existing container
Date: Tue,  2 May 2023 20:14:44 +0000
Message-Id: <20230502201444.6532-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 automation/build/README.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/automation/build/README.md b/automation/build/README.md
index 2d07cafe0e..8ad89a259a 100644
--- a/automation/build/README.md
+++ b/automation/build/README.md
@@ -12,6 +12,12 @@ can be pulled with Docker from the following path:
 docker pull registry.gitlab.com/xen-project/xen/DISTRO:VERSION
 ```
 
+This example shows how to pull the existing container for Tumbleweed:
+
+```
+docker pull registry.gitlab.com/xen-project/xen/suse:opensuse-tumbleweed
+```
+
 To see the list of available containers run `make` in this
 directory. You will have to replace the `/` with a `:` to use
 them.


From xen-devel-bounces@lists.xenproject.org Tue May 02 20:41:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 20:41:51 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180508-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180508: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d6b42ed7ed1b0c4584097f0d76798cff74c96379
X-Osstest-Versions-That:
    ovmf=23c71536efbebed57942947668f470f934324477
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 02 May 2023 20:41:30 +0000

flight 180508 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180508/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d6b42ed7ed1b0c4584097f0d76798cff74c96379
baseline version:
 ovmf                 23c71536efbebed57942947668f470f934324477

Last test of basis   180502  2023-05-02 07:42:13 Z    0 days
Testing same since   180508  2023-05-02 16:10:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   23c71536ef..d6b42ed7ed  d6b42ed7ed1b0c4584097f0d76798cff74c96379 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 02 20:44:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 20:44:10 +0000
Date: Tue, 2 May 2023 22:43:58 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] ns16550: enable memory decoding on MMIO-based PCI
 console card
Message-ID: <ZFF2D0NkvJdkR1dU@mail-itl>
References: <20230425143902.142571-1-marmarek@invisiblethingslab.com>
 <a3f2d048-78c5-9a5d-d44d-3a930ba780fd@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="eIxYRDk9pctUlU0V"
Content-Disposition: inline
In-Reply-To: <a3f2d048-78c5-9a5d-d44d-3a930ba780fd@suse.com>


--eIxYRDk9pctUlU0V
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, May 02, 2023 at 12:53:15PM +0200, Jan Beulich wrote:
> On 25.04.2023 16:39, Marek Marczykowski-Górecki wrote:
> > pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
> > devices, add setting PCI_COMMAND_MEMORY for MMIO-based UART devices too.
>
> This sentence is odd, as by its grammar it looks to describe the current
> situation only. The respective sentence in v1 did not have this issue.
>
> > --- a/xen/drivers/char/ns16550.c
> > +++ b/xen/drivers/char/ns16550.c
> > @@ -272,7 +272,15 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
> >  static void pci_serial_early_init(struct ns16550 *uart)
> >  {
> >  #ifdef NS16550_PCI
> > -    if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
> > +    if ( uart->bar )
> > +    {
> > +        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
> > +                                  uart->ps_bdf[2]),
> > +                         PCI_COMMAND, PCI_COMMAND_MEMORY);
> > +        return;
> > +    }
> > +
> > +    if ( !uart->ps_bdf_enable )
> >          return;
> >
> >      if ( uart->pb_bdf_enable )
>
> While I did suggest using uart->bar, my implication was that the io_base
> check would then remain in place. Otherwise, if I'm not mistaken, MMIO-
> based devices not specified via "com<N>=...,pci" would then wrongly take
> the I/O port path.

I don't think MMIO-based devices specified manually have a great chance
of working anyway (see the commit message), but indeed I shouldn't have
broken them even more.

> Furthermore - you can't use uart->bar alone here, can you? The field is
> set equally for MMIO and port based cards in pci_uart_config().

Right, I'll restore the io_base check.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--eIxYRDk9pctUlU0V
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRRdg8ACgkQ24/THMrX
1yxm9wf/dusj5o4WiivPB3JYBb6aS4pI2gj3KvUbO+8zK5nxFkL5SCTxE/4gMhY2
aTNlNTnxfo7xuhWqptqaJ1tM9mScc6vwHODrwUf6jv8o8K+YFZoEPgfhyeEC2Xjn
qJA6M8JPaEWi+QPCSbY2BeVlxXTNM30xKOoBIuCav9v8OMozbz02OGescxyDCt0e
xUzFozvsy/KC4Bvv22sZ7YxwKad+KbfmNhFN791YZ97RFn4uTErAgVCV/ajGH6FE
fXQeVQcLymxyWBI4tlicQdw9SNVYwvm0bkHXhjP6MmGXQhc//LxssUIRPe/KVXW9
DcAWB7caQ7yNyV+kbvFO6qK3UlMgtw==
=Q3KG
-----END PGP SIGNATURE-----

--eIxYRDk9pctUlU0V--


From xen-devel-bounces@lists.xenproject.org Tue May 02 20:48:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 20:48:15 +0000
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1] tools: drop bogus and obsolete ptyfuncs.m4
Date: Tue,  2 May 2023 20:48:00 +0000
Message-Id: <20230502204800.10733-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

According to openpty(3), <pty.h> must be included to get the
prototypes for openpty() and login_tty(). But this is not what the
macro AX_CHECK_PTYFUNCS actually does: it makes no attempt to include
the required header.

The two source files which call openpty() and login_tty() already contain
the conditionals to include the required header.

Remove the bogus m4 file to fix build with clang, which complains about
calls to undeclared functions.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 m4/ptyfuncs.m4     | 35 -----------------------------------
 tools/configure.ac |  1 -
 2 files changed, 36 deletions(-)
 delete mode 100644 m4/ptyfuncs.m4

diff --git a/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
deleted file mode 100644
index 3e37b5a23c..0000000000
--- a/m4/ptyfuncs.m4
+++ /dev/null
@@ -1,35 +0,0 @@
-AC_DEFUN([AX_CHECK_PTYFUNCS], [
-    dnl This is a workaround for a bug in Debian package
-    dnl libbsd-dev-0.3.0-1. Once we no longer support that
-    dnl package we can remove the addition of -Werror to
-    dnl CPPFLAGS.
-    AX_SAVEVAR_SAVE(CPPFLAGS)
-    CPPFLAGS="$CPPFLAGS -Werror"
-    AC_CHECK_HEADER([libutil.h],[
-      AC_DEFINE([INCLUDE_LIBUTIL_H],[<libutil.h>],[libutil header file name])
-    ])
-    AX_SAVEVAR_RESTORE(CPPFLAGS)
-    AC_CACHE_CHECK([for openpty et al], [ax_cv_ptyfuncs_libs], [
-        for ax_cv_ptyfuncs_libs in -lutil "" NOT_FOUND; do
-            if test "x$ax_cv_ptyfuncs_libs" = "xNOT_FOUND"; then
-                AC_MSG_FAILURE([Unable to find library for openpty and login_tty])
-            fi
-            AX_SAVEVAR_SAVE(LIBS)
-            LIBS="$LIBS $ax_cv_ptyfuncs_libs"
-            AC_LINK_IFELSE([AC_LANG_SOURCE([
-#ifdef INCLUDE_LIBUTIL_H
-#include INCLUDE_LIBUTIL_H
-#endif
-int main(void) {
-  openpty(0,0,0,0,0);
-  login_tty(0);
-}
-])],[
-                break
-            ],[])
-            AX_SAVEVAR_RESTORE(LIBS)
-        done
-    ])
-    PTYFUNCS_LIBS="$ax_cv_ptyfuncs_libs"
-    AC_SUBST(PTYFUNCS_LIBS)
-])
diff --git a/tools/configure.ac b/tools/configure.ac
index 9bcf42f233..c94257f751 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -70,7 +70,6 @@ m4_include([../m4/uuid.m4])
 m4_include([../m4/pkg.m4])
 m4_include([../m4/curses.m4])
 m4_include([../m4/pthread.m4])
-m4_include([../m4/ptyfuncs.m4])
 m4_include([../m4/extfs.m4])
 m4_include([../m4/fetcher.m4])
 m4_include([../m4/ax_compare_version.m4])


From xen-devel-bounces@lists.xenproject.org Tue May 02 22:41:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 22:41:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528844.822485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptyft-00070c-Op; Tue, 02 May 2023 22:40:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528844.822485; Tue, 02 May 2023 22:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptyft-00070V-M3; Tue, 02 May 2023 22:40:45 +0000
Received: by outflank-mailman (input) for mailman id 528844;
 Tue, 02 May 2023 22:40:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sTLj=AX=citrix.com=prvs=479cfccc8=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1ptyfr-00070P-I6
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 22:40:43 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5baff743-e93a-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 00:40:39 +0200 (CEST)
Received: from mail-dm6nam12lp2175.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 18:40:23 -0400
Received: from DS7PR03MB5414.namprd03.prod.outlook.com (2603:10b6:5:2c2::6) by
 MN2PR03MB4989.namprd03.prod.outlook.com (2603:10b6:208:1a5::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.20; Tue, 2 May
 2023 22:40:21 +0000
Received: from DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d]) by DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d%6]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 22:40:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5baff743-e93a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683067239;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=FNB9GSnyRPnYK98/Gs9Y6pEJFd5wbO1w+y7hmtAbIcA=;
  b=LuHqencmt8ngkpycZ7EhW8VOZLUMK5xVT7V/Yvqgxq/VNNuzqIXq6ijf
   EmGtXCE3dRtfMoDXeFpt9ekbyvaSCegVhYaXZaWXNPvka5o7EcihPX2zP
   I/x4CDL0difqZJlcIjN7PXTWdkJzOsUhHN0G8HYpsUY+BSWUPCeS3iLTY
   E=;
X-IronPort-RemoteIP: 104.47.59.175
X-IronPort-MID: 106400643
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:3ILQr66FAqk+9ypsawsoVAxRtCPGchMFZxGqfqrLsTDasY5as4F+v
 mNJDzyHO/3ZNDP8eYhza9vj9kxXu8DQxtBjSQM6pSgxHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7ZwehBtC5gZlPa0T5geF/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m7
 O4TDjQQdUm/rsWx7OyFEtYyupsCFZy+VG8fkikIITDxK98DGMqGb4CUoNhS0XE3m9xEGuvYa
 4wBcz1zYR/cYhpJfFAKFJY5m+TujX76G9FagAvN+exrvC6OnUooj+WF3Nn9I7RmQe1Xk0Cep
 2zL5SL5DwsQOcaD4TGE7mitlqnEmiaTtIc6TeXmqqYy3gHIroAVIBJOCFaGk/mBsHajXPYCd
 UsvpRE37pFnoSRHSfG4BXVUukWsvBQRRt5RGO0S8xyWx+zf5APxLngJSHtNZcIrsOcyRCc2z
 RmZktXxHzttvbaJD3WH+d+pQSiaPCEUKSoOYHECRA5cud37+ths01TIU8ppF7OzgpvtAzbsz
 juWrS84wbIOkcoM0Kb99lfC696xmqX0oscOzl2/dgqYAslRPeZJu6TABYDn0Mt9
IronPort-HdrOrdr: A9a23:EeK/za7YLSgKAYek3QPXwOTXdLJyesId70hD6qkXc3Bom62j+P
 xG+c5x6faaslgssR0b+OxoWpPwIk80hKQU3WB5B97LNmTbUQCTXeNfBOXZslvdMhy72ulB1b
 pxN4hSYeeAdGSSVPyKhDVQxexQp+Wv+qauguvV0ndqSgFmApsQijtENg==
X-Talos-CUID: 9a23:EE3REWNNP23Zh+5DQTts0koJAZkZW1ra4Cnue3DmNTlIcejA
X-Talos-MUID: 9a23:k1cfJwgUAYuKg1hODiTGt8MpO5817pmcCHg3qYgBn5TaMhcvAC7etWHi
X-IronPort-AV: E=Sophos;i="5.99,245,1677560400"; 
   d="scan'208";a="106400643"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FhynCaDbmxKP3i5Gsz3FwyM3xKKVa562v2Ee/d8QYLd0md19dchnifKlm9R2WOqMBEJLdUbVMXkbLC/O260MjJtMdmkZXwTLvi3aCiVZsqD2J/GSNeKaIgDKAJlXefMnEI3G+3v6hRJseHPn8bY6m46dhXBOMLJD38gg3ZnYcNyT48WSGotQcYwzOJwpZfS+k81/XIgBBWwcp3xYEaUX2vrTaky/J6ICl6LsyJ11mFD4inW2AkfTBXpCB9zbVWkztsLGNKC6aoz78PFnETApQ06kuD2nrra/Eow0/lEd/IGmcvDazF0gqIvSjnn0GHt/RoQgHfk8yPManXy7EV1XJA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mlq3SnBA/2XciEPKI0qXHiISzju/CjJhWkRkawbwRwk=;
 b=E2eu+u4eIq+fS6OWReydzqs0MUv63TkR3RXJMMVqVsmO7lOdqE+P+DWiYjXVtLapLs12ZrwNpF1VgSvnmEMMLUzswUTECLWpApnFEVF8CmuZv8SWMiFY4jLA/8ZcYCX32CUsNn6k+jPRGXjn4VIDThWV0RsWSskrRi9xzDpqsp6aZZQh3sZC2bDhwsfHOQR//q8fO78snhzhaX5UxglrTj2Od1PISUTWTnHW66rSVwu9ynlwFCxUpGpU/FHbpJMFn+Mkwi6dxUWviYI8mDmmGZm0hF6W2ydbkBJoC6GZAtIsNJ8nzOCFxElX6ZWfJKrsgmxSUHXvFCHrdfta8rq4/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mlq3SnBA/2XciEPKI0qXHiISzju/CjJhWkRkawbwRwk=;
 b=KBnyIhg0haOjT7nZs38ciExnDmZ8sJ4QbWtliwa/yIATjAy1vGUYxDFygcUtJmVvM9teNXFNY1voBAkXrv0yxWMN1jBv4uXysWaJfkovNYeGp6vaOcWXG6vYjC0R8L61YjbdHzuQ84Fr/ctuLb0RkFKa/iG0eWtQ4PDQC1LEn2Y=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <5e877d50-de2d-0af4-9fa0-d4529a97ee2f@citrix.com>
Date: Tue, 2 May 2023 23:40:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 1/2] acpi: Make TPM version configurable.
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
 <20230425174733.795961-2-jennifer.herbert@citrix.com>
 <5516fbf5-dbfc-dcd5-0465-e4757fdc16de@suse.com>
From: Jennifer Herbert <jennifer.herbert@citrix.com>
In-Reply-To: <5516fbf5-dbfc-dcd5-0465-e4757fdc16de@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0591.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:295::13) To DS7PR03MB5414.namprd03.prod.outlook.com
 (2603:10b6:5:2c2::6)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS7PR03MB5414:EE_|MN2PR03MB4989:EE_
X-MS-Office365-Filtering-Correlation-Id: 6882bbf6-3dc4-4193-6965-08db4b5e3609
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RnaqWcA84bQdqs9njpw3FW9OeB6hIWs4GZJqZvOXfknA1g7AHUMyR5zJHa47pCxWXf9QZPcYCX+TGa8KzM1IA5g57mii9NyLnvne0bGi3+WG5q/1jhhlaHCFV5FvBu9eKOlp37ZbIf8NI4gPmF4wXeuuXt9hsgQmBrItcNaHrUh2s7x678MKGUV0YNmuWmpwD5exVert2O8TX4T5kDAgyH5lHXVHSs1JEUzR9j9BwLDf/E+uyxMdg7it6XCcu9/giPKnVX4gKFnidIMVH6xJ0UPfMaE7iq0esZjA15ySxN82CdFKX95DUoSNIZekDvZ+LUsRVgJt8mYObKLho7qW8JhDlvSSNKsnoxYpUevZSqbFuU5TYQ83NMNhSnnv8YzK6DBov0n6T+/yZyvA5YeWG5xUhBTXMzeGqQO6qbYsGR1dts0tTcajdDs21DpwR45h95VtT6kNdLKJBu6pPFFMO6s3qsH9zTUGybI7rvYj2UQYWjnlWOk5mkV8K095SWzdiqYvqgVU7htStyyDbjH4onjQe5YYnI75/YHxay/0lhgM6C+u72xVu0/yhE17lGcK2iY3qy+35JZf5adzCb2mNl6STUfUDX26sLUOW60kaqWJo3qLHEYyrkgNSaZMglCVz41y4HUil/DD2Em3FRG4zA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5414.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(346002)(136003)(366004)(396003)(451199021)(54906003)(26005)(6506007)(31686004)(6512007)(2616005)(6666004)(478600001)(66556008)(66476007)(66946007)(4326008)(6916009)(316002)(53546011)(41300700001)(186003)(83380400001)(6486002)(5660300002)(8936002)(82960400001)(2906002)(38100700002)(44832011)(86362001)(8676002)(31696002)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?d3VOZ1Zxc2ZpZWEwb3A3eHpsdVUwNTBYcENVcGJhVjZrU1JmMWQvQnJrZStL?=
 =?utf-8?B?SHdrdndLSExMM3JvbGtjeVZCRndrUmtwaHIwNFJhMUJkMmJubWJwdnM1TlBX?=
 =?utf-8?B?ZzkySWo0L1BweE85dVVldkdhd1lpbGpEN3JjcEZFS2F6cjN2VGcvZUhOK3My?=
 =?utf-8?B?cmR1MjYxK3BBQ00yeW5sS0gxQnM2SjJLOWtvMjNpZFRWczh0TFh1aHJxYUJ5?=
 =?utf-8?B?NmpZNFlFNVNVV1hXVTVVUC82Ym9TS21hTDRMTUFmL2tmYithdXpGemNiVisx?=
 =?utf-8?B?ZFNSWkg2QkgxTXN4VW9xUG5KWk9QOFp6NWRLOUMxWGFieTVnemorVU5KYml6?=
 =?utf-8?B?K1k2elRaYVFjdFpGY2FUU0RkRVN6b214RXoyUlVQTU5zZFdyNEhKT0NWR1du?=
 =?utf-8?B?ZWR1dHdycDczbnhxVnYzbGhPNlV1MG4rWEFCU2p0OENkdm5BY2tlV2NpTGl2?=
 =?utf-8?B?cVp5ZUxOSUNBRytPcTA1TXRMMWZzNy9wSkZ5Uk5wL0dvNGZ0ZGdJZG9hR2ZL?=
 =?utf-8?B?UXZ2ZVQrbUwwbjJSVzczWTJ1dnpuUUJHRnBCQ1RCMHdPZ0Y4clBKbzBReEI0?=
 =?utf-8?B?eWgycVN0OHI2SDNuVU1NNXBOQmYwZDJGNHhITkYzR25ndmNRSk9PN0prU2J0?=
 =?utf-8?B?Z1FqcXg2RHMybGhBamRyOFQ4TUhKeFF3ekNDL0Z5Tll5cjJaTU1nTXZ2dHpt?=
 =?utf-8?B?Mi9FT2tnR1NRSy9UT1h2QVhPMmZBUzBpQ3lJR0xNc0RvejJaaFpUUnFDUlhp?=
 =?utf-8?B?dHVCSmNwN3haSlF1bTRWc1haaWZWcjFDaitlKzgwOXpwWlpObDVhdWpxUFFm?=
 =?utf-8?B?UkdKRVZ2NUVVbTJXdXJ5NDB4a1pLekUzb1dhQVFWSDg5Vzh6dlN0SE5BWnRP?=
 =?utf-8?B?Y0FpYi90RkQvbXhiYStHUUhZa1ZCT2J3WGtxQit1Y2dVNjE4VXlHZlhLYnNB?=
 =?utf-8?B?djltTDg5SjVhNml6UW8yQWhIK0k0MmFDMDRBZmUrRExMNDN0TE00bFY1OUFQ?=
 =?utf-8?B?VGx0OGt1cVJJYm5VNURBdDVYeUlFZ3lRNW9NZ05xSndYejFDbEFNTmdBeUdt?=
 =?utf-8?B?bCtWUERqK0Y1ZWd1RjZBSXdZUjVrQzE4WFc5OTlJME9BbnY3RGxsUHVlVnRa?=
 =?utf-8?B?SEtsdFpYajNUS01MNGVDbDBYQlZnb0VuTzNIVkMzVnVXM2g5UC9pVms3b3Zj?=
 =?utf-8?B?NDhpN2xzQVlCU3BwNkhrWjUyYmlnb1hzYU0zSmptdGZnbnJOTjlrc2FQNnhF?=
 =?utf-8?B?UDJsUnJmT3JHSUw0U0JYYnZPdnFzUEEwaHFLTjZ3bzBYN3FjRXplbEF1cHlu?=
 =?utf-8?B?TVJzYlYvUXVNU3Q4QTd3dWhWNTdUS2JWZlA0b0dhNnZlWEpCZlRoVGM3bG9l?=
 =?utf-8?B?MmZZYkZJYytKaE1nZEYrOGdVSUFOL080MFBrRTd1ZlRpN3d5aUhJRWRpVnhZ?=
 =?utf-8?B?RW52SkQ0RjMyR0ZPWGZMVnBuUUFZOTltNk1vNVZhaDROVk1GQWcrSFA2WnFP?=
 =?utf-8?B?dWlFYytjUXpTMU9ZeVZxNTBXMEtneDR2djNYZmgxdFJHY1JHcjhiVmdFeVJk?=
 =?utf-8?B?Ymc5Mjd1SXhUc0wvemFpSW9xWlJuelNFbGtseU9Kemk1RGczNHhjd1BZYUlM?=
 =?utf-8?B?U01wRU96UHhmR3dnbXFkaVgyQXRiMGY4eGhlZVVlVnViMGI3T2Nlb05EdHNH?=
 =?utf-8?B?WXEzR3g3aEJHNHUzMTR0bm5lUG04cFhlWTRxNXZIY1ExQ2ZiRHE0TTZEY3I0?=
 =?utf-8?B?TkYwR2Z4RGhFdnVLYUl4VitNWFVwZ2w4K1Vxc1pwVEpmYUh6V0YrZUhsNnBP?=
 =?utf-8?B?S3hPR1c3eGdkcnRWSmx4SG41SWluVzRnNXNDR2g5UFQzNXRwRkhOZEtKTGdC?=
 =?utf-8?B?YnlTYmpiV05WNG0vNkJaYVoxOTNsYmxIRXM2SjFEalBtZFlYbFVEb0grMHN3?=
 =?utf-8?B?ZlRsL3p1UTU5YlNFY1hpM2pLQW5ybUdJU0xVTktzaWw1eURPVFJJMTZRTzYz?=
 =?utf-8?B?NWNNUW5RbnVzckxwN2V4U0dWWW00eUpxRXREQUUyNDU3Z3FhcWZ5K2F3MHZ4?=
 =?utf-8?B?b3BFMTlySTkrRGZqNkhUTFhzMDY1M0VyY3JjTnBsMC9IbklITzNxZWpjclZk?=
 =?utf-8?B?S0tqTFJMWHVLNWRNTlRveWx3QmRsL0tidENIY1Q3Y2VZbnYxbEJseGd1b3Vx?=
 =?utf-8?B?clE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	kqTb7B2bWPV1YEHvVsbHP8wKyQqI13OgF7f0uEFSYKKj3q/o/0vRe6bQ83+ZZ2rIo7qWJpGAhinPtXj6nJFQbH+ohxBT1tiT6dMzjqobjoFd2DZdWmU7JT/vbYk9yAbOqlHBvmHI11G4696l2/Bw1QsoFhszL6uT9Qfdvzf14OG3b6fYZjJSUD+bBenlzF/4K3rmvA4zIrOszEPcF2SaIjQSDNqdar7bMwBNhZA0sDcLPYX+TWd78h8E/02AYmp51+9oR6zeiB39HuxRKF0gIjkYKFxgaXdsqBxzyj84bxE6CSq+FlfPOxgQYu2BtxUNv7yi+uEEWnTvQ/GHr+NRayqck9n15cOoZL4RAGFLH3l/5osmQ2nKPeWZ5TkKzPINQEh/F7pEPOnMVs6xrMEglhwOquYCrzncQwLJCcSfpoYShkFWxjnsgeBEFcJuCXpGP/zCozO2jEpVOSwIyYi0ObRn2UZvT92MU/bt8thJm61mx66ByFh7IH8IRMwrRmb0eQAoILf7IC5aOSvWa5LZ1+OmJ4OKqQ7CEQBAxKjrq9syYPwotNe/gC6v9D+9aS/oMJHivupeLWxNH5sChlfBQEwwM99TPp+jj02G2pf+c5bNHJdYfcfWeNbUiv8Wo2nqIgGMrUeTOnSwyYqfP3FOjCki2rSGbuQKZ03GqD6w0GJ9irNwXw4FHBzQPw2g2DpOUuYKB5ubov78Ck5uMzwRbH5gSDUbMbLSW8XJIVerAp4iqBLGIyi0Jof1AFdpt33530WW55SMxOcDWYeBrHvE5NCimy6LIJfrSVFzIaF28xE=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6882bbf6-3dc4-4193-6965-08db4b5e3609
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5414.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 22:40:21.1377
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vhQwWrJMGqmY34wBhzMfvQccN35nRNOUHITDbA6m2PIt/YHFza1l2zZyBorjuoRhrl0SvumVd9zCqCSnUuni+7o7VNuzpzwFLSTXb/Cbevo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB4989


On 02/05/2023 12:54, Jan Beulich wrote:
> On 25.04.2023 19:47, Jennifer Herbert wrote:
>> This patch makes the TPM version, for which the ACPI library probes, configurable.
>> If acpi_config.tpm_version is set to 1, it indicates that 1.2 (TCPA) should be probed.
>> I have also added to hvmloader an option to allow setting this new config, which can
>> be triggered by setting the platform/tpm_version xenstore key.
>>
>> Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
>> ---
>>   docs/misc/xenstore-paths.pandoc |  9 +++++
>>   tools/firmware/hvmloader/util.c | 19 ++++++---
>>   tools/libacpi/build.c           | 69 +++++++++++++++++++--------------
>>   tools/libacpi/libacpi.h         |  3 +-
>>   4 files changed, 64 insertions(+), 36 deletions(-)
> Please can you get used to providing a brief rev log somewhere here?

Yes, ok.

>> --- a/tools/firmware/hvmloader/util.c
>> +++ b/tools/firmware/hvmloader/util.c
>> @@ -994,13 +994,22 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
>>       if ( !strncmp(xenstore_read("platform/acpi_laptop_slate", "0"), "1", 1)  )
>>           config->table_flags |= ACPI_HAS_SSDT_LAPTOP_SLATE;
>>   
>> -    config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC |
>> -                            ACPI_HAS_WAET | ACPI_HAS_PMTIMER |
>> -                            ACPI_HAS_BUTTONS | ACPI_HAS_VGA |
>> -                            ACPI_HAS_8042 | ACPI_HAS_CMOS_RTC);
>> +    config->table_flags |= (ACPI_HAS_IOAPIC | ACPI_HAS_WAET |
>> +                            ACPI_HAS_PMTIMER | ACPI_HAS_BUTTONS |
>> +                            ACPI_HAS_VGA | ACPI_HAS_8042 |
>> +                            ACPI_HAS_CMOS_RTC);
>>       config->acpi_revision = 4;
>>   
>> -    config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
>> +    s = xenstore_read("platform/tpm_version", "1");
>> +    config->tpm_version = strtoll(s, NULL, 0);
> Due to field width, someone specifying 257 will also get a 1.2 TPM,
> if I'm not mistaken.

Seems likely.  And a few other wacky values would give you 1.2 as well,
I'd think.  There could also be trailing junk on the version number.

I was a bit fazed by the lack of any real error cases in
hvmloader_acpi_build_tables.  It seemed the approach was: if you put in
junk, you'll get something, but possibly not what you're expecting.

Do I take it you'd prefer it to only accept a strict '1' for 1.2, with any
other value resulting in no TPM being probed?  Or is it only the
overflow cases you're concerned about?


>> +    switch( config->tpm_version )
> Nit: Style (missing blank).
yup
>> --- a/tools/libacpi/build.c
>> +++ b/tools/libacpi/build.c
>> @@ -409,38 +409,47 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
>>           memcpy(ssdt, ssdt_laptop_slate, sizeof(ssdt_laptop_slate));
>>           table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
>>       }
>> -
>> -    /* TPM TCPA and SSDT. */
>> -    if ( (config->table_flags & ACPI_HAS_TCPA) &&
>> -         (config->tis_hdr[0] != 0 && config->tis_hdr[0] != 0xffff) &&
>> -         (config->tis_hdr[1] != 0 && config->tis_hdr[1] != 0xffff) )
>> +    /* TPM and its SSDT. */
>> +    if ( config->table_flags & ACPI_HAS_TPM )
>>       {
>> -        ssdt = ctxt->mem_ops.alloc(ctxt, sizeof(ssdt_tpm), 16);
>> -        if (!ssdt) return -1;
>> -        memcpy(ssdt, ssdt_tpm, sizeof(ssdt_tpm));
>> -        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
>> -
>> -        tcpa = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_tcpa), 16);
>> -        if (!tcpa) return -1;
>> -        memset(tcpa, 0, sizeof(*tcpa));
>> -        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, tcpa);
>> -
>> -        tcpa->header.signature = ACPI_2_0_TCPA_SIGNATURE;
>> -        tcpa->header.length    = sizeof(*tcpa);
>> -        tcpa->header.revision  = ACPI_2_0_TCPA_REVISION;
>> -        fixed_strcpy(tcpa->header.oem_id, ACPI_OEM_ID);
>> -        fixed_strcpy(tcpa->header.oem_table_id, ACPI_OEM_TABLE_ID);
>> -        tcpa->header.oem_revision = ACPI_OEM_REVISION;
>> -        tcpa->header.creator_id   = ACPI_CREATOR_ID;
>> -        tcpa->header.creator_revision = ACPI_CREATOR_REVISION;
>> -        if ( (lasa = ctxt->mem_ops.alloc(ctxt, ACPI_2_0_TCPA_LAML_SIZE, 16)) != NULL )
>> +        switch ( config->tpm_version )
>>           {
>> -            tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
>> -            tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
>> -            memset(lasa, 0, tcpa->laml);
>> -            set_checksum(tcpa,
>> -                         offsetof(struct acpi_header, checksum),
>> -                         tcpa->header.length);
>> +        case 0: /* Assume legacy code wanted tpm 1.2 */
> Along the lines of what Jason said: Unless this is known to be needed for
> anything, I'd prefer if it was omitted.

I'm not aware of anything, but your comment two lines down from version 2
made me think you knew of some.  So if you're happy with me removing this
line, I am!


> Jan


From xen-devel-bounces@lists.xenproject.org Tue May 02 23:30:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:30:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528848.822495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzRy-0004CV-He; Tue, 02 May 2023 23:30:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528848.822495; Tue, 02 May 2023 23:30:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzRy-0004CO-Dk; Tue, 02 May 2023 23:30:26 +0000
Received: by outflank-mailman (input) for mailman id 528848;
 Tue, 02 May 2023 23:30:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sTLj=AX=citrix.com=prvs=479cfccc8=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1ptzRx-0004CI-E6
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:30:25 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4db69382-e941-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 01:30:22 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 02 May 2023 19:29:57 -0400
Received: from DS7PR03MB5414.namprd03.prod.outlook.com (2603:10b6:5:2c2::6) by
 BN9PR03MB5964.namprd03.prod.outlook.com (2603:10b6:408:135::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.31; Tue, 2 May 2023 23:29:55 +0000
Received: from DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d]) by DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d%6]) with mapi id 15.20.6340.031; Tue, 2 May 2023
 23:29:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4db69382-e941-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683070222;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=PtilvS3couIDxORSsS51S+eP1j16D5Wq5MhXfN+iXfo=;
  b=Lx8o6MI/Vzba/YAdSME1pyYFWG7PjLC0flO/KbpQFrzgcy3POk8TS0Fk
   0/rDZKQXJRMDebzEHec5wuVekf3Opbr8jaLu7f5ZQ8ru6sh1L/WEwLuzP
   Tj++4d+Gn9FCOb68m9RIIfwNfI3cGGAtFH8JdUH6K49F8MjJfOfv4Mr8c
   k=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 108052479
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:Da4lkqMDv4yCtgrvrR2/lsFynXyQoLVcMsEvi/4bfWQNrUp20mQHy
 GBOCmjQPfuIamL8ctEkYdyy8EoHu5WBzdRqHgto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tF5gZmPpingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0ttUAkpi1
 +wfEwFOfgKkvO2c76i8FfY506zPLOGzVG8ekldJ6GiASNwAEdXESaiM4sJE1jAtgMwIBezZe
 8cSdTtoalLHfgFLPVAUTpk5mY9EhFGmK2Ee9A3T+PpxujaCpOBy+OGF3N79eNGMQ8Rbk1zep
 m/c9WnjHjkRNcCFyCrD+XWp7gPKtXqjCNpPTuHnp5aGhnW8nn04GDE8b2KrsMGQm0jhBYldA
 G0br39GQa8asRbDosPGdx+yrWOAvxUcc8FNCOB84waIooLE7gDcCmUaQzppbN09qNRwVTEsz
 kWOnd7iGXpoqrL9dJ6G3rKdrDf3My5FK2YHPHYAVVFcvYilp5wvhBXSSNolCLSyktD+BTD3x
 XaNsTQ6gLIQy8UM0s1X4Gz6vt5lnbCRJiZd2+kddjv+hu+lTOZJv7CV1GU=
IronPort-HdrOrdr: A9a23:dAQUaq19z5GohPKf+9JByQqjBdVxeYIsimQD101hICG9Lfb0qy
 n+pp4mPEHP4wr5OEtOpTlPAtjmfZq6z+8O3WBxB8bYYOCCggeVxe5ZnOjfKlHbalTDH6tmpN
 9dmstFeaLN5DpB7foSiTPQe7hA/DDEytHPuQ639QYQcegAUdAE0+4WMHf+LqQ7fnglOXJvf6
 Dsm/av6gDQMEj+Ka+Adwo4dtmGg+eOuIPtYBYACRJiwA6SjQmw4Lq/PwmE0gwYWzZvx65n1W
 TeiQT26oiqrvn+k3bnpiLuxqUTvOGk5spIBcSKhMRQAjLwijywbIAkd6yesCszqOSP7k9vtN
 XXuR8vM+l69nuUVGCophnG3RXmzV8VmjXf4G7dpUGmjd3yRTo8BcYErYVFciHB405lmN1nyq
 pE00+QqpISVHr77W/AzumNcysvulu/oHIkn+JWp3tDUbEGYLsUiYAE5ktaHLoJASq/woE6F+
 tFCt3a+Z9tABunRkGcmlMq7M2nX3w1EBvDak8euvaN2zwTp3x9x1tw/r1qol4wsLYGD7VU7e
 XNNapl0JtUSNUNUK57DOAdBeOqF23kW3v3QSOvCGWiMJtCF2PGqpbx7rlwzvqtYoY0wJw7n4
 mEeE9EtFQ1Z1nlBaS1rdN2Gyj2MSaAtAnWu4NjD8ATgMy4eFOrC1zNdLkWqbrhnx1FaferH8
 paO/ptcorexCXVaMF0NjbFKulvwEklIbMoU+kAKiOzS+LwW/rXX7/gAYDuDYuoNwoYcUXCJV
 ZGdATPBax7nzKWsznD8VTsZ08=
X-Talos-CUID: 9a23:cVMy3GMVvauN2+5DZQpk32I2CsceQz7N3lDrLGWXE2E2V+jA
X-Talos-MUID: 9a23:eJxp7wroHYq6VhtgiZgezx46Jex4z46UMVAcqbwXvZbabi5bGjjI2Q==
X-IronPort-AV: E=Sophos;i="5.99,245,1677560400"; 
   d="scan'208";a="108052479"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LVR4DddfZ3VJmR/6dMjgtHSn1mo4zeA5vuSO0NwBDd0InMmvuZ7pCLPvzUynRO8Waqr0EWQZx53EV6C46UsPKJyOvdblBJfuWlm9zBm9jkMeYyNh1MUkkstTiudunpBKdbc1WfJkGdiIDOo2e5sqEtK71JTgO6aqZ0HzVNM9FDwdZooYxzM67NibvQvV+05/9ZMFk3KiN37R12dKM4PAlTTi7sGh7+pUCOrzaHxsUO/DOrl+V5tWo7IQFaNhbpFXYRBiyNJamJ64n98m36tJBfxBBpJCWe/IPeKwuDOs4fDs+I40+YOkOEmAmK2uiBTjnXafqnh3bVxGZRined8d1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nnOUfgqG72GzlNLoWtCu9JubUWnp35TFYzuwlitj5yg=;
 b=YuIuVMJWsBt5FmpTJVbUB4jUYSLJQ8sC+1doBAs0Mzlq45fs15kgqxMo04iz5usFdwzM1sb6yaXFJtf0Ug04RzJnkNsjLGZ5MLTgEvN4ED6Gr7+UQLzgpP39y63xta8A+wPvJvEfyGSvsUisALGw9o9Zol/ezOzM/atyP6Ze7KScrLANM041MbVRJnQW/HGPV1zoa7V+HpjjzN/8ThsWbiceDO27mHOg3bgTBXbc8IysSh0n+PFcIrbW8uRq1TDnH+iFMpdujMTDSmwn2HNwz68wgybunU4+sx7hQRjmD3MOf4nbScyKn6xOtQWd6F9oegPJZ5/LJkWC39F5BpfVKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nnOUfgqG72GzlNLoWtCu9JubUWnp35TFYzuwlitj5yg=;
 b=KDIgP2XvydV+vk5M8aLAtUYvKsWuTQ0u3F2Hn+hAQBGSzH0wixzx1v+1mDs2A0osC+IOCjQnhuZCVnEZw1hIFibiMR+PLtGMQBAqOXiOad7PQtzZH+VlFfRY14SDws9rFwTgsOGhxuKr7EfdV08qvjtFomToh9dEbnit8rMnlSY=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <91620112-039e-9e8b-9bac-452ea9ecddaf@citrix.com>
Date: Wed, 3 May 2023 00:29:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 2/2] acpi: Add TPM2 interface definition.
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
 <20230425174733.795961-3-jennifer.herbert@citrix.com>
 <f9e72c6c-9915-9995-06c0-0a0ac037bde1@suse.com>
From: Jennifer Herbert <jennifer.herbert@citrix.com>
In-Reply-To: <f9e72c6c-9915-9995-06c0-0a0ac037bde1@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 02/05/2023 14:41, Jan Beulich wrote:
> On 25.04.2023 19:47, Jennifer Herbert wrote:
>> --- a/tools/libacpi/acpi2_0.h
>> +++ b/tools/libacpi/acpi2_0.h
>> @@ -121,6 +121,36 @@ struct acpi_20_tcpa {
>>   };
>>   #define ACPI_2_0_TCPA_LAML_SIZE (64*1024)
>>   
>> +/*
>> + * TPM2
>> + */
> Nit: While I'm willing to accept the comment style violation here as
> (apparently) intentional, ...

Well, I was trying to keep the file consistent. As far as I can tell, 
this styling is used throughout the file - unless I'm misunderstanding 
your 'Nit'. (Do you object to a multi-line comment being used for a 
single line?) But I'm blind to code style, so just say how you want it.


>> +struct acpi_20_tpm2 {
>> +    struct acpi_header header;
>> +    uint16_t platform_class;
>> +    uint16_t reserved;
>> +    uint64_t control_area_address;
>> +    uint32_t start_method;
>> +    uint8_t start_method_params[12];
>> +    uint32_t log_area_minimum_length;
>> +    uint64_t log_area_start_address;
>> +};
>> +#define TPM2_ACPI_CLASS_CLIENT      0
>> +#define TPM2_START_METHOD_CRB       7
>> +
>> +/* TPM register I/O Mapped region, location of which defined in the
>> + * TCG PC Client Platform TPM Profile Specification for TPM 2.0.
>> + * See table 9 - Only Locality 0 is used here. This is emulated by QEMU.
>> + * Definition of Register space is found in table 12.
>> + */
> ... this comment wants adjusting to hypervisor style (/* on its own line),
> as that looks to be the aimed-at style in this file.

Will do.


>> @@ -352,6 +353,7 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
>>       struct acpi_20_tcpa *tcpa;
>>       unsigned char *ssdt;
>>       void *lasa;
>> +    struct acpi_20_tpm2 *tpm2;
> Could I talk you into moving this up by two lines, such that it'll be
> adjacent to "tcpa"?


No problem.


>> @@ -450,6 +452,43 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
>>                                tcpa->header.length);
>>               }
>>               break;
>> +
>> +        case 2:
>> +            /* Check VID stored in bits 37:32 (3rd 16 bit word) of CRB
>> +             * identifier register.  See table 16 of TCG PC client platform
>> +             * TPM profile specification for TPM 2.0.
>> +             */
> Nit: This comment again wants a style adjustment.

ok


>> --- /dev/null
>> +++ b/tools/libacpi/ssdt_tpm2.asl
>> @@ -0,0 +1,36 @@
>> +/*
>> + * ssdt_tpm2.asl
>> + *
>> + * Copyright (c) 2018-2022, Citrix Systems, Inc.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU Lesser General Public License as published
>> + * by the Free Software Foundation; version 2.1 only. with the special
>> + * exception on linking described in file LICENSE.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU Lesser General Public License for more details.
>> + */
> While the full conversion to SPDX was done in the hypervisor only so far,
> I think new tool stack source files would better use the much shorter
> SPDX equivalent, too.

OK, this is where I get a bit confused. I believe I copied the licence 
from ssdt_tpm.asl, for consistency.

So I think I need to use 'LGPL-2.1-only', but then it says it's using an 
exception on linking as described in LICENSE - but which LICENSE file? 
So I'm not sure what exception I should be adding. Do you know?
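For what it's worth, the SPDX form suggested would presumably look something like the following at the top of ssdt_tpm2.asl (the identifier is assumed from the licence text quoted above; whether a linking-exception suffix is needed is exactly the open question):

```
/*
 * ssdt_tpm2.asl
 *
 * SPDX-License-Identifier: LGPL-2.1-only
 *
 * Copyright (c) 2018-2022, Citrix Systems, Inc.
 */
```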


> Then on top of Jason's R-b,
> Acked-by: Jan Beulich <jbeulich@suse.com>
>
> Jan


Thanks,

-jenny



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:29 2023
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Rahul Singh
	<rahul.singh@arm.com>, Anthony PERARD <anthony.perard@citrix.com>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN][PATCH v6 00/19] dynamic node programming using overlay dtbo
Date: Tue, 2 May 2023 16:36:31 -0700
Message-ID: <20230502233650.20121-1-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain

Hi,
This patch series introduces dynamic node programming, i.e. adding/removing
devices at run time. Using "xl dt_overlay", a device can be added or removed
with a dtbo.

For adding a node using dynamic programming:
    1. The flattened device tree overlay node is added to an fdt.
    2. The updated fdt is unflattened into a new dt_host_new.
    3. The newly added node's information is extracted from dt_host_new.
    4. The new node is added under the correct parent in the original dt_host.
    5. Interrupts and iomem regions are mapped/permitted as required.
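In rough pseudocode, the addition path looks like the following (function names are illustrative, not the exact functions from the series):

```
/* Hedged sketch of the node-addition path; names are placeholders. */
fdt = current_host_fdt();
fdt_overlay_apply(fdt, overlay_dtbo);          /* 1: merge dtbo into fdt   */
dt_host_new = unflatten_device_tree(fdt);      /* 2: unflatten updated fdt */
node = find_overlay_node(dt_host_new, path);   /* 3: locate the new node   */
attach_node(dt_host, parent_of(node), node);   /* 4: graft under parent    */
setup_irq_and_iomem_permissions(node);         /* 5: permit IRQ/iomem      */
```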

For removing a node:
    1. Find the node with the given path.
    2. Check whether the node is used by any domU; remove the node only when
        it is not used by any domain.
    3. Remove IRQ permissions and MMIO access.
    4. Find the node in dt_host and delete the device node entry from dt_host.
    5. Free the overlay_tracker entry, which also frees the dt_host_new
        created in the node-addition step.
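Assuming the dt_overlay -> dt-overlay rename noted in the v5 changelog, the user-facing invocation would presumably look along these lines (hypothetical usage; the exact syntax depends on the final xl patch):

```
xl dt-overlay add my-device.dtbo       # add the node(s) described by the dtbo
xl dt-overlay remove my-device.dtbo    # remove them again
```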

The main purpose of this series is to address the first part of dynamic
programming, i.e. making Xen aware of the new device tree node, which means
updating dt_host with the overlay node information. Here we add/remove nodes
from dt_host and check/set IOMMU and IRQ permissions, but never map them to any
domain. Right now, mapping/unmapping happens only when a new domU is
created/destroyed using "xl create".

To map IOREQ and IOMMU during runtime, there will be another small series after
this one where we will do the actual IOMMU and IRQ mapping to a running domain
and will call unmap_mmio_regions() to remove the mapping.

Change Log:
 v5 -> v6:
    Add separate patch for memory allocation failure in __unflatten_device_tree().
    Move __unflatten_device_tree() function type changes to single patch.
    Add error propagation for failures in unflatten_dt_node.
    Change CONFIG_OVERLAY_DTB status to "ARM: Tech Preview".
    xen/smmu: Add remove_device callback for smmu_iommu ops:
        Added check to see if device is currently used.
    common/device_tree: Add rwlock for dt_host:
        Addressed feedback from Henry to rearrange code.
    xen/arm: Implement device tree node removal functionalities:
        Changed file name to dash format.
        Addressed Michal's comments.
    Rectified formatting-related errors pointed out by Michal.

 v4 -> v5:
    Split patch 01/16 to two patches. One with function type changes and another
        with changes inside unflatten_device_tree().
    Change dt_overlay xl command to dt-overlay.
    Protect overlay functionality with CONFIG(arm).
    Fix rwlock issues.
    Move the "device_tree.h" include to the .c file where arch_cpu_init() is
        called and forward declare dt_device_node. This was done to avoid a
        circular dependency between device_tree.h and rwlock.h.
    Address Michal's comment on coding style.

 v3 -> v4:
    Add support for adding node's children.
    Add rwlock to dt_host functions.
    Corrected fdt size issue when applying overlay into it.
    Add memory allocation fail handling for unflatten_device_tree().
    Changed xl overlay to xl dt_overlay.
    Correct commit messages.
    Addressed code issues from the v3 review.

 v2 -> v3:
    Moved overlay functionalities to dt_overlay.c file.
    Renamed XEN_SYSCTL_overlay to XEN_SYSCTL_dt_overlay.
    Add dt_* prefix to overlay_add/remove_nodes.
    Added dtdevs_lock to protect iommu_add_dt_device().
    For iommu, moved spin_lock to caller.
    Addressed code issues from the v2 review.

 v1 -> v2:
    Add support for multiple node addition/removal using dtbo.
    Replaced fpga-add and fpga-remove with one hypercall overlay_op.
    Moved common domain_build.c functions to device.c.
    Added the OVERLAY_DTB configuration.
    Renamed overlay_get_target() to fdt_overlay_get_target().
    Split the remove_device patch into two patches.
    Moved overlay_add/remove code to sysctl, changing it from domctl to sysctl.
    Added all overlay code under CONFIG_OVERLAY_DTB.
    Renamed all toolstack fpga functions to overlay.
    Addressed code issues from v1 review.

Regards,
Vikram

Vikram Garhwal (19):
  xen/arm/device: Remove __init from function type
  common/device_tree: handle memory allocation failure in
    __unflatten_device_tree()
  common/device_tree: change __unflatten_device_tree() type
  common/device_tree.c: unflatten_device_tree() propagate errors
  xen/arm: Add CONFIG_OVERLAY_DTB
  libfdt: Keep fdt functions after init for CONFIG_OVERLAY_DTB.
  libfdt: overlay: change overlay_get_target()
  xen/device-tree: Add device_tree_find_node_by_path() to find nodes in
    device tree
  xen/iommu: Move spin_lock from iommu_dt_device_is_assigned to caller
  xen/iommu: protect iommu_add_dt_device() with dtdevs_lock
  xen/iommu: Introduce iommu_remove_dt_device()
  xen/smmu: Add remove_device callback for smmu_iommu ops
  asm/smp.h: Fix circular dependency for device_tree.h and rwlock.h
  common/device_tree: Add rwlock for dt_host
  xen/arm: Implement device tree node removal functionalities
  xen/arm: Implement device tree node addition functionalities
  tools/libs/ctrl: Implement new xc interfaces for dt overlay
  tools/libs/light: Implement new libxl functions for device tree
    overlay ops
  tools/xl: Add new xl command overlay for device tree overlay support

 SUPPORT.md                              |   6 +
 tools/include/libxl.h                   |  11 +
 tools/include/xenctrl.h                 |   5 +
 tools/libs/ctrl/Makefile.common         |   1 +
 tools/libs/ctrl/xc_dt_overlay.c         |  48 ++
 tools/libs/light/Makefile               |   3 +
 tools/libs/light/libxl_dt_overlay.c     |  71 ++
 tools/xl/xl.h                           |   1 +
 tools/xl/xl_cmdtable.c                  |   6 +
 tools/xl/xl_vmcontrol.c                 |  52 ++
 xen/arch/arm/Kconfig                    |   5 +
 xen/arch/arm/device.c                   | 144 ++++
 xen/arch/arm/domain_build.c             | 142 ----
 xen/arch/arm/include/asm/domain_build.h |   2 -
 xen/arch/arm/include/asm/setup.h        |   6 +
 xen/arch/arm/include/asm/smp.h          |   3 +-
 xen/arch/arm/smpboot.c                  |   1 +
 xen/arch/arm/sysctl.c                   |  16 +-
 xen/common/Makefile                     |   1 +
 xen/common/device_tree.c                |  50 +-
 xen/common/dt-overlay.c                 | 929 ++++++++++++++++++++++++
 xen/common/libfdt/Makefile              |   4 +
 xen/common/libfdt/fdt_overlay.c         |  29 +-
 xen/common/libfdt/version.lds           |   1 +
 xen/drivers/passthrough/arm/smmu.c      |  58 ++
 xen/drivers/passthrough/device_tree.c   |  87 ++-
 xen/include/public/sysctl.h             |  23 +
 xen/include/xen/device_tree.h           |  28 +-
 xen/include/xen/dt-overlay.h            |  58 ++
 xen/include/xen/iommu.h                 |   3 +
 xen/include/xen/libfdt/libfdt.h         |  18 +
 31 files changed, 1623 insertions(+), 189 deletions(-)
 create mode 100644 tools/libs/ctrl/xc_dt_overlay.c
 create mode 100644 tools/libs/light/libxl_dt_overlay.c
 create mode 100644 xen/common/dt-overlay.c
 create mode 100644 xen/include/xen/dt-overlay.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:32 2023
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v6 01/19] xen/arm/device: Remove __init from function type
Date: Tue, 2 May 2023 16:36:32 -0700
Message-ID: <20230502233650.20121-2-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain

Remove __init from the following functions so they can be called at runtime:
    1. map_irq_to_domain()
    2. handle_device_interrupts()
    3. map_range_to_domain()
    4. unflatten_dt_node()

Move the map_irq_to_domain() prototype from domain_build.h to setup.h.

To avoid breaking the build, the following changes are also made:
1. Move map_irq_to_domain(), handle_device_interrupts() and
    map_range_to_domain() to device.c. Once __init is removed, these functions
    are no longer specific to domain building, so move them out of
    domain_build.c into device.c.
2. Remove the static qualifier from handle_device_interrupts().

Overall, these changes are done to support dynamic programming of nodes,
where an overlay node will be added to the fdt and the unflattened node will be
added to dt_host. Furthermore, IRQ and MMIO mappings will be done for the added
node.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/arch/arm/device.c                   | 144 ++++++++++++++++++++++++
 xen/arch/arm/domain_build.c             | 142 -----------------------
 xen/arch/arm/include/asm/domain_build.h |   2 -
 xen/arch/arm/include/asm/setup.h        |   6 +
 xen/common/device_tree.c                |  12 +-
 5 files changed, 156 insertions(+), 150 deletions(-)

diff --git a/xen/arch/arm/device.c b/xen/arch/arm/device.c
index ca8539dee5..84197981a0 100644
--- a/xen/arch/arm/device.c
+++ b/xen/arch/arm/device.c
@@ -9,8 +9,10 @@
  */
 
 #include <asm/device.h>
+#include <asm/setup.h>
 #include <xen/errno.h>
 #include <xen/init.h>
+#include <xen/iocap.h>
 #include <xen/lib.h>
 
 extern const struct device_desc _sdevice[], _edevice[];
@@ -75,6 +77,148 @@ enum device_class device_get_class(const struct dt_device_node *dev)
     return DEVICE_UNKNOWN;
 }
 
+int map_irq_to_domain(struct domain *d, unsigned int irq,
+                      bool need_mapping, const char *devname)
+{
+    int res;
+
+    res = irq_permit_access(d, irq);
+    if ( res )
+    {
+        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
+               d->domain_id, irq);
+        return res;
+    }
+
+    if ( need_mapping )
+    {
+        /*
+         * Checking the return of vgic_reserve_virq is not
+         * necessary. It should not fail except when we try to map
+         * the IRQ twice. This can legitimately happen if the IRQ is shared
+         */
+        vgic_reserve_virq(d, irq);
+
+        res = route_irq_to_guest(d, irq, irq, devname);
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
+                   irq, d->domain_id);
+            return res;
+        }
+    }
+
+    dt_dprintk("  - IRQ: %u\n", irq);
+    return 0;
+}
+
+int map_range_to_domain(const struct dt_device_node *dev,
+                        u64 addr, u64 len, void *data)
+{
+    struct map_range_data *mr_data = data;
+    struct domain *d = mr_data->d;
+    int res;
+
+    /*
+     * reserved-memory regions are RAM carved out for a special purpose.
+     * They are not MMIO and therefore a domain should not be able to
+     * manage them via the IOMEM interface.
+     */
+    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
+                     strlen("/reserved-memory/")) != 0 )
+    {
+        res = iomem_permit_access(d, paddr_to_pfn(addr),
+                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to permit to dom%d access to"
+                    " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                    d->domain_id,
+                    addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
+            return res;
+        }
+    }
+
+    if ( !mr_data->skip_mapping )
+    {
+        res = map_regions_p2mt(d,
+                               gaddr_to_gfn(addr),
+                               PFN_UP(len),
+                               maddr_to_mfn(addr),
+                               mr_data->p2mt);
+
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
+                   " - 0x%"PRIx64" in domain %d\n",
+                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
+                   d->domain_id);
+            return res;
+        }
+    }
+
+    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
+               addr, addr + len, mr_data->p2mt);
+
+    return 0;
+}
+
+/*
+ * handle_device_interrupts retrieves the interrupts configuration from
+ * a device tree node and maps those interrupts to the target domain.
+ *
+ * Returns:
+ *   < 0 error
+ *   0   success
+ */
+int handle_device_interrupts(struct domain *d,
+                             struct dt_device_node *dev,
+                             bool need_mapping)
+{
+    unsigned int i, nirq;
+    int res;
+    struct dt_raw_irq rirq;
+
+    nirq = dt_number_of_irq(dev);
+
+    /* Give permission and map IRQs */
+    for ( i = 0; i < nirq; i++ )
+    {
+        res = dt_device_get_raw_irq(dev, i, &rirq);
+        if ( res )
+        {
+            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
+                   i, dt_node_full_name(dev));
+            return res;
+        }
+
+        /*
+         * Don't map IRQ that have no physical meaning
+         * ie: IRQ whose controller is not the GIC
+         */
+        if ( rirq.controller != dt_interrupt_controller )
+        {
+            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
+                      i, dt_node_full_name(rirq.controller));
+            continue;
+        }
+
+        res = platform_get_irq(dev, i);
+        if ( res < 0 )
+        {
+            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
+                   i, dt_node_full_name(dev));
+            return res;
+        }
+
+        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
+        if ( res )
+            return res;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f80fdd1af2..5e4108a233 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2257,41 +2257,6 @@ int __init make_chosen_node(const struct kernel_info *kinfo)
     return res;
 }
 
-int __init map_irq_to_domain(struct domain *d, unsigned int irq,
-                             bool need_mapping, const char *devname)
-{
-    int res;
-
-    res = irq_permit_access(d, irq);
-    if ( res )
-    {
-        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
-               d->domain_id, irq);
-        return res;
-    }
-
-    if ( need_mapping )
-    {
-        /*
-         * Checking the return of vgic_reserve_virq is not
-         * necessary. It should not fail except when we try to map
-         * the IRQ twice. This can legitimately happen if the IRQ is shared
-         */
-        vgic_reserve_virq(d, irq);
-
-        res = route_irq_to_guest(d, irq, irq, devname);
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
-                   irq, d->domain_id);
-            return res;
-        }
-    }
-
-    dt_dprintk("  - IRQ: %u\n", irq);
-    return 0;
-}
-
 static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
                                        const struct dt_irq *dt_irq,
                                        void *data)
@@ -2323,57 +2288,6 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
     return 0;
 }
 
-int __init map_range_to_domain(const struct dt_device_node *dev,
-                               u64 addr, u64 len, void *data)
-{
-    struct map_range_data *mr_data = data;
-    struct domain *d = mr_data->d;
-    int res;
-
-    /*
-     * reserved-memory regions are RAM carved out for a special purpose.
-     * They are not MMIO and therefore a domain should not be able to
-     * manage them via the IOMEM interface.
-     */
-    if ( strncasecmp(dt_node_full_name(dev), "/reserved-memory/",
-                     strlen("/reserved-memory/")) != 0 )
-    {
-        res = iomem_permit_access(d, paddr_to_pfn(addr),
-                paddr_to_pfn(PAGE_ALIGN(addr + len - 1)));
-        if ( res )
-        {
-            printk(XENLOG_ERR "Unable to permit to dom%d access to"
-                    " 0x%"PRIx64" - 0x%"PRIx64"\n",
-                    d->domain_id,
-                    addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1);
-            return res;
-        }
-    }
-
-    if ( !mr_data->skip_mapping )
-    {
-        res = map_regions_p2mt(d,
-                               gaddr_to_gfn(addr),
-                               PFN_UP(len),
-                               maddr_to_mfn(addr),
-                               mr_data->p2mt);
-
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to map 0x%"PRIx64
-                   " - 0x%"PRIx64" in domain %d\n",
-                   addr & PAGE_MASK, PAGE_ALIGN(addr + len) - 1,
-                   d->domain_id);
-            return res;
-        }
-    }
-
-    dt_dprintk("  - MMIO: %010"PRIx64" - %010"PRIx64" P2MType=%x\n",
-               addr, addr + len, mr_data->p2mt);
-
-    return 0;
-}
-
 /*
  * For a node which describes a discoverable bus (such as a PCI bus)
  * then we may need to perform additional mappings in order to make
@@ -2401,62 +2315,6 @@ static int __init map_device_children(const struct dt_device_node *dev,
     return 0;
 }
 
-/*
- * handle_device_interrupts retrieves the interrupts configuration from
- * a device tree node and maps those interrupts to the target domain.
- *
- * Returns:
- *   < 0 error
- *   0   success
- */
-static int __init handle_device_interrupts(struct domain *d,
-                                           struct dt_device_node *dev,
-                                           bool need_mapping)
-{
-    unsigned int i, nirq;
-    int res;
-    struct dt_raw_irq rirq;
-
-    nirq = dt_number_of_irq(dev);
-
-    /* Give permission and map IRQs */
-    for ( i = 0; i < nirq; i++ )
-    {
-        res = dt_device_get_raw_irq(dev, i, &rirq);
-        if ( res )
-        {
-            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
-        }
-
-        /*
-         * Don't map IRQ that have no physical meaning
-         * ie: IRQ whose controller is not the GIC
-         */
-        if ( rirq.controller != dt_interrupt_controller )
-        {
-            dt_dprintk("irq %u not connected to primary controller. Connected to %s\n",
-                      i, dt_node_full_name(rirq.controller));
-            continue;
-        }
-
-        res = platform_get_irq(dev, i);
-        if ( res < 0 )
-        {
-            printk(XENLOG_ERR "Unable to get irq %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
-        }
-
-        res = map_irq_to_domain(d, res, need_mapping, dt_node_name(dev));
-        if ( res )
-            return res;
-    }
-
-    return 0;
-}
-
 /*
  * For a given device node:
  *  - Give permission to the guest to manage IRQ and MMIO range
diff --git a/xen/arch/arm/include/asm/domain_build.h b/xen/arch/arm/include/asm/domain_build.h
index 34ceddc995..b9329c9ee0 100644
--- a/xen/arch/arm/include/asm/domain_build.h
+++ b/xen/arch/arm/include/asm/domain_build.h
@@ -4,8 +4,6 @@
 #include <xen/sched.h>
 #include <asm/kernel.h>
 
-int map_irq_to_domain(struct domain *d, unsigned int irq,
-                      bool need_mapping, const char *devname);
 int make_chosen_node(const struct kernel_info *kinfo);
 void evtchn_allocate(struct domain *d);
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 38e2ce255f..40a30c78b7 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -165,9 +165,15 @@ void device_tree_get_reg(const __be32 **cell, u32 address_cells,
 u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
 
+int handle_device_interrupts(struct domain *d, struct dt_device_node *dev,
+                             bool need_mapping);
+
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
+int map_irq_to_domain(struct domain *d, unsigned int irq,
+                      bool need_mapping, const char *devname);
+
 extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
 
 #ifdef CONFIG_ARM_64
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7b..5f7ae45304 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1811,12 +1811,12 @@ int dt_count_phandle_with_args(const struct dt_device_node *np,
  * @allnextpp: pointer to ->allnext from last allocated device_node
  * @fpsize: Size of the node path up at the current depth.
  */
-static unsigned long __init unflatten_dt_node(const void *fdt,
-                                              unsigned long mem,
-                                              unsigned long *p,
-                                              struct dt_device_node *dad,
-                                              struct dt_device_node ***allnextpp,
-                                              unsigned long fpsize)
+static unsigned long unflatten_dt_node(const void *fdt,
+                                       unsigned long mem,
+                                       unsigned long *p,
+                                       struct dt_device_node *dad,
+                                       struct dt_device_node ***allnextpp,
+                                       unsigned long fpsize)
 {
     struct dt_device_node *np;
     struct dt_property *pp, **prev_pp = NULL;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528855.822525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYn-0005Th-U9; Tue, 02 May 2023 23:37:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528855.822525; Tue, 02 May 2023 23:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYn-0005Ta-QX; Tue, 02 May 2023 23:37:29 +0000
Received: by outflank-mailman (input) for mailman id 528855;
 Tue, 02 May 2023 23:37:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzYm-0005Si-Oc
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:28 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20623.outbound.protection.outlook.com
 [2a01:111:f400:7e88::623])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4ba8dd56-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:26 +0200 (CEST)
Received: from BN9PR03CA0740.namprd03.prod.outlook.com (2603:10b6:408:110::25)
 by MN0PR12MB6128.namprd12.prod.outlook.com (2603:10b6:208:3c4::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 23:37:22 +0000
Received: from BN8NAM11FT063.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:110:cafe::cb) by BN9PR03CA0740.outlook.office365.com
 (2603:10b6:408:110::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:22 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT063.mail.protection.outlook.com (10.13.177.110) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.22 via Frontend Transport; Tue, 2 May 2023 23:37:22 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:22 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:21 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:21 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ba8dd56-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QTtkvu0gTNoHijYkh0caIcnZNMVLdlec5IVV1/GIiTyjW8HTmKScMudSqSiA3htV1cZ8vvhz5Bp/iZxudHo6+QUlCAtUcKThf+SRQ50LgjN9tHlngZkudxvpRTONEh2X3jrIQ7j0XDW1jl44qpHXArcZOmLeQibWvf31pSHUv/3Jiov4FyS8ANGHpCEQ16Ebhp3dmYOCLLU3oSD/5GM2sHD9RM0YdolPfIPFvsv5spS352CyEZZBn0DTy/bewuV3hojwbfrFIuT31bYRiCcbai+i86Qs9LrNF5edALt1qcC4S5osQ/yQ3mYm9BwHeuQzP/8yg1iio7VIktsgVQ8pxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qGBrlYz1MYPx3dVlqExLGnD1JoxqWVU0WGIisylldqw=;
 b=JGM+vcyC9zcVJ2lePJixNlxjB8OaBGl9kNpIMLOyiWdAyyaFllsWoph8OQu0PUe+WBjm7p5F9YKGVaP3cUkvauBksR2KigrhusO5wEBshJnBuHjtluCBEoW9hm8cgySyGsYiL3VoemRRJ6hSBwbGPxdwMQGh3ZbLsYlXYbIvQ80PiYAL9Ew0KohhuJm5SrL5K3d95bUblb9tydvbEWK3QYC/4R5oG2xOM4CmS58xTI0Rij8YK+/F2Q5JQeZ/kNHRDSOoG/LFQOfloIeFMAjvGBUnSSh8I7BAz7TRS6jIkI1Xv0WuvvQn+tSVfnwWCxj0CN6qgNbSQZpgzWP9jkqn0A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qGBrlYz1MYPx3dVlqExLGnD1JoxqWVU0WGIisylldqw=;
 b=b0/gi+WxhUSiSWWIvLFFkAUwRqFGs3UEBrAFs07LH6hZ84oI6vK6hqW6holvwjIV3jQiHwobuv5R+dQHHoiqCYsvctQ+e/lOYTIdtXWan8YNF8dBsqGE6UYT4oybDw0v6AyDLHnktDOldilktGJ/5AV4TN9e95H/qWVqiFBbALo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: [XEN][PATCH v6 03/19] common/device_tree: change __unflatten_device_tree() type
Date: Tue, 2 May 2023 16:36:34 -0700
Message-ID: <20230502233650.20121-4-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT063:EE_|MN0PR12MB6128:EE_
X-MS-Office365-Filtering-Correlation-Id: 29ae46b7-ee61-4327-35bc-08db4b662d8a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TXciANatIM6/IA9AvojUj9UdWhHKXAjDMGdyeU7y3PB48zQRGYNcagqGIB4G/9pZwjBlr3rKRIB1gvc91DXmfFM/EKpywHX2cc4ldJKyO0WqRdKA1NeVp3RGih4E0Z4auQW29j56ew7JUrTl9LUcmJ+nYoM+V9ULgs6Inyh+Etrm8LLl6Qf/odhshXhPaXXq/L7rZmYz63BSG4HpzeV6clkINLNS3yKBzM/ueQ7YmFvDpLcCMWskCYFXxui12nhmw0urmd4HC3mPMDiWIcigy4idQRczIg9IOq4adYscUamiqrKLVbLEqWkL0UqK3+DKOn56XuSXtrLXuOircKZyjI9Je3gFadHOORoJE/4Jc74qMiK4x+1RoUoHeMV90AKRmbm6yis3hnW8/3zekqawupqsB1wG4KoFW6TQErDNFg82moRqoNZP/NR0hu8QJg6bVDHke/zXNNb6FIYWb3HWAGXlgk1UtOAfs2LgQm7y38qrHh29DZpdwrOKjqhIwfNHStb2q/7ycAHlUnm/4FLUK3sHTvzm5H4Jln/fNq9Pfcad/fF1C6sLNZr759GFEmm17lTzVU4dgHKAizHd8zr0ceqavN6CnDuITP5a+KQUpq6tZqDfhjaoymXCru5MnOyahS7RJaaJBvbrNjt1uFqM8BEuxbHqRrg+f2ZpBZYJCLkmtQwinAqxUpU1ZOH9ZvZ+g22HYi9fqk9I77DriJ/oHFnSw/MgvXuDfu6zLSCAncw=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(376002)(39860400002)(346002)(451199021)(40470700004)(46966006)(36840700001)(41300700001)(8936002)(44832011)(8676002)(5660300002)(86362001)(36860700001)(47076005)(83380400001)(81166007)(82740400003)(356005)(2616005)(426003)(316002)(26005)(40480700001)(478600001)(36756003)(336012)(54906003)(70586007)(4326008)(6666004)(6916009)(70206006)(40460700003)(2906002)(1076003)(186003)(82310400005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:22.5368
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 29ae46b7-ee61-4327-35bc-08db4b662d8a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT063.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB6128

The following changes are made to __unflatten_device_tree():
    1. Rename __unflatten_device_tree() to unflatten_device_tree().
    2. Remove the __init and static qualifiers.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c      | 9 ++++-----
 xen/include/xen/device_tree.h | 5 +++++
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index fc38a0b3dd..5daf5197bd 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -2047,7 +2047,7 @@ static unsigned long unflatten_dt_node(const void *fdt,
 }
 
 /**
- * __unflatten_device_tree - create tree of device_nodes from flat blob
+ * unflatten_device_tree - create tree of device_nodes from flat blob
  *
  * unflattens a device-tree, creating the
  * tree of struct device_node. It also fills the "name" and "type"
@@ -2056,8 +2056,7 @@ static unsigned long unflatten_dt_node(const void *fdt,
  * @fdt: The fdt to expand
  * @mynodes: The device_node tree created by the call
  */
-static int __init __unflatten_device_tree(const void *fdt,
-                                          struct dt_device_node **mynodes)
+int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
 {
     unsigned long start, mem, size;
     struct dt_device_node **allnextp = mynodes;
@@ -2183,9 +2182,9 @@ dt_find_interrupt_controller(const struct dt_device_match *matches)
 
 void __init dt_unflatten_host_device_tree(void)
 {
-    int error = __unflatten_device_tree(device_tree_flattened, &dt_host);
+    int error = unflatten_device_tree(device_tree_flattened, &dt_host);
     if ( error )
-        panic("__unflatten_device_tree failed with error %d\n", error);
+        panic("unflatten_device_tree failed with error %d\n", error);
 
     dt_alias_scan();
 }
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 19a74909ce..eef0335b79 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -178,6 +178,11 @@ int device_tree_for_each_node(const void *fdt, int node,
  */
 void dt_unflatten_host_device_tree(void);
 
+/**
+ * unflatten any device tree.
+ */
+int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes);
+
 /**
  * IRQ translation callback
  * TODO: For the moment we assume that we only have ONE
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528856.822535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYp-0005kw-5L; Tue, 02 May 2023 23:37:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528856.822535; Tue, 02 May 2023 23:37:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYp-0005kp-2Q; Tue, 02 May 2023 23:37:31 +0000
Received: by outflank-mailman (input) for mailman id 528856;
 Tue, 02 May 2023 23:37:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzYn-0004sC-M7
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:29 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2061b.outbound.protection.outlook.com
 [2a01:111:f400:fe59::61b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4c196cac-e942-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 01:37:28 +0200 (CEST)
Received: from DS7P222CA0004.NAMP222.PROD.OUTLOOK.COM (2603:10b6:8:2e::14) by
 DS0PR12MB8455.namprd12.prod.outlook.com (2603:10b6:8:158::16) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6340.30; Tue, 2 May 2023 23:37:22 +0000
Received: from DM6NAM11FT102.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:2e:cafe::b0) by DS7P222CA0004.outlook.office365.com
 (2603:10b6:8:2e::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:22 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT102.mail.protection.outlook.com (10.13.173.172) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:22 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:17 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:16 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c196cac-e942-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U+Jo3aZ/KXfCpID5n2MqwuEg+k/PJWPStk/V7ThEjUdZSxD2n071o+4GiDyi7VQShAVY17IRY7Ygfy2XcBUTIEmJs0AUzAi+Fnktq7MGgARIkko7gEphpqHdWdZ7OE5/VhUK5LRMpnQThG34+rszzv/22fhLnNqzN/+7Pzldq0Muno6B553aa2LvDjmv+IBqyBfCl1PXWMUSqtZY3aOdnS+XotN2TdwdmXXCQ2A+k+3iQcdjgBfnmrm5B4A8n5GKu4s9i6HkjlKMgjDyfb/Jtd6wB+TFceuEhm7elEcPXrJS+G9ICJovNLTohBfS4myaVSYHLTJGDl1hj6QSAnh5jw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vzddX9TV10ndAd/6iE6kCNK/WBOdnTL0B9KxpRRHgRI=;
 b=j3fsZJ9IS3+TAhj6342U+O5MPJm3D1bTxoachuuFo/r8ed1CYRt4rNRKAn9gD9n0krpTmCoebLydEExor4KfOcxoANBbt3fUieHqEAKR1t0CTUoqzdF80prbHe7a1Txqvz1yGL5JOBlgNKdtAlH1JdFggaU8Fdj3KgGz/OF6ulbE6PMMgPqiVGLQWlcYGv/zOj/KKcwimlVJL+oerHgfQ6rFa5k2ghnpXwHbkoZ80NmtxpJBKJIl2oICsPg0FtlN2TFxS2wgA9bbwe8KB50uqY4ZjyWMxxYOO7DPeZyurt/2WWGFzVwL5Maw0kDemxiYUKBl+FwoL7De839WZvrX1A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vzddX9TV10ndAd/6iE6kCNK/WBOdnTL0B9KxpRRHgRI=;
 b=zTMm0RzZVvXES+Lc4cafDnZ6fs6J9ghbOAfPJXcrA7fJUAzCobmGtkkaJqH7E2PgoOoihXEnWr+Kh1+BdT7ujmx1hBYb36qg6bhudH5Rk1JZlgBk0IhPz7cJpjqEfzLOA4+xylCoZRhU1X/smL9grrElU3tAmjUoQCPgiexGxic=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: [XEN][PATCH v6 02/19] common/device_tree: handle memory allocation failure in __unflatten_device_tree()
Date: Tue, 2 May 2023 16:36:33 -0700
Message-ID: <20230502233650.20121-3-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT102:EE_|DS0PR12MB8455:EE_
X-MS-Office365-Filtering-Correlation-Id: 00c59c3d-252e-446b-6218-08db4b662d61
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	bXZpaaDpLqpx3roIliDKqmF4bmnR5HXhmhZVM2NqaPcDVBz31GEC/P2lYHnGwskM3ttHDDta60GqtlFtTIBtKvOWiU5BFy3nB0+4yngPumQS6bNottUnxA+rZkIBx4GUOjkWuFzuDkFaapJG8Cha2wE1Rs1l3NZ3Kwl+GD8NAWwjLVK7dExeGwbJww/zmNuRW/1DpZ6S53DTOyWWbEx0cyMxkuD+2GsmLp8U3mvihdVWbrcOCMJJvgBxnqGSPy1eF8+hKrTvyitW9TuJGXIDA0cN83Q8aqbfa+NyW6Q2nzp/5iP7wMe3v5ofIgM+Lvhkkeq7FOAaTObKqS9xEV4GOmF3wDpBT3T19HYOFii1fJcAoxb8lXIgVxrGO982BogbCI3DdjJD2iPUU+Upywx3hAwhcDPcQwmf87l7UJQ3aFkfHQndWb0hw1pNQNvbwBNgAtwUqdDmFPSpzQNnPVhDUpZSNI0sehU2LBu7WKz+a1eUokDVs1L4nVcU5Er1bR03dOcV2gpt6yn4DWC4cJEVjPHb36XXbAdJoQw41qZyl2jteQx9cX3Btr4AiNOIsU7ZKp74sqW4Ns9H7kpvZCG/ZIV/ogC9rEekq7qZesxjsGau3cT+2b/aoImpD/5EKH7t7rLX+WUzX9q3mwdz3udItMPeHZjy5ekkdMXaQSKMeGhGrOPq845wG/bn+LwGKLPWnduD3troODuoJ0ZZDrZTtHtGBYO+Fxg3RikM9XX1iuU=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(39860400002)(346002)(136003)(451199021)(40470700004)(46966006)(36840700001)(82310400005)(83380400001)(47076005)(36860700001)(2616005)(40480700001)(41300700001)(40460700003)(81166007)(356005)(36756003)(70206006)(70586007)(82740400003)(6916009)(316002)(4326008)(186003)(478600001)(1076003)(86362001)(336012)(426003)(54906003)(26005)(5660300002)(8936002)(8676002)(44832011)(2906002)(6666004)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:22.2380
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 00c59c3d-252e-446b-6218-08db4b662d61
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT102.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB8455

Change __unflatten_device_tree()'s return type to int so it can propagate
memory allocation failures to its callers. Add a panic() in
dt_unflatten_host_device_tree(), since a memory allocation failure while
unflattening the host device tree during boot is fatal.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 5f7ae45304..fc38a0b3dd 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -2056,8 +2056,8 @@ static unsigned long unflatten_dt_node(const void *fdt,
  * @fdt: The fdt to expand
  * @mynodes: The device_node tree created by the call
  */
-static void __init __unflatten_device_tree(const void *fdt,
-                                           struct dt_device_node **mynodes)
+static int __init __unflatten_device_tree(const void *fdt,
+                                          struct dt_device_node **mynodes)
 {
     unsigned long start, mem, size;
     struct dt_device_node **allnextp = mynodes;
@@ -2078,6 +2078,8 @@ static void __init __unflatten_device_tree(const void *fdt,
 
     /* Allocate memory for the expanded device tree */
     mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct dt_device_node));
+    if ( !mem )
+        return -ENOMEM;
 
     ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
 
@@ -2095,6 +2097,8 @@ static void __init __unflatten_device_tree(const void *fdt,
     *allnextp = NULL;
 
     dt_dprintk(" <- unflatten_device_tree()\n");
+
+    return 0;
 }
 
 static void dt_alias_add(struct dt_alias_prop *ap,
@@ -2179,7 +2183,10 @@ dt_find_interrupt_controller(const struct dt_device_match *matches)
 
 void __init dt_unflatten_host_device_tree(void)
 {
-    __unflatten_device_tree(device_tree_flattened, &dt_host);
+    int error = __unflatten_device_tree(device_tree_flattened, &dt_host);
+    if ( error )
+        panic("__unflatten_device_tree failed with error %d\n", error);
+
     dt_alias_scan();
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528857.822545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYr-000649-G0; Tue, 02 May 2023 23:37:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528857.822545; Tue, 02 May 2023 23:37:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYr-000640-C4; Tue, 02 May 2023 23:37:33 +0000
Received: by outflank-mailman (input) for mailman id 528857;
 Tue, 02 May 2023 23:37:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzYq-0004sC-9f
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:32 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2061f.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::61f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4ea29618-e942-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 01:37:31 +0200 (CEST)
Received: from DS7P222CA0025.NAMP222.PROD.OUTLOOK.COM (2603:10b6:8:2e::31) by
 MN2PR12MB4344.namprd12.prod.outlook.com (2603:10b6:208:26e::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 23:37:28 +0000
Received: from DM6NAM11FT102.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:2e:cafe::f8) by DS7P222CA0025.outlook.office365.com
 (2603:10b6:8:2e::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:27 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT102.mail.protection.outlook.com (10.13.173.172) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:27 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:24 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:24 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:24 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ea29618-e942-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ya559k3ytzE58QL3KyvkDHWlCou/EpLMtnHz+AfDNesZbJBVQrHCVd9O/X2sfYs05nR0ZAZjqOqgqkRLwnTBn8PpcVmbtU0aopomIApedhGmB/llatC44tqLNBQcHF2dCqtBKIa8XLArx4gGhnW2qK1RbrGhIGqwPHPmb1s2YNb7D9543ZkB9Zlt+sZJv6s+rdx8Wy87BkrDiJD5I7XsjPtr+Vf9b5F8Q+mFGoFu9ICQkq24m3mQrsWcygnqvnH9icSZR8tERgym2WX49BiGJ8FGcasSlWzdsiSdzlJYWPE/lC+KElJ5/0v8NktQghL2FkC3kM2ZyQKZFZYSCqQ5Mw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Rjb1ox/OK8Kv1DTBbQ9Lbk0Z1Hvt88hsPr+KDuGNW5w=;
 b=Hz+rdkM/wfTd9fxqtd5x1iefsXZ5mWlTJ5i6ondLuEz2ceyHntIcRRSyW2xPHYCNdoO/FphYeMbP+AZcZoJtHp6s9p4jlAWFXTA3fEcpAXjpfvsf5BTEopY8T35mBYsXyruxraWInqxT8jGrWsrSKsWMfA2SKolXZxiEem64KeSU6hGDMIgXP7f3SEEHOnqi2kq9OOcemoTSZZwNjQGWja1kPKIZaa4d2oav8TC96FZr/MuJG7i68c+62fGEoqhrxj5BnLWdRKyWcpqQeS6ptJtAcZNIInrEI9BYFv6vVal2PGQBwa25PVZ2uW/OHC73Khk//DWjUug+5c1c9n8LhA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rjb1ox/OK8Kv1DTBbQ9Lbk0Z1Hvt88hsPr+KDuGNW5w=;
 b=ZvkjLJX73yjR4ZHX+wK0pLnIZu3fH6j8wQsc5N42hg8/AmY1r3cFKaB1XlyFRKWL+Oi8LCuu1F0BHRwFcZyTdtns+Lqx4vw1oHrJNhx1kzwEP9wL+1EW0XjNuy1D+bOoh5xsTiu9vgbC7Ov8cG0SeIJ776NQu1jZ1i9n7KM5wkU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: [XEN][PATCH v6 04/19] common/device_tree.c: unflatten_device_tree() propagate errors
Date: Tue, 2 May 2023 16:36:35 -0700
Message-ID: <20230502233650.20121-5-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT102:EE_|MN2PR12MB4344:EE_
X-MS-Office365-Filtering-Correlation-Id: 65cd3dcb-bc7d-4e88-029a-08db4b6630be
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/5YqEmdxPxt4isXUmi0aJJ8EK8xbb5p6mWWsxsKih3OJ7jlhYqr0NqHve5493P80TN1pNTDEc1DFblP0Ouzpycrjd+LxkKTEBzbvEtnMN3e0b7KRPMzUypiUo1vK+QbjbHQu8UHfGQM8NKRkzonnvO3DNpIbFFS0Q2vHARa5V8/28bunJXzQWBvd0hF7eHvL5qYHtwBHQfv7GHoTO60+FXPdw52gIbEgSQ6WIYdZzi3QBYV84eZnztPFccpZmoSzu8WlFMg5urAVsfzVEi7cxT7bY58p1ZVfa7j6fvM2TGV5srEiFU+CMjpOgP3oaYNhY5ljK2uilqsJ1IfqqCs0/oahIy8D4bleeMBMspvgbdk2z1x6kn2GVB6SsMRMLa27qzSivp/hx+g851UudOrtAONbpdmXccibs5dRHZnWoC/CDpCTjZKkGwfTnD+X+4XvOTwdvAst1hLtvNA1W5WpR2DkM9VUt9berrNsbMgKu9qnSpfqY8I97Gb0Xpj3RRSyqBF6YGAzlB/81X6dmsZMol/XnQFpbI8Jl3Fpnr+5vAqguHj/viIo1z0gLEXKMpq727MJbJ+7EhcG6ho3W2NDCLaxLfE+2p7vfy+crx/n8ecnpGU0IE3rHw7ZZTiXI6OQo6rd2/cGv6nJyIlL/R65zS1oXPtnok9SYBCJGIQdHge/pqel7XfAsGb4Wy2Wu00lpiWlZigbAN71hL/xve1Ga5PRwfYWRocifoMJu88U9Mc=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(376002)(396003)(346002)(451199021)(36840700001)(40470700004)(46966006)(6916009)(70586007)(70206006)(356005)(82740400003)(316002)(4326008)(40460700003)(478600001)(82310400005)(54906003)(36756003)(86362001)(186003)(6666004)(40480700001)(1076003)(26005)(36860700001)(5660300002)(2616005)(44832011)(83380400001)(336012)(426003)(8676002)(8936002)(41300700001)(81166007)(2906002)(47076005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:27.8782
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 65cd3dcb-bc7d-4e88-029a-08db4b6630be
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT102.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4344

This will be useful for dynamic node programming, where new device tree nodes
are unflattened at runtime. Errors caused by invalid device tree nodes should
be propagated back to the caller instead of only being logged as warnings.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 5daf5197bd..47ab2f7940 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -2071,6 +2071,9 @@ int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
     /* First pass, scan for size */
     start = ((unsigned long)fdt) + fdt_off_dt_struct(fdt);
     size = unflatten_dt_node(fdt, 0, &start, NULL, NULL, 0);
+    if ( !size )
+        return -EINVAL;
+
     size = (size | 3) + 1;
 
     dt_dprintk("  size is %#lx allocating...\n", size);
@@ -2088,11 +2091,19 @@ int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
     start = ((unsigned long)fdt) + fdt_off_dt_struct(fdt);
     unflatten_dt_node(fdt, mem, &start, NULL, &allnextp, 0);
     if ( be32_to_cpup((__be32 *)start) != FDT_END )
-        printk(XENLOG_WARNING "Weird tag at end of tree: %08x\n",
+    {
+        printk(XENLOG_ERR "Weird tag at end of tree: %08x\n",
                   *((u32 *)start));
+        return -EINVAL;
+    }
+
     if ( be32_to_cpu(((__be32 *)mem)[size / 4]) != 0xdeadbeef )
-        printk(XENLOG_WARNING "End of tree marker overwritten: %08x\n",
+    {
+        printk(XENLOG_ERR "End of tree marker overwritten: %08x\n",
                   be32_to_cpu(((__be32 *)mem)[size / 4]));
+        return -EINVAL;
+    }
+
     *allnextp = NULL;
 
     dt_dprintk(" <- unflatten_device_tree()\n");
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528858.822555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYs-0006LD-PZ; Tue, 02 May 2023 23:37:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528858.822555; Tue, 02 May 2023 23:37:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYs-0006Ky-LR; Tue, 02 May 2023 23:37:34 +0000
Received: by outflank-mailman (input) for mailman id 528858;
 Tue, 02 May 2023 23:37:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzYr-0004sC-JC
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:33 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20620.outbound.protection.outlook.com
 [2a01:111:f400:7e88::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4f841d83-e942-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 01:37:33 +0200 (CEST)
Received: from BN1PR13CA0006.namprd13.prod.outlook.com (2603:10b6:408:e2::11)
 by CH3PR12MB8912.namprd12.prod.outlook.com (2603:10b6:610:169::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 23:37:30 +0000
Received: from BN8NAM11FT082.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e2:cafe::88) by BN1PR13CA0006.outlook.office365.com
 (2603:10b6:408:e2::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21 via Frontend
 Transport; Tue, 2 May 2023 23:37:30 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT082.mail.protection.outlook.com (10.13.176.94) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:29 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:29 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:29 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:28 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f841d83-e942-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nQNz5wxnSNHnhF3KsOCS17kfx8SR59b60S+M5rwJS51BB1LlF0e/zbfWbGKAvYUcj3Z6z71tfqp5eKHSE2Q0GuIagR0mTV0KKC9kZXojsXWBZmrKHYjAF5T7YxFddVDBHAPJLNWzJwQz/dL4PsEsLmMBojxcWIwJ5o14t1u8A8X6hNV4Jpy12FrCvfuenKv+BhuCwcVSbnVIQczWNeToQGmsH8VE1N00pU20sz3UIsAe95jOk+MAbaThqojZpzkds5FYpdVm0mhKNmP+2tJbacWdQf7Zxd2O4LKFW8bHU359DPkmwdq8zZR8ua3y1I3daiV+2mu1ydlVQBFlFqtiQg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=F0j3i9rO3ndTIMxTYPP/EpeTJUoU0W3bKZrIDhMYirk=;
 b=LCeMEu8LY7CJcnfW8lVkHrH/Ly9iygD+qYDlGweTDAA7kM4ySyjwLH+W9aLoQ+oysRBimROfg6KAkuTQ6RvetUthIweLOoudGaEnWDlrJ4bb5RFx4R6fOqMVuED4bAGXqN43a0rw9M43c+7bZwCEXTrTM3dx/bF3EiXSBkvcmz/iS2+b6tpr92NaxdcsmZOonL+MGoyXM0tRp76bXvrl3XLyV8IpUzUM5nf24B3uj306m6D78+CkXngZXr8sCUlsNn+Vm10+dQcgO4VxLcHyAvuMqZESXsfdMBWY7wxPI4SCO54UkfyP4ujPMV+tZLXlxAh5ng5/x50ID1wez7LuLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F0j3i9rO3ndTIMxTYPP/EpeTJUoU0W3bKZrIDhMYirk=;
 b=wZTm6IKZpqV9BPjt1HFwSMztoYc3VJ1K4NeX2B44LSc0Pqg2I4PClc2ux4dc8Pj+qw/Z7mh3paiX1io9MRxHhFBAoDe5EsIIKPJbT/ZsqQIm2J1yvNKb3kFBCFJduuqlbhQi8s/icZ4jUGexyCSh1RxGGTiWlWx6cF6FvywDrsY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Julien
 Grall" <julien@xen.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v6 05/19] xen/arm: Add CONFIG_OVERLAY_DTB
Date: Tue, 2 May 2023 16:36:36 -0700
Message-ID: <20230502233650.20121-6-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT082:EE_|CH3PR12MB8912:EE_
X-MS-Office365-Filtering-Correlation-Id: eb644b4e-1f72-4a36-9496-08db4b6631d5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	q7+45602Cr5J7QTaF4ra4FlnfGcVG8zk1+745qMSwgbfzEVWLF1xQNil9Tqpm3jwx99qVNPL71p0iQbUa5/q/alOz9PcI4EzygwQfSRiu2GJk30LShG5kwd0iYvPhPyhutNObrLdQGlSOsnbMEtJZz6FR8X2MWovIPJAZdqUKR+9zYLFyFG4mqGdX5xno1z8yuuaY03D6fQTV7/vszAjI3gyB7ZmJCbW/wYBwbdo+QqSZQneNk5z19FF76HMQzgGG3Jd6JIdtDQhhAcrGw79CgH1gYLkEGHZQRwDJWCjGRateAjHJA0CGVCuNnqNCbsEucFAMf5wZgpspGzpJpsZE6YT4mTBWHj7Eemw8XMYy1ABP5yhlThIUiZd2RCbSwuo9hU4k1dOyEn6y3yx8oAtx4DnRpIrBm5D7x52FoSkihBdDcjZT0q94j2aGtZzaKQGNLZnuU0dM3/Ex2vT7kk7CS2ynsl7t32LBWZ750KFipGUQjGvGpB3WN81oSQxkD5hM501ZVOOmQd2v8A7zg42kjaVq2YXT4P3sBffjYviceUPmU+Ubx4XoY2THoHy7x7/zSQOpdp8Stt4M546mdAQ2hl71xipILnm4h+MYwzkP5H8JHOLwpTgYYXUC4xYY9Sp/XPq5VIDco8dTfq0qBHd2QZ8PzrsL262ARghEZC+b2tqkUK23W2O76fl3JpfJ78OYRT90l+d3gTIZVOu9QZEO1jfAbB1JMAseLJlusMpTu8=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(136003)(39860400002)(346002)(451199021)(46966006)(36840700001)(40470700004)(81166007)(8676002)(8936002)(40480700001)(86362001)(4326008)(82310400005)(478600001)(6916009)(6666004)(316002)(36756003)(41300700001)(70586007)(70206006)(356005)(82740400003)(40460700003)(5660300002)(426003)(336012)(2616005)(54906003)(26005)(186003)(47076005)(1076003)(2906002)(44832011)(36860700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:29.7416
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: eb644b4e-1f72-4a36-9496-08db4b6631d5
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT082.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8912

Introduce a config option allowing the user to enable support for adding and
removing device tree nodes using a device tree binary overlay (.dtbo).

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 SUPPORT.md           | 6 ++++++
 xen/arch/arm/Kconfig | 5 +++++
 2 files changed, 11 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index aa1940e55f..e40ec4fba2 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -822,6 +822,12 @@ No support for QEMU backends in a 16K or 64K domain.
 
     Status: Supported
 
+### Device Tree Overlays
+
+Add/Remove device tree nodes using a device tree overlay binary (.dtbo).
+
+    Status, ARM: Experimental
+
 ### ARM: Guest ACPI support
 
     Status: Supported
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..1fe3d698a5 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -53,6 +53,11 @@ config HAS_ITS
         bool "GICv3 ITS MSI controller support (UNSUPPORTED)" if UNSUPPORTED
         depends on GICV3 && !NEW_VGIC && !ARM_32
 
+config OVERLAY_DTB
+	bool "DTB overlay support (UNSUPPORTED)" if UNSUPPORTED
+	help
+	  Dynamic addition/removal of Xen device tree nodes using a dtbo.
+
 config HVM
         def_bool y
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528860.822565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYx-0006oD-8c; Tue, 02 May 2023 23:37:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528860.822565; Tue, 02 May 2023 23:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYx-0006o0-4X; Tue, 02 May 2023 23:37:39 +0000
Received: by outflank-mailman (input) for mailman id 528860;
 Tue, 02 May 2023 23:37:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzYw-0005Si-AY
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:38 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20613.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5111dbf2-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:36 +0200 (CEST)
Received: from BN9PR03CA0347.namprd03.prod.outlook.com (2603:10b6:408:f6::22)
 by SJ2PR12MB8651.namprd12.prod.outlook.com (2603:10b6:a03:541::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 23:37:33 +0000
Received: from BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f6:cafe::85) by BN9PR03CA0347.outlook.office365.com
 (2603:10b6:408:f6::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:32 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT047.mail.protection.outlook.com (10.13.177.220) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:32 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:32 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 16:37:31 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:31 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5111dbf2-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dOI3UsBnZi4ADCLNm0yqO8Y/wzh7xghvMpxvyh3MCnsrMZ8Ky7vfl8I+QHrtUg9oCKhturO0n71UZq2frso6yCbXO1lWnREJAYVt+K5dOFXLXE5Olsb5IMzkIZbHWa4MJRTAV1GZoRMPaHHKWNzes1ypE64DR/i3jQ7gjrJcY5nzybAv+sIyq4F4Gqzt1+Ia6DGDpCyQU+Q8oRhQCLv2jb1/xM40c4lntw2shxo6bJlhiOlUKnpXAEmzWOylVL/xqM3BeIbNvIFNoE8fhlxRNvUidpIhPLZddZ5El8GMvLYMreJAs2P8m0paW0cAu204rYjZjQuoXmbvhbnrSu3N2A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+aOH8wbfxJ3CDB9n8z4otIPVVDXaY3ZnHxlpPeTlbO8=;
 b=InGULvt9rlpPWnhP/U/d5fplmSZzkHYKMu7e+WH2tT9AS3c2hL4Dq3egm/4Z1H6DbsJtozSV2+sMFUDHG+33u0THnv4iIJVSczrpgY7e3MUmOsFhuQBs/MDiZvNH+3/RlhpkperW81txoUf/SMeypvyc5tJXtDjhrU2IGucn82LaDOlTpcxtjHoWTMfuYvIJ4OtfcwpEzsyzQv90HuMBoX8PoJZRs2HNYWh1rfhNP3fJv8X9NhchbS6Ctf/JC2J4JXW//d8MP4HgK8OFaxqOe1xA4O+hTnUlmYkBgD/92aHxU79MjgtMLGdwThRLqZ4mUcz1diJP9pAQpI1IcTvd3Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+aOH8wbfxJ3CDB9n8z4otIPVVDXaY3ZnHxlpPeTlbO8=;
 b=28l3iP1mztYxC9ocTLWhJFHIYBjUtI9s7/WBjkQRkkotPrsiTA6R7IZFB8xnjitJVFkqe3/nV5tWzqSh9baKAqazvjCRpAUdxikLGeiUVEJxydzVVZQ4Ew6+wB2PkpqX0uMkY52eQTctpOyzPUjNnkHRb5BI9Ex4esR615JLTX8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: [XEN][PATCH v6 06/19] libfdt: Keep fdt functions after init for CONFIG_OVERLAY_DTB.
Date: Tue, 2 May 2023 16:36:37 -0700
Message-ID: <20230502233650.20121-7-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT047:EE_|SJ2PR12MB8651:EE_
X-MS-Office365-Filtering-Correlation-Id: 12a6975c-c9e3-4350-bdb8-08db4b663375
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EUY32wrrKmUTKyUR8ekT4xiIwr4sbHJVwToDFSGra4BowHtWKoGqIm5p0h6+Qgth+w3gYRYkrKSsfkbvbxKCERjp1e3zRcwEUn7dOU6eYzEBTq7eB7qBPsZwwY25G1YOvy1WuzHlJwnMppbWlmNUVgUwx5LffnAlc89+C8zf5L08ZBorO02pWjexrbqKDRb72CVPYQyariwLzAnJ77dqPILNvtjP21+ADxDE3Z4a07oVMMKyNHzEEcvxSejnQSk25BDNpeLDRhtGgUTljhCgK5xtsllZke95HvqWZdZ5MkUp0w2QrCCKZ9DWlQ07QGGTu+p4wDjVkeRqpQZjtKiliOqcVXedzSi3XwtZsovNCpCrf48k6I5qQNYoYd+PzRyxvvgDWnfAloiTEqCy8MeQcdCyvyfCaGHFAotjM+QWH/B9v0I/bm1IqAWPOMJ5rYA5hRlgDaSyWIJdlzF5GT4b9RPoA1rn43Ls86U6VvzRBhAHoxlpyKFepKIp2ujp+rl4ODmOlUkNX3NT7xD5/BSn2D6qOhiWyR5ROyK8lQ8L6qH9mo1zPnxrOYsuxtd23yqBgqBFJxXcbbt8fWt0dDCU4P5VVeTks4mE05PDqIykKjB55SBT87i2U45dDrN443QULEJ3HL/8wcCFb/02ahhUWVsgWyih0V45Q/95FC/pbPNUqxtX8kAMfV8XuXij+uJfW5ozf9i6kSqajqkauDzcL66ZN0hDofsHjnXnzafeWXI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(136003)(39860400002)(376002)(451199021)(46966006)(40470700004)(36840700001)(40460700003)(1076003)(26005)(82740400003)(36756003)(2616005)(41300700001)(82310400005)(4326008)(6916009)(36860700001)(336012)(426003)(316002)(478600001)(47076005)(70586007)(70206006)(86362001)(83380400001)(356005)(4744005)(186003)(40480700001)(81166007)(54906003)(44832011)(2906002)(5660300002)(8936002)(8676002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:32.4690
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 12a6975c-c9e3-4350-bdb8-08db4b663375
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB8651

This is done to access the fdt library functions, which are required at runtime
for adding device tree overlay nodes when dynamically programming nodes.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 xen/common/libfdt/Makefile | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/common/libfdt/Makefile b/xen/common/libfdt/Makefile
index 75aaefa2e3..d50487aa6e 100644
--- a/xen/common/libfdt/Makefile
+++ b/xen/common/libfdt/Makefile
@@ -1,7 +1,11 @@
 include $(src)/Makefile.libfdt
 
 SECTIONS := text data $(SPECIAL_DATA_SECTIONS)
+
+# For CONFIG_OVERLAY_DTB, libfdt functionalities will be needed during runtime.
+ifneq ($(CONFIG_OVERLAY_DTB),y)
 OBJCOPYFLAGS := $(foreach s,$(SECTIONS),--rename-section .$(s)=.init.$(s))
+endif
 
 obj-y += libfdt.o
 nocov-y += libfdt.o
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528862.822574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYz-00079J-G4; Tue, 02 May 2023 23:37:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528862.822574; Tue, 02 May 2023 23:37:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzYz-00078z-Cp; Tue, 02 May 2023 23:37:41 +0000
Received: by outflank-mailman (input) for mailman id 528862;
 Tue, 02 May 2023 23:37:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzYy-0004sC-BZ
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:40 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20622.outbound.protection.outlook.com
 [2a01:111:f400:7e88::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 52edf28a-e942-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 01:37:39 +0200 (CEST)
Received: from BN9PR03CA0346.namprd03.prod.outlook.com (2603:10b6:408:f6::21)
 by SA0PR12MB4591.namprd12.prod.outlook.com (2603:10b6:806:9d::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 23:37:37 +0000
Received: from BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f6:cafe::97) by BN9PR03CA0346.outlook.office365.com
 (2603:10b6:408:f6::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:36 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT047.mail.protection.outlook.com (10.13.177.220) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:36 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:34 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 16:37:34 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:33 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52edf28a-e942-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oU0ePOBiG+WFuk03h0aJuvgatXHo+HtslAsPF8/SjrUbRNz8GXcE6/2YtKyf2A3WNBqM/by7vOzfKMQPzzDyeXALfUbt7/SeNc4I3gJAEiWCEVZ5eOKWK9oviDcLb0JSPdTN0yse3ah6rZMytRmMZhM6PaQtXJg/fszsXPDNQmLVN/OUtJWZyjTKPHoEfSo+i87cYeYVhnDW9R0dupPBdcbaYxu1X5xBCG1Lsmbo6YU/j/rTWYzIdHKNkaoyBmembCoYu2+LKlVCwUnOCY7UoYKNFj2xydv8oaQQSC512dihHORCxXyXCFSngN0q5bM75KMH67vUboctcWB0vEepxQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=A7a8SO//tZf8dnpDvLiol5y8mZo4siOv2JmDmv476O8=;
 b=VqaAMbUA8W+NWm2MXFVLuvfkaddrdbCs+sVauAxPeDyqqEHIRgL7MJ8bhyh/EEQPXoyNCU3kX2TvQKvx25tQjvMUQnNDBXpx22lvHABtkG/Wvk+Md/sWQTU7Vl+3dqdo7A/wN60WgtkcPw8W3jjzteyRTl+tR0R4r2+MMjiYruO5hdJLH7xlWYh569bXNu01MTUaZUy7Gp1tAmOHNgtra0doncKwGbvqufaZ/qjaewJeeNZzO9vZrjZdAVukKILS8jVlNiZWJKnTSrZtS/x4HiqtKHxmUVBfxFhlt2xH3+Gt9RgbKt5uUZCcjvJdv/Wv/Ep0kg0KWjvwcZAyWKkgkg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A7a8SO//tZf8dnpDvLiol5y8mZo4siOv2JmDmv476O8=;
 b=Y//I1YxSxs07Ke9XfwSAHsAWj3cIR16fafxzouwyg7k7PxSVdnm/R327CwyfLaFnMIwvGCTaQ5iaEwIhXnlPPsD5xCnQ76ZtuXM3DbJ4ruYvVuu4W2dDGBJpceuYQM6u0e6v5lpEAhlJ/vxyg9E7eLcFNVACK57GfkN7yPy3fU0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN][PATCH v6 09/19] xen/iommu: Move spin_lock from iommu_dt_device_is_assigned to caller
Date: Tue, 2 May 2023 16:36:40 -0700
Message-ID: <20230502233650.20121-10-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT047:EE_|SA0PR12MB4591:EE_
X-MS-Office365-Filtering-Correlation-Id: 86cb0c46-2d1f-4b3b-d4d2-08db4b6635fb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	r57mn/8nqbzEuAz8S7ebrddM+Jxdwf2cJa/mk60QqgcX+ghGXqbDvH7/hBVwN65/HxegsidKm5aDS+lpeVGcb6F9Z0cJreUAUojCavL7rczSVca3BYtsQKK7ixCyalADViBc2aMjyLILeoA/TINRadND2fd1Hp77IMkkffNScPof4TttK6FiH0/mOospzN8mam1RCAUC9pdeRSQ1+AClmYzPWdQ72O4e+nffDx75VM0yUtL2XwW43Bz6A5olRP5fwDBRTBDE38wAs4iekfeix2VYzysm0RzggOJ7ZVxjLSQ4xDrU7vZAwj4o7hmp2EgPIV7koe5E21v+WsVzT0XpkSqI8ctgGPd822o2osF20PxaeBNug8p2Qdpr8BF7uPcJyXLwUdf/0BFjez4VyanZzn0du8D2iMiBhPdkdQnfNnH6baW2EzFVZx2w4gPGivf73B3MWCYHLyuNticlldp/cHx/aCKI36ldyK8/kTYFZ53DiD1CJi5HVuJc36P1bXMXmBmp/hiDlMFhu72uYkKCpiQ/g8VHXilp5Wdf9AewmIH5Y05OdLhm2038GIe6ImZWMUOso9jrlKSn4eR9EE5yGjp/RnnwEsHL+AfUNYHLzQWLKsYHe7ou3VMroIqwcLj0Yp25sQIlwdZFJqCfZT3Y6fuTyl14bBEysGXBPKJVaGrTEZB+CZkx7mvJb5ZmkLbYlRvz7+J5R+c9gZ3DdRcieIACPBOdFVPtxtbecppjMpE=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(39860400002)(376002)(451199021)(46966006)(36840700001)(40470700004)(47076005)(40460700003)(2616005)(426003)(186003)(336012)(40480700001)(36756003)(86362001)(82310400005)(356005)(81166007)(82740400003)(83380400001)(36860700001)(8936002)(6666004)(8676002)(316002)(41300700001)(44832011)(5660300002)(54906003)(478600001)(4326008)(6916009)(70206006)(70586007)(26005)(1076003)(2906002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:36.7188
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 86cb0c46-2d1f-4b3b-d4d2-08db4b6635fb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4591

Rename iommu_dt_device_is_assigned() to iommu_dt_device_is_assigned_locked().
Remove the static qualifier so it can also be used by SMMU drivers to check
whether a device is in use before removing it.

Moving the spin_lock to the caller prevents concurrent access to
iommu_dt_device_is_assigned_locked() while doing add/remove/assign/deassign
operations.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/drivers/passthrough/device_tree.c | 19 +++++++++++++++----
 xen/include/xen/iommu.h               |  1 +
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 1c32d7b50c..c386fda3e4 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -83,16 +83,14 @@ fail:
     return rc;
 }
 
-static bool_t iommu_dt_device_is_assigned(const struct dt_device_node *dev)
+bool_t iommu_dt_device_is_assigned_locked(const struct dt_device_node *dev)
 {
     bool_t assigned = 0;
 
     if ( !dt_device_is_protected(dev) )
         return 0;
 
-    spin_lock(&dtdevs_lock);
     assigned = !list_empty(&dev->domain_list);
-    spin_unlock(&dtdevs_lock);
 
     return assigned;
 }
@@ -213,27 +211,40 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( (d && d->is_dying) || domctl->u.assign_device.flags )
             break;
 
+        spin_lock(&dtdevs_lock);
+
         ret = dt_find_node_by_gpath(domctl->u.assign_device.u.dt.path,
                                     domctl->u.assign_device.u.dt.size,
                                     &dev);
         if ( ret )
+        {
+            spin_unlock(&dtdevs_lock);
             break;
+        }
 
         ret = xsm_assign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
         if ( ret )
+        {
+            spin_unlock(&dtdevs_lock);
             break;
+        }
 
         if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
         {
-            if ( iommu_dt_device_is_assigned(dev) )
+
+            if ( iommu_dt_device_is_assigned_locked(dev) )
             {
                 printk(XENLOG_G_ERR "%s already assigned.\n",
                        dt_node_full_name(dev));
                 ret = -EINVAL;
             }
+
+            spin_unlock(&dtdevs_lock);
             break;
         }
 
+        spin_unlock(&dtdevs_lock);
+
         if ( d == dom_io )
             return -EINVAL;
 
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 405db59971..76add226ec 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -214,6 +214,7 @@ struct msi_msg;
 #include <xen/device_tree.h>
 
 int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev);
+bool_t iommu_dt_device_is_assigned_locked(const struct dt_device_node *dev);
 int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev);
 int iommu_dt_domain_init(struct domain *d);
 int iommu_release_dt_devices(struct domain *d);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528863.822585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ0-0007SI-Sd; Tue, 02 May 2023 23:37:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528863.822585; Tue, 02 May 2023 23:37:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ0-0007Rn-Kt; Tue, 02 May 2023 23:37:42 +0000
Received: by outflank-mailman (input) for mailman id 528863;
 Tue, 02 May 2023 23:37:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzYz-0005Si-Hf
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:41 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2060f.outbound.protection.outlook.com
 [2a01:111:f400:7e88::60f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 53775b0b-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:39 +0200 (CEST)
Received: from BN9PR03CA0343.namprd03.prod.outlook.com (2603:10b6:408:f6::18)
 by CH2PR12MB4103.namprd12.prod.outlook.com (2603:10b6:610:7e::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 23:37:36 +0000
Received: from BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f6:cafe::43) by BN9PR03CA0343.outlook.office365.com
 (2603:10b6:408:f6::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.32 via Frontend
 Transport; Tue, 2 May 2023 23:37:36 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT047.mail.protection.outlook.com (10.13.177.220) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:36 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:33 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:33 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53775b0b-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OcuBEpgBv91+CABaEHG+sQ13V2oVko2y7E2qN/5olSnow6SkQ+e+yyDAb836RN70VOUGaV90hYDVSa27niA4ayCVwQoOt02mfu+k4kEobVnj9G2oyeYg/7SpbE/FJ5nqCgNCrElGFrFGstapy/Hv1E2S3FS0PpfNaoX20JkYDudfKiuAQbmrDTeCQ0GFXgD3MVLrFnve3e30AYtjM6fvwTLUqjUjJGgPmLSnke0u+Xl5tJpKoaVfxSNHu2oNXvU6cptQAwbf/bwzAqKNYs9bSCJVF7u3gUVCJWXl4W7SU+Hb8I1YQH0S/+347z2LPlTVkeTIee4hqjLCxgr4JjxWAQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LptqMUkxHVxrSuaM+i6CgX8fqOqrHiX3/jew+F/gQRs=;
 b=P5m/nNPtt7VhJ+pgTHeLyfND9vt7eBABWqquZOefh7SK/CVMMtlusCr/PoLJj58WB0KuvT8srAFToGBx7q0WdFPWDilN/rM1EU/BDpEWNF4aYEmXWWXV0ISbMXVTdFoR4kyK7F2U9MPSL1EaFDrQevS9emZT2ed3u1HNE5adJvI6Eg1p6elajde6wJTDH2V0wFBDfACLY70kdeUAxDjss0ykfQKxB7b6PkAhzifjgpq8uPz2DxLOBszkPLL9tnf5I7vzkkC0YmxWvTxbm3T5wirzdnq2VzXslrAXuKKVC0G1jmgLrCrLPdtHXcxefFjeph5C13/vY7UNiVhFcRCyvA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LptqMUkxHVxrSuaM+i6CgX8fqOqrHiX3/jew+F/gQRs=;
 b=w2IDg1FiVeNBJgY/xOdKhF9kBqNjIivZ1AzFnDhj95o/f++ewR0Mga7GrIvkOIlNU+lo2HVVGRrgAHMnMu5MATjB5JpJl0r9kgOSoB2NsdNcz4S5AKOKazjoFCaOsPgssVMlkhvz1LmlCf47gfcJvXLyWfV94QjW9+lVL0yfoLs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: [XEN][PATCH v6 08/19] xen/device-tree: Add device_tree_find_node_by_path() to find nodes in device tree
Date: Tue, 2 May 2023 16:36:39 -0700
Message-ID: <20230502233650.20121-9-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT047:EE_|CH2PR12MB4103:EE_
X-MS-Office365-Filtering-Correlation-Id: ac7d9bfb-95f8-4823-ffc0-08db4b6635c4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tnVNrl+EWQRdKhMY7bSFdykTa7BAPjsgesGWpRYzgt3des1TrY6lbV8bTx/RSGE/6bFXXXz3mzRE6kQf+3X2bk5oJnkQRcc7WBBEsrjJxBzR8GUwoN6pgR+S77tDxhBkhbg1Y9pIQEA7Olycq75K+X8MIBVhghJCbdsgsEUulzmI0RhNPr2ZPy18El+vNcYMg9//H7gOWxsFdVlI8ykTjbT/GZNhcEwiNYkE/3ctwXj9LpYUeC50SmZtMayqvbRQz+0iLywP6Gpc3tYTWEQXR4DebiSh0iOMxDPfgSGy3/kmLCLkSiFnQHCSeVlBniI853HZ+/GcIceOU9rXq0EFvYO8nRf2rjlgmZ+TX48Eo9VjFMfihoiPKrtk4Jw/FVX3lUCVMWWwxlHqncYWfA3IBFGOp+LmQMZVJaDXpHQG48p3YPuNZByepSTyp7oFTeEfPlyVPpoyE2Htxd9K8PGPqNNfDqcOm3ulPFH8sFGR73SX+gSS6c0kcd46rvWcncxcVxh174TD2RaGzLpSZ5GJsTsFNrgU7FE2geHXrxs6z5p1S65mYDBasXV2UaoAdWMQ8eLxcu3rVmTD+MEGTYkmg7kaAsVd5RvPXNKvek4fGhqx8uvUc7q8WzX6A0tzAinnYwPNsjvR740o9gjRGJkRT7bZ1xcqEOVx6hciulPNN1kEwjCTJGltj12IrExAQWTy9Q2ZPobgGPsnspKEyjCRqfDLlEc8jrLDm1rqOUwOIy6pZav127Epdm4y4Yvw3p701bFIJu94gKtPWtmgoVcFlA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(376002)(346002)(136003)(451199021)(36840700001)(40470700004)(46966006)(81166007)(41300700001)(356005)(5660300002)(8936002)(8676002)(70206006)(70586007)(316002)(82740400003)(2906002)(86362001)(40480700001)(40460700003)(44832011)(6916009)(1076003)(26005)(186003)(4326008)(336012)(83380400001)(36860700001)(426003)(47076005)(36756003)(82310400005)(2616005)(54906003)(478600001)(37363002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:36.3595
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ac7d9bfb-95f8-4823-ffc0-08db4b6635c4
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4103

Add device_tree_find_node_by_path() to find a node matching a path in a given
dt_device_node tree.

Reason behind this function:
    Each time overlay nodes are added using a .dtbo, a new fdt (a memcpy of
    device_tree_flattened) is created and updated with the overlay nodes. This
    updated fdt is then unflattened into dt_host_new. Next, we need to find
    the overlay nodes in dt_host_new, find each overlay node's parent in
    dt_host, and add the nodes as children under their parent in dt_host. Thus
    we need this function to search for a node in different unflattened device
    trees.

Also, make dt_find_node_by_path() static inline.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c      |  5 +++--
 xen/include/xen/device_tree.h | 17 +++++++++++++++--
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 47ab2f7940..426a809f42 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -358,11 +358,12 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
     return np;
 }
 
-struct dt_device_node *dt_find_node_by_path(const char *path)
+struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
+                                                     const char *path)
 {
     struct dt_device_node *np;
 
-    dt_for_each_device_node(dt_host, np)
+    dt_for_each_device_node(dt, np)
         if ( np->full_name && (dt_node_cmp(np->full_name, path) == 0) )
             break;
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index eef0335b79..d6366d3dac 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -534,13 +534,26 @@ struct dt_device_node *dt_find_node_by_type(struct dt_device_node *from,
 struct dt_device_node *dt_find_node_by_alias(const char *alias);
 
 /**
- * dt_find_node_by_path - Find a node matching a full DT path
+ * device_tree_find_node_by_path - Generic function to find a node matching the
+ * full DT path for any given unflattened device tree
+ * @dt_node: The device tree to search
  * @path: The full path to match
  *
  * Returns a node pointer.
  */
-struct dt_device_node *dt_find_node_by_path(const char *path);
+struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,
+                                                     const char *path);
 
+/**
+ * dt_find_node_by_path - Find a node matching a full DT path in dt_host
+ * @path: The full path to match
+ *
+ * Returns a node pointer.
+ */
+static inline struct dt_device_node *dt_find_node_by_path(const char *path)
+{
+    return device_tree_find_node_by_path(dt_host, path);
+}
 
 /**
  * dt_find_node_by_gpath - Same as dt_find_node_by_path but retrieve the
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528864.822590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ1-0007Yx-CL; Tue, 02 May 2023 23:37:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528864.822590; Tue, 02 May 2023 23:37:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ1-0007Wg-5y; Tue, 02 May 2023 23:37:43 +0000
Received: by outflank-mailman (input) for mailman id 528864;
 Tue, 02 May 2023 23:37:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzYz-0004sC-Se
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:41 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20600.outbound.protection.outlook.com
 [2a01:111:f400:fe59::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 53a655d4-e942-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 01:37:41 +0200 (CEST)
Received: from BN9PR03CA0358.namprd03.prod.outlook.com (2603:10b6:408:f6::33)
 by PH7PR12MB8180.namprd12.prod.outlook.com (2603:10b6:510:2b6::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.28; Tue, 2 May
 2023 23:37:38 +0000
Received: from BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f6:cafe::ae) by BN9PR03CA0358.outlook.office365.com
 (2603:10b6:408:f6::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:38 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT047.mail.protection.outlook.com (10.13.177.220) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:38 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:36 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:36 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53a655d4-e942-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VMuhiR7OjqK98R6C2Y+i2BwUPlmh/NGw2J8UYvPW8dKGxBa1iV1j5v33lhksmAVKe8fslgYp/p8OyGG+xz6Yr1br6oRqS0Rz0txYazHtUOD3ENh0pOiA/yDPtHWUDAxT7OftAIsG2ievhkgIQippqVQLEZAyw0TONxPYxx5dhumzJUrnPc8mXShxNWcUGBp07e/cxB+hH4Wn7z7+89ipGslY+3JEqTGd5NZbhw+pfEUEdSQbG+ECynalrKb3jBZJbH6kWsKdCQsfTePuf4YAURIh4hT1om4sjnbmzd9krWT5Vowk9s/jFQJqpKnvqgsBtq+LsKKsTsMr2fv5j2Ir0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=o//kziDFlVk7rDkLuyZCz6gXGJUkyhMLecqyMP3X36E=;
 b=j8RnKyNane9AlUt12z+CtWCP8pOZJ9vwNhFsxN7qgdNLCaOXJqeW5/nstSsVhdlOdS2zpo9RJXTId1o+lgEA6ubGR0qWqV1zxbj6rqkju6O9IWxYmRaWHqIy4uBg+MTBWM3oZ383FjVz1K+kpQCezutbY8mrCbAvdSmXk205liAE5ejIgOHkfzMQk4IIfSICMGnGQpzwhjUopVSdr47z6VNVZmSn3ceddCp20aT/mh6hIS9Oad6BT/m2wrH1msLYYLzy1bmVFZ8U61GkSJ7yWEsZiOfAjB0rGJTiusxx0v3UYqnMd9WWb2wWMghl6/2BxWA8qOYIHKlwEPYgLPd4dA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o//kziDFlVk7rDkLuyZCz6gXGJUkyhMLecqyMP3X36E=;
 b=K4bFgZs7i4fiQhliTFJ3687gVb/NFX5VZnNbJDYqIW9oAIuSE3IMOJ2I+uhNXIlwU2L3tqpyVuXWv2EaH04PzPmADLC3xQ4laizDvvSdI6KWscP5sqSJy6huRLe+/sYVY0xti30U20nWlkJrdkgcfxaMe8iMNeaLbx39WJb6YJE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>, Rahul Singh
	<rahul.singh@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v6 12/19] xen/smmu: Add remove_device callback for smmu_iommu ops
Date: Tue, 2 May 2023 16:36:43 -0700
Message-ID: <20230502233650.20121-13-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT047:EE_|PH7PR12MB8180:EE_
X-MS-Office365-Filtering-Correlation-Id: 1a718bd1-80da-4de4-b416-08db4b6636c1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:38.0156
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1a718bd1-80da-4de4-b416-08db4b6636c1
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB8180

Add a remove_device callback for removing a device's entry from the smmu-masters
tree, using the following steps:
1. Find whether an SMMU master exists for the device node.
2. Check whether the device is currently in use.
3. Remove the SMMU master.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/drivers/passthrough/arm/smmu.c | 58 ++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b3..39d3a5c345 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -816,6 +816,19 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
 	return 0;
 }
 
+static int remove_smmu_master(struct arm_smmu_device *smmu,
+			      struct arm_smmu_master *master)
+{
+	if (!smmu->masters.rb_node) {
+		ASSERT_UNREACHABLE();
+		return -ENOENT;
+	}
+
+	rb_erase(&master->node, &smmu->masters);
+
+	return 0;
+}
+
 static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 					 struct device *dev,
 					 struct iommu_fwspec *fwspec)
@@ -853,6 +866,34 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 	return insert_smmu_master(smmu, master);
 }
 
+static int arm_smmu_dt_remove_device_legacy(struct arm_smmu_device *smmu,
+					 struct device *dev)
+{
+	struct arm_smmu_master *master;
+	struct device_node *dev_node = dev_get_dev_node(dev);
+	int ret;
+
+	master = find_smmu_master(smmu, dev_node);
+	if (master == NULL) {
+		dev_err(dev,
+			"No registrations found for master device %s\n",
+			dev_node->name);
+		return -EINVAL;
+	}
+
+	if (iommu_dt_device_is_assigned_locked(dev_to_dt(dev)))
+		return -EBUSY;
+
+	ret = remove_smmu_master(smmu, master);
+	if (ret)
+		return ret;
+
+	dev_node->is_protected = false;
+
+	kfree(master);
+	return 0;
+}
+
 static int register_smmu_master(struct arm_smmu_device *smmu,
 				struct device *dev,
 				struct of_phandle_args *masterspec)
@@ -876,6 +917,22 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
 					     fwspec);
 }
 
+static int arm_smmu_dt_remove_device_generic(u8 devfn, struct device *dev)
+{
+	struct arm_smmu_device *smmu;
+	struct iommu_fwspec *fwspec;
+
+	fwspec = dev_iommu_fwspec_get(dev);
+	if (fwspec == NULL)
+		return -ENXIO;
+
+	smmu = find_smmu(fwspec->iommu_dev);
+	if (smmu == NULL)
+		return -ENXIO;
+
+	return arm_smmu_dt_remove_device_legacy(smmu, dev);
+}
+
 static int arm_smmu_dt_add_device_generic(u8 devfn, struct device *dev)
 {
 	struct arm_smmu_device *smmu;
@@ -2858,6 +2915,7 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
     .init = arm_smmu_iommu_domain_init,
     .hwdom_init = arch_iommu_hwdom_init,
     .add_device = arm_smmu_dt_add_device_generic,
+    .remove_device = arm_smmu_dt_remove_device_generic,
     .teardown = arm_smmu_iommu_domain_teardown,
     .iotlb_flush = arm_smmu_iotlb_flush,
     .assign_device = arm_smmu_assign_dev,
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528865.822597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ2-0007i9-24; Tue, 02 May 2023 23:37:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528865.822597; Tue, 02 May 2023 23:37:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ1-0007h6-MS; Tue, 02 May 2023 23:37:43 +0000
Received: by outflank-mailman (input) for mailman id 528865;
 Tue, 02 May 2023 23:37:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ0-0004sC-C1
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:42 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2060e.outbound.protection.outlook.com
 [2a01:111:f400:7e89::60e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 547f4a68-e942-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 01:37:41 +0200 (CEST)
Received: from DS7PR03CA0208.namprd03.prod.outlook.com (2603:10b6:5:3b6::33)
 by DM4PR12MB5071.namprd12.prod.outlook.com (2603:10b6:5:38a::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 23:37:36 +0000
Received: from DM6NAM11FT068.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b6:cafe::99) by DS7PR03CA0208.outlook.office365.com
 (2603:10b6:5:3b6::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.32 via Frontend
 Transport; Tue, 2 May 2023 23:37:36 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT068.mail.protection.outlook.com (10.13.173.67) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:36 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:32 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 16:37:32 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:32 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 547f4a68-e942-11ed-b225-6b7b168915f2
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>, Vikram Garhwal
	<fnu.vikram@xilinx.com>, David Gibson <david@gibson.dropbear.id.au>
Subject: [XEN][PATCH v6 07/19] libfdt: overlay: change overlay_get_target()
Date: Tue, 2 May 2023 16:36:38 -0700
Message-ID: <20230502233650.20121-8-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT068:EE_|DM4PR12MB5071:EE_
X-MS-Office365-Filtering-Correlation-Id: f6d2c9fa-7007-410d-69ba-08db4b6635a2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:36.0859
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f6d2c9fa-7007-410d-69ba-08db4b6635a2
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT068.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5071

Rename overlay_get_target() to fdt_overlay_target_offset() and drop its static
qualifier, exposing it as part of the public libfdt API.

This makes the target path of overlay nodes available to callers, which is
useful in many cases. For example, the Xen hypervisor needs it when applying
overlays, because Xen does further processing of the overlay nodes, e.g.
mapping resources (IRQs and IOMMUs) to other VMs and creating SMMU pagetables.

Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
Message-Id: <1637204036-382159-2-git-send-email-fnu.vikram@xilinx.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Origin: git://git.kernel.org/pub/scm/utils/dtc/dtc.git 45f3d1a095dd

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/common/libfdt/fdt_overlay.c | 29 +++++++----------------------
 xen/common/libfdt/version.lds   |  1 +
 xen/include/xen/libfdt/libfdt.h | 18 ++++++++++++++++++
 3 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/xen/common/libfdt/fdt_overlay.c b/xen/common/libfdt/fdt_overlay.c
index 7b95e2b639..acf0c4c2a6 100644
--- a/xen/common/libfdt/fdt_overlay.c
+++ b/xen/common/libfdt/fdt_overlay.c
@@ -41,37 +41,22 @@ static uint32_t overlay_get_target_phandle(const void *fdto, int fragment)
 	return fdt32_to_cpu(*val);
 }
 
-/**
- * overlay_get_target - retrieves the offset of a fragment's target
- * @fdt: Base device tree blob
- * @fdto: Device tree overlay blob
- * @fragment: node offset of the fragment in the overlay
- * @pathp: pointer which receives the path of the target (or NULL)
- *
- * overlay_get_target() retrieves the target offset in the base
- * device tree of a fragment, no matter how the actual targeting is
- * done (through a phandle or a path)
- *
- * returns:
- *      the targeted node offset in the base device tree
- *      Negative error code on error
- */
-static int overlay_get_target(const void *fdt, const void *fdto,
-			      int fragment, char const **pathp)
+int fdt_overlay_target_offset(const void *fdt, const void *fdto,
+			      int fragment_offset, char const **pathp)
 {
 	uint32_t phandle;
 	const char *path = NULL;
 	int path_len = 0, ret;
 
 	/* Try first to do a phandle based lookup */
-	phandle = overlay_get_target_phandle(fdto, fragment);
+	phandle = overlay_get_target_phandle(fdto, fragment_offset);
 	if (phandle == (uint32_t)-1)
 		return -FDT_ERR_BADPHANDLE;
 
 	/* no phandle, try path */
 	if (!phandle) {
 		/* And then a path based lookup */
-		path = fdt_getprop(fdto, fragment, "target-path", &path_len);
+		path = fdt_getprop(fdto, fragment_offset, "target-path", &path_len);
 		if (path)
 			ret = fdt_path_offset(fdt, path);
 		else
@@ -638,7 +623,7 @@ static int overlay_merge(void *fdt, void *fdto)
 		if (overlay < 0)
 			return overlay;
 
-		target = overlay_get_target(fdt, fdto, fragment, NULL);
+		target = fdt_overlay_target_offset(fdt, fdto, fragment, NULL);
 		if (target < 0)
 			return target;
 
@@ -781,7 +766,7 @@ static int overlay_symbol_update(void *fdt, void *fdto)
 			return -FDT_ERR_BADOVERLAY;
 
 		/* get the target of the fragment */
-		ret = overlay_get_target(fdt, fdto, fragment, &target_path);
+		ret = fdt_overlay_target_offset(fdt, fdto, fragment, &target_path);
 		if (ret < 0)
 			return ret;
 		target = ret;
@@ -803,7 +788,7 @@ static int overlay_symbol_update(void *fdt, void *fdto)
 
 		if (!target_path) {
 			/* again in case setprop_placeholder changed it */
-			ret = overlay_get_target(fdt, fdto, fragment, &target_path);
+			ret = fdt_overlay_target_offset(fdt, fdto, fragment, &target_path);
 			if (ret < 0)
 				return ret;
 			target = ret;
diff --git a/xen/common/libfdt/version.lds b/xen/common/libfdt/version.lds
index 7ab85f1d9d..cbce5d4a8b 100644
--- a/xen/common/libfdt/version.lds
+++ b/xen/common/libfdt/version.lds
@@ -77,6 +77,7 @@ LIBFDT_1.2 {
 		fdt_appendprop_addrrange;
 		fdt_setprop_inplace_namelen_partial;
 		fdt_create_with_flags;
+		fdt_overlay_target_offset;
 	local:
 		*;
 };
diff --git a/xen/include/xen/libfdt/libfdt.h b/xen/include/xen/libfdt/libfdt.h
index c71689e2be..fabddbee8c 100644
--- a/xen/include/xen/libfdt/libfdt.h
+++ b/xen/include/xen/libfdt/libfdt.h
@@ -2109,6 +2109,24 @@ int fdt_del_node(void *fdt, int nodeoffset);
  */
 int fdt_overlay_apply(void *fdt, void *fdto);
 
+/**
+ * fdt_overlay_target_offset - retrieves the offset of a fragment's target
+ * @fdt: Base device tree blob
+ * @fdto: Device tree overlay blob
+ * @fragment_offset: node offset of the fragment in the overlay
+ * @pathp: pointer which receives the path of the target (or NULL)
+ *
+ * fdt_overlay_target_offset() retrieves the target offset in the base
+ * device tree of a fragment, no matter how the actual targeting is
+ * done (through a phandle or a path)
+ *
+ * returns:
+ *      the targeted node offset in the base device tree
+ *      Negative error code on error
+ */
+int fdt_overlay_target_offset(const void *fdt, const void *fdto,
+			      int fragment_offset, char const **pathp);
+
 /**********************************************************************/
 /* Debugging / informational functions                                */
 /**********************************************************************/
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528868.822614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ4-0008US-L4; Tue, 02 May 2023 23:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528868.822614; Tue, 02 May 2023 23:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ4-0008Tz-Ew; Tue, 02 May 2023 23:37:46 +0000
Received: by outflank-mailman (input) for mailman id 528868;
 Tue, 02 May 2023 23:37:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ2-0004sC-Fg
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:44 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2062c.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 552ea58e-e942-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 01:37:43 +0200 (CEST)
Received: from DM6PR10CA0030.namprd10.prod.outlook.com (2603:10b6:5:60::43) by
 PH8PR12MB6841.namprd12.prod.outlook.com (2603:10b6:510:1c8::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 23:37:38 +0000
Received: from DM6NAM11FT073.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:60:cafe::88) by DM6PR10CA0030.outlook.office365.com
 (2603:10b6:5:60::43) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:38 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT073.mail.protection.outlook.com (10.13.173.152) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:37 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:35 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 16:37:35 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:34 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 552ea58e-e942-11ed-b225-6b7b168915f2
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: [XEN][PATCH v6 10/19] xen/iommu: protect iommu_add_dt_device() with dtdevs_lock
Date: Tue, 2 May 2023 16:36:41 -0700
Message-ID: <20230502233650.20121-11-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT073:EE_|PH8PR12MB6841:EE_
X-MS-Office365-Filtering-Correlation-Id: 79ad38c7-1516-43e6-acb0-08db4b6636c3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:37.9816
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 79ad38c7-1516-43e6-acb0-08db4b6636c3
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT073.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB6841

Protect iommu_add_dt_device() with dtdevs_lock to prevent devices from being
added concurrently.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/drivers/passthrough/device_tree.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index c386fda3e4..f3867ef1a6 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -145,6 +145,8 @@ int iommu_add_dt_device(struct dt_device_node *np)
     if ( dev_iommu_fwspec_get(dev) )
         return 0;
 
+    spin_lock(&dtdevs_lock);
+
     /*
      * According to the Documentation/devicetree/bindings/iommu/iommu.txt
      * from Linux.
@@ -157,7 +159,10 @@ int iommu_add_dt_device(struct dt_device_node *np)
          * these callback implemented.
          */
         if ( !ops->add_device || !ops->dt_xlate )
-            return -EINVAL;
+        {
+            rc = -EINVAL;
+            goto fail;
+        }
 
         if ( !dt_device_is_available(iommu_spec.np) )
             break;
@@ -188,6 +193,8 @@ int iommu_add_dt_device(struct dt_device_node *np)
     if ( rc < 0 )
         iommu_fwspec_free(dev);
 
+fail:
+    spin_unlock(&dtdevs_lock);
     return rc;
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528869.822620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ5-000060-6Y; Tue, 02 May 2023 23:37:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528869.822620; Tue, 02 May 2023 23:37:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ4-00004v-UB; Tue, 02 May 2023 23:37:46 +0000
Received: by outflank-mailman (input) for mailman id 528869;
 Tue, 02 May 2023 23:37:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ2-0005Si-I8
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:44 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20615.outbound.protection.outlook.com
 [2a01:111:f400:7eab::615])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 542f6709-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:42 +0200 (CEST)
Received: from BN9PR03CA0353.namprd03.prod.outlook.com (2603:10b6:408:f6::28)
 by DS0PR12MB8017.namprd12.prod.outlook.com (2603:10b6:8:146::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.20; Tue, 2 May
 2023 23:37:38 +0000
Received: from BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f6:cafe::79) by BN9PR03CA0353.outlook.office365.com
 (2603:10b6:408:f6::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21 via Frontend
 Transport; Tue, 2 May 2023 23:37:38 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT047.mail.protection.outlook.com (10.13.177.220) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:38 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:37 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:37 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 542f6709-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lbLrY0XLfmtX5VteDWmQwhU0kk2Ylnhgds6RZ+kfmVv+oussvigC+N19O6lGs47+KeUblhHXdolMu9LH6xcHSmkRtx+yYjeY2V1qZTGJo7ZCRPwbhTYpVYUHIRmEpJbdX1P2cAHvf3C2RksP+jLk6vMv/ByNntBL78KPO2SNbGAAe3vTB8Q83yTXoKrXtEIAUovJ6/HTXVJPWI7TWQbOck8P7Tae/+L/rUiXDEL6IrRRYNRRWsMS1zYbVMPo9XClfsovvTbbvDeQpLMqEbBnj8eP0igPQb1TU/c4Aww3cmwDbkYqiI/XQcz231ht3T1raBkk6Oz05iiYUq0i7X4mbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+CZpvoQNw8JQfGrQR7sP9j/mivY1rVi0G0xxugML0N8=;
 b=OYsyBQ2qtqWf9g4uKUwiBP9H0/uFQGZgVXvKlpP8iP5ZoN4x9aKo/EFMymredk4PiW4oSgu3M5BWjelyXrC5qMVLoKFCLxWVOm2e6lgMhpwnbviHTt3NEhe5ivHhIrqa0bBFHNkqWBZLY8KddIbB6/W4T+vhlFyzWnjk3DQLF8RRYe21RM8so3gX9N1dJW2FzJWV3WanfYdE2DU0DbFmQkL8JKuOyf7L4cAmuDwiuqIZdkV70RS6oVV+bOEA89Y/TWwz/lkHPbQdKw1oLrYKh2rpqYesYyCN5AefVo/vtk/zeWTFbpWNaCK/xHq8t4KkmvMo35sAFYfpLLDpiS29Dw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+CZpvoQNw8JQfGrQR7sP9j/mivY1rVi0G0xxugML0N8=;
 b=wPIjnURyLlj93gqdi6Z9aCwXcg+joWkR/kkmEM8sFZVJWqiAR9Zzf0gTzm0bh6clUjcfjwg/jMz6JuCxjD1vPxnPWM3Dqpc7cgWJanHT2GKTQ0W6aORAjlFk1ik/ZvSlwGmyUcBI+Ab0WbZQwEutjYwJ9Gj1xFpOVhJ40cnjhUQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN][PATCH v6 13/19] asm/smp.h: Fix circular dependency for device_tree.h and rwlock.h
Date: Tue, 2 May 2023 16:36:44 -0700
Message-ID: <20230502233650.20121-14-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT047:EE_|DS0PR12MB8017:EE_
X-MS-Office365-Filtering-Correlation-Id: 497fbd75-4f3b-410f-3942-08db4b663709
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QDJUL5UEsyw4J3mOZMHxO6MCIzm75A6YFTW2AkkQIcsuLRHs5vWqj64AKqydffx5mFFgPMwILp2KH7S/s2EeVNQYdWveT69oPyb25LzbrtBEY592D+WV68LQ/kx6hV3nBM1JFUIjzLIgJWIyHWGIorDRg4GP1w5RAXulf+7Q61hrJYNXc4Pd9DFyeJqOAFYhXwKMW5c41bI7WgyN/jD4cbQtX4+6oBUuizaN5u4f164CJMLg3DyiDK5/ecex3eh97PyONaJDMjaq4B74FyltkzLDz0YtnSA9VveGUs3Xe1smDdM9pATBCVOIjfVHem1wkh8FFVPBU0N9imnou9JI+jT+CS8rhHl4D860WfrG8ZjIkl+7yY4czOHf95HOZyR8Vf426SllumytAZFP3XhrP7bVD4tfqPGqy/VEe8Zf+HsmgtCyd0wcvYtBxi1VoozpHHUrdNm3mUAvnZ0S8MJ9U2nJnYFSI+mcemuUSkVWFxadpIokx9PaJ4VZQM42dtFk8cCW7QHC+7Jte7kNSJ2b339pKiQiYj70ICIhvNywYHEL7eOzXMUvoUqnKx1IRDGhVu9xeyVIOylmJpw0eQ3sl80hABabLPFDJLRct2aPwDDHF0tYgL2JNPAFDVnzs7ZsO6Zdzxh3A0SeGwDw0nYk4+Bknur1eOWtGSDxHvo/j715/FTzc6LAIlQUoGk3DM9tzpbwNTHw2Ga+YJ/rb/btLKHjzTx9KHbjiGOnco08ufU=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(346002)(39860400002)(376002)(451199021)(36840700001)(46966006)(40470700004)(81166007)(356005)(82740400003)(44832011)(41300700001)(8936002)(5660300002)(8676002)(2906002)(40480700001)(86362001)(40460700003)(82310400005)(6666004)(36860700001)(47076005)(54906003)(478600001)(186003)(36756003)(2616005)(70206006)(83380400001)(1076003)(26005)(336012)(6916009)(70586007)(4326008)(426003)(316002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:38.4843
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 497fbd75-4f3b-410f-3942-08db4b663709
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB8017

Dynamic programming ops will modify dt_host, and there might be other
functions browsing dt_host at the same time. To avoid race conditions, an
rwlock is added for browsing dt_host. But adding the rwlock in
device_tree.h causes the following circular dependency:
    device_tree.h->rwlock.h->smp.h->asm/smp.h->device_tree.h

To fix this, remove the "#include <xen/device_tree.h>" from asm/smp.h and
forward declare "struct dt_device_node".

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Henry Wang <Henry.Wang@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/arch/arm/include/asm/smp.h | 3 ++-
 xen/arch/arm/smpboot.c         | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/smp.h b/xen/arch/arm/include/asm/smp.h
index a37ca55bff..b12949ba8a 100644
--- a/xen/arch/arm/include/asm/smp.h
+++ b/xen/arch/arm/include/asm/smp.h
@@ -3,13 +3,14 @@
 
 #ifndef __ASSEMBLY__
 #include <xen/cpumask.h>
-#include <xen/device_tree.h>
 #include <asm/current.h>
 #endif
 
 DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask);
 DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 
+struct dt_device_node;
+
 #define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
 
 #define smp_processor_id() get_processor_id()
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 4a89b3a834..255bbcc967 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -10,6 +10,7 @@
 #include <xen/cpu.h>
 #include <xen/cpumask.h>
 #include <xen/delay.h>
+#include <xen/device_tree.h>
 #include <xen/domain_page.h>
 #include <xen/errno.h>
 #include <xen/init.h>
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528870.822625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ6-0000Mm-3V; Tue, 02 May 2023 23:37:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528870.822625; Tue, 02 May 2023 23:37:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ5-0000Kd-M3; Tue, 02 May 2023 23:37:47 +0000
Received: by outflank-mailman (input) for mailman id 528870;
 Tue, 02 May 2023 23:37:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ3-0005Si-IN
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:45 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on20624.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 55505898-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:42 +0200 (CEST)
Received: from BN9PR03CA0343.namprd03.prod.outlook.com (2603:10b6:408:f6::18)
 by CH0PR12MB5267.namprd12.prod.outlook.com (2603:10b6:610:d2::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 23:37:39 +0000
Received: from BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f6:cafe::43) by BN9PR03CA0343.outlook.office365.com
 (2603:10b6:408:f6::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.32 via Frontend
 Transport; Tue, 2 May 2023 23:37:39 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT047.mail.protection.outlook.com (10.13.177.220) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:39 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:38 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:38 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55505898-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QJGXP6I5DbTWCJd4UPQsUa0et2w6HnKRklEs3Tq0HDyaZjomDSMmOx394jyxoEYWtAH6vCir7FylPFU47fc5cNMSPqak6eMuO60yyiS1EqWOULnwGw6+asuLYM933ih96Mlz4rdVOpmwLYAyPBEP9t8vE5yFsf7bHbV4I/Dchquluyt4YAkiDVcSaV8BpTMtQTWOD10Gins5RVJpjeduZkDKnL3jubsobkU9OAVDcU58UFWtQO8oMrkNifcG8+gQmEa0XZxKFfY3bw7JHWvad/J1pZN6gywslRw3mFs+sy8g5c7DzMxMnnZh4ZUkhYrqSY0rHmnzyMydW8mBwsGjbQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vIBy8QBowBYZ6c4wTJeiTkeZZ3rBM64WqnXkHigySw8=;
 b=NR+ALs6okeXu7dtkr7pF8B7BIDy9LXA4clEVyPo7OHLKid4HxwQCPa6HuTiDDxEHk6LT1A9FfYuaNZDF//KamLzgVZnaUUBnezUlWuAH6zBLqG7DQy1KYcNd62eZkEb0CtH7SKWfVJLs1jTqL+vA3jr3TWWrHHeM6SbHyN1lqL+sVexlS2Spqx5JVYVgGFaf6aVbTt6irajHTJpcgjl5DCyoyXCQawyLUGXKgVeuBkdh7QYfOAU8kjH32Z2MPE6ORFCDwVcond62doT2494uDpknItOSiuq3S/LnYgdhmKt7LehAavC9htErHbgv7D1l0XH3IiGVzCBhjc9daIx9pA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vIBy8QBowBYZ6c4wTJeiTkeZZ3rBM64WqnXkHigySw8=;
 b=FxrmF8KpbY8OMt9twCQcmBtIxQj5cssKdQJWsbfwwjf9LZ6+wqZgxnrMzBXMKkjrx/97XNSxcVTeU9iV1fn8YxDLaXlDFB+NgnzDSeKR8p1cOZ/EAxELUFAnFuguxJt4eaX1z3OTPb9ONhgsB6iCik+9jIrYsV0EUMufMGS4rZE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: [XEN][PATCH v6 14/19] common/device_tree: Add rwlock for dt_host
Date: Tue, 2 May 2023 16:36:45 -0700
Message-ID: <20230502233650.20121-15-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT047:EE_|CH0PR12MB5267:EE_
X-MS-Office365-Filtering-Correlation-Id: 8a7aefba-4bc9-41ea-9579-08db4b663768
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	bY3CIOnIcI4X9pEmoRAiF0hbj8IBsn/tdl4KhSa8MyxKKPCRWysPgd+5Su4M60i8YtOI3p0Ob5N0HQGmeayfBYiS1j+xRM1O5UHiP3eVLwM3SB4Xert+T/Ta3TJvAWgRQMAju3f/bxjG975cMuNiwfCsys72Qfquwuc2QTVKY8iH9vAozLUVnqobOJiHJi2yGLIk5Uc73GNtigC7fOSTVU/dDDZn715IDTPsLtwj0f9YWiCBfZShzDp4cfOWCttgnVX7YmXIAIw35g8prowY3mh5dX7a/F6gAtEw+w5Zlg9N7nUY5KGlBINMtKrHyug7jsHHAceFgSKSfPKqhrLf4PsfetJAUldfao/+rkU4USgq+59DcH+RK5FR32N5VV3qNza4glve1w2pUBY4q0dwFuYCX9mb7RQ6FC212GdKc5CWVwGWnXtEvcK93whOoJcwYzlbnjyc1TGvF1DQIVDutwjYLMQgjyExTKPz+YpuxPWbd6+dWxzAmr6e/9gbYBRVEL+avVZOYBcFpS7SG8bE7kG+4h0SrM3RbvEpfrZXsLCNzadFqjC18NLaBVYyFbUh+DDMJRaoBnvU70sxgoqXCcbaM55SVGF1qXxgWB+eSaT7PMuJTw+FS0kTxfaJnkEP2Hr4WPdZP4lENxwUyzA5B+qZoRC2I374aMQUTVrLvVueswPoDjcuhl+HCx7F/QWwFH7COFZnePKschJzE3miL3N+gGL7wabVZMpH+GqMs70=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(346002)(376002)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(81166007)(26005)(356005)(82740400003)(1076003)(186003)(83380400001)(2616005)(336012)(36860700001)(426003)(47076005)(36756003)(44832011)(40460700003)(2906002)(40480700001)(41300700001)(54906003)(5660300002)(82310400005)(478600001)(6666004)(70586007)(6916009)(70206006)(4326008)(8936002)(86362001)(316002)(8676002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:39.1092
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8a7aefba-4bc9-41ea-9579-08db4b663768
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR12MB5267

Dynamic programming ops will modify dt_host, and there might be other
functions browsing dt_host at the same time. To avoid race conditions, add
an rwlock for browsing dt_host during runtime.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/device_tree.c              |  3 +++
 xen/drivers/passthrough/device_tree.c | 18 ++++++++++++++++++
 xen/include/xen/device_tree.h         |  6 ++++++
 3 files changed, 27 insertions(+)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 426a809f42..48cb68bcd9 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -2109,7 +2109,10 @@ int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
 
     dt_dprintk(" <- unflatten_device_tree()\n");
 
+    /* Init r/w lock for host device tree. */
+    rwlock_init(&dt_host->lock);
+
     return 0;
 }
 
 static void dt_alias_add(struct dt_alias_prop *ap,
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 46f9080c8f..e3be8e3f91 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -111,6 +111,8 @@ int iommu_release_dt_devices(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return 0;
 
+    read_lock(&dt_host->lock);
+
     list_for_each_entry_safe(dev, _dev, &hd->dt_devices, domain_list)
     {
         rc = iommu_deassign_dt_device(d, dev);
@@ -118,10 +120,14 @@ int iommu_release_dt_devices(struct domain *d)
         {
             dprintk(XENLOG_ERR, "Failed to deassign %s in domain %u\n",
                     dt_node_full_name(dev), d->domain_id);
+
+            read_unlock(&dt_host->lock);
             return rc;
         }
     }
 
+    read_unlock(&dt_host->lock);
+
     return 0;
 }
 
@@ -245,6 +251,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
     int ret;
     struct dt_device_node *dev;
 
+    read_lock(&dt_host->lock);
+
     switch ( domctl->cmd )
     {
     case XEN_DOMCTL_assign_device:
@@ -294,7 +302,10 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         spin_unlock(&dtdevs_lock);
 
         if ( d == dom_io )
+        {
+            read_unlock(&dt_host->lock);
             return -EINVAL;
+        }
 
         ret = iommu_add_dt_device(dev);
         if ( ret < 0 )
@@ -310,6 +321,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
             printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign \"%s\""
                    " to dom%u failed (%d)\n",
                    dt_node_full_name(dev), d->domain_id, ret);
+
+        read_unlock(&dt_host->lock);
         break;
 
     case XEN_DOMCTL_deassign_device:
@@ -328,11 +341,15 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
             break;
 
         ret = xsm_deassign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
+
         if ( ret )
             break;
 
         if ( d == dom_io )
+        {
+            read_unlock(&dt_host->lock);
             return -EINVAL;
+        }
 
         ret = iommu_deassign_dt_device(d, dev);
 
@@ -347,5 +364,6 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         break;
     }
 
+    read_unlock(&dt_host->lock);
     return ret;
 }
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index d6366d3dac..e616dd7e9c 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -18,6 +18,7 @@
 #include <xen/string.h>
 #include <xen/types.h>
 #include <xen/list.h>
+#include <xen/rwlock.h>
 
 #define DEVICE_TREE_MAX_DEPTH 16
 
@@ -106,6 +107,11 @@ struct dt_device_node {
     struct list_head domain_list;
 
     struct device dev;
+
+    /*
+     * Lock that protects r/w updates to unflattened device tree i.e. dt_host.
+     */
+    rwlock_t lock;
 };
 
 #define dt_to_dev(dt_node)  (&(dt_node)->dev)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:37:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:37:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528871.822633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ7-0000YY-7U; Tue, 02 May 2023 23:37:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528871.822633; Tue, 02 May 2023 23:37:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ6-0000VP-Im; Tue, 02 May 2023 23:37:48 +0000
Received: by outflank-mailman (input) for mailman id 528871;
 Tue, 02 May 2023 23:37:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ4-0005Si-IQ
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:46 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20605.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 54ed58db-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:43 +0200 (CEST)
Received: from BN9PR03CA0357.namprd03.prod.outlook.com (2603:10b6:408:f6::32)
 by PH8PR12MB7026.namprd12.prod.outlook.com (2603:10b6:510:1bd::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 23:37:37 +0000
Received: from BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f6:cafe::c6) by BN9PR03CA0357.outlook.office365.com
 (2603:10b6:408:f6::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:37 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT047.mail.protection.outlook.com (10.13.177.220) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:37 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:36 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 16:37:36 -0700
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:35 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54ed58db-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lnzedPcLVf9AgNtKSrsWnxlj8wg/n0b35HQajSB8VeMk4/pP7cndeadLyZowOcl2cp8NM5Php0iB6EEijSr27Gcr6oW5YxBvmLqan8hePFHoBA5z8xdMPfKDTamtWJK6IpxtnoDCI9SeL+WK7LJTuFWhKNAIRL+NdnJ0FBQ/cQyBL8XbJIUfuBHGHgrlwky19/krsIwZ9Skv1eUOqyI1tdoS/SIGj21ZpfUacUZRD4ipLXElLbe4+FHqUPd47hstrCmvywKSfZHcubZ15rpdxhjTXVTD4X5T3SiSuexWs5l5ITZkTiXtg8e+NkVOv6HoD202Y8xKvgoADEnXjVu7kQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rIRy40Pphy3puPxLyarufMzgZUdXxhJqsOpQVvIQDWY=;
 b=hUh4Ds6KIjy/4l/tB4nJ3FwOgDFMEgiZmqb4yxVATYO+IOx+kG3xHZS8zilSoIVmBglcIUo6LhW8N0KL+ZE66jVfkv6sI4Ei+3us6nzoco9EAKtcuK3sFkpvZwkdFRncjFHPfWEbAj9dH4fRhCiwmjM+HqHgsDkvQhOl2UFtFm5QxluZsBkNmu9QetIzLlQ77P0p3Tumwv613ZgUCs+gAubxx/yQiYTvYWcXJw3BiYiTM9kHFNkdGJYydGQcTMviMfyfSViCujx29GugZbQ2twaF4jbnYsBIWP84S2dICBi7p7hwqPwN0rt1AoDgDjLDJ1hxf6atjmg9BbgDWeV8Kg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rIRy40Pphy3puPxLyarufMzgZUdXxhJqsOpQVvIQDWY=;
 b=Q/vuMHZ8fOVxvR+uIcHcM5KH/SxWuak/R7bE8S7aLTxzJu0aA69UCBk4z+RcfrbI/eNljptrJ1S/C/+1SkX5AUO6FUgc6h5GxXXpkyB/cYLkV2Law6ezhiZ6LuNTcuQ/N0Eg9OBAgvy0i+4aQrIdO3C1mjAgdEmVTeHyi1cRViQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN][PATCH v6 11/19] xen/iommu: Introduce iommu_remove_dt_device()
Date: Tue, 2 May 2023 16:36:42 -0700
Message-ID: <20230502233650.20121-12-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT047:EE_|PH8PR12MB7026:EE_
X-MS-Office365-Filtering-Correlation-Id: 2bca90a4-d19f-4adf-e3c7-08db4b66364c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Mzjbtg8HLssdAgmmIu3wUMMiNi91uy0HTN5DYlJM0B0ON8jJCxHlcp05X1gONEuw9Oc9aAAkv+VsvX5rXJB+IDtvfdptl0OPABfZkC8hOeVvmaLUJj/uHlvze+znXRGJvmukIUrLfu/0HUmPlImEE9I2g7+JnNZBX0bRZiw9zVDjP1BHgMaMysjnJlSNJE8R2rBETEiD2OthCc7X1SmBVh97APnSTMDBTM96YtBhuWENBFZru89Y31VeMxXbo9PjDO1ZQV5xAU3lNvgXjSDG+r1TclAGDFsRaEgg8Sfs4U3cfCRgubWjassDtP/MY5RbsqS+mTrXHFO7VyLsPZQMDO8xA4CdJ3xZRHayJGBxf1uGpb15h/Rm01w5QJ5v6S2+99dN2sy/n3JDj3er/Adl+xeZd6Kp+sUKAE+xMbUcOuLhvqn9LCegjapwsNzrpWiybDx0SuOkkNLfwWcwjGvdTSe9k0KJpLXmH30clwUls/CAwOMlIV5Gb380W9oMSk0+hysSA2uq7C0APc6XI5pyyej24WoLWviAjsjbITzBwqI8h4+mrZTsWiy/tWaaJPNKC6QEGLZ2SUNpBRvWZSYzluTAXGClYkmSpmyImqPDfHFsQZRGKL/AVXBDTEEryo/lAtTmPHUsjcxD4OGjKX1CSln//eDn2adIOoy9LmYcQPKQQ4U37/jgo3Av3tIChk5j4Z94PV5Nm8WrQ3GkreUCWXNSw6vt3wp/qXUzN2jeerY=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(376002)(39860400002)(396003)(451199021)(46966006)(40470700004)(36840700001)(2616005)(83380400001)(478600001)(36860700001)(186003)(4326008)(54906003)(70586007)(70206006)(6916009)(47076005)(426003)(26005)(6666004)(336012)(1076003)(44832011)(316002)(40460700003)(8676002)(82740400003)(8936002)(5660300002)(356005)(41300700001)(2906002)(40480700001)(81166007)(82310400005)(86362001)(36756003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:37.2500
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2bca90a4-d19f-4adf-e3c7-08db4b66364c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT047.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB7026

Introduce iommu_remove_dt_device() to remove a master device from the IOMMU.
This will be helpful when removing overlay nodes via dynamic programming at
run time.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/drivers/passthrough/device_tree.c | 41 +++++++++++++++++++++++++++
 xen/include/xen/iommu.h               |  2 ++
 2 files changed, 43 insertions(+)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index f3867ef1a6..46f9080c8f 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -125,6 +125,47 @@ int iommu_release_dt_devices(struct domain *d)
     return 0;
 }
 
+int iommu_remove_dt_device(struct dt_device_node *np)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    struct device *dev = dt_to_dev(np);
+    int rc;
+
+    if ( !ops )
+        return -EOPNOTSUPP;
+
+    spin_lock(&dtdevs_lock);
+
+    if ( iommu_dt_device_is_assigned_locked(np) )
+    {
+        rc = -EBUSY;
+        goto fail;
+    }
+
+    /*
+     * The driver which supports generic IOMMU DT bindings must have this
+     * callback implemented.
+     */
+    if ( !ops->remove_device )
+    {
+        rc = -EOPNOTSUPP;
+        goto fail;
+    }
+
+    /*
+     * Remove the master device from the IOMMU if the latter is present and
+     * available. The driver is responsible for removing the is_protected flag.
+     */
+    rc = ops->remove_device(0, dev);
+
+    if ( !rc )
+        iommu_fwspec_free(dev);
+
+fail:
+    spin_unlock(&dtdevs_lock);
+    return rc;
+}
+
 int iommu_add_dt_device(struct dt_device_node *np)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 76add226ec..6ba8d73966 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -219,6 +219,8 @@ int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev);
 int iommu_dt_domain_init(struct domain *d);
 int iommu_release_dt_devices(struct domain *d);
 
+int iommu_remove_dt_device(struct dt_device_node *np);
+
 /*
  * Helper to add master device to the IOMMU using generic IOMMU DT bindings.
  *
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:38:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:38:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528872.822640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ8-0000vu-EI; Tue, 02 May 2023 23:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528872.822640; Tue, 02 May 2023 23:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ8-0000ql-12; Tue, 02 May 2023 23:37:50 +0000
Received: by outflank-mailman (input) for mailman id 528872;
 Tue, 02 May 2023 23:37:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ5-0005Si-IX
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:47 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 553c6b0d-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:44 +0200 (CEST)
Received: from BN9PR03CA0742.namprd03.prod.outlook.com (2603:10b6:408:110::27)
 by MN0PR12MB5714.namprd12.prod.outlook.com (2603:10b6:208:371::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 23:37:40 +0000
Received: from BN8NAM11FT063.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:110:cafe::24) by BN9PR03CA0742.outlook.office365.com
 (2603:10b6:408:110::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.32 via Frontend
 Transport; Tue, 2 May 2023 23:37:40 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT063.mail.protection.outlook.com (10.13.177.110) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.22 via Frontend Transport; Tue, 2 May 2023 23:37:40 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:39 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:38 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 553c6b0d-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jf+Pl77K6re/uAAOe5J4d4QO0tluyAQnYENz1B5K2kLmkfK2zC954matqkxE/DTpmsorFJIonpoTDzx7fOFWprJ9S7Bvqll8TlSBiH4BnemvzvPeyso+0rpNmy7MGyKe9KPnttUlKYLAVY/2z3e+s6elSp2vRTD3rB8t2rKpgXyLwAO0SUN8+5T1AjE6mtISgWvX1r6pQ6RuqfvZv2DAmJkAlqlM4YY0cf9Tk9FoYwTlozNsKDQs6j9tM6mYER2uQfX0AebVFv5gdx6amuzpHtDwMbuArX3JLSIkG1n6SLSdKfzjjzU7bJ3iXCm8iSc3RTRVKXJQ+0+aFAH/H26Zhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6lA6+5GWeGyVJ/eIdTxBUhZWKw1xIAoedVb14517C6o=;
 b=R6TNLe8+2RrL5HKo7lQY9ChtM6wPoQjqI8QVsU8yacg70PslIwv7gsqU7yDzeNGVO1WSXiUFFloIUrWWvnMlYozfoc8SN5SfQA0GwUFJ5BDkYb1Twasrbq0iNx3vOf7QE7rKjuK3+8jUJEnBaJ5oU3w0i9ZiUE3HRoZ47JfaXOTkDQdhjJMEcpKmY2Sdwgz9BgbM37J63dDvfPx+YoeoIRPYoYjQH5CHPyaQPk8E4aTG1bdELaS2K3smzybPrlYqZOBA4VMScnkxL95NYvLkofVWX5+lHA2D1dkU+gwfYt5e923k9TK46aNuqqBW7vzxT5JbER2Z3W1Y/qhI7QUEUg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6lA6+5GWeGyVJ/eIdTxBUhZWKw1xIAoedVb14517C6o=;
 b=Qpbpu8+IF1xpezD7wZX6vbBwY7w0HXK28rUpruhu9xKRMEx7yreFz58Lf2T5UN2N+v0m5Horm7pVGooqFKqAXF7r991YNQbfOlfKUIZnUigIKJ+FwxKz9CJyPxMPkOXvR7mfmvE8FrKnYtmjopNVAfQXEAD1uGNQGY3XumDNZmU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: [XEN][PATCH v6 15/19] xen/arm: Implement device tree node removal functionalities
Date: Tue, 2 May 2023 16:36:46 -0700
Message-ID: <20230502233650.20121-16-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT063:EE_|MN0PR12MB5714:EE_
X-MS-Office365-Filtering-Correlation-Id: a9d8e0b7-d6c2-42a9-5f77-08db4b66380b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ObIVh6nvqFLs+CfF0txS2yj8MSGrx3IkESWWOS7Xihpa/omLs9Z9DE8hsoMksIvtKx0wYkWBMOeby0wVb1AaD2Nxasd/fA4vpgtHlVtsZMIDKnhMRYN5huYQf+vZF6zHJ3uxhxbXhVVcC7wlIavLfsBYoteJ3fgYy1m8lwwt3T7gCyFMT1WAT2VHCzEXFVrX4u/ohnQVHcTDD/yI2MZ+ve9Bj+itFG+yWdtj2FhWLTfpfOQ5K1AwmF5egPx9yHffC7IpiW7dDyrntp34WUZn5zUzu8ivEYK4VMWd4zRsG3198PLBcyIB3k9/WPRIFiV4N5EuHGmIcdcVJlNoREsELzT6iociJr01r+kj6SyCNRoesUFHIQCKD+d3Gz3PEWNGi7Lj1s8OnEvXlIRE1VVfWGRe8sslB4bOswgD8hOkJc029sFq1Ma+flo/M/PmvdF6zyMf5zfmL3+ORytITDp5xsVGsy3bYncKMVn8GbuTXjFxkWFUrpiIhfDNu46TtPkNzBZE1jsQnwGHC+qQgeaCAffvndxLlY54DegzisCl7N2oKgmmxdM/A+oS1PndxHWIwxUFpULO7HJvqEtKkd4ycf5Mp8DbL1rGea1tPF5Obu34rEn0hqKDeOBlbzLC4UaXM9rhs8ZxpQ7WhrJT7sKCUZYjS1ivbWQqzeJTapEjNWsjVuQodkj7QzUZ75wLFgspkTSIxMXmzirTDOlWSN7dA1tprGom/4NNEou1nBB6mWQqv+eqIjDx4j+2DHuZSC3r6RKTLITpQQL6MLc7a52+9g==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(39860400002)(376002)(136003)(451199021)(40470700004)(36840700001)(46966006)(478600001)(36860700001)(36756003)(316002)(5660300002)(2906002)(30864003)(40460700003)(8936002)(8676002)(70206006)(40480700001)(86362001)(4326008)(82310400005)(70586007)(81166007)(6916009)(336012)(356005)(44832011)(41300700001)(82740400003)(83380400001)(426003)(47076005)(54906003)(1076003)(186003)(26005)(2616005)(6666004)(403724002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:40.1605
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a9d8e0b7-d6c2-42a9-5f77-08db4b66380b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT063.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB5714

Introduce sysctl XEN_SYSCTL_dt_overlay to remove device-tree nodes added using
device tree overlay.

xl dt-overlay remove file.dtbo:
    Removes all the nodes in a given dtbo.
    First, it removes IRQ permissions and MMIO accesses. Next, it finds the
    nodes in dt_host and deletes the device node entries from dt_host.

    A node is removed only if it is not in use by any domain other than dom0
    or domIO.

Also, add an overlay_track struct to keep track of nodes added through device
tree overlays. overlay_track holds dt_host_new, the unflattened form of the
updated fdt, along with the names of the overlay nodes. When a node is removed,
the memory used by overlay_track for that overlay node is freed as well.

Nested overlay removal is supported in a sequential manner only, i.e. if
overlay_child nests under overlay_parent, the user is expected to remove
overlay_child first and then overlay_parent.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/arch/arm/sysctl.c        |  16 +-
 xen/common/Makefile          |   1 +
 xen/common/dt-overlay.c      | 419 +++++++++++++++++++++++++++++++++++
 xen/include/public/sysctl.h  |  23 ++
 xen/include/xen/dt-overlay.h |  58 +++++
 5 files changed, 516 insertions(+), 1 deletion(-)
 create mode 100644 xen/common/dt-overlay.c
 create mode 100644 xen/include/xen/dt-overlay.h

diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index b0a78a8b10..456358166c 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -9,6 +9,7 @@
 
 #include <xen/types.h>
 #include <xen/lib.h>
+#include <xen/dt-overlay.h>
 #include <xen/errno.h>
 #include <xen/hypercall.h>
 #include <public/sysctl.h>
@@ -21,7 +22,20 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 long arch_do_sysctl(struct xen_sysctl *sysctl,
                     XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+
+    switch ( sysctl->cmd )
+    {
+    case XEN_SYSCTL_dt_overlay:
+        ret = dt_sysctl(&sysctl->u.dt_overlay);
+        break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    return ret;
 }
 
 /*
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 46049eac35..e7e96b1087 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
 obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
+obj-$(CONFIG_OVERLAY_DTB) += dt-overlay.o
 obj-y += event_2l.o
 obj-y += event_channel.o
 obj-y += event_fifo.o
diff --git a/xen/common/dt-overlay.c b/xen/common/dt-overlay.c
new file mode 100644
index 0000000000..b89cceab84
--- /dev/null
+++ b/xen/common/dt-overlay.c
@@ -0,0 +1,419 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/common/dt-overlay.c
+ *
+ * Device tree overlay support in Xen.
+ *
+ * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
+ * Written by Vikram Garhwal <vikram.garhwal@amd.com>
+ *
+ */
+#include <asm/domain_build.h>
+#include <xen/dt-overlay.h>
+#include <xen/guest_access.h>
+#include <xen/iocap.h>
+#include <xen/xmalloc.h>
+
+static LIST_HEAD(overlay_tracker);
+static DEFINE_SPINLOCK(overlay_lock);
+
+/* Find last descendants of the device_node. */
+static struct dt_device_node *
+                find_last_descendants_node(struct dt_device_node *device_node)
+{
+    struct dt_device_node *child_node;
+
+    for ( child_node = device_node->child; child_node->sibling != NULL;
+          child_node = child_node->sibling );
+
+    /* If the last child_node also has children. */
+    if ( child_node->child )
+        child_node = find_last_descendants_node(child_node);
+
+    return child_node;
+}
+
+static int dt_overlay_remove_node(struct dt_device_node *device_node)
+{
+    struct dt_device_node *np;
+    struct dt_device_node *parent_node;
+    struct dt_device_node *device_node_last_descendant = device_node->child;
+
+    parent_node = device_node->parent;
+
+    if ( parent_node == NULL )
+    {
+        dt_dprintk("%s's parent node not found\n", device_node->name);
+        return -EFAULT;
+    }
+
+    np = parent_node->child;
+
+    if ( np == NULL )
+    {
+        dt_dprintk("parent node %s has no children\n", parent_node->name);
+        return -EFAULT;
+    }
+
+    /* If node to be removed is only child node or first child. */
+    if ( !dt_node_cmp(np->full_name, device_node->full_name) )
+    {
+        parent_node->child = np->sibling;
+
+        /*
+         * Iterate over all child nodes of device_node. Given that we are
+         * removing the parent node, we need to remove all its descendants too.
+         */
+        if ( device_node_last_descendant )
+        {
+            device_node_last_descendant =
+                                        find_last_descendants_node(device_node);
+            parent_node->allnext = device_node_last_descendant->allnext;
+        }
+        else
+            parent_node->allnext = np->allnext;
+
+        return 0;
+    }
+
+    for ( np = parent_node->child; np->sibling != NULL; np = np->sibling )
+    {
+        if ( !dt_node_cmp(np->sibling->full_name, device_node->full_name) )
+        {
+            /* Found the node. Now we remove it. */
+            np->sibling = np->sibling->sibling;
+
+            if ( np->child )
+                np = find_last_descendants_node(np);
+
+            /*
+             * Iterate over all child nodes of device_node. Given that we are
+             * removing the parent node, we need to remove all its descendants
+             * too.
+             */
+            if ( device_node_last_descendant )
+                device_node_last_descendant =
+                                        find_last_descendants_node(device_node);
+
+            if ( device_node_last_descendant )
+                np->allnext = device_node_last_descendant->allnext;
+            else
+                np->allnext = np->allnext->allnext;
+
+            break;
+        }
+    }
+
+    return 0;
+}
+
+/* Basic sanity check for the dtbo tool stack provided to Xen. */
+static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
+{
+    if ( (fdt_totalsize(overlay_fdt) != overlay_fdt_size) ||
+          fdt_check_header(overlay_fdt) )
+    {
+        printk(XENLOG_ERR "The overlay FDT is not a valid Flat Device Tree\n");
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+/* Count number of nodes till one level of __overlay__ tag. */
+static unsigned int overlay_node_count(const void *overlay_fdt)
+{
+    unsigned int num_overlay_nodes = 0;
+    int fragment;
+
+    fdt_for_each_subnode(fragment, overlay_fdt, 0)
+    {
+        int subnode;
+        int overlay;
+
+        overlay = fdt_subnode_offset(overlay_fdt, fragment, "__overlay__");
+
+        /*
+         * overlay value can be < 0. But the fdt_for_each_subnode() loop checks
+         * for overlay >= 0, so no separate overlay >= 0 check is needed here.
+         */
+        fdt_for_each_subnode(subnode, overlay_fdt, overlay)
+        {
+            num_overlay_nodes++;
+        }
+    }
+
+    return num_overlay_nodes;
+}
+
+static int handle_remove_irq_iommu(struct dt_device_node *device_node)
+{
+    int rc = 0;
+    struct domain *d = hardware_domain;
+    domid_t domid;
+    unsigned int naddr, len;
+    unsigned int i, nirq;
+
+    domid = dt_device_used_by(device_node);
+
+    dt_dprintk("Checking if node %s is used by any domain\n",
+               device_node->full_name);
+
+    /* Remove the node only if it is assigned to dom0 or domIO. */
+    if ( domid != 0 && domid != DOMID_IO )
+    {
+        printk(XENLOG_ERR "Device %s is being used by domain %u. Removing nodes failed\n",
+               device_node->full_name, domid);
+        return -EINVAL;
+    }
+
+    dt_dprintk("Removing node: %s\n", device_node->full_name);
+
+    nirq = dt_number_of_irq(device_node);
+
+    /* Remove IRQ permission */
+    for ( i = 0; i < nirq; i++ )
+    {
+        rc = platform_get_irq(device_node, i);
+        if ( rc < 0 )
+        {
+            printk(XENLOG_ERR "Failed to get IRQ num for device node %s\n",
+                   device_node->full_name);
+            return -EINVAL;
+        }
+
+        if ( irq_access_permitted(d, rc) == false )
+        {
+            printk(XENLOG_ERR "IRQ %d is not routed to domain %u\n", rc,
+                   domid);
+            return -EINVAL;
+        }
+        /*
+         * TODO: We don't handle shared IRQs for now. So, it is assumed that
+         * the IRQ is not shared with other devices.
+         */
+        rc = irq_deny_access(d, rc);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "unable to revoke access for irq %u for %s\n",
+                   i, device_node->full_name);
+            return rc;
+        }
+    }
+
+    /* Check if iommu property exists. */
+    if ( dt_get_property(device_node, "iommus", &len) )
+    {
+        rc = iommu_remove_dt_device(device_node);
+        if ( rc != 0 && rc != -ENXIO )
+            return rc;
+    }
+
+    naddr = dt_number_of_address(device_node);
+
+    /* Remove mmio access. */
+    for ( i = 0; i < naddr; i++ )
+    {
+        uint64_t addr, size;
+
+        rc = dt_device_get_address(device_node, i, &addr, &size);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
+                   i, dt_node_full_name(device_node));
+            return rc;
+        }
+
+        rc = iomem_deny_access(d, paddr_to_pfn(addr),
+                               paddr_to_pfn(PAGE_ALIGN(addr + size - 1)));
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Unable to remove dom%d access to"
+                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                   d->domain_id,
+                   addr & PAGE_MASK, PAGE_ALIGN(addr + size) - 1);
+            return rc;
+        }
+
+    }
+
+    return rc;
+}
+
+/* Removes all descendants of the given node. */
+static int remove_all_descendant_nodes(struct dt_device_node *device_node)
+{
+    int rc = 0;
+    struct dt_device_node *child_node;
+
+    for ( child_node = device_node->child; child_node != NULL;
+         child_node = child_node->sibling )
+    {
+        if ( child_node->child )
+        {
+            rc = remove_all_descendant_nodes(child_node);
+            if ( rc )
+                return rc;
+        }
+
+        rc = handle_remove_irq_iommu(child_node);
+        if ( rc )
+            return rc;
+    }
+
+    return rc;
+}
+
+/* Remove nodes from dt_host. */
+static int remove_nodes(const struct overlay_track *tracker)
+{
+    int rc = 0;
+    struct dt_device_node *overlay_node;
+    unsigned int j;
+
+    for ( j = 0; j < tracker->num_nodes; j++ )
+    {
+        overlay_node = (struct dt_device_node *)tracker->nodes_address[j];
+        if ( overlay_node == NULL )
+        {
+            printk(XENLOG_ERR "Tracked node %u is not present in the tree. Removing nodes failed\n",
+                   j);
+            return -EINVAL;
+        }
+
+        rc = remove_all_descendant_nodes(overlay_node);
+        if ( rc )
+            return rc;
+
+        /* All child nodes are unmapped. Now remove the node itself. */
+        rc = handle_remove_irq_iommu(overlay_node);
+        if ( rc )
+            return rc;
+
+        write_lock(&dt_host->lock);
+
+        rc = dt_overlay_remove_node(overlay_node);
+        if ( rc )
+        {
+            write_unlock(&dt_host->lock);
+
+            return rc;
+        }
+
+        write_unlock(&dt_host->lock);
+    }
+
+    return rc;
+}
+
+/*
+ * First find the device node to remove, check that the device is not in use by
+ * any domain other than dom0 or domIO, and finally remove it from dt_host. The
+ * IOMMU side is already taken care of when destroying the domain.
+ */
+static long handle_remove_overlay_nodes(void *overlay_fdt,
+                                        uint32_t overlay_fdt_size)
+{
+    int rc = 0;
+    struct overlay_track *entry, *temp, *track;
+    bool found_entry = false;
+
+    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
+    if ( rc )
+        return rc;
+
+    if ( overlay_node_count(overlay_fdt) == 0 )
+        return -EINVAL;
+
+    spin_lock(&overlay_lock);
+
+    /*
+     * First check that the dtbo is correct, i.e. it should be one of the dtbos
+     * that was used when dynamically adding the node.
+     * Limitation: Cases with the same node names but different properties are
+     * not supported currently. We rely on the user to provide the same dtbo
+     * as was used when adding the nodes.
+     */
+    list_for_each_entry_safe( entry, temp, &overlay_tracker, entry )
+    {
+        if ( memcmp(entry->overlay_fdt, overlay_fdt, overlay_fdt_size) == 0 )
+        {
+            track = entry;
+            found_entry = true;
+            break;
+        }
+    }
+
+    if ( found_entry == false )
+    {
+        rc = -EINVAL;
+
+        printk(XENLOG_ERR "Cannot find any matching tracker for the input dtbo."
+               " Removing nodes is only supported for previously added dtbos."
+               " Please provide the dtbo that was used to add the nodes.\n");
+        goto out;
+
+    }
+
+    rc = remove_nodes(entry);
+
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Removing node failed\n");
+        goto out;
+    }
+
+    list_del(&entry->entry);
+
+    xfree(entry->dt_host_new);
+    xfree(entry->fdt);
+    xfree(entry->overlay_fdt);
+
+    xfree(entry->nodes_address);
+
+    xfree(entry);
+
+out:
+    spin_unlock(&overlay_lock);
+    return rc;
+}
+
+long dt_sysctl(struct xen_sysctl_dt_overlay *op)
+{
+    long ret;
+    void *overlay_fdt;
+
+    if ( op->overlay_fdt_size == 0 || op->overlay_fdt_size > KB(500) )
+        return -EINVAL;
+
+    overlay_fdt = xmalloc_bytes(op->overlay_fdt_size);
+
+    if ( overlay_fdt == NULL )
+        return -ENOMEM;
+
+    ret = copy_from_guest(overlay_fdt, op->overlay_fdt, op->overlay_fdt_size);
+    if ( ret )
+    {
+        gprintk(XENLOG_ERR, "copy from guest failed\n");
+        xfree(overlay_fdt);
+
+        return -EFAULT;
+    }
+
+    switch ( op->overlay_op )
+    {
+    case XEN_SYSCTL_DT_OVERLAY_REMOVE:
+        ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
+        xfree(overlay_fdt);
+
+        break;
+
+    default:
+        xfree(overlay_fdt);
+        break;
+    }
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd0..28f7fba98b 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -1057,6 +1057,24 @@ typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
 #endif
 
+#if defined(__arm__) || defined (__aarch64__)
+/*
+ * XEN_SYSCTL_dt_overlay
+ * Performs addition/removal of device tree nodes under a parent node using a
+ * dtbo. This is done in three steps:
+ *  - Adds/Removes the nodes from dt_host.
+ *  - Adds/Removes IRQ permission for the nodes.
+ *  - Adds/Removes MMIO accesses.
+ */
+struct xen_sysctl_dt_overlay {
+    XEN_GUEST_HANDLE_64(void) overlay_fdt;  /* IN: overlay fdt. */
+    uint32_t overlay_fdt_size;              /* IN: Overlay dtb size. */
+#define XEN_SYSCTL_DT_OVERLAY_ADD                   1
+#define XEN_SYSCTL_DT_OVERLAY_REMOVE                2
+    uint8_t overlay_op;                     /* IN: Add or remove. */
+};
+#endif
+
 struct xen_sysctl {
     uint32_t cmd;
 #define XEN_SYSCTL_readconsole                    1
@@ -1087,6 +1105,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_livepatch_op                  27
 /* #define XEN_SYSCTL_set_parameter              28 */
 #define XEN_SYSCTL_get_cpu_policy                29
+#define XEN_SYSCTL_dt_overlay                    30
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -1117,6 +1136,10 @@ struct xen_sysctl {
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_sysctl_cpu_policy        cpu_policy;
 #endif
+
+#if defined(__arm__) || defined (__aarch64__)
+        struct xen_sysctl_dt_overlay        dt_overlay;
+#endif
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xen/dt-overlay.h b/xen/include/xen/dt-overlay.h
new file mode 100644
index 0000000000..5b369f8eb7
--- /dev/null
+++ b/xen/include/xen/dt-overlay.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/dt-overlay.h
+ *
+ * Device tree overlay support in Xen.
+ *
+ * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
+ * Written by Vikram Garhwal <vikram.garhwal@amd.com>
+ *
+ */
+#ifndef __XEN_DT_OVERLAY_H__
+#define __XEN_DT_OVERLAY_H__
+
+#include <xen/list.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/device_tree.h>
+#include <xen/rangeset.h>
+
+/*
+ * struct overlay_track describes information about nodes added through dtbo.
+ * @entry: List pointer.
+ * @dt_host_new: Unflattened form of the updated fdt.
+ * @fdt: Stores the fdt.
+ * @overlay_fdt: Stores the overlay dtbo used to add the nodes.
+ * @nodes_address: Stores the address of each added node.
+ * @num_nodes: Stores the total number of nodes in the overlay dtb.
+ */
+struct overlay_track {
+    struct list_head entry;
+    struct dt_device_node *dt_host_new;
+    void *fdt;
+    void *overlay_fdt;
+    unsigned long *nodes_address;
+    unsigned int num_nodes;
+};
+
+struct xen_sysctl_dt_overlay;
+
+#ifdef CONFIG_OVERLAY_DTB
+long dt_sysctl(struct xen_sysctl_dt_overlay *op);
+#else
+static inline long dt_sysctl(struct xen_sysctl_dt_overlay *op)
+{
+    return -ENOSYS;
+}
+#endif
+#endif /* __XEN_DT_OVERLAY_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:38:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:38:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528873.822650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ9-00017l-NO; Tue, 02 May 2023 23:37:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528873.822650; Tue, 02 May 2023 23:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzZ8-000150-TA; Tue, 02 May 2023 23:37:50 +0000
Received: by outflank-mailman (input) for mailman id 528873;
 Tue, 02 May 2023 23:37:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ6-0005Si-Om
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:48 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2060f.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::60f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56cde790-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:46 +0200 (CEST)
Received: from BN9PR03CA0251.namprd03.prod.outlook.com (2603:10b6:408:ff::16)
 by DM4PR12MB7647.namprd12.prod.outlook.com (2603:10b6:8:105::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Tue, 2 May
 2023 23:37:43 +0000
Received: from BN8NAM11FT101.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ff:cafe::8) by BN9PR03CA0251.outlook.office365.com
 (2603:10b6:408:ff::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:43 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT101.mail.protection.outlook.com (10.13.177.126) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:42 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:40 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:40 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56cde790-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d+CBlD4/N0WIl6ao3jV5P6Wv5qMbul26Yjh9mI5DwgaymAwiNKw/amRYWmNZGTfp3fzAjzcZxOOTBagXx6zCsqBe2j09bhcCT8qUR9nFYsCw5ayia2K3EzKm9z+c2ipeXZBpcSDGZaAk+Q7d9Htoh0EUxOhPCn8WiKJZle0AQSWfs/zNpnGwjlDif25uhC7XNH/at8OQUtnFaueowFB01OA7/aNKJT3WwUm/LLJNB18XuvNJgtbpVR7w1jn9XY9Cwy5KLHLV0H1Zqu+GVQWthMcMLEouOl0u5z814f1JTb7GIbDRj221ltOVe/YZ5k9dcHhKJhQckzZrLSZSTsW+Qw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DAz4bXIcLBaNxB7D8chHRJkgDHozUzgRvSx6fTSNQTA=;
 b=F7HyEkG+GLwlElxdZczQOhg9mmuvbLSMRxkaOk6v9qgRuZIvRupd+6El8aa1WkB+eDy78qsDEP3ZWq3PeeHMJQLJipyRtbzeZquFPSrFFX6Sluu8MlD9M0qmYs+DnAvyxS+VsAQOdmD16YNNTobdS6+ts1/DKfElwkd3xjShBt1g/NeA6g/MBd32CvU+IGqF3AIAJimRN1w43GtiBNTx6+wW6GEkS2K4fZvMZ0iogVma8zhuXpje8DyuaG8Rz0XJidZ2t+WVefKlayhCJGOoc+KOcKJsIFuZrfnrHjYUb889soCDbOY1adXtiyNZMmjnaEQxCN9d3Goa8keIakrX3A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DAz4bXIcLBaNxB7D8chHRJkgDHozUzgRvSx6fTSNQTA=;
 b=tvwnbBv0YwQoW7fIrtmBRyeA/qbvHPPniXYcQF5dZLJDTAj1ggmq3qLLyoBs75aSCqiDKKkW+9E50/THsMvuilQAJP+PSMeUzZ3ge5W9CnBLylvVHGemuCzZceD23O46wQuhIcXpfHg+Lamascd6G4qNHHk0eKi7qeUBaVSKsw4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Andrew Cooper <andrew.cooper3@citrix.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Julien
 Grall" <julien@xen.org>, Wei Liu <wl@xen.org>
Subject: [XEN][PATCH v6 16/19] xen/arm: Implement device tree node addition functionalities
Date: Tue, 2 May 2023 16:36:47 -0700
Message-ID: <20230502233650.20121-17-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT101:EE_|DM4PR12MB7647:EE_
X-MS-Office365-Filtering-Correlation-Id: 70cb53a0-6b2e-4327-0398-08db4b6639b3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1hPgD0kpSyOEMfq6/nPxjohfDV4v+5ckgSAvz0uL7Xh2ayAcX333ubGPBXhgr9M4Y8QQuaaD/t26ORibFv3saFNqXXJzWpW27YcQJdXklOyVITQyWTcWkKhkR3LpE0mhAatM2zqzScbsjodCUYigyhO9VYuT3qwbExXOuaLpzLJ7CNFkh/luYDSEqrI7kRig/0gNyKL1EORQcPgfNJ2iVBwGGv7wgoocxgoPLLAH6bhef78rD8caJ98SrFy6ipMK8A8VlCZS/1U4RFbfK+G8zYajluUuBnUHkzIiW6qInYciCaDLQ+lcNuNVhOIJRfstvPRyK265gKVcC/ikmgy2aDGdcHeJ6m2Epq4dW7LTia2pFWwFvLEK3OmZysf8kSMJOqZzE90ajzTSe7mDtva1cjQVKjHDTr72NjqJe9rTfIlj+nkwEuR1mdKe4VOS19YogYQIbEgLtqkvmE1hr1fD+SL6ZqDYAGRxqqvvumQWHkHgf4K8/9+0ujSnQGPZ874SMrUq51dMbv8nfHUUD6uov6aofwBfRMTr5ka2ndauqP6IOFjD9SAazAW2nZrKpzN88d5AkmQwHxFc9kT3XUtfZZZ1n4n+VpoH1gQ5gZsT4K+68kDYlt02Ye4+ocVAQPQdwPzuTQG45f1j81QG/hn8YbhDGyZ5Iodrqxh5Kx++bDNwVaoU9qNmlkLKj/BEPfq/4ztuuMhYPtl6VtGYLrzh9AAy6L/2XEfjheeDrOXE//Y=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(376002)(136003)(396003)(451199021)(40470700004)(36840700001)(46966006)(30864003)(66899021)(426003)(44832011)(4326008)(82310400005)(36860700001)(336012)(83380400001)(47076005)(2906002)(5660300002)(8936002)(8676002)(41300700001)(36756003)(316002)(186003)(26005)(70586007)(70206006)(6916009)(1076003)(356005)(81166007)(40460700003)(82740400003)(54906003)(86362001)(2616005)(40480700001)(478600001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:42.9525
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 70cb53a0-6b2e-4327-0398-08db4b6639b3
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT101.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB7647

Update the XEN_SYSCTL_dt_overlay sysctl to support adding dtbo nodes via
device tree overlays.

xl dt-overlay add file.dtbo:
    Each time overlay nodes are added from a .dtbo, a new fdt (a memcpy of
    device_tree_flattened) is created and updated with the overlay nodes. This
    updated fdt is then unflattened into dt_host_new. Next, check whether any
    of the overlay nodes already exist in dt_host. If they do not, locate the
    overlay nodes in dt_host_new, find each overlay node's parent in dt_host,
    and add the nodes as children under their parent in dt_host. Each node is
    attached as the last node under its target parent.

    Finally, add IRQs, add the device to the IOMMUs, set permissions and map
    MMIO for the overlay node.

When a node is added via an overlay, a new entry is allocated in overlay_track
to record the memory allocated for the overlay node. This makes it possible to
free that memory when the device tree node is removed.

The main purpose of this is to address the first part of dynamic programming,
i.e. making Xen aware of the new device tree node, which means updating
dt_host with the overlay node's information. Here we add/remove nodes from
dt_host and check/set IOMMU and IRQ permissions, but never map the nodes to
any domain. For now, mapping/unmapping only happens when a domU is
created/destroyed using "xl create".

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 xen/common/dt-overlay.c | 510 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 510 insertions(+)

diff --git a/xen/common/dt-overlay.c b/xen/common/dt-overlay.c
index b89cceab84..09ea46111b 100644
--- a/xen/common/dt-overlay.c
+++ b/xen/common/dt-overlay.c
@@ -33,6 +33,25 @@ static struct dt_device_node *
     return child_node;
 }
 
+/*
+ * Returns the node following the input node. If the node has children,
+ * return the last descendant's next node.
+ */
+static struct dt_device_node *
+dt_find_next_node(struct dt_device_node *dt, const struct dt_device_node *node)
+{
+    struct dt_device_node *np;
+
+    dt_for_each_device_node(dt, np)
+        if ( np == node )
+            break;
+
+    if ( np->child )
+        np = find_last_descendants_node(np);
+
+    return np->allnext;
+}
+
 static int dt_overlay_remove_node(struct dt_device_node *device_node)
 {
     struct dt_device_node *np;
@@ -106,6 +125,76 @@ static int dt_overlay_remove_node(struct dt_device_node *device_node)
     return 0;
 }
 
+static int dt_overlay_add_node(struct dt_device_node *device_node,
+                               const char *parent_node_path)
+{
+    struct dt_device_node *parent_node;
+    struct dt_device_node *next_node;
+
+    parent_node = dt_find_node_by_path(parent_node_path);
+
+    if ( parent_node == NULL )
+    {
+        dt_dprintk("Parent node %s not found. Overlay node will not be added\n",
+                   parent_node_path);
+        return -EINVAL;
+    }
+
+    /* If parent has no child. */
+    if ( parent_node->child == NULL )
+    {
+        next_node = parent_node->allnext;
+        device_node->parent = parent_node;
+        parent_node->allnext = device_node;
+        parent_node->child = device_node;
+    }
+    else
+    {
+        struct dt_device_node *np;
+        /* If the parent has at least one child node,
+         * iterate to the last child node of the parent.
+         */
+        for ( np = parent_node->child; np->sibling != NULL; np = np->sibling );
+
+        /* Iterate over all child nodes of np node. */
+        if ( np->child )
+        {
+            struct dt_device_node *np_last_descendant;
+
+            np_last_descendant = find_last_descendants_node(np);
+
+            next_node = np_last_descendant->allnext;
+            np_last_descendant->allnext = device_node;
+        }
+        else
+        {
+            next_node = np->allnext;
+            np->allnext = device_node;
+        }
+
+        device_node->parent = parent_node;
+        np->sibling = device_node;
+        np->sibling->sibling = NULL;
+    }
+
+    /* Iterate over all child nodes of device_node to add children too. */
+    if ( device_node->child )
+    {
+        struct dt_device_node *device_node_last_descendant;
+
+        device_node_last_descendant = find_last_descendants_node(device_node);
+        /* Plug next_node at the end of last children of device_node. */
+        device_node_last_descendant->allnext = next_node;
+    }
+    else
+    {
+        /* Now plug next_node at the end of device_node. */
+        device_node->allnext = next_node;
+    }
+
+    return 0;
+}
+
 /* Basic sanity check for the dtbo tool stack provided to Xen. */
 static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
 {
@@ -145,6 +234,82 @@ static unsigned int overlay_node_count(const void *overlay_fdt)
     return num_overlay_nodes;
 }
 
+/*
+ * overlay_get_nodes_info gets the full name, with path, of all the nodes one
+ * level below the __overlay__ tag. This is useful when checking nodes for
+ * duplication, i.e. a dtbo trying to add nodes which already exist in the tree.
+ */
+static int overlay_get_nodes_info(const void *fdto, char ***nodes_full_path,
+                                  unsigned int num_overlay_nodes)
+{
+    int fragment;
+    unsigned int node_num = 0;
+
+    *nodes_full_path = xzalloc_bytes(num_overlay_nodes * sizeof(char *));
+
+    if ( *nodes_full_path == NULL )
+        return -ENOMEM;
+
+    fdt_for_each_subnode(fragment, fdto, 0)
+    {
+        int target;
+        int overlay;
+        int subnode;
+        const char *target_path;
+
+        target = fdt_overlay_target_offset(device_tree_flattened, fdto,
+                                           fragment, &target_path);
+        if ( target < 0 )
+            return target;
+
+        if ( target_path == NULL )
+            return -EINVAL;
+
+        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
+
+        /*
+         * The overlay value can be < 0. In that case the
+         * fdt_for_each_subnode() loop below performs no iterations, so no
+         * explicit overlay >= 0 check is needed here.
+         */
+        fdt_for_each_subnode(subnode, fdto, overlay)
+        {
+            const char *node_name = NULL;
+            int node_name_len;
+            unsigned int target_path_len = strlen(target_path);
+            unsigned int node_full_name_len;
+
+            node_name = fdt_get_name(fdto, subnode, &node_name_len);
+
+            if ( node_name == NULL )
+                return node_name_len;
+
+            /*
+             * Magic number 2 is for adding '/' and '\0'. This is done to keep
+             * the node_full_path in the correct full node name format.
+             */
+            node_full_name_len = target_path_len + node_name_len + 2;
+
+            (*nodes_full_path)[node_num] = xmalloc_bytes(node_full_name_len);
+
+            if ( (*nodes_full_path)[node_num] == NULL )
+                return -ENOMEM;
+
+            memcpy((*nodes_full_path)[node_num], target_path, target_path_len);
+
+            (*nodes_full_path)[node_num][target_path_len] = '/';
+
+            memcpy((*nodes_full_path)[node_num] + target_path_len + 1,
+                    node_name, node_name_len);
+
+            (*nodes_full_path)[node_num][node_full_name_len - 1] = '\0';
+
+            node_num++;
+        }
+    }
+
+    return 0;
+}
+
 static int handle_remove_irq_iommu(struct dt_device_node *device_node)
 {
     int rc = 0;
@@ -371,6 +536,344 @@ out:
     return rc;
 }
 
+/*
+ * Handles IRQ and IOMMU mapping for the overlay_node and all descendants of the
+ * overlay_node.
+ */
+static int handle_add_irq_iommu(struct domain *d,
+                                struct dt_device_node *overlay_node)
+{
+    int rc;
+    unsigned int naddr, i, len;
+    struct dt_device_node *np;
+
+    /* First let's handle the interrupts. */
+    rc = handle_device_interrupts(d, overlay_node, false);
+    if ( rc < 0 )
+    {
+        printk(XENLOG_ERR "Failed to retrieve interrupts configuration\n");
+        return rc;
+    }
+
+    /* Check if iommu property exists. */
+    if ( dt_get_property(overlay_node, "iommus", &len) )
+    {
+        /* Add device to IOMMUs. */
+        rc = iommu_add_dt_device(overlay_node);
+        if ( rc < 0 )
+        {
+            printk(XENLOG_ERR "Failed to add %s to the IOMMU\n",
+                   dt_node_full_name(overlay_node));
+            return rc;
+        }
+    }
+
+    /* Set permissions. */
+    naddr = dt_number_of_address(overlay_node);
+
+    dt_dprintk("%s naddr = %u\n", dt_node_full_name(overlay_node), naddr);
+
+    /* Give permission to map MMIOs */
+    for ( i = 0; i < naddr; i++ )
+    {
+        uint64_t addr, size;
+
+        /*
+         * For now, we set skip_mapping, which means we only permit iomem
+         * access to the hardware_domain using iomem_permit_access() but
+         * never map it, as map_range_p2mt() will not be called.
+         */
+        struct map_range_data mr_data = { .d = d,
+                                          .p2mt = p2m_mmio_direct_c,
+                                          .skip_mapping = true
+                                        };
+
+        rc = dt_device_get_address(overlay_node, i, &addr, &size);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
+                   i, dt_node_full_name(overlay_node));
+            return rc;
+        }
+
+        rc = map_range_to_domain(overlay_node, addr, size, &mr_data);
+        if ( rc )
+            return rc;
+    }
+
+    /* Map IRQ and IOMMU for overlay_node's children. */
+    for ( np = overlay_node->child; np != NULL; np = np->sibling )
+    {
+        rc = handle_add_irq_iommu(d, np);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
+            return rc;
+        }
+    }
+
+    return rc;
+}
+
+/*
+ * Adds device tree nodes under the target node.
+ * We use tr->dt_host_new to unflatten the updated device_tree_flattened.
+ * This is done to avoid disturbing the device tree generation and the iomem
+ * region mappings to the hardware domain done by handle_node().
+ */
+static long handle_add_overlay_nodes(void *overlay_fdt,
+                                     uint32_t overlay_fdt_size)
+{
+    int rc, j, i;
+    struct dt_device_node *overlay_node;
+    struct overlay_track *tr = NULL;
+    char **nodes_full_path = NULL;
+    unsigned int new_fdt_size;
+
+    tr = xzalloc(struct overlay_track);
+    if ( tr == NULL )
+        return -ENOMEM;
+
+    new_fdt_size = fdt_totalsize(device_tree_flattened) +
+                   fdt_totalsize(overlay_fdt);
+
+    tr->fdt = xzalloc_bytes(new_fdt_size);
+    if ( tr->fdt == NULL )
+    {
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    tr->num_nodes = overlay_node_count(overlay_fdt);
+    if ( tr->num_nodes == 0 )
+    {
+        xfree(tr->fdt);
+        xfree(tr);
+        return -EINVAL;
+    }
+
+    tr->nodes_address = xzalloc_bytes(tr->num_nodes * sizeof(unsigned long));
+    if ( tr->nodes_address == NULL )
+    {
+        xfree(tr->fdt);
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
+    if ( rc )
+    {
+        xfree(tr->nodes_address);
+        xfree(tr->fdt);
+        xfree(tr);
+        return rc;
+    }
+
+    /*
+     * Keep a copy of overlay_fdt, as fdt_overlay_apply() will change the
+     * input overlay's content (magic) when applying the overlay.
+     */
+    tr->overlay_fdt = xzalloc_bytes(overlay_fdt_size);
+    if ( tr->overlay_fdt == NULL )
+    {
+        xfree(tr->nodes_address);
+        xfree(tr->fdt);
+        xfree(tr);
+        return -ENOMEM;
+    }
+
+    memcpy(tr->overlay_fdt, overlay_fdt, overlay_fdt_size);
+
+    spin_lock(&overlay_lock);
+
+    memcpy(tr->fdt, device_tree_flattened,
+           fdt_totalsize(device_tree_flattened));
+
+    /* Open tr->fdt with more space to accommodate the overlay_fdt. */
+    rc = fdt_open_into(tr->fdt, tr->fdt, new_fdt_size);
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Increasing tr->fdt size failed with error %d\n",
+               rc);
+        goto err;
+    }
+
+    /*
+     * overlay_get_nodes_info is called to get the node information from dtbo.
+     * This is done before fdt_overlay_apply() because the overlay apply will
+     * erase the magic of overlay_fdt.
+     */
+    rc = overlay_get_nodes_info(overlay_fdt, &nodes_full_path,
+                                tr->num_nodes);
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Getting nodes information failed with error %d\n",
+               rc);
+        goto err;
+    }
+
+    rc = fdt_overlay_apply(tr->fdt, overlay_fdt);
+    if ( rc )
+    {
+        printk(XENLOG_ERR "Adding overlay node failed with error %d\n", rc);
+        goto err;
+    }
+
+    /*
+     * Check if any of the nodes already exists in dt_host. If so, we can
+     * return here as this overlay_fdt is not suitable for overlay ops.
+     */
+    for ( j = 0; j < tr->num_nodes; j++ )
+    {
+        overlay_node = dt_find_node_by_path(nodes_full_path[j]);
+        if ( overlay_node != NULL )
+        {
+            printk(XENLOG_ERR "node %s exists in device tree\n",
+                   nodes_full_path[j]);
+            rc = -EINVAL;
+            goto err;
+        }
+    }
+
+    /* Unflatten the tr->fdt into a new dt_host. */
+    rc = unflatten_device_tree(tr->fdt, &tr->dt_host_new);
+    if ( rc )
+    {
+        printk(XENLOG_ERR "unflatten_device_tree failed with error %d\n", rc);
+        goto err;
+    }
+
+    for ( j = 0; j < tr->num_nodes; j++ )
+    {
+        struct dt_device_node *prev_node, *next_node;
+
+        dt_dprintk("Adding node: %s\n", nodes_full_path[j]);
+
+        /* Find the newly added node in tr->dt_host_new by its full path. */
+        overlay_node = device_tree_find_node_by_path(tr->dt_host_new,
+                                                     nodes_full_path[j]);
+        if ( overlay_node == NULL )
+        {
+            /* Sanity check; the code should never get here. */
+            ASSERT_UNREACHABLE();
+            goto remove_node;
+        }
+
+        /*
+         * Find the nodes before and after overlay_node in dt_host_new. We
+         * need these nodes to fix up the dt_host_new linkage: when
+         * overlay_node is taken out of the dt_host_new tree and added to
+         * dt_host, the link between the previous node and next_node is
+         * broken. We need to refresh dt_host_new with correct linking for
+         * any further overlay node extraction.
+         */
+        dt_for_each_device_node(tr->dt_host_new, prev_node)
+            if ( prev_node->allnext == overlay_node )
+                break;
+
+        next_node = dt_find_next_node(tr->dt_host_new, overlay_node);
+
+        read_lock(&dt_host->lock);
+
+        /* Add the node to dt_host. */
+        rc = dt_overlay_add_node(overlay_node, overlay_node->parent->full_name);
+        if ( rc )
+        {
+            read_unlock(&dt_host->lock);
+
+            /* Node not added in dt_host. */
+            goto remove_node;
+        }
+
+        read_unlock(&dt_host->lock);
+
+        prev_node->allnext = next_node;
+
+        overlay_node = dt_find_node_by_path(overlay_node->full_name);
+        if ( overlay_node == NULL )
+        {
+            /* Sanity check; the code should never get here. */
+            ASSERT_UNREACHABLE();
+            goto remove_node;
+        }
+
+        rc = handle_add_irq_iommu(hardware_domain, overlay_node);
+        if ( rc )
+        {
+            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
+            goto remove_node; /* don't return with overlay_lock held */
+        }
+
+        /* Keep overlay_node address in tracker. */
+        tr->nodes_address[j] = (unsigned long)overlay_node;
+    }
+
+    INIT_LIST_HEAD(&tr->entry);
+    list_add_tail(&tr->entry, &overlay_tracker);
+
+    spin_unlock(&overlay_lock);
+
+    if ( nodes_full_path != NULL )
+    {
+        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
+              i++ )
+        {
+            xfree(nodes_full_path[i]);
+        }
+        xfree(nodes_full_path);
+    }
+
+    return rc;
+
+/*
+ * Failure case. We need to remove the nodes, free the tracker (if tr exists)
+ * and tr->dt_host_new.
+ */
+remove_node:
+    tr->num_nodes = j;
+    rc = remove_nodes(tr);
+
+    if ( rc )
+    {
+        /*
+         * The user needs to provide a correct overlay. Incorrect node
+         * information, e.g. a parent node that doesn't exist in dt_host,
+         * can cause memory leaks, as remove_nodes() will fail and the
+         * nodes' memory will not be freed from the tracker. Ideally such
+         * device tree mistakes would be caught by fdt_overlay_apply(),
+         * but given that we don't maintain that code, it is better to
+         * keep this warning message here.
+         */
+        printk(XENLOG_ERR "Removing node failed.\n");
+        spin_unlock(&overlay_lock);
+        return rc;
+    }
+
+err:
+    spin_unlock(&overlay_lock);
+
+    if ( tr->dt_host_new )
+        xfree(tr->dt_host_new);
+
+    xfree(tr->overlay_fdt);
+    xfree(tr->nodes_address);
+    xfree(tr->fdt);
+
+    if ( nodes_full_path != NULL )
+    {
+        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
+              i++ )
+        {
+            xfree(nodes_full_path[i]);
+        }
+        xfree(nodes_full_path);
+    }
+
+    xfree(tr);
+
+    return rc;
+}
+
 long dt_sysctl(struct xen_sysctl_dt_overlay *op)
 {
     long ret;
@@ -395,6 +898,13 @@ long dt_sysctl(struct xen_sysctl_dt_overlay *op)
 
     switch ( op->overlay_op )
     {
+    case XEN_SYSCTL_DT_OVERLAY_ADD:
+        ret = handle_add_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
+        if ( ret )
+            xfree(overlay_fdt);
+
+        break;
+
     case XEN_SYSCTL_DT_OVERLAY_REMOVE:
         ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
         xfree(overlay_fdt);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:39:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528894.822675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzaF-0005p9-PG; Tue, 02 May 2023 23:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528894.822675; Tue, 02 May 2023 23:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzaF-0005on-LD; Tue, 02 May 2023 23:38:59 +0000
Received: by outflank-mailman (input) for mailman id 528894;
 Tue, 02 May 2023 23:38:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ7-0004sC-Lr
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:49 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7eab::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 582050bd-e942-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 01:37:49 +0200 (CEST)
Received: from BN9PR03CA0269.namprd03.prod.outlook.com (2603:10b6:408:ff::34)
 by DS0PR12MB6559.namprd12.prod.outlook.com (2603:10b6:8:d1::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.20; Tue, 2 May 2023 23:37:45 +0000
Received: from BN8NAM11FT101.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ff:cafe::2c) by BN9PR03CA0269.outlook.office365.com
 (2603:10b6:408:ff::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:45 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT101.mail.protection.outlook.com (10.13.177.126) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:45 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:43 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:42 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 582050bd-e942-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DROLClaF+aGH7NYS5zydzrPTP4mibpTNPe0ZoUB0bNtEgTnF7SV6zAwcvfmxH3k8vXh0kFr3FltTZ9r851MD1pbZeZ/2Y4GGAoUmC64jqHyT7oieI4BlyMayUPqlbrzPZJn3JKnhqRFvb8RmEVY5yDj3/msJR0MkBMAoZ0c9ubGZvmUG2Yyys/8mVGJEcIl2fVFRZ0TwB9LKsNidCjRYK7wBxEqI/KqTn93UHrmDTqRLM29+5bA1ptTsEN68F3xU7vt02WgfzTenO1Fqst1MWPbqytuWEtf/l5x/iEshJ8RaTYN/OuC0iJ+NEP7qMrT8EHhgWnYhsXIVB9VysQAgow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=W8ANf04rvVBwwTVoSmk1wMh7BnV4jbgpw3PgliDnkIY=;
 b=T9QZoJnBB2xOZ1/jmK/an5wpiebV1ZJURBqpIOADuHLmDou8E6zhHXpMY3idjmzyiHZghWswDPZsaf07fVuWeVaHQIWUTk7wf5jApo5WaBXs5vsBpn/23OUeCky6wA4v7cpeyQnBNkBviXD+oUiosL/uYaYBSwX3kr74lqjs3fszIw8ay97Gyj0sYnAD9I5a7ZBU216h4WztFF72nJ/F/WsAae5RCRNMmXiukDov2K5RinaWmAT9tMRsA1w+sWhZpsfiRDfgCjOvS3RMyaSH9aJuDxdaRb4WN3Vgrq3YmyJRnvCB/ioixZc+VV89Sj1xEsSqh5QWYr6n8Zz5H9EPZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W8ANf04rvVBwwTVoSmk1wMh7BnV4jbgpw3PgliDnkIY=;
 b=mfMEDx1uFImG7A79WBozqHEF2QaRa5nkflQ1zG9Mp0jlMowT7VOk82LRmrBYQfGZDBCBnpgATlgNfzKix+vIYo4VbAU6H9liO3KEs+CYIjNkz/fSrX4+cOhbflIktTiB+vr4TSuOhNIgM1lMUj28Xv853meqj4V/feAdQPZpnls=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [XEN][PATCH v6 19/19] tools/xl: Add new xl command overlay for device tree overlay support
Date: Tue, 2 May 2023 16:36:50 -0700
Message-ID: <20230502233650.20121-20-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT101:EE_|DS0PR12MB6559:EE_
X-MS-Office365-Filtering-Correlation-Id: 87db793a-0ce9-45e1-14f2-08db4b663b0f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5Ky0jDQ4nyKCJRDumE86REXWu2MwxLdK/N4E2ZhzE8ccdaSyv3yNRoPZcsBcI9fQGC1FFQ/fuGX/KAekGKo6JCmQo+ouL+BxL123Vhxn60oV76fmKva9JBNBWOfuMEUH9voXJHENWv8N22JPCO26FT0XR1OT9seGW9JnX4p0redjuQ7qAAGFxLsHDhDB0ggjl+HHJeiBhSGfRJhD3ikXu72OwkfaSYS5bWHGKlInr4B1K5p9N5aanBizbmTHGlN1pgUgQX3rSMf2c7l7zFRCvAhjeoC18ebw/S8OwmNrf5aBIYL18q3kKRXSQVqiuAhpxeZAowrLBMCideSLROGISJ06GfSo8FWUxf+jypIXCJPmwGYT0jPWrsR4l8GCCpJjtmi29Jvax1Ed9gInCVCh6JmtKRwhkcRo27MU0oGdbLYkBcoTnvuuFz6mfoA4ndvHW6AUFxWiJA2IsVSsXUdD2FqsPGOcOZXH3bs7/gVpITYaJCTM9nrjVGmLCV24FsGrjawGG+1UM313zXFLcq5GB6MHVQ/0HU6bR7gJDhaFdK9VB/iNnzZbYxSUW8EdZJdsLfbxsnR1v0nlWHtASoCs5KFNuWFnfGe2Ji9uR5nMwjSMt2R9ZCPJw8aOZOi9A8zdLYopXEkBmVdxLdOx+C9ZGJykxd28HM6La4dK1GWl3S3Z7O9C3VRTneCMX0RyorMCAsI/GPgWuT0fl2/RsGRNNEWRPCHSl1V8urf9CMN8nmU=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(39860400002)(396003)(136003)(451199021)(40470700004)(46966006)(36840700001)(356005)(2906002)(81166007)(1076003)(82740400003)(186003)(26005)(6666004)(47076005)(36860700001)(2616005)(82310400005)(336012)(426003)(36756003)(44832011)(40460700003)(5660300002)(54906003)(478600001)(86362001)(83380400001)(8936002)(8676002)(41300700001)(6916009)(70206006)(70586007)(316002)(40480700001)(4326008)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:45.2336
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 87db793a-0ce9-45e1-14f2-08db4b663b0f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT101.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB6559

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 tools/xl/xl.h           |  1 +
 tools/xl/xl_cmdtable.c  |  6 +++++
 tools/xl/xl_vmcontrol.c | 52 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 59 insertions(+)

diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 72538d6a81..a923daccd3 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -138,6 +138,7 @@ int main_shutdown(int argc, char **argv);
 int main_reboot(int argc, char **argv);
 int main_list(int argc, char **argv);
 int main_vm_list(int argc, char **argv);
+int main_dt_overlay(int argc, char **argv);
 int main_create(int argc, char **argv);
 int main_config_update(int argc, char **argv);
 int main_button_press(int argc, char **argv);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index ccf4d83584..db0acff62a 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -630,6 +630,12 @@ const struct cmd_spec cmd_table[] = {
       "Issue a qemu monitor command to the device model of a domain",
       "<Domain> <Command>",
     },
+    { "dt-overlay",
+      &main_dt_overlay, 0, 1,
+      "Add/Remove a device tree overlay",
+      "add/remove <.dtbo>\n"
+      "-h print this help\n"
+    },
 };
 
 const int cmdtable_len = ARRAY_SIZE(cmd_table);
diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c
index 5518c78dc6..de56e00d8b 100644
--- a/tools/xl/xl_vmcontrol.c
+++ b/tools/xl/xl_vmcontrol.c
@@ -1265,6 +1265,58 @@ int main_create(int argc, char **argv)
     return 0;
 }
 
+int main_dt_overlay(int argc, char **argv)
+{
+    const char *overlay_ops = NULL;
+    const char *overlay_config_file = NULL;
+    void *overlay_dtb = NULL;
+    int rc;
+    uint8_t op;
+    int overlay_dtb_size = 0;
+    const int overlay_add_op = 1;
+    const int overlay_remove_op = 2;
+
+    if (argc < 2) {
+        help("dt-overlay");
+        return EXIT_FAILURE;
+    }
+
+    overlay_ops = argv[1];
+    overlay_config_file = argv[2];
+
+    if (strcmp(overlay_ops, "add") == 0)
+        op = overlay_add_op;
+    else if (strcmp(overlay_ops, "remove") == 0)
+        op = overlay_remove_op;
+    else {
+        fprintf(stderr, "Invalid dt overlay operation\n");
+        return EXIT_FAILURE;
+    }
+
+    if (overlay_config_file) {
+        rc = libxl_read_file_contents(ctx, overlay_config_file,
+                                      &overlay_dtb, &overlay_dtb_size);
+
+        if (rc) {
+            fprintf(stderr, "failed to read the overlay device tree file %s\n",
+                    overlay_config_file);
+            free(overlay_dtb);
+            return EXIT_FAILURE;
+        }
+    } else {
+        fprintf(stderr, "overlay dtbo file not provided\n");
+        return EXIT_FAILURE;
+    }
+
+    rc = libxl_dt_overlay(ctx, overlay_dtb, overlay_dtb_size, op);
+
+    free(overlay_dtb);
+
+    if (rc)
+        return EXIT_FAILURE;
+
+    return EXIT_SUCCESS;
+}
 /*
  * Local variables:
  * mode: C
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:39:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528899.822685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzaH-00065C-10; Tue, 02 May 2023 23:39:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528899.822685; Tue, 02 May 2023 23:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzaG-000655-Tg; Tue, 02 May 2023 23:39:00 +0000
Received: by outflank-mailman (input) for mailman id 528899;
 Tue, 02 May 2023 23:38:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ8-0005Si-KC
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:50 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2061b.outbound.protection.outlook.com
 [2a01:111:f400:7e88::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5852231d-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:48 +0200 (CEST)
Received: from BN9PR03CA0269.namprd03.prod.outlook.com (2603:10b6:408:ff::34)
 by MW3PR12MB4441.namprd12.prod.outlook.com (2603:10b6:303:59::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Tue, 2 May
 2023 23:37:44 +0000
Received: from BN8NAM11FT101.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ff:cafe::2c) by BN9PR03CA0269.outlook.office365.com
 (2603:10b6:408:ff::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Tue, 2 May 2023 23:37:44 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT101.mail.protection.outlook.com (10.13.177.126) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:44 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:42 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:41 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5852231d-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O9KJvxaJwC7gSbVT/N0kcaG447n665alx2AFy1uWliqBzkmRVikvSOeO+S5Ubpv9EW2JcueM5Ns+LHYlbYnq2CXHnK42OyJIzo7cngq1FoKWKXJ3DhSjBMhpQvTSt1h8JjgB1LMxOR29g0qrnriJlFI2LRg0QdJfoUhmXr3lvOJNfiENv/A8ZVn1j6l6wS0MSe+xWd16iR84yEJZQVZ0BLIWNbeNxflQqJQEfAJQMO3rgNSl7/Ahsuth0O9lY3N3jqlffV6vvLBmhAyZPh+EmCkQUjfxNQlPSpSdVSFxMF0tpwnim08Frz6HPdZhK9yNmXLFa1ciisrAfJF7SwXm2w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=91E2KEtRfiC4xks88DcPT08G46pT034ykx3UwRRlezk=;
 b=bYPfW6hbwLA+thF6ca7CBjdBIlWJxlk0Sg5jlme6Najiqe+bl2kdfgvb97Lr+ujgZnd4yrJVZ8MdH4XyKgWpUl0bQF3ZjeQp7ijqGU8lsZq/G5g6ZqtRNJ5SRa+s+iWE6vldLb98w9LDhKS/ftkDFxl2DCo1IgquGcvjki0aPp57C1FQ7LFKJB5bPpV7XEfq9cQHjeZElRmjpwb1TwPwZQfQxYjbRMglaBJv79zZqksys65Edxe1mcHZ5cgwqZXR5HksP1Hd6gmEwCjMWkaqqZPmMsUQUdA8KDUKxF9M2s4Y0MUfYrHMMc05ILm+6nBeo0dR+Dv7A190Ox9L3rwaEw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=91E2KEtRfiC4xks88DcPT08G46pT034ykx3UwRRlezk=;
 b=hQeYj5zsHKXYLQ5oZ+X59iJtWrHLjoaauVSb5Tg/5IuQt5Vu2d4Z15FmUy4YWG5JY45jxcV6Mn/UmRM7SLJRVyd+CAWee5mFkcfL/V3wyZtDY/djiKDk9/8z/eZnZ1kvSIodkAC+saLlGRKZJlvyxPvaIRZZizMj6eStxABGNhw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
Subject: [XEN][PATCH v6 18/19] tools/libs/light: Implement new libxl functions for device tree overlay ops
Date: Tue, 2 May 2023 16:36:49 -0700
Message-ID: <20230502233650.20121-19-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT101:EE_|MW3PR12MB4441:EE_
X-MS-Office365-Filtering-Correlation-Id: 5347c826-59cd-4976-0a45-08db4b663a80
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WDRiwXSfCM+NKXNP/23ZDrpZh1shJqPjsyD0mxQbTv1llItNW+0F+6gp6h0+uCV715uKpHYCDSt2Y1WQIXGT+iX9roA/pKCce3QTT7lnpTYZHeWQaUB7EhUZlKuyzOczC7fkm0Z0EMqyCmDdZnTkyCDStaEx5cRm6xUxinC9eEINZhxvH8k4LmpdIdPm43on96NUQLxZKmVXGdYop5XVEy6pEO0aXd6ThHiQKO+bQx2gO4JONuKwuJRAR787zGyKibdnysVIlGPXYTQxt3lXGrC2EkXHeac4+0v5VG3pHU0OhvItI14/9ZiEkjUQbfOrt2mC3zAfgS8Gwd4xFzp8avQeXywY6RKxwMcM2fMyg+jvH3Q2fCwWAR+W2RxRXWF4/Lb1OyhPBu7bO5tKawuRrK3/xk/2HubNPyKJN2z++G68x+6cUEtfq+a3VgY5pJwfSPxdvVMxZKA4/SeYJuzurysfof86jlj3PokCw89qmnYZb41pyaknKgQWmL3pRMoYz+rQ2T1G0GEupxlkpxEafGt1IvihjOz5DyEDWHNZvqSEKH+sPw/1j6w3YtClPsbWI5sittpV1abYsn8H//mvN1VMsTdz3VhQ3DUzeh+QmD8EGba7mCL5Wph1X4G34D0Km0niXBTAJniZzNRG13WnybOhoq5ioKU5JABDwClfiAmfPL8n67pGUr8HZEiWhck7uoeR+/0PqNbZltJ7CiPFbW1BpOGEMvOZGSyuqzwix2+9vrylBuO9x+HVXW3RdaNU
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(39860400002)(136003)(376002)(451199021)(40470700004)(46966006)(36840700001)(478600001)(82310400005)(316002)(2906002)(6916009)(70206006)(70586007)(4326008)(36756003)(40480700001)(86362001)(54906003)(40460700003)(1076003)(8936002)(8676002)(44832011)(2616005)(336012)(426003)(36860700001)(5660300002)(186003)(47076005)(26005)(82740400003)(356005)(81166007)(41300700001)(2004002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:44.2961
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5347c826-59cd-4976-0a45-08db4b663a80
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT101.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW3PR12MB4441

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 tools/include/libxl.h               | 11 +++++
 tools/libs/light/Makefile           |  3 ++
 tools/libs/light/libxl_dt_overlay.c | 71 +++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+)
 create mode 100644 tools/libs/light/libxl_dt_overlay.c

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index cfa1a19131..1c5e8abaae 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -250,6 +250,12 @@
  */
 #define LIBXL_HAVE_DEVICETREE_PASSTHROUGH 1
 
+#if defined(__arm__) || defined(__aarch64__)
+/**
+ * This means Device Tree Overlay is supported.
+ */
+#define LIBXL_HAVE_DT_OVERLAY 1
+#endif
 /*
  * libxl_domain_build_info has device_model_user to specify the user to
  * run the device model with. See docs/misc/qemu-deprivilege.txt.
@@ -2453,6 +2459,11 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
                                         int *num);
 void libxl_device_pci_list_free(libxl_device_pci* list, int num);
 
+#if defined(__arm__) || defined(__aarch64__)
+int libxl_dt_overlay(libxl_ctx *ctx, void *overlay,
+                     uint32_t overlay_size, uint8_t overlay_op);
+#endif
+
 /*
  * Turns the current process into a backend device service daemon
  * for a driver domain.
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 96daeabc47..563a1e8d0a 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -112,6 +112,9 @@ OBJS-y += _libxl_types.o
 OBJS-y += libxl_flask.o
 OBJS-y += _libxl_types_internal.o
 
+# Device tree overlay is enabled only for ARM architecture.
+OBJS-$(CONFIG_ARM) += libxl_dt_overlay.o
+
 ifeq ($(CONFIG_LIBNL),y)
 CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
 endif
diff --git a/tools/libs/light/libxl_dt_overlay.c b/tools/libs/light/libxl_dt_overlay.c
new file mode 100644
index 0000000000..a6c709a6dc
--- /dev/null
+++ b/tools/libs/light/libxl_dt_overlay.c
@@ -0,0 +1,71 @@
+/*
+ * Copyright (C) 2021 Xilinx Inc.
+ * Author Vikram Garhwal <fnu.vikram@xilinx.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only, with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxl_internal.h"
+#include <libfdt.h>
+#include <xenctrl.h>
+
+static int check_overlay_fdt(libxl__gc *gc, void *fdt, size_t size)
+{
+    int r;
+
+    if (fdt_magic(fdt) != FDT_MAGIC) {
+        LOG(ERROR, "Overlay FDT is not a valid Flat Device Tree");
+        return ERROR_FAIL;
+    }
+
+    r = fdt_check_header(fdt);
+    if (r) {
+        LOG(ERROR, "Failed to check the overlay FDT (%d)", r);
+        return ERROR_FAIL;
+    }
+
+    if (fdt_totalsize(fdt) > size) {
+        LOG(ERROR, "Overlay FDT totalsize is too big");
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+int libxl_dt_overlay(libxl_ctx *ctx, void *overlay_dt, uint32_t overlay_dt_size,
+                     uint8_t overlay_op)
+{
+    int rc;
+    int r;
+    GC_INIT(ctx);
+
+    if (check_overlay_fdt(gc, overlay_dt, overlay_dt_size)) {
+        LOG(ERROR, "Overlay DTB check failed");
+        rc = ERROR_FAIL;
+        goto out;
+    } else {
+        LOG(DEBUG, "Overlay DTB check passed");
+        rc = 0;
+    }
+
+    r = xc_dt_overlay(ctx->xch, overlay_dt, overlay_dt_size, overlay_op);
+
+    if (r) {
+        LOG(ERROR, "%s: Adding/Removing overlay dtb failed.", __func__);
+        rc = ERROR_FAIL;
+    }
+
+out:
+    GC_FREE;
+    return rc;
+}
+
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:39:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:39:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528915.822695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzaP-0006X2-9G; Tue, 02 May 2023 23:39:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528915.822695; Tue, 02 May 2023 23:39:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzaP-0006Wt-6M; Tue, 02 May 2023 23:39:09 +0000
Received: by outflank-mailman (input) for mailman id 528915;
 Tue, 02 May 2023 23:39:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/cxx=AX=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1ptzZ7-0005Si-M1
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:37:49 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20600.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 578e1a5d-e942-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:37:47 +0200 (CEST)
Received: from BN9PR03CA0243.namprd03.prod.outlook.com (2603:10b6:408:ff::8)
 by SN7PR12MB7953.namprd12.prod.outlook.com (2603:10b6:806:345::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6319.32; Tue, 2 May
 2023 23:37:44 +0000
Received: from BN8NAM11FT101.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ff:cafe::1a) by BN9PR03CA0243.outlook.office365.com
 (2603:10b6:408:ff::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21 via Frontend
 Transport; Tue, 2 May 2023 23:37:43 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT101.mail.protection.outlook.com (10.13.177.126) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.21 via Frontend Transport; Tue, 2 May 2023 23:37:43 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 2 May
 2023 18:37:41 -0500
Received: from xsjfnuv50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 2 May 2023 18:37:41 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 578e1a5d-e942-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bERWKi0IoKVO4XW7QmAZRl3xJLDoCyjpHNLhXy2w8KqjDAoMs4NvnCUKsP8Ee2WJwWXgEeCIvj5SsNeRzRSw+O9XGQi3DaHdSauVbrSF6eGvT60gHy4K7VsV3+wh8upOth/TCzae97M7vL0vIZiOeZv2A703E7v+v4lRPiqLWAL6GV1FsYA1cZme8c992f2NDqbHxuiatP1xZhUv+cnRLY5Ex39gbw6PeelY5rR3Pis192NKcDft3LuoL3Z3Z7H0avkqt0ha7trY++XL2wodIBgdKunUOBAMXdaLIgiht7Gw3fTELp02mvA6E4lsnx6/wPNTUdL3GBSzZ3Q5DWSVzA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EQ8GqVc2MExei+u64NzeTvfYwAOuigBIRcvJ0a0ukt4=;
 b=TBzHrtkDlEDXZ5Lzf5F99GDULi3RK4nBdtOX21n/x9a6YZTpImEBQ5Whmz29A0Tiz+T/NfvGXC9BxqCIcW0uuBppMH5DBdEtl6xQbhis8jFH471u8+Vw3MQtcZ7sxomqGaK55/VekubFpRxbedk/ZxTLrD0XBozo6xETNhqtPzyG+BR4XAp5aX6FhFqXzHFB753uSVxZfMCf7PUUjqo7k3PVLOg/aNitMpHW3gjSYByVl7OlBQjNEbg2q5s3pnzJ6fTUxO2bEh5cQPJb5QaGu1EHX73zFUEqbtpwbrsJCYSpEGnHxPmZEqhlhJus2bhTs7rQBofRbguO4OEu1E8cyQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EQ8GqVc2MExei+u64NzeTvfYwAOuigBIRcvJ0a0ukt4=;
 b=r3D+eMzSnKuDoXOsPjZC7VuCLArC/Phzl8Bgg+JYBuvvdMKeWbrwes775Dj3DJ/BPoOms52uOzfZHhKdRhrs271X7kb84qsN9vr2UBcMqa6faF1/ue/6u1R3KLiTepvf4V0PNlgPvOHeAxnfWXaljExWAuCiaw7hFz5g2qNWpQU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Vikram Garhwal <vikram.garhwal@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <vikram.garhwal@amd.com>,
	<michal.orzel@amd.com>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>
Subject: [XEN][PATCH v6 17/19] tools/libs/ctrl: Implement new xc interfaces for dt overlay
Date: Tue, 2 May 2023 16:36:48 -0700
Message-ID: <20230502233650.20121-18-vikram.garhwal@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230502233650.20121-1-vikram.garhwal@amd.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT101:EE_|SN7PR12MB7953:EE_
X-MS-Office365-Filtering-Correlation-Id: a4a8e144-13c2-4e46-1033-08db4b663a23
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	V8hJH/YlyuhXPweNA4KTn69XI5co255t1TQrBJSBvak61sR9Z3kCIRm5PnYmiv/8VqFuHPWmw23kSZX9Sgkb1NCFN/xGvif9AfKeylD3OvOSJNwhd2tXo0Jma9dFo399P2p40XlneedckLjeWsUFVEm7BNKVYFIjNl8lfhjCPrdSOB593Wr1vI9Wd7xvkCo5n1NopJMXwljxnE23QQVusPVxoAzbYmLh0w86LPSAH93OiSvn2LLcpWfOEeeyNC/bUFw/93dBSLQPXyon8uYIi6bVTtMpWqAXPh9CyRNEwwshjuLmjWx6d/UBUgY2AJFBP4ipLifz4pY65iVwivesH8NYfiqiXMDW4Zk/sOeDyj4K0mjp+znx5UzhcuLhT87RhGxNR3IcnMOLy6NoAsVmZMIJO5VnkHjH0MQbIFsVHDfmeDYziHmTYE+ufpzQkzNnZY+LRCOawJUiStHSrYS5FkzwcWS8sLXjrYUDh6VqFfMSbf2ewn33FRHtTk30RrMBNAGJiSOpRMlJ0pZ+VJlddqBEzd+yUVl4PQYf01N9kA8qgbPmzZdu6yk289xZGJQykT0fIIAgXxBBr+1frpPvXxKB+unoP/6Ad2o+xLMu0kxg7nxtgaqU/3UFHDOXFIw8rqAKt+xEIg5TMRfFUiWKEw5iEiuxBdvGKaSwymKKL42uIU0VmC3zLY6k92kisa9ONiTE9SuDFSZUM+SFTGybctS0pLMx3HwMmy4G43noknm4Di25fqdTgxamn/6aoYA2
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(39860400002)(136003)(396003)(451199021)(46966006)(36840700001)(40470700004)(5660300002)(36860700001)(1076003)(40480700001)(478600001)(356005)(81166007)(26005)(186003)(82740400003)(54906003)(47076005)(426003)(2616005)(336012)(8676002)(41300700001)(8936002)(4326008)(70206006)(2906002)(316002)(6916009)(44832011)(70586007)(86362001)(36756003)(40460700003)(82310400005)(2004002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 May 2023 23:37:43.6868
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a4a8e144-13c2-4e46-1033-08db4b663a23
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT101.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB7953

xc_dt_overlay() sends the device tree binary overlay, the size of the .dtbo,
and the overlay operation type (i.e. add or remove) to Xen.

Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
---
 tools/include/xenctrl.h         |  5 ++++
 tools/libs/ctrl/Makefile.common |  1 +
 tools/libs/ctrl/xc_dt_overlay.c | 48 +++++++++++++++++++++++++++++++++
 3 files changed, 54 insertions(+)
 create mode 100644 tools/libs/ctrl/xc_dt_overlay.c

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 752fc87580..1a99c06561 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -2666,6 +2666,11 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
 int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
                          xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
 
+#if defined(__arm__) || defined(__aarch64__)
+int xc_dt_overlay(xc_interface *xch, void *overlay_fdt,
+                  uint32_t overlay_fdt_size, uint8_t overlay_op);
+#endif
+
 /* Compat shims */
 #include "xenctrl_compat.h"
 
diff --git a/tools/libs/ctrl/Makefile.common b/tools/libs/ctrl/Makefile.common
index 0a09c28fd3..247afbe5f9 100644
--- a/tools/libs/ctrl/Makefile.common
+++ b/tools/libs/ctrl/Makefile.common
@@ -24,6 +24,7 @@ OBJS-y       += xc_hcall_buf.o
 OBJS-y       += xc_foreign_memory.o
 OBJS-y       += xc_kexec.o
 OBJS-y       += xc_resource.o
+OBJS-$(CONFIG_ARM)  += xc_dt_overlay.o
 OBJS-$(CONFIG_X86) += xc_psr.o
 OBJS-$(CONFIG_X86) += xc_pagetab.o
 OBJS-$(CONFIG_Linux) += xc_linux.o
diff --git a/tools/libs/ctrl/xc_dt_overlay.c b/tools/libs/ctrl/xc_dt_overlay.c
new file mode 100644
index 0000000000..202fc906f4
--- /dev/null
+++ b/tools/libs/ctrl/xc_dt_overlay.c
@@ -0,0 +1,48 @@
+/*
+ *
+ * Device Tree Overlay functions.
+ * Copyright (C) 2021 Xilinx Inc.
+ * Author Vikram Garhwal <fnu.vikram@xilinx.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "xc_private.h"
+
+int xc_dt_overlay(xc_interface *xch, void *overlay_fdt,
+                  uint32_t overlay_fdt_size, uint8_t overlay_op)
+{
+    int err;
+    DECLARE_SYSCTL;
+
+    DECLARE_HYPERCALL_BOUNCE(overlay_fdt, overlay_fdt_size,
+                             XC_HYPERCALL_BUFFER_BOUNCE_IN);
+
+    if ( (err = xc_hypercall_bounce_pre(xch, overlay_fdt)) )
+        goto err;
+
+    sysctl.cmd = XEN_SYSCTL_dt_overlay;
+    sysctl.u.dt_overlay.overlay_op = overlay_op;
+    sysctl.u.dt_overlay.overlay_fdt_size = overlay_fdt_size;
+
+    set_xen_guest_handle(sysctl.u.dt_overlay.overlay_fdt, overlay_fdt);
+
+    if ( (err = do_sysctl(xch, &sysctl)) != 0 )
+        PERROR("%s failed", __func__);
+
+err:
+    xc_hypercall_bounce_post(xch, overlay_fdt);
+
+    return err;
+}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 02 23:56:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 02 May 2023 23:56:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528927.822704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzqh-0001MK-Rw; Tue, 02 May 2023 23:55:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528927.822704; Tue, 02 May 2023 23:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ptzqh-0001MD-PN; Tue, 02 May 2023 23:55:59 +0000
Received: by outflank-mailman (input) for mailman id 528927;
 Tue, 02 May 2023 23:55:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PvCK=AX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1ptzqg-0001M7-UZ
 for xen-devel@lists.xenproject.org; Tue, 02 May 2023 23:55:59 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dfb62e28-e944-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 01:55:54 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 464DE6133A;
 Tue,  2 May 2023 23:55:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B7E40C433D2;
 Tue,  2 May 2023 23:55:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfb62e28-e944-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683071752;
	bh=21q8w7ad/mG6Dz+Zc8o3O6b5fBKnpcih2EQHedK29+8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=T0y+SeALxGjv7qjOAHnrqgrZT6aZedSmotuF2IoKNPZ8uxf8OkvphYb0qZIo6MFE+
	 1NAsx6pVeLibpHQ8TZM/tEvGzTtr5CUajFlDBi+UNyKdtt+I6LYtlbGIe/ztos7pQ6
	 boF1nDodstjmLzeBx6dDvgyLxCJH0vJglPyMc/gvHZCeG0cN4gpxlBcBU2StAgY7Dm
	 8dczSL+C5RmIjH2vG/xX1YKaInsCFLpDqgPPIanh2DOP3D3IsoyO1PeP8PMdUTEVEu
	 WDfd7vSPHOZovoZEwt8vKQgafg1f5wvQhWSvG5mv+YfA9zN+nFzsqmKQy3kSi6wC4a
	 nAfVh5kPvrM0Q==
Date: Tue, 2 May 2023 16:55:49 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    stefano.stabellini@amd.com, julien@xen.org, Volodymyr_Babchuk@epam.com, 
    bertrand.marquis@arm.com
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based
 systems
In-Reply-To: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
Message-ID: <alpine.DEB.2.22.394.2305021643010.974517@ubuntu-linux-20-04-desktop>
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 2 May 2023, Ayan Kumar Halder wrote:
> On some Arm32 based systems (e.g. Cortex-R52), smpboot is supported, but
> PSCI may not be. The Cortex-R52 has no EL3 or secure mode, so PSCI, which
> requires EL3, is not available.
> 
> Thus, we use the 'spin-table' mechanism to boot the secondary CPUs: the
> primary CPU publishes the startup address of each secondary core via the
> 'cpu-release-addr' property.
> 
> To support smpboot, we have copied the code from xen/arch/arm/arm64/smpboot.c
> with the following changes:
> 
> 1. 'enable-method' is an optional property. Refer to the comment in
> https://www.kernel.org/doc/Documentation/devicetree/bindings/arm/cpus.yaml
> "      # On ARM 32-bit systems this property is optional"
> 
> 2. psci is not currently supported as a value for 'enable-method'.
> 
> 3. update_identity_mapping() is not invoked as we are not sure if it is
> required.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> 
> The dts snippet with which this has been validated is :-
> 
>     cpus {
>         #address-cells = <0x02>;
>         #size-cells = <0x00>;
> 
>         cpu-map {
> 
>             cluster0 {
> 
>                 core0 {
> 
>                     thread0 {
>                         cpu = <0x02>;
>                     };
>                 };
>                 core1 {
> 
>                     thread0 {
>                         cpu = <0x03>;
>                     };
>                 };
>             };
>         };
> 
>         cpu@0 {
>             device_type = "cpu";
>             compatible = "arm,armv8";
>             reg = <0x00 0x00>;
>             phandle = <0x02>;
>         };
> 
>         cpu@1 {
>             device_type = "cpu";
>             compatible = "arm,armv8";
>             reg = <0x00 0x01>;
>             enable-method = "spin-table";
>             cpu-release-addr = <0xEB58C010>;
>             phandle = <0x03>;
>         };
>     };
> 
> Although currently I have tested this only on Cortex-R52, I feel this may be
> helpful for enabling SMP on other Arm32 based systems as well. Happy to hear opinions.

I think you are right


>  xen/arch/arm/arm32/smpboot.c | 84 ++++++++++++++++++++++++++++++++++--
>  1 file changed, 80 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/smpboot.c b/xen/arch/arm/arm32/smpboot.c
> index 518e9f9c7e..feb249d3f8 100644
> --- a/xen/arch/arm/arm32/smpboot.c
> +++ b/xen/arch/arm/arm32/smpboot.c
> @@ -1,24 +1,100 @@
>  #include <xen/device_tree.h>
>  #include <xen/init.h>
>  #include <xen/smp.h>
> +#include <xen/vmap.h>
> +#include <asm/io.h>
>  #include <asm/platform.h>
>  
> +struct smp_enable_ops {
> +        int             (*prepare_cpu)(int);
> +};

coding style: Xen uses 4-space indentation, so the function pointer member
is over-indented here


> +static uint32_t cpu_release_addr[NR_CPUS];
> +static struct smp_enable_ops smp_enable_ops[NR_CPUS];

they could be __initdata


>  int __init arch_smp_init(void)
>  {
>      return platform_smp_init();
>  }
>  
> -int __init arch_cpu_init(int cpu, struct dt_device_node *dn)
> +static int __init smp_spin_table_cpu_up(int cpu)
> +{
> +    uint32_t __iomem *release;
> +
> +    if (!cpu_release_addr[cpu])

coding style: Xen wants spaces inside the parentheses, i.e.
if ( !cpu_release_addr[cpu] )


> +    {
> +        printk("CPU%d: No release addr\n", cpu);
> +        return -ENODEV;
> +    }
> +
> +    release = ioremap_nocache(cpu_release_addr[cpu], 4);
> +    if ( !release )
> +    {
> +        dprintk(XENLOG_ERR, "CPU%d: Unable to map release address\n", cpu);
> +        return -EFAULT;
> +    }
> +
> +    writel(__pa(init_secondary), release);
> +
> +    iounmap(release);

I think we need a wmb() between the write and the sev(), so that a core
waking from WFE cannot read a stale value from the release address?


> +    sev();
> +
> +    return 0;
> +}
> +
> +static void __init smp_spin_table_init(int cpu, struct dt_device_node *dn)
>  {
> -    /* Not needed on ARM32, as there is no relevant information in
> -     * the CPU device tree node for ARMv7 CPUs.
> +    if ( !dt_property_read_u32(dn, "cpu-release-addr", &cpu_release_addr[cpu]) )

It looks like cpu-release-addr could be u64 or u32. Can we detect the
size of the property and act accordingly? If the address is u64 and
above 4GB it is fine to abort.


> +    {
> +        printk("CPU%d has no cpu-release-addr\n", cpu);
> +        return;
> +    }
> +
> +    smp_enable_ops[cpu].prepare_cpu = smp_spin_table_cpu_up;
> +}
> +
> +static int __init dt_arch_cpu_init(int cpu, struct dt_device_node *dn)
> +{
> +    const char *enable_method;
> +
> +    /*
> +     * Refer Documentation/devicetree/bindings/arm/cpus.yaml, it says on
> +     * ARM 32-bit systems this property is optional.
>       */
> +    enable_method = dt_get_property(dn, "enable-method", NULL);
> +    if (!enable_method)

coding style: if ( !enable_method )


> +    {
> +        return 0;
> +    }
> +
> +    if ( !strcmp(enable_method, "spin-table") )
> +        smp_spin_table_init(cpu, dn);
> +    else
> +    {
> +        printk("CPU%d has unknown enable method \"%s\"\n", cpu, enable_method);
> +        return -EINVAL;
> +    }
> +
>      return 0;
>  }
>  
> +int __init arch_cpu_init(int cpu, struct dt_device_node *dn)
> +{
> +    return dt_arch_cpu_init(cpu, dn);
> +}
> +
>  int arch_cpu_up(int cpu)
>  {
> -    return platform_cpu_up(cpu);
> +    int ret = 0;
> +
> +    if ( smp_enable_ops[cpu].prepare_cpu )
> +        ret = smp_enable_ops[cpu].prepare_cpu(cpu);
> +
> +    if ( !ret )
> +        return platform_cpu_up(cpu);

I think this should be:

    if ( smp_enable_ops[cpu].prepare_cpu )
        ret = smp_enable_ops[cpu].prepare_cpu(cpu);
    else
        ret = platform_cpu_up(cpu);




> +    return ret;
>  }
>  
>  void arch_cpu_up_finish(void)
> -- 
> 2.17.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed May 03 00:15:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 00:15:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528934.822727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu09Z-0004Ym-Mh; Wed, 03 May 2023 00:15:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528934.822727; Wed, 03 May 2023 00:15:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu09Z-0004Yf-Jf; Wed, 03 May 2023 00:15:29 +0000
Received: by outflank-mailman (input) for mailman id 528934;
 Wed, 03 May 2023 00:15:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pu09Z-0004YV-5t; Wed, 03 May 2023 00:15:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pu09Z-0004tp-3U; Wed, 03 May 2023 00:15:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pu09Y-0000LK-J8; Wed, 03 May 2023 00:15:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pu09Y-0006Wl-II; Wed, 03 May 2023 00:15:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180506-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180506: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-livepatch:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b033eddc9779109c06a26936321d27a2ef4e088b
X-Osstest-Versions-That:
    xen=ef841d2a2377f5297add27e637b725426bb4840a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 May 2023 00:15:28 +0000

flight 180506 xen-unstable real [real]
flight 180510 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180506/
http://logs.test-lab.xenproject.org/osstest/logs/180510/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-livepatch     7 xen-install         fail pass in 180510-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180501
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180501
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180501
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180501
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180501
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180501
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180501
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180501
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180501
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180501
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180501
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180501
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  b033eddc9779109c06a26936321d27a2ef4e088b
baseline version:
 xen                  ef841d2a2377f5297add27e637b725426bb4840a

Last test of basis   180501  2023-05-02 01:52:02 Z    0 days
Testing same since   180506  2023-05-02 14:40:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    fail    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ef841d2a23..b033eddc97  b033eddc9779109c06a26936321d27a2ef4e088b -> master


From xen-devel-bounces@lists.xenproject.org Wed May 03 04:23:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 04:23:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528943.822737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu411-000239-Te; Wed, 03 May 2023 04:22:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528943.822737; Wed, 03 May 2023 04:22:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu411-000231-Od; Wed, 03 May 2023 04:22:55 +0000
Received: by outflank-mailman (input) for mailman id 528943;
 Wed, 03 May 2023 04:22:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpRg=AY=microsoft.com=mikelley@srs-se1.protection.inumbo.net>)
 id 1pu410-00022v-9h
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 04:22:54 +0000
Received: from BN3PR00CU001.outbound.protection.outlook.com
 (mail-eastus2azlp170100001.outbound.protection.outlook.com
 [2a01:111:f403:c110::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2aa4e9dd-e96a-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 06:22:52 +0200 (CEST)
Received: from BYAPR21MB1688.namprd21.prod.outlook.com (2603:10b6:a02:bf::26)
 by SJ1PR21MB3672.namprd21.prod.outlook.com (2603:10b6:a03:454::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.5; Wed, 3 May
 2023 04:22:47 +0000
Received: from BYAPR21MB1688.namprd21.prod.outlook.com
 ([fe80::9a0f:a04d:69bd:e622]) by BYAPR21MB1688.namprd21.prod.outlook.com
 ([fe80::9a0f:a04d:69bd:e622%4]) with mapi id 15.20.6387.008; Wed, 3 May 2023
 04:22:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2aa4e9dd-e96a-11ed-b225-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=microsoft.com; dmarc=pass action=none
 header.from=microsoft.com; dkim=pass header.d=microsoft.com; arc=none
From: "Michael Kelley (LINUX)" <mikelley@microsoft.com>
To: Juergen Gross <jgross@suse.com>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "x86@kernel.org" <x86@kernel.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>
CC: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>, KY Srinivasan <kys@microsoft.com>, Haiyang
 Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>, Dexuan Cui
	<decui@microsoft.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jonathan
 Corbet <corbet@lwn.net>, Andy Lutomirski <luto@kernel.org>, Peter Zijlstra
	<peterz@infradead.org>
Subject: RE: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Thread-Topic: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without
 MTRR
Thread-Index: AQHZfO8AidMlTOanxE29vr+R9vbz+69H8llg
Date: Wed, 3 May 2023 04:22:47 +0000
Message-ID:
 <BYAPR21MB168878186F9642843E8480BCD76C9@BYAPR21MB1688.namprd21.prod.outlook.com>
References: <20230502120931.20719-1-jgross@suse.com>
In-Reply-To: <20230502120931.20719-1-jgross@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
msip_labels:
 MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_ActionId=623c2fcc-89cf-47dd-a85a-04fd8fadfa20;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_ContentBits=0;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_Enabled=true;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_Method=Standard;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_Name=Internal;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_SetDate=2023-05-03T04:17:01Z;MSIP_Label_f42aa342-8706-4288-bd11-ebb85995028c_SiteId=72f988bf-86f1-41af-91ab-2d7cd011db47;
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=microsoft.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BYAPR21MB1688:EE_|SJ1PR21MB3672:EE_
x-ms-office365-filtering-correlation-id: 0c1bf722-548b-4607-f879-08db4b8e0cba
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 fwCBycC5zSlUwDxiBf74VJaq8hMw2JHL6NjE1faRai7H37vkQew8zG7pgu9wrjuS9Nj7cqskAsjPm7/YGDKG8BTqFus85NLKQlaQiDJKUgBgJYDUYH6bIW9nENgue4H/EHpW+adBZiMaBRibZxvQDhvdEmA2n98rkqFQFHXiw/mBQaNwPaNoz8ow9E8fI7H38f26YO49HIm6U7AP4Vv27xlB1ekG/4MOcDKBqOjm5i36UN17AgG+L7x7Q3lfjrL+4tOU/0Gg4U1Zwfnp5JsIlmagCrXcyzmCIxHsZ55w6XODEYG19ZbKIbJ2QhGNV9SPxmh967Set4Ke6/5ScQehV/BbqNDwh9oydXxTjxlGgJ4+DGZGkBVpkONs9gMvK9N7mb0QoLX6cvf/EMCBaR7nr3iNiE7FExXVlUd3KCANZGl4vga4HtGrHu1In74MtEVUD/uqCLS4omRP61weN90X1CYKsilbl4KulrR9vAwmqjIKvb3Di5U4yjUlWIi/xzcGAI8skQ3CWGUsQuQHVSf4tDLyQi/lXIk1Py54Qym7QQMA34TPAIbjVZKpJF40x7c6tM/2VGi3zGxlqlmfDEMXGh17LZO0KVXNx/kUC2slExoq6YnVxSp/hCasEJQQEUJp8r6FN7U/KDTTuHZGTQhWOGt2YEAlHe/dNlB0ytzZtIQ=
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR21MB1688.namprd21.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(376002)(396003)(136003)(366004)(451199021)(52536014)(8936002)(5660300002)(8676002)(7416002)(41300700001)(82960400001)(316002)(786003)(82950400001)(33656002)(38070700005)(86362001)(2906002)(122000001)(38100700002)(55016003)(8990500004)(10290500003)(26005)(54906003)(6506007)(9686003)(83380400001)(478600001)(186003)(71200400001)(7696005)(76116006)(4326008)(64756008)(66556008)(66446008)(66946007)(66476007)(110136005);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?us-ascii?Q?6Be8axflG7PfbcOl6TqBYxOmhTBdjq/lv27ZadRFiFK8L2wVavNNyLiPmVMy?=
 =?us-ascii?Q?UqZXpDRAbchH/dWmtdQeFzHeGIkkRZXD0M0av4NZu9lGRivjSoeqQ0CrQIxl?=
 =?us-ascii?Q?60rt+Vq6F8ATV4JRra3Lmv4Fu/xOaLQZLCj05cBT0+UkyZnO886CUKbdgwTq?=
 =?us-ascii?Q?QtW04zG+LF+y8mkWZbtB063JHSJf1ywDpdveLssOzjwdqhE2cJH8xDRjsHDB?=
 =?us-ascii?Q?6+V1R1W3RBCyYOPMcpwXOP/MoLoE9OCrBBD4NNWH3+OEnBUApNG53Z0FN3hC?=
 =?us-ascii?Q?ZvXxSWyeuZKqAv4qyhkkuKNlq6aWcXBnPe2aHO3ikkhXf6qRCFvdG60uMqca?=
 =?us-ascii?Q?aHkCplQYZ28+7lbkNizN0msrQ+z4xha2Bfj6LN36ZwFYFr8OgW7SPrpWRqNk?=
 =?us-ascii?Q?4LTRN4CkfRpBJ3QHqNdVtlZQeICQAHHvtH3rT70x7yklyzjFvNYS0t7IVUsE?=
 =?us-ascii?Q?e3zs/g3a5Di1QIj5aRKjmDM0dEewWQ8jja4ILX6d/hw7CoquWpi/nExgdNr2?=
 =?us-ascii?Q?dwNQyc8Vz/6G+bzaFSDGd9fYJlF1Hopns8i0abYt4+JkEgw8ml/9ql6JV9dC?=
 =?us-ascii?Q?289CyZ2P1FFMMJBX4qT+m//wTl9FuN5dwzO/YLb31g7KfU6BT7had5u0DJxq?=
 =?us-ascii?Q?uw3UxtRTyAnkoJB+BLlruNtajJVEg+Gj4fqg67L3krAeLFwc1MyBZGI9o6fI?=
 =?us-ascii?Q?PcWRwa8ToFHdY0gmpbEPO83ByBqjaPhT84/dTC/HTpzfGZjnlyce1pjvH3z2?=
 =?us-ascii?Q?9l0lak9bQKqz4CIpJl47MGaOhOaQQ4fb71Fb+ooqYXVQfj9JA107gbVAfGXa?=
 =?us-ascii?Q?6cPCPQimkeqFwUUL0I3P9XuhnQqMJrBwbsXuTn5PLysRVo28uU4ZswvZHoxe?=
 =?us-ascii?Q?6nj08xlzbQce9LqioNqDGNJFxiwBXNw3epoP0inHSzioHW82vqVRwdAXosAE?=
 =?us-ascii?Q?GGez/Gq2jwsT0P5sH0P2t84CoZLTDQIjPdbycqtOuGe0HvSCn+x7xk1l5Ony?=
 =?us-ascii?Q?GpJ2CybBKhYz51VS3VYreNFpGl8M+6wp8J1f0zX9Vz8vU4y2h5LLFD9t8EOF?=
 =?us-ascii?Q?15qKoo44bXv0CEz8iz/GEo+LxHlN9YWX+7QYkdaAHbcfDe/Xivf5EuGQXcZp?=
 =?us-ascii?Q?gSwBzYOPYY035u9nifu4QQZWQYRpihCrJz4dDitRUCsK4I+M2bzeQxOJ7upB?=
 =?us-ascii?Q?og096lBeMmE36ekMsgsISyqpY/Uj4up0BixRrE7SbN8dhfaVqvUb4A+BFjWR?=
 =?us-ascii?Q?QeZHvwXub0uROgb7t2iGl3HqkKtDdP/aBBOdVPCcZMVGEr9P4jY5VKPP5/QK?=
 =?us-ascii?Q?wPISdH/59BAiS9V63EQmZA15fJ6OCnT4scO3RBM1skDr1cMNLEJnc6lhg8RY?=
 =?us-ascii?Q?5/dMDlAE0K10GgL2kiKsQL5teGAtWrCVcPSy1Pd7ngwhvKakdn+OSZoY7uhz?=
 =?us-ascii?Q?eV1ftHiHKKL2OClkRuasqtaSZ6tNruLpiyhF/lBJlvhcrRgsPAbz1+sFMBd+?=
 =?us-ascii?Q?5eWkAzVQLK9QXCch7FcUDD5RJBSibQmK9C06juqV1+8giA12z45qdqRgvCeW?=
 =?us-ascii?Q?qpb7AqTih8BSHE9zdznG9sEiKY6mIp7k5YB8QlnA69pmaErR1ooEMpHYVeiw?=
 =?us-ascii?Q?9Q=3D=3D?=
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: microsoft.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR21MB1688.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0c1bf722-548b-4607-f879-08db4b8e0cba
X-MS-Exchange-CrossTenant-originalarrivaltime: 03 May 2023 04:22:47.3546
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 72f988bf-86f1-41af-91ab-2d7cd011db47
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: F4oz+y3a+0Vw8WA7Yyy7owpm/xWojADSC5j6iYwY6aydHvqP9XDf88eWc5mo3pwV5g7MITimVS8bEgxbVTFOLHfvMUMBbAfxxuOAAsxlbEw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ1PR21MB3672

From: Juergen Gross <jgross@suse.com> Sent: Tuesday, May 2, 2023 5:09 AM
>
> This series tries to fix the rather special case of PAT being available
> without having MTRRs (either because CONFIG_MTRR is not set, or
> because the feature has been disabled, e.g. by a hypervisor).
>
> The main use cases are Xen PV guests and SEV-SNP guests running under
> Hyper-V.
>
> Instead of trying to work around all the issues by adding if statements
> here and there, just try to use the complete available infrastructure
> by setting up a read-only MTRR state when needed.
>
> In the Xen PV case the current MTRR MSR values can be read from the
> hypervisor, while for the SEV-SNP case all that is needed is to set the
> default caching mode to "WB".
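A minimal sketch of such a read-only, software-defined MTRR state (all names here are made up for illustration and are not the actual API the series adds): the state carries a default memory type and a variable-range count, and once installed it is only ever read.

```c
#include <assert.h>

/*
 * Hedged sketch, not the series' actual code: all identifiers here are
 * hypothetical.  The idea is that a guest without MTRR MSRs can still
 * carry a read-only MTRR state, either filled from hypervisor-provided
 * values (Xen PV) or with just a default type of write-back (SEV-SNP).
 */
#define MTRR_TYPE_WRBACK 6	/* matches the x86 memory type encoding */

struct sw_mtrr_state {
	unsigned char def_type;	/* default memory type */
	unsigned int num_var;	/* number of variable ranges in use */
	int enabled;		/* state installed; treated as read-only */
};

static struct sw_mtrr_state sw_mtrr;

/* Install a software MTRR state; afterwards it is only read, never written. */
static void sw_mtrr_overwrite_state(unsigned int num_var,
				    unsigned char def_type)
{
	sw_mtrr.num_var = num_var;
	sw_mtrr.def_type = def_type;
	sw_mtrr.enabled = 1;
}
```

In this sketch an SEV-SNP guest would simply call `sw_mtrr_overwrite_state(0, MTRR_TYPE_WRBACK)`, while Xen PV would first read the real MSR values from the hypervisor and install those instead.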
>
> I have added some more cleanup which was discussed when looking into
> the most recent failures.
>
> Note that I couldn't test the Hyper-V related change (patch 3).
>
> Running on bare metal and with Xen didn't show any problems with the
> series applied.
>
> It should be noted that patches 9+10 replace today's way of looking up
> the MTRR cache type for a memory region: instead of inspecting the MTRR
> register values on every lookup, a memory map with the cache types is
> built once up front. This should make the lookup much faster and much
> easier to understand.
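The cache-type memory map mentioned for patches 9+10 could look roughly like the following sketch (hypothetical names and layout, not the series' actual code): a sorted array of non-overlapping ranges with their effective cache type, searched with a binary search, falling back to the default type for addresses in a gap.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative sketch only: struct and function names are made up and
 * do not match what patches 9+10 actually add.  The approach is to
 * pre-compute a sorted array of non-overlapping ranges, so a lookup is
 * a binary search instead of a scan over all MTRR MSR values.
 */
struct cache_map_entry {
	unsigned long long start;	/* inclusive */
	unsigned long long end;		/* exclusive */
	unsigned char type;		/* e.g. 6 == write-back */
};

/* Example map: [0, 0xa0000) write-back (6), [0xa0000, 0xc0000) uncachable (0). */
static const struct cache_map_entry sample_map[] = {
	{ 0x0,     0xa0000, 6 },
	{ 0xa0000, 0xc0000, 0 },
};

static unsigned char cache_map_lookup(const struct cache_map_entry *map,
				      size_t n, unsigned long long addr,
				      unsigned char def_type)
{
	size_t lo = 0, hi = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (addr < map[mid].start)
			hi = mid;
		else if (addr >= map[mid].end)
			lo = mid + 1;
		else
			return map[mid].type;
	}

	return def_type;	/* address falls in a gap: default type */
}
```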
>
> Changes in V2:
> - replaced former patches 1+2 with new patches 1-4, avoiding especially
>   the rather hacky approach of V1, while making all the MTRR type
>   conflict tests available for the Xen PV case
> - updated patch 6 (was patch 4 in V1)
>
> Changes in V3:
> - dropped patch 5 of V2, as already applied
> - split patch 1 of V2 into 2 patches
> - new patches 6-10
> - addressed comments
>
> Changes in V4:
> - addressed comments
>
> Changes in V5:
> - addressed comments
> - some other small fixes
> - new patches 3, 8 and 15
>
> Changes in V6:
> - patch 1 replaces patches 1+2 of V5
> - new patches 8+12
> - addressed comments
>
> Juergen Gross (16):
>   x86/mtrr: remove physical address size calculation
>   x86/mtrr: replace some constants with defines
>   x86/mtrr: support setting MTRR state for software defined MTRRs
>   x86/hyperv: set MTRR state when running as SEV-SNP Hyper-V guest
>   x86/xen: set MTRR state when running as Xen PV initial domain
>   x86/mtrr: replace vendor tests in MTRR code
>   x86/mtrr: have only one set_mtrr() variant
>   x86/mtrr: move 32-bit code from mtrr.c to legacy.c
>   x86/mtrr: allocate mtrr_value array dynamically
>   x86/mtrr: add get_effective_type() service function
>   x86/mtrr: construct a memory map with cache modes
>   x86/mtrr: add mtrr=debug command line option
>   x86/mtrr: use new cache_map in mtrr_type_lookup()
>   x86/mtrr: don't let mtrr_type_lookup() return MTRR_TYPE_INVALID
>   x86/mm: only check uniform after calling mtrr_type_lookup()
>   x86/mtrr: remove unused code
>
>  .../admin-guide/kernel-parameters.txt         |   4 +
>  arch/x86/hyperv/ivm.c                         |   4 +
>  arch/x86/include/asm/mtrr.h                   |  43 +-
>  arch/x86/include/uapi/asm/mtrr.h              |   6 +-
>  arch/x86/kernel/cpu/mtrr/Makefile             |   2 +-
>  arch/x86/kernel/cpu/mtrr/amd.c                |   2 +-
>  arch/x86/kernel/cpu/mtrr/centaur.c            |  11 +-
>  arch/x86/kernel/cpu/mtrr/cleanup.c            |  22 +-
>  arch/x86/kernel/cpu/mtrr/cyrix.c              |   2 +-
>  arch/x86/kernel/cpu/mtrr/generic.c            | 677 ++++++++++++------
>  arch/x86/kernel/cpu/mtrr/legacy.c             |  90 +++
>  arch/x86/kernel/cpu/mtrr/mtrr.c               | 195 ++---
>  arch/x86/kernel/cpu/mtrr/mtrr.h               |  18 +-
>  arch/x86/kernel/setup.c                       |   2 +
>  arch/x86/mm/pgtable.c                         |  24 +-
>  arch/x86/xen/enlighten_pv.c                   |  52 ++
>  16 files changed, 721 insertions(+), 433 deletions(-)
>  create mode 100644 arch/x86/kernel/cpu/mtrr/legacy.c
>
> --
> 2.35.3

I've tested the full v6 series in a normal Hyper-V guest and in an SEV-SNP
guest.

In the SNP guest, the page attributes in /sys/kernel/debug/x86/pat_memtype_list
are "write-back" in the expected cases.  The "mtrr" x86 feature no longer
appears in the "flags" output of "lscpu" or /proc/cpuinfo.  /proc/mtrr does
not exist, again as expected.

In a normal VM, the "mtrr" x86 feature appears in the flags, and /proc/mtrr
shows expected values.  The boot option mtrr=debug works as expected.

Tested-by: Michael Kelley <mikelley@microsoft.com>


From xen-devel-bounces@lists.xenproject.org Wed May 03 05:14:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 05:14:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528949.822746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu4oK-0000gh-NO; Wed, 03 May 2023 05:13:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528949.822746; Wed, 03 May 2023 05:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu4oK-0000ga-Km; Wed, 03 May 2023 05:13:52 +0000
Received: by outflank-mailman (input) for mailman id 528949;
 Wed, 03 May 2023 05:13:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pu4oJ-0000gU-7d
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 05:13:51 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4863da49-e971-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 07:13:47 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 433382220F;
 Wed,  3 May 2023 05:13:46 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 18B661331F;
 Wed,  3 May 2023 05:13:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id yzlhA4rtUWTuDAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 05:13:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4863da49-e971-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683090826; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LaWDROxnQcpwHJGfkAql2NTegOv3GyU+pkxSUG8g+SA=;
	b=pnNZhh5mhscbPVHU/wsicPKUkI9tXkqGFZyRghDH8ItD91k9k36az7kNVBMVWR6eTJ6J4K
	vHdrd91uaGWn5BQPHaAaWAZGNYrmWrkcGVHLlGlf9FR8PhU4Xln1j+vqAV130Th6TtICoQ
	oNKnVcY+NJKrp27X2nT+OFh05uQkVFA=
Message-ID: <887675ad-ef06-cf8c-8a32-5b3f726e2198@suse.com>
Date: Wed, 3 May 2023 07:13:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v4 05/13] tools/xenstore: use accounting buffering for
 node accounting
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-6-jgross@suse.com>
 <c3be0db1-ed47-967b-7f98-6f1569691fb6@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <c3be0db1-ed47-967b-7f98-6f1569691fb6@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------VeQcZ8FdmnwJV4taqQPZtlrI"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------VeQcZ8FdmnwJV4taqQPZtlrI
Content-Type: multipart/mixed; boundary="------------nfpqoveNwzJbFL96YvSvSfM1";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <887675ad-ef06-cf8c-8a32-5b3f726e2198@suse.com>
Subject: Re: [PATCH v4 05/13] tools/xenstore: use accounting buffering for
 node accounting
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-6-jgross@suse.com>
 <c3be0db1-ed47-967b-7f98-6f1569691fb6@xen.org>
In-Reply-To: <c3be0db1-ed47-967b-7f98-6f1569691fb6@xen.org>

--------------nfpqoveNwzJbFL96YvSvSfM1
Content-Type: multipart/mixed; boundary="------------N8VE7HNtmR9cgCw1BF6JdMag"

--------------N8VE7HNtmR9cgCw1BF6JdMag
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 02.05.23 20:55, Julien Grall wrote:
> Hi Juergen,
>
> On 05/04/2023 08:03, Juergen Gross wrote:
>> Add the node accounting to the accounting information buffering in
>> order to avoid having to undo it in case of failure.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   tools/xenstore/xenstored_core.c   | 21 ++-------------------
>>   tools/xenstore/xenstored_domain.h |  4 ++--
>>   2 files changed, 4 insertions(+), 21 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>> index 84335f5f3d..92a40ccf3f 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -1452,7 +1452,6 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
>>   static int destroy_node(struct connection *conn, struct node *node)
>>   {
>>       destroy_node_rm(conn, node);
>> -    domain_nbentry_dec(conn, get_node_owner(node));
>>       /*
>>        * It is not possible to easily revert the changes in a transaction.
>> @@ -1797,27 +1796,11 @@ static int do_set_perms(const void *ctx, struct connection *conn,
>>       old_perms = node->perms;
>>       domain_nbentry_dec(conn, get_node_owner(node));
>
> IIRC, we originally said that domain_nbentry_dec() could never fail in a
> non-transaction case. But with your current rework, the function can now
> fail because of an allocation failure.

How could that happen?

domain_nbentry_dec() can only be called if a node is owned by an already
known domain. So an allocation is impossible, as one would indicate a major
error in xenstored.

> Therefore, shouldn't we now check the error? (Possibly in a patch beforehand).

I don't think so. I can add a comment if you want.


Juergen
--------------N8VE7HNtmR9cgCw1BF6JdMag
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------N8VE7HNtmR9cgCw1BF6JdMag--

--------------nfpqoveNwzJbFL96YvSvSfM1--

--------------VeQcZ8FdmnwJV4taqQPZtlrI
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRR7YkFAwAAAAAACgkQsN6d1ii/Ey/O
7Qf7BulSDLguGvrfdji0yzCuDH9ISSVnqqtzcCMi3AZy9wmPBj113oiv2J7ZzwGyQ3VJBKb/kDk5
JM178hdWL1FiLNNtnac6ax5jiRg75E2665mu2LzoPoRz6Qdhxr5gEb8h+Bq2+1Smawdm4g0bt0rF
vbfNPmmRXF8xlGYIKNkgSXdgzKdXeFv9QrzZ7G1V/dq1e2o86r94MMFfN9czyXrMqjsh9mVM5HPb
TL4iD8ch8HMkon0oigu9IK+pEcfyfoqDe2zgctyvzbVS1bxX08ZEpzaYRzFGAjLHxJJRQ7Jw0CUQ
WH6QgH+ej9RJMLCzH+D1wUI1luasOklU29qNHGk/6Q==
=DZH8
-----END PGP SIGNATURE-----

--------------VeQcZ8FdmnwJV4taqQPZtlrI--


From xen-devel-bounces@lists.xenproject.org Wed May 03 05:22:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 05:22:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528952.822757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu4w0-00027p-Fz; Wed, 03 May 2023 05:21:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528952.822757; Wed, 03 May 2023 05:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu4w0-00027i-Cz; Wed, 03 May 2023 05:21:48 +0000
Received: by outflank-mailman (input) for mailman id 528952;
 Wed, 03 May 2023 05:21:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pu4vz-00027c-Um
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 05:21:47 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6622d888-e972-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 07:21:47 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 899501FF78;
 Wed,  3 May 2023 05:21:46 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5BB821331F;
 Wed,  3 May 2023 05:21:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id SlEwFGrvUWTOEAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 05:21:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6622d888-e972-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683091306; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=yOi3yGLq7RX6Qag50H/c43X0b6cIunKT3mfc8j3/LUQ=;
	b=oT2zFUg2A/o39AZK0LgXBeVQCIx0cBXMRXnKCMGji2a3eIA4++vaWe9sH4oUms6W7+DTa0
	D2ENxwGNML8qF5U5XRkjCbbw9o1/cIZaT1uT4CQdRK/D199vYQoK1izTNacyNlUVmLcyrn
	jc7gD6XqD6uLf+yO+qdglV985puhnek=
Message-ID: <80246bbf-85eb-0829-8f86-d5a2ea3941fe@suse.com>
Date: Wed, 3 May 2023 07:21:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-8-jgross@suse.com>
 <25027287-441a-304c-f035-0d3da3572d3a@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v4 07/13] tools/xenstore: use accounting data array for
 per-domain values
In-Reply-To: <25027287-441a-304c-f035-0d3da3572d3a@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------FcxXT8qekjRmiPntJ9Rci2Tl"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------FcxXT8qekjRmiPntJ9Rci2Tl
Content-Type: multipart/mixed; boundary="------------Rm8WJ3mb8Jcsj3d8pClM9LxW";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <80246bbf-85eb-0829-8f86-d5a2ea3941fe@suse.com>
Subject: Re: [PATCH v4 07/13] tools/xenstore: use accounting data array for
 per-domain values
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-8-jgross@suse.com>
 <25027287-441a-304c-f035-0d3da3572d3a@xen.org>
In-Reply-To: <25027287-441a-304c-f035-0d3da3572d3a@xen.org>

--------------Rm8WJ3mb8Jcsj3d8pClM9LxW
Content-Type: multipart/mixed; boundary="------------KEYllicDOrIfjMYCxqnnIMog"

--------------KEYllicDOrIfjMYCxqnnIMog
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 02.05.23 21:09, Julien Grall wrote:
> Hi Juergen,
>
> On 05/04/2023 08:03, Juergen Gross wrote:
>> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
>> index 5cfd730cf6..0d61bf4344 100644
>> --- a/tools/xenstore/xenstored_domain.h
>> +++ b/tools/xenstore/xenstored_domain.h
>> @@ -28,7 +28,10 @@ enum accitem {
>>       ACC_NODES,
>>       ACC_REQ_N,           /* Number of elements per request. */
>>       ACC_TR_N = ACC_REQ_N,    /* Number of elements per transaction. */
>> -    ACC_N = ACC_TR_N,    /* Number of elements per domain. */
>> +    ACC_WATCH = ACC_TR_N,
>> +    ACC_OUTST,
>> +    ACC_MEM,
>> +    ACC_N,               /* Number of elements per domain. */
>>   };
>>   void handle_event(void);
>> @@ -107,9 +110,8 @@ static inline void domain_memory_add_nochk(struct connection *conn,
>>   void domain_watch_inc(struct connection *conn);
>>   void domain_watch_dec(struct connection *conn);
>>   int domain_watch(struct connection *conn);
>> -void domain_outstanding_inc(struct connection *conn);
>> -void domain_outstanding_dec(struct connection *conn);
>> -void domain_outstanding_domid_dec(unsigned int domid);
>> +void domain_outstanding_inc(struct connection *conn, unsigned int domid);
>
> AFAICT, all the callers of domain_outstanding_inc() will pass 'conn->id'.
> So it is not entirely clear what the benefit of adding the extra parameter is.

domain_acc_add() will need conn. I agree that I should drop the domid
parameter.


Juergen
--------------KEYllicDOrIfjMYCxqnnIMog
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------KEYllicDOrIfjMYCxqnnIMog--

--------------Rm8WJ3mb8Jcsj3d8pClM9LxW--

--------------FcxXT8qekjRmiPntJ9Rci2Tl
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRR72kFAwAAAAAACgkQsN6d1ii/Ey9F
dQf9Ek9e2fCCniFtImYz1SfFBlRInCabKSX7y3w2m1F3CqKVf4GKv4m845vV21Hph22VUPJ7ogNq
kRNov81HNSQ3LQuPBg7wP4mGiCR6cONf88EGj7eMJump/AQD9utV+OXy0xvXiSFnBbdvx0E/5lD6
PHPORQjM0HyJocEZaGYxt5a71CgWIAJE/4LH45QSd2sDxoT+XQ0I5q63syBQu9B6m334g5ycjdld
4Q6sEUXRo8Sb5O7Z+844OT6MqptO1KXgxdLq4coC/ghBlaQunVCYSJtyVUVGI/qYbjEsH+axnBPv
fCImRNdaYZPPpEooPBduu8yFOmAfLMLUtYyFifpFXQ==
=SbpM
-----END PGP SIGNATURE-----

--------------FcxXT8qekjRmiPntJ9Rci2Tl--


From xen-devel-bounces@lists.xenproject.org Wed May 03 05:27:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 05:27:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528955.822767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu51K-0002mr-04; Wed, 03 May 2023 05:27:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528955.822767; Wed, 03 May 2023 05:27:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu51J-0002mk-So; Wed, 03 May 2023 05:27:17 +0000
Received: by outflank-mailman (input) for mailman id 528955;
 Wed, 03 May 2023 05:27:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pu51J-0002ma-8e; Wed, 03 May 2023 05:27:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pu51J-0007LX-2A; Wed, 03 May 2023 05:27:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pu51I-0003n3-Cw; Wed, 03 May 2023 05:27:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pu51I-0006NP-BH; Wed, 03 May 2023 05:27:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BexeGpCqAkTvV1hg0XPr1SSXsfkkQ+O9nnEWB14rzVE=; b=gvIk0A54qIlMFVUuIyfKcw2Z5X
	C1c/JqB/UtPK+0j2FIfHb2aDzhPWpBPyWXnScGmPqw1e3C9ired3umeCtJRxoSbuWaVS2wg0KPjbQ
	G3PwE3RsNwdeyiEQUSngPTNif14A+RKHJRzT1WE3EjPrzzjkq03AsTadfb6CEFiqUTLE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180507-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180507: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-xsm:xen-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b5f47ba73b7c1457d2f18d71c00e1a91a76fe60b
X-Osstest-Versions-That:
    qemuu=7c18f2d663521f1b31b821a13358ce38075eaf7d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 May 2023 05:27:16 +0000

flight 180507 qemu-mainline real [real]
flight 180512 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180507/
http://logs.test-lab.xenproject.org/osstest/logs/180512/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-xsm        7 xen-install         fail pass in 180512-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180487
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180487
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180487
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180487
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180487
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180487
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180487
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180487
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 qemuu                b5f47ba73b7c1457d2f18d71c00e1a91a76fe60b
baseline version:
 qemuu                7c18f2d663521f1b31b821a13358ce38075eaf7d

Last test of basis   180487  2023-04-30 07:39:02 Z    2 days
Testing same since   180507  2023-05-02 15:37:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Bulekov <alxndr@bu.edu>
  Fabiano Rosas <farosas@suse.de>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   7c18f2d663..b5f47ba73b  b5f47ba73b7c1457d2f18d71c00e1a91a76fe60b -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed May 03 06:15:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 06:15:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528964.822777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu5mK-0008OU-O9; Wed, 03 May 2023 06:15:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528964.822777; Wed, 03 May 2023 06:15:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu5mK-0008ON-LW; Wed, 03 May 2023 06:15:52 +0000
Received: by outflank-mailman (input) for mailman id 528964;
 Wed, 03 May 2023 06:15:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pu5mJ-0008OH-G2
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 06:15:51 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f1cfa927-e979-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 08:15:47 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 1C770222BD;
 Wed,  3 May 2023 06:15:47 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id F247613584;
 Wed,  3 May 2023 06:15:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 95p8ORL8UWTtKQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 06:15:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1cfa927-e979-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683094547; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=kmcwBsajxIzxE0uREb5tgy7d6wQRBvMrRV6hOczWqJY=;
	b=akO08lM+JHLACcvgISVgWsLWChTKaWbmO6swufQ+52xziMfKGuEw6CnGQz/+XpUOjYNz2/
	5J+tWmlD+TyMVitCWFk3B6QSV8dKXEtwks4dU+b6nLKUc0ofV+ZQbsgDmxkV+0O31txUo+
	61XRRgqDCgXrlyX54LcNbluL2ytsarM=
Message-ID: <2d98605e-9310-4e95-7688-003374dddefe@suse.com>
Date: Wed, 3 May 2023 08:15:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v4 10/13] tools/xenstore: switch transaction accounting to
 generic accounting
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-11-jgross@suse.com>
 <5b61ce7e-639c-4f2d-6cb9-421679d30d9d@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <5b61ce7e-639c-4f2d-6cb9-421679d30d9d@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------MyRyK0u3FPb61Z70nR0fXyY5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------MyRyK0u3FPb61Z70nR0fXyY5
Content-Type: multipart/mixed; boundary="------------SgTviRsEdOjj6hzyoDajOReW";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Message-ID: <2d98605e-9310-4e95-7688-003374dddefe@suse.com>
Subject: Re: [PATCH v4 10/13] tools/xenstore: switch transaction accounting to
 generic accounting
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-11-jgross@suse.com>
 <5b61ce7e-639c-4f2d-6cb9-421679d30d9d@xen.org>
In-Reply-To: <5b61ce7e-639c-4f2d-6cb9-421679d30d9d@xen.org>

--------------SgTviRsEdOjj6hzyoDajOReW
Content-Type: multipart/mixed; boundary="------------O3h0PR8yVdeKvkLJu22VSBPE"

--------------O3h0PR8yVdeKvkLJu22VSBPE
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDIuMDUuMjMgMjE6MTksIEp1bGllbiBHcmFsbCB3cm90ZToNCj4gSGksDQo+IA0KPiBP
biAwNS8wNC8yMDIzIDA4OjAzLCBKdWVyZ2VuIEdyb3NzIHdyb3RlOg0KPj4gQXMgdHJhbnNh
Y3Rpb24gYWNjb3VudGluZyBpcyBhY3RpdmUgZm9yIHVucHJpdmlsZWdlZCBkb21haW5zIG9u
bHksIGl0DQo+PiBjYW4gZWFzaWx5IGJlIGFkZGVkIHRvIHRoZSBnZW5lcmljIHBlci1kb21h
aW4gYWNjb3VudGluZy4NCj4+DQo+PiBTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxq
Z3Jvc3NAc3VzZS5jb20+DQo+PiAtLS0NCj4+IMKgIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb3JlLmPCoMKgwqDCoMKgwqDCoCB8wqAgMyArLS0NCj4+IMKgIHRvb2xzL3hlbnN0b3Jl
L3hlbnN0b3JlZF9jb3JlLmjCoMKgwqDCoMKgwqDCoCB8wqAgMSAtDQo+PiDCoCB0b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmPCoMKgwqDCoMKgIHwgMjEgKysrKysrKysrKysr
KysrKysrLS0tDQo+PiDCoCB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmjCoMKg
wqDCoMKgIHzCoCA0ICsrKysNCj4+IMKgIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF90cmFu
c2FjdGlvbi5jIHwgMTIgKysrKystLS0tLS0tDQo+PiDCoCA1IGZpbGVzIGNoYW5nZWQsIDI4
IGluc2VydGlvbnMoKyksIDEzIGRlbGV0aW9ucygtKQ0KPj4NCj4+IGRpZmYgLS1naXQgYS90
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2NvcmUuYw0KPj4gaW5kZXggMmQ0ODFmY2FkOS4uODhjNTY5YjdkNSAxMDA2NDQNCj4+
IC0tLSBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMNCj4+ICsrKyBiL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMNCj4+IEBAIC0yMDgzLDcgKzIwODMsNyBAQCBz
dGF0aWMgdm9pZCBjb25zaWRlcl9tZXNzYWdlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uKQ0K
Pj4gwqDCoMKgwqDCoMKgICogc3RhbGxlZC4gVGhpcyB3aWxsIGlnbm9yZSBuZXcgcmVxdWVz
dHMgdW50aWwgTGl2ZS1VcGRhdGUgaGFwcGVuZWQNCj4+IMKgwqDCoMKgwqDCoCAqIG9yIGl0
IHdhcyBhYm9ydGVkLg0KPj4gwqDCoMKgwqDCoMKgICovDQo+PiAtwqDCoMKgIGlmIChsdV9p
c19wZW5kaW5nKCkgJiYgY29ubi0+dHJhbnNhY3Rpb25fc3RhcnRlZCA9PSAwICYmDQo+PiAr
wqDCoMKgIGlmIChsdV9pc19wZW5kaW5nKCkgJiYgY29ubi0+dGFfc3RhcnRfdGltZSA9PSAw
ICYmDQo+IA0KPiBOSVQ6IEkga25vdyB0aGVyZSBhcmUgc29tZSBwbGFjZXMgaW4gdGhlIGNv
ZGUgY2hlY2tpbmcgZm9yIGNvbm4tPnRhX3N0YXJ0X3RpbWUgDQo+ID09IDAuIEJ1dCBpdCBm
ZWVscyBsaWtlIGEgYmV0dGVyIHJlcGxhY2VtZW50IHRvICJjb25uLT50cmFuc2FjdGlvbl9z
dGFydGVkIiBpcyANCj4gImxpc3RfZW1wdHkoLi4uKSIuDQoNCkZpbmUgd2l0aCBtZS4NCg0K
PiANCj4gSSBhZ3JlZSB0aGlzIGlzIGdvaW5nIHRvIGJlIG1vcmUgZXhwZW5zaXZlLiBCdXQg
eW91IGFyZSBzd2l0Y2hpbmcgdGhlIA0KPiB0cmFuc2FjdGlvbiBhY2NvdW50aW5nIHRvIGEg
Z2VuZXJpYyBpbmZyYXN0cnVjdHVyZSB3aGljaCBpcyBwcmV0dHkgaGVhdnkgY29tcGFyZSAN
Cj4gdG8gYSBzaW1wbGUgYWRkaXRpb24vc3Vic3RyYWN0aW9uLiBTbyBJIHRoaW5rIGEgImxp
c3RfZW1wdHkoKSIgd291bGQgYmUgT0sgaGVyZS4NCj4gDQo+PiDCoMKgwqDCoMKgwqDCoMKg
wqAgY29ubi0+aW4tPmhkci5tc2cudHlwZSA9PSBYU19UUkFOU0FDVElPTl9TVEFSVCkgew0K
Pj4gwqDCoMKgwqDCoMKgwqDCoMKgIHRyYWNlKCJEZWxheWluZyB0cmFuc2FjdGlvbiBzdGFy
dCBmb3IgY29ubmVjdGlvbiAlcCByZXFfaWQgJXVcbiIsDQo+PiDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgY29ubiwgY29ubi0+aW4tPmhkci5tc2cucmVxX2lkKTsNCj4+IEBA
IC0yMTkwLDcgKzIxOTAsNiBAQCBzdHJ1Y3QgY29ubmVjdGlvbiAqbmV3X2Nvbm5lY3Rpb24o
Y29uc3Qgc3RydWN0IA0KPj4gaW50ZXJmYWNlX2Z1bmNzICpmdW5jcykNCj4+IMKgwqDCoMKg
wqAgbmV3LT5mdW5jcyA9IGZ1bmNzOw0KPj4gwqDCoMKgwqDCoCBuZXctPmlzX2lnbm9yZWQg
PSBmYWxzZTsNCj4+IMKgwqDCoMKgwqAgbmV3LT5pc19zdGFsbGVkID0gZmFsc2U7DQo+PiAt
wqDCoMKgIG5ldy0+dHJhbnNhY3Rpb25fc3RhcnRlZCA9IDA7DQo+PiDCoMKgwqDCoMKgIElO
SVRfTElTVF9IRUFEKCZuZXctPm91dF9saXN0KTsNCj4+IMKgwqDCoMKgwqAgSU5JVF9MSVNU
X0hFQUQoJm5ldy0+YWNjX2xpc3QpOw0KPj4gwqDCoMKgwqDCoCBJTklUX0xJU1RfSEVBRCgm
bmV3LT5yZWZfbGlzdCk7DQo+PiBkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2NvcmUuaCBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmgNCj4+IGluZGV4
IDVhMTFkYzEyMzEuLjM1NjRkODVkN2QgMTAwNjQ0DQo+PiAtLS0gYS90b29scy94ZW5zdG9y
e/xenstored_core.h
>> +++ b/tools/xenstore/xenstored_core.h
>> @@ -151,7 +151,6 @@ struct connection
>>      /* List of in-progress transactions. */
>>      struct list_head transaction_list;
>>      uint32_t next_transaction_id;
>> -    unsigned int transaction_started;
>>      time_t ta_start_time;
>>      /* List of delayed requests. */
>> diff --git a/tools/xenstore/xenstored_domain.c
>> b/tools/xenstore/xenstored_domain.c
>> index 1caa60bb14..40bcc1dbfa 100644
>> --- a/tools/xenstore/xenstored_domain.c
>> +++ b/tools/xenstore/xenstored_domain.c
>> @@ -419,12 +419,10 @@ int domain_get_quota(const void *ctx, struct connection
>> *conn,
>>  {
>>      struct domain *d = find_domain_struct(domid);
>>      char *resp;
>> -    int ta;
>>      if (!d)
>>          return ENOENT;
>> -    ta = d->conn ? d->conn->transaction_started : 0;
>>      resp = talloc_asprintf(ctx, "Domain %u:\n", domid);
>>      if (!resp)
>>          return ENOMEM;
>> @@ -435,7 +433,7 @@ int domain_get_quota(const void *ctx, struct connection
>> *conn,
>>      ent(nodes, d->acc[ACC_NODES]);
>>      ent(watches, d->acc[ACC_WATCH]);
>> -    ent(transactions, ta);
>> +    ent(transactions, d->acc[ACC_TRANS]);
>>      ent(outstanding, d->acc[ACC_OUTST]);
>>      ent(memory, d->acc[ACC_MEM]);
>> @@ -1297,6 +1295,23 @@ void domain_outstanding_dec(struct connection *conn,
>> unsigned int domid)
>>      domain_acc_add(conn, domid, ACC_OUTST, -1, true);
>>  }
>> +void domain_transaction_inc(struct connection *conn)
>> +{
>> +    domain_acc_add(conn, conn->id, ACC_TRANS, 1, true);
>> +}
>> +
>> +void domain_transaction_dec(struct connection *conn)
>> +{
>> +    domain_acc_add(conn, conn->id, ACC_TRANS, -1, true);
>> +}
>> +
>> +unsigned int domain_transaction_get(struct connection *conn)
>> +{
>> +    return (domain_is_unprivileged(conn))
>> +        ? domain_acc_add(conn, conn->id, ACC_TRANS, 0, true)
>> +        : 0;
>> +}
>> +
>>  static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
>>  static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
>>  static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
>> diff --git a/tools/xenstore/xenstored_domain.h
>> b/tools/xenstore/xenstored_domain.h
>> index 0d61bf4344..abc766f343 100644
>> --- a/tools/xenstore/xenstored_domain.h
>> +++ b/tools/xenstore/xenstored_domain.h
>> @@ -31,6 +31,7 @@ enum accitem {
>>      ACC_WATCH = ACC_TR_N,
>>      ACC_OUTST,
>>      ACC_MEM,
>> +    ACC_TRANS,
>>      ACC_N,           /* Number of elements per domain. */
>>  };
>> @@ -112,6 +113,9 @@ void domain_watch_dec(struct connection *conn);
>>  int domain_watch(struct connection *conn);
>>  void domain_outstanding_inc(struct connection *conn, unsigned int domid);
>>  void domain_outstanding_dec(struct connection *conn, unsigned int domid);
>> +void domain_transaction_inc(struct connection *conn);
>> +void domain_transaction_dec(struct connection *conn);
>> +unsigned int domain_transaction_get(struct connection *conn);
>>  int domain_get_quota(const void *ctx, struct connection *conn,
>>               unsigned int domid);
>> diff --git a/tools/xenstore/xenstored_transaction.c
>> b/tools/xenstore/xenstored_transaction.c
>> index 11c8bcec84..1c14b8579a 100644
>> --- a/tools/xenstore/xenstored_transaction.c
>> +++ b/tools/xenstore/xenstored_transaction.c
>> @@ -479,8 +479,7 @@ int do_transaction_start(const void *ctx, struct
>> connection *conn,
>>      if (conn->transaction)
>>          return EBUSY;
>> -    if (domain_is_unprivileged(conn) &&
>> -        conn->transaction_started > quota_max_transaction)
>> +    if (domain_transaction_get(conn) > quota_max_transaction)
>>          return ENOSPC;
>>      /* Attach transaction to ctx for autofree until it's complete */
>> @@ -505,9 +504,9 @@ int do_transaction_start(const void *ctx, struct
>> connection *conn,
>>      list_add_tail(&trans->list, &conn->transaction_list);
>>      talloc_steal(conn, trans);
>>      talloc_set_destructor(trans, destroy_transaction);
>> -    if (!conn->transaction_started)
>> +    if (!conn->ta_start_time)
> 
> I think it would make more sense to move this if just before the list_add_tail()
> and use (list_empty()) for the check. This would make the code more consistent...

Okay.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:08:59 2023
Message-ID: <d9c25ce5-12fa-6271-f61f-eb67404c2011@suse.com>
Date: Wed, 3 May 2023 09:08:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 1/2] acpi: Make TPM version configurable.
Content-Language: en-US
To: Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
 <20230425174733.795961-2-jennifer.herbert@citrix.com>
 <5516fbf5-dbfc-dcd5-0465-e4757fdc16de@suse.com>
 <5e877d50-de2d-0af4-9fa0-d4529a97ee2f@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5e877d50-de2d-0af4-9fa0-d4529a97ee2f@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 03.05.2023 00:40, Jennifer Herbert wrote:
> On 02/05/2023 12:54, Jan Beulich wrote:
>> On 25.04.2023 19:47, Jennifer Herbert wrote:
>>> --- a/tools/firmware/hvmloader/util.c
>>> +++ b/tools/firmware/hvmloader/util.c
>>> @@ -994,13 +994,22 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
>>>       if ( !strncmp(xenstore_read("platform/acpi_laptop_slate", "0"), "1", 1)  )
>>>           config->table_flags |= ACPI_HAS_SSDT_LAPTOP_SLATE;
>>>   
>>> -    config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC |
>>> -                            ACPI_HAS_WAET | ACPI_HAS_PMTIMER |
>>> -                            ACPI_HAS_BUTTONS | ACPI_HAS_VGA |
>>> -                            ACPI_HAS_8042 | ACPI_HAS_CMOS_RTC);
>>> +    config->table_flags |= (ACPI_HAS_IOAPIC | ACPI_HAS_WAET |
>>> +                            ACPI_HAS_PMTIMER | ACPI_HAS_BUTTONS |
>>> +                            ACPI_HAS_VGA | ACPI_HAS_8042 |
>>> +                            ACPI_HAS_CMOS_RTC);
>>>       config->acpi_revision = 4;
>>>   
>>> -    config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
>>> +    s = xenstore_read("platform/tpm_version", "1");
>>> +    config->tpm_version = strtoll(s, NULL, 0);
>> Due to field width, someone specifying 257 will also get a 1.2 TPM,
>> if I'm not mistaken.
> 
> Seems likely.  And a few other wacky values would give you 1.2 as well,
> I'd think.  There could also be trailing junk on the version number.
> 
> I was a bit fazed by the lack of any real error cases in
> hvmloader_acpi_build_tables.  The approach seemed to be that if you put in
> junk, you'll get something, but possibly not what you're expecting.
> 
> Do I take it you'd prefer it to only accept a strict '1' for 1.2, with any
> other value resulting in no TPM being probed?  Or is it only the
> overflow cases you're concerned about?

Iirc the xenstore convention is that numeric values can be spelled out in
arbitrary ways, so limiting this to strictly "1" may be too restrictive.
Anything that evaluates to 1 without overflow or trailing junk ought to be
taken to mean 1, I think.

>>> --- a/tools/libacpi/build.c
>>> +++ b/tools/libacpi/build.c
>>> @@ -409,38 +409,47 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
>>>           memcpy(ssdt, ssdt_laptop_slate, sizeof(ssdt_laptop_slate));
>>>           table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
>>>       }
>>> -
>>> -    /* TPM TCPA and SSDT. */
>>> -    if ( (config->table_flags & ACPI_HAS_TCPA) &&
>>> -         (config->tis_hdr[0] != 0 && config->tis_hdr[0] != 0xffff) &&
>>> -         (config->tis_hdr[1] != 0 && config->tis_hdr[1] != 0xffff) )
>>> +    /* TPM and its SSDT. */
>>> +    if ( config->table_flags & ACPI_HAS_TPM )
>>>       {
>>> -        ssdt = ctxt->mem_ops.alloc(ctxt, sizeof(ssdt_tpm), 16);
>>> -        if (!ssdt) return -1;
>>> -        memcpy(ssdt, ssdt_tpm, sizeof(ssdt_tpm));
>>> -        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
>>> -
>>> -        tcpa = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_tcpa), 16);
>>> -        if (!tcpa) return -1;
>>> -        memset(tcpa, 0, sizeof(*tcpa));
>>> -        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, tcpa);
>>> -
>>> -        tcpa->header.signature = ACPI_2_0_TCPA_SIGNATURE;
>>> -        tcpa->header.length    = sizeof(*tcpa);
>>> -        tcpa->header.revision  = ACPI_2_0_TCPA_REVISION;
>>> -        fixed_strcpy(tcpa->header.oem_id, ACPI_OEM_ID);
>>> -        fixed_strcpy(tcpa->header.oem_table_id, ACPI_OEM_TABLE_ID);
>>> -        tcpa->header.oem_revision = ACPI_OEM_REVISION;
>>> -        tcpa->header.creator_id   = ACPI_CREATOR_ID;
>>> -        tcpa->header.creator_revision = ACPI_CREATOR_REVISION;
>>> -        if ( (lasa = ctxt->mem_ops.alloc(ctxt, ACPI_2_0_TCPA_LAML_SIZE, 16)) != NULL )
>>> +        switch ( config->tpm_version )
>>>           {
>>> -            tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
>>> -            tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
>>> -            memset(lasa, 0, tcpa->laml);
>>> -            set_checksum(tcpa,
>>> -                         offsetof(struct acpi_header, checksum),
>>> -                         tcpa->header.length);
>>> +        case 0: /* Assume legacy code wanted tpm 1.2 */
>> Along the lines of what Jason said: Unless this is known to be needed for
>> anything, I'd prefer if it was omitted.
> 
> I'm not aware of anything, but your comment two lines down from version 2
> made me think you knew of some.  So if you're happy with me removing this
> line, I am!

I'm afraid I can't make the connection to that comment (assuming I fished
out the right one). In any event, especially with Jason's observation that
ACPI_HAS_TPM won't be set alongside ->tpm_version being zero, I see no
use for keeping the "case 0:".

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:28:48 2023
Message-ID: <b08fb315-0ed6-1d7e-f977-14bf5e4be848@suse.com>
Date: Wed, 3 May 2023 09:28:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 2/2] acpi: Add TPM2 interface definition.
Content-Language: en-US
To: Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230425174733.795961-1-jennifer.herbert@citrix.com>
 <20230425174733.795961-3-jennifer.herbert@citrix.com>
 <f9e72c6c-9915-9995-06c0-0a0ac037bde1@suse.com>
 <91620112-039e-9e8b-9bac-452ea9ecddaf@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <91620112-039e-9e8b-9bac-452ea9ecddaf@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0184.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::16) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DB8PR04MB7116:EE_
X-MS-Office365-Filtering-Correlation-Id: 75ae5f8d-2c46-434f-ba65-08db4ba7ff5f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:

On 03.05.2023 01:29, Jennifer Herbert wrote:
> On 02/05/2023 14:41, Jan Beulich wrote:
>> On 25.04.2023 19:47, Jennifer Herbert wrote:
>>> --- a/tools/libacpi/acpi2_0.h
>>> +++ b/tools/libacpi/acpi2_0.h
>>> @@ -121,6 +121,36 @@ struct acpi_20_tcpa {
>>>   };
>>>   #define ACPI_2_0_TCPA_LAML_SIZE (64*1024)
>>>   
>>> +/*
>>> + * TPM2
>>> + */
>> Nit: While I'm willing to accept the comment style violation here as
>> (apparently) intentional, ...
> 
> Well, I was trying to keep the file consistent. As far as I can tell,
> this styling is used throughout the file - unless I'm misunderstanding
> your 'Nit'. (Do you object to a multi-line comment being used for a
> single line?) But I'm code-style blind, so just say how you want it.

Right - strictly speaking those multi-line comments all ought to be single-
line ones, but aiui they're multi-line intentionally so they stand out.
Hence - as you say, for consistency - it's okay for this one to follow
suit.

>>> --- /dev/null
>>> +++ b/tools/libacpi/ssdt_tpm2.asl
>>> @@ -0,0 +1,36 @@
>>> +/*
>>> + * ssdt_tpm2.asl
>>> + *
>>> + * Copyright (c) 2018-2022, Citrix Systems, Inc.
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify
>>> + * it under the terms of the GNU Lesser General Public License as published
>>> + * by the Free Software Foundation; version 2.1 only. with the special
>>> + * exception on linking described in file LICENSE.
>>> + *
>>> + * This program is distributed in the hope that it will be useful,
>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>>> + * GNU Lesser General Public License for more details.
>>> + */
>> While the full conversion to SPDX was done in the hypervisor only so far,
>> I think new tool stack source files would better use the much shorter
>> SPDX equivalent, too.
> 
> OK, this is where I get a bit confused. I believe I copied the license
> from ssdt_tpm.asl, for consistency.
> 
> So I think I need to use 'LGPL-2.1-only', but then it says it's using an
> exception on linking as described in LICENSE - but which LICENSE file?
> So I'm not sure what exception I should be adding. Do you know?

First of all, I think commit 68823df358e8 ("acpi: Re-license ACPI builder
files from GPLv2 to LGPLv2.1") was wrong in referring to LICENSE; I'm
pretty sure COPYING was meant instead. And indeed the difference between
libacpi's COPYING and LICENSES/LGPL-2.1 looks to be formatting plus an
extra section at the bottom of the latter; I haven't found any "special
exception on linking" anywhere. IOW I think using LGPL-2.1 here is what is
wanted (unlike e.g. GPL-2.0-only, there's no LGPL-2.1-only afaics), all
the more so since you're contributing a new file (and of course provided
you're okay with putting the new file under that license).
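As an illustration only (the exact identifier is whatever the tree settles
on, so treat the tag below as an assumption), the new file's license
boilerplate could then shrink to:

```asl
/* SPDX-License-Identifier: LGPL-2.1 */
/*
 * ssdt_tpm2.asl
 *
 * Copyright (c) 2018-2022, Citrix Systems, Inc.
 */
```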

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:40:40 2023
Message-ID: <2d764f29-2eb9-ecff-84cd-9baf12961c54@xen.org>
Date: Wed, 3 May 2023 08:40:21 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based systems
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

Title: Did you mean "Enable spin table"?

On 02/05/2023 11:58, Ayan Kumar Halder wrote:
> On some Arm32-based systems (e.g. Cortex-R52), smpboot is supported.

Same here.

> In these systems PSCI may not always be supported. In the case of the
> Cortex-R52, there is no EL3 or secure mode; thus PSCI, which requires
> EL3, is not supported.
> 
> Thus, we use 'spin-table' mechanism to boot the secondary cpus. The primary
> cpu provides the startup address of the secondary cores. This address is
> provided using the 'cpu-release-addr' property.
> 
> To support smpboot, we have copied the code from xen/arch/arm/arm64/smpboot.c

I would prefer that we don't duplicate the code, but instead move the 
logic into common code.

> with the following changes :-
> 
> 1. 'enable-method' is an optional property. Refer to the comment in
> https://www.kernel.org/doc/Documentation/devicetree/bindings/arm/cpus.yaml
> "      # On ARM 32-bit systems this property is optional"

Looking at this list, "spin-table" doesn't seem to be supported
for 32-bit systems. Can you point me to the discussion/patch where this 
would be added?
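For reference, the binding in question is used on 64-bit Arm roughly like
this (a hypothetical node with made-up values):

```dts
cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu@1 {
                device_type = "cpu";
                compatible = "arm,cortex-a53";
                reg = <1>;
                enable-method = "spin-table";
                /* 64-bit physical address of the zero-initialised
                 * mailbox the secondary polls for its entry point */
                cpu-release-addr = <0x0 0x8000fff8>;
        };
};
```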

> 
> 2. psci is not currently supported as a value for 'enable-method'.
> 
> 3. update_identity_mapping() is not invoked as we are not sure if it is
> required.

This is not necessary at the moment for 32-bit. This may change in the 
future as we make the 32-bit boot code more compliant. For now, I would 
not add it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:45:57 2023
Message-ID: <a06075c3-8e6f-a470-e8a0-58fd299373c1@suse.com>
Date: Wed, 3 May 2023 09:45:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Jennifer Herbert <jennifer.herbert@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] libacpi: switch to SPDX
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Commit 68823df358e8 ("acpi: Re-license ACPI builder files from GPLv2 to
LGPLv2.1") added references to a "special exception on linking described
in file LICENSE", without actually adding such a file. Quite likely
COPYING was meant instead, yet then its text matches LICENSES/LGPL-2.1
except for some explanatory text (clarifying the "only" aspect) at the
top (and formatting). Hence replace the text in all the files with SPDX
references to LGPL-2.1.

Note that dsdt_acpi_info.asl had no license text. An SPDX tag is being
added there nevertheless.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/libacpi/COPYING
+++ /dev/null
@@ -1,468 +0,0 @@
-This library is licensed under LGPL v2.1 to allow its usage in LGPL-2.1
-libraries such as libxl. Note that the only valid version of the LGPL as
-far as the files in this directory (and its subdirectories) are concerned
-is _this_ particular version of the license (i.e., *only* v2.1, not v2.2
-or v3.x, unless explicitly otherwise stated.
-
-Where clause 3 is invoked in order to relicense under the GPL then
-this shall be considered to be GPL v2 only for files which have
-specified LGPL v2.1 only.
-
-                  GNU LESSER GENERAL PUBLIC LICENSE
-                       Version 2.1, February 1999
-
- Copyright (C) 1991, 1999 Free Software Foundation, Inc.
- 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
-[This is the first released version of the Lesser GPL.  It also counts
- as the successor of the GNU Library Public License, version 2, hence
- the version number 2.1.]
-
-                            Preamble
-
-  The licenses for most software are designed to take away your
-freedom to share and change it.  By contrast, the GNU General Public
-Licenses are intended to guarantee your freedom to share and change
-free software--to make sure the software is free for all its users.
-
-  This license, the Lesser General Public License, applies to some
-specially designated software packages--typically libraries--of the
-Free Software Foundation and other authors who decide to use it.  You
-can use it too, but we suggest you first think carefully about whether
-this license or the ordinary General Public License is the better
-strategy to use in any particular case, based on the explanations below.
-
-  When we speak of free software, we are referring to freedom of use,
-not price.  Our General Public Licenses are designed to make sure that
-you have the freedom to distribute copies of free software (and charge
-for this service if you wish); that you receive source code or can get
-it if you want it; that you can change the software and use pieces of
-it in new free programs; and that you are informed that you can do
-these things.
-
-  To protect your rights, we need to make restrictions that forbid
-distributors to deny you these rights or to ask you to surrender these
-rights.  These restrictions translate to certain responsibilities for
-you if you distribute copies of the library or if you modify it.
-
-  For example, if you distribute copies of the library, whether gratis
-or for a fee, you must give the recipients all the rights that we gave
-you.  You must make sure that they, too, receive or can get the source
-code.  If you link other code with the library, you must provide
-complete object files to the recipients, so that they can relink them
-with the library after making changes to the library and recompiling
-it.  And you must show them these terms so they know their rights.
-
-  We protect your rights with a two-step method: (1) we copyright the
-library, and (2) we offer you this license, which gives you legal
-permission to copy, distribute and/or modify the library.
-
-  To protect each distributor, we want to make it very clear that
-there is no warranty for the free library.  Also, if the library is
-modified by someone else and passed on, the recipients should know
-that what they have is not the original version, so that the original
-author's reputation will not be affected by problems that might be
-introduced by others.
-
-  Finally, software patents pose a constant threat to the existence of
-any free program.  We wish to make sure that a company cannot
-effectively restrict the users of a free program by obtaining a
-restrictive license from a patent holder.  Therefore, we insist that
-any patent license obtained for a version of the library must be
-consistent with the full freedom of use specified in this license.
-
-  Most GNU software, including some libraries, is covered by the
-ordinary GNU General Public License.  This license, the GNU Lesser
-General Public License, applies to certain designated libraries, and
-is quite different from the ordinary General Public License.  We use
-this license for certain libraries in order to permit linking those
-libraries into non-free programs.
-
-  When a program is linked with a library, whether statically or using
-a shared library, the combination of the two is legally speaking a
-combined work, a derivative of the original library.  The ordinary
-General Public License therefore permits such linking only if the
-entire combination fits its criteria of freedom.  The Lesser General
-Public License permits more lax criteria for linking other code with
-the library.
-
-  We call this license the "Lesser" General Public License because it
-does Less to protect the user's freedom than the ordinary General
-Public License.  It also provides other free software developers Less
-of an advantage over competing non-free programs.  These disadvantages
-are the reason we use the ordinary General Public License for many
-libraries.  However, the Lesser license provides advantages in certain
-special circumstances.
-
-  For example, on rare occasions, there may be a special need to
-encourage the widest possible use of a certain library, so that it becomes
-a de-facto standard.  To achieve this, non-free programs must be
-allowed to use the library.  A more frequent case is that a free
-library does the same job as widely used non-free libraries.  In this
-case, there is little to gain by limiting the free library to free
-software only, so we use the Lesser General Public License.
-
-  In other cases, permission to use a particular library in non-free
-programs enables a greater number of people to use a large body of
-free software.  For example, permission to use the GNU C Library in
-non-free programs enables many more people to use the whole GNU
-operating system, as well as its variant, the GNU/Linux operating
-system.
-
-  Although the Lesser General Public License is Less protective of the
-users' freedom, it does ensure that the user of a program that is
-linked with the Library has the freedom and the wherewithal to run
-that program using a modified version of the Library.
-
-  The precise terms and conditions for copying, distribution and
-modification follow.  Pay close attention to the difference between a
-"work based on the library" and a "work that uses the library".  The
-former contains code derived from the library, whereas the latter must
-be combined with the library in order to run.
-
-                  GNU LESSER GENERAL PUBLIC LICENSE
-   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
-
-  0. This License Agreement applies to any software library or other
-program which contains a notice placed by the copyright holder or
-other authorized party saying it may be distributed under the terms of
-this Lesser General Public License (also called "this License").
-Each licensee is addressed as "you".
-
-  A "library" means a collection of software functions and/or data
-prepared so as to be conveniently linked with application programs
-(which use some of those functions and data) to form executables.
-
-  The "Library", below, refers to any such software library or work
-which has been distributed under these terms.  A "work based on the
-Library" means either the Library or any derivative work under
-copyright law: that is to say, a work containing the Library or a
-portion of it, either verbatim or with modifications and/or translated
-straightforwardly into another language.  (Hereinafter, translation is
-included without limitation in the term "modification".)
-
-  "Source code" for a work means the preferred form of the work for
-making modifications to it.  For a library, complete source code means
-all the source code for all modules it contains, plus any associated
-interface definition files, plus the scripts used to control compilation
-and installation of the library.
-
-  Activities other than copying, distribution and modification are not
-covered by this License; they are outside its scope.  The act of
-running a program using the Library is not restricted, and output from
-such a program is covered only if its contents constitute a work based
-on the Library (independent of the use of the Library in a tool for
-writing it).  Whether that is true depends on what the Library does
-and what the program that uses the Library does.
-
-  1. You may copy and distribute verbatim copies of the Library's
-complete source code as you receive it, in any medium, provided that
-you conspicuously and appropriately publish on each copy an
-appropriate copyright notice and disclaimer of warranty; keep intact
-all the notices that refer to this License and to the absence of any
-warranty; and distribute a copy of this License along with the
-Library.
-
-  You may charge a fee for the physical act of transferring a copy,
-and you may at your option offer warranty protection in exchange for a
-fee.
-
-  2. You may modify your copy or copies of the Library or any portion
-of it, thus forming a work based on the Library, and copy and
-distribute such modifications or work under the terms of Section 1
-above, provided that you also meet all of these conditions:
-
-    a) The modified work must itself be a software library.
-
-    b) You must cause the files modified to carry prominent notices
-    stating that you changed the files and the date of any change.
-
-    c) You must cause the whole of the work to be licensed at no
-    charge to all third parties under the terms of this License.
-
-    d) If a facility in the modified Library refers to a function or a
-    table of data to be supplied by an application program that uses
-    the facility, other than as an argument passed when the facility
-    is invoked, then you must make a good faith effort to ensure that,
-    in the event an application does not supply such function or
-    table, the facility still operates, and performs whatever part of
-    its purpose remains meaningful.
-
-    (For example, a function in a library to compute square roots has
-    a purpose that is entirely well-defined independent of the
-    application.  Therefore, Subsection 2d requires that any
-    application-supplied function or table used by this function must
-    be optional: if the application does not supply it, the square
-    root function must still compute square roots.)
-
-These requirements apply to the modified work as a whole.  If
-identifiable sections of that work are not derived from the Library,
-and can be reasonably considered independent and separate works in
-themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works.  But when you
-distribute the same sections as part of a whole which is a work based
-on the Library, the distribution of the whole must be on the terms of
-this License, whose permissions for other licensees extend to the
-entire whole, and thus to each and every part regardless of who wrote
-it.
-
-Thus, it is not the intent of this section to claim rights or contest
-your rights to work written entirely by you; rather, the intent is to
-exercise the right to control the distribution of derivative or
-collective works based on the Library.
-
-In addition, mere aggregation of another work not based on the Library
-with the Library (or with a work based on the Library) on a volume of
-a storage or distribution medium does not bring the other work under
-the scope of this License.
-
-  3. You may opt to apply the terms of the ordinary GNU General Public
-License instead of this License to a given copy of the Library.  To do
-this, you must alter all the notices that refer to this License, so
-that they refer to the ordinary GNU General Public License, version 2,
-instead of to this License.  (If a newer version than version 2 of the
-ordinary GNU General Public License has appeared, then you can specify
-that version instead if you wish.)  Do not make any other change in
-these notices.
-
-  Once this change is made in a given copy, it is irreversible for
-that copy, so the ordinary GNU General Public License applies to all
-subsequent copies and derivative works made from that copy.
-
-  This option is useful when you wish to copy part of the code of
-the Library into a program that is not a library.
-
-  4. You may copy and distribute the Library (or a portion or
-derivative of it, under Section 2) in object code or executable form
-under the terms of Sections 1 and 2 above provided that you accompany
-it with the complete corresponding machine-readable source code, which
-must be distributed under the terms of Sections 1 and 2 above on a
-medium customarily used for software interchange.
-
-  If distribution of object code is made by offering access to copy
-from a designated place, then offering equivalent access to copy the
-source code from the same place satisfies the requirement to
-distribute the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
-  5. A program that contains no derivative of any portion of the
-Library, but is designed to work with the Library by being compiled or
-linked with it, is called a "work that uses the Library".  Such a
-work, in isolation, is not a derivative work of the Library, and
-therefore falls outside the scope of this License.
-
-  However, linking a "work that uses the Library" with the Library
-creates an executable that is a derivative of the Library (because it
-contains portions of the Library), rather than a "work that uses the
-library".  The executable is therefore covered by this License.
-Section 6 states terms for distribution of such executables.
-
-  When a "work that uses the Library" uses material from a header file
-that is part of the Library, the object code for the work may be a
-derivative work of the Library even though the source code is not.
-Whether this is true is especially significant if the work can be
-linked without the Library, or if the work is itself a library.  The
-threshold for this to be true is not precisely defined by law.
-
-  If such an object file uses only numerical parameters, data
-structure layouts and accessors, and small macros and small inline
-functions (ten lines or less in length), then the use of the object
-file is unrestricted, regardless of whether it is legally a derivative
-work.  (Executables containing this object code plus portions of the
-Library will still fall under Section 6.)
-
-  Otherwise, if the work is a derivative of the Library, you may
-distribute the object code for the work under the terms of Section 6.
-Any executables containing that work also fall under Section 6,
-whether or not they are linked directly with the Library itself.
-
-  6. As an exception to the Sections above, you may also combine or
-link a "work that uses the Library" with the Library to produce a
-work containing portions of the Library, and distribute that work
-under terms of your choice, provided that the terms permit
-modification of the work for the customer's own use and reverse
-engineering for debugging such modifications.
-
-  You must give prominent notice with each copy of the work that the
-Library is used in it and that the Library and its use are covered by
-this License.  You must supply a copy of this License.  If the work
-during execution displays copyright notices, you must include the
-copyright notice for the Library among them, as well as a reference
-directing the user to the copy of this License.  Also, you must do one
-of these things:
-
-    a) Accompany the work with the complete corresponding
-    machine-readable source code for the Library including whatever
-    changes were used in the work (which must be distributed under
-    Sections 1 and 2 above); and, if the work is an executable linked
-    with the Library, with the complete machine-readable "work that
-    uses the Library", as object code and/or source code, so that the
-    user can modify the Library and then relink to produce a modified
-    executable containing the modified Library.  (It is understood
-    that the user who changes the contents of definitions files in the
-    Library will not necessarily be able to recompile the application
-    to use the modified definitions.)
-
-    b) Use a suitable shared library mechanism for linking with the
-    Library.  A suitable mechanism is one that (1) uses at run time a
-    copy of the library already present on the user's computer system,
-    rather than copying library functions into the executable, and (2)
-    will operate properly with a modified version of the library, if
-    the user installs one, as long as the modified version is
-    interface-compatible with the version that the work was made with.
-
-    c) Accompany the work with a written offer, valid for at
-    least three years, to give the same user the materials
-    specified in Subsection 6a, above, for a charge no more
-    than the cost of performing this distribution.
-
-    d) If distribution of the work is made by offering access to copy
-    from a designated place, offer equivalent access to copy the above
-    specified materials from the same place.
-
-    e) Verify that the user has already received a copy of these
-    materials or that you have already sent this user a copy.
-
-  For an executable, the required form of the "work that uses the
-Library" must include any data and utility programs needed for
-reproducing the executable from it.  However, as a special exception,
-the materials to be distributed need not include anything that is
-normally distributed (in either source or binary form) with the major
-components (compiler, kernel, and so on) of the operating system on
-which the executable runs, unless that component itself accompanies
-the executable.
-
-  It may happen that this requirement contradicts the license
-restrictions of other proprietary libraries that do not normally
-accompany the operating system.  Such a contradiction means you cannot
-use both them and the Library together in an executable that you
-distribute.
-
-  7. You may place library facilities that are a work based on the
-Library side-by-side in a single library together with other library
-facilities not covered by this License, and distribute such a combined
-library, provided that the separate distribution of the work based on
-the Library and of the other library facilities is otherwise
-permitted, and provided that you do these two things:
-
-    a) Accompany the combined library with a copy of the same work
-    based on the Library, uncombined with any other library
-    facilities.  This must be distributed under the terms of the
-    Sections above.
-
-    b) Give prominent notice with the combined library of the fact
-    that part of it is a work based on the Library, and explaining
-    where to find the accompanying uncombined form of the same work.
-
-  8. You may not copy, modify, sublicense, link with, or distribute
-the Library except as expressly provided under this License.  Any
-attempt otherwise to copy, modify, sublicense, link with, or
-distribute the Library is void, and will automatically terminate your
-rights under this License.  However, parties who have received copies,
-or rights, from you under this License will not have their licenses
-terminated so long as such parties remain in full compliance.
-
-  9. You are not required to accept this License, since you have not
-signed it.  However, nothing else grants you permission to modify or
-distribute the Library or its derivative works.  These actions are
-prohibited by law if you do not accept this License.  Therefore, by
-modifying or distributing the Library (or any work based on the
-Library), you indicate your acceptance of this License to do so, and
-all its terms and conditions for copying, distributing or modifying
-the Library or works based on it.
-
-  10. Each time you redistribute the Library (or any work based on the
-Library), the recipient automatically receives a license from the
-original licensor to copy, distribute, link with or modify the Library
-subject to these terms and conditions.  You may not impose any further
-restrictions on the recipients' exercise of the rights granted herein.
-You are not responsible for enforcing compliance by third parties with
-this License.
-
-  11. If, as a consequence of a court judgment or allegation of patent
-infringement or for any other reason (not limited to patent issues),
-conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License.  If you cannot
-distribute so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you
-may not distribute the Library at all.  For example, if a patent
-license would not permit royalty-free redistribution of the Library by
-all those who receive copies directly or indirectly through you, then
-the only way you could satisfy both it and this License would be to
-refrain entirely from distribution of the Library.
-
-If any portion of this section is held invalid or unenforceable under any
-particular circumstance, the balance of the section is intended to apply,
-and the section as a whole is intended to apply in other circumstances.
-
-It is not the purpose of this section to induce you to infringe any
-patents or other property right claims or to contest validity of any
-such claims; this section has the sole purpose of protecting the
-integrity of the free software distribution system which is
-implemented by public license practices.  Many people have made
-generous contributions to the wide range of software distributed
-through that system in reliance on consistent application of that
-system; it is up to the author/donor to decide if he or she is willing
-to distribute software through any other system and a licensee cannot
-impose that choice.
-
-This section is intended to make thoroughly clear what is believed to
-be a consequence of the rest of this License.
-
-  12. If the distribution and/or use of the Library is restricted in
-certain countries either by patents or by copyrighted interfaces, the
-original copyright holder who places the Library under this License may add
-an explicit geographical distribution limitation excluding those countries,
-so that distribution is permitted only in or among countries not thus
-excluded.  In such case, this License incorporates the limitation as if
-written in the body of this License.
-
-  13. The Free Software Foundation may publish revised and/or new
-versions of the Lesser General Public License from time to time.
-Such new versions will be similar in spirit to the present version,
-but may differ in detail to address new problems or concerns.
-
-Each version is given a distinguishing version number.  If the Library
-specifies a version number of this License which applies to it and
-"any later version", you have the option of following the terms and
-conditions either of that version or of any later version published by
-the Free Software Foundation.  If the Library does not specify a
-license version number, you may choose any version ever published by
-the Free Software Foundation.
-
-  14. If you wish to incorporate parts of the Library into other free
-programs whose distribution conditions are incompatible with these,
-write to the author to ask for permission.  For software which is
-copyrighted by the Free Software Foundation, write to the Free
-Software Foundation; we sometimes make exceptions for this.  Our
-decision will be guided by the two goals of preserving the free status
-of all derivatives of our free software and of promoting the sharing
-and reuse of software generally.
-
-                            NO WARRANTY
-
-  15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
-WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
-EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
-OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
-KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
-IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
-LIBRARY IS WITH YOU.  SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
-THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
-  16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
-WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
-AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
-FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
-CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
-LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
-RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
-FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
-SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
-DAMAGES.
-
-                     END OF TERMS AND CONDITIONS
--- a/tools/libacpi/Makefile
+++ b/tools/libacpi/Makefile
@@ -1,16 +1,6 @@
+# SPDX-License-Identifier: LGPL-2.1
 #
 # Copyright (c) 2004, Intel Corporation.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU Lesser General Public License as published
-# by the Free Software Foundation; version 2.1 only. with the special
-# exception on linking described in file LICENSE.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU Lesser General Public License for more details.
-#
 
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
--- a/tools/libacpi/acpi2_0.h
+++ b/tools/libacpi/acpi2_0.h
@@ -1,15 +1,6 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /*
  * Copyright (c) 2004, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
  */
 #ifndef _ACPI_2_0_H_
 #define _ACPI_2_0_H_
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -1,16 +1,7 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /*
  * Copyright (c) 2004, Intel Corporation.
  * Copyright (c) 2006, Keir Fraser, XenSource Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
  */
 
 #include LIBACPI_STDUTILS
--- a/tools/libacpi/dsdt.asl
+++ b/tools/libacpi/dsdt.asl
@@ -1,17 +1,8 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /******************************************************************************
  * DSDT for Xen with Qemu device model
  *
  * Copyright (c) 2004, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
  */
 
 DefinitionBlock ("DSDT.aml", "DSDT", 2, "Xen", "HVM", 0)
--- a/tools/libacpi/dsdt_acpi_info.asl
+++ b/tools/libacpi/dsdt_acpi_info.asl
@@ -1,3 +1,4 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 
     Scope (\_SB)
     {
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -1,18 +1,9 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /******************************************************************************
  * libacpi.h
  * 
  * libacpi interfaces
  * 
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
- *
  * Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
  */
 
--- a/tools/libacpi/mk_dsdt.c
+++ b/tools/libacpi/mk_dsdt.c
@@ -1,14 +1,4 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
- */
+/* SPDX-License-Identifier: LGPL-2.1 */
 
 #include <stdio.h>
 #include <stdarg.h>
--- a/tools/libacpi/ssdt_laptop_slate.asl
+++ b/tools/libacpi/ssdt_laptop_slate.asl
@@ -1,17 +1,8 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /*
  * ssdt_conv.asl
  *
  * Copyright (c) 2017  Citrix Systems, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
  */
 
 /*
--- a/tools/libacpi/ssdt_pm.asl
+++ b/tools/libacpi/ssdt_pm.asl
@@ -1,18 +1,9 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /*
  * ssdt_pm.asl
  *
  * Copyright (c) 2008  Kamala Narasimhan
  * Copyright (c) 2008  Citrix Systems, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
  */
 
 /*
--- a/tools/libacpi/ssdt_s3.asl
+++ b/tools/libacpi/ssdt_s3.asl
@@ -1,17 +1,8 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /*
  * ssdt_s3.asl
  *
  * Copyright (c) 2011  Citrix Systems, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
  */
 
 DefinitionBlock ("SSDT_S3.aml", "SSDT", 2, "Xen", "HVM", 0)
--- a/tools/libacpi/ssdt_s4.asl
+++ b/tools/libacpi/ssdt_s4.asl
@@ -1,17 +1,8 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /*
  * ssdt_s4.asl
  *
  * Copyright (c) 2011  Citrix Systems, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
  */
 
 DefinitionBlock ("SSDT_S4.aml", "SSDT", 2, "Xen", "HVM", 0)
--- a/tools/libacpi/ssdt_tpm.asl
+++ b/tools/libacpi/ssdt_tpm.asl
@@ -1,17 +1,8 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /*
  * ssdt_tpm.asl
  *
  * Copyright (c) 2006, IBM Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
  */
 
 /* SSDT for TPM TIS Interface for Xen with Qemu device model. */
--- a/tools/libacpi/static_tables.c
+++ b/tools/libacpi/static_tables.c
@@ -1,16 +1,7 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
 /*
  * Copyright (c) 2004, Intel Corporation.
  * Copyright (c) 2006, Keir Fraser, XenSource Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU Lesser General Public License as published
- * by the Free Software Foundation; version 2.1 only. with the special
- * exception on linking described in file LICENSE.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU Lesser General Public License for more details.
  */
 
 #include "acpi2_0.h"


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:48:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 07:48:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528984.822826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7E8-00033Q-Jw; Wed, 03 May 2023 07:48:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528984.822826; Wed, 03 May 2023 07:48:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7E8-00033J-HB; Wed, 03 May 2023 07:48:40 +0000
Received: by outflank-mailman (input) for mailman id 528984;
 Wed, 03 May 2023 07:48:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pu7E7-00033D-D5
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 07:48:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pu7E6-0002zq-O0; Wed, 03 May 2023 07:48:38 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pu7E6-0006SL-H6; Wed, 03 May 2023 07:48:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=yj+e1ojIhHwufSuyLSUso4b/P3PcU7U9UvJs47xRsIU=; b=dH+FJbRwaeuaH0NjrkQa91XgU4
	bWUxDnN3om/arSYfa08X2l9tl4WL0XFPKENF8tjk7DJ1V6macCncSwzlMLeIIZRcWFKs5x9uXjWkl
	nASqC68QL7mZVy6R0b4pSrb7cdmZWSw+lxlgE8MHoSlwenJnRz/c9KBakpzkjZQJdAoA=;
Message-ID: <a0d48f47-bb62-5ed0-0c9b-95935dc75ca3@xen.org>
Date: Wed, 3 May 2023 08:48:36 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based systems
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2305021643010.974517@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2305021643010.974517@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 03/05/2023 00:55, Stefano Stabellini wrote:
>> +    {
>> +        printk("CPU%d: No release addr\n", cpu);
>> +        return -ENODEV;
>> +    }
>> +
>> +    release = ioremap_nocache(cpu_release_addr[cpu], 4);
>> +    if ( !release )
>> +    {
>> +        dprintk(XENLOG_ERR, "CPU%d: Unable to map release address\n", cpu);
>> +        return -EFAULT;
>> +    }
>> +
>> +    writel(__pa(init_secondary), release);
>> +
>> +    iounmap(release);
> 
> I think we need a wmb() ?

I am not sure why we would need a wmb() here. Instead, looking at the 
Linux version, I think we are missing a cache flush (as does arm64), 
which would be necessary if the CPU waiting for the release doesn't 
have its cache enabled.
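
For illustration only (not the actual patch), the "release address" 
protocol being discussed can be modelled in user space: the secondary 
spins on a shared mailbox until the primary publishes an entry point. 
The release store below stands in for the writel() plus whatever 
barrier/cache maintenance the real code needs; the names 
INIT_SECONDARY_PA and boot_secondary() are invented for the sketch.

```c
/* User-space model of the spin-table "release address" protocol used
 * to boot secondary CPUs.  Illustrative sketch only: real
 * firmware-level code also needs a cache clean after the write,
 * because the waiting CPU may poll with its data cache disabled and
 * so reads memory directly, never seeing a value parked in the
 * booting CPU's cache. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>

#define INIT_SECONDARY_PA 0x80001000UL /* stand-in for __pa(init_secondary) */

static _Atomic uintptr_t release_addr; /* the shared mailbox */

static void *secondary_cpu(void *arg)
{
    uintptr_t entry;

    (void)arg;
    /* The secondary spins until the primary publishes a non-zero
     * entry point (a WFE loop on real hardware). */
    while ((entry = atomic_load_explicit(&release_addr,
                                         memory_order_acquire)) == 0)
        ;
    return (void *)entry; /* "jump" to the entry point */
}

/* Primary-side sequence: start the waiter, publish the entry point
 * with release semantics (the role writel() plus the barrier or cache
 * clean plays in the kernel), then check the secondary observed it.
 * Returns 0 on success. */
static int boot_secondary(void)
{
    pthread_t t;
    void *seen;

    if (pthread_create(&t, NULL, secondary_cpu, NULL))
        return -1;
    atomic_store_explicit(&release_addr, INIT_SECONDARY_PA,
                          memory_order_release);
    pthread_join(t, &seen);
    return seen == (void *)INIT_SECONDARY_PA ? 0 : -1;
}
```

The acquire/release pairing is what a wmb() alone would provide between 
coherent observers; the point of the thread above is that when the 
observer is *not* cache-coherent (caches off), ordering is not enough 
and the write must actually be cleaned to memory.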

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:54:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 07:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528987.822837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7J8-0004UL-6O; Wed, 03 May 2023 07:53:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528987.822837; Wed, 03 May 2023 07:53:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7J8-0004UE-38; Wed, 03 May 2023 07:53:50 +0000
Received: by outflank-mailman (input) for mailman id 528987;
 Wed, 03 May 2023 07:53:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu7J7-0004U8-Eg
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 07:53:49 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0600.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a2717ada-e987-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 09:53:47 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM8PR04MB7281.eurprd04.prod.outlook.com (2603:10a6:20b:1d4::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 07:53:45 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 07:53:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2717ada-e987-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YDh4CXiHVE0BJZWqTtssqSY/PAUkMuWOkg3SLgOyFBzEZeDOS2v5Xu1WmtN33BFQ+tEC637oKgXenG6gzTRSUiA+A4qKC3U1kR1WLf9+ikIPBjYRrdIFD+G4OIB8oUWdH3Uz990MSM3fs0hU/ZE1+wzoTBKojCPYZRkh4fqS7QFpy/1VWzeRJW9dn8frMoRp+49UtWFebHpsWFLIDdHST6eVK3Vjnk+QcGH+QSXUCkmYm0xHBIyNeLI6ptcSTocsps4X3F2BhvWja/AlZ1mLTohzyxzbbfXGBtN537Rzz/fAom0kyyECPUYNk95XCQvd6xNBd9/Y8aKxoWDMuQeTIQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=e9CbK7nt9CXb0fo98FUzYPyx8qOlbbc9IREixlGl91A=;
 b=Hq9ADPT4pF0QUafHQNiQWBKGf33xjbLJ7IIifg4w9z+ZzVeAjUHFX+zilK7tjsSZZ5li1Gj1/l93l9QCVzVYPELxQpjP3kI+FzAJx6GFjbWvnS7RYXIuEtHnAosJGIdoDwW9Wx6P0/Lwx9/IPzpYt3RXwk5fk457lA2i3Nra6dnFgWxBJ/xxBFoIywn/ydgPqh6t2TyrZ4O5Mh/GgPE4YmgIaT8ozQDAC5UaLsAxRnL3GzLzD8MhY+yijhRc9TCZxEtFNat1dwksxcoVzd/kq1r+4yX3SK3WsCncRcbG9hCopwvpCmb9fqw1OnsDaMyemdBg3KVfpeALxKPLvmIQvQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=e9CbK7nt9CXb0fo98FUzYPyx8qOlbbc9IREixlGl91A=;
 b=tGPCGEzXO44KrTyjuaUb7w+h2IEuI7m63fk/4SHAITKvKmrz/M+Wj6oNZlYZ3GmyBbCO45rKgJmwHh4DK7yGsQmkIu19MLbiOtIbGHIJ222WdAhz1x0s0/CtSPpJ1DTMMoQkmVvsm/5WueKAdkjyt+bLwS3oPnrkuDrkxR0Ww+T/UfuL3cXcF7l2WjfvpaX2/h2fUh987JgMUrpblthxz3fGBP5h49U/RBi7RMRGg+Hrr+HXYg+DvBOXgbs3NqUAwpSTgLKdItlUu86AytEeJBwQ3aFfASaLMXV6zUVDsVjtwa7+gbmGvnrPzEa4ifvl58dXkD8XzBWJ/8x0YNlIqA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <01be67ec-f7e6-f9bf-9d9c-57941ec1dd70@suse.com>
Date: Wed, 3 May 2023 09:53:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN][PATCH v6 09/19] xen/iommu: Move spin_lock from
 iommu_dt_device_is_assigned to caller
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com,
 Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-10-vikram.garhwal@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230502233650.20121-10-vikram.garhwal@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0101.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::10) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AM8PR04MB7281:EE_
X-MS-Office365-Filtering-Correlation-Id: c59cf137-f0d4-41d9-a495-08db4bab8567
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c59cf137-f0d4-41d9-a495-08db4bab8567
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 07:53:45.7345
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3+BX5ewiv8USQ6o6sIwmlSsDtWHmyiRypJYGv2T2omrbvcjKeqbMWZ75PgpDCJsarudSbYT19nA79lJVxwiR7A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7281

On 03.05.2023 01:36, Vikram Garhwal wrote:
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -214,6 +214,7 @@ struct msi_msg;
>  #include <xen/device_tree.h>
>  
>  int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev);
> +bool_t iommu_dt_device_is_assigned_locked(const struct dt_device_node *dev);

Hmm, exposing this function globally doesn't look very nice to me: it
is pretty much internal, first and foremost because it requires the
caller to hold the correct lock(s). Can it perhaps be put in a private
header, such that it gets only limited visibility?

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:56:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 07:56:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528990.822847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7LV-00052t-JJ; Wed, 03 May 2023 07:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528990.822847; Wed, 03 May 2023 07:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7LV-00052m-GL; Wed, 03 May 2023 07:56:17 +0000
Received: by outflank-mailman (input) for mailman id 528990;
 Wed, 03 May 2023 07:56:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu7LU-00052f-8G
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 07:56:16 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0622.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa990d2f-e987-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 09:56:15 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM9PR04MB8162.eurprd04.prod.outlook.com (2603:10a6:20b:3b5::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 07:56:13 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 07:56:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa990d2f-e987-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UiJ0wOBbjB0Qz7Au7ACpQDJLnP+cjf2b8roK6Gs4RbM1l+/nWn5veVz1WK0g4Ie5ka5kfSr0MFyITsTNsdU2c5Ucv2Fi8EEslpVhOQ+F/GkkTM2lJDJoR0j09RWxleMT7VAg5xkeuiLAiHCbMPZf9XeYpnW9dkARqj1T4wffTXofcEulugILt/J5vf3OQQsjTyI/V7wzHcSLITM5ehsbLSGYVt8fzE21fIYWwLn9cZmlH1nZW5OfLBF85gL8LdLAGO4E4pHQB5nYt/8LcVE6QkHCJ0N2cJ96jp1/q/rqNGUSlM2sEGpaHzcTTND8cqWBWRqJZg9uBKJceLoxFxAZUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=V1x8YyTWyQBNdIxAs9f3haq8niI/qNnULSndmW7mmjQ=;
 b=epASxsusJnEMjGQas782OyL9/0RPLQ9GlFWOlkPyLznnwq3laM+xeuyvGEK0PUrsjfev/Cl+/vGnFSgn4AZKC5ESKeUfcKFP82tvlMGP4EuF4Nb90Zij6Fug1kE0f/tdjwvAsbNoOLoIzfNsqSfOV5naGLUVHvHGDqETPHuZr6jgijxXuMu7zbZgz2LoNghIs9gHbeica6Ehrsd5D1he0JIger0b7NLI7DDkODWwVH++pY6iWdL7aMyA4UNarqOlqkchQy/ha0HBfQDd3Hn7q374Qzteh5Vb51JKQ1GN0N6NoDsKMcG/GbbKdfEBJZ73B5rNdSl0IR7oPhSGk29MNw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V1x8YyTWyQBNdIxAs9f3haq8niI/qNnULSndmW7mmjQ=;
 b=JN5Yd16lxdZbIH/2iqTkKEAY0tK7AaYH/1ehEtiPeeQqYs5B4p+kJxrM6SmGxjevCGSbdukN3vldd+voVlu6o3Z1z7SIOEy9zuI2pXTdokBkO5PNTaDW+mvOvKM5jjZ21NBM4k3yVQMLRIWu+VnoW+HChs0wz1s/aQb7FKbypE9ewL7s+h05ju0usYr9mNH2fPfZg15lPauXq63RE8CGQEVwUizHrKFs6/66ayEXMfGZkw+83UY8sm02PuWsffPMYVwkWNrUYVfFeD2rZNk+lLu3U6l8pxGwa/MdhFZYi3jJlAaoUpTJzwnvUxQ0MuyfBO00+dP9wX9TBg3kL13Iug==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7e4419e2-5715-85b7-0742-67d7cc35ba1c@suse.com>
Date: Wed, 3 May 2023 09:56:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN][PATCH v6 11/19] xen/iommu: Introduce
 iommu_remove_dt_device()
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com,
 Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-12-vikram.garhwal@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230502233650.20121-12-vikram.garhwal@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0098.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::6) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AM9PR04MB8162:EE_
X-MS-Office365-Filtering-Correlation-Id: d6c5794f-7e73-4445-39ad-08db4babdd5f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d6c5794f-7e73-4445-39ad-08db4babdd5f
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 07:56:12.9828
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pHHNVjqa6RQoHeLw3eNJnmmuFtz7yGDOIEq0NQ4rYfA+BX1qdcB6qAnUQJgfwF0+REvwT9JiOB/JsfA/vS+Rmg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8162

On 03.05.2023 01:36, Vikram Garhwal wrote:
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -219,6 +219,8 @@ int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev);
>  int iommu_dt_domain_init(struct domain *d);
>  int iommu_release_dt_devices(struct domain *d);
>  
> +int iommu_remove_dt_device(struct dt_device_node *np);
> +
>  /*
>   * Helper to add master device to the IOMMU using generic IOMMU DT bindings.
>   *

Nit: I'd consider it more logical if this declaration came after
iommu_add_dt_device()'s. With that adjustment:
Acked-by: Jan Beulich <jbeulich@suse.com>
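The suggested ordering would then read roughly as follows (a
declaration-only fragment; iommu_add_dt_device()'s exact signature is
assumed from context and may differ):

```c
/* xen/include/xen/iommu.h -- declaration order as suggested */
int iommu_add_dt_device(struct dt_device_node *np);
int iommu_remove_dt_device(struct dt_device_node *np);
```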

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:57:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 07:57:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528993.822857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7MH-0005an-UB; Wed, 03 May 2023 07:57:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528993.822857; Wed, 03 May 2023 07:57:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7MH-0005ag-RA; Wed, 03 May 2023 07:57:05 +0000
Received: by outflank-mailman (input) for mailman id 528993;
 Wed, 03 May 2023 07:57:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZlPw=AY=citrix.com=prvs=4803f4e7c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pu7MG-0005aa-Kq
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 07:57:04 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 15600c5c-e988-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 09:57:02 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 03:56:58 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA3PR03MB7396.namprd03.prod.outlook.com (2603:10b6:806:39a::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 07:56:56 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 07:56:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15600c5c-e988-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683100621;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=bGjhDo7vIaxEMzr49lNGjOBF70RQKgKjgiJwsZsUYkY=;
  b=dLkTKMOWq/5u9MQ4WnJHcqpdtLrXPwS8IzPwj+HoOCQbO+o3/xziErhx
   yAfh69N+58dER13PbTGbcyKSXARgeiYXUkZo1adOjT9WjSr8Y/2InRGWk
   WvC0vPoN03oSTxBU+U7MsoNdNK8Da03R2a96LnkwyS8rotoNrxP7XclsL
   Y=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 107698380
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:c3kzHmP+TQT9Ne5DXSw+r0olOvAfQz7a6FfZPkmYSmM3R+jA
X-Talos-MUID: 9a23:PeXJLQbtPsYJkeBTkyH92zpSDtdR/aW3I04kkbc5hOWtOnkl
X-IronPort-AV: E=Sophos;i="5.99,246,1677560400"; 
   d="scan'208";a="107698380"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H6FJtlJcw+2wB9EFVXfNpDk7wb4gGhppc4F+lsQUjoeKIw99G77CjiYzkod9t5NoIS/Bk/qeRrCCAcU217jeMZ9Nd3AeCz9Zkwm1E/gbZDCMlIvTWNe7W0RrWfOATikVqvF9/gTzzWq5JCzUvcHxJXmcQAyGl/JA1B0aBWyaUCRingH85tUh4ufVelCpBPNx1qS5NjG+t2AhiuQy9ErsJvY9Dya6PXKSn4M9cy0NRx9T+btjRbpVvaqBu+Z+tIMFdG5FpLIktaa+a4DzXYWlC6KdEoAfKjzHdbqEkqiZWIeMVh6uSDV20Ki5uQL317oC2J5w3Jd2nh/4u2/MRzrSwQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bGjhDo7vIaxEMzr49lNGjOBF70RQKgKjgiJwsZsUYkY=;
 b=CH5+vAlLMb52jSTvR7VJsuH1SeqtLvnNjkx0GuHEhXPRQdFD6ypyVy2L69vt2X5Nx31G6tXJ0Rbpru1t2gY3NDyYxi6/PmhGG6TWh4Leb6RUrRewEJ9nZlNJuFStvdqFWHD3VUo3lKGbS2py3oIeXc2Sugu4RU3eMTAJLg7R1du4Ad5WiYFBfmgZj0XZCrJasIoI48qhC8ED9KPkIWNrVGBgeODy2Wji5cbn3b7QcrVuqHJkc8YlIDVh48W/towFNj7aQ+ZJUwjPmvsoIrdMaUPmPtbPhjiUzNaRhfu+RgXepO1YLVF1WadWlxqB16XEGvg5lLG/eFCn0R++wKcakw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bGjhDo7vIaxEMzr49lNGjOBF70RQKgKjgiJwsZsUYkY=;
 b=bFrAoHOXqPvp/eiaHdgpKGjMmvcZQwLhTh8R3UIYLpVKyVcBEoYf0NZL3YDb8UfCBLCt51NXdHU/lJHmLKa8iK0LL0zbcsFCFwgRw6Xh0Lpo2oVoVIEJUuyrC0jO4IzjD0qvlflybsRLs07SgUTTVjsEJ0R+QtgXNFJb/wequYQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <9e9927ec-dae4-c204-73e4-a534368bbddb@citrix.com>
Date: Wed, 3 May 2023 08:56:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] libacpi: switch to SPDX
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Jennifer Herbert <jennifer.herbert@citrix.com>
References: <a06075c3-8e6f-a470-e8a0-58fd299373c1@suse.com>
In-Reply-To: <a06075c3-8e6f-a470-e8a0-58fd299373c1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0316.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a4::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA3PR03MB7396:EE_
X-MS-Office365-Filtering-Correlation-Id: 6a11f186-01b5-4919-dc9a-08db4babf6bc
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6a11f186-01b5-4919-dc9a-08db4babf6bc
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 07:56:55.7541
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /FViMrhnaLElDVjkoe0rnwROErnSaw7yGYxF+Lq8x6kQ5hYl4e3FHlL/T4oJUfVoPWETcymwBpIebYzzGrIeDNVVb6zlY3JnOCYhUpOJZeE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR03MB7396

On 03/05/2023 8:45 am, Jan Beulich wrote:
> Commit 68823df358e8 ("acpi: Re-license ACPI builder files from GPLv2 to
> LGPLv2.1") added references to a "special exception on linking described
> in file LICENSE", without actually adding such a file. Quite likely
> COPYING was meant instead, yet then its text matches LICENSES/LGPL-2.1
> except for some explanatory text (clarifying the "only" aspect) at the
> top (and formatting). Hence replace the text in all the files with SPDX
> references to LGPL-2.1.
>
> Note that dsdt_acpi_info.asl had no license text. An SPDX tag is being
> added there nevertheless.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:57:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 07:57:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.528996.822866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7Ma-00062b-8j; Wed, 03 May 2023 07:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 528996.822866; Wed, 03 May 2023 07:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7Ma-00062U-68; Wed, 03 May 2023 07:57:24 +0000
Received: by outflank-mailman (input) for mailman id 528996;
 Wed, 03 May 2023 07:57:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tguP=AY=citrix.com=prvs=48085cdab=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pu7MY-0005aa-89
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 07:57:22 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 208a384d-e988-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 09:57:20 +0200 (CEST)
Received: from mail-dm6nam12lp2174.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 03:57:17 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH8PR03MB7200.namprd03.prod.outlook.com (2603:10b6:510:238::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Wed, 3 May
 2023 07:57:12 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 07:57:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 208a384d-e988-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683100640;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=itWKGxrLUCqshMYvXypKtCJ+px6WnIaXc4zX2RJ0me4=;
  b=QvfZuZV3On+WCHkSM2gC+s1QbZ0IyibG/dptgvIrAivdgQBfF4lIVuLP
   +Jaj+TLxt15RbXAHnQ5P8tqWEV1T6KPWtZfm6mDIaqPuGEhSqSfhsvpcy
   +vml2ZsX12ckDzokA3AZY0SBn1uoT8YHQteFs6cV6DoHPTR5AVxlNk/Uf
   8=;
X-IronPort-RemoteIP: 104.47.59.174
X-IronPort-MID: 107698403
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AeuASqqABLOx1yIhEZqX8BvvtOQu2Dy6rm0gYmAHfl4fsmri6m8h+iqM2Vuxq3nPJF07HPMRHUO6AULxHbMTBtgvJ9rf739/JvKmk4VvgIXee5hPaFINgZPu235r4bGTY5Zm4L+z12g9Z2hbGK8VJfazqLGJldu3es7vrCdKfLMhEYusJpiRdXcpBN4TL1cYayJiaFS17X70dgtJOfBp+OpENzeYqXpxGJx17hwjFSaXdVsofu2z+hvcXWqRnoYixjMAhw/ONaILu46tcS1GzJxoDo4Lvc6T2J4Mb37JfDXl78Kk2f2s2aEWpx/gVkwtQXy/ebWliVbOLzjd6R2TIw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4SE1h/hbxFGyTWQaiwT8El7+ijhYgcZMhFhy29wQyVA=;
 b=nR0Ydg74lKq7NF9gh11WFe5rmxaFxfRStqRhBZjZOTtFycXd94E3bA3WZY9CyXgCNR974SU3cuGedQfTRlV2uR0pQJ4qAVRzWrIRV9EDACWVBkMSDPuz5ew0nFOMfOgE8osQamUaeWg5vNkfnF1DaBTGZDRNNQYsikSY9zjWItOPoZS3NAd02k+4A+t6GGIUOvTdd/DkUC+Vb+WcoE+goYN2xa7TtlU+OxRuQUttLhR8aLvHrpgUE+OWC54zHV1Yj8YUtVi3ayfF5LbiKA3J2joxfSlcbSD5l/nMpf20Plo2ipbqa0HthT4iosL0EiWFzTVosRc02jhA9chyRxM3uw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4SE1h/hbxFGyTWQaiwT8El7+ijhYgcZMhFhy29wQyVA=;
 b=qGfxJ31aT/umzKH4CRGEKepOM3knajT64UfVs4ekfO/Qz0d8YzwbZeFVNwT1WKJiw6wCVr/gputai1cmtt6bjZnHGKwxh3B2lfI8/ciwNwWflI0t1c2xelc50gfbDG2TWg5so+FzIA5aSHhr+rDncXLDBfKjy2c/+UN89cL/MO4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 3 May 2023 09:57:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Ross Lagerwall <ross.lagerwall@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"jgross@suse.com" <jgross@suse.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"axboe@kernel.dk" <axboe@kernel.dk>
Subject: Re: [PATCH] xen/blkfront: Only check REQ_FUA for writes
Message-ID: <ZFIT0+rQRCpBVDU8@Air-de-Roger>
References: <20230426164005.2213139-1-ross.lagerwall@citrix.com>
 <ZFEzAnOskzdb61O4@Air-de-Roger>
 <DM6PR03MB53726982B16DC1FA7B77D143F06F9@DM6PR03MB5372.namprd03.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <DM6PR03MB53726982B16DC1FA7B77D143F06F9@DM6PR03MB5372.namprd03.prod.outlook.com>
X-ClientProxiedBy: LO2P265CA0152.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::20) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH8PR03MB7200:EE_
X-MS-Office365-Filtering-Correlation-Id: 5ac2aa51-87e0-4f92-d8b2-08db4bac00d2
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5ac2aa51-87e0-4f92-d8b2-08db4bac00d2
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 07:57:12.6007
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nKlAsURm+LbZWB35AOf+UJvY19oTdIj7E0sPnPDmIBr2v1OTdNMsmwg13YyizPzkxYJo78Fo/2fimEUnXrM4yw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR03MB7200

On Tue, May 02, 2023 at 04:40:17PM +0000, Ross Lagerwall wrote:
> > From: Roger Pau Monne <roger.pau@citrix.com>
> > Sent: Tuesday, May 2, 2023 4:57 PM
> > To: Ross Lagerwall <ross.lagerwall@citrix.com>
> > Cc: xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>; jgross@suse.com <jgross@suse.com>; sstabellini@kernel.org <sstabellini@kernel.org>; oleksandr_tyshchenko@epam.com <oleksandr_tyshchenko@epam.com>; axboe@kernel.dk <axboe@kernel.dk>
> > Subject: Re: [PATCH] xen/blkfront: Only check REQ_FUA for writes 
> >  
> > On Wed, Apr 26, 2023 at 05:40:05PM +0100, Ross Lagerwall wrote:
> > > The existing code silently converts read operations with the
> > > REQ_FUA bit set into write-barrier operations. This results in data
> > > loss as the backend scribbles zeroes over the data instead of returning
> > > it.
> > > 
> > > While the REQ_FUA bit doesn't make sense on a read operation, at least
> > > one well-known out-of-tree kernel module does set it and since it
> > > results in data loss, let's be safe here and only look at REQ_FUA for
> > > writes.
> > 
> > Do we know what the intention of the out-of-tree kernel module is
> > with its usage of FUA for reads?
> 
> It was just a plain bug that has now been fixed:
> 
> https://github.com/veeam/blksnap/commit/e3b3e7369642b59e01c647934789e5e20b380c62
> 
> I think this patch is still worthwhile, since reads becoming writes is
> asking for data corruption.

Right, can you add to the commit message that this was a bug in the
driver?  It wasn't clear to me whether that was the case or whether it
was legit for FUA to be used with requests != write.

> > 
> > Should this maybe be translated to a pair of flush cache and read
> > requests?
> > 
> > > Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
> > > ---
> > >  drivers/block/xen-blkfront.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > > index 23ed258b57f0..c1890c8a9f6e 100644
> > > --- a/drivers/block/xen-blkfront.c
> > > +++ b/drivers/block/xen-blkfront.c
> > > @@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
> > >                ring_req->u.rw.handle = info->handle;
> > >                ring_req->operation = rq_data_dir(req) ?
> > >                        BLKIF_OP_WRITE : BLKIF_OP_READ;
> > > -             if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
> > > +             if (req_op(req) == REQ_OP_FLUSH ||
> > > +                 (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
> > 
> > Should we print some kind of warning maybe once that we have received
> > a READ request with the FUA flag set, and the FUA flag will have no
> > effect?
> > 
> 
> I thought of adding something like this but I couldn't find any other
> block layer code doing a similar check (also it seems more appropriate
> in the core block layer).
> 
> WARN_ONCE(req_op(req) != REQ_OP_WRITE && (req->cmd_flags & REQ_FUA),
>           "REQ_FUA set on non-write request\n");
> 
> I can add it if the maintainers want it.

Hm, yes, it would be better placed in some generic blk code rather
than (possibly) in every driver; otherwise people might complain that
xen-blkfront throws warnings for requests other drivers handle fine.

Would you be up for sending such a patch to generic blk layer?

For the code here:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

With the commit message slightly adjusted to make it clear that
read + FUA was a bug in the module?

Thanks, Roger.
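[Editor's aside: the condition change under review can be exercised in
isolation.  The enum and flag values below are illustrative stand-ins,
not the kernel's definitions.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the kernel's request ops and flags;
 * the values are arbitrary and not taken from the kernel headers. */
enum req_op_demo { DEMO_OP_READ, DEMO_OP_WRITE, DEMO_OP_FLUSH };
#define DEMO_REQ_FUA 0x1u

/* Old check: any request carrying FUA became a write barrier,
 * including reads -- the data-loss case the patch fixes. */
static bool old_is_barrier(enum req_op_demo op, unsigned int flags)
{
    return op == DEMO_OP_FLUSH || (flags & DEMO_REQ_FUA);
}

/* New check: FUA is only honoured on writes. */
static bool new_is_barrier(enum req_op_demo op, unsigned int flags)
{
    return op == DEMO_OP_FLUSH ||
           (op == DEMO_OP_WRITE && (flags & DEMO_REQ_FUA));
}
```

A read with FUA set was previously turned into a barrier (and thus a
write); with the new predicate it stays a plain read, while writes with
FUA and flushes behave as before.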


From xen-devel-bounces@lists.xenproject.org Wed May 03 07:59:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 07:59:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529001.822877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7Oq-0006oD-Ln; Wed, 03 May 2023 07:59:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529001.822877; Wed, 03 May 2023 07:59:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7Oq-0006o6-Iz; Wed, 03 May 2023 07:59:44 +0000
Received: by outflank-mailman (input) for mailman id 529001;
 Wed, 03 May 2023 07:59:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu7Oo-0006o0-TJ
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 07:59:42 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0614.outbound.protection.outlook.com
 [2a01:111:f400:fe02::614])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 74e5dbbf-e988-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 09:59:40 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DB9PR04MB9499.eurprd04.prod.outlook.com (2603:10a6:10:362::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 07:59:38 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 07:59:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74e5dbbf-e988-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iBifMORxsMWiuSuej3g989o3uQq5ekhk2Dl3I/xiM5dG0UOk1BUw25p+/kJSuyGu+5t0yNlNONNFhGKUt7VPdWYFxhYnfp+w/n1sW0uwTTzZ6obBTK5KYgBzMLVdZUnOXEKjICXEr8VLZnTjbBq2cURz3sku6wouuLtr7gKkwJ1LsyzvhV33fgPU/Rosm/4At0MhIIpl8TSrZjLE+RpyQE3DkR7X3rO3VB94h/k84PqmyzXTgOc+temXuEAfnINF+ynwk0uFmEKw7u6SZ6Gd4ksGRbnJTZUYuXzRG3ntPrIy0o5SNA8H59hJTV/ExW/xTMDGbyxROMdKWZH5T3OgzA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TnfPxbd9O5raFiv4FU6HklW0GmRoH0bxmZ8WkaI26WQ=;
 b=NSJn5XY9hPeXqzUMPLTlzkz/WjjqRUTcP+/ZBOzFu87gaRU7NR9fBARdZnxbR4CIQ6PWWov1rJOv+dI69J+epm3ILj85PhTT3LjL88Bc/WsGNS3QDUNUxSi1a0HAGvRZarOskBWNnYc4lE9OXP9Um3l2SLlFwc/Djvx8Yc9uajS9+hQVk5/Ptrk7XemJp9o+SytVTJiWbB4pPWaOgLWSVqLOI2KuuYTXFBQMS58O/LxkUvi6tGv+9Jf5XNTcmkCmv3oCutLrJc8IuyK2hpPdw+dN4sVWLF17/67XOrB9Hdqy/6ZSsYsmL7g7LncyT9lvDh3DOx8vovettHZzZPGBEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TnfPxbd9O5raFiv4FU6HklW0GmRoH0bxmZ8WkaI26WQ=;
 b=4n2rsWJ8VcynALeWFrSsldxOwOO3xu9YRCNkWDkJejZa28ALA25dce1p5X6guroQYn+kS64fPw81dT61Ep8Ory8h59pbuNfAZ3p1fMIgiwaduPWa1a4e2Yd/2GpRt/jxGMiWZ81OFeIC8QGX33rTm0XasmoVV4Lj21zKfKT+nm2WnlYQbU9Y6HY2/rZZ/16ngJBwWgzSJEkvZtWGtg3Jwg+PDqS4iXJaEWe/6x+cZ7ghzrv9q5gdqe9KXw6SrHOhkqKj0BZ2lwBgRB8nrVhS+cW6+HJARWFKXbCCK5zN/qiTa+zjKDyZ5xQTT0mvFevLFkaR8iFnf8p2QPEMZEg1pw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5ab506b4-1084-b9b5-6fb1-7d1961416dec@suse.com>
Date: Wed, 3 May 2023 09:59:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN][PATCH v6 15/19] xen/arm: Implement device tree node removal
 functionalities
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-16-vikram.garhwal@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230502233650.20121-16-vikram.garhwal@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0045.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::16) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DB9PR04MB9499:EE_
X-MS-Office365-Filtering-Correlation-Id: d7254c95-99e7-4475-c1a0-08db4bac578d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d7254c95-99e7-4475-c1a0-08db4bac578d
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 07:59:37.9763
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0GcOjQjEbxTAT2sX4nzDbHLCuxWkGhNnlzpQgcTusG+SD276i3/h9Mk56VntdDZ1SWyTIKoFy4bEfoWazzfTJA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9499

On 03.05.2023 01:36, Vikram Garhwal wrote:
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -1057,6 +1057,24 @@ typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
>  #endif
>  
> +#if defined(__arm__) || defined (__aarch64__)
> +/*
> + * XEN_SYSCTL_dt_overlay
> + * Performs addition/removal of device tree nodes under parent node using dtbo.
> + * This does in three steps:
> + *  - Adds/Removes the nodes from dt_host.
> + *  - Adds/Removes IRQ permission for the nodes.
> + *  - Adds/Removes MMIO accesses.
> + */
> +struct xen_sysctl_dt_overlay {
> +    XEN_GUEST_HANDLE_64(void) overlay_fdt;  /* IN: overlay fdt. */
> +    uint32_t overlay_fdt_size;              /* IN: Overlay dtb size. */
> +#define XEN_SYSCTL_DT_OVERLAY_ADD                   1
> +#define XEN_SYSCTL_DT_OVERLAY_REMOVE                2
> +    uint8_t overlay_op;                     /* IN: Add or remove. */
> +};

I think you want to make the padding explicit here, and also check it
to be zero on input.
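[Editor's aside: a minimal sketch of what this suggests.  The field
name `pad`, the error value, and the plain-integer guest handle are
assumptions for illustration, not the final patch.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical layout with the padding made explicit; the guest
 * handle is modelled as a plain 64-bit field here. */
struct dt_overlay_demo {
    uint64_t overlay_fdt;      /* IN: overlay fdt (guest handle). */
    uint32_t overlay_fdt_size; /* IN: overlay dtb size. */
    uint8_t  overlay_op;       /* IN: add or remove. */
    uint8_t  pad[3];           /* IN: must be zero. */
};

/* Incoming-check sketch: reject the sysctl if any pad byte is set. */
static int dt_overlay_check_pad(const struct dt_overlay_demo *op)
{
    static const uint8_t zero[3];
    return memcmp(op->pad, zero, sizeof(op->pad)) ? -1 : 0;
}
```

With the explicit 3-byte array, the structure has no hidden compiler
padding, so its layout is the same for all callers and the zero check
covers every byte.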

> --- /dev/null
> +++ b/xen/include/xen/dt-overlay.h
> @@ -0,0 +1,58 @@
> + /* SPDX-License-Identifier: GPL-2.0 */
> + /*
> + * xen/dt-overlay.h
> + *
> + * Device tree overlay support in Xen.
> + *
> + * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
> + * Written by Vikram Garhwal <vikram.garhwal@amd.com>
> + *
> + */
> +#ifndef __XEN_DT_OVERLAY_H__
> +#define __XEN_DT_OVERLAY_H__
> +
> +#include <xen/list.h>
> +#include <xen/libfdt/libfdt.h>
> +#include <xen/device_tree.h>
> +#include <xen/rangeset.h>
> +
> +/*
> + * overlay_node_track describes information about added nodes through dtbo.
> + * @entry: List pointer.
> + * @dt_host_new: Pointer to the updated dt_host_new unflattened 'updated fdt'.
> + * @fdt: Stores the fdt.
> + * @nodes_fullname: Stores the full name of nodes.
> + * @nodes_irq: Stores the IRQ added from overlay dtb.
> + * @node_num_irq: Stores num of IRQ for each node in overlay dtb.
> + * @num_nodes: Stores total number of nodes in overlay dtb.
> + */
> +struct overlay_track {
> +    struct list_head entry;
> +    struct dt_device_node *dt_host_new;
> +    void *fdt;
> +    void *overlay_fdt;
> +    unsigned long *nodes_address;
> +    unsigned int num_nodes;
> +};
> +
> +struct xen_sysctl_dt_overlay;
> +
> +#ifdef CONFIG_OVERLAY_DTB
> +long dt_sysctl(struct xen_sysctl_dt_overlay *op);
> +#else
> +static inline long dt_sysctl(struct xen_sysctl_dt_overlay *op)
> +{
> +    return -ENOSYS;

Pretty certainly -EOPNOTSUPP.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 08:01:25 2023
Date: Wed, 3 May 2023 10:00:57 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel P. Berrangé <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 03/20] virtio-scsi: avoid race between unplug and
 transport event
Message-ID: <ZFIUue5ouDtch31y@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-4-stefanha@redhat.com>
 <ZFEqEkG4ktn9bBFN@redhat.com>
 <20230502185624.GA535070@fedora>
In-Reply-To: <20230502185624.GA535070@fedora>


Am 02.05.2023 um 20:56 hat Stefan Hajnoczi geschrieben:
> On Tue, May 02, 2023 at 05:19:46PM +0200, Kevin Wolf wrote:
> > Am 25.04.2023 um 19:26 hat Stefan Hajnoczi geschrieben:
> > > Only report a transport reset event to the guest after the SCSIDevice
> > > has been unrealized by qdev_simple_device_unplug_cb().
> > >
> > > qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
> > > to false so that scsi_device_find/get() no longer see it.
> > >
> > > scsi_target_emulate_report_luns() also needs to be updated to filter out
> > > SCSIDevices that are unrealized.
> > >
> > > These changes ensure that the guest driver does not see the SCSIDevice
> > > that's being unplugged if it responds very quickly to the transport
> > > reset event.
> > >
> > > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > > Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> > > Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> >
> > > @@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
> > >          blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
> > >          virtio_scsi_release(s);
> > >      }
> > > +
> > > +    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
> > > +        virtio_scsi_acquire(s);
> > > +        virtio_scsi_push_event(s, sd,
> > > +                               VIRTIO_SCSI_T_TRANSPORT_RESET,
> > > +                               VIRTIO_SCSI_EVT_RESET_REMOVED);
> > > +        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
> > > +        virtio_scsi_release(s);
> > > +    }
> > >  }
> >
> > s, sd and s->bus are all unrealized at this point, whereas before this
> > patch they were still realized. I couldn't find any practical problem
> > with it, but it made me nervous enough that I thought I should comment
> > on it at least.
> >
> > Should we maybe have documentation on these functions that says that
> > they accept unrealized objects as their parameters?
>
> s is the VirtIOSCSI controller, not the SCSIDevice that is being
> unplugged. The VirtIOSCSI controller is still realized.
>
> s->bus is the VirtIOSCSI controller's bus, it is still realized.

You're right, I misread this part.

> You are right that the SCSIDevice (sd) has been unrealized at this
> point:
> - sd->conf.blk is safe because qdev properties stay alive until the
>   Object is deleted, but I'm not sure we should rely on that.

This feels relatively safe (and it's preexisting anyway), reading a
property doesn't do anything unpredictable and we know the pointer is
still valid.

> - virtio_scsi_push_event(.., sd, ...) is questionable because the LUN
>   that's fetched from sd no longer belongs to the unplugged SCSIDevice.

This call is what made me nervous.

> How about I change the code to fetch sd->conf.blk and the LUN before
> unplugging?

You mean passing sd->id and sd->lun to virtio_scsi_push_event() instead
of sd itself? That would certainly look cleaner and make sure that we
don't later add code to it that does something with sd that would
require it to be realized.

Kevin




From xen-devel-bounces@lists.xenproject.org Wed May 03 08:02:23 2023
Message-ID: <83a1b85b-ad6e-475f-41fb-9d3ff896d86c@suse.com>
Date: Wed, 3 May 2023 10:02:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN][PATCH v6 15/19] xen/arm: Implement device tree node removal
 functionalities
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-16-vikram.garhwal@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230502233650.20121-16-vikram.garhwal@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.05.2023 01:36, Vikram Garhwal wrote:
> Introduce sysctl XEN_SYSCTL_dt_overlay to remove device-tree nodes added using
> device tree overlay.
> 
> xl dt-overlay remove file.dtbo:
>     Removes all the nodes in a given dtbo.
>     First, removes IRQ permissions and MMIO accesses. Next, it finds the nodes
>     in dt_host and delete the device node entries from dt_host.
> 
>     The nodes get removed only if they are not used by dom0 or domio.
> 
> Also, added an overlay_track struct to keep track of nodes added through device
> tree overlay. overlay_track has dt_host_new, which is the unflattened form of
> the updated fdt, and the names of the overlay nodes. When a node is removed, we
> also free the memory used by overlay_track for that particular overlay node.
> 
> Nested overlay removal is supported in a sequential manner only, i.e. if
> overlay_child nests under overlay_parent, it is assumed that the user first
> removes overlay_child and then removes overlay_parent.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/arch/arm/sysctl.c        |  16 +-
>  xen/common/Makefile          |   1 +
>  xen/common/dt-overlay.c      | 419 +++++++++++++++++++++++++++++++++++
>  xen/include/public/sysctl.h  |  23 ++
>  xen/include/xen/dt-overlay.h |  58 +++++
>  5 files changed, 516 insertions(+), 1 deletion(-)
>  create mode 100644 xen/common/dt-overlay.c
>  create mode 100644 xen/include/xen/dt-overlay.h

Is it possible that I and the other REST maintainers are Cc-ed on the next
version merely because, at the point of introducing the two new files, they're
not added right away to (perhaps) the DEVICE TREE section of ./MAINTAINERS?

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 08:02:45 2023
Message-ID: <acea2ebe-47c5-1d26-887d-b29df06d07dd@xen.org>
Date: Wed, 3 May 2023 09:02:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 05/13] tools/xenstore: use accounting buffering for
 node accounting
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-6-jgross@suse.com>
 <c3be0db1-ed47-967b-7f98-6f1569691fb6@xen.org>
 <887675ad-ef06-cf8c-8a32-5b3f726e2198@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <887675ad-ef06-cf8c-8a32-5b3f726e2198@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 03/05/2023 06:13, Juergen Gross wrote:
> On 02.05.23 20:55, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 05/04/2023 08:03, Juergen Gross wrote:
>>> Add the node accounting to the accounting information buffering in
>>> order to avoid having to undo it in case of failure.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   tools/xenstore/xenstored_core.c   | 21 ++-------------------
>>>   tools/xenstore/xenstored_domain.h |  4 ++--
>>>   2 files changed, 4 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/tools/xenstore/xenstored_core.c 
>>> b/tools/xenstore/xenstored_core.c
>>> index 84335f5f3d..92a40ccf3f 100644
>>> --- a/tools/xenstore/xenstored_core.c
>>> +++ b/tools/xenstore/xenstored_core.c
>>> @@ -1452,7 +1452,6 @@ static void destroy_node_rm(struct connection 
>>> *conn, struct node *node)
>>>   static int destroy_node(struct connection *conn, struct node *node)
>>>   {
>>>       destroy_node_rm(conn, node);
>>> -    domain_nbentry_dec(conn, get_node_owner(node));
>>>       /*
>>>        * It is not possible to easily revert the changes in a 
>>> transaction.
>>> @@ -1797,27 +1796,11 @@ static int do_set_perms(const void *ctx, 
>>> struct connection *conn,
>>>       old_perms = node->perms;
>>>       domain_nbentry_dec(conn, get_node_owner(node));
>>
>> IIRC, we originally said that domain_nbentry_dec() could never fail in 
>> a non-transaction case. But with your current rework, the function can 
>> now fail because of an allocation failure.
> 
> How would that be possible to happen?
> 
> domain_nbentry_dec() can only be called if a node is being owned by an 
> already
> known domain. So allocation is impossible to happen, as this would be a 
> major
> error in xenstored.

From my understanding, the node accounting will be temporary and then
committed at the end of the request.

So we would call acc_add_changed_dom() which may require allocation to 
hold the temporary accounting.

> 
>> Therefore, shouldn't we now check the error? (Possibly in a patch 
>> beforehand).
> 
> I don't think so. I can add a comment if you want.

See above.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 08:08:39 2023
Message-ID: <5045583e-8776-0834-ca93-44f85888d877@suse.com>
Date: Wed, 3 May 2023 10:08:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 1/2] x86/head: check base address alignment
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230502145920.56588-1-roger.pau@citrix.com>
 <20230502145920.56588-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230502145920.56588-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0107.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::20) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DU2PR04MB8758:EE_
X-MS-Office365-Filtering-Correlation-Id: 0500c80f-b3aa-4a50-55db-08db4bad9075
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0500c80f-b3aa-4a50-55db-08db4bad9075
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 08:08:22.9215
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YAJZAMBudSRTZrzLpMQ1kN7y3PD3XhoU3D0+0Jt6/Pmj0oG5MJeDIG3w7Up1p94Uu27ymrtA6/Vjc6wLnLPigg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8758

On 02.05.2023 16:59, Roger Pau Monne wrote:
> Ensure that the base address is 2M aligned, or else the page table
> entries created would be corrupt as reserved bits on the PDE end up
> set.
> 
> We have encountered broken firmware where grub2 would end up loading
> Xen at a non-2M-aligned region when using the multiboot2 protocol,
> which caused a triple fault that was very difficult to debug.
> 
> If the alignment is not as required by the page tables, print an error
> message and stop the boot.  Also add a build time check that the
> calculation of symbol offsets doesn't break the alignment of passed
> addresses.
> 
> The check could be performed earlier, but so far the alignment is
> required by the page tables, and hence it feels more natural that the
> check lives near the piece of code that requires it.
> 
> Note that when booted as an EFI application from the PE entry point
> the alignment check is already performed by
> efi_arch_load_addr_check(), and hence there's no need to add another
> check at the point where page tables get built in
> efi_arch_memory_setup().
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Would you mind if, while committing, ...

> @@ -146,6 +148,9 @@ bad_cpu:
>  not_multiboot:
>          add     $sym_offs(.Lbad_ldr_msg),%esi   # Error message
>          jmp     .Lget_vtb
> +not_aligned:

... a .L prefix was added to this label, bringing it out of sync with the
earlier one, but in line with e.g. ...

> +        add     $sym_offs(.Lbad_alg_msg),%esi   # Error message
> +        jmp     .Lget_vtb
>  .Lmb2_no_st:

... this one? I don't think the label is particularly useful to have in
the symbol table (nor are not_multiboot and likely a few others).

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 08:09:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 08:09:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529021.822927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7Xn-0002bv-V9; Wed, 03 May 2023 08:08:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529021.822927; Wed, 03 May 2023 08:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu7Xn-0002bm-Rf; Wed, 03 May 2023 08:08:59 +0000
Received: by outflank-mailman (input) for mailman id 529021;
 Wed, 03 May 2023 08:08:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+yte=AY=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1pu7Xn-0002AX-5e
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 08:08:59 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c0e62e46-e989-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 10:08:58 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-36-vLcfhCiUOwuGWkdaAmksmg-1; Wed, 03 May 2023 04:08:51 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C56C810504AC;
 Wed,  3 May 2023 08:08:50 +0000 (UTC)
Received: from redhat.com (dhcp-192-205.str.redhat.com [10.33.192.205])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 4DD9F492B00;
 Wed,  3 May 2023 08:08:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0e62e46-e989-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683101337;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=GdsrTvtpKiwphnVpAamWYth1JM74zuqKQh3Am2keHQg=;
	b=DrcBo0DMnCv/KYxjqbWPFEyuBQ5x0xjmLkCGM4iu9AWTD7bMDcq3Wb9YyNkcujEtAfsvw/
	j+MCnqqGLEWWFB1X3FiTInfVcS+E9cAVWuKkQJdECig6CHgi8sOGjV7gIO0ZE6SJgaxxPl
	EOqzB54vB61LF8uKV5o2rajrMYOoT1w=
X-MC-Unique: vLcfhCiUOwuGWkdaAmksmg-1
Date: Wed, 3 May 2023 10:08:46 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 07/20] block/export: stop using is_external in
 vhost-user-blk server
Message-ID: <ZFIWjuST/9tHVNMG@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-8-stefanha@redhat.com>
 <ZFE0iFnbr2ey0A7X@redhat.com>
 <20230502200645.GE535070@fedora>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="8uIPJqxz2LaoW5Jz"
Content-Disposition: inline
In-Reply-To: <20230502200645.GE535070@fedora>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10


--8uIPJqxz2LaoW5Jz
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Am 02.05.2023 um 22:06 hat Stefan Hajnoczi geschrieben:
> On Tue, May 02, 2023 at 06:04:24PM +0200, Kevin Wolf wrote:
> > Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > > vhost-user activity must be suspended during bdrv_drained_begin/end().
> > > This prevents new requests from interfering with whatever is happening
> > > in the drained section.
> > >
> > > Previously this was done using aio_set_fd_handler()'s is_external
> > > argument. In a multi-queue block layer world the aio_disable_external()
> > > API cannot be used since multiple AioContext may be processing I/O, not
> > > just one.
> > >
> > > Switch to BlockDevOps->drained_begin/end() callbacks.
> > >
> > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > ---
> > >  block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
> > >  util/vhost-user-server.c             | 10 +++----
> > >  2 files changed, 26 insertions(+), 27 deletions(-)
> > >
> > > diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
> > > index 092b86aae4..d20f69cd74 100644
> > > --- a/block/export/vhost-user-blk-server.c
> > > +++ b/block/export/vhost-user-blk-server.c
> > > @@ -208,22 +208,6 @@ static const VuDevIface vu_blk_iface = {
> > >      .process_msg           = vu_blk_process_msg,
> > >  };
> > >
> > > -static void blk_aio_attached(AioContext *ctx, void *opaque)
> > > -{
> > > -    VuBlkExport *vexp = opaque;
> > > -
> > > -    vexp->export.ctx = ctx;
> > > -    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
> > > -}
> > > -
> > > -static void blk_aio_detach(void *opaque)
> > > -{
> > > -    VuBlkExport *vexp = opaque;
> > > -
> > > -    vhost_user_server_detach_aio_context(&vexp->vu_server);
> > > -}
> >
> > So for changing the AioContext, we now rely on the fact that the node to
> > be changed is always drained, so the drain callbacks implicitly cover
> > this case, too?
>
> Yes.

Ok. This surprised me a bit at first, but I think it's fine.

We just need to remember it if we ever decide that once we have
multiqueue, we can actually change the default AioContext without
draining the node. But maybe at that point, we have to do more
fundamental changes anyway.

> > >  static void
> > >  vu_blk_initialize_config(BlockDriverState *bs,
> > >                           struct virtio_blk_config *config,
> > > @@ -272,6 +256,25 @@ static void vu_blk_exp_resize(void *opaque)
> > >      vu_config_change_msg(&vexp->vu_server.vu_dev);
> > >  }
> > >
> > > +/* Called with vexp->export.ctx acquired */
> > > +static void vu_blk_drained_begin(void *opaque)
> > > +{
> > > +    VuBlkExport *vexp = opaque;
> > > +
> > > +    vhost_user_server_detach_aio_context(&vexp->vu_server);
> > > +}
> >
> > Compared to the old code, we're losing the vexp->export.ctx = NULL. This
> > is correct at this point because after drained_begin we still keep
> > processing requests until we arrive at a quiescent state.
> >
> > However, if we detach the AioContext because we're deleting the
> > iothread, won't we end up with a dangling pointer in vexp->export.ctx?
> > Or can we be certain that nothing interesting happens before drained_end
> > updates it with a new valid pointer again?
>
> If you want I can add the detach() callback back again and set ctx to
> NULL there?

I haven't thought enough about it to say if it's a problem. If you have
and are confident that it's correct the way it is, I'm happy with it.

But bringing the callback back is the minimal change compared to the old
state. It's just unnecessary code if we don't actually need it.

Kevin

--8uIPJqxz2LaoW5Jz
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAmRSFo4ACgkQfwmycsiP
L9bGwQ/9HZIvpsXG7HrllRtRSZut7ucmoWbVO6oQFsVMI4RFuShXz3jZHU1roMAs
tCqrZRC6M8Y+JA/pROYtwBTcrvOE/FUc6d/KXFXblg/NGTFDUGwmPWa3t8kZPul/
wfOG8F/BRVLtOhY1DtJhAqO1cXkda/bZxqFDrCgyqtfLpXh+kuVQSOCHbontOMYu
vNfhY3FYP0FH44fRnPsUJQVsx0pmZ07waR5lc2ukINv8d0uSN2L7U8Vw8JrNaWAK
xGJmpznwr+eZla+/lsWQCQhWysYQIuZGuk/WFeXbitwLEnw7CiJc7qpYf55v4v8Z
AOLuMBQVN7QGGrKmpbwuzoD09MOWsIk7dd6X30DxiF7eaE5kO/4jkIeVYPtiaXg2
WUJpZYL1bRX04Tg2snXA4fw06fWMOO1LdAhrDHbU8P+WmxgjDpSO6zFGySFEjukb
NHi9p14nSXwrhFUD5wOREzVRR828cmgltcrVypCRPXwISwqxfHuAvqRdrG4/+pIh
5u/VTqxrKerXIPxMgiU9yqJAwaRJkyK/+yFmg9vYUiBcR4sGw5HQIUzeEif5V+pd
ckV5NxGX/Wbuyo5RpiL1YnzwB4LthdVaRYLQBuR3RaKyY7G6VSUP9ARQjBg6hEr/
fDnW23q/JH7Gr7CIdAKWz/D9Q4aL+rw4z9CbfBFkq4oG7naqDIU=
=3jaa
-----END PGP SIGNATURE-----

--8uIPJqxz2LaoW5Jz--



From xen-devel-bounces@lists.xenproject.org Wed May 03 08:42:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 08:42:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529035.822937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu83f-00070s-IJ; Wed, 03 May 2023 08:41:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529035.822937; Wed, 03 May 2023 08:41:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu83f-00070l-Fh; Wed, 03 May 2023 08:41:55 +0000
Received: by outflank-mailman (input) for mailman id 529035;
 Wed, 03 May 2023 08:41:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pu83e-00070f-1W
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 08:41:54 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 59abf5b9-e98e-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 10:41:51 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 75AAC22448;
 Wed,  3 May 2023 08:41:51 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 4E77C139F8;
 Wed,  3 May 2023 08:41:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id ekddEU8eUmT0dwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 08:41:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59abf5b9-e98e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683103311; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=lUD7esgrDW79z9o7+4vsMOz/TSmW2ZPXBObw8d5vtnU=;
	b=Z95QtXrIV8lVavs295Wjb54a3o+JdDbodmdlTdvZ5OBWGCGCegXm+GcDi3lzbNOT4bKnZm
	wxgNWV1JogPKjZfZGjRlb4EXF9ekz6uPoaYHG4j//PMQhubJLIml3rvdf/0sEbEOFPhaFC
	NDIS/b95wbSa3tRRghUQgGW03zzYTdM=
Message-ID: <ce231d58-c300-36d8-791c-2c6544b5e329@suse.com>
Date: Wed, 3 May 2023 10:41:50 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v4 05/13] tools/xenstore: use accounting buffering for
 node accounting
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-6-jgross@suse.com>
 <c3be0db1-ed47-967b-7f98-6f1569691fb6@xen.org>
 <887675ad-ef06-cf8c-8a32-5b3f726e2198@suse.com>
 <acea2ebe-47c5-1d26-887d-b29df06d07dd@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <acea2ebe-47c5-1d26-887d-b29df06d07dd@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------TMp6T5rdQQnTdKltCroqfQzP"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------TMp6T5rdQQnTdKltCroqfQzP
Content-Type: multipart/mixed; boundary="------------5CBjTXPndDZHeScJshg2sxDR";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <ce231d58-c300-36d8-791c-2c6544b5e329@suse.com>
Subject: Re: [PATCH v4 05/13] tools/xenstore: use accounting buffering for
 node accounting
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-6-jgross@suse.com>
 <c3be0db1-ed47-967b-7f98-6f1569691fb6@xen.org>
 <887675ad-ef06-cf8c-8a32-5b3f726e2198@suse.com>
 <acea2ebe-47c5-1d26-887d-b29df06d07dd@xen.org>
In-Reply-To: <acea2ebe-47c5-1d26-887d-b29df06d07dd@xen.org>

--------------5CBjTXPndDZHeScJshg2sxDR
Content-Type: multipart/mixed; boundary="------------OB0I27oQAIR4oCHfZceSMzAY"

--------------OB0I27oQAIR4oCHfZceSMzAY
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDMuMDUuMjMgMTA6MDIsIEp1bGllbiBHcmFsbCB3cm90ZToNCj4gSGkgSnVlcmdlbiwN
Cj4gDQo+IE9uIDAzLzA1LzIwMjMgMDY6MTMsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+PiBP
biAwMi4wNS4yMyAyMDo1NSwgSnVsaWVuIEdyYWxsIHdyb3RlOg0KPj4+IEhpIEp1ZXJnZW4s
DQo+Pj4NCj4+PiBPbiAwNS8wNC8yMDIzIDA4OjAzLCBKdWVyZ2VuIEdyb3NzIHdyb3RlOg0K
Pj4+PiBBZGQgdGhlIG5vZGUgYWNjb3VudGluZyB0byB0aGUgYWNjb3VudGluZyBpbmZvcm1h
dGlvbiBidWZmZXJpbmcgaW4NCj4+Pj4gb3JkZXIgdG8gYXZvaWQgaGF2aW5nIHRvIHVuZG8g
aXQgaW4gY2FzZSBvZiBmYWlsdXJlLg0KPj4+Pg0KPj4+PiBTaWduZWQtb2ZmLWJ5OiBKdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQo+Pj4+IC0tLQ0KPj4+PiDCoCB0b29scy94
ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jwqDCoCB8IDIxICsrLS0tLS0tLS0tLS0tLS0tLS0t
LQ0KPj4+PiDCoCB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfZG9tYWluLmggfMKgIDQgKyst
LQ0KPj4+PiDCoCAyIGZpbGVzIGNoYW5nZWQsIDQgaW5zZXJ0aW9ucygrKSwgMjEgZGVsZXRp
b25zKC0pDQo+Pj4+DQo+Pj4+IGRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfY29yZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYw0KPj4+PiBpbmRl
eCA4NDMzNWY1ZjNkLi45MmE0MGNjZjNmIDEwMDY0NA0KPj4+PiAtLS0gYS90b29scy94ZW5z
dG9yZS94ZW5zdG9yZWRfY29yZS5jDQo+Pj4+ICsrKyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0
b3JlZF9jb3JlLmMNCj4+Pj4gQEAgLTE0NTIsNyArMTQ1Miw2IEBAIHN0YXRpYyB2b2lkIGRl
c3Ryb3lfbm9kZV9ybShzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgDQo+Pj4+IHN0cnVjdCBu
b2RlICpub2RlKQ0KPj4+PiDCoCBzdGF0aWMgaW50IGRlc3Ryb3lfbm9kZShzdHJ1Y3QgY29u
bmVjdGlvbiAqY29ubiwgc3RydWN0IG5vZGUgKm5vZGUpDQo+Pj4+IMKgIHsNCj4+Pj4gwqDC
oMKgwqDCoCBkZXN0cm95X25vZGVfcm0oY29ubiwgbm9kZSk7DQo+Pj4+IC3CoMKgwqAgZG9t
YWluX25iZW50cnlfZGVjKGNvbm4sIGdldF9ub2RlX293bmVyKG5vZGUpKTsNCj4+Pj4gwqDC
oMKgwqDCoCAvKg0KPj4+PiDCoMKgwqDCoMKgwqAgKiBJdCBpcyBub3QgcG9zc2libGUgdG8g
ZWFzaWx5IHJldmVydCB0aGUgY2hhbmdlcyBpbiBhIHRyYW5zYWN0aW9uLg0KPj4+PiBAQCAt
MTc5NywyNyArMTc5NiwxMSBAQCBzdGF0aWMgaW50IGRvX3NldF9wZXJtcyhjb25zdCB2b2lk
ICpjdHgsIHN0cnVjdCANCj4+Pj4gY29ubmVjdGlvbiAqY29ubiwNCj4+Pj4gwqDCoMKgwqDC
oCBvbGRfcGVybXMgPSBub2RlLT5wZXJtczsNCj4+Pj4gwqDCoMKgwqDCoCBkb21haW5fbmJl
bnRyeV9kZWMoY29ubiwgZ2V0X25vZGVfb3duZXIobm9kZSkpOw0KPj4+DQo+Pj4gSUlSQywg
d2Ugb3JpZ2luYWxseSBzYWlkIHRoYXQgZG9tYWluX25iZW50cnlfZGVjKCkgY291bGQgbmV2
ZXIgZmFpbCBpbiBhIA0KPj4+IG5vbi10cmFuc2FjdGlvbiBjYXNlLiBCdXQgd2l0aCB5b3Vy
IGN1cnJlbnQgcmV3b3JrLCB0aGUgZnVuY3Rpb24gY2FuIG5vdyBmYWlsIA0KPj4+IGJlY2F1
c2Ugb2YgYW4gYWxsb2NhdGlvbiBmYWlsdXJlLg0KPj4NCj4+IEhvdyB3b3VsZCB0aGF0IGJl
IHBvc3NpYmxlIHRvIGhhcHBlbj8NCj4+DQo+PiBkb21haW5fbmJlbnRyeV9kZWMoKSBjYW4g
b25seSBiZSBjYWxsZWQgaWYgYSBub2RlIGlzIGJlaW5nIG93bmVkIGJ5IGFuIGFscmVhZHkN
Cj4+IGtub3duIGRvbWFpbi4gU28gYWxsb2NhdGlvbiBpcyBpbXBvc3NpYmxlIHRvIGhhcHBl
biwgYXMgdGhpcyB3b3VsZCBiZSBhIG1ham9yDQo+PiBlcnJvciBpbiB4ZW5zdG9yZWQuDQo+
IA0KPiAgRnJvbSBteSB1bmRlcnN0YW5kaW5nLCB0aGUgbm9kZXMgYWNjb3VudGluZyB3aWxs
IGJlIHRlbXBvcmFyeSBhbmQgdGhlbiANCj4gY29tbWl0dGVkIGF0IHRoZSBlbmQgb2YgdGhl
IHJlcXVlc3QuDQo+IA0KPiBTbyB3ZSB3b3VsZCBjYWxsIGFjY19hZGRfY2hhbmdlZF9kb20o
KSB3aGljaCBtYXkgcmVxdWlyZSBhbGxvY2F0aW9uIHRvIGhvbGQgdGhlIA0KPiB0ZW1wb3Jh
cnkgYWNjb3VudGluZy4NCg0KQWgsIHJpZ2h0LCBnb29kIGNhdGNoIQ0KDQpXaWxsIGFkZCBj
aGVja2luZyB0aGUgcmV0dXJuIHZhbHVlIGFuZCBtb3ZlIHRoZSBjYWxscyBhaGVhZCBvZiB0
aGUgdGRiIGNoYW5nZXMuDQoNCg0KSnVlcmdlbg0K
--------------OB0I27oQAIR4oCHfZceSMzAY
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------OB0I27oQAIR4oCHfZceSMzAY--

--------------5CBjTXPndDZHeScJshg2sxDR--

--------------TMp6T5rdQQnTdKltCroqfQzP
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRSHk4FAwAAAAAACgkQsN6d1ii/Ey/x
/Af+LQayKbh6eHMkv4HWMtqlDxEjXR1ipNLuHisUPL+VBlb2oqr35hC2mKpkD014gUBr7gGonEHH
yBbA+EKgTBdNM/HAnB2UzxOW/IHavKElP+U2CVAM9g9PPuOTJZoLIo5OP6tMRWjbIETvT0Kn8OLm
/rP5jqmBi4G0slEKnci4XPn4qeW2GwOagcvJgW5d05a4rfxyX1O4ik2lzFj641/MpLyc25++JJwR
PrgY7nB4nhDok1xYz0Q0GUXfHfIN/zjEoVamIsrltsIv+FQBaRgufH6Mxwx7TC8koDJOwl6PIHz1
pbPa6P3o71M8r05ofPeRW9HW5Dmn+1k12DdNoRaVuA==
=WGAn
-----END PGP SIGNATURE-----

--------------TMp6T5rdQQnTdKltCroqfQzP--


From xen-devel-bounces@lists.xenproject.org Wed May 03 09:01:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:01:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529039.822947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu8MU-00011m-1E; Wed, 03 May 2023 09:01:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529039.822947; Wed, 03 May 2023 09:01:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu8MT-00011f-UI; Wed, 03 May 2023 09:01:21 +0000
Received: by outflank-mailman (input) for mailman id 529039;
 Wed, 03 May 2023 09:01:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tguP=AY=citrix.com=prvs=48085cdab=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pu8MR-00011Z-Rt
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:01:20 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0edab7a3-e991-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 11:01:17 +0200 (CEST)
Received: from mail-dm3nam02lp2045.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.45])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 05:01:10 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB6478.namprd03.prod.outlook.com (2603:10b6:510:b3::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Wed, 3 May
 2023 09:01:07 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 09:01:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0edab7a3-e991-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683104477;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=uHpfFcdAXwIgkMT8P5xBrItPuaNsQy8c7pMU+K0M+mk=;
  b=YIU6XQ8HcgJ/bVSiK0R2x6c3e3ZMRDVvLNU6L6wjh70rEhZ2+FQBiIzW
   7bwmDCqpbGKicp8EsVvZhvQUY9JYcjPii15a4hdXksVMGeMoDWsiZ5Gik
   8qY+fEZmalxtTVHm9uIlmzNzMyP4HOSzARH6+D8nLpuGViE5MK/8wHe9K
   M=;
X-IronPort-RemoteIP: 104.47.56.45
X-IronPort-MID: 107704670
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:sOS36WwxDsCyMdlTexHZBgU9EcIETyzTlkvPLhKkAk9ZY5+ZR1iPrfY=
X-Talos-MUID: =?us-ascii?q?9a23=3AfgNDAQ4WeAJZMqELBVPbVwtKxow4xqqhKE0mkaw?=
 =?us-ascii?q?6gMqYF2sgIDqe1GqeF9o=3D?=
X-IronPort-AV: E=Sophos;i="5.99,246,1677560400"; 
   d="scan'208";a="107704670"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a04QjF8cUxkfO3hpRqGr6wKBWdSA9N+NibetoKJimr0CJPRKJPnZqafsCkjNvnpdaANPPSrc9Qa1hI501i0eyrXR0eLDRI9IDY6x/AZJasF77hXh0KW63Jr4Kndm2DRHUZw5hWRy8ZAaluXOdf1cifP2MYTTU5jEdEgnzWvlnpPRgUMi3ft7bUqrUrn1X9uuMNpKyv2Ql8y0NE21yU8NpFqqdCGnpGIRdtZfkJgpaLbZBID1OhsLXbN/2p2UYJHJTAEizWEje1PUk7aRVlpu5oGfU+mkDsSJFocrUCTBqstt1vkN+hecn9ZyIUcuNpcxdIqMKJtWb9BNTks9F/B8sQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=I9zylJ9wiT91cpmea72jc3/pwcQNSdq+sLu8ekS/W6k=;
 b=UiqI3sJueayFAY5WTMRufMa7nKI72KuykVgQVzUDZtZFagmAG0/SFLwBL5hGZYbz7GfKiQeFtPdb7jeSx56kjiLU2wrRYTx1Pli1r7TJpoJxtbhsewvMUklqUcBcySqAj6G9tDJ/Eh+e1OmexAoZNS6ME9G9myDF27VAMMPjCEIG6DS5Q1PhzWxU0dHRXCoGaMdgo/nCT+gp97fXRTACYBuPAtwME/CDPPfseaEJ3CAMBu2+KrY+RkdP5QaE69vuEg+3HOfvCZHeEW5i5V7Uc4nA8tiZsTfXJ2hQQefmrUtTnKcGhSL9Q3/fTsBVvN2SPSBS1LkQSGc83Js1Ct6/4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=I9zylJ9wiT91cpmea72jc3/pwcQNSdq+sLu8ekS/W6k=;
 b=vzZ/OWxPi6N0TdAfRPzfyC2P1tYFwtyvTXKzisAstgzd5fEKpWxq9AyQyTuIHGBFoO5FYoUHg5pFBJAl3gHWBq9/DYuhk6O608Y8v0klN88/U+Jfd6r4MFjycmeVWAT95gvQosNUB+WN33UOHliCY8IQkE5E6wSpqeLPUjPkYtI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 3 May 2023 11:01:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v3 1/4] x86/msi: passthrough all MSI-X vector ctrl writes
 to device model
Message-ID: <ZFIizbiltUCtz4Po@Air-de-Roger>
References: <cover.c12fc399ea0151818e48ac5179ad554c00c9386d.1680752649.git-series.marmarek@invisiblethingslab.com>
 <f799fdc6b6899fa65a07eae0d6401753f7d61ef2.1680752649.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f799fdc6b6899fa65a07eae0d6401753f7d61ef2.1680752649.git-series.marmarek@invisiblethingslab.com>
X-ClientProxiedBy: LO4P265CA0074.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2bd::13) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH0PR03MB6478:EE_
X-MS-Office365-Filtering-Correlation-Id: 0cc9a1a0-45d1-4b0b-3ae3-08db4bb4ee8b
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0cc9a1a0-45d1-4b0b-3ae3-08db4bb4ee8b
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 09:01:07.3519
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WFgL2LF1+GL4tWZDI5M9FWLaeFkFilc/uVeeTX3cKPKmZ7TFX3u97zZbsllIh9cO9gT4DVZYUIusUHy3EuPK/Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6478

On Thu, Apr 06, 2023 at 05:57:23AM +0200, Marek Marczykowski-Górecki wrote:
> QEMU needs to know whether clearing the maskbit of a vector really
> clears it, or whether it was already cleared before. Currently Xen
> forwards only the writes that clear that bit to the device model, not
> those that set it, so QEMU cannot tell the difference. QEMU works
> around this by checking via /dev/mem, but that isn't a proper
> approach: it is just a workaround, and a racy one at that.
> 
> Give all necessary information to QEMU by passing all ctrl writes,
> including masking a vector.
> 
> While this commit doesn't move the whole maskbit handling to QEMU (as
> discussed on xen-devel as one of the possibilities), it is a necessary
> first step anyway, including telling QEMU that it will get all the
> information required to do so. The actual implementation would need to include:
>  - a hypercall for QEMU to control just the maskbit (without (re)binding
>    the interrupt again)
>  - a method for QEMU to tell Xen that it will actually do the work
> Those are not part of this series.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
> v3:
>  - advertise changed behavior in XEN_DMOP_get_ioreq_server_info - make
>    "flags" parameter IN/OUT
>  - move len check back to msixtbl_write() - will be needed there anyway
>    in a later patch
> v2:
>  - passthrough quad writes to emulator too (Jan)
>  - (ab)use len==0 for write len=4 completion (Jan), but add descriptive
>    #define for this magic value
> 
> Should flags on output include only "out" values (current version), or
> also include those passed in by the caller unchanged?
> ---
>  xen/arch/x86/hvm/vmsi.c        | 18 ++++++++++++++----
>  xen/common/ioreq.c             |  9 +++++++--
>  xen/include/public/hvm/dm_op.h | 12 ++++++++----
>  3 files changed, 29 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
> index 3cd4923060c8..231253a2cbd4 100644
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -272,6 +272,15 @@ out:
>      return r;
>  }
>  
> +/*
> + * This function returns X86EMUL_UNHANDLEABLE even if the write is properly
> + * handled, to propagate it to the device model (so it can keep its internal
> + * state in sync).
> + * len==0 really means len==4, but as a write completion that will return
> + * X86EMUL_OKAY on successful processing. Use WRITE_LEN4_COMPLETION to make it
> + * less confusing.

Isn't it fine to just forward every (valid) write to the dm, and so
not introduce WRITE_LEN4_COMPLETION? (see my comment about
_msixtbl_write()).

> + */
> +#define WRITE_LEN4_COMPLETION 0
>  static int msixtbl_write(struct vcpu *v, unsigned long address,
>                           unsigned int len, unsigned long val)
>  {
> @@ -283,8 +292,9 @@ static int msixtbl_write(struct vcpu *v, unsigned long address,
>      unsigned long flags;
>      struct irq_desc *desc;
>  
> -    if ( (len != 4 && len != 8) || (address & (len - 1)) )
> -        return r;
> +    if ( (len != 4 && len != 8 && len != WRITE_LEN4_COMPLETION) ||
> +         (len && (address & (len - 1))) )
> +        return X86EMUL_UNHANDLEABLE;

I think you want to just return X86EMUL_OKAY here, and ignore the
access since it's not properly sized or aligned?

>  
>      rcu_read_lock(&msixtbl_rcu_lock);
>  
> @@ -345,7 +355,7 @@ static int msixtbl_write(struct vcpu *v, unsigned long address,
>  
>  unlock:
>      spin_unlock_irqrestore(&desc->lock, flags);
> -    if ( len == 4 )
> +    if ( len == WRITE_LEN4_COMPLETION )
>          r = X86EMUL_OKAY;
>  
>  out:
> @@ -635,7 +645,7 @@ void msix_write_completion(struct vcpu *v)
>          return;
>  
>      v->arch.hvm.hvm_io.msix_unmask_address = 0;
> -    if ( msixtbl_write(v, ctrl_address, 4, 0) != X86EMUL_OKAY )
> +    if ( msixtbl_write(v, ctrl_address, WRITE_LEN4_COMPLETION, 0) != X86EMUL_OKAY )
>          gdprintk(XENLOG_WARNING, "MSI-X write completion failure\n");

Would it be possible to always return X86EMUL_UNHANDLEABLE from
_msixtbl_write() and keep the return values of msixtbl_write()
as-is?

>  }
>  
> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> index ecb8f545e1c4..bd6f074c1e85 100644
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -743,7 +743,8 @@ static int ioreq_server_destroy(struct domain *d, ioservid_t id)
>  static int ioreq_server_get_info(struct domain *d, ioservid_t id,
>                                   unsigned long *ioreq_gfn,
>                                   unsigned long *bufioreq_gfn,
> -                                 evtchn_port_t *bufioreq_port)
> +                                 evtchn_port_t *bufioreq_port,
> +                                 uint16_t *flags)
>  {
>      struct ioreq_server *s;
>      int rc;
> @@ -779,6 +780,9 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id,
>              *bufioreq_port = s->bufioreq_evtchn;
>      }
>  
> +    /* Advertise supported features/behaviors. */
> +    *flags = XEN_DMOP_all_msix_writes;
> +
>      rc = 0;
>  
>   out:
> @@ -1374,7 +1378,8 @@ int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op)
>                                     NULL : (unsigned long *)&data->ioreq_gfn,
>                                     (data->flags & XEN_DMOP_no_gfns) ?
>                                     NULL : (unsigned long *)&data->bufioreq_gfn,
> -                                   &data->bufioreq_port);
> +                                   &data->bufioreq_port, &data->flags);
> +
>          break;
>      }
>  
> diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
> index acdf91693d0b..490b151c5dd7 100644
> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -70,7 +70,9 @@ typedef struct xen_dm_op_create_ioreq_server xen_dm_op_create_ioreq_server_t;
>   * not contain XEN_DMOP_no_gfns then these pages will be made available and
>   * the frame numbers passed back in gfns <ioreq_gfn> and <bufioreq_gfn>
>   * respectively. (If the IOREQ Server is not handling buffered emulation
> - * only <ioreq_gfn> will be valid).
> + * only <ioreq_gfn> will be valid). When Xen returns XEN_DMOP_all_msix_writes
> + * flag set, it will notify the IOREQ server about all writes to MSI-X table
> + * (if it's handled by this IOREQ server), not only those clearing a mask bit.
>   *
>   * NOTE: To access the synchronous ioreq structures and buffered ioreq
>   *       ring, it is preferable to use the XENMEM_acquire_resource memory
> @@ -81,11 +83,13 @@ typedef struct xen_dm_op_create_ioreq_server xen_dm_op_create_ioreq_server_t;
>  struct xen_dm_op_get_ioreq_server_info {
>      /* IN - server id */
>      ioservid_t id;
> -    /* IN - flags */
> +    /* IN/OUT - flags */
>      uint16_t flags;
>  
> -#define _XEN_DMOP_no_gfns 0
> -#define XEN_DMOP_no_gfns (1u << _XEN_DMOP_no_gfns)
> +#define _XEN_DMOP_no_gfns         0  /* IN */
> +#define _XEN_DMOP_all_msix_writes 1  /* OUT */
> +#define XEN_DMOP_no_gfns         (1u << _XEN_DMOP_no_gfns)
> +#define XEN_DMOP_all_msix_writes (1u << _XEN_DMOP_all_msix_writes)

FWIW, we usually interleave _XEN_DMOP_no_gfns and XEN_DMOP_no_gfns,
ie:

#define _XEN_DMOP_no_gfns         0  /* IN */
#define XEN_DMOP_no_gfns          (1u << _XEN_DMOP_no_gfns)
#define _XEN_DMOP_all_msix_writes 1  /* OUT */
#define XEN_DMOP_all_msix_writes  (1u << _XEN_DMOP_all_msix_writes)

I wonder whether XEN_DMOP_all_msix_writes should be a feature
requested by the dm, so as not to change the existing behaviour of how
MSI-X writes are handled (which might work for QEMU, but could cause
issues with other out-of-tree users of ioreqs)?

That would turn XEN_DMOP_all_msix_writes into an IN flag also.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 03 09:24:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:24:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529044.822957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu8iY-0003YG-Vb; Wed, 03 May 2023 09:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529044.822957; Wed, 03 May 2023 09:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu8iY-0003Y9-SQ; Wed, 03 May 2023 09:24:10 +0000
Received: by outflank-mailman (input) for mailman id 529044;
 Wed, 03 May 2023 09:24:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PgPZ=AY=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pu8iY-0003Y3-1Z
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:24:10 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2082.outbound.protection.outlook.com [40.107.7.82])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 41efe7cc-e994-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 11:24:09 +0200 (CEST)
Received: from AM6PR0502CA0072.eurprd05.prod.outlook.com
 (2603:10a6:20b:56::49) by AS2PR08MB9975.eurprd08.prod.outlook.com
 (2603:10a6:20b:62c::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Wed, 3 May
 2023 09:23:34 +0000
Received: from AM7EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:56:cafe::f5) by AM6PR0502CA0072.outlook.office365.com
 (2603:10a6:20b:56::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.33 via Frontend
 Transport; Wed, 3 May 2023 09:23:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT006.mail.protection.outlook.com (100.127.141.21) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.22 via Frontend Transport; Wed, 3 May 2023 09:23:33 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Wed, 03 May 2023 09:23:33 +0000
Received: from 0921191df72a.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 02A1AA8F-D41C-45C9-A4BF-BDEBC6B8812F.1; 
 Wed, 03 May 2023 09:23:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0921191df72a.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 03 May 2023 09:23:22 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM8PR08MB6356.eurprd08.prod.outlook.com (2603:10a6:20b:36b::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 09:23:19 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be%6]) with mapi id 15.20.6363.021; Wed, 3 May 2023
 09:23:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41efe7cc-e994-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G/fSbBTFe3/uzUMdyRg6Fli0XJbqBBbHqOn+Zoz/Qus=;
 b=gPLp8Jd07J+b27HVBb01G5c85sou7BAXnjVa2AtlqlhwMPN6oj/ChUBaD4UxnjSsBAcJZPSW9CyW2EQU9QyQT1PuC4Desocm0r/bEsOJUMk4hbWTNxyGfKFzYdAou1s2ILT31aUnb0G+/u1bxTK+dpzM5RwzLFDvuoQP4pyczg0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 719093a87f3c6ce5
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ItxQTXveafxQT6qGqiK0sW41uxS+L9SdUTpm65Zrqzd0slDbkkB9S3wdSz+JlNHW2mPxu546Vwm2CckQ/XZAu5T00f/0pyRXxkl8aU+c2YHzYflIdROtIGf8rfw/3Ez34+mf//iZwYjXijiFIZ8G05uU3aDwRSFqwwiy4x+O5fVBPZoRu1zkvk1vS24CKmSh15nks9czTj7kBatcSVpgk3xUXNBwThCZvnE9VAvvLoD+ge4hzHNrTSF9UwOHc0Bevj6kDzbwFDOJP2kqVd2IguxFleoe+Gx46GtvIyyjXGUIkyr2Tkw8TVu3GgXeecvKmUmbNdnMkPY6l1phX0T/Og==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=G/fSbBTFe3/uzUMdyRg6Fli0XJbqBBbHqOn+Zoz/Qus=;
 b=XguU+roEG5S+n/KMfm+ThVDFzaoACjWpusoETMjuSwUQ+28OezuVXCIIgEVwW63Lv0FHEKdOZyIuS7L6b+usFVXNFaMNfRrpEhL8G5dvNk1/5V5fgJ2lD4VKNVeHwC5REdNBMvwuUkdEiwFyD2SoKSO57jx2f5d7LQmoSmp7XrOkyO/4cI5+1/4TNP1eOalSBuIdG7kkxcWOpRmdp8hMnmcxgxDs2dyfqp9B6OsAOEtgt1VHU0LeE7bX5JQjzJmY/IvBTWTrkktw3ridCqU9xF/vDf+eQTHkaocCdLshsohzLv2YyCSzIMM0e4QVzSna6T71JWKIH0w46HhqiUfDbg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G/fSbBTFe3/uzUMdyRg6Fli0XJbqBBbHqOn+Zoz/Qus=;
 b=gPLp8Jd07J+b27HVBb01G5c85sou7BAXnjVa2AtlqlhwMPN6oj/ChUBaD4UxnjSsBAcJZPSW9CyW2EQU9QyQT1PuC4Desocm0r/bEsOJUMk4hbWTNxyGfKFzYdAou1s2ILT31aUnb0G+/u1bxTK+dpzM5RwzLFDvuoQP4pyczg0=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Juergen Gross <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>,
	=?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>, Christian Lindig
	<christian.lindig@cloud.com>
Subject: Re: [PATCH v6 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Topic: [PATCH v6 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Index: AQHZdnKCeDO91wnv7UqBsQsgLvHj6a9HNTmAgAEfoAA=
Date: Wed, 3 May 2023 09:23:19 +0000
Message-ID: <34A79CE8-FEE8-426B-810C-1E928E207724@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-10-luca.fancellu@arm.com>
 <4e6758c0-dd81-4963-8989-d941eba2b257@perard>
In-Reply-To: <4e6758c0-dd81-4963-8989-d941eba2b257@perard>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM8PR08MB6356:EE_|AM7EUR03FT006:EE_|AS2PR08MB9975:EE_
X-MS-Office365-Filtering-Correlation-Id: 5f7d314a-72b7-4766-2907-08db4bb81105
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 pwdSKhDAs+MaGRnoB1uwi+D+3Nk7oM8LNainJYqONG1yB2KmApys5N6Of5jXhEJLtDG6zGv2mvGwxh5giyypZ8Yg+NYhFh2AEiEcdcI6tb1C/CJUSHSG2XpnkSUbHcmn/2LlybF3jXDlv31yRr0jujibqfC9W1IeTZSnNyWFYXNiHsIyYUQ12royGN6YkIW707RaS53whxJeR1juNgKfwsCow9VhDoRshxrAKu0bW9g1AwXjnD966jG7Uauw+wuMdeUfuhkPgxNJ4ml/YSPt3qTKfFAro7nruxek9NVOXASvRWgZTL9XyCnLX/X6hrJjzZ7eZzhEfsM26DTV0U9ifs1VkRSIoV4NcpFDqx99cabvB1zdq7zB5Bo3/m76lG+Es2HriGKVgQsq63LkPrLAEanGPgVWucicOBJTJfLBaSlxHXD5rF0eOrAWzAKWApbWWJOUim1xJ3p6XpZga/sCOFg6bRR8kVQW5YnAYiWyItkXAGMjwIo/3W+vQsrYJ97fUtJ1ZNphdro5z1ALHQMk8ZCAcYzHF710lE5GrhD4Skr3qSd9t+I6w3gQomPPiiAq0Nmnrl40wNuOm261cPsWXiNl5fx97f+2j5UtGPaJq1T5EpUaPB89o3OKnBrmzUVsREDNrwPutanH51pZVNolIQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(376002)(366004)(396003)(39860400002)(451199021)(76116006)(91956017)(966005)(71200400001)(6506007)(6486002)(478600001)(2616005)(6512007)(53546011)(26005)(54906003)(186003)(6916009)(38100700002)(7416002)(2906002)(4744005)(36756003)(33656002)(5660300002)(122000001)(66556008)(66446008)(64756008)(66476007)(66946007)(4326008)(41300700001)(8936002)(8676002)(38070700005)(316002)(86362001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <8A55BEE214CFBD4C841E48339770822B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6356
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d137c074-f75e-49c3-d7a7-08db4bb808ce
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 09:23:33.3892
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5f7d314a-72b7-4766-2907-08db4bb81105
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9975

> On 2 May 2023, at 17:13, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> On Mon, Apr 24, 2023 at 07:02:45AM +0100, Luca Fancellu wrote:
>> diff --git a/tools/include/xen-tools/arm-arch-capabilities.h b/tools/include/xen-tools/arm-arch-capabilities.h
>> new file mode 100644
>> index 000000000000..ac44c8b14344
>> --- /dev/null
>> +++ b/tools/include/xen-tools/arm-arch-capabilities.h
>> @@ -0,0 +1,28 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
> 
> Do you mean GPL-2.0-only ?
> 
> GPL-2.0 is deprecated by the SPDX project.
> 
> https://spdx.org/licenses/GPL-2.0.html
> 
> 
> Besides that, patch looks fine:
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks, I’ll fix in the next push and I’ll add your R-by

> 
> Thanks,
> 
> -- 
> Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed May 03 09:33:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:33:33 +0000
Date: Wed, 3 May 2023 11:33:09 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 1/2] x86/head: check base address alignment
Message-ID: <ZFIqVcPEwQ6ZkhPd@Air-de-Roger>
References: <20230502145920.56588-1-roger.pau@citrix.com>
 <20230502145920.56588-2-roger.pau@citrix.com>
 <5045583e-8776-0834-ca93-44f85888d877@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5045583e-8776-0834-ca93-44f85888d877@suse.com>
MIME-Version: 1.0

On Wed, May 03, 2023 at 10:08:20AM +0200, Jan Beulich wrote:
> On 02.05.2023 16:59, Roger Pau Monne wrote:
> > Ensure that the base address is 2M aligned, or else the page table
> > entries created would be corrupt as reserved bits on the PDE end up
> > set.
> > 
> > We have encountered a broken firmware where grub2 would end up loading
> > Xen at a non 2M aligned region when using the multiboot2 protocol, and
> > that caused a very difficult to debug triple fault.
> > 
> > If the alignment is not as required by the page tables print an error
> > message and stop the boot.  Also add a build time check that the
> > calculation of symbol offsets don't break alignment of passed
> > addresses.
> > 
> > The check could be performed earlier, but so far the alignment is
> > required by the page tables, and hence feels more natural that the
> > check lives near to the piece of code that requires it.
> > 
> > Note that when booted as an EFI application from the PE entry point
> > the alignment check is already performed by
> > efi_arch_load_addr_check(), and hence there's no need to add another
> > check at the point where page tables get built in
> > efi_arch_memory_setup().
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Would you mind if, while committing, ...
> 
> > @@ -146,6 +148,9 @@ bad_cpu:
> >  not_multiboot:
> >          add     $sym_offs(.Lbad_ldr_msg),%esi   # Error message
> >          jmp     .Lget_vtb
> > +not_aligned:
> 
> ... a .L prefix was added to this label, bringing it out of sync with the
> earlier one, but in line with e.g. ...
> 
> > +        add     $sym_offs(.Lbag_alg_msg),%esi   # Error message
> > +        jmp     .Lget_vtb
> >  .Lmb2_no_st:
> 
> ... this one? I don't think the label is particularly useful to have in
> the symbol table (nor are not_multiboot and likely a few others).

Hm, right, yes, I don't think having those in the symbol table is
helpful; please adjust.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 03 09:39:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:39:29 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180509-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180509: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d7b3ffe2d7e476f11d73b74093006aa936f59e8b
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 May 2023 09:39:21 +0000

flight 180509 linux-linus real [real]
flight 180515 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180509/
http://logs.test-lab.xenproject.org/osstest/logs/180515/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180515-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d7b3ffe2d7e476f11d73b74093006aa936f59e8b
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   16 days
Failing since        180281  2023-04-17 06:24:36 Z   16 days   28 attempts
Testing same since   180509  2023-05-02 18:14:04 Z    0 days    1 attempts

------------------------------------------------------------
2179 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 264469 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 03 09:43:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:43:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529054.822986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu91Y-00077b-5C; Wed, 03 May 2023 09:43:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529054.822986; Wed, 03 May 2023 09:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu91Y-00077U-2P; Wed, 03 May 2023 09:43:48 +0000
Received: by outflank-mailman (input) for mailman id 529054;
 Wed, 03 May 2023 09:43:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu91W-00077O-Kx
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:43:46 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on060c.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::60c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ff163415-e996-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 11:43:45 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by VI1PR04MB7086.eurprd04.prod.outlook.com (2603:10a6:800:121::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 09:43:43 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 09:43:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff163415-e996-11ed-b225-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
Date: Wed, 3 May 2023 11:43:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/6] x86: reduce cache flushing overhead
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0182.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::17) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|VI1PR04MB7086:EE_
X-MS-Office365-Filtering-Correlation-Id: 60341197-c4d2-4551-8528-08db4bbae23e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 60341197-c4d2-4551-8528-08db4bbae23e
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 09:43:43.5911
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7086

..., first and foremost by using cache write-back operations instead
of flushing (evicting) ones when available and sufficient for the
purpose.

While preparing the last patch I started to wonder whether for PV we
flush (write back) too little for MMUEXT_FLUSH_CACHE: just like for
HVM, pCPUs a vCPU has previously run on could still hold data in their
caches. (Even with this series in place we clearly still flush / write
back too much for MMUEXT_FLUSH_CACHE_GLOBAL.) Nor can we call this the
guest's responsibility, as it may have no means of running one of its
vCPUs on the intended pCPU.

Compared to v1, v2 merely changes some names. Discussion of the other
feedback sadly appears to have stalled.

1: x86: support cache-writeback in flush_area_local() et al
2: x86/HVM: restrict guest-induced WBINVD to cache writeback
3: x86/PV: restrict guest-induced WBINVD (or alike) to cache writeback
4: VT-d: restrict iommu_flush_all() to cache writeback
5: x86: FLUSH_CACHE -> FLUSH_CACHE_EVICT
6: x86/HVM: limit cache writeback overhead

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 09:44:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:44:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529056.822997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu92S-0007ce-EK; Wed, 03 May 2023 09:44:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529056.822997; Wed, 03 May 2023 09:44:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu92S-0007cX-BG; Wed, 03 May 2023 09:44:44 +0000
Received: by outflank-mailman (input) for mailman id 529056;
 Wed, 03 May 2023 09:44:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu92Q-0007cL-VY
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:44:42 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20627.outbound.protection.outlook.com
 [2a01:111:f400:fe16::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 20d896b0-e997-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 11:44:42 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by VI1PR04MB7086.eurprd04.prod.outlook.com (2603:10a6:800:121::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 09:44:40 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 09:44:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20d896b0-e997-11ed-b225-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e27ff909-93d1-b51b-ac88-20b17f5cf642@suse.com>
Date: Wed, 3 May 2023 11:44:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 1/6] x86: support cache-writeback in flush_area_local() et
 al
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
In-Reply-To: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0182.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::17) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|VI1PR04MB7086:EE_
X-MS-Office365-Filtering-Correlation-Id: 8524c7b2-6117-4f39-caea-08db4bbb0445
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8524c7b2-6117-4f39-caea-08db4bbb0445
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 09:44:40.6861
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7086

The majority of the present callers really aren't after invalidating
cache contents, but only after writeback. Make this available by simply
extending the FLUSH_CACHE handling accordingly. No feature checks are
required here: cache_writeback() falls back to cache_flush() as
necessary, while WBNOINVD degenerates to WBINVD on older hardware.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: FLUSH_WRITEBACK -> FLUSH_CACHE_WRITEBACK.

--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -232,7 +232,7 @@ unsigned int flush_area_local(const void
     if ( flags & FLUSH_HVM_ASID_CORE )
         hvm_flush_guest_tlbs();
 
-    if ( flags & FLUSH_CACHE )
+    if ( flags & (FLUSH_CACHE | FLUSH_CACHE_WRITEBACK) )
     {
         const struct cpuinfo_x86 *c = &current_cpu_data;
         unsigned long sz = 0;
@@ -245,13 +245,16 @@ unsigned int flush_area_local(const void
              c->x86_clflush_size && c->x86_cache_size && sz &&
              ((sz >> 10) < c->x86_cache_size) )
         {
-            cache_flush(va, sz);
-            flags &= ~FLUSH_CACHE;
+            if ( flags & FLUSH_CACHE )
+                cache_flush(va, sz);
+            else
+                cache_writeback(va, sz);
+            flags &= ~(FLUSH_CACHE | FLUSH_CACHE_WRITEBACK);
         }
-        else
-        {
+        else if ( flags & FLUSH_CACHE )
             wbinvd();
-        }
+        else
+            wbnoinvd();
     }
 
     if ( flags & FLUSH_ROOT_PGTBL )
--- a/xen/arch/x86/include/asm/flushtlb.h
+++ b/xen/arch/x86/include/asm/flushtlb.h
@@ -135,6 +135,8 @@ void switch_cr3_cr4(unsigned long cr3, u
 #else
 # define FLUSH_NO_ASSIST 0
 #endif
+ /* Write back data cache contents */
+#define FLUSH_CACHE_WRITEBACK  0x10000
 
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
@@ -194,7 +196,11 @@ static inline int clean_and_invalidate_d
 }
 static inline int clean_dcache_va_range(const void *p, unsigned long size)
 {
-    return clean_and_invalidate_dcache_va_range(p, size);
+    unsigned int order = get_order_from_bytes(size);
+
+    /* sub-page granularity support needs to be added if necessary */
+    flush_area_local(p, FLUSH_CACHE_WRITEBACK | FLUSH_ORDER(order));
+    return 0;
 }
 
 unsigned int guest_flush_tlb_flags(const struct domain *d);



From xen-devel-bounces@lists.xenproject.org Wed May 03 09:45:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:45:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529058.823007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu939-0008AE-PE; Wed, 03 May 2023 09:45:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529058.823007; Wed, 03 May 2023 09:45:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu939-0008A7-L3; Wed, 03 May 2023 09:45:27 +0000
Received: by outflank-mailman (input) for mailman id 529058;
 Wed, 03 May 2023 09:45:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu938-00089y-Kw
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:45:26 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20608.outbound.protection.outlook.com
 [2a01:111:f400:fe16::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3adfdd92-e997-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 11:45:25 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by VI1PR04MB7086.eurprd04.prod.outlook.com (2603:10a6:800:121::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 09:45:24 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 09:45:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3adfdd92-e997-11ed-b225-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d55070c0-04c5-70a4-f9f3-3227d42578e6@suse.com>
Date: Wed, 3 May 2023 11:45:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 2/6] x86/HVM: restrict guest-induced WBINVD to cache
 writeback
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
In-Reply-To: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0090.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::6) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|VI1PR04MB7086:EE_
X-MS-Office365-Filtering-Correlation-Id: d0a95501-d4d9-423a-b453-08db4bbb1e75
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d0a95501-d4d9-423a-b453-08db4bbb1e75
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 09:45:24.6381
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XjPGC+mrbEbWB9KG0+jRskqWVRrRhdkXsYYzGAwLXeu5NWzhJxRzGugBXDJIcKxHzYtyvK6QIpz/opdlekX+bA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7086

We only allow guest use of WBINVD for writeback purposes anyway, so let's
also carry these flushes out that way (i.e. without the needless
invalidation) on capable hardware.

With it now known that WBNOINVD uses the same VM exit code as WBINVD on
both SVM and VT-x, we can also expose the feature that way, without
further distinguishing the specific cases of those VM exits. Note that
on SVM this builds upon INSTR_WBINVD also covering WBNOINVD, as the
decoder won't set prefix-related bits for this encoding in the resulting
canonicalized opcode.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: FLUSH_WRITEBACK -> FLUSH_CACHE_WRITEBACK.

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2364,7 +2364,7 @@ static void svm_vmexit_mce_intercept(
 static void cf_check svm_wbinvd_intercept(void)
 {
     if ( cache_flush_permitted(current->domain) )
-        flush_all(FLUSH_CACHE);
+        flush_all(FLUSH_CACHE_WRITEBACK);
 }
 
 static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs *regs,
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1881,12 +1881,12 @@ void cf_check vmx_do_resume(void)
     {
         /*
          * For pass-through domain, guest PCI-E device driver may leverage the
-         * "Non-Snoop" I/O, and explicitly WBINVD or CLFLUSH to a RAM space.
-         * Since migration may occur before WBINVD or CLFLUSH, we need to
-         * maintain data consistency either by:
-         *  1: flushing cache (wbinvd) when the guest is scheduled out if
+         * "Non-Snoop" I/O, and explicitly WB{NO,}INVD or CL{WB,FLUSH} RAM space.
+         * Since migration may occur before WB{NO,}INVD or CL{WB,FLUSH}, we need
+         * to maintain data consistency either by:
+         *  1: flushing cache (wbnoinvd) when the guest is scheduled out if
          *     there is no wbinvd exit, or
-         *  2: execute wbinvd on all dirty pCPUs when guest wbinvd exits.
+         *  2: execute wbnoinvd on all dirty pCPUs when guest wbinvd exits.
          * If VT-d engine can force snooping, we don't need to do these.
          */
         if ( has_arch_pdevs(v->domain) && !iommu_snoop
@@ -1894,7 +1894,7 @@ void cf_check vmx_do_resume(void)
         {
             int cpu = v->arch.hvm.vmx.active_cpu;
             if ( cpu != -1 )
-                flush_mask(cpumask_of(cpu), FLUSH_CACHE);
+                flush_mask(cpumask_of(cpu), FLUSH_CACHE_WRITEBACK);
         }
 
         vmx_clear_vmcs(v);
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3714,9 +3714,9 @@ static void cf_check vmx_wbinvd_intercep
         return;
 
     if ( cpu_has_wbinvd_exiting )
-        flush_all(FLUSH_CACHE);
+        flush_all(FLUSH_CACHE_WRITEBACK);
     else
-        wbinvd();
+        wbnoinvd();
 }
 
 static void ept_handle_violation(ept_qual_t q, paddr_t gpa)
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -238,7 +238,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
 XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers */
-XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*   WBNOINVD instruction */
+XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*S  WBNOINVD instruction */
 XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
 XEN_CPUFEATURE(IBRS,          8*32+14) /*S  MSR_SPEC_CTRL.IBRS */
 XEN_CPUFEATURE(AMD_STIBP,     8*32+15) /*S  MSR_SPEC_CTRL.STIBP */



From xen-devel-bounces@lists.xenproject.org Wed May 03 09:46:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:46:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529061.823017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu943-0000Nu-63; Wed, 03 May 2023 09:46:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529061.823017; Wed, 03 May 2023 09:46:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu943-0000Nn-3F; Wed, 03 May 2023 09:46:23 +0000
Received: by outflank-mailman (input) for mailman id 529061;
 Wed, 03 May 2023 09:46:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu941-0000NW-Hl
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:46:21 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2052.outbound.protection.outlook.com [40.107.7.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5a6b7481-e997-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 11:46:18 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DBBPR04MB7980.eurprd04.prod.outlook.com (2603:10a6:10:1f0::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 09:45:49 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 09:45:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a6b7481-e997-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J4Ti2FyuNUFQo5B8p5+8s/4oJXDMxJr4KKeZe8p8wtr3e1k4W5eDwmb7tNGjKTiFZ8R/2fkCzoQvb6S4L99qPz2G34yxw8XbbZVeemMv81G3lhYCOHzpHw4I65ar99LXv10pzDykBzWDVazamiMiMmx9HfjndxVUJCgLzcbh/iKhqWcjIFPjzSsZHoI4EDdp4hGtkCI+hVrpMjP7b9moBgopZCanlq8r7+I7yKd5e6rTaE5py+ai2W+PWhln8wHBC1UuDxiGc6yU8NhPug96hjWXoHXzDDhvwxs3R8ICcyJBxNJvLh8YquXWKaSBk9wUBJV0V45n20+K72K9bOE8TA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6s2ziFxyLLrxphcX9bcoXKx8fmbmlbg2IZSNSJNSJw4=;
 b=FMTOR7WhKDNt4LkIe9tfRUyAZ9nEOLx/by1cEAo2LlyZ5ifueXwpA/HV+aR6OoIW5CEkERNlu5TVRQn83k+D1B7mSuZurzkjyBo3tRgz1cuGopmXbf9lJ2Od1QcEIIdUxtJV430GitjlOt18S11dJA7r3u47WGDksihXrkR5QBWahZ0uAlSJnIa6Xtis2qJDSeOByAUFV8lTHfpNAhOEvaD4X5r+3GqSKbNf6Fn+6FJc1VvkRJ9KVQyO1vkIopyMzcWKR8owXyByqrCRNhi4rGAXqta5RjNKyk1U5z599nJ0K/81KV0vWi0VKomsTIC40pmDsKXZ21YC01BXucjgbQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6s2ziFxyLLrxphcX9bcoXKx8fmbmlbg2IZSNSJNSJw4=;
 b=ktwluAyIBMs75Hpu+3Nb54GPGgIiTdVoBjUA0PphmB0CUjFKOJ8yTC02HwGmI3PjNOqanvVR2B9VgCMxN2ckOEbsiNRTmPAkaihvZHoZMNe7w1VphDaNqHRLL4s8vGVMxMIC3OagahJ7OsCA15OpgCdr8ULVu/T9cMFuKLZX3RgR0yGdfc3QKa01jjk7kDS0vgr72tEk6VAmkSC/EMof96tEsQkMrOhuOhMSb9QEqEUNeS3zYVA+6qNHQ7culh1Ob5q0pNPAXqCJxKnj1+GlYKYmJTLzXnehamKrXeAD7aLU6HdAKskqtWor14jog8a7YX0T+dy3sEU2gHpwcCLFKA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9dc789fe-6c03-165d-c361-6aaa4ba09763@suse.com>
Date: Wed, 3 May 2023 11:45:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 3/6] x86/PV: restrict guest-induced WBINVD (or alike) to
 cache writeback
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
In-Reply-To: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0040.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::7) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DBBPR04MB7980:EE_
X-MS-Office365-Filtering-Correlation-Id: d6d72f2c-df64-4a34-50b1-08db4bbb2d63
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d6d72f2c-df64-4a34-50b1-08db4bbb2d63
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 09:45:49.6761
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: HZSwL4/OqmrpkAbdfihTLGdGwYzfL8zA+AX5fDWRIJJwdCV0osEFvJvWF/yJv3ziRbzjENpPGGkk5olv6NBIYw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7980

We only allow guest use of WBINVD for writeback purposes anyway, so let's
also carry these flushes out that way (i.e. without the needless
invalidation) on capable hardware.

We can then also expose the WBNOINVD feature, as there's no difference
from WBINVD anymore. Note that the respective emulation logic has already
been in place since ad3abc47dd23 ("x86emul: support WBNOINVD").

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: FLUSH_WRITEBACK -> FLUSH_CACHE_WRITEBACK.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3772,7 +3772,7 @@ long do_mmuext_op(
             else if ( unlikely(!cache_flush_permitted(currd)) )
                 rc = -EACCES;
             else
-                wbinvd();
+                wbnoinvd();
             break;
 
         case MMUEXT_FLUSH_CACHE_GLOBAL:
@@ -3788,7 +3788,7 @@ long do_mmuext_op(
                     if ( !cpumask_intersects(mask,
                                              per_cpu(cpu_sibling_mask, cpu)) )
                         __cpumask_set_cpu(cpu, mask);
-                flush_mask(mask, FLUSH_CACHE);
+                flush_mask(mask, FLUSH_CACHE_WRITEBACK);
             }
             else
                 rc = -EINVAL;
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -1196,10 +1196,8 @@ static int cf_check cache_op(
          * newer linux uses this in some start-of-day timing loops.
          */
         ;
-    else if ( op == x86emul_wbnoinvd /* && cpu_has_wbnoinvd */ )
-        wbnoinvd();
     else
-        wbinvd();
+        wbnoinvd();
 
     return X86EMUL_OKAY;
 }
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -238,7 +238,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
 XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers */
-XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*S  WBNOINVD instruction */
+XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*A  WBNOINVD instruction */
 XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
 XEN_CPUFEATURE(IBRS,          8*32+14) /*S  MSR_SPEC_CTRL.IBRS */
 XEN_CPUFEATURE(AMD_STIBP,     8*32+15) /*S  MSR_SPEC_CTRL.STIBP */



From xen-devel-bounces@lists.xenproject.org Wed May 03 09:46:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:46:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529062.823027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu944-0000dN-HC; Wed, 03 May 2023 09:46:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529062.823027; Wed, 03 May 2023 09:46:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu944-0000dE-CN; Wed, 03 May 2023 09:46:24 +0000
Received: by outflank-mailman (input) for mailman id 529062;
 Wed, 03 May 2023 09:46:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu942-0000NW-EQ
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:46:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2052.outbound.protection.outlook.com [40.107.7.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5bbe9fa1-e997-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 11:46:20 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DBBPR04MB7980.eurprd04.prod.outlook.com (2603:10a6:10:1f0::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 09:46:13 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 09:46:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5bbe9fa1-e997-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bf99949c-0e09-13a5-3ad9-a6c26377bdbf@suse.com>
Date: Wed, 3 May 2023 11:46:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 4/6] VT-d: restrict iommu_flush_all() to cache writeback
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
In-Reply-To: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0001.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::11) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DBBPR04MB7980:EE_
X-MS-Office365-Filtering-Correlation-Id: 7a1238cc-c88c-40f4-e604-08db4bbb3b83
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7a1238cc-c88c-40f4-e604-08db4bbb3b83
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 09:46:13.4542
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XzSW4C2/tBeUvdnlgoTGW4+dhDrRraPPIiedr5w9JzuBXfbHq3IqBvBeFO2kv90l2W9ZNez75QPniwfFM8W/Jw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7980

We don't need to invalidate caches here; all we're after is that earlier
writes have made it to main memory (and, as I understand it, even that
only just in case).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
This, as I understand it, being an analogue to uses of iommu_sync_cache()
(just not range-restricted), I wonder whether it shouldn't be conditional
upon iommu_non_coherent. Then again I'm vaguely under the impression that
we had been here before, possibly even as far as questioning the need
for this call altogether.
---
v2: FLUSH_WRITEBACK -> FLUSH_CACHE_WRITEBACK.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -693,7 +693,7 @@ static int __must_check iommu_flush_all(
     bool_t flush_dev_iotlb;
     int rc = 0;
 
-    flush_local(FLUSH_CACHE);
+    flush_local(FLUSH_CACHE_WRITEBACK);
 
     for_each_drhd_unit ( drhd )
     {



From xen-devel-bounces@lists.xenproject.org Wed May 03 09:46:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529065.823037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu94R-0001Kn-PF; Wed, 03 May 2023 09:46:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529065.823037; Wed, 03 May 2023 09:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu94R-0001Ke-MX; Wed, 03 May 2023 09:46:47 +0000
Received: by outflank-mailman (input) for mailman id 529065;
 Wed, 03 May 2023 09:46:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu94P-0000NW-M8
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:46:45 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2045.outbound.protection.outlook.com [40.107.7.45])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 698984d9-e997-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 11:46:44 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DBBPR04MB7980.eurprd04.prod.outlook.com (2603:10a6:10:1f0::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 09:46:42 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 09:46:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 698984d9-e997-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VaR0A5tp8fMh/lROh/quBvvsVXHbFyPUdvPfJjuVjjyQRZZuJBQOo0LApaCs6cVHp87yH2bgp/Sa5Z9DyZIdrKe3+dmqxP47joldFirb/eG+Wy78vsTd7Nu9Xyq2qGYCz/SBsf+07yLLQIX/wjvc2AgUbUCuH8++U8X8zzzqWbKzX0i5A+jXE2XM3MZ3et/QBI7jSCguqOPA7Nxa15NfYy8LC+2DnfISTnsVnsTrHyUPucKgX/q2aFml3F9hDJieEwRnOTDbHQj2A+4ZcSl4tk3S1caj+1J9jleBLTmTaX6yuO6LFtHsgrLD9E1ZMNsoX+JrwW1MNyz45YDDGMsWGA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sedWtunTwKdoeXFOwuCOGYeVfeFrxIMCs6hPZD4OxMU=;
 b=N1pcB5tDRlN0LlN57NvJ5aF40rJTwHSKxP3kG44dClPY1SNKXzF3A3OqnEY3aotZffpCw4qVAF8TRHHc3TsTQNPkw9LrIgur5VbWFkfMzYcQNEjynCNjRAOX1uc0eEjWOqSAq8RPwY0x4mcLKL+BcJiWLvEIt8JxurkyfbtfiIV1RT3O9Mr1Ehyn0vhlc5WC1sosOuokalGkAjkRQsJY5CYS6C5gxcLdtlrvCawBRfF9LdKElPxFzuAmYPNlHJrbZVih8gg+MrFSZB0XTZ9mWgnAaR4OV1VjJZpt0FuLMmoCfhmx6LS5f2nj5Z1o/TvnyUsFe8bCTIAxjbhOrBsq1Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sedWtunTwKdoeXFOwuCOGYeVfeFrxIMCs6hPZD4OxMU=;
 b=Za8+QXovpFpczNGsYnFlYDIa+np3IluLtYet1NBmSLVW3ZHAk3/gDmRmmW2xManq7N58K1y323lr/9vKQ+OBOrifqwwNpyyhmfek7+4l7kSMIF9xFKnTxIVA54CAWunJjd/vUTpiJktvWLMgvvx/H++yRKfOiA8tYq2kgKsXgeRoBDpMU3wjMZRW/OAmITlA5EbXyUt1102hZAVHDPahDIVrEK0gnBF1UWhFREbYnoP49cqx14ppXHXdOQDVMp6VbHflegDgn2+Yux2aoHpU9jZNLbbJt/iAuSM2rO3+ywb4NesbPqTxcqk2SuukTSzGd1vY2Q+inKVvvvUT2CK6ow==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ca63920c-b349-bcd3-8c1d-c869d8de4d99@suse.com>
Date: Wed, 3 May 2023 11:46:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 5/6] x86: FLUSH_CACHE -> FLUSH_CACHE_EVICT
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
In-Reply-To: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0157.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::18) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DBBPR04MB7980:EE_
X-MS-Office365-Filtering-Correlation-Id: c2546fdf-8b07-4440-6e49-08db4bbb4cf0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c2546fdf-8b07-4440-6e49-08db4bbb4cf0
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 09:46:42.6204
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oAbjnMzlOUvnn1zJGckmhAAb86n+GubmED2soiX7lSxTEnNaTZRkLr5UBhYyPwfCpCMDaHSs8fAZ12vNbF69VQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7980

This is to make the difference from FLUSH_CACHE_WRITEBACK more explicit.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Note that this (of course) collides with "x86/HVM: restrict use of
pinned cache attributes as well as associated flushing".
---
v2: New.

--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -232,7 +232,7 @@ unsigned int flush_area_local(const void
     if ( flags & FLUSH_HVM_ASID_CORE )
         hvm_flush_guest_tlbs();
 
-    if ( flags & (FLUSH_CACHE | FLUSH_CACHE_WRITEBACK) )
+    if ( flags & (FLUSH_CACHE_EVICT | FLUSH_CACHE_WRITEBACK) )
     {
         const struct cpuinfo_x86 *c = &current_cpu_data;
         unsigned long sz = 0;
@@ -245,13 +245,13 @@ unsigned int flush_area_local(const void
              c->x86_clflush_size && c->x86_cache_size && sz &&
              ((sz >> 10) < c->x86_cache_size) )
         {
-            if ( flags & FLUSH_CACHE )
+            if ( flags & FLUSH_CACHE_EVICT )
                 cache_flush(va, sz);
             else
                 cache_writeback(va, sz);
-            flags &= ~(FLUSH_CACHE | FLUSH_CACHE_WRITEBACK);
+            flags &= ~(FLUSH_CACHE_EVICT | FLUSH_CACHE_WRITEBACK);
         }
-        else if ( flags & FLUSH_CACHE )
+        else if ( flags & FLUSH_CACHE_EVICT )
             wbinvd();
         else
             wbnoinvd();
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2228,7 +2228,7 @@ void hvm_shadow_handle_cd(struct vcpu *v
             domain_pause_nosync(v->domain);
 
             /* Flush physical caches. */
-            flush_all(FLUSH_CACHE);
+            flush_all(FLUSH_CACHE_EVICT);
             hvm_set_uc_mode(v, 1);
 
             domain_unpause(v->domain);
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -614,7 +614,7 @@ int hvm_set_mem_pinned_cacheattr(struct
                         break;
                     /* fall through */
                 default:
-                    flush_all(FLUSH_CACHE);
+                    flush_all(FLUSH_CACHE_EVICT);
                     break;
                 }
                 return 0;
@@ -680,7 +680,7 @@ int hvm_set_mem_pinned_cacheattr(struct
 
     p2m_memory_type_changed(d);
     if ( type != X86_MT_WB )
-        flush_all(FLUSH_CACHE);
+        flush_all(FLUSH_CACHE_EVICT);
 
     return rc;
 }
@@ -782,7 +782,7 @@ void memory_type_changed(struct domain *
          d->vcpu && d->vcpu[0] )
     {
         p2m_memory_type_changed(d);
-        flush_all(FLUSH_CACHE);
+        flush_all(FLUSH_CACHE_EVICT);
     }
 }
 
--- a/xen/arch/x86/include/asm/flushtlb.h
+++ b/xen/arch/x86/include/asm/flushtlb.h
@@ -113,7 +113,7 @@ void switch_cr3_cr4(unsigned long cr3, u
  /* Flush TLBs (or parts thereof) including global mappings */
 #define FLUSH_TLB_GLOBAL 0x200
  /* Flush data caches */
-#define FLUSH_CACHE      0x400
+#define FLUSH_CACHE_EVICT 0x400
  /* VA for the flush has a valid mapping */
 #define FLUSH_VA_VALID   0x800
  /* Flush CPU state */
@@ -191,7 +191,7 @@ static inline int clean_and_invalidate_d
 {
     unsigned int order = get_order_from_bytes(size);
     /* sub-page granularity support needs to be added if necessary */
-    flush_area_local(p, FLUSH_CACHE|FLUSH_ORDER(order));
+    flush_area_local(p, FLUSH_CACHE_EVICT | FLUSH_ORDER(order));
     return 0;
 }
 static inline int clean_dcache_va_range(const void *p, unsigned long size)
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5193,7 +5193,7 @@ int map_pages_to_xen(
     if ( (flags & _PAGE_PRESENT) &&            \
          (((o_) ^ flags) & PAGE_CACHE_ATTRS) ) \
     {                                          \
-        flush_flags |= FLUSH_CACHE;            \
+        flush_flags |= FLUSH_CACHE_EVICT;      \
         if ( virt >= DIRECTMAP_VIRT_START &&   \
              virt < HYPERVISOR_VIRT_END )      \
             flush_flags |= FLUSH_VA_VALID;     \



From xen-devel-bounces@lists.xenproject.org Wed May 03 09:47:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529069.823047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu95T-00026b-6t; Wed, 03 May 2023 09:47:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529069.823047; Wed, 03 May 2023 09:47:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu95T-00026U-46; Wed, 03 May 2023 09:47:51 +0000
Received: by outflank-mailman (input) for mailman id 529069;
 Wed, 03 May 2023 09:47:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu95S-00026M-D7
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:47:50 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2053.outbound.protection.outlook.com [40.107.7.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8fb96d3f-e997-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 11:47:48 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DBBPR04MB7980.eurprd04.prod.outlook.com (2603:10a6:10:1f0::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 09:47:19 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 09:47:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fb96d3f-e997-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nk+Zig6DRm5g/T5AVqh2RtQ8dg9/LyGuF5laszhFvbe1uMMb5mlaJqSHr/WU8pGJ77yS035Hb28sjnc0Q2/prmEKrTMc+E3jDls9fReKBToVOyjN2oQc39Ce4xUNIddkAIj//pvqlsVbw7+o0BLYazQ9W5FG4Uped0cK2setlLk4lKUIXO6ESrbjqLf4qTEvJK4c56u4jM6IirY+Rf1ij7AIa1NxPFGmNGU1CNUhYdXZHT+FJ8SCyaMBQftiNLZ62BqYly2jY0xyImSK594kXeuFzglZo1ht1jxs/xwZZy8F7NwveNFpIQGeUhyU3C9uyjzwCu/K0RcEcr05RsGTPg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KxFwwb3BZGcHC7xshgmJ4kLtF4OIl105qptm5zCcXSA=;
 b=EvRaJti7qqk2J+WJxPzZjlQXlhvK+HeaG+lI9+Y1K2xn44lZ+ndYG+jU9OA1o7O4nYp8BnqyGRI0b/77JT9ULtMpDhBSc5Gl4oue4WOlqcUq4b7XFhPYrax4/MqNbTqJlXUyK0YXNUNTonIfOU3S7fBz+AAOigxySRmQUQ8Q0K+7MG8NzpGYF/GybQSBo/0HW+GMzI9ZG7vFbp0DCIIYMcz1FSdIKDyH2yM5sB3R+qb5b085IxctWm0F7pHMliTgh4hVVVo3yiS0vQZqyMzvDKUupRUF3lecc4tmPZVgmG1MYcGhvuYHt33dZvAVApbZogfwoQvbVeQDgQgpZ0DOuA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KxFwwb3BZGcHC7xshgmJ4kLtF4OIl105qptm5zCcXSA=;
 b=4Tpn1AY+3Nx41MSek8MHxr+5itCcZ7tF23iHge2xklIIplLYV1eQ3k9heNhYzehSRfLs65OHGiVAlOid4d2m5YEjNh4B1lIyU266kJAlqfQFLM3hUPC+Acw60PDJ7fMpPBKeTbMtbmWfj/U3AyGWv2rocydVdNdFxktx5IhsscRSgGEpYOSI3NlDs2OT/RqAyMZAOwT4RIyi0WEux0WateV//cylzMA7PLRCilCaudcJq3SQt4R5tpbAIW49RoRyBBPV0yYIy2FP7KcMk3zw4JR2hgXAyKSuUM2kHEI42cMCXlKi7t6qt+4e7SyYcJB/iLCbB/R/mX9lKOqUMkDY/A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9274fbb1-c1be-9570-ecfc-8f0ac9a1f42b@suse.com>
Date: Wed, 3 May 2023 11:47:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 6/6] x86/HVM: limit cache writeback overhead
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
In-Reply-To: <c030bfde-c5bb-f205-edff-435278a435f4@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0157.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::18) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DBBPR04MB7980:EE_
X-MS-Office365-Filtering-Correlation-Id: fc04887b-0423-4823-8252-08db4bbb632d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fc04887b-0423-4823-8252-08db4bbb632d
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 09:47:19.9090
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pS4CniR+agOgejHU9tQB0i7PoAuRN3b2C3kIYP+c35gMjDm0T6Q3bB9l3l8HGnmYyRWVNqpKt3klmdqIyG3Fmg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7980

There's no need to write back caches on all CPUs upon seeing a WBINVD
exit; ones that a vCPU hasn't run on since the last writeback (or since
it was started) can't hold data which may need writing back.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
With us not running AMD IOMMUs in non-coherent ways, I wonder whether
svm_wbinvd_intercept() really needs to do anything. Alternatively it
could check iommu_snoop just like VMX does, knowing that as of
c609108b2190 ["x86/shadow: make iommu_snoop usage consistent with
HAP's"] that's always set; the check would then largely serve as grep
fodder, to make sure this code is updated once / when we do away with
this global variable, and it would be the penultimate step to being
able to fold SVM's and VT-x's functions.
---
v2: Re-base over changes earlier in the series.

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -537,6 +537,8 @@ void hvm_do_resume(struct vcpu *v)
         v->arch.hvm.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
     }
 
+    __cpumask_set_cpu(v->processor, v->arch.hvm.cache_dirty_mask);
+
     if ( unlikely(v->arch.vm_event) && v->arch.monitor.next_interrupt_enabled )
     {
         struct x86_event info;
@@ -1592,6 +1594,10 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc )
         goto fail6;
 
+    rc = -ENOMEM;
+    if ( !zalloc_cpumask_var(&v->arch.hvm.cache_dirty_mask) )
+        goto fail6;
+
     rc = ioreq_server_add_vcpu_all(d, v);
     if ( rc != 0 )
         goto fail6;
@@ -1621,6 +1627,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
     hvm_vcpu_cacheattr_destroy(v);
  fail1:
     viridian_vcpu_deinit(v);
+    FREE_CPUMASK_VAR(v->arch.hvm.cache_dirty_mask);
     return rc;
 }
 
@@ -1628,6 +1635,8 @@ void hvm_vcpu_destroy(struct vcpu *v)
 {
     viridian_vcpu_deinit(v);
 
+    FREE_CPUMASK_VAR(v->arch.hvm.cache_dirty_mask);
+
     ioreq_server_remove_vcpu_all(v->domain, v);
 
     if ( hvm_altp2m_supported() )
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2363,8 +2363,14 @@ static void svm_vmexit_mce_intercept(
 
 static void cf_check svm_wbinvd_intercept(void)
 {
-    if ( cache_flush_permitted(current->domain) )
-        flush_all(FLUSH_CACHE_WRITEBACK);
+    struct vcpu *curr = current;
+
+    if ( !cache_flush_permitted(curr->domain) )
+        return;
+
+    flush_mask(curr->arch.hvm.cache_dirty_mask, FLUSH_CACHE_WRITEBACK);
+    cpumask_copy(curr->arch.hvm.cache_dirty_mask,
+                 cpumask_of(curr->processor));
 }
 
 static void svm_vmexit_do_invalidate_cache(struct cpu_user_regs *regs,
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3710,11 +3710,17 @@ static void vmx_do_extint(struct cpu_use
 
 static void cf_check vmx_wbinvd_intercept(void)
 {
-    if ( !cache_flush_permitted(current->domain) || iommu_snoop )
+    struct vcpu *curr = current;
+
+    if ( !cache_flush_permitted(curr->domain) || iommu_snoop )
         return;
 
     if ( cpu_has_wbinvd_exiting )
-        flush_all(FLUSH_CACHE_WRITEBACK);
+    {
+        flush_mask(curr->arch.hvm.cache_dirty_mask, FLUSH_CACHE_WRITEBACK);
+        cpumask_copy(curr->arch.hvm.cache_dirty_mask,
+                     cpumask_of(curr->processor));
+    }
     else
         wbnoinvd();
 }
--- a/xen/arch/x86/include/asm/hvm/vcpu.h
+++ b/xen/arch/x86/include/asm/hvm/vcpu.h
@@ -161,6 +161,8 @@ struct hvm_vcpu {
         struct svm_vcpu svm;
     };
 
+    cpumask_var_t       cache_dirty_mask;
+
     struct tasklet      assert_evtchn_irq_tasklet;
 
     struct nestedvcpu   nvcpu;



From xen-devel-bounces@lists.xenproject.org Wed May 03 09:57:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 09:57:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529071.823057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu9Ew-0003f2-4Y; Wed, 03 May 2023 09:57:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529071.823057; Wed, 03 May 2023 09:57:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu9Ew-0003ev-1U; Wed, 03 May 2023 09:57:38 +0000
Received: by outflank-mailman (input) for mailman id 529071;
 Wed, 03 May 2023 09:57:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pu9Ev-0003ep-58
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 09:57:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pu9Eu-0006Xx-Ne; Wed, 03 May 2023 09:57:36 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pu9Eu-0003nQ-GK; Wed, 03 May 2023 09:57:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ul3QFpGjbwRlVg8hyIG/++WptfdF4QkYqxE1QSqeqkc=; b=rC3qS9xkoPH+d0b41Hsy+JIQnY
	hRsV3/ltomwLOWl8LLM0B3U6ojNwUElqJAlONBdCR8bNcEZcPu52tO6TWFImKzcFpjix4gJ9kvuSa
	cGdN83izIEI4rxAMyWqpuWl423RMJ3AuGmsYKHOrIJyZqYWIuPkvYbSNlGZGfsumisI0=;
Message-ID: <aae0daea-cfc4-166d-45b7-582bc2cebf71@xen.org>
Date: Wed, 3 May 2023 10:57:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 11/13] tools/xenstore: remember global and per domain
 max accounting values
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-12-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230405070349.25293-12-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 05/04/2023 08:03, Juergen Gross wrote:
> Add saving the maximum values of the different accounting data seen
> per domain and (for unprivileged domains) globally, and print those
> values via the xenstore-control quota command. Add a sub-command for
> resetting the global maximum values seen.
> 
> This should help for a decision how to set the related quotas.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 10:18:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 10:18:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529083.823067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu9Ym-00069a-OU; Wed, 03 May 2023 10:18:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529083.823067; Wed, 03 May 2023 10:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu9Ym-00069T-Lp; Wed, 03 May 2023 10:18:08 +0000
Received: by outflank-mailman (input) for mailman id 529083;
 Wed, 03 May 2023 10:18:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pu9Yl-00069N-Cu
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 10:18:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pu9Yk-00078e-E8; Wed, 03 May 2023 10:18:06 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pu9Yk-0004ko-79; Wed, 03 May 2023 10:18:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <e4f8a0e6-7a4c-3193-ce38-e43891f063ed@xen.org>
Date: Wed, 3 May 2023 11:18:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 12/13] tools/xenstore: use generic accounting for
 remaining quotas
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-13-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230405070349.25293-13-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 05/04/2023 08:03, Juergen Gross wrote:
> The maxrequests, node size, number of node permissions, and path length
> quota are a little bit special, as they are either active in
> transactions only (maxrequests), or they are just per item instead of
> count values. Nevertheless being able to know the maximum number of
> those quota related values per domain would be beneficial, so add them
> to the generic accounting.
> 
> The per domain value will never show current numbers other than zero,
> but the maximum number seen can be gathered the same way as the number
> of nodes during a transaction.
> 
> To be able to use the const qualifier for a new function switch
> domain_is_unprivileged() to take a const pointer, too.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/xenstore/xenstored_core.c        | 14 ++++-----
>   tools/xenstore/xenstored_core.h        |  2 +-
>   tools/xenstore/xenstored_domain.c      | 39 ++++++++++++++++++++------
>   tools/xenstore/xenstored_domain.h      |  6 ++++
>   tools/xenstore/xenstored_transaction.c |  4 +--
>   tools/xenstore/xenstored_watch.c       |  2 +-
>   6 files changed, 48 insertions(+), 19 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 88c569b7d5..65df2866bf 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -799,8 +799,8 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
>   		+ node->perms.num * sizeof(node->perms.p[0])
>   		+ node->datalen + node->childlen;
>   
> -	if (!no_quota_check && domain_is_unprivileged(conn) &&
> -	    data.dsize >= quota_max_entry_size) {
> +	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
> +	    && !no_quota_check) {

It feels a bit odd to move the !no_quota_check after the actual check.
But AFAICT you are doing it because domain_max_chk() will also update
the maximum value seen for the current quota, even when the quota
itself is not enforced.

Is that correct? If so, it would be worth mentioning it in a comment.
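If it helps, the behaviour in question can be modelled stand-alone: the
check's side effect (recording the maximum seen) runs unconditionally,
which is why it has to come before the no_quota_check short-circuit.
This is only a sketch of the pattern; quota_chk() and write_entry() are
hypothetical stand-ins, not xenstored functions:

```c
#include <stdbool.h>

static unsigned int max_seen;      /* stands in for d->acc[what].max */

/* Records the maximum and reports whether the quota is exceeded. */
static bool quota_chk(unsigned int val, unsigned int quota)
{
	if (val > max_seen)
		max_seen = val;    /* side effect: accounting update */
	return val > quota;
}

static int write_entry(unsigned int size, unsigned int quota,
		       bool no_quota_check)
{
	/*
	 * quota_chk() must be evaluated first so the maximum is
	 * recorded even when no_quota_check suppresses enforcement;
	 * hence the "odd" operand order.
	 */
	if (quota_chk(size, quota) && !no_quota_check)
		return -1;         /* ENOSPC in the real code */
	return 0;
}
```

A comment along those lines in write_node_raw() would make the operand
order self-documenting.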

>   		errno = ENOSPC;
>   		return errno;
>   	}
> @@ -1168,7 +1168,7 @@ static bool valid_chars(const char *node)
>   		       "0123456789-/_@") == strlen(node));
>   }
>   
> -bool is_valid_nodename(const char *node)
> +bool is_valid_nodename(const struct connection *conn, const char *node)
>   {
>   	int local_off = 0;
>   	unsigned int domid;
> @@ -1188,7 +1188,8 @@ bool is_valid_nodename(const char *node)
>   	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
>   		local_off = 0;
>   
> -	if (strlen(node) > local_off + quota_max_path_len)
> +	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
> +			   quota_max_path_len))
>   		return false;
>   
>   	return valid_chars(node);
> @@ -1250,7 +1251,7 @@ static struct node *get_node_canonicalized(struct connection *conn,
>   	*canonical_name = canonicalize(conn, ctx, name);
>   	if (!*canonical_name)
>   		return NULL;
> -	if (!is_valid_nodename(*canonical_name)) {
> +	if (!is_valid_nodename(conn, *canonical_name)) {
>   		errno = EINVAL;
>   		return NULL;
>   	}
> @@ -1775,8 +1776,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
>   		return EINVAL;
>   
>   	perms.num--;
> -	if (domain_is_unprivileged(conn) &&
> -	    perms.num > quota_nb_perms_per_node)
> +	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
>   		return ENOSPC;
>   
>   	permstr = in->buffer + strlen(in->buffer) + 1;
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 3564d85d7d..9339820156 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -258,7 +258,7 @@ void check_store(void);
>   void corrupt(struct connection *conn, const char *fmt, ...);
>   
>   /* Is this a valid node name? */
> -bool is_valid_nodename(const char *node);
> +bool is_valid_nodename(const struct connection *conn, const char *node);
>   
>   /* Get name of parent node. */
>   char *get_parent(const void *ctx, const char *node);
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index d21f31da92..49e2c5c82a 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -433,7 +433,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>   		return ENOMEM;
>   
>   #define ent(t, e) \
> -	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u\n", #t, \
> +	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u\n", #t, \

This change feels a bit unrelated. Can you mention in the commit
message why it is necessary?

>   				      d->acc[e].val, d->acc[e].max); \
>   	if (!resp) return ENOMEM
>   
> @@ -442,6 +442,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>   	ent(transactions, ACC_TRANS);
>   	ent(outstanding, ACC_OUTST);
>   	ent(memory, ACC_MEM);
> +	ent(transaction-nodes, ACC_TRANSNODES);

You seem to convert multiple quotas but only print one. Why?
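As a side note, the %-16s to %-17s bump is presumably driven by the new
entry name: "transaction-nodes" is exactly 17 characters. A stand-alone
sketch of the ent() stringification pattern (a simplified model, not
the xenstored code; names and values are illustrative only):

```c
#include <stdio.h>
#include <string.h>

/* Simplified model of the ent() macro in domain_get_quota(): each
 * invocation appends one "name: value" line, so every quota converted
 * to generic accounting needs its own ent() line to appear in the
 * output. */

enum accitem { ACC_NODES, ACC_TRANSNODES, ACC_N };

static unsigned int acc_val[ACC_N] = { 42, 7 };

static int format_quota(char *buf, size_t len)
{
	size_t used = 0;

#define ent(t, e) do {							\
		int n = snprintf(buf + used, len - used,		\
				 "%-17s: %8u\n", #t, acc_val[e]);	\
		if (n < 0 || (size_t)n >= len - used)			\
			return -1;					\
		used += n;						\
	} while (0)

	ent(nodes, ACC_NODES);
	ent(transaction-nodes, ACC_TRANSNODES);

#undef ent

	return 0;
}
```

The #t stringification turns the bare argument transaction-nodes into
the label "transaction-nodes", which just fills the 17-column field.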

>   
>   #undef ent
>   
> @@ -459,7 +460,7 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
>   		return ENOMEM;
>   
>   #define ent(t, e) \
> -	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
> +	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
>   				      acc_global_max[e]);         \

Ditto.

>   	if (!resp) return ENOMEM
>   
> @@ -468,6 +469,7 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
>   	ent(transactions, ACC_TRANS);
>   	ent(outstanding, ACC_OUTST);
>   	ent(memory, ACC_MEM);
> +	ent(transaction-nodes, ACC_TRANSNODES);
>   
>   #undef ent
>   
> @@ -1081,12 +1083,22 @@ int domain_adjust_node_perms(struct node *node)
>   	return 0;
>   }
>   
> +static void domain_acc_valid_max(struct domain *d, enum accitem what,
> +				 unsigned int val)
> +{
> +	assert(what < ARRAY_SIZE(d->acc));
> +	assert(what < ARRAY_SIZE(acc_global_max));
> +
> +	if (val > d->acc[what].max)
> +		d->acc[what].max = val;
> +	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
> +		acc_global_max[what] = val;
> +}
> +
>   static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
>   {
>   	unsigned int val;
>   
> -	assert(what < ARRAY_SIZE(d->acc));

I think this assert should be kept because...

> -
>   	if ((add < 0 && -add > d->acc[what].val) ||

... of this check. Otherwise 'what' would only be validated against the
bounds after it has already been used to index d->acc[].
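To illustrate the point with a simplified stand-in (not the xenstored
structures; acc_add_checked() is hypothetical), the bounds assert has
to precede the first d->acc[what] dereference:

```c
#include <assert.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct acc { unsigned int val, max; };
struct domain { struct acc acc[4]; };

static int acc_add_checked(struct domain *d, unsigned int what, int add)
{
	/* Keep the bounds check ahead of the first d->acc[what] use;
	 * moving it into a helper called later would check too late. */
	assert(what < ARRAY_SIZE(d->acc));

	if (add < 0 && (unsigned int)-add > d->acc[what].val)
		return -1;

	d->acc[what].val += add;
	return (int)d->acc[what].val;
}
```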

>   	    (add > 0 && (INT_MAX - d->acc[what].val) < add)) {
>   		/*
> @@ -1100,10 +1112,7 @@ static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
>   	}
>   
>   	val = d->acc[what].val + add;
> -	if (val > d->acc[what].max)
> -		d->acc[what].max = val;
> -	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
> -		acc_global_max[what] = val;
> +	domain_acc_valid_max(d, what, val);
>   
>   	return val;
>   }
> @@ -1221,6 +1230,20 @@ void domain_reset_global_acc(void)
>   	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
>   }
>   
> +bool domain_max_chk(const struct connection *conn, enum accitem what,
> +		    unsigned int val, unsigned int quota)
> +{
> +	if (!conn || !conn->domain)
> +		return false;
> +
> +	if (domain_is_unprivileged(conn) && val > quota)
> +		return true;
> +
> +	domain_acc_valid_max(conn->domain, what, val);
> +
> +	return false;
> +}
> +
>   int domain_nbentry_inc(struct connection *conn, unsigned int domid)
>   {
>   	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index b55c9bcc2d..3e17b63659 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -32,6 +32,10 @@ enum accitem {
>   	ACC_OUTST,
>   	ACC_MEM,
>   	ACC_TRANS,
> +	ACC_TRANSNODES,
> +	ACC_NPERM,
> +	ACC_PATHLEN,
> +	ACC_NODESZ,
>   	ACC_N,			/* Number of elements per domain. */
>   };
>   
> @@ -128,6 +132,8 @@ void acc_drop(struct connection *conn);
>   void acc_commit(struct connection *conn);
>   int domain_max_global_acc(const void *ctx, struct connection *conn);
>   void domain_reset_global_acc(void);
> +bool domain_max_chk(const struct connection *conn, unsigned int what,
> +		    unsigned int val, unsigned int quota);
>   
>   /* Write rate limiting */
>   
> diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
> index 1c14b8579a..0b256f9b18 100644
> --- a/tools/xenstore/xenstored_transaction.c
> +++ b/tools/xenstore/xenstored_transaction.c
> @@ -252,8 +252,8 @@ int access_node(struct connection *conn, struct node *node,
>   
>   	i = find_accessed_node(trans, node->name);
>   	if (!i) {
> -		if (trans->nodes >= quota_trans_nodes &&
> -		    domain_is_unprivileged(conn)) {
> +		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1,
> +				   quota_trans_nodes)) {
>   			ret = ENOSPC;
>   			goto err;
>   		}
> diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
> index e30cd89be3..61b1e3421e 100644
> --- a/tools/xenstore/xenstored_watch.c
> +++ b/tools/xenstore/xenstored_watch.c
> @@ -176,7 +176,7 @@ static int check_watch_path(struct connection *conn, const void *ctx,
>   		*path = canonicalize(conn, ctx, *path);
>   		if (!*path)
>   			return errno;
> -		if (!is_valid_nodename(*path))
> +		if (!is_valid_nodename(conn, *path))
>   			goto inval;
>   	}
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 10:31:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 10:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529087.823077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu9lO-0008V1-Tw; Wed, 03 May 2023 10:31:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529087.823077; Wed, 03 May 2023 10:31:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu9lO-0008Uu-Qp; Wed, 03 May 2023 10:31:10 +0000
Received: by outflank-mailman (input) for mailman id 529087;
 Wed, 03 May 2023 10:31:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZDiC=AY=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1pu9lN-0008Uo-7S
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 10:31:09 +0000
Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com
 [2a00:1450:4864:20::129])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9bddfd91-e99d-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 12:31:05 +0200 (CEST)
Received: by mail-lf1-x129.google.com with SMTP id
 2adb3069b0e04-4ecb137af7eso5789766e87.2
 for <xen-devel@lists.xenproject.org>; Wed, 03 May 2023 03:31:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
MIME-Version: 1.0
References: <20230428081231.2464275-1-george.dunlap@cloud.com> <b5292073-c675-587e-e19c-cbbeead41a7c@suse.com>
In-Reply-To: <b5292073-c675-587e-e19c-cbbeead41a7c@suse.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Wed, 3 May 2023 11:30:53 +0100
Message-ID: <CA+zSX=YKzn6qqie3cKd-78Q5Sqhux3eok3CDwr=jQbJed8NzHw@mail.gmail.com>
Subject: Re: [PATCH RFC] SUPPORT.md: Make all security support explicit
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper@cloud.com>, 
	Roger Pau Monne <roger.pau@cloud.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="0000000000009b382e05fac78d41"

--0000000000009b382e05fac78d41
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Tue, May 2, 2023 at 12:19 PM Jan Beulich <jbeulich@suse.com> wrote:

> On 28.04.2023 10:12, George Dunlap wrote:
> > --- a/SUPPORT.md
> > +++ b/SUPPORT.md
> > @@ -17,6 +17,36 @@ for the definitions of the support status levels etc.
> >  Release Notes
> >  : <a href="
> https://wiki.xenproject.org/wiki/Xen_Project_X.YY_Release_Notes">RN</a>
> >
> > +# General security support
> > +
> > +An XSA will always be issued for security-related bugs which are
> > +present in a "plain vanilla" configuration.  A "plain vanilla"
> > +configuration is defined as follows:
> > +
> > +* The Xen hypervisor is built from a tagged release of Xen, or a
> > +  commit which was on the tip of one of the supported stable branches.
> > +
> > +* The Xen hypervisor was built with the default config for the platform
> > +
> > +* No Xen command-line parameters were specified
> > +
> > +* No parameters for Xen-related drivers in the Linux kernel were
> specified
> > +
> > +* No modifications were made to the default xl.conf
> > +
> > +* xl.cfg files use only core functionality
> > +
> > +* Alternate toolstacks only activate functionality activated by the
> > +  core functionality of xl.cfg files.
> > +
> > +Any system outside this configuration will only be considered security
> > +supported if the functionality is explicitly listed as supported in
> > +this document.
> > +
> > +If a security-related bug exits only in a configuration listed as not
> > +security supported, the security team will generally not issue an XSA;
> > +the bug will simply be handled in public.
>
> In this last paragraph, did you perhaps mean "not listed as security
> supported"? Otherwise we wouldn't improve our situation, unless I'm
> misunderstanding and word order doesn't matter here in English. In which
> case some unambiguous wording would need to be found.
>

No, I think your wording is more accurate.

 -George

--0000000000009b382e05fac78d41--


From xen-devel-bounces@lists.xenproject.org Wed May 03 10:37:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 10:37:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529091.823087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu9qu-0000m2-Ln; Wed, 03 May 2023 10:36:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529091.823087; Wed, 03 May 2023 10:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pu9qu-0000lv-IV; Wed, 03 May 2023 10:36:52 +0000
Received: by outflank-mailman (input) for mailman id 529091;
 Wed, 03 May 2023 10:36:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pu9qt-0000lp-67
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 10:36:51 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20619.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::619])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6919bb39-e99e-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 12:36:50 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by VE1PR04MB7263.eurprd04.prod.outlook.com (2603:10a6:800:1af::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 10:36:47 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 10:36:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <fef669f5-2b28-c47b-9b57-60c4eb99017e@suse.com>
Date: Wed, 3 May 2023 12:36:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Ping: [PATCH RFC] build: respect top-level .config also for
 out-of-tree hypervisor builds
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <beace0ce-e90b-eb79-4419-03de45ea7360@suse.com>
In-Reply-To: <beace0ce-e90b-eb79-4419-03de45ea7360@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 15.03.2023 15:58, Jan Beulich wrote:
> With in-tree builds Config.mk includes a .config file (if present) from
> the top-level directory. Similar functionality is wanted with out-of-
> tree builds. Yet the concept of "top-level directory" becomes fuzzy in
> that case, because there is not really a requirement to have identical
> top-level directory structure in the output tree; in fact there's no
> need for anything top-level-ish there. Look for such a .config, but only
> if the tree layout matches (read: if the directory we're building in is
> named "xen").
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> RFC: The directory name based heuristic of course isn't nice. But I
>      couldn't think of anything better. Suggestions?
> 
> RFC: There also being a .config in the top-level source dir would be a
>      little problematic: It would be included _after_ the one in the
>      object tree. Yet if such a scenario is to be expected/supported at
>      all, it makes more sense the other way around.

Anyone? I'm certainly okay with my approach being rejected, but I'd like
to see out-of-tree builds reach functional parity with in-tree ones.

Jan

> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -236,8 +236,17 @@ endif
>  
>  include scripts/Kbuild.include
>  
> -# Don't break if the build process wasn't called from the top level
> -# we need XEN_TARGET_ARCH to generate the proper config
> +# Don't break if the build process wasn't called from the top level.  We need
> +# XEN_TARGET_ARCH to generate the proper config.  If building outside of the
> +# source tree also check whether we need to include a "top-level" .config:
> +# Config.mk, using $(XEN_ROOT)/.config, would look only in the source tree.
> +ifeq ($(building_out_of_srctree),1)
> +# Try to avoid including a random unrelated .config: Assume our parent dir
> +# is a "top-level" one only when the objtree is .../xen.
> +ifeq ($(patsubst %/xen,,$(abs_objtree)),)
> +-include ../.config
> +endif
> +endif
>  include $(XEN_ROOT)/Config.mk
>  
>  # Set ARCH/SUBARCH appropriately.
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -17,6 +17,13 @@ __build:
>  
>  -include $(objtree)/include/config/auto.conf
>  
> # See commentary around the similar construct in Makefile.
> +ifneq ($(abs_objtree),$(abs_srctree))
> +ifeq ($(patsubst %/xen,,$(abs_objtree)),)
> +../.config: ;
> +-include ../.config
> +endif
> +endif
>  include $(XEN_ROOT)/Config.mk
>  include $(srctree)/scripts/Kbuild.include
>  
> 
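[For readers less familiar with make pattern substitution: the
`$(patsubst %/xen,,$(abs_objtree))` expression in both hunks expands to the
empty string exactly when the object tree's absolute path ends in "/xen",
which is what the ifeq tests. A minimal, illustrative C equivalent of that
heuristic — not part of the patch, and the function name is made up:]

```c
#include <stdbool.h>
#include <string.h>

/*
 * Illustrative only: mirrors ifeq ($(patsubst %/xen,,$(abs_objtree)),),
 * which yields the empty string exactly when abs_objtree ends in "/xen".
 */
static bool objtree_is_named_xen(const char *abs_objtree)
{
    size_t len = strlen(abs_objtree);

    return len >= 4 && strcmp(abs_objtree + len - 4, "/xen") == 0;
}
```

[So a build in /home/me/build/xen would pull in /home/me/build/.config,
while a build in /home/me/xen-build would not.]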



From xen-devel-bounces@lists.xenproject.org Wed May 03 10:37:29 2023
Message-ID: <594ca301-8f4d-83ff-3de2-a68741767bb2@suse.com>
Date: Wed, 3 May 2023 12:37:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Ping: [PATCH v2 0/2] build: out-of-tree building adjustments
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <2e3c8f9d-8007-638a-c88b-21ad2783d8d3@suse.com>
In-Reply-To: <2e3c8f9d-8007-638a-c88b-21ad2783d8d3@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 29.03.2023 12:20, Jan Beulich wrote:
> The 1st patch is new, addressing comments on the previously standalone
> (and unchanged) 2nd patch in a way different from what was discussed.
> 
> 1: don't export building_out_of_srctree
> 2: omit "source" symlink when building hypervisor in-tree

Ping?

Thanks, Jan



From xen-devel-bounces@lists.xenproject.org Wed May 03 10:39:07 2023
Message-ID: <0820220d-a500-7920-3630-eea074856e3f@suse.com>
Date: Wed, 3 May 2023 12:38:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3] xen/vcpu: ignore VCPU_SSHOTTMR_future
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230419143155.36864-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230419143155.36864-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.04.2023 16:31, Roger Pau Monne wrote:
> The usage of VCPU_SSHOTTMR_future in Linux prior to 4.7 is bogus.
> When the hypervisor returns -ETIME (timeout in the past) Linux keeps
> retrying to set up the timer with a higher timeout instead of
> self-injecting a timer interrupt.
> 
> On boxes without any hardware assistance for logdirty we have seen HVM
> Linux guests < 4.7 with 32 vCPUs give up trying to set up the timer when
> logdirty is enabled:
> 
> CE: Reprogramming failure. Giving up
> CE: xen increased min_delta_ns to 1000000 nsec
> CE: Reprogramming failure. Giving up
> CE: Reprogramming failure. Giving up
> CE: xen increased min_delta_ns to 506250 nsec
> CE: xen increased min_delta_ns to 759375 nsec
> CE: xen increased min_delta_ns to 1000000 nsec
> CE: Reprogramming failure. Giving up
> CE: Reprogramming failure. Giving up
> CE: Reprogramming failure. Giving up
> Freezing user space processes ...
> INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
> Task dump for CPU 14:
> swapper/14      R  running task        0     0      1 0x00000000
> Call Trace:
>  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
>  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
>  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
>  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
>  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
>  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
> Task dump for CPU 26:
> swapper/26      R  running task        0     0      1 0x00000000
> Call Trace:
>  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
>  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
>  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
>  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
>  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
>  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
> Task dump for CPU 26:
> swapper/26      R  running task        0     0      1 0x00000000
> Call Trace:
>  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
>  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
>  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
>  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
>  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
>  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> 
> Thus leading to CPU stalls and a broken system as a result.
> 
> Work around this bogus usage by ignoring the VCPU_SSHOTTMR_future flag
> in the hypervisor.  Old Linux versions are the only ones known to have
> (wrongly) attempted to use the flag, and ignoring it is compatible
> with the behavior expected by any guests setting that flag.
> 
> Note the usage of the flag has been removed from Linux by commit:
> 
> c06b6d70feb3 xen/x86: don't lose event interrupts
> 
> Which landed in Linux 4.7.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG

A little hesitantly, but since no-one else appears to show any interest:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan
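
[The behavioural change being acked can be sketched as follows. This is an
illustration, not the actual Xen source: the function names are made up, the
flag value matches bit 0 as defined in Xen's public vcpu.h, and ETIME's
numeric value is a stand-in for the real errno constant.]

```c
#include <stdint.h>

#define VCPU_SSHOTTMR_future (1U << 0) /* bit 0, as in Xen's public vcpu.h */
#define ETIME 62                       /* stand-in for the errno constant */

/*
 * Sketch of the old VCPUOP_set_singleshot_timer behaviour: with the
 * "future" flag set, a timeout already in the past was rejected.
 */
static int set_sshot_timer_old(uint64_t timeout_abs_ns, uint32_t flags,
                               uint64_t now_ns)
{
    if ( (flags & VCPU_SSHOTTMR_future) && timeout_abs_ns < now_ns )
        return -ETIME;
    return 0; /* timer armed */
}

/*
 * With the patch the flag is ignored: the timer is always armed, and an
 * already-expired timeout simply fires (almost) immediately, which is
 * what pre-4.7 Linux effectively expected.
 */
static int set_sshot_timer_new(uint64_t timeout_abs_ns, uint32_t flags,
                               uint64_t now_ns)
{
    (void)timeout_abs_ns; (void)flags; (void)now_ns;
    return 0; /* timer armed unconditionally */
}
```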


From xen-devel-bounces@lists.xenproject.org Wed May 03 10:43:41 2023
Message-ID: <9f319f7b-5080-2802-09e0-2793d9dad1a7@suse.com>
Date: Wed, 3 May 2023 12:43:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] cmdline: document and enforce "extra_guest_irqs" upper
 bounds
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <54e126fc-484b-92fa-ce66-f901f92ec19c@suse.com>
 <6c5cdffa-f3fb-8f40-c44f-ad7431451929@citrix.com>
 <f3a11fa7-6e39-f7a9-7705-17c3af34273e@suse.com>
In-Reply-To: <f3a11fa7-6e39-f7a9-7705-17c3af34273e@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0131.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::16) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|PA4PR04MB7711:EE_
X-MS-Office365-Filtering-Correlation-Id: 951cc484-a02b-4e10-b73a-08db4bc3391c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	y6fUfyX3E65fJ9h9MgEwmrNp4Dhzzee2PB2VVFV43rmXG4xDCgXYfzMwkI2FpKO7jjw3LSgqBPefl0BPu/BTvM/l4mlUp4bZ6/5wV/ySGXSE5y5oBPRVln84ou8GGAweTfPLDmgAFNpU0HluHd5cd9h36LNVHpmZo8bRiWMLlxU15nJvo50gXi57M+MIsJWriUda7R+5D+96S3jpo7XtJwULUVUUwWWV+X/Y5qD4FYGdj8l2CeItbRxEqICjZZFAVyAO18JuLVfM9h3lmzqrYnc9ZeDZQXySNslE5fGtfts2NiTVBlYpdkbZZSFEBqPpTgY7JqZec91J+tuJsWfvZ9zq9IF6e6ReXKE1RRD/BXTYDUYsTY+vRWVgYOMugj/MKB9cVmgomU1IeJPcTlVBlq9Ox+ystcZlNJTJ/dR2pfjwWcVmiDiVZ3KkttyWc1RI7vCweNmkSs6XwBi5H6PO3KoNrkn1i3YukzqTkpzyqUsHVb3Hw65Ad5xbTjU24kxfLH+7yHX955Pje2ZvCRSw6cSNMal30OUAUsYf/O3nyXJIIBsqNt5yUO8LJdl6q6yUM9Odzvdk808vVL8+OgeFQWajZHMicFWP0284yf/IqSiwGK0vI5jzHDzMHRZU5HiKcanLAOyBuYIKaBbsPF+mkQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR04MB6551.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(39860400002)(376002)(396003)(366004)(346002)(451199021)(31686004)(2906002)(4744005)(5660300002)(36756003)(8676002)(8936002)(66476007)(66946007)(66556008)(110136005)(41300700001)(86362001)(31696002)(478600001)(316002)(4326008)(54906003)(6486002)(38100700002)(186003)(53546011)(2616005)(83380400001)(26005)(6506007)(6512007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?S09DdkcyVTRaUy9aZk10K3YxWjhHVFVHRkZnZEp3cHFnRXJlQTdncWY0WTBx?=
 =?utf-8?B?dkVXZkJyMGlVOXZoZmRuQnAySTZXanNDZy9FSEduN25RR3dtNmQ0MldCN0Z1?=
 =?utf-8?B?NEtRdCtFSjQyQUtjUkFWU1M2aVp0Qld2OWhiZEFlSHYxYVZzejU1NDJSaFdP?=
 =?utf-8?B?VGI1QWlGMUdtM2c4aGcrRi9pTThTOUpkNEtad2ZoVnFrWFppZmRZVFd5V3RW?=
 =?utf-8?B?TVZDQmJReGYvN2dPcVZVaXlWT0dtQ3diV3BMbW0ycVdVRjJKQlNocWNIQ2tp?=
 =?utf-8?B?ZzhZL3ZsVURWRS9TTEpNb3VKMzJ1SWFPdXhDWXRTdEpzRStvQ3lFVU5sRFVG?=
 =?utf-8?B?OHdxNFpzb1F2V29la2RuOVc4Qm84dHk2ZDZubjkvUWpYQ0ljcXhoeE9SWDRl?=
 =?utf-8?B?TVRiRmFJN1NXcVFodGpWc2RLM1Nac084YWNRZVljYXh1NWtJZkZRYmJ5ZXlR?=
 =?utf-8?B?RmdkY0lIYlpOc2M1MFlDU1k1UXpkV0tXTmx6V1lITXpnSnIyQjN5TWY1R3hT?=
 =?utf-8?B?WUl4V2wzSW1IY3hPY1Q1aW9qQVZrWXorOWxWMW51TUh6OEM1Q1Uzakx3T0Fr?=
 =?utf-8?B?cm5KcjhCbVVUaEdjZitOS3Z5OWJQa2kvdE5qSlZ2RkRJS1BHdGp4RmZiMDR1?=
 =?utf-8?B?MW5EZENLREg2ZEQrWTAzeG9iTHY3ZW05U3NhRjRaTW9tOXNPUkVzMWtBRnp0?=
 =?utf-8?B?REI2S3NHTm5nZ3RpQURzZlJtTTNmbmY0Uzh2VlZqWXd6S25tdXVkQThVUitD?=
 =?utf-8?B?ZG82SVlrZlVzbUdHVlBWUFZzQnZUZDVBOXZ6aXFZS0NjZGF1REk3eHAwY1lP?=
 =?utf-8?B?clVJN2dWcWdnLzBXSlFOQy9rVUxuaU1ZR09RTWlydUVpd1pyLzZ6YWlvcVRP?=
 =?utf-8?B?dmhDOHFudFJMWUtnQTBvTU9xdXAxVTZmMllQQUdWRzBodHhTLzlBNk1CQ3hT?=
 =?utf-8?B?SFRLMFdjWGIxRHVyVTZtczRsbEVmVnlQaFgvajlxNlhXNmtVeFkzeVMyRWVT?=
 =?utf-8?B?aUhYWVdTKzgya3pSRHNtZ3VRTFJWWnFNUkZhUjJ1d05NNE5HSFpUVDhNRVlU?=
 =?utf-8?B?Rzl4MEdZb1FYYUwxZmJIV3M2b1h6ZWlyMG5PTzNieEJzRmpqM2JlR1dpRy9v?=
 =?utf-8?B?MithVFovQnVHdFZ3N0NKOFNkb3VVbjNwTm9lNHNxNlUvUVhnS3J3dlgrdktp?=
 =?utf-8?B?OThvV1RJcEJRRitaR1dqVTRnRENZZ3lpbXdZVDhjUzRzdTlRNHd1SHpXZkhT?=
 =?utf-8?B?QllSRDVZMmx1ZlJUdHFMSmU5bGV1ZGVVZ3FSbnA3bjY5Q0pGYXVyUEo1V2Jw?=
 =?utf-8?B?MFN4N213aElPZ3Y2bEFIc2hhdE9XdHRpQ2ppRksvWEUrblBTcHpoYW8xN2VX?=
 =?utf-8?B?aWFXYXRyb1RuODU1Y0tYSlI5SVpYYnJEL25pby96WC8yNHFIWnJhM3BORmcz?=
 =?utf-8?B?MTR4a21YamducVBiRVR6cmxZYm1xaTc4MlN1b1dORUo5NnF0ajJWWlR2ZE85?=
 =?utf-8?B?WTc5NTZDOGV2bHc2WUNUeTNKelFOblp6U2xQeVBlbHRVSWRCY2hRR2h0RExr?=
 =?utf-8?B?aGZydHdCRFhkdHJ2ZXhaYmdkVnZ3MS9rK0QzYnpXRmpwUldMdTFvaGJIdjhS?=
 =?utf-8?B?UnJJZTJFcWprNE5MNHVOYlo2aS9LOXc3M0crQjdzdWs2QysxdGQ5elRBdGk4?=
 =?utf-8?B?bTY2M3kwSVRDRDRNMXVYS0dLT1dVZ2FUSWFWaHhIWUhzN2N3QXliTVRLZms1?=
 =?utf-8?B?VnNqMU00b2YvU2hhSytHN3lZQzN4akx4aFhFSU9YNnkvU1gwUFVsZU1Zcmdl?=
 =?utf-8?B?QlB1TzBSY3hmdnJRczEvMWdMZVJBK1dkOEx1Tk1VUG56QnZBSk1UZm5GRU91?=
 =?utf-8?B?QVRmWFZDS3lxZUF6ZWl4aWErNEdHQlo3SjQycHhTOStGdmFZNmlZdG5lZ0dh?=
 =?utf-8?B?K2l6WUVxRUQ4UzdnUXhhOHVqajc1VTdaQ0k3T29nU0prMFlaTEJCZFRaQmZu?=
 =?utf-8?B?Y3dYT1FRY1dJT1paeDI3c3JUdXUzS2JFdHJuMGRNYm1jK09ORDZ2eXhJdjFB?=
 =?utf-8?B?NjkyYXBZYlFTL3EzQm5sN1hQUzE3TWlZOXQ1aHN0eG14WnhCOUxaSU5jcU5r?=
 =?utf-8?Q?XrDJTBo81YwH28DYC4Xepweyo?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 951cc484-a02b-4e10-b73a-08db4bc3391c
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 10:43:25.3609
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KvqwDhGlE/+m0dR9nGU34fMZktn8oNqgixCoGV0KM4tNbWzwx/4+6O855lrGoP6Q5qwMfEU/Ch7Nm3sRxaX9/Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7711

On 04.04.2023 12:40, Jan Beulich wrote:
> On 04.04.2023 12:34, Andrew Cooper wrote:
>> On 04/04/2023 10:20 am, Jan Beulich wrote:
>>> --- a/xen/arch/arm/include/asm/irq.h
>>> +++ b/xen/arch/arm/include/asm/irq.h
>>> @@ -52,7 +52,7 @@ struct arch_irq_desc {
>>>  
>>>  extern const unsigned int nr_irqs;
>>>  #define nr_static_irqs NR_IRQS
>>> -#define arch_hwdom_irqs(domid) NR_IRQS
>>> +#define arch_hwdom_irqs(d) NR_IRQS
>>
>> I know it's not your bug, but this ought to be (d, NR_IRQS) as you're
>> changing it.
> 
> I can add this (with a cast to void), but I'll leave the final say to
> Arm maintainers.
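
A minimal sketch of the form being discussed: the argument is evaluated and cast to void, so it still gets type-checked (and avoids "unused" complaints) while the macro expands to a constant. NR_IRQS's value and the struct domain layout here are stand-ins, not Xen's actual definitions.

```c
#include <assert.h>

/* Hypothetical stand-ins for the Xen internals. */
#define NR_IRQS 1024
struct domain { int id; };

/* Evaluate (and discard) the argument, then yield the constant. */
#define arch_hwdom_irqs(d) ((void)(d), NR_IRQS)

static unsigned int hwdom_irqs(const struct domain *d)
{
    return arch_hwdom_irqs(d);
}
```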

Arm maintainers,

may I ask for some kind of statement here? I'd be happy to follow
Andrew's request, but I don't want to then end up with an "unrelated
change" objection.

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 10:53:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 10:53:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529104.823127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puA70-0004sJ-R4; Wed, 03 May 2023 10:53:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529104.823127; Wed, 03 May 2023 10:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puA70-0004sC-OI; Wed, 03 May 2023 10:53:30 +0000
Received: by outflank-mailman (input) for mailman id 529104;
 Wed, 03 May 2023 10:53:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puA6z-0004s6-NU
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 10:53:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puA6y-0007xn-LB; Wed, 03 May 2023 10:53:28 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puA6y-0006MO-Dp; Wed, 03 May 2023 10:53:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=1BCdJCjM28LGf/KTqeCxNJ67GzgccYNIlqtOxilJCvs=; b=rLE6Pp8ZSjuZuJlVO8Il++yoZ8
	nPLjeyEuc3QAa+4BtyUZGeLmmyUo9jeXeZS84stL453xg+4fqCSHVVZfQd61jYkN15tZd0uVKdwYL
	nAKzH7AmiS6k6JL1iEZ2lNzcdwkJZZkUGhJvFQuq6sHbBknrlIqyH+ttNabWOj0Uqqzg=;
Message-ID: <1a529c44-1564-ad42-3924-f58efaa83a91@xen.org>
Date: Wed, 3 May 2023 11:53:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v4 13/13] tools/xenstore: switch quota management to be
 table based
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-14-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230405070349.25293-14-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 05/04/2023 08:03, Juergen Gross wrote:
> Instead of having individual quota variables, switch to a table-based
> approach like the generic accounting. Include all the related data in
> the same table and add accessor functions.
> 
> This enables using the command line --quota parameter for setting all
> possible quota values, while keeping the previous parameters for
> compatibility.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - new patch
> One further remark: it would be rather easy to add soft quotas for all
> the other quotas (similar to the memory one). These could be used as
> an early warning of the need to raise a global quota.

I don't have a strong opinion on this topic.
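
For readers following along, the table-based layout the commit message describes might look roughly like the sketch below. The accounting indices and default values mirror what is visible in the diff, but the exact enum contents and struct layout here are a guess, not the patch's actual code.

```c
#include <assert.h>
#include <string.h>

/* Subset of the accounting indices visible in the diff. */
enum accitem { ACC_NODES, ACC_WATCH, ACC_TRANS, ACC_N };

/* One table entry per quota: command-line name, current limit, and
 * the description printed by "quota show". */
struct quota {
    const char *name;
    unsigned int val;
    const char *descr;
};

/* Defaults taken from the variables the patch removes. */
static struct quota hard_quotas[ACC_N] = {
    [ACC_NODES] = { "nodes",        1000, "Nodes per domain" },
    [ACC_WATCH] = { "watches",       128, "Watches per domain" },
    [ACC_TRANS] = { "transactions",   10, "Transactions per domain" },
};
```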

> ---
>   tools/xenstore/xenstored_control.c     |  43 ++------
>   tools/xenstore/xenstored_core.c        |  85 ++++++++--------
>   tools/xenstore/xenstored_core.h        |  10 --
>   tools/xenstore/xenstored_domain.c      | 132 +++++++++++++++++--------
>   tools/xenstore/xenstored_domain.h      |  12 ++-
>   tools/xenstore/xenstored_transaction.c |   5 +-
>   tools/xenstore/xenstored_watch.c       |   2 +-
>   7 files changed, 155 insertions(+), 134 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index a2ba64a15c..75f51a80db 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -221,35 +221,6 @@ static int do_control_log(const void *ctx, struct connection *conn,
>   	return 0;
>   }
>   
> -struct quota {
> -	const char *name;
> -	int *quota;
> -	const char *descr;
> -};
> -
> -static const struct quota hard_quotas[] = {
> -	{ "nodes", &quota_nb_entry_per_domain, "Nodes per domain" },
> -	{ "watches", &quota_nb_watch_per_domain, "Watches per domain" },
> -	{ "transactions", &quota_max_transaction, "Transactions per domain" },
> -	{ "outstanding", &quota_req_outstanding,
> -		"Outstanding requests per domain" },
> -	{ "transaction-nodes", &quota_trans_nodes,
> -		"Max. number of accessed nodes per transaction" },
> -	{ "memory", &quota_memory_per_domain_hard,
> -		"Total Xenstore memory per domain (error level)" },
> -	{ "node-size", &quota_max_entry_size, "Max. size of a node" },
> -	{ "path-max", &quota_max_path_len, "Max. length of a node path" },
> -	{ "permissions", &quota_nb_perms_per_node,
> -		"Max. number of permissions per node" },
> -	{ NULL, NULL, NULL }
> -};
> -
> -static const struct quota soft_quotas[] = {
> -	{ "memory", &quota_memory_per_domain_soft,
> -		"Total Xenstore memory per domain (warning level)" },
> -	{ NULL, NULL, NULL }
> -};
> -
>   static int quota_show_current(const void *ctx, struct connection *conn,
>   			      const struct quota *quotas)
>   {
> @@ -260,9 +231,11 @@ static int quota_show_current(const void *ctx, struct connection *conn,
>   	if (!resp)
>   		return ENOMEM;
>   
> -	for (i = 0; quotas[i].quota; i++) {
> +	for (i = 0; i < ACC_N; i++) {
> +		if (!quotas[i].name)
> +			continue;
>   		resp = talloc_asprintf_append(resp, "%-17s: %8d %s\n",
> -					      quotas[i].name, *quotas[i].quota,
> +					      quotas[i].name, quotas[i].val,
>   					      quotas[i].descr);
>   		if (!resp)
>   			return ENOMEM;
> @@ -274,7 +247,7 @@ static int quota_show_current(const void *ctx, struct connection *conn,
>   }
>   
>   static int quota_set(const void *ctx, struct connection *conn,
> -		     char **vec, int num, const struct quota *quotas)
> +		     char **vec, int num, struct quota *quotas)
>   {
>   	unsigned int i;
>   	int val;
> @@ -286,9 +259,9 @@ static int quota_set(const void *ctx, struct connection *conn,
>   	if (val < 1)
>   		return EINVAL;
>   
> -	for (i = 0; quotas[i].quota; i++) {
> -		if (!strcmp(vec[0], quotas[i].name)) {
> -			*quotas[i].quota = val;
> +	for (i = 0; i < ACC_N; i++) {
> +		if (quotas[i].name && !strcmp(vec[0], quotas[i].name)) {
> +			quotas[i].val = val;
>   			send_ack(conn, XS_CONTROL);
>   			return 0;
>   		}
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 65df2866bf..6e2fc06840 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -89,17 +89,6 @@ unsigned int trace_flags = TRACE_OBJ | TRACE_IO;
>   
>   static const char *sockmsg_string(enum xsd_sockmsg_type type);
>   
> -int quota_nb_entry_per_domain = 1000;
> -int quota_nb_watch_per_domain = 128;
> -int quota_max_entry_size = 2048; /* 2K */
> -int quota_max_transaction = 10;
> -int quota_nb_perms_per_node = 5;
> -int quota_trans_nodes = 1024;
> -int quota_max_path_len = XENSTORE_REL_PATH_MAX;
> -int quota_req_outstanding = 20;
> -int quota_memory_per_domain_soft = 2 * 1024 * 1024; /* 2 MB */
> -int quota_memory_per_domain_hard = 2 * 1024 * 1024 + 512 * 1024; /* 2.5 MB */
> -
>   unsigned int timeout_watch_event_msec = 20000;
>   
>   void trace(const char *fmt, ...)
> @@ -799,7 +788,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
>   		+ node->perms.num * sizeof(node->perms.p[0])
>   		+ node->datalen + node->childlen;
>   
> -	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
> +	if (domain_max_chk(conn, ACC_NODESZ, data.dsize)
>   	    && !no_quota_check) {
>   		errno = ENOSPC;
>   		return errno;
> @@ -1188,8 +1177,7 @@ bool is_valid_nodename(const struct connection *conn, const char *node)
>   	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
>   		local_off = 0;
>   
> -	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
> -			   quota_max_path_len))
> +	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off))
>   		return false;
>   
>   	return valid_chars(node);
> @@ -1501,7 +1489,7 @@ static struct node *create_node(struct connection *conn, const void *ctx,
>   	for (i = node; i; i = i->parent) {
>   		/* i->parent is set for each new node, so check quota. */
>   		if (i->parent &&
> -		    domain_nbentry(conn) >= quota_nb_entry_per_domain) {
> +		    domain_nbentry(conn) >= hard_quotas[ACC_NODES].val) {
>   			ret = ENOSPC;
>   			goto err;
>   		}
> @@ -1776,7 +1764,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
>   		return EINVAL;
>   
>   	perms.num--;
> -	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
> +	if (domain_max_chk(conn, ACC_NPERM, perms.num))
>   		return ENOSPC;
>   
>   	permstr = in->buffer + strlen(in->buffer) + 1;
> @@ -2644,7 +2632,16 @@ static void usage(void)
>   "                          memory: total used memory per domain for nodes,\n"
>   "                                  transactions, watches and requests, above\n"
>   "                                  which Xenstore will stop talking to domain\n"
> +"                          nodes: number nodes owned by a domain\n"
> +"                          node-permissions: number of access permissions per\n"
> +"                                            node\n"
> +"                          node-size: total size of a node (permissions +\n"
> +"                                     children names + content)\n"
>   "                          outstanding: number of outstanding requests\n"
> +"                          path-length: length of a node path\n"
> +"                          transactions: number of concurrent transactions\n"
> +"                                        per domain\n"
> +"                          watches: number of watches per domain"
>   "  -q, --quota-soft <what>=<nb> set a soft quota <what> to the value <nb>,\n"
>   "                          causing a warning to be issued via syslog() if the\n"
>   "                          limit is violated, allowed quotas are:\n"
> @@ -2695,12 +2692,12 @@ int dom0_domid = 0;
>   int dom0_event = 0;
>   int priv_domid = 0;
>   
> -static int get_optval_int(const char *arg)
> +static unsigned int get_optval_int(const char *arg)
>   {
>   	char *end;
> -	long val;
> +	unsigned long val;
>   
> -	val = strtol(arg, &end, 10);
> +	val = strtoul(arg, &end, 10);
The changes in get_optval_int() feel more like a clean-up, because the
returned value cannot be negative (see the check below). I would prefer
that they be done in a separate patch.

>   	if (!*arg || *end || val < 0 || val > INT_MAX)

Now that 'val' is an unsigned long, there is no point in checking that
val is < 0.

Lastly, I would rename the helper to make it clear that it returns an
unsigned value. How about get_optval_uint()?
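
Such a renamed helper could be sketched as below: parse with strtoul(), drop the now-impossible "< 0" check, and keep the INT_MAX upper bound. The real xenstored helper aborts via barf() on bad input; this standalone sketch returns UINT_MAX instead so the error path is observable.

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>
#include <assert.h>

/* Hypothetical sketch of the suggested get_optval_uint(). */
static unsigned int get_optval_uint(const char *arg)
{
    char *end;
    unsigned long val;

    errno = 0;
    val = strtoul(arg, &end, 10);
    /* Reject empty input, trailing junk, overflow, and values that do
     * not fit the int-sized quota fields.  A leading '-' wraps around
     * in strtoul() and is caught by the INT_MAX bound. */
    if (!*arg || *end || errno || val > INT_MAX)
        return UINT_MAX;   /* stand-in for barf("invalid parameter") */

    return (unsigned int)val;
}
```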

>   		barf("invalid parameter value \"%s\"\n", arg);
>   
> @@ -2709,15 +2706,19 @@ static int get_optval_int(const char *arg)
>   
>   static bool what_matches(const char *arg, const char *what)
>   {
> -	unsigned int what_len = strlen(what);
> +	unsigned int what_len;
> +
> +	if (!what)
> +		false;

Shouldn't this be "return false"?
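
For clarity, the issue is that a bare "false;" is an expression statement whose value is discarded, so a NULL 'what' falls through to strlen(NULL). A corrected version of the helper (reconstructed from the quoted diff, with the one-word fix applied) would be:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool what_matches(const char *arg, const char *what)
{
    unsigned int what_len;

    if (!what)
        return false;   /* the quoted patch has just "false;", a no-op */

    what_len = strlen(what);
    return !strncmp(arg, what, what_len) && arg[what_len] == '=';
}
```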

>   
> +	what_len = strlen(what);
>   	return !strncmp(arg, what, what_len) && arg[what_len] == '=';
>   }
>   
>   static void set_timeout(const char *arg)
>   {
>   	const char *eq = strchr(arg, '=');
> -	int val;
> +	unsigned int val;
>   
>   	if (!eq)
>   		barf("quotas must be specified via <what>=<seconds>\n");
> @@ -2731,22 +2732,22 @@ static void set_timeout(const char *arg)
>   static void set_quota(const char *arg, bool soft)
>   {
>   	const char *eq = strchr(arg, '=');
> -	int val;
> +	struct quota *q = soft ? soft_quotas : hard_quotas;
> +	unsigned int val;
> +	unsigned int i;
>   
>   	if (!eq)
>   		barf("quotas must be specified via <what>=<nb>\n");
>   	val = get_optval_int(eq + 1);
> -	if (what_matches(arg, "outstanding") && !soft)
> -		quota_req_outstanding = val;
> -	else if (what_matches(arg, "transaction-nodes") && !soft)
> -		quota_trans_nodes = val;
> -	else if (what_matches(arg, "memory")) {
> -		if (soft)
> -			quota_memory_per_domain_soft = val;
> -		else
> -			quota_memory_per_domain_hard = val;
> -	} else
> -		barf("unknown quota \"%s\"\n", arg);
> +
> +	for (i = 0; i < ACC_N; i++) {
> +		if (what_matches(arg, q[i].name)) {
> +			q[i].val = val;
> +			return;
> +		}
> +	}
> +
> +	barf("unknown quota \"%s\"\n", arg);
>   }
>   
>   /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
> @@ -2808,7 +2809,7 @@ int main(int argc, char *argv[])
>   			no_domain_init = true;
>   			break;
>   		case 'E':
> -			quota_nb_entry_per_domain = strtol(optarg, NULL, 10);
> +			hard_quotas[ACC_NODES].val = strtoul(optarg, NULL, 10);

I think we should use get_optval_int() here and in all the other cases below.

>   			break;
>   		case 'F':
>   			pidfile = optarg;
> @@ -2826,10 +2827,10 @@ int main(int argc, char *argv[])
>   			recovery = false;
>   			break;
>   		case 'S':
> -			quota_max_entry_size = strtol(optarg, NULL, 10);
> +			hard_quotas[ACC_NODESZ].val = strtoul(optarg, NULL, 10);
>   			break;
>   		case 't':
> -			quota_max_transaction = strtol(optarg, NULL, 10);
> +			hard_quotas[ACC_TRANS].val = strtoul(optarg, NULL, 10);
>   			break;
>   		case 'T':
>   			tracefile = optarg;
> @@ -2849,15 +2850,17 @@ int main(int argc, char *argv[])
>   			verbose = true;
>   			break;
>   		case 'W':
> -			quota_nb_watch_per_domain = strtol(optarg, NULL, 10);
> +			hard_quotas[ACC_WATCH].val = strtoul(optarg, NULL, 10);
>   			break;
>   		case 'A':
> -			quota_nb_perms_per_node = strtol(optarg, NULL, 10);
> +			hard_quotas[ACC_NPERM].val = strtoul(optarg, NULL, 10);
>   			break;
>   		case 'M':
> -			quota_max_path_len = strtol(optarg, NULL, 10);
> -			quota_max_path_len = min(XENSTORE_REL_PATH_MAX,
> -						 quota_max_path_len);
> +			hard_quotas[ACC_PATHLEN].val =
> +				strtoul(optarg, NULL, 10);
> +			hard_quotas[ACC_PATHLEN].val =
> +				 min((unsigned int)XENSTORE_REL_PATH_MAX,
> +				     hard_quotas[ACC_PATHLEN].val);
>   			break;
>   		case 'Q':
>   			set_quota(optarg, false);
Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 10:59:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 10:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529115.823137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAD6-0005WN-H1; Wed, 03 May 2023 10:59:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529115.823137; Wed, 03 May 2023 10:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAD6-0005WG-ED; Wed, 03 May 2023 10:59:48 +0000
Received: by outflank-mailman (input) for mailman id 529115;
 Wed, 03 May 2023 10:59:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tguP=AY=citrix.com=prvs=48085cdab=roger.pau@srs-se1.protection.inumbo.net>)
 id 1puAD4-0005WA-SF
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 10:59:47 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9bffbf91-e9a1-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 12:59:45 +0200 (CEST)
Received: from mail-bn8nam12lp2168.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 06:59:41 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by CH3PR03MB7433.namprd03.prod.outlook.com (2603:10b6:610:19f::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 10:59:39 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 10:59:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bffbf91-e9a1-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683111585;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=6RWwbk5SEKauqK8nsuOFdhJu82y4KomYtMURf7L1qLo=;
  b=Ht7CZnw9xqwtrZOi6zoUqaYMoJhnu1ePqRMrWQjFaWSI0kZKiRfY9bSa
   jdl69jaOxh3hy0AFhshmoPS1tpwrtmHs/bW1rwEuc+zUaj1b8uD3oqFU/
   MognUM4QVDzgt95W37ONSIlGqZUa3zE1EMY+CP3bRMUdYsvJ4hfdOjJpF
   o=;
X-IronPort-RemoteIP: 104.47.55.168
X-IronPort-MID: 108104179
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:dgedVqM8IBHZRmrvrR0hlsFynXyQoLVcMsEvi/4bfWQNrUom1zAHx
 zdLWGGGMv2JYTbxc4x2YIjkph5TvpLRyYIwQQto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tF5gdmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0uF1E39op
 P05E3MMYA+y2rqRnbOaZtA506zPLOGzVG8ekldJ6GiDSNwAEdXESaiM4sJE1jAtgMwIBezZe
 8cSdTtoalLHfgFLPVAUTpk5mY9EhFGmK2Ee9A3T+PdxujaDpOBy+OGF3N79YNuFSN8Thk+Fj
 mnH4374ElcRM9n3JT+tqyr837eTxHyqMG4UPIWZ5vJ3vUzI/XRJAVofc2uj++i602frDrqzL
 GRRoELCt5Ma9kamU938VB2Qu2Ofs1gXXN84O8037hucjJXd5QmxD3IBCDVGbbQOv8gzQCEs1
 0OY2dbgAzVgvae9WX+b7q2Trz65JW4SN2BqTS0ZSQoI5fHzrYd1iQjAJv54C7K8hNDxHTD2w
 hiJoTI4irFVitQEv42k+XjXjjTqoYLGJiYl6wOSUm+74wdRYI++e5fu+VXd9exHLouSUh+Gp
 ndspiSFxOUHDJXInirdRuwIReut/6zcbm2ahkNzFZ488Tjr42SkYY1b/DB5IgFuL9oAfjjqJ
 kTUvGu9+aNuAZdjVocvC6rZNijg5fGI+QjNPhwMUudzXw==
IronPort-HdrOrdr: A9a23:NsTmwal+0o1KkjCV5pE1VCV4g3LpDfII3DAbv31ZSRFFG/Fwwf
 re+cjzsiWE9Ar5OUtQ5OxoXZPrfZqyz+8T3WB8B8bFYOCighrSEGgA1+rfKl/baknDH4dmvM
 8KAstD4Z/LfCBHZK7BkWuF+r0bsaC6Gc6T5ds3uh1WPHtXgmJbgzuRyDz3LqS7fmZ77FMCeq
 ah2g==
X-Talos-CUID: 9a23:bIz+i2/EoNtwm+4Iu0SVv0kWCN4DI3z89zTRLVCJVD5OFby0ZWbFrQ==
X-Talos-MUID: 9a23:ltcYBASKJKQxBHRGRXTUpSFuKfp5uJ+2UklKmLM2o5OVNnVZbmI=
X-IronPort-AV: E=Sophos;i="5.99,247,1677560400"; 
   d="scan'208";a="108104179"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CxZ3/7zv6RX6YN8Ziz4xBdR50HNBzdunqDCzWnXipIgzdnCImQsRD4qTrfgMZw7teuQnjEfk4u3qLIDFL1B/fwaRi/WISIjsqOYzj9tOteiS2Y4j0AMgvgPgzqfYYkpPkye103biQ0fbGWcwLgJhzp2K5FvBlO+TmfdUjbia6qrEf4iFvziDY9qp5JGsK923GBy/AzobxbXMjfK7EMrujjydOK0PKBH4DA/JrOFbb90VYz+zB1hUAeJ3uZdTymec6QEcbKVf7QfGLSYKCVKHIKDI9ONB1MLGY378j/QEiNODGDJ4PgFMMf8nDIWXTc1vWYw/rXoKfFsEcLjCtjlhSQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=t1WAO9qDhaUZ+sqChhfahS0xja8lxmtg8lYiM5KDuO4=;
 b=FptxSn79Q4ffQXiK8i7+2SP0UtZVEUf8RLoq+D4wLHVV5ufQ8R5Xw0Y6k8OTLcR53P5og9ZpfGo+iETbDFuiFtOs4vJsOcIqgV77pTdTwhkvca407wEJZmJGqR0XxtPLUsgi6XEQ9W6pxIhwcMpHTuWr5exaCjvfFK2iMGGdsUo0qW4Cb8z2wC/6oRD+b+ED1P7F7OBDkiex1hHp3A8Hg6vL/9ffkCSDSX+1vId7BS/ik+RQDhWkj+vIk4V6IFZH2ECAX70Mczwxn2NnqQMBMzWRN+xdcAKA+Fa9XN796Pmk+4Jr6l9udVAjl5MBAR3fbjcg8yDcPr8JR6JW2+N7ww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t1WAO9qDhaUZ+sqChhfahS0xja8lxmtg8lYiM5KDuO4=;
 b=luGhlknKHK0I95pyLVB5TGzwPOD/CFMB3p3qspFRFv/gS0eVT1zW8fmQFW5B6JoYKTHhhX6d2jsFSPdAJOwLiwoC9Dv1yyA+wUKYPF5PSL2rehAmN31bcPZH/IEEXOE2oTkpBgrh5HFQg+I503i3vn/3Ds0dzn6IE0qugqOM9Gc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 3 May 2023 12:59:32 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3] xen/vcpu: ignore VCPU_SSHOTTMR_future
Message-ID: <ZFI+lJl8X96BtzqK@Air-de-Roger>
References: <20230419143155.36864-1-roger.pau@citrix.com>
 <0820220d-a500-7920-3630-eea074856e3f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0820220d-a500-7920-3630-eea074856e3f@suse.com>
X-ClientProxiedBy: LO2P265CA0082.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::22) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|CH3PR03MB7433:EE_
X-MS-Office365-Filtering-Correlation-Id: 1c5e9566-3d21-4ecf-bc09-08db4bc57d77
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1c5e9566-3d21-4ecf-bc09-08db4bc57d77
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 10:59:39.1594
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nLpPyJdsMW5etHRUzOyF4UqssSwCwtn2C7JjaiB/OXhGWvJQE1hk/rbQzh7viNA4FnmmbZK4+KI0VosYoGgG4g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR03MB7433

On Wed, May 03, 2023 at 12:38:59PM +0200, Jan Beulich wrote:
> On 19.04.2023 16:31, Roger Pau Monne wrote:
> > The usage of VCPU_SSHOTTMR_future in Linux prior to 4.7 is bogus.
> > When the hypervisor returns -ETIME (timeout in the past) Linux keeps
> > retrying to set up the timer with a higher timeout instead of
> > self-injecting a timer interrupt.
> > 
> > On boxes without any hardware assistance for logdirty we have seen HVM
> > Linux guests < 4.7 with 32 vCPUs give up trying to set up the timer when
> > logdirty is enabled:
> > 
> > CE: Reprogramming failure. Giving up
> > CE: xen increased min_delta_ns to 1000000 nsec
> > CE: Reprogramming failure. Giving up
> > CE: Reprogramming failure. Giving up
> > CE: xen increased min_delta_ns to 506250 nsec
> > CE: xen increased min_delta_ns to 759375 nsec
> > CE: xen increased min_delta_ns to 1000000 nsec
> > CE: Reprogramming failure. Giving up
> > CE: Reprogramming failure. Giving up
> > CE: Reprogramming failure. Giving up
> > Freezing user space processes ...
> > INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
> > Task dump for CPU 14:
> > swapper/14      R  running task        0     0      1 0x00000000
> > Call Trace:
> >  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> >  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> >  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> >  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> >  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> >  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> > INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
> > Task dump for CPU 26:
> > swapper/26      R  running task        0     0      1 0x00000000
> > Call Trace:
> >  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> >  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> >  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> >  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> >  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> >  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> > INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
> > Task dump for CPU 26:
> > swapper/26      R  running task        0     0      1 0x00000000
> > Call Trace:
> >  [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
> >  [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
> >  [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
> >  [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
> >  [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
> >  [<ffffffff900000d5>] ? start_cpu+0x5/0x14
> > 
> > Thus leading to CPU stalls and a broken system.
> > 
> > Work around this bogus usage by ignoring VCPU_SSHOTTMR_future in
> > the hypervisor.  Old Linux versions are the only ones known to have
> > (wrongly) attempted to use the flag, and ignoring it is compatible
> > with the behavior expected by any guests setting that flag.
> > 
> > Note the usage of the flag has been removed from Linux by commit:
> > 
> > c06b6d70feb3 xen/x86: don't lose event interrupts
> > 
> > which landed in Linux 4.7.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG
> 
> A little hesitantly, but since no-one else appears to show any interest:
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

Roger.
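[Editor's note: for readers unfamiliar with the flag semantics discussed in the patch above, here is a minimal C sketch of the before/after behaviour. The flag value matches the one in Xen's public vcpu.h, but the function shape is purely illustrative and is not the actual hypervisor code.]

```c
#include <assert.h>
#include <stdint.h>

#define VCPU_SSHOTTMR_future (1U << 0) /* flag value as in Xen's public vcpu.h */
#define ETIME 62

/*
 * Illustrative sketch only: with the flag honoured, a deadline already in
 * the past yields -ETIME and the guest is expected to self-inject the
 * timer interrupt; with the fix, the flag is ignored and the timer is
 * simply armed (a past deadline fires immediately), returning 0.
 */
static int set_singleshot_timer(uint64_t timeout_ns, uint64_t now_ns,
                                uint32_t flags, int honour_future_flag)
{
    if ( honour_future_flag && (flags & VCPU_SSHOTTMR_future) &&
         timeout_ns < now_ns )
        return -ETIME; /* old behaviour: guest must cope with -ETIME */

    /* new behaviour: arm the timer; a past deadline fires at once */
    return 0;
}
```

Pre-4.7 Linux reacted to -ETIME by retrying with ever larger timeouts, which is what produces the "Reprogramming failure. Giving up" messages quoted in the commit message.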


From xen-devel-bounces@lists.xenproject.org Wed May 03 11:05:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 11:05:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529118.823147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAId-00072v-9o; Wed, 03 May 2023 11:05:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529118.823147; Wed, 03 May 2023 11:05:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAId-00072o-6c; Wed, 03 May 2023 11:05:31 +0000
Received: by outflank-mailman (input) for mailman id 529118;
 Wed, 03 May 2023 11:05:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puAIb-00072i-Hv
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 11:05:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puAIa-0008Mk-FX; Wed, 03 May 2023 11:05:28 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puAIa-0006zv-7A; Wed, 03 May 2023 11:05:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=AeUrmNI/HwRkNCXfF485hOEexMp7VO2G/kxR0JPvuwo=; b=F3LB9urI+oDNr9Oi8jc/hvixXc
	SF8yb67dGPIonAapg/89vE52Vay4aODK1HPI+LQZBk9weHdIPx9F3UGnFt7IlqgLsGktOVktzop96
	7yn+YuyxcjxWHF5jhCIIdpowGNVRoGihsR3/PaqZJtyvRvZ87J+WR4xK+JJ5yft2nuUs=;
Message-ID: <7e8a0e14-51d0-a8aa-1395-6e50a7f6eb94@xen.org>
Date: Wed, 3 May 2023 12:05:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH] cmdline: document and enforce "extra_guest_irqs" upper
 bounds
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
References: <54e126fc-484b-92fa-ce66-f901f92ec19c@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <54e126fc-484b-92fa-ce66-f901f92ec19c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 04/04/2023 10:20, Jan Beulich wrote:
> PHYSDEVOP_pirq_eoi_gmfn_v<N> accepting just a single GFN implies that no
> more than 32k pIRQ-s can be used by a domain on x86. Document this upper
> bound.
> 
> To also enforce the limit, (ab)use both arch_hwdom_irqs() (changing its
> parameter type) and setup_system_domains(). This is primarily to avoid
> exposing the two static variables or introducing yet further arch hooks.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Instead of passing dom_xen into arch_hwdom_irqs(), NULL could also be
> used. That would make the connection to setup_system_domains() yet more
> weak, though.
> 
> On Arm the upper limit right now effectively is zero, albeit with -
> afaict - no impact if a higher value was used (and hence permitting up
> to the default of 32 is okay albeit useless). The question though is
> whether the command line option as a whole shouldn't be x86-only.

AFAIK, ->nr_pirq is not used at all on Arm because we don't have any 
concept of PIRQs.

So I think it would be fine to move the command line option to x86-only 
(assuming this will not be needed on RISC-V).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 11:06:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 11:06:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529121.823156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAJc-0007aO-Hk; Wed, 03 May 2023 11:06:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529121.823156; Wed, 03 May 2023 11:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAJc-0007aH-F9; Wed, 03 May 2023 11:06:32 +0000
Received: by outflank-mailman (input) for mailman id 529121;
 Wed, 03 May 2023 11:06:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puAJb-0007a3-4d
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 11:06:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puAJX-0008Ng-I3; Wed, 03 May 2023 11:06:27 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puAJX-000738-Ak; Wed, 03 May 2023 11:06:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=n1oLv0j52Jg7zu+EwY2y3c3OMNeTuHcEx0S6u4b3fxM=; b=2WQzMLvM1+lZKuSD/9ZVSNwxHX
	tsqrkqI5jff+/YehoXDRIQwxCwEOX1EvxzezCMrak5bEakJO4MKwPiqW8M+lkWeYkJtZjsV2j6yEZ
	i8kTmPlimc1vPVGkVkh70RzXUoPy4AS/Faww/SHaYZSdY2g+cf+3ihzAj4OGaHh9tIXU=;
Message-ID: <f2850cf3-91cc-f35e-f8b6-67eaf0d27c10@xen.org>
Date: Wed, 3 May 2023 12:06:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH] cmdline: document and enforce "extra_guest_irqs" upper
 bounds
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <54e126fc-484b-92fa-ce66-f901f92ec19c@suse.com>
 <6c5cdffa-f3fb-8f40-c44f-ad7431451929@citrix.com>
 <f3a11fa7-6e39-f7a9-7705-17c3af34273e@suse.com>
 <9f319f7b-5080-2802-09e0-2793d9dad1a7@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <9f319f7b-5080-2802-09e0-2793d9dad1a7@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 03/05/2023 11:43, Jan Beulich wrote:
> On 04.04.2023 12:40, Jan Beulich wrote:
>> On 04.04.2023 12:34, Andrew Cooper wrote:
>>> On 04/04/2023 10:20 am, Jan Beulich wrote:
>>>> --- a/xen/arch/arm/include/asm/irq.h
>>>> +++ b/xen/arch/arm/include/asm/irq.h
>>>> @@ -52,7 +52,7 @@ struct arch_irq_desc {
>>>>   
>>>>   extern const unsigned int nr_irqs;
>>>>   #define nr_static_irqs NR_IRQS
>>>> -#define arch_hwdom_irqs(domid) NR_IRQS
>>>> +#define arch_hwdom_irqs(d) NR_IRQS
>>>
>>> I know it's not your bug, but this ought to be (d, NR_IRQS) as you're
>>> changing it.
>>
>> I can add this (with a cast to void), but I'll leave the final say to
>> Arm maintainers.
> 
> Arm maintainers,
> 
> may I ask for some kind of statement here? I'd be happy to follow
> Andrew's request, but I don't want to then end up with an "unrelated
> change" objection.

I am OK with it so long as it is mentioned in the commit message.

Cheers,

-- 
Julien Grall
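[Editor's note: the macro-hygiene point Andrew raised above (evaluating a deliberately ignored macro argument via a cast to void) can be sketched like this; the NR_IRQS value here is a placeholder, not Xen's definition.]

```c
#include <assert.h>

#define NR_IRQS 1024 /* placeholder value, not Xen's */

/* Original form: the argument is silently dropped, so a bad expression
 * passed as 'd' would go entirely unchecked by the compiler. */
#define arch_hwdom_irqs_v1(d) NR_IRQS

/* Suggested form: the argument is still evaluated (and type-checked),
 * then its result is discarded via the comma operator. */
#define arch_hwdom_irqs_v2(d) ((void)(d), NR_IRQS)
```

Both forms expand to the same value; the second keeps callers honest about the argument they pass while documenting that it is intentionally unused.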


From xen-devel-bounces@lists.xenproject.org Wed May 03 11:18:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 11:18:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529124.823167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAUp-0000iM-IV; Wed, 03 May 2023 11:18:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529124.823167; Wed, 03 May 2023 11:18:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAUp-0000iF-F2; Wed, 03 May 2023 11:18:07 +0000
Received: by outflank-mailman (input) for mailman id 529124;
 Wed, 03 May 2023 11:18:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puAUo-0000i9-7U
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 11:18:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puAUn-00007V-Gv; Wed, 03 May 2023 11:18:05 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puAUn-0007XG-8n; Wed, 03 May 2023 11:18:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=LM+GbpkIg/ENzcdPKrIAU1+f7oH9sLXG/MVrC12j6Cs=; b=LlQVSYlyBgU1o3n5F7uai8ef/v
	VFQCFcZJxiP5Z3aaDKYXsFA4bLKPheX1k4/NRDQTa0/RGNR//jXsx2urJev4N5odebmF9lh9uFYvd
	J+8mM6EyMpiJfLfrI2uwo4xgsg3e4Er1AUp7K0wOhSjMpnTtFRA2BOTgaMav38vI393g=;
Message-ID: <935ea085-5a82-973a-00cf-b5f57b4e769b@xen.org>
Date: Wed, 3 May 2023 12:18:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 02/12] xen/arm: Typecast the DT values into paddr_t
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-3-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230428175543.11902-3-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 28/04/2023 18:55, Ayan Kumar Halder wrote:
> The DT functions (dt_read_number(), device_tree_get_reg(), fdt_get_mem_rsv())
> currently accept or return 64-bit values.
> 
> In the future, when we support 32-bit physical addresses, these DT
> functions are expected to accept/return 32-bit or 64-bit values
> (depending on the width of the physical address). Also, we wish to
> detect whether any truncation has occurred (i.e. while parsing 32-bit
> physical addresses from 64-bit values read from the DT).
> 
> device_tree_get_reg() should now be able to return paddr_t. This is invoked by
> various callers to get DT address and size.
> 
> For fdt_get_mem_rsv(), we have introduced a wrapper named
> fdt_get_mem_rsv_paddr() which invokes fdt_get_mem_rsv() and translates
> uint64_t to paddr_t. The reason is that we cannot modify
> fdt_get_mem_rsv() itself, as it has been imported from an external
> source.
> 
> For dt_read_number(), we have also introduced a wrapper named
> dt_read_paddr() to read physical addresses. We chose not to modify the
> original function as it is used in places where it needs to specifically
> read 64-bit values from the DT (e.g. dt_property_read_u64()).
> 
> Xen prints a warning when it detects truncation in cases where it is
> not able to return an error.
> 
> Also, replaced u32/u64 with uint32_t/uint64_t in the functions touched
> by the code changes.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall
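[Editor's note: the wrapper-plus-truncation-check pattern described in the commit message above can be sketched as follows, assuming a build where paddr_t is 32-bit. The function name and shape are illustrative only, not the patch's actual code.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t paddr_t; /* assumption: a 32-bit physical-address build */

/*
 * dt_read_number() keeps returning uint64_t, since DT cells are read as
 * 64-bit values; a paddr_t wrapper can detect truncation by casting down
 * and round-tripping the value, mirroring the
 * "dt_addr != (paddr_t)dt_addr" check mentioned in the v3 changelog.
 */
static paddr_t dt_read_paddr_sketch(uint64_t val, int *truncated)
{
    paddr_t addr = (paddr_t)val;

    *truncated = ((uint64_t)addr != val);
    return addr;
}
```

On a 64-bit paddr_t build the round-trip comparison can never fail, so the check compiles away to nothing in practice.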


From xen-devel-bounces@lists.xenproject.org Wed May 03 11:25:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 11:25:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529130.823177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAba-0002Aj-Bj; Wed, 03 May 2023 11:25:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529130.823177; Wed, 03 May 2023 11:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAba-0002Ac-8Z; Wed, 03 May 2023 11:25:06 +0000
Received: by outflank-mailman (input) for mailman id 529130;
 Wed, 03 May 2023 11:25:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puAbZ-0002AW-RE
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 11:25:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puAbZ-0000GD-Bn; Wed, 03 May 2023 11:25:05 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puAbZ-0007oI-3q; Wed, 03 May 2023 11:25:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=6sG+lt55RW6cg3w+0EPGSycs/EzwUim5Jp6dpCWi5DA=; b=UD0X50o87WcMeQjBXdYmyau2vZ
	mziPLsMUO/gH64SjlIlcx2dE/LG+bTr+1AjCWgCjh7/vX0EjWzc3gzj6T39UaF5wLBEDx+SmiZX51
	Eiu4qKc/7LHJp19Ss4qd7Gvs3lK6hyv92fbO5jDOXA2DPEkoBbzJagznZTmJ0OjcRyp4=;
Message-ID: <37c9a45f-ae07-8d47-093a-6cf7501389d4@xen.org>
Date: Wed, 3 May 2023 12:25:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 03/12] xen/arm: Introduce a wrapper for
 dt_device_get_address() to handle paddr_t
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-4-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230428175543.11902-4-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 28/04/2023 18:55, Ayan Kumar Halder wrote:
> dt_device_get_address() can accept only uint64_t for address and size.
> However, the address/size denote physical addresses, so they should be
> represented by 'paddr_t'.
> Consequently, we introduce a wrapper for dt_device_get_address(), i.e.
> dt_device_get_paddr(), which accepts address/size as paddr_t and in
> turn invokes dt_device_get_address() after converting address/size to
> uint64_t.
> 
> The reason for introducing this is that in future 'paddr_t' may not
> always be 64-bit. Thus, we need an explicit wrapper to do the type
> conversion and return an error in case of truncation.
> 
> With this, callers can now invoke dt_device_get_paddr().
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from -
> 
> v1 - 1. New patch.
> 
> v2 - 1. Extracted part of "[XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size"
> into this patch.
> 
> 2. dt_device_get_address() callers now invoke dt_device_get_paddr() instead.
> 
> 3. Logged error in case of truncation.
> 
> v3 - 1. Modified the truncation checks as "dt_addr != (paddr_t)dt_addr".
> 2. Some sanity fixes.
> 
> v4 - 1. Some sanity fixes.
> 2. Preserved the declaration of dt_device_get_address() in
> xen/include/xen/device_tree.h. The reason being it is currently used by
> ns16550.c. This driver requires some more changes as pointed by Jan in
> https://lore.kernel.org/xen-devel/6196e90f-752e-e61a-45ce-37e46c22b812@suse.com/
> which is to be addressed as a separate series.
> 
> v5 - 1. Removed initialization of variables.
> 2. In dt_device_get_paddr(), added the check
> if ( !addr )
>      return -EINVAL;
> 
>   xen/arch/arm/domain_build.c                | 10 +++---
>   xen/arch/arm/gic-v2.c                      | 10 +++---
>   xen/arch/arm/gic-v3-its.c                  |  4 +--
>   xen/arch/arm/gic-v3.c                      | 10 +++---
>   xen/arch/arm/pci/pci-host-common.c         |  6 ++--
>   xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
>   xen/arch/arm/platforms/brcm.c              |  6 ++--
>   xen/arch/arm/platforms/exynos5.c           | 32 +++++++++---------
>   xen/arch/arm/platforms/sunxi.c             |  2 +-
>   xen/arch/arm/platforms/xgene-storm.c       |  2 +-
>   xen/common/device_tree.c                   | 39 ++++++++++++++++++++++
>   xen/drivers/char/cadence-uart.c            |  4 +--
>   xen/drivers/char/exynos4210-uart.c         |  4 +--
>   xen/drivers/char/imx-lpuart.c              |  4 +--
>   xen/drivers/char/meson-uart.c              |  4 +--
>   xen/drivers/char/mvebu-uart.c              |  4 +--
>   xen/drivers/char/omap-uart.c               |  4 +--
>   xen/drivers/char/pl011.c                   |  6 ++--
>   xen/drivers/char/scif-uart.c               |  4 +--

What about the call in xen/drivers/char/ns16550.c?

>   xen/drivers/passthrough/arm/ipmmu-vmsa.c   |  8 ++---
>   xen/drivers/passthrough/arm/smmu-v3.c      |  2 +-
>   xen/drivers/passthrough/arm/smmu.c         |  8 ++---
>   xen/include/xen/device_tree.h              | 13 ++++++++
>   23 files changed, 120 insertions(+), 68 deletions(-)
Cheers,

-- 
Julien Grall
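[Editor's note: the v5 changelog items above (the NULL-pointer check and the error on truncation) suggest a wrapper roughly of the following shape. Everything here, including the 64-bit helper stand-in and the choice of error codes, is a hedged sketch rather than the actual Xen implementation.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define EINVAL 22
#define ERANGE 34

typedef uint32_t paddr_t; /* assumption: a 32-bit physical-address build */

/* Hypothetical stand-in for dt_device_get_address(): output is 64-bit. */
static int get_address_u64(uint64_t in, uint64_t *addr)
{
    *addr = in;
    return 0;
}

/*
 * Sketch of the wrapper: reject a NULL output pointer, call the 64-bit
 * helper, then fail (rather than silently truncate) when the value does
 * not fit in paddr_t.
 */
static int dt_device_get_paddr_sketch(uint64_t in, paddr_t *addr)
{
    uint64_t a64;
    int ret;

    if ( !addr )
        return -EINVAL;

    ret = get_address_u64(in, &a64);
    if ( ret )
        return ret;

    if ( (uint64_t)(paddr_t)a64 != a64 )
        return -ERANGE; /* truncation detected */

    *addr = (paddr_t)a64;
    return 0;
}
```

Returning an error here is what lets callers such as UART drivers fail cleanly instead of probing a wrapped-around physical address.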


From xen-devel-bounces@lists.xenproject.org Wed May 03 11:41:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 11:41:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529132.823186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAr2-0004am-Pf; Wed, 03 May 2023 11:41:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529132.823186; Wed, 03 May 2023 11:41:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAr2-0004af-M9; Wed, 03 May 2023 11:41:04 +0000
Received: by outflank-mailman (input) for mailman id 529132;
 Wed, 03 May 2023 11:41:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+yte=AY=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1puAr1-0004aZ-1j
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 11:41:03 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5ea33c10-e9a7-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 13:40:58 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-638-0-m7RWIiMhaNdaGkO17WiQ-1; Wed, 03 May 2023 07:40:55 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6BEB7A0F386;
 Wed,  3 May 2023 11:40:54 +0000 (UTC)
Received: from redhat.com (dhcp-192-205.str.redhat.com [10.33.192.205])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 132F1405DBBC;
 Wed,  3 May 2023 11:40:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ea33c10-e9a7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683114057;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dRwOSZS087B+j9NDbbckQmSe0NEnNmqFyABRgslLrL0=;
	b=QathHsffgtsumWCQxOd1tvVwCEFZzKWMwfiTHAJ3ebD2ZZ5FHqBTZdNG2IcXPYPUlkkiQq
	XFOSdcpdSe1h66XSIeiG8P5VMBhL45pwU7r3fLIgrikl3bUU8KklC/J2h+JoTaOXvC9oHp
	90TxTV5HTVBVLnIIcRqNdnsmfXJYxsQ=
X-MC-Unique: 0-m7RWIiMhaNdaGkO17WiQ-1
Date: Wed, 3 May 2023 13:40:49 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel P. Berrangé <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 04/20] virtio-scsi: stop using aio_disable_external()
 during unplug
Message-ID: <ZFJIQW6RpndfCcXR@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-5-stefanha@redhat.com>
 <ZEvWv8dF78Jpb6CQ@redhat.com>
 <20230501150934.GA14869@fedora>
 <ZFEN+KY8JViTDtv/@redhat.com>
 <20230502200243.GD535070@fedora>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Sh4VljrAVjcetdM0"
Content-Disposition: inline
In-Reply-To: <20230502200243.GD535070@fedora>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1


--Sh4VljrAVjcetdM0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On 02.05.2023 at 22:02, Stefan Hajnoczi wrote:
> On Tue, May 02, 2023 at 03:19:52PM +0200, Kevin Wolf wrote:
> > On 01.05.2023 at 17:09, Stefan Hajnoczi wrote:
> > > On Fri, Apr 28, 2023 at 04:22:55PM +0200, Kevin Wolf wrote:
> > > > On 25.04.2023 at 19:27, Stefan Hajnoczi wrote:
> > > > > This patch is part of an effort to remove the aio_disable_external()
> > > > > API because it does not fit in a multi-queue block layer world where
> > > > > many AioContexts may be submitting requests to the same disk.
> > > > >
> > > > > The SCSI emulation code is already in good shape to stop using
> > > > > aio_disable_external(). It was only used by commit 9c5aad84da1c
> > > > > ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
> > > > > disk") to ensure that virtio_scsi_hotunplug() works while the guest
> > > > > driver is submitting I/O.
> > > > >
> > > > > Ensure virtio_scsi_hotunplug() is safe as follows:
> > > > >
> > > > > 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
> > > > >    device_set_realized() calls qatomic_set(&dev->realized, false) so
> > > > >    that future scsi_device_get() calls return NULL because they exclude
> > > > >    SCSIDevices with realized=false.
> > > > >
> > > > >    That means virtio-scsi will reject new I/O requests to this
> > > > >    SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
> > > > >    virtio_scsi_hotunplug() is still executing. We are protected against
> > > > >    new requests!
> > > > >
> > > > > 2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
> > > > >    that in-flight requests are cancelled synchronously. This ensures
> > > > >    that no in-flight requests remain once qdev_simple_device_unplug_cb()
> > > > >    returns.
> > > > >
> > > > > Thanks to these two conditions we don't need aio_disable_external()
> > > > > anymore.
> > > > >
> > > > > Cc: Zhengui Li <lizhengui@huawei.com>
> > > > > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > > Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> > > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > >
> > > > qemu-iotests 040 starts failing for me after this patch, with what looks
> > > > like a use-after-free error of some kind.
> > > >
> > > > (gdb) bt
> > > > #0  0x000055b6e3e1f31c in job_type (job=0xe3e3e3e3e3e3e3e3) at ../job.c:238
> > > > #1  0x000055b6e3e1cee5 in is_block_job (job=0xe3e3e3e3e3e3e3e3) at ../blockjob.c:41
> > > > #2  0x000055b6e3e1ce7d in block_job_next_locked (bjob=0x55b6e72b7570) at ../blockjob.c:54
> > > > #3  0x000055b6e3df6370 in blockdev_mark_auto_del (blk=0x55b6e74af0a0) at ../blockdev.c:157
> > > > #4  0x000055b6e393e23b in scsi_qdev_unrealize (qdev=0x55b6e7c04d40) at ../hw/scsi/scsi-bus.c:303
> > > > #5  0x000055b6e3db0d0e in device_set_realized (obj=0x55b6e7c04d40, value=false, errp=0x55b6e497c918 <error_abort>) at ../hw/core/qdev.c:599
> > > > #6  0x000055b6e3dba36e in property_set_bool (obj=0x55b6e7c04d40, v=0x55b6e7d7f290, name=0x55b6e41bd6d8 "realized", opaque=0x55b6e7246d20, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:2285
> > > > #7  0x000055b6e3db7e65 in object_property_set (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", v=0x55b6e7d7f290, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1420
> > > > #8  0x000055b6e3dbd84a in object_property_set_qobject (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=0x55b6e74c1890, errp=0x55b6e497c918 <error_abort>) at ../qom/qom-qobject.c:28
> > > > #9  0x000055b6e3db8570 in object_property_set_bool (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=false, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1489
> > > > #10 0x000055b6e3daf2b5 in qdev_unrealize (dev=0x55b6e7c04d40) at ../hw/core/qdev.c:306
> > > > #11 0x000055b6e3db509d in qdev_simple_device_unplug_cb (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/qdev-hotplug.c:72
> > > > #12 0x000055b6e3c520f9 in virtio_scsi_hotunplug (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/scsi/virtio-scsi.c:1065
> > > > #13 0x000055b6e3db4dec in hotplug_handler_unplug (plug_handler=0x55b6e81c3630, plugged_dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/hotplug.c:56
> > > > #14 0x000055b6e3a28f84 in qdev_unplug (dev=0x55b6e7c04d40, errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:935
> > > > #15 0x000055b6e3a290fa in qmp_device_del (id=0x55b6e74c1760 "scsi0", errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:955
> > > > #16 0x000055b6e3fb0a5f in qmp_marshal_device_del (args=0x7f61cc005eb0, ret=0x7f61d5a8ae38, errp=0x7f61d5a8ae40) at qapi/qapi-commands-qdev.c:114
> > > > #17 0x000055b6e3fd52e1 in do_qmp_dispatch_bh (opaque=0x7f61d5a8ae08) at ../qapi/qmp-dispatch.c:128
> > > > #18 0x000055b6e4007b9e in aio_bh_call (bh=0x55b6e7dea730) at ../util/async.c:155
> > > > #19 0x000055b6e4007d2e in aio_bh_poll (ctx=0x55b6e72447c0) at ../util/async.c:184
> > > > #20 0x000055b6e3fe3b45 in aio_dispatch (ctx=0x55b6e72447c0) at ../util/aio-posix.c:421
> > > > #21 0x000055b6e4009544 in aio_ctx_dispatch (source=0x55b6e72447c0, callback=0x0, user_data=0x0) at ../util/async.c:326
> > > > #22 0x00007f61ddc14c7f in g_main_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:3454
> > > > #23 g_main_context_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:4172
> > > > #24 0x000055b6e400a7e8 in glib_pollfds_poll () at ../util/main-loop.c:290
> > > > #25 0x000055b6e400a0c2 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313
> > > > #26 0x000055b6e4009fa2 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:592
> > > > #27 0x000055b6e3a3047b in qemu_main_loop () at ../softmmu/runstate.c:731
> > > > #28 0x000055b6e3dab27d in qemu_default_main () at ../softmmu/main.c:37
> > > > #29 0x000055b6e3dab2b8 in main (argc=24, argv=0x7ffec55196a8) at ../softmmu/main.c:48
> > > > (gdb) p jobs
> > > > $4 = {lh_first = 0x0}
> > >
> > > I wasn't able to reproduce this with gcc 13.1.1 or clang 16.0.1:
> > >
> > >   $ tests/qemu-iotests/check -qcow2 040
> > >
> > > Any suggestions on how to reproduce the issue?
> >
> > It happens consistently for me with the same command line, both with gcc
> > and clang.
> >
> > gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
> > clang version 15.0.7 (Fedora 15.0.7-2.fc37)
> >
> > Maybe there is a semantic merge conflict? I have applied the series on
> > top of master (05d50ba2d4) and my block branch (88f81f7bc8).
>
> I can't find 88f81f7bc8 but rebased on repo.or.cz/qemu/kevin.git block
> (4514dac7f2e9) and the test passes here.
>
> I rebased on qemu.git/master (05d50ba2d4) and it also passes.
>
> Please let me know if the following tree (a0ff680a72f6) works on your
> machine:
> https://gitlab.com/stefanha/qemu/-/tree/remove-aio_disable_external

Fails in the same way.

So I tried to debug this myself. The problem is that iterating over the
jobs in blockdev_mark_auto_del() is incorrect: job_cancel_locked() frees
the job, so the subsequent block_job_next_locked() call is a use-after-free.

It also drops job_mutex temporarily and polls, so even switching to a
*_FOREACH_SAFE style loop won't fix this. I guess we have to restart
the whole search from the beginning after each job_cancel_locked(),
because the list might look completely different after the call.

Now, of course, how this is related to your patch and why it doesn't
trigger before it is still less than clear. What I found out is that
adding the scsi_device_purge_requests() call is enough to cause the
crash. Maybe it's related to the blk_drain() inside of it: perhaps the
job now finishes earlier during the unplug, or something like that.

Anyway, changing blockdev_mark_auto_del() fixes it. I'll send a patch.

Kevin

--Sh4VljrAVjcetdM0
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAmRSSEAACgkQfwmycsiP
L9Z4jRAAvTT64p9bQdNSCCIFrqkkGEZS5J/8ud4wvKBgYacJ8pOY30Z2B3dA78bz
ik5KMZlaTa4GXFwlKGNn/OUGiB3ivuhXeIUHXv+pw8F9sCQy4WJDpeO6WPDtQ6Os
swWWC3Uv+cnfLQH8dZjf4yoLY4hYACt683Ptml960LIYhpX/AXjdnOv6f/tj3gXO
q+mxfLwTi5S08Kkq08e8sXbqCwSiTPXX106MDdk/oMD3MxzUmqu6qFxrf9n6Yp5i
kI1rJD0VR64ScA58lJlFnAdZGWL7d9kX5WUbZ+x27lLay4Uel4cmY9AIsUWThnHZ
fMk2Sol1aUKSEAlVdHL5vWbQF/UyYe+KB2tn++4HJZ/ojVBL1GNdnWgmk+ZPXGfI
BtIc5+h7Qsa2uXUX2gJQYP7j6Y/EBQMAT0oTlG2v+CxS+1MrSa4O9ufnx1c9hoLF
FOUU2CT4vrJlYI281yWZ5R6pZrm0ZJMHAGStCJH4/ayo1xVj/5gPdffIRksFwUzy
yfia4Pis1nU9ET5riOk4WTCHz44BgIPvkDSYe1aJMS9XT3DM6MsZm9shuUMueLBd
0H5rU7ScgGB6ZLJLOQN0m84AfHOrxSgo7+bdAmXriAYvgDwr9B6fc50cDUBOEGNo
zDb0PBH7R3fZT9rEpHlQi4t3SOBkDV3L1toGvWHYymLPkzOC3/w=
=9639
-----END PGP SIGNATURE-----

--Sh4VljrAVjcetdM0--



From xen-devel-bounces@lists.xenproject.org Wed May 03 11:42:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 11:42:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529137.823197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAsU-0005Ad-7I; Wed, 03 May 2023 11:42:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529137.823197; Wed, 03 May 2023 11:42:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puAsU-0005AW-4W; Wed, 03 May 2023 11:42:34 +0000
Received: by outflank-mailman (input) for mailman id 529137;
 Wed, 03 May 2023 11:42:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puAsS-0005AQ-QW
 for xen-devel@lists.xen.org; Wed, 03 May 2023 11:42:32 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20607.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::607])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 96bd1c28-e9a7-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 13:42:31 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM7PR04MB6998.eurprd04.prod.outlook.com (2603:10a6:20b:10a::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.33; Wed, 3 May
 2023 11:42:29 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 11:42:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96bd1c28-e9a7-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jzIJHT4HhHUjeJNm9Xv4nW2Mc4JnyDJHVwILzK1biKIll3zQkfRVloH9QIqqwyXKpe7WtUwgSmy3sQ16nvm7n53i01Ib53ODdFW4wBkxvFeecIuL5QBu44kXLSIRcTKyaZcAYQgz60iekk/AD2F2R88vJz7fpQRwwZe0gn5b2V3iYbB7fWVB9e2CTmp7+H/gOnL/kaStwQYsNnQ5sSfXhRrU3Sk3gTyH4biXkv+/+qU2ssjstGb/W9VpqNx8kjvy3L/b+cj38lHKwgkh0LlQEtsYhfBi+5h3vWfy65R4WdIRiNZrivGCMLNcxYZ8URw77jZtGEZgDIL4xIrEQfTAiA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=T6yWBchkBNgoC9CCoa83SX//6pM98VoMRlufZMx9IEM=;
 b=IofnN4TDjXT29HplfUE20jkIWNUP2ZHTcDn89WiqK+UMsg+DWahRY5fBm+z6wAR7h8/dEZjIFRDuP6rkUZXNluFX19V6jZ5avoC6KirOceNJxzeSJ6DEEYqrhUdGsKurDv/rt+bf60enQ/5KLQrckMTEOaBP3ELQQ3+voGcY1iiPSSq2flZSYb1R1oTYGvwfyKCZr/JqyEJruzQdDACJaR0qSt3Z6ogZPWDMeo7M1xyb4Elj+FnASURJezFnv0B5koaxiGgh7nrMhBeEZoNDIX/LkFxuFQzSUILicT4sAV7Bv+5OiOrjO4seh2j2Me5WRciBvVw4sM1usA30RLGlpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=T6yWBchkBNgoC9CCoa83SX//6pM98VoMRlufZMx9IEM=;
 b=wZZcmxFX77cbgKZ1aJXQExSyrxQd7BzNO2ILww8/BBZLPIEDTMf6p1I5b3eN/DxqC0w44goDt17QPb3JKQlUffdJjN1PWAgq01UjiE8njkG2BKF9Uv3zcjorySA0AzVGD2j9N7Y07Y/MvpLS6XYFQkZLmHlKAZDFcLMuxiFnIX4lXsHpGq4IisKs9Ua9pjekCqpOX31SNVjngZff+/9FynlfpgHEkpxYZn+SB0GOxbLZtAqmy/EijRO3nSGVKxJhXQtthxn40ssizIg/olOZpH8WkzGAG2ivMLKJSwxyjEez+67Op+yoeDhZHxzQxTaIb2tnMzmCjoJMOml8AStebg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6d18f284-8cf9-4c21-7057-5f53bd98536e@suse.com>
Date: Wed, 3 May 2023 13:42:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH V3 2/2] libxl: fix matching of generic virtio device
Content-Language: en-US
To: Viresh Kumar <viresh.kumar@linaro.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
 stratos-dev@op-lists.linaro.org, Alex Bennée <alex.bennee@linaro.org>,
 Mathieu Poirier <mathieu.poirier@linaro.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>,
 Erik Schilling <erik.schilling@linaro.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xen.org, Julien Grall <julien@xen.org>,
 Juergen Gross <jgross@suse.com>
References: <18458fa39433ce4ac950a0a20cc64da93db0b03a.1680771422.git.viresh.kumar@linaro.org>
 <888e60d2ec49f53230bc82df393b6bed4180cb8a.1680771422.git.viresh.kumar@linaro.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <888e60d2ec49f53230bc82df393b6bed4180cb8a.1680771422.git.viresh.kumar@linaro.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0136.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::10) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AM7PR04MB6998:EE_
X-MS-Office365-Filtering-Correlation-Id: d8beee39-b378-42c4-f366-08db4bcb7975
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d8beee39-b378-42c4-f366-08db4bcb7975
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 11:42:29.4095
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ajXbDfDc/mJOmnEg/SLlzCr2Co2o+nDsb6Qaaxmmr/9j3NqgErSGuZqW5BQvH3M6m2q+PWoUWgyFXRbSe7hYgQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6998

On 06.04.2023 10:58, Viresh Kumar wrote:
> The strings won't be an exact match, as we are only looking to match the
> prefix here, i.e. "virtio,device". This is already done properly in
> libxl_virtio.c file, lets do the same here too.
> 
> Fixes: 43ba5202e2ee ("libxl: add support for generic virtio device")
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

While I've committed the doc patch (patch 1), I don't think I should
commit this one without a maintainer ack, even if it looks pretty
straightforward. Anthony, Wei?

Jan

> ---
> V2->V3:
> - Tag from Oleksandr.
> 
>  tools/libs/light/libxl_arm.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index ddc7b2a15975..97c80d7ed0fa 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -1033,10 +1033,14 @@ static int make_virtio_mmio_node_device(libxl__gc *gc, void *fdt, uint64_t base,
>      } else if (!strcmp(type, VIRTIO_DEVICE_TYPE_GPIO)) {
>          res = make_virtio_mmio_node_gpio(gc, fdt);
>          if (res) return res;
> -    } else if (strcmp(type, VIRTIO_DEVICE_TYPE_GENERIC)) {
> -        /* Doesn't match generic virtio device */
> -        LOG(ERROR, "Invalid type for virtio device: %s", type);
> -        return -EINVAL;
> +    } else {
> +        int len = sizeof(VIRTIO_DEVICE_TYPE_GENERIC) - 1;
> +
> +        if (strncmp(type, VIRTIO_DEVICE_TYPE_GENERIC, len)) {
> +            /* Doesn't match generic virtio device */
> +            LOG(ERROR, "Invalid type for virtio device: %s", type);
> +            return -EINVAL;
> +        }
>      }
>  
>      return fdt_end_node(fdt);



From xen-devel-bounces@lists.xenproject.org Wed May 03 12:02:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:02:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529144.823208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBBw-0007jJ-5g; Wed, 03 May 2023 12:02:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529144.823208; Wed, 03 May 2023 12:02:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBBw-0007jC-0v; Wed, 03 May 2023 12:02:40 +0000
Received: by outflank-mailman (input) for mailman id 529144;
 Wed, 03 May 2023 12:02:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puBBv-0007j6-7E
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 12:02:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBBu-0001Cs-Jm; Wed, 03 May 2023 12:02:38 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBBu-0001C8-CJ; Wed, 03 May 2023 12:02:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=VMVN2vAC2nE/Zj3G5fg5xXXvofC9KMg65JAvt4knDXs=; b=bdKzYUSOlnV/tUygOkaX1/EW3u
	f8BtAxDvy73ogFwi9vxzrfB+P3JBJdrgvhWR+d/lZjjWBe/E9T4QTsH4gK+2dE/how1+SBeyQNkW0
	pxOOnx+SVRvbUekyof/misBK7tQSuFp2Xj9JIg8txCTjnkluPZ/2K6VzGKMhtm5H9q2I=;
Message-ID: <6ab92e3d-03b9-91d3-ef6f-697f69a2debd@xen.org>
Date: Wed, 3 May 2023 13:02:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 05/12] xen/arm: Introduce choice to enable 64/32 bit
 physical addressing
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-6-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230428175543.11902-6-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 28/04/2023 18:55, Ayan Kumar Halder wrote:
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 239d3aed3c..192582b61d 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -19,13 +19,41 @@ config ARM
>   	select HAS_PMAP
>   	select IOMMU_FORCE_PT_SHARE
>   
> +menu "Architecture Features"
> +
> +choice
> +	prompt "Physical address space size" if ARM_32
> +	default ARM_PA_BITS_40 if ARM_32
> +	help
> +	  User can choose to represent the width of physical address. This can
> +	  sometimes help in optimizing the size of image when user chooses a
> +	  smaller size to represent physical address.
> +
> +config ARM_PA_BITS_32
> +	bool "32-bit"
> +	depends on ARM_32
> +	select PHYS_ADDR_T_32
> +	help
> +	  On platforms where any physical address can be represented within 32 bits,
> +	  user should choose this option. This will help is reduced size of the
> +	  binary.
> +
> +config ARM_PA_BITS_40
> +	bool "40-bit"
> +	depends on ARM_32
> +endchoice
> +
> +config PADDR_BITS
> +	int
> +	default 32 if ARM_PA_BITS_32
> +	default 40 if ARM_PA_BITS_40
> +	default 48 if ARM_64
> +
>   config ARCH_DEFCONFIG

Any particular reason to move this config under "Architectures 
features"? IOW... Why didn't you add...

>   	string
>   	default "arch/arm/configs/arm32_defconfig" if ARM_32
>   	default "arch/arm/configs/arm64_defconfig" if ARM_64
>   
> -menu "Architecture Features"
> -

... your new config here rather than moving "menu"?

>   source "arch/Kconfig"
>   
>   config ACPI
> diff --git a/xen/arch/arm/include/asm/page-bits.h b/xen/arch/arm/include/asm/page-bits.h
> index 5d6477e599..deb381ceeb 100644
> --- a/xen/arch/arm/include/asm/page-bits.h
> +++ b/xen/arch/arm/include/asm/page-bits.h
> @@ -3,10 +3,6 @@
>   
>   #define PAGE_SHIFT              12
>   
> -#ifdef CONFIG_ARM_64
> -#define PADDR_BITS              48
> -#else
> -#define PADDR_BITS              40
> -#endif
> +#define PADDR_BITS              CONFIG_PADDR_BITS
>   
>   #endif /* __ARM_PAGE_SHIFT_H__ */
> diff --git a/xen/arch/arm/include/asm/types.h b/xen/arch/arm/include/asm/types.h
> index e218ed77bd..e3cfbbb060 100644
> --- a/xen/arch/arm/include/asm/types.h
> +++ b/xen/arch/arm/include/asm/types.h
> @@ -34,9 +34,15 @@ typedef signed long long s64;
>   typedef unsigned long long u64;
>   typedef u32 vaddr_t;
>   #define PRIvaddr PRIx32
> +#if defined(CONFIG_PHYS_ADDR_T_32)
> +typedef unsigned long paddr_t;
Looking at this again, I think this needs an explanation in the commit
message and Kconfig at least, and possibly in the code, of why we are
not using uint32_t.


> +#define INVALID_PADDR (~0UL)
> +#define PRIpaddr "08lx"
> +#else
>   typedef u64 paddr_t;
>   #define INVALID_PADDR (~0ULL)
>   #define PRIpaddr "016llx"
> +#endif
>   typedef u32 register_t;
>   #define PRIregister "08x"
>   #elif defined (CONFIG_ARM_64)
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 74f6ff2c6f..5ef5fd8c49 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -703,6 +703,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>       const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
>       int rc;
>   
> +    /*
> +     * The size of paddr_t should be sufficient for the complete range of
> +     * physical address.
> +     */
> +    BUILD_BUG_ON((sizeof(paddr_t) * BITS_PER_BYTE) < PADDR_BITS);
>       BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
>   
>       if ( frametable_size > FRAMETABLE_SIZE )

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 12:04:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:04:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529147.823217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBDY-0008GX-Do; Wed, 03 May 2023 12:04:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529147.823217; Wed, 03 May 2023 12:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBDY-0008GQ-Ay; Wed, 03 May 2023 12:04:20 +0000
Received: by outflank-mailman (input) for mailman id 529147;
 Wed, 03 May 2023 12:04:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tXb3=AY=xen.org=julien@srs-se1.protection.inumbo.net>)
 id 1puBDW-0008GG-Ut
 for xen-devel@lists.xen.org; Wed, 03 May 2023 12:04:18 +0000
Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a0fd9587-e9aa-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 14:04:18 +0200 (CEST)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBDS-0001Ei-C9; Wed, 03 May 2023 12:04:14 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBDS-0001Dc-51; Wed, 03 May 2023 12:04:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0fd9587-e9aa-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=iFeaJQawpHU0XOg+AJe7sZ7kc1MFF3Cejlix51ECQjA=; b=ea4CyjXpNW1exCleaO5ewKF85H
	T7g0YCjlRJ4UW32dEVnA6SRQ0vopfcvTP6Yf/Yydfzj4y7OzeLH33F3dBMtZI2jRYKVe7NJ7PF1es
	vCHmI94K2n0iQlc2Ni+jZ2MFzZfRPOtc6Fqkl1gHVyUt64qYO27fcl2apXyM1ZpYsXZE=;
Message-ID: <1e34ae06-c5b1-586a-c840-1dcb768c70e9@xen.org>
Date: Wed, 3 May 2023 13:04:10 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH V3 2/2] libxl: fix matching of generic virtio device
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>, Viresh Kumar <viresh.kumar@linaro.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Wei Liu <wl@xen.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
 stratos-dev@op-lists.linaro.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>,
 Erik Schilling <erik.schilling@linaro.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xen.org, Juergen Gross <jgross@suse.com>
References: <18458fa39433ce4ac950a0a20cc64da93db0b03a.1680771422.git.viresh.kumar@linaro.org>
 <888e60d2ec49f53230bc82df393b6bed4180cb8a.1680771422.git.viresh.kumar@linaro.org>
 <6d18f284-8cf9-4c21-7057-5f53bd98536e@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <6d18f284-8cf9-4c21-7057-5f53bd98536e@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 03/05/2023 12:42, Jan Beulich wrote:
> On 06.04.2023 10:58, Viresh Kumar wrote:
>> The strings won't be an exact match, as we are only looking to match the
>> prefix here, i.e. "virtio,device". This is already done properly in the
>> libxl_virtio.c file; let's do the same here too.
>>
>> Fixes: 43ba5202e2ee ("libxl: add support for generic virtio device")
>> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
>> Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> While I've committed the doc patch (patch 1), I don't think I should
> commit this one without a maintainer ack, even if it looks pretty
> straightforward. Anthony, Wei?

AFAICT Anthony has already given his acked-by:

https://lore.kernel.org/xen-devel/5e98d465-be8f-4050-a988-2a0829a71a2e@perard/#R

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 12:08:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:08:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529150.823227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBHx-0000TR-0h; Wed, 03 May 2023 12:08:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529150.823227; Wed, 03 May 2023 12:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBHw-0000TK-So; Wed, 03 May 2023 12:08:52 +0000
Received: by outflank-mailman (input) for mailman id 529150;
 Wed, 03 May 2023 12:08:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puBHv-0000TE-RR
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 12:08:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBHv-0001Jm-9D; Wed, 03 May 2023 12:08:51 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBHv-0001UT-0w; Wed, 03 May 2023 12:08:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=V1Br1aM8ZSk05T7KMZpn1Acl9L+OCjkgD7wdGT3hsIw=; b=LVv9jlBQf916GX2/6r8Hlsibto
	dz5+3TaUU8OyqT8G1vjquxRj/6nqiwGomPPZFfEm/l+/wkMuzt88glKVUddduajdQhNPJh+cSckiR
	TnJAuh6p0QE/YN/SKbDMNQDasYUfRxddIuNcNk6LJgAjkRKqpvm6g/hjMuQgGuOq2djw=;
Message-ID: <d515c3a5-9473-3cde-2838-a20875aa1181@xen.org>
Date: Wed, 3 May 2023 13:08:48 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 08/12] xen: dt: Replace u64 with uint64_t as the callback
 function parameters for dt_for_each_range()
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-9-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230428175543.11902-9-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 28/04/2023 18:55, Ayan Kumar Halder wrote:
> In the callback functions invoked by dt_for_each_range(), i.e. handle_pci_range()
> and map_range_to_domain(), 'u64' should be replaced with 'uint64_t' as the data
> type for the parameters.

Please explain why this needs to be replaced. I.e. the Xen coding style 
mentions that u32 should be avoided.

> Also dt_for_each_range() invokes the callback functions with
> 'uint64_t' arguments.
> 
> There is another callback function ie is_bar_valid() which uses 'paddr_t'
> instead of 'u64' or 'uint64_t'. We will change it in the subsequent commit.

I would rather prefer that this be folded into this patch.

> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from :-
> 
> v1-v5 - New patch introduced in v6.
> 
>   xen/arch/arm/domain_build.c      | 4 ++--
>   xen/arch/arm/include/asm/setup.h | 2 +-
>   xen/common/device_tree.c         | 4 ++--
>   xen/include/xen/device_tree.h    | 2 +-
>   4 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 1c558fca0c..9865340eac 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1637,7 +1637,7 @@ out:
>   }
>   
>   static int __init handle_pci_range(const struct dt_device_node *dev,
> -                                   u64 addr, u64 len, void *data)
> +                                   uint64_t addr, uint64_t len, void *data)
>   {
>       struct rangeset *mem_holes = data;
>       paddr_t start, end;
> @@ -2331,7 +2331,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>   }
>   
>   int __init map_range_to_domain(const struct dt_device_node *dev,
> -                               u64 addr, u64 len, void *data)
> +                               uint64_t addr, uint64_t len, void *data)
>   {
>       struct map_range_data *mr_data = data;
>       struct domain *d = mr_data->d;
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index 47ce565d87..fe17cb0a4a 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -166,7 +166,7 @@ u32 device_tree_get_u32(const void *fdt, int node,
>                           const char *prop_name, u32 dflt);
>   
>   int map_range_to_domain(const struct dt_device_node *dev,
> -                        u64 addr, u64 len, void *data);
> +                        uint64_t addr, uint64_t len, void *data);
>   
>   extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
>   
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 2163cf26d0..ab5f8df66c 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -997,7 +997,7 @@ int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
>   
>   int dt_for_each_range(const struct dt_device_node *dev,
>                         int (*cb)(const struct dt_device_node *,
> -                                u64 addr, u64 length,
> +                                uint64_t addr, uint64_t length,
>                                   void *),
>                         void *data)
>   {
> @@ -1060,7 +1060,7 @@ int dt_for_each_range(const struct dt_device_node *dev,
>   
>       for ( ; rlen >= rone; rlen -= rone, ranges += rone )
>       {
> -        u64 a, s;
> +        uint64_t a, s;
>           int ret;
>   
>           memcpy(addr, ranges + na, 4 * pna);
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index ce25b89c4b..b3888c1b96 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -681,7 +681,7 @@ int dt_for_each_irq_map(const struct dt_device_node *dev,
>    */
>   int dt_for_each_range(const struct dt_device_node *dev,
>                         int (*cb)(const struct dt_device_node *,
> -                                u64 addr, u64 length,
> +                                uint64_t addr, uint64_t length,
>                                   void *),
>                         void *data);
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 12:10:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:10:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529153.823237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBJQ-0001pl-DB; Wed, 03 May 2023 12:10:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529153.823237; Wed, 03 May 2023 12:10:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBJQ-0001pe-97; Wed, 03 May 2023 12:10:24 +0000
Received: by outflank-mailman (input) for mailman id 529153;
 Wed, 03 May 2023 12:10:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puBJO-0001pW-NZ
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 12:10:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBJO-0001N1-CA; Wed, 03 May 2023 12:10:22 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBJO-0001X6-4p; Wed, 03 May 2023 12:10:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=f7n7K79rPqQ1unoFM74/iUlmR416DinXE/L8aH1BDmk=; b=wqsPYl/4KfSVkhHHQyLI1hqz5c
	W4NrsUCCwxnE488PD8tS9/DjMEGznZoGesCUYIuKUn1ve+JUaflgoFC40cNQvyA82PplzWIKsjU0/
	NXpNjfmZi1o8EUWXVuzBjkkYsYouKF0t7cc7/Lo5i1PN7A6gcjMGZozSQXMDHeT+d7G4=;
Message-ID: <9735d546-4869-2847-db5c-411c489faea1@xen.org>
Date: Wed, 3 May 2023 13:10:19 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 10/12] xen/arm: domain_build: Check if the address fits
 the range of physical address
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-11-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230428175543.11902-11-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 28/04/2023 18:55, Ayan Kumar Halder wrote:
> handle_pci_range() and map_range_to_domain() take addr and len as uint64_t
> parameters. Then frame numbers are obtained from addr and len by right shifting
> with PAGE_SHIFT. The frame numbers are expressed using unsigned long.
> 
> Now if 64-bit >> PAGE_SHIFT, the result will have 52-bits as valid. On a 32-bit
> system, 'unsigned long' is 32-bits. Thus, there is a potential loss of value
> when the result is stored as 'unsigned long'.
> 
> To mitigate this issue, we check if the starting and end address can be
> contained within the range of physical address supported on the system. If not,
> then an appropriate error is returned.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 12:20:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:20:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529156.823246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBSv-0003KL-9x; Wed, 03 May 2023 12:20:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529156.823246; Wed, 03 May 2023 12:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBSv-0003KE-7B; Wed, 03 May 2023 12:20:13 +0000
Received: by outflank-mailman (input) for mailman id 529156;
 Wed, 03 May 2023 12:20:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puBSt-0003K5-SO
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 12:20:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBSt-0001Xh-74; Wed, 03 May 2023 12:20:11 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBSs-0001tb-UY; Wed, 03 May 2023 12:20:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=hVAdin5xqML1rwquAjKeVU1U2+e0AMiPJc2yq1ixpdg=; b=V2L0gW2nYDxpx0czJP1uItXgGq
	MR9OaD3tJuHlqfLNg71EHjoMb1HiTK6MhtBAwTxpQNJtqVrQzITo7cathYaoFlPMtdW/plMClKCHO
	bFh/1jqxaPJNf1OyDPmks2FlCgG21g5kfXpaqelUQMDoJa7yMT6cXKv10zpr3PjoHaqU=;
Message-ID: <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
Date: Wed, 3 May 2023 13:20:07 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-12-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230428175543.11902-12-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 28/04/2023 18:55, Ayan Kumar Halder wrote:
> Restructure the code so that one can use pa_range_info[] table for both
> ARM_32 as well as ARM_64.
> 
> Also, removed the hardcoding for P2M_ROOT_ORDER and P2M_ROOT_LEVEL as
> p2m_root_order can be obtained from the pa_range_info[].root_order and
> p2m_root_level can be obtained from pa_range_info[].sl0.
> 
> Refer ARM DDI 0406C.d ID040418, B3-1345,
> "Use of concatenated first-level translation tables
> 
> ...However, a 40-bit input address range with a translation granularity of 4KB
> requires a total of 28 bits of address resolution. Therefore, a stage 2
> translation that supports a 40-bit input address range requires two concatenated
> first-level translation tables,..."
> 
> Thus, root-order is 1 for 40-bit IPA on ARM_32.
> 
> Refer ARM DDI 0406C.d ID040418, B3-1348,
> 
> "Determining the required first lookup level for stage 2 translations
> 
> For a stage 2 translation, the output address range from the stage 1
> translations determines the required input address range for the stage 2
> translation. The permitted values of VTCR.SL0 are:
> 
> 0b00 Stage 2 translation lookup must start at the second level.
> 0b01 Stage 2 translation lookup must start at the first level.
> 
> VTCR.T0SZ must indicate the required input address range. The size of the input
> address region is 2^(32-T0SZ) bytes."
> 
> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of input
> address region is 2^40 bytes.
> 
> Thus, pa_range_info[].t0sz = 1 (VTCR.S) | 8 (VTCR.T0SZ) ie 11000b which is 24.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from -
> 
> v3 - 1. New patch introduced in v4.
> 2. Restructure the code such that pa_range_info[] is used both by ARM_32 as
> well as ARM_64.
> 
> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and P2M_ROOT_LEVEL.
> The reason being root_order will not be always 1 (See the next patch).
> 2. Updated the commit message to explain t0sz, sl0 and root_order values for
> 32-bit IPA on Arm32.
> 3. Some sanity fixes.
> 
> v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range. ie
> when PARange is 0, the PA size is 32, 1 -> 36 and so on. So pa_range_info[] has
> been updated accordingly.
> For ARM_32 pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do not support
> 32-bit, 36-bit physical address range yet.
> 
>   xen/arch/arm/include/asm/p2m.h |  8 +-------
>   xen/arch/arm/p2m.c             | 32 ++++++++++++++++++--------------
>   2 files changed, 19 insertions(+), 21 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
> index f67e9ddc72..4ddd4643d7 100644
> --- a/xen/arch/arm/include/asm/p2m.h
> +++ b/xen/arch/arm/include/asm/p2m.h
> @@ -14,16 +14,10 @@
>   /* Holds the bit size of IPAs in p2m tables.  */
>   extern unsigned int p2m_ipa_bits;
>   
> -#ifdef CONFIG_ARM_64
>   extern unsigned int p2m_root_order;
>   extern unsigned int p2m_root_level;
> -#define P2M_ROOT_ORDER    p2m_root_order
> +#define P2M_ROOT_ORDER p2m_root_order

This looks like a spurious change.

>   #define P2M_ROOT_LEVEL p2m_root_level
> -#else
> -/* First level P2M is always 2 consecutive pages */
> -#define P2M_ROOT_ORDER    1
> -#define P2M_ROOT_LEVEL 1
> -#endif
>   
>   struct domain;
>   
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 418997843d..1fe3cccf46 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -19,9 +19,9 @@
>   
>   #define INVALID_VMID 0 /* VMID 0 is reserved */
>   
> -#ifdef CONFIG_ARM_64
>   unsigned int __read_mostly p2m_root_order;
>   unsigned int __read_mostly p2m_root_level;
> +#ifdef CONFIG_ARM_64
>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>   /* VMID is by default 8 bit width on AArch64 */
>   #define MAX_VMID       max_vmid
> @@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
>       /* Setup Stage 2 address translation */
>       register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>   
> -#ifdef CONFIG_ARM_32
> -    if ( p2m_ipa_bits < 40 )
> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
> -              p2m_ipa_bits);
> -
> -    printk("P2M: 40-bit IPA\n");
> -    p2m_ipa_bits = 40;
> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
> -#else /* CONFIG_ARM_64 */
>       static const struct {
>           unsigned int pabits; /* Physical Address Size */
>           unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
> @@ -2265,19 +2255,26 @@ void __init setup_virt_paging(void)
>       } pa_range_info[] __initconst = {
>           /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
>           /*      PA size, t0sz(min), root-order, sl0(max) */
> +        [2] = { 40,      24/*24*/,  1,          1 },

I don't like the fact that the indices are not ordered anymore and...

> +#ifdef CONFIG_ARM_64
>           [0] = { 32,      32/*32*/,  0,          1 },
>           [1] = { 36,      28/*28*/,  0,          1 },
> -        [2] = { 40,      24/*24*/,  1,          1 },
>           [3] = { 42,      22/*22*/,  3,          1 },
>           [4] = { 44,      20/*20*/,  0,          2 },
>           [5] = { 48,      16/*16*/,  0,          2 },
>           [6] = { 52,      12/*12*/,  4,          2 },
>           [7] = { 0 }  /* Invalid */
> +#else
> +        [0] = { 0 },  /* Invalid */
> +        [1] = { 0 },  /* Invalid */
> +        [3] = { 0 }  /* Invalid */
> +#endif

... it is not clear to me why we are adding 3 extra entries. I think it 
would be better if we do:

#ifdef CONFIG_ARM_64
    [0] ...
    [1] ...
#endif
    [2] ...
#ifdef CONFIG_ARM_64
    [3] ...
    [4] ...
    ...
#endif

>       };
>   
>       unsigned int i;
>       unsigned int pa_range = 0x10; /* Larger than any possible value */
>   
> +#ifdef CONFIG_ARM_64
>       /*
>        * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
>        * with IPA bits == PA bits, compare against "pabits".
> @@ -2291,6 +2288,9 @@ void __init setup_virt_paging(void)
>        */
>       if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
>           max_vmid = MAX_VMID_16_BIT;
> +#else
> +    p2m_ipa_bits = PADDR_BITS;
> +#endif
Why do we need to reset p2m_ipa_bits for Arm32?

>   
>       /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits". */
>       for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
> @@ -2306,24 +2306,28 @@ void __init setup_virt_paging(void)
>       if ( pa_range >= ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_range].pabits )
>           panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
>   
> +#ifdef CONFIG_ARM_64
>       val |= VTCR_PS(pa_range);
>       val |= VTCR_TG0_4K;
>   
>       /* Set the VS bit only if 16 bit VMID is supported. */
>       if ( MAX_VMID == MAX_VMID_16_BIT )
>           val |= VTCR_VS;
> +
> +    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
> +#endif
> +
>       val |= VTCR_SL0(pa_range_info[pa_range].sl0);
>       val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
>   
>       p2m_root_order = pa_range_info[pa_range].root_order;
>       p2m_root_level = 2 - pa_range_info[pa_range].sl0;
> -    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;

I think this line should stay for 32-bit as well, because p2m_ipa_bits 
should be based on the PA range we selected (see the loop 'Choose 
suitable "pa_range"...').

>   
>       printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n",
>              p2m_ipa_bits,
>              pa_range_info[pa_range].pabits,
>              ( MAX_VMID == MAX_VMID_16_BIT ) ? 16 : 8);
> -#endif
> +
>       printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n",
>              4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val);
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 12:28:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:28:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529165.823257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBaO-00042n-79; Wed, 03 May 2023 12:27:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529165.823257; Wed, 03 May 2023 12:27:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBaO-00042g-47; Wed, 03 May 2023 12:27:56 +0000
Received: by outflank-mailman (input) for mailman id 529165;
 Wed, 03 May 2023 12:27:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puBaM-00042a-KX
 for xen-devel@lists.xen.org; Wed, 03 May 2023 12:27:54 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20625.outbound.protection.outlook.com
 [2a01:111:f400:fe13::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ebebd58c-e9ad-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 14:27:51 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM8PR04MB7332.eurprd04.prod.outlook.com (2603:10a6:20b:1db::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 12:27:51 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 12:27:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebebd58c-e9ad-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Qw98ewXzUnygrXuCZH3fKbYbAOyfhIiRAnG382VjQ9OleHtRNZTm8EAyvwa77hlhiZE2HbSpWx4FBXpjVfsI44kGK9eUny1lf8g1ErYFd1eOfAPfo6c67Z4vdU/L4I3hJINQKwBhQnZsCrqn2v+SKAbVX1QQMbHV+DLDwKjHN6WwWKdFkjB5RvLg5vOax76LmXuXLUAz0y5YOVkob/IteEselgtyEZSYQBOank8tpUYNu6tutXLqUvWP/4++9woknm5Bfbs9qLRRsfvZtRMm7cd70AAXAdaFWNkRe8E7VCPzU52uRm0Gi7OIzWikfWIHFcx4UCZfJ/QCnJwuf0KlPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=v9M5VZYtq+Ran5nYj9i4Y8tieuCE7epdLKj66oHAKeY=;
 b=T7pBL6CYiog40yzAKvYGpCwuEF1XzzfdiX05zzz7VFoIYaZBZ4OPVUxMOUggCNgFI5CDeyFrQYFcdNVMgAcLzkn5xY3F77b6Ka6ydFJUDkpRDyGrr/KmTn9OSty4o/A6snhzdcEQUvg48N7sll4fFtpbWr70Mz1sZ+aCOB/mFG7Mkqa8CpR733uas6kI0kiKCvHk9AHWoJew0iTFrwATyh1rCrmNrLyMONeLJNkmUULLjNZ2VcqmeehzJjepVUAtPU0kNgRH4lfMpqv/WCpQ3irGLSYsTtfxtBG0MhiUM7PFctkn/3R3LkWooiPj02a9Wj4fyaKHnE1/pnMXloNE9g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v9M5VZYtq+Ran5nYj9i4Y8tieuCE7epdLKj66oHAKeY=;
 b=Le1kwJhj7WPYIxOnoloTHR9k3PcyiPe0UOh314Kmst1ZmG+JelG+nye1hnFDxLYRVmvNL1bxpIhpaEp0kRvZiC3t7wG/pLhDUbSzsjefGrAMTugX4rsW5NN/RRc4gVvoiC+aSKM0PNJ34eK8l5Z8J3ceCnUopwcJoiKD7EWUsdK/oCZxVENWPN6YKVXGUBwK6pcPNocjm+9A4HGAyhGoKR9j3OtbZwZQcbwrNONyRFS3vISm/kQ62dL38OMGw24dkqWIqVUxRYnm6bFdxFkBFPGSXwFLuBP3LwwBSUOiDdJst06rwmUB7Xg6gZUmPXJROcToakQ1Qa4eNQyigfas9A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7cd38c79-2876-87a5-ed2a-8a49a432d79d@suse.com>
Date: Wed, 3 May 2023 14:27:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH V3 2/2] libxl: fix matching of generic virtio device
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
 stratos-dev@op-lists.linaro.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>,
 Erik Schilling <erik.schilling@linaro.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xen.org, Juergen Gross <jgross@suse.com>,
 Wei Liu <wl@xen.org>, Viresh Kumar <viresh.kumar@linaro.org>
References: <18458fa39433ce4ac950a0a20cc64da93db0b03a.1680771422.git.viresh.kumar@linaro.org>
 <888e60d2ec49f53230bc82df393b6bed4180cb8a.1680771422.git.viresh.kumar@linaro.org>
 <6d18f284-8cf9-4c21-7057-5f53bd98536e@suse.com>
 <1e34ae06-c5b1-586a-c840-1dcb768c70e9@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <1e34ae06-c5b1-586a-c840-1dcb768c70e9@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0049.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::9) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AM8PR04MB7332:EE_
X-MS-Office365-Filtering-Correlation-Id: 55303217-77ca-4134-25e4-08db4bd1cfc7
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 55303217-77ca-4134-25e4-08db4bd1cfc7
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 12:27:51.1052
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jw+CWna+tnLJJiG4Z3q7uM1F61Cb0SU0Sm5TErcWLYM9ZsnCEwgEC9o6bnYpmbP1HbLyO7Yhq1bACT6ew+kpfA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7332

On 03.05.2023 14:04, Julien Grall wrote:
> On 03/05/2023 12:42, Jan Beulich wrote:
>> On 06.04.2023 10:58, Viresh Kumar wrote:
>>> The strings won't be an exact match, as we are only looking to match the
>>> prefix here, i.e. "virtio,device". This is already done properly in
>>> libxl_virtio.c file; let's do the same here too.
>>>
>>> Fixes: 43ba5202e2ee ("libxl: add support for generic virtio device")
>>> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
>>> Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> While I've committed the doc patch (patch 1), I don't think I should
>> commit this one without a maintainer ack, even if it looks pretty
>> straightforward. Anthony, Wei?
> 
> AFAICT Anthony has already given his acked-by:
> 
> https://lore.kernel.org/xen-devel/5e98d465-be8f-4050-a988-2a0829a71a2e@perard/#R

Oh, right you are. Thanks for pointing this out. And, Anthony: I'm sorry.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 12:36:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:36:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529169.823266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBi6-0005Wi-Vz; Wed, 03 May 2023 12:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529169.823266; Wed, 03 May 2023 12:35:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBi6-0005Wb-TM; Wed, 03 May 2023 12:35:54 +0000
Received: by outflank-mailman (input) for mailman id 529169;
 Wed, 03 May 2023 12:35:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puBi5-0005WV-Cs
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 12:35:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBi4-0001y0-Qe; Wed, 03 May 2023 12:35:52 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBi4-0002Xj-JJ; Wed, 03 May 2023 12:35:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	References:Cc:To:From:Subject:MIME-Version:Date:Message-ID;
	bh=lpezSXCZbHYzG/uYZGtKRGiZ9sjYSZy6hUjCZRVKOas=; b=k5X+wG9s2ZrxRK6QiNaBjNnZqR
	0e4K0w8pXtIUTlrQDj2eUvb2hdrmrZSqaRBVP7Yih60vkuImU4dv7VxsUb6gYIyZcXHek4WeZ5TmM
	KhbuLtTPCB97mf93HXg2m/qOc06vToxZ1fTA/9x+Zh+FVJOYWrGBOKbK20Nnf3nUHySQ=;
Message-ID: <178f9c0f-2f72-daac-772b-c3c4221bea40@xen.org>
Date: Wed, 3 May 2023 13:35:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
Content-Language: en-US
From: Julien Grall <julien@xen.org>
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-12-ayan.kumar.halder@amd.com>
 <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
In-Reply-To: <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 03/05/2023 13:20, Julien Grall wrote:
> Hi,
> 
> On 28/04/2023 18:55, Ayan Kumar Halder wrote:
>> Restructure the code so that one can use pa_range_info[] table for both
>> ARM_32 as well as ARM_64.
>>
>> Also, removed the hardcoding for P2M_ROOT_ORDER and P2M_ROOT_LEVEL as
>> p2m_root_order can be obtained from the pa_range_info[].root_order and
>> p2m_root_level can be obtained from pa_range_info[].sl0.
>>
>> Refer ARM DDI 0406C.d ID040418, B3-1345,
>> "Use of concatenated first-level translation tables
>>
>> ...However, a 40-bit input address range with a translation 
>> granularity of 4KB
>> requires a total of 28 bits of address resolution. Therefore, a stage 2
>> translation that supports a 40-bit input address range requires two 
>> concatenated
>> first-level translation tables,..."
>>
>> Thus, root-order is 1 for 40-bit IPA on ARM_32.
>>
>> Refer ARM DDI 0406C.d ID040418, B3-1348,
>>
>> "Determining the required first lookup level for stage 2 translations
>>
>> For a stage 2 translation, the output address range from the stage 1
>> translations determines the required input address range for the stage 2
>> translation. The permitted values of VTCR.SL0 are:
>>
>> 0b00 Stage 2 translation lookup must start at the second level.
>> 0b01 Stage 2 translation lookup must start at the first level.
>>
>> VTCR.T0SZ must indicate the required input address range. The size of 
>> the input
>> address region is 2^(32-T0SZ) bytes."
>>
>> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of 
>> input
>> address region is 2^40 bytes.
>>
>> Thus, pa_range_info[].t0sz = 1 (VTCR.S) | 8 (VTCR.T0SZ) ie 11000b 
>> which is 24.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>> Changes from -
>>
>> v3 - 1. New patch introduced in v4.
>> 2. Restructure the code such that pa_range_info[] is used both by 
>> ARM_32 as
>> well as ARM_64.
>>
>> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and 
>> P2M_ROOT_LEVEL.
>> The reason being root_order will not be always 1 (See the next patch).
>> 2. Updated the commit message to explain t0sz, sl0 and root_order 
>> values for
>> 32-bit IPA on Arm32.
>> 3. Some sanity fixes.
>>
>> v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range. ie
>> when PARange is 0, the PA size is 32, 1 -> 36 and so on. So 
>> pa_range_info[] has
>> been updated accordingly.
>> For ARM_32 pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do not 
>> support
>> 32-bit, 36-bit physical address range yet.
>>
>>   xen/arch/arm/include/asm/p2m.h |  8 +-------
>>   xen/arch/arm/p2m.c             | 32 ++++++++++++++++++--------------
>>   2 files changed, 19 insertions(+), 21 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/p2m.h 
>> b/xen/arch/arm/include/asm/p2m.h
>> index f67e9ddc72..4ddd4643d7 100644
>> --- a/xen/arch/arm/include/asm/p2m.h
>> +++ b/xen/arch/arm/include/asm/p2m.h
>> @@ -14,16 +14,10 @@
>>   /* Holds the bit size of IPAs in p2m tables.  */
>>   extern unsigned int p2m_ipa_bits;
>> -#ifdef CONFIG_ARM_64
>>   extern unsigned int p2m_root_order;
>>   extern unsigned int p2m_root_level;
>> -#define P2M_ROOT_ORDER    p2m_root_order
>> +#define P2M_ROOT_ORDER p2m_root_order
> 
> This looks like a spurious change.
> 
>>   #define P2M_ROOT_LEVEL p2m_root_level
>> -#else
>> -/* First level P2M is always 2 consecutive pages */
>> -#define P2M_ROOT_ORDER    1
>> -#define P2M_ROOT_LEVEL 1
>> -#endif
>>   struct domain;
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 418997843d..1fe3cccf46 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -19,9 +19,9 @@
>>   #define INVALID_VMID 0 /* VMID 0 is reserved */
>> -#ifdef CONFIG_ARM_64
>>   unsigned int __read_mostly p2m_root_order;
>>   unsigned int __read_mostly p2m_root_level;
>> +#ifdef CONFIG_ARM_64
>>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>   /* VMID is by default 8 bit width on AArch64 */
>>   #define MAX_VMID       max_vmid
>> @@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
>>       /* Setup Stage 2 address translation */
>>       register_t val = 
>> VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>> -#ifdef CONFIG_ARM_32
>> -    if ( p2m_ipa_bits < 40 )
>> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
>> -              p2m_ipa_bits);
>> -
>> -    printk("P2M: 40-bit IPA\n");
>> -    p2m_ipa_bits = 40;
>> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
>> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
>> -#else /* CONFIG_ARM_64 */
>>       static const struct {
>>           unsigned int pabits; /* Physical Address Size */
>>           unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
>> @@ -2265,19 +2255,26 @@ void __init setup_virt_paging(void)
>>       } pa_range_info[] __initconst = {
>>           /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table 
>> D5-6 */
>>           /*      PA size, t0sz(min), root-order, sl0(max) */
>> +        [2] = { 40,      24/*24*/,  1,          1 },
> 
> I don't like the fact that the index are not ordered anymore and...
> 
>> +#ifdef CONFIG_ARM_64
>>           [0] = { 32,      32/*32*/,  0,          1 },
>>           [1] = { 36,      28/*28*/,  0,          1 },
>> -        [2] = { 40,      24/*24*/,  1,          1 },
>>           [3] = { 42,      22/*22*/,  3,          1 },
>>           [4] = { 44,      20/*20*/,  0,          2 },
>>           [5] = { 48,      16/*16*/,  0,          2 },
>>           [6] = { 52,      12/*12*/,  4,          2 },
>>           [7] = { 0 }  /* Invalid */
>> +#else
>> +        [0] = { 0 },  /* Invalid */
>> +        [1] = { 0 },  /* Invalid */
>> +        [3] = { 0 }  /* Invalid */
>> +#endif
> 
> ... it is not clear to me why we are adding 3 extra entries. I think it 
> would be better if we do:
> 
> #ifdef CONFIG_ARM_64
>     [0] ...
>     [1] ...
> #endif
>     [2] ...
> #ifdef CONFIG_ARM_64
>     [3] ...
>     [4] ...
>     ...
> #endif

Looking at the next patch, an alternative would be to go back to 
duplicating the lines. So after the two patches we would have

#ifdef CONFIG_ARM_64
     [0] ...
     [7] ...
#else
     { /* 32-bit */ }
     { /* 40-bit */ }
#endif

I didn't add '[X] = ' because the index is not necessary for 32-bit.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 12:46:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:46:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529177.823277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBsK-000722-VK; Wed, 03 May 2023 12:46:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529177.823277; Wed, 03 May 2023 12:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puBsK-00071v-S6; Wed, 03 May 2023 12:46:28 +0000
Received: by outflank-mailman (input) for mailman id 529177;
 Wed, 03 May 2023 12:46:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puBsJ-00071Y-NZ
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 12:46:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBsI-0002Ba-Dq; Wed, 03 May 2023 12:46:26 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puBsI-00032K-6X; Wed, 03 May 2023 12:46:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=E9fnji8EWnL4Q3KzOAL+r0yyQHK5Ln8yKDSohAdh+lY=; b=MfeyMzW5NmhzccmnDms08bftKd
	/+bAE7iZwagUAKRdP3Js3vnNU1nf8AQeMwmCvzdpB/i2fK4EhrWPK1z03zv7AABml39NNUO14quQ1
	zwd2L3Elq2guyzFZjZ9DX+4Ckk0Ll0YP+2qjhPrvKbOIkObhXxRFarPRXvWJCYanCs1I=;
Message-ID: <0654529d-fe53-f3b4-a1f2-bf3515af8e93@xen.org>
Date: Wed, 3 May 2023 13:46:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 01/13] tools/xenstore: verify command line parameters
 better
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-2-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230330085011.9170-2-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 30/03/2023 09:49, Juergen Gross wrote:
> Add some more verification of command line parameters.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/xenstore/xenstored_core.c | 19 +++++++++----------
>   1 file changed, 9 insertions(+), 10 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 6e2fc06840..7214b3df03 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2809,7 +2809,7 @@ int main(int argc, char *argv[])
>   			no_domain_init = true;
>   			break;
>   		case 'E':
> -			hard_quotas[ACC_NODES].val = strtoul(optarg, NULL, 10);
> +			hard_quotas[ACC_NODES].val = get_optval_int(optarg);

Ah, so that's exactly what I was asking about on the other series.

I would be OK if this is kept separate:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 12:56:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:56:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529185.823287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC1a-000050-QI; Wed, 03 May 2023 12:56:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529185.823287; Wed, 03 May 2023 12:56:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC1a-00004r-NP; Wed, 03 May 2023 12:56:02 +0000
Received: by outflank-mailman (input) for mailman id 529185;
 Wed, 03 May 2023 12:56:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puC1Z-0008WR-UJ
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 12:56:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puC1Y-0002NG-OU; Wed, 03 May 2023 12:56:00 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puC1Y-0003Mq-H3; Wed, 03 May 2023 12:56:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=cYThjRTWIYId03gPcjqSu6mJI5Pz9knWhki2c3eBCtU=; b=wWalS9+sdX1DLpgcrnF/f0iA98
	G+9MGwv3GLwBSAVb/bBofMDglFJmUGCnJT/u57Fo4hTy9UrjUXKTBvNupyGcIypG6N1sWOTZXdWzj
	aVasbntlDK14kJTwR9nnA+oPDob8AP0x8GHbudXRTeTpQ7MJWtAcQ6KHfFj9ROzFi7Qc=;
Message-ID: <8d91f57b-41ed-2939-94e8-9f73f0d523a6@xen.org>
Date: Wed, 3 May 2023 13:55:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 02/13] tools/xenstore: do some cleanup of hashtable.c
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-3-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230330085011.9170-3-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 30/03/2023 09:50, Juergen Gross wrote:
> Do the following cleanups:
> - hashtable_count() isn't used at all, so remove it
> - replace prime_table_length and max_load_factor with macros
> - make hash() static
> - add a loadlimit() helper function
> - remove the /***/ lines between functions
> - do some style corrections
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/xenstore/hashtable.c | 71 ++++++++++++++------------------------
>   tools/xenstore/hashtable.h | 10 ------
>   2 files changed, 26 insertions(+), 55 deletions(-)
> 
> diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
> index 3d4466b597..c1b11743bb 100644
> --- a/tools/xenstore/hashtable.c
> +++ b/tools/xenstore/hashtable.c
> @@ -40,22 +40,25 @@ static const unsigned int primes[] = {
>   50331653, 100663319, 201326611, 402653189,
>   805306457, 1610612741
>   };
> -const unsigned int prime_table_length = sizeof(primes)/sizeof(primes[0]);
> -const unsigned int max_load_factor = 65; /* percentage */
>   
> -/*****************************************************************************/
> -/* indexFor */
> -static inline unsigned int
> -indexFor(unsigned int tablelength, unsigned int hashvalue) {
> +#define PRIME_TABLE_LEN   ARRAY_SIZE(primes)
> +#define MAX_LOAD_PERCENT  65
> +
> +static inline unsigned int indexFor(unsigned int tablelength,
> +                                    unsigned int hashvalue)
> +{
>       return (hashvalue % tablelength);
>   }
>   
> -/*****************************************************************************/
> -struct hashtable *
> -create_hashtable(const void *ctx, unsigned int minsize,
> -                 unsigned int (*hashf) (const void *),
> -                 int (*eqf) (const void *, const void *),
> -                 unsigned int flags)
> +static unsigned int loadlimit(unsigned int pindex)
> +{
> +    return ((uint64_t)primes[pindex] * MAX_LOAD_PERCENT) / 100;
> +}
> +
> +struct hashtable *create_hashtable(const void *ctx, unsigned int minsize,
> +                                   unsigned int (*hashf) (const void *),
> +                                   int (*eqf) (const void *, const void *),
> +                                   unsigned int flags)
>   {
>       struct hashtable *h;
>       unsigned int pindex, size = primes[0];
> @@ -64,7 +67,7 @@ create_hashtable(const void *ctx, unsigned int minsize,
>       if (minsize > (1u << 30)) return NULL;
>   
>       /* Enforce size as prime */
> -    for (pindex=0; pindex < prime_table_length; pindex++) {
> +    for (pindex=0; pindex < PRIME_TABLE_LEN; pindex++) {

As you fix the style, how about adding a space before/after '=' and...

>           if (primes[pindex] > minsize) { size = primes[pindex]; break; }

... break this line in multiple ones?

With or without this included here:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 12:59:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:59:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529190.823297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC4q-0000fB-8g; Wed, 03 May 2023 12:59:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529190.823297; Wed, 03 May 2023 12:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC4q-0000f4-5n; Wed, 03 May 2023 12:59:24 +0000
Received: by outflank-mailman (input) for mailman id 529190;
 Wed, 03 May 2023 12:59:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puC4o-0000er-RD
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 12:59:22 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20620.outbound.protection.outlook.com
 [2a01:111:f400:7d00::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 523fdbba-e9b2-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 14:59:21 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DU0PR04MB9588.eurprd04.prod.outlook.com (2603:10a6:10:31d::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 12:59:20 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 12:59:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 523fdbba-e9b2-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T7+7htrPsnZscAy77VQ0DrJY/CD8rQ4wZmA0MdDLqXalaiuJW3skRaZlbwLn+vFOXFx8+mY649YFJ4I8Fk12MK37SQu+DvNDh+Pkzdy1Ihv7VepmTE41poUSmYz8tfMyK2C5ZSNBiqXcsbYgwKy3Eajwygzl8BTxVpVxSj4IjqcpevCkCFq+X14PxEcVd9O4vQIOPXDb+3k5iz7e0G1/IAnhF1eKsM3LIgk4GNQeQLfplJg4ZfU2r4RzBjjSccgdrZQfReQUFanI7ErbVa91sKdQRTdOZm8OirHITUjxyUD5+UVcEL09EsmDaoT5boucGscQGmHhm3fXZKz21u0VPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vTIfjDGnj3/WeqbrCcUfjD6T8DjZ2Pu0a9/w7OnCY6E=;
 b=RmgKJ0brgpFDbK67k7XoUoY26A7/mFZAwcz7fCoL7gtgbjlhH5hjAAmLsR4zg/IyItyVK/103JFxDylqaFHQ4YSM2UmMZ4pA/ZUgRw2Hf5mCgpPrCsI0iBxTw8kCnzb5uuKsUPnVnSIj8FIcV/bcUYTb9FWT3TC6vCVeXHSu9SdyMN7ssiChWAsprIeMQvLllGdnTbnxvH+o5k/E0YWdnSj9cBAoehQBPoAKmly/9TAvYmCZji+oeZ7xHw0UDxT5/GEAt1ErQ4r3KhACQUBEIBXfynAw+vaIqfMaJQK7oZ/G+QjhB5Vh9GjjBE5PUyQXgPlgJkdjJaqdR3RKzxALVQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vTIfjDGnj3/WeqbrCcUfjD6T8DjZ2Pu0a9/w7OnCY6E=;
 b=qQBxlO7FegaZrQp3PBWVF8O+EMLWrgP7oAh0GXt6bREjoxgEYI+LzFJdxPMTA2L5DZVeypprssBxpdW8T8ltxZqXbPgqXvigUv0t25DLRitVZbTtFY2iIdIwIXYLzpRXYkVC5a8as7BHD8FtSgSIcFozkThz46ZKGs+yWFCyfeka6BdeswJinjzBpht7vmk46GTymk3GNW4NfCBcULMp6CSsSzHaWErL7jQWzaxvi4DUF17cjA9LlTZmkCrOSZ9O4Z+dLy0Q5N3ld/NrrkO0al103LTSwK1koKR0f6hZhd5N0Oz+OUvYRThbUZnsyrm/2134tB5ujjFyhrk8Go6z8Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7e21773f-7178-5c8c-f7bc-941308d0297a@suse.com>
Date: Wed, 3 May 2023 14:59:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: xen | Failed pipeline for staging | 0956aa22
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6452557e7028b_2a54f38959a5@gitlab-sidekiq-catchall-v2-bdc885877-cd8zv.mail>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6452557e7028b_2a54f38959a5@gitlab-sidekiq-catchall-v2-bdc885877-cd8zv.mail>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0173.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::15) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DU0PR04MB9588:EE_
X-MS-Office365-Filtering-Correlation-Id: ad911d4e-1aaa-4f7d-d1ab-08db4bd6358f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	r98y/HrmxbuMqatbAL87ofxAqeI9Nl1FNyqitSZz+pIn5J+lskyc3b9Qf7zb4NYoU/NjLtqCa2oMddGh0JDWvGETFWCrhzJGgowSnTFDo3Dpy5UGVyDAe/khv1MG14fYjBVvfoqYhejf+RjEagf60LZ7/Tcn/sWVDjGLLDdbpZzzQsGjlYDgf9RtYuDCJjAEXWljKWMPdYaRZIPLUG5UsKeuFj442qjHwtd6ZdvyNWNBy11dtd1cgtc9FpYZvMrRE0n0SuGU+JD1QkBujI22lLwhzN6Nt5EtvN1lyAS1nqj2Lyt9gkum9zNswVAdiXpKDZ44r/6OJ20/fEEnqA6U7dIhGl628MhEh2L2huAL3Nq3mQq4LKbzuxVUznWrUpltOp5+q3fEwu9z9gUXYn6uyiLZKQKl8FLU52BTizj4l/76P59cvw9HhLCFAxhxk49aZEMW1nIS40xYM0FbYMLGBajEZzPFsT1+dSeCMP82iuT8NFkoc3J/qY1A56ZCLG8a2/E1Mht8u4oOzksfN9odwQJjIouaEe4EbOhNToEqTYBEN4m+zVh7diVsj4LmHIwkmE1wkK9P5sxQgNhdWzBlwUkhq26XPO+bF83VUoYhncR/SiJfM04Dpah6jt3ODWEc0IrE2i11nltiDAvTiEQiNtl56a9jtGetOgTsO7PDJfo=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR04MB6551.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(136003)(346002)(376002)(39860400002)(366004)(451199021)(478600001)(53546011)(31686004)(2616005)(186003)(6506007)(6512007)(26005)(6486002)(966005)(6916009)(41300700001)(66476007)(66946007)(66556008)(316002)(83380400001)(8676002)(5660300002)(2906002)(38100700002)(8936002)(31696002)(36756003)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?d2lHcGVwOXdmbmgwY1RVdTRoaEdjKzFLaE5qNGVIMEFEV3YrZlRYM2k5d28v?=
 =?utf-8?B?R1pOVkIrR3Vxazl6Y2hRcXQ5eU9ueVV3Q0xHUE13VGdwWUtraXp0cDcxem9p?=
 =?utf-8?B?VFErN00wM2dRWTFTWTh0cG9QaEJtUkRUS08weVg4R21UTnVPUFdIcTB6Mlpa?=
 =?utf-8?B?U1U1R0tFRHpMa0JXSHlGOWpQbHRDdUMycVNYcE1pcEpIVmREbE1IcmFVbC9k?=
 =?utf-8?B?RHZKSEVObFlvNVFER2tSYW81MVJDL2duN1d0di9SbTJSRk42clNhVzBNNHlw?=
 =?utf-8?B?c2ZRTmpyb21XN2M2bnBubWVRYU9lbmh5Qm1OQ2U3S1RIRWNXeTRYdFMxeGlx?=
 =?utf-8?B?ejg1S0pwN25GKzVqdCtTYVh2cGtPZ0c2bGdBQU9UOXlRZUF6QUllaXZiRWRn?=
 =?utf-8?B?dU1JVHpDa1h2VWFBQ01GRWdkNml5Q2Fpdm0xN3hLdk9JaGZYdExucGNKWWZD?=
 =?utf-8?B?YmtNUmZFcWVQLzA1cGFCaGJoa0g3YkptSEhGeVZCZXFBd2NNSVlrQ0Q2MlVV?=
 =?utf-8?B?K1krYlJxMGhhZlZwa1B5RmVHSURSR29RRGNSS2thUlAxTDZOMGNXQSszd2xh?=
 =?utf-8?B?K0NGeHpwd2o5bnY1TElRSGdaS2Y4WWVYUW5TWXpCaGR0eUxHRWo0Unh1Uk5B?=
 =?utf-8?B?TUZJYnA3ZUZWL0t6Sko3VW1aNzhvc3VqaU84U2MyWHU3aXQ2NFpYR0pIZ290?=
 =?utf-8?B?TzU2d0ZQRkNRaFhQS0pqRnlVbmUyVUQ3QXBMMFhaSStWbjI2WUd3QzBkczBB?=
 =?utf-8?B?RElSZjFVODl0aHhOWVI2cm4wUzVJc3VIeXZPQ2ZKbytHbWI1dzFJZ0xMVjdT?=
 =?utf-8?B?NmxZYzlhcGZSQ0ZueVo2Vmp5V05jWVhhRndidEx3OCtMNW9VaTRlL0k5UG5t?=
 =?utf-8?B?S1BNWDBzNkF1Z0VQWmNDelJmS1B6S1g0QkJKVi9WWEZOK3B3SGV3MzYvMllx?=
 =?utf-8?B?Q2NpOEhxdWpXTGExczYrL0hWSHVDWHY3ck4vemRUOXJZS3ZIS0Z5enB0NkJl?=
 =?utf-8?B?QXRnRXlxR3c4UG1meGkxR3JuZjZnL1ZXRUh1cTNJZDUrUjlodk9LY1ZVY3BV?=
 =?utf-8?B?WkNLRmtqaEV5a2V4ZDEza2hHckgvVkE1Q2hvSitFLzBRajhqQXN1M01waUdk?=
 =?utf-8?B?K1lJZ0UrN2pDVnp0a0ozWEtTU0RYZXZ5Q0lsNGRXbEJwMHpyRG93Mkpudkoz?=
 =?utf-8?B?aysvVnRiZUtZMHdLdFA4K3JnbEtvb3N2dVZzbkx3VEl6czZTRzk5c2NrdDBQ?=
 =?utf-8?B?MU9vOGphellXcDBuK2hLS2IydlhRT0F6QVorcTFEZkVvSjR3dkxzcWdpMjll?=
 =?utf-8?B?U2NlREVsWmZKdjFBZURTYVM4MHpzckRodGlYZHFHbDVRWlBmV0hTWDN5TEpv?=
 =?utf-8?B?SE1MZ3p2Z3ZJN2oyYXRHZDUwNFdiaFlwUkFQVnVKbitxaW1NZVNIOHFkNUVE?=
 =?utf-8?B?bVRCOUdzL0lWVldkaldDSGtIOG9abEdlU1d3VVpGaG10RVdZYlN4dTExNDRQ?=
 =?utf-8?B?RlB0QmkvKzg1bEJOZ3VIT3VLc0puNWFPemYwTUY3NjJTbnArZHc1ZEYzY0NQ?=
 =?utf-8?B?QUNoSm05RVNvVzRBTEU0L3lCQnlaNlBSOFNhcksyVzFadWtPYTJ4UmhEY1lQ?=
 =?utf-8?B?ZjI1WTBFb2dQcE00WXVqQ0NURXhHUElBUE0vbXpIZ3lTL0FWcHQyc0YzdC9y?=
 =?utf-8?B?YURkTkxzdHhRV2Q0elNYSE0vc1l6UStDN1lYVTYyZThTdnZ4UGVpZmNwTUhP?=
 =?utf-8?B?Q0VhcUpzQitmUFhiVnZ1RXB2TGczUjk4U0RrZUJMSjA5TWExalJ0L1VZNkRw?=
 =?utf-8?B?THFoVEROMlpOUVNRZDhYdkQyeVhBN3hXOUtmckJlODI1N3ZVdEN3c3dtWGhY?=
 =?utf-8?B?V1ZkOERxRWtGTUxQekZnV211VkszVzlOOTdZUDBNVEJUVFRsb3N5QVVoOTNw?=
 =?utf-8?B?bW5mTHdLWjl0eE5nWmovQTZaOHp0bkE0VXMxV2FNVGlCVXhKem5ZaldSRFVO?=
 =?utf-8?B?U1FPa29ra0F3QnJpY0xzeDNzMEh1WlVHWm5qekZpTTZpaDJpaTBiU2gyemI0?=
 =?utf-8?B?RlRBTUNqZzJ1My9zaC9tZkZrZEN4SXVmWXhIaWxKNFp6Mi9IbTFCdHF5YUV6?=
 =?utf-8?Q?TQmfYbtoL4zfep+Jpz7eXCz0q?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad911d4e-1aaa-4f7d-d1ab-08db4bd6358f
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 12:59:19.8471
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: n+hZby+46q7JDSxKMmQFztqVZEzX6VBkJm4GC+En5o3m+zDcFXgPa3vvcORFyZPbxCSiFVqvoMg1TpIOCkIIKA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9588

On 03.05.2023 14:37, GitLab wrote:
> 
> 
> Pipeline #855820014 has failed!
> 
> Project: xen ( https://gitlab.com/xen-project/xen )
> Branch: staging ( https://gitlab.com/xen-project/xen/-/commits/staging )
> 
> Commit: 0956aa22 ( https://gitlab.com/xen-project/xen/-/commit/0956aa2219745a198bb6a0a99e2108a3c09b280e )
> Commit Message: x86/mm: replace bogus assertion in paging_log_d...
> Commit Author: Jan Beulich ( https://gitlab.com/jbeulich )
> 
> Pipeline #855820014 ( https://gitlab.com/xen-project/xen/-/pipelines/855820014 ) triggered by Ganis ( https://gitlab.com/ganis )
> had 6 failed jobs.
> 
> Job #4218447934 ( https://gitlab.com/xen-project/xen/-/jobs/4218447934/raw )
> 
> Stage: build
> Name: opensuse-tumbleweed-clang
> Job #4218447943 ( https://gitlab.com/xen-project/xen/-/jobs/4218447943/raw )
> 
> Stage: build
> Name: opensuse-tumbleweed-gcc-debug
> Job #4218447940 ( https://gitlab.com/xen-project/xen/-/jobs/4218447940/raw )
> 
> Stage: build
> Name: opensuse-tumbleweed-gcc
> Job #4218447966 ( https://gitlab.com/xen-project/xen/-/jobs/4218447966/raw )
> 
> Stage: test
> Name: adl-pci-hvm-x86-64-gcc-debug
> Job #4218447937 ( https://gitlab.com/xen-project/xen/-/jobs/4218447937/raw )
> 
> Stage: build
> Name: opensuse-tumbleweed-clang-debug

Two of the build failures look to be addressed by Olaf's still pending
"tools/libs/guest: assist gcc13's realloc analyzer". I guess I'm going
to commit this with (just) the two R-b that there are.

The other two are less clear, at least to me:

checking for openpty et al... configure: error: in `/builds/xen-project/xen/tools':
configure: error: Unable to find library for openpty and login_tty
See `config.log' for more details
configure: error: ./configure failed for tools

Jan

> Job #4218448078 ( https://gitlab.com/xen-project/xen/-/jobs/4218448078/raw )
> 
> Stage: test
> Name: qemu-smoke-dom0less-arm32-gcc-debug-gzip
> 



From xen-devel-bounces@lists.xenproject.org Wed May 03 12:59:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 12:59:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529192.823307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC5C-00013Z-KV; Wed, 03 May 2023 12:59:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529192.823307; Wed, 03 May 2023 12:59:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC5C-00013S-HS; Wed, 03 May 2023 12:59:46 +0000
Received: by outflank-mailman (input) for mailman id 529192;
 Wed, 03 May 2023 12:59:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puC5B-000138-IK
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 12:59:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puC5B-0002QN-0Z; Wed, 03 May 2023 12:59:45 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puC5A-0003Rr-QE; Wed, 03 May 2023 12:59:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=XjRyBmen38hK+2UHszJvugGdf2RXVGnU3dXLBe5dbg4=; b=EeRHE7ncI8jOofgXmkpWu2d3ay
	gnUPEHirBoc2ozm8dN96kesg5yQSoDfiivFjZYoI9kX68vzg1vrSl3CZBfhIQCwYqX7q1QkLLyk4z
	Qj8Cs2n5rrr0VriQa+ixJBm6tKZhdoDUWtgW+BirORqQZCxO8y2vLiKXAqLzhRSaDz04=;
Message-ID: <1a2a7aed-8947-c5ed-e1ed-8fa80bc75750@xen.org>
Date: Wed, 3 May 2023 13:59:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 03/13] tools/xenstore: modify interface of
 create_hashtable()
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-4-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230330085011.9170-4-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 30/03/2023 09:50, Juergen Gross wrote:
> The minsize parameter of create_hashtable() doesn't have any real use
> case for Xenstore, so drop it.
> 
> For better talloc_report_full() diagnostic output add a name parameter
> to create_hashtable().
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/xenstore/hashtable.c        | 20 ++++++--------------
>   tools/xenstore/hashtable.h        |  4 ++--
>   tools/xenstore/xenstored_core.c   |  2 +-
>   tools/xenstore/xenstored_domain.c |  4 ++--
>   4 files changed, 11 insertions(+), 19 deletions(-)
> 
> diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
> index c1b11743bb..ab1e687d0b 100644
> --- a/tools/xenstore/hashtable.c
> +++ b/tools/xenstore/hashtable.c
> @@ -55,36 +55,28 @@ static unsigned int loadlimit(unsigned int pindex)
>       return ((uint64_t)primes[pindex] * MAX_LOAD_PERCENT) / 100;
>   }
>   
> -struct hashtable *create_hashtable(const void *ctx, unsigned int minsize,
> +struct hashtable *create_hashtable(const void *ctx, const char *name,
>                                      unsigned int (*hashf) (const void *),
>                                      int (*eqf) (const void *, const void *),
>                                      unsigned int flags)
>   {
>       struct hashtable *h;
> -    unsigned int pindex, size = primes[0];
> -
> -    /* Check requested hashtable isn't too large */
> -    if (minsize > (1u << 30)) return NULL;
> -
> -    /* Enforce size as prime */
> -    for (pindex=0; pindex < PRIME_TABLE_LEN; pindex++) {
> -        if (primes[pindex] > minsize) { size = primes[pindex]; break; }
> -    }
>   
>       h = talloc_zero(ctx, struct hashtable);
>       if (NULL == h)
>           goto err0;
> -    h->table = talloc_zero_array(h, struct entry *, size);
> +    talloc_set_name_const(h, name);
> +    h->table = talloc_zero_array(h, struct entry *, primes[0]);
>       if (NULL == h->table)
>           goto err1;
>   
> -    h->tablelength  = size;
> +    h->tablelength  = primes[0];

I find the connection between this line, ...

>       h->flags        = flags;
> -    h->primeindex   = pindex;
> +    h->primeindex   = 0;

... this one and ...

>       h->entrycount   = 0;
>       h->hashfn       = hashf;
>       h->eqfn         = eqf;
> -    h->loadlimit    = loadlimit(pindex);
> +    h->loadlimit    = loadlimit(0);

... now harder to see. How about setting h->primeindex first and then
using it in place of the literal 0?

>       return h;
>   
>   err1:
> diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
> index 04310783b6..0e1a6f61c2 100644
> --- a/tools/xenstore/hashtable.h
> +++ b/tools/xenstore/hashtable.h
> @@ -10,7 +10,7 @@ struct hashtable;
>      
>    * @name                    create_hashtable
>    * @param   ctx             talloc context to use for allocations
> - * @param   minsize         minimum initial size of hashtable
> + * @param   name            talloc name of the hashtable
>    * @param   hashfunction    function for hashing keys
>    * @param   key_eq_fn       function for determining key equality
>    * @param   flags           flags HASHTABLE_*
> @@ -23,7 +23,7 @@ struct hashtable;
>   #define HASHTABLE_FREE_KEY   (1U << 1)
>   
>   struct hashtable *
> -create_hashtable(const void *ctx, unsigned int minsize,
> +create_hashtable(const void *ctx, const char *name,
>                    unsigned int (*hashfunction) (const void *),
>                    int (*key_eq_fn) (const void *, const void *),
>                    unsigned int flags
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 7214b3df03..6ce7be3161 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2511,7 +2511,7 @@ void check_store(void)
>   	struct check_store_data data;
>   
>   	/* Don't free values (they are all void *1) */
> -	data.reachable = create_hashtable(NULL, 16, hash_from_key_fn,
> +	data.reachable = create_hashtable(NULL, "checkstore", hash_from_key_fn,
>   					  keys_equal_fn, HASHTABLE_FREE_KEY);
>   	if (!data.reachable) {
>   		log("check_store: ENOMEM");
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index df64c87efc..6d40aefc63 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -1017,7 +1017,7 @@ void domain_init(int evtfd)
>   	int rc;
>   
>   	/* Start with a random rather low domain count for the hashtable. */
> -	domhash = create_hashtable(NULL, 8, domhash_fn, domeq_fn, 0);
> +	domhash = create_hashtable(NULL, "domains", domhash_fn, domeq_fn, 0);
>   	if (!domhash)
>   		barf_perror("Failed to allocate domain hashtable");
>   
> @@ -1804,7 +1804,7 @@ struct hashtable *domain_check_acc_init(void)
>   {
>   	struct hashtable *domains;
>   
> -	domains = create_hashtable(NULL, 8, domhash_fn, domeq_fn,
> +	domains = create_hashtable(NULL, "domain_check", domhash_fn, domeq_fn,
>   				   HASHTABLE_FREE_VALUE);
>   	if (!domains)
>   		return NULL;

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 13:01:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 13:01:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529197.823317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC6p-0002ge-Ve; Wed, 03 May 2023 13:01:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529197.823317; Wed, 03 May 2023 13:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC6p-0002gX-SW; Wed, 03 May 2023 13:01:27 +0000
Received: by outflank-mailman (input) for mailman id 529197;
 Wed, 03 May 2023 13:01:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZlPw=AY=citrix.com=prvs=4803f4e7c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puC6o-0002g5-MT
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:01:26 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a23f189-e9b2-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 15:01:23 +0200 (CEST)
Received: from mail-mw2nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 09:01:09 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6455.namprd03.prod.outlook.com (2603:10b6:a03:38d::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 13:01:07 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 13:01:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a23f189-e9b2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683118884;
  h=message-id:date:subject:to:references:from:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=GJeHLp374TLZGI7k5umNOKZxgQ00DxIGQ5macyftYIk=;
  b=EuylkQDB8unKNJXhf6FGTPqKevuhQE4hyhELKpOpzJu2DHns5Ysas5Ny
   6JycFiPh9ubnLEm7CRua6HvCYF1xNaGVSCmVYnsaj47KxL7AeYvBbKJr5
   CHtc7QQGoV+BCFVD+XqdugRgJk5Kay/o1uS3GB46qUs8f5Xqgm4+AqSoh
   M=;
X-IronPort-RemoteIP: 104.47.55.108
X-IronPort-MID: 107038349
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:E3EDQ6jzElGaSw/snzeqlWYZX161eREKZh0ujC45NGQN5FlHY01je
 htvXGiBb6uKN2r3f95xPoW2p01V7ZWEnYJiHAplpCw0E3gb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWj0N8klgZmP6sT4QeFzyN94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQFdABUZSCjjdi974meestUgeFyCcLCadZ3VnFIlVk1DN4AaLWaG+Dv2oUd2z09wMdTAfzZe
 swVLyJ1awjNaAFOPVFRD48imOCvhT/0dDgwRFC9/PJrpTSMilEsluG1YLI5efTTLSlRtm+eq
 njL4CLSBRYCOcbE4TGE7mitlqnEmiaTtIc6TeXnrqU62wHCroAVIC8tc0S9jejgtnCvBM1vd
 Q8e+XF/lpFnoSRHSfG4BXVUukWsvBQRRt5RGO0S8xyWx+zf5APxLngJSHtNZcIrsOcyRCc2z
 RmZktXxHzttvbaJD3WH+d+pQSiaPCEUKSoHenUCRA5cu937+thr3lTIU8ppF7OzgpvtAzbsz
 juWrS84wbIOkcoM0Kb99lfC696xmqX0oscOzl2/dgqYAslROOZJu6TABYDn0Mt9
IronPort-HdrOrdr: A9a23:9PNNKK9A0U1MmP7kRIluk+Fcdb1zdoMgy1knxilNoENuHPBwxv
 rAoB1E73PJYW4qKQodcLC7UpVoMkmsj6KdgLNhdYtKOTOGhILGFvAa0WKP+UyDJ8SczJ8X6U
 4DSdkHNDSYNzET5qaKgzVQeOxQpOVvhZrY49s2uE0dKj2CBZsQijuRDDz3LmRGAC19QbYpHp
 uV4cRK4xC6f24MU8i9Dn4ZG8DeutzijvvdEFM7Li9izDPLoSKj6bb8HRTd9AwZSSlzzbAr9n
 WAuxDl55+kr+qwxnbnpiPuBtVt6ZTcI+l4dY2xY/suW3XRY8GTFcdcsoi5zX4ISSeUmRQXeZ
 f30lId1o9Img7slymO0GfQMk/boXwTA7uI8y7evZMlyvaJAA4SGo5Pg5lUfQDe7FdltNZg0L
 hT12bcrJZPCwjc9R6NkOQhx3lR5zWJSFcZ4JsuZkZkIP8jQa4UqZZa8FJeEZ8GEi6/4Ic7EP
 N2BMWZ4PpNa1uVY33Qo2EqmbWXLz0ONwbDRlJHtt2e0jBQknw8x0wExNYHlnNF8J4mUZFL6+
 nNL6wtnrBTSc0da757GY46ML2KI32IRQiJPHOZIFzhGq1CM3XRq4Tv6LFw/+2ucIxg9upBpH
 0AaiIqiYcfQTOfNSTV5uw0zvnkehTNYQjQ
X-Talos-CUID: 9a23:lTQ/fGONbGpgn+5DRXVXrUUkOsofeVLm6UfyG0idWENiYejA
X-Talos-MUID: 9a23:kr82ogkFfIc3tjdZ4hgsdnpQD+F35Yi/UXwzgMoZi5fHPy8ragyk2WE=
X-IronPort-AV: E=Sophos;i="5.99,247,1677560400"; 
   d="scan'208";a="107038349"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gdRdI2MYENYPMvkUlGRUrojLTp1a9nINojpWGkebCTIVzq5yPXblNqdzrW0cMBavHCM3LUHFBEgbSi57QcEyRhtIPL+B/YUdvno380OsB5/u2Djun0yLcuxW1GrZrpHs2tDMkDIUd/NKI6eQuuwK7656YmDHd2cjqUszqOlyIVx8rUPA3xvGHFt/CCwbh2absd9Zlb8jAJtiqXdKZhDLmPv02Ql6OteyOaB8oRTXBT6SIoiaeZiuzX7b5SZXyl2XmVrNOLBaxY33VQ46H9YdAltLCFQq+vGiU8Udmq6ZbUz2F8uZrVtP8++GgY3Y7XxEsyPz8SMioGryo4bYlsaFlw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GJeHLp374TLZGI7k5umNOKZxgQ00DxIGQ5macyftYIk=;
 b=Q6GWLFamqfwaRhklE/1Kzyx9+GMD/SZ5FjTiFuL4nruMKheqKzKdEcu4JUDJCXcgKBoYphtYjMkFV9vJVaEdsK3TnKPTwDBxj02HYzBJdSL+7/D7An2w/B46DfQN3JOB8Ium4JKuY9iy2YWbz6REqT51zLiaHUhZPdQSuzK4Rhhpzte7kpbrLDsTcTJ5d2Xfr6Q2zNhf3Scws8NaUJNecXhxi4AM1EEyCXXZ7/L/IguILxflt0X4nurdRAVUqpOfgY/NGzP51MRUDHKHK8Lm1g9OpogsVXwOFaW6b2ziUjay/oFIka69DkO7U5HRxgHwUH23sbOsdRWJ9j9Pjun9Iw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GJeHLp374TLZGI7k5umNOKZxgQ00DxIGQ5macyftYIk=;
 b=bZFKgATTh7lyuBDXnvYYZ0ZMmB81eKMrr3GhI79Smm6NRU8xZkO4jZRJsCQ6quHiA3z/VE6FZpAcTplRSKsmXqb3gRNZf4HbDnZKyHIiyauPYUn3WpGC+jBdXOEkwfni2XK4zQ4Oy0OhDWqP/dgKfG5+7ds1PATqL4yNVeju/CQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <836de116-bfac-3bf7-ef36-9f83d7f7dcc1@citrix.com>
Date: Wed, 3 May 2023 14:01:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: xen | Failed pipeline for staging | 0956aa22
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6452557e7028b_2a54f38959a5@gitlab-sidekiq-catchall-v2-bdc885877-cd8zv.mail>
 <7e21773f-7178-5c8c-f7bc-941308d0297a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <7e21773f-7178-5c8c-f7bc-941308d0297a@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LNXP265CA0011.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5e::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6455:EE_
X-MS-Office365-Filtering-Correlation-Id: 8f273861-27f2-4e3a-015f-08db4bd67540
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ehDi5b9lYW0T4FgBnZJNFEiKnjJwHHNzamW8VeU+W+ppsNO/PhejbPknKuvXx+H0AqFmt/k6FIvPmSlMZeD+R/XrtYo1u72Cz/WsH9EDMFjV2mnbK9Fl8oHKYtyL1437SRqxIr9J2eOihXY0sa72fiQ0jTE607YklNvF73CNr5CeBs55049a/pyeiqAZxVsl8NC9hBmACDyYjhA9/jrbypiZ1HGM9slRAyqpskfu2Wqc374Qkaie1JEct/V18zBsGnePIOobLwW6OP86hL0w+WAMAoSPywxcW+m6V7lzMekWlT1xtJ3CQc1sDlBc3sCFv9urdTPFVNcXTPHHoZaCLr2bq5+nIsI33fZifhssF0ir0v052gqdBqUPMzYRFoyMhez5O9r+69RMue+Ny91U+emiwtFiSJnbuyydoDeublNnQ9UbKIHZZl9YZfnpstarG2u6Ci5VIKh4b9uEgrewe8k3APQPmndYeSMAtNtd+ShuWKy4qJOkCU+Xf3+nNWX5108j4B7klkkSNxFLHHujutZwqkIxmlKnSNvd3lugINFpzL9wCiOvSWI8+vtARVfXiFrr6nrA0pKBFTXxko0kL7iQmfIAEut++hHQABfVl53e5e7xbb8ydOQrOgZb3DEmc1Z3h3lxsMqbeoAa5S6lu+1UZV4P7cyTPQYmDH/eMdCkMg8nDMXQUr6C4QnedFRt
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(136003)(39860400002)(366004)(346002)(451199021)(5660300002)(6506007)(966005)(8936002)(8676002)(83380400001)(2616005)(2906002)(186003)(36756003)(82960400001)(38100700002)(86362001)(31696002)(6512007)(26005)(53546011)(41300700001)(66946007)(66556008)(66476007)(478600001)(316002)(31686004)(6486002)(110136005)(6666004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WTNkalJ6SDNvbUphQ3ZieVMvL1JqY2l3LzU1S2ZjZWlkQnE3em11cmtiQVFN?=
 =?utf-8?B?aGRBZXVVdGFLZDlDL2Z3NWUrU1V4SVpQNGxvVDAzbjRxZzI4Ti9kdDhUTzFI?=
 =?utf-8?B?STlEK01XbnJabFlOWEtLU2IrTkdpUU9lb0g4UjhMbFpYOGI3Yy84UDhTTUFT?=
 =?utf-8?B?aTR3bDhHU3VXeVBUWm5wb0s4REZ2NWovV1RQREdQMGlXajRpU2FrZm9iQ3Q1?=
 =?utf-8?B?RGliSFhmVmFrcjB0c1p6N0JyKytpa0tEU1RTMERyb1RjOFZTN2pRM1pYTnpM?=
 =?utf-8?B?bHhNSjRTMFRRRlF1YWtBUERrU1pyNk1QNkpzOGRpdGY0VHBoMTk4R25XSHhP?=
 =?utf-8?B?TXZWZGpWOVU1c1hRVGlZdjgyWGh0YXlXOSs0QXU4d3JPUU1kYm5iclc0WmNl?=
 =?utf-8?B?Qk95ckdma0hyUFYzV2ZkbFZnUFRiVHA3TjFQZ0JFWWtQUEVTN242L2MzZ04v?=
 =?utf-8?B?aWV3Ym96Y2QrUm5NOWlOWWpyUWtwVEsraGdqVURFR1FwTU1CUkdlWUlFVmdW?=
 =?utf-8?B?Z1RGdkRCdVFiUmZYOGdTQS9jZXVsckFtTlREclcvL3JXaUpBNy9qQ1NYS0Zo?=
 =?utf-8?B?c1hPVFVMZUJMdWozdFVoOW5ZSE5vQmFCNDYvZ3RXWFZKR2dLZDdLdVV1Z1Fa?=
 =?utf-8?B?RzlwUmZTNUNJSFVXWFJTcmxzbW9neS82cHBjRE1Dak45LzZkUUNUSjM2eTdE?=
 =?utf-8?B?QVp0UXJrWTI5bldsNU03N0ZNSU5rZ2dsenBvdGF5Nm1ubUplWHN0SHVFWkx5?=
 =?utf-8?B?T1pQRkF2TGJVbUwrcXdFeW1ESTRuOFA2bzNmWWZEd2Y3VFAzS0NicG9RN0Nw?=
 =?utf-8?B?T1kvbnRGd0lVZXN0NVorbVN1ZmlmUlFqcXE4dlNYUFRGTlBLdEVkZVB6aWEz?=
 =?utf-8?B?L2tUZ1RlQnhNM3ZFV29YVnZZcHc3U01uekVyUUJEU3Bpb1k5amhacjZFQ0xi?=
 =?utf-8?B?OU5SZnd5dHZZMjJXUDRENUZ1Q1ZwWnZ1eEVPTUZxb1FReERrV2dKQ1NxWGdX?=
 =?utf-8?B?dVcraUlTS2tkVXMwSER0cEFHWEpJZTFuOFQ0d2p0M3k4cEkvc3FKQ2srSmxI?=
 =?utf-8?B?OHBCSE5TaTh3dWxGRDhyOUJwU1N6MlpWYmZFWDRadDNkSi82U3JRcnZjck53?=
 =?utf-8?B?QjhSSFRqUndmTldESi9zaVNGZ2ZucG1oZXlScDZWS1ZwMmZ4eXd1RXRaTFhN?=
 =?utf-8?B?Tm5EN1VFdVV1M1JEcXVOYlIwdUJTQnlGM0tPU0tjLzNJbnc2a0NKdWxmT2d6?=
 =?utf-8?B?VHJUUHlDOGNaM0dRTDRwdCtXdDJoS1ErQ1VvemE3QnFtOWdYcXU0QWdlZVN4?=
 =?utf-8?B?Rkc5TStIbTMxK1R3d0ZHckhZeHZibVhzV0srdGhTdkx1M1VjZXRoQ3hCemJ4?=
 =?utf-8?B?OU9CYXZtQVlLd3Y4ZjF1Rlg0Tk96T3pXL0h2bjdoWllRUUVwdXBleGRobGhx?=
 =?utf-8?B?ZG5VelArVUJXY2hUOWt4eDZ2Wm5ZeHBKSDg1OHBsQjIzL1hZaEMySnFNckpp?=
 =?utf-8?B?UHZKenZQOGZvRmMxNHpScWhwalkweTI5bUl2YkVCNXBsSk96cEVFbnRSOFFx?=
 =?utf-8?B?Zzh5WWdTeSt4TE41OGdhMjZ2MHFib3Q5NVNqcTNwb2xaNUROQXAvM3FnYkV1?=
 =?utf-8?B?MnNNT3cxUHRCUmVETVJ1Ty84bGFEQzgxNlZZWGIxcjdCaHcvOTJrSzFNWnhY?=
 =?utf-8?B?YUlQU2QzMGN3UGlVZ0pHMCt3OGtBYmM1ZVJLOEdUQ0tZR3pzOGVzUnUrZ25o?=
 =?utf-8?B?VTlwM3JOWk9QWFJxckNNaGExYzd1ZTFzREQvaDMwb1RacXVHQW9DalBHT1lH?=
 =?utf-8?B?bU91a0NyRmk4d1RwU3M1RHU2SFIyeWp6em9TUytIWnIzTDU0eUx1RHhWRzdZ?=
 =?utf-8?B?Z1R4dE1QWFNEcytJN3BQNEUrcXhQMDkrdyt2N2hPeWxPbUpWdVBMT0Z1bWxT?=
 =?utf-8?B?dTl4WjhReFdLUEFyQ2dCdTdwelIzekhkYnBaNW1QSGhXbC81MG5RNnViTE1t?=
 =?utf-8?B?MjdERzBLMHY3enpvSkdFTmJzOVVYS3Rkb1hlTEV5MnYvM0lmTk5sN1gzcldn?=
 =?utf-8?B?QkF1UnE5NjNKdGxTYko0RERDcWt0N05iZkdOMG5OT2ZESmZYb053NVZ3U01k?=
 =?utf-8?B?UEF4aGY2UVhpZk5UVDcrbjYvSWxhdWhENWZXMXZ4ZG1xTjlkZnVUV2VnVVgw?=
 =?utf-8?B?ZUE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	xJrZd+XJIx6oQil3B6AGjOLY9Zqx52+ET4H7bB384ui439eVrMEmWEpRzbcwQB5rQ+B9qVugG0291cVPmaWLMk5rihcDKLIh3Avo3Y4FpWjQrzmoaB7mBxU+SPfqq4F3yjoopHs2mTduoV6I9uvzlUSPnq4J7nX3hOEc6IOz/mG/tUVLF5N5OdL0OWoaomuGtqxBoCKUYRa7vlxuFPFpZddTgnJw/J9QA0q9NhbZAQEsbog4nUGSlZ9ja1rVjlmYpYyF+nBQdHtfnGRHZDNWFpJi/zmVd6NLwfJoHkflJSut8242lByuiOzs9zwhYmt3dZr2wRr97Nwdw9mOvFuYwAduWqJId0u0SUFel075WzPsyuZvNKPg/y42SqYAelYOcgAYxZX9uftzMkC+inqiDQ545exvnMCBnbAAneGmb7aRoTnIf8VP/gP3DsKyPHkDYoWay3kF/gA74qXv/dhgi9NuQTpqWJXz2SC0kYiEsnOvqKyLh9t3frEEvwhRG1hY9G3tT9idc+qzGF4gkOq35eEwN5NCN61JQaq2OPaOFXDItipFYnjOhSkZ0M/Nc3AiX++tDGm1TLtze3PU2DSWAAfjDINtSyJXz0+PfpRs29kFDBjLzSlU9hJoh2QGK1MFxLkWMgi+MH3YS2OMIm2dhC+rD/xGos0AV9EiFaI9sxCkOk8tdlaZEHFROmd3hopHngFNHUqNik9fMJPpspegkMAZxTxqBLwQVAhgB7OfKpmbDmBJMVyOHerWu8VcCTKXdV2yU4oFKKNEwfxz64zczr3LvqsQaCXIchV9NCV1JOw=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f273861-27f2-4e3a-015f-08db4bd67540
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 13:01:06.8357
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wX9hDx2/qkMUM+b6snqtBuXKh5C2Uc6EabiL+4KnJpxLz4qhQILX5TgBeSpnZK3cYgrhIk3WPU88vrvF+N2xVztoyWfKR++r18EEi7ckBrk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6455

On 03/05/2023 1:59 pm, Jan Beulich wrote:
> On 03.05.2023 14:37, GitLab wrote:
>>
>> Pipeline #855820014 has failed!
>>
>> Project: xen ( https://gitlab.com/xen-project/xen )
>> Branch: staging ( https://gitlab.com/xen-project/xen/-/commits/staging )
>>
>> Commit: 0956aa22 ( https://gitlab.com/xen-project/xen/-/commit/0956aa2219745a198bb6a0a99e2108a3c09b280e )
>> Commit Message: x86/mm: replace bogus assertion in paging_log_d...
>> Commit Author: Jan Beulich ( https://gitlab.com/jbeulich )
>>
>> Pipeline #855820014 ( https://gitlab.com/xen-project/xen/-/pipelines/855820014 ) triggered by Ganis ( https://gitlab.com/ganis )
>> had 6 failed jobs.
>>
>> Job #4218447934 ( https://gitlab.com/xen-project/xen/-/jobs/4218447934/raw )
>>
>> Stage: build
>> Name: opensuse-tumbleweed-clang
>> Job #4218447943 ( https://gitlab.com/xen-project/xen/-/jobs/4218447943/raw )
>>
>> Stage: build
>> Name: opensuse-tumbleweed-gcc-debug
>> Job #4218447940 ( https://gitlab.com/xen-project/xen/-/jobs/4218447940/raw )
>>
>> Stage: build
>> Name: opensuse-tumbleweed-gcc
>> Job #4218447966 ( https://gitlab.com/xen-project/xen/-/jobs/4218447966/raw )
>>
>> Stage: test
>> Name: adl-pci-hvm-x86-64-gcc-debug
>> Job #4218447937 ( https://gitlab.com/xen-project/xen/-/jobs/4218447937/raw )
>>
>> Stage: build
>> Name: opensuse-tumbleweed-clang-debug
> Two of the build failures look to be addressed by Olaf's still pending
> "tools/libs/guest: assist gcc13's realloc analyzer". I guess I'm going
> to commit this with (just) the two R-b that there are.
>
> The other two are less clear, at least to me:
>
> checking for openpty et al... configure: error: in `/builds/xen-project/xen/tools':
> configure: error: Unable to find library for openpty and login_tty
> See `config.log' for more details
> configure: error: ./configure failed for tools

Olaf has a patch for these too.  "[PATCH v1] tools: drop bogus and
obsolete ptyfuncs.m4"

I haven't got around to looking at it yet, as I'm still chasing the ARM bug.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 03 13:03:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 13:03:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529202.823326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC90-0003Ga-CF; Wed, 03 May 2023 13:03:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529202.823326; Wed, 03 May 2023 13:03:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puC90-0003GT-92; Wed, 03 May 2023 13:03:42 +0000
Received: by outflank-mailman (input) for mailman id 529202;
 Wed, 03 May 2023 13:03:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puC8y-0003GN-Co
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:03:40 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puC8x-0002hD-Gd; Wed, 03 May 2023 13:03:39 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puC8x-0003lm-8P; Wed, 03 May 2023 13:03:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=h+RlaBcIWCR28e7I+kfx12e/YKFC6yRGKVdpoY41KuE=; b=gTJjWFEhVB2YsmAO8bIy28t2sD
	OBU4/9XTrlMJUXWRFW4ZDPR/dGbBNe3DfLkmKxkmPy1U//CI1upBhp44eXQrmsP2NVhgTB2ogEeQt
	8pBiHpPjVhgSxLlTYnvBfwdVg1LC/33QG0k/nuYviMWpzW+pJ7GNpquaxdbb7YAdqRWY=;
Message-ID: <d918cc78-de22-9599-9a91-f6c11028d11b@xen.org>
Date: Wed, 3 May 2023 14:03:37 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 04/13] tools/xenstore: let hashtable_insert() return 0
 on success
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-5-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230330085011.9170-5-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 30/03/2023 09:50, Juergen Gross wrote:
> Today hashtable_insert() returns 0 in case of an error. Change that to
> let it return an errno value in the error case and 0 in case of success.

I usually find this kind of change risky: it makes backports more 
complex if a new call to hashtable_insert() is introduced later, and it 
is also quite difficult to review (the compiler will not help confirm 
that all the callers have been updated).

So can you provide a compelling reason for making the change? 
(Consistency alone would not be one, IMO.)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 13:06:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 13:06:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529205.823337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCBK-0003qz-Nt; Wed, 03 May 2023 13:06:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529205.823337; Wed, 03 May 2023 13:06:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCBK-0003qs-Kw; Wed, 03 May 2023 13:06:06 +0000
Received: by outflank-mailman (input) for mailman id 529205;
 Wed, 03 May 2023 13:06:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0hrU=AY=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puCBK-0003qm-2x
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:06:06 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 421be6c3-e9b3-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 15:06:04 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-21-PTV4Sh6-PaqKGLOW2cNEKg-1; Wed, 03 May 2023 09:05:59 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 92E9A885627;
 Wed,  3 May 2023 13:05:58 +0000 (UTC)
Received: from localhost (unknown [10.39.195.169])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6C56C2026D16;
 Wed,  3 May 2023 13:05:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 421be6c3-e9b3-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683119163;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Phu2rUngwbJv3O5WPwObndw41FS7BwEFsVdpGZ12G2o=;
	b=MjA20ljRzl3jfPkuzyrEiCNfJLcmHX/6jllIzT490eNt2wn9EfbF9ct9YQSFlt2X0MyVTZ
	9a6mweYEbJ5dp/PimhVeyFCFWBOJ6//VGgHlnnyafyNtWDQLk696kSaLrDqeBGrs2P1Tp1
	wkZfWXvflCgv1uuFEiEYnGsr30sofGs=
X-MC-Unique: PTV4Sh6-PaqKGLOW2cNEKg-1
Date: Wed, 3 May 2023 09:05:50 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 04/20] virtio-scsi: stop using aio_disable_external()
 during unplug
Message-ID: <20230503130550.GA757667@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-5-stefanha@redhat.com>
 <ZEvWv8dF78Jpb6CQ@redhat.com>
 <20230501150934.GA14869@fedora>
 <ZFEN+KY8JViTDtv/@redhat.com>
 <20230502200243.GD535070@fedora>
 <ZFJIQW6RpndfCcXR@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="SX9dxWjHVceQRg4J"
Content-Disposition: inline
In-Reply-To: <ZFJIQW6RpndfCcXR@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4


--SX9dxWjHVceQRg4J
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, May 03, 2023 at 01:40:49PM +0200, Kevin Wolf wrote:
> Am 02.05.2023 um 22:02 hat Stefan Hajnoczi geschrieben:
> > On Tue, May 02, 2023 at 03:19:52PM +0200, Kevin Wolf wrote:
> > > Am 01.05.2023 um 17:09 hat Stefan Hajnoczi geschrieben:
> > > > On Fri, Apr 28, 2023 at 04:22:55PM +0200, Kevin Wolf wrote:
> > > > > Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > > > > > This patch is part of an effort to remove the aio_disable_external()
> > > > > > API because it does not fit in a multi-queue block layer world where
> > > > > > many AioContexts may be submitting requests to the same disk.
> > > > > >
> > > > > > The SCSI emulation code is already in good shape to stop using
> > > > > > aio_disable_external(). It was only used by commit 9c5aad84da1c
> > > > > > ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
> > > > > > disk") to ensure that virtio_scsi_hotunplug() works while the guest
> > > > > > driver is submitting I/O.
> > > > > >
> > > > > > Ensure virtio_scsi_hotunplug() is safe as follows:
> > > > > >
> > > > > > 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
> > > > > >    device_set_realized() calls qatomic_set(&dev->realized, false) so
> > > > > >    that future scsi_device_get() calls return NULL because they exclude
> > > > > >    SCSIDevices with realized=false.
> > > > > >
> > > > > >    That means virtio-scsi will reject new I/O requests to this
> > > > > >    SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
> > > > > >    virtio_scsi_hotunplug() is still executing. We are protected against
> > > > > >    new requests!
> > > > > >
> > > > > > 2. Add a call to scsi_device_purge_requests() from scsi_unrealize() so
> > > > > >    that in-flight requests are cancelled synchronously. This ensures
> > > > > >    that no in-flight requests remain once qdev_simple_device_unplug_cb()
> > > > > >    returns.
> > > > > >
> > > > > > Thanks to these two conditions we don't need aio_disable_external()
> > > > > > anymore.
> > > > > >
> > > > > > Cc: Zhengui Li <lizhengui@huawei.com>
> > > > > > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > > > Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> > > > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > >
> > > > > qemu-iotests 040 starts failing for me after this patch, with what looks
> > > > > like a use-after-free error of some kind.
> > > > >
> > > > > (gdb) bt
> > > > > #0  0x000055b6e3e1f31c in job_type (job=0xe3e3e3e3e3e3e3e3) at ../job.c:238
> > > > > #1  0x000055b6e3e1cee5 in is_block_job (job=0xe3e3e3e3e3e3e3e3) at ../blockjob.c:41
> > > > > #2  0x000055b6e3e1ce7d in block_job_next_locked (bjob=0x55b6e72b7570) at ../blockjob.c:54
> > > > > #3  0x000055b6e3df6370 in blockdev_mark_auto_del (blk=0x55b6e74af0a0) at ../blockdev.c:157
> > > > > #4  0x000055b6e393e23b in scsi_qdev_unrealize (qdev=0x55b6e7c04d40) at ../hw/scsi/scsi-bus.c:303
> > > > > #5  0x000055b6e3db0d0e in device_set_realized (obj=0x55b6e7c04d40, value=false, errp=0x55b6e497c918 <error_abort>) at ../hw/core/qdev.c:599
> > > > > #6  0x000055b6e3dba36e in property_set_bool (obj=0x55b6e7c04d40, v=0x55b6e7d7f290, name=0x55b6e41bd6d8 "realized", opaque=0x55b6e7246d20, errp=0x55b6e497c918 <error_abort>)
> > > > >     at ../qom/object.c:2285
> > > > > #7  0x000055b6e3db7e65 in object_property_set (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", v=0x55b6e7d7f290, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1420
> > > > > #8  0x000055b6e3dbd84a in object_property_set_qobject (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=0x55b6e74c1890, errp=0x55b6e497c918 <error_abort>)
> > > > >     at ../qom/qom-qobject.c:28
> > > > > #9  0x000055b6e3db8570 in object_property_set_bool (obj=0x55b6e7c04d40, name=0x55b6e41bd6d8 "realized", value=false, errp=0x55b6e497c918 <error_abort>) at ../qom/object.c:1489
> > > > > #10 0x000055b6e3daf2b5 in qdev_unrealize (dev=0x55b6e7c04d40) at ../hw/core/qdev.c:306
> > > > > #11 0x000055b6e3db509d in qdev_simple_device_unplug_cb (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/qdev-hotplug.c:72
> > > > > #12 0x000055b6e3c520f9 in virtio_scsi_hotunplug (hotplug_dev=0x55b6e81c3630, dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/scsi/virtio-scsi.c:1065
> > > > > #13 0x000055b6e3db4dec in hotplug_handler_unplug (plug_handler=0x55b6e81c3630, plugged_dev=0x55b6e7c04d40, errp=0x7ffec5519200) at ../hw/core/hotplug.c:56
> > > > > #14 0x000055b6e3a28f84 in qdev_unplug (dev=0x55b6e7c04d40, errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:935
> > > > > #15 0x000055b6e3a290fa in qmp_device_del (id=0x55b6e74c1760 "scsi0", errp=0x7ffec55192e0) at ../softmmu/qdev-monitor.c:955
> > > > > #16 0x000055b6e3fb0a5f in qmp_marshal_device_del (args=0x7f61cc005eb0, ret=0x7f61d5a8ae38, errp=0x7f61d5a8ae40) at qapi/qapi-commands-qdev.c:114
> > > > > #17 0x000055b6e3fd52e1 in do_qmp_dispatch_bh (opaque=0x7f61d5a8ae08) at ../qapi/qmp-dispatch.c:128
> > > > > #18 0x000055b6e4007b9e in aio_bh_call (bh=0x55b6e7dea730) at ../util/async.c:155
> > > > > #19 0x000055b6e4007d2e in aio_bh_poll (ctx=0x55b6e72447c0) at ../util/async.c:184
> > > > > #20 0x000055b6e3fe3b45 in aio_dispatch (ctx=0x55b6e72447c0) at ../util/aio-posix.c:421
> > > > > #21 0x000055b6e4009544 in aio_ctx_dispatch (source=0x55b6e72447c0, callback=0x0, user_data=0x0) at ../util/async.c:326
> > > > > #22 0x00007f61ddc14c7f in g_main_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:3454
> > > > > #23 g_main_context_dispatch (context=0x55b6e7244b20) at ../glib/gmain.c:4172
> > > > > #24 0x000055b6e400a7e8 in glib_pollfds_poll () at ../util/main-loop.c:290
> > > > > #25 0x000055b6e400a0c2 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313
> > > > > #26 0x000055b6e4009fa2 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:592
> > > > > #27 0x000055b6e3a3047b in qemu_main_loop () at ../softmmu/runstate.c:731
> > > > > #28 0x000055b6e3dab27d in qemu_default_main () at ../softmmu/main.c:37
> > > > > #29 0x000055b6e3dab2b8 in main (argc=24, argv=0x7ffec55196a8) at ../softmmu/main.c:48
> > > > > (gdb) p jobs
> > > > > $4 = {lh_first = 0x0}
> > > >
> > > > I wasn't able to reproduce this with gcc 13.1.1 or clang 16.0.1:
> > > >
> > > >   $ tests/qemu-iotests/check -qcow2 040
> > > >
> > > > Any suggestions on how to reproduce the issue?
> > >
> > > It happens consistently for me with the same command line, both with gcc
> > > and clang.
> > >
> > > gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
> > > clang version 15.0.7 (Fedora 15.0.7-2.fc37)
> > >
> > > Maybe there is a semantic merge conflict? I have applied the series on
> > > top of master (05d50ba2d4) and my block branch (88f81f7bc8).
> >
> > I can't find 88f81f7bc8 but rebased on repo.or.cz/qemu/kevin.git block
> > (4514dac7f2e9) and the test passes here.
> >
> > I rebased on qemu.git/master (05d50ba2d4) and it also passes.
> >
> > Please let me know if the following tree (a0ff680a72f6) works on your
> > machine:
> > https://gitlab.com/stefanha/qemu/-/tree/remove-aio_disable_external
>
> Fails in the same way.
>
> So I tried to debug this myself now. The problem is that iterating the
> jobs in blockdev_mark_auto_del() is incorrect: job_cancel_locked()
> frees the job and then block_job_next_locked() is a use after free.
>
> It also drops job_mutex temporarily and polls, so even switching to a
> *_FOREACH_SAFE style loop won't fix this. I guess we have to restart
> the whole search from the start after a job_cancel_locked() because the
> list might look very different after the call.
>
> Now, of course, how this is related to your patch and why it doesn't
> trigger before it, is still less than clear. What I found out is that
> adding the scsi_device_purge_requests() is enough to crash it. Maybe
> it's related to the blk_drain() inside of it. That the job finishes
> earlier during the unplug now or something like that.
>
> Anyway, changing blockdev_mark_auto_del() fixes it. I'll send a patch.

Thanks, and sorry for taking your time with debugging it!

Stefan

--SX9dxWjHVceQRg4J
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRSXC4ACgkQnKSrs4Gr
c8i74Qf6AklkO6aWrRBz6XAZ8dK/EiYL1W+TDr0RyrsU7nCVkvdlyP7AlGvIyjcs
YrK8FIjM+cjgHKnMkRqFBMn0xVQMrnSNZ2cLDZaVY4DlgVlmYmmt0RNnQWlftAuc
UJkymWg5dZV6hw/TI7ku1+pRnZRx32fxLiMIn+R1tkD8+OX+JMeN7WG3cAhvpPUI
m24TcHXZrqNO0kV4qXoM4GJNkDkYLaF1pXQ4MsZpPHcr8Zzhyn+5vlVmo+FEp7iX
ozXdMSGcA+fQ3yUGEIUISvZMrc3f8aIfaV29gti8Sj+ekULPGAS9It/2ZJuZTRos
Rbou/5fwuiQX6myLLQCGj7HQuwlZ+g==
=GY3E
-----END PGP SIGNATURE-----

--SX9dxWjHVceQRg4J--



From xen-devel-bounces@lists.xenproject.org Wed May 03 13:07:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 13:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529209.823347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCCd-0004Sd-6i; Wed, 03 May 2023 13:07:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529209.823347; Wed, 03 May 2023 13:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCCd-0004SW-3U; Wed, 03 May 2023 13:07:27 +0000
Received: by outflank-mailman (input) for mailman id 529209;
 Wed, 03 May 2023 13:07:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puCCb-0004SJ-Iy
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:07:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puCCb-0002lj-IC
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:07:25 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puCCb-0003xa-BP
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:07:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:To:Subject:MIME-Version:Date:Message-ID;
	bh=J25v0xeHQmJ98iK3AGD9avQ1ZwSuZ9KoC9ZQDOtObug=; b=hURQbTZRe8JfAQoef9xYeABi2j
	kXmArVjZEHjOQsGZtAijTPbsE0Prcsco1uEU5ZHnSCjrx5DH0kM/8FfLZVjkJfSUTIzexYSjGtiQD
	2rhZzm7wwhRKtWYdyJGliaidH+mpIxAgEMWqUCA9GjiiUEAp976D3gUPcf0kAq77iRVM=;
Message-ID: <764c6eaf-3f3a-bf27-3678-391b12f56cd9@xen.org>
Date: Wed, 3 May 2023 14:07:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 07/13] tools/xenstore: remove stale TODO file
Content-Language: en-US
To: xen-devel@lists.xenproject.org
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-8-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230330085011.9170-8-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 30/03/2023 09:50, Juergen Gross wrote:
> The TODO file is not really helpful any longer. It contains only
> entries which no longer apply or it is unknown what they are meant
> for ("Dynamic/supply nodes", "Remove assumption that rename doesn't
> fail").
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 13:08:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 13:08:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529213.823357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCDI-0004yQ-FC; Wed, 03 May 2023 13:08:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529213.823357; Wed, 03 May 2023 13:08:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCDI-0004yJ-CB; Wed, 03 May 2023 13:08:08 +0000
Received: by outflank-mailman (input) for mailman id 529213;
 Wed, 03 May 2023 13:08:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puCDG-0004yB-NV
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:08:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puCDF-0002mb-Vj; Wed, 03 May 2023 13:08:05 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puCDF-0003zY-PD; Wed, 03 May 2023 13:08:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=YtF1M7NzKn1kvI4rH1OiF33P+f9VuzkgDiqImSOPZ5g=; b=twKI5E1Z5r7GYXTFVie4BcJYus
	/0FFJdFrAHh/OudyAVF9ZcYwMwZmFFjcPFaG9DuZkb+uygS7v6M5g8+D7JwJkF1cER3lZCIG2Kc2U
	x4XJForuSvE4COwaBet9eeEvDAETZa0VJkPWC9CXOVIf4nJ4lbE/OWMlFFyqUAPE8bqE=;
Message-ID: <f6af4f23-cd32-51b9-b805-6bfb114f3468@xen.org>
Date: Wed, 3 May 2023 14:08:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 08/13] tools/xenstore: remove unused events list
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-9-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230330085011.9170-9-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 30/03/2023 09:50, Juergen Gross wrote:
> struct watch contains an used struct list_head events. Remove it.

Typo: s/used/unused/?

> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
>   tools/xenstore/xenstored_watch.c | 5 -----
>   1 file changed, 5 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
> index e8eb35de02..4195c59e17 100644
> --- a/tools/xenstore/xenstored_watch.c
> +++ b/tools/xenstore/xenstored_watch.c
> @@ -36,9 +36,6 @@ struct watch
>   	/* Watches on this connection */
>   	struct list_head list;
>   
> -	/* Current outstanding events applying to this watch. */
> -	struct list_head events;
> -
>   	/* Offset into path for skipping prefix (used for relative paths). */
>   	unsigned int prefix_len;
>   
> @@ -205,8 +202,6 @@ static struct watch *add_watch(struct connection *conn, char *path, char *token,
>   
>   	watch->prefix_len = relative ? strlen(get_implicit_path(conn)) + 1 : 0;
>   
> -	INIT_LIST_HEAD(&watch->events);
> -
>   	domain_watch_inc(conn);
>   	list_add_tail(&watch->list, &conn->watches);
>   	talloc_set_destructor(watch, destroy_watch);

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 13:11:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 13:11:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529215.823366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCGd-0006SK-TD; Wed, 03 May 2023 13:11:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529215.823366; Wed, 03 May 2023 13:11:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCGd-0006SD-Qh; Wed, 03 May 2023 13:11:35 +0000
Received: by outflank-mailman (input) for mailman id 529215;
 Wed, 03 May 2023 13:11:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0hrU=AY=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puCGc-0006S7-6o
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:11:34 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 04e02d26-e9b4-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 15:11:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-400-scHiVcTtPcyrb409VnmdJw-1; Wed, 03 May 2023 09:11:29 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C73D1185A7A2;
 Wed,  3 May 2023 13:11:27 +0000 (UTC)
Received: from localhost (unknown [10.39.195.169])
 by smtp.corp.redhat.com (Postfix) with ESMTP id CD6E3492C1B;
 Wed,  3 May 2023 13:11:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04e02d26-e9b4-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683119490;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zDf5y2EkHT9U0n319r/7zDgcT1di3Iez/iSIsbs73yg=;
	b=DomLdPxnEEBmpmTt9RiVVGSaBm/Lr0x7JARg2vzdbJJJAxynU7WHPUMN54enoHsBr5g1Jh
	6kopnctjm6D7lonwZK+1NEDnU3slT4BrfzMz8YwXsl8ciOOevVc8T6C0KuwusR7VOXHmlP
	1FEAf5WtU243XZ/9zD5UZpZ1aicqSuY=
X-MC-Unique: scHiVcTtPcyrb409VnmdJw-1
Date: Wed, 3 May 2023 09:11:25 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 07/20] block/export: stop using is_external in
 vhost-user-blk server
Message-ID: <20230503131125.GB757667@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-8-stefanha@redhat.com>
 <ZFE0iFnbr2ey0A7X@redhat.com>
 <20230502200645.GE535070@fedora>
 <ZFIWjuST/9tHVNMG@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="7BkvsqboKjcTwbWK"
Content-Disposition: inline
In-Reply-To: <ZFIWjuST/9tHVNMG@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9


--7BkvsqboKjcTwbWK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, May 03, 2023 at 10:08:46AM +0200, Kevin Wolf wrote:
> Am 02.05.2023 um 22:06 hat Stefan Hajnoczi geschrieben:
> > On Tue, May 02, 2023 at 06:04:24PM +0200, Kevin Wolf wrote:
> > > Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > > > vhost-user activity must be suspended during bdrv_drained_begin/end=
().
> > > > This prevents new requests from interfering with whatever is happen=
ing
> > > > in the drained section.
> > > >=20
> > > > Previously this was done using aio_set_fd_handler()'s is_external
> > > > argument. In a multi-queue block layer world the aio_disable_extern=
al()
> > > > API cannot be used since multiple AioContext may be processing I/O,=
 not
> > > > just one.
> > > >=20
> > > > Switch to BlockDevOps->drained_begin/end() callbacks.
> > > >=20
> > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > ---
> > > >  block/export/vhost-user-blk-server.c | 43 ++++++++++++++----------=
----
> > > >  util/vhost-user-server.c             | 10 +++----
> > > >  2 files changed, 26 insertions(+), 27 deletions(-)
> > > >=20
> > > > diff --git a/block/export/vhost-user-blk-server.c b/block/export/vh=
ost-user-blk-server.c
> > > > index 092b86aae4..d20f69cd74 100644
> > > > --- a/block/export/vhost-user-blk-server.c
> > > > +++ b/block/export/vhost-user-blk-server.c
> > > > @@ -208,22 +208,6 @@ static const VuDevIface vu_blk_iface =3D {
> > > >      .process_msg           =3D vu_blk_process_msg,
> > > >  };
> > > > =20
> > > > -static void blk_aio_attached(AioContext *ctx, void *opaque)
> > > > -{
> > > > -    VuBlkExport *vexp =3D opaque;
> > > > -
> > > > -    vexp->export.ctx =3D ctx;
> > > > -    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
> > > > -}
> > > > -
> > > > -static void blk_aio_detach(void *opaque)
> > > > -{
> > > > -    VuBlkExport *vexp =3D opaque;
> > > > -
> > > > -    vhost_user_server_detach_aio_context(&vexp->vu_server);
> > > > -    vexp->export.ctx =3D NULL;
> > > > -}
> > >=20
> > > So for changing the AioContext, we now rely on the fact that the node=
 to
> > > be changed is always drained, so the drain callbacks implicitly cover
> > > this case, too?
> >=20
> > Yes.
>=20
> Ok. This surprised me a bit at first, but I think it's fine.
>=20
> We just need to remember it if we ever decide that once we have
> multiqueue, we can actually change the default AioContext without
> draining the node. But maybe at that point, we have to do more
> fundamental changes anyway.
>=20
> > > >  static void
> > > >  vu_blk_initialize_config(BlockDriverState *bs,
> > > >                           struct virtio_blk_config *config,
> > > > @@ -272,6 +256,25 @@ static void vu_blk_exp_resize(void *opaque)
> > > >      vu_config_change_msg(&vexp->vu_server.vu_dev);
> > > >  }
> > > > =20
> > > > +/* Called with vexp->export.ctx acquired */
> > > > +static void vu_blk_drained_begin(void *opaque)
> > > > +{
> > > > +    VuBlkExport *vexp =3D opaque;
> > > > +
> > > > +    vhost_user_server_detach_aio_context(&vexp->vu_server);
> > > > +}
> > >=20
> > > Compared to the old code, we're losing the vexp->export.ctx =3D NULL.=
 This
> > > is correct at this point because after drained_begin we still keep
> > > processing requests until we arrive at a quiescent state.
> > >=20
> > > However, if we detach the AioContext because we're deleting the
> > > iothread, won't we end up with a dangling pointer in vexp->export.ctx?
> > > Or can we be certain that nothing interesting happens before drained_=
end
> > > updates it with a new valid pointer again?
> >=20
> > If you want I can add the detach() callback back again and set ctx to
> > NULL there?
>=20
> I haven't thought enough about it to say if it's a problem. If you have
> and are confident that it's correct the way it is, I'm happy with it.
>
> But bringing the callback back is the minimal change compared to the old
> state. It's just unnecessary code if we don't actually need it.

The reasoning behind my patch is that detach() sets NULL today and we
would see crashes if ctx was accessed between detach() -> attach().
Therefore, I'm assuming there are no ctx accesses in the code today and
removing the ctx =3D NULL assignment doesn't break anything.

However, my approach is not very defensive. If the code is changed in a
way that accesses ctx when it's not supposed to, then a dangling pointer
will be accessed.

I think leaving the detach() callback there can be justified because it
will make it easier to detect bugs in the future. I'll add it back in
the next revision.

Stefan

--7BkvsqboKjcTwbWK
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRSXX0ACgkQnKSrs4Gr
c8gM4wf9EGB7urw2+olwINpeSZIIcwDydnYjHtFLdg2c1j5VCnMBvL9ZLGU43pUg
9ZjyblYuKFxm/lepAy8L67L9X7MWyUDAC5RVjvTKviQuEcYzsXdKqxCM7FDLz5/G
z+jOU1ds4cfCAL6RhRfkEZPClubmNTp9kuWXpsbISy6+WfMGui4957grADrXgH47
qL7ERau5lMwU+NVljzZ7q0U+hPdu1j7jVM5qq34NXLoZTAmDNpaUr29MGJGCYgEP
xnv2eAJ/2+bJ4uDvd0G8L93nbpXSulv9l+bTViQEjDP1R4tPxtiXQ7WNfkRoVrHe
H4iug75lp6O5k2vDRwNAbqrlaRJZLw==
=6nP0
-----END PGP SIGNATURE-----

--7BkvsqboKjcTwbWK--



From xen-devel-bounces@lists.xenproject.org Wed May 03 13:12:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 13:12:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529218.823377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCHk-0006yp-6Z; Wed, 03 May 2023 13:12:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529218.823377; Wed, 03 May 2023 13:12:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCHk-0006yi-3s; Wed, 03 May 2023 13:12:44 +0000
Received: by outflank-mailman (input) for mailman id 529218;
 Wed, 03 May 2023 13:12:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0hrU=AY=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puCHi-0006yU-7u
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:12:42 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2dc9f713-e9b4-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 15:12:39 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-244-nouLQjL3P5uWvchsYPzZzg-1; Wed, 03 May 2023 09:12:37 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B4C4B38173C1;
 Wed,  3 May 2023 13:12:36 +0000 (UTC)
Received: from localhost (unknown [10.39.195.169])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 3686B40C6E68;
 Wed,  3 May 2023 13:12:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2dc9f713-e9b4-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683119559;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MvTBKYXxzCnGpUAnqw7NUl7Kpg6AmIw7nyBP3ZgVcq8=;
	b=D/yynmV2oe6C3Dwi5G3YeNQxt5WO5CPuaZQbRv9nPDnHUL9r8kSXaeFkjm2+ceTcdOcWC8
	H8HszutuW+3TaR5y3jUFsgqwVfgAuq16JAbcVQ0QLKMcmS195sH4b4ELd0+ekO79FZiOdV
	r1hKMyruujc5qAcsyPDOiMwt5VroAsM=
X-MC-Unique: nouLQjL3P5uWvchsYPzZzg-1
Date: Wed, 3 May 2023 09:12:34 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v4 03/20] virtio-scsi: avoid race between unplug and
 transport event
Message-ID: <20230503131234.GC757667@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-4-stefanha@redhat.com>
 <ZFEqEkG4ktn9bBFN@redhat.com>
 <20230502185624.GA535070@fedora>
 <ZFIUue5ouDtch31y@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="hRXnVXzvq+032LBu"
Content-Disposition: inline
In-Reply-To: <ZFIUue5ouDtch31y@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2


--hRXnVXzvq+032LBu
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, May 03, 2023 at 10:00:57AM +0200, Kevin Wolf wrote:
> Am 02.05.2023 um 20:56 hat Stefan Hajnoczi geschrieben:
> > On Tue, May 02, 2023 at 05:19:46PM +0200, Kevin Wolf wrote:
> > > Am 25.04.2023 um 19:26 hat Stefan Hajnoczi geschrieben:
> > > > Only report a transport reset event to the guest after the SCSIDevi=
ce
> > > > has been unrealized by qdev_simple_device_unplug_cb().
> > > >=20
> > > > qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized =
field
> > > > to false so that scsi_device_find/get() no longer see it.
> > > >=20
> > > > scsi_target_emulate_report_luns() also needs to be updated to filte=
r out
> > > > SCSIDevices that are unrealized.
> > > >=20
> > > > These changes ensure that the guest driver does not see the SCSIDev=
ice
> > > > that's being unplugged if it responds very quickly to the transport
> > > > reset event.
> > > >=20
> > > > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > >=20
> > > > @@ -1082,6 +1073,15 @@ static void virtio_scsi_hotunplug(HotplugHan=
dler *hotplug_dev, DeviceState *dev,
> > > >          blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), =
NULL);
> > > >          virtio_scsi_release(s);
> > > >      }
> > > > +
> > > > +    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
> > > > +        virtio_scsi_acquire(s);
> > > > +        virtio_scsi_push_event(s, sd,
> > > > +                               VIRTIO_SCSI_T_TRANSPORT_RESET,
> > > > +                               VIRTIO_SCSI_EVT_RESET_REMOVED);
> > > > +        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED)=
);
> > > > +        virtio_scsi_release(s);
> > > > +    }
> > > >  }
> > >=20
> > > s, sd and s->bus are all unrealized at this point, whereas before this
> > > patch they were still realized. I couldn't find any practical problem
> > > with it, but it made me nervous enough that I thought I should comment
> > > on it at least.
> > >=20
> > > Should we maybe have documentation on these functions that says that
> > > they accept unrealized objects as their parameters?
> >=20
> > s is the VirtIOSCSI controller, not the SCSIDevice that is being
> > unplugged. The VirtIOSCSI controller is still realized.
> >=20
> > s->bus is the VirtIOSCSI controller's bus, it is still realized.
>=20
> You're right, I misread this part.
>=20
> > You are right that the SCSIDevice (sd) has been unrealized at this
> > point:
> > - sd->conf.blk is safe because qdev properties stay alive until the
> >   Object is deleted, but I'm not sure we should rely on that.
>=20
> This feels relatively safe (and it's preexisting anyway), reading a
> property doesn't do anything unpredictable and we know the pointer is
> still valid.
>=20
> > - virtio_scsi_push_event(.., sd, ...) is questionable because the LUN
> >   that's fetched from sd no longer belongs to the unplugged SCSIDevice.
>=20
> This call is what made me nervous.
>=20
> > How about I change the code to fetch sd->conf.blk and the LUN before
> > unplugging?
>=20
> You mean passing sd->id and sd->lun to virtio_scsi_push_event() instead
> of sd itself? That would certainly look cleaner and make sure that we
> don't later add code to it that does something with sd that would
> require it to be realized.

Yes, I'll do that in the next revision.

Stefan

--hRXnVXzvq+032LBu
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRSXcIACgkQnKSrs4Gr
c8iBrQgAnO/r1Kps6Svp0/9nUqUx3S65i1XwWIAK1IbuFqWP6Nq2R+xecFEdmAUN
8f6QyslrUI1EnQ9qXoMRagSDc8iw5CBtA9CtPTg/32uRelh111GM85U3uYvhBlGY
dd9GPbB17c/u2Y2CuxBrfc2w2TpDgp0tazuXiQHYO1drrGYyyZu73XdGF+W4DChc
Gvk2FofPZ7luLv/kcZvR9UAn9hmBHXpLjByS2XM4QxB+1qXKMYnac6NPl5Icr14H
1TIthHvPJkBklFGrMFyz7CFIYzMIdqFYWT74NpSeEveAODZMxWuZ8JVZtdLIbC6q
ZK0P/CdfiC3AKLh1BYuQFDGdrkDJWQ==
=7qbb
-----END PGP SIGNATURE-----

--hRXnVXzvq+032LBu--



From xen-devel-bounces@lists.xenproject.org Wed May 03 13:19:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 13:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529223.823387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCNq-0007dg-TZ; Wed, 03 May 2023 13:19:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529223.823387; Wed, 03 May 2023 13:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puCNq-0007dZ-QQ; Wed, 03 May 2023 13:19:02 +0000
Received: by outflank-mailman (input) for mailman id 529223;
 Wed, 03 May 2023 13:19:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l+lA=AY=amazon.de=prvs=480977c02=mheyne@srs-se1.protection.inumbo.net>)
 id 1puCNq-0007dT-5z
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:19:02 +0000
Received: from smtp-fw-6001.amazon.com (smtp-fw-6001.amazon.com [52.95.48.154])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10b465f4-e9b5-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 15:19:00 +0200 (CEST)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-pdx-2a-m6i4x-83883bdb.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-6001.iad6.amazon.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 May 2023 13:18:56 +0000
Received: from EX19D008EUC001.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-pdx-2a-m6i4x-83883bdb.us-west-2.amazon.com (Postfix)
 with ESMTPS id 1C7BB60F81; Wed,  3 May 2023 13:18:53 +0000 (UTC)
Received: from EX19MTAUWC001.ant.amazon.com (10.250.64.145) by
 EX19D008EUC001.ant.amazon.com (10.252.51.165) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.2.1118.26; Wed, 3 May 2023 13:18:52 +0000
Received: from dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com (10.15.57.183)
 by mail-relay.amazon.com (10.250.64.145) with Microsoft SMTP Server
 id
 15.2.1118.25 via Frontend Transport; Wed, 3 May 2023 13:18:51 +0000
Received: by dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com (Postfix,
 from userid 5466572)
 id 38D6595D; Wed,  3 May 2023 13:18:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10b465f4-e9b5-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1683119941; x=1714655941;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=5qY6hxBY40KASy+98g6jZI4fvE1bwgAl23ESUykmNsg=;
  b=IkmiogEVDztPAwavdF6wMBs1Qo87FT/UxGRsffIvVHhFqiz7JPP1mRxK
   sz01x6kETEXfGqbQudY9MskQO/NHSbU2fjnpIANzi0Lp9RLdx+tj40Dh9
   fcKqVpiMjpI3iwl5AMyjvbtSEvQzLwhM0Fo4aADyIDU78m8vud2Muni+c
   w=;
X-IronPort-AV: E=Sophos;i="5.99,247,1677542400"; 
   d="scan'208";a="327667209"
From: Maximilian Heyne <mheyne@amazon.de>
To: 
CC: Maximilian Heyne <mheyne@amazon.de>, Juergen Gross <jgross@suse.com>,
	Bjorn Helgaas <bhelgaas@google.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen
	<dave.hansen@linux.intel.com>, <x86@kernel.org>, "H. Peter Anvin"
	<hpa@zytor.com>, Marc Zyngier <maz@kernel.org>, Kevin Tian
	<kevin.tian@intel.com>, Jason Gunthorpe <jgg@ziepe.ca>, Ashok Raj
	<ashok.raj@intel.com>, "Ahmed S. Darwish" <darwi@linutronix.de>, Greg
 Kroah-Hartman <gregkh@linuxfoundation.org>, <xen-devel@lists.xenproject.org>,
	<linux-pci@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: [PATCH] x86/pci/xen: populate MSI sysfs entries
Date: Wed, 3 May 2023 13:16:53 +0000
Message-ID: <20230503131656.15928-1-mheyne@amazon.de>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Precedence: Bulk
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Commit bf5e758f02fc ("genirq/msi: Simplify sysfs handling") reworked the
creation of sysfs entries for MSI IRQs. The creation used to be in
msi_domain_alloc_irqs_descs_locked after calling ops->domain_alloc_irqs.
Then it moved into __msi_domain_alloc_irqs, which is an implementation of
domain_alloc_irqs. However, Xen comes with the only other implementation
of domain_alloc_irqs and hence doesn't run the sysfs population code
anymore.

Commit 6c796996ee70 ("x86/pci/xen: Fixup fallout from the PCI/MSI
overhaul") set the flag MSI_FLAG_DEV_SYSFS for the xen msi_domain_info
but that doesn't actually have an effect because Xen uses it's own
domain_alloc_irqs implementation.

Fix this by making use of the fallback functions for sysfs population.

Fixes: bf5e758f02fc ("genirq/msi: Simplify sysfs handling")
Signed-off-by: Maximilian Heyne <mheyne@amazon.de>
---
 arch/x86/pci/xen.c  | 8 +++++---
 include/linux/msi.h | 9 ++++++++-
 kernel/irq/msi.c    | 4 ++--
 3 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index 8babce71915f..014c508e914d 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -198,7 +198,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 		i++;
 	}
 	kfree(v);
-	return 0;
+	return msi_device_populate_sysfs(&dev->dev);
 
 error:
 	if (ret == -ENOSYS)
@@ -254,7 +254,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 		dev_dbg(&dev->dev,
 			"xen: msi --> pirq=%d --> irq=%d\n", pirq, irq);
 	}
-	return 0;
+	return msi_device_populate_sysfs(&dev->dev);
 
 error:
 	dev_err(&dev->dev, "Failed to create MSI%s! ret=%d!\n",
@@ -346,7 +346,7 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 		if (ret < 0)
 			goto out;
 	}
-	ret = 0;
+	ret = msi_device_populate_sysfs(&dev->dev);
 out:
 	return ret;
 }
@@ -394,6 +394,8 @@ static void xen_teardown_msi_irqs(struct pci_dev *dev)
 			xen_destroy_irq(msidesc->irq + i);
 		msidesc->irq = 0;
 	}
+
+	msi_device_destroy_sysfs(&dev->dev);
 }
 
 static void xen_pv_teardown_msi_irqs(struct pci_dev *dev)
diff --git a/include/linux/msi.h b/include/linux/msi.h
index cdb14a1ef268..a50ea79522f8 100644
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -383,6 +383,13 @@ int arch_setup_msi_irq(struct pci_dev *dev, struct msi_desc *desc);
 void arch_teardown_msi_irq(unsigned int irq);
 int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
 void arch_teardown_msi_irqs(struct pci_dev *dev);
+#endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */
+
+/*
+ * Xen uses non-default msi_domain_ops and hence needs a way to populate sysfs
+ * entries of MSI IRQs.
+ */
+#if defined(CONFIG_PCI_XEN) || defined(CONFIG_PCI_MSI_ARCH_FALLBACKS)
 #ifdef CONFIG_SYSFS
 int msi_device_populate_sysfs(struct device *dev);
 void msi_device_destroy_sysfs(struct device *dev);
@@ -390,7 +397,7 @@ void msi_device_destroy_sysfs(struct device *dev);
 static inline int msi_device_populate_sysfs(struct device *dev) { return 0; }
 static inline void msi_device_destroy_sysfs(struct device *dev) { }
 #endif /* !CONFIG_SYSFS */
-#endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */
+#endif /* CONFIG_PCI_XEN || CONFIG_PCI_MSI_ARCH_FALLBACKS */
 
 /*
  * The restore hook is still available even for fully irq domain based
diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
index 7a97bcb086bf..b4c31a5c1147 100644
--- a/kernel/irq/msi.c
+++ b/kernel/irq/msi.c
@@ -542,7 +542,7 @@ static int msi_sysfs_populate_desc(struct device *dev, struct msi_desc *desc)
 	return ret;
 }
 
-#ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS
+#if defined(CONFIG_PCI_MSI_ARCH_FALLBACKS) || defined(CONFIG_PCI_XEN)
 /**
  * msi_device_populate_sysfs - Populate msi_irqs sysfs entries for a device
  * @dev:	The device (PCI, platform etc) which will get sysfs entries
@@ -574,7 +574,7 @@ void msi_device_destroy_sysfs(struct device *dev)
 	msi_for_each_desc(desc, dev, MSI_DESC_ALL)
 		msi_sysfs_remove_desc(dev, desc);
 }
-#endif /* CONFIG_PCI_MSI_ARCH_FALLBACK */
+#endif /* CONFIG_PCI_MSI_ARCH_FALLBACK || CONFIG_PCI_XEN */
 #else /* CONFIG_SYSFS */
 static inline int msi_sysfs_create_group(struct device *dev) { return 0; }
 static inline int msi_sysfs_populate_desc(struct device *dev, struct msi_desc *desc) { return 0; }
-- 
2.39.2




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Wed May 03 13:43:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 13:43:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529233.823396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puClP-0002cz-08; Wed, 03 May 2023 13:43:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529233.823396; Wed, 03 May 2023 13:43:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puClO-0002cs-Tk; Wed, 03 May 2023 13:43:22 +0000
Received: by outflank-mailman (input) for mailman id 529233;
 Wed, 03 May 2023 13:43:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+yte=AY=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1puClN-0002ck-Fl
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 13:43:21 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 749eda3a-e9b8-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 15:43:16 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-63-BMx0iVPUPgGDcLgQ7YyuzA-1; Wed, 03 May 2023 09:43:10 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 92812852AF6;
 Wed,  3 May 2023 13:43:09 +0000 (UTC)
Received: from redhat.com (dhcp-192-205.str.redhat.com [10.33.192.205])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 164622166B29;
 Wed,  3 May 2023 13:43:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 749eda3a-e9b8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683121395;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LnWGqGWWdJPHGGU1rFwcwMAX+ugZvyvG/DLBRKCKxQk=;
	b=IrjBXvGomeoyBrVgohlnRrrk8WFPFBDsym+7DJT6odAmaLlcn44QKw3ATnui2caZk5U7N2
	5j+/lqhdBSrCq0d+mmLhDQ1TUJgSV8DnkADDlUhiOXuk0CfJ3Gr4UT3k+WW4pIY4+68f0B
	zFJF4s3ltd6cGJeSIWYeFrYpL+gaUq8=
X-MC-Unique: BMx0iVPUPgGDcLgQ7YyuzA-1
Date: Wed, 3 May 2023 15:43:05 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 07/20] block/export: stop using is_external in
 vhost-user-blk server
Message-ID: <ZFJjicw0Kjfvl5qN@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-8-stefanha@redhat.com>
 <ZFE0iFnbr2ey0A7X@redhat.com>
 <20230502200645.GE535070@fedora>
 <ZFIWjuST/9tHVNMG@redhat.com>
 <20230503131125.GB757667@fedora>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="3jXylBdepGNQTKaY"
Content-Disposition: inline
In-Reply-To: <20230503131125.GB757667@fedora>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6


--3jXylBdepGNQTKaY
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Am 03.05.2023 um 15:11 hat Stefan Hajnoczi geschrieben:
> On Wed, May 03, 2023 at 10:08:46AM +0200, Kevin Wolf wrote:
> > Am 02.05.2023 um 22:06 hat Stefan Hajnoczi geschrieben:
> > > On Tue, May 02, 2023 at 06:04:24PM +0200, Kevin Wolf wrote:
> > > > Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > > > > vhost-user activity must be suspended during bdrv_drained_begin/end().
> > > > > This prevents new requests from interfering with whatever is happening
> > > > > in the drained section.
> > > > >
> > > > > Previously this was done using aio_set_fd_handler()'s is_external
> > > > > argument. In a multi-queue block layer world the aio_disable_external()
> > > > > API cannot be used since multiple AioContext may be processing I/O, not
> > > > > just one.
> > > > >
> > > > > Switch to BlockDevOps->drained_begin/end() callbacks.
> > > > >
> > > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > ---
> > > > >  block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
> > > > >  util/vhost-user-server.c             | 10 +++----
> > > > >  2 files changed, 26 insertions(+), 27 deletions(-)
> > > > >
> > > > > diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
> > > > > index 092b86aae4..d20f69cd74 100644
> > > > > --- a/block/export/vhost-user-blk-server.c
> > > > > +++ b/block/export/vhost-user-blk-server.c
> > > > > @@ -208,22 +208,6 @@ static const VuDevIface vu_blk_iface = {
> > > > >      .process_msg           = vu_blk_process_msg,
> > > > >  };
> > > > >
> > > > > -static void blk_aio_attached(AioContext *ctx, void *opaque)
> > > > > -{
> > > > > -    VuBlkExport *vexp = opaque;
> > > > > -
> > > > > -    vexp->export.ctx = ctx;
> > > > > -    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
> > > > > -}
> > > > > -
> > > > > -static void blk_aio_detach(void *opaque)
> > > > > -{
> > > > > -    VuBlkExport *vexp = opaque;
> > > > > -
> > > > > -    vhost_user_server_detach_aio_context(&vexp->vu_server);
> > > > > -    vexp->export.ctx = NULL;
> > > > > -}
> > > >
> > > > So for changing the AioContext, we now rely on the fact that the node to
> > > > be changed is always drained, so the drain callbacks implicitly cover
> > > > this case, too?
> > >
> > > Yes.
> >
> > Ok. This surprised me a bit at first, but I think it's fine.
> >
> > We just need to remember it if we ever decide that once we have
> > multiqueue, we can actually change the default AioContext without
> > draining the node. But maybe at that point, we have to do more
> > fundamental changes anyway.
> >
> > > > >  static void
> > > > >  vu_blk_initialize_config(BlockDriverState *bs,
> > > > >                           struct virtio_blk_config *config,
> > > > > @@ -272,6 +256,25 @@ static void vu_blk_exp_resize(void *opaque)
> > > > >      vu_config_change_msg(&vexp->vu_server.vu_dev);
> > > > >  }
> > > > >
> > > > > +/* Called with vexp->export.ctx acquired */
> > > > > +static void vu_blk_drained_begin(void *opaque)
> > > > > +{
> > > > > +    VuBlkExport *vexp = opaque;
> > > > > +
> > > > > +    vhost_user_server_detach_aio_context(&vexp->vu_server);
> > > > > +}
> > > >
> > > > Compared to the old code, we're losing the vexp->export.ctx = NULL. This
> > > > is correct at this point because after drained_begin we still keep
> > > > processing requests until we arrive at a quiescent state.
> > > >
> > > > However, if we detach the AioContext because we're deleting the
> > > > iothread, won't we end up with a dangling pointer in vexp->export.ctx?
> > > > Or can we be certain that nothing interesting happens before drained_end
> > > > updates it with a new valid pointer again?
> > >
> > > If you want I can add the detach() callback back again and set ctx to
> > > NULL there?
> >
> > I haven't thought enough about it to say if it's a problem. If you have
> > and are confident that it's correct the way it is, I'm happy with it.
> >
> > But bringing the callback back is the minimal change compared to the old
> > state. It's just unnecessary code if we don't actually need it.
>
> The reasoning behind my patch is that detach() sets NULL today and we
> would see crashes if ctx was accessed between detach() -> attach().
> Therefore, I'm assuming there are no ctx accesses in the code today and
> removing the ctx = NULL assignment doesn't break anything.

Sometimes ctx = NULL defaults to qemu_get_aio_context(), so in theory
there could be cases where NULL works, but a dangling pointer wouldn't.

> However, my approach is not very defensive. If the code is changed in
> a way that accesses ctx when it's not supposed to, then a dangling
> pointer will be accessed.
>
> I think leaving the detach() callback there can be justified because it
> will make it easier to detect bugs in the future. I'll add it back in
> the next revision.

Ok, sounds good to me.

Kevin

--3jXylBdepGNQTKaY
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAmRSZOgACgkQfwmycsiP
L9Z4eQ/+LgU+OhaX4MhIHoKVbBeqTOHQYSWlDg/eh/I3pVF0g/HToZc1vLm0azxU
NhSqg87kLo3DRKFUh25e5370t219xFbDW4rz4RVy0OJLklLfj7qi59XOxE0MLiVI
5ySbG1FxdFTNS9dAoYmPuxrucX+RPEpwHw1gIpmIx+GqLGuYhrjvjqOeB/Nx7tm6
dAUmxB4Vt16wsTPs92zA09I8KtiWqr+fm+P2u/jfdsC1HqtWL93uiyQoon7LoCqO
c3VwmyR+aFr5AUmlIkS0dxzd9Bbnixd0jnHUEdaowCVLmAYbKvDJkl5HTjYEyF+D
p7Lm7GRdQPaJgu4GyKtj8NWwfra2VXEDW0M+VhWEYCLMrka00wXGO1bxjUZzEpP5
8F25eFcpNb2rixwMHlEU3UQJxNS+wtODzmkjIcj4N8F0aM7eNZDuUIDnwGMQirid
9VubITTJZtDtnp+AOO2B3zMhuAqi7EslMXOYHb5NZ9Yi31otsYE5BKScKxaCy5jq
w1h+o7sNoyW+48yvjUhxkH1bU/CHbhv9VLQ1QTHgy+uR5lADMjEba7XHBQnclHs2
rzyA9WgwuQcdq4pbUCH/hFMpoPCSNdL/Dw7riadrx7hoJB+GRr7AYfR9NEk6NtAd
uBfxOOO5MBpngMUdiDQdlDHmigg2QSJMyFacVTs/NVGFuAO1B6c=
=+t1f
-----END PGP SIGNATURE-----

--3jXylBdepGNQTKaY--



From xen-devel-bounces@lists.xenproject.org Wed May 03 14:07:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:07:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529239.823407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puD8O-0005F3-Rv; Wed, 03 May 2023 14:07:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529239.823407; Wed, 03 May 2023 14:07:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puD8O-0005Ew-OM; Wed, 03 May 2023 14:07:08 +0000
Received: by outflank-mailman (input) for mailman id 529239;
 Wed, 03 May 2023 14:07:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dYIa=AY=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1puD8N-0005Eq-8L
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:07:07 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20631.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c76bb478-e9bb-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 16:07:04 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by SA1PR12MB8859.namprd12.prod.outlook.com (2603:10b6:806:37c::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 14:07:01 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94%7]) with mapi id 15.20.6363.022; Wed, 3 May 2023
 14:07:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c76bb478-e9bb-11ed-8611-37d641c3527e
Message-ID: <d15ba304-8f79-f80a-b0ac-4dccebded17c@amd.com>
Date: Wed, 3 May 2023 15:06:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 03/12] xen/arm: Introduce a wrapper for
 dt_device_get_address() to handle paddr_t
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-4-ayan.kumar.halder@amd.com>
 <37c9a45f-ae07-8d47-093a-6cf7501389d4@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <37c9a45f-ae07-8d47-093a-6cf7501389d4@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0202.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a5::9) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0


On 03/05/2023 12:25, Julien Grall wrote:
> Hi Ayan,
Hi Julien,
>
> On 28/04/2023 18:55, Ayan Kumar Halder wrote:
>> dt_device_get_address() can accept uint64_t only for address and size.
>> However, the address/size denotes physical addresses. Thus, they should
>> be represented by 'paddr_t'.
>> Consequently, we introduce a wrapper for dt_device_get_address(), i.e.
>> dt_device_get_paddr(), which accepts address/size as paddr_t and in
>> turn invokes dt_device_get_address() after converting address/size to
>> uint64_t.
>>
>> The reason for introducing this is that in future 'paddr_t' may not
>> always be 64-bit. Thus, we need an explicit wrapper to do the type
>> conversion and return an error in case of truncation.
>>
>> With this, callers can now invoke dt_device_get_paddr().
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>> Changes from -
>>
>> v1 - 1. New patch.
>>
>> v2 - 1. Extracted part of "[XEN v2 05/11] xen/arm: Use paddr_t 
>> instead of u64 for address/size"
>> into this patch.
>>
>> 2. dt_device_get_address() callers now invoke dt_device_get_paddr() 
>> instead.
>>
>> 3. Logged error in case of truncation.
>>
>> v3 - 1. Modified the truncation checks as "dt_addr != (paddr_t)dt_addr".
>> 2. Some sanity fixes.
>>
>> v4 - 1. Some sanity fixes.
>> 2. Preserved the declaration of dt_device_get_address() in
>> xen/include/xen/device_tree.h. The reason being it is currently used by
>> ns16550.c. This driver requires some more changes as pointed by Jan in
>> https://lore.kernel.org/xen-devel/6196e90f-752e-e61a-45ce-37e46c22b812@suse.com/ 
>>
>> which is to be addressed as a separate series.
>>
>> v5 - 1. Removed initialization of variables.
>> 2. In dt_device_get_paddr(), added the check
>> if ( !addr )
>>      return -EINVAL;
>>
>>   xen/arch/arm/domain_build.c                | 10 +++---
>>   xen/arch/arm/gic-v2.c                      | 10 +++---
>>   xen/arch/arm/gic-v3-its.c                  |  4 +--
>>   xen/arch/arm/gic-v3.c                      | 10 +++---
>>   xen/arch/arm/pci/pci-host-common.c         |  6 ++--
>>   xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
>>   xen/arch/arm/platforms/brcm.c              |  6 ++--
>>   xen/arch/arm/platforms/exynos5.c           | 32 +++++++++---------
>>   xen/arch/arm/platforms/sunxi.c             |  2 +-
>>   xen/arch/arm/platforms/xgene-storm.c       |  2 +-
>>   xen/common/device_tree.c                   | 39 ++++++++++++++++++++++
>>   xen/drivers/char/cadence-uart.c            |  4 +--
>>   xen/drivers/char/exynos4210-uart.c         |  4 +--
>>   xen/drivers/char/imx-lpuart.c              |  4 +--
>>   xen/drivers/char/meson-uart.c              |  4 +--
>>   xen/drivers/char/mvebu-uart.c              |  4 +--
>>   xen/drivers/char/omap-uart.c               |  4 +--
>>   xen/drivers/char/pl011.c                   |  6 ++--
>>   xen/drivers/char/scif-uart.c               |  4 +--
>
> What about the call in xen/drivers/char/ns16550.c?

Refer to
https://lore.kernel.org/xen-devel/6196e90f-752e-e61a-45ce-37e46c22b812@suse.com/
where Jan mentioned that this driver needs some prior cleanup.

So I decided to take it out and address it after the current series has
been committed.

See
https://patchew.org/Xen/20230413173735.48387-1-ayan.kumar.halder@amd.com/
where Jan agreed to this.

Is this OK with you?

- Ayan

>
>> xen/drivers/passthrough/arm/ipmmu-vmsa.c   |  8 ++---
>>   xen/drivers/passthrough/arm/smmu-v3.c      |  2 +-
>>   xen/drivers/passthrough/arm/smmu.c         |  8 ++---
>>   xen/include/xen/device_tree.h              | 13 ++++++++
>>   23 files changed, 120 insertions(+), 68 deletions(-)
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:39:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:39:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529249.823427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDdO-0000Vp-Fs; Wed, 03 May 2023 14:39:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529249.823427; Wed, 03 May 2023 14:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDdO-0000Vi-C3; Wed, 03 May 2023 14:39:10 +0000
Received: by outflank-mailman (input) for mailman id 529249;
 Wed, 03 May 2023 14:39:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZlPw=AY=citrix.com=prvs=4803f4e7c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puDdM-0000S9-61
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:39:08 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3fdcec86-e9c0-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 16:39:05 +0200 (CEST)
Received: from mail-dm3nam02lp2044.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.44])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 10:39:01 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BLAPR03MB5540.namprd03.prod.outlook.com (2603:10b6:208:296::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 14:38:59 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 14:38:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3fdcec86-e9c0-11ed-8611-37d641c3527e
Message-ID: <c74d231f-75e2-a26d-f2c4-3a135cc1ac10@citrix.com>
Date: Wed, 3 May 2023 15:38:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: andrew.cooper3@citrix.com
Subject: dom0less vs xenstored setup race Was: xen | Failed pipeline for
 staging | 6a47ba2f
Content-Language: en-GB
To: Stefano Stabellini <sstabellini@kernel.org>, alejandro.vallejo@cloud.com
Cc: committers@xenproject.org, michal.orzel@amd.com,
 xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Juergen Gross <jgross@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edwin.torok@cloud.com>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P123CA0081.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:138::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

Hello,

After what seems like an unreasonable amount of debugging, we've tracked
down exactly what is going wrong here.

https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4219721944

Of note is the smoke.serial log around:

io: IN 0xffff90fec250 d0 20230503 14:20:42 INTRODUCE (1 233473 1 )
obj: CREATE connection 0xffff90fff1f0
*** d1 CONN RESET req_cons 00000000, req_prod 0000003a rsp_cons 00000000, rsp_prod 00000000
io: OUT 0xffff9105cef0 d0 20230503 14:20:42 WATCH_EVENT (@introduceDomain domlist )

XS_INTRODUCE (in C xenstored at least; not checked in oxenstored yet)
always clobbers the ring pointers.  The extra pressure that xenconsoled
puts on dom0 with its 4M hypercall bounce buffer occasionally delays
xenstored long enough that the XS_INTRODUCE clobbers the first message
that dom1 wrote into the ring.
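
The race can be shown with a toy model of the ring indices (a
hypothetical Python sketch only; the real code is C in xenstored, and
the field names below mirror the log above, not the actual structures):

```python
# Toy model of the xenstore shared-ring request indices.
class Ring:
    SIZE = 1024

    def __init__(self) -> None:
        self.req_cons = 0
        self.req_prod = 0
        self.buf = bytearray(self.SIZE)

    def write_request(self, data: bytes) -> None:
        """Guest side: copy bytes in, then advance the producer index."""
        for b in data:
            self.buf[self.req_prod % self.SIZE] = b
            self.req_prod += 1

    def pending(self) -> int:
        """xenstored side: bytes produced but not yet consumed."""
        return self.req_prod - self.req_cons

    def introduce(self) -> None:
        """XS_INTRODUCE as observed: unconditionally reset the pointers."""
        self.req_cons = 0
        self.req_prod = 0

ring = Ring()
ring.write_request(b"\x00" * 0x3a)   # dom1's first message (0x3a bytes)
assert ring.pending() == 0x3a        # waiting to be read by xenstored

ring.introduce()                     # the late XS_INTRODUCE lands here
assert ring.pending() == 0           # the first message is gone
```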

The other behaviour seen was xenstored observing a header looking like this:

*** d1 HDR { ty 0x746e6f63, rqid 0x2f6c6f72, txid 0x74616c70, len 0x6d726f66 }

which was rejected as being too long.  That's "control/platform" in
ASCII, so the XS_INTRODUCE caught dom1 between writing the header and
writing the payload.
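
As a sanity check, the four header fields decode as little-endian
ASCII, e.g.:

```python
import struct

# The bogus header fields exactly as logged by xenstored.
hdr = (0x746e6f63, 0x2f6c6f72, 0x74616c70, 0x6d726f66)

# Each u32, viewed as little-endian bytes, is four characters of the
# payload dom1 was in the middle of writing.
text = struct.pack("<4I", *hdr).decode("ascii")
print(text)  # control/platform
```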


Anyway, it is buggy for XS_INTRODUCE to be called on a live and
unsuspecting connection.  It is ultimately init-dom0less's fault for
telling dom1 it's good to go before having waited for XS_INTRODUCE to
complete.

I am going to start by correcting the documentation to make these
details clear, and then figure out the best set of steps to unbreak
this.
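
The sequencing bug, and the ordering that would avoid it, can be
sketched as follows (hypothetical Python; the function names are
illustrative, not Xen APIs):

```python
import threading

# Signalled once xenstored has finished processing XS_INTRODUCE.
xs_introduce_done = threading.Event()

def xenstored_handle_introduce():
    # ... map the grant, reset the ring state ...
    xs_introduce_done.set()      # only now is the ring safe to use

def init_dom0less_buggy(start_guest):
    # Bug: the guest is released before XS_INTRODUCE has completed,
    # so its first write can race with the ring-pointer reset.
    start_guest()

def init_dom0less_fixed(start_guest):
    # Fix: wait for XS_INTRODUCE to complete before releasing the guest.
    xs_introduce_done.wait()
    start_guest()
```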

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:39:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:39:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529246.823417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDdL-0000G9-5a; Wed, 03 May 2023 14:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529246.823417; Wed, 03 May 2023 14:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDdL-0000G2-2T; Wed, 03 May 2023 14:39:07 +0000
Received: by outflank-mailman (input) for mailman id 529246;
 Wed, 03 May 2023 14:39:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puDdJ-0000Fq-BU; Wed, 03 May 2023 14:39:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puDdJ-0004zJ-2R; Wed, 03 May 2023 14:39:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puDdI-00036r-M6; Wed, 03 May 2023 14:39:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puDdI-0005Vl-Li; Wed, 03 May 2023 14:39:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DBU34PiOuxKjr3RLyM+LZyoVHSP6YFjxDOhsP2Lu5f0=; b=LFusJ+rJskgpXg0xbqJCIq+SM1
	qaSTv0v3O5p+p4BHtGi+9zunSR1t2bOSlojkcHWseBg5fYYXWSM5dk9caVmK5Ps7Ta2BxulN3izpT
	nmoXOZkLTu6eZMGI/c7Med9ptgiuMpAPWkWyM63CF1Er7OgUgPapsN1rSiewQcxQ0l94=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180517-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180517: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0956aa2219745a198bb6a0a99e2108a3c09b280e
X-Osstest-Versions-That:
    xen=b033eddc9779109c06a26936321d27a2ef4e088b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 May 2023 14:39:04 +0000

flight 180517 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180517/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0956aa2219745a198bb6a0a99e2108a3c09b280e
baseline version:
 xen                  b033eddc9779109c06a26936321d27a2ef4e088b

Last test of basis   180505  2023-05-02 11:00:27 Z    1 days
Testing same since   180517  2023-05-03 12:01:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Henry Wang <Henry.Wang@arm.com> # CHANGELOG
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Viresh Kumar <viresh.kumar@linaro.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b033eddc97..0956aa2219  0956aa2219745a198bb6a0a99e2108a3c09b280e -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:39:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:39:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529250.823437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDdb-0000rr-TH; Wed, 03 May 2023 14:39:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529250.823437; Wed, 03 May 2023 14:39:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDdb-0000rk-Q0; Wed, 03 May 2023 14:39:23 +0000
Received: by outflank-mailman (input) for mailman id 529250;
 Wed, 03 May 2023 14:39:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puDda-0000qr-Sv
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:39:23 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4a9f4469-e9c0-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 16:39:21 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 04A0722906;
 Wed,  3 May 2023 14:39:21 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D0D701331F;
 Wed,  3 May 2023 14:39:20 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id PbOVMRhyUmR8SgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 14:39:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a9f4469-e9c0-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683124761; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=QaN379kgUJzfI8bNyZlqD4wJZ+lpSc8tdJtpKFXSYtQ=;
	b=M7Lg376jX5leWitBzM0BNDu885VXV9anawqCnStKarkmtmrRVE6wGtXTfNo+AXaYcsRvDq
	5Omc/hsc09koV3p1mN7pngc5WXaSv6N64FPLntjmkt3IiDjt+/I0O6jRkZr/j/hwBEPP2g
	NeM+8K1+6SRFL3uLWZ92PTl7SiYEZi4=
Message-ID: <da3b9daf-9358-2af8-edc3-4f74f9cc0c55@suse.com>
Date: Wed, 3 May 2023 16:39:20 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v4 12/13] tools/xenstore: use generic accounting for
 remaining quotas
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-13-jgross@suse.com>
 <e4f8a0e6-7a4c-3193-ce38-e43891f063ed@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <e4f8a0e6-7a4c-3193-ce38-e43891f063ed@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------7wLbm2MU50tdk2L67QW1JM8K"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------7wLbm2MU50tdk2L67QW1JM8K
Content-Type: multipart/mixed; boundary="------------gxSoSjJcQC64XgM4FYX2wIV8";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <da3b9daf-9358-2af8-edc3-4f74f9cc0c55@suse.com>
Subject: Re: [PATCH v4 12/13] tools/xenstore: use generic accounting for
 remaining quotas
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-13-jgross@suse.com>
 <e4f8a0e6-7a4c-3193-ce38-e43891f063ed@xen.org>
In-Reply-To: <e4f8a0e6-7a4c-3193-ce38-e43891f063ed@xen.org>

--------------gxSoSjJcQC64XgM4FYX2wIV8
Content-Type: multipart/mixed; boundary="------------7qAD10QsO7GsKd0Nap2aWKUZ"

--------------7qAD10QsO7GsKd0Nap2aWKUZ
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDMuMDUuMjMgMTI6MTgsIEp1bGllbiBHcmFsbCB3cm90ZToNCj4gDQo+IA0KPiBPbiAw
NS8wNC8yMDIzIDA4OjAzLCBKdWVyZ2VuIEdyb3NzIHdyb3RlOg0KPj4gVGhlIG1heHJlcXVl
c3RzLCBub2RlIHNpemUsIG51bWJlciBvZiBub2RlIHBlcm1pc3Npb25zLCBhbmQgcGF0aCBs
ZW5ndGgNCj4+IHF1b3RhIGFyZSBhIGxpdHRsZSBiaXQgc3BlY2lhbCwgYXMgdGhleSBhcmUg
ZWl0aGVyIGFjdGl2ZSBpbg0KPj4gdHJhbnNhY3Rpb25zIG9ubHkgKG1heHJlcXVlc3RzKSwg
b3IgdGhleSBhcmUganVzdCBwZXIgaXRlbSBpbnN0ZWFkIG9mDQo+PiBjb3VudCB2YWx1ZXMu
IE5ldmVydGhlbGVzcyBiZWluZyBhYmxlIHRvIGtub3cgdGhlIG1heGltdW0gbnVtYmVyIG9m
DQo+PiB0aG9zZSBxdW90YSByZWxhdGVkIHZhbHVlcyBwZXIgZG9tYWluIHdvdWxkIGJlIGJl
bmVmaWNpYWwsIHNvIGFkZCB0aGVtDQo+PiB0byB0aGUgZ2VuZXJpYyBhY2NvdW50aW5nLg0K
Pj4NCj4+IFRoZSBwZXIgZG9tYWluIHZhbHVlIHdpbGwgbmV2ZXIgc2hvdyBjdXJyZW50IG51
bWJlcnMgb3RoZXIgdGhhbiB6ZXJvLA0KPj4gYnV0IHRoZSBtYXhpbXVtIG51bWJlciBzZWVu
IGNhbiBiZSBnYXRoZXJlZCB0aGUgc2FtZSB3YXkgYXMgdGhlIG51bWJlcg0KPj4gb2Ygbm9k
ZXMgZHVyaW5nIGEgdHJhbnNhY3Rpb24uDQo+Pg0KPj4gVG8gYmUgYWJsZSB0byB1c2UgdGhl
IGNvbnN0IHF1YWxpZmllciBmb3IgYSBuZXcgZnVuY3Rpb24gc3dpdGNoDQo+PiBkb21haW5f
aXNfdW5wcml2aWxlZ2VkKCkgdG8gdGFrZSBhIGNvbnN0IHBvaW50ZXIsIHRvby4NCj4+DQo+
PiBTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQo+PiAt
LS0NCj4+IMKgIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmPCoMKgwqDCoMKgwqDC
oCB8IDE0ICsrKystLS0tLQ0KPj4gwqAgdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUu
aMKgwqDCoMKgwqDCoMKgIHzCoCAyICstDQo+PiDCoCB0b29scy94ZW5zdG9yZS94ZW5zdG9y
ZWRfZG9tYWluLmPCoMKgwqDCoMKgIHwgMzkgKysrKysrKysrKysrKysrKysrKystLS0tLS0N
Cj4+IMKgIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uaMKgwqDCoMKgwqAgfMKg
IDYgKysrKw0KPj4gwqAgdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX3RyYW5zYWN0aW9uLmMg
fMKgIDQgKy0tDQo+PiDCoCB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guY8KgwqDC
oMKgwqDCoCB8wqAgMiArLQ0KPj4gwqAgNiBmaWxlcyBjaGFuZ2VkLCA0OCBpbnNlcnRpb25z
KCspLCAxOSBkZWxldGlvbnMoLSkNCj4+DQo+PiBkaWZmIC0tZ2l0IGEvdG9vbHMveGVuc3Rv
cmUveGVuc3RvcmVkX2NvcmUuYyBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMN
Cj4+IGluZGV4IDg4YzU2OWI3ZDUuLjY1ZGYyODY2YmYgMTAwNjQ0DQo+PiAtLS0gYS90b29s
cy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5jDQo+PiArKysgYi90b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5jDQo+PiBAQCAtNzk5LDggKzc5OSw4IEBAIGludCB3cml0ZV9ub2Rl
X3JhdyhzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwgVERCX0RBVEEgKmtleSwgDQo+PiBzdHJ1
Y3Qgbm9kZSAqbm9kZSwNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCArIG5vZGUtPnBlcm1zLm51
bSAqIHNpemVvZihub2RlLT5wZXJtcy5wWzBdKQ0KPj4gwqDCoMKgwqDCoMKgwqDCoMKgICsg
bm9kZS0+ZGF0YWxlbiArIG5vZGUtPmNoaWxkbGVuOw0KPj4gLcKgwqDCoCBpZiAoIW5vX3F1
b3RhX2NoZWNrICYmIGRvbWFpbl9pc191bnByaXZpbGVnZWQoY29ubikgJiYNCj4+IC3CoMKg
wqDCoMKgwqDCoCBkYXRhLmRzaXplID49IHF1b3RhX21heF9lbnRyeV9zaXplKSB7DQo+PiAr
wqDCoMKgIGlmIChkb21haW5fbWF4X2Noayhjb25uLCBBQ0NfTk9ERVNaLCBkYXRhLmRzaXpl
LCBxdW90YV9tYXhfZW50cnlfc2l6ZSkNCj4+ICvCoMKgwqDCoMKgwqDCoCAmJiAhbm9fcXVv
dGFfY2hlY2spIHsNCj4gDQo+IEl0IGZlZWxzIGEgYml0IG9kZCB0byBtb3ZlIHRoZSAhbm9f
cXVvdGFfY2hlY2sgcmlnaHQgYWZ0ZXIgdGhlIGFjdHVhbCBjaGVjay4gQnV0IA0KPiBBRkFJ
Q1QsIHlvdSBhcmUgZG9pbmcgaXQgYmVjYXVzZSBkb21haW5fbWF4X2NoaygpIHdpbGwgYWxz
byB1cGRhdGUgdGhlIG1heGltdW0gDQo+IHZhbHVlIHNlZW4gYnkgdGhlIGN1cnJlbnQgcXVv
dGEuDQoNCkNvcnJlY3QuDQoNCj4gDQo+IElzIHRoYXQgY29ycmVjdD8gSWYgc28sIGl0IHdv
dWxkIGJlIHdvcnRoIG1lbnRpb25pbmcgaXQgaW4gYSBjb21tZW50Lg0KDQpPa2F5Lg0KDQo+
IA0KPj4gwqDCoMKgwqDCoMKgwqDCoMKgIGVycm5vID0gRU5PU1BDOw0KPj4gwqDCoMKgwqDC
oMKgwqDCoMKgIHJldHVybiBlcnJubzsNCj4+IMKgwqDCoMKgwqAgfQ0KPj4gQEAgLTExNjgs
NyArMTE2OCw3IEBAIHN0YXRpYyBib29sIHZhbGlkX2NoYXJzKGNvbnN0IGNoYXIgKm5vZGUp
DQo+PiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAiMDEyMzQ1Njc4OS0vX0Ai
KSA9PSBzdHJsZW4obm9kZSkpOw0KPj4gwqAgfQ0KPj4gLWJvb2wgaXNfdmFsaWRfbm9kZW5h
bWUoY29uc3QgY2hhciAqbm9kZSkNCj4+ICtib29sIGlzX3ZhbGlkX25vZGVuYW1lKGNvbnN0
IHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25zdCBjaGFyICpub2RlKQ0KPj4gwqAgew0K
Pj4gwqDCoMKgwqDCoCBpbnQgbG9jYWxfb2ZmID0gMDsNCj4+IMKgwqDCoMKgwqAgdW5zaWdu
ZWQgaW50IGRvbWlkOw0KPj4gQEAgLTExODgsNyArMTE4OCw4IEBAIGJvb2wgaXNfdmFsaWRf
bm9kZW5hbWUoY29uc3QgY2hhciAqbm9kZSkNCj4+IMKgwqDCoMKgwqAgaWYgKHNzY2FuZihu
b2RlLCAiL2xvY2FsL2RvbWFpbi8lNXUvJW4iLCAmZG9taWQsICZsb2NhbF9vZmYpICE9IDEp
DQo+PiDCoMKgwqDCoMKgwqDCoMKgwqAgbG9jYWxfb2ZmID0gMDsNCj4+IC3CoMKgwqAgaWYg
KHN0cmxlbihub2RlKSA+IGxvY2FsX29mZiArIHF1b3RhX21heF9wYXRoX2xlbikNCj4+ICvC
oMKgwqAgaWYgKGRvbWFpbl9tYXhfY2hrKGNvbm4sIEFDQ19QQVRITEVOLCBzdHJsZW4obm9k
ZSkgLSBsb2NhbF9vZmYsDQo+PiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBxdW90
YV9tYXhfcGF0aF9sZW4pKQ0KPj4gwqDCoMKgwqDCoMKgwqDCoMKgIHJldHVybiBmYWxzZTsN
Cj4+IMKgwqDCoMKgwqAgcmV0dXJuIHZhbGlkX2NoYXJzKG5vZGUpOw0KPj4gQEAgLTEyNTAs
NyArMTI1MSw3IEBAIHN0YXRpYyBzdHJ1Y3Qgbm9kZSAqZ2V0X25vZGVfY2Fub25pY2FsaXpl
ZChzdHJ1Y3QgDQo+PiBjb25uZWN0aW9uICpjb25uLA0KPj4gwqDCoMKgwqDCoCAqY2Fub25p
Y2FsX25hbWUgPSBjYW5vbmljYWxpemUoY29ubiwgY3R4LCBuYW1lKTsNCj4+IMKgwqDCoMKg
wqAgaWYgKCEqY2Fub25pY2FsX25hbWUpDQo+PiDCoMKgwqDCoMKgwqDCoMKgwqAgcmV0dXJu
IE5VTEw7DQo+PiAtwqDCoMKgIGlmICghaXNfdmFsaWRfbm9kZW5hbWUoKmNhbm9uaWNhbF9u
YW1lKSkgew0KPj4gK8KgwqDCoCBpZiAoIWlzX3ZhbGlkX25vZGVuYW1lKGNvbm4sICpjYW5v
bmljYWxfbmFtZSkpIHsNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCBlcnJubyA9IEVJTlZBTDsN
Cj4+IMKgwqDCoMKgwqDCoMKgwqDCoCByZXR1cm4gTlVMTDsNCj4+IMKgwqDCoMKgwqAgfQ0K
Pj4gQEAgLTE3NzUsOCArMTc3Niw3IEBAIHN0YXRpYyBpbnQgZG9fc2V0X3Blcm1zKGNvbnN0
IHZvaWQgKmN0eCwgc3RydWN0IA0KPj4gY29ubmVjdGlvbiAqY29ubiwNCj4+IMKgwqDCoMKg
wqDCoMKgwqDCoCByZXR1cm4gRUlOVkFMOw0KPj4gwqDCoMKgwqDCoCBwZXJtcy5udW0tLTsN
Cj4+IC3CoMKgwqAgaWYgKGRvbWFpbl9pc191bnByaXZpbGVnZWQoY29ubikgJiYNCj4+IC3C
oMKgwqDCoMKgwqDCoCBwZXJtcy5udW0gPiBxdW90YV9uYl9wZXJtc19wZXJfbm9kZSkNCj4+
ICvCoMKgwqAgaWYgKGRvbWFpbl9tYXhfY2hrKGNvbm4sIEFDQ19OUEVSTSwgcGVybXMubnVt
LCBxdW90YV9uYl9wZXJtc19wZXJfbm9kZSkpDQo+PiDCoMKgwqDCoMKgwqDCoMKgwqAgcmV0
dXJuIEVOT1NQQzsNCj4+IMKgwqDCoMKgwqAgcGVybXN0ciA9IGluLT5idWZmZXIgKyBzdHJs
ZW4oaW4tPmJ1ZmZlcikgKyAxOw0KPj4gZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF9jb3JlLmggYi90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29yZS5oDQo+PiBp
bmRleCAzNTY0ZDg1ZDdkLi45MzM5ODIwMTU2IDEwMDY0NA0KPj4gLS0tIGEvdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX2NvcmUuaA0KPj4gKysrIGIvdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2NvcmUuaA0KPj4gQEAgLTI1OCw3ICsyNTgsNyBAQCB2b2lkIGNoZWNrX3N0b3JlKHZv
aWQpOw0KPj4gwqAgdm9pZCBjb3JydXB0KHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCBjb25z
dCBjaGFyICpmbXQsIC4uLik7DQo+PiDCoCAvKiBJcyB0aGlzIGEgdmFsaWQgbm9kZSBuYW1l
PyAqLw0KPj4gLWJvb2wgaXNfdmFsaWRfbm9kZW5hbWUoY29uc3QgY2hhciAqbm9kZSk7DQo+
PiArYm9vbCBpc192YWxpZF9ub2RlbmFtZShjb25zdCBzdHJ1Y3QgY29ubmVjdGlvbiAqY29u
biwgY29uc3QgY2hhciAqbm9kZSk7DQo+PiDCoCAvKiBHZXQgbmFtZSBvZiBwYXJlbnQgbm9k
ZS4gKi8NCj4+IMKgIGNoYXIgKmdldF9wYXJlbnQoY29uc3Qgdm9pZCAqY3R4LCBjb25zdCBj
aGFyICpub2RlKTsNCj4+IGRpZmYgLS1naXQgYS90b29scy94ZW5zdG9yZS94ZW5zdG9yZWRf
ZG9tYWluLmMgDQo+PiBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYw0KPj4g
aW5kZXggZDIxZjMxZGE5Mi4uNDllMmM1YzgyYSAxMDA2NDQNCj4+IC0tLSBhL3Rvb2xzL3hl
bnN0b3JlL3hlbnN0b3JlZF9kb21haW4uYw0KPj4gKysrIGIvdG9vbHMveGVuc3RvcmUveGVu
c3RvcmVkX2RvbWFpbi5jDQo+PiBAQCAtNDMzLDcgKzQzMyw3IEBAIGludCBkb21haW5fZ2V0
X3F1b3RhKGNvbnN0IHZvaWQgKmN0eCwgc3RydWN0IGNvbm5lY3Rpb24gDQo+PiAqY29ubiwN
Cj4+IMKgwqDCoMKgwqDCoMKgwqDCoCByZXR1cm4gRU5PTUVNOw0KPj4gwqAgI2RlZmluZSBl
bnQodCwgZSkgXA0KPj4gLcKgwqDCoCByZXNwID0gdGFsbG9jX2FzcHJpbnRmX2FwcGVuZChy
ZXNwLCAiJS0xNnM6ICU4dSAobWF4OiAlOHVcbiIsICN0LCBcDQo+PiArwqDCoMKgIHJlc3Ag
PSB0YWxsb2NfYXNwcmludGZfYXBwZW5kKHJlc3AsICIlLTE3czogJTh1IChtYXg6ICU4dVxu
IiwgI3QsIFwNCj4gDQo+IFRoaXMgY2hhbmdlcyBmZWVscyBhIGJpdCB1bnJlbGF0ZWQuIENh
biB5b3UgbWVudGlvbiB3aHkgdGhpcyBpcyBuZWNlc3NhcnkgaW4gdGhlIA0KPiBjb21taXQg
bWVzc2FnZT8NCg0KInRyYW5zYWN0aW9uLW5vZGVzIiBoYXMgMTcgY2hhcmFjdGVycy4gOi0p
DQoNCj4gDQo+PiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIGQtPmFjY1tlXS52YWwsIGQtPmFjY1tlXS5tYXgpOyBcDQo+PiDCoMKgwqDCoMKgIGlm
ICghcmVzcCkgcmV0dXJuIEVOT01FTQ0KPj4gQEAgLTQ0Miw2ICs0NDIsNyBAQCBpbnQgZG9t
YWluX2dldF9xdW90YShjb25zdCB2b2lkICpjdHgsIHN0cnVjdCBjb25uZWN0aW9uIA0KPj4g
KmNvbm4sDQo+PiDCoMKgwqDCoMKgIGVudCh0cmFuc2FjdGlvbnMsIEFDQ19UUkFOUyk7DQo+
PiDCoMKgwqDCoMKgIGVudChvdXRzdGFuZGluZywgQUNDX09VVFNUKTsNCj4+IMKgwqDCoMKg
wqAgZW50KG1lbW9yeSwgQUNDX01FTSk7DQo+PiArwqDCoMKgIGVudCh0cmFuc2FjdGlvbi1u
b2RlcywgQUNDX1RSQU5TTk9ERVMpOw0KPiANCj4gWW91IHNlZW0gdG8gY29udmVydCBtdWx0
aXBsZSBxdW90YXMgYnV0IG9ubHkgcHJpbnQgb25lLiBXaHk/DQoNCkFoLCBzb3JyeSBmb3Ig
b21pdHRpbmcgdGhlIG90aGVyIG9uZXMuIFRoZSBmb2xsb3dpbmcgcGF0Y2ggaXMgYWRkaW5n
DQp0aGVtIGFnYWluLCBzbyBJIGRpZG4ndCByZWNvZ25pemUgdGhlbSBtaXNzaW5nLg0KDQo+
IA0KPj4gwqAgI3VuZGVmIGVudA0KPj4gQEAgLTQ1OSw3ICs0NjAsNyBAQCBpbnQgZG9tYWlu
X21heF9nbG9iYWxfYWNjKGNvbnN0IHZvaWQgKmN0eCwgc3RydWN0IA0KPj4gY29ubmVjdGlv
biAqY29ubikNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCByZXR1cm4gRU5PTUVNOw0KPj4gwqAg
I2RlZmluZSBlbnQodCwgZSkgXA0KPj4gLcKgwqDCoCByZXNwID0gdGFsbG9jX2FzcHJpbnRm
X2FwcGVuZChyZXNwLCAiJS0xNnM6ICU4dVxuIiwgI3QswqDCoCBcDQo+PiArwqDCoMKgIHJl
c3AgPSB0YWxsb2NfYXNwcmludGZfYXBwZW5kKHJlc3AsICIlLTE3czogJTh1XG4iLCAjdCzC
oMKgIFwNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqAgYWNjX2dsb2JhbF9tYXhbZV0pO8KgwqDCoMKgwqDCoMKgwqAgXA0KPiANCj4gRGl0dG8u
DQo+IA0KPj4gwqDCoMKgwqDCoCBpZiAoIXJlc3ApIHJldHVybiBFTk9NRU0NCj4+IEBAIC00
NjgsNiArNDY5LDcgQEAgaW50IGRvbWFpbl9tYXhfZ2xvYmFsX2FjYyhjb25zdCB2b2lkICpj
dHgsIHN0cnVjdCANCj4+IGNvbm5lY3Rpb24gKmNvbm4pDQo+PiDCoMKgwqDCoMKgIGVudCh0
cmFuc2FjdGlvbnMsIEFDQ19UUkFOUyk7DQo+PiDCoMKgwqDCoMKgIGVudChvdXRzdGFuZGlu
ZywgQUNDX09VVFNUKTsNCj4+IMKgwqDCoMKgwqAgZW50KG1lbW9yeSwgQUNDX01FTSk7DQo+
PiArwqDCoMKgIGVudCh0cmFuc2FjdGlvbi1ub2RlcywgQUNDX1RSQU5TTk9ERVMpOw0KPj4g
wqAgI3VuZGVmIGVudA0KPj4gQEAgLTEwODEsMTIgKzEwODMsMjIgQEAgaW50IGRvbWFpbl9h
ZGp1c3Rfbm9kZV9wZXJtcyhzdHJ1Y3Qgbm9kZSAqbm9kZSkNCj4+IMKgwqDCoMKgwqAgcmV0
dXJuIDA7DQo+PiDCoCB9DQo+PiArc3RhdGljIHZvaWQgZG9tYWluX2FjY192YWxpZF9tYXgo
c3RydWN0IGRvbWFpbiAqZCwgZW51bSBhY2NpdGVtIHdoYXQsDQo+PiArwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqAgdW5zaWduZWQgaW50IHZhbCkNCj4+ICt7DQo+PiArwqDC
oMKgIGFzc2VydCh3aGF0IDwgQVJSQVlfU0laRShkLT5hY2MpKTsNCj4+ICvCoMKgwqAgYXNz
ZXJ0KHdoYXQgPCBBUlJBWV9TSVpFKGFjY19nbG9iYWxfbWF4KSk7DQo+PiArDQo+PiArwqDC
oMKgIGlmICh2YWwgPiBkLT5hY2Nbd2hhdF0ubWF4KQ0KPj4gK8KgwqDCoMKgwqDCoMKgIGQt
PmFjY1t3aGF0XS5tYXggPSB2YWw7DQo+PiArwqDCoMKgIGlmICh2YWwgPiBhY2NfZ2xvYmFs
X21heFt3aGF0XSAmJiBkb21pZF9pc191bnByaXZpbGVnZWQoZC0+ZG9taWQpKQ0KPj4gK8Kg
wqDCoMKgwqDCoMKgIGFjY19nbG9iYWxfbWF4W3doYXRdID0gdmFsOw0KPj4gK30NCj4+ICsN
Cj4+IMKgIHN0YXRpYyBpbnQgZG9tYWluX2FjY19hZGRfdmFsaWQoc3RydWN0IGRvbWFpbiAq
ZCwgZW51bSBhY2NpdGVtIHdoYXQsIGludCBhZGQpDQo+PiDCoCB7DQo+PiDCoMKgwqDCoMKg
IHVuc2lnbmVkIGludCB2YWw7DQo+PiAtwqDCoMKgIGFzc2VydCh3aGF0IDwgQVJSQVlfU0la
RShkLT5hY2MpKTsNCj4gDQo+IEkgdGhpbmsgdGhpcyBhc3NlcnQgc2hvdWxkIGJlIGtlcHQg
YmVjYXVzZS4uLg0KPiANCj4+IC0NCj4+IMKgwqDCoMKgwqAgaWYgKChhZGQgPCAwICYmIC1h
ZGQgPiBkLT5hY2Nbd2hhdF0udmFsKSB8fA0KPiANCj4gLi4uIG9mIHRoaXMgY2hlY2suIE90
aGVyd2lzZSwgeW91IHdvdWxkIGNoZWNrIHRoYXQgJ3doYXQnIGlzIHdpdGhpbiB0aGUgYm91
bmRzIA0KPiBhZnRlciB0aGUgdXNlLg0KDQpPa2F5Lg0KDQoNCkp1ZXJnZW4NCg==
--------------7qAD10QsO7GsKd0Nap2aWKUZ
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------7qAD10QsO7GsKd0Nap2aWKUZ--

--------------gxSoSjJcQC64XgM4FYX2wIV8--

--------------7wLbm2MU50tdk2L67QW1JM8K
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRSchgFAwAAAAAACgkQsN6d1ii/Ey+N
MAf/fuTLpwqPZRUoqYFhGYUhzjzobF/vUFRpp0RvdYISKu0ybq1zUoELfSW9ODeqamhjCBJZPXeD
3oEUGjHjJ+jmWOSFZCBfSFffo3Rp4mx7+C+nITEoKfmLL+/1deEt/V+sHD8tEo3zLpJRfEc3f+is
ECGFqkKX+BPSgNNNjzcnLdbmGVQ1MGGnA71fFHesC3DaNAwwiCqAaufIBa6TGVXOalySGYpTaC60
A9xpkYYvVI0JncPQoFR1n2TMkbZ+P/oMvB8hEkb2VzG7svVqQHkeY4X99oVB6qgxNefcQLzBn/If
YXTK3ZHJe+ZW42B9Mxulud/ngsKvObw7hU8GK/RR/Q==
=gM3c
-----END PGP SIGNATURE-----

--------------7wLbm2MU50tdk2L67QW1JM8K--


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:44:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:44:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529259.823447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDiC-0002rU-Fn; Wed, 03 May 2023 14:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529259.823447; Wed, 03 May 2023 14:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDiC-0002rN-CC; Wed, 03 May 2023 14:44:08 +0000
Received: by outflank-mailman (input) for mailman id 529259;
 Wed, 03 May 2023 14:44:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puDiB-0002rH-0W
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:44:07 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f424ea50-e9c0-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 16:44:05 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A6AE320472;
 Wed,  3 May 2023 14:44:05 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7C69C1331F;
 Wed,  3 May 2023 14:44:05 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id TymTHDVzUmQWTQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 14:44:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f424ea50-e9c0-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683125045; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=DtfcldeIlyvyiJm+cGQ7ZADA31nTBjnzUzAh3WEK4LI=;
	b=pDHBxrMoLmy2lNOK3R253gIMgyGg4rxjMSpygX5VAkpG7UunSqtHf//aam6kSqkQiLDQSK
	WV1xy55fe+L1wBWBFsb1aeB7PLHxFtcdZhHtFpO+tdbTo6YpJwVJUM1GI7ii7kllO3ZW7N
	509yzLXjDojD3HBdUYe4/52IL0PIyrU=
Message-ID: <a08d4683-2610-b37d-2bd5-529256a2e994@suse.com>
Date: Wed, 3 May 2023 16:44:05 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v4 13/13] tools/xenstore: switch quota management to be
 table based
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-14-jgross@suse.com>
 <1a529c44-1564-ad42-3924-f58efaa83a91@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <1a529c44-1564-ad42-3924-f58efaa83a91@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------ZUF9dq0iMRAl2VbFZRkyRanD"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------ZUF9dq0iMRAl2VbFZRkyRanD
Content-Type: multipart/mixed; boundary="------------7v7rD0UKHduATGBLDKGFLyoT";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <a08d4683-2610-b37d-2bd5-529256a2e994@suse.com>
Subject: Re: [PATCH v4 13/13] tools/xenstore: switch quota management to be
 table based
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-14-jgross@suse.com>
 <1a529c44-1564-ad42-3924-f58efaa83a91@xen.org>
In-Reply-To: <1a529c44-1564-ad42-3924-f58efaa83a91@xen.org>

--------------7v7rD0UKHduATGBLDKGFLyoT
Content-Type: multipart/mixed; boundary="------------LP7Vogz9Cyihv0QPkgO5Q10w"

--------------LP7Vogz9Cyihv0QPkgO5Q10w
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDMuMDUuMjMgMTI6NTMsIEp1bGllbiBHcmFsbCB3cm90ZToNCj4gSGkgSnVlcmdlbiwN
Cj4gDQo+IE9uIDA1LzA0LzIwMjMgMDg6MDMsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+PiBJ
bnN0ZWFkIG9mIGhhdmluZyBpbmRpdmlkdWFsIHF1b3RhIHZhcmlhYmxlcyBzd2l0Y2ggdG8g
YSB0YWJsZSBiYXNlZA0KPj4gYXBwcm9hY2ggbGlrZSB0aGUgZ2VuZXJpYyBhY2NvdW50aW5n
LiBJbmNsdWRlIGFsbCB0aGUgcmVsYXRlZCBkYXRhIGluDQo+PiB0aGUgc2FtZSB0YWJsZSBh
bmQgYWRkIGFjY2Vzc29yIGZ1bmN0aW9ucy4NCj4+DQo+PiBUaGlzIGVuYWJsZXMgdG8gdXNl
IHRoZSBjb21tYW5kIGxpbmUgLS1xdW90YSBwYXJhbWV0ZXIgZm9yIHNldHRpbmcgYWxsDQo+
PiBwb3NzaWJsZSBxdW90YSB2YWx1ZXMsIGtlZXBpbmcgdGhlIHByZXZpb3VzIHBhcmFtZXRl
cnMgZm9yDQo+PiBjb21wYXRpYmlsaXR5Lg0KPj4NCj4+IFNpZ25lZC1vZmYtYnk6IEp1ZXJn
ZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4NCj4+IC0tLQ0KPj4gVjI6DQo+PiAtIG5ldyBw
YXRjaA0KPj4gT25lIGZ1cnRoZXIgcmVtYXJrOiBpdCB3b3VsZCBiZSByYXRoZXIgZWFzeSB0
byBhZGQgc29mdC1xdW90YSBmb3IgYWxsDQo+PiB0aGUgb3RoZXIgcXVvdGFzIChzaW1pbGFy
IHRvIHRoZSBtZW1vcnkgb25lKS4gVGhpcyBjb3VsZCBiZSB1c2VkIGFzDQo+PiBhbiBlYXJs
eSB3YXJuaW5nIGZvciB0aGUgbmVlZCB0byByYWlzZSBnbG9iYWwgcXVvdGEuDQo+IA0KPiBJ
IGRvbid0IGhhdmUgYSBzdHJvbmcgb3BpbmlvbiBvbiB0aGlzIHRvcGljLg0KDQpNZSBuZWl0
aGVyLg0KDQo+IA0KPj4gLS0tDQo+PiDCoCB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfY29u
dHJvbC5jwqDCoMKgwqAgfMKgIDQzICsrLS0tLS0tDQo+PiDCoCB0b29scy94ZW5zdG9yZS94
ZW5zdG9yZWRfY29yZS5jwqDCoMKgwqDCoMKgwqAgfMKgIDg1ICsrKysrKysrLS0tLS0tLS0N
Cj4+IMKgIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmjCoMKgwqDCoMKgwqDCoCB8
wqAgMTAgLS0NCj4+IMKgIHRvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9kb21haW4uY8KgwqDC
oMKgwqAgfCAxMzIgKysrKysrKysrKysrKysrKystLS0tLS0tLQ0KPj4gwqAgdG9vbHMveGVu
c3RvcmUveGVuc3RvcmVkX2RvbWFpbi5owqDCoMKgwqDCoCB8wqAgMTIgKystDQo+PiDCoCB0
b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfdHJhbnNhY3Rpb24uYyB8wqDCoCA1ICstDQo+PiDC
oCB0b29scy94ZW5zdG9yZS94ZW5zdG9yZWRfd2F0Y2guY8KgwqDCoMKgwqDCoCB8wqDCoCAy
ICstDQo+PiDCoCA3IGZpbGVzIGNoYW5nZWQsIDE1NSBpbnNlcnRpb25zKCspLCAxMzQgZGVs
ZXRpb25zKC0pDQo+Pg0KPj4gZGlmZiAtLWdpdCBhL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3Jl
ZF9jb250cm9sLmMgDQo+PiBiL3Rvb2xzL3hlbnN0b3JlL3hlbnN0b3JlZF9jb250cm9sLmMN
Cj4+IGluZGV4IGEyYmE2NGExNWMuLjc1ZjUxYTgwZGIgMTAwNjQ0DQo+PiAtLS0gYS90b29s
cy94ZW5zdG9yZS94ZW5zdG9yZWRfY29udHJvbC5jDQo+PiArKysgYi90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfY29udHJvbC5jDQo+PiBAQCAtMjIxLDM1ICsyMjEsNiBAQCBzdGF0aWMg
aW50IGRvX2NvbnRyb2xfbG9nKGNvbnN0IHZvaWQgKmN0eCwgc3RydWN0IA0KPj4gY29ubmVj
dGlvbiAqY29ubiwNCj4+IMKgwqDCoMKgwqAgcmV0dXJuIDA7DQo+PiDCoCB9DQo+PiAtc3Ry
dWN0IHF1b3RhIHsNCj4+IC3CoMKgwqAgY29uc3QgY2hhciAqbmFtZTsNCj4+IC3CoMKgwqAg
aW50ICpxdW90YTsNCj4+IC3CoMKgwqAgY29uc3QgY2hhciAqZGVzY3I7DQo+PiAtfTsNCj4+
IC0NCj4+IC1zdGF0aWMgY29uc3Qgc3RydWN0IHF1b3RhIGhhcmRfcXVvdGFzW10gPSB7DQo+
PiAtwqDCoMKgIHsgIm5vZGVzIiwgJnF1b3RhX25iX2VudHJ5X3Blcl9kb21haW4sICJOb2Rl
cyBwZXIgZG9tYWluIiB9LA0KPj4gLcKgwqDCoCB7ICJ3YXRjaGVzIiwgJnF1b3RhX25iX3dh
dGNoX3Blcl9kb21haW4sICJXYXRjaGVzIHBlciBkb21haW4iIH0sDQo+PiAtwqDCoMKgIHsg
InRyYW5zYWN0aW9ucyIsICZxdW90YV9tYXhfdHJhbnNhY3Rpb24sICJUcmFuc2FjdGlvbnMg
cGVyIGRvbWFpbiIgfSwNCj4+IC3CoMKgwqAgeyAib3V0c3RhbmRpbmciLCAmcXVvdGFfcmVx
X291dHN0YW5kaW5nLA0KPj4gLcKgwqDCoMKgwqDCoMKgICJPdXRzdGFuZGluZyByZXF1ZXN0
cyBwZXIgZG9tYWluIiB9LA0KPj4gLcKgwqDCoCB7ICJ0cmFuc2FjdGlvbi1ub2RlcyIsICZx
dW90YV90cmFuc19ub2RlcywNCj4+IC3CoMKgwqDCoMKgwqDCoCAiTWF4LiBudW1iZXIgb2Yg
YWNjZXNzZWQgbm9kZXMgcGVyIHRyYW5zYWN0aW9uIiB9LA0KPj4gLcKgwqDCoCB7ICJtZW1v
cnkiLCAmcXVvdGFfbWVtb3J5X3Blcl9kb21haW5faGFyZCwNCj4+IC3CoMKgwqDCoMKgwqDC
oCAiVG90YWwgWGVuc3RvcmUgbWVtb3J5IHBlciBkb21haW4gKGVycm9yIGxldmVsKSIgfSwN
Cj4+IC3CoMKgwqAgeyAibm9kZS1zaXplIiwgJnF1b3RhX21heF9lbnRyeV9zaXplLCAiTWF4
LiBzaXplIG9mIGEgbm9kZSIgfSwNCj4+IC3CoMKgwqAgeyAicGF0aC1tYXgiLCAmcXVvdGFf
bWF4X3BhdGhfbGVuLCAiTWF4LiBsZW5ndGggb2YgYSBub2RlIHBhdGgiIH0sDQo+PiAtwqDC
oMKgIHsgInBlcm1pc3Npb25zIiwgJnF1b3RhX25iX3Blcm1zX3Blcl9ub2RlLA0KPj4gLcKg
wqDCoMKgwqDCoMKgICJNYXguIG51bWJlciBvZiBwZXJtaXNzaW9ucyBwZXIgbm9kZSIgfSwN
Cj4+IC3CoMKgwqAgeyBOVUxMLCBOVUxMLCBOVUxMIH0NCj4+IC19Ow0KPj4gLQ0KPj4gLXN0
YXRpYyBjb25zdCBzdHJ1Y3QgcXVvdGEgc29mdF9xdW90YXNbXSA9IHsNCj4+IC3CoMKgwqAg
eyAibWVtb3J5IiwgJnF1b3RhX21lbW9yeV9wZXJfZG9tYWluX3NvZnQsDQo+PiAtwqDCoMKg
wqDCoMKgwqAgIlRvdGFsIFhlbnN0b3JlIG1lbW9yeSBwZXIgZG9tYWluICh3YXJuaW5nIGxl
dmVsKSIgfSwNCj4+IC3CoMKgwqAgeyBOVUxMLCBOVUxMLCBOVUxMIH0NCj4+IC19Ow0KPj4g
LQ0KPj4gwqAgc3RhdGljIGludCBxdW90YV9zaG93X2N1cnJlbnQoY29uc3Qgdm9pZCAqY3R4
LCBzdHJ1Y3QgY29ubmVjdGlvbiAqY29ubiwNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgIGNvbnN0IHN0cnVjdCBxdW90YSAqcXVvdGFzKQ0KPj4gwqAgew0K
Pj4gQEAgLTI2MCw5ICsyMzEsMTEgQEAgc3RhdGljIGludCBxdW90YV9zaG93X2N1cnJlbnQo
Y29uc3Qgdm9pZCAqY3R4LCBzdHJ1Y3QgDQo+PiBjb25uZWN0aW9uICpjb25uLA0KPj4gwqDC
oMKgwqDCoCBpZiAoIXJlc3ApDQo+PiDCoMKgwqDCoMKgwqDCoMKgwqAgcmV0dXJuIEVOT01F
TTsNCj4+IC3CoMKgwqAgZm9yIChpID0gMDsgcXVvdGFzW2ldLnF1b3RhOyBpKyspIHsNCj4+
ICvCoMKgwqAgZm9yIChpID0gMDsgaSA8IEFDQ19OOyBpKyspIHsNCj4+ICvCoMKgwqDCoMKg
wqDCoCBpZiAoIXF1b3Rhc1tpXS5uYW1lKQ0KPj4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqAg
Y29udGludWU7DQo+PiDCoMKgwqDCoMKgwqDCoMKgwqAgcmVzcCA9IHRhbGxvY19hc3ByaW50
Zl9hcHBlbmQocmVzcCwgIiUtMTdzOiAlOGQgJXNcbiIsDQo+PiAtwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgcXVvdGFzW2ldLm5hbWUsICpx
dW90YXNbaV0ucXVvdGEsDQo+PiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqAgcXVvdGFzW2ldLm5hbWUsIHF1b3Rhc1tpXS52YWwsDQo+PiDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAg
cXVvdGFzW2ldLmRlc2NyKTsNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCBpZiAoIXJlc3ApDQo+
PiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCByZXR1cm4gRU5PTUVNOw0KPj4gQEAgLTI3
NCw3ICsyNDcsNyBAQCBzdGF0aWMgaW50IHF1b3RhX3Nob3dfY3VycmVudChjb25zdCB2b2lk
ICpjdHgsIHN0cnVjdCANCj4+IGNvbm5lY3Rpb24gKmNvbm4sDQo+PiDCoCB9DQo+PiDCoCBz
dGF0aWMgaW50IHF1b3RhX3NldChjb25zdCB2b2lkICpjdHgsIHN0cnVjdCBjb25uZWN0aW9u
ICpjb25uLA0KPj4gLcKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBjaGFyICoqdmVjLCBpbnQg
bnVtLCBjb25zdCBzdHJ1Y3QgcXVvdGEgKnF1b3RhcykNCj4+ICvCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqAgY2hhciAqKnZlYywgaW50IG51bSwgc3RydWN0IHF1b3RhICpxdW90YXMpDQo+
PiDCoCB7DQo+PiDCoMKgwqDCoMKgIHVuc2lnbmVkIGludCBpOw0KPj4gwqDCoMKgwqDCoCBp
bnQgdmFsOw0KPj4gQEAgLTI4Niw5ICsyNTksOSBAQCBzdGF0aWMgaW50IHF1b3RhX3NldChj
b25zdCB2b2lkICpjdHgsIHN0cnVjdCBjb25uZWN0aW9uIA0KPj4gKmNvbm4sDQo+PiDCoMKg
wqDCoMKgIGlmICh2YWwgPCAxKQ0KPj4gwqDCoMKgwqDCoMKgwqDCoMKgIHJldHVybiBFSU5W
QUw7DQo+PiAtwqDCoMKgIGZvciAoaSA9IDA7IHF1b3Rhc1tpXS5xdW90YTsgaSsrKSB7DQo+
PiAtwqDCoMKgwqDCoMKgwqAgaWYgKCFzdHJjbXAodmVjWzBdLCBxdW90YXNbaV0ubmFtZSkp
IHsNCj4+IC3CoMKgwqDCoMKgwqDCoMKgwqDCoMKgICpxdW90YXNbaV0ucXVvdGEgPSB2YWw7
DQo+PiArwqDCoMKgIGZvciAoaSA9IDA7IGkgPCBBQ0NfTjsgaSsrKSB7DQo+PiArwqDCoMKg
wqDCoMKgwqAgaWYgKHF1b3Rhc1tpXS5uYW1lICYmICFzdHJjbXAodmVjWzBdLCBxdW90YXNb
aV0ubmFtZSkpIHsNCj4+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHF1b3Rhc1tpXS52YWwg
PSB2YWw7DQo+PiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBzZW5kX2Fjayhjb25uLCBY
U19DT05UUk9MKTsNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHJldHVybiAwOw0K
Pj4gwqDCoMKgwqDCoMKgwqDCoMKgIH0NCj4+IGRpZmYgLS1naXQgYS90b29scy94ZW5zdG9y
ZS94ZW5zdG9yZWRfY29yZS5jIGIvdG9vbHMveGVuc3RvcmUveGVuc3RvcmVkX2NvcmUuYw0K
Pj4gaW5kZXggNjVkZjI4NjZiZi4uNmUyZmMwNjg0MCAxMDA2NDQNCj4+IC0tLSBhL3Rvb2xz
L3hlbnN0b3JlL3hlbnN0b3JlZF9jb3JlLmMNCj4+ICsrKyBiL3Rvb2xzL3hlbnN0b3JlL3hl
bnN0b3JlZF9jb3JlLmMNCj4+IEBAIC04OSwxNyArODksNiBAQCB1bnNpZ25lZCBpbnQgdHJh
Y2VfZmxhZ3MgPSBUUkFDRV9PQkogfCBUUkFDRV9JTzsNCj4+IMKgIHN0YXRpYyBjb25zdCBj
aGFyICpzb2NrbXNnX3N0cmluZyhlbnVtIHhzZF9zb2NrbXNnX3R5cGUgdHlwZSk7DQo+PiAt
aW50IHF1b3RhX25iX2VudHJ5X3Blcl9kb21haW4gPSAxMDAwOw0KPj4gLWludCBxdW90YV9u
Yl93YXRjaF9wZXJfZG9tYWluID0gMTI4Ow0KPj4gLWludCBxdW90YV9tYXhfZW50cnlfc2l6
ZSA9IDIwNDg7IC8qIDJLICovDQo+PiAtaW50IHF1b3RhX21heF90cmFuc2FjdGlvbiA9IDEw
Ow0KPj4gLWludCBxdW90YV9uYl9wZXJtc19wZXJfbm9kZSA9IDU7DQo+PiAtaW50IHF1b3Rh
X3RyYW5zX25vZGVzID0gMTAyNDsNCj4+IC1pbnQgcXVvdGFfbWF4X3BhdGhfbGVuID0gWEVO
U1RPUkVfUkVMX1BBVEhfTUFYOw0KPj4gLWludCBxdW90YV9yZXFfb3V0c3RhbmRpbmcgPSAy
MDsNCj4+IC1pbnQgcXVvdGFfbWVtb3J5X3Blcl9kb21haW5fc29mdCA9IDIgKiAxMDI0ICog
MTAyNDsgLyogMiBNQiAqLw0KPj4gLWludCBxdW90YV9tZW1vcnlfcGVyX2RvbWFpbl9oYXJk
ID0gMiAqIDEwMjQgKiAxMDI0ICsgNTEyICogMTAyNDsgLyogMi41IE1CICovDQo+PiAtDQo+
PiDCoCB1bnNpZ25lZCBpbnQgdGltZW91dF93YXRjaF9ldmVudF9tc2VjID0gMjAwMDA7DQo+
PiDCoCB2b2lkIHRyYWNlKGNvbnN0IGNoYXIgKmZtdCwgLi4uKQ0KPj4gQEAgLTc5OSw3ICs3
ODgsNyBAQCBpbnQgd3JpdGVfbm9kZV9yYXcoc3RydWN0IGNvbm5lY3Rpb24gKmNvbm4sIFRE
Ql9EQVRBICprZXksIA0KPj4gc3RydWN0IG5vZGUgKm5vZGUsDQo+PiDCoMKgwqDCoMKgwqDC
oMKgwqAgKyBub2RlLT5wZXJtcy5udW0gKiBzaXplb2Yobm9kZS0+cGVybXMucFswXSkNCj4+
IMKgwqDCoMKgwqDCoMKgwqDCoCArIG5vZGUtPmRhdGFsZW4gKyBub2RlLT5jaGlsZGxlbjsN
Cj4+IC3CoMKgwqAgaWYgKGRvbWFpbl9tYXhfY2hrKGNvbm4sIEFDQ19OT0RFU1osIGRhdGEu
ZHNpemUsIHF1b3RhX21heF9lbnRyeV9zaXplKQ0KPj4gK8KgwqDCoCBpZiAoZG9tYWluX21h
eF9jaGsoY29ubiwgQUNDX05PREVTWiwgZGF0YS5kc2l6ZSkNCj4+IMKgwqDCoMKgwqDCoMKg
wqDCoCAmJiAhbm9fcXVvdGFfY2hlY2spIHsNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCBlcnJu
byA9IEVOT1NQQzsNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCByZXR1cm4gZXJybm87DQo+PiBA
QCAtMTE4OCw4ICsxMTc3LDcgQEAgYm9vbCBpc192YWxpZF9ub2RlbmFtZShjb25zdCBzdHJ1
Y3QgY29ubmVjdGlvbiAqY29ubiwgDQo+PiBjb25zdCBjaGFyICpub2RlKQ0KPj4gwqDCoMKg
wqDCoCBpZiAoc3NjYW5mKG5vZGUsICIvbG9jYWwvZG9tYWluLyU1dS8lbiIsICZkb21pZCwg
JmxvY2FsX29mZikgIT0gMSkNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCBsb2NhbF9vZmYgPSAw
Ow0KPj4gLcKgwqDCoCBpZiAoZG9tYWluX21heF9jaGsoY29ubiwgQUNDX1BBVEhMRU4sIHN0
cmxlbihub2RlKSAtIGxvY2FsX29mZiwNCj4+IC3CoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIHF1b3RhX21heF9wYXRoX2xlbikpDQo+PiArwqDCoMKgIGlmIChkb21haW5fbWF4X2No
ayhjb25uLCBBQ0NfUEFUSExFTiwgc3RybGVuKG5vZGUpIC0gbG9jYWxfb2ZmKSkNCj4+IMKg
wqDCoMKgwqDCoMKgwqDCoCByZXR1cm4gZmFsc2U7DQo+PiDCoMKgwqDCoMKgIHJldHVybiB2
YWxpZF9jaGFycyhub2RlKTsNCj4+IEBAIC0xNTAxLDcgKzE0ODksNyBAQCBzdGF0aWMgc3Ry
dWN0IG5vZGUgKmNyZWF0ZV9ub2RlKHN0cnVjdCBjb25uZWN0aW9uICpjb25uLCANCj4+IGNv
bnN0IHZvaWQgKmN0eCwNCj4+IMKgwqDCoMKgwqAgZm9yIChpID0gbm9kZTsgaTsgaSA9IGkt
PnBhcmVudCkgew0KPj4gwqDCoMKgwqDCoMKgwqDCoMKgIC8qIGktPnBhcmVudCBpcyBzZXQg
Zm9yIGVhY2ggbmV3IG5vZGUsIHNvIGNoZWNrIHF1b3RhLiAqLw0KPj4gwqDCoMKgwqDCoMKg
wqDCoMKgIGlmIChpLT5wYXJlbnQgJiYNCj4+IC3CoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGRv
bWFpbl9uYmVudHJ5KGNvbm4pID49IHF1b3RhX25iX2VudHJ5X3Blcl9kb21haW4pIHsNCj4+
ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGRvbWFpbl9uYmVudHJ5KGNvbm4pID49IGhhcmRf
cXVvdGFzW0FDQ19OT0RFU10udmFsKSB7DQo+PiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oCByZXQgPSBFTk9TUEM7DQo+PiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBnb3RvIGVy
cjsNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCB9DQo+PiBAQCAtMTc3Niw3ICsxNzY0LDcgQEAg
c3RhdGljIGludCBkb19zZXRfcGVybXMoY29uc3Qgdm9pZCAqY3R4LCBzdHJ1Y3QgDQo+PiBj
b25uZWN0aW9uICpjb25uLA0KPj4gwqDCoMKgwqDCoMKgwqDCoMKgIHJldHVybiBFSU5WQUw7
DQo+PiDCoMKgwqDCoMKgIHBlcm1zLm51bS0tOw0KPj4gLcKgwqDCoCBpZiAoZG9tYWluX21h
eF9jaGsoY29ubiwgQUNDX05QRVJNLCBwZXJtcy5udW0sIHF1b3RhX25iX3Blcm1zX3Blcl9u
b2RlKSkNCj4+ICvCoMKgwqAgaWYgKGRvbWFpbl9tYXhfY2hrKGNvbm4sIEFDQ19OUEVSTSwg
cGVybXMubnVtKSkNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCByZXR1cm4gRU5PU1BDOw0KPj4g
wqDCoMKgwqDCoCBwZXJtc3RyID0gaW4tPmJ1ZmZlciArIHN0cmxlbihpbi0+YnVmZmVyKSAr
IDE7DQo+PiBAQCAtMjY0NCw3ICsyNjMyLDE2IEBAIHN0YXRpYyB2b2lkIHVzYWdlKHZvaWQp
DQo+PiDCoCAiwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqAgbWVtb3J5OiB0b3RhbCB1c2VkIG1lbW9yeSBwZXIgZG9tYWluIGZvciBub2Rlcyxc
biINCj4+IMKgICLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgdHJhbnNhY3Rpb25zLCB3YXRjaGVzIGFuZCByZXF1
ZXN0cywgYWJvdmVcbiINCj4+IMKgICLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgd2hpY2ggWGVuc3RvcmUgd2ls
bCBzdG9wIHRhbGtpbmcgdG8gDQo+PiBkb21haW5cbiINCj4+ICsiwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgbm9kZXM6IG51bWJlciBub2Rl
cyBvd25lZCBieSBhIGRvbWFpblxuIg0KPj4gKyLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBub2RlLXBlcm1pc3Npb25zOiBudW1iZXIgb2Yg
YWNjZXNzIHBlcm1pc3Npb25zIHBlclxuIg0KPj4gKyLCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoCBub2RlXG4iDQo+PiArIsKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgIG5vZGUtc2l6ZTogdG90YWwgc2l6ZSBvZiBhIG5vZGUg
KHBlcm1pc3Npb25zICtcbiINCj4+ICsiwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGNoaWxkcmVuIG5h
bWVzICsgY29udGVudClcbiINCj4+IMKgICLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBvdXRzdGFuZGluZzogbnVtYmVyIG9mIG91dHN0YW5k
aW5nIHJlcXVlc3RzXG4iDQo+PiArIsKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgIHBhdGgtbGVuZ3RoOiBsZW5ndGggb2YgYSBub2RlIHBhdGhc
biINCj4+ICsiwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqAgdHJhbnNhY3Rpb25zOiBudW1iZXIgb2YgY29uY3VycmVudCB0cmFuc2FjdGlvbnNc
biINCj4+ICsiwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHBlciBkb21haW5cbiINCj4+ICsi
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgd2F0
Y2hlczogbnVtYmVyIG9mIHdhdGNoZXMgcGVyIGRvbWFpbiINCj4+IMKgICLCoCAtcSwgLS1x
dW90YS1zb2Z0IDx3aGF0Pj08bmI+IHNldCBhIHNvZnQgcXVvdGEgPHdoYXQ+IHRvIHRoZSB2
YWx1ZSA8bmI+LFxuIg0KPj4gwqAgIsKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgIGNhdXNpbmcgYSB3YXJuaW5nIHRvIGJlIGlzc3VlZCB2aWEg
c3lzbG9nKCkgaWYgDQo+PiB0aGVcbiINCj4+IMKgICLCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBsaW1pdCBpcyB2aW9sYXRlZCwgYWxsb3dl
ZCBxdW90YXMgYXJlOlxuIg0KPj4gQEAgLTI2OTUsMTIgKzI2OTIsMTIgQEAgaW50IGRvbTBf
ZG9taWQgPSAwOw0KPj4gwqAgaW50IGRvbTBfZXZlbnQgPSAwOw0KPj4gwqAgaW50IHByaXZf
ZG9taWQgPSAwOw0KPj4gLXN0YXRpYyBpbnQgZ2V0X29wdHZhbF9pbnQoY29uc3QgY2hhciAq
YXJnKQ0KPj4gK3N0YXRpYyB1bnNpZ25lZCBpbnQgZ2V0X29wdHZhbF9pbnQoY29uc3QgY2hh
ciAqYXJnKQ0KPj4gwqAgew0KPj4gwqDCoMKgwqDCoCBjaGFyICplbmQ7DQo+PiAtwqDCoMKg
IGxvbmcgdmFsOw0KPj4gK8KgwqDCoCB1bnNpZ25lZCBsb25nIHZhbDsNCj4+IC3CoMKgwqAg
dmFsID0gc3RydG9sKGFyZywgJmVuZCwgMTApOw0KPj4gK8KgwqDCoCB2YWwgPSBzdHJ0b3Vs
KGFyZywgJmVuZCwgMTApOw0KPiBUaGUgY2hhbmdlcyBpbiBnZXRfb3B0dmFsX2ludCgpIGZl
ZWxzIGxpa2UgbW9yZSBhIGNsZWFuLXVwIGJlY2F1c2UgdGhlIHJldHVybmVkIA0KPiB2YWx1
ZSBjYW5ub3QgYmUgbmVnYXRpdmUgKHNlZSBjaGVjayBiZWxvdykuIEkgd291bGQgcHJlZmVy
IGlmIHRoZXkgYXJlIGRvbmUgaW4gYSANCj4gc2VwYXJhdGUgcGF0Y2guDQoNCk9rYXkuDQoN
Cj4gDQo+PiDCoMKgwqDCoMKgIGlmICghKmFyZyB8fCAqZW5kIHx8IHZhbCA8IDAgfHwgdmFs
ID4gSU5UX01BWCkNCj4gDQo+IE5vdyB0aGF0ICd2YWwnIGlzIHVuc2lnbmVkIGxvbmcsIHRo
ZW4gdGhlcmUgaXMgbm8gcG9pbnQgZm9yIGNoZWNraW5nIHZhbCBpcyA8IDAuDQoNCk9oLCBp
bmRlZWQuDQoNCj4gDQo+IExhc3RseSwgSSB3b3VsZCByZW5hbWUgdGhlIGhlbHBlciB0byBt
YWtlIGNsZWFyIGl0IHJldHVybnMgYW4gdW5zaWduZWQgdmFsdWUuIA0KPiBIb3cgYWJvdXQg
Z2V0X29wdHZhbF91aW50KCk/DQoNCk9rYXkuDQoNCj4gDQo+PiDCoMKgwqDCoMKgwqDCoMKg
wqAgYmFyZigiaW52YWxpZCBwYXJhbWV0ZXIgdmFsdWUgXCIlc1wiXG4iLCBhcmcpOw0KPj4g
QEAgLTI3MDksMTUgKzI3MDYsMTkgQEAgc3RhdGljIGludCBnZXRfb3B0dmFsX2ludChjb25z
dCBjaGFyICphcmcpDQo+PiDCoCBzdGF0aWMgYm9vbCB3aGF0X21hdGNoZXMoY29uc3QgY2hh
ciAqYXJnLCBjb25zdCBjaGFyICp3aGF0KQ0KPj4gwqAgew0KPj4gLcKgwqDCoCB1bnNpZ25l
ZCBpbnQgd2hhdF9sZW4gPSBzdHJsZW4od2hhdCk7DQo+PiArwqDCoMKgIHVuc2lnbmVkIGlu
dCB3aGF0X2xlbjsNCj4+ICsNCj4+ICvCoMKgwqAgaWYgKCF3aGF0KQ0KPj4gK8KgwqDCoMKg
wqDCoMKgIGZhbHNlOw0KPiANCj4gU2hvdWxkbid0IHRoaXMgYmUgInJldHVybiBmYWxzZSI/
DQoNClllcy4NCg0KDQpKdWVyZ2VuDQo=
--------------LP7Vogz9Cyihv0QPkgO5Q10w
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------LP7Vogz9Cyihv0QPkgO5Q10w--

--------------7v7rD0UKHduATGBLDKGFLyoT--

--------------ZUF9dq0iMRAl2VbFZRkyRanD
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRSczUFAwAAAAAACgkQsN6d1ii/Ey/C
rwf/RujEp52TS0gYT54CJSshoUqjmJS8HvP7on68P1iFJGoe6pc5R1fKPNGcbqQZAQUodNaILssy
vWhIMY9GZiwbRBXGYES9rUxzCDMPcITuRlQh2+x0Xzbzcg/DXu01rUYlJq/Ed0+82QdR/5wN2SYL
FY7wKWKvIWCTRDGzFVip9lccbtovvV/PF0baiwPvVly0ZYl2ybBpsDm/C6sZDoIvE4N9hFnFbsDn
hbWhA7xxokpmTe9/9XYTYSiNGElJEWBTk+CHJDWtb/ws1N9t9+x4UAzp883cHHTdePkJ9xBwMYjj
gTRScnbLKu4f4WKIBMjsMezvaExDA61nfsdsvuQTEA==
=H1Z+
-----END PGP SIGNATURE-----

--------------ZUF9dq0iMRAl2VbFZRkyRanD--


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:45:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:45:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529264.823458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDjW-0003UW-Vl; Wed, 03 May 2023 14:45:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529264.823458; Wed, 03 May 2023 14:45:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDjW-0003UP-Qf; Wed, 03 May 2023 14:45:30 +0000
Received: by outflank-mailman (input) for mailman id 529264;
 Wed, 03 May 2023 14:45:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puDjV-0003Tz-83
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:45:29 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2558c85d-e9c1-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 16:45:28 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1A5DD20472;
 Wed,  3 May 2023 14:45:28 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E4C601331F;
 Wed,  3 May 2023 14:45:27 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id JLF5NodzUmTzTQAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 14:45:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2558c85d-e9c1-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683125128; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hGQYK42d3jA6rJoAskGLq5Fd4hxMy9lkmej84hXY6bk=;
	b=SMmWC5G6ERV3cjrfWKkUE+RNx7QvqF9tcw+V3WWvmgCteTwolgAlMRIpVMd74me4DIo12f
	H0a3GhMONfseA8yWGZFxXJKu970GLf7My2biGEeFV8IcGE45QcVkD6ZiufOFbWB3lzJCgl
	jHz/724htpgdD2zyZ4Mr2pg8iV1g9QM=
Message-ID: <ffe24c91-2227-8551-ebfb-55e84cf57a55@suse.com>
Date: Wed, 3 May 2023 16:45:27 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 03/13] tools/xenstore: modify interface of
 create_hashtable()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-4-jgross@suse.com>
 <1a2a7aed-8947-c5ed-e1ed-8fa80bc75750@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <1a2a7aed-8947-c5ed-e1ed-8fa80bc75750@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------9QaKlmPp0zNMNJYTjaYe2SOX"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------9QaKlmPp0zNMNJYTjaYe2SOX
Content-Type: multipart/mixed; boundary="------------wqR7kyJJtfyTTDjEJUj6v8r0";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <ffe24c91-2227-8551-ebfb-55e84cf57a55@suse.com>
Subject: Re: [PATCH v2 03/13] tools/xenstore: modify interface of
 create_hashtable()
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-4-jgross@suse.com>
 <1a2a7aed-8947-c5ed-e1ed-8fa80bc75750@xen.org>
In-Reply-To: <1a2a7aed-8947-c5ed-e1ed-8fa80bc75750@xen.org>

--------------wqR7kyJJtfyTTDjEJUj6v8r0
Content-Type: multipart/mixed; boundary="------------2afD6VB57g35pVY2pMO5GNIx"

--------------2afD6VB57g35pVY2pMO5GNIx
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDMuMDUuMjMgMTQ6NTksIEp1bGllbiBHcmFsbCB3cm90ZToNCj4gSGkgSnVlcmdlbiwN
Cj4gDQo+IE9uIDMwLzAzLzIwMjMgMDk6NTAsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+PiBU
aGUgbWluc2l6ZSBwYXJhbWV0ZXIgb2YgY3JlYXRlX2hhc2h0YWJsZSgpIGRvZXNuJ3QgaGF2
ZSBhbnkgcmVhbCB1c2UNCj4+IGNhc2UgZm9yIFhlbnN0b3JlLCBzbyBkcm9wIGl0Lg0KPj4N
Cj4+IEZvciBiZXR0ZXIgdGFsbG9jX3JlcG9ydF9mdWxsKCkgZGlhZ25vc3RpYyBvdXRwdXQg
YWRkIGEgbmFtZSBwYXJhbWV0ZXINCj4+IHRvIGNyZWF0ZV9oYXNodGFibGUoKS4NCj4+DQo+
PiBTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+DQo+PiAt
LS0NCj4+IMKgIHRvb2xzL3hlbnN0b3JlL2hhc2h0YWJsZS5jwqDCoMKgwqDCoMKgwqAgfCAy
MCArKysrKystLS0tLS0tLS0tLS0tLQ0KPj4gwqAgdG9vbHMveGVuc3RvcmUvaGFzaHRhYmxl
LmjCoMKgwqDCoMKgwqDCoCB8wqAgNCArKy0tDQo+PiDCoCB0b29scy94ZW5zdG9yZS94ZW5z
dG9yZWRfY29yZS5jwqDCoCB8wqAgMiArLQ0KPj4gwqAgdG9vbHMveGVuc3RvcmUveGVuc3Rv
cmVkX2RvbWFpbi5jIHzCoCA0ICsrLS0NCj4+IMKgIDQgZmlsZXMgY2hhbmdlZCwgMTEgaW5z
ZXJ0aW9ucygrKSwgMTkgZGVsZXRpb25zKC0pDQo+Pg0KPj4gZGlmZiAtLWdpdCBhL3Rvb2xz
L3hlbnN0b3JlL2hhc2h0YWJsZS5jIGIvdG9vbHMveGVuc3RvcmUvaGFzaHRhYmxlLmMNCj4+
IGluZGV4IGMxYjExNzQzYmIuLmFiMWU2ODdkMGIgMTAwNjQ0DQo+PiAtLS0gYS90b29scy94
ZW5zdG9yZS9oYXNodGFibGUuYw0KPj4gKysrIGIvdG9vbHMveGVuc3RvcmUvaGFzaHRhYmxl
LmMNCj4+IEBAIC01NSwzNiArNTUsMjggQEAgc3RhdGljIHVuc2lnbmVkIGludCBsb2FkbGlt
aXQodW5zaWduZWQgaW50IHBpbmRleCkNCj4+IMKgwqDCoMKgwqAgcmV0dXJuICgodWludDY0
X3QpcHJpbWVzW3BpbmRleF0gKiBNQVhfTE9BRF9QRVJDRU5UKSAvIDEwMDsNCj4+IMKgIH0N
Cj4+IC1zdHJ1Y3QgaGFzaHRhYmxlICpjcmVhdGVfaGFzaHRhYmxlKGNvbnN0IHZvaWQgKmN0
eCwgdW5zaWduZWQgaW50IG1pbnNpemUsDQo+PiArc3RydWN0IGhhc2h0YWJsZSAqY3JlYXRl
X2hhc2h0YWJsZShjb25zdCB2b2lkICpjdHgsIGNvbnN0IGNoYXIgKm5hbWUsDQo+PiDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgdW5zaWduZWQgaW50ICgqaGFzaGYpIChjb25zdCB2b2lkICopLA0K
Pj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgIGludCAoKmVxZikgKGNvbnN0IHZvaWQgKiwgY29uc3Qg
dm9pZCAqKSwNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB1bnNpZ25lZCBpbnQgZmxhZ3MpDQo+
PiDCoCB7DQo+PiDCoMKgwqDCoMKgIHN0cnVjdCBoYXNodGFibGUgKmg7DQo+PiAtwqDCoMKg
IHVuc2lnbmVkIGludCBwaW5kZXgsIHNpemUgPSBwcmltZXNbMF07DQo+PiAtDQo+PiAtwqDC
oMKgIC8qIENoZWNrIHJlcXVlc3RlZCBoYXNodGFibGUgaXNuJ3QgdG9vIGxhcmdlICovDQo+
PiAtwqDCoMKgIGlmIChtaW5zaXplID4gKDF1IDw8IDMwKSkgcmV0dXJuIE5VTEw7DQo+PiAt
DQo+PiAtwqDCoMKgIC8qIEVuZm9yY2Ugc2l6ZSBhcyBwcmltZSAqLw0KPj4gLcKgwqDCoCBm
b3IgKHBpbmRleD0wOyBwaW5kZXggPCBQUklNRV9UQUJMRV9MRU47IHBpbmRleCsrKSB7DQo+
PiAtwqDCoMKgwqDCoMKgwqAgaWYgKHByaW1lc1twaW5kZXhdID4gbWluc2l6ZSkgeyBzaXpl
ID0gcHJpbWVzW3BpbmRleF07IGJyZWFrOyB9DQo+PiAtwqDCoMKgIH0NCj4+IMKgwqDCoMKg
wqAgaCA9IHRhbGxvY196ZXJvKGN0eCwgc3RydWN0IGhhc2h0YWJsZSk7DQo+PiDCoMKgwqDC
oMKgIGlmIChOVUxMID09IGgpDQo+PiDCoMKgwqDCoMKgwqDCoMKgwqAgZ290byBlcnIwOw0K
Pj4gLcKgwqDCoCBoLT50YWJsZSA9IHRhbGxvY196ZXJvX2FycmF5KGgsIHN0cnVjdCBlbnRy
eSAqLCBzaXplKTsNCj4+ICvCoMKgwqAgdGFsbG9jX3NldF9uYW1lX2NvbnN0KGgsIG5hbWUp
Ow0KPj4gK8KgwqDCoCBoLT50YWJsZSA9IHRhbGxvY196ZXJvX2FycmF5KGgsIHN0cnVjdCBl
bnRyeSAqLCBwcmltZXNbMF0pOw0KPj4gwqDCoMKgwqDCoCBpZiAoTlVMTCA9PSBoLT50YWJs
ZSkNCj4+IMKgwqDCoMKgwqDCoMKgwqDCoCBnb3RvIGVycjE7DQo+PiAtwqDCoMKgIGgtPnRh
YmxlbGVuZ3RowqAgPSBzaXplOw0KPj4gK8KgwqDCoCBoLT50YWJsZWxlbmd0aMKgID0gcHJp
bWVzWzBdOw0KPiANCj4gSSBmaW5kIHRoZSBjb25uZWN0aW9uIGJldHdlZW4gdGhpcyBsaW5l
LCAuLi4NCj4gDQo+PiDCoMKgwqDCoMKgIGgtPmZsYWdzwqDCoMKgwqDCoMKgwqAgPSBmbGFn
czsNCj4+IC3CoMKgwqAgaC0+cHJpbWVpbmRleMKgwqAgPSBwaW5kZXg7DQo+PiArwqDCoMKg
IGgtPnByaW1laW5kZXjCoMKgID0gMDsNCj4gDQo+IC4uLiB0aGlzIG9uZSBhbmQgLi4uDQo+
IA0KPj4gwqDCoMKgwqDCoCBoLT5lbnRyeWNvdW50wqDCoCA9IDA7DQo+PiDCoMKgwqDCoMKg
IGgtPmhhc2hmbsKgwqDCoMKgwqDCoCA9IGhhc2hmOw0KPj4gwqDCoMKgwqDCoCBoLT5lcWZu
wqDCoMKgwqDCoMKgwqDCoCA9IGVxZjsNCj4+IC3CoMKgwqAgaC0+bG9hZGxpbWl0wqDCoMKg
ID0gbG9hZGxpbWl0KHBpbmRleCk7DQo+PiArwqDCoMKgIGgtPmxvYWRsaW1pdMKgwqDCoCA9
IGxvYWRsaW1pdCgwKTsNCj4gDQo+IC4uLiBub3cgbW9yZSBkaWZmaWN1bHQgdG8gZmluZC4g
SG93IGFib3V0IHNldHRpbmcgaC0+cHJpbWVpbmRleCBmaXJzdCBhbmQgdGhlbiANCj4gdXNp
bmcgaXQgaW4gcGxhY2Ugb2YgMD8NCg0KRmluZSB3aXRoIG1lLg0KDQoNCkp1ZXJnZW4NCg==

--------------2afD6VB57g35pVY2pMO5GNIx
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------2afD6VB57g35pVY2pMO5GNIx--

--------------wqR7kyJJtfyTTDjEJUj6v8r0--

--------------9QaKlmPp0zNMNJYTjaYe2SOX
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRSc4cFAwAAAAAACgkQsN6d1ii/Ey9P
0ggAkS5HMD+CA45UEsBPOrXVWkP+jY6NYAHffWzFz8D8oXkWaqyroga1kZlbTxBilzlsd8UqPdhj
+TlLCwlaGj0+MLT/4q/OA4GBklwN7MqZ4fh4I1+rBquUE3KxBCUQwPzP8Dwueln1NxPCNEA5CYbn
99XQ7qilvhCqQLKxZHBAW5M064xT8VKN97w3IACB/S6SHohaUvxGJ/mlO+Z4f+krESA87PckU8zz
Pbc4kyvp2cIht2hYN1oVxrZyueRx7xGi1gABV286e/frRIFUCvJWhtm1HFt6eso2XAF4TlXTr2Ou
l4luEdXdQHx8g41swz0pbx7bWUrGLRBO1chgJbymiw==
=yyFZ
-----END PGP SIGNATURE-----

--------------9QaKlmPp0zNMNJYTjaYe2SOX--


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:46:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:46:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529267.823467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDk5-0003zd-6o; Wed, 03 May 2023 14:46:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529267.823467; Wed, 03 May 2023 14:46:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDk5-0003zW-3V; Wed, 03 May 2023 14:46:05 +0000
Received: by outflank-mailman (input) for mailman id 529267;
 Wed, 03 May 2023 14:46:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puDk3-0003zE-GJ
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:46:03 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 39af8658-e9c1-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 16:46:02 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 56BAB1FFDE;
 Wed,  3 May 2023 14:46:02 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 309871331F;
 Wed,  3 May 2023 14:46:02 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Sh1wCqpzUmRFTgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 14:46:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39af8658-e9c1-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683125162; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=pFpmv1gGDLXU8PfERn9zZeyVEmZy5izRrOkIEEPjLEc=;
	b=A8EHRUb/mS9PCKrQruVCiAPVfIyydnOqbJETVV8gxX9LVMFDmXJ6yyUj0/axUeH5idSUET
	df5BJmNCxqTGW/CLOZYTz318HVQ4luzG74rdcZaHJSJXJYuGK31Cgs1wSmYf77bifHR6be
	EKR1Ox7tw9ZYih9yQYQmupPtfvy9vSg=
Message-ID: <fb4a9057-e13c-e890-caf6-1ba2abf3a850@suse.com>
Date: Wed, 3 May 2023 16:46:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 02/13] tools/xenstore: do some cleanup of hashtable.c
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-3-jgross@suse.com>
 <8d91f57b-41ed-2939-94e8-9f73f0d523a6@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <8d91f57b-41ed-2939-94e8-9f73f0d523a6@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------EddWgL95A5UVdSL8zjLHahIL"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------EddWgL95A5UVdSL8zjLHahIL
Content-Type: multipart/mixed; boundary="------------LReBWUmikuryo4KQO0eBI0CF";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <fb4a9057-e13c-e890-caf6-1ba2abf3a850@suse.com>
Subject: Re: [PATCH v2 02/13] tools/xenstore: do some cleanup of hashtable.c
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-3-jgross@suse.com>
 <8d91f57b-41ed-2939-94e8-9f73f0d523a6@xen.org>
In-Reply-To: <8d91f57b-41ed-2939-94e8-9f73f0d523a6@xen.org>

--------------LReBWUmikuryo4KQO0eBI0CF
Content-Type: multipart/mixed; boundary="------------76X27yqS6xh0zS29YlioB0ik"

--------------76X27yqS6xh0zS29YlioB0ik
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.05.23 14:55, Julien Grall wrote:
> Hi Juergen,
> 
> On 30/03/2023 09:50, Juergen Gross wrote:
>> Do the following cleanups:
>> - hashtable_count() isn't used at all, so remove it
>> - replace prime_table_length and max_load_factor with macros
>> - make hash() static
>> - add a loadlimit() helper function
>> - remove the /***/ lines between functions
>> - do some style corrections
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   tools/xenstore/hashtable.c | 71 ++++++++++++++------------------------
>>   tools/xenstore/hashtable.h | 10 ------
>>   2 files changed, 26 insertions(+), 55 deletions(-)
>>
>> diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
>> index 3d4466b597..c1b11743bb 100644
>> --- a/tools/xenstore/hashtable.c
>> +++ b/tools/xenstore/hashtable.c
>> @@ -40,22 +40,25 @@ static const unsigned int primes[] = {
>>   50331653, 100663319, 201326611, 402653189,
>>   805306457, 1610612741
>>   };
>> -const unsigned int prime_table_length = sizeof(primes)/sizeof(primes[0]);
>> -const unsigned int max_load_factor = 65; /* percentage */
>> -/*****************************************************************************/
>> -/* indexFor */
>> -static inline unsigned int
>> -indexFor(unsigned int tablelength, unsigned int hashvalue) {
>> +#define PRIME_TABLE_LEN   ARRAY_SIZE(primes)
>> +#define MAX_LOAD_PERCENT  65
>> +
>> +static inline unsigned int indexFor(unsigned int tablelength,
>> +                                    unsigned int hashvalue)
>> +{
>>       return (hashvalue % tablelength);
>>   }
>> -/*****************************************************************************/
>> -struct hashtable *
>> -create_hashtable(const void *ctx, unsigned int minsize,
>> -                 unsigned int (*hashf) (const void *),
>> -                 int (*eqf) (const void *, const void *),
>> -                 unsigned int flags)
>> +static unsigned int loadlimit(unsigned int pindex)
>> +{
>> +    return ((uint64_t)primes[pindex] * MAX_LOAD_PERCENT) / 100;
>> +}
>> +
>> +struct hashtable *create_hashtable(const void *ctx, unsigned int minsize,
>> +                                   unsigned int (*hashf) (const void *),
>> +                                   int (*eqf) (const void *, const void *),
>> +                                   unsigned int flags)
>>   {
>>       struct hashtable *h;
>>       unsigned int pindex, size = primes[0];
>> @@ -64,7 +67,7 @@ create_hashtable(const void *ctx, unsigned int minsize,
>>       if (minsize > (1u << 30)) return NULL;
>>       /* Enforce size as prime */
>> -    for (pindex=0; pindex < prime_table_length; pindex++) {
>> +    for (pindex=0; pindex < PRIME_TABLE_LEN; pindex++) {
> 
> As you fix the style, how about adding a space before/after '=' and...
> 
>>           if (primes[pindex] > minsize) { size = primes[pindex]; break; }
> 
> ... break this line in multiple ones?

Will do both.

> 
> With or without this included here:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thanks,


Juergen

--------------76X27yqS6xh0zS29YlioB0ik--

--------------LReBWUmikuryo4KQO0eBI0CF--

--------------EddWgL95A5UVdSL8zjLHahIL
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRSc6kFAwAAAAAACgkQsN6d1ii/Ey8y
XwgAiROwk8wPAlouLo9qg3I+G4Ngy2oBDrZfkY6SftKQQRARuizeHq6dCNwpDVX8CAG7Gom9Nh9e
KW9pOhXSjJl7G48PcKoncRh2X64K4x4VknU5PCIRCaHurTD3m7jvDu+z9zHgSh52iOcZ0PTs3NeN
Z9VdeIXziC9yvMHn9Fpnnp4LGZUty9xBTJ8cjwFqnRIiprFgdDYxDgDxhvAbQYO+/1BkZay1Z1/e
uo6sIcVoro4KEF0jWuEcffwsT7BIT78dZKsPcHZt93zgXuj4G9teQpkF3PapVa1cu5ukw3oXf1ai
mmokGMrCQwJ1Pr2V3b4Cs+WaCt2P+z5tRNRqM6IPHw==
=lYd0
-----END PGP SIGNATURE-----

--------------EddWgL95A5UVdSL8zjLHahIL--


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:48:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:48:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529270.823477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDmd-0004d9-MF; Wed, 03 May 2023 14:48:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529270.823477; Wed, 03 May 2023 14:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDmd-0004d2-IY; Wed, 03 May 2023 14:48:43 +0000
Received: by outflank-mailman (input) for mailman id 529270;
 Wed, 03 May 2023 14:48:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puDmc-0004cq-IS
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:48:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 98322c7d-e9c1-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 16:48:41 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id DBDC02047A;
 Wed,  3 May 2023 14:48:40 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B603F1331F;
 Wed,  3 May 2023 14:48:40 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id KQoOK0h0UmTkTwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 14:48:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98322c7d-e9c1-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683125320; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hB2lLaG8pUH5+daj/u8rW0lInWwuo6ot3NYeUtBZcfQ=;
	b=aAuHV+So7HcZC1mPG1MTlnbtcrlCdf9Pdwsh/uoKLlUK9VEwJCUvjAvdLPK1cYHbE6gXBl
	EK0GWD3CFjpkm7Z+/qG8ML+IVxxExUaY8MXcqIYXRQ4/n1ZnR3M26wt5QuMm0//IV2eKbG
	ICV4CCY67zWjsFlGM61pvQqVM8otsmg=
Message-ID: <991a003c-fc7e-f922-6a5c-9a8d75c543db@suse.com>
Date: Wed, 3 May 2023 16:48:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 04/13] tools/xenstore: let hashtable_insert() return 0
 on success
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-5-jgross@suse.com>
 <d918cc78-de22-9599-9a91-f6c11028d11b@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <d918cc78-de22-9599-9a91-f6c11028d11b@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------I4fGrNu5OMWFyWrFpZNunVy4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------I4fGrNu5OMWFyWrFpZNunVy4
Content-Type: multipart/mixed; boundary="------------6oQMX19aK3dEXiLiQ4sh08Cc";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <991a003c-fc7e-f922-6a5c-9a8d75c543db@suse.com>
Subject: Re: [PATCH v2 04/13] tools/xenstore: let hashtable_insert() return 0
 on success
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-5-jgross@suse.com>
 <d918cc78-de22-9599-9a91-f6c11028d11b@xen.org>
In-Reply-To: <d918cc78-de22-9599-9a91-f6c11028d11b@xen.org>

--------------6oQMX19aK3dEXiLiQ4sh08Cc
Content-Type: multipart/mixed; boundary="------------zN0jWzD3yow0FXWGwseRoJWV"

--------------zN0jWzD3yow0FXWGwseRoJWV
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.05.23 15:03, Julien Grall wrote:
> Hi,
> 
> On 30/03/2023 09:50, Juergen Gross wrote:
>> Today hashtable_insert() returns 0 in case of an error. Change that to
>> let it return an errno value in the error case and 0 in case of success.
> 
> I usually find such change risky because it makes the backport more complex if 
> we introduce a new call to hashtable_insert() and it is also quite difficult to 
> review (the compiler would not help to confirm all the callers have changed).
> 
> So can you provide a compelling reason for doing the change? (consistency would 
> not be one IMO)

The motivation was consistency. :-)

The alternative would be to really set errno in the error case, which
would add additional lines.

I'm not really feeling strong here, BTW.


Juergen

--------------zN0jWzD3yow0FXWGwseRoJWV--

--------------6oQMX19aK3dEXiLiQ4sh08Cc--

--------------I4fGrNu5OMWFyWrFpZNunVy4
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRSdEgFAwAAAAAACgkQsN6d1ii/Ey8A
cwf+IpaXsnTe8X6ponzYHoHUEwlFAKBgwOiMKgF4xpAJOO683wVsgnmFHhnWvcGTEXYMdRAr5q6o
23+0XGVeVj2omMaBbIzmurEVN5joJfCDa61gQ7WGUQ/xcgBkTAfQCFR9vOb+kOZykyPsnv79Hhw4
Rn01vm1JGFowzoXxL3oUeqT5SSZIUZlMRsj/wzVBFpOjjS+XYKyW764UMkT0eS5Zam4hg1Tp56lu
4vJNsAik1t7IJ8KVjiOXfvS2yKE8jO+o7qaF3RYo7+4gTmqOS6coJ7ASsCWBHLv34ivLCM2pRUMg
7Qdq6Z+6g5DaPIuguuDD5edffmymCo743rerPVqIwg==
=4IAH
-----END PGP SIGNATURE-----

--------------I4fGrNu5OMWFyWrFpZNunVy4--


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:49:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529274.823487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDn2-0005AH-1Y; Wed, 03 May 2023 14:49:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529274.823487; Wed, 03 May 2023 14:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDn1-0005AA-Uv; Wed, 03 May 2023 14:49:07 +0000
Received: by outflank-mailman (input) for mailman id 529274;
 Wed, 03 May 2023 14:49:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puDn0-0004cq-Ba
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:49:06 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a6a03ed4-e9c1-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 16:49:05 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2035F22908;
 Wed,  3 May 2023 14:49:05 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id EE8491331F;
 Wed,  3 May 2023 14:49:04 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 4azQOGB0UmQlUAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 14:49:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6a03ed4-e9c1-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683125345; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xt+2MKOFCuUEdtIgZo1nmqQoEhcMQgz9aEUMbQS5MDY=;
	b=lhkJHTrsF9sknW9V/Txq3KTqB0AUTPZB+Q2DNVpch+VN/ildkeJmF2DNmz9+/8aAvxO56d
	ryZMoVGoLv1KH0tHk9UEUlsuSrrLnn6F5MPNzzlCAtHpXhk4qHNYLCtkxQS6FSaRaUIKJw
	V3tvHAAyhXdY0E4FyiY1Eg40M/u94xo=
Message-ID: <a80d0ea1-fed5-3766-a452-3c7d1fda3cb7@suse.com>
Date: Wed, 3 May 2023 16:49:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 08/13] tools/xenstore: remove unused events list
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-9-jgross@suse.com>
 <f6af4f23-cd32-51b9-b805-6bfb114f3468@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <f6af4f23-cd32-51b9-b805-6bfb114f3468@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------JnrhvwRIw6rbY9M6s0GUNiEu"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------JnrhvwRIw6rbY9M6s0GUNiEu
Content-Type: multipart/mixed; boundary="------------vpwi0A7LVy2H0fP9NOAoXmwU";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <a80d0ea1-fed5-3766-a452-3c7d1fda3cb7@suse.com>
Subject: Re: [PATCH v2 08/13] tools/xenstore: remove unused events list
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-9-jgross@suse.com>
 <f6af4f23-cd32-51b9-b805-6bfb114f3468@xen.org>
In-Reply-To: <f6af4f23-cd32-51b9-b805-6bfb114f3468@xen.org>

--------------vpwi0A7LVy2H0fP9NOAoXmwU
Content-Type: multipart/mixed; boundary="------------dmr02Ru0PPkiM60ng1104hgS"

--------------dmr02Ru0PPkiM60ng1104hgS
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDMuMDUuMjMgMTU6MDgsIEp1bGllbiBHcmFsbCB3cm90ZToNCj4gSGkgSnVlcmdlbiwN
Cj4gDQo+IE9uIDMwLzAzLzIwMjMgMDk6NTAsIEp1ZXJnZW4gR3Jvc3Mgd3JvdGU6DQo+PiBz
dHJ1Y3Qgd2F0Y2ggY29udGFpbnMgYW4gdXNlZCBzdHJ1Y3QgbGlzdF9oZWFkIGV2ZW50cy4g
UmVtb3ZlIGl0Lg0KPiANCj4gVHlwbzogcy91c2VkL3VudXNlZC8/DQoNClllcy4NCg0KPiAN
Cj4+DQo+PiBTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+
DQo+IA0KPiBBY2tlZC1ieTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4NCg0K
VGhhbmtzLA0KDQoNCkp1ZXJnZW4NCg0K
--------------dmr02Ru0PPkiM60ng1104hgS
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------dmr02Ru0PPkiM60ng1104hgS--

--------------vpwi0A7LVy2H0fP9NOAoXmwU--

--------------JnrhvwRIw6rbY9M6s0GUNiEu
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRSdGAFAwAAAAAACgkQsN6d1ii/Ey8c
/wf/XMi1oZE6O5e/CG1N+FqpGMafzsawid5ewdeXfFIioLzzy51yshm5KaIeySxiCTn1IMeGfH3s
XDz+GL74wp3a4bxN/rvxKRHTjw2DuGiNdWz6VtqYjHlpOOt5KSANGwO0lC4v/iP2F/IvBOUHxhX4
yP/iGultzMTH51A3eEBGjr0S7GispQglW1EwvIl9NFv/UXQqrthwSOqBaEHDRUV4+/q20+VxIfiS
svgxeACVfM5SKAzc6vernec9UN6UNZXyEGb9i4lQmvtMM8T5+UWMfQKvl99Iib1vPAOtgknSdr9K
PlS7b2kSwlQMA7H0yNz0Y6O+GaZL3MDWzTMIsE7UQg==
=364L
-----END PGP SIGNATURE-----

--------------JnrhvwRIw6rbY9M6s0GUNiEu--


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:51:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:51:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529278.823497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDpa-0006ec-E1; Wed, 03 May 2023 14:51:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529278.823497; Wed, 03 May 2023 14:51:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDpa-0006eU-BE; Wed, 03 May 2023 14:51:46 +0000
Received: by outflank-mailman (input) for mailman id 529278;
 Wed, 03 May 2023 14:51:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZlPw=AY=citrix.com=prvs=4803f4e7c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puDpY-0006eM-Qv
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:51:44 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 038539d4-e9c2-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 16:51:43 +0200 (CEST)
Received: from mail-dm6nam11lp2176.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 10:51:32 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6911.namprd03.prod.outlook.com (2603:10b6:8:46::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.20; Wed, 3 May
 2023 14:51:30 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 14:51:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 038539d4-e9c2-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683125503;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=/X/3AygOT6pqvdCJTuddwvw1xXUXT6MPGOspvFfgm/0=;
  b=R2egYkHScXUaATCsrp/V/iUMBKHVLmtFNhi/C9M4a6fzwka+piS96tkR
   YmOdwckbwpuGee+2nlyynGs3wt1cNsINLLyPb4xz8NIs9CMuhrCGXXpRc
   jYnh9f3iODGHKFNa+cZBMYEdOUa+5emYDZ3AaNMN6Ra3MSWhZN0WGsi4r
   I=;
X-IronPort-RemoteIP: 104.47.57.176
X-IronPort-MID: 107055755
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:Wl1eBaAKzHakcBVW/xriw5YqxClBgxIJ4kV8jS/XYbTApD500mYDx
 mAfW2yDbviPZGDzf40nO4jg8U9XscfdyYNrQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G5A5QRkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw98cpPTtX2
 sIjBTkkYSLElbKcn6qnVbw57igjBJGD0II3nFhFlGicIdN4BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTI+uxuvDa7IA9ZidABNPL8fNCQSNoTtUGfv
 m/cpEzyAw0ANczZwj2Amp6prraXwnumBd1PSdVU8NZggUShxT0PSyEtTH2WnNW5iEK/W/12f
 hl8Fi0G6PJaGFaQZtvyRRqju1afowURHdFXFoUS+AyLj6bZ/QudLmwFVSJaLswrstcsQj4n3
 UPPmMnmbRRwtJWFRHTb8a2bxRuwJCwUIGkqdSICCwwf7LHeTJobixvOSpNvFfCzh9isQDXom
 WnV8m45mqkZitMN2+Oj51fbjjmwp5/PCAko+gHQWWHj5QR8DGK4W7GVBZHgxa4oBO6kopOp5
 RDoR+D2ADgyMKyw
IronPort-HdrOrdr: A9a23:yQ67tq13+mLSTsw58XkSnQqjBEQkLtp133Aq2lEZdPU0SKGlfg
 6V/MjztCWE7gr5PUtLpTnuAsa9qB/nm6KdpLNhX4tKPzOW31dATrsSjrcKqgeIc0HDH6xmpM
 JdmsBFY+EYZmIK6foSjjPYLz4hquP3j5xBh43lvglQpdcBUdAQ0+97YDzrYnGfXGN9dOME/A
 L33Ls7m9KnE05nFviTNz0+cMXogcbEr57iaQ5uPW9a1OHf5QnYk4ITCnKjr20jbw8=
X-Talos-CUID: =?us-ascii?q?9a23=3ASE7unGiihWlDWsSIKalMP5EZLjJuf1yD8C3JDHG?=
 =?us-ascii?q?DJG9AR4/SWw+6p7t4jJ87?=
X-Talos-MUID: 9a23:Plz/oATG81mYEk8MRXTmhShQN5Z0oJiVEWYgrLkGucWFNQNZbmI=
X-IronPort-AV: E=Sophos;i="5.99,247,1677560400"; 
   d="scan'208";a="107055755"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CuWJIykQPjZ8IcNIvublUGnV4QS3JBJGPly/w8U+il/505S9OqR7UAniAbazny02AFVJlpiENX8/XOSNPulNVzaf7oSQEI+qjZ79QLnv3yi5hgzGDPPb3T9xoMAi5WpYclpS3paG/fud7nt55mF/X9+Y2Ei4ByDdxxPY2n3yutcZ8ljW3StYuJ1c7Ba8zn3M0d1vMHZUzvG/du5uQ/nUDYTFhNll5Htekio7lMk+6BLDmL+0rvI1X3L3FcwvmlI/rkDhyQ0TNzA0EACw8RnooaJloy5W/ybKeE99paDC5LEIDKBQqeuUeo2GQ3yKjo4nqtFIFA0hOOKrtrLgMK7kGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fK1HdlSFgUeNp73i8nRGTp0o0QDbjonIrKf9mR7N5pA=;
 b=BgePcSAcH393hYeIhFJ7I4kykyve6QhPn4Yn79hKkAv8ABHcsFPFrNn6VK0JacSj79j3220w5hIjOHBNxl+b3UnP3ftyl6akwBmBIVSMO0zQbNnthk97PAI03cCGmdK3QrVnF1e/TGWBZoLZdF77AKbUX6NzDQjG37ITfeMqQSHmhCZF246Y8VqiPpyYndAh/4J1eOq8R7TgohQ5ag+Gj98qVw5ljVHv2py/XVDBhUC9d+KnordIqqjOpWC1raj7ZWXtLw28IJE1lvTawewzF0SGW2hErVZcPvDfPDaEtvjw8gmvg0DOb4p292iUQtOMq+zg67Fy7/XbOhGQeevEaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fK1HdlSFgUeNp73i8nRGTp0o0QDbjonIrKf9mR7N5pA=;
 b=iw4UNMSbnLd15FjksGdVGppjkqfDb3HPXh9OC2AEB3hEpDpPwWMR9rKvI5pXX+dse7fX7tFeH12zCvWfoH5d+uXCCsaGLoKNNlEj10U91fWyWIImyjP6PdIaA3zkrgKytRCQyEyE6d7SmWR//Fj1d1XQps0vZA/+XD5qHcbBXk0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <f790c02a-6a04-b126-ec74-7af9ce708b52@citrix.com>
Date: Wed, 3 May 2023 15:51:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2 05/13] tools/xenstore: make some write limit functions
 static
Content-Language: en-GB
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-6-jgross@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230330085011.9170-6-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P302CA0005.GBRP302.PROD.OUTLOOK.COM
 (2603:10a6:600:2c2::13) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM4PR03MB6911:EE_
X-MS-Office365-Filtering-Correlation-Id: f1a1ee10-0972-4fca-6117-08db4be5e155
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ZG7jdHrcX9EPN8D4IvDEoV8MlBtnERv2gKHtPPQIgM73SnfsC73/76dQw0rI2B3xkLhpKcxbg9BxK1ionl6qjMOgpP8DZNux0m5//QF7n616HItA9WpvWhM50LX2raPnkpn3Xi0uIiASouUdB+eieY3zoNnRTDVwvx6p92tpafJM5U6EIj6pun6GcuINeyS4HvRVMmt05cpMleQ5p475GXFNkgyElTGp3FxRDj4BQULfpiVBnSqD3T7d/JpMQH4QKjUoIduX3vOigX8nclHs932t1NxRUzAfNZHs/xsz3ha4/WcefoufmAVQnp1S9uuXP5YvWYjflADGbxJguGAw7GTDPiKBL7czwF5eMP+Tcsm9D8RwACwQdH9uUSmZPDlWYqWme9h6O2XCs/VKkp1WEKmsplfh+vMKeBAy1Ku9Uvowf0pGKQyfCdodm4vvQ/akZONCbwhJxQVOymV9d0IMwEoEB9+i6QMoWacwoXxTHz+bPj36c7VDaT/hwf6ttTDM/0bgwtgpYa91ZYI4xFmp/8RavXTW6hexpUCn/XwnP9JLn8C3sDCc+mu8N/7rDpULCO8Ou1oT/TlfYmXXbU5+5dOcO59Yy2+tBvszUuw7a56LLA69j5hvmbRAyvNzEaSTqIqO5BAsg9hcWHln0gqLlQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(346002)(376002)(396003)(366004)(451199021)(5660300002)(2616005)(86362001)(31696002)(6506007)(107886003)(186003)(26005)(6512007)(82960400001)(53546011)(38100700002)(8676002)(8936002)(54906003)(6486002)(36756003)(41300700001)(4326008)(316002)(6666004)(478600001)(31686004)(66556008)(66946007)(66476007)(2906002)(4744005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?R0luREsrWlFPOHFTMnBLNzB3cS9GL3hDRk5QVkh3L04rQjJhWnN3eWl1d3gv?=
 =?utf-8?B?OVUxY25RbThlZnJUT2ozcXBnb25Qbk05NlZ5blpFR2VncmZnMHlSRGUyMXdT?=
 =?utf-8?B?eEhzbzd5bElSeXFkcFo2ZXZRS3RFQzA2UjVxdk5zTkI3STQ5c2dPR1dvUWhM?=
 =?utf-8?B?R1FkZmZJVVJvZE5QaVVrS0t4QlNZby9NZDFhY0FGbVlpZFRTQ2VRdklueVNO?=
 =?utf-8?B?R095bmEvWWpUeGYrN3hnWHVPOUN1dWcxbmRpYys3VHhPSVd3QXpobXQ4Vy9s?=
 =?utf-8?B?VitnVWFxY0QxZlc3ZGhqOUdxdTVRTlRsemoraDlVb1FaY0U1OW5Hd0VDbXJ3?=
 =?utf-8?B?Y0xRY2FSRmxiczRNMmdyUGZXS3NrOHpVSnJBNVF6WlZwalFiWmNDbnM2V1RM?=
 =?utf-8?B?QStocUxMUEhDaXBOK015WXY1QlRvcnpiSEZ3dUlwNEVSY1QyZDd4VVUwNCsz?=
 =?utf-8?B?M2RUL3FkVW5PWWZleUkxMGU5QUZpMW5IMldhYjE0YlNMdmJNNTV2bXg0dXBr?=
 =?utf-8?B?RlZmV0pYS3lSRlhRMktsN3JXWC9KMDBWY09UOUlXU0p2MFZpT01kN2NSV1Av?=
 =?utf-8?B?dHlhMXc1QmFqaENNQ3E2Z2hBeTYxQTJDVjJJTWZlWm0zSzhXUVNYeWZoVysz?=
 =?utf-8?B?UjJ5M1J1NTJyUFRJUS9rWjh2Q2J3WHZaV0N6SEJ5ZktnMHI3V2hBM0pUOUE4?=
 =?utf-8?B?OWFxamQyWkdyOFZrdkd3RkNDeW5CTGtKR2RuR2RGTUlqU2ZSQmFHckxxbWVS?=
 =?utf-8?B?cDdzeThOVXpJZDMrcWROeE4rSEZwQmd5QlpnMTZRbjladjNCTk8zMUFFcStE?=
 =?utf-8?B?b1dCUjRpRmRlWld1OXlBcHBNRjFaL2ZXQldia1BtanVrYVZFaDBSQ1ZiOEE1?=
 =?utf-8?B?TWtGVTY4ek9zNHhjeDYvWi9aTkVmRXRWbzdldnBXMlB2enBUWE9qUTVKM0p6?=
 =?utf-8?B?KzQxSzVySzg3eitEblpXWHBjVWtmMy83N3ZZWjJrMWkxRUZMT2NGZkZUcVVQ?=
 =?utf-8?B?elN1QUF5MXM3R013TTNHenFHRFZ5OTlmQ3ZIOVVWdnNUZm4zNWc3eE96Qmls?=
 =?utf-8?B?WFJTZERvSDBOQ1UwZldWaEl6RjVWaWFXcnpnc1djTUg5UHk3YnZyOFZMMmxW?=
 =?utf-8?B?NllRSnBhdGIxRGRzQjZGUSt6UldpU0p4Tk9zWEdtemFqNUdSUWszUDlSUTQ2?=
 =?utf-8?B?WnFYdDRZL1lGbGk1ZVpzWmFnS3ZTS1lTcnJPbE1OSUVCL1ovNEV6Yk5hbFNE?=
 =?utf-8?B?dUhWcEt2c1hlZkYvUzg2RmsvQi93RnRJUlJYV2N1NXdraW94eVNNM0R6Y2Ri?=
 =?utf-8?B?VHB3NmxkV3AwdVdzK2tHZWVZa0xCWkRQSFpxaFcvVmc2SXl4SU1icVUwdjI5?=
 =?utf-8?B?QXNaSkJYKzE2cG9rNDlCdTdGSWVOT01DaGpNK2NpNSs2Z3h1ODZJbERtdXlm?=
 =?utf-8?B?LzhpQVZBSGJpMjF0akpYTVBTY3FkbUFnWXpQVEZEaEo4QU5KczNvL0FjSVAv?=
 =?utf-8?B?N2pEYWkyZlFyaGlPaDVONkF4THZreHdwaTFnWnNNaU5iRWRVdTlvR2RleGxZ?=
 =?utf-8?B?VGxKZk1oSi94dXFCd0U3TFVPN1U5WnJ4STNkUGNwVW9CZEFlcjVlYzhhc0hE?=
 =?utf-8?B?WG81cHlaTXNwa1VSTnIzNitoc1dsSlVnNG50VkpybmZOSTg1bzJwSkNLcVNh?=
 =?utf-8?B?WTRiT0N2U2VVbzBZbDlmRkRDbC9QbzY3eXBXS0RrdnJnU1MzZmRmWFNTUmZo?=
 =?utf-8?B?bmZkeTNqNVllQklUNFlNY2VDOWFpNHhjb2JsVVNVZENmTmlQQU5RTEhTNU9h?=
 =?utf-8?B?YlppV2dDNFAzNHhlRFVaYzVvVXVrYWd0czMvTHllaWh0Um5sc0ZiMlNrL0ZU?=
 =?utf-8?B?cGxMR3EyeFQ4Z0RwVSsxWEJFanI3eTI2VHhoQzVaczEzY2VkMUtsTEhIT2RF?=
 =?utf-8?B?NFB0QlZYMVg4U0Y0bTFvZVJIZEoyTjdZbGZFR1oyN0ZTYjRmeVFoZURSV09X?=
 =?utf-8?B?blNBc1VwcXA3bEg2Mk1nNlJlSlZKenczb1g5RHFnS2VHcVAzSmtZaVJqTDhV?=
 =?utf-8?B?eTFUbitaZktERytpNk1aRnRqZ3JvNjNUbUovN2NsUHBXNzJ2em91dEVhQklo?=
 =?utf-8?B?WEc5MHlqMmZ3Yjc3NWZrN3RyU3N2anVEbmxxcGxRdjVQZklyYnNJZDhCV2Qx?=
 =?utf-8?B?cHc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	2TO8NoFPeYkmIJmzyyjrDALsH2pbEteKu5AZfWJwqzDhijoG70mLIphbeswOgik/BKrwYC5pnYw/cgJbILCKGU9ZOgYitNAgYyFOhetai5HUa1+gYiG2l78BqJevNSud1anMHYFYsD6mt6CSzICZQGodAYmPTkr6Xne9GWqn8FQ0nvOgQhrxX9Ps//Xbfuk+EnrDc5uHKXtyoqyL6DJDjQUIAIeuzakuvIOtqGtwBVO1YXbHlnSVyj4cvFUkm0Z4E8bEp3VU8YcA7T6/4XuTgow3fl4Y4DzwYMXgmZcRE2KH/jIpiJ16OhgEFCTe9i081HFP4i8PS0teLwSX7NONKqV73eHcgBhAhWAK/7MYqUDJEnYJ1ISvN6gFVitwnpwtArAWUC9yl86UcXEN1uudM73KIKAnBNCQAW34MEUBA8jekTTSw/FEVZUm6NWF9AyeqKZefNADuXMl1guRljhN9fnO8WjiqEvHcIhwOMGUxT5ZiQ3X/3EFNEIvNI6AJFb7xhtIWoE+TDUJt6pl4Y7jjtIF97+EW8QatQ74pwspxGPu0uiYsoCF1EVH40f1C2jlzOZ5GAyik4CG1PlK2JU8vPnBR/S+/TwAEAQVuU7lUN+Xnpc6hjTSnDTZHjNk/qNlEeAA2XVJiRJCT2OPWHZpDFjlnJMZBC9sWx3GIVk0pT7pILTG/lSfWK7ULLVlBZK2tBuQ3Eo92fxCkgaHXOzuuaATrcz0AKAAzlAzANFgu0TKLsDMEr7FCDL2ZpBqNE3TSXeOwWECb2pACiuyiDRRcYh5MeAZn51KSD96bcsBMOJ55ldRcENx4ahVPtZyVk0A
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f1a1ee10-0972-4fca-6117-08db4be5e155
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 14:51:30.6006
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JH2o1c7mmSgtRswSWXKK0fxFmUU2JUV7Ssf6s18Ubz5o3TbwqMYVf72H+AlMR4Dz+HxTYNG4ceLFs9hx/NukDjqa/KACg0lMN2rIP3BhKds=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6911

On 30/03/2023 9:50 am, Juergen Gross wrote:
> +static void wrl_xfer_credit(wrl_creditt *debit,  wrl_creditt debit_floor,
> +			    wrl_creditt *credit, wrl_creditt credit_ceil)
> +	/*
> +	 * Transfers zero or more credit from "debit" to "credit".
> +	 * Transfers as much as possible while maintaining
> +	 * debit >= debit_floor and credit <= credit_ceil.
> +	 * (If that's violated already, does nothing.)
> +	 *
> +	 * Sufficient conditions to avoid overflow, either of:
> +	 *  |every argument| <= 0x3fffffff
> +	 *  |every argument| <= 1E9
> +	 *  |every argument| <= WRL_CREDIT_MAX
> +	 * (And this condition is preserved.)
> +	 */
> +{
> +	wrl_creditt xfer = MIN( *debit      - debit_floor,
> +			        credit_ceil - *credit      );

MIN() evaluates its parameters multiple times.  I believe the only legal
way for the compiler to emit this code is to interleave double reads.

As with pretty much any C code, you want to read the pointers into
locals first, then operate on them, then write them out at the end.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 03 14:54:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 14:54:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529281.823507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDs2-0007DU-R5; Wed, 03 May 2023 14:54:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529281.823507; Wed, 03 May 2023 14:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDs2-0007DN-OF; Wed, 03 May 2023 14:54:18 +0000
Received: by outflank-mailman (input) for mailman id 529281;
 Wed, 03 May 2023 14:54:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puDs1-0007DH-Ew
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 14:54:17 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5f75e5cb-e9c2-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 16:54:15 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 17EEC2291F;
 Wed,  3 May 2023 14:54:15 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id D93C01331F;
 Wed,  3 May 2023 14:54:14 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id +SafM5Z1UmQTUwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 14:54:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f75e5cb-e9c2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683125655; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=4lwZKYFnvfyzxvRQhR5IxeQ9hEUssK/wK2hS9nUgYfE=;
	b=SaExl2Xq7Kl3oT54ki1tokQuO5gTsqAKDuV6ZP5A1ZT1y30gRZHpn4xyol0w22NZy1qk2u
	H+PBiXZzBzBKCQgcQfasRsiU8k1ZuflH+Pcat3kJHHrpzomD87LjLVglg1jIfFDi2Bt/Ab
	tjYtk7pjmhrg/gzenaczb6LQF/VheuA=
Message-ID: <65f15210-2a7a-4629-1ee0-628fb0ccb8b4@suse.com>
Date: Wed, 3 May 2023 16:54:14 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 05/13] tools/xenstore: make some write limit functions
 static
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-6-jgross@suse.com>
 <f790c02a-6a04-b126-ec74-7af9ce708b52@citrix.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <f790c02a-6a04-b126-ec74-7af9ce708b52@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------iUv8k4vJcnYY1mZGZ8drVHpb"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------iUv8k4vJcnYY1mZGZ8drVHpb
Content-Type: multipart/mixed; boundary="------------uur77b6610Ll2FVVs7I0Gkdo";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <65f15210-2a7a-4629-1ee0-628fb0ccb8b4@suse.com>
Subject: Re: [PATCH v2 05/13] tools/xenstore: make some write limit functions
 static
References: <20230330085011.9170-1-jgross@suse.com>
 <20230330085011.9170-6-jgross@suse.com>
 <f790c02a-6a04-b126-ec74-7af9ce708b52@citrix.com>
In-Reply-To: <f790c02a-6a04-b126-ec74-7af9ce708b52@citrix.com>

--------------uur77b6610Ll2FVVs7I0Gkdo
Content-Type: multipart/mixed; boundary="------------02RPCUk7hmNjeWEFo5EoNtf6"

--------------02RPCUk7hmNjeWEFo5EoNtf6
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDMuMDUuMjMgMTY6NTEsIEFuZHJldyBDb29wZXIgd3JvdGU6DQo+IE9uIDMwLzAzLzIw
MjMgOTo1MCBhbSwgSnVlcmdlbiBHcm9zcyB3cm90ZToNCj4+ICtzdGF0aWMgdm9pZCB3cmxf
eGZlcl9jcmVkaXQod3JsX2NyZWRpdHQgKmRlYml0LCAgd3JsX2NyZWRpdHQgZGViaXRfZmxv
b3IsDQo+PiArCQkJICAgIHdybF9jcmVkaXR0ICpjcmVkaXQsIHdybF9jcmVkaXR0IGNyZWRp
dF9jZWlsKQ0KPj4gKwkvKg0KPj4gKwkgKiBUcmFuc2ZlcnMgemVybyBvciBtb3JlIGNyZWRp
dCBmcm9tICJkZWJpdCIgdG8gImNyZWRpdCIuDQo+PiArCSAqIFRyYW5zZmVycyBhcyBtdWNo
IGFzIHBvc3NpYmxlIHdoaWxlIG1haW50YWluaW5nDQo+PiArCSAqIGRlYml0ID49IGRlYml0
X2Zsb29yIGFuZCBjcmVkaXQgPD0gY3JlZGl0X2NlaWwuDQo+PiArCSAqIChJZiB0aGF0J3Mg
dmlvbGF0ZWQgYWxyZWFkeSwgZG9lcyBub3RoaW5nLikNCj4+ICsJICoNCj4+ICsJICogU3Vm
ZmljaWVudCBjb25kaXRpb25zIHRvIGF2b2lkIG92ZXJmbG93LCBlaXRoZXIgb2Y6DQo+PiAr
CSAqICB8ZXZlcnkgYXJndW1lbnR8IDw9IDB4M2ZmZmZmZmYNCj4+ICsJICogIHxldmVyeSBh
cmd1bWVudHwgPD0gMUU5DQo+PiArCSAqICB8ZXZlcnkgYXJndW1lbnR8IDw9IFdSTF9DUkVE
SVRfTUFYDQo+PiArCSAqIChBbmQgdGhpcyBjb25kaXRpb24gaXMgcHJlc2VydmVkLikNCj4+
ICsJICovDQo+PiArew0KPj4gKwl3cmxfY3JlZGl0dCB4ZmVyID0gTUlOKCAqZGViaXQgICAg
ICAtIGRlYml0X2Zsb29yLA0KPj4gKwkJCSAgICAgICAgY3JlZGl0X2NlaWwgLSAqY3JlZGl0
ICAgICAgKTsNCj4gDQo+IE1JTigpIGV2YWx1YXRlcyBpdHMgcGFyYW1ldGVycyBtdWx0aXBs
ZSB0aW1lcy7CoCBJIGJlbGlldmUgdGhlIG9ubHkgbGVnYWwNCj4gd2F5IGZvciB0aGUgY29t
cGlsZXIgdG8gZW1pdCB0aGlzIGNvZGUgaXMgdG8gaW50ZXJsZWF2ZSBkb3VibGUgcmVhZHMu
DQo+IA0KPiBBcyB3aXRoIHByZXR0eSBtdWNoIGFueSBDIGNvZGUsIHlvdSB3YW50IHRvIHJl
YWQgdGhlIHBvaW50ZXJzIGludG8NCj4gbG9jYWxzIGZpcnN0LCB0aGVuIG9wZXJhdGUgb24g
dGhlbSwgdGhlbiB3cml0ZSB0aGVtIG91dCBhdCB0aGUgZW5kLg0KDQp4ZW5zdG9yZWQgaXMg
c2luZ2xlLXRocmVhZGVkLiBTbyBubyBuZWVkIHRvIHdvcnJ5IGhlcmUuDQoNCg0KSnVlcmdl
bg0KDQo=
--------------02RPCUk7hmNjeWEFo5EoNtf6
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------02RPCUk7hmNjeWEFo5EoNtf6--

--------------uur77b6610Ll2FVVs7I0Gkdo--

--------------iUv8k4vJcnYY1mZGZ8drVHpb
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRSdZYFAwAAAAAACgkQsN6d1ii/Ey9q
Mwf+NEeKKQ7yqLna3vKVq3tKT5dHf7OXthyBNL8pa2Klur8q5FcZdWdBZjFh0IaJqiVFiUNaGeX8
TruPzEuHOcHja6FLiJyzADpLtUqebzs1/isVBVdykLSqF5vf/o2jSF/zOkK608hfOgNkA6qvX964
tAdyZ2vFySWs9EhLZsU/TGwmNGo/asR2DYnggUcleLtEPs7bXesJoJh59ZC75YUDU3D6duQ7FZtu
2xnOIT5ZTsUVRLny6XSV8y+wgaXCKIs/shTRGNfnaoHsMt71YlOaL3gvYn9jEzC4u5N3IzpvZALw
SgGd3GYym+AMEQmKuctrW6cnDZj502B78lep5zmwgQ==
=3xuG
-----END PGP SIGNATURE-----

--------------iUv8k4vJcnYY1mZGZ8drVHpb--


From xen-devel-bounces@lists.xenproject.org Wed May 03 15:02:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:02:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529285.823517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDze-0000NA-QI; Wed, 03 May 2023 15:02:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529285.823517; Wed, 03 May 2023 15:02:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puDze-0000N3-M3; Wed, 03 May 2023 15:02:10 +0000
Received: by outflank-mailman (input) for mailman id 529285;
 Wed, 03 May 2023 15:02:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YH74=AY=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1puDzd-0000Mx-II
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:02:09 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7849e45f-e9c3-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 17:02:06 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz43F1sivZ
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 3 May 2023 17:01:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7849e45f-e9c3-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1683126114; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=an3Si/DZaCH9pneAM2ENpePOMfOaZa3AqjW2ZOatMWSZjVpyNM67SFvneQ2TG1R6ns
    a5/y296PLUGefDtOel9H0Bq5qpvuUtrYPl45LiYkjVwoqxAPGDDMU9oLbiqiadRX9WBN
    a45gNROs6dbRgLUxkpXqEHd2G3El65Ye1VtPO1rA2rf2JmhT6zLsa5Q/5vUqtMYLnI1J
    NR4rSCzfSLzIRE/JRglCKKYZhBH8+PBk63G2Zwl+hiKsZLDgE7OracoM25zD9jw0vqu4
    GPSfrIt9CBsjPX3r1lYpcLpGbVdPXLuEyFLl9mxweE4KqcK+4wA7BFPyFOx1GbmX3Hdm
    uXZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683126114;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=n+NGG3DQPZc+9Rhhl9b5pYee5RxxjvFCd1OK5V3cABk=;
    b=tYOI7l4mnIlwWPP2R9bd0LxGM5EIKKxU/38cbULkB/wbY6q8PIA/d/GYi2Tmk+75Wn
    krowxG79KasyoaWG59yNEAxTuCH1HCC93uscrw8MgsM7pynAleOg8cnBDevFKERLG8V9
    Ss4dPh8X8eju60Q5+wQY82FumUikBXi8Gk/vTftt0YBRhngzc47QKr6o07YRwo5fRTeE
    43CaqutIvvFafc9K7rdMu98Lo7yJr0MtR/MUVMe8TxpIKzHAGnuo6GgnmgeDiVlzBi22
    4bmxBBb0hmyPmYcwM/F4chhxZCBfuxwynQlKo7rU7SycmvD/DoUkxW6NQPOzibAdO5Pd
    L9jA==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683126114;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=n+NGG3DQPZc+9Rhhl9b5pYee5RxxjvFCd1OK5V3cABk=;
    b=TwqpkV9ug89fcyLYQDObqODL/gjAl5sWmoKuMUOfMOgoeZcolvstn32mO+JzHJ9y7F
    emDjSSgB63OHF6aLZAhY8zOEX149SnVPN8t1w7Q+Ji+ztysf61onHr7DMloLguAZAnSp
    8V0KWR0TfPbnDTWQwBG9W0/0oe1o3dow3KeFb+Lkuxz8IXaRLYbWgU3TjP5GeD8afUNS
    C2etHiuduuDJE8XHRPIM7EBQDnFGvPLYP34J+qkhNOA5YJuKiS+gYgUBVTSQkPzpvLfl
    x37KOnrQbT0KPUYw8shGFUJjZG/ygx3JPgD0Een/FZLKRyOWF226YaBiUebgT9896CJM
    +u+A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683126114;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=n+NGG3DQPZc+9Rhhl9b5pYee5RxxjvFCd1OK5V3cABk=;
    b=wC8s4YqIb8NF+7s79yfRMixs7jT1xwEizl+opuOr9Hzp8EA4X61a0D70gEtzXL5m2F
    R5CDEjEVQEyQ8TTYBRCQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4wqVv7FZ8tH5EUSbMVU80kUr7f4QlYaI60OjHt/Q=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v1] tools: convert bitfields to unsigned type
Date: Wed,  3 May 2023 15:01:42 +0000
Message-Id: <20230503150142.4987-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

clang complains about the signed type:

implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Wsingle-bit-bitfield-constant-conversion]

The potential ABI change in libxenvchan is covered by the Xen-version-based SONAME.

The xenalyze change follows the existing pattern in that file.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/include/libxenvchan.h | 6 +++---
 tools/xentrace/xenalyze.c   | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/include/libxenvchan.h b/tools/include/libxenvchan.h
index 30cc73cf97..3d3b8aa8dd 100644
--- a/tools/include/libxenvchan.h
+++ b/tools/include/libxenvchan.h
@@ -79,11 +79,11 @@ struct libxenvchan {
 	xenevtchn_handle *event;
 	uint32_t event_port;
 	/* informative flags: are we acting as server? */
-	int is_server:1;
+	unsigned int is_server:1;
 	/* true if server remains active when client closes (allows reconnection) */
-	int server_persist:1;
+	unsigned int server_persist:1;
 	/* true if operations should block instead of returning 0 */
-	int blocking:1;
+	unsigned int blocking:1;
 	/* communication rings */
 	struct libxenvchan_ring read, write;
 	/**
diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index 12dcca9646..1b4a188aaa 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -1377,7 +1377,7 @@ struct hvm_data {
     tsc_t exit_tsc, arc_cycles, entry_tsc;
     unsigned long long rip;
     unsigned exit_reason, event_handler;
-    int short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
+    unsigned short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
 
     /* Immediate processing */
     void *d;


From xen-devel-bounces@lists.xenproject.org Wed May 03 15:14:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:14:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529291.823526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEBO-0001t3-Rp; Wed, 03 May 2023 15:14:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529291.823526; Wed, 03 May 2023 15:14:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEBO-0001sw-P8; Wed, 03 May 2023 15:14:18 +0000
Received: by outflank-mailman (input) for mailman id 529291;
 Wed, 03 May 2023 15:14:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puEBN-0001sm-8P; Wed, 03 May 2023 15:14:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puEBM-0005nh-Vg; Wed, 03 May 2023 15:14:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puEBM-0003vb-L3; Wed, 03 May 2023 15:14:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puEBM-00017q-Kd; Wed, 03 May 2023 15:14:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CkX0CWQN5dGu+QHwRrHu9ztnqAXvkdTUDXKsdzfAlnI=; b=Tv2jSXdjf/SXsK4amjgx/cjkad
	bAu2qs85koQKsyShR17BO6Y5xCNz9pedga6/Tmqo43U7OMLJZMVAQ239D2EHb/WBhwIXW5oOKIOE9
	3rh8HBdexM4siePQxBRgaQ765Dz9mzxNR6gdu8+H+87ioihEhJjmnK4KOrALB/1jiGXI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180511-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180511: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-livepatch:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b033eddc9779109c06a26936321d27a2ef4e088b
X-Osstest-Versions-That:
    xen=b033eddc9779109c06a26936321d27a2ef4e088b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 May 2023 15:14:16 +0000

flight 180511 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180511/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-livepatch     7 xen-install      fail in 180506 pass in 180511
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180506

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180506
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180506
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180506
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180506
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180506
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180506
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180506
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180506
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180506
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180506
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180506
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180506
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  b033eddc9779109c06a26936321d27a2ef4e088b
baseline version:
 xen                  b033eddc9779109c06a26936321d27a2ef4e088b

Last test of basis   180511  2023-05-03 01:53:22 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:14:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:14:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529289.823537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEBt-0002J4-51; Wed, 03 May 2023 15:14:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529289.823537; Wed, 03 May 2023 15:14:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEBt-0002Ix-2C; Wed, 03 May 2023 15:14:49 +0000
Received: by outflank-mailman (input) for mailman id 529289;
 Wed, 03 May 2023 15:11:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uDRO=AY=linaro.org=dan.carpenter@srs-se1.protection.inumbo.net>)
 id 1puE8t-0001rK-Ob
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:11:43 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cf90cd04-e9c4-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 17:11:42 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-3f192c23fffso33825995e9.3
 for <xen-devel@lists.xenproject.org>; Wed, 03 May 2023 08:11:42 -0700 (PDT)
Received: from localhost ([102.36.222.112]) by smtp.gmail.com with ESMTPSA id
 d15-20020adfe84f000000b002fb60c7995esm34427038wrn.8.2023.05.03.08.11.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 03 May 2023 08:11:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf90cd04-e9c4-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1683126702; x=1685718702;
        h=content-disposition:mime-version:message-id:subject:cc:to:from:date
         :from:to:cc:subject:date:message-id:reply-to;
        bh=omxGsHaKkcGC+xqSddM+y0Rv4lJTnT54/CknYbjXVhY=;
        b=YfDzGylBhlR3CVxpMMI4acCmCfqTfHfM5iBfgOCW6OyIloOXcvH8dEmNrR80OObTjK
         IwiUIcA3EAalQUC1fcg6MNtj8x4BJYd3sJPyd1U6v+NcbtOe5wHOosZpqzvgGp8d60s/
         GEuDIhroTKLKU3RLl3Q1FYflOwqkteq5501Hv7jKKV+O+4lgokZlz6LlAf+t4oJvNHp0
         ZWyt5MEIYS7GLmms+q9eXjK+KzyG/JMYwSlML9xXTrZnmx3fQyKd5U046ZCENMYczcQL
         jcsxZ8mDjvZC0tP1ENbJbTfiTKJKjO/AZbTZ0Prl4nDNDNwD+/yjN0s7b2KBvKuTLtdC
         JPDw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683126702; x=1685718702;
        h=content-disposition:mime-version:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=omxGsHaKkcGC+xqSddM+y0Rv4lJTnT54/CknYbjXVhY=;
        b=RmCb0C9AyrlrhUXPumuOMPebQTYtdtXm62T9O1nJYIH2SkJxLep/u7zTMj5A5jDJlC
         6+oRe7WVe7JYp+r37ZTzFSRSNOZZk2giURZ2kOQTJ4p2honMbO2au+Jz5jNXN6U5Xc8Y
         IpYNFndVbQPy1DsugbWHsYPO4ThMqAjj4LvnC/iFXcmUBLvpNq0QeEZJWNn19bF1Vt9I
         iTEEo/yog2kjOKTBPmDhAduEqI9DAcnT62mCXfju4/EG9J9LmU5aWxxBN+of6mlcKfiM
         WGVQYkNJNDt4lPRp+7vqKeCUk/l1q+GDzQG1gqSCxCJm+3o7p3YoDEFf9B5mbDJfZSSt
         Mheg==
X-Gm-Message-State: AC+VfDy2HybHgahXaThD4ADmZOxYn8j6YW7nyc5g34HbztCHBV+sKcgf
	hhgugVTsbrAn7SeoDAaeUsS2fw==
X-Google-Smtp-Source: ACHHUZ5OZVQBtqAWZgaLWLuVjAfGEiqAlM8i3902hJEurNrW2MxWTzUvvq1ZwT17Z0WElKBzUOcmHw==
X-Received: by 2002:adf:dd4c:0:b0:2f6:8834:5952 with SMTP id u12-20020adfdd4c000000b002f688345952mr361499wrm.8.1683126701992;
        Wed, 03 May 2023 08:11:41 -0700 (PDT)
Date: Wed, 3 May 2023 18:11:35 +0300
From: Dan Carpenter <dan.carpenter@linaro.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Juergen Gross <jgross@suse.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, kernel-janitors@vger.kernel.org
Subject: [PATCH] xen/pvcalls-back: fix double frees with
 pvcalls_new_active_socket()
Message-ID: <e5f98dc2-0305-491f-a860-71bbd1398a2f@kili.mountain>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailer: git-send-email haha only kidding

In the pvcalls_new_active_socket() function, most error paths call
pvcalls_back_release_active(fedata->dev, fedata, map), which calls
sock_release() on "sock".  The bug is that the callers also release
"sock" on failure, resulting in a double free.

Fix this by making every error path in pvcalls_new_active_socket()
release "sock", and by dropping the release from the callers.

Fixes: 5db4d286a8ef ("xen/pvcalls: implement connect command")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
---
 drivers/xen/pvcalls-back.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 1f5219e12cc3..7beaf2c41fbb 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -325,8 +325,10 @@ static struct sock_mapping *pvcalls_new_active_socket(
 	void *page;
 
 	map = kzalloc(sizeof(*map), GFP_KERNEL);
-	if (map == NULL)
+	if (map == NULL) {
+		sock_release(sock);
 		return NULL;
+	}
 
 	map->fedata = fedata;
 	map->sock = sock;
@@ -418,10 +420,8 @@ static int pvcalls_back_connect(struct xenbus_device *dev,
 					req->u.connect.ref,
 					req->u.connect.evtchn,
 					sock);
-	if (!map) {
+	if (!map)
 		ret = -EFAULT;
-		sock_release(sock);
-	}
 
 out:
 	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
@@ -561,7 +561,6 @@ static void __pvcalls_back_accept(struct work_struct *work)
 					sock);
 	if (!map) {
 		ret = -EFAULT;
-		sock_release(sock);
 		goto out_error;
 	}
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:15:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:15:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529298.823547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puECX-0002xY-JF; Wed, 03 May 2023 15:15:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529298.823547; Wed, 03 May 2023 15:15:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puECX-0002xR-GQ; Wed, 03 May 2023 15:15:29 +0000
Received: by outflank-mailman (input) for mailman id 529298;
 Wed, 03 May 2023 15:15:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puECV-0002wu-Lv
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:15:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puECR-0005qG-T8; Wed, 03 May 2023 15:15:23 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puECR-0001Ng-KE; Wed, 03 May 2023 15:15:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=nIC/YWeeZCwRzdVGnmO3ranVa2mAyxX8vWqqRl7z65g=; b=ZoinoO1kru205KIRLGGkkYOr05
	mgcw4CV/vDzgUVEGElf4q9zwXk6Rty5M8N7GijXNoGiHYtriNOkJXMpnMDhfa6PjlOOg5a4t05Wup
	0aTzZikFJTod+pR/ETpK/WZC9uZoO9jEFIfAlc+h1v1PseVv/05G7yCxqdchNV6dsxoQ=;
Message-ID: <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>
Date: Wed, 3 May 2023 16:15:20 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: dom0less vs xenstored setup race Was: xen | Failed pipeline for
 staging | 6a47ba2f
Content-Language: en-US
To: andrew.cooper3@citrix.com, Stefano Stabellini <sstabellini@kernel.org>,
 alejandro.vallejo@cloud.com
Cc: committers@xenproject.org, michal.orzel@amd.com,
 xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Juergen Gross <jgross@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edwin.torok@cloud.com>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
 <c74d231f-75e2-a26d-f2c4-3a135cc1ac10@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <c74d231f-75e2-a26d-f2c4-3a135cc1ac10@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 03/05/2023 15:38, andrew.cooper3@citrix.com wrote:
> Hello,
> 
> After what seems like an unreasonable amount of debugging, we've tracked
> down exactly what is going wrong here.
> 
> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4219721944
> 
> Of note is the smoke.serial log around:
> 
> io: IN 0xffff90fec250 d0 20230503 14:20:42 INTRODUCE (1 233473 1 )
> obj: CREATE connection 0xffff90fff1f0
> *** d1 CONN RESET req_cons 00000000, req_prod 0000003a rsp_cons
> 00000000, rsp_prod 00000000
> io: OUT 0xffff9105cef0 d0 20230503 14:20:42 WATCH_EVENT
> (@introduceDomain domlist )
> 
> XS_INTRODUCE (in C xenstored at least, not checked O yet) always
> clobbers the ring pointers.  The added pressure on dom0 that
> xenconsoled adds with its 4M hypercall bounce buffer occasionally
> defers xenstored long enough that the XS_INTRODUCE clobbers the first
> message that dom1 wrote into the ring.
> 
> The other behaviour seen was xenstored observing a header looking like this:
> 
> *** d1 HDR { ty 0x746e6f63, rqid 0x2f6c6f72, txid 0x74616c70, len
> 0x6d726f66 }
> 
> which was rejected as being too long.  That's "control/platform" in
> ASCII, so the XS_INTRODUCE intersected dom1 between writing the header
> and writing the payload.
> 
> 
> Anyway, it is buggy for XS_INTRODUCE to be called on a live and
> unsuspecting connection.  It is ultimately init-dom0less's fault for
> telling dom1 it's good to go before having waited for XS_INTRODUCE to
> complete.

So the problem is that xenstored will set interface->connection to 
XENSTORE_CONNECTED before finalizing the connection. Can you try the 
following, for now, very hackish patch:

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f62be2245c42..bbf85bbbea3b 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -688,6 +688,7 @@ static struct domain *introduce_domain(const void *ctx,
                 talloc_steal(domain->conn, domain);

                 if (!restore) {
+                       domain_conn_reset(domain);
                         /* Notify the domain that xenstore is available */
                         interface->connection = XENSTORE_CONNECTED;
                         xenevtchn_notify(xce_handle, domain->port);
@@ -730,8 +731,6 @@ int do_introduce(const void *ctx, struct connection *conn,
         if (!domain)
                 return errno;

-       domain_conn_reset(domain);
-
         send_ack(conn, XS_INTRODUCE);

         return 0;

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 15:22:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:22:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529301.823556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEJb-0004WW-9o; Wed, 03 May 2023 15:22:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529301.823556; Wed, 03 May 2023 15:22:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEJb-0004WP-7C; Wed, 03 May 2023 15:22:47 +0000
Received: by outflank-mailman (input) for mailman id 529301;
 Wed, 03 May 2023 15:22:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puEJa-0004WI-1R
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:22:46 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5a2d4f79-e9c6-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 17:22:44 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 0A4B51FEC0;
 Wed,  3 May 2023 15:22:44 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id AE31E13584;
 Wed,  3 May 2023 15:22:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id RmCDKEN8UmRyYwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 03 May 2023 15:22:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a2d4f79-e9c6-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683127364; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ZiJyyFT5NgGLMYEWC/qN3yTxGI2R5QUMP+cQbIcM3+0=;
	b=DudI60UQ37JdVpDHP3DqnQsgwBRBkbiDoD9bVg2ZXwyzdPFYmUm59x0TeBhiEYFQQ/vBfz
	f4USNzO+sh0ccfIvEgOJZUMrzY+Ha7u8SV+Viw+2cQIf2rglczQANONlbPaPQCe4zbf8fm
	DcQq+niJUBrMHLvok92/U/MoQfAfclM=
Message-ID: <998dbbeb-94d7-7afe-e4c2-02ef134cbe94@suse.com>
Date: Wed, 3 May 2023 17:22:43 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: dom0less vs xenstored setup race Was: xen | Failed pipeline for
 staging | 6a47ba2f
To: Julien Grall <julien@xen.org>, andrew.cooper3@citrix.com,
 Stefano Stabellini <sstabellini@kernel.org>, alejandro.vallejo@cloud.com
Cc: committers@xenproject.org, michal.orzel@amd.com,
 xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, =?UTF-8?B?RWR3aW4gVMO2csO2aw==?=
 <edwin.torok@cloud.com>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
 <c74d231f-75e2-a26d-f2c4-3a135cc1ac10@citrix.com>
 <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------LKvmCwmwmvjJzje9EwrTHrka"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------LKvmCwmwmvjJzje9EwrTHrka
Content-Type: multipart/mixed; boundary="------------pCCo8kU23o0UY5ep3g9fRqkn";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, andrew.cooper3@citrix.com,
 Stefano Stabellini <sstabellini@kernel.org>, alejandro.vallejo@cloud.com
Cc: committers@xenproject.org, michal.orzel@amd.com,
 xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, =?UTF-8?B?RWR3aW4gVMO2csO2aw==?=
 <edwin.torok@cloud.com>
Message-ID: <998dbbeb-94d7-7afe-e4c2-02ef134cbe94@suse.com>
Subject: Re: dom0less vs xenstored setup race Was: xen | Failed pipeline for
 staging | 6a47ba2f
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
 <c74d231f-75e2-a26d-f2c4-3a135cc1ac10@citrix.com>
 <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>
In-Reply-To: <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>

--------------pCCo8kU23o0UY5ep3g9fRqkn
Content-Type: multipart/mixed; boundary="------------3Fd0Q0UDsfMiJdb1Utf8YC0a"

--------------3Fd0Q0UDsfMiJdb1Utf8YC0a
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.05.23 17:15, Julien Grall wrote:
> Hi,
> 
> On 03/05/2023 15:38, andrew.cooper3@citrix.com wrote:
>> Hello,
>>
>> After what seems like an unreasonable amount of debugging, we've tracked
>> down exactly what is going wrong here.
>>
>> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4219721944
>>
>> Of note is the smoke.serial log around:
>>
>> io: IN 0xffff90fec250 d0 20230503 14:20:42 INTRODUCE (1 233473 1 )
>> obj: CREATE connection 0xffff90fff1f0
>> *** d1 CONN RESET req_cons 00000000, req_prod 0000003a rsp_cons
>> 00000000, rsp_prod 00000000
>> io: OUT 0xffff9105cef0 d0 20230503 14:20:42 WATCH_EVENT
>> (@introduceDomain domlist )
>>
>> XS_INTRODUCE (in C xenstored at least, not checked O yet) always
>> clobbers the ring pointers.  The added pressure on dom0 that the
>> xensconsoled adds with it's 4M hypercall bounce buffer occasionally
>> defers xenstored long enough that the XS_INTRODUCE clobbers the first
>> message that dom1 wrote into the ring.
>>
>> The other behaviour seen was xenstored observing a header looking like this:
>>
>> *** d1 HDR { ty 0x746e6f63, rqid 0x2f6c6f72, txid 0x74616c70, len
>> 0x6d726f66 }
>>
>> which was rejected as being too long.  That's "control/platform" in
>> ASCII, so the XS_INTRODUCE intersected dom1 between writing the header
>> and writing the payload.
>>
>>
>> Anyway, it is buggy for XS_INTRODUCE to be called on a live an
>> unsuspecting connection.  It is ultimately init-dom0less's fault for
>> telling dom1 it's good to go before having waited for XS_INTRODUCE to
>> complete.
> 
> So the problem is xenstored will set interface->connection to XENSTORE_CONNECTED 
> before finalizing the connection. Caqn you try the following, for now, very 
> hackish patch:
> 
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index f62be2245c42..bbf85bbbea3b 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -688,6 +688,7 @@ static struct domain *introduce_domain(const void *ctx,
>                  talloc_steal(domain->conn, domain);
> 
>                  if (!restore) {
> +                       domain_conn_reset(domain);
>                          /* Notify the domain that xenstore is available */
>                          interface->connection = XENSTORE_CONNECTED;

I think there are barriers missing (especially in order to work on Arm)?

And I think you will break dom0 with calling domain_conn_reset(), as the
kernel might already have written data into the xenbus page. So you might
want to make the call depend on !is_master_domain.


Juergen
--------------3Fd0Q0UDsfMiJdb1Utf8YC0a
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------3Fd0Q0UDsfMiJdb1Utf8YC0a--

--------------pCCo8kU23o0UY5ep3g9fRqkn--

--------------LKvmCwmwmvjJzje9EwrTHrka
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRSfEMFAwAAAAAACgkQsN6d1ii/Ey8G
9Af+NEwenYC1cdXgo/lIM3vfvV5ynElLMzFKBEG6TxJHrMLGsNvYonkVl1O3gf6EsjBKZjZsSBta
AlFhsqPW4o/VLWWMxSR/Buyck3KLyEmsEnaH0lOqLEClOIS9lvYKZnMBrK1wPZzXYywy6VMr6kgO
8H7eVufMtt878b5tSPCwr7NTDa5fBM600MNYn1SAtW65kMHjwjB/4ZErh5lIFrJZRu+OtG8JZri4
qbnNHW416PpW0GDIwdu3AzQ69ZATsT9LuL3frTxyp86EOGONukItDPbjhuZX2dNyeY2Qrzec/HcY
jGUxCVjFhHMDsUS4YKw3Vdq7JEWnxAc/Mvfuq1LQgg==
=aWMQ
-----END PGP SIGNATURE-----

--------------LKvmCwmwmvjJzje9EwrTHrka--


From xen-devel-bounces@lists.xenproject.org Wed May 03 15:31:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:31:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529304.823567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puES8-00061F-44; Wed, 03 May 2023 15:31:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529304.823567; Wed, 03 May 2023 15:31:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puES8-000618-1J; Wed, 03 May 2023 15:31:36 +0000
Received: by outflank-mailman (input) for mailman id 529304;
 Wed, 03 May 2023 15:31:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puES6-000612-OG
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:31:34 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20623.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::623])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 959eed8d-e9c7-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 17:31:33 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DBBPR04MB7498.eurprd04.prod.outlook.com (2603:10a6:10:20b::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 15:31:32 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:31:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 959eed8d-e9c7-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AwXYI9hGwllz7WFkaXJzseWu6gBclgIP+sKNRk7IkdGLOZVCNdDzwKxep5U676xKWzWY93uCCqT+c0VAeRQOvkH3Fb68IQ7eB5QcZ50y0C+ohTqEX6k5wnAuHxPJpzAybnlWjz25OfEelfdyC3RA+wXx3FYmsjnhQfHMby65tccRSR6Y0Fa8sUAN1A7Ll+oQlSjZzXZ2iJgVBQCfrChKY0TmKGplarA7a0EK3j4KtrWAjx/mqA3rd4Q+nxnf1D0RtIrCsRWZOfZanwgbgCJ8QJmxHSQiw3PWPh329VT4YXHsm4Wpq34glx8I494gDB2k7R04KXKKTSoZu96wPvDHVg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=H2xNVauUIZx/Q0E/ZFLgBZEw6IdycaAvbYocEzDzrNA=;
 b=bcX5NVUyfmyi3B8iYPDSKIutil33aYHB9MEs18Fe3H2ktn0x6tVSIYoe69cYtrk4rulDIj35tC3ni+0mg+DXEdGdhsc2uDoB7+/vpcPcoh1ovewLBJorF5M8BI5NGgyq7ZA+kUl5lxqzn5IW0MgaXHxkwH/9i7swK4bShOwpNTWVyQtvWj+LWv3kE/yT6KVMp3hv4gpcUNpoZY/Qsi4JWDUPN9bf/FCUymNN4XKjR8AUwmDhlstyjMa6DdvPFOyzJCW6dDl4dcbOA9avvnSjs0LazwOvogeByyzHAB5Do9Z5p0iv5lVysorEKmQvZES5zRUFKK4g8FLlkFM8pEqBKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H2xNVauUIZx/Q0E/ZFLgBZEw6IdycaAvbYocEzDzrNA=;
 b=MdECWLAR2t6oAuwhKfXNr8DsRx0IfidXB901+DMweeyEFGQG+lRJOhVg4n9cMxgig2eBn5MWL2iB7+qe2afSqwMA0b5tgFqFvRl4ZhmF5aJabMrp45fUE+iJGKc+FISpHW0HJnnGkjzHbav/Qg9L0rZFVwjCzx/xh5JfVFE1kbUCNwFD+uGEhfCMw+2aAWsesLTLQDvyOMBi2y4Njcu1BrCx6gRdEmpaMWVc4dRl5a9pRlfE5iQbBA7tlQEVRJTxHEWqioaGFnW2TV/+74Q9Wn5tBNkhfZ0+Rj3P+htaWUPfECRhDAarKcnTUZ9HG0+nY7/Pm78dfWsyk2WxzhcjQA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
Date: Wed, 3 May 2023 17:31:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] new CONFIG_HAS_PIRQ and extra_guest_irqs adjustment
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0256.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::9) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DBBPR04MB7498:EE_
X-MS-Office365-Filtering-Correlation-Id: 3e6d5a8b-e508-4a54-a9f1-08db4beb790e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com

The 1st patch (new in v2) means that the 2nd one no longer
affects Arm.

1: restrict concept of pIRQ to x86
2: cmdline: document and enforce "extra_guest_irqs" upper bounds

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 15:33:23 2023
Message-ID: <98f51b96-8a1c-7f33-b4d3-1744174df465@suse.com>
Date: Wed, 3 May 2023 17:33:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 1/2] restrict concept of pIRQ to x86
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
References: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
In-Reply-To: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

... by way of a new arch-selectable Kconfig control.

Note that some smaller pieces of code are left without #ifdef, to keep
things more readable. Hence items like ECS_PIRQ, nr_static_irqs, and
domain_pirq_to_irq() remain uniformly available.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I'm not really certain about XEN_DOMCTL_irq_permission: With pIRQ-s not
used, the prior pIRQ -> IRQ translation cannot have succeeded on Arm, so
quite possibly the entire domctl is unused there? Yet then how is access
to particular device IRQs being granted/revoked?
---
v2: New.

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1120,7 +1120,7 @@ introduced with the Nehalem architecture
       intended as an emergency option for people who first chose fast, then
       change their minds to secure, and wish not to reboot.**
 
-### extra_guest_irqs
+### extra_guest_irqs (x86)
 > `= [<domU number>][,<dom0 number>]`
 
 > Default: `32,<variable>`
--- a/xen/arch/arm/include/asm/irq.h
+++ b/xen/arch/arm/include/asm/irq.h
@@ -52,7 +52,6 @@ struct arch_irq_desc {
 
 extern const unsigned int nr_irqs;
 #define nr_static_irqs NR_IRQS
-#define arch_hwdom_irqs(domid) NR_IRQS
 
 struct irq_desc;
 struct irqaction;
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -25,6 +25,7 @@ config X86
 	select HAS_PCI
 	select HAS_PCI_MSI
 	select HAS_PDX
+	select HAS_PIRQ
 	select HAS_SCHED_GRANULARITY
 	select HAS_UBSAN
 	select HAS_VPCI if HVM
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -56,6 +56,9 @@ config HAS_KEXEC
 config HAS_PDX
 	bool
 
+config HAS_PIRQ
+	bool
+
 config HAS_PMAP
 	bool
 
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -350,6 +350,8 @@ static int late_hwdom_init(struct domain
 #endif
 }
 
+#ifdef CONFIG_HAS_PIRQ
+
 static unsigned int __read_mostly extra_hwdom_irqs;
 static unsigned int __read_mostly extra_domU_irqs = 32;
 
@@ -364,6 +366,8 @@ static int __init cf_check parse_extra_g
 }
 custom_param("extra_guest_irqs", parse_extra_guest_irqs);
 
+#endif /* CONFIG_HAS_PIRQ */
+
 /*
  * Release resources held by a domain.  There may or may not be live
  * references to the domain, and it may or may not be fully constructed.
@@ -653,6 +657,7 @@ struct domain *domain_create(domid_t dom
     if ( is_system_domain(d) && !is_idle_domain(d) )
         return d;
 
+#ifdef CONFIG_HAS_PIRQ
     if ( !is_idle_domain(d) )
     {
         if ( !is_hardware_domain(d) )
@@ -664,6 +669,7 @@ struct domain *domain_create(domid_t dom
 
         radix_tree_init(&d->pirq_tree);
     }
+#endif
 
     if ( (err = arch_domain_create(d, config, flags)) != 0 )
         goto fail;
@@ -755,7 +761,9 @@ struct domain *domain_create(domid_t dom
     {
         evtchn_destroy(d);
         evtchn_destroy_final(d);
+#ifdef CONFIG_HAS_PIRQ
         radix_tree_destroy(&d->pirq_tree, free_pirq_struct);
+#endif
     }
     if ( init_status & INIT_watchdog )
         watchdog_domain_destroy(d);
@@ -1151,7 +1159,9 @@ static void cf_check complete_domain_des
 
     evtchn_destroy_final(d);
 
+#ifdef CONFIG_HAS_PIRQ
     radix_tree_destroy(&d->pirq_tree, free_pirq_struct);
+#endif
 
     xfree(d->vcpu);
 
@@ -1864,6 +1874,8 @@ long do_vm_assist(unsigned int cmd, unsi
 }
 #endif
 
+#ifdef CONFIG_HAS_PIRQ
+
 struct pirq *pirq_get_info(struct domain *d, int pirq)
 {
     struct pirq *info = pirq_info(d, pirq);
@@ -1893,6 +1905,8 @@ void cf_check free_pirq_struct(void *ptr
     call_rcu(&pirq->rcu_head, _free_pirq_struct);
 }
 
+#endif /* CONFIG_HAS_PIRQ */
+
 struct migrate_info {
     long (*func)(void *data);
     void *data;
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -683,11 +683,13 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         unsigned int pirq = op->u.irq_permission.pirq, irq;
         int allow = op->u.irq_permission.allow_access;
 
+#ifdef CONFIG_HAS_PIRQ
         if ( pirq >= current->domain->nr_pirqs )
         {
             ret = -EINVAL;
             break;
         }
+#endif
         irq = pirq_access_permitted(current->domain, pirq);
         if ( !irq || xsm_irq_permission(XSM_HOOK, d, irq, allow) )
             ret = -EPERM;
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -555,6 +555,7 @@ static int evtchn_bind_ipi(evtchn_bind_i
     return rc;
 }
 
+#ifdef CONFIG_HAS_PIRQ
 
 static void link_pirq_port(int port, struct evtchn *chn, struct vcpu *v)
 {
@@ -580,9 +581,11 @@ static void unlink_pirq_port(struct evtc
             chn->u.pirq.prev_port;
 }
 
+#endif /* CONFIG_HAS_PIRQ */
 
 static int evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
 {
+#ifdef CONFIG_HAS_PIRQ
     struct evtchn *chn;
     struct domain *d = current->domain;
     struct vcpu   *v = d->vcpu[0];
@@ -639,6 +642,9 @@ static int evtchn_bind_pirq(evtchn_bind_
     write_unlock(&d->event_lock);
 
     return rc;
+#else /* !CONFIG_HAS_PIRQ */
+    return -EOPNOTSUPP;
+#endif
 }
 
 
@@ -671,6 +677,7 @@ int evtchn_close(struct domain *d1, int
     case ECS_UNBOUND:
         break;
 
+#ifdef CONFIG_HAS_PIRQ
     case ECS_PIRQ: {
         struct pirq *pirq = pirq_info(d1, chn1->u.pirq.irq);
 
@@ -680,14 +687,13 @@ int evtchn_close(struct domain *d1, int
                 pirq_guest_unbind(d1, pirq);
             pirq->evtchn = 0;
             pirq_cleanup_check(pirq, d1);
-#ifdef CONFIG_X86
             if ( is_hvm_domain(d1) && domain_pirq_to_irq(d1, pirq->pirq) > 0 )
                 unmap_domain_pirq_emuirq(d1, pirq->pirq);
-#endif
         }
         unlink_pirq_port(chn1, d1->vcpu[chn1->notify_vcpu_id]);
         break;
     }
+#endif
 
     case ECS_VIRQ: {
         struct vcpu *v;
@@ -1097,6 +1103,8 @@ int evtchn_bind_vcpu(evtchn_port_t port,
     case ECS_INTERDOMAIN:
         chn->notify_vcpu_id = v->vcpu_id;
         break;
+
+#ifdef CONFIG_HAS_PIRQ
     case ECS_PIRQ:
         if ( chn->notify_vcpu_id == v->vcpu_id )
             break;
@@ -1106,6 +1114,8 @@ int evtchn_bind_vcpu(evtchn_port_t port,
                           cpumask_of(v->processor));
         link_pirq_port(port, chn, v);
         break;
+#endif
+
     default:
         rc = -EINVAL;
         break;
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -438,12 +438,14 @@ struct domain
 
     struct grant_table *grant_table;
 
+#ifdef CONFIG_HAS_PIRQ
     /*
      * Interrupt to event-channel mappings and other per-guest-pirq data.
      * Protected by the domain's event-channel spinlock.
      */
     struct radix_tree_root pirq_tree;
     unsigned int     nr_pirqs;
+#endif
 
     unsigned int     options;         /* copy of createdomain flags */
 



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:33:58 2023
Message-ID: <b9425d47-dff8-bf1e-5310-afbf834b8366@suse.com>
Date: Wed, 3 May 2023 17:33:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 2/2] cmdline: document and enforce "extra_guest_irqs" upper
 bounds
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
In-Reply-To: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

PHYSDEVOP_pirq_eoi_gmfn_v<N> accepting just a single GFN implies that no
more than 32k pIRQ-s can be used by a domain on x86. Document this upper
bound.

To also enforce the limit, (ab)use both arch_hwdom_irqs() (changing its
parameter type) and setup_system_domains(). This is primarily to avoid
exposing the two static variables or introducing yet further arch hooks.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Instead of passing dom_xen into arch_hwdom_irqs(), NULL could also be
used. That would make the connection to setup_system_domains() yet more
weak, though.

Passing the domain pointer instead of the domain ID would also allow
returning a possibly different value if sensible for PVH Dom0 (which
presently has no access to PHYSDEVOP_pirq_eoi_gmfn_v<N> in the first
place).
---
v2: Also enforce these bounds. Adjust doc to constrain the bound to x86
    only. Re-base over new earlier patch.

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1130,7 +1130,8 @@ common for all domUs, while the optional
 is for dom0.  Changing the setting for domU has no impact on dom0 and vice
 versa.  For example to change dom0 without changing domU, use
 `extra_guest_irqs=,512`.  The default value for Dom0 and an eventual separate
-hardware domain is architecture dependent.
+hardware domain is architecture dependent.  The upper limit for both values on
+x86 is such that the resulting total number of IRQs can't be higher than 32768.
 Note that specifying zero as domU value means zero, while for dom0 it means
 to use the default.
 
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2665,18 +2665,21 @@ void __init ioapic_init(void)
            nr_irqs_gsi, nr_irqs - nr_irqs_gsi);
 }
 
-unsigned int arch_hwdom_irqs(domid_t domid)
+unsigned int arch_hwdom_irqs(const struct domain *d)
 {
     unsigned int n = fls(num_present_cpus());
 
-    if ( !domid )
+    if ( is_system_domain(d) )
+        return PAGE_SIZE * BITS_PER_BYTE;
+
+    if ( !d->domain_id )
         n = min(n, dom0_max_vcpus());
     n = min(nr_irqs_gsi + n * NR_DYNAMIC_VECTORS, nr_irqs);
 
     /* Bounded by the domain pirq eoi bitmap gfn. */
     n = min_t(unsigned int, n, PAGE_SIZE * BITS_PER_BYTE);
 
-    printk("Dom%d has maximum %u PIRQs\n", domid, n);
+    printk("%pd has maximum %u PIRQs\n", d, n);
 
     return n;
 }
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -664,7 +664,7 @@ struct domain *domain_create(domid_t dom
             d->nr_pirqs = nr_static_irqs + extra_domU_irqs;
         else
             d->nr_pirqs = extra_hwdom_irqs ? nr_static_irqs + extra_hwdom_irqs
-                                           : arch_hwdom_irqs(domid);
+                                           : arch_hwdom_irqs(d);
         d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
 
         radix_tree_init(&d->pirq_tree);
@@ -790,6 +790,24 @@ void __init setup_system_domains(void)
     if ( IS_ERR(dom_xen) )
         panic("Failed to create d[XEN]: %ld\n", PTR_ERR(dom_xen));
 
+#ifdef CONFIG_HAS_PIRQ
+    /* Bound-check values passed via "extra_guest_irqs=". */
+    {
+        unsigned int n = max(arch_hwdom_irqs(dom_xen), nr_static_irqs);
+
+        if ( extra_hwdom_irqs > n - nr_static_irqs )
+        {
+            extra_hwdom_irqs = n - nr_static_irqs;
+            printk(XENLOG_WARNING "hwdom IRQs bounded to %u\n", n);
+        }
+        if ( extra_domU_irqs > max(32U, n - nr_static_irqs) )
+        {
+            extra_domU_irqs = n - nr_static_irqs;
+            printk(XENLOG_WARNING "domU IRQs bounded to %u\n", n);
+        }
+    }
+#endif
+
     /*
      * Initialise our DOMID_IO domain.
      * This domain owns I/O pages that are within the range of the page_info
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -173,8 +173,9 @@ extern irq_desc_t *pirq_spin_lock_irq_de
 
 unsigned int set_desc_affinity(struct irq_desc *, const cpumask_t *);
 
+/* When passed a system domain, this returns the maximum permissible value. */
 #ifndef arch_hwdom_irqs
-unsigned int arch_hwdom_irqs(domid_t);
+unsigned int arch_hwdom_irqs(const struct domain *);
 #endif
 
 #ifndef arch_evtchn_bind_pirq



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:34:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:34:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529314.823596 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEUh-0007XD-4H; Wed, 03 May 2023 15:34:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529314.823596; Wed, 03 May 2023 15:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEUh-0007X6-1B; Wed, 03 May 2023 15:34:15 +0000
Received: by outflank-mailman (input) for mailman id 529314;
 Wed, 03 May 2023 15:34:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZlPw=AY=citrix.com=prvs=4803f4e7c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puEUf-000715-28
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:34:13 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f2e70610-e9c7-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 17:34:11 +0200 (CEST)
Received: from mail-co1nam11lp2173.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 11:34:08 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA0PR03MB5641.namprd03.prod.outlook.com (2603:10b6:806:b0::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 15:34:05 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:34:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2e70610-e9c7-11ed-b225-6b7b168915f2
Message-ID: <a32b0131-82af-8828-13ed-dc1feac1fcd4@citrix.com>
Date: Wed, 3 May 2023 16:33:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: andrew.cooper3@citrix.com
Subject: Re: dom0less vs xenstored setup race Was: xen | Failed pipeline for
 staging | 6a47ba2f
Content-Language: en-GB
To: Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, alejandro.vallejo@cloud.com
Cc: committers@xenproject.org, michal.orzel@amd.com,
 xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, =?UTF-8?B?RWR3aW4gVMO2csO2aw==?=
 <edwin.torok@cloud.com>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
 <c74d231f-75e2-a26d-f2c4-3a135cc1ac10@citrix.com>
 <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>
 <998dbbeb-94d7-7afe-e4c2-02ef134cbe94@suse.com>
In-Reply-To: <998dbbeb-94d7-7afe-e4c2-02ef134cbe94@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P302CA0015.GBRP302.PROD.OUTLOOK.COM
 (2603:10a6:600:2c2::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 03/05/2023 4:22 pm, Juergen Gross wrote:
> On 03.05.23 17:15, Julien Grall wrote:
>> Hi,
>>
>> On 03/05/2023 15:38, andrew.cooper3@citrix.com wrote:
>>> Hello,
>>>
>>> After what seems like an unreasonable amount of debugging, we've
>>> tracked
>>> down exactly what is going wrong here.
>>>
>>> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4219721944
>>>
>>> Of note is the smoke.serial log around:
>>>
>>> io: IN 0xffff90fec250 d0 20230503 14:20:42 INTRODUCE (1 233473 1 )
>>> obj: CREATE connection 0xffff90fff1f0
>>> *** d1 CONN RESET req_cons 00000000, req_prod 0000003a rsp_cons
>>> 00000000, rsp_prod 00000000
>>> io: OUT 0xffff9105cef0 d0 20230503 14:20:42 WATCH_EVENT
>>> (@introduceDomain domlist )
>>>
>>> XS_INTRODUCE (in C xenstored at least; not checked in oxenstored yet)
>>> always clobbers the ring pointers.  The added pressure that xenconsoled
>>> puts on dom0 with its 4M hypercall bounce buffer occasionally delays
>>> xenstored long enough that the XS_INTRODUCE clobbers the first
>>> message that dom1 wrote into the ring.
>>>
>>> The other behaviour seen was xenstored observing a header looking
>>> like this:
>>>
>>> *** d1 HDR { ty 0x746e6f63, rqid 0x2f6c6f72, txid 0x74616c70, len
>>> 0x6d726f66 }
>>>
>>> which was rejected as being too long.  That's "control/platform" in
>>> ASCII, so the XS_INTRODUCE interrupted dom1 between writing the header
>>> and writing the payload.
>>>
>>>
>>> Anyway, it is buggy for XS_INTRODUCE to be called on a live and
>>> unsuspecting connection.  It is ultimately init-dom0less's fault for
>>> telling dom1 it's good to go before having waited for XS_INTRODUCE to
>>> complete.
>>
>> So the problem is xenstored will set interface->connection to
>> XENSTORE_CONNECTED before finalizing the connection. Can you try the
>> following, for now, very hackish patch:
>>
>> diff --git a/tools/xenstore/xenstored_domain.c
>> b/tools/xenstore/xenstored_domain.c
>> index f62be2245c42..bbf85bbbea3b 100644
>> --- a/tools/xenstore/xenstored_domain.c
>> +++ b/tools/xenstore/xenstored_domain.c
>> @@ -688,6 +688,7 @@ static struct domain *introduce_domain(const void
>> *ctx,
>>                  talloc_steal(domain->conn, domain);
>>
>>                  if (!restore) {
>> +                       domain_conn_reset(domain);
>>                          /* Notify the domain that xenstore is
>> available */
>>                          interface->connection = XENSTORE_CONNECTED;
>
> I think there are barriers missing (especially in order to work on Arm)?

Yes there are.  I think x86 skates by on side effects of hypercalls.

>
> And I think you will break dom0 by calling domain_conn_reset(), as the
> kernel might already have written data into the xenbus page. So you might
> want to make the call depend on !is_master_domain.

And this is why I am very deliberately not doing anything until the
documentation matches reality, and is safe to use.

For starters, shuffling this doesn't make any difference for a domU
which hasn't been taught about this optional extension.  Ignoring such
cases is not an acceptable fix.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 03 15:54:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:54:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529322.823606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEns-0001oY-Tt; Wed, 03 May 2023 15:54:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529322.823606; Wed, 03 May 2023 15:54:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEns-0001oR-R3; Wed, 03 May 2023 15:54:04 +0000
Received: by outflank-mailman (input) for mailman id 529322;
 Wed, 03 May 2023 15:54:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puEnr-0001oL-RI
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:54:03 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2059.outbound.protection.outlook.com [40.107.7.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b85d7bbb-e9ca-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 17:54:00 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by VI1PR04MB7183.eurprd04.prod.outlook.com (2603:10a6:800:128::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 15:53:31 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:53:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b85d7bbb-e9ca-11ed-8611-37d641c3527e
Message-ID: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
Date: Wed, 3 May 2023 17:53:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/8] runstate/time area registration by (guest) physical
 address
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0269.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::7) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0

Since it was indicated that introducing specific new vCPU ops may be
beneficial independent of the introduction of a fully physical-
address-based ABI flavor, here we go. There continue to be a number of
open questions throughout the series, the resolution of which was one of
the main goals of the earlier v2 posting; v3 has very few changes and is
meant to serve as a reminder that this series still wants looking at.

1: domain: GADDR based shared guest area registration alternative - teardown
2: domain: update GADDR based runstate guest area
3: x86: update GADDR based secondary time area
4: x86/mem-sharing: copy GADDR based shared guest areas
5: domain: map/unmap GADDR based shared guest areas
6: domain: introduce GADDR based runstate area registration alternative
7: x86: introduce GADDR based secondary time area registration alternative
8: common: convert vCPU info area registration

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 03 15:55:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529325.823617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEp7-0002Me-7X; Wed, 03 May 2023 15:55:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529325.823617; Wed, 03 May 2023 15:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEp7-0002MX-4J; Wed, 03 May 2023 15:55:21 +0000
Received: by outflank-mailman (input) for mailman id 529325;
 Wed, 03 May 2023 15:55:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puEp6-0002MD-1O
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:55:20 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2081.outbound.protection.outlook.com [40.107.13.81])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e64e6568-e9ca-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 17:55:18 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DU0PR04MB9658.eurprd04.prod.outlook.com (2603:10a6:10:31f::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 15:54:49 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:54:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e64e6568-e9ca-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EWxT0JNuD/IcywYuvhA5HvAE1wg0FUD86AIUHN3xrgU2LOGT1lIEST8aWLu15jfJRV41koLhE+PglHGgE2nurqYpdJ1ihUX9ymRRI7k1fMIXJt08HazO+/2WYXqfyVunJUXi595easd5UDF7J0npFg/O6F8uPpzfxn5Fy7G+Hwr2vu4Wx0zPOgUnolSf8WU3xXml7DwGSCz0eJ+f73WrutXcMcTN1GXnACEqxP/CBz/2efMide4JVDF+in+8qIX6ZiqJp46UumsE+65+CwiAjUOaR7lpJs/3cqJ1oeBtV3qWemfhxkO2DAOWO7ELNyxJ3095CoorJP4xHDQxL8MLGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bQ4sjL50lVdQLdIcCEEN0lFGFwGWLMMNd8s5YRynU9w=;
 b=CTnIt+QB4/HGpwjgHslqoHPIvwZ/DbsXDv9ayW8eRDJDJnL0stGVaMX+d5r1KLB/gozgW4NjAGUtTS/ZU2vT86AEigRMHHbI78Fx1eBCaHakjgnZR1JDOZ69WJQnF405r15wAbmC1BGQvl7wHQE+DZvJqys4i3Niz6ojw888ajbSQkc92fF6m50kibj6abRiDoHYbNpFQgDhB1wOT+w1mkWxVlx55gnltDdEwDqe8+gk333iJdkQe+yzxD3zD/dLieDs14pf+UQ9JrMWE00SDwMlSnRRnBmVg17ek81KpBfUtNscHYIvF5pRzihnCiadFcT9yOQydLcHyxpvHkWq5Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bQ4sjL50lVdQLdIcCEEN0lFGFwGWLMMNd8s5YRynU9w=;
 b=ZZDp1jAKFNSe8oww7lYPnsm8pgapYjtkSCYrIbasG0m5wUCUl3BoPaWP+41vIMb7/9BpmVW4O8SeyRwkzgaHKms32pdoTm9wh0IaR0ZVnks8KkoNB9SwkT3iLarwrDZ3+oL2T4C1xGALMo+XGV14kQKuHL4seeMgM0SUBVck4J+RLZW0qnaiaMHT5/4PGRVZpCv0X4Y4kFMnk7mSPCgv70o0uze8YcLE7ONkZDZIn5gv7XKC+wn8zMruD8THSnP0l/IIJTTQZ7JtDfk0WU+qGLVT/gpZcHzQmF1N+feyFW3WLKnMSh4tal3tI1vbkDpnDmV0uxMaNjGX5i6pIl0LuA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <68cdb299-c41c-b6a5-c9ce-bd915508ecf2@suse.com>
Date: Wed, 3 May 2023 17:54:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 1/8] domain: GADDR based shared guest area registration
 alternative - teardown
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
In-Reply-To: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0015.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::25) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DU0PR04MB9658:EE_
X-MS-Office365-Filtering-Correlation-Id: bf5e5982-4c31-4f43-a921-08db4beeb9a9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8ATUtPuHGqXm+bDAuOd4cESMq7QuAD5Bkl4WjdS+yLWHkRmIwEQRpcDq5uaI6FMIJt0OYLiSTAoWZFaY31zfksU8XthFysVYI1IIm74HIZ58f2ot0pMCRpjJntofkvSdr7b+wjsyptOT5sPv5uy3AmGhPjqHeJXS3sdgoIE4atYlBckRsyqhxhUTnPHA5uTHOM+io0n5HEjANeKCUxUaWzobehyKYPFCYKWA6nIXRkYeY75tCmkH6X4Z9PlifzmpBDsKlGUHdXu2LIbrBrzrF7idClVzuYxYq1uaC5As8bbIyuCeV5olTM4Sna2kW3cMPgXfJIEH4ucWvr0hkIpd5mMeXE4aplJn5yCvrN/oTSqFPsEJuJTQi2DxQuRmaH7Z406+mNMFqK5AKNURIyjtXB3d/G6sZDCmr1LcqclXkSKVrLEOh0mtjMTq60z0wlf7y2Fjy6C+oa6YsUC/WbmUOvo81wAvu4VE+vSDIGJuTRqP++hmiBbJm9Ied+0bsBeidMRf+DnwksCOeAIf2iNOLOjKj3ZjnG6esM4cG9Pknxb8cCtUefOpsH2W5ehTdjRVgJqcQBoDjdI6T9pMqfJy4pIIduHcoF9uwmaSfI9XQ0JDI5hCcTNXtuiCvek+2uBUaYciijoIfCBO8W6nkpFDqA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR04MB6551.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(366004)(39860400002)(346002)(136003)(376002)(451199021)(31686004)(66946007)(6916009)(66476007)(6486002)(316002)(66556008)(41300700001)(54906003)(4326008)(86362001)(31696002)(36756003)(5660300002)(26005)(6512007)(6506007)(2616005)(8676002)(2906002)(38100700002)(186003)(8936002)(478600001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NVIxM2hvYityYm5yTm9xWWdYODViekMxN3VjSDVtR0RaamxVT1BXQnhvdlVZ?=
 =?utf-8?B?SkJWLzFqZDhYU0dPejFvUkVFT1hPNTB2Ymh5NXhGMnZXVVQ1MU9NVTR3dnM0?=
 =?utf-8?B?YlpwdEYwdXNsZGFYNFhFYXQzR2JrZk5TWStqT0VyWkRZRURicVhzMVVhL1pQ?=
 =?utf-8?B?Y2ZmVjFxVEFxSmNFNHBEL3RMWFB1NjdoZzEwYytWMGRDVDUvRUluaUg3NDdY?=
 =?utf-8?B?MlYraHdDdkthNkF6aUlBRjNOWjR4QXBrbFV6VzhUaUhJTUJ5QWQ5RUtYNHJl?=
 =?utf-8?B?Ym5YcHV4TmlUdXJwTGFqa3krbUc2WWFDd2VwOWU4UHB1VjFoMUtCSkVYTytV?=
 =?utf-8?B?em1FSnJ0OTJKZnhubU1RZUdyTEthYzFtSWVYOGN2U1M4TGtaVzJEc1V1aVF0?=
 =?utf-8?B?Umo4OUpPbldGdForbnlsVm9rdE5hWE1pOVdPeTdxcitLM2hveDB1ZjI4T3N0?=
 =?utf-8?B?UjNPU0JmZ3FyQUwwL1VRZkh6WWY0dkR1VEhPRCtVNXZnalRVemVhUEFkUGsr?=
 =?utf-8?B?VGJLL1EwdlhCNjAwbDJ1UGhHclRsd0RucDRXTXlwTUJ1YjRTSUdyKzBUVVdU?=
 =?utf-8?B?ZVZnVUZ2K1lESU5UR2dFdG1kUnN6bjJDTm9tLzc5Uk1JdkVTQkhxZGNTUjkz?=
 =?utf-8?B?TDJ1c3VwaGFqZEZYNlN3MUptZ3NUQTZJaGVOZ25KbkExL3NpdU0ybDg0NHFk?=
 =?utf-8?B?Q28rZGlLYi9ZVlhMOVFNMHd2T01LRG1vZ2djV3lHSzh0WE9ZNEFFWWZKNHo3?=
 =?utf-8?B?clNWOThWU1FiTFYxK2VKbUFrNG02bEJTT0YyUjU3Q0ZlbUhGNi9yckl2amls?=
 =?utf-8?B?T25iTGdqZWNFYW9GTUhYaVR1dkZSUTRiTVN1UXZrOFJRSkFVeGhnZ2NhVURt?=
 =?utf-8?B?a0pXcmZDYUUwVStPakI5RzRLWFpNcFNybnZBQ1I0aGV5RU1ydlo5OGh1U2dY?=
 =?utf-8?B?Vmd1NkhXTzZJbkREQ3lreStlN285VHNyWktwOTEwU0lhelFnQWVaVE5NOGEr?=
 =?utf-8?B?a0VLbHQ3VEZadmlxTTQ0SnJycVY2dms4Z1A0UjRhOUdWWmVOUFcvQjJ6cWph?=
 =?utf-8?B?QjhDd1Z6TjdaMUFqNjlrL0NlVmtObkVLclVscmxFVkR0d1BCZXRlYUk4TGt4?=
 =?utf-8?B?NDZLdzlIMnZlN3RKeEJId0FFb3p6UEZjT3VCSnFPSVVpTzRDTjAxZzlMdmpx?=
 =?utf-8?B?LzI2NWlUay83R0NwcGtDOVVObEIvN3dUMnNKZVp5Q3pNa0RHdVVpdzc2Ymxh?=
 =?utf-8?B?QkppaXRuN1BXVnZOcmRaalpvWlFtTTFrWHEvY3ZWd1hzeC83ZHFUcE1ScFpL?=
 =?utf-8?B?a0VTMG9aVm1DQUxQZ3JrT0ttQ2F3OHBiVHFhVFFKV24vUzVaU3F6b3hDbzlH?=
 =?utf-8?B?Z2dhSEhvOVRibTlPSWJpZ3VBVUQ4THEzYjlCVFZFY29WVjhUY1JtYlh1Ykpl?=
 =?utf-8?B?c2VBMmZoa3ZPN1J1ZXJOT2xBWlZPb2ZhbXhwWmpoL0lEenNydlFuYXpOd1Mx?=
 =?utf-8?B?M2grK2h0SHRLeVFZOG5JVFFFTklodXp0RHNwQThGMnc1SEVEZjkyVGVXeDdh?=
 =?utf-8?B?aG92L09HdVZicU41UmRvMyt1ZW4ySkFTVlVyS3dNcnAxdE1rdklvSDQ2OEdW?=
 =?utf-8?B?dFBIMjFWL3BrNnBVQTlLa0tzU1dqU0lReTBkYzlFRDM2SDg0NG9yMEFtY2l1?=
 =?utf-8?B?R2pBbjBtWW1Oc2VjM2Q5VklubUk3NHE4ZHptd2hSaUhQQjdWNWdPQzBOcEZl?=
 =?utf-8?B?WVc0bHlwTUZSVlhVbkVGMWF1QllGQjg4bnJhZ0ZmWi8rYUlnWmZBTWVudFpz?=
 =?utf-8?B?NXZtOGNoYldacGh0RmpFaDJuaVpDVitOTDlIVSs0T0tjRjlIMWcyR2F6UnlD?=
 =?utf-8?B?ZDd5bkdmUkUyVzV3Um1EYkJzYXMyVEVPc1d0dnVaYkhlcUpVQ3paQmVGNjNU?=
 =?utf-8?B?WXhrZ1JvRzU5anZOSFVXcWd3ZmxtQnF0VnFKTjVEUHRKU3VKU0pJLzJBU3g4?=
 =?utf-8?B?YUhoQkRTVk1tZUU5RjBrUWhTaHp2L25wblJLTGdLKzlkeS9UdHhJK0NCNXpx?=
 =?utf-8?B?dmFDVmFDSWJPdmVwZXo3TGdNY3RsclE0Q2h6eW9FN3J1YWw0eE0rWEwwYUMw?=
 =?utf-8?Q?IrXl3850+5Ziq5SIkOAFNSZiU?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bf5e5982-4c31-4f43-a921-08db4beeb9a9
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 15:54:49.3439
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BQ8Rjd067xQUqIeJOCXp0IjbyVVxKWUziIJ+yLKPTgRMxvnbayLxXRBLL6VFSFSDgWFkNaPqqtsc4Vf/Ekqi2Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9658

In preparation for the introduction of new vCPU operations allowing the
respective areas (one of the two is x86-specific) to be registered by
guest-physical address, add the necessary domain cleanup hooks.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
RFC: Zapping the areas in pv_shim_shutdown() may not be strictly
     necessary: As I understand it, unmap_vcpu_info() is called only
     because the vCPU info area cannot be re-registered. Beyond that I
     guess the assumption is that the areas would only be re-registered
     as they were before. If that's not the case, I wonder whether the
     guest handles for both areas shouldn't also be zapped.
---
v2: Add assertion in unmap_guest_area().

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1019,7 +1019,10 @@ int arch_domain_soft_reset(struct domain
     }
 
     for_each_vcpu ( d, v )
+    {
         set_xen_guest_handle(v->arch.time_info_guest, NULL);
+        unmap_guest_area(v, &v->arch.time_guest_area);
+    }
 
  exit_put_gfn:
     put_gfn(d, gfn_x(gfn));
@@ -2356,6 +2359,8 @@ int domain_relinquish_resources(struct d
             if ( ret )
                 return ret;
 
+            unmap_guest_area(v, &v->arch.time_guest_area);
+
             vpmu_destroy(v);
         }
 
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -669,6 +669,7 @@ struct arch_vcpu
 
     /* A secondary copy of the vcpu time info. */
     XEN_GUEST_HANDLE(vcpu_time_info_t) time_info_guest;
+    struct guest_area time_guest_area;
 
     struct arch_vm_event *vm_event;
 
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -382,8 +382,10 @@ int pv_shim_shutdown(uint8_t reason)
 
     for_each_vcpu ( d, v )
     {
-        /* Unmap guest vcpu_info pages. */
+        /* Unmap guest vcpu_info page and runstate/time areas. */
         unmap_vcpu_info(v);
+        unmap_guest_area(v, &v->runstate_guest_area);
+        unmap_guest_area(v, &v->arch.time_guest_area);
 
         /* Reset the periodic timer to the default value. */
         vcpu_set_periodic_timer(v, MILLISECS(10));
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -963,7 +963,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
             unmap_vcpu_info(v);
+            unmap_guest_area(v, &v->runstate_guest_area);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings 
          * have to be put before we call put_domain. */
@@ -1417,6 +1420,7 @@ int domain_soft_reset(struct domain *d,
     {
         set_xen_guest_handle(runstate_guest(v), NULL);
         unmap_vcpu_info(v);
+        unmap_guest_area(v, &v->runstate_guest_area);
     }
 
     rc = arch_domain_soft_reset(d);
@@ -1568,6 +1572,19 @@ void unmap_vcpu_info(struct vcpu *v)
     put_page_and_type(mfn_to_page(mfn));
 }
 
+/*
+ * This is only intended to be used for domain cleanup (or more generally only
+ * with at least the respective vCPU, if it's not the current one, reliably
+ * paused).
+ */
+void unmap_guest_area(struct vcpu *v, struct guest_area *area)
+{
+    struct domain *d = v->domain;
+
+    if ( v != current )
+        ASSERT(atomic_read(&v->pause_count) | atomic_read(&d->pause_count));
+}
+
 int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct vcpu_guest_context *ctxt;
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -5,6 +5,12 @@
 #include <xen/types.h>
 
 #include <public/xen.h>
+
+struct guest_area {
+    struct page_info *pg;
+    void *map;
+};
+
 #include <asm/domain.h>
 #include <asm/numa.h>
 
@@ -76,6 +82,11 @@ void arch_vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
+                   struct guest_area *area,
+                   void (*populate)(void *dst, struct vcpu *v));
+void unmap_guest_area(struct vcpu *v, struct guest_area *area);
+
 int arch_domain_create(struct domain *d,
                        struct xen_domctl_createdomain *config,
                        unsigned int flags);
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -202,6 +202,7 @@ struct vcpu
         XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
     } runstate_guest; /* guest address */
 #endif
+    struct guest_area runstate_guest_area;
     unsigned int     new_state;
 
     /* Has the FPU been initialised? */



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:55:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:55:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529326.823627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEp9-0002cn-GK; Wed, 03 May 2023 15:55:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529326.823627; Wed, 03 May 2023 15:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEp9-0002cg-Cl; Wed, 03 May 2023 15:55:23 +0000
Received: by outflank-mailman (input) for mailman id 529326;
 Wed, 03 May 2023 15:55:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puEp7-0002MD-Cz
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:55:21 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2081.outbound.protection.outlook.com [40.107.13.81])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e783c349-e9ca-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 17:55:19 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DU0PR04MB9658.eurprd04.prod.outlook.com (2603:10a6:10:31f::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 15:55:13 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:55:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e783c349-e9ca-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BEn3SZamiUh+LaSGxz7RnHnIzh7e18dlWlHCy6dICRCUkgBM67J5anVsw931BjnKX8w6H//1huTXHaVGCrETzaU8iCB8BnGBYf5CKCHOJJtHOlSvq6fSMBCVPQ1dPLsBggoDbDnebGqreSCtdlEGCkYSFCN1daVdyQiILjqWCnmfys425MrGCfxubl+Rvq402iZ1BgOAfU+TPF0yAZszO+E+ARa0EnFd3el7zYAjuRoXGvAwbzVIfczwjA7BIqnp5RqH6VgfyREUulfmvevLsMGgJZz6MVyFZFE8e0Xgcri5jcdFDH8aZZpXyVdeaqX9SiFXWl5ZUA2SPvYAXJOupA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=j6yjHr0PfftdPO6Ee88aaSu+J8Pv7p1vKjlZEcGgTTw=;
 b=Dm7r90AHXe+maGSyqlOiRafIFF2DFnW0tjdci60DDSWPXn1M/7k+oxTXOv8Q9x8NgbZFAv3BSStJ7LNwzU4QqrPE3Qy7kPe0otAn27OIOaYaouyMKz97IfSpI1fPs9giMnwyDOhzr52YCKS7V01tVEmjxNHJLg+Qyz/F0TmxKsEGQNQOeC0Sc83RpTHWFPDVp74egokb6Hudpzv0C2DjJ3q9SlTVLAc1orgA7MDgtU98NixXr3Ua4JCuTCjpWBjHXvPIVRP9E9VCSGQNf/RnpoLksRInNSBDW/nwtx6QTklvj+F0/h3s8LTSb0cUuC9fT53v0tJyvnzRJTDANe0BcQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=j6yjHr0PfftdPO6Ee88aaSu+J8Pv7p1vKjlZEcGgTTw=;
 b=D1GYqkZTkOLEKzzmIi9q5uEAvCEJvrZ4Ee7Vz9DOw+kX+RMHfUlVqsRkRtyletxBp5UEcr4QNT10qq74FJ/C4t7cM+PEKMCB0S548RE7Dc1eRF+bb7Cx3QQkIihY0kiWME6gpwOGqgJB1iIxNnFN6ppbYEWjf3KG9+N3egQvkfPNCMXa3jTcfxw8MPOzVPQrnkR5OztHV9rPcv0hBcB1HbFugURuKzEMUECBX1P/CtMt4ocFKKd/Y/crFfAJ6UPUgnZjQFpDwu9byx7mTgxmzt1bQXusauj4LbSfHMBUxJBoVeL0BSY2HwbNVJoEK07JXCBFB5PQbabyqFKN+SW9bw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <65893a9e-e2ae-6853-7c4d-54f2bf19b17b@suse.com>
Date: Wed, 3 May 2023 17:55:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 2/8] domain: update GADDR based runstate guest area
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
In-Reply-To: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0012.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::22) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DU0PR04MB9658:EE_
X-MS-Office365-Filtering-Correlation-Id: 595c3862-ac38-43d7-4e5b-08db4beec7fc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tEUd3Jv42yDnr32vQG7G5osjlauYf2E+lSNMeMo3iDIKhunZm7E5ZTcJlYbLwPoQg6rojBIuVrR03F4KMm2Hm54eE3yrDet0U8jPMV/LkOypAxJvfmPFFujWDyZZL/49O8W8+XIEdyhunFKKcP8vvqSFEVk47N2XMJZoSwFZfZwLNnsBYfvblWBYsTOjFDqLg+0P8pSekEOYoRUp3dbIPiM3e9kobI2IPkIvA+m23wDVYO7rvGxnufNgjJjI8+3+RsbM6SsBUu4RmHmKVgF+r31hfeuFphD4GfSRvZDmflSk3SXJ281ELse1YcsFXXWGLgM4lKxupQzrh/rdbG51eTPHELC64ycV5145CkHpTn7eP3uX0JfuLbaRrY2+ORI0xWc3mBnqkhZh85Mtk7qAoeX+1JPGqnFTVdcJDTFGoDx2RfVpZZH042Ciq9FQoK673hrwx+1Pun8L88P4LLroQuNPj/gMA1bkp9WLUD2MBxyXSrVWrY/lTIYBjV6snkLeN/BOZAmBKM25qpvplSF2+9zE3LfbMDSaW87DBjTLs6yShAR35soTP5Ta1BzaDZgFBsHE6kITp0dJj+uLBD5Ttk0EQ97LJk2/0O3hDUEsZljBYS2/Gh83/5IYjsBDtPaQkl4No9twnHINvF5z4AxyCQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR04MB6551.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(366004)(39860400002)(346002)(136003)(376002)(451199021)(31686004)(66946007)(6916009)(66476007)(6486002)(316002)(66556008)(41300700001)(54906003)(4326008)(86362001)(31696002)(36756003)(5660300002)(83380400001)(26005)(6512007)(6506007)(2616005)(15650500001)(8676002)(2906002)(38100700002)(186003)(8936002)(478600001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Q0dtWkJ3UnBhaDRleEdFeFlxVGlabFR6NVlTcTBMdGx4TndqK0lTOFpaUVp0?=
 =?utf-8?B?QVlVakdVYlArcnRkanNhenZVUTBlYnFCT1krZHpIS2dVdGhrV2JBRmcrTFF0?=
 =?utf-8?B?S1V4aUNuOC9qa1lCb3dWTDIvZ3hQaVlkSkN4RHhEaVB5TGpTc2tHVnlzNEcw?=
 =?utf-8?B?bnM2dG90dWdkVHhlcitVOUlqT3dzeVgxSWpUU21yekQ4bjZkSHdpd3RBeUl6?=
 =?utf-8?B?R0NkRHRaMTNTNWVrSUd4QUxsTndLakg5akZ0aUpzM2xSUk9oRU9TUDJPZ3Q5?=
 =?utf-8?B?K2RjMEJzUVlhdUZQVlQ2ZEU0UnFpdldpYmxvQWpnYWtSYWsyMzZBclk3NVA2?=
 =?utf-8?B?dG1VejdMdXE4MFgxd1VIUmEzci9VSjJTQU9FSC9qTXdZcUQ5QkNnUGdia1Yw?=
 =?utf-8?B?THkrczVpVFNFeUU1eEZ5Y3duSzQwK1FpdWI1cEZxQ21ET2dzS3ZQa1Q0blhF?=
 =?utf-8?B?ZWhtUlQzRHI2Y1ZjRlhGRkhpMGZteDVSdVI2N1h1SEN0K2FHYm9kZ1dXSlVZ?=
 =?utf-8?B?Ykt1NkxVOTAxSTJIM0hoMDdiamxNMU1HeWx6S1lMbUx0ZjZka3Y2UG1HcnpN?=
 =?utf-8?B?VzdzQXJZeHJYRGxtSEo1WUpBU2xrenJUeVIxNXMxaTJqMC9idXh0SWQrbXBi?=
 =?utf-8?B?LzFKSEduTEpFTWp0TUZrcUs2a2c5MWxMQ2lnQmNMVUw4L2taelhGMzhOdDRw?=
 =?utf-8?B?SzZVNU90Sm5HZFJjWjBGNHRsNXJCa1YrYmg2OU1xSTZWU09NWTFzZFhJUS9o?=
 =?utf-8?B?K2FRaXY0bnMzWEFHVXNRVUVxYUxHYUhOM0VlYXJWdFNubVBmOWtucEp4ZlA2?=
 =?utf-8?B?VlVtaEpuNVVzN1RzVGcxZmZTUkd1dC92WTZUbDN6T2ErN2F1UnpjZjNjc3lG?=
 =?utf-8?B?YTdiMlZ0dHp0TWxZc2ZPd2h0MGdwMzh2Sk5WOEVNR3RyVDlTUndrYVZGYzMz?=
 =?utf-8?B?UkNObFZPVWlDRzdNVkc3WVpGTVpXL29XcU80ZmJGMFE4Ui9uQlVaN3JVcmdo?=
 =?utf-8?B?ajgvTkpobjNQUWVNcDVMcWRTOVQ2dmY2N2lTQUR6bGV2RkFZYlZJY0llTTBS?=
 =?utf-8?B?bFBPR1JVUWFtZjhhd0o4WVc0NU5rUWJYU2daRkJmVTE5TDNMNTA1Q0IzTDBU?=
 =?utf-8?B?Ymc0OWFaSkhTV2lLUkgvNndEcWtSNXE0QTduUHhsb0ZQRGM3TG9EWDBhd2Mw?=
 =?utf-8?B?TlhDQzBLRXNCdXYva2wxT2kxakkzOE1Qc0Uvb2hLWlBxaGZpZ0NITjRINDdy?=
 =?utf-8?B?c2R6blBXOTFySjRLWXdGZDlCNzNiVU4yRzVLbDIrTDBhMjdNNEpuRGYzNmE2?=
 =?utf-8?B?T1lxYlUvVFZvR3l0SmhQd0FDRjVFLzgwTXdxM3cxSjRqYXRMVU1nWmV0SWQr?=
 =?utf-8?B?cDFueEwrL1NRdGhhUkdGZ1cxb3c1VjhlMGRLSDFOT2pIdEtIeWdwZngrcXhz?=
 =?utf-8?B?T0todFlIVC9lUllncWUwbFZtMEhJMk5wdVMyRHE4dEFQUjI0OGxQVW0xaUs3?=
 =?utf-8?B?c2phOVdaWjRlVHY4Uk91a0xiNjBKMFR6c2thT2Z6YzZqTkdKRDdyQWkzS0s5?=
 =?utf-8?B?NHFncXllT3RaOTB0UXBnVmFHN2V2K0podFVYeEl6cFFKSkxEbThkd1ByN1N2?=
 =?utf-8?B?dU5Ta0NCZnRJTVljM2ZVSDB0bDVlMFYxZHJ5aUMxS3V3OEh5SlNLWkM1ZHpX?=
 =?utf-8?B?NkNubEhySk9ZbDg1SVpWV2xGSEdBSXJxWi9zSlV5L0dJMXpJRGdsNkhkak1U?=
 =?utf-8?B?ektaLzJDWWpqWjhMenFsMlUxaDBFODlnUEk0WG9VV2ZSZ3cwdE13MnJYNk5w?=
 =?utf-8?B?QThxRnZVMHhRekhkY0tKcER4SW1GUDkzRENHcVVoNzZzcWY4VENBWWZWZnAx?=
 =?utf-8?B?K1hyZWl4MlRUT055cjJMS1VlZW1ibEQ2Z2I2Y25PZFhkME5yRE1sWWxlNC9W?=
 =?utf-8?B?RHUwMmw2YnBkK2p6MVJtbWZLL2VsQlpJNFdsR3FyZmVrSGs3N1NIL1pEUHVz?=
 =?utf-8?B?eFBzT0F3VzAvblVrQ0VjcjhydGlRaXRzMW1WV2tXNHBQZTZkWHJPVkJEVVox?=
 =?utf-8?B?aEUrUWZlUjNHaTdkUU9YR1FqN0VJWDJ0QmczWHZqaHdsbklBcjRRQ1JDeFps?=
 =?utf-8?Q?Y8F2vNbsBwPE3QPpkzYckBWo3?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 595c3862-ac38-43d7-4e5b-08db4beec7fc
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 15:55:13.3717
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XqEp816xcAml/2VFrs3XPNY+k5XyBRnNeAN8RSnOPqHEHmWcbOuoQRbm2ZtlHRJWzMgL/6gPcyul4t3ZhVhkgQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9658

Before adding a new vCPU operation to register the runstate area by
guest-physical address, add code to actually keep such areas up-to-date.

Note that the area will be updated exclusively following the model
enabled by VMASST_TYPE_runstate_update_flag for virtual-address-based
registrations.

Note further that pages aren't marked dirty when written to (matching
the handling of space mapped by map_vcpu_info()), on the basis that the
registrations are lost across migration anyway (or would need
re-populating at the target for transparent migration). Furthermore,
the contents of the areas in question have to be deemed volatile in the
first place, so saving a "most recent" value would be pretty
meaningless even for e.g. snapshotting.
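
For context, the update model referred to above is a publisher-side
seqlock-like protocol: the hypervisor sets XEN_RUNSTATE_UPDATE in
state_entry_time, writes the area, then clears the bit again (see the
hunk below). A guest-side consistent read could look roughly like the
following sketch; the struct layout here is a simplified stand-in for
the public-header definition, and real guest code would additionally
need volatile accesses and read barriers:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the public ABI; illustrative layout only. */
#define XEN_RUNSTATE_UPDATE (1ULL << 63)

struct vcpu_runstate_info {
    int      state;
    uint64_t state_entry_time;
    uint64_t time[4];
};

/*
 * Guest-side consistent read matching the publisher protocol: retry
 * while an update is signalled (XEN_RUNSTATE_UPDATE set) or while the
 * entry time changed under our feet during the copy.
 */
static struct vcpu_runstate_info
read_runstate(const struct vcpu_runstate_info *area)
{
    struct vcpu_runstate_info snap;
    uint64_t t1, t2;

    do {
        t1 = area->state_entry_time;
        snap = *area;                 /* copy the whole area */
        t2 = area->state_entry_time;  /* re-check for a torn read */
    } while ( ((t1 | t2) & XEN_RUNSTATE_UPDATE) || t1 != t2 );

    return snap;
}
```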

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: HVM guests (on x86) can change bitness and hence layout (and size!
     and alignment) of the runstate area. I don't think it is an option
     to require 32-bit code to pass a range such that even the 64-bit
     layout wouldn't cross a page boundary (and be suitably aligned). I
     also don't see any other good solution, so for now a crude approach
     with an extra boolean is used (using has_32bit_shinfo() isn't race
     free and could hence lead to overrunning the mapped space).
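
The page-crossing concern above can be made concrete with a small
helper (PAGE_SIZE assumed to be 4KiB here, and the function name is
illustrative, not part of the series): a range that fits in one page
for the smaller 32-bit layout may no longer fit once the same guest
switches to the larger 64-bit layout.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Does [gaddr, gaddr + size) stay within a single guest page? */
static bool fits_one_page(uint64_t gaddr, unsigned int size)
{
    return (gaddr & (PAGE_SIZE - 1)) + size <= PAGE_SIZE;
}
```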
---
v3: Use assignment instead of memcpy().
v2: Drop VM-assist conditionals.

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1615,15 +1615,52 @@ bool update_runstate_area(struct vcpu *v
     bool rc;
     struct guest_memory_policy policy = { };
     void __user *guest_handle = NULL;
-    struct vcpu_runstate_info runstate;
+    struct vcpu_runstate_info runstate = v->runstate;
+    struct vcpu_runstate_info *map = v->runstate_guest_area.map;
+
+    if ( map )
+    {
+        uint64_t *pset;
+#ifdef CONFIG_COMPAT
+        struct compat_vcpu_runstate_info *cmap = NULL;
+
+        if ( v->runstate_guest_area_compat )
+            cmap = (void *)map;
+#endif
+
+        /*
+         * NB: No VM_ASSIST(v->domain, runstate_update_flag) check here.
+         *     Always using that updating model.
+         */
+#ifdef CONFIG_COMPAT
+        if ( cmap )
+            pset = &cmap->state_entry_time;
+        else
+#endif
+            pset = &map->state_entry_time;
+        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+        write_atomic(pset, runstate.state_entry_time);
+        smp_wmb();
+
+#ifdef CONFIG_COMPAT
+        if ( cmap )
+            XLAT_vcpu_runstate_info(cmap, &runstate);
+        else
+#endif
+            *map = runstate;
+
+        smp_wmb();
+        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        write_atomic(pset, runstate.state_entry_time);
+
+        return true;
+    }
 
     if ( guest_handle_is_null(runstate_guest(v)) )
         return true;
 
     update_guest_memory_policy(v, &policy);
 
-    memcpy(&runstate, &v->runstate, sizeof(runstate));
-
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
 #ifdef CONFIG_COMPAT
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -231,6 +231,8 @@ struct vcpu
 #ifdef CONFIG_COMPAT
     /* A hypercall is using the compat ABI? */
     bool             hcall_compat;
+    /* Physical runstate area registered via compat ABI? */
+    bool             runstate_guest_area_compat;
 #endif
 
 #ifdef CONFIG_IOREQ_SERVER



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:55:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:55:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529331.823637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEph-0003Qa-Ss; Wed, 03 May 2023 15:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529331.823637; Wed, 03 May 2023 15:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEph-0003QT-Pf; Wed, 03 May 2023 15:55:57 +0000
Received: by outflank-mailman (input) for mailman id 529331;
 Wed, 03 May 2023 15:55:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puEpg-0002MD-KG
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:55:56 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20624.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc9c191c-e9ca-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 17:55:55 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DU0PR04MB9658.eurprd04.prod.outlook.com (2603:10a6:10:31f::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 15:55:53 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:55:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc9c191c-e9ca-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=inLmMtkj/mKxcjN81kgyPe/lnB/4VWpaGW50iS+vg88WDPZBlDV/ycy8N1GP97TGe3ZG82g/HYehDp5IbofzsKhgZcaFKPJH1/n9qnk8U8KGD+/1HJUJrowrjvPG30kgbRDi38rjV6HibvL6mFw3TciDbvmayb/aTpjF1lYhh646qt1ahcGVtS5X7FrwoZphGYAgkXGhagwQSKP8fKA8sl50nV7sR70u/j6EwifIqq41eOoesaWd74SO5YXDWBo3Rg59q65dxqU8JSh4M71os02P/97Ic0hkMV45tVEEfaSoV927OXyyiC/EVqemn+j4YLRY4uWusLw91S9DGcwRmA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Tj6boY6MQ0Ajhx1lKmKqTBrZzfbIB+gH7SjBjs88Cgs=;
 b=WOmbBwvy6yvVcm4/sRFEytdm/A7nloW60M0F43SDphMcYYxgBT214ndozt8RGtpoqQPD7QCOfHTQJrGGhN03C8JJUFULnK06hHKMgevfMb/frnlf0q5aY4k4smaRnz3ca8Ls9FY9c1q0PgHH2GiPsPGlfT5V7b2HB5GzzH/7BqVrtG1RECjwcr97KSpdfYd/qW5tYwSQkt8VWo/hai6mnjPS/GcPtaguMBkgP7GWkjUGpSTVh9V0NPLYD0FLO5DScuHC0B4rLxTUPkMVjjV7kfVD4LO+9cKtptauRDGkYiuvYDNxaY3SdOlS0CkVoUJujN+2VNpgTRaSJGdoio6PKA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Tj6boY6MQ0Ajhx1lKmKqTBrZzfbIB+gH7SjBjs88Cgs=;
 b=JnxDbA7c26oKLPC7EQWSU+WaPlfFEV14MIM8YKhRqQ2+CIl5hq475tY5yDlLbk+JPalAJjCIzNb9kGagNIeR9wYb/csi4rTf3pf+QK5szHVs0KNT+9fNS2R7WSSYqPoAw0Grk1RZImFrEfSlm4G6cBC5qk3vYueP/wn460uNnuG4bTLUIne8hemx+j5hCdxcQ1LN+Kk6P5spq+irvLmd8KMoSeS9hYY6E8g+so6tp/b5qrwXUxX9g7lfNmLTE5/RmdwGI04aLI6YIVh7afGnQ4z8V7IkUCt9wHU9Tk/CYNB7BNGv48g2oDvtoBb32eigVzxpHH9S7mqo9P3astgCOg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3cbc92ac-7817-e08c-0e7a-bddd0a4ce070@suse.com>
Date: Wed, 3 May 2023 17:55:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 3/8] x86: update GADDR based secondary time area
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
In-Reply-To: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0008.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::18) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DU0PR04MB9658:EE_
X-MS-Office365-Filtering-Correlation-Id: e382cbbb-bef9-4559-58fc-08db4beedffe
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e382cbbb-bef9-4559-58fc-08db4beedffe
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 15:55:53.6708
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QSQc75YlcKMNRqWx8MZ1zJ9JS8jp1LH/vunCpmGTJMgDcmie6m0JU3L509gt7IByUozI12ZKrmFwlWS1ZQmqpA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9658

Before adding a new vCPU operation to register the secondary time area
by guest-physical address, add code to actually keep such areas up-to-
date.

Note that pages aren't marked dirty when written to (matching the
handling of space mapped by map_vcpu_info()), on the basis that the
registrations are lost across migration anyway (or would need
re-populating at the target for transparent migration). Furthermore,
the contents of the areas in question have to be deemed volatile in
the first place, so saving a "most recent" value is largely
meaningless even for e.g. snapshotting.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1571,12 +1571,34 @@ static void __update_vcpu_system_time(st
         v->arch.pv.pending_system_time = _u;
 }
 
+static void write_time_guest_area(struct vcpu_time_info *map,
+                                  const struct vcpu_time_info *src)
+{
+    /* 1. Update userspace version. */
+    write_atomic(&map->version, src->version);
+    smp_wmb();
+
+    /* 2. Update all other userspace fields. */
+    *map = *src;
+
+    /* 3. Update userspace version again. */
+    smp_wmb();
+    write_atomic(&map->version, version_update_end(src->version));
+}
+
 bool update_secondary_system_time(struct vcpu *v,
                                   struct vcpu_time_info *u)
 {
     XEN_GUEST_HANDLE(vcpu_time_info_t) user_u = v->arch.time_info_guest;
+    struct vcpu_time_info *map = v->arch.time_guest_area.map;
     struct guest_memory_policy policy = { .nested_guest_mode = false };
 
+    if ( map )
+    {
+        write_time_guest_area(map, u);
+        return true;
+    }
+
     if ( guest_handle_is_null(user_u) )
         return true;
 



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:56:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:56:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529335.823647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEqa-000437-5V; Wed, 03 May 2023 15:56:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529335.823647; Wed, 03 May 2023 15:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puEqa-000430-2k; Wed, 03 May 2023 15:56:52 +0000
Received: by outflank-mailman (input) for mailman id 529335;
 Wed, 03 May 2023 15:56:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puEqY-00042q-Nk
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:56:50 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2052.outbound.protection.outlook.com [40.107.7.52])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1d1b9cdb-e9cb-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 17:56:49 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM9PR04MB8290.eurprd04.prod.outlook.com (2603:10a6:20b:3e3::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 15:56:48 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:56:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d1b9cdb-e9cb-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CgjoTUN6CbxKasVTNMIxaD8XwmSLIRV+DyqK6OYqLjQbNQHofyE4bQ5PqpvLrDrmPcPX6VrLYQwxIUnED8nZJxZ9Z7fIbdmu9HIAe18oVsv2laqUU3gMrcdeusKrWJmebtzqIF4tJ4PktucSPMOMCUitQHHQfIp3neMghZmIkPuVSCza29qEu52s52WUBqsNP0uGy0na+lk2GyNyrckDpEvYaYAiLCvbRABagoCa1O2zZXDBmUAsPtQG3j//Z4DslV0k3x5eui+9KwrMofN4raU5LM4Jvl7Upwy+XeGVPgYNyARikWEMxc/uiXHRB5PohhEeSc7ngDQg/Qpw8rr+Ug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=W+yAFYV7AlsDjWOjH7N7vMb8CphBeaTJd50l2bpX5dg=;
 b=Z+yC1slpCuamfXNjHaPWV1NAMO6BU1z3EQja1Fs9ngsGSUxe4pelq1WuMqKJQJBtj8vYkpWV3tBMvzYKjS/82Jz+4eudTx8jgXAT7WEKB8BcPPXFoM94d4LG3NkDxwlgj9ip3igFL9t8naOpAMxAhyRP9yNoTFpeTeuTtHM5vvWSPgktKQOuf14qjsH/nwR414u9oBT9+xpsTCy8nkV+VH1Jo6lJlkE3z+pRHmGyGLHD3TUWMysvKycxckf4zcm6acG9pLESb77B/lxtTnqfqVLEUFMZhS8fVK3cxYymhdxzf/5w0GTMUPKasOJ/sCihsHBJHd7gkhuT1udU3LnPYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W+yAFYV7AlsDjWOjH7N7vMb8CphBeaTJd50l2bpX5dg=;
 b=vrgIQX9XaVWdZVcA4/SlZpstpXpAJmmoQL76lByw/wEQJs07/XjrzPos0G0YyOm0ilx7uFhERhhM2T35+56bNxisQdnW1Qf92hlw1BY47GpjnEFAsvNVwlCphNuwFEOfxaXAmg7cRojrvToJ4Ysd8v37YENn5Burjyp3O2YTAxNTBNDvt7GTVrW3Ndf/5How766lcpzswxfYWw9CBsy/eDDxiAdiELX5kUl0ILQGoFCSTsmfW4rbmC9H3i64CyduLMdfOYgnkWnFl0lnmJ5DG9X+P8lUjf0n7co3R30vGdvxeWBqB8Dsovy7piESniTxZtNjpTU36Ih6cUGA803JKw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a79d2a8b-6d6e-bd31-b079-a30b555e5fd0@suse.com>
Date: Wed, 3 May 2023 17:56:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest areas
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
In-Reply-To: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0067.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::7) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AM9PR04MB8290:EE_
X-MS-Office365-Filtering-Correlation-Id: b1bb34b2-2845-498c-2856-08db4bef00a7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b1bb34b2-2845-498c-2856-08db4bef00a7
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 15:56:48.4224
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gbQ2fpMW8jWUj8uN6cZSiyEIXjZxhuReTi0qZoTODu897SIIU59hWq+bDj6+BU7IiSU/fZn6ycP+pd1pnaxOCw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8290

In preparation for the introduction of new vCPU operations allowing
the respective areas (one of the two is x86-specific) to be registered
by guest-physical address, add the necessary fork handling (with the
backing function yet to be filled in).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Extend comment.

--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1641,6 +1641,68 @@ static void copy_vcpu_nonreg_state(struc
     hvm_set_nonreg_state(cd_vcpu, &nrs);
 }
 
+static int copy_guest_area(struct guest_area *cd_area,
+                           const struct guest_area *d_area,
+                           struct vcpu *cd_vcpu,
+                           const struct domain *d)
+{
+    mfn_t d_mfn, cd_mfn;
+
+    if ( !d_area->pg )
+        return 0;
+
+    d_mfn = page_to_mfn(d_area->pg);
+
+    /* Allocate & map a page for the area if it hasn't been already. */
+    if ( !cd_area->pg )
+    {
+        gfn_t gfn = mfn_to_gfn(d, d_mfn);
+        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
+        p2m_type_t p2mt;
+        p2m_access_t p2ma;
+        unsigned int offset;
+        int ret;
+
+        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
+        if ( mfn_eq(cd_mfn, INVALID_MFN) )
+        {
+            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
+
+            if ( !pg )
+                return -ENOMEM;
+
+            cd_mfn = page_to_mfn(pg);
+            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
+
+            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
+                                 p2m->default_access, -1);
+            if ( ret )
+                return ret;
+        }
+        else if ( p2mt != p2m_ram_rw )
+            return -EBUSY;
+
+        /*
+         * Map the area into the guest. For simplicity specify the entire range up
+         * to the end of the page: All the function uses it for is to check
+         * that the range doesn't cross page boundaries. Having the area mapped
+         * in the original domain implies that it fits there and therefore will
+         * also fit in the clone.
+         */
+        offset = PAGE_OFFSET(d_area->map);
+        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
+                             PAGE_SIZE - offset, cd_area, NULL);
+        if ( ret )
+            return ret;
+    }
+    else
+        cd_mfn = page_to_mfn(cd_area->pg);
+
+    copy_domain_page(cd_mfn, d_mfn);
+
+    return 0;
+}
+
 static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
 {
     struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
@@ -1733,6 +1795,16 @@ static int copy_vcpu_settings(struct dom
             copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
         }
 
+        /* Same for the (physically registered) runstate and time info areas. */
+        ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
+                              &d_vcpu->runstate_guest_area, cd_vcpu, d);
+        if ( ret )
+            return ret;
+        ret = copy_guest_area(&cd_vcpu->arch.time_guest_area,
+                              &d_vcpu->arch.time_guest_area, cd_vcpu, d);
+        if ( ret )
+            return ret;
+
         ret = copy_vpmu(d_vcpu, cd_vcpu);
         if ( ret )
             return ret;
@@ -1974,7 +2046,10 @@ int mem_sharing_fork_reset(struct domain
 
  state:
     if ( reset_state )
+    {
         rc = copy_settings(d, pd);
+        /* TBD: What to do here with -ERESTART? */
+    }
 
     domain_unpause(d);
 
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1572,6 +1572,13 @@ void unmap_vcpu_info(struct vcpu *v)
     put_page_and_type(mfn_to_page(mfn));
 }
 
+int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
+                   struct guest_area *area,
+                   void (*populate)(void *dst, struct vcpu *v))
+{
+    return -EOPNOTSUPP;
+}
+
 /*
  * This is only intended to be used for domain cleanup (or more generally only
  * with at least the respective vCPU, if it's not the current one, reliably



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:57:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529336.823657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puErN-0004Zm-Gh; Wed, 03 May 2023 15:57:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529336.823657; Wed, 03 May 2023 15:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puErN-0004Zf-DU; Wed, 03 May 2023 15:57:41 +0000
Received: by outflank-mailman (input) for mailman id 529336;
 Wed, 03 May 2023 15:57:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puErM-0004ZO-Hd
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:57:40 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2051.outbound.protection.outlook.com [40.107.7.51])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3ad93c58-e9cb-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 17:57:39 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM9PR04MB8290.eurprd04.prod.outlook.com (2603:10a6:20b:3e3::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 15:57:11 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:57:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ad93c58-e9cb-11ed-b225-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ecl6OoeY3v9bYuSfyv4XwM4HHyOiqOqkPixZ2UG8OIn7hDHfwjANc11GAG5XUJP9H9cZDdUiSD/QKUk2Tlzqq4AAkW3O0FrFTnYS+HWBGl07akyZCsr87tGCSAJA1bWxzC8I6cj7fQLzGWsASxQdUGGEiel2hOUJdMqhWelguqaCK+6Ox+UN8fioKTxBFpkQMBXSGUnQ1lhmmINQIsTXy8Bzq7RqYZ3Bb5Zq08j6PfnUGowBT6c7p5Q9y13CLCXCL0w/TTFbyTJsdvPrzjvkfM4+jfV8EW4iPkyyvOhO6pxUUZF8VomcIXUxFTYNU9RqZekujizyJ4q6bB/FNnL5aw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NH84g47tK7c4GlHoaAADCkWf+J2O8DieVbFkOg4ugNs=;
 b=Riy4pq1Vr2jZEw0jWPQCWBvh1Gzq2UkbXR6FB4EvslF8RwR8WdvsmllqgTXGcnbExY8lAYNReBHnxQSGSc/YhTky4W15b38Z4fHcmibXXqf/OT9kQ6WM6jTUBXPcNV9d7mUF9G1WsedKS2xci/lHaXmAe7LwJKW6Dg8QeArwewyWsL5zmYCgBP3Xbrun8jZq0YMnJfydhX6O6O2XZHNOakqvw5yuZtnH+LaGn0X0N/OkKL4RiS1Sg8RdtRZxp4xOZAJA+AgPOVFMnQ7WxD2DYGUVAPpd4FXmrfCJY3+Nft64nkyfn+pLJi5FpsvL3rhzFqaXP6bacWdpqXcYK6RQWA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NH84g47tK7c4GlHoaAADCkWf+J2O8DieVbFkOg4ugNs=;
 b=AN+UqnCuIkVXFXGo75TS2i16XkYOfqjLHsaiJixrQWP6QljTr+mRTbhvMnR1KiEIXj+D/OdvKKLW+xksjr1pRwiDNn4FnupuljH+M4NOqedXj1hMWupWC4bvtYKj7fWME+Yn0/Yuu6PstWH+z6VzHgN96hFax3hm8a94tUnozA/XYK/cd/ya88DuRZ6skyvdypsLasIVZMbycOgePIV5b8tjYUv76S7rPSV+DHzRE+KI49rfNaRff+zUPQ52NbXDrGHDqhmkOcfNYmh8daT4Ua797b3mGpeBapdTL90jjNsLQa2ByAjVsMDTVZxlvJeqRGVLNTXSmXcHlYiJglRexQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <140e906f-07c4-ae40-1d37-ae9966709289@suse.com>
Date: Wed, 3 May 2023 17:57:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 5/8] domain: map/unmap GADDR based shared guest areas
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
In-Reply-To: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0040.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::8) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AM9PR04MB8290:EE_
X-MS-Office365-Filtering-Correlation-Id: 3d4925db-50fc-4a26-627b-08db4bef0e49
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3d4925db-50fc-4a26-627b-08db4bef0e49
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 15:57:11.3208
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8290

The registration by virtual/linear address has downsides: At least on
x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
PV domains the areas are inaccessible (and hence cannot be updated by
Xen) when in guest-user mode, and for HVM guests they may be
inaccessible when Meltdown mitigations are in place. (There are yet
more issues.)

In preparation for the introduction of new vCPU operations allowing the
respective areas (one of the two is x86-specific) to be registered by
guest-physical address, flesh out the map/unmap functions.

Noteworthy differences from map_vcpu_info():
- areas can be registered more than once (and de-registered),
- remote vCPU-s are paused rather than checked for being down (which in
  principle can change right after the check),
- the domain lock is taken for a much smaller region.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: By using global domain page mappings the demand on the underlying
     VA range may increase significantly. I did consider using per-
     domain mappings instead, but they exist for x86 only. Of course we
     could have arch_{,un}map_guest_area() alias the global domain page
     mapping functions on Arm while using per-domain mappings on x86.
     Then again, map_vcpu_info() doesn't (and can't) do so.

RFC: In map_guest_area() I'm not checking the P2M type, instead - just
     like map_vcpu_info() - solely relying on the type ref acquisition.
     Checking for p2m_ram_rw alone would be wrong, as at least
     p2m_ram_logdirty ought to also be okay to use here (and in similar
     cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
     used here (like altp2m_vcpu_enable_ve() does) as well as in
     map_vcpu_info(); then again, without the P2M lock held, the P2M
     type is stale by the time it is looked at anyway.
---
v2: currd -> d, to cover mem-sharing's copy_guest_area(). Re-base over
    change(s) earlier in the series. Use ~0 as "unmap" request indicator.
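As a reviewer aid, the two -ENXIO checks at the top of the new
map_guest_area() (the area must not cross a page boundary, and must be
suitably aligned), together with the v2 "~0 means unmap" convention, can
be mirrored in stand-alone form. guest_area_addr_ok() is a made-up name,
PAGE_SHIFT is fixed at 12 here, and this sketches only the address
checks, not the page reference handling:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/*
 * Stand-alone model of the address validation in map_guest_area():
 * an all-ones address is the "unmap" request and bypasses the checks;
 * any other address must keep the whole area within a single page and
 * be aligned to the guest's long size.
 */
static bool guest_area_addr_ok(uint64_t gaddr, size_t size, size_t align)
{
    if ( !~gaddr )
        return true;                          /* unmap request */
    if ( PFN_DOWN(gaddr) != PFN_DOWN(gaddr + size - 1) )
        return false;                         /* crosses a page: -ENXIO */
    if ( gaddr & (align - 1) )
        return false;                         /* misaligned: -ENXIO */
    return true;
}
```

Note that the single-page restriction is what allows the area to be kept
mapped via one page reference for the lifetime of the registration.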

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1576,7 +1576,82 @@ int map_guest_area(struct vcpu *v, paddr
                    struct guest_area *area,
                    void (*populate)(void *dst, struct vcpu *v))
 {
-    return -EOPNOTSUPP;
+    struct domain *d = v->domain;
+    void *map = NULL;
+    struct page_info *pg = NULL;
+    int rc = 0;
+
+    if ( ~gaddr )
+    {
+        unsigned long gfn = PFN_DOWN(gaddr);
+        unsigned int align;
+        p2m_type_t p2mt;
+
+        if ( gfn != PFN_DOWN(gaddr + size - 1) )
+            return -ENXIO;
+
+#ifdef CONFIG_COMPAT
+        if ( has_32bit_shinfo(d) )
+            align = alignof(compat_ulong_t);
+        else
+#endif
+            align = alignof(xen_ulong_t);
+        if ( gaddr & (align - 1) )
+            return -ENXIO;
+
+        rc = check_get_page_from_gfn(d, _gfn(gfn), false, &p2mt, &pg);
+        if ( rc )
+            return rc;
+
+        if ( !get_page_type(pg, PGT_writable_page) )
+        {
+            put_page(pg);
+            return -EACCES;
+        }
+
+        map = __map_domain_page_global(pg);
+        if ( !map )
+        {
+            put_page_and_type(pg);
+            return -ENOMEM;
+        }
+        map += PAGE_OFFSET(gaddr);
+    }
+
+    if ( v != current )
+    {
+        if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        {
+            rc = -ERESTART;
+            goto unmap;
+        }
+
+        vcpu_pause(v);
+
+        spin_unlock(&d->hypercall_deadlock_mutex);
+    }
+
+    domain_lock(d);
+
+    if ( map )
+        populate(map, v);
+
+    SWAP(area->pg, pg);
+    SWAP(area->map, map);
+
+    domain_unlock(d);
+
+    if ( v != current )
+        vcpu_unpause(v);
+
+ unmap:
+    if ( pg )
+    {
+        unmap_domain_page_global(map);
+        put_page_and_type(pg);
+    }
+
+    return rc;
 }
 
 /*
@@ -1587,9 +1662,24 @@ int map_guest_area(struct vcpu *v, paddr
 void unmap_guest_area(struct vcpu *v, struct guest_area *area)
 {
     struct domain *d = v->domain;
+    void *map;
+    struct page_info *pg;
 
     if ( v != current )
         ASSERT(atomic_read(&v->pause_count) | atomic_read(&d->pause_count));
+
+    domain_lock(d);
+    map = area->map;
+    area->map = NULL;
+    pg = area->pg;
+    area->pg = NULL;
+    domain_unlock(d);
+
+    if ( pg )
+    {
+        unmap_domain_page_global(map);
+        put_page_and_type(pg);
+    }
 }
 
 int default_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:58:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:58:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529342.823673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puErs-0005GW-A4; Wed, 03 May 2023 15:58:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529342.823673; Wed, 03 May 2023 15:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puErs-0005Fz-48; Wed, 03 May 2023 15:58:12 +0000
Received: by outflank-mailman (input) for mailman id 529342;
 Wed, 03 May 2023 15:58:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puErr-0004ZO-Ar
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:58:11 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2042.outbound.protection.outlook.com [40.107.7.42])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4cfa37bf-e9cb-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 17:58:09 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM9PR04MB8290.eurprd04.prod.outlook.com (2603:10a6:20b:3e3::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 15:58:03 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:58:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cfa37bf-e9cb-11ed-b225-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <77218fb0-5e96-4ecb-c2b0-4fe8c3ba683f@suse.com>
Date: Wed, 3 May 2023 17:58:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 7/8] x86: introduce GADDR based secondary time area
 registration alternative
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
In-Reply-To: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FRYP281CA0014.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::24)
 To AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AM9PR04MB8290:EE_
X-MS-Office365-Filtering-Correlation-Id: 8adacc83-d5b0-4ccc-3c71-08db4bef2d0e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8adacc83-d5b0-4ccc-3c71-08db4bef2d0e
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 15:58:02.9418
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8290

The registration by virtual/linear address has downsides: The access is
expensive for HVM/PVH domains. Furthermore for 64-bit PV domains the area
is inaccessible (and hence cannot be updated by Xen) when in guest-user
mode.

Introduce a new vCPU operation allowing the secondary time area to be
registered by guest-physical address.

A downside, at least in theory, of using physically registered areas is
that a PV guest then won't see dirty (and perhaps also accessed) bits
set in the respective page table entries.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Re-base.
v2: Forge version in force_update_secondary_system_time().
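The v2 change can be illustrated with the version arithmetic alone. The
vcpu_time_info version field is even while the contents are stable and
odd while an update is in flight; presetting it to -1 means the final
increment leaves the guest seeing version 0, as for a freshly set up
area. A reduced model (the helper name matches Xen's, but the behaviour
is simplified here to the bare increment, with memory barriers omitted):

```c
#include <stdint.h>

/*
 * Reduced model of the version handling: writing the time area ends
 * with a version_update_end(), i.e. an increment back to an even
 * value. Presetting the version to -1 (all ones) therefore makes the
 * guest-visible version wrap to 0.
 */
static uint32_t version_update_end(uint32_t version)
{
    return version + 1;
}
```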

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1504,6 +1504,15 @@ int arch_vcpu_reset(struct vcpu *v)
     return 0;
 }
 
+static void cf_check
+time_area_populate(void *map, struct vcpu *v)
+{
+    if ( is_pv_vcpu(v) )
+        v->arch.pv.pending_system_time.version = 0;
+
+    force_update_secondary_system_time(v, map);
+}
+
 long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
@@ -1541,6 +1550,25 @@ long do_vcpu_op(int cmd, unsigned int vc
 
         break;
     }
+
+    case VCPUOP_register_vcpu_time_phys_area:
+    {
+        struct vcpu_register_time_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.p, arg, 1) )
+            break;
+
+        rc = map_guest_area(v, area.addr.p,
+                            sizeof(vcpu_time_info_t),
+                            &v->arch.time_guest_area,
+                            time_area_populate);
+        if ( rc == -ERESTART )
+            rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op, "iih",
+                                               cmd, vcpuid, arg);
+
+        break;
+    }
 
     case VCPUOP_get_physid:
     {
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -692,6 +692,8 @@ void domain_cpu_policy_changed(struct do
 
 bool update_secondary_system_time(struct vcpu *,
                                   struct vcpu_time_info *);
+void force_update_secondary_system_time(struct vcpu *,
+                                        struct vcpu_time_info *);
 
 void vcpu_show_registers(const struct vcpu *);
 
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1633,6 +1633,16 @@ void force_update_vcpu_system_time(struc
     __update_vcpu_system_time(v, 1);
 }
 
+void force_update_secondary_system_time(struct vcpu *v,
+                                        struct vcpu_time_info *map)
+{
+    struct vcpu_time_info u;
+
+    collect_time_info(v, &u);
+    u.version = -1; /* Compensate for version_update_end(). */
+    write_time_guest_area(map, &u);
+}
+
 static void update_domain_rtc(void)
 {
     struct domain *d;
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -115,6 +115,7 @@ compat_vcpu_op(int cmd, unsigned int vcp
 
     case VCPUOP_send_nmi:
     case VCPUOP_get_physid:
+    case VCPUOP_register_vcpu_time_phys_area:
         rc = do_vcpu_op(cmd, vcpuid, arg);
         break;
 
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -233,6 +233,7 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_register_ti
  * VMASST_TYPE_runstate_update_flag engaged by the domain.
  */
 #define VCPUOP_register_runstate_phys_area      14
+#define VCPUOP_register_vcpu_time_phys_area     15
 
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:58:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:58:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529341.823667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puErr-0005D0-Ud; Wed, 03 May 2023 15:58:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529341.823667; Wed, 03 May 2023 15:58:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puErr-0005Ct-Rm; Wed, 03 May 2023 15:58:11 +0000
Received: by outflank-mailman (input) for mailman id 529341;
 Wed, 03 May 2023 15:58:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puErq-0004ZO-Aq
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 15:58:10 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2042.outbound.protection.outlook.com [40.107.7.42])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4ca49550-e9cb-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 17:58:09 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM9PR04MB8290.eurprd04.prod.outlook.com (2603:10a6:20b:3e3::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Wed, 3 May
 2023 15:57:41 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 15:57:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ca49550-e9cb-11ed-b225-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bb902943-c139-ec6c-66f9-284ceff3995d@suse.com>
Date: Wed, 3 May 2023 17:57:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 6/8] domain: introduce GADDR based runstate area
 registration alternative
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
In-Reply-To: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0025.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::12) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AM9PR04MB8290:EE_
X-MS-Office365-Filtering-Correlation-Id: 884b6d8d-6959-4ce3-f2d8-08db4bef2063
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 884b6d8d-6959-4ce3-f2d8-08db4bef2063
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 15:57:41.6601
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8290

The registration by virtual/linear address has downsides: At least on
x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
PV domains the area is inaccessible (and hence cannot be updated by Xen)
when in guest-user mode.

Introduce a new vCPU operation allowing the runstate area to be
registered by guest-physical address.

A downside, at least in theory, of using physically registered areas is
that a PV guest then won't see dirty (and perhaps also accessed) bits
set in the respective page table entries.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Extend comment in public header.
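From the guest side, registration is a single vcpu_op hypercall carrying
the guest-physical address of the buffer. A schematic caller, with the
hypercall replaced by a recording stub (stub_vcpu_op(), the recording
globals, and the trimmed-down argument struct are illustrative only; the
sub-op number 14 is the one added to the public header by this patch):

```c
#include <stdint.h>

#define VCPUOP_register_runstate_phys_area 14

/* Trimmed-down argument struct; the real layout lives in the public
 * headers under xen/include/public/. */
struct vcpu_register_runstate_memory_area {
    union { uint64_t p; } addr;
};

/* Recording stub standing in for the real HYPERVISOR_vcpu_op. */
static int recorded_cmd = -1;
static uint64_t recorded_gaddr;

static int stub_vcpu_op(int cmd, unsigned int vcpuid, void *arg)
{
    recorded_cmd = cmd;
    recorded_gaddr =
        ((struct vcpu_register_runstate_memory_area *)arg)->addr.p;
    (void)vcpuid;
    return 0;
}

/* Register the runstate area at gaddr; passing ~0 de-registers it,
 * per the series' "unmap" convention. */
static int register_runstate_area(uint64_t gaddr)
{
    struct vcpu_register_runstate_memory_area area = { .addr.p = gaddr };

    return stub_vcpu_op(VCPUOP_register_runstate_phys_area, 0, &area);
}
```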

--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -12,6 +12,22 @@
 CHECK_vcpu_get_physid;
 #undef xen_vcpu_get_physid
 
+static void cf_check
+runstate_area_populate(void *map, struct vcpu *v)
+{
+    if ( is_pv_vcpu(v) )
+        v->arch.pv.need_update_runstate_area = false;
+
+    v->runstate_guest_area_compat = true;
+
+    if ( v == current )
+    {
+        struct compat_vcpu_runstate_info *info = map;
+
+        XLAT_vcpu_runstate_info(info, &v->runstate);
+    }
+}
+
 int
 compat_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
@@ -57,6 +73,25 @@ compat_vcpu_op(int cmd, unsigned int vcp
 
         break;
     }
+
+    case VCPUOP_register_runstate_phys_area:
+    {
+        struct compat_vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.p, arg, 1) )
+            break;
+
+        rc = map_guest_area(v, area.addr.p,
+                            sizeof(struct compat_vcpu_runstate_info),
+                            &v->runstate_guest_area,
+                            runstate_area_populate);
+        if ( rc == -ERESTART )
+            rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op, "iih",
+                                               cmd, vcpuid, arg);
+
+        break;
+    }
 
     case VCPUOP_register_vcpu_time_memory_area:
     {
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1801,6 +1801,26 @@ bool update_runstate_area(struct vcpu *v
     return rc;
 }
 
+static void cf_check
+runstate_area_populate(void *map, struct vcpu *v)
+{
+#ifdef CONFIG_PV
+    if ( is_pv_vcpu(v) )
+        v->arch.pv.need_update_runstate_area = false;
+#endif
+
+#ifdef CONFIG_COMPAT
+    v->runstate_guest_area_compat = false;
+#endif
+
+    if ( v == current )
+    {
+        struct vcpu_runstate_info *info = map;
+
+        *info = v->runstate;
+    }
+}
+
 long common_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
@@ -1982,6 +2002,25 @@ long common_vcpu_op(int cmd, struct vcpu
 
         break;
     }
+
+    case VCPUOP_register_runstate_phys_area:
+    {
+        struct vcpu_register_runstate_memory_area area;
+
+        rc = -EFAULT;
+        if ( copy_from_guest(&area.addr.p, arg, 1) )
+            break;
+
+        rc = map_guest_area(v, area.addr.p,
+                            sizeof(struct vcpu_runstate_info),
+                            &v->runstate_guest_area,
+                            runstate_area_populate);
+        if ( rc == -ERESTART )
+            rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op, "iih",
+                                               cmd, vcpuid, arg);
+
+        break;
+    }
 
     default:
         rc = -ENOSYS;
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -221,6 +221,19 @@ struct vcpu_register_time_memory_area {
 typedef struct vcpu_register_time_memory_area vcpu_register_time_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_time_memory_area_t);
 
+/*
+ * Like the respective VCPUOP_register_*_memory_area, except that the "addr.p"
+ * field of the supplied struct is a guest physical address (i.e. in GFN
+ * space).  The area may not cross a page boundary.  Pass ~0 to unregister an
+ * area.  Note that as long as an area is registered by physical address, the
+ * linear address based area will not be serviced (updated) by the hypervisor.
+ *
+ * Note that an area registered via VCPUOP_register_runstate_phys_area will be
+ * updated in the same manner as one registered via virtual address with
+ * VMASST_TYPE_runstate_update_flag engaged by the domain.
+ */
+#define VCPUOP_register_runstate_phys_area      14
+
 #endif /* __XEN_PUBLIC_VCPU_H__ */
 
 /*



From xen-devel-bounces@lists.xenproject.org Wed May 03 15:58:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 15:58:46 +0000
Message-ID: <ada41793-629b-3864-c2fc-412bd8d0047d@suse.com>
Date: Wed, 3 May 2023 17:58:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 8/8] common: convert vCPU info area registration
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
In-Reply-To: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Switch to using map_guest_area(). Noteworthy differences from
map_vcpu_info():
- remote vCPU-s are paused rather than checked for being down (which in
  principle can change right after the check),
- the domain lock is taken for a much smaller region,
- the error code for an attempt to re-register the area is now -EBUSY,
- we could in principle permit de-registration when no area was
  previously registered (which would permit "probing", if necessary for
  anything).

Note that this eliminates a bug in copy_vcpu_settings(): The function
allocated a new page regardless of whether the GFN already had a mapping,
in particular breaking the case of two vCPU-s having their info areas on
the same page.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: I'm not really certain whether the preliminary check (ahead of
     calling map_guest_area()) is worth having.
---
v2: Re-base over changes earlier in the series. Properly enforce no re-
    registration. Avoid several casts by introducing local variables.

--- a/xen/arch/x86/include/asm/shared.h
+++ b/xen/arch/x86/include/asm/shared.h
@@ -26,17 +26,20 @@ static inline void arch_set_##field(stru
 #define GET_SET_VCPU(type, field)                               \
 static inline type arch_get_##field(const struct vcpu *v)       \
 {                                                               \
+    const vcpu_info_t *vi = v->vcpu_info_area.map;              \
+                                                                \
     return !has_32bit_shinfo(v->domain) ?                       \
-           v->vcpu_info->native.arch.field :                    \
-           v->vcpu_info->compat.arch.field;                     \
+           vi->native.arch.field : vi->compat.arch.field;       \
 }                                                               \
 static inline void arch_set_##field(struct vcpu *v,             \
                                     type val)                   \
 {                                                               \
+    vcpu_info_t *vi = v->vcpu_info_area.map;                    \
+                                                                \
     if ( !has_32bit_shinfo(v->domain) )                         \
-        v->vcpu_info->native.arch.field = val;                  \
+        vi->native.arch.field = val;                            \
     else                                                        \
-        v->vcpu_info->compat.arch.field = val;                  \
+        vi->compat.arch.field = val;                            \
 }
 
 #else
@@ -57,12 +60,16 @@ static inline void arch_set_##field(stru
 #define GET_SET_VCPU(type, field)                           \
 static inline type arch_get_##field(const struct vcpu *v)   \
 {                                                           \
-    return v->vcpu_info->arch.field;                        \
+    const vcpu_info_t *vi = v->vcpu_info_area.map;          \
+                                                            \
+    return vi->arch.field;                                  \
 }                                                           \
 static inline void arch_set_##field(struct vcpu *v,         \
                                     type val)               \
 {                                                           \
-    v->vcpu_info->arch.field = val;                         \
+    vcpu_info_t *vi = v->vcpu_info_area.map;                \
+                                                            \
+    vi->arch.field = val;                                   \
 }
 
 #endif
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1749,53 +1749,24 @@ static int copy_vpmu(struct vcpu *d_vcpu
 static int copy_vcpu_settings(struct domain *cd, const struct domain *d)
 {
     unsigned int i;
-    struct p2m_domain *p2m = p2m_get_hostp2m(cd);
     int ret = -EINVAL;
 
     for ( i = 0; i < cd->max_vcpus; i++ )
     {
         struct vcpu *d_vcpu = d->vcpu[i];
         struct vcpu *cd_vcpu = cd->vcpu[i];
-        mfn_t vcpu_info_mfn;
 
         if ( !d_vcpu || !cd_vcpu )
             continue;
 
-        /* Copy & map in the vcpu_info page if the guest uses one */
-        vcpu_info_mfn = d_vcpu->vcpu_info_mfn;
-        if ( !mfn_eq(vcpu_info_mfn, INVALID_MFN) )
-        {
-            mfn_t new_vcpu_info_mfn = cd_vcpu->vcpu_info_mfn;
-
-            /* Allocate & map the page for it if it hasn't been already */
-            if ( mfn_eq(new_vcpu_info_mfn, INVALID_MFN) )
-            {
-                gfn_t gfn = mfn_to_gfn(d, vcpu_info_mfn);
-                unsigned long gfn_l = gfn_x(gfn);
-                struct page_info *page;
-
-                if ( !(page = alloc_domheap_page(cd, 0)) )
-                    return -ENOMEM;
-
-                new_vcpu_info_mfn = page_to_mfn(page);
-                set_gpfn_from_mfn(mfn_x(new_vcpu_info_mfn), gfn_l);
-
-                ret = p2m->set_entry(p2m, gfn, new_vcpu_info_mfn,
-                                     PAGE_ORDER_4K, p2m_ram_rw,
-                                     p2m->default_access, -1);
-                if ( ret )
-                    return ret;
-
-                ret = map_vcpu_info(cd_vcpu, gfn_l,
-                                    PAGE_OFFSET(d_vcpu->vcpu_info));
-                if ( ret )
-                    return ret;
-            }
-
-            copy_domain_page(new_vcpu_info_mfn, vcpu_info_mfn);
-        }
-
-        /* Same for the (physically registered) runstate and time info areas. */
+        /*
+         * Copy and map the vcpu_info page and the (physically registered)
+         * runstate and time info areas.
+         */
+        ret = copy_guest_area(&cd_vcpu->vcpu_info_area,
+                              &d_vcpu->vcpu_info_area, cd_vcpu, d);
+        if ( ret )
+            return ret;
         ret = copy_guest_area(&cd_vcpu->runstate_guest_area,
                               &d_vcpu->runstate_guest_area, cd_vcpu, d);
         if ( ret )
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -383,7 +383,7 @@ int pv_shim_shutdown(uint8_t reason)
     for_each_vcpu ( d, v )
     {
         /* Unmap guest vcpu_info page and runstate/time areas. */
-        unmap_vcpu_info(v);
+        unmap_guest_area(v, &v->vcpu_info_area);
         unmap_guest_area(v, &v->runstate_guest_area);
         unmap_guest_area(v, &v->arch.time_guest_area);
 
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1547,7 +1547,7 @@ static void __update_vcpu_system_time(st
     struct vcpu_time_info *u = &vcpu_info(v, time), _u;
     const struct domain *d = v->domain;
 
-    if ( v->vcpu_info == NULL )
+    if ( !v->vcpu_info_area.map )
         return;
 
     collect_time_info(v, &_u);
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -53,7 +53,7 @@ void __dummy__(void)
 
     OFFSET(VCPU_processor, struct vcpu, processor);
     OFFSET(VCPU_domain, struct vcpu, domain);
-    OFFSET(VCPU_vcpu_info, struct vcpu, vcpu_info);
+    OFFSET(VCPU_vcpu_info, struct vcpu, vcpu_info_area.map);
     OFFSET(VCPU_trap_bounce, struct vcpu, arch.pv.trap_bounce);
     OFFSET(VCPU_thread_flags, struct vcpu, arch.flags);
     OFFSET(VCPU_event_addr, struct vcpu, arch.pv.event_callback_eip);
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -96,7 +96,7 @@ static void _show_registers(
     if ( context == CTXT_hypervisor )
         printk(" %pS", _p(regs->rip));
     printk("\nRFLAGS: %016lx   ", regs->rflags);
-    if ( (context == CTXT_pv_guest) && v && v->vcpu_info )
+    if ( (context == CTXT_pv_guest) && v && v->vcpu_info_area.map )
         printk("EM: %d   ", !!vcpu_info(v, evtchn_upcall_mask));
     printk("CONTEXT: %s", context_names[context]);
     if ( v && !is_idle_vcpu(v) )
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -49,7 +49,7 @@ int compat_common_vcpu_op(int cmd, struc
     {
     case VCPUOP_initialise:
     {
-        if ( v->vcpu_info == &dummy_vcpu_info )
+        if ( v->vcpu_info_area.map == &dummy_vcpu_info )
             return -EINVAL;
 
 #ifdef CONFIG_HVM
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -127,10 +127,10 @@ static void vcpu_info_reset(struct vcpu
 {
     struct domain *d = v->domain;
 
-    v->vcpu_info = ((v->vcpu_id < XEN_LEGACY_MAX_VCPUS)
-                    ? (vcpu_info_t *)&shared_info(d, vcpu_info[v->vcpu_id])
-                    : &dummy_vcpu_info);
-    v->vcpu_info_mfn = INVALID_MFN;
+    v->vcpu_info_area.map =
+        ((v->vcpu_id < XEN_LEGACY_MAX_VCPUS)
+         ? (vcpu_info_t *)&shared_info(d, vcpu_info[v->vcpu_id])
+         : &dummy_vcpu_info);
 }
 
 static void vmtrace_free_buffer(struct vcpu *v)
@@ -964,7 +964,7 @@ int domain_kill(struct domain *d)
             return -ERESTART;
         for_each_vcpu ( d, v )
         {
-            unmap_vcpu_info(v);
+            unmap_guest_area(v, &v->vcpu_info_area);
             unmap_guest_area(v, &v->runstate_guest_area);
         }
         d->is_dying = DOMDYING_dead;
@@ -1419,7 +1419,7 @@ int domain_soft_reset(struct domain *d,
     for_each_vcpu ( d, v )
     {
         set_xen_guest_handle(runstate_guest(v), NULL);
-        unmap_vcpu_info(v);
+        unmap_guest_area(v, &v->vcpu_info_area);
         unmap_guest_area(v, &v->runstate_guest_area);
     }
 
@@ -1467,111 +1467,6 @@ int vcpu_reset(struct vcpu *v)
     return rc;
 }
 
-/*
- * Map a guest page in and point the vcpu_info pointer at it.  This
- * makes sure that the vcpu_info is always pointing at a valid piece
- * of memory, and it sets a pending event to make sure that a pending
- * event doesn't get missed.
- */
-int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset)
-{
-    struct domain *d = v->domain;
-    void *mapping;
-    vcpu_info_t *new_info;
-    struct page_info *page;
-    unsigned int align;
-
-    if ( offset > (PAGE_SIZE - sizeof(*new_info)) )
-        return -ENXIO;
-
-#ifdef CONFIG_COMPAT
-    BUILD_BUG_ON(sizeof(*new_info) != sizeof(new_info->compat));
-    if ( has_32bit_shinfo(d) )
-        align = alignof(new_info->compat);
-    else
-#endif
-        align = alignof(*new_info);
-    if ( offset & (align - 1) )
-        return -ENXIO;
-
-    if ( !mfn_eq(v->vcpu_info_mfn, INVALID_MFN) )
-        return -EINVAL;
-
-    /* Run this command on yourself or on other offline VCPUS. */
-    if ( (v != current) && !(v->pause_flags & VPF_down) )
-        return -EINVAL;
-
-    page = get_page_from_gfn(d, gfn, NULL, P2M_UNSHARE);
-    if ( !page )
-        return -EINVAL;
-
-    if ( !get_page_type(page, PGT_writable_page) )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    mapping = __map_domain_page_global(page);
-    if ( mapping == NULL )
-    {
-        put_page_and_type(page);
-        return -ENOMEM;
-    }
-
-    new_info = (vcpu_info_t *)(mapping + offset);
-
-    if ( v->vcpu_info == &dummy_vcpu_info )
-    {
-        memset(new_info, 0, sizeof(*new_info));
-#ifdef XEN_HAVE_PV_UPCALL_MASK
-        __vcpu_info(v, new_info, evtchn_upcall_mask) = 1;
-#endif
-    }
-    else
-    {
-        memcpy(new_info, v->vcpu_info, sizeof(*new_info));
-    }
-
-    v->vcpu_info = new_info;
-    v->vcpu_info_mfn = page_to_mfn(page);
-
-    /* Set new vcpu_info pointer /before/ setting pending flags. */
-    smp_wmb();
-
-    /*
-     * Mark everything as being pending just to make sure nothing gets
-     * lost.  The domain will get a spurious event, but it can cope.
-     */
-#ifdef CONFIG_COMPAT
-    if ( !has_32bit_shinfo(d) )
-        write_atomic(&new_info->native.evtchn_pending_sel, ~0);
-    else
-#endif
-        write_atomic(&vcpu_info(v, evtchn_pending_sel), ~0);
-    vcpu_mark_events_pending(v);
-
-    return 0;
-}
-
-/*
- * Unmap the vcpu info page if the guest decided to place it somewhere
- * else. This is used from domain_kill() and domain_soft_reset().
- */
-void unmap_vcpu_info(struct vcpu *v)
-{
-    mfn_t mfn = v->vcpu_info_mfn;
-
-    if ( mfn_eq(mfn, INVALID_MFN) )
-        return;
-
-    unmap_domain_page_global((void *)
-                             ((unsigned long)v->vcpu_info & PAGE_MASK));
-
-    vcpu_info_reset(v); /* NB: Clobbers v->vcpu_info_mfn */
-
-    put_page_and_type(mfn_to_page(mfn));
-}
-
 int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
                    struct guest_area *area,
                    void (*populate)(void *dst, struct vcpu *v))
@@ -1633,14 +1528,44 @@ int map_guest_area(struct vcpu *v, paddr
 
     domain_lock(d);
 
-    if ( map )
-        populate(map, v);
+    /* No re-registration of the vCPU info area. */
+    if ( area != &v->vcpu_info_area || !area->pg )
+    {
+        if ( map )
+            populate(map, v);
 
-    SWAP(area->pg, pg);
-    SWAP(area->map, map);
+        SWAP(area->pg, pg);
+        SWAP(area->map, map);
+    }
+    else
+        rc = -EBUSY;
 
     domain_unlock(d);
 
+    /* Set pending flags /after/ new vcpu_info pointer was set. */
+    if ( area == &v->vcpu_info_area && !rc )
+    {
+        /*
+         * Mark everything as being pending just to make sure nothing gets
+         * lost.  The domain will get a spurious event, but it can cope.
+         */
+#ifdef CONFIG_COMPAT
+        if ( !has_32bit_shinfo(d) )
+        {
+            vcpu_info_t *info = area->map;
+
+            /* For VCPUOP_register_vcpu_info handling in common_vcpu_op(). */
+            BUILD_BUG_ON(sizeof(*info) != sizeof(info->compat));
+            write_atomic(&info->native.evtchn_pending_sel, ~0);
+        }
+        else
+#endif
+            write_atomic(&vcpu_info(v, evtchn_pending_sel), ~0);
+        vcpu_mark_events_pending(v);
+
+        force_update_vcpu_system_time(v);
+    }
+
     if ( v != current )
         vcpu_unpause(v);
 
@@ -1670,7 +1595,10 @@ void unmap_guest_area(struct vcpu *v, st
 
     domain_lock(d);
     map = area->map;
-    area->map = NULL;
+    if ( area == &v->vcpu_info_area )
+        vcpu_info_reset(v);
+    else
+        area->map = NULL;
     pg = area->pg;
     area->pg = NULL;
     domain_unlock(d);
@@ -1801,6 +1729,27 @@ bool update_runstate_area(struct vcpu *v
     return rc;
 }
 
+/*
+ * This makes sure that the vcpu_info is always pointing at a valid piece of
+ * memory, and it sets a pending event to make sure that a pending event
+ * doesn't get missed.
+ */
+static void cf_check
+vcpu_info_populate(void *map, struct vcpu *v)
+{
+    vcpu_info_t *info = map;
+
+    if ( v->vcpu_info_area.map == &dummy_vcpu_info )
+    {
+        memset(info, 0, sizeof(*info));
+#ifdef XEN_HAVE_PV_UPCALL_MASK
+        __vcpu_info(v, info, evtchn_upcall_mask) = 1;
+#endif
+    }
+    else
+        memcpy(info, v->vcpu_info_area.map, sizeof(*info));
+}
+
 static void cf_check
 runstate_area_populate(void *map, struct vcpu *v)
 {
@@ -1830,7 +1779,7 @@ long common_vcpu_op(int cmd, struct vcpu
     switch ( cmd )
     {
     case VCPUOP_initialise:
-        if ( v->vcpu_info == &dummy_vcpu_info )
+        if ( v->vcpu_info_area.map == &dummy_vcpu_info )
             return -EINVAL;
 
         rc = arch_initialise_vcpu(v, arg);
@@ -1961,16 +1910,29 @@ long common_vcpu_op(int cmd, struct vcpu
     case VCPUOP_register_vcpu_info:
     {
         struct vcpu_register_vcpu_info info;
+        paddr_t gaddr;
 
         rc = -EFAULT;
         if ( copy_from_guest(&info, arg, 1) )
             break;
 
-        domain_lock(d);
-        rc = map_vcpu_info(v, info.mfn, info.offset);
-        domain_unlock(d);
+        rc = -EINVAL;
+        gaddr = gfn_to_gaddr(_gfn(info.mfn)) + info.offset;
+        if ( !~gaddr ||
+             gfn_x(gaddr_to_gfn(gaddr)) != info.mfn )
+            break;
 
-        force_update_vcpu_system_time(v);
+        /* Preliminary check only; see map_guest_area(). */
+        rc = -EBUSY;
+        if ( v->vcpu_info_area.pg )
+            break;
+
+        /* See the BUILD_BUG_ON() in vcpu_info_populate(). */
+        rc = map_guest_area(v, gaddr, sizeof(vcpu_info_t),
+                            &v->vcpu_info_area, vcpu_info_populate);
+        if ( rc == -ERESTART )
+            rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op, "iih",
+                                               cmd, vcpuid, arg);
 
         break;
     }
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -79,9 +79,6 @@ void cf_check free_pirq_struct(void *);
 int  arch_vcpu_create(struct vcpu *v);
 void arch_vcpu_destroy(struct vcpu *v);
 
-int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned int offset);
-void unmap_vcpu_info(struct vcpu *v);
-
 int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
                    struct guest_area *area,
                    void (*populate)(void *dst, struct vcpu *v));
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -175,7 +175,7 @@ struct vcpu
 
     int              processor;
 
-    vcpu_info_t     *vcpu_info;
+    struct guest_area vcpu_info_area;
 
     struct domain   *domain;
 
@@ -288,9 +288,6 @@ struct vcpu
 
     struct waitqueue_vcpu *waitqueue_vcpu;
 
-    /* Guest-specified relocation of vcpu_info. */
-    mfn_t            vcpu_info_mfn;
-
     struct evtchn_fifo_vcpu *evtchn_fifo;
 
     /* vPCI per-vCPU area, used to store data for long running operations. */
--- a/xen/include/xen/shared.h
+++ b/xen/include/xen/shared.h
@@ -44,6 +44,7 @@ typedef struct vcpu_info vcpu_info_t;
 extern vcpu_info_t dummy_vcpu_info;
 
 #define shared_info(d, field)      __shared_info(d, (d)->shared_info, field)
-#define vcpu_info(v, field)        __vcpu_info(v, (v)->vcpu_info, field)
+#define vcpu_info(v, field)        \
+        __vcpu_info(v, (vcpu_info_t *)(v)->vcpu_info_area.map, field)
 
 #endif /* __XEN_SHARED_H__ */



From xen-devel-bounces@lists.xenproject.org Wed May 03 16:32:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 16:32:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529380.823729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOl-00047h-9j; Wed, 03 May 2023 16:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529380.823729; Wed, 03 May 2023 16:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOl-00046e-1Y; Wed, 03 May 2023 16:32:11 +0000
Received: by outflank-mailman (input) for mailman id 529380;
 Wed, 03 May 2023 16:32:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O9tK=AY=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1puFOj-0003pe-9y
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 16:32:09 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b1c3d72-e9d0-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 18:32:06 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id
 2adb3069b0e04-4f00c33c3d6so6788921e87.2
 for <xen-devel@lists.xenproject.org>; Wed, 03 May 2023 09:32:06 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 h20-20020a2e9ed4000000b002a634bfa224sm6074321ljk.40.2023.05.03.09.32.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 03 May 2023 09:32:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b1c3d72-e9d0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683131526; x=1685723526;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Na1AM2pS/tIxc8UzxXCz1qX+aY29hkPhhjEAhDGZruo=;
        b=YN4i0x2Yf/gqJP/4Wil8NwO76B5aVgrwNYJfDOi8HrmreBLvnv7H08aytbVqZ11YhL
         4KekE+f6UrOsu3MY9X0MLjVCZYapuKXWZQmCGXGqq8oAdXx3NeEHZitnOUlLKf47hQIK
         vlcgKcr35nyhsh7XF1oHcpG/Q4bJJS3fAYMSY0cvU/HeMXSfq6cRBqE/bRG/69xs0Sr6
         4icKCS2Oqd4uKktE5QoUTobPVOK4ETFx7dZIOhK1XQ8c7XZdnXz09wuhqf0jo67vsRsp
         NK2vVQRZ+tmQYCMI8Qneg6sD6J93ozqM+SGdnvl+wYsxK5GDUpBkMIECURDrm9kV4ydO
         tPxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683131526; x=1685723526;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Na1AM2pS/tIxc8UzxXCz1qX+aY29hkPhhjEAhDGZruo=;
        b=eII/XvRzZF/PXjkQ8ZFZoREfOlqHoiknav+w5Sz+PIpOGfLmLQWOQE7EXUR2XWtR5D
         aLjnTOtf7S6GUHp9rOYfl2xzCB8Y5aIwsoGMDWKZJz++4ozQ28tbHQugnFluD+HV5/Rx
         sfpU7IhZqV8uOe3HQCk4JmGlMzwnEYI+HbrozCES3w3LvRdRAluXM1Nc+ShDefJm1SUL
         zRC9/8Kyl3J0AwowVYT+C/sD13gKHfLDGWAXd+JlxwrRi8nXhcYdipY9pyKEM7quvudO
         qG2xjHzKb7IPn1Q5hErenOiaZj8RSEDIMGgCXzkoo+TByMfuF5Pk5wPlCu9zctWt44hb
         N7eQ==
X-Gm-Message-State: AC+VfDysJH1CQn/S7VEFuRQRKd3yt7XIhzg/2gU27msm2wh/9+O/Kcd1
	67Ke0iL1sgT9NjKlc8rmAAvVj/Fys2w=
X-Google-Smtp-Source: ACHHUZ6yZKqCW5UQULMXODri/ZEA/RVcnsdSbX8Yx5MfUVYSJ4SkZhriZAecO1eFmtWzAYYMtP4P3g==
X-Received: by 2002:ac2:42d4:0:b0:4ec:9fe9:fea9 with SMTP id n20-20020ac242d4000000b004ec9fe9fea9mr1289229lfl.56.1683131525895;
        Wed, 03 May 2023 09:32:05 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 1/4] xen/riscv: add VM space layout
Date: Wed,  3 May 2023 19:31:58 +0300
Message-Id: <a4004849c87990e5379acc5d60a52492385cd8e0.1683131359.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1683131359.git.oleksii.kurochko@gmail.com>
References: <cover.1683131359.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Also add an explanation about the treatment of the top VA bits.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V6:
 - update comment above the RISCV-64 layout table
 - add Slot column to the table with RISCV-64 Layout
 - update RV-64 layout table.
---
Changes in V5:
* the patch was introduced in the current patch series.
---
 xen/arch/riscv/include/asm/config.h | 31 +++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 763a922a04..73b86ce789 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -4,6 +4,37 @@
 #include <xen/const.h>
 #include <xen/page-size.h>
 
+/*
+ * RISC-V64 Layout:
+ *
+ * From the riscv-privileged doc:
+ *   When mapping between narrower and wider addresses,
+ *   RISC-V zero-extends a narrower physical address to a wider size.
+ *   The mapping between 64-bit virtual addresses and the 39-bit usable
+ *   address space of Sv39 is not based on zero-extension but instead
+ *   follows an entrenched convention that allows an OS to use one or
+ *   a few of the most-significant bits of a full-size (64-bit) virtual
+ *   address to quickly distinguish user and supervisor address regions.
+ *
+ * It means that:
+ *   the top VA bits (63:39) take no part in translation; they must,
+ *   however, all be equal to bit 38, or the access will fault.
+ *
+ * ============================================================================
+ *    Start addr    |   End addr        |  Size  | Slot       |area description
+ * ============================================================================
+ * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
+ * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
+ * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
+ *                 ...                  |  1 GB  | L2 510     | Unused
+ * 0000003200000000 |  0000007f40000000 | 309 GB | L2 200-508 | Direct map
+ *                 ...                  |  1 GB  | L2 199     | Unused
+ * 0000003100000000 |  00000031c0000000 |  3 GB  | L2 196-198 | Frametable
+ *                 ...                  |  1 GB  | L2 195     | Unused
+ * 0000003080000000 |  00000030c0000000 |  1 GB  | L2 194     | VMAP
+ *     .................. unused ..................
+ * ============================================================================
+ */
+
 #if defined(CONFIG_RISCV_64)
 # define LONG_BYTEORDER 3
 # define ELFSIZE 64
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 03 16:32:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 16:32:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529378.823714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOj-0003q4-LX; Wed, 03 May 2023 16:32:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529378.823714; Wed, 03 May 2023 16:32:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOj-0003px-IK; Wed, 03 May 2023 16:32:09 +0000
Received: by outflank-mailman (input) for mailman id 529378;
 Wed, 03 May 2023 16:32:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O9tK=AY=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1puFOi-0003pe-LE
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 16:32:08 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0ac7d538-e9d0-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 18:32:06 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2ac785015d6so5005641fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 03 May 2023 09:32:06 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 h20-20020a2e9ed4000000b002a634bfa224sm6074321ljk.40.2023.05.03.09.32.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 03 May 2023 09:32:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ac7d538-e9d0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683131525; x=1685723525;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=TWeR2qcurw4IzjHMr6XJWFzGIGAQGUDuYVTUhmEWiwA=;
        b=UXJ6jF2MYR5jhjvd7sXNNq9wGZuk7Z/kA1pQbsm3a9PjKdqhRsWejMfJ9y7o2FPLmS
         IV7/smydWp6/vfnGWUGpbtMfQ0+w4mnc8KTmf1szhotCEswsrtKn9AEOavqJOhrC/tqE
         cHTZ0r0Uw0iRfE2TZMhrmTiwdlVHJCtSMsll4Are/9MiLGkqN1a73ytgX79wlJlQF4ke
         +sFl4rBzipRzHZokuwX4dGG5TubK/TurL0TtVDPUgK1SpkkmihfSFVW0N8farxL6SUKg
         vjAjo3TdnnnNZcOrx9pkSXC9ol08m5rxiXIEpoNf4gyncLtVX3/W3gzroJAD7I90U9s6
         gbcw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683131525; x=1685723525;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=TWeR2qcurw4IzjHMr6XJWFzGIGAQGUDuYVTUhmEWiwA=;
        b=PsTE7EORsviiLB8B9g6iYzgpkz6+SDmzKx8U4VXsbT7T0ZXYZ8Ovopnn0MELdARG4E
         Y9jFb2PEShX+DQ6pxDvBbDeVRF+W07SIzk76RFIPJWj3svpZ44chPU1/S4PnuCanJXv9
         LYGUWus9Z427h1C68YVwgxk5Pbh6LXR8kiwns1OMndf51WAzhYJ2ioXZtZ8/AShniMKp
         7ZLQeiFawRh4kXpJYtgEsxUPjHElqSNuOgrtKLvZVgNcyoPMBHGHkpg/VJ0dHfxICVr6
         OShLrjrMdMBZEBeaU9sd5w5IlTUq7IWoULMRtJ8IBTsP75oNKem24n6yj4lYEgQQn56O
         KpLA==
X-Gm-Message-State: AC+VfDxT/pTfyssY7Em16mSqozvBct/zdgtdEfX0s04+Yl7QymRm1G01
	kuy7fLUBgIKJhC3G/w5zOZe5XCiH/vw=
X-Google-Smtp-Source: ACHHUZ7glxPCRwO7qFBhiTeHjIW6cp91jCb1h0bTuP4OpAw0eE7y3beNZGyL2fnKjsEQSFsif8ejkA==
X-Received: by 2002:a05:651c:117:b0:2a6:1682:3a1e with SMTP id a23-20020a05651c011700b002a616823a1emr147641ljb.31.1683131525180;
        Wed, 03 May 2023 09:32:05 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 0/4] enable MMU for RISC-V
Date: Wed,  3 May 2023 19:31:57 +0300
Message-Id: <cover.1683131359.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following things:
1. Functionality to build the page tables for Xen that map
   link-time addresses to load-time (physical) addresses.
2. Check that Xen is less than the page size.
3. Check that load addresses don't overlap with linker addresses.
4. Prepare things for the proper switch to the virtual memory world.
5. Load the built page table into SATP.
6. Enable the MMU.

---
Changes in V6:
  - update the RV VM layout and things related to it
  - move PAGE_SHIFT, PADDR_BITS to the top of page-bits.h
  - cast argument x of the pte_to_addr() macro to paddr_t to avoid the risk of overflow for RV32
  - update the type of num_levels from 'unsigned long' to 'unsigned int'
  - define PGTBL_INITIAL_COUNT as ((CONFIG_PAGING_LEVELS - 1) + 1)
  - update the type of the permission arguments: change it from 'unsigned long' to 'unsigned int'
  - fix code style
  - switch the 'while' loop to a 'for' loop
  - undef HANDLE_PGTBL
  - clean the root page table after the MMU is disabled in the check_pgtbl_mode_support() function
  - align __bss_start properly
  - remove unnecessary const from the paddr_to_pte(), pte_to_paddr(), pte_is_valid() functions
  - add a switch_stack_and_jump macro and use it inside enable_mmu() before the jump to
    the cont_after_mmu_is_enabled() function

---
Changes in V5:
  * rebase the patch series on top of current staging
  * update the cover letter: remove the info about the patches on which the
    MMU patch series was based, as they were merged to staging
  * add a new patch with a description of the VM layout for RISC-V
  * indent the fields of the pte_t struct
  * rename addr_to_pte() and ppn_to_paddr() to match their content
---
Changes in V4:
  * use the GB() macro instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add a comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove the unnecessary 'asm' word at the end of #error
  * encapsulate the pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr()
  * change the type of the paddr argument from const unsigned long to paddr_t
  * update the prototype of pte_to_paddr()
  * calculate the size of the Xen binary based on the amount of page tables
  * use unsigned int instead of uint32_t, as its use isn't warranted
  * remove the extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add an argument to the HANDLE_PGTBL macro instead of the curr_lvl_num variable
  * make enable_mmu() noinline to prevent inlining under link-time optimization,
    because of the nature of enable_mmu()
  * add a function to check that SATP_MODE is supported
  * update the commit message
  * update setup_initial_pagetables() to set the correct PTE flags in one pass
    instead of calling setup_pte_permissions() after setup_initial_pagetables(),
    as setup_initial_pagetables() isn't used to change permission flags
---
Changes in V3:
  - update the cover letter message: the patch series doesn't depend on
    [ RISC-V basic exception handling implementation ] as it was decided
    to enable the MMU before the implementation of exception handling. Also the MMU
    patch series is based on two other patches which weren't merged, [1]
    and [2]
  - update the commit message for [ [PATCH v3 1/3]
    xen/riscv: introduce setup_initial_pages ]
  - update the definition of the pte_t structure to have a proper size of pte_t in case of RV32
  - update asm/mm.h with new functions and remove unnecessary 'extern'
  - remove the LEVEL_* macros as XEN_PT_LEVEL_* alone are enough
  - update paddr_to_pte() to receive permissions as an argument
  - add a check that map_start & pa_start are properly aligned
  - move the defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to <asm/page-bits.h>
  - rename PTE_SHIFT to PTE_PPN_SHIFT
  - refactor setup_initial_pagetables(): map all LINK addresses to LOAD addresses and afterwards
    set up the PTE permissions for the sections; update the check that linker and load addresses
    don't overlap
  - refactor setup_initial_mapping(): allocate a pagetable 'dynamically' if necessary
  - rewrite enable_mmu() in C; add the check that map_start and pa_start are aligned on a 4k
    boundary
  - update the comment for the setup_initial_pagetables() function
  - add RV_STAGE1_MODE to support different MMU modes
  - update the commit message to say that the MMU is also enabled here
  - set XEN_VIRT_START very high so it does not overlap with the load address range
  - align the bss section
---
Changes in V2:
  * remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and
    introduce XEN_PT_LEVEL_*() and LEVEL_* instead of them
  * rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*()
  * remove the clear_pagetables() functions as pagetables are zeroed during
    .bss initialization
  * rename _setup_initial_pagetables() to setup_initial_mapping()
  * make PTE_DEFAULT equal to RX
  * update the prototype of setup_initial_mapping(..., bool writable) ->
    setup_initial_mapping(..., UL flags)
  * update the calls of setup_initial_mapping() according to the new prototype
  * remove the unnecessary call of:
    _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
  * define index* in the loop of setup_initial_mapping()
  * remove the attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
    as we don't have such a section
  * make the arguments of paddr_to_pte() and pte_is_valid() const
  * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
  * update 'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
  * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
  * set __section(".bss.page_aligned") for the page table arrays
  * fix indentations
  * change '__attribute__((section(".entry")))' to '__init'
  * remove the alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
    setup_initial_mapping() as they should already be aligned
  * remove clear_pagetables() as the initial pagetables will be
    zeroed during bss initialization
  * remove __attribute__((section(".entry"))) for setup_initial_pagetables()
    as there is no such section in xen.lds.S
  * update the argument of pte_is_valid() to "const pte_t *p"
  * remove patch "[PATCH v1 3/3] automation: update RISC-V smoke test" from the patch series
    as a simplified approach for the RISC-V smoke test was introduced by Andrew Cooper
  * add patch [xen/riscv: remove dummy_bss variable] as there is no longer any sense in
    the dummy_bss variable after the introduction of the initial page tables
---

Oleksii Kurochko (4):
  xen/riscv: add VM space layout
  xen/riscv: introduce setup_initial_pages
  xen/riscv: setup initial pagetables
  xen/riscv: remove dummy_bss variable

 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  44 +++-
 xen/arch/riscv/include/asm/current.h   |  10 +
 xen/arch/riscv/include/asm/mm.h        |   9 +
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  62 +++++
 xen/arch/riscv/mm.c                    | 315 +++++++++++++++++++++++++
 xen/arch/riscv/riscv64/head.S          |   1 +
 xen/arch/riscv/setup.c                 |  22 +-
 xen/arch/riscv/xen.lds.S               |   4 +
 10 files changed, 469 insertions(+), 9 deletions(-)
 create mode 100644 xen/arch/riscv/include/asm/current.h
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 03 16:32:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 16:32:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529381.823735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOl-0004D8-J4; Wed, 03 May 2023 16:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529381.823735; Wed, 03 May 2023 16:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOl-0004Ay-BX; Wed, 03 May 2023 16:32:11 +0000
Received: by outflank-mailman (input) for mailman id 529381;
 Wed, 03 May 2023 16:32:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O9tK=AY=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1puFOj-0003pk-UX
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 16:32:09 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0c6e7a89-e9d0-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 18:32:08 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ac79d4858dso1400431fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 03 May 2023 09:32:09 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 h20-20020a2e9ed4000000b002a634bfa224sm6074321ljk.40.2023.05.03.09.32.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 03 May 2023 09:32:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c6e7a89-e9d0-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683131529; x=1685723529;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sf2FBCUITLNv92K79Iv4iXww3Xtu3EP3+kJgTTV/WyQ=;
        b=Wwb8m+A/MBKxX8/aWnWoEAimCvThAmY/wZKelw/NuyBSqjIs8K3Lh/c9Sq/1t/ME/B
         ksePu5gs7LyHEmnqWN/5snSwFgID7DcOdjRzLgdyz6UeQ2TvdJq3H43a1PqnL468docT
         nIyX+mzpXdJRddfjR681v6+JpEBQU55RJmGrZdq8eWhr+7EytTMr2nkRF2an9QIDFhY/
         2hmGYZkQhzdVs8kMqWkfcGYopmM/kPsfP8H2SCeonQNHF/NcwTvMQLYvzD5OFOuQTX/W
         /3u4WkPKpUYJ7j0GoR1w93PxgQBL26pniiLXqZ5ZlNj6fXtAn8Qc7MkjTJNl4CnIuwhx
         NYxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683131529; x=1685723529;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=sf2FBCUITLNv92K79Iv4iXww3Xtu3EP3+kJgTTV/WyQ=;
        b=itYgQfLeDAabNWq+STTHVqECHBZTPPusIWluzEq7TGW4478FN7Cp/Iui5ygvamzwUz
         JUAV66xWxSEX948KZk1GevSBgnhGzWI4xMzE2DaS/igSZXBJFae2SySdBfRDtfYf1n+p
         u1U4c+HRXJw6kukM/QewFONLXP9uONwT4eGZoO+uUbujyWcn+jLRXn25d5sQ5C6qndwp
         mi2ln2SezsltZD7ZZ9GKCbbStixHj+y0aBBtlJRY1F2mgOI4zM2rrUN0Z9j4Yqo45IxC
         8GW+IddELgLi9StkoUKABfyXscAJc377tDztVnlcEzkPPYj1NatWf4vTf2fsg4SI411L
         3kUQ==
X-Gm-Message-State: AC+VfDxoM+imqHaFJOZOmag6SnuTtLl9gmWnpP4NIvC1PoOHA3oqFq5C
	4nYv+Mx2UskN5h5v9qaDfnCOHnQpTKA=
X-Google-Smtp-Source: ACHHUZ5ONYjmDZJcXbTYja7oYhZ9BtG1XHn25ZZdgX5rmJ5PlsbPBv4Yf2u9gk6gVigp38rrlLOaxA==
X-Received: by 2002:a2e:2c13:0:b0:2a7:6812:ed9d with SMTP id s19-20020a2e2c13000000b002a76812ed9dmr149304ljs.50.1683131528551;
        Wed, 03 May 2023 09:32:08 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 4/4] xen/riscv: remove dummy_bss variable
Date: Wed,  3 May 2023 19:32:01 +0300
Message-Id: <b651cf8d8f5aaa3e5b3ab57e41befc666b7629f1.1683131359.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1683131359.git.oleksii.kurochko@gmail.com>
References: <cover.1683131359.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

After the introduction of the initial pagetables there is no longer
any need for the dummy_bss variable, as the .bss section will not be
empty anymore.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V6:
 - Nothing changed. Only rebase
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 * the patch was introduced in the current patch series (v3).
---
Changes in V2:
 * the patch was introduced in the current patch series (v2).
---
 xen/arch/riscv/setup.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index cf5dc5824e..845d18d86f 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -8,14 +8,6 @@
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
-/*  
- * To be sure that .bss isn't zero. It will simplify code of
- * .bss initialization.
- * TODO:
- *   To be deleted when the first real .bss user appears
- */
-int dummy_bss __attribute__((unused));
-
 void __init noreturn start_xen(unsigned long bootcpu_id,
                                paddr_t dtb_addr)
 {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 03 16:32:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 16:32:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529379.823724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOk-000450-T1; Wed, 03 May 2023 16:32:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529379.823724; Wed, 03 May 2023 16:32:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOk-00044t-Pn; Wed, 03 May 2023 16:32:10 +0000
Received: by outflank-mailman (input) for mailman id 529379;
 Wed, 03 May 2023 16:32:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O9tK=AY=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1puFOj-0003pk-8I
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 16:32:09 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0bc8b9ae-e9d0-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 18:32:08 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2a8dc00ade2so54559771fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 03 May 2023 09:32:08 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 h20-20020a2e9ed4000000b002a634bfa224sm6074321ljk.40.2023.05.03.09.32.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 03 May 2023 09:32:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bc8b9ae-e9d0-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683131528; x=1685723528;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bsfM97J5O+UpCSjMju/nOXYsGeOrD/W7nZOAovHNtiM=;
        b=W6+ogGuzG53c3UETfJpq0zks2Ni0FaBWiOO91I+UT4EJyiSF09+76TXD5JQmrocGMc
         DOUU6FJJ/GcfmvHwZunYG+5sVZ+CiwMOoxQAr6RIA7Gj24SF7kO103UHa+Ibwo3v1DsT
         +TA7Xxc9cjGjC2VbY4P/BCW0j19xEYdUU1kFe894TcyKutuVtv54B7rcH7q4PnkbMJnm
         27HaPgJIPAkunfK1kYIhD389Wz4IkQk3VXhQY4hifb08eAu81ccS7ReTGlwqozKP6ote
         7KiZeLYdJieEhZb8PrFyc9ooB6bt1PLmrFOYp5xILkPca3mtFC4H0hgwk9GuEv5zwV7F
         d+Ww==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683131528; x=1685723528;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=bsfM97J5O+UpCSjMju/nOXYsGeOrD/W7nZOAovHNtiM=;
        b=ABQJ+WzFjLj61bJS+cD5ixJeI4/n5FKDrsTzo31eZmKbTMo7D38ZmvLcRnhycFuMnk
         +X7kuwTYBxAyz2pvYkYVVseFFa4HO4s6jWSlwx3fEjqrPeppwoH3l8a4xjwlCzM1tOzs
         P76Bpjuyzde2sdk7LIOxtRxowSSGZtvAa9grBGU6ARMkIo2dken3vS83rUa26ZvcblHw
         r6fmhEaAAH2OF46r6CLTF2sC+FjDFYQ6f9s5Z/ibdbLqyEGOF4ZU+NVjueHSZMChFunA
         0/Z2bAd7BziBAwdBID4koKSiLnZ3tf0jJoVd53+9czyDUfBBwVzogGdV4nlgfMifTJAC
         KoEw==
X-Gm-Message-State: AC+VfDwRb42PK1FDMSeAuu9YrJotZBzGCfBArE8ZxZIuS5NbLZIzXdnL
	/F9PPE/wV3xz3Ea5kKrT5+w4qPr8F9c=
X-Google-Smtp-Source: ACHHUZ503Xhj72fABoOXv+XvN3atFesd1TP1pzqMQS1XFTYQF0u1guxYyIZ7sV0gX6RP58M84Vi1ig==
X-Received: by 2002:a2e:920c:0:b0:2a1:ab4a:153d with SMTP id k12-20020a2e920c000000b002a1ab4a153dmr152012ljg.29.1683131527692;
        Wed, 03 May 2023 09:32:07 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 3/4] xen/riscv: setup initial pagetables
Date: Wed,  3 May 2023 19:32:00 +0300
Message-Id: <4880494180634e53332dcfde24cc03ed1fea1a86.1683131359.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1683131359.git.oleksii.kurochko@gmail.com>
References: <cover.1683131359.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch does two things:
1. Sets up the initial pagetables.
2. Enables the MMU, after which execution continues in
   cont_after_mmu_is_enabled().

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V6:
 - Nothing changed. Only rebase
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 - update the commit message that MMU is also enabled here
 - remove early_printk("All set up\n") as it was moved to the
   cont_after_mmu_is_enabled() function, which runs after the MMU is enabled.
---
Changes in V2:
 * Update the commit message
---
 xen/arch/riscv/setup.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 315804aa87..cf5dc5824e 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -21,7 +21,10 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 {
     early_printk("Hello from C env\n");
 
-    early_printk("All set up\n");
+    setup_initial_pagetables();
+
+    enable_mmu();
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 03 16:32:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 16:32:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529382.823741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOl-0004HL-SJ; Wed, 03 May 2023 16:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529382.823741; Wed, 03 May 2023 16:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFOl-0004FD-Jt; Wed, 03 May 2023 16:32:11 +0000
Received: by outflank-mailman (input) for mailman id 529382;
 Wed, 03 May 2023 16:32:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O9tK=AY=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1puFOk-0003pe-9z
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 16:32:10 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0bc3b58e-e9d0-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 18:32:07 +0200 (CEST)
Received: by mail-lf1-x135.google.com with SMTP id
 2adb3069b0e04-4f137dbaa4fso1235194e87.2
 for <xen-devel@lists.xenproject.org>; Wed, 03 May 2023 09:32:08 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 h20-20020a2e9ed4000000b002a634bfa224sm6074321ljk.40.2023.05.03.09.32.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 03 May 2023 09:32:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bc3b58e-e9d0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683131527; x=1685723527;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Ax1p/+Ojg24s3uOBpiRJ9X07/nApnSHzGweJ6WFH648=;
        b=DCHKe6PsTaBnm7AOWO6SRyvaqhHS6WvbwA2mELkNLgWfyvqCpkZlqeWtqEASKyiyH6
         VAMP1WaeJ8ufcdRyFQsXsA2H7cE8QsKx2HgJEmsBM4QoYFqtW2AIDczM/34w5vD5T/MP
         0AlI5uqCr2Y1zxyx2uzhBAL9o/dHj92b55MEwpOV17NTUGB4RxV/d/9GfP3vsyrPkSqP
         GloF2oLTL7OHc/ZeumNo6nIU1p27+MJ9DuUiWlCbuRiMCGRag2MjuTRP4F7pyMI0i3ls
         0srcojwQjdaghB9HPFtOu4CG4MVxgur8hW2QAD2pSKwVnqRkItABCY8wHJJSXOv+psi1
         vsZg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683131527; x=1685723527;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Ax1p/+Ojg24s3uOBpiRJ9X07/nApnSHzGweJ6WFH648=;
        b=Uy7j80v4EX0i3RUXA2GSRvYor1lLMUlxcF2RwPrBbIl4eo2LvO7L6qDSBTAfobL2gi
         mzXzsvrDGFQF7Ac8FnzgikpXCeg7skdMdi4KUeOYso/nF3QJnL3MgosQziUXeK2jkTZ+
         ZHWL9qZMsFqNdEW7QmivGceE2iVl5Um9Tg+tniYbAZ/qzYhdHp2SN/zmWN7t/Vj7QEk7
         kSkYTLWR7hXu6oxm+c8o7/o4FqL478Ftwi0zjun7ExkTIzO50cxvU3JFYTQYC5KDHTD/
         0aqj1oM5uFB/lVd8Ss+CQk78jTf0HMQZfwEctln8ViOzGAOYKFSj6FQurOTVyNb50j7Y
         pEUQ==
X-Gm-Message-State: AC+VfDwS8lHypIUPLKDReK0K3jhoIoW8X+/1DWLGfCFaDPApliJnoH18
	g6FppJ9khuNmVtu3Ocx41sH5Tu0DCG0=
X-Google-Smtp-Source: ACHHUZ5gv8pRr8ymywPtNmbKQcjo67kL49ke7vlfhtiGZnZDzEeBZjtmAa6XvbL/ag9h3FJA/2scXA==
X-Received: by 2002:ac2:4a66:0:b0:4eb:e7f:945 with SMTP id q6-20020ac24a66000000b004eb0e7f0945mr1000813lfp.41.1683131526959;
        Wed, 03 May 2023 09:32:06 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 2/4] xen/riscv: introduce setup_initial_pages
Date: Wed,  3 May 2023 19:31:59 +0300
Message-Id: <d1a6fb6112b61000645eb1a4ab9ade8a208d4204.1683131359.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1683131359.git.oleksii.kurochko@gmail.com>
References: <cover.1683131359.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The idea was taken from Xvisor, but the following changes
were made:
* Use only the minimal part of the code needed to enable the MMU
* Rename the {_}setup_initial_pagetables functions
* Add an argument to setup_initial_mapping() to make it
  possible to set PTE flags.
* Update the setup_initial_pagetables() function to map sections
  with the correct PTE flags.
* Rewrite enable_mmu() in C.
* Map the linker address range to the load address range without
  assuming a 1:1 mapping. The mapping is 1:1 only when
  load_start_addr is equal to linker_start_addr.
* Add safety checks such as:
  * Xen's size is less than the page size
  * the linker address range doesn't overlap the load address
    range
* Rework the {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK} macros
* Change PTE_LEAF_DEFAULT to RW instead of RWX.
* Remove phys_offset as it is no longer used
* Remove the alignment of {map, pa}_start (&= XEN_PT_LEVEL_MAP_MASK(0))
  in setup_initial_mapping() as they should already be aligned.
  Add a check that {map, pa}_start are aligned.
* Remove clear_pagetables() as the initial pagetables will be
  zeroed during bss initialization
* Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
  as there is no such section in xen.lds.S
* Update the argument of pte_is_valid() to "const pte_t *p"
* Add a check that Xen's load address is aligned on a 4k boundary
* Refactor setup_initial_pagetables() so that it maps the linker
  address range to the load address range, and afterwards sets the
  needed permissions for specific sections (such as .text, .rodata,
  etc.); otherwise RW permissions are set by default.
* Add a function to check that the requested SATP_MODE is supported

Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V6:
	- move PAGE_SHIFT, PADDR_BITS to the top of page-bits.h
	- cast argument x of the pte_to_addr() macro to paddr_t to avoid the risk of overflow for RV32
	- update the type of num_levels from 'unsigned long' to 'unsigned int'
	- define PGTBL_INITIAL_COUNT as ((CONFIG_PAGING_LEVELS - 1) + 1)
	- update the type of the permission arguments from 'unsigned long' to 'unsigned int'
	- fix code style
	- switch the 'while' loop to a 'for' loop
	- undef HANDLE_PGTBL
	- clean the root page table after the MMU is disabled in the check_pgtbl_mode_support() function
	- align __bss_start properly
	- remove unnecessary const from the paddr_to_pte(), pte_to_paddr(), pte_is_valid() functions
	- add a switch_stack_and_jump macro and use it inside enable_mmu() before jumping to
	  the cont_after_mmu_is_enabled() function
---
Changes in V5:
	* Indent fields of pte_t struct
	* Rename addr_to_pte() and ppn_to_paddr() to match their content
---
Changes in V4:
  * use GB() macros instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove unnecessary 'asm' word at the end of #error
  * encapsulate pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr().
  * change type of paddr argument from const unsigned long to paddr_t
  * pte_to_paddr() update prototype.
  * calculate size of Xen binary based on an amount of page tables
  * use unsigned int instead of uint32_t as its use isn't warranted.
  * remove extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add argument for HANDLE_PGTBL macros instead of curr_lvl_num variable
  * mark enable_mmu() as noinline to prevent issues under link-time
    optimization because of the nature of enable_mmu()
  * add function to check that SATP_MODE is supported.
  * update the commit message
  * update setup_initial_pagetables to set correct PTE flags in one pass
    instead of calling setup_pte_permissions after setup_initial_pagetables()
    as setup_initial_pagetables() isn't used to change permission flags.
---
Changes in V3:
 - update definition of pte_t structure to have a proper size of pte_t
   in case of RV32.
 - update asm/mm.h with new functions and remove unnecessary 'extern'.
 - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
 - update paddr_to_pte() to receive permissions as an argument.
 - add check that map_start & pa_start are properly aligned.
 - move  defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to
   <asm/page-bits.h>
 - Rename PTE_SHIFT to PTE_PPN_SHIFT
 - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses
   and after setup PTEs permission for sections; update check that linker
   and load addresses don't overlap.
 - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is
   necessary.
 - rewrite enable_mmu in C; add the check that map_start and pa_start are
   aligned on 4k boundary.
 - update the comment for the setup_initial_pagetables function
 - Add RV_STAGE1_MODE to support different MMU modes
 - set XEN_VIRT_START very high to not overlap with load address range
 - align bss section
---
Changes in V2:
 * update the commit message:
 * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK, SIZE,...} and
   introduce instead of them XEN_PT_LEVEL_*() and LEVEL_*
 * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*()
 * Remove clear_pagetables() functions as pagetables were zeroed during
   .bss initialization
 * Rename _setup_initial_pagetables() to setup_initial_mapping()
 * Make PTE_DEFAULT equal to RX.
 * Update prototype of setup_initial_mapping(..., bool writable) -> 
   setup_initial_mapping(..., UL flags)  
 * Update calls of setup_initial_mapping according to new prototype
 * Remove unnecessary call of:
   _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
 * Define index* in the loop of setup_initial_mapping
 * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
   as we don't have such section
 * make arguments of paddr_to_pte() and pte_is_valid() as const.
 * make xen_second_pagetable static.
 * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
 * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
 * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
 * set __section(".bss.page_aligned") for page tables arrays
 * fix indentations
 * Change '__attribute__((section(".entry")))' to '__init'
 * Remove phys_offset as it isn't used now.
 * Remove the alignment of {map, pa}_start (&= XEN_PT_LEVEL_MAP_MASK(0)) in
   setup_initial_mapping() as they should already be aligned.
 * Remove clear_pagetables() as initial pagetables will be
   zeroed during bss initialization
 * Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
   as there is no such section in xen.lds.S
 * Update the argument of pte_is_valid() to "const pte_t *p"
---
 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  13 +-
 xen/arch/riscv/include/asm/current.h   |  10 +
 xen/arch/riscv/include/asm/mm.h        |   9 +
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  62 +++++
 xen/arch/riscv/mm.c                    | 315 +++++++++++++++++++++++++
 xen/arch/riscv/riscv64/head.S          |   1 +
 xen/arch/riscv/setup.c                 |  11 +
 xen/arch/riscv/xen.lds.S               |   4 +
 10 files changed, 435 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/current.h
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 443f6bf15f..956ceb02df 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += entry.o
+obj-y += mm.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 73b86ce789..3bd206766e 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -70,12 +70,23 @@
   name:
 #endif
 
-#define XEN_VIRT_START  _AT(UL, 0x80200000)
+#ifdef CONFIG_RISCV_64
+#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
+#else
+#error "RV32 isn't supported"
+#endif
 
 #define SMP_CACHE_BYTES (1 << 6)
 
 #define STACK_SIZE PAGE_SIZE
 
+#ifdef CONFIG_RISCV_64
+#define CONFIG_PAGING_LEVELS 3
+#define RV_STAGE1_MODE SATP_MODE_SV39
+#else
+#define RV_STAGE1_MODE SATP_MODE_SV32
+#endif
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/include/asm/current.h b/xen/arch/riscv/include/asm/current.h
new file mode 100644
index 0000000000..1cb5946fe9
--- /dev/null
+++ b/xen/arch/riscv/include/asm/current.h
@@ -0,0 +1,10 @@
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#define switch_stack_and_jump(stack, fn)    \
+    asm volatile (                          \
+            "mv sp, %0 \n"                  \
+            "j " #fn :: "r" (stack) :       \
+    )
+
+#endif /* __ASM_CURRENT_H */
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
new file mode 100644
index 0000000000..e16ce66fae
--- /dev/null
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -0,0 +1,9 @@
+#ifndef _ASM_RISCV_MM_H
+#define _ASM_RISCV_MM_H
+
+void setup_initial_pagetables(void);
+
+void enable_mmu(void);
+void cont_after_mmu_is_enabled(void);
+
+#endif /* _ASM_RISCV_MM_H */
diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
index 1801820294..4a3e33589a 100644
--- a/xen/arch/riscv/include/asm/page-bits.h
+++ b/xen/arch/riscv/include/asm/page-bits.h
@@ -4,4 +4,14 @@
 #define PAGE_SHIFT              12 /* 4 KiB Pages */
 #define PADDR_BITS              56 /* 44-bit PPN */
 
+#ifdef CONFIG_RISCV_64
+#define PAGETABLE_ORDER         (9)
+#else /* CONFIG_RISCV_32 */
+#define PAGETABLE_ORDER         (10)
+#endif
+
+#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
+
+#define PTE_PPN_SHIFT           10
+
 #endif /* __RISCV_PAGE_BITS_H__ */
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
new file mode 100644
index 0000000000..b4fc67484b
--- /dev/null
+++ b/xen/arch/riscv/include/asm/page.h
@@ -0,0 +1,62 @@
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <xen/const.h>
+#include <xen/types.h>
+
+#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
+
+#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
+#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
+#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
+#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
+#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
+
+#define PTE_VALID                   BIT(0, UL)
+#define PTE_READABLE                BIT(1, UL)
+#define PTE_WRITABLE                BIT(2, UL)
+#define PTE_EXECUTABLE              BIT(3, UL)
+#define PTE_USER                    BIT(4, UL)
+#define PTE_GLOBAL                  BIT(5, UL)
+#define PTE_ACCESSED                BIT(6, UL)
+#define PTE_DIRTY                   BIT(7, UL)
+#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
+
+#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
+#define PTE_TABLE                   (PTE_VALID)
+
+/* Calculate the offsets into the pagetables for a given VA */
+#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
+
+#define pt_index(lvl, va) pt_linear_offset(lvl, (va) & XEN_PT_LEVEL_MASK(lvl))
+
+/* Page Table entry */
+typedef struct {
+#ifdef CONFIG_RISCV_64
+    uint64_t pte;
+#else
+    uint32_t pte;
+#endif
+} pte_t;
+
+#define pte_to_addr(x) (((paddr_t)(x) >> PTE_PPN_SHIFT) << PAGE_SHIFT)
+
+#define addr_to_pte(x) (((x) >> PAGE_SHIFT) << PTE_PPN_SHIFT)
+
+static inline pte_t paddr_to_pte(paddr_t paddr,
+                                 unsigned int permissions)
+{
+    return (pte_t) { .pte = addr_to_pte(paddr) | permissions };
+}
+
+static inline paddr_t pte_to_paddr(pte_t pte)
+{
+    return pte_to_addr(pte.pte);
+}
+
+static inline bool pte_is_valid(pte_t p)
+{
+    return p.pte & PTE_VALID;
+}
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
new file mode 100644
index 0000000000..b13f15f75f
--- /dev/null
+++ b/xen/arch/riscv/mm.c
@@ -0,0 +1,315 @@
+#include <xen/compiler.h>
+#include <xen/init.h>
+#include <xen/kernel.h>
+#include <xen/pfn.h>
+
+#include <asm/early_printk.h>
+#include <asm/config.h>
+#include <asm/csr.h>
+#include <asm/current.h>
+#include <asm/mm.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+
+struct mmu_desc {
+    unsigned int num_levels;
+    unsigned int pgtbl_count;
+    pte_t *next_pgtbl;
+    pte_t *pgtbl_base;
+};
+
+extern unsigned char cpu0_boot_stack[STACK_SIZE];
+
+#define PHYS_OFFSET ((unsigned long)_start - XEN_VIRT_START)
+#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
+#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
+
+
+/*
+ * It is expected that Xen won't be larger than 2 MB.
+ * The check in xen.lds.S guarantees that.
+ * At least 3 page tables (in the case of Sv39)
+ * are needed to cover 2 MB: one for each page table
+ * level, with PAGE_SIZE = 4 KB.
+ *
+ * One L0 page table can cover 2 MB
+ * (512 entries of one page table * PAGE_SIZE).
+ *
+ */
+#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
+
+#define PGTBL_ENTRY_AMOUNT  (PAGE_SIZE / sizeof(pte_t))
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_root[PGTBL_ENTRY_AMOUNT];
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_nonroot[PGTBL_INITIAL_COUNT * PGTBL_ENTRY_AMOUNT];
+
+#define HANDLE_PGTBL(curr_lvl_num)                                          \
+    index = pt_index(curr_lvl_num, page_addr);                              \
+    if ( pte_is_valid(pgtbl[index]) )                                       \
+    {                                                                       \
+        /* Find L{ 0-3 } table */                                           \
+        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);                        \
+    }                                                                       \
+    else                                                                    \
+    {                                                                       \
+        /* Allocate new L{0-3} page table */                                \
+        if ( mmu_desc->pgtbl_count == PGTBL_INITIAL_COUNT )                 \
+        {                                                                   \
+            early_printk("(XEN) No initial table available\n");             \
+            /* panic(), BUG() or ASSERT() aren't ready now. */              \
+            die();                                                          \
+        }                                                                   \
+        mmu_desc->pgtbl_count++;                                            \
+        pgtbl[index] = paddr_to_pte((unsigned long)mmu_desc->next_pgtbl,    \
+                                    PTE_VALID);                             \
+        pgtbl = mmu_desc->next_pgtbl;                                       \
+        mmu_desc->next_pgtbl += PGTBL_ENTRY_AMOUNT;                         \
+    }
+
+static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
+                                         unsigned long map_start,
+                                         unsigned long map_end,
+                                         unsigned long pa_start,
+                                         unsigned int permissions)
+{
+    unsigned int index;
+    pte_t *pgtbl;
+    unsigned long page_addr;
+    pte_t pte_to_be_written;
+    unsigned long paddr;
+    unsigned int tmp_permissions;
+
+    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
+    {
+        early_printk("(XEN) Xen should be loaded at 4k boundary\n");
+        die();
+    }
+
+    if ( map_start & ~XEN_PT_LEVEL_MAP_MASK(0) ||
+         pa_start & ~XEN_PT_LEVEL_MAP_MASK(0) )
+    {
+        early_printk("(XEN) map and pa start addresses should be aligned\n");
+        /* panic(), BUG() or ASSERT() aren't ready now. */
+        die();
+    }
+
+    for ( page_addr = map_start; page_addr < map_end; page_addr += XEN_PT_LEVEL_SIZE(0) )
+    {
+        pgtbl = mmu_desc->pgtbl_base;
+
+        switch ( mmu_desc->num_levels )
+        {
+        case 4: /* Level 3 */
+            HANDLE_PGTBL(3);
+        case 3: /* Level 2 */
+            HANDLE_PGTBL(2);
+        case 2: /* Level 1 */
+            HANDLE_PGTBL(1);
+        case 1: /* Level 0 */
+            index = pt_index(0, page_addr);
+            paddr = (page_addr - map_start) + pa_start;
+
+            tmp_permissions = permissions;
+
+            if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
+                    is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
+                tmp_permissions =
+                    PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
+
+            if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
+                tmp_permissions = PTE_READABLE | PTE_VALID;
+
+            pte_to_be_written = paddr_to_pte(paddr, tmp_permissions);
+
+            if ( !pte_is_valid(pgtbl[index]) )
+                pgtbl[index] = pte_to_be_written;
+            else
+            {
+                /*
+                 * Get the addresses of the current PTE and the one
+                 * to be written.
+                 */
+                unsigned long curr_pte =
+                    pgtbl[index].pte & ~(PTE_DIRTY | PTE_ACCESSED);
+
+                pte_to_be_written.pte &= ~(PTE_DIRTY | PTE_ACCESSED);
+
+                if ( curr_pte != pte_to_be_written.pte )
+                {
+                    early_printk("PTE override has occurred\n");
+                    /* panic(), <asm/bug.h> aren't ready now. */
+                    die();
+                }
+            }
+        }
+    }
+    #undef HANDLE_PGTBL
+}
+
+static void __init calc_pgtbl_lvls_num(struct mmu_desc *mmu_desc)
+{
+    unsigned long satp_mode = RV_STAGE1_MODE;
+
+    /* Number of page table levels */
+    switch (satp_mode)
+    {
+    case SATP_MODE_SV32:
+        mmu_desc->num_levels = 2;
+        break;
+    case SATP_MODE_SV39:
+        mmu_desc->num_levels = 3;
+        break;
+    case SATP_MODE_SV48:
+        mmu_desc->num_levels = 4;
+        break;
+    default:
+        early_printk("(XEN) Unsupported SATP_MODE\n");
+        die();
+    }
+}
+
+static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
+                                            unsigned long load_start,
+                                            unsigned long satp_mode)
+{
+    bool is_mode_supported = false;
+    unsigned int index;
+    unsigned int page_table_level = (mmu_desc->num_levels - 1);
+    unsigned long level_map_mask = XEN_PT_LEVEL_MAP_MASK(page_table_level);
+
+    unsigned long aligned_load_start = load_start & level_map_mask;
+    unsigned long aligned_page_size = XEN_PT_LEVEL_SIZE(page_table_level);
+    unsigned long xen_size = (unsigned long)(_end - _start);
+
+    if ( (load_start + xen_size) > (aligned_load_start + aligned_page_size) )
+    {
+        early_printk("please place Xen to be in range of PAGE_SIZE "
+                     "where PAGE_SIZE is XEN_PT_LEVEL_SIZE( {L3 | L2 | L1} ) "
+                     "depending on expected SATP_MODE \n"
+                     "XEN_PT_LEVEL_SIZE is defined in <asm/page.h>\n");
+        die();
+    }
+
+    index = pt_index(page_table_level, aligned_load_start);
+    stage1_pgtbl_root[index] = paddr_to_pte(aligned_load_start,
+                                            PTE_LEAF_DEFAULT | PTE_EXECUTABLE);
+
+    asm volatile ( "sfence.vma" );
+    csr_write(CSR_SATP,
+              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
+              satp_mode << SATP_MODE_SHIFT);
+
+    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
+        is_mode_supported = true;
+
+    csr_write(CSR_SATP, 0);
+
+    /* Clean MMU root page table */
+    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
+
+    asm volatile ( "sfence.vma" );
+
+    return is_mode_supported;
+}
+
+/*
+ * setup_initial_pagetables:
+ *
+ * Build the initial page tables for Xen as follows:
+ *  1. Calculate the number of page table levels.
+ *  2. Initialize the MMU description structure.
+ *  3. Check that the linker address range doesn't overlap
+ *     with the load address range.
+ *  4. Map all linker addresses to load addresses (the mapping
+ *     isn't necessarily 1:1; it is 1:1 only when a linker
+ *     address is equal to the corresponding load address) with
+ *     RW permissions by default.
+ *  5. Set up the proper PTE permissions for each section.
+ */
+void __init setup_initial_pagetables(void)
+{
+    struct mmu_desc mmu_desc = { 0, 0, NULL, NULL };
+
+    /*
+     * Access to _start and _end is always PC-relative,
+     * so accessing them yields the load addresses
+     * of the start and end of Xen.
+     * To get the linker addresses, LOAD_TO_LINK()
+     * must be used.
+     */
+    unsigned long load_start    = (unsigned long)_start;
+    unsigned long load_end      = (unsigned long)_end;
+    unsigned long linker_start  = LOAD_TO_LINK(load_start);
+    unsigned long linker_end    = LOAD_TO_LINK(load_end);
+
+    if ( (linker_start != load_start) &&
+         (linker_start <= load_end) && (load_start <= linker_end) )
+    {
+        early_printk("(XEN) linker and load address ranges overlap\n");
+        die();
+    }
+
+    calc_pgtbl_lvls_num(&mmu_desc);
+
+    if ( !check_pgtbl_mode_support(&mmu_desc, load_start, RV_STAGE1_MODE) )
+    {
+        early_printk("requested MMU mode isn't supported by CPU\n"
+                     "Please choose different in <asm/config.h>\n");
+        die();
+    }
+
+    mmu_desc.pgtbl_base = stage1_pgtbl_root;
+    mmu_desc.next_pgtbl = stage1_pgtbl_nonroot;
+
+    setup_initial_mapping(&mmu_desc,
+                          linker_start,
+                          linker_end,
+                          load_start,
+                          PTE_LEAF_DEFAULT);
+}
+
+void __init noinline enable_mmu()
+{
+    /*
+     * Calculate the link-time address of the mmu_is_enabled
+     * label and update CSR_STVEC with it.
+     * The MMU is configured so that linker addresses are mapped
+     * onto load addresses; when they are not equal, enabling the
+     * MMU causes an exception and a jump to the link-time
+     * addresses. Otherwise, if load addresses equal linker
+     * addresses, the code after the mmu_is_enabled label
+     * executes without an exception.
+     */
+    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
+
+    /* Ensure page table writes precede loading the SATP */
+    asm volatile ( "sfence.vma" );
+
+    /* Enable the MMU and load the new pagetable for Xen */
+    csr_write(CSR_SATP,
+              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
+              RV_STAGE1_MODE << SATP_MODE_SHIFT);
+
+    asm volatile ( ".align 2" );
+ mmu_is_enabled:
+    /*
+     * The stack should be re-initialized because:
+     * 1. Right now the address of the stack is load-time
+     *    relative, which causes an issue when the load start
+     *    address isn't equal to the linker start address.
+     * 2. Addresses on the stack are all load-time relative,
+     *    which can be an issue when the load start address
+     *    isn't equal to the linker start address.
+     *
+     * We can't return to the caller because the stack was reset
+     * and the caller may have stashed some variables on it.
+     * Jump to a brand new function instead, as the stack was reset.
+     */
+
+    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
+                          cont_after_mmu_is_enabled);
+}
+
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 8887f0cbd4..983757e498 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,4 +1,5 @@
 #include <asm/asm.h>
+#include <asm/asm-offsets.h>
 #include <asm/riscv_encoding.h>
 
         .section .text.header, "ax", %progbits
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 3786f337e0..315804aa87 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -2,6 +2,7 @@
 #include <xen/init.h>
 
 #include <asm/early_printk.h>
+#include <asm/mm.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -26,3 +27,13 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
     unreachable();
 }
+
+void __init noreturn cont_after_mmu_is_enabled(void)
+{
+    early_printk("All set up\n");
+
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index 31e0d3576c..f9d89b69b9 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -137,6 +137,7 @@ SECTIONS
     __init_end = .;
 
     .bss : {                     /* BSS */
+        . = ALIGN(POINTER_ALIGN);
         __bss_start = .;
         *(.bss.stack_aligned)
         . = ALIGN(PAGE_SIZE);
@@ -172,3 +173,6 @@ ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
 
 ASSERT(!SIZEOF(.got),      ".got non-empty")
 ASSERT(!SIZEOF(.got.plt),  ".got.plt non-empty")
+
+ASSERT(_end - _start <= MB(2), "Xen too large for early-boot assumptions")
+
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 03 16:36:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 16:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529394.823764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFTD-0006h6-Fj; Wed, 03 May 2023 16:36:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529394.823764; Wed, 03 May 2023 16:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFTD-0006gz-Co; Wed, 03 May 2023 16:36:47 +0000
Received: by outflank-mailman (input) for mailman id 529394;
 Wed, 03 May 2023 16:36:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2zA8=AY=citrix.com=prvs=480c9ef0c=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puFTC-0006gt-B6
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 16:36:46 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id af208a88-e9d0-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 18:36:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af208a88-e9d0-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683131803;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=qJd4BNQgLPxNBx6IhpSFPUe3kUXTC4ATTMmrCT7u5Hc=;
  b=YdX15iZECYWYTr99d7saY9lfUSokFQLn2IVSad3xWnTPv6ZdboI0vpfO
   xNP5t4OlIncW7OEY4f+LJr7y1Vo9xoBG+685GdIpskE4xXzojMU4+ruyD
   8s5QNdJSkJl9YVRlpsJ5d5UzJs08Xf1HaH6VfC9uX1NAsBP/d9uJR2Pji
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108152269
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
Date: Wed, 3 May 2023 17:36:32 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>
Subject: Re: [PATCH v3 1/3] tools: Modify single-domid callers of
 xc_domain_getinfolist()
Message-ID: <bba61acc-4d4c-49f9-9cb8-b93bb6409180@perard>
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
 <20230502111338.16757-2-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230502111338.16757-2-alejandro.vallejo@cloud.com>

On Tue, May 02, 2023 at 12:13:36PM +0100, Alejandro Vallejo wrote:
> diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
> index 7f0986c185..5709b3e62f 100644
> --- a/tools/libs/light/libxl_domain.c
> +++ b/tools/libs/light/libxl_domain.c
> @@ -349,16 +349,12 @@ int libxl_domain_info(libxl_ctx *ctx, libxl_dominfo *info_r,
>      int ret;
>      GC_INIT(ctx);
>  
> -    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &xcinfo);
> +    ret = xc_domain_getinfo_single(ctx->xch, domid, &xcinfo);
>      if (ret<0) {
> -        LOGED(ERROR, domid, "Getting domain info list");
> +        LOGED(ERROR, domid, "Getting domain info");
>          GC_FREE;
>          return ERROR_FAIL;
>      }
> -    if (ret==0 || xcinfo.domain != domid) {
> -        GC_FREE;
> -        return ERROR_DOMAIN_NOTFOUND;

I kind of think we should keep returning ERROR_DOMAIN_NOTFOUND on error,
as that is the most likely explanation. Also, the comment for this
function in libxl.h explains this:
    /* May be called with info_r == NULL to check for domain's existence.
     * Returns ERROR_DOMAIN_NOTFOUND if domain does not exist (used to return
     * ERROR_INVAL for this scenario). */
    int libxl_domain_info(libxl_ctx*, libxl_dominfo *info_r,
                          uint32_t domid);

Would it be possible to find out from xc_domain_getinfo_single() if it's
a domain not found scenario vs other error (like permission denied)?
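One way this could be plumbed through (a sketch only, assuming the single-domain call sets errno and that ESRCH is the "no such domain" value; the enum values and helper name below are invented for illustration, not the actual libxl definitions):

```c
#include <assert.h>
#include <errno.h>

/*
 * Illustrative only: map the outcome of an xc_domain_getinfo_single()
 * style call to distinct error codes, assuming the library sets errno
 * on failure and uses ESRCH for a nonexistent domain.
 */
enum sketch_error {
    SK_OK = 0,
    SK_FAIL = -3,            /* stand-in for ERROR_FAIL */
    SK_DOMAIN_NOTFOUND = -6, /* stand-in for ERROR_DOMAIN_NOTFOUND */
};

static enum sketch_error map_getinfo_result(int r, int saved_errno)
{
    if (r >= 0)
        return SK_OK;

    /* ESRCH is the conventional "no such process/domain" errno. */
    return saved_errno == ESRCH ? SK_DOMAIN_NOTFOUND : SK_FAIL;
}
```

The call site in libxl_domain_info() could then keep its documented ERROR_DOMAIN_NOTFOUND behaviour while still reporting other failures as ERROR_FAIL.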

> -    }
>  
>      if (info_r)
>          libxl__xcinfo2xlinfo(ctx, &xcinfo, info_r);
> @@ -1657,14 +1653,15 @@ int libxl__resolve_domid(libxl__gc *gc, const char *name, uint32_t *domid)
>  libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
>                                         int *nr_vcpus_out, int *nr_cpus_out)
>  {
> +    int rc;
>      GC_INIT(ctx);
>      libxl_vcpuinfo *ptr, *ret;
>      xc_domaininfo_t domaininfo;
>      xc_vcpuinfo_t vcpuinfo;
>      unsigned int nr_vcpus;
>  
> -    if (xc_domain_getinfolist(ctx->xch, domid, 1, &domaininfo) != 1) {
> -        LOGED(ERROR, domid, "Getting infolist");
> +    if ((rc = xc_domain_getinfo_single(ctx->xch, domid, &domaininfo)) < 0) {

The variable name "rc" is reserved for libxl return codes. For syscalls
and other external calls, we should use the name "r". (I know that in
other parts of this patch the name used is "ret", but as that already
exists in the code base, I'm not asking for a change elsewhere.)

Also, assignment to the variable should be done outside of the if(). So
the new code should look like:

    r = xc_domain_getinfo_single(...);
    if (r < 0) {

(All of this is explained in tools/libs/light/CODING_STYLE.)

> +        LOGED(ERROR, domid, "Getting dominfo");
>          GC_FREE;
>          return NULL;
>      }
> diff --git a/tools/libs/light/libxl_sched.c b/tools/libs/light/libxl_sched.c
> index 7c53dc60e6..21a65442c0 100644
> --- a/tools/libs/light/libxl_sched.c
> +++ b/tools/libs/light/libxl_sched.c
> @@ -219,13 +219,11 @@ static int sched_credit_domain_set(libxl__gc *gc, uint32_t domid,
>      xc_domaininfo_t domaininfo;
>      int rc;
>  
> -    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &domaininfo);
> +    rc = xc_domain_getinfo_single(CTX->xch, domid, &domaininfo);
>      if (rc < 0) {
> -        LOGED(ERROR, domid, "Getting domain info list");
> +        LOGED(ERROR, domid, "Getting domain info");
>          return ERROR_FAIL;
>      }
> -    if (rc != 1 || domaininfo.domain != domid)
> -        return ERROR_INVAL;

We can probably return ERROR_INVAL on error instead of ERROR_FAIL, as I
guess it's more likely that we try to change a non-existing domain
rather than having another error.

>  
>      rc = xc_sched_credit_domain_get(CTX->xch, domid, &sdom);
>      if (rc != 0) {
> @@ -426,13 +424,11 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
>      xc_domaininfo_t info;
>      int rc;
>  
> -    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
> +    rc = xc_domain_getinfo_single(CTX->xch, domid, &info);
>      if (rc < 0) {
>          LOGED(ERROR, domid, "Getting domain info");
>          return ERROR_FAIL;

Ditto.

>      }
> -    if (rc != 1 || info.domain != domid)
> -        return ERROR_INVAL;
>  
>      rc = xc_sched_credit2_domain_get(CTX->xch, domid, &sdom);
>      if (rc != 0) {


Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed May 03 16:50:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 16:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529398.823774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFft-0008Fe-JE; Wed, 03 May 2023 16:49:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529398.823774; Wed, 03 May 2023 16:49:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFft-0008FX-GT; Wed, 03 May 2023 16:49:53 +0000
Received: by outflank-mailman (input) for mailman id 529398;
 Wed, 03 May 2023 16:49:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dYIa=AY=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1puFfs-0008FR-Ad
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 16:49:52 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20607.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::607])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8459330c-e9d2-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 18:49:50 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by CO6PR12MB5412.namprd12.prod.outlook.com (2603:10b6:5:35e::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 16:49:47 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94%7]) with mapi id 15.20.6363.022; Wed, 3 May 2023
 16:49:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8459330c-e9d2-11ed-b225-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tw2kDdaQA48o2ztDNh7bwQ29b2De6+ByZ/x1YYln9oE=;
 b=wPsiJ1kNsKdIdU/j4AQ4ntvvcuI/gRehwcq3XZ3KPda/w1IRBNBcheofwnPueA2rxEhesastPchSobdYfVmy6pti57OONfMrZTsVtwwBsUGt6B4jm6KnZTiE6RtHmmfvLe2mESESx4TEIr8AY6pttLMathqcIID0Ha+CyjrOGug=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <e9a95271-021f-523a-770a-302c638bfe73@amd.com>
Date: Wed, 3 May 2023 17:49:40 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based systems
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
 <2d764f29-2eb9-ecff-84cd-9baf12961c54@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <2d764f29-2eb9-ecff-84cd-9baf12961c54@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P302CA0022.GBRP302.PROD.OUTLOOK.COM
 (2603:10a6:600:2c1::7) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|CO6PR12MB5412:EE_
X-MS-Office365-Filtering-Correlation-Id: 9583befc-75bd-4df0-6108-08db4bf66703
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9583befc-75bd-4df0-6108-08db4bf66703
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 16:49:46.7835
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fgKudBZMl92uRWGb5gX1tyWNqs4Xr1QdBpy3rJQWgw44NeYBLAEyUAphN98oiVc4oZbZdM/7WCkT1DGmYLnRpQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO6PR12MB5412


On 03/05/2023 08:40, Julien Grall wrote:
> Hi,
Hi Julien,
>
> Title: Did you mean "Enable spin table"?
Yes, that would be more concrete.
>
> On 02/05/2023 11:58, Ayan Kumar Halder wrote:
>> On some of the Arm32 based systems (eg Cortex-R52), smpboot is 
>> supported.
>
> Same here.
Yes
>
>> In these systems PSCI may not always be supported. In case of 
>> Cortex-R52, there
>> is no EL3 or secure mode. Thus, PSCI is not supported as it requires 
>> EL3.
>>
>> Thus, we use 'spin-table' mechanism to boot the secondary cpus. The 
>> primary
>> cpu provides the startup address of the secondary cores. This address is
>> provided using the 'cpu-release-addr' property.
>>
>> To support smpboot, we have copied the code from 
>> xen/arch/arm/arm64/smpboot.c
>
> I would rather prefer if we don't duplicate the code but instead move 
> the logic in common code.
Ack
>
>> with the following changes :-
>>
>> 1. 'enable-method' is an optional property. Refer to the comment in
>> https://www.kernel.org/doc/Documentation/devicetree/bindings/arm/cpus.yaml 
>>
>> "      # On ARM 32-bit systems this property is optional"
>
> Looking at this list, "spin-table" doesn't seem to be supported
> for 32-bit systems. 

However, looking at
https://developer.arm.com/documentation/den0013/d/Multi-core-processors/Booting-SMP-systems/SMP-boot-in-Linux,
it seems "spin-table" is a valid boot mechanism for Armv7 CPUs.


> Can you point me to the discussion/patch where this would be added?

Actually, this is the first discussion I am having with regard to
adding "spin-table" support on Arm32.

The logic that we will use for secondary CPU booting is similar to the
"spin-table" mechanism used in arm64/smpboot.c.

This is :-

1. Write the address of init_secondary() to the cpu-release-addr
location of the secondary CPU. (My current patch attempts to achieve
this.)

2. Write to the configuration register of the secondary CPU to bring it
out of reset.
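For context, the secondary-CPU side of the two steps above boils down to a busy-wait on the release slot (a minimal C sketch; the real code runs in early assembly with WFE/SEV and explicit barriers, and the names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative spin-table wait loop: the secondary CPU polls its
 * cpu-release-addr slot until the primary publishes a non-zero entry
 * point, then jumps to it.  The volatile pointer stands in for the
 * memory-mapped slot and for the barriers a real implementation needs.
 */
typedef void (*entry_fn)(void);

static entry_fn spin_table_wait(volatile uint64_t *release_addr)
{
    uint64_t entry;

    while ( (entry = *release_addr) == 0 )
        ; /* a real secondary would execute WFE here */

    return (entry_fn)(uintptr_t)entry;
}
```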

This is the corresponding patch (yet to be cleaned) which will be used 
to do step 2.

--- a/xen/arch/arm/platforms/amd-versal-net.c
+++ b/xen/arch/arm/platforms/amd-versal-net.c
@@ -36,6 +36,47 @@ static int versal_net_init_time(void)
      return 0;
  }

+static __init void versal_net_populate_plat_regs(void)
+{
+    /* TODO :- Parse 0xEB58C000 ie CORE_1_CFG0 from dtb */
+}
+
+static __init int versal_net_init(void)
+{
+    versal_net_populate_plat_regs();
+
+    return 0;
+}
+
+static __init int versasl_net_smp_init(void)
+{
+    return 0;
+}
+
+static __init int versal_net_cpu_up(int cpu)
+{
+    uint32_t __iomem *cpu_rel_addr = ioremap_nocache(0xEB58C000, 4);
+    uint32_t i = 0;
+
+    writel(1, cpu_rel_addr);
+
+    /* Delay has been added due to some platform nuance */
+    __iowmb();
+    for (i=0; i<0xF000000; i++)
+        __asm __volatile("nop");
+
+    writel(0, cpu_rel_addr);
+
+    /* Delay has been added due to some platform nuance */
+    __iowmb();
+    for (i=0; i<0xF000000; i++)
+        __asm __volatile("nop");
+
+    iounmap(cpu_rel_addr);
+
+    return 0;
+}
+
  static const char * const versal_net_dt_compat[] __initconst =
  {
      "xlnx,versal-net",
@@ -44,5 +85,8 @@ static const char * const versal_net_dt_compat[] __initconst =

  PLATFORM_START(versal_net, "XILINX VERSAL-NET")
      .compatible = versal_net_dt_compat,
+    .init = versal_net_init,
+    .smp_init = versasl_net_smp_init,
+    .cpu_up = versal_net_cpu_up,
      .init_time = versal_net_init_time,
  PLATFORM_END

>
>>
>> 2. psci is not currently supported as a value for 'enable-method'.
>>
>> 3. update_identity_mapping() is not invoked as we are not sure if it is
>> required.
>
> This is not necessary at the moment for 32-bit. This may change in the 
> future as we make the 32-bit boot code more compliant. For now, I 
> would not add it.

Ack.

- Ayan

>
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Wed May 03 17:03:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 17:03:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529401.823784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFsq-0002Cq-Om; Wed, 03 May 2023 17:03:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529401.823784; Wed, 03 May 2023 17:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puFsq-0002Cj-L0; Wed, 03 May 2023 17:03:16 +0000
Received: by outflank-mailman (input) for mailman id 529401;
 Wed, 03 May 2023 17:03:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2zA8=AY=citrix.com=prvs=480c9ef0c=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puFsp-0002Cd-5k
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 17:03:15 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6229df47-e9d4-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 19:03:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6229df47-e9d4-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683133392;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=TFk000qXnvuKsmqHZ8WX/bYNw58Rk1KBLQEnXQGux8M=;
  b=Z0j0Bk6uCZXOdsBLz3jPU3HfL+XWLR9I09bzv24vcESjOGnRZVrgXfNA
   ta7QtFc4o8M3KSfMqzwBXI06+6riEJ81wLU40cKkWNMACLfkCc0YLg3SZ
   cadA9C3F+i4+9dNT2OEgOd/xyJ+Oqv+JjQiczEeh+QLHIsM+7l62oSrQw
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110190138
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
Date: Wed, 3 May 2023 18:02:58 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v3 2/3] tools: Use new xc function for some
 xc_domain_getinfo() calls
Message-ID: <1828a067-95fe-4d6e-9923-54f76593b9d8@perard>
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
 <20230502111338.16757-3-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230502111338.16757-3-alejandro.vallejo@cloud.com>

On Tue, May 02, 2023 at 12:13:37PM +0100, Alejandro Vallejo wrote:
> diff --git a/tools/libs/light/libxl_x86_acpi.c b/tools/libs/light/libxl_x86_acpi.c
> index 22eb160659..796b009d0c 100644
> --- a/tools/libs/light/libxl_x86_acpi.c
> +++ b/tools/libs/light/libxl_x86_acpi.c
> @@ -87,14 +87,14 @@ static int init_acpi_config(libxl__gc *gc,
>  {
>      xc_interface *xch = dom->xch;
>      uint32_t domid = dom->guest_domid;
> -    xc_dominfo_t info;
> +    xc_domaininfo_t info;
>      struct hvm_info_table *hvminfo;
>      int i, r, rc;
>  
>      config->dsdt_anycpu = config->dsdt_15cpu = dsdt_pvh;
>      config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_pvh_len;
>  
> -    r = xc_domain_getinfo(xch, domid, 1, &info);
> +    r = xc_domain_getinfo_single(xch, domid, &info);
>      if (r < 0) {
>          LOG(ERROR, "getdomaininfo failed (rc=%d)", r);

You could change this error message. The value of 'r' isn't interesting
anymore. Instead, you could replace that by LOGE or LOGED (both will
print errno).
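For illustration, the LOGE/LOGED family essentially folds strerror(errno) into the message; a self-contained stand-in (the helper name below is invented — the real macros log through the libxl context) might look like:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * Stand-in for libxl's LOGED-style helpers: format the message together
 * with the domid and the current errno, so the failure reason is
 * visible without printing the no-longer-meaningful return value.
 */
static int log_with_errno(char *buf, size_t len, unsigned int domid,
                          const char *msg)
{
    return snprintf(buf, len, "Domain %u: %s: %s",
                    domid, msg, strerror(errno));
}
```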


Otherwise, patch looks good:
Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed May 03 17:13:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 17:13:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529403.823793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puG2N-0003hq-LU; Wed, 03 May 2023 17:13:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529403.823793; Wed, 03 May 2023 17:13:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puG2N-0003hj-It; Wed, 03 May 2023 17:13:07 +0000
Received: by outflank-mailman (input) for mailman id 529403;
 Wed, 03 May 2023 17:13:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2zA8=AY=citrix.com=prvs=480c9ef0c=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puG2N-0003hd-42
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 17:13:07 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c3c7fa37-e9d5-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 19:13:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3c7fa37-e9d5-11ed-b225-6b7b168915f2
Date: Wed, 3 May 2023 18:12:53 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>
Subject: Re: [PATCH v3 3/3] domctl: Modify XEN_DOMCTL_getdomaininfo to fail
 if domid is not found
Message-ID: <acca7d2f-8060-4366-b048-a4963b014da5@perard>
References: <20230502111338.16757-1-alejandro.vallejo@cloud.com>
 <20230502111338.16757-4-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230502111338.16757-4-alejandro.vallejo@cloud.com>

On Tue, May 02, 2023 at 12:13:38PM +0100, Alejandro Vallejo wrote:
> It previously mimicked the getdomaininfo sysctl semantics by returning
> the first domid higher than the requested domid that does exist. This
> unintuitive behaviour causes quite a few mistakes and makes the call
> needlessly slow in its error path.
> 
> This patch removes the fallback search, returning -ESRCH if the requested
> domain doesn't exist. Domain discovery can still be done through the sysctl
> interface as that performs a linear search on the list of domains.
> 
> With this modification the xc_domain_getinfo() function is deprecated and
> removed to make sure it's not mistakenly used expecting the old behaviour.
> The new xc wrapper is xc_domain_getinfo_single().
> 
> All previous callers of xc_domain_getinfo() have been updated to use
> xc_domain_getinfo_single() or xc_domain_getinfolist() instead. This also
> means xc_dominfo_t is no longer used by anything and can be purged.
> 
> Resolves: xen-project/xen#105
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed May 03 17:15:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 17:15:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529408.823803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puG4j-0004Lu-6w; Wed, 03 May 2023 17:15:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529408.823803; Wed, 03 May 2023 17:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puG4j-0004Ln-2v; Wed, 03 May 2023 17:15:33 +0000
Received: by outflank-mailman (input) for mailman id 529408;
 Wed, 03 May 2023 17:15:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nFVo=AY=tklengyel.com=bounce+e181d6.cd840-xen-devel=lists.xenproject.org@srs-se1.protection.inumbo.net>)
 id 1puG4i-0004Lf-87
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 17:15:32 +0000
Received: from rs227.mailgun.us (rs227.mailgun.us [209.61.151.227])
 by se1-gles-sth1.inumbo.com (Halon) with UTF8SMTPS
 id 1ae33198-e9d6-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 19:15:31 +0200 (CEST)
Received: from mail-yb1-f172.google.com (mail-yb1-f172.google.com
 [209.85.219.172]) by
 628c2f3507a6 with SMTP id 645296b08a040ae8e218a377 (version=TLS1.3,
 cipher=TLS_AES_128_GCM_SHA256); Wed, 03 May 2023 17:15:28 GMT
Received: by mail-yb1-f172.google.com with SMTP id
 3f1490d57ef6-b9a6f17f2b6so4533649276.1
 for <xen-devel@lists.xenproject.org>; Wed, 03 May 2023 10:15:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 1ae33198-e9d6-11ed-b225-6b7b168915f2
Sender: tamas@tklengyel.com
MIME-Version: 1.0
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com> <a79d2a8b-6d6e-bd31-b079-a30b555e5fd0@suse.com>
In-Reply-To: <a79d2a8b-6d6e-bd31-b079-a30b555e5fd0@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Wed, 3 May 2023 13:14:52 -0400
X-Gmail-Original-Message-ID: <CABfawhn4CRnctzV-17di4eYyNhSGTSMckZjgphS1Rg6HUGOtHw@mail.gmail.com>
Message-ID: <CABfawhn4CRnctzV-17di4eYyNhSGTSMckZjgphS1Rg6HUGOtHw@mail.gmail.com>
Subject: Re: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest areas
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: multipart/alternative; boundary="000000000000d62b1705facd33c8"

--000000000000d62b1705facd33c8
Content-Type: text/plain; charset="UTF-8"

> @@ -1974,7 +2046,10 @@ int mem_sharing_fork_reset(struct domain
>
>   state:
>      if ( reset_state )
> +    {
>          rc = copy_settings(d, pd);
> +        /* TBD: What to do here with -ERESTART? */

Ideally we could avoid hitting code-paths that are restartable during fork
reset, since it gets called from vm_event replies that have no concept of
handling errors. If we start having errors like this, we would just have to
drop the vm_event reply optimization and issue a standalone fork reset
hypercall every time; that isn't a big deal, just slower. My preference
would actually be that, after the initial forking is performed, a local
copy of the parent's state is maintained for the fork to reset to, so
there would be no case of hitting an error like this. It would also allow
us, in the future, to unpause the parent, for example.

Tamas

--000000000000d62b1705facd33c8--


From xen-devel-bounces@lists.xenproject.org Wed May 03 17:37:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 17:37:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529411.823813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGPV-0006pJ-Sy; Wed, 03 May 2023 17:37:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529411.823813; Wed, 03 May 2023 17:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGPV-0006pC-QK; Wed, 03 May 2023 17:37:01 +0000
Received: by outflank-mailman (input) for mailman id 529411;
 Wed, 03 May 2023 17:37:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZlPw=AY=citrix.com=prvs=4803f4e7c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puGPU-0006p6-75
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 17:37:00 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19cc0720-e9d9-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 19:36:58 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 13:36:51 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6616.namprd03.prod.outlook.com (2603:10b6:a03:389::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 17:36:45 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 17:36:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19cc0720-e9d9-11ed-b225-6b7b168915f2
Message-ID: <e21cb410-c227-0cdd-edf6-ffed6d2dc831@citrix.com>
Date: Wed, 3 May 2023 18:36:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v5 3/3] tools/xen-ucode: print information about currently
 loaded ucode
Content-Language: en-GB
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross <jgross@suse.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20230417135335.17176-1-sergey.dyasli@citrix.com>
 <20230417135335.17176-4-sergey.dyasli@citrix.com>
In-Reply-To: <20230417135335.17176-4-sergey.dyasli@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0103.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:191::18) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 17/04/2023 2:53 pm, Sergey Dyasli wrote:
> Add an option to xen-ucode tool to print the currently loaded ucode
> revision and also print it during usage info.  Print CPU signature and
> platform flags as well.  The raw data comes from XENPF_get_cpu_version
> and XENPF_get_ucode_revision platform ops.
>
> Example output:
>     Intel: CPU signature 06-55-04 (raw 0x00050654) pf 0x1 revision 0x02006e05
>       AMD: CPU signature 19-01-01 (raw 0x00a00f11) revision 0x0a0011ce
>
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

This is good enough for now.  Incremental improvements can follow up.
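
As a side note, the family/model/stepping triplets in the example output
decode from the raw CPUID signature (leaf 1, EAX) in the standard way.
A minimal sketch of that decoding (illustrative only, not part of the
patch):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode a raw x86 CPUID signature (leaf 1, EAX) into the
 * family-model-stepping triplet shown in the example output. */
static void decode_signature(uint32_t raw, unsigned *family,
                             unsigned *model, unsigned *stepping)
{
    unsigned base_family = (raw >> 8) & 0xf;

    *stepping = raw & 0xf;
    *model    = (raw >> 4) & 0xf;
    *family   = base_family;

    /* The extended family field is added only when base family == 0xf. */
    if ( base_family == 0xf )
        *family += (raw >> 20) & 0xff;

    /* The extended model field applies for families 0x6 and 0xf. */
    if ( base_family == 0x6 || base_family == 0xf )
        *model |= ((raw >> 16) & 0xf) << 4;
}

/* Format as in the commit message, e.g. 0x00050654 -> "06-55-04". */
static void print_signature(uint32_t raw)
{
    unsigned f, m, s;

    decode_signature(raw, &f, &m, &s);
    printf("%02x-%02x-%02x\n", f, m, s);
}
```

Feeding the two raw values from the example output through this gives
06-55-04 for 0x00050654 and 19-01-01 for 0x00a00f11, matching the
commit message.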


From xen-devel-bounces@lists.xenproject.org Wed May 03 17:44:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 17:44:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529414.823824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGW4-0008Gc-M0; Wed, 03 May 2023 17:43:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529414.823824; Wed, 03 May 2023 17:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGW4-0008GV-IN; Wed, 03 May 2023 17:43:48 +0000
Received: by outflank-mailman (input) for mailman id 529414;
 Wed, 03 May 2023 17:43:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puGW3-0008GP-Qx
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 17:43:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puGW3-0001DY-6Q; Wed, 03 May 2023 17:43:47 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puGW2-0004EE-SN; Wed, 03 May 2023 17:43:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <556611a5-dc9a-8155-650d-327b6853f761@xen.org>
Date: Wed, 3 May 2023 18:43:44 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based systems
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
 <2d764f29-2eb9-ecff-84cd-9baf12961c54@xen.org>
 <e9a95271-021f-523a-770a-302c638bfe73@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <e9a95271-021f-523a-770a-302c638bfe73@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Ayan,

On 03/05/2023 17:49, Ayan Kumar Halder wrote:
> 
> On 03/05/2023 08:40, Julien Grall wrote:
>> Hi,
> Hi Julien,
>>
>> Title: Did you mean "Enable spin table"?
> Yes, that would be more concrete.
>>
>> On 02/05/2023 11:58, Ayan Kumar Halder wrote:
>>> On some of the Arm32 based systems (eg Cortex-R52), smpboot is 
>>> supported.
>>
>> Same here.
> Yes
>>
>>> In these systems PSCI may not always be supported. In case of 
>>> Cortex-R52, there
>>> is no EL3 or secure mode. Thus, PSCI is not supported as it requires 
>>> EL3.
>>>
>>> Thus, we use 'spin-table' mechanism to boot the secondary cpus. The 
>>> primary
>>> cpu provides the startup address of the secondary cores. This address is
>>> provided using the 'cpu-release-addr' property.
>>>
>>> To support smpboot, we have copied the code from 
>>> xen/arch/arm/arm64/smpboot.c
>>
>> I would rather prefer if we don't duplicate the code but instead move 
>> the logic in common code.
> Ack
>>
>>> with the following changes :-
>>>
>>> 1. 'enable-method' is an optional property. Refer to the comment in
>>> https://www.kernel.org/doc/Documentation/devicetree/bindings/arm/cpus.yaml
>>> "      # On ARM 32-bit systems this property is optional"
>>
>> Looking at this list, "spin-table" doesn't seem to be supported
>> for 32-bit systems. 
> 
> However, looking at 
> https://developer.arm.com/documentation/den0013/d/Multi-core-processors/Booting-SMP-systems/SMP-boot-in-Linux , it seems "spin-table" is a valid boot mechanism for Armv7 cpus.

I am not able to find the associated code in Linux 32-bit. Do you have 
any pointer?

> 
> 
>> Can you point me to the discussion/patch where this would be added?
> 
> Actually, this is the first discussion I am having with regards to 
> adding a "spin-table" support on Arm32.

I was asking for the discussion on the Device-Tree/Linux ML, or the code.
I don't really want to add "spin-table" support if this is not even
supported in Linux.
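
For reference, a spin-table boot is typically described in the device
tree along these lines (values illustrative; the binding is currently
only documented for 64-bit Arm, which is the point of contention here):

```dts
cpus {
    #address-cells = <1>;
    #size-cells = <0>;

    cpu@0 {
        device_type = "cpu";
        compatible = "arm,cortex-r52";
        reg = <0>;
    };

    cpu@1 {
        device_type = "cpu";
        compatible = "arm,cortex-r52";
        reg = <1>;
        enable-method = "spin-table";
        /* 64-bit address the primary CPU writes the secondary's
         * entry point to before kicking it. */
        cpu-release-addr = <0x0 0x8000fff8>;
    };
};
```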

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 17:46:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 17:46:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529416.823834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGYC-0000Qj-11; Wed, 03 May 2023 17:46:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529416.823834; Wed, 03 May 2023 17:45:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGYB-0000Qc-UJ; Wed, 03 May 2023 17:45:59 +0000
Received: by outflank-mailman (input) for mailman id 529416;
 Wed, 03 May 2023 17:45:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puGYA-0000QS-Gs; Wed, 03 May 2023 17:45:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puGYA-0001H4-FH; Wed, 03 May 2023 17:45:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puGYA-0000F1-3C; Wed, 03 May 2023 17:45:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puGYA-0001Am-2d; Wed, 03 May 2023 17:45:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yG2gX4m8SPgzFxt09UFUplBjhrvtZSdRomfzn7CSzac=; b=09Gk4UhEh4kspzgLf/s9YSAiiV
	2HUel732nZec5HJ1RG5BuvGNB0eT5a6jd2eBcV4BACogpLQSyUPMgmcTdnmTXTe1bdqt+6GCLms4C
	uG2WcX1k874ch+xe8YSlBA1NwdM5ojbkp5sHugquJDVAHEUmMINqRU4VQcCc4vzSZLeg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180513-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180513: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=b4f5e6c91b9871173de205f81b51cf06a833fcb1
X-Osstest-Versions-That:
    libvirt=844a3b48d6161560eddc1e1f85719b67659f1ea9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 May 2023 17:45:58 +0000

flight 180513 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180513/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180479
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180479
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180479
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              b4f5e6c91b9871173de205f81b51cf06a833fcb1
baseline version:
 libvirt              844a3b48d6161560eddc1e1f85719b67659f1ea9

Last test of basis   180479  2023-04-29 04:20:19 Z    4 days
Testing same since   180513  2023-05-03 04:18:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiri Denemark <jdenemar@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Shaleen Bathla <shaleen.bathla@oracle.com>
  Tim Shearer <tshearer@adva.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   844a3b48d6..b4f5e6c91b  b4f5e6c91b9871173de205f81b51cf06a833fcb1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 03 17:51:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 17:51:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529424.823844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGd5-0001vw-Nx; Wed, 03 May 2023 17:51:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529424.823844; Wed, 03 May 2023 17:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGd5-0001vp-Kc; Wed, 03 May 2023 17:51:03 +0000
Received: by outflank-mailman (input) for mailman id 529424;
 Wed, 03 May 2023 17:51:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puGd4-0001vf-NN; Wed, 03 May 2023 17:51:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puGd4-0001MC-It; Wed, 03 May 2023 17:51:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puGd4-0000S0-2d; Wed, 03 May 2023 17:51:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puGd4-0002wd-29; Wed, 03 May 2023 17:51:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MsiWno7UsTjhI/mSnDzGVdjJifgINL2VW4bzZfnD9gk=; b=AvUrIoXrYtafWCnH+j73cC2M3W
	Eg2sXcNhbK5dorBkdAN7z5Iuzb+evSEGKCEk0eknPzOnGuOwW3Z+ldU1xoW5H41W56HtmBmWQDRVx
	bub082P2SjgB9c5ak9QCZuru5DmIBKgjyMKCOk7opb+/j0keHW9Uf34Zvnuw80SxxiXo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180518-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180518: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=99a9c3d7141063ae3f357892c6181cfa3be8a280
X-Osstest-Versions-That:
    xen=0956aa2219745a198bb6a0a99e2108a3c09b280e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 May 2023 17:51:02 +0000

flight 180518 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180518/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  99a9c3d7141063ae3f357892c6181cfa3be8a280
baseline version:
 xen                  0956aa2219745a198bb6a0a99e2108a3c09b280e

Last test of basis   180517  2023-05-03 12:01:56 Z    0 days
Testing same since   180518  2023-05-03 15:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Viresh Kumar <viresh.kumar@linaro.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0956aa2219..99a9c3d714  99a9c3d7141063ae3f357892c6181cfa3be8a280 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 03 17:54:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 17:54:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529429.823854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGgG-0002Vf-6W; Wed, 03 May 2023 17:54:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529429.823854; Wed, 03 May 2023 17:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puGgG-0002VY-3h; Wed, 03 May 2023 17:54:20 +0000
Received: by outflank-mailman (input) for mailman id 529429;
 Wed, 03 May 2023 17:54:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ISkc=AY=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1puGgF-0002VS-Ew
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 17:54:19 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86135c70-e9db-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 19:54:18 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id BE40260F7B;
 Wed,  3 May 2023 17:54:16 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3FBB9C433EF;
 Wed,  3 May 2023 17:54:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86135c70-e9db-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683136456;
	bh=pYSY+/3DUBT8zv91GwZKnPhkAlej1WQJEZ6BUC42DsM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZbZQB4SU/VtM5irjWc60DW7lyHBSM0kf8mCFsv9LbYSgD/tyhYNdsg+aOhQbUks5x
	 q4pV+8YX+xd+98Jb/GYDyCrJqD5g0UADxB+gXs7d0NHl96NUjKgzBb9U6o+nTyjGSv
	 rXJe86FOc1K+W4CcqdFFDqdYdQMy8M1ZPp6e5G8nemHjOEgeQyk3brd0zJJwmzwmtD
	 J3YKxomKeQBxJG/9K9WGid1hvJyLgmE6jvQrn4NA8/9RQ4Rxj15ZlB6OdumfaarL60
	 lrPr8pyrYVKNuovx1NIgmu2/2pU7i582EsytVw/Yy/c4NySIkdXeUYzcF7xfpSdl+J
	 YNDGQFZnFpD2g==
Date: Wed, 3 May 2023 10:54:12 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
    xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, 
    Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based
 systems
In-Reply-To: <a0d48f47-bb62-5ed0-0c9b-95935dc75ca3@xen.org>
Message-ID: <alpine.DEB.2.22.394.2305031053590.974517@ubuntu-linux-20-04-desktop>
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com> <alpine.DEB.2.22.394.2305021643010.974517@ubuntu-linux-20-04-desktop> <a0d48f47-bb62-5ed0-0c9b-95935dc75ca3@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 3 May 2023, Julien Grall wrote:
> Hi Stefano,
> 
> On 03/05/2023 00:55, Stefano Stabellini wrote:
> > > +    {
> > > +        printk("CPU%d: No release addr\n", cpu);
> > > +        return -ENODEV;
> > > +    }
> > > +
> > > +    release = ioremap_nocache(cpu_release_addr[cpu], 4);
> > > +    if ( !release )
> > > +    {
> > > +        dprintk(XENLOG_ERR, "CPU%d: Unable to map release address\n",
> > > cpu);
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    writel(__pa(init_secondary), release);
> > > +
> > > +    iounmap(release);
> > 
> > I think we need a wmb() ?
> 
> I am not sure why we would need a wmb() here.

The code does:

writel(__pa(init_secondary), release);
iounmap
sev();

I was thinking of trying to make sure the write is completed before
issuing a sev(). Technically it is possible for the CPU to reorder the
sev() before the write as there is no explicit dependency between the
two?
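[A rough single-process analogy of the ordering concern, sketched with C11 atomics. This is an illustration only: Xen's actual primitives are writel()/wmb()/sev(), and `release` here stands in for the mapped cpu-release-addr slot.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* The spin-table release word; stands in for the mapped
 * cpu-release-addr slot on real hardware. */
static _Atomic uint32_t release;

/* Primary side: publish the secondary's entry point.  The release
 * store forbids later operations (the wake-up) from being observed
 * before the address is visible -- the role a wmb() between writel()
 * and sev() would play. */
static void wake_secondary(uint32_t entry_pa)
{
    atomic_store_explicit(&release, entry_pa, memory_order_release);
    /* sev(); -- on Arm, wake cores sleeping in wfe() */
}

/* Secondary side: spin until the primary writes a non-zero address. */
static uint32_t poll_release(void)
{
    uint32_t v;

    while ((v = atomic_load_explicit(&release, memory_order_acquire)) == 0)
        ; /* wfe() between polls on real hardware */
    return v;
}
```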


> Instead, looking at the Linux
> version, I think we are missing a cache flush (as does arm64) which would
> be necessary if the CPU waiting for the release doesn't have cache enabled.

I thought about it as well but here the patch is calling
ioremap_nocache(), so cache flushes should not be needed?


From xen-devel-bounces@lists.xenproject.org Wed May 03 18:19:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 18:19:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529444.823864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puH46-00056B-5a; Wed, 03 May 2023 18:18:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529444.823864; Wed, 03 May 2023 18:18:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puH46-000564-1k; Wed, 03 May 2023 18:18:58 +0000
Received: by outflank-mailman (input) for mailman id 529444;
 Wed, 03 May 2023 18:18:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puH45-00055y-FP
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 18:18:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puH45-00022g-2c; Wed, 03 May 2023 18:18:57 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230] helo=[192.168.7.72])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puH44-00063p-RC; Wed, 03 May 2023 18:18:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=NNhvbZyBNIyCOxi54PjONIH+ezlsZypVpRL/LqBtjNg=; b=IwgPIXVtPteAOIbwzng4iPF4rZ
	bvCXgThwiPL5y8QvsEEuctIz6MJMA35rndKy5MbM01n7t61qatHpZb/R7ZfGLaJfMF4sMolYam5qZ
	BMWyn1nNDXtuE3LWOG2mH2/Y6WOuj+2/AcA2GF8kBXcE+2Eaqw/7bI0QxApr4aIITP4g=;
Message-ID: <e46baef3-be2f-bf76-1667-8b0562849b06@xen.org>
Date: Wed, 3 May 2023 19:18:54 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based systems
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
 <alpine.DEB.2.22.394.2305021643010.974517@ubuntu-linux-20-04-desktop>
 <a0d48f47-bb62-5ed0-0c9b-95935dc75ca3@xen.org>
 <alpine.DEB.2.22.394.2305031053590.974517@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2305031053590.974517@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 03/05/2023 18:54, Stefano Stabellini wrote:
> On Wed, 3 May 2023, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 03/05/2023 00:55, Stefano Stabellini wrote:
>>>> +    {
>>>> +        printk("CPU%d: No release addr\n", cpu);
>>>> +        return -ENODEV;
>>>> +    }
>>>> +
>>>> +    release = ioremap_nocache(cpu_release_addr[cpu], 4);
>>>> +    if ( !release )
>>>> +    {
>>>> +        dprintk(XENLOG_ERR, "CPU%d: Unable to map release address\n",
>>>> cpu);
>>>> +        return -EFAULT;
>>>> +    }
>>>> +
>>>> +    writel(__pa(init_secondary), release);
>>>> +
>>>> +    iounmap(release);
>>>
>>> I think we need a wmb() ?
>>
>> I am not sure why we would need a wmb() here.
> 
> The code does:
> 
> writel(__pa(init_secondary), release);
> iounmap
> sev();
> 
> I was thinking of trying to make sure the write is completed before
> issuing a sev(). Technically it is possible for the CPU to reorder the
> sev() before the write as there is no explicit dependency between the
> two?

I would assume that iounmap() would contain a wmb(). But I guess this 
is not something we should rely on.

> 
>> Instead, looking at the Linux
>> version, I think we are missing a cache flush (as does arm64) which would
>> be necessary if the CPU waiting for the release doesn't have cache enabled.
> 
> I thought about it as well but here the patch is calling
> ioremap_nocache(), so cache flushes should not be needed?

Hmmm... You are right, we are using ioremap_nocache(). I got confused 
because Linux is using ioremap_cache(). I am now wondering whether we 
are using the correct helper.

AFAIU, spin table is part of the reserved memory section in the 
Device-Tree. From section 5.3 in [1], the memory could be mapped 
cacheable by the other end. So for correctness, it seems to me that we 
would need to use ioremap_cache() + cache flush.

BTW, writel_relaxed() should be sufficient here as there is no ordering 
necessary with any previous write.
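[A hypothetical model of the sequence suggested here: map the release slot cacheable, store the entry point with a relaxed write, then clean the cache so a secondary polling with its caches disabled sees the value. Every helper below is a stand-in; the real Xen equivalents would be ioremap_cache(), writel_relaxed() and a dcache clean (helper names assumed, not taken from the patch).]

```c
#include <stdint.h>

static uint32_t cached_copy;  /* the primary's cached view of the slot */
static uint32_t ram_copy;     /* what an uncached observer would read  */

/* Stand-in for ioremap_cache(): hand back a cacheable mapping. */
static volatile uint32_t *ioremap_cache_stub(void)
{
    return &cached_copy;
}

/* Model "clean the dcache": push the cached write out to RAM so a CPU
 * reading with caches disabled sees it. */
static void clean_dcache_stub(void)
{
    ram_copy = cached_copy;
}

static void release_secondary(uint32_t entry_pa)
{
    volatile uint32_t *release = ioremap_cache_stub();

    *release = entry_pa;  /* writel_relaxed(): no earlier write to order */
    clean_dcache_stub();  /* make the value visible to uncached readers */
    /* sev(); -- then wake the secondary spinning in wfe() */
}
```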

Cheers,

[1] 
https://github.com/devicetree-org/devicetree-specification/releases/tag/v0.3

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 03 18:37:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 18:37:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529447.823874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHLe-0007W7-MM; Wed, 03 May 2023 18:37:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529447.823874; Wed, 03 May 2023 18:37:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHLe-0007W0-JX; Wed, 03 May 2023 18:37:06 +0000
Received: by outflank-mailman (input) for mailman id 529447;
 Wed, 03 May 2023 18:26:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6/6o=AY=intel.com=sohil.mehta@srs-se1.protection.inumbo.net>)
 id 1puHBn-0006ZA-Gs
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 18:26:55 +0000
Received: from mga04.intel.com (mga04.intel.com [192.55.52.120])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1221fb74-e9e0-11ed-b225-6b7b168915f2;
 Wed, 03 May 2023 20:26:52 +0200 (CEST)
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 03 May 2023 11:14:32 -0700
Received: from fmsmsx603.amr.corp.intel.com ([10.18.126.83])
 by fmsmga008.fm.intel.com with ESMTP; 03 May 2023 11:14:31 -0700
Received: from fmsmsx611.amr.corp.intel.com (10.18.126.91) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Wed, 3 May 2023 11:14:31 -0700
Received: from fmsmsx611.amr.corp.intel.com (10.18.126.91) by
 fmsmsx611.amr.corp.intel.com (10.18.126.91) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Wed, 3 May 2023 11:14:31 -0700
Received: from FMSEDG603.ED.cps.intel.com (10.1.192.133) by
 fmsmsx611.amr.corp.intel.com (10.18.126.91) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Wed, 3 May 2023 11:14:31 -0700
Received: from NAM11-DM6-obe.outbound.protection.outlook.com (104.47.57.176)
 by edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Wed, 3 May 2023 11:14:30 -0700
Received: from BYAPR11MB3320.namprd11.prod.outlook.com (2603:10b6:a03:18::25)
 by BN9PR11MB5529.namprd11.prod.outlook.com (2603:10b6:408:102::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 18:14:28 +0000
Received: from BYAPR11MB3320.namprd11.prod.outlook.com
 ([fe80::a0e0:efa7:eda3:b36b]) by BYAPR11MB3320.namprd11.prod.outlook.com
 ([fe80::a0e0:efa7:eda3:b36b%6]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 18:14:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1221fb74-e9e0-11ed-b225-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1683138412; x=1714674412;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=YU1LpqW36QSHkB5/jJBCAPcOaEJjCOlcwEq4cJabuDE=;
  b=NIJrpZ1nl7ilNw570AgiH8u45EuwVQrgX6BRoH5zYo3ddmWpP9SDuJcn
   5wGPDcblT4zhfReqimwa5oJQux3Glg77RWzAlhBoQdRvyPFTZ+cgZ5w65
   YsiynqCF45lj6IhguEnPR0K1pG0tFc3Tx39rsLkMNg4lCNsLRV4mtVeZr
   vPP4VGBbt6u/+6C0tI55LEJxQf+Rl25KH9WfVvf4UlJ/tNBo5s480KlYk
   SIkhiYk4Sk6MHvYKbMC5eOsGDqkHG2lW42SyrUpO2R6P6uBmGKsgUHCXl
   6y/b3IXY3s+94psKDEllJzUhIVx3YDcyAXu5QE6mYsnbVgj1DqFNfIoG7
   g==;
X-IronPort-AV: E=McAfee;i="6600,9927,10699"; a="347551839"
X-IronPort-AV: E=Sophos;i="5.99,248,1677571200"; 
   d="scan'208";a="347551839"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10699"; a="761589777"
X-IronPort-AV: E=Sophos;i="5.99,247,1677571200"; 
   d="scan'208";a="761589777"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
Message-ID: <26e6947c-6c56-d8c0-1baf-1ff006258679@intel.com>
Date: Wed, 3 May 2023 11:14:25 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
To: Juergen Gross <jgross@suse.com>
CC: <mikelley@microsoft.com>, Thomas Gleixner <tglx@linutronix.de>, "Ingo
 Molnar" <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen
	<dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>, "K. Y.
 Srinivasan" <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>, "Wei
 Liu" <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, <xen-devel@lists.xenproject.org>, "Jonathan
 Corbet" <corbet@lwn.net>, Andy Lutomirski <luto@kernel.org>, Peter Zijlstra
	<peterz@infradead.org>, <linux-kernel@vger.kernel.org>, <x86@kernel.org>,
	<linux-hyperv@vger.kernel.org>, <linux-doc@vger.kernel.org>
References: <20230502120931.20719-1-jgross@suse.com>
Content-Language: en-US
From: Sohil Mehta <sohil.mehta@intel.com>
In-Reply-To: <20230502120931.20719-1-jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SJ0PR05CA0048.namprd05.prod.outlook.com
 (2603:10b6:a03:33f::23) To BYAPR11MB3320.namprd11.prod.outlook.com
 (2603:10b6:a03:18::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR11MB3320:EE_|BN9PR11MB5529:EE_
X-MS-Office365-Filtering-Correlation-Id: fd3be5c8-54b9-4457-ca96-08db4c023b98
X-LD-Processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-MS-Exchange-CrossTenant-Network-Message-Id: fd3be5c8-54b9-4457-ca96-08db4c023b98
X-MS-Exchange-CrossTenant-AuthSource: BYAPR11MB3320.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 18:14:27.8377
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8ktFFMmqRMunU622QH2XA2R1m6wBuoC/irgg/3W26PGzyAN7jZ4Tz010x8zYpjva6OQxsQ2CxSOtMsk2EVa7oA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR11MB5529
X-OriginatorOrg: intel.com

> Juergen Gross (16):
>   x86/mtrr: remove physical address size calculation
>   x86/mtrr: replace some constants with defines
>   x86/mtrr: support setting MTRR state for software defined MTRRs
>   x86/hyperv: set MTRR state when running as SEV-SNP Hyper-V guest
>   x86/xen: set MTRR state when running as Xen PV initial domain
>   x86/mtrr: replace vendor tests in MTRR code
>   x86/mtrr: have only one set_mtrr() variant
>   x86/mtrr: move 32-bit code from mtrr.c to legacy.c
>   x86/mtrr: allocate mtrr_value array dynamically
>   x86/mtrr: add get_effective_type() service function
>   x86/mtrr: construct a memory map with cache modes
>   x86/mtrr: add mtrr=debug command line option
>   x86/mtrr: use new cache_map in mtrr_type_lookup()
>   x86/mtrr: don't let mtrr_type_lookup() return MTRR_TYPE_INVALID
>   x86/mm: only check uniform after calling mtrr_type_lookup()
>   x86/mtrr: remove unused code
> 

A nit: Documentation/process/maintainer-tip.rst suggests:
"The condensed patch description in the subject line should start with a
uppercase letter and ..."

-Sohil


From xen-devel-bounces@lists.xenproject.org Wed May 03 18:58:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 18:58:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529455.823884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHgL-0001X2-Ck; Wed, 03 May 2023 18:58:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529455.823884; Wed, 03 May 2023 18:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHgL-0001Wv-9i; Wed, 03 May 2023 18:58:29 +0000
Received: by outflank-mailman (input) for mailman id 529455;
 Wed, 03 May 2023 18:58:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZlPw=AY=citrix.com=prvs=4803f4e7c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puHgJ-0001Wp-Sn
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 18:58:28 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7a7728d0-e9e4-11ed-b226-6b7b168915f2;
 Wed, 03 May 2023 20:58:25 +0200 (CEST)
X-Inumbo-ID: 7a7728d0-e9e4-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683140305;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=TZ0uzHXRe5NFf4mFFjml7I3UzuX0lFxn2xNYPMCq6gY=;
  b=a2QIiL5KdIyMuroyjjK4+MXSGFEJrKCyxdl/o1kLvWMN8s4I4ueCjYP4
   hCIxpASNXch9cBuWjPq7TbUVUQm3jqQuT/IR3X2LtedyIa8D522MjNqlc
   poCq2jo6+9VHIpr75H0oSgbw+btwuw0iL5hx6j87QVvK8IGFpIjc7hrUu
   E=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110204666
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,248,1677560400"; 
   d="scan'208";a="110204666"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/ucode: Refresh raw CPU policy after microcode load
Date: Wed, 3 May 2023 19:58:13 +0100
Message-ID: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Loading microcode can cause new features to appear.  This has happened
routinely since Spectre/Meltdown, and even the presence of new status bits can
mean the administrator has no further work to perform.

Refresh the raw CPU policy after late microcode load, so xen-cpuid can reflect
the updated state of the system.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

This is also the first step of being able to livepatch support for new
functionality in microcode.
---
 xen/arch/x86/cpu-policy.c             | 6 +++---
 xen/arch/x86/cpu/microcode/core.c     | 4 ++++
 xen/arch/x86/include/asm/cpu-policy.h | 6 ++++++
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index a58bf6cad54e..ef6a2d0d180a 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -15,7 +15,7 @@
 #include <asm/setup.h>
 #include <asm/xstate.h>
 
-struct cpu_policy __ro_after_init     raw_cpu_policy;
+struct cpu_policy __read_mostly       raw_cpu_policy;
 struct cpu_policy __ro_after_init    host_cpu_policy;
 #ifdef CONFIG_PV
 struct cpu_policy __ro_after_init  pv_max_cpu_policy;
@@ -343,7 +343,7 @@ static void recalculate_misc(struct cpu_policy *p)
     }
 }
 
-static void __init calculate_raw_policy(void)
+void calculate_raw_cpu_policy(void)
 {
     struct cpu_policy *p = &raw_cpu_policy;
 
@@ -655,7 +655,7 @@ static void __init calculate_hvm_def_policy(void)
 
 void __init init_guest_cpu_policies(void)
 {
-    calculate_raw_policy();
+    calculate_raw_cpu_policy();
     calculate_host_policy();
 
     if ( IS_ENABLED(CONFIG_PV) )
diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c
index 61cd36d601d6..cd456c476fbf 100644
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -34,6 +34,7 @@
 #include <xen/watchdog.h>
 
 #include <asm/apic.h>
+#include <asm/cpu-policy.h>
 #include <asm/delay.h>
 #include <asm/nmi.h>
 #include <asm/processor.h>
@@ -677,6 +678,9 @@ static long cf_check microcode_update_helper(void *data)
         spin_lock(&microcode_mutex);
         microcode_update_cache(patch);
         spin_unlock(&microcode_mutex);
+
+        /* Refresh the raw CPU policy, in case the features have changed. */
+        calculate_raw_cpu_policy();
     }
     else
         microcode_free_patch(patch);
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
index b361537a602b..99d5a8e67eeb 100644
--- a/xen/arch/x86/include/asm/cpu-policy.h
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -24,4 +24,10 @@ void init_dom0_cpuid_policy(struct domain *d);
 /* Clamp the CPUID policy to reality. */
 void recalculate_cpuid_policy(struct domain *d);
 
+/*
+ * Collect the raw CPUID and MSR values.  Called during boot, and after late
+ * microcode loading.
+ */
+void calculate_raw_cpu_policy(void);
+
 #endif /* X86_CPU_POLICY_H */
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 03 19:13:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 19:13:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529460.823894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHuS-0003zq-Os; Wed, 03 May 2023 19:13:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529460.823894; Wed, 03 May 2023 19:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHuS-0003zj-M3; Wed, 03 May 2023 19:13:04 +0000
Received: by outflank-mailman (input) for mailman id 529460;
 Wed, 03 May 2023 19:13:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ArVX=AY=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1puHuR-0003zd-T9
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 19:13:04 +0000
Received: from mail.skyhub.de (mail.skyhub.de [5.9.137.197])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86470aff-e9e6-11ed-b226-6b7b168915f2;
 Wed, 03 May 2023 21:13:02 +0200 (CEST)
Received: from zn.tnic (p5de8e8ea.dip0.t-ipconnect.de [93.232.232.234])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id B681A1EC0674;
 Wed,  3 May 2023 21:13:01 +0200 (CEST)
X-Inumbo-ID: 86470aff-e9e6-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1683141181;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=L/8fxhHQV9gUmxp73tGqFoK9bK/rX7AQ4vVPU8ZIH0I=;
	b=KcU0BTLrM12dhsRj+QLZSG0/+RmUKUiYJfiYee8JlzdiNbgVFD/LmgeqgPH66sbzspCphX
	bqQXAAnTIVKkGkWa5UKHtB5PoYSIhviQ5VRtBo8QE52D5xzSpgI5rqz3rUcKRpD07YdiXO
	lnMXaXWkqPpgYFEw1Wfi0/RkoaQnTVI=
Date: Wed, 3 May 2023 21:12:56 +0200
From: Borislav Petkov <bp@alien8.de>
To: Sohil Mehta <sohil.mehta@intel.com>
Cc: Juergen Gross <jgross@suse.com>, mikelley@microsoft.com,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-kernel@vger.kernel.org,
	x86@kernel.org, linux-hyperv@vger.kernel.org,
	linux-doc@vger.kernel.org
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230503191256.GGZFKyOLi12JS/HuXD@fat_crate.local>
References: <20230502120931.20719-1-jgross@suse.com>
 <26e6947c-6c56-d8c0-1baf-1ff006258679@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <26e6947c-6c56-d8c0-1baf-1ff006258679@intel.com>

On Wed, May 03, 2023 at 11:14:25AM -0700, Sohil Mehta wrote:
> A Nit -> Documentation/process/maintainer-tip.rst suggests:
> "The condensed patch description in the subject line should start with a
> uppercase letter and ..."

Yeah, good point.

But my patch massaging script does that automatically so no need to
resend.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed May 03 19:17:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 19:17:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529462.823904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHyk-0004cP-A4; Wed, 03 May 2023 19:17:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529462.823904; Wed, 03 May 2023 19:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHyk-0004cI-72; Wed, 03 May 2023 19:17:30 +0000
Received: by outflank-mailman (input) for mailman id 529462;
 Wed, 03 May 2023 19:17:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ISkc=AY=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1puHyi-0004cC-S7
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 19:17:28 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 233d47f7-e9e7-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 21:17:26 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D970B62E8D;
 Wed,  3 May 2023 19:17:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4C610C433EF;
 Wed,  3 May 2023 19:17:23 +0000 (UTC)
X-Inumbo-ID: 233d47f7-e9e7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683141444;
	bh=7DLf9DxLpQdqFGhYy5M9J4sQ9tSCqG3t+rA1StCLhlE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=X2aXxT9thLirWjWcV+6M+Oxq5/YD6oX9/Qx5urINHMTXMUQhcMGdkcEujr/7dW5nS
	 HDdde+bRtr6mbyytXFEZeckeHlz6pzb2CODhRQlyib0PZAo3vbYBhuVgAuRPB0y2Ul
	 ctGdK8twVJgdncY33MFr9HSeZEvgY2xK7OEtCEDSsuJ6z/A5FAUFZuZb9KdMfDxKT1
	 VIDduja2BhCfwXLTOe4mmyckv1ZGuSPxD/PTO0tDppenqGoseYjkOpqHKNWnCKaMQk
	 iWBfzc7zXSUAsC+WiUm7kRTQS5fqP0hqXhmiXUVBzwSszn2oGBYH9zZf8FjRdMsxpm
	 XCkga0NTgR6PA==
Date: Wed, 3 May 2023 12:17:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Ayan Kumar Halder <ayan.kumar.halder@amd.com>, 
    xen-devel@lists.xenproject.org, stefano.stabellini@amd.com, 
    Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based
 systems
In-Reply-To: <e46baef3-be2f-bf76-1667-8b0562849b06@xen.org>
Message-ID: <alpine.DEB.2.22.394.2305031216310.974517@ubuntu-linux-20-04-desktop>
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com> <alpine.DEB.2.22.394.2305021643010.974517@ubuntu-linux-20-04-desktop> <a0d48f47-bb62-5ed0-0c9b-95935dc75ca3@xen.org> <alpine.DEB.2.22.394.2305031053590.974517@ubuntu-linux-20-04-desktop>
 <e46baef3-be2f-bf76-1667-8b0562849b06@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 3 May 2023, Julien Grall wrote:
> Hi Stefano,
> 
> On 03/05/2023 18:54, Stefano Stabellini wrote:
> > On Wed, 3 May 2023, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 03/05/2023 00:55, Stefano Stabellini wrote:
> > > > > +    {
> > > > > +        printk("CPU%d: No release addr\n", cpu);
> > > > > +        return -ENODEV;
> > > > > +    }
> > > > > +
> > > > > +    release = ioremap_nocache(cpu_release_addr[cpu], 4);
> > > > > +    if ( !release )
> > > > > +    {
> > > > > +        dprintk(XENLOG_ERR, "CPU%d: Unable to map release address\n",
> > > > > cpu);
> > > > > +        return -EFAULT;
> > > > > +    }
> > > > > +
> > > > > +    writel(__pa(init_secondary), release);
> > > > > +
> > > > > +    iounmap(release);
> > > > 
> > > > I think we need a wmb() ?
> > > 
> > > I am not sure why we would need a wmb() here.
> > 
> > The code does:
> > 
> > writel(__pa(init_secondary), release);
> > iounmap
> > sev();
> > 
> > I was thinking of trying to make sure the write is completed before
> > issuing a sev(). Technically it is possible for the CPU to reorder the
> > sev() before the write as there is no explicit dependency between the
> > two?
> 
> I would assume that iounmap() would contain a wmb(). But I guess this is not
> something we should rely on.
> 
> > 
> > > Instead, looking at the Linux
> > > version, I think we are missing a cache flush (so does on arm64) which
> > > would
> > > be necessary if the CPU waiting for the release doesn't have cache
> > > enabled.
> > 
> > I thought about it as well but here the patch is calling
> > ioremap_nocache(), so cache flushes should not be needed?
> 
> Hmmm... You are right, we are using ioremap_nocache(). I got confused because
> Linux is using ioremap_cache(). I am now wondering whether we are using the
> correct helper.
> 
> AFAIU, spin table is part of the reserved memory section in the Device-Tree.
> From section 5.3 in [1], the memory could be mapped cacheable by the other
> end. So for correctness, it seems to me that we would need to use
> ioremap_cache() + cache flush.

Good point


> BTW, writel_relaxed() should be sufficient here as there is no ordering
> necessary with any previous write.



From xen-devel-bounces@lists.xenproject.org Wed May 03 19:18:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 19:18:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529465.823914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHzq-0005A8-L2; Wed, 03 May 2023 19:18:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529465.823914; Wed, 03 May 2023 19:18:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puHzq-0005A1-HQ; Wed, 03 May 2023 19:18:38 +0000
Received: by outflank-mailman (input) for mailman id 529465;
 Wed, 03 May 2023 19:18:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KNfr=AY=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1puHzp-00059b-7E
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 19:18:37 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20603.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::603])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4a27939c-e9e7-11ed-8611-37d641c3527e;
 Wed, 03 May 2023 21:18:32 +0200 (CEST)
Received: from DS7PR05CA0005.namprd05.prod.outlook.com (2603:10b6:5:3b9::10)
 by SA1PR12MB7224.namprd12.prod.outlook.com (2603:10b6:806:2bb::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Wed, 3 May
 2023 19:18:28 +0000
Received: from DM6NAM11FT012.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b9:cafe::62) by DS7PR05CA0005.outlook.office365.com
 (2603:10b6:5:3b9::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.20 via Frontend
 Transport; Wed, 3 May 2023 19:18:28 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT012.mail.protection.outlook.com (10.13.173.109) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.26 via Frontend Transport; Wed, 3 May 2023 19:18:27 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 3 May
 2023 14:18:27 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 3 May
 2023 12:18:27 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 3 May 2023 14:18:26 -0500
X-Inumbo-ID: 4a27939c-e9e7-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=awTPgLlZQFHMVZe1jqiHifPCm8RPkQCeWRvL5+7vFylfx/Qc4fHKrVoYF4MT7t902wAO/JZ8nCAQJFDiZ5PuRlA0tIJ1aFslSWdI46ZYUjfW74J2iVSXooFf0sckF3x2LkpUod/o1cgN4cExgGEtQ1XNjev1Te88h9yAvm7jM9kDBsjwNXva+RPSUii3Vq46VUhKWIVvjcP3h5fH3jWpQ4nsHzMFuwlTsZqkFmea2lfQlquGNwaZ6ZUMb0IJcdi7wrKvlFA75knPMusqAGTGvSFomprvILqORQD62VNGT/ZTeMl5qPT9pASJ1vJG/Eo/euGT6igFJCJn+eSAIJeeZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nJnGX5ULMa6JNgalgwK8aS04dIdQBzShnRt2jL2i2WM=;
 b=dBGkZW3uDEYItZiwXENNMuP29NYyArq044dWcsaIBOk30Gcyyq6Sm8hgJe+igvXrI5syczxZW6bmbpJ48qEE5MJK9K48RvjCJHWm07Nn42X9y+XkyPvjpAaWHlGLmgM+P4zmqZJkD+zAuH51eFrZ5KRG8OWtvwWAEco35P5SOCmD4/AjPCOkpY+XaIMRfVjckhHlJ3kUuxoUNak7PrQk7Z+K6wc9wYlQaLca6nidah77CngF6Euy6rteU41NoXI/1GCgqMeqxX6YxiXWM56Dd3+tsTFLxTv5gpjIfuUmUx14bIb4Tejy+Y/hyLbNz3qoy1LVyEATLC1/7ozHZ1JSPQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nJnGX5ULMa6JNgalgwK8aS04dIdQBzShnRt2jL2i2WM=;
 b=iVbfvjiGf990ikhmQSB62zpx9UFqw45spyGC5sr8jyqgNTgss+pwDvXiOzzlc2WP7Ztmgi1qPtKFhZ2hKviw/2vkPfF+CMnj8kLPT5eN3tVieGNCA5ULz63MeSStz+k+htK+EoPSLSJtfPI2LkWrgf52eo07FnaC8qXPa+nIm94=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: pci: fix -Wtype-limits warning in pci-host-common.c
Date: Wed, 3 May 2023 15:18:20 -0400
Message-ID: <20230503191820.78322-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT012:EE_|SA1PR12MB7224:EE_
X-MS-Office365-Filtering-Correlation-Id: c09f3d0e-ee97-443d-94ff-08db4c0b2ca7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 19:18:27.9807
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c09f3d0e-ee97-443d-94ff-08db4c0b2ca7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT012.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7224

When building with EXTRA_CFLAGS_XEN_CORE="-Wtype-limits", we observe the
following warning:

arch/arm/pci/pci-host-common.c: In function ‘pci_host_common_probe’:
arch/arm/pci/pci-host-common.c:238:26: warning: comparison is always false due to limited range of data type [-Wtype-limits]
  238 |     if ( bridge->segment < 0 )
      |                          ^

This is because bridge->segment is an unsigned type, so the comparison with 0
can never be true. Fix it by storing the return value of
pci_bus_find_domain_nr() in a new signed variable and using that in the
error check.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
 xen/arch/arm/pci/pci-host-common.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index a8ece94303ca..7474d877deb8 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -214,6 +214,7 @@ int pci_host_common_probe(struct dt_device_node *dev,
     struct pci_host_bridge *bridge;
     struct pci_config_window *cfg;
     int err;
+    int domain;
 
     if ( dt_device_for_passthrough(dev) )
         return 0;
@@ -234,12 +235,13 @@ int pci_host_common_probe(struct dt_device_node *dev,
     bridge->cfg = cfg;
     bridge->ops = &ops->pci_ops;
 
-    bridge->segment = pci_bus_find_domain_nr(dev);
-    if ( bridge->segment < 0 )
+    domain = pci_bus_find_domain_nr(dev);
+    if ( domain < 0 )
     {
         printk(XENLOG_ERR "Inconsistent \"linux,pci-domain\" property in DT\n");
         BUG();
     }
+    bridge->segment = domain;
     pci_add_host_bridge(bridge);
 
     return 0;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 03 21:02:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 21:02:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529470.823924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puJbb-0008LP-31; Wed, 03 May 2023 21:01:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529470.823924; Wed, 03 May 2023 21:01:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puJba-0008LI-Ug; Wed, 03 May 2023 21:01:42 +0000
Received: by outflank-mailman (input) for mailman id 529470;
 Wed, 03 May 2023 21:01:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puJba-0008L8-7L; Wed, 03 May 2023 21:01:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puJba-0005nz-1a; Wed, 03 May 2023 21:01:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puJbZ-0001Sd-EI; Wed, 03 May 2023 21:01:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puJbZ-0004WW-Dp; Wed, 03 May 2023 21:01:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e5ZzJ1QkxlXcZG8A7Wih1vkuWg1EhJzwMQ+yXbUS0e4=; b=MR4rhn/1Bv/uobRRO6SU0VjJUU
	3+G/5mirPAOm2z9lJE5V0Q1GtRJyJrKmdG1BDP6IRxp4qXZZA1tnINXJCC4Z7703Q6YvN+7jHkFu2
	GIOSOQ5Dh0bqqrXJIGgHloF4+IT7bcTGjEH/9xud7YCUWWOcjfvgLhmvrEEXNoGjQPg0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180520-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180520: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b95a72bb5b2df24ff1baaa27920e57947dc97d49
X-Osstest-Versions-That:
    xen=99a9c3d7141063ae3f357892c6181cfa3be8a280
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 May 2023 21:01:41 +0000

flight 180520 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180520/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b95a72bb5b2df24ff1baaa27920e57947dc97d49
baseline version:
 xen                  99a9c3d7141063ae3f357892c6181cfa3be8a280

Last test of basis   180518  2023-05-03 15:00:25 Z    0 days
Testing same since   180520  2023-05-03 18:03:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Sergey Dyasli <sergey.dyasli@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   99a9c3d714..b95a72bb5b  b95a72bb5b2df24ff1baaa27920e57947dc97d49 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 03 21:53:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 21:53:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529475.823934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puKPd-0005YZ-U6; Wed, 03 May 2023 21:53:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529475.823934; Wed, 03 May 2023 21:53:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puKPd-0005YS-RM; Wed, 03 May 2023 21:53:25 +0000
Received: by outflank-mailman (input) for mailman id 529475;
 Wed, 03 May 2023 21:53:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ISkc=AY=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1puKPb-0005YG-NW
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 21:53:23 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb9685cf-e9fc-11ed-b226-6b7b168915f2;
 Wed, 03 May 2023 23:53:22 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8347B613F1;
 Wed,  3 May 2023 21:53:20 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 47887C433D2;
 Wed,  3 May 2023 21:53:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb9685cf-e9fc-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683150799;
	bh=f5O1HcnIBoGH8WgEsfzB7pRadYy0g18+BTxdPIh3MME=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=F5QzscPRZCEUIpb9lPUcy9f7aruV6lAUiukTjUHOh0KdA3CoTeR3IlWMtOm2bxsfp
	 8yVyBYh/0xK5FHXSoiAKvjjTR+jNKq8+y8+F//1qk6gOMAY+t66udy6Wn+AGRsj+NZ
	 scRcKn1DrMocRqTl604GxDaDahtTOnlH7Vy8HpjGsj43rd9Gf5sbFVzijtCxuZkc+J
	 2PcnmnRnQxms0mFVVcvp+v/7DNZWtUCOTl1Uhw+looNOkZ8m+pfGgjEuVZVgY692x3
	 r0LdAk+d0i3kyj1+oJMWqGtehIF9zAcCg0h0j9LKnhmXPJcRbSaaeFCjCypks7M9ZH
	 1b2T4OyLI9hwg==
Date: Wed, 3 May 2023 14:53:15 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: andrew.cooper3@citrix.com, Stefano Stabellini <sstabellini@kernel.org>, 
    alejandro.vallejo@cloud.com, committers@xenproject.org, 
    michal.orzel@amd.com, xen-devel@lists.xenproject.org, 
    Julien Grall <jgrall@amazon.com>, Juergen Gross <jgross@suse.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Jan Beulich <jbeulich@suse.com>, 
    =?UTF-8?Q?Edwin_T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>
Subject: Re: dom0less vs xenstored setup race Was: xen | Failed pipeline for
 staging | 6a47ba2f
In-Reply-To: <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>
Message-ID: <alpine.DEB.2.22.394.2305031451130.974517@ubuntu-linux-20-04-desktop>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail> <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop> <c74d231f-75e2-a26d-f2c4-3a135cc1ac10@citrix.com>
 <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1599493801-1683150798=:974517"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1599493801-1683150798=:974517
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 3 May 2023, Julien Grall wrote:
> On 03/05/2023 15:38, andrew.cooper3@citrix.com wrote:
> > Hello,
> > 
> > After what seems like an unreasonable amount of debugging, we've tracked
> > down exactly what is going wrong here.
> > 
> > https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4219721944
> > 
> > Of note is the smoke.serial log around:
> > 
> > io: IN 0xffff90fec250 d0 20230503 14:20:42 INTRODUCE (1 233473 1 )
> > obj: CREATE connection 0xffff90fff1f0
> > *** d1 CONN RESET req_cons 00000000, req_prod 0000003a rsp_cons
> > 00000000, rsp_prod 00000000
> > io: OUT 0xffff9105cef0 d0 20230503 14:20:42 WATCH_EVENT
> > (@introduceDomain domlist )
> > 
> > XS_INTRODUCE (in C xenstored at least, not checked O yet) always
> > clobbers the ring pointers.  The added pressure on dom0 from
> > xenconsoled's 4M hypercall bounce buffer occasionally delays
> > xenstored long enough that the XS_INTRODUCE clobbers the first
> > message that dom1 wrote into the ring.
> > 
> > The other behaviour seen was xenstored observing a header looking like this:
> > 
> > *** d1 HDR { ty 0x746e6f63, rqid 0x2f6c6f72, txid 0x74616c70, len
> > 0x6d726f66 }
> > 
> > which was rejected as being too long.  That's "control/platform" in
> > ASCII, so the XS_INTRODUCE interrupted dom1 between writing the header
> > and writing the payload.
> > 
> > 
> > Anyway, it is buggy for XS_INTRODUCE to be called on a live and
> > unsuspecting connection.  It is ultimately init-dom0less's fault for
> > telling dom1 it's good to go before having waited for XS_INTRODUCE to
> > complete.
> 
> So the problem is that xenstored sets interface->connection to
> XENSTORE_CONNECTED before finalizing the connection. Can you try the
> following, for now, very hackish patch:
> 
> diff --git a/tools/xenstore/xenstored_domain.c
> b/tools/xenstore/xenstored_domain.c
> index f62be2245c42..bbf85bbbea3b 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -688,6 +688,7 @@ static struct domain *introduce_domain(const void *ctx,
>                 talloc_steal(domain->conn, domain);
> 
>                 if (!restore) {
> +                       domain_conn_reset(domain);
>                         /* Notify the domain that xenstore is available */
>                         interface->connection = XENSTORE_CONNECTED;
>                         xenevtchn_notify(xce_handle, domain->port);
> @@ -730,8 +731,6 @@ int do_introduce(const void *ctx, struct connection *conn,
>         if (!domain)
>                 return errno;
> 
> -       domain_conn_reset(domain);
> -
>         send_ack(conn, XS_INTRODUCE);

Following Juergen's suggestion, I made this slightly modified version of
the patch. With it, the problem is solved:

https://gitlab.com/xen-project/people/sstabellini/xen/-/pipelines/856450703


diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f62be2245c..5ca160d9f2 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -688,6 +688,8 @@ static struct domain *introduce_domain(const void *ctx,
 		talloc_steal(domain->conn, domain);
 
 		if (!restore) {
+			if (!is_master_domain)
+				domain_conn_reset(domain);
 			/* Notify the domain that xenstore is available */
 			interface->connection = XENSTORE_CONNECTED;
 			xenevtchn_notify(xce_handle, domain->port);
@@ -730,8 +732,6 @@ int do_introduce(const void *ctx, struct connection *conn,
 	if (!domain)
 		return errno;
 
-	domain_conn_reset(domain);
-
 	send_ack(conn, XS_INTRODUCE);
 
 	return 0;
--8323329-1599493801-1683150798=:974517--


From xen-devel-bounces@lists.xenproject.org Wed May 03 22:20:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 22:20:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529480.823944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puKpi-0000fm-6B; Wed, 03 May 2023 22:20:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529480.823944; Wed, 03 May 2023 22:20:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puKpi-0000fe-1c; Wed, 03 May 2023 22:20:22 +0000
Received: by outflank-mailman (input) for mailman id 529480;
 Wed, 03 May 2023 22:20:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZlPw=AY=citrix.com=prvs=4803f4e7c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puKpg-0000fX-R8
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 22:20:21 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae0ccc2c-ea00-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 00:20:17 +0200 (CEST)
Received: from mail-mw2nam04lp2171.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 03 May 2023 18:20:13 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB4959.namprd03.prod.outlook.com (2603:10b6:208:1a3::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Wed, 3 May
 2023 22:20:11 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Wed, 3 May 2023
 22:20:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae0ccc2c-ea00-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683152417;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=h+x/L9PuPiMriAfHyORNjFM1aZwyLjAAJ6seYpBIHio=;
  b=Bed0PDqWUobGG15Dv5GLNiEcQHL0vY2LJfm/Ji+gg3EkFB9/DUuAgX6o
   PLZCJRrNkSsxuqHwmC81th7d8mpYbvmEBFqihA9HTRMoQvXeVyQtCj8Rd
   vbs0I1zMRRBAwfvnq5InAm/qDQKIHcLqcHvtygb8lEDHTP5vCwE/Y/w5k
   Q=;
X-IronPort-RemoteIP: 104.47.73.171
X-IronPort-MID: 107802591
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:fYlJAmMXLilnHO5DVS1u9HAQB4MeeV6C0DTiO1SZMkVjV+jA
X-Talos-MUID: =?us-ascii?q?9a23=3AcK8zxw4zo/cLFLvQr8o7LeUVxoxIubqDM1IRu69?=
 =?us-ascii?q?FtpSJF3JwZgeNkh+4F9o=3D?=
X-IronPort-AV: E=Sophos;i="5.99,248,1677560400"; 
   d="scan'208";a="107802591"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RyzEKcg82G7OwiUeZsD6hDzF6oh6pS/cvCF4caKthD+TavpElqoe2p1JUgoWGBsVLjyCwb8rBizpEGf7mJ7/GqvPpnSn4zRKYGxahV2V4fzmjvMKcY8zyqqQtXdrdrebtZDtk5cz/2kcHbToSJDjjhyYn5NPwPleHMaTjSEO0rYziHSY0bMsO95aQgyVKgZLgY01Y8oKzFITAW21NomGYaMaZ4hE09ZI4/XS5mTpThgfuJL7HJQAUFqnWc8otgMxkbp0Ad8raEcrWZ09KgENzAB/9mtQtGFuqRrZBgnWGESDmROxPaqxqmcpjts2Y8dHEov37KIBeukszso0MKXs/Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=G+FFpobbSOiQkMF40Buh+1T++dobOOXyl+bvvWhtG1I=;
 b=RREHfe/DMvhOuBlrJGa8OAlhlX16G7F0MPoe4ORX/iiTbA/cXlWVM1GmrwHYbDqdwsIEgtyZvn17qk+4mN+wPScUAt3WdKIHqcnXv8uJECfye97DV0tleLfB82+co1k9YLaS601eHO0t83hBc+hFwA2GGxsTszZHt2e//in/3e4Of/5G21AVV2ACp0SSgw3OvjzzZ6DvyYWWCIKmhWdvZYsQWzUWqt0BnvHyJlMbBvKppOfu9tf7yV/8P2ddY6q5j9ZE300YTgNdGq47g+l52rvSkNhYveF23ObLe3G8eVxcSP15gxn/EwNwxx77SB6K9XZa8NbaKhxp8x/LNgrY3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G+FFpobbSOiQkMF40Buh+1T++dobOOXyl+bvvWhtG1I=;
 b=PVF1rO0bByYj/mCGVm813GVh/xLSatiChIYHAuzn9ULSqe1k2frMik5skkQZuvcn77du0fEdAh7qw8JIHMXFjovAAFKIY0ur+8pLIJXiTt5WUNc+qxr6CqrpMyqYktXxYWWYC3q/QHSKH+E+meQB4XdORF2yqPCNKeU2hc8mE/Y=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <2c1c83be-19d8-34cc-113a-36245adac07e@citrix.com>
Date: Wed, 3 May 2023 23:20:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: andrew.cooper3@citrix.com
Subject: Re: dom0less vs xenstored setup race Was: xen | Failed pipeline for
 staging | 6a47ba2f
Content-Language: en-GB
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Cc: alejandro.vallejo@cloud.com, committers@xenproject.org,
 michal.orzel@amd.com, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>, Juergen Gross <jgross@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, =?UTF-8?B?RWR3aW4gVMO2csO2aw==?=
 <edwin.torok@cloud.com>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail>
 <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop>
 <c74d231f-75e2-a26d-f2c4-3a135cc1ac10@citrix.com>
 <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>
 <alpine.DEB.2.22.394.2305031451130.974517@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2305031451130.974517@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO6P265CA0002.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:339::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB4959:EE_
X-MS-Office365-Filtering-Correlation-Id: a251c83d-89d4-46ac-38bb-08db4c248e51
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
 =?utf-8?B?bjhPMlRXREJaNDhBSHczaFlDRW5LajZWb0hQUVNLeHAyUG5QbmhpMHlXMDAw?=
 =?utf-8?B?R1ZvaFcxRmpTQWZEd2FKSzNuVS9ScDN3K1RjYWdOMThLc3RiY0N4NFk0Zi9T?=
 =?utf-8?B?Q3c9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	=?utf-8?B?L0tqaFZhcXRZOHNaSmZLMzZBMEJGZ0lzSlo0WjdKc0ZuSlNvVGlqUkl4aDNI?=
 =?utf-8?B?OXFGQW50bStWVG5Xb1ZPTEtCc204ZkRsa2JmTFZucWwrblZjcmsvRmlMZ1li?=
 =?utf-8?B?cFQzaFBGQndydTBpMlhSaEdJQWVNZFZRUyswQUxFUWtpUVgrRUk0NnJSL3Yx?=
 =?utf-8?B?M2Q5bUo3S0poQ0JWNi93SEhHK0VwK3UwSlVwSVdLbmJoV0UxTlpnSlpCdGli?=
 =?utf-8?B?eVNpNTU5d0s2UWhLVFV2N2RiYm56b3l4MlprV0t5R2RLaUhMSTBzdXhDUllI?=
 =?utf-8?B?d2U0V0YxOEtrcGdGRDNLaGNYY3Q0dHFRVENWVElrTXFWL3NTeUFNcEcxL0c5?=
 =?utf-8?B?WHBicTVJVFFrWlZEcDRYdUN2dlpqbGZFTUhmVE1jUmp3azg2UjBUN3E2Zk5h?=
 =?utf-8?B?M0svY3d3TmZWYzJFbDFsZjFPR3crZW5HVndMTDhhamphWk96dHV5TEg4akxR?=
 =?utf-8?B?SDlkeVdRbHFZRVB2Y2xnakdiQVhmVE9PMW5Pd3NxV1l6dlJ3VWFBNkJYOTYw?=
 =?utf-8?B?ZFRuSmJMSEdWT3c2cTQ1TzR2U2lXeUg4eDBYVnB4Vk9hd0NDc093U2l3VUlt?=
 =?utf-8?B?RHpUbkxOSGRUdCtrWHZRcmtnTVlUdktQY010NnhyaXMzSGQ2WXBiYkx3NWd2?=
 =?utf-8?B?VXJzL21jR3ZXczIrSGFtY21KR1VHUEFJUU9HQ3pFQVhxVW81aTZsQmJaZkdo?=
 =?utf-8?B?Sk9RN1RQaU1hN0JZYXVsU0p4Wm16RVphV0tlWjJtVjJvNEZlQXBUTkRJWnBa?=
 =?utf-8?B?d3B2UlRvQ3FmNU9hM09vVklibFVhTXd4bEZ4UDlMU3FPaHc0ckJkdVNub3I0?=
 =?utf-8?B?eVhNZjF2b0dyWEowaWtRRURMZ1Z2SjVoNmVJaVJFWU5iYTZhZG5uY2lMNTNx?=
 =?utf-8?B?L2xMR2gwVzR3VTFsTWdzaTB0OXlVNnZNL1NjVlhnelVVUnZYckJLQzU4bEtL?=
 =?utf-8?B?M2xXcUZyZU0vRFF2N252Mjg1QjNJSGFqQldMZlU3WTZYcHVaR0E1amVjam04?=
 =?utf-8?B?WnhpSnNOMm5CSW1HZU14cWpIOVhaYzk2UUp0MjVIc1pVL01RZkt1aVJOSlQx?=
 =?utf-8?B?UGxmQmEyQytyaStjSUczZXU0KzQzOHBCODRFT2JTdDBwYW1ibksvTEhTVmIz?=
 =?utf-8?B?V0x1QVhSdUdib3M5Tk50cjZOM2U1cjVwSXpPSEJZc2hTTzliTVE2TE9ubWVH?=
 =?utf-8?B?ZERNYWF6VVl0YVprblZYSEhxTFBDVW9VY1hYNWVUVWJhbG1pejZpdkhBa1lD?=
 =?utf-8?B?YVREeXV3RS9QTTZMaWMrQmljWjJWVnhsYjY0SHJaMzFubUtGZz09?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a251c83d-89d4-46ac-38bb-08db4c248e51
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2023 22:20:09.6884
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: D/j6Km3Nug55maPlR4m5AT/bwHWj91Nfn8icpJ93j2jzVmdhf4NubkLj6jncQngVKkrfu4jqPFhK/Dt4fgjoBuT+/BCu2vHV3KweLrafRuM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB4959

On 03/05/2023 10:53 pm, Stefano Stabellini wrote:
> On Wed, 3 May 2023, Julien Grall wrote:
>> On 03/05/2023 15:38, andrew.cooper3@citrix.com wrote:
>>> Hello,
>>>
>>> After what seems like an unreasonable amount of debugging, we've tracked
>>> down exactly what is going wrong here.
>>>
>>> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4219721944
>>>
>>> Of note is the smoke.serial log around:
>>>
>>> io: IN 0xffff90fec250 d0 20230503 14:20:42 INTRODUCE (1 233473 1 )
>>> obj: CREATE connection 0xffff90fff1f0
>>> *** d1 CONN RESET req_cons 00000000, req_prod 0000003a rsp_cons
>>> 00000000, rsp_prod 00000000
>>> io: OUT 0xffff9105cef0 d0 20230503 14:20:42 WATCH_EVENT
>>> (@introduceDomain domlist )
>>>
>>> XS_INTRODUCE (in C xenstored at least, not checked in oxenstored yet)
>>> always clobbers the ring pointers.  The added pressure on dom0 that
>>> xenconsoled adds with its 4M hypercall bounce buffer occasionally
>>> delays xenstored long enough that the XS_INTRODUCE clobbers the first
>>> message that dom1 wrote into the ring.
>>>
>>> The other behaviour seen was xenstored observing a header looking like this:
>>>
>>> *** d1 HDR { ty 0x746e6f63, rqid 0x2f6c6f72, txid 0x74616c70, len
>>> 0x6d726f66 }
>>>
>>> which was rejected as being too long.  That's "control/platform" in
>>> ASCII, so the XS_INTRODUCE interrupted dom1 between writing the header
>>> and writing the payload.
>>>
>>>
>>> Anyway, it is buggy for XS_INTRODUCE to be called on a live and
>>> unsuspecting connection.  It is ultimately init-dom0less's fault for
>>> telling dom1 it's good to go before having waited for XS_INTRODUCE to
>>> complete.
>> So the problem is xenstored will set interface->connection to
>> XENSTORE_CONNECTED before finalizing the connection. Can you try the
>> following, for now, very hackish patch:
>>
>> diff --git a/tools/xenstore/xenstored_domain.c
>> b/tools/xenstore/xenstored_domain.c
>> index f62be2245c42..bbf85bbbea3b 100644
>> --- a/tools/xenstore/xenstored_domain.c
>> +++ b/tools/xenstore/xenstored_domain.c
>> @@ -688,6 +688,7 @@ static struct domain *introduce_domain(const void *ctx,
>>                 talloc_steal(domain->conn, domain);
>>
>>                 if (!restore) {
>> +                       domain_conn_reset(domain);
>>                         /* Notify the domain that xenstore is available */
>>                         interface->connection = XENSTORE_CONNECTED;
>>                         xenevtchn_notify(xce_handle, domain->port);
>> @@ -730,8 +731,6 @@ int do_introduce(const void *ctx, struct connection *conn,
>>         if (!domain)
>>                 return errno;
>>
>> -       domain_conn_reset(domain);
>> -
>>         send_ack(conn, XS_INTRODUCE);
> Following Juergen's suggestion, I made this slightly modified version of
> the patch. With it, the problem is solved:
>
> https://gitlab.com/xen-project/people/sstabellini/xen/-/pipelines/856450703

This fails to solve 3(?) of the 4(?) bugs pointed out in this email
thread and on IRC.

Stop with the bull-in-a-china-shop approach.  There is no acceptable fix
to this mess which starts with anything other than corrections to the
documentation, and a plan for how to make startup work robustly given
all the bugs introduced previously by failing to do it properly the
first time around.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 03 23:13:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 23:13:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529483.823953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puLeU-0006IX-3S; Wed, 03 May 2023 23:12:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529483.823953; Wed, 03 May 2023 23:12:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puLeU-0006IQ-0r; Wed, 03 May 2023 23:12:50 +0000
Received: by outflank-mailman (input) for mailman id 529483;
 Wed, 03 May 2023 23:12:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puLeS-0006ID-4x; Wed, 03 May 2023 23:12:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puLeR-00008j-TN; Wed, 03 May 2023 23:12:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puLeR-0005dw-Fq; Wed, 03 May 2023 23:12:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puLeR-0006fF-FQ; Wed, 03 May 2023 23:12:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wsQpAf+PN7LBtc5b+mq/sNu5Fn74iXEGjxo0c1K21A4=; b=BU+G4f8xRpmSISGjhGwm88RKiO
	mlsrqUNY7YysTsRZj1UNuyfDGeNF2zdWZYEgiRh6CFx8UpEUgWx3w/0D1VX5G9DvrAPI9hw0BoLX2
	669CrEeVimlw1WpkUYLnaYjvsbyq5kfcCwp95N89IYCrA6/254Un+Ykuhcue3iaBgskc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180514-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180514: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c586691e676214eb7edf6a468e84e7ce3b314d43
X-Osstest-Versions-That:
    qemuu=b5f47ba73b7c1457d2f18d71c00e1a91a76fe60b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 03 May 2023 23:12:47 +0000

flight 180514 qemu-mainline real [real]
flight 180521 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180514/
http://logs.test-lab.xenproject.org/osstest/logs/180521/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 180521-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180507
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180507
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180507
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180507
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180507
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180507
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180507
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180507
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                c586691e676214eb7edf6a468e84e7ce3b314d43
baseline version:
 qemuu                b5f47ba73b7c1457d2f18d71c00e1a91a76fe60b

Last test of basis   180507  2023-05-02 15:37:40 Z    1 days
Testing same since   180514  2023-05-03 05:31:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Claudio Fontana <cfontana@suse.de>
  Cédric Le Goater <clg@kaod.org>
  Daniel Bertalan <dani@danielbertalan.dev>
  Fabiano Rosas <farosas@suse.de>
  Patrick Venture <venture@google.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   b5f47ba73b..c586691e67  c586691e676214eb7edf6a468e84e7ce3b314d43 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed May 03 23:16:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 03 May 2023 23:16:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529489.823964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puLho-0006yL-N3; Wed, 03 May 2023 23:16:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529489.823964; Wed, 03 May 2023 23:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puLho-0006yE-Jc; Wed, 03 May 2023 23:16:16 +0000
Received: by outflank-mailman (input) for mailman id 529489;
 Wed, 03 May 2023 23:16:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ISkc=AY=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1puLhm-0006y6-Sy
 for xen-devel@lists.xenproject.org; Wed, 03 May 2023 23:16:14 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e2281fe-ea08-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 01:16:12 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 1441A63042;
 Wed,  3 May 2023 23:16:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EB734C433D2;
 Wed,  3 May 2023 23:16:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e2281fe-ea08-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683155770;
	bh=mcZbuchNBi5Fppn0ME2sv81rsNiMa7IiZd7Odu5pZ4w=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=TIt+fGLsl1M2PVavsLiu9koO6+kHcojFudesqPs55ZIJYJBY+T/0zekiTTiYaM9+L
	 mzEZNnIIigSwMmNjuUPRuODF8naScvaLDhUlR3TYElrm+NnCh98eR9e6OkjXSjjpxU
	 YHAMDk7lP23b+OHXStUdo2cp7uD+XHiThShIj8A2NzeEeNH5soRwiQXqIk5lBKIjwK
	 dIHp7dwk30/fjjBR8+nWbTDo05FB++JJ8k4UqQYOT1HXmgElY92iUQ8u4RYPS8Fj+k
	 8UchsvMA+7GIsc8kUpe7dxXkU6kcsl5k39QwGVTtXptU0GWaMy5vRhVQf2JmJPM1JX
	 PkKEf5sjuAQhw==
Date: Wed, 3 May 2023 16:16:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: andrew.cooper3@citrix.com
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    alejandro.vallejo@cloud.com, committers@xenproject.org, 
    michal.orzel@amd.com, xen-devel@lists.xenproject.org, 
    Julien Grall <jgrall@amazon.com>, Juergen Gross <jgross@suse.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Jan Beulich <jbeulich@suse.com>, 
    =?UTF-8?Q?Edwin_T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>
Subject: Re: dom0less vs xenstored setup race Was: xen | Failed pipeline for
 staging | 6a47ba2f
In-Reply-To: <2c1c83be-19d8-34cc-113a-36245adac07e@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2305031604070.974517@ubuntu-linux-20-04-desktop>
References: <644bfbc6939d8_2a49bbb403253f4@gitlab-sidekiq-catchall-v2-78885c497-qxnp2.mail> <alpine.DEB.2.22.394.2304281905020.974517@ubuntu-linux-20-04-desktop> <c74d231f-75e2-a26d-f2c4-3a135cc1ac10@citrix.com> <28235d38-ad7f-f1bd-f093-bd83f9fd6589@xen.org>
 <alpine.DEB.2.22.394.2305031451130.974517@ubuntu-linux-20-04-desktop> <2c1c83be-19d8-34cc-113a-36245adac07e@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1729168010-1683155543=:974517"
Content-ID: <alpine.DEB.2.22.394.2305031612400.974517@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1729168010-1683155543=:974517
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305031612401.974517@ubuntu-linux-20-04-desktop>

On Wed, 3 May 2023, andrew.cooper3@citrix.com wrote:
> On 03/05/2023 10:53 pm, Stefano Stabellini wrote:
> > On Wed, 3 May 2023, Julien Grall wrote:
> >> On 03/05/2023 15:38, andrew.cooper3@citrix.com wrote:
> >>> Hello,
> >>>
> >>> After what seems like an unreasonable amount of debugging, we've tracked
> >>> down exactly what is going wrong here.
> >>>
> >>> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/4219721944
> >>>
> >>> Of note is the smoke.serial log around:
> >>>
> >>> io: IN 0xffff90fec250 d0 20230503 14:20:42 INTRODUCE (1 233473 1 )
> >>> obj: CREATE connection 0xffff90fff1f0
> >>> *** d1 CONN RESET req_cons 00000000, req_prod 0000003a rsp_cons
> >>> 00000000, rsp_prod 00000000
> >>> io: OUT 0xffff9105cef0 d0 20230503 14:20:42 WATCH_EVENT
> >>> (@introduceDomain domlist )
> >>>
> >>> XS_INTRODUCE (in C xenstored at least, not checked O yet) always
> >>> clobbers the ring pointers.  The added pressure on dom0 that
> >>> xenconsoled adds with its 4M hypercall bounce buffer occasionally
> >>> delays xenstored long enough that the XS_INTRODUCE clobbers the first
> >>> message that dom1 wrote into the ring.
> >>>
> >>> The other behaviour seen was xenstored observing a header looking like this:
> >>>
> >>> *** d1 HDR { ty 0x746e6f63, rqid 0x2f6c6f72, txid 0x74616c70, len
> >>> 0x6d726f66 }
> >>>
> >>> which was rejected as being too long.  That's "control/platform" in
> >>> ASCII, so the XS_INTRODUCE interrupted dom1 between writing the header
> >>> and writing the payload.
> >>>
> >>>
> >>> Anyway, it is buggy for XS_INTRODUCE to be called on a live and
> >>> unsuspecting connection.  It is ultimately init-dom0less's fault for
> >>> telling dom1 it's good to go before having waited for XS_INTRODUCE to
> >>> complete.
> >> So the problem is xenstored will set interface->connection to
> >> XENSTORE_CONNECTED before finalizing the connection. Can you try the
> >> following, for now, very hackish patch:
> >>
> >> diff --git a/tools/xenstore/xenstored_domain.c
> >> b/tools/xenstore/xenstored_domain.c
> >> index f62be2245c42..bbf85bbbea3b 100644
> >> --- a/tools/xenstore/xenstored_domain.c
> >> +++ b/tools/xenstore/xenstored_domain.c
> >> @@ -688,6 +688,7 @@ static struct domain *introduce_domain(const void *ctx,
> >>                 talloc_steal(domain->conn, domain);
> >>
> >>                 if (!restore) {
> >> +                       domain_conn_reset(domain);
> >>                         /* Notify the domain that xenstore is available */
> >>                         interface->connection = XENSTORE_CONNECTED;
> >>                         xenevtchn_notify(xce_handle, domain->port);
> >> @@ -730,8 +731,6 @@ int do_introduce(const void *ctx, struct connection *conn,
> >>         if (!domain)
> >>                 return errno;
> >>
> >> -       domain_conn_reset(domain);
> >> -
> >>         send_ack(conn, XS_INTRODUCE);
> > Following Juergen's suggestion, I made this slightly modified version of
> > the patch. With it, the problem is solved:
> >
> > https://gitlab.com/xen-project/people/sstabellini/xen/-/pipelines/856450703
> 
> This fails to solve 3(?) of the 4(?) bugs pointed out in this email
> thread and on IRC.
> 
> Stop with the bull-in-a-china-shop approach.  There is no acceptable fix
> to this mess which starts with anything other than corrections to the
> documentation, and a plan for how to make startup work robustly given
> all the bugs introduced previously by failing to do it properly the
> first time around.

I am not suggesting this is the fix (I didn't add any Signed-off-by or
commit message on purpose). I think it is useful to know if a
theoretical proposal would work in practice as it helps us understand
the problem. In the little time I had in-between meetings I thought I
could help a bit by providing this update.

Like you, I would appreciate a comprehensive fix which includes a
documentation update.

Genuine question: how would you like to proceed? In the project
management sense of who does what and what is the suggested timeline.
--8323329-1729168010-1683155543=:974517--


From xen-devel-bounces@lists.xenproject.org Thu May 04 03:55:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 03:55:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529528.823974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQ3b-0005AG-1r; Thu, 04 May 2023 03:55:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529528.823974; Thu, 04 May 2023 03:55:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQ3a-00059u-RJ; Thu, 04 May 2023 03:55:02 +0000
Received: by outflank-mailman (input) for mailman id 529528;
 Thu, 04 May 2023 03:55:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ABSM=AZ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1puQ3Z-00059l-4c
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 03:55:01 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20624.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::624])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6fb4fa2a-ea2f-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 05:54:58 +0200 (CEST)
Received: from AM6P192CA0060.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:82::37)
 by AS2PR08MB8454.eurprd08.prod.outlook.com (2603:10a6:20b:55a::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 03:54:55 +0000
Received: from AM7EUR03FT065.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:82:cafe::5a) by AM6P192CA0060.outlook.office365.com
 (2603:10a6:209:82::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 03:54:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT065.mail.protection.outlook.com (100.127.140.250) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.21 via Frontend Transport; Thu, 4 May 2023 03:54:55 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Thu, 04 May 2023 03:54:54 +0000
Received: from 05c584a71ee4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9191A3A7-3D2D-4559-A542-1213A3019919.1; 
 Thu, 04 May 2023 03:54:49 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 05c584a71ee4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 03:54:49 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by VE1PR08MB5823.eurprd08.prod.outlook.com (2603:10a6:800:1a5::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 03:54:47 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%6]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 03:54:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fb4fa2a-ea2f-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TRauj12Zeuec/7oL0evdHlgULMSFkh+0FOD2EYPTdzE=;
 b=K9W7WZN/VDfESn/v3wY3Mrf+X1bdrYNCe/Lqd8odEVodM36zPO607yxdtzUem2FBe7FkfqUe6VC3/z4z4JsmHABRQkD2TMF4LIbTMxXszlfmT48PEGwZkhU7vaDGhzKB2GZKUIZKLNpSLC331ZRzy9xCDAfq5m7YNZdGh8EpO1E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PBWYzNcrvDlu3EiNS2hqudOQkYDjt1ZtGyhv3wi1ss9ozhBvVyLstzspRkGjp4XXqOwAot+RQa98cdVFe+FCd9dXf+4Q1udwzdCWVNKkeRP7ai3n+wWmCHRyHm2DL+IaetnSNCv1HbOdHLOex7GR2xvIxorGD2NYayaX8suBxwXupMG9576p+AL4iYi5ty7E2zFkGHNc+YVpf9gTTb9mosfSX/kzkNhGSxvppu+GcILXlP7f4oW9fuuwNLGAWrqgpAheW42wGFvWvlPekNJ7TtzbhkiQmDIm2U+ZwDgyat/kOfuRjFpdp5/wGEtJfjlugXzQ7mpn05PtaE/rjgkvzg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TRauj12Zeuec/7oL0evdHlgULMSFkh+0FOD2EYPTdzE=;
 b=l64eiEScDW3RNEWmI97rDn+KEKQ/HKV3Ld4WGF3tWtugHc9GPU1j7XSFCV1OACMb3uNdTfcZrxq9/uXOBa+YZzkh7cS3uHTisUZ97JNk877je849zCmygRYNli7K/FPvkjT0BIzA4gFlQfWlBpncVCycR4NKzfNaC9+Sok9d6NAJLzykuLOqvR110sVGyVyvFPKOKedXEjwPtlr1NJlzOKpCmJoLpWbqg1c6SKA501C9dnCemncfP1dyh5BRJk1DOva56EWMWqwQtMgF7iIPrMh0GvkzsYJGdfkpUdpXFaHjOMNk4+Q5Ps5SEPf3CkpQT21EoI25oTaTTm+WotbJrg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TRauj12Zeuec/7oL0evdHlgULMSFkh+0FOD2EYPTdzE=;
 b=K9W7WZN/VDfESn/v3wY3Mrf+X1bdrYNCe/Lqd8odEVodM36zPO607yxdtzUem2FBe7FkfqUe6VC3/z4z4JsmHABRQkD2TMF4LIbTMxXszlfmT48PEGwZkhU7vaDGhzKB2GZKUIZKLNpSLC331ZRzy9xCDAfq5m7YNZdGh8EpO1E=
From: Henry Wang <Henry.Wang@arm.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: RE: [XEN][PATCH v6 02/19] common/device_tree: handle memory
 allocation failure in __unflatten_device_tree()
Thread-Topic: [XEN][PATCH v6 02/19] common/device_tree: handle memory
 allocation failure in __unflatten_device_tree()
Thread-Index: AQHZfU8Zj43F8/qoe0aOSV+uN58JIK9JfX4Q
Date: Thu, 4 May 2023 03:54:46 +0000
Message-ID:
 <AS8PR08MB799168BC091D76E168C9F4E7926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-3-vikram.garhwal@amd.com>
In-Reply-To: <20230502233650.20121-3-vikram.garhwal@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 6D50E71DC551B149BE8281E2F88E7FAA.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|VE1PR08MB5823:EE_|AM7EUR03FT065:EE_|AS2PR08MB8454:EE_
X-MS-Office365-Filtering-Correlation-Id: 80667c16-239a-4f66-ffe9-08db4c53526f
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ztzKfw5IbwaZPZdvf/NR/xq0XeyqxhdNuhAhgRQsy0Nup/HvPmrylapLIm9dQEZjTkf0lzh8JncWeJ/cU6CMFEcsgitoy+HiV9+1BhDkwUjO60xpC2BIhe3EhYn4RsXV8iE7tCwSzfwMR4auVNX59ISDqP9oyPvLgeuYUSrH9Fdr1w2aAUJAbG1uqPr7T5XZW1MfsNTb4cgS3Vg4j4+gGdiJY9n3vyJfgRx7qAxebXhZRuzyOMP4P1tDeicTRVLxm131/3BEIh+5M0lvQ0QGT1Obi4X9BNm9bm2+s8/PMrtocLlKb6HfJbJOnNVAp58cPuG7U63WleHZnM1E30UG5lIXkFRP8XDYkdXFsgzTEriUUOJZ8E+0uCv5W1OWJGG2OpnlJm0LFTPbbUAASvd4XlzXY4ts5Txjex7hC6hGlGQbtvHmGsS0AdmMuiGuWivur5CnlEZLL36sdiKk3BRC9MhpIdqQvI6QJqFvrgnJzebHTHxClLuyznNjYGeMurqr7h1V9Ehq6RVGMatE/XtBRpPSjFr3h3IkqvEnlU0uOhMlnmB4F9jruHrKHL6kHWvTz3idQl6NT4w/DZnRC8z0NxkDjAef/FOi0L+IEFs3r7oEzeiUHLKliIBWXh8mvv4T
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(366004)(136003)(39850400004)(376002)(451199021)(86362001)(33656002)(54906003)(316002)(110136005)(66946007)(66476007)(66556008)(64756008)(4326008)(76116006)(7696005)(478600001)(66446008)(71200400001)(41300700001)(2906002)(55016003)(8676002)(8936002)(4744005)(5660300002)(52536014)(38100700002)(122000001)(38070700005)(186003)(6506007)(26005)(9686003)(83380400001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5823
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT065.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ab152563-eca1-4fd0-9ceb-08db4c534d33
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GmxvWtX6+bJmgQZfHbGfhXAz9WEjM0dQo00VbSLDtO2llWFpphfcuOuNDdAMoC1yFfscobGCzmgaHPTWrNjQjL1K+kabxErI7ngEj/yBRu7voaLfDgb1RUu/3K7WqJnYiH8B4AutxUBpaoSy67HABj2vuHJbDCwMgyxc5X4AXT/DHNjREtAIA6+3Xwmu97fbHZE/s2YkTa69/nmhqh2J84tOApzCHPFCnm4ZrIm9LpaJ8ZTH7sXOiDUYrUQazVUBx3Xhdc+DtTAULbLSMu4PaCVg3Vo2Oyi375n8YIR0S5ryI7jMS67nc0pYZSPTR7lFuraS+bODrZTTLBzBLUjgn9e2QPBGO4f1oPE+3CK71oSQ4qlBelDCZQXRvJenAvZ8Y+fYZzxBbjJtI5d8DqrN9J+jJ5mjWp5HcrvhAslMHNgZF0GXQnKFakkSB+WX5UFnmckf9S/QtDPgvHzJRctA5sPRbWOu6Uib0hSxnArv1jLE5M5A4a6ek/FmStEElAE7TM0G0UiJvnKfiObCeBE/64vuEhELa/4LeDxWQMdo0kWTrolGx9oiSmqIJBj4JNF9hl9uFS35AAWAaSbgt3l+E1MRmyhVyHmYJjBlYZ+T1JRaGRpzXGejC2a2rpH5GjYIag1CO7tE9idOQoZM14qWqvIM/sD2+OeVttAfVAfOoJYWWcOTU0uyT8OeuoU3MFH2Us3yaKqPQA5BAgLKGggI5GyVp/XJbDEcjGuHRt8Rjss=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(346002)(396003)(136003)(451199021)(40470700004)(46966006)(36840700001)(36860700001)(8676002)(40480700001)(55016003)(82740400003)(356005)(33656002)(82310400005)(81166007)(34020700004)(70206006)(316002)(41300700001)(4326008)(86362001)(70586007)(478600001)(54906003)(110136005)(40460700003)(7696005)(8936002)(4744005)(2906002)(336012)(52536014)(5660300002)(83380400001)(6506007)(9686003)(26005)(186003)(47076005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 03:54:55.1500
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 80667c16-239a-4f66-ffe9-08db4c53526f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT065.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8454

Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v6 02/19] common/device_tree: handle memory
> allocation failure in __unflatten_device_tree()
>
> Change __unflatten_device_tree() return type to integer so it can propagate
> memory allocation failure. Add panic() in dt_unflatten_host_device_tree() for
> memory allocation failure during boot.
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu May 04 04:02:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 04:02:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529531.823983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQAq-0006kS-MM; Thu, 04 May 2023 04:02:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529531.823983; Thu, 04 May 2023 04:02:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQAq-0006kL-Jc; Thu, 04 May 2023 04:02:32 +0000
Received: by outflank-mailman (input) for mailman id 529531;
 Thu, 04 May 2023 04:02:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ABSM=AZ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1puQAp-0006kD-Sl
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 04:02:31 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20617.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::617])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7d103d32-ea30-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 06:02:29 +0200 (CEST)
Received: from AS9P250CA0021.EURP250.PROD.OUTLOOK.COM (2603:10a6:20b:532::19)
 by AS8PR08MB7718.eurprd08.prod.outlook.com (2603:10a6:20b:50a::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.25; Thu, 4 May
 2023 04:02:25 +0000
Received: from AM7EUR03FT013.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:532:cafe::ee) by AS9P250CA0021.outlook.office365.com
 (2603:10a6:20b:532::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 04:02:25 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT013.mail.protection.outlook.com (100.127.140.191) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.21 via Frontend Transport; Thu, 4 May 2023 04:02:24 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Thu, 04 May 2023 04:02:24 +0000
Received: from 7b7f01d2abe7.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CD37423D-E1B3-498D-A885-44CB77E5BD62.1; 
 Thu, 04 May 2023 04:02:18 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7b7f01d2abe7.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 04:02:18 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DB4PR08MB8079.eurprd08.prod.outlook.com (2603:10a6:10:385::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Thu, 4 May
 2023 04:02:12 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%6]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 04:02:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d103d32-ea30-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XaxhynDBCTXoTSxI9Bm3w0rOkevBy3kSLJCJWr+icL4=;
 b=IRwSFvDzAgeDnR6J+MytlgvBsCqL6lf3vnFKbAb0PB5VtecbtO7oQJsIt3EEfK3K4IVhFhJDjLnb3T4aKWAyz6n14yF0JqyULws3LN9PMISBbvml4w2bP0Yyr4DxZXvV5IWrDmT+rnlBbtc1ydLiBDsmnjQxR/KMkNSkxaG9jmY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fhLDA9ktPq2qbTYeY6WxJxZiRrFFqqXN4z/8sryUytfp/AjOeCkmzJXqgJCJmspHo3G9npsebVwKm8FDtSIVDyNDhLcVCJFGdvUYlLoCa/zndqpv6YS49VBnRqzpXALkLihLgJLOwAAn7hFFKX2kVtj4XuE3GyqQV7aLQaa6j/oK2lYT6W0z1LHAH5Kdgbi2sin1F6IF9og0vTvUEMsbH0ZU8DjWwBPe3An7E+dA4RbLOibeBb5n7JpJJLf41IEIYFu7WslWCN9AmrAFSuGBr3t0e+q536nxM0TF/64R5+M47iOK8lZr1eCNQg9K/ZYKMs7NkxgKDLcYM6XI6FYA/g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XaxhynDBCTXoTSxI9Bm3w0rOkevBy3kSLJCJWr+icL4=;
 b=JfwDmKb2yj2jFnKWYzQn5JLrFxY2F9aFiP4TjaNoR+x/5NWH/7sNiMzBA2gEKkpaJhkMW3j2MMFDt0R6s0Flxomxlm2Scd4Uyo9DzO74+GAePCe+VeeEPv1gVV+B7wB9JSXmXfmzBci/ah+w9pwUWFsKxsQqCGJ/lRTtdVcMeJQnQj8pmUBpDFm464EeYW2KPc7qpKrYc8jrixmzsfV4+NlM1sBB46aGutndhlppVukRZMvpyxv/elgks7GoGnCanCnd6EX3Tg3GEw41cfp/DOlySyROkhW6ITClQiwX/amCP5/ZletCnNMpeIXDfhOOi9QfiXEnINju5prCKQ3Yrg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XaxhynDBCTXoTSxI9Bm3w0rOkevBy3kSLJCJWr+icL4=;
 b=IRwSFvDzAgeDnR6J+MytlgvBsCqL6lf3vnFKbAb0PB5VtecbtO7oQJsIt3EEfK3K4IVhFhJDjLnb3T4aKWAyz6n14yF0JqyULws3LN9PMISBbvml4w2bP0Yyr4DxZXvV5IWrDmT+rnlBbtc1ydLiBDsmnjQxR/KMkNSkxaG9jmY=
From: Henry Wang <Henry.Wang@arm.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: RE: [XEN][PATCH v6 03/19] common/device_tree: change
 __unflatten_device_tree() type
Thread-Topic: [XEN][PATCH v6 03/19] common/device_tree: change
 __unflatten_device_tree() type
Thread-Index: AQHZfU8eFkyMAUM1+keBcOFUYgl+f69Jf5iw
Date: Thu, 4 May 2023 04:02:12 +0000
Message-ID:
 <AS8PR08MB799161E2F76EB54D4BF0A78E926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-4-vikram.garhwal@amd.com>
In-Reply-To: <20230502233650.20121-4-vikram.garhwal@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 3B261B364574344BBB4AF7D4E5CF721A.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DB4PR08MB8079:EE_|AM7EUR03FT013:EE_|AS8PR08MB7718:EE_
X-MS-Office365-Filtering-Correlation-Id: 65c2850d-fd5b-4c3a-ab3b-08db4c545e80
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 2wKQiCXB3W9owXHSbAT01FTBYDAAR5an3buN8KAewWeHxsJVxIFdeHNEISZmn7+p1ca/gaa5bFjTewUmoWOiQHQidphtLorI2mmXgTM1sVuSJq9OZ3HF5XUKssHcyVDLm0ELfbTA31ruQyns7xw8tpmIO/uy8kWPSKulmpgcaKRFs561nfm8tK/zsvVUdhXGE/br+WN+0pllDrpm8u9wvTnc0IAF2VYHdjhoruNVkYdzRDDsmhIXDW744OKK002G7xhJVi5o3M/VMsEgOTfGK3I86yU/pyv9lW6uVznXhpYI+EFfYHyI+OEAY475fy/5wRbJofViQKGInpPx+7t4HUwK/jknhkZWJjYFSmKXXGsv5afjJ4oGPJp2BJVd5jGykg/Uy6t7zvsvHtn6osuiJUms+Pe982hyXDo/5wo3IbVKihG7UsdPR4FFEOdK7tblDurchup+bW8MhCVQDIfpCWq9fzA/UFsUIrC/xbRQfxqYfJvncTduDpdGq0Gi47L6TVFe6lhJZDmZISqPFxORS0slo0l4ygtqpDW9/hqq0irkUEv2gAzotJEA7ZPrhTbrO83rplzTs+3gwlBz4i4gytFBp5IkklZ0rISTN9/W/DBhVr9GejY/ahCgtpbvfMZK
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(366004)(39850400004)(396003)(136003)(451199021)(2906002)(38100700002)(4744005)(8936002)(8676002)(52536014)(5660300002)(33656002)(86362001)(122000001)(38070700005)(55016003)(71200400001)(7696005)(54906003)(478600001)(110136005)(186003)(26005)(6506007)(9686003)(83380400001)(76116006)(66446008)(66476007)(66946007)(66556008)(64756008)(41300700001)(4326008)(316002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB8079
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9ad99b47-f05d-4115-55bf-08db4c5456d1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5VtMzY41cjmmAf9oevHWI0URtocUXtIJhvrAXQ0vUsK83/LNbln8s9E/yvzNzmJVeBz4k4hkD5arnKymn3h3iNDDgi19K3BDmhNUSK7e6pFrAupeg0H+eraTItURuHO57WNu95UFCrWnfUHLBmkO4gYkKK+pm6VSK88f+MEVT10BZzUPXWhAGHNer3qUW2y/1HynPEFVziumluwZ0jQ8Kj6r0B9uT9d86Io9gJrVDNgzaDD5pSpAuDEzrgtfa+35zEfWGH9WmDc7xfJgBaedWm2pUdDSXGFNsV7qTMtbxMYSLw/de970KZrZKq5kiav46kRyXiCAfvE06E8EvdMtqj4vzkSwGH3YeJqCtOim13a3RboGsrY5+0ALMjymaMTTyhR5S8rN/gtClcEtpn8f6NAitgMEYDx2HznV8ffGX3ZyOv9GDsd2ufdw16KDqmaa0xG+M+f513HPCzWoDYBSXmVIb0sJrq1HR4n2oXK6M8Co4OsN/VQU1vtj9vXdF6nTRLZKJmELm45rQ0DQ2Xuimz6BI+y6uJguziD+pk2hPYr5t4ZTCp5kbdnbyo7QzVvYm8E0s9b1onjyamnr+9LBtyGtYIhhb90BBFwg9fnZLjngQXsHrkSy6FUL9NpZr8YaU5gpNsrsciCKOTwpFgT5oPRqwopLmB39vUZYO+oCq3yLboJBUX0ZdrTh/nyYG40nA0iyjF462rfHhh9x8k4njDuMVIF66ci+vasL2TffMZo=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(39860400002)(376002)(451199021)(36840700001)(40470700004)(46966006)(34020700004)(478600001)(356005)(81166007)(40480700001)(55016003)(52536014)(82740400003)(5660300002)(8676002)(8936002)(40460700003)(54906003)(110136005)(82310400005)(6506007)(26005)(86362001)(70206006)(70586007)(47076005)(4326008)(83380400001)(7696005)(336012)(9686003)(316002)(33656002)(2906002)(186003)(41300700001)(4744005)(36860700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 04:02:24.8886
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 65c2850d-fd5b-4c3a-ab3b-08db4c545e80
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7718

Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v6 03/19] common/device_tree: change
> __unflatten_device_tree() type
>
> The following changes are done to __unflatten_device_tree():
>     1. __unflatten_device_tree() is renamed to unflatten_device_tree().
>     2. Remove __init and static function type.
>=20
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

I have a feeling that it may be better to swap the order between patch #2
and patch #3, so you won't need the extra name change in function
dt_unflatten_host_device_tree(). But this is my personal opinion and I think
the Arm maintainers are the people to make the final call.

Anyway this change looks good to me, either with the swap of patch order
or not, you can keep my:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu May 04 04:08:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 04:08:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529536.823994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQGU-0007Sc-Di; Thu, 04 May 2023 04:08:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529536.823994; Thu, 04 May 2023 04:08:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQGU-0007SV-Au; Thu, 04 May 2023 04:08:22 +0000
Received: by outflank-mailman (input) for mailman id 529536;
 Thu, 04 May 2023 04:08:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ABSM=AZ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1puQGT-0007SP-Dp
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 04:08:21 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061b.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d88cf89-ea31-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 06:08:19 +0200 (CEST)
Received: from DB6PR1001CA0046.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:4:55::32)
 by GV1PR08MB8178.eurprd08.prod.outlook.com (2603:10a6:150:92::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 04:08:14 +0000
Received: from DBAEUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:55:cafe::68) by DB6PR1001CA0046.outlook.office365.com
 (2603:10a6:4:55::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31 via Frontend
 Transport; Thu, 4 May 2023 04:08:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT014.mail.protection.outlook.com (100.127.143.22) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.26 via Frontend Transport; Thu, 4 May 2023 04:08:14 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Thu, 04 May 2023 04:08:14 +0000
Received: from 766ddd8ff3a2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A74EA22D-9AE0-4D17-BCA9-94B26A60CCA2.1; 
 Thu, 04 May 2023 04:08:08 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 766ddd8ff3a2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 04:08:08 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS2PR08MB9318.eurprd08.prod.outlook.com (2603:10a6:20b:59a::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 04:08:06 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%6]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 04:08:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d88cf89-ea31-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PWvM9wmLCUR6GOvCgPkahatkE9po1Rn73LzxnwCS/+Y=;
 b=sQ4+WLxRmFFVqd1gYweOBbK9jWht7Q7/XHL9e5jE0g3ni7GS7EVIfYY/Uf4B3ZQJogq9nhcDAXgqKeu4UQHWw3lDivpSTAEXnCgKCUIRP4WXevKYR2dm9y+m0JlxF7/3oFA6lAyCRiSvYCpCaMo0fBjYs3dBrfxVDppDtZSgjGo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZGFQcYUdgSBUTMm+Lw6cApFR/2rqnNOg4bjqpti2j9Ky2Oz4/i2Yh8TlpLJD+QEssBSCtyb7/vZzMkYt0FaeYPGm5GX4ElcEWczpbJvofBFfjeE5vJUKYXGPA3QBQNoUDOeVLTQYyrU6d1QTz4j9Zk+bkBfNtdUjEI9bNKwMi1pC8GRRCKyD4fOW28q7x/Prf+1Kcpc7p5sheclgnkhFhR7k4+pKoXXHMn6MDXU6fH0Fhu7mY7b8tKl/WSTYG62KIbvakAvZAIDCkqni/8lcqPEnmlO3gMnck9cNhoE+Dw1FkvPAr02vprGpHb7mTQNJfHhvMDVevS51IafZ0+E7Cw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PWvM9wmLCUR6GOvCgPkahatkE9po1Rn73LzxnwCS/+Y=;
 b=EXh/rqwQZJZ5wAUZn905Fs/ae7n9CzMheieKlhfVoBkwMag+WC+SDY7x+33Kx/Mxx6r+RnxaYxpRAiNFHVC3AR96PMxnFI7ui8RiuGQqUIreTW3KWn7B+t+j0Jgds5xOMjgGHH4hyGDSUrAo6MAxxji+dSjuykLJtbzWIZXhiiCsBG9B2eMdtRt4spuoZaMf2Xw4t9PfZAV+snB1u4JS5WKSdUvUxv9yDWIik+eW60hbS2S7dzjWxMTUmTI3JKI2aA3XaCF7Vc1ZOH2/GnI18cfQIWGZ5LpFX4INhTs2QHHfQHiIFLH31NBav2JypsmwGaWd09zf8NIZ1IS2qpKVlg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PWvM9wmLCUR6GOvCgPkahatkE9po1Rn73LzxnwCS/+Y=;
 b=sQ4+WLxRmFFVqd1gYweOBbK9jWht7Q7/XHL9e5jE0g3ni7GS7EVIfYY/Uf4B3ZQJogq9nhcDAXgqKeu4UQHWw3lDivpSTAEXnCgKCUIRP4WXevKYR2dm9y+m0JlxF7/3oFA6lAyCRiSvYCpCaMo0fBjYs3dBrfxVDppDtZSgjGo=
From: Henry Wang <Henry.Wang@arm.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: RE: [XEN][PATCH v6 04/19] common/device_tree.c:
 unflatten_device_tree() propagate errors
Thread-Topic: [XEN][PATCH v6 04/19] common/device_tree.c:
 unflatten_device_tree() propagate errors
Thread-Index: AQHZfU8dTUEoDGBGd0KIZrm/bdcnRK9JgM7A
Date: Thu, 4 May 2023 04:08:05 +0000
Message-ID:
 <AS8PR08MB79913746C549B68EAA72D240926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-5-vikram.garhwal@amd.com>
In-Reply-To: <20230502233650.20121-5-vikram.garhwal@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: D3BF081E2E0A9140A7C268B1B873586D.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS2PR08MB9318:EE_|DBAEUR03FT014:EE_|GV1PR08MB8178:EE_
X-MS-Office365-Filtering-Correlation-Id: 8ce8015e-0feb-4fec-d84a-08db4c552ebd
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 T3hdUIoV3LZho5cDGA3nt5Cta3CvRGAupxs3iy/sspzML0MnGIj5em75YaRxcF3IhSzGCLB9uCGkm9tVRtntALvn1YSFbUmvwHi+f+b191PmMWQNCGeLFXCu/6xQgS9Czc2GdblbUupowdu+1MTATWP2v9cMrbbwl3SJNJGzUAKeVH1Hwm/9wivwgBhZmRD19vYVueV+8VyT+MKEx34Vr5hqWOSjFeDbBsCRR57ZUvL6sJtJCcyx5ZHiIKn0hAg7mpM7FGuGcoYtbwi/OpCGCYK8lGcUtEf4MMvQ1dnQZe45ZM8L04oCA4RoBEyk32j15cQhEl/jZo9GrsqbO92rUT+PDb/q2zJ3laKaoNayrYMoNXm1AqzULMO22TylFQ2R5riaY97AI1LKOAC7RWLvC3nHvx4cqhLEO3u23i2I+WEeI1HGn9hd14nAyw7EfCQcvxD8Cuws9Z3xujR2Kq7iDGHD5ngOYWeOYpUziUZa6Vm9LW7OjVffVaCAAFspDrx8cDpxxeSrcmb+Gqm5r9fK4w7dfWWI+Tw811tTANkujwQ+GFAljwHMJPjXpwz4vNaNyF8/IH0vPS4+1i4Vr2R6G2oELtlWLorYyE+U2sqz8Dtor6rg32IARPwV3er8FZ0H
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(39860400002)(136003)(366004)(396003)(451199021)(316002)(186003)(66446008)(41300700001)(4744005)(2906002)(9686003)(33656002)(26005)(6506007)(54906003)(110136005)(71200400001)(86362001)(66476007)(7696005)(64756008)(4326008)(83380400001)(66556008)(66946007)(76116006)(478600001)(38070700005)(52536014)(8936002)(38100700002)(8676002)(5660300002)(122000001)(55016003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9318
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	03990af5-4397-4b7d-6504-08db4c5529c5
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zUFSVEjrVBptzOnu7Fzf+FHfOBWhkZ7OhgUPdVnw1Mw1geAz1qhU0qOUgEAmC/7bS6xs9Gtqwpj+W/gZthdtiUrgvIxE21ZSSDuFJuj6YoA48o63z5wvbiYd5GDoBhHihHUdxH3RdGP9ZArMGdMO6BsBO38PRu5znTa2/4/H1G3CNAL4FCJ5JX5tMI/sdNlkHIO8A8vqlpCzw9043oXz27sa48jx+PX2pCHm7FceZlbWWtBUR4qfgVu9EjM6DyLvJ7Dr4PIzd2JLFQs7cqKvLaHDh/3wRugpdnctvo/Oqf1Ir4xxeGE6FtEtnFiPIIQHpaowhTCYoMA2pLjMWuoBrD7p4KLdHf/AQH5svzM9AsSnXHR1Y7HBD+OEl9PMSfzQ6AuIGmjqufJ8JAPUqyXwk7rbwChuZLcM1jIhZHJScCkXKmJwXD6Lg9s5too7NEcdG/o5b9eylHoAFDkEEPMd3bp/PL3poUTT8Cp5aM71WNaT84LkLImEZeBWQk7J1q+DK+Z8Tet8x4SS985F7TvD56YXWKwljKstfZvtUygBNkutuDT0rfYPm/GnIRrbA1M2W2z2LctvsnvbAH5XX+Tiq04DI1ZyGommPYJMS0Jc1XxgueDe0zv6WMuYZRNxPa6PW3V2tMM20lfXcYvq71rPh2i06UfttPvKTQ+qsCbf45J0GkWTbChJd94IPMzDFe+Ayhi58HVFpqGBHivI/V6P9FgLQSh3D+5eEsrga79i4BM=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(136003)(39860400002)(346002)(451199021)(40470700004)(46966006)(36840700001)(8676002)(8936002)(36860700001)(40480700001)(356005)(82310400005)(41300700001)(81166007)(34020700004)(4326008)(33656002)(70586007)(55016003)(86362001)(110136005)(478600001)(70206006)(316002)(54906003)(40460700003)(7696005)(82740400003)(4744005)(2906002)(336012)(52536014)(5660300002)(9686003)(6506007)(26005)(186003)(47076005)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 04:08:14.3025
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ce8015e-0feb-4fec-d84a-08db4c552ebd
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8178

Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v6 04/19] common/device_tree.c:
> unflatten_device_tree() propagate errors
>
> This will be useful in dynamic node programming when new dt nodes are
> unflatten

Typo: s/unflatten/unflattened/?

> during runtime. Invalid device tree node related errors should be propagated
> back to the caller.
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
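The propagation pattern the patch description refers to can be sketched as follows. This is a hypothetical simplification for illustration only, not Xen's actual code: the types and function names (`dt_node`, `unflatten_node`, `unflatten_tree`) are made up here.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-in for a device tree node being unflattened. */
struct dt_node {
    const char *name;
};

/*
 * Instead of silently skipping a malformed node, return a negative
 * errno so the caller can abort the whole unflattening operation.
 */
static int unflatten_node(const struct dt_node *node)
{
    if (node == NULL || node->name == NULL)
        return -EINVAL;   /* invalid node: report, don't ignore */
    return 0;
}

static int unflatten_tree(const struct dt_node *nodes, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        int rc = unflatten_node(&nodes[i]);
        if (rc < 0)
            return rc;    /* bubble the error up to the caller */
    }
    return 0;
}
```

The point of the patch is exactly this shape: callers of the unflattening routine get a meaningful error code back, which matters once nodes can be added at runtime rather than only at boot.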

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu May 04 04:11:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 04:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529539.824003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQJU-0000TX-Rt; Thu, 04 May 2023 04:11:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529539.824003; Thu, 04 May 2023 04:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQJU-0000TQ-PE; Thu, 04 May 2023 04:11:28 +0000
Received: by outflank-mailman (input) for mailman id 529539;
 Thu, 04 May 2023 04:11:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ABSM=AZ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1puQJT-0000T9-JW
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 04:11:27 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7d00::60d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bceb1300-ea31-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 06:11:26 +0200 (CEST)
Received: from DB3PR08CA0007.eurprd08.prod.outlook.com (2603:10a6:8::20) by
 AS8PR08MB8969.eurprd08.prod.outlook.com (2603:10a6:20b:5b4::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 04:11:22 +0000
Received: from DBAEUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:0:cafe::89) by DB3PR08CA0007.outlook.office365.com
 (2603:10a6:8::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 04:11:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT020.mail.protection.outlook.com (100.127.143.27) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.20 via Frontend Transport; Thu, 4 May 2023 04:11:21 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Thu, 04 May 2023 04:11:21 +0000
Received: from f58563182d46.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C3930B08-7ACF-4A63-923B-820A0CEB416C.1; 
 Thu, 04 May 2023 04:11:15 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f58563182d46.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 04:11:15 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS2PR08MB9318.eurprd08.prod.outlook.com (2603:10a6:20b:59a::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 04:11:13 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%6]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 04:11:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bceb1300-ea31-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fxay42dSdmjWN8IT6/RlKiEPIPktamK7xVeI87saUeg=;
 b=qrEwkInrCzW9aNRH/D2CbUoY8lTIBEDk2gqRTy0YTZgc7qpEGR5EGA6h3MkaqB0yLWNM7FSraWf7kxwHIch+Tgg3iwbGDMpT3EcWRvmiC/dTqg0GMeq83Zaw8uapYhGjBDSEN/Nae9foIZpO0mhCTJYRDqsjKyN1f6pAp+QIb50=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KGFou6L6yyLFXannkGB0McshBhr7X1odx48U/zRMDhhL8c/bTrW3VP0LH/uJsgG2qXfPjiL74GwaPJn/np8LCif9kHsd6p5Pg9zVWEbIuk3pOQhaZi6wPN8TJdaNtRrWakIt383+Hy9lUy5tPYA4uAijQttmdAKnslYBpG7NF8+tY+kJ/VcGyFxCDYA1+gkW3GxOzXONGOMYOehoU9vxyXis5PayngUIeryzqegVY/0m3fKv3Zm+XLC/C+vwrV9sj2NvnFk3pfzee/XARLqEEMMZ+5i1qCRM6tbS//XACLkpltA7DaF0rmXJPcBwC+YQ9JxhXEpHgi/211xgKQ49Nw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fxay42dSdmjWN8IT6/RlKiEPIPktamK7xVeI87saUeg=;
 b=j/pbfzpj9dEqabyTCIhH3VR1nNp5cETezwcRjnBEYEaQ2U7rXRSXFpU9Mi8HvW7HByzRrlXxOAoq15JSHi++W3ugS/aKQbjlm6ELgaSAx6LSLII0wHdGP8r5vaocj1fVAmMiyYj5Eyr0IDdSab6gJaksGc6sC3sNP1Ko/S4FjUsv0sT/ekhhfNmpzcDx/LC0nAbtEEMR6Lh0iNWWHPbV9EsSfCDAqXNGB9JU/BEwADCFphAGvgR4uJH2VEJmxmoPuHLk5vXSTJmatzDSibOUWvfckNa8Zv1gk7eNOfEkzU1z6QMNoLA5/Vs+9c4ae9Cg7gFNI0f7G92PwSX6oDF4cw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fxay42dSdmjWN8IT6/RlKiEPIPktamK7xVeI87saUeg=;
 b=qrEwkInrCzW9aNRH/D2CbUoY8lTIBEDk2gqRTy0YTZgc7qpEGR5EGA6h3MkaqB0yLWNM7FSraWf7kxwHIch+Tgg3iwbGDMpT3EcWRvmiC/dTqg0GMeq83Zaw8uapYhGjBDSEN/Nae9foIZpO0mhCTJYRDqsjKyN1f6pAp+QIb50=
From: Henry Wang <Henry.Wang@arm.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [XEN][PATCH v6 05/19] xen/arm: Add CONFIG_OVERLAY_DTB
Thread-Topic: [XEN][PATCH v6 05/19] xen/arm: Add CONFIG_OVERLAY_DTB
Thread-Index: AQHZfU8esEuGfevR30ChHQjU9+v2dq9JgYNw
Date: Thu, 4 May 2023 04:11:12 +0000
Message-ID:
 <AS8PR08MB799123AE54B0ABE907F2FBB8926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-6-vikram.garhwal@amd.com>
In-Reply-To: <20230502233650.20121-6-vikram.garhwal@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 97A432DCC59437498D34F28A40C1798E.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS2PR08MB9318:EE_|DBAEUR03FT020:EE_|AS8PR08MB8969:EE_
X-MS-Office365-Filtering-Correlation-Id: e56dfe3e-d323-4327-94e2-08db4c559e6c
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 tInXyNO8hj44Vsiq4fqyQS6PWbzKj3/Y8i+FU7ZJZejnEodZx5228Og0CFwpo8MxUoXgO3SeAlzm1fhZDJxOrYNgzC6TSNpdb763BdmhIBaN0PXiu4VTvyuPZPB+ahgzoc7+DqM8hZ8KMKQ269ultHN+B+cCKJDokX65GQx6hR29wcYLqncFIGF4MKVJnAMjUux0GLbpGM8V1fg/LvhUT/bTrPmrtjIrdEhkDG5NBZ03cdDvuzzfFR26b8LqQVz5rYm5S5BUFzjCQhc2J+MyskBnZDKQ/1NRZ4E/KFgkntJXMCUZPsaNWHDc1h9oV+zCsCt7d0sya3hXiUjCZj+N04XHQqmYi7sp7EhTFL0i/hdP/P3Kk6HZcArkE0OzxLLOYOvYlF+nvgQ28zz68hg/cGtSxULjAJ44xbqYmRU0OCPZyrjUJi07Yx/XdRjLfDmm4G07XhJMWtViB6i9NIccvfOHxRplwzmn4qjEuEiiTrFSrQyTmE8VUvWCxR1ibbcGBFsc3nWoQi7XvGSkFaQMz8X3t6K00JDxf6bgjn8Qr1XbN2brjuWnKzVqPa9/ZT8bcKZq7Sz2UxQZ1V06JtOwIbG47NwzLyGLVVAsieJKiiQkiQHC4XLze4kcIlyaHJmb
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(366004)(136003)(346002)(376002)(451199021)(38070700005)(7416002)(478600001)(122000001)(55016003)(52536014)(8676002)(5660300002)(8936002)(38100700002)(110136005)(71200400001)(54906003)(6506007)(86362001)(66946007)(66556008)(76116006)(66476007)(7696005)(4326008)(83380400001)(64756008)(26005)(66446008)(316002)(186003)(33656002)(9686003)(41300700001)(2906002)(4744005);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9318
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	124e434a-952d-438d-4aab-08db4c559936
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	D6G+VX+dKBTBdUcjDsvD+R6/kXGYFuI88hRAsT8hopAzHcb78bIPoeIpUg/OMtfqF7Ke4oYFtApsexvsVCHI2zWF3y49FQ6GYASaySANkPreyUGAqKP5+vl1b/IJaryQ7vRNBj0qYUFrbghXI0x4kRTMo513UU1S6V9Tn0y7Vp8NbzJW6ygI9QZYRTRUtlKGWWXx3ApIQcD+ubOeAqSvcPv/GDm1t4McCSyB4ZbrD2gPaFlHyYqt56cqhKFuLNDjyin9th9i7UquAwN/oL49gyOvDLy2tPBAR7dNSo4zaca5e51jgET35S/CcSf7vOoocN8Ui7m/r0NYRkXXuvwu1mpxCRDRUAnjs2296jqRJX9PrODa8Y5b8abEAPLR7DQYHXcPzWrJsUIoUsNDlbHKZqV9Zw8cZmdwWWcL5RG4XBj6fxzdhPhvPrxEy8o6vYuefgCnugFsc7Kfj/EP6eNPQ9n9a6sMOT1yQWP03OQOCSADsLOg2sQ8WHhHlFiC+oZUJ0ULyUoXq6ORfRSJ4emVa+CC1CHGF1+A0Ukm4+j+2OltRAdnymlN0KSCUB87hy6mEN0EWK0ptromh9yE/9mGlcpeNII11OlQ89FJ4RoyEM1K1XeGbfzilnT2k3f0hawI5FGSRi3quCe/dJ2RWep01qgCRYQc1ndqSgVA1Aq28DvoNpP6VcnqA/S/vHfZ2hN6raQkT/F7iOULn35ediebxzVvXfz4W1cCc+VxeqaiRg8=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(376002)(346002)(396003)(451199021)(40470700004)(36840700001)(46966006)(82310400005)(70586007)(41300700001)(83380400001)(40480700001)(336012)(33656002)(4744005)(2906002)(316002)(34020700004)(55016003)(5660300002)(478600001)(52536014)(4326008)(86362001)(82740400003)(70206006)(356005)(81166007)(110136005)(54906003)(7696005)(47076005)(40460700003)(36860700001)(186003)(107886003)(6506007)(26005)(9686003)(8936002)(8676002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 04:11:21.6591
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e56dfe3e-d323-4327-94e2-08db4c559e6c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8969

Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v6 05/19] xen/arm: Add CONFIG_OVERLAY_DTB
>
> Introduce a config option where the user can enable support for
> adding/removing
> device tree nodes using a device tree binary overlay.

May I please also suggest adding a CHANGELOG entry in the "### Added"
section? I personally think this series deserves a CHANGELOG entry, but I
am open to others' opinions. Thanks!

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu May 04 04:14:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 04:14:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529542.824013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQMG-00014L-9z; Thu, 04 May 2023 04:14:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529542.824013; Thu, 04 May 2023 04:14:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQMG-00014E-6f; Thu, 04 May 2023 04:14:20 +0000
Received: by outflank-mailman (input) for mailman id 529542;
 Thu, 04 May 2023 04:14:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ABSM=AZ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1puQME-000140-HR
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 04:14:18 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2071.outbound.protection.outlook.com [40.107.7.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 22ca196c-ea32-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 06:14:17 +0200 (CEST)
Received: from AS4P195CA0001.EURP195.PROD.OUTLOOK.COM (2603:10a6:20b:5e2::8)
 by PAWPR08MB10017.eurprd08.prod.outlook.com (2603:10a6:102:34e::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.36; Thu, 4 May
 2023 04:13:48 +0000
Received: from AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:5e2:cafe::41) by AS4P195CA0001.outlook.office365.com
 (2603:10a6:20b:5e2::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.25 via Frontend
 Transport; Thu, 4 May 2023 04:13:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT046.mail.protection.outlook.com (100.127.140.78) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.26 via Frontend Transport; Thu, 4 May 2023 04:13:47 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Thu, 04 May 2023 04:13:47 +0000
Received: from 935fa7e549fd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 382A2462-025A-48FB-86A2-9B54290EC670.1; 
 Thu, 04 May 2023 04:13:41 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 935fa7e549fd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 04:13:41 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AS2PR08MB9318.eurprd08.prod.outlook.com (2603:10a6:20b:59a::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 04:13:39 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%6]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 04:13:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22ca196c-ea32-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C3kU+1WbGesUTX/nZUfvqnyHgj4JvcY4aYWwS7cXBno=;
 b=PZNlDyRsB1aXaOVAURS+iO17+j2kjnu9+KEp+JAu+1ajgtnE9nO7PVD5zhYoFmPlyUK45QcwFhECAqViRNorNyjDuViMplurl3sQZYoN/3ckXwJ/DUbyEmsM3SvPQFAA7o23LKgj5eCSrzrWD3lOx2+9yFgPSFRMQoQ5AXgYsFI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jLxvT5ggCmmSVQy73NC25iTWkS6CNVZKgNxn8nc8oeaphV7pt9H4cZInVKCLpBgi/mptK4YNAqwEkbGFDNuOIQDA2tu8dmm1zlX0HIEx9UfnqoRbClFwPtupflpSkXlsDlpJt7ifCCWFX20PwkT3YTxGbadCGc4ZEzzeUhKg2bYzfHvZny6Q7UwHIQ/X7SIWAhb9pXfxVZc6ljfImL0zZihn1/vAopDmba7gmS+RP79FOASdUk6rvntlcR8cjbhgS/Um9r3PM07LPO/CAHG4t1j+ynrDYgug8EfhBp7C7t5elhwF2Slq8fVFqGq4VS1xDCICo1FyT8Y10oX/VJMgkA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=C3kU+1WbGesUTX/nZUfvqnyHgj4JvcY4aYWwS7cXBno=;
 b=XiFy/0yq9tZ2OdxoG/ghQk8trvww9wKWpprYaQ3QfC8TbfEL0T4m6J887eF3umlxcMZVs2MpdAXqZ5slxQvhUgw5Tmk9ZzVOacsAKN8ZZTFfcgSVzUshyJmDawvBbuFjJ9duTsLoKoSx5UJidNh3xP+dfU0FZp3+XuAQwsivCbFksWg/OL01nV4gVkW3G67ODNzyrpbfW54oYo4CFt6mGaVL94HPriYzYqwzfvCn/ym3dWcuB2ajmRx3lhPpTy/U5vbKQUAD3Txuvq27+l8onjQo9jWunMG6gJnTa6q1E/C4/crM5OqxxUdoLcLXCt/ebaVG4+jqljwiTWdxqJOnhQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C3kU+1WbGesUTX/nZUfvqnyHgj4JvcY4aYWwS7cXBno=;
 b=PZNlDyRsB1aXaOVAURS+iO17+j2kjnu9+KEp+JAu+1ajgtnE9nO7PVD5zhYoFmPlyUK45QcwFhECAqViRNorNyjDuViMplurl3sQZYoN/3ckXwJ/DUbyEmsM3SvPQFAA7o23LKgj5eCSrzrWD3lOx2+9yFgPSFRMQoQ5AXgYsFI=
From: Henry Wang <Henry.Wang@arm.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>, Vikram Garhwal
	<fnu.vikram@xilinx.com>, David Gibson <david@gibson.dropbear.id.au>
Subject: RE: [XEN][PATCH v6 07/19] libfdt: overlay: change
 overlay_get_target()
Thread-Topic: [XEN][PATCH v6 07/19] libfdt: overlay: change
 overlay_get_target()
Thread-Index: AQHZfU8lidJvDau6tkqOqDCcDCOd6K9JgsNA
Date: Thu, 4 May 2023 04:13:39 +0000
Message-ID:
 <AS8PR08MB79916EBA9A9AD2540BDB6676926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-8-vikram.garhwal@amd.com>
In-Reply-To: <20230502233650.20121-8-vikram.garhwal@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 8EB2B4542D95134BB0A71869053B359A.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AS2PR08MB9318:EE_|AM7EUR03FT046:EE_|PAWPR08MB10017:EE_
X-MS-Office365-Filtering-Correlation-Id: 11a288c1-5c2c-4909-0948-08db4c55f565
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9318
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5b518887-47c0-47d0-fc2c-08db4c55f06d
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 04:13:47.5402
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 11a288c1-5c2c-4909-0948-08db4c55f565
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB10017

Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v6 07/19] libfdt: overlay: change overlay_get_target()
>
> Rename overlay_get_target() to fdt_overlay_target_offset() and remove
> static function type.
>
> This is done to get the target path for the overlay nodes which is very
> useful in many cases. For example, Xen hypervisor needs it when applying
> overlays because Xen needs to do further processing of the overlay nodes,
> e.g. mapping of resources(IRQs and IOMMUs) to other VMs, creation of SMMU
> pagetables, etc.
>
> Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com>
> Message-Id: <1637204036-382159-2-git-send-email-fnu.vikram@xilinx.com>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Origin: git://git.kernel.org/pub/scm/utils/dtc/dtc.git 45f3d1a095dd
>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu May 04 04:23:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 04:23:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529547.824024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQVH-0002rY-7l; Thu, 04 May 2023 04:23:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529547.824024; Thu, 04 May 2023 04:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQVH-0002rR-4g; Thu, 04 May 2023 04:23:39 +0000
Received: by outflank-mailman (input) for mailman id 529547;
 Thu, 04 May 2023 04:23:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ABSM=AZ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1puQVG-0002r0-Ck
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 04:23:38 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20618.outbound.protection.outlook.com
 [2a01:111:f400:7d00::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6ed1914b-ea33-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 06:23:34 +0200 (CEST)
Received: from AS9PR01CA0004.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:540::13) by PAXPR08MB6672.eurprd08.prod.outlook.com
 (2603:10a6:102:137::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 04:23:30 +0000
Received: from AM7EUR03FT039.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:540:cafe::ce) by AS9PR01CA0004.outlook.office365.com
 (2603:10a6:20b:540::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 04:23:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT039.mail.protection.outlook.com (100.127.140.224) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.25 via Frontend Transport; Thu, 4 May 2023 04:23:29 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Thu, 04 May 2023 04:23:29 +0000
Received: from f83775cd8bbe.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2CC55D5F-6E59-4398-825A-921D5D0CAAED.1; 
 Thu, 04 May 2023 04:23:23 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f83775cd8bbe.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 04:23:23 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by VE1PR08MB5742.eurprd08.prod.outlook.com (2603:10a6:800:1a9::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 04:23:18 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%6]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 04:23:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ed1914b-ea33-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/HYqL7xhkw9+WdB4I1aHYn7t4ZClm5rVJZUpNZxuLvU=;
 b=GGwvP4wRizwIhmXGXkRlDay4pfHwTO7J6ibCdRpkTq1lGKk966seetuNM41l5TxaqS/TlOQJxlll+RG/NzF7zgHCOX6mCbTkYkORunjtM3nsbeujLVh/+cXJ0GZ4SI029mOdhJfolx8+QcMLPgJb255oqCLmsNVDfjIvId9lVPo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QldMFTUaabUlA+KJ6xPGELEqViReWBaQt2iHFKymtJgzm6p8LxGn9J1OazTlbsCmVPVX25dWM2+dUoDH6ikVSGf5GMYF4ngXEwsJeT2pXGpLrVd267Rty020tRaaGMo0otVxqFwsW9KX6BkyzpyV+WvUj2tUGQT0+BezBWPK0naEHX3+ISlr6lmK5bDhsv4cScx6Byx2PojBQ+0WhSTubIBuq8EsSi98zOs84ft+2kVVPe5bGh9gYip64nwVX32Y5yC3nDAtuHvXqgJ1mP/OB7izm9n9WYwF+ur7i8QvzQxkmaabl02Rsrfe16ZNnawDHjIdbhKQ+x7/pMxnbog/hA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/HYqL7xhkw9+WdB4I1aHYn7t4ZClm5rVJZUpNZxuLvU=;
 b=KC+SyAoSAaq1bonYkrT6FSLYodgAkOA7/JKTrcLMweOew+qXY3NQCWj0RvqYXuGDDhtPQ8HS1EPv7scFTK326IdFuXAgx2HxPChIT+n8WWPSkiV1G7bB+qezVjL72xr32sxufLuiLVvlnAJtKdYEnr1ytysN4yssFf5HffJLu3SJoSgWeG5QanLP3ZV7NjBSVuk61nmFOYEIWF7Pp31PF9lr92TuSyhK8q7QgMrZOY0ehR548hBhmUvsoUtV3ZT0fmHObKyvtuQ8Eoz/rvovNdrVhzoi7C4F0lAATXN/DQZ20IicN+wzBon3cA9zzDHmNKhXx9GQWm8DBJr/CpsQgQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: RE: [XEN][PATCH v6 08/19] xen/device-tree: Add
 device_tree_find_node_by_path() to find nodes in device tree
Thread-Topic: [XEN][PATCH v6 08/19] xen/device-tree: Add
 device_tree_find_node_by_path() to find nodes in device tree
Thread-Index: AQHZfU8kxczuWvlyd0qZ9yNYcQTG6K9JhHkw
Date: Thu, 4 May 2023 04:23:17 +0000
Message-ID:
 <AS8PR08MB79910CFF4439E503046660EF926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-9-vikram.garhwal@amd.com>
In-Reply-To: <20230502233650.20121-9-vikram.garhwal@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 9532491CA0CA444AB7D822D303AFE3B8.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|VE1PR08MB5742:EE_|AM7EUR03FT039:EE_|PAXPR08MB6672:EE_
X-MS-Office365-Filtering-Correlation-Id: a91a931b-0fbe-4635-9a75-08db4c575030
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5742
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	762f33d9-5b21-4aad-bb9b-08db4c57494b
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 04:23:29.3816
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a91a931b-0fbe-4635-9a75-08db4c575030
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6672

Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v6 08/19] xen/device-tree: Add
> device_tree_find_node_by_path() to find nodes in device tree
>
> Add device_tree_find_node_by_path() to find a matching node with path for
> a dt_device_node.
>
> Reason behind this function:
>     Each time overlay nodes are added using .dtbo, a new fdt(memcpy of
>     device_tree_flattened) is created and updated with overlay nodes. This
>     updated fdt is further unflattened to a dt_host_new. Next, we need to
>     find the overlay nodes in dt_host_new, find the overlay node's parent
>     in dt_host and add the nodes as child under their parent in the
>     dt_host. Thus we need this function to search for node in different
>     unflattened device trees.
>
> Also, make dt_find_node_by_path() static inline.
>=20
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/common/device_tree.c      |  5 +++--
>  xen/include/xen/device_tree.h | 17 +++++++++++++++--
>  2 files changed, 18 insertions(+), 4 deletions(-)
>

[...]

>  /**
> - * dt_find_node_by_path - Find a node matching a full DT path
> + * device_tree_find_node_by_path - Generic function to find a node matching the
> + * full DT path for any given unflatten device tree
> + * @dt_node: The device tree to search

I noticed that you missed Michal's comment about renaming "dt_node" here
to "dt" so that it matches the function prototype below...

>   * @path: The full path to match
>   *
>   * Returns a node pointer.
>   */
> -struct dt_device_node *dt_find_node_by_path(const char *path);
> +struct dt_device_node *device_tree_find_node_by_path(struct dt_device_node *dt,

...here. I personally agree with Michal, so please fix the comment to keep
it consistent.

The rest of the patch looks good to me, so as long as you fix this, you can
have my:

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry



From xen-devel-bounces@lists.xenproject.org Thu May 04 04:39:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 04:39:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529550.824033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQjx-0004Yk-Gc; Thu, 04 May 2023 04:38:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529550.824033; Thu, 04 May 2023 04:38:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puQjx-0004Yd-Dw; Thu, 04 May 2023 04:38:49 +0000
Received: by outflank-mailman (input) for mailman id 529550;
 Thu, 04 May 2023 04:38:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ABSM=AZ=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1puQjw-0004YT-0W
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 04:38:48 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20602.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::602])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8e057a87-ea35-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 06:38:46 +0200 (CEST)
Received: from AS8PR04CA0009.eurprd04.prod.outlook.com (2603:10a6:20b:310::14)
 by AM8PR08MB6545.eurprd08.prod.outlook.com (2603:10a6:20b:368::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Thu, 4 May
 2023 04:38:37 +0000
Received: from AM7EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:310:cafe::3c) by AS8PR04CA0009.outlook.office365.com
 (2603:10a6:20b:310::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 04:38:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT044.mail.protection.outlook.com (100.127.140.169) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.26 via Frontend Transport; Thu, 4 May 2023 04:38:37 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Thu, 04 May 2023 04:38:37 +0000
Received: from c7311422318c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1A9751A1-930A-4FDB-8517-62D73A2CD018.1; 
 Thu, 04 May 2023 04:38:31 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c7311422318c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 04:38:31 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by AM7PR08MB5365.eurprd08.prod.outlook.com (2603:10a6:20b:109::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Thu, 4 May
 2023 04:38:27 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%6]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 04:38:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e057a87-ea35-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9IMZTaetQU/LDyWZ0wOaYTWh5ffcjtrO06NcTJvkw4Y=;
 b=jdAjDFwNxYqW2SP5C4+voYD6zBYm3iU7TKOPNCJZw2uTvfI8Yoi8xvny2j+QkbBRNHL3KNsf4tCzY9ByR2soq21JBSxuxRj9azlmiictv+o8LUuUe7kWLG3BP/1xX5QlZg4XqYVZ52O0Yywz3zOAEHGup2idbsjnrr37QiTpgrc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h891+WVSYNkDSsl0p3idiC+O4Vh0a2bV8iqvG3t3xxQlrHTz8U3fXuB3klsRmMip6Cp1KrNvgdIUbQ7X2hwSpdF4VCbIf7Ql3VQoqsoaWwY33qs5YDIQd7P8jW3+IkAdlgVO6Lt8Xb58FOaMGwrQTA5YfHCNQ3XS4lhcucQi1JdWH7Cy2K5Fq+6I81Xvs8B+4+jYGV4d0emF3WnLUGngeI7aj22lUYOPirPohSu5P86Lknff7rhXcf93uyEWeT/JtmLPhDsO8jXUiLaR2GP76CQhnrjY81pLiodYK5DwjV43JF3IRwbqjypsGmOWHruv419DIbxE6mtGoYpkpecM9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9IMZTaetQU/LDyWZ0wOaYTWh5ffcjtrO06NcTJvkw4Y=;
 b=HvUYcRfbl+iLpyD14flz4wfc8EkAPOQ/UZhar/y5f8WXxRzUGk7BrEJfcsnSoFpbXou5aFsk9mE9KEdksEZc7YmLI7U9asGdtFSOwd4cDwojKfV8nnaQw4b7G0tfayL9T/YowrLVr0JnPsKEJ6ldafH8f5cQlePKyhJqtPl8jpj+t0Daejw9ArOH8p2xBr9WMMpghIPUuVhGR6xB8BDtjgvKhisxd7ZYYQIqY3XanIppFWdkPDXUgTIPILdqLKRZRO6QtHffzyrTznYPZvGa8pCS9YenRNNuAKkwhKb+fim3k/SuBE+HzZGxBVkA27t7vadimFx/vfvTOGFDrhLRCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, "michal.orzel@amd.com"
	<michal.orzel@amd.com>, Julien Grall <julien@xen.org>
Subject: RE: [XEN][PATCH v6 14/19] common/device_tree: Add rwlock for dt_host
Thread-Topic: [XEN][PATCH v6 14/19] common/device_tree: Add rwlock for dt_host
Thread-Index: AQHZfU8ldKybmQ5LfEKEsMjqvUt8n69Jh4jw
Date: Thu, 4 May 2023 04:38:26 +0000
Message-ID:
 <AS8PR08MB799196697F990D65F00163C2926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-15-vikram.garhwal@amd.com>
In-Reply-To: <20230502233650.20121-15-vikram.garhwal@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: F5DEE6E9CFF93740BBB29AE2D5873925.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|AM7PR08MB5365:EE_|AM7EUR03FT044:EE_|AM8PR08MB6545:EE_
X-MS-Office365-Filtering-Correlation-Id: 275a40de-3bfc-4ef7-61f0-08db4c596d61
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5365
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	08ad4aed-3095-48e4-bd06-08db4c5966d1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 04:38:37.3331
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 275a40de-3bfc-4ef7-61f0-08db4c596d61
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6545

Hi Vikram,

> -----Original Message-----
> Subject: [XEN][PATCH v6 14/19] common/device_tree: Add rwlock for dt_host
>
>  Dynamic programming ops will modify the dt_host and there might be other
>  function which are browsing the dt_host at the same time. To avoid the
>  race conditions, adding rwlock for browsing the dt_host during runtime.

I now understand why you use an rwlock instead of a spinlock in this patch,
since you explained it in your reply to my comment on v5 (thanks!). I would
still suggest adding that explanation to the commit message, to make it
clear to everyone reading this patch.

>
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/common/device_tree.c              |  4 ++++
>  xen/drivers/passthrough/device_tree.c | 18 ++++++++++++++++++
>  xen/include/xen/device_tree.h         |  6 ++++++
>  3 files changed, 28 insertions(+)
>

[...]

>          ret = iommu_add_dt_device(dev);
>          if ( ret < 0 )
> @@ -310,6 +321,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
>              printk(XENLOG_G_ERR "XEN_DOMCTL_assign_dt_device: assign \"%s\""
>                     " to dom%u failed (%d)\n",
>                     dt_node_full_name(dev), d->domain_id, ret);
> +
> +        read_unlock(&dt_host->lock);

Since you added "read_unlock(&dt_host->lock);" before the final return,
i.e. "return ret", I don't think you need to add "read_unlock(&dt_host->lock);"
here before the break. Or am I missing something?

>          break;
>
>      case XEN_DOMCTL_deassign_device:
> @@ -328,11 +341,15 @@ int iommu_do_dt_domctl(struct xen_domctl
> *domctl, struct domain *d,
>              break;
>
>          ret = xsm_deassign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
> +

Nit: Unnecessary blank line addition here.

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Thu May 04 05:02:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 05:02:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529553.824044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puR6V-0008L2-BB; Thu, 04 May 2023 05:02:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529553.824044; Thu, 04 May 2023 05:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puR6V-0008Kv-7i; Thu, 04 May 2023 05:02:07 +0000
Received: by outflank-mailman (input) for mailman id 529553;
 Thu, 04 May 2023 05:02:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puR6U-0008Kl-1h; Thu, 04 May 2023 05:02:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puR6T-0003Fz-Nj; Thu, 04 May 2023 05:02:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puR6T-0001V8-EI; Thu, 04 May 2023 05:02:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puR6T-0004gS-8F; Thu, 04 May 2023 05:02:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BVzlKS27eCwBG75ubFEALwQoOw/lTowyKqMriHkgyS4=; b=Ou+/8149eZs2v09fomVRIplen6
	af37q9PT9EqeH/7nRiCBQ3x1Ci/asO2vphSdWl2jbzgOX1MKxNdqFagLzu0DSTJzVe+2N1pe8I2Ah
	kK8D77Lm3aG3Qa5uP4cE+VESvCHAIpQa8Tyag16tnFtcbRyHlzcPGPG3ax5sc/H2Cop4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180516-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180516: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=348551ddaf311c76b01cdcbaf61b6fef06a49144
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 May 2023 05:02:05 +0000

flight 180516 linux-linus real [real]
flight 180523 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180516/
http://logs.test-lab.xenproject.org/osstest/logs/180523/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                348551ddaf311c76b01cdcbaf61b6fef06a49144
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   17 days
Failing since        180281  2023-04-17 06:24:36 Z   16 days   29 attempts
Testing same since   180516  2023-05-03 09:42:44 Z    0 days    1 attempts

------------------------------------------------------------
2194 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 267091 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 04 05:56:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 05:56:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529560.824054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puRxM-0005T6-Cw; Thu, 04 May 2023 05:56:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529560.824054; Thu, 04 May 2023 05:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puRxM-0005Sz-A7; Thu, 04 May 2023 05:56:44 +0000
Received: by outflank-mailman (input) for mailman id 529560;
 Thu, 04 May 2023 05:56:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iTNB=AZ=gmail.com=terryyang28@srs-se1.protection.inumbo.net>)
 id 1puRxL-0005Sq-CF
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 05:56:43 +0000
Received: from mail-vk1-xa33.google.com (mail-vk1-xa33.google.com
 [2607:f8b0:4864:20::a33])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7046a8fe-ea40-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 07:56:40 +0200 (CEST)
Received: by mail-vk1-xa33.google.com with SMTP id
 71dfb90a1353d-44f985f250aso28151e0c.3
 for <xen-devel@lists.xenproject.org>; Wed, 03 May 2023 22:56:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7046a8fe-ea40-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683179799; x=1685771799;
        h=cc:subject:message-id:date:from:in-reply-to:references:mime-version
         :from:to:cc:subject:date:message-id:reply-to;
        bh=VK/HOsLMJbrMWPV7/oXJICeUdndI85E2By62v77+Wp4=;
        b=CVPuBUJ70jfuo+olUztYtGZHcgaT7XXOHxwsGCPcDKaXAZEina6PHhvYqVCD1+W6og
         m0Zo2g+9+K+W+90sghiFTQIkB7lNgChgexYKP8GaBlcct8FnvHAFOHewZtSL6JumaUDz
         V0Ls3s9JQNX3MYryDyAwfQipDXj5rtfvBzTiYvYU2W6H7/u2S3JHdz0hOwX8XcgsMOds
         N2SXNIA2Ow+kn2RLHelpzjOqXeNtd7KVFJ4FPo19hpFOkgCzUFTjNcI5p8HfYcQXqOWW
         n9xqXg7IVTmV0eY47YG2MojzgBTkG3QHi+E6a1iIZb2LnDUZK0w4BKAc7mOudrkjle/I
         z9JQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683179799; x=1685771799;
        h=cc:subject:message-id:date:from:in-reply-to:references:mime-version
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=VK/HOsLMJbrMWPV7/oXJICeUdndI85E2By62v77+Wp4=;
        b=Qrb3VNpHF+mjqUYiIeY04FG5RZCUoRr16HQO3l8KM11Sj3nvI/KoetuGl+p1lLdeJP
         T01y6BFqPUVE3d9gxww5i+Kaffaw8ECwn94k4aHcdfHc1uNnRFyWXtaX2R90vQrKtkTT
         H+Zdb4dVIW6mfLWgGptWMBF9q4YH1lX2UHiXb53U5QQv6UibnX4tZxUeEkiI/CGTBGlF
         RfIrHvLuaYDHghPk3qc3ExTd8Bf5PLV8ueLVBrVh7r+zdpAeAhsZzJiks4nXe9I2LPib
         CXy/qd4p5YXmjH8a8SIZLEbNZmip7cDDcYdNVpE6wS2qfsLNpQJJrkdZaHqNubIOCPw8
         sMaA==
X-Gm-Message-State: AC+VfDxhafHjn8tDSBmnisdZIeIzElkdra3P3WiImhhDXZMxhITfEVJy
	rsED+AVkoHAvyLQypuebIKZnf1cXybZS4Qwwl5WQeU0J
X-Google-Smtp-Source: ACHHUZ7JaNJYTFJLwDI1ONQaLdQiWw35e54cEKfjeOHNm4IQg2S/uOx5yO3WSEwWNwWYCd0wsuZsW6T2rTceCOO7B+M=
X-Received: by 2002:a05:6102:40f:b0:430:76df:b03b with SMTP id
 d15-20020a056102040f00b0043076dfb03bmr2821153vsq.30.1683179799302; Wed, 03
 May 2023 22:56:39 -0700 (PDT)
MIME-Version: 1.0
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-9-vikram.garhwal@amd.com> <AS8PR08MB79910CFF4439E503046660EF926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB79910CFF4439E503046660EF926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Terry Yang <terryyang28@gmail.com>
Date: Thu, 4 May 2023 13:56:27 +0800
Message-ID: <CABeCVi4p7S1=b0bHqFJ=MyY8Xy3nCOOC3-28wKmNSoNq=yj4fw@mail.gmail.com>
Subject: unsubscribe
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="00000000000007d79705fad7d633"

--00000000000007d79705fad7d633
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Henry Wang <Henry.Wang@arm.com> wrote on Thu, 4 May 2023 at 12:23:

> Hi Vikram,
>
> > -----Original Message-----
> > Subject: [XEN][PATCH v6 08/19] xen/device-tree: Add
> > device_tree_find_node_by_path() to find nodes in device tree
> >
> > Add device_tree_find_node_by_path() to find a matching node with path for
> > a
> > dt_device_node.
> >
> > Reason behind this function:
> >     Each time overlay nodes are added using .dtbo, a new fdt(memcpy of
> >     device_tree_flattened) is created and updated with overlay nodes.
> This
> >     updated fdt is further unflattened to a dt_host_new. Next, we need
> to find
> >     the overlay nodes in dt_host_new, find the overlay node's parent in
> dt_host
> >     and add the nodes as child under their parent in the dt_host. Thus
> we need
> >     this function to search for node in different unflattened device
> trees.
> >
> > Also, make dt_find_node_by_path() static inline.
> >
> > Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> > ---
> >  xen/common/device_tree.c      |  5 +++--
> >  xen/include/xen/device_tree.h | 17 +++++++++++++++--
> >  2 files changed, 18 insertions(+), 4 deletions(-)
> >
>
> [...]
>
> >  /**
> > - * dt_find_node_by_path - Find a node matching a full DT path
> > + * device_tree_find_node_by_path - Generic function to find a node
> > matching the
> > + * full DT path for any given unflatten device tree
> > + * @dt_node: The device tree to search
>
> I noticed that you missed Michal's comment here about renaming the
> "dt_node" here to "dt" to match below function prototype...
>
> >   * @path: The full path to match
> >   *
> >   * Returns a node pointer.
> >   */
> > -struct dt_device_node *dt_find_node_by_path(const char *path);
> > +struct dt_device_node *device_tree_find_node_by_path(struct
> > dt_device_node *dt,
>
> ...here. I personally agree with Michal so I think please fix the comment
> to keep consistency.
>
> The rest of the patch looks good to me, so as long as you fixed this, you
> can have my:
>
> Reviewed-by: Henry Wang <Henry.Wang@arm.com>
>
> Kind regards,
> Henry
>
>
>

--00000000000007d79705fad7d633--


From xen-devel-bounces@lists.xenproject.org Thu May 04 06:39:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 06:39:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529567.824064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puSc6-0001cP-N1; Thu, 04 May 2023 06:38:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529567.824064; Thu, 04 May 2023 06:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puSc6-0001cI-KK; Thu, 04 May 2023 06:38:50 +0000
Received: by outflank-mailman (input) for mailman id 529567;
 Thu, 04 May 2023 06:38:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puSc5-0001cC-3G
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 06:38:49 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20629.outbound.protection.outlook.com
 [2a01:111:f400:7d00::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 520ac285-ea46-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 08:38:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8928.eurprd04.prod.outlook.com (2603:10a6:102:20f::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 06:38:43 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 06:38:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 520ac285-ea46-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hK0z8MRh+oZMYIiPBu8adL2pdYoVh3ULBXAIMzQpRxPTUtBetPt3YCf+KhitQUsSUUG8NAx6Rl2Pg3u0TyZw+/fNrZIdIWBHg7csD7U1aBfJVPUuHQ6xOpmyoOogYG4uyZgsajTF3Vax0P/SaXeLtcA+h2L2M0V0e5cgQ3Lv0Rpx5iytXXG3IDafn20GvW0zZlO2LSpv91cLS3xkI7K+NCLLAK2GiRGU7CPFRztXWZpH/bWNLCni45R4TPTiP4DjcEesKIOqP1vNIW08hcM4ywyudRkcqvgqTK7LavWCawj44RPgDZ/V7EzP4AeIWA3rv7Ntmh1Eu3j49LnHoq7H7w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AYJ7iD/4wH8NobWx0XxiU2uD48mzWp79ptgMKoq265g=;
 b=XwNrnRBg6QPDKtE5WT330MYZnOOdvOpF+wGf8sqzxvz/c4KfQPArnBKuXLoEOjXNAYRi4lc4yH8/cQQAMhsxRuiCRFncAessvTmn1SnmCcNuk/og0AgDBYAuUHO90rjXlHy8PPV0lZ2Ri6aJKQ73eXGTOhEeJs+MDCFXILV9PMXG28QXb8kUOzYXmcrjzyr03jDzU6wWK7uj208DCngsfXp8EXnuqAkwLppJhQFVSy80QT9zvaZl/kYGQMWN9GH5AOcKQ6jX0IAgYow4qslzMzpJ6zzs7IBG6b561zLT6LA95VWyK/eIrOq5/19FKH47bta1rQueTi5yxFDH7Qz6EA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AYJ7iD/4wH8NobWx0XxiU2uD48mzWp79ptgMKoq265g=;
 b=Hlw0qb4ffkVr4piAdCBwwmv3669rCCLyRdwgNIunwu9u/L7AR9rmnqVAmhXMnmZa8DIAvj4iu0a/wcZMUXS7qzVhl1SMwq7Vm5T3KFo4O22XJgQS4afDC0J3DSkGNRYJYJ7ZUz4sSS21TV+FA7Vn/D+CFdw0khsGEu5UUWRE6sGeF6F0dHsrqIDuF+VuhdmtqnTt/JHnko7zuRJllNA+VJtSj1RqIFrmkzpDuEoArDfsvvSFud3lPYkqfIaDQuSMTnj0j9f/Fbz4HgtLzIqBgKrUizK+OOyfVqf/K6S6EOBVgsOfnLrdEEK/Z8KRpzdfJPWlHKqU1ccqQHxnveVIsQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c4aaf146-3b28-532f-5643-053e43425d0f@suse.com>
Date: Thu, 4 May 2023 08:38:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 1/2] restrict concept of pIRQ to x86
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
References: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
 <98f51b96-8a1c-7f33-b4d3-1744174df465@suse.com>
In-Reply-To: <98f51b96-8a1c-7f33-b4d3-1744174df465@suse.com>

On 03.05.2023 17:33, Jan Beulich wrote:
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -683,11 +683,13 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          unsigned int pirq = op->u.irq_permission.pirq, irq;
>          int allow = op->u.irq_permission.allow_access;
>  
> +#ifdef CONFIG_HAS_PIRQ
>          if ( pirq >= current->domain->nr_pirqs )
>          {
>              ret = -EINVAL;
>              break;
>          }
> +#endif
>          irq = pirq_access_permitted(current->domain, pirq);
>          if ( !irq || xsm_irq_permission(XSM_HOOK, d, irq, allow) )
>              ret = -EPERM;

It has occurred to me that instead of adding the #ifdef here, we could
simply drop that if(), so that out-of-range values also hit the -EPERM
path. Would that (a) be acceptable and (b) perhaps even be preferred,
as it is less code and (more importantly) less #ifdef-ary?
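For illustration only, the two alternatives can be sketched in standalone C. Everything here is a made-up stand-in, not the actual Xen code: NR_PIRQS models current->domain->nr_pirqs, and the lookup models pirq_access_permitted() returning 0 for unmapped pIRQs.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for current->domain->nr_pirqs. */
#define NR_PIRQS 64

/* Stand-in for pirq_access_permitted(): returns the mapped IRQ for a
 * pIRQ, or 0 if none.  Here only pIRQs below NR_PIRQS/2 are mapped. */
static unsigned int pirq_access_permitted(unsigned int pirq)
{
    return (pirq < NR_PIRQS / 2) ? pirq + 1 : 0;
}

/* Variant A: explicit range check, as in the patch (guarded by
 * #ifdef CONFIG_HAS_PIRQ there). */
static int irq_permission_a(unsigned int pirq)
{
    if (pirq >= NR_PIRQS)
        return -EINVAL;
    if (!pirq_access_permitted(pirq))
        return -EPERM;
    return 0;
}

/* Variant B: drop the range check; an out-of-range pIRQ simply fails
 * the lookup and takes the -EPERM path instead of -EINVAL. */
static int irq_permission_b(unsigned int pirq)
{
    if (!pirq_access_permitted(pirq))
        return -EPERM;
    return 0;
}
```

The observable difference is only the error code reported for out-of-range values, which is the trade-off the question is about.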

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 07:08:28 2023
Message-ID: <ccf68f0f-6fd7-a9a4-ef72-03999d11035a@suse.com>
Date: Thu, 4 May 2023 09:08:00 +0200
Subject: Re: [PATCH] x86/ucode: Refresh raw CPU policy after microcode load
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230503185813.3050382-1-andrew.cooper3@citrix.com>

On 03.05.2023 20:58, Andrew Cooper wrote:
> Loading microcode can cause new features to appear.

Or disappear (LWP)? While I don't think we want to panic() in this
case (we do on the S3 resume path when recheck_cpu_features() fails
on the BSP), it seems to me that we should go a step further than
the patch does and at least warn when a feature has disappeared. Or
is that too much of a scope-creeping request right here?
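The suggested warning could be sketched roughly as below. This is purely illustrative C, not Xen code: the name warn_lost_features() is invented, and a single 64-bit bitmap stands in for the featureset arrays.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: after a microcode load, diff the feature bitmap
 * recorded beforehand against the freshly re-read one and warn about
 * any bits that vanished.  Returns the lost bits. */
static uint64_t warn_lost_features(uint64_t before, uint64_t after)
{
    uint64_t lost = before & ~after;

    if (lost)
        printf("WARNING: features lost after microcode load: %#llx\n",
               (unsigned long long)lost);
    return lost;
}
```

Newly appearing bits (after & ~before) would be the expected, benign case and need no warning.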

> @@ -677,6 +678,9 @@ static long cf_check microcode_update_helper(void *data)
>          spin_lock(&microcode_mutex);
>          microcode_update_cache(patch);
>          spin_unlock(&microcode_mutex);
> +
> +        /* Refresh the raw CPU policy, in case the features have changed. */
> +        calculate_raw_cpu_policy();

I understand this is in line with what we do during boot, but there
and here I wonder whether this wouldn't better deal with possible
asymmetries (e.g. in case ucode loading failed on one of the CPUs),
along the lines of what we do near the end of identify_cpu() for
APs. (Unlike the question higher up, this is definitely only a
remark here, not something I'd consider dealing with right in this
change.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 07:27:33 2023
Message-ID: <a5aae4d8-ffe8-bde5-9f3a-d1489e31b0d2@amd.com>
Date: Thu, 4 May 2023 09:27:07 +0200
Subject: Re: [XEN v6 02/12] xen/arm: Typecast the DT values into paddr_t
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-3-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230428175543.11902-3-ayan.kumar.halder@amd.com>



On 28/04/2023 19:55, Ayan Kumar Halder wrote:
> 
> 
> The DT functions (dt_read_number(), device_tree_get_reg(), fdt_get_mem_rsv())
> currently accept or return 64-bit values.
> 
> In the future, when we support 32-bit physical addresses, these DT functions
> are expected to accept/return 32-bit or 64-bit values (depending on the width
> of the physical address). Also, we wish to detect whether any truncation has
> occurred (i.e. while parsing 32-bit physical addresses from 64-bit values read
> from the DT).
> 
> device_tree_get_reg() should now be able to return paddr_t. This is invoked by
> various callers to get DT address and size.
> 
> For fdt_get_mem_rsv(), we have introduced a wrapper named
> fdt_get_mem_rsv_paddr() which will invoke fdt_get_mem_rsv() and translate
> uint64_t to paddr_t. The reason being we cannot modify fdt_get_mem_rsv() as it
> has been imported from external source.
> 
> For dt_read_number(), we have also introduced a wrapper named dt_read_paddr()
> to read physical addresses. We chose not to modify the original
> function as it is used in places where it needs to specifically read 64-bit
> values from dt (For e.g. dt_property_read_u64()).
> 
> Xen prints a warning when it detects truncation in cases where it is not able
> to return an error.
> 
> Also, replaced u32/u64 with uint32_t/uint64_t in the functions touched
> by the code changes.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
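A truncation-detecting wrapper of the kind the commit message describes could look roughly like this. It is a standalone sketch, not the actual Xen code: dt_read_paddr_sketch() is an invented name, and paddr_t is modeled here as a 32-bit type to represent a 32-bit physical-address build.

```c
#include <stdint.h>
#include <stdio.h>

/* Model a 32-bit physical-address configuration: a 64-bit value read
 * from the DT may not fit into paddr_t. */
typedef uint32_t paddr_t;

/* Hypothetical wrapper: narrow a 64-bit DT value to paddr_t and warn
 * if the value does not survive the round trip. */
static paddr_t dt_read_paddr_sketch(uint64_t val)
{
    paddr_t addr = (paddr_t)val;

    if ((uint64_t)addr != val)
        printf("WARNING: DT value %#llx truncated to %#lx\n",
               (unsigned long long)val, (unsigned long)addr);
    return addr;
}
```

On a build where paddr_t is 64 bits wide, the comparison is always true and the check compiles away to nothing.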



From xen-devel-bounces@lists.xenproject.org Thu May 04 07:29:04 2023
Message-ID: <b97e5159-7419-625d-d1e8-fc00c553a9dc@suse.com>
Date: Thu, 4 May 2023 09:28:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v4 12/13] tools/xenstore: use generic accounting for
 remaining quotas
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-13-jgross@suse.com>
 <e4f8a0e6-7a4c-3193-ce38-e43891f063ed@xen.org>
 <da3b9daf-9358-2af8-edc3-4f74f9cc0c55@suse.com>
In-Reply-To: <da3b9daf-9358-2af8-edc3-4f74f9cc0c55@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------omy7s3TeOwCTo65ysfZS0svu"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------omy7s3TeOwCTo65ysfZS0svu
Content-Type: multipart/mixed; boundary="------------krxl8VYciXw4aVNwHwjYgAy3";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <b97e5159-7419-625d-d1e8-fc00c553a9dc@suse.com>
Subject: Re: [PATCH v4 12/13] tools/xenstore: use generic accounting for
 remaining quotas
References: <20230405070349.25293-1-jgross@suse.com>
 <20230405070349.25293-13-jgross@suse.com>
 <e4f8a0e6-7a4c-3193-ce38-e43891f063ed@xen.org>
 <da3b9daf-9358-2af8-edc3-4f74f9cc0c55@suse.com>
In-Reply-To: <da3b9daf-9358-2af8-edc3-4f74f9cc0c55@suse.com>

--------------krxl8VYciXw4aVNwHwjYgAy3
Content-Type: multipart/mixed; boundary="------------Al8MzfbhTp0V99nba1sGjqRy"

--------------Al8MzfbhTp0V99nba1sGjqRy
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.05.23 16:39, Juergen Gross wrote:
> On 03.05.23 12:18, Julien Grall wrote:
>> On 05/04/2023 08:03, Juergen Gross wrote:
>>> +static void domain_acc_valid_max(struct domain *d, enum accitem what,
>>> +                                 unsigned int val)
>>> +{
>>> +    assert(what < ARRAY_SIZE(d->acc));
>>> +    assert(what < ARRAY_SIZE(acc_global_max));
>>> +
>>> +    if (val > d->acc[what].max)
>>> +        d->acc[what].max = val;
>>> +    if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
>>> +        acc_global_max[what] = val;
>>> +}
>>> +
>>>  static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
>>>  {
>>>      unsigned int val;
>>> -    assert(what < ARRAY_SIZE(d->acc));
>>
>> I think this assert should be kept because...
>>
>>> -
>>>      if ((add < 0 && -add > d->acc[what].val) ||
>>
>> ... of this check. Otherwise, you would check that 'what' is within the bounds
>> after the use.
> 
> Okay.

Hmm, I'm no longer sure this is a good reason to duplicate the assert().

Following this reasoning I'd need to put it into even more functions. And an
assert() triggering a little bit late is no real problem, as it will abort
xenstored anyway.

Additionally with the global and the per-domain arrays now covering all
possible quotas, it would even be reasonable to drop the assert()s in
domain_acc_valid_max() completely.


Juergen
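The trade-off being discussed (one bounds assert in the shared helper versus a duplicated assert in each caller) can be modelled outside xenstored. A minimal self-contained sketch, with simplified and partly made-up details — the enum values, the domid_is_unprivileged() stub, and the -1 error return are illustrative, not the real xenstored code:

```c
#include <assert.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Hypothetical quota items; the real enum has more entries. */
enum accitem { ACC_NODES, ACC_WATCHES, ACC_N };

struct acc { unsigned int val; unsigned int max; };
struct domain { unsigned int domid; struct acc acc[ACC_N]; };

static unsigned int acc_global_max[ACC_N];

/* Stub: in Xen, dom0 is privileged. */
static int domid_is_unprivileged(unsigned int domid)
{
    return domid != 0;
}

/* The bounds asserts live only here; every caller that updates the
 * accounting arrays reaches this function, so the asserts still fire
 * (just "a little bit late", after the first array access). */
static void domain_acc_valid_max(struct domain *d, enum accitem what,
                                 unsigned int val)
{
    assert(what < ARRAY_SIZE(d->acc));
    assert(what < ARRAY_SIZE(acc_global_max));

    if (val > d->acc[what].max)
        d->acc[what].max = val;
    if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
        acc_global_max[what] = val;
}

/* No duplicated assert here: the underflow check uses d->acc[what]
 * before domain_acc_valid_max() validates 'what'. */
static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
{
    unsigned int val;

    if (add < 0 && (unsigned int)-add > d->acc[what].val)
        return -1;                       /* would underflow the counter */

    val = d->acc[what].val + add;
    d->acc[what].val = val;
    domain_acc_valid_max(d, what, val);  /* bounds checked here */
    return (int)val;
}
```

Since assert() aborts the process either way, the only difference between the two placements is how many array accesses happen before the abort, which is the point Juergen makes above.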
--------------Al8MzfbhTp0V99nba1sGjqRy--

--------------krxl8VYciXw4aVNwHwjYgAy3--

--------------omy7s3TeOwCTo65ysfZS0svu
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRTXrAFAwAAAAAACgkQsN6d1ii/Ey/c
YAf+MbaXkDJigZEt8y6Sh5KvAOMsl3S4KOLaM4KZgsGjgMA5S5O/tMI3FV7z97sF8UzOZ0naj7UH
eBvjq+SCXbL4fXN4k812h3Nn2hSfEGugcsl9XWgBh+CFCUJS7Jf1KQ3wS1QF1/zgBjom/Kari62c
EMVQaJVp0S+9QNDGlT+19mZFocfPxQerlUTp5+xr0kf9VBeMrijEK6i86VOmKyFg2K6ZkzRMmKTg
KcDjmv8e8xgcU8QIZiwnftYZj7qE0wnFfuwf0hFFNnn+n7EhkM4bgUlMDxr22/3OpMmMRF1hGNFG
B27d5HJ8qQz43E6Ha+IKnRogYxEPS2uhVMHCXp0dKw==
=umYT
-----END PGP SIGNATURE-----

--------------omy7s3TeOwCTo65ysfZS0svu--


From xen-devel-bounces@lists.xenproject.org Thu May 04 07:30:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 07:30:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529581.824103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puTQ4-0001Gz-AG; Thu, 04 May 2023 07:30:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529581.824103; Thu, 04 May 2023 07:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puTQ4-0001Gs-7B; Thu, 04 May 2023 07:30:28 +0000
Received: by outflank-mailman (input) for mailman id 529581;
 Thu, 04 May 2023 07:30:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evI0=AZ=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1puTQ2-0001Gf-Og
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 07:30:26 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20603.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::603])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 87f718a7-ea4d-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 09:30:24 +0200 (CEST)
Received: from DS7PR03CA0328.namprd03.prod.outlook.com (2603:10b6:8:2b::30) by
 PH0PR12MB5679.namprd12.prod.outlook.com (2603:10b6:510:14f::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.26; Thu, 4 May 2023 07:30:19 +0000
Received: from DM6NAM11FT044.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:2b:cafe::5d) by DS7PR03CA0328.outlook.office365.com
 (2603:10b6:8:2b::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 07:30:18 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT044.mail.protection.outlook.com (10.13.173.185) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.27 via Frontend Transport; Thu, 4 May 2023 07:30:18 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 02:30:17 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 02:30:17 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 4 May 2023 02:30:15 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87f718a7-ea4d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EV6gYA28E9DOfBjQk9A0XhceyM2C/8+TklkE1RLM354c9oyu/RYyEu9dp2epeIk/bMb2QzqGdQO7S6q4PCa25RP4IA6aQAUI7M4aZdMwrtFmF8FVDy9Nnm7nWarIYAcnpsIC5Wp9u9GQFupmxG8Yv15tKSLT2Rcdz/g4EsgQ6735/AghMkHQyq2kO1/WAGN7fd8YfmRHFCZoPaPZ+TekOSha3pNGXN17CPYj8bPRLiTjjC+UmBws0Vlq/NdvLYF45MtMVx/DFhQZP7tMVqGQ2n9lMS4F2C1FIiIBKuH23sQgvvL7+V/7IAoS8aA+YyzwrAVuXyxsA1zslkuJeg+6DA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lX6B3Pv3MVT8M+7HUeJq5+tI69DxXdfq+5n4o6/+Vt8=;
 b=gj43ETfFIwHvlSYgybRFHVrOXotPzPm7IF8tKPevEzzB3z8ohM82kdPkxvMydzv48qQaRpSwb4HZrIqmwNSZmV71IsfW1C5KVOgJofrrDUCjr4FdHRjTuJImf3vn3Rpt0fZxVPh+8Dm8jIJZQ0b344OmVH1hojyRcizAA3fAeWqZPP5LjYkHmSUm94JlQHYGgzd5Uv8kvGGXx0D+Bsp9dSgiZTDEVDGHRAMdu2lg46fgDy6MB/bhiP78CE04garUl1zUEdnvwxLzgs0yywnLFbrw4HFHTfZo86DJqXwl67Y6PG9/6YdhHf16lCfnnz5A7FaVzGq5yTQGM/nTcVZ3Ug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lX6B3Pv3MVT8M+7HUeJq5+tI69DxXdfq+5n4o6/+Vt8=;
 b=R1/wPWDfeusBSqWi7QVkXEIm5IylL6qXln2gxbHwnQl1ndCp1Bn2KpkVe4TR16NApGoy+LUAwUqzFwOC/iPycuW3ve269o8VPEqgkOdrCPW46JKkycAmbtrSFAPiHff/jkS4Tz45KPwQ1i3hLXG6Mh1NVBwsXUnRg708Ak2/DyU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <0c947418-9551-c5cc-7fa3-8569f08d4c25@amd.com>
Date: Thu, 4 May 2023 09:30:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN v6 03/12] xen/arm: Introduce a wrapper for
 dt_device_get_address() to handle paddr_t
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-4-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230428175543.11902-4-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT044:EE_|PH0PR12MB5679:EE_
X-MS-Office365-Filtering-Correlation-Id: 80876131-7851-4e06-7add-08db4c71694f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	z222BoF3KjJwJw1ysBbUfXFxGGWFDJ4lUjajOyxc5UjbCOmkYJWFcyz/qm75hpYbOIH2EBU1lhsbchWOLuslSCc6vYzZfTO5UjAB8Ya/ODsPOsr2eyi3qZlz5LZjbvml65If25KStByTtjswvvnFdRLWT7xu3fqWUrtadZUhoBPBpenQ8xN1CCNnHOdedUx1zhRMczociE0ueTj6yvb76OzhxfgqpWl46NJVdUFYzDxeGOopKDHaj6UcJlZZz8CPMlJDDOm7r7IvtCCdz5wT55KyAG1SseXceSpvft3sH1Nl8NjaCKao4XzzKcZHztRIwciAAlCW5ahtW/C1BO3wJZWZ4zgT05OICD38Z7wa3Q3QfhKYUTzPhW8tLvArn6SpURpKOnTgFb64ns41n42fM/9OY2CxgY6IqEvjxAHqJarHwP9qZsgbryWjsIm6OxlBBjzGK5YMwPEY5SUeNF8kb8Z+Q2fS+rvS/1CqmSWe1genr20vSMAT7mv7UFsko1vuN17MgBWMB7ZxlDBDTJJ/kM62bZB0CkK1MugMWx/xZmOgH/Yl66ejjFPJhzUIA3HmFDaa73q5Hrddt0uO32XYAdBWuEs48Y/8kbesPCP0IdfUq6kpxlz9TrkYNhx95vzVkeV6SVaTl3DJ5W7/TdbTJNcH98E8vE7QFyWr/QwM5kGK2VJ6qwC8jylkZRdXZmlQjApi3HAphAS4A44xUq7XT8v3RCgFnqBWu/d7gvCAy1M5Y9Ee8eSb+J/EqL5FuKV3pyaqjrl0OrNEAUC7j2QOSQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(39860400002)(396003)(376002)(451199021)(46966006)(36840700001)(40470700004)(36756003)(4326008)(70586007)(426003)(336012)(70206006)(53546011)(26005)(186003)(110136005)(2906002)(54906003)(16576012)(31686004)(478600001)(40460700003)(2616005)(81166007)(82740400003)(356005)(8676002)(8936002)(40480700001)(44832011)(31696002)(6666004)(86362001)(41300700001)(82310400005)(7416002)(5660300002)(47076005)(316002)(36860700001)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 07:30:18.4107
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 80876131-7851-4e06-7add-08db4c71694f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT044.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB5679


On 28/04/2023 19:55, Ayan Kumar Halder wrote:
> 
> 
> dt_device_get_address() can accept uint64_t only for address and size.
> However, the address/size denotes physical addresses. Thus, they should
> be represented by 'paddr_t'.
> Consequently, we introduce a wrapper for dt_device_get_address(), i.e.
> dt_device_get_paddr(), which accepts address/size as paddr_t and in turn
> invokes dt_device_get_address() after converting address/size to
> uint64_t.
> 
> The reason for introducing this is that in future 'paddr_t' may not
> always be 64-bit. Thus, we need an explicit wrapper to do the type
> conversion and return an error in case of truncation.
> 
> With this, callers can now invoke dt_device_get_paddr().
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 6c9712ab7b..2163cf26d0 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -955,6 +955,45 @@ int dt_device_get_address(const struct dt_device_node *dev, unsigned int index,
>      return 0;
>  }
> 
> +int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
> +                        paddr_t *addr, paddr_t *size)
> +{
> +    uint64_t dt_addr, dt_size;
> +    int ret;
> +
> +    ret = dt_device_get_address(dev, index, &dt_addr, &dt_size);
> +    if ( ret )
> +        return ret;
> +
> +    if ( !addr )
Because of this ...

> +        return -EINVAL;
> +
> +    if ( addr )
you should drop this.

> +    {
> +        if ( dt_addr != (paddr_t)dt_addr )
> +        {
> +            printk("Error: Physical address 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
> +                   dt_addr, dev->name, sizeof(paddr_t));
> +            return -ERANGE;
> +        }
> +
> +        *addr = dt_addr;
> +    }
> +
> +    if ( size )
> +    {
> +        if ( dt_size != (paddr_t)dt_size )
> +        {
> +            printk("Error: Physical size 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
> +                   dt_size, dev->name, sizeof(paddr_t));
> +            return -ERANGE;
> +        }
> +
> +        *size = dt_size;
> +    }
> +
> +    return ret;
> +}
> 
>  int dt_for_each_range(const struct dt_device_node *dev,
>                        int (*cb)(const struct dt_device_node *,
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 5f8f61aec8..ce25b89c4b 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -585,6 +585,19 @@ int dt_find_node_by_gpath(XEN_GUEST_HANDLE(char) u_path, uint32_t u_plen,
>   */
>  const struct dt_device_node *dt_get_parent(const struct dt_device_node *node);
> 
> +/**
> + * dt_device_get_paddr - Resolve an address for a device
> + * @device: the device whose address is to be resolved
> + * @index: index of the address to resolve
> + * @addr: address filled by this function
> + * @size: size filled by this function
> + *
> + * This function resolves an address, walking the tree, for a give
s/give/given

Other than that:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
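The truncation check in the quoted patch — comparing the wide value against itself after a narrowing cast — works for any width of paddr_t. A standalone sketch of just that pattern, assuming a hypothetical 32-bit paddr_t (check_paddr() and the -1 error value are made up for illustration; the real function returns -ERANGE):

```c
#include <assert.h>
#include <stdint.h>

/* Pretend paddr_t is 32-bit, as it may be on future Arm configurations. */
typedef uint32_t paddr_t;

/* Returns 0 on success, -1 if the 64-bit DT value does not fit. */
static int check_paddr(uint64_t dt_addr, paddr_t *addr)
{
    /* The cast truncates; if the round-trip changes the value,
     * the address is wider than paddr_t can represent. */
    if (dt_addr != (paddr_t)dt_addr)
        return -1;

    *addr = (paddr_t)dt_addr;
    return 0;
}
```

With a 64-bit paddr_t the cast is a no-op and the branch is never taken, so the wrapper costs nothing on current builds.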


From xen-devel-bounces@lists.xenproject.org Thu May 04 07:37:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 07:37:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529586.824113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puTWb-00020B-5O; Thu, 04 May 2023 07:37:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529586.824113; Thu, 04 May 2023 07:37:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puTWb-000204-2j; Thu, 04 May 2023 07:37:13 +0000
Received: by outflank-mailman (input) for mailman id 529586;
 Thu, 04 May 2023 07:37:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evI0=AZ=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1puTWa-0001zw-1W
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 07:37:12 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2062a.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7a1026a6-ea4e-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 09:37:10 +0200 (CEST)
Received: from BN0PR02CA0034.namprd02.prod.outlook.com (2603:10b6:408:e5::9)
 by SA1PR12MB8844.namprd12.prod.outlook.com (2603:10b6:806:378::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 07:37:07 +0000
Received: from BN8NAM11FT096.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e5:cafe::53) by BN0PR02CA0034.outlook.office365.com
 (2603:10b6:408:e5::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 07:37:07 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT096.mail.protection.outlook.com (10.13.177.195) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.26 via Frontend Transport; Thu, 4 May 2023 07:37:07 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 02:37:06 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 00:37:06 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 4 May 2023 02:37:03 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a1026a6-ea4e-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oHvF1C89f/2TW8wdWUOmYH+s2bG6TZ7D7X8dwIDxQ+aSmUZuthmJJ03ou9ro14b+raEtgwzVI8eqlPZncaOmaE55qDApPZ9EuRluFIfq1qlRG04UgYpHe0cbqLVQOd7ViMJp4hdPMqvBshjfIpJPIchQNbtrNoQcICDLxfORTikDn4GBpk+o3UJliO0TqXnYxHC44q+jduuRi12LCQSnQYzJtnngb+L/gcS1hl5hHXLFiquvLmxZj4vBsajbM+v9zX/8jD1BOkOjl/XDRfEYlzGyYtUJnyKKdHd1nZP7GFYymt+k2zDl4UuWq3xn9SOMYItnoG+1poUVt/R2slUezw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Hu6RIZv7TDT51Iy6tIM1pYoLNuxak+6DMHpCl2kjCLY=;
 b=QKsOihvQnjv25Wai3rsM7/2ZNQgmS0ruZbbp7QQw3r7Jy9Vgonf7Tlr3VhjS9vwg1nSi59UQe9VveOgYKreNtu4UQtZ1+Apa5hPFeTsyvXzjhcDv+EcKknkaMmejMUxNQOTQazTo/JfDpwEQ1ELdkRUoJQYxRmsEPt3YLhNKbLi9qjjugU5PD+q/2xuHnCcpIeD4CMw7Js842c+zAD0d+8Pya1b2lV7WzHhe/S7PfznS8VRaXCifr3gWNhuljaZhIQdxKfFi8EajSf5ktGd6t19cmxYoS3drRJ/4IYCCoW5dn+aJtXXaPpIPHt1pT+p4Oc2cwq6JH0ks2GNrMTX0fA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hu6RIZv7TDT51Iy6tIM1pYoLNuxak+6DMHpCl2kjCLY=;
 b=VtJWsARv31OXPw9ZAHCx0sdouzVKXFjIcL6CFXVyDOwl0bnZGlAppTLWU3WPpNXhRlh/jwReC5gn08TgkTj99CBHPEIvzny84xlssQzk4bLmsJKa+0U1mFCqFcJHp+sJTVOCSopZ4JjyX287GdFukXZrnKAdPaawaZhD+sw2aHU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <c8021f4b-1607-23df-803b-ac162d9d4324@amd.com>
Date: Thu, 4 May 2023 09:37:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN v6 04/12] xen/arm: smmu: Use writeq_relaxed_non_atomic() for
 writing to SMMU_CBn_TTBR0
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-5-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230428175543.11902-5-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT096:EE_|SA1PR12MB8844:EE_
X-MS-Office365-Filtering-Correlation-Id: 603f5a7e-87af-46ff-d2db-08db4c725ce9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	p2snQRYJSJYpHFA1xT35VNyEWPn2EERHZfGw/vDbcIuwBE9TGdYbWxL3Zv9M9E41rCSG5M54v7uhB02qP22LDeMsnW/nIxt/pRUUH4Dtghlv2S4T4ycMegYkd+CqnSGcqhqGIL5CdOrDxbfn2fFsuP77TMjNkFgpojDNJMAVTu2OnCGkHmTRo5n/1PD218QxtMtMzUN/+MWH56Yjo5z72468XX9HBRdKa5S5oFl5sVoqBBavIAXTqsf77sEuz0oXRMYV+pKSVxnuOpyQGDkhd2EYzKlZobrelkZaNz5lZ2PGWLIpVpjphC12bq2T6aWphVRTUxJ/7KTCJim0NiB3fW0/37aibbnSsC0xH06zQMX/sBmd8SKnkEcavazSVkLBfrnm0YD/jHEtmIiFK2mEGEVxLacXwYHFEqD0RazhtV/DNg09F7XhAnALRwboQY8zfBFpEynn+IwrUYJPzFuC5CyLZ6V1MT7ZA7WV0A1LfUA2ekrQAgeAaVq5RsIaUl7sJLSDegiW3atVnVH9LkbFNM6egqDeqI5PS5KlmnOeNswL5DnO5Zq3pN20gPrexQ0HoDEKEfbeZPCzFgwMVk85o/lUwFGB3SDES5eX4MGPYkwYMNkcoG01RP31VjcMGgXyZiKz598FQfx91ze5S2zYziz9SxqBx51zx6jcX3SG9DscAas3Wok/kvhHzWLyWToSo0Co+cckXwIXFnA7bGM9A7yS9aYgWIqB43GTBJMPvxVph0zgOjdDb6vyq/wfSIyaEo5knxwAOB0XiM/sQvFxfQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(376002)(396003)(39860400002)(346002)(451199021)(36840700001)(40470700004)(46966006)(356005)(8676002)(53546011)(40480700001)(36756003)(26005)(336012)(8936002)(2616005)(426003)(7416002)(5660300002)(16576012)(316002)(110136005)(54906003)(70586007)(478600001)(4326008)(186003)(70206006)(41300700001)(81166007)(31696002)(86362001)(82740400003)(36860700001)(47076005)(83380400001)(44832011)(2906002)(82310400005)(40460700003)(31686004)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 07:37:07.1398
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 603f5a7e-87af-46ff-d2db-08db4c725ce9
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT096.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB8844



On 28/04/2023 19:55, Ayan Kumar Halder wrote:
> 
> 
> Refer ARM IHI 0062D.c ID070116 (SMMU 2.0 spec), 17-360, 17.3.9,
> SMMU_CBn_TTBR0 is a 64 bit register. Thus, one can use
> writeq_relaxed_non_atomic() to write to it instead of invoking
> writel_relaxed() twice for lower half and upper half of the register.
> 
> This also helps us as p2maddr is 'paddr_t' (which may be u32 in future).
> Thus, one can assign p2maddr to a 64 bit register and do the bit
> manipulations on it, to generate the value for SMMU_CBn_TTBR0.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> Changes from -
> 
> v1 - 1. Extracted the patch from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
> Use writeq_relaxed_non_atomic() to write u64 register in a non-atomic
> fashion.
> 
> v2 - 1. Added R-b.
> 
> v3 - 1. No changes.
> 
> v4 - 1. Reordered the R-b. No further changes.
> (This patch can be committed independent of the series).
> 
> v5 - Used 'uint64_t' instead of u64. As the change looked trivial to me, I
> retained the R-b.
> 
>  xen/drivers/passthrough/arm/smmu.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 79281075ba..fb8bef5f69 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -499,8 +499,7 @@ enum arm_smmu_s2cr_privcfg {
>  #define ARM_SMMU_CB_SCTLR              0x0
>  #define ARM_SMMU_CB_RESUME             0x8
>  #define ARM_SMMU_CB_TTBCR2             0x10
> -#define ARM_SMMU_CB_TTBR0_LO           0x20
> -#define ARM_SMMU_CB_TTBR0_HI           0x24
> +#define ARM_SMMU_CB_TTBR0              0x20
>  #define ARM_SMMU_CB_TTBCR              0x30
>  #define ARM_SMMU_CB_S1_MAIR0           0x38
>  #define ARM_SMMU_CB_FSR                        0x58
> @@ -1083,6 +1082,7 @@ static void arm_smmu_flush_pgtable(struct arm_smmu_device *smmu, void *addr,
>  static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>  {
>         u32 reg;
> +       uint64_t reg64;
>         bool stage1;
>         struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
>         struct arm_smmu_device *smmu = smmu_domain->smmu;
> @@ -1177,12 +1177,13 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>         dev_notice(smmu->dev, "d%u: p2maddr 0x%"PRIpaddr"\n",
>                    smmu_domain->cfg.domain->domain_id, p2maddr);
> 
> -       reg = (p2maddr & ((1ULL << 32) - 1));
> -       writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_LO);
> -       reg = (p2maddr >> 32);
> +       reg64 = p2maddr;
> +
>         if (stage1)
> -               reg |= ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT;
> -       writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_HI);
> +               reg64 |= (((uint64_t) (ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT))
> +                        << 32);
I think the '<<' should be aligned with the second '(' above.

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Thu May 04 07:44:33 2023
Message-ID: <c5f2ee35-0f5e-da04-9a28-aba49d2aba29@suse.com>
Date: Thu, 4 May 2023 09:44:03 +0200
Subject: Re: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest
 areas
Content-Language: en-US
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
 <a79d2a8b-6d6e-bd31-b079-a30b555e5fd0@suse.com>
 <CABfawhn4CRnctzV-17di4eYyNhSGTSMckZjgphS1Rg6HUGOtHw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CABfawhn4CRnctzV-17di4eYyNhSGTSMckZjgphS1Rg6HUGOtHw@mail.gmail.com>

On 03.05.2023 19:14, Tamas K Lengyel wrote:
>> @@ -1974,7 +2046,10 @@ int mem_sharing_fork_reset(struct domain
>>
>>   state:
>>      if ( reset_state )
>> +    {
>>          rc = copy_settings(d, pd);
>> +        /* TBD: What to do here with -ERESTART? */
> 
> Ideally we could avoid hitting code paths that are restartable during
> fork reset, since it gets called from vm_event replies that have no
> concept of handling errors. If we start having errors like this, we
> would just have to drop the vm_event reply optimization and issue a
> standalone fork-reset hypercall every time, which isn't a big deal;
> it's just slower.

I'm afraid I don't follow: We are in the process of fork-reset here. How
would issuing "a standalone fork reset hypercall every time" make this
any different? The possible need for a continuation here comes from a
failed spin_trylock() in map_guest_area(). That won't change the next
time round.

But perhaps I should say that till now I didn't even pay much attention
to the 2nd use of the function by vm_event_resume(); I was mainly
focused on the one from XENMEM_sharing_op_fork_reset, where no
continuation handling exists. Yet perhaps your comment is mainly
related to that use?

I actually notice that the comment ahead of the function already has a
continuation-related TODO, though there the concern is only the larger
memory footprint.

> My
> preference would actually be that after the initial forking is performed a
> local copy of the parent's state is maintained for the fork to reset to so
> there would be no case of hitting an error like this. It would also allow
> us in the future to unpause the parent for example..

Oh, I guess I didn't realize the parent was kept paused. Such state
copying / caching may then indeed be a possibility, but that's nothing
I can see myself dealing with, even less so in the context of this
series. I need a solution to the problem at hand within the scope of
what is there right now (or based on what could be provided, e.g. by
you, within the foreseeable future). Bubbling up the need for a
continuation from the XENMEM_sharing_op_fork_reset path is the most I
could see myself handling ... For vm_event_resume(), bubbling state up
the domctl path _may_ also be doable, but mem_sharing_notification()
and friends don't even check the function's return value.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 07:44:37 2023
Date: Thu, 4 May 2023 09:44:11 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Subject: Re: [PATCH v2 1/2] restrict concept of pIRQ to x86
Message-ID: <ZFNiS8oxfozlxCz6@Air-de-Roger>
References: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
 <98f51b96-8a1c-7f33-b4d3-1744174df465@suse.com>
In-Reply-To: <98f51b96-8a1c-7f33-b4d3-1744174df465@suse.com>

On Wed, May 03, 2023 at 05:33:05PM +0200, Jan Beulich wrote:
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -438,12 +438,14 @@ struct domain
>  
>      struct grant_table *grant_table;
>  
> +#ifdef CONFIG_HAS_PIRQ
>      /*
>       * Interrupt to event-channel mappings and other per-guest-pirq data.
>       * Protected by the domain's event-channel spinlock.
>       */
>      struct radix_tree_root pirq_tree;
>      unsigned int     nr_pirqs;
> +#endif

Wouldn't it be cleaner to just move this into arch_domain and avoid a
bunch of the ifdeffery? The initialization of the fields would then
also move to arch_domain_create().

Maybe we would want to introduce some kind of arch-specific event
channel handler, so that the bind-pIRQ hypercall could be handled
there?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 04 07:50:46 2023
Message-ID: <2a46c7df-b380-cc41-5582-70b4829d7f47@suse.com>
Date: Thu, 4 May 2023 09:50:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 1/2] restrict concept of pIRQ to x86
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
References: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
 <98f51b96-8a1c-7f33-b4d3-1744174df465@suse.com>
 <ZFNiS8oxfozlxCz6@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFNiS8oxfozlxCz6@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0055.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9708:EE_
X-MS-Office365-Filtering-Correlation-Id: 99aebccb-6191-4677-00f4-08db4c743b87
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 99aebccb-6191-4677-00f4-08db4c743b87
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 07:50:30.3616
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Xwm4pV/z8hQ2Tit/87I8FkCn20OZ0OCNW4SbAg+34leMZuw12P9b0rcnu4QOYac0enm9RLI8vqIcPxfaNP11Jw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9708

On 04.05.2023 09:44, Roger Pau Monné wrote:
> On Wed, May 03, 2023 at 05:33:05PM +0200, Jan Beulich wrote:
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -438,12 +438,14 @@ struct domain
>>  
>>      struct grant_table *grant_table;
>>  
>> +#ifdef CONFIG_HAS_PIRQ
>>      /*
>>       * Interrupt to event-channel mappings and other per-guest-pirq data.
>>       * Protected by the domain's event-channel spinlock.
>>       */
>>      struct radix_tree_root pirq_tree;
>>      unsigned int     nr_pirqs;
>> +#endif
> 
> Won't it be cleaner to just move this into arch_domain and avoid a
> bunch of the ifdefary? As the initialization of the fields would be
> moved to arch_domain_create() also.

That's hard to decide without knowing what e.g. RISC-V is going to
want. Taking (past) IA-64 into consideration - that would likely
have wanted to select this new HAS_PIRQ, and hence keeping these
pieces where they are imo makes sense. I did actually consider that
alternative, albeit just briefly. If that ...

> Maybe we would want to introduce some kind of arch-specific event
> channel handler so that the bind PIRQ hypercall could be handled
> there?

... and hence this was the route to take, I suppose I would simply
drop this patch and revert the 2nd one to what it was before (merely
addressing the review comment on Arm's arch_hwdom_irqs()). That's
simply more intrusive a change than I'm willing to make right here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 08:08:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 08:08:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529604.824154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puU0I-0007lu-OS; Thu, 04 May 2023 08:07:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529604.824154; Thu, 04 May 2023 08:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puU0I-0007ln-LT; Thu, 04 May 2023 08:07:54 +0000
Received: by outflank-mailman (input) for mailman id 529604;
 Thu, 04 May 2023 08:07:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puU0H-0007ld-9X; Thu, 04 May 2023 08:07:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puU0G-0007yx-W7; Thu, 04 May 2023 08:07:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puU0G-0001It-EE; Thu, 04 May 2023 08:07:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puU0G-0006JM-De; Thu, 04 May 2023 08:07:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sBWiGdTjO6+aD21Q2L1qj43QH/HzVELulFslG71XckQ=; b=6qTldF87GIH4LXRvQTRJxUnz5S
	r6Rn5PGSqZuP3E6fbv6yIncIAk7hT/QRQMcymiyvd3a6Q83VVJhlM1NY5COAkRFBqfmUTfB7WvTx/
	kfmHfKdgAJ3IvMZ4yYmAXb/1yVnj1+hiafKOi7PhwS6vN06tvwcoCC4lidBa+o+li9zs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180519-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180519: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0956aa2219745a198bb6a0a99e2108a3c09b280e
X-Osstest-Versions-That:
    xen=b033eddc9779109c06a26936321d27a2ef4e088b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 May 2023 08:07:52 +0000

flight 180519 xen-unstable real [real]
flight 180526 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180519/
http://logs.test-lab.xenproject.org/osstest/logs/180526/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 180511

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180511
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180511
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180511
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180511
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180511
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180511
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180511
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180511
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180511
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180511
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180511
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180511
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  0956aa2219745a198bb6a0a99e2108a3c09b280e
baseline version:
 xen                  b033eddc9779109c06a26936321d27a2ef4e088b

Last test of basis   180511  2023-05-03 01:53:22 Z    1 days
Testing same since   180519  2023-05-03 15:38:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Henry Wang <Henry.Wang@arm.com> # CHANGELOG
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Viresh Kumar <viresh.kumar@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0956aa2219745a198bb6a0a99e2108a3c09b280e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed May 3 13:38:30 2023 +0200

    x86/mm: replace bogus assertion in paging_log_dirty_op()
    
    While I was the one to introduce it, I don't think it is correct: A
    bogus continuation call issued by a tool stack domain may find another
    continuation in progress. IOW we've been asserting caller controlled
    state (which is reachable only via a domctl), and the early (lock-less)
    check in paging_domctl() helps in a limited way only.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit eaa324bfebcf17333d993b74024901701e0e2162
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed May 3 13:37:19 2023 +0200

    x86/trampoline: load the GDT located in the trampoline page
    
    When booting the BSP, the portion of the code executed from the
    trampoline page will be using the GDT located in the hypervisor
    .text.head section rather than the GDT located in the relocated
    trampoline page.
    
    If skip_realmode is not set, the GDT located in the trampoline page
    will be loaded after the BIOS call has been executed; otherwise the
    GDT from .text.head will be used for all of the protected mode
    trampoline code execution.
    
    Note that both gdt_boot_descr and gdt_48 contain the same entries, but
    the former is located inside the hypervisor .text section, while the
    latter lives in the relocated trampoline page.
    
    This is not harmful as-is, as both GDTs contain the same entries, but
    for consistency with the APs, switch the BSP trampoline code to also
    use the GDT on the relocated trampoline page.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 0946068e7faea22868c577d7afa54ba4970ff520
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed May 3 13:36:25 2023 +0200

    x86/head: check base address alignment
    
    Ensure that the base address is 2M aligned, or else the page table
    entries created would be corrupt as reserved bits on the PDE end up
    set.
    
    We have encountered broken firmware where grub2 would end up loading
    Xen at a non-2M-aligned region when using the multiboot2 protocol,
    which caused a very difficult to debug triple fault.
    
    If the alignment is not as required by the page tables, print an error
    message and stop the boot.  Also add a build time check that the
    calculation of symbol offsets doesn't break the alignment of passed
    addresses.
    
    The check could be performed earlier, but so far the alignment is only
    required by the page tables, and hence it feels more natural for the
    check to live near the piece of code that requires it.
    
    Note that when booted as an EFI application from the PE entry point
    the alignment check is already performed by
    efi_arch_load_addr_check(), and hence there's no need to add another
    check at the point where page tables get built in
    efi_arch_memory_setup().
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
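The 2M constraint described in this commit can be illustrated with a minimal standalone predicate (a hypothetical sketch, not the actual Xen code; `base_is_2m_aligned` is an invented name):

```c
#include <stdbool.h>
#include <stdint.h>

/* A 2M superpage PDE stores the frame starting at bit 21, so a base with
 * any bit below bit 21 set would spill into flag/reserved bits of the
 * entry, corrupting it. */
#define ALIGN_2M ((uint64_t)1 << 21)

static bool base_is_2m_aligned(uint64_t base)
{
    return (base & (ALIGN_2M - 1)) == 0;
}
```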

commit 19c6cbd90965b1440bd551069373d6fa3f2f365d
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed May 3 13:36:05 2023 +0200

    xen/vcpu: ignore VCPU_SSHOTTMR_future
    
    The usage of VCPU_SSHOTTMR_future in Linux prior to 4.7 is bogus.
    When the hypervisor returns -ETIME (timeout in the past), Linux keeps
    retrying to set up the timer with a higher timeout instead of
    self-injecting a timer interrupt.
    
    On boxes without any hardware assistance for logdirty we have seen HVM
    Linux guests < 4.7 with 32 vCPUs give up trying to set up the timer
    when logdirty is enabled:
    
    CE: Reprogramming failure. Giving up
    CE: xen increased min_delta_ns to 1000000 nsec
    CE: Reprogramming failure. Giving up
    CE: Reprogramming failure. Giving up
    CE: xen increased min_delta_ns to 506250 nsec
    CE: xen increased min_delta_ns to 759375 nsec
    CE: xen increased min_delta_ns to 1000000 nsec
    CE: Reprogramming failure. Giving up
    CE: Reprogramming failure. Giving up
    CE: Reprogramming failure. Giving up
    Freezing user space processes ...
    INFO: rcu_sched detected stalls on CPUs/tasks: { 14} (detected by 10, t=60002 jiffies, g=4006, c=4005, q=14130)
    Task dump for CPU 14:
    swapper/14      R  running task        0     0      1 0x00000000
    Call Trace:
     [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
     [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
     [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
     [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
     [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
     [<ffffffff900000d5>] ? start_cpu+0x5/0x14
    INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=6922, c=6921, q=7013)
    Task dump for CPU 26:
    swapper/26      R  running task        0     0      1 0x00000000
    Call Trace:
     [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
     [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
     [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
     [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
     [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
     [<ffffffff900000d5>] ? start_cpu+0x5/0x14
    INFO: rcu_sched detected stalls on CPUs/tasks: { 26} (detected by 24, t=60002 jiffies, g=8499, c=8498, q=7664)
    Task dump for CPU 26:
    swapper/26      R  running task        0     0      1 0x00000000
    Call Trace:
     [<ffffffff90160f5d>] ? rcu_eqs_enter_common.isra.30+0x3d/0xf0
     [<ffffffff907b9bde>] ? default_idle+0x1e/0xd0
     [<ffffffff90039570>] ? arch_cpu_idle+0x20/0xc0
     [<ffffffff9010820a>] ? cpu_startup_entry+0x14a/0x1e0
     [<ffffffff9005d3a7>] ? start_secondary+0x1f7/0x270
     [<ffffffff900000d5>] ? start_cpu+0x5/0x14
    
    This leads to CPU stalls and, as a result, a broken system.
    
    Work around this bogus usage by ignoring VCPU_SSHOTTMR_future in
    the hypervisor.  Old Linux versions are the only ones known to have
    (wrongly) attempted to use the flag, and ignoring it is compatible
    with the behavior expected by any guests setting that flag.
    
    Note the usage of the flag has been removed from Linux by commit:

    c06b6d70feb3 xen/x86: don't lose event interrupts

    which landed in Linux 4.7.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG
    Acked-by: Jan Beulich <jbeulich@suse.com>
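The behavioural change can be sketched with two toy functions (hypothetical standalone code, not the hypervisor's implementation; only the flag value matches xen/public/vcpu.h):

```c
#include <stdint.h>

#define ETIME 62                       /* "timer expired" error code */
#define VCPU_SSHOTTMR_future (1u << 0) /* require the timeout to be in the future */

/* Old behaviour: honour the flag and fail with -ETIME for a past timeout,
 * which pre-4.7 Linux then retried indefinitely. */
static int set_timer_old(uint64_t timeout, uint64_t now, uint32_t flags)
{
    if ((flags & VCPU_SSHOTTMR_future) && timeout < now)
        return -ETIME;
    return 0; /* timer armed */
}

/* New behaviour: the flag is ignored; a past timeout simply fires at once. */
static int set_timer_new(uint64_t timeout, uint64_t now, uint32_t flags)
{
    (void)timeout;
    (void)now;
    (void)flags;
    return 0; /* timer armed unconditionally */
}
```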

commit f8135d234a90777b4b6606a241605fb75f239530
Author: Viresh Kumar <viresh.kumar@linaro.org>
Date:   Wed May 3 13:35:40 2023 +0200

    docs: allow generic virtio device types to contain device-id
    
    For generic virtio devices, where we don't need to add compatible or
    other special DT properties, the type field is set to "virtio,device".
    
    But this misses the case where the user sets the type with a valid
    virtio device id appended, like "virtio,device1a" for a file system
    device.
    The complete list of virtio device ids is mentioned here:
    
    https://docs.oasis-open.org/virtio/virtio/v1.2/cs01/virtio-v1.2-cs01.html#x1-2160005
    
    Update documentation to support that as well.
    
    Fixes: dd54ea500be8 ("docs: add documentation for generic virtio devices")
    Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
    Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
(qemu changes not included)
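The accepted type strings can be sketched as a small validator (a hypothetical helper, not part of the actual toolstack; it assumes the optional suffix is a hex virtio device id, as in "virtio,device1a", where 0x1a = 26 is the file system device id in the virtio spec):

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

static bool is_generic_virtio_type(const char *s)
{
    static const char prefix[] = "virtio,device";
    size_t plen = sizeof(prefix) - 1;

    if (strncmp(s, prefix, plen) != 0)
        return false;
    /* An optional hex virtio device id may follow the prefix. */
    for (s += plen; *s; s++)
        if (!isxdigit((unsigned char)*s))
            return false;
    return true;
}
```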


From xen-devel-bounces@lists.xenproject.org Thu May 04 08:08:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 08:08:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529612.824164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puU14-0008Ke-7T; Thu, 04 May 2023 08:08:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529612.824164; Thu, 04 May 2023 08:08:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puU14-0008KX-4N; Thu, 04 May 2023 08:08:42 +0000
Received: by outflank-mailman (input) for mailman id 529612;
 Thu, 04 May 2023 08:08:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puU12-00089f-OY
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 08:08:40 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dee8e1d8-ea52-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 10:08:38 +0200 (CEST)
Received: from mail-dm6nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 May 2023 04:08:12 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CO3PR03MB6805.namprd03.prod.outlook.com (2603:10b6:303:164::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 08:08:10 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 08:08:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dee8e1d8-ea52-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683187718;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=rURzpHR8WrEvYtHltPsT9tGpCzCxO67WbyXQ7TTmH/I=;
  b=dkcprs1wlcHkl5+zAkccE6tG2TxoQtS893+rSDwpRn+xqVueZArtC4LJ
   AVdOl/oN7J9QgdEFD0aChsm+Docn9fnr4WQQEEKIavlKMAcQaUqpTdPsX
   K4/5oQM9PCxwcoU7uMNeiw12930BDx+/EAvY/B/o2h6b4m2zvbfapoNmX
   g=;
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <b655305f-293e-a0dc-ab39-36b0c9787433@citrix.com>
Date: Thu, 4 May 2023 09:08:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/ucode: Refresh raw CPU policy after microcode load
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
 <ccf68f0f-6fd7-a9a4-ef72-03999d11035a@suse.com>
In-Reply-To: <ccf68f0f-6fd7-a9a4-ef72-03999d11035a@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0068.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::32) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|CO3PR03MB6805:EE_
X-MS-Office365-Filtering-Correlation-Id: 5f82236e-b346-46ef-69ac-08db4c76b25a
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5f82236e-b346-46ef-69ac-08db4c76b25a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 08:08:08.8822
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9VakSZSl3l3PoCeB5zJTT2r12xwxWDDtQddsZxZFb0rKByuDrtb76PDMtELmLd5IHZJzIMzQJUrJF+KBFI52KXa+PLx571xGGLsW2XwCXx4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO3PR03MB6805

On 04/05/2023 8:08 am, Jan Beulich wrote:
> On 03.05.2023 20:58, Andrew Cooper wrote:
>> Loading microcode can cause new features to appear.
> Or disappear (LWP)? While I don't think we want to panic() in this
> case (we do on the S3 resume path when recheck_cpu_features() fails
> on the BSP), it would seem to me that we want to go a step further
> than you do and at least warn when a feature went amiss. Or is that
> too much of a scope-creeping request right here?

You're correct that I ought to discuss the disappear case.  But like
livepatching, it's firmly in the realm of "the end user takes
responsibility for trying this in their test system before running it in
production".

For LWP specifically, we ought to explicitly permit its disappearance in
recheck_cpu_features(), because this specific example is known to exist,
and known to be safe, as Xen never used or virtualised LWP functionality.
Crashing on S3

>
>> @@ -677,6 +678,9 @@ static long cf_check microcode_update_helper(void *data)
>>          spin_lock(&microcode_mutex);
>>          microcode_update_cache(patch);
>>          spin_unlock(&microcode_mutex);
>> +
>> +        /* Refresh the raw CPU policy, in case the features have changed. */
>> +        calculate_raw_cpu_policy();
> I understand this is in line with what we do during boot, but there
> and here I wonder whether this wouldn't better deal with possible
> asymmetries (e.g. in case ucode loading failed on one of the CPUs),
> along the lines of what we do near the end of identify_cpu() for
> APs. (Unlike the question higher up, this is definitely only a
> remark here, not something I'd consider dealing with right in this
> change.)

Asymmetry is an increasingly theoretical problem.  Yeah, it exists in
principle, but Xen has no way of letting you explicitly get into that
situation.

This too falls firmly into the "end user takes responsibility for
testing it properly first" category.

We have explicit symmetric assumptions/requirements elsewhere (e.g. for
a single system, there's 1 correct ucode blob).

We can acknowledge that asymmetry exists, but there is basically nothing
Xen can do about it other than highlight that something is very wrong on
the system.  Odds are that a system which gets into such a state won't
survive much longer.

~Andrew
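The "warn when a feature went amiss" idea raised in the quoted discussion could be sketched as a diff over feature bitmap leaves captured before and after the microcode load (hypothetical standalone code; `lost_features` and the flat array layout are invented for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Count (and report) feature bits present before a microcode load but
 * absent afterwards, e.g. LWP disappearing on AMD. */
static unsigned int lost_features(const uint32_t *before,
                                  const uint32_t *after, size_t nr)
{
    unsigned int lost = 0;

    for (size_t i = 0; i < nr; i++) {
        uint32_t gone = before[i] & ~after[i];

        if (gone) {
            fprintf(stderr, "leaf %zu: lost feature bits %#x\n",
                    i, (unsigned int)gone);
            lost += (unsigned int)__builtin_popcount(gone);
        }
    }
    return lost;
}
```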


From xen-devel-bounces@lists.xenproject.org Thu May 04 08:11:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 08:11:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529615.824173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puU3H-0001NC-Kb; Thu, 04 May 2023 08:10:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529615.824173; Thu, 04 May 2023 08:10:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puU3H-0001N3-Ho; Thu, 04 May 2023 08:10:59 +0000
Received: by outflank-mailman (input) for mailman id 529615;
 Thu, 04 May 2023 08:10:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puU3G-0001Mx-Im
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 08:10:58 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20606.outbound.protection.outlook.com
 [2a01:111:f400:7d00::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32c285a6-ea53-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 10:10:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9078.eurprd04.prod.outlook.com (2603:10a6:20b:445::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 08:10:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 08:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32c285a6-ea53-11ed-b226-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <01e1cf98-1c36-37f2-18b3-0994e279e5a1@suse.com>
Date: Thu, 4 May 2023 10:10:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v6 1/4] xen/riscv: add VM space layout
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1683131359.git.oleksii.kurochko@gmail.com>
 <a4004849c87990e5379acc5d60a52492385cd8e0.1683131359.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a4004849c87990e5379acc5d60a52492385cd8e0.1683131359.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0208.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ad::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB9078:EE_
X-MS-Office365-Filtering-Correlation-Id: 6d66c6f6-02b5-40f3-ab4f-08db4c771527
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6d66c6f6-02b5-40f3-ab4f-08db4c771527
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 08:10:54.4597
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NddVjoidE3TgwVXRAvf1QiWkw3yRPcbdGyNIRMmn8yWHl5i5A4AhIAEvAvdn1vgRXwcmKK09lphUSY7zhyeWkQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB9078

On 03.05.2023 18:31, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -4,6 +4,37 @@
>  #include <xen/const.h>
>  #include <xen/page-size.h>
>  
> +/*
> + * RISC-V64 Layout:
> + *
> + * From the riscv-privileged doc:
> + *   When mapping between narrower and wider addresses,
> + *   RISC-V zero-extends a narrower physical address to a wider size.
> + *   The mapping between 64-bit virtual addresses and the 39-bit usable
> + *   address space of Sv39 is not based on zero-extension but instead
> + *   follows an entrenched convention that allows an OS to use one or
> + *   a few of the most-significant bits of a full-size (64-bit) virtual
> + *   address to quickly distinguish user and supervisor address regions.
> + *
> + * It means that:
> + *   top VA bits are simply ignored for the purpose of translating to PA.
> + *
> + * ============================================================================
> + *    Start addr    |   End addr        |  Size  | Slot       |area description
> + * ============================================================================
> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
> + *                 ...                  |  1 GB  | L2 510     | Unused
> + * 0000003200000000 |  0000007f40000000 | 331 GB | L2 200-609 | Direct map

I guess the upper value is 509 here?

> + *                 ...                  |  1 GB  | L2 199     | Unused
> + * 0000003100000000 |  0000003140000000 |  3 GB  | L2 196-198 | Frametable

The two leftmost columns cover only 1 GiB.

> + *                 ...                  |  1 GB  | L2 195     | Unused
> + * 0000003080000000 |  00000030c0000000 |  1 GB  | L2 194     | VMAP
> + *     .................. unused ..................
> + * ============================================================================
> + */

Two more remarks: This map is, as I understand it, Sv39-specific. The
quotation from the doc doesn't really imply that, so I'd suggest adding
something to make this explicit. This might be as simple as a suitable
#ifdef around or inside the comment (even inside I think it'll be easily
understood what it means; see e.g. the CONFIG_BIGMEM conditional in x86's
table).

The other oddity here is the sorting of entries: You sort downwards by L2
slot, but upwards within slot 511. Once suitably re-ordered it'll become
apparent that there's another "unused" row missing (or perhaps even two).

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 08:13:48 2023
Date: Thu, 4 May 2023 10:13:23 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Subject: Re: [PATCH v2 1/2] restrict concept of pIRQ to x86
Message-ID: <ZFNpI929Zk61sZ5X@Air-de-Roger>
References: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
 <98f51b96-8a1c-7f33-b4d3-1744174df465@suse.com>
 <ZFNiS8oxfozlxCz6@Air-de-Roger>
 <2a46c7df-b380-cc41-5582-70b4829d7f47@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2a46c7df-b380-cc41-5582-70b4829d7f47@suse.com>

On Thu, May 04, 2023 at 09:50:27AM +0200, Jan Beulich wrote:
> On 04.05.2023 09:44, Roger Pau Monné wrote:
> > On Wed, May 03, 2023 at 05:33:05PM +0200, Jan Beulich wrote:
> >> --- a/xen/include/xen/sched.h
> >> +++ b/xen/include/xen/sched.h
> >> @@ -438,12 +438,14 @@ struct domain
> >>  
> >>      struct grant_table *grant_table;
> >>  
> >> +#ifdef CONFIG_HAS_PIRQ
> >>      /*
> >>       * Interrupt to event-channel mappings and other per-guest-pirq data.
> >>       * Protected by the domain's event-channel spinlock.
> >>       */
> >>      struct radix_tree_root pirq_tree;
> >>      unsigned int     nr_pirqs;
> >> +#endif
> > 
> > Won't it be cleaner to just move this into arch_domain and avoid a
> > bunch of the ifdefary? As the initialization of the fields would be
> > moved to arch_domain_create() also.
> 
> That's hard to decide without knowing what e.g. RISC-V is going to
> want. Taking (past) IA-64 into consideration - that would likely
> have wanted to select this new HAS_PIRQ, and hence keeping these
> pieces where they are imo makes sense.

I'm kind of confused: what does Arm do here?  AFAICT the pirq_tree is
used by both PV and HVM guests to store the native interrupt ->
guest interrupt translation; doesn't Arm also need something similar?

> I did actually consider that
> alternative, albeit just briefly. If that ...
> 
> > Maybe we would want to introduce some kind of arch-specific event
> > channel handler so that the bind PIRQ hypercall could be handled
> > there?
> 
> ... and hence this was the route to take, I suppose I would simply
> drop this patch and revert the 2nd one to what it was before (merely
> addressing the review comment on Arm's arch_hwdom_irqs()). That's
> simply more intrusive a change than I'm willing to make right here.

It's hard to tell whether other arches will need that; I think I need
to better understand why Arm doesn't before making any judgment
myself.

I do think it would be cleaner if the field was moved, but that would
require us to be quite sure it won't be needed by other arches.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 04 08:17:42 2023
Message-ID: <2b125ebf-53ed-27a9-2f04-be2a6cac7fd0@suse.com>
Date: Thu, 4 May 2023 10:17:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/ucode: Refresh raw CPU policy after microcode load
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
 <ccf68f0f-6fd7-a9a4-ef72-03999d11035a@suse.com>
 <b655305f-293e-a0dc-ab39-36b0c9787433@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b655305f-293e-a0dc-ab39-36b0c9787433@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 04.05.2023 10:08, Andrew Cooper wrote:
> On 04/05/2023 8:08 am, Jan Beulich wrote:
>> On 03.05.2023 20:58, Andrew Cooper wrote:
>>> Loading microcode can cause new features to appear.
>> Or disappear (LWP)? While I don't think we want to panic() in this
>> case (we do on the S3 resume path when recheck_cpu_features() fails
>> on the BSP), it would seem to me that we want to go a step further
>> than you do and at least warn when a feature went amiss. Or is that
>> too much of a scope-creeping request right here?
> 
> You're correct that I ought to discuss the disappear case.  But like
> livepatching, it's firmly in the realm of "the end user takes
> responsibility for trying this in their test system before running it in
> production".

Okay, with the case at least suitably mentioned:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

> For LWP specifically, we ought to explicitly permit its disappearance in
> recheck_cpu_features(), because this specific example is known to exist,
> and known safe as Xen never used or virtualised LWP functionality. 

Right, but iirc we did expose it to guests earlier on. And I've used it as
a known example only anyway. Who knows what vendors decide to make disappear
the next time round ...

> Crashing on S3

I guess you meant to continue "... is bad anyway"?

>>> @@ -677,6 +678,9 @@ static long cf_check microcode_update_helper(void *data)
>>>          spin_lock(&microcode_mutex);
>>>          microcode_update_cache(patch);
>>>          spin_unlock(&microcode_mutex);
>>> +
>>> +        /* Refresh the raw CPU policy, in case the features have changed. */
>>> +        calculate_raw_cpu_policy();
>> I understand this is in line with what we do during boot, but there
>> and here I wonder whether this wouldn't better deal with possible
>> asymmetries (e.g. in case ucode loading failed on one of the CPUs),
>> along the lines of what we do near the end of identify_cpu() for
>> APs. (Unlike the question higher up, this is definitely only a
>> remark here, not something I'd consider dealing with right in this
>> change.)
> 
> Asymmetry is an increasingly theoretical problem.  Yeah, it exists in
> principle, but Xen has no way of letting you explicitly get into that
> situation.
> 
> This too falls firmly into the "end user takes responsibility for
> testing it properly first" category.
> 
> We have explicit symmetric assumptions/requirements elsewhere (e.g. for
> a single system, there's 1 correct ucode blob).
> 
> We can acknowledge that asymmetry exists, but there is basically nothing
> Xen can do about it other than highlight that something is very wrong on
> the system.  Odds are that a system which gets into such a state won't
> survive much longer.

Indeed. Hence the desire to at least log the fact, such that investigation
of the sudden death won't take long.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 08:21:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 08:21:35 +0000
Message-ID: <cf8bb56f-8fc5-77a6-40c1-a2f10970a094@suse.com>
Date: Thu, 4 May 2023 10:21:23 +0200
Subject: Re: [PATCH v2 1/2] restrict concept of pIRQ to x86
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
References: <85e59fd5-9a06-48b4-ba7e-81865d44e332@suse.com>
 <98f51b96-8a1c-7f33-b4d3-1744174df465@suse.com>
 <ZFNiS8oxfozlxCz6@Air-de-Roger>
 <2a46c7df-b380-cc41-5582-70b4829d7f47@suse.com>
 <ZFNpI929Zk61sZ5X@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFNpI929Zk61sZ5X@Air-de-Roger>

On 04.05.2023 10:13, Roger Pau Monné wrote:
> On Thu, May 04, 2023 at 09:50:27AM +0200, Jan Beulich wrote:
>> On 04.05.2023 09:44, Roger Pau Monné wrote:
>>> On Wed, May 03, 2023 at 05:33:05PM +0200, Jan Beulich wrote:
>>>> --- a/xen/include/xen/sched.h
>>>> +++ b/xen/include/xen/sched.h
>>>> @@ -438,12 +438,14 @@ struct domain
>>>>  
>>>>      struct grant_table *grant_table;
>>>>  
>>>> +#ifdef CONFIG_HAS_PIRQ
>>>>      /*
>>>>       * Interrupt to event-channel mappings and other per-guest-pirq data.
>>>>       * Protected by the domain's event-channel spinlock.
>>>>       */
>>>>      struct radix_tree_root pirq_tree;
>>>>      unsigned int     nr_pirqs;
>>>> +#endif
>>>
>>> Won't it be cleaner to just move this into arch_domain and avoid a
>>> bunch of the ifdefary? As the initialization of the fields would be
>>> moved to arch_domain_create() also.
>>
>> That's hard to decide without knowing what e.g. RISC-V is going to
>> want. Taking (past) IA-64 into consideration - that would likely
>> have wanted to select this new HAS_PIRQ, and hence keeping these
>> pieces where they are imo makes sense.
> 
> I'm kind of confused, what does Arm do here?  AFAICT the pirq_tree is
> used by both PV and HVM guests in order to store the native interrupt
> -> guest interrupt translation, doesn't Arm also need something
> similar?

According to [1] they don't, hence the (new in v2) change here. Aiui
they simply map IRQ to pIRQ 1:1.

Jan

[1] https://lists.xen.org/archives/html/xen-devel/2023-05/msg00258.html



From xen-devel-bounces@lists.xenproject.org Thu May 04 08:23:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 08:23:16 +0000
Date: Thu, 4 May 2023 10:22:51 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/ucode: Refresh raw CPU policy after microcode load
Message-ID: <ZFNrW5a/aY7a3KTs@Air-de-Roger>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
 <ccf68f0f-6fd7-a9a4-ef72-03999d11035a@suse.com>
 <b655305f-293e-a0dc-ab39-36b0c9787433@citrix.com>
In-Reply-To: <b655305f-293e-a0dc-ab39-36b0c9787433@citrix.com>

On Thu, May 04, 2023 at 09:08:02AM +0100, Andrew Cooper wrote:
> On 04/05/2023 8:08 am, Jan Beulich wrote:
> > On 03.05.2023 20:58, Andrew Cooper wrote:
> >> Loading microcode can cause new features to appear.
> > Or disappear (LWP)? While I don't think we want to panic() in this
> > case (we do on the S3 resume path when recheck_cpu_features() fails
> > on the BSP), it would seem to me that we want to go a step further
> > than you do and at least warn when a feature went amiss. Or is that
> > too much of a scope-creeping request right here?
> 
> You're correct that I ought to discuss the disappear case.  But like
> livepatching, it's firmly in the realm of "the end user takes
> responsibility for trying this in their test system before running it in
> production".
> 
> For LWP specifically, we ought to explicitly permit its disappearance in
> recheck_cpu_features(), because this specific example is known to exist,
> and known safe as Xen never used or virtualised LWP functionality. 
> Crashing on S3

Printing disappearing features might be a nice bonus, but it could be
done from the tool that loads the microcode itself; no need to do such
work in the hypervisor IMO.

> >
> >> @@ -677,6 +678,9 @@ static long cf_check microcode_update_helper(void *data)
> >>          spin_lock(&microcode_mutex);
> >>          microcode_update_cache(patch);
> >>          spin_unlock(&microcode_mutex);
> >> +
> >> +        /* Refresh the raw CPU policy, in case the features have changed. */
> >> +        calculate_raw_cpu_policy();
> > I understand this is in line with what we do during boot, but there
> > and here I wonder whether this wouldn't better deal with possible
> > asymmetries (e.g. in case ucode loading failed on one of the CPUs),
> > along the lines of what we do near the end of identify_cpu() for
> > APs. (Unlike the question higher up, this is definitely only a
> > remark here, not something I'd consider dealing with right in this
> > change.)
> 
> Asymmetry is an increasingly theoretical problem.  Yeah, it exists in
> principle, but Xen has no way of letting you explicitly get into that
> situation.
> 
> This too falls firmly into the "end user takes responsibility for
> testing it properly first" category.
> 
> We have explicit symmetric assumptions/requirements elsewhere (e.g. for
> a single system, there's 1 correct ucode blob).
> 
> We can acknowledge that asymmetry exists, but there is basically nothing
> Xen can do about it other than highlight that something is very wrong on
> the system.  Odds are that a system which gets into such a state won't
> survive much longer.

Would it make sense to only update the CPU policy if updated ==
nr_cores?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 04 08:57:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 08:57:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529636.824227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puUm2-0008LO-80; Thu, 04 May 2023 08:57:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529636.824227; Thu, 04 May 2023 08:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puUm2-0008LH-5G; Thu, 04 May 2023 08:57:14 +0000
Received: by outflank-mailman (input) for mailman id 529636;
 Thu, 04 May 2023 08:57:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o2+9=AZ=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1puUm0-0008LB-D2
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 08:57:12 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eae::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a739ef4f-ea59-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 10:57:10 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by MN0PR12MB6150.namprd12.prod.outlook.com (2603:10b6:208:3c6::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 08:57:07 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::5b9b:f31f:ac6d:be94%7]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 08:57:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a739ef4f-ea59-11ed-b226-6b7b168915f2
Message-ID: <22c4da5c-9ad7-68ba-b005-8ba18e584bf6@amd.com>
Date: Thu, 4 May 2023 09:57:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based systems
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
 <2d764f29-2eb9-ecff-84cd-9baf12961c54@xen.org>
 <e9a95271-021f-523a-770a-302c638bfe73@amd.com>
 <556611a5-dc9a-8155-650d-327b6853f761@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <556611a5-dc9a-8155-650d-327b6853f761@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0


On 03/05/2023 18:43, Julien Grall wrote:
> Hi Ayan,
Hi Julien,
>
> On 03/05/2023 17:49, Ayan Kumar Halder wrote:
>>
>> On 03/05/2023 08:40, Julien Grall wrote:
>>> Hi,
>> Hi Julien,
>>>
>>> Title: Did you mean "Enable spin table"?
>> Yes, that would be more concrete.
>>>
>>> On 02/05/2023 11:58, Ayan Kumar Halder wrote:
>>>> On some of the Arm32 based systems (eg Cortex-R52), smpboot is 
>>>> supported.
>>>
>>> Same here.
>> Yes
>>>
>>>> In these systems PSCI may not always be supported. In case of 
>>>> Cortex-R52, there
>>>> is no EL3 or secure mode. Thus, PSCI is not supported as it 
>>>> requires EL3.
>>>>
>>>> Thus, we use 'spin-table' mechanism to boot the secondary cpus. The 
>>>> primary
>>>> cpu provides the startup address of the secondary cores. This 
>>>> address is
>>>> provided using the 'cpu-release-addr' property.
>>>>
>>>> To support smpboot, we have copied the code from 
>>>> xen/arch/arm/arm64/smpboot.c
>>>
>>> I would rather prefer if we don't duplicate the code but instead 
>>> move the logic in common code.
>> Ack
>>>
>>>> with the following changes :-
>>>>
>>>> 1. 'enable-method' is an optional property. Refer to the comment in
>>>> https://www.kernel.org/doc/Documentation/devicetree/bindings/arm/cpus.yaml 
>>>>
>>>> "      # On ARM 32-bit systems this property is optional"
>>>
>>> Looking at this list, "spin-table" doesn't seem to be supported
>>> for 32-bit systems. 
>>
>> However, looking at 
>> https://developer.arm.com/documentation/den0013/d/Multi-core-processors/Booting-SMP-systems/SMP-boot-in-Linux 
>> , it seems "spin-table" is a valid boot mechanism for Armv7 cpus.
>
> I am not able to find the associated code in Linux 32-bit. Do you have 
> any pointer?

Unfortunately, no.

I see that in Linux, "spin-table" support is implemented in 
arch/arm64/kernel/smp_spin_table.c, so there appears to be no Arm32 
support for it.

>
>>
>>
>>> Can you point me to the discussion/patch where this would be added?
>>
>> Actually, this is the first discussion I am having with regards to 
>> adding a "spin-table" support on Arm32.
>
> I was asking for the discussion on the Device-Tree/Linux ML or code.
> I don't really want to do a "spin-table" support if this is not even 
> supported in Linux.

I see your point. But that brings me to my next question: how do I parse 
cpu-node-specific properties for Arm32 cpus?

In 
https://www.kernel.org/doc/Documentation/devicetree/bindings/arm/cpus.yaml 
, I see some of the properties valid for Arm32 cpus.

For example:-

secondary-boot-reg
rockchip,pmu

Also, it says "additionalProperties: true", which I understand to mean that I can add platform-specific properties under the cpu node. Please correct me if I am mistaken.

My cpu nodes will look like this :-

         cpu@1 {
             device_type = "cpu";
             compatible = "arm,armv8";
             reg = <0x00 0x01>;
             enable-method = "spin-table";
             amd-cpu-release-addr = <0xEB58C010>; /* might also use "secondary-boot-reg" */
             amd-cpu-reset-addr = <0xEB58C000>;
             amd-cpu-reset-delay = <0xF00000>;
             amd-cpu-re
             phandle = <0x03>;
         };

         cpu@2 {
             device_type = "cpu";
             compatible = "arm,armv8";
             reg = <0x00 0x02>;
             enable-method = "spin-table";
             amd-cpu-release-addr = <0xEB59C010>; /* might also use "secondary-boot-reg" */
             amd-cpu-reset-addr = <0xEB59C000>;
             amd-cpu-reset-delay = <0xF00000>;
             amd-cpu-re
             phandle = <0x04>;
         };

If the reasoning makes sense, then does the following proposed change 
look sane?

diff --git a/xen/arch/arm/arm32/smpboot.c b/xen/arch/arm/arm32/smpboot.c
index e7368665d5..0b281edb9d 100644
--- a/xen/arch/arm/arm32/smpboot.c
+++ b/xen/arch/arm/arm32/smpboot.c
@@ -10,10 +10,7 @@ int __init arch_smp_init(void)

  int __init arch_cpu_init(int cpu, struct dt_device_node *dn)
  {
-    /* Not needed on ARM32, as there is no relevant information in
-     * the CPU device tree node for ARMv7 CPUs.
-     */
-    return 0;
+    return platform_cpu_init(cpu, dn);
  }

  int arch_cpu_up(int cpu)
diff --git a/xen/arch/arm/include/asm/platform.h b/xen/arch/arm/include/asm/platform.h
index d03b261ce8..5cd7af4d0e 100644
--- a/xen/arch/arm/include/asm/platform.h
+++ b/xen/arch/arm/include/asm/platform.h
@@ -18,6 +18,7 @@ struct platform_desc {
      /* SMP */
      int (*smp_init)(void);
      int (*cpu_up)(int cpu);
+    int (*cpu_init)(int cpu, struct dt_device_node *dn);
  #endif
      /* Specific mapping for dom0 */
      int (*specific_mapping)(struct domain *d);
diff --git a/xen/arch/arm/platform.c b/xen/arch/arm/platform.c
index a820665020..ed54b68c20 100644
--- a/xen/arch/arm/platform.c
+++ b/xen/arch/arm/platform.c
@@ -95,6 +95,14 @@ int __init platform_specific_mapping(struct domain *d)
  }

  #ifdef CONFIG_ARM_32
+int __init platform_cpu_init(int cpu, struct dt_device_node *dn)
+{
+    if ( platform && platform->cpu_init )
+        return platform->cpu_init(cpu, dn); /* parse platform-specific cpu properties */
+
+    return 0;
+}
+
  int platform_cpu_up(int cpu)
  {
      if ( psci_ver )


Let me know your thoughts, please.

Kind regards,

Ayan

>
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Thu May 04 09:00:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 09:00:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529639.824237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puUog-0000WU-M1; Thu, 04 May 2023 08:59:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529639.824237; Thu, 04 May 2023 08:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puUog-0000WN-JL; Thu, 04 May 2023 08:59:58 +0000
Received: by outflank-mailman (input) for mailman id 529639;
 Thu, 04 May 2023 08:59:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VaLI=AZ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1puUof-0000WF-H1
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 08:59:57 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0631.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0a32a5d0-ea5a-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 10:59:56 +0200 (CEST)
Received: from DU2PR04CA0323.eurprd04.prod.outlook.com (2603:10a6:10:2b5::28)
 by DB9PR08MB10378.eurprd08.prod.outlook.com (2603:10a6:10:3da::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 08:59:53 +0000
Received: from DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b5:cafe::8) by DU2PR04CA0323.outlook.office365.com
 (2603:10a6:10:2b5::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 08:59:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT032.mail.protection.outlook.com (100.127.142.185) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.22 via Frontend Transport; Thu, 4 May 2023 08:59:53 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Thu, 04 May 2023 08:59:53 +0000
Received: from 054332754a50.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8D8824C4-A056-4F05-A9D7-D31265CD6C0C.1; 
 Thu, 04 May 2023 08:59:46 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 054332754a50.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 08:59:46 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS8PR08MB9741.eurprd08.prod.outlook.com (2603:10a6:20b:617::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Thu, 4 May
 2023 08:59:45 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::9283:e527:bed8:ab23]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::9283:e527:bed8:ab23%7]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 08:59:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a32a5d0-ea5a-11ed-b226-6b7b168915f2
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stewart Hildebrand <Stewart.Hildebrand@amd.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: pci: fix -Wtype-limits warning in
 pci-host-common.c
Thread-Topic: [PATCH] xen/arm: pci: fix -Wtype-limits warning in
 pci-host-common.c
Thread-Index: AQHZffQTsmRRBKh80kOBQgH8fU6H2a9J0ZUA
Date: Thu, 4 May 2023 08:59:44 +0000
Message-ID: <5D298044-314C-473F-97AB-420DA3DA44A2@arm.com>
References: <20230503191820.78322-1-stewart.hildebrand@amd.com>
In-Reply-To: <20230503191820.78322-1-stewart.hildebrand@amd.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
x-mailer: Apple Mail (2.3731.500.231)
Content-Type: text/plain; charset="utf-8"
Content-ID: <99CF58F2C0329B4EBD5E10C71324FF75@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

Hi Stewart,

> On 3 May 2023, at 21:18, Stewart Hildebrand <Stewart.Hildebrand@amd.com> wrote:
> 
> When building with EXTRA_CFLAGS_XEN_CORE="-Wtype-limits", we observe the
> following warning:
> 
> arch/arm/pci/pci-host-common.c: In function ‘pci_host_common_probe’:
> arch/arm/pci/pci-host-common.c:238:26: warning: comparison is always false due to limited range of data type [-Wtype-limits]
>  238 |     if ( bridge->segment < 0 )
>      |                          ^
> 
> This is due to bridge->segment being an unsigned type. Fix it by introducing a
> new variable of signed type to use in the condition.
> 
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>

I would see this as a bug fix more than a compiler warning fix as the error code was
ignored before that.

Anyway:
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand


> ---
> xen/arch/arm/pci/pci-host-common.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
> index a8ece94303ca..7474d877deb8 100644
> --- a/xen/arch/arm/pci/pci-host-common.c
> +++ b/xen/arch/arm/pci/pci-host-common.c
> @@ -214,6 +214,7 @@ int pci_host_common_probe(struct dt_device_node *dev,
>     struct pci_host_bridge *bridge;
>     struct pci_config_window *cfg;
>     int err;
> +    int domain;
> 
>     if ( dt_device_for_passthrough(dev) )
>         return 0;
> @@ -234,12 +235,13 @@ int pci_host_common_probe(struct dt_device_node *dev,
>     bridge->cfg = cfg;
>     bridge->ops = &ops->pci_ops;
> 
> -    bridge->segment = pci_bus_find_domain_nr(dev);
> -    if ( bridge->segment < 0 )
> +    domain = pci_bus_find_domain_nr(dev);
> +    if ( domain < 0 )
>     {
>         printk(XENLOG_ERR "Inconsistent \"linux,pci-domain\" property in DT\n");
>         BUG();
>     }
> +    bridge->segment = domain;
>     pci_add_host_bridge(bridge);
> 
>     return 0;
> -- 
> 2.40.1
> 


From xen-devel-bounces@lists.xenproject.org Thu May 04 09:07:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 09:07:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529644.824246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puUwB-00024C-Fl; Thu, 04 May 2023 09:07:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529644.824246; Thu, 04 May 2023 09:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puUwB-000245-Cv; Thu, 04 May 2023 09:07:43 +0000
Received: by outflank-mailman (input) for mailman id 529644;
 Thu, 04 May 2023 09:07:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puUwA-00023z-Ew
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 09:07:42 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on062d.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1f69816b-ea5b-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 11:07:41 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7196.eurprd04.prod.outlook.com (2603:10a6:10:123::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 09:07:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 09:07:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f69816b-ea5b-11ed-b226-6b7b168915f2
Message-ID: <7e159a5b-b7be-c582-7300-154cec7a8e91@suse.com>
Date: Thu, 4 May 2023 11:07:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/ucode: Refresh raw CPU policy after microcode load
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
 <ccf68f0f-6fd7-a9a4-ef72-03999d11035a@suse.com>
 <b655305f-293e-a0dc-ab39-36b0c9787433@citrix.com>
 <ZFNrW5a/aY7a3KTs@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFNrW5a/aY7a3KTs@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 04.05.2023 10:22, Roger Pau Monné wrote:
> On Thu, May 04, 2023 at 09:08:02AM +0100, Andrew Cooper wrote:
>> On 04/05/2023 8:08 am, Jan Beulich wrote:
>>> On 03.05.2023 20:58, Andrew Cooper wrote:
>>>> Loading microcode can cause new features to appear.
>>> Or disappear (LWP)? While I don't think we want to panic() in this
>>> case (we do on the S3 resume path when recheck_cpu_features() fails
>>> on the BSP), it would seem to me that we want to go a step further
>>> than you do and at least warn when a feature went amiss. Or is that
>>> too much of a scope-creeping request right here?
>>
>> You're correct that I ought to discuss the disappear case.  But like
>> livepatching, it's firmly in the realm of "the end user takes
>> responsibility for trying this in their test system before running it in
>> production".
>>
>> For LWP specifically, we ought to explicitly permit its disappearance in
>> recheck_cpu_features(), because this specific example is known to exist,
>> and known safe as Xen never used or virtualised LWP functionality. 
>> Crashing on S3
> 
> Printing disappearing features might be a nice bonus, but could be
> done from the tool that loads the microcode itself, no need to do such
> work in the hypervisor IMO.

Except that the system may not last long enough for the (or any) tool
to actually make it to producing such output, let alone any human
actually observing it (when that output isn't necessarily going to
some remote system).

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 09:29:15 2023
Message-ID: <575505d8-b6bd-be14-a70c-547d4229ad90@citrix.com>
Date: Thu, 4 May 2023 10:28:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/ucode: Refresh raw CPU policy after microcode load
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
 <ccf68f0f-6fd7-a9a4-ef72-03999d11035a@suse.com>
 <b655305f-293e-a0dc-ab39-36b0c9787433@citrix.com>
 <ZFNrW5a/aY7a3KTs@Air-de-Roger>
 <7e159a5b-b7be-c582-7300-154cec7a8e91@suse.com>
In-Reply-To: <7e159a5b-b7be-c582-7300-154cec7a8e91@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 04/05/2023 10:07 am, Jan Beulich wrote:
> On 04.05.2023 10:22, Roger Pau Monné wrote:
>> On Thu, May 04, 2023 at 09:08:02AM +0100, Andrew Cooper wrote:
>>> On 04/05/2023 8:08 am, Jan Beulich wrote:
>>>> On 03.05.2023 20:58, Andrew Cooper wrote:
>>>>> Loading microcode can cause new features to appear.
>>>> Or disappear (LWP)? While I don't think we want to panic() in this
>>>> case (we do on the S3 resume path when recheck_cpu_features() fails
>>>> on the BSP), it would seem to me that we want to go a step further
>>>> than you do and at least warn when a feature went amiss. Or is that
>>>> too much of a scope-creeping request right here?
>>> You're correct that I ought to discuss the disappear case.  But like
>>> livepatching, it's firmly in the realm of "the end user takes
>>> responsibility for trying this in their test system before running it in
>>> production".
>>>
>>> For LWP specifically, we ought to explicitly permit its disappearance in
>>> recheck_cpu_features(), because this specific example is known to exist,
>>> and known safe as Xen never used or virtualised LWP functionality. 
>>> Crashing on S3
>> Printing disappearing features might be a nice bonus, but could be
>> done from the tool that loads the microcode itself, no need to do such
>> work in the hypervisor IMO.

Xen is really the only entity in a position to judge when stuff has gone
away, so this can't really be deferred to another tool.

We have the X86_FEATURE_* names during boot for parsing the
{dom0-}cpuid= command line options, but we don't keep them beyond init.

> Except that the system may not last long enough for the (or any) tool
> to actually make it to producing such output, let alone any human
> actually observing it (when that output isn't necessarily going to
> some remote system).

Yeah, `xl dmesg`/serial/whatever is the one place where anything like
this is liable to get noticed.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 04 09:33:43 2023
From: Oleksii Moisieiev <Oleksii_Moisieiev@epam.com>
To: "sstabellini@kernel.org" <sstabellini@kernel.org>
CC: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
 "devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
 "julien@xen.org" <julien@xen.org>,
 "linux-arm-kernel@lists.infradead.org" <linux-arm-kernel@lists.infradead.org>,
 "olekstysh@gmail.com" <olekstysh@gmail.com>,
 "robh@kernel.org" <robh@kernel.org>,
 "robin.murphy@arm.com" <robin.murphy@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 1/2] dt-bindings: arm: xen: document Xen iommu device
Thread-Topic: [PATCH v3 1/2] dt-bindings: arm: xen: document Xen iommu device
Thread-Index: AQHZfmttKueJIQgVZkeItcqDtfTLRQ==
Date: Thu, 4 May 2023 09:33:10 +0000
Message-ID: <3fb45717-46a2-6465-ff3f-30a641ba67a4@epam.com>
References: <alpine.DEB.2.22.394.2202071616300.2091381@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2202071616300.2091381@ubuntu-linux-20-04-desktop>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: PA4PR03MB7136:EE_|PAWPR03MB8914:EE_
x-ms-office365-filtering-correlation-id: d0857f72-4333-4e40-ac8f-08db4c8293b3
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <AB841656C7DC464999D21799E398A141@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PA4PR03MB7136.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d0857f72-4333-4e40-ac8f-08db4c8293b3
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 May 2023 09:33:10.9929
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: eDAKFVIAj7b2qN9jgD/Yk9gWkFBBlrjNk/aD+yuAyPr6JtMCrGNUuke9ZZ+de7UGOvlLa71+rFzzz7XoHPbmu6ImSpaKuFKfzeNietFuFfs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR03MB8914

Hi Stefano,

 > The tiny "xen,iommu-el2-v1" driver could be backported to the stable
 > trees, I would imagine. Otherwise, do you have another suggestion?

There is a stub IOMMU driver already merged into the Linux kernel:
commit 1ca55d50e50c74747a7b8846dac306fbe5ac4cf5 ("xen/grant-dma-iommu:
Introduce stub IOMMU driver"), added by Oleksandr Tyshchenko.

I was able to use it as an empty IOMMU driver on my test setup, with
the following device-tree changes:

xen_iommu: xen-iommu {
	compatible = "xen,grant-dma";
	iommu-cells = <0>;
};

i2c@e60b0000 {
	iommus = <&xen_iommu 0x0>;
};

Maybe this driver can be used to solve the deferred problem with some
modifications? What is your opinion?

Best regards,
Oleksii.


From xen-devel-bounces@lists.xenproject.org Thu May 04 09:56:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 09:56:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529658.824280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puVgj-0000KV-LJ; Thu, 04 May 2023 09:55:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529658.824280; Thu, 04 May 2023 09:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puVgj-0000KO-IC; Thu, 04 May 2023 09:55:49 +0000
Received: by outflank-mailman (input) for mailman id 529658;
 Thu, 04 May 2023 09:55:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Gg4=AZ=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1puVgi-0000KI-3C
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 09:55:48 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0611.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d7a67724-ea61-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 11:55:47 +0200 (CEST)
Received: from DB8PR04CA0015.eurprd04.prod.outlook.com (2603:10a6:10:110::25)
 by AS8PR08MB6072.eurprd08.prod.outlook.com (2603:10a6:20b:296::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 09:55:43 +0000
Received: from DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:110:cafe::34) by DB8PR04CA0015.outlook.office365.com
 (2603:10a6:10:110::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.25 via Frontend
 Transport; Thu, 4 May 2023 09:55:43 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT011.mail.protection.outlook.com (100.127.142.132) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.26 via Frontend Transport; Thu, 4 May 2023 09:55:43 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Thu, 04 May 2023 09:55:43 +0000
Received: from b4f547a610d0.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9F65F0B0-D926-43EF-899D-E88E5A2255D5.1; 
 Thu, 04 May 2023 09:55:37 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b4f547a610d0.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 09:55:37 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by DU0PR08MB9558.eurprd08.prod.outlook.com (2603:10a6:10:44d::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.30; Thu, 4 May
 2023 09:55:35 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb%7]) with mapi id 15.20.6363.026; Thu, 4 May 2023
 09:55:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7a67724-ea61-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oT3cAAImx46iN9Cg877ANIWn8JL/IUJ158jZ8MQ4VBY=;
 b=eS5ZBkQziw7r33wwVJg6sx3NR+NAQ4wIb8+xPGsJU811DU9bxc/bi/VEb9/cwBVXZUnn7eJuAc3IOcUFoAkdiTNpZ8YNKQYVJQE3r4w68zHLJIru0ZtqGHW04nmjPAdSqQwUi8dwVPzjPSnWLthslcTrBpT5s+DfzPqCW3ek3Tg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 612cc924003d0784
X-CR-MTA-TID: 64aa7808
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stewart Hildebrand <Stewart.Hildebrand@amd.com>
CC: Xen developer discussion <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: pci: fix -Wtype-limits warning in
 pci-host-common.c
Thread-Topic: [PATCH] xen/arm: pci: fix -Wtype-limits warning in
 pci-host-common.c
Thread-Index: AQHZffQdsUu9XjzfjkKQkjxjNw1HC69J4TCA
Date: Thu, 4 May 2023 09:55:35 +0000
Message-ID: <7E04E3DD-B260-49E4-8543-77BBF8715BCB@arm.com>
References: <20230503191820.78322-1-stewart.hildebrand@amd.com>
In-Reply-To: <20230503191820.78322-1-stewart.hildebrand@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7158:EE_|DU0PR08MB9558:EE_|DBAEUR03FT011:EE_|AS8PR08MB6072:EE_
X-MS-Office365-Filtering-Correlation-Id: 40dc0e22-1c9d-496a-1021-08db4c85b9d2
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: multipart/alternative;
	boundary="_000_7E04E3DDB26049E4854377BBF8715BCBarmcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9558
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	49159545-3040-4820-4e7e-08db4c85b54d
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 09:55:43.4970
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 40dc0e22-1c9d-496a-1021-08db4c85b9d2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6072

--_000_7E04E3DDB26049E4854377BBF8715BCBarmcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

Hi Stewart,

On 3 May 2023, at 8:18 pm, Stewart Hildebrand <Stewart.Hildebrand@amd.com> wrote:

When building with EXTRA_CFLAGS_XEN_CORE="-Wtype-limits", we observe the
following warning:

arch/arm/pci/pci-host-common.c: In function ‘pci_host_common_probe’:
arch/arm/pci/pci-host-common.c:238:26: warning: comparison is always false due to limited range of data type [-Wtype-limits]
 238 |     if ( bridge->segment < 0 )
     |                          ^

This is due to bridge->segment being an unsigned type. Fix it by introducing a
new variable of signed type to use in the condition.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul

--_000_7E04E3DDB26049E4854377BBF8715BCBarmcom_--


From xen-devel-bounces@lists.xenproject.org Thu May 04 10:11:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 10:11:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529663.824290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puVvQ-0002wS-3i; Thu, 04 May 2023 10:11:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529663.824290; Thu, 04 May 2023 10:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puVvP-0002wL-Vv; Thu, 04 May 2023 10:10:59 +0000
Received: by outflank-mailman (input) for mailman id 529663;
 Thu, 04 May 2023 10:10:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1puVvP-0002wE-3P
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 10:10:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puVvN-0002PT-D6; Thu, 04 May 2023 10:10:57 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.4.157]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1puVvN-0000CW-59; Thu, 04 May 2023 10:10:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=0wRa0wnni9ASFXBeskJw0fRv7Mml4oo/YTejwSsw2C0=; b=gVvOXfcivJlVrxfzILrEreJ/N3
	WWzt0793nq4JvBqMou8IzQkt0+igNSK31QmUt1L8+P1nyeNmN03BvxWvISVXSuBE+/mo2zbQ8ZUnZ
	yO7d76MFKkOOyi8NnvbCmS8CG/jCfl2PJXqqT3vgQfZGQuRrJdJgqfa6GNO+yWdaQYSM=;
Message-ID: <4ca00734-8e1e-fe5b-b2a0-6f08f3835433@xen.org>
Date: Thu, 4 May 2023 11:10:55 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH] xen/arm: pci: fix -Wtype-limits warning in
 pci-host-common.c
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230503191820.78322-1-stewart.hildebrand@amd.com>
 <5D298044-314C-473F-97AB-420DA3DA44A2@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <5D298044-314C-473F-97AB-420DA3DA44A2@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 04/05/2023 09:59, Bertrand Marquis wrote:
> Hi Stewart,
> 
>> On 3 May 2023, at 21:18, Stewart Hildebrand <Stewart.Hildebrand@amd.com> wrote:
>>
>> When building with EXTRA_CFLAGS_XEN_CORE="-Wtype-limits", we observe the
>> following warning:
>>
>> arch/arm/pci/pci-host-common.c: In function ‘pci_host_common_probe’:
>> arch/arm/pci/pci-host-common.c:238:26: warning: comparison is always false due to limited range of data type [-Wtype-limits]
>>   238 |     if ( bridge->segment < 0 )
>>       |                          ^
>>
>> This is due to bridge->segment being an unsigned type. Fix it by introducing a
>> new variable of signed type to use in the condition.
>>
>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
> 
> I would see this as a bug fix more than a compiler warning fix as the error code was
> ignored before that.

+1. Also there is a missing fixes tag. AFAICT this issue was introduced 
by 6ec9176d94ae.

> 
> Anyway:
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Just to clarify, you are happy with the current commit message? If so, I 
can commit it later on with the Reviewed-by + fixes tag.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 04 10:20:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 10:20:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529666.824299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puW43-0003e6-V9; Thu, 04 May 2023 10:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529666.824299; Thu, 04 May 2023 10:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puW43-0003dz-SI; Thu, 04 May 2023 10:19:55 +0000
Received: by outflank-mailman (input) for mailman id 529666;
 Thu, 04 May 2023 10:19:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puW42-0003dt-4J
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 10:19:54 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3350d690-ea65-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 12:19:50 +0200 (CEST)
Received: from mail-dm6nam12lp2168.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 May 2023 06:19:42 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS0PR03MB7204.namprd03.prod.outlook.com (2603:10b6:8:122::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 10:19:40 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 10:19:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3350d690-ea65-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683195590;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=JwqknuLbanHrBbQKXOwppLLuBzXBLyYx3ASMR+/7AdY=;
  b=G4DXBrLpLeaCtP+7Z+LEOmxZlmw4aQrKty877oac9GmQkhF41t9b1p6y
   aGuLTQUPT9Dp/nvkKSFoV7kf74R7kz68Y/DhqECNUkAGK5uh8LNDsQvYt
   lLiQI3JLALYXGoSKjsq/EsumBF7wIm2wn8M+qNP7DqA9DmtLatMSBCEjs
   E=;
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JwqknuLbanHrBbQKXOwppLLuBzXBLyYx3ASMR+/7AdY=;
 b=bAR8YnmgTWwBXB9AydGWE8UqgTBuSUeTcWIdMGOaGQ7mifkOJG3tBqGFbSGxRXF4hfxqETeQDLN9bChic1SnLaQeV7tlw5R/EMSu8i+3sW9PZ2UN23Io+fxEKYdGkgfeM4fUCFfUSyZ9nWoS26rzOFv1zHt+DAsARbJf8ot/zk4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <82bca889-e412-c71c-a680-8cc1530cc06b@citrix.com>
Date: Thu, 4 May 2023 11:19:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/ucode: Refresh raw CPU policy after microcode load
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
 <ccf68f0f-6fd7-a9a4-ef72-03999d11035a@suse.com>
 <b655305f-293e-a0dc-ab39-36b0c9787433@citrix.com>
 <2b125ebf-53ed-27a9-2f04-be2a6cac7fd0@suse.com>
In-Reply-To: <2b125ebf-53ed-27a9-2f04-be2a6cac7fd0@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0006.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ad::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DS0PR03MB7204:EE_
X-MS-Office365-Filtering-Correlation-Id: 1bc68f9b-df22-42ee-d8a2-08db4c891207
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	15z0UuX5BGWx+LZuycy4syS3WvJru07IMNY+UvO8lg0Wx/VX7eYh/b5pQ0UX/8Jgoor4E6WYNFtI/IN5TPlG2eRnSVDRhe0WS5d1kb2ag5Qc8xnevjOQPsH8ebw0dM7xHTeZSChLjli/FF7HIROInzUvKM58VEyGr6fCO1RjZzwH24VDkMWw5jsDVJnUUVeGDAdPf+t37b4/Q0Z9vWjfMqth9F3ZaADOlCQcwgGIoqjsqx4w2+VqdneWhz7En+ifKQmMWXbWqHykdR4ptI9zp/d2VB80Lst+J+YzgnDgXuStQRGaM3UEw06GJT9jcEURKeK60W2Rkw8Topi3YKLs5nPqcSsYRyonZ1w2ZKAosExx0W1ke7YUsfA7m4JLDsyYAcZQY3HMU1PUjPTay2wn8JbZUrZAZXvvaFLnKNXyoEjmGYY4/cTudwbqvUu7Giz0AwHgMGnTsG2XevUqd2fQC3L79a1Xdz9jtkhACYcMyVXEcmoF9T/xIWh2IqHNSfxPpGnfQ9Ko3VXVhXJ2rdkSx0e6KWB0BMsCIwbPIODsLpV1ityYNowgQAvNFOmLa/EvBigKItZLC9WSIi71zI+5NWbF4HMzH1Bxr5fkikbNhrA7kg3U+MTXMP3u2DICS6Ite2dARiuuaZdv33DJ8wSOWw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(39860400002)(136003)(376002)(396003)(346002)(451199021)(31686004)(36756003)(5660300002)(38100700002)(2906002)(8936002)(31696002)(316002)(86362001)(8676002)(66476007)(66946007)(4326008)(66556008)(41300700001)(6916009)(82960400001)(83380400001)(186003)(6512007)(6506007)(26005)(53546011)(6486002)(478600001)(2616005)(6666004)(54906003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Z0lxU0NzR2dhakU0bVFGZmpWNTVHbzFOK09hc0xQRnFJdDc5TFQ5NFVjVHA2?=
 =?utf-8?B?NTVoaEswcmxGWTR6bTR5c2RhSm9zTGE3aU1sVEJrRmdiLyt2a0k0WGdyQWNS?=
 =?utf-8?B?M1YvbGw2cG1DZWMvZW0yaVU1RDhPQlFkQS94V0x0VFFHUXl6VlErZ04yK2wx?=
 =?utf-8?B?dnZuekhmRHl5OTVvWDV3T0F1WHRkeVJTa3pKM1Q0SDFlUUtiTXFSOHo0b3hF?=
 =?utf-8?B?b0NLbnFPUVVGWEJIRTFYcFREQ1Z2Q3dUSjRyS3JVTmhHZlQ0eEV4RUR1d3Ft?=
 =?utf-8?B?cWFxd2tubVRQOU5JYW5RVzduQnFpY2VSUkpQb1pHRFRhc2lSUWxOcm5ReHBN?=
 =?utf-8?B?NWdKeHF3Tlp1bkh4azlCNDA3Ky9aLyttaVYvV3NMSkd1WUJ2bzE2WTRsNnhB?=
 =?utf-8?B?M1lMcHE5L1p0cU9FUFlvQWw0a3hrNno4and4U1VzcGV3TmpCYjZTejVSZmlt?=
 =?utf-8?B?ZXlGQ3pTY1FzaGJDbjlRZFIvSEtxQzlYamhsbkNTUzZ5aEJ2Z3dWSXpVdlRj?=
 =?utf-8?B?eVRhNDk3VVU1aE43RW9NV1J6K2dPbUpBc3NOajdxVFp1VUZEQXltUFpJUGh3?=
 =?utf-8?B?MWJsQ0F3ZXhLZDdUTnpPUHF6TzlPQUs0YkFONDlHSWphTUFKbW5Nb1ZDTEFD?=
 =?utf-8?B?WGZkUlFsTWdsYXBnMEpaTTdKUGEwWThsWisyRlJYZTVpNDViYk8wbmFuZEhl?=
 =?utf-8?B?bjBPcmp4NVVDZDNEUFVIZTdRY3RnTklqUkRUWDFwRDgwWFdNTnFyUEE5enlH?=
 =?utf-8?B?dEN5UGQ4Ym03cDZOaTVnK2xwYlhmRThFOHdKdklUSmdVN1NlRHU2WmdzR3hL?=
 =?utf-8?B?dUNQS3BqK1Y3bHhrR0NBZUgwaDA0RFprNGdZUWVNNkZZcmhKYmJ0dHRZMnk0?=
 =?utf-8?B?YWMxbTlZTi9mZi9XTHVwekVzTXp6cS83SCtVYnhVb2ZNNlBTK0RLQmtmbzJ1?=
 =?utf-8?B?S3NwYk5HZDlDOTEySlEvd3Q2UUllSWJrMW5tSlpTSXhYb1FXY1Fvd3ZnRGs4?=
 =?utf-8?B?cHV2Y0wrYWI4ZW5UTmNMTTlmTVFscVVoTG5jY2JYRERkbWUrNFRobU1VSWN2?=
 =?utf-8?B?T0R3bHp2NXJQTU9YOWd0U05KL014VzBNQzE5aHNaWjNCb2NLWTNLZk54ZzN5?=
 =?utf-8?B?NEFsNG93eFZmSFE2RDFNSUV3Q1pWck1HYlZQMlpWdzN3ano4UW83enpsWDVu?=
 =?utf-8?B?blVYY2NqRGRUckxNTEVmbTlvYnRBQytMUmNEd2JrMmNEaDFnM01OYkZ4a3Vu?=
 =?utf-8?B?SWZ2ZllhWXNDTk00UHpRWC8yQS83TjRGN2M2WXJEYStyRm9qWXhhNjg4WHVK?=
 =?utf-8?B?TW1jc1B6ZkFnVnNPQnhQbERFM1lMRDF0YXV1Rkh0YzdEcWR3Q2l6OWtUZC9R?=
 =?utf-8?B?eU15d0ZFZ04rT3dYbExQblVSeWZ5N2s0OHY5RzJxbFdlYVptZXVWRzlxdVE3?=
 =?utf-8?B?Z3Nzdi9Xdy9ZSmZCMTJWbUFNZnFzNU4xSk9RaEdtTVIzQkwwNThmdzJBU04v?=
 =?utf-8?B?NndYc0FBdy9NRUJNb3ZGb3E0SXBKL3ZsN3lWeUhzS0YzNWFBalhtUkM0MCt5?=
 =?utf-8?B?Tk5ZSlZ5MWxYVXo2TGtVbU1nc0l2blpzVUp5dURjemVSd0dMS2RVYVRPNWRq?=
 =?utf-8?B?czhucmJScEF5bkkxUTRIS2hUT3E5TmdxNGV1Qk9JakpnaW4ranM0UHh1NUp4?=
 =?utf-8?B?K05XRmdOZGU0N1hjbWl4Z0plYUdRK1J0NzludlVOc240aFRhMm5MaGNEdlVu?=
 =?utf-8?B?cFpuMjd5T0tld01RV2dneG5SdDJsNWVvS3dGeXhQdStDSC9rNTBCa1RaSG1Z?=
 =?utf-8?B?UkVhd3huUUVGOWR3SStCenhSRE9ZcE5WWW4xUmFuVE9jSGkxMklONGVLMG5K?=
 =?utf-8?B?ZUVyYnkzVDVDN3Y3cGRqTmt5TkwzSnpXRkRiN3F1akkzNk5MWXZtZzY1bGYz?=
 =?utf-8?B?NlRWdE5teEx5WVVCWDcyb1llSHl0TUxseTlYemJKKytWdHNIaDFaRVduVldr?=
 =?utf-8?B?RlFrSVpQUzRGS2dsZURERUo2VkhOM1dzKzFJUnhjSmRBdGpJWnF6THBOQksv?=
 =?utf-8?B?K0VieFhrOVRScEg0ZUdjbE9GWHJSMENpS25GRXZ4WEl3eFpQQlBvZWgxSmVu?=
 =?utf-8?B?R0ZFNTJxdEFLN20rQnMzZ0dla2I1YTdlQnRzMGk2VVAxNXZVTC9GbmU5OS9s?=
 =?utf-8?B?cEE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	ZzBW7gUI2ksla1pMXxwucoxKXpTT7hwMPngptL2C8QqpTZZyADsxTAdlWhUQoZgEqXk1zYaw6hMfbpyUqyHO5Q+higWVZEceYNRlqZ1bjwhbY+/U+4i7Slh+9ILbGxHzFTDp5dMELzjlbf0wJtRO+jZ0N9jMej/5ZIfqTWX9mh9NDgaOzsfsauuekwagyul/w+K8jwIpzgYw/fb9013nKJfUyq1huIVAF2WMbl2cn4EUEw52FbjtCqSgNou+3D2KciiorEqJ0eQqUXGnJ0P3j3wmtWlX6Bh1OY+G0AqlJDuKfquw1X1EbsT/RAl6HfojCM5E0ci9WBPsxp42go8r1uZWv6kFew1nH5EIl8whv/h2cG66jrS9pStEFA1o74WwskKiUwfQcaeSCn5eAyth+j0MFUOE5MGNNsFj7wAh7D+eY+9Wd0qJvhU/64f0AcNNXUP+JVfSMofotaWKdzBzx1N1kWAxpOojx4fqvomCWNZxIxcje9LGSnPBTOuXdyOh9CqS7a11lUkugB+7PogLFfmcvbrdESV0g17D4nz1HlgWQ0fNnvP1bz7jXO4bAprV8uQ8ztgRR/eUPYrBpOo/RO3IWshEO9UKKzgV4hJqyaA4NPOn2mBDRtlnTsM99r8pQKZApcMvVMnSC0J0gNFM5r8xptFfQ50Uui0a828hdQF3bXFvZtZGSoJdW5Z/zL2anRElN887EdeiIa40QECMolfd2TP0NInTqoUJZ14sToqfQ+77O818GYPYcWQ3xLFQEV3ztGBHL6Tm8PSmWM7uwztlRKCH/F4gF5X3z1aqJSBXkJ6KmuidlQhmWF/I+jxt
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1bc68f9b-df22-42ee-d8a2-08db4c891207
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 10:19:40.2397
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: L7nUFkuLJE2m46bjhYR5FtZKrZBphZd4/JRx+QzduRtN4ut5dEnlL08FweL0LGjmgHxcRxu3FaEyyRi5cCfjlcS3rHdhorqdylBl3tu3CxM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR03MB7204

On 04/05/2023 9:17 am, Jan Beulich wrote:
> On 04.05.2023 10:08, Andrew Cooper wrote:
>> On 04/05/2023 8:08 am, Jan Beulich wrote:
>>> On 03.05.2023 20:58, Andrew Cooper wrote:
>>>> Loading microcode can cause new features to appear.
>>> Or disappear (LWP)? While I don't think we want to panic() in this
>>> case (we do on the S3 resume path when recheck_cpu_features() fails
>>> on the BSP), it would seem to me that we want to go a step further
>>> than you do and at least warn when a feature went amiss. Or is that
>>> too much of a scope-creeping request right here?
>> You're correct that I ought to discuss the disappear case.  But like
>> livepatching, it's firmly in the realm of "the end user takes
>> responsibility for trying this in their test system before running it in
>> production".
> Okay, with the case at least suitably mentioned
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

>> For LWP specifically, we ought to explicitly permit its disappearance in
>> recheck_cpu_features(), because this specific example is known to exist,
>> and known safe as Xen never used or virtualised LWP functionality. 
> Right, but iirc we did expose it to guests earlier on. And I've used it as
> a known example only anyway. Who knows what vendors decide to make disappear
> the next time round ...

It's true that we used to expose the CPUID bit to guests, but IIRC we
never virtualised it correctly, which was my justification for hiding
the feature bit when I was doing the original work to rationalise what
guests saw.

Removing LWP was an extraordinary situation, and AMD didn't do it lightly.

What they did was sacrifice a fairly expensive LWP erratum workaround to
free up ucode space for IBPB instead.  They hid the CPUID bit (and
specifically by modifying the CPUID mask MSR only) because there were no
known production users of LWP, and, as a consequence of the patch, an
erratum got unfixed.

On real AMD systems which used to enumerate LWP, and subsequently ceased
to, it was only "normal virt" levels of feature hiding.  The LWP CPUID
leaf still has real data in it, and you can still use %xcr0.lwp, and the
LWP MSRs still function.

Software which was genuinely using LWP (rather than tickling the
erratum case) and which late-loaded this patch wouldn't actually have
had anything malfunction underfoot.
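
To make the "permit its disappearance in recheck_cpu_features()" idea
concrete, here is a hypothetical C sketch (not the actual Xen
implementation; the names and single-feature-word handling are
simplified for illustration) of tolerating a known-safe disappearing
feature while still flagging anything else that goes missing:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* CPUID leaf 0x80000001, %ecx bit 15 enumerates LWP on AMD. */
#define X86_FEATURE_LWP (1u << 15)

/* Features whose disappearance is known to be safe to ignore. */
static const uint32_t safe_to_lose = X86_FEATURE_LWP;

/*
 * Compare the features seen at boot against the current enumeration,
 * masking out bits whose loss is explicitly permitted.  Returns 0 if
 * nothing unexpected went missing, -1 otherwise.
 */
static int recheck_features(uint32_t boot, uint32_t cur)
{
    uint32_t lost = boot & ~cur & ~safe_to_lose;

    if ( lost )
    {
        fprintf(stderr, "CPU features %#x went missing\n", lost);
        return -1;  /* the real code would warn or panic here */
    }

    return 0;
}
```

Losing LWP alone passes quietly; losing any other bit still trips the
check.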

>> Crashing on S3
> I may guess you meant to continue "... is bad anyway"?

Oops.  I was clearly in a rush for my morning meeting.  What I meant was
"we probably shouldn't crash on S3 resume in this known-ok case".

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 04 10:26:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 10:26:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529670.824309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puWAM-00058r-Kd; Thu, 04 May 2023 10:26:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529670.824309; Thu, 04 May 2023 10:26:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puWAM-00058k-I4; Thu, 04 May 2023 10:26:26 +0000
Received: by outflank-mailman (input) for mailman id 529670;
 Thu, 04 May 2023 10:26:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puWAL-00058e-JM
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 10:26:25 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1d772e04-ea66-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 12:26:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d772e04-ea66-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683195983;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=6RwmabpMbB9/J7uHjepa/CNpM2RIXnmAeogNAZikKco=;
  b=TuSfCwHATmp8MXXwZ5jLwzFeWl38witb2GFd9ZojyVcHgeErjcxJOybk
   GvWL8aq94kiGTt6ZZoqnGEYe/ZdYdMJVMqW9+GMdDLuoc1uSzjUsxXx3v
   MNEsbKjHgkDbONljid+KVRmTznSbY/bOhXKA7zNw2Sf5FaVyK3hoMIJ+j
   0=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106600554
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:9DOO6anr1QDCcLvkcDm14Xno5gzwJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIdCGDVPPqOZjbxLowlOd+3phgEvZSDxtcxHVQ//CpnESMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aWaVA8w5ARkPqgW5AOGzhH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 eIkcQhTTj/cvvOv6o2FULhwo8clJsa+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglHWdTFCpU3Tjq0w+2XJlyR60aT3McqTcduPLSlQth/A+
 DyepzmkXnn2MvSvxRu5+GC2iNPG3j7ZWpxDPZKq+tV11Qj7Kms7V0RNCArTTeOColG6c8JSL
 QoT4CVGhbg/8gmnQ8fwWzW8oWWYpVgMVtxICeo45QqRjK3O7G6xBGIJUzpAY9wOr9ItSHoh0
 Vrhoj/yLWUx6vvPEyvbr+rK62roYkD5MFPuewcacVI9vfnM/7gilzjwcMwyDIu2iNf6TGSYL
 y+xkMQuu1kCpZdVh/7jpAqX3G3ESovhFVBsuFiONo6xxkYgPdP+OdT1gbTOxawYRLt1WGVtq
 5TtdyK2yOkVRa+AmyWWKAnmNOH4vq3VWNEwbLMGInXAy9hO0yT5FWyoyGsiTHqFy+5dEdMTX
 GfduBlK+LhYN2awYKl8buqZUpp6lvG8TY29Dq+FM7Kih6RMmPKvpnkyNSZ8IUi0+KTTrU3PE
 cjCKpv9ZZrrIa9m0CC3V48g7FPf/QhnnTm7bcmin3yaPU+2OCb9pUEtbAHfMYjULcqs/G3oz
 jqoH5HUk04DDbegM3m/HEx6BQliEEXXzKve86R/HtNv6CI4cI39I5c9GY8cRrE=
IronPort-HdrOrdr: A9a23:EIk5NKEFVmwxkz/BpLqEPceALOsnbusQ8zAXPiFKOHlom6mj/P
 xG88526faZslkssQgb6K690cq7MBHhHPxOgbX5Zo3SODUO0VHAROsO0WKF+VPd8kbFh41gPM
 lbEpSWAueAamSTqq7BkXDIa6dasaO62ZHtpsPXz3JgVmhRGt1dBn9Ce3um+pMffnghOXIAfK
 DsmfavDADQCEgqUg==
X-Talos-CUID: =?us-ascii?q?9a23=3AAJbbVWsZBulxo2xCSbnbbm+D6Is7TiTlklf/fHa?=
 =?us-ascii?q?pEExWdqfEVXqT/qlrxp8=3D?=
X-Talos-MUID: 9a23:outa3ATo0obGXgzrRXTciBBePtVy7ZinS10vk4tXkuaWOQdvbmI=
X-IronPort-AV: E=Sophos;i="5.99,249,1677560400"; 
   d="scan'208";a="106600554"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2] x86/ucode: Refresh raw CPU policy after microcode load
Date: Thu, 4 May 2023 11:26:07 +0100
Message-ID: <20230504102607.3078223-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Loading microcode can cause new features to appear.  This has happened
routinely since Spectre/Meltdown, and even the presence of new status bits can
sometimes mean the administrator has no further actions to perform.

Conversely, loading microcode can occasionally cause features to disappear.
As with livepatching, it is very much the administrator's responsibility to
confirm that a late microcode load is safe on the intended system before
rolling it out in production.

Refresh the raw CPU policy after late microcode load appears to have done
something, so xen-cpuid can reflect the updated state of the system.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

This is also the first step of being able to livepatch support for new
functionality in microcode.

v2:
 * Discuss disappearing features in the commit message
---
 xen/arch/x86/cpu-policy.c             | 6 +++---
 xen/arch/x86/cpu/microcode/core.c     | 4 ++++
 xen/arch/x86/include/asm/cpu-policy.h | 6 ++++++
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index a58bf6cad54e..ef6a2d0d180a 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -15,7 +15,7 @@
 #include <asm/setup.h>
 #include <asm/xstate.h>
 
-struct cpu_policy __ro_after_init     raw_cpu_policy;
+struct cpu_policy __read_mostly       raw_cpu_policy;
 struct cpu_policy __ro_after_init    host_cpu_policy;
 #ifdef CONFIG_PV
 struct cpu_policy __ro_after_init  pv_max_cpu_policy;
@@ -343,7 +343,7 @@ static void recalculate_misc(struct cpu_policy *p)
     }
 }
 
-static void __init calculate_raw_policy(void)
+void calculate_raw_cpu_policy(void)
 {
     struct cpu_policy *p = &raw_cpu_policy;
 
@@ -655,7 +655,7 @@ static void __init calculate_hvm_def_policy(void)
 
 void __init init_guest_cpu_policies(void)
 {
-    calculate_raw_policy();
+    calculate_raw_cpu_policy();
     calculate_host_policy();
 
     if ( IS_ENABLED(CONFIG_PV) )
diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c
index 61cd36d601d6..cd456c476fbf 100644
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -34,6 +34,7 @@
 #include <xen/watchdog.h>
 
 #include <asm/apic.h>
+#include <asm/cpu-policy.h>
 #include <asm/delay.h>
 #include <asm/nmi.h>
 #include <asm/processor.h>
@@ -677,6 +678,9 @@ static long cf_check microcode_update_helper(void *data)
         spin_lock(&microcode_mutex);
         microcode_update_cache(patch);
         spin_unlock(&microcode_mutex);
+
+        /* Refresh the raw CPU policy, in case the features have changed. */
+        calculate_raw_cpu_policy();
     }
     else
         microcode_free_patch(patch);
diff --git a/xen/arch/x86/include/asm/cpu-policy.h b/xen/arch/x86/include/asm/cpu-policy.h
index b361537a602b..99d5a8e67eeb 100644
--- a/xen/arch/x86/include/asm/cpu-policy.h
+++ b/xen/arch/x86/include/asm/cpu-policy.h
@@ -24,4 +24,10 @@ void init_dom0_cpuid_policy(struct domain *d);
 /* Clamp the CPUID policy to reality. */
 void recalculate_cpuid_policy(struct domain *d);
 
+/*
+ * Collect the raw CPUID and MSR values.  Called during boot, and after late
+ * microcode loading.
+ */
+void calculate_raw_cpu_policy(void);
+
 #endif /* X86_CPU_POLICY_H */
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 04 10:32:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 10:32:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529673.824320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puWFo-0006d8-8I; Thu, 04 May 2023 10:32:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529673.824320; Thu, 04 May 2023 10:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puWFo-0006d1-4s; Thu, 04 May 2023 10:32:04 +0000
Received: by outflank-mailman (input) for mailman id 529673;
 Thu, 04 May 2023 10:32:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zFwM=AZ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puWFm-0006cv-RU
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 10:32:02 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7612373-ea66-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 12:32:01 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 7EE3A209F6;
 Thu,  4 May 2023 10:32:00 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 7C1EE133F7;
 Thu,  4 May 2023 10:31:59 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id sf/UHJ+JU2ShewAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 04 May 2023 10:31:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7612373-ea66-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683196320; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SysVoVEKW+NX4qyVGu/niFlwGn5aiW8M5/2sd3Frq6w=;
	b=juSzgQlFlUDuBImH9ZJN/4cvnNQiKW1CsIfyP7OnP0w5p0L5JLGR4eMvuerD53YtoDxppF
	iCPh2YubO9R5cvjksS70oKLJ4ICxj2GC8yaeouEoRVXZdyQ6AV7PLoLgCnqQ5JJ9R9/XXD
	TURNTorKdCr0S6/Vh9F0rphyrPDr8dI=
Message-ID: <edd4b974-08de-0769-0dba-f945ed06f222@suse.com>
Date: Thu, 4 May 2023 12:31:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH RFC 16/43] x86-64: Use per-cpu stack canary if supported
 by compiler
Content-Language: en-US
To: Hou Wenlong <houwenlong.hwl@antgroup.com>, linux-kernel@vger.kernel.org
Cc: Thomas Garnier <thgarnie@chromium.org>,
 Lai Jiangshan <jiangshan.ljs@antgroup.com>, Kees Cook
 <keescook@chromium.org>, Brian Gerst <brgerst@gmail.com>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 Andy Lutomirski <luto@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Darren Hart <dvhart@infradead.org>, Andy Shevchenko <andy@infradead.org>,
 Nathan Chancellor <nathan@kernel.org>,
 Nick Desaulniers <ndesaulniers@google.com>, Tom Rix <trix@redhat.com>,
 Peter Zijlstra <peterz@infradead.org>, "Mike Rapoport (IBM)"
 <rppt@kernel.org>, Ashok Raj <ashok.raj@intel.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Guo Ren <guoren@kernel.org>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>,
 Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
 Kim Phillips <kim.phillips@amd.com>, David Woodhouse <dwmw@amazon.co.uk>,
 Josh Poimboeuf <jpoimboe@kernel.org>, xen-devel@lists.xenproject.org,
 platform-driver-x86@vger.kernel.org, llvm@lists.linux.dev
References: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
 <7cee0c83225ffd8cf8fd0065bea9348f6db3b12a.1682673543.git.houwenlong.hwl@antgroup.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <7cee0c83225ffd8cf8fd0065bea9348f6db3b12a.1682673543.git.houwenlong.hwl@antgroup.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------dy9uFmhcjqIP2UV6JYsq3Ms5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------dy9uFmhcjqIP2UV6JYsq3Ms5
Content-Type: multipart/mixed; boundary="------------ov0MDEHsfp47A5JSPZkJ0II7";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Hou Wenlong <houwenlong.hwl@antgroup.com>, linux-kernel@vger.kernel.org
Cc: Thomas Garnier <thgarnie@chromium.org>,
 Lai Jiangshan <jiangshan.ljs@antgroup.com>, Kees Cook
 <keescook@chromium.org>, Brian Gerst <brgerst@gmail.com>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 Andy Lutomirski <luto@kernel.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Darren Hart <dvhart@infradead.org>, Andy Shevchenko <andy@infradead.org>,
 Nathan Chancellor <nathan@kernel.org>,
 Nick Desaulniers <ndesaulniers@google.com>, Tom Rix <trix@redhat.com>,
 Peter Zijlstra <peterz@infradead.org>, "Mike Rapoport (IBM)"
 <rppt@kernel.org>, Ashok Raj <ashok.raj@intel.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Catalin Marinas <catalin.marinas@arm.com>, Guo Ren <guoren@kernel.org>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>,
 Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
 Kim Phillips <kim.phillips@amd.com>, David Woodhouse <dwmw@amazon.co.uk>,
 Josh Poimboeuf <jpoimboe@kernel.org>, xen-devel@lists.xenproject.org,
 platform-driver-x86@vger.kernel.org, llvm@lists.linux.dev
Message-ID: <edd4b974-08de-0769-0dba-f945ed06f222@suse.com>
Subject: Re: [PATCH RFC 16/43] x86-64: Use per-cpu stack canary if supported
 by compiler
References: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
 <7cee0c83225ffd8cf8fd0065bea9348f6db3b12a.1682673543.git.houwenlong.hwl@antgroup.com>
In-Reply-To: <7cee0c83225ffd8cf8fd0065bea9348f6db3b12a.1682673543.git.houwenlong.hwl@antgroup.com>

--------------ov0MDEHsfp47A5JSPZkJ0II7
Content-Type: multipart/mixed; boundary="------------AE0inQaMWfJGKdJPNCVK0Np8"

--------------AE0inQaMWfJGKdJPNCVK0Np8
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 28.04.23 11:50, Hou Wenlong wrote:
> From: Brian Gerst <brgerst@gmail.com>
> 
> From: Brian Gerst <brgerst@gmail.com>
> 
> If the compiler supports it, use a standard per-cpu variable for the
> stack protector instead of the old fixed location.  Keep the fixed
> location code for compatibility with older compilers.
> 
> [Hou Wenlong: Disable it on Clang, adapt new code change and adapt
> missing GS set up path in pvh_start_xen()]
> 
> Signed-off-by: Brian Gerst <brgerst@gmail.com>
> Co-developed-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
> Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
> Cc: Thomas Garnier <thgarnie@chromium.org>
> Cc: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> Cc: Kees Cook <keescook@chromium.org>
> ---
>   arch/x86/Kconfig                      | 12 ++++++++++++
>   arch/x86/Makefile                     | 21 ++++++++++++++-------
>   arch/x86/entry/entry_64.S             |  6 +++++-
>   arch/x86/include/asm/processor.h      | 17 ++++++++++++-----
>   arch/x86/include/asm/stackprotector.h | 16 +++++++---------
>   arch/x86/kernel/asm-offsets_64.c      |  2 +-
>   arch/x86/kernel/cpu/common.c          | 15 +++++++--------
>   arch/x86/kernel/head_64.S             | 16 ++++++++++------
>   arch/x86/kernel/vmlinux.lds.S         |  4 +++-
>   arch/x86/platform/pvh/head.S          |  8 ++++++++
>   arch/x86/xen/xen-head.S               | 14 +++++++++-----
>   11 files changed, 88 insertions(+), 43 deletions(-)
> 

...

> diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
> index 643d02900fbb..09eaf59e8066 100644
> --- a/arch/x86/xen/xen-head.S
> +++ b/arch/x86/xen/xen-head.S
> @@ -51,15 +51,19 @@ SYM_CODE_START(startup_xen)
>   
>   	leaq	(__end_init_task - PTREGS_SIZE)(%rip), %rsp
>   
> -	/* Set up %gs.
> -	 *
> -	 * The base of %gs always points to fixed_percpu_data.  If the
> -	 * stack protector canary is enabled, it is located at %gs:40.
> +	/*
> +	 * Set up GS base.
>   	 * Note that, on SMP, the boot cpu uses init data section until
>   	 * the per cpu areas are set up.
>   	 */
>   	movl	$MSR_GS_BASE,%ecx
> -	movq	$INIT_PER_CPU_VAR(fixed_percpu_data),%rax
> +#if defined(CONFIG_STACKPROTECTOR_FIXED)
> +	leaq	INIT_PER_CPU_VAR(fixed_percpu_data)(%rip), %rdx
> +#elif defined(CONFIG_SMP)
> +	movabs	$__per_cpu_load, %rdx

Shouldn't above 2 targets be %rax?

> +#else
> +	xorl	%eax, %eax
> +#endif
>   	cdq
>   	wrmsr
>   


Juergen
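
For context on the "Shouldn't above 2 targets be %rax?" question:
`wrmsr` consumes its 64-bit payload from %edx:%eax (low half in %eax,
high half in %edx), and the subsequent `cdq` sign-extends %eax into
%edx, so a base address left in %rdx would simply be clobbered.  A
minimal C sketch of the split the assembly has to achieve (hypothetical
helper, for illustration only):

```c
#include <assert.h>
#include <stdint.h>

/*
 * wrmsr takes its 64-bit MSR value split across %edx:%eax.  This
 * helper mirrors the split the assembly must perform before issuing
 * the instruction.
 */
static void split_for_wrmsr(uint64_t val, uint32_t *eax, uint32_t *edx)
{
    *eax = (uint32_t)val;          /* low 32 bits go in %eax */
    *edx = (uint32_t)(val >> 32);  /* high 32 bits go in %edx */
}
```

The `cdq` trick in the original code works only because kernel virtual
addresses are canonical (sign-extended), so the high half can be
recovered from bit 31 of the value already in %eax.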
--------------AE0inQaMWfJGKdJPNCVK0Np8
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------AE0inQaMWfJGKdJPNCVK0Np8--

--------------ov0MDEHsfp47A5JSPZkJ0II7--

--------------dy9uFmhcjqIP2UV6JYsq3Ms5
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRTiZ8FAwAAAAAACgkQsN6d1ii/Ey/C
/Qf7BVOcd5ra8zfx6EPiAv74I4RpWlIuL3rn0B5v6bOo0FFa4n4zj9VOvcYOUGlraDWF3O8GumNw
Cb2IMVqEVZErE9qRr7deURIsoKwrXg8XsZZjyjfJHu8zKP0vbrA0de6Vd5j4KtUt9Zbex5+iSI6o
znSFEHTx2lXV8YGE6XnBxHulxsAoSOpevs72mkXEZvqkplnfYYm1wbXEdHz1Kv7YDAzhn8sDHmIc
kXp/CLT/2dmoEhI/S5jdbBan7ywy6EL+49FsfPxbNoHu4grjSjXTW000ltfEWEQL8ZA3GPAsM3PX
xoMnuhBcTopWMqDtZyEsrsTuG79jnYi0WXwT9+wWmQ==
=o9bz
-----END PGP SIGNATURE-----

--------------dy9uFmhcjqIP2UV6JYsq3Ms5--


From xen-devel-bounces@lists.xenproject.org Thu May 04 10:32:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 10:32:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529678.824329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puWGN-0007Bb-Lx; Thu, 04 May 2023 10:32:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529678.824329; Thu, 04 May 2023 10:32:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puWGN-0007BU-JK; Thu, 04 May 2023 10:32:39 +0000
Received: by outflank-mailman (input) for mailman id 529678;
 Thu, 04 May 2023 10:32:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VaLI=AZ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1puWGM-0006cv-Ow
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 10:32:38 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on060f.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::60f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fcccc8a3-ea66-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 12:32:37 +0200 (CEST)
Received: from AM6P193CA0141.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:85::46)
 by DB9PR08MB8459.eurprd08.prod.outlook.com (2603:10a6:10:3d5::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 10:32:34 +0000
Received: from AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:85:cafe::c9) by AM6P193CA0141.outlook.office365.com
 (2603:10a6:209:85::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 10:32:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT037.mail.protection.outlook.com (100.127.140.225) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.25 via Frontend Transport; Thu, 4 May 2023 10:32:34 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Thu, 04 May 2023 10:32:34 +0000
Received: from 271d7a319c8c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9592A008-E23D-425A-A8BD-D97393C9E3CB.1; 
 Thu, 04 May 2023 10:32:27 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 271d7a319c8c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 10:32:27 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS8PR08MB9792.eurprd08.prod.outlook.com (2603:10a6:20b:613::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Thu, 4 May
 2023 10:32:23 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::9283:e527:bed8:ab23]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::9283:e527:bed8:ab23%7]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 10:32:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcccc8a3-ea66-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gd/NjnaXxNh81P765sCdBk6OvwGARw4SD4t12/DyxQw=;
 b=ZyvS3wWe19J70ts/P63elgFhe3LB4hHIpsRk5ezjMvRdQ/NoYRD6rXDyiZsSWl55WO0J5CibrYej3B1M02JIv6wTNh8MbTViqFVhl1D7u8tV3Qoxoc86tLpjtyn+cqG02xpHPAh22el68Bq9EziRRFBs94wY1s7gq1VH+xeEzjU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 31291b0853fa72b4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TIzjCFJz3L37hTwIw1CdETIrhR6uSxgwwQG/66xSrWotr5nHwlqLJqjrXavo+1uUsBDyK1IxbMmZMurU6algKceJhGXP9GPyXVPkZ+HhDDtzM/GPY4XiVGoDdoaZFZvCBSi71CUtve5nP+2KN0LuQSn2nT4R3nDAAMiLHgeYg9HosI2ZZKBJinlPzuVnkwJi89OBRA2KOp/XSHuszy19Xnqj0uVl4a666h7H5fm6c1EkKYwXH5hB2V+BzY+9Mq6anL3MaE6adXgOST1bWTEbhkwVIIVJCaCgOrm1Jl7AlV6uKTa11gUx+Dhhph9NP05afkXPAoEklbmF2rWB62jn3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Gd/NjnaXxNh81P765sCdBk6OvwGARw4SD4t12/DyxQw=;
 b=HZc/CCtlnYBzuwiGvKLPBhHpmpxxxoAyOeAemgnrNFNnE3PDtZZPL//M99IBW1TiOKBftngkdgP/QHhz5WmOxp2O3n3O4ohUKevfMZw2i35yF0uOWC3ECS90we2PliIT+T8PC+YKV/megHBnYQZYoK4X3DfKbg7d123odC660uD4mqzutF56x2LCQZyWir8XnwZFfqkbhhoLrvzC0Au5i/e/ILKktIToVEY5ONMx6Tndy5O8NKZ0PFxv9I8JjZ2BsXU6LE8ms0DByKBEYrbTMImDP1Lno281NmsBPANYS4wP71iH3mR4zxmem72kWIgpxFYIEsXRq/OHnRrFcvWDJg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gd/NjnaXxNh81P765sCdBk6OvwGARw4SD4t12/DyxQw=;
 b=ZyvS3wWe19J70ts/P63elgFhe3LB4hHIpsRk5ezjMvRdQ/NoYRD6rXDyiZsSWl55WO0J5CibrYej3B1M02JIv6wTNh8MbTViqFVhl1D7u8tV3Qoxoc86tLpjtyn+cqG02xpHPAh22el68Bq9EziRRFBs94wY1s7gq1VH+xeEzjU=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: pci: fix -Wtype-limits warning in
 pci-host-common.c
Thread-Topic: [PATCH] xen/arm: pci: fix -Wtype-limits warning in
 pci-host-common.c
Thread-Index: AQHZffQTsmRRBKh80kOBQgH8fU6H2a9J0ZUAgAAT8ICAAAXxgA==
Date: Thu, 4 May 2023 10:32:21 +0000
Message-ID: <837CD804-85F2-4D3C-87FD-3F65F22A8432@arm.com>
References: <20230503191820.78322-1-stewart.hildebrand@amd.com>
 <5D298044-314C-473F-97AB-420DA3DA44A2@arm.com>
 <4ca00734-8e1e-fe5b-b2a0-6f08f3835433@xen.org>
In-Reply-To: <4ca00734-8e1e-fe5b-b2a0-6f08f3835433@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AS8PR08MB9792:EE_|AM7EUR03FT037:EE_|DB9PR08MB8459:EE_
X-MS-Office365-Filtering-Correlation-Id: 04514016-2f31-4239-2b36-08db4c8adfd0
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 qrUfNZv1o8scGQhui2K+/8TR4e44HRQJ7inq0tysho4x0zbgVObHCZSDoUf0YnZlEHy0qruu3ssnHAt/EjZvLBLokyXLfc7JILi9UpyqFg3GzyHiHThCth0zmUv6IiyAbyN3HxEkG9zFqtC3uKr89YRpoD5tCWz8PAFVKNCYFt9Hj7Qr4ch7lJNu09409Paw6FjXyr7YR8qYMLpzMfUxV8tIthS21Fn2uhgqVNDtFViMx+m/5Xjf6NP2vaeMPxyOb+JOzB36IhAJhMbXBC7sq8gDEBSpOdv8Hs0tnQKrt+ChEbh8gj+uGuBleMv4rcj5r1sL/RjlSRmO73e5usF2f43SChx/ga442dXYUKmEKBMYDmUGl5zq3OJNUziFl3JqR1KgvEbaypmRtpDCU6vddNht2PleKHC782q7SFy4k+pMGKpHREXP+usvSEAhO3SA30OjqL6t8jZsfLRKtGDL7YTbYwdFiTGsaxvMXDiJgjUB/TnhOBYLbTiuhnqeVCSjDE/o0nwAdLvRGBMIVb5Bl7u6L6A2i9oii9Ik9aAJbN8PgXefpkX198MDPrxFq4QRDr4hnty4YyAw/7kcpMqTkXkrU7PjSa1rZiVO5jDs5XIYqLlYAkNKb2UU2hlE+sVWe6DRfa4XYTICLhTv/bEcYA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(366004)(376002)(39860400002)(136003)(451199021)(54906003)(186003)(478600001)(6506007)(2616005)(6512007)(53546011)(71200400001)(6486002)(4326008)(6916009)(66446008)(41300700001)(66556008)(76116006)(64756008)(66476007)(91956017)(66946007)(316002)(83380400001)(5660300002)(8676002)(8936002)(122000001)(2906002)(38100700002)(38070700005)(36756003)(33656002)(86362001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <B2F448B9882E0A4E861C3689BE4D80D1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9792
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5872a7cb-cf82-48f9-ab99-08db4c8ad7e8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9KP2eHYCyJTyPnBv5KPTeJ3tm9/wOmsgx50RZMVZ+1dPVltKWdP56KhlxPwWfhDHuk8Q761WrYH0EFC0gAnO4gJMkWvzWOEn1MN7JJW2CQOYMsKr2PgOQ9Rw+g/AcIU+JGWmyS+MUNUlCJbKf6uUvDhYpvsSQe26m7JqJqZ0f1mxuQO+6P1BaDRjQ8iBkFqkAf522NWXfcPa7KeFNKiGe05jib8Nrcqt4BSNzNkc2qqNezk4YwI3XjBL5CajewrPagbB/qCfnsGPL1la+s9wQgCzeqgLklcxKhh2XjTMMW+qi3ehbpGFAGSHVtsYvGAaDEmfK3/JtzqFjQ+vAP+XyRcu1JVaS527sHXtkwikA6na026VZs/14W99/ATX6dewdDsTdre92PmP3SXTpBex7taS6SGn77JE0kbA2+Rfjz9mjGfb0LjWRnN1GxJof5xgstk/EDTdHiTVgQJNgGKJFyleu7Zirbg+Nze/jvwAhomCyz/1UAy5lR6ZKRGrwByQ3PfX0CPCgAYx701IBWlspA4KwnP8geDH7XSv51g7rTW7TCoiaS16A6PBZ+vEyTKQn2k1COaaqtQOCNUcdUKZ0wt9jmAvKlcAaxVJ0/utxLTqGkM2w0mDX3RMMUouD0Z5pqwQ27I47zyClvBSXgKiL01i8Pm89F08DIP9BZiF56kf6NwUwQJuTMfKEWmDoUWUZWX1J8g0Omw7m4A2sresPtmQx/XONBXUvohVFoQdFeo=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(346002)(39860400002)(136003)(451199021)(46966006)(36840700001)(40470700004)(2906002)(316002)(4326008)(8936002)(8676002)(47076005)(82310400005)(2616005)(336012)(86362001)(5660300002)(6862004)(54906003)(40460700003)(41300700001)(36756003)(70206006)(70586007)(478600001)(33656002)(6486002)(40480700001)(53546011)(6512007)(6506007)(356005)(81166007)(36860700001)(26005)(186003)(107886003)(82740400003)(34020700004)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 10:32:34.6591
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 04514016-2f31-4239-2b36-08db4c8adfd0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB8459

Hi,

> On 4 May 2023, at 12:10, Julien Grall <julien@xen.org> wrote:
> 
> Hi,
> 
> On 04/05/2023 09:59, Bertrand Marquis wrote:
>> Hi Stewart,
>>> On 3 May 2023, at 21:18, Stewart Hildebrand <Stewart.Hildebrand@amd.com> wrote:
>>> 
>>> When building with EXTRA_CFLAGS_XEN_CORE="-Wtype-limits", we observe the
>>> following warning:
>>> 
>>> arch/arm/pci/pci-host-common.c: In function ‘pci_host_common_probe’:
>>> arch/arm/pci/pci-host-common.c:238:26: warning: comparison is always false due to limited range of data type [-Wtype-limits]
>>>  238 |     if ( bridge->segment < 0 )
>>>      |                          ^
>>> 
>>> This is due to bridge->segment being an unsigned type. Fix it by introducing a
>>> new variable of signed type to use in the condition.
>>> 
>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
>> I would see this as a bug fix more than a compiler warning fix as the error code was
>> ignored before that.
> 
> +1. Also there is a missing fixes tag. AFAICT this issue was introduced by 6ec9176d94ae.
> 
>> Anyway:
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Just to clarify, you are happy with the current commit message? If so, I can commit it later on with the Reviewed-by + fixes tag.

Would be nice to add the proper fixes flag if you can do it on commit :-)

Cheers
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu May 04 10:48:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 10:48:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529682.824340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puWVH-0000Ue-10; Thu, 04 May 2023 10:48:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529682.824340; Thu, 04 May 2023 10:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puWVG-0000UX-Sy; Thu, 04 May 2023 10:48:02 +0000
Received: by outflank-mailman (input) for mailman id 529682;
 Thu, 04 May 2023 10:48:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x2gh=AZ=citrix.com=prvs=481e65374=roger.pau@srs-se1.protection.inumbo.net>)
 id 1puWVF-0000UQ-Aj
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 10:48:01 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 218ea07f-ea69-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 12:47:58 +0200 (CEST)
Received: from mail-dm6nam12lp2177.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 May 2023 06:47:56 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6192.namprd03.prod.outlook.com (2603:10b6:5:39c::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 10:47:52 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%3]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 10:47:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 218ea07f-ea69-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683197278;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=P2pHireLaG86uQAbiKAmSpJCeImRzsJeRL5SrU7VkXo=;
  b=YjC3AvPc9ea1/b/1RFlXEA/etTN77BnKqfSye7OSQmh1LugZVItk4Qf+
   g34pjIrTtTKOJqrsbrYn0kfEYg2O2wtLt+fFVDgoI2Af0OZHa2rf57uGq
   ezY9sICuVOfRJkv7WP0eQWKUmabjXoEMrrnOAyjJ2KwaxGrCwkpz427Fj
   A=;
X-IronPort-RemoteIP: 104.47.59.177
X-IronPort-MID: 107169153
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:kRsbQquMo9JJrpona8siKVQos+fnVHBfMUV32f8akzHdYApBsoF/q
 tZmKW6EafqPYzehfNpybIyw/U8A6pTQzdFlGQU5pH9hRCtD+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg3HVQ+IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj6Vv0gnRkPaoQ5AKGyyFMZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwExxcTBusgNqM5LuDd7R0nvk/HvTmBdZK0p1g5Wmx4fcOZ7nmGv+PyfoGmTA6i4ZJAOrUY
 NcfZXx3dhPcbhZTO1ARTpUjgOOvgXq5eDpdwL6XjfNvvy6Pk0osgf60b4W9lt+iHK25mm6Co
 W3L5SLhCwwyP92D0zuVtHmrg4cjmAuiAN1LSuzhq6UCbFu7ySszDExPBAaCo/ylu0W6QOAOD
 UEN5X97xUQ13AnxJjXnZDWorXjBshMCVt54F+wh9BrL2qfS+xyeBGUPUnhGctNOnM08SCEu1
 1SJt8j0HjEpu7qQIVqC8p+EoDX0PjIaRVLufgcBRAoBptXm/oc6i0uVSs45SfHqyNroBTv33
 jaG6jAkgKkehtIK0KP9+k3bhzWrpd7CSQtdChjrY19JJzhRPOaND7FEI3CChRqcBO51lmW8g
 UU=
IronPort-HdrOrdr: A9a23:EMNG36oNWObjVr1ly/DdNQQaV5t9LNV00zEX/kB9WHVpm5Oj9/
 xGzc576farslgssSkb6K690KnpewK7yXcH2/hhAV7CZniohILMFvAA0WKM+UybJ8STzJ846U
 4kSdkANDSSNyk1sS+Z2njELz9I+rDum8rE6YiurQYJcegpUdAd0+4TMHfjLqQCfng8OXNPLu
 vl2iMonUvGRV0nKuCAQlUVVenKoNPG0Lj8ZwQdOhIh4A6SyRu19b/TCXGjr1YjegIK5Y1n3X
 nOkgT/6Knmmeq80AXg22ja6IkTsMf9y+FEGNeHhqEuW3XRY0eTFcdcso+5zXUISdKUmRIXeR
 730lAd1vFImjHsl6eO0F3QMkfboW8TAjTZuC+laDPY0L/ErPhTMbsYuWqbGiGpsHbJxrtHof
 92NznyjesKMfuF9x6NtuQhknlR5xCJSb5Lq59Ns5SZObFuNoO55LZvjn99AdMOGjn355sgF/
 QrBMbA5OxOeVffdHzBuHJzqebcFEjbMy32CnTqgPblmAR+jTR81Q8V1cYflnAP+NY0TIRF/f
 3NNuBtmKtVRsEbYKphDKNZKPHHRlDlUFbJKiafMF7nHKYINzbErIP2+qw84KWvdIYTxJU/lZ
 zdWBdTtHI0eUjpFcqStac7vyzlUSG4R3Dg28te7592tvn1Q6fqKzSKTBQ0n86ps5wkc4Tmsj
 aISeRr6tPYXBzT8NxyrnjDsrFpWA0jbPE=
X-Talos-CUID: 9a23:HSXPBmOGgQ/9Au5DYwU21kU7HfgfVnTl8SrIO3KkO1tkcejA
X-Talos-MUID: 9a23:gWY1lgaZhzLXH+BTuHjMgGA7ZZpR4aGyOUsJiqdXncSdKnkl
X-IronPort-AV: E=Sophos;i="5.99,249,1677560400"; 
   d="scan'208";a="107169153"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dyE0bEIhSSgQlHvEicsCEs/EetezhtA7zqB/+8kCAPR9bUDflY8CZzdqkKsptx7/UgI42uHPE8mgeeT3gjTVf++fBGJ/tNuFh7ooGSiwuTq2WLS4bP7n7s4lU+cLhZ/dgm0hunjyKhJZMWutH9HBMtC1wcL5Nd5CoXZfKLhfHitnXn+PVZ7XJscCMVvTp/dAFKdCw6hr+UxzMMiyUqYGxJLXz+J3Ru4KRdJEHajGOlZO5bbQ0qqM/t5rrWH1BkhF77NYNQzRKzStTKh3g+hjZpLoq/8/dxVPN4lU+2mbfoVDiNA9cAXV6aOAFLsLVQoyZTTVZfo8C59tinj/DfAD+Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SfmadrDjcYOWOSGpeJtM2ZYjcsHOQEl/UXhaGNTYSgc=;
 b=hjiMeRxuIbQwqW/RaJ7Yiqb57idMXX9wV9ncKoq16d86N6oa3vC42dNrZ+CJCIQ6kjHIEMDVYpvEJmoS1ySCuL5ryH2yMjucpC0fFzNHTZnVDoFd6VrSNRWCZq7PaHfzY/AGaNLqzzAEFbEgTvT+SHQVt5awdl5WeWZ9cMxMRvFNV+XmIzSgwzZR9Y4l2d7H/9Sch+qcZwszy5ZUZcDbf0M4ohr+OGgmiQlqDK0u3Rssnj6L+ehNxubgJisxr9qXwjNwPMAB+QQicOo10tdbowN9ZhxDTOM8EVZXW9aGvWStL2cYDQ+g8yO54LKPC1fCiLQWviVI4Au66WmC59tZ4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SfmadrDjcYOWOSGpeJtM2ZYjcsHOQEl/UXhaGNTYSgc=;
 b=BxhZ5i8vCEXKX/jypZ9U4oPKEHhltRxTUgGxJwiy14Eq5IoogJyEa2plZ0YTF8rOcYtXjvvTm+UDbthAkieZ0UIUtYNSirRJTm6oqzC0IsrOebwVnJxFl6P9M4BFuwQK2y17gysHJC1Q66hDMvosiBVDD0gnYIIsTI16TldqR5I=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 4 May 2023 12:47:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] x86/ucode: Refresh raw CPU policy after microcode load
Message-ID: <ZFONUdkG2ow9uckX@Air-de-Roger>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
 <20230504102607.3078223-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230504102607.3078223-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: LO3P265CA0028.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:387::11) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM4PR03MB6192:EE_
X-MS-Office365-Filtering-Correlation-Id: 333eac7b-4255-4466-9617-08db4c8d0286
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Lalpm72umfoLNCdnLYxChu+iOrMwi4REhaK0gnrPZ4pP6H0OvG75pLcGiBCkx/jPUkh/gHtBgiQVFbp0XOgSElgib3FPypWaSi3LOMQ9P3Kgecq9Rlo7WBkCvdYdzEgtfV1QqPOY1daTZ5p0lvKRDtRsWjrNf27ReBtnqAumDndUzsaKdeNhgfmCZ3wnROhumeAxHWMp+jsH7Y7oBVof+OKIUfE58q8XSV1nFSvtqqEmSYEzoXtOrYJS9lzlIWabFnQJNyVyxKpJ58Q343L6MoJHFP8NjYfZavPGtghYhJiTEYncJgyU8yjpAApuDg7KBmTGxJ+ruSfq3xm7ryFhCU2JNEA69EsLKyb6qzLHVS06RrJj6an9mrCR8qVibMCOCZ+Jcf7ZkYOvKTAiAKFGV2M8tepa1fBi8V37hdwe/jrjenlpdWy0FDLA0V9yWIUqQJ2pRZuIEyc/VhPfUAdymEMFRib1wVh5EE0isfkyJbjz6UUs/escDSJtSLi9t5e2ysQQNsblgJkWt0WM+IerPL3UEKksylBW6QbXKNSsmrg/8rWc61RwSlY7qSGqDPs5
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(136003)(396003)(39860400002)(376002)(346002)(366004)(451199021)(478600001)(8936002)(85182001)(316002)(6666004)(8676002)(4326008)(86362001)(41300700001)(6486002)(6636002)(33716001)(66476007)(82960400001)(5660300002)(66556008)(66946007)(6862004)(38100700002)(2906002)(83380400001)(54906003)(6512007)(9686003)(6506007)(186003)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YXZqcHBNbG9LYSs1MHJxU2VTVnhTRU1vdXo5QXFlUkVkdGtPU2RycEZwczdD?=
 =?utf-8?B?TFVWMFNQMm1ITXMzbjQrWGdnRDJkU2VoTjI0dUVVL3RiczZFWTQxUkFYMWg3?=
 =?utf-8?B?TFYySmMxcytxZ0FZQ2E1WnhvTEsrbVgyTHJDckdEa2J0SW5jamtrRTBFRk83?=
 =?utf-8?B?N2x5dDF6QWVEb1RFOTBZZmQ0NDdXRVJxTEoyOVNwUHEvdXZRUjhtV1AzWGZp?=
 =?utf-8?B?cHNUT3RNUnFhWXA2b1JMV1pJVTI5K0JpN0NpYTJQWnNmNzhzMlR1K0tYTFg4?=
 =?utf-8?B?bHkzRFZIT0lvQnFWM3RTRVd6bGNIbmY3bGpiZisvV1V0WDJMYUtxY0x6K2lr?=
 =?utf-8?B?TndUSGs2ZnpST0NpQWs1dCt6WitTZDN1NFhHcWZ0Nkwzei9OcVhMd0duZ0Ns?=
 =?utf-8?B?MWh6QVhaR2FWRDlxaTJ5WnhSVW9HSnpjaVNNM3NlSEswUnNzT3Z2b1BzSmV6?=
 =?utf-8?B?TnNHL2NDYWRzejI4MXlveHFEWDYzLzNyMWFwMklsRXdZUUEyTzNlNkNrOVpk?=
 =?utf-8?B?dFJEQ3B2dVZIL2M2U1EySnNzdkVSTjlWdTJrU3ZZamRuaDY4RkRlT21NcjlQ?=
 =?utf-8?B?am1uSFpNNmFoeVNwcTR1T2U4L2kxOGdQaDBEeWU2ZGp5Z016OG5YakRDUnU0?=
 =?utf-8?B?bXhrd3d3R3JtWjdtYkY5WFNmOUFEMnBjWlRpdXBrNlpqYmVxTzB0cjJzSXBD?=
 =?utf-8?B?TXYzY2gxOUszY093TUhvRXh5KzQ2bmVXcWIrRzlwVWNJanBxdnhFQWlRbGhD?=
 =?utf-8?B?bEZtT21kOXBlbjdDajhCenFkOE1EL2RwZGVmY0x5eHFZZ3Mzd0ZrY3VLandB?=
 =?utf-8?B?TjlHWm1SOEV6Tjlqd0FvaVFJd2lmemlOYytVZEhtcE1KRnc5YUtNM3l3dUNp?=
 =?utf-8?B?SzJmNUM5OWwycXlhUGh3YXlKTDZYZ01sa0V6ekwrWEVXai9oWDNaWHFVRFR4?=
 =?utf-8?B?dUVFZGFkaUw2SFJEYTBzQlAxNjIyTXRCazFZWnkyWUxKTU93MTJTUVVwTkRi?=
 =?utf-8?B?aGs2bE5rSmUvSnZIU0hLSGZORUhEa2F6czF5RUFUOEpOSVNkN1ZtRFF6Tk9z?=
 =?utf-8?B?SllyV1Badm5SSDJPM3B0SzZ0cHErK1NLZjY5YUduRnZLbEFyZEhiZnVGU1Z5?=
 =?utf-8?B?bmhKbzVxMmh0UE56aUZQWkg1TzZSenkva01lcGJlNmRiNU5yYTlEOGZYRExu?=
 =?utf-8?B?N212TVU5QTlRalIycWMwU1NCMkdWUjZYZUxXV0pmUFZuQ1FvZFBDOWN0YTRM?=
 =?utf-8?B?RjFweXNpUDVSZnZWTEhyS2plNE9Xc0R3cEUvNWIyTUNTZ3FGQWZ6VnJ5UFdW?=
 =?utf-8?B?dVlTaXNHRmI5Y1hFYWhWbnFUencvNW05RkU4cWdsQ0M3WGN4ajdyWS9WQVN1?=
 =?utf-8?B?KzFmTjVEK0YwWFBJQU5yaEpaMkxESWJhZnc1V0h2bXhhK1h4VUR5cXN1TnZC?=
 =?utf-8?B?MUhBakNUTkdwS2VCbTVrN1l3djRSaE5oM3EzYjB3UXhmeUZ1b29xL1c5L1hH?=
 =?utf-8?B?SHZESUZWUnBQUGh2TjlSeHJOdlZnbFhwRjRwaWNabWFxb2IyUXVwNmx5UTNl?=
 =?utf-8?B?WWdhNWluRnJlUW1vN05WU2tTVUlDSUYzM2lLN3JMdVcwNzgybUMvNmlSTVAx?=
 =?utf-8?B?RkZUUU9lVmZ3azNoQkxHeFNaNVExMjhobFpITlZSWFQ5REZGUG5KRHhyeWNj?=
 =?utf-8?B?bmRRZlp3bXFFb2Ztdm5ybzRJUFJnQyt4UU1wTzN0YXVzZ1pZOFRGZ1FqVnNV?=
 =?utf-8?B?K01uZmlqa2tScTlRQjNOT1Z2NEt4S2d4UkkzYU4ydk9VcXNEL2plSVdYMUli?=
 =?utf-8?B?K0VENDZJMERqMGRVU2Mvcm5zOHBXK3RNTWlteUE4R3RGU0FRcW0xYUdabWJY?=
 =?utf-8?B?KzVVZVFXK0ZJM1ltR0xxdjR5dGpIV09Bb2ZIbDJJZTJkNkRjeTYyTUtyUzRB?=
 =?utf-8?B?U2FoQkxTcEsrU2FvNzFPRVE2bmpiYzErbiswcVEyT0VCMjM1WTdZdmd5TG1h?=
 =?utf-8?B?T1EzYjFTRDdHWTVVYlc4bFRIU29MQzJWdmJIWmVyOTJMTHNIOWpTY09FRk1r?=
 =?utf-8?B?OUl0dmFVY2FpWWt5dDJiN1VTSFk0SGtzSmVyQ05lM1g5M1NuWWtHMFlOaDhP?=
 =?utf-8?Q?eSNE9M+O620YEo+DKNV/Bw4ty?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	+r8kOmlHX6NVnQsCA9WOPBpvmtk0A0RMM4CS7uHUjcFHid43L40Rtwfz7Ny8v1N44pkXWGXrS1HybPt6WXa7sW09Ekcw/NvJH+TZLOysOKv5pNuEewpwD/O3W1pOS0nLe49jRdi8AlHfKNuErgP5H13rrn5qVIn05BR0p2rfatW/eYGBFlkaVehOmLXO1vKGwD2hAlplJ3wvyzHGfPoDZSUQls/cW+XiTNRIhcpqZewsx4eZyYMzd8aroaXppIHhvomN98EWDbpip9d59a+JpXT71mBuzZSniTnNC4NJ92t1uQLZkv+9IyCOyBxgzhCe9FKhk6TSYI4Y3X1OyPz3diRjE/4YJDj5U+PI6Vv3NzvDt9hKsnGuADk4WHl0Sd77gIhDbm94uhP2IinVkXcESCRvF2RlCUJZA9dzoPmqRkPXdhIjglPLkptNwISeHm1cCDRz/cy839Q7qLc5oSgznqg8oN/0OvCKcmP8oIbGCwzaV1OuMbqLcrSc92SBIWEuIy2SkudJD9aQLsjKmhgYzVPk0GDFIcfrgsv4oLomPhVEpV8kcvlscvSMbrqPxWGV6A9FidzHN3ePG7CHHSq6Mi9/dt+NMQKOzfnxTVnDeG4xDcGnsGcQc1sH7It4dTdzULfCG+Oy1v74WadWjXJ53lupVVbQ67UJzY4FDbqstsWNhSHpf5z6GfMrfnOqj9K2VL1c3ynXaNFJrgJ8/XFjQ4Lym1wFUOLnQaao3W9kTD2cSE1GPwBxNo1v+gKbRSR0/9BogGL48ZcuS38vVqlw4hVv2zxBA1IFhpeX8Tf7Pt7U8s+cO48B0p5HDBectyiZ
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 333eac7b-4255-4466-9617-08db4c8d0286
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 10:47:52.1108
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EG02X4l14x0SYl8pLGqZKr8IhmSJhYd6/GLCOXTvarYhEhk4GLTeGN1zgSRrpqhiQ3+z6fq1scAfxnmMEBCK6w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6192

On Thu, May 04, 2023 at 11:26:07AM +0100, Andrew Cooper wrote:
> Loading microcode can cause new features to appear.  This has happened
> routinely since Spectre/Meltdown, and even the presence of new status bits can
> sometimes mean the administrator has no further actions to perform.
> 
> Conversely, loading microcode can occasionally cause features to disappear.
> As with livepatching, it is very much the administrator's responsibility to
> confirm that a late microcode load is safe on the intended system before
> rolling it out in production.
> 
> Refresh the raw CPU policy after late microcode load appears to have done
> something, so xen-cpuid can reflect the updated state of the system.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

I'm not fully sure it's worth calling calculate_raw_cpu_policy() if
updated != nr_cores, as it's possible the current CPU where the policy
is regenerated hasn't been updated.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 04 11:00:42 2023
Message-ID: <e9e60c7a-2db5-3985-356e-3a039e44d1a1@citrix.com>
Date: Thu, 4 May 2023 12:00:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86/ucode: Refresh raw CPU policy after microcode load
Content-Language: en-GB
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20230503185813.3050382-1-andrew.cooper3@citrix.com>
 <20230504102607.3078223-1-andrew.cooper3@citrix.com>
 <ZFONUdkG2ow9uckX@Air-de-Roger>
In-Reply-To: <ZFONUdkG2ow9uckX@Air-de-Roger>

On 04/05/2023 11:47 am, Roger Pau Monné wrote:
> On Thu, May 04, 2023 at 11:26:07AM +0100, Andrew Cooper wrote:
>> Loading microcode can cause new features to appear.  This has happened
>> routinely since Spectre/Meltdown, and even the presence of new status bits can
>> sometimes mean the administrator has no further actions to perform.
>>
>> Conversely, loading microcode can occasionally cause features to disappear.
>> As with livepatching, it is very much the administrator's responsibility to
>> confirm that a late microcode load is safe on the intended system before
>> rolling it out in production.
>>
>> Refresh the raw CPU policy after late microcode load appears to have done
>> something, so xen-cpuid can reflect the updated state of the system.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> I'm not fully sure it's worth calling calculate_raw_cpu_policy() if
> updated != nr_cores, as it's possible the current CPU where the policy
> is regenerated hasn't been updated.

As said previously, updated != nr_cores may exist in principle, but it
is not a corner case we should care about.

If the system really is asymmetric after a late load, then it is
probably not long for the world.  There is nothing Xen can do to recover.


It is awkward that we're not on the BSP when collecting.  I think that
part of the logic didn't survive the rebase over switching to
stop-machine.  I'll add it to the gitlab ticket of minor ucode work,
because it's not the end of the world.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 04 11:17:01 2023
Message-ID: <6e03df79-578b-165e-3935-58370b93e96d@suse.com>
Date: Thu, 4 May 2023 13:16:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 03/14 RESEND] cpufreq: Export intel_feature_detect
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-4-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501193034.88575-4-jandryuk@gmail.com>

On 01.05.2023 21:30, Jason Andryuk wrote:
> Export feature_detect as intel_feature_detect so it can be re-used by
> HWP.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Thu May 04 11:20:01 2023
From: George Dunlap <george.dunlap@cloud.com>
Date: Thu, 4 May 2023 12:19:39 +0100
Message-ID: <CA+zSX=a5jDtELctshP_V=d+PXaOo6zy6YO2LngeraSeQv6dOwQ@mail.gmail.com>
Subject: [ANNOUNCE] Call for agenda items for 4 May Community Call @ 1500 UTC
To: Xen-devel <xen-devel@lists.xenproject.org>, 
	Tamas K Lengyel <tamas.k.lengyel@gmail.com>, "intel-xen@intel.com" <intel-xen@intel.com>, 
	"daniel.kiper@oracle.com" <daniel.kiper@oracle.com>, Roger Pau Monne <roger.pau@citrix.com>, 
	Sergey Dyasli <sergey.dyasli@citrix.com>, 
	Christopher Clark <christopher.w.clark@gmail.com>, Rich Persaud <persaur@gmail.com>, 
	Kevin Pearson <kevin.pearson@ortmanconsulting.com>, Juergen Gross <jgross@suse.com>, 
	Paul Durrant <pdurrant@amazon.com>, "Ji, John" <john.ji@intel.com>, 
	"edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, 
	"robin.randhawa@arm.com" <robin.randhawa@arm.com>, Artem Mygaiev <Artem_Mygaiev@epam.com>, 
	Matt Spencer <Matt.Spencer@arm.com>, Stewart Hildebrand <Stewart.Hildebrand@amd.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Rian Quinn <rianquinn@gmail.com>, "Daniel P. Smith" <dpsmith@apertussolutions.com>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLRG91ZyBHb2xkc3RlaW4=?= <cardoe@cardoe.com>, 
	George Dunlap <george.dunlap@citrix.com>, David Woodhouse <dwmw@amazon.co.uk>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQW1pdCBTaGFo?= <amit@infradead.org>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLVmFyYWQgR2F1dGFt?= <varadgautam@gmail.com>, 
	Brian Woods <brian.woods@xilinx.com>, Robert Townley <rob.townley@gmail.com>, 
	Bobby Eshleman <bobby.eshleman@gmail.com>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQ29yZXkgTWlueWFyZA==?= <cminyard@mvista.com>, 
	Olivier Lambert <olivier.lambert@vates.fr>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Ash Wilding <ash.j.wilding@gmail.com>, Rahul Singh <Rahul.Singh@arm.com>, 
	=?UTF-8?Q?Piotr_Kr=C3=B3l?= <piotr.krol@3mdeb.com>, 
	Brendan Kerrigan <brendank310@gmail.com>, insurgo@riseup.net, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Scott Davis <scottwd@gmail.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Michal Orzel <michal.orzel@amd.com>, 
	Marc Ungeschikts <marc.ungeschikts@vates.fr>, Zhiming Shen <zshen@exotanium.io>, 
	Xenia Ragiadakou <burzalodowa@gmail.com>, 
	Henry Wang <Henry.Wang@arm.com>, 
	Per Bilse <per.bilse@citrix.com>, Samuel Verschelde <stormi-xcp@ylix.fr>, 
	Andrei Semenov <andrei.semenov@vates.fr>, Yann Dirson <yann.dirson@vates.fr>, 
	Bernhard Kaindl <bernhard.kaindl@cloud.com>, 
	Luca Fancellu <luca.fancellu@arm.com>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Content-Type: multipart/alternative; boundary="000000000000df92ab05fadc590f"

--000000000000df92ab05fadc590f
Content-Type: text/plain; charset="UTF-8"

Hi all,


The proposed agenda is at
https://cryptpad.fr/pad/#/2/pad/edit/7Vv6erX9dwCazMfCZ4Hbnf8b/ and you can
edit it to add items.  Alternatively, you can reply to this mail directly.

Agenda items are appreciated a few days before the call: please put your
name beside items if you edit the document.

Note the following administrative conventions for the call:
* Unless agreed otherwise in the previous meeting, the call is on the 1st
Thursday of each month at 16:00 British Time (either GMT or BST)
* I usually send out a meeting reminder a few days before with a
provisional agenda

* To allow time to switch between meetings, we'll plan on starting the
agenda at 16:05 sharp.  Aim to join by 16:03 if possible, to leave time
to sort out technical difficulties, etc.

* If you want to be CC'ed (or to stop being CC'ed), please add yourself to
or remove yourself from the sign-up sheet at
https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/

Best Regards
George


== Dial-in Information ==
## Meeting time
16:00 - 17:00 British time
Further International meeting times:
https://www.timeanddate.com/worldclock/meetingdetails.html?year=2023&month=5&day=4&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179


## Dial in details
Web: https://meet.jit.si/XenProjectCommunityCall

Dial-in info and pin can be found here:

https://meet.jit.si/static/dialInInfo.html?room=XenProjectCommunityCall


--000000000000df92ab05fadc590f--


From xen-devel-bounces@lists.xenproject.org Thu May 04 11:50:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 11:50:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529697.824380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puXTb-0001Ou-Gp; Thu, 04 May 2023 11:50:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529697.824380; Thu, 04 May 2023 11:50:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puXTb-0001On-Du; Thu, 04 May 2023 11:50:23 +0000
Received: by outflank-mailman (input) for mailman id 529697;
 Thu, 04 May 2023 11:45:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y0QJ=AZ=amazon.co.uk=prvs=4812655cd=hakor@srs-se1.protection.inumbo.net>)
 id 1puXPA-0000SM-H5
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 11:45:48 +0000
Received: from smtp-fw-2101.amazon.com (smtp-fw-2101.amazon.com [72.21.196.25])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 33c92210-ea71-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 13:45:44 +0200 (CEST)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-iad-1a-m6i4x-617e30c2.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-2101.iad2.amazon.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 May 2023 11:45:41 +0000
Received: from EX19D002EUA001.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-iad-1a-m6i4x-617e30c2.us-east-1.amazon.com (Postfix)
 with ESMTPS id CB41564DCA; Thu,  4 May 2023 11:45:39 +0000 (UTC)
Received: from EX19D037EUB003.ant.amazon.com (10.252.61.119) by
 EX19D002EUA001.ant.amazon.com (10.252.50.66) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.2.1118.26; Thu, 4 May 2023 11:45:39 +0000
Received: from EX19D017EUB002.ant.amazon.com (10.252.51.51) by
 EX19D037EUB003.ant.amazon.com (10.252.61.119) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.2.1118.26; Thu, 4 May 2023 11:45:38 +0000
Received: from EX19D017EUB002.ant.amazon.com ([fe80::5ea2:5e50:d1b:734c]) by
 EX19D017EUB002.ant.amazon.com ([fe80::5ea2:5e50:d1b:734c%3]) with mapi id
 15.02.1118.026; Thu, 4 May 2023 11:45:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33c92210-ea71-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1683200745; x=1714736745;
  h=from:to:cc:subject:date:message-id:content-id:
   content-transfer-encoding:mime-version;
  bh=J0akEHc9aLCPKI1M4DlhScOC9UUiHnTvk5Sz2yRXswQ=;
  b=ginojQnXZR7cTsaj/KbEX48jTf4KZyoRRlIyhaIhJ6gP/IapAMO/un2P
   GhtZZHCEIzjFrDdkMEIOciC2QZ5s7xPnpGDGRSSsNUw6dZ01HiFyBTQ4/
   +gh6imdtkIwXfESNw2OlAvCzeLrR+SxU+bZ783t721XvlZBSFBkgCxNO0
   0=;
X-IronPort-AV: E=Sophos;i="5.99,249,1677542400"; 
   d="scan'208";a="321455840"
From: "Hakobyan, Ruben" <hakor@amazon.co.uk>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Jan Beulich
	<jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Grall,
 Julien" <jgrall@amazon.co.uk>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH] x86/msi: dynamically map pages for MSI-X tables
Thread-Topic: [PATCH] x86/msi: dynamically map pages for MSI-X tables
Thread-Index: AQHZfn3ya9GP9+Vv7EWd6l/G5jjEQQ==
Date: Thu, 4 May 2023 11:45:38 +0000
Message-ID: <04EE5E8F-19DF-49B2-A63F-AC640D4BF9AE@amazon.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-originating-ip: [10.106.82.18]
Content-Type: text/plain; charset="utf-8"
Content-ID: <D1E7D2ED24D5354F96DF0B6F94376B92@amazon.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

On 02/05/2023, 11:23, Roger Pau Monné wrote:
> On Tue, May 02, 2023 at 12:18:06PM +0200, Jan Beulich wrote:
> > On 02.05.2023 12:10, Roger Pau Monné wrote:
> > > On Wed, Apr 26, 2023 at 02:55:20PM +0000, Ruben Hakobyan wrote:
> > >> Xen reserves a constant number of pages that can be used for mapping
> > >> MSI-X tables. This limit is defined by FIX_MSIX_MAX_PAGES in fixmap.h.
> > >>
> > >> Reserving a fixed number of pages could result in an -ENOMEM if a
> > >> device requests a new page when the fixmap limit is exhausted and will
> > >> necessitate manually adjusting the limit before compilation.
> > >>
> > >> To avoid the issues with the current fixmap implementation, we modify
> > >> the MSI-X page mapping logic to instead dynamically map new pages when
> > >> they are needed by making use of ioremap().
> > >
> > > I wonder if Arm plans to reuse this code, and whether then arm32 would
> > > better keep the fixmap implementation to avoid exhausting virtual
> > > address space in that case.
> >
> > I think this would then need to be something that 32-bit architectures
> > do specially. Right now aiui PCI (and hence MSI-X) work on Arm targets
> > only Arm64.
> >
> > > This also have the side effect of ioremap() now possibly allocating a
> > > page in order to fill the page table for the newly allocated VA.
> >
> > Indeed, but I think the (vague) plan to switch to ioremap() has been
> > around for a pretty long time (perhaps forever since 32-bit support
> > was purged).
>
>
> Yup, I'm not saying the above should block the patch, but might be
> worth mentioning in the commit message.
>
>
> Roger.

Sure, I will add a note in the commit message regarding the possible use
of this patch for 32-bit architectures.

Regarding other comments you made on the patch, I have noted them all
and agree with your suggestions. I will send out a new version.

Thanks for reviewing the patch!

Ruben.
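The fixmap-versus-ioremap() trade-off discussed in this thread can be
sketched with a small standalone simulation. This is illustrative only, not
Xen code: fixmap_map_msix() and dynamic_map_msix() are hypothetical
stand-ins, with malloc() playing the role of a page mapping, showing why a
fixed pool starts failing (Xen's -ENOMEM case) once exhausted while
on-demand mapping does not.

```c
#include <stdlib.h>

/* Illustrative stand-ins only, not Xen code: a fixed pool modelled on
 * FIX_MSIX_MAX_PAGES versus on-demand mapping modelled on ioremap(). */
#define FIX_MSIX_MAX_PAGES 4

static int fixmap_used;

/* Fixed-pool mapping: fails once the compile-time reservation is gone,
 * which is the -ENOMEM case described in the patch. */
static void *fixmap_map_msix(void)
{
    if (fixmap_used >= FIX_MSIX_MAX_PAGES)
        return NULL;              /* caller turns this into -ENOMEM */
    fixmap_used++;
    return malloc(4096);          /* stands in for a fixmap slot */
}

/* Dynamic mapping: every request gets a fresh mapping, like ioremap(). */
static void *dynamic_map_msix(void)
{
    return malloc(4096);
}
```

The simulation only captures the exhaustion behaviour; Roger's point about
ioremap() possibly allocating page-table pages for the new VA has no
analogue here.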


From xen-devel-bounces@lists.xenproject.org Thu May 04 12:51:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 12:51:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529713.824390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYQs-000060-46; Thu, 04 May 2023 12:51:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529713.824390; Thu, 04 May 2023 12:51:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYQs-00005t-18; Thu, 04 May 2023 12:51:38 +0000
Received: by outflank-mailman (input) for mailman id 529713;
 Thu, 04 May 2023 12:51:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OChG=AZ=tklengyel.com=bounce+e181d6.cd840-xen-devel=lists.xenproject.org@srs-se1.protection.inumbo.net>)
 id 1puYQq-00005l-On
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 12:51:36 +0000
Received: from so254-35.mailgun.net (so254-35.mailgun.net [198.61.254.35])
 by se1-gles-sth1.inumbo.com (Halon) with UTF8SMTPS
 id 65cddc4c-ea7a-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 14:51:34 +0200 (CEST)
Received: from mail-yb1-f181.google.com (mail-yb1-f181.google.com
 [209.85.219.181]) by
 075b51505512 with SMTP id 6453aa548a040ae8e21ea421 (version=TLS1.3,
 cipher=TLS_AES_128_GCM_SHA256); Thu, 04 May 2023 12:51:32 GMT
Received: by mail-yb1-f181.google.com with SMTP id
 3f1490d57ef6-b9e2b65d006so699408276.3
 for <xen-devel@lists.xenproject.org>; Thu, 04 May 2023 05:51:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 65cddc4c-ea7a-11ed-b226-6b7b168915f2
DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=tklengyel.com;
 q=dns/txt; s=mailo; t=1683204692; x=1683211892; h=Content-Type: Cc: To: To:
 Subject: Subject: Message-ID: Date: From: From: In-Reply-To: References:
 MIME-Version: Sender: Sender;
 bh=tkmb9S4Tg8bEdHenLjg9YyLXaYTYGV68ajT5w+c8RTk=;
 b=CRnPQ7toi6oiH0sWpacXVwrTjYHyFZMcc1eRVgXHjdOpI3NILjg89tuJ0GOOXQathIbN80qP7WIj0yC1mBM21JBI28KHNquobZl9ozflLPijQNHxFR3eNSNrXhhnasHxRIqrLox99XeFodEV77Nfageu7AwUG59ESuW5CPY3Qd/m3auQFti2NVqgydvYMyZ1Uucu/IggAmbZdarBC5aIE8il0y5vyD8NKL6MdqJXudXGgYvvGoKo9rUZvrfoEFiX21KEpD/0K78xFc/FgmuB92G9DjnGlcabq4l5EqSJvYwrQ5hRwkOaUlLraUUvqW2d+ytvZPKdogB0ofujRjVqfg==
X-Mailgun-Sending-Ip: 198.61.254.35
X-Mailgun-Sid: WyIyYTNmOCIsInhlbi1kZXZlbEBsaXN0cy54ZW5wcm9qZWN0Lm9yZyIsImNkODQwIl0=
Sender: tamas@tklengyel.com
X-Gm-Message-State: AC+VfDz2uXMlH7itspENrfJSdwHWXJdlI4pK6zIlPSkX1Ev87IfvCWXB
	+AV7FuUUZeTpB7t/t0FtYiZgpytA/4QDxglG1FU=
X-Google-Smtp-Source: ACHHUZ4ejql2sxB+MnupfVRD4actJYR8J2/WGcWm5vTmppRT4fbnkZIRRcRhFn5GKu8P5oyvrJ0mQbm+3sD8QuODJG0=
X-Received: by 2002:a25:240c:0:b0:b99:58d3:7ea2 with SMTP id
 k12-20020a25240c000000b00b9958d37ea2mr23810039ybk.24.1683204692325; Thu, 04
 May 2023 05:51:32 -0700 (PDT)
MIME-Version: 1.0
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
 <a79d2a8b-6d6e-bd31-b079-a30b555e5fd0@suse.com> <CABfawhn4CRnctzV-17di4eYyNhSGTSMckZjgphS1Rg6HUGOtHw@mail.gmail.com>
 <c5f2ee35-0f5e-da04-9a28-aba49d2aba29@suse.com>
In-Reply-To: <c5f2ee35-0f5e-da04-9a28-aba49d2aba29@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Thu, 4 May 2023 08:50:55 -0400
X-Gmail-Original-Message-ID: <CABfawhnt=465mank4ye==5zbczcSeLWDSKjMoc6bxGTLqPqX-w@mail.gmail.com>
Message-ID: <CABfawhnt=465mank4ye==5zbczcSeLWDSKjMoc6bxGTLqPqX-w@mail.gmail.com>
Subject: Re: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest areas
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: multipart/alternative; boundary="000000000000c53e9205fadda1bd"

--000000000000c53e9205fadda1bd
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, May 4, 2023 at 3:44 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 03.05.2023 19:14, Tamas K Lengyel wrote:
> >> @@ -1974,7 +2046,10 @@ int mem_sharing_fork_reset(struct domain
> >>
> >>   state:
> >>      if ( reset_state )
> >> +    {
> >>          rc = copy_settings(d, pd);
> >> +        /* TBD: What to do here with -ERESTART? */
> >
> > Ideally we could avoid hitting code-paths that are restartable during fork
> > reset since it gets called from vm_event replies that have no concept of
> > handling errors. If we start having errors like this we would just have to
> > drop the vm_event reply optimization and issue a standalone fork reset
> > hypercall every time which isn't a big deal, it's just slower.
>
> I'm afraid I don't follow: We are in the process of fork-reset here. How
> would issuing "a standalone fork reset hypercall every time" make this
> any different? The possible need for a continuation here comes from a
> failed spin_trylock() in map_guest_area(). That won't change the next
> time round.

Why not? Who is holding the lock, and why wouldn't it ever relinquish it?
If that's really true then there is a larger issue than just not being able
to report the error back to the user on the vm_event_resume path, and we
would need to devise a way of copying this from the parent that bypasses
the lock. The parent is paused and its state should not be changing while
forks are active, so turning the lock into an rwlock of some sort and
acquiring the read lock would perhaps be a possible way out of this.

>
> But perhaps I should say that till now I didn't even pay much attention
> to the 2nd use of the function by vm_event_resume(); I was mainly
> focused on the one from XENMEM_sharing_op_fork_reset, where no
> continuation handling exists. Yet perhaps your comment is mainly
> related to that use?
>
> I actually notice that the comment ahead of the function already has a
> continuation related TODO, just that there thought is only of larger
> memory footprint.

With XENMEM_sharing_op_fork_reset the caller actually receives the error
code and can decide what to do next. With vm_event_resume there is no path
currently to notify the agent of an error. We could generate another
vm_event to send such an error, but the expectation with fork_reset is that
it will always work because the parent is paused, so not having that path
for an error to get back to the agent isn't a big deal.

Now, if it becomes the case that due to this locking we can get an error
even while the parent is paused, that will render the vm_event_resume path
unreliable, so we would just switch to using XENMEM_sharing_op_fork_reset
so that at least it can retry in case of an issue. Of course, only if a
reissue of the hypercall has any reasonable chance of succeeding.
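A bounded retry loop of the kind described could cope with such a result.
Everything below is hypothetical scaffolding for illustration:
do_fork_reset() stands in for issuing XENMEM_sharing_op_fork_reset through
the toolstack, and ERESTART_RC for a "continuation pending, please retry"
return value.

```c
/* Hypothetical sketch: do_fork_reset() stands in for issuing
 * XENMEM_sharing_op_fork_reset via the toolstack; ERESTART_RC is an
 * illustrative constant for the continuation-pending result. */
#define ERESTART_RC (-85)

static int attempts_until_success = 3;

static int do_fork_reset(void)
{
    if (--attempts_until_success > 0)
        return ERESTART_RC;       /* continuation pending, retry */
    return 0;
}

/* Bounded retry loop: reissue the call while a continuation is
 * reported, and propagate the error after max_tries attempts. */
static int fork_reset_with_retry(int max_tries)
{
    int rc = ERESTART_RC;
    while (max_tries-- > 0 && (rc = do_fork_reset()) == ERESTART_RC)
        ;
    return rc;
}
```

This only helps, as noted above, if a reissue has a reasonable chance of
succeeding; a lock that is never relinquished would exhaust any retry
budget.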

>
> > My
> > preference would actually be that after the initial forking is
performed a
> > local copy of the parent's state is maintained for the fork to reset to
so
> > there would be no case of hitting an error like this. It would also
allow
> > us in the future to unpause the parent for example..
>
> Oh, I guess I didn't realize the parent was kept paused. Such state
> copying / caching may then indeed be a possibility, but that's nothing
> I can see myself deal with, even less so in the context of this series.
> I need a solution to the problem at hand within the scope of what is
> there right now (or based on what could be provided e.g. by you within
> the foreseeable future). Bubbling up the need for continuation from the
> XENMEM_sharing_op_fork_reset path is the most I could see me handle
> myself ... For vm_event_resume() bubbling state up the domctl path
> _may_ also be doable, but mem_sharing_notification() and friends don't
> even check the function's return value.

Sure, I wasn't expecting that work to be done as part of this series, just
as something I would like to get to in the future.

Tamas


--000000000000c53e9205fadda1bd--


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:08:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:08:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529717.824400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYgr-0001ph-B4; Thu, 04 May 2023 13:08:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529717.824400; Thu, 04 May 2023 13:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYgr-0001pa-8C; Thu, 04 May 2023 13:08:09 +0000
Received: by outflank-mailman (input) for mailman id 529717;
 Thu, 04 May 2023 13:08:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puYgp-0001pM-GT
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:08:07 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b45a5c5c-ea7c-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 15:08:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b45a5c5c-ea7c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683205685;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=q8H9vCnPnI2+EX1f/tER0jjkt7DS5cboSTFX7RAPK34=;
  b=ai3EXpsEZguNyPe5am29NZO5H7W+iMCLvnQCAhyKqfxCeLys4fArHA4H
   HYiYmor6CdT3uQzVlVNU7xOYeamLg8as2rqNrzoIDyLZBQKjhrxIUjXRd
   PMs6kv2H5/YwD7tsWOXeu7peFdajmz9Mi9xd89YAAfkFcf7OdV3eOkryT
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110302272
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:faUxaKm4w6kXsJfZ72YKe9zo5gz3JkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xJOXGHUOPuIZWOge91xbom2oBwH7JDUzYJlTwVo/HpnRiMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aWaVA8w5ARkPqgW5AOGzhH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 cRfBQwAUDW/vrKZ55a/FdFeip4/HOC+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglHWdTFCpU3Tjq0w+2XJlyR60aT3McqTcduPLSlQth/B/
 jmepT6mWHn2MvTc82O1+yiC3dWQkCyrSZAIEJqc6OJ11Qj7Kms7V0RNCArTTeOColG6c8JSL
 QoT4CVGhbg/8gmnQ8fwWzW8oWWYpVgMVtxICeo45QqRjK3O7G6xJEIJUzpAY9wOr9ItSHoh0
 Vrhoj/yLWUx6vvPEyvbr+rK62roYkD5MFPuewc8CiY57ufGnLhjoTXrSolbIqLvku3cTGSYL
 y+xkMQuu1kCpZdVh/7jpAqX3G3ESovhFVBsuFiONo6xxkYgPdP+OdT1gbTOxawYRLt1WGVtq
 5TtdyK2yOkVRa+AmyWWKAnmNOH4vq3VWNEwbLMGInXAy9hO0yT5FWyoyGsiTHqFy+5dEdMTX
 GfduBlK+LhYN2awYKl8buqZUpp6lvG8TY29Dq+FM7Kih6RMmPKvpnkyNSZ8IUi0+KTTrU3PE
 cjCKpv9ZZrrIa9m0CC3V48g7FPf/QhnnTm7bcmin3yaPU+2OCb9pUEtbAHfMYjULcqs/G3oz
 jqoH5HWk08HDrelPUE6M+c7dDg3EJTyPriuw+Q/SwJJClAO9L0JYxMJ/Y4cRg==
IronPort-HdrOrdr: A9a23:KY/HHKpx2IEENzHASm/RS7EaV5pIeYIsimQD101hICG9E/b5qy
 nKpp8mPHDP5Qr5NEtLpTniAsi9qA3nmqKdiLN5VYtKNzOLhILHFu9f0bc=
X-Talos-CUID: =?us-ascii?q?9a23=3AXO32XminQGZewHj0g++mM3mpCDJuSjrw8HruBnO?=
 =?us-ascii?q?BWUlObLnKcW2cor9Uup87?=
X-Talos-MUID: 9a23:kjJfygg3njKSgrJtOx/BFsMpd/0x0630Jhs3zpgJ69iWLyhXHymBk2Hi
X-IronPort-AV: E=Sophos;i="5.99,249,1677560400"; 
   d="scan'208";a="110302272"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/bitops: Drop include of cpufeatureset
Date: Thu, 4 May 2023 14:07:55 +0100
Message-ID: <20230504130755.3181176-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Nothing in x86/bitops uses anything from x86/cpufeatureset, and it is creating
problems when trying to untangle other aspects of feature handling.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/include/asm/bitops.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/bitops.h b/xen/arch/x86/include/asm/bitops.h
index 5a71afbc89d5..aa8bd65b4565 100644
--- a/xen/arch/x86/include/asm/bitops.h
+++ b/xen/arch/x86/include/asm/bitops.h
@@ -6,7 +6,6 @@
  */
 
 #include <asm/alternative.h>
-#include <asm/cpufeatureset.h>
 
 /*
  * We specify the memory operand as both input and output because the memory
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 04 13:12:03 2023
Message-ID: <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com>
Date: Thu, 4 May 2023 15:11:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP)
 driver
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-5-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501193034.88575-5-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 01.05.2023 21:30, Jason Andryuk wrote:
> For cpufreq=xen:hwp, placing the option inside the governor wouldn't
> work.  Users would have to select the hwp-internal governor to turn off
> hwp support.

I'm afraid I don't understand this, and you'll find a comment towards
this further down. Even when ...

>  hwp-internal isn't usable without hwp, and users wouldn't
> be able to select a different governor.  That doesn't matter while hwp
> defaults off, but it would if or when hwp defaults to enabled.

... it starts defaulting to enabled, selecting another governor can
simply have the side effect of turning off hwp.

> Write to disable the interrupt - the linux pstate driver does this.  We
> don't use the interrupts, so we can just turn them off.  We aren't ready
> to handle them, so we don't want any.  Unclear if this is necessary.
> SDM says it's default disabled.

Definitely better to be on the safe side.

> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -499,7 +499,7 @@ If set, force use of the performance counters for oprofile, rather than detectin
>  available support.
>  
>  ### cpufreq
> -> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<maxfreq>][,[<minfreq>][,[verbose]]]]} | dom0-kernel`
> +> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<hdc>][,[<hwp>]][,[<maxfreq>]][,[<minfreq>]][,[verbose]]]} | dom0-kernel`

Considering you use a special internal governor, the 4 governor alternatives are
meaningless for hwp. Hence at the command line level recognizing "hwp" as if it
was another governor name would seem better to me. This would then also get rid
of one of the two special "no-" prefix parsing cases (which I'm not overly
happy about).

Even if not done that way I'm puzzled by the way you spell out the interaction
of "hwp" and "hdc": As you say in the description, "hdc" is meaningful only when
"hwp" was specified, so even if not merged with the governors group "hwp" should
come first, and "hdc" ought to be rejected if "hwp" wasn't first specified. (The
way you've spelled it out it actually looks to be kind of the other way around.)

Strictly speaking "maxfreq" and "minfreq" also should be objected to when "hwp"
was specified.

Overall I'm getting the impression that beyond your "verbose" related adjustment
more is needed, if you're meaning to get things closer to how we parse the
option (splitting across multiple lines to help see what I mean):

`= none
 | {{ <boolean> | xen } [:{powersave|performance|ondemand|userspace}
                          [{,hwp[,hdc]|[,maxfreq=<maxfreq>[,minfreq=<minfreq>]}]
                          [,verbose]]}
 | dom0-kernel`

(We're still parsing in a more relaxed way, e.g. minfreq may come ahead of
maxfreq, but better be more tight in the doc than too relaxed.)

Furthermore while max/min freq don't apply directly, there are still two MSRs
controlling bounds at the package and logical processor levels.

> --- /dev/null
> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> @@ -0,0 +1,506 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * hwp.c cpufreq driver to run Intel Hardware P-States (HWP)
> + *
> + * Copyright (C) 2021 Jason Andryuk <jandryuk@gmail.com>
> + */
> +
> +#include <xen/cpumask.h>
> +#include <xen/init.h>
> +#include <xen/param.h>
> +#include <xen/xmalloc.h>
> +#include <asm/io.h>
> +#include <asm/msr.h>
> +#include <acpi/cpufreq/cpufreq.h>
> +
> +static bool feature_hwp;
> +static bool feature_hwp_notification;
> +static bool feature_hwp_activity_window;
> +static bool feature_hwp_energy_perf;
> +static bool feature_hwp_pkg_level_ctl;
> +static bool feature_hwp_peci;
> +
> +static bool feature_hdc;

Most (all?) of these want to be __ro_after_init, I expect.

> +__initdata bool opt_cpufreq_hwp = false;
> +__initdata bool opt_cpufreq_hdc = true;

Nit (style): Please put annotations after the type.
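
I.e. (placement sketch only; names and initializers as in the patch):

```c
bool __initdata opt_cpufreq_hwp = false;
bool __initdata opt_cpufreq_hdc = true;
```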

> +#define HWP_ENERGY_PERF_BALANCE         0x80
> +#define IA32_ENERGY_BIAS_BALANCE        0x7
> +#define IA32_ENERGY_BIAS_MAX_POWERSAVE  0xf
> +#define IA32_ENERGY_BIAS_MASK           0xf
> +
> +union hwp_request
> +{
> +    struct
> +    {
> +        uint64_t min_perf:8;
> +        uint64_t max_perf:8;
> +        uint64_t desired:8;
> +        uint64_t energy_perf:8;
> +        uint64_t activity_window:10;
> +        uint64_t package_control:1;
> +        uint64_t reserved:16;
> +        uint64_t activity_window_valid:1;
> +        uint64_t energy_perf_valid:1;
> +        uint64_t desired_valid:1;
> +        uint64_t max_perf_valid:1;
> +        uint64_t min_perf_valid:1;

The boolean fields here would probably better be of type "bool". I also
don't see the need for using uint64_t for any of the other fields -
unsigned int will be quite fine, I think. Only ...

> +    };
> +    uint64_t raw;

... this wants to keep this type. (Same again below then.)

> +bool __init hwp_available(void)
> +{
> +    unsigned int eax, ecx, unused;
> +    bool use_hwp;
> +
> +    if ( boot_cpu_data.cpuid_level < CPUID_PM_LEAF )
> +    {
> +        hwp_verbose("cpuid_level (%#x) lacks HWP support\n",
> +                    boot_cpu_data.cpuid_level);
> +        return false;
> +    }
> +
> +    if ( boot_cpu_data.cpuid_level < 0x16 )
> +    {
> +        hwp_info("HWP disabled: cpuid_level %#x < 0x16 lacks CPU freq info\n",
> +                 boot_cpu_data.cpuid_level);
> +        return false;
> +    }
> +
> +    cpuid(CPUID_PM_LEAF, &eax, &unused, &ecx, &unused);
> +
> +    if ( !(eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE) &&
> +         !(ecx & CPUID6_ECX_IA32_ENERGY_PERF_BIAS) )
> +    {
> +        hwp_verbose("HWP disabled: No energy/performance preference available");
> +        return false;
> +    }
> +
> +    feature_hwp                 = eax & CPUID6_EAX_HWP;
> +    feature_hwp_notification    = eax & CPUID6_EAX_HWP_NOTIFICATION;
> +    feature_hwp_activity_window = eax & CPUID6_EAX_HWP_ACTIVITY_WINDOW;
> +    feature_hwp_energy_perf     =
> +        eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE;
> +    feature_hwp_pkg_level_ctl   = eax & CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST;
> +    feature_hwp_peci            = eax & CPUID6_EAX_HWP_PECI;
> +
> +    hwp_verbose("HWP: %d notify: %d act-window: %d energy-perf: %d pkg-level: %d peci: %d\n",
> +                feature_hwp, feature_hwp_notification,
> +                feature_hwp_activity_window, feature_hwp_energy_perf,
> +                feature_hwp_pkg_level_ctl, feature_hwp_peci);
> +
> +    if ( !feature_hwp )
> +        return false;
> +
> +    feature_hdc = eax & CPUID6_EAX_HDC;
> +
> +    hwp_verbose("HWP: Hardware Duty Cycling (HDC) %ssupported%s\n",
> +                feature_hdc ? "" : "not ",
> +                feature_hdc ? opt_cpufreq_hdc ? ", enabled" : ", disabled"
> +                            : "");
> +
> +    feature_hdc = feature_hdc && opt_cpufreq_hdc;
> +
> +    hwp_verbose("HWP: HW_FEEDBACK %ssupported\n",
> +                (eax & CPUID6_EAX_HW_FEEDBACK) ? "" : "not ");

You report this, but you don't really use it?

> +    use_hwp = feature_hwp && opt_cpufreq_hwp;

There's a lot of output you may produce until you make it here, which is
largely meaningless when opt_cpufreq_hwp == false. Is there a reason you
don't check that flag first thing in the function?

> +static void hdc_set_pkg_hdc_ctl(bool val)
> +{
> +    uint64_t msr;
> +
> +    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
> +    {
> +        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");
> +
> +        return;
> +    }
> +
> +    if ( val )
> +        msr |= IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
> +    else
> +        msr &= ~IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
> +
> +    if ( wrmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
> +        hwp_err("error wrmsr_safe(MSR_IA32_PKG_HDC_CTL): %016lx\n", msr);
> +}
> +
> +static void hdc_set_pm_ctl1(bool val)
> +{
> +    uint64_t msr;
> +
> +    if ( rdmsr_safe(MSR_IA32_PM_CTL1, msr) )
> +    {
> +        hwp_err("error rdmsr_safe(MSR_IA32_PM_CTL1)\n");
> +
> +        return;
> +    }
> +
> +    if ( val )
> +        msr |= IA32_PM_CTL1_HDC_ALLOW_BLOCK;
> +    else
> +        msr &= ~IA32_PM_CTL1_HDC_ALLOW_BLOCK;
> +
> +    if ( wrmsr_safe(MSR_IA32_PM_CTL1, msr) )
> +        hwp_err("error wrmsr_safe(MSR_IA32_PM_CTL1): %016lx\n", msr);
> +}

For both functions: Elsewhere you also log the affected CPU in hwp_err().
Without this I'm not convinced the logging here is very useful. In fact I
wonder whether hwp_err() shouldn't take care of this and/or the "error"
part of the string literal. A HWP: prefix might also not be bad ...

> +static void hwp_get_cpu_speeds(struct cpufreq_policy *policy)
> +{
> +    uint32_t base_khz, max_khz, bus_khz, edx;
> +
> +    cpuid(0x16, &base_khz, &max_khz, &bus_khz, &edx);
> +
> +    /* aperf/mperf scales base. */
> +    policy->cpuinfo.perf_freq = base_khz * 1000;
> +    policy->cpuinfo.min_freq = base_khz * 1000;
> +    policy->cpuinfo.max_freq = max_khz * 1000;
> +    policy->min = base_khz * 1000;
> +    policy->max = max_khz * 1000;
> +    policy->cur = 0;

What is the comment intended to be telling me here?

> +static void cf_check hwp_init_msrs(void *info)
> +{
> +    struct cpufreq_policy *policy = info;
> +    struct hwp_drv_data *data = this_cpu(hwp_drv_data);
> +    uint64_t val;
> +
> +    /*
> +     * Package level MSR, but we don't have a good idea of packages here, so
> +     * just do it everytime.
> +     */
> +    if ( rdmsr_safe(MSR_IA32_PM_ENABLE, val) )
> +    {
> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_PM_ENABLE)\n", policy->cpu);
> +        data->curr_req.raw = -1;
> +        return;
> +    }
> +
> +    /* Ensure we don't generate interrupts */
> +    if ( feature_hwp_notification )
> +        wrmsr_safe(MSR_IA32_HWP_INTERRUPT, 0);
> +
> +    hwp_verbose("CPU%u: MSR_IA32_PM_ENABLE: %016lx\n", policy->cpu, val);
> +    if ( !(val & IA32_PM_ENABLE_HWP_ENABLE) )
> +    {
> +        val |= IA32_PM_ENABLE_HWP_ENABLE;
> +        if ( wrmsr_safe(MSR_IA32_PM_ENABLE, val) )
> +        {
> +            hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_PM_ENABLE, %lx)\n",
> +                    policy->cpu, val);
> +            data->curr_req.raw = -1;
> +            return;
> +        }
> +    }
> +
> +    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
> +    {
> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
> +                policy->cpu);
> +        data->curr_req.raw = -1;
> +        return;
> +    }
> +
> +    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
> +    {
> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
> +        data->curr_req.raw = -1;
> +        return;
> +    }
> +
> +    if ( !feature_hwp_energy_perf ) {

Nit: Brace placement.

> +        if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, val) )
> +        {
> +            hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
> +            data->curr_req.raw = -1;
> +
> +            return;
> +        }
> +
> +        data->energy_perf = val & IA32_ENERGY_BIAS_MASK;
> +    }

In order to not need to undo the "enable" you've already done, maybe that
should move down here? With all the sanity checking you do here, maybe
you should also check that the write of the enable bit actually took
effect?

> +/* val 0 - highest performance, 15 - maximum energy savings */
> +static void hwp_energy_perf_bias(const struct hwp_drv_data *data)
> +{
> +    uint64_t msr;
> +    uint8_t val = data->energy_perf;
> +
> +    ASSERT(val <= IA32_ENERGY_BIAS_MAX_POWERSAVE);
> +
> +    if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
> +    {
> +        hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
> +
> +        return;
> +    }
> +
> +    msr &= ~IA32_ENERGY_BIAS_MASK;
> +    msr |= val;
> +
> +    if ( wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
> +        hwp_err("error wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS): %016lx\n", msr);
> +}
> +
> +static void cf_check hwp_write_request(void *info)
> +{
> +    struct cpufreq_policy *policy = info;
> +    struct hwp_drv_data *data = this_cpu(hwp_drv_data);
> +    union hwp_request hwp_req = data->curr_req;
> +
> +    BUILD_BUG_ON(sizeof(union hwp_request) != sizeof(uint64_t));
> +    if ( wrmsr_safe(MSR_IA32_HWP_REQUEST, hwp_req.raw) )
> +    {
> +        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_HWP_REQUEST, %lx)\n",
> +                policy->cpu, hwp_req.raw);
> +        rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw);
> +    }
> +
> +    if ( !feature_hwp_energy_perf )
> +        hwp_energy_perf_bias(data);
> +
> +}
> +
> +static int cf_check hwp_cpufreq_target(struct cpufreq_policy *policy,
> +                                       unsigned int target_freq,
> +                                       unsigned int relation)
> +{
> +    unsigned int cpu = policy->cpu;
> +    struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
> +    /* Zero everything to ensure reserved bits are zero... */
> +    union hwp_request hwp_req = { .raw = 0 };
> +
> +    /* .. and update from there */
> +    hwp_req.min_perf = data->minimum;
> +    hwp_req.max_perf = data->maximum;
> +    hwp_req.desired = data->desired;
> +    if ( feature_hwp_energy_perf )
> +        hwp_req.energy_perf = data->energy_perf;
> +    if ( feature_hwp_activity_window )
> +        hwp_req.activity_window = data->activity_window;
> +
> +    if ( hwp_req.raw == data->curr_req.raw )
> +        return 0;
> +
> +    data->curr_req = hwp_req;
> +
> +    hwp_verbose("CPU%u: wrmsr HWP_REQUEST %016lx\n", cpu, hwp_req.raw);
> +    on_selected_cpus(cpumask_of(cpu), hwp_write_request, policy, 1);
> +
> +    return 0;
> +}

If I'm not mistaken these 3 functions can only be reached from the user
space tool (via set_cpufreq_para()). On that path I don't think there
should be any hwp_err(); definitely not in non-verbose mode. Instead it
would be good if a sensible error code could be reported back. (Same
then for hwp_cpufreq_update() and its helper.)

> --- a/xen/arch/x86/include/asm/cpufeature.h
> +++ b/xen/arch/x86/include/asm/cpufeature.h
> @@ -46,8 +46,17 @@ extern struct cpuinfo_x86 boot_cpu_data;
>  #define cpu_has(c, bit)		test_bit(bit, (c)->x86_capability)
>  #define boot_cpu_has(bit)	test_bit(bit, boot_cpu_data.x86_capability)
>  
> -#define CPUID_PM_LEAF                    6
> -#define CPUID6_ECX_APERFMPERF_CAPABILITY 0x1
> +#define CPUID_PM_LEAF                                6
> +#define CPUID6_EAX_HWP                               (_AC(1, U) <<  7)
> +#define CPUID6_EAX_HWP_NOTIFICATION                  (_AC(1, U) <<  8)
> +#define CPUID6_EAX_HWP_ACTIVITY_WINDOW               (_AC(1, U) <<  9)
> +#define CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE (_AC(1, U) << 10)
> +#define CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST         (_AC(1, U) << 11)
> +#define CPUID6_EAX_HDC                               (_AC(1, U) << 13)
> +#define CPUID6_EAX_HWP_PECI                          (_AC(1, U) << 16)
> +#define CPUID6_EAX_HW_FEEDBACK                       (_AC(1, U) << 19)

Perhaps better without open-coding BIT()?

I also find it a little odd that e.g. bit 17 is left out here despite you
declaring the 5 "valid" bits in union hwp_request (which are qualified by
this CPUID bit afaict).

> +#define CPUID6_ECX_APERFMPERF_CAPABILITY             0x1
> +#define CPUID6_ECX_IA32_ENERGY_PERF_BIAS             0x8

Why not the same form here?

> --- a/xen/arch/x86/include/asm/msr-index.h
> +++ b/xen/arch/x86/include/asm/msr-index.h
> @@ -151,6 +151,13 @@
>  
>  #define MSR_PKRS                            0x000006e1
>  
> +#define MSR_IA32_PM_ENABLE                  0x00000770
> +#define  IA32_PM_ENABLE_HWP_ENABLE          (_AC(1, ULL) <<  0)
> +
> +#define MSR_IA32_HWP_CAPABILITIES           0x00000771
> +#define MSR_IA32_HWP_INTERRUPT              0x00000773
> +#define MSR_IA32_HWP_REQUEST                0x00000774

I think for new MSRs being added here in particular Andrew would like to
see the IA32 infixes omitted. (I'd extend this then to
CPUID6_ECX_IA32_ENERGY_PERF_BIAS as well.)

> @@ -165,6 +172,11 @@
>  #define  PASID_PASID_MASK                   0x000fffff
>  #define  PASID_VALID                        (_AC(1, ULL) << 31)
>  
> +#define MSR_IA32_PKG_HDC_CTL                0x00000db0
> +#define  IA32_PKG_HDC_CTL_HDC_PKG_ENABLE    (_AC(1, ULL) <<  0)

The name has two redundant infixes, which looks odd, but then I can't
suggest any better without going too much out of sync with the SDM.

> --- a/xen/drivers/cpufreq/cpufreq.c
> +++ b/xen/drivers/cpufreq/cpufreq.c
> @@ -565,6 +565,38 @@ static void cpufreq_cmdline_common_para(struct cpufreq_policy *new_policy)
>  
>  static int __init cpufreq_handle_common_option(const char *name, const char *val)
>  {
> +    if (!strcmp(name, "hdc")) {
> +        if (val) {
> +            int ret = parse_bool(val, NULL);
> +            if (ret != -1) {
> +                opt_cpufreq_hdc = ret;
> +                return 1;
> +            }
> +        } else {
> +            opt_cpufreq_hdc = true;
> +            return 1;
> +        }
> +    } else if (!strcmp(name, "no-hdc")) {
> +        opt_cpufreq_hdc = false;
> +        return 1;
> +    }

I think recognizing a "no-" prefix would want to be separated out, and be
restricted to val being NULL. It would result in val being pointed at the
string "no" (or "off" or anything else parse_bool() recognizes as negative
indicator).

Yet if, as suggested above, "hwp" became a "fake" governor also when
parsing the command line, "hdc" could actually be handled in its
handle_option() hook.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:12:58 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH 0/3] Fix and improvements to xen-analysis.py
Date: Thu,  4 May 2023 14:12:42 +0100
Message-Id: <20230504131245.2985400-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series includes a fix for a limitation of xen-analysis.py that caused it
to fail spuriously when using cppcheck with a parallel make build.

The second patch enables the tool to accept cppcheck versions above 2.7
(excluding 2.8, for the reasons described in the documentation).

The final one fixes the generated cppcheck text report to use paths relative
to the repository root, instead of paths relative to the xen/xen directory.

Luca Fancellu (3):
  xen/misra: xen-analysis.py: fix parallel analysis Cppcheck errors
  xen/misra: xen-analysis.py: allow cppcheck version above 2.7
  xen/misra: xen-analysis.py: use the relative path from the ...

 xen/scripts/xen_analysis/cppcheck_analysis.py | 36 ++++++++++++-------
 xen/tools/cppcheck-cc.sh                      | 19 +++++++++-
 2 files changed, 41 insertions(+), 14 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 13:12:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:12:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529726.824430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYlX-00049z-Gr; Thu, 04 May 2023 13:12:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529726.824430; Thu, 04 May 2023 13:12:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYlX-00049q-E4; Thu, 04 May 2023 13:12:59 +0000
Received: by outflank-mailman (input) for mailman id 529726;
 Thu, 04 May 2023 13:12:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UZDr=AZ=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1puYlW-0003tK-9n
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:12:58 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 633d48c4-ea7d-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 15:12:57 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6418112FC;
 Thu,  4 May 2023 06:13:41 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E283A3F67D;
 Thu,  4 May 2023 06:12:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 633d48c4-ea7d-11ed-b226-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/3] xen/misra: xen-analysis.py: allow cppcheck version above 2.7
Date: Thu,  4 May 2023 14:12:44 +0100
Message-Id: <20230504131245.2985400-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504131245.2985400-1-luca.fancellu@arm.com>
References: <20230504131245.2985400-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Allow the use of Cppcheck versions above 2.7, with the exception of 2.8,
which is known and documented to be broken.
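Outside the patch context, the version gate below can be sketched as a
standalone helper (function name hypothetical; the dots in the regex are
escaped here so they only match literal dots):

```python
import re

def check_cppcheck_version(version_output):
    # Parse "Cppcheck <major>.<minor>[.<patch>]" from `cppcheck --version`
    m = re.search(r'^Cppcheck (\d+)\.(\d+)(?:\.\d+)?$', version_output,
                  flags=re.M)
    if not m:
        raise ValueError(
            "Can't find cppcheck version or version not identified: "
            "{}".format(version_output))
    major, minor = int(m.group(1)), int(m.group(2))
    # Versions below 2.7 are unsupported, 2.8 is known to be broken
    if (major, minor) < (2, 7):
        raise ValueError("Cppcheck version < 2.7 is not supported")
    if (major, minor) == (2, 8):
        raise ValueError("Cppcheck version 2.8 is known to be broken")
    return major, minor
```

Comparing the (major, minor) tuples lexicographically avoids the separate
`major < 2 or (major == 2 and minor < 7)` condition while keeping the same
accept/reject behaviour.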

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/xen_analysis/cppcheck_analysis.py | 20 +++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
index 658795bb9f5b..c3783e8df343 100644
--- a/xen/scripts/xen_analysis/cppcheck_analysis.py
+++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
@@ -157,13 +157,25 @@ def generate_cppcheck_deps():
             "Error occured retrieving cppcheck version:\n{}\n\n{}"
         )
 
-    version_regex = re.search('^Cppcheck (.*)$', invoke_cppcheck, flags=re.M)
+    version_regex = re.search('^Cppcheck (\d+)\.(\d+)(?:\.\d+)?$',
+                              invoke_cppcheck, flags=re.M)
     # Currently, only cppcheck version >= 2.7 is supported, but version 2.8 is
     # known to be broken, please refer to docs/misra/cppcheck.txt
-    if (not version_regex) or (not version_regex.group(1).startswith("2.7")):
+    if (not version_regex) or len(version_regex.groups()) < 2:
         raise CppcheckDepsPhaseError(
-                "Can't find cppcheck version or version is not 2.7"
-              )
+            "Can't find cppcheck version or version not identified: "
+            "{}".format(invoke_cppcheck)
+        )
+    major = int(version_regex.group(1))
+    minor = int(version_regex.group(2))
+    if major < 2 or (major == 2 and minor < 7):
+        raise CppcheckDepsPhaseError(
+            "Cppcheck version < 2.7 is not supported"
+        )
+    if major == 2 and minor == 8:
+        raise CppcheckDepsPhaseError(
+            "Cppcheck version 2.8 is known to be broken, see the documentation"
+        )
 
     # If misra option is selected, append misra addon and generate cppcheck
     # files for misra analysis
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 13:13:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:13:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529727.824440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYlY-0004Q1-O1; Thu, 04 May 2023 13:13:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529727.824440; Thu, 04 May 2023 13:13:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYlY-0004Pu-Ku; Thu, 04 May 2023 13:13:00 +0000
Received: by outflank-mailman (input) for mailman id 529727;
 Thu, 04 May 2023 13:12:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UZDr=AZ=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1puYlX-00049n-Kx
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:12:59 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 626d8b4b-ea7d-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 15:12:56 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F0003C14;
 Thu,  4 May 2023 06:13:39 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5F9C63F67D;
 Thu,  4 May 2023 06:12:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 626d8b4b-ea7d-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH 1/3] xen/misra: xen-analysis.py: fix parallel analysis Cppcheck errors
Date: Thu,  4 May 2023 14:12:43 +0100
Message-Id: <20230504131245.2985400-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504131245.2985400-1-luca.fancellu@arm.com>
References: <20230504131245.2985400-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently Cppcheck has a limitation that prevents combining a parallel make
build with a parallel Cppcheck invocation on each translation unit (the .c
files), because of spurious internal errors.

The issue comes from the fact that, when using a build directory, Cppcheck
saves temporary files as <filename>.c.<many-extensions>; this doesn't work
well when files with the same name are analysed at the same time, leading
to race conditions.

Fix the issue by recreating, under the build directory, the directory
structure of the file being analysed, so that same-named files can no
longer clash.
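The scheme can be sketched in isolation as follows (the variable values
are hypothetical; the real script derives the object-tree path from the
compiler command line, and the `%/*` strip assumes the path still carries
the file name):

```shell
# Hypothetical inputs standing in for what cppcheck-cc.sh computes.
BUILD_DIR="build/cppcheck-build"
OBJTREE_PATH="arch/arm/traps.c"

# Mirror the analysed file's directory under the cppcheck build dir so
# that two files sharing a basename never share temporary files.
cppcheck_build_dir="${BUILD_DIR}/${OBJTREE_PATH%/*}"
mkdir -p "${cppcheck_build_dir}"
echo "${cppcheck_build_dir}"
```

Each translation unit then passes its own `--cppcheck-build-dir` to
Cppcheck, so parallel invocations write temporaries to disjoint paths.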

Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/xen_analysis/cppcheck_analysis.py |  8 +++-----
 xen/tools/cppcheck-cc.sh                      | 19 ++++++++++++++++++-
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
index ab52ce38d502..658795bb9f5b 100644
--- a/xen/scripts/xen_analysis/cppcheck_analysis.py
+++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
@@ -139,7 +139,6 @@ def generate_cppcheck_deps():
     # Compiler defines are in compiler-def.h which is included in config.h
     #
     cppcheck_flags="""
---cppcheck-build-dir={}/{}
  --max-ctu-depth=10
  --enable=style,information,missingInclude
  --template=\'{{file}}({{line}},{{column}}):{{id}}:{{severity}}:{{message}}\'
@@ -150,8 +149,7 @@ def generate_cppcheck_deps():
  --suppress='unusedStructMember:*'
  --include={}/include/xen/config.h
  -DCPPCHECK
-""".format(settings.outdir, CPPCHECK_BUILD_DIR, settings.xen_dir,
-           settings.outdir, settings.xen_dir)
+""".format(settings.xen_dir, settings.outdir, settings.xen_dir)
 
     invoke_cppcheck = utils.invoke_command(
             "{} --version".format(settings.cppcheck_binpath),
@@ -204,9 +202,9 @@ def generate_cppcheck_deps():
 
     cppcheck_cc_flags = """--compiler={} --cppcheck-cmd={} {}
  --cppcheck-plat={}/cppcheck-plat --ignore-path=tools/
- --ignore-path=arch/x86/efi/check.c
+ --ignore-path=arch/x86/efi/check.c --build-dir={}/{}
 """.format(xen_cc, settings.cppcheck_binpath, cppcheck_flags,
-           settings.tools_dir)
+           settings.tools_dir, settings.outdir, CPPCHECK_BUILD_DIR)
 
     if settings.cppcheck_html:
         cppcheck_cc_flags = cppcheck_cc_flags + " --cppcheck-html"
diff --git a/xen/tools/cppcheck-cc.sh b/xen/tools/cppcheck-cc.sh
index f6728e4c1084..16a965edb7ec 100755
--- a/xen/tools/cppcheck-cc.sh
+++ b/xen/tools/cppcheck-cc.sh
@@ -24,6 +24,7 @@ Options:
 EOF
 }
 
+BUILD_DIR=""
 CC_FILE=""
 COMPILER=""
 CPPCHECK_HTML="n"
@@ -66,6 +67,10 @@ do
             help
             exit 0
             ;;
+        --build-dir=*)
+            BUILD_DIR="${OPTION#*=}"
+            sm_tool_args="n"
+            ;;
         --compiler=*)
             COMPILER="${OPTION#*=}"
             sm_tool_args="n"
@@ -107,6 +112,12 @@ then
     exit 1
 fi
 
+if [ "${BUILD_DIR}" = "" ]
+then
+    echo "--build-dir arg is mandatory."
+    exit 1
+fi
+
 function create_jcd() {
     local line="${1}"
     local arg_num=0
@@ -199,13 +210,18 @@ then
             exit 1
         fi
 
+        # Generate build directory for the analysed file
+        cppcheck_build_dir="${BUILD_DIR}/${OBJTREE_PATH}"
+        mkdir -p "${cppcheck_build_dir}"
+
         # Shellcheck complains about missing quotes on CPPCHECK_TOOL_ARGS, but
         # they can't be used here
         # shellcheck disable=SC2086
         ${CPPCHECK_TOOL} ${CPPCHECK_TOOL_ARGS} \
             --project="${JDB_FILE}" \
             --output-file="${out_file}" \
-            --platform="${platform}"
+            --platform="${platform}" \
+            --cppcheck-build-dir=${cppcheck_build_dir}
 
         if [ "${CPPCHECK_HTML}" = "y" ]
         then
@@ -216,6 +232,7 @@ then
                 --project="${JDB_FILE}" \
                 --output-file="${out_file%.txt}.xml" \
                 --platform="${platform}" \
+                --cppcheck-build-dir=${cppcheck_build_dir} \
                 -q \
                 --xml
         fi
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 13:13:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:13:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529728.824446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYlZ-0004Tv-67; Thu, 04 May 2023 13:13:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529728.824446; Thu, 04 May 2023 13:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYlY-0004SZ-UU; Thu, 04 May 2023 13:13:00 +0000
Received: by outflank-mailman (input) for mailman id 529728;
 Thu, 04 May 2023 13:12:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UZDr=AZ=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1puYlX-0003tK-OS
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:12:59 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 64104f88-ea7d-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 15:12:59 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CC979139F;
 Thu,  4 May 2023 06:13:42 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 56C753F67D;
 Thu,  4 May 2023 06:12:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64104f88-ea7d-11ed-b226-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 3/3] xen/misra: xen-analysis.py: use the relative path from the ...
Date: Thu,  4 May 2023 14:12:45 +0100
Message-Id: <20230504131245.2985400-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504131245.2985400-1-luca.fancellu@arm.com>
References: <20230504131245.2985400-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

repository in the reports

Currently the cppcheck report entries show the file path relative to the
/xen folder of the repository instead of the base folder. To ease
cross-checking, for example between a git diff output and the report, use
the repository folder as the base.
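The rebasing the patch performs by switching settings.xen_dir to
settings.repo_dir can be illustrated with a small sketch (helper name and
example paths are hypothetical, assuming a POSIX layout where the xen/
directory sits directly under the repository root):

```python
import os

def rebase_report_path(entry_path, xen_dir, repo_dir):
    # Turn a report path relative to xen/ into one relative to the
    # repository root, so it lines up with paths in `git diff` output.
    absolute = os.path.join(xen_dir, entry_path)
    return os.path.relpath(absolute, repo_dir)
```

With repo_dir="/src/xen" and xen_dir="/src/xen/xen", an entry
"arch/arm/traps.c" becomes "xen/arch/arm/traps.c", matching the prefix
git uses from the repository root.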

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/xen_analysis/cppcheck_analysis.py | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
index c3783e8df343..c8abbe0fca79 100644
--- a/xen/scripts/xen_analysis/cppcheck_analysis.py
+++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
@@ -149,7 +149,7 @@ def generate_cppcheck_deps():
  --suppress='unusedStructMember:*'
  --include={}/include/xen/config.h
  -DCPPCHECK
-""".format(settings.xen_dir, settings.outdir, settings.xen_dir)
+""".format(settings.repo_dir, settings.outdir, settings.xen_dir)
 
     invoke_cppcheck = utils.invoke_command(
             "{} --version".format(settings.cppcheck_binpath),
@@ -240,7 +240,7 @@ def generate_cppcheck_report():
     try:
         cppcheck_report_utils.cppcheck_merge_txt_fragments(fragments,
                                                            report_filename,
-                                                           [settings.xen_dir])
+                                                           [settings.repo_dir])
     except cppcheck_report_utils.CppcheckTXTReportError as e:
         raise CppcheckReportPhaseError(e)
 
@@ -257,7 +257,7 @@ def generate_cppcheck_report():
         try:
             cppcheck_report_utils.cppcheck_merge_xml_fragments(fragments,
                                                                xml_filename,
-                                                               settings.xen_dir,
+                                                               settings.repo_dir,
                                                                settings.outdir)
         except cppcheck_report_utils.CppcheckHTMLReportError as e:
             raise CppcheckReportPhaseError(e)
@@ -265,7 +265,7 @@ def generate_cppcheck_report():
         utils.invoke_command(
             "{} --file={} --source-dir={} --report-dir={}/html --title=Xen"
                 .format(settings.cppcheck_htmlreport_binpath, xml_filename,
-                        settings.xen_dir, html_report_dir),
+                        settings.repo_dir, html_report_dir),
             False, CppcheckReportPhaseError,
             "Error occured generating Cppcheck HTML report:\n{}"
         )
@@ -273,7 +273,7 @@ def generate_cppcheck_report():
         html_files = utils.recursive_find_file(html_report_dir, r'.*\.html$')
         try:
             cppcheck_report_utils.cppcheck_strip_path_html(html_files,
-                                                           (settings.xen_dir,
+                                                           (settings.repo_dir,
                                                             settings.outdir))
         except cppcheck_report_utils.CppcheckHTMLReportError as e:
             raise CppcheckReportPhaseError(e)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 13:15:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:15:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529741.824459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYnR-00067X-FI; Thu, 04 May 2023 13:14:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529741.824459; Thu, 04 May 2023 13:14:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYnR-00067Q-CU; Thu, 04 May 2023 13:14:57 +0000
Received: by outflank-mailman (input) for mailman id 529741;
 Thu, 04 May 2023 13:14:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VaLI=AZ=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1puYnP-00067C-Se
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:14:56 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2062e.outbound.protection.outlook.com
 [2a01:111:f400:7d00::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a7ef1a09-ea7d-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 15:14:53 +0200 (CEST)
Received: from DB6PR0301CA0043.eurprd03.prod.outlook.com (2603:10a6:4:54::11)
 by AS8PR08MB7306.eurprd08.prod.outlook.com (2603:10a6:20b:441::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.25; Thu, 4 May
 2023 13:14:49 +0000
Received: from DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:54:cafe::67) by DB6PR0301CA0043.outlook.office365.com
 (2603:10a6:4:54::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21 via Frontend
 Transport; Thu, 4 May 2023 13:14:49 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT032.mail.protection.outlook.com (100.127.142.185) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.22 via Frontend Transport; Thu, 4 May 2023 13:14:49 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Thu, 04 May 2023 13:14:49 +0000
Received: from e843eb7c2d31.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AF0832D5-E534-47FB-AD98-B11EF29AF27D.1; 
 Thu, 04 May 2023 13:14:38 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e843eb7c2d31.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 13:14:38 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DB9PR08MB7399.eurprd08.prod.outlook.com (2603:10a6:10:371::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Thu, 4 May
 2023 13:14:35 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::9283:e527:bed8:ab23]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::9283:e527:bed8:ab23%7]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 13:14:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7ef1a09-ea7d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=roH7IhSKNfhABtOeE1XmF3YdKI+Xv0DCyrgQ3HF6/MI=;
 b=Tq5C7d4Qi7OxmnLzcn9wfN9nV3v+hJnjt06Fl7KfRYpZ1QuhZQJZSaf0KH8MHx1174eMVL3OtsobNCuealxdk4EKrf/u3uXYdouwpWZLUWuhxK/Grsl0lo1yvUhgtYJzOvwXh3eNZBOzmC0JVl/L6gMNhKuXYBaklWAb32rQ5s0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9c0a31e43e31591e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EaUfYwiJc684TKhFtkWq6kGX71yD7YzPA5Nhi31uCuB5BuphFSD699gmbSebSW5Feag90DJEY8HvKd+jPnC8AWSLypruVVp0D20S2fpAW2DJ/Tme2TMNKu6lvOFHuhjPueKf/lNaM2YENq2lkW2xdMKIPoMv2B4tIg64Shp8RY4WL2JSUXKwJyoJg3PlG8NRxUfr42YLOjggrrnbOgT1pKif2LP8UiBrhuf2PE4DeFjsCVWut0SQgYwjl0OqAqAoKAWdm6basN0lNFolbgIDosGz9p7RxOrfgtu0TiZbbd/Af0N0uv8J4D0d/Z7LHxHf7Otz/AJ1ujV3oT02QxpHOg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=roH7IhSKNfhABtOeE1XmF3YdKI+Xv0DCyrgQ3HF6/MI=;
 b=UKvX6WVAagqDjeve7zt9i2EiLSdxUGZZo1AkM1623gIqzIIaSX8qz0pN35qWZwtVLaCFnubkTbDtAkWe0vU4dTm1GsHyuHVH9f0rBSAtZZMBCYPo7eKGHCdpX9JRHZZrlHr0wPS97x7AO58lNP/BDRTpSFFZJmqaGq2znr6FchCcIEB9qiUoP9PoUBozVqX+t9hKfyEB6210HW+nWZNTUO+9AINekwzlwyN+CGkNv1INloiHDnGWSNgzIcZl/+akbm8PozzuMAesZsWN23YFFkh1kptAqW0wockSoPbLYvu1gvsFrAk2Ywoik6yPF2G0Sh2QY6dpWuSWCVDUBussfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=roH7IhSKNfhABtOeE1XmF3YdKI+Xv0DCyrgQ3HF6/MI=;
 b=Tq5C7d4Qi7OxmnLzcn9wfN9nV3v+hJnjt06Fl7KfRYpZ1QuhZQJZSaf0KH8MHx1174eMVL3OtsobNCuealxdk4EKrf/u3uXYdouwpWZLUWuhxK/Grsl0lo1yvUhgtYJzOvwXh3eNZBOzmC0JVl/L6gMNhKuXYBaklWAb32rQ5s0=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>,
	Marc Bonnici <Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Jens Wiklander <jens.wiklander@linaro.org>
Subject: Re: [XEN PATCH v8 02/22] xen/arm: tee: add a primitive FF-A mediator
Thread-Topic: [XEN PATCH v8 02/22] xen/arm: tee: add a primitive FF-A mediator
Thread-Index: AQHZbdfHy/Y46nL9n0ej3opYe7XQ5q8pKpqAgAAQ9oCAAUdZgIAftheA
Date: Thu, 4 May 2023 13:14:33 +0000
Message-ID: <BF745983-F062-4237-B6B9-E3455E72233E@arm.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-3-jens.wiklander@linaro.org>
 <ad1d5ebd-38e5-bab9-24ac-6facc8ccb95c@xen.org>
 <d7f18393-262b-f2b1-9af3-a371dae75994@citrix.com>
 <CAHUa44FYGeA-knf2HMR6t4B_q3JZ_WuEq9fpTmD2_sJLMwPoQw@mail.gmail.com>
In-Reply-To:
 <CAHUa44FYGeA-knf2HMR6t4B_q3JZ_WuEq9fpTmD2_sJLMwPoQw@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|DB9PR08MB7399:EE_|DBAEUR03FT032:EE_|AS8PR08MB7306:EE_
X-MS-Office365-Filtering-Correlation-Id: a55a6ad9-75f6-4337-4b4f-08db4ca18a11
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <A3759B7A564D934B9E145235EFB31651@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7399
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c029c3c3-b624-4129-eaec-08db4ca180a5
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 13:14:49.2891
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a55a6ad9-75f6-4337-4b4f-08db4ca18a11
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7306

Hi Andrew,

> On 14 Apr 2023, at 10:58, Jens Wiklander <jens.wiklander@linaro.org> wrote:
> 
> Hi,
> 
> On Thu, Apr 13, 2023 at 3:27 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> 
>> On 13/04/2023 1:26 pm, Julien Grall wrote:
>>>> +static int ffa_domain_init(struct domain *d)
>>>> +{
>>>> +    struct ffa_ctx *ctx;
>>>> +
>>>> +    if ( !ffa_version )
>>>> +        return -ENODEV;
>>>> +
>>>> +    ctx = xzalloc(struct ffa_ctx);
>>>> +    if ( !ctx )
>>>> +        return -ENOMEM;
>>>> +
>>>> +    d->arch.tee = ctx;
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +/* This function is supposed to undo what ffa_domain_init() has done */
>>> 
>>> I think there is a problem in the TEE framework. The callback
>>> .relinquish_resources() will not be called if domain_create() failed.
>>> So this will result to a memory leak.
>>> 
>>> We also can't call .relinquish_resources() on early domain creation
>>> failure because relinquishing resources can take time and therefore
>>> needs to be preemptible.
>>> 
>>> So I think we need to introduce a new callback domain_free() that will
>>> be called arch_domain_destroy(). Is this something you can look at?
>> 
>> 
>> Cleanup of an early domain creation failure, however you do it, is at
>> most "the same amount of time again".  It cannot (absent of development
>> errors) take the same indefinite time periods of time that a full
>> domain_destroy() can.
>> 
>> The error path in domain_create() explicitly does call domain_teardown()
>> so we can (eventually) purge these duplicate cleanup paths.  There are
>> far too many easy errors to be made which occur from having split
>> cleanup, and we have had to issue XSAs in the past to address some of
>> them.  (Hence the effort to try and specifically change things, and
>> remove the ability to introduce the errors in the first place.)
>> 
>> 
>> Right now, it is specifically awkward to do this nicely because
>> domain_teardown() doesn't call into a suitable arch hook.
>> 
>> IMO the best option here is extend domain_teardown() with an
>> arch_domain_teardown() state/hook, and wire in the TEE cleanup path into
>> this too.
>> 
>> Anything else is explicitly adding to technical debt that I (or someone
>> else) is going to have to revert further down the line.
>> 
>> If you want, I am happy to prototype the arch_domain_teardown() bit of
>> the fix, but I will have to defer wiring in the TEE part to someone
>> capable of testing it.
> 
> You're more than welcome to prototype the fix, I can test it and add
> it to the next version of the patch set when we're happy with the
> result.

Could you tell us if you are still happy to work on the prototype for
arch_domain_teardown and when you would be able to give a first prototype ?

Regards
Bertrand
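[Editor's sketch] Andrew's proposal, moving per-arch cleanup behind an arch_domain_teardown() hook on the common teardown path, can be caricatured in a few lines. This is a compilable stand-in, not Xen code: the struct layouts, the hook signature, and the call site are all assumptions made for illustration.

```c
#include <stdlib.h>

/* Stand-in types for illustration only -- not Xen's definitions. */
struct ffa_ctx { int unused; };
struct domain { struct { struct ffa_ctx *tee; } arch; };

/*
 * Per-arch hook of the kind proposed: invoked from the common
 * domain_teardown() path, so it runs both for normal destruction and
 * for the domain_create() error path -- no separate cleanup to forget.
 */
static int arch_domain_teardown(struct domain *d)
{
    free(d->arch.tee);  /* undo ffa_domain_init()'s allocation */
    d->arch.tee = NULL;
    return 0;           /* 0 = complete; a real hook may return -ERESTART */
}

/* Common teardown calls into the arch hook instead of duplicating it. */
static int domain_teardown(struct domain *d)
{
    return arch_domain_teardown(d);
}
```

The point of this shape is that the error path and the normal destruction path share one cleanup routine, so a leak like the one Julien spotted cannot reappear.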


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:20:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:20:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529746.824469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYsL-0006wf-5G; Thu, 04 May 2023 13:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529746.824469; Thu, 04 May 2023 13:20:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYsL-0006wY-2T; Thu, 04 May 2023 13:20:01 +0000
Received: by outflank-mailman (input) for mailman id 529746;
 Thu, 04 May 2023 13:20:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puYsJ-0006wS-Qc
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:20:00 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5c9f4431-ea7e-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 15:19:57 +0200 (CEST)
Received: from mail-sn1nam02lp2044.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.44])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 May 2023 09:19:54 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6567.namprd03.prod.outlook.com (2603:10b6:a03:388::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 13:19:50 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 13:19:49 +0000
X-Inumbo-ID: 5c9f4431-ea7e-11ed-8611-37d641c3527e
Message-ID: <a349cf91-b103-7177-1f1b-743f0894f517@citrix.com>
Date: Thu, 4 May 2023 14:19:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/3] xen/misra: xen-analysis.py: allow cppcheck version
 above 2.7
Content-Language: en-GB
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20230504131245.2985400-1-luca.fancellu@arm.com>
 <20230504131245.2985400-3-luca.fancellu@arm.com>
In-Reply-To: <20230504131245.2985400-3-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0215.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:33a::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 13:19:49.4077
 (UTC)

On 04/05/2023 2:12 pm, Luca Fancellu wrote:
> Allow the use of Cppcheck versions above 2.7, with the exception of 2.8
> which is known and documented to be broken.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  xen/scripts/xen_analysis/cppcheck_analysis.py | 20 +++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
> index 658795bb9f5b..c3783e8df343 100644
> --- a/xen/scripts/xen_analysis/cppcheck_analysis.py
> +++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
> @@ -157,13 +157,25 @@ def generate_cppcheck_deps():
>              "Error occured retrieving cppcheck version:\n{}\n\n{}"
>          )
>  
> -    version_regex = re.search('^Cppcheck (.*)$', invoke_cppcheck, flags=re.M)
> +    version_regex = re.search('^Cppcheck (\d+).(\d+)(?:.\d+)?$',
> +                              invoke_cppcheck, flags=re.M)
>      # Currently, only cppcheck version >= 2.7 is supported, but version 2.8 is
>      # known to be broken, please refer to docs/misra/cppcheck.txt
> -    if (not version_regex) or (not version_regex.group(1).startswith("2.7")):
> +    if (not version_regex) or len(version_regex.groups()) < 2:
>          raise CppcheckDepsPhaseError(
> -                "Can't find cppcheck version or version is not 2.7"
> -              )
> +            "Can't find cppcheck version or version not identified: "
> +            "{}".format(invoke_cppcheck)
> +        )
> +    major = int(version_regex.group(1))
> +    minor = int(version_regex.group(2))
> +    if major < 2 or (major == 2 and minor < 7):
> +        raise CppcheckDepsPhaseError(
> +            "Cppcheck version < 2.7 is not supported"
> +        )
> +    if major == 2 and minor == 8:
> +        raise CppcheckDepsPhaseError(
> +            "Cppcheck version 2.8 is known to be broken, see the documentation"
> +        )

Python sorts tuples the helpful way around, so for example

v = (2, 9)

if v < (2, 7) or v == (2, 8):
    # handle error

does what you want, and far more concisely.
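[Editor's sketch] Applied to the patch, the tuple comparison collapses the three checks into one. The function name and messages below are illustrative, not the patch's code, and the version-string dots are escaped here, which the posted regex does not do.

```python
import re

def check_cppcheck_version(output):
    """Sketch of the tuple-comparison approach on `cppcheck --version` text."""
    m = re.search(r'^Cppcheck (\d+)\.(\d+)(?:\.\d+)?$', output, flags=re.M)
    if not m:
        raise ValueError("Can't find cppcheck version: {}".format(output))
    v = (int(m.group(1)), int(m.group(2)))
    # One comparison covers both "too old" and the known-broken 2.8.
    if v < (2, 7) or v == (2, 8):
        raise ValueError("Unsupported cppcheck version {}.{}".format(*v))
    return v
```

For example, "Cppcheck 2.10" is accepted while "Cppcheck 2.8" and "Cppcheck 1.88" raise, because Python compares tuples element by element.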

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:21:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:21:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529749.824480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYtN-0008IU-Ex; Thu, 04 May 2023 13:21:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529749.824480; Thu, 04 May 2023 13:21:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYtN-0008IN-Bz; Thu, 04 May 2023 13:21:05 +0000
Received: by outflank-mailman (input) for mailman id 529749;
 Thu, 04 May 2023 13:21:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puYtM-0008IH-9n
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:21:04 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20630.outbound.protection.outlook.com
 [2a01:111:f400:fe12::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8402c845-ea7e-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 15:21:02 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8109.eurprd04.prod.outlook.com (2603:10a6:102:1c2::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 13:21:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 13:21:00 +0000
X-Inumbo-ID: 8402c845-ea7e-11ed-8611-37d641c3527e
Message-ID: <12008cc4-965e-9d7d-b655-95c867b3bcb2@suse.com>
Date: Thu, 4 May 2023 15:20:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/bitops: Drop include of cpufeatureset
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504130755.3181176-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230504130755.3181176-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0149.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 13:21:00.8562
 (UTC)

On 04.05.2023 15:07, Andrew Cooper wrote:
> Nothing in x86/bitops uses anything from x86/cpufeatureset, and it is creating
> problems when trying to untangle other aspects of feature handling.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> ---
>  xen/arch/x86/include/asm/bitops.h | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/xen/arch/x86/include/asm/bitops.h b/xen/arch/x86/include/asm/bitops.h
> index 5a71afbc89d5..aa8bd65b4565 100644
> --- a/xen/arch/x86/include/asm/bitops.h
> +++ b/xen/arch/x86/include/asm/bitops.h
> @@ -6,7 +6,6 @@
>   */
>  
>  #include <asm/alternative.h>
> -#include <asm/cpufeatureset.h>

Prior to your 44325775f724 ("x86/cpuid: Untangle the <asm/cpufeature.h>
include hierachy") it was asm/cpufeature.h that was included here,
presumably for the use of X86_FEATURE_BMI1 in __scanbit(). I guess that
wants to be asm/cpufeatures.h now instead?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:24:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:24:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529752.824489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYw5-0000Tw-Re; Thu, 04 May 2023 13:23:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529752.824489; Thu, 04 May 2023 13:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYw5-0000Tp-P4; Thu, 04 May 2023 13:23:53 +0000
Received: by outflank-mailman (input) for mailman id 529752;
 Thu, 04 May 2023 13:23:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYMk=AZ=citrix.com=prvs=481980579=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puYw4-0000SC-DT
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:23:52 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e813e0cf-ea7e-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 15:23:51 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e813e0cf-ea7e-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683206631;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=/pvEq/xZicrqe/jwH85spYOYRNCODnIg5BSH4wsVGzc=;
  b=Q4/eDSd3c2ItnxLGYWt0z7h9BvTejdvIfBcyxEfZ51J6rTdatfSLCO59
   CzgFv/NwOQlTb1P9cQp/kNslloGooJa2Lpzk9X77zN4YRlih2gOpBjnDj
   8Nq0AqVaadBikmhPc+QiMIxe/vZlafHxbr2Lvh0nqpmMCAvZL4l1ayskz
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107748272
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:948xPaOPQ9hc62fvrR3al8FynXyQoLVcMsEvi/4bfWQNrUok3zVRy
 2BLCmDSO6rcZWX3e4pwat7joUMD657czNdrHAto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tF5gBmPJingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0vRQIH1+y
 dNBEgIMay6boOSu56K+UPY506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLo3mvuogX/uNSVVsluPqYI84nTJzRw327/oWDbQUoXSGpoJzhzH/
 Aoq+UzlXC4kZYOY7Aa9qE/zg8TqgAqkUYENQejQGvlC3wTImz175ActfUS/iem0jFakXNBSI
 FBS/TAhxYA77EGxR8PxdwG5qnWD+BUbXrJ4A+A8rQ2A1KfQywKYHXQfCC5MbsQ8s807TiBs0
 UWG9+4FHhQ27ufTEyjEsO7J83XrY3N9wXI+iTEsXywk/+nfj9gJvBPKcM5EFraSntjvBmSlq
 9yVlxTSl4n/nOZSifXgpQmd023zznTaZlVrv1uKBwpJ+is8Pdf4PNLwtDA3+N4adO6kok+9U
 G/ociR0xMQHFtmzmSOEW43h95n5tq/eYFUwbbOCdqTNFghBGFb5J+i8GBkkeC9U3j8sIFcFm
 nP7twJL/4N0N3C3d6JxaI/ZI510nfO5SYy8DqiLP4cmjn1NSeN61Hs2OR74M57FySDAbp3Ty
 b/EKJ3xXB72+IxszSasRvd17ILHMhsWnDuJLbiilkTP7FZrTCLNIVvzGAfUP79RAWLtiFm9z
 uuzwOPQl00FCLOkOXa/HEx6BQliEEXXzKve86R/HtNv6CI/cI39I5c9GY8cRrE=
IronPort-HdrOrdr: A9a23:7N5aD6qMMpAyQkxcl8ZjXIoaV5rveYIsimQD101hICG9Evb0qy
 nOpoV/6faQslwssR4b9uxoVJPvfZq+z+8W3WByB9eftWDd0QPFEGgL1+DfKlbbak7DH4BmtJ
 uJc8JFeafN5VoRt7eG3OFveexQvOVu88qT9JjjJ28Gd3APV0n5hT0JcjpyFCdNNW57LKt8Lr
 WwzOxdqQGtfHwGB/7LfUXsD4D41rv2fIuNW29+OyIa
X-Talos-CUID: 9a23:mLhtnGM8gkpiE+5DSg5G6H4oNZkfQHzY8m/3JVefGWNzV+jA
X-Talos-MUID: =?us-ascii?q?9a23=3AfLdoPg1klOxh+1mii7qGBMvHrDUj//qRNhFSlrA?=
 =?us-ascii?q?6g5O5ailbHiq00z20Xdpy?=
X-IronPort-AV: E=Sophos;i="5.99,249,1677560400"; 
   d="scan'208";a="107748272"
Date: Thu, 4 May 2023 14:23:45 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: George Dunlap <george.dunlap@cloud.com>
CC: <xen-devel@lists.xenproject.org>, Anthony Perard
	<anthony.perard@cloud.com>, Wei Liu <wl@xenproject.org>
Subject: Re: [PATCH 1/2] xenalyze: Handle start-of-day ->RUNNING transitions
Message-ID: <96a6a07f-b61a-4c11-9dcb-0c0bf1942bf3@perard>
References: <20230327161326.48851-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230327161326.48851-1-george.dunlap@cloud.com>

On Mon, Mar 27, 2023 at 05:13:25PM +0100, George Dunlap wrote:
> A recent xentrace highlighted an unhandled corner case in the vcpu
> "start-of-day" logic, if the trace starts after the last running ->
> non-running transition, but before the first non-running -> running
> transition.  Because start-of-day wasn't handled, vcpu_next_update()
> was expecting p->current to be NULL, and tripping out with the
> following error message when it wasn't:
> 
> vcpu_next_update: FATAL: p->current not NULL! (d32768dv$p, runstate RUNSTATE_INIT)
> 
> where 32768 is the DEFAULT_DOMAIN, and $p is the pcpu number.
> 
> Instead of calling vcpu_start() piecemeal throughout
> sched_runstate_process(), call it at the top of the function if the
> vcpu in question is still in RUNSTATE_INIT, so that we can handle all
> the cases in one place.
> 
> Sketch out at the top of the function all cases which we need to
> handle, and what to do in those cases.  Some transitions tell us where
> v is running; some transitions tell us about what is (or is not)
> running on p; some transitions tell us neither.
> 
> If a transition tells us where v is now running, update its state;
> otherwise leave it in INIT, in order to avoid having to deal with TSC
> skew on start-up.
> 
> If a transition tells us what is or is not running on p, update
> p->current (either to v or NULL).  Otherwise leave it alone.
> 
> If neither, do nothing.
> 
> Reifying those rules:
> 
> - If we're continuing to run, set v to RUNNING, and use p->first_tsc
>   as the runstate time.
> 
> - If we're starting to run, set v to RUNNING, and use ri->tsc as the
>   runstate time.
> 
> - If v is being descheduled, leave v in the INIT state to avoid dealing
>   with TSC skew; but set p->current to NULL so that whatever is
>   scheduled next won't trigger the assert in vcpu_next_update().
> 
> - If a vcpu is waking up (switching from one non-runnable state to
>   another non-runnable state), leave v in INIT, and p in whatever
>   state it's in (which may be the default domain, or some other vcpu
>   which has already run).
> 
> While here, fix the comment above vcpu_start; it's called when the
> vcpu state is INIT, not when current is the default domain.
> 
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

-- 
Anthony PERARD
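
[For readers skimming the archive: the four reified rules in the quoted
commit message amount to a small decision table. The sketch below restates
them as standalone C; the enum values, struct, and function name are
hypothetical and do not reflect xenalyze's actual code, which tracks TSC
timestamps and per-pcpu state that this model omits.]

```c
#include <stddef.h>

/* Simplified runstate values; xenalyze's real set is larger. */
enum runstate { RS_RUNNING, RS_RUNNABLE, RS_BLOCKED, RS_OFFLINE };

enum vstate { V_INIT, V_RUNNING };

struct decision {
    enum vstate v;       /* new state for the vcpu */
    int set_current;     /* should p->current be updated at all? */
    int current_is_v;    /* if updated: to v (1) or to NULL (0) */
};

/*
 * Model of the start-of-day rules for a vcpu still in RUNSTATE_INIT:
 *  - transition into RUNNING: v becomes RUNNING, p->current = v;
 *  - transition out of RUNNING: leave v in INIT (avoids TSC skew on
 *    start-up) but clear p->current so the next scheduled vcpu doesn't
 *    trip the assert in vcpu_next_update();
 *  - non-runnable -> non-runnable: touch neither v nor p.
 */
static struct decision start_of_day(enum runstate old, enum runstate new)
{
    struct decision d = { V_INIT, 0, 0 };

    if ( new == RS_RUNNING )
    {
        d.v = V_RUNNING;
        d.set_current = 1;
        d.current_is_v = 1;
    }
    else if ( old == RS_RUNNING )
    {
        d.set_current = 1;
        d.current_is_v = 0;   /* p->current = NULL */
    }
    return d;
}
```

[The "continuing to run" vs "starting to run" cases differ only in which
TSC value they stamp (p->first_tsc vs ri->tsc), which this state-only model
deliberately leaves out.]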


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:24:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:24:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529753.824500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYwO-0000sM-7N; Thu, 04 May 2023 13:24:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529753.824500; Thu, 04 May 2023 13:24:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYwO-0000sF-3D; Thu, 04 May 2023 13:24:12 +0000
Received: by outflank-mailman (input) for mailman id 529753;
 Thu, 04 May 2023 13:24:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYMk=AZ=citrix.com=prvs=481980579=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puYwM-0000qi-Ju
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:24:10 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f20e8753-ea7e-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 15:24:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f20e8753-ea7e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683206648;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=rJ+UqEziv1wa7BwSqkCSSHwlAp/FpZLFP6MYajFVB/U=;
  b=AMlxQBXlGmCqAI2N1U+aukLbyufSWTVSqGYpfe3zbd0JJe9HwnItIUqZ
   qzhbSAwGXoH80vzKwjTrfngmWJzSStTduzxPQ9LgHOZh8x4o0mbr+RBuS
   S4mbCbfESrrtqWq4PCyGz8gvuLrpRsXdvFdE/VvuoCyvfV4HsHOUPQHfH
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110304925
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:AhfKpqK6077jzOtTFE+Rp5UlxSXFcZb7ZxGr2PjKsXjdYENS0WAHm
 2UYXzuGbveCYDCgKNB/Oo+z9kpQvpOGy98xQAJlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4wRiPakjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5QW21zp
 aQdOAs9USCDvPCVkPGiSsVz05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TTHJ0OxhrJ/
 D+uE2LRIz1DbcC08CK+o3epj9eImA/GALA4G+jtnhJtqALKnTFCYPEMbnOyufSjg1Syc85eI
 UcTvCEpqMAa60iDXtT7GRqirxasrhMaHtZdDeA+wAWM0bbPpRaUAHAeSTxMY8Bgs9U5LQHGz
 XfQwYmvX2Y29uTIFzTErOz8QS6O1TY9CjUOWH9cSBs+0+bToLohrUKMV9ZPD/vg5jHqIg0c0
 wxmvQBn2eVI1ZdRh/rklbzUq2ny/8aUF2bZ8i2SBzv4tV0hOeZJcqTysTDmAeB8wJF1p7Vrl
 FwNgICg4e8HFvlhfwTdEbxWTNlFCxtoWQAwYGKD/LF7rVxBA1b5IehtDMhWfS+FyPosdz7ze
 1P0sghM/pJVN3bCRfYpM9noV5xzlfC/RImNuhXoUzazSsIpKF/vEN9GPCZ8IFwBYGBzyPpia
 P93gO6nDGoACLQP8Qdas9w1iOdxrghnnDO7eHwO50j/uVZoTCLPGOht3ZrnRrxR0Z5oVy2Pr
 44Fb5XQkEo3vS+XSnC/zLP/5GsidRATba0aYeQNHgJfCmKKwF0cNsI=
IronPort-HdrOrdr: A9a23:bwKk9qNLCF2gIcBcTu+jsMiBIKoaSvp037BL7S9MoHluGfBw+P
 re+8jzuSWUtN9pYgBEpTniAse9qA3nhPpICNIqTNSftWDd0QPDQe1fBO3Zslvd8kbFltK1u5
 0QCJSWIeeAb2RSvILX5xS5DsZl4PTvytHTuQ4G9QYVcei9UdAZ0ztE
X-Talos-CUID: =?us-ascii?q?9a23=3A4lplDmjVifv3C8rTFuI+T2+qFjJuVSfR8SeIJl+?=
 =?us-ascii?q?DOVlZSIaHag/T4aNIjJ87?=
X-Talos-MUID: 9a23:9INY5Qmj0qYYlB0P91OUdnpDGJ4x/4+XEHtclJAPg46BGREoCR2S2WE=
X-IronPort-AV: E=Sophos;i="5.99,249,1677560400"; 
   d="scan'208";a="110304925"
Date: Thu, 4 May 2023 14:23:59 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: George Dunlap <george.dunlap@cloud.com>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@cloud.com>
Subject: Re: [PATCH 2/2] xenalyze: Basic TRC_HVM_EMUL handling
Message-ID: <a3fc0719-7972-448b-afa7-175822eadac1@perard>
References: <20230327161326.48851-1-george.dunlap@cloud.com>
 <20230327161326.48851-2-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230327161326.48851-2-george.dunlap@cloud.com>

On Mon, Mar 27, 2023 at 05:13:26PM +0100, George Dunlap wrote:
> For now, mainly just do volume analysis and get rid of the warnings.
> 
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:25:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:25:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529760.824509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYxg-0001dl-HF; Thu, 04 May 2023 13:25:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529760.824509; Thu, 04 May 2023 13:25:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puYxg-0001de-Ea; Thu, 04 May 2023 13:25:32 +0000
Received: by outflank-mailman (input) for mailman id 529760;
 Thu, 04 May 2023 13:25:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puYxf-0001dU-3A; Thu, 04 May 2023 13:25:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puYxe-0006d4-Pq; Thu, 04 May 2023 13:25:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puYxe-000740-8R; Thu, 04 May 2023 13:25:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puYxe-0007Z5-7w; Thu, 04 May 2023 13:25:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PP6A64J4n0z3h6d7D66HKiUOdO/9/ZCP2ucff9ylkYA=; b=OKNJCbBbQ4zo3IYjnfNRHXA0yU
	0IvoVGoS1raQDNXM4Kvvgu33+hMbzYYux4JOIukmsnlfaMNt1jaBM6Xbj/3raSUD6PsvuhfaqFtE6
	ckv72ahG72o6CiM08p9sHayLLWYekm3hcLPY8UNL0LhcMItPG4KGZzb4Ke9vnvjAbr/Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180522-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180522: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=044f8cf70a2fdf3b9e4c4d849c66e7855d2c446a
X-Osstest-Versions-That:
    qemuu=c586691e676214eb7edf6a468e84e7ce3b314d43
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 May 2023 13:25:30 +0000

flight 180522 qemu-mainline real [real]
flight 180529 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180522/
http://logs.test-lab.xenproject.org/osstest/logs/180529/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail pass in 180529-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180514
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180514
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180514
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180514
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180514
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180514
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180514
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180514
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180514
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                044f8cf70a2fdf3b9e4c4d849c66e7855d2c446a
baseline version:
 qemuu                c586691e676214eb7edf6a468e84e7ce3b314d43

Last test of basis   180514  2023-05-03 05:31:08 Z    1 days
Testing same since   180522  2023-05-03 23:41:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Dickon Hood <dickon.hood@codethink.co.uk>
  Juan Quintela <quintela@redhat.com>
  Junqiang Wang <wangjunqiang@iscas.ac.cn>
  Kiran Ostrolenk <kiran.ostrolenk@codethink.co.uk>
  Nazar Kazakov <nazar.kazakov@codethink.co.uk>
  Richard Henderson <richard.henderson@linaro.org>
  Weiwei Li <liweiwei@iscas.ac.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   c586691e67..044f8cf70a  044f8cf70a2fdf3b9e4c4d849c66e7855d2c446a -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:28:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:28:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529765.824520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZ0g-0002GA-WC; Thu, 04 May 2023 13:28:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529765.824520; Thu, 04 May 2023 13:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZ0g-0002G3-TK; Thu, 04 May 2023 13:28:38 +0000
Received: by outflank-mailman (input) for mailman id 529765;
 Thu, 04 May 2023 13:28:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puZ0f-0002Fx-P5
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:28:37 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 91d4ec42-ea7f-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 15:28:36 +0200 (CEST)
Received: from mail-mw2nam12lp2042.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 May 2023 09:28:33 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY1PR03MB7192.namprd03.prod.outlook.com (2603:10b6:a03:534::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Thu, 4 May
 2023 13:28:28 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 13:28:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91d4ec42-ea7f-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683206916;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=tMhNF3tjmiLpEHIrd9aTonEoqOY14bSGzTOIv+x/trw=;
  b=KKHGK9vE0ZidBzIy4D/rsz5vvdFuXVHEc0ZRHjGO75YA5u4/SYL6xtaB
   5pp2xqpG2VyH6uROz/oDT5E47632+knJmJB65HzyaCSvx7oKkjAuLCtSw
   GEHxQ8xTBtLsqCkKprqy8RWxt+Ut3RFcXXnpJ6oNjmzM48+CaYV0craQi
   k=;
X-IronPort-RemoteIP: 104.47.66.42
X-IronPort-MID: 107883195
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,249,1677560400"; 
   d="scan'208";a="107883195"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d4d5dFsC43DHD09xJGtFYg87Mx/e+ZnBeEiEzaBU8ljIxpRQOzYMt90lFtDhyIHyAewL7Z2GWMTBSvrbiLn30FHsVE6EEwPyEH+tbbxIwWFkzgGv4b/e2DWi9dW/B709Ry0Q5T0eYgZjdElssundow07gXu3DKS/bgvWO7bS/0Rz4rVm/YqzL0N0s9oU9YJ0sk9ZR+MmlYxNrQ6z4tiRX2/T7DMA9wIq0eSrAUl0ilDUiVyaDrdvVBmzWyrcHcnl7mbhctX1qZMAOxcCM2lJWBCABOz43NxNBsGn4iSaZhdO61OBurDGlhu0U+hEIC5AAzX84LggDFJ5ILXgEQRClQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kMmuhkZuM146KCfqk1A9ON90F2AnP2kioSb1kYH7Asw=;
 b=Gkq+N1ixduwaTugOu8n3yxlKIdcSKvbgEAAaY/Lg3akN5ub7vlxRKhfcMs1vUVwbaMvk3EYjqvW2q/AqCwVmU31Kp9FBznXOj5fkjmlAuwvYfvsOy89SP2WauVYT+p0sGK4f8Q6ELq6U1C2b5o1WF8pI/MKAFerw4F6ha4nBK/Ef5jvk/ZYqGB8ACmX5wDo4AvkDNyqXEWJ/0VUvKJqHOzSTv1XZ2dw3eHklvd9AtI9V+OmlRMiO0JeBJKH5yv2qSv3FXMVhWdoOm5weLbhTRKevRyiwJ0JmmfS/NbfQaJJUJCCYz2IyI8T8/+/NUHHRIYeZ4sfsjoe3eZ2G04xKwA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kMmuhkZuM146KCfqk1A9ON90F2AnP2kioSb1kYH7Asw=;
 b=IavGi9YWq5B3o6x6FVqBwq+KhAGhUFCgH30X+pKmE2qDmM2XL5bHzS9jrlxHuCfg0QjXzal12UjXN8UwFfICnJOu3LBv1L7j+JJr5UmX5x+CLB/OmkM+1QUh6raS4c7MOUOSJw55ZZae2JPeYsoXCyPoyZ+KvixldenO6/O/q6U=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <3b82513c-f726-61e3-d3c6-9ad41c5db6ac@citrix.com>
Date: Thu, 4 May 2023 14:28:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/bitops: Drop include of cpufeatureset
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504130755.3181176-1-andrew.cooper3@citrix.com>
 <12008cc4-965e-9d7d-b655-95c867b3bcb2@suse.com>
In-Reply-To: <12008cc4-965e-9d7d-b655-95c867b3bcb2@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LNXP265CA0049.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5d::13) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BY1PR03MB7192:EE_
X-MS-Office365-Filtering-Correlation-Id: b6c81d71-2078-43bb-a51d-08db4ca37256
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b6c81d71-2078-43bb-a51d-08db4ca37256
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 13:28:28.7754
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: J+TsBeQnU1ohS5BtHtKjZAZ62Lf2gQW+PrvRJR45TEIWzMdSWnHcVBR4y+Ca2fo0ZEFKWfrQLRiZfRhGe+r+DHfl3clLcPHG+Zu6jI7i8+w=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY1PR03MB7192

On 04/05/2023 2:20 pm, Jan Beulich wrote:
> On 04.05.2023 15:07, Andrew Cooper wrote:
>> Nothing in x86/bitops uses anything from x86/cpufeatureset, and it is creating
>> problems when trying to untangle other aspects of feature handling.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> ---
>>  xen/arch/x86/include/asm/bitops.h | 1 -
>>  1 file changed, 1 deletion(-)
>>
>> diff --git a/xen/arch/x86/include/asm/bitops.h b/xen/arch/x86/include/asm/bitops.h
>> index 5a71afbc89d5..aa8bd65b4565 100644
>> --- a/xen/arch/x86/include/asm/bitops.h
>> +++ b/xen/arch/x86/include/asm/bitops.h
>> @@ -6,7 +6,6 @@
>>   */
>>  
>>  #include <asm/alternative.h>
>> -#include <asm/cpufeatureset.h>
> Prior to your 44325775f724 ("x86/cpuid: Untangle the <asm/cpufeature.h>
> include hierachy") it was asm/cpufeature.h that was included here,
> presumably for the use of X86_FEATURE_BMI1 in __scanbit(). I guess that
> wants to be asm/cpufeatures.h now instead?

Oh.  I missed that, but nothing fails to compile, which means that
there's a prior path including cpufeatureset anyway.

I think I'll drop this and leave the header rearranging to a later
point.  I ended up having to do the untangling differently anyway.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:29:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:29:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529770.824530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZ1j-0002vu-FS; Thu, 04 May 2023 13:29:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529770.824530; Thu, 04 May 2023 13:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZ1j-0002vn-CF; Thu, 04 May 2023 13:29:43 +0000
Received: by outflank-mailman (input) for mailman id 529770;
 Thu, 04 May 2023 13:29:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UZDr=AZ=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1puZ1i-0002vd-3o
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:29:42 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20608.outbound.protection.outlook.com
 [2a01:111:f400:fe16::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b87d0ade-ea7f-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 15:29:39 +0200 (CEST)
Received: from DU2PR04CA0224.eurprd04.prod.outlook.com (2603:10a6:10:2b1::19)
 by AS2PR08MB8879.eurprd08.prod.outlook.com (2603:10a6:20b:5f6::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Thu, 4 May
 2023 13:29:27 +0000
Received: from DBAEUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2b1:cafe::c9) by DU2PR04CA0224.outlook.office365.com
 (2603:10a6:10:2b1::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.25 via Frontend
 Transport; Thu, 4 May 2023 13:29:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT027.mail.protection.outlook.com (100.127.142.237) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.22 via Frontend Transport; Thu, 4 May 2023 13:29:27 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Thu, 04 May 2023 13:29:27 +0000
Received: from a4358d686b9b.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2593CD0A-A6BF-41A1-AAC0-2B0613541D6F.1; 
 Thu, 04 May 2023 13:29:15 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a4358d686b9b.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 13:29:15 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DB9PR08MB8409.eurprd08.prod.outlook.com (2603:10a6:10:3d7::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 13:29:12 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be%6]) with mapi id 15.20.6363.021; Thu, 4 May 2023
 13:29:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b87d0ade-ea7f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3ioYc2i7kK+BhRWyxLCx2x30+Dx/K6ksmcXzJy71rcM=;
 b=KfUgqdLpDH9LqZzqfKkq5wbuDve+oCWVumPMxI9LLvuyzOiRMN3TpeNq9FjHQGbN137ZTYRLa6esXh6im5CczOjMbYtXFwzAdVDJ+DEofNOQB2bPS1f9zXYRjPAh+WeVBUspUXw1gIjPklQXD/HWg03DqBLRgaKVDhqwEzOszvE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1155cd88e01115dd
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B7RgcxXW/soawQ5XTGABrJh8CU985W/sUO8XR21NzCtAvH36uKy07ECEyoI5TLZHkqI8Ey2jMAKGnWYINLuVD05vF9qMN9BEOPOh0+fR4j6otT5QvBDe5Nh/ehOXjmFQTAHysSBMsySQo7FInjqRiWTYwq459gUUWiOZh1ri5RDxjSapu7aQiEG2NskWF18xXtq0G2lafb2qnPsMoGI6HDJokarQBULOdLu9/zff0ko6hxrP41cxGqnUdfzhsdMj4T8vFOEHn2q2WvnNwSMHENjBMctskEQxCaB1ImmWAAoh8Pb0I1kzMus1S1qhYfZsSuk1Xd8DGI8OryH4Gh3mFw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3ioYc2i7kK+BhRWyxLCx2x30+Dx/K6ksmcXzJy71rcM=;
 b=HZt6A48AL0Y14l5trIvBleqIK1bP2aYPuYbYpuWrLW2bKmvW3VXBabIY7pEObqdBeC6JqavoHmoiQAhS34pN+7tpOou5fYOfjHFKZMVaZgzKTSW8IWqd0Kccw1WIrw66kyJXPX5HMrB6ixBgn5yLOo/IVDRlWG7GeUm+uOscAKxJeGwM7XS7USeCFw2cspw7YfKGA9boMepWEuEqRQ5rzVBdZidQ0q4olfRG8Zq0XvBjN8lby5BNIQI2nEu5Tc6VuJvasIKhCdJ5qH6sfgrg2RfYOJVlQ0Szcs20N6MpJGXSLCg8Mhgk8DNPH6mEv4Mm591WIapWnu8gg0kjTVdX2A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3ioYc2i7kK+BhRWyxLCx2x30+Dx/K6ksmcXzJy71rcM=;
 b=KfUgqdLpDH9LqZzqfKkq5wbuDve+oCWVumPMxI9LLvuyzOiRMN3TpeNq9FjHQGbN137ZTYRLa6esXh6im5CczOjMbYtXFwzAdVDJ+DEofNOQB2bPS1f9zXYRjPAh+WeVBUspUXw1gIjPklQXD/HWg03DqBLRgaKVDhqwEzOszvE=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 2/3] xen/misra: xen-analysis.py: allow cppcheck version
 above 2.7
Thread-Topic: [PATCH 2/3] xen/misra: xen-analysis.py: allow cppcheck version
 above 2.7
Thread-Index: AQHZfooyZwW8K0Ij5Ua0WLJyhged2K9KGRcAgAACm4A=
Date: Thu, 4 May 2023 13:29:12 +0000
Message-ID: <55F767F3-1963-4842-9FBB-9CEF547E9C6B@arm.com>
References: <20230504131245.2985400-1-luca.fancellu@arm.com>
 <20230504131245.2985400-3-luca.fancellu@arm.com>
 <a349cf91-b103-7177-1f1b-743f0894f517@citrix.com>
In-Reply-To: <a349cf91-b103-7177-1f1b-743f0894f517@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DB9PR08MB8409:EE_|DBAEUR03FT027:EE_|AS2PR08MB8879:EE_
X-MS-Office365-Filtering-Correlation-Id: 208d0b27-8878-4221-c09e-08db4ca39553
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <14A7FD90F52B5E408553DD35886720C4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB8409
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0aab4699-f55d-488a-3033-08db4ca38c5d
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 13:29:27.1731
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 208d0b27-8878-4221-c09e-08db4ca39553
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8879


> On 4 May 2023, at 14:19, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> On 04/05/2023 2:12 pm, Luca Fancellu wrote:
>> Allow the use of Cppcheck version above 2.7, exception for 2.8 which
>> is known and documented do be broken.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> xen/scripts/xen_analysis/cppcheck_analysis.py | 20 +++++++++++++++----
>> 1 file changed, 16 insertions(+), 4 deletions(-)
>> 
>> diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
>> index 658795bb9f5b..c3783e8df343 100644
>> --- a/xen/scripts/xen_analysis/cppcheck_analysis.py
>> +++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
>> @@ -157,13 +157,25 @@ def generate_cppcheck_deps():
>>             "Error occured retrieving cppcheck version:\n{}\n\n{}"
>>         )
>> 
>> -    version_regex = re.search('^Cppcheck (.*)$', invoke_cppcheck, flags=re.M)
>> +    version_regex = re.search('^Cppcheck (\d+).(\d+)(?:.\d+)?$',
>> +                              invoke_cppcheck, flags=re.M)
>>     # Currently, only cppcheck version >= 2.7 is supported, but version 2.8 is
>>     # known to be broken, please refer to docs/misra/cppcheck.txt
>> -    if (not version_regex) or (not version_regex.group(1).startswith("2.7")):
>> +    if (not version_regex) or len(version_regex.groups()) < 2:
>>         raise CppcheckDepsPhaseError(
>> -                "Can't find cppcheck version or version is not 2.7"
>> -              )
>> +            "Can't find cppcheck version or version not identified: "
>> +            "{}".format(invoke_cppcheck)
>> +        )
>> +    major = int(version_regex.group(1))
>> +    minor = int(version_regex.group(2))
>> +    if major < 2 or (major == 2 and minor < 7):
>> +        raise CppcheckDepsPhaseError(
>> +            "Cppcheck version < 2.7 is not supported"
>> +        )
>> +    if major == 2 and minor == 8:
>> +        raise CppcheckDepsPhaseError(
>> +            "Cppcheck version 2.8 is known to be broken, see the documentation"
>> +        )
> 
> Python sorts tuples the helpful way around, so for example
> 
> v = (2, 9)
> 
> if v < (2, 7) or v == (2, 8):
>     # handle error
> 
> does what you want, and far more concisely.

Hi Andrew,

Thank you, this is very helpful, it's clear that I'm at my first experiences with Python,
I will change the code to use this more coincise form.

Cheers,
Luca

> 
> ~Andrew
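[Editorial note: the tuple-comparison approach suggested in this thread can be sketched as a standalone checker. The function name `check_cppcheck_version` and the error messages below are illustrative only, not the identifiers used in `cppcheck_analysis.py`.]

```python
import re


def check_cppcheck_version(banner):
    """Parse a 'Cppcheck X.Y[.Z]' banner and validate the version.

    Raises ValueError if the version cannot be parsed, is older than
    2.7, or is the known-broken 2.8 release.
    """
    m = re.search(r"^Cppcheck (\d+)\.(\d+)(?:\.\d+)?$", banner, flags=re.M)
    if not m:
        raise ValueError("Can't find cppcheck version in: {!r}".format(banner))
    v = (int(m.group(1)), int(m.group(2)))
    # Tuples compare element-wise and lexicographically, so a single
    # expression covers both the "too old" and the "exactly 2.8" cases.
    if v < (2, 7) or v == (2, 8):
        raise ValueError("Unsupported cppcheck version {}.{}".format(*v))
    return v
```

For example, `check_cppcheck_version("Cppcheck 2.9")` returns `(2, 9)`, while banners reporting 2.6 or 2.8 raise.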


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:31:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:31:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529773.824539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZ32-0004Lv-PY; Thu, 04 May 2023 13:31:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529773.824539; Thu, 04 May 2023 13:31:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZ32-0004Lo-Mn; Thu, 04 May 2023 13:31:04 +0000
Received: by outflank-mailman (input) for mailman id 529773;
 Thu, 04 May 2023 13:31:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wc2m=AZ=tls.msk.ru=mjt@srs-se1.protection.inumbo.net>)
 id 1puZ31-0004Lh-Q0
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:31:03 +0000
Received: from isrv.corpit.ru (isrv.corpit.ru [86.62.121.231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e89b52f9-ea7f-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 15:31:01 +0200 (CEST)
Received: from tsrv.corpit.ru (tsrv.tls.msk.ru [192.168.177.2])
 by isrv.corpit.ru (Postfix) with ESMTP id E39D4400FD;
 Thu,  4 May 2023 16:30:59 +0300 (MSK)
Received: from [192.168.177.130] (mjt.wg.tls.msk.ru [192.168.177.130])
 by tsrv.corpit.ru (Postfix) with ESMTP id E55BC9E;
 Thu,  4 May 2023 16:30:58 +0300 (MSK)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e89b52f9-ea7f-11ed-8611-37d641c3527e
Message-ID: <5f7329f2-8d17-12de-4ea9-74a5932b80c5@msgid.tls.msk.ru>
Date: Thu, 4 May 2023 16:30:58 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] 9pfs/xen: Fix segfault on shutdown
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>, qemu-devel@nongnu.org
Cc: Greg Kurz <groug@kaod.org>, Christian Schoenebeck
 <qemu_oss@crudebyte.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>
References: <20230502143722.15613-1-jandryuk@gmail.com>
From: Michael Tokarev <mjt@tls.msk.ru>
In-Reply-To: <20230502143722.15613-1-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

02.05.2023 17:37, Jason Andryuk wrote:
> xen_9pfs_free can't use gnttabdev since it is already closed and NULL-ed
> out when free is called.  Do the teardown in _disconnect().  This
> matches the setup done in _connect().

Ping?

/mjt
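[ Editor's note: the lifecycle bug described in the patch summary can be modeled schematically. This is a toy Python model with illustrative names, not QEMU's actual C structures or API; it only shows why teardown that needs the grant-table handle must run in disconnect, before the handle is closed and NULL-ed: ]

```python
class Xen9pDev:
    """Toy model of the 9pfs/xen device lifecycle."""

    def __init__(self):
        self.gnttabdev = object()   # stands in for the open gnttab handle
        self.rings_mapped = True

    def _unmap_rings(self):
        # Requires a live handle; running this from free(), after the
        # handle has been closed, is the modeled segfault.
        assert self.gnttabdev is not None, "use-after-close"
        self.rings_mapped = False

    def disconnect(self):
        self._unmap_rings()         # fix: tear down while the handle is live
        self.gnttabdev = None       # handle closed and NULL-ed here

    def free(self):
        # After the fix, nothing left here needs gnttabdev.
        assert not self.rings_mapped
```

With teardown moved into `disconnect()`, the later `free()` no longer touches the dead handle, mirroring how the fix matches teardown to the setup done in `_connect()`.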


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:36:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:36:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529776.824550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZ89-00050m-C0; Thu, 04 May 2023 13:36:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529776.824550; Thu, 04 May 2023 13:36:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZ89-00050f-9L; Thu, 04 May 2023 13:36:21 +0000
Received: by outflank-mailman (input) for mailman id 529776;
 Thu, 04 May 2023 13:36:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puZ87-00050V-Uf; Thu, 04 May 2023 13:36:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puZ87-0006yx-OO; Thu, 04 May 2023 13:36:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puZ87-0007Nk-6O; Thu, 04 May 2023 13:36:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puZ87-0005UF-5v; Thu, 04 May 2023 13:36:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MK1/6KrYMFAvN7ejZ09yhL58G40ImulP6+b5JDPPqfY=; b=cH20l5dJWr8mYUeSfC/n/VfqPj
	inC2VU/7Tg/MrxKf5MuqonA9ar4qJvhz+FOJU8eX3xJGFp9rXg3qpgE/q/ptAsZ/RfzbLOXMhYp+V
	TmGMQ8pTgc6a+qqxBEe5YCaG+0C52DMwf/VErNgjuF2BHEp4kIRlRnLrhWiZOdbqfMtk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180525-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180525: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1a5304fecee523060f26e2778d9d8e33c0562df3
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 May 2023 13:36:19 +0000

flight 180525 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180525/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                1a5304fecee523060f26e2778d9d8e33c0562df3
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   17 days
Failing since        180281  2023-04-17 06:24:36 Z   17 days   30 attempts
Testing same since   180525  2023-05-04 05:05:04 Z    0 days    1 attempts

------------------------------------------------------------
2224 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 272978 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 04 13:59:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 13:59:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529785.824560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZU8-0007kT-C0; Thu, 04 May 2023 13:59:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529785.824560; Thu, 04 May 2023 13:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZU8-0007kM-8c; Thu, 04 May 2023 13:59:04 +0000
Received: by outflank-mailman (input) for mailman id 529785;
 Thu, 04 May 2023 13:59:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qVjO=AZ=rabbit.lu=slack@srs-se1.protection.inumbo.net>)
 id 1puZU6-0007kG-Ry
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 13:59:02 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d22b68e6-ea83-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 15:59:00 +0200 (CEST)
Received: by mail-wm1-x32c.google.com with SMTP id
 5b1f17b1804b1-3f173af665fso4104285e9.3
 for <xen-devel@lists.xenproject.org>; Thu, 04 May 2023 06:59:00 -0700 (PDT)
Received: from [192.168.2.1] (82-64-138-184.subs.proxad.net. [82.64.138.184])
 by smtp.googlemail.com with ESMTPSA id
 k1-20020a7bc301000000b003eddc6aa5fasm5024354wmj.39.2023.05.04.06.58.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 04 May 2023 06:58:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d22b68e6-ea83-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=rabbit-lu.20221208.gappssmtp.com; s=20221208; t=1683208740; x=1685800740;
        h=content-transfer-encoding:in-reply-to:content-language:references
         :cc:to:subject:from:user-agent:mime-version:date:message-id:from:to
         :cc:subject:date:message-id:reply-to;
        bh=741OYHclmnary9o8QuEmWBLI/yLKwj6EVSWqvcmDpPk=;
        b=Ktoj4Y2WCFKxcd/Bza2FkWvcDKZaHrNjIp9fNFMB/s8f2XwFstTADuCWQlqwahabB1
         nuFtlvPM3aU9Iq5UhlTcalJERgrJGINvm1/05NnuA3L89nRgEPepkaJ+E0f3Dn5Qai6T
         y3Ny/8rjCrjgV6W1W3upx51duZdUMdIpG53mapkad30WPe2BE+uFBZx4vpf2yGzCerU7
         Y8JDR4dxk1f+Uke9oXCEC6xoS4FO5POEs91cgFvpN7JBgDwmA/Xb43lX4kjZI2TWkuO3
         rXYvU5SCfiykJGHPap2ki5o8eQj+8rGb9ExUgJG46GG4CIpLPdRuMrbXnM75F147QyH2
         u46w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683208740; x=1685800740;
        h=content-transfer-encoding:in-reply-to:content-language:references
         :cc:to:subject:from:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=741OYHclmnary9o8QuEmWBLI/yLKwj6EVSWqvcmDpPk=;
        b=dAV4FcsaE7TcXqwMSg1yCFz2opC72hm6EnD/llRVMUxzKr9A/ohX+TnwP+P9cM/Wzk
         HJQJXkhaXSwNsTwdaNgQJF76tnV5//Mvy8sFwXFqlIxmSC6LhyHIo23rs7PHjJTPHZ2t
         Xw25EaKjvONLdYBEe7sU7SSNtTMII/bgjeyxrOa8snQZNznhrIiHFfbxh6aTLL09hSBN
         m6L9/R/yfwku43fyss4xFqUhJGazcQu95ILJP5OFsJ/cC9ASgivSTBAwzeqsGtrS/8/I
         XNyzAAoJxJWaT2WKkZUlnT616uTEMzzYwlZUai1bxgYRbhiKgmw6v/3xNH2ittiue6Fs
         UMug==
X-Gm-Message-State: AC+VfDwqeoXCwcJbMTfbE7xhpKWLZAsFz33GTjSrsNx4Ki+rgTtdtsc7
	m+omT3P0lJpi9Vnw/Pca8FVc2Hya4w2HlQp8zm8uIw==
X-Google-Smtp-Source: ACHHUZ4vGVlTodkKYjJUmbu4stY+lxNumSzjMlNzBnYPHHXxS4yamNQKYTPQb180khL0pdtR1pRp/A==
X-Received: by 2002:a7b:c015:0:b0:3f1:662a:93d0 with SMTP id c21-20020a7bc015000000b003f1662a93d0mr17906533wmb.15.1683208740096;
        Thu, 04 May 2023 06:59:00 -0700 (PDT)
Message-ID: <50bf6b82-965b-d17c-7c5a-49c703991504@rabbit.lu>
Date: Thu, 4 May 2023 15:58:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: zithro <slack@rabbit.lu>
Subject: Re: xenstored: EACCESS error accessing control/feature-balloon 1
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, Edwin Török
 <edwin.torok@cloud.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Yann Dirson <yann.dirson@vates.fr>
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
 <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com>
 <678df1ff-df18-b063-eda3-2a1aed6d40f8@vates.fr>
Content-Language: en-US
In-Reply-To: <678df1ff-df18-b063-eda3-2a1aed6d40f8@vates.fr>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

[ snipped for brevity, report summary:
XAPI daemon in domU tries to write to a non-existent xenstore node in a 
non-XAPI dom0 ]

On 12 Apr 2023 18:41, Yann Dirson wrote:
> Is there anything besides XAPI using this node, or the other data
> published by xe-daemon?

On my vanilla Xen (i.e. non-XAPI), I have no "balloon"-related node in 
xenstore (in either the dom0 or domU nodes, though I'm not using ballooning in either).

> Maybe the original issue is just that there is no reason to have
> xe-guest-utilities installed in this setup?

That's what I thought, as I'm not using XAPI, so maybe the problem should 
only be reported to the TrueNAS team? I posted on their forum but got 
no answer.
I killed the 'xe-daemon' in both setups without loss of functionality.

My wild guess is that 'xe-daemon', 'xe-update-guest-attrs' and all 
'xenstore* commands' are leftovers from when Xen was working as a dom0 
under FreeBSD (why would a *domU* have them?).


But Julien Grall went further, reacting to a suggestion by Yann Dirson 
(no snip here, so you have the full patch details):

On 12 Apr 2023 22:04, Julien Grall wrote:
>  From a brief look, this is very similar to the patch below that was 
> sent 3 years ago. I bet no-one ever tested the driver against libxl.
> 
> commit 30a970906038
> Author: Vitaly Kuznetsov <vkuznets@redhat.com>
> Date:   Tue Sep 4 13:39:29 2018 +0200
> 
>      libxl: create control/sysrq xenstore node
> 
>      'xl sysrq' command doesn't work with modern Linux guests with the 
> following
>      message in guest's log:
> 
>       xen:manage: sysrq_handler: Error -13 writing sysrq in control/sysrq
> 
>      xenstore trace confirms:
> 
>       IN 0x24bd9a0 20180904 04:36:32 WRITE (control/sysrq )
>       OUT 0x24bd9a0 20180904 04:36:32 ERROR (EACCES )
> 
>      The problem seems to be in the fact that we don't pre-create 
> control/sysrq
>      xenstore node and libxl_send_sysrq() doing libxl__xs_printf() 
> creates it as
>      read-only. As we want to allow guests to clean 'control/sysrq' 
> after the
>      requested action is performed, we need to make this node writable.
> 
>      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>      Acked-by: Wei Liu <wei.liu2@citrix.com>
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 60676304e9b5..dcfde7787e2c 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -695,6 +695,9 @@ retry_transaction:
>                           GCSPRINTF("%s/control/feature-s4", dom_path),
>                           rwperm, ARRAY_SIZE(rwperm));
>       }
> +    libxl__xs_mknod(gc, t,
> +                    GCSPRINTF("%s/control/sysrq", dom_path),
> +                    rwperm, ARRAY_SIZE(rwperm));
>       libxl__xs_mknod(gc, t,
>                       GCSPRINTF("%s/device/suspend/event-channel", 
> dom_path),
>                       rwperm, ARRAY_SIZE(rwperm));
> 
>>
>> I suspect the best (/least bad) thing to do here is formally introduce
>> feature-ballon as a permitted node, and have the toolstack initialise it
>> to "" like we do with all other nodes, after which TrueNAS ought to be
>> able to set it successfully and not touch it a second time.
> 
> +1. This would match how libxl already deals with "feature-s3" & co.

So, I will file a bug against TrueNAS anyway, but are you going to 
alter the xenstore/toolstack?
If so, it may be good to tell them at the same time.

Thanks all,

Kind regards,

zithro/Cyril
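[ Editor's note: the permission problem discussed in this thread can be illustrated with a toy in-memory model. Names and semantics are deliberately simplified and hypothetical; real xenstore permissions are per-node ACLs enforced by xenstored, and the toolstack-side call is libxl__xs_mknod as in the quoted patch: ]

```python
import errno

class ToyXenstore:
    """Toy model: nodes pre-created read-only by the toolstack reject
    guest writes with EACCES, which is the failure mode reported here."""

    def __init__(self):
        self._nodes = {}  # path -> {"value": str, "guest_writable": bool}

    def mknod(self, path, guest_writable):
        """Toolstack-side pre-creation, initialising the node to ''."""
        self._nodes[path] = {"value": "", "guest_writable": guest_writable}

    def guest_write(self, path, value):
        node = self._nodes.get(path)
        if node is None or not node["guest_writable"]:
            # Models the EACCES the guest agent (xe-daemon) sees.
            raise PermissionError(errno.EACCES, "EACCES", path)
        node["value"] = value
```

Under this model, a node created without guest write permission reproduces the EACCES, while pre-creating `control/feature-balloon` writable (as suggested above, and as already done for feature-s3/s4) lets the guest set it once and then leave it alone.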


From xen-devel-bounces@lists.xenproject.org Thu May 04 14:00:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:00:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529788.824570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZVl-0000rH-Nx; Thu, 04 May 2023 14:00:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529788.824570; Thu, 04 May 2023 14:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZVl-0000rA-Ke; Thu, 04 May 2023 14:00:45 +0000
Received: by outflank-mailman (input) for mailman id 529788;
 Thu, 04 May 2023 14:00:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evI0=AZ=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1puZVk-0000qe-5c
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:00:44 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20612.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::612])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0e351b29-ea84-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 16:00:42 +0200 (CEST)
Received: from MW4PR04CA0275.namprd04.prod.outlook.com (2603:10b6:303:89::10)
 by SN7PR12MB7881.namprd12.prod.outlook.com (2603:10b6:806:34a::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 14:00:39 +0000
Received: from CO1NAM11FT087.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:89:cafe::9c) by MW4PR04CA0275.outlook.office365.com
 (2603:10b6:303:89::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 14:00:38 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT087.mail.protection.outlook.com (10.13.174.68) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.26 via Frontend Transport; Thu, 4 May 2023 14:00:37 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 09:00:33 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 07:00:16 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 4 May 2023 09:00:14 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e351b29-ea84-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ak0MoNIYap4A0M0CeK7zlTBhhTMgl1lWLzlB/vXKbhLFRULVAN9wdoGmKRwlc/v//0uAsPUE0YnmTi5Acklrqyf92wFi2IDaMk3gk9bOLqCQQSFL3YT/u2poem9NhVaQt/Y3j5sf1XIeiTeQgFpHuT6dh5vVc6FA0BlWVi1/zcQe3UkSAMZpgwJDZlFEBW1yY2lLy+crpzltlh6YTxKP0qIogQuj3RWBpa3BUo5cYnp93OP8xBMH21C/FiZ6emXQPIzffr1envQcUuXp3X7heBU4mfhQmTysO4qVB8MI+PhwPtwrKwOLeD+IH4g71JV0Bz1sIknYkH7V4fjHruPz6w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=N/o1q/uGkkgP51iWgzuz8DL/hnCZg7fL5E1OxJ1E/Cg=;
 b=JYi5wBt2HfiFLLEcFGjdC5YM/7sabVMHyJsg2nQ1JljkBWRvSvLlCuZr6/m36S4H/LYsjh2EaZDg/++b9W4jHupE3P4MMydb7yZxg/CoyEHOvqgC5m2+KvPbdQMJANfQKfCAdTKzd0LJxKSBWj21MZP6ekFa1HCQt2TANSaeb+SOPKh4J8WooulI7CsFtVBDHkPGkDz/U4WvNtPxfrWf9zb5zQv2Z/1i7wqVWjxJDBny9GVbOzh1uoiW59DrT7JT6h2/eNyZclzOZOuFyo6NSxJvlxXeAcdSKuUmRutV9si2BVCU1p/5LbtkkaCnnZRuSG0HpWSZ0gYiyjJVLU+5mQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N/o1q/uGkkgP51iWgzuz8DL/hnCZg7fL5E1OxJ1E/Cg=;
 b=rrFKZbTlY58VKyxx99YsZFV1dUWLye9Nv3b5grPIPNOa2rThvOYhK3AQDlUBWtGnPwDM/fw6AkUnRvkwYql6x1n4C2/netTX/TydmJO1eoeUdzM4QeVYojC74ux+K6iHYClzNIRreFG0iY14IRy0xH18PqY0Bc+M+DN/pMDHbC4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <90c14bae-5071-fa19-7e94-307e939e8b9f@amd.com>
Date: Thu, 4 May 2023 16:00:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN v6 08/12] xen: dt: Replace u64 with uint64_t as the callback
 function parameters for dt_for_each_range()
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-9-ayan.kumar.halder@amd.com>
 <d515c3a5-9473-3cde-2838-a20875aa1181@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <d515c3a5-9473-3cde-2838-a20875aa1181@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT087:EE_|SN7PR12MB7881:EE_
X-MS-Office365-Filtering-Correlation-Id: f93005e2-296d-45ce-4c16-08db4ca7f020
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1/RkK+FkGSRePVTC3mYXXZMAUEkfoh16ILTKrVXx3dheHWT1oLzCgcJldFHRei1npPvbpRJ5oN8SpKyG19gNLUAh40EmaazBiBy+BRCx/GY3OugtSahryhCIjSqeObzVChcuLVAoaIyKdhV/6Q6v3mdjXULxzjJzPYa5lB1SJrpvNMfwo/yZpK0KGLdsGay2I0Ko3LFk71dXsweUra+dRD5pk1+po2UphxgsCXkweyNQwMv3472ehlXWRcCcVf+AmfHYKNIxpr7MpElQ7TriPwX5vbkgaCqPoVXNd62HRQtchnYm1d/KWQXid/GuQV4z1SdQho288cpOm5DaFIArBWpC4tdfqUOQs61GiLgg+r9cwNiptC9T93oSOILriqZRyE1kxKltVCEwk4X7J+z2qM/1g+cuoyxgY4FB4n4j2922F2LCyRCLXW6ryCV8uGQIwS/FTXSN4O4mJ82j1WAVT4Y/yL0tXbAbXgTvCPP/SR2tD2zEj44Z4G+JZrswXBvQUw3e4MX1Py6Y1agCGAA6kGsjDy6fFDBZr3sJ8q1zPasSVj6huUoXtT6WwYFY+BUkL/6+cXPuFo3su7TDteNn711P0gHxigS3IRDo921KPBbzQp0PhWwhO7BC4By50h0AniSHaW7oT8GjU7Uzo0j5Ra8yOZfsQOUv0v8rSWuNNHYZ80AsMvFQ8KN0m3uuPGRyRc5TMEb/vFs6GcS9kGHlskvgshRWpFC3aV61cBBTcFTUziUrgWCah3ahDWz1n91TO6AuACEzWdNZJC7fskLiZw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(136003)(376002)(346002)(451199021)(40470700004)(36840700001)(46966006)(36756003)(7416002)(40460700003)(44832011)(478600001)(356005)(81166007)(40480700001)(8676002)(5660300002)(8936002)(82740400003)(110136005)(86362001)(54906003)(16576012)(82310400005)(26005)(31696002)(70206006)(70586007)(47076005)(426003)(4326008)(336012)(186003)(53546011)(41300700001)(2906002)(31686004)(4744005)(316002)(2616005)(36860700001)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 14:00:37.2767
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f93005e2-296d-45ce-4c16-08db4ca7f020
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT087.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB7881


On 03/05/2023 14:08, Julien Grall wrote:
> 
> 
> Hi,
> 
> On 28/04/2023 18:55, Ayan Kumar Halder wrote:
>> In the callback functions invoked by dt_for_each_range(), i.e.
>> handle_pci_range() and map_range_to_domain(), 'u64' should be replaced with
>> 'uint64_t' as the data type for the parameters.
> 
> Please explain why this needs to be replaced. I.e. the Xen coding style
> mentions that u32 should be avoided.
> 
>> Also dt_for_each_range() invokes the callback functions with
>> 'uint64_t' arguments.
>>
>> There is another callback function, i.e. is_bar_valid(), which uses 'paddr_t'
>> instead of 'u64' or 'uint64_t'. We will change it in a subsequent commit.
> 
> I would rather prefer if this is folded in this patch.
> 
With Julien's comments fixed:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal



From xen-devel-bounces@lists.xenproject.org Thu May 04 14:02:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:02:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529791.824580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZXl-0001Rg-3A; Thu, 04 May 2023 14:02:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529791.824580; Thu, 04 May 2023 14:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZXl-0001RZ-0B; Thu, 04 May 2023 14:02:49 +0000
Received: by outflank-mailman (input) for mailman id 529791;
 Thu, 04 May 2023 14:02:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evI0=AZ=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1puZXk-0001RS-18
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:02:48 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20616.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 57974971-ea84-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 16:02:45 +0200 (CEST)
Received: from MW4PR04CA0141.namprd04.prod.outlook.com (2603:10b6:303:84::26)
 by SJ2PR12MB8034.namprd12.prod.outlook.com (2603:10b6:a03:4c7::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Thu, 4 May
 2023 14:02:42 +0000
Received: from CO1NAM11FT021.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:84:cafe::c8) by MW4PR04CA0141.outlook.office365.com
 (2603:10b6:303:84::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 14:02:42 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT021.mail.protection.outlook.com (10.13.175.51) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.26 via Frontend Transport; Thu, 4 May 2023 14:02:41 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 09:02:38 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 07:02:29 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 4 May 2023 09:02:27 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57974971-ea84-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nJRHdZQmxg0Es0JNCevGX8G2QRF48Blkqr8RKMYwZnWYmIbPSXXV/hL+gv8gquSQFhM3WdOd3Ay9eQL5Fi84e1T52iDUJtPddH5Olb3yYrH3c1U6RFGslPQzB/+Pxyp4PfRbuxNOUuCOuEXFSJzPddWqHXJ8ZGSPaH7+nVeNVJLXlAx51nb+JNPZPUETPoR0ozx4rGeTdOCm9y1mS46/KPzdjJEqnR8mRFw/73b2q0wEoaAMP/FHpp92qPyzLz6Wqqb0EWfc9YxQHm3WpID5xG7NcmCEqfUC3OHX3+NNve+6IJs+KlhtyFN4sM7mentxOQyEs3w1pBpzJy4ansbpHA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/7oE9mA49/oLEJUa5lypAPBMHC8yVI2C9QFL9UWflj8=;
 b=i8hAokqmN1I8/l3aqQ7adZFb6fycX/ePDJ8pU7k5AJ3D3GMTPvpuhWn9RJKP+4rb/83zUBsfxSDP6/hIadR6bJBJYtkzTXqJqq2QeKJsKZAR91bSgVAqxxTRFAYAWhxTL0SCW7GB08UNkOTfVcJ/CfCM2bJgSeWWBcIWGNz8UXX8XiHY2WYIqZGF2XWpLQVCG8HnO2KZIOnEBTQehkIKTMPEPsMbhL7nswIyUEKhftjcK3FvL3WEXZ2pEkkb8DwxZaKzY5gas4jJcR9PuNcUTS6S5Oq19qQjIxHpQxR9U/YWMikwfXfJWurAiugRfQ82R4YI/p83cdILsGqRdbU5OA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/7oE9mA49/oLEJUa5lypAPBMHC8yVI2C9QFL9UWflj8=;
 b=cIxrvC7MNAdSF2yFPL3xujWKcvjVJeW92p1RkDjqGrBXV+2wWcqkyJW3MgWZMCVelmEUUXTY/INMQEpn61Cql5inku0C5Bea1Om2nrbwEGQaks4zSiszCD7r4IDx7eyMsRltvE+yE5tyyejZp/siMF9nN6VWfdh3Jeaqy7OYz6k=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <2f45ce2b-32f1-7289-005a-edbb694a36be@amd.com>
Date: Thu, 4 May 2023 16:02:26 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN v6 10/12] xen/arm: domain_build: Check if the address fits
 the range of physical address
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-11-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230428175543.11902-11-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT021:EE_|SJ2PR12MB8034:EE_
X-MS-Office365-Filtering-Correlation-Id: dd76ec00-cd36-486f-707b-08db4ca83a5a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Bekhy7+hLM8TbDX9vu6EFljfO6TOX47l5m2N9b4YJdN1NzW8Yntc61NHA+otsztDsh3fP3dpyuBbnWwgNL6NSOLXys/ZSZjM3mxjS8xM+pAzatk/fV9xdTmIxO/uUzU+BJaElwfWYiqbWQSVfWh9zGhq6RU1sh43b1T1GD78CFpKEBAraeG5t+JKZeoLrRYXzvmZ+iTco1BQJvEsZqBggLqJgioGnSxV2vqcTJNmg5wkL3YScDiRez1n3s/IDgtfkOH3Gc9oA7XS4uj0lW3P7rNPnHi+MU7BovHAhfl7JShMVbWvnAHmTo6xhmzRaCewE7ml0+epMAZv/or9tdUUvCghWteVqjVKQ31Eh0KE7yqmLzsnpTfnPP9hHQMS/JYzkDt7FyKt5WLuvAzoszR0gGxOqcH0qapH9p4Xh2kEnKrnmpJ1uZdgMfzUVOi0wNKuXv3mLdOkZ110Biv7ObJJ8lI4DHskyxkWi2PxSuCYyq0A9vfvSAoocmnyykBzX/ZZ7ER/1vX4erhifE4hi4dP9TK4/UloUGNrIjp3sR9Aie4VRRhBXxuErZ+dtunpUYsGnp8Je6lDlsLvp+N2YQL9iQYqj8odCE1z8t7dYI/BwR/CCeDylK2RA40IpSOpFFWp2mjGCbCc6MTQpk003cWYpsOPADPFiAod47vo83HZklltLL7jMMSCT9AcnDm6ZAk20Py2+sb+4fN3lmJNmvtS5t8DvdZYCbPaLFfsKqTmeGpQ6Rwk2jq+ZVHSePiPNxBbyoZgglMor5Lx/9ssxNswhQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(396003)(346002)(136003)(451199021)(46966006)(36840700001)(40470700004)(44832011)(54906003)(8936002)(36756003)(186003)(16576012)(8676002)(110136005)(7416002)(31686004)(5660300002)(26005)(426003)(53546011)(40460700003)(41300700001)(81166007)(2906002)(356005)(40480700001)(316002)(70586007)(47076005)(31696002)(2616005)(70206006)(36860700001)(478600001)(82310400005)(4326008)(82740400003)(83380400001)(86362001)(336012)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 14:02:41.8717
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dd76ec00-cd36-486f-707b-08db4ca83a5a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT021.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB8034



On 28/04/2023 19:55, Ayan Kumar Halder wrote:
> 
> 
> handle_pci_range() and map_range_to_domain() take addr and len as uint64_t
> parameters. Frame numbers are then obtained from addr and len by right
> shifting them by PAGE_SHIFT; the frame numbers are expressed using
> 'unsigned long'.
> 
> Now, if a 64-bit value is right shifted by PAGE_SHIFT, the result can have up
> to 52 valid bits. On a 32-bit system, 'unsigned long' is 32 bits wide. Thus,
> there is a potential loss of value when the result is stored as
> 'unsigned long'.
> 
> To mitigate this issue, we check whether the start and end addresses can be
> contained within the range of physical addresses supported by the system. If
> not, an appropriate error is returned.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from :-
> v1...v4 - NA. New patch introduced in v5.
> 
> v5 - 1. Updated the error message
> 2. Used "(((paddr_t)~0 - addr) < len)" to check the limit on len.
> 3. Changes in the prototype of "map_range_to_domain()" has been
> addressed by the patch 8.
> 
>  xen/arch/arm/domain_build.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9865340eac..719bb09845 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1643,6 +1643,13 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
>      paddr_t start, end;
>      int res;
> 
> +    if ( addr != (paddr_t)addr || (((paddr_t)~0 - addr) < len) )
Given that you enclose the second condition in parentheses, I would expect the same for the first.

> +    {
> +        printk(XENLOG_ERR "%s: [0x%"PRIx64", 0x%"PRIx64"] exceeds the maximum allowed PA width (%u bits)",
> +               dt_node_full_name(dev), addr, (addr + len), PADDR_BITS);
> +        return -ERANGE;
> +    }
> +
>      start = addr & PAGE_MASK;
>      end = PAGE_ALIGN(addr + len);
>      res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
> @@ -2337,6 +2344,13 @@ int __init map_range_to_domain(const struct dt_device_node *dev,
>      struct domain *d = mr_data->d;
>      int res;
> 
> +    if ( addr != (paddr_t)addr || (((paddr_t)~0 - addr) < len) )
Same here.

Other than that:
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal
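The check discussed in this patch can be modelled in a few lines of Python as a sketch; here PADDR_BITS = 32 is an illustrative value (not Xen's definition), and truncation to a hypothetical 32-bit paddr_t is emulated with a mask, since Python integers do not wrap like C fixed-width types:

```python
# Minimal model of the range check above, assuming a 32-bit physical
# address type for illustration (the real code uses C's paddr_t and
# Xen's PADDR_BITS); truncation to paddr_t is emulated with a mask.
PADDR_BITS = 32
PADDR_MASK = (1 << PADDR_BITS) - 1  # plays the role of (paddr_t)~0

def range_fits(addr: int, length: int) -> bool:
    # addr != (paddr_t)addr: the address truncates when stored in paddr_t
    if addr != (addr & PADDR_MASK):
        return False
    # ((paddr_t)~0 - addr) < len: addr + len would overflow the PA space
    if (PADDR_MASK - addr) < length:
        return False
    return True

print(range_fits(0x8000_0000, 0x1000))    # True: fits in 32-bit PA space
print(range_fits(0x1_0000_0000, 0x1000))  # False: addr itself truncates
print(range_fits(0xFFFF_F000, 0x2000))    # False: addr + len overflows
```

Writing the overflow test as "max - addr < len" (rather than "addr + len > max") is what keeps the computation itself from overflowing in C.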


From xen-devel-bounces@lists.xenproject.org Thu May 04 14:25:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:25:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529795.824600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZtp-0004R2-7m; Thu, 04 May 2023 14:25:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529795.824600; Thu, 04 May 2023 14:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZtp-0004Qt-4g; Thu, 04 May 2023 14:25:37 +0000
Received: by outflank-mailman (input) for mailman id 529795;
 Thu, 04 May 2023 14:25:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UZDr=AZ=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1puZtn-00049d-Pd
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:25:35 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 87796bd0-ea87-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 16:25:34 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1BA6C2F4;
 Thu,  4 May 2023 07:26:17 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 987FB3F67D;
 Thu,  4 May 2023 07:25:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87796bd0-ea87-11ed-b226-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/2] diff-report.py tool
Date: Thu,  4 May 2023 15:25:21 +0100
Message-Id: <20230504142523.2989306-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

--------------------------------------------------------------------------------
This series depends on this patch:
https://patchwork.kernel.org/project/xen-devel/patch/20230504131245.2985400-4-luca.fancellu@arm.com/
--------------------------------------------------------------------------------

Now that we have a tool (xen-analysis.py) that wraps cppcheck to generate
reports, we have an overall view of how many static analysis issues and
non-compliances with MISRA C we have for a given revision of the codebase.

This is great, and ideally the remaining work would simply be to have fewer and
fewer findings in the report until we reach zero.

This is just an ideal trend: in practice we might have issues that come from
existing code (macros, for example) that are not going to be fixed soon for
whatever reason, but we would still like to see how many issues are introduced
by newly added commits (ideally zero; if one is added and the fault resides
outside the changed code, maintainers might decide to accept it anyway).

So the idea is to compare two reports of the codebase: one called the
"baseline", which covers the current codebase, and the other called the "new
report", which covers the codebase after the changes.
To check whether any new finding was added, we need to look at every finding in
the "new report" that is not listed in the "baseline".

It seems very simple, but what can happen to existing findings in the code after
a commit is applied?
Basically, existing findings can shift position due to changes to unrelated
lines, or they can be deleted or fixed by changes involving the finding's line
(Michal was the first to point that out).

So comparing the two reports naively is problematic: the raw difference contains
all the new findings plus all the existing findings that merely changed position
due to the changes applied.

To overcome this, diff-report.py is introduced: it can "patch" the "baseline"
report by looking at the changes applied to the baseline codebase, as described
by git diff.

This series is organised in two patches; I've tried to split the code so that
each patch is meaningful on its own. The first patch contains everything needed
to import cppcheck reports and do a "raw" diff between reports; this gives you
a hint about new findings plus old findings that have changed place.
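The "raw" diff step can be pictured as a set difference over finding tuples; the field layout, file names, and messages below are made up for illustration and do not come from the real tool:

```python
# Toy illustration of a "raw" diff between two cppcheck reports: each
# finding is modelled as a (file, line, message) tuple, so new findings
# are those present in the new report but absent from the baseline.
baseline = {
    ("xen/arch/arm/mm.c", 120, "MISRA C:2012 Rule 10.1 violation"),
    ("xen/common/sched/core.c", 55, "MISRA C:2012 Rule 8.4 violation"),
}
new_report = {
    ("xen/arch/arm/mm.c", 120, "MISRA C:2012 Rule 10.1 violation"),
    # the same old finding, merely shifted by an unrelated 2-line insertion:
    ("xen/common/sched/core.c", 57, "MISRA C:2012 Rule 8.4 violation"),
    # a genuinely new finding:
    ("xen/drivers/char/ns16550.c", 9, "MISRA C:2012 Rule 20.7 violation"),
}
raw_new = new_report - baseline
# The raw diff reports 2 findings even though only 1 is genuinely new,
# which is why the baseline needs to be "patched" first.
for finding in sorted(raw_new):
    print(finding)
```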

The second patch adds the "patching" system: a class that parses the git diff
output and then "patches" the baseline before doing the comparison. This option
is activated only when the git diff changes are passed to the tool; everything
is described (I hope) in the help text.

Some considerations need to be made: this tool can translate the coordinates
(file, line) of findings from the "baseline" to the "new report", using the git
diff output as, let me say, a translation matrix.
This doesn't mean it can understand the meaning of a finding and recognise it
in the new codebase. For example, a finding related to a line that is moved to
another part of the file won't be recognised as an "old finding" and will
simply be removed from the "baseline patched report"; however, it will still
appear in the new report unless the change contains a fix for the reported
issue.
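The "translation matrix" idea can be sketched as follows; `Hunk` and `remap_line` are illustrative names invented here, not taken from the actual diff-report.py code, and the hunk fields mirror the `@@ -old_start,old_count +new_start,new_count @@` header of a unified diff:

```python
# Sketch: shift a baseline finding's line number to its expected position
# in the new revision using the unified-diff hunks for its file, or drop
# it (return None) when the finding's own line was changed.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Hunk:
    old_start: int   # first baseline line covered by the hunk
    old_count: int   # number of baseline lines covered
    new_start: int   # first new-file line covered by the hunk
    new_count: int   # number of new-file lines covered

def remap_line(line: int, hunks: List[Hunk]) -> Optional[int]:
    shift = 0
    for h in hunks:                        # hunks are in file order
        if line < h.old_start:
            break                          # later hunks cannot affect it
        if line < h.old_start + h.old_count:
            return None                    # the finding's line was modified
        shift += h.new_count - h.old_count # lines added minus lines removed
    return line + shift

# A change inserting 3 lines at line 10 and rewriting lines 40-42 into 43-47:
hunks = [Hunk(10, 0, 10, 3), Hunk(40, 3, 43, 5)]
print(remap_line(5, hunks))    # before all hunks -> unchanged: 5
print(remap_line(20, hunks))   # after first hunk -> shifted by +3: 23
print(remap_line(41, hunks))   # inside second hunk -> dropped: None
```

Findings remapped this way can then be compared against the new report by simple set difference, which removes the position-only noise from the raw diff.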

This means the tool is not really suited to be a gatekeeper for the merge
action; it is better suited to helping maintainers understand when a change
introduces new issues, without having to manually compare two reports of
(nowadays) hundreds of findings.
Eventually we could run it in the CI and have the CI reply to the patchwork
thread with its output!

The tool also has a debug argument that, when used, generates extra files that
can be checked against the originals: for example, the reports are imported
into the tool, and the debug code then regenerates them from the imported data;
they should be identical (if everything works).
Another debug check is to export the representation of the parsed git diff
output, so that the developer can verify whether the parser interpreted the
data correctly.

Future work for this tool might be to also parse Coverity reports and
eventually (I don't know whether it is possible) ECLAIR text reports.

Luca Fancellu (2):
  xen/misra: add diff-report.py tool
  xen/misra: diff-report.py: add report patching feature

 xen/scripts/diff-report.py                    | 127 +++++++++++
 .../xen_analysis/diff_tool/__init__.py        |   0
 .../xen_analysis/diff_tool/cppcheck_report.py |  41 ++++
 xen/scripts/xen_analysis/diff_tool/debug.py   |  55 +++++
 xen/scripts/xen_analysis/diff_tool/report.py  | 198 +++++++++++++++++
 .../diff_tool/unified_format_parser.py        | 202 ++++++++++++++++++
 6 files changed, 623 insertions(+)
 create mode 100755 xen/scripts/diff-report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/unified_format_parser.py

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 14:25:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:25:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529794.824590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZtg-00049q-UK; Thu, 04 May 2023 14:25:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529794.824590; Thu, 04 May 2023 14:25:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZtg-00049j-R0; Thu, 04 May 2023 14:25:28 +0000
Received: by outflank-mailman (input) for mailman id 529794;
 Thu, 04 May 2023 14:25:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puZtf-00049d-8g
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:25:27 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20619.outbound.protection.outlook.com
 [2a01:111:f400:fe12::619])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 82f98000-ea87-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 16:25:26 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7047.eurprd04.prod.outlook.com (2603:10a6:20b:11b::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 14:25:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 14:25:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82f98000-ea87-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LJ8YIXYlv0wodUpb+Y9Lgyyk7KE8iGNoQGSNl0WW+vtahVOm3MsWnLvXqd2hggPMrGtFZNRTJ0PnGdJ/ndaa2ujXQ0Xn4GyUhJ5NfHw/STsO/9IXWIB1+ew+mYtnBjvAD5NfgUMZWDWXonyRGAC6EUi0yJvtjpBoJTLWvaUUCsR0286N6nPkXUDg0FQ9jTpGKrKeQmbHP33Q4fqq7nYfFnembPyyvH0WdHUrVgpVoGRZkPDiTSXwXl6O91Ho5P05ZmOZJiLZ6DuK0pLwKEhIDjnz78bRSHYPUjtvJ+Yo3+3WSpBaOg5M0Y89xHGam2hq1w58gGZMZQs2JM9bgEsHuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lum0Eiv6pzZwVF+AjweRCSSqsAI33STHmq0Lr4LkbTY=;
 b=jGcuJwZeJJCZufQkMDR1D9RqJThgglAiWg+Y7NdVlNBN+5oTfMFrmLK7KuxTSN1aEdB4xeZ8IwH/RoHBSGDYKd9LoPT7ssCEMQOoOui5+ANAg9nPOkuRu2Xhtti+a4r9mXdu6kKWLYgB+aAC1T06l1hAHTg+XWyrj7AMfWaQeUYv8Lxnvhq6bdthTAYHzpyWcqytxeQLKurnHviBqRCZnx/uN+b+c28lySRcXW8pa8yCYi/9rVq7+nl5LafVhvSNrXfDPsF6Wm4Q6ciHQO0Ms8vlDKbZCkBPC7GYYdhHq+fCyRKPN5Dy4x2vkk/I0/AKqjhyuGV1e6bK/5EYISwQIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lum0Eiv6pzZwVF+AjweRCSSqsAI33STHmq0Lr4LkbTY=;
 b=EZ+hZG1ckR2suYHDG761VvlEbzK6IZ2cCmBeWSdSj8fZlVAGYOm1024JTIzLgpA9VCYO5fLRVnhB2bFKNP5iI8ALDSKkejrPrYRo0TLXoD2DuxeZO/liSA6yxt/zG3lrtJUgKLuVrPUDIRVGN78TO+NPXyBLGaB/5CyaqNGIoBZNCV1IHW/dtoEWNv2zkvCeI6Lwa/Y3vuD5o/rYI0xB52OIAhIvhbgQbWNw4Y5KCa25A1AcnInNIFTfE/3VzRogzYyukOl4SSFA/KgeKwSUTSUlUcUz7Rc5Kr6LrbpcuysL6M2A2dutjB5Vb3fNUKNr2tUKng6hWsx19FqSgqxIRQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b2195b6f-a1ca-9a99-78d2-f5138c855f98@suse.com>
Date: Thu, 4 May 2023 16:25:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest
 areas
Content-Language: en-US
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <472f8314-9ad7-523a-32dc-d5c2138c2c8c@suse.com>
 <a79d2a8b-6d6e-bd31-b079-a30b555e5fd0@suse.com>
 <CABfawhn4CRnctzV-17di4eYyNhSGTSMckZjgphS1Rg6HUGOtHw@mail.gmail.com>
 <c5f2ee35-0f5e-da04-9a28-aba49d2aba29@suse.com>
 <CABfawhnt=465mank4ye==5zbczcSeLWDSKjMoc6bxGTLqPqX-w@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CABfawhnt=465mank4ye==5zbczcSeLWDSKjMoc6bxGTLqPqX-w@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0083.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB7047:EE_
X-MS-Office365-Filtering-Correlation-Id: 2baf880f-0b94-43db-d672-08db4cab6586
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2baf880f-0b94-43db-d672-08db4cab6586
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 14:25:23.1298
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB7047

On 04.05.2023 14:50, Tamas K Lengyel wrote:
> On Thu, May 4, 2023 at 3:44 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 03.05.2023 19:14, Tamas K Lengyel wrote:
>>>> @@ -1974,7 +2046,10 @@ int mem_sharing_fork_reset(struct domain
>>>>
>>>>   state:
>>>>      if ( reset_state )
>>>> +    {
>>>>          rc = copy_settings(d, pd);
>>>> +        /* TBD: What to do here with -ERESTART? */
>>>
>>> Ideally we could avoid hitting code-paths that are restartable during
>>> fork reset since it gets called from vm_event replies that have no
>>> concept of handling errors. If we start having errors like this we
>>> would just have to drop the vm_event reply optimization and issue a
>>> standalone fork reset hypercall every time, which isn't a big deal,
>>> it's just slower.
>>
>> I'm afraid I don't follow: We are in the process of fork-reset here. How
>> would issuing "a standalone fork reset hypercall every time" make this
>> any different? The possible need for a continuation here comes from a
>> failed spin_trylock() in map_guest_area(). That won't change the next
>> time round.
> 
> Why not? Who is holding the lock and why wouldn't it ever relinquish it?

What state is the fork in at that point in time? We're talking about the
fork's hypercall deadlock mutex here, after all. Hence if we knew the
fork is paused (just like the parent is), then I don't think -ERESTART
can be coming back. (To be precise, both paths leading here are of
interest, yet the state the fork is in may be different in both cases.)

> If that's really true then there is a larger issue than just not being
> able to report the error back to the user on the vm_event_resume path,
> and we need to devise a way of copying this from the parent bypassing
> this lock.

The issue isn't that the lock will never become available. But we can't
predict how many attempts it'll take.
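The pattern being discussed — bail out with -ERESTART when a trylock fails and have the caller re-issue the whole operation — can be sketched as follows. This is purely illustrative Python, not Xen's actual C code; the names and the ERESTART value are stand-ins:

```python
import threading

ERESTART = -85  # stand-in for Xen's internal "please retry" error value

deadlock_mutex = threading.Lock()  # stands in for the hypercall deadlock mutex

def copy_settings_sketch():
    # A failed trylock cannot block here, so the operation returns
    # ERESTART and the caller must re-issue it; nothing bounds how many
    # attempts that takes if the lock keeps being contended.
    if not deadlock_mutex.acquire(blocking=False):
        return ERESTART
    try:
        return 0  # ... copy state from the (paused) parent here ...
    finally:
        deadlock_mutex.release()
```

If the fork is known to be paused, nothing should be holding the lock and the trylock succeeds on the first attempt; otherwise the caller loops on ERESTART.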

But my earlier question went in a different direction anyway: You
suggested replacing a fork-reset with "a standalone fork reset hypercall
every time". I somehow don't get the difference (but it looks like some
of your further reply below addresses that).

> The parent is paused and its state should not be changing while forks
> are active, so if the lock was turned into an rwlock of some sort,
> acquiring the read-lock would perhaps be a possible way out of this.

Given the specific lock we're talking about here, an rwlock is out of
question, I think.

>> But perhaps I should say that till now I didn't even pay much attention
>> to the 2nd use of the function by vm_event_resume(); I was mainly
>> focused on the one from XENMEM_sharing_op_fork_reset, where no
>> continuation handling exists. Yet perhaps your comment is mainly
>> related to that use?
>>
>> I actually notice that the comment ahead of the function already has a
>> continuation related TODO, just that there thought is only of larger
>> memory footprint.
> 
> With XENMEM_sharing_op_fork_reset the caller actually receives the error
> code and can decide what to do next. With vm_event_resume there is no path
> currently to notify the agent of an error. We could generate another
> vm_event to send such an error, but the expectation with fork_reset is that
> it will always work because the parent is paused, so not having that path
> for an error to get back to the agent isn't a big deal.
> 
> Now, if it becomes the case that due to this locking we can get an error
> even while the parent is paused, that will render the vm_event_resume path
> unreliable, so we would just switch to using XENMEM_sharing_op_fork_reset
> so that at least it can re-try in case of an issue. Of course, only if a
> reissue of the hypercall has any reasonable chance of succeeding.

(I think this is the explanation for the "standalone reset hypercall".)

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 14:25:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:25:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529796.824610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZtr-0004i8-Ek; Thu, 04 May 2023 14:25:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529796.824610; Thu, 04 May 2023 14:25:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZtr-0004hm-Bw; Thu, 04 May 2023 14:25:39 +0000
Received: by outflank-mailman (input) for mailman id 529796;
 Thu, 04 May 2023 14:25:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UZDr=AZ=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1puZtp-00049d-MN
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:25:37 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 8949c1f3-ea87-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 16:25:36 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2D3A4139F;
 Thu,  4 May 2023 07:26:20 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 90E7E3F67D;
 Thu,  4 May 2023 07:25:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8949c1f3-ea87-11ed-b226-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/2] xen/misra: diff-report.py: add report patching feature
Date: Thu,  4 May 2023 15:25:23 +0100
Message-Id: <20230504142523.2989306-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504142523.2989306-1-luca.fancellu@arm.com>
References: <20230504142523.2989306-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a feature to the diff-report.py script that improves the comparison
between two analysis reports, one from a baseline codebase and the other
from the codebase with the changes applied on top of the baseline.

Comparing reports from two different codebases is problematic: entries
in the baseline may shift position because unrelated lines were added or
deleted, or may disappear entirely because the flagged line itself was
deleted, making the comparison between two revisions of the code harder.

Given a baseline report, a report of the codebase with the changes
(called the "new report") and a file in git diff format describing the
changes applied to the baseline code, this feature can determine which
entries of the baseline report were deleted, and which were merely
shifted in position by changes to unrelated lines, rewriting the latter
as they will appear in the "new report".

With this "patched baseline" and the "new report", it is then simple to
diff the two and print only the entries that are new.
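The line-shifting idea above can be illustrated with a small self-contained sketch (hypothetical names only; the script's real implementation lives in report.py and unified_format_parser.py):

```python
def patch_baseline(findings, changes):
    """Shift or drop baseline findings so they line up with the new report.

    findings: list of (line_number, message) tuples for one file.
    changes: list of (line_number, kind) with kind 'add' or 'remove',
             in the order a walk of the unified diff would yield them.
    """
    patched = list(findings)
    for change_line, kind in changes:
        kept = []
        for line, msg in patched:
            if kind == 'remove':
                if line == change_line:
                    continue      # the flagged line itself was deleted
                if line > change_line:
                    line -= 1     # lines below the deletion move up
            elif kind == 'add' and line >= change_line:
                line += 1         # lines at/after the insertion move down
            kept.append((line, msg))
        patched = kept
    return patched
```

For example, adding one line at line 3 shifts a finding at line 5 to line 6, while deleting line 5 drops a finding on that line entirely.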

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/diff-report.py                    |  55 ++++-
 xen/scripts/xen_analysis/diff_tool/debug.py   |  19 ++
 xen/scripts/xen_analysis/diff_tool/report.py  |  84 ++++++++
 .../diff_tool/unified_format_parser.py        | 202 ++++++++++++++++++
 4 files changed, 358 insertions(+), 2 deletions(-)
 create mode 100644 xen/scripts/xen_analysis/diff_tool/unified_format_parser.py

diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
index 4913fb43a8f9..17f707f5d34e 100755
--- a/xen/scripts/diff-report.py
+++ b/xen/scripts/diff-report.py
@@ -5,6 +5,10 @@ from argparse import ArgumentParser
 from xen_analysis.diff_tool.debug import Debug
 from xen_analysis.diff_tool.report import ReportError
 from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
+from xen_analysis.diff_tool.unified_format_parser import \
+    (UnifiedFormatParser, UnifiedFormatParseError)
+from xen_analysis.utils import invoke_command
+from xen_analysis.settings import repo_dir
 
 
 def log_info(text, end='\n'):
@@ -32,9 +36,32 @@ def main(argv):
                              "against the baseline.")
     parser.add_argument("-v", "--verbose", action='store_true',
                         help="Print more informations during the run.")
+    parser.add_argument("--patch", type=str,
+                        help="The patch file containing the changes to the "
+                             "code, from the baseline analysis result to the "
+                             "'check report' analysis result.\n"
+                             "Do not use with --baseline-rev/--report-rev")
+    parser.add_argument("--baseline-rev", type=str,
+                        help="Revision or SHA of the codebase analysed to "
+                             "create the baseline report.\n"
+                             "Use together with --report-rev")
+    parser.add_argument("--report-rev", type=str,
+                        help="Revision or SHA of the codebase analysed to "
+                             "create the 'check report'.\n"
+                             "Use together with --baseline-rev")
 
     args = parser.parse_args()
 
+    if args.patch and (args.baseline_rev or args.report_rev):
+        print("ERROR: '--patch' argument can't be used with '--baseline-rev'"
+              " or '--report-rev'.")
+        sys.exit(1)
+
+    if bool(args.baseline_rev) != bool(args.report_rev):
+        print("ERROR: '--baseline-rev' must be used together with "
+              "'--report-rev'.")
+        sys.exit(1)
+
     if args.out == "stdout":
         file_out = sys.stdout
     else:
@@ -59,11 +86,35 @@ def main(argv):
         new_rep.parse()
         debug.debug_print_parsed_report(new_rep)
         log_info(" [OK]")
-    except ReportError as e:
+        diff_source = None
+        if args.patch:
+            diff_source = os.path.realpath(args.patch)
+        elif args.baseline_rev:
+            git_diff = invoke_command(
+                "git --git-dir={} diff -C -C {}..{}".format(repo_dir,
+                                                            args.baseline_rev,
+                                                            args.report_rev),
+                True, "Error occurred invoking:\n{}\n\n{}"
+            )
+            diff_source = git_diff.splitlines(keepends=True)
+        if diff_source:
+            log_info("Parsing changes...", "")
+            diffs = UnifiedFormatParser(diff_source)
+            debug.debug_print_parsed_diff(diffs)
+            log_info(" [OK]")
+    except (ReportError, UnifiedFormatParseError) as e:
         print("ERROR: {}".format(e))
         sys.exit(1)
 
-    output = new_rep - baseline
+    if args.patch or args.baseline_rev:
+        log_info("Patching baseline...", "")
+        baseline_patched = baseline.patch(diffs)
+        debug.debug_print_patched_report(baseline_patched)
+        log_info(" [OK]")
+        output = new_rep - baseline_patched
+    else:
+        output = new_rep - baseline
+
     print(output, end="", file=file_out)
 
     if len(output) > 0:
diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
index d46df3300d21..c314edbc8e38 100644
--- a/xen/scripts/xen_analysis/diff_tool/debug.py
+++ b/xen/scripts/xen_analysis/diff_tool/debug.py
@@ -2,6 +2,7 @@
 
 import os
 from .report import Report
+from .unified_format_parser import UnifiedFormatParser
 
 
 class Debug:
@@ -34,3 +35,21 @@ class Debug:
         if not self.args.debug:
             return
         self.__debug_print_report(report, ".parsed")
+
+    def debug_print_patched_report(self, report: Report) -> None:
+        if not self.args.debug:
+            return
+        # The patched report already contains .patched in its name
+        self.__debug_print_report(report, "")
+
+    def debug_print_parsed_diff(self, diff: UnifiedFormatParser) -> None:
+        if not self.args.debug:
+            return
+        diff_filename = diff.get_diff_path()
+        out_pathname = self.__get_debug_out_filename(diff_filename, ".parsed")
+        try:
+            with open(out_pathname, "wt") as outfile:
+                for change_obj in diff.get_change_sets().values():
+                    print(change_obj, end="", file=outfile)
+        except OSError as e:
+            print("ERROR: Issue opening file {}: {}".format(out_pathname, e))
diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
index d958d1816eb4..312d59682329 100644
--- a/xen/scripts/xen_analysis/diff_tool/report.py
+++ b/xen/scripts/xen_analysis/diff_tool/report.py
@@ -1,6 +1,7 @@
 #!/usr/bin/env python3
 
 import os
+from .unified_format_parser import UnifiedFormatParser, ChangeSet
 
 
 class ReportError(Exception):
@@ -65,6 +66,89 @@ class Report:
             self.__entries[entry_path] = [entry]
         self.__last_line_order += 1
 
+    def remove_entries(self, entry_file_path: str) -> None:
+        del self.__entries[entry_file_path]
+
+    def remove_entry(self, entry_path: str, line_number: int) -> None:
+        if entry_path in self.__entries.keys():
+            entries = self.__entries[entry_path]
+            if len(entries) == 1:
+                del self.__entries[entry_path]
+            else:
+                self.__entries[entry_path] = \
+                    [e for e in entries if e.line_number != line_number]
+
+    def patch(self, diff_obj: UnifiedFormatParser) -> 'Report':
+        filename, file_extension = os.path.splitext(self.__path)
+        patched_report = self.__class__(filename + ".patched" + file_extension)
+        remove_files = []
+        rename_files = []
+        remove_entry = []
+        ChangeMode = ChangeSet.ChangeMode
+
+        # Copy entries from this report to the report we are going to patch
+        for entries in self.__entries.values():
+            for entry in entries:
+                patched_report.add_entry(entry.file_path, entry.line_number,
+                                         entry.text)
+
+        # Patch the output report
+        patched_rep_entries = patched_report.get_report_entries()
+        for file_diff, change_obj in diff_obj.get_change_sets().items():
+            if change_obj.is_change_mode(ChangeMode.COPY):
+                # Copy the original entry pointed by change_obj.orig_file into
+                # a new key in the patched report named change_obj.dst_file,
+                # that here is file_diff variable content, because this
+                # change_obj is pushed into the change_sets with the
+                # change_obj.dst_file key
+                if change_obj.orig_file in self.__entries.keys():
+                    for entry in self.__entries[change_obj.orig_file]:
+                        patched_report.add_entry(file_diff,
+                                                 entry.line_number,
+                                                 entry.text)
+
+            if file_diff in patched_rep_entries.keys():
+                if change_obj.is_change_mode(ChangeMode.DELETE):
+                    # No need to check changes here, just remember to delete
+                    # the file from the report
+                    remove_files.append(file_diff)
+                    continue
+                elif change_obj.is_change_mode(ChangeMode.RENAME):
+                    # Remember to rename the file entry on this report
+                    rename_files.append(change_obj)
+
+                for line_num, change_type in change_obj.get_change_set():
+                    len_rep = len(patched_rep_entries[file_diff])
+                    for i in range(len_rep):
+                        rep_item = patched_rep_entries[file_diff][i]
+                        if change_type == ChangeSet.ChangeType.REMOVE:
+                            if rep_item.line_number == line_num:
+                                # This line is removed by this change;
+                                # append it to the list of entries to remove
+                                remove_entry.append(rep_item)
+                            elif rep_item.line_number > line_num:
+                                rep_item.line_number -= 1
+                        elif change_type == ChangeSet.ChangeType.ADD:
+                            if rep_item.line_number >= line_num:
+                                rep_item.line_number += 1
+                    # Remove deleted entries from the list
+                    if len(remove_entry) > 0:
+                        for entry in remove_entry:
+                            patched_report.remove_entry(entry.file_path,
+                                                        entry.line_number)
+                        remove_entry.clear()
+
+        if len(remove_files) > 0:
+            for file_name in remove_files:
+                patched_report.remove_entries(file_name)
+
+        if len(rename_files) > 0:
+            for change_obj in rename_files:
+                patched_rep_entries[change_obj.dst_file] = \
+                    patched_rep_entries.pop(change_obj.orig_file)
+
+        return patched_report
+
     def to_list(self) -> list:
         report_list = []
         for _, entries in self.__entries.items():
diff --git a/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py b/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
new file mode 100644
index 000000000000..e34cc8ac063f
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
@@ -0,0 +1,202 @@
+#!/usr/bin/env python3
+
+import re
+from enum import Enum
+from typing import Tuple
+
+
+class UnifiedFormatParseError(Exception):
+    pass
+
+
+class ParserState(Enum):
+    FIND_DIFF_HEADER = 0
+    REGISTER_CHANGES = 1
+    FIND_HUNK_OR_DIFF_HEADER = 2
+
+
+class ChangeSet:
+    class ChangeType(Enum):
+        REMOVE = 0
+        ADD = 1
+
+    class ChangeMode(Enum):
+        NONE = 0
+        CHANGE = 1
+        RENAME = 2
+        DELETE = 3
+        COPY = 4
+
+    def __init__(self, a_file: str, b_file: str) -> None:
+        self.orig_file = a_file
+        self.dst_file = b_file
+        self.change_mode = ChangeSet.ChangeMode.NONE
+        self.__changes = []
+
+    def __str__(self) -> str:
+        str_out = "{}: {} -> {}:\n{}\n".format(
+            str(self.change_mode), self.orig_file, self.dst_file,
+            str(self.__changes)
+        )
+        return str_out
+
+    def set_change_mode(self, change_mode: ChangeMode) -> None:
+        self.change_mode = change_mode
+
+    def is_change_mode(self, change_mode: ChangeMode) -> bool:
+        return self.change_mode == change_mode
+
+    def add_change(self, line_number: int, change_type: ChangeType) -> None:
+        self.__changes.append((line_number, change_type))
+
+    def get_change_set(self) -> list:
+        return self.__changes
+
+
+class UnifiedFormatParser:
+    def __init__(self, args: str | list) -> None:
+        if isinstance(args, str):
+            self.__diff_file = args
+            try:
+                with open(self.__diff_file, "rt") as infile:
+                    self.__diff_lines = infile.readlines()
+            except OSError as e:
+                raise UnifiedFormatParseError(
+                    "Issue with reading file {}: {}"
+                    .format(self.__diff_file, e)
+                )
+        elif isinstance(args, list):
+            self.__diff_file = "git-diff-local.txt"
+            self.__diff_lines = args
+        else:
+            raise UnifiedFormatParseError(
+                "UnifiedFormatParser constructor called with wrong arguments")
+
+        self.__git_diff_header = re.compile(r'^diff --git a/(.*) b/(.*)$')
+        self.__git_hunk_header = \
+            re.compile(r'^@@ -\d+,(\d+) \+(\d+),(\d+) @@.*$')
+        self.__diff_set = {}
+        self.__parse()
+
+    def get_diff_path(self) -> str:
+        return self.__diff_file
+
+    def add_change_set(self, change_set: ChangeSet) -> None:
+        if not change_set.is_change_mode(ChangeSet.ChangeMode.NONE):
+            if change_set.is_change_mode(ChangeSet.ChangeMode.COPY):
+                # Add copy change mode items using the dst_file key, because
+                # there might be other changes for the orig_file in this diff
+                self.__diff_set[change_set.dst_file] = change_set
+            else:
+                self.__diff_set[change_set.orig_file] = change_set
+
+    def __parse(self) -> None:
+        def parse_diff_header(line: str) -> ChangeSet | None:
+            change_item = None
+            diff_head = self.__git_diff_header.match(line)
+            if diff_head and diff_head.group(1) and diff_head.group(2):
+                change_item = ChangeSet(diff_head.group(1), diff_head.group(2))
+
+            return change_item
+
+        def parse_hunk_header(line: str) -> Tuple[int, int, int]:
+            file_linenum = -1
+            hunk_a_linemax = -1
+            hunk_b_linemax = -1
+            hunk_head = self.__git_hunk_header.match(line)
+            if hunk_head and hunk_head.group(1) and hunk_head.group(2) \
+               and hunk_head.group(3):
+                file_linenum = int(hunk_head.group(2))
+                hunk_a_linemax = int(hunk_head.group(1))
+                hunk_b_linemax = int(hunk_head.group(3))
+
+            return (file_linenum, hunk_a_linemax, hunk_b_linemax)
+
+        file_linenum = 0
+        hunk_a_linemax = 0
+        hunk_b_linemax = 0
+        diff_elem = None
+        parse_state = ParserState.FIND_DIFF_HEADER
+        ChangeMode = ChangeSet.ChangeMode
+        ChangeType = ChangeSet.ChangeType
+
+        for line in self.__diff_lines:
+            if parse_state == ParserState.FIND_DIFF_HEADER:
+                diff_elem = parse_diff_header(line)
+                if diff_elem:
+                    # Found the diff header, go to the next stage
+                    parse_state = ParserState.FIND_HUNK_OR_DIFF_HEADER
+            elif parse_state == ParserState.FIND_HUNK_OR_DIFF_HEADER:
+                # Here only these change modalities will be registered:
+                # deleted file mode <mode>
+                # rename from <path>
+                # rename to <path>
+                # copy from <path>
+                # copy to <path>
+                #
+                # These will be ignored:
+                # old mode <mode>
+                # new mode <mode>
+                # new file mode <mode>
+                #
+                # This information will also be ignored:
+                # similarity index <number>
+                # dissimilarity index <number>
+                # index <hash>..<hash> <mode>
+                if line.startswith("deleted file"):
+                    # If the file is deleted, register it but don't go through
+                    # the changes, which will only be a set of removed lines
+                    diff_elem.set_change_mode(ChangeMode.DELETE)
+                    parse_state = ParserState.FIND_DIFF_HEADER
+                elif line.startswith("new file"):
+                    # If the file is new, skip it, as it doesn't give any
+                    # useful information on the report translation
+                    parse_state = ParserState.FIND_DIFF_HEADER
+                elif line.startswith("rename to"):
+                    # A rename operation can be a pure rename or a rename
+                    # plus a set of changes, so keep looking for the hunk
+                    # header
+                    diff_elem.set_change_mode(ChangeMode.RENAME)
+                elif line.startswith("copy to"):
+                    # This is a copy operation, mark it
+                    diff_elem.set_change_mode(ChangeMode.COPY)
+                else:
+                    # Look for the hunk header
+                    (file_linenum, hunk_a_linemax, hunk_b_linemax) = \
+                        parse_hunk_header(line)
+                    if file_linenum >= 0:
+                        if diff_elem.is_change_mode(ChangeMode.NONE):
+                            # The file has only changes
+                            diff_elem.set_change_mode(ChangeMode.CHANGE)
+                        parse_state = ParserState.REGISTER_CHANGES
+                    else:
+                        # ... or there could be a diff header
+                        new_diff_elem = parse_diff_header(line)
+                        if new_diff_elem:
+                            # Found a diff header, register the last change
+                            # item
+                            self.add_change_set(diff_elem)
+                            diff_elem = new_diff_elem
+            elif parse_state == ParserState.REGISTER_CHANGES:
+                if (hunk_b_linemax > 0) and line.startswith("+"):
+                    diff_elem.add_change(file_linenum, ChangeType.ADD)
+                    hunk_b_linemax -= 1
+                elif (hunk_a_linemax > 0) and line.startswith("-"):
+                    diff_elem.add_change(file_linenum, ChangeType.REMOVE)
+                    hunk_a_linemax -= 1
+                    file_linenum -= 1
+                elif ((hunk_a_linemax + hunk_b_linemax) > 0) and \
+                        line.startswith(" "):
+                    hunk_a_linemax -= 1 if (hunk_a_linemax > 0) else 0
+                    hunk_b_linemax -= 1 if (hunk_b_linemax > 0) else 0
+
+                if (hunk_a_linemax + hunk_b_linemax) <= 0:
+                    parse_state = ParserState.FIND_HUNK_OR_DIFF_HEADER
+
+                file_linenum += 1
+
+        if diff_elem is not None:
+            self.add_change_set(diff_elem)
+
+    def get_change_sets(self) -> dict:
+        return self.__diff_set
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 14:25:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529797.824620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZtu-00050N-MG; Thu, 04 May 2023 14:25:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529797.824620; Thu, 04 May 2023 14:25:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZtu-00050C-JF; Thu, 04 May 2023 14:25:42 +0000
Received: by outflank-mailman (input) for mailman id 529797;
 Thu, 04 May 2023 14:25:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UZDr=AZ=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1puZtt-0004cz-8Z
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:25:41 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 8850da65-ea87-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 16:25:35 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9E006C14;
 Thu,  4 May 2023 07:26:18 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0D4213F67D;
 Thu,  4 May 2023 07:25:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8850da65-ea87-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] xen/misra: add diff-report.py tool
Date: Thu,  4 May 2023 15:25:22 +0100
Message-Id: <20230504142523.2989306-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504142523.2989306-1-luca.fancellu@arm.com>
References: <20230504142523.2989306-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new tool, diff-report.py, that can be used to compute the
difference between two reports generated by the xen-analysis.py tool.
Currently the tool supports only the Xen cppcheck text report format.

The tool prints every finding that is present in the report passed
with -r (the check report) but absent from the report passed with -b
(the baseline).

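The subtraction semantics can be pictured with a small standalone
sketch: findings are keyed by (file path, line number), mirroring how
Report.__sub__ compares entries. The sample paths and line numbers
below are invented for illustration; this is not the tool's code:

```python
# Invented sample data: each finding is a (file path, line number) pair.
baseline = {
    ("xen/arch/arm/p2m.c", 120),
    ("xen/common/memory.c", 45),
}
check_report = {
    ("xen/arch/arm/p2m.c", 120),          # already in the baseline
    ("xen/common/memory.c", 47),          # new finding
    ("xen/drivers/char/ns16550.c", 10),   # new finding
}

# Keep only findings present in the check report but not the baseline.
new_findings = sorted(check_report - baseline)
print(new_findings)
```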
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/diff-report.py                    |  76 ++++++++++++
 .../xen_analysis/diff_tool/__init__.py        |   0
 .../xen_analysis/diff_tool/cppcheck_report.py |  41 +++++++
 xen/scripts/xen_analysis/diff_tool/debug.py   |  36 ++++++
 xen/scripts/xen_analysis/diff_tool/report.py  | 114 ++++++++++++++++++
 5 files changed, 267 insertions(+)
 create mode 100755 xen/scripts/diff-report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py

diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
new file mode 100755
index 000000000000..4913fb43a8f9
--- /dev/null
+++ b/xen/scripts/diff-report.py
@@ -0,0 +1,76 @@
+#!/usr/bin/env python3
+
+import os
+import sys
+from argparse import ArgumentParser
+from xen_analysis.diff_tool.debug import Debug
+from xen_analysis.diff_tool.report import ReportError
+from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
+
+
+def log_info(text, end='\n'):
+    global args
+    global file_out
+
+    if (args.verbose):
+        print(text, end=end, file=file_out)
+
+
+def main(argv):
+    global args
+    global file_out
+
+    parser = ArgumentParser(prog="diff-report.py")
+    parser.add_argument("-b", "--baseline", required=True, type=str,
+                        help="Path to the baseline report.")
+    parser.add_argument("--debug", action='store_true',
+                        help="Produce intermediate reports during operations.")
+    parser.add_argument("-o", "--out", default="stdout", type=str,
+                        help="Where to print the tool output. Default is "
+                             "stdout")
+    parser.add_argument("-r", "--report", required=True, type=str,
+                        help="Path to the 'check report', the one checked "
+                             "against the baseline.")
+    parser.add_argument("-v", "--verbose", action='store_true',
+                        help="Print more information during the run.")
+
+    args = parser.parse_args()
+
+    if args.out == "stdout":
+        file_out = sys.stdout
+    else:
+        try:
+            file_out = open(args.out, "wt")
+        except OSError as e:
+            print("ERROR: Issue opening file {}: {}".format(args.out, e))
+            sys.exit(1)
+
+    debug = Debug(args)
+
+    try:
+        baseline_path = os.path.realpath(args.baseline)
+        log_info("Loading baseline report {}".format(baseline_path), "")
+        baseline = CppcheckReport(baseline_path)
+        baseline.parse()
+        debug.debug_print_parsed_report(baseline)
+        log_info(" [OK]")
+        new_rep_path = os.path.realpath(args.report)
+        log_info("Loading check report {}".format(new_rep_path), "")
+        new_rep = CppcheckReport(new_rep_path)
+        new_rep.parse()
+        debug.debug_print_parsed_report(new_rep)
+        log_info(" [OK]")
+    except ReportError as e:
+        print("ERROR: {}".format(e))
+        sys.exit(1)
+
+    output = new_rep - baseline
+    print(output, end="", file=file_out)
+
+    if len(output) > 0:
+        sys.exit(1)
+
+    sys.exit(0)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/xen/scripts/xen_analysis/diff_tool/__init__.py b/xen/scripts/xen_analysis/diff_tool/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
new file mode 100644
index 000000000000..787a51aca583
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
@@ -0,0 +1,41 @@
+#!/usr/bin/env python3
+
+import re
+from .report import Report, ReportError
+
+
+class CppcheckReport(Report):
+    def __init__(self, report_path: str) -> None:
+        super().__init__(report_path)
+        # This matches a string like:
+        # path/to/file.c(<line number>,<digits>):<whatever>
+        # and captures file name path and line number
+        # the last capture group is used for text substitution in __str__
+        self.__report_entry_regex = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')
+
+    def parse(self) -> None:
+        report_path = self.get_report_path()
+        try:
+            with open(report_path, "rt") as infile:
+                report_lines = infile.readlines()
+        except OSError as e:
+            raise ReportError("Issue with reading file {}: {}"
+                              .format(report_path, e))
+        for line in report_lines:
+            entry = self.__report_entry_regex.match(line)
+            if entry and entry.group(1) and entry.group(2):
+                file_path = entry.group(1)
+                line_number = int(entry.group(2))
+                self.add_entry(file_path, line_number, line)
+            else:
+                raise ReportError("Malformed report entry in file {}:\n{}"
+                                  .format(report_path, line))
+
+    def __str__(self) -> str:
+        ret = ""
+        for entry in self.to_list():
+            ret += re.sub(self.__report_entry_regex,
+                          r'{}({}\3'.format(entry.file_path,
+                                            entry.line_number),
+                          entry.text)
+        return ret
diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
new file mode 100644
index 000000000000..d46df3300d21
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/debug.py
@@ -0,0 +1,36 @@
+#!/usr/bin/env python3
+
+import os
+from .report import Report
+
+
+class Debug:
+    def __init__(self, args):
+        self.args = args
+
+    def __get_debug_out_filename(self, path: str, suffix: str) -> str:
+        # Take the basename and split it into name and extension
+        file_name = os.path.splitext(os.path.basename(path))
+        if self.args.out != "stdout":
+            out_folder = os.path.dirname(self.args.out)
+        else:
+            out_folder = "./"
+        dbg_report_path = os.path.join(out_folder,
+                                       file_name[0] + suffix + file_name[1])
+
+        return dbg_report_path
+
+    def __debug_print_report(self, report: Report, suffix: str) -> None:
+        report_name = self.__get_debug_out_filename(report.get_report_path(),
+                                                    suffix)
+        try:
+            with open(report_name, "wt") as outfile:
+                print(report, end="", file=outfile)
+        except OSError as e:
+            print("ERROR: Issue opening file {}: {}".format(report_name, e))
+
+    def debug_print_parsed_report(self, report: Report) -> None:
+        if not self.args.debug:
+            return
+        self.__debug_print_report(report, ".parsed")
diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
new file mode 100644
index 000000000000..d958d1816eb4
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/report.py
@@ -0,0 +1,114 @@
+#!/usr/bin/env python3
+
+import os
+
+
+class ReportError(Exception):
+    pass
+
+
+class Report:
+    class ReportEntry:
+        def __init__(self, file_path: str, line_number: int,
+                     entry_text: str, line_id: int) -> None:
+            if not isinstance(line_number, int) or \
+               not isinstance(line_id, int):
+                raise ReportError("ReportEntry constructor wrong type args")
+            self.file_path = file_path
+            self.line_number = line_number
+            self.text = entry_text
+            self.line_id = line_id
+
+        def __str__(self) -> str:
+            return '{}:{}:{}'.format(self.file_path, self.line_number,
+                                     self.text)
+
+    def __init__(self, report_path: str) -> None:
+        self.__entries = {}
+        self.__path = report_path
+        self.__last_line_order = 0
+
+    def parse(self) -> None:
+        raise ReportError("Please create a specialised class from 'Report'.")
+
+    def get_report_path(self) -> str:
+        return self.__path
+
+    def get_report_entries(self) -> dict:
+        return self.__entries
+
+    def add_entry(self, entry_path: str, entry_line_number: int,
+                  entry_text: str) -> None:
+        entry = Report.ReportEntry(entry_path, entry_line_number, entry_text,
+                                   self.__last_line_order)
+        if entry_path in self.__entries:
+            self.__entries[entry_path].append(entry)
+        else:
+            self.__entries[entry_path] = [entry]
+        self.__last_line_order += 1
+
+    def to_list(self) -> list:
+        report_list = []
+        for entries in self.__entries.values():
+            report_list.extend(entries)
+
+        report_list.sort(key=lambda x: x.line_id)
+        return report_list
+
+    def __str__(self) -> str:
+        ret = ""
+        for entry in self.to_list():
+            ret += "{}:{}:{}".format(entry.file_path, entry.line_number,
+                                     entry.text)
+
+        return ret
+
+    def __len__(self) -> int:
+        return len(self.to_list())
+
+    def __sub__(self, report_b: 'Report') -> 'Report':
+        if self.__class__ != report_b.__class__:
+            raise ReportError("Diff of different type of report!")
+
+        filename, file_extension = os.path.splitext(self.__path)
+        diff_report = self.__class__(filename + ".diff" + file_extension)
+        # Put in the diff report only records of this report that are not
+        # present in the report_b.
+        rep_b_entries = report_b.get_report_entries()
+        for file_path, entries in self.__entries.items():
+            if file_path in rep_b_entries:
+                # File path exists in report_b, so check what entries of that
+                # file path doesn't exist in report_b and add them to the diff
+                rep_b_entries_num = [
+                    x.line_number for x in rep_b_entries[file_path]
+                ]
+                for entry in entries:
+                    if entry.line_number not in rep_b_entries_num:
+                        diff_report.add_entry(file_path, entry.line_number,
+                                              entry.text)
+            else:
+                # File path doesn't exist in report_b, so add every entry
+                # of that file path to the diff
+                for entry in entries:
+                    diff_report.add_entry(file_path, entry.line_number,
+                                          entry.text)
+
+        return diff_report
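As an aside, the entry-matching regex in cppcheck_report.py above can
be exercised on its own; in this minimal sketch the finding text is
made up, but the pattern is the one from the patch:

```python
import re

# Same pattern as in cppcheck_report.py: group 1 captures the file
# path, group 2 the line number; the last group is kept for text
# substitution in __str__.
entry_regex = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')

# Hypothetical cppcheck-style finding (illustrative only).
line = "xen/arch/arm/p2m.c(120,5): style: some cppcheck finding"

match = entry_regex.match(line)
assert match is not None
file_path = match.group(1)
line_number = int(match.group(2))
print(file_path, line_number)
```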
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 14:27:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:27:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529813.824630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZvQ-0006Q1-8t; Thu, 04 May 2023 14:27:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529813.824630; Thu, 04 May 2023 14:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZvQ-0006Pu-62; Thu, 04 May 2023 14:27:16 +0000
Received: by outflank-mailman (input) for mailman id 529813;
 Thu, 04 May 2023 14:27:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evI0=AZ=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1puZvO-0006Pe-Lj
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:27:14 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on20620.outbound.protection.outlook.com
 [2a01:111:f400:7e83::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c12773c4-ea87-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 16:27:11 +0200 (CEST)
Received: from MW4PR04CA0083.namprd04.prod.outlook.com (2603:10b6:303:6b::28)
 by IA1PR12MB7541.namprd12.prod.outlook.com (2603:10b6:208:42f::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 14:27:08 +0000
Received: from CO1NAM11FT101.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:6b:cafe::25) by MW4PR04CA0083.outlook.office365.com
 (2603:10b6:303:6b::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 14:27:07 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT101.mail.protection.outlook.com (10.13.175.164) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.26 via Frontend Transport; Thu, 4 May 2023 14:27:07 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 09:27:05 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 4 May
 2023 09:26:55 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 4 May 2023 09:26:53 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c12773c4-ea87-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WDcNXg2LdgacGaYFeU8pVvn6v3Aie+Yp8o6ardijkCDTKdESXTFbE1L0J1WEXMk3GQdmfPopJ+6hRoy5N2ci89qIro/3yG6//cZ4iwPjywdenoheOTenlCmltLETnOH0CfRdexhtwn2DONiaLwPUvQYrzkis+LVexY8g/0s01DBHsUL5YQzQLTNza1aVmUvQAPn0xn+oIMx4jALTjAdfzXi8JotrUBdxR6ihT1Wn63SDGSBSiTTxsyjLrsvUXOXyvGzFBaMfLT5DMivlt9pVo7Rhwwcx3yaBZWP3Dq/CGjNjtDGb2gtFM/MuYtl3rnKLAgYhk1a4yIM1G0wb6nYYEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yReBJkggtW9j5YlcdD87yqF+ZbDUNTJM8rmOi+qPKGQ=;
 b=CpjLRgWfDs8BsnbB47Q3aj1pWUyqiJYu+ClxoMrO11/LjZXvsO7B1Qlaq5HB16I59G4dhYgktaAJ9U8VTdyk43ZURO1ajuCXg9HD9mVjkMnYPcfevK/6kVIO2/+wYY2/uXUuZdF2gR3oXJkHdOPiUd3ac8jprfI86WTEQ5qUENYq42bSkWlSLUk2gbf8uwRx0Ym9elqcwvyz32GNiCbgeXvXTqSjpsK5SsdDCAwrfMUC8daL99rwp+f72ZqFa8gmGtO1CEnzCQVR0CuuRj4RksZ3Z2K+XVjHhWhDWkVOaWnHSKE9DqHOnqkPPdCZGrqdG+08IJPcst+1c/efZseiwg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yReBJkggtW9j5YlcdD87yqF+ZbDUNTJM8rmOi+qPKGQ=;
 b=NKbihi94TR5t4BugvsGhA0I/oKidbt6pdCCHtFKuSBdgoMWv9u+Sa/BaZjKKrOedeccjWW8JXwHTAbQOOuH1w78yy8gh3OErasTsiviBeu1bfAeueijkok4+Icgti3cjHn568uvEzWNowE709lPl4OG/gLHkLS+TNOAGq1f/V/A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <29c7bf6a-0db4-5192-b93d-3e3220fe6a2e@amd.com>
Date: Thu, 4 May 2023 16:26:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Ayan Kumar Halder
	<ayan.kumar.halder@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-12-ayan.kumar.halder@amd.com>
 <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
 <178f9c0f-2f72-daac-772b-c3c4221bea40@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <178f9c0f-2f72-daac-772b-c3c4221bea40@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT101:EE_|IA1PR12MB7541:EE_
X-MS-Office365-Filtering-Correlation-Id: 4d21ad43-6b9c-45b7-bc61-08db4caba3de
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	SxZh8RKOhHH5pULoRBT8yc0+AAwfnujgqnvedg6RpqQhgedfi7BrpMes61WcsUvtvvdRfzrxVf+kdaAUI8FZ2B3LHfm8rRlXawi8RVwxMipMgETudHKmULRJ8rqrMJhJX5dGA28fN+ZfQzoQhR9uM+R2Rv+yj0sfSVVoaAGxx2R8QPwyNLQTs2ds20tPZpVx0nMhy5rvAzyWOOASXLzpyS2b0v3InwQ2kyz5Ee2jPZG3wR2xGRBSdYFtAlYtJXkhfIk5xgYVPndcWqO1ZINWatT0Z+5JgTykezBNeh+M+VWNvTykcTtxlOqgtVvPy8Bvtw7Fu4uorKQeoBPxhY+nPg+drYnCWzWi8MQ3mnz8K2xnho2hMZTXj6aanptEZ2LCmADamNeygG33SBVTZuGXg6R5O075xnr5J4WK4tXtABc5hlRiBUufOnxBeTcezvJq+ppw4puruExDqRVEDIgxWptzT3RF3ahFHnBbOIsMvHt6wH5MJ2+C+ONarq+6AbkFNLWyGkT5Rik1j/VDBXBUNZGulun6J9Evi+TvwIsItogHxfNrtjAXRiI79xFKW6/fWSaD4lHhIXKEC35pvCzqn6rtLO/cVFooKT9t2KqxBf9jcncl652Hyk5Re3hS5nQUzcNoSyrYOVUOc2gbV++YPfxgLPjz61eSYnowqY1UC324RA3D7nvtgDcB1ZEiRvT+/Iy9EzFX+YNF3QEVXKA3LlR0RhdK5EZ2ywdJ+NdTmlWKRABFkq2zfxdoLACvcdRexeawqkx3eVKt4ZjfUwPSwg==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(136003)(396003)(346002)(451199021)(36840700001)(46966006)(40470700004)(40460700003)(31686004)(8936002)(82310400005)(8676002)(7416002)(2616005)(26005)(53546011)(336012)(426003)(36756003)(40480700001)(2906002)(47076005)(83380400001)(44832011)(186003)(478600001)(4326008)(5660300002)(41300700001)(70586007)(70206006)(356005)(316002)(16576012)(110136005)(54906003)(31696002)(82740400003)(36860700001)(86362001)(81166007)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 14:27:07.3896
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4d21ad43-6b9c-45b7-bc61-08db4caba3de
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT101.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB7541



On 03/05/2023 14:35, Julien Grall wrote:
> 
> 
> On 03/05/2023 13:20, Julien Grall wrote:
>> Hi,
>>
>> On 28/04/2023 18:55, Ayan Kumar Halder wrote:
>>> Restructure the code so that one can use pa_range_info[] table for both
>>> ARM_32 as well as ARM_64.
>>>
>>> Also, removed the hardcoding for P2M_ROOT_ORDER and P2M_ROOT_LEVEL as
>>> p2m_root_order can be obtained from the pa_range_info[].root_order and
>>> p2m_root_level can be obtained from pa_range_info[].sl0.
>>>
>>> Refer ARM DDI 0406C.d ID040418, B3-1345,
>>> "Use of concatenated first-level translation tables
>>>
>>> ...However, a 40-bit input address range with a translation
>>> granularity of 4KB
>>> requires a total of 28 bits of address resolution. Therefore, a stage 2
>>> translation that supports a 40-bit input address range requires two
>>> concatenated
>>> first-level translation tables,..."
>>>
>>> Thus, root-order is 1 for 40-bit IPA on ARM_32.
>>>
>>> Refer ARM DDI 0406C.d ID040418, B3-1348,
>>>
>>> "Determining the required first lookup level for stage 2 translations
>>>
>>> For a stage 2 translation, the output address range from the stage 1
>>> translations determines the required input address range for the stage 2
>>> translation. The permitted values of VTCR.SL0 are:
>>>
>>> 0b00 Stage 2 translation lookup must start at the second level.
>>> 0b01 Stage 2 translation lookup must start at the first level.
>>>
>>> VTCR.T0SZ must indicate the required input address range. The size of
>>> the input
>>> address region is 2^(32-T0SZ) bytes."
>>>
>>> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of
>>> input
>>> address region is 2^40 bytes.
>>>
>>> Thus, pa_range_info[].t0sz = 1 (VTCR.S) | 8 (VTCR.T0SZ) ie 11000b
>>> which is 24.
>>>
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>> ---
>>> Changes from -
>>>
>>> v3 - 1. New patch introduced in v4.
>>> 2. Restructure the code such that pa_range_info[] is used both by
>>> ARM_32 as
>>> well as ARM_64.
>>>
>>> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and
>>> P2M_ROOT_LEVEL.
>>> The reason being root_order will not be always 1 (See the next patch).
>>> 2. Updated the commit message to explain t0sz, sl0 and root_order
>>> values for
>>> 32-bit IPA on Arm32.
>>> 3. Some sanity fixes.
>>>
>>> v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range. ie
>>> when PARange is 0, the PA size is 32, 1 -> 36 and so on. So
>>> pa_range_info[] has
>>> been updated accordingly.
>>> For ARM_32 pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do not
>>> support
>>> 32-bit, 36-bit physical address range yet.
>>>
>>>   xen/arch/arm/include/asm/p2m.h |  8 +-------
>>>   xen/arch/arm/p2m.c             | 32 ++++++++++++++++++--------------
>>>   2 files changed, 19 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/include/asm/p2m.h
>>> b/xen/arch/arm/include/asm/p2m.h
>>> index f67e9ddc72..4ddd4643d7 100644
>>> --- a/xen/arch/arm/include/asm/p2m.h
>>> +++ b/xen/arch/arm/include/asm/p2m.h
>>> @@ -14,16 +14,10 @@
>>>   /* Holds the bit size of IPAs in p2m tables.  */
>>>   extern unsigned int p2m_ipa_bits;
>>> -#ifdef CONFIG_ARM_64
>>>   extern unsigned int p2m_root_order;
>>>   extern unsigned int p2m_root_level;
>>> -#define P2M_ROOT_ORDER    p2m_root_order
>>> +#define P2M_ROOT_ORDER p2m_root_order
>>
>> This looks like a spurious change.
>>
>>>   #define P2M_ROOT_LEVEL p2m_root_level
>>> -#else
>>> -/* First level P2M is always 2 consecutive pages */
>>> -#define P2M_ROOT_ORDER    1
>>> -#define P2M_ROOT_LEVEL 1
>>> -#endif
>>>   struct domain;
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 418997843d..1fe3cccf46 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -19,9 +19,9 @@
>>>   #define INVALID_VMID 0 /* VMID 0 is reserved */
>>> -#ifdef CONFIG_ARM_64
>>>   unsigned int __read_mostly p2m_root_order;
>>>   unsigned int __read_mostly p2m_root_level;
>>> +#ifdef CONFIG_ARM_64
>>>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>>   /* VMID is by default 8 bit width on AArch64 */
>>>   #define MAX_VMID       max_vmid
>>> @@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
>>>       /* Setup Stage 2 address translation */
>>>       register_t val =
>>> VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>>> -#ifdef CONFIG_ARM_32
>>> -    if ( p2m_ipa_bits < 40 )
>>> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
>>> -              p2m_ipa_bits);
>>> -
>>> -    printk("P2M: 40-bit IPA\n");
>>> -    p2m_ipa_bits = 40;
>>> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
>>> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
>>> -#else /* CONFIG_ARM_64 */
>>>       static const struct {
>>>           unsigned int pabits; /* Physical Address Size */
>>>           unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
>>> @@ -2265,19 +2255,26 @@ void __init setup_virt_paging(void)
>>>       } pa_range_info[] __initconst = {
>>>           /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table
>>> D5-6 */
>>>           /*      PA size, t0sz(min), root-order, sl0(max) */
>>> +        [2] = { 40,      24/*24*/,  1,          1 },
>>
>> I don't like the fact that the index are not ordered anymore and...
>>
>>> +#ifdef CONFIG_ARM_64
>>>           [0] = { 32,      32/*32*/,  0,          1 },
>>>           [1] = { 36,      28/*28*/,  0,          1 },
>>> -        [2] = { 40,      24/*24*/,  1,          1 },
>>>           [3] = { 42,      22/*22*/,  3,          1 },
>>>           [4] = { 44,      20/*20*/,  0,          2 },
>>>           [5] = { 48,      16/*16*/,  0,          2 },
>>>           [6] = { 52,      12/*12*/,  4,          2 },
>>>           [7] = { 0 }  /* Invalid */
>>> +#else
>>> +        [0] = { 0 },  /* Invalid */
>>> +        [1] = { 0 },  /* Invalid */
>>> +        [3] = { 0 }  /* Invalid */
>>> +#endif
>>
>> ... it is not clear to me why we are adding 3 extra entries. I think it
>> would be better if we do:
>>
>> #ifdef CONFIG_ARM_64
>>     [0] ...
>>     [1] ...
>> #endif
>>     [2] ...
>> #ifdef CONFIG_ARM_64
>>     [3] ...
>>     [4] ...
>>     ...
>> #endif
> 
> Looking at the next patch. An alternative would be to go back
> duplicating the lines. So after the two patches we would have
> 
> #ifdef CONFIG_ARM_64
>      [0] ...
>      [7] ...
> #else
>      { /* 32-bit */ }
>      { /* 40-bit */ }
> #endif
+1 for this approach

~Michal


From xen-devel-bounces@lists.xenproject.org Thu May 04 14:32:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529816.824639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZzt-00081Q-QM; Thu, 04 May 2023 14:31:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529816.824639; Thu, 04 May 2023 14:31:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puZzt-00081J-Nb; Thu, 04 May 2023 14:31:53 +0000
Received: by outflank-mailman (input) for mailman id 529816;
 Thu, 04 May 2023 14:31:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Gg4=AZ=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1puZzs-00081D-Fe
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:31:52 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20619.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::619])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 67515649-ea88-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 16:31:49 +0200 (CEST)
Received: from DB8PR06CA0033.eurprd06.prod.outlook.com (2603:10a6:10:100::46)
 by AS8PR08MB8299.eurprd08.prod.outlook.com (2603:10a6:20b:56f::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Thu, 4 May
 2023 14:31:47 +0000
Received: from DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:100:cafe::22) by DB8PR06CA0033.outlook.office365.com
 (2603:10a6:10:100::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26 via Frontend
 Transport; Thu, 4 May 2023 14:31:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT011.mail.protection.outlook.com (100.127.142.132) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.26 via Frontend Transport; Thu, 4 May 2023 14:31:46 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Thu, 04 May 2023 14:31:46 +0000
Received: from 883d0a7524a4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 78955CA4-2807-49F1-B95B-30FAA3DE5579.1; 
 Thu, 04 May 2023 14:31:39 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 883d0a7524a4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 04 May 2023 14:31:39 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by AM8PR08MB6436.eurprd08.prod.outlook.com (2603:10a6:20b:365::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Thu, 4 May
 2023 14:31:38 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb%7]) with mapi id 15.20.6363.026; Thu, 4 May 2023
 14:31:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67515649-ea88-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TtzQXUYKMj+AdmzNXO2SGo6npQB/QyLYID7bcjuPU6I=;
 b=oP6tEUcnoF8CbHfAToc0wtN77NzoY969wUUUZrDkNGo3I293WxU61tVp/d6TUVJ9D5269o7Y9yEg6ULOj0JolCj7IV9o10iY9kyiOaW65MDDenpj05rVwyac7RZvoxBe2l9F9LLuxEz6DuNtjhIL32QIUzTPXFFJ1aRI+PUU9Yg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: a89e8712633c4e1c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=g8G18LqzIc64k+tDxq/EKlThpU8nnjlVFGJTUfTjr/n5Tb8XCQRCvbADkohE9dzJABDOAqalxCe9ITsWxx6aW6iaq4WoeiTC7c+q9CQAbr9o/wB5Id3FwH62RCFa1UqtM85FK61Wp9mb9X8aa8jZBAtWPFrYDVKQlwZ3cffAISF9m3gYYPZXoPxY+VAt4mZfyIQx/542Ac4CDHC0P9QRH4JUXWbdTtrDGiuFuY0LLFoInNRvAOamOkYnn1bMu09m49LLFOingZozRYbv+2xA5qISWu+oDffNsoYf9czMD20E/xSg/7lVrD5BoLGweCTzU859cWU779zywKmsceoGQA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TtzQXUYKMj+AdmzNXO2SGo6npQB/QyLYID7bcjuPU6I=;
 b=KnsemLskmTl6iWxGV2GiSlZkfPNQxqYDHz7nlHVE9TEGULUQ1EM43QxrH+6JGXYeY1MGU+MAcmGr7pNxq/x4HJlMrtV+EuNxL7I9/XtxLkxEVPo8HNcWJP4Ct91kv4OgT/5PRtXjlwhlZdkhx/y4HTccdUg9zwl/hVn3bsPfZnMW4m175b3HSdfT8CJFmjwcOo0D3v65ySymdzH7Lv04/1uvLaq248JTBSDqty/E6N6qmJcPOiiziiaZJeCSym+iFbbuC27er5GApeBdrkSrqGERt/H7Rlsn0Jv1Ztf1FLAjDIKxwX0/iA4kqii7NanknJSriptkrnI+oWStukBTKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TtzQXUYKMj+AdmzNXO2SGo6npQB/QyLYID7bcjuPU6I=;
 b=oP6tEUcnoF8CbHfAToc0wtN77NzoY969wUUUZrDkNGo3I293WxU61tVp/d6TUVJ9D5269o7Y9yEg6ULOj0JolCj7IV9o10iY9kyiOaW65MDDenpj05rVwyac7RZvoxBe2l9F9LLuxEz6DuNtjhIL32QIUzTPXFFJ1aRI+PUU9Yg=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Ayan Kumar Halder <ayan.kumar.halder@amd.com>, Xen developer discussion
	<xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, "Stabellini, Stefano" <stefano.stabellini@amd.com>,
	Julien Grall <julien@xen.org>, "Volodymyr_Babchuk@epam.com"
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"george.dunlap@citrix.com" <george.dunlap@citrix.com>, "jbeulich@suse.com"
	<jbeulich@suse.com>, "wl@xen.org" <wl@xen.org>
Subject: Re: [XEN v6 04/12] xen/arm: smmu: Use writeq_relaxed_non_atomic() for
 writing to SMMU_CBn_TTBR0
Thread-Topic: [XEN v6 04/12] xen/arm: smmu: Use writeq_relaxed_non_atomic()
 for writing to SMMU_CBn_TTBR0
Thread-Index: AQHZefsV/U5NscHjSECspfoDYyGz6a9JwnmAgABzyIA=
Date: Thu, 4 May 2023 14:31:37 +0000
Message-ID: <675F9B31-1720-4AD3-932A-67D98FDDF7DE@arm.com>
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-5-ayan.kumar.halder@amd.com>
 <c8021f4b-1607-23df-803b-ac162d9d4324@amd.com>
In-Reply-To: <c8021f4b-1607-23df-803b-ac162d9d4324@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7158:EE_|AM8PR08MB6436:EE_|DBAEUR03FT011:EE_|AS8PR08MB8299:EE_
X-MS-Office365-Filtering-Correlation-Id: 5dc6f856-30cb-4b91-6906-08db4cac4a24
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 f1/OVeizTfDOGIwcwymgm9aa+ZdQ7MwQhX3knN+zTsdCZ2v3VgDMBE8KtknyjVmTjTi6h4XIX0/bKkRvGFNO9oReqbMnAhSYYTq2Pu4KVbK3tc4HclJyGF1TKwog3ABJDZVfuC/1jyd8tNTm79iZeK7u0NLKdJeFiYznXX1NLoFlst2rEe2hp/sfIyVfs7yb9IihyauBI7PN2lCLqfD4LoIaRCPEmtB3nigIWLnqMHxsXOIcMjSjuvmbWugMBCDQnWtETFMCgLqd6LfmEQSV1HHqml1p+g0su91jAz9a+MBdW/fGnY5fki6CfQ9CWROhmWMPX7YfGyM2poTLpYu2XKy9zZKirDts1C0I5p2Yc3qxP4uV6nVghv8iXqEjT2vOpc3TMbitIRMuUC5cjKJ4omDb1BEoKsViJCsx4gnMqrI0GJI+rpC4gd/kftSWELdnsbm7NIYMVhUW+/pDrKYb9jXDMqV1qeUbNubJk0t21re8n9qkLxx/a5pyh5ZcB0nBJHgNKjNJKXUilHYSuf9dIxGewfas6gp7LEvDOFxe1f/IMcJtsK+mfvo8fktchpE505kuAsTVzvnLlpTC+hQrqg5SKZZHvmHlN2azpVW9sNZZK2JZ2vhc/Us+VuYNCfNRKc1JBno7TTpI3r+S2Z5Swg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7158.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(346002)(39860400002)(396003)(376002)(136003)(451199021)(316002)(83380400001)(91956017)(71200400001)(6512007)(478600001)(6916009)(186003)(2616005)(26005)(54906003)(53546011)(6506007)(6486002)(76116006)(36756003)(122000001)(7416002)(2906002)(38100700002)(33656002)(5660300002)(4326008)(8936002)(66476007)(66446008)(8676002)(66556008)(66946007)(64756008)(41300700001)(38070700005)(86362001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: multipart/alternative;
	boundary="_000_675F9B3117204AD3932A67D98FDDF7DEarmcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6436
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fd986f87-4cfd-4b50-359b-08db4cac4509
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	IwS7ChGoMOHUo4PDaCMqwJlpC/ITlaB37nejg+7k1xvO8SQW87GcC398MLQAMO0FT6y5E8BE0AfV3roJSBPuK9mLU7q7PWo7KiiohrLSKKyieqPuzDZWJ77VSQSsNxkNHNmdmJj5mD4B4+julM9+u9mrlRgmGgpY/FVP5r9DTaI4rWG2Vm8G+MLpRTe6x4i5hlZ9YCqbXAZLZoi7YtnXmYri7JQym+RmxW1CRVLuFyj3L7TkmH0n0oToqdfwHpHxeuZF9ofMeCA1rXHthMx2kUk43QodP4bRG0KUtn1GUuBYIFVoMY2pTKDa5ihwz7cRqHkxzCZlbKL0oH8vYoHR6XU6NPqBSFOq+c7qIvlgRBE6L7zUKF+zWarXWtDUPMm1yrR1mnI0VnpUF5a4PejhqNht0SvVWEg/gawXVgejKPbcX/9O97dWjHmZcv2NmsIrpO1tBic6XR9yWrKq5EttnRXtaqi/MkaTIFSGTSSfDwH5P72Wn8RYxry2/sTHSXIh7m/Njvxv20uujdfcNJTvGLplkj7hJmzuXjndUfGrMQflkNrpfiYp0//7kHhwVL3y4T6vlkaOUG4m7z1e0yhiAlGAJHA59kx9JYHxM+d2AzEcBPXKyiahlOyKeuN7PkPqPMFtc3bBLYlWPha76LezPefPuMNbNXVcQFin6v5HFB3SYNreN4q88/q2wnSiR76pSm657Q0BVOO+pX78L03FQb+gZgmwr4C+2Ls7/Ysuroo=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(376002)(39860400002)(346002)(451199021)(46966006)(40470700004)(36840700001)(45080400002)(54906003)(186003)(478600001)(6506007)(2616005)(6512007)(53546011)(336012)(26005)(36860700001)(6486002)(47076005)(4326008)(70206006)(70586007)(41300700001)(316002)(34020700004)(83380400001)(6862004)(5660300002)(8676002)(8936002)(82740400003)(81166007)(2906002)(40460700003)(356005)(40480700001)(36756003)(33656002)(86362001)(82310400005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 14:31:46.5043
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5dc6f856-30cb-4b91-6906-08db4cac4a24
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8299

--_000_675F9B3117204AD3932A67D98FDDF7DEarmcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi Ayan,

On 4 May 2023, at 8:37 am, Michal Orzel <michal.orzel@amd.com> wrote:



On 28/04/2023 19:55, Ayan Kumar Halder wrote:


Per ARM IHI 0062D.c ID070116 (SMMU 2.0 spec), section 17.3.9 (page 17-360),
SMMU_CBn_TTBR0 is a 64-bit register. Thus, one can use
writeq_relaxed_non_atomic() to write to it instead of invoking
writel_relaxed() twice, once for the lower half and once for the upper
half of the register.

This also helps us, as p2maddr is 'paddr_t' (which may be u32 in future).
Thus, one can assign p2maddr to a 64-bit variable and do the bit
manipulation on it to generate the value for SMMU_CBn_TTBR0.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from -

v1 - 1. Extracted the patch from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
Use writeq_relaxed_non_atomic() to write u64 register in a non-atomic
fashion.

v2 - 1. Added R-b.

v3 - 1. No changes.

v4 - 1. Reordered the R-b. No further changes.
(This patch can be committed independent of the series).

v5 - Used 'uint64_t' instead of u64. As the change looked trivial to me, I
retained the R-b.

xen/drivers/passthrough/arm/smmu.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 79281075ba..fb8bef5f69 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -499,8 +499,7 @@ enum arm_smmu_s2cr_privcfg {
#define ARM_SMMU_CB_SCTLR              0x0
#define ARM_SMMU_CB_RESUME             0x8
#define ARM_SMMU_CB_TTBCR2             0x10
-#define ARM_SMMU_CB_TTBR0_LO           0x20
-#define ARM_SMMU_CB_TTBR0_HI           0x24
+#define ARM_SMMU_CB_TTBR0              0x20
#define ARM_SMMU_CB_TTBCR              0x30
#define ARM_SMMU_CB_S1_MAIR0           0x38
#define ARM_SMMU_CB_FSR                        0x58
@@ -1083,6 +1082,7 @@ static void arm_smmu_flush_pgtable(struct arm_smmu_device *smmu, void *addr,
static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
{
       u32 reg;
+       uint64_t reg64;
       bool stage1;
       struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
       struct arm_smmu_device *smmu = smmu_domain->smmu;
@@ -1177,12 +1177,13 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
       dev_notice(smmu->dev, "d%u: p2maddr 0x%"PRIpaddr"\n",
                  smmu_domain->cfg.domain->domain_id, p2maddr);

-       reg = (p2maddr & ((1ULL << 32) - 1));
-       writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_LO);
-       reg = (p2maddr >> 32);
+       reg64 = p2maddr;
+
       if (stage1)
-               reg |= ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT;
-       writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_HI);
+               reg64 |= (((uint64_t) (ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT))
+                        << 32);
I think << should be aligned to the second '(' above.

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

With Michal's comment addressed:
Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul



--_000_675F9B3117204AD3932A67D98FDDF7DEarmcom_--


From xen-devel-bounces@lists.xenproject.org Thu May 04 14:35:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:35:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529821.824650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pua3e-0000Fx-FR; Thu, 04 May 2023 14:35:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529821.824650; Thu, 04 May 2023 14:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pua3e-0000Fq-CZ; Thu, 04 May 2023 14:35:46 +0000
Received: by outflank-mailman (input) for mailman id 529821;
 Thu, 04 May 2023 14:35:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pua3c-0000FR-E6
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:35:44 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2071.outbound.protection.outlook.com [40.107.7.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f2f891dc-ea88-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 16:35:43 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9367.eurprd04.prod.outlook.com (2603:10a6:102:2aa::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 14:35:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 14:35:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2f891dc-ea88-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RjPoXI9peTov5mIL79eiUWZc1JUtY2i2QVBYqiuu90q9TPd+nZAdUYwikXJOSU+pFGAtKqQFPC5Rr3MtuLkANtp0QoNvS7CY4wDvHgLkIWn6xvO6MFs/09Ptniz3dq7KA1DEKmelsfCPO2Bl0ovmyeUQ7gzlaQ/o02PrYQz9vK7DxTLk7/gExbkqEZZ18R29SqftqG2oEzQGdl/5ENeA4AsUsKriMXdD8IJrozB8GZXIuyIWcU8cQ9i2a2eclj8h7ttcVdBCvFgOO5gNoXu222IQ9j7AJEpxMrWQfJnV1Y+EMuBMSaDhDOpj1Am/la2UZcEQzjZ+qGgIyrf9EZv5mg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dB4/rhLL5Ist8rEa+N+ZqWBPQdVApQyPPYPHHJTga2M=;
 b=XzkBiQYeoHkueUkEo3kTIZAdf/Ggh0K/TZmbKid/IJMYy6SNzmXxmFCg3LgQyWgT9mB21H8icy5Pz9UxMNbd7V/slYu+RC3viwCPHbVsSzgdUZ5Pf79MPS8amWWfOOHFYPGSB20nd6Fdo2pHF1OoTEoxWlVesfBXPBPMQDBGR/vvmb7/uNGhN4atTgJvZ15kpxe8nuq1hk7bIYvfRwBnRR6zZzCA8c+MfdLum5up3f+U9UUktw7fQ0MI9QuRNlQBCBaN6JTjjhOMyWaNFdfH44Ymrb14opUn5LCGPPt581GXnBsHEdAhcuPuH8VEsbVebEoPZxeUXoMu2jku+Gd4Dw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dB4/rhLL5Ist8rEa+N+ZqWBPQdVApQyPPYPHHJTga2M=;
 b=a5/OHWWAKR4pBPb4igWUG6XZ9kEDC17HKYF8G7QaJwWj7SjWhih8uBvhruirwNvrm8M1ve2k2B6N7sjOZoMbVH5Hya3LdW4etPMSlP/vKEjSvi3oiBpZ9gjgUwuQ4ke3W3n/b4TuHckYm4X3oRh80mag0Qe0hiQQdQEu+zZYBTZpkgvAeFZYg7d5JOfW0mMZbnUN11PHZW41sJhMMxZbkn9c6ZKb4R9HuAxOhC8o5WDMyTN28yMr2IqmDvjIXrD0Jm7aw6t5Bl3wvn8lAZn/QGrPVEjHWBNZyfbnmvrLQZs4Mu2p68bPJtxFxAmyL62YXWu4tRxUMZ5P1FESPQajnA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2864bf57-88cd-6fce-2d38-6f3a31abb440@suse.com>
Date: Thu, 4 May 2023 16:35:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 05/14 RESEND] xenpm: Change get-cpufreq-para output for
 internal
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-6-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501193034.88575-6-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0104.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9367:EE_
X-MS-Office365-Filtering-Correlation-Id: d99b4171-58ba-4a10-0e49-08db4cacc5ed
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	k4RBb+C4x0vBffGUfrekkmDs6ZdtCDyBKZYMCeFmiJGek00NoTfx6OBZVmZocdkE9JwGg2ROFeIKqOqHW8/GC+4VxQ8Yc4oYcB/h1rMp8Ix3owyZFmQhtLWPa+65O6wCjbdx4a2OIJmXLN/odCtEdCh7+4Q9GnCkD9uAnvYxWRajMCMInwm1t+8cuGEeDrxbFRMX2HVQphCvuehLvWEoGPHsD5x+oY1ZFY0hZiy1Aa2d0uHHTJC7DziX9hQvGBYHRMArcQ1Q1TMotigCWcfQCkkhYmTced7pdK06b0FpBYr6UmD5pndtQdbNBN+qQdH6sGpC6TshxycID1IY/HmnaNbdqBzqolwPEdFPoPKExQW+FSdsEuRUoSYCUXCgfeNeyMPSyWDLniQzWQ461idLzDvrnnyw8UuSsRnijRHJqApu1h4MRivs4hf81jQbsPlxRwjWoe2kLu69TLISU4DJZFaZLFIXsuBWIVre5d0R7CSXpxfrxzOZN9oVzenwaI4vsHc+oK6oLm2NNy3TcDc9VBP/8gOjs0HWRk67uvbiWXvanL87gvvPz2VgvazEyqsmRjkPRhQiqH4H5Np6CI42CiaAu5rmW7qRsx5eKnOYySioDOJAZ44kD2LtFLpTdg48lvakjLE4IaV8f6qY86pbpA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(376002)(366004)(346002)(396003)(136003)(451199021)(31696002)(8676002)(6916009)(316002)(86362001)(38100700002)(66476007)(66556008)(66946007)(478600001)(4326008)(54906003)(36756003)(41300700001)(8936002)(6486002)(2616005)(2906002)(5660300002)(6506007)(6512007)(31686004)(53546011)(26005)(186003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Zk1sV21ROGtrdHp6WDJyNWdzN21waUVoVkpaYkxsS1lnQTF5cmt0bE14YzJm?=
 =?utf-8?B?c2VrVmt4cWIyK3o2YzhUUWtNTUFtSnNCUXJLMmZEWFdCTGxMWWNoMWFsTklP?=
 =?utf-8?B?ZkdjblBhYnQxK3ZhNXN6NTdWVGErRTh0ZmlpSDVCcUFCKzJYVFVpakRIbEIx?=
 =?utf-8?B?ZTk3YnFXQitHNm5zZ0t0WjFGQzFEcTN1eXNOQ0pqVGdSLzg4YWMrSEFkQTBx?=
 =?utf-8?B?bFBJdTZBRnBZYThHMG9uZ2tQYjE0V0x0d0NXNEZxWlNZdTQwMXc4YVBFV25E?=
 =?utf-8?B?V0Q5UnBaNnJCVXFXTDh2TWFodjRoaGpqa2k3cWI0T0Zza2hlRDZsc1FmbEhi?=
 =?utf-8?B?SE8yV3U1OUVIT1ErMTBqU29iUWtScW5qcEhsdEdtc1k4UDhlQVhkOWN0VFlJ?=
 =?utf-8?B?T0cwSXRZckJWdmkvM0pyNGtEYXNqWlZvdFV2TjNUM1hxVlVBVFhzN1FMbngw?=
 =?utf-8?B?c3EzcEw4ZGpFejNmNUtFWUZLc1VpWU9IZndoeXRUSVh3bTRhQmpseSt6cWFG?=
 =?utf-8?B?alZuL0dRYmlyU1ZxSWV0UFBpSlpGY0FpZnoxMEdjY2NneTIrdHBnNFpRQ3lW?=
 =?utf-8?B?bTZxQVVxbkhkRWlrT0gyYVdnb0xEZmw3bkVkN0JhU2dVQXJPOXNIK1h0VXkv?=
 =?utf-8?B?UGphUUcxYmxyVWFKeWdoZS9ybWNMMW5pb3prOUVqTnkzNjJBcnN2VTJqN3Ns?=
 =?utf-8?B?NTRQR1VQdVBIaUtOUVM4RU5DRXlvR2YzTkhINDE4bHBPYzJqNlNJeHkwSzhp?=
 =?utf-8?B?aU9QS1puaWJJSk5PV2hrckVzbFRNZk9GTjZyVzMyVEVTeEFJa1NZUkRNQ3Ar?=
 =?utf-8?B?VHpwUlBtTm1kV3N0WHBDS1UwWUNmNzRFNkRZV0J5ZFpIU2lsdFFHWW1LYUE2?=
 =?utf-8?B?Y2VSS0g5bHdDcnd1UFdvYXlsMWFyM3Q3Y1dhaEJIcW5NVnppWjFhRzIwOWpV?=
 =?utf-8?B?bVlXckpKSTVzQVdvM2ErQU4yaDFWM0tPZFR6T0JSZ2s5VmtHSW5FeGcvMlda?=
 =?utf-8?B?V21GMkZQNndwYVRaSU10NTFGSXRtR0d6SGd6TUpSMEVmK1dQcWRlMlpKZGND?=
 =?utf-8?B?N3JyeUUxM1hxK1M0WE4rQWF3ajVYNlpNRkdLVnBuTzZRTXNoR2VFYVd1RmdS?=
 =?utf-8?B?TzcxeWpSZDkyWjUyZVFxS2VCUy9pRTgzaWNDT3JNNSthaXBRb1FLTXVoMkdx?=
 =?utf-8?B?REx2M3NUU1IxYkw0Q04yQldpSWFXVVRscmdOMHNBL3RiSnNXekltZ3p0bStL?=
 =?utf-8?B?RitlNnBTcGVDTGtnR1NZMHFOaGd1NVdEOFRiUmEwL0VvcUVwTkJjdXFHYWVa?=
 =?utf-8?B?ajZydWIwTkMrYkZSMjl4QXhIak9WRjhtQnYzUDVzbDBwMlNVRDFDeUU1K3VM?=
 =?utf-8?B?YVZVR0Q0bGVCeFYwOXJLV0R0cDN5WEZXVkVUb1Vkd1BOSTBzS044N0FWS3pW?=
 =?utf-8?B?WnVyOVg5QVhyZWs4cmI1anh2OStwVXFMaDVHZzVGbWhQOUYvejFTZC93RXBs?=
 =?utf-8?B?aHgwZDdqS2I1K24yWktvZXlHV3NoWUIveUY0WE42bnRGanhMcTdaU2lzaWFv?=
 =?utf-8?B?NDg0YUFjMU9uMnFEenNlckZBaUppTzVaTG1kMFFwNmdNaGlCRThKVUltWnUr?=
 =?utf-8?B?TkxkTWpNTG1oWE1NV0l1c2hvNmlNRHNUOURTbGhEVWRCcEUxOVgxME8xNlNW?=
 =?utf-8?B?TSticTZHekNVVFZKSVdrenVEMEdlUjNjMWkrUFUxMEhqRVdXWVAzRDREZVdq?=
 =?utf-8?B?NWpFNklzQmJDVjUrVkl3VFIrMmVwbHkxdFMwZWwrR1JQeVNRcEp2aWF0QUcx?=
 =?utf-8?B?NnN5bWhVRFNIWW51LzdULzdHaXFqK1Z6aHdNMklyazdjcVRuNSttdXFubFgr?=
 =?utf-8?B?bTEweFhMbkkvMjdvMTU3N3pwSWIzVHdodFNXVmVJM3doN1VEaHl3SDBKNGkw?=
 =?utf-8?B?ZUQyYnJiTGo3VkM5aUVXNHZnbEYxM3Q5RXRNc2MwdzEyOFVSRW9HZWRyUmNL?=
 =?utf-8?B?UkZ3VjVucUJtay9uRERqZkc3eEFwY1BZSW1vNHJySTFMbytvNFJTTXZwNi92?=
 =?utf-8?B?enNuQjJidzc0cFRaVGM4eEpMckFjQkpRQnBXRmtiTkZRamYxWDJLWTZBcFJi?=
 =?utf-8?Q?kGEd2rFOfl+yHXNUXcBlwXXh/?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d99b4171-58ba-4a10-0e49-08db4cacc5ed
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 14:35:14.3773
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NufDsvLtqJR/cCqnK6jFtVOqMu8u+9dWjfN++GSIA1iW48hyTOObSJ/Wk/GVfiTMhhqM79k1FxbQMwL6+djkRA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9367

On 01.05.2023 21:30, Jason Andryuk wrote:
> When using HWP, some of the returned data is not applicable.  In that
> case, we should just omit it to avoid confusing the user.  So switch to
> printing the base and turbo frequencies since those are relevant to HWP.
> Similarly, stop printing the CPU frequencies since those do not apply.

It vaguely feels like I have asked this before: Can you point me at a
place in the SDM where it is said that CPUID 0x16's "Maximum Frequency"
is the turbo frequency? Without such a reference I feel a little uneasy
with ...

> @@ -720,10 +721,15 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
>          printf(" %d", p_cpufreq->affected_cpus[i]);
>      printf("\n");
>  
> -    printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
> -           p_cpufreq->cpuinfo_max_freq,
> -           p_cpufreq->cpuinfo_min_freq,
> -           p_cpufreq->cpuinfo_cur_freq);
> +    if ( internal )
> +        printf("cpuinfo frequency    : base [%u] turbo [%u]\n",
> +               p_cpufreq->cpuinfo_min_freq,
> +               p_cpufreq->cpuinfo_max_freq);

... calling it "turbo" (and not "max") here.

Jan

> +    else
> +        printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
> +               p_cpufreq->cpuinfo_max_freq,
> +               p_cpufreq->cpuinfo_min_freq,
> +               p_cpufreq->cpuinfo_cur_freq);
>  
>      printf("scaling_driver       : %s\n", p_cpufreq->scaling_driver);
>  



From xen-devel-bounces@lists.xenproject.org Thu May 04 14:50:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:50:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529825.824659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puaHE-00020t-L9; Thu, 04 May 2023 14:49:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529825.824659; Thu, 04 May 2023 14:49:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puaHE-00020m-IZ; Thu, 04 May 2023 14:49:48 +0000
Received: by outflank-mailman (input) for mailman id 529825;
 Thu, 04 May 2023 14:49:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYMk=AZ=citrix.com=prvs=481980579=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puaHD-00020b-10
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:49:47 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7d01f25-ea8a-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 16:49:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7d01f25-ea8a-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683211785;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=xqxetoyMuTWWLtSRIRr5hMNplwRXRxZMX6BsncNBeUw=;
  b=OgdmOvzSRIni0ODwPXnPyeHgGGIEiSeQG95HrhkRN6ulyxOlRzT4Ya9l
   KRTx6CHZqPZokC88Ac58Gev+Rf0uYyGE3zSVa58EC5Vqee8vjvIjWBMlD
   pmwnLdgcjaMQB3rOTgHpaRPRFloeOyEXtPnGl/5JOuuvnmygjzN+qZNJb
   o=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106636820
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:06/cj6i7XSutAaAX/mgEg2MKX161SxAKZh0ujC45NGQN5FlHY01je
 htvCGCFP6uDMDf8Kt4ibd7joRkG7MSBmNFnGQI4pCo0EHgb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWj0N8klgZmP6sT4QeCzyB94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQnNQkfUCKZl96oxeO7E9Ruut45K+XkadZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJYw1MYH49tL7Aan3XejtEqFWTtOwv7nLa1gBZ27nxKtvFPNeNQK25m27B/
 jyYpDqpWEly2Nq3z2Tf6020hd70uhj9Aa4vMufmr9pbnwjGroAUIEJPDgbqyRWjsWahX/pPJ
 kpS/TAhxYAp71CiRNT5Wxy+oVaHswQaVt4WFPc1gCmE0qfO6hyVLnQFRDVGLtchsaceRyEu1
 1KPt8PkA3poqrL9YWKQ8PKYoC2/PQARLHQefmkUQA0d+d7hrYovyBXVQb5LEqS4k9n0EjHY2
 C2RoW41gLB7sCIQ//zlpxad2Wvq/8WXCFdvvW07Q15J8CtGebe3Wb6y+WTF6KdAdbubckObu
 1QLzp32AP81MX2dqMCcaLxTTOrxvqzVb2K0bU1HRMd4qWn0k5K3VcUJuWwleh80WioRUWWxC
 HI/rz+983O60JGCSaZsK7y8BM0xpUQLPYS0D6uEBjaij3UYSeNmwM2NTRTKt4wVuBJw+ZzTw
 L/CGSpWMV4UCL580B29TPoH3Lkgy0gWnD2DHsmnl03/ieXPORZ5rIs43KamNLhlvMtoXi2Mm
 zqgCyd640oGC7CvCsUm2YUSMUoLPRAGOHwCkOQOLrTrClM/SAkc5wr5netJl3pNw/4EyY8lP
 xiVBidl9bYIrSebcV/SNCo4N9sCn/9X9BoGAMDlBn7ws1BLXGplxP53m0cfFVX/yNFe8A==
IronPort-HdrOrdr: A9a23:Y6K8Fq5ZXzBy4PPilQPXwDLXdLJyesId70hD6qkQc3FomwKj9/
 xG/c5rsSMc7Qx6ZJhOo7+90cW7L080lqQFhLX5X43SPzUO0VHARO1fBOPZqAEIcBeOlNK1u5
 0AT0B/YueAcGSTj6zBkXWF+wBL+qj5zEiq792usUuEVWtRGsZdB58SMHfhLqVxLjM2Y6YRJd
 6nyedsgSGvQngTZtTTPAh+YwCSz+e77a4PeHQ9dmYa1DU=
X-Talos-CUID: 9a23:NGFFx24OkvYl+VFD6tssy1MtCP4XaG3h0nKBIxaFC1pzUrq5RgrF
X-Talos-MUID: 9a23:UhBTyAp6WsQ39osmVTIezzc8bt5X+6efM2c2l8wqhu/ea3VUCijI2Q==
X-IronPort-AV: E=Sophos;i="5.99,249,1677560400"; 
   d="scan'208";a="106636820"
Date: Thu, 4 May 2023 15:49:27 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Roger Pau
 =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH v2 1/2] build: don't export building_out_of_srctree
Message-ID: <30ef6706-5c01-441c-9c5e-08e578019166@perard>
References: <2e3c8f9d-8007-638a-c88b-21ad2783d8d3@suse.com>
 <9e63c6e5-11cb-9f0e-b33e-0247b17e3785@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <9e63c6e5-11cb-9f0e-b33e-0247b17e3785@suse.com>

On Wed, Mar 29, 2023 at 12:22:16PM +0200, Jan Beulich wrote:
> I don't view a variable of this name as suitable for exporting, the more

We could rename it.

> that it carries entirely redundant information. The reasons for its

The patch replaces building_out_of_srctree with abs_objtree and
abs_srctree, which also carry redundant information. abs_objtree can
probably be replaced by $(CURDIR), and abs_srctree can be
recalculated from $(srctree).
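The check being discussed reduces to comparing two absolute paths; a minimal sketch, with invented paths standing in for the real trees:

```shell
# Hypothetical paths for illustration only.
abs_srctree=/src/xen
abs_objtree=/build/xen   # in make terms this would be $(CURDIR)

# Without the building_out_of_srctree flag, "out of tree" is simply
# the two paths differing.
if [ "$abs_srctree" = "$abs_objtree" ]; then
    echo "building in-tree"
else
    echo "building out-of-tree"
fi
```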

> introduction in Linux commit 051f278e9d81 ("kbuild: replace
> KBUILD_SRCTREE with boolean building_out_of_srctree") also don't apply
> to us. Ditch exporting of the variable, replacing uses suitably.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

This patch feels like obfuscation of the intended test. Instead of
reading a test for "out_of_tree", we now have to guess why the two paths
are being compared.

Anyway, there aren't that many uses outside of the main Makefile, so I
guess the patch is kind of ok:
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>


> For further reasons (besides the similar redundancy aspect) exporting
> VPATH looks also suspicious: Its name being all uppercase makes it a
> "non application private" variable, i.e. it or its (pre-existing) value
> may have a purpose/use elsewhere. And exporting it looks to be easily

This sounds like you don't know what VPATH is for, but I'm pretty sure
you do. If there's a pre-existing value, we just ignore it. If VPATH is
used by a program that our Makefile invokes, and that program is
intended to be used by a build system, then it's a bug in that program
not to know about make's VPATH. So I don't think we need to worry about
it being exported.
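The VPATH behavior described above can be demonstrated in a tiny sandbox, using temporary paths created on the fly (everything here is invented for illustration):

```shell
# make resolves prerequisites through VPATH, so an out-of-tree build
# directory can find sources living elsewhere.
demo=$(mktemp -d)
mkdir -p "$demo/src" "$demo/build"
printf 'hello\n' > "$demo/src/input.txt"

# A Makefile in the build dir pointing VPATH at the source dir.
printf 'VPATH := ../src\nout.txt: input.txt\n\tcp $< $@\n' \
    > "$demo/build/Makefile"

make -s -C "$demo/build"
cat "$demo/build/out.txt"   # prints: hello
```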

> avoidable: Instead of setting it in xen/Makefile, it looks like it could
> be set in xen/scripts/Kbuild.include. Thoughts?

I'd rather not make that change unless there's a real issue with
exporting VPATH. We are more likely to introduce a bug than to avoid
one.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 04 14:53:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529828.824670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puaKH-0003RB-4J; Thu, 04 May 2023 14:52:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529828.824670; Thu, 04 May 2023 14:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puaKH-0003R4-02; Thu, 04 May 2023 14:52:57 +0000
Received: by outflank-mailman (input) for mailman id 529828;
 Thu, 04 May 2023 14:52:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYMk=AZ=citrix.com=prvs=481980579=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puaKF-0003Qw-UA
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:52:55 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 57f8dd66-ea8b-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 16:52:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57f8dd66-ea8b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683211973;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=1+g/k07CXD/gTtcKxuOaPCrh5JfZwNTbDY+vcD/fIfQ=;
  b=Tdj/cqwdkOpKttvQXSCMCNwqSMSgDnmfEqYYQcP0XVrGnxQ5HC438C7w
   473oWEyQszQ6f4htO1qVANC2K6lqMdVgGfYnVeZLMyILrAIwOwbpP9/vj
   GTEofndyZVynLsxfEyPEoOJI/6S0tJjSS1QzqFUmy0rGY1eojtY1inaac
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107896292
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:W7jMc6+hxqWKOb0YEarfDrUDq36TJUtcMsCJ2f8bNWPcYEJGY0x3y
 2EcDDzXb67ZazOneYh+b4jjoxtUvZCAm9JlSFNurSg8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kI+1BjOkGlA5AdmOKgX5Aa2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDkkNq
 Oc5AgImXiqT2dqu3peYEu9ijJ8KeZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAj3/jczpeuRSNqLA++WT7xw1tyrn9dtHSf7RmQO0MxhnI9
 zycrj6R7hcyGO3E1yOc3niVg9CSoAHwRbIoT5e736s/6LGU7jNKU0BHPbehmtG7gEOjX9NUK
 2QP5zEj66M18SSDXtT7GhG1vnOAlhodQMZLVf037hmXzajZ6BrfAXILJhZDddgnuckeVTEsk
 FiTkLvBHidzubeYTXac8La8rj6oPyURa2gYakcsUg8t89Tl5oYpgXrnR85uCqevgvXpGDv7x
 HaBqy1WulkIpZdVjePhpwmB2m/y4MGTFWbZ+zk7QErmsxhYTryOV7a4t2DD89NjdICXRAKo6
 S1sd9el0AweMX2cvHXTEL5VRev5uKnt3C702gA2QcR4n9i50zv6JN0LvmkjTKt8GpxcEQIFd
 nM/ru+4CHV7GHKxJZF6bIuqYyjB5fixTI+1Phw4gzcnX3SQSONk1Hs0DaJo9zqx+HXAaIlmU
 XthTe6iDGwBFYNsxyesSuEW3NcDn35unjqPHcmjl0v2jNJygUKopUotagPSPojVEovdyOkqz
 zqvH5TTkEgOOAEPSiLW7ZQSPTg3EJTPPriv85Y/XrfacmJb9JQJV6e5LUUJJ9Y0wMy4V47go
 hmAZ6Ov4ACj3Sefd1/RNRiOqtrHBP5CkJ7yBgR0VX7A5pTpSd3HAHs3H3fvQYQayQ==
IronPort-HdrOrdr: A9a23:sGUI4qFTmUGMtbzqpLqEHseALOsnbusQ8zAXPiBKJCC9vPb5qy
 nOpoV86faQslwssR4b9uxoVJPvfZqYz+8W3WBzB8bEYOCFghrKEGgK1+KLrwEIWReOk9K1vZ
 0KT0EUMqyVMbEVt6fHCAnTKade/DGEmprY+9s3GR1WPHBXg6IL1XYINu6CeHcGPTWvnfACZe
 ehDswsnUvZRV0nKv6VK1MiROb5q9jChPvdEGI7705O0nj0sduwgoSKaSSl4g==
X-Talos-CUID: =?us-ascii?q?9a23=3AxNJZNWlD+lSdNTL+/9cQJ5HjeBLXOT6A11b6EWS?=
 =?us-ascii?q?IM3Y3cKWLbHzI/q1uyOM7zg=3D=3D?=
X-Talos-MUID: 9a23:CLg+PQuAtaHTuSfn8c2nqAA5Kt5xz66XDX9QrLcNpvm+bgZzEmLI
X-IronPort-AV: E=Sophos;i="5.99,249,1677560400"; 
   d="scan'208";a="107896292"
Date: Thu, 4 May 2023 15:52:41 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Roger Pau
 =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH v2 2/2] build: omit "source" symlink when building
 hypervisor in-tree
Message-ID: <9febc330-9586-4810-880d-f694a2b07b0d@perard>
References: <2e3c8f9d-8007-638a-c88b-21ad2783d8d3@suse.com>
 <0d073a56-b3a0-a64d-6bf4-851c660c6155@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <0d073a56-b3a0-a64d-6bf4-851c660c6155@suse.com>

On Wed, Mar 29, 2023 at 12:23:37PM +0200, Jan Beulich wrote:
> This symlink is getting in the way of using e.g. "find" on the xen/
> subtree, and it isn't really needed when not building out-of-tree:
> The one use that there was can easily be avoided.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 04 14:58:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 14:58:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529831.824679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puaPT-00046D-Lh; Thu, 04 May 2023 14:58:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529831.824679; Thu, 04 May 2023 14:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puaPT-000466-J5; Thu, 04 May 2023 14:58:19 +0000
Received: by outflank-mailman (input) for mailman id 529831;
 Thu, 04 May 2023 14:58:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puaPS-000460-6Y
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 14:58:18 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20610.outbound.protection.outlook.com
 [2a01:111:f400:fe16::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19c9d1f3-ea8c-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 16:58:17 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7862.eurprd04.prod.outlook.com (2603:10a6:20b:2a1::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 14:58:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 14:58:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19c9d1f3-ea8c-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ip5WEMsvhoegzjaaoFKKD8bBekZoUcujBWmMfQy1g+ZHqJ5wxYqT3KJMJldbM/1l+rN59S1IiGNEAJiyce19IWLxozfm8chAxaZSc2luUScLX4mf19vKgf1gh1h97ZycW9Cn6WdDmVCcQ/+GYGQohCSy6Vl8p00Rl2Wy+MEzDGz0vDhJw+Hcyy4UM6i1asqgs+SWAKiVFeuvYRvbwPJ1lGgM3PflwvJPw3W5AnUeR0sWQpfSRpNqFIYcU/XIWVFzzqWUGhNnr47WS0uBzDnOI6LIzGjIffhGSippAnSm1VarCqwsQvcuC+T5E9isGBmWIhkGYb5P6RqvI23FTK0N1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pHns2eARKQnQVqrbuS0ItzjVrOdmKr1yaovuxzQ1684=;
 b=Ud6SN4gJCs+qN0TSG9SASeoalEyOMIbV7EWODlluUUAyOId0IfvlZWWJhkrN36zcjgRxLxMwvYQo6EeMCKX27sNg+Bft1t1I0h3RnA3loJUoG/W9InjqodGCxv46edieoITC0RhQMhDMGDaYWidcgfLPmylFjY+V01/JQSi0psaAOb+cnlIvfOGXqpIHZZU+aJjJw20LaroqJTCuZfcjkiNnGPs/mxsK7CcdFrkGuAootlGE7t1lJwCQFElpT+2J5lWINgEVwVcZ2nTjeg4tKUypwygYcPhSOyleAYBtdigRReK6i5odsn6Vg3u/kTFlwjXFHIPukj3vOMc9lZI3VQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pHns2eARKQnQVqrbuS0ItzjVrOdmKr1yaovuxzQ1684=;
 b=mz0Mn1EmVPG55MBs+EjBs5e5SDRcIIL9sz/HBgurGNMISPNYlbV549Uoan5gT+hn+5MtUT0ivmavSQeljfnxiwKX/LUghvz2kC1yDOn7p/UQb6fzQbHsutKA1ywkzat2SMClfb/ne+KwChW6TPJdQ2C76i8DySB6K/JeMMTgX0QP3QjvZX9esOWeqaCTx0/D1wjzZYpM0WFVvHoPVoeLj8dtF1ksGT10MHQGxG1jF2Bqf57VIDxNP6SnxIMIE/7x9tHH0idA28+kF95pvsFRrbMa099brS3cLZScyC3gb1d9PqRN//2RoCgZfaONE4J53X2oC2A1a8BIT8zed3Lu5g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <85820299-1a45-652d-283b-cca89edcf3b2@suse.com>
Date: Thu, 4 May 2023 16:58:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 1/2] build: don't export building_out_of_srctree
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <2e3c8f9d-8007-638a-c88b-21ad2783d8d3@suse.com>
 <9e63c6e5-11cb-9f0e-b33e-0247b17e3785@suse.com>
 <30ef6706-5c01-441c-9c5e-08e578019166@perard>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <30ef6706-5c01-441c-9c5e-08e578019166@perard>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0073.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7862:EE_
X-MS-Office365-Filtering-Correlation-Id: 27a7c445-87b7-4ef9-9640-08db4caffc93
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	weoX7mbyrYQfZ5jyynF1/ndma/cGqtBJN+D4pr8tJUvnoa/fdC/+wHjfMnRox/TCxlcvHay8GdYPLBBSoE2h2JocCGTcHIBLnD2qI2mTh4lJ+h5YTlybBSdklEU/R8ZpDTNhQLay1FtCUIYQ1kZydCQcdKAZM/nHOZjyZ3L7uuN7+vhNLcpMt7HTbTbN4/7lzsJYs8lOZVhk5JY++a4HyUdm4+MRLZlm6noGxgF1f4PCE/kVo3pL/WOyeUWBk4HckOd7WZfJZ6gS5kRobUieTpLaRRImcWBe7O9IX+Th8g0i6IjL1/jvSJHLvF00rbgdjIBslL2q7+5bVluGdRI7rydjOvkQK207viY77WnJxdmqsXHazypSG7OAneKUL6ekNXxhTdvxeJN7d4qECjqxkLtH1aUM5Lamel9XIQfDYKGb30EvY5wcTJuovDn0Dgk/VMTTuJ5aQOHakmPBO2HMzumVzjoTyJN3dm5h+YA18BJzAbSd0B+i1P8cHFMn1q/2yAZ8n/M9Njb9a+J1Cxt8GU4jBcy3FISaPupHSse3dZ1RlwoORPf0j/zkHUj0VdL16+GCdULI2dOqT4iuGGNI91eI6d6E/okGXhaViTiM8/Tc7v5l+25fiCc70ZWgiSM/DlEdn4HFAuYnsSbGt5xUEw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(39860400002)(376002)(396003)(136003)(366004)(451199021)(5660300002)(8676002)(478600001)(38100700002)(8936002)(54906003)(2616005)(6916009)(6506007)(31696002)(86362001)(66556008)(36756003)(66946007)(66476007)(31686004)(6486002)(4326008)(83380400001)(6512007)(26005)(186003)(2906002)(316002)(53546011)(41300700001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SWFMUnV1b1g1cWg2UlVIWmp2VHJHOUFsa2xkcHJyYzdCK2xqaWpmN28xL1pu?=
 =?utf-8?B?dEpIQUpjNEdnOHgxeldmM1pRVW1kUnJWZ09sdnpkMmRCcGN2TG9mMUhOcEti?=
 =?utf-8?B?OHdVU3RWTDVveC95RWUwdmFsZ1BGNVUwUXdXYWtYK0Fqcmt6NENVMmRnajhq?=
 =?utf-8?B?alJUYk1aUERTYXh2M1ZKOHlld1Z2WkNkYUJucFlpUEhkR3BHaUNlUVZVOWVa?=
 =?utf-8?B?aTdRZ2R2TGQxbDdxK0p5UnorektRTHJRYlQ3SnhjeUNIN2IwZWtjeGdxWHow?=
 =?utf-8?B?NUh4Um1Qc3B6aHVicGRzTDNzdElSOWtDTWNrNGZod040Z0Z0b0ZDM2VTcFRU?=
 =?utf-8?B?NWlEUzJGa0dyeGhOeEtpemVUdUZLTXViWnB2c0dKMStHSHpXOGljY0t3QTNk?=
 =?utf-8?B?SEJZVDk0L1B3U3pWek1TSDByRW0vVFNuTFFwdURXU3hyd2JKWThoaXVBZzNS?=
 =?utf-8?B?bGI2RGtVNGVYeHdMWlR5MHNwdS9hbzZzY3I0TEZlZjNGRHkwMVlXWnlHRTBE?=
 =?utf-8?B?OUhUNmtrSXA5cVFLV2VvNE93RjNKMHhCK0hQWE91a3pDT3FXdnhuSHpqUjFI?=
 =?utf-8?B?bkN4WjQxNXBNeWRDRXVta295ZklJRDRDN3NZTTJPWEphV0dVS042eEtPQ1Jq?=
 =?utf-8?B?bW0wM05xM0xCUGtiL0x5ZUJjTWQ1d25qNlB5VnZIWC9sYy9aQktpbWdGZVdi?=
 =?utf-8?B?NjJESm5EYXF5bFQycnlEa1ZKWFZGL2c3YmVGQ3RJZU1oaTgwQXRSdDRvVDVo?=
 =?utf-8?B?WEhtZnViUGFhdExzYnFRZmp6bVNJRzdnNFFIbnRPa0hVZTN5aTZHR3FNbzJk?=
 =?utf-8?B?d2NaRFJlaE9GZit6LzN2dkZMSEl4bSt5eVVCRGZ6UlMzQ2R3SWRUNGF1L2tX?=
 =?utf-8?B?SnkxbVpldlNtazhnT1BuNmRycU1YaXBXSkYwMmpMSTZPTFdZZWJxcjM5Zk5B?=
 =?utf-8?B?S2tpMUw5bnpKYmFacUp1cDRnQVFtWThlbld0ZGR6c3RzOWVRMzZnV1FzNFlr?=
 =?utf-8?B?cTVUano4dXc0bW9sdEFockhLd0J4ak9kQUxkR3FEK1pWMUtVZ2VTV24wNmxN?=
 =?utf-8?B?RDM5RHE5ckk4VGplbjREZENvdzcwYzc1bXV3YVdvWlBYMUNVUUxaWFJlcVg0?=
 =?utf-8?B?NXpRVGo2SlVFanlBR3o1dXY3dWdUNjlEeFlZYWY1UWhuTTFXK0hiYkVEM01a?=
 =?utf-8?B?dTN1OVBXTmU0eWgrekZKeGRGdFFpK2c4SVl4OHZOUGY4RjFheXFCNi9jcGNC?=
 =?utf-8?B?RENvcVlEblc3cU5lTFJhaWxNWk1CVTAvRVdkZGRndGM1UHkxNDNRSW1IbWQr?=
 =?utf-8?B?eUF4d2gydVBGcUJTcW1YbWRmV05jU1NQR3NZckREV0pNcjEwZUx4eG95Vmhh?=
 =?utf-8?B?eVNlVlRxanlkdktqeEtoR3pWdWdjNzBCVEJxa2N3M1RpbjlGZjhsRk5rajVW?=
 =?utf-8?B?M2x0dWhnOHdqYmRaTE1Rd3hkaUttZXhMM3o3V2hJNzhQN3AyTTM3OWJDNDc4?=
 =?utf-8?B?cG5qTndpWWVXVWgzaDhyK095czZEWkEwYThINyt4bjc1WHJzQm5DRGRHZFRn?=
 =?utf-8?B?VDRSVTZyT0M1ajMvVGoyVEM0c2xONUEvUGpGelZFYjMvY2Z6ekNRNEppeXNs?=
 =?utf-8?B?TXZTYWs1STNEdzR6c3VqMm1vdnh5TEZuRUdOUkxOVHhzT0ZTaWZVcG10Y3Vx?=
 =?utf-8?B?S3Fodm1DOTRMWE51NHFpc3pSakZOLzVBSk1weEZWRmtuNUx2akN3UFIxQ0F6?=
 =?utf-8?B?WHpkM2NPenUySU1ZZWNZeFJwQll2Q2lyMGxsZUgvdzlkN1dVY08vTHE1OUtI?=
 =?utf-8?B?VmloMy8zdmowQ3VsSDJsVi80MVRPYUo2VkNEVDdJOUF4RlljREZ1aG42U0Vu?=
 =?utf-8?B?R0FqWW1zY21qa3JxMGF2ZlNJM29MU0tNQXltd3dNeVF3SXMzbGJjZ0ppbTd0?=
 =?utf-8?B?ZVBGeWlMZ1F5TS9sWURnTm90TFdNU0ZxTVpkK2dUVEV6NjZvNDlMN0taS2J3?=
 =?utf-8?B?TDlYSXBjWno0ZzFKbml5NmlUNDNnaEZmdDRPa2VSa0VOclBySmFoeHA0OU52?=
 =?utf-8?B?Tk0rU2hGbkNGZDEzODQrTDBDTGw4ZC9vNlRMMk1JWHVXYy96dy9xcTJpYnpn?=
 =?utf-8?Q?PH0LfQ5XMOsCqEP1qQoqBAwO9?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 27a7c445-87b7-4ef9-9640-08db4caffc93
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 14:58:14.4854
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: p1/qwZ6jpyws3BNJUtnAqd2UhnCPHMWFFXLaqJHkLV6xqWnaFnrKNCHkjPZP0LLpfPUXlr8MawJqmYX+hV0IyA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7862

On 04.05.2023 16:49, Anthony PERARD wrote:
> On Wed, Mar 29, 2023 at 12:22:16PM +0200, Jan Beulich wrote:
>> I don't view a variable of this name as suitable for exporting, the more
> 
> We could rename it.
> 
>> that it carries entirely redundant information. The reasons for its
> 
> The patch replaces building_out_of_srctree with abs_objtree and
> abs_srctree, which also carry redundant information. abs_objtree can
> probably be replaced by $(CURDIR), and abs_srctree can be
> recalculated from $(srctree).
> 
>> introduction in Linux commit 051f278e9d81 ("kbuild: replace
>> KBUILD_SRCTREE with boolean building_out_of_srctree") also don't apply
>> to us. Ditch exporting of the variable, replacing uses suitably.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> This patch feels like obfuscation of the intended test. Instead of
> reading a test for "out_of_tree", we now have to guess why the two paths
> are being compared.

Hmm, it's quite the other way around for me: I view the variable as
obfuscating, as it hides what it actually expresses (or rather, what
properties it is actually set based on).

> Anyway, there aren't that many uses outside of the main Makefile, so I
> guess the patch is kind of ok:
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks.

>> For further reasons (besides the similar redundancy aspect) exporting
>> VPATH looks also suspicious: Its name being all uppercase makes it a
>> "non application private" variable, i.e. it or its (pre-existing) value
>> may have a purpose/use elsewhere. And exporting it looks to be easily
> 
> This sounds like you don't know what VPATH is for, but I'm pretty sure
> you do. If there's a pre-existing value, we just ignore it. If VPATH is
> used by a program that our Makefile invokes, and that program is
> intended to be used by a build system, then it's a bug in that program
> not to know about make's VPATH. So I don't think we need to worry about
> it being exported.

We may use programs from our build system which aren't aware they might
be used that way. And no matter that I know what VPATH is for, its
all-uppercase name still makes it a "non application private" variable
per the shell spec.

>> avoidable: Instead of setting it in xen/Makefile, it looks like it could
>> be set in xen/scripts/Kbuild.include. Thoughts?
> 
> I'd rather not make that change unless there's a real issue with
> exporting VPATH. We are more likely to introduce a bug than to avoid
> one.

Well, okay, it's surely (hopefully) a highly theoretical consideration
anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 04 16:00:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 16:00:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529836.824690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pubMt-0002js-7T; Thu, 04 May 2023 15:59:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529836.824690; Thu, 04 May 2023 15:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pubMt-0002jl-4X; Thu, 04 May 2023 15:59:43 +0000
Received: by outflank-mailman (input) for mailman id 529836;
 Thu, 04 May 2023 15:59:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AIjt=AZ=bounce.vates.fr=bounce-md_30504962.6453d669.v1-89c15dbd916c406d8613d1770bab4013@srs-se1.protection.inumbo.net>)
 id 1pubMs-0002jd-37
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 15:59:42 +0000
Received: from mail179-5.suw41.mandrillapp.com
 (mail179-5.suw41.mandrillapp.com [198.2.179.5])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id abfc2e99-ea94-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 17:59:38 +0200 (CEST)
Received: from pmta12.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail179-5.suw41.mandrillapp.com (Mailchimp) with ESMTP id 4QBz613lkczG0CJtH
 for <xen-devel@lists.xenproject.org>; Thu,  4 May 2023 15:59:37 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 89c15dbd916c406d8613d1770bab4013; Thu, 04 May 2023 15:59:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abfc2e99-ea94-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1683215977; x=1683476477; i=yann.dirson@vates.fr;
	bh=eLQjcR+VoTsYw7DiK+4fWqcd9Pnvu4j+lbS0pP71BjM=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=mNVDxbBRYGjgYdfHwOxyufsixN4lEv/t4B23lWarB7sRI22grC1upb9KQ75xCfC5R
	 wEaAlu9q7w1ZQPwCPGHJkKbd8sd982Tv7IfVVuIWik0X75GmuTwtOu084nnYnCASjE
	 jAylRwOuaR9IWsTM+N/nTUy9LJPfvypWz8gnmNXA=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1683215977; h=From : 
 Subject : Message-Id : To : Cc : References : In-Reply-To : Date : 
 MIME-Version : Content-Type : Content-Transfer-Encoding : From : 
 Subject : Date : X-Mandrill-User : List-Unsubscribe; 
 bh=eLQjcR+VoTsYw7DiK+4fWqcd9Pnvu4j+lbS0pP71BjM=; 
 b=pS6x3twT9m+jOK0BmMcIyQy6hyN84tmxRble+tp7zQc7rH0HB3hshTe9ZQOcxPc4x7iB9u
 IQG0P/4dlJr6hoUadFdcrWNbEuAppaGZBIHBnZYYRPq6khhOK803LI4hPpRT4YInu68u4t2k
 vWLfIfURuf8qTO9Z8AWaW0O7jWKJM=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?Re:=20xenstored:=20EACCESS=20error=20accessing=20control/feature-balloon=201?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9664e26d-c55d-4176-ac64-680cfb7c5564
X-Bm-Transport-Timestamp: 1683215968933
Message-Id: <f44261a2-df39-f69a-9798-dc1d656e6dac@vates.fr>
To: zithro <slack@rabbit.lu>, xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, =?utf-8?Q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu> <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com> <678df1ff-df18-b063-eda3-2a1aed6d40f8@vates.fr> <50bf6b82-965b-d17c-7c5a-49c703991504@rabbit.lu>
In-Reply-To: <50bf6b82-965b-d17c-7c5a-49c703991504@rabbit.lu>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.89c15dbd916c406d8613d1770bab4013?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230504:md
Date: Thu, 04 May 2023 15:59:37 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit


On 5/4/23 15:58, zithro wrote:
> Hi,
>
> [ snipped for brevity, report summary:
> XAPI daemon in domU tries to write to a non-existent xenstore node in 
> a non-XAPI dom0 ]
>
> On 12 Apr 2023 18:41, Yann Dirson wrote:
>> Is there anything besides XAPI using this node, or the other data
>> published by xe-daemon?
>
> On my vanilla Xen (i.e. non-XAPI), I have no "balloon"-related node in 
> xenstore (neither under the dom0 nor the domU paths, but then I'm not 
> using ballooning in either).
>
>> Maybe the original issue is just that there is no reason to have
>> xe-guest-utilities installed in this setup?
>
> That's what I thought: since I'm not using XAPI, maybe the problem 
> should just be raised with the TrueNAS team? I posted on their forum 
> but got no answer.
> I killed 'xe-daemon' in both setups without any loss of functionality.
>
> My wild guess is that 'xe-daemon', 'xe-update-guest-attrs' and all 
> 'xenstore* commands' are leftovers from when Xen was working as a dom0 
> under FreeBSD (why would a *domU* have them ?).

That would not be correct: the xenstore* tools are useful in guests, 
should you want to read from or write to the XenStore manually or from 
scripts. xe-daemon and xe-update-guest-attrs both come from 
xe-guest-utilities 6.x, which is likewise a domU tool, but one whose 
purpose is to support XAPI running in dom0.
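For instance (a generic sketch of my own, assuming a domU with the 
xenstore CLI tools installed; the data/ path is the conventional 
guest-writable area, and the exact keys shown are illustrative):

```shell
# Inside a guest, relative paths resolve against this domain's
# XenStore home, /local/domain/<domid>

# Read this guest's own name
xenstore-read name

# List the device subtree visible to this guest
xenstore-ls device

# Publish a custom key under data/, which guests may write by default
xenstore-write data/demo "hello from the guest"
xenstore-read data/demo
```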

Best regards,

-- 
Yann Dirson | Vates Platform Developer
XCP-ng & Xen Orchestra - Vates solutions
w: vates.tech | xcp-ng.org | xen-orchestra.com




From xen-devel-bounces@lists.xenproject.org Thu May 04 16:16:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 16:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529839.824699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pubcz-0005qw-Jp; Thu, 04 May 2023 16:16:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529839.824699; Thu, 04 May 2023 16:16:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pubcz-0005qp-H0; Thu, 04 May 2023 16:16:21 +0000
Received: by outflank-mailman (input) for mailman id 529839;
 Thu, 04 May 2023 16:16:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pubcy-0005qj-1t
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 16:16:20 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ff74922b-ea96-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 18:16:17 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6868.eurprd04.prod.outlook.com (2603:10a6:208:18c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Thu, 4 May
 2023 16:16:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::52b2:f58:e19:56ae%2]) with mapi id 15.20.6363.022; Thu, 4 May 2023
 16:16:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff74922b-ea96-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fAPZt5EH1bG/JD1PnkOnmqHz5Bm+krvubRyDUOZujf5iertOEbV/yRHRYrw11JQz3xag1Ksh0ZYUYsGIym1GsilgQg7ZoO+GY0gZa6OPEAGD8/33aOgfD2M7ZYJi3X2qtfFPdOxxDU5reCdlxv7Ez7I8QwrH0jUngUwc2TLkO99KHmJYNXoNY0d7PHAOh10dXyXWfHaCzh/5OddAj4MJApZvnyTWZ+14SBpFVS8cLeSkbBToZckG26e24bzuknV/8fQs6ROkJhMTOG89LLQnpsTHjtrni2uy6yqHHmhsrD7WasLEmulFks5Hf926VIpOMSwY0Ouw932KTr/0nKT7yQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XShP+I010xYQHT+tjQ/t6tTbyUcOlkO/Qe5OKDnivA0=;
 b=jeYMry1EH9lkAX9Z5SgWbby6ko5SSBRl33BTGZKb0PtySXR7qXZpNT8a6niPJOsbq23dIFwy2Pu/GYd1fH/WTASO3AK+H26nDh9RpGVfVsTqYoRCsEwxWkoDBYic2/30wlOWDTGUcS1Ty2OdQ0UdrMRjP+F6OjyAiEYEUVxk7+K7vsFzV8mVXrm1JE0q/sW2EibTalklpNj8Uym6Lnq40ispg6b4T21Wo/2RXQpguO3CQVrnib2dH4EbvVHKpj1uj2UOk96kMvhgzs+Tf+YeAdwIZZBuwnn3OX0PNE0PZNgcAd+3rQhJC/UPpVpAUCme5z9UDDvPp1CJYAWs9DN/iQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XShP+I010xYQHT+tjQ/t6tTbyUcOlkO/Qe5OKDnivA0=;
 b=lzcg3IWpJg/9wzZ3pZugaXML+3og185wg/vNrd0XX5IzuPm0Bl0Hf2rwRSCDQqtsN5dJZBCYe0KEi2wML58WYo+xE2m6SSiJjo7fHdlNqJpBelMbKb5k7p6mRisaprqD17t1rqFFEGJCGqyezRYwiShSuAx/dpaYr1aX5clUwlBTA2YElZYVqlO7UadBYGaSQn4jix+GHHHsXeVmt/0U34GKZzTTtQg4u7r0EVGRAjL6ZBmxDKrga9e8g2jEXXRF36gUtY4w+AuAsbCs0l83HkGiGV/Q68FyeUrpx7tE04u1gGCxbl38wCrDtVI2iDXD69XBvDwkWvhJ0/9brHnMDw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <95467f2c-09ed-d34f-feef-5cd55c60f628@suse.com>
Date: Thu, 4 May 2023 18:16:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] build: shorten macro references
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0117.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6868:EE_
X-MS-Office365-Filtering-Correlation-Id: c9ae8391-1c8a-490f-f2e5-08db4cbae2a1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	z6DzkKvn/98KRYaJl7lCZMBlQ3F2I9Qi0Dlnhy0A6lSH6J7DhsIbTEzZQ9HCYdzKQcWiPEBjSK8idWpnYUERh/0X2fFy2xeGrwJuZyR3aYDZ5YtX6Odoy49TS7P/AX5vpbfEFx2jiRoBOr0bjO8V188FE72aLMN1+dNA2sD6k7bJTgv5UYnbzcDMRg28PC9/1j6WHcIo941wnrjHXr8CP/giGgNIm266v7VLEN1OpGXXD69CNcBD4VCe3BaK4bae/dbdkiOEsJYBp5yi+QUVDbT9lH3gnTVpM2uOgzy2bz0Gsgl3hR0jIJr2hZ0QElBwozg1PN2enO/F2Ggk9sTd7p6NMDSCDymQGS0n35gJNrTrq5Z4nYAn9FGDRdb1lU1f88sk35MXCux6O6z0Sv0mBaFhT+TLd7ucnchHAdfjPPVmpZm04khjrIK35qnUaDt/UoKtNqtjY5RWAAeBerO4Jx77d5WWuA+Kb7hMZO67LsaFVR/Mdtmr9ZPIfHDRYvGVhlg1IcnWn2WULDHC9kQwHXPoO2IBmb3IUyNon1GkMSZ5Ibr2lR3o9XK029bHXwpDnR/BCaCX3bfvru4UcJz7+1/zuv0jRKW410DIuqKclZxlGDQbGez/IAD/i5lRPKToNiTmywVE0rk1/tXYefwEaw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(136003)(346002)(396003)(366004)(39860400002)(451199021)(2616005)(6486002)(26005)(41300700001)(86362001)(36756003)(316002)(4326008)(6916009)(66476007)(66556008)(66946007)(31696002)(478600001)(2906002)(38100700002)(31686004)(186003)(5660300002)(54906003)(7416002)(8936002)(8676002)(6506007)(6512007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?cnJvYnNyRnk4eHdMU1dUalhIZTlsN2pydjgrbHAzVDM0MjU0VGowdFlYcE1r?=
 =?utf-8?B?a3N5RTFHNXVDc0RGTWxyRVhlcTRleTlVRGNLak5WR29QNmFFMWM2cWtFSXJt?=
 =?utf-8?B?SllsUTEzKzFZbUdVY0FBejl3cVBsUGE2Y0c1OVlNUHUrSGZtV2NSWU1ZbDJF?=
 =?utf-8?B?Q05kQitlcmthVHBlZm85eUxsTjdna1lzR3F5amhUTTE1US94YzZzL1JZSDVt?=
 =?utf-8?B?NXVpRE5RUklLcmtPbVVvTDZwWDErWlhCS2tJeVhrR25DMTFPSm9vZG5wSHp0?=
 =?utf-8?B?MW0xZ2xKaWF3bTNoVTQzVC9Ybk5sRUU2Zk43MnIyUjJRYUl6RUpHaVJqY3RU?=
 =?utf-8?B?bzl0d3QxSklwemdTUEFXdWJLbWpWSzJla3QvZjkzU0l6d2FGZXpQN2RxWjNz?=
 =?utf-8?B?YzlVWmlQdldqL3loSlFuOEhaVkxlVlpydjdXeTlnWGFpU1NFVW42bnhRSG44?=
 =?utf-8?B?amVSNVZ3Tkx3U2p5UmxLTEVHV3VoY2R5NHNFL2pmekVJL0xtemVNMk14M3h1?=
 =?utf-8?B?c0FQQkM1dzlJVnB5MER1cUFidWR2bENGZnpIem9ycHBiZWczTjJuQzRvaWVE?=
 =?utf-8?B?UDRyM2tkK3d3R1VuMVk2c1daTFNEaDAxNnRTVlZlTlEvVXJETjdiRjNSWTZH?=
 =?utf-8?B?MXBNU2lCQ1ZTR3NuRjd2Rkp1d0VFbkxseWt4ajhZOCtzcXBOY1k5U1N3NXBQ?=
 =?utf-8?B?SUIrZjE2emhQblpFRlUxMzlydld6TmhMRzlVUzdaOUwyL24xUmJXbEY2WGVI?=
 =?utf-8?B?cHduQXUyTVdpaG5VQlVsTWFDR0JUYnljcjZMRWtYMGpqZU9nRzlCYWpDSk5X?=
 =?utf-8?B?R0VyNlFZaUNZd0JHRFRKUlhvbzlUamI0MzNQZEVlN3FxRUg0NElsT0ZLTk5C?=
 =?utf-8?B?V1U5WGg4S1NGb2FqbHJvWGhBVFNienNSSGU0bUNqQk1IRVNEUjBRaVNSL1FI?=
 =?utf-8?B?K0kwaTczeGVRbVVUTWZkVmt4UnZvQTVjMFNUM3o3MmNaMG9WUW0rMlBxVmhG?=
 =?utf-8?B?dW52SDU5N1NCRWdMaW9LTjJmejNuWXJYS0tQY2xSekJhQldVNGE3YTZVVzV0?=
 =?utf-8?B?R1JmMDJEMkVYUG9KdVo4dWV4UUVKcjM0elQ1WHllQzdkM1FXVnVhNFdNU3Fu?=
 =?utf-8?B?NkRUblBGeG9ON2VzZWUzc1ZBWDFaczVSSllYdGtZY0w1bVZUb2hBUGQxeWQ0?=
 =?utf-8?B?R29Vc0tTRkp0bk8ydk1hMWZaOU1Oc0dJQ3lMRi9JSHNyRndzQ3ZFT2ZBeWtn?=
 =?utf-8?B?Y1dvNW5mNXRLdHpEenB3NDRwSUVQNi9teFUvRlR2OGtQaEw3bytIUkg0TWFv?=
 =?utf-8?B?OXUxUUZPVGJTZjdraENkcnZuMXBxbkxnOTZlZzM4dWdvNlQ3bXhxWEsvRWxN?=
 =?utf-8?B?a1pELyt0UHJzYkJ4ZEhFYjhnN1VpeXA4Nm1UWHc4ekV6cy9BVEQzbTdyZjVt?=
 =?utf-8?B?T0xlUDNYOEZ0VjRSbWIvRUNCWkkwNFpwV0NKMXJyMHhzY2FUblRlMDM5YXl1?=
 =?utf-8?B?bFJZUmNIRFJ1aVdUaWtkZ1g0cjhzRTFva3dteTZkbWtxTHdLc01UVWkvcmtu?=
 =?utf-8?B?dG9raE5KdGVLbjNodGlLSkJuZUt4WWt5YUpBUUh5SG5FdU54aFVsNXRhRXg3?=
 =?utf-8?B?VHhJak1Zd0NHYVZrU3RTOCt1YWJkVzJudWs3bjRmZnVsSnJCT2JQaEFDeXZU?=
 =?utf-8?B?UVRQQUVWaFNuaDUxTkFVdzBEdFdBeHV2dVB1S1g3WFdDSWJrbHBQM3JmMWxS?=
 =?utf-8?B?NzV4Tk1rSjFmRENCam9kSUN0S2dvME44NFhSR0dPSm91cDRIWDV4NEJ0YWlU?=
 =?utf-8?B?WE01S2FvTUxnaFRGMDJEYWxDMjJTcjlZOFoyNjZpVGpRZHl2RjhtV2tmeGhz?=
 =?utf-8?B?WkRRSXYwOENFMDU4MzdaNnlMbFZzNFo1SFRJY3Y0ZndiMFhzay85MThlM0VI?=
 =?utf-8?B?K3ljQ1c0VHVENFR4ODZXSGxidGhCYkh4SThqMy9iWFFTRXRScUFySWFVNEV2?=
 =?utf-8?B?WDBRWis3aWpYMGpCZDFESFhibmZ6M21jOEpyWVJGcFBYSGFyaW9OelhNMTYv?=
 =?utf-8?B?am5VbUl3ekRzVGpGdTliSkh3VTJYZlpiMmlYbG9MMDBBalRhR205NGR3b2Vu?=
 =?utf-8?Q?hxcVA3Aps6FJlVEABrZnAzfkd?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c9ae8391-1c8a-490f-f2e5-08db4cbae2a1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 16:16:15.5234
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MdqUSzeXNVT569WwvUHoZxxMFjEk8BPau19nyl3Whln5glGFvgJWspIdGZAR67DoELwUE3iak1eYAVcxCfNPfQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6868

Presumably through copy-and-paste we have accumulated a number of
instances of $(@D)/$(@F), which is really nothing else than $@. The split
form is only needed when we want to alter one of the two halves, e.g. to
insert a leading . at the beginning of the file name portion of the full
path.
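As a quick illustration (my addition for clarity, not part of the patch
itself), a throwaway Makefile confirms that the two spellings expand
identically for a target containing a directory component:

```shell
# Demo: for target sub/demo.txt, $(@D) is "sub" and $(@F) is "demo.txt",
# so $(@D)/$(@F) re-assembles exactly $@.
tmp=$(mktemp -d)
printf 'sub/demo.txt:\n\t@echo split=$(@D)/$(@F)\n\t@echo plain=$@\n' > "$tmp/Makefile"
out=$(make -s -C "$tmp" sub/demo.txt)
echo "$out"
rm -r "$tmp"
```

Both lines print sub/demo.txt, which is why the patch can collapse the
split form wherever nothing is inserted between the two halves.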

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -104,9 +104,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
 	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
 	    $(@D)/.$(@F).1.o -o $@
-	$(NM) -pa --format=sysv $(@D)/$(@F) \
+	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
-		>$(@D)/$(@F).map
+		>$@.map
 	rm -f $(@D)/.$(@F).[0-9]*
 
 .PHONY: include
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -10,9 +10,9 @@ $(TARGET): $(TARGET)-syms
 
 $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
-	$(NM) -pa --format=sysv $(@D)/$(@F) \
+	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
-		>$(@D)/$(@F).map
+		>$@.map
 
 $(obj)/xen.lds: $(src)/xen.lds.S FORCE
 	$(call if_changed_dep,cpp_lds_S)
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -150,9 +150,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
 	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
 	    $(orphan-handling-y) $(@D)/.$(@F).1.o -o $@
-	$(NM) -pa --format=sysv $(@D)/$(@F) \
+	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
-		>$(@D)/$(@F).map
+		>$@.map
 	rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
 ifeq ($(CONFIG_XEN_IBT),y)
 	$(SHELL) $(srctree)/tools/check-endbr.sh $@
@@ -224,8 +224,9 @@ endif
 	$(MAKE) $(build)=$(@D) .$(@F).1r.o .$(@F).1s.o
 	$(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
 	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
-	$(NM) -pa --format=sysv $(@D)/$(@F) \
-		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
+	$(NM) -pa --format=sysv $@ \
+		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
+		>$@.map
 ifeq ($(CONFIG_DEBUG_INFO),y)
 	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:$(space))$(OBJCOPY) -O elf64-x86-64 $@ $@.elf
 endif


From xen-devel-bounces@lists.xenproject.org Thu May 04 16:54:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 16:54:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529842.824709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pucDr-00021q-FR; Thu, 04 May 2023 16:54:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529842.824709; Thu, 04 May 2023 16:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pucDr-00021j-Cj; Thu, 04 May 2023 16:54:27 +0000
Received: by outflank-mailman (input) for mailman id 529842;
 Thu, 04 May 2023 16:54:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pucDq-00021Z-4Q; Thu, 04 May 2023 16:54:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pucDp-0003gu-Qm; Thu, 04 May 2023 16:54:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pucDp-0007aK-7W; Thu, 04 May 2023 16:54:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pucDp-0006YS-74; Thu, 04 May 2023 16:54:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2VSiSqMJw6OPeqVOVIs9v6QVVvYymxnwbXIlHvPpsNs=; b=nFFBj1uADhiv0lleoGlDlSpjLK
	cZerlqVd6FdnYZK9xPe6PzAr1C2AEa1PrEFczh7lfZVFRtRqq4owwiR4OkSsz0mCuNV+4BFhB8/Bm
	q21h2QtprLuK/7q9PTrkJD/54ioUZ1qvzl8c2dNujWDZNreE/L64NeeJxGzV4EKhuMwU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180524-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180524: regressions - FAIL
X-Osstest-Failures:
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=2f197ab6959b2f2c39c922c3ffbdbf45aa6e8eb6
X-Osstest-Versions-That:
    libvirt=b4f5e6c91b9871173de205f81b51cf06a833fcb1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 May 2023 16:54:25 +0000

flight 180524 libvirt real [real]
flight 180533 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180524/
http://logs.test-lab.xenproject.org/osstest/logs/180533/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180513

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180513
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180513
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180513
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              2f197ab6959b2f2c39c922c3ffbdbf45aa6e8eb6
baseline version:
 libvirt              b4f5e6c91b9871173de205f81b51cf06a833fcb1

Last test of basis   180513  2023-05-03 04:18:50 Z    1 days
Testing same since   180524  2023-05-04 04:18:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2f197ab6959b2f2c39c922c3ffbdbf45aa6e8eb6
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Sun Apr 30 11:07:49 2023 +0200

    meson: Fix qemu_{user,group} defaults for Arch
    
    The current values might have been accurate at the time
    when the logic was introduced, but these days Arch is
    using the same ones as Debian.
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat>


From xen-devel-bounces@lists.xenproject.org Thu May 04 16:57:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 16:57:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529847.824720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pucGL-0002c3-V3; Thu, 04 May 2023 16:57:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529847.824720; Thu, 04 May 2023 16:57:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pucGL-0002bw-Rd; Thu, 04 May 2023 16:57:01 +0000
Received: by outflank-mailman (input) for mailman id 529847;
 Thu, 04 May 2023 16:57:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YwFE=AZ=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pucGK-0002bq-8H
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 16:57:00 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-5-jandryuk@gmail.com>
 <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com>
In-Reply-To: <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 4 May 2023 12:56:44 -0400
Message-ID: <CAKf6xpuzk6vLjbNAHzEmVpq8sDAO8O-cRFVStQxNqyD5ERr+Yg@mail.gmail.com>
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP) driver
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	xen-devel@lists.xenproject.org

On Thu, May 4, 2023 at 9:11 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 01.05.2023 21:30, Jason Andryuk wrote:
> > For cpufreq=xen:hwp, placing the option inside the governor wouldn't
> > work.  Users would have to select the hwp-internal governor to turn off
> > hwp support.
>
> I'm afraid I don't understand this, and you'll find a comment towards
> this further down. Even when ...
>
> >  hwp-internal isn't usable without hwp, and users wouldn't
> > be able to select a different governor.  That doesn't matter while hwp
> > defaults off, but it would if or when hwp defaults to enabled.
>
> ... it starts defaulting to enabled, selecting another governor can
> simply have the side effect of turning off hwp.

I didn't think of that - makes sense.

> > Write to disable the interrupt - the linux pstate driver does this.  We
> > don't use the interrupts, so we can just turn them off.  We aren't ready
> > to handle them, so we don't want any.  Unclear if this is necessary.
> > SDM says it's default disabled.
>
> Definitely better to be on the safe side.
>
> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -499,7 +499,7 @@ If set, force use of the performance counters for oprofile, rather than detecting
> >  available support.
> >
> >  ### cpufreq
> > -> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<maxfreq>][,[<minfreq>][,[verbose]]]]} | dom0-kernel`
> > +> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<hdc>][,[<hwp>]][,[<maxfreq>]][,[<minfreq>]][,[verbose]]]} | dom0-kernel`
>
> Considering you use a special internal governor, the 4 governor alternatives are
> meaningless for hwp. Hence at the command line level recognizing "hwp" as if it
> was another governor name would seem better to me. This would then also get rid
> of one of the two special "no-" prefix parsing cases (which I'm not overly
> happy about).
>
> Even if not done that way I'm puzzled by the way you spell out the interaction
> of "hwp" and "hdc": As you say in the description, "hdc" is meaningful only when
> "hwp" was specified, so even if not merged with the governors group "hwp" should
> come first, and "hdc" ought to be rejected if "hwp" wasn't first specified. (The
> way you've spelled it out it actually looks to be kind of the other way around.)

I placed them in alphabetical order, but, yes, that ordering doesn't make sense.

> Strictly speaking "maxfreq" and "minfreq" also should be objected to when "hwp"
> was specified.
>
> Overall I'm getting the impression that beyond your "verbose" related adjustment
> more is needed, if you're meaning to get things closer to how we parse the
> option (splitting across multiple lines to help see what I mean):
>
> `= none
>  | {{ <boolean> | xen } [:{powersave|performance|ondemand|userspace}
>                           [{,hwp[,hdc]|[,maxfreq=<maxfreq>[,minfreq=<minfreq>]}]
>                           [,verbose]]}
>  | dom0-kernel`
>
> (We're still parsing in a more relaxed way, e.g. minfreq may come ahead of
> maxfreq, but better be more tight in the doc than too relaxed.)
>
> Furthermore while max/min freq don't apply directly, there are still two MSRs
> controlling bounds at the package and logical processor levels.

Well, we only program the logical processor level MSRs, because we
don't have a good enough view of the packages to know when we could
skip writing an MSR.

How about this:
`= none
 | {{ <boolean> | xen } {
[:{powersave|performance|ondemand|userspace}[,maxfreq=<maxfreq>[,minfreq=<minfreq>]]
                        | [:hwp[,hdc]] }
                          [,verbose]]}
 | dom0-kernel`

i.e.:
xen:hwp,hdc

> > --- /dev/null
> > +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> > @@ -0,0 +1,506 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * hwp.c cpufreq driver to run Intel Hardware P-States (HWP)
> > + *
> > + * Copyright (C) 2021 Jason Andryuk <jandryuk@gmail.com>
> > + */
> > +
> > +#include <xen/cpumask.h>
> > +#include <xen/init.h>
> > +#include <xen/param.h>
> > +#include <xen/xmalloc.h>
> > +#include <asm/io.h>
> > +#include <asm/msr.h>
> > +#include <acpi/cpufreq/cpufreq.h>
> > +
> > +static bool feature_hwp;
> > +static bool feature_hwp_notification;
> > +static bool feature_hwp_activity_window;
> > +static bool feature_hwp_energy_perf;
> > +static bool feature_hwp_pkg_level_ctl;
> > +static bool feature_hwp_peci;
> > +
> > +static bool feature_hdc;
>
> Most (all?) of these want to be __ro_after_init, I expect.

I think you are correct.  (This pre-dates __ro_after_init and I didn't
update it.)

> > +__initdata bool opt_cpufreq_hwp = false;
> > +__initdata bool opt_cpufreq_hdc = true;
>
> Nit (style): Please put annotations after the type.
>
> > +#define HWP_ENERGY_PERF_BALANCE         0x80
> > +#define IA32_ENERGY_BIAS_BALANCE        0x7
> > +#define IA32_ENERGY_BIAS_MAX_POWERSAVE  0xf
> > +#define IA32_ENERGY_BIAS_MASK           0xf
> > +
> > +union hwp_request
> > +{
> > +    struct
> > +    {
> > +        uint64_t min_perf:8;
> > +        uint64_t max_perf:8;
> > +        uint64_t desired:8;
> > +        uint64_t energy_perf:8;
> > +        uint64_t activity_window:10;
> > +        uint64_t package_control:1;
> > +        uint64_t reserved:16;
> > +        uint64_t activity_window_valid:1;
> > +        uint64_t energy_perf_valid:1;
> > +        uint64_t desired_valid:1;
> > +        uint64_t max_perf_valid:1;
> > +        uint64_t min_perf_valid:1;
>
> The boolean fields here would probably better be of type "bool". I also
> don't see the need for using uint64_t for any of the other fields -
> unsigned int will be quite fine, I think. Only ...

This is the hardware MSR format, so it seemed natural to use uint64_t
and the bit fields.  To me, `uint64_t foo:<bits>;` better shows that we
are dividing up a single hardware register using bit fields.
Honestly, I'm unfamiliar with the finer points of laying out bitfields
with bool.  And the 10 bits of activity window throw off aligning to
standard types.

This seems to have the correct layout:
struct
{
        unsigned char min_perf;
        unsigned char max_perf;
        unsigned char desired;
        unsigned char energy_perf;
        unsigned int activity_window:10;
        bool package_control:1;
        unsigned int reserved:16;
        bool activity_window_valid:1;
        bool energy_perf_valid:1;
        bool desired_valid:1;
        bool max_perf_valid:1;
        bool min_perf_valid:1;
};

Or would you prefer the first 8 bit ones to be unsigned int
min_perf:8?  The bools seem to need :1, which doesn't seem to be
gaining us much, IMO.  I'd strongly prefer just keeping it as I have
it, but I will change it however you like.

> > +    };
> > +    uint64_t raw;
>
> ... this wants to keep this type. (Same again below then.)

For "below", do you want:

        struct
        {
            unsigned char highest;
            unsigned char guaranteed;
            unsigned char most_efficient;
            unsigned char lowest;
            unsigned int reserved;
        } hw;
?

> > +bool __init hwp_available(void)
> > +{
> > +    unsigned int eax, ecx, unused;
> > +    bool use_hwp;
> > +
> > +    if ( boot_cpu_data.cpuid_level < CPUID_PM_LEAF )
> > +    {
> > +        hwp_verbose("cpuid_level (%#x) lacks HWP support\n",
> > +                    boot_cpu_data.cpuid_level);
> > +        return false;
> > +    }
> > +
> > +    if ( boot_cpu_data.cpuid_level < 0x16 )
> > +    {
> > +        hwp_info("HWP disabled: cpuid_level %#x < 0x16 lacks CPU freq info\n",
> > +                 boot_cpu_data.cpuid_level);
> > +        return false;
> > +    }
> > +
> > +    cpuid(CPUID_PM_LEAF, &eax, &unused, &ecx, &unused);
> > +
> > +    if ( !(eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE) &&
> > +         !(ecx & CPUID6_ECX_IA32_ENERGY_PERF_BIAS) )
> > +    {
> > +        hwp_verbose("HWP disabled: No energy/performance preference available");
> > +        return false;
> > +    }
> > +
> > +    feature_hwp                 = eax & CPUID6_EAX_HWP;
> > +    feature_hwp_notification    = eax & CPUID6_EAX_HWP_NOTIFICATION;
> > +    feature_hwp_activity_window = eax & CPUID6_EAX_HWP_ACTIVITY_WINDOW;
> > +    feature_hwp_energy_perf     =
> > +        eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE;
> > +    feature_hwp_pkg_level_ctl   = eax & CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST;
> > +    feature_hwp_peci            = eax & CPUID6_EAX_HWP_PECI;
> > +
> > +    hwp_verbose("HWP: %d notify: %d act-window: %d energy-perf: %d pkg-level: %d peci: %d\n",
> > +                feature_hwp, feature_hwp_notification,
> > +                feature_hwp_activity_window, feature_hwp_energy_perf,
> > +                feature_hwp_pkg_level_ctl, feature_hwp_peci);
> > +
> > +    if ( !feature_hwp )
> > +        return false;
> > +
> > +    feature_hdc = eax & CPUID6_EAX_HDC;
> > +
> > +    hwp_verbose("HWP: Hardware Duty Cycling (HDC) %ssupported%s\n",
> > +                feature_hdc ? "" : "not ",
> > +                feature_hdc ? opt_cpufreq_hdc ? ", enabled" : ", disabled"
> > +                            : "");
> > +
> > +    feature_hdc = feature_hdc && opt_cpufreq_hdc;
> > +
> > +    hwp_verbose("HWP: HW_FEEDBACK %ssupported\n",
> > +                (eax & CPUID6_EAX_HW_FEEDBACK) ? "" : "not ");
>
> You report this, but you don't really use it?

Correct.  I needed to know what capabilities my processors have.

feature_hwp_pkg_level_ctl and feature_hwp_peci can also be dropped
since they aren't used beyond printing their values.  I'd still lean
toward keeping their printing under verbose since otherwise there
isn't a convenient way to know if they are available without
recompiling.

> > +    use_hwp = feature_hwp && opt_cpufreq_hwp;
>
> There's a lot of output you may produce until you make it here, which is
> largely meaningless when opt_cpufreq_hwp == false. Is there a reason you
> don't check that flag first thing in the function?

opt_cpufreq_hwp can be checked earlier for an early exit, yes.  The
code came about during development to print all the HWP capabilities
even if it wasn't enabled.  But eliminating it now makes sense.

> > +static void hdc_set_pkg_hdc_ctl(bool val)
> > +{
> > +    uint64_t msr;
> > +
> > +    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
> > +    {
> > +        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");
> > +
> > +        return;
> > +    }
> > +
> > +    if ( val )
> > +        msr |= IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
> > +    else
> > +        msr &= ~IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
> > +
> > +    if ( wrmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
> > +        hwp_err("error wrmsr_safe(MSR_IA32_PKG_HDC_CTL): %016lx\n", msr);
> > +}
> > +
> > +static void hdc_set_pm_ctl1(bool val)
> > +{
> > +    uint64_t msr;
> > +
> > +    if ( rdmsr_safe(MSR_IA32_PM_CTL1, msr) )
> > +    {
> > +        hwp_err("error rdmsr_safe(MSR_IA32_PM_CTL1)\n");
> > +
> > +        return;
> > +    }
> > +
> > +    if ( val )
> > +        msr |= IA32_PM_CTL1_HDC_ALLOW_BLOCK;
> > +    else
> > +        msr &= ~IA32_PM_CTL1_HDC_ALLOW_BLOCK;
> > +
> > +    if ( wrmsr_safe(MSR_IA32_PM_CTL1, msr) )
> > +        hwp_err("error wrmsr_safe(MSR_IA32_PM_CTL1): %016lx\n", msr);
> > +}
>
> For both functions: Elsewhere you also log the affected CPU in hwp_err().
> Without this I'm not convinced the logging here is very useful. In fact I
> wonder whether hwp_err() shouldn't take care of this and/or the "error"
> part of the string literal. A HWP: prefix might also not be bad ...

Sounds good.  I'll investigate.

> > +static void hwp_get_cpu_speeds(struct cpufreq_policy *policy)
> > +{
> > +    uint32_t base_khz, max_khz, bus_khz, edx;
> > +
> > +    cpuid(0x16, &base_khz, &max_khz, &bus_khz, &edx);
> > +
> > +    /* aperf/mperf scales base. */
> > +    policy->cpuinfo.perf_freq = base_khz * 1000;
> > +    policy->cpuinfo.min_freq = base_khz * 1000;
> > +    policy->cpuinfo.max_freq = max_khz * 1000;
> > +    policy->min = base_khz * 1000;
> > +    policy->max = max_khz * 1000;
> > +    policy->cur = 0;
>
> What is the comment intended to be telling me here?

I was surprised to discover that I needed to pass in the base
frequency for proper aperf/mperf scaling, so the comment seemed
relevant at the time, as this is the opposite of ACPI cpufreq.  It can
be dropped now.

> > +static void cf_check hwp_init_msrs(void *info)
> > +{
> > +    struct cpufreq_policy *policy = info;
> > +    struct hwp_drv_data *data = this_cpu(hwp_drv_data);
> > +    uint64_t val;
> > +
> > +    /*
> > +     * Package level MSR, but we don't have a good idea of packages here, so
> > +     * just do it every time.
> > +     */
> > +    if ( rdmsr_safe(MSR_IA32_PM_ENABLE, val) )
> > +    {
> > +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_PM_ENABLE)\n", policy->cpu);
> > +        data->curr_req.raw = -1;
> > +        return;
> > +    }
> > +
> > +    /* Ensure we don't generate interrupts */
> > +    if ( feature_hwp_notification )
> > +        wrmsr_safe(MSR_IA32_HWP_INTERRUPT, 0);
> > +
> > +    hwp_verbose("CPU%u: MSR_IA32_PM_ENABLE: %016lx\n", policy->cpu, val);
> > +    if ( !(val & IA32_PM_ENABLE_HWP_ENABLE) )
> > +    {
> > +        val |= IA32_PM_ENABLE_HWP_ENABLE;
> > +        if ( wrmsr_safe(MSR_IA32_PM_ENABLE, val) )
> > +        {
> > +            hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_PM_ENABLE, %lx)\n",
> > +                    policy->cpu, val);
> > +            data->curr_req.raw = -1;
> > +            return;
> > +        }
> > +    }
> > +
> > +    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
> > +    {
> > +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
> > +                policy->cpu);
> > +        data->curr_req.raw = -1;
> > +        return;
> > +    }
> > +
> > +    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
> > +    {
> > +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
> > +        data->curr_req.raw = -1;
> > +        return;
> > +    }
> > +
> > +    if ( !feature_hwp_energy_perf ) {
>
> Nit: Brace placement.
>
> > +        if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, val) )
> > +        {
> > +            hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
> > +            data->curr_req.raw = -1;
> > +
> > +            return;
> > +        }
> > +
> > +        data->energy_perf = val & IA32_ENERGY_BIAS_MASK;
> > +    }
>
> In order to not need to undo the "enable" you've already done, maybe that
> should move down here?

HWP needs to be enabled before the Capabilities and Request MSRs can
be read.  Reading them shouldn't fail, but it seems safer to use
rdmsr_safe in case something goes wrong.

I think I will rip out ENERGY_PERF_BIAS.  The Linux driver doesn't
support it.  I thought it might be necessary, but my test machines
don't need it.  The Qubes report with Skylake wasn't using
ENERGY_PERF_BIAS, and Skylake introduced HWP.  The set of machines
needing it is therefore probably small and old, so it probably isn't
worth supporting.

> With all the sanity checking you do here, maybe
> you should also check that the write of the enable bit actually took
> effect?

I can add that.

> > +/* val 0 - highest performance, 15 - maximum energy savings */
> > +static void hwp_energy_perf_bias(const struct hwp_drv_data *data)
> > +{
> > +    uint64_t msr;
> > +    uint8_t val = data->energy_perf;
> > +
> > +    ASSERT(val <= IA32_ENERGY_BIAS_MAX_POWERSAVE);
> > +
> > +    if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
> > +    {
> > +        hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
> > +
> > +        return;
> > +    }
> > +
> > +    msr &= ~IA32_ENERGY_BIAS_MASK;
> > +    msr |= val;
> > +
> > +    if ( wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
> > +        hwp_err("error wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS): %016lx\n", msr);
> > +}
> > +
> > +static void cf_check hwp_write_request(void *info)
> > +{
> > +    struct cpufreq_policy *policy = info;
> > +    struct hwp_drv_data *data = this_cpu(hwp_drv_data);
> > +    union hwp_request hwp_req = data->curr_req;
> > +
> > +    BUILD_BUG_ON(sizeof(union hwp_request) != sizeof(uint64_t));
> > +    if ( wrmsr_safe(MSR_IA32_HWP_REQUEST, hwp_req.raw) )
> > +    {
> > +        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_HWP_REQUEST, %lx)\n",
> > +                policy->cpu, hwp_req.raw);
> > +        rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw);
> > +    }
> > +
> > +    if ( !feature_hwp_energy_perf )
> > +        hwp_energy_perf_bias(data);
> > +
> > +}
> > +
> > +static int cf_check hwp_cpufreq_target(struct cpufreq_policy *policy,
> > +                                       unsigned int target_freq,
> > +                                       unsigned int relation)
> > +{
> > +    unsigned int cpu = policy->cpu;
> > +    struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
> > +    /* Zero everything to ensure reserved bits are zero... */
> > +    union hwp_request hwp_req = { .raw = 0 };
> > +
> > +    /* .. and update from there */
> > +    hwp_req.min_perf = data->minimum;
> > +    hwp_req.max_perf = data->maximum;
> > +    hwp_req.desired = data->desired;
> > +    if ( feature_hwp_energy_perf )
> > +        hwp_req.energy_perf = data->energy_perf;
> > +    if ( feature_hwp_activity_window )
> > +        hwp_req.activity_window = data->activity_window;
> > +
> > +    if ( hwp_req.raw == data->curr_req.raw )
> > +        return 0;
> > +
> > +    data->curr_req = hwp_req;
> > +
> > +    hwp_verbose("CPU%u: wrmsr HWP_REQUEST %016lx\n", cpu, hwp_req.raw);
> > +    on_selected_cpus(cpumask_of(cpu), hwp_write_request, policy, 1);
> > +
> > +    return 0;
> > +}
>
> If I'm not mistaken these 3 functions can only be reached from the user
> space tool (via set_cpufreq_para()). On that path I don't think there
> should be any hwp_err(); definitely not in non-verbose mode. Instead it
> would be good if a sensible error code could be reported back. (Same
> then for hwp_cpufreq_update() and its helper.)

I'll investigate this.  I guess I'll have to stash a result in struct
hwp_drv_data.

> > --- a/xen/arch/x86/include/asm/cpufeature.h
> > +++ b/xen/arch/x86/include/asm/cpufeature.h
> > @@ -46,8 +46,17 @@ extern struct cpuinfo_x86 boot_cpu_data;
> >  #define cpu_has(c, bit)              test_bit(bit, (c)->x86_capability)
> >  #define boot_cpu_has(bit)    test_bit(bit, boot_cpu_data.x86_capability)
> >
> > -#define CPUID_PM_LEAF                    6
> > -#define CPUID6_ECX_APERFMPERF_CAPABILITY 0x1
> > +#define CPUID_PM_LEAF                                6
> > +#define CPUID6_EAX_HWP                               (_AC(1, U) <<  7)
> > +#define CPUID6_EAX_HWP_NOTIFICATION                  (_AC(1, U) <<  8)
> > +#define CPUID6_EAX_HWP_ACTIVITY_WINDOW               (_AC(1, U) <<  9)
> > +#define CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE (_AC(1, U) << 10)
> > +#define CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST         (_AC(1, U) << 11)
> > +#define CPUID6_EAX_HDC                               (_AC(1, U) << 13)
> > +#define CPUID6_EAX_HWP_PECI                          (_AC(1, U) << 16)
> > +#define CPUID6_EAX_HW_FEEDBACK                       (_AC(1, U) << 19)
>
> Perhaps better without open-coding BIT()?

Ok.

> I also find it a little odd that e.g. bit 17 is left out here despite you
> declaring the 5 "valid" bits in union hwp_request (which are qualified by
> this CPUID bit afaict).

Well, I thought I wasn't supposed to introduce unused defines, so I
didn't add one for 17.  For union hwp_request, the "valid" bits are
part of the register structure, so it makes sense to include them
instead of an incomplete definition.  IIRC, at some point I set the
"valid" bits when I wasn't supposed to, and they caused the wrmsr
calls to fail.  That might have been because my test machines don't
have package-level HWP.

(I was confused when the CPUID section stated "Bit 17: Flexible HWP is
supported if set.", but there are no further references to "Flexible
HWP" in the SDM.)

> > +#define CPUID6_ECX_APERFMPERF_CAPABILITY             0x1
> > +#define CPUID6_ECX_IA32_ENERGY_PERF_BIAS             0x8
>
> Why not the same form here?

I was re-indenting APERFMPERF, and added ENERGY_PERF_BIAS in a
consistent style.  I will update with BIT().

> > --- a/xen/arch/x86/include/asm/msr-index.h
> > +++ b/xen/arch/x86/include/asm/msr-index.h
> > @@ -151,6 +151,13 @@
> >
> >  #define MSR_PKRS                            0x000006e1
> >
> > +#define MSR_IA32_PM_ENABLE                  0x00000770
> > +#define  IA32_PM_ENABLE_HWP_ENABLE          (_AC(1, ULL) <<  0)
> > +
> > +#define MSR_IA32_HWP_CAPABILITIES           0x00000771
> > +#define MSR_IA32_HWP_INTERRUPT              0x00000773
> > +#define MSR_IA32_HWP_REQUEST                0x00000774
>
> I think for new MSRs being added here in particular Andrew would like to
> see the IA32 infixes omitted. (I'd extend this then to
> CPUID6_ECX_IA32_ENERGY_PERF_BIAS as well.)

Ok.

> > @@ -165,6 +172,11 @@
> >  #define  PASID_PASID_MASK                   0x000fffff
> >  #define  PASID_VALID                        (_AC(1, ULL) << 31)
> >
> > +#define MSR_IA32_PKG_HDC_CTL                0x00000db0
> > +#define  IA32_PKG_HDC_CTL_HDC_PKG_ENABLE    (_AC(1, ULL) <<  0)
>
> The name has two redundant infixes, which looks odd, but then I can't
> suggest any better without going too much out of sync with the SDM.

Yes, it's not a good name, but I was trying to keep close to the SDM.
FAOD, these should drop IA32_ to become:
MSR_PKG_HDC_CTL
PKG_HDC_CTL_HDC_PKG_ENABLE
?

> > --- a/xen/drivers/cpufreq/cpufreq.c
> > +++ b/xen/drivers/cpufreq/cpufreq.c
> > @@ -565,6 +565,38 @@ static void cpufreq_cmdline_common_para(struct cpufreq_policy *new_policy)
> >
> >  static int __init cpufreq_handle_common_option(const char *name, const char *val)
> >  {
> > +    if (!strcmp(name, "hdc")) {
> > +        if (val) {
> > +            int ret = parse_bool(val, NULL);
> > +            if (ret != -1) {
> > +                opt_cpufreq_hdc = ret;
> > +                return 1;
> > +            }
> > +        } else {
> > +            opt_cpufreq_hdc = true;
> > +            return 1;
> > +        }
> > +    } else if (!strcmp(name, "no-hdc")) {
> > +        opt_cpufreq_hdc = false;
> > +        return 1;
> > +    }
>
> I think recognizing a "no-" prefix would want to be separated out, and be
> restricted to val being NULL. It would result in val being pointed at the
> string "no" (or "off" or anything else parse_bool() recognizes as negative
> indicator).
>
> Yet if, as suggested above, "hwp" became a "fake" governor also when
> parsing the command line, "hdc" could actually be handled in its
> handle_option() hook.

Makes sense.

Thank you for taking the time to review this.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 04 17:00:43 2023
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-6-jandryuk@gmail.com>
 <2864bf57-88cd-6fce-2d38-6f3a31abb440@suse.com>
In-Reply-To: <2864bf57-88cd-6fce-2d38-6f3a31abb440@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 4 May 2023 13:00:18 -0400
Message-ID: <CAKf6xpshQ=6kPHtjpWqNUiaBym2uEXt=reY0Kd0VoZgxuE=LxA@mail.gmail.com>
Subject: Re: [PATCH v3 05/14 RESEND] xenpm: Change get-cpufreq-para output for internal
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org

On Thu, May 4, 2023 at 10:35 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 01.05.2023 21:30, Jason Andryuk wrote:
> > When using HWP, some of the returned data is not applicable.  In that
> > case, we should just omit it to avoid confusing the user.  So switch to
> > printing the base and turbo frequencies since those are relevant to HWP.
> > Similarly, stop printing the CPU frequencies since those do not apply.
>
> It vaguely feels like I have asked this before: Can you point me at a
> place in the SDM where it is said that CPUID 0x16's "Maximum Frequency"
> is the turbo frequency? Without such a reference I feel a little uneasy
> with ...

I don't have a reference, but I found it empirically to match the
"turbo" frequency.

For an Intel® Core™ i7-10810U,
https://ark.intel.com/content/www/us/en/ark/products/201888/intel-core-i710810u-processor-12m-cache-up-to-4-90-ghz.html

Max Turbo Frequency 4.90 GHz

# xenpm get-cpufreq-para
cpu id               : 0
affected_cpus        : 0
cpuinfo frequency    : base [1600000] turbo [4900000]

Turbo has to be enabled to reach (close to) that frequency.

From my cover letter:
This is for a 10th gen 6-core CPU with a 1600 MHz base and 4900 MHz
max.  In the default balance mode, Turbo Boost doesn't exceed 4GHz.
Tweaking the energy_perf preference with `xenpm set-cpufreq-hwp
balance ene:64`, I've seen the CPU hit 4.7GHz before throttling down
and bouncing around between 4.3 and 4.5 GHz.  Curiously the other
cores read ~4GHz when turbo boost takes effect.  This was done after
pinning all dom0 cores, and using taskset to pin to vCPU/pCPU 11 and
running a bash tightloop.

> > @@ -720,10 +721,15 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
> >          printf(" %d", p_cpufreq->affected_cpus[i]);
> >      printf("\n");
> >
> > -    printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
> > -           p_cpufreq->cpuinfo_max_freq,
> > -           p_cpufreq->cpuinfo_min_freq,
> > -           p_cpufreq->cpuinfo_cur_freq);
> > +    if ( internal )
> > +        printf("cpuinfo frequency    : base [%u] turbo [%u]\n",
> > +               p_cpufreq->cpuinfo_min_freq,
> > +               p_cpufreq->cpuinfo_max_freq);
>
> ... calling it "turbo" (and not "max") here.

I'm fine with "max".  I think I went with turbo since it's a value you
cannot sustain but can only hit in short bursts.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 04 17:24:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 17:24:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529856.824742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puch9-0006oZ-Dp; Thu, 04 May 2023 17:24:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529856.824742; Thu, 04 May 2023 17:24:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puch9-0006oR-B7; Thu, 04 May 2023 17:24:43 +0000
Received: by outflank-mailman (input) for mailman id 529856;
 Thu, 04 May 2023 17:24:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puch7-0006oK-Rj
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 17:24:42 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8a6031ec-eaa0-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 19:24:40 +0200 (CEST)
Received: from mail-mw2nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 04 May 2023 13:24:14 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5154.namprd03.prod.outlook.com (2603:10b6:a03:22f::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Thu, 4 May
 2023 17:24:10 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6340.031; Thu, 4 May 2023
 17:24:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a6031ec-eaa0-11ed-b226-6b7b168915f2
Message-ID: <ca02ea8b-d196-8d2c-bd63-b5ab5f379bc7@citrix.com>
Date: Thu, 4 May 2023 18:24:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] build: shorten macro references
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>
References: <95467f2c-09ed-d34f-feef-5cd55c60f628@suse.com>
In-Reply-To: <95467f2c-09ed-d34f-feef-5cd55c60f628@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 04/05/2023 5:16 pm, Jan Beulich wrote:
> Presumably by copy-and-paste we've accumulated a number of instances of
> $(@D)/$(@F), which really is nothing else than $@. The split form only
> needs using when we want to e.g. insert a leading . at the beginning of
> the file name portion of the full name.

From the Kbuild work, we have $(dot-target) now, which is marginally more
legible than $(@D)/.$(@F), and we ought to use it consistently.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although ...

>
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -104,9 +104,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
>  	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
>  	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
>  	    $(@D)/.$(@F).1.o -o $@
> -	$(NM) -pa --format=sysv $(@D)/$(@F) \
> +	$(NM) -pa --format=sysv $@ \
>  		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> -		>$(@D)/$(@F).map
> +		>$@.map

... any chance we can get a space between > and $ like we do almost
everywhere else?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 04 17:52:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 17:52:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529859.824753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pud7i-0001vm-Hm; Thu, 04 May 2023 17:52:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529859.824753; Thu, 04 May 2023 17:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pud7i-0001vf-F3; Thu, 04 May 2023 17:52:10 +0000
Received: by outflank-mailman (input) for mailman id 529859;
 Thu, 04 May 2023 17:52:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rz6h=AZ=citrix.com=prvs=481809c50=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1pud7h-0001vZ-Nu
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 17:52:09 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6249d2d0-eaa4-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 19:52:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6249d2d0-eaa4-11ed-b226-6b7b168915f2
From: Jennifer Herbert <jennifer.herbert@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jason Andryuk <jandryuk@gmail.com>, Jennifer Herbert
	<jennifer.herbert@citrix.com>
Subject: [PATCH v4 0/2] acpi: Make TPM version configurable.
Date: Thu, 4 May 2023 17:51:44 +0000
Message-ID: <20230504175146.208936-1-jennifer.herbert@citrix.com>
X-Mailer: git-send-email 2.39.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This patch series makes the TPM version, for which the ACPI library probes,
configurable, and then adds support for TPM version 2.

Changes from version 3:

* Omit the tpm_version 0 case, which fell back to 1.2 and was previously
  intended for compatibility with unknown code.
* Add checks for xenstore tpm_version field.
* Convert copyright header to SPDX
* Minor code style fixes.


Jennifer Herbert (2):
  acpi: Make TPM version configurable.
  acpi: Add TPM2 interface definition.

 docs/misc/xenstore-paths.pandoc |  10 +++
 tools/firmware/hvmloader/util.c |  39 +++++++++---
 tools/libacpi/Makefile          |   3 +-
 tools/libacpi/acpi2_0.h         |  33 ++++++++++
 tools/libacpi/build.c           | 106 +++++++++++++++++++++++---------
 tools/libacpi/libacpi.h         |   4 +-
 tools/libacpi/ssdt_tpm2.asl     |  27 ++++++++
 7 files changed, 184 insertions(+), 38 deletions(-)
 create mode 100644 tools/libacpi/ssdt_tpm2.asl

-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 17:52:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 17:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529860.824763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pud7w-0002Dr-QE; Thu, 04 May 2023 17:52:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529860.824763; Thu, 04 May 2023 17:52:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pud7w-0002Dk-Ne; Thu, 04 May 2023 17:52:24 +0000
Received: by outflank-mailman (input) for mailman id 529860;
 Thu, 04 May 2023 17:52:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rz6h=AZ=citrix.com=prvs=481809c50=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1pud7v-0002DN-EY
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 17:52:23 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69803ca4-eaa4-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 19:52:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69803ca4-eaa4-11ed-8611-37d641c3527e
From: Jennifer Herbert <jennifer.herbert@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jason Andryuk <jandryuk@gmail.com>, Jennifer Herbert
	<jennifer.herbert@citrix.com>
Subject: [PATCH v4 1/2] acpi: Make TPM version configurable.
Date: Thu, 4 May 2023 17:51:45 +0000
Message-ID: <20230504175146.208936-2-jennifer.herbert@citrix.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230504175146.208936-1-jennifer.herbert@citrix.com>
References: <20230504175146.208936-1-jennifer.herbert@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This patch makes the TPM version, for which the ACPI library probes,
configurable.  If acpi_config.tpm_version is set to 1, it indicates that a
TPM 1.2 (TCPA) interface should be probed.  I have also added an option to
hvmloader to allow setting this new config, which can be triggered by
setting the platform/tpm_version xenstore key.
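
The xenstore value validation used here can be sketched standalone.  This
is a hedged illustration, not the hvmloader code itself (which obtains the
string via xenstore_read()); parse_tpm_version() is an illustrative name.
The string is accepted only if it parses fully as an integer, and only
version 1 currently enables TPM probing:

```c
#include <stdlib.h>

/* Accept the xenstore string only when strtoll() consumes it entirely;
 * reject empty strings and trailing garbage such as "1abc". */
static int parse_tpm_version(const char *s)
{
    char *end;
    long long v;

    if ( !s )
        return 0;

    v = strtoll(s, &end, 0);

    if ( end == s || end[0] != '\0' )
        return 0;

    return v == 1;
}
```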

Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
---
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jason Andryuk <jandryuk@gmail.com>

v4:
* Omit the tpm_version 0 case, which fell back to 1.2 and was previously
  intended for compatibility with unknown code.
* Add checks for xenstore tpm_version field.
* Minor code style fixes.
v3:
* Default to probing for a 1.2 TPM if the xenstore tpm_version field is
  missing, or if the tpm flag is set but not the TPM version.  (Functionally
  reverts a change in v2)
* Correct TPM flag setting
v2:
* Split patch into two.
* Default not to probe a TPM, unless tpm_version xenstore field set.
* Minor code style fixes.
---
 docs/misc/xenstore-paths.pandoc |  9 +++++
 tools/firmware/hvmloader/util.c | 28 ++++++++++----
 tools/libacpi/build.c           | 68 ++++++++++++++++++---------------
 tools/libacpi/libacpi.h         |  3 +-
 4 files changed, 70 insertions(+), 38 deletions(-)

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index 5cd5c8a3b9..e67e164855 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -269,6 +269,15 @@ at the guest physical address in HVM_PARAM_VM_GENERATION_ID_ADDR.
 See Microsoft's "Virtual Machine Generation ID" specification for the
 circumstances where the generation ID needs to be changed.
 
+
+#### ~/platform/tpm_version = INTEGER [HVM,INTERNAL]
+
+The TPM version to be probed for.
+
+A value of 1 indicates to probe for TPM 1.2.
+A value of 0 or an invalid value will result in no TPM being probed.
+If unset, a default of 1 is assumed.
+
 ### Frontend device paths
 
 Paravirtual device frontends are generally specified by their own
diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index 581b35e5cf..1b733a3091 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -920,6 +920,8 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
 {
     const char *s;
     struct acpi_ctxt ctxt;
+    long long tpm_version = 0;
+    char *end;
 
     /* Allocate and initialise the acpi info area. */
     mem_hole_populate_ram(ACPI_INFO_PHYSICAL_ADDRESS >> PAGE_SHIFT, 1);
@@ -967,8 +969,6 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
     s = xenstore_read("platform/generation-id", "0:0");
     if ( s )
     {
-        char *end;
-
         config->vm_gid[0] = strtoll(s, &end, 0);
         config->vm_gid[1] = 0;
         if ( end && end[0] == ':' )
@@ -994,13 +994,27 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
     if ( !strncmp(xenstore_read("platform/acpi_laptop_slate", "0"), "1", 1)  )
         config->table_flags |= ACPI_HAS_SSDT_LAPTOP_SLATE;
 
-    config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC |
-                            ACPI_HAS_WAET | ACPI_HAS_PMTIMER |
-                            ACPI_HAS_BUTTONS | ACPI_HAS_VGA |
-                            ACPI_HAS_8042 | ACPI_HAS_CMOS_RTC);
+    config->table_flags |= (ACPI_HAS_IOAPIC | ACPI_HAS_WAET |
+                            ACPI_HAS_PMTIMER | ACPI_HAS_BUTTONS |
+                            ACPI_HAS_VGA | ACPI_HAS_8042 |
+                            ACPI_HAS_CMOS_RTC);
     config->acpi_revision = 4;
 
-    config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
+    config->tpm_version = 0;
+    s = xenstore_read("platform/tpm_version", "1");
+    tpm_version = strtoll(s, &end, 0);
+
+    if ( end && end[0] == '\0' )
+    {
+        switch ( tpm_version )
+        {
+        case 1:
+            config->table_flags |= ACPI_HAS_TPM;
+            config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
+            config->tpm_version = 1;
+            break;
+        }
+    }
 
     config->numa.nr_vmemranges = nr_vmemranges;
     config->numa.nr_vnodes = nr_vnodes;
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index fe2db66a62..bb0d0557d4 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -409,38 +409,46 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
         memcpy(ssdt, ssdt_laptop_slate, sizeof(ssdt_laptop_slate));
         table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
     }
-
-    /* TPM TCPA and SSDT. */
-    if ( (config->table_flags & ACPI_HAS_TCPA) &&
-         (config->tis_hdr[0] != 0 && config->tis_hdr[0] != 0xffff) &&
-         (config->tis_hdr[1] != 0 && config->tis_hdr[1] != 0xffff) )
+    /* TPM and its SSDT. */
+    if ( config->table_flags & ACPI_HAS_TPM )
     {
-        ssdt = ctxt->mem_ops.alloc(ctxt, sizeof(ssdt_tpm), 16);
-        if (!ssdt) return -1;
-        memcpy(ssdt, ssdt_tpm, sizeof(ssdt_tpm));
-        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
-
-        tcpa = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_tcpa), 16);
-        if (!tcpa) return -1;
-        memset(tcpa, 0, sizeof(*tcpa));
-        table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, tcpa);
-
-        tcpa->header.signature = ACPI_2_0_TCPA_SIGNATURE;
-        tcpa->header.length    = sizeof(*tcpa);
-        tcpa->header.revision  = ACPI_2_0_TCPA_REVISION;
-        fixed_strcpy(tcpa->header.oem_id, ACPI_OEM_ID);
-        fixed_strcpy(tcpa->header.oem_table_id, ACPI_OEM_TABLE_ID);
-        tcpa->header.oem_revision = ACPI_OEM_REVISION;
-        tcpa->header.creator_id   = ACPI_CREATOR_ID;
-        tcpa->header.creator_revision = ACPI_CREATOR_REVISION;
-        if ( (lasa = ctxt->mem_ops.alloc(ctxt, ACPI_2_0_TCPA_LAML_SIZE, 16)) != NULL )
+        switch ( config->tpm_version )
         {
-            tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
-            tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
-            memset(lasa, 0, tcpa->laml);
-            set_checksum(tcpa,
-                         offsetof(struct acpi_header, checksum),
-                         tcpa->header.length);
+        case 1:
+            if ( config->tis_hdr[0] == 0 || config->tis_hdr[0] == 0xffff ||
+                 config->tis_hdr[1] == 0 || config->tis_hdr[1] == 0xffff )
+                break;
+
+            ssdt = ctxt->mem_ops.alloc(ctxt, sizeof(ssdt_tpm), 16);
+            if (!ssdt) return -1;
+            memcpy(ssdt, ssdt_tpm, sizeof(ssdt_tpm));
+            table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
+
+            tcpa = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_tcpa), 16);
+            if (!tcpa) return -1;
+            memset(tcpa, 0, sizeof(*tcpa));
+            table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, tcpa);
+
+            tcpa->header.signature = ACPI_2_0_TCPA_SIGNATURE;
+            tcpa->header.length    = sizeof(*tcpa);
+            tcpa->header.revision  = ACPI_2_0_TCPA_REVISION;
+            fixed_strcpy(tcpa->header.oem_id, ACPI_OEM_ID);
+            fixed_strcpy(tcpa->header.oem_table_id, ACPI_OEM_TABLE_ID);
+            tcpa->header.oem_revision = ACPI_OEM_REVISION;
+            tcpa->header.creator_id   = ACPI_CREATOR_ID;
+            tcpa->header.creator_revision = ACPI_CREATOR_REVISION;
+
+            lasa = ctxt->mem_ops.alloc(ctxt, ACPI_2_0_TCPA_LAML_SIZE, 16);
+            if ( lasa )
+            {
+                tcpa->lasa = ctxt->mem_ops.v2p(ctxt, lasa);
+                tcpa->laml = ACPI_2_0_TCPA_LAML_SIZE;
+                memset(lasa, 0, tcpa->laml);
+                set_checksum(tcpa,
+                             offsetof(struct acpi_header, checksum),
+                             tcpa->header.length);
+            }
+            break;
         }
     }
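[Editorial note: the TIS probe retained in case 1 above treats a TPM 1.2 device as present only when the first two 16-bit words of the TIS header area are neither all-zeros nor all-ones. A minimal Python sketch of that rule (illustrative only, not the libacpi code):]

```python
def tis_device_present(tis_hdr):
    """Mirror of the tis_hdr[0]/tis_hdr[1] validity check above:
    a TPM 1.2 TIS interface is treated as present only when both
    16-bit words are neither 0x0000 nor 0xFFFF."""
    return all(w not in (0x0000, 0xFFFF) for w in tis_hdr[:2])

# Nothing mapped: the region reads back as all-ones (or zeros) -> absent.
assert not tis_device_present([0xFFFF, 0xFFFF])
assert not tis_device_present([0x0000, 0x1234])
# Plausible DID/VID words -> present.
assert tis_device_present([0x0010, 0x104A])
```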
 
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index a2efd23b0b..f69452401f 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -27,7 +27,7 @@
 #define ACPI_HAS_SSDT_PM           (1<<4)
 #define ACPI_HAS_SSDT_S3           (1<<5)
 #define ACPI_HAS_SSDT_S4           (1<<6)
-#define ACPI_HAS_TCPA              (1<<7)
+#define ACPI_HAS_TPM               (1<<7)
 #define ACPI_HAS_IOAPIC            (1<<8)
 #define ACPI_HAS_WAET              (1<<9)
 #define ACPI_HAS_PMTIMER           (1<<10)
@@ -66,6 +66,7 @@ struct acpi_config {
 
     uint32_t table_flags;
     uint8_t acpi_revision;
+    uint8_t tpm_version;
 
     uint64_t vm_gid[2];
     unsigned long vm_gid_addr; /* OUT parameter */
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 17:52:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 17:52:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529861.824773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pud80-0002WD-6Q; Thu, 04 May 2023 17:52:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529861.824773; Thu, 04 May 2023 17:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pud80-0002W6-1x; Thu, 04 May 2023 17:52:28 +0000
Received: by outflank-mailman (input) for mailman id 529861;
 Thu, 04 May 2023 17:52:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rz6h=AZ=citrix.com=prvs=481809c50=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1pud7y-0002DN-1r
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 17:52:26 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6c357424-eaa4-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 19:52:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Jennifer Herbert <jennifer.herbert@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jason Andryuk <jandryuk@gmail.com>, Jennifer Herbert
	<jennifer.herbert@citrix.com>
Subject: [PATCH v4 2/2] acpi: Add TPM2 interface definition.
Date: Thu, 4 May 2023 17:51:46 +0000
Message-ID: <20230504175146.208936-3-jennifer.herbert@citrix.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230504175146.208936-1-jennifer.herbert@citrix.com>
References: <20230504175146.208936-1-jennifer.herbert@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This patch introduces an optional TPM 2.0 interface definition to the guest's
ACPI tables, which is to be used as part of a vTPM 2.0 implementation.

Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jason Andryuk <jandryuk@gmail.com>

v4:
* Convert copyright header to SPDX
* Follows on from patch 1's change to the xenstore tpm_version field checking.
* Minor code style fixes.
v3:
* Renamed TPM_CRB constants to better match the TPM specification.
* Moved some ACPI register locations to acpi2_0.h so that both TPM
  register offsets are defined together, making their relation clearer.
* Added additional comments to explain new constants.
* Minor code style fixes.
v2:
* Patch split into two.
* Move TPM log to 0xFED50000
* Minor code style fixes.
---
 docs/misc/xenstore-paths.pandoc |  3 ++-
 tools/firmware/hvmloader/util.c | 10 +++++++++
 tools/libacpi/Makefile          |  3 ++-
 tools/libacpi/acpi2_0.h         | 33 +++++++++++++++++++++++++++
 tools/libacpi/build.c           | 40 +++++++++++++++++++++++++++++++++
 tools/libacpi/libacpi.h         |  1 +
 tools/libacpi/ssdt_tpm2.asl     | 27 ++++++++++++++++++++++
 7 files changed, 115 insertions(+), 2 deletions(-)
 create mode 100644 tools/libacpi/ssdt_tpm2.asl

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index e67e164855..bffb8ea544 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -274,7 +274,8 @@ circumstances where the generation ID needs to be changed.
 
 The TPM version to be probed for.
 
-A value of 1 indicates to probe for TPM 1.2.
+A value of 1 indicates to probe for TPM 1.2, whereas a value of 2
+indicates that a TPM 2.0 using CRB should be probed.
 A value of 0 or an invalid value will result in no TPM being probed.
 If unset, a default of 1 is assumed.
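[Editorial note: the probing rule documented above matches the hvmloader parsing introduced in patch 1: the xenstore value is parsed strtoll()-style and only accepted when the whole string is consumed, with anything else falling through to "no TPM". A hedged Python sketch of that selection logic (names are illustrative; the real logic lives in hvmloader_acpi_build_tables()):]

```python
def probed_tpm_version(xenstore_value, default="1"):
    """Return 1 (probe TPM 1.2), 2 (probe TPM 2.0 via CRB) or 0 (no TPM),
    following the documented rule: unset -> default of "1", invalid -> 0."""
    s = default if xenstore_value is None else xenstore_value
    try:
        # Like strtoll(s, &end, 0) combined with the end[0] == '\0' check:
        # the whole string must be a number (base prefix allowed).
        v = int(s, 0)
    except ValueError:
        return 0
    return v if v in (1, 2) else 0

assert probed_tpm_version(None) == 1   # unset: default of 1 assumed
assert probed_tpm_version("2") == 2    # TPM 2.0 using CRB
assert probed_tpm_version("0") == 0    # explicitly no TPM
assert probed_tpm_version("abc") == 0  # invalid value: no TPM probed
```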
 
diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index 1b733a3091..b573a9c3cd 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -1013,6 +1013,16 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
             config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
             config->tpm_version = 1;
             break;
+
+        case 2:
+            config->table_flags |= ACPI_HAS_TPM;
+            config->crb_id = (uint16_t *)TPM_CRB_INTF_ID;
+
+            mem_hole_populate_ram(TPM_LOG_AREA_ADDRESS >> PAGE_SHIFT,
+                                  TPM_LOG_SIZE >> PAGE_SHIFT);
+            memset((void *)TPM_LOG_AREA_ADDRESS, 0, TPM_LOG_SIZE);
+            config->tpm_version = 2;
+            break;
         }
     }
 
diff --git a/tools/libacpi/Makefile b/tools/libacpi/Makefile
index 60860eaa00..23278f6a61 100644
--- a/tools/libacpi/Makefile
+++ b/tools/libacpi/Makefile
@@ -25,7 +25,8 @@ C_SRC-$(CONFIG_X86) = dsdt_anycpu.c dsdt_15cpu.c dsdt_anycpu_qemu_xen.c dsdt_pvh
 C_SRC-$(CONFIG_ARM_64) = dsdt_anycpu_arm.c
 DSDT_FILES ?= $(C_SRC-y)
 C_SRC = $(addprefix $(ACPI_BUILD_DIR)/, $(DSDT_FILES))
-H_SRC = $(addprefix $(ACPI_BUILD_DIR)/, ssdt_s3.h ssdt_s4.h ssdt_pm.h ssdt_tpm.h ssdt_laptop_slate.h)
+H_SRC = $(addprefix $(ACPI_BUILD_DIR)/, ssdt_s3.h ssdt_s4.h ssdt_pm.h)
+H_SRC += $(addprefix $(ACPI_BUILD_DIR)/, ssdt_tpm.h ssdt_tpm2.h ssdt_laptop_slate.h)
 
 MKDSDT_CFLAGS-$(CONFIG_ARM_64) = -DCONFIG_ARM_64
 MKDSDT_CFLAGS-$(CONFIG_X86) = -DCONFIG_X86
diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
index 2619ba32db..3503eb3cfa 100644
--- a/tools/libacpi/acpi2_0.h
+++ b/tools/libacpi/acpi2_0.h
@@ -121,6 +121,37 @@ struct acpi_20_tcpa {
 };
 #define ACPI_2_0_TCPA_LAML_SIZE (64*1024)
 
+/*
+ * TPM2
+ */
+struct acpi_20_tpm2 {
+    struct acpi_header header;
+    uint16_t platform_class;
+    uint16_t reserved;
+    uint64_t control_area_address;
+    uint32_t start_method;
+    uint8_t start_method_params[12];
+    uint32_t log_area_minimum_length;
+    uint64_t log_area_start_address;
+};
+#define TPM2_ACPI_CLASS_CLIENT      0
+#define TPM2_START_METHOD_CRB       7
+
+/*
+ * TPM register I/O mapped region, the location of which is defined in
+ * the TCG PC Client Platform TPM Profile Specification for TPM 2.0.
+ * See table 9 - only Locality 0 is used here. This is emulated by QEMU.
+ * The definition of the register space is found in table 12.
+ */
+#define TPM_REGISTER_BASE           0xFED40000
+#define TPM_CRB_CTRL_REQ            (TPM_REGISTER_BASE  + 0x40)
+#define TPM_CRB_INTF_ID             (TPM_REGISTER_BASE  + 0x30)
+
+#define TPM_LOG_AREA_ADDRESS        0xFED50000
+
+#define TPM_LOG_AREA_MINIMUM_SIZE   (64 << 10)
+#define TPM_LOG_SIZE                (64 << 10)
+
 /*
  * Fixed ACPI Description Table Structure (FADT) in ACPI 1.0.
  */
@@ -431,6 +462,7 @@ struct acpi_20_slit {
 #define ACPI_2_0_RSDT_SIGNATURE ASCII32('R','S','D','T')
 #define ACPI_2_0_XSDT_SIGNATURE ASCII32('X','S','D','T')
 #define ACPI_2_0_TCPA_SIGNATURE ASCII32('T','C','P','A')
+#define ACPI_2_0_TPM2_SIGNATURE ASCII32('T','P','M','2')
 #define ACPI_2_0_HPET_SIGNATURE ASCII32('H','P','E','T')
 #define ACPI_2_0_WAET_SIGNATURE ASCII32('W','A','E','T')
 #define ACPI_2_0_SRAT_SIGNATURE ASCII32('S','R','A','T')
@@ -444,6 +476,7 @@ struct acpi_20_slit {
 #define ACPI_2_0_RSDT_REVISION 0x01
 #define ACPI_2_0_XSDT_REVISION 0x01
 #define ACPI_2_0_TCPA_REVISION 0x02
+#define ACPI_2_0_TPM2_REVISION 0x04
 #define ACPI_2_0_HPET_REVISION 0x01
 #define ACPI_2_0_WAET_REVISION 0x01
 #define ACPI_1_0_FADT_REVISION 0x01
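[Editorial note: the struct acpi_20_tpm2 added above follows the TCG ACPI specification's TPM2 table with the optional log-area fields: a 36-byte ACPI header plus 40 bytes of TPM2-specific fields, i.e. 76 bytes in total, assuming the byte-packed layout ACPI tables require. A quick Python check of that layout (field names abbreviated):]

```python
import struct

# "<" = little-endian with no padding, as ACPI table layout requires.
# 36s  ACPI header
# H H  platform_class, reserved
# Q    control_area_address
# I    start_method
# 12s  start_method_params
# I    log_area_minimum_length
# Q    log_area_start_address
TPM2_FMT = "<36sHHQI12sIQ"
assert struct.calcsize(TPM2_FMT) == 76  # 36 + 2 + 2 + 8 + 4 + 12 + 4 + 8
```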
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index bb0d0557d4..401113503c 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -19,6 +19,7 @@
 #include "ssdt_s3.h"
 #include "ssdt_s4.h"
 #include "ssdt_tpm.h"
+#include "ssdt_tpm2.h"
 #include "ssdt_pm.h"
 #include "ssdt_laptop_slate.h"
 #include <xen/hvm/hvm_info_table.h>
@@ -350,6 +351,7 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
     struct acpi_20_hpet *hpet;
     struct acpi_20_waet *waet;
     struct acpi_20_tcpa *tcpa;
+    struct acpi_20_tpm2 *tpm2;
     unsigned char *ssdt;
     void *lasa;
 
@@ -449,6 +451,44 @@ static int construct_secondary_tables(struct acpi_ctxt *ctxt,
                              tcpa->header.length);
             }
             break;
+
+        case 2:
+            /*
+             * Check VID stored in bits 37:32 (3rd 16 bit word) of CRB
+             * identifier register.  See table 16 of TCG PC client platform
+             * TPM profile specification for TPM 2.0.
+             */
+            if ( config->crb_id[2] == 0 || config->crb_id[2] == 0xffff )
+                break;
+
+            ssdt = ctxt->mem_ops.alloc(ctxt, sizeof(ssdt_tpm2), 16);
+            if (!ssdt) return -1;
+            memcpy(ssdt, ssdt_tpm2, sizeof(ssdt_tpm2));
+            table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, ssdt);
+
+            tpm2 = ctxt->mem_ops.alloc(ctxt, sizeof(struct acpi_20_tpm2), 16);
+            if (!tpm2) return -1;
+            memset(tpm2, 0, sizeof(*tpm2));
+            table_ptrs[nr_tables++] = ctxt->mem_ops.v2p(ctxt, tpm2);
+
+            tpm2->header.signature = ACPI_2_0_TPM2_SIGNATURE;
+            tpm2->header.length    = sizeof(*tpm2);
+            tpm2->header.revision  = ACPI_2_0_TPM2_REVISION;
+            fixed_strcpy(tpm2->header.oem_id, ACPI_OEM_ID);
+            fixed_strcpy(tpm2->header.oem_table_id, ACPI_OEM_TABLE_ID);
+            tpm2->header.oem_revision = ACPI_OEM_REVISION;
+            tpm2->header.creator_id   = ACPI_CREATOR_ID;
+            tpm2->header.creator_revision = ACPI_CREATOR_REVISION;
+            tpm2->platform_class = TPM2_ACPI_CLASS_CLIENT;
+            tpm2->control_area_address = TPM_CRB_CTRL_REQ;
+            tpm2->start_method = TPM2_START_METHOD_CRB;
+            tpm2->log_area_minimum_length = TPM_LOG_AREA_MINIMUM_SIZE;
+            tpm2->log_area_start_address = TPM_LOG_AREA_ADDRESS;
+
+            set_checksum(tpm2,
+                         offsetof(struct acpi_header, checksum),
+                         tpm2->header.length);
+            break;
         }
     }
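[Editorial note: set_checksum() used above implements the standard ACPI rule: the checksum byte (at offset 9 of the ACPI header) is chosen so that the byte sum of the entire table is 0 modulo 256. A minimal Python sketch of the rule, not the libacpi implementation itself:]

```python
ACPI_CHECKSUM_OFFSET = 9  # offsetof(struct acpi_header, checksum)

def set_checksum(table, checksum_offset=ACPI_CHECKSUM_OFFSET):
    """Set the checksum byte so the whole table sums to 0 mod 256."""
    table[checksum_offset] = 0
    table[checksum_offset] = (-sum(table)) % 256

# Toy 76-byte "TPM2" table filled with arbitrary bytes.
tbl = bytearray((i * 7) & 0xFF for i in range(76))
tbl[0:4] = b"TPM2"
set_checksum(tbl)
assert sum(tbl) % 256 == 0
```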
 
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index f69452401f..0d19f9fc4d 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -80,6 +80,7 @@ struct acpi_config {
     const struct hvm_info_table *hvminfo;
 
     const uint16_t *tis_hdr;
+    const uint16_t *crb_id;
 
     /*
      * Address where acpi_info should be placed.
diff --git a/tools/libacpi/ssdt_tpm2.asl b/tools/libacpi/ssdt_tpm2.asl
new file mode 100644
index 0000000000..3df9d70556
--- /dev/null
+++ b/tools/libacpi/ssdt_tpm2.asl
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: LGPL-2.1 */
+/*
+ * ssdt_tpm2.asl
+ *
+ * Copyright (c) 2018-2022, Citrix Systems, Inc.
+ */
+
+/* SSDT for TPM CRB Interface for Xen with Qemu device model. */
+
+DefinitionBlock ("SSDT_TPM2.aml", "SSDT", 2, "Xen", "HVM", 0)
+{
+    Device (TPM)
+    {
+        Name (_HID, "MSFT0101" /* TPM 2.0 Security Device */)  // _HID: Hardware ID
+        Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource Settings
+        {
+            Memory32Fixed (ReadWrite,
+                0xFED40000,         // Address Base
+                0x00001000,         // Address Length
+                )
+        })
+        Method (_STA, 0, NotSerialized)  // _STA: Status
+        {
+            Return (0x0F)
+        }
+    }
+}
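[Editorial note: the _STA value 0x0F returned above sets the four low status bits defined by the ACPI specification: device present, enabled, shown in the UI, and functioning properly. A small Python decode of that constant (illustrative):]

```python
STA_PRESENT     = 1 << 0  # device is present
STA_ENABLED     = 1 << 1  # device is enabled and decoding its resources
STA_SHOWN_IN_UI = 1 << 2  # device should be shown in the UI
STA_FUNCTIONING = 1 << 3  # device is functioning properly

sta = 0x0F
assert sta == (STA_PRESENT | STA_ENABLED | STA_SHOWN_IN_UI | STA_FUNCTIONING)
```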
-- 
2.39.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 18:04:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 18:04:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529871.824783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pudJP-0004lS-8x; Thu, 04 May 2023 18:04:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529871.824783; Thu, 04 May 2023 18:04:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pudJP-0004lL-62; Thu, 04 May 2023 18:04:15 +0000
Received: by outflank-mailman (input) for mailman id 529871;
 Thu, 04 May 2023 18:04:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qVjO=AZ=rabbit.lu=slack@srs-se1.protection.inumbo.net>)
 id 1pudJO-0004lF-IJ
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 18:04:14 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12baa79e-eaa6-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 20:04:12 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-306281edf15so749146f8f.1
 for <xen-devel@lists.xenproject.org>; Thu, 04 May 2023 11:04:11 -0700 (PDT)
Received: from [192.168.2.1] (82-64-138-184.subs.proxad.net. [82.64.138.184])
 by smtp.googlemail.com with ESMTPSA id
 q6-20020a5d5746000000b003063db8f45bsm6351649wrw.23.2023.05.04.11.04.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 04 May 2023 11:04:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <a51e0f7e-aed0-2ec9-f451-2e750636fb78@rabbit.lu>
Date: Thu, 4 May 2023 20:04:09 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: xenstored: EACCESS error accessing control/feature-balloon 1
Content-Language: en-US
To: Yann Dirson <yann.dirson@vates.fr>, xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, =?UTF-8?B?RWR3aW4gVMO2csO2aw==?=
 <edwin.torok@cloud.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
 <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com>
 <678df1ff-df18-b063-eda3-2a1aed6d40f8@vates.fr>
 <50bf6b82-965b-d17c-7c5a-49c703991504@rabbit.lu>
 <f44261a2-df39-f69a-9798-dc1d656e6dac@vates.fr>
From: zithro <slack@rabbit.lu>
In-Reply-To: <f44261a2-df39-f69a-9798-dc1d656e6dac@vates.fr>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 04 May 2023 17:59, Yann Dirson wrote:
> 
> On 5/4/23 15:58, zithro wrote:
>> Hi,
>>
>> [ snipped for brevity, report summary:
>> XAPI daemon in domU tries to write to a non-existent xenstore node in
>> a non-XAPI dom0 ]
>>
>> On 12 Apr 2023 18:41, Yann Dirson wrote:
>>> Is there anything besides XAPI using this node, or the other data
>>> published by xe-daemon?
>>
>> On my vanilla Xen (ie. non-XAPI), I have no node about "balloon"-ing
>> in xenstore (either dom0 or domU nodes, but I'm not using ballooning
>> in both).
>>
>>> Maybe the original issue is just that there is no reason to have
>>> xe-guest-utilities installed in this setup?
>>
>> That's what I thought as I'm not using XAPI, so maybe the problem
>> should only be addressed to the truenas team ? I posted on their forum
>> but got no answer.
>> I killed the 'xe-daemon' in both setups without loss of functionality.
>>
>> My wild guess is that 'xe-daemon', 'xe-update-guest-attrs' and all
>> 'xenstore* commands' are leftovers from when Xen was working as a dom0
>> under FreeBSD (why would a *domU* have them ?).
> 
> That would not be correct: xenstore* are useful in guests, should you
> want to read/write to the XenStore manually or from scripts;

Didn't know that; can you give some use cases (or URLs) for which it is
useful, with or without XAPI?
I've read the xenstore* man pages and could not infer a use case.
Although I can already see some: updating ballooned memory values or,
as Julien Grall pointed out, updating "feature-s3/4" values?

PS: small mistake in "man/xenstore-write.1.html" (from at least 4.14
onward): the synopsis reads "xenstore-read" instead of "xenstore-write".
Also, the -s option disappeared from unstable, although that may be
expected; I don't know its purpose either.

> xe-deamon and xe-update-guest-attrs both come from xe-guest-utilities 6.x, which
> is really a domU tool as well, but is there to support XAPI in dom0.

I checked on FreshPorts; this is effectively the latest version in
FreeBSD, and hence FreeNAS (although the version number hasn't been
updated since 2020 despite updates, so it's not clear which version I'm
using).

So the next question is: shouldn't the installer detect whether it is
running on a XAPI Xen or not? I imagine it's feasible, maybe via
XAPI-specific xenstore nodes?

Have a nice evening


From xen-devel-bounces@lists.xenproject.org Thu May 04 18:46:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 18:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529874.824792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pudy9-0000tK-C9; Thu, 04 May 2023 18:46:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529874.824792; Thu, 04 May 2023 18:46:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pudy9-0000tD-9B; Thu, 04 May 2023 18:46:21 +0000
Received: by outflank-mailman (input) for mailman id 529874;
 Thu, 04 May 2023 18:46:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pudy7-0000t7-MM
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 18:46:19 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f3d9d1ce-eaab-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 20:46:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Thomas Gleixner <tglx@linutronix.de>
To: "Michael Kelley (LINUX)" <mikelley@microsoft.com>, LKML
 <linux-kernel@vger.kernel.org>
Cc: "x86@kernel.org" <x86@kernel.org>, David Woodhouse <dwmw@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst
 <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom
 Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr
 Natalenko <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, David Woodhouse <dwmw@amazon.co.uk>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>, Russell King <linux@armlinux.org.uk>,
 Arnd
 Bergmann <arnd@arndb.de>, "linux-arm-kernel@lists.infradead.org"
 <linux-arm-kernel@lists.infradead.org>, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, "linux-csky@vger.kernel.org"
 <linux-csky@vger.kernel.org>, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, "linux-mips@vger.kernel.org"
 <linux-mips@vger.kernel.org>, "James E.J. Bottomley"
 <James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>,
 "linux-parisc@vger.kernel.org" <linux-parisc@vger.kernel.org>, Paul
 Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 "linux-riscv@lists.infradead.org" <linux-riscv@lists.infradead.org>, Mark
 Rutland <Mark.Rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
Subject: RE: [patch 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
In-Reply-To: <BYAPR21MB168888DC5432883D8866BA40D76A9@BYAPR21MB1688.namprd21.prod.outlook.com>
References: <20230414225551.858160935@linutronix.de>
 <BYAPR21MB168888DC5432883D8866BA40D76A9@BYAPR21MB1688.namprd21.prod.outlook.com>
Date: Thu, 04 May 2023 20:46:15 +0200
Message-ID: <878re43pfs.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

Michael!

On Thu, Apr 27 2023 at 14:48, Michael Kelley wrote:
> From: Thomas Gleixner <tglx@linutronix.de> Sent: Friday, April 14, 2023 4:44 PM
>
> I smoke-tested several Linux guest configurations running on Hyper-V,
> using the "kernel/git/tglx/devel.git hotplug" tree as updated on April 26th.
> No functional issues, but encountered one cosmetic issue (details below).
>
> Configurations tested:
> *  16 vCPUs and 32 vCPUs
> *  1 NUMA node and 2 NUMA nodes
> *  Parallel bring-up enabled and disabled via kernel boot line
> *  "Normal" VMs and SEV-SNP VMs running with a paravisor on Hyper-V.
>     This config can use parallel bring-up because most of the SNP-ness is
>     hidden in the paravisor.  I was glad to see this work properly.
>
> There's not much difference in performance with and without parallel
> bring-up on the 32 vCPU VM.   Without parallel, the time is about 26
> milliseconds.  With parallel, it's about 24 ms.   So bring-up is already
> fast in the virtual environment.

Depends on the environment :)

> The cosmetic issue is in the dmesg log, and arises because Hyper-V
> enumerates SMT CPUs differently from many other environments.  In
> a Hyper-V guest, the SMT threads in a core are numbered as <even, odd>
> pairs.  Guest CPUs #0 & #1 are SMT threads in a core, as are #2 & #3, etc.  With
> parallel bring-up, here's the dmesg output:
>
> [    0.444345] smp: Bringing up secondary CPUs ...
> [    0.445139] .... node  #0, CPUs:    #2  #4  #6  #8 #10 #12 #14 #16 #18 #20 #22 #24 #26 #28 #30
> [    0.454112] x86: Booting SMP configuration:
> [    0.456035]       #1  #3  #5  #7  #9 #11 #13 #15 #17 #19 #21 #23 #25 #27 #29 #31
> [    0.466120] smp: Brought up 1 node, 32 CPUs
> [    0.467036] smpboot: Max logical packages: 1
> [    0.468035] smpboot: Total of 32 processors activated (153240.06 BogoMIPS)
>
> The function announce_cpu() is specifically testing for CPU #1 to output the
> "Booting SMP configuration" message.  In a Hyper-V guest, CPU #1 is the second
> SMT thread in a core, so it isn't started until all the even-numbered CPUs are
> started.

Ah. Didn't notice that because SMT siblings are usually enumerated after
all primary ones in ACPI.
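For illustration, the trivial fix could key the banner on a "first secondary CPU seen" flag instead of hard-coding CPU #1 (a minimal sketch under that assumption; the function and variable names here are hypothetical, not the actual amendment):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the announce logic: emit the "Booting SMP
 * configuration:" banner for whichever secondary CPU comes up first,
 * rather than testing for CPU #1, which on Hyper-V is an SMT sibling
 * that is brought up late.  Returns true exactly once, on the call
 * that should print the banner.
 */
static bool announce_banner_needed(void)
{
	static bool secondary_seen;

	if (secondary_seen)
		return false;
	secondary_seen = true;
	return true;
}
```

With this shape the banner lands before the first bracket of CPU numbers regardless of whether the platform enumerates siblings as <even, odd> pairs or after all primary threads.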

> I don't know if this cosmetic issue is worth fixing, but I thought I'd point it out.

That's trivial enough to fix. I'll amend the topmost patch before
posting V2.

Thanks for giving it a ride!

       tglx


From xen-devel-bounces@lists.xenproject.org Thu May 04 18:48:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 18:48:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529877.824802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pudzf-0001Qp-NR; Thu, 04 May 2023 18:47:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529877.824802; Thu, 04 May 2023 18:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pudzf-0001Qi-Kh; Thu, 04 May 2023 18:47:55 +0000
Received: by outflank-mailman (input) for mailman id 529877;
 Thu, 04 May 2023 18:47:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pudzd-0001QU-Kq; Thu, 04 May 2023 18:47:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pudzd-0006Sp-Bw; Thu, 04 May 2023 18:47:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pudzc-0006Wb-SI; Thu, 04 May 2023 18:47:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pudzc-0003ar-Rm; Thu, 04 May 2023 18:47:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Jd4tqMh94nxAFtcL3IuvD56wAwIQCkHfrRc5dvBrkkQ=; b=paJQuS9CKXMztABLjimo1lFZ6J
	2aT/wV84FHBQcysE3MNfnkmDMUD4fVJgQaAjp1DRddIxuiJv3+N4Bg1SfqL7eQTagHFsCShwm/iUF
	mSvhG3a+eUvfYCcYkoUt1YBPP6hn7+Hunp1vSspbrPUgZjPMdoUhh8NHeJqy8frmR4FQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180532-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180532: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4b02045f86d6aac8a617bf3f65f9cb2146630ce3
X-Osstest-Versions-That:
    ovmf=d6b42ed7ed1b0c4584097f0d76798cff74c96379
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 May 2023 18:47:52 +0000

flight 180532 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180532/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 4b02045f86d6aac8a617bf3f65f9cb2146630ce3
baseline version:
 ovmf                 d6b42ed7ed1b0c4584097f0d76798cff74c96379

Last test of basis   180508  2023-05-02 16:10:54 Z    2 days
Testing same since   180532  2023-05-04 14:40:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Oliver Steffen <osteffen@redhat.com>
  Patrik Berglund <patrik.berglund@arm.com>
  Pierre Gondois <pierre.gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d6b42ed7ed..4b02045f86  4b02045f86d6aac8a617bf3f65f9cb2146630ce3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529887.824843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDO-0004md-Om; Thu, 04 May 2023 19:02:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529887.824843; Thu, 04 May 2023 19:02:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDO-0004mU-LO; Thu, 04 May 2023 19:02:06 +0000
Received: by outflank-mailman (input) for mailman id 529887;
 Thu, 04 May 2023 19:02:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDN-00042k-Ol
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:05 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2854e7bf-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2854e7bf-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185936.480193035@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226923;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Eo225NP0lj/KI2zgHR0OIjiIzH0Ctds/GymtbPvmwBU=;
	b=LT/pOG5ZQQZ4Xkdn780R40Y7lueqSZR9oNFurLJq5jRV+Eu/Z/jjyhKXcNyYjcCaq3OxeZ
	D2BFgv0ZO2Yf12YjG235klE3IV29/RVeRocRuGfGAjifygXZaaoDJGgVt3+VOUiu6ohxFY
	p+SEyhTtLcmwBOdSZGNp6YT7r03nx+0ZjhTXH5U8LULRcg9p9u1Xn+PZrAfH0AGtejHGc2
	1WhTgb8mnl3tB5GijGDHQHQsbB76zY/aLri5dB0ODauWKsUjq6zqrj/wdK3i0TilxS51J9
	6cNAT2zbq0mtuwkLISVz29ZrTYOY6WH2fUJB84fiM72rjbd3hSxTAIA7/5hPnQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226923;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Eo225NP0lj/KI2zgHR0OIjiIzH0Ctds/GymtbPvmwBU=;
	b=c5fBzhVL9IXHiAIx3H5VnWToAYIP+2Zl7riSaIQdNQo8s6Prq7ZCnHaTOt6HvVUYoRs7oU
	ozupCaSo8Ov1mvAw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 03/38] x86/smpboot: Avoid pointless delay calibration if
 TSC is synchronized
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:03 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

When TSC is synchronized across sockets then there is no reason to
calibrate the delay for the first CPU which comes up on a socket.

Just reuse the existing calibration value.

This removes 100ms of pointlessly wasted time from CPU hotplug per socket.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/smpboot.c |   38 ++++++++++++++++++++++++--------------
 arch/x86/kernel/tsc.c     |   20 ++++++++++++++++----
 2 files changed, 40 insertions(+), 18 deletions(-)
---
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -178,10 +178,7 @@ static void smp_callin(void)
 	 */
 	apic_ap_setup();
 
-	/*
-	 * Save our processor parameters. Note: this information
-	 * is needed for clock calibration.
-	 */
+	/* Save our processor parameters. */
 	smp_store_cpu_info(cpuid);
 
 	/*
@@ -192,14 +189,6 @@ static void smp_callin(void)
 
 	ap_init_aperfmperf();
 
-	/*
-	 * Get our bogomips.
-	 * Update loops_per_jiffy in cpu_data. Previous call to
-	 * smp_store_cpu_info() stored a value that is close but not as
-	 * accurate as the value just calculated.
-	 */
-	calibrate_delay();
-	cpu_data(cpuid).loops_per_jiffy = loops_per_jiffy;
 	pr_debug("Stack at about %p\n", &cpuid);
 
 	wmb();
@@ -212,8 +201,24 @@ static void smp_callin(void)
 	cpumask_set_cpu(cpuid, cpu_callin_mask);
 }
 
+static void ap_calibrate_delay(void)
+{
+	/*
+	 * Calibrate the delay loop and update loops_per_jiffy in cpu_data.
+	 * smp_store_cpu_info() stored a value that is close but not as
+	 * accurate as the value just calculated.
+	 *
+	 * As this is invoked after the TSC synchronization check,
+	 * calibrate_delay_is_known() will skip the calibration routine
+	 * when TSC is synchronized across sockets.
+	 */
+	calibrate_delay();
+	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
+}
+
 static int cpu0_logical_apicid;
 static int enable_start_cpu0;
+
 /*
  * Activate a secondary processor.
  */
@@ -240,10 +245,15 @@ static void notrace start_secondary(void
 
 	/* otherwise gcc will move up smp_processor_id before the cpu_init */
 	barrier();
+	/* Check TSC synchronization with the control CPU: */
+	check_tsc_sync_target();
+
 	/*
-	 * Check TSC synchronization with the boot CPU:
+	 * Calibrate the delay loop after the TSC synchronization check.
+	 * This allows to skip the calibration when TSC is synchronized
+	 * across sockets.
 	 */
-	check_tsc_sync_target();
+	ap_calibrate_delay();
 
 	speculative_store_bypass_ht_init();
 
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1598,10 +1598,7 @@ void __init tsc_init(void)
 
 #ifdef CONFIG_SMP
 /*
- * If we have a constant TSC and are using the TSC for the delay loop,
- * we can skip clock calibration if another cpu in the same socket has already
- * been calibrated. This assumes that CONSTANT_TSC applies to all
- * cpus in the socket - this should be a safe assumption.
+ * Check whether existing calibration data can be reused.
  */
 unsigned long calibrate_delay_is_known(void)
 {
@@ -1609,6 +1606,21 @@ unsigned long calibrate_delay_is_known(v
 	int constant_tsc = cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC);
 	const struct cpumask *mask = topology_core_cpumask(cpu);
 
+	/*
+	 * If TSC has constant frequency and TSC is synchronized across
+	 * sockets then reuse CPU0 calibration.
+	 */
+	if (constant_tsc && !tsc_unstable)
+		return cpu_data(0).loops_per_jiffy;
+
+	/*
+	 * If TSC has constant frequency and TSC is not synchronized across
+	 * sockets and this is not the first CPU in the socket, then reuse
+	 * the calibration value of an already online CPU on that socket.
+	 *
+	 * This assumes that CONSTANT_TSC is consistent for all CPUs in a
+	 * socket.
+	 */
 	if (!constant_tsc || !mask)
 		return 0;
 



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529885.824818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDM-00045K-CC; Thu, 04 May 2023 19:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529885.824818; Thu, 04 May 2023 19:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDM-00044v-6c; Thu, 04 May 2023 19:02:04 +0000
Received: by outflank-mailman (input) for mailman id 529885;
 Thu, 04 May 2023 19:02:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDK-00042k-Py
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:02 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 25ed47b2-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25ed47b2-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185733.126511787@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226919;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc; bh=Owpn3VAzXYYF5pE4nW59O5YtNe6oATS3CJHH6pOEc+Q=;
	b=ydWEHvQKju6ua5QrgIvoNFizD1mp/ScmeGKoSzoDnG7Bbtt8/QnpnQ3+taJ6G0kL830h+i
	7ID/utgQdKCvUYsl6w6CO2W2gJuO8HL9LCU4D0vVfB6YxemlrHUeCt4ToRjPCaRBB5yHoD
	U1z1RXcQdP2U5KMvTNzHXSugPczS5MTUtGDtT8qHk2Yf+N4IEi6AoiUHeLjn63zurjgLOd
	4VlOYXYRkaR1ek5ywemLfh9oakl4kmjQRYWBDRl/UvRYBt4y2JOaZBadJTEhsdykMmsR/p
	RzfI1eYCgw7RPP+JveXpGF8/O3jsZY1WVFPEdOuBnlsQhWH5/KilpYnEwkhH7g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226919;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc; bh=Owpn3VAzXYYF5pE4nW59O5YtNe6oATS3CJHH6pOEc+Q=;
	b=t9YK5Y1XQbyMkzjkBl9J4dD0O18uiVWSVJxE2v987VhpllLfPD1+SKL0rJE7GDXI8icRF3
	WCaTcXYHj80XrzAw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 00/38] cpu/hotplug, x86: Reworked parallel CPU bringup
Date: Thu,  4 May 2023 21:01:58 +0200 (CEST)

Hi!

This is version 2 of the reworked parallel bringup series. Version 1 can be
found here:

   https://lore.kernel.org/lkml/20230414225551.858160935@linutronix.de

Background
----------

The reason why people are interested in parallel bringup is to shorten the
(kexec) reboot time of cloud servers to reduce the downtime of the VM
tenants.

The current fully serialized bringup does the following per AP:

    1) Prepare callbacks (allocate, initialize, create threads)
    2) Kick the AP alive (e.g. INIT/SIPI on x86)
    3) Wait for the AP to report alive state
    4) Let the AP continue through the atomic bringup
    5) Let the AP run the threaded bringup to full online state

There are two significant delays:

    #3 The time for an AP to report alive state in start_secondary() on x86
       has been measured in the range between 350us and 3.5ms depending on
       vendor and CPU type, BIOS microcode size etc.

    #4 The atomic bringup does the microcode update. This has been measured
       to take up to ~8ms on the primary threads depending on the microcode
       patch size to apply.

On a two socket SKL server with 56 cores (112 threads) the boot CPU spends
on current mainline about 800ms busy waiting for the APs to come up and
apply microcode. That's more than 80% of the actual onlining procedure.

By splitting the actual bringup mechanism into two parts, this can be
reduced to waiting only for the first AP to report alive; on a large
enough system the first AP is already waiting by the time the boot CPU
has finished waking up the last AP. That reduces the AP bringup time on
that SKL from ~800ms to ~80ms.
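As a back-of-the-envelope model (illustrative numbers only, not kernel code): with serialized bringup the boot CPU's busy-waits accumulate per AP, while with the split bringup all APs are kicked first and the waits overlap, so the total approaches a single per-AP wait:

```c
#include <assert.h>

/* Serialized bringup: the boot CPU busy-waits wait_ms for each AP in turn. */
static unsigned int serial_busywait_ms(unsigned int naps, unsigned int wait_ms)
{
	return naps * wait_ms;
}

/*
 * Split bringup: all APs are kicked before any waiting starts, so they
 * come up concurrently and the boot CPU's busy-wait shrinks to roughly
 * one per-AP wait, or to zero if kicking the remaining APs already took
 * longer than that wait.
 */
static unsigned int parallel_busywait_ms(unsigned int wait_ms,
					 unsigned int kick_total_ms)
{
	return wait_ms > kick_total_ms ? wait_ms - kick_total_ms : 0;
}
```

Plugging in a ~7ms per-AP wait for 111 APs lands in the same ballpark as the ~800ms measured on the SKL box above; the exact figures depend on vendor, microcode size, and topology.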

The actual gain varies wildly depending on the system, CPU, microcode patch
size and other factors.

The V1 cover letter has more details and a deep analysis.

Changes vs. V1:

  1) Switch APIC ID retrieval from CPUID to reading the APIC itself.

     This is required because CPUID based APIC ID retrieval can only
     provide the initial APIC ID, which might have been overruled by the
     firmware. Some AMD APUs come up with APIC ID = initial APIC ID + 0x10,
     so the APIC ID to CPU number lookup would fail miserably if based on
     CPUID. The only requirement is that the actual APIC IDs are consistent
     with the ACPI/MADT table.

  2) As a consequence of #1 parallel bootup support for SEV guest has been
     dropped.

     Reading the APIC ID in a SEV guest is done via RDMSR. That RDMSR is
     intercepted and raises #VC which cannot be handled at that point as
     there is no stack and no IDT. There is no GHCB protocol for RDMSR
     like there is for CPUID. Left as an exercise for SEV wizards.

  3) Address review comments from Brian and the fallout reported by the
     kernel robot

  4) Unbreak i386 which exploded when bringing up the secondary CPUs due to
     the unconditional load_ucode_ap() invocation in start_secondary(). That
     happens because on 32-bit load_ucode_ap() is invoked on the secondary
     CPUs from assembly code before paging is initialized and therefore
     uses physical addresses which are obviously invalid after paging is
     enabled.

  5) Small enhancements and comment updates.

  6) Rebased on Linux tree (1a5304fecee5)

The series applies on Linus' tree and is also available from git:

    git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git hotplug

Thanks,

	tglx
---
 Documentation/admin-guide/kernel-parameters.txt |   20 
 Documentation/core-api/cpu_hotplug.rst          |   13 
 arch/Kconfig                                    |   23 +
 arch/arm/Kconfig                                |    1 
 arch/arm/include/asm/smp.h                      |    2 
 arch/arm/kernel/smp.c                           |   18 
 arch/arm64/Kconfig                              |    1 
 arch/arm64/include/asm/smp.h                    |    2 
 arch/arm64/kernel/smp.c                         |   14 
 arch/csky/Kconfig                               |    1 
 arch/csky/include/asm/smp.h                     |    2 
 arch/csky/kernel/smp.c                          |    8 
 arch/mips/Kconfig                               |    1 
 arch/mips/cavium-octeon/smp.c                   |    1 
 arch/mips/include/asm/smp-ops.h                 |    1 
 arch/mips/kernel/smp-bmips.c                    |    1 
 arch/mips/kernel/smp-cps.c                      |   14 
 arch/mips/kernel/smp.c                          |    8 
 arch/mips/loongson64/smp.c                      |    1 
 arch/parisc/Kconfig                             |    1 
 arch/parisc/kernel/process.c                    |    4 
 arch/parisc/kernel/smp.c                        |    7 
 arch/riscv/Kconfig                              |    1 
 arch/riscv/include/asm/smp.h                    |    2 
 arch/riscv/kernel/cpu-hotplug.c                 |   14 
 arch/x86/Kconfig                                |   45 --
 arch/x86/include/asm/apic.h                     |    5 
 arch/x86/include/asm/apicdef.h                  |    5 
 arch/x86/include/asm/cpu.h                      |    5 
 arch/x86/include/asm/cpumask.h                  |    5 
 arch/x86/include/asm/processor.h                |    1 
 arch/x86/include/asm/realmode.h                 |    3 
 arch/x86/include/asm/smp.h                      |   24 -
 arch/x86/include/asm/topology.h                 |   23 -
 arch/x86/include/asm/tsc.h                      |    2 
 arch/x86/kernel/acpi/sleep.c                    |    9 
 arch/x86/kernel/apic/apic.c                     |   26 -
 arch/x86/kernel/callthunks.c                    |    4 
 arch/x86/kernel/cpu/amd.c                       |    2 
 arch/x86/kernel/cpu/cacheinfo.c                 |   21 
 arch/x86/kernel/cpu/common.c                    |   50 --
 arch/x86/kernel/cpu/topology.c                  |    3 
 arch/x86/kernel/head_32.S                       |   14 
 arch/x86/kernel/head_64.S                       |   87 +++
 arch/x86/kernel/sev.c                           |    2 
 arch/x86/kernel/smp.c                           |    3 
 arch/x86/kernel/smpboot.c                       |  526 ++++++++----------------
 arch/x86/kernel/topology.c                      |   98 ----
 arch/x86/kernel/tsc.c                           |   20 
 arch/x86/kernel/tsc_sync.c                      |   36 -
 arch/x86/power/cpu.c                            |   37 -
 arch/x86/realmode/init.c                        |    3 
 arch/x86/realmode/rm/trampoline_64.S            |   27 +
 arch/x86/xen/enlighten_hvm.c                    |   11 
 arch/x86/xen/smp_hvm.c                          |   16 
 arch/x86/xen/smp_pv.c                           |   56 +-
 drivers/acpi/processor_idle.c                   |    4 
 include/linux/cpu.h                             |    4 
 include/linux/cpuhotplug.h                      |   17 
 kernel/cpu.c                                    |  396 +++++++++++++++++-
 kernel/smp.c                                    |    2 
 kernel/smpboot.c                                |  163 -------
 62 files changed, 934 insertions(+), 982 deletions(-)


From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529886.824833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDN-0004Vw-Gb; Thu, 04 May 2023 19:02:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529886.824833; Thu, 04 May 2023 19:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDN-0004VM-Da; Thu, 04 May 2023 19:02:05 +0000
Received: by outflank-mailman (input) for mailman id 529886;
 Thu, 04 May 2023 19:02:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDM-00042k-2U
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:04 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 276b5184-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 276b5184-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185936.424138296@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226922;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=waVaB+iOXpEeeQbnpvRruizzYbSaWW2RRSBQj1T/hXA=;
	b=fL/KXBGwhDgZ0aNNihOGlOrNlSV1oXSPmnek3reTOPzGU26K9/7I6++pxRi1WwqUlxPjOt
	5Tyij4lipwiE5wx00Ug/OlbZuQGzLQ5bz0yPSCT5LRqx0FtA/i9SOYFEBjjKXXhsEYmcX0
	Syt5stMGuEr0o0+HSblbMnvWZeQN7TaTrhA7dHR9yfdBmu3FPAPBUqggwPcIQGD3m48Bt5
	zOf6iElxOZzofzp19LMFZlMc7WEm/MIBFclvIFumjzIiZLJwDWozF9bZ+XBECVGvq7VpEs
	oWtzcnhMZlsuZosoXBPM96pPurYq5mz+wcdyir0ZGlT+xHDklreyFq9FgGAJog==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226922;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=waVaB+iOXpEeeQbnpvRruizzYbSaWW2RRSBQj1T/hXA=;
	b=FgatqUwh+JsOxmHUn9Bvq7DTwB3Lr8g2wziGsZ2rOwgPZthkB6nTAdx3egi9uzi5HZxDkx
	9oTTBrbEEnqsEmDw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 02/38] cpu/hotplug: Mark arch_disable_smp_support() and
 bringup_nonboot_cpus() __init
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:01 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Both functions are only used during boot, so there is no point in keeping
them around afterwards. Mark them __init so the memory can be reclaimed
once initialization is done.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/smpboot.c |    4 ++--
 kernel/cpu.c              |    2 +-
 kernel/smp.c              |    2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)
---
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1269,9 +1269,9 @@ int native_cpu_up(unsigned int cpu, stru
 }
 
 /**
- * arch_disable_smp_support() - disables SMP support for x86 at runtime
+ * arch_disable_smp_support() - Disables SMP support for x86 at boottime
  */
-void arch_disable_smp_support(void)
+void __init arch_disable_smp_support(void)
 {
 	disable_ioapic_support();
 }
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1502,7 +1502,7 @@ int bringup_hibernate_cpu(unsigned int s
 	return 0;
 }
 
-void bringup_nonboot_cpus(unsigned int setup_max_cpus)
+void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
 {
 	unsigned int cpu;
 
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -892,7 +892,7 @@ EXPORT_SYMBOL(setup_max_cpus);
  * SMP mode to <NUM>.
  */
 
-void __weak arch_disable_smp_support(void) { }
+void __weak __init arch_disable_smp_support(void) { }
 
 static int __init nosmp(char *str)
 {



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529884.824813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDM-000432-2q; Thu, 04 May 2023 19:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529884.824813; Thu, 04 May 2023 19:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDL-00042v-Vs; Thu, 04 May 2023 19:02:03 +0000
Received: by outflank-mailman (input) for mailman id 529884;
 Thu, 04 May 2023 19:02:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDK-00042j-Fc
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:02 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 268624e9-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 268624e9-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185936.367031787@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226920;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Mxn87t9DPEW945il616r0H0SsMXLyUAeJz1AKeylD8A=;
	b=gN6osUOUj5A4g4ikKpXMxZ7xRrn0bkpwHrNdWYplv0k7rI8gYfItE27E6osNTA/nV1mrk6
	WWlzivAwHb3vuxBg3D/Ns21CditKMkPPd9EuijqE+Nl5O62KUwKu3kPrFtGxrKeH/ZJJrC
	NGZ2k4dOGaPPd/q3/QBB7j+SwZjA+XWe0JSQ1Pjv2a+MTO268fjYE3GMmFpYdhCRR1vFB4
	W35O70R3dhw4wdItVUsrhBSyeS38/14353iSux461nIW3VIT8Vl8zXmiSxiD/GMMsuMkzg
	wRMAaeVRVTGibV6maAbKj32s9VnEdmCl40bZStr+bAA0dwvsHXYOC5oLT4h4Jw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226920;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Mxn87t9DPEW945il616r0H0SsMXLyUAeJz1AKeylD8A=;
	b=11ibvq89apUCBbI307f/UhkVNeawUcpQ0C+FqbsH6snqLrHmocC2FzctyE3MrTYESrxN+r
	cfzlPdKVby530BCw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 01/38] x86/smpboot: Cleanup topology_phys_to_logical_pkg()/die()
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:00 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Make topology_phys_to_logical_die() static, as it is only used in
smpboot.c, and fix up the kernel-doc warnings for both functions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/topology.h |    3 ---
 arch/x86/kernel/smpboot.c       |   10 ++++++----
 2 files changed, 6 insertions(+), 7 deletions(-)
---
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -139,7 +139,6 @@ static inline int topology_max_smt_threa
 int topology_update_package_map(unsigned int apicid, unsigned int cpu);
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
-int topology_phys_to_logical_die(unsigned int die, unsigned int cpu);
 bool topology_is_primary_thread(unsigned int cpu);
 bool topology_smt_supported(void);
 #else
@@ -149,8 +148,6 @@ topology_update_package_map(unsigned int
 static inline int
 topology_update_die_map(unsigned int dieid, unsigned int cpu) { return 0; }
 static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
-static inline int topology_phys_to_logical_die(unsigned int die,
-		unsigned int cpu) { return 0; }
 static inline int topology_max_die_per_package(void) { return 1; }
 static inline int topology_max_smt_threads(void) { return 1; }
 static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -288,6 +288,7 @@ bool topology_smt_supported(void)
 
 /**
  * topology_phys_to_logical_pkg - Map a physical package id to a logical
+ * @phys_pkg:	The physical package id to map
  *
  * Returns logical package id or -1 if not found
  */
@@ -304,15 +305,17 @@ int topology_phys_to_logical_pkg(unsigne
 	return -1;
 }
 EXPORT_SYMBOL(topology_phys_to_logical_pkg);
+
 /**
  * topology_phys_to_logical_die - Map a physical die id to logical
+ * @die_id:	The physical die id to map
+ * @cur_cpu:	The CPU for which the mapping is done
  *
  * Returns logical die id or -1 if not found
  */
-int topology_phys_to_logical_die(unsigned int die_id, unsigned int cur_cpu)
+static int topology_phys_to_logical_die(unsigned int die_id, unsigned int cur_cpu)
 {
-	int cpu;
-	int proc_id = cpu_data(cur_cpu).phys_proc_id;
+	int cpu, proc_id = cpu_data(cur_cpu).phys_proc_id;
 
 	for_each_possible_cpu(cpu) {
 		struct cpuinfo_x86 *c = &cpu_data(cpu);
@@ -323,7 +326,6 @@ int topology_phys_to_logical_die(unsigne
 	}
 	return -1;
 }
-EXPORT_SYMBOL(topology_phys_to_logical_die);
 
 /**
  * topology_update_package_map - Update the physical to logical package map



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529888.824853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDR-00054I-3v; Thu, 04 May 2023 19:02:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529888.824853; Thu, 04 May 2023 19:02:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDR-00054B-0y; Thu, 04 May 2023 19:02:09 +0000
Received: by outflank-mailman (input) for mailman id 529888;
 Thu, 04 May 2023 19:02:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDP-00042k-ES
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:07 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 294fb817-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 294fb817-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185936.536756506@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226925;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=4k8WG2dAJ3uFYn2Qtg5bZA2RHhZEaXZ+bewsd+BJr/I=;
	b=bBh0CVpZWL+6hsvPUZtXnSdOBPa/K5CfY4sntqqXgNMtxpi+d2yUs3ECpJ289P0ZQAUrXv
	k7Gw7YnHFskIzQ/7fPEMWNfUyqW1tXD0Y5j9mjC9fHiZESm2nV58xOjcYTcUtICW4PGtib
	L7qAfpVjhgaAOAD2zQ57omRWjLADm+xmVTWvdqyhIzFRzilnUByYjHcwGeBKFMcYVRiiaY
	VaFLAq4WMscxLSALxvyX5KYN3LoP2rAtooUlf6qigkW3PSIl7OwVv0Em5tfeehe2sTShbV
	Tm0yzzDkfXdyeVnTbNUMouES/vmiS2pcR636jskZtx14ouKlyWxaen2mjAF6Eg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226925;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=4k8WG2dAJ3uFYn2Qtg5bZA2RHhZEaXZ+bewsd+BJr/I=;
	b=zT9oEAybbsH/w7G2lAyNh7UEoHso+OvSBYaHY787eM4NlqSRBZnZMUdOCeyvsS0fDgQH3V
	RekIpVEMD5NisxDQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 04/38] x86/smpboot: Rename start_cpu0() to soft_restart_cpu()
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:04 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

start_cpu0() is used in the SEV play_dead() implementation to bring
offlined CPUs back online, but that has nothing to do with CPU0. Rename
it accordingly.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>

---
 arch/x86/include/asm/cpu.h   |    2 +-
 arch/x86/kernel/callthunks.c |    2 +-
 arch/x86/kernel/head_32.S    |   10 +++++-----
 arch/x86/kernel/head_64.S    |   10 +++++-----
 arch/x86/kernel/sev.c        |    2 +-
 5 files changed, 13 insertions(+), 13 deletions(-)
---
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -30,7 +30,7 @@ struct x86_cpu {
 #ifdef CONFIG_HOTPLUG_CPU
 extern int arch_register_cpu(int num);
 extern void arch_unregister_cpu(int);
-extern void start_cpu0(void);
+extern void soft_restart_cpu(void);
 #ifdef CONFIG_DEBUG_HOTPLUG_CPU0
 extern int _debug_hotplug_cpu(int cpu, int action);
 #endif
--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -134,7 +134,7 @@ static bool skip_addr(void *dest)
 	if (dest == ret_from_fork)
 		return true;
 #ifdef CONFIG_HOTPLUG_CPU
-	if (dest == start_cpu0)
+	if (dest == soft_restart_cpu)
 		return true;
 #endif
 #ifdef CONFIG_FUNCTION_TRACER
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -140,16 +140,16 @@ SYM_CODE_END(startup_32)
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
- * up already except stack. We just set up stack here. Then call
- * start_secondary().
+ * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
+ * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
+ * unplug. Everything is set up already except the stack.
  */
-SYM_FUNC_START(start_cpu0)
+SYM_FUNC_START(soft_restart_cpu)
 	movl initial_stack, %ecx
 	movl %ecx, %esp
 	call *(initial_code)
 1:	jmp 1b
-SYM_FUNC_END(start_cpu0)
+SYM_FUNC_END(soft_restart_cpu)
 #endif
 
 /*
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -377,11 +377,11 @@ SYM_CODE_END(secondary_startup_64)
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
- * up already except stack. We just set up stack here. Then call
- * start_secondary() via .Ljump_to_C_code.
+ * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
+ * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
+ * unplug. Everything is set up already except the stack.
  */
-SYM_CODE_START(start_cpu0)
+SYM_CODE_START(soft_restart_cpu)
 	ANNOTATE_NOENDBR
 	UNWIND_HINT_END_OF_STACK
 
@@ -390,7 +390,7 @@ SYM_CODE_START(start_cpu0)
 	movq	TASK_threadsp(%rcx), %rsp
 
 	jmp	.Ljump_to_C_code
-SYM_CODE_END(start_cpu0)
+SYM_CODE_END(soft_restart_cpu)
 #endif
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1328,7 +1328,7 @@ static void sev_es_play_dead(void)
 	 * If we get here, the VCPU was woken up again. Jump to CPU
 	 * startup code to get it back online.
 	 */
-	start_cpu0();
+	soft_restart_cpu();
 }
 #else  /* CONFIG_HOTPLUG_CPU */
 #define sev_es_play_dead	native_play_dead



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529889.824858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDR-00058F-Gz; Thu, 04 May 2023 19:02:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529889.824858; Thu, 04 May 2023 19:02:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDR-000581-B5; Thu, 04 May 2023 19:02:09 +0000
Received: by outflank-mailman (input) for mailman id 529889;
 Thu, 04 May 2023 19:02:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDQ-00042j-2o
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:08 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2a42af00-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:07 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a42af00-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185936.590281709@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226927;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IjEqVwuDXy6l11CZOhb2Q9Y7uBHaszP86r8c69YcU4c=;
	b=1TjCsPNIyVe0J97AnMZ9IZuq5klCTmb8uOqmiS6QP7ZxFmpOciabd4+mZXTpZt5PKaBC3+
	tShhthEpUYU1PR0O6G3TAKNFHt5nf1NpPdvBpi/KOT/ebUi2EI8FLEgU6k80VCWhJI6XJu
	rdlacFVoZAD33FbY1MiLPzbfOdt5caRuE6+hd9z2DCXlPIQ+LrOM0u1VlHpb9igmtMFSjX
	biJHT/ieDdtdnU2SskStM2vMbaj83W3ZNAkvtv4DHH1Q8FTO6wyJZvCKM2vsuaZPc/wXqB
	4zIAc45oyX3TzFS3xg5Y4mqLrWDKm1CH9YSWlqdnpwKJWQa8TDMH4rl3Gug5TQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226927;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IjEqVwuDXy6l11CZOhb2Q9Y7uBHaszP86r8c69YcU4c=;
	b=9diOKqSJbLdnSrqHxlovDTLpCZmCgBMmL7vKik0+pcztGx/FVE2OzspIn02s8YsczZddy6
	W6MOr2qwUqkfM0DA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 05/38] x86/topology: Remove CPU0 hotplug option
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:06 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

This was introduced together with commit e1c467e69040 ("x86, hotplug: Wake
up CPU0 via NMI instead of INIT, SIPI, SIPI") to eventually support
physical hotplug of CPU0:

 "We'll change this code in the future to wake up hard offlined CPU0 if
  real platform and request are available."

11 years later this has not happened, and physical hotplug of CPU0 is
still not officially supported. Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 Documentation/admin-guide/kernel-parameters.txt |   14 ---
 Documentation/core-api/cpu_hotplug.rst          |   13 ---
 arch/x86/Kconfig                                |   43 ----------
 arch/x86/include/asm/cpu.h                      |    3 ---
 arch/x86/kernel/topology.c                      |   98 ------------------------
 arch/x86/power/cpu.c                            |   37 ---------
 6 files changed, 6 insertions(+), 202 deletions(-)
---
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -818,20 +818,6 @@
 			Format:
 			<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
 
-	cpu0_hotplug	[X86] Turn on CPU0 hotplug feature when
-			CONFIG_BOOTPARAM_HOTPLUG_CPU0 is off.
-			Some features depend on CPU0. Known dependencies are:
-			1. Resume from suspend/hibernate depends on CPU0.
-			Suspend/hibernate will fail if CPU0 is offline and you
-			need to online CPU0 before suspend/hibernate.
-			2. PIC interrupts also depend on CPU0. CPU0 can't be
-			removed if a PIC interrupt is detected.
-			It's said poweroff/reboot may depend on CPU0 on some
-			machines although I haven't seen such issues so far
-			after CPU0 is offline on a few tested machines.
-			If the dependencies are under your control, you can
-			turn on cpu0_hotplug.
-
 	cpuidle.off=1	[CPU_IDLE]
 			disable the cpuidle sub-system
 
--- a/Documentation/core-api/cpu_hotplug.rst
+++ b/Documentation/core-api/cpu_hotplug.rst
@@ -127,17 +127,8 @@ Once the CPU is shutdown, it will be rem
  $ echo 1 > /sys/devices/system/cpu/cpu4/online
  smpboot: Booting Node 0 Processor 4 APIC 0x1
 
-The CPU is usable again. This should work on all CPUs. CPU0 is often special
-and excluded from CPU hotplug. On X86 the kernel option
-*CONFIG_BOOTPARAM_HOTPLUG_CPU0* has to be enabled in order to be able to
-shutdown CPU0. Alternatively the kernel command option *cpu0_hotplug* can be
-used. Some known dependencies of CPU0:
-
-* Resume from hibernate/suspend. Hibernate/suspend will fail if CPU0 is offline.
-* PIC interrupts. CPU0 can't be removed if a PIC interrupt is detected.
-
-Please let Fenghua Yu <fenghua.yu@intel.com> know if you find any dependencies
-on CPU0.
+The CPU is usable again. This should work on all CPUs, but CPU0 is often special
+and excluded from CPU hotplug.
 
 The CPU hotplug coordination
 ============================
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2305,49 +2305,6 @@ config HOTPLUG_CPU
 	def_bool y
 	depends on SMP
 
-config BOOTPARAM_HOTPLUG_CPU0
-	bool "Set default setting of cpu0_hotpluggable"
-	depends on HOTPLUG_CPU
-	help
-	  Set whether default state of cpu0_hotpluggable is on or off.
-
-	  Say Y here to enable CPU0 hotplug by default. If this switch
-	  is turned on, there is no need to give cpu0_hotplug kernel
-	  parameter and the CPU0 hotplug feature is enabled by default.
-
-	  Please note: there are two known CPU0 dependencies if you want
-	  to enable the CPU0 hotplug feature either by this switch or by
-	  cpu0_hotplug kernel parameter.
-
-	  First, resume from hibernate or suspend always starts from CPU0.
-	  So hibernate and suspend are prevented if CPU0 is offline.
-
-	  Second dependency is PIC interrupts always go to CPU0. CPU0 can not
-	  offline if any interrupt can not migrate out of CPU0. There may
-	  be other CPU0 dependencies.
-
-	  Please make sure the dependencies are under your control before
-	  you enable this feature.
-
-	  Say N if you don't want to enable CPU0 hotplug feature by default.
-	  You still can enable the CPU0 hotplug feature at boot by kernel
-	  parameter cpu0_hotplug.
-
-config DEBUG_HOTPLUG_CPU0
-	def_bool n
-	prompt "Debug CPU0 hotplug"
-	depends on HOTPLUG_CPU
-	help
-	  Enabling this option offlines CPU0 (if CPU0 can be offlined) as
-	  soon as possible and boots up userspace with CPU0 offlined. User
-	  can online CPU0 back after boot time.
-
-	  To debug CPU0 hotplug, you need to enable CPU0 offline/online
-	  feature by either turning on CONFIG_BOOTPARAM_HOTPLUG_CPU0 during
-	  compilation or giving cpu0_hotplug kernel parameter at boot.
-
-	  If unsure, say N.
-
 config COMPAT_VDSO
 	def_bool n
 	prompt "Disable the 32-bit vDSO (needed for glibc 2.3.3)"
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -31,9 +31,6 @@ struct x86_cpu {
 extern int arch_register_cpu(int num);
 extern void arch_unregister_cpu(int);
 extern void soft_restart_cpu(void);
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-extern int _debug_hotplug_cpu(int cpu, int action);
-#endif
 #endif
 
 extern void ap_init_aperfmperf(void);
--- a/arch/x86/kernel/topology.c
+++ b/arch/x86/kernel/topology.c
@@ -38,102 +38,12 @@
 static DEFINE_PER_CPU(struct x86_cpu, cpu_devices);
 
 #ifdef CONFIG_HOTPLUG_CPU
-
-#ifdef CONFIG_BOOTPARAM_HOTPLUG_CPU0
-static int cpu0_hotpluggable = 1;
-#else
-static int cpu0_hotpluggable;
-static int __init enable_cpu0_hotplug(char *str)
-{
-	cpu0_hotpluggable = 1;
-	return 1;
-}
-
-__setup("cpu0_hotplug", enable_cpu0_hotplug);
-#endif
-
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-/*
- * This function offlines a CPU as early as possible and allows userspace to
- * boot up without the CPU. The CPU can be onlined back by user after boot.
- *
- * This is only called for debugging CPU offline/online feature.
- */
-int _debug_hotplug_cpu(int cpu, int action)
-{
-	int ret;
-
-	if (!cpu_is_hotpluggable(cpu))
-		return -EINVAL;
-
-	switch (action) {
-	case 0:
-		ret = remove_cpu(cpu);
-		if (!ret)
-			pr_info("DEBUG_HOTPLUG_CPU0: CPU %u is now offline\n", cpu);
-		else
-			pr_debug("Can't offline CPU%d.\n", cpu);
-		break;
-	case 1:
-		ret = add_cpu(cpu);
-		if (ret)
-			pr_debug("Can't online CPU%d.\n", cpu);
-
-		break;
-	default:
-		ret = -EINVAL;
-	}
-
-	return ret;
-}
-
-static int __init debug_hotplug_cpu(void)
+int arch_register_cpu(int cpu)
 {
-	_debug_hotplug_cpu(0, 0);
-	return 0;
-}
-
-late_initcall_sync(debug_hotplug_cpu);
-#endif /* CONFIG_DEBUG_HOTPLUG_CPU0 */
-
-int arch_register_cpu(int num)
-{
-	struct cpuinfo_x86 *c = &cpu_data(num);
-
-	/*
-	 * Currently CPU0 is only hotpluggable on Intel platforms. Other
-	 * vendors can add hotplug support later.
-	 * Xen PV guests don't support CPU0 hotplug at all.
-	 */
-	if (c->x86_vendor != X86_VENDOR_INTEL ||
-	    cpu_feature_enabled(X86_FEATURE_XENPV))
-		cpu0_hotpluggable = 0;
-
-	/*
-	 * Two known BSP/CPU0 dependencies: Resume from suspend/hibernate
-	 * depends on BSP. PIC interrupts depend on BSP.
-	 *
-	 * If the BSP dependencies are under control, one can tell kernel to
-	 * enable BSP hotplug. This basically adds a control file and
-	 * one can attempt to offline BSP.
-	 */
-	if (num == 0 && cpu0_hotpluggable) {
-		unsigned int irq;
-		/*
-		 * We won't take down the boot processor on i386 if some
-		 * interrupts only are able to be serviced by the BSP in PIC.
-		 */
-		for_each_active_irq(irq) {
-			if (!IO_APIC_IRQ(irq) && irq_has_action(irq)) {
-				cpu0_hotpluggable = 0;
-				break;
-			}
-		}
-	}
-	if (num || cpu0_hotpluggable)
-		per_cpu(cpu_devices, num).cpu.hotpluggable = 1;
+	struct x86_cpu *xc = per_cpu_ptr(&cpu_devices, cpu);
 
-	return register_cpu(&per_cpu(cpu_devices, num).cpu, num);
+	xc->cpu.hotpluggable = cpu > 0;
+	return register_cpu(&xc->cpu, cpu);
 }
 EXPORT_SYMBOL(arch_register_cpu);
 
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -351,43 +351,6 @@ static int bsp_pm_callback(struct notifi
 	case PM_HIBERNATION_PREPARE:
 		ret = bsp_check();
 		break;
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-	case PM_RESTORE_PREPARE:
-		/*
-		 * When system resumes from hibernation, online CPU0 because
-		 * 1. it's required for resume and
-		 * 2. the CPU was online before hibernation
-		 */
-		if (!cpu_online(0))
-			_debug_hotplug_cpu(0, 1);
-		break;
-	case PM_POST_RESTORE:
-		/*
-		 * When a resume really happens, this code won't be called.
-		 *
-		 * This code is called only when user space hibernation software
-		 * prepares for snapshot device during boot time. So we just
-		 * call _debug_hotplug_cpu() to restore to CPU0's state prior to
-		 * preparing the snapshot device.
-		 *
-		 * This works for normal boot case in our CPU0 hotplug debug
-		 * mode, i.e. CPU0 is offline and user mode hibernation
-		 * software initializes during boot time.
-		 *
-		 * If CPU0 is online and user application accesses snapshot
-		 * device after boot time, this will offline CPU0 and user may
-		 * see different CPU0 state before and after accessing
-		 * the snapshot device. But hopefully this is not a case when
-		 * user debugging CPU0 hotplug. Even if users hit this case,
-		 * they can easily online CPU0 back.
-		 *
-		 * To simplify this debug code, we only consider normal boot
-		 * case. Otherwise we need to remember CPU0's state and restore
-		 * to that state and resolve racy conditions etc.
-		 */
-		_debug_hotplug_cpu(0, 0);
-		break;
-#endif
 	default:
 		break;
 	}



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:21 2023
Message-ID: <20230504185936.643128218@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 06/38] x86/smpboot: Remove the CPU0 hotplug kludge
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:08 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

This was introduced with commit e1c467e69040 ("x86, hotplug: Wake up CPU0
via NMI instead of INIT, SIPI, SIPI") to eventually support physical
hotplug of CPU0:

 "We'll change this code in the future to wake up hard offlined CPU0 if
  real platform and request are available."

Eleven years later this has not happened, and physical hotplug is still not
officially supported. Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/apic.h   |    1 
 arch/x86/include/asm/smp.h    |    1 
 arch/x86/kernel/smpboot.c     |  170 +++---------------------------------------
 drivers/acpi/processor_idle.c |    4 
 4 files changed, 14 insertions(+), 162 deletions(-)
---
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -377,7 +377,6 @@ extern struct apic *__apicdrivers[], *__
  * APIC functionality to boot other CPUs - only used on SMP:
  */
 #ifdef CONFIG_SMP
-extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
 extern int lapic_can_unplug_cpu(void);
 #endif
 
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -130,7 +130,6 @@ void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 int wbinvd_on_all_cpus(void);
-void cond_wakeup_cpu0(void);
 
 void native_smp_send_reschedule(int cpu);
 void native_send_call_func_ipi(const struct cpumask *mask);
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -216,9 +216,6 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
-static int cpu0_logical_apicid;
-static int enable_start_cpu0;
-
 /*
  * Activate a secondary processor.
  */
@@ -241,8 +238,6 @@ static void notrace start_secondary(void
 	x86_cpuinit.early_percpu_clock_init();
 	smp_callin();
 
-	enable_start_cpu0 = 0;
-
 	/* otherwise gcc will move up smp_processor_id before the cpu_init */
 	barrier();
 	/* Check TSC synchronization with the control CPU: */
@@ -410,7 +405,7 @@ void smp_store_cpu_info(int id)
 	c->cpu_index = id;
 	/*
 	 * During boot time, CPU0 has this setup already. Save the info when
-	 * bringing up AP or offlined CPU0.
+	 * bringing up an AP.
 	 */
 	identify_secondary_cpu(c);
 	c->initialized = true;
@@ -807,51 +802,14 @@ static void __init smp_quirk_init_udelay
 }
 
 /*
- * Poke the other CPU in the eye via NMI to wake it up. Remember that the normal
- * INIT, INIT, STARTUP sequence will reset the chip hard for us, and this
- * won't ... remember to clear down the APIC, etc later.
- */
-int
-wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip)
-{
-	u32 dm = apic->dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
-	unsigned long send_status, accept_status = 0;
-	int maxlvt;
-
-	/* Target chip */
-	/* Boot on the stack */
-	/* Kick the second */
-	apic_icr_write(APIC_DM_NMI | dm, apicid);
-
-	pr_debug("Waiting for send to finish...\n");
-	send_status = safe_apic_wait_icr_idle();
-
-	/*
-	 * Give the other CPU some time to accept the IPI.
-	 */
-	udelay(200);
-	if (APIC_INTEGRATED(boot_cpu_apic_version)) {
-		maxlvt = lapic_get_maxlvt();
-		if (maxlvt > 3)			/* Due to the Pentium erratum 3AP.  */
-			apic_write(APIC_ESR, 0);
-		accept_status = (apic_read(APIC_ESR) & 0xEF);
-	}
-	pr_debug("NMI sent\n");
-
-	if (send_status)
-		pr_err("APIC never delivered???\n");
-	if (accept_status)
-		pr_err("APIC delivery error (%lx)\n", accept_status);
-
-	return (send_status | accept_status);
-}
-
-static int
-wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
+ * Wake up AP by INIT, INIT, STARTUP sequence.
+ */
+static int wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
 {
 	unsigned long send_status = 0, accept_status = 0;
 	int maxlvt, num_starts, j;
 
+	preempt_disable();
 	maxlvt = lapic_get_maxlvt();
 
 	/*
@@ -957,6 +915,7 @@ wakeup_secondary_cpu_via_init(int phys_a
 	if (accept_status)
 		pr_err("APIC delivery error (%lx)\n", accept_status);
 
+	preempt_enable();
 	return (send_status | accept_status);
 }
 
@@ -997,67 +956,6 @@ static void announce_cpu(int cpu, int ap
 			node, cpu, apicid);
 }
 
-static int wakeup_cpu0_nmi(unsigned int cmd, struct pt_regs *regs)
-{
-	int cpu;
-
-	cpu = smp_processor_id();
-	if (cpu == 0 && !cpu_online(cpu) && enable_start_cpu0)
-		return NMI_HANDLED;
-
-	return NMI_DONE;
-}
-
-/*
- * Wake up AP by INIT, INIT, STARTUP sequence.
- *
- * Instead of waiting for STARTUP after INITs, BSP will execute the BIOS
- * boot-strap code which is not a desired behavior for waking up BSP. To
- * void the boot-strap code, wake up CPU0 by NMI instead.
- *
- * This works to wake up soft offlined CPU0 only. If CPU0 is hard offlined
- * (i.e. physically hot removed and then hot added), NMI won't wake it up.
- * We'll change this code in the future to wake up hard offlined CPU0 if
- * real platform and request are available.
- */
-static int
-wakeup_cpu_via_init_nmi(int cpu, unsigned long start_ip, int apicid,
-	       int *cpu0_nmi_registered)
-{
-	int id;
-	int boot_error;
-
-	preempt_disable();
-
-	/*
-	 * Wake up AP by INIT, INIT, STARTUP sequence.
-	 */
-	if (cpu) {
-		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
-		goto out;
-	}
-
-	/*
-	 * Wake up BSP by nmi.
-	 *
-	 * Register a NMI handler to help wake up CPU0.
-	 */
-	boot_error = register_nmi_handler(NMI_LOCAL,
-					  wakeup_cpu0_nmi, 0, "wake_cpu0");
-
-	if (!boot_error) {
-		enable_start_cpu0 = 1;
-		*cpu0_nmi_registered = 1;
-		id = apic->dest_mode_logical ? cpu0_logical_apicid : apicid;
-		boot_error = wakeup_secondary_cpu_via_nmi(id, start_ip);
-	}
-
-out:
-	preempt_enable();
-
-	return boot_error;
-}
-
 int common_cpu_up(unsigned int cpu, struct task_struct *idle)
 {
 	int ret;
@@ -1086,8 +984,7 @@ int common_cpu_up(unsigned int cpu, stru
  * Returns zero if CPU booted OK, else error code from
  * ->wakeup_secondary_cpu.
  */
-static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
-		       int *cpu0_nmi_registered)
+static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
 	/* start_ip had better be page-aligned! */
 	unsigned long start_ip = real_mode_header->trampoline_start;
@@ -1120,7 +1017,6 @@ static int do_boot_cpu(int apicid, int c
 	 * This grunge runs the startup process for
 	 * the targeted processor.
 	 */
-
 	if (x86_platform.legacy.warm_reset) {
 
 		pr_debug("Setting warm reset code and vector.\n");
@@ -1149,15 +1045,14 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use a method from the APIC driver if one defined, with wakeup
 	 *   straight to 64-bit mode preferred over wakeup to RM.
 	 * Otherwise,
-	 * - Use an INIT boot APIC message for APs or NMI for BSP.
+	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
 		boot_error = apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
 		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
 	else
-		boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
-						     cpu0_nmi_registered);
+		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
 
 	if (!boot_error) {
 		/*
@@ -1206,9 +1101,8 @@ static int do_boot_cpu(int apicid, int c
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
-	int cpu0_nmi_registered = 0;
 	unsigned long flags;
-	int err, ret = 0;
+	int err;
 
 	lockdep_assert_irqs_enabled();
 
@@ -1247,11 +1141,10 @@ int native_cpu_up(unsigned int cpu, stru
 	if (err)
 		return err;
 
-	err = do_boot_cpu(apicid, cpu, tidle, &cpu0_nmi_registered);
+	err = do_boot_cpu(apicid, cpu, tidle);
 	if (err) {
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-		ret = -EIO;
-		goto unreg_nmi;
+		return err;
 	}
 
 	/*
@@ -1267,15 +1160,7 @@ int native_cpu_up(unsigned int cpu, stru
 		touch_nmi_watchdog();
 	}
 
-unreg_nmi:
-	/*
-	 * Clean up the nmi handler. Do this after the callin and callout sync
-	 * to avoid impact of possible long unregister time.
-	 */
-	if (cpu0_nmi_registered)
-		unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");
-
-	return ret;
+	return 0;
 }
 
 /**
@@ -1373,14 +1258,6 @@ static void __init smp_cpu_index_default
 	}
 }
 
-static void __init smp_get_logical_apicid(void)
-{
-	if (x2apic_mode)
-		cpu0_logical_apicid = apic_read(APIC_LDR);
-	else
-		cpu0_logical_apicid = GET_APIC_LOGICAL_ID(apic_read(APIC_LDR));
-}
-
 void __init smp_prepare_cpus_common(void)
 {
 	unsigned int i;
@@ -1443,8 +1320,6 @@ void __init native_smp_prepare_cpus(unsi
 	/* Setup local timer */
 	x86_init.timers.setup_percpu_clockev();
 
-	smp_get_logical_apicid();
-
 	pr_info("CPU0: ");
 	print_cpu_info(&cpu_data(0));
 
@@ -1752,18 +1627,6 @@ void play_dead_common(void)
 	local_irq_disable();
 }
 
-/**
- * cond_wakeup_cpu0 - Wake up CPU0 if needed.
- *
- * If NMI wants to wake up CPU0, start CPU0.
- */
-void cond_wakeup_cpu0(void)
-{
-	if (smp_processor_id() == 0 && enable_start_cpu0)
-		start_cpu0();
-}
-EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
-
 /*
  * We need to flush the caches before going to sleep, lest we have
  * dirty data in our caches when we come back up.
@@ -1831,8 +1694,6 @@ static inline void mwait_play_dead(void)
 		__monitor(mwait_ptr, 0, 0);
 		mb();
 		__mwait(eax, 0);
-
-		cond_wakeup_cpu0();
 	}
 }
 
@@ -1841,11 +1702,8 @@ void __noreturn hlt_play_dead(void)
 	if (__this_cpu_read(cpu_info.x86) >= 4)
 		wbinvd();
 
-	while (1) {
+	while (1)
 		native_halt();
-
-		cond_wakeup_cpu0();
-	}
 }
 
 void native_play_dead(void)
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -597,10 +597,6 @@ static int acpi_idle_play_dead(struct cp
 			io_idle(cx->address);
 		} else
 			return -ENODEV;
-
-#if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU)
-		cond_wakeup_cpu0();
-#endif
 	}
 
 	/* Never reached */



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:23 2023
Message-ID: <20230504185936.696863260@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 07/38] x86/smpboot: Restrict soft_restart_cpu() to SEV
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:09 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the CPU0 hotplug cruft is gone, the only remaining user of
soft_restart_cpu() is AMD SEV.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>

---
 arch/x86/kernel/callthunks.c |    2 +-
 arch/x86/kernel/head_32.S    |   14 --------------
 arch/x86/kernel/head_64.S    |    2 +-
 3 files changed, 2 insertions(+), 16 deletions(-)
---
--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -133,7 +133,7 @@ static bool skip_addr(void *dest)
 	/* Accounts directly */
 	if (dest == ret_from_fork)
 		return true;
-#ifdef CONFIG_HOTPLUG_CPU
+#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_AMD_MEM_ENCRYPT)
 	if (dest == soft_restart_cpu)
 		return true;
 #endif
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -138,20 +138,6 @@ SYM_CODE_START(startup_32)
 	jmp .Ldefault_entry
 SYM_CODE_END(startup_32)
 
-#ifdef CONFIG_HOTPLUG_CPU
-/*
- * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
- * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
- * unplug. Everything is set up already except the stack.
- */
-SYM_FUNC_START(soft_restart_cpu)
-	movl initial_stack, %ecx
-	movl %ecx, %esp
-	call *(initial_code)
-1:	jmp 1b
-SYM_FUNC_END(soft_restart_cpu)
-#endif
-
 /*
  * Non-boot CPU entry point; entered from trampoline.S
  * We can't lgdt here, because lgdt itself uses a data segment, but
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -375,7 +375,7 @@ SYM_CODE_END(secondary_startup_64)
 #include "verify_cpu.S"
 #include "sev_verify_cbit.S"
 
-#ifdef CONFIG_HOTPLUG_CPU
+#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_AMD_MEM_ENCRYPT)
 /*
  * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
  * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:24 2023
Message-ID: <20230504185936.750150196@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch V2 08/38] x86/smpboot: Split up native_cpu_up() into separate
 phases and document them
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:11 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

There are four logical parts to what native_cpu_up() does on the BSP (or
on the controlling CPU for a later hotplug):

 1) Wake the AP by sending the INIT/SIPI/SIPI sequence.

 2) Wait for the AP to make it as far as wait_for_master_cpu() which
    sets that CPU's bit in cpu_initialized_mask, then sets the bit in
    cpu_callout_mask to let the AP proceed through cpu_init().

 3) Wait for the AP to finish cpu_init() and get as far as the
    smp_callin() call, which sets that CPU's bit in cpu_callin_mask.

 4) Perform the TSC synchronization and wait for the AP to actually
    mark itself online in cpu_online_mask.

In preparation for allowing these phases to operate in parallel on multiple
APs, split them out into separate functions and document the interactions
more clearly in both the BSP and AP code paths.

No functional change intended.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/smpboot.c |  187 +++++++++++++++++++++++++++++-----------------
 1 file changed, 121 insertions(+), 66 deletions(-)
---
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -193,6 +193,10 @@ static void smp_callin(void)
 
 	wmb();
 
+	/*
+	 * This runs the AP through all the cpuhp states to its target
+	 * state (CPUHP_ONLINE in the case of serial bringup).
+	 */
 	notify_cpu_starting(cpuid);
 
 	/*
@@ -233,14 +237,31 @@ static void notrace start_secondary(void
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 #endif
+	/*
+	 * Sync point with wait_cpu_initialized(). Before proceeding through
+	 * cpu_init(), the AP will call wait_for_master_cpu() which sets its
+	 * own bit in cpu_initialized_mask and then waits for the BSP to set
+	 * its bit in cpu_callout_mask to release it.
+	 */
 	cpu_init_secondary();
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
+
+	/*
+	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
+	 * but just sets the bit to let the controlling CPU (BSP) know that
+	 * it's got this far.
+	 */
 	smp_callin();
 
-	/* otherwise gcc will move up smp_processor_id before the cpu_init */
+	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
 	barrier();
-	/* Check TSC synchronization with the control CPU: */
+
+	/*
+	 * Check TSC synchronization with the control CPU, which will do
+	 * its part of this from wait_cpu_online(), making it an implicit
+	 * synchronization point.
+	 */
 	check_tsc_sync_target();
 
 	/*
@@ -259,6 +280,7 @@ static void notrace start_secondary(void
 	 * half valid vector space.
 	 */
 	lock_vector_lock();
+	/* Sync point with do_wait_cpu_online() */
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
@@ -981,17 +1003,13 @@ int common_cpu_up(unsigned int cpu, stru
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
- * Returns zero if CPU booted OK, else error code from
+ * Returns zero if startup was successfully sent, else error code from
  * ->wakeup_secondary_cpu.
  */
 static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
-	/* start_ip had better be page-aligned! */
 	unsigned long start_ip = real_mode_header->trampoline_start;
 
-	unsigned long boot_error = 0;
-	unsigned long timeout;
-
 #ifdef CONFIG_X86_64
 	/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
 	if (apic->wakeup_secondary_cpu_64)
@@ -1048,60 +1066,89 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
-		boot_error = apic->wakeup_secondary_cpu_64(apicid, start_ip);
+		return apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
-		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
-	else
-		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
+		return apic->wakeup_secondary_cpu(apicid, start_ip);
 
-	if (!boot_error) {
-		/*
-		 * Wait 10s total for first sign of life from AP
-		 */
-		boot_error = -1;
-		timeout = jiffies + 10*HZ;
-		while (time_before(jiffies, timeout)) {
-			if (cpumask_test_cpu(cpu, cpu_initialized_mask)) {
-				/*
-				 * Tell AP to proceed with initialization
-				 */
-				cpumask_set_cpu(cpu, cpu_callout_mask);
-				boot_error = 0;
-				break;
-			}
-			schedule();
-		}
-	}
+	return wakeup_secondary_cpu_via_init(apicid, start_ip);
+}
 
-	if (!boot_error) {
-		/*
-		 * Wait till AP completes initial initialization
-		 */
-		while (!cpumask_test_cpu(cpu, cpu_callin_mask)) {
-			/*
-			 * Allow other tasks to run while we wait for the
-			 * AP to come online. This also gives a chance
-			 * for the MTRR work(triggered by the AP coming online)
-			 * to be completed in the stop machine context.
-			 */
-			schedule();
-		}
-	}
+static int wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
+{
+	unsigned long timeout;
 
-	if (x86_platform.legacy.warm_reset) {
-		/*
-		 * Cleanup possible dangling ends...
-		 */
-		smpboot_restore_warm_reset_vector();
+	/*
+	 * Wait up to 10s for the CPU to report in.
+	 */
+	timeout = jiffies + 10*HZ;
+	while (time_before(jiffies, timeout)) {
+		if (cpumask_test_cpu(cpu, mask))
+			return 0;
+
+		schedule();
 	}
+	return -1;
+}
 
-	return boot_error;
+/*
+ * Bringup step two: Wait for the target AP to reach cpu_init_secondary()
+ * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
+ * to proceed.  The AP will then proceed past setting its 'callin' bit
+ * and end up waiting in check_tsc_sync_target() until we reach
+ * do_wait_cpu_online() to tend to it.
+ */
+static int wait_cpu_initialized(unsigned int cpu)
+{
+	/*
+	 * Wait for first sign of life from AP.
+	 */
+	if (wait_cpu_cpumask(cpu, cpu_initialized_mask))
+		return -1;
+
+	cpumask_set_cpu(cpu, cpu_callout_mask);
+	return 0;
 }
 
-int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+/*
+ * Bringup step three: Wait for the target AP to reach smp_callin().
+ * The AP is not waiting for us here so we don't need to parallelise
+ * this step. Not entirely clear why we care about this, since we just
+ * proceed directly to TSC synchronization which is the next sync
+ * point with the AP anyway.
+ */
+static void wait_cpu_callin(unsigned int cpu)
+{
+	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
+		schedule();
+}
+
+/*
+ * Bringup step four: Synchronize the TSC and wait for the target AP
+ * to reach set_cpu_online() in start_secondary().
+ */
+static void wait_cpu_online(unsigned int cpu)
 {
-	int apicid = apic->cpu_present_to_apicid(cpu);
 	unsigned long flags;
+
+	/*
+	 * Check TSC synchronization with the AP (keep irqs disabled
+	 * while doing so):
+	 */
+	local_irq_save(flags);
+	check_tsc_sync_source(cpu);
+	local_irq_restore(flags);
+
+	/*
+	 * Wait for the AP to mark itself online, so the core caller
+	 * can drop sparse_irq_lock.
+	 */
+	while (!cpu_online(cpu))
+		schedule();
+}
+
+static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
+{
+	int apicid = apic->cpu_present_to_apicid(cpu);
 	int err;
 
 	lockdep_assert_irqs_enabled();
@@ -1142,25 +1189,33 @@ int native_cpu_up(unsigned int cpu, stru
 		return err;
 
 	err = do_boot_cpu(apicid, cpu, tidle);
-	if (err) {
+	if (err)
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-		return err;
-	}
 
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
+	return err;
+}
 
-	while (!cpu_online(cpu)) {
-		cpu_relax();
-		touch_nmi_watchdog();
-	}
+int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	int ret;
 
-	return 0;
+	ret = native_kick_ap(cpu, tidle);
+	if (ret)
+		goto out;
+
+	ret = wait_cpu_initialized(cpu);
+	if (ret)
+		goto out;
+
+	wait_cpu_callin(cpu);
+	wait_cpu_online(cpu);
+
+out:
+	/* Cleanup possible dangling ends... */
+	if (x86_platform.legacy.warm_reset)
+		smpboot_restore_warm_reset_vector();
+
+	return ret;
 }
 
 /**



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529893.824902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDY-0006Vy-7L; Thu, 04 May 2023 19:02:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529893.824902; Thu, 04 May 2023 19:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDY-0006Vl-2N; Thu, 04 May 2023 19:02:16 +0000
Received: by outflank-mailman (input) for mailman id 529893;
 Thu, 04 May 2023 19:02:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDW-00042j-AX
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:14 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2e1282a7-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e1282a7-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185936.802783450@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226933;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=djhcX/6oz1v3ZZHjWNR6fK2ATXnp39FPHavEh8Z2jPk=;
	b=ga7wmIakplENkJi6T5NoX0Dl4GKg+ap6nn7fRMuH4AJ0N/j8tLX4hOYKXxI82FfjNe6bvd
	vJmMRr19EbOXhlvWVBPj00971oHJ7EVwTVpYtKkKm1bHOPAtWTEqef+PWJeugHX4N45lMG
	uoAym6vetzrOuFxrlyf44EwZaO13wrZDfo/XkIGx0LvkxSOhnQMBCiqIzIQDgAgSp3wxnj
	y26KQIzEJjp7eB7O+cYD+6cb4y96+ikG+vSFWeNtad9FTEpCRcUHe+gEoz7H09GJDNgB1J
	AKbp/YjUjDEWvORBc7uiaoZEw4Hkz37CASo4CL84FTmpzcTx9pDR5194rYnnxw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226933;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=djhcX/6oz1v3ZZHjWNR6fK2ATXnp39FPHavEh8Z2jPk=;
	b=9d54JiewPSGmVQNrhzbBmbqIIQ6FHVsRs8p39dYcszm+2PpRs6UAq3uJ7Lp1hIlyibfgPq
	eWLViIKxHt4DgPAQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 09/38] x86/smpboot: Get rid of cpu_init_secondary()
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:13 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The synchronization of the AP with the control CPU is an SMP boot problem
and has nothing to do with cpu_init().

Open code cpu_init_secondary() in start_secondary() and move
wait_for_master_cpu() into the SMP boot code.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/processor.h |    1 -
 arch/x86/kernel/cpu/common.c     |   27 ---------------------------
 arch/x86/kernel/smpboot.c        |   24 +++++++++++++++++++-----
 3 files changed, 19 insertions(+), 33 deletions(-)
---
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -551,7 +551,6 @@ extern void switch_gdt_and_percpu_base(i
 extern void load_direct_gdt(int);
 extern void load_fixmap_gdt(int);
 extern void cpu_init(void);
-extern void cpu_init_secondary(void);
 extern void cpu_init_exception_handling(void);
 extern void cr4_init(void);
 
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2123,19 +2123,6 @@ static void dbg_restore_debug_regs(void)
 #define dbg_restore_debug_regs()
 #endif /* ! CONFIG_KGDB */
 
-static void wait_for_master_cpu(int cpu)
-{
-#ifdef CONFIG_SMP
-	/*
-	 * wait for ACK from master CPU before continuing
-	 * with AP initialization
-	 */
-	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
-	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
-		cpu_relax();
-#endif
-}
-
 static inline void setup_getcpu(int cpu)
 {
 	unsigned long cpudata = vdso_encode_cpunode(cpu, early_cpu_to_node(cpu));
@@ -2239,8 +2226,6 @@ void cpu_init(void)
 	struct task_struct *cur = current;
 	int cpu = raw_smp_processor_id();
 
-	wait_for_master_cpu(cpu);
-
 	ucode_cpu_init(cpu);
 
 #ifdef CONFIG_NUMA
@@ -2293,18 +2278,6 @@ void cpu_init(void)
 	load_fixmap_gdt(cpu);
 }
 
-#ifdef CONFIG_SMP
-void cpu_init_secondary(void)
-{
-	/*
-	 * Relies on the BP having set-up the IDT tables, which are loaded
-	 * on this CPU in cpu_init_exception_handling().
-	 */
-	cpu_init_exception_handling();
-	cpu_init();
-}
-#endif
-
 #ifdef CONFIG_MICROCODE_LATE_LOADING
 /**
  * store_cpu_caps() - Store a snapshot of CPU capabilities
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -220,6 +220,17 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
+static void wait_for_master_cpu(int cpu)
+{
+	/*
+	 * Wait for release by control CPU before continuing with AP
+	 * initialization.
+	 */
+	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
+	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
+		cpu_relax();
+}
+
 /*
  * Activate a secondary processor.
  */
@@ -237,13 +248,16 @@ static void notrace start_secondary(void
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 #endif
+	cpu_init_exception_handling();
+
 	/*
-	 * Sync point with wait_cpu_initialized(). Before proceeding through
-	 * cpu_init(), the AP will call wait_for_master_cpu() which sets its
-	 * own bit in cpu_initialized_mask and then waits for the BSP to set
-	 * its bit in cpu_callout_mask to release it.
+	 * Sync point with wait_cpu_initialized(). Sets AP in
+	 * cpu_initialized_mask and then waits for the control CPU
+	 * to release it.
 	 */
-	cpu_init_secondary();
+	wait_for_master_cpu(raw_smp_processor_id());
+
+	cpu_init();
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529894.824913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDZ-0006nG-Jv; Thu, 04 May 2023 19:02:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529894.824913; Thu, 04 May 2023 19:02:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDZ-0006mW-EI; Thu, 04 May 2023 19:02:17 +0000
Received: by outflank-mailman (input) for mailman id 529894;
 Thu, 04 May 2023 19:02:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDY-00042j-2P
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:16 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2f2befae-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f2befae-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185936.858063589@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226935;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Oe/ZjwkRBSSl6siLO6072dd5g+eXgR8KPUDyURiw97E=;
	b=XFhnPlTMh9p61J4utErL2GrNxXeTPXLjb8cWdDUXCXr3KT5am1Ro8sgXd/oHEQe8xcfaKn
	XvBjK3bjPW274k7B3BA8GTqBl9J8B4hcaJ6fhMce4VcINARE+ROIpWv777g9FEinlrf3KP
	2BncinPttT1cKxKjc1GNl7EgBMoSP2hFeg6lIh3qLpAkyqMMGNy1faixInDunKcxvGKZ/2
	OqP65vuMXtHJ1FRwPGXBbsUOQZ6+3Am+8zuc5DTudqWBQfvITGZKgf6z6Jn8Uqd9+HB+x9
	+wGxFdwo/EPews1TdSiM3V3T23ps0A6doGUh4PAL3mLBHykinvz3pA2i2ONykA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226935;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Oe/ZjwkRBSSl6siLO6072dd5g+eXgR8KPUDyURiw97E=;
	b=2dY4x4BcncRyoBpkMaEwKJqbyCatsbUKyBYFOZIQcYW7WtnHHDD/v5wu7am6DZnlToQwCT
	FkpvVldlmNw8qgCw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 10/38] x86/cpu/cacheinfo: Remove cpu_callout_mask dependency
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:14 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

cpu_callout_mask is used for the stop machine based MTRR/PAT init.

In preparation for moving the BP/AP synchronization to the core hotplug
code, use a private CPU mask for cacheinfo and manage it in the
starting/dying hotplug state.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/cacheinfo.c |   21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)
---
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -39,6 +39,8 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t
 /* Shared L2 cache maps */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 
+static cpumask_var_t cpu_cacheinfo_mask;
+
 /* Kernel controls MTRR and/or PAT MSRs. */
 unsigned int memory_caching_control __ro_after_init;
 
@@ -1172,8 +1174,10 @@ void cache_bp_restore(void)
 		cache_cpu_init();
 }
 
-static int cache_ap_init(unsigned int cpu)
+static int cache_ap_online(unsigned int cpu)
 {
+	cpumask_set_cpu(cpu, cpu_cacheinfo_mask);
+
 	if (!memory_caching_control || get_cache_aps_delayed_init())
 		return 0;
 
@@ -1191,11 +1195,17 @@ static int cache_ap_init(unsigned int cp
 	 *      lock to prevent MTRR entry changes
 	 */
 	stop_machine_from_inactive_cpu(cache_rendezvous_handler, NULL,
-				       cpu_callout_mask);
+				       cpu_cacheinfo_mask);
 
 	return 0;
 }
 
+static int cache_ap_offline(unsigned int cpu)
+{
+	cpumask_clear_cpu(cpu, cpu_cacheinfo_mask);
+	return 0;
+}
+
 /*
  * Delayed cache initialization for all AP's
  */
@@ -1210,9 +1220,12 @@ void cache_aps_init(void)
 
 static int __init cache_ap_register(void)
 {
+	zalloc_cpumask_var(&cpu_cacheinfo_mask, GFP_KERNEL);
+	cpumask_set_cpu(smp_processor_id(), cpu_cacheinfo_mask);
+
 	cpuhp_setup_state_nocalls(CPUHP_AP_CACHECTRL_STARTING,
 				  "x86/cachectrl:starting",
-				  cache_ap_init, NULL);
+				  cache_ap_online, cache_ap_offline);
 	return 0;
 }
-core_initcall(cache_ap_register);
+early_initcall(cache_ap_register);



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529895.824923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDc-0007EZ-DL; Thu, 04 May 2023 19:02:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529895.824923; Thu, 04 May 2023 19:02:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDc-0007E2-66; Thu, 04 May 2023 19:02:20 +0000
Received: by outflank-mailman (input) for mailman id 529895;
 Thu, 04 May 2023 19:02:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDa-00042k-Ky
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:18 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 30037478-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30037478-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185936.917560748@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226936;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=7w0pkJKfi5BfzTn7YraeCXszLQVgaQibx9fCVrsyYLw=;
	b=KF4OYjBu/WAVy3JeYBlPGuO/psPpP6BNMkGwUP4hVxM+/D+l51OxcfTtr1678q7sAM86+l
	pkSbimwtfW+4xdLulA4/Enq9nqbfEKxZaT/IcZ6t/X1KbwGa9K46/CqLa2h1H1kz8voODT
	y4AmfcuAbs0leulnqDLnh1gIEYzNbs3Hpt0le/7AsOwR44b/CdgLPyNGYtpoteTh4TSBz6
	xBxTc2mzXrBGxYo6rlw1ZlvxoGPyU9ZhtpO28GI1g1d9zu8k+2yJftFOirkgMuWcP7q1P8
	AsSeoEHmtH0f/Ry8sbvPf/V1KSj5/DXRuhM2P4J01rZzTyifP9FYxFi1Q1uu8g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226936;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=7w0pkJKfi5BfzTn7YraeCXszLQVgaQibx9fCVrsyYLw=;
	b=AOLL7N8FlgTgX4Vpe4Ih6WqHRrBw5YfnEZjwda6Z8s/doyNG0e8UyHLJJ/7vaMaivGQ2TR
	0D5J6mUQhC4iKrBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 11/38] x86/smpboot: Move synchronization masks to SMP boot code
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:16 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The usage is in smpboot.c and not in the CPU initialization code.

The XEN_PV usage of cpu_callout_mask is obsolete as cpu_init() no longer
waits and cacheinfo has its own CPU mask now, so cpu_callout_mask can be
made static too.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/cpumask.h |    5 -----
 arch/x86/kernel/cpu/common.c   |   17 -----------------
 arch/x86/kernel/smpboot.c      |   16 ++++++++++++++++
 arch/x86/xen/smp_pv.c          |    3 ---
 4 files changed, 16 insertions(+), 25 deletions(-)
---
--- a/arch/x86/include/asm/cpumask.h
+++ b/arch/x86/include/asm/cpumask.h
@@ -4,11 +4,6 @@
 #ifndef __ASSEMBLY__
 #include <linux/cpumask.h>
 
-extern cpumask_var_t cpu_callin_mask;
-extern cpumask_var_t cpu_callout_mask;
-extern cpumask_var_t cpu_initialized_mask;
-extern cpumask_var_t cpu_sibling_setup_mask;
-
 extern void setup_cpu_local_masks(void);
 
 /*
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -67,14 +67,6 @@
 
 u32 elf_hwcap2 __read_mostly;
 
-/* all of these masks are initialized in setup_cpu_local_masks() */
-cpumask_var_t cpu_initialized_mask;
-cpumask_var_t cpu_callout_mask;
-cpumask_var_t cpu_callin_mask;
-
-/* representing cpus for which sibling maps can be computed */
-cpumask_var_t cpu_sibling_setup_mask;
-
 /* Number of siblings per CPU package */
 int smp_num_siblings = 1;
 EXPORT_SYMBOL(smp_num_siblings);
@@ -169,15 +161,6 @@ static void ppin_init(struct cpuinfo_x86
 	clear_cpu_cap(c, info->feature);
 }
 
-/* correctly size the local cpu masks */
-void __init setup_cpu_local_masks(void)
-{
-	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callin_mask);
-	alloc_bootmem_cpumask_var(&cpu_callout_mask);
-	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
-}
-
 static void default_init(struct cpuinfo_x86 *c)
 {
 #ifdef CONFIG_X86_64
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,13 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
+/* All of these masks are initialized in setup_cpu_local_masks() */
+static cpumask_var_t cpu_initialized_mask;
+static cpumask_var_t cpu_callout_mask;
+static cpumask_var_t cpu_callin_mask;
+/* Representing CPUs for which sibling maps can be computed */
+static cpumask_var_t cpu_sibling_setup_mask;
+
 /* Logical package management. We might want to allocate that dynamically */
 unsigned int __max_logical_packages __read_mostly;
 EXPORT_SYMBOL(__max_logical_packages);
@@ -1548,6 +1555,15 @@ early_param("possible_cpus", _setup_poss
 		set_cpu_possible(i, true);
 }
 
+/* correctly size the local cpu masks */
+void __init setup_cpu_local_masks(void)
+{
+	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
+	alloc_bootmem_cpumask_var(&cpu_callin_mask);
+	alloc_bootmem_cpumask_var(&cpu_callout_mask);
+	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
+}
+
 #ifdef CONFIG_HOTPLUG_CPU
 
 /* Recompute SMT state for all CPUs on offline */
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -254,15 +254,12 @@ cpu_initialize_context(unsigned int cpu,
 	struct desc_struct *gdt;
 	unsigned long gdt_mfn;
 
-	/* used to tell cpu_init() that it can proceed with initialization */
-	cpumask_set_cpu(cpu, cpu_callout_mask);
 	if (cpumask_test_and_set_cpu(cpu, xen_cpu_initialized_map))
 		return 0;
 
 	ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
 	if (ctxt == NULL) {
 		cpumask_clear_cpu(cpu, xen_cpu_initialized_map);
-		cpumask_clear_cpu(cpu, cpu_callout_mask);
 		return -ENOMEM;
 	}
 



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529896.824933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDe-0007g9-SP; Thu, 04 May 2023 19:02:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529896.824933; Thu, 04 May 2023 19:02:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDe-0007ft-Mh; Thu, 04 May 2023 19:02:22 +0000
Received: by outflank-mailman (input) for mailman id 529896;
 Thu, 04 May 2023 19:02:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDc-00042k-Ar
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:20 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 30fe1cf7-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30fe1cf7-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185936.974986973@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226938;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=RWqZkjYspaeM2luDJ1kekjLX+eDwzhj89fWzaE+vcuc=;
	b=TB5oV9Aq6dUI1CYTG3Mfudq6NtZlPsW1Oex7yGf2LM/kO8/BZ25DUu+2zHE2ckb9PInfKC
	ThIb3r9/3b15wBd6uRdROhA10S5AgTJxEM2bEGeorA2NaJWuHDmSYm2szq80TnIJGzO87r
	81wUI2m9b4yt/h5OEp4tiu/Eej4ESXtZF7nf623HNABJkB4ypUwiLWY/CryJlS2D49iF4g
	VpdQdELsDbtjD2eM41gRwO8hH6r1xMhcJOplan7xyugn92wquyXDsHqitGavw4pzelheUe
	iefDvimqKcNsabrSPDXx2VGoczs/C1VurUq3fWakhAv9b6MMDMg5hc8nUdPY/A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226938;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=RWqZkjYspaeM2luDJ1kekjLX+eDwzhj89fWzaE+vcuc=;
	b=iEvC6KRjnzhtqcLL+LMevDrfOV+QfKnfRTDjoCA4Yw0ERn9sWjSc3oID7BBkVgcY89imxw
	v8or+5vf+SaSUADQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 12/38] x86/smpboot: Make TSC synchronization function call based
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:17 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Spin-waiting on the control CPU until the AP reaches the TSC synchronization
code is simply a waste, especially in the case that no synchronization is
required.

As the synchronization has to run with interrupts disabled, the control CPU
part can just be done from an SMP function call. The upcoming AP issues that
call asynchronously, and only in the case that synchronization is required.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/tsc.h |    2 --
 arch/x86/kernel/smpboot.c  |   20 +++-----------------
 arch/x86/kernel/tsc_sync.c |   36 +++++++++++-------------------------
 3 files changed, 14 insertions(+), 44 deletions(-)
---
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -55,12 +55,10 @@ extern bool tsc_async_resets;
 #ifdef CONFIG_X86_TSC
 extern bool tsc_store_and_check_tsc_adjust(bool bootcpu);
 extern void tsc_verify_tsc_adjust(bool resume);
-extern void check_tsc_sync_source(int cpu);
 extern void check_tsc_sync_target(void);
 #else
 static inline bool tsc_store_and_check_tsc_adjust(bool bootcpu) { return false; }
 static inline void tsc_verify_tsc_adjust(bool resume) { }
-static inline void check_tsc_sync_source(int cpu) { }
 static inline void check_tsc_sync_target(void) { }
 #endif
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -278,11 +278,7 @@ static void notrace start_secondary(void
 	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
 	barrier();
 
-	/*
-	 * Check TSC synchronization with the control CPU, which will do
-	 * its part of this from wait_cpu_online(), making it an implicit
-	 * synchronization point.
-	 */
+	/* Check TSC synchronization with the control CPU. */
 	check_tsc_sync_target();
 
 	/*
@@ -1144,21 +1140,11 @@ static void wait_cpu_callin(unsigned int
 }
 
 /*
- * Bringup step four: Synchronize the TSC and wait for the target AP
- * to reach set_cpu_online() in start_secondary().
+ * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
 {
-	unsigned long flags;
-
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
-
 	/*
 	 * Wait for the AP to mark itself online, so the core caller
 	 * can drop sparse_irq_lock.
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -245,7 +245,6 @@ bool tsc_store_and_check_tsc_adjust(bool
  */
 static atomic_t start_count;
 static atomic_t stop_count;
-static atomic_t skip_test;
 static atomic_t test_runs;
 
 /*
@@ -344,21 +343,14 @@ static inline unsigned int loop_timeout(
 }
 
 /*
- * Source CPU calls into this - it waits for the freshly booted
- * target CPU to arrive and then starts the measurement:
+ * The freshly booted CPU initiates this via an async SMP function call.
  */
-void check_tsc_sync_source(int cpu)
+static void check_tsc_sync_source(void *__cpu)
 {
+	unsigned int cpu = (unsigned long)__cpu;
 	int cpus = 2;
 
 	/*
-	 * No need to check if we already know that the TSC is not
-	 * synchronized or if we have no TSC.
-	 */
-	if (unsynchronized_tsc())
-		return;
-
-	/*
 	 * Set the maximum number of test runs to
 	 *  1 if the CPU does not provide the TSC_ADJUST MSR
 	 *  3 if the MSR is available, so the target can try to adjust
@@ -368,16 +360,9 @@ void check_tsc_sync_source(int cpu)
 	else
 		atomic_set(&test_runs, 3);
 retry:
-	/*
-	 * Wait for the target to start or to skip the test:
-	 */
-	while (atomic_read(&start_count) != cpus - 1) {
-		if (atomic_read(&skip_test) > 0) {
-			atomic_set(&skip_test, 0);
-			return;
-		}
+	/* Wait for the target to start. */
+	while (atomic_read(&start_count) != cpus - 1)
 		cpu_relax();
-	}
 
 	/*
 	 * Trigger the target to continue into the measurement too:
@@ -397,14 +382,14 @@ void check_tsc_sync_source(int cpu)
 	if (!nr_warps) {
 		atomic_set(&test_runs, 0);
 
-		pr_debug("TSC synchronization [CPU#%d -> CPU#%d]: passed\n",
+		pr_debug("TSC synchronization [CPU#%d -> CPU#%u]: passed\n",
 			smp_processor_id(), cpu);
 
 	} else if (atomic_dec_and_test(&test_runs) || random_warps) {
 		/* Force it to 0 if random warps brought us here */
 		atomic_set(&test_runs, 0);
 
-		pr_warn("TSC synchronization [CPU#%d -> CPU#%d]:\n",
+		pr_warn("TSC synchronization [CPU#%d -> CPU#%u]:\n",
 			smp_processor_id(), cpu);
 		pr_warn("Measured %Ld cycles TSC warp between CPUs, "
 			"turning off TSC clock.\n", max_warp);
@@ -457,11 +442,12 @@ void check_tsc_sync_target(void)
 	 * SoCs the TSC is frequency synchronized, but still the TSC ADJUST
 	 * register might have been wreckaged by the BIOS..
 	 */
-	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable) {
-		atomic_inc(&skip_test);
+	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable)
 		return;
-	}
 
+	/* Kick the control CPU into the TSC synchronization function */
+	smp_call_function_single(cpumask_first(cpu_online_mask), check_tsc_sync_source,
+				 (unsigned long *)(unsigned long)cpu, 0);
 retry:
 	/*
 	 * Register this CPU's participation and wait for the



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529898.824939 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDf-0007or-Oy; Thu, 04 May 2023 19:02:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529898.824939; Thu, 04 May 2023 19:02:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDf-0007jd-3m; Thu, 04 May 2023 19:02:23 +0000
Received: by outflank-mailman (input) for mailman id 529898;
 Thu, 04 May 2023 19:02:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDc-00042j-S2
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:20 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 31f3ebd1-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31f3ebd1-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185937.031599435@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226940;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=kEfrtWr4PgKjoC3uLMIgq7wwpzWxMcfb80JJcMYPrNo=;
	b=PmeJbSUkoSGxnlVzLJXUvjH48dikWh6/q5JUoaP/Nq1/Ww69G3R8FGSMzpcyfgrbQmJWPS
	L1voUlN08pCvhmk6/QZrwPjAlPy1WtF3SmmFaso8csKdSwgSj+Xrh0yxgNvs9UWRGz4C7i
	363j0HgkKBjg4ZGOGH5SMTymlwml0+zJzHFJoBLLKJNEm7WITc1vHb3UDkvrXQbf+dNpB1
	VNDwuviXGLHURwt2oOpvvRZjYyIp2mbeTsBXGfdQ/vyCHUYkflxEUN3y3vqtAgWpyktb4q
	t022EVpGgFL77pRnNi56GuH4R9mqq/EtBm5ZisWdMifcZ+kMZTibAs3llShwcg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226940;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=kEfrtWr4PgKjoC3uLMIgq7wwpzWxMcfb80JJcMYPrNo=;
	b=dKl/1HBoUECGor2iJDs5A3fYWbTcWkykgPuLe6FZ3+pVZqFoQN8p5o4X0DvHU0A/AOKXEN
	mUmNYqpZZv5DVhDw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 13/38] x86/smpboot: Remove cpu_callin_mask
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:19 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that TSC synchronization is SMP function call based, there is no reason
to wait for the AP to set itself in cpu_callin_mask. The control CPU waits
for the AP to set itself in the online mask anyway.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/smpboot.c |   61 +++++++---------------------------------------
 1 file changed, 10 insertions(+), 51 deletions(-)
---
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -104,7 +104,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_info);
 /* All of these masks are initialized in setup_cpu_local_masks() */
 static cpumask_var_t cpu_initialized_mask;
 static cpumask_var_t cpu_callout_mask;
-static cpumask_var_t cpu_callin_mask;
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -167,21 +166,16 @@ static inline void smpboot_restore_warm_
  */
 static void smp_callin(void)
 {
-	int cpuid;
+	int cpuid = smp_processor_id();
 
 	/*
 	 * If waken up by an INIT in an 82489DX configuration
-	 * cpu_callout_mask guarantees we don't get here before
-	 * an INIT_deassert IPI reaches our local APIC, so it is
-	 * now safe to touch our local APIC.
-	 */
-	cpuid = smp_processor_id();
-
-	/*
-	 * the boot CPU has finished the init stage and is spinning
-	 * on callin_map until we finish. We are free to set up this
-	 * CPU, first the APIC. (this is probably redundant on most
-	 * boards)
+	 * cpu_callout_mask guarantees we don't get here before an
+	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
+	 * touch our local APIC.
+	 *
+	 * Set up this CPU, first the APIC, which is probably redundant on
+	 * most boards.
 	 */
 	apic_ap_setup();
 
@@ -192,7 +186,7 @@ static void smp_callin(void)
 	 * The topology information must be up to date before
 	 * calibrate_delay() and notify_cpu_starting().
 	 */
-	set_cpu_sibling_map(raw_smp_processor_id());
+	set_cpu_sibling_map(cpuid);
 
 	ap_init_aperfmperf();
 
@@ -205,11 +199,6 @@ static void smp_callin(void)
 	 * state (CPUHP_ONLINE in the case of serial bringup).
 	 */
 	notify_cpu_starting(cpuid);
-
-	/*
-	 * Allow the master to continue.
-	 */
-	cpumask_set_cpu(cpuid, cpu_callin_mask);
 }
 
 static void ap_calibrate_delay(void)
@@ -268,11 +257,6 @@ static void notrace start_secondary(void
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 
-	/*
-	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
-	 * but just sets the bit to let the controlling CPU (BSP) know that
-	 * it's got this far.
-	 */
 	smp_callin();
 
 	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
@@ -1112,7 +1096,7 @@ static int wait_cpu_cpumask(unsigned int
  * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
  * to proceed.  The AP will then proceed past setting its 'callin' bit
  * and end up waiting in check_tsc_sync_target() until we reach
- * do_wait_cpu_online() to tend to it.
+ * wait_cpu_online() to tend to it.
  */
 static int wait_cpu_initialized(unsigned int cpu)
 {
@@ -1127,20 +1111,7 @@ static int wait_cpu_initialized(unsigned
 }
 
 /*
- * Bringup step three: Wait for the target AP to reach smp_callin().
- * The AP is not waiting for us here so we don't need to parallelise
- * this step. Not entirely clear why we care about this, since we just
- * proceed directly to TSC synchronization which is the next sync
- * point with the AP anyway.
- */
-static void wait_cpu_callin(unsigned int cpu)
-{
-	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
-		schedule();
-}
-
-/*
- * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * Bringup step three: Wait for the target AP to reach set_cpu_online() in
  * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
@@ -1170,14 +1141,6 @@ static int native_kick_ap(unsigned int c
 	}
 
 	/*
-	 * Already booted CPU?
-	 */
-	if (cpumask_test_cpu(cpu, cpu_callin_mask)) {
-		pr_debug("do_boot_cpu %d Already started\n", cpu);
-		return -ENOSYS;
-	}
-
-	/*
 	 * Save current MTRR state in case it was changed since early boot
 	 * (e.g. by the ACPI SMI) to initialize new CPUs with MTRRs in sync:
 	 */
@@ -1214,7 +1177,6 @@ int native_cpu_up(unsigned int cpu, stru
 	if (ret)
 		goto out;
 
-	wait_cpu_callin(cpu);
 	wait_cpu_online(cpu);
 
 out:
@@ -1330,7 +1292,6 @@ void __init smp_prepare_cpus_common(void
 	 * Setup boot CPU information
 	 */
 	smp_store_boot_cpu_info(); /* Final full version of the data */
-	cpumask_copy(cpu_callin_mask, cpumask_of(0));
 	mb();
 
 	for_each_possible_cpu(i) {
@@ -1545,7 +1506,6 @@ early_param("possible_cpus", _setup_poss
 void __init setup_cpu_local_masks(void)
 {
 	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callin_mask);
 	alloc_bootmem_cpumask_var(&cpu_callout_mask);
 	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
 }
@@ -1609,7 +1569,6 @@ static void remove_cpu_from_maps(int cpu
 {
 	set_cpu_online(cpu, false);
 	cpumask_clear_cpu(cpu, cpu_callout_mask);
-	cpumask_clear_cpu(cpu, cpu_callin_mask);
 	/* was set by cpu_init() */
 	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	numa_remove_cpu(cpu);



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529899.824942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDg-0007vg-7w; Thu, 04 May 2023 19:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529899.824942; Thu, 04 May 2023 19:02:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDf-0007u5-Qb; Thu, 04 May 2023 19:02:23 +0000
Received: by outflank-mailman (input) for mailman id 529899;
 Thu, 04 May 2023 19:02:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDe-00042j-9G
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:22 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32e2137c-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32e2137c-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185937.088861348@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226941;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=waR3DpyQ6DyvK8jEESlEb5rxqBYC2I7FeZ3H8ZtIrVQ=;
	b=HKYLcl9cklgr6YO5/NspFjC+D2CCt/ZYWzxxiOOGuCX7u/7IGOq81DQhVq9PvbBHY6Kpcr
	SxWHo0Ub70GUTAFkoXjKOB1ZyWdtlCAv2pBsav0oFkFgcMqJgiDoblcu2Hnt6D2hLbDJE7
	kec7wtFBfE1xoWI2cME3sJv8K73WrIvhYqzB8k1hCePY7f28tWiYPkzCL3lKJ1aAZjMvt+
	kXyU82XPa0OHMGsQ8hlC+TxCd6BOdr1gglOHT1fZhwBJGZCjFBU2Yxs7+XALa+aaWrMaCV
	mvw2zjNCImLQQhSEXScVL50CeJT6NF1GH6WIuPs8AGymg+ULRPU3wowu71D4Lw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226941;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=waR3DpyQ6DyvK8jEESlEb5rxqBYC2I7FeZ3H8ZtIrVQ=;
	b=XDnjEy7a3qUl/tWMxBUIj3PidEk2DRdxC5hulFnew5UcHpu5yRDRS/g5aolbe6p1RMLJCX
	rQwCSliTpoNYDfAw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 14/38] cpu/hotplug: Rework sparse_irq locking in bringup_cpu()
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:21 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

There is no harm in holding the sparse_irq lock until the upcoming CPU
completes in cpuhp_online_idle(). This allows the cpu_online()
synchronization to be removed from architecture code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 kernel/cpu.c |   28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)
---
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -558,7 +558,7 @@ static int cpuhp_kick_ap(int cpu, struct
 	return ret;
 }
 
-static int bringup_wait_for_ap(unsigned int cpu)
+static int bringup_wait_for_ap_online(unsigned int cpu)
 {
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 
@@ -579,15 +579,12 @@ static int bringup_wait_for_ap(unsigned
 	 */
 	if (!cpu_smt_allowed(cpu))
 		return -ECANCELED;
-
-	if (st->target <= CPUHP_AP_ONLINE_IDLE)
-		return 0;
-
-	return cpuhp_kick_ap(cpu, st, st->target);
+	return 0;
 }
 
 static int bringup_cpu(unsigned int cpu)
 {
+	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
@@ -606,10 +603,23 @@ static int bringup_cpu(unsigned int cpu)
 
 	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
-	irq_unlock_sparse();
 	if (ret)
-		return ret;
-	return bringup_wait_for_ap(cpu);
+		goto out_unlock;
+
+	ret = bringup_wait_for_ap_online(cpu);
+	if (ret)
+		goto out_unlock;
+
+	irq_unlock_sparse();
+
+	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+		return 0;
+
+	return cpuhp_kick_ap(cpu, st, st->target);
+
+out_unlock:
+	irq_unlock_sparse();
+	return ret;
 }
 
 static int finish_cpu(unsigned int cpu)



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529901.824958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDi-0000G2-Uy; Thu, 04 May 2023 19:02:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529901.824958; Thu, 04 May 2023 19:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDi-0000BF-BD; Thu, 04 May 2023 19:02:26 +0000
Received: by outflank-mailman (input) for mailman id 529901;
 Thu, 04 May 2023 19:02:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDf-00042j-Vf
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:23 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 33e3e5b7-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33e3e5b7-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185937.145906075@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226943;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=lJLvACyWoTrqBZOps4v6XbP7wlJ3d4Q7defz4e+9Zb8=;
	b=M3KgFVyzFkR4ccZ70dYavQ5/yqmJzYsNUe2w4AYa1zmqPGpOGfvnO2dLe+bqfdSYVxHNmg
	UzxBYDkuh0vO9FnTUv0U0boU0E1jTAuFuEPxGvjM1vNFxZkY4+YKEIRoL7asL6MLh7WY2k
	YHroxEZc/MyAi6EghJhWG+wOe//5AznxGGSpRWucK1h/BnTyT75KngcAYt8WZjiMtq166b
	mZZvUfn/tKwnN+lYf/c6UiDK6atXikbDlaDBZ4sMT8u+PU/SPukxmYsr0YlXnC0aFBugr6
	4bUgvijH50TXxyxKZ9qydiyNaQwVfzl3UXTSijqX2pY50ezKH3gbSbXO2Pf/Zg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226943;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=lJLvACyWoTrqBZOps4v6XbP7wlJ3d4Q7defz4e+9Zb8=;
	b=YUlyUuTQQYH6MyD4cTuFQt3gYiD8H4Dxlm3l2cVGkFebVjVj9D7krF98nosGppSH1sXEJ5
	189JeF0cp5TwVnAA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 15/38] x86/smpboot: Remove wait for cpu_online()
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:22 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the core code drops sparse_irq_lock after the idle thread has
synchronized, it is pointless to wait for the AP to mark itself online.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/smpboot.c |   26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)
---
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -281,7 +281,6 @@ static void notrace start_secondary(void
 	 * half valid vector space.
 	 */
 	lock_vector_lock();
-	/* Sync point with do_wait_cpu_online() */
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
@@ -1110,20 +1109,6 @@ static int wait_cpu_initialized(unsigned
 	return 0;
 }
 
-/*
- * Bringup step three: Wait for the target AP to reach set_cpu_online() in
- * start_secondary().
- */
-static void wait_cpu_online(unsigned int cpu)
-{
-	/*
-	 * Wait for the AP to mark itself online, so the core caller
-	 * can drop sparse_irq_lock.
-	 */
-	while (!cpu_online(cpu))
-		schedule();
-}
-
 static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
@@ -1170,16 +1155,9 @@ int native_cpu_up(unsigned int cpu, stru
 	int ret;
 
 	ret = native_kick_ap(cpu, tidle);
-	if (ret)
-		goto out;
-
-	ret = wait_cpu_initialized(cpu);
-	if (ret)
-		goto out;
-
-	wait_cpu_online(cpu);
+	if (!ret)
+		ret = wait_cpu_initialized(cpu);
 
-out:
 	/* Cleanup possible dangling ends... */
 	if (x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:02:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:02:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529907.824973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDq-0001n3-1E; Thu, 04 May 2023 19:02:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529907.824973; Thu, 04 May 2023 19:02:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueDp-0001mL-R0; Thu, 04 May 2023 19:02:33 +0000
Received: by outflank-mailman (input) for mailman id 529907;
 Thu, 04 May 2023 19:02:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDo-00042k-HN
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:32 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 37c968b4-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37c968b4-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185937.370022257@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226949;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=mat1riu4O6AbPNpAmsMwayxE5kfc1RXkToP7jRe4U7o=;
	b=xeQ5SCOYldQBRnXkZ526uAbaB3KT8qlhRpTTXdBUQxwAwpI82XKOL3tevipA06i7tc5ik1
	/VIoaKs7slgXkoJFe/mtHfTaoLddU3MEOBEllwMSIf2d4NknWjTSWrq+h0v75+R1qK5t3r
	8j7mXzu4SOfMAjxADMA1u0nZP2bTlZ1UBthabyfdIS8xqStZnpoA/DeBzH2/80q+vsgWeN
	FK2jo8/F0W9ovDdsnYpGGqEqqyguyf1eDfkchtVyJxwbc5saq03VkAo4AB8uim4+borl6r
	hmKF3zSrVJF46+/dF1reYrrlXBVWx9cYHY3XLgoVznWRXMJEKbMR9Q5+tqaJWQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226949;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=mat1riu4O6AbPNpAmsMwayxE5kfc1RXkToP7jRe4U7o=;
	b=rxfY29HsuqZlJnuRZ3LKPx2xJrN/rplKUe2pMIlsDATOFZW6zMzLCZaZHUFpor639GkOdB
	FN/fmxQ4O5AleqBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 19/38] x86/smpboot: Switch to hotplug core state synchronization
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:29 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The new AP state tracking and synchronization mechanism in the CPU hotplug
core code allows removing a fair amount of x86-specific code:

  1) The AP alive synchronization based on cpumasks

  2) The decision whether an AP can be brought up again
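The handshake that replaces those cpumasks can be sketched as a small
standalone userspace model (a hedged illustration: the enum, the state
variable and run_alive_handshake() are invented for this sketch; only the
cpuhp_ap_sync_alive() role mirrors the patch):

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

/*
 * Stand-alone model of the AP alive handshake (all names here are
 * illustrative, not the kernel's). The AP reports itself ALIVE and
 * spins until the control CPU releases it, which is the role the
 * cpu_initialized_mask/cpu_callout_mask pair used to play.
 */
enum { ST_DEAD, ST_ALIVE, ST_RELEASED, ST_ONLINE };

static _Atomic int sync_state = ST_DEAD;

/* AP side: what cpuhp_ap_sync_alive() does in the patch */
static void *ap_thread(void *arg)
{
	(void)arg;
	atomic_store(&sync_state, ST_ALIVE);
	while (atomic_load(&sync_state) != ST_RELEASED)
		;	/* cpu_relax() */
	atomic_store(&sync_state, ST_ONLINE);
	return NULL;
}

/* Control CPU side: wait for first sign of life, then release the AP */
static int run_alive_handshake(void)
{
	pthread_t ap;

	pthread_create(&ap, NULL, ap_thread, NULL);
	while (atomic_load(&sync_state) != ST_ALIVE)
		;
	atomic_store(&sync_state, ST_RELEASED);
	pthread_join(&ap, NULL);
	return atomic_load(&sync_state);
}
```

With the state machine owned by the hotplug core, the architecture only
has to provide the release/poll hooks instead of its own bitmasks.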

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: xen-devel@lists.xenproject.org
---
V2: Use for_each_online_cpu() - Brian
---
 arch/x86/Kconfig           |    1 
 arch/x86/include/asm/smp.h |    7 +
 arch/x86/kernel/smp.c      |    1 
 arch/x86/kernel/smpboot.c  |  161 ++++++++++-----------------------------------
 arch/x86/xen/smp_hvm.c     |   16 +---
 arch/x86/xen/smp_pv.c      |   39 ++++++----
 6 files changed, 73 insertions(+), 152 deletions(-)
---
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,6 +274,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_CORE_SYNC_FULL		if SMP
 	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -38,6 +38,8 @@ struct smp_ops {
 	void (*crash_stop_other_cpus)(void);
 	void (*smp_send_reschedule)(int cpu);
 
+	void (*cleanup_dead_cpu)(unsigned cpu);
+	void (*poll_sync_state)(void);
 	int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
@@ -90,7 +92,8 @@ static inline int __cpu_disable(void)
 
 static inline void __cpu_die(unsigned int cpu)
 {
-	smp_ops.cpu_die(cpu);
+	if (smp_ops.cpu_die)
+		smp_ops.cpu_die(cpu);
 }
 
 static inline void __noreturn play_dead(void)
@@ -123,8 +126,6 @@ void native_smp_cpus_done(unsigned int m
 int common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_disable(void);
-int common_cpu_die(unsigned int cpu);
-void native_cpu_die(unsigned int cpu);
 void __noreturn hlt_play_dead(void);
 void native_play_dead(void);
 void play_dead_common(void);
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -269,7 +269,6 @@ struct smp_ops smp_ops = {
 	.smp_send_reschedule	= native_smp_send_reschedule,
 
 	.cpu_up			= native_cpu_up,
-	.cpu_die		= native_cpu_die,
 	.cpu_disable		= native_cpu_disable,
 	.play_dead		= native_play_dead,
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -57,6 +57,7 @@
 #include <linux/pgtable.h>
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
+#include <linux/cpuhotplug.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -101,9 +102,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
-/* All of these masks are initialized in setup_cpu_local_masks() */
-static cpumask_var_t cpu_initialized_mask;
-static cpumask_var_t cpu_callout_mask;
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -169,8 +167,8 @@ static void smp_callin(void)
 	int cpuid = smp_processor_id();
 
 	/*
-	 * If waken up by an INIT in an 82489DX configuration
-	 * cpu_callout_mask guarantees we don't get here before an
+	 * If woken up by an INIT in an 82489DX configuration, the alive
+	 * synchronization guarantees we don't get here before an
 	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
 	 * touch our local APIC.
 	 *
@@ -216,17 +214,6 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
-static void wait_for_master_cpu(int cpu)
-{
-	/*
-	 * Wait for release by control CPU before continuing with AP
-	 * initialization.
-	 */
-	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
-	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
-		cpu_relax();
-}
-
 /*
  * Activate a secondary processor.
  */
@@ -247,11 +234,11 @@ static void notrace start_secondary(void
 	cpu_init_exception_handling();
 
 	/*
-	 * Sync point with wait_cpu_initialized(). Sets AP in
-	 * cpu_initialized_mask and then waits for the control CPU
-	 * to release it.
+	 * Synchronization point with the hotplug core. Sets the
+	 * synchronization state to ALIVE and waits for the control CPU to
+	 * release this CPU for further bringup.
 	 */
-	wait_for_master_cpu(raw_smp_processor_id());
+	cpuhp_ap_sync_alive();
 
 	cpu_init();
 	rcu_cpu_starting(raw_smp_processor_id());
@@ -284,7 +271,6 @@ static void notrace start_secondary(void
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
-	cpu_set_state_online(smp_processor_id());
 	x86_platform.nmi_init();
 
 	/* enable local interrupts */
@@ -735,9 +721,9 @@ static void impress_friends(void)
 	 * Allow the user to impress friends.
 	 */
 	pr_debug("Before bogomips\n");
-	for_each_possible_cpu(cpu)
-		if (cpumask_test_cpu(cpu, cpu_callout_mask))
-			bogosum += cpu_data(cpu).loops_per_jiffy;
+	for_each_online_cpu(cpu)
+		bogosum += cpu_data(cpu).loops_per_jiffy;
+
 	pr_info("Total of %d processors activated (%lu.%02lu BogoMIPS)\n",
 		num_online_cpus(),
 		bogosum/(500000/HZ),
@@ -1009,6 +995,7 @@ int common_cpu_up(unsigned int cpu, stru
 static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
 	unsigned long start_ip = real_mode_header->trampoline_start;
+	int ret;
 
 #ifdef CONFIG_X86_64
 	/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -1049,13 +1036,6 @@ static int do_boot_cpu(int apicid, int c
 		}
 	}
 
-	/*
-	 * AP might wait on cpu_callout_mask in cpu_init() with
-	 * cpu_initialized_mask set if previous attempt to online
-	 * it timed-out. Clear cpu_initialized_mask so that after
-	 * INIT/SIPI it could start with a clean state.
-	 */
-	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	smp_mb();
 
 	/*
@@ -1066,47 +1046,16 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
-		return apic->wakeup_secondary_cpu_64(apicid, start_ip);
+		ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
-		return apic->wakeup_secondary_cpu(apicid, start_ip);
-
-	return wakeup_secondary_cpu_via_init(apicid, start_ip);
-}
-
-static int wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
-{
-	unsigned long timeout;
-
-	/*
-	 * Wait up to 10s for the CPU to report in.
-	 */
-	timeout = jiffies + 10*HZ;
-	while (time_before(jiffies, timeout)) {
-		if (cpumask_test_cpu(cpu, mask))
-			return 0;
-
-		schedule();
-	}
-	return -1;
-}
-
-/*
- * Bringup step two: Wait for the target AP to reach cpu_init_secondary()
- * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
- * to proceed.  The AP will then proceed past setting its 'callin' bit
- * and end up waiting in check_tsc_sync_target() until we reach
- * wait_cpu_online() to tend to it.
- */
-static int wait_cpu_initialized(unsigned int cpu)
-{
-	/*
-	 * Wait for first sign of life from AP.
-	 */
-	if (wait_cpu_cpumask(cpu, cpu_initialized_mask))
-		return -1;
+		ret = apic->wakeup_secondary_cpu(apicid, start_ip);
+	else
+		ret = wakeup_secondary_cpu_via_init(apicid, start_ip);
 
-	cpumask_set_cpu(cpu, cpu_callout_mask);
-	return 0;
+	/* If the wakeup mechanism failed, cleanup the warm reset vector */
+	if (ret)
+		arch_cpuhp_cleanup_kick_cpu(cpu);
+	return ret;
 }
 
 static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
@@ -1131,11 +1080,6 @@ static int native_kick_ap(unsigned int c
 	 */
 	mtrr_save_state();
 
-	/* x86 CPUs take themselves offline, so delayed offline is OK. */
-	err = cpu_check_up_prepare(cpu);
-	if (err && err != -EBUSY)
-		return err;
-
 	/* the FPU context is blank, nobody can own it */
 	per_cpu(fpu_fpregs_owner_ctx, cpu) = NULL;
 
@@ -1152,17 +1096,29 @@ static int native_kick_ap(unsigned int c
 
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
-	int ret;
-
-	ret = native_kick_ap(cpu, tidle);
-	if (!ret)
-		ret = wait_cpu_initialized(cpu);
+	return native_kick_ap(cpu, tidle);
+}
 
+void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu)
+{
 	/* Cleanup possible dangling ends... */
-	if (x86_platform.legacy.warm_reset)
+	if (smp_ops.cpu_up == native_cpu_up && x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();
+}
 
-	return ret;
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
+	if (smp_ops.cleanup_dead_cpu)
+		smp_ops.cleanup_dead_cpu(cpu);
+
+	if (system_state == SYSTEM_RUNNING)
+		pr_info("CPU %u is now offline\n", cpu);
+}
+
+void arch_cpuhp_sync_state_poll(void)
+{
+	if (smp_ops.poll_sync_state)
+		smp_ops.poll_sync_state();
 }
 
 /**
@@ -1354,9 +1310,6 @@ void __init native_smp_prepare_boot_cpu(
 	if (!IS_ENABLED(CONFIG_SMP))
 		switch_gdt_and_percpu_base(me);
 
-	/* already set me in cpu_online_mask in boot_cpu_init() */
-	cpumask_set_cpu(me, cpu_callout_mask);
-	cpu_set_state_online(me);
 	native_pv_lock_init();
 }
 
@@ -1483,8 +1436,6 @@ early_param("possible_cpus", _setup_poss
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
-	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callout_mask);
 	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
 }
 
@@ -1546,9 +1497,6 @@ static void remove_siblinginfo(int cpu)
 static void remove_cpu_from_maps(int cpu)
 {
 	set_cpu_online(cpu, false);
-	cpumask_clear_cpu(cpu, cpu_callout_mask);
-	/* was set by cpu_init() */
-	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	numa_remove_cpu(cpu);
 }
 
@@ -1599,36 +1547,11 @@ int native_cpu_disable(void)
 	return 0;
 }
 
-int common_cpu_die(unsigned int cpu)
-{
-	int ret = 0;
-
-	/* We don't do anything here: idle task is faking death itself. */
-
-	/* They ack this in play_dead() by setting CPU_DEAD */
-	if (cpu_wait_death(cpu, 5)) {
-		if (system_state == SYSTEM_RUNNING)
-			pr_info("CPU %u is now offline\n", cpu);
-	} else {
-		pr_err("CPU %u didn't die...\n", cpu);
-		ret = -1;
-	}
-
-	return ret;
-}
-
-void native_cpu_die(unsigned int cpu)
-{
-	common_cpu_die(cpu);
-}
-
 void play_dead_common(void)
 {
 	idle_task_exit();
 
-	/* Ack it */
-	(void)cpu_report_death();
-
+	cpuhp_ap_report_dead();
 	/*
 	 * With physical CPU hotplug, we should halt the cpu
 	 */
@@ -1730,12 +1653,6 @@ int native_cpu_disable(void)
 	return -ENOSYS;
 }
 
-void native_cpu_die(unsigned int cpu)
-{
-	/* We said "no" in __cpu_disable */
-	BUG();
-}
-
 void native_play_dead(void)
 {
 	BUG();
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -55,18 +55,16 @@ static void __init xen_hvm_smp_prepare_c
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-static void xen_hvm_cpu_die(unsigned int cpu)
+static void xen_hvm_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (common_cpu_die(cpu) == 0) {
-		if (xen_have_vector_callback) {
-			xen_smp_intr_free(cpu);
-			xen_uninit_lock_cpu(cpu);
-			xen_teardown_timer(cpu);
-		}
+	if (xen_have_vector_callback) {
+		xen_smp_intr_free(cpu);
+		xen_uninit_lock_cpu(cpu);
+		xen_teardown_timer(cpu);
 	}
 }
 #else
-static void xen_hvm_cpu_die(unsigned int cpu)
+static void xen_hvm_cleanup_dead_cpu(unsigned int cpu)
 {
 	BUG();
 }
@@ -77,7 +75,7 @@ void __init xen_hvm_smp_init(void)
 	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
 	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
 	smp_ops.smp_cpus_done = xen_smp_cpus_done;
-	smp_ops.cpu_die = xen_hvm_cpu_die;
+	smp_ops.cleanup_dead_cpu = xen_hvm_cleanup_dead_cpu;
 
 	if (!xen_have_vector_callback) {
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -62,6 +62,7 @@ static void cpu_bringup(void)
 	int cpu;
 
 	cr4_init();
+	cpuhp_ap_sync_alive();
 	cpu_init();
 	touch_softlockup_watchdog();
 
@@ -83,7 +84,7 @@ static void cpu_bringup(void)
 
 	set_cpu_online(cpu, true);
 
-	cpu_set_state_online(cpu);  /* Implies full memory barrier. */
+	smp_mb();
 
 	/* We can take interrupts now: we're officially "up". */
 	local_irq_enable();
@@ -323,14 +324,6 @@ static int xen_pv_cpu_up(unsigned int cp
 
 	xen_setup_runstate_info(cpu);
 
-	/*
-	 * PV VCPUs are always successfully taken down (see 'while' loop
-	 * in xen_cpu_die()), so -EBUSY is an error.
-	 */
-	rc = cpu_check_up_prepare(cpu);
-	if (rc)
-		return rc;
-
 	/* make sure interrupts start blocked */
 	per_cpu(xen_vcpu, cpu)->evtchn_upcall_mask = 1;
 
@@ -349,6 +342,11 @@ static int xen_pv_cpu_up(unsigned int cp
 	return 0;
 }
 
+static void xen_pv_poll_sync_state(void)
+{
+	HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
+}
+
 #ifdef CONFIG_HOTPLUG_CPU
 static int xen_pv_cpu_disable(void)
 {
@@ -364,18 +362,18 @@ static int xen_pv_cpu_disable(void)
 
 static void xen_pv_cpu_die(unsigned int cpu)
 {
-	while (HYPERVISOR_vcpu_op(VCPUOP_is_up,
-				  xen_vcpu_nr(cpu), NULL)) {
+	while (HYPERVISOR_vcpu_op(VCPUOP_is_up, xen_vcpu_nr(cpu), NULL)) {
 		__set_current_state(TASK_UNINTERRUPTIBLE);
 		schedule_timeout(HZ/10);
 	}
+}
 
-	if (common_cpu_die(cpu) == 0) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-		xen_teardown_timer(cpu);
-		xen_pmu_finish(cpu);
-	}
+static void xen_pv_cleanup_dead_cpu(unsigned int cpu)
+{
+	xen_smp_intr_free(cpu);
+	xen_uninit_lock_cpu(cpu);
+	xen_teardown_timer(cpu);
+	xen_pmu_finish(cpu);
 }
 
 static void __noreturn xen_pv_play_dead(void) /* used only with HOTPLUG_CPU */
@@ -397,6 +395,11 @@ static void xen_pv_cpu_die(unsigned int
 	BUG();
 }
 
+static void xen_pv_cleanup_dead_cpu(unsigned int cpu)
+{
+	BUG();
+}
+
 static void __noreturn xen_pv_play_dead(void)
 {
 	BUG();
@@ -437,6 +440,8 @@ static const struct smp_ops xen_smp_ops
 
 	.cpu_up = xen_pv_cpu_up,
 	.cpu_die = xen_pv_cpu_die,
+	.cleanup_dead_cpu = xen_pv_cleanup_dead_cpu,
+	.poll_sync_state = xen_pv_poll_sync_state,
 	.cpu_disable = xen_pv_cpu_disable,
 	.play_dead = xen_pv_play_dead,
 



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529922.824982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKB-0005HU-Nw; Thu, 04 May 2023 19:09:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529922.824982; Thu, 04 May 2023 19:09:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKB-0005HN-LL; Thu, 04 May 2023 19:09:07 +0000
Received: by outflank-mailman (input) for mailman id 529922;
 Thu, 04 May 2023 19:09:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueED-00042k-9y
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:57 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 47244339-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47244339-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185938.232336513@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226975;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PaGuo8GAZ65ZbtTn7wzwCB2xmjp/8CcdVtBqqK3zJyY=;
	b=WBrmRYfVh8Hf7VAtaZM+773NG/l2NaZEQY2PbsMWpwgijTHlkaPZNZwTTqZqSePaJs28Tp
	huH2qX95r4oDUOSFSd9zA+6+RZv5lgr0xa0QdgDkEncw9LVXBvKPVIlHpBCwardNQlPmIT
	cajVqbzngFG+dnpn2I0UeA+RYWcEBwo1QkYYSYABAtHiWYiay4DXWTSDKMd3/FerfS7z8b
	xwX6otdMILDZ1YbCULvPsUARGQzswNTUUkZzB3iKqcysyV9JxG2dUxCWqjPi7FUYFLn/3T
	jkRxQ4LmxpZMkrdtxeZw1NAb25wiC+Tx1duuJ885D0xawxhq7zhP9DRXxAmkFA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226975;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PaGuo8GAZ65ZbtTn7wzwCB2xmjp/8CcdVtBqqK3zJyY=;
	b=G0tsDBn5b6jENP+Wz6pMSYtlj20+hRg4V3sQlU3KTcd+8sMa78JVCQNTfSLn2dU5TfjS5i
	VD5uznhB/0z5zkAQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 35/38] x86/apic: Save the APIC virtual base address
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:55 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

For parallel CPU bringup it's required to read the APIC ID in the low level
startup code. The virtual APIC base address is a constant because it's a
fix-mapped address. Exposing that constant, which is composed via macros, to
assembly code is non-trivial due to header inclusion hell.

Aside from that, it's constant only because of the vsyscall ABI
requirement. Once vsyscall is out of the picture the fixmap can be placed
at runtime.

Avoid header hell, stay flexible and store the address in a variable which
can be exposed to the low level startup code.
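The pattern can be sketched in a few lines (hedged sketch: FIXADDR_TOP,
FIX_APIC_SLOT and register_lapic_address_model() are invented stand-ins;
only apic_mmio_base is from the patch). The macro-composed constant is
captured once, at registration time, into a plain variable with a fixed
symbol that low level code can load:

```c
#include <stdint.h>

/*
 * Instead of exposing a macro-composed constant like APIC_BASE to
 * assembly, store it in a plain variable at registration time.
 * Assembly then references one symbol, and the value can later
 * become a runtime choice without touching the asm side.
 */
#define FIXADDR_TOP	0xffffe000UL
#define FIX_APIC_SLOT	3UL
#define APIC_BASE	(FIXADDR_TOP - (FIX_APIC_SLOT << 12))

unsigned long apic_mmio_base;	/* read by low level startup code */

static void register_lapic_address_model(void)
{
	apic_mmio_base = APIC_BASE;
}
```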

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/smp.h  |    1 +
 arch/x86/kernel/apic/apic.c |    4 ++++
 2 files changed, 5 insertions(+)
---
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -196,6 +196,7 @@ extern void nmi_selftest(void);
 #endif
 
 extern unsigned int smpboot_control;
+extern unsigned long apic_mmio_base;
 
 #endif /* !__ASSEMBLY__ */
 
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -101,6 +101,9 @@ static int apic_extnmi __ro_after_init =
  */
 static bool virt_ext_dest_id __ro_after_init;
 
+/* For parallel bootup. */
+unsigned long apic_mmio_base __ro_after_init;
+
 /*
  * Map cpu index to physical APIC ID
  */
@@ -2163,6 +2166,7 @@ void __init register_lapic_address(unsig
 
 	if (!x2apic_mode) {
 		set_fixmap_nocache(FIX_APIC_BASE, address);
+		apic_mmio_base = APIC_BASE;
 		apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
 			    APIC_BASE, address);
 	}



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529928.824993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKS-0005ec-0E; Thu, 04 May 2023 19:09:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529928.824993; Thu, 04 May 2023 19:09:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKR-0005eT-TY; Thu, 04 May 2023 19:09:23 +0000
Received: by outflank-mailman (input) for mailman id 529928;
 Thu, 04 May 2023 19:09:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueEI-00042k-O1
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:03:02 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4a29c51a-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:03:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a29c51a-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185938.393373946@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226980;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=DnnEFfJ8qHS51cV93bPVxAhJn4egJudPWNxPdxHp+6I=;
	b=3Ex1oay3deKBQTCKUX4Ih8IGgIbgjtd18ImqJ1vJckFzp4vqpeCRciYL26XaBXwtGetEt9
	jQYdqvvfPpYi34oU2dU7fjuiZ3WyUQcsrd4BPREXrzjkH9groi0TU6YGHBW08cmpwip9QC
	vnXlEPucLe8Pxu/DAQZhHSJgvAG1br+71W5AkzFErVTB5fdbrjcZJ7I/PMvvfD3nV+Fpt+
	Y9+eS6OIW46jTxBhE1zoqPIcGpnWyrhDXCq+doarlLORorIO2HSYQo2DibMTzYhsGwQ8ko
	wng2jKhRiMwK0oOD7H54yErcCMVtMuv2wW3qjiAj1OaBH6NB6N7JtXkq+JhyKQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226980;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=DnnEFfJ8qHS51cV93bPVxAhJn4egJudPWNxPdxHp+6I=;
	b=6kDZyW5GTa+KS1pHyq0SY7syfYnd6TB6StPbtFwCKnOC1peAStPMetQpfCSqmue8lUL4Wc
	NQ0I2S6hl8MhwzDA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 38/38] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:03:00 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Implement the validation function which tells the core code whether
parallel bringup is possible.

The only condition for now is that the kernel does not run in an encrypted
guest as these will trap the RDMSR via #VC, which cannot be handled at that
point in early startup.

There was an earlier variant for AMD-SEV which used the GHCB protocol for
retrieving the APIC ID via CPUID, but there is no guarantee that the
initial APIC ID in CPUID is the same as the real APIC ID. There is no
enforcement from the secure firmware and the hypervisor can assign APIC IDs
as it sees fit as long as the ACPI/MADT table is consistent with that
assignment.

Unfortunately there is no RDMSR GHCB protocol at the moment, so enabling
AMD-SEV guests for parallel startup needs some more thought.

Intel-TDX provides a secure RDMSR hypercall, but supporting that is outside
the scope of this change.

Fix up announce_cpu(): on e.g. Hyper-V, CPU1 is the secondary sibling of
CPU0, which makes the @cpu == 1 logic in announce_cpu() fall apart.
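The difference between the two "first AP" checks can be reduced to two
predicates (a hedged sketch; the helper names are invented, only the
conditions come from the patch):

```c
#include <stdbool.h>

/*
 * Old check: assumes CPU 1 is always the first AP to come up. On a
 * Hyper-V-like topology CPU 1 is the SMT sibling of CPU 0 and is
 * brought up later, so the first AP can be e.g. CPU 2.
 */
static bool is_first_ap_old(int cpu)
{
	return cpu == 1;
}

/*
 * New check: when the first AP is announced, only the boot CPU is
 * online, regardless of which CPU number the first AP has.
 */
static bool is_first_ap_new(int num_online)
{
	return num_online == 1;
}
```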

[ mikelley: Reported the announce_cpu() fallout ]

Originally-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V2: Fixup announce_cpu() - Michael Kelley
---
 arch/x86/Kconfig             |    3 +
 arch/x86/kernel/cpu/common.c |    6 ---
 arch/x86/kernel/smpboot.c    |   83 ++++++++++++++++++++++++++++++++++++-------
 3 files changed, 73 insertions(+), 19 deletions(-)
---
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,8 +274,9 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_PARALLEL			if SMP && X86_64
 	select HOTPLUG_SMT			if SMP
-	select HOTPLUG_SPLIT_STARTUP		if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP && X86_32
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2128,11 +2128,7 @@ static inline void setup_getcpu(int cpu)
 }
 
 #ifdef CONFIG_X86_64
-static inline void ucode_cpu_init(int cpu)
-{
-	if (cpu)
-		load_ucode_ap();
-}
+static inline void ucode_cpu_init(int cpu) { }
 
 static inline void tss_setup_ist(struct tss_struct *tss)
 {
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -58,6 +58,7 @@
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
 #include <linux/cpuhotplug.h>
+#include <linux/mc146818rtc.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -75,7 +76,7 @@
 #include <asm/fpu/api.h>
 #include <asm/setup.h>
 #include <asm/uv/uv.h>
-#include <linux/mc146818rtc.h>
+#include <asm/microcode.h>
 #include <asm/i8259.h>
 #include <asm/misc.h>
 #include <asm/qspinlock.h>
@@ -128,7 +129,6 @@ int arch_update_cpu_topology(void)
 	return retval;
 }
 
-
 static unsigned int smpboot_warm_reset_vector_count;
 
 static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
@@ -229,16 +229,43 @@ static void notrace start_secondary(void
 	 */
 	cr4_init();
 
-#ifdef CONFIG_X86_32
-	/* switch away from the initial page table */
-	load_cr3(swapper_pg_dir);
-	__flush_tlb_all();
-#endif
+	/*
+	 * 32-bit specific. 64-bit reaches this code with the correct page
+	 * table established. Yet another historical divergence.
+	 */
+	if (IS_ENABLED(CONFIG_X86_32)) {
+		/* switch away from the initial page table */
+		load_cr3(swapper_pg_dir);
+		__flush_tlb_all();
+	}
+
 	cpu_init_exception_handling();
 
 	/*
-	 * Synchronization point with the hotplug core. Sets the
-	 * synchronization state to ALIVE and waits for the control CPU to
+	 * 32-bit systems load the microcode from the ASM startup code for
+	 * historical reasons.
+	 *
+	 * On 64-bit systems load it before reaching the AP alive
+	 * synchronization point below so it is not part of the full per
+	 * CPU serialized bringup part when "parallel" bringup is enabled.
+	 *
+	 * That's even safe when hyperthreading is enabled in the CPU as
+	 * the core code starts the primary threads first and leaves the
+	 * secondary threads waiting for SIPI. Loading microcode on
+	 * physical cores concurrently is a safe operation.
+	 *
+	 * This covers both the Intel specific issue that concurrent
+	 * microcode loading on SMT siblings must be prohibited and the
+	 * vendor-independent issue that microcode loading which changes
+	 * CPUID, MSRs etc. must be strictly serialized to maintain
+	 * software state correctness.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64))
+		load_ucode_ap();
+
+	/*
+	 * Synchronization point with the hotplug core. Sets this CPU's
+	 * synchronization state to ALIVE and spin-waits for the control CPU to
 	 * release this CPU for further bringup.
 	 */
 	cpuhp_ap_sync_alive();
@@ -934,10 +961,10 @@ static void announce_cpu(int cpu, int ap
 	if (!node_width)
 		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */
 
-	if (cpu == 1)
-		printk(KERN_INFO "x86: Booting SMP configuration:\n");
-
 	if (system_state < SYSTEM_RUNNING) {
+		if (num_online_cpus() == 1)
+			pr_info("x86: Booting SMP configuration:\n");
+
 		if (node != current_node) {
 			if (current_node > (-1))
 				pr_cont("\n");
@@ -948,7 +975,7 @@ static void announce_cpu(int cpu, int ap
 		}
 
 		/* Add padding for the BSP */
-		if (cpu == 1)
+		if (num_online_cpus() == 1)
 			pr_cont("%*s", width + 1, " ");
 
 		pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);
@@ -1242,6 +1269,36 @@ void __init smp_prepare_cpus_common(void
 	set_cpu_sibling_map(0);
 }
 
+#ifdef CONFIG_X86_64
+/* Establish whether parallel bringup can be supported. */
+bool __init arch_cpuhp_init_parallel_bringup(void)
+{
+	/*
+	 * Encrypted guests require special handling. They enforce X2APIC
+	 * mode but the RDMSR to read the APIC ID is intercepted and raises
+	 * #VC or #VE which cannot be handled in the early startup code.
+	 *
+	 * AMD-SEV does not provide a RDMSR GHCB protocol so the early
+	 * startup code cannot directly communicate with the secure
+	 * firmware. The alternative solution to retrieve the APIC ID via
+	 * CPUID(0xb), which is covered by the GHCB protocol, is not viable
+	 * either because there is no enforcement of the CPUID(0xb)
+	 * provided "initial" APIC ID to be the same as the real APIC ID.
+	 *
+	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
+	 * implemented separately in the low level startup ASM code.
+	 */
+	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
+		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
+		return false;
+	}
+
+	smpboot_control = STARTUP_READ_APICID;
+	pr_debug("Parallel CPU startup enabled: 0x%08x\n", smpboot_control);
+	return true;
+}
+#endif
+
 /*
  * Prepare for SMP bootup.
 * @max_cpus: configured maximum number of CPUs. It is a legacy parameter



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529931.825003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKV-00060U-Cm; Thu, 04 May 2023 19:09:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529931.825003; Thu, 04 May 2023 19:09:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKV-00060L-9u; Thu, 04 May 2023 19:09:27 +0000
Received: by outflank-mailman (input) for mailman id 529931;
 Thu, 04 May 2023 19:09:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDw-00042j-22
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:40 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d7c3870-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d7c3870-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185937.700483457@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226959;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=o04fXgOhQs8+8CE7g0ZkhrpkLD4QHZg/LLM7Otp+9z8=;
	b=p4kcKoUjBHHJo7zBT76SAr4kx217XWJeqQlJZM4VxHPykLQrWLTEC/uzx8rVhbv5iqkHWl
	eKFtH9Dh+4qwsJ89ZP4rLm01HMJyfuh1U+U+HK5bkx7PCIfmaji3wiKDM0hoVurLEfzD81
	Wt5Ubm2jEjitMnzdFcIqEG1m9GNIikM3c5w8xOJBg2D9IGfl3i1Bdd22uIIaFwPx6R+oIN
	02fFHC7cYoeNHnd0KZn/7sG2vBaI1ZwTr6vd36gfFqo6mvp2OfPm/JhGUYVjsHiJsALYHO
	f8H4F6s4Sw0iVizvLScdCi66AVg3wSgMwKU7em9hN7nVzpd/ip3L0uFyKgVW2A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226959;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=o04fXgOhQs8+8CE7g0ZkhrpkLD4QHZg/LLM7Otp+9z8=;
	b=ZnZlGAwTWV21lQJdfFlOE3KOrdxbCyckyIQwnUuLhsAipHYSOksGllyP3kynqMPIKSFyh4
	3HRt8jQRFFxBADCA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 25/38] parisc: Switch to hotplug core state synchronization
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:38 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: linux-parisc@vger.kernel.org

---
 arch/parisc/Kconfig          |    1 +
 arch/parisc/kernel/process.c |    4 ++--
 arch/parisc/kernel/smp.c     |    7 +++----
 3 files changed, 6 insertions(+), 6 deletions(-)
---
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -57,6 +57,7 @@ config PARISC
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select GENERIC_SCHED_CLOCK
 	select GENERIC_IRQ_MIGRATION if SMP
 	select HAVE_UNSTABLE_SCHED_CLOCK if SMP
--- a/arch/parisc/kernel/process.c
+++ b/arch/parisc/kernel/process.c
@@ -166,8 +166,8 @@ void __noreturn arch_cpu_idle_dead(void)
 
 	local_irq_disable();
 
-	/* Tell __cpu_die() that this CPU is now safe to dispose of. */
-	(void)cpu_report_death();
+	/* Tell the core that this CPU is now safe to dispose of. */
+	cpuhp_ap_report_dead();
 
 	/* Ensure that the cache lines are written out. */
 	flush_cache_all_local();
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -500,11 +500,10 @@ int __cpu_disable(void)
 void __cpu_die(unsigned int cpu)
 {
 	pdc_cpu_rendezvous_lock();
+}
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
 	pr_info("CPU%u: is shutting down\n", cpu);
 
 	/* set task's state to interruptible sleep */



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529934.825013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKZ-0006MH-KX; Thu, 04 May 2023 19:09:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529934.825013; Thu, 04 May 2023 19:09:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKZ-0006MA-Gy; Thu, 04 May 2023 19:09:31 +0000
Received: by outflank-mailman (input) for mailman id 529934;
 Thu, 04 May 2023 19:09:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueE1-00042j-09
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:45 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 406aadd9-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 406aadd9-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185937.858955768@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226964;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=x1gYrMcqzm40p2bz6RsOmSrXlBcSvXXbnfB8HNeCbJU=;
	b=F9VwenKVSYd9QFli8bXNV/fz9qUSdzJcPELJ1ngRD9BxyzaHEgcu2cC0/b4kuprOfTMBsR
	NCkyPWtTcKTi4mTU6aAeMY6PYflFdg/QnWMligdTnr5DQ53G/59w7N1RxTuaJMuxKaB+6k
	fnxGx7t6MsQx4TP0A/Wv1B3YJqsjI01JirGXHQPo53zTdX5Z488sLyftTciloBgiN9k5k/
	bzf3t4Qwlj6lxCy0qHukKPQMZGDuh3h1yP7MJWtYXWHVZt3htrK/Qtw9ul4ERCfXuX9Nze
	it6Ui4ad23Dml/cuVbhr6i6cHXzKyVbR3juE2BxVW820qfkeEJiZvG833WVS3g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226964;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=x1gYrMcqzm40p2bz6RsOmSrXlBcSvXXbnfB8HNeCbJU=;
	b=PK3VhwCrlpzYLlITMOVdXxhPHsDqoA73a6Tz+/QOqHPRfKtGaiwEqnA7EtTcL4VMvMoxp6
	ebRWKW/d8KnEHmAA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch V2 28/38] cpu/hotplug: Reset task stack state in _cpu_up()
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:43 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

Commit dce1ca0525bf ("sched/scs: Reset task stack state in bringup_cpu()")
ensured that the shadow call stack and KASAN poisoning were removed from
a CPU's stack each time that CPU is brought up, not just once.

This is not incorrect. However, with parallel bringup the idle thread setup
will happen at a different step. As a consequence the cleanup in
bringup_cpu() would be too late.

Move the SCS/KASAN cleanup to the generic _cpu_up() function instead,
which already ensures that the new CPU's stack is available, purely to
allow for early failure. This occurs when the CPU to be brought up is
in the CPUHP_OFFLINE state, which should correctly do the cleanup any
time the CPU has been taken down to the point where such is needed.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>

---
 kernel/cpu.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
---
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -771,12 +771,6 @@ static int bringup_cpu(unsigned int cpu)
 		return -EAGAIN;
 
 	/*
-	 * Reset stale stack state from the last time this CPU was online.
-	 */
-	scs_task_reset(idle);
-	kasan_unpoison_task_stack(idle);
-
-	/*
 	 * Some architectures have to walk the irq descriptors to
 	 * setup the vector space for the cpu which comes online.
 	 * Prevent irq alloc/free across the bringup.
@@ -1583,6 +1577,12 @@ static int _cpu_up(unsigned int cpu, int
 			ret = PTR_ERR(idle);
 			goto out;
 		}
+
+		/*
+		 * Reset stale stack state from the last time this CPU was online.
+		 */
+		scs_task_reset(idle);
+		kasan_unpoison_task_stack(idle);
 	}
 
 	cpuhp_tasks_frozen = tasks_frozen;



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529935.825018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKa-0006Pw-1a; Thu, 04 May 2023 19:09:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529935.825018; Thu, 04 May 2023 19:09:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKZ-0006PM-PN; Thu, 04 May 2023 19:09:31 +0000
Received: by outflank-mailman (input) for mailman id 529935;
 Thu, 04 May 2023 19:09:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueE2-00042j-Qd
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:46 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 415bc34c-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 415bc34c-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185937.912510359@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226965;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=F8kXoaR1nBQ6o9VxPp8Uz6U2V5tkVIrZOYAD+Uqf6yU=;
	b=FHHwtM3MehcvxyTD3IIJz8sgSlPXUlchZh7xJFV6jf2jNACHHUSgTO098Ser6A+TKYqwby
	MlO+QBW9B7CBztPuD5tg+w+tvkSrJZeyBsoOwfT3EBQQwdnjzUEH5V/q7RLkoQV5ifNNa+
	at9KsHlVkTZ1F4ajKTdjiM9iHVozExltMcmveGrXQs/fOlHDH8HgFUnTWmeo5rouQexRZ4
	Xry4XkxaL7TCkZzvLR+YbuqgPQqwNW1A3kMmXVynnWeMARwsSVesHKDo20gZt3Skv9srif
	0A1QbnYXe4Tj6Hfs/b/c+uuWIPdrOvy2KufgGEj/6EkbPvsEiWUj1xzdM/2/FQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226965;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=F8kXoaR1nBQ6o9VxPp8Uz6U2V5tkVIrZOYAD+Uqf6yU=;
	b=eKOKrA/+YsiD2CsaRIq67Pl+6fGbLOp23MnHr4M6jOIgCJwCHmPsKgxoXghxnT8I2GMUKL
	U9rUX94YG+zIxfDw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 29/38] cpu/hotplug: Provide a split up CPUHP_BRINGUP mechanism
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:45 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The bringup logic of a to-be-onlined CPU consists of several parts, which
are considered to be a single hotplug state:

  1) Control CPU issues the wake-up

  2) To be onlined CPU starts up, does the minimal initialization,
     reports to be alive and waits for release into the complete bring-up.

  3) Control CPU waits for the alive report and releases the upcoming CPU
     for the complete bring-up.

Allow splitting this into two states:

  1) Control CPU issues the wake-up

     After that the to be onlined CPU starts up, does the minimal
     initialization, reports to be alive and waits for release into the
     full bring-up. As this can run after the control CPU dropped the
     hotplug locks the code which is executed on the AP before it reports
     alive has to be carefully audited to not violate any of the hotplug
     constraints, especially not modifying any of the various cpumasks.

     This is really only meant to avoid waiting for the AP to react to
     the wake-up. Of course an architecture can move strict CPU related setup
     functionality, e.g. microcode loading, with care before the
     synchronization point to save further pointless waiting time.

  2) Control CPU waits for the alive report and releases the upcoming CPU
     for the complete bring-up.

This allows the two states to be split up: all to-be-onlined CPUs are first
run up to state #1 on the control CPU, and state #2 runs at a later point.
This spares some of the latencies of the fully serialized per-CPU bringup
by avoiding the per-CPU wakeup/wait serialization. The assumption
is that the first AP already waits when the last AP has been woken up. This
obviously depends on the hardware latencies and, depending on the timing,
might still not completely eliminate all wait scenarios.

This split is just a preparatory step for enabling the parallel bringup
later. The boot time bringup is still fully serialized. It has a separate
config switch so that architectures which want to support parallel bringup
can test the split of the CPUHP_BRINGUP step separately.

To enable this the architecture must support the CPU hotplug core sync
mechanism and has to be audited that there are no implicit hotplug state
dependencies which require a fully serialized bringup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/Kconfig               |    4 ++
 include/linux/cpuhotplug.h |    4 ++
 kernel/cpu.c               |   70 +++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 76 insertions(+), 2 deletions(-)
---
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -49,6 +49,10 @@ config HOTPLUG_CORE_SYNC_FULL
 	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select HOTPLUG_CORE_SYNC
 
+config HOTPLUG_SPLIT_STARTUP
+	bool
+	select HOTPLUG_CORE_SYNC_FULL
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -133,6 +133,7 @@ enum cpuhp_state {
 	CPUHP_MIPS_SOC_PREPARE,
 	CPUHP_BP_PREPARE_DYN,
 	CPUHP_BP_PREPARE_DYN_END		= CPUHP_BP_PREPARE_DYN + 20,
+	CPUHP_BP_KICK_AP,
 	CPUHP_BRINGUP_CPU,
 
 	/*
@@ -517,9 +518,12 @@ void cpuhp_online_idle(enum cpuhp_state
 static inline void cpuhp_online_idle(enum cpuhp_state state) { }
 #endif
 
+struct task_struct;
+
 void cpuhp_ap_sync_alive(void);
 void arch_cpuhp_sync_state_poll(void);
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
+int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle);
 
 #ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
 void cpuhp_ap_report_dead(void);
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -761,6 +761,47 @@ static int bringup_wait_for_ap_online(un
 	return 0;
 }
 
+#ifdef CONFIG_HOTPLUG_SPLIT_STARTUP
+static int cpuhp_kick_ap_alive(unsigned int cpu)
+{
+	if (!cpuhp_can_boot_ap(cpu))
+		return -EAGAIN;
+
+	return arch_cpuhp_kick_ap_alive(cpu, idle_thread_get(cpu));
+}
+
+static int cpuhp_bringup_ap(unsigned int cpu)
+{
+	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+	int ret;
+
+	/*
+	 * Some architectures have to walk the irq descriptors to
+	 * setup the vector space for the cpu which comes online.
+	 * Prevent irq alloc/free across the bringup.
+	 */
+	irq_lock_sparse();
+
+	ret = cpuhp_bp_sync_alive(cpu);
+	if (ret)
+		goto out_unlock;
+
+	ret = bringup_wait_for_ap_online(cpu);
+	if (ret)
+		goto out_unlock;
+
+	irq_unlock_sparse();
+
+	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+		return 0;
+
+	return cpuhp_kick_ap(cpu, st, st->target);
+
+out_unlock:
+	irq_unlock_sparse();
+	return ret;
+}
+#else
 static int bringup_cpu(unsigned int cpu)
 {
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
@@ -777,7 +818,6 @@ static int bringup_cpu(unsigned int cpu)
 	 */
 	irq_lock_sparse();
 
-	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
 	if (ret)
 		goto out_unlock;
@@ -801,6 +841,7 @@ static int bringup_cpu(unsigned int cpu)
 	irq_unlock_sparse();
 	return ret;
 }
+#endif
 
 static int finish_cpu(unsigned int cpu)
 {
@@ -1940,13 +1981,38 @@ static struct cpuhp_step cpuhp_hp_states
 		.startup.single		= timers_prepare_cpu,
 		.teardown.single	= timers_dead_cpu,
 	},
-	/* Kicks the plugged cpu into life */
+
+#ifdef CONFIG_HOTPLUG_SPLIT_STARTUP
+	/*
+	 * Kicks the AP alive. AP will wait in cpuhp_ap_sync_alive() until
+	 * the next step will release it.
+	 */
+	[CPUHP_BP_KICK_AP] = {
+		.name			= "cpu:kick_ap",
+		.startup.single		= cpuhp_kick_ap_alive,
+	},
+
+	/*
+	 * Waits for the AP to reach cpuhp_ap_sync_alive() and then
+	 * releases it for the complete bringup.
+	 */
+	[CPUHP_BRINGUP_CPU] = {
+		.name			= "cpu:bringup",
+		.startup.single		= cpuhp_bringup_ap,
+		.teardown.single	= finish_cpu,
+		.cant_stop		= true,
+	},
+#else
+	/*
+	 * All-in-one CPU bringup state which includes the kick alive.
+	 */
 	[CPUHP_BRINGUP_CPU] = {
 		.name			= "cpu:bringup",
 		.startup.single		= bringup_cpu,
 		.teardown.single	= finish_cpu,
 		.cant_stop		= true,
 	},
+#endif
 	/* Final state before CPU kills itself */
 	[CPUHP_AP_IDLE_DEAD] = {
 		.name			= "idle:dead",



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529936.825025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKa-0006UF-D0; Thu, 04 May 2023 19:09:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529936.825025; Thu, 04 May 2023 19:09:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKa-0006S5-2y; Thu, 04 May 2023 19:09:32 +0000
Received: by outflank-mailman (input) for mailman id 529936;
 Thu, 04 May 2023 19:09:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDj-00042j-BA
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:27 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 35d37f1b-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35d37f1b-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185937.257347750@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226946;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=7K79vzHgMUP9R0A/9c85ZFgwBToV4EA8CXJkDqIYiAA=;
	b=UC7UBpzkmxaEWM2nz/Qw6Vjvrnn6fWsNM/KnljWxV7HMtz5u7vnaAfODd1M7vTr5ktTb20
	nVojYUScHcp2OqOxJ8IqsoIvsDu/7IdzIUdDuRTyFO8wPuGlIHpnfYJUn5p8bx3AiClEX1
	fRskIOaKRFHPshjjQegmMKxWZJiKU/W1wNR+46su4jgsn/9Va0iMm7QqwA01siEqX7wCZg
	cW+AbT9hwgRm2RTXxGoRlf6T+Y7L4Mnn/bxe5uXiXqUdghVOxD2GUZ9J1ZZ4wpm572Kggb
	lAqMgDTfSLuvpAL4aW6s90BbYxngLSPUP8QvW4PAuFXJ0OZBFaKLB/9XRIX9dw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226946;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=7K79vzHgMUP9R0A/9c85ZFgwBToV4EA8CXJkDqIYiAA=;
	b=y1BzbUsMle0wpOLNE2p2p2IO/b5quxM8T3suQns4J8s0UcXtZFuk8jCSoUG3pH8YtIQT2K
	ezemL9386MNKclAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 17/38] x86/xen/hvm: Get rid of DEAD_FROZEN handling
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:25 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

No point in this conditional voodoo. Un-initializing the lock mechanism is
safe to call unconditionally even if it was already invoked when the
CPU died.

Remove the invocation of xen_smp_intr_free() as that has been already
cleaned up in xen_cpu_dead_hvm().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: xen-devel@lists.xenproject.org

---
 arch/x86/xen/enlighten_hvm.c |   11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
---
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -161,13 +161,12 @@ static int xen_cpu_up_prepare_hvm(unsign
 	int rc = 0;
 
 	/*
-	 * This can happen if CPU was offlined earlier and
-	 * offlining timed out in common_cpu_die().
+	 * If a CPU was offlined earlier and offlining timed out then the
+	 * lock mechanism is still initialized. Uninit it unconditionally
+	 * as it's safe to call even if already uninited. Interrupts and
+	 * timer have already been handled in xen_cpu_dead_hvm().
 	 */
-	if (cpu_report_state(cpu) == CPU_DEAD_FROZEN) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-	}
+	xen_uninit_lock_cpu(cpu);
 
 	if (cpu_acpi_id(cpu) != U32_MAX)
 		per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529937.825029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKa-0006eq-Qc; Thu, 04 May 2023 19:09:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529937.825029; Thu, 04 May 2023 19:09:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKa-0006bj-JT; Thu, 04 May 2023 19:09:32 +0000
Received: by outflank-mailman (input) for mailman id 529937;
 Thu, 04 May 2023 19:09:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueE6-00042k-V7
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:50 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4349763a-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4349763a-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185938.019102098@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226969;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/CyZEPJpAlcZassSDf6yPEl3fpuFiplGVJNrFAE1PjY=;
	b=JeZqr+iiqJKFh0frIQBEVa7KD1KSyYbB9ecZpoyy+Ss/8iF+SulZrNYmpNtbaINJPS+GKN
	Q6QZfpf+NIX8lp5U7q5IujAix4aBXn2NuS91LKTuT2EN4Wx27Sc9ozVGaKryZ1Xz3aRx2+
	pp8GZXqg1tw/FM8H1JHmg09BpGTRsIUfCFt3B7zeMEtw/SuA3yHhIfdyfHSJKF8aaDmkKn
	qwpwBR9+GFxfv0EF2zQ4uLzx0BTfrYNVsAosc3zXIaKDTQTdRFy5RnXNSRt5/H9P60pdjJ
	D6kES7EIGSOfUpN+Y62CgSsoDjmz21DT61TP4zDe+YR8Qs5pB+Pp23YRydrpMw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226969;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/CyZEPJpAlcZassSDf6yPEl3fpuFiplGVJNrFAE1PjY=;
	b=QiMfb78X11T9xHP4n7a/ByAPbBWoJBf5QSjjdz3uu1bRQb6HpSz+jFnI3924nrlMSI1nr3
	XF1289B2KoIGXfCw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 31/38] x86/apic: Provide cpu_primary_thread mask
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:48 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Make the primary thread tracking cpumask-based in preparation for simpler
handling of parallel bootup.
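The SMT-bit check that the new cpu_mark_primary_thread() performs (see the apic.c hunk below) can be exercised standalone. A minimal sketch, where fls_() is a stand-in for the kernel's fls() (1-based index of the most significant set bit) and the helper name is illustrative:

```c
#include <assert.h>

/* For smp_num_siblings threads per core, the low (fls(siblings) - 1)
 * bits of the APIC ID select the thread within the core. A CPU is the
 * primary SMT thread when those bits are all zero. */
static int fls_(unsigned int x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

static int is_primary_thread(unsigned int apicid, unsigned int siblings)
{
	unsigned int mask;

	if (siblings == 1)
		return 1;
	/* Isolate the SMT bit(s) in the APICID and check for 0 */
	mask = (1U << (fls_(siblings) - 1)) - 1;
	return !(apicid & mask);
}
```

With two siblings per core the mask is 0x1, so even APIC IDs are primary threads and odd ones are their siblings.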

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/apic.h     |    2 --
 arch/x86/include/asm/topology.h |   19 +++++++++++++++----
 arch/x86/kernel/apic/apic.c     |   20 +++++++++-----------
 arch/x86/kernel/smpboot.c       |   12 +++---------
 4 files changed, 27 insertions(+), 26 deletions(-)
---
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -506,10 +506,8 @@ extern int default_check_phys_apicid_pre
 #endif /* CONFIG_X86_LOCAL_APIC */
 
 #ifdef CONFIG_SMP
-bool apic_id_is_primary_thread(unsigned int id);
 void apic_smt_update(void);
 #else
-static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
 static inline void apic_smt_update(void) { }
 #endif
 
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -31,9 +31,9 @@
  * CONFIG_NUMA.
  */
 #include <linux/numa.h>
+#include <linux/cpumask.h>
 
 #ifdef CONFIG_NUMA
-#include <linux/cpumask.h>
 
 #include <asm/mpspec.h>
 #include <asm/percpu.h>
@@ -139,9 +139,20 @@ static inline int topology_max_smt_threa
 int topology_update_package_map(unsigned int apicid, unsigned int cpu);
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
-bool topology_is_primary_thread(unsigned int cpu);
 bool topology_smt_supported(void);
-#else
+
+extern struct cpumask __cpu_primary_thread_mask;
+#define cpu_primary_thread_mask ((const struct cpumask *)&__cpu_primary_thread_mask)
+
+/**
+ * topology_is_primary_thread - Check whether CPU is the primary SMT thread
+ * @cpu:	CPU to check
+ */
+static inline bool topology_is_primary_thread(unsigned int cpu)
+{
+	return cpumask_test_cpu(cpu, cpu_primary_thread_mask);
+}
+#else /* CONFIG_SMP */
 #define topology_max_packages()			(1)
 static inline int
 topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; }
@@ -152,7 +163,7 @@ static inline int topology_max_die_per_p
 static inline int topology_max_smt_threads(void) { return 1; }
 static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
 static inline bool topology_smt_supported(void) { return false; }
-#endif
+#endif /* !CONFIG_SMP */
 
 static inline void arch_fix_phys_package_id(int num, u32 slot)
 {
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2386,20 +2386,16 @@ bool arch_match_cpu_phys_id(int cpu, u64
 }
 
 #ifdef CONFIG_SMP
-/**
- * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
- * @apicid: APIC ID to check
- */
-bool apic_id_is_primary_thread(unsigned int apicid)
+static void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid)
 {
-	u32 mask;
-
-	if (smp_num_siblings == 1)
-		return true;
 	/* Isolate the SMT bit(s) in the APICID and check for 0 */
-	mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
-	return !(apicid & mask);
+	u32 mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
+
+	if (smp_num_siblings == 1 || !(apicid & mask))
+		cpumask_set_cpu(cpu, &__cpu_primary_thread_mask);
 }
+#else
+static inline void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid) { }
 #endif
 
 /*
@@ -2544,6 +2540,8 @@ int generic_processor_info(int apicid, i
 	set_cpu_present(cpu, true);
 	num_processors++;
 
+	cpu_mark_primary_thread(cpu, apicid);
+
 	return cpu;
 }
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -102,6 +102,9 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
+/* CPUs which are the primary SMT threads */
+struct cpumask __cpu_primary_thread_mask __read_mostly;
+
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -283,15 +286,6 @@ static void notrace start_secondary(void
 }
 
 /**
- * topology_is_primary_thread - Check whether CPU is the primary SMT thread
- * @cpu:	CPU to check
- */
-bool topology_is_primary_thread(unsigned int cpu)
-{
-	return apic_id_is_primary_thread(per_cpu(x86_cpu_to_apicid, cpu));
-}
-
-/**
  * topology_smt_supported - Check whether SMT is supported by the CPUs
  */
 bool topology_smt_supported(void)



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529949.825052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKo-0008MM-C9; Thu, 04 May 2023 19:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529949.825052; Thu, 04 May 2023 19:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKo-0008MD-9H; Thu, 04 May 2023 19:09:46 +0000
Received: by outflank-mailman (input) for mailman id 529949;
 Thu, 04 May 2023 19:09:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueE9-00042j-3B
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:53 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 453fc104-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 453fc104-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185938.126719312@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226972;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=C2xEv0EYOzjjBDOf0WNXDnbg3SxiEflVftHaff1ezNc=;
	b=R7TFizeMBd1vCDVjEXU0BtfNbQoQd0Q7O5Cmi/GhoH+A7DPO/eDCQYvwL8d0EzW6TZevXV
	zTSW3dCyZNMiwvAMbTbcTVfTHbY6pLve5Gv/afd1jzhLujx3f5ZspMS2zkpgXg0oDc3s5d
	kXcBYyWoHlipJH2VBBXdFP8QtRznLbAKZV10Ph92/Hk65CJwnRS+0u5oUwspH7khzqSQ/u
	SpYu0bcq4JwwE/JeCF3NwhBMiC+1RMRMfogzt5BdrL549/PBdPFpI4eEajMl9aEC63+JVF
	9zu2Vsz02+ZJpxAIfDIw2+6pnaLRSdZXjwd9Lq61E8b+1bqiBtxkam2eHoBKxg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226972;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=C2xEv0EYOzjjBDOf0WNXDnbg3SxiEflVftHaff1ezNc=;
	b=L3yFFvyeaWcbSL4+w4FNUsUl3Xq9FYBhGlVedajTlxgN49PBR5/K5uqBP8Q/AEH8xz/YfX
	kk6rHy0rbLZe69Cw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 33/38] x86/topology: Store extended topology leaf information
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:51 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Save the extended topology leaf number, if it exists and is valid, in
preparation for parallel CPU bringup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/topology.h |    1 +
 arch/x86/kernel/cpu/topology.c  |    3 +++
 2 files changed, 4 insertions(+)
---
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -121,6 +121,7 @@ extern unsigned int __max_die_per_packag
 #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
 #define topology_sibling_cpumask(cpu)		(per_cpu(cpu_sibling_map, cpu))
 
+extern unsigned int topology_extended_leaf;
 extern unsigned int __max_logical_packages;
 #define topology_max_packages()			(__max_logical_packages)
 
--- a/arch/x86/kernel/cpu/topology.c
+++ b/arch/x86/kernel/cpu/topology.c
@@ -29,6 +29,8 @@ unsigned int __max_die_per_package __rea
 EXPORT_SYMBOL(__max_die_per_package);
 
 #ifdef CONFIG_SMP
+unsigned int topology_extended_leaf __read_mostly;
+
 /*
  * Check if given CPUID extended topology "leaf" is implemented
  */
@@ -72,6 +74,7 @@ int detect_extended_topology_early(struc
 	if (leaf < 0)
 		return -1;
 
+	topology_extended_leaf = leaf;
 	set_cpu_cap(c, X86_FEATURE_XTOPOLOGY);
 
 	cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529958.825062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKt-0000bl-Jy; Thu, 04 May 2023 19:09:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529958.825062; Thu, 04 May 2023 19:09:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKt-0000bV-HK; Thu, 04 May 2023 19:09:51 +0000
Received: by outflank-mailman (input) for mailman id 529958;
 Thu, 04 May 2023 19:09:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDh-00042j-KE
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:25 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 34e19d1e-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34e19d1e-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185937.201611990@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226944;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=8IR2ZEar7xSTX3+WcC3NFIdAp4AUTKg+YP70lXONDYs=;
	b=b5Dhpk7DXri1g+Jk4Fccefqph5HHh5m0O6WHx1dP1v8spz8TW/ZVEXRzP6Fu5Vyp4twGau
	0WJ5VSOP7FkG0YM49DJA3ab9WsH+lMCnQQToJbW5YLIvGGqyCD5SuatG+70PddmmocvInY
	sbywrL3O4QW1MGnAo6cSBXhCM/VRCDBpxMydY3h/7/ooJkjuTTpet2ZIX0gkrKNjXC4iyZ
	NM25MYCPlURsUrvY1AdwvXfa3J2aswhoiO4UsIkjW2ZgbjIhOmU0uQ9SDbhRPFEyzNsNh0
	mS7KdesNxpuJcwz0hAomc0k47L5YQTKWUQ3/pEIBgfRwzYecCD6MkvQPPrVSxw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226944;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=8IR2ZEar7xSTX3+WcC3NFIdAp4AUTKg+YP70lXONDYs=;
	b=dn+I2+Yxq6eST9WYLeBz7I//tSeh1zP1C4UnkiRYUPrB/Bi9OJVO0kV5NCl8mWHGp/F1Kx
	csRGXLzl80A3yQBw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 16/38] x86/xen/smp_pv: Remove wait for CPU online
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:24 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the core code drops sparse_irq_lock after the idle thread has
synchronized, it's pointless to wait for the AP to mark itself online.

Whether the control CPU runs in a wait loop or sleeps in the core code
waiting for the online operation to complete makes no difference.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: xen-devel@lists.xenproject.org

---
 arch/x86/xen/smp_pv.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
---
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -340,11 +340,11 @@ static int xen_pv_cpu_up(unsigned int cp
 
 	xen_pmu_init(cpu);
 
-	rc = HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL);
-	BUG_ON(rc);
-
-	while (cpu_report_state(cpu) != CPU_ONLINE)
-		HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
+	/*
+	 * Why is this a BUG? If the hypercall fails then everything can be
+	 * rolled back, no?
+	 */
+	BUG_ON(HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL));
 
 	return 0;
 }



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529962.825069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKu-0000gv-42; Thu, 04 May 2023 19:09:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529962.825069; Thu, 04 May 2023 19:09:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueKt-0000ee-SK; Thu, 04 May 2023 19:09:51 +0000
Received: by outflank-mailman (input) for mailman id 529962;
 Thu, 04 May 2023 19:09:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueEF-00042j-UE
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:59 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 49347d17-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49347d17-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185938.339811042@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226979;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=upC4xkPPrMs610atpDkPLXATkmq4wC3sFnO5m7xr5Mk=;
	b=tYQKH5LD4/bbax8XIkS2IfxvQW+jtTqnFhM0UC2BJL7JwyxQQk35fQ4DnIpjghBefCJsm7
	P+1s/BcE7c8Le2JQCtdXTEVe77jYdm91mOAH1a7Pas6H2yZPwUeewlGe6jf98m0L1QOOdl
	l8SUUcJVumcQ3CYnqLcAdr8T4FJi3NtquA7LjD1muU3H85kQUn83bT5W0gtnGSZT7Vo9+s
	0EfEAbAL4lNPVIulxXJipEHdvdzaxlrN/rqc9fO2zc1ZFMWBa/L5rQJPYodzEY71Kx3u7w
	1lIeT8YLpfIQhOcV4McTONWEkRR3daf5G8z27enuu98B/SK5Mo0nAa86XKEJqA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226979;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=upC4xkPPrMs610atpDkPLXATkmq4wC3sFnO5m7xr5Mk=;
	b=KFQyiHdPzE8IY2FhvYi2d28BxvSxl7bssB/iRxDq20Peb65/yNDsIpOVSz0WpIiOobDIBb
	Wudxsc+Jf6JAYLCw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject:
 [patch V2 37/38] x86/smpboot: Support parallel startup of secondary CPUs
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:58 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

In parallel startup mode the APs are kicked alive by the control CPU
quickly after each other and run through the early startup code in
parallel. The real-mode startup code is already serialized with a
bit-spinlock to protect the real-mode stack.

In parallel startup mode the smpboot_control variable obviously cannot
contain the Linux CPU number so the APs have to determine their Linux CPU
number on their own. This is required to find the CPU's per-CPU offset in
order to find the idle task stack and other per-CPU data.

To achieve this, export the cpuid_to_apicid[] array so that each AP can
find its own CPU number by searching therein based on its APIC ID.
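The lookup described above — the .Lfind_cpunr loop in the head_64.S hunk below — amounts to a linear scan of cpuid_to_apicid[]. A C sketch under the assumption of a small made-up table (the helper name and table contents are illustrative):

```c
#include <assert.h>

/* C rendering of the .Lfind_cpunr loop: the AP scans the APIC ID table
 * for its own APIC ID; the matching index is its Linux CPU number. */
static int apicid_to_cpu(const int table[], int nr_cpu_ids,
			 unsigned int apicid)
{
	for (int cpu = 0; cpu < nr_cpu_ids; cpu++)
		if (table[cpu] == (int)apicid)
			return cpu;
	return -1;	/* not found: the asm path parks the CPU via hlt */
}
```

With an AMD-style table { 0x10, 0x11, 0x12, 0x13 } (initial APIC ID + 0x10, as mentioned above), an AP reading APIC ID 0x12 from its APIC resolves to CPU number 2.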

Introduce a flag in the top bits of smpboot_control which indicates that
the AP should find its CPU number by reading the APIC ID from the APIC.

This is required because CPUID based APIC ID retrieval can only provide the
initial APIC ID, which might have been overruled by the firmware. Some AMD
APUs come up with APIC ID = initial APIC ID + 0x10, so the APIC ID to CPU
number lookup would fail miserably if based on CPUID. Also virtualization
can make its own APIC ID assignments. The only requirement is that the
APIC IDs are consistent with the ACPI/MADT table.

For the boot CPU, or in case parallel bringup is disabled, the control bits
are empty and the CPU number is directly available in bits 0-23 of
smpboot_control.
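The resulting smpboot_control layout can be sketched with the constants from the asm/smp.h hunk below; the two helper functions are illustrative, not kernel code:

```c
#include <assert.h>

/* Constants as introduced in the asm/smp.h hunk of this patch. */
#define STARTUP_READ_APICID	0x80000000u	/* bit 31: read APICID from APIC */
#define STARTUP_PARALLEL_MASK	0xFF000000u	/* top 8 bits reserved for control */

/* Illustrative decode helpers: mirrors the asm sequence
 * "testl $STARTUP_READ_APICID" / "andl $(~STARTUP_PARALLEL_MASK)". */
static int control_wants_apicid_read(unsigned int ctrl)
{
	return (ctrl & STARTUP_READ_APICID) != 0;
}

static unsigned int control_cpu_nr(unsigned int ctrl)
{
	return ctrl & ~STARTUP_PARALLEL_MASK;	/* bits 0-23 */
}
```

When the flag bit is set the CPU-number field is ignored and the AP derives its CPU number from its APIC ID instead.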

[ tglx: Initial proof of concept patch with bitlock and APIC ID lookup ]
[ dwmw2: Rework and testing, commit message, CPUID 0x1 and CPU0 support ]
[ seanc: Fix stray override of initial_gs in common_cpu_up() ]
[ Oleksandr Natalenko: reported suspend/resume issue fixed in
  x86_acpi_suspend_lowlevel ]
[ tglx: Make it read the APIC ID from the APIC instead of using CPUID,
  	split the bitlock part out ]

Co-developed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Co-developed-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/apic.h    |    2 +
 arch/x86/include/asm/apicdef.h |    5 ++-
 arch/x86/include/asm/smp.h     |    6 +++
 arch/x86/kernel/acpi/sleep.c   |    9 +++++
 arch/x86/kernel/apic/apic.c    |    2 -
 arch/x86/kernel/head_64.S      |   62 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/smpboot.c      |    2 -
 7 files changed, 84 insertions(+), 4 deletions(-)
---
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -55,6 +55,8 @@ extern int local_apic_timer_c2_ok;
 extern int disable_apic;
 extern unsigned int lapic_timer_period;
 
+extern int cpuid_to_apicid[];
+
 extern enum apic_intr_mode_id apic_intr_mode;
 enum apic_intr_mode_id {
 	APIC_PIC,
--- a/arch/x86/include/asm/apicdef.h
+++ b/arch/x86/include/asm/apicdef.h
@@ -138,7 +138,8 @@
 #define		APIC_EILVT_MASKED	(1 << 16)
 
 #define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
-#define APIC_BASE_MSR	0x800
+#define APIC_BASE_MSR		0x800
+#define APIC_X2APIC_ID_MSR	0x802
 #define XAPIC_ENABLE	(1UL << 11)
 #define X2APIC_ENABLE	(1UL << 10)
 
@@ -162,6 +163,7 @@
 #define APIC_CPUID(apicid)	((apicid) & XAPIC_DEST_CPUS_MASK)
 #define NUM_APIC_CLUSTERS	((BAD_APICID + 1) >> XAPIC_DEST_CPUS_SHIFT)
 
+#ifndef __ASSEMBLY__
 /*
  * the local APIC register structure, memory mapped. Not terribly well
  * tested, but we might eventually use this one in the future - the
@@ -435,4 +437,5 @@ enum apic_delivery_modes {
 	APIC_DELIVERY_MODE_EXTINT	= 7,
 };
 
+#endif /* !__ASSEMBLY__ */
 #endif /* _ASM_X86_APICDEF_H */
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -200,4 +200,10 @@ extern unsigned long apic_mmio_base;
 
 #endif /* !__ASSEMBLY__ */
 
+/* Control bits for startup_64 */
+#define STARTUP_READ_APICID	0x80000000
+
+/* Top 8 bits are reserved for control */
+#define STARTUP_PARALLEL_MASK	0xFF000000
+
 #endif /* _ASM_X86_SMP_H */
--- a/arch/x86/kernel/acpi/sleep.c
+++ b/arch/x86/kernel/acpi/sleep.c
@@ -16,6 +16,7 @@
 #include <asm/cacheflush.h>
 #include <asm/realmode.h>
 #include <asm/hypervisor.h>
+#include <asm/smp.h>
 
 #include <linux/ftrace.h>
 #include "../../realmode/rm/wakeup.h"
@@ -127,7 +128,13 @@ int x86_acpi_suspend_lowlevel(void)
 	 * value is in the actual %rsp register.
 	 */
 	current->thread.sp = (unsigned long)temp_stack + sizeof(temp_stack);
-	smpboot_control = smp_processor_id();
+	/*
+	 * Ensure the CPU knows which one it is when it comes back, if
+	 * it isn't in parallel mode and expected to work that out for
+	 * itself.
+	 */
+	if (!(smpboot_control & STARTUP_PARALLEL_MASK))
+		smpboot_control = smp_processor_id();
 #endif
 	initial_code = (unsigned long)wakeup_long64;
 	saved_magic = 0x123456789abcdef0L;
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2380,7 +2380,7 @@ static int nr_logical_cpuids = 1;
 /*
  * Used to store mapping between logical CPU IDs and APIC IDs.
  */
-static int cpuid_to_apicid[] = {
+int cpuid_to_apicid[] = {
 	[0 ... NR_CPUS - 1] = -1,
 };
 
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -24,7 +24,9 @@
 #include "../entry/calling.h"
 #include <asm/export.h>
 #include <asm/nospec-branch.h>
+#include <asm/apicdef.h>
 #include <asm/fixmap.h>
+#include <asm/smp.h>
 
 /*
  * We are not able to switch in one step to the final KERNEL ADDRESS SPACE
@@ -234,8 +236,68 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	ANNOTATE_NOENDBR // above
 
 #ifdef CONFIG_SMP
+	/*
+	 * For parallel boot, the APIC ID is read from the APIC, and then
+	 * used to look up the CPU number.  For booting a single CPU, the
+	 * CPU number is encoded in smpboot_control.
+	 *
+	 * Bit 31	STARTUP_READ_APICID (Read APICID from APIC)
+	 * Bit 0-23	CPU# if STARTUP_xx flags are not set
+	 */
 	movl	smpboot_control(%rip), %ecx
+	testl	$STARTUP_READ_APICID, %ecx
+	jnz	.Lread_apicid
+	/*
+	 * No control bit set, single CPU bringup. CPU number is provided
+	 * in bit 0-23. This is also the boot CPU case (CPU number 0).
+	 */
+	andl	$(~STARTUP_PARALLEL_MASK), %ecx
+	jmp	.Lsetup_cpu
+
+.Lread_apicid:
+	/* Check whether X2APIC mode is already enabled */
+	mov	$MSR_IA32_APICBASE, %ecx
+	rdmsr
+	testl	$X2APIC_ENABLE, %eax
+	jnz	.Lread_apicid_msr
+
+	/* Read the APIC ID from the fix-mapped MMIO space. */
+	movq	apic_mmio_base(%rip), %rcx
+	addq	$APIC_ID, %rcx
+	movl	(%rcx), %eax
+	shr	$24, %eax
+	jmp	.Llookup_AP
+
+.Lread_apicid_msr:
+	mov	$APIC_X2APIC_ID_MSR, %ecx
+	rdmsr
+
+.Llookup_AP:
+	/* EAX contains the APIC ID of the current CPU */
+	xorq	%rcx, %rcx
+	leaq	cpuid_to_apicid(%rip), %rbx
+
+.Lfind_cpunr:
+	cmpl	(%rbx,%rcx,4), %eax
+	jz	.Lsetup_cpu
+	inc	%ecx
+#ifdef CONFIG_FORCE_NR_CPUS
+	cmpl	$NR_CPUS, %ecx
+#else
+	cmpl	nr_cpu_ids(%rip), %ecx
+#endif
+	jb	.Lfind_cpunr
+
+	/*  APIC ID not found in the table. Drop the trampoline lock and bail. */
+	movq	trampoline_lock(%rip), %rax
+	lock
+	btrl	$0, (%rax)
+
+1:	cli
+	hlt
+	jmp	1b
 
+.Lsetup_cpu:
 	/* Get the per cpu offset for the given CPU# which is in ECX */
 	movq	__per_cpu_offset(,%rcx,8), %rdx
 #else
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1002,7 +1002,7 @@ static int do_boot_cpu(int apicid, int c
 	if (IS_ENABLED(CONFIG_X86_32)) {
 		early_gdt_descr.address = (unsigned long)get_cpu_gdt_rw(cpu);
 		initial_stack  = idle->thread.sp;
-	} else {
+	} else if (!(smpboot_control & STARTUP_PARALLEL_MASK)) {
 		smpboot_control = cpu;
 	}
 



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:52 2023
Message-ID: <20230504185937.753526411@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Palmer Dabbelt <palmer@rivosinc.com>
Subject: [patch V2 26/38] riscv: Switch to hotplug core state synchronization
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:40 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: linux-riscv@lists.infradead.org

---
 arch/riscv/Kconfig              |    1 +
 arch/riscv/include/asm/smp.h    |    2 +-
 arch/riscv/kernel/cpu-hotplug.c |   14 +++++++-------
 3 files changed, 9 insertions(+), 8 deletions(-)
---
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -122,6 +122,7 @@ config RISCV
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
--- a/arch/riscv/include/asm/smp.h
+++ b/arch/riscv/include/asm/smp.h
@@ -70,7 +70,7 @@ asmlinkage void smp_callin(void);
 
 #if defined CONFIG_HOTPLUG_CPU
 int __cpu_disable(void);
-void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 #endif /* CONFIG_HOTPLUG_CPU */
 
 #else
--- a/arch/riscv/kernel/cpu-hotplug.c
+++ b/arch/riscv/kernel/cpu-hotplug.c
@@ -8,6 +8,7 @@
 #include <linux/sched.h>
 #include <linux/err.h>
 #include <linux/irq.h>
+#include <linux/cpuhotplug.h>
 #include <linux/cpu.h>
 #include <linux/sched/hotplug.h>
 #include <asm/irq.h>
@@ -49,17 +50,15 @@ int __cpu_disable(void)
 	return ret;
 }
 
+#ifdef CONFIG_HOTPLUG_CPU
 /*
- * Called on the thread which is asking for a CPU to be shutdown.
+ * Called on the thread which is asking for a CPU to be shut down, after
+ * the CPU has reported dead to the hotplug core.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
 	int ret = 0;
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU %u: didn't die\n", cpu);
-		return;
-	}
 	pr_notice("CPU%u: off\n", cpu);
 
 	/* Verify from the firmware if the cpu is really stopped*/
@@ -76,9 +75,10 @@ void __noreturn arch_cpu_idle_dead(void)
 {
 	idle_task_exit();
 
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	cpu_ops[smp_processor_id()]->cpu_stop();
 	/* It should never reach here */
 	BUG();
 }
+#endif



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:54 2023
Message-ID: <20230504185937.477649111@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 21/38] ARM: smp: Switch to hotplug core state synchronization
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:32 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arm-kernel@lists.infradead.org

---
 arch/arm/Kconfig           |    1 +
 arch/arm/include/asm/smp.h |    2 +-
 arch/arm/kernel/smp.c      |   18 +++++++-----------
 3 files changed, 9 insertions(+), 12 deletions(-)
---
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -124,6 +124,7 @@ config ARM
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UID16
 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_REL
 	select NEED_DMA_MAP_STATE
--- a/arch/arm/include/asm/smp.h
+++ b/arch/arm/include/asm/smp.h
@@ -64,7 +64,7 @@ extern void secondary_startup_arm(void);
 
 extern int __cpu_disable(void);
 
-extern void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 
 extern void arch_send_call_function_single_ipi(int cpu);
 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -288,15 +288,11 @@ int __cpu_disable(void)
 }
 
 /*
- * called on the thread which is asking for a CPU to be shutdown -
- * waits until shutdown has completed, or it is timed out.
+ * called on the thread which is asking for a CPU to be shut down, after
+ * the shutdown has completed.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
 	pr_debug("CPU%u: shutdown\n", cpu);
 
 	clear_tasks_mm_cpumask(cpu);
@@ -336,11 +332,11 @@ void __noreturn arch_cpu_idle_dead(void)
 	flush_cache_louis();
 
 	/*
-	 * Tell __cpu_die() that this CPU is now safe to dispose of.  Once
-	 * this returns, power and/or clocks can be removed at any point
-	 * from this CPU and its cache by platform_cpu_kill().
+	 * Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose
+	 * of. Once this returns, power and/or clocks can be removed at
+	 * any point from this CPU and its cache by platform_cpu_kill().
 	 */
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	/*
 	 * Ensure that the cache lines associated with that completion are



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:55 2023
Message-ID: <20230504185938.073662723@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch V2 32/38] cpu/hotplug: Allow "parallel" bringup up to
 CPUHP_BP_KICK_AP_STATE
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:50 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

There is often significant latency in the early stages of CPU bringup, and
time is wasted by waking each CPU (e.g. with INIT/SIPI/SIPI on x86) and
then waiting for it to respond before moving on to the next.

Allow a platform to enable parallel setup, which brings all to-be-onlined
CPUs up to the CPUHP_BP_KICK_AP state. While this state advancement on the
control CPU (BP) is single-threaded, the important part is the last state,
CPUHP_BP_KICK_AP, which wakes the to-be-onlined CPUs up.

This allows the CPUs to run up to the first synchronization point
cpuhp_ap_sync_alive() where they wait for the control CPU to release them
one by one for the full onlining procedure.

This parallelism depends on the CPU hotplug core sync mechanism which
ensures that the parallel brought up CPUs wait for release before touching
any state which would make the CPU visible to anything outside the hotplug
control mechanism.

To handle the SMT constraints of x86 correctly, the bringup happens in two
iterations when CONFIG_HOTPLUG_SMT is enabled. The control CPU brings up
the primary SMT threads of each core first, which can load the microcode
without the need to rendezvous with the thread siblings. Once that's
completed, it brings up the secondary SMT threads.

Co-developed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 Documentation/admin-guide/kernel-parameters.txt |    6 +
 arch/Kconfig                                    |    4 
 include/linux/cpuhotplug.h                      |    1 
 kernel/cpu.c                                    |  103 ++++++++++++++++++++++--
 4 files changed, 109 insertions(+), 5 deletions(-)
---
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -838,6 +838,12 @@
 			on every CPU online, such as boot, and resume from suspend.
 			Default: 10000
 
+	cpuhp.parallel=
+			[SMP] Enable/disable parallel bringup of secondary CPUs
+			Format: <bool>
+			Default is enabled if CONFIG_HOTPLUG_PARALLEL=y. Otherwise
+			the parameter has no effect.
+
 	crash_kexec_post_notifiers
 			Run kdump after running panic-notifiers and dumping
 			kmsg. This only for the users who doubt kdump always
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -53,6 +53,10 @@ config HOTPLUG_SPLIT_STARTUP
 	bool
 	select HOTPLUG_CORE_SYNC_FULL
 
+config HOTPLUG_PARALLEL
+	bool
+	select HOTPLUG_SPLIT_STARTUP
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -524,6 +524,7 @@ void cpuhp_ap_sync_alive(void);
 void arch_cpuhp_sync_state_poll(void);
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
 int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle);
+bool arch_cpuhp_init_parallel_bringup(void);
 
 #ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
 void cpuhp_ap_report_dead(void);
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -649,8 +649,23 @@ bool cpu_smt_possible(void)
 		cpu_smt_control != CPU_SMT_NOT_SUPPORTED;
 }
 EXPORT_SYMBOL_GPL(cpu_smt_possible);
+
+static inline bool cpuhp_smt_aware(void)
+{
+	return topology_smt_supported();
+}
+
+static inline const struct cpumask *cpuhp_get_primary_thread_mask(void)
+{
+	return cpu_primary_thread_mask;
+}
 #else
 static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
+static inline bool cpuhp_smt_aware(void) { return false; }
+static inline const struct cpumask *cpuhp_get_primary_thread_mask(void)
+{
+	return cpu_present_mask;
+}
 #endif
 
 static inline enum cpuhp_state
@@ -1743,16 +1758,94 @@ int bringup_hibernate_cpu(unsigned int s
 	return 0;
 }
 
-void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
+static void __init cpuhp_bringup_mask(const struct cpumask *mask, unsigned int ncpus,
+				      enum cpuhp_state target)
 {
 	unsigned int cpu;
 
-	for_each_present_cpu(cpu) {
-		if (num_online_cpus() >= setup_max_cpus)
+	for_each_cpu(cpu, mask) {
+		struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+
+		if (!--ncpus)
 			break;
-		if (!cpu_online(cpu))
-			cpu_up(cpu, CPUHP_ONLINE);
+
+		if (cpu_up(cpu, target) && can_rollback_cpu(st)) {
+			/*
+			 * If this failed then cpu_up() might have only
+			 * rolled back to CPUHP_BP_KICK_AP for the final
+			 * online. Clean it up. NOOP if already rolled back.
+			 */
+			WARN_ON(cpuhp_invoke_callback_range(false, cpu, st, CPUHP_OFFLINE));
+		}
+	}
+}
+
+#ifdef CONFIG_HOTPLUG_PARALLEL
+static bool __cpuhp_parallel_bringup __ro_after_init = true;
+
+static int __init parallel_bringup_parse_param(char *arg)
+{
+	return kstrtobool(arg, &__cpuhp_parallel_bringup);
+}
+early_param("cpuhp.parallel", parallel_bringup_parse_param);
+
+/*
+ * On architectures which have enabled parallel bringup this invokes all BP
+ * prepare states for each of the to be onlined APs first. The last state
+ * sends the startup IPI to the APs. The APs proceed through the low level
+ * bringup code in parallel and then wait for the control CPU to release
+ * them one by one for the final onlining procedure.
+ *
+ * This avoids waiting for each AP to respond to the startup IPI in
+ * CPUHP_BRINGUP_CPU.
+ */
+static bool __init cpuhp_bringup_cpus_parallel(unsigned int ncpus)
+{
+	const struct cpumask *mask = cpu_present_mask;
+
+	if (__cpuhp_parallel_bringup)
+		__cpuhp_parallel_bringup = arch_cpuhp_init_parallel_bringup();
+	if (!__cpuhp_parallel_bringup)
+		return false;
+
+	if (cpuhp_smt_aware()) {
+		const struct cpumask *pmask = cpuhp_get_primary_thread_mask();
+		static struct cpumask tmp_mask __initdata;
+
+		/*
+		 * For various reasons, x86 requires that the SMT siblings
+		 * are not brought up while the primary thread does a
+		 * microcode update. Bring the primary threads up first.
+		 */
+		cpumask_and(&tmp_mask, mask, pmask);
+		cpuhp_bringup_mask(&tmp_mask, ncpus, CPUHP_BP_KICK_AP);
+		cpuhp_bringup_mask(&tmp_mask, ncpus, CPUHP_ONLINE);
+		/* Account for the online CPUs */
+		ncpus -= num_online_cpus();
+		if (!ncpus)
+			return true;
+		/* Create the mask for secondary CPUs */
+		cpumask_andnot(&tmp_mask, mask, pmask);
+		mask = &tmp_mask;
 	}
+
+	/* Bring the not-yet started CPUs up */
+	cpuhp_bringup_mask(mask, ncpus, CPUHP_BP_KICK_AP);
+	cpuhp_bringup_mask(mask, ncpus, CPUHP_ONLINE);
+	return true;
+}
+#else
+static inline bool cpuhp_bringup_cpus_parallel(unsigned int ncpus) { return false; }
+#endif /* CONFIG_HOTPLUG_PARALLEL */
+
+void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
+{
+	/* Try parallel bringup optimization if enabled */
+	if (cpuhp_bringup_cpus_parallel(setup_max_cpus))
+		return;
+
+	/* Full per CPU serialized bringup */
+	cpuhp_bringup_mask(cpu_present_mask, setup_max_cpus, CPUHP_ONLINE);
 }
 
 #ifdef CONFIG_PM_SLEEP_SMP



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:56 2023
Message-ID: <20230504185937.637064751@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 24/38] MIPS: SMP_CPS: Switch to hotplug core state synchronization
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:37 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. This unfortunately requires adding dead reporting to the
non-CPS platforms, as CPS is the only user, but it allows an overall
consolidation of this functionality.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: linux-mips@vger.kernel.org

---
 arch/mips/Kconfig               |    1 +
 arch/mips/cavium-octeon/smp.c   |    1 +
 arch/mips/include/asm/smp-ops.h |    1 +
 arch/mips/kernel/smp-bmips.c    |    1 +
 arch/mips/kernel/smp-cps.c      |   14 +++++---------
 arch/mips/kernel/smp.c          |    8 ++++++++
 arch/mips/loongson64/smp.c      |    1 +
 7 files changed, 18 insertions(+), 9 deletions(-)
---
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2285,6 +2285,7 @@ config MIPS_CPS
 	select MIPS_CM
 	select MIPS_CPS_PM if HOTPLUG_CPU
 	select SMP
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select SYNC_R4K if (CEVT_R4K || CSRC_R4K)
 	select SYS_SUPPORTS_HOTPLUG_CPU
 	select SYS_SUPPORTS_SCHED_SMT if CPU_MIPSR6
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -345,6 +345,7 @@ void play_dead(void)
 	int cpu = cpu_number_map(cvmx_get_core_num());
 
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 	octeon_processor_boot = 0xff;
 	per_cpu(cpu_state, cpu) = CPU_DEAD;
 
--- a/arch/mips/include/asm/smp-ops.h
+++ b/arch/mips/include/asm/smp-ops.h
@@ -33,6 +33,7 @@ struct plat_smp_ops {
 #ifdef CONFIG_HOTPLUG_CPU
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
+	void (*cleanup_dead_cpu)(unsigned cpu);
 #endif
 #ifdef CONFIG_KEXEC
 	void (*kexec_nonboot_cpu)(void);
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -392,6 +392,7 @@ static void bmips_cpu_die(unsigned int c
 void __ref play_dead(void)
 {
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 
 	/* flush data cache */
 	_dma_cache_wback_inv(0, ~0);
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -503,8 +503,7 @@ void play_dead(void)
 		}
 	}
 
-	/* This CPU has chosen its way out */
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	cps_shutdown_this_cpu(cpu_death);
 
@@ -527,7 +526,9 @@ static void wait_for_sibling_halt(void *
 	} while (!(halted & TCHALT_H));
 }
 
-static void cps_cpu_die(unsigned int cpu)
+static void cps_cpu_die(unsigned int cpu) { }
+
+static void cps_cleanup_dead_cpu(unsigned cpu)
 {
 	unsigned core = cpu_core(&cpu_data[cpu]);
 	unsigned int vpe_id = cpu_vpe_id(&cpu_data[cpu]);
@@ -535,12 +536,6 @@ static void cps_cpu_die(unsigned int cpu
 	unsigned stat;
 	int err;
 
-	/* Wait for the cpu to choose its way out */
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU%u: didn't offline\n", cpu);
-		return;
-	}
-
 	/*
 	 * Now wait for the CPU to actually offline. Without doing this that
 	 * offlining may race with one or more of:
@@ -624,6 +619,7 @@ static const struct plat_smp_ops cps_smp
 #ifdef CONFIG_HOTPLUG_CPU
 	.cpu_disable		= cps_cpu_disable,
 	.cpu_die		= cps_cpu_die,
+	.cleanup_dead_cpu	= cps_cleanup_dead_cpu,
 #endif
 #ifdef CONFIG_KEXEC
 	.kexec_nonboot_cpu	= cps_kexec_nonboot_cpu,
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -690,6 +690,14 @@ void flush_tlb_one(unsigned long vaddr)
 EXPORT_SYMBOL(flush_tlb_page);
 EXPORT_SYMBOL(flush_tlb_one);
 
+#ifdef CONFIG_HOTPLUG_CPU
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
+	if (mp_ops->cleanup_dead_cpu)
+		mp_ops->cleanup_dead_cpu(cpu);
+}
+#endif
+
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 
 static void tick_broadcast_callee(void *info)
--- a/arch/mips/loongson64/smp.c
+++ b/arch/mips/loongson64/smp.c
@@ -775,6 +775,7 @@ void play_dead(void)
 	void (*play_dead_at_ckseg1)(int *);
 
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 
 	prid_imp = read_c0_prid() & PRID_IMP_MASK;
 	prid_rev = read_c0_prid() & PRID_REV_MASK;



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529972.825123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueL0-0002ZF-LW; Thu, 04 May 2023 19:09:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529972.825123; Thu, 04 May 2023 19:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueL0-0002Yt-Fp; Thu, 04 May 2023 19:09:58 +0000
Received: by outflank-mailman (input) for mailman id 529972;
 Thu, 04 May 2023 19:09:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDz-00042j-PH
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:43 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3f704063-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f704063-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185937.806324260@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226962;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PIHn3pYsUJ8IshGaa3kU/Fr0bIJzVI5udpOrlPrYz+E=;
	b=QNGmIIt+/To56nrNKLo3wlp/uoA0S+O2aHLtwODYUDkrU8Guyvv03S4Uol8KKlrIgPdDLI
	qPjhMQNUAIZdRs2om4WUPJMT1jXnWhngn8GF42pSedv+mPXqPDg6dporcNAvFdzUw+egQ+
	Vqg53SlSVRiuRBZMIzV5FPs2l5GIv84C01a0Qxu+Mq65QbnphJFA5Ir3subA78ZM6E8kf2
	ZWThMRZ9iqmdUJfsdKFtkK4qDI92/cMPImPqXf9PZUVb/qwqechT+XgrR1ZcB+r0K7YQf/
	HeZhI+W/xkNZVJpDF4ppx/B//2tg7mVNz3rodO8pj9sfyrrX+kNXbJXXhJW8oA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226962;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PIHn3pYsUJ8IshGaa3kU/Fr0bIJzVI5udpOrlPrYz+E=;
	b=wXxqyLYYRXuQOss/+ZoLBYE+oLaf2tb90zR4AxAwAajqpA22CmYoLXhycZjVGB2N2Jkdnq
	PQRh0paL0Sj8j3Bw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 27/38] cpu/hotplug: Remove unused state functions
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:42 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

All users converted to the hotplug core mechanism.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 include/linux/cpu.h |    2 -
 kernel/smpboot.c    |   75 ----------------------------------------------------
 2 files changed, 77 deletions(-)
---
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -193,8 +193,6 @@ static inline void play_idle(unsigned lo
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-bool cpu_wait_death(unsigned int cpu, int seconds);
-bool cpu_report_death(void);
 void cpuhp_report_idle_dead(void);
 #else
 static inline void cpuhp_report_idle_dead(void) { }
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -325,78 +325,3 @@ void smpboot_unregister_percpu_thread(st
 	cpus_read_unlock();
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
-
-#ifndef CONFIG_HOTPLUG_CORE_SYNC
-static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
-
-#ifdef CONFIG_HOTPLUG_CPU
-/*
- * Wait for the specified CPU to exit the idle loop and die.
- */
-bool cpu_wait_death(unsigned int cpu, int seconds)
-{
-	int jf_left = seconds * HZ;
-	int oldstate;
-	bool ret = true;
-	int sleep_jf = 1;
-
-	might_sleep();
-
-	/* The outgoing CPU will normally get done quite quickly. */
-	if (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) == CPU_DEAD)
-		goto update_state_early;
-	udelay(5);
-
-	/* But if the outgoing CPU dawdles, wait increasingly long times. */
-	while (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) != CPU_DEAD) {
-		schedule_timeout_uninterruptible(sleep_jf);
-		jf_left -= sleep_jf;
-		if (jf_left <= 0)
-			break;
-		sleep_jf = DIV_ROUND_UP(sleep_jf * 11, 10);
-	}
-update_state_early:
-	oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-update_state:
-	if (oldstate == CPU_DEAD) {
-		/* Outgoing CPU died normally, update state. */
-		smp_mb(); /* atomic_read() before update. */
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_POST_DEAD);
-	} else {
-		/* Outgoing CPU still hasn't died, set state accordingly. */
-		if (!atomic_try_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
-					&oldstate, CPU_BROKEN))
-			goto update_state;
-		ret = false;
-	}
-	return ret;
-}
-
-/*
- * Called by the outgoing CPU to report its successful death.  Return
- * false if this report follows the surviving CPU's timing out.
- *
- * A separate "CPU_DEAD_FROZEN" is used when the surviving CPU
- * timed out.  This approach allows architectures to omit calls to
- * cpu_check_up_prepare() and cpu_set_state_online() without defeating
- * the next cpu_wait_death()'s polling loop.
- */
-bool cpu_report_death(void)
-{
-	int oldstate;
-	int newstate;
-	int cpu = smp_processor_id();
-
-	oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-	do {
-		if (oldstate != CPU_BROKEN)
-			newstate = CPU_DEAD;
-		else
-			newstate = CPU_DEAD_FROZEN;
-	} while (!atomic_try_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
-				     &oldstate, newstate));
-	return newstate == CPU_DEAD;
-}
-
-#endif /* #ifdef CONFIG_HOTPLUG_CPU */
-#endif /* !CONFIG_HOTPLUG_CORE_SYNC */



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:09:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:09:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529973.825129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueL1-0002en-8w; Thu, 04 May 2023 19:09:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529973.825129; Thu, 04 May 2023 19:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueL0-0002dv-Va; Thu, 04 May 2023 19:09:58 +0000
Received: by outflank-mailman (input) for mailman id 529973;
 Thu, 04 May 2023 19:09:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueE5-00042k-Cm
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:49 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 425723d3-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 425723d3-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185937.966056940@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226967;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=OPmaW7DXf6JnyvIw0i7yNDoe4KbapH95hpIdiwQiys8=;
	b=OHQM462Vlb4JUW68TEIl6fNm2wkG4SDJ/WmVKvLWMnKsvspXVogdjc2LngewLspjgzA1Jq
	L9eSuPiamrsyKrs8ZIYXvKQpCwzQsD/75Jm/wrxCCu2OxmCX8aK3ZbXJ/x2bOGmRyErPSO
	atYwr+JfOvoX9acRV+IiSIOe7gwgCiejv0oRBsUcMHNCvvplg663yGDlt+xcKaV0X1m/XU
	a5X+xuwQ5SZKVhitdTR8euHAFdO4DSjfLUG5pj4Ay6ZQljwWh0/Wr2g/g1eJ5tOUZnZOcZ
	KSbTJC0Z3mA9V2DpMSFA7mFMz/TZyEqm0eNXH2mOdMyg3lCXhJt8qaKcz/vc6A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226967;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=OPmaW7DXf6JnyvIw0i7yNDoe4KbapH95hpIdiwQiys8=;
	b=2pZygmqrbSKaKtOa6782fVVLu32t0VpQpHaUmZC3Qqmhzk/7sEVd08EWFmIUQoV6w+fmeA
	aSMnAotAR3MHv7AA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 30/38] x86/smpboot: Enable split CPU startup
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:46 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The x86 CPU bringup state currently does AP wake-up, wait for AP to
respond and then release it for full bringup.

It is safe to split this into a wake-up and a separate wait+release
state.

Provide the required functions and enable the split CPU bringup, which
prepares for parallel bringup, where the bringup of the non-boot CPUs takes
two iterations: One to prepare and wake all APs and the second to wait and
release them. Depending on timing this can eliminate the wait time
completely.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/Kconfig           |    2 +-
 arch/x86/include/asm/smp.h |    9 ++-------
 arch/x86/kernel/smp.c      |    2 +-
 arch/x86/kernel/smpboot.c  |    8 ++++----
 arch/x86/xen/smp_pv.c      |    4 ++--
 5 files changed, 10 insertions(+), 15 deletions(-)
---
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,8 +274,8 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
-	select HOTPLUG_CORE_SYNC_FULL		if SMP
 	select HOTPLUG_SMT			if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -40,7 +40,7 @@ struct smp_ops {
 
 	void (*cleanup_dead_cpu)(unsigned cpu);
 	void (*poll_sync_state)(void);
-	int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
+	int (*kick_ap_alive)(unsigned cpu, struct task_struct *tidle);
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
 	void (*play_dead)(void);
@@ -80,11 +80,6 @@ static inline void smp_cpus_done(unsigne
 	smp_ops.smp_cpus_done(max_cpus);
 }
 
-static inline int __cpu_up(unsigned int cpu, struct task_struct *tidle)
-{
-	return smp_ops.cpu_up(cpu, tidle);
-}
-
 static inline int __cpu_disable(void)
 {
 	return smp_ops.cpu_disable();
@@ -124,7 +119,7 @@ void native_smp_prepare_cpus(unsigned in
 void calculate_max_logical_packages(void);
 void native_smp_cpus_done(unsigned int max_cpus);
 int common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
-int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
+int native_kick_ap(unsigned int cpu, struct task_struct *tidle);
 int native_cpu_disable(void);
 void __noreturn hlt_play_dead(void);
 void native_play_dead(void);
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -268,7 +268,7 @@ struct smp_ops smp_ops = {
 #endif
 	.smp_send_reschedule	= native_smp_send_reschedule,
 
-	.cpu_up			= native_cpu_up,
+	.kick_ap_alive		= native_kick_ap,
 	.cpu_disable		= native_cpu_disable,
 	.play_dead		= native_play_dead,
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1058,7 +1058,7 @@ static int do_boot_cpu(int apicid, int c
 	return ret;
 }
 
-static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
+int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
 	int err;
@@ -1094,15 +1094,15 @@ static int native_kick_ap(unsigned int c
 	return err;
 }
 
-int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle)
 {
-	return native_kick_ap(cpu, tidle);
+	return smp_ops.kick_ap_alive(cpu, tidle);
 }
 
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu)
 {
 	/* Cleanup possible dangling ends... */
-	if (smp_ops.cpu_up == native_cpu_up && x86_platform.legacy.warm_reset)
+	if (smp_ops.kick_ap_alive == native_kick_ap && x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();
 }
 
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -314,7 +314,7 @@ cpu_initialize_context(unsigned int cpu,
 	return 0;
 }
 
-static int xen_pv_cpu_up(unsigned int cpu, struct task_struct *idle)
+static int xen_pv_kick_ap(unsigned int cpu, struct task_struct *idle)
 {
 	int rc;
 
@@ -438,7 +438,7 @@ static const struct smp_ops xen_smp_ops
 	.smp_prepare_cpus = xen_pv_smp_prepare_cpus,
 	.smp_cpus_done = xen_smp_cpus_done,
 
-	.cpu_up = xen_pv_cpu_up,
+	.kick_ap_alive = xen_pv_kick_ap,
 	.cpu_die = xen_pv_cpu_die,
 	.cleanup_dead_cpu = xen_pv_cleanup_dead_cpu,
 	.poll_sync_state = xen_pv_poll_sync_state,



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:10:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:10:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529974.825135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueL1-0002oW-Ub; Thu, 04 May 2023 19:09:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529974.825135; Thu, 04 May 2023 19:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueL1-0002ml-JA; Thu, 04 May 2023 19:09:59 +0000
Received: by outflank-mailman (input) for mailman id 529974;
 Thu, 04 May 2023 19:09:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueEE-00042j-Av
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:58 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4844a154-eaae-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:02:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4844a154-eaae-11ed-b226-6b7b168915f2
Message-ID: <20230504185938.285676159@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226977;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PV6WWoQh1vSNv/X00PyVAYzPvMrMTPPv2Nyz2I8t2to=;
	b=ngwo/uGwLLzUyff7hlx9g/RxH5SaJg/lwXSYHXpqdS6uHQZR+3fUUQfO5oVYKvfxde06o+
	+RefzZ3wy9tsLi/Hy8zZ6JDATGhXPUq/QZGaWvxRx4XQWUnhfUzD50bvR2y7xGwKDKLwGm
	X2C2HqxX/WfTRVLQ/0g6RaEnIOW09xcuy0eKhcHRqbtoKQXKdt82wFhNqtvbm9wFJV5YkB
	OALYtVjQEZsIWpsJDbJIBCW6VkLxJKc0h6Dc19Ep7b3dc7MH5JO49SLLpNpVovgHkV/qQu
	V+Pl5jVc8Rxfg5JATIDLK+m3hZbXbKpQITDQu4OVIIYiQysd2QZB5OGLMeokNg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226977;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PV6WWoQh1vSNv/X00PyVAYzPvMrMTPPv2Nyz2I8t2to=;
	b=JP8BLAG9V88lZ5GHmeL8/xfbmLZJ07n8AFIskCcZbg4rYh3fxmVO9mzY0ohdj0bEvJKj2e
	XTgJfhIo5hUgrgBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch V2 36/38] x86/smpboot: Implement a bit spinlock to protect the
 realmode stack
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:56 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Parallel AP bringup requires that the APs can run fully parallel through
the early startup code including the real mode trampoline.

To prepare for this, implement a bit-spinlock to serialize access to the
real mode stack so that parallel upcoming APs do not corrupt each other's
stacks while going through the real mode startup code.

Co-developed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/include/asm/realmode.h      |    3 +++
 arch/x86/kernel/head_64.S            |   13 +++++++++++++
 arch/x86/realmode/init.c             |    3 +++
 arch/x86/realmode/rm/trampoline_64.S |   27 ++++++++++++++++++++++-----
 4 files changed, 41 insertions(+), 5 deletions(-)
---
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -52,6 +52,7 @@ struct trampoline_header {
 	u64 efer;
 	u32 cr4;
 	u32 flags;
+	u32 lock;
 #endif
 };
 
@@ -64,6 +65,8 @@ extern unsigned long initial_stack;
 extern unsigned long initial_vc_handler;
 #endif
 
+extern u32 *trampoline_lock;
+
 extern unsigned char real_mode_blob[];
 extern unsigned char real_mode_relocs[];
 
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -252,6 +252,17 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	movq	TASK_threadsp(%rax), %rsp
 
 	/*
+	 * Now that this CPU is running on its own stack, drop the realmode
+	 * protection. For the boot CPU the pointer is NULL!
+	 */
+	movq	trampoline_lock(%rip), %rax
+	testq	%rax, %rax
+	jz	.Lsetup_gdt
+	lock
+	btrl	$0, (%rax)
+
+.Lsetup_gdt:
+	/*
 	 * We must switch to a new descriptor in kernel space for the GDT
 	 * because soon the kernel won't have access anymore to the userspace
 	 * addresses where we're currently running on. We have to do that here
@@ -433,6 +444,8 @@ SYM_DATA(initial_code,	.quad x86_64_star
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 SYM_DATA(initial_vc_handler,	.quad handle_vc_boot_ghcb)
 #endif
+
+SYM_DATA(trampoline_lock, .quad 0);
 	__FINITDATA
 
 	__INIT
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -154,6 +154,9 @@ static void __init setup_real_mode(void)
 
 	trampoline_header->flags = 0;
 
+	trampoline_lock = &trampoline_header->lock;
+	*trampoline_lock = 0;
+
 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
 
 	/* Map the real mode stub as virtual == physical */
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -37,6 +37,24 @@
 	.text
 	.code16
 
+.macro LOAD_REALMODE_ESP
+	/*
+	 * Make sure only one CPU fiddles with the realmode stack
+	 */
+.Llock_rm\@:
+	btl	$0, tr_lock
+	jnc	2f
+	pause
+	jmp	.Llock_rm\@
+2:
+	lock
+	btsl	$0, tr_lock
+	jc	.Llock_rm\@
+
+	# Setup stack
+	movl	$rm_stack_end, %esp
+.endm
+
 	.balign	PAGE_SIZE
 SYM_CODE_START(trampoline_start)
 	cli			# We should be safe anyway
@@ -49,8 +67,7 @@ SYM_CODE_START(trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 
 	call	verify_cpu		# Verify the cpu supports long mode
 	testl   %eax, %eax		# Check for return code
@@ -93,8 +110,7 @@ SYM_CODE_START(sev_es_trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 
 	jmp	.Lswitch_to_protected
 SYM_CODE_END(sev_es_trampoline_start)
@@ -177,7 +193,7 @@ SYM_CODE_START(pa_trampoline_compat)
 	 * In compatibility mode.  Prep ESP and DX for startup_32, then disable
 	 * paging and complete the switch to legacy 32-bit mode.
 	 */
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 	movw	$__KERNEL_DS, %dx
 
 	movl	$(CR0_STATE & ~X86_CR0_PG), %eax
@@ -241,6 +257,7 @@ SYM_DATA_START(trampoline_header)
 	SYM_DATA(tr_efer,		.space 8)
 	SYM_DATA(tr_cr4,		.space 4)
 	SYM_DATA(tr_flags,		.space 4)
+	SYM_DATA(tr_lock,		.space 4)
 SYM_DATA_END(trampoline_header)
 
 #include "trampoline_common.S"



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:10:01 2023
Message-ID: <20230504185937.531087176@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 22/38] arm64: smp: Switch to hotplug core state synchronization
References: <20230504185733.126511787@linutronix.de>
Date: Thu,  4 May 2023 21:02:34 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/Kconfig           |    1 +
 arch/arm64/include/asm/smp.h |    2 +-
 arch/arm64/kernel/smp.c      |   14 +++++---------
 3 files changed, 7 insertions(+), 10 deletions(-)
---
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -222,6 +222,7 @@ config ARM64
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -99,7 +99,7 @@ static inline void arch_send_wakeup_ipi_
 
 extern int __cpu_disable(void);
 
-extern void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 extern void __noreturn cpu_die(void);
 extern void __noreturn cpu_die_early(void);
 
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -332,17 +332,13 @@ static int op_cpu_kill(unsigned int cpu)
 }
 
 /*
- * called on the thread which is asking for a CPU to be shutdown -
- * waits until shutdown has completed, or it is timed out.
+ * Called on the thread which is asking for a CPU to be shut down after
+ * the shutdown completed.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
 	int err;
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
 	pr_debug("CPU%u: shutdown\n", cpu);
 
 	/*
@@ -369,8 +365,8 @@ void __noreturn cpu_die(void)
 
 	local_daif_mask();
 
-	/* Tell __cpu_die() that this CPU is now safe to dispose of */
-	(void)cpu_report_death();
+	/* Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose of */
+	cpuhp_ap_report_dead();
 
 	/*
 	 * Actually shutdown the CPU. This must never fail. The specific hotplug



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:10:06 2023
Message-ID: <20230504185937.314874897@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Subject: [patch V2 18/38] cpu/hotplug: Add CPU state tracking and synchronization
References: <20230504185733.126511787@linutronix.de>
Date: Thu,  4 May 2023 21:02:27 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The CPU state tracking and synchronization mechanism in smpboot.c is
completely independent of the hotplug code and all logic around it is
implemented in architecture specific code.

Except for the state reporting of the AP there is absolutely nothing
architecture specific and the synchronization and decision functions can be
moved into the generic hotplug core code.

Provide an integrated variant and add the core synchronization and decision
points. This comes in two flavours:

  1) DEAD state synchronization

     Updated by the architecture code once the AP reaches the point where
     it is ready to be torn down by the control CPU, e.g. by removing power
     or clocks, or by tearing it down via the hypervisor.

     The control CPU waits for this state to be reached with a timeout. If
     the state is reached an architecture specific cleanup function is
     invoked.

  2) Full state synchronization

     This extends #1 with AP alive synchronization. This is new
     functionality which allows architecture-specific wait mechanisms,
     e.g. cpumasks, to be replaced completely.

     It also prevents an AP which is in a limbo state from being brought
     up again. This can happen when an AP failed to report dead state
     during a previous off-line operation.

The dead synchronization is what most architectures use. Only x86 makes a
bringup decision based on that state at the moment.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/Kconfig               |   15 +++
 include/linux/cpuhotplug.h |   12 ++
 kernel/cpu.c               |  193 ++++++++++++++++++++++++++++++++++++++++++++-
 kernel/smpboot.c           |    2 
 4 files changed, 221 insertions(+), 1 deletion(-)
---
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -34,6 +34,21 @@ config ARCH_HAS_SUBPAGE_FAULTS
 config HOTPLUG_SMT
 	bool
 
+# Selected by HOTPLUG_CORE_SYNC_DEAD or HOTPLUG_CORE_SYNC_FULL
+config HOTPLUG_CORE_SYNC
+	bool
+
+# Basic CPU dead synchronization selected by architecture
+config HOTPLUG_CORE_SYNC_DEAD
+	bool
+	select HOTPLUG_CORE_SYNC
+
+# Full CPU synchronization with alive state selected by architecture
+config HOTPLUG_CORE_SYNC_FULL
+	bool
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
+	select HOTPLUG_CORE_SYNC
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -517,4 +517,16 @@ void cpuhp_online_idle(enum cpuhp_state
 static inline void cpuhp_online_idle(enum cpuhp_state state) { }
 #endif
 
+void cpuhp_ap_sync_alive(void);
+void arch_cpuhp_sync_state_poll(void);
+void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
+void cpuhp_ap_report_dead(void);
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu);
+#else
+static inline void cpuhp_ap_report_dead(void) { }
+static inline void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu) { }
+#endif
+
 #endif
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -17,6 +17,7 @@
 #include <linux/cpu.h>
 #include <linux/oom.h>
 #include <linux/rcupdate.h>
+#include <linux/delay.h>
 #include <linux/export.h>
 #include <linux/bug.h>
 #include <linux/kthread.h>
@@ -59,6 +60,7 @@
  * @last:	For multi-instance rollback, remember how far we got
  * @cb_state:	The state for a single callback (install/uninstall)
  * @result:	Result of the operation
+ * @ap_sync_state:	State for AP synchronization
  * @done_up:	Signal completion to the issuer of the task for cpu-up
  * @done_down:	Signal completion to the issuer of the task for cpu-down
  */
@@ -76,6 +78,7 @@ struct cpuhp_cpu_state {
 	struct hlist_node	*last;
 	enum cpuhp_state	cb_state;
 	int			result;
+	atomic_t		ap_sync_state;
 	struct completion	done_up;
 	struct completion	done_down;
 #endif
@@ -276,6 +279,182 @@ static bool cpuhp_is_atomic_state(enum c
 	return CPUHP_AP_IDLE_DEAD <= state && state < CPUHP_AP_ONLINE;
 }
 
+/* Synchronization state management */
+enum cpuhp_sync_state {
+	SYNC_STATE_DEAD,
+	SYNC_STATE_KICKED,
+	SYNC_STATE_SHOULD_DIE,
+	SYNC_STATE_ALIVE,
+	SYNC_STATE_SHOULD_ONLINE,
+	SYNC_STATE_ONLINE,
+};
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC
+/**
+ * cpuhp_ap_update_sync_state - Update synchronization state during bringup/teardown
+ * @state:	The synchronization state to set
+ *
+ * No synchronization point. Just update of the synchronization state.
+ */
+static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state)
+{
+	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
+	int sync = atomic_read(st);
+
+	while (!atomic_try_cmpxchg(st, &sync, state));
+}
+
+void __weak arch_cpuhp_sync_state_poll(void) { cpu_relax(); }
+
+static bool cpuhp_wait_for_sync_state(unsigned int cpu, enum cpuhp_sync_state state,
+				      enum cpuhp_sync_state next_state)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	ktime_t now, end, start = ktime_get();
+	int sync;
+
+	end = start + 10ULL * NSEC_PER_SEC;
+
+	sync = atomic_read(st);
+	while (1) {
+		if (sync == state) {
+			if (!atomic_try_cmpxchg(st, &sync, next_state))
+				continue;
+			return true;
+		}
+
+		now = ktime_get();
+		if (now > end) {
+			/* Timeout. Leave the state unchanged */
+			return false;
+		} else if (now - start < NSEC_PER_MSEC) {
+			/* Poll for one millisecond */
+			arch_cpuhp_sync_state_poll();
+		} else {
+			usleep_range_state(USEC_PER_MSEC, 2 * USEC_PER_MSEC, TASK_UNINTERRUPTIBLE);
+		}
+		sync = atomic_read(st);
+	}
+	return true;
+}
+#else  /* CONFIG_HOTPLUG_CORE_SYNC */
+static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state) { }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC */
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
+/**
+ * cpuhp_ap_report_dead - Update synchronization state to DEAD
+ *
+ * No synchronization point. Just update of the synchronization state.
+ */
+void cpuhp_ap_report_dead(void)
+{
+	cpuhp_ap_update_sync_state(SYNC_STATE_DEAD);
+}
+
+void __weak arch_cpuhp_cleanup_dead_cpu(unsigned int cpu) { }
+
+/*
+ * Late CPU shutdown synchronization point. Cannot use cpuhp_state::done_down
+ * because the AP cannot issue complete() at this stage.
+ */
+static void cpuhp_bp_sync_dead(unsigned int cpu)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	int sync = atomic_read(st);
+
+	do {
+		/* CPU can have reported dead already. Don't overwrite that! */
+		if (sync == SYNC_STATE_DEAD)
+			break;
+	} while (!atomic_try_cmpxchg(st, &sync, SYNC_STATE_SHOULD_DIE));
+
+	if (cpuhp_wait_for_sync_state(cpu, SYNC_STATE_DEAD, SYNC_STATE_DEAD)) {
+		/* CPU reached dead state. Invoke the cleanup function */
+		arch_cpuhp_cleanup_dead_cpu(cpu);
+		return;
+	}
+
+	/* No further action possible. Emit message and give up. */
+	pr_err("CPU%u failed to report dead state\n", cpu);
+}
+#else /* CONFIG_HOTPLUG_CORE_SYNC_DEAD */
+static inline void cpuhp_bp_sync_dead(unsigned int cpu) { }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC_DEAD */
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_FULL
+/**
+ * cpuhp_ap_sync_alive - Synchronize AP with the control CPU once it is alive
+ *
+ * Updates the AP synchronization state to SYNC_STATE_ALIVE and waits
+ * for the BP to release it.
+ */
+void cpuhp_ap_sync_alive(void)
+{
+	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
+
+	cpuhp_ap_update_sync_state(SYNC_STATE_ALIVE);
+
+	/* Wait for the control CPU to release it. */
+	while (atomic_read(st) != SYNC_STATE_SHOULD_ONLINE)
+		cpu_relax();
+}
+
+static bool cpuhp_can_boot_ap(unsigned int cpu)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	int sync = atomic_read(st);
+
+again:
+	switch (sync) {
+	case SYNC_STATE_DEAD:
+		/* CPU is properly dead */
+		break;
+	case SYNC_STATE_KICKED:
+		/* CPU did not come up in previous attempt */
+		break;
+	case SYNC_STATE_ALIVE:
+		/* CPU is stuck in cpuhp_ap_sync_alive(). */
+		break;
+	default:
+		/* CPU failed to report online or dead and is in limbo state. */
+		return false;
+	}
+
+	/* Prepare for booting */
+	if (!atomic_try_cmpxchg(st, &sync, SYNC_STATE_KICKED))
+		goto again;
+
+	return true;
+}
+
+void __weak arch_cpuhp_cleanup_kick_cpu(unsigned int cpu) { }
+
+/*
+ * Early CPU bringup synchronization point. Cannot use cpuhp_state::done_up
+ * because the AP cannot issue complete() so early in the bringup.
+ */
+static int cpuhp_bp_sync_alive(unsigned int cpu)
+{
+	int ret = 0;
+
+	if (!IS_ENABLED(CONFIG_HOTPLUG_CORE_SYNC_FULL))
+		return 0;
+
+	if (!cpuhp_wait_for_sync_state(cpu, SYNC_STATE_ALIVE, SYNC_STATE_SHOULD_ONLINE)) {
+		pr_err("CPU%u failed to report alive state\n", cpu);
+		ret = -EIO;
+	}
+
+	/* Let the architecture cleanup the kick alive mechanics. */
+	arch_cpuhp_cleanup_kick_cpu(cpu);
+	return ret;
+}
+#else /* CONFIG_HOTPLUG_CORE_SYNC_FULL */
+static inline int cpuhp_bp_sync_alive(unsigned int cpu) { return 0; }
+static inline bool cpuhp_can_boot_ap(unsigned int cpu) { return true; }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC_FULL */
+
 /* Serializes the updates to cpu_online_mask, cpu_present_mask */
 static DEFINE_MUTEX(cpu_add_remove_lock);
 bool cpuhp_tasks_frozen;
@@ -588,6 +767,9 @@ static int bringup_cpu(unsigned int cpu)
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
+	if (!cpuhp_can_boot_ap(cpu))
+		return -EAGAIN;
+
 	/*
 	 * Reset stale stack state from the last time this CPU was online.
 	 */
@@ -606,6 +788,10 @@ static int bringup_cpu(unsigned int cpu)
 	if (ret)
 		goto out_unlock;
 
+	ret = cpuhp_bp_sync_alive(cpu);
+	if (ret)
+		goto out_unlock;
+
 	ret = bringup_wait_for_ap_online(cpu);
 	if (ret)
 		goto out_unlock;
@@ -1109,6 +1295,8 @@ static int takedown_cpu(unsigned int cpu
 	/* This actually kills the CPU. */
 	__cpu_die(cpu);
 
+	cpuhp_bp_sync_dead(cpu);
+
 	tick_cleanup_dead_cpu(cpu);
 	rcutree_migrate_callbacks(cpu);
 	return 0;
@@ -1355,8 +1543,10 @@ void cpuhp_online_idle(enum cpuhp_state
 	if (state != CPUHP_AP_ONLINE_IDLE)
 		return;
 
+	cpuhp_ap_update_sync_state(SYNC_STATE_ONLINE);
+
 	/*
-	 * Unpart the stopper thread before we start the idle loop (and start
+	 * Unpark the stopper thread before we start the idle loop (and start
 	 * scheduling); this ensures the stopper task is always available.
 	 */
 	stop_machine_unpark(smp_processor_id());
@@ -2733,6 +2923,7 @@ void __init boot_cpu_hotplug_init(void)
 {
 #ifdef CONFIG_SMP
 	cpumask_set_cpu(smp_processor_id(), &cpus_booted_once_mask);
+	atomic_set(this_cpu_ptr(&cpuhp_state.ap_sync_state), SYNC_STATE_ONLINE);
 #endif
 	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
 	this_cpu_write(cpuhp_state.target, CPUHP_ONLINE);
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -326,6 +326,7 @@ void smpboot_unregister_percpu_thread(st
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
 
+#ifndef CONFIG_HOTPLUG_CORE_SYNC
 static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
 
 /*
@@ -488,3 +489,4 @@ bool cpu_report_death(void)
 }
 
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC */



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:10:07 2023
Message-ID: <20230504185937.423370586@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Subject: [patch V2 20/38] cpu/hotplug: Remove cpu_report_state() and related unused cruft
References: <20230504185733.126511787@linutronix.de>
Date: Thu,  4 May 2023 21:02:30 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

No more users.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 include/linux/cpu.h |    2 -
 kernel/smpboot.c    |   90 ----------------------------------------------------
 2 files changed, 92 deletions(-)
---
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -184,8 +184,6 @@ void arch_cpu_idle_enter(void);
 void arch_cpu_idle_exit(void);
 void __noreturn arch_cpu_idle_dead(void);
 
-int cpu_report_state(int cpu);
-int cpu_check_up_prepare(int cpu);
 void cpu_set_state_online(int cpu);
 void play_idle_precise(u64 duration_ns, u64 latency_ns);
 
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -329,97 +329,7 @@ EXPORT_SYMBOL_GPL(smpboot_unregister_per
 #ifndef CONFIG_HOTPLUG_CORE_SYNC
 static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
 
-/*
- * Called to poll specified CPU's state, for example, when waiting for
- * a CPU to come online.
- */
-int cpu_report_state(int cpu)
-{
-	return atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-}
-
-/*
- * If CPU has died properly, set its state to CPU_UP_PREPARE and
- * return success.  Otherwise, return -EBUSY if the CPU died after
- * cpu_wait_death() timed out.  And yet otherwise again, return -EAGAIN
- * if cpu_wait_death() timed out and the CPU still hasn't gotten around
- * to dying.  In the latter two cases, the CPU might not be set up
- * properly, but it is up to the arch-specific code to decide.
- * Finally, -EIO indicates an unanticipated problem.
- *
- * Note that it is permissible to omit this call entirely, as is
- * done in architectures that do no CPU-hotplug error checking.
- */
-int cpu_check_up_prepare(int cpu)
-{
-	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
-		return 0;
-	}
-
-	switch (atomic_read(&per_cpu(cpu_hotplug_state, cpu))) {
-
-	case CPU_POST_DEAD:
-
-		/* The CPU died properly, so just start it up again. */
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
-		return 0;
-
-	case CPU_DEAD_FROZEN:
-
-		/*
-		 * Timeout during CPU death, so let caller know.
-		 * The outgoing CPU completed its processing, but after
-		 * cpu_wait_death() timed out and reported the error. The
-		 * caller is free to proceed, in which case the state
-		 * will be reset properly by cpu_set_state_online().
-		 * Proceeding despite this -EBUSY return makes sense
-		 * for systems where the outgoing CPUs take themselves
-		 * offline, with no post-death manipulation required from
-		 * a surviving CPU.
-		 */
-		return -EBUSY;
-
-	case CPU_BROKEN:
-
-		/*
-		 * The most likely reason we got here is that there was
-		 * a timeout during CPU death, and the outgoing CPU never
-		 * did complete its processing.  This could happen on
-		 * a virtualized system if the outgoing VCPU gets preempted
-		 * for more than five seconds, and the user attempts to
-		 * immediately online that same CPU.  Trying again later
-		 * might return -EBUSY above, hence -EAGAIN.
-		 */
-		return -EAGAIN;
-
-	case CPU_UP_PREPARE:
-		/*
-		 * Timeout while waiting for the CPU to show up. Allow to try
-		 * again later.
-		 */
-		return 0;
-
-	default:
-
-		/* Should not happen.  Famous last words. */
-		return -EIO;
-	}
-}
-
-/*
- * Mark the specified CPU online.
- *
- * Note that it is permissible to omit this call entirely, as is
- * done in architectures that do no CPU-hotplug error checking.
- */
-void cpu_set_state_online(int cpu)
-{
-	(void)atomic_xchg(&per_cpu(cpu_hotplug_state, cpu), CPU_ONLINE);
-}
-
 #ifdef CONFIG_HOTPLUG_CPU
-
 /*
  * Wait for the specified CPU to exit the idle loop and die.
  */



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:10:09 2023
Message-ID: <20230504185938.179661118@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226974;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=dh/52GRhLbpp4gh50fAuvfQix7j7Ih1160L/CdLI8l0=;
	b=O6cG1HBehiX74K9JFp369TIKx6mKBR2TPreNtZ0JIxAQi+1KIShpZOLr9yn98NyJ2DVeV2
	KoRI+hkYEu5smHES9pGvnLyoWtwb4Bp/4BfVhvbHAfxZcdiWB7o0B0Ira+4IzxwLMlhRJ+
	sMHMr6Be0jbWVJhRKWMRzaormlwxyZ+o2EofkewXORpIgLbjVpvUxjCQcSQl+YfTMlHZ9T
	UE8rmtHne4Y5TmubM7NIWK98QCHj+0F+rqBtOZSFGM77VBpgrVetzsILoS9DByN2vrAczS
	5+Uo3R/A6vnceg12D8eqIMTFGAGJV6sRpxtSZ3hp8//m2IPi36uIIWtNhU3PLg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226974;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=dh/52GRhLbpp4gh50fAuvfQix7j7Ih1160L/CdLI8l0=;
	b=2Mr8xnCCJXQNYK40wYiyDitX1rtgKVSav9FlO5VPGJbI6wVmqI0M9y1M9rUGvQKL9tG1OS
	lphjzmGYwmWjK5CQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Ven <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch V2 34/38] x86/cpu/amd: Invoke detect_extended_topology_early()
 on boot CPU
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:53 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The early detection stores the extended topology leaf number which is
required for parallel hotplug.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

---
 arch/x86/kernel/cpu/amd.c |    2 ++
 1 file changed, 2 insertions(+)
---
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -692,6 +692,8 @@ static void early_init_amd(struct cpuinf
 		}
 	}
 
+	detect_extended_topology_early(c);
+
 	if (cpu_has(c, X86_FEATURE_TOPOEXT))
 		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
 }



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:10:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:10:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.529986.825184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueLC-0005ZB-TJ; Thu, 04 May 2023 19:10:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 529986.825184; Thu, 04 May 2023 19:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueLC-0005VT-BD; Thu, 04 May 2023 19:10:10 +0000
Received: by outflank-mailman (input) for mailman id 529986;
 Thu, 04 May 2023 19:10:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=srgM=AZ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pueDv-00042k-Hl
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:02:39 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3b9b06d7-eaae-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:02:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b9b06d7-eaae-11ed-8611-37d641c3527e
Message-ID: <20230504185937.584277566@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683226956;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=wTNocxrIkrZERW1CgHrOxDfBd1NuTsgJS4AJ8mYPyyQ=;
	b=Mi46V8eHgoltNQxPJmnbu9i1N0WqNbjlBS5gXzUQdYuzDGSlRAwFIn42Fl4qTDQ9tpMOpi
	9pMD9eqFIhnqecwmKQOcPdssVAfxr98Ufnj8kPfEnna6Jg1FJPiFDXjB5qyYPeKOEBzagH
	d9bGKI26ZnIO1dmzo62pzKDwF621ZdxsTsNqH8faar1/6RQwLMfz9EzVFpDrERESz08UzT
	RKiX6TWVgbEvDpDMLz5ptMwZldZAxWrvz613h9Whj+rnlmmRseJTa9scp2XVHyZRgXYYx9
	W9B8x3jWUGKujnEYbsdrx7KzjQ+swT2YMyT9UZUBAmo9KzwEdkHLM2GZWs0XOQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683226956;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=wTNocxrIkrZERW1CgHrOxDfBd1NuTsgJS4AJ8mYPyyQ=;
	b=4wN87waBqOWq8wTSM62nHwIaB3nHKF5Odvy83jhqjVG4mxeKOSh6qxwJk2zaru4HlpMduu
	ieIS3XjNdw5x/MBw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Ven <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch V2 23/38] csky/smp: Switch to hotplug core state synchronization
References: <20230504185733.126511787@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Thu,  4 May 2023 21:02:35 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Guo Ren <guoren@kernel.org>
Cc: linux-csky@vger.kernel.org

---
 arch/csky/Kconfig           |    1 +
 arch/csky/include/asm/smp.h |    2 +-
 arch/csky/kernel/smp.c      |    8 ++------
 3 files changed, 4 insertions(+), 7 deletions(-)
---
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -96,6 +96,7 @@ config CSKY
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select MAY_HAVE_SPARSE_IRQ
 	select MODULES_USE_ELF_RELA if MODULES
 	select OF
--- a/arch/csky/include/asm/smp.h
+++ b/arch/csky/include/asm/smp.h
@@ -23,7 +23,7 @@ void __init set_send_ipi(void (*func)(co
 
 int __cpu_disable(void);
 
-void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 
 #endif /* CONFIG_SMP */
 
--- a/arch/csky/kernel/smp.c
+++ b/arch/csky/kernel/smp.c
@@ -291,12 +291,8 @@ int __cpu_disable(void)
 	return 0;
 }
 
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: shutdown failed\n", cpu);
-		return;
-	}
 	pr_notice("CPU%u: shutdown\n", cpu);
 }
 
@@ -304,7 +300,7 @@ void __noreturn arch_cpu_idle_dead(void)
 {
 	idle_task_exit();
 
-	cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	while (!secondary_stack)
 		arch_cpu_idle();



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:18:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:18:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530007.825203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueT3-0001tY-P3; Thu, 04 May 2023 19:18:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530007.825203; Thu, 04 May 2023 19:18:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueT3-0001tR-MQ; Thu, 04 May 2023 19:18:17 +0000
Received: by outflank-mailman (input) for mailman id 530007;
 Thu, 04 May 2023 19:18:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pueT2-0001tH-LX; Thu, 04 May 2023 19:18:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pueT2-0007BZ-Fh; Thu, 04 May 2023 19:18:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pueT1-0007TK-Sm; Thu, 04 May 2023 19:18:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pueT1-0006md-SE; Thu, 04 May 2023 19:18:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4Bo4v/HutPsvfpVlgU4XJGi9RIMyQ++bhbTswozKQgg=; b=MhumB86tfghEbqTi1GKUQKA3lc
	w7wBACp6uvKR7huM/cFha/fklYosTWJurYNR5g6WqGd0b9jcpinzBKjx8Fh6ngUUJxXr2bbixoc//
	CkaImOGKBbsqgDkwLVIofwt+8TVye4jGkfzmchI8ITzKHtzFfZCYbWHvCiUj+4urhGik=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180528-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180528: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b95a72bb5b2df24ff1baaa27920e57947dc97d49
X-Osstest-Versions-That:
    xen=b033eddc9779109c06a26936321d27a2ef4e088b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 May 2023 19:18:15 +0000

flight 180528 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180528/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180511
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180511
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180511
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180511
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180511
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180511
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180511
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180511
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 180511
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180511
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180511
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180511
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180511
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 xen                  b95a72bb5b2df24ff1baaa27920e57947dc97d49
baseline version:
 xen                  b033eddc9779109c06a26936321d27a2ef4e088b

Last test of basis   180511  2023-05-03 01:53:22 Z    1 days
Failing since        180519  2023-05-03 15:38:32 Z    1 days    2 attempts
Testing same since   180528  2023-05-04 08:14:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Henry Wang <Henry.Wang@arm.com> # CHANGELOG
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Sergey Dyasli <sergey.dyasli@citrix.com>
  Viresh Kumar <viresh.kumar@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b033eddc97..b95a72bb5b  b95a72bb5b2df24ff1baaa27920e57947dc97d49 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 04 19:39:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530039.825266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueng-0006HW-IR; Thu, 04 May 2023 19:39:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530039.825266; Thu, 04 May 2023 19:39:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueng-0006Gh-DF; Thu, 04 May 2023 19:39:36 +0000
Received: by outflank-mailman (input) for mailman id 530039;
 Thu, 04 May 2023 19:39:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puenf-00069W-P1
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:39:35 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 650fb147-eab3-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:39:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 650fb147-eab3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683229173;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=8aejV8Ms7L6eLYNlIrFeJCAd+u3GgX1dUF2hUOuDAqc=;
  b=bMSQa8HD9Y18I4fZhvRN50W3irUMT2e5OkugtiVFEZE4Z08s0osCLB4d
   HELX/3mbHSlNMjclQvB6gSOsw1BvdnyGtPb1elFhhtfroxLORJgJTpMkt
   uIhxLmTPrB6IjEYxqmM04ClpxrojiJAjmd5z9xWvixpV09Y1ss8n6ORPB
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107797743
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:JkhA6Ki0Gqj2jcP4KvVDUeTmX161ERAKZh0ujC45NGQN5FlHY01je
 htvWmvUa/uPM2L1e41zYI7go0tSvpPQm9c2SQZvqSBjEigb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWj0N8klgZmP6sT4QeCzyN94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tRCCQ8DUBahqd7mxavnRPBrxeYKC+/CadZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJYw1MYH49tL7Aan3XWjtUsl+K44Ew5HDe1ldZ27nxKtvFPNeNQK25m27B/
 j2ZrjumXkpy2Nq31Xmhz2m2xcbzmj7/YqIMF4+p+/pnjwjGroAUIEJPDgbqyRWjsWahX/pPJ
 kpS/TAhxYAi+UruQtTjUhmQpH+fogVaS9dWC/c96gyG1uzT+QnxLkouQyNFadcmnNQrXjFs3
 ViM9+4FHhQ27ufTEyjEsO7J83XrY3N9wXI+iTEsFyo67eflgq8P3hfWEth6F+2Xp/rTMGSlq
 9yVlxTSl4n/nOZSifXgpQmd023zznTaZlVrv1uKBwpJ+is8Pdf4PNLwtDA3+N4adO6kok+9U
 G/ociR0xMQHFtmzmSOEW43h95n5tq/eYFUwbbOCdqTNFghBGFb5J+i8GBkkeC9U3j8sIFcFm
 nP7twJL/4N0N3C3d6JxaI/ZI510nfO6T4q5D6GMNYUmjn1NSeN61Hs2OR74M57FySDAbp3Ty
 b/EKJ3xXB72+IxszSasRvd17ILHMhsWnDuJLbiilkTP7FZrTCLNIVvzGAfUP79RAWLtiFm9z
 uuzwOPQlk4BDbekOXGImWPRRHhTRUUG6VnNg5Q/Xoa+zsBOQgnN19e5LWsdRrFY
IronPort-HdrOrdr: A9a23:eEuL2qBISPMCaULlHemW55DYdb4zR+YMi2TC1yhKKCC9Ffbo7/
 xG/c5rrCMc5wxhO03I9eruBEDEewK5yXcX2/h2AV7BZniFhILAFugLhuGOrwEIWReOkdK1vZ
 0QCJSWY+eRMbEVt6jHCXGDYrMd/OU=
X-Talos-CUID: 9a23:ePTaoW5hLF61aKfludssrm8rFP4jUFPhnHLJf0GhDiVLYZqzYArF
X-Talos-MUID: 9a23:klefQgWJbJvTq6Xq/COvoAx5LoRB36OzDHotj5Y0qsSAOCMlbg==
X-IronPort-AV: E=Sophos;i="5.99,250,1677560400"; 
   d="scan'208";a="107797743"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 2/6] x86/cpuid: Rename NCAPINTS to X86_NR_CAPS
Date: Thu, 4 May 2023 20:39:20 +0100
Message-ID: <20230504193924.3305496-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The latter is more legible, and consistent with X86_NR_{SYNTH,BUG} which
already exist.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/alternative.c             |  2 +-
 xen/arch/x86/cpu/common.c              | 12 ++++++------
 xen/arch/x86/cpuid.c                   |  2 +-
 xen/arch/x86/include/asm/cpufeature.h  |  2 +-
 xen/arch/x86/include/asm/cpufeatures.h |  2 +-
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/alternative.c b/xen/arch/x86/alternative.c
index 99482766b51f..0434030693a9 100644
--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -200,7 +200,7 @@ static void init_or_livepatch _apply_alternatives(struct alt_instr *start,
 
         BUG_ON(a->repl_len > total_len);
         BUG_ON(total_len > sizeof(buf));
-        BUG_ON(a->cpuid >= NCAPINTS * 32);
+        BUG_ON(a->cpuid >= X86_NR_CAPS * 32);
 
         /*
          * Detect sequences of alt_instr's patching the same origin site, and
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index edc4db1335eb..1be049e332ce 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -55,8 +55,8 @@ unsigned int paddr_bits __read_mostly = 36;
 unsigned int hap_paddr_bits __read_mostly = 36;
 unsigned int vaddr_bits __read_mostly = VADDR_BITS;
 
-static unsigned int cleared_caps[NCAPINTS];
-static unsigned int forced_caps[NCAPINTS];
+static unsigned int cleared_caps[X86_NR_CAPS];
+static unsigned int forced_caps[X86_NR_CAPS];
 
 DEFINE_PER_CPU(bool, full_gdt_loaded);
 
@@ -501,7 +501,7 @@ void identify_cpu(struct cpuinfo_x86 *c)
 
 #ifdef NOISY_CAPS
 	printk(KERN_DEBUG "CPU: After vendor identify, caps:");
-	for (i = 0; i < NCAPINTS; i++)
+	for (i = 0; i < X86_NR_CAPS; i++)
 		printk(" %08x", c->x86_capability[i]);
 	printk("\n");
 #endif
@@ -530,7 +530,7 @@ void identify_cpu(struct cpuinfo_x86 *c)
 	for (i = 0; i < FSCAPINTS; ++i)
 		c->x86_capability[i] &= known_features[i];
 
-	for (i = 0 ; i < NCAPINTS ; ++i) {
+	for (i = 0 ; i < X86_NR_CAPS ; ++i) {
 		c->x86_capability[i] |= forced_caps[i];
 		c->x86_capability[i] &= ~cleared_caps[i];
 	}
@@ -548,7 +548,7 @@ void identify_cpu(struct cpuinfo_x86 *c)
 
 #ifdef NOISY_CAPS
 	printk(KERN_DEBUG "CPU: After all inits, caps:");
-	for (i = 0; i < NCAPINTS; i++)
+	for (i = 0; i < X86_NR_CAPS; i++)
 		printk(" %08x", c->x86_capability[i]);
 	printk("\n");
 #endif
@@ -585,7 +585,7 @@ void identify_cpu(struct cpuinfo_x86 *c)
 	 */
 	if ( c != &boot_cpu_data ) {
 		/* AND the already accumulated flags with these */
-		for ( i = 0 ; i < NCAPINTS ; i++ )
+		for ( i = 0 ; i < X86_NR_CAPS ; i++ )
 			boot_cpu_data.x86_capability[i] &= c->x86_capability[i];
 
 		mcheck_init(c, false);
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 455a09b2dd22..fd8021c6f16c 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -19,7 +19,7 @@ bool recheck_cpu_features(unsigned int cpu)
 
     identify_cpu(&c);
 
-    for ( i = 0; i < NCAPINTS; ++i )
+    for ( i = 0; i < X86_NR_CAPS; ++i )
     {
         if ( !(~c.x86_capability[i] & bsp->x86_capability[i]) )
             continue;
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 4140ec0938b2..66bd4e296a18 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -26,7 +26,7 @@ struct cpuinfo_x86 {
     unsigned char x86_mask;
     int cpuid_level;                   /* Maximum supported CPUID level, -1=no CPUID */
     unsigned int extended_cpuid_level; /* Maximum supported CPUID extended level */
-    unsigned int x86_capability[NCAPINTS];
+    unsigned int x86_capability[X86_NR_CAPS];
     char x86_vendor_id[16];
     char x86_model_id[64];
     unsigned int x86_cache_size;       /* in KB - valid only when supported */
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index da0593de8542..e982ee920ce1 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -53,4 +53,4 @@ XEN_CPUFEATURE(IBPB_ENTRY_HVM,    X86_SYNTH(29)) /* MSR_PRED_CMD used by Xen for
 #define X86_BUG_IBPB_NO_RET       X86_BUG( 3) /* IBPB doesn't flush the RSB/RAS */
 
 /* Total number of capability words, inc synth and bug words. */
-#define NCAPINTS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
+#define X86_NR_CAPS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:39:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530040.825273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueng-0006Mu-UU; Thu, 04 May 2023 19:39:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530040.825273; Thu, 04 May 2023 19:39:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueng-0006K7-L0; Thu, 04 May 2023 19:39:36 +0000
Received: by outflank-mailman (input) for mailman id 530040;
 Thu, 04 May 2023 19:39:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puenf-00069l-T7
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:39:35 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65d9d6e2-eab3-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:39:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65d9d6e2-eab3-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683229175;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Jr5O2ho0oh2dy6YyNewvIFzCxvBrXOJlZmgmV269Q/k=;
  b=bs2AD/Fs7u8FRnFxMUlW3QRP1fUgBjsvmJorU8NJ2LZzdeKpZyrIfaSb
   rQ2yZAo16A/RlrVY1gQts8UuiF7iGAYzQCItZCYZL0ULaQrhMQKMFzNtb
   eoYjBsI3CgDXHg1XQhIOH1X6K3KnJgElM2PMXI6s+2SOKNi1vd1xQ4dxF
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107931611
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:kJpQzKuPcYcZMGFx809kXMhjeOfnVCdeMUV32f8akzHdYApBsoF/q
 tZmKTyCa6yKMGX2foh0aY209hkDu5SAxtFrHQVkpHsxHigR+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg3HVQ+IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4bKj6Vv0gnRkPaoQ5AKGyyFPZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwEz8saE25nNiPkY2FStA3r50DE5HxBdZK0p1g5Wmx4fcORJnCR+PB5MNC3Sd2jcdLdRrcT
 5NHM3w1Nk2GOkARfA5NU/rSn8/x7pX7WxRepEiYuuwc5G/LwRYq+LPsLMDUapqBQsA9ckOw/
 zqZrj+gXEhDXDCZ4WSP9nzzud3dpwDce4NIN7GX7uYyonTGkwT/DzVJDADm8JFVkHWWRNZ3O
 0ESvC00osAa5EGtC9XwQRC8iHqFpQIHHcpdFfUg7wOAwbaS5ByWbkAGRDNcbN0ttOctWCcnk
 FSOmrvU6SdH6ePPDyjHr/HN8G30YHJORYMfWcMaZTAKwt++mpoJt0PwcNZaS4fsruKtAwill
 lhmsxMCa6UvYd8jjvvrpgie2WLz+fAlXSZuuFyJAzvNAhdRIdf8Otf2sQWzAeNodt7xc7WXg
 JQTdyFyBsgqBIrFqiGCSf5l8FqBt6fca220bbKC8vAcG9WRF52LJ9o4DMlWfhsBDyr9UWaBj
 LXvkQ1Q/oRPG3ChcLV6ZYm8Y+xzk/i7T467CqmFNoERCnSUSDJrAQk0PRLAt4wTuBFEfV4D1
 WezLp/3UCdy5VVPxzuqXeYNuYIWKtQF7TqLH/jTlk33uYdykVbJEd/pxnPSNLFmhE5FyS2Jm
 +ti2zyikUgEDrCkOHKPrub+7zkidBAGOHw/kOQPHsbrH+asMDtJ5yP5qV/5R7FYog==
IronPort-HdrOrdr: A9a23:dcGjLqjOHA+0jiMQx7THnu5Q8nBQXg4ji2hC6mlwRA09TyX4rb
 HKoB1/73LJYVkqN03I9ervBED4ewK5yXct2/h3AV7AZniFhILLFu1fBOLZqlWLJ8SZzI9gPM
 9bGJSWY+eAbmSS4/yb3OFte+xQueVu78iT9JjjJ2YEd3ANV0l/hz0JcjpzGHcGODWvWPICZe
 GhDtIunUvbRZwNBv7Le0U4Yw==
X-Talos-CUID: 9a23:ujj73W9qBqA70R+GzX2Vv3UbO8Z4SGbn9SbvHU+BUk8waILNU2bFrQ==
X-Talos-MUID: =?us-ascii?q?9a23=3AoOu8Lw9dfBjUJW2cTSXRQG6Qf5li56rxN3Ifq7Y?=
 =?us-ascii?q?hmMjeFiBfIy/Frh3iFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,250,1677560400"; 
   d="scan'208";a="107931611"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH 5/6] x86/cpu-policy: Disentangle X86_NR_FEAT and FEATURESET_NR_ENTRIES
Date: Thu, 4 May 2023 20:39:23 +0100
Message-ID: <20230504193924.3305496-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

When adding new words to a featureset, there is a reasonable amount of
boilerplate, and it is preferable to split the addition into multiple patches.

GCC 12 spotted a real (transient) error which occurs when splitting additions
like this.  Right now, FEATURESET_NR_ENTRIES is dynamically generated from the
highest numeric XEN_CPUFEATURE() value, and can be less than what the
FEATURESET_* constants suggest the length of a featureset bitmap ought to be.

This causes the policy <-> featureset converters to genuinely access
out-of-bounds on the featureset array.

Rework X86_NR_FEAT to be related to FEATURESET_* alone, allowing it
specifically to grow larger than FEATURESET_NR_ENTRIES.

Reported-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

To preempt what I expect will be the first review question: no, FEATURESET_*
can't become an enumeration, because the constants undergo token concatenation
in the preprocessor as part of making DECL_BITFIELD() work.
---
 xen/arch/x86/cpu-policy.c              | 7 +++++++
 xen/arch/x86/include/asm/cpufeatures.h | 5 +----
 xen/include/xen/lib/x86/cpu-policy.h   | 4 ++--
 xen/include/xen/lib/x86/cpuid-consts.h | 2 ++
 xen/lib/x86/cpuid.c                    | 6 +++---
 5 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 774c512a03bd..00416244a3d8 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -883,6 +883,13 @@ void __init init_dom0_cpuid_policy(struct domain *d)
 
 static void __init __maybe_unused build_assertions(void)
 {
+    /*
+     * Generally these are the same, but tend to differ when adding new
+     * infrastructure split across several patches.  Simply confirm that the
+     * gen-cpuid.py X86_FEATURE_* bits fit within the bitmaps we operate on.
+     */
+    BUILD_BUG_ON(FEATURESET_NR_ENTRIES > X86_NR_FEAT);
+
     /* Find some more clever allocation scheme if this trips. */
     BUILD_BUG_ON(sizeof(struct cpu_policy) > PAGE_SIZE);
 
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index 408ab4ba16a5..8989291bbfd6 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -2,10 +2,7 @@
  * Explicitly intended for multiple inclusion.
  */
 
-#include <xen/lib/x86/cpuid-autogen.h>
-
-/* Number of capability words covered by the featureset words. */
-#define X86_NR_FEAT FEATURESET_NR_ENTRIES
+#include <xen/lib/x86/cpuid-consts.h>
 
 /* Synthetic words follow the featureset words. */
 #define X86_NR_SYNTH 1
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index e9bda14a7595..01431de056c8 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -370,12 +370,12 @@ struct cpu_policy_errors
  * Copy the featureset words out of a cpu_policy object.
  */
 void x86_cpu_policy_to_featureset(const struct cpu_policy *p,
-                                  uint32_t fs[FEATURESET_NR_ENTRIES]);
+                                  uint32_t fs[X86_NR_FEAT]);
 
 /**
  * Copy the featureset words back into a cpu_policy object.
  */
-void x86_cpu_featureset_to_policy(const uint32_t fs[FEATURESET_NR_ENTRIES],
+void x86_cpu_featureset_to_policy(const uint32_t fs[X86_NR_FEAT],
                                   struct cpu_policy *p);
 
 static inline uint64_t cpu_policy_xcr0_max(const struct cpu_policy *p)
diff --git a/xen/include/xen/lib/x86/cpuid-consts.h b/xen/include/xen/lib/x86/cpuid-consts.h
index 6ca8c39a3df4..9fe931b8e31f 100644
--- a/xen/include/xen/lib/x86/cpuid-consts.h
+++ b/xen/include/xen/lib/x86/cpuid-consts.h
@@ -21,6 +21,8 @@
 #define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
 #define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
 
+#define X86_NR_FEAT (FEATURESET_7d1 + 1)
+
 #endif /* !XEN_LIB_X86_CONSTS_H */
 
 /*
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 68aafb404927..76f26e92af8d 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -61,7 +61,7 @@ const char *x86_cpuid_vendor_to_str(unsigned int vendor)
 }
 
 void x86_cpu_policy_to_featureset(
-    const struct cpu_policy *p, uint32_t fs[FEATURESET_NR_ENTRIES])
+    const struct cpu_policy *p, uint32_t fs[X86_NR_FEAT])
 {
     fs[FEATURESET_1d]        = p->basic._1d;
     fs[FEATURESET_1c]        = p->basic._1c;
@@ -82,7 +82,7 @@ void x86_cpu_policy_to_featureset(
 }
 
 void x86_cpu_featureset_to_policy(
-    const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpu_policy *p)
+    const uint32_t fs[X86_NR_FEAT], struct cpu_policy *p)
 {
     p->basic._1d             = fs[FEATURESET_1d];
     p->basic._1c             = fs[FEATURESET_1c];
@@ -285,7 +285,7 @@ const uint32_t *x86_cpu_policy_lookup_deep_deps(uint32_t feature)
     static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
     static const struct {
         uint32_t feature;
-        uint32_t fs[FEATURESET_NR_ENTRIES];
+        uint32_t fs[X86_NR_FEAT];
     } deep_deps[] = INIT_DEEP_DEPS;
     unsigned int start = 0, end = ARRAY_SIZE(deep_deps);
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:39:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530041.825297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueni-00075v-I2; Thu, 04 May 2023 19:39:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530041.825297; Thu, 04 May 2023 19:39:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueni-00075k-7a; Thu, 04 May 2023 19:39:38 +0000
Received: by outflank-mailman (input) for mailman id 530041;
 Thu, 04 May 2023 19:39:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pueng-00069W-U1
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:39:36 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6527462a-eab3-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:39:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6527462a-eab3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683229174;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=zzSgC6HAWJ6nHal+JPSc8THZj+muKjktvIIOzu9Oncs=;
  b=cF83aT6l8/tYaKNgyls/1FiSDoGc3pHbN7ueZNsHgXjfweStpxpGCjNB
   9Grpmg7tQVQrJPgsP20Zp3i3wYVHnpu49BVSF89fePxiD6t0VRn8DbpC+
   LltchoHfx0suwzmsR/SCrKDtZCxHRv8PTg2jjU98cfuc3KHEbEA+VZtft
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107797746
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:xtR7A6qaSWy9tRyHh+JDUNRPxhxeBmJpZRIvgKrLsJaIsI4StFCzt
 garIBmGOPmMM2XyKNhzaYXkpEwE78ODmoRhSFNqrHs3ES1D85uZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WJwUmAWP6gR5weDzyVNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAHMmVTCFi8aX/Orhe7FXjPoxM8XoDrpK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVxrl6PqLVxyG/U1AFri5DmMcbPe8zMTsJQ9qqdj
 jueoDuoXU5GarRzzxKgzkywvu+MwhrbG5xOMLel79JAkly6kzl75Bo+CgLg/KjRZlSFc8JSL
 QkY9zQjqYA29Ve3VZ/tUhugunmGsxUAHd1KHIUSyiuA167V6AaxHXUfQ3hKb9lOnNAybSwn0
 BmOhdyBONB0mOTLEzTHrO7S9G7sf3FPdgfueBPoUyNGyOLDpo0Xry6XFOc7K6SLnML5GgPJl
 mXiQDcFu1kDsSIa//zlrQuf2mj8+cehoh0dvVuOAD/8hu9tTMv8PtHztwCGhRpVBNzBJmRtq
 kTojCR3AAomKZiW3BKAT+wWdF1Cz6bUaWaM6bKD8nRIythMx5JAVdoKiN2GDB01WvvogBewC
 KMphStf5YVIIFyhZrJtboS6BqwClPawTo6/CKyNP4IVPfCdkTO6ENxGPxbMjwgBbmB1+U3AB
 XtrWZn1VitLYUiW5DG3W/0cwdcW+8zK/kuKHcqT503+gdKjiIu9Fe9t3K2mMrpos8tpYWz9r
 75iCid9408CC7OjOHOMqdF7wJJjBSFTOK0aYvd/LoarSjeK0kl7YxMN6dvNo7BYopk=
IronPort-HdrOrdr: A9a23:P+/wOKvIVTjVFEL5TrabSDgW7skDTtV00zEX/kB9WHVpmszxra
 6TdZMgpGbJYVcqKRcdcL+7WJVoLUmxyXcx2/h1AV7AZniAhILLFvAA0WKK+VSJcEeSygce79
 YFT0EXMqyJMbEQt6fHCWeDfOrIuOP3kpyVuQ==
X-Talos-CUID: =?us-ascii?q?9a23=3ACHfxhGkBUSk+04PNzU9cOw5MZdHXOT7291fWL2u?=
 =?us-ascii?q?ZNT4zeeWuQ2GC569nofM7zg=3D=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3AqQyYbA5W6tsc8JvWB0+MugbDxoxN35qyMEBOzqw?=
 =?us-ascii?q?m+NfYESZtIhu3njWeF9o=3D?=
X-IronPort-AV: E=Sophos;i="5.99,250,1677560400"; 
   d="scan'208";a="107797746"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 1/6] x86/cpu-policy: Drop build time cross-checks of featureset sizes
Date: Thu, 4 May 2023 20:39:19 +0100
Message-ID: <20230504193924.3305496-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

These BUILD_BUG_ON()s exist to cover the curious absence of a diagnostic for
code which looks like:

  uint32_t foo[1] = { 1, 2, 3 };

However, GCC 12 at least does now warn for this:

  foo.c:1:24: error: excess elements in array initializer [-Werror]
    884 | uint32_t foo[1] = { 1, 2, 3 };
        |                        ^
  foo.c:1:24: note: (near initialization for 'foo')

and has found other array length issues which we want to fix.  Drop the
cross-check now that tools can spot the problem case directly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index ef6a2d0d180a..44c88debf958 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -883,12 +883,6 @@ void __init init_dom0_cpuid_policy(struct domain *d)
 
 static void __init __maybe_unused build_assertions(void)
 {
-    BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
-    BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
-    BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
-    BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
-    BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
-
     /* Find some more clever allocation scheme if this trips. */
     BUILD_BUG_ON(sizeof(struct cpu_policy) > PAGE_SIZE);
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:39:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530042.825306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puenj-0007PH-MJ; Thu, 04 May 2023 19:39:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530042.825306; Thu, 04 May 2023 19:39:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puenj-0007O9-GG; Thu, 04 May 2023 19:39:39 +0000
Received: by outflank-mailman (input) for mailman id 530042;
 Thu, 04 May 2023 19:39:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pueni-00069W-8I
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:39:38 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 667cad8e-eab3-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:39:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 667cad8e-eab3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683229175;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=8ce56R8LVcBJfEp4OPYpZgXky/uTBbu+6PODD4ktX/M=;
  b=ZXVEiBV20OPRjcoDnNVtzOJ1n+Lj96JLE/C8t1e13lI6XzGI6GEBXtwT
   e+Bs5Y6qp6Re/UNCNLJep03roQCqjs1dON45tTGjjsbOd+HooNKVzad6x
   dvtFg7Sie1VFlqNPmr70II+Iz2dKDNh1PNTFKcZWokZNYlXU7VZAVscW1
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107797745
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:xdERWaOw/3iHgwzvrR3Ul8FynXyQoLVcMsEvi/4bfWQNrUoh32NSz
 mZNWWuBPvrYM2agKd5yatng8EoP7JfWzYVkGwto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tF5gBmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0rxWPm9w8
 942FGAId0yNnuX14pucbMA506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLoXmuuyi2a5WDpfsF+P/oI84nTJzRw327/oWDbQUoXSGpoEwRnE+
 woq+UyjJBhdLvKWlwaMrGuKusSRxnmmQI87QejQGvlC3wTImz175ActfUu2p7y1h1CzX/pbK
 lcI4Ww+oK4q7kupQ9LhGRqirxasnDQRRt5RGO0S8xyWx+zf5APxLncAZi5MbpohrsBeeNAx/
 gbXxZWzX2Up6eDLDyvHrd94sA9eJwBPDFAMWykmYzdV5sC/rpg0zTDmafBKRfvdYsLOJd3g/
 9ybhHFg1+5L1JRbiPrTEUPv2Gz1+MWQJuIhzkCOBz/+sFskDGKwT9bwgWU3+8qsO2pworOpm
 HEf0/aT4+kVZX1mvHzcGb5ddF1FChvsDdE9vbKMN8N7n9hV0yT/Fb28GRknTKuTDu4KeCXyf
 GjYsh5L6ZlYMROCNPEnONjrVZhznPC7SrwJs8zpgidmOMAtJGdrAgk3DaJv44wduBd1yvxuU
 XtqWc2tEWwbGcxa8dZCfM9EieVD7nlnlQvuqWXTk0zPPUy2OCTEFt/o8TKmMogE0U9ziF+Iq
 IgCbZHRk0o3vS+XSnC/zLP/5GsidRATba0aYeQNHgJfCmKKwF0cNsI=
IronPort-HdrOrdr: A9a23:t+PfxKxYwKdSpjb00wbfKrPw6L1zdoMgy1knxilNoHxuH/Bw9v
 re+cjzsCWftN9/Yh4dcLy7VpVoIkmsl6Kdg7NwAV7KZmCP1FdARLsI0WKI+UyCJ8SRzI9gPa
 cLSdkFNDXzZ2IK8PoTNmODYqodKNrsytHWuQ/HpU0dKT2D88tbnn9E4gDwKDwQeCB2QaAXOb
 C7/cR9qz+paR0sH7+G7ilsZZmkmzXT/qiWGCI7Ow==
X-Talos-CUID: 9a23:P7UCjG75lPudlrpMN9ssrm8rFP4jUFPhnHLJf0GhDiVLYZqzYArF
X-Talos-MUID: =?us-ascii?q?9a23=3AzCLoiww91vxrdQwpVr1UkB6due2aqP6IAQcXv5x?=
 =?us-ascii?q?Yh8DaLApMEGnG1mmvT6Zyfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,250,1677560400"; 
   d="scan'208";a="107797745"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 3/6] x86/cpuid: Rename FSCAPINTS to X86_NR_FEAT
Date: Thu, 4 May 2023 20:39:21 +0100
Message-ID: <20230504193924.3305496-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The latter is more legible, and consistent with X86_NR_{CAPS,SYNTH,BUG}, which
already exist.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c              | 22 +++++++++++-----------
 xen/arch/x86/cpu/common.c              |  4 ++--
 xen/arch/x86/include/asm/cpufeatures.h |  8 ++++----
 xen/arch/x86/include/asm/cpuid.h       |  2 +-
 xen/arch/x86/sysctl.c                  |  8 ++++----
 5 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 44c88debf958..774c512a03bd 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -130,8 +130,8 @@ static int __init cf_check parse_xen_cpuid(const char *s)
 custom_param("cpuid", parse_xen_cpuid);
 
 static bool __initdata dom0_cpuid_cmdline;
-static uint32_t __initdata dom0_enable_feat[FSCAPINTS];
-static uint32_t __initdata dom0_disable_feat[FSCAPINTS];
+static uint32_t __initdata dom0_enable_feat[X86_NR_FEAT];
+static uint32_t __initdata dom0_disable_feat[X86_NR_FEAT];
 
 static void __init cf_check _parse_dom0_cpuid(unsigned int feat, bool val)
 {
@@ -158,10 +158,10 @@ static void sanitise_featureset(uint32_t *fs)
 {
     /* for_each_set_bit() uses unsigned longs.  Extend with zeroes. */
     uint32_t disabled_features[
-        ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
+        ROUNDUP(X86_NR_FEAT, sizeof(unsigned long)/sizeof(uint32_t))] = {};
     unsigned int i;
 
-    for ( i = 0; i < FSCAPINTS; ++i )
+    for ( i = 0; i < X86_NR_FEAT; ++i )
     {
         /* Clamp to known mask. */
         fs[i] &= known_features[i];
@@ -181,7 +181,7 @@ static void sanitise_featureset(uint32_t *fs)
 
         ASSERT(dfs); /* deep_features[] should guarentee this. */
 
-        for ( j = 0; j < FSCAPINTS; ++j )
+        for ( j = 0; j < X86_NR_FEAT; ++j )
         {
             fs[j] &= ~dfs[j];
             disabled_features[j] &= ~dfs[j];
@@ -476,7 +476,7 @@ static void __init guest_common_feature_adjustments(uint32_t *fs)
 static void __init calculate_pv_max_policy(void)
 {
     struct cpu_policy *p = &pv_max_cpu_policy;
-    uint32_t fs[FSCAPINTS];
+    uint32_t fs[X86_NR_FEAT];
     unsigned int i;
 
     *p = host_cpu_policy;
@@ -509,7 +509,7 @@ static void __init calculate_pv_max_policy(void)
 static void __init calculate_pv_def_policy(void)
 {
     struct cpu_policy *p = &pv_def_cpu_policy;
-    uint32_t fs[FSCAPINTS];
+    uint32_t fs[X86_NR_FEAT];
     unsigned int i;
 
     *p = pv_max_cpu_policy;
@@ -529,7 +529,7 @@ static void __init calculate_pv_def_policy(void)
 static void __init calculate_hvm_max_policy(void)
 {
     struct cpu_policy *p = &hvm_max_cpu_policy;
-    uint32_t fs[FSCAPINTS];
+    uint32_t fs[X86_NR_FEAT];
     unsigned int i;
     const uint32_t *mask;
 
@@ -625,7 +625,7 @@ static void __init calculate_hvm_max_policy(void)
 static void __init calculate_hvm_def_policy(void)
 {
     struct cpu_policy *p = &hvm_def_cpu_policy;
-    uint32_t fs[FSCAPINTS];
+    uint32_t fs[X86_NR_FEAT];
     unsigned int i;
     const uint32_t *mask;
 
@@ -723,7 +723,7 @@ void recalculate_cpuid_policy(struct domain *d)
     const struct cpu_policy *max = is_pv_domain(d)
         ? (IS_ENABLED(CONFIG_PV)  ?  &pv_max_cpu_policy : NULL)
         : (IS_ENABLED(CONFIG_HVM) ? &hvm_max_cpu_policy : NULL);
-    uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
+    uint32_t fs[X86_NR_FEAT], max_fs[X86_NR_FEAT];
     unsigned int i;
 
     if ( !max )
@@ -864,7 +864,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
     /* Apply dom0-cpuid= command line settings, if provided. */
     if ( dom0_cpuid_cmdline )
     {
-        uint32_t fs[FSCAPINTS];
+        uint32_t fs[X86_NR_FEAT];
         unsigned int i;
 
         x86_cpu_policy_to_featureset(p, fs);
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 1be049e332ce..d12ccea20350 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -80,7 +80,7 @@ void __init setup_clear_cpu_cap(unsigned int cap)
 	if (!dfs)
 		return;
 
-	for (i = 0; i < FSCAPINTS; ++i) {
+	for (i = 0; i < X86_NR_FEAT; ++i) {
 		cleared_caps[i] |= dfs[i];
 		boot_cpu_data.x86_capability[i] &= ~dfs[i];
 		if (!(forced_caps[i] & dfs[i]))
@@ -527,7 +527,7 @@ void identify_cpu(struct cpuinfo_x86 *c)
 	 * The vendor-specific functions might have changed features.  Now
 	 * we do "generic changes."
 	 */
-	for (i = 0; i < FSCAPINTS; ++i)
+	for (i = 0; i < X86_NR_FEAT; ++i)
 		c->x86_capability[i] &= known_features[i];
 
 	for (i = 0 ; i < X86_NR_CAPS ; ++i) {
diff --git a/xen/arch/x86/include/asm/cpufeatures.h b/xen/arch/x86/include/asm/cpufeatures.h
index e982ee920ce1..408ab4ba16a5 100644
--- a/xen/arch/x86/include/asm/cpufeatures.h
+++ b/xen/arch/x86/include/asm/cpufeatures.h
@@ -5,11 +5,11 @@
 #include <xen/lib/x86/cpuid-autogen.h>
 
 /* Number of capability words covered by the featureset words. */
-#define FSCAPINTS FEATURESET_NR_ENTRIES
+#define X86_NR_FEAT FEATURESET_NR_ENTRIES
 
 /* Synthetic words follow the featureset words. */
 #define X86_NR_SYNTH 1
-#define X86_SYNTH(x) (FSCAPINTS * 32 + (x))
+#define X86_SYNTH(x) (X86_NR_FEAT * 32 + (x))
 
 /* Synthetic features */
 XEN_CPUFEATURE(CONSTANT_TSC,      X86_SYNTH( 0)) /* TSC ticks at a constant rate */
@@ -45,7 +45,7 @@ XEN_CPUFEATURE(IBPB_ENTRY_HVM,    X86_SYNTH(29)) /* MSR_PRED_CMD used by Xen for
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
-#define X86_BUG(x) ((FSCAPINTS + X86_NR_SYNTH) * 32 + (x))
+#define X86_BUG(x) ((X86_NR_FEAT + X86_NR_SYNTH) * 32 + (x))
 
 #define X86_BUG_FPU_PTRS          X86_BUG( 0) /* (F)X{SAVE,RSTOR} doesn't save/restore FOP/FIP/FDP. */
 #define X86_BUG_NULL_SEG          X86_BUG( 1) /* NULL-ing a selector preserves the base and limit. */
@@ -53,4 +53,4 @@ XEN_CPUFEATURE(IBPB_ENTRY_HVM,    X86_SYNTH(29)) /* MSR_PRED_CMD used by Xen for
 #define X86_BUG_IBPB_NO_RET       X86_BUG( 3) /* IBPB doesn't flush the RSB/RAS */
 
 /* Total number of capability words, inc synth and bug words. */
-#define X86_NR_CAPS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
+#define X86_NR_CAPS (X86_NR_FEAT + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
diff --git a/xen/arch/x86/include/asm/cpuid.h b/xen/arch/x86/include/asm/cpuid.h
index b32ba0bbfe5c..85b6ca0edb91 100644
--- a/xen/arch/x86/include/asm/cpuid.h
+++ b/xen/arch/x86/include/asm/cpuid.h
@@ -10,7 +10,7 @@
 
 #include <public/sysctl.h>
 
-extern const uint32_t known_features[FSCAPINTS];
+extern const uint32_t known_features[X86_NR_FEAT];
 
 /*
  * Expected levelling capabilities (given cpuid vendor/family information),
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index c107f40c6283..9be0e796628c 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -308,13 +308,13 @@ long arch_do_sysctl(
 #endif
         };
         const struct cpu_policy *p = NULL;
-        uint32_t featureset[FSCAPINTS];
+        uint32_t featureset[X86_NR_FEAT];
         unsigned int nr;
 
         /* Request for maximum number of features? */
         if ( guest_handle_is_null(sysctl->u.cpu_featureset.features) )
         {
-            sysctl->u.cpu_featureset.nr_features = FSCAPINTS;
+            sysctl->u.cpu_featureset.nr_features = X86_NR_FEAT;
             if ( __copy_field_to_guest(u_sysctl, sysctl,
                                        u.cpu_featureset.nr_features) )
                 ret = -EFAULT;
@@ -323,7 +323,7 @@ long arch_do_sysctl(
 
         /* Clip the number of entries. */
         nr = min_t(unsigned int, sysctl->u.cpu_featureset.nr_features,
-                   FSCAPINTS);
+                   X86_NR_FEAT);
 
         /* Look up requested featureset. */
         if ( sysctl->u.cpu_featureset.index < ARRAY_SIZE(policy_table) )
@@ -352,7 +352,7 @@ long arch_do_sysctl(
             ret = -EFAULT;
 
         /* Inform the caller if there was more data to provide. */
-        if ( !ret && nr < FSCAPINTS )
+        if ( !ret && nr < X86_NR_FEAT )
             ret = -ENOBUFS;
 
         break;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:39:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530043.825311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puenk-0007Td-2O; Thu, 04 May 2023 19:39:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530043.825311; Thu, 04 May 2023 19:39:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puenj-0007Sm-SF; Thu, 04 May 2023 19:39:39 +0000
Received: by outflank-mailman (input) for mailman id 530043;
 Thu, 04 May 2023 19:39:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puenj-00069W-3g
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:39:39 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 672ce9c0-eab3-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:39:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 672ce9c0-eab3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683229177;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=9R0YIYGzst7iA1qXqPxFQ9NZvJDOOPfxgM5CuGK9bi0=;
  b=bikuqNEzzDFQ2Sf6I2FAkoVPskOLa973sYNn8vEWFMOGFl5lyQrQ30u5
   /EvVrVGWRbweLgs+i7XQUjnYfXHVTIXd4p4/DevXudG10pLMYhfYpkXPR
   PAg1dK2hch0j0mGjogzNY5HlVx/oy0M8kYNXuVd2LpG1AOtgp2vPOf6vB
   w=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107797747
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:+Wvn1ajqcahEGyv37WPFhN36X161ERAKZh0ujC45NGQN5FlHY01je
 htvWz2HafyDMGTye4twad+0o0gB75PXm4dqHQptpCo8Qigb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWj0N8klgZmP6sT4QeCzyN94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tRbAwstaBynpt7umoPjceNSlNsvL/vSadZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJYw1MYH49tL7Aan3XWjtUsl+K44Ew5HDe1ldZ27nxKtvFPNeNQK25m27B/
 j2ZrjumXk5y2Nq3zBiv6U2mh+v1hWD3XcFPRIGqyO86nwjGroAUIEJPDgbqyRWjsWauVtQaJ
 0EK9y4Gqakp6FftXtT7Rwe/onOPolgbQdU4O88Q5RyJy6HUyx2EHWVCRTlEAPQ5sOcmSDps0
 UWG9+4FHhQ27ufTEyjEsO7J83XrY3N9wXI+iTEsDiA+w9/vhKAP1kj+Fu1pLryqgt7HIGSlq
 9yVlxTSl4n/nOZSifXgpQmd023zznTaZlVrv1uKBwpJ+is8Pdf4PNLwtDA3+N4adO6kok+9U
 G/ociR0xMQHFtmzmSOEW43h95n5tq/eYFUwbbOCdqTNFghBGFb5J+i8GBkkeC9U3j8sIFcFm
 nP7twJL/4N0N3C3d6JxaI/ZI510nfO6T4q5D6GMNYUmjn1NSeN61Hs2OR74M57FySDAbp3Ty
 b/EKJ3xXB72+IxszSasRvd17ILHMhsWnDuJLbiilkTP7FZrTCLNIVvzGAfUP79RAWLtiFm9z
 uuzwOPQlk4BDbekOXGImWPRRHhTRUUG6VnNg5Q/Xoa+zsBOQgnN19e5LWsdRrFY
IronPort-HdrOrdr: A9a23:DDCJwqG4+wEztRPFpLqELMeALOsnbusQ8zAXPiBKJCC9E/bo8v
 xG+c5w6faaslkssR0b9+xoW5PwI080l6QU3WB5B97LMDUO0FHCEGgI1/qA/9SPIUzDHu4279
 YbT0B9YueAcGSTW6zBkXWF+9VL+qj5zEix792uq0uE1WtRGtldBwESMHf9LmRGADNoKLAeD5
 Sm6s9Ot1ObCA8qhpTSPAhiYwDbzee77a7bXQ==
X-Talos-CUID: =?us-ascii?q?9a23=3AnvwvZWqHZzaKT4gwBTMBk1PmUYMOSV3bwFPxHxf?=
 =?us-ascii?q?iOD5SQoOHVlOTypoxxg=3D=3D?=
X-Talos-MUID: 9a23:yZW1TQnAhHGdUB5UFoWydnpnFu1z3K6vDHoV0pkD+JTUCT5OKmeC2WE=
X-IronPort-AV: E=Sophos;i="5.99,250,1677560400"; 
   d="scan'208";a="107797747"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 4/6] x86/cpu-policy: Split cpuid-consts.h out of cpu-policy.h
Date: Thu, 4 May 2023 20:39:22 +0100
Message-ID: <20230504193924.3305496-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

In order to disentangle X86_NR_FEAT from FEATURESET_NR_ENTRIES, we need to
adjust asm/cpufeatures.h's inclusion of cpuid-autogen.h, and including all of
cpu-policy.h leads to cyclic dependencies and a mess.

Split the FEATURESET_* constants out into a new header.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

I don't particularly like this, but I can't find a better alternative.
---
 xen/include/xen/lib/x86/cpu-policy.h   | 19 +-------------
 xen/include/xen/lib/x86/cpuid-consts.h | 34 ++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 18 deletions(-)
 create mode 100644 xen/include/xen/lib/x86/cpuid-consts.h

diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index bfa425060464..e9bda14a7595 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -2,24 +2,7 @@
 #ifndef XEN_LIB_X86_POLICIES_H
 #define XEN_LIB_X86_POLICIES_H
 
-#include <xen/lib/x86/cpuid-autogen.h>
-
-#define FEATURESET_1d     0 /* 0x00000001.edx      */
-#define FEATURESET_1c     1 /* 0x00000001.ecx      */
-#define FEATURESET_e1d    2 /* 0x80000001.edx      */
-#define FEATURESET_e1c    3 /* 0x80000001.ecx      */
-#define FEATURESET_Da1    4 /* 0x0000000d:1.eax    */
-#define FEATURESET_7b0    5 /* 0x00000007:0.ebx    */
-#define FEATURESET_7c0    6 /* 0x00000007:0.ecx    */
-#define FEATURESET_e7d    7 /* 0x80000007.edx      */
-#define FEATURESET_e8b    8 /* 0x80000008.ebx      */
-#define FEATURESET_7d0    9 /* 0x00000007:0.edx    */
-#define FEATURESET_7a1   10 /* 0x00000007:1.eax    */
-#define FEATURESET_e21a  11 /* 0x80000021.eax      */
-#define FEATURESET_7b1   12 /* 0x00000007:1.ebx    */
-#define FEATURESET_7d2   13 /* 0x00000007:2.edx    */
-#define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
-#define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
+#include <xen/lib/x86/cpuid-consts.h>
 
 struct cpuid_leaf
 {
diff --git a/xen/include/xen/lib/x86/cpuid-consts.h b/xen/include/xen/lib/x86/cpuid-consts.h
new file mode 100644
index 000000000000..6ca8c39a3df4
--- /dev/null
+++ b/xen/include/xen/lib/x86/cpuid-consts.h
@@ -0,0 +1,34 @@
+/* Common data structures and functions consumed by hypervisor and toolstack */
+#ifndef XEN_LIB_X86_CONSTS_H
+#define XEN_LIB_X86_CONSTS_H
+
+#include <xen/lib/x86/cpuid-autogen.h>
+
+#define FEATURESET_1d     0 /* 0x00000001.edx      */
+#define FEATURESET_1c     1 /* 0x00000001.ecx      */
+#define FEATURESET_e1d    2 /* 0x80000001.edx      */
+#define FEATURESET_e1c    3 /* 0x80000001.ecx      */
+#define FEATURESET_Da1    4 /* 0x0000000d:1.eax    */
+#define FEATURESET_7b0    5 /* 0x00000007:0.ebx    */
+#define FEATURESET_7c0    6 /* 0x00000007:0.ecx    */
+#define FEATURESET_e7d    7 /* 0x80000007.edx      */
+#define FEATURESET_e8b    8 /* 0x80000008.ebx      */
+#define FEATURESET_7d0    9 /* 0x00000007:0.edx    */
+#define FEATURESET_7a1   10 /* 0x00000007:1.eax    */
+#define FEATURESET_e21a  11 /* 0x80000021.eax      */
+#define FEATURESET_7b1   12 /* 0x00000007:1.ebx    */
+#define FEATURESET_7d2   13 /* 0x00000007:2.edx    */
+#define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
+#define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
+
+#endif /* !XEN_LIB_X86_CONSTS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:39:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530038.825260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueng-0006DJ-9m; Thu, 04 May 2023 19:39:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530038.825260; Thu, 04 May 2023 19:39:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueng-0006CW-5q; Thu, 04 May 2023 19:39:36 +0000
Received: by outflank-mailman (input) for mailman id 530038;
 Thu, 04 May 2023 19:39:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puenf-00069l-6I
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:39:35 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6293b883-eab3-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:39:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6293b883-eab3-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683229173;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=kA0CoCvs9w9UnA8Cp8g4Yf97SdqOL/znbdWvnuGRE/s=;
  b=GnQ7M7m/PYMymFfO92HSaHE497UtluP5svWs9DlZzqe7Qz4/NVEZ7yFc
   SNOwy88h+tzBRKikEIvgkNjKm9UEk0eIk7+2rXO97u1XLaT3gqEQF36Fo
   bYC46F7kTGjUkJYhBvFbdGT99CB0NmFLcm6u4YIde+YxlJuoau2r/cfvb
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107931606
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:6OvrB6+gnEOK1t08d7VHDrUD0H6TJUtcMsCJ2f8bNWPcYEJGY0x3y
 2QdD2+COPzcZmShc9p+PY3n9k9SupGDz4MwHgVl+yw8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ire7kI+1BjOkGlA5AdmOKgX5AW2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDkkTs
 thJBmEQXyqv3fOTwLyxYfZRvN0aeZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAr3/zaTBH7nmSorI6+TP7xw1tyrn9dtHSf7RmQO0MxhnJ/
 TuYpzqR7hcyK5/HmAS1rC6WgLXQvgb0Q9o/G+aB+as/6LGU7jNKU0BHPbehmtGph0j7V99BJ
 kg8/is1sbN05EGtVsP6XRCzvDiDpBF0ZjZLO7RkskfXkPOSulvHQDFeFVatdeDKqudrAhh7+
 A6UrunFXy1KsLOfYm2gzK6t+Gba1TcuEYMSWcMVZVJbs4K7+dtj0U2nosVLS/Ds0ICscd3k6
 3XT9XVl2e1O5SIe///jlW0rlQ5AsXQgouQdwgzMFlyo4QpiDGJOT9z5sAOLhRqswWvwc7Vgg
 JTns5LEhAz2JcvR/BFhuc1UdF1T296LMSfHnXlkFIQ7+jKm9haLJN4Au2skehswY55fJlcFh
 XPuVf55vscPbBNGk4cuC25ONyja5fe5Tom0PhwlRtFPfoJwZGe6wc2aXmbJhzqFuBF1wckC1
 WKzLZ7E4YAyVf42k1Jbho41jdcW+8zJ7TmOHs6mlU78ieX2ibz8Ye5tDWZip9sRtMusyDg5O
 f4GXydW432ziNHDXxQ=
IronPort-HdrOrdr: A9a23:n00BHKOCaufrE8BcThWjsMiBIKoaSvp037BL7TEVdfUxSKGlfq
 +V88jzuSWbtN9pYgBFpTnYAtjmfZq+z+8W3WByB9uftWDd0QPDEGgF1+rfKlXbcBEWndQttp
 uIHZIfNDUlZWIK9PoT/2GDYqkdKMjuytHPuQ/Bp00dNT2CYZsQkzuQV26gYzZLrBEvP+tCKH
 KGjvA32gadRQ==
X-Talos-CUID: 9a23:+Jhv3GwLDU6bHbvpqSYMBgVJXeR0XCzA/U38eV6dCWhjY6Koc0aprfY=
X-Talos-MUID: 9a23:gY02lgpSBOzTJi5FAIAezy4hbJZW0/ryMkZXjrYjtNLcGCMrAjjI2Q==
X-IronPort-AV: E=Sophos;i="5.99,250,1677560400"; 
   d="scan'208";a="107931606"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH 6/6] DO NOT APPLY: Example breakage
Date: Thu, 4 May 2023 20:39:24 +0100
Message-ID: <20230504193924.3305496-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Prior to disentangling X86_NR_FEAT from FEATURESET_NR_ENTRIES, GCC 12
correctly notices:

  lib/x86/cpuid.c: In function 'x86_cpu_policy_to_featureset':
  lib/x86/cpuid.c:82:7: error: array subscript 16 is outside array bounds of 'uint32_t[16]' {aka 'unsigned int[16]'} [-Werror=array-bounds=]
     82 |     fs[FEATURESET_7a2]       = p->feat._7a2;
        |     ~~^~~~~~~~~~~~~~~~
  lib/x86/cpuid.c:64:42: note: at offset 64 into object 'fs' of size [0, 64]

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/include/xen/lib/x86/cpu-policy.h   | 6 +++++-
 xen/include/xen/lib/x86/cpuid-consts.h | 2 ++
 xen/lib/x86/cpuid.c                    | 2 ++
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 01431de056c8..164b3f4aac13 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -192,7 +192,11 @@ struct cpu_policy
             };
 
             /* Subleaf 2. */
-            uint32_t /* a */:32, /* b */:32, /* c */:32;
+            union {
+                uint32_t _7a2;
+                struct { DECL_BITFIELD(7a2); };
+            };
+            uint32_t /* b */:32, /* c */:32;
             union {
                 uint32_t _7d2;
                 struct { DECL_BITFIELD(7d2); };
diff --git a/xen/include/xen/lib/x86/cpuid-consts.h b/xen/include/xen/lib/x86/cpuid-consts.h
index 9fe931b8e31f..5dd9727fec79 100644
--- a/xen/include/xen/lib/x86/cpuid-consts.h
+++ b/xen/include/xen/lib/x86/cpuid-consts.h
@@ -20,8 +20,10 @@
 #define FEATURESET_7d2   13 /* 0x00000007:2.edx    */
 #define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
 #define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
+#define FEATURESET_7a2   16
 
 #define X86_NR_FEAT (FEATURESET_7d1 + 1)
+//#define X86_NR_FEAT (FEATURESET_7a2 + 1)
 
 #endif /* !XEN_LIB_X86_CONSTS_H */
 
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 76f26e92af8d..90bc82a18c30 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -79,6 +79,7 @@ void x86_cpu_policy_to_featureset(
     fs[FEATURESET_7d2]       = p->feat._7d2;
     fs[FEATURESET_7c1]       = p->feat._7c1;
     fs[FEATURESET_7d1]       = p->feat._7d1;
+    fs[FEATURESET_7a2]       = p->feat._7a2;
 }
 
 void x86_cpu_featureset_to_policy(
@@ -100,6 +101,7 @@ void x86_cpu_featureset_to_policy(
     p->feat._7d2             = fs[FEATURESET_7d2];
     p->feat._7c1             = fs[FEATURESET_7c1];
     p->feat._7d1             = fs[FEATURESET_7d1];
+    p->feat._7a2             = fs[FEATURESET_7a2];
 }
 
 void x86_cpu_policy_recalc_synth(struct cpu_policy *p)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:39:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530037.825256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pueng-0006A7-2T; Thu, 04 May 2023 19:39:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530037.825256; Thu, 04 May 2023 19:39:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puenf-00069v-Tq; Thu, 04 May 2023 19:39:35 +0000
Received: by outflank-mailman (input) for mailman id 530037;
 Thu, 04 May 2023 19:39:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jZxo=AZ=citrix.com=prvs=48139b1ea=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puend-00069W-WF
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:39:34 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6254b545-eab3-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:39:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6254b545-eab3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683229170;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=B+Lt4o2YAn9fGf4aYwlSeUeSsUI9OhnNotrhhc/MbjE=;
  b=IOEHq8l7MPXu/neqgoRj3U5FDEusbnx01qEO/hig/y3bk0HbOPJIzu6v
   FXkPKG88s2OKFAXwrM/q208lsId+9YktYPP1Ip4ohcBuB2fd9U/+yY30s
   8247UyCd+nUG/aPdjjDG7OLLApXUJjTU7Ur1o3t1OJAU9ULlKvz3x8wXC
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107797744
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:9NPEQKqiBM9uJR0MLDC4OmpJ9K5eBmJpZRIvgKrLsJaIsI4StFCzt
 garIBmCMqvcY2XxKtwlbdu0/UMOuZaDydBkSwQ+qCw2EHxH+ZuZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WJwUmAWP6gR5weDzyVNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAHMmVTCFi8aX/Orhe7FXjPoxM8XoDrpK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVxrl6PqLVxyG/U1AFri5DmMcbPe8zMTsJQ9qqdj
 jueoDuoXU5GaLRzzxKE/SKpoumVshnle5kVS76k6e9WqR66kzl75Bo+CgLg/KjRZlSFc8JSL
 QkY9zQjqYA29Ve3VZ/tUhugunmGsxUAHd1KHIUSyiuA167V6AaxHXUfQ3hKb9lOnNAybSwn0
 BmOhdyBONB0mOTLEzTHrO7S9G7sf3FPdgfueBPoUyNGyOLDpo0Xry6XFOc7K6SLnML5GgPJl
 mXiQDcFu1kDsSIa//zlrQuf2mj8+cehoh0dvVuOAD/8hu9tTMv8PtHztwCGhRpVBNzBJmRtq
 kTojCR3AAomKZiW3BKAT+wWdF1Cz6bUaWaM6bKD8nRIythMx5JAVdoKiN2GDB01WvvogBewC
 KMphStf5YVIIFyhZrJtboS6BqwClPawTo6/CKyNP4IVPfCdkTO6ENxGPxbMjwgBbmB1+U3AB
 XtrWZn1VitLYUiW5DG3W/0cwdcW+8zK/kuKHcqT503+gdKjiIu9Fe9t3K2mMrpos8tpYWz9r
 75iCid9408CC7OjOHOMqdF7wJJjBSFTOK0aYvd/LoarSjeK0kl7YxMN6dvNo7BYopk=
IronPort-HdrOrdr: A9a23:dl1fZqj12NjW+WdERxt8gcM1C3BQXh4ji2hC6mlwRA09TyX5ra
 2TdZUgpHrJYVMqMk3I9uruBEDtex3hHP1OkOss1NWZPDUO0VHARO1fBOPZqAEIcBeOldK1u5
 0AT0B/YueAd2STj6zBkXSF+wBL+qj6zEiq792usEuEVWtRGsVdB58SMHfiLqVxLjM2YqYRJd
 6nyedsgSGvQngTZtTTPAh/YwCSz+e78q4PeHQ9dmca1DU=
X-Talos-CUID: 9a23:MIckimz2dJiLubaMQIBNBgVXMfsFdHH57UuIeX7gMmxXWqKZSm2prfY=
X-Talos-MUID: =?us-ascii?q?9a23=3AgnhrGwzRw9BoVrx/cl3LaSNIfiiaqKWhWEUpl88?=
 =?us-ascii?q?pgdenLisvOCuCnQieAaZyfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,250,1677560400"; 
   d="scan'208";a="107797744"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 0/6] x86: Fix transient build breakage with featureset additions
Date: Thu, 4 May 2023 20:39:18 +0100
Message-ID: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

See patch 5 for details.  Patch 6 is just an example that demonstrates the fix
working in practice.

Andrew Cooper (6):
  x86/cpu-policy: Drop build time cross-checks of featureset sizes
  x86/cpuid: Rename NCAPINTS to X86_NR_CAPS
  x86/cpuid: Rename FSCAPINTS to X86_NR_FEAT
  x86/cpu-policy: Split cpuid-consts.h out of cpu-policy.h
  x86/cpu-policy: Disentangle X86_NR_FEAT and FEATURESET_NR_ENTRIES
  DO NOT APPLY: Example breakage

 xen/arch/x86/alternative.c             |  2 +-
 xen/arch/x86/cpu-policy.c              | 33 +++++++++++-----------
 xen/arch/x86/cpu/common.c              | 16 +++++------
 xen/arch/x86/cpuid.c                   |  2 +-
 xen/arch/x86/include/asm/cpufeature.h  |  2 +-
 xen/arch/x86/include/asm/cpufeatures.h | 11 +++-----
 xen/arch/x86/include/asm/cpuid.h       |  2 +-
 xen/arch/x86/sysctl.c                  |  8 +++---
 xen/include/xen/lib/x86/cpu-policy.h   | 29 ++++++--------------
 xen/include/xen/lib/x86/cpuid-consts.h | 38 ++++++++++++++++++++++++++
 xen/lib/x86/cpuid.c                    |  8 ++++--
 11 files changed, 88 insertions(+), 63 deletions(-)
 create mode 100644 xen/include/xen/lib/x86/cpuid-consts.h

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530062.825346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1N-0004G8-SV; Thu, 04 May 2023 19:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530062.825346; Thu, 04 May 2023 19:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1N-0004Fy-Pq; Thu, 04 May 2023 19:53:45 +0000
Received: by outflank-mailman (input) for mailman id 530062;
 Thu, 04 May 2023 19:53:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1M-0003xx-8V
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:53:44 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e861438-eab5-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:53:42 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-403-PWDrmmbSNeKjFAv3qLOnZg-1; Thu, 04 May 2023 15:53:37 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 31E0D109DCE5;
 Thu,  4 May 2023 19:53:36 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 59BEFC15BAD;
 Thu,  4 May 2023 19:53:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e861438-eab5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230021;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2q86JTUb82jHsKou8iDv6+YsV2u1chyoHCqSrVQuj0g=;
	b=B5pQAGBnnAr0A+0xGh0wbuAruZ1+g+47gWo6+eRRnLbwl9zPOfvj1q63v6OuTe5vKic0M2
	bvAcYiZhsT4glR8IbJYeqRoykfg+/DtaLLX8Zxd+7p20mKLEZMUvxn7GJjzKIwAVZC4QKu
	yLIcmabUUUiXUP7+hkyj2v2WoEDXtQ0=
X-MC-Unique: PWDrmmbSNeKjFAv3qLOnZg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 02/21] block-backend: split blk_do_set_aio_context()
Date: Thu,  4 May 2023 15:53:08 -0400
Message-Id: <20230504195327.695107-3-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

blk_set_aio_context() is not fully transactional because
blk_do_set_aio_context() updates blk->ctx outside the transaction. Most
of the time this goes unnoticed but a BlockDevOps.drained_end() callback
that invokes blk_get_aio_context() fails assert(ctx == blk->ctx). This
happens because blk->ctx is only assigned after
BlockDevOps.drained_end() is called and we're in an intermediate state
where BlockDriverState nodes already have the new context and the
BlockBackend still has the old context.

Making blk_set_aio_context() fully transactional solves this assertion
failure because the BlockBackend's context is updated as part of the
transaction (before BlockDevOps.drained_end() is called).

Split blk_do_set_aio_context() in order to solve this assertion failure.
This helper function actually serves two different purposes:
1. It drives blk_set_aio_context().
2. It responds to BdrvChildClass->change_aio_ctx().

Get rid of the helper function. Do #1 inside blk_set_aio_context() and
do #2 inside blk_root_set_aio_ctx_commit(). This simplifies the code.

The only drawback of the fully transactional approach is that
blk_set_aio_context() must contend with blk_root_set_aio_ctx_commit()
being invoked as part of the AioContext change propagation. This can be
solved by temporarily setting blk->allow_aio_context_change to true.

Future patches call blk_get_aio_context() from
BlockDevOps->drained_end(), making this patch necessary.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/block-backend.c | 71 +++++++++++++++++--------------------------
 1 file changed, 28 insertions(+), 43 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index fc530ded6a..68d38635bc 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -2205,52 +2205,31 @@ static AioContext *blk_aiocb_get_aio_context(BlockAIOCB *acb)
     return blk_get_aio_context(blk_acb->blk);
 }
 
-static int blk_do_set_aio_context(BlockBackend *blk, AioContext *new_context,
-                                  bool update_root_node, Error **errp)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
-    int ret;
-
-    if (bs) {
-        bdrv_ref(bs);
-
-        if (update_root_node) {
-            /*
-             * update_root_node MUST be false for blk_root_set_aio_ctx_commit(),
-             * as we are already in the commit function of a transaction.
-             */
-            ret = bdrv_try_change_aio_context(bs, new_context, blk->root, errp);
-            if (ret < 0) {
-                bdrv_unref(bs);
-                return ret;
-            }
-        }
-        /*
-         * Make blk->ctx consistent with the root node before we invoke any
-         * other operations like drain that might inquire blk->ctx
-         */
-        blk->ctx = new_context;
-        if (tgm->throttle_state) {
-            bdrv_drained_begin(bs);
-            throttle_group_detach_aio_context(tgm);
-            throttle_group_attach_aio_context(tgm, new_context);
-            bdrv_drained_end(bs);
-        }
-
-        bdrv_unref(bs);
-    } else {
-        blk->ctx = new_context;
-    }
-
-    return 0;
-}
-
 int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
                         Error **errp)
 {
+    bool old_allow_change;
+    BlockDriverState *bs = blk_bs(blk);
+    int ret;
+
     GLOBAL_STATE_CODE();
-    return blk_do_set_aio_context(blk, new_context, true, errp);
+
+    if (!bs) {
+        blk->ctx = new_context;
+        return 0;
+    }
+
+    bdrv_ref(bs);
+
+    old_allow_change = blk->allow_aio_context_change;
+    blk->allow_aio_context_change = true;
+
+    ret = bdrv_try_change_aio_context(bs, new_context, NULL, errp);
+
+    blk->allow_aio_context_change = old_allow_change;
+
+    bdrv_unref(bs);
+    return ret;
 }
 
 typedef struct BdrvStateBlkRootContext {
@@ -2262,8 +2241,14 @@ static void blk_root_set_aio_ctx_commit(void *opaque)
 {
     BdrvStateBlkRootContext *s = opaque;
     BlockBackend *blk = s->blk;
+    AioContext *new_context = s->new_ctx;
+    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
 
-    blk_do_set_aio_context(blk, s->new_ctx, false, &error_abort);
+    blk->ctx = new_context;
+    if (tgm->throttle_state) {
+        throttle_group_detach_aio_context(tgm);
+        throttle_group_attach_aio_context(tgm, new_context);
+    }
 }
 
 static TransactionActionDrv set_blk_root_context = {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530063.825356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1P-0004WO-4j; Thu, 04 May 2023 19:53:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530063.825356; Thu, 04 May 2023 19:53:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1P-0004WB-1s; Thu, 04 May 2023 19:53:47 +0000
Received: by outflank-mailman (input) for mailman id 530063;
 Thu, 04 May 2023 19:53:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1N-0003xx-K1
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:53:45 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5f943d10-eab5-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:53:44 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-300-_aYTw-OjPuqtsu_BDqa0FQ-1; Thu, 04 May 2023 15:53:39 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8751C3813F32;
 Thu,  4 May 2023 19:53:38 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E2CA11410DD7;
 Thu,  4 May 2023 19:53:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f943d10-eab5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230023;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XkgBmD/d+YGvrkKNM3PZYahxZ017Cf0e7HM4zKnGRyM=;
	b=hRN+vthTTk5GIzhy/fi6D2ClFGcQlDtxI7DX7m15PSDuLyMoaX2Pnf7iUoS/LJeFY9CFR3
	UNesnjOaw7uQN8Q+A3WdAl9Y2qH9q0ybFlwPScOhC4jHKhf4p9dGNdAMVKAmoQAs0ay4CX
	hxqUNjxt3LmOaCFGwCE3Er4bPvr+oXM=
X-MC-Unique: _aYTw-OjPuqtsu_BDqa0FQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 03/21] hw/qdev: introduce qdev_is_realized() helper
Date: Thu,  4 May 2023 15:53:09 -0400
Message-Id: <20230504195327.695107-4-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

Add a helper function to check whether the device is realized without
requiring the Big QEMU Lock. The next patch adds a second caller. The
goal is to avoid spreading DeviceState field accesses throughout the
code.

Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/qdev-core.h | 17 ++++++++++++++---
 hw/scsi/scsi-bus.c     |  3 +--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
index 7623703943..f1070d6dc7 100644
--- a/include/hw/qdev-core.h
+++ b/include/hw/qdev-core.h
@@ -1,6 +1,7 @@
 #ifndef QDEV_CORE_H
 #define QDEV_CORE_H
 
+#include "qemu/atomic.h"
 #include "qemu/queue.h"
 #include "qemu/bitmap.h"
 #include "qemu/rcu.h"
@@ -168,9 +169,6 @@ typedef struct {
 
 /**
  * DeviceState:
- * @realized: Indicates whether the device has been fully constructed.
- *            When accessed outside big qemu lock, must be accessed with
- *            qatomic_load_acquire()
  * @reset: ResettableState for the device; handled by Resettable interface.
  *
  * This structure should not be accessed directly.  We declare it here
@@ -339,6 +337,19 @@ DeviceState *qdev_new(const char *name);
  */
 DeviceState *qdev_try_new(const char *name);
 
+/**
+ * qdev_is_realized:
+ * @dev: The device to check.
+ *
+ * May be called outside big qemu lock.
+ *
+ * Returns: %true% if the device has been fully constructed, %false% otherwise.
+ */
+static inline bool qdev_is_realized(DeviceState *dev)
+{
+    return qatomic_load_acquire(&dev->realized);
+}
+
 /**
  * qdev_realize: Realize @dev.
  * @dev: device to realize
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 3c20b47ad0..8857ff41f6 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -60,8 +60,7 @@ static SCSIDevice *do_scsi_device_find(SCSIBus *bus,
      * the user access the device.
      */
 
-    if (retval && !include_unrealized &&
-        !qatomic_load_acquire(&retval->qdev.realized)) {
+    if (retval && !include_unrealized && !qdev_is_realized(&retval->qdev)) {
         retval = NULL;
     }
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530060.825326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1G-0003iQ-9K; Thu, 04 May 2023 19:53:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530060.825326; Thu, 04 May 2023 19:53:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1G-0003iJ-6g; Thu, 04 May 2023 19:53:38 +0000
Received: by outflank-mailman (input) for mailman id 530060;
 Thu, 04 May 2023 19:53:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1E-0003iB-Hz
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:53:36 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 598eaa43-eab5-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:53:34 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-600-HQNHFjJIMHiplNqYoaDlwg-1; Thu, 04 May 2023 15:53:31 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8C9BF3813F3C;
 Thu,  4 May 2023 19:53:30 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 562674020960;
 Thu,  4 May 2023 19:53:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 598eaa43-eab5-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230012;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=9XCfQf87U8+YhGmnm4aQEFf11a+HXKX39xdjDXLXz/w=;
	b=PCrWMAxwV6XlwprH9mgZ6MQFY9ox4cOXJ/exukdB+LxPOiGzGAIISueVeSe7ivTIjZzabF
	WZdQSUV2W7P3vSeEKUF/mr++5RSUlJiqTDjgHCyZCLt13GdDWfPpiVIKbFuCyH9YLZec+u
	zkpvjssjtSI06mpMte+bpPUVCyElddg=
X-MC-Unique: HQNHFjJIMHiplNqYoaDlwg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 00/21] block: remove aio_disable_external() API
Date: Thu,  4 May 2023 15:53:06 -0400
Message-Id: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

v5:
- Use atomic accesses for in_flight counter in vhost-user-server.c [Kevin]
- Stash SCSIDevice id/lun values for VIRTIO_SCSI_T_TRANSPORT_RESET event
  before unrealizing the SCSIDevice [Kevin]
- Keep vhost-user-blk export .detach() callback so ctx is set to NULL [Kevin]
- Narrow BdrvChildClass and BlockDriver drained_{begin/end/poll} callbacks from
  IO_OR_GS_CODE() to GLOBAL_STATE_CODE() [Kevin]
- Include Kevin's "block: Fix use after free in blockdev_mark_auto_del()" to
  fix a latent bug that was exposed by this series

v4:
- Remove external_disable_cnt variable [Philippe]
- Add Patch 1 to fix assertion failure in .drained_end() -> blk_get_aio_context()
v3:
- Resend full patch series. v2 was sent in the middle of a git rebase and was
  missing patches. [Eric]
- Apply Reviewed-by tags.
v2:
- Do not rely on BlockBackend request queuing, implement .drained_begin/end()
  instead in xen-block, virtio-blk, and virtio-scsi [Paolo]
- Add qdev_is_realized() API [Philippe]
- Add patch to avoid AioContext lock around blk_exp_ref/unref() [Paolo]
- Add patch to call .drained_begin/end() from main loop thread to simplify
  callback implementations

The aio_disable_external() API temporarily suspends file descriptor monitoring
in the event loop. The block layer uses this to prevent new I/O requests being
submitted from the guest and elsewhere between bdrv_drained_begin() and
bdrv_drained_end().

While the block layer still needs to prevent new I/O requests in drained
sections, the aio_disable_external() API can be replaced with
.drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
BlockDevOps.

This newer .drained_begin/end/poll() approach is attractive because it works
without reference to a specific AioContext. The block layer is moving towards
multi-queue, which means multiple AioContexts may be processing I/O
simultaneously.

The aio_disable_external() API was always somewhat hacky. It suspends all file
descriptors that were registered with is_external=true, even if they have
nothing to do with the BlockDriverState graph nodes that are being drained.
It's better to solve a block layer problem in the block layer than to have an
odd event loop API solution.

The approach in this patch series is to implement BlockDevOps
.drained_begin/end() callbacks that temporarily stop file descriptor handlers.
This ensures that new I/O requests are not submitted in drained sections.

Kevin Wolf (1):
  block: Fix use after free in blockdev_mark_auto_del()

Stefan Hajnoczi (20):
  block-backend: split blk_do_set_aio_context()
  hw/qdev: introduce qdev_is_realized() helper
  virtio-scsi: avoid race between unplug and transport event
  virtio-scsi: stop using aio_disable_external() during unplug
  util/vhost-user-server: rename refcount to in_flight counter
  block/export: wait for vhost-user-blk requests when draining
  block/export: stop using is_external in vhost-user-blk server
  hw/xen: do not use aio_set_fd_handler(is_external=true) in
    xen_xenstore
  block: add blk_in_drain() API
  block: drain from main loop thread in bdrv_co_yield_to_drain()
  xen-block: implement BlockDevOps->drained_begin()
  hw/xen: do not set is_external=true on evtchn fds
  block/export: rewrite vduse-blk drain code
  block/export: don't require AioContext lock around blk_exp_ref/unref()
  block/fuse: do not set is_external=true on FUSE fd
  virtio: make it possible to detach host notifier from any thread
  virtio-blk: implement BlockDevOps->drained_begin()
  virtio-scsi: implement BlockDevOps->drained_begin()
  virtio: do not set is_external=true on host notifiers
  aio: remove aio_disable_external() API

 hw/block/dataplane/xen-block.h              |   2 +
 include/block/aio.h                         |  57 ---------
 include/block/block_int-common.h            |  90 +++++++-------
 include/block/export.h                      |   2 +
 include/hw/qdev-core.h                      |  17 ++-
 include/hw/scsi/scsi.h                      |  14 +++
 include/qemu/vhost-user-server.h            |   8 +-
 include/sysemu/block-backend-common.h       |  25 ++--
 include/sysemu/block-backend-global-state.h |   1 +
 util/aio-posix.h                            |   1 -
 block.c                                     |   7 --
 block/blkio.c                               |  15 +--
 block/block-backend.c                       |  78 ++++++------
 block/curl.c                                |  10 +-
 block/export/export.c                       |  13 +-
 block/export/fuse.c                         |  56 ++++++++-
 block/export/vduse-blk.c                    | 128 ++++++++++++++------
 block/export/vhost-user-blk-server.c        |  52 +++++++-
 block/io.c                                  |  16 ++-
 block/io_uring.c                            |   4 +-
 block/iscsi.c                               |   3 +-
 block/linux-aio.c                           |   4 +-
 block/nfs.c                                 |   5 +-
 block/nvme.c                                |   8 +-
 block/ssh.c                                 |   4 +-
 block/win32-aio.c                           |   6 +-
 blockdev.c                                  |  18 ++-
 hw/block/dataplane/virtio-blk.c             |  19 ++-
 hw/block/dataplane/xen-block.c              |  42 +++++--
 hw/block/virtio-blk.c                       |  38 +++++-
 hw/block/xen-block.c                        |  24 +++-
 hw/i386/kvm/xen_xenstore.c                  |   2 +-
 hw/scsi/scsi-bus.c                          |  46 ++++++-
 hw/scsi/scsi-disk.c                         |  27 ++++-
 hw/scsi/virtio-scsi-dataplane.c             |  31 +++--
 hw/scsi/virtio-scsi.c                       | 127 ++++++++++++++-----
 hw/virtio/virtio.c                          |   6 +-
 hw/xen/xen-bus.c                            |  11 +-
 io/channel-command.c                        |   6 +-
 io/channel-file.c                           |   3 +-
 io/channel-socket.c                         |   3 +-
 migration/rdma.c                            |  16 +--
 tests/unit/test-aio.c                       |  27 +----
 tests/unit/test-bdrv-drain.c                |  15 +--
 tests/unit/test-fdmon-epoll.c               |  73 -----------
 util/aio-posix.c                            |  20 +--
 util/aio-win32.c                            |   8 +-
 util/async.c                                |   3 +-
 util/fdmon-epoll.c                          |  18 +--
 util/fdmon-io_uring.c                       |   8 +-
 util/fdmon-poll.c                           |   3 +-
 util/main-loop.c                            |   7 +-
 util/qemu-coroutine-io.c                    |   7 +-
 util/vhost-user-server.c                    |  33 ++---
 hw/scsi/trace-events                        |   2 +
 tests/unit/meson.build                      |   3 -
 56 files changed, 738 insertions(+), 534 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:53:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530061.825336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1L-0003zZ-Nx; Thu, 04 May 2023 19:53:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530061.825336; Thu, 04 May 2023 19:53:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1L-0003zS-JB; Thu, 04 May 2023 19:53:43 +0000
Received: by outflank-mailman (input) for mailman id 530061;
 Thu, 04 May 2023 19:53:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1K-0003xx-MF
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:53:42 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5cb4c55b-eab5-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:53:39 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-558-qVm4RSmtOvC22VNK6fWUTQ-1; Thu, 04 May 2023 15:53:34 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 50FFE101A550;
 Thu,  4 May 2023 19:53:33 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 78E421410F29;
 Thu,  4 May 2023 19:53:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5cb4c55b-eab5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230018;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7t7uQFODUsT7GUPKJzVAstWb4vuQdfLqZNy3S4rp+B4=;
	b=L1ntaQo24O9SYlpe8zmaJ8bo+st/9Q4rwHvj9Zrg8zQnW+ojLaaZQR/FR29jZveEx5Cilv
	QzWLdGbDJPbRHtkckNKB1ACx9X1I6FV57xGjT0oU3UCV3UF2JKazquhOytbUIQeTe1BrL6
	F3A4stphdxrZNMpMl0U1dKu65r7kWLM=
X-MC-Unique: qVm4RSmtOvC22VNK6fWUTQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 01/21] block: Fix use after free in blockdev_mark_auto_del()
Date: Thu,  4 May 2023 15:53:07 -0400
Message-Id: <20230504195327.695107-2-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

From: Kevin Wolf <kwolf@redhat.com>

job_cancel_locked() temporarily drops the job list lock and may call
aio_poll(), so we must assume that the list has changed after this call.
With unlucky timing, it can also end up freeing the job during
job_completed_txn_abort_locked(), making the job pointer invalid as well.

For both reasons, we can't just continue at block_job_next_locked(job).
Instead, start at the head of the list again after job_cancel_locked()
and skip those jobs that we already cancelled (or that are completing
anyway).
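
The restart-from-head pattern described above generalizes to any list
walk where handling an element may drop the lock and mutate the list. A
minimal standalone sketch (all names here, such as node, cancel_node and
cancel_all_matching, are invented for illustration and are not QEMU
APIs):

```c
#include <assert.h>
#include <stddef.h>

struct node {
    struct node *next;
    int cancelled;
    int match;
};

/* Stands in for job_cancel_locked(): in the real code this may drop the
 * lock, poll, and unlink or free other nodes, so any saved next pointer
 * would be unsafe. Here we just mark the node. */
static void cancel_node(struct node *n)
{
    n->cancelled = 1;
}

/* Cancel every matching node, restarting from the head after each
 * cancellation and skipping nodes already handled, mirroring the loop
 * structure of the fixed blockdev_mark_auto_del(). */
static int cancel_all_matching(struct node *head)
{
    struct node *n;
    int cancelled = 0;

    do {
        n = head;
        while (n && (n->cancelled || !n->match)) {
            n = n->next;
        }
        if (n) {
            cancel_node(n); /* may invalidate any saved iteration state */
            cancelled++;
        }
    } while (n);

    return cancelled;
}
```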

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230503140142.474404-1-kwolf@redhat.com>
---
 blockdev.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index d7b5c18f0a..2c1752a403 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -153,12 +153,22 @@ void blockdev_mark_auto_del(BlockBackend *blk)
 
     JOB_LOCK_GUARD();
 
-    for (job = block_job_next_locked(NULL); job;
-         job = block_job_next_locked(job)) {
-        if (block_job_has_bdrv(job, blk_bs(blk))) {
+    do {
+        job = block_job_next_locked(NULL);
+        while (job && (job->job.cancelled ||
+                       job->job.deferred_to_main_loop ||
+                       !block_job_has_bdrv(job, blk_bs(blk))))
+        {
+            job = block_job_next_locked(job);
+        }
+        if (job) {
+            /*
+             * This drops the job lock temporarily and polls, so we need to
+             * restart processing the list from the start after this.
+             */
             job_cancel_locked(&job->job, false);
         }
-    }
+    } while (job);
 
     dinfo->auto_del = 1;
 }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:53:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530064.825367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1R-0004pe-H7; Thu, 04 May 2023 19:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530064.825367; Thu, 04 May 2023 19:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1R-0004pT-Bc; Thu, 04 May 2023 19:53:49 +0000
Received: by outflank-mailman (input) for mailman id 530064;
 Thu, 04 May 2023 19:53:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1P-0003iB-Vg
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:53:47 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6143fca3-eab5-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:53:47 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-349-OwK9rzciNzal1nFRcpiK8A-1; Thu, 04 May 2023 15:53:42 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 947371C06EDA;
 Thu,  4 May 2023 19:53:41 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A983B1121331;
 Thu,  4 May 2023 19:53:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6143fca3-eab5-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230025;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=efhUpWQleoubyTXms1hMRDqrtRfLyC1vJzKA96dtfK0=;
	b=bxzTsnMduWC8nIrXWM4GHUKtiPAkZrs3sRFV/V32G3rD3k0zfHga9gOLMBV3sJujIuMZJR
	u8DRLyMo1t9ILk7VZkh8MdGYfc/kG05b/w53dN1NRANRnJQDCc49jftb3sYyOTWn90yw8v
	KyaijIWomhMC/kV2gkChFnofYN5MxHQ=
X-MC-Unique: OwK9rzciNzal1nFRcpiK8A-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v5 04/21] virtio-scsi: avoid race between unplug and transport event
Date: Thu,  4 May 2023 15:53:10 -0400
Message-Id: <20230504195327.695107-5-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

Only report a transport reset event to the guest after the SCSIDevice
has been unrealized by qdev_simple_device_unplug_cb().

qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
to false so that scsi_device_find/get() no longer see it.

scsi_target_emulate_report_luns() also needs to be updated to filter out
SCSIDevices that are unrealized.

Change virtio_scsi_push_event() to take event information as an argument
instead of the SCSIDevice. This allows virtio_scsi_hotunplug() to emit a
VIRTIO_SCSI_T_TRANSPORT_RESET event after the SCSIDevice has already
been unrealized.

These changes ensure that the guest driver cannot see the SCSIDevice
being unplugged, even if the driver responds very quickly to the
transport reset event.
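
As an aside, the LUN field encoding that virtio_scsi_push_event()
preserves (visible in the diff below) can be sketched standalone. Per
the comment in the code, Linux expects the same flat-addressing format
as REPORT LUNS; the function name here is invented for the sketch:

```c
#include <stdint.h>

/* Encode a (target id, lun) pair into the 4 relevant bytes of the
 * event's lun[] field, as virtio_scsi_push_event() does. The remaining
 * bytes are zeroed by the caller (memset in the real code). */
static void encode_event_lun(uint8_t lun_out[4], uint32_t id, uint32_t lun)
{
    lun_out[0] = 1;
    lun_out[1] = (uint8_t)id;
    lun_out[2] = 0;
    /* Keep the same encoding used for REPORT LUNS: LUNs >= 256 use the
     * flat addressing method, flagged by 0x40 in the high byte. */
    if (lun >= 256) {
        lun_out[2] = (uint8_t)((lun >> 8) | 0x40);
    }
    lun_out[3] = lun & 0xFF;
}
```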

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
v5:
- Stash SCSIDevice id/lun values for VIRTIO_SCSI_T_TRANSPORT_RESET event
  before unrealizing the SCSIDevice [Kevin]
---
 hw/scsi/scsi-bus.c    |  3 +-
 hw/scsi/virtio-scsi.c | 86 ++++++++++++++++++++++++++++++-------------
 2 files changed, 63 insertions(+), 26 deletions(-)

diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 8857ff41f6..64013c8a24 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -487,7 +487,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
             DeviceState *qdev = kid->child;
             SCSIDevice *dev = SCSI_DEVICE(qdev);
 
-            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
+            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
+                qdev_is_realized(&dev->qdev)) {
                 store_lun(tmp, dev->lun);
                 g_byte_array_append(buf, tmp, 8);
                 len += 8;
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..ae314af3de 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -933,13 +933,27 @@ static void virtio_scsi_reset(VirtIODevice *vdev)
     s->events_dropped = false;
 }
 
-static void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
-                                   uint32_t event, uint32_t reason)
+typedef struct {
+    uint32_t event;
+    uint32_t reason;
+    union {
+        /* Used by messages specific to a device */
+        struct {
+            uint32_t id;
+            uint32_t lun;
+        } address;
+    };
+} VirtIOSCSIEventInfo;
+
+static void virtio_scsi_push_event(VirtIOSCSI *s,
+                                   const VirtIOSCSIEventInfo *info)
 {
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
     VirtIOSCSIReq *req;
     VirtIOSCSIEvent *evt;
     VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t event = info->event;
+    uint32_t reason = info->reason;
 
     if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
         return;
@@ -965,27 +979,28 @@ static void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
     memset(evt, 0, sizeof(VirtIOSCSIEvent));
     evt->event = virtio_tswap32(vdev, event);
     evt->reason = virtio_tswap32(vdev, reason);
-    if (!dev) {
-        assert(event == VIRTIO_SCSI_T_EVENTS_MISSED);
-    } else {
+    if (event != VIRTIO_SCSI_T_EVENTS_MISSED) {
         evt->lun[0] = 1;
-        evt->lun[1] = dev->id;
+        evt->lun[1] = info->address.id;
 
         /* Linux wants us to keep the same encoding we use for REPORT LUNS.  */
-        if (dev->lun >= 256) {
-            evt->lun[2] = (dev->lun >> 8) | 0x40;
+        if (info->address.lun >= 256) {
+            evt->lun[2] = (info->address.lun >> 8) | 0x40;
         }
-        evt->lun[3] = dev->lun & 0xFF;
+        evt->lun[3] = info->address.lun & 0xFF;
     }
     trace_virtio_scsi_event(virtio_scsi_get_lun(evt->lun), event, reason);
-     
+
     virtio_scsi_complete_req(req);
 }
 
 static void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
 {
     if (s->events_dropped) {
-        virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
+        VirtIOSCSIEventInfo info = {
+            .event = VIRTIO_SCSI_T_NO_EVENT,
+        };
+        virtio_scsi_push_event(s, &info);
     }
 }
 
@@ -1009,9 +1024,17 @@ static void virtio_scsi_change(SCSIBus *bus, SCSIDevice *dev, SCSISense sense)
 
     if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_CHANGE) &&
         dev->type != TYPE_ROM) {
+        VirtIOSCSIEventInfo info = {
+            .event   = VIRTIO_SCSI_T_PARAM_CHANGE,
+            .reason  = sense.asc | (sense.ascq << 8),
+            .address = {
+                .id  = dev->id,
+                .lun = dev->lun,
+            },
+        };
+
         virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, dev, VIRTIO_SCSI_T_PARAM_CHANGE,
-                               sense.asc | (sense.ascq << 8));
+        virtio_scsi_push_event(s, &info);
         virtio_scsi_release(s);
     }
 }
@@ -1046,10 +1069,17 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     }
 
     if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        VirtIOSCSIEventInfo info = {
+            .event   = VIRTIO_SCSI_T_TRANSPORT_RESET,
+            .reason  = VIRTIO_SCSI_EVT_RESET_RESCAN,
+            .address = {
+                .id  = sd->id,
+                .lun = sd->lun,
+            },
+        };
+
         virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_RESCAN);
+        virtio_scsi_push_event(s, &info);
         scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
         virtio_scsi_release(s);
     }
@@ -1062,15 +1092,14 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
     AioContext *ctx = s->ctx ?: qemu_get_aio_context();
-
-    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
-        virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_REMOVED);
-        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
-        virtio_scsi_release(s);
-    }
+    VirtIOSCSIEventInfo info = {
+        .event   = VIRTIO_SCSI_T_TRANSPORT_RESET,
+        .reason  = VIRTIO_SCSI_EVT_RESET_REMOVED,
+        .address = {
+            .id  = sd->id,
+            .lun = sd->lun,
+        },
+    };
 
     aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
@@ -1082,6 +1111,13 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
         virtio_scsi_release(s);
     }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        virtio_scsi_acquire(s);
+        virtio_scsi_push_event(s, &info);
+        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
+        virtio_scsi_release(s);
+    }
 }
 
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:53:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530065.825376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1S-00058M-Sn; Thu, 04 May 2023 19:53:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530065.825376; Thu, 04 May 2023 19:53:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1S-000587-PE; Thu, 04 May 2023 19:53:50 +0000
Received: by outflank-mailman (input) for mailman id 530065;
 Thu, 04 May 2023 19:53:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1S-0003iB-6q
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:53:50 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 62cc8225-eab5-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:53:49 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-270-GII9Yb4pMlSDS5YQk3vVsA-1; Thu, 04 May 2023 15:53:45 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 145C8886064;
 Thu,  4 May 2023 19:53:44 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6F9C31121331;
 Thu,  4 May 2023 19:53:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62cc8225-eab5-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230028;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4aDv9+T5dXOUzGaYXYbFqi/phBTNlR0gw0d5Reej/CE=;
	b=RtWK7gEll8TwVEL8NE+0RnxDxnpO/js06J9+6G+jPK+WKu4bRKOg1RyDV31BTJPu6xh7Ub
	XST7dTb33KmjjnrNStDzpJuM6jcdxTpgA/SSUaYErf85suAJoB09DR/K9hIOIdF+MAkyy/
	QeA4mB/e1Lz09zUD84XJq2L8LI+SCt8=
X-MC-Unique: GII9Yb4pMlSDS5YQk3vVsA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v5 05/21] virtio-scsi: stop using aio_disable_external() during unplug
Date: Thu,  4 May 2023 15:53:11 -0400
Message-Id: <20230504195327.695107-6-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

This patch is part of an effort to remove the aio_disable_external()
API because it does not fit in a multi-queue block layer world where
many AioContexts may be submitting requests to the same disk.

The SCSI emulation code is already in good shape to stop using
aio_disable_external(). It was only used by commit 9c5aad84da1c
("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
disk") to ensure that virtio_scsi_hotunplug() works while the guest
driver is submitting I/O.

Ensure virtio_scsi_hotunplug() is safe as follows:

1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
   device_set_realized() calls qatomic_set(&dev->realized, false) so
   that future scsi_device_get() calls return NULL because they exclude
   SCSIDevices with realized=false.

   That means virtio-scsi will reject new I/O requests to this
   SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
   virtio_scsi_hotunplug() is still executing. We are protected against
   new requests!

2. scsi_device_unrealize() already contains a call to
   scsi_device_purge_requests() so that in-flight requests are cancelled
   synchronously. This ensures that no in-flight requests remain once
   qdev_simple_device_unplug_cb() returns.

Thanks to these two conditions we don't need aio_disable_external()
anymore.

Cc: Zhengui Li <lizhengui@huawei.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/scsi/virtio-scsi.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index ae314af3de..c1a7ea9ae2 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1091,7 +1091,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
     VirtIOSCSIEventInfo info = {
         .event   = VIRTIO_SCSI_T_TRANSPORT_RESET,
         .reason  = VIRTIO_SCSI_EVT_RESET_REMOVED,
@@ -1101,9 +1100,7 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         },
     };
 
-    aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
-    aio_enable_external(ctx);
 
     if (s->ctx) {
         virtio_scsi_acquire(s);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:53:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:53:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530066.825385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1U-0005Pi-7H; Thu, 04 May 2023 19:53:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530066.825385; Thu, 04 May 2023 19:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1U-0005P9-4L; Thu, 04 May 2023 19:53:52 +0000
Received: by outflank-mailman (input) for mailman id 530066;
 Thu, 04 May 2023 19:53:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1T-0003iB-C6
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:53:51 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6378c682-eab5-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:53:50 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-471-Q3sOoSu5NLOST1Dr08hq8A-1; Thu, 04 May 2023 15:53:48 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D133A811E7B;
 Thu,  4 May 2023 19:53:46 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 35F9F1410F29;
 Thu,  4 May 2023 19:53:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6378c682-eab5-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230029;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Uh9JsAzQBVO7FYeYzOSkBp65HQ8Exa/aGS/jTv3In5w=;
	b=EzZQBmmhNU4ngSGJiFO3DnxrSckJJBnghhkyNeVJXLS0C5BOvjHPSqdaNsbAB27GKPdln4
	TZPPF56oof8Ztitpzo9qCrWwFS3b2PO6Dbr4/8nfab7sPM5rL6NQnlBwe8oSYm9u2an1Zc
	lVLQ13sCnnKTIaGjt41I5Qd5tcg/LXo=
X-MC-Unique: Q3sOoSu5NLOST1Dr08hq8A-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 06/21] util/vhost-user-server: rename refcount to in_flight counter
Date: Thu,  4 May 2023 15:53:12 -0400
Message-Id: <20230504195327.695107-7-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

The VuServer object has a refcount field and ref/unref APIs. The name is
confusing because it's actually an in-flight request counter instead of
a refcount.

Normally an object is destroyed when its refcount reaches zero. The
VuServer counter is instead used to wake up the vhost-user coroutine
when there are no more in-flight requests.

Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.
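
The counter semantics can be sketched as a minimal model (names are
invented for the sketch; the `woken` field stands in for the
aio_co_wake() call on the waiting coroutine):

```c
/* Unlike a refcount, reaching zero does not destroy the object; it only
 * signals a waiter that the server has gone idle. */
struct server {
    unsigned in_flight; /* number of requests currently being processed */
    int wait_idle;      /* set while someone is waiting for idle */
    int woken;          /* records that the idle waiter was woken */
};

static void inc_in_flight(struct server *s)
{
    s->in_flight++;
}

static void dec_in_flight(struct server *s)
{
    s->in_flight--;
    if (s->wait_idle && !s->in_flight) {
        s->woken = 1; /* wake the coroutine waiting for idle */
    }
}
```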

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/qemu/vhost-user-server.h     |  6 +++---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 util/vhost-user-server.c             | 14 +++++++-------
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 25c72433ca..bc0ac9ddb6 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -41,7 +41,7 @@ typedef struct {
     const VuDevIface *vu_iface;
 
     /* Protected by ctx lock */
-    unsigned int refcount;
+    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_ref(VuServer *server);
-void vhost_user_server_unref(VuServer *server);
+void vhost_user_server_inc_in_flight(VuServer *server);
+void vhost_user_server_dec_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index e56b92f2e2..841acb36e3 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -50,7 +50,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
     free(req);
 }
 
-/* Called with server refcount increased, must decrease before returning */
+/*
+ * Called with server in_flight counter increased, must decrease before
+ * returning.
+ */
 static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
     VuBlkReq *req = opaque;
@@ -68,12 +71,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
                                     in_num, out_num);
     if (in_len < 0) {
         free(req);
-        vhost_user_server_unref(server);
+        vhost_user_server_dec_in_flight(server);
         return;
     }
 
     vu_blk_req_complete(req, in_len);
-    vhost_user_server_unref(server);
+    vhost_user_server_dec_in_flight(server);
 }
 
 static void vu_blk_process_vq(VuDev *vu_dev, int idx)
@@ -95,7 +98,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
         Coroutine *co =
             qemu_coroutine_create(vu_blk_virtio_process_req, req);
 
-        vhost_user_server_ref(server);
+        vhost_user_server_inc_in_flight(server);
         qemu_coroutine_enter(co);
     }
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5b6216069c..1622f8cfb3 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
     error_report("vu_panic: %s", buf);
 }
 
-void vhost_user_server_ref(VuServer *server)
+void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->refcount++;
+    server->in_flight++;
 }
 
-void vhost_user_server_unref(VuServer *server)
+void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->refcount--;
-    if (server->wait_idle && !server->refcount) {
+    server->in_flight--;
+    if (server->wait_idle && !server->in_flight) {
         aio_co_wake(server->co_trip);
     }
 }
@@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->refcount) {
+    if (server->in_flight) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->refcount == 0);
+    assert(server->in_flight == 0);
 
     vu_deinit(vu_dev);
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:53:57 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 07/21] block/export: wait for vhost-user-blk requests when draining
Date: Thu,  4 May 2023 15:53:13 -0400
Message-Id: <20230504195327.695107-8-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section, we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv()/blk_co_pwritev()/etc. returns, it wakes the
bdrv_drained_begin() thread, but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.

(This example is theoretical; I came across it while reading the code
and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter needs to be atomic.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
v5:
- Use atomic accesses for in_flight counter in vhost-user-server.c [Kevin]
---
 include/qemu/vhost-user-server.h     |  4 +++-
 block/export/vhost-user-blk-server.c | 13 +++++++++++++
 util/vhost-user-server.c             | 18 ++++++++++++------
 3 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index bc0ac9ddb6..b1c1cda886 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -40,8 +40,9 @@ typedef struct {
     int max_queues;
     const VuDevIface *vu_iface;
 
+    unsigned int in_flight; /* atomic */
+
     /* Protected by ctx lock */
-    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
 
 void vhost_user_server_inc_in_flight(VuServer *server);
 void vhost_user_server_dec_in_flight(VuServer *server);
+bool vhost_user_server_has_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 841acb36e3..f51a36a14f 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -272,7 +272,20 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
+/*
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ *
+ * Called with vexp->export.ctx acquired.
+ */
+static bool vu_blk_drained_poll(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    return vhost_user_server_has_in_flight(&vexp->vu_server);
+}
+
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_poll  = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
 
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 1622f8cfb3..68c3bf162f 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }
 
 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }
 
+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn
 vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
 {
@@ -192,13 +198,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->in_flight) {
+    if (vhost_user_server_has_in_flight(server)) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->in_flight == 0);
+    assert(!vhost_user_server_has_in_flight(server));
 
     vu_deinit(vu_dev);
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:54:01 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: [PATCH v5 09/21] hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Date: Thu,  4 May 2023 15:53:15 -0400
Message-Id: <20230504195327.695107-10-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is no need to suspend activity between aio_disable_external() and
aio_enable_external(), a mechanism that is mainly used for the block
layer's drain operation.

This is part of ongoing work to remove the aio_disable_external() API.

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..6e81bc8791 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:54:01 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 08/21] block/export: stop using is_external in vhost-user-blk server
Date: Thu,  4 May 2023 15:53:14 -0400
Message-Id: <20230504195327.695107-9-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used, since multiple AioContexts may be processing I/O,
not just one.

Switch to BlockDevOps->drained_begin/end() callbacks.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 28 ++++++++++++++++++++++++++--
 util/vhost-user-server.c             | 10 +++++-----
 2 files changed, 31 insertions(+), 7 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index f51a36a14f..81b59761e3 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -212,15 +212,21 @@ static void blk_aio_attached(AioContext *ctx, void *opaque)
 {
     VuBlkExport *vexp = opaque;
 
+    /*
+     * The actual attach will happen in vu_blk_drained_end() and we just
+     * restore ctx here.
+     */
     vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
 }
 
 static void blk_aio_detach(void *opaque)
 {
     VuBlkExport *vexp = opaque;
 
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
+    /*
+     * The actual detach already happened in vu_blk_drained_begin() but from
+     * this point on we must not access ctx anymore.
+     */
     vexp->export.ctx = NULL;
 }
 
@@ -272,6 +278,22 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -285,6 +307,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end   = vu_blk_drained_end,
     .drained_poll  = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 68c3bf162f..a12b2d1bba 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:54:10 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 10/21] block: add blk_in_drain() API
Date: Thu,  4 May 2023 15:53:16 -0400
Message-Id: <20230504195327.695107-11-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

The BlockBackend quiesce_counter is greater than zero during drained
sections. Add an API to check whether the BlockBackend is in a drained
section.

The next patch will use this API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/sysemu/block-backend-global-state.h | 1 +
 block/block-backend.c                       | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/sysemu/block-backend-global-state.h b/include/sysemu/block-backend-global-state.h
index 2b6d27db7c..ac7cbd6b5e 100644
--- a/include/sysemu/block-backend-global-state.h
+++ b/include/sysemu/block-backend-global-state.h
@@ -78,6 +78,7 @@ void blk_activate(BlockBackend *blk, Error **errp);
 int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags);
 void blk_aio_cancel(BlockAIOCB *acb);
 int blk_commit_all(void);
+bool blk_in_drain(BlockBackend *blk);
 void blk_drain(BlockBackend *blk);
 void blk_drain_all(void);
 void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
diff --git a/block/block-backend.c b/block/block-backend.c
index 68d38635bc..96f03cae95 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1270,6 +1270,13 @@ blk_check_byte_request(BlockBackend *blk, int64_t offset, int64_t bytes)
     return 0;
 }
 
+/* Are we currently in a drained section? */
+bool blk_in_drain(BlockBackend *blk)
+{
+    GLOBAL_STATE_CODE(); /* change to IO_OR_GS_CODE(), if necessary */
+    return qatomic_read(&blk->quiesce_counter);
+}
+
 /* To be called between exactly one pair of blk_inc/dec_in_flight() */
 static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:54:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:54:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530079.825433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1m-0007We-B9; Thu, 04 May 2023 19:54:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530079.825433; Thu, 04 May 2023 19:54:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf1m-0007V5-1A; Thu, 04 May 2023 19:54:10 +0000
Received: by outflank-mailman (input) for mailman id 530079;
 Thu, 04 May 2023 19:54:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1k-0003xx-DI
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:54:08 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6c75c70b-eab5-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:54:05 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-83-mf16-bP3ON6rSVB1WZdIzA-1; Thu, 04 May 2023 15:54:00 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E5EB4101A531;
 Thu,  4 May 2023 19:53:58 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 00AB54020962;
 Thu,  4 May 2023 19:53:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c75c70b-eab5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230044;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TaJhEzn2xnk+TJZ+4gZYO3ndhDET6IVJHLdTW4Bnkp0=;
	b=OkR7Xgyrk9DRZ+GmfacqrWgXZ781Uwvtp77CWlpRDdaciJmLhCObMpDc78EIFyvv4qQ1ri
	3co3ZB8Ber+KuqHQ91JXqRvsszqqCjSqyy5DNy+AQWoc286ncoAKHFaduZxCZOgSSONzv2
	nGIRnXaUVCCnmXlw8aCYF7LfwZMNhmc=
X-MC-Unique: mf16-bP3ON6rSVB1WZdIzA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 11/21] block: drain from main loop thread in bdrv_co_yield_to_drain()
Date: Thu,  4 May 2023 15:53:17 -0400
Message-Id: <20230504195327.695107-12-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

For simplicity, always run BlockDevOps .drained_begin/end/poll()
callbacks in the main loop thread. This makes it easier to implement the
callbacks and avoids extra locks.

Move the function pointer declarations from the I/O Code section to the
Global State section for BlockDevOps, BdrvChildClass, and BlockDriver.

Narrow IO_OR_GS_CODE() to GLOBAL_STATE_CODE() where appropriate.

The test-bdrv-drain test case calls bdrv_drain() from an IOThread. This
is now only allowed from coroutine context, so update the test case to
run in a coroutine.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/block_int-common.h      | 90 +++++++++++++--------------
 include/sysemu/block-backend-common.h | 25 ++++----
 block/io.c                            | 14 +++--
 tests/unit/test-bdrv-drain.c          | 14 +++--
 4 files changed, 76 insertions(+), 67 deletions(-)

diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
index 013d419444..f462a8be55 100644
--- a/include/block/block_int-common.h
+++ b/include/block/block_int-common.h
@@ -356,6 +356,21 @@ struct BlockDriver {
     void (*bdrv_attach_aio_context)(BlockDriverState *bs,
                                     AioContext *new_context);
 
+    /**
+     * bdrv_drain_begin is called if implemented in the beginning of a
+     * drain operation to drain and stop any internal sources of requests in
+     * the driver.
+     * bdrv_drain_end is called if implemented at the end of the drain.
+     *
+     * They should be used by the driver to e.g. manage scheduled I/O
+     * requests, or toggle an internal state. After the end of the drain new
+     * requests will continue normally.
+     *
+     * Implementations of both functions must not call aio_poll().
+     */
+    void (*bdrv_drain_begin)(BlockDriverState *bs);
+    void (*bdrv_drain_end)(BlockDriverState *bs);
+
     /**
      * Try to get @bs's logical and physical block size.
      * On success, store them in @bsz and return zero.
@@ -743,21 +758,6 @@ struct BlockDriver {
     void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_io_unplug)(
         BlockDriverState *bs);
 
-    /**
-     * bdrv_drain_begin is called if implemented in the beginning of a
-     * drain operation to drain and stop any internal sources of requests in
-     * the driver.
-     * bdrv_drain_end is called if implemented at the end of the drain.
-     *
-     * They should be used by the driver to e.g. manage scheduled I/O
-     * requests, or toggle an internal state. After the end of the drain new
-     * requests will continue normally.
-     *
-     * Implementations of both functions must not call aio_poll().
-     */
-    void (*bdrv_drain_begin)(BlockDriverState *bs);
-    void (*bdrv_drain_end)(BlockDriverState *bs);
-
     bool (*bdrv_supports_persistent_dirty_bitmap)(BlockDriverState *bs);
 
     bool coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_can_store_new_dirty_bitmap)(
@@ -920,36 +920,6 @@ struct BdrvChildClass {
     void GRAPH_WRLOCK_PTR (*attach)(BdrvChild *child);
     void GRAPH_WRLOCK_PTR (*detach)(BdrvChild *child);
 
-    /*
-     * Notifies the parent that the filename of its child has changed (e.g.
-     * because the direct child was removed from the backing chain), so that it
-     * can update its reference.
-     */
-    int (*update_filename)(BdrvChild *child, BlockDriverState *new_base,
-                           const char *filename, Error **errp);
-
-    bool (*change_aio_ctx)(BdrvChild *child, AioContext *ctx,
-                           GHashTable *visited, Transaction *tran,
-                           Error **errp);
-
-    /*
-     * I/O API functions. These functions are thread-safe.
-     *
-     * See include/block/block-io.h for more information about
-     * the I/O API.
-     */
-
-    void (*resize)(BdrvChild *child);
-
-    /*
-     * Returns a name that is supposedly more useful for human users than the
-     * node name for identifying the node in question (in particular, a BB
-     * name), or NULL if the parent can't provide a better name.
-     */
-    const char *(*get_name)(BdrvChild *child);
-
-    AioContext *(*get_parent_aio_context)(BdrvChild *child);
-
     /*
      * If this pair of functions is implemented, the parent doesn't issue new
      * requests after returning from .drained_begin() until .drained_end() is
@@ -970,6 +940,36 @@ struct BdrvChildClass {
      * activity on the child has stopped.
      */
     bool (*drained_poll)(BdrvChild *child);
+
+    /*
+     * Notifies the parent that the filename of its child has changed (e.g.
+     * because the direct child was removed from the backing chain), so that it
+     * can update its reference.
+     */
+    int (*update_filename)(BdrvChild *child, BlockDriverState *new_base,
+                           const char *filename, Error **errp);
+
+    bool (*change_aio_ctx)(BdrvChild *child, AioContext *ctx,
+                           GHashTable *visited, Transaction *tran,
+                           Error **errp);
+
+    /*
+     * I/O API functions. These functions are thread-safe.
+     *
+     * See include/block/block-io.h for more information about
+     * the I/O API.
+     */
+
+    void (*resize)(BdrvChild *child);
+
+    /*
+     * Returns a name that is supposedly more useful for human users than the
+     * node name for identifying the node in question (in particular, a BB
+     * name), or NULL if the parent can't provide a better name.
+     */
+    const char *(*get_name)(BdrvChild *child);
+
+    AioContext *(*get_parent_aio_context)(BdrvChild *child);
 };
 
 extern const BdrvChildClass child_of_bds;
diff --git a/include/sysemu/block-backend-common.h b/include/sysemu/block-backend-common.h
index 2391679c56..780cea7305 100644
--- a/include/sysemu/block-backend-common.h
+++ b/include/sysemu/block-backend-common.h
@@ -59,6 +59,19 @@ typedef struct BlockDevOps {
      */
     bool (*is_medium_locked)(void *opaque);
 
+    /*
+     * Runs when the backend receives a drain request.
+     */
+    void (*drained_begin)(void *opaque);
+    /*
+     * Runs when the backend's last drain request ends.
+     */
+    void (*drained_end)(void *opaque);
+    /*
+     * Is the device still busy?
+     */
+    bool (*drained_poll)(void *opaque);
+
     /*
      * I/O API functions. These functions are thread-safe.
      *
@@ -76,18 +89,6 @@ typedef struct BlockDevOps {
      * Runs when the size changed (e.g. monitor command block_resize)
      */
     void (*resize_cb)(void *opaque);
-    /*
-     * Runs when the backend receives a drain request.
-     */
-    void (*drained_begin)(void *opaque);
-    /*
-     * Runs when the backend's last drain request ends.
-     */
-    void (*drained_end)(void *opaque);
-    /*
-     * Is the device still busy?
-     */
-    bool (*drained_poll)(void *opaque);
 } BlockDevOps;
 
 /*
diff --git a/block/io.c b/block/io.c
index 6fa1993374..532c8c90c9 100644
--- a/block/io.c
+++ b/block/io.c
@@ -60,7 +60,7 @@ static void bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore)
 
 void bdrv_parent_drained_end_single(BdrvChild *c)
 {
-    IO_OR_GS_CODE();
+    GLOBAL_STATE_CODE();
 
     assert(c->quiesced_parent);
     c->quiesced_parent = false;
@@ -108,7 +108,7 @@ static bool bdrv_parent_drained_poll(BlockDriverState *bs, BdrvChild *ignore,
 
 void bdrv_parent_drained_begin_single(BdrvChild *c)
 {
-    IO_OR_GS_CODE();
+    GLOBAL_STATE_CODE();
 
     assert(!c->quiesced_parent);
     c->quiesced_parent = true;
@@ -248,7 +248,7 @@ typedef struct {
 bool bdrv_drain_poll(BlockDriverState *bs, BdrvChild *ignore_parent,
                      bool ignore_bds_parents)
 {
-    IO_OR_GS_CODE();
+    GLOBAL_STATE_CODE();
 
     if (bdrv_parent_drained_poll(bs, ignore_parent, ignore_bds_parents)) {
         return true;
@@ -335,7 +335,8 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     if (ctx != co_ctx) {
         aio_context_release(ctx);
     }
-    replay_bh_schedule_oneshot_event(ctx, bdrv_co_drain_bh_cb, &data);
+    replay_bh_schedule_oneshot_event(qemu_get_aio_context(),
+                                     bdrv_co_drain_bh_cb, &data);
 
     qemu_coroutine_yield();
     /* If we are resumed from some other event (such as an aio completion or a
@@ -358,6 +359,8 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
         return;
     }
 
+    GLOBAL_STATE_CODE();
+
     /* Stop things in parent-to-child order */
     if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
         aio_disable_external(bdrv_get_aio_context(bs));
@@ -400,11 +403,14 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
 {
     int old_quiesce_counter;
 
+    IO_OR_GS_CODE();
+
     if (qemu_in_coroutine()) {
         bdrv_co_yield_to_drain(bs, false, parent, false);
         return;
     }
     assert(bs->quiesce_counter > 0);
+    GLOBAL_STATE_CODE();
 
     /* Re-enable things in child-to-parent order */
     old_quiesce_counter = qatomic_fetch_dec(&bs->quiesce_counter);
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index d9d3807062..dc3cb9e0e3 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -445,19 +445,19 @@ struct test_iothread_data {
     BlockDriverState *bs;
     enum drain_type drain_type;
     int *aio_ret;
+    bool co_done;
 };
 
-static void test_iothread_drain_entry(void *opaque)
+static void coroutine_fn test_iothread_drain_co_entry(void *opaque)
 {
     struct test_iothread_data *data = opaque;
 
-    aio_context_acquire(bdrv_get_aio_context(data->bs));
     do_drain_begin(data->drain_type, data->bs);
     g_assert_cmpint(*data->aio_ret, ==, 0);
     do_drain_end(data->drain_type, data->bs);
-    aio_context_release(bdrv_get_aio_context(data->bs));
 
-    qemu_event_set(&done_event);
+    data->co_done = true;
+    aio_wait_kick();
 }
 
 static void test_iothread_aio_cb(void *opaque, int ret)
@@ -493,6 +493,7 @@ static void test_iothread_common(enum drain_type drain_type, int drain_thread)
     BlockDriverState *bs;
     BDRVTestState *s;
     BlockAIOCB *acb;
+    Coroutine *co;
     int aio_ret;
     struct test_iothread_data data;
 
@@ -571,8 +572,9 @@ static void test_iothread_common(enum drain_type drain_type, int drain_thread)
         }
         break;
     case 1:
-        aio_bh_schedule_oneshot(ctx_a, test_iothread_drain_entry, &data);
-        qemu_event_wait(&done_event);
+        co = qemu_coroutine_create(test_iothread_drain_co_entry, &data);
+        aio_co_enter(ctx_a, co);
+        AIO_WAIT_WHILE_UNLOCKED(NULL, !data.co_done);
         break;
     default:
         g_assert_not_reached();
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:59:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:59:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530094.825455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf6o-0002Ox-4N; Thu, 04 May 2023 19:59:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530094.825455; Thu, 04 May 2023 19:59:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf6o-0002Oq-1a; Thu, 04 May 2023 19:59:22 +0000
Received: by outflank-mailman (input) for mailman id 530094;
 Thu, 04 May 2023 19:59:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf28-0003xx-K8
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:54:32 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b8f1ab9-eab5-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:54:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-595-i1TCEUOyPMCxwUb6LaaevA-1; Thu, 04 May 2023 15:54:24 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A91B5811E7E;
 Thu,  4 May 2023 19:54:23 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 0E16B2166B31;
 Thu,  4 May 2023 19:54:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b8f1ab9-eab5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230069;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=peUabVxpb/f9SoCESFgzNkXObwSbZ6ll4imnzO2kIyY=;
	b=XoxQ/jxHYPKvkrOMDt7ssACzd/kVn1YwMjKNc5bk+7ES57V+Hchp9ORuWVUdH7ZbuIL7My
	4ifEBGnlRAzo5DaMTj/T0Ze2lIe+yTFodtd+WYzg76uEg7TuJiPTfQJFRj69Zx567QsavY
	Zwc2tIB1tAnRDW/UOkGVdVHaJJ+/wSs=
X-MC-Unique: i1TCEUOyPMCxwUb6LaaevA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 20/21] virtio: do not set is_external=true on host notifiers
Date: Thu,  4 May 2023 15:53:26 -0400
Message-Id: <20230504195327.695107-21-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

Host notifiers can now use is_external=false since virtio-blk and
virtio-scsi no longer rely on is_external=true for drained sections.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/virtio/virtio.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 272d930721..9cdad7e550 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:59:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:59:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530093.825446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf6l-00029b-TE; Thu, 04 May 2023 19:59:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530093.825446; Thu, 04 May 2023 19:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf6l-00029U-QM; Thu, 04 May 2023 19:59:19 +0000
Received: by outflank-mailman (input) for mailman id 530093;
 Thu, 04 May 2023 19:59:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1v-0003xx-QK
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:54:19 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 732726f0-eab5-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:54:16 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-311-pmj5n48DPI265atIoUKLhQ-1; Thu, 04 May 2023 15:54:12 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 78EBA855304;
 Thu,  4 May 2023 19:54:11 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id DB4002166B30;
 Thu,  4 May 2023 19:54:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 732726f0-eab5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230055;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pXsgzUQNPWG1ZQh/g83GoR8nnGiyShUYhUlRFMynpck=;
	b=LLvQInOb3cY5EaZPXCFCBQngtWgL2hlRshh00E2VEU/8EUi3VryPt+evBdNGoje4Ifbzeq
	ubrqBZEy6RlpJtvugbWI2NJTZUCJOhBfjzoB6YhQ2+yrQjZ8v0/c9254je/izmLXQvKx2D
	CqOWg14P8OWSNDy+XUKUU/BoZ+25+P4=
X-MC-Unique: pmj5n48DPI265atIoUKLhQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 16/21] block/fuse: do not set is_external=true on FUSE fd
Date: Thu,  4 May 2023 15:53:22 -0400
Message-Id: <20230504195327.695107-17-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

This is part of ongoing work to remove the aio_disable_external() API.

Use BlockDevOps .drained_begin/end/poll() instead of
aio_set_fd_handler(is_external=true).

As a side effect, the FUSE export now follows AioContext changes like the
other export types.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/fuse.c | 56 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 54 insertions(+), 2 deletions(-)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 06fa41079e..adf3236b5a 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -50,6 +50,7 @@ typedef struct FuseExport {
 
     struct fuse_session *fuse_session;
     struct fuse_buf fuse_buf;
+    unsigned int in_flight; /* atomic */
     bool mounted, fd_handler_set_up;
 
     char *mountpoint;
@@ -78,6 +79,42 @@ static void read_from_fuse_export(void *opaque);
 static bool is_regular_file(const char *path, Error **errp);
 
 
+static void fuse_export_drained_begin(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       NULL, NULL, NULL, NULL, NULL);
+    exp->fd_handler_set_up = false;
+}
+
+static void fuse_export_drained_end(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    exp->common.ctx = blk_get_aio_context(exp->common.blk);
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       read_from_fuse_export, NULL, NULL, NULL, exp);
+    exp->fd_handler_set_up = true;
+}
+
+static bool fuse_export_drained_poll(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    return qatomic_read(&exp->in_flight) > 0;
+}
+
+static const BlockDevOps fuse_export_blk_dev_ops = {
+    .drained_begin = fuse_export_drained_begin,
+    .drained_end   = fuse_export_drained_end,
+    .drained_poll  = fuse_export_drained_poll,
+};
+
 static int fuse_export_create(BlockExport *blk_exp,
                               BlockExportOptions *blk_exp_args,
                               Error **errp)
@@ -101,6 +138,15 @@ static int fuse_export_create(BlockExport *blk_exp,
         }
     }
 
+    blk_set_dev_ops(exp->common.blk, &fuse_export_blk_dev_ops, exp);
+
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * the FUSE fd handler. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->common.blk, true);
+
     init_exports_table();
 
     /*
@@ -224,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), true,
+                       fuse_session_fd(exp->fuse_session), false,
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -246,6 +292,8 @@ static void read_from_fuse_export(void *opaque)
 
     blk_exp_ref(&exp->common);
 
+    qatomic_inc(&exp->in_flight);
+
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
     } while (ret == -EINTR);
@@ -256,6 +304,10 @@ static void read_from_fuse_export(void *opaque)
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
 
 out:
+    if (qatomic_fetch_dec(&exp->in_flight) == 1) {
+        aio_wait_kick(); /* wake AIO_WAIT_WHILE() */
+    }
+
     blk_exp_unref(&exp->common);
 }
 
@@ -268,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), true,
+                               fuse_session_fd(exp->fuse_session), false,
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:59:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:59:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530095.825466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf6s-0002hr-Gx; Thu, 04 May 2023 19:59:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530095.825466; Thu, 04 May 2023 19:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf6s-0002hf-DY; Thu, 04 May 2023 19:59:26 +0000
Received: by outflank-mailman (input) for mailman id 530095;
 Thu, 04 May 2023 19:59:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1q-0003xx-Dp
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:54:14 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 708d161a-eab5-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:54:12 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-127-Vlp1VFRtMiyjJOAA8uYRjg-1; Thu, 04 May 2023 15:54:08 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 66842A0F392;
 Thu,  4 May 2023 19:54:06 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id BD6032166B31;
 Thu,  4 May 2023 19:54:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 708d161a-eab5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230051;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7wSWNHCOtGu2HSsa9w9EnU6dW7eXmRIvTEr1l03At9A=;
	b=P9zEgIFjfrIgZ6OZCMauqbsLsASBuwBcr2L8tmHqwgKmN+dxLLl4f5JGOhRZOxG8Xl4FPY
	01BueEG9KicSVoimaf53qGONqV0LEYkz1RmOw0jWfCIUnlrKajMNHFEAZqxOXvmb395TnP
	AubLXFXnVK5i6lI7TKU4dztP69/Y4R0=
X-MC-Unique: Vlp1VFRtMiyjJOAA8uYRjg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 14/21] block/export: rewrite vduse-blk drain code
Date: Thu,  4 May 2023 15:53:20 -0400
Message-Id: <20230504195327.695107-15-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

vduse_blk_detach_ctx() waits for in-flight requests using
AIO_WAIT_WHILE(). This is not allowed according to a comment in
bdrv_set_aio_context_commit():

  /*
   * Take the old AioContex when detaching it from bs.
   * At this point, new_context lock is already acquired, and we are now
   * also taking old_context. This is safe as long as bdrv_detach_aio_context
   * does not call AIO_POLL_WHILE().
   */

Use this opportunity to rewrite the drain code in vduse-blk:

- Use the BlockExport refcount so that vduse_blk_exp_delete() is only
  called when there are no more requests in flight.

- Implement .drained_poll() so in-flight request coroutines are stopped
  by the time .bdrv_detach_aio_context() is called.

- Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
  .bdrv_detach_aio_context() constraint violation. It's no longer
  needed due to the previous changes.

- Always handle the VDUSE file descriptor, even in drained sections. The
  VDUSE file descriptor doesn't submit I/O, so it's safe to handle it in
  drained sections. This ensures that the VDUSE kernel code gets a fast
  response.

- Suspend virtqueue fd handlers in .drained_begin() and resume them in
  .drained_end(). This eliminates the need for the
  aio_set_fd_handler(is_external=true) flag, which is being removed from
  QEMU.

This is a long list but splitting it into individual commits would
probably lead to git bisect failures - the changes are all related.
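The "first request pins the export, last request releases it and kicks the waiter" counter used in the diff below can be modeled with C11 atomics standing in for QEMU's `qatomic_*` helpers. The booleans here are stand-ins for `blk_exp_ref()`/`blk_exp_unref()` and `aio_wait_kick()`, not real APIs:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative model of the in-flight counter; not QEMU code. */
typedef struct {
    atomic_uint inflight;
    bool referenced;    /* models blk_exp_ref()/blk_exp_unref() */
    bool waiter_kicked; /* models aio_wait_kick() */
} MockVduseExport;

static void inflight_inc(MockVduseExport *exp)
{
    /* 0 -> 1 transition: take a reference so the export cannot be deleted */
    if (atomic_fetch_add(&exp->inflight, 1) == 0) {
        exp->referenced = true;
    }
}

static void inflight_dec(MockVduseExport *exp)
{
    /* 1 -> 0 transition: wake AIO_WAIT_WHILE() and drop the reference */
    if (atomic_fetch_sub(&exp->inflight, 1) == 1) {
        exp->waiter_kicked = true;
        exp->referenced = false;
    }
}
```

Using fetch-and-add/sub makes the 0→1 and 1→0 transitions race-free: only one thread observes each transition, so the reference is taken and dropped exactly once regardless of how requests interleave.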

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 39 deletions(-)

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index b53ef39da0..a25556fe04 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
     VduseDev *dev;
     uint16_t num_queues;
     char *recon_file;
-    unsigned int inflight;
+    unsigned int inflight; /* atomic */
+    bool vqs_started;
 } VduseBlkExport;
 
 typedef struct VduseBlkReq {
@@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
 
 static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
-    vblk_exp->inflight++;
+    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
+        /* Prevent export from being deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_ref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
+    }
 }
 
 static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
 {
-    if (--vblk_exp->inflight == 0) {
+    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
+        /* Wake AIO_WAIT_WHILE() */
         aio_wait_kick();
+
+        /* Now the export can be deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_unref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
 
+    if (!vblk_exp->vqs_started) {
+        return; /* vduse_blk_drained_end() will start vqs later */
+    }
+
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -133,9 +149,14 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
+    int fd = vduse_queue_get_fd(vq);
 
-    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, NULL, NULL, NULL, NULL, NULL);
+    if (fd < 0) {
+        return;
+    }
+
+    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+                       NULL, NULL, NULL, NULL, NULL);
 }
 
 static const VduseOps vduse_blk_ops = {
@@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
 
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
-    int i;
-
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, on_vduse_dev_kick, NULL, NULL, NULL,
+                       false, on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
-                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
-    }
+    /* Virtqueues are handled by vduse_blk_drained_end() */
 }
 
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
-    int i;
-
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd,
-                           true, NULL, NULL, NULL, NULL, NULL);
-    }
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, NULL, NULL, NULL, NULL, NULL);
+                       false, NULL, NULL, NULL, NULL, NULL);
 
-    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
+    /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
 
 
@@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
                             (char *)&config.capacity);
 }
 
+static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
+{
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_disable_queue(vblk_exp->dev, vq);
+    }
+
+    vblk_exp->vqs_started = false;
+}
+
+static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
+{
+    vblk_exp->vqs_started = true;
+
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_enable_queue(vblk_exp->dev, vq);
+    }
+}
+
+static void vduse_blk_drained_begin(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_stop_virtqueues(vblk_exp);
+}
+
+static void vduse_blk_drained_end(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_start_virtqueues(vblk_exp);
+}
+
+static bool vduse_blk_drained_poll(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    return qatomic_read(&vblk_exp->inflight) > 0;
+}
+
 static const BlockDevOps vduse_block_ops = {
-    .resize_cb = vduse_blk_resize,
+    .resize_cb     = vduse_blk_resize,
+    .drained_begin = vduse_blk_drained_begin,
+    .drained_end   = vduse_blk_drained_end,
+    .drained_poll  = vduse_blk_drained_poll,
 };
 
 static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
@@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
     vblk_exp->handler.logical_block_size = logical_block_size;
     vblk_exp->handler.writable = opts->writable;
+    vblk_exp->vqs_started = true;
 
     config.capacity =
             cpu_to_le64(blk_getlength(exp->blk) >> VIRTIO_BLK_SECTOR_BITS);
@@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vblk_exp);
-
     blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
 
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->blk, true);
+
     return 0;
 err:
     vduse_dev_destroy(vblk_exp->dev);
@@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
     int ret;
 
+    assert(qatomic_read(&vblk_exp->inflight) == 0);
+
+    vduse_blk_detach_ctx(vblk_exp);
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vblk_exp);
     ret = vduse_dev_destroy(vblk_exp->dev);
@@ -354,13 +409,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     g_free(vblk_exp->handler.serial);
 }
 
+/* Called with exp->ctx acquired */
 static void vduse_blk_exp_request_shutdown(BlockExport *exp)
 {
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
 
-    aio_context_acquire(vblk_exp->export.ctx);
-    vduse_blk_detach_ctx(vblk_exp);
-    aio_context_acquire(vblk_exp->export.ctx);
+    vduse_blk_stop_virtqueues(vblk_exp);
 }
 
 const BlockExportDriver blk_exp_vduse_blk = {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 19:59:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 19:59:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530096.825476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf6t-0002yF-Ot; Thu, 04 May 2023 19:59:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530096.825476; Thu, 04 May 2023 19:59:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf6t-0002xx-L0; Thu, 04 May 2023 19:59:27 +0000
Received: by outflank-mailman (input) for mailman id 530096;
 Thu, 04 May 2023 19:59:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1y-0003iB-Ae
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:54:22 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75f0b3fa-eab5-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:54:21 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-37-JLyMG8AyOQ6Ztjf5K0NY6Q-1; Thu, 04 May 2023 15:54:17 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 24421101A531;
 Thu,  4 May 2023 19:54:16 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5E4CA1121331;
 Thu,  4 May 2023 19:54:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75f0b3fa-eab5-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230060;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ruJgs45mwGYUnVoP6uIPKYhuhXIe1j+KFcN94nSbefs=;
	b=aMVbuSe9iyN4M9RlotmvoTWgdJEdpEiOCXkrmbFb+DGJW6eM/bzDVG909kUXfuQaUTQ6WC
	03xS/T+8udfRyeJIZw0FoxBY75ORE0KmB4lH0wD7c3puVR7tnUovNmAuoQzhecjPa2wAsk
	McLai88FozIa1WR7r3xXEvFojNk/bck=
X-MC-Unique: JLyMG8AyOQ6Ztjf5K0NY6Q-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 18/21] virtio-blk: implement BlockDevOps->drained_begin()
Date: Thu,  4 May 2023 15:53:24 -0400
Message-Id: <20230504195327.695107-19-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

Detach ioeventfds during drained sections to stop I/O submission from
the guest. virtio-blk is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Take extra care to avoid attaching/detaching ioeventfds if the data
plane is started/stopped during a drained section. This should be rare,
but maybe the mirror block job can trigger it.
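The invariant the patch maintains can be stated as: ioeventfds are attached exactly when the dataplane is started AND we are not inside a drained section. A toy state model (hypothetical names, not QEMU's) makes the four transitions explicit:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the attach/detach invariant described above. */
typedef struct {
    bool dataplane_started;
    bool in_drain;
    bool notifiers_attached;
} MockVirtioBlk;

static void update_notifiers(MockVirtioBlk *s)
{
    /* Attached iff dataplane is running and no drain is in progress */
    s->notifiers_attached = s->dataplane_started && !s->in_drain;
}

static void dataplane_start(MockVirtioBlk *s)
{
    s->dataplane_started = true;
    update_notifiers(s);
}

static void dataplane_stop(MockVirtioBlk *s)
{
    s->dataplane_started = false;
    update_notifiers(s);
}

static void drained_begin(MockVirtioBlk *s)
{
    s->in_drain = true;
    update_notifiers(s);
}

static void drained_end(MockVirtioBlk *s)
{
    s->in_drain = false;
    update_notifiers(s);
}
```

The "extra care" in the commit message corresponds to `dataplane_start()`/`dataplane_stop()` checking the drain state: starting or stopping the dataplane inside a drained section must not attach notifiers that `drained_end()` would then attach a second time.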

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 17 +++++++++------
 hw/block/virtio-blk.c           | 38 ++++++++++++++++++++++++++++++++-
 2 files changed, 48 insertions(+), 7 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 27eafa6c92..c0d2663abc 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -246,13 +246,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     }
 
     /* Get this show started by hooking up our callbacks */
-    aio_context_acquire(s->ctx);
-    for (i = 0; i < nvqs; i++) {
-        VirtQueue *vq = virtio_get_queue(s->vdev, i);
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_context_acquire(s->ctx);
+        for (i = 0; i < nvqs; i++) {
+            VirtQueue *vq = virtio_get_queue(s->vdev, i);
 
-        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
   fail_aio_context:
@@ -318,7 +320,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     trace_virtio_blk_data_plane_stop(s);
 
     aio_context_acquire(s->ctx);
-    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+    }
 
     /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
     blk_drain(s->conf->conf.blk);
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index cefca93b31..d8dedc575c 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1109,8 +1109,44 @@ static void virtio_blk_resize(void *opaque)
     aio_bh_schedule_oneshot(qemu_get_aio_context(), virtio_resize_cb, vdev);
 }
 
+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_blk_drained_begin(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_blk_drained_end(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, ctx);
+    }
+}
+
 static const BlockDevOps virtio_block_ops = {
-    .resize_cb = virtio_blk_resize,
+    .resize_cb     = virtio_blk_resize,
+    .drained_begin = virtio_blk_drained_begin,
+    .drained_end   = virtio_blk_drained_end,
 };
 
 static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 20:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 20:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530110.825486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf7W-0005Az-45; Thu, 04 May 2023 20:00:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530110.825486; Thu, 04 May 2023 20:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf7V-0005As-VA; Thu, 04 May 2023 20:00:05 +0000
Received: by outflank-mailman (input) for mailman id 530110;
 Thu, 04 May 2023 20:00:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf1y-0003xx-0Y
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:54:22 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 752ae21e-eab5-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 21:54:20 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-635-rnzRckGWN_uYxyN68Qx0aw-1; Thu, 04 May 2023 15:54:15 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A63E5101A531;
 Thu,  4 May 2023 19:54:13 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E92B763F3E;
 Thu,  4 May 2023 19:54:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 752ae21e-eab5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230059;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=94EbW6UEPYMLwV9zB8HOwndWfCZlP2kutpuX+h7WMqo=;
	b=TaCAoYoEpnhHFw4ZKWwTsG7GbZ0YvaDn5NTWcZ/jZDa3c4KlIcULOT36dZiExKMMTVnE7o
	UBao6Axld3t7VTAfWW3isW+Ye8ny5/OtoWfP+3Z2s7drTiyrp8aidxwNY9jwqu3QjltNcY
	p8HKKe6OkyZ4msncWm5i5djPhMndmko=
X-MC-Unique: rnzRckGWN_uYxyN68Qx0aw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 17/21] virtio: make it possible to detach host notifier from any thread
Date: Thu,  4 May 2023 15:53:23 -0400
Message-Id: <20230504195327.695107-18-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5

virtio_queue_aio_detach_host_notifier() does two things:
1. It removes the fd handler from the event loop.
2. It processes the virtqueue one last time.

The first step can be performed by any thread and without taking the
AioContext lock.

The second step may need the AioContext lock (depending on the device
implementation) and runs in the thread where request processing takes
place. virtio-blk and virtio-scsi therefore call
virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in
the AioContext.

Scheduling a BH is undesirable for .drained_begin() functions. The next
patch will introduce a .drained_begin() function that needs to call
virtio_queue_aio_detach_host_notifier().

Move the virtqueue processing out to the callers of
virtio_queue_aio_detach_host_notifier() so that the function can be
called from any thread. This is in preparation for the next patch.
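The split can be sketched as two independent steps on a toy virtqueue model (illustrative names, not QEMU's): step 1 only unregisters the fd handler, which in this model is safe from any thread; step 2 is the final virtqueue pass that the caller now performs explicitly in the request-processing thread.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the detach split described above; not QEMU code. */
typedef struct {
    bool handler_registered;
    int pending_kicks; /* guest kicks not yet handled */
    int processed;     /* kicks handled so far */
} MockVirtq;

/* Step 1: just unregister the handler (thread-agnostic in this model). */
static void detach_host_notifier(MockVirtq *vq)
{
    vq->handler_registered = false;
}

/* Step 2: one last pass, picking up kicks that raced with the detach.
 * The caller runs this where request processing takes place. */
static void host_notifier_read(MockVirtq *vq)
{
    vq->processed += vq->pending_kicks;
    vq->pending_kicks = 0;
}
```

This mirrors the diff below: `virtio_queue_aio_detach_host_notifier()` keeps only step 1, and each caller follows it with `virtio_queue_host_notifier_read()` for step 2.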

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 2 ++
 hw/scsi/virtio-scsi-dataplane.c | 9 +++++++++
 2 files changed, 11 insertions(+)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index a6202997ee..27eafa6c92 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -287,8 +287,10 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)
 
     for (i = 0; i < s->conf->num_queues; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
+        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
 
         virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 20bb91766e..81643445ed 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -71,12 +71,21 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
 {
     VirtIOSCSI *s = opaque;
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+    EventNotifier *host_notifier;
     int i;
 
     virtio_queue_aio_detach_host_notifier(vs->ctrl_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->ctrl_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     virtio_queue_aio_detach_host_notifier(vs->event_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->event_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_detach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        host_notifier = virtio_queue_get_host_notifier(vs->cmd_vqs[i]);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 20:00:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 20:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530114.825496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf7a-0005XZ-Bg; Thu, 04 May 2023 20:00:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530114.825496; Thu, 04 May 2023 20:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puf7a-0005XS-8d; Thu, 04 May 2023 20:00:10 +0000
Received: by outflank-mailman (input) for mailman id 530114;
 Thu, 04 May 2023 20:00:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EoaW=AZ=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1puf29-0003iB-86
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 19:54:33 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ba0c59d-eab5-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 21:54:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-197-qnKcAnexPNuFuzoeplb57A-1; Thu, 04 May 2023 15:54:28 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 220D33C0F66C;
 Thu,  4 May 2023 19:54:27 +0000 (UTC)
Received: from localhost (unknown [10.39.192.57])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D5CF340C2064;
 Thu,  4 May 2023 19:54:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ba0c59d-eab5-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683230070;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DIrmY+4xB43BwQ7fJaY0YH2utqNnWk6FB76ZRZW5uOk=;
	b=C5Egk0MCk0umval0dUHvR5cPxYYrF5KxVdolEX2MgjQhngMh4DdlZwF3zlK5JPdGYN0g2Q
	YdTNiHBK5dJ8E9Q9U+2zZNsdV5j81w8B6ynuR1qkD+6IpPnLZxhps+zM87cULBMtoVCLLM
	QnX9ABO6t/ZOhz9LbyZJ+Z7KyRiUC0I=
X-MC-Unique: qnKcAnexPNuFuzoeplb57A-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 21/21] aio: remove aio_disable_external() API
Date: Thu,  4 May 2023 15:53:27 -0400
Message-Id: <20230504195327.695107-22-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

All callers now pass is_external=false to aio_set_fd_handler() and
aio_set_event_notifier(). The aio_disable_external() API that
temporarily disables fd handlers that were registered with is_external=true
is therefore dead code.

Remove aio_disable_external(), aio_enable_external(), and the
is_external arguments to aio_set_fd_handler() and
aio_set_event_notifier().

The entire test-fdmon-epoll test is removed because its sole purpose was
testing aio_disable_external().

Parts of this patch were generated using the following coccinelle
(https://coccinelle.lip6.fr/) semantic patch:

  @@
  expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
  @@
  - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
  + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)

  @@
  expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
  @@
  - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
  + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/aio.h           | 57 ---------------------------
 util/aio-posix.h              |  1 -
 block.c                       |  7 ----
 block/blkio.c                 | 15 +++----
 block/curl.c                  | 10 ++---
 block/export/fuse.c           |  8 ++--
 block/export/vduse-blk.c      | 10 ++---
 block/io.c                    |  2 -
 block/io_uring.c              |  4 +-
 block/iscsi.c                 |  3 +-
 block/linux-aio.c             |  4 +-
 block/nfs.c                   |  5 +--
 block/nvme.c                  |  8 ++--
 block/ssh.c                   |  4 +-
 block/win32-aio.c             |  6 +--
 hw/i386/kvm/xen_xenstore.c    |  2 +-
 hw/virtio/virtio.c            |  6 +--
 hw/xen/xen-bus.c              |  8 ++--
 io/channel-command.c          |  6 +--
 io/channel-file.c             |  3 +-
 io/channel-socket.c           |  3 +-
 migration/rdma.c              | 16 ++++----
 tests/unit/test-aio.c         | 27 +------------
 tests/unit/test-bdrv-drain.c  |  1 -
 tests/unit/test-fdmon-epoll.c | 73 -----------------------------------
 util/aio-posix.c              | 20 +++-------
 util/aio-win32.c              |  8 +---
 util/async.c                  |  3 +-
 util/fdmon-epoll.c            | 18 +++------
 util/fdmon-io_uring.c         |  8 +---
 util/fdmon-poll.c             |  3 +-
 util/main-loop.c              |  7 ++--
 util/qemu-coroutine-io.c      |  7 ++--
 util/vhost-user-server.c      | 11 +++---
 tests/unit/meson.build        |  3 --
 35 files changed, 82 insertions(+), 295 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

diff --git a/include/block/aio.h b/include/block/aio.h
index 89bbc536f9..32042e8905 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -225,8 +225,6 @@ struct AioContext {
      */
     QEMUTimerListGroup tlg;
 
-    int external_disable_cnt;
-
     /* Number of AioHandlers without .io_poll() */
     int poll_disable_cnt;
 
@@ -481,7 +479,6 @@ bool aio_poll(AioContext *ctx, bool blocking);
  */
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -497,7 +494,6 @@ void aio_set_fd_handler(AioContext *ctx,
  */
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready);
@@ -626,59 +622,6 @@ static inline void aio_timer_init(AioContext *ctx,
  */
 int64_t aio_compute_timeout(AioContext *ctx);
 
-/**
- * aio_disable_external:
- * @ctx: the aio context
- *
- * Disable the further processing of external clients.
- */
-static inline void aio_disable_external(AioContext *ctx)
-{
-    qatomic_inc(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_enable_external:
- * @ctx: the aio context
- *
- * Enable the processing of external clients.
- */
-static inline void aio_enable_external(AioContext *ctx)
-{
-    int old;
-
-    old = qatomic_fetch_dec(&ctx->external_disable_cnt);
-    assert(old > 0);
-    if (old == 1) {
-        /* Kick event loop so it re-arms file descriptors */
-        aio_notify(ctx);
-    }
-}
-
-/**
- * aio_external_disabled:
- * @ctx: the aio context
- *
- * Return true if the external clients are disabled.
- */
-static inline bool aio_external_disabled(AioContext *ctx)
-{
-    return qatomic_read(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_node_check:
- * @ctx: the aio context
- * @is_external: Whether or not the checked node is an external event source.
- *
- * Check if the node's is_external flag is okay to be polled by the ctx at this
- * moment. True means green light.
- */
-static inline bool aio_node_check(AioContext *ctx, bool is_external)
-{
-    return !is_external || !qatomic_read(&ctx->external_disable_cnt);
-}
-
 /**
  * aio_co_schedule:
  * @ctx: the aio context
diff --git a/util/aio-posix.h b/util/aio-posix.h
index 80b927c7f4..4264c518be 100644
--- a/util/aio-posix.h
+++ b/util/aio-posix.h
@@ -38,7 +38,6 @@ struct AioHandler {
 #endif
     int64_t poll_idle_timeout; /* when to stop userspace polling */
     bool poll_ready; /* has polling detected an event? */
-    bool is_external;
 };
 
 /* Add a handler to a ready list */
diff --git a/block.c b/block.c
index 5ec1a3897e..6239c8bc07 100644
--- a/block.c
+++ b/block.c
@@ -7268,9 +7268,6 @@ static void bdrv_detach_aio_context(BlockDriverState *bs)
         bs->drv->bdrv_detach_aio_context(bs);
     }
 
-    if (bs->quiesce_counter) {
-        aio_enable_external(bs->aio_context);
-    }
     bs->aio_context = NULL;
 }
 
@@ -7280,10 +7277,6 @@ static void bdrv_attach_aio_context(BlockDriverState *bs,
     BdrvAioNotifier *ban, *ban_tmp;
     GLOBAL_STATE_CODE();
 
-    if (bs->quiesce_counter) {
-        aio_disable_external(new_context);
-    }
-
     bs->aio_context = new_context;
 
     if (bs->drv && bs->drv->bdrv_attach_aio_context) {
diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..72117fa005 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -306,23 +306,18 @@ static void blkio_attach_aio_context(BlockDriverState *bs,
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(new_context,
-                       s->completion_fd,
-                       false,
-                       blkio_completion_fd_read,
-                       NULL,
+    aio_set_fd_handler(new_context, s->completion_fd,
+                       blkio_completion_fd_read, NULL,
                        blkio_completion_fd_poll,
-                       blkio_completion_fd_poll_ready,
-                       bs);
+                       blkio_completion_fd_poll_ready, bs);
 }
 
 static void blkio_detach_aio_context(BlockDriverState *bs)
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(bdrv_get_aio_context(bs),
-                       s->completion_fd,
-                       false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(bdrv_get_aio_context(bs), s->completion_fd, NULL, NULL,
+                       NULL, NULL, NULL);
 }
 
 /* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
diff --git a/block/curl.c b/block/curl.c
index 8bb39a134e..0fc42d03d7 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -132,7 +132,7 @@ static gboolean curl_drop_socket(void *key, void *value, void *opaque)
     CURLSocket *socket = value;
     BDRVCURLState *s = socket->s;
 
-    aio_set_fd_handler(s->aio_context, socket->fd, false,
+    aio_set_fd_handler(s->aio_context, socket->fd,
                        NULL, NULL, NULL, NULL, NULL);
     return true;
 }
@@ -180,20 +180,20 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     trace_curl_sock_cb(action, (int)fd);
     switch (action) {
         case CURL_POLL_IN:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, NULL, NULL, NULL, socket);
             break;
         case CURL_POLL_OUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, curl_multi_do, NULL, NULL, socket);
             break;
         case CURL_POLL_INOUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, curl_multi_do,
                                NULL, NULL, socket);
             break;
         case CURL_POLL_REMOVE:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, NULL, NULL, NULL, NULL);
             break;
     }
diff --git a/block/export/fuse.c b/block/export/fuse.c
index adf3236b5a..3307b64089 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -84,7 +84,7 @@ static void fuse_export_drained_begin(void *opaque)
     FuseExport *exp = opaque;
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        NULL, NULL, NULL, NULL, NULL);
     exp->fd_handler_set_up = false;
 }
@@ -97,7 +97,7 @@ static void fuse_export_drained_end(void *opaque)
     exp->common.ctx = blk_get_aio_context(exp->common.blk);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 }
@@ -270,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -320,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), false,
+                               fuse_session_fd(exp->fuse_session),
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index e0455551f9..83b05548e7 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -137,7 +137,7 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
     }
 
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -151,7 +151,7 @@ static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
         return;
     }
 
-    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+    aio_set_fd_handler(vblk_exp->export.ctx, fd,
                        NULL, NULL, NULL, NULL, NULL);
 }
 
@@ -170,7 +170,7 @@ static void on_vduse_dev_kick(void *opaque)
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, on_vduse_dev_kick, NULL, NULL, NULL,
+                       on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
     /* Virtqueues are handled by vduse_blk_drained_end() */
@@ -179,7 +179,7 @@ static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
 
     /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
@@ -364,7 +364,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev),
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
diff --git a/block/io.c b/block/io.c
index 532c8c90c9..9f82d1009e 100644
--- a/block/io.c
+++ b/block/io.c
@@ -363,7 +363,6 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
 
     /* Stop things in parent-to-child order */
     if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
-        aio_disable_external(bdrv_get_aio_context(bs));
         bdrv_parent_drained_begin(bs, parent);
         if (bs->drv && bs->drv->bdrv_drain_begin) {
             bs->drv->bdrv_drain_begin(bs);
@@ -419,7 +418,6 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
             bs->drv->bdrv_drain_end(bs);
         }
         bdrv_parent_drained_end(bs, parent);
-        aio_enable_external(bdrv_get_aio_context(bs));
     }
 }
 
diff --git a/block/io_uring.c b/block/io_uring.c
index 989f9a99ed..b64a3e6285 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -406,7 +406,7 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
 
 void luring_detach_aio_context(LuringState *s, AioContext *old_context)
 {
-    aio_set_fd_handler(old_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(old_context, s->ring.ring_fd,
                        NULL, NULL, NULL, NULL, s);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
@@ -416,7 +416,7 @@ void luring_attach_aio_context(LuringState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_luring_completion_bh, s);
-    aio_set_fd_handler(s->aio_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(s->aio_context, s->ring.ring_fd,
                        qemu_luring_completion_cb, NULL,
                        qemu_luring_poll_cb, qemu_luring_poll_ready, s);
 }
diff --git a/block/iscsi.c b/block/iscsi.c
index 9fc0bed90b..34f97ab646 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -363,7 +363,6 @@ iscsi_set_events(IscsiLun *iscsilun)
 
     if (ev != iscsilun->events) {
         aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsi),
-                           false,
                            (ev & POLLIN) ? iscsi_process_read : NULL,
                            (ev & POLLOUT) ? iscsi_process_write : NULL,
                            NULL, NULL,
@@ -1540,7 +1539,7 @@ static void iscsi_detach_aio_context(BlockDriverState *bs)
     IscsiLun *iscsilun = bs->opaque;
 
     aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsilun->iscsi),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     iscsilun->events = 0;
 
     if (iscsilun->nop_timer) {
diff --git a/block/linux-aio.c b/block/linux-aio.c
index fc50cdd1bf..129908531a 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -443,7 +443,7 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
 
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &s->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &s->e, NULL, NULL, NULL);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
 }
@@ -452,7 +452,7 @@ void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
-    aio_set_event_notifier(new_context, &s->e, false,
+    aio_set_event_notifier(new_context, &s->e,
                            qemu_laio_completion_cb,
                            qemu_laio_poll_cb,
                            qemu_laio_poll_ready);
diff --git a/block/nfs.c b/block/nfs.c
index 006045d71a..8f89ece69f 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -195,7 +195,6 @@ static void nfs_set_events(NFSClient *client)
     int ev = nfs_which_events(client->context);
     if (ev != client->events) {
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false,
                            (ev & POLLIN) ? nfs_process_read : NULL,
                            (ev & POLLOUT) ? nfs_process_write : NULL,
                            NULL, NULL, client);
@@ -373,7 +372,7 @@ static void nfs_detach_aio_context(BlockDriverState *bs)
     NFSClient *client = bs->opaque;
 
     aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     client->events = 0;
 }
 
@@ -391,7 +390,7 @@ static void nfs_client_close(NFSClient *client)
     if (client->context) {
         qemu_mutex_lock(&client->mutex);
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false, NULL, NULL, NULL, NULL, NULL);
+                           NULL, NULL, NULL, NULL, NULL);
         qemu_mutex_unlock(&client->mutex);
         if (client->fh) {
             nfs_close(client->context, client->fh);
diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..17937d398d 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -862,7 +862,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     }
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     if (!nvme_identify(bs, namespace, errp)) {
@@ -948,7 +948,7 @@ static void nvme_close(BlockDriverState *bs)
     g_free(s->queues);
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
     event_notifier_cleanup(&s->irq_notifier[MSIX_SHARED_IRQ_IDX]);
     qemu_vfio_pci_unmap_bar(s->vfio, 0, s->bar0_wo_map,
                             0, sizeof(NvmeBar) + NVME_DOORBELL_SIZE);
@@ -1546,7 +1546,7 @@ static void nvme_detach_aio_context(BlockDriverState *bs)
 
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
 }
 
 static void nvme_attach_aio_context(BlockDriverState *bs,
@@ -1556,7 +1556,7 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
 
     s->aio_context = new_context;
     aio_set_event_notifier(new_context, &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     for (unsigned i = 0; i < s->queue_count; i++) {
diff --git a/block/ssh.c b/block/ssh.c
index b3b3352075..2748253d4a 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -1019,7 +1019,7 @@ static void restart_coroutine(void *opaque)
     AioContext *ctx = bdrv_get_aio_context(bs);
 
     trace_ssh_restart_coroutine(restart->co);
-    aio_set_fd_handler(ctx, s->sock, false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(ctx, s->sock, NULL, NULL, NULL, NULL, NULL);
 
     aio_co_wake(restart->co);
 }
@@ -1049,7 +1049,7 @@ static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
     trace_ssh_co_yield(s->sock, rd_handler, wr_handler);
 
     aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock,
-                       false, rd_handler, wr_handler, NULL, NULL, &restart);
+                       rd_handler, wr_handler, NULL, NULL, &restart);
     qemu_coroutine_yield();
     trace_ssh_co_yield_back(s->sock);
 }
diff --git a/block/win32-aio.c b/block/win32-aio.c
index ee87d6048f..6327861e1d 100644
--- a/block/win32-aio.c
+++ b/block/win32-aio.c
@@ -174,7 +174,7 @@ int win32_aio_attach(QEMUWin32AIOState *aio, HANDLE hfile)
 void win32_aio_detach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &aio->e, NULL, NULL, NULL);
     aio->aio_ctx = NULL;
 }
 
@@ -182,8 +182,8 @@ void win32_aio_attach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *new_context)
 {
     aio->aio_ctx = new_context;
-    aio_set_event_notifier(new_context, &aio->e, false,
-                           win32_aio_completion_cb, NULL, NULL);
+    aio_set_event_notifier(new_context, &aio->e, win32_aio_completion_cb,
+                           NULL, NULL);
 }
 
 QEMUWin32AIOState *win32_aio_init(void)
diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 6e81bc8791..0b189c6ab8 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh),
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 9cdad7e550..d48e240c37 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, NULL, NULL, NULL);
     /* Test and clear notifier before after disabling event,
      * in case poll callback didn't have time to run. */
     virtio_queue_host_notifier_read(&vq->host_notifier);
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index bf256d4da2..1e08cf027a 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           false, xen_device_event, NULL, xen_device_poll,
-                           NULL, channel);
+                           xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
diff --git a/io/channel-command.c b/io/channel-command.c
index e7edd091af..7ed726c802 100644
--- a/io/channel-command.c
+++ b/io/channel-command.c
@@ -337,10 +337,8 @@ static void qio_channel_command_set_aio_fd_handler(QIOChannel *ioc,
                                                    void *opaque)
 {
     QIOChannelCommand *cioc = QIO_CHANNEL_COMMAND(ioc);
-    aio_set_fd_handler(ctx, cioc->readfd, false,
-                       io_read, NULL, NULL, NULL, opaque);
-    aio_set_fd_handler(ctx, cioc->writefd, false,
-                       NULL, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->readfd, io_read, NULL, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->writefd, NULL, io_write, NULL, NULL, opaque);
 }
 
 
diff --git a/io/channel-file.c b/io/channel-file.c
index d76663e6ae..8b5821f452 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -198,8 +198,7 @@ static void qio_channel_file_set_aio_fd_handler(QIOChannel *ioc,
                                                 void *opaque)
 {
     QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
-    aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write,
-                       NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, fioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_file_create_watch(QIOChannel *ioc,
diff --git a/io/channel-socket.c b/io/channel-socket.c
index b0ea7d48b3..d99945ebec 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -899,8 +899,7 @@ static void qio_channel_socket_set_aio_fd_handler(QIOChannel *ioc,
                                                   void *opaque)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
-    aio_set_fd_handler(ctx, sioc->fd, false,
-                       io_read, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, sioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc,
diff --git a/migration/rdma.c b/migration/rdma.c
index 7e747b2595..afb8761a06 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3110,15 +3110,15 @@ static void qio_channel_rdma_set_aio_fd_handler(QIOChannel *ioc,
 {
     QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
     if (io_read) {
-        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     } else {
-        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     }
 }
 
diff --git a/tests/unit/test-aio.c b/tests/unit/test-aio.c
index 321d7ab01a..519440eed3 100644
--- a/tests/unit/test-aio.c
+++ b/tests/unit/test-aio.c
@@ -130,7 +130,7 @@ static void *test_acquire_thread(void *opaque)
 static void set_event_notifier(AioContext *ctx, EventNotifier *notifier,
                                EventNotifierHandler *handler)
 {
-    aio_set_event_notifier(ctx, notifier, false, handler, NULL, NULL);
+    aio_set_event_notifier(ctx, notifier, handler, NULL, NULL);
 }
 
 static void dummy_notifier_read(EventNotifier *n)
@@ -383,30 +383,6 @@ static void test_flush_event_notifier(void)
     event_notifier_cleanup(&data.e);
 }
 
-static void test_aio_external_client(void)
-{
-    int i, j;
-
-    for (i = 1; i < 3; i++) {
-        EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
-        event_notifier_init(&data.e, false);
-        aio_set_event_notifier(ctx, &data.e, true, event_ready_cb, NULL, NULL);
-        event_notifier_set(&data.e);
-        for (j = 0; j < i; j++) {
-            aio_disable_external(ctx);
-        }
-        for (j = 0; j < i; j++) {
-            assert(!aio_poll(ctx, false));
-            assert(event_notifier_test_and_clear(&data.e));
-            event_notifier_set(&data.e);
-            aio_enable_external(ctx);
-        }
-        assert(aio_poll(ctx, false));
-        set_event_notifier(ctx, &data.e, NULL);
-        event_notifier_cleanup(&data.e);
-    }
-}
-
 static void test_wait_event_notifier_noflush(void)
 {
     EventNotifierTestData data = { .n = 0 };
@@ -935,7 +911,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
-    g_test_add_func("/aio/external-client",         test_aio_external_client);
     g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio/coroutine/queue-chaining", test_queue_chaining);
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index dc3cb9e0e3..9cac61c2bf 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -435,7 +435,6 @@ static void test_graph_change_drain_all(void)
 
     g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
     g_assert_cmpint(b_s->drain_count, ==, 0);
-    g_assert_cmpint(qemu_get_aio_context()->external_disable_cnt, ==, 0);
 
     bdrv_unref(bs_b);
     blk_unref(blk_b);
diff --git a/tests/unit/test-fdmon-epoll.c b/tests/unit/test-fdmon-epoll.c
deleted file mode 100644
index ef5a856d09..0000000000
--- a/tests/unit/test-fdmon-epoll.c
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * fdmon-epoll tests
- *
- * Copyright (c) 2020 Red Hat, Inc.
- */
-
-#include "qemu/osdep.h"
-#include "block/aio.h"
-#include "qapi/error.h"
-#include "qemu/main-loop.h"
-
-static AioContext *ctx;
-
-static void dummy_fd_handler(EventNotifier *notifier)
-{
-    event_notifier_test_and_clear(notifier);
-}
-
-static void add_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        event_notifier_init(&notifiers[i], false);
-        aio_set_event_notifier(ctx, &notifiers[i], false,
-                               dummy_fd_handler, NULL, NULL);
-    }
-}
-
-static void remove_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        aio_set_event_notifier(ctx, &notifiers[i], false, NULL, NULL, NULL);
-        event_notifier_cleanup(&notifiers[i]);
-    }
-}
-
-/* Check that fd handlers work when external clients are disabled */
-static void test_external_disabled(void)
-{
-    EventNotifier notifiers[100];
-
-    /* fdmon-epoll is only enabled when many fd handlers are registered */
-    add_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-
-    aio_disable_external(ctx);
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-    aio_enable_external(ctx);
-
-    remove_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-}
-
-int main(int argc, char **argv)
-{
-    /*
-     * This code relies on the fact that fdmon-io_uring disables itself when
-     * the glib main loop is in use. The main loop uses fdmon-poll and upgrades
-     * to fdmon-epoll when the number of fds exceeds a threshold.
-     */
-    qemu_init_main_loop(&error_fatal);
-    ctx = qemu_get_aio_context();
-
-    while (g_main_context_iteration(NULL, false)) {
-        /* Do nothing */
-    }
-
-    g_test_init(&argc, &argv, NULL);
-    g_test_add_func("/fdmon-epoll/external-disabled", test_external_disabled);
-    return g_test_run();
-}
diff --git a/util/aio-posix.c b/util/aio-posix.c
index a8be940f76..934b1bbb85 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -99,7 +99,6 @@ static bool aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -144,7 +143,6 @@ void aio_set_fd_handler(AioContext *ctx,
         new_node->io_poll = io_poll;
         new_node->io_poll_ready = io_poll_ready;
         new_node->opaque = opaque;
-        new_node->is_external = is_external;
 
         if (is_new) {
             new_node->pfd.fd = fd;
@@ -196,12 +194,11 @@ static void aio_set_fd_poll(AioContext *ctx, int fd,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
 {
-    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier), is_external,
+    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
                        (IOHandler *)io_read, NULL, io_poll,
                        (IOHandler *)io_poll_ready, notifier);
 }
@@ -285,13 +282,11 @@ bool aio_pending(AioContext *ctx)
 
         /* TODO should this check poll ready? */
         revents = node->pfd.revents & node->pfd.events;
-        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
             result = true;
             break;
         }
-        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
             result = true;
             break;
         }
@@ -350,9 +345,7 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
         QLIST_INSERT_HEAD(&ctx->poll_aio_handlers, node, node_poll);
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
-        poll_ready && revents == 0 &&
-        aio_node_check(ctx, node->is_external) &&
-        node->io_poll_ready) {
+        poll_ready && revents == 0 && node->io_poll_ready) {
         node->io_poll_ready(node->opaque);
 
         /*
@@ -364,7 +357,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
 
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_read) {
         node->io_read(node->opaque);
 
@@ -375,7 +367,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_OUT | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_write) {
         node->io_write(node->opaque);
         progress = true;
@@ -436,8 +427,7 @@ static bool run_poll_handlers_once(AioContext *ctx,
     AioHandler *tmp;
 
     QLIST_FOREACH_SAFE(node, &ctx->poll_aio_handlers, node_poll, tmp) {
-        if (aio_node_check(ctx, node->is_external) &&
-            node->io_poll(node->opaque)) {
+        if (node->io_poll(node->opaque)) {
             aio_add_poll_ready_handler(ready_list, node);
 
             node->poll_idle_timeout = now + POLL_IDLE_INTERVAL_NS;
diff --git a/util/aio-win32.c b/util/aio-win32.c
index 6bded009a4..948ef47a4d 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -32,7 +32,6 @@ struct AioHandler {
     GPollFD pfd;
     int deleted;
     void *opaque;
-    bool is_external;
     QLIST_ENTRY(AioHandler) node;
 };
 
@@ -64,7 +63,6 @@ static void aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -111,7 +109,6 @@ void aio_set_fd_handler(AioContext *ctx,
         node->opaque = opaque;
         node->io_read = io_read;
         node->io_write = io_write;
-        node->is_external = is_external;
 
         if (io_read) {
             bitmask |= FD_READ | FD_ACCEPT | FD_CLOSE;
@@ -135,7 +132,6 @@ void aio_set_fd_handler(AioContext *ctx,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *e,
-                            bool is_external,
                             EventNotifierHandler *io_notify,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
@@ -161,7 +157,6 @@ void aio_set_event_notifier(AioContext *ctx,
             node->e = e;
             node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
             node->pfd.events = G_IO_IN;
-            node->is_external = is_external;
             QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node);
 
             g_source_add_poll(&ctx->source, &node->pfd);
@@ -368,8 +363,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /* fill fd sets */
     count = 0;
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!node->deleted && node->io_notify
-            && aio_node_check(ctx, node->is_external)) {
+        if (!node->deleted && node->io_notify) {
             assert(count < MAXIMUM_WAIT_OBJECTS);
             events[count++] = event_notifier_get_handle(node->e);
         }
diff --git a/util/async.c b/util/async.c
index 055070ffbd..8f90ddc304 100644
--- a/util/async.c
+++ b/util/async.c
@@ -409,7 +409,7 @@ aio_ctx_finalize(GSource     *source)
         g_free(bh);
     }
 
-    aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
     qemu_rec_mutex_destroy(&ctx->lock);
     qemu_lockcnt_destroy(&ctx->list_lock);
@@ -593,7 +593,6 @@ AioContext *aio_context_new(Error **errp)
     QSLIST_INIT(&ctx->scheduled_coroutines);
 
     aio_set_event_notifier(ctx, &ctx->notifier,
-                           false,
                            aio_context_notifier_cb,
                            aio_context_notifier_poll,
                            aio_context_notifier_poll_ready);
diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c
index 1683aa1105..6b6a1a91f8 100644
--- a/util/fdmon-epoll.c
+++ b/util/fdmon-epoll.c
@@ -64,11 +64,6 @@ static int fdmon_epoll_wait(AioContext *ctx, AioHandlerList *ready_list,
     int i, ret = 0;
     struct epoll_event events[128];
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout > 0) {
         ret = qemu_poll_ns(&pfd, 1, timeout);
         if (ret > 0) {
@@ -133,13 +128,12 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
         return false;
     }
 
-    /* Do not upgrade while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return false;
-    }
-
-    if (npfd < EPOLL_ENABLE_THRESHOLD) {
-        return false;
+    if (npfd >= EPOLL_ENABLE_THRESHOLD) {
+        if (fdmon_epoll_try_enable(ctx)) {
+            return true;
+        } else {
+            fdmon_epoll_disable(ctx);
+        }
     }
 
     /* The list must not change while we add fds to epoll */
diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index ab43052dd7..17ec18b7bd 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -276,11 +276,6 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
     unsigned wait_nr = 1; /* block until at least one cqe is ready */
     int ret;
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout == 0) {
         wait_nr = 0; /* non-blocking */
     } else if (timeout > 0) {
@@ -315,8 +310,7 @@ static bool fdmon_io_uring_need_wait(AioContext *ctx)
         return true;
     }
 
-    /* Are we falling back to fdmon-poll? */
-    return qatomic_read(&ctx->external_disable_cnt);
+    return false;
 }
 
 static const FDMonOps fdmon_io_uring_ops = {
diff --git a/util/fdmon-poll.c b/util/fdmon-poll.c
index 5fe3b47865..17df917cf9 100644
--- a/util/fdmon-poll.c
+++ b/util/fdmon-poll.c
@@ -65,8 +65,7 @@ static int fdmon_poll_wait(AioContext *ctx, AioHandlerList *ready_list,
     assert(npfd == 0);
 
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events
-                && aio_node_check(ctx, node->is_external)) {
+        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events) {
             add_pollfd(node);
         }
     }
diff --git a/util/main-loop.c b/util/main-loop.c
index 7022f02ef8..014c795916 100644
--- a/util/main-loop.c
+++ b/util/main-loop.c
@@ -644,14 +644,13 @@ void qemu_set_fd_handler(int fd,
                          void *opaque)
 {
     iohandler_init();
-    aio_set_fd_handler(iohandler_ctx, fd, false,
-                       fd_read, fd_write, NULL, NULL, opaque);
+    aio_set_fd_handler(iohandler_ctx, fd, fd_read, fd_write, NULL, NULL,
+                       opaque);
 }
 
 void event_notifier_set_handler(EventNotifier *e,
                                 EventNotifierHandler *handler)
 {
     iohandler_init();
-    aio_set_event_notifier(iohandler_ctx, e, false,
-                           handler, NULL, NULL);
+    aio_set_event_notifier(iohandler_ctx, e, handler, NULL, NULL);
 }
diff --git a/util/qemu-coroutine-io.c b/util/qemu-coroutine-io.c
index d791932d63..364f4d5abf 100644
--- a/util/qemu-coroutine-io.c
+++ b/util/qemu-coroutine-io.c
@@ -74,8 +74,7 @@ typedef struct {
 static void fd_coroutine_enter(void *opaque)
 {
     FDYieldUntilData *data = opaque;
-    aio_set_fd_handler(data->ctx, data->fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(data->ctx, data->fd, NULL, NULL, NULL, NULL, NULL);
     qemu_coroutine_enter(data->co);
 }
 
@@ -87,7 +86,7 @@ void coroutine_fn yield_until_fd_readable(int fd)
     data.ctx = qemu_get_current_aio_context();
     data.co = qemu_coroutine_self();
     data.fd = fd;
-    aio_set_fd_handler(
-        data.ctx, fd, false, fd_coroutine_enter, NULL, NULL, NULL, &data);
+    aio_set_fd_handler(data.ctx, fd, fd_coroutine_enter, NULL, NULL, NULL,
+                       &data);
     qemu_coroutine_yield();
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index a12b2d1bba..cd17fb5326 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,8 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(server->ioc->ctx, fd, NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
     g_free(vu_fd_watch);
@@ -362,7 +361,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +402,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +416,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index 3bc78d8660..b33298a444 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -122,9 +122,6 @@ if have_block
   if nettle.found() or gcrypt.found()
     tests += {'test-crypto-pbkdf': [io]}
   endif
-  if config_host_data.get('CONFIG_EPOLL_CREATE1')
-    tests += {'test-fdmon-epoll': [testblock]}
-  endif
 endif
 
 if have_system
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 20:00:22 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 19/21] virtio-scsi: implement BlockDevOps->drained_begin()
Date: Thu,  4 May 2023 15:53:25 -0400
Message-Id: <20230504195327.695107-20-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The virtio-scsi Host Bus Adapter provides access to devices on a SCSI
bus. Those SCSI devices typically have a BlockBackend. When the
BlockBackend enters a drained section, the SCSI device must temporarily
stop submitting new I/O requests.

Implement this behavior by temporarily stopping virtio-scsi virtqueue
processing when one of the SCSI devices enters a drained section. The
new scsi_device_drained_begin() API allows scsi-disk to notify the
virtio-scsi HBA.

scsi_device_drained_begin() uses a drain counter so that multiple SCSI
devices can have overlapping drained sections. The HBA only sees one
pair of .drained_begin/end() calls.
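
The counter pairing described above can be sketched in isolation as
follows. This is a minimal standalone illustration, not the actual QEMU
SCSIBus API: "Bus", "bus_drained_begin" and "bus_drained_end" are
hypothetical stand-ins for the structures introduced by this patch.

```c
#include <assert.h>
#include <limits.h>

/* Hypothetical stand-in for a bus shared by several drainable devices. */
typedef struct {
    int drain_count;   /* number of devices currently in a drained section */
    int hba_drained;   /* 1 while the HBA has stopped processing requests */
} Bus;

/* A device enters a drained section; only the first caller reaches the HBA. */
static void bus_drained_begin(Bus *bus)
{
    assert(bus->drain_count < INT_MAX);
    if (bus->drain_count++ == 0) {
        bus->hba_drained = 1;  /* HBA sees exactly one drained_begin() */
    }
}

/* A device leaves its drained section; only the last caller reaches the HBA. */
static void bus_drained_end(Bus *bus)
{
    assert(bus->drain_count > 0);
    if (bus->drain_count-- == 1) {
        bus->hba_drained = 0;  /* HBA sees exactly one drained_end() */
    }
}
```

With two devices whose drained sections overlap, the HBA stays quiesced
from the first bus_drained_begin() until the second bus_drained_end(),
which is the single begin/end pair the commit message refers to.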

After this commit, virtio-scsi no longer depends on hw/virtio's
ioeventfd aio_set_event_notifier(is_external=true), bringing us one
step closer to removing the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/scsi/scsi.h          | 14 ++++++++++++
 hw/scsi/scsi-bus.c              | 40 +++++++++++++++++++++++++++++++++
 hw/scsi/scsi-disk.c             | 27 +++++++++++++++++-----
 hw/scsi/virtio-scsi-dataplane.c | 22 ++++++++++--------
 hw/scsi/virtio-scsi.c           | 38 +++++++++++++++++++++++++++++++
 hw/scsi/trace-events            |  2 ++
 6 files changed, 129 insertions(+), 14 deletions(-)

diff --git a/include/hw/scsi/scsi.h b/include/hw/scsi/scsi.h
index 6f23a7a73e..e2bb1a2fbf 100644
--- a/include/hw/scsi/scsi.h
+++ b/include/hw/scsi/scsi.h
@@ -133,6 +133,16 @@ struct SCSIBusInfo {
     void (*save_request)(QEMUFile *f, SCSIRequest *req);
     void *(*load_request)(QEMUFile *f, SCSIRequest *req);
     void (*free_request)(SCSIBus *bus, void *priv);
+
+    /*
+     * Temporarily stop submitting new requests between drained_begin() and
+     * drained_end(). Called from the main loop thread with the BQL held.
+     *
+     * Implement these callbacks if request processing is triggered by a file
+     * descriptor like an EventNotifier. Otherwise set them to NULL.
+     */
+    void (*drained_begin)(SCSIBus *bus);
+    void (*drained_end)(SCSIBus *bus);
 };
 
 #define TYPE_SCSI_BUS "SCSI"
@@ -144,6 +154,8 @@ struct SCSIBus {
 
     SCSISense unit_attention;
     const SCSIBusInfo *info;
+
+    int drain_count; /* protected by BQL */
 };
 
 /**
@@ -213,6 +225,8 @@ void scsi_req_cancel_complete(SCSIRequest *req);
 void scsi_req_cancel(SCSIRequest *req);
 void scsi_req_cancel_async(SCSIRequest *req, Notifier *notifier);
 void scsi_req_retry(SCSIRequest *req);
+void scsi_device_drained_begin(SCSIDevice *sdev);
+void scsi_device_drained_end(SCSIDevice *sdev);
 void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense);
 void scsi_device_set_ua(SCSIDevice *sdev, SCSISense sense);
 void scsi_device_report_change(SCSIDevice *dev, SCSISense sense);
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 64013c8a24..f80f4cb4fc 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -1669,6 +1669,46 @@ void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense)
     scsi_device_set_ua(sdev, sense);
 }
 
+void scsi_device_drained_begin(SCSIDevice *sdev)
+{
+    SCSIBus *bus = DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus);
+    if (!bus) {
+        return;
+    }
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bus->drain_count < INT_MAX);
+
+    /*
+     * Multiple BlockBackends can be on a SCSIBus and each may begin/end
+     * draining at any time. Keep a counter so HBAs only see begin/end once.
+     */
+    if (bus->drain_count++ == 0) {
+        trace_scsi_bus_drained_begin(bus, sdev);
+        if (bus->info->drained_begin) {
+            bus->info->drained_begin(bus);
+        }
+    }
+}
+
+void scsi_device_drained_end(SCSIDevice *sdev)
+{
+    SCSIBus *bus = DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus);
+    if (!bus) {
+        return;
+    }
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bus->drain_count > 0);
+
+    if (bus->drain_count-- == 1) {
+        trace_scsi_bus_drained_end(bus, sdev);
+        if (bus->info->drained_end) {
+            bus->info->drained_end(bus);
+        }
+    }
+}
+
 static char *scsibus_get_dev_path(DeviceState *dev)
 {
     SCSIDevice *d = SCSI_DEVICE(dev);
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 97c9b1c8cd..e0d79c7966 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2360,6 +2360,20 @@ static void scsi_disk_reset(DeviceState *dev)
     s->qdev.scsi_version = s->qdev.default_scsi_version;
 }
 
+static void scsi_disk_drained_begin(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_begin(&s->qdev);
+}
+
+static void scsi_disk_drained_end(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_end(&s->qdev);
+}
+
 static void scsi_disk_resize_cb(void *opaque)
 {
     SCSIDiskState *s = opaque;
@@ -2414,16 +2428,19 @@ static bool scsi_cd_is_medium_locked(void *opaque)
 }
 
 static const BlockDevOps scsi_disk_removable_block_ops = {
-    .change_media_cb = scsi_cd_change_media_cb,
+    .change_media_cb  = scsi_cd_change_media_cb,
+    .drained_begin    = scsi_disk_drained_begin,
+    .drained_end      = scsi_disk_drained_end,
     .eject_request_cb = scsi_cd_eject_request_cb,
-    .is_tray_open = scsi_cd_is_tray_open,
     .is_medium_locked = scsi_cd_is_medium_locked,
-
-    .resize_cb = scsi_disk_resize_cb,
+    .is_tray_open     = scsi_cd_is_tray_open,
+    .resize_cb        = scsi_disk_resize_cb,
 };
 
 static const BlockDevOps scsi_disk_block_ops = {
-    .resize_cb = scsi_disk_resize_cb,
+    .drained_begin = scsi_disk_drained_begin,
+    .drained_end   = scsi_disk_drained_end,
+    .resize_cb     = scsi_disk_resize_cb,
 };
 
 static void scsi_disk_unit_attention_reported(SCSIDevice *dev)
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 81643445ed..1060038e13 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -153,14 +153,16 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
     s->dataplane_starting = false;
     s->dataplane_started = true;
 
-    aio_context_acquire(s->ctx);
-    virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
-    virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
+    if (s->bus.drain_count == 0) {
+        aio_context_acquire(s->ctx);
+        virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
+        virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
 
-    for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        for (i = 0; i < vs->conf.num_queues; i++) {
+            virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
 fail_host_notifiers:
@@ -206,9 +208,11 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
     }
     s->dataplane_stopping = true;
 
-    aio_context_acquire(s->ctx);
-    aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
-    aio_context_release(s->ctx);
+    if (s->bus.drain_count == 0) {
+        aio_context_acquire(s->ctx);
+        aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
+        aio_context_release(s->ctx);
+    }
 
     blk_drain_all(); /* ensure there are no in-flight requests */
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index c1a7ea9ae2..4a8849cc7e 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1117,6 +1117,42 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     }
 }
 
+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_scsi_drained_begin(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_scsi_drained_end(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+    }
+}
+
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .tcq = true,
     .max_channel = VIRTIO_SCSI_MAX_CHANNEL,
@@ -1131,6 +1167,8 @@ static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .get_sg_list = virtio_scsi_get_sg_list,
     .save_request = virtio_scsi_save_request,
     .load_request = virtio_scsi_load_request,
+    .drained_begin = virtio_scsi_drained_begin,
+    .drained_end = virtio_scsi_drained_end,
 };
 
 void virtio_scsi_common_realize(DeviceState *dev,
diff --git a/hw/scsi/trace-events b/hw/scsi/trace-events
index ab238293f0..bdd4e2c7c7 100644
--- a/hw/scsi/trace-events
+++ b/hw/scsi/trace-events
@@ -6,6 +6,8 @@ scsi_req_cancel(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_data(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_data_canceled(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_dequeue(int target, int lun, int tag) "target %d lun %d tag %d"
+scsi_bus_drained_begin(void *bus, void *sdev) "bus %p sdev %p"
+scsi_bus_drained_end(void *bus, void *sdev) "bus %p sdev %p"
 scsi_req_continue(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_continue_canceled(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_parsed(int target, int lun, int tag, int cmd, int mode, int xfer) "target %d lun %d tag %d command %d dir %d length %d"
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 20:00:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 20:00:23 +0000
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Daniel P. Berrangé <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 15/21] block/export: don't require AioContext lock around blk_exp_ref/unref()
Date: Thu,  4 May 2023 15:53:21 -0400
Message-Id: <20230504195327.695107-16-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The FUSE export calls blk_exp_ref/unref() without the AioContext lock.
Instead of fixing the FUSE export, adjust blk_exp_ref/unref() so they
work without the AioContext lock. This way it's less error-prone.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/export.h   |  2 ++
 block/export/export.c    | 13 ++++++-------
 block/export/vduse-blk.c |  4 ----
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/block/export.h b/include/block/export.h
index 7feb02e10d..f2fe0f8078 100644
--- a/include/block/export.h
+++ b/include/block/export.h
@@ -57,6 +57,8 @@ struct BlockExport {
      * Reference count for this block export. This includes strong references
      * both from the owner (qemu-nbd or the monitor) and clients connected to
      * the export.
+     *
+     * Use atomics to access this field.
      */
     int refcount;
 
diff --git a/block/export/export.c b/block/export/export.c
index 62c7c22d45..ab007e9d31 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -202,11 +202,10 @@ fail:
     return NULL;
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_ref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    exp->refcount++;
+    assert(qatomic_read(&exp->refcount) > 0);
+    qatomic_inc(&exp->refcount);
 }
 
 /* Runs in the main thread */
@@ -229,11 +228,10 @@ static void blk_exp_delete_bh(void *opaque)
     aio_context_release(aio_context);
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_unref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    if (--exp->refcount == 0) {
+    assert(qatomic_read(&exp->refcount) > 0);
+    if (qatomic_fetch_dec(&exp->refcount) == 1) {
         /* Touch the block_exports list only in the main thread */
         aio_bh_schedule_oneshot(qemu_get_aio_context(), blk_exp_delete_bh,
                                 exp);
@@ -341,7 +339,8 @@ void qmp_block_export_del(const char *id,
     if (!has_mode) {
         mode = BLOCK_EXPORT_REMOVE_MODE_SAFE;
     }
-    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE && exp->refcount > 1) {
+    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE &&
+        qatomic_read(&exp->refcount) > 1) {
         error_setg(errp, "export '%s' still in use", exp->id);
         error_append_hint(errp, "Use mode='hard' to force client "
                           "disconnect\n");
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index a25556fe04..e0455551f9 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -44,9 +44,7 @@ static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
     if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
         /* Prevent export from being deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_ref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -57,9 +55,7 @@ static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
         aio_wait_kick();
 
         /* Now the export can be deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_unref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 20:00:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 20:00:24 +0000
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Daniel P. Berrangé <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 12/21] xen-block: implement BlockDevOps->drained_begin()
Date: Thu,  4 May 2023 15:53:18 -0400
Message-Id: <20230504195327.695107-13-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Detach event channels during drained sections to stop I/O submission
from the ring. xen-block is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Extend xen_device_set_event_channel_context() to allow ctx=NULL. The
event channel still exists but the event loop does not monitor the file
descriptor. Event channel processing can resume by calling
xen_device_set_event_channel_context() with a non-NULL ctx.

Factor out xen_device_set_event_channel_context() calls in
hw/block/dataplane/xen-block.c into attach/detach helper functions.
Incidentally, these don't require the AioContext lock because
aio_set_fd_handler() is thread-safe.

It's safer to register BlockDevOps after the dataplane instance has been
created. The BlockDevOps .drained_begin/end() callbacks depend on the
dataplane instance, so move the blk_set_dev_ops() call after
xen_block_dataplane_create().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/xen-block.h |  2 ++
 hw/block/dataplane/xen-block.c | 42 +++++++++++++++++++++++++---------
 hw/block/xen-block.c           | 24 ++++++++++++++++---
 hw/xen/xen-bus.c               |  7 ++++--
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/hw/block/dataplane/xen-block.h b/hw/block/dataplane/xen-block.h
index 76dcd51c3d..7b8e9df09f 100644
--- a/hw/block/dataplane/xen-block.h
+++ b/hw/block/dataplane/xen-block.h
@@ -26,5 +26,7 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                unsigned int protocol,
                                Error **errp);
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane);
 
 #endif /* HW_BLOCK_DATAPLANE_XEN_BLOCK_H */
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index d8bc39d359..2597f38805 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -664,6 +664,30 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
     g_free(dataplane);
 }
 
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         NULL, &error_abort);
+}
+
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         dataplane->ctx, &error_abort);
+}
+
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
@@ -674,13 +698,11 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 
     xendev = dataplane->xendev;
 
-    aio_context_acquire(dataplane->ctx);
-    if (dataplane->event_channel) {
-        /* Only reason for failure is a NULL channel */
-        xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                             qemu_get_aio_context(),
-                                             &error_abort);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_detach(dataplane);
     }
+
+    aio_context_acquire(dataplane->ctx);
     /* Xen doesn't have multiple users for nodes, so this can't fail */
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
@@ -819,11 +841,9 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL);
     aio_context_release(old_context);
 
-    /* Only reason for failure is a NULL channel */
-    aio_context_acquire(dataplane->ctx);
-    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                         dataplane->ctx, &error_abort);
-    aio_context_release(dataplane->ctx);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_attach(dataplane);
+    }
 
     return;
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index f5a744589d..f099914831 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -189,8 +189,26 @@ static void xen_block_resize_cb(void *opaque)
     xen_device_backend_printf(xendev, "state", "%u", state);
 }
 
+/* Suspend request handling */
+static void xen_block_drained_begin(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_detach(blockdev->dataplane);
+}
+
+/* Resume request handling */
+static void xen_block_drained_end(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_attach(blockdev->dataplane);
+}
+
 static const BlockDevOps xen_block_dev_ops = {
-    .resize_cb = xen_block_resize_cb,
+    .resize_cb     = xen_block_resize_cb,
+    .drained_begin = xen_block_drained_begin,
+    .drained_end   = xen_block_drained_end,
 };
 
 static void xen_block_realize(XenDevice *xendev, Error **errp)
@@ -242,8 +260,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
-
     if (conf->discard_granularity == -1) {
         conf->discard_granularity = conf->physical_block_size;
     }
@@ -277,6 +293,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     blockdev->dataplane =
         xen_block_dataplane_create(xendev, blk, conf->logical_block_size,
                                    blockdev->props.iothread);
+
+    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
 }
 
 static void xen_block_frontend_changed(XenDevice *xendev,
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c59850b1de..b8f408c9ed 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -846,8 +846,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
-                       xen_device_event, NULL, xen_device_poll, NULL, channel);
+    if (ctx) {
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
+                           true, xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
+    }
 }
 
 XenEventChannel *xen_device_bind_event_channel(XenDevice *xendev,
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 20:00:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 20:00:34 +0000
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>,
	Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Daniel P. Berrangé <berrange@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: [PATCH v5 13/21] hw/xen: do not set is_external=true on evtchn fds
Date: Thu,  4 May 2023 15:53:19 -0400
Message-Id: <20230504195327.695107-14-stefanha@redhat.com>
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

is_external=true suspends fd handlers between aio_disable_external() and
aio_enable_external(). The block layer's drain operation uses this
mechanism to prevent new I/O from sneaking in between
bdrv_drained_begin() and bdrv_drained_end().

The previous commit converted the xen-block device to use BlockDevOps
.drained_begin/end() callbacks. It no longer relies on is_external=true
so it is safe to pass is_external=false.

This is part of ongoing work to remove the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/xen/xen-bus.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index b8f408c9ed..bf256d4da2 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           true, xen_device_event, NULL, xen_device_poll, NULL,
-                           channel);
+                           false, xen_device_event, NULL, xen_device_poll,
+                           NULL, channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 04 21:01:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 21:01:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530140.825545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pug4M-0006wH-Db; Thu, 04 May 2023 21:00:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530140.825545; Thu, 04 May 2023 21:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pug4M-0006wA-AW; Thu, 04 May 2023 21:00:54 +0000
Received: by outflank-mailman (input) for mailman id 530140;
 Thu, 04 May 2023 21:00:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6+xu=AZ=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1pug4K-0006w3-Rt
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 21:00:53 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bde25d9b-eabe-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 23:00:48 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-483-M4XYXSG0PgOjzbjEbNZBbw-1; Thu, 04 May 2023 17:00:43 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7B7253C0DDAA;
 Thu,  4 May 2023 21:00:42 +0000 (UTC)
Received: from redhat.com (unknown [10.39.192.9])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 472A510DF8;
 Thu,  4 May 2023 21:00:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bde25d9b-eabe-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683234046;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=/tE35rOAAC2vyhAwuF/fixTME5fKR02RCLevjKZzkLM=;
	b=N8mohIMJWpdXWbC/dh7o9EA4up4B6E+0MyFGx2cKa6sSPYJaBg2W0esjDYWTchXdm8qX7+
	CtzTXruZb9t7Co48PcVSp7S/TGxf/G8hkKdvGsiahyltVB5hpWl97+5wcEnK760kekYDkB
	WYWKsqwfHnmgb5U+XWdBTQpoMRmcY9A=
X-MC-Unique: M4XYXSG0PgOjzbjEbNZBbw-1
Date: Thu, 4 May 2023 23:00:35 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 16/20] virtio: make it possible to detach host
 notifier from any thread
Message-ID: <ZFQc89cFJuoGF+qI@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-17-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230425172716.1033562-17-stefanha@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5

Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> virtio_queue_aio_detach_host_notifier() does two things:
> 1. It removes the fd handler from the event loop.
> 2. It processes the virtqueue one last time.
> 
> The first step can be performed by any thread and without taking the
> AioContext lock.
> 
> The second step may need the AioContext lock (depending on the device
> implementation) and runs in the thread where request processing takes
> place. virtio-blk and virtio-scsi therefore call
> virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in
> the AioContext.
> 
> Scheduling a BH is undesirable for .drained_begin() functions. The next
> patch will introduce a .drained_begin() function that needs to call
> virtio_queue_aio_detach_host_notifier().

Why is it undesirable? In my mental model, .drained_begin() is still
free to start as many asynchronous things as it likes. The only
important thing to take care of is that .drained_poll() returns true as
long as the BH (or other asynchronous operation) is still pending.

Of course, your way of doing things still seems to result in simpler
code because you don't have to deal with a BH at all if you only really
want the first part and not the second.

> Move the virtqueue processing out to the callers of
> virtio_queue_aio_detach_host_notifier() so that the function can be
> called from any thread. This is in preparation for the next patch.

Did you forget to remove it in virtio_queue_aio_detach_host_notifier()?
If it's unchanged, I don't think the AioContext requirement is lifted.

> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 2 ++
>  hw/scsi/virtio-scsi-dataplane.c | 9 +++++++++
>  2 files changed, 11 insertions(+)
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index b28d81737e..bd7cc6e76b 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -286,8 +286,10 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)
>  
>      for (i = 0; i < s->conf->num_queues; i++) {
>          VirtQueue *vq = virtio_get_queue(s->vdev, i);
> +        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
>  
>          virtio_queue_aio_detach_host_notifier(vq, s->ctx);
> +        virtio_queue_host_notifier_read(host_notifier);
>      }
>  }

The existing code in virtio_queue_aio_detach_host_notifier() has a
comment before the read:

    /* Test and clear notifier before after disabling event,
     * in case poll callback didn't have time to run. */

Do we want to keep it around in the new places? (And also fix the
"before after", I suppose, or replace it with a similar, but better
comment that explains why we're reading here.)

Kevin



From xen-devel-bounces@lists.xenproject.org Thu May 04 21:09:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 21:09:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530143.825555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pugCK-0007aZ-7i; Thu, 04 May 2023 21:09:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530143.825555; Thu, 04 May 2023 21:09:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pugCK-0007aS-4y; Thu, 04 May 2023 21:09:08 +0000
Received: by outflank-mailman (input) for mailman id 530143;
 Thu, 04 May 2023 21:09:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pugCI-0007aI-L8; Thu, 04 May 2023 21:09:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pugCI-0001sl-Dw; Thu, 04 May 2023 21:09:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pugCH-0002lU-Uf; Thu, 04 May 2023 21:09:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pugCH-0004J9-UE; Thu, 04 May 2023 21:09:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TVQRSwTIB8gYeI19Xh2M1eV0SW3m1yRle7jL7c478I0=; b=s/+aG+o6iyrfuHb6zgkpfEAMO7
	6jvLbvugp0LZw4Mq1q7gWqTwAOc1a6E8ArxKTKCTGXdGHo2YOoBLtQCP2BgSKZqYu2aAxGGJ6mqVQ
	IVI+iha+qm1IP85HJ0EdyqsE/kCzM6SqstIWyx4Zh8dxXFS3l/uW3VE7qicQBt/iMkug=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180535-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180535: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d992a05ade3d1bebc6e7a81aaf700286e0e217c8
X-Osstest-Versions-That:
    ovmf=4b02045f86d6aac8a617bf3f65f9cb2146630ce3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 04 May 2023 21:09:05 +0000

flight 180535 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180535/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d992a05ade3d1bebc6e7a81aaf700286e0e217c8
baseline version:
 ovmf                 4b02045f86d6aac8a617bf3f65f9cb2146630ce3

Last test of basis   180532  2023-05-04 14:40:45 Z    0 days
Testing same since   180535  2023-05-04 19:12:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chasel Chiu <chasel.chiu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4b02045f86..d992a05ade  d992a05ade3d1bebc6e7a81aaf700286e0e217c8 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 04 21:13:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 21:13:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530148.825566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pugGy-0000Zo-Qm; Thu, 04 May 2023 21:13:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530148.825566; Thu, 04 May 2023 21:13:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pugGy-0000Zh-No; Thu, 04 May 2023 21:13:56 +0000
Received: by outflank-mailman (input) for mailman id 530148;
 Thu, 04 May 2023 21:13:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6+xu=AZ=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1pugGy-0000Zb-4U
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 21:13:56 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 926bdd0f-eac0-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 23:13:53 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-199-0jJU1yuvPWyStda8F5r-dw-1; Thu, 04 May 2023 17:13:48 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 81E11857F81;
 Thu,  4 May 2023 21:13:47 +0000 (UTC)
Received: from redhat.com (unknown [10.39.192.9])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id A4B891410F24;
 Thu,  4 May 2023 21:13:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 926bdd0f-eac0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683234832;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=PbtQd0T/eOvB2WwyzniyotLqfkl9lge5PBzlNU6yYaw=;
	b=UvzW2MQyUUInuBe+/PfgDFvELzIMm4LnxpFmgSHxl4cECgNgGrYnKvJTX41mwWSOwnW3nk
	yIYpo3wW5e2qWDHDcjHU1vqhuAIpgQx2GcWrp3V9VyOdr2PtfVfTuAkayUbZmJ/q8cxJcW
	45qXT3LDRIuJhNVhnABR3z+dVpXfbNQ=
X-MC-Unique: 0jJU1yuvPWyStda8F5r-dw-1
Date: Thu, 4 May 2023 23:13:42 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 17/20] virtio-blk: implement
 BlockDevOps->drained_begin()
Message-ID: <ZFQgBvWShB4NCymj@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-18-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230425172716.1033562-18-stefanha@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> Detach ioeventfds during drained sections to stop I/O submission from
> the guest. virtio-blk is no longer reliant on aio_disable_external()
> after this patch. This will allow us to remove the
> aio_disable_external() API once all other code that relies on it is
> converted.
> 
> Take extra care to avoid attaching/detaching ioeventfds if the data
> plane is started/stopped during a drained section. This should be rare,
> but maybe the mirror block job can trigger it.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 17 +++++++++------
>  hw/block/virtio-blk.c           | 38 ++++++++++++++++++++++++++++++++-
>  2 files changed, 48 insertions(+), 7 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index bd7cc6e76b..d77fc6028c 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -245,13 +245,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
>      }
>  
>      /* Get this show started by hooking up our callbacks */
> -    aio_context_acquire(s->ctx);
> -    for (i = 0; i < nvqs; i++) {
> -        VirtQueue *vq = virtio_get_queue(s->vdev, i);
> +    if (!blk_in_drain(s->conf->conf.blk)) {
> +        aio_context_acquire(s->ctx);
> +        for (i = 0; i < nvqs; i++) {
> +            VirtQueue *vq = virtio_get_queue(s->vdev, i);
>  
> -        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> +            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> +        }
> +        aio_context_release(s->ctx);
>      }
> -    aio_context_release(s->ctx);
>      return 0;
>  
>    fail_aio_context:
> @@ -317,7 +319,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
>      trace_virtio_blk_data_plane_stop(s);
>  
>      aio_context_acquire(s->ctx);
> -    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
> +
> +    if (!blk_in_drain(s->conf->conf.blk)) {
> +        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
> +    }

So here we actually get a semantic change: What you described as the
second part in the previous patch, processing the virtqueue one last
time, isn't done any more if the device is drained.

If it's okay to just skip this during drain, why do we need to do it
outside of drain?

Kevin



From xen-devel-bounces@lists.xenproject.org Thu May 04 21:25:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 21:25:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530151.825576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pugSD-00027H-Ri; Thu, 04 May 2023 21:25:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530151.825576; Thu, 04 May 2023 21:25:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pugSD-00027A-P7; Thu, 04 May 2023 21:25:33 +0000
Received: by outflank-mailman (input) for mailman id 530151;
 Thu, 04 May 2023 21:25:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6+xu=AZ=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1pugSC-000274-CE
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 21:25:32 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32193822-eac2-11ed-b226-6b7b168915f2;
 Thu, 04 May 2023 23:25:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-365-0JHXhtIDOzGjIqozYKIm0A-1; Thu, 04 May 2023 17:25:23 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id BB301185A78B;
 Thu,  4 May 2023 21:25:22 +0000 (UTC)
Received: from redhat.com (unknown [10.39.192.9])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 1EA6640C2064;
 Thu,  4 May 2023 21:25:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32193822-eac2-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683235530;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=EXBTvckOEwAXoXWbpnPFajVuZsPXxwBJqTqoZHV9JvQ=;
	b=V2u7aBfrIH5B/lud3IMRO5flv/4nzzzBj6wm+PNJR0HVqdcYOyOxZnUltN/AFkx8SiZCeG
	VDAwJHC8aYJ6rnAkEOm1N56ay9LYPtIh3jkUuKhTm5DoA9zl6tB3teyVVKfPJCM0iIMPr3
	JR5HPy+kg2Lws21G5Mpgg/UNQYzYRp4=
X-MC-Unique: 0JHXhtIDOzGjIqozYKIm0A-1
Date: Thu, 4 May 2023 23:25:17 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 18/20] virtio-scsi: implement
 BlockDevOps->drained_begin()
Message-ID: <ZFQivbkVPcX3nECA@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-19-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230425172716.1033562-19-stefanha@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> The virtio-scsi Host Bus Adapter provides access to devices on a SCSI
> bus. Those SCSI devices typically have a BlockBackend. When the
> BlockBackend enters a drained section, the SCSI device must temporarily
> stop submitting new I/O requests.
> 
> Implement this behavior by temporarily stopping virtio-scsi virtqueue
> processing when one of the SCSI devices enters a drained section. The
> new scsi_device_drained_begin() API allows scsi-disk to message the
> virtio-scsi HBA.
> 
> scsi_device_drained_begin() uses a drain counter so that multiple SCSI
> devices can have overlapping drained sections. The HBA only sees one
> pair of .drained_begin/end() calls.
> 
> After this commit, virtio-scsi no longer depends on hw/virtio's
> ioeventfd aio_set_event_notifier(is_external=true). This commit is a
> step towards removing the aio_disable_external() API.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

> @@ -206,9 +208,11 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
>      }
>      s->dataplane_stopping = true;
>  
> -    aio_context_acquire(s->ctx);
> -    aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
> -    aio_context_release(s->ctx);
> +    if (s->bus.drain_count == 0) {
> +        aio_context_acquire(s->ctx);
> +        aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
> +        aio_context_release(s->ctx);
> +    }

Same question as for virtio-blk: We lose processing the virtqueue one
last time during drain. Is it okay, and if so, why do we need it outside
of drain?

Kevin



From xen-devel-bounces@lists.xenproject.org Thu May 04 21:34:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 21:34:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530155.825586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pugat-0003aS-Lw; Thu, 04 May 2023 21:34:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530155.825586; Thu, 04 May 2023 21:34:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pugat-0003aL-JD; Thu, 04 May 2023 21:34:31 +0000
Received: by outflank-mailman (input) for mailman id 530155;
 Thu, 04 May 2023 21:34:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6+xu=AZ=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1pugar-0003aF-Le
 for xen-devel@lists.xenproject.org; Thu, 04 May 2023 21:34:29 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 71c1c72b-eac3-11ed-8611-37d641c3527e;
 Thu, 04 May 2023 23:34:27 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-301-oftBbWKyPpm5UhTcIWwpiQ-1; Thu, 04 May 2023 17:34:23 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 20351185A7A2;
 Thu,  4 May 2023 21:34:22 +0000 (UTC)
Received: from redhat.com (unknown [10.39.192.9])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 5F7E82166B30;
 Thu,  4 May 2023 21:34:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71c1c72b-eac3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683236066;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gi2dujXyOOsp2C2GzNI9fp0qtTuAHfyKycpA+LfupIY=;
	b=f+D9eg0FBnB1NY3jV8V1yvq9MIp96rcHolYq2LK3paPVShitP/PJbMWk7d/aryHjHdDbk/
	ZWjNHOW00lPpaMmoiQJWS44Ak8nyghDEe0FwUB4udGtF2iZjQv5Mn4YX6I6Lng0p0DF4pQ
	EozO1CzqHwF0pcqw4bRXQy2yEc1CLQI=
X-MC-Unique: oftBbWKyPpm5UhTcIWwpiQ-1
Date: Thu, 4 May 2023 23:34:17 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 20/20] aio: remove aio_disable_external() API
Message-ID: <ZFQk2TdhZ6DiwM4t@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-21-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230425172716.1033562-21-stefanha@redhat.com>

On 25.04.2023 at 19:27, Stefan Hajnoczi wrote:
> All callers now pass is_external=false to aio_set_fd_handler() and
> aio_set_event_notifier(). The aio_disable_external() API that
> temporarily disables fd handlers that were registered with is_external=true
> is therefore dead code.
> 
> Remove aio_disable_external(), aio_enable_external(), and the
> is_external arguments to aio_set_fd_handler() and
> aio_set_event_notifier().
> 
> The entire test-fdmon-epoll test is removed because its sole purpose was
> testing aio_disable_external().
> 
> Parts of this patch were generated using the following coccinelle
> (https://coccinelle.lip6.fr/) semantic patch:
> 
>   @@
>   expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
>   @@
>   - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
>   + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
> 
>   @@
>   expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
>   @@
>   - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
>   + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
> 
> Reviewed-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

> diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c
> index 1683aa1105..6b6a1a91f8 100644
> --- a/util/fdmon-epoll.c
> +++ b/util/fdmon-epoll.c
> @@ -133,13 +128,12 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
>          return false;
>      }
>  
> -    /* Do not upgrade while external clients are disabled */
> -    if (qatomic_read(&ctx->external_disable_cnt)) {
> -        return false;
> -    }
> -
> -    if (npfd < EPOLL_ENABLE_THRESHOLD) {
> -        return false;
> +    if (npfd >= EPOLL_ENABLE_THRESHOLD) {
> +        if (fdmon_epoll_try_enable(ctx)) {
> +            return true;
> +        } else {
> +            fdmon_epoll_disable(ctx);
> +        }
>      }
>  
>      /* The list must not change while we add fds to epoll */

I don't understand this hunk. Why are you changing more than just
deleting the external_disable_cnt check?

Is this a mismerge with your own commit e62da985?

Kevin



From xen-devel-bounces@lists.xenproject.org Thu May 04 21:45:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 21:45:19 +0000
Date: Thu, 4 May 2023 23:44:42 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>, Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Daniel P. Berrangé <berrange@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com, Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: Re: [PATCH v5 00/21] block: remove aio_disable_external() API
Message-ID: <ZFQnSjGiEWuSFWTh@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>

On 04.05.2023 at 21:53, Stefan Hajnoczi wrote:
> v5:
> - Use atomic accesses for in_flight counter in vhost-user-server.c [Kevin]
> - Stash SCSIDevice id/lun values for VIRTIO_SCSI_T_TRANSPORT_RESET event
>   before unrealizing the SCSIDevice [Kevin]
> - Keep vhost-user-blk export .detach() callback so ctx is set to NULL [Kevin]
> - Narrow BdrvChildClass and BlockDriver drained_{begin/end/poll} callbacks from
>   IO_OR_GS_CODE() to GLOBAL_STATE_CODE() [Kevin]
> - Include Kevin's "block: Fix use after free in blockdev_mark_auto_del()" to
>   fix a latent bug that was exposed by this series

I had only just finished reviewing v4 when you sent v5, which hadn't
arrived here yet. I had a few more comments on what are now patches
17, 18, 19 and 21 in v5. I think they all still apply.

Kevin



From xen-devel-bounces@lists.xenproject.org Thu May 04 21:55:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 21:55:10 +0000
Message-ID: <6758ef37-bf66-99c3-1fd4-f2edfc12513a@amd.com>
Date: Thu, 4 May 2023 17:54:45 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v1 3/6] iommu/arm: Introduce iommu_add_dt_pci_device API
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Paul Durrant
	<paul@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-4-stewart.hildebrand@amd.com>
 <0e7404fe-e249-7b3f-0628-b8b8b1925765@suse.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <0e7404fe-e249-7b3f-0628-b8b8b1925765@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On 5/2/23 03:44, Jan Beulich wrote:
> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>> @@ -228,6 +229,9 @@ int iommu_release_dt_devices(struct domain *d);
>>   *      (IOMMU is not enabled/present or device is not connected to it).
>>   */
>>  int iommu_add_dt_device(struct dt_device_node *np);
>> +#ifdef CONFIG_HAS_PCI
>> +int iommu_add_dt_pci_device(uint8_t devfn, struct pci_dev *pdev);
>> +#endif
> 
> Why the first parameter? Doesn't the 2nd one describe the device in full?

It's related to phantom device/function handling, although this series unfortunately does not properly handle phantom devices.

> If this is about phantom devices, then I'd expect the function to take
> care of those (like iommu_add_device() does), rather than its caller.

In the next patch ("[PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of arch hook"), we will invoke iommu_add_dt_pci_device(devfn, pdev) from iommu_add_device(). Since iommu_add_device() iterates over the phantom functions, it would be redundant to also have such a loop inside iommu_add_dt_pci_device().

If we are to properly handle phantom devices on ARM, the SMMU drivers (smmu.c/smmu-v3.c) would need some more work. In patches 5/6 and 6/6 in this series, we have:

if ( devfn != pdev->devfn )
    return -EOPNOTSUPP;

Also, the ARM SMMU drivers in Xen currently only support a single AXI stream ID per device, so some development would need to occur in order to support phantom devices.

Should phantom device support be part of this series, or would it be acceptable to introduce phantom device support on ARM as part of a future series?


Lastly, since phantom devices are new to me, I'd like to check my understanding. Here is what I've gathered:

A phantom device is a device that advertises itself as single-function but actually has multiple phantom functions. These phantom functions have unique requester IDs (RIDs); the RID is essentially the BDF. To use a phantom device with Xen, we specify the pci-phantom command line option, and we identify phantom devices/functions in code by devfn != pdev->devfn.

On ARM, we need to map/translate a BDF to an AXI stream ID in order for the SMMU to identify the device and apply translation. That BDF -> stream ID mapping is defined by the iommu-map/iommu-map-mask property in the device tree [1]. The BDF -> AXI stream ID mapping in DT could allow phantom devices (i.e. devices with phantom functions) to use different stream IDs based on the (phantom) function.

So, in theory, on ARM, there is a possibility we may have a device that advertises itself as single function, but will issue AXI transactions with multiple different AXI stream IDs due to phantom functions. In this case, we will want each AXI stream ID to be programmed into the SMMU to avoid SMMU faults.

Please correct me if I've misunderstood anything.

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-iommu.txt


From xen-devel-bounces@lists.xenproject.org Thu May 04 23:04:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 04 May 2023 23:04:59 +0000
Message-ID: <38b259bb-050b-023e-4f43-212f95f022ac@citrix.com>
Date: Fri, 5 May 2023 00:04:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [patch V2 34/38] x86/cpu/amd: Invoke
 detect_extended_topology_early() on boot CPU
Content-Language: en-GB
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, David Woodhouse <dwmw2@infradead.org>,
 Brian Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
References: <20230504185733.126511787@linutronix.de>
 <20230504185938.179661118@linutronix.de>
In-Reply-To: <20230504185938.179661118@linutronix.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DS7PR03MB5510:EE_
X-MS-Office365-Filtering-Correlation-Id: ddb066f3-8553-4af6-a2aa-08db4cf3e645
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ddb066f3-8553-4af6-a2aa-08db4cf3e645
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2023 23:04:23.2063
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5510

On 04/05/2023 8:02 pm, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
>
> The early detection stores the extended topology leaf number which is
> required for parallel hotplug.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

It occurs to me that this and the previous patch are stale given that we
no longer look at CPUID in the trampoline.

They're probably useful changes in isolation, but the commit messages
want adjusting to remove the association with parallel boot.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 05 00:35:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 00:35:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530199.825643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pujPs-0007mq-Um; Fri, 05 May 2023 00:35:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530199.825643; Fri, 05 May 2023 00:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pujPs-0007mj-QE; Fri, 05 May 2023 00:35:20 +0000
Received: by outflank-mailman (input) for mailman id 530199;
 Fri, 05 May 2023 00:35:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pujPs-0007mZ-78; Fri, 05 May 2023 00:35:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pujPr-0007GE-SX; Fri, 05 May 2023 00:35:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pujPr-0007TR-Ej; Fri, 05 May 2023 00:35:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pujPr-00033l-EG; Fri, 05 May 2023 00:35:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XY7MAMu3USn7TULXagilGunIRzGVRp9BZiH/hQUWH4Y=; b=lHz3tydp5H+Ov7G6aAPinMLlbV
	7c3h0M3H1/vMaJVK+e6/Y1bwgao9vE/zVKCrA6EfZk3LB3AjG0RWsjvoXfUzm+D4Cbu44+Ilb2VzW
	KxU6Y6SUYdEdoM4HSMK4Tz8Q3tCXl1C1Vt/BGd0huevjY6UCLTBfIjBWpJ2NP/C/qVBA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180530-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180530: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1488ccb9b64e76aab0843bc035ce3b1938df2517
X-Osstest-Versions-That:
    qemuu=044f8cf70a2fdf3b9e4c4d849c66e7855d2c446a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 00:35:19 +0000

flight 180530 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180530/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180522
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180522
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180522
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180522
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180522
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180522
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180522
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180522
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1488ccb9b64e76aab0843bc035ce3b1938df2517
baseline version:
 qemuu                044f8cf70a2fdf3b9e4c4d849c66e7855d2c446a

Last test of basis   180522  2023-05-03 23:41:02 Z    1 days
Testing same since   180530  2023-05-04 13:29:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Eric Blake <eblake@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Hajnoczi <stefanha@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   044f8cf70a..1488ccb9b6  1488ccb9b64e76aab0843bc035ce3b1938df2517 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri May 05 03:10:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 03:10:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530205.825653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pulpn-0000UW-45; Fri, 05 May 2023 03:10:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530205.825653; Fri, 05 May 2023 03:10:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pulpn-0000UP-0C; Fri, 05 May 2023 03:10:15 +0000
Received: by outflank-mailman (input) for mailman id 530205;
 Fri, 05 May 2023 03:10:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qp9j=A2=antgroup.com=houwenlong.hwl@srs-se1.protection.inumbo.net>)
 id 1pulpl-0000Sk-SN
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 03:10:13 +0000
Received: from out0-207.mail.aliyun.com (out0-207.mail.aliyun.com
 [140.205.0.207]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 55608d96-eaf2-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 05:10:07 +0200 (CEST)
Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com
 fp:SMTPD_---.SYe4zj5_1683256196) by smtp.aliyun-inc.com;
 Fri, 05 May 2023 11:09:57 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55608d96-eaf2-11ed-8611-37d641c3527e
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R461e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047188;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=34;SR=0;TI=SMTPD_---.SYe4zj5_1683256196;
Date: Fri, 05 May 2023 11:09:56 +0800
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: Juergen Gross <jgross@suse.com>
Cc:  <linux-kernel@vger.kernel.org>,
  "Thomas Garnier" <thgarnie@chromium.org>,
  "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Kees Cook" <keescook@chromium.org>,
  "Brian Gerst" <brgerst@gmail.com>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
   <x86@kernel.org>,
  "H. Peter Anvin" <hpa@zytor.com>,
  "Andy Lutomirski" <luto@kernel.org>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Darren Hart" <dvhart@infradead.org>,
  "Andy Shevchenko" <andy@infradead.org>,
  "Nathan Chancellor" <nathan@kernel.org>,
  "Nick Desaulniers" <ndesaulniers@google.com>,
  "Tom Rix" <trix@redhat.com>,
  "Peter Zijlstra" <peterz@infradead.org>,
  "=?UTF-8?B?TWlrZSBSYXBvcG9ydCAoSUJNKQ==?=" <rppt@kernel.org>,
  "Ashok Raj" <ashok.raj@intel.com>,
  "Rick Edgecombe" <rick.p.edgecombe@intel.com>,
  "Catalin Marinas" <catalin.marinas@arm.com>,
  "Guo Ren" <guoren@kernel.org>,
  "Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
  "Jason A. Donenfeld" <Jason@zx2c4.com>,
  "Pawan Gupta" <pawan.kumar.gupta@linux.intel.com>,
  "Kim Phillips" <kim.phillips@amd.com>,
  "David Woodhouse" <dwmw@amazon.co.uk>,
  "Josh Poimboeuf" <jpoimboe@kernel.org>,
   <xen-devel@lists.xenproject.org>,
   <platform-driver-x86@vger.kernel.org>,
   <llvm@lists.linux.dev>
Subject: Re: [PATCH RFC 16/43] x86-64: Use per-cpu stack canary if supported
 by compiler
Message-ID: <20230505030956.GA103506@k08j02272.eu95sqa>
References: <cover.1682673542.git.houwenlong.hwl@antgroup.com>
 <7cee0c83225ffd8cf8fd0065bea9348f6db3b12a.1682673543.git.houwenlong.hwl@antgroup.com>
 <edd4b974-08de-0769-0dba-f945ed06f222@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <edd4b974-08de-0769-0dba-f945ed06f222@suse.com>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Thu, May 04, 2023 at 12:31:59PM +0200, Juergen Gross wrote:
> On 28.04.23 11:50, Hou Wenlong wrote:
> >From: Brian Gerst <brgerst@gmail.com>
> >
> >If the compiler supports it, use a standard per-cpu variable for the
> >stack protector instead of the old fixed location.  Keep the fixed
> >location code for compatibility with older compilers.
> >
> >[Hou Wenlong: Disable it on Clang, adapt new code change and adapt
> >missing GS set up path in pvh_start_xen()]
> >
> >Signed-off-by: Brian Gerst <brgerst@gmail.com>
> >Co-developed-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
> >Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
> >Cc: Thomas Garnier <thgarnie@chromium.org>
> >Cc: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> >Cc: Kees Cook <keescook@chromium.org>
> >---
> >  arch/x86/Kconfig                      | 12 ++++++++++++
> >  arch/x86/Makefile                     | 21 ++++++++++++++-------
> >  arch/x86/entry/entry_64.S             |  6 +++++-
> >  arch/x86/include/asm/processor.h      | 17 ++++++++++++-----
> >  arch/x86/include/asm/stackprotector.h | 16 +++++++---------
> >  arch/x86/kernel/asm-offsets_64.c      |  2 +-
> >  arch/x86/kernel/cpu/common.c          | 15 +++++++--------
> >  arch/x86/kernel/head_64.S             | 16 ++++++++++------
> >  arch/x86/kernel/vmlinux.lds.S         |  4 +++-
> >  arch/x86/platform/pvh/head.S          |  8 ++++++++
> >  arch/x86/xen/xen-head.S               | 14 +++++++++-----
> >  11 files changed, 88 insertions(+), 43 deletions(-)
> >
> 
> ...
> 
> >diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
> >index 643d02900fbb..09eaf59e8066 100644
> >--- a/arch/x86/xen/xen-head.S
> >+++ b/arch/x86/xen/xen-head.S
> >@@ -51,15 +51,19 @@ SYM_CODE_START(startup_xen)
> >  	leaq	(__end_init_task - PTREGS_SIZE)(%rip), %rsp
> >-	/* Set up %gs.
> >-	 *
> >-	 * The base of %gs always points to fixed_percpu_data.  If the
> >-	 * stack protector canary is enabled, it is located at %gs:40.
> >+	/*
> >+	 * Set up GS base.
> >  	 * Note that, on SMP, the boot cpu uses init data section until
> >  	 * the per cpu areas are set up.
> >  	 */
> >  	movl	$MSR_GS_BASE,%ecx
> >-	movq	$INIT_PER_CPU_VAR(fixed_percpu_data),%rax
> >+#if defined(CONFIG_STACKPROTECTOR_FIXED)
> >+	leaq	INIT_PER_CPU_VAR(fixed_percpu_data)(%rip), %rdx
> >+#elif defined(CONFIG_SMP)
> >+	movabs	$__per_cpu_load, %rdx
> 
> Shouldn't the above two targets be %rax?
>
Ah yes, my mistake. I didn't test it on a Xen guest, sorry.
I'll test on a Xen guest before the next submission.

Thanks.

> >+#else
> >+	xorl	%eax, %eax
> >+#endif
> >  	cdq
> >  	wrmsr
> 
> 
> Juergen

> pub  2048R/28BF132F 2014-06-02 Juergen Gross <jg@pfupf.net>
> uid                            Juergen Gross <jgross@suse.com>
> uid                            Juergen Gross <jgross@novell.com>
> uid                            Juergen Gross <jgross@suse.de>
> sub  2048R/16375B53 2014-06-02

From xen-devel-bounces@lists.xenproject.org Fri May 05 04:35:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 04:35:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530209.825663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pun9b-0000Hf-6z; Fri, 05 May 2023 04:34:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530209.825663; Fri, 05 May 2023 04:34:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pun9b-0000HP-30; Fri, 05 May 2023 04:34:47 +0000
Received: by outflank-mailman (input) for mailman id 530209;
 Fri, 05 May 2023 04:34:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pun9a-0000HF-H0; Fri, 05 May 2023 04:34:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pun9a-0006mC-8F; Fri, 05 May 2023 04:34:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pun9Z-0002Pn-KN; Fri, 05 May 2023 04:34:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pun9Z-0007NU-Iq; Fri, 05 May 2023 04:34:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZKLpdVI7fAo9ICFG/vY0Y52V2+cNUw2Ao5juOwN75GM=; b=AcGIgpI4RPVoCwSSDkTYvrxir3
	NUGANY8eSCivn5l24vMot/FQW2Qzcm84Z1+hubGYxYcC8m9fWHDv0gxYP/VDfBvsuSRASVg7uzGNf
	3DJP17zoeTUF13YiWzAfZjZUYByo+WRUMOOfyjYfYH+d/P5DIp3j3rmf05ujtV2XH6uo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180531-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180531: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1a5304fecee523060f26e2778d9d8e33c0562df3
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 04:34:45 +0000

flight 180531 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180531/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build   fail in 180525 REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180525

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 180525 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 180525 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 180525 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 180525 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 180525 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 180525 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 180525 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 180525 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 180525 n/a
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                1a5304fecee523060f26e2778d9d8e33c0562df3
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   18 days
Failing since        180281  2023-04-17 06:24:36 Z   17 days   31 attempts
Testing same since   180525  2023-05-04 05:05:04 Z    0 days    2 attempts

------------------------------------------------------------
2224 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 272978 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 05 05:59:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 05:59:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530219.825673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puoTR-0000X2-9f; Fri, 05 May 2023 05:59:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530219.825673; Fri, 05 May 2023 05:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puoTR-0000Wv-6J; Fri, 05 May 2023 05:59:21 +0000
Received: by outflank-mailman (input) for mailman id 530219;
 Fri, 05 May 2023 05:59:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U57p=A2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1puoTP-0000Wp-Pi
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 05:59:19 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f8c3eedf-eb09-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 07:59:18 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7C4F221E44;
 Fri,  5 May 2023 05:59:17 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5799913513;
 Fri,  5 May 2023 05:59:17 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id OxTyEzWbVGTedQAAMHmgww
 (envelope-from <jgross@suse.com>); Fri, 05 May 2023 05:59:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8c3eedf-eb09-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683266357; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=B4hrcc2cZgLfaRZrHv5ILnE7WRWszh0eF+pWeBIt/DI=;
	b=bUIAvbfofH4x8PaIi+zNc3HF2sePEX7IJeN86HmLXJndFdaITErVc1EQ+KYzMe9OKiZnB9
	RO6824ZfsTloOogdHZfT1dcxvHKqwmUMpOnd8vaJ3C+J2tsi24Muf61hp9ItyTlLMY+1HQ
	cU+SgqUDuWT4dXfkrJmukC28WHQZfK8=
Message-ID: <30246788-c90e-e338-de4b-e7bb2e440f4e@suse.com>
Date: Fri, 5 May 2023 07:59:16 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v1] xen/sched/null: avoid crash after failed domU creation
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
References: <20230501203046.168856-1-stewart.hildebrand@amd.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230501203046.168856-1-stewart.hildebrand@amd.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------1r2vBuGWKZJSNerkRJ9qgZg8"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------1r2vBuGWKZJSNerkRJ9qgZg8
Content-Type: multipart/mixed; boundary="------------03jFee1iNTSgY4Evcw7jPZLX";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
Message-ID: <30246788-c90e-e338-de4b-e7bb2e440f4e@suse.com>
Subject: Re: [PATCH v1] xen/sched/null: avoid crash after failed domU creation
References: <20230501203046.168856-1-stewart.hildebrand@amd.com>
In-Reply-To: <20230501203046.168856-1-stewart.hildebrand@amd.com>

--------------03jFee1iNTSgY4Evcw7jPZLX
Content-Type: multipart/mixed; boundary="------------08zeABpPTNQOu4WKvLXjlc0o"

--------------08zeABpPTNQOu4WKvLXjlc0o
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDEuMDUuMjMgMjI6MzAsIFN0ZXdhcnQgSGlsZGVicmFuZCB3cm90ZToNCj4gV2hlbiBj
cmVhdGluZyBhIGRvbVUsIGJ1dCB0aGUgY3JlYXRpb24gZmFpbHMsIHRoZXJlIGlzIGEgY29y
bmVyIGNhc2UgdGhhdCBtYXkNCj4gbGVhZCB0byBhIGNyYXNoIGluIHRoZSBudWxsIHNjaGVk
dWxlciB3aGVuIHJ1bm5pbmcgYSBkZWJ1ZyBidWlsZCBvZiBYZW4uDQo+IA0KPiAoWEVOKSAq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQo+IChYRU4pIFBhbmlj
IG9uIENQVSAwOg0KPiAoWEVOKSBBc3NlcnRpb24gJ25wYy0+dW5pdCA9PSB1bml0JyBmYWls
ZWQgYXQgY29tbW9uL3NjaGVkL251bGwuYzozNzkNCj4gKFhFTikgKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKg0KPiANCj4gVGhlIGV2ZW50cyBsZWFkaW5nIHRv
IHRoZSBjcmFzaCBhcmU6DQo+IA0KPiAqIG51bGxfdW5pdF9pbnNlcnQoKSB3YXMgaW52b2tl
ZCB3aXRoIHRoZSB1bml0IG9mZmxpbmUuIFNpbmNlIHRoZSB1bml0IHdhcw0KPiAgICBvZmZs
aW5lLCB1bml0X2Fzc2lnbigpIHdhcyBub3QgY2FsbGVkLCBhbmQgbnVsbF91bml0X2luc2Vy
dCgpIHJldHVybmVkLg0KPiAqIExhdGVyIGR1cmluZyBkb21haW4gY3JlYXRpb24sIHRoZSB1
bml0IHdhcyBvbmxpbmVkDQo+ICogRXZlbnR1YWxseSwgZG9tYWluIGNyZWF0aW9uIGZhaWxl
ZCBkdWUgdG8gYmFkIGNvbmZpZ3VyYXRpb24NCj4gKiBudWxsX3VuaXRfcmVtb3ZlKCkgd2Fz
IGludm9rZWQgd2l0aCB0aGUgdW5pdCBzdGlsbCBvbmxpbmUuIFNpbmNlIHRoZSB1bml0IHdh
cw0KPiAgICBvbmxpbmUsIGl0IGNhbGxlZCB1bml0X2RlYXNzaWduKCkgYW5kIHRyaWdnZXJl
ZCBhbiBBU1NFUlQuDQo+IA0KPiBUbyBmaXggdGhpcywgb25seSBjYWxsIHVuaXRfZGVhc3Np
Z24oKSB3aGVuIG5wYy0+dW5pdCBpcyBub24tTlVMTCBpbg0KPiBudWxsX3VuaXRfcmVtb3Zl
Lg0KPiANCj4gU2lnbmVkLW9mZi1ieTogU3Rld2FydCBIaWxkZWJyYW5kIDxzdGV3YXJ0Lmhp
bGRlYnJhbmRAYW1kLmNvbT4NCg0KUmV2aWV3ZWQtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9z
c0BzdXNlLmNvbT4NCg0KDQpKdWVyZ2VuDQoNCg==
--------------08zeABpPTNQOu4WKvLXjlc0o
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------08zeABpPTNQOu4WKvLXjlc0o--

--------------03jFee1iNTSgY4Evcw7jPZLX--

--------------1r2vBuGWKZJSNerkRJ9qgZg8
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRUmzQFAwAAAAAACgkQsN6d1ii/Ey/l
gAf+IbHz8hfyLQmPAE2DThyZ1ZaNXSc0mEoTGBoYsRoLm9PIpUSNAysxfqtBkOJq7YuUzReXsUgo
QdV3XGKI/i/8m0GnxWMfRykDDtUFX+7vHH6gw1fAptLKz38ASEqezxrqy6kNx9UZxFVZ+TGKoD5Z
HaxKw8OudFhg3Fd7NLDpfI2k4Vmj7z1eZlC/E7ZuNl2LYFkHVYNt/0su4mLMkvN/OSGev9KorGdN
ezUYgpjGPbRofc4kSgfke9eg+DYzJzhRgrG9H3N0L7a9I9aiJGaDiSATIxXMGInsunIIMIqZj7T0
go6aqcv/n72TizkoCCsEjznwhxYSvrIh5721pLO73g==
=7h+W
-----END PGP SIGNATURE-----

--------------1r2vBuGWKZJSNerkRJ9qgZg8--


From xen-devel-bounces@lists.xenproject.org Fri May 05 06:07:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 06:07:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530222.825683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puobN-00025d-2z; Fri, 05 May 2023 06:07:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530222.825683; Fri, 05 May 2023 06:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puobM-00025W-W8; Fri, 05 May 2023 06:07:32 +0000
Received: by outflank-mailman (input) for mailman id 530222;
 Fri, 05 May 2023 06:07:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puobL-00025Q-Lp
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 06:07:31 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0629.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1e464c6e-eb0b-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 08:07:30 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AM9PR04MB8145.eurprd04.prod.outlook.com (2603:10a6:20b:3e1::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May
 2023 06:07:27 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%7]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 06:07:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e464c6e-eb0b-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nlZIW4KjZY1+igXQEsb+pJOXns/C4h8Z03/Mdj5349P2YPB7qSZRWqSwW7yFRqpGkZ6uf+qJ9dPSoGuLmdAbC3ZHxc+eoJYFGi1Usw4tOVOXFat3U7G06/5bJ6CAiSc/Usm8PGuZGo8sqN0nCplhdp/097/D4of8asd6JfyQxcy1KJNVkCmdG54saxEdPWQAQrTYpYpnjtt/6zcpSDiNM8f2LEv3EenTJ+E12TxyoZ20c0u2Xo1pLrX1mnZA+xm/bQ+9DCWoCPVBbpoAs+ilVabYGpYsW/Pqjrudcoy2q9J75QtpGdSyNaC5wjtAC5VzrdF+62f0HPammpgGmpm9aw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tyJsGgOmH1j6Xmps5DvUIVknIqyTkutnVPVaG5fiMeI=;
 b=KNpLwbWStd1hv5xFLDrppYxk89biHjKuq2VfmHpsd+nopjGh/CH96iofcsRo6m+Ep0U/0fECOxUFr19gsNiwwCKF4YpRYF0dvQCyIYbi4mdj2+KwgSpAuxEuJaod4+nUIT9lj9ia+iUylAREXNEqxFVFwjHTKt1+Smt+A322M/5t9WiCooJSqq6ERuta3pKlq65cJBXRQkiIcQ/bhSaKQVBAj+tlDCJX77tGk1GlmaGgJ28YMzlWCJiylBE5rDBItClhC2IlbNzzrlc8g4y6zJfsavpq3XPC93xznRVe//mQeXhqXzKit2VDXYRlQ2Jol3AYyHJOr+uBFc0xj2MEqg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tyJsGgOmH1j6Xmps5DvUIVknIqyTkutnVPVaG5fiMeI=;
 b=A61dMT9lZX/HF1UlWvEepacjOGioHrroKVR6ClPHaDK3VE0/VZ7hGeRW8qgVdkvM1znTi5l9eEeZro2eeOec+PpNAdB7QCo2smFtUrOi8l8DrGk5IcUUc6d80Wi+Zj6HJYJ1IFhMK+iDQ4BiKXM1FK8qYCdja0lF295xlEH2FfGzzLj8rLCnpoHdxZKe4pRYER5JOTjkw856dS/40SVmVNheWcNANKAozQo1lYoupGohNMfZ6YY5WcPo4iYjXImWhdh9jZuMXG1aOrmaEuYubRmDfRYyHUqaRlcXwV2hrLyQe9ts1d0eV0Ejy6LT60/wgrQKwGGJeSgShSlYwgVWbQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <09655daa-97dc-75bf-4590-5c59af7ab84a@suse.com>
Date: Fri, 5 May 2023 08:07:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] build: shorten macro references
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <95467f2c-09ed-d34f-feef-5cd55c60f628@suse.com>
 <ca02ea8b-d196-8d2c-bd63-b5ab5f379bc7@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ca02ea8b-d196-8d2c-bd63-b5ab5f379bc7@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: VI1PR08CA0221.eurprd08.prod.outlook.com
 (2603:10a6:802:15::30) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AM9PR04MB8145:EE_
X-MS-Office365-Filtering-Correlation-Id: 956105b1-fd0e-4a43-154a-08db4d2f009d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Up2KQWLozvOpPlZToLxGKCLTDymBlIQUwACLAZW0prOS9gpAk3nVe4J2vCBu4VUyZXS2bmesqkGt5bE+uGYdLRJqsElCVul+RVX1VVFbtpmbqKzfp6OSpQJgmOrTSNdiBzs5ohqjfhnuLjRXQTrq+7d45KjpvfgT1p3v0FDceyabLYTKVnzb4Pyzb24wPKDlGnje/vF8P7QYEXbcGGmZA0SuVF1ZZfeaeyBx8Rc4Fwjk7VhACTc6BATDGoT3LRGN3YxOF3KRgWBHeSJyhVgLm2b+b425DYC1uIh4QWSDKmPPJ8J+wwXxiZKrWMfxTonfxRxNYuNem2qLY60XSJ3vhQZrQXefjr7WvRe7FlNvTKtiqyjELah7sNpbHnWcTVRyXzNZZ+1JqI7l9EbJRQnRiN17lSV5mk4+X+iqihBWsZ8BMbsf2YkGx56+pD8Meoc9dJgD5N3gz9XfuihaUz7u0hW5pIsCoBwOEjo65dwv4HuNvV8VcKvfa1Fuif8D/4hNm0HZKqys5Nr4+QNLBdDTvubVQxIFcW60HTLblEGDpiDHLPTs5Scb6UsyO8PRcqjqyjskLxva+ZlDoU6zWhxmOYf1Mr1Fs7qBB/cqUPtWMaLHb4gtHas6jYZKg9wuJ1FQmrn0aXktULqkykTVw7dvSQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR04MB6551.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(39860400002)(136003)(366004)(346002)(376002)(451199021)(31686004)(2616005)(6916009)(4326008)(41300700001)(38100700002)(8936002)(316002)(6506007)(53546011)(478600001)(66476007)(66556008)(66946007)(8676002)(31696002)(6512007)(26005)(6486002)(7416002)(86362001)(5660300002)(186003)(54906003)(2906002)(36756003)(6666004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?c3crSXVJdHUrVDN5cFlzbWhCSGUzNVkwZGRIT0pFSjFWSWUzalYyU2lscUhB?=
 =?utf-8?B?NTV6VFU1Q1puTDh3MldZbUk4RGlXU3lnaHZpa2ZOUnoyMlNDT2ZNSVFJMEdX?=
 =?utf-8?B?bHFpZ1Q0MkE1bmgvR2FmTmo1RlFOc1orTkRLS0tOamliYzZvSTBKQm93Z1Zi?=
 =?utf-8?B?NGsxNzBmd2hyTkluL2NOcjZWTUlLRjdNbVVhNHd3a3p3eWRmRnhIL2tkbi9i?=
 =?utf-8?B?K01jdVBpZEZNVEVjMnBwSGhkd2RQQjd6Tmg0WUhLUzkwUFJmUDgwTVZ5RmpC?=
 =?utf-8?B?bU5WQkRRU0RGMVJsWFNoQ1ovQVlFVkV0ZUhxSG4yWlJWckVwUWZQbU1uQWgw?=
 =?utf-8?B?WGhQajhuVEY0NkNaY1JHNkRrUHBJd1l3blVPS3htV05NaWxVQmtpZWVPYlBz?=
 =?utf-8?B?ckorMWMxOGs5cElrR2NmK2tEemJQZmpHSHNUQjBrVkRrWkdoSnBsdWFmVjZQ?=
 =?utf-8?B?MWhuWmdQK0FUdi9VSWNnQjYxVDhFTGVzWVhMeW1lQmwwUitPTncrWEVtVjJ0?=
 =?utf-8?B?M09palBqUE9haWp3b0lNekRUdTVZbnVad2ZTTTJVbExpcmljcjJrWWxzUHRU?=
 =?utf-8?B?b2R0YXhlQm9BeXkreDBiRjJ3bWwrcFA2cWhZSXdrbi9GOFdRSWYvYUNDY2Ro?=
 =?utf-8?B?L2p6NXlNSHlDN29MK2IyRkt1ZVFDNm4rWURzdEVwMzlyeklJeU5BU2RFWjcw?=
 =?utf-8?B?SmtIR2xIb2toY2lldWN2MUxWTlNVYm55MnBqVVJybkpSYWN2UWdBdUgxUldK?=
 =?utf-8?B?N2E5SmpYQ1hJTlVJeUU3eEswOTNPaFZFMWNRUVA1U2trb05jaENiNjBCV05F?=
 =?utf-8?B?MUZNNjZSYXNYK1dIbEtWMkQ1ZkFYcnBqRXBxdHpKYUVQb2JQQjB1MGRLL1VS?=
 =?utf-8?B?TWxOK1FUVXlnRVZ0SGtJakhzTXJmeTJkMlZtdE9PVkYzQm5kOENGdExhMUoy?=
 =?utf-8?B?M1pNQ3g3T3VQaW43L0p1YjNIekYwekorQlkwM3V6Q0J0c0lTMWhkcE5IOTFy?=
 =?utf-8?B?bHd0Y2NQSmo0S3BtYVJDOUo0MGJNT0p5UlllNUUvbkNtVmJvc0NzQmNucEE0?=
 =?utf-8?B?cGg5cUdpcDFUb2xHd2d3U3g2UjVlaU5naGRZaG13SytaRVQ1UDNrM1NPbFRr?=
 =?utf-8?B?NjN3d1Q3MVRYdkhvYlNXd2EvdGZvbC9Kc3VnUUVyNk55Wkc5MlE2Nk1FNmVt?=
 =?utf-8?B?ZTBxakRJQTZFd1dSZGE1a0xETE9tdnMrUlVTbUgzSllyY3g4WU5TM2ZNc0Uw?=
 =?utf-8?B?Rk5oQTA4TmpQbTd0bHp5cElaRnFxdDU1YWtUZmR2aXN5UHVyVHpVdmdkd0Qv?=
 =?utf-8?B?TW5ublFJSVdZdTBYaDN0b0I2dm9rYkRlMks4MmJwK2l1NDBiUVZaSk9wUDZk?=
 =?utf-8?B?QlhCdTE4UzM3VHZNUm9QMGlJL3pNS1haWVlBcFBJQURqaGp6VWpCcXBDdWhJ?=
 =?utf-8?B?K0FTejRETWxKaUVNWUlUYXFBMERRQk5IbUlYR3BUenAxQkduK3g0cUhUTVZ0?=
 =?utf-8?B?ODlya2xyeG9hTnpzdTNUTXFCT2N0dUw5Rm5zOW1iVUxHbFFQYVhQTTdIc0Rx?=
 =?utf-8?B?RWhCZHM5bVdsODVSSmFvQXZQeTNLNW5QeDkraEhzUDJ1RTVJc2xoWW1vQi9H?=
 =?utf-8?B?UW05amV3RjQvU2lOK2p3NThIdzhZSlV4S3F0bGlXbDhzVHdGZVA1RFFTWTdI?=
 =?utf-8?B?T1J2aGJWRTA2N2JsN1BmVDM4YW5mUXgyZ0FwUlpQUVRqMTA2bGVaTUFQYmRh?=
 =?utf-8?B?QUpaMkV6Skl1T2c1MHAvWmJmQWZRLzVVaUd5S1pSay9CVTNNL0dKRkNNWjd5?=
 =?utf-8?B?NFcwSmVxRjZBR2kxbXZoL1NyOHMyRzFkQ0NjYzdJcmo0ZjFSdC9VdDJLQXhU?=
 =?utf-8?B?YWxwTVVkQkZ2bWNqQ3RBaHFxVXdDYkFrL21Na0E5OTlrMzROdHdpKzZvTzdv?=
 =?utf-8?B?elU0NW1XbmpzWTFYYkYySXJCYmFoamxaeEk1UHpJazk5TXhYODNIeXp6c1ow?=
 =?utf-8?B?V3Y2RGNZSjQ3dVNWTTc1TzFtdlpwL0hXQktEbkRkQXVIRnQ3c0VOSStQcUFG?=
 =?utf-8?B?TUZHMFl1dXJvN2IvWXh0U3BZSmd3dURPMWpGVXFHeWYwTlRuTCswS2pCV1lz?=
 =?utf-8?Q?QvuJn5B/4vKQpVnP2VD2VR+rQ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 956105b1-fd0e-4a43-154a-08db4d2f009d
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 06:07:27.5177
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: n37W+M8SQhaEEczschCbxg/QHXXCScO20OHlbwkBBqAvDoj8oIAZSq6qsUK/gAyWYX9dSe0q5wBWjhGs/Ra9AA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8145

On 04.05.2023 19:24, Andrew Cooper wrote:
> On 04/05/2023 5:16 pm, Jan Beulich wrote:
>> Presumably by copy-and-paste we've accumulated a number of instances of
>> $(@D)/$(@F), which is really nothing other than $@. The split form is
>> only needed when we want to e.g. insert a leading . at the beginning of
>> the file name portion of the full name.
> 
> From the Kbuild work, we have $(dot-target) now which is marginally more
> legible than $(@D)/.$(@F), and we ought to use it consistently.

Oh, right - let me make yet another patch on top.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although ...

Thanks.

>> --- a/xen/arch/arm/Makefile
>> +++ b/xen/arch/arm/Makefile
>> @@ -104,9 +104,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
>>  	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
>>  	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
>>  	    $(@D)/.$(@F).1.o -o $@
>> -	$(NM) -pa --format=sysv $(@D)/$(@F) \
>> +	$(NM) -pa --format=sysv $@ \
>>  		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
>> -		>$(@D)/$(@F).map
>> +		>$@.map
> 
> ... any chance we can get a space between > and $ like we do almost
> everywhere else?

Since "almost everywhere else" wasn't quite right for xen/arch/*/Makefile,
I was at first inclined to say no, but I can easily insert blanks when doing
the $(dot-target) conversion as well, and then it looks like it'll end
up fully consistent (in this one regard).
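For anyone following along, the equivalence under discussion can be sketched
as below. This is an illustrative rule only, not the actual
xen/arch/arm/Makefile recipe:

```make
# Illustrative only. For a target like dir/file:
#   $(@D) expands to "dir" and $(@F) to "file",
#   so $(@D)/$(@F) merely reassembles $@ and is redundant.
#   $(@D)/.$(@F) yields "dir/.file", the dot-prefixed form that the
#   Kbuild-derived $(dot-target) helper denotes.
%.map: %
	$(NM) -pa --format=sysv $< > $@
```

(For a target with no directory component, $(@D) expands to ".", so
$(@D)/$(@F) gives "./file" rather than "file" -- the same file, but not the
same string.)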

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 05 06:19:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 06:19:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530225.825693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puon9-0003bF-4y; Fri, 05 May 2023 06:19:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530225.825693; Fri, 05 May 2023 06:19:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puon9-0003b8-27; Fri, 05 May 2023 06:19:43 +0000
Received: by outflank-mailman (input) for mailman id 530225;
 Fri, 05 May 2023 06:19:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=psjd=A2=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1puon8-0003b2-2p
 for xen-devel@lists.xen.org; Fri, 05 May 2023 06:19:42 +0000
Received: from mail-pf1-x434.google.com (mail-pf1-x434.google.com
 [2607:f8b0:4864:20::434])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cfddd581-eb0c-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 08:19:39 +0200 (CEST)
Received: by mail-pf1-x434.google.com with SMTP id
 d2e1a72fcca58-64115eef620so17409889b3a.1
 for <xen-devel@lists.xen.org>; Thu, 04 May 2023 23:19:38 -0700 (PDT)
Received: from localhost ([122.172.85.8]) by smtp.gmail.com with ESMTPSA id
 x3-20020a17090a530300b0024bb36e6f56sm4246058pjh.16.2023.05.04.23.19.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 04 May 2023 23:19:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfddd581-eb0c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1683267577; x=1685859577;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=IhIxM6lGctKPZ0F4zj+ieY/NuAoHjD/llTiAiBzpgD4=;
        b=ENw0Ibe1GKhNMaBt5lpUTsl73D2uA6Mt5GHElRwdD7t1vn5sOz63TFt6LuUvTVvSRr
         RNPKxN0sSsgXZyZlZ5Af6I5+9YQlZFMVtDBa8y2RlY+m9oY9D2CB6iEGYnyxJue71Ebe
         jhIaDXePlaJhp007drvlVLIg7YroOxTsZfRdX9beoSZ6q8NBkbK+Ttvw7Hb9zkEG532G
         JU96QRickeXpswwKUAfugqXyoB9F18S6bbROrHBR+MAqUvhKLVm2p3x6p9qS6d2ACpCF
         eIUaieQFbcBIeuDl1C1ZvfI43YuMnCI3sP5W8a3zexCzN0XDCZcDZdfPalTnWWjXtVl2
         Ac+w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683267577; x=1685859577;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=IhIxM6lGctKPZ0F4zj+ieY/NuAoHjD/llTiAiBzpgD4=;
        b=hmsWK94zNCydFEQ0U94gPTbClxX0OJ/SQ3QySpsA7XYPypjoTx9sftPiH7Equvigrd
         xIUflsWmUblU3FUSxU0frbisy7wX86lhO4GY7YtRvge8+TmBOEVRRzgd9ZvejN3i7DOc
         Cdwk5Esy4IsYiHEgWoFzftwy/6K7FTIBka8dvnw3iiOkbHQFPqQYcUhOd1PdnDpMfVKF
         eihxeCVylUIDeB7jnobpGdWKzBrlmp1g0uCWs6wxbJo3euByLx5v/Q2FSbvyU4QQqcRy
         /MW6CuSgexDrs+CirKblLjRYnklzupWQFyx+f4yXReaKhWl/Fe774wXyq29AetAnB0us
         2l8w==
X-Gm-Message-State: AC+VfDxt2li63SwIujH91ZxFNyxy9ypLLbvmZ53Wct0+ZMls+HI3yHqH
	3eJy4gSVauc/aX3cHT0v1f8kgQ==
X-Google-Smtp-Source: ACHHUZ7cdJ6d3QPoYAfskqtSPYVFeVoAqdh36Km+QYP02oUcudO5EtXRgvhgZ1XZ4ZBkMDLqNrapbA==
X-Received: by 2002:a17:90a:9f87:b0:24e:201e:dcbd with SMTP id o7-20020a17090a9f8700b0024e201edcbdmr692274pjp.21.1683267577176;
        Thu, 04 May 2023 23:19:37 -0700 (PDT)
Date: Fri, 5 May 2023 11:49:34 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>, xen-devel@lists.xen.org,
	stratos-dev@op-lists.linaro.org, Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Erik Schilling <erik.schilling@linaro.org>
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
Message-ID: <20230505061934.jm3bwjgsl5hf5rns@vireshk-i7>
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
 <25fb2b71-b663-b712-01cd-5c75aa4ccf9b@gmail.com>
 <20230404234228.vghxrrj6auy7zw4c@vireshk-i7>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230404234228.vghxrrj6auy7zw4c@vireshk-i7>

On 05-04-23, 05:12, Viresh Kumar wrote:
> On 04-04-23, 21:16, Oleksandr Tyshchenko wrote:
> > ok, probably makes sense
> 
> While testing both foreign and grant mappings I stumbled upon another
> related problem. How do I control the creation of the iommu node from
> the guest configuration file, irrespective of the domain the backend is
> running in? This is what we have right now:
> 
> - always create iommu nodes if backend-dom != 0
> - always create iommu nodes if forced_grant == 1
> 
> what I need to cover is
> - don't create iommu nodes irrespective of the domain
> 
> This is required if you want to test both foreign and grant memory
> allocations with different guest kernels, i.e. one guest kernel for a
> device with grant mappings and another guest for a device with foreign
> mappings. There is no way, that I know of, to disable the creation of
> iommu nodes. Of course we would want to use the same images for the
> kernel and other stuff, so this needs to be controlled from the guest
> configuration file.

Any input on this, please?

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Fri May 05 06:20:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 06:20:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530228.825703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puoo8-0004xP-FL; Fri, 05 May 2023 06:20:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530228.825703; Fri, 05 May 2023 06:20:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puoo8-0004xI-Bx; Fri, 05 May 2023 06:20:44 +0000
Received: by outflank-mailman (input) for mailman id 530228;
 Fri, 05 May 2023 06:20:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puoo6-0004x8-SB
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 06:20:42 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2042.outbound.protection.outlook.com [40.107.7.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f53ffeab-eb0c-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 08:20:40 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by VI1PR04MB9834.eurprd04.prod.outlook.com (2603:10a6:800:1d8::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.31; Fri, 5 May
 2023 06:20:10 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%7]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 06:20:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f53ffeab-eb0c-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iW+9YASDFAiHQ1cOYC1L31SClQjeEEcHNqOoJxh6jz1k3MG++Ebd9T++0AjmQR2UZLIHVY4IAbgXUPg52ZYqv9qTwVg6TRXe6/NOR8+4touxdF8DMPGXhoZMvlJPWB3qiPoByjREkW4RJgAXaRtxHG8LErR182UFgsTg//sjTx2esS3VL3/bMToyxOhi30xXOLWynhXHBUuEs+3RRggWsBJKXZz8iPLQbqJUd+Aj8AVV8Icf78UFQuYrq57137zoOSo7qeUnib7gipOr38We+gHWf3lr+Tkxr82pBLTyiJFrziI+7h1iEGa1WBiaAT3ydwSfWbzqKyvkqWPz3ndgEg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=l2XOraMKft/weyDy7v9639q6Tj2Q7kLZesPrPA1WRv8=;
 b=ex9g4BPHRtCAg5ZuhVbI3+GA8traZN9pMW9OT9Fih9VIQTInrjJVYsBx59Q2/XyaQCFl5pYxr9JaFAKxat3iIpjY2dkVXx6ahPxiqtVq8LO77jsDlLkrlqrvk5nI16Cz+AY/PLv954790KK+jGqo33pbIk87t3Ewo5b8BnajMuEidEOacR/GwUVuVLvd7CNG/P7UZu7QTQgMVR8dR71LXyrdPPWfHK1xsc36A2D3NFv7EHxjIEZTPDUq8n/1VnpxbgGejNBSHGG1I1omUHkD3Q5iIAEzfBA21pbuia7k5Cupm1MkOQFUABCLNpJ4Y3ZmQjgSBZMge62TUaDJRkIaXQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l2XOraMKft/weyDy7v9639q6Tj2Q7kLZesPrPA1WRv8=;
 b=RzBppbgDs9KQXENfkvt191B7awboU34hgpn21a/HUwMHXZw36yi87/NeJwBHweSrF5bR70jT4XiyBDfCBfqqnWeWvM09cqfGyXThIszOaDKu17/QCZzw+EqnM0DMmUJ0trOLWhClIwCOGLgaHYLybS5SFIYp4/ZFwKVlucI3Fq2nb/cJbNlEvBGwuvf+ZJwhyUJg6POGKxjXxWvj+rN2BHOC9dSODicTR4Z0Bi7FM2HURQY4td8cCRjKdw7yvn8gKVRwAvpd7KyX0m1PiE0pKuikBJyQxzYn73wQxi97H9Y0/F1YvUtBqCeVH5wj54uke7vJpfh3T9Yz3mWB6YAcnA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <39f9548b-6a07-3d74-1975-8f17153f2d14@suse.com>
Date: Fri, 5 May 2023 08:20:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v1 3/6] iommu/arm: Introduce iommu_add_dt_pci_device API
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-4-stewart.hildebrand@amd.com>
 <0e7404fe-e249-7b3f-0628-b8b8b1925765@suse.com>
 <6758ef37-bf66-99c3-1fd4-f2edfc12513a@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6758ef37-bf66-99c3-1fd4-f2edfc12513a@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0181.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::17) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|VI1PR04MB9834:EE_
X-MS-Office365-Filtering-Correlation-Id: 815c48f5-8092-4c98-9a5a-08db4d30c746
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	P7khCtcfZuGVtTda07iRio+rgWc+4Non8tGRfMDNUe+cDXWyFQ2i/PBbZwR9M4YWy4A6FxjI5PiiExyY2fbklGVo2TsJ4cOOHTDFBbEYx0AoGJqIKJyBXHdfqcUPYQxOJSqa2ST+xpiAfOVn5yN6JKN5DjA4ZGbMmJ4EjjxMp9XVXsLzqlZ4QBv1Fq/YiPNv0r9wPK67lEgY8XmaomBy5Eb1jrAuELv60mDq4pLYzFrlBtdeVz+2lN84bW8FxP8He/SmrNJYP1QZnJIhYwP175CMFKTOV+wUH68fgMWTJEvuVgMm1O25qhwbgY6Xc5qCXSZPiGGjkyIzKLC8FTyjRqfN3e3KnB4rIlc6cvX3Cxx/unta6G1b5MQ5jGO6egfSmUAKrN6t8csw9OK92uHMOV5vLnB9NGZjaQSv1n6J3dgb7NH8tA4W3uwhX6Kv4bNmMlO8ABmuW7ZoRWCXKVdxLKorG2CxMXwSWjBTUmarBcemBkGX99oBZ7ZnEkeXOCSyl67CpFkNiH0yAtjStTTeleGGkXHlWjnXNNCWtGr10QMfD+2Ih6os34Fea2PDRXnPBkoy0nZQhtOs3nIVbZRhOrbCYBfLB9+tZNWS71Oo55o72alEU/Xc42UxZnFdER+h0Z+krymly1Tu1fJgE+sQ6A==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR04MB6551.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(39850400004)(136003)(396003)(366004)(346002)(451199021)(5660300002)(8676002)(8936002)(966005)(6486002)(38100700002)(86362001)(26005)(83380400001)(31696002)(2616005)(53546011)(186003)(66556008)(66476007)(2906002)(66946007)(316002)(6916009)(6506007)(6512007)(41300700001)(4326008)(36756003)(54906003)(31686004)(478600001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZTZqemxGWk1wcmJod09JZlpKaWhjZXdMUTVZUGRzWktXc1pMRURUZ0paRE1B?=
 =?utf-8?B?V1pHenRmaEtDa3lESG9IamNQazdFdTRxbUNTbFhMbDI5QVN5RVlienVTdG85?=
 =?utf-8?B?b1BrOVQ1SWdlS1ZZNng0anBvbzdNeXlHYlA4bmRJeTZpSzhjaDBGWkdMY2c2?=
 =?utf-8?B?bGR2Q1RaZWwyR05CNlhITEFScENoT3A2aXpjNFFTaDhlNGxmWTVVeHVMckFj?=
 =?utf-8?B?eFZYblo2UXdMUmFJd1o0ejhqdHB3eUJnK2tEOTNDZjk5WjJBM0tjYjJjbE9C?=
 =?utf-8?B?eGdxN29jRnVsL053TUMwYVh5dVIxNWV5cFMxY1hWOE5MQzV0RU5qZUltTTdX?=
 =?utf-8?B?VDYrRUNQNUcvTVZmTDIrcXpyU0ZNdVZtUnhlcjZIMzhJM1FZWnpmSkkwMC9Q?=
 =?utf-8?B?bHhaZUkrU0pzYVA4cmgvTEhEY3VWOUJkdk52R1hmUFRmVkNnN0ZzbmRZbXds?=
 =?utf-8?B?T1crMUNCYlg5UFdzL25CcXdHWmFrTFhXY3BrZTVRY3ovb2dCVktxMmx6T0J5?=
 =?utf-8?B?SWJJdG1idXFod3k2TVhLTzBoVUFEaktla1dCdnVFWW5qQUZYdmdQU3hnV3Qw?=
 =?utf-8?B?c3MrYlpsWVhJZkxucHFTbzN2bWllOGgzZHp2dUF3ZHVXQ0w0UStiWFFJNVZY?=
 =?utf-8?B?TUFYNlpQS2hZWDZsenhkVzJudGxOV3RXZ29qQlkyeWRJSVlsdmYzSSt3Qys1?=
 =?utf-8?B?RnpSUnBLbHdPb0ZtYVNEVFd2L1RYdG5hNzB6UUJQMHNBU3NKL1AyYmNQYWgv?=
 =?utf-8?B?cjREay9ScXFBVTl0OURkdGdDYnErUTVadUJUQWZna2VrNFJuQVM5ZEE3NTYy?=
 =?utf-8?B?anRYQWxLQ002YkpkZVBnc3hiSklZVnNCbHBtYzc0a1huRTFlVmJ2anFaMmFu?=
 =?utf-8?B?cDJoeVNwSUMxa1ZvMDVscWEvaVVFZzh4VDBtTmxXcUpUazlCdGY0OTFIVnhy?=
 =?utf-8?B?UWNnYkNDb1FOU21USGJ3M0Y3UCt3Ri9BM1g1QlE3a09JKzgreSs3NmcrQThm?=
 =?utf-8?B?a0E5UllCajRodDBhK3lOam5SWW1KQjFrMkNzTGpySzZsMHgydWd2L0diQVBM?=
 =?utf-8?B?bGt0Sm45ZEMxWEpRUGJnMVVDbDdyaTNEQnhiamovTnBMT0dETDFzdDRnY1JS?=
 =?utf-8?B?bEV1QjFhVXR5bzV3alRjcEdkN2N4QTd2M0JaMCtKS1IwR01WcUhnTVVENzVt?=
 =?utf-8?B?SUN2QysxOGdROHBDRlJCSW1vbEswT0ZUQjk5ejc1K0ZTWk8xOTBtanJPR0NO?=
 =?utf-8?B?WFVuVEQzN0lIS0g2S1VwcmRpMThTWnYwOWNZc2QyRHlXNTB2Ni9mWDJZWU4x?=
 =?utf-8?B?RElab0s5eTRLNVlsQzZ3Zkt4R2dZb0RyQzdTelhYRjd1M0tZREsyckF0OXhw?=
 =?utf-8?B?dEh1cTR4cTdZZWlIOVhvL2dOajQwZTc0ZEgwM2p2cGZsV1Y0aFI4bVFmNjVq?=
 =?utf-8?B?SzZzQmxWY3pwYlRQeDhnWlJBWllXWUVnRTd6QjU2R2J1cThSZnNEczdUTEky?=
 =?utf-8?B?L2lUanNmMktJcVBmaUQzMldGdmxBbllObDdnY2ZoTnhjVnBKRmI5K3NLNFJt?=
 =?utf-8?B?RDVIS3JRWWRsZmJBdGdNVFZSNDl2T2lXaU0zclpXWUE3Ukh4QldJTkM1dFF2?=
 =?utf-8?B?aVJNc29ZM3VZaXI2cTFDd0pob1VqdU9WTCt4a01XQ3drMU15aGJYS1h3QlBw?=
 =?utf-8?B?SW1LVExDdEJ6UngyVDJPd2dVb1o1NXpkQmdtK0poSU9rRnRBN3M4dGpad2l1?=
 =?utf-8?B?cndFWk1IaUNRREd3T3pwUS90UzlZM1loS1RYZXZ1dFJPVEs5bHdtWWlnT0Fv?=
 =?utf-8?B?UENyc3BRTjJSazlFWG9vak1tOUwvTHlIZHk2Z3N0ekdGV3hmNEtLTElQRDFP?=
 =?utf-8?B?ZGp4Z2txTS9pOUxES2lZSXJBbzNNeEtLakpTSWtBM2Q1ZHc1ZmFhUkFBQWhX?=
 =?utf-8?B?dU0yRTEwTFBObXdlQjZkbmZOa2x1cDRtdGd5Zk5tdzNrS2J0ZkptcjNtVW1v?=
 =?utf-8?B?Z3J5NXc1NEdmQWFLUzRLWTdhdnFsZndhVS9EUHRWRzhJVFRkbjVlVU9QeUgw?=
 =?utf-8?B?TDh6YWRQT0V1MVlpRTZ4REE4ak9Od2Q3ZVpjNXZVbFNJMzVDZngrQVF3dmd1?=
 =?utf-8?Q?ZK6oyaiSvOguY5IuH4PBrYWCG?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 815c48f5-8092-4c98-9a5a-08db4d30c746
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 06:20:10.1132
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kg6ep13FrSSBGV2a/3Hp0Z4yJTlIXPxPDQ6g45eMRBCaJgDcA6+nSGH9OtsbD5M4YTi4RE2rUrUK5li14VMDJg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB9834

On 04.05.2023 23:54, Stewart Hildebrand wrote:
> On 5/2/23 03:44, Jan Beulich wrote:
>> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>>> @@ -228,6 +229,9 @@ int iommu_release_dt_devices(struct domain *d);
>>>   *      (IOMMU is not enabled/present or device is not connected to it).
>>>   */
>>>  int iommu_add_dt_device(struct dt_device_node *np);
>>> +#ifdef CONFIG_HAS_PCI
>>> +int iommu_add_dt_pci_device(uint8_t devfn, struct pci_dev *pdev);
>>> +#endif
>>
>> Why the first parameter? Doesn't the 2nd one describe the device in full?
> 
> It's related to phantom device/function handling, although this series unfortunately does not properly handle phantom devices.
> 
>> If this is about phantom devices, then I'd expect the function to take
>> care of those (like iommu_add_device() does), rather than its caller.
> 
> In the next patch ("[PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of arch hook"), we will invoke iommu_add_dt_pci_device(devfn, pdev) from iommu_add_device().

As I think I already said there, I consider this wrong.

> Since iommu_add_device() iterates over the phantom functions, it would be redundant to also have such a loop inside of iommu_add_dt_pci_device().
> 
> If we are to properly handle phantom devices on ARM, the SMMU drivers (smmu.c/smmu-v3.c) would need some more work. In patches 5/6 and 6/6 in this series, we have:
> 
> if ( devfn != pdev->devfn )
>     return -EOPNOTSUPP;
> 
> Also, the ARM SMMU drivers in Xen currently only support a single AXI stream ID per device, so some development would need to occur in order to support phantom devices.
> 
> Should phantom device support be part of this series, or would it be acceptable to introduce phantom device support on ARM as part of a future series?

I wouldn't view this as a strict requirement, so long as it is made clear in
the respective patch descriptions.

> Lastly, I'd like to check my understanding since phantom devices are new to me. Here's my understanding:
> 
> A phantom device is a device that advertises itself as single function, but actually has multiple phantom functions. These phantom functions will have unique requestor IDs (RID). The RID is essentially the BDF. To use a phantom device with Xen, we specify the pci-phantom command line option, and we identify phantom devices/functions in code by devfn != pdev->devfn.

The command line option is there only to work around errata, i.e. devices
behaving as if they had phantom functions without advertising themselves
as such. See our use of PCI_EXP_DEVCAP_PHANTOM. As you can see, this being
PCIe only, any legacy PCI device behaving this way would require use of the
command line option.

> On ARM, we need to map/translate a BDF to an AXI stream ID in order for the SMMU to identify the device and apply translation. That BDF -> stream ID mapping is defined by the iommu-map/iommu-map-mask property in the device tree [1]. The BDF -> AXI stream ID mapping in DT could allow phantom devices (i.e. devices with phantom functions) to use different stream IDs based on the (phantom) function.
> 
> So, in theory, on ARM, there is a possibility we may have a device that advertises itself as single function, but will issue AXI transactions with multiple different AXI stream IDs due to phantom functions. In this case, we will want each AXI stream ID to be programmed into the SMMU to avoid SMMU faults.

Right, which of course first requires that you know the mapping between
these IDs.

Jan

> Please correct me if I've misunderstood anything.
> 
> [1] https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-iommu.txt
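For reference, a hypothetical fragment of the kind the pci-iommu binding at
[1] describes, mapping RIDs to SMMU stream IDs (all labels and values here
are illustrative, not taken from any real board):

```
pcie@40000000 {
        /* ... */
        /* Map every RID r (mask covers the full 16-bit RID space)
         * to stream ID 0x100 + r on the SMMU. A phantom function's
         * distinct RID would thus yield a distinct stream ID, which
         * is why each one must be programmed into the SMMU. */
        iommu-map = <0x0 &smmu 0x100 0x10000>;
        iommu-map-mask = <0xffff>;
};
```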



From xen-devel-bounces@lists.xenproject.org Fri May 05 06:34:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 06:34:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530233.825713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pup19-0006b4-Ol; Fri, 05 May 2023 06:34:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530233.825713; Fri, 05 May 2023 06:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pup19-0006ax-M5; Fri, 05 May 2023 06:34:11 +0000
Received: by outflank-mailman (input) for mailman id 530233;
 Fri, 05 May 2023 06:34:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pup18-0006ar-5h
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 06:34:10 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2060e.outbound.protection.outlook.com
 [2a01:111:f400:7d00::60e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d65547c9-eb0e-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 08:34:08 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by PA4PR04MB7933.eurprd04.prod.outlook.com (2603:10a6:102:b9::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May
 2023 06:34:05 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%7]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 06:34:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d65547c9-eb0e-11ed-8611-37d641c3527e
Message-ID: <a60ad8ea-95a8-ed15-f862-3872e9fb68ac@suse.com>
Date: Fri, 5 May 2023 08:34:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v4 1/2] acpi: Make TPM version configurable.
Content-Language: en-US
To: Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jason Andryuk <jandryuk@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504175146.208936-1-jennifer.herbert@citrix.com>
 <20230504175146.208936-2-jennifer.herbert@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230504175146.208936-2-jennifer.herbert@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 04.05.2023 19:51, Jennifer Herbert wrote:
> This patch makes the TPM version, for which the ACPI library probes, configurable.
> If acpi_config.tpm_version is set to 1, it indicates that 1.2 (TCPA) should be probed.
> I have also added to hvmloader an option to allow setting this new config, which can
> be triggered by setting the platform/tpm_version xenstore key.
> 
> Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
albeit with two minor further requests (which I'd be happy to make while
committing):

> --- a/tools/firmware/hvmloader/util.c
> +++ b/tools/firmware/hvmloader/util.c
> @@ -920,6 +920,8 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
>  {
>      const char *s;
>      struct acpi_ctxt ctxt;
> +    long long tpm_version = 0;

I don't see the need for an initializer here.

> @@ -967,8 +969,6 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
>      s = xenstore_read("platform/generation-id", "0:0");
>      if ( s )
>      {
> -        char *end;
> -
>          config->vm_gid[0] = strtoll(s, &end, 0);
>          config->vm_gid[1] = 0;
>          if ( end && end[0] == ':' )

While there is a similarly odd pattern here, ...

> @@ -994,13 +994,27 @@ void hvmloader_acpi_build_tables(struct acpi_config *config,
>      if ( !strncmp(xenstore_read("platform/acpi_laptop_slate", "0"), "1", 1)  )
>          config->table_flags |= ACPI_HAS_SSDT_LAPTOP_SLATE;
>  
> -    config->table_flags |= (ACPI_HAS_TCPA | ACPI_HAS_IOAPIC |
> -                            ACPI_HAS_WAET | ACPI_HAS_PMTIMER |
> -                            ACPI_HAS_BUTTONS | ACPI_HAS_VGA |
> -                            ACPI_HAS_8042 | ACPI_HAS_CMOS_RTC);
> +    config->table_flags |= (ACPI_HAS_IOAPIC | ACPI_HAS_WAET |
> +                            ACPI_HAS_PMTIMER | ACPI_HAS_BUTTONS |
> +                            ACPI_HAS_VGA | ACPI_HAS_8042 |
> +                            ACPI_HAS_CMOS_RTC);
>      config->acpi_revision = 4;
>  
> -    config->tis_hdr = (uint16_t *)ACPI_TIS_HDR_ADDRESS;
> +    config->tpm_version = 0;
> +    s = xenstore_read("platform/tpm_version", "1");
> +    tpm_version = strtoll(s, &end, 0);
> +
> +    if ( end && end[0] == '\0' )

... I don't think it should be further propagated. There's no need for
the left-hand part of the condition.
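That is, strtoll() always stores through a non-NULL end-pointer argument, so testing the pointer itself against NULL is redundant; only the character it points at needs checking. A minimal sketch of the tighter pattern (the helper name is hypothetical):

```c
#include <stdlib.h>

/*
 * Hypothetical helper: parse a base-0 integer, rejecting empty input and
 * trailing junk.  Since we pass a non-NULL &end, strtoll() is guaranteed
 * to set it, so only *end is worth examining.
 */
static int parse_ll(const char *s, long long *out)
{
    char *end;
    long long v = strtoll(s, &end, 0);

    if ( end == s || *end != '\0' )
        return -1;

    *out = v;
    return 0;
}
```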

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 05 07:01:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 07:01:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530236.825723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pupRZ-0001mr-SW; Fri, 05 May 2023 07:01:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530236.825723; Fri, 05 May 2023 07:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pupRZ-0001mk-Ph; Fri, 05 May 2023 07:01:29 +0000
Received: by outflank-mailman (input) for mailman id 530236;
 Fri, 05 May 2023 07:01:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pupRY-0001md-I8
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 07:01:28 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20619.outbound.protection.outlook.com
 [2a01:111:f400:7d00::619])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a6c9b49d-eb12-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 09:01:26 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by PAWPR04MB9744.eurprd04.prod.outlook.com (2603:10a6:102:383::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.21; Fri, 5 May
 2023 07:01:23 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%7]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 07:01:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6c9b49d-eb12-11ed-b226-6b7b168915f2
Message-ID: <7921d24d-7d4d-8829-44bf-b8c2ecd001c8@suse.com>
Date: Fri, 5 May 2023 09:01:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP)
 driver
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-5-jandryuk@gmail.com>
 <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com>
 <CAKf6xpuzk6vLjbNAHzEmVpq8sDAO8O-cRFVStQxNqyD5ERr+Yg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xpuzk6vLjbNAHzEmVpq8sDAO8O-cRFVStQxNqyD5ERr+Yg@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 04.05.2023 18:56, Jason Andryuk wrote:
> On Thu, May 4, 2023 at 9:11 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>> --- a/docs/misc/xen-command-line.pandoc
>>> +++ b/docs/misc/xen-command-line.pandoc
>>> @@ -499,7 +499,7 @@ If set, force use of the performance counters for oprofile, rather than detectin
>>>  available support.
>>>
>>>  ### cpufreq
>>> -> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<maxfreq>][,[<minfreq>][,[verbose]]]]} | dom0-kernel`
>>> +> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<hdc>][,[<hwp>]][,[<maxfreq>]][,[<minfreq>]][,[verbose]]]} | dom0-kernel`
>>
>> Considering you use a special internal governor, the 4 governor alternatives are
>> meaningless for hwp. Hence at the command line level recognizing "hwp" as if it
>> was another governor name would seem better to me. This would then also get rid
>> of one of the two special "no-" prefix parsing cases (which I'm not overly
>> happy about).
>>
>> Even if not done that way I'm puzzled by the way you spell out the interaction
>> of "hwp" and "hdc": As you say in the description, "hdc" is meaningful only when
>> "hwp" was specified, so even if not merged with the governors group "hwp" should
>> come first, and "hdc" ought to be rejected if "hwp" wasn't first specified. (The
>> way you've spelled it out it actually looks to be kind of the other way around.)
> 
> I placed them in alphabetical order, but, yes, it doesn't make sense.
> 
>> Strictly speaking "maxfreq" and "minfreq" also should be objected to when "hwp"
>> was specified.
>>
>> Overall I'm getting the impression that beyond your "verbose" related adjustment
>> more is needed, if you're meaning to get things closer to how we parse the
>> option (splitting across multiple lines to help see what I mean):
>>
>> `= none
>>  | {{ <boolean> | xen } [:{powersave|performance|ondemand|userspace}
>>                           [{,hwp[,hdc]|[,maxfreq=<maxfreq>[,minfreq=<minfreq>]}]
>>                           [,verbose]]}
>>  | dom0-kernel`
>>
>> (We're still parsing in a more relaxed way, e.g. minfreq may come ahead of
>> maxfreq, but it's better to be tighter in the doc than too relaxed.)
>>
>> Furthermore while max/min freq don't apply directly, there are still two MSRs
>> controlling bounds at the package and logical processor levels.
> 
> Well, we only program the logical processor level MSRs because we
> don't have a good idea of the packages to know when we can skip
> writing an MSR.
> 
> How about this:
> `= none
>  | {{ <boolean> | xen } {
> [:{powersave|performance|ondemand|userspace}[,maxfreq=<maxfreq>[,minfreq=<minfreq>]]
>                         | [:hwp[,hdc]] }
>                           [,verbose]]}
>  | dom0-kernel`

Looks right, yes.

>>> --- /dev/null
>>> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
>>> @@ -0,0 +1,506 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * hwp.c cpufreq driver to run Intel Hardware P-States (HWP)
>>> + *
>>> + * Copyright (C) 2021 Jason Andryuk <jandryuk@gmail.com>
>>> + */
>>> +
>>> +#include <xen/cpumask.h>
>>> +#include <xen/init.h>
>>> +#include <xen/param.h>
>>> +#include <xen/xmalloc.h>
>>> +#include <asm/io.h>
>>> +#include <asm/msr.h>
>>> +#include <acpi/cpufreq/cpufreq.h>
>>> +
>>> +static bool feature_hwp;
>>> +static bool feature_hwp_notification;
>>> +static bool feature_hwp_activity_window;
>>> +static bool feature_hwp_energy_perf;
>>> +static bool feature_hwp_pkg_level_ctl;
>>> +static bool feature_hwp_peci;
>>> +
>>> +static bool feature_hdc;
>>
>> Most (all?) of these want to be __ro_after_init, I expect.
> 
> I think you are correct.  (This pre-dates __ro_after_init and I didn't
> update it.)

Yet even then they should have used __read_mostly.

>>> +union hwp_request
>>> +{
>>> +    struct
>>> +    {
>>> +        uint64_t min_perf:8;
>>> +        uint64_t max_perf:8;
>>> +        uint64_t desired:8;
>>> +        uint64_t energy_perf:8;
>>> +        uint64_t activity_window:10;
>>> +        uint64_t package_control:1;
>>> +        uint64_t reserved:16;
>>> +        uint64_t activity_window_valid:1;
>>> +        uint64_t energy_perf_valid:1;
>>> +        uint64_t desired_valid:1;
>>> +        uint64_t max_perf_valid:1;
>>> +        uint64_t min_perf_valid:1;
>>
>> The boolean fields here would probably better be of type "bool". I also
>> don't see the need for using uint64_t for any of the other fields -
>> unsigned int will be quite fine, I think. Only ...
> 
> This is the hardware MSR format, so it seemed natural to use uint64_t
> and the bit fields.  To me, uint64_t foo:$bits; better shows that we
> are dividing up a single hardware register using bit fields.
> Honestly, I'm unfamiliar with the finer points of laying out bitfields
> with bool.  And the 10 bits of activity window throws off aligning to
> standard types.
> 
> This seems to have the correct layout:
> struct
> {
>         unsigned char min_perf;
>         unsigned char max_perf;
>         unsigned char desired;
>         unsigned char energy_perf;
>         unsigned int activity_window:10;
>         bool package_control:1;
>         unsigned int reserved:16;
>         bool activity_window_valid:1;
>         bool energy_perf_valid:1;
>         bool desired_valid:1;
>         bool max_perf_valid:1;
>         bool min_perf_valid:1;
> };
> 
> Or would you prefer the first 8 bit ones to be unsigned int
> min_perf:8?

Personally I think using bitfields uniformly would be better. What you
definitely cannot use if not using a bitfield is "unsigned char", it
ought to be uint8_t then. If using a bitfield, as said, I think it's
best to stick to unsigned int and bool, unless field width goes
beyond 32 bits or fields cross a 32-bit boundary.
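Put together, the convention described here might look like the sketch below: bool for the single-bit flags, unsigned int for the wider fields, and the raw uint64_t overlay kept for the MSR access. This is only an illustration of the layout quoted above, not the final patch, and it assumes the usual LSB-first bitfield allocation on x86.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of IA32_HWP_REQUEST along the lines discussed.  No field crosses
 * a 32-bit boundary, so the fields line up with the MSR bits on the
 * x86 ABIs Xen targets.
 */
union hwp_request {
    struct {
        unsigned int min_perf:8;
        unsigned int max_perf:8;
        unsigned int desired:8;
        unsigned int energy_perf:8;
        unsigned int activity_window:10;
        bool package_control:1;
        unsigned int reserved:16;
        bool activity_window_valid:1;
        bool energy_perf_valid:1;
        bool desired_valid:1;
        bool max_perf_valid:1;
        bool min_perf_valid:1;
    };
    uint64_t raw;
};

_Static_assert(sizeof(union hwp_request) == sizeof(uint64_t),
               "fields must pack into the 64-bit MSR");
```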

>  The bools seem to need :1, which doesn't seem to be
> gaining us much, IMO.  I'd strongly prefer just keeping it as I have
> it, but I will change it however you like.

It's not so much how I like it, but to follow (a) existing practice
(for the boolean fields) and (b) ./CODING_STYLE (for the selection of
types).

>>> +    };
>>> +    uint64_t raw;
>>
>> ... this wants to keep this type. (Same again below then.)
> 
> For "below", do you want:
> 
>         struct
>         {
>             unsigned char highest;
>             unsigned char guaranteed;
>             unsigned char most_efficient;
>             unsigned char lowest;
>             unsigned int reserved;
>         } hw;
> ?

No - it can only be bitfields or fixed-width types here.

>>> +bool __init hwp_available(void)
>>> +{
>>> +    unsigned int eax, ecx, unused;
>>> +    bool use_hwp;
>>> +
>>> +    if ( boot_cpu_data.cpuid_level < CPUID_PM_LEAF )
>>> +    {
>>> +        hwp_verbose("cpuid_level (%#x) lacks HWP support\n",
>>> +                    boot_cpu_data.cpuid_level);
>>> +        return false;
>>> +    }
>>> +
>>> +    if ( boot_cpu_data.cpuid_level < 0x16 )
>>> +    {
>>> +        hwp_info("HWP disabled: cpuid_level %#x < 0x16 lacks CPU freq info\n",
>>> +                 boot_cpu_data.cpuid_level);
>>> +        return false;
>>> +    }
>>> +
>>> +    cpuid(CPUID_PM_LEAF, &eax, &unused, &ecx, &unused);
>>> +
>>> +    if ( !(eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE) &&
>>> +         !(ecx & CPUID6_ECX_IA32_ENERGY_PERF_BIAS) )
>>> +    {
>>> +        hwp_verbose("HWP disabled: No energy/performance preference available");
>>> +        return false;
>>> +    }
>>> +
>>> +    feature_hwp                 = eax & CPUID6_EAX_HWP;
>>> +    feature_hwp_notification    = eax & CPUID6_EAX_HWP_NOTIFICATION;
>>> +    feature_hwp_activity_window = eax & CPUID6_EAX_HWP_ACTIVITY_WINDOW;
>>> +    feature_hwp_energy_perf     =
>>> +        eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE;
>>> +    feature_hwp_pkg_level_ctl   = eax & CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST;
>>> +    feature_hwp_peci            = eax & CPUID6_EAX_HWP_PECI;
>>> +
>>> +    hwp_verbose("HWP: %d notify: %d act-window: %d energy-perf: %d pkg-level: %d peci: %d\n",
>>> +                feature_hwp, feature_hwp_notification,
>>> +                feature_hwp_activity_window, feature_hwp_energy_perf,
>>> +                feature_hwp_pkg_level_ctl, feature_hwp_peci);
>>> +
>>> +    if ( !feature_hwp )
>>> +        return false;
>>> +
>>> +    feature_hdc = eax & CPUID6_EAX_HDC;
>>> +
>>> +    hwp_verbose("HWP: Hardware Duty Cycling (HDC) %ssupported%s\n",
>>> +                feature_hdc ? "" : "not ",
>>> +                feature_hdc ? opt_cpufreq_hdc ? ", enabled" : ", disabled"
>>> +                            : "");
>>> +
>>> +    feature_hdc = feature_hdc && opt_cpufreq_hdc;
>>> +
>>> +    hwp_verbose("HWP: HW_FEEDBACK %ssupported\n",
>>> +                (eax & CPUID6_EAX_HW_FEEDBACK) ? "" : "not ");
>>
>> You report this, but you don't really use it?
> 
> Correct.  I needed to know what capabilities my processors have.
> 
> feature_hwp_pkg_level_ctl and feature_hwp_peci can also be dropped
> since they aren't used beyond printing their values.  I'd still lean
> toward keeping their printing under verbose since otherwise there
> isn't a convenient way to know if they are available without
> recompiling.

That's fine, but wants mentioning in the description. Also respective
variables would want to be __initdata then, be local to the function,
or be dropped altogether. Plus you'd want to be consistent - either
you use a helper variable for all print-only features, or you don't.

>>> +static void hwp_get_cpu_speeds(struct cpufreq_policy *policy)
>>> +{
>>> +    uint32_t base_khz, max_khz, bus_khz, edx;
>>> +
>>> +    cpuid(0x16, &base_khz, &max_khz, &bus_khz, &edx);
>>> +
>>> +    /* aperf/mperf scales base. */
>>> +    policy->cpuinfo.perf_freq = base_khz * 1000;
>>> +    policy->cpuinfo.min_freq = base_khz * 1000;
>>> +    policy->cpuinfo.max_freq = max_khz * 1000;
>>> +    policy->min = base_khz * 1000;
>>> +    policy->max = max_khz * 1000;
>>> +    policy->cur = 0;
>>
>> What is the comment intended to be telling me here?
> 
> When I was surprised to discover that I needed to pass in the base
> frequency for proper aperf/mperf scaling, it seemed relevant at the
> time as it's the opposite of ACPI cpufreq.  It can be dropped now.

Well, I'm not insisting on dropping the comment. It could also be left,
but then extended so it can be understood what is meant.

>>> +static void cf_check hwp_init_msrs(void *info)
>>> +{
>>> +    struct cpufreq_policy *policy = info;
>>> +    struct hwp_drv_data *data = this_cpu(hwp_drv_data);
>>> +    uint64_t val;
>>> +
>>> +    /*
>>> +     * Package level MSR, but we don't have a good idea of packages here, so
>>> +     * just do it every time.
>>> +     */
>>> +    if ( rdmsr_safe(MSR_IA32_PM_ENABLE, val) )
>>> +    {
>>> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_PM_ENABLE)\n", policy->cpu);
>>> +        data->curr_req.raw = -1;
>>> +        return;
>>> +    }
>>> +
>>> +    /* Ensure we don't generate interrupts */
>>> +    if ( feature_hwp_notification )
>>> +        wrmsr_safe(MSR_IA32_HWP_INTERRUPT, 0);
>>> +
>>> +    hwp_verbose("CPU%u: MSR_IA32_PM_ENABLE: %016lx\n", policy->cpu, val);
>>> +    if ( !(val & IA32_PM_ENABLE_HWP_ENABLE) )
>>> +    {
>>> +        val |= IA32_PM_ENABLE_HWP_ENABLE;
>>> +        if ( wrmsr_safe(MSR_IA32_PM_ENABLE, val) )
>>> +        {
>>> +            hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_PM_ENABLE, %lx)\n",
>>> +                    policy->cpu, val);
>>> +            data->curr_req.raw = -1;
>>> +            return;
>>> +        }
>>> +    }
>>> +
>>> +    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
>>> +    {
>>> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
>>> +                policy->cpu);
>>> +        data->curr_req.raw = -1;
>>> +        return;
>>> +    }
>>> +
>>> +    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
>>> +    {
>>> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
>>> +        data->curr_req.raw = -1;
>>> +        return;
>>> +    }
>>> +
>>> +    if ( !feature_hwp_energy_perf ) {
>>
>> Nit: Brace placement.
>>
>>> +        if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, val) )
>>> +        {
>>> +            hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
>>> +            data->curr_req.raw = -1;
>>> +
>>> +            return;
>>> +        }
>>> +
>>> +        data->energy_perf = val & IA32_ENERGY_BIAS_MASK;
>>> +    }
>>
>> In order to not need to undo the "enable" you've already done, maybe that
>> should move down here?
> 
> HWP needs to be enabled before the Capabilities and Request MSRs can
> be read.

I must have missed this aspect in the SDM. Do you have a pointer?

>  Reading them shouldn't fail, but it seems safer to use
> rdmsr_safe in case something goes wrong.

Sure. But then the "enable" will need undoing in the unlikely event of
failure.

>>> --- a/xen/arch/x86/include/asm/cpufeature.h
>>> +++ b/xen/arch/x86/include/asm/cpufeature.h
>>> @@ -46,8 +46,17 @@ extern struct cpuinfo_x86 boot_cpu_data;
>>>  #define cpu_has(c, bit)              test_bit(bit, (c)->x86_capability)
>>>  #define boot_cpu_has(bit)    test_bit(bit, boot_cpu_data.x86_capability)
>>>
>>> -#define CPUID_PM_LEAF                    6
>>> -#define CPUID6_ECX_APERFMPERF_CAPABILITY 0x1
>>> +#define CPUID_PM_LEAF                                6
>>> +#define CPUID6_EAX_HWP                               (_AC(1, U) <<  7)
>>> +#define CPUID6_EAX_HWP_NOTIFICATION                  (_AC(1, U) <<  8)
>>> +#define CPUID6_EAX_HWP_ACTIVITY_WINDOW               (_AC(1, U) <<  9)
>>> +#define CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE (_AC(1, U) << 10)
>>> +#define CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST         (_AC(1, U) << 11)
>>> +#define CPUID6_EAX_HDC                               (_AC(1, U) << 13)
>>> +#define CPUID6_EAX_HWP_PECI                          (_AC(1, U) << 16)
>>> +#define CPUID6_EAX_HW_FEEDBACK                       (_AC(1, U) << 19)
>>
>> Perhaps better without open-coding BIT()?
> 
> Ok.
> 
>> I also find it a little odd that e.g. bit 17 is left out here despite you
>> declaring the 5 "valid" bits in union hwp_request (which are qualified by
>> this CPUID bit afaict).
> 
> Well, I thought I wasn't supposed to introduce unused defines, so I
> didn't add one for 17.  For union hwp_request, the "valid" bits are
> part of the register structure, so it makes sense to include them
> instead of an incomplete definition.  IIRC, at some point I set the
> "valid" bits when I wasn't supposed to, and they caused the wrmsr
> calls to fail.  That might have been because my test machines don't
> have package-level HWP.
> 
> (I was confused when the CPUID section stated "Bit 17: Flexible HWP is
> supported if set.", but there are no further references to "Flexible
> HWP" in the SDM.)

A not uncommon issue with the SDM. At least there is a place where bit
17's purpose is described in the HWP section.

>>> @@ -165,6 +172,11 @@
>>>  #define  PASID_PASID_MASK                   0x000fffff
>>>  #define  PASID_VALID                        (_AC(1, ULL) << 31)
>>>
>>> +#define MSR_IA32_PKG_HDC_CTL                0x00000db0
>>> +#define  IA32_PKG_HDC_CTL_HDC_PKG_ENABLE    (_AC(1, ULL) <<  0)
>>
>> The name has two redundant infixes, which looks odd, but then I can't
>> suggest any better without going too much out of sync with the SDM.
> 
> Yes, it's not a good name, but I was trying to keep close to the SDM.
> FAOD, these should drop IA32_ to become:
> MSR_PKG_HDC_CTL
> PKG_HDC_CTL_HDC_PKG_ENABLE
> ?

Right.

> Thank you for taking the time to review this.

Well, it has taken me awfully long to get back to this.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 05 07:04:16 2023
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
 <volodymyr_babchuk@epam.com>, Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>, Connor Davis
 <connojdavis@gmail.com>, Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH] build: shorten macro references
Date: Fri, 5 May 2023 07:03:59 +0000
Message-ID: <EEEC910E-1499-4D17-96D6-49296523451F@arm.com>
References: <95467f2c-09ed-d34f-feef-5cd55c60f628@suse.com>
In-Reply-To: <95467f2c-09ed-d34f-feef-5cd55c60f628@suse.com>

Hi Jan,

> On 4 May 2023, at 18:16, Jan Beulich <jbeulich@suse.com> wrote:
>
> Presumably by copy-and-paste we've accumulated a number of instances of
> $(@D)/$(@F), which really is nothing else than $@. The split form only
> needs using when we want to e.g. insert a leading . at the beginning of
> the file name portion of the full name.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

>
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -104,9 +104,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
> $(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
> $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
>    $(@D)/.$(@F).1.o -o $@
> - $(NM) -pa --format=sysv $(@D)/$(@F) \
> + $(NM) -pa --format=sysv $@ \
> | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> - >$(@D)/$(@F).map
> + >$@.map
> rm -f $(@D)/.$(@F).[0-9]*
>
> .PHONY: include
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -10,9 +10,9 @@ $(TARGET): $(TARGET)-syms
>
> $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
> $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
> - $(NM) -pa --format=sysv $(@D)/$(@F) \
> + $(NM) -pa --format=sysv $@ \
> | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> - >$(@D)/$(@F).map
> + >$@.map
>
> $(obj)/xen.lds: $(src)/xen.lds.S FORCE
> $(call if_changed_dep,cpp_lds_S)
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -150,9 +150,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
> $(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
> $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
>    $(orphan-handling-y) $(@D)/.$(@F).1.o -o $@
> - $(NM) -pa --format=sysv $(@D)/$(@F) \
> + $(NM) -pa --format=sysv $@ \
> | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> - >$(@D)/$(@F).map
> + >$@.map
> rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
> ifeq ($(CONFIG_XEN_IBT),y)
> $(SHELL) $(srctree)/tools/check-endbr.sh $@
> @@ -224,8 +224,9 @@ endif
> $(MAKE) $(build)=$(@D) .$(@F).1r.o .$(@F).1s.o
> $(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
>      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
> - $(NM) -pa --format=sysv $(@D)/$(@F) \
> - | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
> + $(NM) -pa --format=sysv $@ \
> + | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> + >$@.map
> ifeq ($(CONFIG_DEBUG_INFO),y)
> $(if $(filter --strip-debug,$(EFI_LDFLAGS)),:$(space))$(OBJCOPY) -O elf64-x86-64 $@ $@.elf
> endif



From xen-devel-bounces@lists.xenproject.org Fri May 05 07:05:05 2023
Message-ID: <fe991a0d-53ff-1b16-02b4-85c0332467a1@suse.com>
Date: Fri, 5 May 2023 09:04:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 05/14 RESEND] xenpm: Change get-cpufreq-para output for
 internal
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-6-jandryuk@gmail.com>
 <2864bf57-88cd-6fce-2d38-6f3a31abb440@suse.com>
 <CAKf6xpshQ=6kPHtjpWqNUiaBym2uEXt=reY0Kd0VoZgxuE=LxA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xpshQ=6kPHtjpWqNUiaBym2uEXt=reY0Kd0VoZgxuE=LxA@mail.gmail.com>

On 04.05.2023 19:00, Jason Andryuk wrote:
> On Thu, May 4, 2023 at 10:35 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>> When using HWP, some of the returned data is not applicable.  In that
>>> case, we should just omit it to avoid confusing the user.  So switch to
>>> printing the base and turbo frequencies since those are relevant to HWP.
>>> Similarly, stop printing the CPU frequencies since those do not apply.
>>
>> It vaguely feels like I have asked this before: Can you point me at a
>> place in the SDM where it is said that CPUID 0x16's "Maximum Frequency"
>> is the turbo frequency? Without such a reference I feel a little uneasy
>> with ...
> 
> I don't have a reference, but I found it empirically to match the
> "turbo" frequency.
> 
> For an Intel® Core™ i7-10810U,
> https://ark.intel.com/content/www/us/en/ark/products/201888/intel-core-i710810u-processor-12m-cache-up-to-4-90-ghz.html
> 
> Max Turbo Frequency 4.90 GHz
> 
> # xenpm get-cpufreq-para
> cpu id               : 0
> affected_cpus        : 0
> cpuinfo frequency    : base [1600000] turbo [4900000]
> 
> Turbo has to be enabled to reach (close to) that frequency.
> 
> From my cover letter:
> This is for a 10th gen 6-core 1600 MHz base 4900 MHz max CPU.  In the
> default balance mode, Turbo Boost doesn't exceed 4GHz.  Tweaking the
> energy_perf preference with `xenpm set-cpufreq-hwp balance ene:64`,
> I've seen the CPU hit 4.7GHz before throttling down and bouncing around
> between 4.3 and 4.5 GHz.  Curiously the other cores read ~4GHz when
> turbo boost takes effect.  This was done after pinning all dom0 cores,
> and using taskset to pin to vCPU/pCPU 11 and running a bash tightloop.

Right, but what matters for the longer term future is what gets committed
(and the cover letter won't be). IOW ...

>>> @@ -720,10 +721,15 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
>>>          printf(" %d", p_cpufreq->affected_cpus[i]);
>>>      printf("\n");
>>>
>>> -    printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
>>> -           p_cpufreq->cpuinfo_max_freq,
>>> -           p_cpufreq->cpuinfo_min_freq,
>>> -           p_cpufreq->cpuinfo_cur_freq);
>>> +    if ( internal )
>>> +        printf("cpuinfo frequency    : base [%u] turbo [%u]\n",
>>> +               p_cpufreq->cpuinfo_min_freq,
>>> +               p_cpufreq->cpuinfo_max_freq);
>>
>> ... calling it "turbo" (and not "max") here.
> 
> I'm fine with "max".  I think I went with turbo since it's a value you
> cannot sustain but can only hit in short bursts.

... I don't mind you sticking to "turbo" as long as the description makes
clear why that was chosen despite the SDM not naming it this way.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 05 07:06:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 07:06:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530247.825753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pupWV-0003fQ-W8; Fri, 05 May 2023 07:06:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530247.825753; Fri, 05 May 2023 07:06:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pupWV-0003fH-TY; Fri, 05 May 2023 07:06:35 +0000
Received: by outflank-mailman (input) for mailman id 530247;
 Fri, 05 May 2023 07:06:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pupWU-0003f3-U7; Fri, 05 May 2023 07:06:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pupWU-00024s-M1; Fri, 05 May 2023 07:06:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pupWU-00037F-CX; Fri, 05 May 2023 07:06:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pupWU-0006KC-C0; Fri, 05 May 2023 07:06:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rFn4o8+zAAi6yzkB/TxryAda6Zjq1jCidQ/Gt0GSWZY=; b=bsKTxqphoR0YGckec+61IQpjc/
	GQbEj2ihjQv1f/JU+G+h79GKPeimuG6GsHj1VJIPFw+wdeeXbvXgWGHVggnhsQe8ROE7eJ7jRT9gN
	0oT+LFsbU6TdrjWOSBl2YBZs7xhvSzz6FqQ7oZPokIQY4Q/6z03HoiYMoE0m4Ro66zuM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180538-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180538: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ff7cb2d7c98f8b832180e054848459fc24a0910a
X-Osstest-Versions-That:
    ovmf=d992a05ade3d1bebc6e7a81aaf700286e0e217c8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 07:06:34 +0000

flight 180538 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180538/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ff7cb2d7c98f8b832180e054848459fc24a0910a
baseline version:
 ovmf                 d992a05ade3d1bebc6e7a81aaf700286e0e217c8

Last test of basis   180535  2023-05-04 19:12:15 Z    0 days
Testing same since   180538  2023-05-05 04:12:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Benny Lin <benny.lin@intel.com>
  Pedro Falcato <pedro.falcato@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d992a05ade..ff7cb2d7c9  ff7cb2d7c98f8b832180e054848459fc24a0910a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 05 08:24:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 08:24:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530256.825763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puqju-00049P-40; Fri, 05 May 2023 08:24:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530256.825763; Fri, 05 May 2023 08:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puqju-00049I-01; Fri, 05 May 2023 08:24:30 +0000
Received: by outflank-mailman (input) for mailman id 530256;
 Fri, 05 May 2023 08:24:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KcVv=A2=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1puqjr-000498-GW
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 08:24:28 +0000
Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com
 [2607:f8b0:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3a2b559a-eb1e-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 10:24:18 +0200 (CEST)
Received: by mail-pl1-x631.google.com with SMTP id
 d9443c01a7336-1aaf2ede38fso14157385ad.2
 for <xen-devel@lists.xenproject.org>; Fri, 05 May 2023 01:24:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a2b559a-eb1e-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683275057; x=1685867057;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=YPzxmB9FIePwlqxtbt/uuMFpqYIeF7DSqLXDc5GDEAs=;
        b=NcjzHf0UjOD3MJxJNJokdR8VZ3XJ7eyQ+AV1KTriBnIWNIY5sPW263X+JUj7qeQ8+R
         sglN+hUlW5/tlGBL7d9QmkJ79fdWQAiXEA6043Tu3zzj7MAZbYcEsUJoMgunPNdhEt0N
         Xu5hhRFEstR+/LnF1+3Zg/wj96Hp8axuHzhPOhLVWw+HNTJnI2ffx/WNpRMVvfYCMAVl
         xz3GP77AGEfpffHdSiLYt0IhPDYp/AZfioKKC2ejmnofQzMbCI50axMEUCa+31k3/kpi
         u2ZVYNPVY8SXxfGgArAkmJiyyyFjonwvTUnqgDm0ab5TU+aDziDQkC5U+2DkOwCbiFtV
         QIeg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683275057; x=1685867057;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=YPzxmB9FIePwlqxtbt/uuMFpqYIeF7DSqLXDc5GDEAs=;
        b=eD6V2zTgmivTV7F6L+kQ/JVz5pYXF2cX1zYvDXolPvdW4A96Al94L641aOgR6PIwQS
         lYYUbopGboZQCZapooi2r53DEhDJ7xOdInS+N1Wjbi62IS+zIAOULqwofW7KqlgIt4yh
         sLpqR1vfpsdUT2wTqPlYs86ATmoWfKXgNtsB6mzP2P0dyGd6bVYU+HbzIdlk/ChGIiDP
         mBYhBj27gntTYWq1JWr0uvukHcKoIgYcyRmwPF0TKKGS9uhH2JiUWYipJZUuAChUEHIp
         6gN0Xnvfeu33qodUgmZz50Pi+Xw4MRkRU/fkAIG6QwULB8WTjeKrs/zIUs3YHI7QFaL+
         AuaQ==
X-Gm-Message-State: AC+VfDwdECS6c1ZYn/RZTJgf/HcnccdvtIKrAjM6EG8YslZuswhq2sug
	K8rAVC0EDO6Xf4GA0db3W9PdI1KPWEiW1eHxM20=
X-Google-Smtp-Source: ACHHUZ5Jx4GL90foDMd3kgoy5wNT4T/8jw/xCRc7UpExGsRCZf3jQOoKQGOx0Dsydc8eeU9ffYjgAtzsptEa//4q044=
X-Received: by 2002:a17:903:234c:b0:1ab:289f:65cf with SMTP id
 c12-20020a170903234c00b001ab289f65cfmr740923plh.54.1683275056513; Fri, 05 May
 2023 01:24:16 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2tc_3r3SAXVOmdbDJXvppaXkSdMH0iv-fG1zUwG3Ub_hQ@mail.gmail.com>
 <alpine.DEB.2.22.394.2304191304570.15580@ubuntu-linux-20-04-desktop>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
 <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com> <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
 <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com> <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Fri, 5 May 2023 11:28:53 +0300
Message-ID: <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Michal Orzel <michal.orzel@amd.com>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com
Content-Type: multipart/alternative; boundary="000000000000cd85bd05faee032e"

--000000000000cd85bd05faee032e
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello Stefano,

I would like to try the Xen cache coloring feature from this repo:
https://xenbits.xen.org/git-http/xen.git
Could you tell me which branch I should use?

Regards,
Oleg

On Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org> wrote:

> I am familiar with the zcu102 but I don't know how you could possibly
> generate an SError.
>
> I suggest trying ImageBuilder [1] to generate the boot
> configuration as a test, because it is known to work well for the zcu102.
>
> [1] https://gitlab.com/xen-project/imagebuilder
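For readers unfamiliar with ImageBuilder: its `uboot-script-gen` tool consumes a small shell-style config file describing the boot modules. A minimal sketch for a zcu102-class dom0-only setup might look like this; every path, address, and value here is illustrative, not taken from this thread.

```shell
# Illustrative ImageBuilder config for uboot-script-gen (dom0-only boot).
# All file names and addresses below are assumptions for this example.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

XEN="xen"                          # Xen hypervisor binary
XEN_CMD="console=dtuart dom0_mem=1600M"

DOM0_KERNEL="Image"                # dom0 Linux kernel
DOM0_RAMDISK="dom0-rootfs.cpio.gz" # dom0 initrd

NUM_DOMUS=0

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

The boot script would then be generated with something like `./scripts/uboot-script-gen -c <config> -d . -t tftp`; see the ImageBuilder README for the exact invocation and supported keys.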
>
>
> On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> > Hello Stefano,
> >
> > Thanks for clarification.
> > We use neither ImageBuilder nor a u-boot boot script.
> > The model is zcu102-compatible.
> >
> > Regards,
> > O.
> >
> > On Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >       This is interesting. Are you using Xilinx hardware by any chance?
> If so,
> >       which board?
> >
> >       Are you using ImageBuilder to generate your boot.scr boot script?
> If so,
> >       could you please post your ImageBuilder config file? If not, can
> you
> >       post the source of your uboot boot script?
> >
> >       SErrors are supposed to be related to a hardware failure of some
> kind.
> >       You are not supposed to be able to trigger an SError easily by
> >       "mistake". I have not seen SErrors due to wrong cache coloring
> >       configurations on any Xilinx board before.
> >
> >       The differences between Xen with and without cache coloring from a
> >       hardware perspective are:
> >
> >       - With cache coloring, the SMMU is enabled and does address
> translations
> >         even for dom0. Without cache coloring the SMMU could be
> disabled, and
> >         if enabled, the SMMU doesn't do any address translations for
> Dom0. If
> >         there is a hardware failure related to SMMU address translation
> it
> >         could only trigger with cache coloring. This would be my normal
> >         suggestion for you to explore, but the failure happens too early
> >         before any DMA-capable device is programmed. So I don't think
> this can
> >         be the issue.
> >
> >       - With cache coloring, the memory allocation is very different so
> you'll
> >         end up using different DDR regions for Dom0. So if your DDR is
> >         defective, you might only see a failure with cache coloring
> enabled
> >         because you end up using different regions.
> >
> >
> >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> >       > Hi Stefano,
> >       >
> >       > Thank you.
> >       > If I build Xen without cache coloring support, this error does not occur.
> >       > All the domains boot well.
> >       > Hence it cannot be a hardware issue.
> >       > The panic happened while unpacking the rootfs.
> >       > I have attached the Xen/Dom0 boot log without coloring.
> >       > The highlighted strings are printed exactly after the place where the
> first panic arrived.
> >       >
> >       >  Xen 4.16.1-pre
> >       > (XEN) Xen version 4.16.1-pre (nole2390@(none))
> (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300
> git:321687b231-dirty
> >       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
> >       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0,
> part 0xd03,rev 0x4
> >       > (XEN) 64-bit Execution:
> >       > (XEN)   Processor Features: 0000000000002222 0000000000000000
> >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32
> EL0:64+32
> >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
> >       > (XEN)   Debug Features: 0000000010305106 0000000000000000
> >       > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> >       > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
> >       > (XEN)   ISA Features:  0000000000011120 0000000000000000
> >       > (XEN) 32-bit Execution:
> >       > (XEN)   Processor Features: 0000000000000131:0000000000011011
> >       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
> >       > (XEN)     Extensions: GenericTimer Security
> >       > (XEN)   Debug Features: 0000000003010066
> >       > (XEN)   Auxiliary Features: 0000000000000000
> >       > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
> >       > (XEN)                          0000000001260000 0000000002102211
> >       > (XEN)   ISA Features: 0000000002101110 0000000013112111
> 0000000021232042
> >       > (XEN)                 0000000001112131 0000000000011142
> 0000000000011121
> >       > (XEN) Using SMC Calling Convention v1.2
> >       > (XEN) Using PSCI v1.1
> >       > (XEN) SMP: Allowing 4 CPUs
> >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> >       > (XEN) GICv2 initialization:
> >       > (XEN)         gic_dist_addr=00000000f9010000
> >       > (XEN)         gic_cpu_addr=00000000f9020000
> >       > (XEN)         gic_hyp_addr=00000000f9040000
> >       > (XEN)         gic_vcpu_addr=00000000f9060000
> >       > (XEN)         gic_maintenance_irq=25
> >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
> >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
> >       > (XEN) Using scheduler: null Scheduler (null)
> >       > (XEN) Initializing null scheduler
> >       > (XEN) WARNING: This is experimental software in development.
> >       > (XEN) Use at your own risk.
> >       > (XEN) Allocated console ring of 32 KiB.
> >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the
> domain
> >       > (XEN) Bringing up CPU1
> >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the
> domain
> >       > (XEN) CPU 1 booted.
> >       > (XEN) Bringing up CPU2
> >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the
> domain
> >       > (XEN) CPU 2 booted.
> >       > (XEN) Bringing up CPU3
> >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the
> domain
> >       > (XEN) Brought up 4 CPUs
> >       > (XEN) CPU 3 booted.
> >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware
> configuration...
> >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
> >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
> >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48
> register groups, mask 0x7fff
> >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0
> stage-2 only)
> >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
> >       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
> >       > (XEN) I/O virtualisation enabled
> >       > (XEN)  - Dom0 mode: Relaxed
> >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> >       > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> >       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 ->
> 00000000002ccb2c
> >       > (XEN) *** LOADING DOMAIN 0 ***
> >       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
> >       > (XEN) Loading ramdisk from boot module @ 0000000002000000
> >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
> >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
> >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
> >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
> >       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
> >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
> >       > (XEN) Allocating PPI 16 for event channel interrupt
> >       > (XEN) Extended region 0: 0x81200000->0xa0000000
> >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
> >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
> >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
> >       > (XEN) Extended region 4: 0x100000000->0x600000000
> >       > (XEN) Extended region 5: 0x880000000->0x8000000000
> >       > (XEN) Extended region 6: 0x8001000000->0x10000000000
> >       > (XEN) Loading zImage from 0000000001000000 to
> 0000000010000000-0000000010e41008
> >       > (XEN) Loading d0 initrd from 0000000002000000 to
> 0x0000000013600000-0x000000001ff3a617
> >       > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
> >       > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> >       > (XEN) Std. Loglevel: All
> >       > (XEN) Guest Loglevel: All
> >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to
> switch input)
> >       > (XEN) null.c:353: 0 <-- d0v0
> >       > (XEN) Freed 356kB init memory.
> >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
> >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
> >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to
> ICACTIVER4
> >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to
> ICACTIVER8
> >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to
> ICACTIVER12
> >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to
> ICACTIVER16
> >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to
> ICACTIVER20
> >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to
> ICACTIVER0
> >       > [    0.000000] Booting Linux on physical CPU 0x0000000000
> [0x410fd034]
> >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1
> (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU
> >       Binutils)
> >       > 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> >       > [    0.000000] Machine model: D14 Viper Board - White Unit
> >       > [    0.000000] Xen 4.16 support found
> >       > [    0.000000] Zone ranges:
> >       > [    0.000000]   DMA      [mem
> 0x0000000010000000-0x000000007fffffff]
> >       > [    0.000000]   DMA32    empty
> >       > [    0.000000]   Normal   empty
> >       > [    0.000000] Movable zone start for each node
> >       > [    0.000000] Early memory node ranges
> >       > [    0.000000]   node   0: [mem
> 0x0000000010000000-0x000000001fffffff]
> >       > [    0.000000]   node   0: [mem
> 0x0000000022000000-0x0000000022147fff]
> >       > [    0.000000]   node   0: [mem
> 0x0000000022200000-0x0000000022347fff]
> >       > [    0.000000]   node   0: [mem
> 0x0000000024000000-0x0000000027ffffff]
> >       > [    0.000000]   node   0: [mem
> 0x0000000030000000-0x000000007fffffff]
> >       > [    0.000000] Initmem setup node 0 [mem
> 0x0000000010000000-0x000000007fffffff]
> >       > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable
> ranges
> >       > [    0.000000] On node 0, zone DMA: 184 pages in unavailable
> ranges
> >       > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable
> ranges
> >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
> >       > [    0.000000] psci: probing for conduit method from DT.
> >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
> >       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
> >       > [    0.000000] psci: Trusted OS migration not required
> >       > [    0.000000] psci: SMC Calling Convention v1.1
> >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744
> u65536
> >       > [    0.000000] Detected VIPT I-cache on CPU0
> >       > [    0.000000] CPU features: kernel page table isolation forced
> ON by KASLR
> >       > [    0.000000] CPU features: detected: Kernel page table
> isolation (KPTI)
> >       > [    0.000000] Built 1 zonelists, mobility grouping on.  Total
> pages: 403845
> >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen
> earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0
> >       maxcpus=2
> >       > [    0.000000] Unknown kernel command line parameters
> "earlyprintk=xen fips=1", will be passed to user space.
> >       > [    0.000000] Dentry cache hash table entries: 262144 (order:
> 9, 2097152 bytes, linear)
> >       > [    0.000000] Inode-cache hash table entries: 131072 (order: 8,
> 1048576 bytes, linear)
> >       > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap
> free:on
> >       > [    0.000000] mem auto-init: clearing system memory may take
> some time...
> >       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel
> code, 836K rwdata, 2396K rodata, 1536K init, 262K bss,
>       256944K reserved,
> >       > 262144K cma-reserved)
> >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0,
> CPUs=2, Nodes=1
> >       > [    0.000000] rcu: Hierarchical RCU implementation.
> >       > [    0.000000] rcu: RCU event tracing is enabled.
> >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to
> nr_cpu_ids=2.
> >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment
> delay is 25 jiffies.
> >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16,
> nr_cpu_ids=2
> >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> >       > [    0.000000] Root IRQ handler: gic_handle_irq
> >       > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz
> (virt).
> >       > [    0.000000] clocksource: arch_sys_counter: mask:
> 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
> >       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns,
> wraps every 4398046511100ns
> >       > [    0.000258] Console: colour dummy device 80x25
> >       > [    0.310231] printk: console [hvc0] enabled
> >       > [    0.314403] Calibrating delay loop (skipped), value
> calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> >       > [    0.324851] pid_max: default: 32768 minimum: 301
> >       > [    0.329706] LSM: Security Framework initializing
> >       > [    0.334204] Yama: becoming mindful.
> >       > [    0.337865] Mount-cache hash table entries: 4096 (order: 3,
> 32768 bytes, linear)
> >       > [    0.345180] Mountpoint-cache hash table entries: 4096 (order:
> 3, 32768 bytes, linear)
> >       > [    0.354743] xen:grant_table: Grant tables using version 1
> layout
> >       > [    0.359132] Grant table initialized
> >       > [    0.362664] xen:events: Using FIFO-based ABI
> >       > [    0.366993] Xen: initializing cpu0
> >       > [    0.370515] rcu: Hierarchical SRCU implementation.
> >       > [    0.375930] smp: Bringing up secondary CPUs ...
> >       > (XEN) null.c:353: 1 <-- d0v1
> >       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to
> ICACTIVER0
> >       > [    0.382549] Detected VIPT I-cache on CPU1
> >       > [    0.388712] Xen: initializing cpu1
> >       > [    0.388743] CPU1: Booted secondary processor 0x0000000001
> [0x410fd034]
> >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
> >       > [    0.406941] SMP: Total of 2 processors activated.
> >       > [    0.411698] CPU features: detected: 32-bit EL0 Support
> >       > [    0.416888] CPU features: detected: CRC32 instructions
> >       > [    0.422121] CPU: All CPU(s) started at EL1
> >       > [    0.426248] alternatives: patching kernel code
> >       > [    0.431424] devtmpfs: initialized
> >       > [    0.441454] KASLR enabled
> >       > [    0.441602] clocksource: jiffies: mask: 0xffffffff
> max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
> >       > [    0.448321] futex hash table entries: 512 (order: 3, 32768
> bytes, linear)
> >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol
> family
> >       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for
> atomic allocations
> >       > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool
> for atomic allocations
> >       > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32
> pool for atomic allocations
> >       > [    0.519478] audit: initializing netlink subsys (disabled)
> >       > [    0.524985] audit: type=2000 audit(0.336:1):
> state=initialized audit_enabled=0 res=1
> >       > [    0.529169] thermal_sys: Registered thermal governor
> 'step_wise'
> >       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4
> watchpoint registers.
> >       > [    0.545608] ASID allocator initialised with 32768 entries
> >       > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4
> MB for software IO TLB
> >       > [    0.559332] software IO TLB: mapped [mem
> 0x0000000011800000-0x0000000011c00000] (4MB)
> >       > [    0.583565] HugeTLB registered 1.00 GiB page size,
> pre-allocated 0 pages
> >       > [    0.584721] HugeTLB registered 32.0 MiB page size,
> pre-allocated 0 pages
> >       > [    0.591478] HugeTLB registered 2.00 MiB page size,
> pre-allocated 0 pages
> >       > [    0.598225] HugeTLB registered 64.0 KiB page size,
> pre-allocated 0 pages
> >       > [    0.636520] DRBG: Continuing without Jitter RNG
> >       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
> >       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
> >       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
> >       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
> >       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
> >       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
> >       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
> >       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
> >       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
> >       > [    1.350132] raid6: int64x8  xor()   773 MB/s
> >       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
> >       > [    1.486349] raid6: int64x4  xor()   851 MB/s
> >       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
> >       > [    1.622561] raid6: int64x2  xor()   744 MB/s
> >       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
> >       > [    1.758770] raid6: int64x1  xor()   517 MB/s
> >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> >       > [    1.767957] raid6: using neon recovery algorithm
> >       > [    1.772824] xen:balloon: Initialising balloon driver
> >       > [    1.778021] iommu: Default domain type: Translated
> >       > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
> >       > [    1.789149] SCSI subsystem initialized
> >       > [    1.792820] usbcore: registered new interface driver usbfs
> >       > [    1.798254] usbcore: registered new interface driver hub
> >       > [    1.803626] usbcore: registered new device driver usb
> >       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> >       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright
> 2005-2007 Rodolfo Giometti <giometti@linux.it>
> >       > [    1.822903] PTP clock support registered
> >       > [    1.826893] EDAC MC: Ver: 3.0.0
> >       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered
> ZynqMP IPI mbox with TX/RX channels.
> >       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered
> ZynqMP IPI mbox with TX/RX channels.
> >       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered
> ZynqMP IPI mbox with TX/RX channels.
> >       > [    1.855907] FPGA manager framework
> >       > [    1.859952] clocksource: Switched to clocksource
> arch_sys_counter
> >       > [    1.871712] NET: Registered PF_INET protocol family
> >       > [    1.871838] IP idents hash table entries: 32768 (order: 6,
> 262144 bytes, linear)
> >       > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> >       > [    1.887078] Table-perturb hash table entries: 65536 (order:
> 6, 262144 bytes, linear)
> >       > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> >       > [    1.902900] TCP bind hash table entries: 16384 (order: 6,
> 262144 bytes, linear)
> >       > [    1.910350] TCP: Hash tables configured (established 16384
> bind 16384)
> >       > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768
> bytes, linear)
> >       > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3,
> 32768 bytes, linear)
> >       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> >       > [    1.936834] RPC: Registered named UNIX socket transport
> module.
> >       > [    1.942342] RPC: Registered udp transport module.
> >       > [    1.947088] RPC: Registered tcp transport module.
> >       > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> >       > [    1.958334] PCI: CLS 0 bytes, default 64
> >       > [    1.962709] Trying to unpack rootfs image as initramfs...
> >       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> >       > [    1.982863] Installing knfsd (copyright (C) 1996
> okir@monad.swb.de).
> >       > [    2.021045] NET: Registered PF_ALG protocol family
> >       > [    2.021122] xor: measuring software checksum speed
> >       > [    2.029347]    8regs           :  2366 MB/sec
> >       > [    2.033081]    32regs          :  2802 MB/sec
> >       > [    2.038223]    arm64_neon      :  2320 MB/sec
> >       > [    2.038385] xor: using function: 32regs (2802 MB/sec)
> >       > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
> >       > [    2.050959] io scheduler mq-deadline registered
> >       > [    2.055521] io scheduler kyber registered
> >       > [    2.068227] xen:xen_evtchn: Event-channel device installed
> >       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing
> disabled
> >       > [    2.076190] cacheinfo: Unable to detect cache hierarchy for
> CPU 0
> >       > [    2.085548] brd: module loaded
> >       > [    2.089290] loop: module loaded
> >       > [    2.089341] Invalid max_queues (4), will use default max: 2.
> >       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> >       > [    2.098655] xen_netfront: Initialising Xen virtual ethernet
> driver
> >       > [    2.104156] usbcore: registered new interface driver rtl8150
> >       > [    2.109813] usbcore: registered new interface driver r8152
> >       > [    2.115367] usbcore: registered new interface driver asix
> >       > [    2.120794] usbcore: registered new interface driver
> ax88179_178a
> >       > [    2.126934] usbcore: registered new interface driver cdc_ether
> >       > [    2.132816] usbcore: registered new interface driver cdc_eem
> >       > [    2.138527] usbcore: registered new interface driver net1080
> >       > [    2.144256] usbcore: registered new interface driver
> cdc_subset
> >       > [    2.150205] usbcore: registered new interface driver zaurus
> >       > [    2.155837] usbcore: registered new interface driver cdc_ncm
> >       > [    2.161550] usbcore: registered new interface driver r8153_ecm
> >       > [    2.168240] usbcore: registered new interface driver cdc_acm
> >       > [    2.173109] cdc_acm: USB Abstract Control Model driver for
> USB modems and ISDN adapters
> >       > [    2.181358] usbcore: registered new interface driver uas
> >       > [    2.186547] usbcore: registered new interface driver
> usb-storage
> >       > [    2.192643] usbcore: registered new interface driver ftdi_sio
> >       > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
> >       > [    2.206118] udc-core: couldn't find an available UDC - added
> [g_mass_storage] to list of pending drivers
> >       > [    2.215332] i2c_dev: i2c /dev entries driver
> >       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> >       > [    2.225923] device-mapper: uevent: version 1.0.3
> >       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22)
> initialised: dm-devel@redhat.com
> >       > [    2.239315] EDAC MC0: Giving out device to module 1
> controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
> >       > [    2.249405] EDAC DEVICE0: Giving out device to module
> zynqmp-ocm-edac controller zynqmp_ocm: DEV
> >       ff960000.memory-controller (INTERRUPT)
> >       > [    2.261719] sdhci: Secure Digital Host Controller Interface
> driver
> >       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
> >       > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
> >       > [    2.278157] ledtrig-cpu: registered to indicate activity on
> CPUs
> >       > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
> >       > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
> >       > [    2.327875] securefw securefw: securefw probed
> >       > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
> >       > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes:
> AES Successfully Registered
> >       > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
> >       > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is
> available
> >       > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is
> available
> >       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager
> registered
> >       > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy
> registered
> >       > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
> >       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0
> Info: 1.512.15.0 KeyLen: 32
> >       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register
> tamper handler. Retrying...
> >       > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree
> Probing
> >       > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device
> registered
> >       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree
> Probing
> >       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build
> parameters: VTI Count: 512 Event Count: 32
> >       > [    2.420856] default preset
> >       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device
> registered
> >       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree
> Probing
> >       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device
> registered
> >       > [    2.441976] vmcu driver init
> >       > [    2.444922] VMCU: : (240:0) registered
> >       > [    2.444956] In K81 Updater init
> >       > [    2.449003] pktgen: Packet Generator for packet performance
> testing. Version: 2.75
> >       > [    2.468833] Initializing XFRM netlink socket
> >       > [    2.468902] NET: Registered PF_PACKET protocol family
> >       > [    2.472729] Bridge firewalling registered
> >       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
> >       > [    2.481341] registered taskstats version 1
> >       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
> >       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
> >       > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
> >       > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
> >       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
> >       > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> >       > [    2.952393] Creating 2 MTD partitions on "spi0.0":
> >       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> >       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> >       > [    2.968694] macb ff0b0000.ethernet: Not enabling partial
> store and forward
> >       > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev
> 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
> >       > [    2.984472] macb ff0c0000.ethernet: Not enabling partial
> store and forward
> >       > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev
> 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
> >       > [    3.001043] viper_enet viper_enet: Viper power GPIOs
> initialised
> >       > [    3.007313] viper_enet viper_enet vnet0 (uninitialized):
> Validate interface QSGMII
> >       > [    3.014914] viper_enet viper_enet vnet1 (uninitialized):
> Validate interface QSGMII
> >       > [    3.022138] viper_enet viper_enet vnet1 (uninitialized):
> Validate interface type 18
> >       > [    3.030274] viper_enet viper_enet vnet2 (uninitialized):
> Validate interface QSGMII
> >       > [    3.037785] viper_enet viper_enet vnet3 (uninitialized):
> Validate interface QSGMII
> >       > [    3.045301] viper_enet viper_enet: Viper enet registered
> >       > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed
> Xilinx APM
> >       > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed
> Xilinx APM
> >       > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed
> Xilinx APM
> >       > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed
> Xilinx APM
> >       > [    3.097729] si70xx: probe of 2-0040 failed with error -5
> >       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> >       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> >       > [    3.112457] viper-tamper viper-tamper: Device registered
> >       > [    3.117593] active_bank active_bank: boot bank: 1
> >       > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> >       > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
> >       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0
> Info: 1.512.15.0 KeyLen: 32
> >       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler
> registered
> >       > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> >       > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> >       > [    3.158582] lpc55_user lpc55_user: The major number for your
> device is 236
> >       > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> >       > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad
> result: 1
> >       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> >       > [    3.202932] mmc0: SDHCI controller on ff160000.mmc
> [ff160000.mmc] using ADMA 64-bit
> >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> >       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
> >       > [    3.284438] mmc0: new HS200 MMC card at address 0001
> >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> >       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> >       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev
> (244:0)
> >       > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad
> result: 1
> >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the
> hardware clock
> >       > [    3.591252] cdns-i2c ff020000.i2c: recovery information
> complete
> >       > [    3.597085] at24 0-0050: supply vcc not found, using dummy
> regulator
> >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> >       > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> >       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> >       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
> >       > [    3.639104] k81_bootloader 0-0010: probe
> >       > [    3.641628] VMCU: : (235:0) registered
> >       > [    3.641635] k81_bootloader 0-0010: probe completed
> >       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq
> 28
> >       > [    3.669154] cdns-i2c ff030000.i2c: recovery information
> complete
> >       > [    3.675412] lm75 1-0048: supply vs not found, using dummy
> regulator
> >       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> >       > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> >       > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> >       > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> >       > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> >       > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses
> for I2C switch pca9546
> >       > [    3.713049] at24 1-0054: supply vcc not found, using dummy
> regulator
> >       > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> >       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq
> 29
> >       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> >       > [    3.737549] sfp_register_socket: got sfp_bus
> >       > [    3.740709] sfp_register_socket: register sfp_bus
> >       > [    3.745459] sfp_register_bus: ops ok!
> >       > [    3.749179] sfp_register_bus: Try to attach
> >       > [    3.753419] sfp_register_bus: Attach succeeded
> >       > [    3.757914] sfp_register_bus: upstream ops attach
> >       > [    3.762677] sfp_register_bus: Bus registered
> >       > [    3.766999] sfp_register_socket: register sfp_bus succeeded
> >       > [    3.775870] of_cfs_init
> >       > [    3.776000] of_cfs_init: OK
> >       > [    3.778211] clk: Not disabling unused clocks
> >       > [   11.278477] Freeing initrd memory: 206056K
> >       > [   11.279406] Freeing unused kernel memory: 1536K
> >       > [   11.314006] Checked W+X mappings: passed, no W+X pages found
> >       > [   11.314142] Run /init as init process
> >       > INIT: version 3.01 booting
> >       > fsck (busybox 1.35.0)
> >       > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> >       > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> >       > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> >       > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384
> blocks
> >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without
> journal. Opts: (null). Quota mode: disabled.
> >       > Starting random number generator daemon.
> >       > [   11.580662] random: crng init done
> >       > Starting udev
> >       > [   11.613159] udevd[142]: starting version 3.2.10
> >       > [   11.620385] udevd[143]: starting eudev-3.2.10
> >       > [   11.704481] macb ff0b0000.ethernet control_red: renamed from
> eth0
> >       > [   11.720264] macb ff0c0000.ethernet control_black: renamed
> from eth1
> >       > [   12.063396] ip_local_port_range: prefer different parity for
> start/end values.
> >       > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad
> result: 1
> >       > hwclock: RTC_RD_TIME: Invalid exchange
> >       > Mon Feb 27 08:40:53 UTC 2023
> >       > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad
> result
> >       > hwclock: RTC_SET_TIME: Invalid exchange
> >       > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad
> result: 1
> >       > Starting mcud
> >       > INIT: Entering runlevel: 5
> >       > Configuring network interfaces... done.
> >       > resetting network interface
> >       > [   12.718295] macb ff0b0000.ethernet control_red: PHY
> [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> >       > [   12.723919] macb ff0b0000.ethernet control_red: configuring
> for phy/gmii link mode
> >       > [   12.732151] pps pps0: new PPS source ptp0
> >       > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock
> registered.
> >       > [   12.745724] macb ff0c0000.ethernet control_black: PHY
> [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY]
> >       (irq=POLL)
> >       > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> >       > [   12.761804] pps pps1: new PPS source ptp1
> >       > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock
> registered.
> >       > Auto-negotiation: off
> >       > Auto-negotiation: off
> >       > [   16.828151] macb ff0b0000.ethernet control_red: unable to
> generate target frequency: 125000000 Hz
> >       > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up -
> 1Gbps/Full - flow control off
> >       > [   16.860552] macb ff0c0000.ethernet control_black: unable to
> generate target frequency: 125000000 Hz
> >       > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up
> - 1Gbps/Full - flow control off
> >       > Starting Failsafe Secure Shell server in port 2222: sshd
> >       > done.
> >       > Starting rpcbind daemon...done.
> >       >
> >       > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad
> result: 1
> >       > hwclock: RTC_RD_TIME: Invalid exchange
> >       > Starting State Manager Service
> >       > Start state-manager restarter...
> >       > (XEN) d0v1 Forwarding AES operation: 3254779951
> >       > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device
> fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744
> >       /dev/dm-0
> >       > scanned by udevd (385)
> >       > [   17.349933] BTRFS info (device dm-0): disk space caching is
> enabled
> >       > [   17.350670] BTRFS info (device dm-0): has skinny extents
> >       > [   17.364384] BTRFS info (device dm-0): enabling ssd
> optimizations
> >       > [   17.830462] BTRFS: device fsid
> 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6
> /dev/mapper/client_prov scanned by
> >       mkfs.btrfs
> >       > (526)
> >       > [   17.872699] BTRFS info (device dm-1): using free space tree
> >       > [   17.872771] BTRFS info (device dm-1): has skinny extents
> >       > [   17.878114] BTRFS info (device dm-1): flagging fs with big
> metadata feature
> >       > [   17.894289] BTRFS info (device dm-1): enabling ssd
> optimizations
> >       > [   17.895695] BTRFS info (device dm-1): checking UUID tree
> >       >
> >       > Setting domain 0 name, domid and JSON config...
> >       > Done setting up Dom0
> >       > Starting xenconsoled...
> >       > Starting QEMU as disk backend for dom0
> >       > Starting domain watchdog daemon: xenwatchdogd startup
> >       >
> >       > [   18.408647] BTRFS: device fsid
> 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6
> /dev/mapper/client_config scanned by
> >       mkfs.btrfs
> >       > (574)
> >       > [done]
> >       > [   18.465552] BTRFS info (device dm-2): using free space tree
> >       > [   18.465629] BTRFS info (device dm-2): has skinny extents
> >       > [   18.471002] BTRFS info (device dm-2): flagging fs with big
> metadata feature
> >       > Starting crond: [   18.482371] BTRFS info (device dm-2):
> enabling ssd optimizations
> >       > [   18.486659] BTRFS info (device dm-2): checking UUID tree
> >       > OK
> >       > starting rsyslogd ... Log partition ready after 0 poll loops
> >       > done
> >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is
> unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> >       > [   18.670637] BTRFS: device fsid
> 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
> >       >
> >       > Please insert USB token and enter your role in login prompt.
> >       >
> >       > login:
> >       >
> >       > Regards,
> >       > O.
> >       >
> >       >
> >       > On Mon, Apr 24, 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >       >       Hi Oleg,
> >       >
> >       >       Here is the issue from your logs:
> >       >
> >       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
> >       >
> >       >       SErrors are special signals that notify software of serious hardware
> >       >       errors.  Something is going very wrong; defective hardware is a
> >       >       possibility.  Another possibility is software accessing address
> >       >       ranges that it is not supposed to, which sometimes causes SErrors.
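As a quick way to see what the reported code encodes, here is a small sketch that splits the ESR value 0xbe000000 from the log into its fields per the ARMv8 ESR_ELx layout (the helper function itself is hypothetical, not part of any kernel API):

```python
# Hypothetical helper: split an arm64 ESR_ELx value into its SError fields
# (bit positions per the ARMv8 architecture's ESR_ELx encoding).
def decode_serror_esr(esr):
    return {
        "ec": (esr >> 26) & 0x3F,   # exception class; 0x2F means SError interrupt
        "il": (esr >> 25) & 0x1,    # instruction length bit (RES1 for SError)
        "ids": (esr >> 24) & 0x1,   # 1 = implementation-defined syndrome
        "dfsc": esr & 0x3F,         # fault status code, meaningful when ids == 0
    }

info = decode_serror_esr(0xBE000000)  # the code reported on CPU0 in the log
```

With ids == 0 and dfsc == 0 the syndrome is the architecture's "uncategorized" error, which is why the log line alone cannot pinpoint the faulting access.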
> >       >
> >       >       Cheers,
> >       >
> >       >       Stefano
> >       >
> >       >
> >       >
> >       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> >       >
> >       >       > Hello,
> >       >       >
> >       >       > Thanks guys.
> >       >       > I found out where the problem was.
> >       >       > Now dom0 booted more. But I have a new one.
> >       >       > This is a kernel panic during Dom0 loading.
> >       >       > Maybe someone is able to suggest something ?
> >       >       >
> >       >       > Regards,
> >       >       > O.
> >       >       >
> >       >       > [    3.771362] sfp_register_bus: upstream ops attach
> >       >       > [    3.776119] sfp_register_bus: Bus registered
> >       >       > [    3.780459] sfp_register_socket: register sfp_bus
> succeeded
> >       >       > [    3.789399] of_cfs_init
> >       >       > [    3.789499] of_cfs_init: OK
> >       >       > [    3.791685] clk: Not disabling unused clocks
> >       >       > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> >       >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not
> tainted 5.15.72-xilinx-v2022.1 #1
> >       >       > [   11.010393] Workqueue: events_unbound
> async_run_entry_fn
> >       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO
> -TCO -DIT -SSBS BTYPE=--)
> >       >       > [   11.010422] pc : simple_write_end+0xd0/0x130
> >       >       > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> >       >       > [   11.010438] sp : ffffffc00809b910
> >       >       > [   11.010441] x29: ffffffc00809b910 x28:
> 0000000000000000 x27: ffffffef69ba88c0
> >       >       > [   11.010451] x26: 0000000000003eec x25:
> ffffff807515db00 x24: 0000000000000000
> >       >       > [   11.010459] x23: ffffffc00809ba90 x22:
> 0000000002aac000 x21: ffffff807315a260
> >       >       > [   11.010472] x20: 0000000000001000 x19:
> fffffffe02000000 x18: 0000000000000000
> >       >       > [   11.010481] x17: 00000000ffffffff x16:
> 0000000000008000 x15: 0000000000000000
> >       >       > [   11.010490] x14: 0000000000000000 x13:
> 0000000000000000 x12: 0000000000000000
> >       >       > [   11.010498] x11: 0000000000000000 x10:
> 0000000000000000 x9 : 0000000000000000
> >       >       > [   11.010507] x8 : 0000000000000000 x7 :
> ffffffef693ba680 x6 : 000000002d89b700
> >       >       > [   11.010515] x5 : fffffffe02000000 x4 :
> ffffff807315a3c8 x3 : 0000000000001000
> >       >       > [   11.010524] x2 : 0000000002aab000 x1 :
> 0000000000000001 x0 : 0000000000000005
> >       >       > [   11.010534] Kernel panic - not syncing: Asynchronous
> SError Interrupt
> >       >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not
> tainted 5.15.72-xilinx-v2022.1 #1
> >       >       > [   11.010545] Hardware name: D14 Viper Board - White
> Unit (DT)
> >       >       > [   11.010548] Workqueue: events_unbound
> async_run_entry_fn
> >       >       > [   11.010556] Call trace:
> >       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
> >       >       > [   11.010567]  show_stack+0x18/0x2c
> >       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> >       >       > [   11.010583]  dump_stack+0x18/0x34
> >       >       > [   11.010588]  panic+0x14c/0x2f8
> >       >       > [   11.010597]  print_tainted+0x0/0xb0
> >       >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> >       >       > [   11.010614]  do_serror+0x28/0x60
> >       >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
> >       >       > [   11.010628]  el1h_64_error+0x78/0x7c
> >       >       > [   11.010633]  simple_write_end+0xd0/0x130
> >       >       > [   11.010639]  generic_perform_write+0x118/0x1e0
> >       >       > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> >       >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
> >       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
> >       >       > [   11.010665]  kernel_write+0x88/0x160
> >       >       > [   11.010673]  xwrite+0x44/0x94
> >       >       > [   11.010680]  do_copy+0xa8/0x104
> >       >       > [   11.010686]  write_buffer+0x38/0x58
> >       >       > [   11.010692]  flush_buffer+0x4c/0xbc
> >       >       > [   11.010698]  __gunzip+0x280/0x310
> >       >       > [   11.010704]  gunzip+0x1c/0x28
> >       >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> >       >       > [   11.010715]  do_populate_rootfs+0x80/0x164
> >       >       > [   11.010722]  async_run_entry_fn+0x48/0x164
> >       >       > [   11.010728]  process_one_work+0x1e4/0x3a0
> >       >       > [   11.010736]  worker_thread+0x7c/0x4c0
> >       >       > [   11.010743]  kthread+0x120/0x130
> >       >       > [   11.010750]  ret_from_fork+0x10/0x20
> >       >       > [   11.010757] SMP: stopping secondary CPUs
> >       >       > [   11.010784] Kernel Offset: 0x2f61200000 from
> 0xffffffc008000000
> >       >       > [   11.010788] PHYS_OFFSET: 0x0
> >       >       > [   11.010790] CPU features: 0x00000401,00000842
> >       >       > [   11.010795] Memory Limit: none
> >       >       > [   11.277509] ---[ end Kernel panic - not syncing:
> Asynchronous SError Interrupt ]---
> >       >       >
> >       >       > On Fri, Apr 21, 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
> >       >       >       Hi Oleg,
> >       >       >
> >       >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
> >       >       >       >
> >       >       >       >
> >       >       >       >
> >       >       >       > Hello Michal,
> >       >       >       >
> >       >       >       > I have not been able to enable earlyprintk in Xen so far.
> >       >       >       > I decided to choose another way.
> >       >       >       > This is a xen's command line that I found out
> completely.
> >       >       >       >
> >       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> >       >       >       Yes, adding a printk() in Xen was also a good idea.
> >       >       >
> >       >       >       >
> >       >       >       > So you are absolutely right about the command line.
> >       >       >       > Now I am going to find out why xen did not have
> the correct parameters from the device tree.
> >       >       >       Maybe you will find this document helpful:
> >       >       >
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
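For reference, that document describes passing Xen's arguments through the /chosen node of the host device tree. A minimal sketch (property names taken from that document; the values below are illustrative, copied from the command line quoted in this thread) could look like:

```dts
/* Sketch of a /chosen node carrying the Xen and dom0 command lines
 * (property names per docs/misc/arm/device-tree/booting.txt;
 * values are illustrative, taken from this thread). */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0";
    xen,dom0-bootargs = "console=hvc0 earlycon";
};
```

If Xen boots with a different command line than expected, comparing this node in the final DTB (e.g. with dtc -I dtb -O dts) against what the bootloader passes is a quick consistency check.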
> >       >       >
> >       >       >       ~Michal
> >       >       >
> >       >       >       >
> >       >       >       > Regards,
> >       >       >       > Oleg
> >       >       >       >
> >       >       >       > On Fri, Apr 21, 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
> >       >       >       >
> >       >       >       >
> >       >       >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
> >       >       >       >     >
> >       >       >       >     >
> >       >       >       >     >
> >       >       >       >     > Hello Michal,
> >       >       >       >     >
> >       >       >       >     > Yes, I use yocto.
> >       >       >       >     >
> >       >       >       >     > Yesterday all day long I tried to follow
> your suggestions.
> >       >       >       >     > I faced a problem.
> >       >       >       >     > Manually in the xen config build file I
> pasted the strings:
> >       >       >       >     In the .config file or in some Yocto file
> (listing additional Kconfig options) added to SRC_URI?
> >       >       >     You shouldn't really modify the .config file, but if you do,
> >       >       >     you should execute "make olddefconfig" afterwards.
> >       >       >       >
> >       >       >       >     >
> >       >       >       >     > CONFIG_EARLY_PRINTK
> >       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
> >       >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
> >       >       >     I hope you added =y to them.
> >       >       >       >
> >       >       >       >     Anyway, you have at least the following
> solutions:
> >       >       >     1) Run bitbake xen -c menuconfig to properly set early printk
> >       >       >       >     2) Find out how you enable other Kconfig
> options in your project (e.g. CONFIG_COLORING=y that is not
> >       enabled by
> >       >       default)
> >       >       >       >     3) Append the following to
> "xen/arch/arm/configs/arm64_defconfig":
> >       >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
> >       >       >       >
> >       >       >       >     ~Michal
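
[Michal's option 3 can be sketched as below. The defconfig path is the one he names; the pre-existing line and the scratch directory are only illustrative so the sketch is self-contained:]

```shell
# Append the option to the arm64 defconfig and count the enabled options.
# In a real tree this file is xen/arch/arm/configs/arm64_defconfig; a scratch
# copy is used here so the sketch can run stand-alone.
mkdir -p /tmp/xen-sketch/arch/arm/configs
defconfig=/tmp/xen-sketch/arch/arm/configs/arm64_defconfig
printf 'CONFIG_EARLY_PRINTK=y\n' > "$defconfig"           # illustrative existing content
printf 'CONFIG_EARLY_PRINTK_ZYNQMP=y\n' >> "$defconfig"   # Michal's suggested addition
grep -c '=y' "$defconfig"    # → 2
```

[After editing a defconfig or .config, re-running the Kconfig step (e.g. `make olddefconfig`) keeps dependent options consistent, as Michal notes above.]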
> >       >       >       >
> >       >       >       >     >
> >       >       >       >     > Host hangs in build time.
> >       >       >       >     > Maybe I did not set something in the
> config build file ?
> >       >       >       >     >
> >       >       >       >     > Regards,
> >       >       >       >     > Oleg
> >       >       >       >     >
> >       >       >       >     > Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >       >       >     >
> >       >       >       >     >     Thanks Michal,
> >       >       >       >     >
> >       >       >       >     >     You gave me an idea.
> >       >       >       >     >     I am going to try it today.
> >       >       >       >     >
> >       >       >       >     >     Regards,
> >       >       >       >     >     O.
> >       >       >       >     >
> >       >       >       >     >     Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >       >       >     >
> >       >       >       >     >         Thanks Stefano.
> >       >       >       >     >
> >       >       >       >     >         I am going to do it today.
> >       >       >       >     >
> >       >       >       >     >         Regards,
> >       >       >       >     >         O.
> >       >       >       >     >
> >       >       >       >     >         Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
> >       >       >       >     >
> >       >       >       >     >             On Wed, 19 Apr 2023, Oleg
> Nikitenko wrote:
> >       >       >       >     >             > Hi Michal,
> >       >       >       >     >             >
> >       >       >       >     >             > I corrected xen's command
> line.
> >       >       >       >     >             > Now it is
> >       >       >       >     >             > xen,xen-bootargs =
> "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2
> >       dom0_vcpus_pin
> >       >       >       bootscrub=0 vwfi=native sched=null
> >       >       >       >     >             > timer_slop=0 way_size=65536
> xen_colors=0-3 dom0_colors=4-7";
> >       >       >       >     >
> >       >       >       >     >             4 colors is way too many for
> xen, just do xen_colors=0-0. There is no
> >       >       >       >     >             advantage in using more than 1
> color for Xen.
> >       >       >       >     >
> >       >       >       >     >             4 colors is too few for dom0,
> if you are giving 1600M of memory to Dom0.
> >       >       >       >     >             Each color is 256M. For 1600M
> you should give at least 7 colors. Try:
> >       >       >       >     >
> >       >       >       >     >             xen_colors=0-0 dom0_colors=1-8
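
[Stefano's sizing arithmetic above can be sketched as follows. The way size and available color count match the boot log quoted later in the thread; the 4 GiB DRAM size is an assumption for a zcu102-class board:]

```shell
# colors available = LLC way size / page size; memory per color = DRAM / colors
page_kb=4                                   # 4 KiB pages
way_kb=64                                   # "(XEN) Way size: 64kB"
colors=$(( way_kb / page_kb ))              # 16, matches "Max. number of colors available: 16"
dram_mb=4096                                # assumed 4 GiB of DRAM
per_color_mb=$(( dram_mb / colors ))        # 256 MiB per color
dom0_mb=1600
needed=$(( (dom0_mb + per_color_mb - 1) / per_color_mb ))   # ceiling division
echo "$colors $per_color_mb $needed"        # → 16 256 7
```

[Hence 1600M of dom0 memory needs at least 7 colors, and the suggested dom0_colors=1-8 gives it 8.]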
> >       >       >       >     >
> >       >       >       >     >
> >       >       >       >     >
> >       >       >       >     >             > Unfortunately the result was
> the same.
> >       >       >       >     >             >
> >       >       >       >     >             > (XEN)  - Dom0 mode: Relaxed
> >       >       >       >     >             > (XEN) P2M: 40-bit IPA with
> 40-bit PA and 8-bit VMID
> >       >       >       >     >             > (XEN) P2M: 3 levels with
> order-1 root, VTCR 0x0000000080023558
> >       >       >       >     >             > (XEN) Scheduling
> granularity: cpu, 1 CPU per sched-resource
> >       >       >       >     >             > (XEN) Coloring general
> information
> >       >       >       >     >             > (XEN) Way size: 64kB
> >       >       >       >     >             > (XEN) Max. number of colors
> available: 16
> >       >       >       >     >             > (XEN) Xen color(s): [ 0 ]
> >       >       >       >     >             > (XEN) alternatives: Patching
> with alt table 00000000002cc690 -> 00000000002ccc0c
> >       >       >       >     >             > (XEN) Color array allocation
> failed for dom0
> >       >       >       >     >             > (XEN)
> >       >       >       >     >             > (XEN)
> ****************************************
> >       >       >       >     >             > (XEN) Panic on CPU 0:
> >       >       >       >     >             > (XEN) Error creating domain 0
> >       >       >       >     >             > (XEN)
> ****************************************
> >       >       >       >     >             > (XEN)
> >       >       >       >     >             > (XEN) Reboot in five
> seconds...
> >       >       >       >     >             >
> >       >       >       >     >             > I am going to find out how
> command line arguments passed and parsed.
> >       >       >       >     >             >
> >       >       >       >     >             > Regards,
> >       >       >       >     >             > Oleg
> >       >       >       >     >             >
> >       >       >       >     >             > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >       >       >     >             >       Hi Michal,
> >       >       >       >     >             >
> >       >       >       >     >             > You put my nose into the
> problem. Thank you.
> >       >       >       >     >             > I am going to use your point.
> >       >       >       >     >             > Let's see what happens.
> >       >       >       >     >             >
> >       >       >       >     >             > Regards,
> >       >       >       >     >             > Oleg
> >       >       >       >     >             >
> >       >       >       >     >             >
> >       >       >       >     >             > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
> >       >       >       >     >             >       Hi Oleg,
> >       >       >       >     >             >
> >       >       >       >     >             >       On 19/04/2023 09:03,
> Oleg Nikitenko wrote:
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >
> >       >       >       >     >             >       > Hello Stefano,
> >       >       >       >     >             >       >
> >       >       >       >     >             >       > Thanks for the
> clarification.
> >       >       >       >     >             >       > My company uses
> yocto for image generation.
> >       >       >       >     >             >       > What kind of
> information do you need to consult me in this case ?
> >       >       >       >     >             >       >
> >       >       >       >     >             >       > Maybe modules
> sizes/addresses which were mentioned by @Julien Grall
> >       <julien@xen.org> ?
> >       >       >       >     >             >
> >       >       >       >     >             >       Sorry for jumping into
> discussion, but FWICS the Xen command line you provided
> >       seems to be
> >       >       not the
> >       >       >       one
> >       >       >       >     >             >       Xen booted with. The
> error you are observing most likely is due to dom0 colors
> >       >       configuration not
> >       >       >       being
> >       >       >       >     >             >       specified (i.e. lack
> of dom0_colors=3D<> parameter). Although in the command line you
> >       >       provided, this
> >       >       >       parameter
> >       >       >       >     >             >       is set, I strongly
> doubt that this is the actual command line in use.
> >       >       >       >     >             >
> >       >       >       >     >             >       You wrote:
> >       >       >       >     >             >       xen,xen-bootargs =
> "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2
> >       >       dom0_vcpus_pin
> >       >       >       bootscrub=0 vwfi=native
> >       >       >       >     >             >       sched=null
> timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> >       >       >       >     >             >
> >       >       >       >     >             >       but:
> >       >       >       >     >             >       1) way_szize has a typo
> >       >       >       >     >             >       2) you specified 4
> colors (0-3) for Xen, but the boot log says that Xen has only
> >       one:
> >       >       >       >     >             >       (XEN) Xen color(s): [
> 0 ]
> >       >       >       >     >             >
> >       >       >       >     >             >       This makes me believe
> that no colors configuration actually ends up in the command line
> >       that Xen
> >       >       booted
> >       >       >       with.
> >       >       >       >     >             >       Single color for Xen
> is a "default if not specified" and way size was probably
> >       calculated
> >       >       by asking
> >       >       >       HW.
> >       >       >       >     >             >
> >       >       >       >     >             >       So I would suggest to
> first cross-check the command line in use.
> >       >       >       >     >             >
> >       >       >       >     >             >       ~Michal
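
[One way to do Michal's suggested cross-check, sketched under the assumption that the serial boot output was captured to a file: Xen normally echoes the command line it actually booted with near the top of its boot messages. The log contents below are illustrative:]

```shell
# Save the captured serial output, then pull out the line Xen itself reports.
cat > /tmp/xen-boot.log <<'EOF'
(XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y
(XEN) Command line: console=dtuart dtuart=serial0 sched=null
EOF
grep '(XEN) Command line:' /tmp/xen-boot.log
```

[If the colors parameters are missing from that reported line, the device tree or boot script is not passing the arguments you think it is.]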
> >       >       >       >     >             >
> >       >       >       >     >             >
> >       >       >       >     >             >       >
> >       >       >       >     >             >       > Regards,
> >       >       >       >     >             >       > Oleg
> >       >       >       >     >             >       >
> >       >       >       >     >             >       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >     On Tue, 18 Apr
> 2023, Oleg Nikitenko wrote:
> >       >       >       >     >             >       >     > Hi Julien,
> >       >       >       >     >             >       >     >
> >       >       >       >     >             >       >     > >> This
> feature has not been merged in Xen upstream yet
> >       >       >       >     >             >       >     >
> >       >       >       >     >             >       >     > > would assume
> that upstream + the series on the ML [1] work
> >       >       >       >     >             >       >     >
> >       >       >       >     >             >       >     > Please clarify
> this point.
> >       >       >       >     >             >       >     > Because the
> two thoughts are controversial.
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >     Hi Oleg,
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >     As Julien wrote,
> there is nothing controversial. As you are aware,
> >       >       >       >     >             >       >     Xilinx maintains
> a separate Xen tree specific for Xilinx here:
> >       >       >       >     >             >       >
> https://github.com/xilinx/xen
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >     and the branch
> you are using (xlnx_rebase_4.16) comes from there.
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >     Instead, the
> upstream Xen tree lives here:
> >       >       >       >     >             >       >
> https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >     The Cache
> Coloring feature that you are trying to configure is present
> >       >       >       >     >             >       >     in
> xlnx_rebase_4.16, but not yet present upstream (there is an
> >       >       >       >     >             >       >     outstanding
> patch series to add cache coloring to Xen upstream but it
> >       >       >       >     >             >       >     hasn't been
> merged yet.)
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >     Anyway, if you
> are using xlnx_rebase_4.16 it doesn't matter too much for
> >       >       >       >     >             >       >     you as you
> already have Cache Coloring as a feature there.
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >     I take you are
> using ImageBuilder to generate the boot configuration? If
> >       >       >       >     >             >       >     so, please post
> the ImageBuilder config file that you are using.
> >       >       >       >     >             >       >
> >       >       >       >     >             >       >     But from the
> boot message, it looks like the colors configuration for
> >       >       >       >     >             >       >     Dom0 is
> incorrect.
> >       >       >       >     >             >       >
> >       >       >       >     >             >
> >       >       >       >     >             >
> >       >       >       >     >             >
> >       >       >       >     >
> >       >       >       >
> >       >       >
> >       >       >
> >       >       >
> >       >
> >       >
> >       >
> >
> >
> >

LTB4MDAwMDAwMDAyN2ZmZmZmZl08YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAw
MDAwMF0gwqAgbm9kZSDCoCAwOiBbbWVtIDB4MDAwMDAwMDAzMDAwMDAwMC0weDAwMDAwMDAwN2Zm
ZmZmZmZdPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIEluaXRtZW0g
c2V0dXAgbm9kZSAwIFttZW0gMHgwMDAwMDAwMDEwMDAwMDAwLTB4MDAwMDAwMDA3ZmZmZmZmZl08
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gT24gbm9kZSAwLCB6b25l
IERNQTogODE5MiBwYWdlcyBpbiB1bmF2YWlsYWJsZSByYW5nZXM8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gT24gbm9kZSAwLCB6b25lIERNQTogMTg0IHBhZ2VzIGlu
IHVuYXZhaWxhYmxlIHJhbmdlczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAw
MDAwXSBPbiBub2RlIDAsIHpvbmUgRE1BOiA3MzUyIHBhZ2VzIGluIHVuYXZhaWxhYmxlIHJhbmdl
czxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBjbWE6IFJlc2VydmVk
IDI1NiBNaUIgYXQgMHgwMDAwMDAwMDZlMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMC4wMDAwMDBdIHBzY2k6IHByb2JpbmcgZm9yIGNvbmR1aXQgbWV0aG9kIGZyb20gRFQu
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHBzY2k6IFBTQ0l2MS4x
IGRldGVjdGVkIGluIGZpcm13YXJlLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
MDAwMDAwXSBwc2NpOiBVc2luZyBzdGFuZGFyZCBQU0NJIHYwLjIgZnVuY3Rpb24gSURzPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHBzY2k6IFRydXN0ZWQgT1MgbWln
cmF0aW9uIG5vdCByZXF1aXJlZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAw
MDAwXSBwc2NpOiBTTUMgQ2FsbGluZyBDb252ZW50aW9uIHYxLjE8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcGVyY3B1OiBFbWJlZGRlZCAxNiBwYWdlcy9jcHUgczMy
NzkyIHIwIGQzMjc0NCB1NjU1MzY8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAw
MDAwMF0gRGV0ZWN0ZWQgVklQVCBJLWNhY2hlIG9uIENQVTA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjAwMDAwMF0gQ1BVIGZlYXR1cmVzOiBrZXJuZWwgcGFnZSB0YWJsZSBpc29s
YXRpb24gZm9yY2VkIE9OIGJ5IEtBU0xSPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MC4wMDAwMDBdIENQVSBmZWF0dXJlczogZGV0ZWN0ZWQ6IEtlcm5lbCBwYWdlIHRhYmxlIGlzb2xh
dGlvbiAoS1BUSSk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gQnVp
bHQgMSB6b25lbGlzdHMsIG1vYmlsaXR5IGdyb3VwaW5nIG9uLsKgIFRvdGFsIHBhZ2VzOiA0MDM4
NDU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gS2VybmVsIGNvbW1h
bmQgbGluZTogY29uc29sZT1odmMwIGVhcmx5Y29uPXhlbiBlYXJseXByaW50az14ZW4gY2xrX2ln
bm9yZV91bnVzZWQgZmlwcz0xIHJvb3Q9L2Rldi9yYW0wPGJyPg0KJmd0O8KgIMKgIMKgIMKgbWF4
Y3B1cz0yPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIFVua25vd24g
a2VybmVsIGNvbW1hbmQgbGluZSBwYXJhbWV0ZXJzICZxdW90O2Vhcmx5cHJpbnRrPXhlbiBmaXBz
PTEmcXVvdDssIHdpbGwgYmUgcGFzc2VkIHRvIHVzZXIgc3BhY2UuPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6
IDI2MjE0NCAob3JkZXI6IDksIDIwOTcxNTIgYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gSW5vZGUtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVz
OiAxMzEwNzIgKG9yZGVyOiA4LCAxMDQ4NTc2IGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIG1lbSBhdXRvLWluaXQ6IHN0YWNrOm9mZiwgaGVh
cCBhbGxvYzpvbiwgaGVhcCBmcmVlOm9uPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MC4wMDAwMDBdIG1lbSBhdXRvLWluaXQ6IGNsZWFyaW5nIHN5c3RlbSBtZW1vcnkgbWF5IHRha2Ug
c29tZSB0aW1lLi4uPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE1l
bW9yeTogMTEyMTkzNksvMTY0MTAyNEsgYXZhaWxhYmxlICg5NzI4SyBrZXJuZWwgY29kZSwgODM2
SyByd2RhdGEsIDIzOTZLIHJvZGF0YSwgMTUzNksgaW5pdCwgMjYySyBic3MsPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgMjU2OTQ0SyByZXNlcnZlZCw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IDI2MjE0
NEsgY21hLXJlc2VydmVkKTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAw
XSBTTFVCOiBIV2FsaWduPTY0LCBPcmRlcj0wLTMsIE1pbk9iamVjdHM9MCwgQ1BVcz0yLCBOb2Rl
cz0xPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHJjdTogSGllcmFy
Y2hpY2FsIFJDVSBpbXBsZW1lbnRhdGlvbi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAwLjAwMDAwMF0gcmN1OiBSQ1UgZXZlbnQgdHJhY2luZyBpcyBlbmFibGVkLjxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSByY3U6IFJDVSByZXN0cmljdGluZyBDUFVz
IGZyb20gTlJfQ1BVUz04IHRvIG5yX2NwdV9pZHM9Mi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAwLjAwMDAwMF0gcmN1OiBSQ1UgY2FsY3VsYXRlZCB2YWx1ZSBvZiBzY2hlZHVsZXIt
ZW5saXN0bWVudCBkZWxheSBpcyAyNSBqaWZmaWVzLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuMDAwMDAwXSByY3U6IEFkanVzdGluZyBnZW9tZXRyeSBmb3IgcmN1X2Zhbm91dF9s
ZWFmPTE2LCBucl9jcHVfaWRzPTI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAw
MDAwMF0gTlJfSVJRUzogNjQsIG5yX2lycXM6IDY0LCBwcmVhbGxvY2F0ZWQgaXJxczogMDxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBSb290IElSUSBoYW5kbGVyOiBn
aWNfaGFuZGxlX2lycTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBh
cmNoX3RpbWVyOiBjcDE1IHRpbWVyKHMpIHJ1bm5pbmcgYXQgMTAwLjAwTUh6ICh2aXJ0KS48YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gY2xvY2tzb3VyY2U6IGFyY2hf
c3lzX2NvdW50ZXI6IG1hc2s6IDB4ZmZmZmZmZmZmZmZmZmYgbWF4X2N5Y2xlczogMHgxNzEwMjRl
N2UwLCBtYXhfaWRsZV9uczogNDQwNzk1MjA1MzE1IG5zPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC4wMDAwMDBdIHNjaGVkX2Nsb2NrOiA1NiBiaXRzIGF0IDEwME1IeiwgcmVzb2x1
dGlvbiAxMG5zLCB3cmFwcyBldmVyeSA0Mzk4MDQ2NTExMTAwbnM8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAwLjAwMDI1OF0gQ29uc29sZTogY29sb3VyIGR1bW15IGRldmljZSA4MHgy
NTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzEwMjMxXSBwcmludGs6IGNvbnNv
bGUgW2h2YzBdIGVuYWJsZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjMxNDQw
M10gQ2FsaWJyYXRpbmcgZGVsYXkgbG9vcCAoc2tpcHBlZCksIHZhbHVlIGNhbGN1bGF0ZWQgdXNp
bmcgdGltZXIgZnJlcXVlbmN5Li4gMjAwLjAwIEJvZ29NSVBTIChscGo9NDAwMDAwKTxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzI0ODUxXSBwaWRfbWF4OiBkZWZhdWx0OiAzMjc2
OCBtaW5pbXVtOiAzMDE8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjMyOTcwNl0g
TFNNOiBTZWN1cml0eSBGcmFtZXdvcmsgaW5pdGlhbGl6aW5nPGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMC4zMzQyMDRdIFlhbWE6IGJlY29taW5nIG1pbmRmdWwuPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zMzc4NjVdIE1vdW50LWNhY2hlIGhhc2ggdGFibGUgZW50
cmllczogNDA5NiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4zNDUxODBdIE1vdW50cG9pbnQtY2FjaGUgaGFzaCB0YWJsZSBl
bnRyaWVzOiA0MDk2IChvcmRlcjogMywgMzI3NjggYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM1NDc0M10geGVuOmdyYW50X3RhYmxlOiBHcmFudCB0YWJs
ZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDAuMzU5MTMyXSBHcmFudCB0YWJsZSBpbml0aWFsaXplZDxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDAuMzYyNjY0XSB4ZW46ZXZlbnRzOiBVc2luZyBGSUZPLWJhc2VkIEFCSTxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzY2OTkzXSBYZW46IGluaXRpYWxpemluZyBj
cHUwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zNzA1MTVdIHJjdTogSGllcmFy
Y2hpY2FsIFNSQ1UgaW1wbGVtZW50YXRpb24uPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMC4zNzU5MzBdIHNtcDogQnJpbmdpbmcgdXAgc2Vjb25kYXJ5IENQVXMgLi4uPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBudWxsLmM6MzUzOiAxICZsdDstLSBkMHYxPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBkMHYxOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUg
MHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMC4zODI1NDldIERldGVjdGVkIFZJUFQgSS1jYWNoZSBvbiBDUFUxPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zODg3MTJdIFhlbjogaW5pdGlhbGl6aW5nIGNwdTE8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM4ODc0M10gQ1BVMTogQm9vdGVkIHNlY29u
ZGFyeSBwcm9jZXNzb3IgMHgwMDAwMDAwMDAxIFsweDQxMGZkMDM0XTxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuMzg4ODI5XSBzbXA6IEJyb3VnaHQgdXAgMSBub2RlLCAyIENQVXM8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQwNjk0MV0gU01QOiBUb3RhbCBvZiAy
IHByb2Nlc3NvcnMgYWN0aXZhdGVkLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
NDExNjk4XSBDUFUgZmVhdHVyZXM6IGRldGVjdGVkOiAzMi1iaXQgRUwwIFN1cHBvcnQ8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQxNjg4OF0gQ1BVIGZlYXR1cmVzOiBkZXRlY3Rl
ZDogQ1JDMzIgaW5zdHJ1Y3Rpb25zPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40
MjIxMjFdIENQVTogQWxsIENQVShzKSBzdGFydGVkIGF0IEVMMTxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDAuNDI2MjQ4XSBhbHRlcm5hdGl2ZXM6IHBhdGNoaW5nIGtlcm5lbCBjb2Rl
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40MzE0MjRdIGRldnRtcGZzOiBpbml0
aWFsaXplZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDQxNDU0XSBLQVNMUiBl
bmFibGVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40NDE2MDJdIGNsb2Nrc291
cmNlOiBqaWZmaWVzOiBtYXNrOiAweGZmZmZmZmZmIG1heF9jeWNsZXM6IDB4ZmZmZmZmZmYsIG1h
eF9pZGxlX25zOiA3NjQ1MDQxNzg1MTAwMDAwIG5zPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMC40NDgzMjFdIGZ1dGV4IGhhc2ggdGFibGUgZW50cmllczogNTEyIChvcmRlcjogMywg
MzI3NjggYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQ5
NjE4M10gTkVUOiBSZWdpc3RlcmVkIFBGX05FVExJTksvUEZfUk9VVEUgcHJvdG9jb2wgZmFtaWx5
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40OTgyNzddIERNQTogcHJlYWxsb2Nh
dGVkIDI1NiBLaUIgR0ZQX0tFUk5FTCBwb29sIGZvciBhdG9taWMgYWxsb2NhdGlvbnM8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjUwMzc3Ml0gRE1BOiBwcmVhbGxvY2F0ZWQgMjU2
IEtpQiBHRlBfS0VSTkVMfEdGUF9ETUEgcG9vbCBmb3IgYXRvbWljIGFsbG9jYXRpb25zPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41MTE2MTBdIERNQTogcHJlYWxsb2NhdGVkIDI1
NiBLaUIgR0ZQX0tFUk5FTHxHRlBfRE1BMzIgcG9vbCBmb3IgYXRvbWljIGFsbG9jYXRpb25zPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41MTk0NzhdIGF1ZGl0OiBpbml0aWFsaXpp
bmcgbmV0bGluayBzdWJzeXMgKGRpc2FibGVkKTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDAuNTI0OTg1XSBhdWRpdDogdHlwZT0yMDAwIGF1ZGl0KDAuMzM2OjEpOiBzdGF0ZT1pbml0
aWFsaXplZCBhdWRpdF9lbmFibGVkPTAgcmVzPTE8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAwLjUyOTE2OV0gdGhlcm1hbF9zeXM6IFJlZ2lzdGVyZWQgdGhlcm1hbCBnb3Zlcm5vciAm
IzM5O3N0ZXBfd2lzZSYjMzk7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41MzMw
MjNdIGh3LWJyZWFrcG9pbnQ6IGZvdW5kIDYgYnJlYWtwb2ludCBhbmQgNCB3YXRjaHBvaW50IHJl
Z2lzdGVycy48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjU0NTYwOF0gQVNJRCBh
bGxvY2F0b3IgaW5pdGlhbGlzZWQgd2l0aCAzMjc2OCBlbnRyaWVzPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC41NTEwMzBdIHhlbjpzd2lvdGxiX3hlbjogV2FybmluZzogb25seSBh
YmxlIHRvIGFsbG9jYXRlIDQgTUIgZm9yIHNvZnR3YXJlIElPIFRMQjxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuNTU5MzMyXSBzb2Z0d2FyZSBJTyBUTEI6IG1hcHBlZCBbbWVtIDB4
MDAwMDAwMDAxMTgwMDAwMC0weDAwMDAwMDAwMTFjMDAwMDBdICg0TUIpPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC41ODM1NjVdIEh1Z2VUTEIgcmVnaXN0ZXJlZCAxLjAwIEdpQiBw
YWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuNTg0NzIxXSBIdWdlVExCIHJlZ2lzdGVyZWQgMzIuMCBNaUIgcGFnZSBzaXplLCBw
cmUtYWxsb2NhdGVkIDAgcGFnZXM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjU5
MTQ3OF0gSHVnZVRMQiByZWdpc3RlcmVkIDIuMDAgTWlCIHBhZ2Ugc2l6ZSwgcHJlLWFsbG9jYXRl
ZCAwIHBhZ2VzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41OTgyMjVdIEh1Z2VU
TEIgcmVnaXN0ZXJlZCA2NC4wIEtpQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlczxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNjM2NTIwXSBEUkJHOiBDb250aW51aW5n
IHdpdGhvdXQgSml0dGVyIFJORzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNzM3
MTg3XSByYWlkNjogbmVvbng4IMKgIGdlbigpIMKgMjE0MyBNQi9zPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC44MDUyOTRdIHJhaWQ2OiBuZW9ueDggwqAgeG9yKCkgwqAxNTg5IE1C
L3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjg3MzQwNl0gcmFpZDY6IG5lb254
NCDCoCBnZW4oKSDCoDIxNzcgTUIvczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
OTQxNDk5XSByYWlkNjogbmVvbng0IMKgIHhvcigpIMKgMTU1NiBNQi9zPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMS4wMDk2MTJdIHJhaWQ2OiBuZW9ueDIgwqAgZ2VuKCkgwqAyMDcy
IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjA3NzcxNV0gcmFpZDY6IG5l
b254MiDCoCB4b3IoKSDCoDE0MzAgTUIvczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDEuMTQ1ODM0XSByYWlkNjogbmVvbngxIMKgIGdlbigpIMKgMTc2OSBNQi9zPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMS4yMTM5MzVdIHJhaWQ2OiBuZW9ueDEgwqAgeG9yKCkgwqAx
MjE0IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjI4MjA0Nl0gcmFpZDY6
IGludDY0eDggwqBnZW4oKSDCoDEzNjYgTUIvczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDEuMzUwMTMyXSByYWlkNjogaW50NjR4OCDCoHhvcigpIMKgIDc3MyBNQi9zPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS40MTgyNTldIHJhaWQ2OiBpbnQ2NHg0IMKgZ2VuKCkg
wqAxNjAyIE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjQ4NjM0OV0gcmFp
ZDY6IGludDY0eDQgwqB4b3IoKSDCoCA4NTEgTUIvczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDEuNTU0NDY0XSByYWlkNjogaW50NjR4MiDCoGdlbigpIMKgMTM5NiBNQi9zPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS42MjI1NjFdIHJhaWQ2OiBpbnQ2NHgyIMKgeG9y
KCkgwqAgNzQ0IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjY5MDY4N10g
cmFpZDY6IGludDY0eDEgwqBnZW4oKSDCoDEwMzMgTUIvczxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDEuNzU4NzcwXSByYWlkNjogaW50NjR4MSDCoHhvcigpIMKgIDUxNyBNQi9zPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43NTg4MDldIHJhaWQ2OiB1c2luZyBhbGdv
cml0aG0gbmVvbng0IGdlbigpIDIxNzcgTUIvczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDEuNzYyOTQxXSByYWlkNjogLi4uLiB4b3IoKSAxNTU2IE1CL3MsIHJtdyBlbmFibGVkPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43Njc5NTddIHJhaWQ2OiB1c2luZyBuZW9u
IHJlY292ZXJ5IGFsZ29yaXRobTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuNzcy
ODI0XSB4ZW46YmFsbG9vbjogSW5pdGlhbGlzaW5nIGJhbGxvb24gZHJpdmVyPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43NzgwMjFdIGlvbW11OiBEZWZhdWx0IGRvbWFpbiB0eXBl
OiBUcmFuc2xhdGVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43ODI1ODRdIGlv
bW11OiBETUEgZG9tYWluIFRMQiBpbnZhbGlkYXRpb24gcG9saWN5OiBzdHJpY3QgbW9kZTxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuNzg5MTQ5XSBTQ1NJIHN1YnN5c3RlbSBpbml0
aWFsaXplZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuNzkyODIwXSB1c2Jjb3Jl
OiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVzYmZzPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMS43OTgyNTRdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFj
ZSBkcml2ZXIgaHViPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS44MDM2MjZdIHVz
YmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGRldmljZSBkcml2ZXIgdXNiPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMS44MDg3NjFdIHBwc19jb3JlOiBMaW51eFBQUyBBUEkgdmVyLiAxIHJl
Z2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjgxMzcxNl0gcHBzX2Nv
cmU6IFNvZnR3YXJlIHZlci4gNS4zLjYgLSBDb3B5cmlnaHQgMjAwNS0yMDA3IFJvZG9sZm8gR2lv
bWV0dGkgJmx0OzxhIGhyZWY9Im1haWx0bzpnaW9tZXR0aUBsaW51eC5pdCIgdGFyZ2V0PSJfYmxh
bmsiPmdpb21ldHRpQGxpbnV4Lml0PC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAxLjgyMjkwM10gUFRQIGNsb2NrIHN1cHBvcnQgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDEuODI2ODkzXSBFREFDIE1DOiBWZXI6IDMuMC4wPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS44MzAzNzVdIHp5bnFtcC1pcGktbWJveCBtYWlsYm94
QGZmOTkwNDAwOiBSZWdpc3RlcmVkIFp5bnFNUCBJUEkgbWJveCB3aXRoIFRYL1JYIGNoYW5uZWxz
Ljxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuODM4ODYzXSB6eW5xbXAtaXBpLW1i
b3ggbWFpbGJveEBmZjk5MDYwMDogUmVnaXN0ZXJlZCBaeW5xTVAgSVBJIG1ib3ggd2l0aCBUWC9S
WCBjaGFubmVscy48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjg0NzM1Nl0genlu
cW1wLWlwaS1tYm94IG1haWxib3hAZmY5OTA4MDA6IFJlZ2lzdGVyZWQgWnlucU1QIElQSSBtYm94
IHdpdGggVFgvUlggY2hhbm5lbHMuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS44
NTU5MDddIEZQR0EgbWFuYWdlciBmcmFtZXdvcms8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAxLjg1OTk1Ml0gY2xvY2tzb3VyY2U6IFN3aXRjaGVkIHRvIGNsb2Nrc291cmNlIGFyY2hf
c3lzX2NvdW50ZXI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjg3MTcxMl0gTkVU
OiBSZWdpc3RlcmVkIFBGX0lORVQgcHJvdG9jb2wgZmFtaWx5PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMS44NzE4MzhdIElQIGlkZW50cyBoYXNoIHRhYmxlIGVudHJpZXM6IDMyNzY4
IChvcmRlcjogNiwgMjYyMTQ0IGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMS44NzkzOTJdIHRjcF9saXN0ZW5fcG9ydGFkZHJfaGFzaCBoYXNoIHRhYmxlIGVu
dHJpZXM6IDEwMjQgKG9yZGVyOiAyLCAxNjM4NCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDEuODg3MDc4XSBUYWJsZS1wZXJ0dXJiIGhhc2ggdGFibGUgZW50
cmllczogNjU1MzYgKG9yZGVyOiA2LCAyNjIxNDQgYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjg5NDg0Nl0gVENQIGVzdGFibGlzaGVkIGhhc2ggdGFibGUg
ZW50cmllczogMTYzODQgKG9yZGVyOiA1LCAxMzEwNzIgYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjkwMjkwMF0gVENQIGJpbmQgaGFzaCB0YWJsZSBlbnRy
aWVzOiAxNjM4NCAob3JkZXI6IDYsIDI2MjE0NCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDEuOTEwMzUwXSBUQ1A6IEhhc2ggdGFibGVzIGNvbmZpZ3VyZWQg
KGVzdGFibGlzaGVkIDE2Mzg0IGJpbmQgMTYzODQpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMS45MTY3NzhdIFVEUCBoYXNoIHRhYmxlIGVudHJpZXM6IDEwMjQgKG9yZGVyOiAzLCAz
Mjc2OCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTIz
NTA5XSBVRFAtTGl0ZSBoYXNoIHRhYmxlIGVudHJpZXM6IDEwMjQgKG9yZGVyOiAzLCAzMjc2OCBi
eXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTMwNzU5XSBO
RVQ6IFJlZ2lzdGVyZWQgUEZfVU5JWC9QRl9MT0NBTCBwcm90b2NvbCBmYW1pbHk8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjkzNjgzNF0gUlBDOiBSZWdpc3RlcmVkIG5hbWVkIFVO
SVggc29ja2V0IHRyYW5zcG9ydCBtb2R1bGUuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMS45NDIzNDJdIFJQQzogUmVnaXN0ZXJlZCB1ZHAgdHJhbnNwb3J0IG1vZHVsZS48YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjk0NzA4OF0gUlBDOiBSZWdpc3RlcmVkIHRjcCB0
cmFuc3BvcnQgbW9kdWxlLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTUxODQz
XSBSUEM6IFJlZ2lzdGVyZWQgdGNwIE5GU3Y0LjEgYmFja2NoYW5uZWwgdHJhbnNwb3J0IG1vZHVs
ZS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjk1ODMzNF0gUENJOiBDTFMgMCBi
eXRlcywgZGVmYXVsdCA2NDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTYyNzA5
XSBUcnlpbmcgdG8gdW5wYWNrIHJvb3RmcyBpbWFnZSBhcyBpbml0cmFtZnMuLi48YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjk3NzA5MF0gd29ya2luZ3NldDogdGltZXN0YW1wX2Jp
dHM9NjIgbWF4X29yZGVyPTE5IGJ1Y2tldF9vcmRlcj0wPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMS45ODI4NjNdIEluc3RhbGxpbmcga25mc2QgKGNvcHlyaWdodCAoQykgMTk5NiA8
YSBocmVmPSJtYWlsdG86b2tpckBtb25hZC5zd2IuZGUiIHRhcmdldD0iX2JsYW5rIj5va2lyQG1v
bmFkLnN3Yi5kZTwvYT4pLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDIxMDQ1
XSBORVQ6IFJlZ2lzdGVyZWQgUEZfQUxHIHByb3RvY29sIGZhbWlseTxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDIuMDIxMTIyXSB4b3I6IG1lYXN1cmluZyBzb2Z0d2FyZSBjaGVja3N1
bSBzcGVlZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDI5MzQ3XSDCoCDCoDhy
ZWdzIMKgIMKgIMKgIMKgIMKgIDogwqAyMzY2IE1CL3NlYzxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDIuMDMzMDgxXSDCoCDCoDMycmVncyDCoCDCoCDCoCDCoCDCoDogwqAyODAyIE1C
L3NlYzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDM4MjIzXSDCoCDCoGFybTY0
X25lb24gwqAgwqAgwqA6IMKgMjMyMCBNQi9zZWM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAyLjAzODM4NV0geG9yOiB1c2luZyBmdW5jdGlvbjogMzJyZWdzICgyODAyIE1CL3NlYyk8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA0MzYxNF0gQmxvY2sgbGF5ZXIgU0NT
SSBnZW5lcmljIChic2cpIGRyaXZlciB2ZXJzaW9uIDAuNCBsb2FkZWQgKG1ham9yIDI0Nyk8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA1MDk1OV0gaW8gc2NoZWR1bGVyIG1xLWRl
YWRsaW5lIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA1NTUy
MV0gaW8gc2NoZWR1bGVyIGt5YmVyIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAyLjA2ODIyN10geGVuOnhlbl9ldnRjaG46IEV2ZW50LWNoYW5uZWwgZGV2aWNlIGlu
c3RhbGxlZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDY5MjgxXSBTZXJpYWw6
IDgyNTAvMTY1NTAgZHJpdmVyLCA0IHBvcnRzLCBJUlEgc2hhcmluZyBkaXNhYmxlZDxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDc2MTkwXSBjYWNoZWluZm86IFVuYWJsZSB0byBk
ZXRlY3QgY2FjaGUgaGllcmFyY2h5IGZvciBDUFUgMDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDIuMDg1NTQ4XSBicmQ6IG1vZHVsZSBsb2FkZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAyLjA4OTI5MF0gbG9vcDogbW9kdWxlIGxvYWRlZDxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDIuMDg5MzQxXSBJbnZhbGlkIG1heF9xdWV1ZXMgKDQpLCB3aWxsIHVz
ZSBkZWZhdWx0IG1heDogMi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA5NDU2
NV0gdHVuOiBVbml2ZXJzYWwgVFVOL1RBUCBkZXZpY2UgZHJpdmVyLCAxLjY8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA5ODY1NV0geGVuX25ldGZyb250OiBJbml0aWFsaXNpbmcg
WGVuIHZpcnR1YWwgZXRoZXJuZXQgZHJpdmVyPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMi4xMDQxNTZdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgcnRs
ODE1MDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTA5ODEzXSB1c2Jjb3JlOiBy
ZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHI4MTUyPGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi4xMTUzNjddIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBk
cml2ZXIgYXNpeDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTIwNzk0XSB1c2Jj
b3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIGF4ODgxNzlfMTc4YTxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTI2OTM0XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5l
dyBpbnRlcmZhY2UgZHJpdmVyIGNkY19ldGhlcjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDIuMTMyODE2XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIGNk
Y19lZW08YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjEzODUyN10gdXNiY29yZTog
cmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBuZXQxMDgwPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMi4xNDQyNTZdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFj
ZSBkcml2ZXIgY2RjX3N1YnNldDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTUw
MjA1XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHphdXJ1czxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTU1ODM3XSB1c2Jjb3JlOiByZWdpc3RlcmVk
IG5ldyBpbnRlcmZhY2UgZHJpdmVyIGNkY19uY208YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAyLjE2MTU1MF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBy
ODE1M19lY208YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjE2ODI0MF0gdXNiY29y
ZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBjZGNfYWNtPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMi4xNzMxMDldIGNkY19hY206IFVTQiBBYnN0cmFjdCBDb250cm9s
IE1vZGVsIGRyaXZlciBmb3IgVVNCIG1vZGVtcyBhbmQgSVNETiBhZGFwdGVyczxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTgxMzU4XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBp
bnRlcmZhY2UgZHJpdmVyIHVhczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTg2
NTQ3XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVzYi1zdG9yYWdl
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4xOTI2NDNdIHVzYmNvcmU6IHJlZ2lz
dGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgZnRkaV9zaW88YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAyLjE5ODM4NF0gdXNic2VyaWFsOiBVU0IgU2VyaWFsIHN1cHBvcnQgcmVnaXN0
ZXJlZCBmb3IgRlRESSBVU0IgU2VyaWFsIERldmljZTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDIuMjA2MTE4XSB1ZGMtY29yZTogY291bGRuJiMzOTt0IGZpbmQgYW4gYXZhaWxhYmxl
IFVEQyAtIGFkZGVkIFtnX21hc3Nfc3RvcmFnZV0gdG8gbGlzdCBvZiBwZW5kaW5nIGRyaXZlcnM8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjIxNTMzMl0gaTJjX2RldjogaTJjIC9k
ZXYgZW50cmllcyBkcml2ZXI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjIyMDQ2
N10geGVuX3dkdCB4ZW5fd2R0OiBpbml0aWFsaXplZCAodGltZW91dD02MHMsIG5vd2F5b3V0PTAp
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4yMjU5MjNdIGRldmljZS1tYXBwZXI6
IHVldmVudDogdmVyc2lvbiAxLjAuMzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIu
MjMwNjY4XSBkZXZpY2UtbWFwcGVyOiBpb2N0bDogNC40NS4wLWlvY3RsICgyMDIxLTAzLTIyKSBp
bml0aWFsaXNlZDogPGEgaHJlZj0ibWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20iIHRhcmdldD0i
X2JsYW5rIj5kbS1kZXZlbEByZWRoYXQuY29tPC9hPjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDIuMjM5MzE1XSBFREFDIE1DMDogR2l2aW5nIG91dCBkZXZpY2UgdG8gbW9kdWxlIDEg
Y29udHJvbGxlciBzeW5wc19kZHJfY29udHJvbGxlcjogREVWIHN5bnBzX2VkYWMgKElOVEVSUlVQ
VCk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI0OTQwNV0gRURBQyBERVZJQ0Uw
OiBHaXZpbmcgb3V0IGRldmljZSB0byBtb2R1bGUgenlucW1wLW9jbS1lZGFjIGNvbnRyb2xsZXIg
enlucW1wX29jbTogREVWPGJyPg0KJmd0O8KgIMKgIMKgIMKgZmY5NjAwMDAubWVtb3J5LWNvbnRy
b2xsZXIgKElOVEVSUlVQVCk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI2MTcx
OV0gc2RoY2k6IFNlY3VyZSBEaWdpdGFsIEhvc3QgQ29udHJvbGxlciBJbnRlcmZhY2UgZHJpdmVy
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4yNjc0ODddIHNkaGNpOiBDb3B5cmln
aHQoYykgUGllcnJlIE9zc21hbjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjcx
ODkwXSBzZGhjaS1wbHRmbTogU0RIQ0kgcGxhdGZvcm0gYW5kIE9GIGRyaXZlciBoZWxwZXI8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI3ODE1N10gbGVkdHJpZy1jcHU6IHJlZ2lz
dGVyZWQgdG8gaW5kaWNhdGUgYWN0aXZpdHkgb24gQ1BVczxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDIuMjgzODE2XSB6eW5xbXBfZmlybXdhcmVfcHJvYmUgUGxhdGZvcm0gTWFuYWdl
bWVudCBBUEkgdjEuMTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjg5NTU0XSB6
eW5xbXBfZmlybXdhcmVfcHJvYmUgVHJ1c3R6b25lIHZlcnNpb24gdjEuMDxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDIuMzI3ODc1XSBzZWN1cmVmdyBzZWN1cmVmdzogc2VjdXJlZncg
cHJvYmVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zMjgzMjRdIGFsZzogTm8g
dGVzdCBmb3IgeGlsaW54LXp5bnFtcC1hZXMgKHp5bnFtcC1hZXMpPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMi4zMzI1NjNdIHp5bnFtcF9hZXMgZmlybXdhcmU6enlucW1wLWZpcm13
YXJlOnp5bnFtcC1hZXM6IEFFUyBTdWNjZXNzZnVsbHkgUmVnaXN0ZXJlZDxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDIuMzQxMTgzXSBhbGc6IE5vIHRlc3QgZm9yIHhpbGlueC16eW5x
bXAtcnNhICh6eW5xbXAtcnNhKTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzQ3
NjY3XSByZW1vdGVwcm9jIHJlbW90ZXByb2MwOiBmZjlhMDAwMC5yZjVzczpyNWZfMCBpcyBhdmFp
bGFibGU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjM1MzAwM10gcmVtb3RlcHJv
YyByZW1vdGVwcm9jMTogZmY5YTAwMDAucmY1c3M6cjVmXzEgaXMgYXZhaWxhYmxlPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zNjI2MDVdIGZwZ2FfbWFuYWdlciBmcGdhMDogWGls
aW54IFp5bnFNUCBGUEdBIE1hbmFnZXIgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDIuMzY2NTQwXSB2aXBlci14ZW4tcHJveHkgdmlwZXIteGVuLXByb3h5OiBWaXBl
ciBYZW4gUHJveHkgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIu
MzcyNTI1XSB2aXBlci12ZHBwIGE0MDAwMDAwLnZkcHA6IERldmljZSBUcmVlIFByb2Jpbmc8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjM3Nzc3OF0gdmlwZXItdmRwcCBhNDAwMDAw
MC52ZHBwOiBWRFBQIFZlcnNpb246IDEuMy45LjAgSW5mbzogMS41MTIuMTUuMCBLZXlMZW46IDMy
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zODY0MzJdIHZpcGVyLXZkcHAgYTQw
MDAwMDAudmRwcDogVW5hYmxlIHRvIHJlZ2lzdGVyIHRhbXBlciBoYW5kbGVyLiBSZXRyeWluZy4u
Ljxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzk0MDk0XSB2aXBlci12ZHBwLW5l
dCBhNTAwMDAwMC52ZHBwX25ldDogRGV2aWNlIFRyZWUgUHJvYmluZzxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDIuMzk5ODU0XSB2aXBlci12ZHBwLW5ldCBhNTAwMDAwMC52ZHBwX25l
dDogRGV2aWNlIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQw
NTkzMV0gdmlwZXItdmRwcC1zdGF0IGE4MDAwMDAwLnZkcHBfc3RhdDogRGV2aWNlIFRyZWUgUHJv
YmluZzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDEyMDM3XSB2aXBlci12ZHBw
LXN0YXQgYTgwMDAwMDAudmRwcF9zdGF0OiBCdWlsZCBwYXJhbWV0ZXJzOiBWVEkgQ291bnQ6IDUx
MiBFdmVudCBDb3VudDogMzI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQyMDg1
Nl0gZGVmYXVsdCBwcmVzZXQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQyMzc5
N10gdmlwZXItdmRwcC1zdGF0IGE4MDAwMDAwLnZkcHBfc3RhdDogRGV2aWNlIHJlZ2lzdGVyZWQ8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQzMDA1NF0gdmlwZXItdmRwcC1ybmcg
YWMwMDAwMDAudmRwcF9ybmc6IERldmljZSBUcmVlIFByb2Jpbmc8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAyLjQzNTk0OF0gdmlwZXItdmRwcC1ybmcgYWMwMDAwMDAudmRwcF9ybmc6
IERldmljZSByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi40NDE5
NzZdIHZtY3UgZHJpdmVyIGluaXQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ0
NDkyMl0gVk1DVTogOiAoMjQwOjApIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAyLjQ0NDk1Nl0gSW4gSzgxIFVwZGF0ZXIgaW5pdDxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDIuNDQ5MDAzXSBwa3RnZW46IFBhY2tldCBHZW5lcmF0b3IgZm9yIHBhY2tl
dCBwZXJmb3JtYW5jZSB0ZXN0aW5nLiBWZXJzaW9uOiAyLjc1PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi40Njg4MzNdIEluaXRpYWxpemluZyBYRlJNIG5ldGxpbmsgc29ja2V0PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi40Njg5MDJdIE5FVDogUmVnaXN0ZXJlZCBQ
Rl9QQUNLRVQgcHJvdG9jb2wgZmFtaWx5PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
Mi40NzI3MjldIEJyaWRnZSBmaXJld2FsbGluZyByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMi40NzY3ODVdIDgwMjFxOiA4MDIuMVEgVkxBTiBTdXBwb3J0IHYxLjg8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ4MTM0MV0gcmVnaXN0ZXJlZCB0YXNr
c3RhdHMgdmVyc2lvbiAxPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi40ODYzOTRd
IEJ0cmZzIGxvYWRlZCwgY3JjMzJjPWNyYzMyYy1nZW5lcmljLCB6b25lZD1ubywgZnN2ZXJpdHk9
bm88YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjUwMzE0NV0gZmYwMTAwMDAuc2Vy
aWFsOiB0dHlQUzEgYXQgTU1JTyAweGZmMDEwMDAwIChpcnEgPSAzNiwgYmFzZV9iYXVkID0gNjI1
MDAwMCkgaXMgYSB4dWFydHBzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MDcx
MDNdIG9mLWZwZ2EtcmVnaW9uIGZwZ2EtZnVsbDogRlBHQSBSZWdpb24gcHJvYmVkPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MTI5ODZdIHhpbGlueC16eW5xbXAtZG1hIGZkNTAw
MDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MjAyNjddIHhpbGlueC16eW5xbXAtZG1hIGZk
NTEwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MjgyMzldIHhpbGlueC16eW5xbXAtZG1h
IGZkNTIwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNz
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MzYxNTJdIHhpbGlueC16eW5xbXAt
ZG1hIGZkNTMwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNj
ZXNzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NDQxNTNdIHhpbGlueC16eW5x
bXAtZG1hIGZkNTQwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBz
dWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NTIxMjddIHhpbGlueC16
eW5xbXAtZG1hIGZkNTUwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9i
ZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NjAxNzhdIHhpbGlu
eC16eW5xbXAtZG1hIGZmYTgwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQ
cm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41Njc5ODddIHhp
bGlueC16eW5xbXAtZG1hIGZmYTkwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZl
r Probe success
> > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
> > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
> > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> > [    2.952393] Creating 2 MTD partitions on "spi0.0":
> > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
> > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
> > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
> > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
> > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
> > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
> > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
> > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
> > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
> > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
> > [    3.045301] viper_enet viper_enet: Viper enet registered
> > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
> > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
> > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
> > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
> > [    3.097729] si70xx: probe of 2-0040 failed with error -5
> > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> > [    3.112457] viper-tamper viper-tamper: Device registered
> > [    3.117593] active_bank active_bank: boot bank: 1
> > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
> > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
> > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
> > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> > [    3.215694] lpc55_l2 spi1.0: rx error: -110
> > [    3.284438] mmc0: new HS200 MMC card at address 0001
> > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
> > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
> > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> > [    3.633253] lpc55_l2 spi1.0: rx error: -110
> > [    3.639104] k81_bootloader 0-0010: probe
> > [    3.641628] VMCU: : (235:0) registered
> > [    3.641635] k81_bootloader 0-0010: probe completed
> > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> > [    3.737549] sfp_register_socket: got sfp_bus
> > [    3.740709] sfp_register_socket: register sfp_bus
> > [    3.745459] sfp_register_bus: ops ok!
> > [    3.749179] sfp_register_bus: Try to attach
> > [    3.753419] sfp_register_bus: Attach succeeded
> > [    3.757914] sfp_register_bus: upstream ops attach
> > [    3.762677] sfp_register_bus: Bus registered
> > [    3.766999] sfp_register_socket: register sfp_bus succeeded
> > [    3.775870] of_cfs_init
> > [    3.776000] of_cfs_init: OK
> > [    3.778211] clk: Not disabling unused clocks
> > [   11.278477] Freeing initrd memory: 206056K
> > [   11.279406] Freeing unused kernel memory: 1536K
> > [   11.314006] Checked W+X mappings: passed, no W+X pages found
> > [   11.314142] Run /init as init process
> > INIT: version 3.01 booting
> > fsck (busybox 1.35.0)
> > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> > Starting random number generator daemon.
> > [   11.580662] random: crng init done
> > Starting udev
> > [   11.613159] udevd[142]: starting version 3.2.10
> > [   11.620385] udevd[143]: starting eudev-3.2.10
> > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > hwclock: RTC_RD_TIME: Invalid exchange
> > Mon Feb 27 08:40:53 UTC 2023
> > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> > hwclock: RTC_SET_TIME: Invalid exchange
> > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > Starting mcud
> > INIT: Entering runlevel: 5
> > Configuring network interfaces... done.
> > resetting network interface
> > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> > [   12.732151] pps pps0: new PPS source ptp0
> > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> > [   12.761804] pps pps1: new PPS source ptp1
> > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> > Auto-negotiation: off
> > Auto-negotiation: off
> > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> > Starting Failsafe Secure Shell server in port 2222: sshd
> > done.
> > Starting rpcbind daemon...done.
> >
> > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > hwclock: RTC_RD_TIME: Invalid exchange
> > Starting State Manager Service
> > Start state-manager restarter...
> > (XEN) d0v1 Forwarding AES operation: 3254779951
> > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> > [   17.350670] BTRFS info (device dm-0): has skinny extents
> > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> > [   17.872699] BTRFS info (device dm-1): using free space tree
> > [   17.872771] BTRFS info (device dm-1): has skinny extents
> > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> > [   17.895695] BTRFS info (device dm-1): checking UUID tree
> >
> > Setting domain 0 name, domid and JSON config...
> > Done setting up Dom0
> > Starting xenconsoled...
> > Starting QEMU as disk backend for dom0
> > Starting domain watchdog daemon: xenwatchdogd startup
> >
> > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> > [done]
> > [   18.465552] BTRFS info (device dm-2): using free space tree
> > [   18.465629] BTRFS info (device dm-2): has skinny extents
> > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> > [   18.486659] BTRFS info (device dm-2): checking UUID tree
> > OK
> > starting rsyslogd ... Log partition ready after 0 poll loops
> > done
> > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
> >
> > Please insert USB token and enter your role in login prompt.
> >
> > login:
> >
> > Regards,
> > O.
> >
> >
> > On Mon, Apr 24, 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >     Hi Oleg,
> >
> >     Here is the issue from your logs:
> >
> >     SError Interrupt on CPU0, code 0xbe000000 -- SError
> >
> >     SErrors are special signals to notify software of serious hardware
> >     errors. Something is going very wrong. Defective hardware is a
> >     possibility. Another possibility is software accessing address ranges
> >     that it is not supposed to; sometimes that causes SErrors.
> >
> >     Cheers,
> >
> >     Stefano
> >
> >
> >
> >     On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> > >
> > > Hello,
> > >
> > > Thanks guys.
> > > I found out where the problem was.
> > > Now dom0 booted more. But I have a new one.
> > > This is a kernel panic during Dom0 loading.
> > > Maybe someone is able to suggest something?
> > >
> > > Regards,
> > > O.
> > >
> > > [    3.771362] sfp_register_bus: upstream ops attach
> > > [    3.776119] sfp_register_bus: Bus registered
> > > [    3.780459] sfp_register_socket: register sfp_bus succeeded
> > > [    3.789399] of_cfs_init
> > > [    3.789499] of_cfs_init: OK
> > > [    3.791685] clk: Not disabling unused clocks
> > > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> > > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010393] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > > [   11.010422] pc : simple_write_end+0xd0/0x130
> > > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> > > [   11.010438] sp : ffffffc00809b910
> > > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> > > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> > > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> > > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> > > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> > > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> > > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> > > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> > > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> > > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> > > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> > > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> > > [   11.010548] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010556] Call trace:
> > > [   11.010558]  dump_backtrace+0x0/0x1c4
> > > [   11.010567]  show_stack+0x18/0x2c
> > > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> > > [   11.010583]  dump_stack+0x18/0x34
> > > [   11.010588]  panic+0x14c/0x2f8
> > > [   11.010597]  print_tainted+0x0/0xb0
> > > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> > > [   11.010614]  do_serror+0x28/0x60
> > > [   11.010621]  el1h_64_error_handler+0x30/0x50
> > > [   11.010628]  el1h_64_error+0x78/0x7c
> > > [   11.010633]  simple_write_end+0xd0/0x130
> > > [   11.010639]  generic_perform_write+0x118/0x1e0
> > > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> > > [   11.010650]  generic_file_write_iter+0x78/0xd0
> > > [   11.010656]  __kernel_write+0xfc/0x2ac
> > > [   11.010665]  kernel_write+0x88/0x160
> > > [   11.010673]  xwrite+0x44/0x94
> > > [   11.010680]  do_copy+0xa8/0x104
> > > [   11.010686]  write_buffer+0x38/0x58
> > > [   11.010692]  flush_buffer+0x4c/0xbc
> > > [   11.010698]  __gunzip+0x280/0x310
> > > [   11.010704]  gunzip+0x1c/0x28
> > > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> > > [   11.010715]  do_populate_rootfs+0x80/0x164
> > > [   11.010722]  async_run_entry_fn+0x48/0x164
> > > [   11.010728]  process_one_work+0x1e4/0x3a0
> > > [   11.010736]  worker_thread+0x7c/0x4c0
> > > [   11.010743]  kthread+0x120/0x130
> > > [   11.010750]  ret_from_fork+0x10/0x20
> > > [   11.010757] SMP: stopping secondary CPUs
> > > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> > > [   11.010788] PHYS_OFFSET: 0x0
> > > [   11.010790] CPU features: 0x00000401,00000842
> > > [   11.010795] Memory Limit: none
> > > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> > >
> > > On Fri, Apr 21, 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
> > >     Hi Oleg,
> > >
> > >     On 21/04/2023 14:49, Oleg Nikitenko wrote:
> > > >
> > > >
> > > > Hello Michal,
> > > >
> > > > I was not able to enable earlyprintk in the xen for now.
> > > > I decided to choose another way.
> > > > This is a xen's command line that I found out completely.
> > > >
> > > > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> > >     Yes, adding a printk() in Xen was also a good idea.
> > >
> > > > So you are absolutely right about a command line.
> > > > Now I am going to find out why xen did not have the correct parameters from the device tree.
> > >     Maybe you will find this document helpful:
> > >     https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> > >
> > >     ~Michal
> > >
> > > > Regards,
> > > > Oleg
> > > >
> > > > On Fri, Apr 21, 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
> > > >
> > > >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
> > > > >
> > > > >
> > > > > Hello Michal,
> > > > >
> > > > > Yes, I use yocto.
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IFllc3Rl
cmRheSBhbGwgZGF5IGxvbmcgSSB0cmllZCB0byBmb2xsb3cgeW91ciBzdWdnZXN0aW9ucy48YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7IEkgZmFjZWQgYSBwcm9ibGVtLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgTWFudWFsbHkgaW4gdGhlIHhlbiBj
b25maWcgYnVpbGQgZmlsZSBJIHBhc3RlZCB0aGUgc3RyaW5nczo8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBJbiB0aGUgLmNvbmZp
ZyBmaWxlIG9yIGluIHNvbWUgWW9jdG8gZmlsZSAobGlzdGluZyBhZGRpdGlvbmFsIEtjb25maWcg
b3B0aW9ucykgYWRkZWQgdG8gU1JDX1VSST88YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBZb3Ugc2hvdWxkbiYjMzk7dCByZWFsbHkg
bW9kaWZ5IC5jb25maWcgZmlsZSBidXQgaWYgeW91IGRvLCB5b3Ugc2hvdWxkIGV4ZWN1dGUgJnF1
b3Q7bWFrZSBvbGRkZWZjb25maWcmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqBhZnRlcndhcmRz
Ljxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0OyBDT05GSUdfRUFSTFlfUFJJTlRLPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBDT05GSUdf
RUFSTFlfUFJJTlRLX1pZTlFNUDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgQ09ORklHX0VBUkxZX1VBUlRfQ0hPSUNFX0NB
REVOQ0U8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqBJIGhvcGUgeW91IGFkZGVkID15IHRvIHRoZW0uPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEFueXdheSwgeW91IGhh
dmUgYXQgbGVhc3QgdGhlIGZvbGxvd2luZyBzb2x1dGlvbnM6PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgMSkgUnVuIGJpdGJha2Ug
eGVuIC1jIG1lbnVjb25maWcgdG8gcHJvcGVybHkgc2V0IGVhcmx5IHByaW50azxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoDIpIEZp
bmQgb3V0IGhvdyB5b3UgZW5hYmxlIG90aGVyIEtjb25maWcgb3B0aW9ucyBpbiB5b3VyIHByb2pl
Y3QgKGUuZy4gQ09ORklHX0NPTE9SSU5HPXkgdGhhdCBpcyBub3Q8YnI+DQomZ3Q7wqAgwqAgwqAg
wqBlbmFibGVkIGJ5PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZGVmYXVsdCk8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAzKSBBcHBlbmQgdGhlIGZvbGxvd2luZyB0byAmcXVvdDt4ZW4vYXJjaC9hcm0vY29uZmln
cy9hcm02NF9kZWZjb25maWcmcXVvdDs6PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgQ09ORklHX0VBUkxZX1BSSU5US19aWU5RTVA9
eTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqB+TWljaGFsPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IEhvc3QgaGFuZ3MgaW4gYnVpbGQgdGltZS7C
oDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDsgTWF5YmUgSSBkaWQgbm90IHNldCBzb21ldGhpbmcgaW4gdGhlIGNvbmZpZyBi
dWlsZCBmaWxlID88YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgT2xlZzxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7INGH0YIsIDIwINCw0L/RgC4gMjAyM+KAr9CzLiDQsiAxMTo1Nywg
T2xlZyBOaWtpdGVua28gJmx0OzxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20i
IHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNo
aWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2Js
YW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyZndDsmZ3Q7Ojxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqBUaGFua3MgTWljaGFsLDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqBZ
b3UgZ2F2ZSBtZSBhbiBpZGVhLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoEkgYW0gZ29pbmcgdG8gdHJ5IGl0
IHRvZGF5Ljxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqBSZWdhcmRzLDxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoE8uPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoNGH0YIsIDIwINCw0L/RgC4gMjAyM+KA
r9CzLiDQsiAxMTo1NiwgT2xlZyBOaWtpdGVua28gJmx0OzxhIGhyZWY9Im1haWx0bzpvbGVzaGlp
d29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0
PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21h
aWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyZndDsmZ3Q7
Ojxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqBUaGFua3MgU3RlZmFuby48YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgSSBhbSBnb2luZyB0byBkbyBpdCB0b2RheS48YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgUmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqBPLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqDRgdGALCAxOSDQsNC/0YAu
IDIwMjPigK/Qsy4g0LIgMjM6MDUsIFN0ZWZhbm8gU3RhYmVsbGluaSAmbHQ7PGEgaHJlZj0ibWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBr
ZXJuZWwub3JnPC9hPjxicj4NCiZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBr
ZXJuZWwub3JnPC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwu
b3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsi
PnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZndDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqBPbiBXZWQsIDE5IEFwciAyMDIzLCBPbGVnIE5pa2l0ZW5r
byB3cm90ZTo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IEhpIE1pY2hhbCw8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0OyBJIGNvcnJlY3RlZCB4ZW4mIzM5O3MgY29tbWFuZCBsaW5lLjxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDsgTm93IGl0IGlzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0OyB4ZW4seGVuLWJvb3RhcmdzID0gJnF1b3Q7Y29uc29sZT1kdHVhcnQgZHR1YXJ0PXNl
cmlhbDAgZG9tMF9tZW09MTYwME0gZG9tMF9tYXhfdmNwdXM9Mjxicj4NCiZndDvCoCDCoCDCoCDC
oGRvbTBfdmNwdXNfcGluPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgYm9vdHNjcnViPTAgdndmaT1uYXRpdmUgc2NoZWQ9bnVsbDxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDsgdGltZXJfc2xvcD0wIHdheV9zaXplPTY1NTM2IHhlbl9jb2xv
cnM9MC0zIGRvbTBfY29sb3JzPTQtNyZxdW90Ozs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgNCBjb2xvcnMgaXMgd2F5IHRvbyBtYW55IGZvciB4ZW4sIGp1c3QgZG8geGVu
X2NvbG9ycz0wLTAuIFRoZXJlIGlzIG5vPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgYWR2
YW50YWdlIGluIHVzaW5nIG1vcmUgdGhhbiAxIGNvbG9yIGZvciBYZW4uPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoDQgY29sb3JzIGlzIHRvbyBmZXcgZm9yIGRvbTAsIGlm
IHlvdSBhcmUgZ2l2aW5nIDE2MDBNIG9mIG1lbW9yeSB0byBEb20wLjxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoEVhY2ggY29sb3IgaXMgMjU2TS4gRm9yIDE2MDBNIHlvdSBzaG91bGQgZ2l2
ZSBhdCBsZWFzdCA3IGNvbG9ycy4gVHJ5Ojxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqB4ZW5fY29sb3JzPTAtMCBkb20wX2NvbG9ycz0xLTg8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IFVuZm9ydHVuYXRl
bHkgdGhlIHJlc3VsdCB3YXMgdGhlIHNhbWUuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAtIERvbTAgbW9k
ZTogUmVsYXhlZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgUDJNOiA0
MC1iaXQgSVBBIHdpdGggNDAtYml0IFBBIGFuZCA4LWJpdCBWTUlEPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0OyAoWEVOKSBQMk06IDMgbGV2ZWxzIHdpdGggb3JkZXItMSByb290LCBW
VENSIDB4MDAwMDAwMDA4MDAyMzU1ODxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsg
KFhFTikgU2NoZWR1bGluZyBncmFudWxhcml0eTogY3B1LCAxIENQVSBwZXIgc2NoZWQtcmVzb3Vy
Y2U8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IChYRU4pIENvbG9yaW5nIGdlbmVy
YWwgaW5mb3JtYXRpb248YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IChYRU4pIFdh
eSBzaXplOiA2NGtCPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKSBNYXgu
IG51bWJlciBvZiBjb2xvcnMgYXZhaWxhYmxlOiAxNjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDsgKFhFTikgWGVuIGNvbG9yKHMpOiBbIDAgXTxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCDCoCDCoCZndDsgKFhFTikgYWx0ZXJuYXRpdmVzOiBQYXRjaGluZyB3aXRoIGFsdCB0YWJsZSAw
MDAwMDAwMDAwMmNjNjkwIC0mZ3Q7IDAwMDAwMDAwMDAyY2NjMGM8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIENvbG9yIGFycmF5IGFsbG9jYXRpb24gZmFpbGVkIGZvciBk
b20wPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKTxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgUGFuaWMg
b24gQ1BVIDA6PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKSBFcnJvciBj
cmVhdGluZyBkb21haW4gMDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikg
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKjxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDsgKFhFTik8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
IChYRU4pIFJlYm9vdCBpbiBmaXZlIHNlY29uZHMuLi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBJIGFtIGdvaW5nIHRv
IGZpbmQgb3V0IGhvdyBjb21tYW5kIGxpbmUgYXJndW1lbnRzIHBhc3NlZCBhbmQgcGFyc2VkLjxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBPbGVn
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCDCoCDCoCZndDsg0YHRgCwgMTkg0LDQv9GALiAyMDIz4oCv0LMuINCyIDExOjI1LCBPbGVnIE5p
a2l0ZW5rbyAmbHQ7PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0
PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT48YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0i
X2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWls
LmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29t
IiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsmZ3Q7Jmd0Ozo8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBIaSBNaWNoYWwsPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDsgWW91IHB1dCBteSBub3NlIGludG8gdGhlIHByb2JsZW0uIFRoYW5rIHlvdS48YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IEkgYW0gZ29pbmcgdG8gdXNlIHlvdXIgcG9p
bnQuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBMZXQmIzM5O3Mgc2VlIHdoYXQg
aGFwcGVucy48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0OyBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDsgT2xlZzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyDR
gdGALCAxOSDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMTA6MzcsIE1pY2hhbCBPcnplbCAmbHQ7PGEg
aHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFs
Lm9yemVsQGFtZC5jb208L2E+PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVm
PSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6
ZWxAYW1kLmNvbTwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5j
b20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNo
YWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7Jmd0OyZndDs6PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgSGkgT2xlZyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgT24gMTkv
MDQvMjAyMyAwOTowMywgT2xlZyBOaWtpdGVua28gd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IEhlbGxvIFN0ZWZhbm8sPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgVGhhbmtzIGZvciB0aGUgY2xhcmlmaWNhdGlvbi48YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE15IGNvbXBhbnkgdXNlcyB5b2N0
byBmb3IgaW1hZ2UgZ2VuZXJhdGlvbi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFdoYXQga2luZCBvZiBpbmZvcm1hdGlvbiBkbyB5b3UgbmVlZCB0byBj
b25zdWx0IG1lIGluIHRoaXMgY2FzZSA/PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgTWF5YmUgbW9kdWxlcyBzaXplcy9hZGRyZXNzZXMgd2hpY2ggd2VyZSBtZW50aW9u
ZWQgYnkgQEp1bGllbiBHcmFsbDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+
anVsaWVuQHhlbi5vcmc8L2E+PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRh
cmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0i
bWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFu
ayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0OyZndDsmZ3Q7ID88YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
U29ycnkgZm9yIGp1bXBpbmcgaW50byBkaXNjdXNzaW9uLCBidXQgRldJQ1MgdGhlIFhlbiBjb21t
YW5kIGxpbmUgeW91IHByb3ZpZGVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgc2VlbXMgdG8gYmU8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBub3QgdGhlPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgb25lPGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgWGVuIGJvb3RlZCB3aXRoLiBUaGUgZXJyb3IgeW91IGFy
ZSBvYnNlcnZpbmcgbW9zdCBsaWtlbHkgaXMgZHVlIHRvIGRvbTAgY29sb3JzPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgY29uZmlndXJhdGlvbiBub3Q8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBiZWluZzxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHNwZWNpZmllZCAoaS5lLiBsYWNrIG9mIGRvbTBf
Y29sb3JzPSZsdDsmZ3Q7IHBhcmFtZXRlcikuIEFsdGhvdWdoIGluIHRoZSBjb21tYW5kIGxpbmUg
eW91PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgcHJvdmlkZWQsIHRoaXM8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBwYXJhbWV0ZXI8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBpcyBzZXQsIEkgc3Ry
b25nbHkgZG91YnQgdGhhdCB0aGlzIGlzIHRoZSBhY3R1YWwgY29tbWFuZCBsaW5lIGluIHVzZS48
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgWW91IHdyb3RlOjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoHhlbix4ZW4tYm9vdGFyZ3MgPSAmcXVvdDtjb25zb2xlPWR0dWFy
dCBkdHVhcnQ9c2VyaWFsMCBkb20wX21lbT0xNjAwTSBkb20wX21heF92Y3B1cz0yPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZG9tMF92Y3B1c19waW48YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBib290c2NydWI9MCB2d2ZpPW5hdGl2
ZTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHNjaGVkPW51bGwg
dGltZXJfc2xvcD0wIHdheV9zeml6ZT02NTUzNiB4ZW5fY29sb3JzPTAtMyBkb20wX2NvbG9ycz00
LTcmcXVvdDs7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJ1dDo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAxKSB3YXlfc3ppemUgaGFzIGEgdHlwbzxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoDIpIHlvdSBzcGVjaWZpZWQgNCBjb2xv
cnMgKDAtMykgZm9yIFhlbiwgYnV0IHRoZSBib290IGxvZyBzYXlzIHRoYXQgWGVuIGhhcyBvbmx5
PGJyPg0KJmd0O8KgIMKgIMKgIMKgb25lOjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoChYRU4pIFhlbiBjb2xvcihzKTogWyAwIF08YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgVGhpcyBtYWtlcyBtZSBiZWxpZXZlIHRoYXQgbm8gY29sb3JzIGNvbmZpZ3VyYXRpb24gYWN0
dWFsbHkgZW5kIHVwIGluIGNvbW1hbmQgbGluZTxicj4NCiZndDvCoCDCoCDCoCDCoHRoYXQgWGVu
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgYm9vdGVkPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgd2l0aC48YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTaW5nbGUgY29sb3IgZm9yIFhlbiBpcyBhICZx
dW90O2RlZmF1bHQgaWYgbm90IHNwZWNpZmllZCZxdW90OyBhbmQgd2F5IHNpemUgd2FzIHByb2Jh
Ymx5PGJyPg0KJmd0O8KgIMKgIMKgIMKgY2FsY3VsYXRlZDxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoGJ5IGFza2luZzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoEhXLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTbyBJIHdvdWxkIHN1Z2dl
c3QgdG8gZmlyc3QgY3Jvc3MtY2hlY2sgdGhlIGNvbW1hbmQgbGluZSBpbiB1c2UuPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoH5NaWNoYWw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBPbGVnPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsg0LLRgiwgMTgg0LDQv9GALiAyMDIz4oCv0LMuINCyIDIwOjQ0LCBTdGVmYW5vIFN0YWJl
bGxpbmkgJmx0OzxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9
Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT48YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpz
c3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFu
ayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPjxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5z
c3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86
c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7Jmd0
OyZndDs6PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxi
> > On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> > > Hi Julien,
> > >
> > > > > This feature has not been merged in Xen upstream yet
> > > >
> > > > > would assume that upstream + the series on the ML [1] work
> > > >
> > > Please clarify this point.
> > > Because the two thoughts are controversial.
> >
> > Hi Oleg,
> >
> > As Julien wrote, there is nothing controversial. As you are aware,
> > Xilinx maintains a separate Xen tree specific for Xilinx here:
> > https://github.com/xilinx/xen
> >
> > and the branch you are using (xlnx_rebase_4.16) comes from there.
> >
> > Instead, the upstream Xen tree lives here:
> > https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> >
> > The Cache Coloring feature that you are trying to configure is present
> > in xlnx_rebase_4.16, but not yet present upstream (there is an
> > outstanding patch series to add cache coloring to Xen upstream but it
> > hasn't been merged yet.)
> >
> > Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
> > you as you already have Cache Coloring as a feature there.
> >
> > I take you are using ImageBuilder to generate the boot configuration? If
> > so, please post the ImageBuilder config file that you are using.
> >
> > But from the boot message, it looks like the colors configuration for
> > Dom0 is incorrect.


From xen-devel-bounces@lists.xenproject.org Fri May 05 08:34:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 08:34:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530262.825773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puqtq-0005j8-7g; Fri, 05 May 2023 08:34:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530262.825773; Fri, 05 May 2023 08:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puqtq-0005j1-4j; Fri, 05 May 2023 08:34:46 +0000
Received: by outflank-mailman (input) for mailman id 530262;
 Fri, 05 May 2023 08:34:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yBOj=A2=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1puqtp-0005it-0R
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 08:34:45 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20605.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::605])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id acff9be0-eb1f-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 10:34:41 +0200 (CEST)
Received: from SJ0PR05CA0199.namprd05.prod.outlook.com (2603:10b6:a03:330::24)
 by BL0PR12MB4852.namprd12.prod.outlook.com (2603:10b6:208:1ce::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May
 2023 08:34:36 +0000
Received: from DM6NAM11FT096.eop-nam11.prod.protection.outlook.com
 (2603:10b6:a03:330:cafe::d) by SJ0PR05CA0199.outlook.office365.com
 (2603:10b6:a03:330::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.12 via Frontend
 Transport; Fri, 5 May 2023 08:34:35 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT096.mail.protection.outlook.com (10.13.173.145) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.27 via Frontend Transport; Fri, 5 May 2023 08:34:35 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 5 May
 2023 03:34:34 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 5 May
 2023 03:34:33 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 5 May 2023 03:34:32 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acff9be0-eb1f-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e3ajFTP8lcG8WNPAlIZCufT1G/Rs/3jegaB9W3dmBNfMvBiexDOasuTk8KmShg8SJhos5K+QKn5OgfeQU31SjU/QIA42fr8oux4ochuGm3q/9JxFWC7WcnKVP2ygZ3+6IkDGCXTPW9+pQOLIyCC6qXKaswN75dqOZiTj0jbOw6U0XNf0mfHik+aIjJUjeeYwbEtrXDV0FNQNpMqeNnDPdtsDJcVnTORXqzZdz33+978KoA3c8DF54z9rR7T2xyIiAejijfmm2QvN/oLJ0vojPILD/XC2m4s3gOo/h3Wh/fVcNDnDpmsdzgeNZvP8oADhmnvG8E3DbPaiq5UpxMccfA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vY+LIJYnhkw13FpLb4lXUYI2NE+a9f0s+eO8yLAs6UM=;
 b=HGAHz6ZrMWrYf5A0LibBkqI6ETYCrZgUGLECjqNgC+Reo8jnRwEdti0u11Tl5hW/BZ7UcwkLmJ693ptLz+rokSGfTvbNiEmopv1m96BugcBb+hYLunZMGolC6M6y9hVJbJR1DRMF57a9wV20R5/utwl5Xmn4XRozz6/mRTaxQQX0jcQxONSL6RoRk/g01q6JfwqhBLywC9iQYs0wXlfCRAAhJkT4WIsvDB/rpYYs0zEhyyBU0OfbTI8vbiaPn/Qag93Uk29U0VS8pZkO+AutfE9xCJ7fiqYMgY1loh9/Frp5QxDibT5rtTh72/zKT/HPdJv3jWbvt4KqoHlth0cLMQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=gmail.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vY+LIJYnhkw13FpLb4lXUYI2NE+a9f0s+eO8yLAs6UM=;
 b=b6AOiEazproNSU78ttKyZf8k56LSokVsZpTSCcylcgJAUd3mJJVm6OC1uWNEDRI+DW2dlC7zzOxkzC/f+j7Y7IzAi+VfRHdB87ntcCXcwlV0YiOD0AxLre7L5vjPYxmjNt2SBTjJXz/gmVfpHOgFeJJpS5ZWTZiZwFkuK47Ld2E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com>
Date: Fri, 5 May 2023 10:34:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: xen cache colors in ARM
To: Oleg Nikitenko <oleshiiwood@gmail.com>, Stefano Stabellini
	<sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>,
	<Stewart.Hildebrand@amd.com>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
 <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com>
 <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com>
 <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
 <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT096:EE_|BL0PR12MB4852:EE_
X-MS-Office365-Filtering-Correlation-Id: 9feed9b6-56c9-4354-c1bd-08db4d438e86
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0yeCYLtgtSQig+JuW4sZCF7G6vEpK4p2Ob/gaQOMz8wQjaKXsrGhcG4BKYAcMzElkzihraehgCngmpObiOWAUNUEsjwdUBrrKqJQPafwU8RMxWvoQ1Y+jk3G8c5IVrCqIyVJrjsErlf1H3FqZqiUsD8gHIavXY2nS8FlKgW5DF60HOtUVGGSi3inLd18n8W8ey5cxG2PQ+xybZA4oMUmpcQhlJHMm98X56igUYZk6H38zurxCEpT4pwi+lWvUMHJ4G3TDUZZgMbPRGjrMeiBpm8/DgkLGsYG29fXrxYCkeuc6nRp3kKlEZQEJuisVufJnH6u2wKsH/jU2I6tkD6rCz0nlEJ9tktqbZtP/RXlKg89wOo/WowK4MCfkUWTv7H90J73Tot0A/HmcLiKCQn4T8h5FqttiHEsIJR2TJainBcBMhdI6SmbxO6LqnRGvYcy/PESzHQvnDRIbXzOm4DFfIwWXZTNY+OsI5BD46uUqHaYy2gcrz3MhvQHvf21ESDyz6kfw6WfS3GMubiy3/zsmpr6nhvycBYj8/reKi4lks9QM3cGA43rJuLPt4zq/G1v1e1gxo0l21k96u2mv/Zv/lWpb0OCid3CAUZ3wKRVQcCTQEajE34KFzjvDFl9sLt27ya2qSmKpRZgcNUFAonwzjjssEJJYUom5eHJdtwVOmEQ7rasxwyqCbUAzqEEPh247NdssAeT5KUjLF93drqKFSJ3iYUuM15noC+bZNqjjTOYBMCjZ1o5A6m2Fe0KbLOBpY+KcrEnOHbL9auKTW/qAyis3f+1mIsQt5FYSKGlGKKwwMyQpf4xM0EkkbqTFt6V
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(6019001)(4636009)(136003)(376002)(39860400002)(396003)(346002)(279900001)(451199021)(46966006)(36840700001)(40470700004)(31696002)(86362001)(36756003)(110136005)(16576012)(54906003)(316002)(4326008)(70206006)(70586007)(966005)(478600001)(41300700001)(40480700001)(82310400005)(8936002)(5660300002)(8676002)(44832011)(30864003)(2906002)(356005)(82740400003)(81166007)(186003)(2616005)(26005)(36860700001)(53546011)(47076005)(336012)(426003)(83380400001)(40460700003)(31686004)(36900700001)(43740500002)(579004)(559001)(139555002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 08:34:35.1619
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9feed9b6-56c9-4354-c1bd-08db4d438e86
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT096.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR12MB4852

Hi Oleg,

Replying, so that you do not need to wait for Stefano.

On 05/05/2023 10:28, Oleg Nikitenko wrote:
> 
> Hello Stefano,
> 
> I would like to try the Xen cache coloring feature from this repo: https://xenbits.xen.org/git-http/xen.git <https://xenbits.xen.org/git-http/xen.git>
> Could you tell me what branch I should use?
The cache coloring feature is not part of the upstream tree yet; it is still under review.
You can only find it integrated in the Xilinx Xen tree.
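For anyone following along, this is a minimal sketch of fetching that tree; the xlnx_rebase_4.16 branch name comes from this thread, so verify against the repository before relying on it:

```shell
# Clone the Xilinx-maintained Xen tree, which carries the cache
# coloring patches that are not upstream yet.
git clone https://github.com/xilinx/xen.git
cd xen
# Branch name as referenced in this thread; run `git branch -r` to confirm.
git checkout xlnx_rebase_4.16
```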

~Michal

> 
> Regards,
> Oleg
> 
> On Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
> 
>     I am familiar with the zcu102, but I don't know how you could possibly
>     generate an SError.
> 
>     I suggest trying ImageBuilder [1] to generate the boot
>     configuration as a test, because it is known to work well for the zcu102.
> 
>     [1] https://gitlab.com/xen-project/imagebuilder <https://gitlab.com/xen-project/imagebuilder>
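ImageBuilder is driven by a small shell-style config file passed to its uboot-script-gen script. A hypothetical minimal config for a Xen + Dom0 boot might look like the sketch below; every path, address, and command line here is a placeholder for illustration, not taken from this thread:

```shell
# Hypothetical ImageBuilder config, consumed as:
#   uboot-script-gen -t tftp -d . -c <this file>
# All values below are placeholders; adjust to your board and build.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

DEVICE_TREE="system.dtb"
XEN="xen"
XEN_CMD="console=dtuart dom0_mem=1600M"

DOM0_KERNEL="Image"
DOM0_RAMDISK="initrd.cpio"

NUM_DOMUS=0

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

uboot-script-gen then emits a boot.scr whose load addresses are computed so the binaries do not overlap, which is exactly the class of mistake a hand-written boot script can make.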
> 
> 
>     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
>     > Hello Stefano,
>     >
>     > Thanks for clarification.
>     > We use neither ImageBuilder nor a u-boot boot script.
>     > A model is zcu102 compatible.
>     >
>     > Regards,
>     > O.
>     >
>     > On Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org>> wrote:
>     >       This is interesting. Are you using Xilinx hardware by any chance? If so,
>     >       which board?
>     >
>     >       Are you using ImageBuilder to generate your boot.scr boot script? If so,
>     >       could you please post your ImageBuilder config file? If not, can you
>     >       post the source of your uboot boot script?
>     >
>     >       SErrors are supposed to be related to a hardware failure of some kind.
>     >       You are not supposed to be able to trigger an SError easily by
>     >       "mistake". I have not seen SErrors due to wrong cache coloring
>     >       configurations on any Xilinx board before.
>     >
>     >       The differences between Xen with and without cache coloring from a
>     >       hardware perspective are:
>     >
>     >       - With cache coloring, the SMMU is enabled and does address translations
>     >         even for dom0. Without cache coloring the SMMU could be disabled, and
>     >         if enabled, the SMMU doesn't do any address translations for Dom0. If
>     >         there is a hardware failure related to SMMU address translation it
>     >         could only trigger with cache coloring. This would be my normal
>     >         suggestion for you to explore, but the failure happens too early
>     >         before any DMA-capable device is programmed. So I don't think this can
>     >         be the issue.
>     >
>     >       - With cache coloring, the memory allocation is very different so you'll
>     >         end up using different DDR regions for Dom0. So if your DDR is
>     >         defective, you might only see a failure with cache coloring enabled
>     >         because you end up using different regions.
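As background for why the allocation differs so much: with page coloring, a domain may only receive pages whose color (the physical-address bits that index the last-level cache) falls in its assigned set. The usual arithmetic — my illustration, not from the thread — is colors = (cache_size / ways) / page_size; the example numbers below assume a 1 MiB 16-way LLC and 4 KiB pages, which should be checked against the actual SoC manual:

```shell
# Number of cache colors = way size / page size,
# where way size = cache size / associativity.
num_colors() {
    cache_size=$1; ways=$2; page_size=${3:-4096}
    echo $(( cache_size / ways / page_size ))
}

# Example: 1 MiB, 16-way set-associative LLC with 4 KiB pages.
num_colors $((1024 * 1024)) 16   # prints 16
```

With 16 colors, giving Dom0 only a subset of them forces its memory into a strided subset of DDR, which is why the banks and regions chosen with coloring enabled look nothing like the non-colored layout.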
>     >
>     >
>     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
>     >       > Hi Stefano,
>     >       >
>     >       > Thank you.
>     >       > If I build Xen without color support, this error does not occur.
>     >       > All the domains boot fine.
>     >       > Hence it cannot be a hardware issue.
>     >       > The panic occurred while unpacking the rootfs.
>     >       > I have attached the Xen/Dom0 boot log without coloring.
>     >       > The highlighted strings are printed exactly where the panic first occurred.
>     >       >
>     >       >  Xen 4.16.1-pre
>     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
>     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
>     >       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
>     >       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03,rev 0x4
>     >       > (XEN) 64-bit Execution:
>     >       > (XEN)   Processor Features: 0000000000002222 0000000000000000
>     >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
>     >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
>     >       > (XEN)   Debug Features: 0000000010305106 0000000000000000
>     >       > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
>     >       > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
>     >       > (XEN)   ISA Features:  0000000000011120 0000000000000000
>     >       > (XEN) 32-bit Execution:
>     >       > (XEN)   Processor Features: 0000000000000131:0000000000011011
>     >       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
>     >       > (XEN)     Extensions: GenericTimer Security
>     >       > (XEN)   Debug Features: 0000000003010066
>     >       > (XEN)   Auxiliary Features: 0000000000000000
>     >       > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
>     >       > (XEN)                          0000000001260000 0000000002102211
>     >       > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
>     >       > (XEN)                 0000000001112131 0000000000011142 0000000000011121
>     >       > (XEN) Using SMC Calling Convention v1.2
>     >       > (XEN) Using PSCI v1.1
>     >       > (XEN) SMP: Allowing 4 CPUs
>     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
>     >       > (XEN) GICv2 initialization:
>     >       > (XEN)         gic_dist_addr=00000000f9010000
>     >       > (XEN)         gic_cpu_addr=00000000f9020000
>     >       > (XEN)         gic_hyp_addr=00000000f9040000
>     >       > (XEN)         gic_vcpu_addr=00000000f9060000
>     >       > (XEN)         gic_maintenance_irq=25
>     >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
>     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
>     >       > (XEN) Using scheduler: null Scheduler (null)
>     >       > (XEN) Initializing null scheduler
>     >       > (XEN) WARNING: This is experimental software in development.
>     >       > (XEN) Use at your own risk.
>     >       > (XEN) Allocated console ring of 32 KiB.
>     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
>     >       > (XEN) Bringing up CPU1
>     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
>     >       > (XEN) CPU 1 booted.
>     >       > (XEN) Bringing up CPU2
>     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
>     >       > (XEN) CPU 2 booted.
>     >       > (XEN) Bringing up CPU3
>     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
>     >       > (XEN) Brought up 4 CPUs
>     >       > (XEN) CPU 3 booted.
>     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
>     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
>     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
>     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
>     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
>     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
>     >       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
>     >       > (XEN) I/O virtualisation enabled
>     >       > (XEN)  - Dom0 mode: Relaxed
>     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>     >       > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>     >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>     >       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
>     >       > (XEN) *** LOADING DOMAIN 0 ***
>     >       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
>     >       > (XEN) Loading ramdisk from boot module @ 0000000002000000
>     >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
>     >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
>     >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
>     >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
>     >       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
>     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
>     >       > (XEN) Allocating PPI 16 for event channel interrupt
>     >       > (XEN) Extended region 0: 0x81200000->0xa0000000
>     >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
>     >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
>     >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
>     >       > (XEN) Extended region 4: 0x100000000->0x600000000
>     >       > (XEN) Extended region 5: 0x880000000->0x8000000000
>     >       > (XEN) Extended region 6: 0x8001000000->0x10000000000
>     >       > (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
>     >       > (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
>     >       > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
>     >       > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>     >       > (XEN) Std. Loglevel: All
>     >       > (XEN) Guest Loglevel: All
>     >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
>     >       > (XEN) null.c:353: 0 <-- d0v0
>     >       > (XEN) Freed 356kB init memory.
>     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
>     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
>     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
>     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
>     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
>     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
>     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
>     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
>     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
>     >       > [    0.000000] Machine model: D14 Viper Board - White Unit
>     >       > [    0.000000] Xen 4.16 support found
>     >       > [    0.000000] Zone ranges:
>     >       > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
>     >       > [    0.000000]   DMA32    empty
>     >       > [    0.000000]   Normal   empty
>     >       > [    0.000000] Movable zone start for each node
>     >       > [    0.000000] Early memory node ranges
>     >       > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
>     >       > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
>     >       > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
>     >       > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
>     >       > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
>     >       > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
>     >       > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
>     >       > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
>     >       > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
>     >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
>     >       > [    0.000000] psci: probing for conduit method from DT.
>     >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
>     >       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
>     >       > [    0.000000] psci: Trusted OS migration not required
>     >       > [    0.000000] psci: SMC Calling Convention v1.1
>     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
>     >       > [    0.000000] Detected VIPT I-cache on CPU0
>     >       > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
>     >       > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
>     >       > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
>     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
>     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
>     >       > [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
>     >       > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
>     >       > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
>     >       > [    0.000000] mem auto-init: clearing system memory may take some time...
>     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
>     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
>     >       > [    0.000000] rcu: Hierarchical RCU implementation.
>     >       > [    0.000000] rcu: RCU event tracing is enabled.
>     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
>     >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
>     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
>     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
>     >       > [    0.000000] Root IRQ handler: gic_handle_irq
>     >       > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
>     >       > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
>     >       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
>     >       > [    0.000258] Console: colour dummy device 80x25
>     >       > [    0.310231] printk: console [hvc0] enabled
>     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
>     >       > [    0.324851] pid_max: default: 32768 minimum: 301
>     >       > [    0.329706] LSM: Security Framework initializing
>     >       > [    0.334204] Yama: becoming mindful.
>     >       > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>     >       > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>     >       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
>     >       > [    0.359132] Grant table initialized
>     >       > [    0.362664] xen:events: Using FIFO-based ABI
>     >       > [    0.366993] Xen: initializing cpu0
>     >       > [    0.370515] rcu: Hierarchical SRCU implementation.
>     >       > [    0.375930] smp: Bringing up secondary CPUs ...
>     >       > (XEN) null.c:353: 1 <-- d0v1
>     >       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>     >       > [    0.382549] Detected VIPT I-cache on CPU1
>     >       > [    0.388712] Xen: initializing cpu1
>     >       > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
>     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
>     >       > [    0.406941] SMP: Total of 2 processors activated.
>     >       > [    0.411698] CPU features: detected: 32-bit EL0 Support
>     >       > [    0.416888] CPU features: detected: CRC32 instructions
>     >       > [    0.422121] CPU: All CPU(s) started at EL1
>     >       > [    0.426248] alternatives: patching kernel code
>     >       > [    0.431424] devtmpfs: initialized
>     >       > [    0.441454] KASLR enabled
>     >       > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
>     >       > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
>     >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
>     >       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
>     >       > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
>     >       > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
>     >       > [    0.519478] audit: initializing netlink subsys (disabled)
>     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
>     >       > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
>     >       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
>     >       > [    0.545608] ASID allocator initialised with 32768 entries
>     >       > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
>     >       > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
>     >       > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>     >       > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
>     >       > [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
>     >       > [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
>     >       > [    0.636520] DRBG: Continuing without Jitter RNG
>     >       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
>     >       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
>     >       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
>     >       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
>     >       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
>     >       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
>     >       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
>     >       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
>     >       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
>     >       > [    1.350132] raid6: int64x8  xor()   773 MB/s
>     >       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
>     >       > [    1.486349] raid6: int64x4  xor()   851 MB/s
>     >       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
>     >       > [    1.622561] raid6: int64x2  xor()   744 MB/s
>     >       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
>     >       > [    1.758770] raid6: int64x1  xor()   517 MB/s
>     >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
>     >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
>     >       > [    1.767957] raid6: using neon recovery algorithm
>     >       > [    1.772824] xen:balloon: Initialising balloon driver
>     >       > [    1.778021] iommu: Default domain type: Translated
>     >       > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
>     >       > [    1.789149] SCSI subsystem initialized
>     >       > [    1.792820] usbcore: registered new interface driver usbfs
>     >       > [    1.798254] usbcore: registered new interface driver hub
>     >       > [    1.803626] usbcore: registered new device driver usb
>     >       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
>     >       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
>     >       > [    1.822903] PTP clock support registered
>     >       > [    1.826893] EDAC MC: Ver: 3.0.0
>     >       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
>     >       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
>     >       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
>     >       > [    1.855907] FPGA manager framework
>     >       > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
>     >       > [    1.871712] NET: Registered PF_INET protocol family
>     >       > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
>     >       > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
>     >       > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
>     >       > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
>     >       > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
>     >       > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
>     >       > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
>     >       > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
>     >       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
>     >       > [    1.936834] RPC: Registered named UNIX socket transport module.
>     >       > [    1.942342] RPC: Registered udp transport module.
>     >       > [    1.947088] RPC: Registered tcp transport module.
>     >       > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
>     >       > [    1.958334] PCI: CLS 0 bytes, default 64
>     >       > [    1.962709] Trying to unpack rootfs image as initramfs...
>     >       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
>     >       > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
>     >       > [    2.021045] NET: Registered PF_ALG protocol family
>     >       > [    2.021122] xor: measuring software checksum speed
>     >       > [    2.029347]    8regs           :  2366 MB/sec
>     >       > [    2.033081]    32regs          :  2802 MB/sec
>     >       > [    2.038223]    arm64_neon      :  2320 MB/sec
>     >       > [    2.038385] xor: using function: 32regs (2802 MB/sec)
>     >       > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
>     >       > [    2.050959] io scheduler mq-deadline registered
>     >       > [    2.055521] io scheduler kyber registered
>     >       > [    2.068227] xen:xen_evtchn: Event-channel device installed
>     >       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
>     >       > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
>     >       > [    2.085548] brd: module loaded
>     >       > [    2.089290] loop: module loaded
>     >       > [    2.089341] Invalid max_queues (4), will use default max: 2.
>     >       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
>     >       > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
>     >       > [    2.104156] usbcore: registered new interface driver rtl8150
>     >       > [    2.109813] usbcore: registered new interface driver r8152
>     >       > [    2.115367] usbcore: registered new interface driver asix
>     >       > [    2.120794] usbcore: registered new interface driver ax88179_178a
>     >       > [    2.126934] usbcore: registered new interface driver cdc_ether
>     >       > [    2.132816] usbcore: registered new interface driver cdc_eem
>     >       > [    2.138527] usbcore: registered new interface driver net1080
>     >       > [    2.144256] usbcore: registered new interface driver cdc_subset
>     >       > [    2.150205] usbcore: registered new interface driver zaurus
>     >       > [    2.155837] usbcore: registered new interface driver cdc_ncm
>     >       > [    2.161550] usbcore: registered new interface driver r8153_ecm
>     >       > [    2.168240] usbcore: registered new interface driver cdc_acm
>     >       > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
>     >       > [    2.181358] usbcore: registered new interface driver uas
>     >       > [    2.186547] usbcore: registered new interface driver usb-storage
>     >       > [    2.192643] usbcore: registered new interface driver ftdi_sio
>     >       > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
>     >       > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
>     >       > [    2.215332] i2c_dev: i2c /dev entries driver
>     >       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
>     >       > [    2.225923] device-mapper: uevent: version 1.0.3
>     >       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
>     >       > [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
>     >       > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
>     >       > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
>     >       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
>     >       > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
>     >       > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
>     >       > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
>     >       > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
>     >       > [    2.327875] securefw securefw: securefw probed
>     >       > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
>     >       > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
>     >       > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
>     >       > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
>     >       > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
>     >       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
>     >       > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
>     >       > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
>     >       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>     >       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
>     >       > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
>     >       > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
>     >       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
>     >       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
>     >       > [    2.420856] default preset
>     >       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
>     >       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
>     >       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
>     >       > [    2.441976] vmcu driver init
>     >       > [    2.444922] VMCU: : (240:0) registered
>     >       > [    2.444956] In K81 Updater init
>     >       > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
>     >       > [    2.468833] Initializing XFRM netlink socket
>     >       > [    2.468902] NET: Registered PF_PACKET protocol family
>     >       > [    2.472729] Bridge firewalling registered
>     >       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
>     >       > [    2.481341] registered taskstats version 1
>     >       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
>     >       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
>     >       > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
>     >       > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
>     >       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
>     >       > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
>     >       > [    2.952393] Creating 2 MTD partitions on "spi0.0":
>     >       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
>     >       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
>     >       > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
>     >       > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
>     >       > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
>     >       > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
>     >       > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
>     >       > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
>     >       > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
>     >       > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
>     >       > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
>     >       > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
>     >       > [    3.045301] viper_enet viper_enet: Viper enet registered
>     >       > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
>     >       > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
>     >       > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
>     >       > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
>     >       > [    3.097729] si70xx: probe of 2-0040 failed with error -5
>     >       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
>     >       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
>     >       > [    3.112457] viper-tamper viper-tamper: Device registered
>     >       > [    3.117593] active_bank active_bank: boot bank: 1
>     >       > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
>     >       > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
>     >       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>     >       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
>     >       > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
>     >       > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
>     >       > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
>     >       > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
>     >       > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
>     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
>     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
>     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
>     >       > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
>     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
>     >       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
>     >       > [    3.284438] mmc0: new HS200 MMC card at address 0001
>     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
>     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
>     >       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
>     >       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
>     >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
>     >       > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
>     >       > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
>     >       > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
>     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
>     >       > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
>     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
>     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
>     >       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
>     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
>     >       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
>     >       > [    3.639104] k81_bootloader 0-0010: probe
>     >       > [    3.641628] VMCU: : (235:0) registered
>     >       > [    3.641635] k81_bootloader 0-0010: probe completed
>     >       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
>     >       > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
>     >       > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
>     >       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
>     >       > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
>     >       > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
>     >       > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
>     >       > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
>     >       > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
>     >       > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
>     >       > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
>     >       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
>     >       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
>     >       > [    3.737549] sfp_register_socket: got sfp_bus
>     >       > [    3.740709] sfp_register_socket: register sfp_bus
>     >       > [    3.745459] sfp_register_bus: ops ok!
>     >       > [    3.749179] sfp_register_bus: Try to attach
>     >       > [    3.753419] sfp_register_bus: Attach succeeded
>     >       > [    3.757914] sfp_register_bus: upstream ops attach
>     >       > [    3.762677] sfp_register_bus: Bus registered
>     >       > [    3.766999] sfp_register_socket: register sfp_bus succeeded
>     >       > [    3.775870] of_cfs_init
>     >       > [    3.776000] of_cfs_init: OK
>     >       > [    3.778211] clk: Not disabling unused clocks
>     >       > [   11.278477] Freeing initrd memory: 206056K
>     >       > [   11.279406] Freeing unused kernel memory: 1536K
>     >       > [   11.314006] Checked W+X mappings: passed, no W+X pages found
>     >       > [   11.314142] Run /init as init process
>     >       > INIT: version 3.01 booting
>     >       > fsck (busybox 1.35.0)
>     >       > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
>     >       > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
>     >       > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
>     >       > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
>     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
>     >       > Starting random number generator daemon.
>     >       > [   11.580662] random: crng init done
>     >       > Starting udev
>     >       > [   11.613159] udevd[142]: starting version 3.2.10
>     >       > [   11.620385] udevd[143]: starting eudev-3.2.10
>     >       > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
>     >       > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
>     >       > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
>     >       > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       > hwclock: RTC_RD_TIME: Invalid exchange
>     >       > Mon Feb 27 08:40:53 UTC 2023
>     >       > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
>     >       > hwclock: RTC_SET_TIME: Invalid exchange
>     >       > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       > Starting mcud
>     >       > INIT: Entering runlevel: 5
>     >       > Configuring network interfaces... done.
>     >       > resetting network interface
>     >       > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>     >       > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
>     >       > [   12.732151] pps pps0: new PPS source ptp0
>     >       > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
>     >       > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>     >       > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
>     >       > [   12.761804] pps pps1: new PPS source ptp1
>     >       > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
>     >       > Auto-negotiation: off
>     >       > Auto-negotiation: off
>     >       > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
>     >       > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
>     >       > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
>     >       > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
>     >       > Starting Failsafe Secure Shell server in port 2222: sshd
>     >       > done.
>     >       > Starting rpcbind daemon...done.
>     >       >
>     >       > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       > hwclock: RTC_RD_TIME: Invalid exchange
>     >       > Starting State Manager Service
>     >       > Start state-manager restarter...
>     >       > (XEN) d0v1 Forwarding AES operation: 3254779951
>     >       > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
>     >       > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
>     >       > [   17.350670] BTRFS info (device dm-0): has skinny extents
>     >       > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
>     >       > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
>     >       > [   17.872699] BTRFS info (device dm-1): using free space tree
>     >       > [   17.872771] BTRFS info (device dm-1): has skinny extents
>     >       > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
>     >       > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
>     >       > [   17.895695] BTRFS info (device dm-1): checking UUID tree
>     >       >
>     >       > Setting domain 0 name, domid and JSON config...
>     >       > Done setting up Dom0
>     >       > Starting xenconsoled...
>     >       > Starting QEMU as disk backend for dom0
>     >       > Starting domain watchdog daemon: xenwatchdogd startup
>     >       >
>     >       > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
>     >       > [done]
>     >       > [   18.465552] BTRFS info (device dm-2): using free space tree
>     >       > [   18.465629] BTRFS info (device dm-2): has skinny extents
>     >       > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
>     >       > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
>     >       > [   18.486659] BTRFS info (device dm-2): checking UUID tree
>     >       > OK
>     >       > starting rsyslogd ... Log partition ready after 0 poll loops
>     >       > done
>     >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
>     >       > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>     >       >
>     >       > Please insert USB token and enter your role in login prompt.
>     >       >
>     >       > login:
>     >       >
>     >       > Regards,
>     >       > O.
>     >       >
>     >       >
>     >       > On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
>     >       >       Hi Oleg,
>     >       >
>     >       >       Here is the issue from your logs:
>     >       >
>     >       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
>     >       >
>     >       >       SErrors are special signals that notify software of serious hardware
>     >       >       errors.  Something is going very wrong. Defective hardware is one
>     >       >       possibility.  Another is software accessing address ranges that it
>     >       >       is not supposed to; that sometimes causes SErrors.
>     >       >
>     >       >       Cheers,
>     >       >
>     >       >       Stefano
>     >       >
>     >       >
>     >       >
>     >       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>     >       >
>     >       >       > Hello,
>     >       >       >
>     >       >       > Thanks guys.
>     >       >       > I found out where the problem was.
>     >       >       > Now dom0 boots further, but I have run into a new problem:
>     >       >       > a kernel panic during Dom0 loading.
>     >       >       > Perhaps someone can suggest something?
>     >       >       >
>     >       >       > Regards,
>     >       >       > O.
>     >       >       >
>     >       >       > [    3.771362] sfp_register_bus: upstream ops attach
>     >       >       > [    3.776119] sfp_register_bus: Bus registered
>     >       >       > [    3.780459] sfp_register_socket: register sfp_bus succeeded
>     >       >       > [    3.789399] of_cfs_init
>     >       >       > [    3.789499] of_cfs_init: OK
>     >       >       > [    3.791685] clk: Not disabling unused clocks
>     >       >       > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>     >       >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>     >       >       > [   11.010393] Workqueue: events_unbound async_run_entry_fn
>     >       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>     >       >       > [   11.010422] pc : simple_write_end+0xd0/0x130
>     >       >       > [   11.010431] lr : generic_perform_write+0x118/0x1e0
>     >       >       > [   11.010438] sp : ffffffc00809b910
>     >       >       > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>     >       >       > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
>     >       >       > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
>     >       >       > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
>     >       >       > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
>     >       >       > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
>     >       >       > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
>     >       >       > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
>     >       >       > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
>     >       >       > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
>     >       >       > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
>     >       >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>     >       >       > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
>     >       >       > [   11.010548] Workqueue: events_unbound async_run_entry_fn
>     >       >       > [   11.010556] Call trace:
>     >       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
>     >       >       > [   11.010567]  show_stack+0x18/0x2c
>     >       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
>     >       >       > [   11.010583]  dump_stack+0x18/0x34
>     >       >       > [   11.010588]  panic+0x14c/0x2f8
>     >       >       > [   11.010597]  print_tainted+0x0/0xb0
>     >       >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
>     >       >       > [   11.010614]  do_serror+0x28/0x60
>     >       >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
>     >       >       > [   11.010628]  el1h_64_error+0x78/0x7c
>     >       >       > [   11.010633]  simple_write_end+0xd0/0x130
>     >       >       > [   11.010639]  generic_perform_write+0x118/0x1e0
>     >       >       > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
>     >       >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
>     >       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
>     >       >       > [   11.010665]  kernel_write+0x88/0x160
>     >       >       > [   11.010673]  xwrite+0x44/0x94
>     >       >       > [   11.010680]  do_copy+0xa8/0x104
>     >       >       > [   11.010686]  write_buffer+0x38/0x58
>     >       >       > [   11.010692]  flush_buffer+0x4c/0xbc
>     >       >       > [   11.010698]  __gunzip+0x280/0x310
>     >       >       > [   11.010704]  gunzip+0x1c/0x28
>     >       >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>     >       >       > [   11.010715]  do_populate_rootfs+0x80/0x164
>     >       >       > [   11.010722]  async_run_entry_fn+0x48/0x164
>     >       >       > [   11.010728]  process_one_work+0x1e4/0x3a0
>     >       >       > [   11.010736]  worker_thread+0x7c/0x4c0
>     >       >       > [   11.010743]  kthread+0x120/0x130
>     >       >       > [   11.010750]  ret_from_fork+0x10/0x20
>     >       >       > [   11.010757] SMP: stopping secondary CPUs
>     >       >       > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
>     >       >       > [   11.010788] PHYS_OFFSET: 0x0
>     >       >       > [   11.010790] CPU features: 0x00000401,00000842
>     >       >       > [   11.010795] Memory Limit: none
>     >       >       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>     >       >       >
>     >       >       > On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
>     >       >       >       Hi Oleg,
>     >       >       >
>     >       >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
>     >       >       >       >
>     >       >       >       >
>     >       >       >       >
>     >       >       >       > Hello Michal,
>     >       >       >       >
>     >       >       >       > I was not able to enable earlyprintk in Xen for now,
>     >       >       >       > so I decided to take another route.
>     >       >       >       > This is the Xen command line that I printed out in full:
>     >       >       >       >
>     >       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0
>     >       vwfi=native
>     >       >       sched=null
>     >       >       >       timer_slop=0
>     >       >       >       Yes, adding a printk() in Xen was also a good idea.
>     >       >       >
>     >       >       >       >
>     >       >       >       > So you are absolutely right about the command line.
>     >       >       >       > Now I am going to find out why Xen did not get the correct parameters from the device tree.
>     >       >       >       Maybe you will find this document helpful:
>     >       >       >       https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>     >       >       >
>     >       >       >       ~Michal
>     >       >       >
>     >       >       >       >
>     >       >       >       > Regards,
>     >       >       >       > Oleg
>     >       >       >       >
>     >       >       >       > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
>     >       >       >       >
>     >       >       >       >
>     >       >       >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
>     >       >       >     >
>     >       >       >       >     >
>     >       >       >       >     >
>     >       >       >       >     > Hello Michal,
>     >       >       >       >     >
>     >       >       >       >     > Yes, I use yocto.
>     >       >       >       >     >
>     >       >       >       >     > Yesterday all day long I tried to follow your suggestions.
>     >       >       >       >     > I faced a problem.
>     >       >       >       >     > Manually in the xen config build file I pasted the strings:
>     >       >       >       >     In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>     >       >       >     You shouldn't really modify the .config file, but if you do, you should
>     >       >       >     execute "make olddefconfig" afterwards.
>     >       >       >       >
>     >       >       >       >     >
>     >       >       >       >     > CONFIG_EARLY_PRINTK
>     >       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
>     >       >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
>     >       >       >       >     I hope you added =y to them.
>     >       >       >       >
>     >       >       >       >     Anyway, you have at least the following solutions:
>     >       >       >       >     1) Run bitbake xen -c menuconfig to properly set early printk
>     >       >       >     2) Find out how you enable other Kconfig options in your project
>     >       >       >        (e.g. CONFIG_COLORING=y, which is not enabled by default)
>     >       >       >       >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>     >       >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
>     >       >       >       >
>     >       >       >       >     ~Michal
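
[Option (3) above can be sketched as a couple of shell commands. The defconfig path is the one quoted in the thread; the scratch copy below exists only so the snippet is self-contained rather than assuming a real Xen checkout.]

```shell
# Sketch of option (3): append the early-printk option to the arm64
# defconfig. A scratch copy stands in for a real Xen source tree here.
set -eu
work=$(mktemp -d)
mkdir -p "$work/xen/arch/arm/configs"
defconfig="$work/xen/arch/arm/configs/arm64_defconfig"
: > "$defconfig"                      # stand-in for the real defconfig

# The actual edit; note the "=y" suffix that was missing in the attempt above.
echo 'CONFIG_EARLY_PRINTK_ZYNQMP=y' >> "$defconfig"

grep -c '=y$' "$defconfig"            # prints 1
rm -rf "$work"
```

[In a real tree this would be followed by regenerating .config, e.g. "make olddefconfig" as Michal notes above, so dependent options get resolved.]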
>     >       >       >       >
>     >       >       >       >     >
>     >       >       >     > The host hangs at build time.
>     >       >       >     > Maybe I did not set something in the build config file?
>     >       >       >       >     >
>     >       >       >       >     > Regards,
>     >       >       >       >     > Oleg
>     >       >       >       >     >
>     >       >       >     > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>     >       >       >       >     >
>     >       >       >       >     >     Thanks Michal,
>     >       >       >       >     >
>     >       >       >       >     >     You gave me an idea.
>     >       >       >       >     >     I am going to try it today.
>     >       >       >       >     >
>     >       >       >       >     >     Regards,
>     >       >       >       >     >     O.
>     >       >       >       >     >
>     >       >       >     >     On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>     >       >       >       >     >
>     >       >       >       >     >         Thanks Stefano.
>     >       >       >       >     >
>     >       >       >       >     >         I am going to do it today.
>     >       >       >       >     >
>     >       >       >       >     >         Regards,
>     >       >       >       >     >         O.
>     >       >       >       >     >
>     >       >       >     >         On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
>     >       >       >       >     >
>     >       >       >       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>     >       >       >       >     >             > Hi Michal,
>     >       >       >       >     >             >
>     >       >       >       >     >             > I corrected xen's command line.
>     >       >       >       >     >             > Now it is
>     >       >       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>     >       >       >     >             > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>     >       >       >       >     >
>     >       >       >       >     >             4 colors is way too many for xen, just do xen_colors=0-0. There is no
>     >       >       >       >     >             advantage in using more than 1 color for Xen.
>     >       >       >       >     >
>     >       >       >       >     >             4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
>     >       >       >       >     >             Each color is 256M. For 1600M you should give at least 7 colors. Try:
>     >       >       >       >     >
>     >       >       >       >     >             xen_colors=0-0 dom0_colors=1-8
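
[The arithmetic behind these numbers can be sketched in a few lines of shell. The way size and the 16 available colors come from the boot log quoted below; the 4 GiB total DRAM is an assumption (it is what makes each color worth 256M), since the thread does not state the board's memory size.]

```shell
# Sketch of the cache-coloring arithmetic in this thread.
set -eu
page_size=4096
way_size=$((64 * 1024))                    # "(XEN) Way size: 64kB"
num_colors=$((way_size / page_size))       # 16, matches "Max. number of colors available: 16"

dram_mib=4096                              # assumed 4 GiB board (not stated in the thread)
mib_per_color=$((dram_mib / num_colors))   # 256, matches "Each color is 256M"

dom0_mem=1600                              # dom0_mem=1600M
min_colors=$(( (dom0_mem + mib_per_color - 1) / mib_per_color ))  # ceiling division

echo "$num_colors $mib_per_color $min_colors"   # prints: 16 256 7
```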
>     >       >       >       >     >
>     >       >       >       >     >
>     >       >       >       >     >
>     >       >       >       >     >             > Unfortunately the result was the same.
>     >       >       >       >     >             >
>     >       >       >       >     >             > (XEN)  - Dom0 mode: Relaxed
>     >       >       >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>     >       >       >       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>     >       >       >       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>     >       >       >       >     >             > (XEN) Coloring general information
>     >       >       >       >     >             > (XEN) Way size: 64kB
>     >       >       >       >     >             > (XEN) Max. number of colors available: 16
>     >       >       >       >     >             > (XEN) Xen color(s): [ 0 ]
>     >       >       >       >     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>     >       >       >       >     >             > (XEN) Color array allocation failed for dom0
>     >       >       >       >     >             > (XEN)
>     >       >       >       >     >             > (XEN) ****************************************
>     >       >       >       >     >             > (XEN) Panic on CPU 0:
>     >       >       >       >     >             > (XEN) Error creating domain 0
>     >       >       >       >     >             > (XEN) ****************************************
>     >       >       >       >     >             > (XEN)
>     >       >       >       >     >             > (XEN) Reboot in five seconds...
>     >       >       >       >     >             >
>     >       >       >     >             > I am going to find out how the command line arguments are passed and parsed.
>     >       >       >       >     >             >
>     >       >       >       >     >             > Regards,
>     >       >       >       >     >             > Oleg
>     >       >       >       >     >             >
>     >       >       >     >             > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>     >       >       >       >     >             >       Hi Michal,
>     >       >       >       >     >             >
>     >       >       >     >             > You pointed me right at the problem. Thank you.
>     >       >       >       >     >             > I am going to use your point.
>     >       >       >       >     >             > Let's see what happens.
>     >       >       >       >     >             >
>     >       >       >       >     >             > Regards,
>     >       >       >       >     >             > Oleg
>     >       >       >       >     >             >
>     >       >       >       >     >             >
>     >       >       >     >             > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
>     >       >       >       >     >             >       Hi Oleg,
>     >       >       >       >     >             >
>     >       >       >       >     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>     >       >       >     >             >       >
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       > Hello Stefano,
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       > Thanks for the clarification.
>     >       >       >       >     >             >       > My company uses yocto for image generation.
>     >       >       >       >     >             >       > What kind of information do you need to consult me in this case ?
>     >       >       >       >     >             >       >
>     >       >       >     >             >       > Maybe module sizes/addresses, which were mentioned by @Julien Grall <julien@xen.org>?
>     >       >       >       >     >             >
>     >       >       >     >             >       Sorry for jumping into the discussion, but FWICS the Xen command line you provided
>     >       >       >     >             >       seems not to be the one Xen booted with. The error you are observing is most likely
>     >       >       >     >             >       due to the dom0 colors configuration not being specified (i.e. lack of a
>     >       >       >     >             >       dom0_colors=<> parameter). Although this parameter is set in the command line you
>     >       >       >     >             >       provided, I strongly doubt that it is the actual command line in use.
>     >       >       >       >     >             >
>     >       >       >       >     >             >       You wrote:
>     >       >       >     >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
>     >       >       >     >             >       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>     >       >       >       >     >             >
>     >       >       >       >     >             >       but:
>     >       >       >       >     >             >       1) way_szize has a typo
>     >       >       >     >             >       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>     >       >       >       >     >             >       (XEN) Xen color(s): [ 0 ]
>     >       >       >       >     >             >
>     >       >       >     >             >       This makes me believe that no colors configuration actually ended up in the command
>     >       >       >     >             >       line that Xen booted with. A single color for Xen is the default if not specified,
>     >       >       >     >             >       and the way size was probably calculated by querying the HW.
>     >       >       >       >     >             >
>     >       >       >     >             >       So I would suggest first cross-checking the command line in use.
>     >       >       >       >     >             >
>     >       >       >       >     >             >       ~Michal
>     >       >       >       >     >             >
>     >       >       >       >     >             >
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       > Regards,
>     >       >       >       >     >             >       > Oleg
>     >       >       >       >     >             >       >
>     >       >       >     >             >       > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>     >       >       >       >     >             >       >     > Hi Julien,
>     >       >       >       >     >             >       >     >
>     >       >       >       >     >             >       >     > >> This feature has not been merged in Xen upstream yet
>     >       >       >       >     >             >       >     >
>     >       >       >       >     >             >       >     > > would assume that upstream + the series on the ML [1] work
>     >       >       >       >     >             >       >     >
>     >       >       >       >     >             >       >     > Please clarify this point.
>     >       >       >     >             >       >     > Because the two statements seem contradictory.
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >     Hi Oleg,
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >     As Julien wrote, there is nothing controversial. As you are aware,
>     >       >       >       >     >             >       >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>     >       >       >     >             >       >     https://github.com/xilinx/xen
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >     Instead, the upstream Xen tree lives here:
>     >       >       >     >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >     The Cache Coloring feature that you are trying to configure is present
>     >       >       >       >     >             >       >     in xlnx_rebase_4.16, but not yet present upstream (there is an
>     >       >       >       >     >             >       >     outstanding patch series to add cache coloring to Xen upstream but it
>     >       >       >       >     >             >       >     hasn't been merged yet.)
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>     >       >       >       >     >             >       >     you as you already have Cache Coloring as a feature there.
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >
>     >       >       >     >             >       >     I take it you are using ImageBuilder to generate the boot configuration? If
>     >       >       >       >     >             >       >     so, please post the ImageBuilder config file that you are using.
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >       >     But from the boot message, it looks like the colors configuration for
>     >       >       >       >     >             >       >     Dom0 is incorrect.
>     >       >       >       >     >             >       >
>     >       >       >       >     >             >
>     >       >       >       >     >             >
>     >       >       >       >     >             >
>     >       >       >       >     >
>     >       >       >       >
>     >       >       >
>     >       >       >
>     >       >       >
>     >       >
>     >       >
>     >       >
>     >
>     >
>     > 
> 


From xen-devel-bounces@lists.xenproject.org Fri May 05 08:44:23 2023
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Fri, 5 May 2023 11:48:31 +0300
Message-ID: <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com
Content-Type: multipart/alternative; boundary="000000000000175b6a05faee4a1e"

--000000000000175b6a05faee4a1e
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Michal,

Thanks.

Regards,
Oleg

Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:

> Hi Oleg,
>
> Replying, so that you do not need to wait for Stefano.
>
> On 05/05/2023 10:28, Oleg Nikitenko wrote:
> >
> >
> >
> > Hello Stefano,
> >
> > I would like to try the Xen cache coloring feature from this repo:
> > https://xenbits.xen.org/git-http/xen.git
> > Could you tell me what branch I should use?
> The cache coloring feature is not part of the upstream tree; it is still
> under review. You can only find it integrated in the Xilinx Xen tree.
>
> ~Michal
>
> >
> > Regards,
> > Oleg
> >
> > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org>:
> >
> >     I am familiar with the zcu102 but I don't know how you could possibly
> >     generate a SError.
> >
> >     I suggest trying ImageBuilder [1] to generate the boot
> >     configuration as a test because that is known to work well for
> zcu102.
> >
> >     [1] https://gitlab.com/xen-project/imagebuilder
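
For reference, a minimal ImageBuilder config for a zcu102-class board might look like the sketch below. All file names, addresses, and command lines here are assumptions for illustration; adjust them to your build:

```shell
# Hypothetical ImageBuilder config file (sourced by uboot-script-gen).
# Every value below is an assumption, not taken from the reporter's setup.
MEMORY_START="0x0"          # start of DDR visible to u-boot
MEMORY_END="0x80000000"     # assuming a 2 GiB board

DEVICE_TREE="system.dtb"    # device tree passed to Xen
XEN="xen"                   # Xen hypervisor binary
XEN_CMD="console=dtuart dom0_mem=1600M"

DOM0_KERNEL="Image"         # dom0 Linux kernel
DOM0_RAMDISK="rootfs.cpio.gz"

NUM_DOMUS=0                 # no domUs for a first boot test

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

Running `./scripts/uboot-script-gen -t tftp -d . -c <config>` from the ImageBuilder checkout then generates the boot.scr with consistent load addresses for Xen, dom0, and the ramdisk.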
> >
> >
> >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> >     > Hello Stefano,
> >     >
> >     > Thanks for clarification.
> >     > We use neither ImageBuilder nor a u-boot boot script.
> >     > A model is zcu102 compatible.
> >     >
> >     > Regards,
> >     > O.
> >     >
> >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
> >     >       This is interesting. Are you using Xilinx hardware by any
> chance? If so,
> >     >       which board?
> >     >
> >     >       Are you using ImageBuilder to generate your boot.scr boot
> script? If so,
> >     >       could you please post your ImageBuilder config file? If not,
> >     >       can you
> >     >       post the source of your uboot boot script?
> >     >
> >     >       SErrors are supposed to be related to a hardware failure of
> some kind.
> >     >       You are not supposed to be able to trigger an SError easily
> by
> >     >       "mistake". I have not seen SErrors due to wrong cache
> coloring
> >     >       configurations on any Xilinx board before.
> >     >
> >     >       The differences between Xen with and without cache coloring
> from a
> >     >       hardware perspective are:
> >     >
> >     >       - With cache coloring, the SMMU is enabled and does address
> translations
> >     >         even for dom0. Without cache coloring the SMMU could be
> disabled, and
> >     >         if enabled, the SMMU doesn't do any address translations
> for Dom0. If
> >     >         there is a hardware failure related to SMMU address
> translation it
> >     >         could only trigger with cache coloring. This would be my
> normal
> >     >         suggestion for you to explore, but the failure happens too
> >     >         early
> >     >         before any DMA-capable device is programmed. So I don't
> think this can
> >     >         be the issue.
> >     >
> >     >       - With cache coloring, the memory allocation is very
> different so you'll
> >     >         end up using different DDR regions for Dom0. So if your
> DDR is
> >     >         defective, you might only see a failure with cache
> coloring enabled
> >     >         because you end up using different regions.
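
To make the last point concrete: a colored allocator only hands a domain pages whose frame number maps to one of the domain's assigned colors, which is why dom0 lands in different DDR regions than in an uncolored boot. A small sketch of the index math (the 64 KiB way size and the color set are made-up values, not the reporter's configuration):

```python
PAGE_SIZE = 4096  # 4 KiB pages

def page_color(paddr: int, llc_way_size: int) -> int:
    """Cache color of the page containing paddr.

    The number of available colors is the LLC way size divided by the
    page size; consecutive page frames cycle through the colors.
    """
    num_colors = llc_way_size // PAGE_SIZE
    return (paddr // PAGE_SIZE) % num_colors

# Example: a 64 KiB way gives 16 colors, so a domain restricted to
# colors {0, 1} may only use 2 out of every 16 consecutive pages.
way = 64 * 1024
allowed = {0, 1}
usable = [a for a in range(0x10000000, 0x10010000, PAGE_SIZE)
          if page_color(a, way) in allowed]
print([hex(a) for a in usable])  # ['0x10000000', '0x10001000']
```

The pages a colored domain can use are therefore scattered across DDR in a strided pattern rather than contiguous, so a defective DDR row may only ever be touched in one of the two configurations.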
> >     >
> >     >
> >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> >     >       > Hi Stefano,
> >     >       >
> >     >       > Thank you.
> >     >       > If I build Xen without color support, this error does not occur.
> >     >       > All the domains boot well.
> >     >       > Hence it cannot be a hardware issue.
> >     >       > The panic arrived during unpacking of the rootfs.
> >     >       > Here I attached the Xen/Dom0 boot log without coloring.
> >     >       > The highlighted strings are printed exactly after the place
> >     >       > where the panic first arrived.
> >     >       >
> >     >       >  Xen 4.16.1-pre
> >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none))
> (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300
> git:321687b231-dirty
> >     >       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
> >     >       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
> >     >       > (XEN) 64-bit Execution:
> >     >       > (XEN)   Processor Features: 0000000000002222
> 0000000000000000
> >     >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32
> EL0:64+32
> >     >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
> >     >       > (XEN)   Debug Features: 0000000010305106 0000000000000000
> >     >       > (XEN)   Auxiliary Features: 0000000000000000
> 0000000000000000
> >     >       > (XEN)   Memory Model Features: 0000000000001122
> 0000000000000000
> >     >       > (XEN)   ISA Features:  0000000000011120 0000000000000000
> >     >       > (XEN) 32-bit Execution:
> >     >       > (XEN)   Processor Features:
> 0000000000000131:0000000000011011
> >     >       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2
> Jazelle
> >     >       > (XEN)     Extensions: GenericTimer Security
> >     >       > (XEN)   Debug Features: 0000000003010066
> >     >       > (XEN)   Auxiliary Features: 0000000000000000
> >     >       > (XEN)   Memory Model Features: 0000000010201105
> 0000000040000000
> >     >       > (XEN)                          0000000001260000
> 0000000002102211
> >     >       > (XEN)   ISA Features: 0000000002101110 0000000013112111
> 0000000021232042
> >     >       > (XEN)                 0000000001112131 0000000000011142
> 0000000000011121
> >     >       > (XEN) Using SMC Calling Convention v1.2
> >     >       > (XEN) Using PSCI v1.1
> >     >       > (XEN) SMP: Allowing 4 CPUs
> >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> >     >       > (XEN) GICv2 initialization:
> >     >       > (XEN)         gic_dist_addr=00000000f9010000
> >     >       > (XEN)         gic_cpu_addr=00000000f9020000
> >     >       > (XEN)         gic_hyp_addr=00000000f9040000
> >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
> >     >       > (XEN)         gic_maintenance_irq=25
> >     >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
> >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
> >     >       > (XEN) Using scheduler: null Scheduler (null)
> >     >       > (XEN) Initializing null scheduler
> >     >       > (XEN) WARNING: This is experimental software in
> development.
> >     >       > (XEN) Use at your own risk.
> >     >       > (XEN) Allocated console ring of 32 KiB.
> >     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
> >     >       > (XEN) Bringing up CPU1
> >     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
> >     >       > (XEN) CPU 1 booted.
> >     >       > (XEN) Bringing up CPU2
> >     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
> >     >       > (XEN) CPU 2 booted.
> >     >       > (XEN) Bringing up CPU3
> >     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
> >     >       > (XEN) Brought up 4 CPUs
> >     >       > (XEN) CPU 3 booted.
> >     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware
> configuration...
> >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
> >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
> >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
> >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
> >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA ->
> 48-bit PA
> >     >       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master
> devices
> >     >       > (XEN) I/O virtualisation enabled
> >     >       > (XEN)  - Dom0 mode: Relaxed
> >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR
> 0x0000000080023558
> >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> >     >       > (XEN) alternatives: Patching with alt table
> 00000000002cc5c8 -> 00000000002ccb2c
> >     >       > (XEN) *** LOADING DOMAIN 0 ***
> >     >       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
> >     >       > (XEN) Loading ramdisk from boot module @ 0000000002000000
> >     >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
> >     >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
> >     >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
> >     >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
> >     >       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
> >     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr
> 0x000000087bf94000
> >     >       > (XEN) Allocating PPI 16 for event channel interrupt
> >     >       > (XEN) Extended region 0: 0x81200000->0xa0000000
> >     >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
> >     >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
> >     >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
> >     >       > (XEN) Extended region 4: 0x100000000->0x600000000
> >     >       > (XEN) Extended region 5: 0x880000000->0x8000000000
> >     >       > (XEN) Extended region 6: 0x8001000000->0x10000000000
> >     >       > (XEN) Loading zImage from 0000000001000000 to
> 0000000010000000-0000000010e41008
> >     >       > (XEN) Loading d0 initrd from 0000000002000000 to
> 0x0000000013600000-0x000000001ff3a617
> >     >       > (XEN) Loading d0 DTB to
> 0x0000000013400000-0x000000001340cbdc
> >     >       > (XEN) Initial low memory virq threshold set at 0x4000
> pages.
> >     >       > (XEN) Std. Loglevel: All
> >     >       > (XEN) Guest Loglevel: All
> >     >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times
> to switch input)
> >     >       > (XEN) null.c:353: 0 <-- d0v0
> >     >       > (XEN) Freed 356kB init memory.
> >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
> >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
> to ICACTIVER4
> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
> to ICACTIVER8
> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
> to ICACTIVER12
> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
> to ICACTIVER16
> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
> to ICACTIVER20
> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
> to ICACTIVER0
> >     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000
> [0x410fd034]
> >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1
> (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU
> >     >       Binutils)
> >     >       > 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> >     >       > [    0.000000] Machine model: D14 Viper Board - White Unit
> >     >       > [    0.000000] Xen 4.16 support found
> >     >       > [    0.000000] Zone ranges:
> >     >       > [    0.000000]   DMA      [mem
> 0x0000000010000000-0x000000007fffffff]
> >     >       > [    0.000000]   DMA32    empty
> >     >       > [    0.000000]   Normal   empty
> >     >       > [    0.000000] Movable zone start for each node
> >     >       > [    0.000000] Early memory node ranges
> >     >       > [    0.000000]   node   0: [mem
> 0x0000000010000000-0x000000001fffffff]
> >     >       > [    0.000000]   node   0: [mem
> 0x0000000022000000-0x0000000022147fff]
> >     >       > [    0.000000]   node   0: [mem
> 0x0000000022200000-0x0000000022347fff]
> >     >       > [    0.000000]   node   0: [mem
> 0x0000000024000000-0x0000000027ffffff]
> >     >       > [    0.000000]   node   0: [mem
> 0x0000000030000000-0x000000007fffffff]
> >     >       > [    0.000000] Initmem setup node 0 [mem
> 0x0000000010000000-0x000000007fffffff]
> >     >       > [    0.000000] On node 0, zone DMA: 8192 pages in
> unavailable ranges
> >     >       > [    0.000000] On node 0, zone DMA: 184 pages in
> unavailable ranges
> >     >       > [    0.000000] On node 0, zone DMA: 7352 pages in
> unavailable ranges
> >     >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
> >     >       > [    0.000000] psci: probing for conduit method from DT.
> >     >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
> >     >       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
> >     >       > [    0.000000] psci: Trusted OS migration not required
> >     >       > [    0.000000] psci: SMC Calling Convention v1.1
> >     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0
> d32744 u65536
> >     >       > [    0.000000] Detected VIPT I-cache on CPU0
> >     >       > [    0.000000] CPU features: kernel page table isolation
> forced ON by KASLR
> >     >       > [    0.000000] CPU features: detected: Kernel page table
> isolation (KPTI)
> >     >       > [    0.000000] Built 1 zonelists, mobility grouping on.
> Total pages: 403845
> >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> >     >       > [    0.000000] Unknown kernel command line parameters
> "earlyprintk=xen fips=1", will be passed to user space.
> >     >       > [    0.000000] Dentry cache hash table entries: 262144
> (order: 9, 2097152 bytes, linear)
> >     >       > [    0.000000] Inode-cache hash table entries: 131072
> (order: 8, 1048576 bytes, linear)
> >     >       > [    0.000000] mem auto-init: stack:off, heap alloc:on,
> heap free:on
> >     >       > [    0.000000] mem auto-init: clearing system memory may
> take some time...
> >     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K
> kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss,
> >     >       256944K reserved,
> >     >       > 262144K cma-reserved)
> >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> >     >       > [    0.000000] rcu: Hierarchical RCU implementation.
> >     >       > [    0.000000] rcu: RCU event tracing is enabled.
> >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> >     >       > [    0.000000] rcu: RCU calculated value of
> scheduler-enlistment delay is 25 jiffies.
> >     >       > [    0.000000] rcu: Adjusting geometry for
> rcu_fanout_leaf=16, nr_cpu_ids=2
> >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated
> irqs: 0
> >     >       > [    0.000000] Root IRQ handler: gic_handle_irq
> >     >       > [    0.000000] arch_timer: cp15 timer(s) running at
> 100.00MHz (virt).
> >     >       > [    0.000000] clocksource: arch_sys_counter: mask:
> 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
> >     >       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution
> 10ns, wraps every 4398046511100ns
> >     >       > [    0.000258] Console: colour dummy device 80x25
> >     >       > [    0.310231] printk: console [hvc0] enabled
> >     >       > [    0.314403] Calibrating delay loop (skipped), value
> calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> >     >       > [    0.324851] pid_max: default: 32768 minimum: 301
> >     >       > [    0.329706] LSM: Security Framework initializing
> >     >       > [    0.334204] Yama: becoming mindful.
> >     >       > [    0.337865] Mount-cache hash table entries: 4096
> (order: 3, 32768 bytes, linear)
> >     >       > [    0.345180] Mountpoint-cache hash table entries: 4096
> (order: 3, 32768 bytes, linear)
> >     >       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
> >     >       > [    0.359132] Grant table initialized
> >     >       > [    0.362664] xen:events: Using FIFO-based ABI
> >     >       > [    0.366993] Xen: initializing cpu0
> >     >       > [    0.370515] rcu: Hierarchical SRCU implementation.
> >     >       > [    0.375930] smp: Bringing up secondary CPUs ...
> >     >       > (XEN) null.c:353: 1 <-- d0v1
> >     >       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff
> to ICACTIVER0
> >     >       > [    0.382549] Detected VIPT I-cache on CPU1
> >     >       > [    0.388712] Xen: initializing cpu1
> >     >       > [    0.388743] CPU1: Booted secondary processor
> 0x0000000001 [0x410fd034]
> >     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
> >     >       > [    0.406941] SMP: Total of 2 processors activated.
> >     >       > [    0.411698] CPU features: detected: 32-bit EL0 Support
> >     >       > [    0.416888] CPU features: detected: CRC32 instructions
> >     >       > [    0.422121] CPU: All CPU(s) started at EL1
> >     >       > [    0.426248] alternatives: patching kernel code
> >     >       > [    0.431424] devtmpfs: initialized
> >     >       > [    0.441454] KASLR enabled
> >     >       > [    0.441602] clocksource: jiffies: mask: 0xffffffff
> max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
> >     >       > [    0.448321] futex hash table entries: 512 (order: 3,
> 32768 bytes, linear)
> >     >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE
> protocol family
> >     >       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool
> for atomic allocations
> >     >       > [    0.503772] DMA: preallocated 256 KiB
> GFP_KERNEL|GFP_DMA pool for atomic allocations
> >     >       > [    0.511610] DMA: preallocated 256 KiB
> GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> >     >       > [    0.519478] audit: initializing netlink subsys
> (disabled)
> >     >       > [    0.524985] audit: type=2000 audit(0.336:1):
> state=initialized audit_enabled=0 res=1
> >     >       > [    0.529169] thermal_sys: Registered thermal governor
> 'step_wise'
> >     >       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4
> watchpoint registers.
> >     >       > [    0.545608] ASID allocator initialised with 32768
> entries
> >     >       > [    0.551030] xen:swiotlb_xen: Warning: only able to
> allocate 4 MB for software IO TLB
> >     >       > [    0.559332] software IO TLB: mapped [mem
> 0x0000000011800000-0x0000000011c00000] (4MB)
> >     >       > [    0.583565] HugeTLB registered 1.00 GiB page size,
> pre-allocated 0 pages
> >     >       > [    0.584721] HugeTLB registered 32.0 MiB page size,
> pre-allocated 0 pages
> >     >       > [    0.591478] HugeTLB registered 2.00 MiB page size,
> pre-allocated 0 pages
> >     >       > [    0.598225] HugeTLB registered 64.0 KiB page size,
> pre-allocated 0 pages
> >     >       > [    0.636520] DRBG: Continuing without Jitter RNG
> >     >       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
> >     >       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
> >     >       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
> >     >       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
> >     >       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
> >     >       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
> >     >       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
> >     >       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
> >     >       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
> >     >       > [    1.350132] raid6: int64x8  xor()   773 MB/s
> >     >       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
> >     >       > [    1.486349] raid6: int64x4  xor()   851 MB/s
> >     >       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
> >     >       > [    1.622561] raid6: int64x2  xor()   744 MB/s
> >     >       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
> >     >       > [    1.758770] raid6: int64x1  xor()   517 MB/s
> >     >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177
> MB/s
> >     >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> >     >       > [    1.767957] raid6: using neon recovery algorithm
> >     >       > [    1.772824] xen:balloon: Initialising balloon driver
> >     >       > [    1.778021] iommu: Default domain type: Translated
> >     >       > [    1.782584] iommu: DMA domain TLB invalidation policy:
> strict mode
> >     >       > [    1.789149] SCSI subsystem initialized
> >     >       > [    1.792820] usbcore: registered new interface driver
> usbfs
> >     >       > [    1.798254] usbcore: registered new interface driver hub
> >     >       > [    1.803626] usbcore: registered new device driver usb
> >     >       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> >     >       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright
> 2005-2007 Rodolfo Giometti <giometti@linux.it>
> >     >       > [    1.822903] PTP clock support registered
> >     >       > [    1.826893] EDAC MC: Ver: 3.0.0
> >     >       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400:
> Registered ZynqMP IPI mbox with TX/RX channels.
> >     >       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600:
> Registered ZynqMP IPI mbox with TX/RX channels.
> >     >       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800:
> Registered ZynqMP IPI mbox with TX/RX channels.
> >     >       > [    1.855907] FPGA manager framework
> >     >       > [    1.859952] clocksource: Switched to clocksource
> arch_sys_counter
> >     >       > [    1.871712] NET: Registered PF_INET protocol family
> >     >       > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> >     >       > [    1.879392] tcp_listen_portaddr_hash hash table
> entries: 1024 (order: 2, 16384 bytes, linear)
> >     >       > [    1.887078] Table-perturb hash table entries: 65536
> (order: 6, 262144 bytes, linear)
> >     >       > [    1.894846] TCP established hash table entries: 16384
> (order: 5, 131072 bytes, linear)
> >     >       > [    1.902900] TCP bind hash table entries: 16384 (order:
> 6, 262144 bytes, linear)
> >     >       > [    1.910350] TCP: Hash tables configured (established
> 16384 bind 16384)
> >     >       > [    1.916778] UDP hash table entries: 1024 (order: 3,
> 32768 bytes, linear)
> >     >       > [    1.923509] UDP-Lite hash table entries: 1024 (order:
> 3, 32768 bytes, linear)
> >     >       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol
> family
> >     >       > [    1.936834] RPC: Registered named UNIX socket transport module.
> >     >       > [    1.942342] RPC: Registered udp transport module.
> >     >       > [    1.947088] RPC: Registered tcp transport module.
> >     >       > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel
> transport module.
> >     >       > [    1.958334] PCI: CLS 0 bytes, default 64
> >     >       > [    1.962709] Trying to unpack rootfs image as
> initramfs...
> >     >       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> >     >       > [    1.982863] Installing knfsd (copyright (C) 1996
> okir@monad.swb.de).
> >     >       > [    2.021045] NET: Registered PF_ALG protocol family
> >     >       > [    2.021122] xor: measuring software checksum speed
> >     >       > [    2.029347]    8regs           :  2366 MB/sec
> >     >       > [    2.033081]    32regs          :  2802 MB/sec
> >     >       > [    2.038223]    arm64_neon      :  2320 MB/sec
> >     >       > [    2.038385] xor: using function: 32regs (2802 MB/sec)
> >     >       > [    2.043614] Block layer SCSI generic (bsg) driver
> version 0.4 loaded (major 247)
> >     >       > [    2.050959] io scheduler mq-deadline registered
> >     >       > [    2.055521] io scheduler kyber registered
> >     >       > [    2.068227] xen:xen_evtchn: Event-channel device
> installed
> >     >       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ
> sharing disabled
> >     >       > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
> >     >       > [    2.085548] brd: module loaded
> >     >       > [    2.089290] loop: module loaded
> >     >       > [    2.089341] Invalid max_queues (4), will use default
> max: 2.
> >     >       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> >     >       > [    2.098655] xen_netfront: Initialising Xen virtual
> ethernet driver
> >     >       > [    2.104156] usbcore: registered new interface driver
> rtl8150
> >     >       > [    2.109813] usbcore: registered new interface driver
> r8152
> >     >       > [    2.115367] usbcore: registered new interface driver
> asix
> >     >       > [    2.120794] usbcore: registered new interface driver
> ax88179_178a
> >     >       > [    2.126934] usbcore: registered new interface driver
> cdc_ether
> >     >       > [    2.132816] usbcore: registered new interface driver
> cdc_eem
> >     >       > [    2.138527] usbcore: registered new interface driver
> net1080
> >     >       > [    2.144256] usbcore: registered new interface driver
> cdc_subset
> >     >       > [    2.150205] usbcore: registered new interface driver
> zaurus
> >     >       > [    2.155837] usbcore: registered new interface driver
> cdc_ncm
> >     >       > [    2.161550] usbcore: registered new interface driver
> r8153_ecm
> >     >       > [    2.168240] usbcore: registered new interface driver
> cdc_acm
> >     >       > [    2.173109] cdc_acm: USB Abstract Control Model driver
> for USB modems and ISDN adapters
> >     >       > [    2.181358] usbcore: registered new interface driver uas
> >     >       > [    2.186547] usbcore: registered new interface driver
> usb-storage
> >     >       > [    2.192643] usbcore: registered new interface driver
> ftdi_sio
> >     >       > [    2.198384] usbserial: USB Serial support registered
> for FTDI USB Serial Device
> >     >       > [    2.206118] udc-core: couldn't find an available UDC -
> added [g_mass_storage] to list of pending drivers
> >     >       > [    2.215332] i2c_dev: i2c /dev entries driver
> >     >       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> >     >       > [    2.225923] device-mapper: uevent: version 1.0.3
> >     >       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl
> (2021-03-22) initialised: dm-devel@redhat.com
> >     >       > [    2.239315] EDAC MC0: Giving out device to module 1
> controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
> >     >       > [    2.249405] EDAC DEVICE0: Giving out device to module
> zynqmp-ocm-edac controller zynqmp_ocm: DEV
> >     >       ff960000.memory-controller (INTERRUPT)
> >     >       > [    2.261719] sdhci: Secure Digital Host Controller
> Interface driver
> >     >       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
> >     >       > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver
> helper
> >     >       > [    2.278157] ledtrig-cpu: registered to indicate
> activity on CPUs
> >     >       > [    2.283816] zynqmp_firmware_probe Platform Management
> API v1.1
> >     >       > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
> >     >       > [    2.327875] securefw securefw: securefw probed
> >     >       > [    2.328324] alg: No test for xilinx-zynqmp-aes
> (zynqmp-aes)
> >     >       > [    2.332563] zynqmp_aes
> firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
> >     >       > [    2.341183] alg: No test for xilinx-zynqmp-rsa
> (zynqmp-rsa)
> >     >       > [    2.347667] remoteproc remoteproc0:
> ff9a0000.rf5ss:r5f_0 is available
> >     >       > [    2.353003] remoteproc remoteproc1:
> ff9a0000.rf5ss:r5f_1 is available
> >     >       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA
> Manager registered
> >     >       > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen
> Proxy registered
> >     >       > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree
> Probing
> >     >       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version:
> 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >     >       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to
> register tamper handler. Retrying...
> >     >       > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device
> Tree Probing
> >     >       > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device
> registered
> >     >       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device
> Tree Probing
> >     >       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build
> parameters: VTI Count: 512 Event Count: 32
> >     >       > [    2.420856] default preset
> >     >       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device
> registered
> >     >       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device
> Tree Probing
> >     >       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device
> registered
> >     >       > [    2.441976] vmcu driver init
> >     >       > [    2.444922] VMCU: : (240:0) registered
> >     >       > [    2.444956] In K81 Updater init
> >     >       > [    2.449003] pktgen: Packet Generator for packet
> performance testing. Version: 2.75
> >     >       > [    2.468833] Initializing XFRM netlink socket
> >     >       > [    2.468902] NET: Registered PF_PACKET protocol family
> >     >       > [    2.472729] Bridge firewalling registered
> >     >       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
> >     >       > [    2.481341] registered taskstats version 1
> >     >       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic,
> zoned=no, fsverity=no
> >     >       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000
> (irq = 36, base_baud = 6250000) is a xuartps
> >     >       > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
> >     >       > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller:
> ZynqMP DMA driver Probe success
> >     >       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
> >     >       > [    2.946467] 2 fixed-partitions partitions found on MTD
> device spi0.0
> >     >       > [    2.952393] Creating 2 MTD partitions on "spi0.0":
> >     >       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> >     >       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> >     >       > [    2.968694] macb ff0b0000.ethernet: Not enabling
> partial store and forward
> >     >       > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM
> rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
> >     >       > [    2.984472] macb ff0c0000.ethernet: Not enabling
> partial store and forward
> >     >       > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM
> rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
> >     >       > [    3.001043] viper_enet viper_enet: Viper power GPIOs
> initialised
> >     >       > [    3.007313] viper_enet viper_enet vnet0
> (uninitialized): Validate interface QSGMII
> >     >       > [    3.014914] viper_enet viper_enet vnet1
> (uninitialized): Validate interface QSGMII
> >     >       > [    3.022138] viper_enet viper_enet vnet1
> (uninitialized): Validate interface type 18
> >     >       > [    3.030274] viper_enet viper_enet vnet2
> (uninitialized): Validate interface QSGMII
> >     >       > [    3.037785] viper_enet viper_enet vnet3
> (uninitialized): Validate interface QSGMII
> >     >       > [    3.045301] viper_enet viper_enet: Viper enet registered
> >     >       > [    3.050958] xilinx-axipmon ffa00000.perf-monitor:
> Probed Xilinx APM
> >     >       > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor:
> Probed Xilinx APM
> >     >       > [    3.063538] xilinx-axipmon fd490000.perf-monitor:
> Probed Xilinx APM
> >     >       > [    3.069920] xilinx-axipmon ffa10000.perf-monitor:
> Probed Xilinx APM
> >     >       > [    3.097729] si70xx: probe of 2-0040 failed with error -5
> >     >       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog
> Timer with timeout 60s
> >     >       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog
> Timer with timeout 10s
> >     >       > [    3.112457] viper-tamper viper-tamper: Device registered
> >     >       > [    3.117593] active_bank active_bank: boot bank: 1
> >     >       > [    3.122184] active_bank active_bank: boot mode: (0x02)
> qspi32
> >     >       > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree
> Probing
> >     >       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version:
> 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >     >       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler
> registered
> >     >       > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> >     >       > [    3.153007] lpc55_l2 spi1.0: registered handler for
> protocol 0
> >     >       > [    3.158582] lpc55_user lpc55_user: The major number for
> your device is 236
> >     >       > [    3.165976] lpc55_l2 spi1.0: registered handler for
> protocol 1
> >     >       > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
> bad result: 1
> >     >       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> >     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> >     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> >     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> >     >       > [    3.202932] mmc0: SDHCI controller on ff160000.mmc
> [ff160000.mmc] using ADMA 64-bit
> >     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> >     >       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
> >     >       > [    3.284438] mmc0: new HS200 MMC card at address 0001
> >     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> >     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> >     >       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> >     >       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> >     >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB,
> chardev (244:0)
> >     >       > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
> bad result: 1
> >     >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to
> read the hardware clock
> >     >       > [    3.591252] cdns-i2c ff020000.i2c: recovery information
> complete
> >     >       > [    3.597085] at24 0-0050: supply vcc not found, using
> dummy regulator
> >     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> >     >       > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> >     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> >     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> >     >       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> >     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> >     >       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
> >     >       > [    3.639104] k81_bootloader 0-0010: probe
> >     >       > [    3.641628] VMCU: : (235:0) registered
> >     >       > [    3.641635] k81_bootloader 0-0010: probe completed
> >     >       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio
> ff020000 irq 28
> >     >       > [    3.669154] cdns-i2c ff030000.i2c: recovery information
> complete
> >     >       > [    3.675412] lm75 1-0048: supply vs not found, using
> dummy regulator
> >     >       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> >     >       > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> >     >       > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> >     >       > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> >     >       > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> >     >       > [    3.705157] pca954x 1-0070: registered 4 multiplexed
> busses for I2C switch pca9546
> >     >       > [    3.713049] at24 1-0054: supply vcc not found, using
> dummy regulator
> >     >       > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM,
> read-only
> >     >       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio
> ff030000 irq 29
> >     >       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power
> 2.0W
> >     >       > [    3.737549] sfp_register_socket: got sfp_bus
> >     >       > [    3.740709] sfp_register_socket: register sfp_bus
> >     >       > [    3.745459] sfp_register_bus: ops ok!
> >     >       > [    3.749179] sfp_register_bus: Try to attach
> >     >       > [    3.753419] sfp_register_bus: Attach succeeded
> >     >       > [    3.757914] sfp_register_bus: upstream ops attach
> >     >       > [    3.762677] sfp_register_bus: Bus registered
> >     >       > [    3.766999] sfp_register_socket: register sfp_bus
> succeeded
> >     >       > [    3.775870] of_cfs_init
> >     >       > [    3.776000] of_cfs_init: OK
> >     >       > [    3.778211] clk: Not disabling unused clocks
> >     >       > [   11.278477] Freeing initrd memory: 206056K
> >     >       > [   11.279406] Freeing unused kernel memory: 1536K
> >     >       > [   11.314006] Checked W+X mappings: passed, no W+X pages
> found
> >     >       > [   11.314142] Run /init as init process
> >     >       > INIT: version 3.01 booting
> >     >       > fsck (busybox 1.35.0)
> >     >       > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600
> blocks
> >     >       > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600
> blocks
> >     >       > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> >     >       > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous),
> 663/16384 blocks
> >     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem
> without journal. Opts: (null). Quota mode: disabled.
> >     >       > Starting random number generator daemon.
> >     >       > [   11.580662] random: crng init done
> >     >       > Starting udev
> >     >       > [   11.613159] udevd[142]: starting version 3.2.10
> >     >       > [   11.620385] udevd[143]: starting eudev-3.2.10
> >     >       > [   11.704481] macb ff0b0000.ethernet control_red: renamed
> from eth0
> >     >       > [   11.720264] macb ff0c0000.ethernet control_black:
> renamed from eth1
> >     >       > [   12.063396] ip_local_port_range: prefer different
> parity for start/end values.
> >     >       > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
> bad result: 1
> >     >       > hwclock: RTC_RD_TIME: Invalid exchange
> >     >       > Mon Feb 27 08:40:53 UTC 2023
> >     >       > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time:
> bad result
> >     >       > hwclock: RTC_SET_TIME: Invalid exchange
> >     >       > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
> bad result: 1
> >     >       > Starting mcud
> >     >       > INIT: Entering runlevel: 5
> >     >       > Configuring network interfaces... done.
> >     >       > resetting network interface
> >     >       > [   12.718295] macb ff0b0000.ethernet control_red: PHY
> [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> >     >       > [   12.723919] macb ff0b0000.ethernet control_red:
> configuring for phy/gmii link mode
> >     >       > [   12.732151] pps pps0: new PPS source ptp0
> >     >       > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp
> clock registered.
> >     >       > [   12.745724] macb ff0c0000.ethernet control_black: PHY
> [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY]
> >     >       (irq=POLL)
> >     >       > [   12.753469] macb ff0c0000.ethernet control_black:
> configuring for phy/gmii link mode
> >     >       > [   12.761804] pps pps1: new PPS source ptp1
> >     >       > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp
> clock registered.
> >     >       > Auto-negotiation: off
> >     >       > Auto-negotiation: off
> >     >       > [   16.828151] macb ff0b0000.ethernet control_red: unable
> to generate target frequency: 125000000 Hz
> >     >       > [   16.834553] macb ff0b0000.ethernet control_red: Link is
> Up - 1Gbps/Full - flow control off
> >     >       > [   16.860552] macb ff0c0000.ethernet control_black:
> unable to generate target frequency: 125000000 Hz
> >     >       > [   16.867052] macb ff0c0000.ethernet control_black: Link
> is Up - 1Gbps/Full - flow control off
> >     >       > Starting Failsafe Secure Shell server in port 2222: sshd
> >     >       > done.
> >     >       > Starting rpcbind daemon...done.
> >     >       >
> >     >       > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
> bad result: 1
> >     >       > hwclock: RTC_RD_TIME: Invalid exchange
> >     >       > Starting State Manager Service
> >     >       > Start state-manager restarter...
> >     >       > (XEN) d0v1 Forwarding AES operation: 3254779951
> >     >       > Starting /usr/sbin/xenstored....[   17.265256] BTRFS:
> device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744
> >     >       /dev/dm-0
> >     >       > scanned by udevd (385)
> >     >       > [   17.349933] BTRFS info (device dm-0): disk space
> caching is enabled
> >     >       > [   17.350670] BTRFS info (device dm-0): has skinny extents
> >     >       > [   17.364384] BTRFS info (device dm-0): enabling ssd
> optimizations
> >     >       > [   17.830462] BTRFS: device fsid
> 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6
> /dev/mapper/client_prov scanned by
> >     >       mkfs.btrfs
> >     >       > (526)
> >     >       > [   17.872699] BTRFS info (device dm-1): using free space
> tree
> >     >       > [   17.872771] BTRFS info (device dm-1): has skinny extents
> >     >       > [   17.878114] BTRFS info (device dm-1): flagging fs with
> big metadata feature
> >     >       > [   17.894289] BTRFS info (device dm-1): enabling ssd
> optimizations
> >     >       > [   17.895695] BTRFS info (device dm-1): checking UUID tree
> >     >       >
> >     >       > Setting domain 0 name, domid and JSON config...
> >     >       > Done setting up Dom0
> >     >       > Starting xenconsoled...
> >     >       > Starting QEMU as disk backend for dom0
> >     >       > Starting domain watchdog daemon: xenwatchdogd startup
> >     >       >
> >     >       > [   18.408647] BTRFS: device fsid
> 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6
> /dev/mapper/client_config scanned by
> >     >       mkfs.btrfs
> >     >       > (574)
> >     >       > [done]
> >     >       > [   18.465552] BTRFS info (device dm-2): using free space
> tree
> >     >       > [   18.465629] BTRFS info (device dm-2): has skinny extents
> >     >       > [   18.471002] BTRFS info (device dm-2): flagging fs with
> big metadata feature
> >     >       > Starting crond: [   18.482371] BTRFS info (device dm-2):
> enabling ssd optimizations
> >     >       > [   18.486659] BTRFS info (device dm-2): checking UUID tree
> >     >       > OK
> >     >       > starting rsyslogd ... Log partition ready after 0 poll
> loops
> >     >       > done
> >     >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is
> unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> >     >       > [   18.670637] BTRFS: device fsid
> 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned
> by udevd (518)
> >     >       >
> >     >       > Please insert USB token and enter your role in login
> prompt.
> >     >       >
> >     >       > login:
> >     >       >
> >     >       > Regards,
> >     >       > O.
> >     >       >
> >     >       >
> >     >       > On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini
> <sstabellini@kernel.org> wrote:
> >     >       >       Hi Oleg,
> >     >       >
> >     >       >       Here is the issue from your logs:
> >     >       >
> >     >       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
> >     >       >
> >     >       >       SErrors are special signals to notify software of
> serious hardware
> >     >       >       errors.  Something is going very wrong. Defective
> hardware is a
> >     >       >       possibility.  Another possibility is software
> accessing address ranges
> >     >       >       that it is not supposed to; sometimes that causes
> SErrors.
> >     >       >
> >     >       >       Cheers,
> >     >       >
> >     >       >       Stefano
> >     >       >
> >     >       >
> >     >       >
> >     >       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> >     >       >
> >     >       >       > Hello,
> >     >       >       >
> >     >       >       > Thanks guys.
> >     >       >       > I found out where the problem was.
> >     >       >       > Now dom0 booted more. But I have a new one.
> >     >       >       > This is a kernel panic during Dom0 loading.
> >     >       >       > Maybe someone is able to suggest something ?
> >     >       >       >
> >     >       >       > Regards,
> >     >       >       > O.
> >     >       >       >
> >     >       >       > [    3.771362] sfp_register_bus: upstream ops
> attach
> >     >       >       > [    3.776119] sfp_register_bus: Bus registered
> >     >       >       > [    3.780459] sfp_register_socket: register
> sfp_bus succeeded
> >     >       >       > [    3.789399] of_cfs_init
> >     >       >       > [    3.789499] of_cfs_init: OK
> >     >       >       > [    3.791685] clk: Not disabling unused clocks
> >     >       >       > [   11.010355] SError Interrupt on CPU0, code
> 0xbe000000 -- SError
> >     >       >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0
> Not tainted 5.15.72-xilinx-v2022.1 #1
> >     >       >       > [   11.010393] Workqueue: events_unbound
> async_run_entry_fn
> >     >       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN
> -UAO -TCO -DIT -SSBS BTYPE=--)
> >     >       >       > [   11.010422] pc : simple_write_end+0xd0/0x130
> >     >       >       > [   11.010431] lr :
> generic_perform_write+0x118/0x1e0
> >     >       >       > [   11.010438] sp : ffffffc00809b910
> >     >       >       > [   11.010441] x29: ffffffc00809b910 x28:
> 0000000000000000 x27: ffffffef69ba88c0
> >     >       >       > [   11.010451] x26: 0000000000003eec x25:
> ffffff807515db00 x24: 0000000000000000
> >     >       >       > [   11.010459] x23: ffffffc00809ba90 x22:
> 0000000002aac000 x21: ffffff807315a260
> >     >       >       > [   11.010472] x20: 0000000000001000 x19:
> fffffffe02000000 x18: 0000000000000000
> >     >       >       > [   11.010481] x17: 00000000ffffffff x16:
> 0000000000008000 x15: 0000000000000000
> >     >       >       > [   11.010490] x14: 0000000000000000 x13:
> 0000000000000000 x12: 0000000000000000
> >     >       >       > [   11.010498] x11: 0000000000000000 x10:
> 0000000000000000 x9 : 0000000000000000
> >     >       >       > [   11.010507] x8 : 0000000000000000 x7 :
> ffffffef693ba680 x6 : 000000002d89b700
> >     >       >       > [   11.010515] x5 : fffffffe02000000 x4 :
> ffffff807315a3c8 x3 : 0000000000001000
> >     >       >       > [   11.010524] x2 : 0000000002aab000 x1 :
> 0000000000000001 x0 : 0000000000000005
> >     >       >       > [   11.010534] Kernel panic - not syncing:
> Asynchronous SError Interrupt
> >     >       >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0
> Not tainted 5.15.72-xilinx-v2022.1 #1
> >     >       >       > [   11.010545] Hardware name: D14 Viper Board -
> White Unit (DT)
> >     >       >       > [   11.010548] Workqueue: events_unbound
> async_run_entry_fn
> >     >       >       > [   11.010556] Call trace:
> >     >       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
> >     >       >       > [   11.010567]  show_stack+0x18/0x2c
> >     >       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> >     >       >       > [   11.010583]  dump_stack+0x18/0x34
> >     >       >       > [   11.010588]  panic+0x14c/0x2f8
> >     >       >       > [   11.010597]  print_tainted+0x0/0xb0
> >     >       >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> >     >       >       > [   11.010614]  do_serror+0x28/0x60
> >     >       >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
> >     >       >       > [   11.010628]  el1h_64_error+0x78/0x7c
> >     >       >       > [   11.010633]  simple_write_end+0xd0/0x130
> >     >       >       > [   11.010639]  generic_perform_write+0x118/0x1e0
> >     >       >       > [   11.010644]
>  __generic_file_write_iter+0x138/0x1c4
> >     >       >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
> >     >       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
> >     >       >       > [   11.010665]  kernel_write+0x88/0x160
> >     >       >       > [   11.010673]  xwrite+0x44/0x94
> >     >       >       > [   11.010680]  do_copy+0xa8/0x104
> >     >       >       > [   11.010686]  write_buffer+0x38/0x58
> >     >       >       > [   11.010692]  flush_buffer+0x4c/0xbc
> >     >       >       > [   11.010698]  __gunzip+0x280/0x310
> >     >       >       > [   11.010704]  gunzip+0x1c/0x28
> >     >       >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> >     >       >       > [   11.010715]  do_populate_rootfs+0x80/0x164
> >     >       >       > [   11.010722]  async_run_entry_fn+0x48/0x164
> >     >       >       > [   11.010728]  process_one_work+0x1e4/0x3a0
> >     >       >       > [   11.010736]  worker_thread+0x7c/0x4c0
> >     >       >       > [   11.010743]  kthread+0x120/0x130
> >     >       >       > [   11.010750]  ret_from_fork+0x10/0x20
> >     >       >       > [   11.010757] SMP: stopping secondary CPUs
> >     >       >       > [   11.010784] Kernel Offset: 0x2f61200000 from
> 0xffffffc008000000
> >     >       >       > [   11.010788] PHYS_OFFSET: 0x0
> >     >       >       > [   11.010790] CPU features: 0x00000401,00000842
> >     >       >       > [   11.010795] Memory Limit: none
> >     >       >       > [   11.277509] ---[ end Kernel panic - not
> syncing: Asynchronous SError Interrupt ]---
> >     >       >       >
> >     >       >       > On Fri, 21 Apr 2023 at 15:52, Michal Orzel
> <michal.orzel@amd.com> wrote:
> >     >       >       >       Hi Oleg,
> >     >       >       >
> >     >       >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
> >     >       >       >       >
> >     >       >       >       >
> >     >       >       >       >
> >     >       >       >       > Hello Michal,
> >     >       >       >       >
> >     >       >       >       > I was not able to enable earlyprintk in
> the xen for now.
> >     >       >       >       > I decided to choose another way.
> >     >       >       >       > This is a xen's command line that I found
> out completely.
> >     >       >       >       >
> >     >       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0
> dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
> sched=null timer_slop=0
> >     >       >       >       Yes, adding a printk() in Xen was also a
> good idea.
> >     >       >       >
> >     >       >       >       >
> >     >       >       >       > So you are absolutely right about a
> command line.
> >     >       >       >       > Now I am going to find out why xen did not
> have the correct parameters from the device tree.
> have the correct parameters from the device tree.
> >     >       >       >       Maybe you will find this document helpful:
> >     >       >       >
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> >     >       >       >
> >     >       >       >       ~Michal
> >     >       >       >
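The booting.txt document linked above describes passing the Xen command line through the /chosen node of the host device tree; a minimal sketch of such a fragment (property names per that document; the argument values are copied from the command line quoted earlier in this thread, and the dom0 bootargs line is an illustrative assumption):

```dts
/* Sketch only: the surrounding nodes must match the board's real DT. */
/ {
    chosen {
        /* Command line consumed by Xen itself. */
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M";
        /* Command line handed to the dom0 kernel (hypothetical value). */
        xen,dom0-bootargs = "console=hvc0";
    };
};
```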
> >     >       >       >       >
> >     >       >       >       > Regards,
> >     >       >       >       > Oleg
> >     >       >       >       >
> >     >       >       >       > On Fri, 21 Apr 2023 at 11:16, Michal Orzel
> <michal.orzel@amd.com> wrote:
> >     >       >       >       >
> >     >       >       >       >
> >     >       >       >       >     On 21/04/2023 10:04, Oleg Nikitenko
> wrote:
> >     >       >       >       >     >
> >     >       >       >       >     >
> >     >       >       >       >     >
> >     >       >       >       >     > Hello Michal,
> >     >       >       >       >     >
> >     >       >       >       >     > Yes, I use yocto.
> >     >       >       >       >     >
> >     >       >       >       >     > Yesterday all day long I tried to
> follow your suggestions.
> >     >       >       >       >     > I faced a problem.
> >     >       >       >       >     > Manually in the xen config build
> file I pasted the strings:
> >     >       >       >       >     In the .config file or in some Yocto
> file (listing additional Kconfig options) added to SRC_URI?
> >     >       >       >       >     You shouldn't really modify .config
> file but if you do, you should execute "make olddefconfig"
> >     >       afterwards.
> >     >       >       >       >
> >     >       >       >       >     >
> >     >       >       >       >     > CONFIG_EARLY_PRINTK
> >     >       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
> >     >       >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
> >     >       >       >     I hope you added =y to them.
> >     >       >       >       >
> >     >       >       >       >     Anyway, you have at least the
> following solutions:
> >     >       >       >       >     1) Run bitbake xen -c menuconfig to
> properly set early printk
> >     >       >       >       >     2) Find out how you enable other
> Kconfig options in your project (e.g. CONFIG_COLORING=y that is not
> >     >       enabled by
> >     >       >       default)
> >     >       >       >       >     3) Append the following to
> "xen/arch/arm/configs/arm64_defconfig":
> >     >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
> >     >       >       >       >
> >     >       >       >       >     ~Michal
> >     >       >       >       >
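Option 3 above can be scripted. A minimal sketch, assuming a Xen source checkout at ./xen-src (a hypothetical path; point XEN_SRC at wherever your Yocto recipe unpacks the Xen source):

```shell
# Append the early-printk option to the arm64 defconfig (option 3 above).
# XEN_SRC is a placeholder for the Xen source directory in your build tree.
XEN_SRC=./xen-src
mkdir -p "$XEN_SRC/xen/arch/arm/configs"
printf 'CONFIG_EARLY_PRINTK_ZYNQMP=y\n' \
    >> "$XEN_SRC/xen/arch/arm/configs/arm64_defconfig"
# Sanity check: the option is now present in the defconfig.
grep 'CONFIG_EARLY_PRINTK_ZYNQMP' \
    "$XEN_SRC/xen/arch/arm/configs/arm64_defconfig"
```

The defconfig route survives a clean rebuild, unlike editing the generated .config directly.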
> >     >       >       >       >     >
> >     >       >       >       >     > The host hangs at build time.
> >     >       >       >       >     > Maybe I did not set something in the
> config build file?
> >     >       >       >       >     >
> >     >       >       >       >     > Regards,
> >     >       >       >       >     > Oleg
> >     >       >       >       >     >
> >     >       >       >     > On Thu, 20 Apr 2023 at 11:57, Oleg
> Nikitenko <oleshiiwood@gmail.com> wrote:
> >     >       >       >       >     >
> >     >       >       >       >     >     Thanks Michal,
> >     >       >       >       >     >
> >     >       >       >       >     >     You gave me an idea.
> >     >       >       >       >     >     I am going to try it today.
> >     >       >       >       >     >
> >     >       >       >       >     >     Regards,
> >     >       >       >       >     >     O.
> >     >       >       >       >     >
> >     >       >       >     >     On Thu, 20 Apr 2023 at 11:56,
> Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >     >       >       >       >     >
> >     >       >       >       >     >         Thanks Stefano.
> >     >       >       >       >     >
> >     >       >       >       >     >         I am going to do it today.
> >     >       >       >       >     >
> >     >       >       >       >     >         Regards,
> >     >       >       >       >     >         O.
> >     >       >       >       >     >
> >     >       >       >     >         On Wed, 19 Apr 2023 at 23:05,
> Stefano Stabellini <sstabellini@kernel.org> wrote:
> >     >       >       >       >     >
> >     >       >       >       >     >             On Wed, 19 Apr 2023,
> Oleg Nikitenko wrote:
> >     >       >       >       >     >             > Hi Michal,
> >     >       >       >       >     >             >
> >     >       >       >       >     >             > I corrected xen's
> command line.
> >     >       >       >       >     >             > Now it is
> >     >       >       >     >             > xen,xen-bootargs =
> "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2
> dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> >     >       >       >       >     >
> >     >       >       >     >             4 colors is way too many
> for xen, just do xen_colors=0-0. There is no
> >     >       >       >       >     >             advantage in using more
> than 1 color for Xen.
> >     >       >       >       >     >
> >     >       >       >       >     >             4 colors is too few for
> dom0, if you are giving 1600M of memory to Dom0.
> >     >       >       >       >     >             Each color is 256M. For
> 1600M you should give at least 7 colors. Try:
> >     >       >       >       >     >
> >     >       >       >     >             xen_colors=0-0
> dom0_colors=1-8
> >     >       >       >       >     >
> >     >       >       >       >     >
> >     >       >       >       >     >
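The sizing rule above (each color covers 256M, so dom0_mem=1600M needs at least 7 colors) is a plain ceiling division; a quick sketch of the arithmetic:

```shell
# Minimum dom0 colors for a given dom0_mem, assuming the 256M-per-color
# granule stated above: ceil(1600 / 256) = 7.
DOM0_MEM_MB=1600
COLOR_MB=256
MIN_COLORS=$(( (DOM0_MEM_MB + COLOR_MB - 1) / COLOR_MB ))
echo "$MIN_COLORS"   # prints 7
```

The suggested dom0_colors=1-8 gives 8 colors, one more than the computed minimum.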
> >     >       >       >       >     >             > Unfortunately the
> result was the same.
> >     >       >       >       >     >             >
> >     >       >       >       >     >             > (XEN)  - Dom0 mode:
> Relaxed
> >     >       >       >       >     >             > (XEN) P2M: 40-bit IPA
> with 40-bit PA and 8-bit VMID
> >     >       >       >       >     >             > (XEN) P2M: 3 levels
> with order-1 root, VTCR 0x0000000080023558
> >     >       >       >       >     >             > (XEN) Scheduling
> granularity: cpu, 1 CPU per sched-resource
> >     >       >       >     >             > (XEN) Coloring general
> information
> >     >       >       >       >     >             > (XEN) Way size: 64kB
> >     >       >       >       >     >             > (XEN) Max. number of
> colors available: 16
> >     >       >       >       >     >             > (XEN) Xen color(s): [
> 0 ]
> >     >       >       >       >     >             > (XEN) alternatives:
> Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> >     >       >       >       >     >             > (XEN) Color array
> allocation failed for dom0
> >     >       >       >       >     >             > (XEN)
> >     >       >       >       >     >             > (XEN)
> ****************************************
> >     >       >       >       >     >             > (XEN) Panic on CPU 0:
> >     >       >       >       >     >             > (XEN) Error creating
> domain 0
> >     >       >       >       >     >             > (XEN)
> ****************************************
> >     >       >       >       >     >             > (XEN)
> >     >       >       >       >     >             > (XEN) Reboot in five
> seconds...
> >     >       >       >       >     >             >
> >     >       >       >     >             > I am going to find out
> how command line arguments are passed and parsed.
> >     >       >       >       >     >             >
> >     >       >       >       >     >             > Regards,
> >     >       >       >       >     >             > Oleg
> >     >       >       >       >     >             >
> >     >       >       >     >             > On Wed, 19 Apr 2023 at
> 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >     >       >       >       >     >             >       Hi Michal,
> >     >       >       >       >     >             >
> >     >       >       >       >     >             > You put my nose into
> the problem. Thank you.
> >     >       >       >     >             > I am going to use your
> point.
> >     >       >       >     >             > Let's see what happens.
> >     >       >       >       >     >             >
> >     >       >       >       >     >             > Regards,
> >     >       >       >       >     >             > Oleg
> >     >       >       >       >     >             >
> >     >       >       >       >     >             >
> >     >       >       >     >             > On Wed, 19 Apr 2023 at
> 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
>> Hi Oleg,
>>
>> On 19/04/2023 09:03, Oleg Nikitenko wrote:
>>> Hello Stefano,
>>>
>>> Thanks for the clarification.
>>> My company uses Yocto for image generation.
>>> What kind of information do you need to consult me in this case?
>>>
>>> Maybe the module sizes/addresses which were mentioned by @Julien Grall
>>> <julien@xen.org>?
>>
>> Sorry for jumping into the discussion, but FWICS the Xen command line
>> you provided does not seem to be the one Xen booted with. The error you
>> are observing is most likely due to the dom0 colors configuration not
>> being specified (i.e. the lack of a dom0_colors=<> parameter). Although
>> this parameter is set in the command line you provided, I strongly
>> doubt that this is the actual command line in use.
>>
>> You wrote:
>> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>> timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>>
>> but:
>> 1) way_szize has a typo
>> 2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen
>>    has only one:
>>    (XEN) Xen color(s): [ 0 ]
>>
>> This makes me believe that no colors configuration actually ended up in
>> the command line that Xen booted with. A single color for Xen is the
>> "default if not specified", and the way size was probably calculated by
>> querying the hardware.
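The way-size arithmetic behind these two points can be sketched in a few lines. This is a minimal sketch assuming the usual coloring relation (number of usable colors = LLC way size / page size, with 4 KiB pages); the function name is illustrative, not part of Xen.

```python
# Sketch of the cache-coloring arithmetic (assumption: the number of
# usable colors equals the LLC way size divided by the page size;
# 4 KiB pages assumed, as on this platform).

PAGE_SIZE = 4096

def num_colors(way_size, page_size=PAGE_SIZE):
    """Return how many cache colors a given LLC way size provides."""
    if way_size % page_size:
        raise ValueError("way size must be a multiple of the page size")
    return way_size // page_size

# way_size=65536 (the intended value from the quoted command line)
# yields 16 colors, so xen_colors=0-3 and dom0_colors=4-7 both fit.
print(num_colors(65536))  # 16
```

With the typoed `way_szize` ignored by the parser, the way size would instead be probed from the hardware, which is consistent with Xen reporting a single default color.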
>>
>> So I would suggest first cross-checking the command line in use.
>>
>> ~Michal
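For cross-checking, a corrected `chosen` node might look like the sketch below. The values are copied from the quoted command line with the typo fixed; whether `way_size` is the accepted spelling should be verified against the coloring documentation of the tree actually in use.

```dts
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
};
```

Decompiling the DTB actually passed at boot (e.g. `dtc -I dtb -O dts system.dtb` and inspecting the `chosen` node) is one way to confirm what Xen really received.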
>>> Regards,
>>> Oleg
>>>
>>> On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>> On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>>>>> Hi Julien,
>>>>>
>>>>>>> This feature has not been merged in Xen upstream yet
>>>>>
>>>>>> would assume that upstream + the series on the ML [1] work
>>>>>
>>>>> Please clarify this point.
>>>>> Because the two thoughts seem contradictory.
>>>>
>>>> Hi Oleg,
>>>>
>>>> As Julien wrote, there is nothing contradictory. As you are aware,
>>>> Xilinx maintains a separate Xen tree specific to Xilinx here:
>>>> https://github.com/xilinx/xen
>>>>
>>>> and the branch you are using (xlnx_rebase_4.16) comes from there.
>>>>
>>>> Instead, the upstream Xen tree lives here:
>>>> https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>>>>
>>>> The cache coloring feature that you are trying to configure is
>>>> present in xlnx_rebase_4.16, but not yet present upstream (there is
>>>> an outstanding patch series to add cache coloring to Xen upstream,
>>>> but it hasn't been merged yet).
>>>>
>>>> Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much
>>>> for you, as you already have cache coloring as a feature there.
>>>>
>>>> I take it you are using ImageBuilder to generate the boot
>>>> configuration? If so, please post the ImageBuilder config file that
>>>> you are using.
>>>>
>>>> But from the boot message, it looks like the colors configuration
>>>> for Dom0 is incorrect.
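If ImageBuilder is adopted here, its input is a small shell-style key/value config file consumed by its `uboot-script-gen` script to produce `boot.scr`. A minimal sketch for a dom0-only setup might look like this; the file names, addresses, and exact key set are assumptions to be checked against the ImageBuilder README:

```
MEMORY_START="0x0"
MEMORY_END="0x80000000"

DEVICE_TREE="system.dtb"
XEN="xen"
XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1600M sched=null"
DOM0_KERNEL="Image"
DOM0_RAMDISK="rootfs.cpio.gz"

NUM_DOMUS=0

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

Generating the boot script this way also makes the Xen command line explicit in one place, which helps with exactly the kind of "which arguments did Xen actually boot with" question raised above.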
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBPbiBub2RlIDAsIHpvbmUgRE1BOiAx
ODQgcGFnZXMgaW4gdW5hdmFpbGFibGUgcmFuZ2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE9uIG5vZGUgMCwgem9uZSBETUE6IDczNTIgcGFn
ZXMgaW4gdW5hdmFpbGFibGUgcmFuZ2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMC4wMDAwMDBdIGNtYTogUmVzZXJ2ZWQgMjU2IE1pQiBhdCAweDAwMDAwMDAw
NmUwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAw
MDAwMF0gcHNjaTogcHJvYmluZyBmb3IgY29uZHVpdCBtZXRob2QgZnJvbSBEVC48YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcHNjaTogUFNDSXYx
LjEgZGV0ZWN0ZWQgaW4gZmlybXdhcmUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMC4wMDAwMDBdIHBzY2k6IFVzaW5nIHN0YW5kYXJkIFBTQ0kgdjAuMiBmdW5j
dGlvbiBJRHM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAw
MDAwMF0gcHNjaTogVHJ1c3RlZCBPUyBtaWdyYXRpb24gbm90IHJlcXVpcmVkPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHBzY2k6IFNNQyBDYWxs
aW5nIENvbnZlbnRpb24gdjEuMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuMDAwMDAwXSBwZXJjcHU6IEVtYmVkZGVkIDE2IHBhZ2VzL2NwdSBzMzI3OTIgcjAg
ZDMyNzQ0IHU2NTUzNjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDAuMDAwMDAwXSBEZXRlY3RlZCBWSVBUIEktY2FjaGUgb24gQ1BVMDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBDUFUgZmVhdHVyZXM6IGtlcm5l
bCBwYWdlIHRhYmxlIGlzb2xhdGlvbiBmb3JjZWQgT04gYnkgS0FTTFI8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gQ1BVIGZlYXR1cmVzOiBkZXRl
Y3RlZDogS2VybmVsIHBhZ2UgdGFibGUgaXNvbGF0aW9uIChLUFRJKTxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBCdWlsdCAxIHpvbmVsaXN0cywg
bW9iaWxpdHkgZ3JvdXBpbmcgb24uwqAgVG90YWwgcGFnZXM6IDQwMzg0NTxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBLZXJuZWwgY29tbWFuZCBs
aW5lOiBjb25zb2xlPWh2YzAgZWFybHljb249eGVuIGVhcmx5cHJpbnRrPXhlbiBjbGtfaWdub3Jl
X3VudXNlZCBmaXBzPTEgcm9vdD0vZGV2L3JhbTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqBtYXhjcHVzPTI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAwLjAwMDAwMF0gVW5rbm93biBrZXJuZWwgY29tbWFuZCBsaW5lIHBhcmFtZXRlcnMgJnF1b3Q7
ZWFybHlwcmludGs9eGVuIGZpcHM9MSZxdW90Oywgd2lsbCBiZSBwYXNzZWQgdG8gdXNlciBzcGFj
ZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0g
RGVudHJ5IGNhY2hlIGhhc2ggdGFibGUgZW50cmllczogMjYyMTQ0IChvcmRlcjogOSwgMjA5NzE1
MiBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDAuMDAwMDAwXSBJbm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDEzMTA3MiAob3Jk
ZXI6IDgsIDEwNDg1NzYgYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gbWVtIGF1dG8taW5pdDogc3RhY2s6b2ZmLCBoZWFw
IGFsbG9jOm9uLCBoZWFwIGZyZWU6b248YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjAwMDAwMF0gbWVtIGF1dG8taW5pdDogY2xlYXJpbmcgc3lzdGVtIG1lbW9y
eSBtYXkgdGFrZSBzb21lIHRpbWUuLi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjAwMDAwMF0gTWVtb3J5OiAxMTIxOTM2Sy8xNjQxMDI0SyBhdmFpbGFibGUg
KDk3MjhLIGtlcm5lbCBjb2RlLCA4MzZLIHJ3ZGF0YSwgMjM5Nksgcm9kYXRhLCAxNTM2SyBpbml0
LCAyNjJLIGJzcyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAyNTY5NDRLIHJlc2Vy
dmVkLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgMjYyMTQ0SyBjbWEtcmVz
ZXJ2ZWQpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAw
MDBdIFNMVUI6IEhXYWxpZ249NjQsIE9yZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTIsIE5v
ZGVzPTE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAw
MF0gcmN1OiBIaWVyYXJjaGljYWwgUkNVIGltcGxlbWVudGF0aW9uLjxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSByY3U6IFJDVSBldmVudCB0cmFj
aW5nIGlzIGVuYWJsZWQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMC4wMDAwMDBdIHJjdTogUkNVIHJlc3RyaWN0aW5nIENQVXMgZnJvbSBOUl9DUFVTPTggdG8g
bnJfY3B1X2lkcz0yLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDAuMDAwMDAwXSByY3U6IFJDVSBjYWxjdWxhdGVkIHZhbHVlIG9mIHNjaGVkdWxlci1lbmxpc3Rt
ZW50IGRlbGF5IGlzIDI1IGppZmZpZXMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMC4wMDAwMDBdIHJjdTogQWRqdXN0aW5nIGdlb21ldHJ5IGZvciByY3VfZmFu
b3V0X2xlYWY9MTYsIG5yX2NwdV9pZHM9Mjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDAuMDAwMDAwXSBOUl9JUlFTOiA2NCwgbnJfaXJxczogNjQsIHByZWFsbG9j
YXRlZCBpcnFzOiAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MC4wMDAwMDBdIFJvb3QgSVJRIGhhbmRsZXI6IGdpY19oYW5kbGVfaXJxPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIGFyY2hfdGltZXI6IGNwMTUg
dGltZXIocykgcnVubmluZyBhdCAxMDAuMDBNSHogKHZpcnQpLjxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBjbG9ja3NvdXJjZTogYXJjaF9zeXNf
Y291bnRlcjogbWFzazogMHhmZmZmZmZmZmZmZmZmZiBtYXhfY3ljbGVzOiAweDE3MTAyNGU3ZTAs
IG1heF9pZGxlX25zOiA0NDA3OTUyMDUzMTUgbnM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gc2NoZWRfY2xvY2s6IDU2IGJpdHMgYXQgMTAwTUh6
LCByZXNvbHV0aW9uIDEwbnMsIHdyYXBzIGV2ZXJ5IDQzOTgwNDY1MTExMDBuczxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMjU4XSBDb25zb2xlOiBjb2xv
dXIgZHVtbXkgZGV2aWNlIDgweDI1PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC4zMTAyMzFdIHByaW50azogY29uc29sZSBbaHZjMF0gZW5hYmxlZDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzE0NDAzXSBDYWxpYnJhdGlu
ZyBkZWxheSBsb29wIChza2lwcGVkKSwgdmFsdWUgY2FsY3VsYXRlZCB1c2luZyB0aW1lciBmcmVx
dWVuY3kuLiAyMDAuMDAgQm9nb01JUFMgKGxwaj00MDAwMDApPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zMjQ4NTFdIHBpZF9tYXg6IGRlZmF1bHQ6IDMyNzY4
IG1pbmltdW06IDMwMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDAuMzI5NzA2XSBMU006IFNlY3VyaXR5IEZyYW1ld29yayBpbml0aWFsaXppbmc8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjMzNDIwNF0gWWFtYTogYmVjb21p
bmcgbWluZGZ1bC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjMzNzg2NV0gTW91bnQtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA0MDk2IChvcmRlcjogMywg
MzI3NjggYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAwLjM0NTE4MF0gTW91bnRwb2ludC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDQw
OTYgKG9yZGVyOiAzLCAzMjc2OCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzU0NzQzXSB4ZW46Z3JhbnRfdGFibGU6IEdyYW50IHRh
YmxlcyB1c2luZyB2ZXJzaW9uIDEgbGF5b3V0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC4zNTkxMzJdIEdyYW50IHRhYmxlIGluaXRpYWxpemVkPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zNjI2NjRdIHhlbjpldmVudHM6
IFVzaW5nIEZJRk8tYmFzZWQgQUJJPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC4zNjY5OTNdIFhlbjogaW5pdGlhbGl6aW5nIGNwdTA8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM3MDUxNV0gcmN1OiBIaWVyYXJjaGljYWwg
U1JDVSBpbXBsZW1lbnRhdGlvbi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAwLjM3NTkzMF0gc21wOiBCcmluZ2luZyB1cCBzZWNvbmRhcnkgQ1BVcyAuLi48YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIG51bGwuYzozNTM6IDEgJmx0
Oy0tIGQwdjE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIGQwdjE6
IHZHSUNEOiB1bmhhbmRsZWQgd29yZCB3cml0ZSAweDAwMDAwMGZmZmZmZmZmIHRvIElDQUNUSVZF
UjA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM4MjU0OV0g
RGV0ZWN0ZWQgVklQVCBJLWNhY2hlIG9uIENQVTE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjM4ODcxMl0gWGVuOiBpbml0aWFsaXppbmcgY3B1MTxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzg4NzQzXSBDUFUxOiBCb290
ZWQgc2Vjb25kYXJ5IHByb2Nlc3NvciAweDAwMDAwMDAwMDEgWzB4NDEwZmQwMzRdPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zODg4MjldIHNtcDogQnJvdWdo
dCB1cCAxIG5vZGUsIDIgQ1BVczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuNDA2OTQxXSBTTVA6IFRvdGFsIG9mIDIgcHJvY2Vzc29ycyBhY3RpdmF0ZWQuPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40MTE2OThdIENQVSBm
ZWF0dXJlczogZGV0ZWN0ZWQ6IDMyLWJpdCBFTDAgU3VwcG9ydDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDE2ODg4XSBDUFUgZmVhdHVyZXM6IGRldGVjdGVk
OiBDUkMzMiBpbnN0cnVjdGlvbnM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAwLjQyMjEyMV0gQ1BVOiBBbGwgQ1BVKHMpIHN0YXJ0ZWQgYXQgRUwxPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40MjYyNDhdIGFsdGVybmF0aXZl
czogcGF0Y2hpbmcga2VybmVsIGNvZGU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjQzMTQyNF0gZGV2dG1wZnM6IGluaXRpYWxpemVkPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40NDE0NTRdIEtBU0xSIGVuYWJsZWQ8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQ0MTYwMl0gY2xvY2tz
b3VyY2U6IGppZmZpZXM6IG1hc2s6IDB4ZmZmZmZmZmYgbWF4X2N5Y2xlczogMHhmZmZmZmZmZiwg
bWF4X2lkbGVfbnM6IDc2NDUwNDE3ODUxMDAwMDAgbnM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQ0ODMyMV0gZnV0ZXggaGFzaCB0YWJsZSBlbnRyaWVzOiA1
MTIgKG9yZGVyOiAzLCAzMjc2OCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDk2MTgzXSBORVQ6IFJlZ2lzdGVyZWQgUEZfTkVUTElO
Sy9QRl9ST1VURSBwcm90b2NvbCBmYW1pbHk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAwLjQ5ODI3N10gRE1BOiBwcmVhbGxvY2F0ZWQgMjU2IEtpQiBHRlBfS0VS
TkVMIHBvb2wgZm9yIGF0b21pYyBhbGxvY2F0aW9uczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDAuNTAzNzcyXSBETUE6IHByZWFsbG9jYXRlZCAyNTYgS2lCIEdG
UF9LRVJORUx8R0ZQX0RNQSBwb29sIGZvciBhdG9taWMgYWxsb2NhdGlvbnM8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjUxMTYxMF0gRE1BOiBwcmVhbGxvY2F0
ZWQgMjU2IEtpQiBHRlBfS0VSTkVMfEdGUF9ETUEzMiBwb29sIGZvciBhdG9taWMgYWxsb2NhdGlv
bnM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjUxOTQ3OF0g
YXVkaXQ6IGluaXRpYWxpemluZyBuZXRsaW5rIHN1YnN5cyAoZGlzYWJsZWQpPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41MjQ5ODVdIGF1ZGl0OiB0eXBlPTIw
MDAgYXVkaXQoMC4zMzY6MSk6IHN0YXRlPWluaXRpYWxpemVkIGF1ZGl0X2VuYWJsZWQ9MCByZXM9
MTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNTI5MTY5XSB0
aGVybWFsX3N5czogUmVnaXN0ZXJlZCB0aGVybWFsIGdvdmVybm9yICYjMzk7c3RlcF93aXNlJiMz
OTs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjUzMzAyM10g
aHctYnJlYWtwb2ludDogZm91bmQgNiBicmVha3BvaW50IGFuZCA0IHdhdGNocG9pbnQgcmVnaXN0
ZXJzLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNTQ1NjA4
XSBBU0lEIGFsbG9jYXRvciBpbml0aWFsaXNlZCB3aXRoIDMyNzY4IGVudHJpZXM8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjU1MTAzMF0geGVuOnN3aW90bGJf
eGVuOiBXYXJuaW5nOiBvbmx5IGFibGUgdG8gYWxsb2NhdGUgNCBNQiBmb3Igc29mdHdhcmUgSU8g
VExCPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41NTkzMzJd
IHNvZnR3YXJlIElPIFRMQjogbWFwcGVkIFttZW0gMHgwMDAwMDAwMDExODAwMDAwLTB4MDAwMDAw
MDAxMWMwMDAwMF0gKDRNQik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAwLjU4MzU2NV0gSHVnZVRMQiByZWdpc3RlcmVkIDEuMDAgR2lCIHBhZ2Ugc2l6ZSwgcHJl
LWFsbG9jYXRlZCAwIHBhZ2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMC41ODQ3MjFdIEh1Z2VUTEIgcmVnaXN0ZXJlZCAzMi4wIE1pQiBwYWdlIHNpemUsIHBy
ZS1hbGxvY2F0ZWQgMCBwYWdlczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuNTkxNDc4XSBIdWdlVExCIHJlZ2lzdGVyZWQgMi4wMCBNaUIgcGFnZSBzaXplLCBw
cmUtYWxsb2NhdGVkIDAgcGFnZXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAwLjU5ODIyNV0gSHVnZVRMQiByZWdpc3RlcmVkIDY0LjAgS2lCIHBhZ2Ugc2l6ZSwg
cHJlLWFsbG9jYXRlZCAwIHBhZ2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC42MzY1MjBdIERSQkc6IENvbnRpbnVpbmcgd2l0aG91dCBKaXR0ZXIgUk5HPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC43MzcxODddIHJhaWQ2
OiBuZW9ueDggwqAgZ2VuKCkgwqAyMTQzIE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjgwNTI5NF0gcmFpZDY6IG5lb254OCDCoCB4b3IoKSDCoDE1ODkg
TUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuODczNDA2
XSByYWlkNjogbmVvbng0IMKgIGdlbigpIMKgMjE3NyBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC45NDE0OTldIHJhaWQ2OiBuZW9ueDQgwqAgeG9yKCkg
wqAxNTU2IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAx
LjAwOTYxMl0gcmFpZDY6IG5lb254MiDCoCBnZW4oKSDCoDIwNzIgTUIvczxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuMDc3NzE1XSByYWlkNjogbmVvbngyIMKg
IHhvcigpIMKgMTQzMCBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMS4xNDU4MzRdIHJhaWQ2OiBuZW9ueDEgwqAgZ2VuKCkgwqAxNzY5IE1CL3M8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjIxMzkzNV0gcmFpZDY6IG5l
b254MSDCoCB4b3IoKSDCoDEyMTQgTUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDEuMjgyMDQ2XSByYWlkNjogaW50NjR4OCDCoGdlbigpIMKgMTM2NiBNQi9z
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS4zNTAxMzJdIHJh
aWQ2OiBpbnQ2NHg4IMKgeG9yKCkgwqAgNzczIE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjQxODI1OV0gcmFpZDY6IGludDY0eDQgwqBnZW4oKSDCoDE2
MDIgTUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuNDg2
MzQ5XSByYWlkNjogaW50NjR4NCDCoHhvcigpIMKgIDg1MSBNQi9zPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS41NTQ0NjRdIHJhaWQ2OiBpbnQ2NHgyIMKgZ2Vu
KCkgwqAxMzk2IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAxLjYyMjU2MV0gcmFpZDY6IGludDY0eDIgwqB4b3IoKSDCoCA3NDQgTUIvczxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuNjkwNjg3XSByYWlkNjogaW50NjR4
MSDCoGdlbigpIMKgMTAzMyBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMS43NTg3NzBdIHJhaWQ2OiBpbnQ2NHgxIMKgeG9yKCkgwqAgNTE3IE1CL3M8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjc1ODgwOV0gcmFpZDY6
IHVzaW5nIGFsZ29yaXRobSBuZW9ueDQgZ2VuKCkgMjE3NyBNQi9zPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43NjI5NDFdIHJhaWQ2OiAuLi4uIHhvcigpIDE1
NTYgTUIvcywgcm13IGVuYWJsZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAxLjc2Nzk1N10gcmFpZDY6IHVzaW5nIG5lb24gcmVjb3ZlcnkgYWxnb3JpdGhtPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43NzI4MjRdIHhlbjpi
YWxsb29uOiBJbml0aWFsaXNpbmcgYmFsbG9vbiBkcml2ZXI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjc3ODAyMV0gaW9tbXU6IERlZmF1bHQgZG9tYWluIHR5
cGU6IFRyYW5zbGF0ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAxLjc4MjU4NF0gaW9tbXU6IERNQSBkb21haW4gVExCIGludmFsaWRhdGlvbiBwb2xpY3k6IHN0
cmljdCBtb2RlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43
ODkxNDldIFNDU0kgc3Vic3lzdGVtIGluaXRpYWxpemVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43OTI4MjBdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGlu
dGVyZmFjZSBkcml2ZXIgdXNiZnM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAxLjc5ODI1NF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZl
ciBodWI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjgwMzYy
Nl0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRyaXZlciB1c2I8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjgwODc2MV0gcHBzX2NvcmU6IExpbnV4
UFBTIEFQSSB2ZXIuIDEgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDEuODEzNzE2XSBwcHNfY29yZTogU29mdHdhcmUgdmVyLiA1LjMuNiAtIENv
cHlyaWdodCAyMDA1LTIwMDcgUm9kb2xmbyBHaW9tZXR0aSAmbHQ7PGEgaHJlZj0ibWFpbHRvOmdp
b21ldHRpQGxpbnV4Lml0IiB0YXJnZXQ9Il9ibGFuayI+Z2lvbWV0dGlAbGludXguaXQ8L2E+ICZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmdpb21ldHRpQGxpbnV4Lml0IiB0YXJnZXQ9Il9ibGFu
ayI+Z2lvbWV0dGlAbGludXguaXQ8L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjgyMjkwM10gUFRQIGNsb2NrIHN1cHBvcnQgcmVnaXN0ZXJl
ZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuODI2ODkzXSBF
REFDIE1DOiBWZXI6IDMuMC4wPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMS44MzAzNzVdIHp5bnFtcC1pcGktbWJveCBtYWlsYm94QGZmOTkwNDAwOiBSZWdpc3Rl
cmVkIFp5bnFNUCBJUEkgbWJveCB3aXRoIFRYL1JYIGNoYW5uZWxzLjxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuODM4ODYzXSB6eW5xbXAtaXBpLW1ib3ggbWFp
bGJveEBmZjk5MDYwMDogUmVnaXN0ZXJlZCBaeW5xTVAgSVBJIG1ib3ggd2l0aCBUWC9SWCBjaGFu
bmVscy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjg0NzM1
Nl0genlucW1wLWlwaS1tYm94IG1haWxib3hAZmY5OTA4MDA6IFJlZ2lzdGVyZWQgWnlucU1QIElQ
SSBtYm94IHdpdGggVFgvUlggY2hhbm5lbHMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMS44NTU5MDddIEZQR0EgbWFuYWdlciBmcmFtZXdvcms8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjg1OTk1Ml0gY2xvY2tzb3VyY2U6
IFN3aXRjaGVkIHRvIGNsb2Nrc291cmNlIGFyY2hfc3lzX2NvdW50ZXI8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjg3MTcxMl0gTkVUOiBSZWdpc3RlcmVkIFBG
X0lORVQgcHJvdG9jb2wgZmFtaWx5PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMS44NzE4MzhdIElQIGlkZW50cyBoYXNoIHRhYmxlIGVudHJpZXM6IDMyNzY4IChv
cmRlcjogNiwgMjYyMTQ0IGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMS44NzkzOTJdIHRjcF9saXN0ZW5fcG9ydGFkZHJfaGFzaCBoYXNo
IHRhYmxlIGVudHJpZXM6IDEwMjQgKG9yZGVyOiAyLCAxNjM4NCBieXRlcywgbGluZWFyKTxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuODg3MDc4XSBUYWJsZS1w
ZXJ0dXJiIGhhc2ggdGFibGUgZW50cmllczogNjU1MzYgKG9yZGVyOiA2LCAyNjIxNDQgYnl0ZXMs
IGxpbmVhcik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjg5
NDg0Nl0gVENQIGVzdGFibGlzaGVkIGhhc2ggdGFibGUgZW50cmllczogMTYzODQgKG9yZGVyOiA1
LCAxMzEwNzIgYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAxLjkwMjkwMF0gVENQIGJpbmQgaGFzaCB0YWJsZSBlbnRyaWVzOiAxNjM4NCAo
b3JkZXI6IDYsIDI2MjE0NCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDEuOTEwMzUwXSBUQ1A6IEhhc2ggdGFibGVzIGNvbmZpZ3VyZWQg
KGVzdGFibGlzaGVkIDE2Mzg0IGJpbmQgMTYzODQpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMS45MTY3NzhdIFVEUCBoYXNoIHRhYmxlIGVudHJpZXM6IDEwMjQg
KG9yZGVyOiAzLCAzMjc2OCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDEuOTIzNTA5XSBVRFAtTGl0ZSBoYXNoIHRhYmxlIGVudHJpZXM6
IDEwMjQgKG9yZGVyOiAzLCAzMjc2OCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTMwNzU5XSBORVQ6IFJlZ2lzdGVyZWQgUEZfVU5J
WC9QRl9MT0NBTCBwcm90b2NvbCBmYW1pbHk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAxLjkzNjgzNF0gUlBDOiBSZWdpc3RlcmVkIG5hbWVkIFVOSVggc29ja2V0
IHRyYW5zcG9ydCBtb2R1bGUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMS45NDIzNDJdIFJQQzogUmVnaXN0ZXJlZCB1ZHAgdHJhbnNwb3J0IG1vZHVsZS48YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjk0NzA4OF0gUlBDOiBS
ZWdpc3RlcmVkIHRjcCB0cmFuc3BvcnQgbW9kdWxlLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDEuOTUxODQzXSBSUEM6IFJlZ2lzdGVyZWQgdGNwIE5GU3Y0LjEg
YmFja2NoYW5uZWwgdHJhbnNwb3J0IG1vZHVsZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAxLjk1ODMzNF0gUENJOiBDTFMgMCBieXRlcywgZGVmYXVsdCA2NDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTYyNzA5XSBUcnlp
bmcgdG8gdW5wYWNrIHJvb3RmcyBpbWFnZSBhcyBpbml0cmFtZnMuLi48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjk3NzA5MF0gd29ya2luZ3NldDogdGltZXN0
YW1wX2JpdHM9NjIgbWF4X29yZGVyPTE5IGJ1Y2tldF9vcmRlcj0wPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS45ODI4NjNdIEluc3RhbGxpbmcga25mc2QgKGNv
cHlyaWdodCAoQykgMTk5NiA8YSBocmVmPSJtYWlsdG86b2tpckBtb25hZC5zd2IuZGUiIHRhcmdl
dD0iX2JsYW5rIj5va2lyQG1vbmFkLnN3Yi5kZTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86b2tpckBtb25hZC5zd2IuZGUiIHRhcmdldD0iX2JsYW5rIj5va2lyQG1vbmFkLnN3Yi5kZTwv
YT4mZ3Q7KS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjAy
MTA0NV0gTkVUOiBSZWdpc3RlcmVkIFBGX0FMRyBwcm90b2NvbCBmYW1pbHk8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjAyMTEyMl0geG9yOiBtZWFzdXJpbmcg
c29mdHdhcmUgY2hlY2tzdW0gc3BlZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAyLjAyOTM0N10gwqAgwqA4cmVncyDCoCDCoCDCoCDCoCDCoCA6IMKgMjM2NiBN
Qi9zZWM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjAzMzA4
MV0gwqAgwqAzMnJlZ3MgwqAgwqAgwqAgwqAgwqA6IMKgMjgwMiBNQi9zZWM8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjAzODIyM10gwqAgwqBhcm02NF9uZW9u
IMKgIMKgIMKgOiDCoDIzMjAgTUIvc2VjPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi4wMzgzODVdIHhvcjogdXNpbmcgZnVuY3Rpb246IDMycmVncyAoMjgwMiBN
Qi9zZWMpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4wNDM2
MTRdIEJsb2NrIGxheWVyIFNDU0kgZ2VuZXJpYyAoYnNnKSBkcml2ZXIgdmVyc2lvbiAwLjQgbG9h
ZGVkIChtYWpvciAyNDcpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMi4wNTA5NTldIGlvIHNjaGVkdWxlciBtcS1kZWFkbGluZSByZWdpc3RlcmVkPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4wNTU1MjFdIGlvIHNjaGVkdWxl
ciBreWJlciByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMi4wNjgyMjddIHhlbjp4ZW5fZXZ0Y2huOiBFdmVudC1jaGFubmVsIGRldmljZSBpbnN0
YWxsZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA2OTI4
MV0gU2VyaWFsOiA4MjUwLzE2NTUwIGRyaXZlciwgNCBwb3J0cywgSVJRIHNoYXJpbmcgZGlzYWJs
ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA3NjE5MF0g
Y2FjaGVpbmZvOiBVbmFibGUgdG8gZGV0ZWN0IGNhY2hlIGhpZXJhcmNoeSBmb3IgQ1BVIDA8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA4NTU0OF0gYnJkOiBt
b2R1bGUgbG9hZGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
Mi4wODkyOTBdIGxvb3A6IG1vZHVsZSBsb2FkZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAyLjA4OTM0MV0gSW52YWxpZCBtYXhfcXVldWVzICg0KSwgd2lsbCB1
c2UgZGVmYXVsdCBtYXg6IDIuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMi4wOTQ1NjVdIHR1bjogVW5pdmVyc2FsIFRVTi9UQVAgZGV2aWNlIGRyaXZlciwgMS42
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4wOTg2NTVdIHhl
bl9uZXRmcm9udDogSW5pdGlhbGlzaW5nIFhlbiB2aXJ0dWFsIGV0aGVybmV0IGRyaXZlcjxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTA0MTU2XSB1c2Jjb3Jl
OiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHJ0bDgxNTA8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjEwOTgxM10gdXNiY29yZTogcmVnaXN0ZXJl
ZCBuZXcgaW50ZXJmYWNlIGRyaXZlciByODE1Mjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDIuMTE1MzY3XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZh
Y2UgZHJpdmVyIGFzaXg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAyLjEyMDc5NF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBheDg4
MTc5XzE3OGE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjEy
NjkzNF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBjZGNfZXRoZXI8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjEzMjgxNl0gdXNi
Y29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBjZGNfZWVtPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4xMzg1MjddIHVzYmNvcmU6IHJlZ2lz
dGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgbmV0MTA4MDxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTQ0MjU2XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBp
bnRlcmZhY2UgZHJpdmVyIGNkY19zdWJzZXQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAyLjE1MDIwNV0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNl
IGRyaXZlciB6YXVydXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAyLjE1NTgzN10gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBjZGNf
bmNtPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4xNjE1NTBd
IHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgcjgxNTNfZWNtPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4xNjgyNDBdIHVzYmNvcmU6
IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgY2RjX2FjbTxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTczMTA5XSBjZGNfYWNtOiBVU0IgQWJzdHJh
Y3QgQ29udHJvbCBNb2RlbCBkcml2ZXIgZm9yIFVTQiBtb2RlbXMgYW5kIElTRE4gYWRhcHRlcnM8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjE4MTM1OF0gdXNi
Y29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1YXM8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjE4NjU0N10gdXNiY29yZTogcmVnaXN0ZXJl
ZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1c2Itc3RvcmFnZTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTkyNjQzXSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBp
bnRlcmZhY2UgZHJpdmVyIGZ0ZGlfc2lvPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi4xOTgzODRdIHVzYnNlcmlhbDogVVNCIFNlcmlhbCBzdXBwb3J0IHJlZ2lz
dGVyZWQgZm9yIEZUREkgVVNCIFNlcmlhbCBEZXZpY2U8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjIwNjExOF0gdWRjLWNvcmU6IGNvdWxkbiYjMzk7dCBmaW5k
IGFuIGF2YWlsYWJsZSBVREMgLSBhZGRlZCBbZ19tYXNzX3N0b3JhZ2VdIHRvIGxpc3Qgb2YgcGVu
ZGluZyBkcml2ZXJzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
Mi4yMTUzMzJdIGkyY19kZXY6IGkyYyAvZGV2IGVudHJpZXMgZHJpdmVyPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4yMjA0NjddIHhlbl93ZHQgeGVuX3dkdDog
aW5pdGlhbGl6ZWQgKHRpbWVvdXQ9NjBzLCBub3dheW91dD0wKTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjI1OTIzXSBkZXZpY2UtbWFwcGVyOiB1ZXZlbnQ6
IHZlcnNpb24gMS4wLjM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAyLjIzMDY2OF0gZGV2aWNlLW1hcHBlcjogaW9jdGw6IDQuNDUuMC1pb2N0bCAoMjAyMS0wMy0y
MikgaW5pdGlhbGlzZWQ6IDxhIGhyZWY9Im1haWx0bzpkbS1kZXZlbEByZWRoYXQuY29tIiB0YXJn
ZXQ9Il9ibGFuayI+ZG0tZGV2ZWxAcmVkaGF0LmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPmRtLWRldmVsQHJlZGhh
dC5jb208L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDIuMjM5MzE1XSBFREFDIE1DMDogR2l2aW5nIG91dCBkZXZpY2UgdG8gbW9kdWxlIDEgY29udHJv
bGxlciBzeW5wc19kZHJfY29udHJvbGxlcjogREVWIHN5bnBzX2VkYWMgKElOVEVSUlVQVCk8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI0OTQwNV0gRURBQyBE
RVZJQ0UwOiBHaXZpbmcgb3V0IGRldmljZSB0byBtb2R1bGUgenlucW1wLW9jbS1lZGFjIGNvbnRy
b2xsZXIgenlucW1wX29jbTogREVWPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZmY5
NjAwMDAubWVtb3J5LWNvbnRyb2xsZXIgKElOVEVSUlVQVCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI2MTcxOV0gc2RoY2k6IFNlY3VyZSBEaWdpdGFsIEhv
c3QgQ29udHJvbGxlciBJbnRlcmZhY2UgZHJpdmVyPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMi4yNjc0ODddIHNkaGNpOiBDb3B5cmlnaHQoYykgUGllcnJlIE9z
c21hbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjcxODkw
XSBzZGhjaS1wbHRmbTogU0RIQ0kgcGxhdGZvcm0gYW5kIE9GIGRyaXZlciBoZWxwZXI8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI3ODE1N10gbGVkdHJpZy1j
cHU6IHJlZ2lzdGVyZWQgdG8gaW5kaWNhdGUgYWN0aXZpdHkgb24gQ1BVczxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjgzODE2XSB6eW5xbXBfZmlybXdhcmVf
cHJvYmUgUGxhdGZvcm0gTWFuYWdlbWVudCBBUEkgdjEuMTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjg5NTU0XSB6eW5xbXBfZmlybXdhcmVfcHJvYmUgVHJ1
c3R6b25lIHZlcnNpb24gdjEuMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDIuMzI3ODc1XSBzZWN1cmVmdyBzZWN1cmVmdzogc2VjdXJlZncgcHJvYmVkPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zMjgzMjRdIGFsZzogTm8g
dGVzdCBmb3IgeGlsaW54LXp5bnFtcC1hZXMgKHp5bnFtcC1hZXMpPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zMzI1NjNdIHp5bnFtcF9hZXMgZmlybXdhcmU6
enlucW1wLWZpcm13YXJlOnp5bnFtcC1hZXM6IEFFUyBTdWNjZXNzZnVsbHkgUmVnaXN0ZXJlZDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzQxMTgzXSBhbGc6
IE5vIHRlc3QgZm9yIHhpbGlueC16eW5xbXAtcnNhICh6eW5xbXAtcnNhKTxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzQ3NjY3XSByZW1vdGVwcm9jIHJlbW90
ZXByb2MwOiBmZjlhMDAwMC5yZjVzczpyNWZfMCBpcyBhdmFpbGFibGU8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjM1MzAwM10gcmVtb3RlcHJvYyByZW1vdGVw
cm9jMTogZmY5YTAwMDAucmY1c3M6cjVmXzEgaXMgYXZhaWxhYmxlPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zNjI2MDVdIGZwZ2FfbWFuYWdlciBmcGdhMDog
WGlsaW54IFp5bnFNUCBGUEdBIE1hbmFnZXIgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzY2NTQwXSB2aXBlci14ZW4tcHJveHkgdmlwZXIt
eGVuLXByb3h5OiBWaXBlciBYZW4gUHJveHkgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzcyNTI1XSB2aXBlci12ZHBwIGE0MDAwMDAwLnZk
cHA6IERldmljZSBUcmVlIFByb2Jpbmc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAyLjM3Nzc3OF0gdmlwZXItdmRwcCBhNDAwMDAwMC52ZHBwOiBWRFBQIFZlcnNp
b246IDEuMy45LjAgSW5mbzogMS41MTIuMTUuMCBLZXlMZW46IDMyPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zODY0MzJdIHZpcGVyLXZkcHAgYTQwMDAwMDAu
dmRwcDogVW5hYmxlIHRvIHJlZ2lzdGVyIHRhbXBlciBoYW5kbGVyLiBSZXRyeWluZy4uLjxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzk0MDk0XSB2aXBlci12
ZHBwLW5ldCBhNTAwMDAwMC52ZHBwX25ldDogRGV2aWNlIFRyZWUgUHJvYmluZzxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzk5ODU0XSB2aXBlci12ZHBwLW5l
dCBhNTAwMDAwMC52ZHBwX25ldDogRGV2aWNlIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQwNTkzMV0gdmlwZXItdmRwcC1zdGF0IGE4MDAw
MDAwLnZkcHBfc3RhdDogRGV2aWNlIFRyZWUgUHJvYmluZzxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDEyMDM3XSB2aXBlci12ZHBwLXN0YXQgYTgwMDAwMDAu
dmRwcF9zdGF0OiBCdWlsZCBwYXJhbWV0ZXJzOiBWVEkgQ291bnQ6IDUxMiBFdmVudCBDb3VudDog
32
> > > [    2.420856] default preset
> > > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
> > > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
> > > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
> > > [    2.441976] vmcu driver init
> > > [    2.444922] VMCU: : (240:0) registered
> > > [    2.444956] In K81 Updater init
> > > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
> > > [    2.468833] Initializing XFRM netlink socket
> > > [    2.468902] NET: Registered PF_PACKET protocol family
> > > [    2.472729] Bridge firewalling registered
> > > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
> > > [    2.481341] registered taskstats version 1
> > > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
> > > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
> > > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
> > > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
> > > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> > > [    2.952393] Creating 2 MTD partitions on "spi0.0":
> > > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> > > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> > > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
> > > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
> > > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
> > > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
> > > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
> > > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
> > > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
> > > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
> > > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
> > > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
> > > [    3.045301] viper_enet viper_enet: Viper enet registered
> > > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
> > > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
> > > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
> > > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
> > > [    3.097729] si70xx: probe of 2-0040 failed with error -5
> > > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> > > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> > > [    3.112457] viper-tamper viper-tamper: Device registered
> > > [    3.117593] active_bank active_bank: boot bank: 1
> > > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> > > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
> > > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> > > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
> > > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> > > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> > > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
> > > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> > > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> > > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> > > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> > > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> > > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> > > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> > > [    3.215694] lpc55_l2 spi1.0: rx error: -110
> > > [    3.284438] mmc0: new HS200 MMC card at address 0001
> > > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> > > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> > > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> > > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> > > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> > > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> > > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
> > > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
> > > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> > > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> > > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> > > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> > > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> > > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> > > [    3.633253] lpc55_l2 spi1.0: rx error: -110
> > > [    3.639104] k81_bootloader 0-0010: probe
> > > [    3.641628] VMCU: : (235:0) registered
> > > [    3.641635] k81_bootloader 0-0010: probe completed
> > > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> > > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> > > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> > > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> > > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> > > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> > > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> > > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> > > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> > > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> > > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> > > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> > > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> > > [    3.737549] sfp_register_socket: got sfp_bus
> > > [    3.740709] sfp_register_socket: register sfp_bus
> > > [    3.745459] sfp_register_bus: ops ok!
> > > [    3.749179] sfp_register_bus: Try to attach
> > > [    3.753419] sfp_register_bus: Attach succeeded
> > > [    3.757914] sfp_register_bus: upstream ops attach
> > > [    3.762677] sfp_register_bus: Bus registered
> > > [    3.766999] sfp_register_socket: register sfp_bus succeeded
> > > [    3.775870] of_cfs_init
> > > [    3.776000] of_cfs_init: OK
> > > [    3.778211] clk: Not disabling unused clocks
> > > [   11.278477] Freeing initrd memory: 206056K
> > > [   11.279406] Freeing unused kernel memory: 1536K
> > > [   11.314006] Checked W+X mappings: passed, no W+X pages found
> > > [   11.314142] Run /init as init process
> > > INIT: version 3.01 booting
> > > fsck (busybox 1.35.0)
> > > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> > > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> > > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> > > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> > > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> > > Starting random number generator daemon.
> > > [   11.580662] random: crng init done
> > > Starting udev
> > > [   11.613159] udevd[142]: starting version 3.2.10
> > > [   11.620385] udevd[143]: starting eudev-3.2.10
> > > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> > > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> > > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> > > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > hwclock: RTC_RD_TIME: Invalid exchange
> > > Mon Feb 27 08:40:53 UTC 2023
> > > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> > > hwclock: RTC_SET_TIME: Invalid exchange
> > > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > Starting mcud
> > > INIT: Entering runlevel: 5
> > > Configuring network interfaces... done.
> > > resetting network interface
> > > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> > > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> > > [   12.732151] pps pps0: new PPS source ptp0
> > > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> > > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> > > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> > > [   12.761804] pps pps1: new PPS source ptp1
> > > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> > > Auto-negotiation: off
> > > Auto-negotiation: off
> > > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> > > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> > > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> > > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> > > Starting Failsafe Secure Shell server in port 2222: sshd
> > > done.
> > > Starting rpcbind daemon...done.
> > >
> > > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > hwclock: RTC_RD_TIME: Invalid exchange
> > > Starting State Manager Service
> > > Start state-manager restarter...
> > > (XEN) d0v1 Forwarding AES operation: 3254779951
> > > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> > > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> > > [   17.350670] BTRFS info (device dm-0): has skinny extents
> > > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> > > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> > > [   17.872699] BTRFS info (device dm-1): using free space tree
> > > [   17.872771] BTRFS info (device dm-1): has skinny extents
> > > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> > > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> > > [   17.895695] BTRFS info (device dm-1): checking UUID tree
> > >
> > > Setting domain 0 name, domid and JSON config...
> > > Done setting up Dom0
> > > Starting xenconsoled...
> > > Starting QEMU as disk backend for dom0
> > > Starting domain watchdog daemon: xenwatchdogd startup
> > >
> > > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> > > [done]
> > > [   18.465552] BTRFS info (device dm-2): using free space tree
> > > [   18.465629] BTRFS info (device dm-2): has skinny extents
> > > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> > > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> > > [   18.486659] BTRFS info (device dm-2): checking UUID tree
> > > OK
> > > starting rsyslogd ... Log partition ready after 0 poll loops
> > > done
> > > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> > > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
> > >
> > > Please insert USB token and enter your role in login prompt.
> > >
> > > login:
> > >
> > > Regards,
> > > O.
> > >
> > >
> > > On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > >     Hi Oleg,
> > >
> > >     Here is the issue from your logs:
> > >
> > >         SError Interrupt on CPU0, code 0xbe000000 -- SError
> > >
> > >     SErrors are special signals to notify software of serious hardware
> > >     errors.  Something is going very wrong. Defective hardware is a
> > >     possibility.  Another possibility if software accessing address ranges
> > >     that it is not supposed to, sometimes it causes SErrors.
> > >
> > >     Cheers,
> > >
> > >     Stefano
> > >
> > >
> > >     On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> > >
> > >     > Hello,
> > >     >
> > >     > Thanks guys.
> > >     > I found out where the problem was.
> > >     > Now dom0 booted more. But I have a new one.
> > >     > This is a kernel panic during Dom0 loading.
> > >     > Maybe someone is able to suggest something ?
> > >     >
> > >     > Regards,
> > >     > O.
> > >     >
> > >     > [    3.771362] sfp_register_bus: upstream ops attach
> > >     > [    3.776119] sfp_register_bus: Bus registered
> > >     > [    3.780459] sfp_register_socket: register sfp_bus succeeded
> > >     > [    3.789399] of_cfs_init
> > >     > [    3.789499] of_cfs_init: OK
> > >     > [    3.791685] clk: Not disabling unused clocks
> > >     > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> > >     > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > >     > [   11.010393] Workqueue: events_unbound async_run_entry_fn
> > >     > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > >     > [   11.010422] pc : simple_write_end+0xd0/0x130
> > >     > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> > >     > [   11.010438] sp : ffffffc00809b910
> > >     > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> > >     > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> > >     > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> > >     > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> > >     > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> > >     > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> > >     > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> > >     > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> > >     > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> > >     > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> > >     > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> > >     > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > >     > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> > >     > [   11.010548] Workqueue: events_unbound async_run_entry_fn
> > >     > [   11.010556] Call trace:
> > >     > [   11.010558]  dump_backtrace+0x0/0x1c4
> > >     > [   11.010567]  show_stack+0x18/0x2c
> > >     > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> > >     > [   11.010583]  dump_stack+0x18/0x34
> > >     > [   11.010588]  panic+0x14c/0x2f8
> > >     > [   11.010597]  print_tainted+0x0/0xb0
> > >     > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> > >     > [   11.010614]  do_serror+0x28/0x60
oCZndDsgWyDCoCAxMS4wMTA2MjFdIMKgZWwxaF82NF9lcnJvcl9oYW5kbGVyKzB4MzAvMHg1MDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
MS4wMTA2MjhdIMKgZWwxaF82NF9lcnJvcisweDc4LzB4N2M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjMzXSDCoHNpbXBsZV93
cml0ZV9lbmQrMHhkMC8weDEzMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2MzldIMKgZ2VuZXJpY19wZXJmb3JtX3dyaXRlKzB4
MTE4LzB4MWUwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIDExLjAxMDY0NF0gwqBfX2dlbmVyaWNfZmlsZV93cml0ZV9pdGVyKzB4MTM4LzB4
MWM0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIDExLjAxMDY1MF0gwqBnZW5lcmljX2ZpbGVfd3JpdGVfaXRlcisweDc4LzB4ZDA8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEw
NjU2XSDCoF9fa2VybmVsX3dyaXRlKzB4ZmMvMHgyYWM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjY1XSDCoGtlcm5lbF93cml0
ZSsweDg4LzB4MTYwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDExLjAxMDY3M10gwqB4d3JpdGUrMHg0NC8weDk0PGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDY4MF0gwqBk
b19jb3B5KzB4YTgvMHgxMDQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjg2XSDCoHdyaXRlX2J1ZmZlcisweDM4LzB4NTg8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEu
MDEwNjkyXSDCoGZsdXNoX2J1ZmZlcisweDRjLzB4YmM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjk4XSDCoF9fZ3VuemlwKzB4
MjgwLzB4MzEwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIDExLjAxMDcwNF0gwqBndW56aXArMHgxYy8weDI4PGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDcwOV0gwqB1bnBh
Y2tfdG9fcm9vdGZzKzB4MTcwLzB4MmIwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDcxNV0gwqBkb19wb3B1bGF0ZV9yb290ZnMr
MHg4MC8weDE2NDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxMS4wMTA3MjJdIMKgYXN5bmNfcnVuX2VudHJ5X2ZuKzB4NDgvMHgxNjQ8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEu
MDEwNzI4XSDCoHByb2Nlc3Nfb25lX3dvcmsrMHgxZTQvMHgzYTA8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzM2XSDCoHdvcmtl
cl90aHJlYWQrMHg3Yy8weDRjMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3NDNdIMKga3RocmVhZCsweDEyMC8weDEzMDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4w
MTA3NTBdIMKgcmV0X2Zyb21fZm9yaysweDEwLzB4MjA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzU3XSBTTVA6IHN0b3BwaW5n
IHNlY29uZGFyeSBDUFVzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDExLjAxMDc4NF0gS2VybmVsIE9mZnNldDogMHgyZjYxMjAwMDAwIGZy
b20gMHhmZmZmZmZjMDA4MDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDc4OF0gUEhZU19PRkZTRVQ6IDB4MDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3
OTBdIENQVSBmZWF0dXJlczogMHgwMDAwMDQwMSwwMDAwMDg0Mjxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3OTVdIE1lbW9yeSBM
aW1pdDogbm9uZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxMS4yNzc1MDldIC0tLVsgZW5kIEtlcm5lbCBwYW5pYyAtIG5vdCBzeW5jaW5n
OiBBc3luY2hyb25vdXMgU0Vycm9yIEludGVycnVwdCBdLS0tPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg0L/RgiwgMjEg0LDQv9GALiAyMDIz4oCv0LMuINCy
IDE1OjUyLCBNaWNoYWwgT3J6ZWwgJmx0OzxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1k
LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1p
Y2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEhpIE9sZWcsPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoE9uIDIxLzA0LzIw
MjMgMTQ6NDksIE9sZWcgTmlraXRlbmtvIHdyb3RlOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBIZWxsbyBNaWNoYWwsPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgSSB3YXMgbm90IGFibGUgdG8gZW5hYmxlIGVhcmx5cHJpbnRrIGluIHRoZSB4ZW4gZm9yIG5v
dy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IEkgZGVjaWRlZCB0byBjaG9vc2UgYW5vdGhlciB3YXkuPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBU
aGlzIGlzIGEgeGVuJiMzOTtzIGNvbW1hbmQgbGluZSB0aGF0IEkgZm91bmQgb3V0IGNvbXBsZXRl
bHkuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgJCQkJCBjb25zb2xlPWR0dWFydCBkdHVhcnQ9
c2VyaWFsMCBkb20wX21lbT0xNjAwTSBkb20wX21heF92Y3B1cz0yIGRvbTBfdmNwdXNfcGluIGJv
b3RzY3J1Yj0wPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdndmaT1uYXRpdmU8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBzY2hlZD1udWxsPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgdGltZXJfc2xvcD0wPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgWWVzLCBhZGRpbmcgYSBwcmludGsoKSBpbiBYZW4gd2FzIGFs
c28gYSBnb29kIGlkZWEuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFNvIHlvdSBhcmUgYWJzb2x1dGVseSByaWdo
dCBhYm91dCBhIGNvbW1hbmQgbGluZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE5vdyBJIGFtIGdvaW5nIHRvIGZpbmQg
b3V0IHdoeSB4ZW4gZGlkIG5vdCBoYXZlIHRoZSBjb3JyZWN0IHBhcmFtZXRlcnMgZnJvbSB0aGUg
ZGV2aWNlIHRyZWUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgTWF5YmUgeW91IHdpbGwgZmluZCB0aGlzIGRvY3VtZW50IGhlbHBm
dWw6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgPGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54
X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQiIHJlbD0i
bm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVu
L2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3Rpbmcu
dHh0PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94
bG54X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQiIHJl
bD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngv
eGVuL2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3Rp
bmcudHh0PC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgfk1pY2hhbDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBSZWdhcmRzLDxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
T2xlZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7INC/0YIsIDIxINCw0L/RgC4gMjAyM+KAr9CzLiDQsiAx
MToxNiwgTWljaGFsIE9yemVsICZsdDs8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5j
b20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNo
YWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hh
bC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9
Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsmZ3Q7Ojxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgT24gMjEvMDQvMjAyMyAxMDowNCwgT2xlZyBOaWtp
dGVua28gd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IEhl
bGxvIE1pY2hhbCw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBZ
ZXMsIEkgdXNlIHlvY3RvLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7IFllc3RlcmRheSBhbGwgZGF5IGxvbmcgSSB0cmllZCB0byBmb2xsb3cgeW91ciBzdWdnZXN0
aW9ucy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IEkgZmFjZWQgYSBwcm9ibGVtLjxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDsgTWFudWFsbHkgaW4gdGhlIHhlbiBjb25maWcgYnVpbGQgZmlsZSBJIHBhc3Rl
ZCB0aGUgc3RyaW5nczo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBJbiB0aGUgLmNvbmZpZyBmaWxlIG9yIGlu
IHNvbWUgWW9jdG8gZmlsZSAobGlzdGluZyBhZGRpdGlvbmFsIEtjb25maWcgb3B0aW9ucykgYWRk
ZWQgdG8gU1JDX1VSST88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBZb3Ugc2hvdWxkbiYjMzk7dCByZWFsbHkg
bW9kaWZ5IC5jb25maWcgZmlsZSBidXQgaWYgeW91IGRvLCB5b3Ugc2hvdWxkIGV4ZWN1dGUgJnF1
b3Q7bWFrZSBvbGRkZWZjb25maWcmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqBhZnRlcndhcmRzLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0OyBDT05GSUdfRUFSTFlfUFJJTlRLPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBDT05G
SUdfRUFSTFlfUFJJTlRLX1pZTlFNUDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgQ09ORklHX0VBUkxZ
X1VBUlRfQ0hPSUNFX0NBREVOQ0U8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBJIGhvcGUgeW91IGFkZGVkID15
IHRvIHRoZW0uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEFueXdheSwgeW91IGhhdmUgYXQg
bGVhc3QgdGhlIGZvbGxvd2luZyBzb2x1dGlvbnM6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgMSkgUnVuIGJp
dGJha2UgeGVuIC1jIG1lbnVjb25maWcgdG8gcHJvcGVybHkgc2V0IGVhcmx5IHByaW50azxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoDIpIEZpbmQgb3V0IGhvdyB5b3UgZW5hYmxlIG90aGVyIEtjb25maWcgb3B0
aW9ucyBpbiB5b3VyIHByb2plY3QgKGUuZy4gQ09ORklHX0NPTE9SSU5HPXkgdGhhdCBpcyBub3Q8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBlbmFibGVkIGJ5PGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZGVmYXVsdCk8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAzKSBBcHBlbmQgdGhlIGZvbGxvd2luZyB0byAmcXVvdDt4ZW4vYXJjaC9hcm0vY29uZmlncy9h
cm02NF9kZWZjb25maWcmcXVvdDs6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgQ09ORklHX0VBUkxZX1BSSU5U
S19aWU5RTVA9eTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqB+TWljaGFsPGJyPg0KJmd0O8Kg
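Option 3 above amounts to appending the Kconfig symbols to the arm64 defconfig and then regenerating .config. A minimal sketch of that append step, staged against a scratch copy rather than a real checkout (the actual path named in the thread is xen/arch/arm/configs/arm64_defconfig):

```shell
# Stage the defconfig append in a scratch directory; in a real tree you
# would edit xen/arch/arm/configs/arm64_defconfig in place instead.
workdir="$(mktemp -d)"
defconfig="$workdir/arm64_defconfig"
touch "$defconfig"

# Kconfig symbols listed earlier in the thread; each needs an explicit =y.
for opt in CONFIG_EARLY_PRINTK CONFIG_EARLY_PRINTK_ZYNQMP CONFIG_EARLY_UART_CHOICE_CADENCE; do
    printf '%s=y\n' "$opt" >> "$defconfig"
done

# The edit only takes effect once .config is regenerated from the
# defconfig, e.g. via "make olddefconfig" as noted above.
cat "$defconfig"
```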
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IEhvc3QgaGFuZ3MgaW4g
YnVpbGQgdGltZS7CoDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgTWF5YmUgSSBkaWQgbm90IHNldCBz
b21ldGhpbmcgaW4gdGhlIGNvbmZpZyBidWlsZCBmaWxlID88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgT2xl
Zzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7INGH0YIsIDIwINCw
0L/RgC4gMjAyM+KAr9CzLiDQsiAxMTo1NywgT2xlZyBOaWtpdGVua28gJmx0OzxhIGhyZWY9Im1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBn
bWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWls
LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxh
bmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86
b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwu
Y29tPC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdv
b2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0i
X2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29v
ZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdt
YWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0
OyZndDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqBUaGFua3MgTWljaGFsLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqBZb3UgZ2F2ZSBtZSBhbiBpZGVhLjxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoEkgYW0gZ29pbmcgdG8gdHJ5IGl0IHRvZGF5Ljxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoE8uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoNGH0YIsIDIwINCw0L/RgC4gMjAyM+KAr9CzLiDQsiAxMTo1NiwgT2xlZyBOaWtpdGVu
a28gJmx0OzxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2Js
YW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWls
LmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWls
LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+
b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVm
PSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdv
b2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBn
bWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0i
X2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdt
YWlsLmNvbTwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqBUaGFua3MgU3RlZmFuby48YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgSSBhbSBnb2luZyB0
byBkbyBpdCB0b2RheS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgUmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqBPLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqDRgdGALCAxOSDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMjM6MDUsIFN0ZWZhbm8gU3RhYmVs
bGluaSAmbHQ7PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0i
X2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlA
a2VybmVsLm9yZzwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxh
bmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
OnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJu
ZWwub3JnPC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJl
bGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8
L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRh
cmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNz
dGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3Jn
PC9hPiZndDsmZ3Q7Jmd0OyZndDs6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoE9uIFdlZCwgMTkgQXByIDIwMjMsIE9sZWcgTmlr
aXRlbmtvIHdyb3RlOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDsgSGkgTWljaGFsLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IEkgY29ycmVj
dGVkIHhlbiYjMzk7cyBjb21tYW5kIGxpbmUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0OyBOb3cgaXQgaXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7IHhlbix4ZW4tYm9vdGFyZ3MgPSAmcXVvdDtjb25zb2xlPWR0dWFydCBk
dHVhcnQ9c2VyaWFsMCBkb20wX21lbT0xNjAwTSBkb20wX21heF92Y3B1cz0yPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgZG9tMF92Y3B1c19waW48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBib290c2NydWI9MCB2d2Zp
PW5hdGl2ZSBzY2hlZD1udWxsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0OyB0aW1lcl9zbG9wPTAgd2F5X3NpemU9NjU1MzYgeGVuX2NvbG9ycz0wLTMgZG9tMF9j
b2xvcnM9NC03JnF1b3Q7Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqA0IGNvbG9ycyBpcyB3YXkgdG9vIG1hbnkgZm9yIHhlbiwg
anVzdCBkbyB4ZW5fY29sb3JzPTAtMC4gVGhlcmUgaXMgbm88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAgwqAgwqBhZHZhbnRhZ2UgaW4gdXNpbmcgbW9yZSB0aGFuIDEgY29sb3Ig
Zm9yIFhlbi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgNCBjb2xvcnMgaXMgdG9vIGZldyBmb3IgZG9tMCwgaWYgeW91IGFyZSBn
aXZpbmcgMTYwME0gb2YgbWVtb3J5IHRvIERvbTAuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgRWFjaCBjb2xvciBpcyAyNTZNLiBGb3IgMTYwME0geW91IHNob3VsZCBn
aXZlIGF0IGxlYXN0IDcgY29sb3JzLiBUcnk6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoHhlbl9jb2xvcnM9MC0wIGRvbTBfY29s
b3JzPTEtODxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
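Stefano's sizing rule above (each cache color maps a fixed 256M granule, so a domain needs at least ceil(mem / 256M) colors) can be sketched as a quick arithmetic check:

```shell
# Sketch of the color-count rule quoted above: with cache coloring,
# each color contributes one 256M granule of mappable memory, so a
# domain needs ceil(mem_mib / color_mib) colors to cover its memory.
mem_mib=1600     # dom0_mem=1600M from the thread
color_mib=256    # granule per color, as stated by Stefano
colors=$(( (mem_mib + color_mib - 1) / color_mib ))   # integer ceil
echo "$colors"   # -> 7, matching "at least 7 colors" for 1600M
```

With dom0_colors=4-7 only 4 colors (1024M) were assigned, which is why 1600M could not be satisfied; dom0_colors=1-8 grants 8 colors (2048M), one more than the minimum.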
>>>>>>>>> Unfortunately the result was the same.
>>>>>>>>>
>>>>>>>>> (XEN)  - Dom0 mode: Relaxed
>>>>>>>>> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>>>>>>>>> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>>>>>>>>> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>>>>>>>>> (XEN) Coloring general information
>>>>>>>>> (XEN) Way size: 64kB
>>>>>>>>> (XEN) Max. number of colors available: 16
>>>>>>>>> (XEN) Xen color(s): [ 0 ]
>>>>>>>>> (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>>>>>>>>> (XEN) Color array allocation failed for dom0
>>>>>>>>> (XEN)
>>>>>>>>> (XEN) ****************************************
>>>>>>>>> (XEN) Panic on CPU 0:
>>>>>>>>> (XEN) Error creating domain 0
>>>>>>>>> (XEN) ****************************************
>>>>>>>>> (XEN)
>>>>>>>>> (XEN) Reboot in five seconds...
>>>>>>>>>
>>>>>>>>> I am going to find out how the command line arguments are passed and parsed.
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Oleg
>>>>>>>>>
>>>>>>>>> On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>>>>>>>>>> Hi Michal,
>>>>>>>>>>
>>>>>>>>>> You put my nose into the problem. Thank you.
>>>>>>>>>> I am going to use your point.
>>>>>>>>>> Let's see what happens.
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Oleg
>>>>>>>>>>
>>>>>>>>>> On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
bC5vcnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7Jmd0OyZndDs6PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgSGkgT2xlZyw8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgT24gMTkvMDQvMjAyMyAwOTowMywgT2xl
ZyBOaWtpdGVua28gd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IEhlbGxvIFN0ZWZh
bm8sPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgVGhhbmtzIGZvciB0aGUgY2xhcmlmaWNhdGlvbi48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE15IGNvbXBhbnkg
dXNlcyB5b2N0byBmb3IgaW1hZ2UgZ2VuZXJhdGlvbi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFdoYXQga2luZCBvZiBpbmZvcm1h
dGlvbiBkbyB5b3UgbmVlZCB0byBjb25zdWx0IG1lIGluIHRoaXMgY2FzZSA/PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgTWF5
YmUgbW9kdWxlcyBzaXplcy9hZGRyZXNzZXMgd2hpY2ggd2VyZSBtZW50aW9uZWQgYnkgQEp1bGll
biBHcmFsbDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+
anVsaWVuQHhlbi5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4u
b3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0Ozxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVu
QHhlbi5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0
YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBo
cmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9y
ZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0i
X2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Omp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVs
aWVuQHhlbi5vcmc8L2E+Jmd0OyZndDsmZ3Q7Jmd0OyA/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoFNvcnJ5IGZvciBqdW1waW5nIGludG8gZGlzY3Vzc2lvbiwg
YnV0IEZXSUNTIHRoZSBYZW4gY29tbWFuZCBsaW5lIHlvdSBwcm92aWRlZDxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoHNlZW1zIHRvIGJlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgbm90IHRoZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoG9uZTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoFhlbiBib290ZWQgd2l0aC4gVGhl
IGVycm9yIHlvdSBhcmUgb2JzZXJ2aW5nIG1vc3QgbGlrZWx5IGlzIGR1ZSB0byBkb20wIGNvbG9y
czxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGNvbmZpZ3Vy
YXRpb24gbm90PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgYmVpbmc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBzcGVjaWZpZWQgKGkuZS4gbGFjayBvZiBkb20wX2NvbG9ycz0m
bHQ7Jmd0OyBwYXJhbWV0ZXIpLiBBbHRob3VnaCBpbiB0aGUgY29tbWFuZCBsaW5lIHlvdTxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHByb3ZpZGVkLCB0aGlz
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgcGFyYW1ldGVyPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgaXMgc2V0LCBJIHN0cm9uZ2x5IGRvdWJ0IHRoYXQgdGhpcyBpcyB0aGUg
YWN0dWFsIGNvbW1hbmQgbGluZSBpbiB1c2UuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoFlvdSB3cm90ZTo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB4ZW4seGVuLWJvb3RhcmdzID0gJnF1b3Q7Y29uc29s
ZT1kdHVhcnQgZHR1YXJ0PXNlcmlhbDAgZG9tMF9tZW09MTYwME0gZG9tMF9tYXhfdmNwdXM9Mjxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGRvbTBfdmNwdXNf
cGluPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgYm9vdHNjcnViPTAgdndmaT1uYXRpdmU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBzY2hlZD1udWxsIHRpbWVyX3Nsb3A9MCB3
YXlfc3ppemU9NjU1MzYgeGVuX2NvbG9ycz0wLTMgZG9tMF9jb2xvcnM9NC03JnF1b3Q7Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBidXQ6PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgMSkgd2F5X3N6aXpl
IGhhcyBhIHR5cG88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAyKSB5b3Ugc3BlY2lmaWVkIDQgY29sb3JzICgwLTMpIGZvciBYZW4sIGJ1dCB0
aGUgYm9vdCBsb2cgc2F5cyB0aGF0IFhlbiBoYXMgb25seTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoG9uZTo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAoWEVOKSBYZW4gY29sb3Iocyk6IFsgMCBdPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoFRoaXMgbWFrZXMgbWUgYmVsaWV2ZSB0aGF0IG5v
IGNvbG9ycyBjb25maWd1cmF0aW9uIGFjdHVhbGx5IGVuZCB1cCBpbiBjb21tYW5kIGxpbmU8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB0aGF0IFhlbjxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJvb3RlZDxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHdpdGguPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU2luZ2xlIGNvbG9y
IGZvciBYZW4gaXMgYSAmcXVvdDtkZWZhdWx0IGlmIG5vdCBzcGVjaWZpZWQmcXVvdDsgYW5kIHdh
eSBzaXplIHdhcyBwcm9iYWJseTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGNhbGN1
bGF0ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBieSBh
c2tpbmc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqBIVy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgU28gSSB3b3VsZCBzdWdnZXN0IHRvIGZpcnN0IGNyb3NzLWNoZWNrIHRoZSBjb21tYW5kIGxp
bmUgaW4gdXNlLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB+
TWljaGFsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBSZWdhcmRz
LDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgT2xlZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7INCy0YIsIDE4INCw0L/RgC4gMjAyM+KAr9CzLiDQsiAyMDo0NCwg
U3RlZmFubyBTdGFiZWxsaW5pICZsdDs8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5r
Ij5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxp
bmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlA
a2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0
OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIg
dGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3Rh
YmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0
YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmci
IHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7Jmd0Ozxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5z
c3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3Rh
YmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9y
ZzwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2Js
YW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJl
Zj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVs
bGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGlu
aUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4m
Z3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRh
cmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhy
ZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJl
bGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozo8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
T24gVHVlLCAxOCBBcHIgMjAyMywgT2xlZyBOaWtpdGVua28gd3JvdGU6PGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
OyBIaSBKdWxpZW4sPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgJmd0OyZndDsgVGhp
cyBmZWF0dXJlIGhhcyBub3QgYmVlbiBtZXJnZWQgaW4gWGVuIHVwc3RyZWFtIHlldDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7ICZndDsgd291bGQgYXNzdW1lIHRoYXQgdXBzdHJlYW0g
KyB0aGUgc2VyaWVzIG9uIHRoZSBNTCBbMV0gd29yazxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7IFBsZWFzZSBjbGFyaWZ5IHRoaXMgcG9pbnQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBCZWNhdXNl
IHRoZSB0d28gdGhvdWdodHMgYXJlIGNvbnRyb3ZlcnNpYWwuPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEhpIE9s
ZWcsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoEFzIEp1bGllbiB3cm90ZSwgdGhlcmUgaXMgbm90aGluZyBjb250
cm92ZXJzaWFsLiBBcyB5b3UgYXJlIGF3YXJlLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoFhpbGlueCBtYWludGFpbnMg
YSBzZXBhcmF0ZSBYZW4gdHJlZSBzcGVjaWZpYyBmb3IgWGlsaW54IGhlcmU6PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIg
dGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEg
aHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsgJmx0Ozxh
IGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRh
cmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhy
ZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdl
dD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7Jmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJo
dHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9i
bGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBocmVmPSJodHRw
czovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFu
ayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVm
PSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9
Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBocmVmPSJo
dHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9i
bGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0OyZndDsmZ3Q7ICZsdDs8
YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0
YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBo
cmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJn
ZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0OyAmbHQ7PGEg
aHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJl
Zj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0
PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0OzxhIGhyZWY9Imh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2Js
YW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBz
Oi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5r
Ij5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0OzxhIGhyZWY9
Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0i
X2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2Js
YW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoGFuZCB0aGUgYnJhbmNoIHlvdSBhcmUgdXNpbmcgKHhsbnhfcmViYXNlXzQu
MTYpIGNvbWVzIGZyb20gdGhlcmUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBJbnN0ZWFkLCB0aGUgdXBz
dHJlYW0gWGVuIHRyZWUgbGl2ZXMgaGVyZTo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqA8YSBocmVmPSJodHRwczovL3hl
bmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJl
ciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4u
Z2l0O2E9c3VtbWFyeTwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dp
dHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFu
ayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9h
PiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9w
PXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRw
czovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8
YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1h
cnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4u
b3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4mZ3Q7Jmd0OyAmbHQ7PGEgaHJlZj0i
aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3
ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMu
eGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeTwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3Jn
L2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9i
bGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5
PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5n
aXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hl
bmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0OyZndDsmZ3Q7
ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDth
PXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0
cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4gJmx0OzxhIGhyZWY9Imh0
dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJu
b3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2Vi
Lz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94
ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJy
ZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVu
LmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxh
bmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwv
YT4mZ3Q7Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9w
PXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRw
czovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8
YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1h
cnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4u
b3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0OzxhIGhyZWY9
Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVs
PSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0
d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRz
Lnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRh
cmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDth
PXN1bW1hcnk8L2E+Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgVGhlIENhY2hl
IENvbG9yaW5nIGZlYXR1cmUgdGhhdCB5b3UgYXJlIHRyeWluZyB0byBjb25maWd1cmUgaXMgcHJl
c2VudDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoGluIHhsbnhfcmViYXNlXzQuMTYsIGJ1dCBub3QgeWV0IHByZXNlbnQg
dXBzdHJlYW0gKHRoZXJlIGlzIGFuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgb3V0c3RhbmRpbmcgcGF0Y2ggc2VyaWVz
IHRvIGFkZCBjYWNoZSBjb2xvcmluZyB0byBYZW4gdXBzdHJlYW0gYnV0IGl0PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
aGFzbiYjMzk7dCBiZWVuIG1lcmdlZCB5ZXQuKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgQW55d2F5LCBp
ZiB5b3UgYXJlIHVzaW5nIHhsbnhfcmViYXNlXzQuMTYgaXQgZG9lc24mIzM5O3QgbWF0dGVyIHRv
byBtdWNoIGZvcjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoHlvdSBhcyB5b3UgYWxyZWFkeSBoYXZlIENhY2hlIENvbG9y
aW5nIGFzIGEgZmVhdHVyZSB0aGVyZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEkgdGFrZSB5b3UgYXJl
IHVzaW5nIEltYWdlQnVpbGRlciB0byBnZW5lcmF0ZSB0aGUgYm9vdCBjb25maWd1cmF0aW9uPyBJ
Zjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoHNvLCBwbGVhc2UgcG9zdCB0aGUgSW1hZ2VCdWlsZGVyIGNvbmZpZyBmaWxl
IHRoYXQgeW91IGFyZSB1c2luZy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgQnV0IGZyb20gdGhlIGJvb3QgbWVz
c2FnZSwgaXQgbG9va3MgbGlrZSB0aGUgY29sb3JzIGNvbmZpZ3VyYXRpb24gZm9yPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgRG9tMCBpcyBpbmNvcnJlY3QuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7IDxicj4NCiZndDsgPGJyPg0KPC9ibG9ja3F1b3RlPjwvZGl2Pg0K
--000000000000175b6a05faee4a1e--
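[Editorial note: a minimal sketch of the corrected device-tree fragment discussed above, assuming the parameter name intended by the typo fix is way_size; all values are the ones quoted in the thread and are not verified against any particular board or Xen tree.]

```dts
/* Hypothetical /chosen fragment for the xlnx_rebase_4.16 cache-coloring
 * setup quoted in the thread. "way_szize" is corrected to "way_size";
 * every other value is copied verbatim from the mail. */
/ {
        chosen {
                xen,xen-bootargs = "console=dtuart dtuart=serial0 \
dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native \
sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
        };
};
```

As Michal notes above, whether such a fragment actually reaches Xen is best checked by comparing it against the command line printed in the Xen boot log.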


From xen-devel-bounces@lists.xenproject.org Fri May 05 09:38:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 09:38:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530274.825793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1purth-0004UV-E3; Fri, 05 May 2023 09:38:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530274.825793; Fri, 05 May 2023 09:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1purth-0004UO-9Q; Fri, 05 May 2023 09:38:41 +0000
Received: by outflank-mailman (input) for mailman id 530274;
 Fri, 05 May 2023 09:38:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=psjd=A2=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1purtg-0004UI-IO
 for xen-devel@lists.xen.org; Fri, 05 May 2023 09:38:40 +0000
Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com
 [2607:f8b0:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9cf5b330-eb28-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 11:38:39 +0200 (CEST)
Received: by mail-pl1-x62e.google.com with SMTP id
 d9443c01a7336-1aae46e62e9so10933755ad.2
 for <xen-devel@lists.xen.org>; Fri, 05 May 2023 02:38:39 -0700 (PDT)
Received: from localhost ([122.172.85.8]) by smtp.gmail.com with ESMTPSA id
 j3-20020a63fc03000000b005140ce70582sm1225493pgi.44.2023.05.05.02.38.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 05 May 2023 02:38:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cf5b330-eb28-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1683279518; x=1685871518;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=WaG+8ofRnnd64SI0wvpySFG9n+vU2w5MynZBpHLEkRw=;
        b=AgxI2FxlRg1mgszxPM93IGao7P+5a85jLdYesRgNPJ4Uk5ie5AM06VZszJaN91M+Xr
         le0HRmV+YUrfkE8h/LsxoE8Rk6HNh6+dJ+AaJCqg9JIm1g1I8BXNu/2PK8C0aYz1c/xY
         rdn5JdW99zMW/OgIpQgcsI0/hYa1oLAf20FJZmvjD+ncA3u8+y8UC79ph6yvF1/I3W6+
         Z9bFhUfud89AQd3b2wVqG0rLGXFXQZ+wU3V+oTD4bDPAvJrNdHBB+kw0pbcxUoRLgW94
         7//4r8LQLnG4jPF1+HOuQjcNwxs1WOkv5dKNJm+0IcKpOmWi0tWaxfkkaCg5/qWjV2Nu
         z6pA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683279518; x=1685871518;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WaG+8ofRnnd64SI0wvpySFG9n+vU2w5MynZBpHLEkRw=;
        b=icUp3N0FKsw7dQViFinsbM24BhbQsosW32zMDcbig2Tst0y0/xvyiAc1JI/++QVYJh
         s2r5FtNGToxbCFNIe6qh3OljQCPAM4zYKHN/QqakJisyPtd4dCPD4j7yoSujQZz2yprH
         HGAmgj/yRsyV7uzOu7rhjhvBDncu+JltjTxZ98I9eKPoD36ZsjMIl67j0zcz0i0D/SvL
         NzuiSjNPDK8d94GSY5T+Dy+HI29Os4vJoJOEW/cGR7psajWI/gmCJckF/MILZvXXDZu5
         zGBh9S602LI5Pk1IXooaO3ABeZwkfXbaVAqWnclUQw1Zpqhh92sPpJA7wNJG8FLd0bCt
         ccZw==
X-Gm-Message-State: AC+VfDy9b8ZmxfWT5utcF5g3rmwz8+44cEzD2LWARqrquTaKk+d9XITs
	0v6lSdVilbETOzqDLpx0F22Twg==
X-Google-Smtp-Source: ACHHUZ4hkv3ImU5k++gFOf4/s2vPNYkPVZABtqaSq1SwoIqfgDQ4FUodHC/24r9E5PzsZRunFUo2LQ==
X-Received: by 2002:a17:903:120d:b0:1a5:2993:8aa6 with SMTP id l13-20020a170903120d00b001a529938aa6mr889555plh.63.1683279517711;
        Fri, 05 May 2023 02:38:37 -0700 (PDT)
Date: Fri, 5 May 2023 15:08:35 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xen.org, Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	stratos-dev@op-lists.linaro.org,
	Alex Bennée <alex.bennee@linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>,
	Erik Schilling <erik.schilling@linaro.org>
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
Message-ID: <20230505093835.jcbwo6zjk5hcjvsm@vireshk-i7>
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
 <92c7f972-f617-40fc-bc5d-582c8154d03c@perard>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <92c7f972-f617-40fc-bc5d-582c8154d03c@perard>

Hi Anthony,

On 02-05-23, 15:44, Anthony PERARD wrote:
> > diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c
> > index faada49e184e..e1f15344ef97 100644
> > --- a/tools/libs/light/libxl_virtio.c
> > +++ b/tools/libs/light/libxl_virtio.c
> > @@ -48,11 +48,13 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid,
> >      flexarray_append_pair(back, "base", GCSPRINTF("%#"PRIx64, virtio->base));
> >      flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type));
> >      flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport));
> > +    flexarray_append_pair(back, "forced_grant", GCSPRINTF("%u", virtio->forced_grant));
> >  
> >      flexarray_append_pair(front, "irq", GCSPRINTF("%u", virtio->irq));
> >      flexarray_append_pair(front, "base", GCSPRINTF("%#"PRIx64, virtio->base));
> >      flexarray_append_pair(front, "type", GCSPRINTF("%s", virtio->type));
> >      flexarray_append_pair(front, "transport", GCSPRINTF("%s", transport));
> > +    flexarray_append_pair(front, "forced_grant", GCSPRINTF("%u", virtio->forced_grant));
> 
> This "forced_grant" feels weird to me in the protocol, I feel like this
> use of grant or not could be handled by the backend. For example in
> "blkif" protocol, there's plenty of "feature-*" which allows both
> front-end and back-end to advertise which feature they can or want to
> use.
> But maybe the fact that the device tree needs to be modified to be able
> to accommodate grant mapping means that libxl needs to ask the backend to
> use grant or not, and the frontend needs to know if it needs to use
> grant.

I am not sure if I fully understand what you are suggesting here.

The eventual frontend drivers (like drivers/i2c/busses/i2c-virtio.c)
aren't Xen aware and the respective virtio protocol doesn't talk about
how memory is mapped for the guest. The guest kernel allows both
memory mapping models and the decision is made based on the presence
or absence of the iommu node in the DT.

The backends in our case are hypervisor agnostic and aren't part of
Xen or any other hypervisor. I am not sure how the backend could
provide the mapping information to Xen so that creation of the iommu
DT node can be controlled.

Also, as I communicated in another email, the currently suggested
option in this patch, "forced_grant", isn't enough for us. We also
need a way to disable grant mappings. Right now we are creating iommu
nodes by default all the time, if the backend domain isn't Dom0.

What I probably need is something like "use_grant", where setting it
to 1 will always create the iommu node and setting it to 0 will not,
irrespective of the backend domain. This would override the current
model of creating the node by default whenever the backend isn't Dom0.

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Fri May 05 09:38:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 09:38:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530275.825803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1purtw-0004mD-LR; Fri, 05 May 2023 09:38:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530275.825803; Fri, 05 May 2023 09:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1purtw-0004m4-Ha; Fri, 05 May 2023 09:38:56 +0000
Received: by outflank-mailman (input) for mailman id 530275;
 Fri, 05 May 2023 09:38:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yBOj=A2=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1purtv-0004lV-1j
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 09:38:55 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on2060e.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::60e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a385b987-eb28-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 11:38:51 +0200 (CEST)
Received: from BY3PR05CA0002.namprd05.prod.outlook.com (2603:10b6:a03:254::7)
 by DM4PR12MB6376.namprd12.prod.outlook.com (2603:10b6:8:a0::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May
 2023 09:38:45 +0000
Received: from DM6NAM11FT019.eop-nam11.prod.protection.outlook.com
 (2603:10b6:a03:254:cafe::6b) by BY3PR05CA0002.outlook.office365.com
 (2603:10b6:a03:254::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.12 via Frontend
 Transport; Fri, 5 May 2023 09:38:45 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT019.mail.protection.outlook.com (10.13.172.172) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.27 via Frontend Transport; Fri, 5 May 2023 09:38:44 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 5 May
 2023 04:38:44 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 5 May 2023 04:38:43 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a385b987-eb28-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GR6gtOXpoC57t0HCacBTkIHeDuFNTDtoOhiwMl1S0NB31Zn4mX3CMYIMp3JlhZ6GZlNDSfmvUyKUDonfMSg5LB8gQTnU8pILRCSPP5q/JpAI6LSW61NDAlmPvzBq7jFAXPHS+9EA+lVodYp7rkfT0tl0TN/gpwbyo9Vkn6A7D6iCihd7Q1+3/B9wMEbc3dTwDw87bri34AKT3TAxriEV0NRchR7RqJPDsVN62U+Iwt3/u0AmSrXA+8VEnG8oa+fLlzlwOZzc0ye2PIGguv7zgiqK+vyBqZ8BF4pDCRARUoqL5PFDPJ1Xn1nW6HjaNcH9LZp9CDCbEJNFpTrBDYYrXg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iPitrE6wIpj4ueq7bKbBJIHgmndspWRUr4v1ZWtVluk=;
 b=ExaaTXL4SpZi/s7k8WmlxbtcM+xaJ3E4uIM8krilBukWqac2/7SPRCVLgFah8oK17zgwZ0Bq6rtDRbVSU0A+QBCHL5Q9a+PQbpLgfPeOTREPg4Ij9OqKT6HOPaKm7x5aTZdLlf2l0lmuOMF5G0z+p3non15qqLzK4macH6rgoNeTDBIliX1U24v0L00WjcUKv7hXdTUYOYcG8dv9t7G9qC2HIPGK7JE565XvUSAWhCD4tQB2+icVBMICMQELgQccG+jvRlQSHinxCem4uS123J2kMzlmz96Vl8Sv9BE2KQiMsgSvnbGN6+KLUUjHw+6f/1ezzWqB2NS3C1R1Wfz61Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iPitrE6wIpj4ueq7bKbBJIHgmndspWRUr4v1ZWtVluk=;
 b=QeekNrC1mQiY24zng4JUAsFn8nqY9QayZPQ5ESW+7cVzwnSZLdhqmKsBi4sMDyTK45qdDFjyhnj/+0GCT0iP3ECkcaRXUu4m1GMAEaYTU4Oh843af9pwsJoSyvDfAhseETinGFPb7OKTI1vS0aNmx5zXSEmmkF8IfoWuYOfTrus=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <57937e19-e038-b36a-73aa-c2a95de7e525@amd.com>
Date: Fri, 5 May 2023 11:38:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN][PATCH v6 02/19] common/device_tree: handle memory
 allocation failure in __unflatten_device_tree()
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-3-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230502233650.20121-3-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT019:EE_|DM4PR12MB6376:EE_
X-MS-Office365-Filtering-Correlation-Id: bb878aa9-5058-4563-07c8-08db4d4c8508
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	yCaqO0KGybFwAdbjNwSE4DwJym2jIDSrvKBT+IDReINK/ahz6pTHvxsN9DVQsmuf5G34Sks9al3IQYKb4PqjswFhddO74zRPiP6guhuKnsCtskWB/sSmxLCSYIztGUCvn4r/r5y5TurnkJLr0x5qinHmxzBhhsHDp+ezCW+AezkEdn7QLCeazMTD2MtIfi/wBItuac+Hq8BqtVsx6RVRwiKidu1eiWUVjDbpqP8NQS0Pzdw0XFqcw9haP26cKfQsa+pCyeaH0QO4RmozFgEVJR9Otdr/HfUlyd+orhkpvaGXbgp7QYW9rZLtlWqvm8I5Vy7AEnpPtrUyRpQCybZzEwJFUXiyv/csrNmWImdH2aWiUoeyoAngYqyePKVs870+DkmJrGPxOerxLKO7oLdT5nMKiW5+p8ACpp2bHPgG6l2Tu9vLH/eADI/ekNAEKms6YtCaeQW2BJiZ+DO0XZF0bfpxM8Stxhqtl+2bz8V7eHms5EBiwH6Hhy8aVRnhczpazonKRh0YJkCKWsCuD49KH7zIli+ptIEVS3y1UMfkhqXPMUiMozzQHaXEP3myJqk9ZdUgjDCW24SDsB3s1MlOqRrArseX7YNJVMM+GtinBHX+/7SdWE+FcvosnhbEKipBMrqpZ7vQ+PCnFbCK09K2paPThrLPCklJiRkVlGbeo8sBOzcb7mKdRy33Ii2Nfvltri7sVWtBk/jnqWa6QLA7/6vvqlIKyrAKvOAFmxywKI1TLHKB4L0hLE2FcpYFPlxT1jvtG6qahS/vHj+TVso+9Q==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(376002)(136003)(346002)(451199021)(40470700004)(46966006)(36840700001)(47076005)(83380400001)(2616005)(336012)(426003)(40460700003)(186003)(2906002)(40480700001)(36756003)(82310400005)(86362001)(31696002)(82740400003)(81166007)(356005)(36860700001)(41300700001)(8936002)(8676002)(5660300002)(44832011)(4326008)(70206006)(70586007)(316002)(478600001)(31686004)(53546011)(26005)(16576012)(110136005)(54906003)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 09:38:44.7025
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bb878aa9-5058-4563-07c8-08db4d4c8508
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT019.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6376


On 03/05/2023 01:36, Vikram Garhwal wrote:
> 
> 
> Change __unflatten_device_tree() return type to integer so it can propagate
> memory allocation failure. Add panic() in dt_unflatten_host_device_tree() for
> memory allocation failure during boot.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
I think we are missing a Fixes tag.

> ---
>  xen/common/device_tree.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 5f7ae45304..fc38a0b3dd 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -2056,8 +2056,8 @@ static unsigned long unflatten_dt_node(const void *fdt,
>   * @fdt: The fdt to expand
>   * @mynodes: The device_node tree created by the call
>   */
> -static void __init __unflatten_device_tree(const void *fdt,
> -                                           struct dt_device_node **mynodes)
> +static int __init __unflatten_device_tree(const void *fdt,
> +                                          struct dt_device_node **mynodes)
>  {
>      unsigned long start, mem, size;
>      struct dt_device_node **allnextp = mynodes;
> @@ -2078,6 +2078,8 @@ static void __init __unflatten_device_tree(const void *fdt,
> 
>      /* Allocate memory for the expanded device tree */
>      mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct dt_device_node));
> +    if ( !mem )
> +        return -ENOMEM;
> 
>      ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
> 
> @@ -2095,6 +2097,8 @@ static void __init __unflatten_device_tree(const void *fdt,
>      *allnextp = NULL;
> 
>      dt_dprintk(" <- unflatten_device_tree()\n");
> +
> +    return 0;
>  }
> 
>  static void dt_alias_add(struct dt_alias_prop *ap,
> @@ -2179,7 +2183,10 @@ dt_find_interrupt_controller(const struct dt_device_match *matches)
> 
>  void __init dt_unflatten_host_device_tree(void)
>  {
> -    __unflatten_device_tree(device_tree_flattened, &dt_host);
> +    int error = __unflatten_device_tree(device_tree_flattened, &dt_host);
NIT: there should be a blank line between the definition and the rest of the code

> +    if ( error )
> +        panic("__unflatten_device_tree failed with error %d\n", error);
> +
>      dt_alias_scan();
>  }
> 
> --
> 2.17.1
> 
> 

FWICS, patches 2 and 4 are not strictly related to DTBO; they fix issues
and propagate errors, which is always good. Therefore, by moving them to the
start of the series, they could be merged right away, reducing the number of
patches to review. At the moment they can't be, because patch 3, placed in
between, is strictly related to the rest of the series.

@julien?

~Michal



From xen-devel-bounces@lists.xenproject.org Fri May 05 10:06:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 10:06:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530282.825812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pusK8-0008VH-J9; Fri, 05 May 2023 10:06:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530282.825812; Fri, 05 May 2023 10:06:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pusK8-0008VA-GZ; Fri, 05 May 2023 10:06:00 +0000
Received: by outflank-mailman (input) for mailman id 530282;
 Fri, 05 May 2023 10:05:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SieT=A2=crudebyte.com=qemu_oss@srs-se1.protection.inumbo.net>)
 id 1pusK7-0008V4-BJ
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 10:05:59 +0000
Received: from kylie.crudebyte.com (kylie.crudebyte.com [5.189.157.229])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6e0f159c-eb2c-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 12:05:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e0f159c-eb2c-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=crudebyte.com; s=kylie; h=Content-Type:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:
	Content-ID:Content-Description;
	bh=dQqO1khjEzEoVtLNIOKXHQ3bWHajiFN7+uL2OPfkzMc=; b=mdPsTpwATAWPjUrJsoF9GqrraD
	ajLSNGDetq1iGmRFud9L4vAHbADCzTD3E3c6yEdaLI3bbNGs1D4uyazX3g9WPHteinNh9oz30GZRE
	CO8h/8ds1IYxaYm2gkOCo7yB82NJf/CHy//A8XKS7wurt69xdA5mmKgLcqj8/Qjek+I2jKDWt/eP7
	MUf5+MlWwrg3SzIYfYMzYZF4OBLVHR5ZqxuXqiJPfQAECq80PEl2Kdc0m86pSda+Ppxoi442OEqdO
	06GMIvCgnIicQa5eekq+hFLlxYcWoOD6FN3s+7HFtT8nBKvE5J53fQO3t6EsUfKXr8KWuODtgBBxu
	MlCt7vSu6nU8XxBuAO4oon6qV8IQX7DrKh4D+3O94BsG6OReeTHKqKOzqIBJ4gzvxwD4qSSl9IWZG
	l7A0qMl3H8rZ+jj80Tx2SrBg+FKrFYd1DJC+iuTaIrnaibk3DWZhOFRDdUNAv5DG+NzAE2Whe0SPF
	V8rbCFG37AMef1xY+AxMloEZXIOAGM1m3edeEBVnPOQHL9fXA4OtsP17LmZUTqsh/kKAexb2RgY//
	FwZCdBD4uickZaxI4MJcwFtO/m3AL4f1b0ifFtnUgpStquuZ98puAJ+oPGsAGRWmmJ+TBYYnC2ZEA
	TNUaF0NX0WSuOJtxSSqcaHEl5uHf4J0kx6/+e/YN4=;
From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: qemu-devel@nongnu.org
Cc: Jason Andryuk <jandryuk@gmail.com>, Greg Kurz <groug@kaod.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>,
 Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH] 9pfs/xen: Fix segfault on shutdown
Date: Fri, 05 May 2023 12:05:45 +0200
Message-ID: <43162544.QFhiSxD2Za@silver>
In-Reply-To: <20230502143722.15613-1-jandryuk@gmail.com>
References: <20230502143722.15613-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"

Hi Jason,

as this is a Xen-specific change, I would like Stefano or another Xen
developer to take a look at it; just a few things from my side ...

On Tuesday, May 2, 2023 4:37:22 PM CEST Jason Andryuk wrote:
> xen_9pfs_free can't use gnttabdev since it is already closed and NULL-ed

Where exactly does that access happen? A backtrace or a more detailed commit
log description would help.

> out when free is called.  Do the teardown in _disconnect().  This
> matches the setup done in _connect().
> 
> trace-events are also added for the XenDevOps functions.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  hw/9pfs/trace-events     |  5 +++++
>  hw/9pfs/xen-9p-backend.c | 36 +++++++++++++++++++++++-------------
>  2 files changed, 28 insertions(+), 13 deletions(-)
> 
> diff --git a/hw/9pfs/trace-events b/hw/9pfs/trace-events
> index 6c77966c0b..7b5b0b5a48 100644
> --- a/hw/9pfs/trace-events
> +++ b/hw/9pfs/trace-events
> @@ -48,3 +48,8 @@ v9fs_readlink(uint16_t tag, uint8_t id, int32_t fid) "tag %d id %d fid %d"
>  v9fs_readlink_return(uint16_t tag, uint8_t id, char* target) "tag %d id %d name %s"
>  v9fs_setattr(uint16_t tag, uint8_t id, int32_t fid, int32_t valid, int32_t mode, int32_t uid, int32_t gid, int64_t size, int64_t atime_sec, int64_t mtime_sec) "tag %u id %u fid %d iattr={valid %d mode %d uid %d gid %d size %"PRId64" atime=%"PRId64" mtime=%"PRId64" }"
>  v9fs_setattr_return(uint16_t tag, uint8_t id) "tag %u id %u"
> +

Nit-picking; missing leading comment:

# xen-9p-backend.c

> +xen_9pfs_alloc(char *name) "name %s"
> +xen_9pfs_connect(char *name) "name %s"
> +xen_9pfs_disconnect(char *name) "name %s"
> +xen_9pfs_free(char *name) "name %s"
> diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
> index 0e266c552b..c646a0b3d1 100644
> --- a/hw/9pfs/xen-9p-backend.c
> +++ b/hw/9pfs/xen-9p-backend.c
> @@ -25,6 +25,8 @@
>  #include "qemu/iov.h"
>  #include "fsdev/qemu-fsdev.h"
>  
> +#include "trace.h"
> +
>  #define VERSIONS "1"
>  #define MAX_RINGS 8
>  #define MAX_RING_ORDER 9
> @@ -337,6 +339,8 @@ static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev)
>      Xen9pfsDev *xen_9pdev = container_of(xendev, Xen9pfsDev, xendev);
>      int i;
>  
> +    trace_xen_9pfs_disconnect(xendev->name);
> +
>      for (i = 0; i < xen_9pdev->num_rings; i++) {
>          if (xen_9pdev->rings[i].evtchndev != NULL) {
>              qemu_set_fd_handler(qemu_xen_evtchn_fd(xen_9pdev->rings[i].evtchndev),
> @@ -345,40 +349,42 @@ static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev)
>                                     xen_9pdev->rings[i].local_port);
>              xen_9pdev->rings[i].evtchndev = NULL;
>          }
> -    }
> -}
> -
> -static int xen_9pfs_free(struct XenLegacyDevice *xendev)
> -{
> -    Xen9pfsDev *xen_9pdev = container_of(xendev, Xen9pfsDev, xendev);
> -    int i;
> -
> -    if (xen_9pdev->rings[0].evtchndev != NULL) {
> -        xen_9pfs_disconnect(xendev);
> -    }
> -
> -    for (i = 0; i < xen_9pdev->num_rings; i++) {
>          if (xen_9pdev->rings[i].data != NULL) {
>              xen_be_unmap_grant_refs(&xen_9pdev->xendev,
>                                      xen_9pdev->rings[i].data,
>                                      xen_9pdev->rings[i].intf->ref,
>                                      (1 << xen_9pdev->rings[i].ring_order));
> +            xen_9pdev->rings[i].data = NULL;
>          }
>          if (xen_9pdev->rings[i].intf != NULL) {
>              xen_be_unmap_grant_ref(&xen_9pdev->xendev,
>                                     xen_9pdev->rings[i].intf,
>                                     xen_9pdev->rings[i].ref);
> +            xen_9pdev->rings[i].intf = NULL;
>          }
>          if (xen_9pdev->rings[i].bh != NULL) {
>              qemu_bh_delete(xen_9pdev->rings[i].bh);
> +            xen_9pdev->rings[i].bh = NULL;
>          }
>      }
>  
>      g_free(xen_9pdev->id);
> +    xen_9pdev->id = NULL;
>      g_free(xen_9pdev->tag);
> +    xen_9pdev->tag = NULL;
>      g_free(xen_9pdev->path);
> +    xen_9pdev->path = NULL;
>      g_free(xen_9pdev->security_model);
> +    xen_9pdev->security_model = NULL;
>      g_free(xen_9pdev->rings);
> +    xen_9pdev->rings = NULL;
> +    return;
> +}
> +
> +static int xen_9pfs_free(struct XenLegacyDevice *xendev)
> +{
> +    trace_xen_9pfs_free(xendev->name);
> +
>      return 0;
>  }

xen_9pfs_free() now doing nothing doesn't look right to me. Wouldn't it make
more sense to turn xen_9pfs_free() idempotent instead?

>  
> @@ -390,6 +396,8 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
>      V9fsState *s = &xen_9pdev->state;
>      QemuOpts *fsdev;
>  
> +    trace_xen_9pfs_connect(xendev->name);
> +
>      if (xenstore_read_fe_int(&xen_9pdev->xendev, "num-rings",
>                               &xen_9pdev->num_rings) == -1 ||
>          xen_9pdev->num_rings > MAX_RINGS || xen_9pdev->num_rings < 1) {
> @@ -499,6 +507,8 @@ out:
>  
>  static void xen_9pfs_alloc(struct XenLegacyDevice *xendev)
>  {
> +    trace_xen_9pfs_alloc(xendev->name);
> +
>      xenstore_write_be_str(xendev, "versions", VERSIONS);
>      xenstore_write_be_int(xendev, "max-rings", MAX_RINGS);
>      xenstore_write_be_int(xendev, "max-ring-page-order", MAX_RING_ORDER);
> 




From xen-devel-bounces@lists.xenproject.org Fri May 05 10:50:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 10:50:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530285.825822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1put0f-0004Q6-S4; Fri, 05 May 2023 10:49:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530285.825822; Fri, 05 May 2023 10:49:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1put0f-0004Pz-PN; Fri, 05 May 2023 10:49:57 +0000
Received: by outflank-mailman (input) for mailman id 530285;
 Fri, 05 May 2023 10:49:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1put0e-0004PW-Gu; Fri, 05 May 2023 10:49:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1put0e-0007cm-9A; Fri, 05 May 2023 10:49:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1put0d-0006JW-Qy; Fri, 05 May 2023 10:49:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1put0d-0002Kr-QV; Fri, 05 May 2023 10:49:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aZ4YPEUeltTxZDJ9naQnd4NpCEuEcAb3Pysn2T/C2uw=; b=pjxsSRnJLDJAHkFGKcAbN4RLvN
	iXkPvHTsw3LrdBZcJJg7+kwsbPuEK9IXnN3kP0HQlUDcpXdEpJkj9qw+MB7zx5QjoZ04+vdjIxSUf
	XDLCHoV26k+GNp/vENBjzdFGJkdf2jNE8Ds0j0gJ+VvVN7lPAq/MqzzHM+awfSEPxz18=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180542-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180542: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=94c802e108a082d6f74c854bea8bd98fe7808453
X-Osstest-Versions-That:
    ovmf=ff7cb2d7c98f8b832180e054848459fc24a0910a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 10:49:55 +0000

flight 180542 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180542/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 94c802e108a082d6f74c854bea8bd98fe7808453
baseline version:
 ovmf                 ff7cb2d7c98f8b832180e054848459fc24a0910a

Last test of basis   180538  2023-05-05 04:12:21 Z    0 days
Testing same since   180542  2023-05-05 08:42:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Liming Gao <gaoliming@byosoft.com.cn>
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ff7cb2d7c9..94c802e108  94c802e108a082d6f74c854bea8bd98fe7808453 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 05 12:06:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 12:06:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530301.825833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puuCK-0004UX-KR; Fri, 05 May 2023 12:06:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530301.825833; Fri, 05 May 2023 12:06:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puuCK-0004UQ-Gc; Fri, 05 May 2023 12:06:04 +0000
Received: by outflank-mailman (input) for mailman id 530301;
 Fri, 05 May 2023 12:06:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puuCI-0004UG-L5; Fri, 05 May 2023 12:06:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puuCI-0000vP-Hm; Fri, 05 May 2023 12:06:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puuCI-0008DP-2W; Fri, 05 May 2023 12:06:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puuCI-0002ws-23; Fri, 05 May 2023 12:06:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FjXxMNfX+upI/sQ+vfXsfFz9qONRQ/BH4CgicxKsC7g=; b=Q7gclpM0h7vs+O7TFRyS7N9HmI
	pQgXv14QJew+0r1uJDixH+6cRLOsS6N5TgmjmcOTeXoo9J5vyHsGUnXv0fWaB9k3rPGQS0W2pJdAx
	0IL+HSkc/Sb7UjnEhjbSI/KnN2uPt4UZlyYmM1JCpvdwR5wjt9Pyyu18aZNx9Qb8tf0Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180541-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180541: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
X-Osstest-Versions-That:
    xen=b95a72bb5b2df24ff1baaa27920e57947dc97d49
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 12:06:02 +0000

flight 180541 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180541/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f
baseline version:
 xen                  b95a72bb5b2df24ff1baaa27920e57947dc97d49

Last test of basis   180520  2023-05-03 18:03:28 Z    1 days
Testing same since   180541  2023-05-05 08:01:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b95a72bb5b..e93e635e14  e93e635e142d45e3904efb4a05e2b3b52a708b4f -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 05 12:17:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 12:17:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530309.825843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puuNC-0005zl-JM; Fri, 05 May 2023 12:17:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530309.825843; Fri, 05 May 2023 12:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puuNC-0005ze-GS; Fri, 05 May 2023 12:17:18 +0000
Received: by outflank-mailman (input) for mailman id 530309;
 Fri, 05 May 2023 12:17:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puuNA-0005zU-SK; Fri, 05 May 2023 12:17:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puuNA-00016R-K8; Fri, 05 May 2023 12:17:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puuNA-0008Sf-5q; Fri, 05 May 2023 12:17:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puuNA-0004C3-5K; Fri, 05 May 2023 12:17:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pGyCC/eh5cZH/VO+0R4qrBhP4ywbV+KCuZtMXRFJMXg=; b=Cp6OYZxRC3qF+AOX90MToIhXZA
	AWNqvr5Fd/ZPwJ0ZrkOfjsCLbrjpX1lo4+nmdKP1j+U6xSpou3rNVYa5aLFCAjMNmpKlntw+6xmo8
	4nYm0SVczFaK4dGdo2Icq6WMJlZRyCbrmfH/CeqF7kICVz1rUEAmbSJhtkM5Loy1AeL8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180536-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180536: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-coresched-amd64-xl:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f6b761bdbd8ba63cee7428d52fb6b46e4224ddab
X-Osstest-Versions-That:
    qemuu=1488ccb9b64e76aab0843bc035ce3b1938df2517
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 12:17:16 +0000

flight 180536 qemu-mainline real [real]
flight 180543 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180536/
http://logs.test-lab.xenproject.org/osstest/logs/180543/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install   fail pass in 180543-retest
 test-amd64-coresched-amd64-xl 20 guest-localmigrate/x10 fail pass in 180543-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180530
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180530
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180530
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180530
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180530
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180530
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180530
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180530
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                f6b761bdbd8ba63cee7428d52fb6b46e4224ddab
baseline version:
 qemuu                1488ccb9b64e76aab0843bc035ce3b1938df2517

Last test of basis   180530  2023-05-04 13:29:10 Z    0 days
Testing same since   180536  2023-05-05 00:40:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel Xu <dxu@dxuuu.xyz>
  Kfir Manor <kfir@daynix.com>
  Konstantin Kostiuk <kkostiuk@redhat.com>
  Mark Somerville <mark@qpok.net>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   1488ccb9b6..f6b761bdbd  f6b761bdbd8ba63cee7428d52fb6b46e4224ddab -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri May 05 12:26:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 12:26:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530315.825853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puuVn-0007UY-FN; Fri, 05 May 2023 12:26:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530315.825853; Fri, 05 May 2023 12:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puuVn-0007UR-CP; Fri, 05 May 2023 12:26:11 +0000
Received: by outflank-mailman (input) for mailman id 530315;
 Fri, 05 May 2023 12:26:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NH7J=A2=citrix.com=prvs=48283d55d=jennifer.herbert@srs-se1.protection.inumbo.net>)
 id 1puuVl-0007UL-Dm
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 12:26:09 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id feaf0790-eb3f-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 14:26:02 +0200 (CEST)
Received: from mail-mw2nam12lp2043.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 May 2023 08:25:52 -0400
Received: from DS7PR03MB5414.namprd03.prod.outlook.com (2603:10b6:5:2c2::6) by
 SN7PR03MB7183.namprd03.prod.outlook.com (2603:10b6:806:2e5::5) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.26; Fri, 5 May 2023 12:25:50 +0000
Received: from DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d]) by DS7PR03MB5414.namprd03.prod.outlook.com
 ([fe80::fdfd:97e5:7c55:82d%7]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 12:25:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: feaf0790-eb3f-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683289562;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=CwCMoJsQ3XBHdfK27O+QOGL4OPl7EWw9dH3TIohe9kI=;
  b=RL1JbXLOZugSTa6NohvzFI1Dd1MWx+cb8TeGXtc1sDpN6aoUu+FZOV97
   yVpS8c0d8WwWRwmW8hh7xiOyITHNaRgotb+yewo5TZVOVGE+Byzzin7hK
   3oDnYPZsRMI6zkb8DAldBTIjLrX6cOQa8grtBJB01OmOHXZ7z0Or2JcEC
   k=;
X-IronPort-RemoteIP: 104.47.66.43
X-IronPort-MID: 107321463
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:XdW1ZqPd7AaeBUXvrR2RlsFynXyQoLVcMsEvi/4bfWQNrUor3mEDm
 GIeDGuDPqnYNGugKNAiOd/gpEoOsMPdy9E3Sgto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tF5gFmPJingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0u9FPDgQ/
 uw4FG82Yg6jq7yw3ZG6Z+Y506zPLOGzVG8ekldJ6GiDSNMZG9XESaiM4sJE1jAtgMwIBezZe
 8cSdTtoalLHfgFLPVAUTpk5mY9EhFGmK2Ee9A3T+/RxvzO7IA9ZidABNPLXd9qMRMtYhACYq
 3jM8n7lKhobKMae2XyO9XfEaurnxHukA9hLSOPjnhJsqEaclnBKFC9NaUe6mN67qWP5ect8N
 lNBr0LCqoB3riRHVOLVXRe1vXqFtR40QMdLHqsx7wTl4qjd5QqDF3UHZjFEYd0i8sQxQFQCx
 lKP2t/kGzFrmLmUUm6GsKeZqyuoPioYJnNEYjULJTbp+PHmqYA3yxfQFNBqFffvisWvQW2rh
 TeXsCI5mrMfy9YR0Lm29kzGhDTqoYXVSgky5UPcWWfNAh5FWbNJrreAsTDzhcus5q7AJrVdl
 BDoQ/Sj0d0=
IronPort-HdrOrdr: A9a23:M3qxiKqMx5vAsCtg4qLc2M0aV5rbeYIsimQD101hICG9Evb0qy
 nOpoV+6faQslwssR4b9uxoVJPvfZq+z+8R3WByB8bAYOCOggLBQL2Ki7GC/9SJIUbDH4VmpM
 VdmsZFaOEYdmIK6voT4GODYqodKNvsytHWuQ8JpU0dMz2DaMtbnnZE4h7wKDwReOHfb6BJbq
 Z14KB81kOdUEVSVOuXLF8fUdPOotXa/aiWHCLvV3YcmXGzZSrD0s+ALySl
X-Talos-CUID: 9a23:2hlDD2MVRw10d+5DURtarRAfCuAZLUbR0lf8fUmlKmdER+jA
X-Talos-MUID: 9a23:J5/JIAYY4OwHxuBTrDC8mGh+BpxU/6WXL0IuvNIkouqZKnkl
X-IronPort-AV: E=Sophos;i="5.99,251,1677560400"; 
   d="scan'208";a="107321463"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ndFfZerHGfQZonnG8svWg05xeygl7bpXUNx09q8tBMbbpJOmH5kDSyBDzj5PnP9tcDSEdkDVY+uQIrLnly8Ix/vlVoFgmmhmDkSUDF7vXtBz6p1KyYm2ls8HpMk5gzwLggfL8M2ZAj0v0/wqRuTs/5RrmAYInMTTlv1CeA5PCzG3P87/MtNciA07wxywqXCqa6fWK4OBdZrwEz2+jjRBfWM6w071v30kbfjQvbQbs+9WG4wkOueMnxFMrNq0Cwd+6EVveSfyzZWbUZtP04d5FGtsAZPunvXCmLJpYrzI1cyG9MAgNWuX6LNjdnMwE6VZRtAsYIJy9JO479QSfU7MBg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=CwCMoJsQ3XBHdfK27O+QOGL4OPl7EWw9dH3TIohe9kI=;
 b=WF1FrbqEZTA+JZu0SMPtipyKqTP6nqJVN0VkIKOWtSdSjA1Rp2t22Klwba4NWIXzBn+5N+99PBgdHVFElqmxGsWvxnJfH0bHyRDOWIqXOLUGyYAD9XpSWqqFlEQfNiLEptp9K6Ktmml8xoqmXjXYkqvVqq5VNhtJD/AuWhSJiiSaSGHkChfrERiGDZSRfvbf+ipA12dWYiTmiGkb37lnA6m7G5zhFVe6ghXciMTPPUM9xViXZr0PqzGLqbuDYm3qK3SL+7bgzNlf7MsphztptzFqg7H6MN8AJNi3r4FWY6LvaqQXzRm5di07n8hnWlUBlvx2aTvUZbQU61iSuNbG1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CwCMoJsQ3XBHdfK27O+QOGL4OPl7EWw9dH3TIohe9kI=;
 b=q1X6Gb7KFb1GsSgUQGZEjGm1qeWU5WzbTkXnZqlevSBLXuvNEdEs49KaUGbfXIJKrRSZRDQOZpXbeoprISFV7tIkLiS2UDOnFdVHyAdH1Ame7IoybPum8U7MbqSKjRJcr2opRftg7g+sbyhZy+4+K6v37reZULx/xNz5AxLyqhw=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <462a2ed3-2e9c-b271-b8d5-255c12f409aa@citrix.com>
Date: Fri, 5 May 2023 13:25:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v4 1/2] acpi: Make TPM version configurable.
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jason Andryuk <jandryuk@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504175146.208936-1-jennifer.herbert@citrix.com>
 <20230504175146.208936-2-jennifer.herbert@citrix.com>
 <a60ad8ea-95a8-ed15-f862-3872e9fb68ac@suse.com>
From: Jennifer Herbert <jennifer.herbert@citrix.com>
In-Reply-To: <a60ad8ea-95a8-ed15-f862-3872e9fb68ac@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0092.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:191::7) To DS7PR03MB5414.namprd03.prod.outlook.com
 (2603:10b6:5:2c2::6)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DS7PR03MB5414:EE_|SN7PR03MB7183:EE_
X-MS-Office365-Filtering-Correlation-Id: 07ff9139-2aab-4df3-209a-08db4d63dc5c
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 07ff9139-2aab-4df3-209a-08db4d63dc5c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5414.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 12:25:49.8949
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: X+39yR+t6m5Lw6WfsvCLzE7Ybsi85lr7GPYUFEUbUWNGyde1V4IKYgGZqYPbLO1ZtIGcrkhB4jJ2KnBhZ64L8FQ24jv4tZyeLOWyg23Fkyg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR03MB7183

On 05/05/2023 07:34, Jan Beulich wrote:

> On 04.05.2023 19:51, Jennifer Herbert wrote:
>> This patch makes the TPM version, for which the ACPI library probes, configurable.
>> If acpi_config.tpm_version is set to 1, it indicates that TPM 1.2 (TCPA) should be probed.
>> I have also added an option to hvmloader to allow setting this new config, which can
>> be triggered by setting the platform/tpm_version xenstore key.
>>
>> Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
>> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> albeit with two minor further requests (which I'd be happy to make while
> committing):


I agree, the NULL checks and tpm_version initialiser are unnecessary.
So I'd be very happy if you made these changes as you commit them!

Thank you!


-jenny



From xen-devel-bounces@lists.xenproject.org Fri May 05 12:45:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 12:45:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530322.825863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puuoY-0001XU-6S; Fri, 05 May 2023 12:45:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530322.825863; Fri, 05 May 2023 12:45:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puuoY-0001XN-39; Fri, 05 May 2023 12:45:34 +0000
Received: by outflank-mailman (input) for mailman id 530322;
 Fri, 05 May 2023 12:45:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vWC1=A2=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1puuoV-0001XH-N8
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 12:45:32 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7505a15-eb42-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 14:45:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7505a15-eb42-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683290728;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Ro/VOFl72UDWFwy8gwNK93pOtDrJEFznvt3NXDl6cF8=;
	b=NEssImE7oVj95z4/FBIZJnxzphr31CeA4k7oJeJinz1r5r1l9dXN1gppKxrfH844UqDO9z
	4QSvSZ+MCzWy04loOUdVFWsJabtUy+6xAHaZOnUP3kDamwQqqChUVjh6yxS90kqh/3foQl
	BxPnqXdK0mEFhXKLhgXSM8yu090mNdF9FRSITtHjxNfxijznNsQsOy0jITHZ9ajNCFSUr7
	ixcpybMXO0+Tte9hyNa1D9uPISenZGZ0GM57Wg5ua9BDE6mitsB+qwg3eCN369M6O0Jwe5
	/MuZBe8FpjmVO3WPvPxa64u3KW+r1rcVtivlIiAVpuzcTFgSsngi+McsE/7vlQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683290728;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Ro/VOFl72UDWFwy8gwNK93pOtDrJEFznvt3NXDl6cF8=;
	b=65zpCtdzuEaRqOzhfQTeNfeiCJfu8TviFjuPm0aorQPKM8pLdEAQ4DrGamxrKih4/79CxI
	kPU4Ye7GnkFx6vAA==
To: Andrew Cooper <andrew.cooper3@citrix.com>, LKML
 <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, David Woodhouse <dwmw2@infradead.org>, Brian Gerst
 <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom
 Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>
Subject: Re: [patch V2 34/38] x86/cpu/amd: Invoke
 detect_extended_topology_early() on boot CPU
In-Reply-To: <38b259bb-050b-023e-4f43-212f95f022ac@citrix.com>
References: <20230504185733.126511787@linutronix.de>
 <20230504185938.179661118@linutronix.de>
 <38b259bb-050b-023e-4f43-212f95f022ac@citrix.com>
Date: Fri, 05 May 2023 14:45:28 +0200
Message-ID: <87354b3q1j.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Fri, May 05 2023 at 00:04, Andrew Cooper wrote:
> On 04/05/2023 8:02 pm, Thomas Gleixner wrote:
>> From: Thomas Gleixner <tglx@linutronix.de>
>>
>> The early detection stores the extended topology leaf number which is
>> required for parallel hotplug.
>>
>> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
>
> It occurs to me that this and the previous patch are stale given that we
> no longer look at CPUID in the trampoline.
>
> They're probably useful changes in isolation, but the commit messages
> want adjusting to remove the association with parallel boot.

Duh. Indeed. Completely forgot about that.


From xen-devel-bounces@lists.xenproject.org Fri May 05 13:05:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530327.825879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puv84-00043w-7u; Fri, 05 May 2023 13:05:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530327.825879; Fri, 05 May 2023 13:05:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puv84-00043P-0p; Fri, 05 May 2023 13:05:44 +0000
Received: by outflank-mailman (input) for mailman id 530327;
 Fri, 05 May 2023 13:05:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XVLi=A2=citrix.com=prvs=4822586d7=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puv82-0003zO-OD
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 13:05:42 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 899ce7c8-eb45-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 15:05:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 899ce7c8-eb45-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683291942;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=4e+XjBWakkBin20cYWqiIerVP51elQbRzoIyhzyYm2o=;
  b=hakdqJd5ugNOJu+NI7hp7CCMJakGJ0XOh85aSLQzHfY/zjepDZYtQOBI
   7ImEGnMgX8DCY90zs1SyY0+WccGI5EIis5aMRAc1cl/1eLZXZZCz00svK
   Hy+1byRfDqR/cxXUhMmjfGg47QOBjDx8KLLlmJT/16U3A32jNCC2eG/dL
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108401386
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-Talos-CUID: =?us-ascii?q?9a23=3ARQHoHWuV73GsVFdrnZOoiJKA6IscSVPknSvZIHX?=
 =?us-ascii?q?/KldRYo2ZdU+z1aZrxp8=3D?=
X-Talos-MUID: 9a23:f41KWAsS4PDmOULOs82npD1rP85X7IaUWHsAz8s2kci8ZH17EmLI
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="108401386"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH 1/2] LICENSES: Improve the legibility of these files
Date: Fri, 5 May 2023 14:05:32 +0100
Message-ID: <20230505130533.3580545-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

A few newlines go a very long way

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 LICENSES/BSD-2-Clause       | 4 ++++
 LICENSES/BSD-3-Clause       | 4 ++++
 LICENSES/BSD-3-Clause-Clear | 4 ++++
 LICENSES/CC-BY-4.0          | 5 +++++
 LICENSES/GPL-2.0            | 5 +++++
 LICENSES/LGPL-2.0           | 6 ++++++
 LICENSES/LGPL-2.1           | 6 ++++++
 LICENSES/MIT                | 4 ++++
 8 files changed, 38 insertions(+)

diff --git a/LICENSES/BSD-2-Clause b/LICENSES/BSD-2-Clause
index da366e2ce50b..694d8c93221c 100644
--- a/LICENSES/BSD-2-Clause
+++ b/LICENSES/BSD-2-Clause
@@ -1,10 +1,14 @@
 Valid-License-Identifier: BSD-2-Clause
+
 SPDX-URL: https://spdx.org/licenses/BSD-2-Clause.html
+
 Usage-Guide:
+
   To use the BSD 2-clause "Simplified" License put the following SPDX
   tag/value pair into a comment according to the placement guidelines in
   the licensing rules documentation:
     SPDX-License-Identifier: BSD-2-Clause
+
 License-Text:
 
 Copyright (c) <year> <owner> . All rights reserved.
diff --git a/LICENSES/BSD-3-Clause b/LICENSES/BSD-3-Clause
index 34c7f057c8d5..1441947f92e0 100644
--- a/LICENSES/BSD-3-Clause
+++ b/LICENSES/BSD-3-Clause
@@ -1,10 +1,14 @@
 Valid-License-Identifier: BSD-3-Clause
+
 SPDX-URL: https://spdx.org/licenses/BSD-3-Clause.html
+
 Usage-Guide:
+
   To use the BSD 3-clause "New" or "Revised" License put the following SPDX
   tag/value pair into a comment according to the placement guidelines in
   the licensing rules documentation:
     SPDX-License-Identifier: BSD-3-Clause
+
 License-Text:
 
 Copyright (c) <year> <owner> . All rights reserved.
diff --git a/LICENSES/BSD-3-Clause-Clear b/LICENSES/BSD-3-Clause-Clear
index e53b56092b90..2b27f24a65a0 100644
--- a/LICENSES/BSD-3-Clause-Clear
+++ b/LICENSES/BSD-3-Clause-Clear
@@ -1,10 +1,14 @@
 Valid-License-Identifier: BSD-3-Clause-Clear
+
 SPDX-URL: https://spdx.org/licenses/BSD-3-Clause-Clear.html
+
 Usage-Guide:
+
   To use the BSD 3-clause "Clear" License put the following SPDX
   tag/value pair into a comment according to the placement guidelines in
   the licensing rules documentation:
     SPDX-License-Identifier: BSD-3-Clause-Clear
+
 License-Text:
 
 The Clear BSD License
diff --git a/LICENSES/CC-BY-4.0 b/LICENSES/CC-BY-4.0
index 27dfefa95ccf..4197ceb180ff 100644
--- a/LICENSES/CC-BY-4.0
+++ b/LICENSES/CC-BY-4.0
@@ -1,15 +1,20 @@
 Valid-License-Identifier: CC-BY-4.0
+
 SPDX-URL: https://spdx.org/licenses/CC-BY-4.0
+
 Usage-Guide:
+
   Do NOT use this license for code, but it's acceptable for content like artwork
   or documentation. When using it for the latter, it's best to use it together
   with a GPL2 compatible license using "OR", as CC-BY-4.0 texts processed by
   the kernel's build system might combine it with content taken from more
   restrictive licenses.
+
   To use the Creative Commons Attribution 4.0 International license put
   the following SPDX tag/value pair into a comment according to the
   placement guidelines in the licensing rules documentation:
     SPDX-License-Identifier: CC-BY-4.0
+
 License-Text:
 
 Creative Commons Attribution 4.0 International
diff --git a/LICENSES/GPL-2.0 b/LICENSES/GPL-2.0
index 9f09528a8bce..0022a7c17788 100644
--- a/LICENSES/GPL-2.0
+++ b/LICENSES/GPL-2.0
@@ -2,11 +2,15 @@ Valid-License-Identifier: GPL-2.0
 Valid-License-Identifier: GPL-2.0-only
 Valid-License-Identifier: GPL-2.0+
 Valid-License-Identifier: GPL-2.0-or-later
+
 SPDX-URL: https://spdx.org/licenses/GPL-2.0.html
+
 Usage-Guide:
+
   To use this license in source code, put one of the following SPDX
   tag/value pairs into a comment according to the placement
   guidelines in the licensing rules documentation.
+
   For 'GNU General Public License (GPL) version 2 only' use:
     SPDX-License-Identifier: GPL-2.0-only
   or (now deprecated)
@@ -15,6 +19,7 @@ Usage-Guide:
     SPDX-License-Identifier: GPL-2.0+
   or
     SPDX-License-Identifier: GPL-2.0-or-later
+
 License-Text:
 
 		    GNU GENERAL PUBLIC LICENSE
diff --git a/LICENSES/LGPL-2.0 b/LICENSES/LGPL-2.0
index 957d798fe037..2fa16d72eabf 100644
--- a/LICENSES/LGPL-2.0
+++ b/LICENSES/LGPL-2.0
@@ -1,15 +1,21 @@
 Valid-License-Identifier: LGPL-2.0
 Valid-License-Identifier: LGPL-2.0+
+
 SPDX-URL: https://spdx.org/licenses/LGPL-2.0.html
+
 Usage-Guide:
+
   To use this license in source code, put one of the following SPDX
   tag/value pairs into a comment according to the placement
   guidelines in the licensing rules documentation.
+
   For 'GNU Library General Public License (LGPL) version 2.0 only' use:
     SPDX-License-Identifier: LGPL-2.0
+
   For 'GNU Library General Public License (LGPL) version 2.0 or any later
   version' use:
     SPDX-License-Identifier: LGPL-2.0+
+
 License-Text:
 
 GNU LIBRARY GENERAL PUBLIC LICENSE
diff --git a/LICENSES/LGPL-2.1 b/LICENSES/LGPL-2.1
index 27bb4342a3e8..b366c7e49199 100644
--- a/LICENSES/LGPL-2.1
+++ b/LICENSES/LGPL-2.1
@@ -1,15 +1,21 @@
 Valid-License-Identifier: LGPL-2.1
 Valid-License-Identifier: LGPL-2.1+
+
 SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html
+
 Usage-Guide:
+
   To use this license in source code, put one of the following SPDX
   tag/value pairs into a comment according to the placement
   guidelines in the licensing rules documentation.
+
   For 'GNU Lesser General Public License (LGPL) version 2.1 only' use:
     SPDX-License-Identifier: LGPL-2.1
+
   For 'GNU Lesser General Public License (LGPL) version 2.1 or any later
   version' use:
     SPDX-License-Identifier: LGPL-2.1+
+
 License-Text:
 
 GNU LESSER GENERAL PUBLIC LICENSE
diff --git a/LICENSES/MIT b/LICENSES/MIT
index f33a68ceb3ea..eba1549f93e4 100644
--- a/LICENSES/MIT
+++ b/LICENSES/MIT
@@ -1,10 +1,14 @@
 Valid-License-Identifier: MIT
+
 SPDX-URL: https://spdx.org/licenses/MIT.html
+
 Usage-Guide:
+
   To use the MIT License put the following SPDX tag/value pair into a
   comment according to the placement guidelines in the licensing rules
   documentation:
     SPDX-License-Identifier: MIT
+
 License-Text:
 
 MIT License
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 05 13:05:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530326.825873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puv83-0003zg-QZ; Fri, 05 May 2023 13:05:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530326.825873; Fri, 05 May 2023 13:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puv83-0003zZ-Nq; Fri, 05 May 2023 13:05:43 +0000
Received: by outflank-mailman (input) for mailman id 530326;
 Fri, 05 May 2023 13:05:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XVLi=A2=citrix.com=prvs=4822586d7=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puv82-0003zO-Fn
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 13:05:42 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 87db2096-eb45-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 15:05:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87db2096-eb45-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683291940;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=9MVcvLqzUK6qhH7+uSpDyPfF2OpcLaXOQ00IpCjwAPE=;
  b=K9xihGbGjbbYcA1Apd0CkxWjml2vKNd/UxPHvaZSBu7RU/s0uPLCrwNv
   HzbDm1m4xW4zl8G2kolMjjLFbOqdRw3CJZRUwXFc7aN7fELwjunhRbEsy
   nmzo4svr6ouA2scenvOk0VG4gxOotqdQ5AAB5/beTq2KdIJHsVC5tBdh5
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108401385
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-Talos-CUID: 9a23:wQwmsWHH/ouQitEyqmJez2AtS5wscUea70boCX+0NXZKZbiaHAo=
X-Talos-MUID: 9a23:BcY9uQQpwcNp3u90RXTCjhFLb+pNoJiPFW8dwLgilOCeMR5ZbmI=
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="108401385"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH 0/2] LICENSES improvements/corrections
Date: Fri, 5 May 2023 14:05:31 +0100
Message-ID: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Noticed in light of the recent LGPL changes to libacpi, but sadly only after
the fact.

Andrew Cooper (2):
  LICENSES: Improve the legibility of these files
  LICENSES: Remove the use of deprecated LGPL SPDX tags

 LICENSES/BSD-2-Clause               |  4 ++++
 LICENSES/BSD-3-Clause               |  4 ++++
 LICENSES/BSD-3-Clause-Clear         |  4 ++++
 LICENSES/CC-BY-4.0                  |  5 +++++
 LICENSES/GPL-2.0                    | 17 +++++++++++------
 LICENSES/LGPL-2.0                   | 14 ++++++++++----
 LICENSES/LGPL-2.1                   | 14 ++++++++++----
 LICENSES/MIT                        |  4 ++++
 tools/libacpi/Makefile              |  2 +-
 tools/libacpi/acpi2_0.h             |  2 +-
 tools/libacpi/build.c               |  2 +-
 tools/libacpi/dsdt.asl              |  2 +-
 tools/libacpi/dsdt_acpi_info.asl    |  2 +-
 tools/libacpi/libacpi.h             |  2 +-
 tools/libacpi/mk_dsdt.c             |  2 +-
 tools/libacpi/ssdt_laptop_slate.asl |  2 +-
 tools/libacpi/ssdt_pm.asl           |  2 +-
 tools/libacpi/ssdt_s3.asl           |  2 +-
 tools/libacpi/ssdt_s4.asl           |  2 +-
 tools/libacpi/ssdt_tpm.asl          |  2 +-
 tools/libacpi/static_tables.c       |  2 +-
 21 files changed, 65 insertions(+), 27 deletions(-)

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 05 13:05:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:05:50 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: [PATCH 2/2] LICENSES: Remove the use of deprecated LGPL SPDX tags
Date: Fri, 5 May 2023 14:05:33 +0100
Message-ID: <20230505130533.3580545-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The SPDX forms without an explicit -only or -or-later suffix are deprecated
and should not be used.  The recent changes to libacpi are the only examples
in tree, so fix them all up.

For GPL, we have many examples using deprecated tags.  For now, just identify
them as such and recommend that no new instances get added.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>

Unsure whether this should get some Fixes: tags or not.  Note also that
Jenny's "[PATCH v4 2/2] acpi: Add TPM2 interface definition." wants its SPDX
tag correcting as per this patch.
---
 LICENSES/GPL-2.0                    | 12 ++++++------
 LICENSES/LGPL-2.0                   |  8 ++++----
 LICENSES/LGPL-2.1                   |  8 ++++----
 tools/libacpi/Makefile              |  2 +-
 tools/libacpi/acpi2_0.h             |  2 +-
 tools/libacpi/build.c               |  2 +-
 tools/libacpi/dsdt.asl              |  2 +-
 tools/libacpi/dsdt_acpi_info.asl    |  2 +-
 tools/libacpi/libacpi.h             |  2 +-
 tools/libacpi/mk_dsdt.c             |  2 +-
 tools/libacpi/ssdt_laptop_slate.asl |  2 +-
 tools/libacpi/ssdt_pm.asl           |  2 +-
 tools/libacpi/ssdt_s3.asl           |  2 +-
 tools/libacpi/ssdt_s4.asl           |  2 +-
 tools/libacpi/ssdt_tpm.asl          |  2 +-
 tools/libacpi/static_tables.c       |  2 +-
 16 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/LICENSES/GPL-2.0 b/LICENSES/GPL-2.0
index 0022a7c17788..dcd969aa85b5 100644
--- a/LICENSES/GPL-2.0
+++ b/LICENSES/GPL-2.0
@@ -1,8 +1,9 @@
-Valid-License-Identifier: GPL-2.0
 Valid-License-Identifier: GPL-2.0-only
-Valid-License-Identifier: GPL-2.0+
 Valid-License-Identifier: GPL-2.0-or-later
 
+Deprecated-Identifier: GPL-2.0
+Deprecated-Identifier: GPL-2.0+
+
 SPDX-URL: https://spdx.org/licenses/GPL-2.0.html
 
 Usage-Guide:
@@ -13,13 +14,12 @@ Usage-Guide:
 
   For 'GNU General Public License (GPL) version 2 only' use:
     SPDX-License-Identifier: GPL-2.0-only
-  or (now deprecated)
-    SPDX-License-Identifier: GPL-2.0
   For 'GNU General Public License (GPL) version 2 or any later version' use:
-    SPDX-License-Identifier: GPL-2.0+
-  or
     SPDX-License-Identifier: GPL-2.0-or-later
 
+  The deprecated tags should not be used for any new additions.  Where
+  possible, their existing uses should be phased out.
+
 License-Text:
 
 		    GNU GENERAL PUBLIC LICENSE
diff --git a/LICENSES/LGPL-2.0 b/LICENSES/LGPL-2.0
index 2fa16d72eabf..c960ba3ce3b8 100644
--- a/LICENSES/LGPL-2.0
+++ b/LICENSES/LGPL-2.0
@@ -1,5 +1,5 @@
-Valid-License-Identifier: LGPL-2.0
-Valid-License-Identifier: LGPL-2.0+
+Valid-License-Identifier: LGPL-2.0-only
+Valid-License-Identifier: LGPL-2.0-or-later
 
 SPDX-URL: https://spdx.org/licenses/LGPL-2.0.html
 
@@ -10,11 +10,11 @@ Usage-Guide:
   guidelines in the licensing rules documentation.
 
   For 'GNU Library General Public License (LGPL) version 2.0 only' use:
-    SPDX-License-Identifier: LGPL-2.0
+    SPDX-License-Identifier: LGPL-2.0-only
 
   For 'GNU Library General Public License (LGPL) version 2.0 or any later
   version' use:
-    SPDX-License-Identifier: LGPL-2.0+
+    SPDX-License-Identifier: LGPL-2.0-or-later
 
 License-Text:
 
diff --git a/LICENSES/LGPL-2.1 b/LICENSES/LGPL-2.1
index b366c7e49199..4553664b76bf 100644
--- a/LICENSES/LGPL-2.1
+++ b/LICENSES/LGPL-2.1
@@ -1,5 +1,5 @@
-Valid-License-Identifier: LGPL-2.1
-Valid-License-Identifier: LGPL-2.1+
+Valid-License-Identifier: LGPL-2.1-only
+Valid-License-Identifier: LGPL-2.1-or-later
 
 SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html
 
@@ -10,11 +10,11 @@ Usage-Guide:
   guidelines in the licensing rules documentation.
 
   For 'GNU Lesser General Public License (LGPL) version 2.1 only' use:
-    SPDX-License-Identifier: LGPL-2.1
+    SPDX-License-Identifier: LGPL-2.1-only
 
   For 'GNU Lesser General Public License (LGPL) version 2.1 or any later
   version' use:
-    SPDX-License-Identifier: LGPL-2.1+
+    SPDX-License-Identifier: LGPL-2.1-or-later
 
 License-Text:
 
diff --git a/tools/libacpi/Makefile b/tools/libacpi/Makefile
index aa9c520cbe85..bcfcd852f92f 100644
--- a/tools/libacpi/Makefile
+++ b/tools/libacpi/Makefile
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: LGPL-2.1
+# SPDX-License-Identifier: LGPL-2.1-only
 #
 # Copyright (c) 2004, Intel Corporation.
 
diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
index 212f5ab64182..e00b29854be0 100644
--- a/tools/libacpi/acpi2_0.h
+++ b/tools/libacpi/acpi2_0.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * Copyright (c) 2004, Intel Corporation.
  */
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index 830d37c61f03..3142e0ac84c0 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * Copyright (c) 2004, Intel Corporation.
  * Copyright (c) 2006, Keir Fraser, XenSource Inc.
diff --git a/tools/libacpi/dsdt.asl b/tools/libacpi/dsdt.asl
index c6691b56a986..32b42f85ae9f 100644
--- a/tools/libacpi/dsdt.asl
+++ b/tools/libacpi/dsdt.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /******************************************************************************
  * DSDT for Xen with Qemu device model
  *
diff --git a/tools/libacpi/dsdt_acpi_info.asl b/tools/libacpi/dsdt_acpi_info.asl
index c6e82f1fe6a7..6e114fa23404 100644
--- a/tools/libacpi/dsdt_acpi_info.asl
+++ b/tools/libacpi/dsdt_acpi_info.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 
     Scope (\_SB)
     {
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index acf012e35578..7ae28525f604 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /******************************************************************************
  * libacpi.h
  * 
diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
index c74b270c0c5d..34f6753f6193 100644
--- a/tools/libacpi/mk_dsdt.c
+++ b/tools/libacpi/mk_dsdt.c
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 
 #include <stdio.h>
 #include <stdarg.h>
diff --git a/tools/libacpi/ssdt_laptop_slate.asl b/tools/libacpi/ssdt_laptop_slate.asl
index 494f2d048d0a..69fd504c19fc 100644
--- a/tools/libacpi/ssdt_laptop_slate.asl
+++ b/tools/libacpi/ssdt_laptop_slate.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_conv.asl
  *
diff --git a/tools/libacpi/ssdt_pm.asl b/tools/libacpi/ssdt_pm.asl
index e577e85c072b..db578d10ac3e 100644
--- a/tools/libacpi/ssdt_pm.asl
+++ b/tools/libacpi/ssdt_pm.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_pm.asl
  *
diff --git a/tools/libacpi/ssdt_s3.asl b/tools/libacpi/ssdt_s3.asl
index 8f3177ec5adc..f6e9636f4759 100644
--- a/tools/libacpi/ssdt_s3.asl
+++ b/tools/libacpi/ssdt_s3.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_s3.asl
  *
diff --git a/tools/libacpi/ssdt_s4.asl b/tools/libacpi/ssdt_s4.asl
index 979318eca1f5..8014f5fc9014 100644
--- a/tools/libacpi/ssdt_s4.asl
+++ b/tools/libacpi/ssdt_s4.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_s4.asl
  *
diff --git a/tools/libacpi/ssdt_tpm.asl b/tools/libacpi/ssdt_tpm.asl
index 6c3267218f3b..944658d25177 100644
--- a/tools/libacpi/ssdt_tpm.asl
+++ b/tools/libacpi/ssdt_tpm.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_tpm.asl
  *
diff --git a/tools/libacpi/static_tables.c b/tools/libacpi/static_tables.c
index 631fb911413b..715f46fee05c 100644
--- a/tools/libacpi/static_tables.c
+++ b/tools/libacpi/static_tables.c
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * Copyright (c) 2004, Intel Corporation.
  * Copyright (c) 2006, Keir Fraser, XenSource Inc.
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 05 13:06:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:06:58 +0000
Message-ID: <e8ec85b6-90bb-1df0-4f6b-d7e9c6ade25f@suse.com>
Date: Fri, 5 May 2023 15:06:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] LICENSES: Improve the legibility of these files
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
 <20230505130533.3580545-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230505130533.3580545-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 05.05.2023 15:05, Andrew Cooper wrote:
> A few newlines go a very long way
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Fri May 05 13:11:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:11:48 +0000
MIME-Version: 1.0
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
 <25fb2b71-b663-b712-01cd-5c75aa4ccf9b@gmail.com> <20230404234228.vghxrrj6auy7zw4c@vireshk-i7>
 <20230505061934.jm3bwjgsl5hf5rns@vireshk-i7>
In-Reply-To: <20230505061934.jm3bwjgsl5hf5rns@vireshk-i7>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Fri, 5 May 2023 16:11:23 +0300
Message-ID: <CAPD2p-nvLXdxkwik-UPjS1JAjz6z2FNuxb1JYrj4bNwusEZpPg@mail.gmail.com>
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on Dom0
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>, xen-devel@lists.xen.org, 
	stratos-dev@op-lists.linaro.org, Juergen Gross <jgross@suse.com>, 
	Julien Grall <julien@xen.org>, Alex Bennée <alex.bennee@linaro.org>, 
	Anthony PERARD <anthony.perard@citrix.com>, Mathieu Poirier <mathieu.poirier@linaro.com>, 
	Erik Schilling <erik.schilling@linaro.org>
Content-Type: text/plain; charset="UTF-8"

Hello Viresh

[sorry for the possible format issues]

On Fri, May 5, 2023 at 9:19 AM Viresh Kumar <viresh.kumar@linaro.org> wrote:

> On 05-04-23, 05:12, Viresh Kumar wrote:
> > On 04-04-23, 21:16, Oleksandr Tyshchenko wrote:
> > > ok, probably makes sense
> >
> > While testing both foreign and grant mappings I stumbled upon another
> > related problem. How do I control the creation of the iommu node from
> > the guest configuration file, irrespective of the domain the backend is
> > running in? This is what we have right now:
> >
> > - always create iommu nodes if backend-dom != 0
> > - always create iommu nodes if forced_grant == 1
> >
> > what I need to cover is
> > - don't create iommu nodes irrespective of the domain
> >
> > This is required if you want to test both foreign and grant memory
> > allocations, with different guests kernels. i.e. one guest kernel for
> > device with grant mappings and another guest for device with foreign
> > mappings. There is no way, that I know of, to disable the creation of
> > iommu nodes. Of course we would want to use the same images for kernel
> > and other stuff, so this needs to be controlled from guest
> > configuration file.
>
> Any input on this, please?
>


I was going to propose an idea, but I have just realized that you already
voiced it here [1] ))
So what you proposed there sounds reasonable to me.

I will just rephrase it according to my understanding:

We probably need to consider turning your "forced_grant" into something
three-state, for example "grant_usage" (or "use_grant", as you suggested),
which could mean "default behaviour", "always disabled", or "always
enabled".

With "grant_usage=default" we will get exactly what we have at the moment
(only create iommu nodes if backend-domid != 0).
With "grant_usage=disabled" we will force grants to be always disabled
(don't create iommu nodes, irrespective of the domain).
With "grant_usage=enabled" we will force grants to be always enabled
(always create iommu nodes, irrespective of the domain).
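Spelled out as a decision rule, that gives the following (a minimal sketch
only; "grant_usage" and its values are the names proposed above, while the
helper name and exact toolstack plumbing are illustrative):

```python
# Minimal model of the proposed three-state "grant_usage" option.
# Names are illustrative, not a final libxl interface.

def need_iommu_node(grant_usage: str, backend_domid: int) -> bool:
    """Decide whether the toolstack should create an iommu node for a
    virtio device in the guest device tree."""
    if grant_usage == "enabled":    # force grant mappings on
        return True
    if grant_usage == "disabled":   # force grant mappings off
        return False
    # "default": current behaviour -- grant mappings only when the
    # backend runs outside dom0
    return backend_domid != 0
```

In the guest configuration file this might then be spelled as, e.g.,
grant_usage=disabled in the relevant virtio device specification
(hypothetical syntax).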


[1]
https://lore.kernel.org/xen-devel/20230505093835.jcbwo6zjk5hcjvsm@vireshk-i7/


>
> --
> viresh
>


-- 
Regards,

Oleksandr Tyshchenko

--00000000000041518705faf207ed--


From xen-devel-bounces@lists.xenproject.org Fri May 05 13:14:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530347.825923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvFz-0007iN-RR; Fri, 05 May 2023 13:13:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530347.825923; Fri, 05 May 2023 13:13:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvFz-0007iG-OU; Fri, 05 May 2023 13:13:55 +0000
Received: by outflank-mailman (input) for mailman id 530347;
 Fri, 05 May 2023 13:13:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puvFz-0007iA-1i
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 13:13:55 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0629.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aeaa9810-eb46-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 15:13:53 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by DU0PR04MB9321.eurprd04.prod.outlook.com (2603:10a6:10:354::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May
 2023 13:13:51 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%7]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 13:13:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aeaa9810-eb46-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h1Ns65s5gzk3qVBjgG/e7doc1RcFesMfvCAY+ZYh5bF8O7j5WOVDj4ShIjE3THNT6Lg6k6QGxIcXA6kw2knQn9qxohiNgxBUG64PcDlBSBQBy+9iU/zx0Ol4TkmHU9vU06ydbXcDsNvRyhqYTq877GuhpITUnnD47q1xlmQt0azr0CiVTtPwULV6XAzT5JpyYvEhVLRd3GUWZq9Tm7WO8BU/ORFvjJEdyR7ZwmdZSD/TGbPT44do08eF0CdsO1lVV/BvTdOwSp9DUzgdy+rxYO5NVvgp4wp8wkxZK1w/CFRZWBn0cj7gVDneTFC/Mftm1493tRvSdRg42EALbib04w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sy7ekOGL7Ui1hZMNT3Lmp3AyM0dO+DQEghWHmulQAgA=;
 b=lkzAyPg3Nm7Hggv0ujhinXZ+KC3RpLBMUkdcO89ELUMHPHD2a0XH/g2oV+A8pOVHee7VarixSZUUkypWC7I7EY+FAczVjaJlnJMqNqSQucsTJjHmYUQ+kOe8KPSPp+oMYoBGuQoSOTnJGsux5quI1rjr/9QtUXw9wxKkpjwloTfgS139f+lD3nLZYUfoc6F9Gd1pC+qCfMfzfGX88vf57oFvmYSYAeyDY+MTTY4kHUAjL0ooTrIKh6BrZZD8W/3OXSWzezHbQn48GrvoH7P+kiPGoW47A6k7YAjg0GRpuf7qV4YaYGKPMugm8OcZWy97y8ib2pPu5oQCNzofvgh5Yw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sy7ekOGL7Ui1hZMNT3Lmp3AyM0dO+DQEghWHmulQAgA=;
 b=bn6WdBEh253JrxnLtrTntVDJ8OuZNn1+ei9U83v926DU8LeKt01OMUubtqBjef7R+elcECsaBKJY4Z09j9Hlj7mkri/FgQPu7bLUEkKGATzq77acLwz0X9N2OxTnoaYor06kvuiTK/6rcit9VkhSdWgc8UdsV7HcO4S9/+yKt/066zKcf9xtt/E9jioS9RWP8M3kQY9eYlEvoHHo3jqo9xvT/hqMQS3oqEnscoRvOgtfbwMua3SFZE7EMT1essFJ01bT+LQNaQQLa4bTDw+wgkAcIC1W2nuifjREzIMwZquZoFD2wEd91ovrDdaST3bH4n5ESVTlZQJlkoQ+yPIKNw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <40d70f11-37e7-ff29-37c1-a94d3e286455@suse.com>
Date: Fri, 5 May 2023 15:13:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/2] LICENSES: Remove the use of deprecated LGPL SPDX tags
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
 <20230505130533.3580545-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230505130533.3580545-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0238.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b2::13) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|DU0PR04MB9321:EE_
X-MS-Office365-Filtering-Correlation-Id: 9a1c70ea-b93d-4ebd-76af-08db4d6a91ef
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9a1c70ea-b93d-4ebd-76af-08db4d6a91ef
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 13:13:51.4415
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tv3qhfySMW7398sxvvVgOCWVq4RzkJu08LH5K8z6RKdlvMDruUlr4WuEl2icm7b8JX9367ge8NwJ5m2JhKLMlw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9321

On 05.05.2023 15:05, Andrew Cooper wrote:
> The SPDX forms without an explicit -only or -or-later suffix are deprecated
> and should not be used.

I guess this wants a reference to where this is specified. In particular ...

> --- a/LICENSES/LGPL-2.1
> +++ b/LICENSES/LGPL-2.1
> @@ -1,5 +1,5 @@
> -Valid-License-Identifier: LGPL-2.1
> -Valid-License-Identifier: LGPL-2.1+
> +Valid-License-Identifier: LGPL-2.1-only
> +Valid-License-Identifier: LGPL-2.1-or-later
>  
>  SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html

... I can't spot anything like this under e.g. this URL.

Also is there a reason you add Deprecated-Identifier: only to GPL-2.0?
Enumerating them would seem reasonable to me, not just for completeness,
but also in case we end up importing a file with a deprecated tag.
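Enumerated for LGPL-2.1, following the pattern the patch uses for GPL-2.0,
that would look something like this (a sketch only):

```
Valid-License-Identifier: LGPL-2.1-only
Valid-License-Identifier: LGPL-2.1-or-later
Deprecated-Identifier: LGPL-2.1
Deprecated-Identifier: LGPL-2.1+

SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html
```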

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 05 13:22:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:22:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530354.825933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvON-0000pP-OB; Fri, 05 May 2023 13:22:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530354.825933; Fri, 05 May 2023 13:22:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvON-0000pI-LX; Fri, 05 May 2023 13:22:35 +0000
Received: by outflank-mailman (input) for mailman id 530354;
 Fri, 05 May 2023 13:22:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XVLi=A2=citrix.com=prvs=4822586d7=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puvOM-0000pC-NE
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 13:22:34 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e2f87e34-eb47-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 15:22:31 +0200 (CEST)
Received: from mail-co1nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 May 2023 09:22:29 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH7PR03MB7486.namprd03.prod.outlook.com (2603:10b6:510:2ec::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May
 2023 13:22:18 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 13:22:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2f87e34-eb47-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683292951;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=x7eLhQmfN/cz2GOpic1QkGwjjsDjXQ0u/FsR7r7PqTI=;
  b=K+nQJwv+2Ik1a/j64T5Mnz4wVd/mO1v2XzDEC/vRmiRK1/yIfPOWr3Pv
   yh++Ql81D1AEqwZYwrcs0G+rn+cfApgjDRtbZ1rW3dYpEoAV4x5qju7lW
   hM6czrOjLSCsvprDngKkkLJCTLHjRS+Iw7TyfqkFwheyoQ2xO4pgpxLFY
   I=;
X-IronPort-RemoteIP: 104.47.56.177
X-IronPort-MID: 108403924
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="108403924"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TC4FzQsDZZ3cSFtsloVOCFyLSGyKiJWL9ofYcBFSqDcrRo+SEpBTcc457kjogj4+bvfex1ZLzi5ROAlvJ6uJaAmdw6hdsqTd0+MZvOECSm56PcDWcf1SCO6RSymQnvIQg59M0F0qZJ9ZLPD7BVyMNJ+C7efRUM1oBr2E006yXwFfpRczEIc80gmMZI5TRFzwEUDvSDPghTQI+DI+ut+z3Ov5/jm/+8NeJacpv8C+NIO7d5rPx4zsksL0RHsR9vtsNmF1tkEKr3RIFaf10uw7euRDlQCA/uWEYEsmmQH3XEOsb7QNQO6yUecV+b4wvoAtJJmq3wh8qUb/RcoUTgIIpw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/e9y5U85C5VI0YC+fuBdqDQLKy7qs2gImICHi/mfPZw=;
 b=jpOFy99TTmb4Je9uBnUIdusE2vGRaX+kwdSi/gJqxwwKJK7BY6FN1KR/LYq2qNvLEfnUbn1O5xkPmT9XxONm/Z20VY1MzVC02hzM4J0pD8Ge2y8EaVnsLbgs/s8kmIgYL5NsQnqCi28TYCxsppEzjLRDpb1NQz0VsgplcoTSmpkbXBo6AATuiCx1p3szq6VMaqaoqMz7kurTRPjVytpdZ4uC3nUtoYplmo1vFHG9LKxhEc26JbN5WjCtjlT6XmTORITI7ohsQf3bVAWOy1czcIg77lNj4MqDwi4CQnY/JIdTHzzDtxmuBrDLsQuXFOVWFLm93Fak1qmczO1teJjpUg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/e9y5U85C5VI0YC+fuBdqDQLKy7qs2gImICHi/mfPZw=;
 b=R+mm6/c3TJQBLtocLGQayx7KupuyLHA14CrKk7jxFMvAnEBywi8bx55nQkkEs5JHaLG/md0RfjKQafhRmifdKlXQ7zG+0BdzYUJ66FivF413vh2Gd/h68m92s7qO/0HCWEItYMzoa0UBzSTHdAv+kT23o+GbEmxomnCzgkHPT+A=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <8fa2cf23-4832-a033-cfc3-e3225eff0047@citrix.com>
Date: Fri, 5 May 2023 14:22:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/2] LICENSES: Remove the use of deprecated LGPL SPDX tags
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
 <20230505130533.3580545-3-andrew.cooper3@citrix.com>
 <40d70f11-37e7-ff29-37c1-a94d3e286455@suse.com>
In-Reply-To: <40d70f11-37e7-ff29-37c1-a94d3e286455@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0221.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a6::10) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH7PR03MB7486:EE_
X-MS-Office365-Filtering-Correlation-Id: 4695c6f1-8d1b-479e-718f-08db4d6bbf92
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ZqBiaZENIhQBiX6cAREVb8hfiSgJHJpBtuoEKuh4nsUHfHlNpEVG7Lj1aAk9ElVzsxsamryJd6348OetPsuhnOni3EGWirAkfUAgv8QPYb12tcSxB2F63aVy0FizS/ksIQW6365YKoKcdTFLftY+XPgryzNJ6+HM/P89offRmCA69Qnvagjpm48yPmgQZfrLWYOalWEHVRxM/MQG6InPRbJwNtv86GUl0ptNyoMRu98SqXYKYqeIwKWWraPisYITw1SWgcqhDzn6Rfjz35icNy6php6ov8CuINrEfEHi2i252nECOJ9PJxRj9zfEW3YgtMEmcJqb4O/sXC/ULW6cqwUJZfUfjtlTr9Lsdz7Jdes6YtqZHWC0yMeDSxV22dlCZp5wG0Wf9BTlWmbUU29zSHu02IyPVxiSj/zlYc6bS7T9w6GaTMzoe/tKbw1TaYkFE5D7WaMgZ5T5QgQxA8TIbCugPYskJ9omar/S25nORpxDHVtw8zmMnDtIuR0DqPwCjLB+Ldi/l0f/NKarlZMPOpbU94g8XEp8sJO1PON/pLVkKOlYcJYdDHUoaz6sNnLwhpkHBzO+jbWoaljIrxJP8ad4zmZiHDfTizKSvWCzI+m9yYK6Gw2KO5cBo4v5pVwl
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(396003)(346002)(376002)(366004)(451199021)(2616005)(6512007)(83380400001)(2906002)(53546011)(186003)(6506007)(4326008)(66946007)(66556008)(66476007)(6916009)(316002)(41300700001)(6666004)(54906003)(478600001)(5660300002)(26005)(6486002)(966005)(8676002)(8936002)(38100700002)(82960400001)(86362001)(31696002)(36756003)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WnRUeWRPNkZCSUREY09JN0diNlZtVEU2V3AvRVlaeC9KTTQ5NmE1Sk0vc0lH?=
 =?utf-8?B?VW1KeXF5RFF2Rlh6cUs2SU53eTM2VkFva0M2TGhDZ09UZlEvWkN3R2xFNVlE?=
 =?utf-8?B?d2M1VDU0dUdLTVZ3VDIzT0RGaEZGYndqeGpMS1V4d1N0Ynpud2cwRDE3M0ts?=
 =?utf-8?B?L09oK3pnVUN5MFp4c0FJek1pcXFiQm9ycWhmTlgwTE8zWkJnQzlta0NBdkR5?=
 =?utf-8?B?WGorVzFuV3lhdC95SFc5Znl1NitSTHFGbmNiQUtEMDZXc2VVN1ZHMVZtR3NL?=
 =?utf-8?B?b2FrdTdmWHkxZHE0L0dGRWc5aWQyeGJWaUFyTFVKRmZMazFScjRZSVJjTzht?=
 =?utf-8?B?VHRiQVBkRFpyRFMzUDBQSUs5VzRxVEVxaDVaMHhxRFB3Y1dRcU9EbjNuMmZB?=
 =?utf-8?B?Si9QS01aZTJtamFsZWgyM1JxUXJoQU1Ua0ErTytQQUczSVFUQUFGTEFyR3Nn?=
 =?utf-8?B?ZVVxamxOV1hOZ2NxT3NIbm5hL0pGd0Zpd0ZZaUlDdCt0aWMweG82Z0JGWVQw?=
 =?utf-8?B?SC9GUm1aelozTmt4SnFSRTNZby8vamFJS3BoL01rbHgzREVNN00vb0Ixb0xM?=
 =?utf-8?B?L2luZzVXdlJZVjA0Vmt5cUt5MjVDWTlqdmNtV3dGRWRhaEFLVUFYRTV0aVA1?=
 =?utf-8?B?MVNnZ1dJNHozak4vWWZYdk5GT2dXSGtIUUJCNTM0RHhyQWdSaU5ld2UzMWsx?=
 =?utf-8?B?ODU5eGkrOHdtTzdnU0VyWnVFcVJ4NjI2dHdSMWtZYXVLK0JKZHFESGdDQVM1?=
 =?utf-8?B?N2l6QXY4WW04cG5oU1NtV3RVZ0ZybXl6VUxmNnhJQ1JYSzJOYi9OaDZWc0F3?=
 =?utf-8?B?cmJ4TFQxQzdPVnQ0T2JxK1ppTis4elBWMGNMZU5rOUl2cEw5ckorRHZZdm1C?=
 =?utf-8?B?U205cnZ4Y1pST1NBM0VvOUswaXNYc3JCNllqbnNoNmY1d1NGRHJqWWpnUXk1?=
 =?utf-8?B?UGF0OVg0bGN3TGU4SkY4bi8rVnFWYnZnbmRUMExPc3VWVEpxTjZZUjFpY01N?=
 =?utf-8?B?b0R1Q1VHSXplYld1aUJVbkkzTnZzbWVmR2tpaWhMdFpka1NuUGtEMTVkbGZH?=
 =?utf-8?B?MjJqZEQ0cHQyOHRkYm1NUko1cGZzNWFFRGQwVDF4a2MrZVc1WWVkbXQ4cEJ4?=
 =?utf-8?B?cDYvVjliVEYwOXlvSUtibS92L3JTeDdENm1GYkZHdlR2amwyZWVySXlZZHIv?=
 =?utf-8?B?TWtCajhudjVDK25nWlZvZGxraHNibHl1WFNDbCt6RTFvekc2V2JGRWVJQlJw?=
 =?utf-8?B?ZkNieTJBQ25waDQwKzl1N1FyYnpNNkV6RFhwMkN4RlNxWHVyMytyN0h3bFZh?=
 =?utf-8?B?YUsrRnMzWVI3Tk1KcGlUUXdaS1VDSk5PaGp1anUrSWpEMlVlMlBidjRMUUM4?=
 =?utf-8?B?NUdCR0Via3Z2bEtXc2srVVZyaFRFV2I3bkF3c3N1KzVWWHI5YmFwT25BK296?=
 =?utf-8?B?MU4wdzNKdE9WZFBhNG1vRHdIRVJJUEYzUk5IMG9LRFh2dE54cUNKdzhxaXFj?=
 =?utf-8?B?SlpiTGN5TEFPbUpzSjFKdFA5enNjQm15Zlpiam1nRHNYN0QzSi9RUzNNcHd2?=
 =?utf-8?B?SFJ2WHhNU3h2dlo0ZEhkbzI2UUNISE1JMFVXTWpSU2RoeHF5dXp4NTFySWts?=
 =?utf-8?B?Y3ovdkpOMUpmRnRuRit6NkwvWUM0WFNnMkdoVEE3Rkp5dkMwdXYxNmZ5Yysx?=
 =?utf-8?B?TVFRc04vVVZBemdsRURvcFFyemRsNW9UQU9zV1M3emFhWGhlMjdhOENIdTZJ?=
 =?utf-8?B?K2Q1VWN2Y3U5eGlITDBVQS9hQUxBMUtyUWdmdVVHKzhGQ2NjRGw5dGxLbFFh?=
 =?utf-8?B?bjFtdkVGRmlieXAxOEFhc01TZ0xiZGttbWFFQnJKeUlsbnlmY1oza0NuWnNw?=
 =?utf-8?B?YUkzdklDdU1HTGlEdm9QWU5UeEFRTkV6VmxOSVJ2bFRyeWsvNFZZTU9uMFBq?=
 =?utf-8?B?RFZ4aHoxaXY1SlFEd1JVOWdtUVlaejRQN0VQUXZLKy9vT3NTZ21zWXlnOUt5?=
 =?utf-8?B?RXJ3RDRLRC9KYmcrdTQ2aDR2dDBNb3VOOFBTdUErOUxYbWJEUmdCUVdyYmNo?=
 =?utf-8?B?ODVtWlc2T054QXZDUTM5ZW9kVVdRajd1TWVTelRUd3FtK2d6MUl3NzdqWmVU?=
 =?utf-8?B?VG52UUpIeFNRQjBicWJhUGJmdU1sSzFHUXhjSTJwV1RWSlJ4WFFmcDRSeTZU?=
 =?utf-8?B?c2c9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	M3CEHc6XElKkNWb6tcfohcFmTMsdEyfvHJ9PPRZUJQIU68XtrbxHRmtX05BLGlsObeISlTCrJs4S/t3HqUtq3IFxo3S/a7rG9bawahvayxo3kZUbJT/B+vc+j49VAtkFZ/t82ML5RDLFuNwihPMbM+a7xu9uumCkL+yJf85VIuyIBHKvnTqXrA4cLnMbWuYix2RtrWnBdeAQau+1/7hmQU5+q8Ipj8poWyIWi3+ip19nIyM8X+FEX779EyzQmPqIn27p1VpEpZIgQ+h4kl8p8We6kJUZOxti2CJ3aHcmXXvwesz/BP/XvO21fscT8xaFE5Cn1odZdVA61veNnedbmZ9zvLguuz73Rn6lRInqlT86W9eaDwHF3XrDLwV8+qykg0lQ5YEpBFxzZKJt2VZfMztU7WpppgK0SweLRGEGYtNZJAH+rRvJhwIMCCBPY3QXRYWP4/XV/1eQCPESMGOcL82Z1h+lPpJ9I0FZ6YZ0nLnOjgpVA1MX/RNDh09BGrsYFbJdiYAJ+jBnonVyrG9uh5t1sRrKJuKDIPmsbBhWOmtbPbRe6tYLn4UgT3a0pz5HcSAa37NopeRlkBDsQycaiIF5g6n82pk49VFfskrwu+OJdB8oQ0NYuJYCdABot945RMLJWwjtNiBAkpuHO42V1+j0wOLdZ/rTKmnQ1+fw4xwnMPOc672JQuJ/0NP7XrgJIsqNDVJWXllRHlWGUYNBfz2Yty5TJXOuhX235oX8kgr53OihBDo+lJYnXco49H7OGDjWh14fu3meYbDrDX4riQTU0rn539k+gAOfoxVpQ38z8rfvTT/gUtUxXW8ZdJQLNGbzqsMFilOHyhEEOWvc7D9nbzruIxcKHvGRBLK1cmA=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4695c6f1-8d1b-479e-718f-08db4d6bbf92
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 13:22:17.7381
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ug6PCMtMI9TlXXjmHQjV/2UWOBbjEmjU04jXNUScAeEWn5+JqcGm4mJdnaFnXe2Ejjo0f9j//7ibOnruXFMZKpnrU2pIvyQvj8V8jwN4N0U=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB7486

On 05/05/2023 2:13 pm, Jan Beulich wrote:
> On 05.05.2023 15:05, Andrew Cooper wrote:
>> The SPDX forms without an explicit -only or -or-later suffix are deprecated
>> and should not be used.
> I guess this wants a reference to where this is specified. In particular ...
>
>> --- a/LICENSES/LGPL-2.1
>> +++ b/LICENSES/LGPL-2.1
>> @@ -1,5 +1,5 @@
>> -Valid-License-Identifier: LGPL-2.1
>> -Valid-License-Identifier: LGPL-2.1+
>> +Valid-License-Identifier: LGPL-2.1-only
>> +Valid-License-Identifier: LGPL-2.1-or-later
>>  
>>  SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html
> ... I can't spot anything like this under e.g. this URL.

Hmm yeah, it is irritating.  The statement is at
https://spdx.org/licenses/ but only by virtue of two tables, the second
of which is the list of deprecated identifiers.

I'll put a paragraph about this in the commit message.

> Also is there a reason you add Deprecated-Identifier: only to GPL-2.0?
> Enumerating them would seem reasonable to me, not just for completeness,
> but also in case we end up importing a file with a deprecated tag.

We have problematic uses of GPL in tree, where we don't for LGPL.

I'm considering a gitlab pass which will reject patches which use an
identifier not in the permitted list, and reject the introduction of new
uses of the deprecated ones.  For this, the deprecated-but-tolerated
tags need calling out in some machine-readable way, but I don't think
it's helpful to express tolerance of a tag we don't have any
violations of.
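A check along those lines could be sketched roughly as below (the permitted
and deprecated sets here are illustrative assumptions; the real lists would
be derived from the LICENSES/ directory, and the CI wiring is omitted):

```python
import re

# Illustrative sets only -- not the actual Xen lists.
PERMITTED = {"GPL-2.0-only", "GPL-2.0-or-later",
             "LGPL-2.1-only", "LGPL-2.1-or-later", "MIT"}
DEPRECATED = {"GPL-2.0", "GPL-2.0+", "LGPL-2.1", "LGPL-2.1+"}

TAG_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.+-]+)")

def check_spdx(added_lines):
    """Return a list of problems for SPDX tags on a patch's added lines."""
    problems = []
    for line in added_lines:
        m = TAG_RE.search(line)
        if not m:
            continue
        tag = m.group(1)
        if tag in DEPRECATED:
            problems.append(f"new use of deprecated identifier {tag}")
        elif tag not in PERMITTED:
            problems.append(f"identifier {tag} not in permitted list")
    return problems
```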

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 05 13:31:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:31:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530358.825943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvWr-0002Kc-Lt; Fri, 05 May 2023 13:31:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530358.825943; Fri, 05 May 2023 13:31:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvWr-0002KV-HZ; Fri, 05 May 2023 13:31:21 +0000
Received: by outflank-mailman (input) for mailman id 530358;
 Fri, 05 May 2023 13:31:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1puvWp-0002KP-Rc
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 13:31:19 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0628.outbound.protection.outlook.com
 [2a01:111:f400:fe02::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1d231b0b-eb49-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 15:31:17 +0200 (CEST)
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com (2603:10a6:20b:fa::20)
 by AS8PR04MB8612.eurprd04.prod.outlook.com (2603:10a6:20b:427::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May
 2023 13:31:16 +0000
Received: from AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7]) by AM6PR04MB6551.eurprd04.prod.outlook.com
 ([fe80::768c:6df7:9afb:acd7%7]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 13:31:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d231b0b-eb49-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XEi862MKvs2330hVwolviVItOAAYJTmjM2z61/iRVSfpw9xm30YEjz2/cd/M9p9fLyB0Grnya1XTFUdtBqAqAV1mRg7McUMPq6PxElD+1vDmWpqo1nLOkYH56s9qw+ChFvnOGK9rZooowNdVOdyWp6q27V11c1jJDE0LETXP+yv3c3HrpDcJJXBF4cH6z6OXIDfoH0kFXU353I0Q7ndmmRkW9nbd2XnRZOzQEejpJqf+9GO4KBsGLKXKJc2srjmCUVkAaJHP3UL/Oibt6V9eYJyuOvrqFZLcrD7rATPv8trbYzqthIIA2MN27GFlU4qG9jDQ/FV3igXW8oNDkzyAQg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nemY/fDaSoviyXOfos5E6LNHFQh2y1QA6DiOYeeLb8o=;
 b=iWpyIl7/rX8V1iVUKGFrinJRAI2WsKXljjKiTL6zOdJG+e2gYllms06Nt0Z3u0ycbzYSAb3E1KEPlKafLKela+z4ct+qQsbS2eDyZMGO33GPp3vJzP+yt23K7/2WY9qx9ef8TDkuKc+kSVPcgN+ZJjqBHgftC+X6qSJDJ+MgtHCTZf85YZbmXliST9vbC5ol9044yLdCuc7Hk7Kc38kNWgWilR0KP2g0clETohiglKIyXDMeGDXu68BwdMpwMWNscdAsxFDw+RjMKZ20VDH3fMFS6cXsfR2lKyfLO/FNWRzkMy+Q3ka0Ggt4PuQR5TMpM3mLnlBLoHJQTO0tjSpFaQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nemY/fDaSoviyXOfos5E6LNHFQh2y1QA6DiOYeeLb8o=;
 b=gJRbBGUWtfDaJcqZ1d2atOLjJcjBpbIHoAbp5DV5hb//IGOvrnJoS+3vSJVzqew3WsEMLhVIVrmZuNjxFiOQb1b4suYjp/uCfbDxCwhScu2Ke63qELyskvB6Y6WQ71vfsl1R6VL/uREzFHoAdIHfmrzjfH8eaukV561k6p50qAhsL9hLtcJ0qdyIAWdKMuZCdeKueDzAk5+LZ6+JmMKDgvlALCjd1afJ/6YEctzOGj8bdbx4kpktNh9N0mNGuByU+Ul2x9xO30TzW/i47krxnpr0WJ46fBDDoFXpixUBee11x+s/HHGSH4WIaKKOGEVnAN7ZPAEwz0ucY+l2kvpTvg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ac1662eb-41de-baee-45a0-db01b2141692@suse.com>
Date: Fri, 5 May 2023 15:31:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/2] LICENSES: Remove the use of deprecated LGPL SPDX tags
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
 <20230505130533.3580545-3-andrew.cooper3@citrix.com>
 <40d70f11-37e7-ff29-37c1-a94d3e286455@suse.com>
 <8fa2cf23-4832-a033-cfc3-e3225eff0047@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <8fa2cf23-4832-a033-cfc3-e3225eff0047@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0136.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::18) To AM6PR04MB6551.eurprd04.prod.outlook.com
 (2603:10a6:20b:fa::20)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AM6PR04MB6551:EE_|AS8PR04MB8612:EE_
X-MS-Office365-Filtering-Correlation-Id: c6e74712-da14-46c7-1b79-08db4d6d00a0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Ilk0LveS/NirsZc6yHXAZJe0Y9HxzBKbeURjXGm9FP40wtFg4sq7S3hv5uuCu0Hah/Fec2j2QhlazIW/tX837t6tOfJUMzdiHF3DKfKZfiN05hYNCNWlCUq5sQABu203T2+fcPzOPuX545QohZuzxufl/LntXK7J4HFiRdV5OExDj2+ZNV9FKyagF9RmheBTKQFdzP69a8mypYWBm0DRF3xTlrMdUYepp3EdD8ia186dCdqMZbkl3qnj+nAkG0ZdEYka2SSTbcMQD5j3Fwb6Fkspgiec0yI1Uy6K/TXNxHH18fpgc5VQoLgq4IUsIoLX0OWV67ZlZlR6TuW/XFhdMRigz80YJsDgIV2bReHeuSjFJsi8TUpA0TyPNq/SB+yX+cgE2thclQUtRHyKP6PmsNXpIFzRNasuLYzc57L3aPdx1OdFHN7+/f7eJcFK3Y9LtG0g7bcUF2ly9nobpt0GAXZEjAEXsSV6usF96/aWBhSzg3MtjqsAlZp34hv5OjTXjJMR8an2I1aIIRwlkaaylYJ46HsqWG+y1LvOZDk/eVcZUKYVvZukRXOKIh9xC/zQ0ykv5oBYzFnYDZp5FmzWehGYix74r3fqeys9mi+eweEl0vJqr7QTwxSDJQ6inoQz
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR04MB6551.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(39860400002)(346002)(136003)(396003)(366004)(451199021)(31686004)(83380400001)(2616005)(6916009)(4326008)(41300700001)(38100700002)(8936002)(316002)(6506007)(53546011)(478600001)(966005)(66476007)(66946007)(66556008)(8676002)(31696002)(6512007)(26005)(6486002)(86362001)(186003)(5660300002)(54906003)(36756003)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?T1BoeEVNajR5L1kyVmtWd1hadXNDajNxeDhUWlZHdlFMOWJGYkozUng3VHVX?=
 =?utf-8?B?VFMvVWVoL2l1Rmx5a2pQZDhaQjM1RHhOdTFQN3B6RXBsQ0VRQndycm9kaVJo?=
 =?utf-8?B?Ykt4VlJFV2JGKzFTRnk5Sk50eWhKUUtoM0IzdnVLTngvcVhBRVpGaXp0S3I0?=
 =?utf-8?B?c0ltKzBwWjZCZFNQL0Q0cTYxaWNWQVQ5Ly81MzgybTN4T3VCRVRPZk0yeEZS?=
 =?utf-8?B?bFRkQldCVm5uSDFtRWkwSFp0bkpZa0Z3Y3doZVdjQ0FkWEJsRk1RMkV3Wjdq?=
 =?utf-8?B?WjBZcmRmdVlTVUdFRXFYeE1rMldnckhqdHUyNVpTK0dzY1VvVTJJOXlsNENT?=
 =?utf-8?B?REJzbGZVcXgvY09qRURKQi9sNXU1YjV4cjdXNG4veGdENkVjZFlBSUFkenky?=
 =?utf-8?B?akkzcDVXT05UQ2RIbUJhejg3UVdJamROTTdrdlhOaUNSOXU2NmRMQXU3TU5K?=
 =?utf-8?B?LzB1UmNnWVp1RE1rd3dXU3I0VmxIVjBzNDlvdmZDSlF6Nm9KOE1ZUVJMckJs?=
 =?utf-8?B?d2dCMDNRRExzTmU1NUtuRkpnb0NQdXVYRkd4bVdKZEF1T21QRFFFZ3RpeXlk?=
 =?utf-8?B?MHhLV0FXRWIrc2tZejF0eCswMlhDWmF0RTZkanBMWTllNmVHa1Zjd3h6eWU5?=
 =?utf-8?B?Si9sdExLdFQwN0FpUFdIUmk2Z25jc3FwQnRzTHREZ3VMSmJnNUlieVc2by9y?=
 =?utf-8?B?U1hWbDVjZGZHWmpvYndMellRZmFWYzdUSjZLTHJZNHlnbVc2b2hlRFJVak5p?=
 =?utf-8?B?VGNHUTdtVU82NkU3alZPUVlTSlB1eVNIOUNZTWpjLzJmNjJoanpmcGJIampl?=
 =?utf-8?B?WmRUS2gvbTZtMnA2d0RUeXd1ZW9xdWd5SGhodnBUOFk5cDk2TlUwNzE4UFBl?=
 =?utf-8?B?TFAzdThOT3lYdFlJbzV1VjUzMm82NEg4Tmw0V1Q1RTRVWll3NmduRjdxaUFt?=
 =?utf-8?B?MGFYY0xLNGFlSFJLUlZKcExaSnNGWHJGbHV5U2RicTVSSk9XNk1kV093eU5k?=
 =?utf-8?B?VU9sMnhISk5WQjByVEJvQ2dNN21URG43amoxdWlsNmpLbkNINnhKNEVheW9k?=
 =?utf-8?B?bjlpT0YrTEU4aTYzMHhZbyt0cUtlUjA3ZTNLQ1hiT3MybC8zY3BVMUYrTHlK?=
 =?utf-8?B?VjFmODZtcERsZ3cwK1l4YjJCUHdPTkQvZ1Y4dy82cmJyQTVJaW0rWXZ0aWwx?=
 =?utf-8?B?bCtPODJ5M3RoM0pjMEJGdEZPcUc5SnM0M3VWMER2SzUvY0cxQ1h3NGRCbjI2?=
 =?utf-8?B?eDkrYVNaMnArNHRCWStIaElTb004aEw0V1BmK3pEYnp0K1ZUQkU2VWVPblhk?=
 =?utf-8?B?ZGlnYjI5TEJvN0RpQk1Eay9JNWVBZTdaL0YrdlJyeFZhaCsxalp4M1hXTmVI?=
 =?utf-8?B?K2xERXA0THZTS2o3TlNlZ3NLdEVZOWVGSFF4elFMd2hLWDJiR3REZ2JlU2VI?=
 =?utf-8?B?cU5yNGgydjJocmZKQ1BDUmFQQ3ZyYjhxZGFvTzh1d3RZbWxlRFNxbGUxeU94?=
 =?utf-8?B?UjgrTXRpNlNOT3RKc0c1RUpuOW9VRUJ2YlJoREZidFdWMU8yL3hZMnRubHBa?=
 =?utf-8?B?OXZCRTNIc3Vmb05RN0phSlpoWmRYd0paZ1lpRlMyZUJBUHptYjgvYy9DTFZK?=
 =?utf-8?B?U2t0UlhIMk1YZnlQWktzNWYvUzB4SkpTUmozVUZVU2dqVWRjSjJ3TXJtUVNn?=
 =?utf-8?B?NFdwcEFLSlVQY3NVSnVyM2lucHQxNUsyNm9qK3lGaEkxcWt1cWYxd1hlYVpL?=
 =?utf-8?B?Vys2V281V3lZbkROQ0oyazNiTFgrWko1QUc4RmNYSEtqa2k0ei9GZUtaV01u?=
 =?utf-8?B?U3Y4bjkxaHVuOWFFNGpwQk9yeVVORk5rM082L2F4K3NYOWQ2a29JdEJid1FH?=
 =?utf-8?B?a0kxaGgvcFJ0NWNmY2k1dVFsZ3ozVGdXYVQ2akwwUG9CdTFkR05WcFJoYS9I?=
 =?utf-8?B?RFdjaXNkRVNRVEsxT1JibTJJc010ZS90VnZMNU40YkdNWGNwWnl5NmtaSzhF?=
 =?utf-8?B?eDlkb04xSndoVTdtSmJKUFR3NFgyVGZwUWZxWW1RVUxQYTZySWZuRlNUQ1Ev?=
 =?utf-8?B?ZngzRzlaWm9HeDgyR3ZwTXQxeDBqcktHbWFxb1Uya0M0YnFhR2RKUUY3VmJL?=
 =?utf-8?Q?UZCr7I0lD/PbIsvIguvtszyfJ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c6e74712-da14-46c7-1b79-08db4d6d00a0
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB6551.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 13:31:16.1528
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WBSdUWwHBeIjw7zZfBaXkIxa/WgcVKwE4fkTatnra5EQOig2lvxRWEp9OVPfx+/64oFOgbKE5jkzBcH+FK2VNg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8612

On 05.05.2023 15:22, Andrew Cooper wrote:
> On 05/05/2023 2:13 pm, Jan Beulich wrote:
>> On 05.05.2023 15:05, Andrew Cooper wrote:
>>> The SPDX forms without an explicit -only or -or-later suffix are deprecated
>>> and should not be used.
>> I guess this wants a reference to where this is specified. In particular ...
>>
>>> --- a/LICENSES/LGPL-2.1
>>> +++ b/LICENSES/LGPL-2.1
>>> @@ -1,5 +1,5 @@
>>> -Valid-License-Identifier: LGPL-2.1
>>> -Valid-License-Identifier: LGPL-2.1+
>>> +Valid-License-Identifier: LGPL-2.1-only
>>> +Valid-License-Identifier: LGPL-2.1-or-later
>>>  
>>>  SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html
>> ... I can't spot anything like this under e.g. this URL.
> 
> Hmm yeah, it is irritating.  The statement is at
> https://spdx.org/licenses/ but only by virtue of two tables, the second
> of which is the list of deprecated identifiers.
> 
> I'll put a paragraph about this in the commit message.
> 
>> Also is there a reason you add Deprecated-Identifier: only to GPL-2.0?
>> Enumerating them would seem reasonable to me, not just for completeness,
>> but also in case we end up importing a file with a deprecated tag.
> 
> We have problematic uses of GPL in tree, whereas we don't for LGPL.
> 
> I'm considering a gitlab pass which will reject patches which use an
> identifier not in the permitted list, and reject the introduction of new
> uses of the deprecated ones.  For this, the deprecated-but-tolerated
> tags need calling out in some machine-readable way, but I don't think
> it's helpful to express the tolerating of a tag we don't have any
> violations of.

Hmm, okay. With the expanded commit message
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 05 13:36:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:36:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530362.825953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvbk-0002x5-6p; Fri, 05 May 2023 13:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530362.825953; Fri, 05 May 2023 13:36:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvbk-0002wX-3y; Fri, 05 May 2023 13:36:24 +0000
Received: by outflank-mailman (input) for mailman id 530362;
 Fri, 05 May 2023 13:36:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XVLi=A2=citrix.com=prvs=4822586d7=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puvbj-0002wR-BI
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 13:36:23 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d0ab8697-eb49-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 15:36:20 +0200 (CEST)
Received: from mail-mw2nam12lp2042.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 May 2023 09:36:10 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6673.namprd03.prod.outlook.com (2603:10b6:a03:398::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Fri, 5 May
 2023 13:36:08 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 13:36:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0ab8697-eb49-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683293780;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=aTwukeDh98r/JtbIfsvOG1co2CDLfI3A8dC8ocd0cUg=;
  b=AkNpv9sxBIjphBuj+NBMP9SPhaVEdAUkiWwPqc+Ng5p5I52MCSgOv2Wv
   nVwixYfDJvH2Xc45YpzbwytGQbRiOeOQQGOJiaNxGAIAuCWNxqqLHbE/b
   n0B3wRKFsfancdaFZXMvLgh1de0Sqw6nN0Dckf6WIaYE2d+VKVDnMuOVY
   E=;
X-IronPort-RemoteIP: 104.47.66.42
X-IronPort-MID: 110447861
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:+ranJaAz53owRRVW/+Piw5YqxClBgxIJ4kV8jS/XYbTApD9x0TdVm
 2MeDGrTPviJYmf0c9skYYXg80hQ6sWHm9VhQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbyRFuspvlDs15K6p4G5A4wRkDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw1epQWURQz
 eMkGR8nRC6B29y5/++2Rbw57igjBJGD0II3nFhFlGucKMl8BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTL++xruQA/zyQouFTpGPPTdsaHWoN+mUGAq
 3id12/4HgsbJJqUzj/tHneE37aXwX+kCNxLfFG+3txgp0+UzFINMzcxe36fvfCIiEGgVPsKf
 iT4/QJr98De7neDTNPwQhm5q36spQMHVpxbFOhSwBqW1qPe7gKdB24FZj1MctorsIkxXzNC/
 kCNt8PkA3poqrL9YXCA8raZqxuiNC5TKnUNDQcUQA1A79T9rYUbihPUUs0lAKOzlsfyGzz73
 3aNtidWulkIpcsC1qH++E+dhTup/8LNVlRtul+RWX+55ARkYoLjf5av9VXQ8fdHKsCeU0WFu
 38H3cOZ6YjiEK2wqcBEe81VdJnB2hpPGGS0bYJHd3X5ywmQxg==
IronPort-HdrOrdr: A9a23:2wCszK4x0xaVxNEm5wPXwD7XdLJyesId70hD6qkQc3Fom6uj5q
 eTdZUgpHvJYVMqMk3I9urwWpVoLUm9yXcX2+gs1NWZLW/bUQKTRekIgeSN/9SKIVyaygcy79
 YCT4FOTOTqC150lMD75xT9PeoB7bC8gduVrNab9mxqSw5ybaFm8kNeMSa0VmNLZCQuP+tCKL
 OsovNdoTyuYHIWadn+PHkKc/nIosHHmPvdEGc77zhO0njzsdpt0s+DYmWl4is=
X-Talos-CUID: =?us-ascii?q?9a23=3AYDGd0Gijdo9H4BzkpZSATFjVDDJuXXH6lFLOG1C?=
 =?us-ascii?q?EOElDYoOXFWGfx/5pnJ87?=
X-Talos-MUID: 9a23:Q6CEcQYo6w92meBTlTXLljZoCtlTyYOALWQRmKQEtNeLHHkl
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="110447861"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q+UqyigkdsytwZkYLZn0zoyFmbgEMsh8CiXAyeOip4eNDpnBuP6Sfz67ovI/h6BbKPeak+lf69oiFZoIQJdXBLjh7seD7PLI89Ywct74Dtdt/a6yfZsIoZ+8Lr+LbnU/dt0vZIB9+0yT7EjvHg9vGjFwTMfrhjIoUr9O9QfmDqMvI63yDvAVDVPeCxI8KwwqM3Z3Nbrdi9aAvjRQLjPO9c8x8wUeQ1wH8rSWVX97+J97v2X3iXGKBchQxUuKHiBLCT1YXDtTJlgHEK3O0dTyrBrIj/o/GMP+U2ra/TyFwt7DzcvTAjtXoVpPzwzBw5jUDx9Q9mzOeXPAEEUOKeFcKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bgDkydVTkHtXCOHYGpAUhRwTNDLOWQG5oFcoYqgo3C8=;
 b=c2wPqybdovOExTebKFN+r+wZSeiFuN58g2iER84E+Dq6osX6PAE7vUJJwb3paZZdeemN+zSNxZHRNFT7t5RE7NxMKXfF9IVo/vDCYPT1X176LPGI4lxoGYtzQcukevLCMQmna4gCK8PAQRmNJAZeWCYXFvrVtdqnjqqp3uKR0PsnB9k6MlRwFo+kMnd4XfDfXYkBDB/Z0b7bueuiJR5eyzWiVk1RtHeixpoDzveYLJyOzMHEDU+ZeyH9FswXToqaHEtyohYckvlh+iZMFWdkms+/x6SNYOMlsXcTvH74CxBpnpYOv5uc7IInK2gipjjxyOU20rm0VWlmn8q7Jjr3XA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bgDkydVTkHtXCOHYGpAUhRwTNDLOWQG5oFcoYqgo3C8=;
 b=XSbQKeJOVbnLLda7NDFPgQ2C80SI3vY1sbYKpwxJOYSG9t9PbPeWtcztezEt7Etbd6cWg09NjoOksl6WH+mdMS9mV0r2paM+srYbBVy7LYGFwAeIfLqn5cPGnwymIs/Ov92ysAhasok18kyA63lQDlfvbuRtaJP5CCF2yCBQoPo=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <df25cf9e-cf90-030a-91eb-6ee16b7b4dc7@citrix.com>
Date: Fri, 5 May 2023 14:36:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/2] LICENSES: Remove the use of deprecated LGPL SPDX tags
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
 <20230505130533.3580545-3-andrew.cooper3@citrix.com>
 <40d70f11-37e7-ff29-37c1-a94d3e286455@suse.com>
 <8fa2cf23-4832-a033-cfc3-e3225eff0047@citrix.com>
 <ac1662eb-41de-baee-45a0-db01b2141692@suse.com>
In-Reply-To: <ac1662eb-41de-baee-45a0-db01b2141692@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P123CA0082.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:138::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6673:EE_
X-MS-Office365-Filtering-Correlation-Id: e920757b-65ff-4eb6-fa8e-08db4d6dae96
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	INs1AL5LAZ+sZ/IxKrFl5NVtE4/40ojjkIWmIqFg5IrLvqs1vlrEOyaLnzGZKoW0jrEssQNiqu9bizCNQKVDs+fwqahggq8fScyUU/72hCZduLKEPRAtjsPLFxkqlLaFO5DpDTMhXu0HdH2qA1EkWIkoevoeigy38IxawfzSJScogaTK+DQmE7IhniM6pvqnjQJxFT2QxMsgJbcRztmNXglin0GaR5m9tFqZmb+ji+N+wCs48RT/ik77XpdnNQ4ghrzwF3O8yyjCH8kwRoqGVe8eaTVCwWOjYXuORoxcZvZHt51o6rzzbJyfgd7Z34plKZ9Xvx+KI0OKiPqoKMx8l60/B9SjTdPWlQZG4Ar5jBtoXLcIxYbVozF/ErnN4uw2gtqlzGdOKzyh3ybOUMDXorYmFvWeHc3Yls3D8189HPQ+nGehDrWnf7EYnM+6SlSma898aVvpG3mFU6RDVlvxV1DzRZqoQnaLhOBATMfnX2S5bJLTh7OGEO1eTgQqMkSGmJXKgYcnwUkGAWNdW3h+Ha6fQsZYnTBKPF9n3ZxyJ4wc4roiJAwebdO3+WG7IBvnAqNODGq1lNwCvtEBIFbnj+e+R0EqOGmNn1d5b2BsMrXFPSxrfuz0MVaAyuM1Gp3vGV0+pc8a9Lj1uR2q0m8btw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(136003)(366004)(39860400002)(376002)(451199021)(8936002)(966005)(26005)(6512007)(2616005)(6506007)(8676002)(53546011)(186003)(82960400001)(5660300002)(316002)(66476007)(54906003)(66556008)(66946007)(6916009)(41300700001)(6486002)(478600001)(6666004)(4326008)(38100700002)(36756003)(86362001)(31696002)(83380400001)(2906002)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WUt4WGpSTEZhSlAyZm9qeWJxdmdMR0hkc0w1dkpwSjNraXlwRXNiOWRVT1dH?=
 =?utf-8?B?SnVhb3BzRFhXajFvNTJldGhFTE1kNmprcnZmakg1OUhVV0NLQXhTZUovR2Rs?=
 =?utf-8?B?eWhrWHhGZ0dUUUhSc0xYK3B6V2d3SVZDa3lPMHlVTmdWSlhjZnFVZnhSaDI1?=
 =?utf-8?B?QkpEakp0SW5mVjBZTGdsbjRTZUJwaUptOFdJU3FYWnBzQnZYUzdjLzI2Zjl3?=
 =?utf-8?B?dVVkd1F5UkpZSmdSNlUwM1NMaUJIWklLRzB0QWxybyt1T0U3cjhTQmRLa0h2?=
 =?utf-8?B?T21BVnhSMmJtM2VqSlIxS2Z3Ykp2WnFGeGdLSWVHWDVYUkFBbzdBdkdHZ3Vy?=
 =?utf-8?B?aWdGTWNPUjBFUVYyVDQ5K25LODUrSmdIYlJJOVIrakhOcFArdTkvL21MYnlD?=
 =?utf-8?B?T2xjZDFKZ2ZzZ1lxZ1ViK2QrdVhSOUVSSTFXblk0a2dHdU8rSzkxcSt3TStp?=
 =?utf-8?B?aDJUQm9hekIwbk9HM0VNR1RyUmlUOFp3Sk5uM3ZKQnVzYm8vYyt0amRMRjhR?=
 =?utf-8?B?cklIbGhWK2ZVMHBSdSt1Q1Z5UG5haE5QTWlLdkE4UnMzeHd0S3pUeW5VcFQv?=
 =?utf-8?B?UEl0d2NiZVNqZmUxcHhwd3R6UTRIV0JRdjVoY3NIc2ZlbzB6by9FN2NGdE9U?=
 =?utf-8?B?OHNPWmlhdnUyeTdHRkpUTXRKa2VmWUhSSXY4eEZLd2l1dVo2SitWS3ZKNE9J?=
 =?utf-8?B?c0JwaFVWRUdiVjlKcStnYUczNjBYUFBVRHJlWmVYb3M2LzZidXRBaVZiSEdR?=
 =?utf-8?B?bzllNVJHcmVqeGhvTjBieGNRSmpOM0cyQk1kUno1MlpDNFlOQ2FxbjRvdUw2?=
 =?utf-8?B?Q0lFOU54OUdoYWc3S2JzYm05azhRb1dyeWRkY2QrRDF2aXhwaEhVU1o2cnBD?=
 =?utf-8?B?YjBpUzJZMy8wNGFQZU11WktsdzE2QVNpYjJGQndFZ2hxQ2g0eWtUK2pJK1Nv?=
 =?utf-8?B?WUQ3YUE4MXRUQkR4V09CWjkyRlFoR25uNTNuMVQwZ01SNHdnL3hhZW96d1Fo?=
 =?utf-8?B?NDdnOUdjd1JDNlIySFBJYmZ4M0NORU1WSFF4d1lBdWFLYy9LTGZvQm52Ykpv?=
 =?utf-8?B?US8wc1FwT3VaRDREc24xelQxUnBZbnB2aHovTnhzMCt6S3Flc2ZDdDEvQWhv?=
 =?utf-8?B?NTgxbjl1OVI3YkZURThKY253M3llSWpEL1R6bHpFWEI3bUZuOEh4UEN2YUJv?=
 =?utf-8?B?YzFRSTJyQlJRTTJjZjhXNE14YVNBZEVRRldYek5hSEVjR3BzWGFhbnM2MWRu?=
 =?utf-8?B?Q2VmMC9hV2xrWlRIbkdFSHJseFIzOW1xT0oyRkZmVkVBUUVZdkd2d25MM21V?=
 =?utf-8?B?eWNWbXhPdlMrS1JDVVR5MnJpTk1kUmVydG1ERlV1a2xWQ1lmampnWDJsczBo?=
 =?utf-8?B?c3kwWEo0bmlXampxemxxRU1iTjdscGJRYytEL05RaXd0Zmduelgyd2ZYRDZJ?=
 =?utf-8?B?bzJtcGZZa1A4cC81ZHY0VTRjZlBhcmRWblB3R2UwcXd3WGIvbnVYaWZVVmNL?=
 =?utf-8?B?dzhQbXFZdjBkdVhlbU1VdnFwcWhkdnU5Y0dXaHJCTnludllWbzJZYU84RThI?=
 =?utf-8?B?dExUZW5aVTlFcGZXVUVxalZNejdPNG4vd0hlWStFNCtKZU5HejdjSmpYZ3lw?=
 =?utf-8?B?YUdMSWdFdkxnbExtSmFpcjJvRk53NjZRN01QSVMrZkJaaWc0QlluZXU5NW00?=
 =?utf-8?B?bGlJNFcwMnVFdmNkajhRblMzYkhqRGNqcWN2N3loeUpDNmJYRWErbHB2SzRh?=
 =?utf-8?B?OFJYNnY4TEZKQVF0cG56cjIxNkxZWFpCYXhjdjBYRVExeEhWZTFMbjZvVURQ?=
 =?utf-8?B?QXhuZXV6TlQ4Nml6N1hhWFZ4WC9DTDBnMlVhRzU2VjlUbk9QcGgvNWg2djVN?=
 =?utf-8?B?UGt2ekR3YjRmaTN1d1hvKzB6Q3pwR1E5ZWNnSXFwakdBSFJnRUx5SnNGQm54?=
 =?utf-8?B?M2lCclZrRHFEYllzb0QrZ1pNNm5GTzcwVmZFakgrYzhlRCtrVS8wUE9TNFYy?=
 =?utf-8?B?NldBZVdTNlhpQUhSaUp0c1cwZWx3RERoKytaeHY3MGtHa3pBYjVlQTNpNEV1?=
 =?utf-8?B?Q0owMWFIdEFmRUZiTzVnTFJoUjgwektzMU1tUEU4UjFtSE1LYkRJcVVuUWZi?=
 =?utf-8?B?anZxbHFXTXhtTUljZGlRQTg2NHA5WXZVaklrbEt1L1pNNDNrTHRsQ1gzYkQ5?=
 =?utf-8?B?YVE9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	IsllrZDxOjXJGTIn3EdzFwhDgFnFSoxYcEVAjeAWV1yDi8ZRKDGrZuuhXqfhb1wf/gitPqJX4UXZvrfeoRflq3pyvU896TehEE9aKQ1gQIvvj8fumYXNW/SHi+cPlDrhHrCBNtngcqWEbJJQRTk57HNGkE43MAZ7YtvOdfjRA1XThLi3Jogbt42nSfK6Q8W2a/A5vaBjsqaQZ+robydSaR6v3gInBVVnHvLztdjNBgX/1J9U5vvg1iAJRwl0tSSpcxm0s5kwllEJ55QeMSC50ebDALJPHANGFYAFK0ucpGTSJJngKKm3cq9W57vLCVMUlomSnFktHIIXVeSBvLh9hG6ciLwc0gRCzgWqLmo//4WbdojLRtesEltsZ3FE51aqtwraYv9gbCbQ1AfSc0YvFafAC0dwoge43aYXJPs4nE5HzygNmzYnt7vPv2bCkb6EFZM2Hu7PwMn4cBHdYZs3tKXT1kzCPrvMjo1oyIx1dw45DSgldRCQn6w7lHFW9wYWmt3QVPYtpJqxkNj4GSXw9Eq3yQm7gkYXddAys88QMILm9+1yt6Y0JtBXAolVpaKc5gHYhdYWGhoLxs3SgNP6vUcvCiPEIknxNiNJHAR5K/g2e5Tq81IElJ+Ys4f6ayyAhH2DAwhJ2Mn6cNz0Ohu5Ae16SDU+TYvhkINBb1LM+6eOaMvTGv3pyceFZqjVKuqxRxt+M8QioVMLw0fn1G/D134oSI5ybelFSbIhB4dSlPR5KR46isE079hSkHefqBySSpY+lLKritA45QV1qoCENvEBuMRWI+7R5DY8Eay3T+EBmS8gVUfQJI/xNUDXjerR7iytYLeR+MUDkm5LM4qf70yfmcwCmWllXnzmWdnkWZ8=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e920757b-65ff-4eb6-fa8e-08db4d6dae96
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 13:36:08.2338
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8S133c1BZPdnfbVPik+rWgrJQYDh+vPh2f0QM6e95mIQr2UO+sKhZGIfTNeEz+ueqjFoCFBiChGXpuTUrfzEa60xVVeZmu+yBfmyDTqD5Sg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6673

On 05/05/2023 2:31 pm, Jan Beulich wrote:
> On 05.05.2023 15:22, Andrew Cooper wrote:
>> On 05/05/2023 2:13 pm, Jan Beulich wrote:
>>> On 05.05.2023 15:05, Andrew Cooper wrote:
>>>> The SPDX forms without an explicit -only or -or-later suffix are deprecated
>>>> and should not be used.
>>> I guess this wants a reference to where this is specified. In particular ...
>>>
>>>> --- a/LICENSES/LGPL-2.1
>>>> +++ b/LICENSES/LGPL-2.1
>>>> @@ -1,5 +1,5 @@
>>>> -Valid-License-Identifier: LGPL-2.1
>>>> -Valid-License-Identifier: LGPL-2.1+
>>>> +Valid-License-Identifier: LGPL-2.1-only
>>>> +Valid-License-Identifier: LGPL-2.1-or-later
>>>>  
>>>>  SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html
>>> ... I can't spot anything like this under e.g. this URL.
>> Hmm yeah, it is irritating.  The statement is at
>> https://spdx.org/licenses/ but only by virtue of two tables, the second
>> of which is the list of deprecated identifiers.
>>
>> I'll put a paragraph about this in the commit message.
>>
>>> Also is there a reason you add Deprecated-Identifier: only to GPL-2.0?
>>> Enumerating them would seem reasonable to me, not just for completeness,
>>> but also in case we end up importing a file with a deprecated tag.
>> We have problematic uses of GPL in tree, whereas we don't for LGPL.
>>
>> I'm considering a gitlab pass which will reject patches which use an
>> identifier not in the permitted list, and reject the introduction of new
>> uses of the deprecated ones.  For this, the deprecated-but-tolerated
>> tags need calling out in some machine-readable way, but I don't think
>> it's helpful to express the tolerating of a tag we don't have any
>> violations of.
> Hmm, okay. With the expanded commit message
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.  FAOD, here's the full commit message with adjustment:

LICENSES: Remove the use of deprecated LGPL SPDX tags

The SPDX forms without an explicit -only or -or-later suffix are deprecated
and should not be used.

Somewhat unhelpfully at the time of writing, this only appears to be
indicated by the separation of the two tables at https://spdx.org/licenses/

The recent changes to libacpi are the only examples of deprecated LGPL
tags in tree, so fix them all up.

For GPL, we have many examples using deprecated tags.  For now, just
identify them as such and recommend that no new instances get added.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 05 13:51:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 13:51:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530377.825980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvpy-00068s-VT; Fri, 05 May 2023 13:51:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530377.825980; Fri, 05 May 2023 13:51:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puvpy-00068l-Rs; Fri, 05 May 2023 13:51:06 +0000
Received: by outflank-mailman (input) for mailman id 530377;
 Fri, 05 May 2023 13:51:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puvpx-000677-Jv; Fri, 05 May 2023 13:51:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puvpx-0003PB-CA; Fri, 05 May 2023 13:51:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puvpw-0002NI-Th; Fri, 05 May 2023 13:51:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puvpw-0005Vp-TH; Fri, 05 May 2023 13:51:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1Z0p4NsvHgRE8hBIfyDsX0pB+U+Uq/KCaHTWOd+Y0iQ=; b=a2HSVBH2dDESLbco+aQUfSedmn
	t79LJhZRxgrtzTT5oDdWdAkTZuGJkFhNji+OK0XH/9mvp+7mvVDzeUtedguoMK4qvV0ONAGyLmhNV
	ICX13QRabyeX5AkmfWFa3Vp/Uq6ti08dBc7jxQ5qDcQKReTvhQMFHwseGm9Zu8UE18Ho=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180544-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180544: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=757f502a3b350436877102c3043744e021537c19
X-Osstest-Versions-That:
    ovmf=94c802e108a082d6f74c854bea8bd98fe7808453
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 13:51:04 +0000

flight 180544 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180544/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 757f502a3b350436877102c3043744e021537c19
baseline version:
 ovmf                 94c802e108a082d6f74c854bea8bd98fe7808453

Last test of basis   180542  2023-05-05 08:42:19 Z    0 days
Testing same since   180544  2023-05-05 12:12:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   94c802e108..757f502a3b  757f502a3b350436877102c3043744e021537c19 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 05 15:13:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:13:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530406.825989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pux6u-0006L6-Nw; Fri, 05 May 2023 15:12:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530406.825989; Fri, 05 May 2023 15:12:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pux6u-0006Kz-LL; Fri, 05 May 2023 15:12:40 +0000
Received: by outflank-mailman (input) for mailman id 530406;
 Fri, 05 May 2023 15:12:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pux6t-0006Kp-7p; Fri, 05 May 2023 15:12:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pux6s-0005Jy-Rz; Fri, 05 May 2023 15:12:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pux6s-0004Dy-CS; Fri, 05 May 2023 15:12:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pux6s-0004dT-C5; Fri, 05 May 2023 15:12:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ph6JG8TL+ijJwusBIya6RR0ONDZcAqfert5So9C/SZA=; b=XLHYxmyTM9YSYlOBbrFWXfhvzy
	R/icvQ7/FwVAWIQZIvCGRVr5fgfYMjy6M8rLopzkm/cdYM/x8uKSvpPonpDortvzYZ4wckd72Y6eD
	EGXLRczhWovJdCzb9Ic5JLsrD+rXnxeg90oZU9vidK81AllU+nwTKooNSy8t5+3BRrlY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180537-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180537: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b95a72bb5b2df24ff1baaa27920e57947dc97d49
X-Osstest-Versions-That:
    xen=b95a72bb5b2df24ff1baaa27920e57947dc97d49
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 15:12:38 +0000

flight 180537 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180537/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 180528 pass in 180537
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180528

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180528
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180528
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180528
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180528
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180528
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180528
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180528
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180528
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180528
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180528
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180528
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180528
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  b95a72bb5b2df24ff1baaa27920e57947dc97d49
baseline version:
 xen                  b95a72bb5b2df24ff1baaa27920e57947dc97d49

Last test of basis   180537  2023-05-05 01:52:22 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530422.826039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGO-0000CY-Gp; Fri, 05 May 2023 15:22:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530422.826039; Fri, 05 May 2023 15:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGO-00007r-79; Fri, 05 May 2023 15:22:28 +0000
Received: by outflank-mailman (input) for mailman id 530422;
 Fri, 05 May 2023 15:22:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxG0-0007pR-Uy
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:22:04 +0000
Received: from smtp-8fa9.mail.infomaniak.ch (smtp-8fa9.mail.infomaniak.ch
 [83.166.143.169]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9612e960-eb58-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 17:22:02 +0200 (CEST)
Received: from smtp-3-0000.mail.infomaniak.ch (unknown [10.4.36.107])
 by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZDB3g0SzMqb46;
 Fri,  5 May 2023 17:22:02 +0200 (CEST)
Received: from unknown by smtp-3-0000.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZD96vnsz1jJ; Fri,  5 May 2023 17:22:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9612e960-eb58-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300122;
	bh=ZzOynGNQaPpNtiJ0zd91Lcwn9zW2+9ulyf1drBQ/Prg=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=vaA3xKDBfcFF3nZ5ibwAf9TNvpVjn95vBk3M+IQ75Zbj+yeoIWJoXq+IK2rjSvz6J
	 Ek6QJhVl0QOgjeidTZl8FeFQNrpxRWC0pFUxavnSrBqF5C1GDY1DYBVA7I141lj7uP
	 nCrFgP2JPKXF3ssC9wKq84oe+CDtv4db+Lpg7rcs=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 5/9] KVM: x86: Add new hypercall to lock control registers
Date: Fri,  5 May 2023 17:20:42 +0200
Message-Id: <20230505152046.6575-6-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

This enables guests to lock their CR0 and CR4 registers with a subset of
X86_CR0_WP, X86_CR4_SMEP, X86_CR4_SMAP, X86_CR4_UMIP, X86_CR4_FSGSBASE
and X86_CR4_CET flags.

The new KVM_HC_LOCK_CR_UPDATE hypercall takes two arguments.  The first
identifies the control register, and the second is a bit mask of the
flags to pin (i.e. mark as read-only).

Linux guests already pin these register flags themselves, but once the
guest kernel is compromised, that self-protection can be disabled.
Pinning enforced by the hypervisor through this dedicated hypercall
cannot be turned off by the guest.

Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20230505152046.6575-6-mic@digikod.net
---
 Documentation/virt/kvm/x86/hypercalls.rst | 15 +++++
 arch/x86/kernel/cpu/common.c              |  2 +-
 arch/x86/kvm/vmx/vmx.c                    | 10 ++++
 arch/x86/kvm/x86.c                        | 72 +++++++++++++++++++++++
 arch/x86/kvm/x86.h                        | 16 +++++
 include/linux/kvm_host.h                  |  3 +
 include/uapi/linux/kvm_para.h             |  1 +
 7 files changed, 118 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/x86/hypercalls.rst b/Documentation/virt/kvm/x86/hypercalls.rst
index 0ec79cc77f53..8aa5d28986e3 100644
--- a/Documentation/virt/kvm/x86/hypercalls.rst
+++ b/Documentation/virt/kvm/x86/hypercalls.rst
@@ -207,3 +207,18 @@ identified with set of physical page ranges (GFNs).  The HEKI_ATTR_MEM_NOWRITE
 memory page range attribute forbids related modification to the guest.
 
 Returns 0 on success or a KVM error code otherwise.
+
+10. KVM_HC_LOCK_CR_UPDATE
+-------------------------
+
+:Architecture: x86
+:Status: active
+:Purpose: Request some control registers to be restricted.
+
+- a0: identify a control register
+- a1: bit mask to make some flags read-only
+
+The hypercall lets a guest request control register flags to be pinned for
+itself.
+
+Returns 0 on success or a KVM error code otherwise.
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index f3cc7699e1e1..dd89379fe5ac 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -413,7 +413,7 @@ static __always_inline void setup_umip(struct cpuinfo_x86 *c)
 }
 
 /* These bits should not change their value after CPU init is finished. */
-static const unsigned long cr4_pinned_mask =
+const unsigned long cr4_pinned_mask =
 	X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP |
 	X86_CR4_FSGSBASE | X86_CR4_CET;
 static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9870db887a62..931688edc8eb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3162,6 +3162,11 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long hw_cr0, old_cr0_pg;
 	u32 tmp;
+	int res;
+
+	res = heki_check_cr(vcpu->kvm, 0, cr0);
+	if (res)
+		return;
 
 	old_cr0_pg = kvm_read_cr0_bits(vcpu, X86_CR0_PG);
 
@@ -3323,6 +3328,11 @@ void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	 * this bit, even if host CR4.MCE == 0.
 	 */
 	unsigned long hw_cr4;
+	int res;
+
+	res = heki_check_cr(vcpu->kvm, 4, cr4);
+	if (res)
+		return;
 
 	hw_cr4 = (cr4_read_shadow() & X86_CR4_MCE) | (cr4 & ~X86_CR4_MCE);
 	if (is_unrestricted_guest(vcpu))
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ffab64d08de3..a529455359ac 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7927,11 +7927,77 @@ static unsigned long emulator_get_cr(struct x86_emulate_ctxt *ctxt, int cr)
 	return value;
 }
 
+#ifdef CONFIG_HEKI
+
+extern unsigned long cr4_pinned_mask;
+
+static int heki_lock_cr(struct kvm *const kvm, const unsigned long cr,
+			unsigned long pin)
+{
+	if (!pin)
+		return -KVM_EINVAL;
+
+	switch (cr) {
+	case 0:
+		/* Cf. arch/x86/kernel/cpu/common.c */
+		if (!(pin & X86_CR0_WP))
+			return -KVM_EINVAL;
+
+		if ((read_cr0() & pin) != pin)
+			return -KVM_EINVAL;
+
+		atomic_long_or(pin, &kvm->heki_pinned_cr0);
+		return 0;
+	case 4:
+		/* Rejects bits outside of cr4_pinned_mask. */
+		if ((pin & cr4_pinned_mask) != pin)
+			return -KVM_EINVAL;
+
+		/* Ignore bits not set in the host's CR4. */
+		pin &= __read_cr4();
+		atomic_long_or(pin, &kvm->heki_pinned_cr4);
+		return 0;
+	}
+	return -KVM_EINVAL;
+}
+
+int heki_check_cr(const struct kvm *const kvm, const unsigned long cr,
+		  const unsigned long val)
+{
+	unsigned long pinned;
+
+	switch (cr) {
+	case 0:
+		pinned = atomic_long_read(&kvm->heki_pinned_cr0);
+		if ((val & pinned) != pinned) {
+			pr_warn_ratelimited(
+				"heki-kvm: Blocked CR0 update: 0x%lx\n", val);
+			return -KVM_EPERM;
+		}
+		return 0;
+	case 4:
+		pinned = atomic_long_read(&kvm->heki_pinned_cr4);
+		if ((val & pinned) != pinned) {
+			pr_warn_ratelimited(
+				"heki-kvm: Blocked CR4 update: 0x%lx\n", val);
+			return -KVM_EPERM;
+		}
+		return 0;
+	}
+	return 0;
+}
+
+#endif /* CONFIG_HEKI */
+
 static int emulator_set_cr(struct x86_emulate_ctxt *ctxt, int cr, ulong val)
 {
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
 	int res = 0;
 
+	res = heki_check_cr(vcpu->kvm, cr, val);
+	if (res)
+		return res;
+
 	switch (cr) {
 	case 0:
 		res = kvm_set_cr0(vcpu, mk_cr_64(kvm_read_cr0(vcpu), val));
@@ -9858,6 +9924,12 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		else
 			ret = heki_lock_mem_page_ranges(vcpu->kvm, a0, a1);
 		break;
+	case KVM_HC_LOCK_CR_UPDATE:
+		if (a0 > U32_MAX)
+			ret = -KVM_EINVAL;
+		else
+			ret = heki_lock_cr(vcpu->kvm, a0, a1);
+		break;
 #endif /* CONFIG_HEKI */
 	default:
 		ret = -KVM_ENOSYS;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 9de72586f406..3e80a60ecbd8 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -276,6 +276,22 @@ static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk)
 	return !(kvm->arch.disabled_quirks & quirk);
 }
 
+#ifdef CONFIG_HEKI
+
+int heki_check_cr(const struct kvm *kvm, unsigned long cr, unsigned long val);
+
+bool kvm_heki_is_exec_allowed(struct kvm_vcpu *vcpu, gpa_t gpa);
+
+#else /* CONFIG_HEKI */
+
+static inline int heki_check_cr(const struct kvm *const kvm,
+				const unsigned long cr, const unsigned long val)
+{
+	return 0;
+}
+
+#endif /* CONFIG_HEKI */
+
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
 
 u64 get_kvmclock_ns(struct kvm *kvm);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 39a1bdc2ba42..ab9dc723bc89 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -812,6 +812,9 @@ struct kvm {
 #define HEKI_GFN_MAX 16
 	atomic_t heki_gfn_no_write_num;
 	struct heki_gfn_range heki_gfn_no_write[HEKI_GFN_MAX];
+
+	atomic_long_t heki_pinned_cr0;
+	atomic_long_t heki_pinned_cr4;
 #endif /* CONFIG_HEKI */
 
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index d7512a10880e..9f68d4ba646b 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -31,6 +31,7 @@
 #define KVM_HC_SCHED_YIELD		11
 #define KVM_HC_MAP_GPA_RANGE		12
 #define KVM_HC_LOCK_MEM_PAGE_RANGES	13
+#define KVM_HC_LOCK_CR_UPDATE		14
 
 /*
  * hypercalls use architecture specific
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530413.826005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGM-0007z8-Ki; Fri, 05 May 2023 15:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530413.826005; Fri, 05 May 2023 15:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGM-0007yS-Gc; Fri, 05 May 2023 15:22:26 +0000
Received: by outflank-mailman (input) for mailman id 530413;
 Fri, 05 May 2023 15:22:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxFw-0007pX-AD
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:22:00 +0000
Received: from smtp-190f.mail.infomaniak.ch (smtp-190f.mail.infomaniak.ch
 [185.125.25.15]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 932ffb88-eb58-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 17:21:58 +0200 (CEST)
Received: from smtp-3-0000.mail.infomaniak.ch (unknown [10.4.36.107])
 by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZD54cDqzMqZgk;
 Fri,  5 May 2023 17:21:57 +0200 (CEST)
Received: from unknown by smtp-3-0000.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZD330PNz1j3; Fri,  5 May 2023 17:21:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 932ffb88-eb58-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300117;
	bh=Sck5/KPfNUhl8VtyERkRdjhXvgnFmYNWT4AHdC4aHDg=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=Xy2uQqu3d6UyYIRq1FHdiksjhJdSOrSuJr5w83oL1hRdMuCXet7azQy+AgpzQUfui
	 mWmlvk+7HFvNzEiU9UANLTHxNmTdUoA1IlKd2pKbr/FKAIab/IQO57y70uUQT1Y0RN
	 NjhO3YHPUfCPweoKfcyI2ayyvi0vzx4JsaOeBc24=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 1/9] KVM: x86: Add kvm_x86_ops.fault_gva()
Date: Fri,  5 May 2023 17:20:38 +0200
Message-Id: <20230505152046.6575-2-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

This function is needed for kvm_mmu_page_fault() to create synthetic
page faults.

Code originally written by Mihai Donțu and Nicușor Cîțu:
https://lore.kernel.org/r/20211006173113.26445-18-alazar@bitdefender.com
Renamed fault_gla() to fault_gva() and used the new
EPT_VIOLATION_GVA_IS_VALID bit.

Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Co-developed-by: Mihai Donțu <mdontu@bitdefender.com>
Signed-off-by: Mihai Donțu <mdontu@bitdefender.com>
Co-developed-by: Nicușor Cîțu <nicu.citu@icloud.com>
Signed-off-by: Nicușor Cîțu <nicu.citu@icloud.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20230505152046.6575-2-mic@digikod.net
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 ++
 arch/x86/kvm/svm/svm.c             |  9 +++++++++
 arch/x86/kvm/vmx/vmx.c             | 10 ++++++++++
 4 files changed, 22 insertions(+)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index abccd51dcfca..b761182a9444 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -131,6 +131,7 @@ KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
+KVM_X86_OP(fault_gva)
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6aaae18f1854..f319bcdeb8bd 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1706,6 +1706,8 @@ struct kvm_x86_ops {
 	 * Returns vCPU specific APICv inhibit reasons
 	 */
 	unsigned long (*vcpu_get_apicv_inhibit_reasons)(struct kvm_vcpu *vcpu);
+
+	u64 (*fault_gva)(struct kvm_vcpu *vcpu);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9a194aa1a75a..8b47b38aaf7f 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4700,6 +4700,13 @@ static int svm_vm_init(struct kvm *kvm)
 	return 0;
 }
 
+static u64 svm_fault_gva(struct kvm_vcpu *vcpu)
+{
+	const struct vcpu_svm *svm = to_svm(vcpu);
+
+	return svm->vcpu.arch.cr2 ? svm->vcpu.arch.cr2 : ~0ull;
+}
+
 static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.name = "kvm_amd",
 
@@ -4826,6 +4833,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 
 	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
 	.vcpu_get_apicv_inhibit_reasons = avic_vcpu_get_apicv_inhibit_reasons,
+
+	.fault_gva = svm_fault_gva,
 };
 
 /*
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 7eec0226d56a..9870db887a62 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8067,6 +8067,14 @@ static void vmx_vm_destroy(struct kvm *kvm)
 	free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
 }
 
+static u64 vmx_fault_gva(struct kvm_vcpu *vcpu)
+{
+	if (vcpu->arch.exit_qualification & EPT_VIOLATION_GVA_IS_VALID)
+		return vmcs_readl(GUEST_LINEAR_ADDRESS);
+
+	return ~0ull;
+}
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.name = "kvm_intel",
 
@@ -8204,6 +8212,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.complete_emulated_msr = kvm_complete_insn_gp,
 
 	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
+
+	.fault_gva = vmx_fault_gva,
 };
 
 static unsigned int vmx_handle_intel_pt_intr(void)
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530415.826012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGN-000870-05; Fri, 05 May 2023 15:22:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530415.826012; Fri, 05 May 2023 15:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGM-00085e-RA; Fri, 05 May 2023 15:22:26 +0000
Received: by outflank-mailman (input) for mailman id 530415;
 Fri, 05 May 2023 15:22:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxFx-0007pR-DR
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:22:01 +0000
Received: from smtp-42ab.mail.infomaniak.ch (smtp-42ab.mail.infomaniak.ch
 [84.16.66.171]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 93f51caf-eb58-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 17:21:59 +0200 (CEST)
Received: from smtp-3-0001.mail.infomaniak.ch (unknown [10.4.36.108])
 by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZD66dQnzMqFxM;
 Fri,  5 May 2023 17:21:58 +0200 (CEST)
Received: from unknown by smtp-3-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZD60R4vzMpxBm; Fri,  5 May 2023 17:21:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93f51caf-eb58-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300118;
	bh=SrFXUO04Nqgobaetk479001er96mJxIHBBoQ7lZlahM=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=wpaXL8F3FW0faPdmfsziq7KNPT5JCNQ53r4Uxr1P2zxnScOSqz2qoUmOrpuWgBv6i
	 NuH/ri5XXTqceSyc0YP1flR3e6CIXrSsEa0s/oRFw99Emxppybi4No1rUvvsCb7yVM
	 kRuXppKZpZtphifBIw5w6qtCumEL2JsEPthxKuz0=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 2/9] KVM: x86/mmu: Add support for prewrite page tracking
Date: Fri,  5 May 2023 17:20:39 +0200
Message-Id: <20230505152046.6575-3-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

Add a new page tracking mode to deny a page update and inject a page
fault into the guest.  This is useful for KVM to be able to make some
pages non-writable ("non-writable" rather than "read-only" because it
implies nothing about execution restrictions); see the next Heki commits.

This kind of synthetic kernel page fault needs to be handled by the
guest, which is not currently the case, so the guest retries the faulting
access again and again.  This will be addressed by a follow-up patch
series.

Update emulator_read_write_onepage() to handle X86EMUL_CONTINUE and
X86EMUL_PROPAGATE_FAULT.

Update page_fault_handle_page_track() to call
kvm_slot_page_track_is_active() whenever this is required for
KVM_PAGE_TRACK_PREWRITE and KVM_PAGE_TRACK_WRITE, even if one tracker
already returned true.

Invert the return code semantic for read_emulate() and write_emulate():
- from 1=Ok 0=Error
- to X86EMUL_* return codes (e.g. X86EMUL_CONTINUE == 0)

Imported the prewrite page tracking support part originally written by
Mihai Donțu, Marian Rotariu, and Ștefan Șicleru:
https://lore.kernel.org/r/20211006173113.26445-27-alazar@bitdefender.com
https://lore.kernel.org/r/20211006173113.26445-28-alazar@bitdefender.com
Removed the GVA changes for page tracking, the X86EMUL_RETRY_INSTR case,
and some of the emulation parts for now.

Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Marian Rotariu <marian.c.rotariu@gmail.com>
Cc: Mihai Donțu <mdontu@bitdefender.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Ștefan Șicleru <ssicleru@bitdefender.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20230505152046.6575-3-mic@digikod.net
---
 arch/x86/include/asm/kvm_page_track.h | 12 +++++
 arch/x86/kvm/mmu/mmu.c                | 64 ++++++++++++++++++++++-----
 arch/x86/kvm/mmu/page_track.c         | 33 +++++++++++++-
 arch/x86/kvm/mmu/spte.c               |  6 +++
 arch/x86/kvm/x86.c                    | 27 +++++++----
 5 files changed, 122 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index eb186bc57f6a..a7fb4ff888e6 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -3,6 +3,7 @@
 #define _ASM_X86_KVM_PAGE_TRACK_H
 
 enum kvm_page_track_mode {
+	KVM_PAGE_TRACK_PREWRITE,
 	KVM_PAGE_TRACK_WRITE,
 	KVM_PAGE_TRACK_MAX,
 };
@@ -22,6 +23,16 @@ struct kvm_page_track_notifier_head {
 struct kvm_page_track_notifier_node {
 	struct hlist_node node;
 
+	/*
+	 * It is called when the guest is writing to a write-tracked page,
+	 * before the write emulation has happened.
+	 *
+	 * @vcpu: the vcpu where the write access happened
+	 * @gpa: the physical address written by guest
+	 * @node: this node
+	 */
+	bool (*track_prewrite)(struct kvm_vcpu *vcpu, gpa_t gpa,
+			       struct kvm_page_track_notifier_node *node);
 	/*
 	 * It is called when guest is writing the write-tracked page
 	 * and write emulation is finished at that time.
@@ -73,6 +84,7 @@ kvm_page_track_register_notifier(struct kvm *kvm,
 void
 kvm_page_track_unregister_notifier(struct kvm *kvm,
 				   struct kvm_page_track_notifier_node *n);
+bool kvm_page_track_prewrite(struct kvm_vcpu *vcpu, gpa_t gpa);
 void kvm_page_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
 			  int bytes);
 void kvm_page_track_flush_slot(struct kvm *kvm, struct kvm_memory_slot *slot);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 835426254e76..e5d1e241ff0f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -793,9 +793,13 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	slot = __gfn_to_memslot(slots, gfn);
 
 	/* the non-leaf shadow pages are keeping readonly. */
-	if (sp->role.level > PG_LEVEL_4K)
-		return kvm_slot_page_track_add_page(kvm, slot, gfn,
-						    KVM_PAGE_TRACK_WRITE);
+	if (sp->role.level > PG_LEVEL_4K) {
+		kvm_slot_page_track_add_page(kvm, slot, gfn,
+					     KVM_PAGE_TRACK_PREWRITE);
+		kvm_slot_page_track_add_page(kvm, slot, gfn,
+					     KVM_PAGE_TRACK_WRITE);
+		return;
+	}
 
 	kvm_mmu_gfn_disallow_lpage(slot, gfn);
 
@@ -840,9 +844,13 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	gfn = sp->gfn;
 	slots = kvm_memslots_for_spte_role(kvm, sp->role);
 	slot = __gfn_to_memslot(slots, gfn);
-	if (sp->role.level > PG_LEVEL_4K)
-		return kvm_slot_page_track_remove_page(kvm, slot, gfn,
-						       KVM_PAGE_TRACK_WRITE);
+	if (sp->role.level > PG_LEVEL_4K) {
+		kvm_slot_page_track_remove_page(kvm, slot, gfn,
+						KVM_PAGE_TRACK_PREWRITE);
+		kvm_slot_page_track_remove_page(kvm, slot, gfn,
+						KVM_PAGE_TRACK_WRITE);
+		return;
+	}
 
 	kvm_mmu_gfn_allow_lpage(slot, gfn);
 }
@@ -2714,7 +2722,10 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot,
 	 * track machinery is used to write-protect upper-level shadow pages,
 	 * i.e. this guards the role.level == 4K assertion below!
 	 */
-	if (kvm_slot_page_track_is_active(kvm, slot, gfn, KVM_PAGE_TRACK_WRITE))
+	if (kvm_slot_page_track_is_active(kvm, slot, gfn,
+						KVM_PAGE_TRACK_PREWRITE) ||
+	    kvm_slot_page_track_is_active(kvm, slot, gfn,
+						KVM_PAGE_TRACK_WRITE))
 		return -EPERM;
 
 	/*
@@ -4103,6 +4114,8 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 static bool page_fault_handle_page_track(struct kvm_vcpu *vcpu,
 					 struct kvm_page_fault *fault)
 {
+	bool ret = false;
+
 	if (unlikely(fault->rsvd))
 		return false;
 
@@ -4113,10 +4126,14 @@ static bool page_fault_handle_page_track(struct kvm_vcpu *vcpu,
 	 * guest is writing the page which is write tracked which can
 	 * not be fixed by page fault handler.
 	 */
-	if (kvm_slot_page_track_is_active(vcpu->kvm, fault->slot, fault->gfn, KVM_PAGE_TRACK_WRITE))
-		return true;
+	ret = kvm_slot_page_track_is_active(vcpu->kvm, fault->slot, fault->gfn,
+					    KVM_PAGE_TRACK_PREWRITE) ||
+	      ret;
+	ret = kvm_slot_page_track_is_active(vcpu->kvm, fault->slot, fault->gfn,
+					    KVM_PAGE_TRACK_WRITE) ||
+	      ret;
 
-	return false;
+	return ret;
 }
 
 static void shadow_page_table_clear_flood(struct kvm_vcpu *vcpu, gva_t addr)
@@ -5600,6 +5617,33 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	if (r != RET_PF_EMULATE)
 		return 1;
 
+	if ((error_code & PFERR_WRITE_MASK) &&
+	    !kvm_page_track_prewrite(vcpu, cr2_or_gpa)) {
+		struct x86_exception fault = {
+			.vector = PF_VECTOR,
+			.error_code_valid = true,
+			.error_code = error_code,
+			.nested_page_fault = false,
+			/*
+			 * TODO: This kind of kernel page fault needs to be handled by
+			 * the guest, which is not currently the case, making it try
+			 * again and again.
+			 *
+			 * You may want to test with cr2_or_gva to see the page
+			 * fault caught by the guest kernel (thinking it is a
+			 * user space fault).
+			 */
+			.address = static_call(kvm_x86_fault_gva)(vcpu),
+			.async_page_fault = false,
+		};
+
+		pr_warn_ratelimited(
+			"heki-kvm: Creating write #PF at 0x%016llx\n",
+			fault.address);
+		kvm_inject_page_fault(vcpu, &fault);
+		return RET_PF_INVALID;
+	}
+
 	/*
 	 * Before emulating the instruction, check if the error code
 	 * was due to a RO violation while translating the guest page.
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index 2e09d1b6249f..2454887cd48b 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -131,9 +131,10 @@ void kvm_slot_page_track_add_page(struct kvm *kvm,
 	 */
 	kvm_mmu_gfn_disallow_lpage(slot, gfn);
 
-	if (mode == KVM_PAGE_TRACK_WRITE)
+	if (mode == KVM_PAGE_TRACK_PREWRITE || mode == KVM_PAGE_TRACK_WRITE) {
 		if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
 			kvm_flush_remote_tlbs(kvm);
+	}
 }
 EXPORT_SYMBOL_GPL(kvm_slot_page_track_add_page);
 
@@ -248,6 +249,36 @@ kvm_page_track_unregister_notifier(struct kvm *kvm,
 }
 EXPORT_SYMBOL_GPL(kvm_page_track_unregister_notifier);
 
+/*
+ * Notify the node that a write access is about to happen. Returning false
+ * doesn't stop the other nodes from being called, but it will stop
+ * the emulation.
+ *
+ * The node should figure out if the written page is the one that the node
+ * is interested in by itself.
+ */
+bool kvm_page_track_prewrite(struct kvm_vcpu *vcpu, gpa_t gpa)
+{
+	struct kvm_page_track_notifier_head *head;
+	struct kvm_page_track_notifier_node *n;
+	int idx;
+	bool ret = true;
+
+	head = &vcpu->kvm->arch.track_notifier_head;
+
+	if (hlist_empty(&head->track_notifier_list))
+		return ret;
+
+	idx = srcu_read_lock(&head->track_srcu);
+	hlist_for_each_entry_srcu(n, &head->track_notifier_list, node,
+				  srcu_read_lock_held(&head->track_srcu))
+		if (n->track_prewrite)
+			if (!n->track_prewrite(vcpu, gpa, n))
+				ret = false;
+	srcu_read_unlock(&head->track_srcu, idx);
+	return ret;
+}
+
 /*
  * Notify the node that write access is intercepted and write emulation is
  * finished at this time.
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index c0fd7e049b4e..639f220a1ed5 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -144,6 +144,12 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	u64 spte = SPTE_MMU_PRESENT_MASK;
 	bool wrprot = false;
 
+	if (kvm_slot_page_track_is_active(vcpu->kvm, slot, gfn,
+					  KVM_PAGE_TRACK_PREWRITE) ||
+	    kvm_slot_page_track_is_active(vcpu->kvm, slot, gfn,
+					  KVM_PAGE_TRACK_WRITE))
+		pte_access &= ~ACC_WRITE_MASK;
+
 	WARN_ON_ONCE(!pte_access && !shadow_present_mask);
 
 	if (sp->role.ad_disabled)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a2c299d47e69..fd05f42c9913 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7325,6 +7325,7 @@ static int kvm_write_guest_virt_helper(gva_t addr, void *val, unsigned int bytes
 			r = X86EMUL_IO_NEEDED;
 			goto out;
 		}
+		kvm_page_track_write(vcpu, gpa, data, towrite);
 
 		bytes -= towrite;
 		data += towrite;
@@ -7441,13 +7442,12 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
 			const void *val, int bytes)
 {
-	int ret;
-
-	ret = kvm_vcpu_write_guest(vcpu, gpa, val, bytes);
-	if (ret < 0)
-		return 0;
+	if (!kvm_page_track_prewrite(vcpu, gpa))
+		return X86EMUL_PROPAGATE_FAULT;
+	if (kvm_vcpu_write_guest(vcpu, gpa, val, bytes))
+		return X86EMUL_UNHANDLEABLE;
 	kvm_page_track_write(vcpu, gpa, val, bytes);
-	return 1;
+	return X86EMUL_CONTINUE;
 }
 
 struct read_write_emulator_ops {
@@ -7477,7 +7477,9 @@ static int read_prepare(struct kvm_vcpu *vcpu, void *val, int bytes)
 static int read_emulate(struct kvm_vcpu *vcpu, gpa_t gpa,
 			void *val, int bytes)
 {
-	return !kvm_vcpu_read_guest(vcpu, gpa, val, bytes);
+	if (kvm_vcpu_read_guest(vcpu, gpa, val, bytes))
+		return X86EMUL_UNHANDLEABLE;
+	return X86EMUL_CONTINUE;
 }
 
 static int write_emulate(struct kvm_vcpu *vcpu, gpa_t gpa,
@@ -7551,8 +7553,12 @@ static int emulator_read_write_onepage(unsigned long addr, void *val,
 			return X86EMUL_PROPAGATE_FAULT;
 	}
 
-	if (!ret && ops->read_write_emulate(vcpu, gpa, val, bytes))
-		return X86EMUL_CONTINUE;
+	if (!ret) {
+		ret = ops->read_write_emulate(vcpu, gpa, val, bytes);
+		if (ret != X86EMUL_UNHANDLEABLE)
+			/* Handles X86EMUL_CONTINUE and X86EMUL_PROPAGATE_FAULT. */
+			return ret;
+	}
 
 	/*
 	 * Is this MMIO handled locally?
@@ -7689,6 +7695,9 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 	if (kvm_is_error_hva(hva))
 		goto emul_write;
 
+	if (!kvm_page_track_prewrite(vcpu, gpa))
+		return X86EMUL_PROPAGATE_FAULT;
+
 	hva += offset_in_page(gpa);
 
 	switch (bytes) {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530419.826025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGN-0008M8-L5; Fri, 05 May 2023 15:22:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530419.826025; Fri, 05 May 2023 15:22:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGN-0008Iz-FC; Fri, 05 May 2023 15:22:27 +0000
Received: by outflank-mailman (input) for mailman id 530419;
 Fri, 05 May 2023 15:22:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxFz-0007pX-BI
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:22:03 +0000
Received: from smtp-8faa.mail.infomaniak.ch (smtp-8faa.mail.infomaniak.ch
 [2001:1600:4:17::8faa])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9535c569-eb58-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 17:22:02 +0200 (CEST)
Received: from smtp-3-0001.mail.infomaniak.ch (unknown [10.4.36.108])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZD81Wj3zMqHLq;
 Fri,  5 May 2023 17:22:00 +0200 (CEST)
Received: from unknown by smtp-3-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZD73gl0zMpxBk; Fri,  5 May 2023 17:21:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9535c569-eb58-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300120;
	bh=lpToIjkPWA0duCR50VLguq+aftI5CkhRhLIM9+Y09DI=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=zFJPl8PCpV0TqwQBoPoSGHQX+/8+SSuy0c9K2NPBDqUtmLq0QmGY7FWD3HryhaVm/
	 17tCkfTiTGYBupyQ9RSGCsspEYrGNJuhBEUcpQDTrOdzsDu04Esrk+DE1xaHkpmpk1
	 MM8P5jZUL4ILscFaudbqPEsff1UIBvirPP3WnImg=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 3/9] virt: Implement Heki common code
Date: Fri,  5 May 2023 17:20:40 +0200
Message-Id: <20230505152046.6575-4-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

From: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>

Hypervisor Enforced Kernel Integrity (Heki) is a feature that uses the
hypervisor to enhance the security of guest virtual machines.

Configuration
=============

Define the config variables for the feature. This feature depends on
support from the architecture as well as the hypervisor.

Enabling HEKI
=============

Define a kernel command-line parameter, "heki", to turn the feature on
or off (for example, booting with "heki=off" disables it). By default,
Heki is on.

Feature initialization
======================

The linker script, vmlinux.lds.S, defines a number of sections that are
loaded in kernel memory. Each of these sections has its own permissions.
For instance, .text has HEKI_ATTR_MEM_EXEC | HEKI_ATTR_MEM_NOWRITE, and
.rodata has HEKI_ATTR_MEM_NOWRITE.

Define an architecture-specific init function, heki_arch_init(). In this
function, collect the ranges of all of the sections. These sections will
be protected in the host page table with their respective permissions so
that even if the guest kernel is compromised, their permissions cannot
be changed.

Define heki_early_init() to initialize the feature. For now, this
function just checks if the feature is enabled and calls
heki_arch_init().

Define heki_late_init() to protect the sections in the host page table.
This requires hypervisor support, which will be introduced in a future
patch.  This function is called at the end of kernel init.

Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mickaël Salaün <mic@digikod.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Link: https://lore.kernel.org/r/20230505152046.6575-4-mic@digikod.net
---
 Kconfig                         |   2 +
 arch/x86/Kconfig                |   1 +
 arch/x86/include/asm/sections.h |   4 +
 arch/x86/kernel/setup.c         |  49 ++++++++++++
 include/linux/heki.h            |  90 +++++++++++++++++++++
 init/main.c                     |   3 +
 virt/Makefile                   |   1 +
 virt/heki/Kconfig               |  22 ++++++
 virt/heki/Makefile              |   3 +
 virt/heki/heki.c                | 135 ++++++++++++++++++++++++++++++++
 10 files changed, 310 insertions(+)
 create mode 100644 include/linux/heki.h
 create mode 100644 virt/heki/Kconfig
 create mode 100644 virt/heki/Makefile
 create mode 100644 virt/heki/heki.c

diff --git a/Kconfig b/Kconfig
index 745bc773f567..0c844d9bcb03 100644
--- a/Kconfig
+++ b/Kconfig
@@ -29,4 +29,6 @@ source "lib/Kconfig"
 
 source "lib/Kconfig.debug"
 
+source "virt/heki/Kconfig"
+
 source "Documentation/Kconfig"
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3604074a878b..5cf5a7a97811 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -297,6 +297,7 @@ config X86
 	select FUNCTION_ALIGNMENT_4B
 	imply IMA_SECURE_AND_OR_TRUSTED_BOOT    if EFI
 	select HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
+	select ARCH_SUPPORTS_HEKI		if X86_64
 
 config INSTRUCTION_DECODER
 	def_bool y
diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/asm/sections.h
index a6e8373a5170..42ef1e33b8a5 100644
--- a/arch/x86/include/asm/sections.h
+++ b/arch/x86/include/asm/sections.h
@@ -18,6 +18,10 @@ extern char __end_of_kernel_reserve[];
 
 extern unsigned long _brk_start, _brk_end;
 
+extern int __start_orc_unwind_ip[], __stop_orc_unwind_ip[];
+extern struct orc_entry __start_orc_unwind[], __stop_orc_unwind[];
+extern unsigned int orc_lookup[], orc_lookup_end[];
+
 static inline bool arch_is_kernel_initmem_freed(unsigned long addr)
 {
 	/*
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 88188549647c..f0ddaf24ab63 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -11,6 +11,7 @@
 #include <linux/dma-map-ops.h>
 #include <linux/dmi.h>
 #include <linux/efi.h>
+#include <linux/heki.h>
 #include <linux/ima.h>
 #include <linux/init_ohci1394_dma.h>
 #include <linux/initrd.h>
@@ -850,6 +851,54 @@ static void __init x86_report_nx(void)
 	}
 }
 
+#ifdef CONFIG_HEKI
+
+/*
+ * Gather all of the statically defined sections so heki_late_init() can
+ * protect these sections in the host page table.
+ *
+ * The sections are defined under "SECTIONS" in vmlinux.lds.S
+ * Keep this array in sync with SECTIONS.
+ */
+struct heki_va_range __initdata heki_va_ranges[] = {
+	{
+		.va_start = _stext,
+		.va_end = _etext,
+		.attributes = HEKI_ATTR_MEM_NOWRITE | HEKI_ATTR_MEM_EXEC,
+	},
+	{
+		.va_start = __start_rodata,
+		.va_end = __end_rodata,
+		.attributes = HEKI_ATTR_MEM_NOWRITE,
+	},
+#ifdef CONFIG_UNWINDER_ORC
+	{
+		.va_start = __start_orc_unwind_ip,
+		.va_end = __stop_orc_unwind_ip,
+		.attributes = HEKI_ATTR_MEM_NOWRITE,
+	},
+	{
+		.va_start = __start_orc_unwind,
+		.va_end = __stop_orc_unwind,
+		.attributes = HEKI_ATTR_MEM_NOWRITE,
+	},
+	{
+		.va_start = orc_lookup,
+		.va_end = orc_lookup_end,
+		.attributes = HEKI_ATTR_MEM_NOWRITE,
+	},
+#endif /* CONFIG_UNWINDER_ORC */
+};
+
+void __init heki_arch_init(void)
+{
+	heki.num_static_ranges = ARRAY_SIZE(heki_va_ranges);
+	heki.static_ranges =
+		heki_alloc_pa_ranges(heki_va_ranges, heki.num_static_ranges);
+}
+
+#endif /* CONFIG_HEKI */
+
 /*
  * Determine if we were loaded by an EFI loader.  If so, then we have also been
  * passed the efi memmap, systab, etc., so we should use these data structures
diff --git a/include/linux/heki.h b/include/linux/heki.h
new file mode 100644
index 000000000000..e4a3192ba687
--- /dev/null
+++ b/include/linux/heki.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Hypervisor Enforced Kernel Integrity (Heki) - Headers
+ *
+ * Copyright © 2023 Microsoft Corporation
+ */
+
+#ifndef __HEKI_H__
+#define __HEKI_H__
+
+#ifdef CONFIG_HEKI
+
+#include <linux/kvm_types.h>
+
+/* Heki attributes for memory pages. */
+/* clang-format off */
+#define HEKI_ATTR_MEM_NOWRITE		(1ULL << 0)
+#define HEKI_ATTR_MEM_EXEC		(1ULL << 1)
+/* clang-format on */
+
+/*
+ * heki_va_range is used to specify a virtual address range within the kernel
+ * address space along with their attributes.
+ */
+struct heki_va_range {
+	void *va_start;
+	void *va_end;
+	u64 attributes;
+};
+
+/*
+ * heki_pa_range is passed to the VMM or hypervisor so it can be processed by
+ * the VMM or the hypervisor based on range attributes. Examples of ranges:
+ *
+ *	- a range whose permissions need to be set in the host page table
+ *	- a range that contains information needed for authentication
+ *
+ * When an array of these is passed to the Hypervisor or VMM, the array
+ * must be in physically contiguous memory.
+ */
+struct heki_pa_range {
+	gfn_t gfn_start;
+	gfn_t gfn_end;
+	u64 attributes;
+};
+
+/*
+ * A hypervisor that supports Heki will instantiate this structure to
+ * provide hypervisor specific functions for Heki.
+ */
+struct heki_hypervisor {
+	int (*protect_ranges)(struct heki_pa_range *ranges, int num_ranges);
+	int (*lock_crs)(void);
+};
+
+/*
+ * If the architecture supports Heki, it will initialize static_ranges in
+ * early boot.
+ *
+ * If the active hypervisor supports Heki, it will plug its heki_hypervisor
+ * pointer into this heki structure.
+ */
+struct heki {
+	struct heki_pa_range *static_ranges;
+	int num_static_ranges;
+	struct heki_hypervisor *hypervisor;
+};
+
+extern struct heki heki;
+
+void heki_early_init(void);
+void heki_arch_init(void);
+void heki_late_init(void);
+
+struct heki_pa_range *heki_alloc_pa_ranges(struct heki_va_range *va_ranges,
+					   int num_ranges);
+void heki_free_pa_ranges(struct heki_pa_range *pa_ranges, int num_ranges);
+
+#else /* !CONFIG_HEKI */
+
+static inline void heki_early_init(void)
+{
+}
+static inline void heki_late_init(void)
+{
+}
+
+#endif /* CONFIG_HEKI */
+
+#endif /* __HEKI_H__ */
diff --git a/init/main.c b/init/main.c
index e1c3911d7c70..8649dbb07f18 100644
--- a/init/main.c
+++ b/init/main.c
@@ -102,6 +102,7 @@
 #include <linux/stackdepot.h>
 #include <linux/randomize_kstack.h>
 #include <net/net_namespace.h>
+#include <linux/heki.h>
 
 #include <asm/io.h>
 #include <asm/bugs.h>
@@ -999,6 +1000,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
 	sort_main_extable();
 	trap_init();
 	mm_init();
+	heki_early_init();
 	poking_init();
 	ftrace_init();
 
@@ -1530,6 +1532,7 @@ static int __ref kernel_init(void *unused)
 	exit_boot_config();
 	free_initmem();
 	mark_readonly();
+	heki_late_init();
 
 	/*
 	 * Kernel mappings are now finalized - update the userspace page-table
diff --git a/virt/Makefile b/virt/Makefile
index 1cfea9436af9..4550dc624466 100644
--- a/virt/Makefile
+++ b/virt/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y	+= lib/
+obj-$(CONFIG_HEKI) += heki/
diff --git a/virt/heki/Kconfig b/virt/heki/Kconfig
new file mode 100644
index 000000000000..9858a827fe17
--- /dev/null
+++ b/virt/heki/Kconfig
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Hypervisor Enforced Kernel Integrity (HEKI)
+#
+
+config HEKI
+	bool "Hypervisor Enforced Kernel Integrity (Heki)"
+	default y
+	depends on !JUMP_LABEL && ARCH_SUPPORTS_HEKI
+	select KVM_EXTERNAL_WRITE_TRACKING if KVM
+	help
+	  This feature enhances guest virtual machine security by taking
+	  advantage of security features provided by the hypervisor for guests.
+	  This feature is helpful in maintaining guest virtual machine security
+	  even after the guest kernel has been compromised.
+
+config ARCH_SUPPORTS_HEKI
+	bool "Architecture support for HEKI"
+	help
+	  An architecture should select this when it can successfully build
+	  and run with CONFIG_HEKI. That is, it should provide all of the
+	  architecture support required for the HEKI feature.
diff --git a/virt/heki/Makefile b/virt/heki/Makefile
new file mode 100644
index 000000000000..2bc2061c9dfc
--- /dev/null
+++ b/virt/heki/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+obj-y += heki.o
diff --git a/virt/heki/heki.c b/virt/heki/heki.c
new file mode 100644
index 000000000000..c8cb1b84cceb
--- /dev/null
+++ b/virt/heki/heki.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Hypervisor Enforced Kernel Integrity (Heki) - Common code
+ *
+ * Copyright © 2023 Microsoft Corporation
+ */
+
+#include <linux/cache.h>
+#include <linux/heki.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/types.h>
+#include <linux/vmalloc.h>
+
+#ifdef pr_fmt
+#undef pr_fmt
+#endif
+
+#define pr_fmt(fmt) "heki-guest: " fmt
+
+static bool heki_enabled __ro_after_init = true;
+
+struct heki heki = {};
+
+struct heki_pa_range *heki_alloc_pa_ranges(struct heki_va_range *va_ranges,
+					   int num_ranges)
+{
+	struct heki_pa_range *pa_ranges, *pa_range;
+	struct heki_va_range *va_range;
+	u64 attributes;
+	size_t size;
+	int i;
+
+	size = PAGE_ALIGN(sizeof(struct heki_pa_range) * num_ranges);
+	pa_ranges = alloc_pages_exact(size, GFP_KERNEL);
+	if (!pa_ranges)
+		return NULL;
+
+	for (i = 0; i < num_ranges; i++) {
+		va_range = &va_ranges[i];
+		pa_range = &pa_ranges[i];
+
+		pa_range->gfn_start = PFN_DOWN(__pa_symbol(va_range->va_start));
+		pa_range->gfn_end = PFN_UP(__pa_symbol(va_range->va_end)) - 1;
+		pa_range->attributes = va_range->attributes;
+
+		/*
+		 * WARNING:
+		 * Leaks addresses, should only be kept for development.
+		 */
+		attributes = pa_range->attributes;
+		pr_warn("Configuring GFN 0x%llx-0x%llx with %s\n",
+			pa_range->gfn_start, pa_range->gfn_end,
+			(attributes & HEKI_ATTR_MEM_NOWRITE) ? "[nowrite]" :
+							       "");
+	}
+
+	return pa_ranges;
+}
+
+void heki_free_pa_ranges(struct heki_pa_range *pa_ranges, int num_ranges)
+{
+	size_t size;
+
+	size = PAGE_ALIGN(sizeof(struct heki_pa_range) * num_ranges);
+	free_pages_exact(pa_ranges, size);
+}
+
+void __init heki_early_init(void)
+{
+	if (!heki_enabled) {
+		pr_warn("Disabled\n");
+		return;
+	}
+	pr_warn("Enabled\n");
+
+	heki_arch_init();
+}
+
+void heki_late_init(void)
+{
+	struct heki_hypervisor *hypervisor = heki.hypervisor;
+	int ret;
+
+	if (!heki_enabled)
+		return;
+
+	if (!heki.static_ranges) {
+		pr_warn("Architecture did not initialize static ranges\n");
+		return;
+	}
+
+	/*
+	 * Hypervisor support will be added in the future. When it is, the
+	 * hypervisor will be used to protect guest kernel memory and
+	 * control registers.
+	 */
+
+	if (!hypervisor) {
+		/* This happens for kernels running on bare metal as well. */
+		pr_warn("No hypervisor support\n");
+		goto out;
+	}
+
+	/* Protects statically defined sections in the host page table. */
+	ret = hypervisor->protect_ranges(heki.static_ranges,
+					 heki.num_static_ranges);
+	if (WARN(ret, "Failed to protect static sections: %d\n", ret))
+		goto out;
+	pr_warn("Static sections protected\n");
+
+	/*
+	 * Locks control registers so a compromised guest cannot change
+	 * them.
+	 */
+	ret = hypervisor->lock_crs();
+	if (WARN(ret, "Failed to lock control registers: %d\n", ret))
+		goto out;
+	pr_warn("Control registers locked\n");
+
+out:
+	heki_free_pa_ranges(heki.static_ranges, heki.num_static_ranges);
+	heki.static_ranges = NULL;
+	heki.num_static_ranges = 0;
+}
+
+static int __init heki_parse_config(char *str)
+{
+	if (strtobool(str, &heki_enabled))
+		pr_warn("Invalid option string for heki: '%s'\n", str);
+	return 1;
+}
+
+__setup("heki=", heki_parse_config);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530421.826032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGO-0008Ul-3q; Fri, 05 May 2023 15:22:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530421.826032; Fri, 05 May 2023 15:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGN-0008ST-Qs; Fri, 05 May 2023 15:22:27 +0000
Received: by outflank-mailman (input) for mailman id 530421;
 Fri, 05 May 2023 15:22:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxG0-0007pX-Qp
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:22:04 +0000
Received: from smtp-bc0e.mail.infomaniak.ch (smtp-bc0e.mail.infomaniak.ch
 [45.157.188.14]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 96cada29-eb58-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 17:22:04 +0200 (CEST)
Received: from smtp-3-0001.mail.infomaniak.ch (unknown [10.4.36.108])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZDC54NKzMqP0t;
 Fri,  5 May 2023 17:22:03 +0200 (CEST)
Received: from unknown by smtp-3-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZDB71wWzMpt9w; Fri,  5 May 2023 17:22:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96cada29-eb58-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300123;
	bh=DM23zXDS+uuGwMhWqh31C3FvKal04OOOYj6BCpV8eTU=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=xWSKWq8GRjQntbZQyjyc/hcLfD/4e9Eo9VEQ3TyJ15GAj81rIlqRsji0rYWtV6o9s
	 DjYkgLxPq6tOCUL7E0ENshFSZ0cXEvowTkLwi08Dmqi/WhYCFJPWohunvkPKWdyymr
	 yIWmw5cSKPerfUXYOOtVgs7cuuhvsEWWph9Se7ZE=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 6/9] KVM: x86: Add Heki hypervisor support
Date: Fri,  5 May 2023 17:20:43 +0200
Message-Id: <20230505152046.6575-7-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

From: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>

Each supported hypervisor on x86 implements a struct x86_hyper_init to
define its init functions.  Define a new init_heki() entry point in
struct x86_hyper_init.  Hypervisors that support Heki must define this
init_heki() function.  Call the chosen hypervisor's init_heki() from
init_hypervisor_platform().

Create a heki_hypervisor structure that each hypervisor can fill
with its data and functions. This allows the Heki feature to work
in a hypervisor-agnostic way.

Declare and initialize a "heki_hypervisor" structure for KVM so KVM can
support Heki.  Define the init_heki() function for KVM.  In init_heki(),
set the hypervisor field in the generic "heki" structure to the KVM
"heki_hypervisor".  After this point, generic Heki code can access the
KVM Heki data and functions.

Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Co-developed-by: Mickaël Salaün <mic@digikod.net>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Link: https://lore.kernel.org/r/20230505152046.6575-7-mic@digikod.net
---
 arch/x86/include/asm/x86_init.h  |  2 +
 arch/x86/kernel/cpu/hypervisor.c |  1 +
 arch/x86/kernel/kvm.c            | 72 ++++++++++++++++++++++++++++++++
 arch/x86/kernel/x86_init.c       |  1 +
 arch/x86/kvm/Kconfig             |  1 +
 virt/heki/Kconfig                |  9 +++-
 virt/heki/heki.c                 |  6 ---
 7 files changed, 85 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index c1c8c581759d..0fc5041a66c6 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -119,6 +119,7 @@ struct x86_init_pci {
  * @msi_ext_dest_id:		MSI supports 15-bit APIC IDs
  * @init_mem_mapping:		setup early mappings during init_mem_mapping()
  * @init_after_bootmem:		guest init after boot allocator is finished
+ * @init_heki:			Hypervisor enforced kernel integrity
  */
 struct x86_hyper_init {
 	void (*init_platform)(void);
@@ -127,6 +128,7 @@ struct x86_hyper_init {
 	bool (*msi_ext_dest_id)(void);
 	void (*init_mem_mapping)(void);
 	void (*init_after_bootmem)(void);
+	void (*init_heki)(void);
 };
 
 /**
diff --git a/arch/x86/kernel/cpu/hypervisor.c b/arch/x86/kernel/cpu/hypervisor.c
index 553bfbfc3a1b..6085c8129e0c 100644
--- a/arch/x86/kernel/cpu/hypervisor.c
+++ b/arch/x86/kernel/cpu/hypervisor.c
@@ -106,4 +106,5 @@ void __init init_hypervisor_platform(void)
 
 	x86_hyper_type = h->type;
 	x86_init.hyper.init_platform();
+	x86_init.hyper.init_heki();
 }
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 1cceac5984da..e53cebdcf3ac 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -29,6 +29,7 @@
 #include <linux/syscore_ops.h>
 #include <linux/cc_platform.h>
 #include <linux/efi.h>
+#include <linux/heki.h>
 #include <asm/timer.h>
 #include <asm/cpu.h>
 #include <asm/traps.h>
@@ -866,6 +867,45 @@ static void __init kvm_guest_init(void)
 	hardlockup_detector_disable();
 }
 
+#ifdef CONFIG_HEKI
+
+static int kvm_protect_ranges(struct heki_pa_range *ranges, int num_ranges)
+{
+	size_t size;
+	long err;
+
+	WARN_ON(in_interrupt());
+
+	size = sizeof(ranges[0]) * num_ranges;
+	err = kvm_hypercall3(KVM_HC_LOCK_MEM_PAGE_RANGES, __pa(ranges), size, 0);
+	if (WARN(err, "Failed to enforce memory protection: %ld\n", err))
+		return err;
+
+	return 0;
+}
+
+extern unsigned long cr4_pinned_mask;
+
+/*
+ * TODO: Check SMP policy consistency, e.g. with
+ * this_cpu_read(cpu_tlbstate.cr4)
+ */
+static int kvm_lock_crs(void)
+{
+	unsigned long cr4;
+	int err;
+
+	err = kvm_hypercall2(KVM_HC_LOCK_CR_UPDATE, 0, X86_CR0_WP);
+	if (err)
+		return err;
+
+	cr4 = __read_cr4();
+	err = kvm_hypercall2(KVM_HC_LOCK_CR_UPDATE, 4, cr4 & cr4_pinned_mask);
+	return err;
+}
+
+#endif /* CONFIG_HEKI */
+
 static noinline uint32_t __kvm_cpuid_base(void)
 {
 	if (boot_cpu_data.cpuid_level < 0)
@@ -999,6 +1039,37 @@ static bool kvm_sev_es_hcall_finish(struct ghcb *ghcb, struct pt_regs *regs)
 }
 #endif
 
+#ifdef CONFIG_HEKI
+
+static struct heki_hypervisor kvm_heki_hypervisor = {
+	.protect_ranges = kvm_protect_ranges,
+	.lock_crs = kvm_lock_crs,
+};
+
+static void kvm_init_heki(void)
+{
+	long err;
+
+	if (!kvm_para_available())
+		/* Cannot make KVM hypercalls. */
+		return;
+
+	err = kvm_hypercall3(KVM_HC_LOCK_MEM_PAGE_RANGES, -1, -1, -1);
+	if (err == -KVM_ENOSYS)
+		/* Ignores host. */
+		return;
+
+	heki.hypervisor = &kvm_heki_hypervisor;
+}
+
+#else /* CONFIG_HEKI */
+
+static void kvm_init_heki(void)
+{
+}
+
+#endif /* CONFIG_HEKI */
+
 const __initconst struct hypervisor_x86 x86_hyper_kvm = {
 	.name				= "KVM",
 	.detect				= kvm_detect,
@@ -1007,6 +1078,7 @@ const __initconst struct hypervisor_x86 x86_hyper_kvm = {
 	.init.x2apic_available		= kvm_para_available,
 	.init.msi_ext_dest_id		= kvm_msi_ext_dest_id,
 	.init.init_platform		= kvm_init_platform,
+	.init.init_heki			= kvm_init_heki,
 #if defined(CONFIG_AMD_MEM_ENCRYPT)
 	.runtime.sev_es_hcall_prepare	= kvm_sev_es_hcall_prepare,
 	.runtime.sev_es_hcall_finish	= kvm_sev_es_hcall_finish,
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index ef80d361b463..0a023c24fcdb 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -114,6 +114,7 @@ struct x86_init_ops x86_init __initdata = {
 		.msi_ext_dest_id	= bool_x86_init_noop,
 		.init_mem_mapping	= x86_init_noop,
 		.init_after_bootmem	= x86_init_noop,
+		.init_heki		= x86_init_noop,
 	},
 
 	.acpi = {
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fbeaa9ddef59..ba355171ceeb 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -49,6 +49,7 @@ config KVM
 	select SRCU
 	select INTERVAL_TREE
 	select HAVE_KVM_PM_NOTIFIER if PM
+	select HYPERVISOR_SUPPORTS_HEKI
 	help
 	  Support hosting fully virtualized guest machines using hardware
 	  virtualization extensions.  You will need a fairly recent
diff --git a/virt/heki/Kconfig b/virt/heki/Kconfig
index 9858a827fe17..96f18ce03013 100644
--- a/virt/heki/Kconfig
+++ b/virt/heki/Kconfig
@@ -6,7 +6,7 @@
 config HEKI
 	bool "Hypervisor Enforced Kernel Integrity (Heki)"
 	default y
-	depends on !JUMP_LABEL && ARCH_SUPPORTS_HEKI
+	depends on !JUMP_LABEL && ARCH_SUPPORTS_HEKI && HYPERVISOR_SUPPORTS_HEKI
 	select KVM_EXTERNAL_WRITE_TRACKING if KVM
 	help
 	  This feature enhances guest virtual machine security by taking
@@ -20,3 +20,10 @@ config ARCH_SUPPORTS_HEKI
 	  An architecture should select this when it can successfully build
 	  and run with CONFIG_HEKI. That is, it should provide all of the
 	  architecture support required for the HEKI feature.
+
+config HYPERVISOR_SUPPORTS_HEKI
+	bool "Hypervisor support for Heki"
+	help
+	  A hypervisor should select this when it can successfully build
+	  and run with CONFIG_HEKI. That is, it should provide all of the
+	  hypervisor support required for the Heki feature.
diff --git a/virt/heki/heki.c b/virt/heki/heki.c
index c8cb1b84cceb..142b5dc98a2f 100644
--- a/virt/heki/heki.c
+++ b/virt/heki/heki.c
@@ -91,12 +91,6 @@ void heki_late_init(void)
 		return;
 	}
 
-	/*
-	 * Hypervisor support will be added in the future. When it is, the
-	 * hypervisor will be used to protect guest kernel memory and
-	 * control registers.
-	 */
-
 	if (!hypervisor) {
 		/* This happens for kernels running on bare metal as well. */
 		pr_warn("No hypervisor support\n");
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530412.826000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGM-0007vS-B0; Fri, 05 May 2023 15:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530412.826000; Fri, 05 May 2023 15:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGM-0007vL-7z; Fri, 05 May 2023 15:22:26 +0000
Received: by outflank-mailman (input) for mailman id 530412;
 Fri, 05 May 2023 15:21:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxFu-0007pR-Nd
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:21:59 +0000
Received: from smtp-42af.mail.infomaniak.ch (smtp-42af.mail.infomaniak.ch
 [84.16.66.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 91b3a163-eb58-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 17:21:55 +0200 (CEST)
Received: from smtp-2-0000.mail.infomaniak.ch (unknown [10.5.36.107])
 by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZD24pG9zMqLbT;
 Fri,  5 May 2023 17:21:54 +0200 (CEST)
Received: from unknown by smtp-2-0000.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZCy4M9gzMpqZ9; Fri,  5 May 2023 17:21:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91b3a163-eb58-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300114;
	bh=Qvc7SESfj28DhibUMOxF8XDC2Dv9Nk03svVhSexGCDo=;
	h=From:To:Cc:Subject:Date:From;
	b=sdjSaTf+L8uD2OtYxlVlBp2/RF9APLo7BFa1o6O4wpLqtYohS60n6C+Xb7KWY8oWN
	 1ThaIncVJDSPEINsOmIciOHlJQ5lHfvGTzMJwRq9cC3H70enX0j5qGVkDZEv5T8RHU
	 X7DMszBWczedvVBKFSwCEr6PDfJcgmy7Hfrc8dOE=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Date: Fri,  5 May 2023 17:20:37 +0200
Message-Id: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

Hi,

This patch series is a proof-of-concept that implements new KVM features
(extended page tracking, MBEC support, CR pinning) and defines a new API to
protect guest VMs. No VMM (e.g., QEMU) modification is required.

The main idea is that kernel self-protection mechanisms should be delegated
to a more privileged part of the system, in this case the hypervisor. It is
still the role of the guest kernel to request such restrictions according to
its configuration. The high-level security guarantees provided by the
hypervisor are semantically the same as a subset of those the kernel already
enforces on itself (CR pinning and memory page table protections), but with
much stronger assurance.

We'd like the mainline kernel to support such hardening features leveraging
virtualization. We're looking for reviews and comments that can help mainline
these two parts: the KVM implementation and the guest kernel API layer
designed to support different hypervisors. The struct heki_hypervisor makes
it possible to plug in different backend implementations, which are
initialized with the heki_early_init() and heki_late_init() calls. This RFC
is an initial call for collaboration. There is a lot to do on the hypervisor,
guest kernel, and VMM sides.

We took inspiration from previous patches, mainly the KVMI [1] [2] and KVM
CR-pinning [3] series, revamped and simplified relevant parts to fit well with
our goal, added support for MBEC to enable a deny-by-default kernel execution
policy (e.g., write xor execute), added two hypercalls, and created a kernel
API for VMs to request protection in a generic way that can be leveraged by any
hypervisor.

This proof-of-concept is named Hypervisor-Enforced Kernel Integrity (Heki),
which reflects the goal of empowering guest kernels to protect themselves.
This name is new to the kernel, and it makes it easy to identify the new
code required for this set of features.

This patch series is based on Linux 6.2 and requires the host to support
MBEC. This can easily be checked with:

  grep ept_mode_based_exec /proc/cpuinfo

You can test it by enabling CONFIG_HEKI, CONFIG_HEKI_TEST,
CONFIG_KUNIT_DEFAULT_ENABLED, and adding the heki_test=N boot argument
to the guest, as explained in the last patch. Another way to test it is
to try to load a kernel module in the guest: you'll see KVM creating
synthetic page faults. This only works with a bare-metal machine as the
KVM host; nested virtualization is not supported yet.

# Threat model

The initial threat model is a malicious user space process exploiting a kernel
vulnerability to gain more privileges or to bypass the access-control system.
An extended threat model could include attacks coming from network or storage
data (e.g., malformed network packet, inconsistent drive content).

Considering all potential ways to compromise a kernel, Heki's goal is to
harden a sane kernel before a runtime attack occurs, to make such an attack
more difficult and potentially to make it fail. We consider the kernel
itself to be partially malicious during its lifetime, e.g., because of a ROP
attack that could disable kernel self-protection mechanisms and make kernel
exploitation much easier. Indeed, an exploit is often split into several
stages, each bypassing some security measures. The guarantee that no new
kernel executable code can be added increases the cost of an attack,
hopefully to the point that it is not worth it.

To protect against persistent attacks, complementary security mechanisms should
be used (e.g., kernel module signing, IMA, IPE, Lockdown).

# Prerequisites

For this set of features to be useful, guest kernels must be trusted by the
VM owners at boot time, before launching any user space processes or
receiving potentially malicious network packets. A security mechanism is
therefore required to provide or verify this initial trust (e.g., secure
boot, kernel module signing).

# How does it work?

This implementation mainly leverages KVM capabilities to control the Second
Level Address Translation (also called Two-Dimensional Paging, e.g., Intel's
EPT or AMD's RVI/NPT) and Mode-Based Execution Control (Intel's MBEC),
introduced with the Kaby Lake (7th generation) architecture. This makes it
possible to set permissions on memory pages in a way that complements the
memory permissions managed by the guest kernel. Once these permissions are
set, they are locked and there is no way back.

A first hypercall, KVM_HC_LOCK_MEM_PAGE_RANGES, enables the guest kernel to
lock a set of its memory page ranges with either the HEKI_ATTR_MEM_NOWRITE
or the HEKI_ATTR_MEM_EXEC attribute. The first one denies write access to a
specific set of pages (deny-list approach), and the second only allows
kernel execution for a set of pages (allow-list approach).
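As a concrete illustration, here is a minimal userspace C sketch of how a
guest could build the argument array for this hypercall. The structure
layout mirrors the one added by this series, but the attribute bit values
and the heki_fill_range()/heki_ranges_bytes() helpers are illustrative
assumptions, not code from the patches; in the guest kernel, the call itself
would go through kvm_hypercall3() with the array's guest physical address.

```c
#include <stddef.h>
#include <stdint.h>

/* Mirrors struct heki_pa_range from this series; a range covers
 * gfn_start..gfn_end inclusive. */
typedef uint64_t gfn_t;

struct heki_pa_range {
	gfn_t gfn_start;
	gfn_t gfn_end;
	uint64_t attributes;
};

/* Assumed bit positions, for illustration only. */
#define HEKI_ATTR_MEM_NOWRITE	(1ULL << 0)
#define HEKI_ATTR_MEM_EXEC	(1ULL << 1)

/* Fill one range entry; returns 0 on success, -1 on invalid input
 * (reversed bounds or empty attribute set). */
static int heki_fill_range(struct heki_pa_range *range, gfn_t start,
			   gfn_t end, uint64_t attributes)
{
	if (start > end || attributes == 0)
		return -1;
	range->gfn_start = start;
	range->gfn_end = end;
	range->attributes = attributes;
	return 0;
}

/* Byte size to pass as the hypercall's second argument (a1): the size
 * of the array in bytes, not the number of elements. */
static size_t heki_ranges_bytes(size_t nr_ranges)
{
	return nr_ranges * sizeof(struct heki_pa_range);
}
```

In the guest, the resulting call would then look like
kvm_hypercall3(KVM_HC_LOCK_MEM_PAGE_RANGES, __pa(ranges),
heki_ranges_bytes(n), 0), with the third argument held at zero as required.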

The current implementation sets the whole kernel's .rodata (i.e., any const
or __ro_after_init variables, which includes critical security data such as
LSM parameters) and .text sections as non-writable, and the .text section is
the only one where kernel execution is allowed. This is possible thanks to
the new MBEC support also brought by this series (otherwise the vDSO would
have to be executable). Thanks to this hardware support (VT-x, EPT, and
MBEC), the performance impact of such guest protection is negligible.

A second hypercall, KVM_HC_LOCK_CR_UPDATE, enables a guest to pin some of
its CPU control register flags (e.g., X86_CR0_WP, X86_CR4_SMEP,
X86_CR4_SMAP), which is another complementary hardening mechanism.

Heki can be enabled with the heki=1 boot command argument.

# Similar implementations

Here is a non-exhaustive list of similar implementations that we looked at
and borrowed ideas from. Linux mainline doesn't support such security
features yet; let's change that!

Windows's Virtualization-Based Security is a proprietary technology that
provides a superset of this kind of security mechanism. It relies on Hyper-V
and Virtual Trust Levels, which make it possible to run a light and secure
VM that enforces restrictions on a full guest VM. This includes several
components such as HVCI, which is in charge of code authenticity, and
HyperGuard, which monitors and protects kernel code and data.

Samsung's Real-time Kernel Protection (RKP) and Huawei Hypervisor Execution
Environment (HHEE) rely on proprietary hypervisors to protect some Android
devices. They monitor critical kernel data (e.g., page tables, credentials,
selinux_enforcing).

The iOS Kernel Patch Protection is a proprietary solution that relies on a
secure enclave (dedicated hardware component) to monitor and protect critical
parts of the kernel.

Bitdefender's Hypervisor Memory Introspection (HVMI) is an open-source (but
out-of-tree) set of components leveraging virtualization. The HVMI
implementation is very complex, and this approach implies potential
semantic-gap issues (i.e., kernel data structures may change from one
version to another).

Linux Kernel Runtime Guard is an open-source kernel module that can detect
some illegitimate modifications of kernel data. Because it runs in the same
kernel as the one being compromised, an attacker could also bypass or
disable these checks.

Intel's Virtualization Based Hardening [4] [5] is an open-source
proof-of-concept of a thin hypervisor dedicated to guest protection. As such,
it cannot be used to manage several VMs.

# Similar Linux patches

The VM introspection [1] [2] patch series proposed a set of features to
put probes and introspect VMs for debugging and security reasons. We
changed and included the prewrite page tracking and the fault_gva parts.
Heki is much simpler because it focuses on guest hardening, not
introspection.

Paravirtualized Control Register pinning [3] added a set of KVM ioctls to
restrict updates to some control register flags. Heki doesn't implement
such a user space interface, but only a dedicated hypercall to lock such
registers. A superset of these flags is configurable with Heki.

The Hypervisor Based Integrity patches [6] [7] only contain a generic IPC
mechanism (a KVM_HC_UCALL hypercall) to request protection from the VMM.
The idea was to extend the KVM_SET_USER_MEMORY_REGION ioctl to support more
permissions than read-only.

# Current limitations

The main limitation of this patch series is that the enforced permissions
are static. This is not an issue for kernels without modules, but it needs
to be addressed. Mechanisms that dynamically impact kernel executable
memory are not handled for now (e.g., kernel modules, tracepoints, eBPF
JIT), and such code will need to be authenticated. Because the hypervisor
is highly privileged and critical to the security of all the VMs, we don't
want to implement a code authentication mechanism in the hypervisor itself,
but rather delegate this verification to something much less privileged. We
are thinking of two ways to solve this: implement this verification in the
VMM, or spawn a dedicated special VM (similar to Windows's VBS). There are
pros and cons to each approach: complexity, verification code ownership
(guest's or VMM's), and access to guest memory (i.e., confidential
computing).

Because the guest's virtual address translation is not protected by the
hypervisor, a compromised kernel could map existing physical pages to
arbitrary virtual addresses. Intel's new Hypervisor-Managed Linear Address
Translation (HLAT) [8] could be used to extend the current protection and
cover this case.

ROP is not covered by this patch series. Guest kernels can still jump to
arbitrary executable pages according to their control-flow integrity
protection.

# Future work

We think this kind of restriction can be leveraged to log attempts at
forbidden actions. Forwarding such signals to the VMM could help improve
attack detection.

Giving visibility to the VMM would also make VM migration possible.

New dynamic restrictions could extend the protected data to include
security-sensitive state such as LSM states, seccomp filters, and keyrings.
This requires support outside of the hypervisor.

An execute-only mode could also be useful (cf. XOM for KVM [9] [10]).

Register pinning could be extended (e.g., to MSRs).

Being able to protect nested guests might be possible but we need to figure out
the potential security implications.

Protecting the host would be useful, but that doesn't really fit with the KVM
model. The Protected KVM project is a first step to help in this direction
[11].

We only tested this with an Intel CPU, but this approach should work the same
with an AMD CPU starting with the Zen 2 generation and their Guest Mode Execute
Trap (GMET) capability.

We also kept some TODOs to highlight missing checks and code sharing issues,
and some pr_warn() calls to help understand how it works. Tests need to be
improved (e.g., invalid hypercall arguments).

We'll present this work at the Linux Security Summit North America next week.

[1] https://lore.kernel.org/all/20211006173113.26445-1-alazar@bitdefender.com/
[2] https://www.linux-kvm.org/images/7/72/KVMForum2017_Introspection.pdf
[3] https://lore.kernel.org/all/20200617190757.27081-1-john.s.andersen@intel.com/
[4] https://github.com/intel/vbh
[5] https://sched.co/TmwN
[6] https://sched.co/eE3f
[7] https://lore.kernel.org/all/20200501185147.208192-1-yuanyu@google.com/
[8] https://sched.co/eE4F
[9] https://lore.kernel.org/kvm/20191003212400.31130-1-rick.p.edgecombe@intel.com/
[10] https://lpc.events/event/4/contributions/283/
[11] https://sched.co/eE24

Please reach out to us by replying to this thread; we're looking for
people to join and collaborate on this project!

Regards,

Madhavan T. Venkataraman (2):
  virt: Implement Heki common code
  KVM: x86: Add Heki hypervisor support

Mickaël Salaün (7):
  KVM: x86: Add kvm_x86_ops.fault_gva()
  KVM: x86/mmu: Add support for prewrite page tracking
  KVM: x86: Add new hypercall to set EPT permissions
  KVM: x86: Add new hypercall to lock control registers
  KVM: VMX: Add MBEC support
  KVM: x86/mmu: Enable guests to lock themselves thanks to MBEC
  virt: Add Heki KUnit tests

 Documentation/virt/kvm/x86/hypercalls.rst |  34 +++
 Kconfig                                   |   2 +
 arch/x86/Kconfig                          |   1 +
 arch/x86/include/asm/kvm-x86-ops.h        |   1 +
 arch/x86/include/asm/kvm_host.h           |   2 +
 arch/x86/include/asm/kvm_page_track.h     |  12 +
 arch/x86/include/asm/sections.h           |   4 +
 arch/x86/include/asm/vmx.h                |  11 +-
 arch/x86/include/asm/x86_init.h           |   2 +
 arch/x86/kernel/cpu/common.c              |   2 +-
 arch/x86/kernel/cpu/hypervisor.c          |   1 +
 arch/x86/kernel/kvm.c                     |  72 +++++
 arch/x86/kernel/setup.c                   |  49 +++
 arch/x86/kernel/x86_init.c                |   1 +
 arch/x86/kvm/Kconfig                      |   1 +
 arch/x86/kvm/mmu.h                        |   3 +-
 arch/x86/kvm/mmu/mmu.c                    | 105 ++++++-
 arch/x86/kvm/mmu/mmutrace.h               |  11 +-
 arch/x86/kvm/mmu/page_track.c             |  33 +-
 arch/x86/kvm/mmu/paging_tmpl.h            |  16 +-
 arch/x86/kvm/mmu/spte.c                   |  29 +-
 arch/x86/kvm/mmu/spte.h                   |  15 +-
 arch/x86/kvm/mmu/tdp_mmu.c                |  73 +++++
 arch/x86/kvm/mmu/tdp_mmu.h                |   4 +
 arch/x86/kvm/svm/svm.c                    |   9 +
 arch/x86/kvm/vmx/capabilities.h           |   7 +
 arch/x86/kvm/vmx/nested.c                 |   7 +
 arch/x86/kvm/vmx/vmx.c                    |  48 ++-
 arch/x86/kvm/vmx/vmx.h                    |   1 +
 arch/x86/kvm/x86.c                        | 352 +++++++++++++++++++++-
 arch/x86/kvm/x86.h                        |  23 ++
 include/linux/heki.h                      |  90 ++++++
 include/linux/kvm_host.h                  |  20 ++
 include/uapi/linux/kvm_para.h             |   2 +
 init/main.c                               |   3 +
 virt/Makefile                             |   1 +
 virt/heki/Kconfig                         |  41 +++
 virt/heki/Makefile                        |   3 +
 virt/heki/heki.c                          | 321 ++++++++++++++++++++
 virt/kvm/kvm_main.c                       |   5 +
 40 files changed, 1377 insertions(+), 40 deletions(-)
 create mode 100644 include/linux/heki.h
 create mode 100644 virt/heki/Kconfig
 create mode 100644 virt/heki/Makefile
 create mode 100644 virt/heki/heki.c


base-commit: c9c3395d5e3dcc6daee66c6908354d47bf98cb0c
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530417.826019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGN-0008FC-BA; Fri, 05 May 2023 15:22:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530417.826019; Fri, 05 May 2023 15:22:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGN-0008Cp-4Z; Fri, 05 May 2023 15:22:27 +0000
Received: by outflank-mailman (input) for mailman id 530417;
 Fri, 05 May 2023 15:22:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxFy-0007pX-KZ
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:22:02 +0000
Received: from smtp-190e.mail.infomaniak.ch (smtp-190e.mail.infomaniak.ch
 [185.125.25.14]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9572491b-eb58-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 17:22:01 +0200 (CEST)
Received: from smtp-3-0001.mail.infomaniak.ch (unknown [10.4.36.108])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZD938BPzMq1NC;
 Fri,  5 May 2023 17:22:01 +0200 (CEST)
Received: from unknown by smtp-3-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZD85V0tzMpt9P; Fri,  5 May 2023 17:22:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9572491b-eb58-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300121;
	bh=dkHjbqpuvSBmD/ZCnBFFFKsFtN1g9PxpijWtwL2weO4=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=FR+pYomIgtrTj0WI3JRDUSbHVrdlibfnwQU30RKu6bstMqwIWNnbYH/f3efnA/++Y
	 PVW9ePmH83Lzuo5sF3eA8Zxiu6x5s8zasun6j41FzXtbI7LfAbR7ieVBarIcdhpBG1
	 xrxpuSnVobX97umCLQuu/zofOq0RUrhcLMOWIHuk=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 4/9] KVM: x86: Add new hypercall to set EPT permissions
Date: Fri,  5 May 2023 17:20:41 +0200
Message-Id: <20230505152046.6575-5-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

Add a new KVM_HC_LOCK_MEM_PAGE_RANGES hypercall that enables a guest to
set EPT permissions on a set of page ranges.

This hypercall takes three arguments.  The first is the GPA of an array
of struct heki_pa_range.  The second argument is the size of this array
in bytes, not its number of elements.  The third argument is reserved
for future use: it is designed to contain optional flags (e.g., to
change the array type), but must be zero for now.

The struct heki_pa_range contains a GFN that starts the range and
another that indicates the last (inclusive) page.  A bit field of
attributes is tied to this range.

The HEKI_ATTR_MEM_NOWRITE attribute is interpreted as a removal of the
EPT write permission, denying any write access from the guest for its
whole lifetime.  We chose "nowrite" because "read-only" excludes
execution, because it follows a deny-list approach, and most importantly
because it is an incremental addition to the status quo (i.e.,
everything is allowed from the TDP point of view).  This is implemented
thanks to the KVM_PAGE_TRACK_PREWRITE mode previously introduced.

The page range recording is currently implemented with a static array
of 16 elements to keep it simple, but this mechanism will be made
dynamic in a follow-up.
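For reviewers, the argument checks described above can be modeled in plain
userspace C.  This is a simplified sketch of the validation performed by
heki_lock_mem_page_ranges(), not the kernel code itself: the real function
returns -KVM_E2BIG and -KVM_EINVAL, reads the array with kvm_read_guest(),
and additionally rejects GFNs that overflow the guest physical address
width; the attribute bit value below is an assumption.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified model of the argument validation in this patch. */
struct heki_pa_range {
	uint64_t gfn_start;
	uint64_t gfn_end;
	uint64_t attributes;
};

#define HEKI_GFN_MAX		16		/* static array size here */
#define HEKI_ATTR_MEM_NOWRITE	(1ULL << 0)	/* illustrative value */

enum heki_check { HEKI_OK = 0, HEKI_ERR_INVAL = -1, HEKI_ERR_2BIG = -2 };

/* The total byte size must not describe more than HEKI_GFN_MAX elements
 * and must be a multiple of the element size. */
static enum heki_check heki_check_size(size_t mem_ranges_size)
{
	if (mem_ranges_size > sizeof(struct heki_pa_range) * HEKI_GFN_MAX)
		return HEKI_ERR_2BIG;
	if (mem_ranges_size % sizeof(struct heki_pa_range) != 0)
		return HEKI_ERR_INVAL;
	return HEKI_OK;
}

/* Each element needs ordered bounds and a non-empty subset of the
 * supported attributes. */
static enum heki_check heki_check_range(const struct heki_pa_range *range)
{
	const uint64_t mask = HEKI_ATTR_MEM_NOWRITE;

	if (range->gfn_start > range->gfn_end)
		return HEKI_ERR_INVAL;
	if (range->attributes == 0 || (range->attributes & ~mask))
		return HEKI_ERR_INVAL;
	return HEKI_OK;
}
```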

Define a kernel command line parameter "heki" to turn the feature on or
off.  By default, Heki is turned on.

Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20230505152046.6575-5-mic@digikod.net
---
 Documentation/virt/kvm/x86/hypercalls.rst |  17 +++
 arch/x86/kvm/x86.c                        | 169 ++++++++++++++++++++++
 include/linux/kvm_host.h                  |  13 ++
 include/uapi/linux/kvm_para.h             |   1 +
 virt/kvm/kvm_main.c                       |   4 +
 5 files changed, 204 insertions(+)

diff --git a/Documentation/virt/kvm/x86/hypercalls.rst b/Documentation/virt/kvm/x86/hypercalls.rst
index 10db7924720f..0ec79cc77f53 100644
--- a/Documentation/virt/kvm/x86/hypercalls.rst
+++ b/Documentation/virt/kvm/x86/hypercalls.rst
@@ -190,3 +190,20 @@ the KVM_CAP_EXIT_HYPERCALL capability. Userspace must enable that capability
 before advertising KVM_FEATURE_HC_MAP_GPA_RANGE in the guest CPUID.  In
 addition, if the guest supports KVM_FEATURE_MIGRATION_CONTROL, userspace
 must also set up an MSR filter to process writes to MSR_KVM_MIGRATION_CONTROL.
+
+9. KVM_HC_LOCK_MEM_PAGE_RANGES
+------------------------------
+
+:Architecture: x86
+:Status: active
+:Purpose: Request memory page ranges to be restricted.
+
+- a0: physical address of a struct heki_pa_range array
+- a1: size of the array in bytes
+- a2: optional flags, must be 0 for now
+
+The hypercall lets a guest request that memory permissions be removed for
+itself, identified by a set of physical page ranges (GFNs).  The
+HEKI_ATTR_MEM_NOWRITE attribute forbids the guest from writing to these pages.
+
+Returns 0 on success or a KVM error code otherwise.
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd05f42c9913..ffab64d08de3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -59,6 +59,7 @@
 #include <linux/mem_encrypt.h>
 #include <linux/entry-kvm.h>
 #include <linux/suspend.h>
+#include <linux/heki.h>
 
 #include <trace/events/kvm.h>
 
@@ -9596,6 +9597,161 @@ static void kvm_sched_yield(struct kvm_vcpu *vcpu, unsigned long dest_id)
 	return;
 }
 
+#ifdef CONFIG_HEKI
+
+static int heki_page_track_add(struct kvm *const kvm, const gfn_t gfn,
+			       const enum kvm_page_track_mode mode)
+{
+	struct kvm_memory_slot *slot;
+	int idx;
+
+	BUILD_BUG_ON(!IS_ENABLED(CONFIG_KVM_EXTERNAL_WRITE_TRACKING));
+
+	idx = srcu_read_lock(&kvm->srcu);
+	slot = gfn_to_memslot(kvm, gfn);
+	if (!slot) {
+		srcu_read_unlock(&kvm->srcu, idx);
+		return -EINVAL;
+	}
+
+	write_lock(&kvm->mmu_lock);
+	kvm_slot_page_track_add_page(kvm, slot, gfn, mode);
+	write_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+	return 0;
+}
+
+static bool
+heki_page_track_prewrite(struct kvm_vcpu *const vcpu, const gpa_t gpa,
+			 struct kvm_page_track_notifier_node *const node)
+{
+	const gfn_t gfn = gpa_to_gfn(gpa);
+	const struct kvm *const kvm = vcpu->kvm;
+	size_t i;
+
+	/* Check whether this is one of our tracked pages or someone else's. */
+	for (i = 0; i < HEKI_GFN_MAX; i++) {
+		if (gfn >= kvm->heki_gfn_no_write[i].start &&
+		    gfn <= kvm->heki_gfn_no_write[i].end)
+			return false;
+	}
+
+	return true;
+}
+
+static int kvm_heki_init_vm(struct kvm *const kvm)
+{
+	struct kvm_page_track_notifier_node *const node =
+		kzalloc(sizeof(*node), GFP_KERNEL);
+
+	if (!node)
+		return -ENOMEM;
+
+	node->track_prewrite = heki_page_track_prewrite;
+	kvm_page_track_register_notifier(kvm, node);
+	return 0;
+}
+
+static bool is_gfn_overflow(unsigned long val)
+{
+	const gfn_t gfn_mask = gpa_to_gfn(~0);
+
+	return (val | gfn_mask) != gfn_mask;
+}
+
+#define HEKI_PA_RANGE_MAX_SIZE	(sizeof(struct heki_pa_range) * HEKI_GFN_MAX)
+
+static int heki_lock_mem_page_ranges(struct kvm *const kvm, gpa_t mem_ranges,
+				     unsigned long mem_ranges_size)
+{
+	int err;
+	size_t i, ranges_num;
+	struct heki_pa_range *ranges;
+
+	if (mem_ranges_size > HEKI_PA_RANGE_MAX_SIZE)
+		return -KVM_E2BIG;
+
+	if ((mem_ranges_size % sizeof(struct heki_pa_range)) != 0)
+		return -KVM_EINVAL;
+
+	ranges = kzalloc(mem_ranges_size, GFP_KERNEL);
+	if (!ranges)
+		return -KVM_E2BIG;
+
+	err = kvm_read_guest(kvm, mem_ranges, ranges, mem_ranges_size);
+	if (err) {
+		err = -KVM_EFAULT;
+		goto out_free_ranges;
+	}
+
+	ranges_num = mem_ranges_size / sizeof(struct heki_pa_range);
+	for (i = 0; i < ranges_num; i++) {
+		const u64 attributes_mask = HEKI_ATTR_MEM_NOWRITE;
+		const gfn_t gfn_start = ranges[i].gfn_start;
+		const gfn_t gfn_end = ranges[i].gfn_end;
+		const u64 attributes = ranges[i].attributes;
+
+		if (is_gfn_overflow(ranges[i].gfn_start)) {
+			err = -KVM_EINVAL;
+			goto out_free_ranges;
+		}
+		if (is_gfn_overflow(ranges[i].gfn_end)) {
+			err = -KVM_EINVAL;
+			goto out_free_ranges;
+		}
+		if (ranges[i].gfn_start > ranges[i].gfn_end) {
+			err = -KVM_EINVAL;
+			goto out_free_ranges;
+		}
+		if (!ranges[i].attributes) {
+			err = -KVM_EINVAL;
+			goto out_free_ranges;
+		}
+		if ((ranges[i].attributes | attributes_mask) !=
+		    attributes_mask) {
+			err = -KVM_EINVAL;
+			goto out_free_ranges;
+		}
+
+		if (attributes & HEKI_ATTR_MEM_NOWRITE) {
+			unsigned long gfn;
+			size_t gfn_i;
+
+			gfn_i = atomic_dec_if_positive(
+				&kvm->heki_gfn_no_write_num);
+			if (gfn_i == 0) {
+				err = -KVM_E2BIG;
+				goto out_free_ranges;
+			}
+
+			gfn_i--;
+			kvm->heki_gfn_no_write[gfn_i].start = gfn_start;
+			kvm->heki_gfn_no_write[gfn_i].end = gfn_end;
+
+			for (gfn = gfn_start; gfn <= gfn_end; gfn++)
+				WARN_ON_ONCE(heki_page_track_add(
+					kvm, gfn, KVM_PAGE_TRACK_PREWRITE));
+		}
+
+		pr_warn("heki-kvm: Locking GFN 0x%llx-0x%llx with %s\n",
+			gfn_start, gfn_end,
+			(attributes & HEKI_ATTR_MEM_NOWRITE) ? "[nowrite]" : "");
+	}
+
+out_free_ranges:
+	kfree(ranges);
+	return err;
+}
+
+#else /* CONFIG_HEKI */
+
+static int kvm_heki_init_vm(struct kvm *const kvm)
+{
+	return 0;
+}
+
+#endif /* CONFIG_HEKI */
+
 static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
 {
 	u64 ret = vcpu->run->hypercall.ret;
@@ -9694,6 +9850,15 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		vcpu->arch.complete_userspace_io = complete_hypercall_exit;
 		return 0;
 	}
+#ifdef CONFIG_HEKI
+	case KVM_HC_LOCK_MEM_PAGE_RANGES:
+		/* No flags for now. */
+		if (a2)
+			ret = -KVM_EINVAL;
+		else
+			ret = heki_lock_mem_page_ranges(vcpu->kvm, a0, a1);
+		break;
+#endif /* CONFIG_HEKI */
 	default:
 		ret = -KVM_ENOSYS;
 		break;
@@ -12126,6 +12291,10 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (ret)
 		goto out_page_track;
 
+	ret = kvm_heki_init_vm(kvm);
+	if (ret)
+		goto out_page_track;
+
 	ret = static_call(kvm_x86_vm_init)(kvm);
 	if (ret)
 		goto out_uninit_mmu;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 4f26b244f6d0..39a1bdc2ba42 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -699,6 +699,13 @@ struct kvm_memslots {
 	int node_idx;
 };
 
+#ifdef CONFIG_HEKI
+struct heki_gfn_range {
+	gfn_t start;
+	gfn_t end;
+};
+#endif /* CONFIG_HEKI */
+
 struct kvm {
 #ifdef KVM_HAVE_MMU_RWLOCK
 	rwlock_t mmu_lock;
@@ -801,6 +808,12 @@ struct kvm {
 	bool vm_bugged;
 	bool vm_dead;
 
+#ifdef CONFIG_HEKI
+#define HEKI_GFN_MAX 16
+	atomic_t heki_gfn_no_write_num;
+	struct heki_gfn_range heki_gfn_no_write[HEKI_GFN_MAX];
+#endif /* CONFIG_HEKI */
+
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
 	struct notifier_block pm_notifier;
 #endif
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 960c7e93d1a9..d7512a10880e 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -30,6 +30,7 @@
 #define KVM_HC_SEND_IPI		10
 #define KVM_HC_SCHED_YIELD		11
 #define KVM_HC_MAP_GPA_RANGE		12
+#define KVM_HC_LOCK_MEM_PAGE_RANGES	13
 
 /*
  * hypercalls use architecture specific
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 9c60384b5ae0..4aea936dfe73 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1230,6 +1230,10 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
 	list_add(&kvm->vm_list, &vm_list);
 	mutex_unlock(&kvm_lock);
 
+#ifdef CONFIG_HEKI
+	atomic_set(&kvm->heki_gfn_no_write_num, HEKI_GFN_MAX + 1);
+#endif /* CONFIG_HEKI */
+
 	preempt_notifier_inc();
 	kvm_init_pm_notifier(kvm);
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530426.826057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGQ-0000eJ-9p; Fri, 05 May 2023 15:22:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530426.826057; Fri, 05 May 2023 15:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGP-0000aA-FV; Fri, 05 May 2023 15:22:29 +0000
Received: by outflank-mailman (input) for mailman id 530426;
 Fri, 05 May 2023 15:22:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxG2-0007pX-5z
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:22:06 +0000
Received: from smtp-8fae.mail.infomaniak.ch (smtp-8fae.mail.infomaniak.ch
 [83.166.143.174]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 97796bb4-eb58-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 17:22:05 +0200 (CEST)
Received: from smtp-3-0001.mail.infomaniak.ch (unknown [10.4.36.108])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZDD5yVJzMqDN2;
 Fri,  5 May 2023 17:22:04 +0200 (CEST)
Received: from unknown by smtp-3-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZDD0z6dzMpxBc; Fri,  5 May 2023 17:22:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97796bb4-eb58-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300124;
	bh=numFKFwerBzb1GFmju5PzfUy3LvlXvoHGRlXH+KNT+c=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=aY8puWEd6eYwOhiBC5f3v2W8STDFR1ws5t4KGxnuMkV/veHF5dk9ikWYLtZACCzKL
	 ZZHQs/vJJkdeZe4WGYydF7qoNq+EgiNGsn5zsfWYHRSGNlObVxnWCaLzSzlH/rsRqO
	 0NTmO3IcYiupEfrP7vmngHe/c3dwNtOvqClQ6aug=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 7/9] KVM: VMX: Add MBEC support
Date: Fri,  5 May 2023 17:20:44 +0200
Message-Id: <20230505152046.6575-8-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

This change adds support for VMX_FEATURE_MODE_BASED_EPT_EXEC (named
ept_mode_based_exec in /proc/cpuinfo and MBEC elsewhere), which makes it
possible to use separate EPT execution bits for supervisor and user
accesses.  It changes the semantics of VMX_EPT_EXECUTABLE_MASK from
global execution to kernel execution, and uses the
VMX_EPT_USER_EXECUTABLE_MASK bit to identify user execution.

The main use case is to be able to restrict kernel execution while
ignoring user space execution from the hypervisor point of view.
Indeed, user space execution can already be restricted by the guest
kernel.

This change enables MBEC but doesn't change the default configuration,
which is to allow execution for all guest memory.  However, the next
commit leverages MBEC to restrict kernel memory pages.

MBEC can be configured with the new "mbec" module parameter, enabled by
default.  However, MBEC is disabled for L1 and L2 for now.

Replace EPT_VIOLATION_RWX_MASK (3 bits) with 4 dedicated
EPT_VIOLATION_READ, EPT_VIOLATION_WRITE, EPT_VIOLATION_KERNEL_INSTR, and
EPT_VIOLATION_USER_INSTR bits.

From the Intel 64 and IA-32 Architectures Software Developer's Manual,
Volume 3C (System Programming Guide), Part 3:

SECONDARY_EXEC_MODE_BASED_EPT_EXEC (bit 22):
If either the "unrestricted guest" VM-execution control or the
"mode-based execute control for EPT" VM-execution control is 1, the
"enable EPT" VM-execution control must also be 1.

EPT_VIOLATION_KERNEL_INSTR_BIT (bit 5):
The logical-AND of bit 2 in the EPT paging-structure entries used to
translate the guest-physical address of the access causing the EPT
violation.  If the "mode-based execute control for EPT" VM-execution
control is 0, this indicates whether the guest-physical address was
executable. If that control is 1, this indicates whether the
guest-physical address was executable for supervisor-mode linear
addresses.

EPT_VIOLATION_USER_INSTR_BIT (bit 6):
If the "mode-based execute control" VM-execution control is 0, the value
of this bit is undefined. If that control is 1, this bit is the
logical-AND of bit 10 in the EPT paging-structure entries used to
translate the guest-physical address of the access causing the EPT
violation. In this case, it indicates whether the guest-physical address
was executable for user-mode linear addresses.

PT_USER_EXEC_MASK (bit 10):
Execute access for user-mode linear addresses. If the "mode-based
execute control for EPT" VM-execution control is 1, indicates whether
instruction fetches are allowed from user-mode linear addresses in the
512-GByte region controlled by this entry. If that control is 0, this
bit is ignored.
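For illustration, the exit-qualification bits described above can be
decoded in plain C as sketched below.  This is a standalone userspace
sketch that mirrors the bit positions defined in this patch
(arch/x86/include/asm/vmx.h); it is not the kernel's actual
handle_ept_violation() code.

```c
#include <assert.h>
#include <stdbool.h>

/* Bit positions as defined in this patch (arch/x86/include/asm/vmx.h). */
#define EPT_VIOLATION_READ_BIT          3
#define EPT_VIOLATION_WRITE_BIT         4
#define EPT_VIOLATION_KERNEL_INSTR_BIT  5
#define EPT_VIOLATION_USER_INSTR_BIT    6

struct ept_perms {
	bool read;
	bool write;
	bool kernel_exec;	/* supervisor-mode execute (bit 2 of EPTEs) */
	bool user_exec;		/* user-mode execute (bit 10, MBEC only) */
};

/* Decode the EPTE permission bits reported by an EPT-violation exit. */
static struct ept_perms decode_ept_perms(unsigned long exit_qualification)
{
	struct ept_perms p = {
		.read        = exit_qualification & (1UL << EPT_VIOLATION_READ_BIT),
		.write       = exit_qualification & (1UL << EPT_VIOLATION_WRITE_BIT),
		.kernel_exec = exit_qualification & (1UL << EPT_VIOLATION_KERNEL_INSTR_BIT),
		.user_exec   = exit_qualification & (1UL << EPT_VIOLATION_USER_INSTR_BIT),
	};
	return p;
}
```

Note that with MBEC disabled, bit 6 (user_exec) is undefined per the SDM
and must be ignored, which is why handle_ept_violation() only folds it
into the present check when enable_mbec is set.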

Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20230505152046.6575-8-mic@digikod.net
---
 arch/x86/include/asm/vmx.h      | 11 +++++++++--
 arch/x86/kvm/mmu.h              |  3 ++-
 arch/x86/kvm/mmu/mmu.c          |  6 +++++-
 arch/x86/kvm/mmu/paging_tmpl.h  | 16 ++++++++++++++--
 arch/x86/kvm/mmu/spte.c         |  4 +++-
 arch/x86/kvm/vmx/capabilities.h |  7 +++++++
 arch/x86/kvm/vmx/nested.c       |  7 +++++++
 arch/x86/kvm/vmx/vmx.c          | 28 +++++++++++++++++++++++++---
 arch/x86/kvm/vmx/vmx.h          |  1 +
 9 files changed, 73 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 498dc600bd5c..452e7d153832 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -511,6 +511,7 @@ enum vmcs_field {
 #define VMX_EPT_IPAT_BIT    			(1ull << 6)
 #define VMX_EPT_ACCESS_BIT			(1ull << 8)
 #define VMX_EPT_DIRTY_BIT			(1ull << 9)
+#define VMX_EPT_USER_EXECUTABLE_MASK		(1ull << 10)
 #define VMX_EPT_RWX_MASK                        (VMX_EPT_READABLE_MASK |       \
 						 VMX_EPT_WRITABLE_MASK |       \
 						 VMX_EPT_EXECUTABLE_MASK)
@@ -556,13 +557,19 @@ enum vm_entry_failure_code {
 #define EPT_VIOLATION_ACC_READ_BIT	0
 #define EPT_VIOLATION_ACC_WRITE_BIT	1
 #define EPT_VIOLATION_ACC_INSTR_BIT	2
-#define EPT_VIOLATION_RWX_SHIFT		3
+#define EPT_VIOLATION_READ_BIT		3
+#define EPT_VIOLATION_WRITE_BIT		4
+#define EPT_VIOLATION_KERNEL_INSTR_BIT	5
+#define EPT_VIOLATION_USER_INSTR_BIT	6
 #define EPT_VIOLATION_GVA_IS_VALID_BIT	7
 #define EPT_VIOLATION_GVA_TRANSLATED_BIT 8
 #define EPT_VIOLATION_ACC_READ		(1 << EPT_VIOLATION_ACC_READ_BIT)
 #define EPT_VIOLATION_ACC_WRITE		(1 << EPT_VIOLATION_ACC_WRITE_BIT)
 #define EPT_VIOLATION_ACC_INSTR		(1 << EPT_VIOLATION_ACC_INSTR_BIT)
-#define EPT_VIOLATION_RWX_MASK		(VMX_EPT_RWX_MASK << EPT_VIOLATION_RWX_SHIFT)
+#define EPT_VIOLATION_READ		(1 << EPT_VIOLATION_READ_BIT)
+#define EPT_VIOLATION_WRITE		(1 << EPT_VIOLATION_WRITE_BIT)
+#define EPT_VIOLATION_KERNEL_INSTR	(1 << EPT_VIOLATION_KERNEL_INSTR_BIT)
+#define EPT_VIOLATION_USER_INSTR	(1 << EPT_VIOLATION_USER_INSTR_BIT)
 #define EPT_VIOLATION_GVA_IS_VALID	(1 << EPT_VIOLATION_GVA_IS_VALID_BIT)
 #define EPT_VIOLATION_GVA_TRANSLATED	(1 << EPT_VIOLATION_GVA_TRANSLATED_BIT)
 
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6bdaacb6faa0..3c4fd4618cc1 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -24,6 +24,7 @@ extern bool __read_mostly enable_mmio_caching;
 #define PT_PAGE_SIZE_MASK (1ULL << PT_PAGE_SIZE_SHIFT)
 #define PT_PAT_MASK (1ULL << 7)
 #define PT_GLOBAL_MASK (1ULL << 8)
+#define PT_USER_EXEC_MASK (1ULL << 10)
 #define PT64_NX_SHIFT 63
 #define PT64_NX_MASK (1ULL << PT64_NX_SHIFT)
 
@@ -102,7 +103,7 @@ static inline u8 kvm_get_shadow_phys_bits(void)
 
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
 void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
+void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only, bool has_mbec);
 
 void kvm_init_mmu(struct kvm_vcpu *vcpu);
 void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e5d1e241ff0f..a47e63217eb8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -27,6 +27,9 @@
 #include "cpuid.h"
 #include "spte.h"
 
+/* Required by paging_tmpl.h for enable_mbec */
+#include "../vmx/capabilities.h"
+
 #include <linux/kvm_host.h>
 #include <linux/types.h>
 #include <linux/string.h>
@@ -3763,7 +3766,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	 */
 	pm_mask = PT_PRESENT_MASK | shadow_me_value;
 	if (mmu->root_role.level >= PT64_ROOT_4LEVEL) {
-		pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
+		pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK |
+			   PT_USER_EXEC_MASK;
 
 		if (WARN_ON_ONCE(!mmu->pml4_root)) {
 			r = -EIO;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0f6455072055..12119d519c77 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -498,8 +498,20 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 		 * Note, pte_access holds the raw RWX bits from the EPTE, not
 		 * ACC_*_MASK flags!
 		 */
-		vcpu->arch.exit_qualification |= (pte_access & VMX_EPT_RWX_MASK) <<
-						 EPT_VIOLATION_RWX_SHIFT;
+		vcpu->arch.exit_qualification |=
+			!!(pte_access & VMX_EPT_READABLE_MASK)
+			<< EPT_VIOLATION_READ_BIT;
+		vcpu->arch.exit_qualification |=
+			!!(pte_access & VMX_EPT_WRITABLE_MASK)
+			<< EPT_VIOLATION_WRITE_BIT;
+		vcpu->arch.exit_qualification |=
+			!!(pte_access & VMX_EPT_EXECUTABLE_MASK)
+			<< EPT_VIOLATION_KERNEL_INSTR_BIT;
+		if (enable_mbec) {
+			vcpu->arch.exit_qualification |=
+				!!(pte_access & VMX_EPT_USER_EXECUTABLE_MASK)
+				<< EPT_VIOLATION_USER_INSTR_BIT;
+		}
 	}
 #endif
 	walker->fault.address = addr;
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 639f220a1ed5..f1e2e3cad878 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -430,13 +430,15 @@ void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_me_spte_mask);
 
-void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
+void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only, bool has_mbec)
 {
 	shadow_user_mask	= VMX_EPT_READABLE_MASK;
 	shadow_accessed_mask	= has_ad_bits ? VMX_EPT_ACCESS_BIT : 0ull;
 	shadow_dirty_mask	= has_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull;
 	shadow_nx_mask		= 0ull;
 	shadow_x_mask		= VMX_EPT_EXECUTABLE_MASK;
+	if (has_mbec)
+		shadow_x_mask |= VMX_EPT_USER_EXECUTABLE_MASK;
 	shadow_present_mask	= has_exec_only ? 0ull : VMX_EPT_READABLE_MASK;
 	/*
 	 * EPT overrides the host MTRRs, and so KVM must program the desired
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index cd2ac9536c99..2cc5d7d20144 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -13,6 +13,7 @@ extern bool __read_mostly enable_vpid;
 extern bool __read_mostly flexpriority_enabled;
 extern bool __read_mostly enable_ept;
 extern bool __read_mostly enable_unrestricted_guest;
+extern bool __read_mostly enable_mbec;
 extern bool __read_mostly enable_ept_ad_bits;
 extern bool __read_mostly enable_pml;
 extern bool __read_mostly enable_ipiv;
@@ -255,6 +256,12 @@ static inline bool cpu_has_vmx_xsaves(void)
 		SECONDARY_EXEC_XSAVES;
 }
 
+static inline bool cpu_has_vmx_mbec(void)
+{
+	return vmcs_config.cpu_based_2nd_exec_ctrl &
+		SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
+}
+
 static inline bool cpu_has_vmx_waitpkg(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index d93c715cda6a..3c381c75e2a9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2317,6 +2317,9 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 		/* VMCS shadowing for L2 is emulated for now */
 		exec_control &= ~SECONDARY_EXEC_SHADOW_VMCS;
 
+		/* MBEC is currently only handled for L0. */
+		exec_control &= ~SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
+
 		/*
 		 * Preset *DT exiting when emulating UMIP, so that vmx_set_cr4()
 		 * will not have to rewrite the controls just for this bit.
@@ -6870,6 +6873,10 @@ void nested_vmx_setup_ctls_msrs(struct vmcs_config *vmcs_conf, u32 ept_caps)
 	 */
 	msrs->secondary_ctls_low = 0;
 
+	/*
+	 * Currently, SECONDARY_EXEC_MODE_BASED_EPT_EXEC is only handled for
+	 * L0 and doesn't need to be exposed to L1 nor L2.
+	 */
 	msrs->secondary_ctls_high = vmcs_conf->cpu_based_2nd_exec_ctrl;
 	msrs->secondary_ctls_high &=
 		SECONDARY_EXEC_DESC |
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 931688edc8eb..004fd4e5e057 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -94,6 +94,9 @@ bool __read_mostly enable_unrestricted_guest = 1;
 module_param_named(unrestricted_guest,
 			enable_unrestricted_guest, bool, S_IRUGO);
 
+bool __read_mostly enable_mbec = true;
+module_param_named(mbec, enable_mbec, bool, 0444);
+
 bool __read_mostly enable_ept_ad_bits = 1;
 module_param_named(eptad, enable_ept_ad_bits, bool, S_IRUGO);
 
@@ -4518,10 +4521,21 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 		exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
 	if (!enable_ept) {
 		exec_control &= ~SECONDARY_EXEC_ENABLE_EPT;
+		/*
+		 * From Intel's SDM:
+		 * If either the "unrestricted guest" VM-execution control or
+		 * the "mode-based execute control for EPT" VM-execution
+		 * control is 1, the "enable EPT" VM-execution control must
+		 * also be 1.
+		 */
 		enable_unrestricted_guest = 0;
+		enable_mbec = false;
 	}
 	if (!enable_unrestricted_guest)
 		exec_control &= ~SECONDARY_EXEC_UNRESTRICTED_GUEST;
+	if (!enable_mbec)
+		exec_control &= ~SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
+
 	if (kvm_pause_in_guest(vmx->vcpu.kvm))
 		exec_control &= ~SECONDARY_EXEC_PAUSE_LOOP_EXITING;
 	if (!kvm_vcpu_apicv_active(vcpu))
@@ -5658,7 +5672,7 @@ static int handle_task_switch(struct kvm_vcpu *vcpu)
 
 static int handle_ept_violation(struct kvm_vcpu *vcpu)
 {
-	unsigned long exit_qualification;
+	unsigned long exit_qualification, rwx_mask;
 	gpa_t gpa;
 	u64 error_code;
 
@@ -5688,7 +5702,11 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
 		      ? PFERR_FETCH_MASK : 0;
 	/* ept page table entry is present? */
-	error_code |= (exit_qualification & EPT_VIOLATION_RWX_MASK)
+	rwx_mask = EPT_VIOLATION_READ | EPT_VIOLATION_WRITE |
+		   EPT_VIOLATION_KERNEL_INSTR;
+	if (enable_mbec)
+		rwx_mask |= EPT_VIOLATION_USER_INSTR;
+	error_code |= (exit_qualification & rwx_mask)
 		      ? PFERR_PRESENT_MASK : 0;
 
 	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
@@ -8345,6 +8363,9 @@ static __init int hardware_setup(void)
 	if (!cpu_has_vmx_unrestricted_guest() || !enable_ept)
 		enable_unrestricted_guest = 0;
 
+	if (!cpu_has_vmx_mbec() || !enable_ept)
+		enable_mbec = false;
+
 	if (!cpu_has_vmx_flexpriority())
 		flexpriority_enabled = 0;
 
@@ -8404,7 +8425,8 @@ static __init int hardware_setup(void)
 
 	if (enable_ept)
 		kvm_mmu_set_ept_masks(enable_ept_ad_bits,
-				      cpu_has_vmx_ept_execute_only());
+				      cpu_has_vmx_ept_execute_only(),
+				      enable_mbec);
 
 	/*
 	 * Setup shadow_me_value/shadow_me_mask to include MKTME KeyID
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index a3da84f4ea45..815db44cd51e 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -585,6 +585,7 @@ static inline u8 vmx_get_rvi(void)
 	 SECONDARY_EXEC_ENABLE_VMFUNC |					\
 	 SECONDARY_EXEC_BUS_LOCK_DETECTION |				\
 	 SECONDARY_EXEC_NOTIFY_VM_EXITING |				\
+	 SECONDARY_EXEC_MODE_BASED_EPT_EXEC |				\
 	 SECONDARY_EXEC_ENCLS_EXITING)
 
 #define KVM_REQUIRED_VMX_TERTIARY_VM_EXEC_CONTROL 0
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530428.826059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGQ-0000oo-GI; Fri, 05 May 2023 15:22:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530428.826059; Fri, 05 May 2023 15:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGQ-0000jU-0Y; Fri, 05 May 2023 15:22:30 +0000
Received: by outflank-mailman (input) for mailman id 530428;
 Fri, 05 May 2023 15:22:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxG5-0007pR-FO
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:22:09 +0000
Received: from smtp-bc0d.mail.infomaniak.ch (smtp-bc0d.mail.infomaniak.ch
 [45.157.188.13]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 98c50ca0-eb58-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 17:22:07 +0200 (CEST)
Received: from smtp-3-0001.mail.infomaniak.ch (unknown [10.4.36.108])
 by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZDH0FZXzMqc7P;
 Fri,  5 May 2023 17:22:07 +0200 (CEST)
Received: from unknown by smtp-3-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZDG2bp6zMpxBc; Fri,  5 May 2023 17:22:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98c50ca0-eb58-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300126;
	bh=6hEUXV1Yu+QqE06q6YV92ksgcTE4SSVMGFp/YRlUSqM=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=cCom6GYQLGlk2nCbcclR8AZKUkW0AJpZFC6wR92gff0rbAnNjJlFfzBd5J+3y++84
	 WAx+LLa8CAjewu98goOyt+2vh84ogydmCURfKSQ02P+jr2VYlfAmiT9Knsjn08r/LT
	 Tng2aFdb73PL8bp/dTy6vPMFTznPlq63+NzYf8sI=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 9/9] virt: Add Heki KUnit tests
Date: Fri,  5 May 2023 17:20:46 +0200
Message-Id: <20230505152046.6575-10-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

This adds a new CONFIG_HEKI_TEST option to run tests at boot.  Because
this patch series forbids the loading of kernel modules after boot, the
tests need to be built in.  Furthermore, because they use symbols not
exported to modules (e.g., kernel_set_to_readonly), they could not work
as modules anyway.

To run these tests, we need to boot the kernel with the heki_test=N boot
argument, with N selecting a specific test:
1. heki_test_cr_disable_smep: Check CR pinning and try to disable SMEP.
2. heki_test_write_to_const: Check .rodata (const) protection.
3. heki_test_write_to_ro_after_init: Check __ro_after_init protection.
4. heki_test_exec: Check non-executable kernel memory.

This way of selecting tests should no longer be required once the kernel
properly handles the triggered synthetic page faults.  For now, these
page faults make the kernel loop.

All these tests temporarily disable the related kernel self-protections
and should then fail if Heki doesn't protect the kernel.  They are
verbose to make it easier to understand what is going on.

Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20230505152046.6575-10-mic@digikod.net
---
 virt/heki/Kconfig |  12 +++
 virt/heki/heki.c  | 194 +++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 205 insertions(+), 1 deletion(-)

diff --git a/virt/heki/Kconfig b/virt/heki/Kconfig
index 96f18ce03013..806981f2b22d 100644
--- a/virt/heki/Kconfig
+++ b/virt/heki/Kconfig
@@ -27,3 +27,15 @@ config HYPERVISOR_SUPPORTS_HEKI
 	  A hypervisor should select this when it can successfully build
 	  and run with CONFIG_HEKI. That is, it should provide all of the
 	  hypervisor support required for the Heki feature.
+
+config HEKI_TEST
+	bool "Tests for Heki" if !KUNIT_ALL_TESTS
+	depends on HEKI && KUNIT=y
+	default KUNIT_ALL_TESTS
+	help
+	  Run Heki tests at runtime according to the heki_test=N boot
+	  parameter, with N identifying the test to run (between 1 and 4).
+
+	  Before launching the init process, the system might not respond
+	  because of an unhandled kernel page fault.  This will be fixed
+	  in a future patch series.
diff --git a/virt/heki/heki.c b/virt/heki/heki.c
index 142b5dc98a2f..361e7734e950 100644
--- a/virt/heki/heki.c
+++ b/virt/heki/heki.c
@@ -5,11 +5,13 @@
  * Copyright © 2023 Microsoft Corporation
  */
 
+#include <kunit/test.h>
 #include <linux/cache.h>
 #include <linux/heki.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/printk.h>
+#include <linux/set_memory.h>
 #include <linux/types.h>
 #include <linux/vmalloc.h>
 
@@ -78,13 +80,201 @@ void __init heki_early_init(void)
 	heki_arch_init();
 }
 
+#ifdef CONFIG_HEKI_TEST
+
+/* Heki test data */
+
+/* Takes two pages to not change permission of other read-only pages. */
+const char heki_test_const_buf[PAGE_SIZE * 2] = {};
+char heki_test_ro_after_init_buf[PAGE_SIZE * 2] __ro_after_init = {};
+
+long heki_test_exec_data(long);
+void _test_exec_data_end(void);
+
+/* Used to test ROP execution against the .rodata section. */
+/* clang-format off */
+asm(
+".pushsection .rodata;" // NOT .text section
+".global heki_test_exec_data;"
+".type heki_test_exec_data, @function;"
+"heki_test_exec_data:"
+ASM_ENDBR
+"movq %rdi, %rax;"
+"inc %rax;"
+ASM_RET
+".size heki_test_exec_data, .-heki_test_exec_data;"
+"_test_exec_data_end:"
+".popsection");
+/* clang-format on */
+
+static void heki_test_cr_disable_smep(struct kunit *test)
+{
+	unsigned long cr4;
+
+	/* SMEP should be initially enabled. */
+	KUNIT_ASSERT_TRUE(test, __read_cr4() & X86_CR4_SMEP);
+
+	kunit_warn(test,
+		   "Starting control register pinning tests with SMEP check\n");
+
+	/*
+	 * Trying to disable SMEP, bypassing kernel self-protection by not
+	 * using cr4_clear_bits(X86_CR4_SMEP).
+	 */
+	cr4 = __read_cr4() & ~X86_CR4_SMEP;
+	asm volatile("mov %0,%%cr4" : "+r"(cr4) : : "memory");
+
+	/* SMEP should still be enabled. */
+	KUNIT_ASSERT_TRUE(test, __read_cr4() & X86_CR4_SMEP);
+}
+
+static inline void print_addr(struct kunit *test, const char *const buf_name,
+			      void *const buf)
+{
+	const pte_t pte = *virt_to_kpte((unsigned long)buf);
+	const phys_addr_t paddr = slow_virt_to_phys(buf);
+	bool present = pte_flags(pte) & (_PAGE_PRESENT);
+	bool accessible = pte_accessible(&init_mm, pte);
+
+	kunit_warn(
+		test,
+		"%s vaddr:%llx paddr:%llx exec:%d write:%d present:%d accessible:%d\n",
+		buf_name, (unsigned long long)buf, paddr, !!pte_exec(pte),
+		!!pte_write(pte), present, accessible);
+}
+
+extern int kernel_set_to_readonly;
+
+static void heki_test_write_to_rodata(struct kunit *test,
+				      const char *const buf_name,
+				      char *const ro_buf)
+{
+	print_addr(test, buf_name, (void *)ro_buf);
+	KUNIT_EXPECT_EQ(test, 0, *ro_buf);
+
+	kunit_warn(
+		test,
+		"Bypassing kernel self-protection: mark memory as writable\n");
+	kernel_set_to_readonly = 0;
+	/*
+	 * Removes execute permission that might be set by bugdoor-exec,
+	 * because change_page_attr_clear() is not used by set_memory_rw().
+	 * This is required since commit 652c5bf380ad ("x86/mm: Refuse W^X
+	 * violations").
+	 */
+	KUNIT_ASSERT_FALSE(test, set_memory_nx((unsigned long)PTR_ALIGN_DOWN(
+						       ro_buf, PAGE_SIZE),
+					       1));
+	KUNIT_ASSERT_FALSE(test, set_memory_rw((unsigned long)PTR_ALIGN_DOWN(
+						       ro_buf, PAGE_SIZE),
+					       1));
+	kernel_set_to_readonly = 1;
+
+	kunit_warn(test, "Trying memory write\n");
+	*ro_buf = 0x11;
+	KUNIT_EXPECT_EQ(test, 0, *ro_buf);
+	kunit_warn(test, "New content: 0x%02x\n", *ro_buf);
+}
+
+static void heki_test_write_to_const(struct kunit *test)
+{
+	heki_test_write_to_rodata(test, "const_buf",
+				  (void *)heki_test_const_buf);
+}
+
+static void heki_test_write_to_ro_after_init(struct kunit *test)
+{
+	heki_test_write_to_rodata(test, "ro_after_init_buf",
+				  (void *)heki_test_ro_after_init_buf);
+}
+
+typedef long test_exec_t(long);
+
+static void heki_test_exec(struct kunit *test)
+{
+	const size_t exec_size = 7;
+	unsigned long nx_page_start = (unsigned long)PTR_ALIGN_DOWN(
+		(const void *const)heki_test_exec_data, PAGE_SIZE);
+	unsigned long nx_page_end = (unsigned long)PTR_ALIGN(
+		(const void *const)heki_test_exec_data + exec_size, PAGE_SIZE);
+	test_exec_t *exec = (test_exec_t *)heki_test_exec_data;
+	long ret;
+
+	/* Starting non-executable memory tests. */
+	print_addr(test, "test_exec_data", heki_test_exec_data);
+
+	kunit_warn(
+		test,
+		"Bypassing kernel self-protection: mark memory as executable\n");
+	kernel_set_to_readonly = 0;
+	KUNIT_ASSERT_FALSE(test,
+			   set_memory_rox(nx_page_start,
+					  PFN_UP(nx_page_end - nx_page_start)));
+	kernel_set_to_readonly = 1;
+
+	kunit_warn(
+		test,
+		"Trying to execute data (ROP) in (initially) non-executable memory\n");
+	ret = exec(3);
+
+	/* This should not be reached because of the uncaught page fault. */
+	KUNIT_EXPECT_EQ(test, 3, ret);
+	kunit_warn(test, "Result of execution: 3 + 1 = %ld\n", ret);
+}
+
+const struct kunit_case heki_test_cases[] = {
+	KUNIT_CASE(heki_test_cr_disable_smep),
+	KUNIT_CASE(heki_test_write_to_const),
+	KUNIT_CASE(heki_test_write_to_ro_after_init),
+	KUNIT_CASE(heki_test_exec),
+	{}
+};
+
+static unsigned long heki_test __ro_after_init;
+
+static int __init parse_heki_test_config(char *str)
+{
+	if (kstrtoul(str, 10, &heki_test) ||
+	    heki_test > (ARRAY_SIZE(heki_test_cases) - 1))
+		pr_warn("Invalid option string for heki_test: '%s'\n", str);
+	return 1;
+}
+
+__setup("heki_test=", parse_heki_test_config);
+
+static void heki_run_test(void)
+{
+	struct kunit_case heki_test_case[2] = {};
+	struct kunit_suite heki_test_suite = {
+		.name = "heki",
+		.test_cases = heki_test_case,
+	};
+	struct kunit_suite *const test_suite = &heki_test_suite;
+
+	if (!kunit_enabled() || heki_test == 0 ||
+	    heki_test >= ARRAY_SIZE(heki_test_cases))
+		return;
+
+	pr_warn("Running test #%lu\n", heki_test);
+	heki_test_case[0] = heki_test_cases[heki_test - 1];
+	__kunit_test_suites_init(&test_suite, 1);
+}
+
+#else /* CONFIG_HEKI_TEST */
+
+static inline void heki_run_test(void)
+{
+}
+
+#endif /* CONFIG_HEKI_TEST */
+
 void heki_late_init(void)
 {
 	struct heki_hypervisor *hypervisor = heki.hypervisor;
 	int ret;
 
 	if (!heki_enabled)
-		return;
+		return heki_run_test();
 
 	if (!heki.static_ranges) {
 		pr_warn("Architecture did not initialize static ranges\n");
@@ -113,6 +303,8 @@ void heki_late_init(void)
 		goto out;
 	pr_warn("Control registers locked\n");
 
+	heki_run_test();
+
 out:
 	heki_free_pa_ranges(heki.static_ranges, heki.num_static_ranges);
 	heki.static_ranges = NULL;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:22:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:22:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530430.826069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGR-0000z8-BK; Fri, 05 May 2023 15:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530430.826069; Fri, 05 May 2023 15:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxGQ-0000uO-MM; Fri, 05 May 2023 15:22:30 +0000
Received: by outflank-mailman (input) for mailman id 530430;
 Fri, 05 May 2023 15:22:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puxG8-0007pX-Dm
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:22:12 +0000
Received: from smtp-1908.mail.infomaniak.ch (smtp-1908.mail.infomaniak.ch
 [185.125.25.8]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9b1ed6d0-eb58-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 17:22:11 +0200 (CEST)
Received: from smtp-3-0001.mail.infomaniak.ch (unknown [10.4.36.108])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCZDF6ppRzMq81r;
 Fri,  5 May 2023 17:22:05 +0200 (CEST)
Received: from unknown by smtp-3-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCZDF1ydFzMptBL; Fri,  5 May 2023 17:22:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b1ed6d0-eb58-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683300125;
	bh=PyAeGY6RXqTYQefM0sl5cf8gwLNcu9YypouNBS2PSvg=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=gx2MJRBQRK/Q7zeNftCJx4FHowj9XptOmH9PCuCsUSu0Bp7v+0qbMxJPxRE/O0jXB
	 coJOo2nnquoJ+LYtPiXbpt1gGK6521yg3b0SbAlgonaVc1WJSFYSTnWf6gVnc0qPj3
	 BlSHXvPRMTfLaM2oFX0fm3FNVa7xLedl9XNnFLhY=
From: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>
To: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>,
	Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>
Cc: =?UTF-8?q?Micka=C3=ABl=20Sala=C3=BCn?= <mic@digikod.net>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	=?UTF-8?q?Mihai=20Don=C8=9Bu?= <mdontu@bitdefender.com>,
	=?UTF-8?q?Nicu=C8=99or=20C=C3=AE=C8=9Bu?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?UTF-8?q?=C8=98tefan=20=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org,
	kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org,
	virtualization@lists.linux-foundation.org,
	x86@kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 8/9] KVM: x86/mmu: Enable guests to lock themselves thanks to MBEC
Date: Fri,  5 May 2023 17:20:45 +0200
Message-Id: <20230505152046.6575-9-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

This change enables enforcing a deny-by-default execution security
policy for guest kernels, leveraged by the Heki implementation.

Create synthetic page faults when an access is denied by Heki.  Guest
kernels do not currently handle this kind of kernel page fault, so they
retry the faulting instruction again and again; we are working on
teaching guests how to handle such page faults.

The MMU tracepoints are updated to reflect the difference between kernel
and user space execution.

kvm_heki_fix_all_ept_exec_perm() walks through all guest memory pages to
set the configured default execution permissions (i.e. only allow the
configured executable memory pages).

The struct heki_mem_range's attributes field now understands
HEKI_ATTR_MEM_EXEC, which allows the related kernel sections to be
executable and denies any other kernel memory from being executable for
the whole lifetime of the guest.  This obviously can only work with
static kernels, and we are exploring ways to handle authenticated and
dynamic kernel memory permission updates.

If the host doesn't have MBEC enabled, the KVM_HC_LOCK_MEM_PAGE_RANGES
hypercall will return -KVM_EOPNOTSUPP and might only apply the previous
ranges, if any.  This is useful while developing this RFC to make sure
execution restrictions are enforced (and not silently ignored), but this
behavior might change in a future patch series.  Guest kernels can check
for MBEC support and avoid the HEKI_ATTR_MEM_EXEC attribute when it is
unavailable.

The number of configurable memory ranges per guest is 16 for now.  This
will change with a follow-up.

There are currently some pr_warn() calls to make it easy to test this
code.

Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Mickaël Salaün <mic@digikod.net>
Link: https://lore.kernel.org/r/20230505152046.6575-9-mic@digikod.net
---
 Documentation/virt/kvm/x86/hypercalls.rst |  4 +-
 arch/x86/kvm/mmu/mmu.c                    | 35 ++++++++-
 arch/x86/kvm/mmu/mmutrace.h               | 11 ++-
 arch/x86/kvm/mmu/spte.c                   | 19 ++++-
 arch/x86/kvm/mmu/spte.h                   | 15 +++-
 arch/x86/kvm/mmu/tdp_mmu.c                | 73 ++++++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.h                |  4 +
 arch/x86/kvm/x86.c                        | 90 ++++++++++++++++++++++-
 arch/x86/kvm/x86.h                        |  7 ++
 include/linux/kvm_host.h                  |  4 +
 virt/kvm/kvm_main.c                       |  1 +
 11 files changed, 250 insertions(+), 13 deletions(-)

diff --git a/Documentation/virt/kvm/x86/hypercalls.rst b/Documentation/virt/kvm/x86/hypercalls.rst
index 8aa5d28986e3..5accf5f6de13 100644
--- a/Documentation/virt/kvm/x86/hypercalls.rst
+++ b/Documentation/virt/kvm/x86/hypercalls.rst
@@ -204,7 +204,9 @@ must also set up an MSR filter to process writes to MSR_KVM_MIGRATION_CONTROL.
 
 The hypercall lets a guest request memory permissions to be removed for itself,
 identified with set of physical page ranges (GFNs).  The HEKI_ATTR_MEM_NOWRITE
-memory page range attribute forbids related modification to the guest.
+memory page range attribute forbids related modification to the guest.  The
+HEKI_ATTR_MEM_EXEC attribute allows execution of the specified pages while
+removing execute permission from all other pages.
 
 Returns 0 on success or a KVM error code otherwise.
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a47e63217eb8..56a8bcac1b82 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3313,7 +3313,7 @@ fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 static bool is_access_allowed(struct kvm_page_fault *fault, u64 spte)
 {
 	if (fault->exec)
-		return is_executable_pte(spte);
+		return is_executable_pte(spte, !fault->user);
 
 	if (fault->write)
 		return is_writable_pte(spte);
@@ -5602,6 +5602,39 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
 		return RET_PF_RETRY;
 
+	/* Skip real page fault handling when Heki denies execution. */
+	if ((error_code & PFERR_FETCH_MASK) &&
+	    !kvm_heki_is_exec_allowed(vcpu, cr2_or_gpa)) {
+		/*
+		 * TODO: To avoid kvm_heki_is_exec_allowed() call, check
+		 * enable_mbec and EPT_VIOLATION_KERNEL_INSTR, see
+		 * handle_ept_violation().
+		 */
+		struct x86_exception fault = {
+			.vector = PF_VECTOR,
+			.error_code_valid = true,
+			.error_code = error_code,
+			.nested_page_fault = false,
+			/*
+			 * TODO: This kind of kernel page fault needs to be handled by
+			 * the guest, which is not currently the case, making it try
+			 * again and again.
+			 *
+			 * You may want to test with cr2_or_gva to see the page
+			 * fault caught by the guest kernel (thinking it is a
+			 * user space fault).
+			 */
+			.address = static_call(kvm_x86_fault_gva)(vcpu),
+			.async_page_fault = false,
+		};
+
+		pr_warn_ratelimited(
+			"heki-kvm: Creating fetch #PF at 0x%016llx\n",
+			fault.address);
+		kvm_inject_page_fault(vcpu, &fault);
+		return RET_PF_INVALID;
+	}
+
 	r = RET_PF_INVALID;
 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
 		r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index ae86820cef69..cb7df95aec25 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -342,7 +342,8 @@ TRACE_EVENT(
 		__field(u8, level)
 		/* These depend on page entry type, so compute them now.  */
 		__field(bool, r)
-		__field(bool, x)
+		__field(bool, kx)
+		__field(bool, ux)
 		__field(signed char, u)
 	),
 
@@ -352,15 +353,17 @@ TRACE_EVENT(
 		__entry->sptep = virt_to_phys(sptep);
 		__entry->level = level;
 		__entry->r = shadow_present_mask || (__entry->spte & PT_PRESENT_MASK);
-		__entry->x = is_executable_pte(__entry->spte);
+		__entry->kx = is_executable_pte(__entry->spte, true);
+		__entry->ux = is_executable_pte(__entry->spte, false);
 		__entry->u = shadow_user_mask ? !!(__entry->spte & shadow_user_mask) : -1;
 	),
 
-	TP_printk("gfn %llx spte %llx (%s%s%s%s) level %d at %llx",
+	TP_printk("gfn %llx spte %llx (%s%s%s%s%s) level %d at %llx",
 		  __entry->gfn, __entry->spte,
 		  __entry->r ? "r" : "-",
 		  __entry->spte & PT_WRITABLE_MASK ? "w" : "-",
-		  __entry->x ? "x" : "-",
+		  __entry->kx ? "X" : "-",
+		  __entry->ux ? "x" : "-",
 		  __entry->u == -1 ? "" : (__entry->u ? "u" : "-"),
 		  __entry->level, __entry->sptep
 	)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index f1e2e3cad878..c9fabb3c9cb2 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -184,10 +184,25 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		pte_access &= ~ACC_EXEC_MASK;
 	}
 
-	if (pte_access & ACC_EXEC_MASK)
+	if (pte_access & ACC_EXEC_MASK) {
 		spte |= shadow_x_mask;
-	else
+#ifdef CONFIG_HEKI
+		/*
+		 * FIXME: Race condition (at boot) if no
+		 * lockdep_assert_held_write(vcpu->kvm->mmu_lock);
+		 */
+		if (READ_ONCE(vcpu->kvm->heki_kernel_exec_locked)) {
+			if (!heki_exec_is_allowed(vcpu->kvm, gfn))
+				spte &= ~VMX_EPT_EXECUTABLE_MASK;
+			else
+				pr_warn("heki-kvm: Allowing kernel execution "
+					"for GFN 0x%llx\n",
+					gfn);
+		}
+#endif /* CONFIG_HEKI */
+	} else {
 		spte |= shadow_nx_mask;
+	}
 
 	if (pte_access & ACC_USER_MASK)
 		spte |= shadow_user_mask;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 6f54dc9409c9..30b250d03132 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -3,7 +3,10 @@
 #ifndef KVM_X86_MMU_SPTE_H
 #define KVM_X86_MMU_SPTE_H
 
+#include <asm/vmx.h>
+
 #include "mmu_internal.h"
+#include "../vmx/vmx.h"
 
 /*
  * A MMU present SPTE is backed by actual memory and may or may not be present
@@ -307,9 +310,17 @@ static inline bool is_last_spte(u64 pte, int level)
 	return (level == PG_LEVEL_4K) || is_large_pte(pte);
 }
 
-static inline bool is_executable_pte(u64 spte)
+static inline bool is_executable_pte(u64 spte, bool for_kernel_mode)
 {
-	return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask;
+	u64 x_mask = shadow_x_mask;
+
+	if (enable_mbec) {
+		if (for_kernel_mode)
+			x_mask &= ~VMX_EPT_USER_EXECUTABLE_MASK;
+		else
+			x_mask &= ~VMX_EPT_EXECUTABLE_MASK;
+	}
+	return (spte & (x_mask | shadow_nx_mask)) == x_mask;
 }
 
 static inline kvm_pfn_t spte_to_pfn(u64 pte)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d6df38d371a0..0be34a9e90c0 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -7,7 +7,10 @@
 #include "tdp_mmu.h"
 #include "spte.h"
 
+#include "../x86.h"
+
 #include <asm/cmpxchg.h>
+#include <asm/vmx.h>
 #include <trace/events/kvm.h>
 
 static bool __read_mostly tdp_mmu_enabled = true;
@@ -1021,6 +1024,76 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
 	}
 }
 
+#ifdef CONFIG_HEKI
+
+/* TODO: Handle flush? */
+void kvm_heki_fix_all_ept_exec_perm(struct kvm *const kvm)
+{
+	int i;
+	struct kvm_mmu_page *root;
+	const gfn_t start = 0;
+	const gfn_t end = tdp_mmu_max_gfn_exclusive();
+
+	if (WARN_ON_ONCE(!is_tdp_mmu_enabled(kvm)))
+		return;
+
+	if (WARN_ON_ONCE(!enable_mbec))
+		return;
+
+	write_lock(&kvm->mmu_lock);
+
+	/*
+	 * Because heki_kernel_exec_locked is only set by this code, it cannot
+	 * be unlocked.  This is protected against race conditions thanks to
+	 * mmu_lock.
+	 */
+	WRITE_ONCE(kvm->heki_kernel_exec_locked, true);
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		for_each_tdp_mmu_root(kvm, root, i) {
+			struct tdp_iter iter;
+
+			WARN_ON_ONCE(!refcount_read(&root->tdp_mmu_root_count));
+
+			/*
+			 * TODO: Make sure
+			 * !is_shadow_present_pte()/SPTE_MMU_PRESENT_MASK are
+			 * well handled when they are present.
+			 */
+
+			rcu_read_lock();
+			tdp_root_for_each_leaf_pte(iter, root, start, end) {
+				u64 new_spte;
+
+				if (heki_exec_is_allowed(kvm, iter.gfn)) {
+					pr_warn("heki-kvm: Allowing kernel "
+						"execution for GFN 0x%llx\n",
+						iter.gfn);
+					continue;
+				}
+				pr_warn("heki-kvm: Denying kernel execution "
+					"for GFN 0x%llx\n",
+					iter.gfn);
+
+retry:
+				new_spte = iter.old_spte &
+					   ~VMX_EPT_EXECUTABLE_MASK;
+				if (new_spte == iter.old_spte)
+					continue;
+
+				if (tdp_mmu_set_spte_atomic(kvm, &iter,
+							    new_spte))
+					goto retry;
+			}
+			rcu_read_unlock();
+		}
+	}
+	write_unlock(&kvm->mmu_lock);
+	pr_warn("heki-kvm: Locked executable kernel memory\n");
+}
+
+#endif /* CONFIG_HEKI */
+
 /*
  * Zap all invalidated roots to ensure all SPTEs are dropped before the "fast
  * zap" completes.
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index d3714200b932..8b70b6af68d4 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -24,6 +24,10 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm);
 void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
 void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm);
 
+#ifdef CONFIG_HEKI
+void kvm_heki_fix_all_ept_exec_perm(struct kvm *const kvm);
+#endif /* CONFIG_HEKI */
+
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 
 bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a529455359ac..7ac8d9fabc18 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -20,6 +20,7 @@
 #include "irq.h"
 #include "ioapic.h"
 #include "mmu.h"
+#include "mmu/tdp_mmu.h"
 #include "i8254.h"
 #include "tss.h"
 #include "kvm_cache_regs.h"
@@ -31,6 +32,7 @@
 #include "lapic.h"
 #include "xen.h"
 #include "smm.h"
+#include "vmx/capabilities.h"
 
 #include <linux/clocksource.h>
 #include <linux/interrupt.h>
@@ -9705,6 +9707,45 @@ heki_page_track_prewrite(struct kvm_vcpu *const vcpu, const gpa_t gpa,
 	return true;
 }
 
+bool heki_exec_is_allowed(const struct kvm *const kvm, const gfn_t gfn)
+{
+	unsigned int gfn_last;
+
+	if (!READ_ONCE(kvm->heki_kernel_exec_locked))
+		return true;
+
+	/*
+	 * heki_gfn_exec_last is initialized with (HEKI_GFN_MAX + 1),
+	 * and 0 means that the heki_gfn_exec array is full.
+	 */
+	for (gfn_last = atomic_read(&kvm->heki_gfn_exec_last);
+	     gfn_last > 0 && gfn_last <= HEKI_GFN_MAX;) {
+		gfn_last--;
+
+		/* Stops at the first unused slot. */
+		if (kvm->heki_gfn_exec[gfn_last].end == 0)
+			break;
+
+		if (gfn >= kvm->heki_gfn_exec[gfn_last].start &&
+		    gfn <= kvm->heki_gfn_exec[gfn_last].end) {
+			/* TODO: Opportunistically shrink heki_gfn_exec. */
+			return true;
+		}
+	}
+	return false;
+}
+
+bool kvm_heki_is_exec_allowed(struct kvm_vcpu *vcpu, gpa_t gpa)
+{
+	const gfn_t gfn = gpa_to_gfn(gpa);
+	const struct kvm *const kvm = vcpu->kvm;
+
+	if (heki_exec_is_allowed(kvm, gfn))
+		return true;
+
+	return false;
+}
+
 static int kvm_heki_init_vm(struct kvm *const kvm)
 {
 	struct kvm_page_track_notifier_node *const node =
@@ -9733,6 +9774,7 @@ static int heki_lock_mem_page_ranges(struct kvm *const kvm, gpa_t mem_ranges,
 	int err;
 	size_t i, ranges_num;
 	struct heki_pa_range *ranges;
+	bool has_exec_restriction = false;
 
 	if (mem_ranges_size > HEKI_PA_RANGE_MAX_SIZE)
 		return -KVM_E2BIG;
@@ -9752,7 +9794,8 @@ static int heki_lock_mem_page_ranges(struct kvm *const kvm, gpa_t mem_ranges,
 
 	ranges_num = mem_ranges_size / sizeof(struct heki_pa_range);
 	for (i = 0; i < ranges_num; i++) {
-		const u64 attributes_mask = HEKI_ATTR_MEM_NOWRITE;
+		const u64 attributes_mask = HEKI_ATTR_MEM_NOWRITE |
+					    HEKI_ATTR_MEM_EXEC;
 		const gfn_t gfn_start = ranges[i].gfn_start;
 		const gfn_t gfn_end = ranges[i].gfn_end;
 		const u64 attributes = ranges[i].attributes;
@@ -9799,11 +9842,52 @@ static int heki_lock_mem_page_ranges(struct kvm *const kvm, gpa_t mem_ranges,
 					kvm, gfn, KVM_PAGE_TRACK_PREWRITE));
 		}
 
-		pr_warn("heki-kvm: Locking GFN 0x%llx-0x%llx with %s\n",
+		/*
+		 * Allow-list for execute permission,
+		 * see kvm_heki_fix_all_ept_exec_perm().
+		 */
+		if (attributes & HEKI_ATTR_MEM_EXEC) {
+			size_t gfn_i;
+
+			if (!enable_mbec) {
+				/*
+				 * Guests can check for MBEC support to avoid
+				 * such error by not using HEKI_ATTR_MEM_EXEC.
+				 */
+				err = -KVM_EOPNOTSUPP;
+				pr_warn("heki-kvm: HEKI_ATTR_MEM_EXEC "
+					"depends on MBEC, which is disabled.");
+				/*
+				 * We should continue partially applying
+				 * restrictions, but it is useful for this RFC
+				 * to exit early in case of missing MBEC
+				 * support.
+				 */
+				goto out_free_ranges;
+			}
+
+			has_exec_restriction = true;
+			gfn_i = atomic_dec_if_positive(
+				&kvm->heki_gfn_exec_last);
+			if (gfn_i == 0) {
+				err = -KVM_E2BIG;
+				goto out_free_ranges;
+			}
+
+			gfn_i--;
+			kvm->heki_gfn_exec[gfn_i].start = gfn_start;
+			kvm->heki_gfn_exec[gfn_i].end = gfn_end;
+		}
+
+		pr_warn("heki-kvm: Locking GFN 0x%llx-0x%llx with %s%s\n",
 			gfn_start, gfn_end,
-			(attributes & HEKI_ATTR_MEM_NOWRITE) ? "[nowrite]" : "");
+			(attributes & HEKI_ATTR_MEM_NOWRITE) ? "[nowrite]" : "",
+			(attributes & HEKI_ATTR_MEM_EXEC) ? "[exec]" : "");
 	}
 
+	if (has_exec_restriction)
+		kvm_heki_fix_all_ept_exec_perm(kvm);
+
 out_free_ranges:
 	kfree(ranges);
 	return err;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 3e80a60ecbd8..2127e551202d 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -282,6 +282,8 @@ int heki_check_cr(const struct kvm *kvm, unsigned long cr, unsigned long val);
 
 bool kvm_heki_is_exec_allowed(struct kvm_vcpu *vcpu, gpa_t gpa);
 
+bool heki_exec_is_allowed(const struct kvm *const kvm, const gfn_t gfn);
+
 #else /* CONFIG_HEKI */
 
 static inline int heki_check_cr(const struct kvm *const kvm,
@@ -290,6 +292,11 @@ static inline int heki_check_cr(const struct kvm *const kvm,
 	return 0;
 }
 
+static inline bool kvm_heki_is_exec_allowed(struct kvm_vcpu *vcpu, gpa_t gpa)
+{
+	return true;
+}
+
 #endif /* CONFIG_HEKI */
 
 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ab9dc723bc89..82c7b02cbcc3 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -812,9 +812,13 @@ struct kvm {
 #define HEKI_GFN_MAX 16
 	atomic_t heki_gfn_no_write_num;
 	struct heki_gfn_range heki_gfn_no_write[HEKI_GFN_MAX];
+	atomic_t heki_gfn_exec_last;
+	struct heki_gfn_range heki_gfn_exec[HEKI_GFN_MAX];
 
 	atomic_long_t heki_pinned_cr0;
 	atomic_long_t heki_pinned_cr4;
+
+	bool heki_kernel_exec_locked;
 #endif /* CONFIG_HEKI */
 
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4aea936dfe73..a177f8ff5123 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1232,6 +1232,7 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
 
 #ifdef CONFIG_HEKI
 	atomic_set(&kvm->heki_gfn_no_write_num, HEKI_GFN_MAX + 1);
+	atomic_set(&kvm->heki_gfn_exec_last, HEKI_GFN_MAX + 1);
 #endif /* CONFIG_HEKI */
 
 	preempt_notifier_inc();
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 15:31:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530475.826100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxOp-0006Dm-Bj; Fri, 05 May 2023 15:31:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530475.826100; Fri, 05 May 2023 15:31:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxOp-0006Df-8o; Fri, 05 May 2023 15:31:11 +0000
Received: by outflank-mailman (input) for mailman id 530475;
 Fri, 05 May 2023 15:31:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fp4m=A2=citrix.com=prvs=4826eee3f=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puxOn-0006DZ-Cm
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:31:09 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d91dfa4f-eb59-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 17:31:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d91dfa4f-eb59-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683300666;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=pd6I1EXHaLBuaVUf35P+v7jHRWsGbpVWYeZwKsxvbI4=;
  b=ZB0YRwaXS9HqLH8x2ILKW0zepcElfgKob2wJGOfujFO+GAavNbVJA/nW
   R5+6Ac5OnoMiG6HLksjs/nw7IL57UStapxr3z5PAFS4G5oxyCoHkduswZ
   8CNU6GR/Ofa1rS7bSYUaDvHLtWOypAPFU/TqY6WTX9uFplb6K8Lbf37ah
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 106778848
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:sJsI5KoFxnawG7lFbZlWSLf1km5eBmIyZRIvgKrLsJaIsI4StFCzt
 garIBnXPquDZGPzctslYdm3oUlUucSAnIA2SVNqpS5kFi1Dp5uZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WJwUmAWP6gR5weDzyRNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAB9TTh+Fndm9+rmiWMln2sMAffPuB4xK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVRrk6VoqwmpXDe1gVr3JDmMcbPe8zMTsJQ9qqdj
 jueoTumUkFGZLRzzxKCrW/zvOHEhR/ZXb8KH7+hxNd1jEaMkzl75Bo+CgLg/KjRZlSFc8JSL
 QkY9zQjqYA29Ve3VZ/tUhugunmGsxUAHd1KHIUSyiuA167V6AaxHXUfQ3hKb9lOnNQtWTUg2
 1uNntXoLT9iqruYTTSa7Lj8hSy2ETgYKykFfyBsZQkY59jupqkjgxSJScxseIa8ltDvECv86
 yyLpiM5wb4UiKY2O76TpA6dxWj2/96QE1Bzv1+MNo640u9nTKH7R4Ou82PQ1/1ZPqaSEl6i7
 UIBoMfLuYjiEqqxeDyxrPQlRe/5vKzYYWSF3jaDDLF6qW3zpifLkZR4pWgneRw3aptslSrBO
 he7hO9H2HNE0JJGh4dTapn5NcklxLOI+T/NBqGNNYomjnScmWa6EMBSia24hTqFfLAEy/1XB
 HtiWZ/E4YwmIapm1iGqYOwWzKUmwCszrUuKG8CglU75jePPPCXKIVvgDGZik8hjtP/UyOkr2
 4832zS2J+V3D7SlP3i/HX87JlEWN3krba3LRzhsXrfbeGJOQThxY8I9NJt9I+SJaYwJzLaXl
 px8M2cEoGfCaYrvcFrRMS8/MuOxBf6SbxsTZEQRALph4FB7Ca7H0UvVX8FfkWUPnAC78cNJc
 g==
IronPort-HdrOrdr: A9a23:qVE71q3T0ySNbBavHrvGcgqjBIgkLtp133Aq2lEZdPUCSL3+qy
 nIpoV56faUslYssR4b8uxoVJPrfZq+z/9ICOsqUotKBzOW3FdARbsKhbcKpQeMJ8SUzIBgPM
 lbH5SXp7fLfD5HZWqR2njbLz6AquP3lZyVuQ==
X-Talos-CUID: =?us-ascii?q?9a23=3A9AH+I2iAWrmViYKK6NNScXjCKDJuXSP9w3HKGES?=
 =?us-ascii?q?CAn97aLbJTnzBwqZfnJ87?=
X-Talos-MUID: =?us-ascii?q?9a23=3AbLj8Kw372EAKKl45AHLNV3l0tTUj5PuzV3oDmo0?=
 =?us-ascii?q?6mcy5LSdyIgW0kzisXdpy?=
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="106778848"
Date: Fri, 5 May 2023 16:30:52 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Julien Grall
	<julien@xen.org>
Subject: Re: [PATCH 2/2] LICENSES: Remove the use of deprecated LGPL SPDX tags
Message-ID: <ZFUhLBm1fzlbpAux@perard>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
 <20230505130533.3580545-3-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230505130533.3580545-3-andrew.cooper3@citrix.com>

On Fri, May 05, 2023 at 02:05:33PM +0100, Andrew Cooper wrote:
> ---
>  LICENSES/GPL-2.0                    | 12 ++++++------
>  LICENSES/LGPL-2.0                   |  8 ++++----
>  LICENSES/LGPL-2.1                   |  8 ++++----
> 
> diff --git a/LICENSES/GPL-2.0 b/LICENSES/GPL-2.0
> index 0022a7c17788..dcd969aa85b5 100644
> --- a/LICENSES/GPL-2.0
> +++ b/LICENSES/GPL-2.0
> @@ -1,8 +1,9 @@
> -Valid-License-Identifier: GPL-2.0
>  Valid-License-Identifier: GPL-2.0-only
> -Valid-License-Identifier: GPL-2.0+
>  Valid-License-Identifier: GPL-2.0-or-later
>  
> +Deprecated-Identifier: GPL-2.0
> +Deprecated-Identifier: GPL-2.0+
> +
>  SPDX-URL: https://spdx.org/licenses/GPL-2.0.html

You probably want to update the URLs as well; this one points to a page
with a deprecation notice. The new URL is:
    https://spdx.org/licenses/GPL-2.0-only.html

The same remark applies to the other licence files changed.

Also, maybe they want to be renamed as well, to match the identifier,
I'm not sure.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 05 15:35:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:35:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530480.826110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxTP-0006re-TN; Fri, 05 May 2023 15:35:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530480.826110; Fri, 05 May 2023 15:35:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxTP-0006rX-QW; Fri, 05 May 2023 15:35:55 +0000
Received: by outflank-mailman (input) for mailman id 530480;
 Fri, 05 May 2023 15:35:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V+Y3=A2=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1puxTN-0006rR-OX
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:35:53 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 84876bdc-eb5a-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 17:35:52 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-965d2749e2eso188730766b.1
 for <xen-devel@lists.xenproject.org>; Fri, 05 May 2023 08:35:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84876bdc-eb5a-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683300952; x=1685892952;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=31qlMl6bkiI4bRzxC3N4JX9MyFjNBhP97o0bysyL4Rc=;
        b=niNGKM0LFFqkXMppAj+bYzmkzTsiisqFtDYz2Ro5WDODkbpBxHRUlWGeHheYpVWuK6
         TmjoLen0WxGpuoBJ88btqLrqrpddqPupfNtcV4q2j5jHnCEsmWN31z0p/VdajBVEuVxP
         /HClno6FPtan36OMZs9l9VrpdNVwryYWwhnBlDZWBU9fv+X0v/wusYx8gp1LAYm1mYrG
         1r43/L6DUmDygi5XIHnq3ZwyoA3U6deRTP2+iqjFHC9E3fmu6YF4JneN3nCd/Z7MrYau
         A8/SpQWEAIZlG9nhV22q1lNlgX8Fa+/XuPvcWJQoeyfpjntFfEjE6CckHOrnWUoZCExA
         DFBg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683300952; x=1685892952;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=31qlMl6bkiI4bRzxC3N4JX9MyFjNBhP97o0bysyL4Rc=;
        b=cHX2kSGRBpik3pSyRwoUogdkcg897eLxXg5CjBrl7iQ86VZE7E83aP5oRaNDU5E/Af
         SABEWadpLdflgN3CrxQ/BIwBQ/5bWmJ4duUYDnweX/gKQb7a+b3ZAyYJEJuLRFQznMtY
         WGvFZHnDY7hZ2jGC2Vz2F6PmnBkGU+osSZrUAc6jXMgMvAT9S84utEsXLPjU5qZysHz5
         rjqQxremf8G08p6UXjsbm6mLxvPS4xC/9QGWmvk1ehq4CtLwstvHd6QN9eFZtR/dN/vO
         edmI4D5rUBfOD1FbTHsHqAVDEos4MBGEP37Nbq51bLYiqQQ+QCljLYw5rDWZGoPUSZ3q
         Vcsg==
X-Gm-Message-State: AC+VfDy9DU+uueU2V3Lg5nrAjN/KPPvLZEKlSo7xx9mB2wdA7P7W/RRK
	qvzOaWvJPjGdTQKDbZq2j7l41+NfsXmAo4VzxYcAfRBZ
X-Google-Smtp-Source: ACHHUZ6K+ucZsTkXReKaSLyTS6MXHvsPgDI/+Lt4ZClDTgE2NQfoE9KEzyoCJh20jvOGkN5fGHmw7kAjBmMAo864o9k=
X-Received: by 2002:a17:907:6287:b0:962:582d:89d7 with SMTP id
 nd7-20020a170907628700b00962582d89d7mr1758978ejc.38.1683300951588; Fri, 05
 May 2023 08:35:51 -0700 (PDT)
MIME-Version: 1.0
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-5-jandryuk@gmail.com>
 <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com> <CAKf6xpuzk6vLjbNAHzEmVpq8sDAO8O-cRFVStQxNqyD5ERr+Yg@mail.gmail.com>
 <7921d24d-7d4d-8829-44bf-b8c2ecd001c8@suse.com>
In-Reply-To: <7921d24d-7d4d-8829-44bf-b8c2ecd001c8@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 5 May 2023 11:35:39 -0400
Message-ID: <CAKf6xpt0n2GO1PuqeaiWi6iOoeBt0ULoKJ4xgeiTZo2Rh67kVg@mail.gmail.com>
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP) driver
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 5, 2023 at 3:01 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 04.05.2023 18:56, Jason Andryuk wrote:
> > On Thu, May 4, 2023 at 9:11 AM Jan Beulich <jbeulich@suse.com> wrote:
> >> On 01.05.2023 21:30, Jason Andryuk wrote:
> >>> --- a/docs/misc/xen-command-line.pandoc
> >>> +++ b/docs/misc/xen-command-line.pandoc
> >>> @@ -499,7 +499,7 @@ If set, force use of the performance counters for oprofile, rather than detectin
> >>>  available support.
> >>>
> >>>  ### cpufreq
> >>> -> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<maxfreq>][,[<minfreq>][,[verbose]]]]} | dom0-kernel`
> >>> +> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<hdc>][,[<hwp>]][,[<maxfreq>]][,[<minfreq>]][,[verbose]]]} | dom0-kernel`
> >>
> >> Considering you use a special internal governor, the 4 governor
> >> alternatives are meaningless for hwp. Hence at the command line level
> >> recognizing "hwp" as if it was another governor name would seem better
> >> to me. This would then also get rid of one of the two special "no-"
> >> prefix parsing cases (which I'm not overly happy about).
> >>
> >> Even if not done that way I'm puzzled by the way you spell out the
> >> interaction of "hwp" and "hdc": As you say in the description, "hdc" is
> >> meaningful only when "hwp" was specified, so even if not merged with
> >> the governors group "hwp" should come first, and "hdc" ought to be
> >> rejected if "hwp" wasn't first specified. (The way you've spelled it
> >> out it actually looks to be kind of the other way around.)
> >
> > I placed them in alphabetical order, but, yes, it doesn't make sense.
> >
> >> Strictly speaking "maxfreq" and "minfreq" also should be objected to
> >> when "hwp" was specified.
> >>
> >> Overall I'm getting the impression that beyond your "verbose" related
> >> adjustment more is needed, if you're meaning to get things closer to
> >> how we parse the option (splitting across multiple lines to help see
> >> what I mean):
> >>
> >> `= none
> >>  | {{ <boolean> | xen } [:{powersave|performance|ondemand|userspace}
> >>                           [{,hwp[,hdc]|[,maxfreq=<maxfreq>[,minfreq=<minfreq>]}]
> >>                           [,verbose]]}
> >>  | dom0-kernel`
> >>
> >> (We're still parsing in a more relaxed way, e.g. minfreq may come
> >> ahead of maxfreq, but better be more tight in the doc than too
> >> relaxed.)
> >>
> >> Furthermore while max/min freq don't apply directly, there are still
> >> two MSRs controlling bounds at the package and logical processor
> >> levels.
> >
> > Well, we only program the logical processor level MSRs because we
> > don't have a good idea of the packages to know when we can skip
> > writing an MSR.
> >
> > How about this:
> > `= none
> >  | {{ <boolean> | xen } {
> > [:{powersave|performance|ondemand|userspace}[,maxfreq=<maxfreq>[,minfreq=<minfreq>]]
> >                         | [:hwp[,hdc]] }
> >                           [,verbose]]}
> >  | dom0-kernel`
>
> Looks right, yes.

There is a wrinkle to using an hwp governor.  The governor was named
"hwp-internal", so it needs to be renamed to "hwp" for use with
command line parsing.  That means the check for an "-internal" suffix
has to become a check for "hwp" specifically, which loses the
generality of the original implementation.

The other issue is that if you select "hwp" as the governor, but HWP
hardware support is not available, then hwp_available() needs to reset
the governor back to the default.  This feels like a layering
violation.

I'm still investigating, but promoting hwp to a top-level option -
cpufreq=hwp - might be a better arrangement.

> >>> +union hwp_request
> >>> +{
> >>> +    struct
> >>> +    {
> >>> +        uint64_t min_perf:8;
> >>> +        uint64_t max_perf:8;
> >>> +        uint64_t desired:8;
> >>> +        uint64_t energy_perf:8;
> >>> +        uint64_t activity_window:10;
> >>> +        uint64_t package_control:1;
> >>> +        uint64_t reserved:16;
> >>> +        uint64_t activity_window_valid:1;
> >>> +        uint64_t energy_perf_valid:1;
> >>> +        uint64_t desired_valid:1;
> >>> +        uint64_t max_perf_valid:1;
> >>> +        uint64_t min_perf_valid:1;
> >>
> >> The boolean fields here would probably better be of type "bool". I also
> >> don't see the need for using uint64_t for any of the other fields -
> >> unsigned int will be quite fine, I think. Only ...
> >
> > This is the hardware MSR format, so it seemed natural to use uint64_t
> > and the bit fields.  To me, uint64_t foo:$bits; better shows that we
> > are dividing up a single hardware register using bit fields.
> > Honestly, I'm unfamiliar with the finer points of laying out bitfields
> > with bool.  And the 10 bits of activity window throws off aligning to
> > standard types.
> >
> > This seems to have the correct layout:
> > struct
> > {
> >         unsigned char min_perf;
> >         unsigned char max_perf;
> >         unsigned char desired;
> >         unsigned char energy_perf;
> >         unsigned int activity_window:10;
> >         bool package_control:1;
> >         unsigned int reserved:16;
> >         bool activity_window_valid:1;
> >         bool energy_perf_valid:1;
> >         bool desired_valid:1;
> >         bool max_perf_valid:1;
> >         bool min_perf_valid:1;
> > };
> >
> > Or would you prefer the first 8 bit ones to be unsigned int
> > min_perf:8?
>
> Personally I think using bitfields uniformly would be better. What you
> definitely cannot use if not using a bitfield is "unsigned char", it
> ought to be uint8_t then. If using a bitfield, as said, I think it's
> best to stick to unsigned int and bool, unless field width goes
> beyond 32 bits or fields cross a 32-bit boundary.

Ok, thanks.

> >>> +bool __init hwp_available(void)
> >>> +{
> >>> +    unsigned int eax, ecx, unused;
> >>> +    bool use_hwp;
> >>> +
> >>> +    if ( boot_cpu_data.cpuid_level < CPUID_PM_LEAF )
> >>> +    {
> >>> +        hwp_verbose("cpuid_level (%#x) lacks HWP support\n",
> >>> +                    boot_cpu_data.cpuid_level);
> >>> +        return false;
> >>> +    }
> >>> +
> >>> +    if ( boot_cpu_data.cpuid_level < 0x16 )
> >>> +    {
> >>> +        hwp_info("HWP disabled: cpuid_level %#x < 0x16 lacks CPU freq info\n",
> >>> +                 boot_cpu_data.cpuid_level);
> >>> +        return false;
> >>> +    }
> >>> +
> >>> +    cpuid(CPUID_PM_LEAF, &eax, &unused, &ecx, &unused);
> >>> +
> >>> +    if ( !(eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE) &&
> >>> +         !(ecx & CPUID6_ECX_IA32_ENERGY_PERF_BIAS) )
> >>> +    {
> >>> +        hwp_verbose("HWP disabled: No energy/performance preference available");
> >>> +        return false;
> >>> +    }
> >>> +
> >>> +    feature_hwp                 = eax & CPUID6_EAX_HWP;
> >>> +    feature_hwp_notification    = eax & CPUID6_EAX_HWP_NOTIFICATION;
> >>> +    feature_hwp_activity_window = eax & CPUID6_EAX_HWP_ACTIVITY_WINDOW;
> >>> +    feature_hwp_energy_perf     =
> >>> +        eax & CPUID6_EAX_HWP_ENERGY_PERFORMANCE_PREFERENCE;
> >>> +    feature_hwp_pkg_level_ctl   = eax & CPUID6_EAX_HWP_PACKAGE_LEVEL_REQUEST;
> >>> +    feature_hwp_peci            = eax & CPUID6_EAX_HWP_PECI;
> >>> +
> >>> +    hwp_verbose("HWP: %d notify: %d act-window: %d energy-perf: %d pkg-level: %d peci: %d\n",
> >>> +                feature_hwp, feature_hwp_notification,
> >>> +                feature_hwp_activity_window, feature_hwp_energy_perf,
> >>> +                feature_hwp_pkg_level_ctl, feature_hwp_peci);
> >>> +
> >>> +    if ( !feature_hwp )
> >>> +        return false;
> >>> +
> >>> +    feature_hdc = eax & CPUID6_EAX_HDC;
> >>> +
> >>> +    hwp_verbose("HWP: Hardware Duty Cycling (HDC) %ssupported%s\n",
> >>> +                feature_hdc ? "" : "not ",
> >>> +                feature_hdc ? opt_cpufreq_hdc ? ", enabled" : ", disabled"
> >>> +                            : "");
> >>> +
> >>> +    feature_hdc = feature_hdc && opt_cpufreq_hdc;
> >>> +
> >>> +    hwp_verbose("HWP: HW_FEEDBACK %ssupported\n",
> >>> +                (eax & CPUID6_EAX_HW_FEEDBACK) ? "" : "not ");
> >>
> >> You report this, but you don't really use it?
> >
> > Correct.  I needed to know what capabilities my processors have.
> >
> > feature_hwp_pkg_level_ctl and feature_hwp_peci can also be dropped
> > since they aren't used beyond printing their values.  I'd still lean
> > toward keeping their printing under verbose since otherwise there
> > isn't a convenient way to know if they are available without
> > recompiling.
>
> That's fine, but wants mentioning in the description. Also respective
> variables would want to be __initdata then, be local to the function,
> or be dropped altogether. Plus you'd want to be consistent - either
> you use a helper variable for all print-only features, or you don't.

Got it, thanks.

> >>> +        if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, val) )
> >>> +        {
> >>> +            hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
> >>> +            data->curr_req.raw = -1;
> >>> +
> >>> +            return;
> >>> +        }
> >>> +
> >>> +        data->energy_perf = val & IA32_ENERGY_BIAS_MASK;
> >>> +    }
> >>
> >> In order to not need to undo the "enable" you've already done, maybe
> >> that should move down here?
> >
> > HWP needs to be enabled before the Capabilities and Request MSRs can
> > be read.
>
> I must have missed this aspect in the SDM. Do you have a pointer?

From SDM section 15.4.2, "Enabling HWP":
Additional MSRs associated with HWP may only be accessed after HWP is
enabled, with the exception of IA32_HWP_INTERRUPT and MSR_PPERF.
Accessing the IA32_HWP_INTERRUPT MSR requires only that HWP is present
as enumerated by CPUID, but does not require enabling HWP.

> >  Reading them shouldn't fail, but it seems safer to use
> > rdmsr_safe in case something goes wrong.
>
> Sure. But then the "enable" will need undoing in the unlikely event of
> failure.

Yes.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri May 05 15:40:22 2023
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-6-jandryuk@gmail.com>
 <2864bf57-88cd-6fce-2d38-6f3a31abb440@suse.com> <CAKf6xpshQ=6kPHtjpWqNUiaBym2uEXt=reY0Kd0VoZgxuE=LxA@mail.gmail.com>
 <fe991a0d-53ff-1b16-02b4-85c0332467a1@suse.com>
In-Reply-To: <fe991a0d-53ff-1b16-02b4-85c0332467a1@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 5 May 2023 11:40:03 -0400
Message-ID: <CAKf6xpss-CbNNJ9+p_7AhEdQMr8e=--9XQCDo6+4su=pSdKNew@mail.gmail.com>
Subject: Re: [PATCH v3 05/14 RESEND] xenpm: Change get-cpufreq-para output for internal
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 5, 2023 at 3:04 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>> @@ -720,10 +721,15 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
> >>>          printf(" %d", p_cpufreq->affected_cpus[i]);
> >>>      printf("\n");
> >>>
> >>> -    printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
> >>> -           p_cpufreq->cpuinfo_max_freq,
> >>> -           p_cpufreq->cpuinfo_min_freq,
> >>> -           p_cpufreq->cpuinfo_cur_freq);
> >>> +    if ( internal )
> >>> +        printf("cpuinfo frequency    : base [%u] turbo [%u]\n",
> >>> +               p_cpufreq->cpuinfo_min_freq,
> >>> +               p_cpufreq->cpuinfo_max_freq);
> >>
> >> ... calling it "turbo" (and not "max") here.
> >
> > I'm fine with "max".  I think I went with turbo since it's a value you
> > cannot sustain but can only hit in short bursts.
>
> ... I don't mind you sticking to "turbo" as long as the description makes
> clear why that was chosen despite the SDM not naming it this way.

I switched to "max" since, as you point out, that matches the SDM naming.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri May 05 15:47:52 2023
References: <20230502143722.15613-1-jandryuk@gmail.com> <43162544.QFhiSxD2Za@silver>
In-Reply-To: <43162544.QFhiSxD2Za@silver>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 5 May 2023 11:47:22 -0400
Message-ID: <CAKf6xpsAYSd68jhCt7d603eDuLh5YJ9N8zihGBi9XvAZabNVwA@mail.gmail.com>
Subject: Re: [PATCH] 9pfs/xen: Fix segfault on shutdown
To: Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: qemu-devel@nongnu.org, Greg Kurz <groug@kaod.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Paul Durrant <paul@xen.org>, "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 5, 2023 at 6:05 AM Christian Schoenebeck
<qemu_oss@crudebyte.com> wrote:
>
> Hi Jason,
>
> as this is a Xen specific change, I would like Stefano or another Xen
> developer to take a look at it, just few things from my side ...
>
> On Tuesday, May 2, 2023 4:37:22 PM CEST Jason Andryuk wrote:
> > xen_9pfs_free can't use gnttabdev since it is already closed and NULL-ed
>
> Where exactly does it do that access? A backtrace or another detailed
> commit log description would help.

The segfault is down in the xen grant libraries during the free
callback.  The call stack is roughly:
xen_pv_del_xendev(struct XenLegacyDevice *xendev)
xen_9pfs_free() (->free() callback)
xen_be_unmap_grant_refs(&xen_9pdev->xendev, ...)
qemu_xen_gnttab_unmap(xendev->gnttabdev, ...)
xengnttab_unmap(xgt, ...) <- segfault.

The device went through the "disconnect" state before free() is
called, so xen_be_disconnect() already ran, which did:
    if (xendev->gnttabdev) {
        qemu_xen_gnttab_close(xendev->gnttabdev);
        xendev->gnttabdev = NULL;
    }

That NULL-ed gnttabdev is what xengnttab_unmap() then dereferences.

> > out when free is called.  Do the teardown in _disconnect().  This
> > matches the setup done in _connect().
> >
> > trace-events are also added for the XenDevOps functions.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> >  hw/9pfs/trace-events     |  5 +++++
> >  hw/9pfs/xen-9p-backend.c | 36 +++++++++++++++++++++++-------------
> >  2 files changed, 28 insertions(+), 13 deletions(-)
> >
> > diff --git a/hw/9pfs/trace-events b/hw/9pfs/trace-events
> > index 6c77966c0b..7b5b0b5a48 100644
> > --- a/hw/9pfs/trace-events
> > +++ b/hw/9pfs/trace-events
> > @@ -48,3 +48,8 @@ v9fs_readlink(uint16_t tag, uint8_t id, int32_t fid) "tag %d id %d fid %d"
> >  v9fs_readlink_return(uint16_t tag, uint8_t id, char* target) "tag %d id %d name %s"
> >  v9fs_setattr(uint16_t tag, uint8_t id, int32_t fid, int32_t valid, int32_t mode, int32_t uid, int32_t gid, int64_t size, int64_t atime_sec, int64_t mtime_sec) "tag %u id %u fid %d iattr={valid %d mode %d uid %d gid %d size %"PRId64" atime=%"PRId64" mtime=%"PRId64" }"
> >  v9fs_setattr_return(uint16_t tag, uint8_t id) "tag %u id %u"
> > +
>
> Nit-picking; missing leading comment:
>
> # xen-9p-backend.c

Will do, thanks.

> > +xen_9pfs_alloc(char *name) "name %s"
> > +xen_9pfs_connect(char *name) "name %s"
> > +xen_9pfs_disconnect(char *name) "name %s"
> > +xen_9pfs_free(char *name) "name %s"
> > diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
> > index 0e266c552b..c646a0b3d1 100644
> > --- a/hw/9pfs/xen-9p-backend.c
> > +++ b/hw/9pfs/xen-9p-backend.c
> > @@ -25,6 +25,8 @@
> >  #include "qemu/iov.h"
> >  #include "fsdev/qemu-fsdev.h"
> >
> > +#include "trace.h"
> > +
> >  #define VERSIONS "1"
> >  #define MAX_RINGS 8
> >  #define MAX_RING_ORDER 9
> > @@ -337,6 +339,8 @@ static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev)
> >      Xen9pfsDev *xen_9pdev = container_of(xendev, Xen9pfsDev, xendev);
> >      int i;
> >
> > +    trace_xen_9pfs_disconnect(xendev->name);
> > +
> >      for (i = 0; i < xen_9pdev->num_rings; i++) {
> >          if (xen_9pdev->rings[i].evtchndev != NULL) {
> >              qemu_set_fd_handler(qemu_xen_evtchn_fd(xen_9pdev->rings[i].evtchndev),
> > @@ -345,40 +349,42 @@ static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev)
> >                                     xen_9pdev->rings[i].local_port);
> >              xen_9pdev->rings[i].evtchndev = NULL;
> >          }
> > -    }
> > -}
> > -
> > -static int xen_9pfs_free(struct XenLegacyDevice *xendev)
> > -{
> > -    Xen9pfsDev *xen_9pdev = container_of(xendev, Xen9pfsDev, xendev);
> > -    int i;
> > -
> > -    if (xen_9pdev->rings[0].evtchndev != NULL) {
> > -        xen_9pfs_disconnect(xendev);
> > -    }
> > -
> > -    for (i = 0; i < xen_9pdev->num_rings; i++) {
> >          if (xen_9pdev->rings[i].data != NULL) {
> >              xen_be_unmap_grant_refs(&xen_9pdev->xendev,
> >                                      xen_9pdev->rings[i].data,
> >                                      xen_9pdev->rings[i].intf->ref,
> >                                      (1 << xen_9pdev->rings[i].ring_order));
> > +            xen_9pdev->rings[i].data = NULL;
> >          }
> >          if (xen_9pdev->rings[i].intf != NULL) {
> >              xen_be_unmap_grant_ref(&xen_9pdev->xendev,
> >                                     xen_9pdev->rings[i].intf,
> >                                     xen_9pdev->rings[i].ref);
> > +            xen_9pdev->rings[i].intf = NULL;
> >          }
> >          if (xen_9pdev->rings[i].bh != NULL) {
> >              qemu_bh_delete(xen_9pdev->rings[i].bh);
> > +            xen_9pdev->rings[i].bh = NULL;
> >          }
> >      }
> >
> >      g_free(xen_9pdev->id);
> > +    xen_9pdev->id = NULL;
> >      g_free(xen_9pdev->tag);
> > +    xen_9pdev->tag = NULL;
> >      g_free(xen_9pdev->path);
> > +    xen_9pdev->path = NULL;
> >      g_free(xen_9pdev->security_model);
> > +    xen_9pdev->security_model = NULL;
> >      g_free(xen_9pdev->rings);
> > +    xen_9pdev->rings = NULL;
> > +    return;
> > +}
> > +
> > +static int xen_9pfs_free(struct XenLegacyDevice *xendev)
> > +{
> > +    trace_xen_9pfs_free(xendev->name);
> > +
> >      return 0;
> >  }
>
> xen_9pfs_free() doing nothing, that doesn't look right to me. Wouldn't
> it make sense to turn xen_9pfs_free() idempotent instead?

The callbacks are:
    .alloc      = xen_9pfs_alloc,
    .init       = xen_9pfs_init,
    .initialise = xen_9pfs_connect,
    .disconnect = xen_9pfs_disconnect,
    .free       = xen_9pfs_free,

.initialise (connect) and .disconnect are matched operations, so
.disconnect should clean up what .initialise set up, which is what
this patch implements.

Also, neither xen_9pfs_alloc() nor xen_9pfs_init() performs any
allocations, which is why the .free callback is now empty.

Thanks for taking a look!

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri May 05 15:53:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 15:53:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530495.826140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxk2-00022q-Te; Fri, 05 May 2023 15:53:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530495.826140; Fri, 05 May 2023 15:53:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxk2-00022j-Q4; Fri, 05 May 2023 15:53:06 +0000
Received: by outflank-mailman (input) for mailman id 530495;
 Fri, 05 May 2023 15:53:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XVLi=A2=citrix.com=prvs=4822586d7=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puxk1-00022d-6c
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 15:53:05 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eaae084d-eb5c-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 17:53:04 +0200 (CEST)
Received: from mail-mw2nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 05 May 2023 11:53:01 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6157.namprd03.prod.outlook.com (2603:10b6:5:398::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27; Fri, 5 May
 2023 15:52:57 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.027; Fri, 5 May 2023
 15:52:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eaae084d-eb5c-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683301984;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Sm0s9iLiLjSWIAhlHKEV5b7kBNV8HdDQ0OE9VOheIGc=;
  b=WQJG0GZjArUAdK9UPJTIrUIOspuuvapoQPpcdbeNLm97UD4H2rpLZx81
   MaeZDGJDcELSec2lNEUP4T9SYwzm0ZsN8HQa+jvdCJJ/JfOSfppN+vpL8
   nD5d5rifi7OgD7OFhHhaB6oshtD1vTzFoH5abTzt70bTUOpxhAFBH9EOm
   E=;
X-IronPort-RemoteIP: 104.47.55.100
X-IronPort-MID: 106781722
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:wMiFvmGef5vLcef8qmJo12MXNcI5dUaH53TqBB+DNTt7Y5e8HAo=
X-Talos-MUID: 9a23:2uUruAVml9q7oCbq/GCziHJ9NeJl2bijCwc8zbZB4/PUKyMlbg==
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="106781722"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=chCObT9cUWg4Nc/GOOjYrFuunw80VZy3hjHG3xAeG18yezY8Fo0X5gEprlyJRgidR79r31aYT5VD9pBoCJKnEYzqkRoRsn7/TX5mv7NLSE8jgeymCCDxDRYEFLpiNErVfO1RBjyzLRAETWqKYrpVChcmjt3N4A4X7kfiogCnNv8BFCDBhw8fUWwHvnEeIGZQxo1ot8Cwe7T/XdmjZGnxZy/51k3QQT55/NxD/wrcRSbLOqm5N0L0QdGj4b+FQiI+cRahjhlLJMvQPT2cwjMco3Mp+DbyiVomrW4kaxxqC238BQ2c3FYm0HVTtEReZiIo4A7VcF9WWczMF/CiISKYqQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fydBFIE5i5DlViiY+5e/fD7VTQF/UTnKjl+sl9XhRgE=;
 b=g3YUyzYZRyB2RmrZklqlfLssmomabhJ5nIneI/h4ezjm5DMnhCdU5ZF/ykuYMZ5ScLldlgRPM+E7zcF6hFCaYJkcDCJwQpBAheGNF8UvM8RUuW1Ia9mPP0WArvV+uta21hTxAuwbeEl1SaF5gV3k8VhSB1kL7xPDFeqnwAe2bFeGBkNM49mk76U9ena73otjGekcF8cBqEXoxkSOVfT5OmF/Nfir9Vkoe/uOHqkFKlPpMOZthonu1+nURbt1Kudyne8pHUHeXwtBeJ/HB7vUcnAKHu6UxzPb9OdDGrSI6kPYRNdgO4seUP8gMbjaWMKWZkDsIrIrMwYCIKhs70urfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fydBFIE5i5DlViiY+5e/fD7VTQF/UTnKjl+sl9XhRgE=;
 b=EBj/G74iOGGASfENyf1UwE9BsQYSadthU8zTa3UIYm61lJ2zQCGO6AdoS5QDE2IjWYZ8JbU2x1wSqaFgF668+4N7Aj7UpI+sOEmmGyTT6ulR+NCfgTR5J8m5AS8e2rSglHOg+hsEtl6FMGRfFKJo/VDQnRxw/zb99Ot1irADJfM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <99d08cef-3797-ebe2-d9e9-8cf6dd49ed9c@citrix.com>
Date: Fri, 5 May 2023 16:52:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/2] LICENSES: Remove the use of deprecated LGPL SPDX tags
Content-Language: en-GB
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Jan Beulich
 <JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com>
 <20230505130533.3580545-3-andrew.cooper3@citrix.com>
 <ZFUhLBm1fzlbpAux@perard>
In-Reply-To: <ZFUhLBm1fzlbpAux@perard>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0417.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18b::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM4PR03MB6157:EE_
X-MS-Office365-Filtering-Correlation-Id: c2cb5b7b-152b-45e8-80f0-08db4d80cb05
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c2cb5b7b-152b-45e8-80f0-08db4d80cb05
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 15:52:56.6417
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: p92S88biTvrHLcGhnd2jy7PuhDXKNcdrIw3g4VhCpteBeOFD4vxQG1zVjLYeTIs2OxWN4u6xeqV61Diw8DZZ7qQKoVjDxdD4Dlx9ezOVcos=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6157

On 05/05/2023 4:30 pm, Anthony PERARD wrote:
> On Fri, May 05, 2023 at 02:05:33PM +0100, Andrew Cooper wrote:
>> ---
>>  LICENSES/GPL-2.0                    | 12 ++++++------
>>  LICENSES/LGPL-2.0                   |  8 ++++----
>>  LICENSES/LGPL-2.1                   |  8 ++++----
>>
>> diff --git a/LICENSES/GPL-2.0 b/LICENSES/GPL-2.0
>> index 0022a7c17788..dcd969aa85b5 100644
>> --- a/LICENSES/GPL-2.0
>> +++ b/LICENSES/GPL-2.0
>> @@ -1,8 +1,9 @@
>> -Valid-License-Identifier: GPL-2.0
>>  Valid-License-Identifier: GPL-2.0-only
>> -Valid-License-Identifier: GPL-2.0+
>>  Valid-License-Identifier: GPL-2.0-or-later
>>  
>> +Deprecated-Identifier: GPL-2.0
>> +Deprecated-Identifier: GPL-2.0+
>> +
>>  SPDX-URL: https://spdx.org/licenses/GPL-2.0.html
> You probably want to update the URLs as well; this one points to a page
> with a deprecation notice. The new URL is:
>     https://spdx.org/licenses/GPL-2.0-only.html
>
> Same remark for the other licence files changed.
>
> Also, maybe they want to be renamed as well, to match the identifier,
> I'm not sure.

Hmm, all good points.  I'll update the URLs because that's easy.

For the files however, the GPL ones already discuss both the "only" and
"or-later" forms, so the filename already doesn't match the SPDX tag
specifically.  I think I'll leave them as they are.
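For reference, the deprecated and current SPDX spellings discussed in this
thread look like this in a source-file licence header (an illustrative
fragment, not taken from the patch):

```c
/* Deprecated identifiers: */
/* SPDX-License-Identifier: GPL-2.0 */
/* SPDX-License-Identifier: GPL-2.0+ */

/* Current replacements: */
/* SPDX-License-Identifier: GPL-2.0-only */
/* SPDX-License-Identifier: GPL-2.0-or-later */
```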

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 05 16:08:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 16:08:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530500.826150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxyu-0004CD-6w; Fri, 05 May 2023 16:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530500.826150; Fri, 05 May 2023 16:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puxyu-0004C6-2s; Fri, 05 May 2023 16:08:28 +0000
Received: by outflank-mailman (input) for mailman id 530500;
 Fri, 05 May 2023 16:08:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fp4m=A2=citrix.com=prvs=4826eee3f=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puxys-0004C0-MH
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 16:08:26 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f2916e0-eb5f-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 18:08:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f2916e0-eb5f-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683302904;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=WFyZWrUsXdHPFST0v+jPBBnw/xYyynxZT39SG+tzogw=;
  b=V8jpbFH0ZR3YqxEQNRVXXDUAL7SToqm/wncRY8ZpImVPjICow67ZotHl
   VrBCSxNxsRb0TfMBkp6hEGexAjGNzjyF4ylJrolkomTU/JIZTLJ2qn+Ks
   2jKbp2Re+ESnfBqmFN4xb7yGyZT6DH/ThY1caOO4k41fnrFYA3MAaPaQk
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108428034
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-Talos-CUID: =?us-ascii?q?9a23=3AZ4WfXGpkvYK46aH+gJGhaA7mUcE/WHfv00uMH02?=
 =?us-ascii?q?5Bz9yd5nNZnC6o7wxxg=3D=3D?=
X-Talos-MUID: 9a23:13e6HwWJqmxYtDDq/CPKwyFAMsNs2uOJVnwjjdY+sPSubBUlbg==
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="108428034"
Date: Fri, 5 May 2023 17:08:12 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH RFC] build: respect top-level .config also for
 out-of-tree hypervisor builds
Message-ID: <a08f3650-0c81-4873-ae10-f5200f8b7613@perard>
References: <beace0ce-e90b-eb79-4419-03de45ea7360@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <beace0ce-e90b-eb79-4419-03de45ea7360@suse.com>

On Wed, Mar 15, 2023 at 03:58:59PM +0100, Jan Beulich wrote:
> With in-tree builds Config.mk includes a .config file (if present) from
> the top-level directory. Similar functionality is wanted with out-of-
> tree builds. Yet the concept of "top-level directory" becomes fuzzy in
> that case, because there is not really a requirement to have identical
> top-level directory structure in the output tree; in fact there's no
> need for anything top-level-ish there. Look for such a .config, but only
> if the tree layout matches (read: if the directory we're building in is
> named "xen").

Well, as long as the "xen/" part of the repository is the only build
system able to build out of the source tree, a top-level .config can't
exist in the build tree, since such a .config would live outside of it.
Reading from outside of the build tree or the source tree might be
problematic.

It's possible that some project might want to build just the
hypervisor, happens to name the build tree "xen", but also has a
".config" used for something else. Kconfig, for example, uses ".config"
for its own purposes, just as we do to build Xen.

It might be better to use a different name instead of ".config", and to
put it in the build tree rather than in the parent directory. Maybe
".xenbuild-config"?


I've been using an optional makefile named "xen-version" to adjust
variables of the build system, with content like:

    export XEN_TARGET_ARCH=arm64
    export CROSS_COMPILE=aarch64-linux-gnu-

so that I can have multiple build trees for different arches, and not
have to do anything other than run make to get the expected build. Maybe
that's not possible because, for some reason, the build system still
reconfigures some variables when entering a sub-directory, but that's a
start.


> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> RFC: The directory name based heuristic of course isn't nice. But I
>      couldn't think of anything better. Suggestions?

I can only think of looking at a file that is in the build tree rather
than outside of it. For example, a file named ".xenbuild-config", as
proposed earlier.

> RFC: There also being a .config in the top-level source dir would be a
>      little problematic: It would be included _after_ the one in the
>      object tree. Yet if such a scenario is to be expected/supported at
>      all, it makes more sense the other way around.

Well, that would mean teaching Config.mk about the out-of-tree builds
that part of the repository is capable of. Better would be to stop
trying to read ".config" from Config.mk and instead handle the file in
the individual sub-build systems. Or just let people who write ".config"
files handle those cases in the .config files themselves.

> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -236,8 +236,17 @@ endif
>  
>  include scripts/Kbuild.include
>  
> -# Don't break if the build process wasn't called from the top level
> -# we need XEN_TARGET_ARCH to generate the proper config
> +# Don't break if the build process wasn't called from the top level.  We need
> +# XEN_TARGET_ARCH to generate the proper config.  If building outside of the
> +# source tree also check whether we need to include a "top-level" .config:
> +# Config.mk, using $(XEN_ROOT)/.config, would look only in the source tree.
> +ifeq ($(building_out_of_srctree),1)
> +# Try to avoid including a random unrelated .config: Assume our parent dir
> +# is a "top-level" one only when the objtree is .../xen.
> +ifeq ($(patsubst %/xen,,$(abs_objtree)),)
> +-include ../.config
> +endif
> +endif
>  include $(XEN_ROOT)/Config.mk
>  
>  # Set ARCH/SUBARCH appropriately.
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -17,6 +17,13 @@ __build:
>  
>  -include $(objtree)/include/config/auto.conf
>  
> +# See commentary around the similar construct in Makefile.
> +ifneq ($(abs_objtree),$(abs_srctree))
> +ifeq ($(patsubst %/xen,,$(abs_objtree)),)
> +../.config: ;
> +-include ../.config
> +endif
> +endif
>  include $(XEN_ROOT)/Config.mk
>  include $(srctree)/scripts/Kbuild.include

There's another makefile, "scripts/Makefile.clean"; wouldn't this need
to be changed as well?

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 05 16:18:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 16:18:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530506.826160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puy8j-0005lk-5w; Fri, 05 May 2023 16:18:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530506.826160; Fri, 05 May 2023 16:18:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puy8j-0005ld-2Z; Fri, 05 May 2023 16:18:37 +0000
Received: by outflank-mailman (input) for mailman id 530506;
 Fri, 05 May 2023 16:18:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puy8i-0005lR-NW; Fri, 05 May 2023 16:18:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puy8i-0007mo-Ey; Fri, 05 May 2023 16:18:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puy8i-0006LD-2T; Fri, 05 May 2023 16:18:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puy8i-0002La-23; Fri, 05 May 2023 16:18:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FTotJuaW/kVwuxIFvlNRhJTZkL4ORsVb+8QxO1S+ZPY=; b=6uX9KZ9MP926+PqHGANr+qAFc1
	jwzsI/g2C7gbkHeR5NJ8rb8DL+vhVRyyzsF+N1RVzMcfYUPbD9vQzvatp0Js8RtOfvjSRPTeyP/w+
	DTRBem9MWo7jk4Cbnc6MRYezeMNkh6rNgTChL7detcEGPXM0AAdZyPVCJtkRQ58OgGgY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180545-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180545: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b65c0eed6bc028388d790fe4e30a76770ebb46c4
X-Osstest-Versions-That:
    ovmf=757f502a3b350436877102c3043744e021537c19
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 16:18:36 +0000

flight 180545 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180545/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b65c0eed6bc028388d790fe4e30a76770ebb46c4
baseline version:
 ovmf                 757f502a3b350436877102c3043744e021537c19

Last test of basis   180544  2023-05-05 12:12:07 Z    0 days
Testing same since   180545  2023-05-05 14:12:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dongyan Qian <qiandongyan@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   757f502a3b..b65c0eed6b  b65c0eed6bc028388d790fe4e30a76770ebb46c4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 05 16:23:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 16:23:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530512.826169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puyDQ-0007DC-Ok; Fri, 05 May 2023 16:23:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530512.826169; Fri, 05 May 2023 16:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puyDQ-0007D5-Lu; Fri, 05 May 2023 16:23:28 +0000
Received: by outflank-mailman (input) for mailman id 530512;
 Fri, 05 May 2023 16:23:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fp4m=A2=citrix.com=prvs=4826eee3f=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1puyDP-0007Cz-4L
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 16:23:27 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2893022d-eb61-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 18:23:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2893022d-eb61-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683303805;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=LpW+04vmvMqG7/SbATjIMR3yKWBj0hCYyMl4I7nU1/Y=;
  b=CpIA3Fst/B9hO8TcBcKzDLXzc0qGYo8GjVfNveo7dxOAdPbXsv+8mNZX
   Cgqs1AgbJmtMNVOhWIwZT2byWuMFTnFaLEbhjJDA6h/2rndHB9sRuZaJa
   wjASqZ+7zs2B3kFV/lEyIb0XN+qWKPI849FyvQ+r4jdOMr1Fy5+felGVA
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108429655
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-Talos-CUID: 9a23:aYtV6W3x7/TEOXakRai9drxfNJsBfniC4EbqKmy7FXltc6areWeMwfYx
X-Talos-MUID: 9a23:UehdgQjZSfBUqhLRyUqqusMpGp53vfSHUhs0kJwXtZWvahNrfDSWg2Hi
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="108429655"
Date: Fri, 5 May 2023 17:23:19 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v6 10/12] xen/tools: add sve parameter in XL configuration
Message-ID: <da845262-9bb3-467e-9f04-18c9eb06c2eb@perard>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-11-luca.fancellu@arm.com>
 <996db21b-e963-4259-884d-2131c548ca1e@perard>
 <8C3DC6ED-83D8-4DD0-9C99-B34449304373@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <8C3DC6ED-83D8-4DD0-9C99-B34449304373@arm.com>

On Tue, May 02, 2023 at 07:54:19PM +0000, Luca Fancellu wrote:
> > On 2 May 2023, at 18:06, Anthony PERARD <anthony.perard@citrix.com> wrote:
> > On Mon, Apr 24, 2023 at 07:02:46AM +0100, Luca Fancellu wrote:
> >> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> >> index ddc7b2a15975..1e69dac2c4fa 100644
> >> --- a/tools/libs/light/libxl_arm.c
> >> +++ b/tools/libs/light/libxl_arm.c
> >> @@ -211,6 +213,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
> >>         return ERROR_FAIL;
> >>     }
> >> 
> >> +    /* Parameter is sanitised in libxl__arch_domain_build_info_setdefault */
> >> +    if (d_config->b_info.arch_arm.sve_vl) {
> >> +        /* Vector length is divided by 128 in struct xen_domctl_createdomain */
> >> +        config->arch.sve_vl = d_config->b_info.arch_arm.sve_vl / 128U;
> >> +    }
> >> +
> >>     return 0;
> >> }
> >> 
> >> @@ -1681,6 +1689,26 @@ int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
> >>     /* ACPI is disabled by default */
> >>     libxl_defbool_setdefault(&b_info->acpi, false);
> >> 
> >> +    /* Sanitise SVE parameter */
> >> +    if (b_info->arch_arm.sve_vl) {
> >> +        unsigned int max_sve_vl =
> >> +            arch_capabilities_arm_sve(physinfo->arch_capabilities);
> >> +
> >> +        if (!max_sve_vl) {
> >> +            LOG(ERROR, "SVE is unsupported on this machine.");
> >> +            return ERROR_FAIL;
> >> +        }
> >> +
> >> +        if (LIBXL_SVE_TYPE_HW == b_info->arch_arm.sve_vl) {
> >> +            b_info->arch_arm.sve_vl = max_sve_vl;
> >> +        } else if (b_info->arch_arm.sve_vl > max_sve_vl) {
> >> +            LOG(ERROR,
> >> +                "Invalid sve value: %d. Platform supports up to %u bits",
> >> +                b_info->arch_arm.sve_vl, max_sve_vl);
> >> +            return ERROR_FAIL;
> >> +        }
> > 
> > You still need to check that sve_vl is one of the values from the enum,
> > or that the value is divisible by 128.
> 
> I have probably missed something: I thought that, by specifying the input
> the way below, I got for free that the value is either 0 or divisible by
> 128. Is that not the case? Who can write a value into
> b_info->arch_arm.sve_vl other than the ones from the enum we specified in
> the .idl?

`xl` isn't the only user of `libxl`; there's `libvirt` as well, and we also
have libxl bindings for several languages. There's nothing stopping a
developer from writing an arbitrary number into `sve_vl` instead of
choosing one of the values from the enum. I think we should sanitize any
input that comes from outside of libxl; that's probably already the
policy, but I'm not sure.

So if the only valid values for `sve_vl` are numbers divisible by 128,
plus some other discrete numbers, then we should check that they are, or
check that the value is one defined by the enum.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 05 16:28:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 16:28:30 +0000
Date: Fri, 5 May 2023 09:28:08 -0700
In-Reply-To: <20230505152046.6575-3-mic@digikod.net>
Mime-Version: 1.0
References: <20230505152046.6575-1-mic@digikod.net> <20230505152046.6575-3-mic@digikod.net>
Message-ID: <ZFUumGdZDNs1tkQA@google.com>
Subject: Re: [PATCH v1 2/9] KVM: x86/mmu: Add support for prewrite page tracking
From: Sean Christopherson <seanjc@google.com>
To: "=?iso-8859-1?Q?Micka=EBl_Sala=FCn?=" <mic@digikod.net>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, 
	"H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>, 
	Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, 
	Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>, 
	James Morris <jamorris@linux.microsoft.com>, John Andersen <john.s.andersen@intel.com>, 
	Liran Alon <liran.alon@oracle.com>, 
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>, Marian Rotariu <marian.c.rotariu@gmail.com>, 
	"Mihai =?utf-8?B?RG9uyJt1?=" <mdontu@bitdefender.com>, 
	"=?utf-8?B?TmljdciZb3IgQ8OuyJt1?=" <nicu.citu@icloud.com>, Rick Edgecombe <rick.p.edgecombe@intel.com>, 
	Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>, 
	Zahra Tarkhani <ztarkhani@microsoft.com>, 
	"=?utf-8?Q?=C8=98tefan_=C8=98icleru?=" <ssicleru@bitdefender.com>, dev@lists.cloudhypervisor.org, 
	kvm@vger.kernel.org, linux-hardening@vger.kernel.org, 
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-security-module@vger.kernel.org, qemu-devel@nongnu.org, 
	virtualization@lists.linux-foundation.org, x86@kernel.org, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

On Fri, May 05, 2023, Mickaël Salaün wrote:
> diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
> index eb186bc57f6a..a7fb4ff888e6 100644
> --- a/arch/x86/include/asm/kvm_page_track.h
> +++ b/arch/x86/include/asm/kvm_page_track.h
> @@ -3,6 +3,7 @@
>  #define _ASM_X86_KVM_PAGE_TRACK_H
> 
>  enum kvm_page_track_mode {
> +	KVM_PAGE_TRACK_PREWRITE,

Heh, just when I decide to finally kill off support for multiple modes[1] :-)

My assessment from that changelog still holds true for this case:

  Drop "support" for multiple page-track modes, as there is no evidence
  that array-based and refcounted metadata is the optimal solution for
  other modes, nor is there any evidence that other use cases, e.g. for
  access-tracking, will be a good fit for the page-track machinery in
  general.

  E.g. one potential use case of access-tracking would be to prevent guest
  access to poisoned memory (from the guest's perspective).  In that case,
  the number of poisoned pages is likely to be a very small percentage of
  the guest memory, and there is no need to reference count the number of
  access-tracking users, i.e. expanding gfn_track[] for a new mode would be
  grossly inefficient.  And for poisoned memory, host userspace would also
  likely want to trap accesses, e.g. to inject #MC into the guest, and that
  isn't currently supported by the page-track framework.

  A better alternative for that poisoned page use case is likely a
  variation of the proposed per-gfn attributes overlay (linked), which
  would allow efficiently tracking the sparse set of poisoned pages, and by
  default would exit to userspace on access.

Of particular relevance:

  - Using the page-track machinery is inefficient because the guest is
    likely going to write-protect a minority of its memory.  And this

      select KVM_EXTERNAL_WRITE_TRACKING if KVM

    is particularly nasty because simply enabling HEKI in the Kconfig will
    cause KVM to allocate rmaps and gfn tracking.

  - There's no need to reference count the protection, i.e. 15 of the 16
    bits of gfn_track are dead weight.

  - As proposed, adding a second "mode" would double the cost of gfn
    tracking.

  - Tying the protections to the memslots will create an
    impossible-to-maintain ABI because the protections will be lost if the
    owning memslot is deleted and recreated.

  - The page-track framework provides incomplete protection and will lead
    to an ongoing game of whack-a-mole, e.g. this patch catches the obvious
    cases by adding calls to kvm_page_track_prewrite(), but misses things
    like kvm_vcpu_map().

  - The scaling and maintenance issues will only get worse if/when someone
    tries to support dropping read and/or execute permissions, e.g. for
    execute-only.

  - The code is x86-only, and is likely to stay that way for the
    foreseeable future.

The proposed alternative is to piggyback the memory attributes
implementation[2] that is being added (if all goes according to plan) for
confidential VMs.  This use case (dropping permissions) came up not too
long ago[3], which is why I have a ready-made answer.

I have no doubt that we'll need to solve performance and scaling issues
with the memory attributes implementation, e.g. to utilize xarray
multi-range support instead of storing information on a per-4KiB-page
basis, but AFAICT, the core idea is sound.  And a very big positive from a
maintenance perspective is that any optimizations, fixes, etc. for one use
case (CoCo vs. hardening) should also benefit the other use case.

[1] https://lore.kernel.org/all/20230311002258.852397-22-seanjc@google.com
[2] https://lore.kernel.org/all/Y2WB48kD0J4VGynX@google.com
[3] https://lore.kernel.org/all/Y1a1i9vbJ%2FpVmV9r@google.com


From xen-devel-bounces@lists.xenproject.org Fri May 05 16:37:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 16:37:05 +0000
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v6 10/12] xen/tools: add sve parameter in XL configuration
Date: Fri, 5 May 2023 16:36:32 +0000
Message-ID: <D11202BE-18AF-4F78-A422-B484E390AC15@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-11-luca.fancellu@arm.com>
 <996db21b-e963-4259-884d-2131c548ca1e@perard>
 <8C3DC6ED-83D8-4DD0-9C99-B34449304373@arm.com>
 <da845262-9bb3-467e-9f04-18c9eb06c2eb@perard>
In-Reply-To: <da845262-9bb3-467e-9f04-18c9eb06c2eb@perard>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <0CBE64C3515C9E4A9BE565F762B2E6D9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0


> On 5 May 2023, at 17:23, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> On Tue, May 02, 2023 at 07:54:19PM +0000, Luca Fancellu wrote:
>>> On 2 May 2023, at 18:06, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>> On Mon, Apr 24, 2023 at 07:02:46AM +0100, Luca Fancellu wrote:
>>>> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
>>>> index ddc7b2a15975..1e69dac2c4fa 100644
>>>> --- a/tools/libs/light/libxl_arm.c
>>>> +++ b/tools/libs/light/libxl_arm.c
>>>> @@ -211,6 +213,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>>>>         return ERROR_FAIL;
>>>>     }
>>>> 
>>>> +    /* Parameter is sanitised in libxl__arch_domain_build_info_setdefault */
>>>> +    if (d_config->b_info.arch_arm.sve_vl) {
>>>> +        /* Vector length is divided by 128 in struct xen_domctl_createdomain */
>>>> +        config->arch.sve_vl = d_config->b_info.arch_arm.sve_vl / 128U;
>>>> +    }
>>>> +
>>>>     return 0;
>>>> }
>>>> 
>>>> @@ -1681,6 +1689,26 @@ int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
>>>>     /* ACPI is disabled by default */
>>>>     libxl_defbool_setdefault(&b_info->acpi, false);
>>>> 
>>>> +    /* Sanitise SVE parameter */
>>>> +    if (b_info->arch_arm.sve_vl) {
>>>> +        unsigned int max_sve_vl =
>>>> +            arch_capabilities_arm_sve(physinfo->arch_capabilities);
>>>> +
>>>> +        if (!max_sve_vl) {
>>>> +            LOG(ERROR, "SVE is unsupported on this machine.");
>>>> +            return ERROR_FAIL;
>>>> +        }
>>>> +
>>>> +        if (LIBXL_SVE_TYPE_HW == b_info->arch_arm.sve_vl) {
>>>> +            b_info->arch_arm.sve_vl = max_sve_vl;
>>>> +        } else if (b_info->arch_arm.sve_vl > max_sve_vl) {
>>>> +            LOG(ERROR,
>>>> +                "Invalid sve value: %d. Platform supports up to %u bits",
>>>> +                b_info->arch_arm.sve_vl, max_sve_vl);
>>>> +            return ERROR_FAIL;
>>>> +        }
>>> 
>>> You still need to check that sve_vl is one of the value from the enum,
>>> or that the value is divisible by 128.
>> 
>> I have probably missed something, I thought that using the way below to
>> specify the input I had for free that the value is 0 or divisible by 128, is it
>> not the case? Who can write to b_info->arch_arm.sve_vl different value
>> from the enum we specified in the .idl?
> 
> `xl` isn't the only user of `libxl`. There's `libvirt` as well. We also
> have libxl bindings for several languages.

Right, this point wasn't clear to me, I will add the check there, thank you
for the explanation.

> There's nothing stopping a
> developer to write a number into `sve_vl` instead of choosing one of the
> value from the enum. I think we should probably sanitize any input that
> comes from outside of libxl, that's probably the case, I'm not sure.
> 
> So if valid values for `sve_vl` are only numbers divisible by 128, and
> some other discrete numbers, then we should check that they are, or
> check that the value is one defined by the enum.
> 
> Cheers,
> 
> -- 
> Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 05 16:45:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 16:45:04 +0000
Date: Fri, 5 May 2023 17:44:37 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Juergen Gross <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>, Marek
 =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>,
	Christian Lindig <christian.lindig@cloud.com>
Subject: Re: [PATCH v6 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Message-ID: <9a0398b9-5b59-47aa-af34-a41133dc9e70@perard>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-10-luca.fancellu@arm.com>
 <4e6758c0-dd81-4963-8989-d941eba2b257@perard>
 <34A79CE8-FEE8-426B-810C-1E928E207724@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <34A79CE8-FEE8-426B-810C-1E928E207724@arm.com>

On Wed, May 03, 2023 at 09:23:19AM +0000, Luca Fancellu wrote:
> 
> 
> > On 2 May 2023, at 17:13, Anthony PERARD <anthony.perard@citrix.com> wrote:
> > 
> > On Mon, Apr 24, 2023 at 07:02:45AM +0100, Luca Fancellu wrote:
> >> diff --git a/tools/include/xen-tools/arm-arch-capabilities.h b/tools/include/xen-tools/arm-arch-capabilities.h
> >> new file mode 100644
> >> index 000000000000..ac44c8b14344
> >> --- /dev/null
> >> +++ b/tools/include/xen-tools/arm-arch-capabilities.h
> >> @@ -0,0 +1,28 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> > 
> > Do you mean GPL-2.0-only ?
> > 
> > GPL-2.0 is deprecated by the SPDX project.
> > 
> > https://spdx.org/licenses/GPL-2.0.html
> > 
> > 
> > Besides that, patch looks fine:
> > Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Thanks, I’ll fix in the next push and I’ll add your R-by

Actually, could you use LGPL-2.1-only instead? As this code is to be
included in libxl, and libxl is supposed to be LGPL-2.1-only, it might
be better to be on the safe side and use LGPL for this new file.

As I understand it (from a recent discussion about libacpi, and a quick
search), mixing GPL and LGPL code might mean the combined result is GPL.
So, to be on the safe side, making this file LGPL might be better, and
it seems that it would still be fine to include an LGPL file in GPL
projects.
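
In concrete terms, the suggestion is to change the new file's first line
from the GPL-2.0 tag quoted above to the corresponding LGPL identifier
from the SPDX license list:

```c
/* SPDX-License-Identifier: LGPL-2.1-only */
```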

Would that be ok with you?

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 05 16:45:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 16:45:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530528.826210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puyYE-0002f8-8p; Fri, 05 May 2023 16:44:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530528.826210; Fri, 05 May 2023 16:44:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puyYE-0002f1-5x; Fri, 05 May 2023 16:44:58 +0000
Received: by outflank-mailman (input) for mailman id 530528;
 Fri, 05 May 2023 16:44:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zlrz=A2=flex--seanjc.bounces.google.com=3hTJVZAYKCWgYKGTPIMUUMRK.IUSdKT-JKbKRROYZY.dKTVXUPKIZ.UXM@srs-se1.protection.inumbo.net>)
 id 1puyYC-0002NV-At
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 16:44:56 +0000
Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com
 [2607:f8b0:4864:20::104a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29997d9a-eb64-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 18:44:55 +0200 (CEST)
Received: by mail-pj1-x104a.google.com with SMTP id
 98e67ed59e1d1-24e33239357so1041248a91.1
 for <xen-devel@lists.xenproject.org>; Fri, 05 May 2023 09:44:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29997d9a-eb64-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20221208; t=1683305094; x=1685897094;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:from:to:cc:subject:date:message-id
         :reply-to;
        bh=9+i6WgJAJzJlKXvdjrc4sB18Bds1WwkbYsWdLgSqVp4=;
        b=5NLcQWofuV/YkKQ6oScIEQc8lVhhA2L4HU7CAnLH03mX0Px3HVPbbRo2D6swL6bYVC
         DUB3fcT4XujkVxCNaKYJ+P/Z7sNnX421VReM55ms1qzD5WRQCvJ/Kix0mSSaYy6Vuguo
         DWnNHQWNURLtT+6bY0K/Qi8QSq5fb73IsidEyosi5bDvhPv4lQKLP/WCp19+1EjcQYkT
         2ZZ09LoscjMPBSeU10UZCNvLYrLZpvQ7Vt2oUZW3pPlG97XC7fwgw982zXDkvUJKrQ1i
         ngUKTQDZ0x71ie0DU+LnkH+jsC1SnJHoK6gBPF+WpYtjUSRSNwRS6UiypPvCgognd0bl
         qZfw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683305094; x=1685897094;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=9+i6WgJAJzJlKXvdjrc4sB18Bds1WwkbYsWdLgSqVp4=;
        b=AYI6GWCXPBkbLQUCfNbNdU6P/BjvTCHSRKPJ2j4sCK2h5wNu05jaOU1/T2utj8Mixr
         UOwarWJ61aP21teZkmRR4bILikNL9QUkpFyrsbwGrBSMIeFGLclEMguVazBCsS/Q7v1M
         D5ettck/DHnjcwPqoSyY9yn6N/haqezFgNcyGQXk+hH/grqO5ChzVHBZVsITgl+hXHk0
         9oZRG7vdXxUJrNLAyOjirqT4jFd0u9OKoqFfD0zAVm4t+VL8MkfRrOMCsDNISi2zYps2
         Nkrk2oensuRGlWoOffkDy6EhICopFyr9lrXAbVKW1sNU1/91baQOmuc0OfZ76tWbexpG
         rNlQ==
X-Gm-Message-State: AC+VfDxC0xjhEZ37sll1qUeGG2rkXzHnMJTFWSNXPFudKqeIyznhl5ZN
	7X9rQfLsxw4F6JulLMlo2JKtrbAttqw=
X-Google-Smtp-Source: ACHHUZ6aankmXvB8085gNTlbF2PCEyPixSX1tutA3Hp2uz+l8puQYKdMvvxlOn7gLaJGVxpHKJ9HgVKK2Lk=
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a17:90a:ea0b:b0:24e:2787:405d with SMTP id
 w11-20020a17090aea0b00b0024e2787405dmr607921pjy.5.1683305093970; Fri, 05 May
 2023 09:44:53 -0700 (PDT)
Date: Fri, 5 May 2023 09:44:52 -0700
In-Reply-To: <20230505152046.6575-5-mic@digikod.net>
Mime-Version: 1.0
References: <20230505152046.6575-1-mic@digikod.net> <20230505152046.6575-5-mic@digikod.net>
Message-ID: <ZFUyhPuhtMbYdJ76@google.com>
Subject: Re: [PATCH v1 4/9] KVM: x86: Add new hypercall to set EPT permissions
From: Sean Christopherson <seanjc@google.com>
To: "=?iso-8859-1?Q?Micka=EBl_Sala=FCn?=" <mic@digikod.net>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, 
	"H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>, 
	Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, 
	Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>, 
	James Morris <jamorris@linux.microsoft.com>, John Andersen <john.s.andersen@intel.com>, 
	Liran Alon <liran.alon@oracle.com>, 
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>, Marian Rotariu <marian.c.rotariu@gmail.com>, 
	"Mihai =?utf-8?B?RG9uyJt1?=" <mdontu@bitdefender.com>, 
	"=?utf-8?B?TmljdciZb3IgQ8OuyJt1?=" <nicu.citu@icloud.com>, Rick Edgecombe <rick.p.edgecombe@intel.com>, 
	Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>, 
	Zahra Tarkhani <ztarkhani@microsoft.com>, 
	"=?utf-8?Q?=C8=98tefan_=C8=98icleru?=" <ssicleru@bitdefender.com>, dev@lists.cloudhypervisor.org, 
	kvm@vger.kernel.org, linux-hardening@vger.kernel.org, 
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-security-module@vger.kernel.org, qemu-devel@nongnu.org, 
	virtualization@lists.linux-foundation.org, x86@kernel.org, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

On Fri, May 05, 2023, Mickaël Salaün wrote:
> Add a new KVM_HC_LOCK_MEM_PAGE_RANGES hypercall that enables a guest to
> set EPT permissions on a set of page ranges.

IMO, manipulation of protections, both for memory (this patch) and CPU
state (control registers in the next patch) should come from userspace.
I have no objection to KVM providing plumbing if necessary, but I think
userspace needs to have full control over the actual state.

One of the things that caused Intel's control register pinning series to
stall out was how to handle edge cases like kexec() and reboot.
Deferring to userspace means the kernel doesn't need to define policy,
e.g. when to unprotect memory, and avoids questions like "should
userspace be able to overwrite pinned control registers".

And like the confidential VM use case, keeping userspace in the loop is
a big benefit, e.g. the guest can't circumvent protections by coercing
userspace into writing to protected memory.


From xen-devel-bounces@lists.xenproject.org Fri May 05 16:50:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 16:50:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530535.826221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puydB-0004JF-Va; Fri, 05 May 2023 16:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530535.826221; Fri, 05 May 2023 16:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puydB-0004J8-PS; Fri, 05 May 2023 16:50:05 +0000
Received: by outflank-mailman (input) for mailman id 530535;
 Fri, 05 May 2023 16:50:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puydB-0004EG-7O
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 16:50:05 +0000
Received: from smtp-8fac.mail.infomaniak.ch (smtp-8fac.mail.infomaniak.ch
 [83.166.143.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e11dbe8c-eb64-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 18:50:02 +0200 (CEST)
Received: from smtp-3-0001.mail.infomaniak.ch (unknown [10.4.36.108])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCc9j6pyZzMq29R;
 Fri,  5 May 2023 18:50:01 +0200 (CEST)
Received: from unknown by smtp-3-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCc9f1ZprzMpt9p; Fri,  5 May 2023 18:49:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e11dbe8c-eb64-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683305401;
	bh=RPOWf6TexJXD2F61Y+MwwaWrFJGnJ6O8Bzm/vPfHfEg=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=KFqsV29bRBQXZVMUZDnW9//LbmTjZRdZ5CYSCvQtHGOugYPQZro4kAj1y6BuQuLMI
	 6ri9TjVKDdRi4alwo2oX8l60NkGjcqHjRzeAjieP1C8UKm+ZX2LPynTNK6yGyQO8br
	 /DpBAABiiqrinOuFsaScnqzH8PtKHmistKR3ODTI=
Message-ID: <6412bf27-4d05-eab8-3db1-d4efa44af3aa@digikod.net>
Date: Fri, 5 May 2023 18:49:57 +0200
MIME-Version: 1.0
User-Agent:
Subject: Re: [PATCH v1 2/9] KVM: x86/mmu: Add support for prewrite page
 tracking
Content-Language: en-US
To: Sean Christopherson <seanjc@google.com>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H . Peter Anvin" <hpa@zytor.com>,
 Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>,
 Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 James Morris <jamorris@linux.microsoft.com>,
 John Andersen <john.s.andersen@intel.com>, Liran Alon
 <liran.alon@oracle.com>,
 "Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
 Marian Rotariu <marian.c.rotariu@gmail.com>,
 =?UTF-8?Q?Mihai_Don=c8=9bu?= <mdontu@bitdefender.com>,
 =?UTF-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 =?UTF-8?Q?=c8=98tefan_=c8=98icleru?= <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-3-mic@digikod.net> <ZFUumGdZDNs1tkQA@google.com>
From: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>
In-Reply-To: <ZFUumGdZDNs1tkQA@google.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha


On 05/05/2023 18:28, Sean Christopherson wrote:
> On Fri, May 05, 2023, Mickaël Salaün wrote:
>> diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
>> index eb186bc57f6a..a7fb4ff888e6 100644
>> --- a/arch/x86/include/asm/kvm_page_track.h
>> +++ b/arch/x86/include/asm/kvm_page_track.h
>> @@ -3,6 +3,7 @@
>>   #define _ASM_X86_KVM_PAGE_TRACK_H
>>   
>>   enum kvm_page_track_mode {
>> +	KVM_PAGE_TRACK_PREWRITE,
> 
> Heh, just when I decide to finally kill off support for multiple modes[1] :-)
> 
> My assessment from that changelog still holds true for this case:
> 
>    Drop "support" for multiple page-track modes, as there is no evidence
>    that array-based and refcounted metadata is the optimal solution for
>    other modes, nor is there any evidence that other use cases, e.g. for
>    access-tracking, will be a good fit for the page-track machinery in
>    general.
>    
>    E.g. one potential use case of access-tracking would be to prevent guest
>    access to poisoned memory (from the guest's perspective).  In that case,
>    the number of poisoned pages is likely to be a very small percentage of
>    the guest memory, and there is no need to reference count the number of
>    access-tracking users, i.e. expanding gfn_track[] for a new mode would be
>    grossly inefficient.  And for poisoned memory, host userspace would also
>    likely want to trap accesses, e.g. to inject #MC into the guest, and that
>    isn't currently supported by the page-track framework.
>    
>    A better alternative for that poisoned page use case is likely a
>    variation of the proposed per-gfn attributes overlay (linked), which
>    would allow efficiently tracking the sparse set of poisoned pages, and by
>    default would exit to userspace on access.
> 
> Of particular relevance:
> 
>    - Using the page-track machinery is inefficient because the guest is likely
>      going to write-protect a minority of its memory.  And this
> 
>        select KVM_EXTERNAL_WRITE_TRACKING if KVM
> 
>      is particularly nasty because simply enabling HEKI in the Kconfig will cause
>      KVM to allocate rmaps and gfn tracking.
> 
>    - There's no need to reference count the protection, i.e. 15 of the 16 bits of
>      gfn_track are dead weight.
> 
>    - As proposed, adding a second "mode" would double the cost of gfn tracking.
> 
>    - Tying the protections to the memslots will create an impossible-to-maintain
>      ABI because the protections will be lost if the owning memslot is deleted and
>      recreated.
> 
>    - The page-track framework provides incomplete protection and will lead to an
>      ongoing game of whack-a-mole, e.g. this patch catches the obvious cases by
>      adding calls to kvm_page_track_prewrite(), but misses things like kvm_vcpu_map().
> 
>    - The scaling and maintenance issues will only get worse if/when someone tries
>      to support dropping read and/or execute permissions, e.g. for execute-only.
> 
>    - The code is x86-only, and is likely to stay that way for the foreseeable
>      future.
> 
> The proposed alternative is to piggyback the memory attributes implementation[2]
> that is being added (if all goes according to plan) for confidential VMs.  This
> use case (dropping permissions) came up not too long ago[3], which is why I have
> a ready-made answer).
> 
> I have no doubt that we'll need to solve performance and scaling issues with the
> memory attributes implementation, e.g. to utilize xarray multi-range support
> instead of storing information on a per-4KiB-page basis, but AFAICT, the core
> idea is sound.  And a very big positive from a maintenance perspective is that
> any optimizations, fixes, etc. for one use case (CoCo vs. hardening) should also
> benefit the other use case.
> 
> [1] https://lore.kernel.org/all/20230311002258.852397-22-seanjc@google.com
> [2] https://lore.kernel.org/all/Y2WB48kD0J4VGynX@google.com
> [3] https://lore.kernel.org/all/Y1a1i9vbJ%2FpVmV9r@google.com

I agree. I used this mechanism because it was easier at first to rely
on previous work, but while I was working on the MBEC support, I
realized that it's not the optimal way to do it.

I was thinking about using a new special EPT bit similar to
EPT_SPTE_HOST_WRITABLE, though that may not be portable. What do you
think?


From xen-devel-bounces@lists.xenproject.org Fri May 05 16:57:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 16:57:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530540.826230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puykT-00052n-MJ; Fri, 05 May 2023 16:57:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530540.826230; Fri, 05 May 2023 16:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puykT-00052g-IZ; Fri, 05 May 2023 16:57:37 +0000
Received: by outflank-mailman (input) for mailman id 530540;
 Fri, 05 May 2023 16:57:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C5z1=A2=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1puykS-00052a-FB
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 16:57:36 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2054.outbound.protection.outlook.com [40.107.7.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eed0fb2b-eb65-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 18:57:35 +0200 (CEST)
Received: from ZR2P278CA0020.CHEP278.PROD.OUTLOOK.COM (2603:10a6:910:46::6) by
 GV1PR08MB9913.eurprd08.prod.outlook.com (2603:10a6:150:86::18) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.27; Fri, 5 May 2023 16:57:04 +0000
Received: from VI1EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:910:46:cafe::1c) by ZR2P278CA0020.outlook.office365.com
 (2603:10a6:910:46::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.27 via Frontend
 Transport; Fri, 5 May 2023 16:57:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT060.mail.protection.outlook.com (100.127.144.243) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6387.12 via Frontend Transport; Fri, 5 May 2023 16:57:02 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Fri, 05 May 2023 16:57:02 +0000
Received: from 87bf513720db.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F94D8FE1-D7DD-4F4F-9C8B-2C238BECE399.1; 
 Fri, 05 May 2023 16:56:51 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 87bf513720db.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 05 May 2023 16:56:51 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by VI1PR08MB10029.eurprd08.prod.outlook.com (2603:10a6:800:1c6::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Fri, 5 May
 2023 16:56:46 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be%6]) with mapi id 15.20.6363.021; Fri, 5 May 2023
 16:56:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eed0fb2b-eb65-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jB/0Hk7/nAxsOggP+UuejWY5jFRViXw4KZif78sQ5yI=;
 b=NAmXMzoMzeI9AwWThvqN9C9R8yktIETQ4RQnrNxhh5zEHKhup/B+cKY4sVir+dSfuR/NEwbqyE689br406ZJgNl6iGPn6+//Q5Wm/XkMpdSLrD3q+fM6ZTkogdBuyYMj3b9/gHA0DkG+M17Zzki0SD/i0LMXAXdzgIo+4WiwOtc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: fd9d954d9f968ac8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F8p38Tk3nM7Zplfgsp92r0saq6ivKtnz29cXENpOslN41qnvomKrHNwK8rLJyuBAHS5gft8hkui8r06UaqlaGQsTov7s+dpCVu3KS4gEgbibvXNOsHYQfasTdva3cVTKWM43LY591sj1/lSBTLkWIAFZcaWZqdJt8v5IXAUJOR7HLtLBZjEzcPuMEbXBDIZpfR57elfXO0gQcVrDBW904H7N+89YYZAhXLggGK7r9+bAQSd3dkxbGXYN8dOvdvQnjyjTo2N88+1VRYlHbOgnyh9GzIUnfZlLrMvP6T0OazLJmjtQBSkTBIsHpToOXEWMXApORZIwhxu7/YpIcl5FJw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jB/0Hk7/nAxsOggP+UuejWY5jFRViXw4KZif78sQ5yI=;
 b=kUsV96PDFHVrz33ERXm5sfeKVfQlq7hFumGUPO+toN7rW37gil8xkBbgm5NdGTZu6HERCyJGD9VnFN7vnzCPiZXwAnehJS285Ni4rhShPj81ObCX0ey71s5dauR4fxabwfPp9gs8YZD+RoWAKd322PQWKWUC7+GYG+QzxdWmJhG2usE6TJbmRtLhh/ULD89uWkJLXJlqpQ7ko+r5nLbmrix8G2iH4yoWSdobW0/cRAn6to9qn5qz4yfH4byutXurJNcemrqD3FKzGJFqF8+INpsi+ICGAj+NjT8H/0yuUpJJZRGiJYqIf3fQkZ9DzEkeEE8ASpi2xDUvpaTXs6izzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jB/0Hk7/nAxsOggP+UuejWY5jFRViXw4KZif78sQ5yI=;
 b=NAmXMzoMzeI9AwWThvqN9C9R8yktIETQ4RQnrNxhh5zEHKhup/B+cKY4sVir+dSfuR/NEwbqyE689br406ZJgNl6iGPn6+//Q5Wm/XkMpdSLrD3q+fM6ZTkogdBuyYMj3b9/gHA0DkG+M17Zzki0SD/i0LMXAXdzgIo+4WiwOtc=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Juergen Gross <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>,
	=?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>, Christian Lindig
	<christian.lindig@cloud.com>
Subject: Re: [PATCH v6 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Topic: [PATCH v6 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Index: AQHZdnKCeDO91wnv7UqBsQsgLvHj6a9HNTmAgAEfoACAA6ADgIAAA1aA
Date: Fri, 5 May 2023 16:56:44 +0000
Message-ID: <C87FA3E3-007B-423B-A1B9-2B68C12AFA67@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-10-luca.fancellu@arm.com>
 <4e6758c0-dd81-4963-8989-d941eba2b257@perard>
 <34A79CE8-FEE8-426B-810C-1E928E207724@arm.com>
 <9a0398b9-5b59-47aa-af34-a41133dc9e70@perard>
In-Reply-To: <9a0398b9-5b59-47aa-af34-a41133dc9e70@perard>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|VI1PR08MB10029:EE_|VI1EUR03FT060:EE_|GV1PR08MB9913:EE_
X-MS-Office365-Filtering-Correlation-Id: e8551b1c-db14-43e1-e020-08db4d89c005
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <1AC788189CF5354B8C5F37045CC57B67@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB10029
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c38bae8a-dd42-4703-a975-08db4d89b4f1
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2023 16:57:02.9432
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e8551b1c-db14-43e1-e020-08db4d89c005
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB9913

> On 5 May 2023, at 17:44, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> On Wed, May 03, 2023 at 09:23:19AM +0000, Luca Fancellu wrote:
>> 
>> 
>>> On 2 May 2023, at 17:13, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>> 
>>> On Mon, Apr 24, 2023 at 07:02:45AM +0100, Luca Fancellu wrote:
>>>> diff --git a/tools/include/xen-tools/arm-arch-capabilities.h b/tools/include/xen-tools/arm-arch-capabilities.h
>>>> new file mode 100644
>>>> index 000000000000..ac44c8b14344
>>>> --- /dev/null
>>>> +++ b/tools/include/xen-tools/arm-arch-capabilities.h
>>>> @@ -0,0 +1,28 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> 
>>> Do you mean GPL-2.0-only?
>>> 
>>> GPL-2.0 is deprecated by the SPDX project.
>>> 
>>> https://spdx.org/licenses/GPL-2.0.html
>>> 
>>> 
>>> Besides that, the patch looks fine:
>>> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
>> 
>> Thanks, I'll fix it in the next push and I'll add your R-by.
> 
> Actually, could you use LGPL-2.1-only instead? As this code is to be
> included in libxl, and libxl is supposed to be LGPL-2.1-only, it might
> be better to be on the safe side and use LGPL for this new file.
> 
> As I understand it (from a recent discussion about libacpi, and a quick
> search only), mixing GPL and LGPL code might mean the result is GPL. So,
> just to be on the safe side, having this file be LGPL might be better.
> And it seems that it would still be fine to include that file in GPL
> projects.
> 
> Would that be ok with you?

Yes, sure, I will use LGPL-2.1-only instead, no problem.

> 
> Cheers,
> 
> -- 
> Anthony PERARD
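The agreed fix amounts to changing one line of the new header. A sketch of the result is below; only the SPDX line reflects the thread's conclusion, while the guard name and remaining contents are placeholders, since the full file is not shown here:

```c
/* SPDX-License-Identifier: LGPL-2.1-only */
/*
 * Sketch of tools/include/xen-tools/arm-arch-capabilities.h after the
 * license change discussed above. Guard name and contents are illustrative.
 */
#ifndef XEN_TOOLS_ARM_ARCH_CAPABILITIES_H
#define XEN_TOOLS_ARM_ARCH_CAPABILITIES_H

/* ... capability helpers shared between libxl and other tools ... */

#endif /* XEN_TOOLS_ARM_ARCH_CAPABILITIES_H */
```

Using `LGPL-2.1-only` rather than the deprecated bare `GPL-2.0` identifier both satisfies current SPDX conventions and keeps the file compatible with libxl's license.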


From xen-devel-bounces@lists.xenproject.org Fri May 05 17:01:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 17:01:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530546.826239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puyoF-0006a6-7w; Fri, 05 May 2023 17:01:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530546.826239; Fri, 05 May 2023 17:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puyoF-0006Zz-53; Fri, 05 May 2023 17:01:31 +0000
Received: by outflank-mailman (input) for mailman id 530546;
 Fri, 05 May 2023 17:01:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C6Wy=A2=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1puyoD-0006Zt-Gs
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 17:01:29 +0000
Received: from smtp-42ac.mail.infomaniak.ch (smtp-42ac.mail.infomaniak.ch
 [2001:1600:4:17::42ac])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 792b0485-eb66-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 19:01:27 +0200 (CEST)
Received: from smtp-2-0001.mail.infomaniak.ch (unknown [10.5.36.108])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QCcQt2Hl7zMqBQp;
 Fri,  5 May 2023 19:01:26 +0200 (CEST)
Received: from unknown by smtp-2-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QCcQl2DxJzMpxhN; Fri,  5 May 2023 19:01:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 792b0485-eb66-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1683306086;
	bh=Gz496lWCqtEQGiFhDAnNdISGi/Ti3+4r45rZfW9pMhA=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=0n9dK6yocwQrPoCyZhLiYaFRP+Rrc+AesC8U5DRpIAduml7GS9dfUNu27anU/Nq9H
	 X7nX6pZrCIdpSbikNeBpaLS6XS4cxGwP59sL8dmULJe3HbrVsb+7OJzRnuN4Tp0SIU
	 jtv1NK7dS4Wb0Fq0T7+YdNL/W/Ml/YkRAsZVmseM=
Message-ID: <39125b11-659f-35f4-ac7a-a3ba31365950@digikod.net>
Date: Fri, 5 May 2023 19:01:18 +0200
MIME-Version: 1.0
User-Agent:
Subject: Re: [PATCH v1 4/9] KVM: x86: Add new hypercall to set EPT permissions
Content-Language: en-US
To: Sean Christopherson <seanjc@google.com>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H . Peter Anvin" <hpa@zytor.com>,
 Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>,
 Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 James Morris <jamorris@linux.microsoft.com>,
 John Andersen <john.s.andersen@intel.com>, Liran Alon
 <liran.alon@oracle.com>,
 "Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
 Marian Rotariu <marian.c.rotariu@gmail.com>,
 Mihai Donțu <mdontu@bitdefender.com>,
 Nicușor Cîțu <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 Ștefan Șicleru <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-5-mic@digikod.net> <ZFUyhPuhtMbYdJ76@google.com>
From: Mickaël Salaün <mic@digikod.net>
In-Reply-To: <ZFUyhPuhtMbYdJ76@google.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha


On 05/05/2023 18:44, Sean Christopherson wrote:
> On Fri, May 05, 2023, Mickaël Salaün wrote:
>> Add a new KVM_HC_LOCK_MEM_PAGE_RANGES hypercall that enables a guest to
>> set EPT permissions on a set of page ranges.
> 
> IMO, manipulation of protections, both for memory (this patch) and CPU state
> (control registers in the next patch) should come from userspace.  I have no
> objection to KVM providing plumbing if necessary, but I think userspace needs
> to have full control over the actual state.

By user space, do you mean the host user space or the guest user space?

About the guest user space, I see several issues with delegating this kind
of control:
- These are restrictions only relevant to the kernel.
- The threat model is to protect against user space as early as possible.
- It would be more complex for no obvious gain.

This patch series is an extension of the kernel self-protection
mechanisms, which are not configured by user space.


> 
> One of the things that caused Intel's control register pinning series to stall
> out was how to handle edge cases like kexec() and reboot.  Deferring to userspace
> means the kernel doesn't need to define policy, e.g. when to unprotect memory,
> and avoids questions like "should userspace be able to overwrite pinned control
> registers".

The idea is to authenticate every change. For kexec, the VMM (or 
something else) would have to authenticate the new kernel. Do you have 
something else in mind that could legitimately require such memory or CR 
changes?


> 
> And like the confidential VM use case, keeping userspace in the loop is a big
> benefit, e.g. the guest can't circumvent protections by coercing userspace into
> writing to protected memory.

I don't understand this part. Are you talking about the host user space?
How could the guest circumvent protections?


From xen-devel-bounces@lists.xenproject.org Fri May 05 17:18:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 17:18:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530550.826250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puz4I-0008Fx-LE; Fri, 05 May 2023 17:18:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530550.826250; Fri, 05 May 2023 17:18:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puz4I-0008Fq-Hr; Fri, 05 May 2023 17:18:06 +0000
Received: by outflank-mailman (input) for mailman id 530550;
 Fri, 05 May 2023 17:18:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4wjn=A2=flex--seanjc.bounces.google.com=3SDpVZAYKCTspbXkgZdlldib.Zljubk-absbiifpqp.ubkmolgbZq.lod@srs-se1.protection.inumbo.net>)
 id 1puz4G-0008Fk-GK
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 17:18:04 +0000
Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com
 [2607:f8b0:4864:20::549])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9d079b0-eb68-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 19:18:02 +0200 (CEST)
Received: by mail-pg1-x549.google.com with SMTP id
 41be03b00d2f7-517baf1bc93so1573140a12.0
 for <xen-devel@lists.xenproject.org>; Fri, 05 May 2023 10:18:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9d079b0-eb68-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20221208; t=1683307081; x=1685899081;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:from:to:cc:subject:date:message-id
         :reply-to;
        bh=SxwNvfp9yRlruqDpvJxkAnrRtCk6mE67Bd3BubjypxQ=;
        b=cXXc3M6yYogaCUYe2c+SrQc2MIdAmLJbXRkHUS9zG7xhw6MiEu2sitOw7/Gs8N+qYL
         oXgYeStpeUI6b4JASS0HSftlcCCRj/X7n1N9WbKprVd/yNpe36+Ro8ZtPUj3p6Ld50JH
         pqxQnHXwImJi8vy3sHQBgx80mx2LtgRBbImqJGjt3ifJhQvYtGUkTdTjALM0pcmAV+1D
         6vpr8yMdnFBHG32elxIu/SKRWE8X5JKVIrtIk1RplILzJzkSNycRYJAr+Iqs6tbgmR/S
         ugGjoODz+nslHOA/c91Rz89LssATB8IAOjXVho0NUWdysNmyIv7F6uQ3Fm1u7pfzIzFf
         Lo4A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683307081; x=1685899081;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=SxwNvfp9yRlruqDpvJxkAnrRtCk6mE67Bd3BubjypxQ=;
        b=HYrL+5HltJa36ItWjhdifRWAQ01e5NypMS9Z6GIH4+CHoUK6oxNkoYVmpd0Fh83CH/
         XVwLXv553eqBL2KA7/nHFfl9LnqMsD3ekHPQhcNduHOzZNiTMsk62qghSWlQXJiT2o+j
         NvQXcFD7cBzsAm36Hvm9MT5+TzOOstLWB4xXoy30lXaAevVUo5yhCfIjUR/B0p7Q0xs9
         VuzSSJgsmKexZzGtu+YZtzrweMzTimxq1mjpbbXKyvUF6p/qR6Fezc8v+BxixeBfOTuy
         4RZijT/it1ld8YX1fxJy8zBViaFiwLmfme6KjD7rLORescHyEylExiGB+MuLRO7qCFVM
         juzA==
X-Gm-Message-State: AC+VfDyOI5KJYVxgXbCrXRNjLryp5ocSlV1v++n9ui6dEu9hbM5W91M9
	sAnS38Ygj8G5taGK1/fJitnIMcb7jTY=
X-Google-Smtp-Source: ACHHUZ6uHVJCOE/ldx1w0hZ4Jds3kKbtJzThF7Cs+tG2xUZ0aBKcfdgS0+FUqexTD5OeTy31Ux2XqzCsKXc=
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a63:8ac1:0:b0:513:efd4:d76 with SMTP id
 y184-20020a638ac1000000b00513efd40d76mr529931pgd.5.1683307080808; Fri, 05 May
 2023 10:18:00 -0700 (PDT)
Date: Fri, 5 May 2023 10:17:59 -0700
In-Reply-To: <39125b11-659f-35f4-ac7a-a3ba31365950@digikod.net>
Mime-Version: 1.0
References: <20230505152046.6575-1-mic@digikod.net> <20230505152046.6575-5-mic@digikod.net>
 <ZFUyhPuhtMbYdJ76@google.com> <39125b11-659f-35f4-ac7a-a3ba31365950@digikod.net>
Message-ID: <ZFU6R2pZ0Vx5RpAj@google.com>
Subject: Re: [PATCH v1 4/9] KVM: x86: Add new hypercall to set EPT permissions
From: Sean Christopherson <seanjc@google.com>
To: Mickaël Salaün <mic@digikod.net>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, 
	"H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>, 
	Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, 
	Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>, 
	James Morris <jamorris@linux.microsoft.com>, John Andersen <john.s.andersen@intel.com>, 
	Liran Alon <liran.alon@oracle.com>, 
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>, Marian Rotariu <marian.c.rotariu@gmail.com>, 
	Mihai Donțu <mdontu@bitdefender.com>, 
	Nicușor Cîțu <nicu.citu@icloud.com>, Rick Edgecombe <rick.p.edgecombe@intel.com>, 
	Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>, 
	Zahra Tarkhani <ztarkhani@microsoft.com>, 
	Ștefan Șicleru <ssicleru@bitdefender.com>, dev@lists.cloudhypervisor.org, 
	kvm@vger.kernel.org, linux-hardening@vger.kernel.org, 
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-security-module@vger.kernel.org, qemu-devel@nongnu.org, 
	virtualization@lists.linux-foundation.org, x86@kernel.org, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

On Fri, May 05, 2023, Mickaël Salaün wrote:
> 
> On 05/05/2023 18:44, Sean Christopherson wrote:
> > On Fri, May 05, 2023, Mickaël Salaün wrote:
> > > Add a new KVM_HC_LOCK_MEM_PAGE_RANGES hypercall that enables a guest to
> > > set EPT permissions on a set of page ranges.
> > 
> > IMO, manipulation of protections, both for memory (this patch) and CPU state
> > (control registers in the next patch) should come from userspace.  I have no
> > objection to KVM providing plumbing if necessary, but I think userspace needs
> > to have full control over the actual state.
> 
> By user space, do you mean the host user space or the guest user space?

Host userspace, a.k.a. the VMM.  Definitely not guest userspace.

> About the guest user space, I see several issues with delegating this kind
> of control:
> - These are restrictions only relevant to the kernel.
> - The threat model is to protect against user space as early as possible.
> - It would be more complex for no obvious gain.
> 
> This patch series is an extension of the kernel self-protection mechanisms,
> which are not configured by user space.
> 
> 
> > 
> > One of the things that caused Intel's control register pinning series to stall
> > out was how to handle edge cases like kexec() and reboot.  Deferring to userspace
> > means the kernel doesn't need to define policy, e.g. when to unprotect memory,
> > and avoids questions like "should userspace be able to overwrite pinned control
> > registers".
> 
> The idea is to authenticate every change. For kexec, the VMM (or something
> else) would have to authenticate the new kernel. Do you have something else
> in mind that could legitimately require such memory or CR changes?

I think we're on the same page: the VMM (host userspace) would need to ack any
changes.

FWIW, SMM is another wart, as entry to SMM clobbers CRs.  Now that CONFIG_KVM_SMM
is a thing, the easiest solution would be to disallow coexistence with SMM, though
that might not be an option for many use cases (IIUC, QEMU-based deployments use
SMM to implement secure boot).

> > And like the confidential VM use case, keeping userspace in the loop is a big
> > benefit, e.g. the guest can't circumvent protections by coercing userspace into
> > writing to protected memory.
> 
> I don't understand this part. Are you talking about the host user space? How
> could the guest circumvent protections?

Host userspace.  The guest configures a device buffer in write-protected memory,
then gets a host (synthetic) device to write into that memory.
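Sean's point that the VMM stays the policy authority can be illustrated with a small helper. This is a hypothetical userspace sketch, not KVM's or any VMM's actual API: the VMM tracks which guest-frame ranges are locked and refuses any write or unprotect request that touches them, including writes performed on behalf of a synthetic device.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Half-open range [start, end) of guest frame numbers. */
struct gfn_range {
    uint64_t start;
    uint64_t end;
};

static bool ranges_overlap(struct gfn_range a, struct gfn_range b)
{
    return a.start < b.end && b.start < a.end;
}

/*
 * Policy check run by the VMM before acting on any request (from the guest
 * or from a device model) that would modify guest memory: deny anything
 * that intersects a locked range.
 */
bool vmm_allow_write(const struct gfn_range *locked, size_t n_locked,
                     struct gfn_range req)
{
    for (size_t i = 0; i < n_locked; i++)
        if (ranges_overlap(locked[i], req))
            return false;
    return true;
}
```

Routing the coerced-device case through the same gate is what closes the loophole: the guest cannot get protected memory written merely by pointing a device buffer at it.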


From xen-devel-bounces@lists.xenproject.org Fri May 05 17:31:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 17:31:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530555.826259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzH8-0002IO-Oj; Fri, 05 May 2023 17:31:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530555.826259; Fri, 05 May 2023 17:31:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzH8-0002IH-Ls; Fri, 05 May 2023 17:31:22 +0000
Received: by outflank-mailman (input) for mailman id 530555;
 Fri, 05 May 2023 17:31:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ky0y=A2=flex--seanjc.bounces.google.com=3ZD1VZAYKCV0N95IE7BJJBG9.7JHS9I-89Q9GGDNON.S9IKMJE97O.JMB@srs-se1.protection.inumbo.net>)
 id 1puzH7-0002IB-L2
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 17:31:21 +0000
Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com
 [2607:f8b0:4864:20::549])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a432f184-eb6a-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 19:31:18 +0200 (CEST)
Received: by mail-pg1-x549.google.com with SMTP id
 41be03b00d2f7-528ab7097afso1444091a12.1
 for <xen-devel@lists.xenproject.org>; Fri, 05 May 2023 10:31:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a432f184-eb6a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20221208; t=1683307876; x=1685899876;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:from:to:cc:subject:date:message-id
         :reply-to;
        bh=qAOxuvlzoNXvPjKXSolx+GWj91lIdWlZPd2J+lepu+A=;
        b=YajfbfW6/wtHmrsLEvn1JkH+xI6N003d6IHsKYEdYKyQ4XBOf/ltXYAA8TT//7kx7h
         8wJuT8QRTxgAHxrmJYouhI/7YgQTZm4mO10KjPCgeJdJMeuE5+h/bLiahSr81Gm/ecKB
         zDx4TcPrzUJvkSrnc51y29YU+lehMTLTirJ8zF6Y7AvgQTUcfYtQ37OJnKLDHV7h0smL
         PJn351waqBTnvLrthFCgyYl4ojqlUIUMdMZ9mvD3/1gMVpPtasu86bb2kRnLpCKWJhO/
         k73RbTIykvnJHtRsAnnJa8Km60QZxDXHqCZ+6V04mHM7fIYa6vKKdRMKt3CdepgXl1bi
         CXtw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683307876; x=1685899876;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=qAOxuvlzoNXvPjKXSolx+GWj91lIdWlZPd2J+lepu+A=;
        b=C724/H88uLdTPA5yUEB/JDJWbpNGEyaKFzyEhRWHwOv8IXTFt4P5wFcsJtZq0F2sXI
         ERRWX7B6BhXeKmVkpigkc8hS/D7DVd+hw5TPiffisg9Huxjwcq+HehW2vfmKXCKtXrpJ
         VZqCZn8K8vmsN3l6nEzXD6OMAXpUAaNyQAbK3+BACljsu4nV7Ds1EQd1VQwDzUk2SJu7
         YZE+ONtAPgWa7VPa0wFyGBbg2Yn8HlZosHCaO4sSpXQUaqFAyWpJaOX1jW5U4CxnK7uS
         BxS+2s5LL7uMGgBET65myJhxl4ICXHndMoKu4Ew+uFYRRrt9x9w+WnO6xk8e0HPPp512
         cTkA==
X-Gm-Message-State: AC+VfDxxHF2gPqg7yriZxhGvg53ykIMb/6wnkSe8oW6d8IqSYU87H4YD
	KOGNOkj/Ocm0hfBBVZvCDGkEZHv5QYs=
X-Google-Smtp-Source: ACHHUZ5AKtrEhFxY1tyiq165vY9w3gn7Qqc7+wcBsvMx8OszEeBENNSnTQn/dNy8sL8BRZ1pG6YlJeRQjuk=
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a63:152:0:b0:52c:407a:2279 with SMTP id
 79-20020a630152000000b0052c407a2279mr529292pgb.0.1683307876644; Fri, 05 May
 2023 10:31:16 -0700 (PDT)
Date: Fri, 5 May 2023 10:31:15 -0700
In-Reply-To: <6412bf27-4d05-eab8-3db1-d4efa44af3aa@digikod.net>
Mime-Version: 1.0
References: <20230505152046.6575-1-mic@digikod.net> <20230505152046.6575-3-mic@digikod.net>
 <ZFUumGdZDNs1tkQA@google.com> <6412bf27-4d05-eab8-3db1-d4efa44af3aa@digikod.net>
Message-ID: <ZFU9YzqG/T+Ty9gY@google.com>
Subject: Re: [PATCH v1 2/9] KVM: x86/mmu: Add support for prewrite page tracking
From: Sean Christopherson <seanjc@google.com>
To: Mickaël Salaün <mic@digikod.net>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, 
	"H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>, 
	Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, 
	Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>, 
	James Morris <jamorris@linux.microsoft.com>, John Andersen <john.s.andersen@intel.com>, 
	Liran Alon <liran.alon@oracle.com>, 
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>, Marian Rotariu <marian.c.rotariu@gmail.com>, 
	Mihai Donțu <mdontu@bitdefender.com>, 
	Nicușor Cîțu <nicu.citu@icloud.com>, Rick Edgecombe <rick.p.edgecombe@intel.com>, 
	Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>, 
	Zahra Tarkhani <ztarkhani@microsoft.com>, 
	Ștefan Șicleru <ssicleru@bitdefender.com>, dev@lists.cloudhypervisor.org, 
	kvm@vger.kernel.org, linux-hardening@vger.kernel.org, 
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-security-module@vger.kernel.org, qemu-devel@nongnu.org, 
	virtualization@lists.linux-foundation.org, x86@kernel.org, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

On Fri, May 05, 2023, Mickaël Salaün wrote:
> 
> On 05/05/2023 18:28, Sean Christopherson wrote:
> > I have no doubt that we'll need to solve performance and scaling issues with the
> > memory attributes implementation, e.g. to utilize xarray multi-range support
> > instead of storing information on a per-4KiB-page basis, but AFAICT, the core
> > idea is sound.  And a very big positive from a maintenance perspective is that
> > any optimizations, fixes, etc. for one use case (CoCo vs. hardening) should also
> > benefit the other use case.
> > 
> > [1] https://lore.kernel.org/all/20230311002258.852397-22-seanjc@google.com
> > [2] https://lore.kernel.org/all/Y2WB48kD0J4VGynX@google.com
> > [3] https://lore.kernel.org/all/Y1a1i9vbJ%2FpVmV9r@google.com
> 
> I agree. I used this mechanism because it was easier at first to rely on
> previous work, but while I was working on the MBEC support, I realized that
> it's not the optimal way to do it.
> 
> I was thinking about using a new special EPT bit similar to
> EPT_SPTE_HOST_WRITABLE, but it may not be portable. What do you think?

On x86, SPTEs are even more ephemeral than memslots.  E.g. for historical reasons,
KVM zaps all SPTEs if _any_ memslot is deleted, which is problematic if the guest
is moving around BARs, using option ROMs, etc.

ARM's pKVM tracks metadata in its stage-2 PTEs, i.e. doesn't need an xarray to
track attributes, but that works only because pKVM is more privileged than the
host kernel, and the shared vs. private memory attribute that pKVM cares about
is very, very restricted in how it can be used and changed.

I tried shoehorning private vs. shared metadata into x86's SPTEs in the past, and
it ended up being a constant battle with the kernel, e.g. page migration, and with
KVM itself, e.g. the above memslot mess.
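The per-4KiB vs. multi-range trade-off Sean mentions can be sketched in plain userspace C. This is a toy analogue for illustration only, not KVM's xarray-based implementation: attributes are stored as one entry per contiguous range rather than one entry per page, with later entries overriding earlier ones.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy attribute map keyed by inclusive gfn ranges [first, last]. */
#define MAX_RANGES 16

struct attr_range {
    uint64_t first, last;
    int attr;
};

struct attr_map {
    struct attr_range r[MAX_RANGES];
    size_t n;
};

/* Record one attribute entry for a whole range of pages. */
int set_range(struct attr_map *m, uint64_t first, uint64_t last, int attr)
{
    if (m->n == MAX_RANGES)
        return -1;
    m->r[m->n++] = (struct attr_range){ first, last, attr };
    return 0;
}

/* Look up the attribute for one gfn; the last matching range wins,
 * mirroring overwrite semantics. 0 means "no attribute set". */
int get_attr(const struct attr_map *m, uint64_t gfn)
{
    int attr = 0;
    for (size_t i = 0; i < m->n; i++)
        if (gfn >= m->r[i].first && gfn <= m->r[i].last)
            attr = m->r[i].attr;
    return attr;
}
```

The point of the multi-range approach is visible in the storage cost: protecting a 2MiB region costs one entry here instead of 512 per-4KiB entries, which is the scaling concern raised for the memory-attributes implementation.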


From xen-devel-bounces@lists.xenproject.org Fri May 05 17:34:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 17:34:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530560.826270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzJo-0002tZ-63; Fri, 05 May 2023 17:34:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530560.826270; Fri, 05 May 2023 17:34:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzJo-0002tS-2g; Fri, 05 May 2023 17:34:08 +0000
Received: by outflank-mailman (input) for mailman id 530560;
 Fri, 05 May 2023 17:34:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puzJn-0002tI-1s; Fri, 05 May 2023 17:34:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puzJm-0001LA-QM; Fri, 05 May 2023 17:34:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1puzJm-0002V6-BS; Fri, 05 May 2023 17:34:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1puzJm-0001Ke-Ag; Fri, 05 May 2023 17:34:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jbbwYn/ck0Rj7JLCnR9kJ2LOj8Kc7y9k209DyP2cFUw=; b=jS99WrV2hOKAv54FoLqnVltcIz
	lkTWdDXLK/W+vh2ql3fxJULHInL4KyOr/pnGf1ZflDFThjzP2+KkB/sIJRhvHqCtBj3bTkqdZwgzy
	bkDAwkkiqt0imO/F180Z7AvUSel4hDWLhP31/r+RlFOy4aJ+wzhkGTwWFouTJ0101xd0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180539-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180539: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=4419e74117b3bb17c61cd7a612b5801ab99ad547
X-Osstest-Versions-That:
    libvirt=b4f5e6c91b9871173de205f81b51cf06a833fcb1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 17:34:06 +0000

flight 180539 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180539/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180513
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180513
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180513
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 libvirt              4419e74117b3bb17c61cd7a612b5801ab99ad547
baseline version:
 libvirt              b4f5e6c91b9871173de205f81b51cf06a833fcb1

Last test of basis   180513  2023-05-03 04:18:50 Z    2 days
Failing since        180524  2023-05-04 04:18:45 Z    1 days    2 attempts
Testing same since   180539  2023-05-05 04:18:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Daniel Henrique Barboza <dbarboza@ventanamicro.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   b4f5e6c91b..4419e74117  4419e74117b3bb17c61cd7a612b5801ab99ad547 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 05 17:57:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 17:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530597.826324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzgI-00077k-9L; Fri, 05 May 2023 17:57:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530597.826324; Fri, 05 May 2023 17:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzgI-00077E-2L; Fri, 05 May 2023 17:57:22 +0000
Received: by outflank-mailman (input) for mailman id 530597;
 Fri, 05 May 2023 17:57:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l1gM=A2=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1puzgF-0006Yg-LX
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 17:57:19 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4724a395-eb6e-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 19:57:19 +0200 (CEST)
Received: by mail-wm1-x334.google.com with SMTP id
 5b1f17b1804b1-3f315735514so108096885e9.1
 for <xen-devel@lists.xenproject.org>; Fri, 05 May 2023 10:57:19 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 b12-20020adfe30c000000b00306423904d6sm3053844wrj.45.2023.05.05.10.57.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 05 May 2023 10:57:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4724a395-eb6e-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683309438; x=1685901438;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=d6Xql5GXzmjj1oZIuAO041sn7FGeLxwpchTwgQr2k5s=;
        b=D0MY9e7uNVDRqXI8XB8j0kmwsJFPVA7488rLg9o/qJf8WaepewgG92ynTMnwvzhgog
         jnRDm8hfKRRxDpUVXyyGyt+JvmlmGkJY3xY6+QDL+STz9HxbWQxpNgRjSyMfgvqjnbwt
         rjXVlyagTW6MNShOmpa3YA6meVEmH44P4I7Gw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683309438; x=1685901438;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=d6Xql5GXzmjj1oZIuAO041sn7FGeLxwpchTwgQr2k5s=;
        b=cxvPjfpLe6hYnoyjg9omWzzY8z/PiA4Z0jC6uHqsLPAIN58aCB7MEg9hpvebBJDaTK
         Fu9S6ACj/3yfhLToUuifSjRRHs9w1pNQS4g3ECGumw7JDFUb1r7+RKervez7Fv3UbPf6
         ud0X9ZRfSB0ZJlNrXFsAg1a8sMVx+iv5mrlfEkf56flGVpIZJqrRjj7QiwZrpO7GqUV4
         LA3aaMpG/X0fIrfSyYjrgEEc21tDZSYoEfpfMFFm3qrGVu79dr7KjIzqbe6f/RFql83g
         pXMWso2cbHW70fre956KHcfIi3ZuvRLTxlooLrv1YXIfwXFH+KMdRp4CKrx7Uo0oYouM
         JGWA==
X-Gm-Message-State: AC+VfDxs88VZWpP9t2Uei2hVs4NBKYmRNWs499F+z6CWqQbdOyR5agIu
	PYIsl1gYCYkVzDiZ2JdJfOl4z3iXdCtmfR3MDJM=
X-Google-Smtp-Source: ACHHUZ7HbbsoCdRLwpKFkeV3KYcoZ2tYunzSdAflnU/lJ4BAM1XJOaXtZDpyoaNJmG9s2CmAuNrwFQ==
X-Received: by 2002:a05:600c:3d9a:b0:3f1:6ebd:d995 with SMTP id bi26-20020a05600c3d9a00b003f16ebdd995mr2286776wmb.0.1683309438059;
        Fri, 05 May 2023 10:57:18 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 3/3] x86: Use CpuidUserDis if an AMD HVM guest toggles CPUID faulting
Date: Fri,  5 May 2023 18:57:05 +0100
Message-Id: <20230505175705.18098-4-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This aids guests running on AMD hardware to which we have exposed CPUID
faulting. If such a guest tries to modify the Intel MSR that enables the
feature, trigger levelling so that AMD's equivalent (CpuidUserDis) is
used instead.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
 xen/arch/x86/msr.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index ecf126566d..984aedf180 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -431,6 +431,13 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     {
         bool old_cpuid_faulting = msrs->misc_features_enables.cpuid_faulting;
 
+        /*
+         * The boot CPU must support Intel's CPUID faulting _or_
+         * AMD's CpuidUserDis.
+         */
+        bool can_fault_cpuid = cpu_has_cpuid_faulting ||
+                               boot_cpu_has(X86_FEATURE_CPUID_USER_DIS);
+
         rsvd = ~0ull;
         if ( cp->platform_info.cpuid_faulting )
             rsvd &= ~MSR_MISC_FEATURES_CPUID_FAULTING;
@@ -440,7 +447,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 
         msrs->misc_features_enables.raw = val;
 
-        if ( v == curr && is_hvm_domain(d) && cpu_has_cpuid_faulting &&
+        if ( v == curr && is_hvm_domain(d) && can_fault_cpuid &&
              (old_cpuid_faulting ^ msrs->misc_features_enables.cpuid_faulting) )
             ctxt_switch_levelling(v);
         break;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 17:57:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 17:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530596.826317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzgH-00073q-SA; Fri, 05 May 2023 17:57:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530596.826317; Fri, 05 May 2023 17:57:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzgH-00073j-Ox; Fri, 05 May 2023 17:57:21 +0000
Received: by outflank-mailman (input) for mailman id 530596;
 Fri, 05 May 2023 17:57:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l1gM=A2=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1puzgF-0006ws-LN
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 17:57:19 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 46190858-eb6e-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 19:57:17 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-3f19a80a330so14429995e9.2
 for <xen-devel@lists.xenproject.org>; Fri, 05 May 2023 10:57:17 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 b12-20020adfe30c000000b00306423904d6sm3053844wrj.45.2023.05.05.10.57.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 05 May 2023 10:57:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46190858-eb6e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683309436; x=1685901436;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ubIK+XYgAIIZcyqAhGOfgJDl4JEExigDY8ufylt9qvw=;
        b=iUVAAHDp7zE3tBjYt2RmDBRUg2mKNMFd53nbCBsmyRuBT8fWhlFV7g8QWARjB8ygI2
         fxg29XQGaCSIq5vPpT61NLslfCNlWHG1Q7E7ijP3E62sfacQeSvj/Lnc2ncHdxFrUXSU
         CBqrGZcqBIQtYo6p8QFP051fkN/U9nmKodnyQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683309436; x=1685901436;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ubIK+XYgAIIZcyqAhGOfgJDl4JEExigDY8ufylt9qvw=;
        b=Q3a3nADK+czQJIQYO1kVngJ8+cs3bIyDet1zsveRPSuAsfFFUQ4KlkHcG3N0iPwhs1
         yYJ+Usd1v6f/RDpc2WfZat8wWUbyIATnvQNvWiQLSCqbjkqH8GrJi7ZiPIrt34oypskH
         /CJUudRzwR/F/h6LnxylnoCY7u/SA7md3imHHyoZ2P8NVA53LbZg6yWnAJQCcHdfdAt3
         cqjK+fPzkfk91uzhfeGHkSQbwWaSzbpsfRsv4c42gRhRndx1US1dATa0l8oSZpyh1rGD
         Za9/NT2xi9qznx9ADgB0CMwpZ6yCP2h+U2ja9GwoN4pTk2v15dRSyI5pe522ZEws93EH
         UIOQ==
X-Gm-Message-State: AC+VfDw0zTscQzc4gF5wEzVtk98dYm+QqzW//AQD++jwN06rZLICgt/n
	06GD2/6vTLYf1bs15+lB5nJatKSaG8kdI2Xxoc8=
X-Google-Smtp-Source: ACHHUZ6CrKzTndC7mGLtHexYNG/IMPc8wRB57ggtG2k3REK1pCr2UWWgO52VIUGy8rend9Fb/kIOwg==
X-Received: by 2002:a05:600c:257:b0:3f1:800f:cc61 with SMTP id 23-20020a05600c025700b003f1800fcc61mr1771921wmj.13.1683309436339;
        Fri, 05 May 2023 10:57:16 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/3] x86: Add support for CpuidUserDis
Date: Fri,  5 May 2023 18:57:04 +0100
Message-Id: <20230505175705.18098-3-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Includes a refactor to move vendor-specific probes into vendor-specific
files. Furthermore, because CpuidUserDis is itself reported in CPUID,
the extended leaf containing that bit must be retrieved before calling
c_early_init().

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
 xen/arch/x86/cpu/amd.c         | 29 ++++++++++++++++++-
 xen/arch/x86/cpu/common.c      | 51 ++++++++++++++++++----------------
 xen/arch/x86/cpu/intel.c       | 11 +++++++-
 xen/arch/x86/include/asm/amd.h |  1 +
 4 files changed, 66 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index caafe44740..9269015edd 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -271,8 +271,20 @@ static void __init noinline amd_init_levelling(void)
 {
 	const struct cpuidmask *m = NULL;
 
-	if (probe_cpuid_faulting())
+	/*
+	 * If there's support for CpuidUserDis or CPUID faulting then
+	 * we can skip levelling because CPUID accesses are trapped anyway.
+	 *
+	 * CPUID faulting is an Intel feature analogous to CpuidUserDis;
+	 * on AMD hardware it can only be present when Xen is itself
+	 * virtualized, because the underlying hypervisor can emulate it.
+	 */
+	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
+	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
+		expected_levelling_cap |= LCAP_faulting;
+		levelling_caps |=  LCAP_faulting;
 		return;
+	}
 
 	probe_masking_msrs();
 
@@ -363,6 +375,21 @@ static void __init noinline amd_init_levelling(void)
 		ctxt_switch_masking = amd_ctxt_switch_masking;
 }
 
+void amd_set_cpuid_user_dis(bool enable)
+{
+	const uint64_t msr_addr = MSR_K8_HWCR;
+	const uint64_t bit = K8_HWCR_CPUID_USER_DIS;
+	uint64_t val;
+
+	rdmsrl(msr_addr, val);
+
+	if (!!(val & bit) == enable)
+		return;
+
+	val ^= bit;
+	wrmsrl(msr_addr, val);
+}
+
 /*
  * Check for the presence of an AMD erratum. Arguments are defined in amd.h 
  * for each known erratum. Return 1 if erratum is found.
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index edc4db1335..9bbb385db4 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -4,6 +4,7 @@
 #include <xen/param.h>
 #include <xen/smp.h>
 
+#include <asm/amd.h>
 #include <asm/cpu-policy.h>
 #include <asm/current.h>
 #include <asm/debugreg.h>
@@ -131,17 +132,6 @@ bool __init probe_cpuid_faulting(void)
 	uint64_t val;
 	int rc;
 
-	/*
-	 * Don't bother looking for CPUID faulting if we aren't virtualised on
-	 * AMD or Hygon hardware - it won't be present.  Likewise for Fam0F
-	 * Intel hardware.
-	 */
-	if (((boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
-	     ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
-	      boot_cpu_data.x86 == 0xf)) &&
-	    !cpu_has_hypervisor)
-		return false;
-
 	if ((rc = rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val)) == 0)
 		raw_cpu_policy.platform_info.cpuid_faulting =
 			val & MSR_PLATFORM_INFO_CPUID_FAULTING;
@@ -155,8 +145,6 @@ bool __init probe_cpuid_faulting(void)
 		return false;
 	}
 
-	expected_levelling_cap |= LCAP_faulting;
-	levelling_caps |=  LCAP_faulting;
 	setup_force_cpu_cap(X86_FEATURE_CPUID_FAULTING);
 
 	return true;
@@ -179,8 +167,10 @@ static void set_cpuid_faulting(bool enable)
 void ctxt_switch_levelling(const struct vcpu *next)
 {
 	const struct domain *nextd = next ? next->domain : NULL;
+	bool enable_cpuid_faulting;
 
-	if (cpu_has_cpuid_faulting) {
+	if (cpu_has_cpuid_faulting ||
+	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
 		/*
 		 * No need to alter the faulting setting if we are switching
 		 * to idle; it won't affect any code running in idle context.
@@ -201,12 +191,18 @@ void ctxt_switch_levelling(const struct vcpu *next)
 		 * an interim escape hatch in the form of
 		 * `dom0=no-cpuid-faulting` to restore the older behaviour.
 		 */
-		set_cpuid_faulting(nextd && (opt_dom0_cpuid_faulting ||
-					     !is_control_domain(nextd) ||
-					     !is_pv_domain(nextd)) &&
-				   (is_pv_domain(nextd) ||
-				    next->arch.msrs->
-				    misc_features_enables.cpuid_faulting));
+		enable_cpuid_faulting = nextd && (opt_dom0_cpuid_faulting ||
+		                                  !is_control_domain(nextd) ||
+		                                  !is_pv_domain(nextd)) &&
+		                        (is_pv_domain(nextd) ||
+		                         next->arch.msrs->
+		                         misc_features_enables.cpuid_faulting);
+
+		if (cpu_has_cpuid_faulting)
+			set_cpuid_faulting(enable_cpuid_faulting);
+		else
+			amd_set_cpuid_user_dis(enable_cpuid_faulting);
+
 		return;
 	}
 
@@ -415,6 +411,17 @@ static void generic_identify(struct cpuinfo_x86 *c)
 	c->apicid = phys_pkg_id((ebx >> 24) & 0xFF, 0);
 	c->phys_proc_id = c->apicid;
 
+	eax = cpuid_eax(0x80000000);
+	if ((eax >> 16) == 0x8000)
+		c->extended_cpuid_level = eax;
+
+	/*
+	 * These AMD-defined flags are out of place, but we need
+	 * them early for the CPUID faulting probe code.
+	 */
+	if (c->extended_cpuid_level >= 0x80000021)
+		c->x86_capability[FEATURESET_e21a] = cpuid_eax(0x80000021);
+
 	if (this_cpu->c_early_init)
 		this_cpu->c_early_init(c);
 
@@ -431,10 +438,6 @@ static void generic_identify(struct cpuinfo_x86 *c)
 	     (cpuid_ecx(CPUID_PM_LEAF) & CPUID6_ECX_APERFMPERF_CAPABILITY) )
 		__set_bit(X86_FEATURE_APERFMPERF, c->x86_capability);
 
-	eax = cpuid_eax(0x80000000);
-	if ((eax >> 16) == 0x8000)
-		c->extended_cpuid_level = eax;
-
 	/* AMD-defined flags: level 0x80000001 */
 	if (c->extended_cpuid_level >= 0x80000001)
 		cpuid(0x80000001, &tmp, &tmp,
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 71fc1a1e18..7e5c657758 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -226,8 +226,17 @@ static void cf_check intel_ctxt_switch_masking(const struct vcpu *next)
  */
 static void __init noinline intel_init_levelling(void)
 {
-	if (probe_cpuid_faulting())
+	/* Intel Fam0f is old enough that probing for CPUID faulting support
+	 * triggers spurious #GP(0) faults when the relevant MSRs are read,
+	 * so skip the probe altogether. When Xen is itself virtualized,
+	 * these MSRs may be emulated, so allow the probe in that case.
+	 */
+	if ((cpu_has_hypervisor || boot_cpu_data.x86 != 0xf) &&
+	    probe_cpuid_faulting()) {
+		expected_levelling_cap |= LCAP_faulting;
+		levelling_caps |=  LCAP_faulting;
 		return;
+	}
 
 	probe_masking_msrs();
 
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index a975d3de26..09ee52dc1c 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -155,5 +155,6 @@ extern bool amd_legacy_ssbd;
 extern bool amd_virt_spec_ctrl;
 bool amd_setup_legacy_ssbd(void);
 void amd_set_legacy_ssbd(bool enable);
+void amd_set_cpuid_user_dis(bool enable);
 
 #endif /* __AMD_H__ */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 17:57:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 17:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530595.826303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzgE-0006cO-MU; Fri, 05 May 2023 17:57:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530595.826303; Fri, 05 May 2023 17:57:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzgE-0006bM-HR; Fri, 05 May 2023 17:57:18 +0000
Received: by outflank-mailman (input) for mailman id 530595;
 Fri, 05 May 2023 17:57:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l1gM=A2=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1puzgD-0006Yg-6R
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 17:57:17 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 452f0c61-eb6e-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 19:57:15 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-3f18335a870so14131365e9.0
 for <xen-devel@lists.xenproject.org>; Fri, 05 May 2023 10:57:15 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 b12-20020adfe30c000000b00306423904d6sm3053844wrj.45.2023.05.05.10.57.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 05 May 2023 10:57:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 452f0c61-eb6e-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683309435; x=1685901435;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2enxHJVvB53hhS6w5p4MreN4cLpgSnyXWT9wrZUvS/4=;
        b=S6WP4jZQqFuckveRJuLLwmmlAWJchVAJAb9jg7PVGcyB+N56rwpZ2AwOxLV671Sz+Q
         FE2T1YlC1BiiUjDV2uZ8Nf7jEsflLCwPGXr13DpeKCJpId6dtgybo09rgCJXJtEamKwj
         4KmLO8o+npkY8s8YD/XtuytYHkOev4UdELeDc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683309435; x=1685901435;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2enxHJVvB53hhS6w5p4MreN4cLpgSnyXWT9wrZUvS/4=;
        b=dyFGTTtTIzDFPoWnYnalHt6XIBX5P+U+N9mhMCc/bToBPbc16rc2O8qrM6eDeFjlgZ
         5uWLn14Jyg52kGqfeYw6AsXWP6by89P+Bk1yxb1g+/xJP6ENip//huhCmJWmSLKC2vb9
         JAkhGDSXX9VVmXsuV9ID5Eb0UyHXhVDbmWWbGk6Z3HVtBx2ybwjC1Qf1/O7pGmlhVRUk
         870kGyeQ5iN2uIkZ0HqfZR8JlCrUXH0z1vW3SRxZZlDdLvmCTIMqpnJb4XlJdiLie8Qx
         B1dMRpLaZzF7b6Uu99ryc1klcbyu/psciB3wFyk+aFhLZckwLV65WUEOFIaXkhxwjoaB
         NeFQ==
X-Gm-Message-State: AC+VfDzV5gCsQncdbCCJAFPutJXlTvGeEnEoKfYb6Q0v0l+Lu4gFfDFH
	A+1FXNpq1V5HLYbEyaazgGjLvmIxJo32bWzQjAE=
X-Google-Smtp-Source: ACHHUZ5sPykgAxfLbTpChhsume9wrclYkH7/nFknVv7g/8xzCXnmQiEIye5TXtK5me/BScAefzZtWg==
X-Received: by 2002:a05:600c:2310:b0:3f3:fe82:ee79 with SMTP id 16-20020a05600c231000b003f3fe82ee79mr1707787wmo.23.1683309434835;
        Fri, 05 May 2023 10:57:14 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 1/3] x86: Add AMD's CpuidUserDis bit definitions
Date: Fri,  5 May 2023 18:57:03 +0100
Message-Id: <20230505175705.18098-2-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AMD reports support for CpuidUserDis in CPUID and provides the toggle in HWCR.
This patch adds the positions of both bits to Xen and the tools.

No functional change.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
 tools/libs/light/libxl_cpuid.c              | 1 +
 tools/misc/xen-cpuid.c                      | 2 ++
 xen/arch/x86/include/asm/msr-index.h        | 1 +
 xen/include/public/arch-x86/cpufeatureset.h | 1 +
 4 files changed, 5 insertions(+)

diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index 5f0bf93810..4d2fab5414 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -317,6 +317,7 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
 
         {"lfence+",      0x80000021, NA, CPUID_REG_EAX,  2,  1},
         {"nscb",         0x80000021, NA, CPUID_REG_EAX,  6,  1},
+        {"cpuid-user-dis", 0x80000021, NA, CPUID_REG_EAX, 17,  1},
 
         {"maxhvleaf",    0x40000000, NA, CPUID_REG_EAX,  0,  8},
 
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index d7efc59d31..8ec143ebc8 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -199,6 +199,8 @@ static const char *const str_e21a[32] =
 {
     [ 2] = "lfence+",
     [ 6] = "nscb",
+
+    /* 16 */                [17] = "cpuid-user-dis",
 };
 
 static const char *const str_7b1[32] =
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index fa771ed0b5..082fb2e0d9 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -337,6 +337,7 @@
 
 #define MSR_K8_HWCR			0xc0010015
 #define K8_HWCR_TSC_FREQ_SEL		(1ULL << 24)
+#define K8_HWCR_CPUID_USER_DIS		(1ULL << 35)
 
 #define MSR_K7_FID_VID_CTL		0xc0010041
 #define MSR_K7_FID_VID_STATUS		0xc0010042
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 12e3dc80c6..623dcb1bce 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -287,6 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
 XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
+XEN_CPUFEATURE(CPUID_USER_DIS,     11*32+17) /*   CPUID disable for non-privileged software */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ebx, word 12 */
 XEN_CPUFEATURE(INTEL_PPIN,         12*32+ 0) /*   Protected Processor Inventory Number */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 17:57:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 17:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530594.826297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzgE-0006Z0-E1; Fri, 05 May 2023 17:57:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530594.826297; Fri, 05 May 2023 17:57:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzgE-0006Yt-AN; Fri, 05 May 2023 17:57:18 +0000
Received: by outflank-mailman (input) for mailman id 530594;
 Fri, 05 May 2023 17:57:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l1gM=A2=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1puzgC-0006Yg-Gj
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 17:57:16 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 44447875-eb6e-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 19:57:15 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-3f178da21afso14987995e9.1
 for <xen-devel@lists.xenproject.org>; Fri, 05 May 2023 10:57:14 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 b12-20020adfe30c000000b00306423904d6sm3053844wrj.45.2023.05.05.10.57.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 05 May 2023 10:57:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44447875-eb6e-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683309433; x=1685901433;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=q1h9+sOy91h+3M0AKDC27QolVTLW8C6xg301Hs3wQXQ=;
        b=NUt+BCXAFaIMCWuvrEKzAzsu+FJJ0FZB1k8sG8goSYxw5RnMZSuDvCbxdo8aMCjwB8
         FXzaSVlQQ5g81Ek0bfrH1Zf00cyXAi4RN++m14nbcMCFbdFQVc9A7W6/N9mQpNL8k4/Z
         EMw7vlgtIJrqy/I/QMDdx9a3MMIYdd7repEjY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683309433; x=1685901433;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=q1h9+sOy91h+3M0AKDC27QolVTLW8C6xg301Hs3wQXQ=;
        b=XFz9bxiDSE7pmes0mQN0QP4Hg/p5POeY5dmGGnL133UK2s2mnaPM8cOElMtEnccco2
         1ESgywIjgjk6NQg/b35zZvxS079OnNEnlMTRg0J+BvHXHH28+zwnj4MImTn3ySwB0TmH
         tgRn3UIA5AryLktgjYK781K4rmwEi1exwLgUxNaogvPgkPAsaiAO/A+soNTf3vL9c7pF
         JbeXaCJQAJRmuUHPZUK3JcPVAhJA2jT94nxBkQckpFHlhZTkRZr3s2ARbwNNBqUkXX8x
         b9bvYFyn/gbm7qlTq8wckjj1OvmTJuJJd2f09EnlP3EDVdi4Wtj0KdmcIhB1TP7UF7K4
         wZCg==
X-Gm-Message-State: AC+VfDzovUQnLXlFiYoymaJfiV5wnyvi2UAMrTHiwwaYjbhNa0Jeaf0r
	wc57xCDFWm1ZLQo2vM0Ll2gpD9wz/I9jJHGP13g=
X-Google-Smtp-Source: ACHHUZ7G8+HYT3oqu2wvuuAHP58jwcanfBtV62hU5W0nwG7NX2mhAIPVxD5VmNHQXpKHIg0M18Bqug==
X-Received: by 2002:a7b:c44c:0:b0:3ed:f5b5:37fc with SMTP id l12-20020a7bc44c000000b003edf5b537fcmr1589134wmi.1.1683309432993;
        Fri, 05 May 2023 10:57:12 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 0/3] Add CpuidUserDis support
Date: Fri,  5 May 2023 18:57:02 +0100
Message-Id: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AMD CPUs nowadays support trapping the CPUID instruction when executed in ring 3
(CpuidUserDis), akin to Intel's "CPUID faulting". The difference is that the
toggle bit lives in a different MSR, and that the support bit is reported in
CPUID itself rather than in yet another MSR. This series enables AMD hosts to
use the feature, where supported, in order to provide correct CPUID contents to
PV guests. It also allows HVM guests to use CpuidUserDis via emulated "CPUID
faulting".

Patch 1 merely adds bit definitions to the relevant CPUID and MSR headers.

Patch 2 adds support for CpuidUserDis, hooking it into the feature probing and
context switching paths.

Patch 3 enables HVM guests to use CpuidUserDis as if it were CPUID faulting,
avoiding a roundtrip through the hypervisor during fault handling.

Alejandro Vallejo (3):
  x86: Add AMD's CpuidUserDis bit definitions
  x86: Add support for CpuidUserDis
  x86: Use CpuidUserDis if an AMD HVM guest toggles CPUID faulting

 tools/libs/light/libxl_cpuid.c              |  1 +
 tools/misc/xen-cpuid.c                      |  2 +
 xen/arch/x86/cpu/amd.c                      | 29 +++++++++++-
 xen/arch/x86/cpu/common.c                   | 51 +++++++++++----------
 xen/arch/x86/cpu/intel.c                    | 11 ++++-
 xen/arch/x86/include/asm/amd.h              |  1 +
 xen/arch/x86/include/asm/msr-index.h        |  1 +
 xen/arch/x86/msr.c                          |  9 +++-
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 9 files changed, 79 insertions(+), 27 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 05 18:16:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 18:16:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530607.826336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzy5-0002Jw-PA; Fri, 05 May 2023 18:15:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530607.826336; Fri, 05 May 2023 18:15:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzy5-0002Jp-MI; Fri, 05 May 2023 18:15:45 +0000
Received: by outflank-mailman (input) for mailman id 530607;
 Fri, 05 May 2023 18:15:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XVLi=A2=citrix.com=prvs=4822586d7=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puzy4-0002Jh-Ag
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 18:15:44 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d6d0f8cc-eb70-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 20:15:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6d0f8cc-eb70-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683310540;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=YVh+aeiq4kunPj4X2l6hp6BB23HOhzrXAxxBHMLanWU=;
  b=IqamYFkJhiLKp4/BBN7UYgMWgRT+DNu4/7ihKNNrzHVcxs7IaPFQZZta
   asyPmSu3EPR5iPLFPIhOb/r+Te487QeapfGwFlCmTPdPqPhkbxJ84jWJA
   dsKESNIKsMBJd+J7wL27JeyGG3iQuPJ2pjaCMWgDBE82vexua6fy/Pkvs
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108058101
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:aGceR62GQmxPirbjD/bD5YNxkn2cJEfYwER7XKvMYLTBsI5bpz0Ez
 zcWDzzQaPuDZDbzLd90Otvjph4FvJSAnd9rQFA/pC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8tuTS+XuDgNyo4GlD5gFmOKgR1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfI1929
 PIJFgg3YD+Jmr2Q4K6pdPtJr5F2RCXrFNt3VnBIyDjYCbAtQIzZQrWM7thdtNsyrpkQR7CEP
 ZNfMGcxKk2aOHWjOX9OYH46tM6uimPybHtzr1WNqLBsy2PS0BZwwP7mN9+9ltmiHJ0FxhvI/
 zqfl4j/KhpGLsKi9wjfy232xbD0wTH8G4YcLLLto5aGh3XMnzdOWXX6T2CTsfS/z0KzRd9bA
 0gV4TY167g/8lSxSdvwVAH+p2SL1iPwQPIJTbd8slvUjPOJvUDAXDNsoiN9hMIOlvEORDI76
 GezkfzjGGNJkKGHd2C43+LBxd+tAhT5PVPudAddE1teuYSy+dhs5v7cZo09SfDo17UZDRm1m
 mnX93Zm2t3/mOZRj82GEUb7byVAT3QjZio8/U3pU22s9WuVj6b1NtXzuTA3ARutRbt1r2VtX
 1BewaByFMhUUfmweNWlGY3h5o2B6fefKyH7ilVyBZQn/DnF0yf9LdsKum8ifRYxaZlsldrVX
 aMukVkJuM870IWCNMebnL5d++x1lPO9RLwJp9jfb8ZUY4gZSTJrCBpGPBbKt0i0yRhErE3KE
 cvDGSpaJSpAWPsPIfvfb7t17ILHMQhknDKJGc6jk0j2uVdcDVbMIYo43JK1RrhRxMu5TM/9q
 r6z6+PiJ81jbdDD
IronPort-HdrOrdr: A9a23:vkstY6FprdPlgHFlpLqEGMeALOsnbusQ8zAXPiBKJCC9E/bo8/
 xG+c5w6faaslkssR0b9+xoW5PwJE80l6QFgrX5VI3KNGXbUQ2TTb2KhbGI/9SKIVydygcy78
 ddmtNFebrN5VgRt7eH3OG7eexQv+VuJsqT9JnjJ3QGd3AaV0l5hT0JbDpyiidNNXN77ZxSLu
 vk2uN34wCOVF4wdcqBCnwMT4H41qD2fMKPW29/O/Y/gjP+9g+V1A==
X-Talos-CUID: 9a23:vLLr225C7EuUA7wjNNss1nAsG840TEXnwljaPhCeKXp7dq+wYArF
X-Talos-MUID: 9a23:pBUAvAROuaMk+To9RXTn2Ct8CflauZ2xUgNSqskUvIqAOxR/bmI=
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="108058101"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v2 0/2] LICENSES improvements/corrections
Date: Fri, 5 May 2023 19:15:26 +0100
Message-ID: <20230505181528.3587485-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Noticed in light of the recent LGPL changes to libacpi, but sadly only after
the fact.  Minor updates in the second patch.

Andrew Cooper (2):
  LICENSES: Improve the legibility of these files
  LICENSES: Remove the use of deprecated SPDX tags

 LICENSES/BSD-2-Clause               |  4 ++++
 LICENSES/BSD-3-Clause               |  4 ++++
 LICENSES/BSD-3-Clause-Clear         |  4 ++++
 LICENSES/CC-BY-4.0                  |  5 +++++
 LICENSES/GPL-2.0                    | 21 ++++++++++++++-------
 LICENSES/LGPL-2.0                   | 17 ++++++++++++-----
 LICENSES/LGPL-2.1                   | 17 ++++++++++++-----
 LICENSES/MIT                        |  4 ++++
 tools/libacpi/Makefile              |  2 +-
 tools/libacpi/acpi2_0.h             |  2 +-
 tools/libacpi/build.c               |  2 +-
 tools/libacpi/dsdt.asl              |  2 +-
 tools/libacpi/dsdt_acpi_info.asl    |  2 +-
 tools/libacpi/libacpi.h             |  2 +-
 tools/libacpi/mk_dsdt.c             |  2 +-
 tools/libacpi/ssdt_laptop_slate.asl |  2 +-
 tools/libacpi/ssdt_pm.asl           |  2 +-
 tools/libacpi/ssdt_s3.asl           |  2 +-
 tools/libacpi/ssdt_s4.asl           |  2 +-
 tools/libacpi/ssdt_tpm.asl          |  2 +-
 tools/libacpi/static_tables.c       |  2 +-
 21 files changed, 72 insertions(+), 30 deletions(-)


base-commit: e93e635e142d45e3904efb4a05e2b3b52a708b4f
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 05 18:16:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 18:16:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530608.826347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzyJ-0002cR-5h; Fri, 05 May 2023 18:15:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530608.826347; Fri, 05 May 2023 18:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzyJ-0002cK-2p; Fri, 05 May 2023 18:15:59 +0000
Received: by outflank-mailman (input) for mailman id 530608;
 Fri, 05 May 2023 18:15:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XVLi=A2=citrix.com=prvs=4822586d7=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puzyH-0002Jh-C3
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 18:15:57 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df830fc7-eb70-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 20:15:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df830fc7-eb70-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683310555;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=E0aZkvgxxKojXoFk3hMdpQ/EPEKioiMdFoNuS2VCJ6c=;
  b=YHf2gqZXFdDXCTIqNXC/4ekQw7m2RQOlKA+xP9j1S14KCz1o/n7n4Mia
   RF0ep75VvHlczBPadilQAVvLUanwnWaZPTOIKkV6F8S2/9DHxGLHyH5+o
   ZRBfuvXER0GZwusYp/UJ8HmJk99owbMWo/TuNmQS0/eEse2FPruhayR3+
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110484431
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:lwgeXqKav5MbL6mWFE+RyJUlxSXFcZb7ZxGr2PjKsXjdYENS0GYHy
 DEWWGnUa/2MN2Okctl/OYi1pBxQv8KAmoMwTwtlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4wRjPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c56JT9Q0
 qEyNAwwTRaBjNy4kZWETuxV05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWteGknHTgNRZfr0qYv/Ef6GnP1g1hlrPqNbI5f/TTHJ0JzhrE/
 TuuE2LRXCEVL+y6khO+ymu3gsXSjCjYV8FJLejtnhJtqALKnTFCYPEMbnOguuWwgEO6X9NZK
 mQX9zAooKx081akJvHiWzWorXjCuQQTM/JSDuk75Qel2qfSpQGDCQAsTDRMddgnv88eXiEx2
 xmCmNaBLSxitviZRGyQ8p+QrCiuIm4FIGkafygGQAAZpd75r+kOYgnnF4g5VvTv15usRG+2m
 mrRxMQju1kNpcMvibucoHrbvw+PoJrCcABkwTrTW1vwu2uVe7WZi5yUBUnztKgQd9zEHwDY4
 xDoiODFsrlQUMjleDilBbxUQer3v6vt3Cj02wYHInU3y9i6F5dPl6h06So2GkpmO91sldTBM
 B6K4lM5CHO+0RKXgU5Lj2GZUZ5CIVDIT4iNaxwtRoMmjmJNXAGG5jpyQkWbwnrglkMh+YlmZ
 8fAK5zyUS9LUf84pNZTe9rxLJdxnnxurY8tbcmTI+ubPUq2OyfOFOZt3KqmZeEl9qKUyDjoH
 yJkH5LSkX13CbSuChQ7BKZPdTjm21BnX8GpwyGWH8bfSjdb9JYJUaaAm+9/I9A5zsy4VI7gp
 xmAZ6OR83Kn7VWvFOlAQioLhG/HNXqnkU8GAA==
IronPort-HdrOrdr: A9a23:7VfDn6gLuWahrjDjHmeJ6yK0aXBQXioji2hC6mlwRA09TyX5ra
 2TdZUgpHvJYVMqMk3I9uruBEDtex3hHP1OkOws1NWZLWrbUQKTRekP0WKF+Vzd8kXFndK1vp
 0QEZSWZueRMbEAt7ec3OG5eexQvOVu8sqT9JjjJ6EGd3AVV0lihT0JezpyCidNNW977QJSLu
 vn2iJAzQDQAEg/X4CAKVQuefPMnNHPnIKOW296O/Z2gDP+9Q9B8dTBYmOl4is=
X-Talos-CUID: =?us-ascii?q?9a23=3AS+gvzWrnMFi8+sXQzgQT8BXmUc0BLGP6kV3MH3K?=
 =?us-ascii?q?bD2RidrPEbmCU/Zoxxg=3D=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3AQuCpGAxDjWEA7q8buBmhWg+dl6yaqPqeB1s9tM4?=
 =?us-ascii?q?vgPGdLihaJiaWrzv0GbZyfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="110484431"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, George Dunlap <George.Dunlap@eu.citrix.com>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
Subject: [PATCH v2 1/2] LICENSES: Improve the legibility of these files
Date: Fri, 5 May 2023 19:15:27 +0100
Message-ID: <20230505181528.3587485-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230505181528.3587485-1-andrew.cooper3@citrix.com>
References: <20230505181528.3587485-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

A few newlines go a very long way.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 LICENSES/BSD-2-Clause       | 4 ++++
 LICENSES/BSD-3-Clause       | 4 ++++
 LICENSES/BSD-3-Clause-Clear | 4 ++++
 LICENSES/CC-BY-4.0          | 5 +++++
 LICENSES/GPL-2.0            | 6 ++++++
 LICENSES/LGPL-2.0           | 6 ++++++
 LICENSES/LGPL-2.1           | 6 ++++++
 LICENSES/MIT                | 4 ++++
 8 files changed, 39 insertions(+)

diff --git a/LICENSES/BSD-2-Clause b/LICENSES/BSD-2-Clause
index da366e2ce50b..694d8c93221c 100644
--- a/LICENSES/BSD-2-Clause
+++ b/LICENSES/BSD-2-Clause
@@ -1,10 +1,14 @@
 Valid-License-Identifier: BSD-2-Clause
+
 SPDX-URL: https://spdx.org/licenses/BSD-2-Clause.html
+
 Usage-Guide:
+
   To use the BSD 2-clause "Simplified" License put the following SPDX
   tag/value pair into a comment according to the placement guidelines in
   the licensing rules documentation:
     SPDX-License-Identifier: BSD-2-Clause
+
 License-Text:
 
 Copyright (c) <year> <owner> . All rights reserved.
diff --git a/LICENSES/BSD-3-Clause b/LICENSES/BSD-3-Clause
index 34c7f057c8d5..1441947f92e0 100644
--- a/LICENSES/BSD-3-Clause
+++ b/LICENSES/BSD-3-Clause
@@ -1,10 +1,14 @@
 Valid-License-Identifier: BSD-3-Clause
+
 SPDX-URL: https://spdx.org/licenses/BSD-3-Clause.html
+
 Usage-Guide:
+
   To use the BSD 3-clause "New" or "Revised" License put the following SPDX
   tag/value pair into a comment according to the placement guidelines in
   the licensing rules documentation:
     SPDX-License-Identifier: BSD-3-Clause
+
 License-Text:
 
 Copyright (c) <year> <owner> . All rights reserved.
diff --git a/LICENSES/BSD-3-Clause-Clear b/LICENSES/BSD-3-Clause-Clear
index e53b56092b90..2b27f24a65a0 100644
--- a/LICENSES/BSD-3-Clause-Clear
+++ b/LICENSES/BSD-3-Clause-Clear
@@ -1,10 +1,14 @@
 Valid-License-Identifier: BSD-3-Clause-Clear
+
 SPDX-URL: https://spdx.org/licenses/BSD-3-Clause-Clear.html
+
 Usage-Guide:
+
   To use the BSD 3-clause "Clear" License put the following SPDX
   tag/value pair into a comment according to the placement guidelines in
   the licensing rules documentation:
     SPDX-License-Identifier: BSD-3-Clause-Clear
+
 License-Text:
 
 The Clear BSD License
diff --git a/LICENSES/CC-BY-4.0 b/LICENSES/CC-BY-4.0
index 27dfefa95ccf..4197ceb180ff 100644
--- a/LICENSES/CC-BY-4.0
+++ b/LICENSES/CC-BY-4.0
@@ -1,15 +1,20 @@
 Valid-License-Identifier: CC-BY-4.0
+
 SPDX-URL: https://spdx.org/licenses/CC-BY-4.0
+
 Usage-Guide:
+
   Do NOT use this license for code, but it's acceptable for content like artwork
   or documentation. When using it for the latter, it's best to use it together
   with a GPL2 compatible license using "OR", as CC-BY-4.0 texts processed by
   the kernel's build system might combine it with content taken from more
   restrictive licenses.
+
   To use the Creative Commons Attribution 4.0 International license put
   the following SPDX tag/value pair into a comment according to the
   placement guidelines in the licensing rules documentation:
     SPDX-License-Identifier: CC-BY-4.0
+
 License-Text:
 
 Creative Commons Attribution 4.0 International
diff --git a/LICENSES/GPL-2.0 b/LICENSES/GPL-2.0
index 9f09528a8bce..fa5c66236fe9 100644
--- a/LICENSES/GPL-2.0
+++ b/LICENSES/GPL-2.0
@@ -2,19 +2,25 @@ Valid-License-Identifier: GPL-2.0
 Valid-License-Identifier: GPL-2.0-only
 Valid-License-Identifier: GPL-2.0+
 Valid-License-Identifier: GPL-2.0-or-later
+
 SPDX-URL: https://spdx.org/licenses/GPL-2.0.html
+
 Usage-Guide:
+
   To use this license in source code, put one of the following SPDX
   tag/value pairs into a comment according to the placement
   guidelines in the licensing rules documentation.
+
   For 'GNU General Public License (GPL) version 2 only' use:
     SPDX-License-Identifier: GPL-2.0-only
   or (now deprecated)
     SPDX-License-Identifier: GPL-2.0
+
   For 'GNU General Public License (GPL) version 2 or any later version' use:
     SPDX-License-Identifier: GPL-2.0+
   or
     SPDX-License-Identifier: GPL-2.0-or-later
+
 License-Text:
 
 		    GNU GENERAL PUBLIC LICENSE
diff --git a/LICENSES/LGPL-2.0 b/LICENSES/LGPL-2.0
index 957d798fe037..2fa16d72eabf 100644
--- a/LICENSES/LGPL-2.0
+++ b/LICENSES/LGPL-2.0
@@ -1,15 +1,21 @@
 Valid-License-Identifier: LGPL-2.0
 Valid-License-Identifier: LGPL-2.0+
+
 SPDX-URL: https://spdx.org/licenses/LGPL-2.0.html
+
 Usage-Guide:
+
   To use this license in source code, put one of the following SPDX
   tag/value pairs into a comment according to the placement
   guidelines in the licensing rules documentation.
+
   For 'GNU Library General Public License (LGPL) version 2.0 only' use:
     SPDX-License-Identifier: LGPL-2.0
+
   For 'GNU Library General Public License (LGPL) version 2.0 or any later
   version' use:
     SPDX-License-Identifier: LGPL-2.0+
+
 License-Text:
 
 GNU LIBRARY GENERAL PUBLIC LICENSE
diff --git a/LICENSES/LGPL-2.1 b/LICENSES/LGPL-2.1
index 27bb4342a3e8..b366c7e49199 100644
--- a/LICENSES/LGPL-2.1
+++ b/LICENSES/LGPL-2.1
@@ -1,15 +1,21 @@
 Valid-License-Identifier: LGPL-2.1
 Valid-License-Identifier: LGPL-2.1+
+
 SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html
+
 Usage-Guide:
+
   To use this license in source code, put one of the following SPDX
   tag/value pairs into a comment according to the placement
   guidelines in the licensing rules documentation.
+
   For 'GNU Lesser General Public License (LGPL) version 2.1 only' use:
     SPDX-License-Identifier: LGPL-2.1
+
   For 'GNU Lesser General Public License (LGPL) version 2.1 or any later
   version' use:
     SPDX-License-Identifier: LGPL-2.1+
+
 License-Text:
 
 GNU LESSER GENERAL PUBLIC LICENSE
diff --git a/LICENSES/MIT b/LICENSES/MIT
index f33a68ceb3ea..eba1549f93e4 100644
--- a/LICENSES/MIT
+++ b/LICENSES/MIT
@@ -1,10 +1,14 @@
 Valid-License-Identifier: MIT
+
 SPDX-URL: https://spdx.org/licenses/MIT.html
+
 Usage-Guide:
+
   To use the MIT License put the following SPDX tag/value pair into a
   comment according to the placement guidelines in the licensing rules
   documentation:
     SPDX-License-Identifier: MIT
+
 License-Text:
 
 MIT License
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 05 18:16:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 18:16:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530609.826356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzyK-0002s8-Dj; Fri, 05 May 2023 18:16:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530609.826356; Fri, 05 May 2023 18:16:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1puzyK-0002rv-AL; Fri, 05 May 2023 18:16:00 +0000
Received: by outflank-mailman (input) for mailman id 530609;
 Fri, 05 May 2023 18:15:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XVLi=A2=citrix.com=prvs=4822586d7=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1puzyJ-0002Jh-No
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 18:15:59 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e1b163f8-eb70-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 20:15:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1b163f8-eb70-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683310557;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=1TQCZfeuZrxoYx6xAHLXfz0lWk/9oAGfIUvqEwt1EAY=;
  b=bck/AeeL2Q10dE+omKlXGA4dFSg3UvzWK2O5/jrvWh7nTWw7EjKCirch
   YP8GSNshhPRpSKQZh3hlEf74CyufL4zAYhvE72Dfu1PJj1srmdUjMLDw8
   WpDQpLM2Jl+gDwDB414/ugXwLW6l7jyVSfqm/un9Gix2h/wiO/S7WwWAu
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110484432
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:N56p7qlkX3X9MgmaceCoa6bo5gyYJkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIbX2DUOfuCZmP8f992aY3joRhU7MLVndU2TgJkrn1mQyMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE4p7aWaVA8w5ARkPqgW5AKGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 aUKGAwiS06mvOXsz6qaZcBOhppyLda+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglHWdTFCpU3Tjq0w+2XJlyR60aT3McqTcduPLSlQth/B/
 j6WoziiWHn2MvS7mB2GwjGL2NbTnCPFG6BOHaXmzvd11Qj7Kms7V0RNCArTTeOCol6zXZdTJ
 lIZ/gIqrLMu7wq7Q9/lRRq6rXWY+BkGVLJ4Eec39QWMwar8+BuCCy4PSTspQMc9qMY8SDgu1
 1mIt9DkHzpitPuSU3313qiQhSO/P24SN2BqTS0ZSQoI5fHzrYd1iQjAJuuPC4bs0IezQ2uph
 WnX8m5n3e57YdM3O7uTpEL3vx2J+L3ySVQ6/iT4d0ClsFJ4e9vwD2C30mQ3/cqsPa7AEAna5
 iVVwpHBhAwdJcrTzXLQGY3hCJnsvq/Ya2OE3DaDCrF7r1yQF2ifkZe8Cd2UDGNgKY46dDDge
 yc/UisBtcYIbBNGgUKaCr9d6vjGLoC6T7wJrtiOMrJzjmFZLWdrBh1Ga0+KxHzKm0Mxi6w5M
 przWZ/yXS1AU/o7lWvuHLh1PVoX+8zD7TmLGcCTI+qPiNJym0J5uZ9aaQDTP4jVHYuPoRnP8
 sY3CvZmPy53CbWkCgGOqN57ELz/BSRjbXwAg5ANJ7Hrz8sPMD1JNsI9Npt7It0/w/oNyruZl
 px/M2cBoGfCabT8AV3iQhhehHnHBv6TcVpT0fQQAGuV
IronPort-HdrOrdr: A9a23:mwvKjawAML+aSZ6e0BgRKrPw3L1zdoMgy1knxilNoHxuH/Bw9v
 re+MjzsCWftN9/Yh4dcLy7VpVoIkmskKKdg7NhXotKNTOO0AeVxedZjLcKqweKJ8SUzJ8+6U
 4PSchD4abLfD9HZcaR2njFLz4jquP3j5xBU43lvglQpQIBUdAQ0+9gYDzrdHGf3GN9dOAE/J
 z33Ls/mxOQPU45Q+6cHXc/U+3Kt7Tw5e/biU5vPW9e1OGW5wnYk4LHLw==
X-Talos-CUID: 9a23:Th/WcG7hWet5+YNtWtssrVwkJu8ALFjmyy3fO26XIERpFuyUVgrF
X-Talos-MUID: 9a23:0KCkDwtfXb6SUV25zs2noGBFN/crxPqXFEENr9Yk58CcaxR5JGLI
X-IronPort-AV: E=Sophos;i="5.99,252,1677560400"; 
   d="scan'208";a="110484432"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, George Dunlap <George.Dunlap@eu.citrix.com>, Jan Beulich
	<JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
Subject: [PATCH v2 2/2] LICENSES: Remove the use of deprecated SPDX tags
Date: Fri, 5 May 2023 19:15:28 +0100
Message-ID: <20230505181528.3587485-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230505181528.3587485-1-andrew.cooper3@citrix.com>
References: <20230505181528.3587485-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The GPL and LGPL SPDX forms without an explicit -only or -or-later suffix are
deprecated and should not be used.  Update the documentation accordingly.

Somewhat unhelpfully, at the time of writing this only appears to be indicated
by the separation of the two tables at https://spdx.org/licenses/.

The recent changes to libacpi are the only examples of deprecated LGPL tags in
tree, so fix them all up.

For GPL, we have many examples using deprecated tags.  For now, just identify
them as such and recommend that no new instances get added.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>

Unsure whether this should get some Fixes: tags or not.  Note also that
Jenny's "[PATCH v4 2/2] acpi: Add TPM2 interface definition." wants its SPDX
tag correcting as per this patch.

v2:
 * Extend commit message to include https://spdx.org/licenses/
 * Update the URLs too
---
 LICENSES/GPL-2.0                    | 15 ++++++++-------
 LICENSES/LGPL-2.0                   | 11 ++++++-----
 LICENSES/LGPL-2.1                   | 11 ++++++-----
 tools/libacpi/Makefile              |  2 +-
 tools/libacpi/acpi2_0.h             |  2 +-
 tools/libacpi/build.c               |  2 +-
 tools/libacpi/dsdt.asl              |  2 +-
 tools/libacpi/dsdt_acpi_info.asl    |  2 +-
 tools/libacpi/libacpi.h             |  2 +-
 tools/libacpi/mk_dsdt.c             |  2 +-
 tools/libacpi/ssdt_laptop_slate.asl |  2 +-
 tools/libacpi/ssdt_pm.asl           |  2 +-
 tools/libacpi/ssdt_s3.asl           |  2 +-
 tools/libacpi/ssdt_s4.asl           |  2 +-
 tools/libacpi/ssdt_tpm.asl          |  2 +-
 tools/libacpi/static_tables.c       |  2 +-
 16 files changed, 33 insertions(+), 30 deletions(-)

diff --git a/LICENSES/GPL-2.0 b/LICENSES/GPL-2.0
index fa5c66236fe9..07f332641ccd 100644
--- a/LICENSES/GPL-2.0
+++ b/LICENSES/GPL-2.0
@@ -1,9 +1,11 @@
-Valid-License-Identifier: GPL-2.0
 Valid-License-Identifier: GPL-2.0-only
-Valid-License-Identifier: GPL-2.0+
 Valid-License-Identifier: GPL-2.0-or-later
 
-SPDX-URL: https://spdx.org/licenses/GPL-2.0.html
+SPDX-URL: https://spdx.org/licenses/GPL-2.0-only.html
+SPDX-URL: https://spdx.org/licenses/GPL-2.0-or-later.html
+
+Deprecated-Identifier: GPL-2.0
+Deprecated-Identifier: GPL-2.0+
 
 Usage-Guide:
 
@@ -13,14 +15,13 @@ Usage-Guide:
 
   For 'GNU General Public License (GPL) version 2 only' use:
     SPDX-License-Identifier: GPL-2.0-only
-  or (now deprecated)
-    SPDX-License-Identifier: GPL-2.0
 
   For 'GNU General Public License (GPL) version 2 or any later version' use:
-    SPDX-License-Identifier: GPL-2.0+
-  or
     SPDX-License-Identifier: GPL-2.0-or-later
 
+  The deprecated tags should not be used for any new additions.  Where
+  possible, their existing uses should be phased out.
+
 License-Text:
 
 		    GNU GENERAL PUBLIC LICENSE
diff --git a/LICENSES/LGPL-2.0 b/LICENSES/LGPL-2.0
index 2fa16d72eabf..100c72c6db8c 100644
--- a/LICENSES/LGPL-2.0
+++ b/LICENSES/LGPL-2.0
@@ -1,7 +1,8 @@
-Valid-License-Identifier: LGPL-2.0
-Valid-License-Identifier: LGPL-2.0+
+Valid-License-Identifier: LGPL-2.0-only
+Valid-License-Identifier: LGPL-2.0-or-later
 
-SPDX-URL: https://spdx.org/licenses/LGPL-2.0.html
+SPDX-URL: https://spdx.org/licenses/LGPL-2.0-only.html
+SPDX-URL: https://spdx.org/licenses/LGPL-2.0-or-later.html
 
 Usage-Guide:
 
@@ -10,11 +11,11 @@ Usage-Guide:
   guidelines in the licensing rules documentation.
 
   For 'GNU Library General Public License (LGPL) version 2.0 only' use:
-    SPDX-License-Identifier: LGPL-2.0
+    SPDX-License-Identifier: LGPL-2.0-only
 
   For 'GNU Library General Public License (LGPL) version 2.0 or any later
   version' use:
-    SPDX-License-Identifier: LGPL-2.0+
+    SPDX-License-Identifier: LGPL-2.0-or-later
 
 License-Text:
 
diff --git a/LICENSES/LGPL-2.1 b/LICENSES/LGPL-2.1
index b366c7e49199..d3e213c39c26 100644
--- a/LICENSES/LGPL-2.1
+++ b/LICENSES/LGPL-2.1
@@ -1,7 +1,8 @@
-Valid-License-Identifier: LGPL-2.1
-Valid-License-Identifier: LGPL-2.1+
+Valid-License-Identifier: LGPL-2.1-only
+Valid-License-Identifier: LGPL-2.1-or-later
 
-SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html
+SPDX-URL: https://spdx.org/licenses/LGPL-2.1-only.html
+SPDX-URL: https://spdx.org/licenses/LGPL-2.1-or-later.html
 
 Usage-Guide:
 
@@ -10,11 +11,11 @@ Usage-Guide:
   guidelines in the licensing rules documentation.
 
   For 'GNU Lesser General Public License (LGPL) version 2.1 only' use:
-    SPDX-License-Identifier: LGPL-2.1
+    SPDX-License-Identifier: LGPL-2.1-only
 
   For 'GNU Lesser General Public License (LGPL) version 2.1 or any later
   version' use:
-    SPDX-License-Identifier: LGPL-2.1+
+    SPDX-License-Identifier: LGPL-2.1-or-later
 
 License-Text:
 
diff --git a/tools/libacpi/Makefile b/tools/libacpi/Makefile
index aa9c520cbe85..bcfcd852f92f 100644
--- a/tools/libacpi/Makefile
+++ b/tools/libacpi/Makefile
@@ -1,4 +1,4 @@
-# SPDX-License-Identifier: LGPL-2.1
+# SPDX-License-Identifier: LGPL-2.1-only
 #
 # Copyright (c) 2004, Intel Corporation.
 
diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
index 212f5ab64182..e00b29854be0 100644
--- a/tools/libacpi/acpi2_0.h
+++ b/tools/libacpi/acpi2_0.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * Copyright (c) 2004, Intel Corporation.
  */
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index 830d37c61f03..3142e0ac84c0 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * Copyright (c) 2004, Intel Corporation.
  * Copyright (c) 2006, Keir Fraser, XenSource Inc.
diff --git a/tools/libacpi/dsdt.asl b/tools/libacpi/dsdt.asl
index c6691b56a986..32b42f85ae9f 100644
--- a/tools/libacpi/dsdt.asl
+++ b/tools/libacpi/dsdt.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /******************************************************************************
  * DSDT for Xen with Qemu device model
  *
diff --git a/tools/libacpi/dsdt_acpi_info.asl b/tools/libacpi/dsdt_acpi_info.asl
index c6e82f1fe6a7..6e114fa23404 100644
--- a/tools/libacpi/dsdt_acpi_info.asl
+++ b/tools/libacpi/dsdt_acpi_info.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 
     Scope (\_SB)
     {
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index acf012e35578..7ae28525f604 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /******************************************************************************
  * libacpi.h
  * 
diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
index c74b270c0c5d..34f6753f6193 100644
--- a/tools/libacpi/mk_dsdt.c
+++ b/tools/libacpi/mk_dsdt.c
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 
 #include <stdio.h>
 #include <stdarg.h>
diff --git a/tools/libacpi/ssdt_laptop_slate.asl b/tools/libacpi/ssdt_laptop_slate.asl
index 494f2d048d0a..69fd504c19fc 100644
--- a/tools/libacpi/ssdt_laptop_slate.asl
+++ b/tools/libacpi/ssdt_laptop_slate.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_conv.asl
  *
diff --git a/tools/libacpi/ssdt_pm.asl b/tools/libacpi/ssdt_pm.asl
index e577e85c072b..db578d10ac3e 100644
--- a/tools/libacpi/ssdt_pm.asl
+++ b/tools/libacpi/ssdt_pm.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_pm.asl
  *
diff --git a/tools/libacpi/ssdt_s3.asl b/tools/libacpi/ssdt_s3.asl
index 8f3177ec5adc..f6e9636f4759 100644
--- a/tools/libacpi/ssdt_s3.asl
+++ b/tools/libacpi/ssdt_s3.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_s3.asl
  *
diff --git a/tools/libacpi/ssdt_s4.asl b/tools/libacpi/ssdt_s4.asl
index 979318eca1f5..8014f5fc9014 100644
--- a/tools/libacpi/ssdt_s4.asl
+++ b/tools/libacpi/ssdt_s4.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_s4.asl
  *
diff --git a/tools/libacpi/ssdt_tpm.asl b/tools/libacpi/ssdt_tpm.asl
index 6c3267218f3b..944658d25177 100644
--- a/tools/libacpi/ssdt_tpm.asl
+++ b/tools/libacpi/ssdt_tpm.asl
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * ssdt_tpm.asl
  *
diff --git a/tools/libacpi/static_tables.c b/tools/libacpi/static_tables.c
index 631fb911413b..715f46fee05c 100644
--- a/tools/libacpi/static_tables.c
+++ b/tools/libacpi/static_tables.c
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: LGPL-2.1 */
+/* SPDX-License-Identifier: LGPL-2.1-only */
 /*
  * Copyright (c) 2004, Intel Corporation.
  * Copyright (c) 2006, Keir Fraser, XenSource Inc.
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 05 21:26:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 21:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530652.826400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv2wP-0007Sk-C0; Fri, 05 May 2023 21:26:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530652.826400; Fri, 05 May 2023 21:26:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv2wP-0007Rv-6l; Fri, 05 May 2023 21:26:13 +0000
Received: by outflank-mailman (input) for mailman id 530652;
 Fri, 05 May 2023 21:26:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HMjG=A2=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pv2wN-0007NI-P1
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 21:26:11 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6caaa018-eb8b-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 23:26:01 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.west.internal (Postfix) with ESMTP id 2E3833200925;
 Fri,  5 May 2023 17:25:57 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Fri, 05 May 2023 17:25:57 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 5 May 2023 17:25:55 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6caaa018-eb8b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1683321956; x=1683408356; bh=o3
	13F4hQU5uB9lSLjfIs/1QY2b4KHS2capUoYZqcCPE=; b=YbtXIMljxxLWmuYvxf
	y1JLmQC7zAOrNuWB4kWv9vRaC5WRRAjHJcAZXfo2InEXMZNa8HnDkXWfF6mh5VIK
	EX9J8INupy/u1s7+1JiVPIXsTJqRRMPATUV07MMXlWLsFT0xV/L44PNOcxvOB4rn
	+fIhVUWLMq8JdlCyzxX84w/rQbSRgE//T2M8kIDEK0rB0LioBF3EzawWbjBy8NO7
	L7vjcvY8hkT8JoDsQF3Vic3vTmJZfcoeUkZU6orpMY1jRT1EIMly0ZHAPOnbMhjL
	Qkbd7GzDZX8Vc2CpzAFhBw20RJzzdohKmJo40ZsHes43EOjhpcGzVWlLaHPor4QG
	MA5Q==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1683321956; x=1683408356; bh=o313F4hQU5uB9lSLjfIs/1QY2b4KHS2capU
	oYZqcCPE=; b=iIVEDxKz2wS0FPUx5c/Huwdjf/aMYHODZN3qBIYt6neXuDVowjq
	qJOrajoHVZws9Ep9klcHQPhZr/lex1RH5kbw5sVei8dcsI2wTwTeU3A6xvOMQaQ7
	6SRBQcceM14/GMe9iY+qAW3h2afXezUp5xHi511DC1KLCP+IbZ0qBVUfXItBwLPj
	sGZlFQcDG1YcRsXyVj4O7aLUVPeeomEJLsgmDzb/2Rn0Cy6GL4StXdqs5RHgtMCY
	+wctBroiIQViiIOZDN+xNW6ExmlQ23LGG82KnBdgJCWuFEWzhrY4TCF+vvG7m9h+
	9NudkzDHpj7otKZDNmoN2E50GZcHZT9b+YA==
X-ME-Sender: <xms:ZHRVZFm2aigSdEM08qsCJG1-OATRORFzzX4V8PgGGRH_pK9EnLPtcg>
    <xme:ZHRVZA22RaVY-6kd8OZ4NaW6K5gB-jsjtvRlSVA7joJhkSPX7pRvnReUMXSFBYjft
    D9rxxJk8qK8sA>
X-ME-Received: <xmr:ZHRVZLqJmhGYAnzKYgRb0N1LuO4b65z8AQ9QtSd3ZBvPzRRrd8vqJuGxFexVGCprtkvbfwQcAGSr8fV87_ZkJRiLgxVwtkcreQtIe40L4LMGZTmNC-o4>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeefvddgudeifecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepofgr
    rhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghkse
    hinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhep
    gfeuudehgfdvfeehhedujeehfeduveeugefhkefhheelgeevudetueeiudfggfffnecuve
    hluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghr
    vghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:ZHRVZFks5d5mzW6DTyeyNsE_HfmnKABIZbJQuOOwsi4ghaN-aQ0kHA>
    <xmx:ZHRVZD2cuxnDHDxU7AcqpVerjUPazXc5vrpTTAszYIx_TKPFN2QW5g>
    <xmx:ZHRVZEvwYvKtq4RDm3OIv27B2rGbHFYSIJUE3-hM239VP2hb2RXEMQ>
    <xmx:ZHRVZOTrLZcPMT_NvcGgvJyN5AcU7b0AdvaJ1siS89vBgBXZnac1GQ>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] x86/mm: add API for marking only part of a MMIO page read only
Date: Fri,  5 May 2023 23:25:35 +0200
Message-Id: <def382a6481a9d1bcc106200b971cd5b0f3d19c1.1683321183.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
References: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

In some cases, only a few registers on a page need to be write-protected.
Examples include the USB3 console (64 bytes worth of registers) or MSI-X's
PBA table (which doesn't need to span the whole table either), although
in the latter case the spec forbids placing other registers on the same
page. The current API allows only marking whole pages read-only, which
sometimes covers other registers that the guest may need to write to.

Currently, when a guest tries to write to an MMIO page in mmio_ro_ranges,
it's either immediately crashed on the EPT violation (if HVM), or it gets
#PF (if PV). In the case of Linux PV, if the access was from userspace
(e.g. via /dev/mem), it will try to fix it up by updating the page tables
(which Xen will again force to read-only) and will hit that #PF again,
looping endlessly. Both behaviors are undesirable if the guest could
actually be allowed the write.

Introduce an API that allows marking part of a page read-only. Since
sub-page permissions are not a thing in page tables (they are in EPT,
but not granular enough), do this via emulation (or simply the page fault
handler, for PV) that handles the writes that are supposed to be allowed.
The new subpage_mmio_ro_add() takes a start physical address and the
region size in bytes. Both the start address and the size need to be
8-byte aligned, as a practical simplification (it allows using a smaller
bitmask, and a smaller granularity isn't really necessary right now).
It internally adds the relevant pages to mmio_ro_ranges, but if either
the start or the end address is not page-aligned, it additionally adds that
page to a list for sub-page R/O handling. The list holds a bitmask of which
qwords are supposed to be read-only and an address where the page is mapped
for write emulation - this mapping is done only on the first access. A
plain list is used instead of a more efficient structure, because there
aren't supposed to be many pages needing this precise r/o control.

The mechanism this API is plugged into is slightly different for PV and
HVM. For both paths, it's plugged into mmio_ro_emulated_write(). For PV,
that is already called for #PF on a read-only MMIO page. For HVM however,
an EPT violation on a p2m_mmio_direct page results in a direct
domain_crash(). To reach mmio_ro_emulated_write(), change how write
violations for p2m_mmio_direct are handled - specifically, check whether
they relate to such a partially protected page via
subpage_mmio_write_accept() and if so, call hvm_emulate_one_mmio() for
them too. This decodes what the guest is trying to write and finally calls
mmio_ro_emulated_write(). Note that hitting an EPT write violation for a
p2m_mmio_direct page can only happen if the page was in mmio_ro_ranges
(see ept_p2m_type_to_flags()), so there is no need to check that again.
Both of those paths need the MFN to which the guest tried to write (to
check which part of the page is supposed to be read-only, and where
the page is mapped for writes). This information currently isn't
available directly in mmio_ro_emulated_write(), but in both cases it is
already resolved somewhere higher in the call tree. Pass it down to
mmio_ro_emulated_write() via the new mmio_ro_emulate_ctxt.mfn field.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
Shadow mode is not tested, but I don't expect it to work differently than
HAP in areas related to this patch.

Changes in v2:
- Simplify subpage_mmio_ro_add() parameters
- add to mmio_ro_ranges from within subpage_mmio_ro_add()
- use ioremap() instead of caller-provided fixmap
- use 8-bytes granularity (largest supported single write) and a bitmap
  instead of a rangeset
- clarify commit message
- change how it's plugged in for HVM domain, to not change the behavior for
  read-only parts (keep it hitting domain_crash(), instead of ignoring
  write)
- remove unused subpage_mmio_ro_remove()
---
 xen/arch/x86/hvm/emulate.c      |   2 +-
 xen/arch/x86/hvm/hvm.c          |   8 +-
 xen/arch/x86/include/asm/mm.h   |  15 ++-
 xen/arch/x86/mm.c               | 253 +++++++++++++++++++++++++++++++++-
 xen/arch/x86/pv/ro-page-fault.c |   1 +-
 5 files changed, 278 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 75ee98a73be8..0a64636fd9da 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2732,7 +2732,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
         .write      = mmio_ro_emulated_write,
         .validate   = hvmemul_validate,
     };
-    struct mmio_ro_emulate_ctxt mmio_ro_ctxt = { .cr2 = gla };
+    struct mmio_ro_emulate_ctxt mmio_ro_ctxt = { .cr2 = gla, .mfn = _mfn(mfn) };
     struct hvm_emulate_ctxt ctxt;
     const struct x86_emulate_ops *ops;
     unsigned int seg, bdf;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d7d31b53937a..0ed7ce31d5cf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1990,6 +1990,14 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
         goto out_put_gfn;
     }
 
+    if ( (p2mt == p2m_mmio_direct) && npfec.write_access && npfec.present &&
+         subpage_mmio_write_accept(mfn, gla) &&
+         (hvm_emulate_one_mmio(mfn_x(mfn), gla) == X86EMUL_OKAY) )
+    {
+        rc = 1;
+        goto out_put_gfn;
+    }
+
     /* If we fell through, the vcpu will retry now that access restrictions have
      * been removed. It may fault again if the p2m entry type still requires so.
      * Otherwise, this is an error condition. */
diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index db29e3e2059f..3867a92d2dfa 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -522,9 +522,24 @@ extern struct rangeset *mmio_ro_ranges;
 void memguard_guard_stack(void *p);
 void memguard_unguard_stack(void *p);
 
+/*
+ * Add more precise r/o marking for an MMIO page. The byte range specified
+ * here will still be R/O, but the rest of the page (unless marked as R/O via
+ * another call) will have writes passed through.
+ * The start address and the size must be aligned to SUBPAGE_MMIO_RO_ALIGN.
+ *
+ * Return values:
+ *  - negative: error
+ *  - 0: success
+ */
+#define SUBPAGE_MMIO_RO_ALIGN 8
+int subpage_mmio_ro_add(paddr_t start, size_t size);
+bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla);
+
 struct mmio_ro_emulate_ctxt {
         unsigned long cr2;
         unsigned int seg, bdf;
+        mfn_t mfn;
 };
 
 int cf_check mmio_ro_emulated_write(
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9741d28cbc96..59941a56c821 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -154,6 +154,19 @@ bool __read_mostly machine_to_phys_mapping_valid;
 
 struct rangeset *__read_mostly mmio_ro_ranges;
 
+/* Handling sub-page read-only MMIO regions */
+struct subpage_ro_range {
+    struct list_head list;
+    mfn_t mfn;
+    void __iomem *mapped;
+    DECLARE_BITMAP(ro_qwords, PAGE_SIZE / SUBPAGE_MMIO_RO_ALIGN);
+    struct rcu_head rcu;
+};
+
+static LIST_HEAD(subpage_ro_ranges);
+static DEFINE_RCU_READ_LOCK(subpage_ro_rcu);
+static DEFINE_SPINLOCK(subpage_ro_lock);
+
 static uint32_t base_disallow_mask;
 /* Global bit is allowed to be set on L1 PTEs. Intended for user mappings. */
 #define L1_DISALLOW_MASK ((base_disallow_mask | _PAGE_GNTTAB) & ~_PAGE_GLOBAL)
@@ -4882,6 +4895,243 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     return 0;
 }
 
+/* This needs subpage_ro_lock already taken */
+static int __init subpage_mmio_ro_add_page(
+    mfn_t mfn, unsigned int offset_s, unsigned int offset_e)
+{
+    struct subpage_ro_range *entry = NULL, *iter;
+    int i;
+
+    list_for_each_entry(iter, &subpage_ro_ranges, list)
+    {
+        if ( mfn_eq(iter->mfn, mfn) )
+        {
+            entry = iter;
+            break;
+        }
+    }
+    if ( !entry )
+    {
+        /* iter==NULL marks it was a newly allocated entry */
+        iter = NULL;
+        entry = xzalloc(struct subpage_ro_range);
+        if ( !entry )
+            return -ENOMEM;
+        entry->mfn = mfn;
+    }
+
+    for ( i = offset_s; i <= offset_e; i += SUBPAGE_MMIO_RO_ALIGN )
+        set_bit(i / SUBPAGE_MMIO_RO_ALIGN, entry->ro_qwords);
+
+    if ( !iter )
+        list_add_rcu(&entry->list, &subpage_ro_ranges);
+
+    return 0;
+}
+
+static void __init subpage_mmio_ro_free(struct rcu_head *rcu)
+{
+    struct subpage_ro_range *entry = container_of(
+        rcu, struct subpage_ro_range, rcu);
+
+    ASSERT(bitmap_empty(entry->ro_qwords, PAGE_SIZE / SUBPAGE_MMIO_RO_ALIGN));
+
+    if ( entry->mapped )
+        iounmap(entry->mapped);
+    xfree(entry);
+}
+
+/* This needs subpage_ro_lock already taken */
+static int __init subpage_mmio_ro_remove_page(
+    mfn_t mfn,
+    int offset_s,
+    int offset_e)
+{
+    struct subpage_ro_range *entry = NULL, *iter;
+    int rc, i;
+
+    list_for_each_entry_rcu(iter, &subpage_ro_ranges, list)
+    {
+        if ( mfn_eq(iter->mfn, mfn) )
+        {
+            entry = iter;
+            break;
+        }
+    }
+    if ( !entry )
+        return -ENOENT;
+
+    for ( i = offset_s; i <= offset_e; i += SUBPAGE_MMIO_RO_ALIGN )
+        clear_bit(i / SUBPAGE_MMIO_RO_ALIGN, entry->ro_qwords);
+
+    if ( !bitmap_empty(entry->ro_qwords, PAGE_SIZE / SUBPAGE_MMIO_RO_ALIGN) )
+        return 0;
+
+    list_del_rcu(&entry->list);
+    call_rcu(&entry->rcu, subpage_mmio_ro_free);
+    return 0;
+}
+
+
+int __init subpage_mmio_ro_add(
+    paddr_t start,
+    size_t size)
+{
+    mfn_t mfn_start = maddr_to_mfn(start);
+    paddr_t end = start + size - 1;
+    mfn_t mfn_end = maddr_to_mfn(end);
+    int offset_end = 0;
+    int rc;
+
+    ASSERT(IS_ALIGNED(start, SUBPAGE_MMIO_RO_ALIGN));
+    ASSERT(IS_ALIGNED(size, SUBPAGE_MMIO_RO_ALIGN));
+
+    if ( !size )
+        return 0;
+
+    rc = rangeset_add_range(mmio_ro_ranges, mfn_x(mfn_start), mfn_x(mfn_end));
+    if ( rc )
+        return rc;
+
+    spin_lock(&subpage_ro_lock);
+
+    if ( PAGE_OFFSET(start) ||
+         (mfn_eq(mfn_start, mfn_end) && PAGE_OFFSET(end) != PAGE_SIZE - 1) )
+    {
+        offset_end = mfn_eq(mfn_start, mfn_end) ?
+                     PAGE_OFFSET(end) :
+                     (PAGE_SIZE - 1);
+        rc = subpage_mmio_ro_add_page(mfn_start,
+                                      PAGE_OFFSET(start),
+                                      offset_end);
+        if ( rc )
+            goto err_unlock;
+    }
+
+    if ( !mfn_eq(mfn_start, mfn_end) && PAGE_OFFSET(end) != PAGE_SIZE - 1 )
+    {
+        rc = subpage_mmio_ro_add_page(mfn_end, 0, PAGE_OFFSET(end));
+        if ( rc )
+            goto err_unlock_remove;
+    }
+
+    spin_unlock(&subpage_ro_lock);
+
+    return 0;
+
+ err_unlock_remove:
+    if ( offset_end )
+        subpage_mmio_ro_remove_page(mfn_start, PAGE_OFFSET(start), offset_end);
+
+ err_unlock:
+    spin_unlock(&subpage_ro_lock);
+    if ( rangeset_remove_range(mmio_ro_ranges, mfn_x(mfn_start), mfn_x(mfn_end)) )
+        printk(XENLOG_ERR "Failed to cleanup on failed subpage_mmio_ro_add()\n");
+    return rc;
+}
+
+static void __iomem *subpage_mmio_get_page(struct subpage_ro_range *entry)
+{
+    if ( entry->mapped )
+        return entry->mapped;
+
+    spin_lock(&subpage_ro_lock);
+    /* Re-check under the lock */
+    if ( entry->mapped )
+        goto out_unlock;
+
+    entry->mapped = ioremap(mfn_x(entry->mfn) << PAGE_SHIFT, PAGE_SIZE);
+
+ out_unlock:
+    spin_unlock(&subpage_ro_lock);
+    return entry->mapped;
+}
+
+static void subpage_mmio_write_emulate(
+    mfn_t mfn,
+    unsigned int offset,
+    const void *data,
+    unsigned int len)
+{
+    struct subpage_ro_range *entry;
+    void __iomem *addr;
+
+    rcu_read_lock(&subpage_ro_rcu);
+
+    list_for_each_entry_rcu(entry, &subpage_ro_ranges, list)
+    {
+        if ( mfn_eq(entry->mfn, mfn) )
+        {
+            if ( test_bit(offset / SUBPAGE_MMIO_RO_ALIGN, entry->ro_qwords) )
+                goto write_ignored;
+
+            addr = subpage_mmio_get_page(entry);
+            if ( !addr )
+            {
+                gprintk(XENLOG_ERR,
+                        "Failed to map page for MMIO write at 0x%"PRI_mfn"%03x\n",
+                        mfn_x(mfn), offset);
+                goto out_unlock;
+            }
+
+            switch ( len )
+            {
+            case 1:
+                writeb(*(uint8_t*)data, addr);
+                break;
+            case 2:
+                writew(*(uint16_t*)data, addr);
+                break;
+            case 4:
+                writel(*(uint32_t*)data, addr);
+                break;
+            case 8:
+                writeq(*(uint64_t*)data, addr);
+                break;
+            default:
+                /* mmio_ro_emulated_write() already validated the size */
+                ASSERT_UNREACHABLE();
+                goto write_ignored;
+            }
+            goto out_unlock;
+        }
+    }
+    /* Do not print message for pages without any writable parts. */
+    goto out_unlock;
+
+ write_ignored:
+    gprintk(XENLOG_WARNING,
+             "ignoring write to R/O MMIO 0x%"PRI_mfn"%03x len %u\n",
+             mfn_x(mfn), offset, len);
+
+ out_unlock:
+    rcu_read_unlock(&subpage_ro_rcu);
+}
+
+bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla)
+{
+    unsigned int offset = PAGE_OFFSET(gla);
+    const struct subpage_ro_range *entry;
+
+    rcu_read_lock(&subpage_ro_rcu);
+
+    list_for_each_entry_rcu(entry, &subpage_ro_ranges, list)
+        if ( mfn_eq(entry->mfn, mfn) &&
+             !test_bit(offset / SUBPAGE_MMIO_RO_ALIGN, entry->ro_qwords) )
+        {
+            /*
+             * We don't know the write size at this point yet, so it could be
+             * an unaligned write, but accept it here anyway and deal with it
+             * later.
+             */
+            rcu_read_unlock(&subpage_ro_rcu);
+            return true;
+        }
+
+    rcu_read_unlock(&subpage_ro_rcu);
+    return false;
+}
+
 int cf_check mmio_ro_emulated_write(
     enum x86_segment seg,
     unsigned long offset,
@@ -4900,6 +5150,9 @@ int cf_check mmio_ro_emulated_write(
         return X86EMUL_UNHANDLEABLE;
     }
 
+    subpage_mmio_write_emulate(mmio_ro_ctxt->mfn, PAGE_OFFSET(offset),
+                               p_data, bytes);
+
     return X86EMUL_OKAY;
 }
 
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index cad28ef928ad..578bb4caaf0b 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -330,6 +330,7 @@ static int mmio_ro_do_page_fault(struct x86_emulate_ctxt *ctxt,
             return X86EMUL_UNHANDLEABLE;
     }
 
+    mmio_ro_ctxt.mfn = mfn;
     ctxt->data = &mmio_ro_ctxt;
     if ( pci_ro_mmcfg_decode(mfn_x(mfn), &mmio_ro_ctxt.seg, &mmio_ro_ctxt.bdf) )
         return x86_emulate(ctxt, &mmcfg_intercept_ops);
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Fri May 05 21:26:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 21:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530650.826384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv2wF-00077W-Mg; Fri, 05 May 2023 21:26:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530650.826384; Fri, 05 May 2023 21:26:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv2wF-00077P-Jo; Fri, 05 May 2023 21:26:03 +0000
Received: by outflank-mailman (input) for mailman id 530650;
 Fri, 05 May 2023 21:26:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HMjG=A2=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pv2wD-00077H-Vw
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 21:26:02 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6bd93315-eb8b-11ed-b226-6b7b168915f2;
 Fri, 05 May 2023 23:25:58 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.west.internal (Postfix) with ESMTP id 881393200A89;
 Fri,  5 May 2023 17:25:54 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Fri, 05 May 2023 17:25:54 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 5 May 2023 17:25:53 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bd93315-eb8b-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm3; t=1683321954; x=1683408354; bh=PDyU9NZ51sl5yKnOCnEilGWUm
	ShNFHrjw6ME6jDLwyk=; b=F3ywGpqE1Gk1X+Ee/uUhu90/cf6R3KNn8afKN0HBa
	Vh/JJObuRQqWhFRRvA2CWIN2vjjhBN3QRVo14OoD/8I3wzuWgDltg3G4CtzJFplY
	np+utE9vbQjoCNcjP9ekGQTx2suVusPWVo6PsJsDuDipymkukJBO0/S2dOCFoo5y
	zSVBL5udoao/1TXCOAqZh9pzvB3ic5ThSA39wUfS2pDBtP/cUn2zzlJ2P4ftzv1U
	FCDosrLmJnc6zoDK/x+REa6fu1lAHuQO1ZWbgq3GYC82GbxIAFTtUYhmApc98NsY
	+bmhpEtaiLb1c65nZOiP8MBb2LxJRnES98x1FtkJbRRLw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; t=1683321954; x=1683408354; bh=P
	DyU9NZ51sl5yKnOCnEilGWUmShNFHrjw6ME6jDLwyk=; b=b42mZxKjd/kl2DIbQ
	EGglrWR82TZxRBPi7XubYm/Tn+g7zv+9NO4HzFmg591yDXE1nb+J+mf7TJDeSmtz
	OVIAnhynPSmnt0mM8NFmftIsNAEq4W2UT51B7X6VA+NTuIvh++ZAGhDmj3VoLKMj
	Qnc1PkcMaM4m1fBt8gxUsNVPffFXZ1cABtALf+t2EvMGjkMnncer0/a2PmYsC3Eq
	4f4S8Lzsvj1ly1/AQHU1B9OD3KWDGagnxuP0I29DF0vPa9FEoMNnqGsKvSKZWoFK
	xlF5mc4fW2tT6xz5iQSoq1f9JSt3fAhqCGbe9kPtTxK6A6gAiOkVo6De0GUnJIv6
	arKqQ==
X-ME-Sender: <xms:YXRVZJ-LNG9wJYrkdlh0lEur5nzjUYrId2T4n4zC9quwwZtr6P4UXQ>
    <xme:YXRVZNtoWm-wRd4rP44kSs_KFH4sKW3tIaEVhp0Xukp-FOCGfKFqIwG9zItRUrISG
    9TLqmPVinFdmw>
X-ME-Received: <xmr:YXRVZHC_9y7lnVSAkJ61jJTjBVbW0BsGhry6Ujf4wRDWbgnEIPSV5PPQZg235nXLWxc8vDpfP5A7uVaM0jPaeLE2qGa5pFdUXuBmg3sYxZoYiJtdbg0H>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeefvddgudeifecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecunecujfgurhephffvvefufffkofggtgfgsehtke
    ertdertdejnecuhfhrohhmpeforghrvghkucforghrtgiihihkohifshhkihdqifpkrhgv
    tghkihcuoehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtoh
    hmqeenucggtffrrghtthgvrhhnpeelkefhudelteelleelteetveeffeetffekteetjeeh
    lefggeekleeghefhtdehvdenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmh
    grihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggs
    rdgtohhm
X-ME-Proxy: <xmx:YXRVZNeOnnVMzpFUuqSiezogLqtwbhLDsBUzrjgTzqfZMAcmsgDSsg>
    <xmx:YXRVZOO_T-Wcv1WSZe17mG0BFoPCWfMNvNcyeRPRCFf3ZPAAYgNvbg>
    <xmx:YXRVZPlBhYktDUFqYdUsxdiY9QyGDObi-EuRb-ir9w1L3pAFb_gg_Q>
    <xmx:YnRVZBa8Mh7kNStt2_VPdIBeNKE_SoMAzN9zgf39OpK2KKBD0p3Cnw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH v2 0/2] Add API for making parts of a MMIO page R/O and use it in XHCI console
Date: Fri,  5 May 2023 23:25:34 +0200
Message-Id: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On older systems, the XHCI extended capability (xcap) layout was such that no
other (interesting) registers were placed on the same page as the debug
capability, so Linux was fine with making the whole page R/O. But at least on
Tiger Lake and Alder Lake, Linux needs to write to some other registers on the
same page too.

Add a generic API for making just parts of an MMIO page R/O and use it to fix
the USB3 console with the share=yes or share=hwdom options. More details in
the commit messages.

Marek Marczykowski-Górecki (2):
  x86/mm: add API for marking only part of a MMIO page read only
  drivers/char: Use sub-page ro API to make just xhci dbc cap RO

 xen/arch/x86/hvm/emulate.c      |   2 +-
 xen/arch/x86/hvm/hvm.c          |   8 +-
 xen/arch/x86/include/asm/mm.h   |  15 ++-
 xen/arch/x86/mm.c               | 253 +++++++++++++++++++++++++++++++++-
 xen/arch/x86/pv/ro-page-fault.c |   1 +-
 xen/drivers/char/xhci-dbc.c     |  14 +--
 6 files changed, 284 insertions(+), 9 deletions(-)

base-commit: 99a9c3d7141063ae3f357892c6181cfa3be8a280
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Fri May 05 21:26:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 21:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530651.826393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv2wP-0007PD-14; Fri, 05 May 2023 21:26:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530651.826393; Fri, 05 May 2023 21:26:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv2wO-0007P6-UX; Fri, 05 May 2023 21:26:12 +0000
Received: by outflank-mailman (input) for mailman id 530651;
 Fri, 05 May 2023 21:26:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HMjG=A2=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pv2wN-0007NI-IB
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 21:26:11 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6e6fb91b-eb8b-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 23:26:01 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.west.internal (Postfix) with ESMTP id 154E53200985;
 Fri,  5 May 2023 17:26:00 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Fri, 05 May 2023 17:26:00 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 5 May 2023 17:25:58 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e6fb91b-eb8b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1683321959; x=1683408359; bh=uL
	hx0P6xBZxBZBxNEJgIiazgGFs2Lty/Wyht3mdpQIM=; b=pm2zuDECQQrRQaXpcm
	Qfpec60lFWjgj74E/Dk/vlOfOUOWJ7YpPwlW4XkPgXrtt0gnEnl1kaAATL+238PL
	vJeA/9EQhRFwMcVY62bETZYUyRQmM5wrwczCVfMCEMf2PGDfOOu12zQCE2SSMtaQ
	wM2FbDLdawwyZt8PR6NmHsXCF4RK88TtwSUJyQs/L+gqAqDvpKZQBjkLRn+Rwjqn
	dOwdLOuUHxw9zehmfXUZ4CMc+iIbe8mXfKoObHw+OkuRDTLhmNOjIs0+P7UfmAji
	2Lvr3DFR6uty99E1cvQ13UK56ekgtjcAOZEo2dxOd/KhwYO/kNU2cbY4UPEq3mGF
	K13A==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; t=
	1683321959; x=1683408359; bh=uLhx0P6xBZxBZBxNEJgIiazgGFs2Lty/Wyh
	t3mdpQIM=; b=OFSdM02IPRy0XvJlK7BqPWyE8gxN6e21UPb0aN1FSK2ttWVRRX+
	ruVtBzaAiUo/NjMzb4Cd64clIHFomEoUVehziMx6YmSSATZxlGoldVlsLQCcl6Sn
	Z5zzWrQJpZeyjujCY7pjQG/x1abC0zHzxgqV80bgl2SW5gwli+Reasa1ShQ6Yvgh
	4aexAZVcbJ9X0ZzL9TNtO4uIKjFK5oe+UceLuJX6VAldvT1M34Jhv8KZZyg6mqbK
	qg+xjKjSfHKb4Jw/8oQ7WbPYTZG8zUTY36xp1WamEzhpfOGTwxgpFlCALRX/INW3
	L5WFFBjsKCQVgVf7eu+l9AereZGi2CtI5fA==
X-ME-Sender: <xms:Z3RVZPGJe5J6t7iODWL0V_zwIS7tDkxM5kPXoFHFoouUGvkOwVtNFA>
    <xme:Z3RVZMVdqsyRUaHkrZUO6z7FAOE766Jzaluzk_foPjqxZX0jVaKgptP_DAxJwBO4T
    uEO4YxmdBub7w>
X-ME-Received: <xmr:Z3RVZBJDxXMAtmqw6uEauUH5-_nsFhU0vvhXTa0_KKq2bcWGPu1vAOJEIpTPYp73vs3JuX0km1dB_Qvlu3wYe94pDNbLxYG1ET3-4Vfa6STYn-UuvVPX>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeefvddgudeifecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfgggtgfesthekredtredtjeenucfhrhhomhepofgr
    rhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghkse
    hinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhep
    gfeuudehgfdvfeehhedujeehfeduveeugefhkefhheelgeevudetueeiudfggfffnecuve
    hluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghr
    vghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:Z3RVZNEmuzqVId5PqNGnXOjCtyrv1oF6U2ZGdFDMAk6CtRGSqgupdw>
    <xmx:Z3RVZFURf2nl5mFJqI_2FdUa2M-TZW3J2y-zRMTF5mwU4A8P4k4oPQ>
    <xmx:Z3RVZINb70Wz8AsGbl91EgiuLhApbQagOYHVxU6umRgdgM3sqgIeSw>
    <xmx:Z3RVZDevq3yKC8xPMTizEhazK8ke64FtATOhCjI5KfFQXsi1bYecdQ>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] drivers/char: Use sub-page ro API to make just xhci dbc cap RO
Date: Fri,  5 May 2023 23:25:36 +0200
Message-Id: <1f9909dacfd7822a1c7d30ba03bbec93fa2ff6fd.1683321183.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
References: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Not the whole page, which may contain other registers too. In fact, on Tiger
Lake and newer (at least), this page does contain other registers that Linux
tries to use, and with share=yes a domU would use them too. Without this
patch, a PV dom0 would fail to initialize the controller, while an HVM domain
would be killed on an EPT violation.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
Changes in v2:
 - adjust for simplified subpage_mmio_ro_add() API
---
 xen/drivers/char/xhci-dbc.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/xen/drivers/char/xhci-dbc.c b/xen/drivers/char/xhci-dbc.c
index 60b781f87202..8e368364c933 100644
--- a/xen/drivers/char/xhci-dbc.c
+++ b/xen/drivers/char/xhci-dbc.c
@@ -1221,14 +1221,12 @@ static void __init cf_check dbc_uart_init_postirq(struct serial_port *port)
      * Linux's XHCI driver (as of 5.18) works without writting to the whole
      * page, so keep it simple.
      */
-    if ( rangeset_add_range(mmio_ro_ranges,
-                PFN_DOWN((uart->dbc.bar_val & PCI_BASE_ADDRESS_MEM_MASK) +
-                         uart->dbc.xhc_dbc_offset),
-                PFN_UP((uart->dbc.bar_val & PCI_BASE_ADDRESS_MEM_MASK) +
-                       uart->dbc.xhc_dbc_offset +
-                sizeof(*uart->dbc.dbc_reg)) - 1) )
-        printk(XENLOG_INFO
-               "Error while adding MMIO range of device to mmio_ro_ranges\n");
+    if ( subpage_mmio_ro_add(
+            (uart->dbc.bar_val & PCI_BASE_ADDRESS_MEM_MASK) +
+             uart->dbc.xhc_dbc_offset,
+            sizeof(*uart->dbc.dbc_reg)) )
+        printk(XENLOG_WARNING
+               "Error while marking MMIO range of XHCI console as R/O\n");
 #endif
 }
 
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Fri May 05 21:48:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 21:48:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530668.826413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv3I0-0002Nv-2k; Fri, 05 May 2023 21:48:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530668.826413; Fri, 05 May 2023 21:48:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv3Hz-0002No-W6; Fri, 05 May 2023 21:48:31 +0000
Received: by outflank-mailman (input) for mailman id 530668;
 Fri, 05 May 2023 21:48:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HMjG=A2=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pv3Hy-0002Ng-F8
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 21:48:30 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 90e2af1c-eb8e-11ed-8611-37d641c3527e;
 Fri, 05 May 2023 23:48:27 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.west.internal (Postfix) with ESMTP id 0462C32002FB;
 Fri,  5 May 2023 17:48:24 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Fri, 05 May 2023 17:48:25 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 5 May 2023 17:48:22 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90e2af1c-eb8e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm3; t=1683323304; x=1683409704; bh=63RaELZf0bZZdNao1GuFs3vFD
	Am34PcStRchPykUHYs=; b=g4cW2l8rkHmfM8cL4Kfrd3fptPg848oUPhuCXcs6J
	U3FDMOY1eqDrQ6xQSbodW3pDeq1j0CUnCuaXvb4Tgp+RrNjj4VggdeJg/kKR3WGr
	Ud2gQCni7nLWTVvSUn4KAdRZaio48MySCH0YJAHEYT5efF9+G26re1ZkoC+VRzky
	SyT6WRosYuHNkSbqeAonschZ0v6NAFyYcwwME5AGRJUlWUQ2I/i/VcL8tOf4G190
	Qpy6LBwx2LDz068e6cpr7u4JjQiyczEtSBZyXr3nfXM6zFg/NnDSFuw2MLU42DD8
	RM0m7PwJqGd6xd9GQSsmJPjwx+SirBVIgX4HPlMzYCSxA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; t=1683323304; x=1683409704; bh=6
	3RaELZf0bZZdNao1GuFs3vFDAm34PcStRchPykUHYs=; b=DWyn4fEISOpJSSncY
	xcOaePj4xLw1A55hu+5JxyWyn5eqtuW0eddWMFKbD9l/opNYg6ccWq1LZRLykatB
	3PUQtcGZhoMzHey8I2pdrgVVp8MYZtyugNLQhQbGBm4TS7WZYYbDx4oDRqgkLtq7
	i0ckYS3myC479neV4z4RYmj+RemC7dH6QUR1hxLrAv949s47MEdajYEPUvrP/giQ
	Y4V0cAxXj9pDPvpVUlP57sMcUJ+n0OL4AH5751L1R0RX3O62j+4zhfH9iA5gYWeK
	ctJG5IZ38ezFhdDqZAFKa1T9Na00qNilCA26MNxaGCJZQGA5JTXTjDcPwCDBc2SU
	2Dwww==
X-ME-Sender: <xms:qHlVZNKuEvSP7jGSvzsMfG8uQ-lEMaaCV-EVvMOzNUHOIa-faaO_oA>
    <xme:qHlVZJI3k0IwHuf2JoEEgzr6yEV3bH5TwE3ZBDEya7PZmskunAAs80y3dxXV2ziE2
    DklobAcBYGk0A>
X-ME-Received: <xmr:qHlVZFstlGpxbAAOsfNZpK4zfeI4qDrSoD0hedFjExsQ9DtKQfKdFPdbFxkWdhU0d00YuLgjfGR8yCcfXv5dJnf3wG1si3j4pBz8ZSxGLWYTejINbI-c>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeeffedgtddvucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofggtgfgsehtkeertdertdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeelkefh
    udelteelleelteetveeffeetffekteetjeehlefggeekleeghefhtdehvdenucevlhhush
    htvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhes
    ihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:qHlVZOZ9noAs2Nu3zsjn6tS5oIOMxPP72YIadq1tC2mKvNiJ5Ty32A>
    <xmx:qHlVZEYD5Oiqx7z6cWWdSfNi0hnjPqKXQe1DPTs36_0L32J8VVIR_w>
    <xmx:qHlVZCDldD0VAzg-Dj1rgRkUAZtSE7w6Y5qUh6jUC2PGN8pA1WDsHw>
    <xmx:qHlVZDxqiDhrV4_bQv6zaMUxR68OCmga7KJhYmeXXunmAWz_OWVgqg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3] ns16550: enable memory decoding on MMIO-based PCI console card
Date: Fri,  5 May 2023 23:48:10 +0200
Message-Id: <20230505214810.406061-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
devices; add setting PCI_COMMAND_MEMORY for MMIO-based UART devices too.
Note that MMIO-based devices in practice need the "pci" sub-option,
otherwise a few parameters are not initialized (including bar_idx,
reg_shift, reg_width etc.). The "pci" option is not supposed to be used
with an explicit BDF, so do not key setting PCI_COMMAND_MEMORY on an
explicit BDF being set. Contrary to the IO-based UART case,
pci_serial_early_init() will not attempt to set the BAR0 address, even
if the user provided io_base manually - in most cases those addresses
come with an offset, and the current cmdline syntax doesn't allow
expressing it. Due to this, enable PCI_COMMAND_MEMORY only if uart->bar
is already populated. In a similar spirit, this patch does not support
setting BAR0 of the bridge.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
This fixes the issue I was talking about on #xendevel. Thanks Roger for
the hint.

Changes in v2:
 - check if uart->bar instead of uart->io_base
 - move it ahead of !uart->ps_bdf_enable return
 - expand commit message.
Changes in v3:
 - restore io_base check
---
 xen/drivers/char/ns16550.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 1b21eb93c45f..ae845a720f7a 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -272,6 +272,14 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
 static void pci_serial_early_init(struct ns16550 *uart)
 {
 #ifdef NS16550_PCI
+    if ( uart->bar && uart->io_base >= 0x10000 )
+    {
+        pci_conf_write16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
+                                  uart->ps_bdf[2]),
+                         PCI_COMMAND, PCI_COMMAND_MEMORY);
+        return;
+    }
+
     if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
         return;
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri May 05 22:26:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 22:26:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530675.826424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv3sn-0006s8-4w; Fri, 05 May 2023 22:26:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530675.826424; Fri, 05 May 2023 22:26:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv3sn-0006s1-1G; Fri, 05 May 2023 22:26:33 +0000
Received: by outflank-mailman (input) for mailman id 530675;
 Fri, 05 May 2023 22:26:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pv3sm-0006rr-3V; Fri, 05 May 2023 22:26:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pv3sl-00082i-3t; Fri, 05 May 2023 22:26:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pv3sk-0001LU-Ex; Fri, 05 May 2023 22:26:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pv3sk-0007Zg-EY; Fri, 05 May 2023 22:26:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eA/F8pAMjbBG4X1HTffoC5NimbmB0c22VWChNptmhw8=; b=fL6yHIzliFWSx0MURs0/HuCjjc
	ylGTNon4W90dy8E1YT7Ss1g847FdvYMlxuvAuaMBGvr0rF1YpPRBzK6IjpQhjAmu0SFa554yc9sz9
	zr7jUyRHg0v5zwMZR0sIt2H2OJmz0oN41rTvrf0AtynhDFeJYTfbbehna0diQcNmDl4k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180540-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180540: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=78b421b6a7c6dbb6a213877c742af52330f5026d
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 05 May 2023 22:26:30 +0000

flight 180540 linux-linus real [real]
flight 180548 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180540/
http://logs.test-lab.xenproject.org/osstest/logs/180548/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                78b421b6a7c6dbb6a213877c742af52330f5026d
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   19 days
Failing since        180281  2023-04-17 06:24:36 Z   18 days   32 attempts
Testing same since   180540  2023-05-05 04:38:37 Z    0 days    1 attempts

------------------------------------------------------------
2247 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 276065 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 05 22:29:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 22:29:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530681.826434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv3vc-0007Ru-Iu; Fri, 05 May 2023 22:29:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530681.826434; Fri, 05 May 2023 22:29:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv3vc-0007Rn-G7; Fri, 05 May 2023 22:29:28 +0000
Received: by outflank-mailman (input) for mailman id 530681;
 Fri, 05 May 2023 22:29:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cSQs=A2=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pv3vb-0007Rb-8c
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 22:29:27 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 49a0db73-eb94-11ed-8611-37d641c3527e;
 Sat, 06 May 2023 00:29:24 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id BADCD64146;
 Fri,  5 May 2023 22:29:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3104EC433EF;
 Fri,  5 May 2023 22:29:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49a0db73-eb94-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683325763;
	bh=4dYx1LJPZVLQb1DyajVpkYkYKikgBEG6wKEEOq6JRDg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rjIi2K6y9Ut6lOg/KzWnKSLDajpG0zFPHnWWbWbJIfSaJgmTHJiGSSrLkVx1djFJ3
	 soJ2msK2aNMWijGDvAQN+PYGueCOwtRmuUHOYkCGLLFV5s+h2lktuaqZAfwuWzDjh0
	 o2Qe2SXqg5pa9Yr9zFoGZiEK723uw+2hqX2kOwMdQjfhfibiOt0Xlc6pkUXvYjdVs0
	 qGZB9aJrqj6zUEkf7wZDl4XctCSMEo//yNsy3NRdbMsuiOiMtO1XkCtVoSO6sVGY/c
	 yhJUwWhW6KCL8RudA7kUeRK/AqOhJ60pHUrpybbxT6VVicbFhldEZeAT+haPvL6Eit
	 NIHNv/ABwHYQg==
Date: Fri, 5 May 2023 15:29:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <George.Dunlap@eu.citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 1/2] LICENSES: Improve the legibility of these files
In-Reply-To: <e8ec85b6-90bb-1df0-4f6b-d7e9c6ade25f@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305051529140.974517@ubuntu-linux-20-04-desktop>
References: <20230505130533.3580545-1-andrew.cooper3@citrix.com> <20230505130533.3580545-2-andrew.cooper3@citrix.com> <e8ec85b6-90bb-1df0-4f6b-d7e9c6ade25f@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 5 May 2023, Jan Beulich wrote:
> On 05.05.2023 15:05, Andrew Cooper wrote:
> > A few newlines go a very long way
> > 
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>

Although I don't mind either way, it should be noted that the LICENSES
files came from Linux, and Linux doesn't have the extra spaces.

That said, these are license files, not code, so I think we don't
necessarily need to keep them in sync over time.

So I am fine with this. I am also fine if you drop this patch.

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Fri May 05 22:35:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 05 May 2023 22:35:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530685.826443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv40o-0000Ru-4w; Fri, 05 May 2023 22:34:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530685.826443; Fri, 05 May 2023 22:34:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv40o-0000Rn-2M; Fri, 05 May 2023 22:34:50 +0000
Received: by outflank-mailman (input) for mailman id 530685;
 Fri, 05 May 2023 22:34:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cSQs=A2=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pv40m-0000Rh-LW
 for xen-devel@lists.xenproject.org; Fri, 05 May 2023 22:34:48 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 09b6a312-eb95-11ed-b226-6b7b168915f2;
 Sat, 06 May 2023 00:34:47 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D31DB64151;
 Fri,  5 May 2023 22:34:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2C84DC433D2;
 Fri,  5 May 2023 22:34:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09b6a312-eb95-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683326085;
	bh=C/tyMP0on/mJnQfCVGm2XWdaMp5GF2+MJvI5FCXTLqw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DgNoqDqYcVbOWM3f1/A10Xt2InPUysmrPBBiqkpFrPuF89iQLuhs84grhdxWLq8qJ
	 8Zi8OKJOy1SgtK4YkipQD3naH/FzZsIotDzXMu0Jq95aQaiQj5kXm8nHKGdnrY12Tt
	 V/xZtvlupwWCOmgr5AmSaMsjBERFG1avpPqrQ+NNEVkvWmH0ARbdDQr9WVdG74gSAo
	 NRnPCaMp+aiRbUtRETTtZ1cwsDMlP9+m0DJtkcD8kDEwwDldiqOj/I+GwOz3q7vWBL
	 HUVgIthV5MSAvNzY5I1gNJWZVJvpin43ztSXsYdcgVl+mOFfYBwZ7N+flRGKcAoM7F
	 /ajPZcMvRlHbQ==
Date: Fri, 5 May 2023 15:34:41 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, 
    George Dunlap <George.Dunlap@eu.citrix.com>, 
    Jan Beulich <JBeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 2/2] LICENSES: Remove the use of deprecated SPDX
 tags
In-Reply-To: <20230505181528.3587485-3-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2305051534190.974517@ubuntu-linux-20-04-desktop>
References: <20230505181528.3587485-1-andrew.cooper3@citrix.com> <20230505181528.3587485-3-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 5 May 2023, Andrew Cooper wrote:
> The GPL and LGPL SPDX forms without an explicit -only or -or-later suffix are
> deprecated and should not be used.  Update the documentation.
> 
> Somewhat unhelpfully at the time of writing, this only appears to be indicated
> by the separation of the two tables at https://spdx.org/licenses/
> 
> The recent changes to libacpi are the only examples of deprecated LGPL tags in
> tree, so fix them all up.
> 
> For GPL, we have many examples using deprecated tags.  For now, just identify
> them as such and recommend that no new instances get added.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>



> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> 
> Unsure whether this should get some Fixes: tags or not.  Note also that
> Jenny's "[PATCH v4 2/2] acpi: Add TPM2 interface definition." wants its SPDX
> tag correcting as per this patch.
> 
> v2:
>  * Extend commit message to include https://spdx.org/licenses/
>  * Update the URLs too
> ---
>  LICENSES/GPL-2.0                    | 15 ++++++++-------
>  LICENSES/LGPL-2.0                   | 11 ++++++-----
>  LICENSES/LGPL-2.1                   | 11 ++++++-----
>  tools/libacpi/Makefile              |  2 +-
>  tools/libacpi/acpi2_0.h             |  2 +-
>  tools/libacpi/build.c               |  2 +-
>  tools/libacpi/dsdt.asl              |  2 +-
>  tools/libacpi/dsdt_acpi_info.asl    |  2 +-
>  tools/libacpi/libacpi.h             |  2 +-
>  tools/libacpi/mk_dsdt.c             |  2 +-
>  tools/libacpi/ssdt_laptop_slate.asl |  2 +-
>  tools/libacpi/ssdt_pm.asl           |  2 +-
>  tools/libacpi/ssdt_s3.asl           |  2 +-
>  tools/libacpi/ssdt_s4.asl           |  2 +-
>  tools/libacpi/ssdt_tpm.asl          |  2 +-
>  tools/libacpi/static_tables.c       |  2 +-
>  16 files changed, 33 insertions(+), 30 deletions(-)
> 
> diff --git a/LICENSES/GPL-2.0 b/LICENSES/GPL-2.0
> index fa5c66236fe9..07f332641ccd 100644
> --- a/LICENSES/GPL-2.0
> +++ b/LICENSES/GPL-2.0
> @@ -1,9 +1,11 @@
> -Valid-License-Identifier: GPL-2.0
>  Valid-License-Identifier: GPL-2.0-only
> -Valid-License-Identifier: GPL-2.0+
>  Valid-License-Identifier: GPL-2.0-or-later
>  
> -SPDX-URL: https://spdx.org/licenses/GPL-2.0.html
> +SPDX-URL: https://spdx.org/licenses/GPL-2.0-only.html
> +SPDX-URL: https://spdx.org/licenses/GPL-2.0-or-later.html
> +
> +Deprecated-Identifier: GPL-2.0
> +Deprecated-Identifier: GPL-2.0+
>  
>  Usage-Guide:
>  
> @@ -13,14 +15,13 @@ Usage-Guide:
>  
>    For 'GNU General Public License (GPL) version 2 only' use:
>      SPDX-License-Identifier: GPL-2.0-only
> -  or (now deprecated)
> -    SPDX-License-Identifier: GPL-2.0
>  
>    For 'GNU General Public License (GPL) version 2 or any later version' use:
> -    SPDX-License-Identifier: GPL-2.0+
> -  or
>      SPDX-License-Identifier: GPL-2.0-or-later
>  
> +  The deprecated tags should not be used for any new additions.  Where
> +  possible, their existing uses should be phased out.
> +
>  License-Text:
>  
>  		    GNU GENERAL PUBLIC LICENSE
> diff --git a/LICENSES/LGPL-2.0 b/LICENSES/LGPL-2.0
> index 2fa16d72eabf..100c72c6db8c 100644
> --- a/LICENSES/LGPL-2.0
> +++ b/LICENSES/LGPL-2.0
> @@ -1,7 +1,8 @@
> -Valid-License-Identifier: LGPL-2.0
> -Valid-License-Identifier: LGPL-2.0+
> +Valid-License-Identifier: LGPL-2.0-only
> +Valid-License-Identifier: LGPL-2.0-or-later
>  
> -SPDX-URL: https://spdx.org/licenses/LGPL-2.0.html
> +SPDX-URL: https://spdx.org/licenses/LGPL-2.0-only.html
> +SPDX-URL: https://spdx.org/licenses/LGPL-2.0-or-later.html
>  
>  Usage-Guide:
>  
> @@ -10,11 +11,11 @@ Usage-Guide:
>    guidelines in the licensing rules documentation.
>  
>    For 'GNU Library General Public License (LGPL) version 2.0 only' use:
> -    SPDX-License-Identifier: LGPL-2.0
> +    SPDX-License-Identifier: LGPL-2.0-only
>  
>    For 'GNU Library General Public License (LGPL) version 2.0 or any later
>    version' use:
> -    SPDX-License-Identifier: LGPL-2.0+
> +    SPDX-License-Identifier: LGPL-2.0-or-later
>  
>  License-Text:
>  
> diff --git a/LICENSES/LGPL-2.1 b/LICENSES/LGPL-2.1
> index b366c7e49199..d3e213c39c26 100644
> --- a/LICENSES/LGPL-2.1
> +++ b/LICENSES/LGPL-2.1
> @@ -1,7 +1,8 @@
> -Valid-License-Identifier: LGPL-2.1
> -Valid-License-Identifier: LGPL-2.1+
> +Valid-License-Identifier: LGPL-2.1-only
> +Valid-License-Identifier: LGPL-2.1-or-later
>  
> -SPDX-URL: https://spdx.org/licenses/LGPL-2.1.html
> +SPDX-URL: https://spdx.org/licenses/LGPL-2.1-only.html
> +SPDX-URL: https://spdx.org/licenses/LGPL-2.1-or-later.html
>  
>  Usage-Guide:
>  
> @@ -10,11 +11,11 @@ Usage-Guide:
>    guidelines in the licensing rules documentation.
>  
>    For 'GNU Lesser General Public License (LGPL) version 2.1 only' use:
> -    SPDX-License-Identifier: LGPL-2.1
> +    SPDX-License-Identifier: LGPL-2.1-only
>  
>    For 'GNU Lesser General Public License (LGPL) version 2.1 or any later
>    version' use:
> -    SPDX-License-Identifier: LGPL-2.1+
> +    SPDX-License-Identifier: LGPL-2.1-or-later
>  
>  License-Text:
>  
> diff --git a/tools/libacpi/Makefile b/tools/libacpi/Makefile
> index aa9c520cbe85..bcfcd852f92f 100644
> --- a/tools/libacpi/Makefile
> +++ b/tools/libacpi/Makefile
> @@ -1,4 +1,4 @@
> -# SPDX-License-Identifier: LGPL-2.1
> +# SPDX-License-Identifier: LGPL-2.1-only
>  #
>  # Copyright (c) 2004, Intel Corporation.
>  
> diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
> index 212f5ab64182..e00b29854be0 100644
> --- a/tools/libacpi/acpi2_0.h
> +++ b/tools/libacpi/acpi2_0.h
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /*
>   * Copyright (c) 2004, Intel Corporation.
>   */
> diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
> index 830d37c61f03..3142e0ac84c0 100644
> --- a/tools/libacpi/build.c
> +++ b/tools/libacpi/build.c
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /*
>   * Copyright (c) 2004, Intel Corporation.
>   * Copyright (c) 2006, Keir Fraser, XenSource Inc.
> diff --git a/tools/libacpi/dsdt.asl b/tools/libacpi/dsdt.asl
> index c6691b56a986..32b42f85ae9f 100644
> --- a/tools/libacpi/dsdt.asl
> +++ b/tools/libacpi/dsdt.asl
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /******************************************************************************
>   * DSDT for Xen with Qemu device model
>   *
> diff --git a/tools/libacpi/dsdt_acpi_info.asl b/tools/libacpi/dsdt_acpi_info.asl
> index c6e82f1fe6a7..6e114fa23404 100644
> --- a/tools/libacpi/dsdt_acpi_info.asl
> +++ b/tools/libacpi/dsdt_acpi_info.asl
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  
>      Scope (\_SB)
>      {
> diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
> index acf012e35578..7ae28525f604 100644
> --- a/tools/libacpi/libacpi.h
> +++ b/tools/libacpi/libacpi.h
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /******************************************************************************
>   * libacpi.h
>   * 
> diff --git a/tools/libacpi/mk_dsdt.c b/tools/libacpi/mk_dsdt.c
> index c74b270c0c5d..34f6753f6193 100644
> --- a/tools/libacpi/mk_dsdt.c
> +++ b/tools/libacpi/mk_dsdt.c
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  
>  #include <stdio.h>
>  #include <stdarg.h>
> diff --git a/tools/libacpi/ssdt_laptop_slate.asl b/tools/libacpi/ssdt_laptop_slate.asl
> index 494f2d048d0a..69fd504c19fc 100644
> --- a/tools/libacpi/ssdt_laptop_slate.asl
> +++ b/tools/libacpi/ssdt_laptop_slate.asl
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /*
>   * ssdt_conv.asl
>   *
> diff --git a/tools/libacpi/ssdt_pm.asl b/tools/libacpi/ssdt_pm.asl
> index e577e85c072b..db578d10ac3e 100644
> --- a/tools/libacpi/ssdt_pm.asl
> +++ b/tools/libacpi/ssdt_pm.asl
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /*
>   * ssdt_pm.asl
>   *
> diff --git a/tools/libacpi/ssdt_s3.asl b/tools/libacpi/ssdt_s3.asl
> index 8f3177ec5adc..f6e9636f4759 100644
> --- a/tools/libacpi/ssdt_s3.asl
> +++ b/tools/libacpi/ssdt_s3.asl
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /*
>   * ssdt_s3.asl
>   *
> diff --git a/tools/libacpi/ssdt_s4.asl b/tools/libacpi/ssdt_s4.asl
> index 979318eca1f5..8014f5fc9014 100644
> --- a/tools/libacpi/ssdt_s4.asl
> +++ b/tools/libacpi/ssdt_s4.asl
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /*
>   * ssdt_s4.asl
>   *
> diff --git a/tools/libacpi/ssdt_tpm.asl b/tools/libacpi/ssdt_tpm.asl
> index 6c3267218f3b..944658d25177 100644
> --- a/tools/libacpi/ssdt_tpm.asl
> +++ b/tools/libacpi/ssdt_tpm.asl
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /*
>   * ssdt_tpm.asl
>   *
> diff --git a/tools/libacpi/static_tables.c b/tools/libacpi/static_tables.c
> index 631fb911413b..715f46fee05c 100644
> --- a/tools/libacpi/static_tables.c
> +++ b/tools/libacpi/static_tables.c
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: LGPL-2.1 */
> +/* SPDX-License-Identifier: LGPL-2.1-only */
>  /*
>   * Copyright (c) 2004, Intel Corporation.
>   * Copyright (c) 2006, Keir Fraser, XenSource Inc.
> -- 
> 2.30.2
> 


From xen-devel-bounces@lists.xenproject.org Sat May 06 00:35:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 00:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530751.826488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv5to-0007Br-Nf; Sat, 06 May 2023 00:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530751.826488; Sat, 06 May 2023 00:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv5to-0007Bk-Kp; Sat, 06 May 2023 00:35:44 +0000
Received: by outflank-mailman (input) for mailman id 530751;
 Sat, 06 May 2023 00:35:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CN1r=A3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pv5tn-0007Be-4p
 for xen-devel@lists.xenproject.org; Sat, 06 May 2023 00:35:43 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ecd62efd-eba5-11ed-8611-37d641c3527e;
 Sat, 06 May 2023 02:35:40 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0079763C2E;
 Sat,  6 May 2023 00:35:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4DED8C433D2;
 Sat,  6 May 2023 00:35:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecd62efd-eba5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683333338;
	bh=bP/3aHvZYOXH3zu8B4TUIjOV1o/FusNOt/78mT12AFw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=FA6qHKtIZCoo7JyMA/TPj5q9THuxlE3xbt1wUPQos+XP7Llr1DVlqSqtI9CtwtEnz
	 ZtUpCdFFsQR/gq0RuBDEBWUmNytLi/26Er/7E5CNzoXvyhvX11CvehWdgMcSyFl5cZ
	 3HzhQ/SrPomJvST5vkoMFoGPniwB7pf7KCH8NLBEl5KAntmYPPE51DFaGvltpnRcUH
	 HwWJN2Z7vYd8UykRhHmNmOONUvwc3qz1+hw8RfD3ILXha6BvLupOEUlSSxKnAeUsjz
	 p2hwvi5X8C8gabXB+E3EvyS/+uNsd1Knj9l/hxSPto3NSsb+r7k38IJIl4nGDQLUHr
	 o4yTLuUOaVS/w==
Date: Fri, 5 May 2023 17:35:35 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jason Andryuk <jandryuk@gmail.com>
cc: qemu-devel@nongnu.org, Greg Kurz <groug@kaod.org>, 
    Christian Schoenebeck <qemu_oss@crudebyte.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] 9pfs/xen: Fix segfault on shutdown
In-Reply-To: <20230502143722.15613-1-jandryuk@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2305051735020.974517@ubuntu-linux-20-04-desktop>
References: <20230502143722.15613-1-jandryuk@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 2 May 2023, Jason Andryuk wrote:
> xen_9pfs_free can't use gnttabdev since it is already closed and NULL-ed
> out when free is called.  Do the teardown in _disconnect().  This
> matches the setup done in _connect().
> 
> trace-events are also added for the XenDevOps functions.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  hw/9pfs/trace-events     |  5 +++++
>  hw/9pfs/xen-9p-backend.c | 36 +++++++++++++++++++++++-------------
>  2 files changed, 28 insertions(+), 13 deletions(-)
> 
> diff --git a/hw/9pfs/trace-events b/hw/9pfs/trace-events
> index 6c77966c0b..7b5b0b5a48 100644
> --- a/hw/9pfs/trace-events
> +++ b/hw/9pfs/trace-events
> @@ -48,3 +48,8 @@ v9fs_readlink(uint16_t tag, uint8_t id, int32_t fid) "tag %d id %d fid %d"
>  v9fs_readlink_return(uint16_t tag, uint8_t id, char* target) "tag %d id %d name %s"
>  v9fs_setattr(uint16_t tag, uint8_t id, int32_t fid, int32_t valid, int32_t mode, int32_t uid, int32_t gid, int64_t size, int64_t atime_sec, int64_t mtime_sec) "tag %u id %u fid %d iattr={valid %d mode %d uid %d gid %d size %"PRId64" atime=%"PRId64" mtime=%"PRId64" }"
>  v9fs_setattr_return(uint16_t tag, uint8_t id) "tag %u id %u"
> +
> +xen_9pfs_alloc(char *name) "name %s"
> +xen_9pfs_connect(char *name) "name %s"
> +xen_9pfs_disconnect(char *name) "name %s"
> +xen_9pfs_free(char *name) "name %s"
> diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
> index 0e266c552b..c646a0b3d1 100644
> --- a/hw/9pfs/xen-9p-backend.c
> +++ b/hw/9pfs/xen-9p-backend.c
> @@ -25,6 +25,8 @@
>  #include "qemu/iov.h"
>  #include "fsdev/qemu-fsdev.h"
>  
> +#include "trace.h"
> +
>  #define VERSIONS "1"
>  #define MAX_RINGS 8
>  #define MAX_RING_ORDER 9
> @@ -337,6 +339,8 @@ static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev)
>      Xen9pfsDev *xen_9pdev = container_of(xendev, Xen9pfsDev, xendev);
>      int i;
>  
> +    trace_xen_9pfs_disconnect(xendev->name);
> +
>      for (i = 0; i < xen_9pdev->num_rings; i++) {
>          if (xen_9pdev->rings[i].evtchndev != NULL) {
>              qemu_set_fd_handler(qemu_xen_evtchn_fd(xen_9pdev->rings[i].evtchndev),
> @@ -345,40 +349,42 @@ static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev)
>                                     xen_9pdev->rings[i].local_port);
>              xen_9pdev->rings[i].evtchndev = NULL;
>          }
> -    }
> -}
> -
> -static int xen_9pfs_free(struct XenLegacyDevice *xendev)
> -{
> -    Xen9pfsDev *xen_9pdev = container_of(xendev, Xen9pfsDev, xendev);
> -    int i;
> -
> -    if (xen_9pdev->rings[0].evtchndev != NULL) {
> -        xen_9pfs_disconnect(xendev);
> -    }
> -
> -    for (i = 0; i < xen_9pdev->num_rings; i++) {
>          if (xen_9pdev->rings[i].data != NULL) {
>              xen_be_unmap_grant_refs(&xen_9pdev->xendev,
>                                      xen_9pdev->rings[i].data,
>                                      xen_9pdev->rings[i].intf->ref,
>                                      (1 << xen_9pdev->rings[i].ring_order));
> +            xen_9pdev->rings[i].data = NULL;
>          }
>          if (xen_9pdev->rings[i].intf != NULL) {
>              xen_be_unmap_grant_ref(&xen_9pdev->xendev,
>                                     xen_9pdev->rings[i].intf,
>                                     xen_9pdev->rings[i].ref);
> +            xen_9pdev->rings[i].intf = NULL;
>          }
>          if (xen_9pdev->rings[i].bh != NULL) {
>              qemu_bh_delete(xen_9pdev->rings[i].bh);
> +            xen_9pdev->rings[i].bh = NULL;
>          }
>      }
>  
>      g_free(xen_9pdev->id);
> +    xen_9pdev->id = NULL;
>      g_free(xen_9pdev->tag);
> +    xen_9pdev->tag = NULL;
>      g_free(xen_9pdev->path);
> +    xen_9pdev->path = NULL;
>      g_free(xen_9pdev->security_model);
> +    xen_9pdev->security_model = NULL;
>      g_free(xen_9pdev->rings);
> +    xen_9pdev->rings = NULL;
> +    return;

NIT: this return is redundant.

Aside from that:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> +}
> +
> +static int xen_9pfs_free(struct XenLegacyDevice *xendev)
> +{
> +    trace_xen_9pfs_free(xendev->name);
> +
>      return 0;
>  }
>  
> @@ -390,6 +396,8 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
>      V9fsState *s = &xen_9pdev->state;
>      QemuOpts *fsdev;
>  
> +    trace_xen_9pfs_connect(xendev->name);
> +
>      if (xenstore_read_fe_int(&xen_9pdev->xendev, "num-rings",
>                               &xen_9pdev->num_rings) == -1 ||
>          xen_9pdev->num_rings > MAX_RINGS || xen_9pdev->num_rings < 1) {
> @@ -499,6 +507,8 @@ out:
>  
>  static void xen_9pfs_alloc(struct XenLegacyDevice *xendev)
>  {
> +    trace_xen_9pfs_alloc(xendev->name);
> +
>      xenstore_write_be_str(xendev, "versions", VERSIONS);
>      xenstore_write_be_int(xendev, "max-rings", MAX_RINGS);
>      xenstore_write_be_int(xendev, "max-ring-page-order", MAX_RING_ORDER);
> -- 
> 2.40.1
> 


From xen-devel-bounces@lists.xenproject.org Sat May 06 00:53:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 00:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530756.826498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv6AM-0001Bs-55; Sat, 06 May 2023 00:52:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530756.826498; Sat, 06 May 2023 00:52:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv6AM-0001Bl-2I; Sat, 06 May 2023 00:52:50 +0000
Received: by outflank-mailman (input) for mailman id 530756;
 Sat, 06 May 2023 00:52:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CN1r=A3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pv6AL-0001Be-53
 for xen-devel@lists.xenproject.org; Sat, 06 May 2023 00:52:49 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 50950233-eba8-11ed-b226-6b7b168915f2;
 Sat, 06 May 2023 02:52:46 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4FF0F6348A;
 Sat,  6 May 2023 00:52:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 54D14C433D2;
 Sat,  6 May 2023 00:52:43 +0000 (UTC)
Date: Fri, 5 May 2023 17:52:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Juergen Gross <jgross@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Samuel Holland <samuel@sholland.org>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Marc Zyngier <maz@kernel.org>, Jane Malalane <jane.malalane@citrix.com>, 
    David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [PATCH] xen/evtchn: Introduce new IOCTL to bind static evtchn
In-Reply-To: <48d30a439e37f6917b9a667289792c2b3f548d6d.1682685294.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305051750070.974517@ubuntu-linux-20-04-desktop>
References: <48d30a439e37f6917b9a667289792c2b3f548d6d.1682685294.git.rahul.singh@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 28 Apr 2023, Rahul Singh wrote:
> Xen 4.17 supports the creation of static evtchns. To allow user space
> applications to bind static evtchns, introduce a new ioctl,
> "IOCTL_EVTCHN_BIND_STATIC". The existing bind IOCTLs do more than just
> binding, which is why a new IOCTL is needed that only binds the static
> event channels.
> 
> Also, static evtchns are meant to remain available for the lifetime of
> the guest. When the application exits, __unbind_from_irq() ends up being
> called from the release() fop, and as a result static evtchns get
> closed. To avoid that, add a new bool field "is_static" to "struct
> irq_info" that marks the event channel as static when it is created, so
> that the static evtchn is not closed.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>

I think the patch is OK but evtchn_bind_to_user on the error path calls
EVTCHNOP_close. Could that be a problem for static evtchns? I wonder if
we need to skip that EVTCHNOP_close call too.


err:
	/* bind failed, should close the port now */
	close.port = port;
	if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0)
		BUG();
	del_evtchn(u, evtchn);


> ---
>  drivers/xen/events/events_base.c |  7 +++++--
>  drivers/xen/evtchn.c             | 22 +++++++++++++++++-----
>  include/uapi/xen/evtchn.h        |  9 +++++++++
>  include/xen/events.h             |  2 +-
>  4 files changed, 32 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index c7715f8bd452..31f2d3634ad5 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -112,6 +112,7 @@ struct irq_info {
>  	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
>  	u64 eoi_time;           /* Time in jiffies when to EOI. */
>  	raw_spinlock_t lock;
> +	u8 is_static;           /* Is event channel static */
>  
>  	union {
>  		unsigned short virq;
> @@ -982,7 +983,8 @@ static void __unbind_from_irq(unsigned int irq)
>  		unsigned int cpu = cpu_from_irq(irq);
>  		struct xenbus_device *dev;
>  
> -		xen_evtchn_close(evtchn);
> +		if (!info->is_static)
> +			xen_evtchn_close(evtchn);
>  
>  		switch (type_from_irq(irq)) {
>  		case IRQT_VIRQ:
> @@ -1574,7 +1576,7 @@ int xen_set_irq_priority(unsigned irq, unsigned priority)
>  }
>  EXPORT_SYMBOL_GPL(xen_set_irq_priority);
>  
> -int evtchn_make_refcounted(evtchn_port_t evtchn)
> +int evtchn_make_refcounted(evtchn_port_t evtchn, bool is_static)
>  {
>  	int irq = get_evtchn_to_irq(evtchn);
>  	struct irq_info *info;
> @@ -1590,6 +1592,7 @@ int evtchn_make_refcounted(evtchn_port_t evtchn)
>  	WARN_ON(info->refcnt != -1);
>  
>  	info->refcnt = 1;
> +	info->is_static = is_static;
>  
>  	return 0;
>  }
> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
> index c99415a70051..47681d4c696b 100644
> --- a/drivers/xen/evtchn.c
> +++ b/drivers/xen/evtchn.c
> @@ -366,7 +366,8 @@ static int evtchn_resize_ring(struct per_user_data *u)
>  	return 0;
>  }
>  
> -static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port)
> +static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port,
> +			bool is_static)
>  {
>  	struct user_evtchn *evtchn;
>  	struct evtchn_close close;
> @@ -402,7 +403,7 @@ static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port)
>  	if (rc < 0)
>  		goto err;
>  
> -	rc = evtchn_make_refcounted(port);
> +	rc = evtchn_make_refcounted(port, is_static);
>  	return rc;
>  
>  err:
> @@ -456,7 +457,7 @@ static long evtchn_ioctl(struct file *file,
>  		if (rc != 0)
>  			break;
>  
> -		rc = evtchn_bind_to_user(u, bind_virq.port);
> +		rc = evtchn_bind_to_user(u, bind_virq.port, false);
>  		if (rc == 0)
>  			rc = bind_virq.port;
>  		break;
> @@ -482,7 +483,7 @@ static long evtchn_ioctl(struct file *file,
>  		if (rc != 0)
>  			break;
>  
> -		rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
> +		rc = evtchn_bind_to_user(u, bind_interdomain.local_port, false);
>  		if (rc == 0)
>  			rc = bind_interdomain.local_port;
>  		break;
> @@ -507,7 +508,7 @@ static long evtchn_ioctl(struct file *file,
>  		if (rc != 0)
>  			break;
>  
> -		rc = evtchn_bind_to_user(u, alloc_unbound.port);
> +		rc = evtchn_bind_to_user(u, alloc_unbound.port, false);
>  		if (rc == 0)
>  			rc = alloc_unbound.port;
>  		break;
> @@ -536,6 +537,17 @@ static long evtchn_ioctl(struct file *file,
>  		break;
>  	}
>  
> +	case IOCTL_EVTCHN_BIND_STATIC: {
> +		struct ioctl_evtchn_bind bind;
> +
> +		rc = -EFAULT;
> +		if (copy_from_user(&bind, uarg, sizeof(bind)))
> +			break;
> +
> +		rc = evtchn_bind_to_user(u, bind.port, true);
> +		break;
> +	}
> +
>  	case IOCTL_EVTCHN_NOTIFY: {
>  		struct ioctl_evtchn_notify notify;
>  		struct user_evtchn *evtchn;
> diff --git a/include/uapi/xen/evtchn.h b/include/uapi/xen/evtchn.h
> index 7fbf732f168f..aef2b75f3413 100644
> --- a/include/uapi/xen/evtchn.h
> +++ b/include/uapi/xen/evtchn.h
> @@ -101,4 +101,13 @@ struct ioctl_evtchn_restrict_domid {
>  	domid_t domid;
>  };
>  
> +/*
> + * Bind statically allocated @port.
> + */
> +#define IOCTL_EVTCHN_BIND_STATIC			\
> +	_IOC(_IOC_NONE, 'E', 7, sizeof(struct ioctl_evtchn_bind))
> +struct ioctl_evtchn_bind {
> +	unsigned int port;
> +};
> +
>  #endif /* __LINUX_PUBLIC_EVTCHN_H__ */
> diff --git a/include/xen/events.h b/include/xen/events.h
> index 44c2855c76d1..962f0bbc7ce1 100644
> --- a/include/xen/events.h
> +++ b/include/xen/events.h
> @@ -69,7 +69,7 @@ int xen_set_irq_priority(unsigned irq, unsigned priority);
>  /*
>   * Allow extra references to event channels exposed to userspace by evtchn
>   */
> -int evtchn_make_refcounted(evtchn_port_t evtchn);
> +int evtchn_make_refcounted(evtchn_port_t evtchn, bool is_static);
>  int evtchn_get(evtchn_port_t evtchn);
>  void evtchn_put(evtchn_port_t evtchn);
>  
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Sat May 06 00:54:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 00:54:00 +0000
From: "Michael Kelley (LINUX)" <mikelley@microsoft.com>
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
CC: "x86@kernel.org" <x86@kernel.org>, David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini
	<pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom Lendacky
	<thomas.lendacky@amd.com>, Sean Christopherson <seanjc@google.com>, Oleksandr
 Natalenko <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
	<lucjan.lucjanov@gmail.com>, Usama Arif <usama.arif@bytedance.com>, Juergen
 Gross <jgross@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Russell
 King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>, Catalin Marinas
	<catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
	<guoren@kernel.org>, "linux-csky@vger.kernel.org"
	<linux-csky@vger.kernel.org>, Thomas Bogendoerfer
	<tsbogend@alpha.franken.de>, "linux-mips@vger.kernel.org"
	<linux-mips@vger.kernel.org>, "James E.J. Bottomley"
	<James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>,
	"linux-parisc@vger.kernel.org" <linux-parisc@vger.kernel.org>, Paul Walmsley
	<paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
	"linux-riscv@lists.infradead.org" <linux-riscv@lists.infradead.org>, Mark
 Rutland <Mark.Rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
Subject: RE: [patch V2 38/38] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
Date: Sat, 6 May 2023 00:53:50 +0000
Message-ID:
 <BYAPR21MB168869144087644F89BAC89FD7739@BYAPR21MB1688.namprd21.prod.outlook.com>
References: <20230504185733.126511787@linutronix.de>
 <20230504185938.393373946@linutronix.de>
In-Reply-To: <20230504185938.393373946@linutronix.de>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

From: Thomas Gleixner <tglx@linutronix.de> Sent: Thursday, May 4, 2023 12:03 PM
> 
> Implement the validation function which tells the core code whether
> parallel bringup is possible.
> 
> The only condition for now is that the kernel does not run in an encrypted
> guest as these will trap the RDMSR via #VC, which cannot be handled at that
> point in early startup.
> 
> There was an earlier variant for AMD-SEV which used the GHBC protocol for
> retrieving the APIC ID via CPUID, but there is no guarantee that the
> initial APIC ID in CPUID is the same as the real APIC ID. There is no
> enforcement from the secure firmware and the hypervisor can assign APIC IDs
> as it sees fit as long as the ACPI/MADT table is consistent with that
> assignment.
> 
> Unfortunately there is no RDMSR GHCB protocol at the moment, so enabling
> AMD-SEV guests for parallel startup needs some more thought.
> 
> Intel-TDX provides a secure RDMSR hypercall, but supporting that is outside
> the scope of this change.
> 
> Fixup announce_cpu() as e.g. on Hyper-V CPU1 is the secondary sibling of
> CPU0, which makes the @cpu == 1 logic in announce_cpu() fall apart.
> 
> [ mikelley: Reported the announce_cpu() fallout
> 
> Originally-by: David Woodhouse <dwmw@amazon.co.uk>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
> V2: Fixup announce_cpu() - Michael Kelley
> ---
>  arch/x86/Kconfig             |    3 +
>  arch/x86/kernel/cpu/common.c |    6 ---
>  arch/x86/kernel/smpboot.c    |   83 ++++++++++++++++++++++++++++++++++++++++----
> ---

[snip]

> @@ -934,10 +961,10 @@ static void announce_cpu(int cpu, int ap
>  	if (!node_width)
>  		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */
> 
> -	if (cpu == 1)
> -		printk(KERN_INFO "x86: Booting SMP configuration:\n");
> -
>  	if (system_state < SYSTEM_RUNNING) {
> +			if (num_online_cpus() == 1)

Unfortunately, this new check doesn't work.  Here's the output I get:

[    0.721384] smp: Bringing up secondary CPUs ...
[    0.725359] smpboot: x86: Booting SMP configuration:
[    0.729249] .... node  #0, CPUs:        #2
[    0.729654] smpboot: x86: Booting SMP configuration:
[    0.737247]       #4
[    0.737511] smpboot: x86: Booting SMP configuration:
[    0.741246]       #6
[    0.741508] smpboot: x86: Booting SMP configuration:
[    0.745248]       #8
[    0.745507] smpboot: x86: Booting SMP configuration:
[    0.749250]      #10
[    0.749514] smpboot: x86: Booting SMP configuration:
[    0.753248]      #12
[    0.753492] smpboot: x86: Booting SMP configuration:
[    0.757249]      #14  #1  #3  #5  #7  #9 #11 #13 #15
[    0.769317] smp: Brought up 1 node, 16 CPUs
[    0.773246] smpboot: Max logical packages: 1
[    0.777257] smpboot: Total of 16 processors activated (78253.79 BogoMIPS)

Evidently num_online_cpus() isn't updated until after all the primary
siblings get started.

When booting with cpuhp.parallel=0, the output is good.

Michael

> +			pr_info("x86: Booting SMP configuration:\n");
> +
>  		if (node != current_node) {
>  			if (current_node > (-1))
>  				pr_cont("\n");
> @@ -948,7 +975,7 @@ static void announce_cpu(int cpu, int ap
>  		}
> 
>  		/* Add padding for the BSP */
> -		if (cpu == 1)
> +		if (num_online_cpus() == 1)
>  			pr_cont("%*s", width + 1, " ");
> 
>  		pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);


From xen-devel-bounces@lists.xenproject.org Sat May 06 01:13:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 01:13:49 +0000
Date: Fri, 5 May 2023 18:13:25 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Mark Syms <mark.syms@citrix.com>
cc: qemu-devel@nongnu.org, sstabellini@kernel.org, anthony.perard@citrix.com, 
    paul@xen.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH 0/1] Updated: Ensure PV ring is drained on disconnect
In-Reply-To: <20230420102014.647446-1-mark.syms@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2305051759260.974517@ubuntu-linux-20-04-desktop>
References: <20230329105344.3465706-2-mark.syms@citrix.com> <20230420102014.647446-1-mark.syms@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 20 Apr 2023, Mark Syms wrote:
> Updated patch to address intermittent SIGSEGV on domain disconnect/shutdown.
> 
> Mark Syms (1):
>   Ensure the PV ring is drained on disconnect
> 
>  hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
>  1 file changed, 25 insertions(+), 6 deletions(-)
> 
> -- 
> 2.40.0
> 
> From 21724baa15a72534d98aa2653e9ec39e83559319 Mon Sep 17 00:00:00 2001
> From: Mark Syms <mark.syms@citrix.com>
> Date: Thu, 20 Apr 2023 11:08:34 +0100
> Subject: [PATCH 1/1] Ensure the PV ring is drained on disconnect
> 
> Also ensure all pending AIO is complete.

Hi Mark, can you please add more info on the problem you are trying to
solve? Also include any stack trace you get as a result of this error.


> Signed-off-by: Mark Syms <mark.syms@citrix.com>
> ---
>  hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
>  1 file changed, 25 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
> index 734da42ea7..d9da4090bf 100644
> --- a/hw/block/dataplane/xen-block.c
> +++ b/hw/block/dataplane/xen-block.c
> @@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
>  
>      dataplane->more_work = 0;
>  
> +    if (dataplane->sring == 0) {
> +        return done_something;

done_something cannot have changed by this point, so I would just do:

    return false;


> +    }
> +
>      rc = dataplane->rings.common.req_cons;
>      rp = dataplane->rings.common.sring->req_prod;
>      xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
> @@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
>  void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
>  {
>      XenDevice *xendev;
> +    XenBlockRequest *request, *next;
>  
>      if (!dataplane) {
>          return;
>      }
>  
> +    /* We're about to drain the ring. We can cancel the scheduling of any
> +     * bottom half now */
> +    qemu_bh_cancel(dataplane->bh);
> +
> +    /* Ensure we have drained the ring */
> +    aio_context_acquire(dataplane->ctx);

Would it make sense to move the two loops below under the existing
aio_context_acquire further down?


> +    do {
> +        xen_block_handle_requests(dataplane);
> +    } while (dataplane->more_work);
> +    aio_context_release(dataplane->ctx);
> +
> +    /* Now ensure that all inflight requests are complete */
> +    while (!QLIST_EMPTY(&dataplane->inflight)) {
> +        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
> +            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
> +                        request);
> +        }
> +    }

especially because I would think that blk_aio_flush needs to be called
with the AioContext acquired?



>      xendev = dataplane->xendev;
>  
>      aio_context_acquire(dataplane->ctx);
> +

move the new code here


>      if (dataplane->event_channel) {
>          /* Only reason for failure is a NULL channel */
>          xen_device_set_event_channel_context(xendev, dataplane->event_channel,
> @@ -684,12 +709,6 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
>      blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
>      aio_context_release(dataplane->ctx);
>  
> -    /*
> -     * Now that the context has been moved onto the main thread, cancel
> -     * further processing.
> -     */
> -    qemu_bh_cancel(dataplane->bh);
> -
>      if (dataplane->event_channel) {
>          Error *local_err = NULL;
>  
> -- 
> 2.40.0
> 


From xen-devel-bounces@lists.xenproject.org Sat May 06 04:36:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 04:36:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530804.826544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv9f0-0002Zp-J3; Sat, 06 May 2023 04:36:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530804.826544; Sat, 06 May 2023 04:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pv9f0-0002Zi-FX; Sat, 06 May 2023 04:36:42 +0000
Received: by outflank-mailman (input) for mailman id 530804;
 Sat, 06 May 2023 04:36:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pv9ez-0002ZS-6l; Sat, 06 May 2023 04:36:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pv9ez-00033T-1g; Sat, 06 May 2023 04:36:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pv9ey-00078l-Hz; Sat, 06 May 2023 04:36:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pv9ey-0006ex-Gw; Sat, 06 May 2023 04:36:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z7XMCM2W4HZR0TEuWoFJSBnV01a+YUyw0LPxBUDiDOs=; b=3D7agQYKaZZ4UNBW+B34ZJereG
	qroaAijTG9dNffxP07MwWp4BOTgNMRbnCONtZGWvzp+vrW631CgiWuHpO8WCVEeS/eEMNSU8f+qQU
	ihGGe4g4HDDRa5vSfWjOEmmnXir2gGbawAE5Ixtl1KTx2BMnioMW221eIM+OSXy6LRTk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180546-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180546: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a9fe9e191b4305b88c356a1ed9ac3baf89eb18aa
X-Osstest-Versions-That:
    qemuu=f6b761bdbd8ba63cee7428d52fb6b46e4224ddab
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 May 2023 04:36:40 +0000

flight 180546 qemu-mainline real [real]
flight 180550 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180546/
http://logs.test-lab.xenproject.org/osstest/logs/180550/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install fail pass in 180550-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop   fail in 180550 like 180536
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install             fail like 180536
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180536
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180536
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180536
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180536
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180536
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180536
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180536
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a9fe9e191b4305b88c356a1ed9ac3baf89eb18aa
baseline version:
 qemuu                f6b761bdbd8ba63cee7428d52fb6b46e4224ddab

Last test of basis   180536  2023-05-05 00:40:15 Z    1 days
Testing same since   180546  2023-05-05 15:15:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Alistair Francis <alistair.francis@wdc.com>
  Bin Meng <bmeng@tinylab.org>
  Conor Dooley <conor.dooley@microchip.com>
  Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Fei Wu <fei2.wu@intel.com>
  Irina Ryapolova <irina.ryapolova@syntacore.com>
  Ivan Klokov <ivan.klokov@syntacore.com>
  Junqiang Wang <wangjunqiang@iscas.ac.cn>
  LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
  Mayuresh Chitale <mchitale@ventanamicro.com>
  Philipp Tomsich <philipp.tomsich@vrull.eu>
  Rahul Pathak <rpathak@ventanamicro.com>
  Richard Henderson <richard.henderson@linaro.org>
  Weiwei Li <liweiwei@iscas.ac.cn>
  Yi Chen <chenyi2000@zju.edu.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   f6b761bdbd..a9fe9e191b  a9fe9e191b4305b88c356a1ed9ac3baf89eb18aa -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat May 06 06:23:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 06:23:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530812.826555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvBKV-0005sJ-2G; Sat, 06 May 2023 06:23:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530812.826555; Sat, 06 May 2023 06:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvBKU-0005sC-V2; Sat, 06 May 2023 06:23:38 +0000
Received: by outflank-mailman (input) for mailman id 530812;
 Sat, 06 May 2023 06:23:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvBKU-0005s0-9V; Sat, 06 May 2023 06:23:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvBKT-0006pD-Cy; Sat, 06 May 2023 06:23:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvBKS-0001tB-Qs; Sat, 06 May 2023 06:23:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvBKS-0006ss-QI; Sat, 06 May 2023 06:23:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vbP5B3N524j7AcPgLpPl9qdVs/36ZP1Nla6Fb2xdiss=; b=KSPxPeNZZ8mnN+4ZM6YkYE2AFT
	P/FQxDwQPVd6ktgpSoL60Br9cRqGK0IU7UhVFn9pr1ltupBByy3yzlF0qL+/VTNuTutHCV+vDYYbB
	kJiCVbxCcsz3t5pyCHvFqwVJQtESqwH8mHW4ZY9239wXDu5Km9fTb1m8HT33WDB570vY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180551-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180551: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=eabaeb0613c0b459db950ab655f99ada9389d0cf
X-Osstest-Versions-That:
    ovmf=b65c0eed6bc028388d790fe4e30a76770ebb46c4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 May 2023 06:23:36 +0000

flight 180551 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180551/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 eabaeb0613c0b459db950ab655f99ada9389d0cf
baseline version:
 ovmf                 b65c0eed6bc028388d790fe4e30a76770ebb46c4

Last test of basis   180545  2023-05-05 14:12:15 Z    0 days
Testing same since   180551  2023-05-06 04:10:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b65c0eed6b..eabaeb0613  eabaeb0613c0b459db950ab655f99ada9389d0cf -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 06 06:51:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 06:51:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530818.826565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvBle-00011k-8v; Sat, 06 May 2023 06:51:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530818.826565; Sat, 06 May 2023 06:51:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvBle-00011d-60; Sat, 06 May 2023 06:51:42 +0000
Received: by outflank-mailman (input) for mailman id 530818;
 Sat, 06 May 2023 06:51:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D3GX=A3=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pvBlc-00011S-Q3
 for xen-devel@lists.xenproject.org; Sat, 06 May 2023 06:51:40 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0606.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::606])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70754649-ebda-11ed-8611-37d641c3527e;
 Sat, 06 May 2023 08:51:36 +0200 (CEST)
Received: from AM5PR0201CA0005.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::15) by DBBPR08MB6091.eurprd08.prod.outlook.com
 (2603:10a6:10:1f4::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.29; Sat, 6 May
 2023 06:51:28 +0000
Received: from AM7EUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::ea) by AM5PR0201CA0005.outlook.office365.com
 (2603:10a6:203:3d::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.29 via Frontend
 Transport; Sat, 6 May 2023 06:51:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT011.mail.protection.outlook.com (100.127.140.81) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6387.12 via Frontend Transport; Sat, 6 May 2023 06:51:27 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Sat, 06 May 2023 06:51:27 +0000
Received: from 67331670f6fd.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B0709909-C424-4563-9817-60DCB1B43D7A.1; 
 Sat, 06 May 2023 06:51:21 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 67331670f6fd.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 06 May 2023 06:51:21 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB7744.eurprd08.prod.outlook.com (2603:10a6:20b:508::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.29; Sat, 6 May
 2023 06:51:18 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::bce1:f206:86af:31be%6]) with mapi id 15.20.6363.029; Sat, 6 May 2023
 06:51:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70754649-ebda-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mVyhQTvLKKOubA6U89Gm0zkMBBxJJEVKwnYtVKMTve0=;
 b=eNTxJo0yuJaMfu4c0QlMvY8ODb3fzzGuyz2IRTpXd9INvjl0oTStIydtUyIXgH82yWgQyFAr246gkn93msU/sTetUD/+am2/E7DQ/oW8OjyXlikigI7HRhXea+36A1k8gBnFvLJHnyZQtA4ni09rqp2YYuLFgsVg4mvqfGYml6s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: b26a26440f20b42d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=grr97IDhQhDq7Cu1TiCy+vt8VtHmScGWTNrvAV14xsq2wrVq5bHYu9FNxfe+YBUiMdDMgcctMve8ELIuyxVPU+yVynQ0kpNY3z7/yEqdib1c+3cIZvN8hNkxZy5TpzZq3CQKuxNTodQ4HsneMv5UDhzP7jnusQ9buu6cYJD1Z7Nj2sxTXUIhgRekYmxP5Rqj8Wr8SiOrXyo5PXuvsEoOcH2aTIJjyHJ45lpFfYtX5Hzp3qsSz3luqH/RHwWK3KxkldXRGjd78YhSsZxeGXkLKMX6+8a9/sqkFotRhkgcbaF6LRQV9L/b6tkwZpEY96brwGWe6jz2v33j054aNISmZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mVyhQTvLKKOubA6U89Gm0zkMBBxJJEVKwnYtVKMTve0=;
 b=UQU9cnvjdMF2cDYT3Oqj/U3tS3tFsuwFAFewLNZk2RF/y/5/4LNrwVU2QJl3WSRrRnVg7JLdFk7r3JjoxXsWMFD4MvSaR49hzm0YIMoVRdOkzBNV3j8Y7d5V5mHlkW00E1D4rv2IEP14nvmNuXAjXbMuLgvUVHn32nOyrvVL33TimoAhk7udlIbrZ10Aq+NxucgKQZO9Pk1oiFAXrbppe9QXE7soBtDTVvE1w9jLzAT6wAt7zwfaZ48Q+UHpH+il19l6ZrbPNy7TPWEr+cQmGlnYPLa+CrfOPZXoChsKw4n0tzRQBK2Ce21VVz5mWmgvzAUgbPdSA6xSQGKwxTPH4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 1/2] xen/misra: add diff-report.py tool
Thread-Topic: [PATCH 1/2] xen/misra: add diff-report.py tool
Thread-Index: AQHZfpRaHhveqUOI3U+Goaz/TE5FAK9M0RwA
Date: Sat, 6 May 2023 06:51:18 +0000
Message-ID: <A5E88DB7-BEA0-4B44-B52B-7824026F2BF7@arm.com>
References: <20230504142523.2989306-1-luca.fancellu@arm.com>
 <20230504142523.2989306-2-luca.fancellu@arm.com>
In-Reply-To: <20230504142523.2989306-2-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB7744:EE_|AM7EUR03FT011:EE_|DBBPR08MB6091:EE_
X-MS-Office365-Filtering-Correlation-Id: 8bbe4589-4068-4e37-18e0-08db4dfe5101
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <F34B26E53DDB494E946FC492246E0D1C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB7744
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0c019ed6-69ab-475e-f43d-08db4dfe4b71
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 May 2023 06:51:27.8334
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8bbe4589-4068-4e37-18e0-08db4dfe5101
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6091

> On 4 May 2023, at 15:25, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> Add a new tool, diff-report.py that can be used to make diff between
> reports generated by xen-analysis.py tool.
> Currently this tool supports the Xen cppcheck text report format in
> its operations.
> 
> The tool prints every finding that is in the report passed with -r
> (check report) which is not in the report passed with -b (baseline).
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> xen/scripts/diff-report.py                    |  76 ++++++++++++
> .../xen_analysis/diff_tool/__init__.py        |   0
> .../xen_analysis/diff_tool/cppcheck_report.py |  41 +++++++
> xen/scripts/xen_analysis/diff_tool/debug.py   |  36 ++++++
> xen/scripts/xen_analysis/diff_tool/report.py  | 114 ++++++++++++++++++
> 5 files changed, 267 insertions(+)
> create mode 100755 xen/scripts/diff-report.py
> create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
> create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
> create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
> create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py
> 
> diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
> new file mode 100755
> index 000000000000..4913fb43a8f9
> --- /dev/null
> +++ b/xen/scripts/diff-report.py
> @@ -0,0 +1,76 @@
> +#!/usr/bin/env python3
> +
> +import os, sys
> +from argparse import ArgumentParser
> +from xen_analysis.diff_tool.debug import Debug
> +from xen_analysis.diff_tool.report import ReportError
> +from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
> +
> +
> +def log_info(text, end='\n'):
> +    global args
> +    global file_out
> +
> +    if (args.verbose):
> +        print(text, end=end, file=file_out)
> +
> +
> +def main(argv):
> +    global args
> +    global file_out
> +
> +    parser = ArgumentParser(prog="diff-report.py")
> +    parser.add_argument("-b", "--baseline", required=True, type=str,
> +                        help="Path to the baseline report.")
> +    parser.add_argument("--debug", action='store_true',
> +                        help="Produce intermediate reports during operations.")
> +    parser.add_argument("-o", "--out", default="stdout", type=str,
> +                        help="Where to print the tool output. Default is "
> +                             "stdout")
> +    parser.add_argument("-r", "--report", required=True, type=str,
> +                        help="Path to the 'check report', the one checked "
> +                             "against the baseline.")
> +    parser.add_argument("-v", "--verbose", action='store_true',
> +                        help="Print more informations during the run.")
> +
> +    args = parser.parse_args()
> +
> +    if args.out == "stdout":
> +        file_out = sys.stdout
> +    else:
> +        try:
> +            file_out = open(args.out, "wt")
> +        except OSError as e:
> +            print("ERROR: Issue opening file {}: {}".format(args.out, e))
> +            sys.exit(1)
> +
> +    debug = Debug(args)
> +
> +    try:
> +        baseline_path = os.path.realpath(args.baseline)
> +        log_info("Loading baseline report {}".format(baseline_path), "")
> +        baseline = CppcheckReport(baseline_path)
> +        baseline.parse()
> +        debug.debug_print_parsed_report(baseline)
> +        log_info(" [OK]")
> +        new_rep_path = os.path.realpath(args.report)
> +        log_info("Loading check report {}".format(new_rep_path), "")
> +        new_rep = CppcheckReport(new_rep_path)
> +        new_rep.parse()
> +        debug.debug_print_parsed_report(new_rep)
> +        log_info(" [OK]")
> +    except ReportError as e:
> +        print("ERROR: {}".format(e))
> +        sys.exit(1)
> +
> +    output = new_rep - baseline
> +    print(output, end="", file=file_out)
> +
> +    if len(output) > 0:
> +        sys.exit(1)
> +
> +    sys.exit(0)
> +
> +
> +if __name__ == "__main__":
> +    main(sys.argv[1:])
> diff --git a/xen/scripts/xen_analysis/diff_tool/__init__.py b/xen/scripts/xen_analysis/diff_tool/__init__.py
> new file mode 100644
> index 000000000000..e69de29bb2d1
> diff --git a/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
> new file mode 100644
> index 000000000000..787a51aca583
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
> @@ -0,0 +1,41 @@
> +#!/usr/bin/env python3
> +
> +import re
> +from .report import Report, ReportError
> +
> +
> +class CppcheckReport(Report):
> +    def __init__(self, report_path: str) -> None:
> +        super().__init__(report_path)
> +        # This matches a string like:
> +        # path/to/file.c(<line number>,<digits>):<whatever>
> +        # and captures file name path and line number
> +        # the last capture group is used for text substitution in __str__
> +        self.__report_entry_regex = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')
> +
> +    def parse(self) -> None:
> +        report_path = self.get_report_path()
> +        try:
> +            with open(report_path, "rt") as infile:
> +                report_lines = infile.readlines()
> +        except OSError as e:
> +            raise ReportError("Issue with reading file {}: {}"
> +                              .format(report_path, e))
> +        for line in report_lines:
> +            entry = self.__report_entry_regex.match(line)
> +            if entry and entry.group(1) and entry.group(2):
> +                file_path = entry.group(1)
> +                line_number = int(entry.group(2))
> +                self.add_entry(file_path, line_number, line)
> +            else:
> +                raise ReportError("Malformed report entry in file {}:\n{}"
> +                                  .format(report_path, line))
> +
> +    def __str__(self) -> str:
> +        ret = ""
> +        for entry in self.to_list():
> +            ret += re.sub(self.__report_entry_regex,
> +                          r'{}({}\3'.format(entry.file_path,
> +                                            entry.line_number),
> +                          entry.text)
> +        return ret
> diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
> new file mode 100644
> index 000000000000..d46df3300d21
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/debug.py
> @@ -0,0 +1,36 @@
> +#!/usr/bin/env python3
> +
> +import os
> +from .report import Report
> +
> +
> +class Debug:
> +    def __init__(self, args):
> +        self.args = args
> +
> +    def __get_debug_out_filename(self, path: str, type: str) -> str:
> +        # Take basename
> +        file_name = os.path.basename(path)
> +        # Split in name and extension
> +        file_name = os.path.splitext(file_name)
> +        if self.args.out != "stdout":
> +            out_folder = os.path.dirname(self.args.out)
> +        else:
> +            out_folder = "./"
> +        dbg_report_path = out_folder + file_name[0] + type + file_name[1]
> +
> +        return dbg_report_path
> +
> +    def __debug_print_report(self, report: Report, type: str) -> None:
> +        report_name = self.__get_debug_out_filename(report.get_report_path(),
> +                                                    type)
> +        try:
> +            with open(report_name, "wt") as outfile:
> +                print(report, end="", file=outfile)
> +        except OSError as e:
> +            print("ERROR: Issue opening file {}: {}".format(report_name, e))
> +
> +    def debug_print_parsed_report(self, report: Report) -> None:
> +        if not self.args.debug:
> +            return
> +        self.__debug_print_report(report, ".parsed")
> diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
> new file mode 100644
> index 000000000000..d958d1816eb4
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/report.py
> @@ -0,0 +1,114 @@
> +#!/usr/bin/env python3
> +
> +import os
> +
> +
> +class ReportError(Exception):
> +    pass
> +
> +
> +class Report:
> +    class ReportEntry:
> +        def __init__(self, file_path: str, line_number: int,
> +                     entry_text: list, line_id: int) -> None:
> +            if not isinstance(line_number, int) or \
> +               not isinstance(line_id, int):
> +                raise ReportError("ReportEntry constructor wrong type args")
> +            self.file_path = file_path
> +            self.line_number = line_number
> +            self.text = entry_text
> +            self.line_id = line_id
> +

I realise now that I had a rebase mistake here, the class ReportEntry should not
have these two functions, it was part of another branch, I will fix this in the next
version, in the mean time I’ll wait for other possible findings for this patch.

<delete from here>

> +        def __str__(self) -> str:
> +            ret = ''
> +            header = 'File path:Count\n'
> +
> +            for path in self.stats:
> +                ret += f'{path}: {len(self.stats[path])}\n'
> +
> +            if ret == '':
> +                ret += 'No new issues introduced\n'
> +
> +            ret = header + ret
> +
> +            return ret
> +
> +        def __len__(self) -> int:
> +            ret = 0
> +
> +            for ln_list in self.stats.values():
> +                ret += len(ln_list)
> +
> +            return ret

<to here>

> +
> +    def __init__(self, report_path: str) -> None:
> +        self.__entries = {}
> +        self.__path = report_path
> +        self.__last_line_order = 0
> +
> +    def parse(self) -> None:
> +        raise ReportError("Please create a specialised class from 'Report'.")
> +
> +    def get_report_path(self) -> str:
> +        return self.__path
> +
> +    def get_report_entries(self) -> dict:
> +        return self.__entries
> +
> +    def add_entry(self, entry_path: str, entry_line_number: int,
> +                  entry_text: list) -> None:
> +        entry = Report.ReportEntry(entry_path, entry_line_number, entry_text,
> +                                   self.__last_line_order)
> +        if entry_path in self.__entries.keys():
> +            self.__entries[entry_path].append(entry)
> +        else:
> +            self.__entries[entry_path] = [entry]
> +        self.__last_line_order += 1
> +
> +    def to_list(self) -> list:
> +        report_list = []
> +        for _, entries in self.__entries.items():
> +            for entry in entries:
> +                report_list.append(entry)
> +
> +        report_list.sort(key=lambda x: x.line_id)
> +        return report_list
> +
> +    def __str__(self) -> str:
> +        ret = ""
> +        for entry in self.to_list():
> +            ret += entry.file_path + ":" + entry.line_number + ":" + entry.text
> +
> +        return ret
> +
> +    def __len__(self) -> int:
> +        return len(self.to_list())
> +
> +    def __sub__(self, report_b: 'Report') -> 'Report':
> +        if self.__class__ != report_b.__class__:
> +            raise ReportError("Diff of different type of report!")
> +
> +        filename, file_extension = os.path.splitext(self.__path)
> +        diff_report = self.__class__(filename + ".diff" + file_extension)
> +        # Put in the diff report only records of this report that are not
> +        # present in the report_b.
> +        for file_path, entries in self.__entries.items():
> +            rep_b_entries = report_b.get_report_entries()
> +            if file_path in rep_b_entries.keys():
> +                # File path exists in report_b, so check what entries of that
> +                # file path doesn't exist in report_b and add them to the diff
> +                rep_b_entries_num = [
> +                    x.line_number for x in rep_b_entries[file_path]
> +                ]
> +                for entry in entries:
> +                    if entry.line_number not in rep_b_entries_num:
> +                        diff_report.add_entry(file_path, entry.line_number,
> +                                              entry.text)
> +            else:
> +                # File path doesn't exist in report_b, so add every entry
> +                # of that file path to the diff
> +                for entry in entries:
> +                    diff_report.add_entry(file_path, entry.line_number,
> +                                          entry.text)
> +
> +        return diff_report
> -- 
> 2.34.1
> 
> 
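[Editor's note: the entry regex that the patch's cppcheck_report.py compiles can be exercised on its own; the sample finding line below is made up for illustration, while the pattern itself is the one added by the patch.]

```python
import re

# Pattern from the patch's CppcheckReport: group 1 captures the file
# path, group 2 the line number, and group 3 keeps the tail of the
# finding so __str__ can substitute it back after path and line number.
report_entry_regex = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')

# Hypothetical cppcheck text-report line in the expected shape
sample = "xen/common/memory.c(123,10): style: example finding text"

entry = report_entry_regex.match(sample)
assert entry is not None
print(entry.group(1))       # file path
print(int(entry.group(2)))  # line number
```

Because the path group is greedy, the split happens at the last `(` that still allows a line number to follow, so parentheses inside the path do not confuse it.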

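[Editor's note: the subtraction implemented by the patch's Report.__sub__ reduces to a per-file difference on line numbers; the sketch below shows only that rule. The dict-of-lists shape and the function name are illustrative, not the patch's API.]

```python
# An entry of the check report is kept only if its (file path, line
# number) pair does not also appear in the baseline report.
def diff_entries(check: dict, baseline: dict) -> dict:
    diff = {}
    for path, line_numbers in check.items():
        base = set(baseline.get(path, []))
        kept = [n for n in line_numbers if n not in base]
        if kept:
            diff[path] = kept
    return diff

check = {"a.c": [10, 20], "b.c": [5]}
baseline = {"a.c": [10]}
print(diff_entries(check, baseline))  # {'a.c': [20], 'b.c': [5]}
```

Note that, like the patch, this matches on line number only: a finding whose text changed but stayed on the same line would not show up in the diff.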

From xen-devel-bounces@lists.xenproject.org Sat May 06 08:33:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 08:33:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530827.826574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvDLW-0003Qj-U8; Sat, 06 May 2023 08:32:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530827.826574; Sat, 06 May 2023 08:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvDLW-0003Qc-RM; Sat, 06 May 2023 08:32:50 +0000
Received: by outflank-mailman (input) for mailman id 530827;
 Sat, 06 May 2023 08:32:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvDLV-0003QS-H5; Sat, 06 May 2023 08:32:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvDLV-0003j9-AU; Sat, 06 May 2023 08:32:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvDLU-0007TZ-Uu; Sat, 06 May 2023 08:32:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvDLU-0001MI-UY; Sat, 06 May 2023 08:32:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tVREe1QnPo5ZN1WVLsPBzvu66dAEv9NC76xuwWSaxvc=; b=bEcyJOvPumzu5xitcjpU1/j4pi
	HTFdz36xvszCatPefMHV9jxnw7emsnVpZCWpmc0o2VcCaK4brUvUk8+bOxlL7stHIujVNjwnoBdkO
	0Y1agO3PH+LC3MXVyhBHH8mJVZlMIBBJEInR+3mENZaOicVg1hcsiy7/XNglAFRzx2hU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180547-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180547: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
X-Osstest-Versions-That:
    xen=b95a72bb5b2df24ff1baaa27920e57947dc97d49
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 May 2023 08:32:48 +0000

flight 180547 xen-unstable real [real]
flight 180554 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180547/
http://logs.test-lab.xenproject.org/osstest/logs/180554/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail pass in 180554-retest
 test-amd64-i386-xl-xsm        7 xen-install         fail pass in 180554-retest
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 180554-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180537
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180537
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180537
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180537
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180537
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180537
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180537
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180537
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180537
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180537
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180537
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180537
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f
baseline version:
 xen                  b95a72bb5b2df24ff1baaa27920e57947dc97d49

Last test of basis   180537  2023-05-05 01:52:22 Z    1 days
Testing same since   180547  2023-05-05 15:40:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b95a72bb5b..e93e635e14  e93e635e142d45e3904efb4a05e2b3b52a708b4f -> master


From xen-devel-bounces@lists.xenproject.org Sat May 06 11:36:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 11:36:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530838.826597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvGDG-0004h4-0T; Sat, 06 May 2023 11:36:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530838.826597; Sat, 06 May 2023 11:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvGDF-0004gx-Tz; Sat, 06 May 2023 11:36:29 +0000
Received: by outflank-mailman (input) for mailman id 530838;
 Sat, 06 May 2023 11:36:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rQ1Y=A3=zedat.fu-berlin.de=glaubitz@srs-se1.protection.inumbo.net>)
 id 1pvGDE-0004go-KZ
 for xen-devel@lists.xenproject.org; Sat, 06 May 2023 11:36:28 +0000
Received: from outpost1.zedat.fu-berlin.de (outpost1.zedat.fu-berlin.de
 [130.133.4.66]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3ba43fe7-ec02-11ed-b226-6b7b168915f2;
 Sat, 06 May 2023 13:36:25 +0200 (CEST)
Received: from inpost2.zedat.fu-berlin.de ([130.133.4.69])
 by outpost.zedat.fu-berlin.de (Exim 4.95) with esmtps (TLS1.3)
 tls TLS_AES_256_GCM_SHA384
 (envelope-from <glaubitz@zedat.fu-berlin.de>)
 id 1pvGCa-002hVY-AY; Sat, 06 May 2023 13:35:48 +0200
Received: from p57bd9cee.dip0.t-ipconnect.de ([87.189.156.238]
 helo=suse-laptop.fritz.box) by inpost2.zedat.fu-berlin.de (Exim 4.95)
 with esmtpsa (TLS1.3) tls TLS_AES_256_GCM_SHA384
 (envelope-from <glaubitz@physik.fu-berlin.de>)
 id 1pvGCZ-000Sf9-Vs; Sat, 06 May 2023 13:35:48 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ba43fe7-ec02-11ed-b226-6b7b168915f2
Message-ID: <c0677d21a4b6caa2e5018af000294a974121d9e8.camel@physik.fu-berlin.de>
Subject: Re: [PATCH v2 30/34] sh: Convert pte_free_tlb() to use ptdescs
From: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>, Andrew Morton
	 <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, 
	linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, 
	linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, 
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, 
	linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, 
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, 
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, 
	linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, 
	kvm@vger.kernel.org, Yoshinori Sato <ysato@users.sourceforge.jp>
Date: Sat, 06 May 2023 13:35:46 +0200
In-Reply-To: <20230501192829.17086-31-vishal.moola@gmail.com>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
	 <20230501192829.17086-31-vishal.moola@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.1 
MIME-Version: 1.0
X-Original-Sender: glaubitz@physik.fu-berlin.de
X-Originating-IP: 87.189.156.238
X-ZEDAT-Hint: PO

Hi Vishal!

On Mon, 2023-05-01 at 12:28 -0700, Vishal Moola (Oracle) wrote:
> Part of the conversions to replace pgtable constructor/destructors with
> ptdesc equivalents. Also cleans up some spacing issues.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  arch/sh/include/asm/pgalloc.h | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.=
h
> index a9e98233c4d4..ce2ba99dbd84 100644
> --- a/arch/sh/include/asm/pgalloc.h
> +++ b/arch/sh/include/asm/pgalloc.h
> @@ -2,6 +2,7 @@
>  #ifndef __ASM_SH_PGALLOC_H
>  #define __ASM_SH_PGALLOC_H
> 
> +#include <linux/mm.h>
>  #include <asm/page.h>
> 
>  #define __HAVE_ARCH_PMD_ALLOC_ONE
> @@ -31,10 +32,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
>  	set_pmd(pmd, __pmd((unsigned long)page_address(pte)));
>  }
> 
> -#define __pte_free_tlb(tlb,pte,addr)			\
> -do {							\
> -	pgtable_pte_page_dtor(pte);			\
> -	tlb_remove_page((tlb), (pte));			\
> +#define __pte_free_tlb(tlb, pte, addr)				\
> +do {								\
> +	ptdesc_pte_dtor(page_ptdesc(pte));			\
> +	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
>  } while (0)
> 
>  #endif /* __ASM_SH_PGALLOC_H */

Looking at the patch which introduces tlb_remove_page_ptdesc() [1], it seems that
tlb_remove_page_ptdesc() already calls tlb_remove_page() with ptdesc_page(pt), so
I'm not sure whether the above tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)))
is correct.

Shouldn't it just be tlb_remove_page_ptdesc((tlb), (pte))?

Thanks,
Adrian

> [1] https://lore.kernel.org/linux-mm/20230417205048.15870-5-vishal.moola@gmail.com/

-- 
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer
`. `'   Physicist
  `-    GPG: 62FF 8A75 84E0 2956 9546  0006 7426 3B37 F5B5 F913


From xen-devel-bounces@lists.xenproject.org Sat May 06 13:41:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 13:41:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530853.826606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvI9m-0001CB-6w; Sat, 06 May 2023 13:41:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530853.826606; Sat, 06 May 2023 13:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvI9m-0001C4-3y; Sat, 06 May 2023 13:41:02 +0000
Received: by outflank-mailman (input) for mailman id 530853;
 Sat, 06 May 2023 13:41:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvI9k-0001Bu-NG; Sat, 06 May 2023 13:41:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvI9k-0005Mz-G4; Sat, 06 May 2023 13:41:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvI9j-0006aL-S6; Sat, 06 May 2023 13:40:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvI9j-00013k-Rb; Sat, 06 May 2023 13:40:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XEjZOjBQi4Gte2ewLu8J/PTgKbMgeaNHUDgHWjFclcY=; b=2ZX6PCzzMyaoLLuQZafzXNXtCZ
	g22t+36DSfbOXzmVg0u5twjZ5du/tM41Bi3x8ZXgCQnpvbU7xVqsuU/i+hgWpySBTF3K9/NfqfcXY
	YHOvp1/EHHNvet6uCHzTS1WZGaETQIHB9khUD01CF/EBnUdeyByhFZX++EVVj2pX97XI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180549-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180549: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=418d5c98319f67b9ae651babea031b5394425c18
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 May 2023 13:40:59 +0000

flight 180549 linux-linus real [real]
flight 180556 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180549/
http://logs.test-lab.xenproject.org/osstest/logs/180556/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180556-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                418d5c98319f67b9ae651babea031b5394425c18
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   19 days
Failing since        180281  2023-04-17 06:24:36 Z   19 days   33 attempts
Testing same since   180549  2023-05-05 22:43:18 Z    0 days    1 attempts

------------------------------------------------------------
2276 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 278899 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 06 13:48:01 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180552-mainreport@xen.org>
Subject: [libvirt test] 180552: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=9b8bb536ff999fa61e41869bd98a026b8e23378f
X-Osstest-Versions-That:
    libvirt=4419e74117b3bb17c61cd7a612b5801ab99ad547
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 May 2023 13:47:44 +0000

flight 180552 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180552/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180539
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180539
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180539
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 libvirt              9b8bb536ff999fa61e41869bd98a026b8e23378f
baseline version:
 libvirt              4419e74117b3bb17c61cd7a612b5801ab99ad547

Last test of basis   180539  2023-05-05 04:18:48 Z    1 days
Testing same since   180552  2023-05-06 04:18:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   4419e74117..9b8bb536ff  9b8bb536ff999fa61e41869bd98a026b8e23378f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 06 13:55:05 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180553-mainreport@xen.org>
Subject: [qemu-mainline test] 180553: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:build-arm64-pvops:kernel-build:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=eb5c3932a383ba1ef3a911232c644f2e053ef66c
X-Osstest-Versions-That:
    qemuu=a9fe9e191b4305b88c356a1ed9ac3baf89eb18aa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 May 2023 13:54:55 +0000

flight 180553 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180553/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 180546
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 180546
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180546

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop       fail blocked in 180546
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180546
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180546
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180546
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180546
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180546
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180546
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180546
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                eb5c3932a383ba1ef3a911232c644f2e053ef66c
baseline version:
 qemuu                a9fe9e191b4305b88c356a1ed9ac3baf89eb18aa

Last test of basis   180546  2023-05-05 15:15:28 Z    0 days
Testing same since   180553  2023-05-06 04:39:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dorinda Bassey <dbassey@redhat.com>
  Juan Quintela <quintela@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 343 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 06 16:23:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 16:23:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530907.826656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvKgF-0003GJ-NJ; Sat, 06 May 2023 16:22:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530907.826656; Sat, 06 May 2023 16:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvKgF-0003GC-Kk; Sat, 06 May 2023 16:22:43 +0000
Received: by outflank-mailman (input) for mailman id 530907;
 Sat, 06 May 2023 16:22:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=twhF=A3=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pvKgE-0003G6-K3
 for xen-devel@lists.xenproject.org; Sat, 06 May 2023 16:22:42 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38252c96-ec2a-11ed-b226-6b7b168915f2;
 Sat, 06 May 2023 18:22:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38252c96-ec2a-11ed-b226-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683390158;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NXHfBgcGHt/ZfPwa+T595dnepfFJKJnpLjR3JhBl6Q0=;
	b=1czMZh1cRjeqiQ2TZA6GbMVHHXEF93eQIVvQioB8FE/pRwMNCJ9iNbT002o3Vf8Q1PmjjT
	pMzD7AC5NROe0zWPr/xkstx8tP21FQigIYhXnCedHf48++FNK52empvCyZE4IMSf7ryah7
	wwDm68n14K3LZeFvyQ99Fwubq6JJjmTxOyIllycil0ZJfXjm8tULzw2/ivE/eeRZ3vZ2Ce
	eFzoWJEYtyiUQ3YTBmQpXzsA8QFgMuJS8CeTY8f2OfEh+NvACDq69zDAubztHsM41nl1p1
	k+Q1vY7kiaQqTaWRFrb4KJC+127qTmzlxib6lozIfkeM3gQq7L3F5SIRCIw/xg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683390158;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NXHfBgcGHt/ZfPwa+T595dnepfFJKJnpLjR3JhBl6Q0=;
	b=qtQ0xS4FhuHdv2/qWAJNtp26GqvWQxksH6Mo/Q1MgLXE9WE0n6Pih7C913p3eWGR7QXk2c
	rCYD41DUjLVIXsBw==
To: "Michael Kelley (LINUX)" <mikelley@microsoft.com>, LKML
 <linux-kernel@vger.kernel.org>
Cc: "x86@kernel.org" <x86@kernel.org>, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr
 Natalenko <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, Usama Arif <usama.arif@bytedance.com>,
 Juergen
 Gross <jgross@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Russell
 King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 "linux-arm-kernel@lists.infradead.org"
 <linux-arm-kernel@lists.infradead.org>, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, "linux-csky@vger.kernel.org"
 <linux-csky@vger.kernel.org>, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, "linux-mips@vger.kernel.org"
 <linux-mips@vger.kernel.org>, "James E.J. Bottomley"
 <James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>,
 "linux-parisc@vger.kernel.org" <linux-parisc@vger.kernel.org>, Paul
 Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 "linux-riscv@lists.infradead.org" <linux-riscv@lists.infradead.org>, Mark
 Rutland <Mark.Rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
Subject: RE: [patch V2 38/38] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
In-Reply-To: <BYAPR21MB168869144087644F89BAC89FD7739@BYAPR21MB1688.namprd21.prod.outlook.com>
References: <20230504185733.126511787@linutronix.de>
 <20230504185938.393373946@linutronix.de>
 <BYAPR21MB168869144087644F89BAC89FD7739@BYAPR21MB1688.namprd21.prod.outlook.com>
Date: Sat, 06 May 2023 18:22:37 +0200
Message-ID: <87sfc92zw2.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Sat, May 06 2023 at 00:53, Michael Kelley wrote:
> From: Thomas Gleixner <tglx@linutronix.de> Sent: Thursday, May 4, 2023 12:03 PM
> [snip]
>
>> @@ -934,10 +961,10 @@ static void announce_cpu(int cpu, int ap
>>  	if (!node_width)
>>  		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */
>> 
>> -	if (cpu == 1)
>> -		printk(KERN_INFO "x86: Booting SMP configuration:\n");
>> -
>>  	if (system_state < SYSTEM_RUNNING) {
>> +		if (num_online_cpus() == 1)
>
> Unfortunately, this new check doesn't work.  Here's the output I get:
>
> [    0.721384] smp: Bringing up secondary CPUs ...
> [    0.725359] smpboot: x86: Booting SMP configuration:
> [    0.729249] .... node  #0, CPUs:        #2
> [    0.729654] smpboot: x86: Booting SMP configuration:
> [    0.737247]       #4
>
> Evidently num_online_cpus() isn't updated until after all the primary
> siblings get started.

Duh. Where is that brown paper bag?

> When booting with cpuhp.parallel=0, the output is good.

Exactly that was on the command line when I quickly booted that change :(

The below should fix it for real.

Thanks,

        tglx
---
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -951,9 +951,9 @@ static int wakeup_secondary_cpu_via_init
 /* reduce the number of lines printed when booting a large cpu count system */
 static void announce_cpu(int cpu, int apicid)
 {
+	static int width, node_width, first = 1;
 	static int current_node = NUMA_NO_NODE;
 	int node = early_cpu_to_node(cpu);
-	static int width, node_width;
 
 	if (!width)
 		width = num_digits(num_possible_cpus()) + 1; /* + '#' sign */
@@ -962,7 +962,7 @@ static void announce_cpu(int cpu, int ap
 		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */
 
 	if (system_state < SYSTEM_RUNNING) {
-		if (num_online_cpus() == 1)
+		if (first)
 			pr_info("x86: Booting SMP configuration:\n");
 
 		if (node != current_node) {
@@ -975,11 +975,11 @@ static void announce_cpu(int cpu, int ap
 		}
 
 		/* Add padding for the BSP */
-		if (num_online_cpus() == 1)
+		if (first)
 			pr_cont("%*s", width + 1, " ");
+		first = 0;
 
 		pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);
-
 	} else
 		pr_info("Booting Node %d Processor %d APIC 0x%x\n",
 			node, cpu, apicid);


From xen-devel-bounces@lists.xenproject.org Sat May 06 19:45:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 19:45:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530919.826673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvNqZ-0006Zp-9B; Sat, 06 May 2023 19:45:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530919.826673; Sat, 06 May 2023 19:45:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvNqZ-0006Zi-6S; Sat, 06 May 2023 19:45:35 +0000
Received: by outflank-mailman (input) for mailman id 530919;
 Sat, 06 May 2023 19:45:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvNqY-0006ZY-9s; Sat, 06 May 2023 19:45:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvNqX-000678-9i; Sat, 06 May 2023 19:45:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvNqW-0007IA-RM; Sat, 06 May 2023 19:45:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvNqW-00023F-Qx; Sat, 06 May 2023 19:45:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4t9kWov/D6pD6hCwzJt5oE+rJNjPLkcdWtgnbB7IRTo=; b=wJ9/kmdCda1pYDphDxNpdSNIqp
	oRwdxGyDQwiRh3K+wdEk+1mGUYPJ2AS8lc7TeHVlHquXk+t0BuV7VekBGAhJwFymINsr5X9vJ6gYl
	ApqHdAbkwsnPcq+HNN92W9BWzvT0AxkZL0MaECEYLfDAnY11KAO4QqRWqVKAWMPQp2Go=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180555-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180555: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
X-Osstest-Versions-That:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 May 2023 19:45:32 +0000

flight 180555 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180555/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host         fail  like 180547
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180547
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180547
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180547
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180547
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180547
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180547
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180547
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180547
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180547
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180547
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180547
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180547
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f
baseline version:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f

Last test of basis   180555  2023-05-06 08:36:36 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat May 06 20:08:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 20:08:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530977.826689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvOD5-0000oX-6M; Sat, 06 May 2023 20:08:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530977.826689; Sat, 06 May 2023 20:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvOD5-0000oQ-2P; Sat, 06 May 2023 20:08:51 +0000
Received: by outflank-mailman (input) for mailman id 530977;
 Sat, 06 May 2023 20:08:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvOD4-0000oG-IR; Sat, 06 May 2023 20:08:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvOD4-0006mF-HX; Sat, 06 May 2023 20:08:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvOD4-0007vW-5y; Sat, 06 May 2023 20:08:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvOD4-0005vL-5T; Sat, 06 May 2023 20:08:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e4q7PLn1wtlnoOg3zz0Xc63GOAkt76WegN9PZoCrfnI=; b=pafL3zMv2MjWaBFAELP7ZSVngT
	NPXTWsNu5G/+Ekxe/08D6HVXWLxw4j49E9A6QjqeUwhbTU1Zm5q7DDJjABSv3YQrgOYCkhLo7yq83
	bSWgVcAsYXvkRQsmXmFyyJPGTl5f7vWG8y5Qn11dIbMgO2ZUeT7vpNwKrE3s6hZWWICQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180561-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180561: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=66494e532450c1205be93015740580c1e7b8877a
X-Osstest-Versions-That:
    ovmf=eabaeb0613c0b459db950ab655f99ada9389d0cf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 May 2023 20:08:50 +0000

flight 180561 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180561/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 66494e532450c1205be93015740580c1e7b8877a
baseline version:
 ovmf                 eabaeb0613c0b459db950ab655f99ada9389d0cf

Last test of basis   180551  2023-05-06 04:10:42 Z    0 days
Testing same since   180561  2023-05-06 18:10:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   eabaeb0613..66494e5324  66494e532450c1205be93015740580c1e7b8877a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 06 22:15:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 06 May 2023 22:15:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.530999.826704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvQBV-0005mB-Fr; Sat, 06 May 2023 22:15:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 530999.826704; Sat, 06 May 2023 22:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvQBV-0005m4-D7; Sat, 06 May 2023 22:15:21 +0000
Received: by outflank-mailman (input) for mailman id 530999;
 Sat, 06 May 2023 22:15:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvQBU-0005ld-Em; Sat, 06 May 2023 22:15:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvQBT-0001XJ-Fw; Sat, 06 May 2023 22:15:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvQBT-0002ok-8c; Sat, 06 May 2023 22:15:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvQBT-00078p-85; Sat, 06 May 2023 22:15:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d/oEbbG38Fk474cI3wx2c3jal1laJxMgOQKxRDrOwqg=; b=roV+sMay7T+A63MUTXtf8mbflL
	nDilSdVX7MFTsHUb/Q4gwKKmwAW91VQjBCaak4bH+nzjCsyz8g0DBFkh+hE0f+zbVi6VoEdillhPz
	8SzOZ88m8sNdX1od8FgBKZpSwIAiEsxLxSCem1bim3w3+kkNDlBC/Z2rFs4kTxyLcqgA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180563-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180563: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4dea9e4a0e9db431240434279be9fbe45fd1651b
X-Osstest-Versions-That:
    ovmf=66494e532450c1205be93015740580c1e7b8877a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 06 May 2023 22:15:19 +0000

flight 180563 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180563/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 4dea9e4a0e9db431240434279be9fbe45fd1651b
baseline version:
 ovmf                 66494e532450c1205be93015740580c1e7b8877a

Last test of basis   180561  2023-05-06 18:10:51 Z    0 days
Testing same since   180563  2023-05-06 20:11:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   66494e5324..4dea9e4a0e  4dea9e4a0e9db431240434279be9fbe45fd1651b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun May 07 02:01:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 May 2023 02:01:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531012.826715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvTho-0006Hw-LD; Sun, 07 May 2023 02:00:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531012.826715; Sun, 07 May 2023 02:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvTho-0006Ho-Ea; Sun, 07 May 2023 02:00:56 +0000
Received: by outflank-mailman (input) for mailman id 531012;
 Sun, 07 May 2023 02:00:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvThn-0006He-DK; Sun, 07 May 2023 02:00:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvThn-00028b-4L; Sun, 07 May 2023 02:00:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvThm-00089C-F2; Sun, 07 May 2023 02:00:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvThm-0007bP-Ed; Sun, 07 May 2023 02:00:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vxsmsC3bVoBIaZxBPiimbscWbVahVgIrm0TgGYRJ3Hg=; b=v4v4tMFc5zJkNKYfKa95/DVxY0
	J8TyaUtGkvJm0tXU3FQ1zdk20bps7X0dv+oEi8MjepY4hNd51KtSnNgoe4+kuFGwPk64heapD+gqU
	ExH0NGAOjko/tiRXyVIu3RGwUTbuoiPWNlyoTFZ6I55B43Yw9jgP9j/Xp3kQL6Tynb5I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180557-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180557: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:leak-check/check:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2e1e1337881b0e9844d687982aa54b31b1269b11
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 May 2023 02:00:54 +0000

flight 180557 linux-linus real [real]
flight 180565 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180557/
http://logs.test-lab.xenproject.org/osstest/logs/180565/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 23 leak-check/check fail pass in 180565-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2e1e1337881b0e9844d687982aa54b31b1269b11
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   20 days
Failing since        180281  2023-04-17 06:24:36 Z   19 days   34 attempts
Testing same since   180557  2023-05-06 13:43:44 Z    0 days    1 attempts

------------------------------------------------------------
2304 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 281563 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 07 04:14:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 May 2023 04:14:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531020.826725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvVmt-0002ws-Sq; Sun, 07 May 2023 04:14:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531020.826725; Sun, 07 May 2023 04:14:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvVmt-0002wW-Mc; Sun, 07 May 2023 04:14:19 +0000
Received: by outflank-mailman (input) for mailman id 531020;
 Sun, 07 May 2023 04:14:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Snjp=A4=microsoft.com=mikelley@srs-se1.protection.inumbo.net>)
 id 1pvVms-0002wQ-36
 for xen-devel@lists.xenproject.org; Sun, 07 May 2023 04:14:18 +0000
Received: from DM6FTOPR00CU001.outbound.protection.outlook.com
 (mail-cusazlp170100000.outbound.protection.outlook.com
 [2a01:111:f403:c111::])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f5eccaf-ec8d-11ed-8611-37d641c3527e;
 Sun, 07 May 2023 06:14:14 +0200 (CEST)
Received: from BYAPR21MB1688.namprd21.prod.outlook.com (2603:10b6:a02:bf::26)
 by BL0PR2101MB1330.namprd21.prod.outlook.com (2603:10b6:208:92::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.1; Sun, 7 May
 2023 04:14:09 +0000
Received: from BYAPR21MB1688.namprd21.prod.outlook.com
 ([fe80::9a0f:a04d:69bd:e622]) by BYAPR21MB1688.namprd21.prod.outlook.com
 ([fe80::9a0f:a04d:69bd:e622%4]) with mapi id 15.20.6411.001; Sun, 7 May 2023
 04:14:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f5eccaf-ec8d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HgnllY60NA2knDjebBuJ09rGPOHOpdonOv0P9PLzY+zYpEo8LTZPIsd74baBbQ//vFyOx+4wg8Hd4CyAqZ3+7cF/wcrwNLGvwZKjrfDZPy65spPoVKmW7VteNgJbfTJ/vOzXKBXL6K3PUI436ByiAk2MRm0kUwbROWZzaPrFMyFOlWTDDofeMTZUvhQ/hboktMfj2ZQCgZ7b8xIs85o7PX1RyZrWBWjuOkNBfzQ7eG+SvjEzN75K7LITuK5JYLsF0dOA4Hae44bbsImNmx8r3WdWiJwVdlXysFY67vIIXlFFLrmxqkeBb6hBrwBmAckexrqbe/qsJMM6tk9WRqVuRw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tY+33LB3hpYbL+5KySOOuT2kgudd0q8zip8Km+tZUW0=;
 b=HsbO2Hg/BfAWKLCq3ms5vQtUdu90499l2F02MZEUB2OOug5qU0GDLmIQb+gHCs24WETGvcw4dFl/4BwHgAxR8t7A4CJEsMdRS3oZcT7gTfalYWwDqxiAzyovFVtQx6wbD/u0g75bd8VChDffhWJxFUpHPTDx3yu3eE6f8XwFiltVl5XGstJzF/yUDvHypLduT+Qf35KyWlYvaPZX54HE8IDLV4YbcbN/saO2tpYyz/n02dOjZGmsLAdF9vR5bvuTvw4oec7Hx/x8evkEXqTCbAM6xIVPTjIeJxaDVQ77RQHBbBdua1PnzS7DBoMs4MZihDW6u8JmnKNX/sZQ9nHI2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=microsoft.com; dmarc=pass action=none
 header.from=microsoft.com; dkim=pass header.d=microsoft.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tY+33LB3hpYbL+5KySOOuT2kgudd0q8zip8Km+tZUW0=;
 b=Gl5dthcrtDqT0ZEU2+vNpfSqJMAVVcebESZC3e92llUNLHoBpEz3orQzinxr9GFOPAvkA5CDdFoE953M814HAv10BFO3Lwwp1aqOH1X+aYnrfC3AdWq9oXhomvzZsDCYBEXjElbQbAuArmtGY1YO6+cMhWesspDHM9FpLWpWQek=
From: "Michael Kelley (LINUX)" <mikelley@microsoft.com>
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
CC: "x86@kernel.org" <x86@kernel.org>, David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>, Paolo Bonzini
	<pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom Lendacky
	<thomas.lendacky@amd.com>, Sean Christopherson <seanjc@google.com>, Oleksandr
 Natalenko <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
	<lucjan.lucjanov@gmail.com>, Usama Arif <usama.arif@bytedance.com>, Juergen
 Gross <jgross@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Russell
 King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>, Catalin Marinas
	<catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
	<guoren@kernel.org>, "linux-csky@vger.kernel.org"
	<linux-csky@vger.kernel.org>, Thomas Bogendoerfer
	<tsbogend@alpha.franken.de>, "linux-mips@vger.kernel.org"
	<linux-mips@vger.kernel.org>, "James E.J. Bottomley"
	<James.Bottomley@HansenPartnership.com>, Helge Deller <deller@gmx.de>,
	"linux-parisc@vger.kernel.org" <linux-parisc@vger.kernel.org>, Paul Walmsley
	<paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
	"linux-riscv@lists.infradead.org" <linux-riscv@lists.infradead.org>, Mark
 Rutland <Mark.Rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>
Subject: RE: [patch V2 38/38] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
Thread-Topic: [patch V2 38/38] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
Thread-Index: AQHZfrsPO9aHmT4GeE2OzCti+2hGJa9MacrggAEGsICAAMVzgA==
Date: Sun, 7 May 2023 04:14:08 +0000
Message-ID:
 <BYAPR21MB1688A75D614165C5E4E0302AD7709@BYAPR21MB1688.namprd21.prod.outlook.com>
References: <20230504185733.126511787@linutronix.de>
 <20230504185938.393373946@linutronix.de>
 <BYAPR21MB168869144087644F89BAC89FD7739@BYAPR21MB1688.namprd21.prod.outlook.com>
 <87sfc92zw2.ffs@tglx>
In-Reply-To: <87sfc92zw2.ffs@tglx>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

From: Thomas Gleixner <tglx@linutronix.de> Sent: Saturday, May 6, 2023 9:23 AM
>
> On Sat, May 06 2023 at 00:53, Michael Kelley wrote:
> > From: Thomas Gleixner <tglx@linutronix.de> Sent: Thursday, May 4, 2023 12:03 PM
> > [snip]
> >
> >> @@ -934,10 +961,10 @@ static void announce_cpu(int cpu, int ap
> >>  	if (!node_width)
> >>  		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */
> >>
> >> -	if (cpu == 1)
> >> -		printk(KERN_INFO "x86: Booting SMP configuration:\n");
> >> -
> >>  	if (system_state < SYSTEM_RUNNING) {
> >> +		if (num_online_cpus() == 1)
> >
> > Unfortunately, this new check doesn't work.  Here's the output I get:
> >
> > [    0.721384] smp: Bringing up secondary CPUs ...
> > [    0.725359] smpboot: x86: Booting SMP configuration:
> > [    0.729249] .... node  #0, CPUs:        #2
> > [    0.729654] smpboot: x86: Booting SMP configuration:
> > [    0.737247]       #4
> >
> > Evidently num_online_cpus() isn't updated until after all the primary
> > siblings get started.
>
> Duh. Where is that brown paperbag?
>
> > When booting with cpuhp.parallel=0, the output is good.
>
> Exactly that was on the command line when I quickly booted that change :(
>
> The below should fix it for real.
>
> Thanks,
>
>         tglx
> ---
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -951,9 +951,9 @@ static int wakeup_secondary_cpu_via_init
>  /* reduce the number of lines printed when booting a large cpu count system */
>  static void announce_cpu(int cpu, int apicid)
>  {
> +	static int width, node_width, first = 1;
>  	static int current_node = NUMA_NO_NODE;
>  	int node = early_cpu_to_node(cpu);
>
>  	if (!width)
>  		width = num_digits(num_possible_cpus()) + 1; /* + '#' sign */
> @@ -962,7 +962,7 @@ static void announce_cpu(int cpu, int ap
>  		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */
>
>  	if (system_state < SYSTEM_RUNNING) {
> -		if (num_online_cpus() == 1)
> +		if (first)
>  			pr_info("x86: Booting SMP configuration:\n");
>
>  		if (node != current_node) {
> @@ -975,11 +975,11 @@ static void announce_cpu(int cpu, int ap
>  		}
>
>  		/* Add padding for the BSP */
> -		if (num_online_cpus() == 1)
> +		if (first)
>  			pr_cont("%*s", width + 1, " ");
> +		first = 0;
>
>  		pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);
> -
>  	} else
>  		pr_info("Booting Node %d Processor %d APIC 0x%x\n",
>  			node, cpu, apicid);

This works.  dmesg output is clean for these guest VM combinations
on Hyper-V that I tested:

* Normal VM:  16 vCPUs in 1 NUMA node and 32 vCPUs in 2 NUMA nodes
* Same configs for a SEV-SNP Confidential VM with paravisor

Tested with and without cpuhp.parallel=0

For the entire series:
Tested-by: Michael Kelley <mikelley@microsoft.com>


From xen-devel-bounces@lists.xenproject.org Sun May 07 05:31:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 May 2023 05:31:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531028.826734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvWzZ-0003R5-Ga; Sun, 07 May 2023 05:31:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531028.826734; Sun, 07 May 2023 05:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvWzZ-0003Qy-DE; Sun, 07 May 2023 05:31:29 +0000
Received: by outflank-mailman (input) for mailman id 531028;
 Sun, 07 May 2023 05:31:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvWzY-0003Qo-LC; Sun, 07 May 2023 05:31:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvWzY-0007R2-Dt; Sun, 07 May 2023 05:31:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvWzX-00020j-5A; Sun, 07 May 2023 05:31:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvWzW-00039n-Of; Sun, 07 May 2023 05:31:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180559-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180559: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=792f77f376adef944f9a03e601f6ad90c2f891b2
X-Osstest-Versions-That:
    qemuu=a9fe9e191b4305b88c356a1ed9ac3baf89eb18aa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 May 2023 05:31:26 +0000

flight 180559 qemu-mainline real [real]
flight 180568 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180559/
http://logs.test-lab.xenproject.org/osstest/logs/180568/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 180568-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop       fail blocked in 180546
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install             fail like 180546
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180546
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180546
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180546
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180546
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180546
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180546
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180546
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                792f77f376adef944f9a03e601f6ad90c2f891b2
baseline version:
 qemuu                a9fe9e191b4305b88c356a1ed9ac3baf89eb18aa

Last test of basis   180546  2023-05-05 15:15:28 Z    1 days
Failing since        180553  2023-05-06 04:39:17 Z    1 days    2 attempts
Testing same since   180559  2023-05-06 14:16:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  BALATON Zoltan <balaton@eik.bme.hu>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  David Hildenbrand <david@redhat.com>
  Dorinda Bassey <dbassey@redhat.com>
  Harsh Prateek Bora <harshpb@linux.ibm.com>
  John Platts <john_platts@hotmail.com>
  Juan Quintela <quintela@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Richard Henderson <richard.henderson@linaro.org>
  Shivaprasad G Bhat <sbhat@linux.ibm.com>
  Song Gao <gaosong@loongson.cn>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Vaibhav Jain <vaibhav@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   a9fe9e191b..792f77f376  792f77f376adef944f9a03e601f6ad90c2f891b2 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun May 07 11:59:52 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180566-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180566: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
X-Osstest-Versions-That:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 May 2023 11:59:30 +0000

flight 180566 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180566/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt       7 xen-install                fail pass in 180555
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 18 guest-localmigrate/x10 fail pass in 180555

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail in 180555 like 180547
 test-amd64-i386-libvirt     15 migrate-support-check fail in 180555 never pass
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180547
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180555
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180555
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180555
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180555
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180555
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180555
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180555
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180555
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180555
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180555
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180555
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180555
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f
baseline version:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f

Last test of basis   180566  2023-05-07 01:52:22 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 07 14:45:54 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180567-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180567: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=fc4354c6e5c21257cf4a50b32f7c11c7d65c55b3
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 May 2023 14:45:37 +0000

flight 180567 linux-linus real [real]
flight 180569 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180567/
http://logs.test-lab.xenproject.org/osstest/logs/180569/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                fc4354c6e5c21257cf4a50b32f7c11c7d65c55b3
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   20 days
Failing since        180281  2023-04-17 06:24:36 Z   20 days   35 attempts
Testing same since   180567  2023-05-07 02:05:07 Z    0 days    1 attempts

------------------------------------------------------------
2326 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 283282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 07 21:49:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 07 May 2023 21:49:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531194.826766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvmFg-0000lo-Pd; Sun, 07 May 2023 21:49:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531194.826766; Sun, 07 May 2023 21:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvmFg-0000lh-Ka; Sun, 07 May 2023 21:49:08 +0000
Received: by outflank-mailman (input) for mailman id 531194;
 Sun, 07 May 2023 21:49:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvmFe-0000lX-OU; Sun, 07 May 2023 21:49:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvmFe-00055C-IQ; Sun, 07 May 2023 21:49:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvmFe-0007Zq-3Z; Sun, 07 May 2023 21:49:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvmFe-0002dJ-33; Sun, 07 May 2023 21:49:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kcJ7NMyj4diQqk6DYu3WFsRuJeySkpKqHXq4+awghbY=; b=a9GD9/yJ2Nwn2/1gnYonYA8tBT
	N3PLBXaePYq72DtvPS0ZYLv1dIz3gTECTlUsuoUH7sRlwGevcdfqYji/mbBmpO9cIPxrtSd4Hg8wE
	E+nb1ZPXXdDdlAfOLE70+EIA4g0ntm0NEINcdnMQchfSS4OCEFGRAkp+ctP4bVIqhrkc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180570-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180570: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=fc4354c6e5c21257cf4a50b32f7c11c7d65c55b3
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 07 May 2023 21:49:06 +0000

flight 180570 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180570/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-dom0pvh-xl-amd 22 guest-start/debian.repeat fail pass in 180567

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                fc4354c6e5c21257cf4a50b32f7c11c7d65c55b3
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   21 days
Failing since        180281  2023-04-17 06:24:36 Z   20 days   36 attempts
Testing same since   180567  2023-05-07 02:05:07 Z    0 days    2 attempts

------------------------------------------------------------
2326 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 283282 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 08 03:25:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 03:25:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531226.826775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvrUl-0003l0-7Z; Mon, 08 May 2023 03:25:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531226.826775; Mon, 08 May 2023 03:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvrUl-0003ks-2f; Mon, 08 May 2023 03:25:03 +0000
Received: by outflank-mailman (input) for mailman id 531226;
 Mon, 08 May 2023 03:25:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvrUj-0003ki-KG; Mon, 08 May 2023 03:25:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvrUj-0007Xb-BZ; Mon, 08 May 2023 03:25:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvrUi-0002VT-O5; Mon, 08 May 2023 03:25:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvrUi-0002df-Nf; Mon, 08 May 2023 03:25:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GXWjfL2vJZbr+Zq5vH0Y5UXHwq47bdOLA1raDgHJBCw=; b=Wig8C8yfLXbR9G64jnnoE3ebJX
	Q9oPHHXwBvOwk3d40MuwfstmXYeV5iAjx9/BjaUepbym3PPS27IPUyaxyeXaN1fLhrCahlQmsd+LD
	m9Sh+hxIAMPRXcnYCaD/z/u67Yq7yWy675Wz96r5SzlwxO6UWckX8p8j0HUbUHJnciL0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180571-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180571: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ac9a78681b921877518763ba0e89202254349d1b
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 May 2023 03:25:00 +0000

flight 180571 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180571/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ac9a78681b921877518763ba0e89202254349d1b
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   21 days
Failing since        180281  2023-04-17 06:24:36 Z   20 days   37 attempts
Testing same since   180571  2023-05-07 22:11:42 Z    0 days    1 attempts

------------------------------------------------------------
2359 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 296910 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 08 04:09:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 04:09:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531234.826785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvsBq-000051-MA; Mon, 08 May 2023 04:09:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531234.826785; Mon, 08 May 2023 04:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvsBq-00004s-Ig; Mon, 08 May 2023 04:09:34 +0000
Received: by outflank-mailman (input) for mailman id 531234;
 Mon, 08 May 2023 04:09:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Frh6=A5=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1pvsBq-0008WS-4O
 for xen-devel@lists.xen.org; Mon, 08 May 2023 04:09:34 +0000
Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com
 [2607:f8b0:4864:20::1033])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2235c1b5-ed56-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 06:09:32 +0200 (CEST)
Received: by mail-pj1-x1033.google.com with SMTP id
 98e67ed59e1d1-24e4e23f378so2820130a91.3
 for <xen-devel@lists.xen.org>; Sun, 07 May 2023 21:09:32 -0700 (PDT)
Received: from localhost ([122.172.85.8]) by smtp.gmail.com with ESMTPSA id
 y22-20020a17090abd1600b00246b7b8b43asm16729893pjr.49.2023.05.07.21.09.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 07 May 2023 21:09:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2235c1b5-ed56-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1683518971; x=1686110971;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=N2IiV5e2VeZOoaD5QZ3Kdrzs73EZjFRQz5WCvDTUYTQ=;
        b=L7ROSK+H7ZSYsOSTbYcv7GKTQOIjgRDDWt0LF9LyA7zPvb1jjZYcG37nzKo4rmxnPQ
         hNPYJIx/4bYOZB6CBKybMKdZcSFh6O1mWZy4O+/oWnHS1YaUySWNhJg2V0DvGZwMN2RU
         iW3byJVF5+dOtacAdUcY14b5aSDamahwbgEK5szN7BXvu37jYc25H7LXqUK6uMmvctxJ
         56u0RSz1aOeykvTTSFZuChNR7+8fcO7PpJj81P9qHRjgxIh/C1eSmmwIu5DqZUb4E8dY
         zlZBTnjNF5QZl3HY/PK6ldU697nWVGDoB9Qv1tuF0VVDl73npG3LubwZYH19WQP4X5av
         YTUA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683518971; x=1686110971;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=N2IiV5e2VeZOoaD5QZ3Kdrzs73EZjFRQz5WCvDTUYTQ=;
        b=dKSGgHTeXu2YZRCPFaVYXb0zItSmiO6mPFxcW9KPps9/K1hroI7d5IMsdq6u+1gM/x
         83MAlo5t8YY+L0WpixLFVVRyYRIJGEhfD4K1CbcFtOKwKshnCcl0SA601kEt4/IvU144
         1KzSjgNXZkLvYiqS9Hi+o950Vdkdcl0/oqrJ6Atd2b8n1h3/t5TZtD53KBsU9+RcYouD
         ZaZCTcsI7r7q4M0DQdMrZPGI8it02u8Da/h6323b2lBjzxyCHDKk3/ryxFrwKhaMftu1
         unQHlbdqeV3Gdc59bZ3/+IDwWe8xGm46WJjuG0Vq+gPSY9Tdh+0fVr4xEJZjTZMW7TKm
         ZZLg==
X-Gm-Message-State: AC+VfDwFKH4b81FZptpgcQniuithMkhvZSS9r31HuDZQClHK2JjyBfqa
	wmxzurrpOlUNl7I9ZiOHvxYVFg==
X-Google-Smtp-Source: ACHHUZ58x2dVjYWbAJEc7OvaS/dGnIyhye24/cjjOgFWN2/xdIp+Ti0FF7hm+UNLovjV+LqfHUE7pg==
X-Received: by 2002:a17:90a:5581:b0:24d:f159:d28b with SMTP id c1-20020a17090a558100b0024df159d28bmr8905534pji.47.1683518970803;
        Sun, 07 May 2023 21:09:30 -0700 (PDT)
Date: Mon, 8 May 2023 09:39:27 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>, xen-devel@lists.xen.org,
	stratos-dev@op-lists.linaro.org, Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Erik Schilling <erik.schilling@linaro.org>
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
Message-ID: <20230508040927.uhgwfkhknwyvsowu@vireshk-i7>
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
 <25fb2b71-b663-b712-01cd-5c75aa4ccf9b@gmail.com>
 <20230404234228.vghxrrj6auy7zw4c@vireshk-i7>
 <20230505061934.jm3bwjgsl5hf5rns@vireshk-i7>
 <CAPD2p-nvLXdxkwik-UPjS1JAjz6z2FNuxb1JYrj4bNwusEZpPg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAPD2p-nvLXdxkwik-UPjS1JAjz6z2FNuxb1JYrj4bNwusEZpPg@mail.gmail.com>

On 05-05-23, 16:11, Oleksandr Tyshchenko wrote:
> I was going to propose an idea, but I have just realized that you already
> voiced it here [1] ))
> So what you proposed there sounds reasonable to me.
> 
> I will just rephrase it according to my understanding:
> 
> We probably need to consider transforming your "forced_grant" into
> something three-state, for example "grant_usage" (or "use_grant" as you
> suggested), which could be "default behaviour", "always disabled", or
> "always enabled".
> 
> With "grant_usage=default" we will get exactly what we have at the moment
> (only create iommu nodes if backend-domid != 0)
> With "grant_usage=disabled" we will force grants to be always disabled
> (don't create iommu nodes irrespective of the domain)
> With "grant_usage=enabled" we will force grants to be always enabled
> (always create iommu nodes irrespective of the domain)

I was wondering if "grant_usage=default" could simply be implied by the
absence of the grant_usage parameter. So just grant_usage=1 or 0,
meaning enabled or disabled, and if grant_usage isn't passed, we would
fall back to the default behaviour.

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Mon May 08 05:42:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 05:42:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531242.826794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvtd5-0003LN-DK; Mon, 08 May 2023 05:41:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531242.826794; Mon, 08 May 2023 05:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvtd5-0003LG-Ak; Mon, 08 May 2023 05:41:47 +0000
Received: by outflank-mailman (input) for mailman id 531242;
 Mon, 08 May 2023 05:41:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvtd3-0003L2-B5; Mon, 08 May 2023 05:41:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvtd3-0002sj-4O; Mon, 08 May 2023 05:41:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvtd2-0002hS-I8; Mon, 08 May 2023 05:41:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvtd2-00005b-Hc; Mon, 08 May 2023 05:41:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9RK+Icn0wF6vIImiHl2T+zrR/QLLtKOEBpQEINF+IWk=; b=u3YATFJ2c/YsRtORvbNVmk+mM9
	9KSCBFzT7xw6zpb5YLqEgDsV/MGaAVRmBx/6c1tYlg/EClT/d6JJwkYn3ig4HTAhi/H1YSXVJzGiC
	bJtJn5eWmuvl0nIqdci61y1QyAKUGthvUWdu9x3IBnOSf6FGUuSa/I5mSO1+m1/D+6vM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180573-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180573: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8dbf868e02c71b407e31f9b41b5266169c702812
X-Osstest-Versions-That:
    ovmf=4dea9e4a0e9db431240434279be9fbe45fd1651b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 May 2023 05:41:44 +0000

flight 180573 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180573/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8dbf868e02c71b407e31f9b41b5266169c702812
baseline version:
 ovmf                 4dea9e4a0e9db431240434279be9fbe45fd1651b

Last test of basis   180563  2023-05-06 20:11:20 Z    1 days
Testing same since   180573  2023-05-08 02:10:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>
  Oliver Smith-Denny <osde@linux.microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4dea9e4a0e..8dbf868e02  8dbf868e02c71b407e31f9b41b5266169c702812 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 08 06:24:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 06:24:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531250.826804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvuI6-0000JC-EV; Mon, 08 May 2023 06:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531250.826804; Mon, 08 May 2023 06:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvuI6-0000Is-BG; Mon, 08 May 2023 06:24:10 +0000
Received: by outflank-mailman (input) for mailman id 531250;
 Mon, 08 May 2023 06:24:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvuI4-0000Im-PA
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 06:24:09 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2062c.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eedcf41e-ed68-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 08:24:06 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7375.eurprd04.prod.outlook.com (2603:10a6:800:1a8::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.26; Mon, 8 May
 2023 06:24:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 06:24:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eedcf41e-ed68-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z/aV0V46Qv1TyEFYvvoOt/ek8QU/netFTfHpRhbqrfXwQ238f4iCNjWY7wEchUlOzfFaT3JR+A3Sa9A1Q7XOcPvmFhgtY3TXfa77sQsHj1XmLfYVnJ6DKLcajFWhw2BNUBBGLcn4zBfhHLHtzXDYSXk9yfSbVZPyZ2eZc6tGyBuqcqoTA+VFMdIf4s3yFBiIWqPEG+CWUEgdUYL9+ASCsyoI4DHAyBK4QmPG2JmRQAUHrDAObMfl4cltc6I4Zb7bKvw4228azL3c5DDim3izJz7fgoZDsa7cjxWUKKBqe5+g/W4SjORPjwq/QrtUgbq4phhGRTLqfMNB5EXmOGXJ+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JB9+/PxeRHja8l0ODby24uhpNc/UoW2s+UreyHmDFow=;
 b=g3mr+DdHUyu1CVOxwRFaYVHD6Mot7lga4g4pal5aRZxzPVfQ+TGEZu3BOqXOrbN9dG4oEEVfpSEoPTI9kilvKF0LHlxNsSLU43a9T8KJSxCyHBUGi+MoDF0ZP/FodXBKWd4SH7TLz4c+iF3l+StNZAW/CChHjNFlFKw1prqEXQcQ6kD+NCCin4d6cA9QYQx4xlNF7HuFGjx0MnS5k31ZmLFEznwSHbqpJL9f1Ls26+YJOrMfW81Ch/OwjHHnttHGyOeVYyzCq9db0P2o0Oqo/k3VRLISYPL5EV+qxvvB4ISR60mPDLMgERkKd+OquWMMVBX2nEtTnv4fKo+BfOCWIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JB9+/PxeRHja8l0ODby24uhpNc/UoW2s+UreyHmDFow=;
 b=GlMtKBdmCW3UqGm72FyAclReVidKOeqmxOA5lmLbwxgZBBLOGVa+NgRzMr7DnoP0wZ+Z+RzhBG8tgeTsRQTag6ld9EFiKOt2X8pEh/40ZrZF6NE6lSRnHY316IbBhiwPw0gqJnW9vrb3Jh52F8n6ViFq259xDtEtADbjKb8wxnNd4Zs0flGxn864+mh5dj6HXEKSQ6RGzR5Ge90HiJs4tFMmXKlJxPKHNjukoFuJNeuqD/bU2Dc8BSkjQ0gNwThlsBJF9IgqTBTRmRi2KqyOtxaiu90tBEvSB085Sxqz0CsKjns0q65p7OGmO4BTjIeNnyz/TV5FwWiAVCq3XtpAaA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3c9f8fd2-b60b-5540-00be-87351fec656e@suse.com>
Date: Mon, 8 May 2023 08:23:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH RFC] build: respect top-level .config also for out-of-tree
 hypervisor builds
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <beace0ce-e90b-eb79-4419-03de45ea7360@suse.com>
 <a08f3650-0c81-4873-ae10-f5200f8b7613@perard>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a08f3650-0c81-4873-ae10-f5200f8b7613@perard>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0115.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7375:EE_
X-MS-Office365-Filtering-Correlation-Id: 057f1c76-042e-49ad-7cea-08db4f8cd10e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uIbJH/qmN0hxsQdD4g7cCk9cmhSTFCVePkDzprjbXAkvq8WJ7r2nH/PNR5kC24wsVrJaGOrbtu+I0VUrCeb9uJeWYGY8HugJ1PJKsEyV1A0b/PZpPaFXd6EUSIkj+oJylXGS/G6DNFMD/JxLONHJKx1OXfOLohAvBzFKmAanW/X8cPCFRvcM1dnuUt52FEk9WQ36+lCDOEHUnjYDOSclvArt0FNKo914gmOrPMwbRlb0NTB5mF597AmyoEdPMiUGbokUYYEjJq5vRi1jmVzyN61xLtu0REOpfpYRhr2LPff5yYrThDlfLZmEw1obA6Bdb9+bM0Bo5mxlYmIZLoit3EPe/z+X4wD4koUq9qBqhCNuiucyU/VCKXsxMDj5idYdI60EDsfwwkbtmB9WINMNP2vPBAuqmJPJ/GpIB1j6YeSnNjUT7OMoffkvg4rxrbGQoNfIpBz3h7pVUBiH+Fcv4ek0vYPTJ1z23L1OTlitpXj+/wENVsCWWPfBqfObpUkwskNDyj4xVHBv+4Q5aZQBbaz0hqKfbS9RoiRM6CMDnB/eiHWKWr4FRA3tAe/FrG+6t7SSb04hXSjpG2Fu+APuuXuZJt/flyG/ol2h3lGYh2/QCmw9W+Kt1DFqPCAoNpaToitpqXorSt+dvtl01isGuTH60tKIVsYuXoigTBbys7PFjyYLl684jE7uVbGLPalU
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39850400004)(366004)(136003)(346002)(396003)(376002)(451199021)(316002)(2906002)(53546011)(41300700001)(6512007)(66556008)(186003)(6666004)(31696002)(86362001)(66946007)(26005)(6506007)(6916009)(66476007)(36756003)(4326008)(6486002)(31686004)(54906003)(2616005)(478600001)(5660300002)(8936002)(8676002)(38100700002)(142923001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NUttRFNSaG10VjhiY2VHUnYxdDF0TFJIMzQyZjFhNEg0WE1vaTdVY1NtRzYy?=
 =?utf-8?B?WGNBb3NGSU9WZXptQVp2OFpvTzRhNktDaWxPM3MwbHZyWlV5ekswb3I5U0xJ?=
 =?utf-8?B?bktVTnJlRjhxZ3lJV1pmWUVkYlBWWnEyaXg4cENWU3M1SUd0S3lJaCtvY2VY?=
 =?utf-8?B?cGw4aHlCSDFCU05CVllxNC9JYlphZFBMelFKNyt4NFpUU0ZLSHBMcnJlVnVm?=
 =?utf-8?B?MEZqbVp5bUVHdW8xdVhkK09mbU4wdFNYRGo0VDlxQ0xoYzZSa29BM2p3Qi9I?=
 =?utf-8?B?NWVMRjd3eTlvajRtcS9oVmcvOGltWlh4TGN6WmlLMGZTL2QrZFdOakpidWhk?=
 =?utf-8?B?R2NiSFAvL29oL2lPc3psZ1BIeEx0bkVkWnNmYTljK3lIL2kyZXluMEF4Y2J6?=
 =?utf-8?B?YW9XNSt3TVRPMzJkQlVKaitmYnhVWjBXdWRTVlUyd2haZDlGMjFnSVEwRitJ?=
 =?utf-8?B?WHNEcnh5eTVHQ1NkOE9FUUVwcStFeEI3b3ovMUtzamhkcC9kUzdZQXVXT1RO?=
 =?utf-8?B?KytpV2xGSStMbnZGNS80cy9ydEhxVHBjay9yc3o3TVBPczNYMklPaDVBcXc0?=
 =?utf-8?B?YWkxdEl6NWF3ZzdwY0k2VWZCNEJzS0tZczJsQzhXckFhYjczMG1ZaC9GSHZH?=
 =?utf-8?B?SWtnNS9qUmtkZmtDdTJIeE1jOERUWVlzY2MrbGJlVHE5bms1Rm0rcm1nSGwr?=
 =?utf-8?B?NVJmRTBQOWQxYWV2S2g0UHpyU3dhUUh0bmx3Nis3YzNidk15dVg2NHFaMDF6?=
 =?utf-8?B?aEZLeDFUdEYvdGNvdUlWTnNZZHd3end5TUVodnliZlJYeXJmZ0ZRV05jbUty?=
 =?utf-8?B?SUtHTlpiSUlhWk1wNXRMdHh2R3ZwVmJmd1cyZ0kzQ2p0S3BTVUtXbXl0ZW9w?=
 =?utf-8?B?YXFlNEhHN1hraFNxRHZuZmpOZVVvMEtPNGhBbmRGandqanZSYmlhUXRKQnpH?=
 =?utf-8?B?U3dnLzRiVnNXL3I5MFZUUnVLbXgvQlFWNUlvL2xuTjV6WVhIOWpIZll1cWtV?=
 =?utf-8?B?SS9JTG5uRXZyOTdDY0ZTc0ZFSG9pZmZjbXM0bzZTTmVZc0tUYzJ4blZzUHZI?=
 =?utf-8?B?K0pRRlR1cnJZK2RLMTdVUGN4NXlYMHozemRrTmNaeXhETGtjK01hTlh2NTN3?=
 =?utf-8?B?TkczRlpnNGk0K2NHUi9QdlJDZ0EzSDVPdUUwZCs5SGRHNVorbk9JcVl2d2tM?=
 =?utf-8?B?THMvMlBFQjM3L0RCaHVhYmpka0VYdzVnemhwZmFWdlBDK2h4cWcxUU9wMGpr?=
 =?utf-8?B?eW5tZVNKS2h4cVIyc2luZnp4bWxRTUY2UUNmZ20yYU9sTE1IeDE3UlRJYzF3?=
 =?utf-8?B?eVhlTHZmYW5tUTBvdE5MY2F1Q3h2T3Jsc2NjRnM0SmtNR0ZtaDRPQkpRS09S?=
 =?utf-8?B?UW1zem1yRFFMSXBVNnF1eHg4VDNiYW9CbmNzUDFENHJzdHMwRHliSUpGcWhi?=
 =?utf-8?B?by9PUVlRZGlRNGlTWEFJSXl3anNJM3F5K0hOVFNkVis0Ky90OStweCtYOWZ6?=
 =?utf-8?B?SjNQdm04QUJiRG9qeVJWMnpUbE5GZjhoa1ZkQUJranlkT1ZPcG1abHJRVEN2?=
 =?utf-8?B?c2NrY1hubS9nQjIvT1NscFV6dGtTTFQvWjI5aUNJTit6UncwV09sOXZUWEtB?=
 =?utf-8?B?bG5EbHhjYldMaUtYM2I4cjM4Wmw4V200T0wweEllbHhHSXE3RW9JcWIrL0pF?=
 =?utf-8?B?LzNqb29SbFhQMVg0OGYxaURMMFJRT01kdjlIMGNKdHZpZkNlZ2dvVVZNWFIz?=
 =?utf-8?B?N2Z0RUlGdmp1M0lDa3RGT240NldON3YyWlhvWkU0dXFNWnpIL0N6Tm1XSVJM?=
 =?utf-8?B?L2xZc0Q5YUgva09aaGhXTmJWcXRNcVBYYm9jWHd1THp2dnpBRElkZzZ2Z0JS?=
 =?utf-8?B?YkVDd0FabzIzL2JzREpXRk9ab0dDMGNvZ3FxbTA0VEJCV0RWYjRaSUd5M1NF?=
 =?utf-8?B?L1oyc1pEUHhOMVpGY2tzVUVVS0FXTGM0dXFYRTYxUkp3bHh3Yk1TM2RQMXdR?=
 =?utf-8?B?VWs4MTJZSFkrZXNDV2J3bHgrUnRDYUF2OHZabDFFQzZ0MFJ4Tk9Uay9Uck5x?=
 =?utf-8?B?MmpUaEF5K0JmaThwSk5HNVdDZG9NRkdhblpzNW1HWWwvT1oxdWVRbFNrNUhr?=
 =?utf-8?Q?rxqvYBnXSpfVb8jE9XDzRwjPF?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 057f1c76-042e-49ad-7cea-08db4f8cd10e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 06:24:02.6943
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: V91gJRFgwlJGW3w/h8jfgC0/wtlGz7NaYYrDfziSLMS7BCTn658Z+xVxII8ZL6Vy+W4uGHt9t2wyBEFh9GXOLA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7375

On 05.05.2023 18:08, Anthony PERARD wrote:
> On Wed, Mar 15, 2023 at 03:58:59PM +0100, Jan Beulich wrote:
>> With in-tree builds Config.mk includes a .config file (if present) from
>> the top-level directory. Similar functionality is wanted with out-of-
>> tree builds. Yet the concept of "top-level directory" becomes fuzzy in
>> that case, because there is not really a requirement to have identical
>> top-level directory structure in the output tree; in fact there's no
>> need for anything top-level-ish there. Look for such a .config, but only
>> if the tree layout matches (read: if the directory we're building in is
>> named "xen").
> 
> Well, as long as the "xen/" part of the repository is the only build
> system to be able to build out-of-srctree, there isn't going to be a
> top-level .config possible in the build tree, as such .config will be
> outside of the build tree. Reading outside of the build tree or source
> tree might be problematic.
> 
> It's possible that some project might want to build just the
> hypervisor, happens to name its build tree "xen", but also uses
> ".config" for something else. Kconfig, for example, uses ".config" for
> other purposes, like we do to build Xen.

Right, that's what my first RFC remark is about.

> It might be better to have a different name instead of ".config", and
> putting it in the build tree rather than the parent directory. Maybe
> ".xenbuild-config" ?

Using a less ambiguous name is clearly an option. Putting the file in
the (Xen) build directory, otoh, imo isn't: in the hope that further
sub-trees will be enabled to build out-of-tree (qemu, for instance, in
principle already can, and I guess at least some of stubdom's
sub-packages also can), the present functionality of the top-level
.config (or whatever its new name) also controlling those builds had
better be retained.

> I've been using the optional makefile named "xen-version" to adjust
> variable of the build system, with content like:
> 
>     export XEN_TARGET_ARCH=arm64
>     export CROSS_COMPILE=aarch64-linux-gnu-
> 
> so that I can have multiple build trees for different arches, and not
> have to do anything other than run make to get the expected build. Maybe
> that's not possible because, for some reason, the build system still
> reconfigures some variables when entering a sub-directory, but that's a
> start.

Hmm, interesting idea. I could (ab)use this at least in the short
term. Yet even then the file would also need to be included from
Rules.mk. Requiring all variables defined there to be exported isn't
a good idea, imo. Iirc not all make variables can usefully be
exported. For example, I have a local extension to CROSS_COMPILE in
place, which uses a variable with a dash in its name.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> RFC: The directory name based heuristic of course isn't nice. But I
>>      couldn't think of anything better. Suggestions?
> 
> I can only think of looking at a file that is in the build tree rather
> than outside of it, e.g. the ".xenbuild-config" file proposed earlier.
> 
>> RFC: There also being a .config in the top-level source dir would be a
>>      little problematic: It would be included _after_ the one in the
>>      object tree. Yet if such a scenario is to be expected/supported at
>>      all, it makes more sense the other way around.
> 
> Well, that would mean teaching Config.mk about the out-of-tree builds
> that parts of the repository are capable of; or, better, stopping
> Config.mk from trying to read ".config" at all, and handling the file
> in the different sub-build systems.

Hmm, is that a viable option? Or put differently: wouldn't this mean doing
away with ./Config.mk altogether? Which in turn would mean no central
place anymore where XEN_TARGET_ARCH and friends could be overridden. (I
view this as a capability that we want to retain, if nothing else then for
the future option, mentioned above, of allowing more than just xen/ to be
built out-of-tree.)

> Or just let people who write ".config" files handle the relevant cases
> in those .config files.

I'm afraid I don't see what you mean here.

>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -236,8 +236,17 @@ endif
>>  
>>  include scripts/Kbuild.include
>>  
>> -# Don't break if the build process wasn't called from the top level
>> -# we need XEN_TARGET_ARCH to generate the proper config
>> +# Don't break if the build process wasn't called from the top level.  We need
>> +# XEN_TARGET_ARCH to generate the proper config.  If building outside of the
>> +# source tree also check whether we need to include a "top-level" .config:
>> +# Config.mk, using $(XEN_ROOT)/.config, would look only in the source tree.
>> +ifeq ($(building_out_of_srctree),1)
>> +# Try to avoid including a random unrelated .config: Assume our parent dir
>> +# is a "top-level" one only when the objtree is .../xen.
>> +ifeq ($(patsubst %/xen,,$(abs_objtree)),)
>> +-include ../.config
>> +endif
>> +endif
>>  include $(XEN_ROOT)/Config.mk
>>  
>>  # Set ARCH/SUBARCH appropriately.
>> --- a/xen/Rules.mk
>> +++ b/xen/Rules.mk
>> @@ -17,6 +17,13 @@ __build:
>>  
>>  -include $(objtree)/include/config/auto.conf
>>  
>> +# See commentary around the similar construct in Makefile.
>> +ifneq ($(abs_objtree),$(abs_srctree))
>> +ifeq ($(patsubst %/xen,,$(abs_objtree)),)
>> +../.config: ;
>> +-include ../.config
>> +endif
>> +endif
>>  include $(XEN_ROOT)/Config.mk
>>  include $(srctree)/scripts/Kbuild.include
> 
> There's another makefile, "scripts/Makefile.clean"; wouldn't this
> need to be changed as well?

In theory - maybe. But in practice: What override might one need to put in
place to run "clean"? XEN_TARGET_ARCH, for example, had better not be
needed for cleaning. Furthermore, the top-level .config hasn't been
included there either, afaict.
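As an aside, the directory-name heuristic from the hunk above can be
sanity-checked in isolation. A minimal sketch (the paths used here are
hypothetical): $(patsubst %/xen,,$(abs_objtree)) expands to the empty
string exactly when the object tree path ends in "/xen".

```shell
# Feed a tiny makefile to GNU make to exercise the heuristic with a
# given (hypothetical) object-tree path.
check() {
    make -s -f - <<EOF
abs_objtree := $1
ifeq (\$(patsubst %/xen,,\$(abs_objtree)),)
\$(info would include ../.config)
else
\$(info would not include ../.config)
endif
all: ;
EOF
}
check /home/user/build/xen       # prints: would include ../.config
check /home/user/build/xen-copy  # prints: would not include ../.config
```

This also shows the weakness under discussion: any unrelated directory
that happens to be named "xen" would trigger the include as well.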

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 06:33:46 2023
Message-ID: <4bf60ae8-7757-7440-1a4c-95260c0f0578@suse.com>
Date: Mon, 8 May 2023 08:33:18 +0200
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP)
 driver
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-5-jandryuk@gmail.com>
 <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com>
 <CAKf6xpuzk6vLjbNAHzEmVpq8sDAO8O-cRFVStQxNqyD5ERr+Yg@mail.gmail.com>
 <7921d24d-7d4d-8829-44bf-b8c2ecd001c8@suse.com>
 <CAKf6xpt0n2GO1PuqeaiWi6iOoeBt0ULoKJ4xgeiTZo2Rh67kVg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xpt0n2GO1PuqeaiWi6iOoeBt0ULoKJ4xgeiTZo2Rh67kVg@mail.gmail.com>

On 05.05.2023 17:35, Jason Andryuk wrote:
> On Fri, May 5, 2023 at 3:01 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 04.05.2023 18:56, Jason Andryuk wrote:
>>> On Thu, May 4, 2023 at 9:11 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>>>> --- a/docs/misc/xen-command-line.pandoc
>>>>> +++ b/docs/misc/xen-command-line.pandoc
>>>>> @@ -499,7 +499,7 @@ If set, force use of the performance counters for oprofile, rather than detectin
>>>>>  available support.
>>>>>
>>>>>  ### cpufreq
>>>>> -> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<maxfreq>][,[<minfreq>][,[verbose]]]]} | dom0-kernel`
>>>>> +> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<hdc>][,[<hwp>]][,[<maxfreq>]][,[<minfreq>]][,[verbose]]]} | dom0-kernel`
>>>>
>>>> Considering you use a special internal governor, the 4 governor alternatives are
>>>> meaningless for hwp. Hence at the command line level recognizing "hwp" as if it
>>>> was another governor name would seem better to me. This would then also get rid
>>>> of one of the two special "no-" prefix parsing cases (which I'm not overly
>>>> happy about).
>>>>
>>>> Even if not done that way I'm puzzled by the way you spell out the interaction
>>>> of "hwp" and "hdc": As you say in the description, "hdc" is meaningful only when
>>>> "hwp" was specified, so even if not merged with the governors group "hwp" should
>>>> come first, and "hdc" ought to be rejected if "hwp" wasn't first specified. (The
>>>> way you've spelled it out it actually looks to be kind of the other way around.)
>>>
>>> I placed them in alphabetical order, but, yes, it doesn't make sense.
>>>
>>>> Strictly speaking "maxfreq" and "minfreq" also should be objected to when "hwp"
>>>> was specified.
>>>>
>>>> Overall I'm getting the impression that beyond your "verbose" related adjustment
>>>> more is needed, if you're meaning to get things closer to how we parse the
>>>> option (splitting across multiple lines to help see what I mean):
>>>>
>>>> `= none
>>>>  | {{ <boolean> | xen } [:{powersave|performance|ondemand|userspace}
>>>>                           [{,hwp[,hdc]|[,maxfreq=<maxfreq>[,minfreq=<minfreq>]}]
>>>>                           [,verbose]]}
>>>>  | dom0-kernel`
>>>>
>>>> (We're still parsing in a more relaxed way, e.g. minfreq may come ahead of
>>>> maxfreq, but better be more tight in the doc than too relaxed.)
>>>>
>>>> Furthermore while max/min freq don't apply directly, there are still two MSRs
>>>> controlling bounds at the package and logical processor levels.
>>>
>>> Well, we only program the logical processor level MSRs because we
>>> don't have a good idea of the packages to know when we can skip
>>> writing an MSR.
>>>
>>> How about this:
>>> `= none
>>>  | {{ <boolean> | xen } {
>>> [:{powersave|performance|ondemand|userspace}[,maxfreq=<maxfreq>[,minfreq=<minfreq>]]
>>>                         | [:hwp[,hdc]] }
>>>                           [,verbose]]}
>>>  | dom0-kernel`
>>
>> Looks right, yes.
> 
> There is a wrinkle to using the hwp governor.  The hwp governor was
> named "hwp-internal", so it needs to be renamed to "hwp" for use with
> command line parsing.  That means the check for "-internal" needs to
> change to just "hwp", which removes the generality of the original
> implementation.

I'm afraid I don't see why this would strictly be necessary or a
consequence.

> The other issue is that if you select "hwp" as the governor, but HWP
> hardware support is not available, then hwp_available() needs to reset
> the governor back to the default.  This feels like a layering
> violation.

Layering violation - yes. But why would the governor need resetting in
this case? If HWP was asked for but isn't available, I don't think any
other cpufreq handling (and hence governor) should be put in place.
And turning off cpufreq altogether (if necessary in the first place)
wouldn't, to me, feel as much like a layering violation.

> I'm still investigating, but promoting hwp to a top level option -
> cpufreq=hwp - might be a better arrangement.

Might be an alternative, yes.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 06:48:09 2023
Message-ID: <6531df09-7f7f-a1e2-5993-bd2a14e22421@suse.com>
Date: Mon, 8 May 2023 08:47:29 +0200
Subject: Re: [PATCH 1/6] x86/cpu-policy: Drop build time cross-checks of
 featureset sizes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230504193924.3305496-2-andrew.cooper3@citrix.com>

On 04.05.2023 21:39, Andrew Cooper wrote:
> These BUILD_BUG_ON()s exist to cover the curious absence of a diagnostic for
> code which looks like:
> 
>   uint32_t foo[1] = { 1, 2, 3 };
> 
> However, GCC 12 at least does now warn for this:
> 
>   foo.c:1:24: error: excess elements in array initializer [-Werror]
>     884 | uint32_t foo[1] = { 1, 2, 3 };
>         |                        ^
>   foo.c:1:24: note: (near initialization for 'foo')

I'm pretty sure all gcc versions we support diagnose such cases. In turn,
the arrays in question don't have explicit dimensions at their definition
sites, and hence derive their dimensions from their initializers. So the
build-time checks are about the arrays in fact obtaining the right
dimensions, i.e. the initializers being suitable.

With the core part of the reasoning not being applicable, I'm afraid I
can't even say "okay with an adjusted description".

Jan

> and has found other array length issues which we want to fix.  Drop the cross
> check now that tools can spot the problem case directly.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> ---
>  xen/arch/x86/cpu-policy.c | 6 ------
>  1 file changed, 6 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
> index ef6a2d0d180a..44c88debf958 100644
> --- a/xen/arch/x86/cpu-policy.c
> +++ b/xen/arch/x86/cpu-policy.c
> @@ -883,12 +883,6 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>  
>  static void __init __maybe_unused build_assertions(void)
>  {
> -    BUILD_BUG_ON(ARRAY_SIZE(known_features) != FSCAPINTS);
> -    BUILD_BUG_ON(ARRAY_SIZE(pv_max_featuremask) != FSCAPINTS);
> -    BUILD_BUG_ON(ARRAY_SIZE(hvm_shadow_max_featuremask) != FSCAPINTS);
> -    BUILD_BUG_ON(ARRAY_SIZE(hvm_hap_max_featuremask) != FSCAPINTS);
> -    BUILD_BUG_ON(ARRAY_SIZE(deep_features) != FSCAPINTS);
> -
>      /* Find some more clever allocation scheme if this trips. */
>      BUILD_BUG_ON(sizeof(struct cpu_policy) > PAGE_SIZE);
>  



From xen-devel-bounces@lists.xenproject.org Mon May 08 06:55:45 2023
Message-ID: <76d4b625-2864-4452-86fb-70a4413bb886@suse.com>
Date: Mon, 8 May 2023 08:55:07 +0200
Subject: Re: [PATCH 2/6] x86/cpuid: Rename NCAPINTS to X86_NR_CAPS
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230504193924.3305496-3-andrew.cooper3@citrix.com>
X-ClientProxiedBy: FR3P281CA0103.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7351:EE_
X-MS-Office365-Filtering-Correlation-Id: 5b106de3-d88b-457e-0256-08db4f91335c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	DYBulBXXqqEQLqM5iRKPOzaNldxVL3ziuteYHkwU2GRL7NC63fvJs/pTkCcvTeoMP+BCajLMu9DzIJGjrXK9dkNOLdIIf1J6PE98Mt1t+ZXvhTnkk0U3ZbTWmTtLUYq4Xcpy/uZGnRwxeepdlw/+zOVqYPCbw7u+csVSRmbgTYZ9ZhTxEEdp/MM2AXNGFtMGKgzYZl4CHH9m3FSR8PgSS4d88YThEnOOk3CaU4HLfI93YODtuy4fbM1lBqBgN+yPwfJdJ8SormTqe1Kstuc0pqNwXFw2XUcWB/StlBMGdIMXAQdAxwuULJ/BY4HSp0jGDRzQewQkIkNnMXWnYbAVrZRgkF0N5ykKGiQe79WB9353OOxZ7/eGhdRDMoQUDVKHe+uC76AaIy/jlOR0n47nUGYf7eAM80jkdi1rbRMlfqXPwK4QbqH2yDunz6Ol6+E+pVrZBQwdKZXb35HiehFB6zhAFBnE1lgV6D4OOJ1k0s6w++Azi0pJz4Z2Y8sz07tYxZPwyF3iMNQJph7xgHsIIc5YoEs/vJlImW9qykkQsdmsMfkmYE+irMIWFjKyu50pr03A0Z9ZS2HpMmUbVmEDO1E6RaJQXfBWRTFu2YhSXUpYjn5BpyK7D0tS5wnsJwP6pafyKO/a7C4+jShh6OzKqA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(396003)(376002)(366004)(39850400004)(136003)(451199021)(31686004)(6916009)(38100700002)(41300700001)(4326008)(36756003)(8676002)(8936002)(316002)(6512007)(6506007)(26005)(53546011)(6486002)(478600001)(66946007)(66476007)(66556008)(2616005)(31696002)(5660300002)(86362001)(186003)(54906003)(2906002)(6666004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WXBIU0pxbitrZmxCU3hUTVB2c3NkcmZ2QVZzTll1V0JsdDFCN3FFN2RneXJz?=
 =?utf-8?B?T3N5RFhFMXR6RXhyWTZHbkw4aGxrYUNaTktSVHZZd3o0dWdycFJkQUx6bm1k?=
 =?utf-8?B?SGdwU1NiYnhjd2hORDVmYy82UkVlYkNoeWFnUDVQaWFQRWZqVjBBQWNaUURJ?=
 =?utf-8?B?NjgrTkluSlk2SG1EZ3NLVVJHc1NQV3ZGVjN3LzZVaXFQSGdQc2hFTGRUR3d1?=
 =?utf-8?B?bUV2emQ0czRMTDFoaWRWZzF6a3hZZkdWdk1kUm52TnQ4bXg5dEh1WEU2ODdu?=
 =?utf-8?B?eWcwQytKYXZRUUNFOTNPTTlYd0VIektxOElGQVdWWGdidzBIVmRMZTNSQ0tL?=
 =?utf-8?B?RFIrTWpMbG9CYk53dEFyUnpmOFlEcmduT2RRSFdHZHpkTDBrd0VqWFc4WlNT?=
 =?utf-8?B?OXVNNnp3a2lHcnVLenJqMVVYdkhiaFVzME8vRWVCdVRnbmlERUhMNGV2MnZu?=
 =?utf-8?B?M2FLWU81WGpIRklDdUhMd2s4QkFzR1pQVE9GU3FQS1E4SmVhWGJtM2pZT242?=
 =?utf-8?B?Y29wYytVc0FMdEROY3FqY25vY0Z6dFkwaFBTZGhpTkRQQThoNW9XT1VvZTF3?=
 =?utf-8?B?eW9nNy9scW5kYUtJYzl5dWo2THFPTnc2MXZXWm0raFJrVEpGZ1NhNFY1Ym1K?=
 =?utf-8?B?ZEJhNDhTRDlCUHhKODZCbDlNSkRMcHZqWWU1MWI4cGM0SHF1cnhGdTl5bmQz?=
 =?utf-8?B?TFE4eDJIRk5CZTdmRVZqdkJvUnk2WXNha2ZPOThzcHU0SHlCRU5DVzNGc0pU?=
 =?utf-8?B?N1JwMElNeWFCSWgxTHd0Y3doNklscmRkTDYrd3hGSmZDYW5EMTRDZUtPTGZT?=
 =?utf-8?B?ZGZkQ1dMQ2wrYWdwYzlqTGlJd05kNTl4NFZFMVp1NUhmRXM0bTN6TVExRlB0?=
 =?utf-8?B?WElIekg4OXp4ZHVJNm82czh3cjcrbFlVUDZXY3lSZit2R0ZVanVlVGFqUkla?=
 =?utf-8?B?eEsyWUdPTGNoZ0dkQmhybDJTRkFVbElsblZScDd0K3oyeVNweE9wek5hRUV6?=
 =?utf-8?B?cWVvY0x6eC9zRnRvZk9paDRWUDVWSVdhSWtLcVhPM3dFeXA5Szg4QVJBV3Rh?=
 =?utf-8?B?aE8yTk1XdzZocUN2OTZia0hKK0hrbWFFMHoyWUsxQ2s4bmdiNkpTdmluU3h4?=
 =?utf-8?B?Ukg1Rk0rbDZLVnlvbHdOZkxyZG1jUjVkMVZFUWV6dHpiL2xlblRnVnR1R1kx?=
 =?utf-8?B?WUtFai9nbVlpQUY5SzhlSGhjSGNsZ3dhOHFuWFR6UVA5cGZsbnRxeHhjRGZn?=
 =?utf-8?B?N3pFbVFKcTJCVVM4RmF3TWh3WGt6bHloVTIvUDBrTFBTYk9yVTFUaElYWlpN?=
 =?utf-8?B?WGljZE0wcGwvZnFycEs0dzZWQ0g4TGl0QlJKTHhSdUxzdldUaW1kdkl1cHAw?=
 =?utf-8?B?WVUwNnZvOVo1MThtbDNucVhzOGp0QXJZeDZuSEwreG1sRGxKZEpueDh3S1dT?=
 =?utf-8?B?OWRKMDBaSzNqZVlaa09YMDE3ekptV25SMDg4SC91YzBHZ2tLTVFNeFFlTWdp?=
 =?utf-8?B?dFZFZFRhdENIRWJBUDlqNmFtS2Vwdm1pbFpBSHZtZUptNWF3TnhwNTd5VzhF?=
 =?utf-8?B?ZG8zOVZBWXdBamU5TDNqMmFWdUpLa2EveHFhdDNIYlR1L0t4NS92RDB1TE5O?=
 =?utf-8?B?bjJpUmx3N0tsd2QrR2h4bERjOFJEbUVySUNuWWVkUGZKWngxdzRYdHAySkZj?=
 =?utf-8?B?aStibWNObnpaZVp1dXNOaWZTazdkdUdrVDNHeVd3MGU1UCtFeWF1MkZMZkhi?=
 =?utf-8?B?eWtxRCtGdnUyTUhnWUl4b2QyTzJRZktHV0FEQ2JrUnc3aUYzSDRTcjhvaWtv?=
 =?utf-8?B?Si8zcUFHYU8yNHlSdUpnY0FtSVhVWm41QlRqZTQ2ZzQyV2tKREFpT1A0bXNG?=
 =?utf-8?B?UVB2bHdFZXpYbTZXZVF3bElpaEpPemFsZkRBNTJiYkRjVDFSTTRqRnEwN0lr?=
 =?utf-8?B?Z1IzQ1ZJSUZ1aG1BMEcvd1UzV3RwUmdwaHRwT0V4ZVBTMmlES3R4c2VvOXNK?=
 =?utf-8?B?SkQ1amlOOFN4a08rdzZCaFFMR0RsbmI5SjRLL2JyZjEvNkM1Ulk4RFJjZjUr?=
 =?utf-8?B?SlRoLzZNcVByY1QxRU5JZTJxK0JsRTdZb1oxRWgrWDJlMjFUcElLTm1sNG81?=
 =?utf-8?Q?gOkP2K/4Ojt4DpdBpJZCjbfvU?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5b106de3-d88b-457e-0256-08db4f91335c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 06:55:25.4509
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pFc8F1MXy/Lsl0Qs+eYnV7zqSnjhYXWfGgUk7mAIKMprrMTdCKO+vG50+JqtiFIZyUsw0yBvPYcLulPidVIuZQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7351

On 04.05.2023 21:39, Andrew Cooper wrote:
> The latter is more legible, and consistent with X86_NR_{SYNTH,BUG} which
> already exist.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

I can live with this as-is, so
Acked-by: Jan Beulich <jbeulich@suse.com>
yet ...

> --- a/xen/arch/x86/include/asm/cpufeatures.h
> +++ b/xen/arch/x86/include/asm/cpufeatures.h
> @@ -53,4 +53,4 @@ XEN_CPUFEATURE(IBPB_ENTRY_HVM,    X86_SYNTH(29)) /* MSR_PRED_CMD used by Xen for
>  #define X86_BUG_IBPB_NO_RET       X86_BUG( 3) /* IBPB doesn't flush the RSB/RAS */
>  
>  /* Total number of capability words, inc synth and bug words. */
> -#define NCAPINTS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */
> +#define X86_NR_CAPS (FSCAPINTS + X86_NR_SYNTH + X86_NR_BUG) /* N 32-bit words worth of info */

... the way the value is computed suggests to me that "CAPS" (i.e.
"capabilities") isn't quite the right term. "features" sadly isn't either
(or else I'd have suggested it without hesitating): neither of the two
really fits the inclusion of "bugs", but "features" still feels slightly
better to me as a non-native English speaker.

Then again "CAPS" fits x86_capability[] best ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 06:57:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 06:57:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531269.826845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvuoT-0006Qi-Qm; Mon, 08 May 2023 06:57:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531269.826845; Mon, 08 May 2023 06:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvuoT-0006Qb-NK; Mon, 08 May 2023 06:57:37 +0000
Received: by outflank-mailman (input) for mailman id 531269;
 Mon, 08 May 2023 06:57:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvuoS-0006QT-0n
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 06:57:36 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0630.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9b0fb0df-ed6d-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 08:57:34 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7351.eurprd04.prod.outlook.com (2603:10a6:10:1b2::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 06:57:31 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 06:57:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b0fb0df-ed6d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YAD64Mg4IsVRJutXtFIyTfGHN6ne3i+pj8shU+hnYN9c8zLAV4m8DojUEeVOGx2nAmwReHtyxt3pifK6NJce8LpAXMJfflLW+V4z0NtdyPcPpOg7yFlMDNMmUD63SR+3s+EqITkyvKxELDvTHodnznZiO1VW8rvrlF3dn9G4IMErHPA+INZdj+VQkubcqsZkzQF2Ia7Z0M/Kk3F1c6N6Qyr5grq6ayy0Cllmwrg6Qx0GfPHIK6RHklAPqIRf0zkdDb1GtzrN2I0M21S9hPhqIMqA5iPXD761xJTFmpS8jrQnPkGdR0+L1cT0xPMqCCB+1qTKc1fMUbhUGk79qnZGPw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nt2NJz22ENuOlKjutZ54L4hE9P3SR48KwWTodLGBkk0=;
 b=T0Xyw7tcHJqe25UyZXEhvaxxcvrhTWoHcnmjK65kVceMlcvbh08TiNuv1x0jj7Hr7Ll1SpFFTH+yECe09vKD6TeNRSu2h3taa8b1I5v0+Qe6sHCwXjZLtDtR+fHVV6f9RrKmHuHBZjerSidBSl1XM3DZtQuS1sfnkk5tt2Vp8t7Dj6ANqen7w0Bj+BoRb2pkWikVQIRl95qNiD8tbduWVzhfDFTMmR5a9eTzKbvHziVneeQoL7+N3YKMQ2yBRq8M5Ro2G28l5Uig29aeno+G9utsjLbX4GI7BbxWzqPb+Bu/9wCVuVn7I1IksoSnc08cyvxx6Yw/j9Wfyo5t1SrP2Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nt2NJz22ENuOlKjutZ54L4hE9P3SR48KwWTodLGBkk0=;
 b=c4F84VgF3VYNLietC/CA5yN5LOc6uV0eV74wNC3PtWPWvd19MmG8MeWUTq8L7+c0zdLDXz7egiG62B+GkobWd/LTCXw0TxAUFmnbuXQDWLGYL8ZrLj2vFYDPUVAysb4PH94SFwcIrPnTlcYRR0Re6LfHgLQn1aHuxrtudzsbCuI+KdAudF9pW5sZEIz+JlEzevBbJqG3BE9//koH4W5u3GSAIVKMjKXZ3p6HOmBOMntizBJkhleTxxVCmMPIF6dqZ4wc+ApHv3Rom97iNbtUuuSdp/2kqB6R7uYE22VuBmSZXmUrMfd/VUiRQUeYhHKXvPP3VRYH6lFYQGe7Zzqa3g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e5dbcc00-ee8e-3530-f04b-0bfd8b936f9a@suse.com>
Date: Mon, 8 May 2023 08:57:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 3/6] x86/cpuid: Rename FSCAPINTS to X86_NR_FEAT
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230504193924.3305496-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM0PR02CA0074.eurprd02.prod.outlook.com
 (2603:10a6:208:154::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7351:EE_
X-MS-Office365-Filtering-Correlation-Id: 19a6e9f7-cc58-4c80-df15-08db4f917e36
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	n1Od0eWNjtNhp5Uh+scAWwZKzbWzgNhGL/qVUgvVvU7ZzJdx+aic3jNxsv94SiPicd1OQ2LUHzgxfwjUd+LHYCHhCz0c8tHax4NPXwubFNbvSZOuuD1HhihwfeYm6y7WJ9fOHbRErtF1X7sVnXgFtgq1wMDWtjEJmjcaowRL9RBqINDNABOHTyDUlTrMxh4lIuJOUv/8JL8GZnQW6i1y4y/C1m8S29pQQcibwG4Tt4ETOrxceS1k0gdlb1zyiR5iSxF+fFbCTiU5pCRBXGlumV+yLdMVGb6ozIhZftczpZSnvSEC4GLAjcCVDs8cyjN5K1tctpxFDFvcMt9lbSLrRxRMhQkEaQUw8Eo2647Kg1crNEJXdho+2hHj6DVPLkL/aTkpMu5mycGHr0UYXgnjW89ue/hkZVaT9s6L0iNGaBg6F1//XhnELapSbNGXc/NAxxinVMp1EHhEMpxQGqjsindojRQ3sf50dR2PoPfEkOly8v9aKfNatNRzGHQJhYa3PmHuWa+3KTsqp2EHLasN0JkXp+yBxAE4zele4/Tjlxguqjn1Aq+yKqRoI3lrzZAtLAjrjL0dPjs61xICtw19HsmmsScyRjtQ/B8TvezbBsrhI5Kq1aHwFJViNiRb4yDhQsZsiagzTQCHjrI4cizhZQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(396003)(376002)(366004)(39850400004)(136003)(451199021)(31686004)(6916009)(38100700002)(41300700001)(4326008)(36756003)(8676002)(8936002)(316002)(6512007)(6506007)(26005)(53546011)(6486002)(478600001)(66946007)(66476007)(66556008)(2616005)(31696002)(5660300002)(86362001)(186003)(54906003)(2906002)(4744005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZitZVlFxTzI1aGdYWG1kUXV2RHQyMHhpUkFDemtXSXNRSzJENE5rbXpseENk?=
 =?utf-8?B?LzNvYXFHRmlmclAzUklsbEQwSmNCM0kxQ0NoWERTem11REcrMHAyR2R2ZGxy?=
 =?utf-8?B?K3haVGlTSVB5QytSNDBJajNOYTBwWWNGbEpqZ256OUFHOTd6U2dnU1dUdkVR?=
 =?utf-8?B?TVhZVkxmSm14a3ltU2JuaGxXSWk4NDlPMkxyOVBoNEF1TWxZSlpsMEh6Nk1Y?=
 =?utf-8?B?S2VRVWFFYTN1Vm9wakRTd0dRd3ZwQVFlQ2RSVW1NUlowc1Y5NGc0M1NoZW5G?=
 =?utf-8?B?QVdiY21wVlpqRUJBOFFJWmovbkRZQXM1cUw2MXhGOXd5cVFzZmlMZlRRUk1r?=
 =?utf-8?B?Q1R3UzUyaWRVSDJkQmlhUlJPZlJMZ0NjSzRRM2h5VllTQTFYRGRDMFA5T2ta?=
 =?utf-8?B?RFE3emZQd3B4SE5IRVpWTkJWK2NQUzBncS9IUkx0Z0R2Z3hlTTVQTDhRWW1i?=
 =?utf-8?B?a1EwWlByNDdpR1VQbzFBbFF4OFJ1M0xzb3czWmhsOVhDNGRUcnJxeUFrcnlL?=
 =?utf-8?B?VmFVa0NkbnE1N0w2Zm5UanFQY2kyczJlM0lxTXhVVG41V1UreXFxN2JxU2Ro?=
 =?utf-8?B?c3A1YkVlV0hOMWxKQk0yczVwNFBTWXRMUTBUVEM3NVNtd1dpcUNhUWJjTE5y?=
 =?utf-8?B?cFQwNnRjdW5WdE4vOEdlVWczVFB2UmNUd0FkYTE5UzRjVERWWEdXZktFQUJR?=
 =?utf-8?B?N3dPcHFIRThCdjFzdExHQjRvRmVvYS8vUXdlK3Fha1VqaEtWTTdqY3oraE12?=
 =?utf-8?B?dExkTU9VcVRSS0NGZTF0ajI1Z3VmNjdCMVJvdEU5UmUvSGZhRDVnNVl3WDNN?=
 =?utf-8?B?eHJ5Y1hoZHFwQXQ5OHVGc2NXSUd2bkF6azZVY1NPYmpnOGs5bElMZDZXaGZ2?=
 =?utf-8?B?bTk5d0hqZGI2b0EwMzg0UGtpbTl4Y0U4ZEV1WkloOVNJODRLMjVUeEE1Qk1I?=
 =?utf-8?B?amFaVHp0NWllcjdNVWFaWXhJR2RTczVtT3VqSHJ2d05mUnBobERpMk91b1Jq?=
 =?utf-8?B?SVJ1VU1RL0JIQndrZS9wYVdLNk9CZXlCUDh4YUxJOWNZdkN0U3Y3bDhieUZr?=
 =?utf-8?B?WHU3QUo4Z0ltS21EczNyUmRMekJBOEd5bDdLQXl0QXlvSFhZSjFsRzNUWHg4?=
 =?utf-8?B?MzZ6ekozcWhHdEdjbFVsSmN3YlFtRzU1VzNjL0FFci8vRE55bm1yZmlZVjZR?=
 =?utf-8?B?Y1oxdWpaUXhzdnNIdFhZMlgvWDNiQm5oTDN6V2hQaVc2SkpzR0xLK1l2bTJa?=
 =?utf-8?B?Y2xOTUE1RStycjl0RC9OVDhLUVVxbU9QNyt0YUYyOTlJOTREd3JJU0hBTDdv?=
 =?utf-8?B?Q2Q0cnJnUVNVdjFvZVRSVXZQL2pHbi9iZ1ZzTkZrWkVSMlZjaEttOUFnMHk3?=
 =?utf-8?B?eDI0NzBwRmh1WEhzZWNjMUI0amhJcFJtRitBOHVRUTNOencrd0dQNEJzaFR1?=
 =?utf-8?B?U3MrclZoN01iWDUxVFUrN1Zhc2hwZjJaWXBvYkNtZWIyNElkNERJQ3JVM0dl?=
 =?utf-8?B?SjVlRUdDa0xRYm1NakNNRWVyb05tRk1NZkp0bEFNK3FGcXI0UG14bUtFQnQ0?=
 =?utf-8?B?d2NKNi92MVhmbnI1Zm1hMTRpMHpNVy8zclpPWFFCRnlvcFlWSHhUcDd6bDQy?=
 =?utf-8?B?Wk5mcXE2ckJSWDgwZmwrQk1nVWZmYUhvZWlBMDZZY2tBUitobFU1NFduWlJx?=
 =?utf-8?B?TlhkcXJVRXd5TXdOV1Vna2p0emZMN0kxQnZzQ1BRZFg1RXBveEJoRGh3ekdk?=
 =?utf-8?B?N1RVU3RxaWhIcFMyVWlrN0NacEcwQ0hhUFQyUno2OTk2c0VvMEF6SDk0Wk0z?=
 =?utf-8?B?em02WDBYRHFoZmJQZmFsUnk2YTNnZG5pOXU5VWV3VEZ2eEg3MzBQQksrWUd1?=
 =?utf-8?B?MVVKQmxOZHZza0xGcGFhOUtwQjR5aEg1eXBzWUJ0NlVEWDhUYzVLSHBnWVJa?=
 =?utf-8?B?dTNYOWM4QThlaW1BVjBHdWxOSHNYN3NtK2tvWGJFcW1wRUQweTB2dzNsbWE0?=
 =?utf-8?B?bkJpMGlUTzZlMWlpcTE2Y2ZrcmxDRmQyZzJkalFrYnIwVXloaGhneFRMZFN4?=
 =?utf-8?B?Q0xrL0EyVFA4UnNyUmtnbzVWVCtDSHNuUTRkWDhxdnJyWHBWTHhYa1RlUkxD?=
 =?utf-8?Q?bTIjTofEQt6oU+x/umiEv3N23?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 19a6e9f7-cc58-4c80-df15-08db4f917e36
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 06:57:31.2550
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IN+WWWBIuvbOCCPZ7LhXA00fe/yHA1TQDiHtDj9AW/oDCT1qG77/V0+W2BygddkEXpFFNgN2+DzPqVi5A6QfIw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7351

On 04.05.2023 21:39, Andrew Cooper wrote:
> The latter is more legible, and consistent with X86_NR_{CAPS,SYNTH,BUG} which
> already exist.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

And in the light of this the name choice in the earlier patch is probably
indeed the least bad one. (I'm sorry, I should have paid attention to the
title of this patch here when replying there.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 07:02:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 07:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531275.826855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvut6-00087L-Gs; Mon, 08 May 2023 07:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531275.826855; Mon, 08 May 2023 07:02:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvut6-00087E-CL; Mon, 08 May 2023 07:02:24 +0000
Received: by outflank-mailman (input) for mailman id 531275;
 Mon, 08 May 2023 07:02:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvut5-000875-20
 for xen-devel@lists.xen.org; Mon, 08 May 2023 07:02:23 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 475b669a-ed6e-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 09:02:21 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8C44F22038;
 Mon,  8 May 2023 07:02:21 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 3F0C81346B;
 Mon,  8 May 2023 07:02:21 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id b1GqDX2eWGT/KAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 07:02:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 475b669a-ed6e-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683529341; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NLWDrEm93LZyOFp9xeXo6/XT6xDUF7ihv0pjc2/CGE4=;
	b=G5rh8vPRZ3AJXEFa9qc8uv/cQ6qd63PNGotWqAGoOyNk9W1u6bDIc/lviWZoL0eGt5KG4o
	uMogbfhwDjVZ5o/cLe2m1kmzSxj5wZ1jQf+B9Gshok8mwd9LdXn107jT/NWaz9dhtYB6XD
	/UfZ7bukKYbQWU3gvj9kDMWcDs6YTt4=
Message-ID: <016edde8-e47e-a988-e5c1-f5aad0d4b60d@suse.com>
Date: Mon, 8 May 2023 09:02:20 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
Content-Language: en-US
To: Viresh Kumar <viresh.kumar@linaro.org>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>, xen-devel@lists.xen.org,
 stratos-dev@op-lists.linaro.org, Julien Grall <julien@xen.org>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Mathieu Poirier <mathieu.poirier@linaro.com>,
 Erik Schilling <erik.schilling@linaro.org>
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
 <25fb2b71-b663-b712-01cd-5c75aa4ccf9b@gmail.com>
 <20230404234228.vghxrrj6auy7zw4c@vireshk-i7>
 <20230505061934.jm3bwjgsl5hf5rns@vireshk-i7>
 <CAPD2p-nvLXdxkwik-UPjS1JAjz6z2FNuxb1JYrj4bNwusEZpPg@mail.gmail.com>
 <20230508040927.uhgwfkhknwyvsowu@vireshk-i7>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230508040927.uhgwfkhknwyvsowu@vireshk-i7>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------8TdY1rGaWKCJvGGRqi62EHVh"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------8TdY1rGaWKCJvGGRqi62EHVh
Content-Type: multipart/mixed; boundary="------------LljxTwaMiE7SpQZT1QMO5GRq";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Viresh Kumar <viresh.kumar@linaro.org>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>, xen-devel@lists.xen.org,
 stratos-dev@op-lists.linaro.org, Julien Grall <julien@xen.org>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Mathieu Poirier <mathieu.poirier@linaro.com>,
 Erik Schilling <erik.schilling@linaro.org>
Message-ID: <016edde8-e47e-a988-e5c1-f5aad0d4b60d@suse.com>
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
 <25fb2b71-b663-b712-01cd-5c75aa4ccf9b@gmail.com>
 <20230404234228.vghxrrj6auy7zw4c@vireshk-i7>
 <20230505061934.jm3bwjgsl5hf5rns@vireshk-i7>
 <CAPD2p-nvLXdxkwik-UPjS1JAjz6z2FNuxb1JYrj4bNwusEZpPg@mail.gmail.com>
 <20230508040927.uhgwfkhknwyvsowu@vireshk-i7>
In-Reply-To: <20230508040927.uhgwfkhknwyvsowu@vireshk-i7>

--------------LljxTwaMiE7SpQZT1QMO5GRq
Content-Type: multipart/mixed; boundary="------------SfLWA0MwlAA6q5B9qosBCs3a"

--------------SfLWA0MwlAA6q5B9qosBCs3a
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 08.05.23 06:09, Viresh Kumar wrote:
> On 05-05-23, 16:11, Oleksandr Tyshchenko wrote:
>> I was going to propose an idea, but I have just realized that you already
>> voiced it here [1] ))
>> So what you proposed there sounds reasonable to me.
>>
>> I will just rephrase it according to my understanding:
>>
>> We probably need to consider transforming your "forced_grant" to something
>> three-state, for example
>> "grant_usage" (or "use_grant" as you suggested) which could be "default
>> behaviour" or "always disabled", or "always enabled".
>>
>> With "grant_usage=default" we will get exact what we have at the moment
>> (only create iommu nodes if backend-domid != 0)
>> With "grant_usage=disabled" we will force grants to be always disabled
>> (don't create iommu nodes irrespective of the domain)
>> With "grant_usage=enabled" we will force grants to be always enabled
>> (always create iommu nodes irrespective of the domain)
> 
> I was wondering if "grant_usage=default" can be interpreted by the
> absence of the grant_usage parameter. So just grant_usage=1 or 0,
> which would mean enabled or disabled and if grant_usage isn't passed,
> we would switch back to default.

I don't think this is a good idea. I think you'd need the 3rd state already
in the interface between xl and libxl.


Juergen
--------------SfLWA0MwlAA6q5B9qosBCs3a--

--------------LljxTwaMiE7SpQZT1QMO5GRq--

--------------8TdY1rGaWKCJvGGRqi62EHVh
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRYnnwFAwAAAAAACgkQsN6d1ii/Ey/L
LwgAm1/ilqFa/bVlI3stWVraQeJ83u7kj8C+bB1tElWemvPL7UjiGuxuRKJhEH5jQY35HvcEZk1i
PKiTecPd9ze0ZvuPFq2nh7cSszAOlAJK6EMluAzd3TaEfPNv6PqfB08+qbm9ZG0aX0XsqeJ383lf
aMXoa+F+DBVjFKIMANswxVtrDkowGFTD9T8wLGY5jncavIKDI4ko6EJgUc8XvF5aIvNbhDgbw8dC
53j7ya9EVEiJaFPiTamJhtHewit3ZOJAujvltf4IQGA5XlHQuIJUubZM56rCt11V+tke4k/Nq5VW
EULruSEA7aNdOywxiAFT4g8HpKnTSKwPeKX2UGq9TA==
=MC4x
-----END PGP SIGNATURE-----

--------------8TdY1rGaWKCJvGGRqi62EHVh--


From xen-devel-bounces@lists.xenproject.org Mon May 08 07:45:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 07:45:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531280.826865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvvYx-0005M7-Ox; Mon, 08 May 2023 07:45:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531280.826865; Mon, 08 May 2023 07:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvvYx-0005M0-LI; Mon, 08 May 2023 07:45:39 +0000
Received: by outflank-mailman (input) for mailman id 531280;
 Mon, 08 May 2023 07:45:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvvYw-0005Lr-6G
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 07:45:38 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2079.outbound.protection.outlook.com [40.107.7.79])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51c184ab-ed74-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 09:45:36 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9403.eurprd04.prod.outlook.com (2603:10a6:10:369::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 07:45:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 07:45:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51c184ab-ed74-11ed-b226-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9613df3b-c49f-6034-ff99-7a6ea9286f0f@suse.com>
Date: Mon, 8 May 2023 09:45:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 5/6] x86/cpu-policy: Disentangle X86_NR_FEAT and
 FEATURESET_NR_ENTRIES
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-6-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230504193924.3305496-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS4P251CA0025.EURP251.PROD.OUTLOOK.COM
 (2603:10a6:20b:5d3::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9403:EE_
X-MS-Office365-Filtering-Correlation-Id: c4df971d-cd86-4453-3361-08db4f982495
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c4df971d-cd86-4453-3361-08db4f982495
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 07:45:07.2080
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9403

On 04.05.2023 21:39, Andrew Cooper wrote:
> When adding new words to a featureset, there is a reasonable amount of
> boilerplate and it is preferable to split the addition into multiple patches.
> 
> GCC 12 spotted a real (transient) error which occurs when splitting additions
> like this.  Right now, FEATURESET_NR_ENTRIES is dynamically generated from the
> highest numeric XEN_CPUFEATURE() value, and can be less than what the
> FEATURESET_* constants suggest the length of a featureset bitmap ought to be.
> 
> This causes the policy <-> featureset converters to genuinely access
> out-of-bounds on the featureset array.
> 
> Rework X86_NR_FEAT to be related to FEATURESET_* alone, allowing it
> specifically to grow larger than FEATURESET_NR_ENTRIES.
> 
> Reported-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

While, like you, I could live with the previous patch even if I don't
particularly like it, I'm not convinced of the route you take here. Can't
we instead improve build-time checking, so that the issue GCC 12 spots
late in the build can be caught earlier and/or connected more clearly to
its underlying cause?

One idea I have would deal with another aspect I don't like about our
present XEN_CPUFEATURE() as well: The *32 that's there in every use of
the macro. How about

XEN_CPUFEATURE(FSRCS,        10, 12) /*A  Fast Short REP CMPSB/SCASB */

as the common use and e.g.

XEN_CPUFEATURE(16)

or (if that ends up easier in gen-cpuid.py and/or the public header)
something like

XEN_CPUFEATURE(, 16, )

as the placeholder required for (at least trailing) unpopulated slots? Of
course the macro used may also be one of a different name, which may even
be necessary to keep the public header reasonably simple, perhaps even to
the point of avoiding compiler extensions there. (This would then mean
leaving XEN_CPUFEATURE() itself alone, and my secondary adjustment would perhaps
become an independent change to make.)

> To preempt what I expect will be the first review question, no, FEATURESET_*
> can't become an enumeration, because the constants undergo token concatenation
> in the preprocessor as part of making DECL_BITFIELD() work.

Just as a remark: I had trouble understanding this. That was a result of
you referring to token concatenation as being the problem (which is fine
when the results are enumerators), when really the issue is that the result
of the concatenation needs to expand to a literal number.

Then again - do CPUID_BITFIELD_<n> really need to be named that way?
Couldn't they equally well be CPUID_BITFIELD_1d, CPUID_BITFIELD_e1c, and
alike, thus removing the need for intermediate macro expansion?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 08:59:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 08:59:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531302.826875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvwhj-0005Oa-7Y; Mon, 08 May 2023 08:58:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531302.826875; Mon, 08 May 2023 08:58:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvwhj-0005OE-1I; Mon, 08 May 2023 08:58:47 +0000
Received: by outflank-mailman (input) for mailman id 531302;
 Mon, 08 May 2023 08:58:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvwhh-0005O8-LX
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 08:58:45 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0609.outbound.protection.outlook.com
 [2a01:111:f400:fe02::609])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 882dd0b1-ed7e-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 10:58:43 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9455.eurprd04.prod.outlook.com (2603:10a6:20b:4d8::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.25; Mon, 8 May
 2023 08:58:40 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 08:58:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 882dd0b1-ed7e-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0533b045-f4cb-0834-ae88-9229bd816cf2@suse.com>
Date: Mon, 8 May 2023 10:58:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v6 2/4] xen/riscv: introduce setup_initial_pages
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1683131359.git.oleksii.kurochko@gmail.com>
 <d1a6fb6112b61000645eb1a4ab9ade8a208d4204.1683131359.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d1a6fb6112b61000645eb1a4ab9ade8a208d4204.1683131359.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM0PR04CA0117.eurprd04.prod.outlook.com
 (2603:10a6:208:55::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9455:EE_
X-MS-Office365-Filtering-Correlation-Id: 7b094322-54b8-4480-149e-08db4fa26a82
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7b094322-54b8-4480-149e-08db4fa26a82
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 08:58:39.4982
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9455

On 03.05.2023 18:31, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -70,12 +70,23 @@
>    name:
>  #endif
>  
> -#define XEN_VIRT_START  _AT(UL, 0x80200000)
> +#ifdef CONFIG_RISCV_64
> +#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
> +#else
> +#error "RV32 isn't supported"
> +#endif
>  
>  #define SMP_CACHE_BYTES (1 << 6)
>  
>  #define STACK_SIZE PAGE_SIZE
>  
> +#ifdef CONFIG_RISCV_64
> +#define CONFIG_PAGING_LEVELS 3
> +#define RV_STAGE1_MODE SATP_MODE_SV39
> +#else

#define CONFIG_PAGING_LEVELS 2

(or else I would think ...

> +#define RV_STAGE1_MODE SATP_MODE_SV32

... this and hence the entire "#else" should also be omitted)

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/current.h
> @@ -0,0 +1,10 @@
> +#ifndef __ASM_CURRENT_H
> +#define __ASM_CURRENT_H
> +
> +#define switch_stack_and_jump(stack, fn)    \
> +    asm volatile (                          \
> +            "mv sp, %0 \n"                  \
> +            "j " #fn :: "r" (stack) :       \
> +    )

Nit: Style. Our normal way of writing this would be

#define switch_stack_and_jump(stack, fn)    \
    asm volatile (                          \
            "mv sp, %0\n"                   \
            "j " #fn :: "r" (stack) )

i.e. the unnecessary colon omitted, no trailing blank on any generated
assembly line, and the closing paren not placed on its own line (only
closing curly braces would normally live on their own lines).

However, as to the 3rd colon: Can you really get away here without a
memory clobber (i.e. the construct being a full compiler barrier)?

Further I think you want to make the use of "fn" visible to the
compiler, by using an "X" constraint just like Arm does.

Finally I think you want to add unreachable(), like both Arm and x86
have it. Plus the "noreturn" on relevant functions.
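Putting the three suggestions together, the macro might end up looking
roughly like the Arm variant (an untested sketch, not a drop-in - whether
the "X" constraint and the clobber interact as intended with the "j"
instruction would still need checking):

```c
#define switch_stack_and_jump(stack, fn)                    \
    do {                                                    \
        asm volatile (                                      \
                "mv sp, %0\n"                               \
                "j " #fn                                    \
                :: "r" (stack), "X" (fn) : "memory" );      \
        unreachable();                                      \
    } while ( false )
```

Here "X" (fn) only makes the reference to fn visible to the compiler (the
jump target itself still comes from the stringified name), and the memory
clobber makes the construct a full compiler barrier.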

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/page.h
> @@ -0,0 +1,62 @@
> +#ifndef _ASM_RISCV_PAGE_H
> +#define _ASM_RISCV_PAGE_H
> +
> +#include <xen/const.h>
> +#include <xen/types.h>
> +
> +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
> +
> +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define PTE_VALID                   BIT(0, UL)
> +#define PTE_READABLE                BIT(1, UL)
> +#define PTE_WRITABLE                BIT(2, UL)
> +#define PTE_EXECUTABLE              BIT(3, UL)
> +#define PTE_USER                    BIT(4, UL)
> +#define PTE_GLOBAL                  BIT(5, UL)
> +#define PTE_ACCESSED                BIT(6, UL)
> +#define PTE_DIRTY                   BIT(7, UL)
> +#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
> +
> +#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
> +#define PTE_TABLE                   (PTE_VALID)
> +
> +/* Calculate the offsets into the pagetables for a given VA */
> +#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define pt_index(lvl, va) pt_linear_offset(lvl, (va) & XEN_PT_LEVEL_MASK(lvl))

Maybe better

#define pt_index(lvl, va) (pt_linear_offset(lvl, va) & VPN_MASK)

as the involved constant will be easier to use for the compiler?

> +/* Page Table entry */
> +typedef struct {
> +#ifdef CONFIG_RISCV_64
> +    uint64_t pte;
> +#else
> +    uint32_t pte;
> +#endif
> +} pte_t;
> +
> +#define pte_to_addr(x) (((paddr_t)(x) >> PTE_PPN_SHIFT) << PAGE_SHIFT)
> +
> +#define addr_to_pte(x) (((x) >> PAGE_SHIFT) << PTE_PPN_SHIFT)

Are these two macros useful on their own? I ask because I still consider
them somewhat misnamed, as they don't produce / consume a PTE (but the
raw value). Yet generally you want to avoid any code operating on raw
values, using pte_t instead. IOW I would hope to be able to convince
you to ...

> +static inline pte_t paddr_to_pte(paddr_t paddr,
> +                                 unsigned int permissions)
> +{
> +    return (pte_t) { .pte = addr_to_pte(paddr) | permissions };
> +}
> +
> +static inline paddr_t pte_to_paddr(pte_t pte)
> +{
> +    return pte_to_addr(pte.pte);
> +}

... expand them in the two inline functions and then drop the macros.

> --- /dev/null
> +++ b/xen/arch/riscv/mm.c
> @@ -0,0 +1,315 @@
> +#include <xen/compiler.h>
> +#include <xen/init.h>
> +#include <xen/kernel.h>
> +#include <xen/pfn.h>
> +
> +#include <asm/early_printk.h>
> +#include <asm/config.h>

No inclusion of asm/config.h (or xen/config.h) in any new code please.
For quite some time xen/config.h has been forcefully included everywhere
by the build system.

> +/*
> + * It is expected that Xen won't be more then 2 MB.
> + * The check in xen.lds.S guarantees that.
> + * At least 3 page (in case of Sv39 )
> + * tables are needed to cover 2 MB.

I guess this reads a little better as "At least 3 page tables (in case of
Sv39) ..." or "At least 3 (in case of Sv39) page tables ..." Also could I
talk you into using the full 80 columns instead of wrapping early in the
middle of a sentence?

> One for each page level
> + * table with PAGE_SIZE = 4 Kb
> + *
> + * One L0 page table can cover 2 MB
> + * (512 entries of one page table * PAGE_SIZE).
> + *
> + */
> +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)

Hmm, did you lose the part of the comment explaining the "+ 1"? Or did
the need for that actually go away (and you should drop it here)?

> +#define PGTBL_ENTRY_AMOUNT  (PAGE_SIZE / sizeof(pte_t))

Isn't this PAGETABLE_ENTRIES (and the constant hence unnecessary)?

> +static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
> +                                         unsigned long map_start,
> +                                         unsigned long map_end,
> +                                         unsigned long pa_start,
> +                                         unsigned int permissions)

What use is the last parameter, when the sole caller passes
PTE_LEAF_DEFAULT (i.e. a build-time constant)? Then ...

> +{
> +    unsigned int index;
> +    pte_t *pgtbl;
> +    unsigned long page_addr;
> +    pte_t pte_to_be_written;
> +    unsigned long paddr;
> +    unsigned int tmp_permissions;

... this variable (which would better be given a narrower scope anyway, like
perhaps several more of the above) could also have its tmp_ prefix dropped
afaict.

> +    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
> +    {
> +        early_printk("(XEN) Xen should be loaded at 4k boundary\n");
> +        die();
> +    }
> +
> +    if ( map_start & ~XEN_PT_LEVEL_MAP_MASK(0) ||
> +         pa_start & ~XEN_PT_LEVEL_MAP_MASK(0) )
> +    {
> +        early_printk("(XEN) map and pa start addresses should be aligned\n");
> +        /* panic(), BUG() or ASSERT() aren't ready now. */
> +        die();
> +    }
> +
> +    for ( page_addr = map_start; page_addr < map_end; page_addr += XEN_PT_LEVEL_SIZE(0) )

Nit: Style (line looks to be too long now).

    for ( page_addr = map_start; page_addr < map_end;
          page_addr += XEN_PT_LEVEL_SIZE(0) )

is the way we would typically wrap it, but

    for ( page_addr = map_start;
          page_addr < map_end;
          page_addr += XEN_PT_LEVEL_SIZE(0) )

would of course also be okay if you preferred that.

> +    {
> +        pgtbl = mmu_desc->pgtbl_base;
> +
> +        switch ( mmu_desc->num_levels )
> +        {
> +        case 4: /* Level 3 */
> +            HANDLE_PGTBL(3);
> +        case 3: /* Level 2 */
> +            HANDLE_PGTBL(2);
> +        case 2: /* Level 1 */
> +            HANDLE_PGTBL(1);
> +        case 1: /* Level 0 */
> +            index = pt_index(0, page_addr);
> +            paddr = (page_addr - map_start) + pa_start;
> +
> +            tmp_permissions = permissions;
> +
> +            if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
> +                    is_kernel_inittext(LINK_TO_LOAD(page_addr)) )

Nit: Style (and I'm pretty sure I did comment on this kind of too deep
indentation already).

> +                tmp_permissions =
> +                    PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
> +
> +            if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
> +                tmp_permissions = PTE_READABLE | PTE_VALID;
> +
> +            pte_to_be_written = paddr_to_pte(paddr, tmp_permissions);
> +
> +            if ( !pte_is_valid(pgtbl[index]) )
> +                pgtbl[index] = pte_to_be_written;
> +            else
> +            {
> +                /*
> +                * get an adresses of current pte and that one to
> +                * be written
> +                */

Nit: Style (one missing blank each in the last three lines, and comment
text wants to start with a capital letter).

> +                unsigned long curr_pte =
> +                    pgtbl[index].pte & ~(PTE_DIRTY | PTE_ACCESSED);

While technically "unsigned long" is okay to use here (afaict), I'd still
recommend ...

> +                pte_to_be_written.pte &= ~(PTE_DIRTY | PTE_ACCESSED);
> +
> +                if ( curr_pte != pte_to_be_written.pte )

... doing everything in terms of pte_t:

                pte_t curr_pte = pgtbl[index];

                curr_pte.pte &= ~(PTE_DIRTY | PTE_ACCESSED);
                pte_to_be_written.pte &= ~(PTE_DIRTY | PTE_ACCESSED);

                if ( curr_pte.pte != pte_to_be_written.pte )

or perhaps even simply

                if ( (pgtbl[index].pte ^ pte_to_be_written.pte) &
                      ~(PTE_DIRTY | PTE_ACCESSED) )
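
For illustration, here is the masked-XOR comparison as a self-contained sketch. The `pte_t` wrapper mirrors the quoted patch; the A/D bit positions (bits 6 and 7, per the RISC-V privileged spec) and the macro values are assumptions:

```c
/* Assumed RISC-V Sv39/Sv48 PTE bit positions: A = bit 6, D = bit 7. */
#define PTE_ACCESSED (1UL << 6)
#define PTE_DIRTY    (1UL << 7)

typedef struct { unsigned long pte; } pte_t;

/* True iff the two entries differ in anything other than the D/A bits,
 * which the hardware may set behind our back. */
static int pte_differs(pte_t a, pte_t b)
{
    return ((a.pte ^ b.pte) & ~(PTE_DIRTY | PTE_ACCESSED)) != 0;
}
```

The XOR form avoids the two temporary masked copies: bits that are equal cancel, and the mask then discards the D/A positions before the zero test.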

> +                {
> +                    early_printk("PTE overridden has occurred\n");
> +                    /* panic(), <asm/bug.h> aren't ready now. */
> +                    die();
> +                }
> +            }
> +        }
> +    }
> +    #undef HANDLE_PGTBL

Nit: Such an #undef, even if technically okay either way, would imo
better live in the same scope (and have the same indentation) as the
corresponding #define. Since your #define is ahead of the function, it
would ...

> +}

... then live here.

> +static void __init calc_pgtbl_lvls_num(struct  mmu_desc *mmu_desc)

Nit: Double blank after "struct".

> +{
> +    unsigned long satp_mode = RV_STAGE1_MODE;
> +
> +    /* Number of page table levels */
> +    switch (satp_mode)
> +    {
> +    case SATP_MODE_SV32:
> +        mmu_desc->num_levels = 2;
> +        break;
> +    case SATP_MODE_SV39:
> +        mmu_desc->num_levels = 3;
> +        break;
> +    case SATP_MODE_SV48:
> +        mmu_desc->num_levels = 4;
> +        break;
> +    default:
> +        early_printk("(XEN) Unsupported SATP_MODE\n");
> +        die();
> +    }
> +}

Do you really need this function anymore? Isn't it now simply

    mmu_desc.num_levels = CONFIG_PAGING_LEVELS;

in the caller? Which could then even be the variable's initializer
there.
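
A sketch of what that simplification could look like (the member names and the Kconfig value are assumptions, laid out to match the `{ 0, 0, NULL, NULL }` initializer quoted further down):

```c
#include <stddef.h>

/* Stand-in for the Kconfig-provided constant; 3 would correspond to Sv39. */
#define CONFIG_PAGING_LEVELS 3

/* Hypothetical layout matching the four-member initializer in the patch. */
struct mmu_desc {
    unsigned int num_levels;
    unsigned int pgtbl_count;
    void *next_pgtbl;
    void *pgtbl_base;
};

/* calc_pgtbl_lvls_num() then collapses into the variable's initializer;
 * the remaining members are implicitly zeroed. */
static const struct mmu_desc mmu_desc = { .num_levels = CONFIG_PAGING_LEVELS };
```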

> +static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
> +                                            unsigned long load_start,
> +                                            unsigned long satp_mode)
> +{
> +    bool is_mode_supported = false;
> +    unsigned int index;
> +    unsigned int page_table_level = (mmu_desc->num_levels - 1);
> +    unsigned level_map_mask = XEN_PT_LEVEL_MAP_MASK(page_table_level);
> +
> +    unsigned long aligned_load_start = load_start & level_map_mask;
> +    unsigned long aligned_page_size = XEN_PT_LEVEL_SIZE(page_table_level);
> +    unsigned long xen_size = (unsigned long)(_end - start);
> +
> +    if ( (load_start + xen_size) > (aligned_load_start + aligned_page_size) )
> +    {
> +        early_printk("please place Xen to be in range of PAGE_SIZE "
> +                     "where PAGE_SIZE is XEN_PT_LEVEL_SIZE( {L3 | L2 | L1} ) "
> +                     "depending on expected SATP_MODE \n"
> +                     "XEN_PT_LEVEL_SIZE is defined in <asm/page.h>\n");
> +        die();
> +    }
> +
> +    index = pt_index(page_table_level, aligned_load_start);
> +    stage1_pgtbl_root[index] = paddr_to_pte(aligned_load_start,
> +                                            PTE_LEAF_DEFAULT | PTE_EXECUTABLE);
> +
> +    asm volatile ( "sfence.vma" );
> +    csr_write(CSR_SATP,
> +              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
> +              satp_mode << SATP_MODE_SHIFT);
> +
> +    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == satp_mode )
> +        is_mode_supported = true;

Just as a remark: While plain shift is kind of okay here, because the field
is the top-most one in the register, generally you will want to get used to
MASK_EXTR() (and MASK_INSR()) in cases like this one.
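
For readers unfamiliar with these helpers, a minimal sketch of how they behave, with the macro bodies reproduced as in Xen's xen/lib.h and the RV64 satp layout assumed (MODE in bits 63:60, value 8 selecting Sv39):

```c
#include <stdint.h>

/* As in Xen's xen/lib.h: the mask's lowest set bit supplies the shift,
 * so the field's position is spelled exactly once, in the mask. */
#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
#define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))

#define SATP64_MODE    0xF000000000000000ULL
#define SATP_MODE_SV39 8ULL
```

With these, the check in the patch would read `MASK_EXTR(csr_read(CSR_SATP), SATP64_MODE) == satp_mode`, with no open-coded shift.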

> +    csr_write(CSR_SATP, 0);
> +
> +    /* Clean MMU root page table */
> +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
> +
> +    asm volatile ( "sfence.vma" );

Doesn't this want to move between the SATP write and the clearing of the
root table slot? Also here and elsewhere - don't these asm()s need memory
clobbers? And anyway - could I talk you into introducing an inline wrapper
(e.g. named sfence_vma()) so all uses end up consistent?
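
Such a wrapper might look like the following; the name comes from the suggestion above, while the guard and the unconditional full-fence form are assumptions (guarded so the sketch also compiles on non-RISC-V hosts):

```c
/* Order earlier page-table writes before later address translations. */
static inline void sfence_vma(void)
{
#if defined(__riscv)
    /* The "memory" clobber keeps the compiler from reordering the
     * page-table stores across the fence. */
    asm volatile ( "sfence.vma" ::: "memory" );
#endif
}
```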

> +void __init setup_initial_pagetables(void)
> +{
> +    struct mmu_desc mmu_desc = { 0, 0, NULL, NULL };
> +
> +    /*
> +     * Access to _stard, _end is always PC-relative

Nit: Typo-ed symbol name. Also ...

> +     * thereby when access them we will get load adresses
> +     * of start and end of Xen
> +     * To get linker addresses LOAD_TO_LINK() is required
> +     * to use
> +     */

see the earlier line wrapping remark again. Finally, in multi-sentence
comments full stops are required.

> +    unsigned long load_start    = (unsigned long)_start;
> +    unsigned long load_end      = (unsigned long)_end;
> +    unsigned long linker_start  = LOAD_TO_LINK(load_start);
> +    unsigned long linker_end    = LOAD_TO_LINK(load_end);
> +
> +    if ( (linker_start != load_start) &&
> +         (linker_start <= load_end) && (load_start <= linker_end) )
> +    {
> +        early_printk("(XEN) linker and load address ranges overlap\n");
> +        die();
> +    }
> +
> +    calc_pgtbl_lvls_num(&mmu_desc);
> +
> +    if ( !check_pgtbl_mode_support(&mmu_desc, load_start, RV_STAGE1_MODE) )

It is again questionable here whether passing a constant to a function
at its sole call site is really useful.

> +void __init noinline enable_mmu()
> +{
> +    /*
> +     * Calculate a linker time address of the mmu_is_enabled
> +     * label and update CSR_STVEC with it.
> +     * MMU is configured in a way where linker addresses are mapped
> +     * on load addresses so in a case when linker addresses are not equal
> +     * to load addresses, after MMU is enabled, it will cause
> +     * an exception and jump to linker time addresses.
> +     * Otherwise if load addresses are equal to linker addresses the code
> +     * after mmu_is_enabled label will be executed without exception.
> +     */
> +    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
> +
> +    /* Ensure page table writes precede loading the SATP */
> +    asm volatile ( "sfence.vma" );
> +
> +    /* Enable the MMU and load the new pagetable for Xen */
> +    csr_write(CSR_SATP,
> +              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
> +              RV_STAGE1_MODE << SATP_MODE_SHIFT);
> +
> +    asm volatile ( ".align 2" );

May I suggest to avoid .align, and to prefer .balign and .p2align instead?
This helps people coming from different architectures, as which of the two
.align resolves to is arch-dependent.

> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,4 +1,5 @@
>  #include <asm/asm.h>
> +#include <asm/asm-offsets.h>
>  #include <asm/riscv_encoding.h>
>  
>          .section .text.header, "ax", %progbits

How does a need for this arise all of a sudden, without other changes
to this file? Is this maybe a leftover which wants dropping?

> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -137,6 +137,7 @@ SECTIONS
>      __init_end = .;
>  
>      .bss : {                     /* BSS */
> +        . = ALIGN(POINTER_ALIGN);
>          __bss_start = .;
>          *(.bss.stack_aligned)
>          . = ALIGN(PAGE_SIZE);

Doesn't this want to be a separate change?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 09:01:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 09:01:32 +0000
Message-ID: <763df4ee-95d7-be20-212f-7450f3fd431e@suse.com>
Date: Mon, 8 May 2023 11:01:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3] ns16550: enable memory decoding on MMIO-based PCI
 console card
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230505214810.406061-1-marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230505214810.406061-1-marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 05.05.2023 23:48, Marek Marczykowski-Górecki wrote:
> pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
> devices, add setting PCI_COMMAND_MEMORY for MMIO-based UART devices too.
> Note the MMIO-based devices in practice need a "pci" sub-option,
> otherwise a few parameters are not initialized (including bar_idx,
> reg_shift, reg_width etc). The "pci" is not supposed to be used with
> explicit BDF, so do not key setting PCI_COMMAND_MEMORY on explicit BDF
> being set. Contrary to the IO-based UART, pci_serial_early_init() will
> not attempt to set BAR0 address, even if user provided io_base manually
> - in most cases, those are with an offset and the current cmdline syntax
> doesn't allow expressing it. Due to this, enable PCI_COMMAND_MEMORY only
> if uart->bar is already populated. In similar spirit, this patch does
> not support setting BAR0 of the bridge.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
with ...

> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -272,6 +272,14 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
>  static void pci_serial_early_init(struct ns16550 *uart)
>  {
>  #ifdef NS16550_PCI
> +    if ( uart->bar && uart->io_base >= 0x10000)

... (nit) the missing blank inserted, which I'll be happy to do while
committing.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 09:06:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 09:06:21 +0000
Message-ID: <d8224847-ffc5-3faf-1bc5-6ad3d7966b26@suse.com>
Date: Mon, 8 May 2023 11:06:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v1] tools: convert bitfields to unsigned type
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <20230503150142.4987-1-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230503150142.4987-1-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------yLil7GVoUqhlZCx4zmNgOb4n"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------yLil7GVoUqhlZCx4zmNgOb4n
Content-Type: multipart/mixed; boundary="------------LOjdMT803timFkv0B3Ya03KN";
 protected-headers="v1"

--------------LOjdMT803timFkv0B3Ya03KN
Content-Type: multipart/mixed; boundary="------------cwzj5DxJhdEW0LCD3VU5Lqq2"

--------------cwzj5DxJhdEW0LCD3VU5Lqq2
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.05.23 17:01, Olaf Hering wrote:
> clang complains about the signed type:
> 
> implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Wsingle-bit-bitfield-constant-conversion]
> 
> The potential ABI change in libxenvchan is covered by the Xen version based SONAME.
> 
> The xenalyze change follows the existing pattern in that file.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   tools/include/libxenvchan.h | 6 +++---
>   tools/xentrace/xenalyze.c   | 2 +-
>   2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/include/libxenvchan.h b/tools/include/libxenvchan.h
> index 30cc73cf97..3d3b8aa8dd 100644
> --- a/tools/include/libxenvchan.h
> +++ b/tools/include/libxenvchan.h
> @@ -79,11 +79,11 @@ struct libxenvchan {
>   	xenevtchn_handle *event;
>   	uint32_t event_port;
>   	/* informative flags: are we acting as server? */
> -	int is_server:1;
> +	unsigned int is_server:1;
>   	/* true if server remains active when client closes (allows reconnection) */
> -	int server_persist:1;
> +	unsigned int server_persist:1;
>   	/* true if operations should block instead of returning 0 */
> -	int blocking:1;
> +	unsigned int blocking:1;
>   	/* communication rings */
>   	struct libxenvchan_ring read, write;
>   	/**
> diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
> index 12dcca9646..1b4a188aaa 100644
> --- a/tools/xentrace/xenalyze.c
> +++ b/tools/xentrace/xenalyze.c
> @@ -1377,7 +1377,7 @@ struct hvm_data {
>       tsc_t exit_tsc, arc_cycles, entry_tsc;
>       unsigned long long rip;
>       unsigned exit_reason, event_handler;
> -    int short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
> +    unsigned short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;

Please use "unsigned int" instead of a pure "unsigned".

With that you can add my:

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------cwzj5DxJhdEW0LCD3VU5Lqq2--

--------------LOjdMT803timFkv0B3Ya03KN--

--------------yLil7GVoUqhlZCx4zmNgOb4n--
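
As an aside, the clang diagnostic quoted in Olaf's patch is easy to reproduce: in a signed one-bit bit-field the only bit is the sign bit, so storing 1 reads back as -1 on the usual two's-complement implementations. A minimal sketch (the struct name is illustrative, not from the patch):

```c
struct vchan_flags {
    int          signed_bit:1;    /* the type the patch removes */
    unsigned int unsigned_bit:1;  /* the type the patch introduces */
};
```

Comparing such a field against 1 therefore never succeeds, which is what makes the signed variant an actual bug source rather than a mere style issue.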


From xen-devel-bounces@lists.xenproject.org Mon May 08 09:06:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 09:06:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531315.826904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvwpa-00080h-K0; Mon, 08 May 2023 09:06:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531315.826904; Mon, 08 May 2023 09:06:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvwpa-00080a-GU; Mon, 08 May 2023 09:06:54 +0000
Received: by outflank-mailman (input) for mailman id 531315;
 Mon, 08 May 2023 09:06:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvwpZ-0007yZ-EM
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 09:06:53 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20621.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::621])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id abc41864-ed7f-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 11:06:52 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7914.eurprd04.prod.outlook.com (2603:10a6:10:1f0::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 09:06:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 09:06:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abc41864-ed7f-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d3E2U2j9hj7ITOMupBLBdqJoAIfMfLXyZ1GdXZ1I6PfIEMj4QrqUwRiYj5P+C/CNOgZqi+n+GDuZJvyJqukYLTm594mx2BtcIdLstMVV4bwB8VQOBwL5sP/f60zE7sxaX61bY5AD+SFTvlGa15tivsSK0nDtI8reSUoAGVjjpB/gZWm5r2JdvG78lXMkFwFHZ6t1sK6yjQL7RsIpQUpgTl0gr72wV0NxsQOBp1pn/W/u/Y/2zqyO0h/jcju507/YtoQreWC5+vO21Kz11XQ8yGymlX/Bkn4qqTc+4t7b0+ATsY5kAeqYE/9Dr406d56g6P+2OCNdmjs6FqCPmQindQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3iR+3mk6Pn9vKssQCx3/HtfIvIl3z3pMrxq7NHaT7Tg=;
 b=cUHjSiuZCFpHw+3pn+wn3yQuJOitk/gdbnXMuT7S3AGyeLneqsCA0YST/j+y3pFTVOIV9uuRJwn6/WCB99f5VfbeyOL4cwzg166LHZH2sGgDP9XXEqbleEIGpQTlUPa/pmm77Wq05FRf+mYB48/go6O4wUOE4lDReMUkbwPhtJNXWjmEMi/8E+kzEjkHmocWQ/klKMS2JyBhTdT+rD18LxHm+n2BzNjS8ISeBP9YidDWd6Fn8RxoQsU1FaweQKjiMrMIuomYg3BZY0U7DN8mNAMHGgmDzxQ3SBI6/J4xjbIrWRgK8VnnCbnh5i6GACvRrZR8SUzka5D3gGEBWQa1eA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3iR+3mk6Pn9vKssQCx3/HtfIvIl3z3pMrxq7NHaT7Tg=;
 b=ZQm2dgMe29WxG+WOl6KPrdV3M3E50/Mu/yv2sNTB0wi2QK7fjpPK3TcUlyW+Ci3VeCbChXu3mH0Br13fKwV9HEqpUSVfbbWyaFsvxQJgQ0d+bG3M4d+Kgym5FtE6NjjFJ4775ifiBywPb5YkvsJ103dfYaCMSOQt2knHpoV1i9JaAzYAckrPD59DeGRv6OEdXKoUnDd6/abAvIM5R6ITBScNZL9C+77sQ9+y2nqsxsu27SJE5G8hS40TqPYmnOW7dIslTrunD3Ktp4iB1Gn2dTD89hdrZbJUurk2coo2vSEfR6N0rmkAR7Z3CJslK28XohG1Y5K82khIk+yt+TUgqQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <05b27f59-f906-79af-6c3c-6b6fae7cb1a9@suse.com>
Date: Mon, 8 May 2023 11:06:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 0/3] Add CpuidUserDis support
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0088.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7914:EE_
X-MS-Office365-Filtering-Correlation-Id: d0078734-529c-433b-9eb0-08db4fa38ee5
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d0078734-529c-433b-9eb0-08db4fa38ee5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 09:06:50.0121
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pYds6VyWT7hVBkNSi5LuZtTxkwGgtk/bJdyZ5DLIVCejPI/SK2lMGZ00vRzjt61mK6Ar688jv+U1JwtQXZyzTw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7914

On 05.05.2023 19:57, Alejandro Vallejo wrote:
> Nowadays AMD supports trapping the CPUID instruction from ring3 to ring0,

Since it's relevant for PV32: Their doc talks about CPL > 0, i.e. not just
ring 3. Therefore I wonder whether ...

> (CpuidUserDis)

... we shouldn't deviate from the PM and avoid the misleading use of "user"
in our internal naming.

Jan

> akin to Intel's "CPUID faulting". There is a difference in
> that the toggle bit is in a different MSR and the support bit is in CPUID
> itself rather than yet another MSR. This patch enables AMD hosts to use it
> when supported in order to provide correct CPUID contents to PV guests. Also
> allows HVM guests to use CpuidUserDis via emulated "CPUID faulting".
> 
> Patch 1 merely adds definitions to various places in CPUID and MSR
> 
> Patch 2 adds support for CpuidUserDis, hooking it in the probing path and
> the context switching path.
> 
> Patch 3 enables HVM guests to use CpuidUserDis as if it was CPUID faulting,
> saving an avoidable roundtrip through the hypervisor at fault handling.
> 
> Alejandro Vallejo (3):
>   x86: Add AMD's CpuidUserDis bit definitions
>   x86: Add support for CpuidUserDis
>   x86: Use CpuidUserDis if an AMD HVM guest toggles CPUID faulting
> 
>  tools/libs/light/libxl_cpuid.c              |  1 +
>  tools/misc/xen-cpuid.c                      |  2 +
>  xen/arch/x86/cpu/amd.c                      | 29 +++++++++++-
>  xen/arch/x86/cpu/common.c                   | 51 +++++++++++----------
>  xen/arch/x86/cpu/intel.c                    | 11 ++++-
>  xen/arch/x86/include/asm/amd.h              |  1 +
>  xen/arch/x86/include/asm/msr-index.h        |  1 +
>  xen/arch/x86/msr.c                          |  9 +++-
>  xen/include/public/arch-x86/cpufeatureset.h |  1 +
>  9 files changed, 79 insertions(+), 27 deletions(-)
> 



From xen-devel-bounces@lists.xenproject.org Mon May 08 09:18:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 09:18:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531324.826915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvx0C-0001Ff-OA; Mon, 08 May 2023 09:17:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531324.826915; Mon, 08 May 2023 09:17:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvx0C-0001FY-Ku; Mon, 08 May 2023 09:17:52 +0000
Received: by outflank-mailman (input) for mailman id 531324;
 Mon, 08 May 2023 09:17:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvx0A-0001FQ-LF
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 09:17:50 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20616.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 33555c9f-ed81-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 11:17:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8309.eurprd04.prod.outlook.com (2603:10a6:20b:3fe::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 09:17:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 09:17:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33555c9f-ed81-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Uw4yqoCB39j7A7aQvDRriBboaP6Fc9DRv/A81ICOFTEG33FsjwY+bum04HZTgQfZbNGICFoJSiElpk79vRqrTLQTS/MrFp4CleomOk7/QBqmQENgXdWOPDCItWZHfOBw4uWbNDjZ844s5LEf5WP9Jebj+0RCKM+JIaf/013vIFb/XbNolWpozrrgze80mfBP2f+3K5DAK+ZfC2kGWcBP15Dj+mjlVzvrY9MHz2GMY9y3VAmzi764v9ZRw+Zy9WxqB0kZ3tWyXiFkB9LGYCAKT57A1hVKNf9OeMtpmaQDhKU4R2fprUnp7hDvkwEWIMov0Y/RL8IgjrbqequbDyd2Ow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2u83bdCLKqnnbdakJJyL+ltqn29nF2/1TNG5EbGKPzM=;
 b=c/TRlfR0otQ4K6XztKJwXeA3zj3dGQ/IFGFyBnuQfqMme8Jc7AhhY0PTy/qeXW1mWy9t7wg2fngtAsUE3X8cKTIQ6Ey9w9tjQUgNkrBc6Ln0jpSwRx9HcY6B0WyQIZQZEKf9MQwHdbRE60XxOgqPTB3Fy834APz9tfQQdJo4ZsafHJA3gacVZoau+inWtcc6fJmfWU1rfV5OqOpHfn9jWkoKS9kJVF/mOaGSF1EQhd64fxZce58+L6VUcH63fGhxwb0cgL/M5D4cB0Zs0CrmrTyRqnfd5tho65E6nNwVO5eOnCathJamqiC/KD/HwBfVIFDw9AV9xrRud5/HONz9xQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2u83bdCLKqnnbdakJJyL+ltqn29nF2/1TNG5EbGKPzM=;
 b=P1QFS6PuDRoEPaOCUUI503OByZ3hqeXhdbmjh5TeKRiJ83+IVeB1SwP7hK0zYY2nW/xXnfg0CSfr/rAwbsXB8PocG1I1SQuY1hSP1Wmc1/QV2rnKpUy3ldjrfpckY0b4KUa1Xz2A5Qlbq2oL5Cuoo16jrXEqRI3++UASNj2EiVp90XOUSPrfewOocW+v4MDzgtlpc+0rT2LHHpEi9kA2RprsVkPjbi1ShCw+FwjU5QZ2HGfMQCOZTp5XuFDs3LEKxI9KPpnry/gaD/Qhb8Ib01I16vBiV6hoCWaw2OEogKMykzA6LAqxdx+FP+zOHGKZkypY00n8GRrzqem2uq9juA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <166be666-7ba2-c2c8-5b46-19d49aa90f93@suse.com>
Date: Mon, 8 May 2023 11:17:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/3] x86: Add support for CpuidUserDis
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
 <20230505175705.18098-3-alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230505175705.18098-3-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS4P192CA0013.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:20b:5da::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8309:EE_
X-MS-Office365-Filtering-Correlation-Id: 3a7c03f1-26b2-4ef3-3eb5-08db4fa5169d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3a7c03f1-26b2-4ef3-3eb5-08db4fa5169d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 09:17:47.2832
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RVvGFbPiKc30cn9Niek7Ioxq2VIB7JJzbHSgHV0KbfsVvlXm2VXZ8crOokPgIbX8neDW8UZ5tg3jkIlhzbPgZg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8309

On 05.05.2023 19:57, Alejandro Vallejo wrote:
> Includes a refactor to move vendor-specific probes to vendor-specific
> files.

I wonder whether the refactoring parts wouldn't better be split off.

> @@ -363,6 +375,21 @@ static void __init noinline amd_init_levelling(void)
>  		ctxt_switch_masking = amd_ctxt_switch_masking;
>  }
>  
> +void amd_set_cpuid_user_dis(bool enable)
> +{
> +	const uint64_t msr_addr = MSR_K8_HWCR;

Nit: No MSR index is ever a 64-bit quantity. I'd recommend using MSR_K8_HWCR
directly in the two accesses below anyway, but otherwise the variable wants
to be "const unsigned int".

> +	const uint64_t bit = K8_HWCR_CPUID_USER_DIS;
> +	uint64_t val;
> +
> +	rdmsrl(msr_addr, val);
> +
> +	if (!!(val & bit) == enable)
> +		return;
> +
> +	val ^= bit;
> +	wrmsrl(msr_addr, val);
> +}
>[...]
> --- a/xen/arch/x86/cpu/intel.c
> +++ b/xen/arch/x86/cpu/intel.c
> @@ -226,8 +226,17 @@ static void cf_check intel_ctxt_switch_masking(const struct vcpu *next)
>   */
>  static void __init noinline intel_init_levelling(void)
>  {
> -	if (probe_cpuid_faulting())
> +	/* Intel Fam0f is old enough that probing for CPUID faulting support

Nit: Style (/* on its own line).

> +	 * introduces spurious #GP(0) when the appropriate MSRs are read,
> +	 * so skip it altogether. In the case where Xen is virtualized these
> +	 * MSRs may be emulated though, so we allow it in that case.
> +	 */
> +	if ((cpu_has_hypervisor || boot_cpu_data.x86 !=0xf) &&

Nit: Style (blanks around binary operators). I'd also suggest swapping both
sides of the ||, to have the commonly true case first.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 09:38:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 09:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531329.826925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvxKG-0003lW-El; Mon, 08 May 2023 09:38:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531329.826925; Mon, 08 May 2023 09:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvxKG-0003lP-Ae; Mon, 08 May 2023 09:38:36 +0000
Received: by outflank-mailman (input) for mailman id 531329;
 Mon, 08 May 2023 09:38:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvxKF-0003lF-0Y; Mon, 08 May 2023 09:38:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvxKE-0002eW-Tr; Mon, 08 May 2023 09:38:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pvxKE-0005OS-FU; Mon, 08 May 2023 09:38:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pvxKE-0003gx-F3; Mon, 08 May 2023 09:38:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=t0QXuejOLMFhkVDdq4K2uIBdkcnUddrR9SYGL3ZFuK4=; b=qnFCcaLspj3D3IEI0FbmZPzGbX
	/QQBBSUmDaeSBFy9xrlILXVLdeTXuU6962QpmJawvhrjovAeB1Xp9LldpuLEtttjpvCXIv1H9GSEr
	lQgtb3heKkReKUM6JIjOTTCwnCfxol9sByjQI8blc40Ourzo/s2tPJ/pUG6Fb7LbEW20=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180572-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180572: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-i386-examine-uefi:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
X-Osstest-Versions-That:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 May 2023 09:38:34 +0000

flight 180572 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180572/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt       7 xen-install      fail in 180566 pass in 180572
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 18 guest-localmigrate/x10 fail in 180566 pass in 180572
 test-amd64-i386-examine-uefi  6 xen-install                fail pass in 180566
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180566

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 180566 like 180547
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host         fail  like 180555
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180566
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180566
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180566
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180566
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180566
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180566
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180566
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180566
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180566
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180566
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180566
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180566
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f
baseline version:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f

Last test of basis   180572  2023-05-08 01:53:35 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 08 09:54:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 09:54:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531339.826935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvxZP-0006DN-Td; Mon, 08 May 2023 09:54:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531339.826935; Mon, 08 May 2023 09:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvxZP-0006DG-Ph; Mon, 08 May 2023 09:54:15 +0000
Received: by outflank-mailman (input) for mailman id 531339;
 Mon, 08 May 2023 09:54:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvxZP-0006DA-6R
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 09:54:15 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2057.outbound.protection.outlook.com [40.107.7.57])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4960d106-ed86-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 11:54:13 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9243.eurprd04.prod.outlook.com (2603:10a6:20b:4e2::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 09:53:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 09:53:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4960d106-ed86-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UZmxu9xAj3O930y7bTTSf9fNCpB6pbgCA6bfKxSuHj8SRY2VCwlZSVo7R/HKGII6hzB7Yg2+kCBf70Ji/3TUDbDnIc/xcEjWLUz5XbICY2D0m0whiLx+1sVTFE2UQOHFCe9unMvH1HpcK+9GutZJZ4QLc/MckPI9iNCuO6QJvrgfOIshSasHLOCHOIUThFu9l0P3/PD9X8Br6e5rch4HvCNJaV9TiCdNPFWTKBc95XsYHgOCX6lYc/A4irRMfYO4d1/eJaEgtgfgk1vtCF799lzsFopdHGKncoCy5xZ1CDd4WKWGP0ib8z0g65KiFxGC3mbf/YF06a3lqrfPAoQhuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AIrHFTZ2UvRvRf56oE/0zxFn6W4wqe81S1kRnPkSL84=;
 b=nnT6kWS9QYpWtrGRT/9nSZ6O012Va6a2FwqnBC3uki4FIOAX+cbk5ZyA7NTfKWMiFVQs6qeD1YlTNhkEG0bV0Wl+3dkv6Sp0T8cUFj+NgsFBMT6p3TkV9ztUQF/C+SVtjoAzGjBE1j6G9piSylz3l1TgtZmoG4gEqV/T8Vcu+A/RlVUHaqCL0TOWZruUOpz1wXt2F1eQL0rDAXW3+CGPnaOni8SLGnGVj5447I4+p3aIpMuBsvwL3twmPz/PMsMt8tDOWDkno6juWmQSSI7YJQ54PP1Pf/sl6/hI/K7JNuhhxLSwA2N6hq3x3Jz0t/joeeyVEinjjFmYhLXonOy2pw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AIrHFTZ2UvRvRf56oE/0zxFn6W4wqe81S1kRnPkSL84=;
 b=pfgi1dLsdG6gfwX9Kp981dS5bK+00U5NHAaTVQ+8wohObcRE+nOjzvQ6YoLhhKWmVBhJqOWsHnfc9aYeJfCG05ujKkykbS2ASGHxnLSCpCO/gfTjXbNiNf2a3AE8ja+YezrU5ERxkjaNvJ0UMfw7SyxnKjL00jux0vtZ5bgG963OLIgGldmx2Un5S1QfBR1xETaAg5A/xAX6IZu6JqcCySR0MypRD7I4+1xFEIc5B37FgKoxBqRt9knK4hAmc+DoSxFIlPySUbQQzifvltraV6zlvTxhA4SNaZdGoHVMUF+AyrJ2yd75i5orpk8oYxl2O74atRz+XiO6/maXJyrzzQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bdd7aa43-3fb7-0a26-4041-b476b285d14e@suse.com>
Date: Mon, 8 May 2023 11:53:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 06/14 RESEND] xen/x86: Tweak PDC bits when using HWP
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-7-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501193034.88575-7-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM0PR02CA0003.eurprd02.prod.outlook.com
 (2603:10a6:208:3e::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9243:EE_
X-MS-Office365-Filtering-Correlation-Id: 28c32217-2317-4bea-f561-08db4faa1c5c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nrQJEyU3LH2U7XScrQlHAohBOGh01oZfNhOdO9a/nM4V5XJfPKSbVpKun+Hu3sv529a2Jrash8nhEUeH5yglqurOT6Rw5GND7VUCAsa2mgtjDIq40BdD9uWsHbwT087LZii9osUvGdgNhO4l5U8JxwmvnBKIYdCile3Y195xDwhtTQGcElYXyQhC77mwWEk60ecOCXcFhxDiFZCNbfHfMnq5FqRnXrQKUk4ijhTbmmgx7UVYaFHPg3nh1XsRcSKSZmJnthfPHrte3N9Pu8XC5x7MXpLfpNLuXRkdJTKlIGYcc6nKWUcOK+8WIU0xP45lcLKKTQBvV7F1wDAjcp33AZfn+GO63mblEQd/IABH4kERSrGpZit/BbF4biUcy/MVPvoW59epyZmIiPVN/stbnqG5fSUYtD9NHmaByIv58ZymOJ4DW1YLuzQgXzOFzOVfftOSEEVQdk4FMh4ofKvJnb7ghcomugk4/xNkWL2b2l0djwPSDCUHOMr1dClaXdyK/1NEtewaoKwMYGBZtaSFxyBgLezKqyudM2+5qvrbtTPJ1EUb3L5JuK0lP6SQxhp5qbdMmPX4YKW9oUEkvsnxeRMCnnOcKE/dW8WVEn5g5yRphR8l4EXReYcb3xGuX7beJV81AWjJvUHYzj5QGFKWsw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(39860400002)(366004)(136003)(396003)(376002)(451199021)(86362001)(31696002)(66556008)(36756003)(54906003)(316002)(66946007)(66476007)(6916009)(4326008)(6486002)(478600001)(8936002)(5660300002)(8676002)(2906002)(4744005)(41300700001)(38100700002)(186003)(2616005)(6512007)(6506007)(26005)(53546011)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NGp6andNcE1YYjgyNk54ZnpaSVZsMWFnaDlWWFluSFNrUVJxOFIvdVFYNGtw?=
 =?utf-8?B?aHJ6RlljcENha2hmV3I5bmFmSmVOMnZvV0VqY3NNTE53MTVieFlzbVNFNWdM?=
 =?utf-8?B?WVo0YVFSSlpXcUMrNWR2WWZ1djBrNHRGYXhsYTdZQnhhWE05WE5yYnpGeVdQ?=
 =?utf-8?B?ZW52TGN6Mk1qQUc0eXJmc3M1b0lTK3BLU0d0UW95Rm1LYWlieVozVFhyVHR1?=
 =?utf-8?B?eVl4YWg1RUJMSXVqQXo2QXpiZG9pbGRwbjVhQ214d0dUZ2QzZlpoSFFYS0Zy?=
 =?utf-8?B?dDcrQ1Y5blBzZGg0WURhc2RvN0JTdDJ4WmViQlhLRk01ZDhDVGtJNjZDWFNi?=
 =?utf-8?B?MUp0Y0NSOTBFL0VLYnZjNkFUcXI0QXBKY1VnR1pCUDhEU1NqY3d3d3ZGYWxI?=
 =?utf-8?B?OEdzckZCaEpjQktkRmdHaUJDNnBaOU1EZVRiZ3Avd01qR1dHVC9TQmc1bWxG?=
 =?utf-8?B?N2RLYTQ4VUt0NFdIZDlrWlk0VE1OalRZSTVCWno2YTQ5NEwwY24wdVdtZjI4?=
 =?utf-8?B?U0VVMkZxVUk3YWxNckpiVzg3bUdCN1BMQ0FkSUFpckZueW1PY2lwb25wZnZi?=
 =?utf-8?B?cDRnWWdpcGsyR1FHZVFXREk0azlsaG1FQ1c0SXVYR2liaDV6UGZ5cHhqa0M5?=
 =?utf-8?B?VllIQWpoUVVwSkdXVnlCSlVEcjRPZE9KMWN2T0hzOXUwakpaMG03QnhnYmJB?=
 =?utf-8?B?U2cxS2hWaWtCKzdHMXZhTGt6RzVRbjZJT0JOK0E0VURnWmcvMHpWc3d6blc2?=
 =?utf-8?B?UUlvR1pZczM3K0NWSnNia3dEdkhxMk1tSHVWY1JZVllWT2ZMMTlveVdOYXZ2?=
 =?utf-8?B?WUVVakNGalFOYWpSZjR3YVpRdDNmVGVPOGVNTWdidHJtbG54cWg5emVVM2d3?=
 =?utf-8?B?R0QrZmdESFJXY1hZUDhEeUxNcDVmWFhlMkQyTmU0KzBIeThhRXpJRXkwZ3hq?=
 =?utf-8?B?UzFMSHI3UjRvUmQ0MnhNUytuT3ZkajVCOFE0aTJxR2tldzdtRDlRaGNKSDVH?=
 =?utf-8?B?czFBMEJ1c3ZNUVpxWEhuU01Wb3hVVDBQK3Raakk5NjRYaUpKa0VYaExLbEh3?=
 =?utf-8?B?c2hrY01RdUFHTUV4OHJLQTRUN0N5bXg3WC92d2JsVkJ0REFVcWFEQWRhTXRD?=
 =?utf-8?B?dGEzbk8xR21NQUFhbkU4Ti9QeW5hV2t2UDhESTVYbDJOM3BoSUplbTA3ckdV?=
 =?utf-8?B?SEJTV3Z2aVNPWStsbVJKVFJPRU1JU0RzRWYxVksxTHlGWGQrRis4YXJqQnFa?=
 =?utf-8?B?UHZGUzI4cWRiaTJicjVkQmJqSjRZTno4M3d4QUdpdGxDcjJXWDJkaUZLSGxM?=
 =?utf-8?B?aG1KZGovYTFvd05BRk56eDB4M1lobHM0cjR5R2E3OW5sRzluSVJFTmh5OVEy?=
 =?utf-8?B?YTAxOWNIU2ZobnFGb2NNQ2dJUVd0aTlVKzR4Ym9ZNTd0bXJGMnE3NnFsWmN5?=
 =?utf-8?B?V24wNTYrWU9PT0hJTGdnT2FhRkhubWJYaEVINFR1TWVSb01DbTJ6eEc0dTVO?=
 =?utf-8?B?MGYzbkZZQkhXbUF1ajJCUlFUL3VDekFBNVpXeUFvUFVsb1EwckFsUmRBNjJr?=
 =?utf-8?B?WHBkeEJVSlZOZnZIR01zRkRJdU5tdGkyM3lTYzVCQW9ONVBGQXdtYU1acWQ4?=
 =?utf-8?B?ZU5wSE5GVE9pbUJUWXhIUExBTzNaa0Y3MjVyTHU3REprbHN3K0VHaGt4dEkr?=
 =?utf-8?B?T0dOSFFzS293RWY5ZzRpVStvQ2JiL0ZNVUU3c0J0cGtWblE3NHZadUFobDhu?=
 =?utf-8?B?NWZPbVZIVnh0WS85NjBhcXdXZTMvZzM2WGNOQTdOaGMvNGNoNTB6R210UEpD?=
 =?utf-8?B?Mlp6R1VKSC9LS0JxMTJBV0ZhQUtYWHJHNHJsbTg3YjI1QUJlR2swbkVGa2dJ?=
 =?utf-8?B?YzB0Ykdxak4zWnBqbklRV2JZMEZyaWtJdFF0Mk5sVS9SV0lQNTRVOHpUcytl?=
 =?utf-8?B?SEVnV3lqaFdjMlRTc1RMMmwrMkNkNStpWWtjRkVRc2Vzd2sreGxqOW1Pb3ZR?=
 =?utf-8?B?V0F1Uk85UHdTSDhYenY0SUh5NWcyUTM0T2xScDBkckViRzFlaXRwR0ZrN1Zy?=
 =?utf-8?B?eVBSWFYybU5nQmZsWnI5dXpRUjBpeDRWdGlHdjJUald2eVdoSTRleGhsaHIw?=
 =?utf-8?Q?jUJW2pru5MCmYtWOPWgf0agKY?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 28c32217-2317-4bea-f561-08db4faa1c5c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 09:53:44.3971
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ckl610/XuD9ncqB9myUUDZ/LSEGyNzQvHrh57dPqU4dcaD+eX2W2rRo0TzejGBPr2p4nrg9rEc0D6p151iiKqg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9243

On 01.05.2023 21:30, Jason Andryuk wrote:
> --- a/xen/arch/x86/acpi/cpufreq/hwp.c
> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> @@ -13,6 +13,8 @@
>  #include <asm/msr.h>
>  #include <acpi/cpufreq/cpufreq.h>
>  
> +static bool hwp_in_use;

__ro_after_init again, please.

> --- a/xen/include/acpi/pdc_intel.h
> +++ b/xen/include/acpi/pdc_intel.h
> @@ -17,6 +17,7 @@
>  #define ACPI_PDC_C_C1_FFH		(0x0100)
>  #define ACPI_PDC_C_C2C3_FFH		(0x0200)
>  #define ACPI_PDC_SMP_P_HWCOORD		(0x0800)
> +#define ACPI_PDC_CPPC_NTV_INT		(0x1000)

I can probably live with NTV (albeit I'd prefer NATIVE), but INT is too
ambiguous for my taste: Can at least that become INTR, please?

With at least the minimal adjustments
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 10:01:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 10:01:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531344.826945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvxfy-0007lM-Jy; Mon, 08 May 2023 10:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531344.826945; Mon, 08 May 2023 10:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvxfy-0007lF-H7; Mon, 08 May 2023 10:01:02 +0000
Received: by outflank-mailman (input) for mailman id 531344;
 Mon, 08 May 2023 10:01:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fEaP=A5=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pvxfx-0007l9-7i
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 10:01:01 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.160]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3be2076e-ed87-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 12:01:00 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz48A0qzp8
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 May 2023 12:00:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3be2076e-ed87-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683540053; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=C8RfRDt9q8ZIHNKnF/GJYgSvQ3FPjV6nUWGp+BeLTCzZMWxLTD6uEKsvGxVwyzYr//
    ll0H1DEmJQlaR1s/HK4j5XRQOwuGgN0C1JwKX4wEbrdUTk+u2DsbITSCxalUh1G9AFzi
    LKVzCmJP2ZyUbNnCOG0W+aKBXCyQdRlTs7zh+ZzwF5mZ/Bw4hhfA5fMmY6hRu2bEvW4G
    Flo9pp1jvY8x9faf45+9A8mPpSLYaOjOwpnD2gwW8r0O+VLbx5pyEZ+buRiFbuL8+j7f
    cSWXrEWARbJRW8n3IEn8wj2fr9EZDiMmph5ZNxL9SjjCHm6l89vrl4jqqCv278SWd9Rq
    8daQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683540053;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=aZByrSjFH0yqleLqhT4Rj8ItPXbhzHGznL0O1xXh5es=;
    b=DmgxL+VoKlAXTEoz6y2lQYGA4g2J3O7ktfXsHjrrBNWz68WB/1tGxSIWZlhw12OZZ9
    uglZvxX7rxSVGJug9Lc6dukDqeNuozpd4JliNRcrEEr/i2JU9gL7jQdtHcARnNL2oc0m
    929hKAzm6jiDypqnmJGk6CHuoppzMwa74DAZXxRTblPEOlToa+918CTBUFFFcvcGCFvT
    xzBMoYl7CuvJvm5RhSCfJ3mSZ+42F8pJ+FIQyIjhJM47Grani1eM9J+sIZTW8JWrorAF
    8wGppusiVfhs8il0XTEKzP2u14WkI67ravE4c9VWUi2yg1MnMmPtL6cmPZlFAq1JKBsP
    J2/A==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683540053;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=aZByrSjFH0yqleLqhT4Rj8ItPXbhzHGznL0O1xXh5es=;
    b=iNVj+Vwpk0J+vuDQ+0IIeOQC5VRWdFjoW+FN3WutVjwuKW7E/Du7RlWdvpui+41pt0
    J44oxXQ3O60DJ3KT8I9ZLjxqHC7Vghate9aWEE9WDvKaP5UUphAXIymZ0N/3he49c9y7
    MYefUOvfrwytlLokcJ2k3j8L/ZblRSmW+ineTY5DEsTHqF78apBSvN8CmLQR3B4cY2jK
    Aj9VT6Ujg1DOTyUg33a68DxwE/SfwXQrTDh5xJX/npvU193iat2UpSKm50ukCE052Jtx
    1R2Ubu3QMDOjYe12mUeYXASbIfyDyv9PfvrrxbrKu+ljEv8VK3vd0wrTptGRKIObN4Yh
    rlIw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683540053;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=aZByrSjFH0yqleLqhT4Rj8ItPXbhzHGznL0O1xXh5es=;
    b=ZJi7MYJMkFleSzwr4FB01GFsS2M/NnmtxNBZ/azVkqNqUxO2xvmZLu8VJv61PJ1zDr
    4xW6elgnBS4ERlHfnnBg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4ARWIYxzstZKeVom+bauo0LKSCjuo5iX5xLikmg=="
Date: Mon, 8 May 2023 12:00:38 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v1] tools: convert bitfields to unsigned type
Message-ID: <20230508120038.74246111.olaf@aepfle.de>
In-Reply-To: <d8224847-ffc5-3faf-1bc5-6ad3d7966b26@suse.com>
References: <20230503150142.4987-1-olaf@aepfle.de>
	<d8224847-ffc5-3faf-1bc5-6ad3d7966b26@suse.com>
X-Mailer: Claws Mail 20230504T161344.b05adb60 has a software problem, nothing to be done about it.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/glmW5p0I8X7ORaw7EpscoEJ";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/glmW5p0I8X7ORaw7EpscoEJ
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Mon, 8 May 2023 11:06:11 +0200 Juergen Gross <jgross@suse.com>:

> > -    int short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
> > +    unsigned short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
> Please use "unsigned int" instead of a pure "unsigned".

The entire file uses just 'unsigned' for bitfields.

Olaf

--Sig_/glmW5p0I8X7ORaw7EpscoEJ
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRYyEYACgkQ86SN7mm1
DoA6uw//RLr3DtYxH/Z1tht+kPZQeqyhjfeErQeBDI6NxJ1V9gldO7GhcKBQ72nv
iFf10IdLrcPgRpNlpkhbglMSk6og2sBvN0gngWRfGlR8Bf5223mZpECyod9S93iR
EcXb6ZzA/KDxzynDjddFr7MVBYY77bp6gHs5Bs7PjCrtSaV7ziSHvrn4lZYrbhjj
QDlCWcWKOVPlsNPDkr4hCOcHJ865L/xi6l9Uyz/KgXgAsO078/RYXjVA03Cc0M8Y
kR7vOijzt6F1/4oYwrcMwK5Hsf5/H9aSwTXvxfmfGMTlDvLPsipoHtHdLMY5J9nA
LWLpiHj7zBtkBio2tEmLOa0tKJ2YQzjEVifz/gZgKiGFucMu14QX62jewRw/ssTC
s4EdGAd3st/V03R84p7Yz0a+EgL3fLpzPqC1WsM4LhP0gWS/X8vysOTmk499jXE0
HvFwMCVLgdAApvcqbCdGeyLhskMslUeibQoKJf+XJ+PeAQCoUjYeiMxMz8j2OgGN
7yVEzxX1RTZzRztaSA4nDqOrh68Wc4nq6gh4/yia5IOqdPOAbTFNhFbvsCoRTTs7
taV3iwwS7rz0lSj/QRfekPfKS6EuU7f9gO5xykJiUQJ6YCiMs1D9e/eXeVxveNIr
K9hurDOp2H2tOskQUzFLAyu1WLy3pkxVPdcGJncQND/fEEoD6ag=
=KWgt
-----END PGP SIGNATURE-----

--Sig_/glmW5p0I8X7ORaw7EpscoEJ--


From xen-devel-bounces@lists.xenproject.org Mon May 08 10:26:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 10:26:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531357.826955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvy4M-0001sD-Eq; Mon, 08 May 2023 10:26:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531357.826955; Mon, 08 May 2023 10:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvy4M-0001s6-Be; Mon, 08 May 2023 10:26:14 +0000
Received: by outflank-mailman (input) for mailman id 531357;
 Mon, 08 May 2023 10:26:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvy4K-0001s0-K8
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 10:26:12 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20630.outbound.protection.outlook.com
 [2a01:111:f400:7d00::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be847a7d-ed8a-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 12:26:08 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9516.eurprd04.prod.outlook.com (2603:10a6:150:29::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.31; Mon, 8 May
 2023 10:26:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 10:26:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be847a7d-ed8a-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WqX4YNv1ss9a71FGly4Tj5FKhsgvh4kZjqvBbp14dNKSiKXYvr1YVNpBShTtJ8JkthHLxk1w+eXqCoIjVHKbszkcmOwDSjEEk+XiBEQto5gyIKd8diZ8Oksis1eLLIgSpvAMoRX0FSBd2LrVPaJquFQAq/7tc+V26kLt8PDr9cNXIyxdTkQVxDsN/VuWdi8H6PbEglw8TeCpWyNVTDc94azuPz3zvs1oOdXiFr3GmF8MO/p+VO7l4s7sudg0r+AvYGEPfaS5dGlYmqRDgwKSIpdPbUPnkKC3e31pIGMKVkzM6GZvjBWnJPjHDkSm1KFWoari7DplIRre2rshROn+eA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=VEr9T4xbk3VHvgP500GYHqgmh3Go+O1E/iTwnb/XR6M=;
 b=g3xpMVN0r43uX9r8L085gX3aaIayOjeLWQPLH7FB/ZqltyRQbDcta9pK9dVfz306LTMSVkhZbnuFbENB9CGOYXJ8ym2llddc1Pv0u5zB1HUZehnhrA+R6j227/PAV060bGdSGJd3GmO2FpMfR0M82YvKk9WhAMw1mtT7WGX80z4sDr48y4ThRwCo8lTBPR4L3SNMTvewkkKJym8B4m0StmuEc7zPoQmes4DMJTlx/s9GOv0EAWauK5fW9iP6BmZe4V8ylJpQanu3qsWMNS1/p7dEjYFU1V6ueDIR9LSuwFTwydg4ecihNb79ruFBQ2/ikQkmGxi1SSQOG1aQrHn6iQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VEr9T4xbk3VHvgP500GYHqgmh3Go+O1E/iTwnb/XR6M=;
 b=MhekFcRTWJrelCU7gvQvWLWM1Tq66BklVJGTRlVhWvK9YqxpxgMBN9RA59FCXLX3ccr5sFgJPJFJCAgneLRAuF+fHsSpXrLmLAUg2exzMeQ12NKjH6pSrY/kafp6dRFBQiO8hGXQ8VRGAk83ZZfgyeecQ6+fmrYX1qlFVBTWL0hwCUVyMq9XRgBUvlvEi88TwQlO8VehehH68rweoDfEDeqhubXSsUiAVb/USbvZslOWaP23//RLU7GzwwenpgYWQDy7MC/KLsHxbGSHOm9QKrUVZsN7oLL0E2GDuRkFx9gDlHF45hWXy8nbxEbRkVbDWHpvCWukEtaitlDhr678GA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7db38688-1233-bc16-03f3-7afdc3394054@suse.com>
Date: Mon, 8 May 2023 12:25:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to
 userspace
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-8-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501193034.88575-8-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM3PR05CA0100.eurprd05.prod.outlook.com
 (2603:10a6:207:1::26) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GV1PR04MB9516:EE_
X-MS-Office365-Filtering-Correlation-Id: 05a1de6c-5a8c-4282-642b-08db4fae9ff5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	yhzfp7HfkF/iFuuLkRLLJbbXB7YfO2p4rdofOQsVPGMy6sbOU7p1sK+Rw2akUl1up9jHKHETy29DlTrz+PE5BSfPaf9SdtoQkKUycg3NdcnHc409OqkvBEETy21YKnQfYf/ykCNEwWsDgE70WzsRHnDCNTbtstpxpgPQWJ3n1zzUwTMLDLcB8arRw2v7/qBlT/Zddth/+li6VCHO9p7tS2XC7SbIX825uoXN/Dn1hd67Tuaio/aDuFk2BT9vkMR/SylvhItowPUNwIP8oUeeTZk7kDwOVKMLcmSaSSYRB/Tbc3XGiGJWptfJ+GuyxT0U15WC3Z84EbCT5rDBbj1LzDlEu/5w5vv13qeVuxgh9sxKod1quY0l6YlOm23edBhrN09wCup7xQcGeRaRfapiEMjyaMniM7+YUauhNAj4wmNS20y1JMy8SxyQ/JNJca+0KpeKu9YTweSWF8a19DTpBhvECKgYKtrEGA0wprZ1wtaDca+z2zzVzc3+84mebcpxlBTN2rqijBw8L8kaFVMgOYQbgVrP+7Zotzi2639G9jIUH+552NxNIdf7xyrCMYMtSaigIqXzWCgUmR6iWZ1L77bV1EGHSPF4spMfzsHbeZQePtFe3xqwq3NfWFDcigfertCI0fUNWILM/9BUZVCl8w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(136003)(396003)(39860400002)(376002)(346002)(451199021)(83380400001)(2906002)(2616005)(186003)(36756003)(38100700002)(86362001)(31696002)(5660300002)(8936002)(8676002)(66556008)(66476007)(66946007)(6916009)(4326008)(6666004)(41300700001)(6486002)(316002)(478600001)(54906003)(31686004)(26005)(6512007)(6506007)(53546011)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 05a1de6c-5a8c-4282-642b-08db4fae9ff5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 10:26:03.1200
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5kxID9A2ZrzYjgE2jq9Rz0VEIYsyk02/cuKViyHpxSnPdvbTUA6Dj96PP3n/7U5aT778lB0ydR9zBjtm4bV8PA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR04MB9516

On 01.05.2023 21:30, Jason Andryuk wrote:
> Extend xen_get_cpufreq_para to return hwp parameters.  These match the
> hardware rather closely.
> 
> We need the features bitmask to indicate fields supported by the actual
> hardware.
> 
> The use of uint8_t parameters matches the hardware size.  uint32_t
> entries grow the sysctl_t past the build assertion in setup.c.  The
> uint8_t ranges are supported across multiple generations, so hopefully
> they won't change.

Still it feels a little odd for values to be this narrow. Aiui the
scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
used by HWP. So you could widen the union in struct
xen_get_cpufreq_para (in a binary but not necessarily source compatible
manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
placed scaling_cur_freq could be included as well ...

> --- a/xen/arch/x86/acpi/cpufreq/hwp.c
> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> @@ -506,6 +506,31 @@ static const struct cpufreq_driver __initconstrel hwp_cpufreq_driver =
>      .update = hwp_cpufreq_update,
>  };
>  
> +int get_hwp_para(const struct cpufreq_policy *policy,

While I don't really mind a policy being passed into here, ...

> +                 struct xen_hwp_para *hwp_para)
> +{
> +    unsigned int cpu = policy->cpu;

... this is its only use afaics, and hence the caller could as well pass
in just a CPU number?

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -292,6 +292,31 @@ struct xen_ondemand {
>      uint32_t up_threshold;
>  };
>  
> +struct xen_hwp_para {
> +    /*
> +     * bits 6:0   - 7bit mantissa
> +     * bits 9:7   - 3bit base-10 exponent
> > +     * bits 15:10 - Unused - must be 0
> +     */
> +#define HWP_ACT_WINDOW_MANTISSA_MASK  0x7f
> +#define HWP_ACT_WINDOW_EXPONENT_MASK  0x7
> +#define HWP_ACT_WINDOW_EXPONENT_SHIFT 7
> +    uint16_t activity_window;
> +    /* energy_perf range 0-255 if 1. Otherwise 0-15 */
> +#define XEN_SYSCTL_HWP_FEAT_ENERGY_PERF (1 << 0)
> +    /* activity_window supported if 1 */
> +#define XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  (1 << 1)
> +    uint8_t features; /* bit flags for features */
> +    uint8_t lowest;
> +    uint8_t most_efficient;
> +    uint8_t guaranteed;
> +    uint8_t highest;
> +    uint8_t minimum;
> +    uint8_t maximum;
> +    uint8_t desired;
> +    uint8_t energy_perf;

These fields could do with some more commentary. To be honest I had
trouble figuring (from the SDM) what exact meaning specific numeric
values have. Readers of this header should at the very least be told
where they can turn to in order to understand what these fields
communicate. (FTAOD this could be section names, but please not
section numbers. The latter are fine to use in a discussion, but
they're changing too frequently to make them useful in code
comments.)

> +};

Also, if you decide to stick to uint8_t, then the trailing padding
field (another uint8_t) wants making explicit. I'm on the edge
whether to ask to also check the field: Right here the struct is
"get only", and peeking ahead you look to be introducing a separate
sub-op for "set". Perhaps if you added /* OUT */ at the top of the
new struct? (But if you don't check the field for being zero, then
you'll want to set it to zero for forward compatibility.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 10:44:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 10:44:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531364.826966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvyLa-0004Hy-0X; Mon, 08 May 2023 10:44:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531364.826966; Mon, 08 May 2023 10:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvyLZ-0004Hr-Rl; Mon, 08 May 2023 10:44:01 +0000
Received: by outflank-mailman (input) for mailman id 531364;
 Mon, 08 May 2023 10:44:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvyLY-0004Hi-Fj
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 10:44:00 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2052.outbound.protection.outlook.com [40.107.13.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3b65f0c1-ed8d-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 12:43:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8972.eurprd04.prod.outlook.com (2603:10a6:20b:40b::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 10:43:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 10:43:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b65f0c1-ed8d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hQK67h19T/9ze0UJVawzQDmmgiW3nt5uAK9TJ8VtSz6qZslE6dhaCDY291n2hSteeizN5bxekdTt/1p/LFpcOfbVgt4cAJSxVdKhFP3P6UJbBIebQh2uvSUlnoHN423QmECS/L6+w8KK9NY/nglB5afMTSjKHMoS/dd/lWr7aZmQO96mG7LfMMuqu8tVXxlH7q7qaDIu7f3fCg2RHifs4nukyMEKNKKWoBtWxJ1nGU1egPCBt605qVaIbgszrD9dbTRCKqst1UYKOB76iw7r4yHuA8hr6llFQQ0Snei2govYMl6qN2Wbl+8yv0YeDsbAboT4ZB8kLjlNyQSJGPSgIw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+Vnj8BGxFdOpi+mDnv+mAqFum0KzLAlOAdeTOWLMjIs=;
 b=B0OgsHvxVQR3kafYFbv1tVDuEUFk+v1OqHZhKouVZPxbF9qteT5wNMAY5Kh9imsflZN89uKM7ZN5sA4GHPqjuHeTkLNO4xNVamouwaLxq3CkzkWwkPQaAa7TSd7ap8tHgOMxIYElhgXj0G+DgxG535YSzqVp57a8AsUTPSFvwau/S9XF3VgWkSTM7MzYXXftsVkp5LYKTWf0PbAUxpcohxjxXURKhbCy0PpXlzGl096SzLwoPCeHXbadDOztIBtvNqA5eySStdTxIP2eakOl0oGcOF7V8nW6zDTg8XyysnxIeaX9esqqXW8yo5m5LFjp4ToKXTtmBljQux8+S3mNGA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+Vnj8BGxFdOpi+mDnv+mAqFum0KzLAlOAdeTOWLMjIs=;
 b=Wz8ESupLngXCenUVeARG9XeFNiZsQfkroTdHbTb99c3fj7lWW9PpW8RqGTS8cxqNZY0bFWJX61NJyU4pqb4q3lTgolkZGZgY8JkUyOQac/qH7qzHfvm0FULC9UnQk5cTZsvK9pjZwVUIpm1DRAX0o6EyT+4tKIF34SSgGv/zP4eS9jb+y8kp+O2tuItQB9LXCUyWL2J7FaW83FLjjDtNoNcpc6tdaBIz9PDr6CFsQzvwcsEoLZOnS3tkdUplxXC5E3URF+AW1sqbpajtab0C/lsvPLGl0M9fwRDNfRN7/yJzNzMe+Gl9u7KZ3LoC0A2Wr683BIDMnKbGj1DtuzcKqg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <256fc66c-066f-3f0c-b34b-a237e9268f22@suse.com>
Date: Mon, 8 May 2023 12:43:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 09/14 RESEND] xenpm: Print HWP parameters
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-10-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501193034.88575-10-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM0PR02CA0001.eurprd02.prod.outlook.com
 (2603:10a6:208:3e::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8972:EE_
X-MS-Office365-Filtering-Correlation-Id: 65287fbd-5e2d-49b6-483a-08db4fb10db0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7JpMww0oG3sIX830c0JnR9B3CuR8bo58MZ7MR8zcVDBZVXtFMq6tPErsNKPsDNHgz5N2fom6MFnqSu8ZLzY85r8/rZ5l7qjoJL3MzMUukfTq/gEmZdAuHLMrJUVrLFjDCyGwoMf1KcOzr+070m/KCyWhKLYz8lVsGVLO/B6pyr88WvyHYgGQyhK+ozpSPHB6+8x39kZtePFJS6wRBtFho4j+y2DhfslvZH/TGOJEPggNOV3q3ALzyjQKjPg9wUJGL5utns80zCihEkYSuzp9aAB4FedRWjZBgrIjvGDfo9n9O4jIxMiKXLO2XyJZMI32p82LJVKuR4v3idvVyVUcy0wimI+/x3jNYf6U8wkspiDbO7Puvim1UplULidJhqv91kg9QyFj9janRRC4Y1/NP9qf8ht04N3FbNaF3zmR76QJSkOk/Fn6ZhsTo6kOU/qP6wZhpA0mf363evgLC861zZPed8hRKc2jT9JqVSyA1ljCoLlD23C2theDwmT6Eao33BppXTwxVwVU/5kmeSgV/A0w7kdftAj5rZeoV9dCi/KqfuLAlXrjeFTOL3EkX+dgq48E37fuBPq8fZeUlFHYCspRcVBBUkv0YXLhz1vyoDb0bmeSEmZVlR+r/jSOKIJcCMFHiM56HUpRlEl5g0NUeA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(346002)(366004)(376002)(136003)(396003)(451199021)(31686004)(66556008)(6916009)(4326008)(66476007)(6666004)(478600001)(66946007)(6486002)(54906003)(316002)(86362001)(31696002)(36756003)(26005)(53546011)(2616005)(6512007)(6506007)(83380400001)(5660300002)(8936002)(8676002)(2906002)(41300700001)(38100700002)(186003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 65287fbd-5e2d-49b6-483a-08db4fb10db0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 10:43:26.1957
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: PcdBNKK5KMSswbJt3ZLf8qUEcPDMrr5oUwxQAkK6zd9GZLyPij3FzveVd5SQSnV8DYq1lCGkOA71DllRamfJlg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8972

On 01.05.2023 21:30, Jason Andryuk wrote:
> Print HWP-specific parameters.  Some are always present, but others
> depend on hardware support.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
> v2:
> Style fixes
> Declare i outside loop
> Replace repeated hardware/configured limits with spaces
> Fixup for hw_ removal
> Use XEN_HWP_GOVERNOR
> Use HWP_ACT_WINDOW_EXPONENT_*
> Remove energy_perf hw autonomous - 0 doesn't mean autonomous
> ---
>  tools/misc/xenpm.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 65 insertions(+)
> 
> diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
> index ce8d7644d0..b2defde0d4 100644
> --- a/tools/misc/xenpm.c
> +++ b/tools/misc/xenpm.c
> @@ -708,6 +708,44 @@ void start_gather_func(int argc, char *argv[])
>      pause();
>  }
>  
> +static void calculate_hwp_activity_window(const xc_hwp_para_t *hwp,
> +                                          unsigned int *activity_window,
> +                                          const char **units)

The function's return value would be nice to use for one of the two
values that are being returned.

> +{
> +    unsigned int mantissa = hwp->activity_window & HWP_ACT_WINDOW_MANTISSA_MASK;
> +    unsigned int exponent =
> +        (hwp->activity_window >> HWP_ACT_WINDOW_EXPONENT_SHIFT) &
> +            HWP_ACT_WINDOW_EXPONENT_MASK;

I wish we had MASK_EXTR() in common-macros.h. While really a comment on
patch 7 - HWP_ACT_WINDOW_EXPONENT_SHIFT is redundant information and
should imo be omitted from the public interface, in favor of just a
(suitably shifted) mask value. Also note how those constants all lack
proper XEN_ prefixes.

> +    unsigned int multiplier = 1;
> +    unsigned int i;
> +
> +    if ( hwp->activity_window == 0 )
> +    {
> +        *units = "hardware selected";
> +        *activity_window = 0;
> +
> +        return;
> +    }

While in line with documentation, any mantissa of 0 results in a 0us
window, which I assume would then also mean "hardware selected".

> @@ -773,6 +811,33 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
>                 p_cpufreq->scaling_cur_freq);
>      }
>  
> +    if ( strcmp(p_cpufreq->scaling_governor, XEN_HWP_GOVERNOR) == 0 )
> +    {
> +        const xc_hwp_para_t *hwp = &p_cpufreq->u.hwp_para;
> +
> +        printf("hwp variables        :\n");
> +        printf("  hardware limits    : lowest [%u] most_efficient [%u]\n",

Here and ...

> +               hwp->lowest, hwp->most_efficient);
> +        printf("                     : guaranteed [%u] highest [%u]\n",
> +               hwp->guaranteed, hwp->highest);
> +        printf("  configured limits  : min [%u] max [%u] energy_perf [%u]\n",

... here I wonder what use the underscores are in produced output. I'd
use blanks. If you really want a separator there, then please use
dashes.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 10:47:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 10:47:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531370.826975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvyOg-0004wI-Ga; Mon, 08 May 2023 10:47:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531370.826975; Mon, 08 May 2023 10:47:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvyOg-0004wB-DV; Mon, 08 May 2023 10:47:14 +0000
Received: by outflank-mailman (input) for mailman id 531370;
 Mon, 08 May 2023 10:47:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvyOf-0004w5-U2
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 10:47:13 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2062b.outbound.protection.outlook.com
 [2a01:111:f400:7d00::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aff92e3e-ed8d-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 12:47:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9688.eurprd04.prod.outlook.com (2603:10a6:102:271::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 10:47:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 10:47:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aff92e3e-ed8d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mLIr+16dX+1jZG6XvZ6yCRWHYMvDnuDI9QrOyhWtgBN0G2thX8qdXVs8UHipgXAOzD01vhtn0lyOg0922CmobgnFKFj6/gkn0EIOA2M+2VoJnVpQ4IeW7lus7ewwfBczEC2+PyB4zPNLgInA3ilzAgK5TNEEJGbVYI2QJBTwzazJrLeqsDhLJmAg6kzFO2gzn7KkN6g+KqpcHQ1mqVG/FSgOXo1WXYOArZg6W1kw6gf49eob2Rf1chxkitYnFKbW3vD6Ph+r+NZpU3Z42b7x5CGb7qZ/YxQfX9UJFoVVumOqIzFL4i5PQoR0ghsTyfyJqYtKXyC6ZjZTkjarzuLExQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3Kbj56+TveSb8xyuNr5L5tBwkr4DcPGmlHwHBe74nP4=;
 b=cnOIc0qBpjpZIFPj6f662s+MUfLUHzpcK7jFVdQReAUfn5VSkP7vQe1TC2xRPbo4gUTNMhivfYMKDjsDvsybD2jTilRlY+ITdRFViIq9mzsIV4OROgML4IujaRaLC55VFv4Ot1FgX3VMyEf8Wmirk9ZsILFKE5Km8Q5KJEaVmZvhgaI5YoJpZQ/4Pr09QrXWolTtz5zoM/aoc0BNpaLiU27HzSLpRKALTrrsR9EvZsr1N4XZcG7I3LMB194zDltJnsBcSTb+GUoVJXNV3KMGnnCy/i6osyOAYCjeZd38GPiPoLO4089aFvZazA66uRjW2cRB+gMIpoMP0U8Kap0YRA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3Kbj56+TveSb8xyuNr5L5tBwkr4DcPGmlHwHBe74nP4=;
 b=t6C5y6x9uCega+2hV4KmSkVpCrs/QF/lMGPnd1Ch/9eOMreLU9K5fyAcMIKrkWSY+ErJ2hx5yNdz+DIa2afrHC7PJbofAtzLBOG3SWTX4yuWklEgmoHEF5mSuJrLCIxOo025pxGHzU7E3eNggjqrU1cKv3LSHu+w77cepE0d2i/nPc982AjxFZRV2MYOlgnhWpJ9zUn8raiAGyxRSeqgF9ykXbYyFzlvzNmSwNTIhOwkabBbmLdtNVrhRkAiWOjbNv2qbc5P74tPej27DoSy6IAK0D4g1/d4kziOTlvZ6c9RowGQmca/SSvD6EltdgfQhZQYeQJ+OZNdaWqCiEK/WQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9cf71407-6209-296a-489a-9732b1928246@suse.com>
Date: Mon, 8 May 2023 12:46:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to
 userspace
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-8-jandryuk@gmail.com>
 <7db38688-1233-bc16-03f3-7afdc3394054@suse.com>
In-Reply-To: <7db38688-1233-bc16-03f3-7afdc3394054@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM3PR07CA0135.eurprd07.prod.outlook.com
 (2603:10a6:207:8::21) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9688:EE_
X-MS-Office365-Filtering-Correlation-Id: 7a9d2bfc-b34b-4ad5-8d05-08db4fb1924a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	F1iUrSBeA6vGRW32HBZvCze9jZbl9KXLddW9oVmtTlB2ozTGEJnqDHaOzrJ+OJu6eUZuh4Fm2oVeEUJnyp78KmkK9gK19JvCzaDSl8cWDny7WRXYsGT7snP3ZHr0so5sbzdBch4nAvgIzz71S5N6rCWjhEZpXUmW2oH/7QxLad79mI0dI0luLEA2ea5pXoXWddvJjeom9QUWY+9/78+LxYjVeHpIuF3oQB9AWo55F71Vdhba0/zi7GBc9FSkH3fwP4SYQ9xfXu8kE5o45XJTBxu9RgvPq1/o1J90mhi1k/2YFKOzlhtpsUXzuM8cFey6XbJIU4WJfmCg/0U5WeWDj8r6rL67YupUWQpKnVUZz+vmGWaLcAtCRR7Ep6W0PlW3FxqLj4bBi5/xqvKhkHGwph0Lea/ssZn1DpySnnohRfN633+g16dGhhxnSH2r1Khfo3cxBRYL7fpog2d5xn/GE5BsQmCBH3j33vY4n00izePnylfaelRDMa6k6/4RUQyLrjjsq9041L4nS066i5UzPLvw/41JWv/n2Fa7nrmavwDSYuaI/l2dI7Terv28FqgpLclFR6seaxDxiIMifzZJsDdABDS3G91f5EK2XcUDaMs9jQoxsy4t7etzvPFVylCPJpe5xi9YW1mg6NwzioKUTA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(136003)(366004)(346002)(376002)(396003)(451199021)(83380400001)(6486002)(478600001)(6666004)(54906003)(2616005)(6512007)(6506007)(26005)(186003)(53546011)(2906002)(38100700002)(36756003)(6916009)(66946007)(66476007)(4326008)(66556008)(41300700001)(8936002)(8676002)(31696002)(86362001)(5660300002)(316002)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?QTRMYjd6Z3llcTR2ZXRqRlJWL1FWMThOWm1PRm5vejJ3ZFdlRGthYzJmcTZO?=
 =?utf-8?B?Ty9zRGRWaytXN3VPNkVHalhLWjlobUhyMFY0bFhha2N5cTNZa2hYVTNVNDJU?=
 =?utf-8?B?NHdZYURCZHBncFhiamtudUhTREFpdGU2TFNZeStvSGhpS2NnSU82Sk9ybXgr?=
 =?utf-8?B?OEdsdEV2Nk9LcTluaG8xTGlBSkFqQ1V0OUZpYkZVbVBtcmkrMXVoRXhFQTlx?=
 =?utf-8?B?TkU2Z2RrdTVmUk5RUWRRUVI5OENQWXIzajlzUW9va211WjYwdUFvNEJoT0t0?=
 =?utf-8?B?MlFRUjRhUHBJakRnV1V1WWRValp5SHVjbEpoRVR6bFkrZTcxa3ZNRWZvb3Nt?=
 =?utf-8?B?MnRhMGJFMW1KR3ppa0J2QW9VUkl0OWNSTFpNSnJoWVpaOTdNWmpRUU1OWmpT?=
 =?utf-8?B?MWhycnFERWd2V0VuaVFCaGZpWm1PVVV5U00xVnBLU1NYWG42VG82VnFRLzZT?=
 =?utf-8?B?SkxNT1V3bThxclU0WnZ4M09SLzdKenFDcUZEQWJ2RllYcFB6VDJKblJ1Vy9p?=
 =?utf-8?B?SUV3ejdRVFNNUXQvV01FK3lBYjNiQzR2bTNxQzlkd1VtVXViRHZzQURaZDNL?=
 =?utf-8?B?S1ZNT3BlanVkTnpLUHJNR0JHdWhHRzJZZHJ0em16Y1hGdmpYc3lpUFIwMnpD?=
 =?utf-8?B?ODZTZHBBWG9QVVZvSENoN29Xdjk3cExXRnIrU1pmWWRVTTJSQndoWkMwR3FU?=
 =?utf-8?B?SG1Hb244cVJKOEMyeS9iYWw5Q3RwSnBaR3Q0bmlsUXRwS0ZJT3k0dmExam41?=
 =?utf-8?B?dVJXVXE4K0UzK2JXZUhSY0J6ejJ6TlhtbWZXcXlpbmZEV3puOWtMNXJ2bDJ5?=
 =?utf-8?B?bUp3TUpsV0orS1JrU3FvTllzSXVmb2Fld1hUT2hCU0hjRzFUeml4UEttR2VG?=
 =?utf-8?B?aDVDVENid2wzWlR0MVRrUjF4eTd3dHdhenFTWnlySSs5QXFGU001SjB4UkFN?=
 =?utf-8?B?M2ZEc0xVQ2Z0REdZdGRJc01DR2M1WGNoU3NoNGhha2x6eG42c2lDYlppSnhw?=
 =?utf-8?B?NWI3MXdMQ0dBZUJGSThXZ2ljdWZySlZKbzVIR1JCOXIzZFp0YXJqMksva3hD?=
 =?utf-8?B?UWd5bVorL2M5U3E3VjBvNDdDQWVYdTRHRGdEdkRIYVRBcGZaZXU3VWJLajRq?=
 =?utf-8?B?WGEzUG5lQ3BKK0xsUlRSa3N5V1JqSUh3ajR0TGREQWpRTHVmcG9Ea3JZZm5Q?=
 =?utf-8?B?TlVTOS80UWluVWpvVndNNVJqcWprY1F0WEF2a3B3bkNZSXQ4MDkyRjhlK2dE?=
 =?utf-8?B?VHZyRDBvK1AwZ2I2bmZvWktlUzlCOHcwS09RditnZzE5SXhBbnhDSjBLNlFM?=
 =?utf-8?B?T1pNOGtlZWFEYU1PTUNHeGs0OWlvQkFuZVZaQTRLeGJJN0RtU3BLd01McXpM?=
 =?utf-8?B?ektCdk9xRDN2Y2x0RWZQUXVPdFFNZllPUC9VbWY2clVsWllpSnlod0o1aWxD?=
 =?utf-8?B?cjRFUWVXSkp6K0RJUVB2UTBJYTV4N0IwWk91Mnd6RllLZ0puTmpJVWJmcFNZ?=
 =?utf-8?B?cWdmK0RZUC9mRUUxOHpoT2MyL0h1WGFWSzRxeFBKZmt5bWlzWXN4K2Nucm9u?=
 =?utf-8?B?NzBPOTVRNzAwWGtSNHFGU0UxR0Q5ZERseVRGdHlmaDJKdno1bG1uKzNWeWZu?=
 =?utf-8?B?TXFsSWJEN1lHYzJ4T0dteTc2THBsa2Y3bTVJQ3dBaFhvNnBSZGV2UU1qT3hs?=
 =?utf-8?B?ZkNuN0txWjJhSkZ3cDNzem5VZzVQdWhzWk8zTWR6T3A1L1JzN2dOMStDandL?=
 =?utf-8?B?YlhFUC8rOW43MjY5dWNkQkNUam1mSzlsMkNvYTNPOXRGNkFWUEgrT2FVdUk5?=
 =?utf-8?B?VjMvdHBYYWF2aHY1YkJGVi9pTEFsdU1jYWRRU3AyRUJOT2ZuMUJHOHdqQWkz?=
 =?utf-8?B?Rzd1TStBVis5VnBDK0tEYi9Tc2pDTjRrNW1QMzFkRENUcU5Ub0FaeDBqSWFJ?=
 =?utf-8?B?aWVLQ3hUSjNNTjVrL004Z1FUaFBkd2xuekdCZW03Q0VPbHUrMTNpVTJUWlk2?=
 =?utf-8?B?VVo2SkNDRW5OUGhvSlNMa2lFRnVtN0RlbDBhRDJSb3Q4L01lRlZaR0Q0b1lV?=
 =?utf-8?B?UzZtS0lTZ3hlYWZyajkrb20xRzFyTUlaZ1FWSFdHVHhwUU1VQVUwcDhyS1hJ?=
 =?utf-8?Q?AhASMWBdUUsvWNaC7QVN11CAb?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7a9d2bfc-b34b-4ad5-8d05-08db4fb1924a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 10:47:08.6760
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FJ1m/iz1Ue1+VGlu/huyDtlF7ZvzQTS6l6q1mJrivHoTqYysK8O3cEuqtVlpsOMYiZ7YdrCQj/Z6kYlnHBbMNw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9688

On 08.05.2023 12:25, Jan Beulich wrote:
> On 01.05.2023 21:30, Jason Andryuk wrote:
>> Extend xen_get_cpufreq_para to return hwp parameters.  These match the
>> hardware rather closely.
>>
>> We need the features bitmask to indicate fields supported by the actual
>> hardware.
>>
>> The use of uint8_t parameters matches the hardware size.  uint32_t
>> entries grow the sysctl_t past the build assertion in setup.c.  The
>> uint8_t ranges are supported across multiple generations, so hopefully
>> they won't change.
> 
> Still it feels a little odd for values to be this narrow. Aiui the
> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
> used by HWP. So you could widen the union in struct
> xen_get_cpufreq_para (in a binary but not necessarily source compatible
> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
> placed scaling_cur_freq could be included as well ...

Having seen patch 9 now as well, I wonder whether here (or in a separate
patch) you want to limit providing inapplicable data (for example, not
filling *scaling_available_governors would even avoid an allocation, thus
removing a possible reason for failure), while there (or again in a
separate patch) you'd also limit what the tool reports (inapplicable
output causes confusion / questions at best).

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 11:23:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:23:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531383.826984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvyxq-0000sm-37; Mon, 08 May 2023 11:23:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531383.826984; Mon, 08 May 2023 11:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvyxq-0000sf-08; Mon, 08 May 2023 11:23:34 +0000
Received: by outflank-mailman (input) for mailman id 531383;
 Mon, 08 May 2023 11:23:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvyxp-0000ru-81
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:23:33 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c13d83fa-ed92-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 13:23:31 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id CE2151FE34;
 Mon,  8 May 2023 11:23:27 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 95B2F1346B;
 Mon,  8 May 2023 11:23:27 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id t5UXI6/bWGQ2KgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:23:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c13d83fa-ed92-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683545007; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=BvpHVm1k2zYQtgm6kSRpmTVy7RWuaY1Ad2jZD6vQ65s=;
	b=q/KpXFUXmdwfU74NLloJMeqsObhR6H7BQPN0IWdMYTvOgvoXaHhPhOSgjUOVns2NMVsU8Y
	CDksf+Jmr7Zg6n1KyMSYdnv8ClNXYcnEZ+lkx8kk5M2RIkGGAwnHJaj8tlx9JTKOarMUOE
	gXvgQW7Wgqjru2ox4r9Xb8mzPuJ/qEo=
Message-ID: <324fd699-dcd8-9881-a276-167be38622b1@suse.com>
Date: Mon, 8 May 2023 13:23:27 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v1] tools: convert bitfields to unsigned type
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <20230503150142.4987-1-olaf@aepfle.de>
 <d8224847-ffc5-3faf-1bc5-6ad3d7966b26@suse.com>
 <20230508120038.74246111.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230508120038.74246111.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------GucqRMVKOv2enaXkTjBlexBR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------GucqRMVKOv2enaXkTjBlexBR
Content-Type: multipart/mixed; boundary="------------nttb7OYmMOnGDlecJH2qQa0o";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
Message-ID: <324fd699-dcd8-9881-a276-167be38622b1@suse.com>
Subject: Re: [PATCH v1] tools: convert bitfields to unsigned type
References: <20230503150142.4987-1-olaf@aepfle.de>
 <d8224847-ffc5-3faf-1bc5-6ad3d7966b26@suse.com>
 <20230508120038.74246111.olaf@aepfle.de>
In-Reply-To: <20230508120038.74246111.olaf@aepfle.de>

--------------nttb7OYmMOnGDlecJH2qQa0o
Content-Type: multipart/mixed; boundary="------------WkhGyOghj6QDnSnCZmAhIFRK"

--------------WkhGyOghj6QDnSnCZmAhIFRK
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMDguMDUuMjMgMTI6MDAsIE9sYWYgSGVyaW5nIHdyb3RlOg0KPiBNb24sIDggTWF5IDIw
MjMgMTE6MDY6MTEgKzAyMDAgSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPjoNCj4g
DQo+Pj4gLSAgICBpbnQgc2hvcnRfc3VtbWFyeV9kb25lOjEsIHByZWFsbG9jX3VucGluOjEs
IHdybWFwX2JmOjE7DQo+Pj4gKyAgICB1bnNpZ25lZCBzaG9ydF9zdW1tYXJ5X2RvbmU6MSwg
cHJlYWxsb2NfdW5waW46MSwgd3JtYXBfYmY6MTsNCj4+IFBsZWFzZSB1c2UgInVuc2lnbmVk
IGludCIgaW5zdGVhZCBvZiBhIHB1cmUgInVuc2lnbmVkIi4NCj4gDQo+IFRoZSBlbnRpcmUg
ZmlsZSB1c2VzIGp1c3QgJ3Vuc2lnbmVkJyBmb3IgYml0ZmllbGRzLg0KDQpJIGhhdmUgZm91
bmQgMTggbGluZXMgdXNpbmcgInVuc2lnbmVkIGludCIgZm9yIGJpdGZpZWxkcyBpbiB0aGlz
IGZpbGUuDQoNCg0KSnVlcmdlbg0K
--------------WkhGyOghj6QDnSnCZmAhIFRK
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------WkhGyOghj6QDnSnCZmAhIFRK--

--------------nttb7OYmMOnGDlecJH2qQa0o--

--------------GucqRMVKOv2enaXkTjBlexBR
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRY268FAwAAAAAACgkQsN6d1ii/Ey+3
Mgf/WAyWGxiLFA687yBCTRCN6wg5/pHH+F1oRL2M9yJpgar9Bd+l47X6Afkkcat+bZz8NejrLqLF
twC27U34pP+U54+zL+pQZBArpvmjRY1rZWK1A/C2IaOEt4TRjzR1TyuNxWropLBNLabmbulvP6Su
WAu5aD/Nax1WLIvq93sgiIqN7sUoV3hhhNWqxw+IrBvywCL29HOouCUaDMcraVChacaA2TR8+Pvr
L2N8fdmLu14jrW6bq6TBf30Vo9UHChVAe4Kd1Xbesgoz2hnE3KvIg6attO97XG+RzAbQnciKjoIr
vwUG1FSIR+WQGuJj3m9Pu/a386NMOYvf+XvQGIkExg==
=4z1t
-----END PGP SIGNATURE-----

--------------GucqRMVKOv2enaXkTjBlexBR--


From xen-devel-bounces@lists.xenproject.org Mon May 08 11:28:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:28:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531387.826996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvz2V-0001VS-Oj; Mon, 08 May 2023 11:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531387.826996; Mon, 08 May 2023 11:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvz2V-0001VL-JH; Mon, 08 May 2023 11:28:23 +0000
Received: by outflank-mailman (input) for mailman id 531387;
 Mon, 08 May 2023 11:28:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvz2U-0001VF-4m
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:28:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2075.outbound.protection.outlook.com [40.107.7.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6f950244-ed93-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 13:28:20 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9150.eurprd04.prod.outlook.com (2603:10a6:150:25::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 11:27:51 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 11:27:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f950244-ed93-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gFMRv3ym8Pc7NjnPfWcAfPcTdv+0wLCL0JJ5GDZQt4PgaVKv18kMcnlFacH2Ha0IYzd80U2PtXjCVi9Okl86qWPwasev3dKr+B1bi8jEirG2OUmu3M2XjpWEloruoKQqC+kd16gODEZbBixHLjRgPHD0LxBwZHUxUxgTgk7ziQRuk11Hexec8m0W1l2sN9I2UU6dYlJerpQ2vn3i4h2qx+vgVl86C3W0uW3crvJ3Tld7qQfS3q2yJSK/ggHyAdGVh4kuW/qkuw8RasO1mGuDMAyCvElDnaSs+76FlK9Oz6+vIA7kZgzNxLiyaIM4CDCcTlBRZtIYYB15a9+p89awcw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=hNwRBae2FyJhvSroKSc0n+S74kYsmrTh60e1kp0Gh28=;
 b=g2HVoY4jUHBpeBRRhsqCYL4RQB3i4Z8q/wjH8KoqjSCQi5KQNc5oEfx+wdCbz10xs7fI/uQnvCEr1jonVKQTd5oQJf0dxdpTM4zE53Qb2O7fcWMgV6UBqsjcUbI9YKsYIoBSzZTHPNpmlcwA6JSXqgjdDnnvFNHR9tKkcHqFoXbYYbJJlmX6/bzBumPSeJCKC875E4ZFOQ7pSzR9d/CKIDGsJ0ZaBDPAksCAldF6z5yOssljg6OhU3WBs7ZbUjlJDJax/EMiweRCD4CJXSzFzAQnLS8OrVX39bq9YdTOyOf28jkuO1UU6p5DKcgkQ3izT5o8VX+oKi92RCxGeUnIEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hNwRBae2FyJhvSroKSc0n+S74kYsmrTh60e1kp0Gh28=;
 b=W8eAfH3bsbmQt8TQ9V6OpjSlYGjNLljX59/YCzPGhCWJwP4BXecqMN1UoVGKU+dGutHOa6X3LTJOylKxQjo1XD0CjVW1klWC5gEQQycd1+PCF9l/AJ5b/UAI1Dm7gyNjflf0r2kKjxGc9f2rpaiRVvZm40WKy1IMxyy/l3rw9PaAWKM8KCFfR+/XKuY9rm6lDqP0SeM3MWOX0d7n7qU2XHq8LunuTjGE8xSQu4YcllWTUy+5HrKbvWUMh6hsk8vY7Xn9nXhHD8QXgb7i/gTzxQpx6GWl5c+4dulxM586+KxHf+DHLywku6JU/jQBpDqsMw7HV35APdlnCXXS6Ldobg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1166bdf1-4d54-30bb-bdf9-65dfaeb6b29e@suse.com>
Date: Mon, 8 May 2023 13:27:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 10/14 RESEND] xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-11-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501193034.88575-11-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM0PR10CA0116.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:e6::33) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GV1PR04MB9150:EE_
X-MS-Office365-Filtering-Correlation-Id: d35bf528-654c-4010-551e-08db4fb7423f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pqtKeKQUCsgLA1AyQW+7VySHBmSpMaKgbsyjqfohuN5LqyvzSiSDygmWGP1dDDyciXWXeXA4fWam4XK8UhFdx1sg6H/XyfqBmq3zJAOL4MpHx+xZy1Kn/702zoDKpv6CVYzbTXPrk3q6Ut4z4DMa7PtdJQF6koG/hbPGyU2dC0teu6JoKHTf6UU9Un1SqFLHpdTyeHJAQtLqPBSElUl/e634p/yntXYIc64GpzJxc54bT2BTByeYf9dAHsJZOL2amp44AZHz1gGZu9zIQX14usIabjeyipkcIwrtG2rSaDJMfNyPILwUtVJ6kjDKPSIWX7JFDFtUko5fYCTRaXr6HykCxNiwXM91abJO/sDc/hF/GMfoQ1I90OgW+bvfk8bLuOuF6I/fP+6FaTSejnzUcgW/dY75gDt05JPNMZZ0ZJXGDSKtaN44+eZvp7SRa5YhDh4sDmA4xG0bOtCjp9Jph6/UBfdSMDn9x6Bapq2ZMdBJUtdIcaHOelz49fW9+XmaRk2DyOmuHAvcXD1EGqs++iK9MIMXIKmcDoeB2K0W/NW0x+7N5dmU+LnWu08qV8WQLgV3WDrwHfVU7P92JOFE6G4iVoqRr5oZxgdmE/nTQ2qJSllLbdK0WVNVunylAQVNeIHTdvFePgerBj7u6vzb4w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(366004)(396003)(376002)(346002)(136003)(451199021)(6512007)(53546011)(26005)(6506007)(6486002)(83380400001)(36756003)(2616005)(38100700002)(86362001)(31696002)(186003)(41300700001)(2906002)(54906003)(31686004)(5660300002)(6916009)(4326008)(66476007)(66556008)(316002)(8936002)(8676002)(478600001)(66946007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dHNuQzFoYWFqZVplNk96a05reWI3NHhuZk9JRU83em53cklWUjFRbk9tb1Ax?=
 =?utf-8?B?UzdTRkJsSTJxeVZKT1VaOXNGemZoWVpvb2pxVUxrTlNYREZoZHBUbEErbWVs?=
 =?utf-8?B?alJkSU1heXZrTXExdEZuWGg2czhIRm45Y3VoaDl3Rys2WlNraEpDeEpSQ0x0?=
 =?utf-8?B?Kys3MlFlRWNlSnFnS1llcHo0ei9Za1g1cnlocmU4bkZvMHVSY1NEOTdkV2tB?=
 =?utf-8?B?U1NLZHB1cWlvbDJoenVJelRBdkJUVEIvUnFWbklnVC90QWlQZHltbnJ2Mmll?=
 =?utf-8?B?d3NqUVlWODlKWGVudzJ0V1B6dW9KUWgzMW1QWDA0QlJrenh5ZlVDSzdLcGNB?=
 =?utf-8?B?akM0R3pxRUpoelRicDBENnM5eDVjaE5Ga3QvRTU5M0lPOStUb21FTmRITng5?=
 =?utf-8?B?TEhiMC9DMlNueXlHQmNvMzBtRG9vRjVqTjgwKzJmVjZXQU5oL2kwQmU5Mi9o?=
 =?utf-8?B?OTNSekNrdTY0T1RJL3dPTHhyeUUxWmd4N3ZKZTd3R3Y1OWRzUTloajkxaGxs?=
 =?utf-8?B?SU5uZ3QvYTg4NnVNMmJBdmVGYlNXRDRiRU5zN21HZDVWeS82SnNsNFFldUN3?=
 =?utf-8?B?ZFMyWm4xbC84MVNLcDllVUhuOUVaT1U4a1cxVDF3eWxzVzRlNDNERlpiS3hs?=
 =?utf-8?B?TmRnTmN4QUJ6ZldQZnB0UnFWb2dLWjF5SEIwNDFmcHhJZHhHcHdrUHZsMlVq?=
 =?utf-8?B?eW9Sd2h6dnR1L0ZydmhnN2NWS2JLT2M4cVF3djFtVXJJUVVVRGZoSVpnTFc4?=
 =?utf-8?B?aGUrSkN6ZUNvbkZpa1lSSU52V2tSbk9uSnFPRk5hZkVLM2k0dlJJb2I5Q2NV?=
 =?utf-8?B?aDhBMlZRdWR1OGhXNHpuRGZaOFVGSUxKV3FNVllsUXlyN2lXeEJkbmxFaDVk?=
 =?utf-8?B?WUhZZGRWZTJlNVdLZ05Ta0ExSWtRbHhrRmJENVJMNEhyTUtPTjg2aXBlTXdu?=
 =?utf-8?B?UFo2ekRCVVNLTXNxODcwWE4zNkhzQm5zVysyeFUyeU1lWlZzQjl6MWxIcm1G?=
 =?utf-8?B?Q0tQcWJQK0VHUEpaaWU2OU9lVkdSYUhMcEkzQVE1dGFDYnRreE1hUzF2Z3By?=
 =?utf-8?B?M2ViWngvUElTZmYxdExJc1MwOU9ldm5PS1VtQXV5YVI3MjBJSEIvS1FjVXVI?=
 =?utf-8?B?ZllKQS8xd3BYNkxLQlM4bC9CRFhpKzdjY25DMGMvRU9PMXpPL3pWQmcremJi?=
 =?utf-8?B?OWV5TktMQk5tdzVuSFBxT0xQTDA0ZXF2Tm1oaVNrSWdyZmlsa0FvejJNdWlx?=
 =?utf-8?B?MnVDNytSak5ZUFZGSEZmMUpSYlMwTWk2ekV1RVBhVVBXemtKaGlZdG41Q2pY?=
 =?utf-8?B?YVlIZDkrUW5JN2NibFNvazBzVTRSZ0xsQy9VTGRqV1NWeVIxS3NzYmUvMHZj?=
 =?utf-8?B?SU1BeHUvWUFHNzFCRUpMZjZ3VmhtSXZ3NGlmenVwZXZHMXZHSmthWDJvTEJH?=
 =?utf-8?B?NXhjNEdjRGFnYTdMd1RaMHVuZkY5MGFzek5IVDFhcTdvWGFsTlhVNnkyc25x?=
 =?utf-8?B?UmxwbVdYU3Q1RWxkbHF6aEhKZTBmcUprRVVCQzFrNk53bWFVbzAxdC83N1c0?=
 =?utf-8?B?aDBLbXI3TDc4RitYb0E4aFFTME1rNXJpZlJ4REIyQnUyNzc5eXZUUUl1Y2FO?=
 =?utf-8?B?SDBFVDQyRU9EbjB2SWViejBvQWpqRlFzZDg4dmF4RGNrZWNoVUdSNXZvSDM3?=
 =?utf-8?B?aE9iZHNnQlJqUmk0cnNZOFN3QlZNamlEVmVBUzR1R2N4aCs0aWltKzUyRXRk?=
 =?utf-8?B?djMvVklxb3lBUE5TWDhmL21uTjJRM05yTVNKQ25OS0FjMlFVTWVQQVBIbnNn?=
 =?utf-8?B?QWJYQWpiZ1hoQjVzVFJvU0pPMXZxWlVHb2FVQ0dtWjdjbTZCalJCRU1aeExD?=
 =?utf-8?B?R05BREg0STA1dW5idm5VRnMrS3hLSXA2Z29iVW96ZGpUUVFCSkl3WkdEWVNQ?=
 =?utf-8?B?SUVXR3FGaHVKY2tmQk1QMVh5eG5aVnQzZzdmcEhrelY1ZzRKbFVpS2JJSWNF?=
 =?utf-8?B?aWlhY1NJbTlZZGNtWi9YMTZhOHVQcmIyQzVXQmJSSDJ5dE1PMWdnREpWWEZ1?=
 =?utf-8?B?V3ZiNFdQQmdIYzFBRXVjSU8yYkJIeEJoK1JDT095bVovcTN6bkd4RkRIYkQw?=
 =?utf-8?Q?Ai3f4sTJQPRCPQQYjeWK3TxcC?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d35bf528-654c-4010-551e-08db4fb7423f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 11:27:51.3751
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WXA2Y6g1vU9EaC19HnlF/hafsPB1Yq4wFzZtViPvydwTGAo7yayMxVN89bQRiYQjH2LnYBL7xR7Pu/T9gXMYWg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR04MB9150

On 01.05.2023 21:30, Jason Andryuk wrote:
> @@ -531,6 +533,100 @@ int get_hwp_para(const struct cpufreq_policy *policy,
>      return 0;
>  }
>  
> +int set_hwp_para(struct cpufreq_policy *policy,
> +                 struct xen_set_hwp_para *set_hwp)

const?

> +{
> +    unsigned int cpu = policy->cpu;
> +    struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
> +
> +    if ( data == NULL )
> +        return -EINVAL;
> +
> +    /* Validate all parameters first */
> +    if ( set_hwp->set_params & ~XEN_SYSCTL_HWP_SET_PARAM_MASK )
> +        return -EINVAL;
> +
> +    if ( set_hwp->activity_window & ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK )
> +        return -EINVAL;

Below you limit checks to when the respective control bit is set. I
think you want the same here.

> +    if ( !feature_hwp_energy_perf &&
> +         (set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF) &&
> +         set_hwp->energy_perf > IA32_ENERGY_BIAS_MAX_POWERSAVE )
> +        return -EINVAL;
> +
> +    if ( (set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED) &&
> +         set_hwp->desired != 0 &&
> +         (set_hwp->desired < data->hw.lowest ||
> +          set_hwp->desired > data->hw.highest) )
> +        return -EINVAL;
> +
> +    /*
> +     * minimum & maximum are not validated as hardware doesn't seem to care
> +     * and the SDM says CPUs will clip internally.
> +     */
> +
> +    /* Apply presets */
> +    switch ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_PRESET_MASK )
> +    {
> +    case XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE:
> +        data->minimum = data->hw.lowest;
> +        data->maximum = data->hw.lowest;
> +        data->activity_window = 0;
> +        if ( feature_hwp_energy_perf )
> +            data->energy_perf = HWP_ENERGY_PERF_MAX_POWERSAVE;
> +        else
> +            data->energy_perf = IA32_ENERGY_BIAS_MAX_POWERSAVE;
> +        data->desired = 0;
> +        break;
> +
> +    case XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE:
> +        data->minimum = data->hw.highest;
> +        data->maximum = data->hw.highest;
> +        data->activity_window = 0;
> +        data->energy_perf = HWP_ENERGY_PERF_MAX_PERFORMANCE;
> +        data->desired = 0;
> +        break;
> +
> +    case XEN_SYSCTL_HWP_SET_PRESET_BALANCE:
> +        data->minimum = data->hw.lowest;
> +        data->maximum = data->hw.highest;
> +        data->activity_window = 0;
> +        if ( feature_hwp_energy_perf )
> +            data->energy_perf = HWP_ENERGY_PERF_BALANCE;
> +        else
> +            data->energy_perf = IA32_ENERGY_BIAS_BALANCE;
> +        data->desired = 0;
> +        break;
> +
> +    case XEN_SYSCTL_HWP_SET_PRESET_NONE:
> +        break;
> +
> +    default:
> +        return -EINVAL;
> +    }

So presets set all the values for which the individual item control bits
are clear. That's not exactly what I would have expected, and it took me
several readings of the code to realize that you write live per-CPU
data fields here, not fields of some intermediate variable. I think
this could do with being stated explicitly in the public header (if that
is indeed the intended model).
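A minimal sketch of the model I would have expected, using a hypothetical
structure and function name (not the actual Xen code): the preset is built
up in an intermediate copy and committed to the live data only at the end.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the per-CPU HWP data; the field names mirror
 * the quoted patch, but the layout here is illustrative only. */
struct hwp_drv_data {
    uint8_t minimum, maximum, desired, energy_perf;
    uint16_t activity_window;
};

/* Apply the POWERSAVE preset to an intermediate copy and commit it in one
 * go, instead of writing the live per-CPU fields one by one.
 * energy_perf handling (feature-dependent) is elided for brevity. */
int apply_preset_powersave(struct hwp_drv_data *data, uint8_t lowest)
{
    struct hwp_drv_data tmp = *data;    /* intermediate variable */

    tmp.minimum = lowest;
    tmp.maximum = lowest;
    tmp.activity_window = 0;
    tmp.desired = 0;

    *data = tmp;                        /* single commit at the end */
    return 0;
}
```

With such a helper, individual XEN_SYSCTL_HWP_SET_* overrides could also be
applied to the copy before the final commit, so a validation failure midway
would leave the live data untouched.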

> +    /* Further customize presets if needed */
> +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_MINIMUM )
> +        data->minimum = set_hwp->minimum;
> +
> +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_MAXIMUM )
> +        data->maximum = set_hwp->maximum;
> +
> +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF )
> +        data->energy_perf = set_hwp->energy_perf;
> +
> +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED )
> +        data->desired = set_hwp->desired;
> +
> +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_ACT_WINDOW )
> +        data->activity_window = set_hwp->activity_window &
> +                                XEN_SYSCTL_HWP_ACT_WINDOW_MASK;
> +
> +    hwp_cpufreq_target(policy, 0, 0);
> +
> +    return 0;

I don't think you should assume here that hwp_cpufreq_target() will
only ever return 0. Besides, by returning its value here you would
allow the compiler to tail-call optimize this code.
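A sketch of the suggestion, with hwp_cpufreq_target() modelled by a stub
(the stub name and signature are illustrative, not the real driver's):

```c
#include <assert.h>

/* Stub standing in for hwp_cpufreq_target(); the real function may
 * return a negative error code rather than always 0. */
int hwp_cpufreq_target_stub(void *policy, unsigned int target_freq,
                            unsigned int relation)
{
    (void)policy; (void)target_freq; (void)relation;
    return 0;
}

/* Propagate the callee's return value instead of assuming success;
 * returning it directly also permits tail-call optimization. */
int hwp_set_finish(void *policy)
{
    return hwp_cpufreq_target_stub(policy, 0, 0);
}
```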

> --- a/xen/drivers/acpi/pmstat.c
> +++ b/xen/drivers/acpi/pmstat.c
> @@ -398,6 +398,20 @@ static int set_cpufreq_para(struct xen_sysctl_pm_op *op)
>      return ret;
>  }
>  
> +static int set_cpufreq_hwp(struct xen_sysctl_pm_op *op)

const?

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -317,6 +317,34 @@ struct xen_hwp_para {
>      uint8_t energy_perf;
>  };
>  
> +/* set multiple values simultaneously when set_args bit is set */

What "set_args bit" does this comment refer to?

> +struct xen_set_hwp_para {
> +#define XEN_SYSCTL_HWP_SET_DESIRED              (1U << 0)
> +#define XEN_SYSCTL_HWP_SET_ENERGY_PERF          (1U << 1)
> +#define XEN_SYSCTL_HWP_SET_ACT_WINDOW           (1U << 2)
> +#define XEN_SYSCTL_HWP_SET_MINIMUM              (1U << 3)
> +#define XEN_SYSCTL_HWP_SET_MAXIMUM              (1U << 4)
> +#define XEN_SYSCTL_HWP_SET_PRESET_MASK          0xf000
> +#define XEN_SYSCTL_HWP_SET_PRESET_NONE          0x0000
> +#define XEN_SYSCTL_HWP_SET_PRESET_BALANCE       0x1000
> +#define XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE     0x2000
> +#define XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE   0x3000
> +#define XEN_SYSCTL_HWP_SET_PARAM_MASK ( \
> +                                  XEN_SYSCTL_HWP_SET_PRESET_MASK | \
> +                                  XEN_SYSCTL_HWP_SET_DESIRED     | \
> +                                  XEN_SYSCTL_HWP_SET_ENERGY_PERF | \
> +                                  XEN_SYSCTL_HWP_SET_ACT_WINDOW  | \
> +                                  XEN_SYSCTL_HWP_SET_MINIMUM     | \
> +                                  XEN_SYSCTL_HWP_SET_MAXIMUM     )
> +    uint16_t set_params; /* bitflags for valid values */
> +#define XEN_SYSCTL_HWP_ACT_WINDOW_MASK          0x03ff
> +    uint16_t activity_window; /* See comment in struct xen_hwp_para */
> +    uint8_t minimum;
> +    uint8_t maximum;
> +    uint8_t desired;
> +    uint8_t energy_perf; /* 0-255 or 0-15 depending on HW support */

Instead of (or in addition to) the "HW support" reference, could this
gain a reference to the "get para" bit determining which range to use?
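For illustration, the range dependency could be validated along these lines
(hypothetical helper name; the constants follow the quoted patch and the
"0-255 or 0-15" comment):

```c
#include <assert.h>

#define IA32_ENERGY_BIAS_MAX_POWERSAVE 15U

/* The valid energy_perf range depends on whether the CPU supports the
 * HWP energy/performance preference (0-255) or only the legacy
 * IA32_ENERGY_PERF_BIAS MSR (0-15). */
int validate_energy_perf(unsigned int val, int has_hwp_energy_perf)
{
    unsigned int max = has_hwp_energy_perf
                       ? 255U : IA32_ENERGY_BIAS_MAX_POWERSAVE;

    return val <= max ? 0 : -1;  /* -EINVAL in the hypervisor */
}
```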

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531395.827004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzLV-00040T-Ds; Mon, 08 May 2023 11:48:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531395.827004; Mon, 08 May 2023 11:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzLV-00040M-BJ; Mon, 08 May 2023 11:48:01 +0000
Received: by outflank-mailman (input) for mailman id 531395;
 Mon, 08 May 2023 11:48:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvzLU-00040G-JD
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:48:00 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2cae1b69-ed96-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 13:47:57 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 85DCD1FE44;
 Mon,  8 May 2023 11:47:56 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 44BD51346B;
 Mon,  8 May 2023 11:47:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id j+RjD2zhWGTiNgAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:47:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2cae1b69-ed96-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683546476; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=sHByFjc6g+J1m3gjwq+ynzSH/6Ec1w8cHibOV3GDZ7A=;
	b=m97MgFUcOtTsPRfofXMMa2qlEy+GUybkjWqkCG890oJyLhdegdYK5NAWAEXhLtnAROsnks
	8Yp/3Bm8jGSgiLAlkOHBJWFgr25Rm3F359+baZmzDS4eQjsBVKMoVJG1Gf4zcIVrFOU8D5
	3KnV3NcOsKwxalk53/rmFLIotEADQ7I=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v5 00/14] tools/xenstore: rework internal accounting
Date: Mon,  8 May 2023 13:47:40 +0200
Message-Id: <20230508114754.31514-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series reworks the Xenstore internal accounting to use a uniform
generic framework. It also adds some useful diagnostic information, such
as an accounting trace and the maximum per-domain and global quota
values seen.

Changes in V2:
- added patch 1 (leftover from previous series)
- rebase

Changes in V3:
- addressed comments

Changes in V4:
- fixed patch 3

Changes in V5:
- addressed comments

Juergen Gross (14):
  tools/xenstore: take transaction internal nodes into account for quota
  tools/xenstore: manage per-transaction domain accounting data in an
    array
  tools/xenstore: introduce accounting data array for per-domain values
  tools/xenstore: add framework to commit accounting data on success
    only
  tools/xenstore: use accounting buffering for node accounting
  tools/xenstore: add current connection to domain_memory_add()
    parameters
  tools/xenstore: use accounting data array for per-domain values
  tools/xenstore: add accounting trace support
  tools/xenstore: add TDB access trace support
  tools/xenstore: switch transaction accounting to generic accounting
  tools/xenstore: remember global and per domain max accounting values
  tools/xenstore: use generic accounting for remaining quotas
  tools/xenstore: switch get_optval_int() to get_optval_uint()
  tools/xenstore: switch quota management to be table based

 docs/misc/xenstore.txt                 |   5 +-
 tools/xenstore/xenstored_control.c     |  65 ++--
 tools/xenstore/xenstored_core.c        | 176 +++++-----
 tools/xenstore/xenstored_core.h        |  24 +-
 tools/xenstore/xenstored_domain.c      | 432 ++++++++++++++++++-------
 tools/xenstore/xenstored_domain.h      |  59 +++-
 tools/xenstore/xenstored_transaction.c |  24 +-
 tools/xenstore/xenstored_watch.c       |  15 +-
 8 files changed, 521 insertions(+), 279 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:17 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v5 01/14] tools/xenstore: take transaction internal nodes into account for quota
Date: Mon,  8 May 2023 13:47:41 +0200
Message-Id: <20230508114754.31514-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The accounting for the number of nodes of a domain in an active
transaction is not working correctly, as the node quota is checked only
against the number of nodes outside the transaction.

This can result in the transaction failing in the end, as the node
quota is checked again when the transaction is committed.

On the other hand, even in a transaction deleting many nodes, new nodes
might not be creatable if the node quota was already reached at the
start of the transaction.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V3:
- rewrite of commit message (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f62be2245c..dbbf97accc 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1116,9 +1116,8 @@ int domain_nbentry_fix(unsigned int domid, int num, bool update)
 
 int domain_nbentry(struct connection *conn)
 {
-	return (domain_is_unprivileged(conn))
-		? conn->domain->nbentry
-		: 0;
+	return domain_is_unprivileged(conn)
+	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:20 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v5 02/14] tools/xenstore: manage per-transaction domain accounting data in an array
Date: Mon,  8 May 2023 13:47:42 +0200
Message-Id: <20230508114754.31514-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to prepare for keeping accounting data in an array instead of
in independent fields, switch the struct changed_domain accounting data
to that scheme, for now using an array with only one element.

To allow this scheme to be extended later, add the needed indexing enum
to xenstored_domain.h.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- make "what" parameter of acc_add_changed_dom() an enum type, and
  assert() that it won't exceed the accounting array (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 19 +++++++++++--------
 tools/xenstore/xenstored_domain.h | 10 ++++++++++
 2 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index dbbf97accc..609a9a13ab 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -99,8 +99,8 @@ struct changed_domain
 	/* Identifier of the changed domain. */
 	unsigned int domid;
 
-	/* Amount by which this domain's nbentry field has changed. */
-	int nbentry;
+	/* Accounting data. */
+	int acc[ACC_TR_N];
 };
 
 static struct hashtable *domhash;
@@ -550,7 +550,7 @@ int acc_fix_domains(struct list_head *head, bool chk_quota, bool update)
 	int cnt;
 
 	list_for_each_entry(cd, head, list) {
-		cnt = domain_nbentry_fix(cd->domid, cd->nbentry, update);
+		cnt = domain_nbentry_fix(cd->domid, cd->acc[ACC_NODES], update);
 		if (!update) {
 			if (chk_quota && cnt >= quota_nb_entry_per_domain)
 				return ENOSPC;
@@ -595,19 +595,21 @@ static struct changed_domain *acc_get_changed_domain(const void *ctx,
 	return cd;
 }
 
-static int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
-			       unsigned int domid)
+static int acc_add_changed_dom(const void *ctx, struct list_head *head,
+			       enum accitem what, int val, unsigned int domid)
 {
 	struct changed_domain *cd;
 
+	assert(what < ARRAY_SIZE(cd->acc));
+
 	cd = acc_get_changed_domain(ctx, head, domid);
 	if (!cd)
 		return 0;
 
 	errno = 0;
-	cd->nbentry += val;
+	cd->acc[what] += val;
 
-	return cd->nbentry;
+	return cd->acc[what];
 }
 
 static void domain_conn_reset(struct domain *domain)
@@ -1071,7 +1073,8 @@ static int domain_nbentry_add(struct connection *conn, unsigned int domid,
 
 	if (conn && conn->transaction) {
 		head = transaction_get_changed_domains(conn->transaction);
-		ret = acc_add_dom_nbentry(conn->transaction, head, add, domid);
+		ret = acc_add_changed_dom(conn->transaction, head, ACC_NODES,
+					  add, domid);
 		if (errno) {
 			fail_transaction(conn->transaction);
 			return -1;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 279cccb3ad..40803574f6 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -19,6 +19,16 @@
 #ifndef _XENSTORED_DOMAIN_H
 #define _XENSTORED_DOMAIN_H
 
+/*
+ * All accounting data is stored in a per-domain array.
+ * Depending on the account item there might be other scopes as well, like e.g.
+ * a per transaction array.
+ */
+enum accitem {
+	ACC_NODES,
+	ACC_TR_N,		/* Number of elements per transaction. */
+};
+
 void handle_event(void);
 
 void check_domains(void);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:22 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v5 03/14] tools/xenstore: introduce accounting data array for per-domain values
Date: Mon,  8 May 2023 13:47:43 +0200
Message-Id: <20230508114754.31514-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce the scheme of an accounting data array for per-domain
accounting data and use it initially for the number of nodes owned by
a domain.

Make the accounting data type unsigned int, as no value is allowed
to be negative at any time.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V3:
- remove domid parameter from domain_acc_add_chk() (Julien Grall)
- rename domain_acc_add_chk() (Julien Grall)
- modify overflow check (Julien Grall)
V4:
- fix overflow check
---
 tools/xenstore/xenstored_domain.c | 70 ++++++++++++++++++-------------
 tools/xenstore/xenstored_domain.h |  3 +-
 2 files changed, 43 insertions(+), 30 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 609a9a13ab..30fb9acec6 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -69,8 +69,8 @@ struct domain
 	/* Has domain been officially introduced? */
 	bool introduced;
 
-	/* number of entry from this domain in the store */
-	int nbentry;
+	/* Accounting data for this domain. */
+	unsigned int acc[ACC_N];
 
 	/* Amount of memory allocated for this domain. */
 	int memory;
@@ -246,7 +246,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 
 	if (keep_orphans) {
 		set_tdb_key(node->name, &key);
-		domain->nbentry--;
+		domain_nbentry_dec(NULL, domain->domid);
 		node->perms.p[0].id = priv_domid;
 		node->acc.memory = 0;
 		domain_nbentry_inc(NULL, priv_domid);
@@ -270,7 +270,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		ret = WALK_TREE_SKIP_CHILDREN;
 	}
 
-	return domain->nbentry > 0 ? ret : WALK_TREE_SUCCESS_STOP;
+	return domain->acc[ACC_NODES] ? ret : WALK_TREE_SUCCESS_STOP;
 }
 
 static void domain_tree_remove(struct domain *domain)
@@ -278,7 +278,7 @@ static void domain_tree_remove(struct domain *domain)
 	int ret;
 	struct walk_funcs walkfuncs = { .enter = domain_tree_remove_sub };
 
-	if (domain->nbentry > 0) {
+	if (domain->acc[ACC_NODES]) {
 		ret = walk_node_tree(domain, NULL, "/", &walkfuncs, domain);
 		if (ret == WALK_TREE_ERROR_STOP)
 			syslog(LOG_ERR,
@@ -437,7 +437,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	resp = talloc_asprintf_append(resp, "%-16s: %8d\n", #t, e); \
 	if (!resp) return ENOMEM
 
-	ent(nodes, d->nbentry);
+	ent(nodes, d->acc[ACC_NODES]);
 	ent(watches, d->nbwatch);
 	ent(transactions, ta);
 	ent(outstanding, d->nboutstanding);
@@ -1047,8 +1047,27 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
-static int domain_nbentry_add(struct connection *conn, unsigned int domid,
-			      int add, bool no_dom_alloc)
+static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
+{
+	assert(what < ARRAY_SIZE(d->acc));
+
+	if ((add < 0 && -add > d->acc[what]) ||
+	    (add > 0 && (INT_MAX - d->acc[what]) < add)) {
+		/*
+		 * In a transaction when a node is being added/removed AND the
+		 * same node has been added/removed outside the transaction in
+		 * parallel, the resulting value will be wrong. This is no
+		 * problem, as the transaction will fail due to the resulting
+		 * conflict.
+		 */
+		return (add < 0) ? 0 : INT_MAX;
+	}
+
+	return d->acc[what] + add;
+}
+
+static int domain_acc_add(struct connection *conn, unsigned int domid,
+			  enum accitem what, int add, bool no_dom_alloc)
 {
 	struct domain *d;
 	struct list_head *head;
@@ -1071,56 +1090,49 @@ static int domain_nbentry_add(struct connection *conn, unsigned int domid,
 		}
 	}
 
-	if (conn && conn->transaction) {
+	if (conn && conn->transaction && what < ACC_TR_N) {
 		head = transaction_get_changed_domains(conn->transaction);
-		ret = acc_add_changed_dom(conn->transaction, head, ACC_NODES,
+		ret = acc_add_changed_dom(conn->transaction, head, what,
 					  add, domid);
 		if (errno) {
 			fail_transaction(conn->transaction);
 			return -1;
 		}
-		/*
-		 * In a transaction when a node is being added/removed AND the
-		 * same node has been added/removed outside the transaction in
-		 * parallel, the resulting number of nodes will be wrong. This
-		 * is no problem, as the transaction will fail due to the
-		 * resulting conflict.
-		 * In the node remove case the resulting number can be even
-		 * negative, which should be avoided.
-		 */
-		return max(d->nbentry + ret, 0);
+		return domain_acc_add_valid(d, what, ret);
 	}
 
-	d->nbentry += add;
+	d->acc[what] = domain_acc_add_valid(d, what, add);
 
-	return d->nbentry;
+	return d->acc[what];
 }
 
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
-	return (domain_nbentry_add(conn, domid, 1, false) < 0) ? errno : 0;
+	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
+	       ? errno : 0;
 }
 
 int domain_nbentry_dec(struct connection *conn, unsigned int domid)
 {
-	return (domain_nbentry_add(conn, domid, -1, true) < 0) ? errno : 0;
+	return (domain_acc_add(conn, domid, ACC_NODES, -1, true) < 0)
+	       ? errno : 0;
 }
 
 int domain_nbentry_fix(unsigned int domid, int num, bool update)
 {
 	int ret;
 
-	ret = domain_nbentry_add(NULL, domid, update ? num : 0, update);
+	ret = domain_acc_add(NULL, domid, ACC_NODES, update ? num : 0, update);
 	if (ret < 0 || update)
 		return ret;
 
 	return domid_is_unprivileged(domid) ? ret + num : 0;
 }
 
-int domain_nbentry(struct connection *conn)
+unsigned int domain_nbentry(struct connection *conn)
 {
 	return domain_is_unprivileged(conn)
-	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
+	       ? domain_acc_add(conn, conn->id, ACC_NODES, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
@@ -1597,7 +1609,7 @@ static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
 	 * If everything is correct incrementing the value for each node will
 	 * result in dom->nodes being 0 at the end.
 	 */
-	dom->nodes = -d->nbentry;
+	dom->nodes = -d->acc[ACC_NODES];
 
 	if (!hashtable_insert(domains, &dom->domid, dom)) {
 		talloc_free(dom);
@@ -1652,7 +1664,7 @@ static int domain_check_acc_cb(const void *k, void *v, void *arg)
 	if (!d)
 		return 0;
 
-	d->nbentry += dom->nodes;
+	d->acc[ACC_NODES] += dom->nodes;
 
 	return 0;
 }
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 40803574f6..9d05eb01da 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -27,6 +27,7 @@
 enum accitem {
 	ACC_NODES,
 	ACC_TR_N,		/* Number of elements per transaction. */
+	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
 };
 
 void handle_event(void);
@@ -77,7 +78,7 @@ int domain_alloc_permrefs(struct node_perms *perms);
 int domain_nbentry_inc(struct connection *conn, unsigned int domid);
 int domain_nbentry_dec(struct connection *conn, unsigned int domid);
 int domain_nbentry_fix(unsigned int domid, int num, bool update);
-int domain_nbentry(struct connection *conn);
+unsigned int domain_nbentry(struct connection *conn);
 int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
 
 /*
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:48:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531400.827045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzLq-0005QJ-Ez; Mon, 08 May 2023 11:48:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531400.827045; Mon, 08 May 2023 11:48:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzLq-0005Q8-Bz; Mon, 08 May 2023 11:48:22 +0000
Received: by outflank-mailman (input) for mailman id 531400;
 Mon, 08 May 2023 11:48:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvzLp-00040G-33
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:48:21 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 39e68f3d-ed96-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 13:48:19 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id F3B5222029;
 Mon,  8 May 2023 11:48:18 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C87061346B;
 Mon,  8 May 2023 11:48:18 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 3lSYL4LhWGQTNwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:48:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39e68f3d-ed96-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683546499; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XSQk4VtF6UBQcxOc0jYUSMTnf9LF8gFmH+yoqj3aCZQ=;
	b=qEHA+HzaLDyJ48+fP2LIJJZwgxxEVXAprBWPw/uUmXibRu2tRKMuHzIEvkDDaNYzgyxhi+
	+pL2S6MKx28ASCj5ncvhDxFzcxajKXvIVNR56WoJ42++cETl5UuRZXndii9SPRAM9ULW/g
	SMD9miJX0QZE/04YKkNtSnszzbZVZu8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 04/14] tools/xenstore: add framework to commit accounting data on success only
Date: Mon,  8 May 2023 13:47:44 +0200
Message-Id: <20230508114754.31514-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of modifying accounting data and undoing those modifications in
case of an error during further processing, add a framework for
collecting the needed changes and committing them only when the whole
operation has succeeded.

This scheme can reuse large parts of the per-transaction accounting.
The changed_domain handling can be shared, but the two use cases must be
allowed to use different sizes for the accounting data array.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- call acc_commit() earlier (Julien Grall)
- add assert() to acc_commit()
- use fixed sized acc array in struct changed_domain (Julien Grall)
V5:
- set conn->in to NULL only locally in acc_commit() (Julien Grall)
- define ACC_CHD_N in enum (Julien Grall)
---
 tools/xenstore/xenstored_core.c   |  7 ++++
 tools/xenstore/xenstored_core.h   |  3 ++
 tools/xenstore/xenstored_domain.c | 54 ++++++++++++++++++++++++++++++-
 tools/xenstore/xenstored_domain.h |  6 +++-
 4 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 3ca68681e3..8392bdec9b 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1023,6 +1023,9 @@ static void send_error(struct connection *conn, int error)
 			break;
 		}
 	}
+
+	acc_drop(conn);
+
 	send_reply(conn, XS_ERROR, xsd_errors[i].errstring,
 			  strlen(xsd_errors[i].errstring) + 1);
 }
@@ -1034,6 +1037,9 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 
 	assert(type != XS_WATCH_EVENT);
 
+	/* Commit accounting now, as later errors won't undo any changes. */
+	acc_commit(conn);
+
 	if ( len > XENSTORE_PAYLOAD_MAX ) {
 		send_error(conn, E2BIG);
 		return;
@@ -2195,6 +2201,7 @@ struct connection *new_connection(const struct interface_funcs *funcs)
 	new->is_stalled = false;
 	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
+	INIT_LIST_HEAD(&new->acc_list);
 	INIT_LIST_HEAD(&new->ref_list);
 	INIT_LIST_HEAD(&new->watches);
 	INIT_LIST_HEAD(&new->transaction_list);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index c59b06551f..1f811f38cb 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -139,6 +139,9 @@ struct connection
 	struct list_head out_list;
 	uint64_t timeout_msec;
 
+	/* Not yet committed accounting data (valid if in != NULL). */
+	struct list_head acc_list;
+
 	/* Referenced requests no longer pending. */
 	struct list_head ref_list;
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 30fb9acec6..e59e40088e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -100,7 +100,7 @@ struct changed_domain
 	unsigned int domid;
 
 	/* Accounting data. */
-	int acc[ACC_TR_N];
+	int acc[ACC_CHD_N];
 };
 
 static struct hashtable *domhash;
@@ -1070,6 +1070,7 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 			  enum accitem what, int add, bool no_dom_alloc)
 {
 	struct domain *d;
+	struct changed_domain *cd;
 	struct list_head *head;
 	int ret;
 
@@ -1090,6 +1091,22 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 		}
 	}
 
+	/* Temporary accounting data until final commit? */
+	if (conn && conn->in && what < ACC_REQ_N) {
+		/* Consider transaction local data. */
+		ret = 0;
+		if (conn->transaction && what < ACC_TR_N) {
+			head = transaction_get_changed_domains(
+				conn->transaction);
+			cd = acc_find_changed_domain(head, domid);
+			if (cd)
+				ret = cd->acc[what];
+		}
+		ret += acc_add_changed_dom(conn->in, &conn->acc_list, what,
+					   add, domid);
+		return errno ? -1 : domain_acc_add_valid(d, what, ret);
+	}
+
 	if (conn && conn->transaction && what < ACC_TR_N) {
 		head = transaction_get_changed_domains(conn->transaction);
 		ret = acc_add_changed_dom(conn->transaction, head, what,
@@ -1106,6 +1123,41 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 	return d->acc[what];
 }
 
+void acc_drop(struct connection *conn)
+{
+	struct changed_domain *cd;
+
+	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
+		list_del(&cd->list);
+		talloc_free(cd);
+	}
+}
+
+void acc_commit(struct connection *conn)
+{
+	struct changed_domain *cd;
+	enum accitem what;
+	struct buffered_data *in = conn->in;
+
+	/*
+	 * Make sure domain_acc_add() below can't add additional data to
+	 * the to-be-committed accounting records.
+	 */
+	conn->in = NULL;
+
+	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
+		list_del(&cd->list);
+		for (what = 0; what < ACC_REQ_N; what++)
+			if (cd->acc[what])
+				domain_acc_add(conn, cd->domid, what,
+					       cd->acc[what], true);
+
+		talloc_free(cd);
+	}
+
+	conn->in = in;
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 9d05eb01da..e40657216b 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -25,8 +25,10 @@
  * a per transaction array.
  */
 enum accitem {
-	ACC_NODES,
+	ACC_REQ_N,		/* Number of elements per request. */
+	ACC_NODES = ACC_REQ_N,
 	ACC_TR_N,		/* Number of elements per transaction. */
+	ACC_CHD_N = ACC_TR_N,	/* max(ACC_REQ_N, ACC_TR_N), for changed dom. */
 	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
 };
 
@@ -113,6 +115,8 @@ int domain_get_quota(const void *ctx, struct connection *conn,
  * If "update" is true, "chk_quota" is ignored.
  */
 int acc_fix_domains(struct list_head *head, bool chk_quota, bool update);
+void acc_drop(struct connection *conn);
+void acc_commit(struct connection *conn);
 
 /* Write rate limiting */
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:48:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531401.827055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzLu-0005tX-Q2; Mon, 08 May 2023 11:48:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531401.827055; Mon, 08 May 2023 11:48:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzLu-0005tK-N0; Mon, 08 May 2023 11:48:26 +0000
Received: by outflank-mailman (input) for mailman id 531401;
 Mon, 08 May 2023 11:48:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvzLt-0004FA-9j
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:48:25 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d402bad-ed96-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 13:48:24 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 8EDE11FE45;
 Mon,  8 May 2023 11:48:24 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 623961346B;
 Mon,  8 May 2023 11:48:24 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id heugFojhWGQeNwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:48:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d402bad-ed96-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683546504; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=e3gBj5zfxkpTLBUN5UffF19V51EVI0dt3OZ8371Y2g4=;
	b=QDQkK6m/wEb45LS3of0LKGp9TMCrzRREqdkFLFZ44LT7ZlKaD3WtMLBx6emedlKIiX3YsU
	pbCgDkGhrsKLDNNFbJZji5wMkLUGas75fPYfykiD8mZu7K9izYZlKjLhPnt4c2vlGluVT+
	xZ0DnjcQedm5VotHJ6Rvbv6pzrwT4RM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 05/14] tools/xenstore: use accounting buffering for node accounting
Date: Mon,  8 May 2023 13:47:45 +0200
Message-Id: <20230508114754.31514-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the node accounting to the accounting information buffering in
order to avoid having to undo it in case of failure.

This requires calling domain_nbentry_dec() before making any changes to
the database, as it can now return an error.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V5:
- add error handling after domain_nbentry_dec() calls (Julien Grall)
---
 tools/xenstore/xenstored_core.c   | 29 +++++++----------------------
 tools/xenstore/xenstored_domain.h |  4 ++--
 2 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 8392bdec9b..22da434e2a 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1454,7 +1454,6 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
 static int destroy_node(struct connection *conn, struct node *node)
 {
 	destroy_node_rm(conn, node);
-	domain_nbentry_dec(conn, get_node_owner(node));
 
 	/*
 	 * It is not possible to easily revert the changes in a transaction.
@@ -1645,6 +1644,9 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 	if (ret > 0)
 		return WALK_TREE_SUCCESS_STOP;
 
+	if (domain_nbentry_dec(conn, get_node_owner(node)))
+		return WALK_TREE_ERROR_STOP;
+
 	/* In case of error stop the walk. */
 	if (!ret && do_tdb_delete(conn, &key, &node->acc))
 		return WALK_TREE_SUCCESS_STOP;
@@ -1657,8 +1659,6 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 	watch_exact = strcmp(root, node->name);
 	fire_watches(conn, ctx, node->name, node, watch_exact, NULL);
 
-	domain_nbentry_dec(conn, get_node_owner(node));
-
 	return WALK_TREE_RM_CHILDENTRY;
 }
 
@@ -1797,29 +1797,14 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EPERM;
 
 	old_perms = node->perms;
-	domain_nbentry_dec(conn, get_node_owner(node));
+	if (domain_nbentry_dec(conn, get_node_owner(node)))
+		return ENOMEM;
 	node->perms = perms;
-	if (domain_nbentry_inc(conn, get_node_owner(node))) {
-		node->perms = old_perms;
-		/*
-		 * This should never fail because we had a reference on the
-		 * domain before and Xenstored is single-threaded.
-		 */
-		domain_nbentry_inc(conn, get_node_owner(node));
+	if (domain_nbentry_inc(conn, get_node_owner(node)))
 		return ENOMEM;
-	}
-
-	if (write_node(conn, node, false)) {
-		int saved_errno = errno;
 
-		domain_nbentry_dec(conn, get_node_owner(node));
-		node->perms = old_perms;
-		/* No failure possible as above. */
-		domain_nbentry_inc(conn, get_node_owner(node));
-
-		errno = saved_errno;
+	if (write_node(conn, node, false))
 		return errno;
-	}
 
 	fire_watches(conn, ctx, name, node, false, &old_perms);
 	send_ack(conn, XS_SET_PERMS);
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index e40657216b..466549709f 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -25,9 +25,9 @@
  * a per transaction array.
  */
 enum accitem {
+	ACC_NODES,
 	ACC_REQ_N,		/* Number of elements per request. */
-	ACC_NODES = ACC_REQ_N,
-	ACC_TR_N,		/* Number of elements per transaction. */
+	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
 	ACC_CHD_N = ACC_TR_N,	/* max(ACC_REQ_N, ACC_TR_N), for changed dom. */
 	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
 };
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:48:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531403.827064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzM2-0006TV-2K; Mon, 08 May 2023 11:48:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531403.827064; Mon, 08 May 2023 11:48:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzM1-0006TC-VA; Mon, 08 May 2023 11:48:33 +0000
Received: by outflank-mailman (input) for mailman id 531403;
 Mon, 08 May 2023 11:48:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvzM0-00040G-6o
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:48:32 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4095bc72-ed96-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 13:48:30 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 33AAC1FE44;
 Mon,  8 May 2023 11:48:30 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 004991346B;
 Mon,  8 May 2023 11:48:29 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id oI9sOo3hWGQpNwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:48:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4095bc72-ed96-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683546510; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uSFUIHMqpZd+uno5jSiEak73wbg4pmWrSCfAkw/DVuI=;
	b=nartOVTodCD5LxExgRYiIAIDUNDWm0iU5ZYpe6tB89NGhDRP/XVllyFxnDuePQ/1FlrPXy
	BwwVBOJLNMfQ/vyyhtNz11oorrVJWKYU61b8/pKP9mR/dP2AidSjY6rIzgXLmxrgRdtW+6
	41csvUG61XYzOFD6UZgZ/DHjadywr2g=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v5 06/14] tools/xenstore: add current connection to domain_memory_add() parameters
Date: Mon,  8 May 2023 13:47:46 +0200
Message-Id: <20230508114754.31514-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to enable switching memory accounting to the generic
array-based accounting, add the current connection to the parameters of
domain_memory_add().

This requires adding the connection to some other functions, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c   | 28 ++++++++++++++++------------
 tools/xenstore/xenstored_domain.c |  3 ++-
 tools/xenstore/xenstored_domain.h | 14 +++++++++-----
 tools/xenstore/xenstored_watch.c  | 11 ++++++-----
 4 files changed, 33 insertions(+), 23 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 22da434e2a..4d1debeba1 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -246,7 +246,8 @@ static void free_buffered_data(struct buffered_data *out,
 		}
 	}
 
-	domain_memory_add_nochk(conn->id, -out->hdr.msg.len - sizeof(out->hdr));
+	domain_memory_add_nochk(conn, conn->id,
+				-out->hdr.msg.len - sizeof(out->hdr));
 
 	if (out->hdr.msg.type == XS_WATCH_EVENT) {
 		req = out->pend.req;
@@ -631,24 +632,25 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 	 * nodes to new owners.
 	 */
 	if (old_acc.memory)
-		domain_memory_add_nochk(old_domid,
+		domain_memory_add_nochk(conn, old_domid,
 					-old_acc.memory - key->dsize);
-	ret = domain_memory_add(new_domid, data->dsize + key->dsize,
-				no_quota_check);
+	ret = domain_memory_add(conn, new_domid,
+				data->dsize + key->dsize, no_quota_check);
 	if (ret) {
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
-			domain_memory_add_nochk(old_domid,
+			domain_memory_add_nochk(conn, old_domid,
 						old_acc.memory + key->dsize);
 		return ret;
 	}
 
 	/* TDB should set errno, but doesn't even set ecode AFAICT. */
 	if (tdb_store(tdb_ctx, *key, *data, TDB_REPLACE) != 0) {
-		domain_memory_add_nochk(new_domid, -data->dsize - key->dsize);
+		domain_memory_add_nochk(conn, new_domid,
+					-data->dsize - key->dsize);
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
-			domain_memory_add_nochk(old_domid,
+			domain_memory_add_nochk(conn, old_domid,
 						old_acc.memory + key->dsize);
 		errno = EIO;
 		return errno;
@@ -683,7 +685,7 @@ int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 
 	if (acc->memory) {
 		domid = get_acc_domid(conn, key, acc->domid);
-		domain_memory_add_nochk(domid, -acc->memory - key->dsize);
+		domain_memory_add_nochk(conn, domid, -acc->memory - key->dsize);
 	}
 
 	return 0;
@@ -1055,11 +1057,13 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 	if (len <= DEFAULT_BUFFER_SIZE) {
 		bdata->buffer = bdata->default_buffer;
 		/* Don't check quota, path might be used for returning error. */
-		domain_memory_add_nochk(conn->id, len + sizeof(bdata->hdr));
+		domain_memory_add_nochk(conn, conn->id,
+					len + sizeof(bdata->hdr));
 	} else {
 		bdata->buffer = talloc_array(bdata, char, len);
 		if (!bdata->buffer ||
-		    domain_memory_add_chk(conn->id, len + sizeof(bdata->hdr))) {
+		    domain_memory_add_chk(conn, conn->id,
+					  len + sizeof(bdata->hdr))) {
 			send_error(conn, ENOMEM);
 			return;
 		}
@@ -1124,7 +1128,7 @@ void send_event(struct buffered_data *req, struct connection *conn,
 		}
 	}
 
-	if (domain_memory_add_chk(conn->id, len + sizeof(bdata->hdr))) {
+	if (domain_memory_add_chk(conn, conn->id, len + sizeof(bdata->hdr))) {
 		talloc_free(bdata);
 		return;
 	}
@@ -3326,7 +3330,7 @@ static void add_buffered_data(struct buffered_data *bdata,
 	 * be smaller. So ignore it. The limit will be applied for any resource
 	 * after the state has been fully restored.
 	 */
-	domain_memory_add_nochk(conn->id, len + sizeof(bdata->hdr));
+	domain_memory_add_nochk(conn, conn->id, len + sizeof(bdata->hdr));
 }
 
 void read_state_buffered_data(const void *ctx, struct connection *conn,
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index e59e40088e..7770c4f395 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1236,7 +1236,8 @@ static bool domain_chk_quota(struct domain *domain, int mem)
 	return false;
 }
 
-int domain_memory_add(unsigned int domid, int mem, bool no_quota_check)
+int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
+		      bool no_quota_check)
 {
 	struct domain *domain;
 
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 466549709f..b94548fd7d 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -81,25 +81,29 @@ int domain_nbentry_inc(struct connection *conn, unsigned int domid);
 int domain_nbentry_dec(struct connection *conn, unsigned int domid);
 int domain_nbentry_fix(unsigned int domid, int num, bool update);
 unsigned int domain_nbentry(struct connection *conn);
-int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
+int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
+		      bool no_quota_check);
 
 /*
  * domain_memory_add_chk(): to be used when memory quota should be checked.
  * Not to be used when specifying a negative mem value, as lowering the used
  * memory should always be allowed.
  */
-static inline int domain_memory_add_chk(unsigned int domid, int mem)
+static inline int domain_memory_add_chk(struct connection *conn,
+					unsigned int domid, int mem)
 {
-	return domain_memory_add(domid, mem, false);
+	return domain_memory_add(conn, domid, mem, false);
 }
+
 /*
  * domain_memory_add_nochk(): to be used when memory quota should not be
  * checked, e.g. when lowering memory usage, or in an error case for undoing
  * a previous memory adjustment.
  */
-static inline void domain_memory_add_nochk(unsigned int domid, int mem)
+static inline void domain_memory_add_nochk(struct connection *conn,
+					   unsigned int domid, int mem)
 {
-	domain_memory_add(domid, mem, true);
+	domain_memory_add(conn, domid, mem, true);
 }
 void domain_watch_inc(struct connection *conn);
 void domain_watch_dec(struct connection *conn);
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 8ad0229df6..e30cd89be3 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -199,7 +199,7 @@ static struct watch *add_watch(struct connection *conn, char *path, char *token,
 	watch->token = talloc_strdup(watch, token);
 	if (!watch->node || !watch->token)
 		goto nomem;
-	if (domain_memory_add(conn->id, strlen(path) + strlen(token),
+	if (domain_memory_add(conn, conn->id, strlen(path) + strlen(token),
 			      no_quota_check))
 		goto nomem;
 
@@ -274,8 +274,9 @@ int do_unwatch(const void *ctx, struct connection *conn,
 	list_for_each_entry(watch, &conn->watches, list) {
 		if (streq(watch->node, node) && streq(watch->token, vec[1])) {
 			list_del(&watch->list);
-			domain_memory_add_nochk(conn->id, -strlen(watch->node) -
-							  strlen(watch->token));
+			domain_memory_add_nochk(conn, conn->id,
+						-strlen(watch->node) -
+						strlen(watch->token));
 			talloc_free(watch);
 			domain_watch_dec(conn);
 			send_ack(conn, XS_UNWATCH);
@@ -291,8 +292,8 @@ void conn_delete_all_watches(struct connection *conn)
 
 	while ((watch = list_top(&conn->watches, struct watch, list))) {
 		list_del(&watch->list);
-		domain_memory_add_nochk(conn->id, -strlen(watch->node) -
-						  strlen(watch->token));
+		domain_memory_add_nochk(conn, conn->id, -strlen(watch->node) -
+							strlen(watch->token));
 		talloc_free(watch);
 		domain_watch_dec(conn);
 	}
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:48:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531408.827075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzM6-0006yY-Bf; Mon, 08 May 2023 11:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531408.827075; Mon, 08 May 2023 11:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzM6-0006yP-8K; Mon, 08 May 2023 11:48:38 +0000
Received: by outflank-mailman (input) for mailman id 531408;
 Mon, 08 May 2023 11:48:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvzM5-00040G-Sw
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:48:37 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 43e66c34-ed96-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 13:48:36 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id BBEA61FE49;
 Mon,  8 May 2023 11:48:35 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 925FB1346B;
 Mon,  8 May 2023 11:48:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id aatfIpPhWGQzNwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:48:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43e66c34-ed96-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683546515; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QRtofhElFKAPAVu2ut+tgG7Lb4CqCOPPZR3Nz1CeFJk=;
	b=l5YIgBheeU6YqvxCaW6EO4JfCt0K0poacuMCaXfy1glSNuSsX9P7iz0jrNf6Z0AWO4BG3P
	t10T4GfrPopYt/QQIwp2JF3wIYwNIk1uhRbbIO3Q//G5F/aKL/5mQ9N7ErpzOVVk/dSGl9
	XuJbuWoUk0nhUmBbpxFxfOlJj2b+dek=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 07/14] tools/xenstore: use accounting data array for per-domain values
Date: Mon,  8 May 2023 13:47:47 +0200
Message-Id: <20230508114754.31514-8-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the accounting of per-domain usage of Xenstore memory, watches, and
outstanding requests to the array-based mechanism.
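[Editorial note: a minimal standalone sketch of the array-based accounting idea this patch adopts, not code from the patch itself. The enum values mirror the patch; the struct and helper are simplified stand-ins for struct domain and domain_acc_add().]

```c
#include <assert.h>

/* One array slot per accounting item replaces the separate "memory",
 * "nbwatch" and "nboutstanding" struct members. */
enum accitem { ACC_WATCH, ACC_OUTST, ACC_MEM, ACC_N };

struct domain_sketch {
	unsigned int acc[ACC_N];	/* one counter per accounting item */
};

/* Add "add" to counter "what" and return the new value; add == 0 turns
 * the call into a pure read, which is how the patch implements the
 * various domain_*_inc()/dec()/get() helpers with a single function. */
static int acc_add(struct domain_sketch *d, enum accitem what, int add)
{
	d->acc[what] += add;
	return (int)d->acc[what];
}
```

With this shape, domain_watch_inc() becomes acc_add(d, ACC_WATCH, 1) and reading the current watch count becomes acc_add(d, ACC_WATCH, 0).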

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V5:
- drop domid parameter from domain_outstanding_inc() (Julien Grall)
---
 tools/xenstore/xenstored_core.c   |   4 +-
 tools/xenstore/xenstored_domain.c | 109 +++++++++++-------------------
 tools/xenstore/xenstored_domain.h |   8 ++-
 3 files changed, 48 insertions(+), 73 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 4d1debeba1..e7f86f9487 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -255,7 +255,7 @@ static void free_buffered_data(struct buffered_data *out,
 			req->pend.ref.event_cnt--;
 			if (!req->pend.ref.event_cnt && !req->on_out_list) {
 				if (req->on_ref_list) {
-					domain_outstanding_domid_dec(
+					domain_outstanding_dec(conn,
 						req->pend.ref.domid);
 					list_del(&req->list);
 				}
@@ -271,7 +271,7 @@ static void free_buffered_data(struct buffered_data *out,
 		out->on_ref_list = true;
 		return;
 	} else
-		domain_outstanding_dec(conn);
+		domain_outstanding_dec(conn, conn->id);
 
 	talloc_free(out);
 }
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 7770c4f395..a35ed97fd7 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -72,19 +72,12 @@ struct domain
 	/* Accounting data for this domain. */
 	unsigned int acc[ACC_N];
 
-	/* Amount of memory allocated for this domain. */
-	int memory;
+	/* Memory quota data for this domain. */
 	bool soft_quota_reported;
 	bool hard_quota_reported;
 	time_t mem_last_msg;
 #define MEM_WARN_MINTIME_SEC 10
 
-	/* number of watch for this domain */
-	int nbwatch;
-
-	/* Number of outstanding requests. */
-	int nboutstanding;
-
 	/* write rate limit */
 	wrl_creditt wrl_credit; /* [ -wrl_config_writecost, +_dburst ] */
 	struct wrl_timestampt wrl_timestamp;
@@ -200,14 +193,15 @@ static bool domain_can_write(struct connection *conn)
 
 static bool domain_can_read(struct connection *conn)
 {
-	struct xenstore_domain_interface *intf = conn->domain->interface;
+	struct domain *domain = conn->domain;
+	struct xenstore_domain_interface *intf = domain->interface;
 
 	if (domain_is_unprivileged(conn)) {
-		if (conn->domain->wrl_credit < 0)
+		if (domain->wrl_credit < 0)
 			return false;
-		if (conn->domain->nboutstanding >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST] >= quota_req_outstanding)
 			return false;
-		if (conn->domain->memory >= quota_memory_per_domain_hard &&
+		if (domain->acc[ACC_MEM] >= quota_memory_per_domain_hard &&
 		    quota_memory_per_domain_hard)
 			return false;
 	}
@@ -438,10 +432,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 
 	ent(nodes, d->acc[ACC_NODES]);
-	ent(watches, d->nbwatch);
+	ent(watches, d->acc[ACC_WATCH]);
 	ent(transactions, ta);
-	ent(outstanding, d->nboutstanding);
-	ent(memory, d->memory);
+	ent(outstanding, d->acc[ACC_OUTST]);
+	ent(memory, d->acc[ACC_MEM]);
 
 #undef ent
 
@@ -1187,14 +1181,16 @@ unsigned int domain_nbentry(struct connection *conn)
 	       ? domain_acc_add(conn, conn->id, ACC_NODES, 0, true) : 0;
 }
 
-static bool domain_chk_quota(struct domain *domain, int mem)
+static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 {
 	time_t now;
+	struct domain *domain;
 
-	if (!domain || !domid_is_unprivileged(domain->domid) ||
-	    (domain->conn && domain->conn->is_ignored))
+	if (!conn || !domid_is_unprivileged(conn->id) ||
+	    conn->is_ignored)
 		return false;
 
+	domain = conn->domain;
 	now = time(NULL);
 
 	if (mem >= quota_memory_per_domain_hard &&
@@ -1239,80 +1235,57 @@ static bool domain_chk_quota(struct domain *domain, int mem)
 int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
 		      bool no_quota_check)
 {
-	struct domain *domain;
+	int ret;
 
-	domain = find_domain_struct(domid);
-	if (domain) {
-		/*
-		 * domain_chk_quota() will print warning and also store whether
-		 * the soft/hard quota has been hit. So check no_quota_check
-		 * *after*.
-		 */
-		if (domain_chk_quota(domain, domain->memory + mem) &&
-		    !no_quota_check)
-			return ENOMEM;
-		domain->memory += mem;
-	} else {
-		/*
-		 * The domain the memory is to be accounted for should always
-		 * exist, as accounting is done either for a domain related to
-		 * the current connection, or for the domain owning a node
-		 * (which is always existing, as the owner of the node is
-		 * tested to exist and deleted or replaced by domid 0 if not).
-		 * So not finding the related domain MUST be an error in the
-		 * data base.
-		 */
-		errno = ENOENT;
-		corrupt(NULL, "Accounting called for non-existing domain %u\n",
-			domid);
-		return ENOENT;
-	}
+	ret = domain_acc_add(conn, domid, ACC_MEM, 0, true);
+	if (ret < 0)
+		return -ret;
+
+	/*
+	 * domain_chk_quota() will print warning and also store whether the
+	 * soft/hard quota has been hit. So check no_quota_check *after*.
+	 */
+	if (domain_chk_quota(conn, ret + mem) && !no_quota_check)
+		return ENOMEM;
+
+	/*
+	 * The domain the memory is to be accounted for should always exist,
+	 * as accounting is done either for a domain related to the current
+	 * connection, or for the domain owning a node (which is always
+	 * existing, as the owner of the node is tested to exist and deleted
+	 * or replaced by domid 0 if not).
+	 * So not finding the related domain MUST be an error in the data base.
+	 */
+	domain_acc_add(conn, domid, ACC_MEM, mem, true);
 
 	return 0;
 }
 
 void domain_watch_inc(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nbwatch++;
+	domain_acc_add(conn, conn->id, ACC_WATCH, 1, true);
 }
 
 void domain_watch_dec(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	if (conn->domain->nbwatch)
-		conn->domain->nbwatch--;
+	domain_acc_add(conn, conn->id, ACC_WATCH, -1, true);
 }
 
 int domain_watch(struct connection *conn)
 {
 	return (domain_is_unprivileged(conn))
-		? conn->domain->nbwatch
+		? domain_acc_add(conn, conn->id, ACC_WATCH, 0, true)
 		: 0;
 }
 
 void domain_outstanding_inc(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nboutstanding++;
+	domain_acc_add(conn, conn->id, ACC_OUTST, 1, true);
 }
 
-void domain_outstanding_dec(struct connection *conn)
+void domain_outstanding_dec(struct connection *conn, unsigned int domid)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nboutstanding--;
-}
-
-void domain_outstanding_domid_dec(unsigned int domid)
-{
-	struct domain *d = find_domain_by_domid(domid);
-
-	if (d)
-		d->nboutstanding--;
+	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
 }
 
 static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index b94548fd7d..086133407b 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -29,7 +29,10 @@ enum accitem {
 	ACC_REQ_N,		/* Number of elements per request. */
 	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
 	ACC_CHD_N = ACC_TR_N,	/* max(ACC_REQ_N, ACC_TR_N), for changed dom. */
-	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
+	ACC_WATCH = ACC_TR_N,
+	ACC_OUTST,
+	ACC_MEM,
+	ACC_N,			/* Number of elements per domain. */
 };
 
 void handle_event(void);
@@ -109,8 +112,7 @@ void domain_watch_inc(struct connection *conn);
 void domain_watch_dec(struct connection *conn);
 int domain_watch(struct connection *conn);
 void domain_outstanding_inc(struct connection *conn);
-void domain_outstanding_dec(struct connection *conn);
-void domain_outstanding_domid_dec(unsigned int domid);
+void domain_outstanding_dec(struct connection *conn, unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:48:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531415.827085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzMB-0007S9-L8; Mon, 08 May 2023 11:48:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531415.827085; Mon, 08 May 2023 11:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzMB-0007S0-Hc; Mon, 08 May 2023 11:48:43 +0000
Received: by outflank-mailman (input) for mailman id 531415;
 Mon, 08 May 2023 11:48:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvzMA-0004FA-BZ
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:48:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 474ba90e-ed96-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 13:48:41 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 687D81FE43;
 Mon,  8 May 2023 11:48:41 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 2AD791346B;
 Mon,  8 May 2023 11:48:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id IpQiCZnhWGRJNwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:48:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 474ba90e-ed96-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683546521; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HoO5vhLxJfHeUF6CaN4B5hMFR55WKl59O7dl7jFaRfo=;
	b=Yz3s02wIMFCTIEwsPY87/4US/J/9IK8jK5QOI19zTpnkV5jTVuuwlHFacpPhig//JplOeA
	+2UgwIzGy6D00Eh2C19Udvu9LoX0bkWQNaQTzkRujwPMJtb9i02aAOWU/pDNJYz/eQvoih
	xIWnQ1eZcXqhAgqu+4Ug8dc78QTZ33I=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v5 08/14] tools/xenstore: add accounting trace support
Date: Mon,  8 May 2023 13:47:48 +0200
Message-Id: <20230508114754.31514-9-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new trace switch "acc" and the related trace calls.

The "acc" switch is off by default.
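[Editorial note: a small sketch, assuming only what the patch shows, of how a trace switch name maps to its flag bit: each entry of trace_switches[] corresponds to bit (1u << index) of trace_flags, so "acc" at index 3 yields TRACE_ACC == 0x00000008. The lookup helper below is illustrative, not the patch's set_trace_switch().]

```c
#include <string.h>

#define TRACE_OBJ	0x00000001
#define TRACE_IO	0x00000002
#define TRACE_WRL	0x00000004
#define TRACE_ACC	0x00000008

/* Sorted by bit values of TRACE_* flags, as in the patch. */
static const char *const switches[] = { "obj", "io", "wrl", "acc", NULL };

/* Return the flag bit for a switch name, or 0 if the name is unknown. */
static unsigned int switch_to_flag(const char *name)
{
	unsigned int i;

	for (i = 0; switches[i]; i++)
		if (!strcmp(switches[i], name))
			return 1u << i;
	return 0;
}
```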

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c   |  2 +-
 tools/xenstore/xenstored_core.h   |  1 +
 tools/xenstore/xenstored_domain.c | 10 ++++++++++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index e7f86f9487..15654730a6 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2750,7 +2750,7 @@ static void set_quota(const char *arg, bool soft)
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
 const char *const trace_switches[] = {
-	"obj", "io", "wrl",
+	"obj", "io", "wrl", "acc",
 	NULL
 };
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 1f811f38cb..3e0734a6c6 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -302,6 +302,7 @@ extern unsigned int trace_flags;
 #define TRACE_OBJ	0x00000001
 #define TRACE_IO	0x00000002
 #define TRACE_WRL	0x00000004
+#define TRACE_ACC	0x00000008
 extern const char *const trace_switches[];
 int set_trace_switch(const char *arg);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index a35ed97fd7..03825ca24b 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -538,6 +538,12 @@ static struct domain *find_domain_by_domid(unsigned int domid)
 	return (d && d->introduced) ? d : NULL;
 }
 
+#define trace_acc(...)				\
+do {						\
+	if (trace_flags & TRACE_ACC)		\
+		trace("acc: " __VA_ARGS__);	\
+} while (0)
+
 int acc_fix_domains(struct list_head *head, bool chk_quota, bool update)
 {
 	struct changed_domain *cd;
@@ -601,6 +607,8 @@ static int acc_add_changed_dom(const void *ctx, struct list_head *head,
 		return 0;
 
 	errno = 0;
+	trace_acc("local change domid %u: what=%u %d add %d\n", domid, what,
+		  cd->acc[what], val);
 	cd->acc[what] += val;
 
 	return cd->acc[what];
@@ -1112,6 +1120,8 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 		return domain_acc_add_valid(d, what, ret);
 	}
 
+	trace_acc("global change domid %u: what=%u %u add %d\n", domid, what,
+		  d->acc[what], add);
 	d->acc[what] = domain_acc_add_valid(d, what, add);
 
 	return d->acc[what];
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:48:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:48:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531420.827094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzMN-0008Jp-W6; Mon, 08 May 2023 11:48:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531420.827094; Mon, 08 May 2023 11:48:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzMN-0008Ii-TJ; Mon, 08 May 2023 11:48:55 +0000
Received: by outflank-mailman (input) for mailman id 531420;
 Mon, 08 May 2023 11:48:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvzMM-00040G-IB
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:48:54 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4df9619e-ed96-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 13:48:52 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A482E1FE46;
 Mon,  8 May 2023 11:48:52 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 713EF1346B;
 Mon,  8 May 2023 11:48:52 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 6TECGqThWGRrNwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:48:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4df9619e-ed96-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683546532; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WygQN3u5bQxjz80/E369O7e/SKWPMMg7pxJNcjOOWFg=;
	b=mvlYw5GR4eaquZyEsApbJk1NuKRWSUXDggk7PfmrKXIZZygpP9dh8q6X9ic/dw1zp6BvhV
	IVNf8T6aJLqpgXRtrIiyHROxWYy3ESsNET0ASkNexOJsFHiGTUZB1mXUTUlGvj4qtrEnHV
	ZEEit0tOUdX5l545/bDkQo3EFzy0lRA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 10/14] tools/xenstore: switch transaction accounting to generic accounting
Date: Mon,  8 May 2023 13:47:50 +0200
Message-Id: <20230508114754.31514-11-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As transaction accounting is active for unprivileged domains only, it
can easily be added to the generic per-domain accounting.
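[Editorial note: a self-contained sketch of the V5 change of deriving "no transaction active" from list_empty(&conn->transaction_list) rather than a separate transaction_started counter. The tiny list implementation below merely mimics the list.h primitives xenstored uses; it is not the project's list.h.]

```c
#include <stdbool.h>

struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }
static bool list_empty(const struct list_head *h) { return h->next == h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}
```

The membership of the list itself is now the single source of truth: adding the first transaction makes list_empty() false (the point at which ta_start_time is set), and deleting the last one makes it true again, with no counter to keep in sync.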

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V5:
- use list_empty(&conn->transaction_list) for detection of "no
  transaction active" (Julien Grall)
---
 tools/xenstore/xenstored_core.c        |  3 +--
 tools/xenstore/xenstored_core.h        |  1 -
 tools/xenstore/xenstored_domain.c      | 21 ++++++++++++++++++---
 tools/xenstore/xenstored_domain.h      |  4 ++++
 tools/xenstore/xenstored_transaction.c | 14 ++++++--------
 5 files changed, 29 insertions(+), 14 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 3caf9e45dc..c98d30561f 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2087,7 +2087,7 @@ static void consider_message(struct connection *conn)
 	 * stalled. This will ignore new requests until Live-Update happened
 	 * or it was aborted.
 	 */
-	if (lu_is_pending() && conn->transaction_started == 0 &&
+	if (lu_is_pending() && list_empty(&conn->transaction_list) &&
 	    conn->in->hdr.msg.type == XS_TRANSACTION_START) {
 		trace("Delaying transaction start for connection %p req_id %u\n",
 		      conn, conn->in->hdr.msg.req_id);
@@ -2194,7 +2194,6 @@ struct connection *new_connection(const struct interface_funcs *funcs)
 	new->funcs = funcs;
 	new->is_ignored = false;
 	new->is_stalled = false;
-	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
 	INIT_LIST_HEAD(&new->acc_list);
 	INIT_LIST_HEAD(&new->ref_list);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 5a11dc1231..3564d85d7d 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -151,7 +151,6 @@ struct connection
 	/* List of in-progress transactions. */
 	struct list_head transaction_list;
 	uint32_t next_transaction_id;
-	unsigned int transaction_started;
 	time_t ta_start_time;
 
 	/* List of delayed requests. */
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 03825ca24b..25c6d20036 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -417,12 +417,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 {
 	struct domain *d = find_domain_struct(domid);
 	char *resp;
-	int ta;
 
 	if (!d)
 		return ENOENT;
 
-	ta = d->conn ? d->conn->transaction_started : 0;
 	resp = talloc_asprintf(ctx, "Domain %u:\n", domid);
 	if (!resp)
 		return ENOMEM;
@@ -433,7 +431,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 
 	ent(nodes, d->acc[ACC_NODES]);
 	ent(watches, d->acc[ACC_WATCH]);
-	ent(transactions, ta);
+	ent(transactions, d->acc[ACC_TRANS]);
 	ent(outstanding, d->acc[ACC_OUTST]);
 	ent(memory, d->acc[ACC_MEM]);
 
@@ -1298,6 +1296,23 @@ void domain_outstanding_dec(struct connection *conn, unsigned int domid)
 	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
 }
 
+void domain_transaction_inc(struct connection *conn)
+{
+	domain_acc_add(conn, conn->id, ACC_TRANS, 1, true);
+}
+
+void domain_transaction_dec(struct connection *conn)
+{
+	domain_acc_add(conn, conn->id, ACC_TRANS, -1, true);
+}
+
+unsigned int domain_transaction_get(struct connection *conn)
+{
+	return (domain_is_unprivileged(conn))
+		? domain_acc_add(conn, conn->id, ACC_TRANS, 0, true)
+		: 0;
+}
+
 static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
 static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
 static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 086133407b..01b6f1861b 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -32,6 +32,7 @@ enum accitem {
 	ACC_WATCH = ACC_TR_N,
 	ACC_OUTST,
 	ACC_MEM,
+	ACC_TRANS,
 	ACC_N,			/* Number of elements per domain. */
 };
 
@@ -113,6 +114,9 @@ void domain_watch_dec(struct connection *conn);
 int domain_watch(struct connection *conn);
 void domain_outstanding_inc(struct connection *conn);
 void domain_outstanding_dec(struct connection *conn, unsigned int domid);
+void domain_transaction_inc(struct connection *conn);
+void domain_transaction_dec(struct connection *conn);
+unsigned int domain_transaction_get(struct connection *conn);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 11c8bcec84..b9e9d76a1f 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -479,8 +479,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (conn->transaction)
 		return EBUSY;
 
-	if (domain_is_unprivileged(conn) &&
-	    conn->transaction_started > quota_max_transaction)
+	if (domain_transaction_get(conn) > quota_max_transaction)
 		return ENOSPC;
 
 	/* Attach transaction to ctx for autofree until it's complete */
@@ -502,12 +501,12 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	} while (!IS_ERR(exists));
 
 	/* Now we own it. */
+	if (list_empty(&conn->transaction_list))
+		conn->ta_start_time = time(NULL);
 	list_add_tail(&trans->list, &conn->transaction_list);
 	talloc_steal(conn, trans);
 	talloc_set_destructor(trans, destroy_transaction);
-	if (!conn->transaction_started)
-		conn->ta_start_time = time(NULL);
-	conn->transaction_started++;
+	domain_transaction_inc(conn);
 	wrl_ntransactions++;
 
 	snprintf(id_str, sizeof(id_str), "%u", trans->id);
@@ -533,8 +532,8 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 
 	conn->transaction = NULL;
 	list_del(&trans->list);
-	conn->transaction_started--;
-	if (!conn->transaction_started)
+	domain_transaction_dec(conn);
+	if (list_empty(&conn->transaction_list))
 		conn->ta_start_time = 0;
 
 	chk_quota = trans->node_created && domain_is_unprivileged(conn);
@@ -588,7 +587,6 @@ void conn_delete_all_transactions(struct connection *conn)
 
 	assert(conn->transaction == NULL);
 
-	conn->transaction_started = 0;
 	conn->ta_start_time = 0;
 }
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:56:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:56:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531426.827105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzTO-00026H-OE; Mon, 08 May 2023 11:56:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531426.827105; Mon, 08 May 2023 11:56:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzTO-00026A-L9; Mon, 08 May 2023 11:56:10 +0000
Received: by outflank-mailman (input) for mailman id 531426;
 Mon, 08 May 2023 11:56:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pvzTN-000264-98
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:56:09 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20602.outbound.protection.outlook.com
 [2a01:111:f400:fe13::602])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5079e5b5-ed97-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 13:56:06 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7844.eurprd04.prod.outlook.com (2603:10a6:20b:236::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 11:56:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 11:56:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5079e5b5-ed97-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HpGmM0UUgZrEV38xVmJtBOIR1PduVJ8W/4cqJNJJ2yRxzoLt5BVR9G0KuDz5QaFio8W192vv57CghefcYc+VstJ9/DoK1e4XlV6TlyVCrJpp9uyFBjLH5tcMpYZsui1wyVqMvwL+YpOUFTYEalpR3w1LlpELLkcOdazHtdNeQsctxjvDluelwZghRnIWTEPb57bttyyJcAbeOxK+dp/SBSellWFpNJXZIFsVGWIZHexttqsRkCmD697ikUr4376OCmRB4Fn0nPQBdmIetyTSlNegyDzGPkVPqGguJY8WyPYVXvvkEMPrFpDSnSsX3ccUPVs5rjcKLT5XinYju5x4gA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=L9NBcAHi0ygGzgQqrKQSGrRdSqMpi5l15+rO1KHm1tQ=;
 b=cF5aTCjasSGNb1cTTw9s+E/t3Y7dCAey7lgOvWHuHFG4T998Hpk40IvM2OQSLbHBQGaRsjTQp4UXu0upRlOHjT6YpPG0BNc/DwhlDWIAx322PULxBE2CvjJ+iMbkKA2P9mE1pfSHeTBMEccf8mo1R520qSmI9VqNw3QdXGMLLlX0YKEWqzrcT9Bdk1y1z7sGgRoSUaFQYTdQqAs6MYQ8eKKHmS+5ME+S8GkA4rjv3KXZ3gt8DLYjn7eFIQeGmlg/nAfFWLo19F+7hYb2N2idaTTIFghTr4IWuFKheW1yTx6lzwdIxdEZ+nftppoetN6YBxlGL+GD0OT49iAuXvfAIA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=L9NBcAHi0ygGzgQqrKQSGrRdSqMpi5l15+rO1KHm1tQ=;
 b=AKHQT4cq39zSYN/n9aAcs3dFIzneUR9/09tRzw1rH0vvj0htoG5cWjAzeyz4pa3EexQRu/rdMOfXEkQvkGkc6/Knjf5Jq5rnKQNKyyHH3QVmT8/zwwHd4acbGoo7K/OO5RWBgkLhhKnvEG6gSZzSBYgKSmYAEqYx/LgudKaKRwhHubCYZKLittipHrHn3M6ea/M09qWdJcDQqQCYU5S3i+18+Bvi5EWr2UnKf4p//KRySFA9K3jFbL89L15wo2ijfyLr5FcKwYgpKFVJ6Emo/KP8//4eatc3afzkXaH6cqKo6J5PvrjsyEwyCafep93KhoOKVa4fzWpdajbHCW7p/A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <46c46e0b-0fba-2f3f-6289-f68737a3d8d9@suse.com>
Date: Mon, 8 May 2023 13:56:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 13/14 RESEND] xenpm: Add set-cpufreq-hwp subcommand
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-14-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501193034.88575-14-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0135.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7844:EE_
X-MS-Office365-Filtering-Correlation-Id: fee3b09a-2cee-4c02-c211-08db4fbb32d0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fee3b09a-2cee-4c02-c211-08db4fbb32d0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 11:56:03.5553
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: M/a0WTVx6eRfsXY8CGCX4ygE797Cnps5lKZ5h0JzGKXbATqV5YNUbVyRO9fS5z8PkCH5qhlOzSIaoEydiZe+7g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7844

On 01.05.2023 21:30, Jason Andryuk wrote:
> @@ -67,6 +68,27 @@ void show_help(void)
>              " set-max-cstate        <num>|'unlimited' [<num2>|'unlimited']\n"
>              "                                     set the C-State limitation (<num> >= 0) and\n"
>              "                                     optionally the C-sub-state limitation (<num2> >= 0)\n"
> +            " set-cpufreq-hwp       [cpuid] [balance|performance|powersave] <param:val>*\n"
> +            "                                     set Hardware P-State (HWP) parameters\n"
> +            "                                     optionally a preset of one of\n"
> +            "                                       balance|performance|powersave\n"
> +            "                                     an optional list of param:val arguments\n"
> +            "                                       minimum:N  lowest ... highest\n"
> +            "                                       maximum:N  lowest ... highest\n"
> +            "                                       desired:N  lowest ... highest\n"

Personally I consider these three uses of "lowest ... highest" confusing:
It's not clear at all whether they're part of the option syntax or are
merely meant to express the allowable range for N (which I think they
are). Perhaps ...

> +            "                                           Set explicit performance target.\n"
> +            "                                           non-zero disables auto-HWP mode.\n"
> +            "                                       energy-perf:0-255 (or 0-15)\n"

..., also taking this into account:

            "                                       energy-perf:N (0-255 or 0-15)\n"

and then use parentheses as well for the earlier value range explanations
(and again below)?
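Concretely, the parenthesized-range style being suggested might look
like this (illustrative only; the leading alignment of the real help
text is abridged here):

```
  minimum:N      (N: lowest ... highest)
  maximum:N      (N: lowest ... highest)
  desired:N      (N: lowest ... highest)
  energy-perf:N  (N: 0-255 or 0-15)
```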

Also, from here onwards you suddenly start putting full stops at the
ends of lines. I guess you also want to be consistent in your use of
capital letters at the start of lines (I didn't go check how consistent
the pre-existing code is in this regard).

> @@ -1299,6 +1321,213 @@ void disable_turbo_mode(int argc, char *argv[])
>                  errno, strerror(errno));
>  }
>  
> +/*
> + * Parse activity_window:NNN{us,ms,s} and validate range.
> + *
> + * Activity window is a 7bit mantissa (0-127) with a 3bit exponent (0-7) base
> + * 10 in microseconds.  So the range is 1 microsecond to 1270 seconds.  A value
> + * of 0 lets the hardware autonomously select the window.
> + *
> + * Return 0 on success
> + *       -1 on error
> + */
> +static int parse_activity_window(xc_set_hwp_para_t *set_hwp, unsigned long u,
> +                                 const char *suffix)
> +{
> +    unsigned int exponent = 0;
> +    unsigned int multiplier = 1;
> +
> +    if ( suffix && suffix[0] )
> +    {
> +        if ( strcasecmp(suffix, "s") == 0 )
> +        {
> +            multiplier = 1000 * 1000;
> +            exponent = 6;
> +        }
> +        else if ( strcasecmp(suffix, "ms") == 0 )
> +        {
> +            multiplier = 1000;
> +            exponent = 3;
> +        }
> +        else if ( strcasecmp(suffix, "us") == 0 )
> +        {
> +            multiplier = 1;
> +            exponent = 0;
> +        }

Considering the initializers, this "else if" body isn't really needed,
and ...

> +        else

... instead this could become "else if ( strcmp() != 0 )".

Note also that I use strcmp() there - none of s, ms, or us is commonly
written in capital letters. (I wonder though whether μs shouldn't also
be recognized.)

> +        {
> +            fprintf(stderr, "invalid activity window units: \"%s\"\n", suffix);
> +
> +            return -1;
> +        }
> +    }
> +
> +    /* u * multiplier > 1270 * 1000 * 1000 transformed to avoid overflow. */
> +    if ( u > 1270 * 1000 * 1000 / multiplier )
> +    {
> +        fprintf(stderr, "activity window is too large\n");
> +
> +        return -1;
> +    }
> +
> +    /* looking for 7 bits of mantissa and 3 bits of exponent */
> +    while ( u > 127 )
> +    {
> +        u += 5; /* Round up to mitigate truncation rounding down
> +                   e.g. 128 -> 120 vs 128 -> 130. */
> +        u /= 10;
> +        exponent += 1;
> +    }
> +
> +    set_hwp->activity_window = (exponent & HWP_ACT_WINDOW_EXPONENT_MASK) <<
> +                                   HWP_ACT_WINDOW_EXPONENT_SHIFT |

The shift wants parenthesizing against the | and the shift amount wants
indenting slightly less. (Really this would want to be MASK_INSR().)

> +                               (u & HWP_ACT_WINDOW_MANTISSA_MASK);
> +    set_hwp->set_params |= XEN_SYSCTL_HWP_SET_ACT_WINDOW;
> +
> +    return 0;
> +}
> +
> +static int parse_hwp_opts(xc_set_hwp_para_t *set_hwp, int *cpuid,
> +                          int argc, char *argv[])
> +{
> +    int i = 0;
> +
> +    if ( argc < 1 ) {
> +        fprintf(stderr, "Missing arguments\n");
> +        return -1;
> +    }
> +
> +    if ( parse_cpuid_non_fatal(argv[i], cpuid) == 0 )
> +    {
> +        i++;
> +    }

I don't think you need the earlier patch and the separate helper:
Whether a CPU number is present can be told by checking
isdigit(argv[i][0]).

Also (nit) note how you're mixing brace placement throughout this
function.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 11:57:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:57:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531436.827125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzUu-0002lX-O7; Mon, 08 May 2023 11:57:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531436.827125; Mon, 08 May 2023 11:57:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzUu-0002j5-F6; Mon, 08 May 2023 11:57:44 +0000
Received: by outflank-mailman (input) for mailman id 531436;
 Mon, 08 May 2023 11:57:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvzMF-0004FA-N4
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:48:47 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4a9abe30-ed96-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 13:48:47 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id EF49B21D1E;
 Mon,  8 May 2023 11:48:46 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C037C1346B;
 Mon,  8 May 2023 11:48:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id 4nikLZ7hWGRbNwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:48:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a9abe30-ed96-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683546526; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mnbpAA3QXKNUb/jFQV/tu/cKYwrbS8V5R6aaB8qSn9w=;
	b=k1qGxR8D1soRWuKzMkIo7SCy0iOrTAzKydaicZsfSJ0WLR2XbBA+fbli+PcJ37z+8CMwWc
	p5dLNvt6OycJ7CkuPZUkeX0b8ZSFMF1gqhFG3gucESij+n7k0ca5kGXhMYaDMZhsWutWUJ
	NU5FBPgE/nx3LHn6PySQ2RiAGrmEJkg=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v5 09/14] tools/xenstore: add TDB access trace support
Date: Mon,  8 May 2023 13:47:49 +0200
Message-Id: <20230508114754.31514-10-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new trace switch "tdb" and the related trace calls.

The "tdb" switch is off by default.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c        | 8 +++++++-
 tools/xenstore/xenstored_core.h        | 7 +++++++
 tools/xenstore/xenstored_transaction.c | 7 ++++++-
 3 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 15654730a6..3caf9e45dc 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -589,6 +589,8 @@ static void get_acc_data(TDB_DATA *key, struct node_account_data *acc)
 		if (old_data.dptr == NULL) {
 			acc->memory = 0;
 		} else {
+			trace_tdb("read %s size %zu\n", key->dptr,
+				  old_data.dsize + key->dsize);
 			hdr = (void *)old_data.dptr;
 			acc->memory = old_data.dsize;
 			acc->domid = hdr->perms[0].id;
@@ -655,6 +657,7 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 		errno = EIO;
 		return errno;
 	}
+	trace_tdb("store %s size %zu\n", key->dptr, data->dsize + key->dsize);
 
 	if (acc) {
 		/* Don't use new_domid, as it might be a transaction node. */
@@ -682,6 +685,7 @@ int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 		errno = EIO;
 		return errno;
 	}
+	trace_tdb("delete %s\n", key->dptr);
 
 	if (acc->memory) {
 		domid = get_acc_domid(conn, key, acc->domid);
@@ -731,6 +735,8 @@ struct node *read_node(struct connection *conn, const void *ctx,
 		goto error;
 	}
 
+	trace_tdb("read %s size %zu\n", key.dptr, data.dsize + key.dsize);
+
 	node->parent = NULL;
 	talloc_steal(node, data.dptr);
 
@@ -2750,7 +2756,7 @@ static void set_quota(const char *arg, bool soft)
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
 const char *const trace_switches[] = {
-	"obj", "io", "wrl", "acc",
+	"obj", "io", "wrl", "acc", "tdb",
 	NULL
 };
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 3e0734a6c6..5a11dc1231 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -303,9 +303,16 @@ extern unsigned int trace_flags;
 #define TRACE_IO	0x00000002
 #define TRACE_WRL	0x00000004
 #define TRACE_ACC	0x00000008
+#define TRACE_TDB	0x00000010
 extern const char *const trace_switches[];
 int set_trace_switch(const char *arg);
 
+#define trace_tdb(...)				\
+do {						\
+	if (trace_flags & TRACE_TDB)		\
+		trace("tdb: " __VA_ARGS__);	\
+} while (0)
+
 extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
 extern int dom0_event;
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 2b15506953..11c8bcec84 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -374,8 +374,11 @@ static int finalize_transaction(struct connection *conn,
 				if (tdb_error(tdb_ctx) != TDB_ERR_NOEXIST)
 					return EIO;
 				gen = NO_GENERATION;
-			} else
+			} else {
+				trace_tdb("read %s size %zu\n", key.dptr,
+					  key.dsize + data.dsize);
 				gen = hdr->generation;
+			}
 			talloc_free(data.dptr);
 			if (i->generation != gen)
 				return EAGAIN;
@@ -399,6 +402,8 @@ static int finalize_transaction(struct connection *conn,
 			set_tdb_key(i->trans_name, &ta_key);
 			data = tdb_fetch(tdb_ctx, ta_key);
 			if (data.dptr) {
+				trace_tdb("read %s size %zu\n", ta_key.dptr,
+					  ta_key.dsize + data.dsize);
 				hdr = (void *)data.dptr;
 				hdr->generation = ++generation;
 				*is_corrupt |= do_tdb_write(conn, &key, &data,
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:57:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 11:57:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531434.827120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzUu-0002hl-Ds; Mon, 08 May 2023 11:57:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531434.827120; Mon, 08 May 2023 11:57:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pvzUu-0002h7-7A; Mon, 08 May 2023 11:57:44 +0000
Received: by outflank-mailman (input) for mailman id 531434;
 Mon, 08 May 2023 11:57:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pvzMY-00040G-8w
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 11:49:06 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 54e40935-ed96-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 13:49:04 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 434BC21CC2;
 Mon,  8 May 2023 11:49:04 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 1772D1346B;
 Mon,  8 May 2023 11:49:04 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id bsVJBLDhWGSDNwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 11:49:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54e40935-ed96-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683546544; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ogLX0B6M+gJkDXPcurjCLFiofA0y8aWlvBWQPF5qvcM=;
	b=V6Lyts8dgLZRyxGTgTpMKz66OzBRVvAH1I7MRu+X6/1dIq3Yor9W9kSsoz2VuXFfjqhvsR
	cBdtb9woXuq3GWKLoFTxclRzMXp6JtP6OTyavx4f0YlteV2DAdZOGsEADsCZWQBrpJ/Dmk
	MDmEGhVWPry+LGhX4P9DXE9WXAmY5c0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 12/14] tools/xenstore: use generic accounting for remaining quotas
Date: Mon,  8 May 2023 13:47:52 +0200
Message-Id: <20230508114754.31514-13-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The maxrequests, node size, number of node permissions, and path length
quotas are a little special, as they are either active in transactions
only (maxrequests), or they are per-item rather than count values.
Nevertheless, being able to know the maximum of those quota related
values per domain would be beneficial, so add them to the generic
accounting.

The per-domain value will never show current numbers other than zero,
but the maximum value seen can be gathered the same way as the number
of nodes during a transaction.

To be able to use the const qualifier in a new function, switch
domain_is_unprivileged() to take a const pointer, too.

For printing the quota/max values, adapt the print format string to
the longest quota name (now 17 characters long).

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V5:
- add comment (Julien Grall)
- add missing quota printing (Julien Grall)
---
 tools/xenstore/xenstored_core.c        | 15 +++++----
 tools/xenstore/xenstored_core.h        |  2 +-
 tools/xenstore/xenstored_domain.c      | 45 +++++++++++++++++++++-----
 tools/xenstore/xenstored_domain.h      |  6 ++++
 tools/xenstore/xenstored_transaction.c |  4 +--
 tools/xenstore/xenstored_watch.c       |  2 +-
 6 files changed, 55 insertions(+), 19 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index c98d30561f..fce73b883e 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -799,8 +799,9 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 		+ node->perms.num * sizeof(node->perms.p[0])
 		+ node->datalen + node->childlen;
 
-	if (!no_quota_check && domain_is_unprivileged(conn) &&
-	    data.dsize >= quota_max_entry_size) {
+	/* Call domain_max_chk() in any case in order to record max values. */
+	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
+	    && !no_quota_check) {
 		errno = ENOSPC;
 		return errno;
 	}
@@ -1170,7 +1171,7 @@ static bool valid_chars(const char *node)
 		       "0123456789-/_@") == strlen(node));
 }
 
-bool is_valid_nodename(const char *node)
+bool is_valid_nodename(const struct connection *conn, const char *node)
 {
 	int local_off = 0;
 	unsigned int domid;
@@ -1190,7 +1191,8 @@ bool is_valid_nodename(const char *node)
 	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
 		local_off = 0;
 
-	if (strlen(node) > local_off + quota_max_path_len)
+	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
+			   quota_max_path_len))
 		return false;
 
 	return valid_chars(node);
@@ -1252,7 +1254,7 @@ static struct node *get_node_canonicalized(struct connection *conn,
 	*canonical_name = canonicalize(conn, ctx, name);
 	if (!*canonical_name)
 		return NULL;
-	if (!is_valid_nodename(*canonical_name)) {
+	if (!is_valid_nodename(conn, *canonical_name)) {
 		errno = EINVAL;
 		return NULL;
 	}
@@ -1778,8 +1780,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EINVAL;
 
 	perms.num--;
-	if (domain_is_unprivileged(conn) &&
-	    perms.num > quota_nb_perms_per_node)
+	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
 		return ENOSPC;
 
 	permstr = in->buffer + strlen(in->buffer) + 1;
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 3564d85d7d..9339820156 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -258,7 +258,7 @@ void check_store(void);
 void corrupt(struct connection *conn, const char *fmt, ...);
 
 /* Is this a valid node name? */
-bool is_valid_nodename(const char *node);
+bool is_valid_nodename(const struct connection *conn, const char *node);
 
 /* Get name of parent node. */
 char *get_parent(const void *ctx, const char *node);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 6f3a27765a..b3a719569e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -431,7 +431,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u\n", #t, \
+	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u\n", #t, \
 				      d->acc[e].val, d->acc[e].max); \
 	if (!resp) return ENOMEM
 
@@ -440,6 +440,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	ent(transactions, ACC_TRANS);
 	ent(outstanding, ACC_OUTST);
 	ent(memory, ACC_MEM);
+	ent(transaction-nodes, ACC_TRANSNODES);
+	ent(node-permissions, ACC_NPERM);
+	ent(path-length, ACC_PATHLEN);
+	ent(node-size, ACC_NODESZ);
 
 #undef ent
 
@@ -457,7 +461,7 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
+	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
 				      acc_global_max[e]);         \
 	if (!resp) return ENOMEM
 
@@ -466,6 +470,10 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
 	ent(transactions, ACC_TRANS);
 	ent(outstanding, ACC_OUTST);
 	ent(memory, ACC_MEM);
+	ent(transaction-nodes, ACC_TRANSNODES);
+	ent(node-permissions, ACC_NPERM);
+	ent(path-length, ACC_PATHLEN);
+	ent(node-size, ACC_NODESZ);
 
 #undef ent
 
@@ -1079,12 +1087,22 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
+static void domain_acc_valid_max(struct domain *d, enum accitem what,
+				 unsigned int val)
+{
+	assert(what < ARRAY_SIZE(d->acc));
+	assert(what < ARRAY_SIZE(acc_global_max));
+
+	if (val > d->acc[what].max)
+		d->acc[what].max = val;
+	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
+		acc_global_max[what] = val;
+}
+
 static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 {
 	unsigned int val;
 
-	assert(what < ARRAY_SIZE(d->acc));
-
 	if ((add < 0 && -add > d->acc[what].val) ||
 	    (add > 0 && (INT_MAX - d->acc[what].val) < add)) {
 		/*
@@ -1098,10 +1116,7 @@ static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 	}
 
 	val = d->acc[what].val + add;
-	if (val > d->acc[what].max)
-		d->acc[what].max = val;
-	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
-		acc_global_max[what] = val;
+	domain_acc_valid_max(d, what, val);
 
 	return val;
 }
@@ -1222,6 +1237,20 @@ void domain_reset_global_acc(void)
 	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
 }
 
+bool domain_max_chk(const struct connection *conn, enum accitem what,
+		    unsigned int val, unsigned int quota)
+{
+	if (!conn || !conn->domain)
+		return false;
+
+	if (domain_is_unprivileged(conn) && val > quota)
+		return true;
+
+	domain_acc_valid_max(conn->domain, what, val);
+
+	return false;
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 416df25cb2..78ca434531 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -33,6 +33,10 @@ enum accitem {
 	ACC_OUTST,
 	ACC_MEM,
 	ACC_TRANS,
+	ACC_TRANSNODES,
+	ACC_NPERM,
+	ACC_PATHLEN,
+	ACC_NODESZ,
 	ACC_N,			/* Number of elements per domain. */
 };
 
@@ -129,6 +133,8 @@ void acc_drop(struct connection *conn);
 void acc_commit(struct connection *conn);
 int domain_max_global_acc(const void *ctx, struct connection *conn);
 void domain_reset_global_acc(void);
+bool domain_max_chk(const struct connection *conn, unsigned int what,
+		    unsigned int val, unsigned int quota);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index b9e9d76a1f..cecaf27018 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -252,8 +252,8 @@ int access_node(struct connection *conn, struct node *node,
 
 	i = find_accessed_node(trans, node->name);
 	if (!i) {
-		if (trans->nodes >= quota_trans_nodes &&
-		    domain_is_unprivileged(conn)) {
+		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1,
+				   quota_trans_nodes)) {
 			ret = ENOSPC;
 			goto err;
 		}
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index e30cd89be3..61b1e3421e 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -176,7 +176,7 @@ static int check_watch_path(struct connection *conn, const void *ctx,
 		*path = canonicalize(conn, ctx, *path);
 		if (!*path)
 			return errno;
-		if (!is_valid_nodename(*path))
+		if (!is_valid_nodename(conn, *path))
 			goto inval;
 	}
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:57:55 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v5 11/14] tools/xenstore: remember global and per domain max accounting values
Date: Mon,  8 May 2023 13:47:51 +0200
Message-Id: <20230508114754.31514-12-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Save the maximum values of the different accounting data seen per
domain and (for unprivileged domains) globally, and print those
values via the xenstore-control quota command. Add a sub-command for
resetting the global maximum values seen.

This should help when deciding how to set the related quotas.
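The max-tracking this patch adds to the accounting code can be sketched in isolation roughly as follows. This is a minimal standalone model of the pattern (clamp the delta, then record per-item and global maxima), not the xenstored code itself; the names `struct acc`, `acc_add()` and `acc_global_max` here are illustrative:

```c
#include <assert.h>
#include <limits.h>

/* Per-item accounting: current value plus the maximum ever seen. */
struct acc {
    unsigned int val;
    unsigned int max;
};

/* Global maximum for this item across all domains (model). */
static unsigned int acc_global_max;

/* Apply a signed delta, clamping instead of wrapping, and record maxima. */
static unsigned int acc_add(struct acc *a, int add)
{
    unsigned int val;

    /* Clamp on underflow/overflow, as the patch does (max not updated). */
    if ((add < 0 && (unsigned int)-add > a->val) ||
        (add > 0 && INT_MAX - a->val < (unsigned int)add)) {
        a->val = (add < 0) ? 0 : INT_MAX;
        return a->val;
    }

    val = a->val + add;
    if (val > a->max)
        a->max = val;
    if (val > acc_global_max)
        acc_global_max = val;
    a->val = val;
    return val;
}
```

Resetting the global maxima then amounts to zeroing `acc_global_max` and re-seeding it from each domain's current `val`, which is what the new `domain_reset_global_acc()` does via a hashtable walk.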

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 docs/misc/xenstore.txt             |   5 +-
 tools/xenstore/xenstored_control.c |  22 ++++++-
 tools/xenstore/xenstored_domain.c  | 100 +++++++++++++++++++++++------
 tools/xenstore/xenstored_domain.h  |   2 +
 4 files changed, 108 insertions(+), 21 deletions(-)

diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
index d807ef0709..38015835b1 100644
--- a/docs/misc/xenstore.txt
+++ b/docs/misc/xenstore.txt
@@ -426,7 +426,7 @@ CONTROL			<command>|[<parameters>|]
 	print|<string>
 		print <string> to syslog (xenstore runs as daemon) or
 		to console (xenstore runs as stubdom)
-	quota|[set <name> <val>|<domid>]
+	quota|[set <name> <val>|<domid>|max [-r]]
 		without parameters: print the current quota settings
 		with "set <name> <val>": set the quota <name> to new value
 		<val> (The admin should make sure all the domain usage is
@@ -435,6 +435,9 @@ CONTROL			<command>|[<parameters>|]
 		violating the new quota setting isn't increased further)
 		with "<domid>": print quota related accounting data for
 		the domain <domid>
+		with "max [-r]": show global per-domain maximum values of all
+		unprivileged domains, optionally reset the values by adding
+		"-r"
 	quota-soft|[set <name> <val>]
 		like the "quota" command, but for soft-quota.
 	help			<supported-commands>
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 403295788a..620a7da997 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -306,6 +306,22 @@ static int quota_get(const void *ctx, struct connection *conn,
 	return domain_get_quota(ctx, conn, atoi(vec[0]));
 }
 
+static int quota_max(const void *ctx, struct connection *conn,
+		     char **vec, int num)
+{
+	if (num > 1)
+		return EINVAL;
+
+	if (num == 1) {
+		if (!strcmp(vec[0], "-r"))
+			domain_reset_global_acc();
+		else
+			return EINVAL;
+	}
+
+	return domain_max_global_acc(ctx, conn);
+}
+
 static int do_control_quota(const void *ctx, struct connection *conn,
 			    char **vec, int num)
 {
@@ -315,6 +331,9 @@ static int do_control_quota(const void *ctx, struct connection *conn,
 	if (!strcmp(vec[0], "set"))
 		return quota_set(ctx, conn, vec + 1, num - 1, hard_quotas);
 
+	if (!strcmp(vec[0], "max"))
+		return quota_max(ctx, conn, vec + 1, num - 1);
+
 	return quota_get(ctx, conn, vec, num);
 }
 
@@ -978,7 +997,8 @@ static struct cmd_s cmds[] = {
 	{ "memreport", do_control_memreport, "[<file>]" },
 #endif
 	{ "print", do_control_print, "<string>" },
-	{ "quota", do_control_quota, "[set <name> <val>|<domid>]" },
+	{ "quota", do_control_quota,
+		"[set <name> <val>|<domid>|max [-r]]" },
 	{ "quota-soft", do_control_quota_s, "[set <name> <val>]" },
 	{ "help", do_control_help, "" },
 };
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 25c6d20036..6f3a27765a 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,6 +43,8 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
+static unsigned int acc_global_max[ACC_N];
+
 struct domain
 {
 	/* The id of this domain */
@@ -70,7 +72,10 @@ struct domain
 	bool introduced;
 
 	/* Accounting data for this domain. */
-	unsigned int acc[ACC_N];
+	struct acc {
+		unsigned int val;
+		unsigned int max;
+	} acc[ACC_N];
 
 	/* Memory quota data for this domain. */
 	bool soft_quota_reported;
@@ -199,9 +204,9 @@ static bool domain_can_read(struct connection *conn)
 	if (domain_is_unprivileged(conn)) {
 		if (domain->wrl_credit < 0)
 			return false;
-		if (domain->acc[ACC_OUTST] >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST].val >= quota_req_outstanding)
 			return false;
-		if (domain->acc[ACC_MEM] >= quota_memory_per_domain_hard &&
+		if (domain->acc[ACC_MEM].val >= quota_memory_per_domain_hard &&
 		    quota_memory_per_domain_hard)
 			return false;
 	}
@@ -264,7 +269,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		ret = WALK_TREE_SKIP_CHILDREN;
 	}
 
-	return domain->acc[ACC_NODES] ? ret : WALK_TREE_SUCCESS_STOP;
+	return domain->acc[ACC_NODES].val ? ret : WALK_TREE_SUCCESS_STOP;
 }
 
 static void domain_tree_remove(struct domain *domain)
@@ -272,7 +277,7 @@ static void domain_tree_remove(struct domain *domain)
 	int ret;
 	struct walk_funcs walkfuncs = { .enter = domain_tree_remove_sub };
 
-	if (domain->acc[ACC_NODES]) {
+	if (domain->acc[ACC_NODES].val) {
 		ret = walk_node_tree(domain, NULL, "/", &walkfuncs, domain);
 		if (ret == WALK_TREE_ERROR_STOP)
 			syslog(LOG_ERR,
@@ -426,14 +431,41 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8d\n", #t, e); \
+	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u)\n", #t, \
+				      d->acc[e].val, d->acc[e].max); \
+	if (!resp) return ENOMEM
+
+	ent(nodes, ACC_NODES);
+	ent(watches, ACC_WATCH);
+	ent(transactions, ACC_TRANS);
+	ent(outstanding, ACC_OUTST);
+	ent(memory, ACC_MEM);
+
+#undef ent
+
+	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
+
+	return 0;
+}
+
+int domain_max_global_acc(const void *ctx, struct connection *conn)
+{
+	char *resp;
+
+	resp = talloc_asprintf(ctx, "Max. seen accounting values:\n");
+	if (!resp)
+		return ENOMEM;
+
+#define ent(t, e) \
+	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
+				      acc_global_max[e]);         \
 	if (!resp) return ENOMEM
 
-	ent(nodes, d->acc[ACC_NODES]);
-	ent(watches, d->acc[ACC_WATCH]);
-	ent(transactions, d->acc[ACC_TRANS]);
-	ent(outstanding, d->acc[ACC_OUTST]);
-	ent(memory, d->acc[ACC_MEM]);
+	ent(nodes, ACC_NODES);
+	ent(watches, ACC_WATCH);
+	ent(transactions, ACC_TRANS);
+	ent(outstanding, ACC_OUTST);
+	ent(memory, ACC_MEM);
 
 #undef ent
 
@@ -1049,10 +1081,12 @@ int domain_adjust_node_perms(struct node *node)
 
 static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 {
+	unsigned int val;
+
 	assert(what < ARRAY_SIZE(d->acc));
 
-	if ((add < 0 && -add > d->acc[what]) ||
-	    (add > 0 && (INT_MAX - d->acc[what]) < add)) {
+	if ((add < 0 && -add > d->acc[what].val) ||
+	    (add > 0 && (INT_MAX - d->acc[what].val) < add)) {
 		/*
 		 * In a transaction when a node is being added/removed AND the
 		 * same node has been added/removed outside the transaction in
@@ -1063,7 +1097,13 @@ static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 		return (add < 0) ? 0 : INT_MAX;
 	}
 
-	return d->acc[what] + add;
+	val = d->acc[what].val + add;
+	if (val > d->acc[what].max)
+		d->acc[what].max = val;
+	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
+		acc_global_max[what] = val;
+
+	return val;
 }
 
 static int domain_acc_add(struct connection *conn, unsigned int domid,
@@ -1119,10 +1159,10 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 	}
 
 	trace_acc("global change domid %u: what=%u %u add %d\n", domid, what,
-		  d->acc[what], add);
-	d->acc[what] = domain_acc_add_valid(d, what, add);
+		  d->acc[what].val, add);
+	d->acc[what].val = domain_acc_add_valid(d, what, add);
 
-	return d->acc[what];
+	return d->acc[what].val;
 }
 
 void acc_drop(struct connection *conn)
@@ -1160,6 +1200,28 @@ void acc_commit(struct connection *conn)
 	conn->in = in;
 }
 
+static int domain_reset_global_acc_sub(const void *k, void *v, void *arg)
+{
+	struct domain *d = v;
+	unsigned int i;
+
+	for (i = 0; i < ACC_N; i++)
+		d->acc[i].max = d->acc[i].val;
+
+	return 0;
+}
+
+void domain_reset_global_acc(void)
+{
+	unsigned int i;
+
+	for (i = 0; i < ACC_N; i++)
+		acc_global_max[i] = 0;
+
+	/* Set current max values seen. */
+	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
@@ -1660,7 +1722,7 @@ static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
 	 * If everything is correct incrementing the value for each node will
 	 * result in dom->nodes being 0 at the end.
 	 */
-	dom->nodes = -d->acc[ACC_NODES];
+	dom->nodes = -d->acc[ACC_NODES].val;
 
 	if (!hashtable_insert(domains, &dom->domid, dom)) {
 		talloc_free(dom);
@@ -1715,7 +1777,7 @@ static int domain_check_acc_cb(const void *k, void *v, void *arg)
 	if (!d)
 		return 0;
 
-	d->acc[ACC_NODES] += dom->nodes;
+	d->acc[ACC_NODES].val += dom->nodes;
 
 	return 0;
 }
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 01b6f1861b..416df25cb2 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -127,6 +127,8 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 int acc_fix_domains(struct list_head *head, bool chk_quota, bool update);
 void acc_drop(struct connection *conn);
 void acc_commit(struct connection *conn);
+int domain_max_global_acc(const void *ctx, struct connection *conn);
+void domain_reset_global_acc(void);
 
 /* Write rate limiting */
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:57:55 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 13/14] tools/xenstore: switch get_optval_int() to get_optval_uint()
Date: Mon,  8 May 2023 13:47:53 +0200
Message-Id: <20230508114754.31514-14-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Let get_optval_int() return an unsigned value and rename it
accordingly.
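The validation pattern used here is worth spelling out: `strtoul()` accepts a leading '-' and negates the result, but the resulting huge value then fails the `val > INT_MAX` check, so negative inputs are still rejected. A testable model of the same checks (returning -1 on error instead of calling barf(), purely for illustration):

```c
#include <limits.h>
#include <stdlib.h>

/*
 * Parse a decimal option value into [0, INT_MAX], rejecting empty
 * strings, trailing junk, and out-of-range values. Mirrors the
 * validation style of get_optval_uint(); the error return is changed
 * so the function can be unit-tested.
 */
static long parse_optval_uint(const char *arg)
{
    char *end;
    unsigned long val;

    val = strtoul(arg, &end, 10);
    if (!*arg || *end || val > INT_MAX)
        return -1;

    /* Note: strtoul() accepts "-5" and yields ULONG_MAX - 4; that
     * exceeds INT_MAX and is caught by the range check above. */
    return (long)val;
}
```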

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V5:
- new patch, carved out from next patch in series (Julien Grall)
---
 tools/xenstore/xenstored_core.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index fce73b883e..86ec7ab446 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2700,13 +2700,13 @@ int dom0_domid = 0;
 int dom0_event = 0;
 int priv_domid = 0;
 
-static int get_optval_int(const char *arg)
+static unsigned int get_optval_uint(const char *arg)
 {
 	char *end;
-	long val;
+	unsigned long val;
 
-	val = strtol(arg, &end, 10);
-	if (!*arg || *end || val < 0 || val > INT_MAX)
+	val = strtoul(arg, &end, 10);
+	if (!*arg || *end || val > INT_MAX)
 		barf("invalid parameter value \"%s\"\n", arg);
 
 	return val;
@@ -2726,7 +2726,7 @@ static void set_timeout(const char *arg)
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<seconds>\n");
-	val = get_optval_int(eq + 1);
+	val = get_optval_uint(eq + 1);
 	if (what_matches(arg, "watch-event"))
 		timeout_watch_event_msec = val * 1000;
 	else
@@ -2740,7 +2740,7 @@ static void set_quota(const char *arg, bool soft)
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<nb>\n");
-	val = get_optval_int(eq + 1);
+	val = get_optval_uint(eq + 1);
 	if (what_matches(arg, "outstanding") && !soft)
 		quota_req_outstanding = val;
 	else if (what_matches(arg, "transaction-nodes") && !soft)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:57:55 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v5 14/14] tools/xenstore: switch quota management to be table based
Date: Mon,  8 May 2023 13:47:54 +0200
Message-Id: <20230508114754.31514-15-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230508114754.31514-1-jgross@suse.com>
References: <20230508114754.31514-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of having individual quota variables, switch to a table-based
approach like the one used for the generic accounting. Include all
the related data in the same table and add accessor functions.

This enables using the command-line --quota parameter to set all
possible quota values, while keeping the previous parameters for
compatibility.
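The shape of the table-based approach can be sketched as below. This is a reduced, hypothetical model (a three-entry enum and a by-name setter); the real table indexes by the full `enum accitem` and the setter lives in do_control_quota()/set_quota():

```c
#include <string.h>

/* Reduced model of keying quotas by the accounting-item enum instead
 * of one global variable per quota. */
enum accitem { ACC_NODES, ACC_WATCH, ACC_TRANS, ACC_N };

struct quota {
    const char *name;    /* NULL: item has no settable quota */
    unsigned int val;
    const char *descr;
};

static struct quota hard_quotas[ACC_N] = {
    [ACC_NODES] = { "nodes",        1000, "Nodes per domain" },
    [ACC_WATCH] = { "watches",       128, "Watches per domain" },
    [ACC_TRANS] = { "transactions",   10, "Transactions per domain" },
};

/* Set a quota by name; returns 0 on success, -1 if unknown. */
static int quota_set_by_name(struct quota *q, const char *name,
                             unsigned int val)
{
    unsigned int i;

    for (i = 0; i < ACC_N; i++) {
        if (q[i].name && !strcmp(name, q[i].name)) {
            q[i].val = val;
            return 0;
        }
    }
    return -1;
}
```

The NULL-name guard is the reason for the what_matches() fix in V5: entries without a settable quota must be skipped rather than dereferenced.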

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
One further remark: it would be rather easy to add soft-quotas for all
the other quotas (similar to the memory one). These could be used as
an early warning of the need to raise a global quota.
V5:
- fix what_matches() (Julien Grall)
---
 tools/xenstore/xenstored_control.c     |  43 ++------
 tools/xenstore/xenstored_core.c        |  80 +++++++-------
 tools/xenstore/xenstored_core.h        |  10 --
 tools/xenstore/xenstored_domain.c      | 138 ++++++++++++++++---------
 tools/xenstore/xenstored_domain.h      |  12 ++-
 tools/xenstore/xenstored_transaction.c |   5 +-
 tools/xenstore/xenstored_watch.c       |   2 +-
 7 files changed, 152 insertions(+), 138 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 620a7da997..ed80924ee4 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -221,35 +221,6 @@ static int do_control_log(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-struct quota {
-	const char *name;
-	int *quota;
-	const char *descr;
-};
-
-static const struct quota hard_quotas[] = {
-	{ "nodes", &quota_nb_entry_per_domain, "Nodes per domain" },
-	{ "watches", &quota_nb_watch_per_domain, "Watches per domain" },
-	{ "transactions", &quota_max_transaction, "Transactions per domain" },
-	{ "outstanding", &quota_req_outstanding,
-		"Outstanding requests per domain" },
-	{ "transaction-nodes", &quota_trans_nodes,
-		"Max. number of accessed nodes per transaction" },
-	{ "memory", &quota_memory_per_domain_hard,
-		"Total Xenstore memory per domain (error level)" },
-	{ "node-size", &quota_max_entry_size, "Max. size of a node" },
-	{ "path-max", &quota_max_path_len, "Max. length of a node path" },
-	{ "permissions", &quota_nb_perms_per_node,
-		"Max. number of permissions per node" },
-	{ NULL, NULL, NULL }
-};
-
-static const struct quota soft_quotas[] = {
-	{ "memory", &quota_memory_per_domain_soft,
-		"Total Xenstore memory per domain (warning level)" },
-	{ NULL, NULL, NULL }
-};
-
 static int quota_show_current(const void *ctx, struct connection *conn,
 			      const struct quota *quotas)
 {
@@ -260,9 +231,11 @@ static int quota_show_current(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 
-	for (i = 0; quotas[i].quota; i++) {
+	for (i = 0; i < ACC_N; i++) {
+		if (!quotas[i].name)
+			continue;
 		resp = talloc_asprintf_append(resp, "%-17s: %8d %s\n",
-					      quotas[i].name, *quotas[i].quota,
+					      quotas[i].name, quotas[i].val,
 					      quotas[i].descr);
 		if (!resp)
 			return ENOMEM;
@@ -274,7 +247,7 @@ static int quota_show_current(const void *ctx, struct connection *conn,
 }
 
 static int quota_set(const void *ctx, struct connection *conn,
-		     char **vec, int num, const struct quota *quotas)
+		     char **vec, int num, struct quota *quotas)
 {
 	unsigned int i;
 	int val;
@@ -286,9 +259,9 @@ static int quota_set(const void *ctx, struct connection *conn,
 	if (val < 1)
 		return EINVAL;
 
-	for (i = 0; quotas[i].quota; i++) {
-		if (!strcmp(vec[0], quotas[i].name)) {
-			*quotas[i].quota = val;
+	for (i = 0; i < ACC_N; i++) {
+		if (quotas[i].name && !strcmp(vec[0], quotas[i].name)) {
+			quotas[i].val = val;
 			send_ack(conn, XS_CONTROL);
 			return 0;
 		}
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 86ec7ab446..4ebb75e5e9 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -89,17 +89,6 @@ unsigned int trace_flags = TRACE_OBJ | TRACE_IO;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
 
-int quota_nb_entry_per_domain = 1000;
-int quota_nb_watch_per_domain = 128;
-int quota_max_entry_size = 2048; /* 2K */
-int quota_max_transaction = 10;
-int quota_nb_perms_per_node = 5;
-int quota_trans_nodes = 1024;
-int quota_max_path_len = XENSTORE_REL_PATH_MAX;
-int quota_req_outstanding = 20;
-int quota_memory_per_domain_soft = 2 * 1024 * 1024; /* 2 MB */
-int quota_memory_per_domain_hard = 2 * 1024 * 1024 + 512 * 1024; /* 2.5 MB */
-
 unsigned int timeout_watch_event_msec = 20000;
 
 void trace(const char *fmt, ...)
@@ -800,8 +789,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 		+ node->datalen + node->childlen;
 
 	/* Call domain_max_chk() in any case in order to record max values. */
-	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
-	    && !no_quota_check) {
+	if (domain_max_chk(conn, ACC_NODESZ, data.dsize) && !no_quota_check) {
 		errno = ENOSPC;
 		return errno;
 	}
@@ -1191,8 +1179,7 @@ bool is_valid_nodename(const struct connection *conn, const char *node)
 	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
 		local_off = 0;
 
-	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
-			   quota_max_path_len))
+	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off))
 		return false;
 
 	return valid_chars(node);
@@ -1504,7 +1491,7 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 	for (i = node; i; i = i->parent) {
 		/* i->parent is set for each new node, so check quota. */
 		if (i->parent &&
-		    domain_nbentry(conn) >= quota_nb_entry_per_domain) {
+		    domain_nbentry(conn) >= hard_quotas[ACC_NODES].val) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -1780,7 +1767,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EINVAL;
 
 	perms.num--;
-	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
+	if (domain_max_chk(conn, ACC_NPERM, perms.num))
 		return ENOSPC;
 
 	permstr = in->buffer + strlen(in->buffer) + 1;
@@ -2649,7 +2636,16 @@ static void usage(void)
 "                          memory: total used memory per domain for nodes,\n"
 "                                  transactions, watches and requests, above\n"
 "                                  which Xenstore will stop talking to domain\n"
+"                          nodes: number nodes owned by a domain\n"
+"                          node-permissions: number of access permissions per\n"
+"                                            node\n"
+"                          node-size: total size of a node (permissions +\n"
+"                                     children names + content)\n"
 "                          outstanding: number of outstanding requests\n"
+"                          path-length: length of a node path\n"
+"                          transactions: number of concurrent transactions\n"
+"                                        per domain\n"
+"                          watches: number of watches per domain\n"
 "  -q, --quota-soft <what>=<nb> set a soft quota <what> to the value <nb>,\n"
 "                          causing a warning to be issued via syslog() if the\n"
 "                          limit is violated, allowed quotas are:\n"
@@ -2714,15 +2710,19 @@ static unsigned int get_optval_uint(const char *arg)
 
 static bool what_matches(const char *arg, const char *what)
 {
-	unsigned int what_len = strlen(what);
+	unsigned int what_len;
+
+	if (!what)
+		return false;
 
+	what_len = strlen(what);
 	return !strncmp(arg, what, what_len) && arg[what_len] == '=';
 }
 
 static void set_timeout(const char *arg)
 {
 	const char *eq = strchr(arg, '=');
-	int val;
+	unsigned int val;
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<seconds>\n");
@@ -2736,22 +2736,22 @@ static void set_timeout(const char *arg)
 static void set_quota(const char *arg, bool soft)
 {
 	const char *eq = strchr(arg, '=');
-	int val;
+	struct quota *q = soft ? soft_quotas : hard_quotas;
+	unsigned int val;
+	unsigned int i;
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<nb>\n");
 	val = get_optval_uint(eq + 1);
-	if (what_matches(arg, "outstanding") && !soft)
-		quota_req_outstanding = val;
-	else if (what_matches(arg, "transaction-nodes") && !soft)
-		quota_trans_nodes = val;
-	else if (what_matches(arg, "memory")) {
-		if (soft)
-			quota_memory_per_domain_soft = val;
-		else
-			quota_memory_per_domain_hard = val;
-	} else
-		barf("unknown quota \"%s\"\n", arg);
+
+	for (i = 0; i < ACC_N; i++) {
+		if (what_matches(arg, q[i].name)) {
+			q[i].val = val;
+			return;
+		}
+	}
+
+	barf("unknown quota \"%s\"\n", arg);
 }
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
@@ -2813,7 +2813,7 @@ int main(int argc, char *argv[])
 			no_domain_init = true;
 			break;
 		case 'E':
-			quota_nb_entry_per_domain = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NODES].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'F':
 			pidfile = optarg;
@@ -2831,10 +2831,10 @@ int main(int argc, char *argv[])
 			recovery = false;
 			break;
 		case 'S':
-			quota_max_entry_size = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NODESZ].val = strtoul(optarg, NULL, 10);
 			break;
 		case 't':
-			quota_max_transaction = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_TRANS].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'T':
 			tracefile = optarg;
@@ -2854,15 +2854,17 @@ int main(int argc, char *argv[])
 			verbose = true;
 			break;
 		case 'W':
-			quota_nb_watch_per_domain = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_WATCH].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'A':
-			quota_nb_perms_per_node = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NPERM].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'M':
-			quota_max_path_len = strtol(optarg, NULL, 10);
-			quota_max_path_len = min(XENSTORE_REL_PATH_MAX,
-						 quota_max_path_len);
+			hard_quotas[ACC_PATHLEN].val =
+				strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_PATHLEN].val =
+				 min((unsigned int)XENSTORE_REL_PATH_MAX,
+				     hard_quotas[ACC_PATHLEN].val);
 			break;
 		case 'Q':
 			set_quota(optarg, false);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 9339820156..92d5b50f3c 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -316,16 +316,6 @@ extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
 extern int dom0_event;
 extern int priv_domid;
-extern int quota_nb_watch_per_domain;
-extern int quota_max_transaction;
-extern int quota_max_entry_size;
-extern int quota_nb_perms_per_node;
-extern int quota_max_path_len;
-extern int quota_nb_entry_per_domain;
-extern int quota_req_outstanding;
-extern int quota_trans_nodes;
-extern int quota_memory_per_domain_soft;
-extern int quota_memory_per_domain_hard;
 extern bool keep_orphans;
 
 extern unsigned int timeout_watch_event_msec;
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index b3a719569e..04f911b111 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,7 +43,61 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
-static unsigned int acc_global_max[ACC_N];
+struct quota hard_quotas[ACC_N] = {
+	[ACC_NODES] = {
+		.name = "nodes",
+		.descr = "Nodes per domain",
+		.val = 1000,
+	},
+	[ACC_WATCH] = {
+		.name = "watches",
+		.descr = "Watches per domain",
+		.val = 128,
+	},
+	[ACC_OUTST] = {
+		.name = "outstanding",
+		.descr = "Outstanding requests per domain",
+		.val = 20,
+	},
+	[ACC_MEM] = {
+		.name = "memory",
+		.descr = "Total Xenstore memory per domain (error level)",
+		.val = 2 * 1024 * 1024 + 512 * 1024,	/* 2.5 MB */
+	},
+	[ACC_TRANS] = {
+		.name = "transactions",
+		.descr = "Active transactions per domain",
+		.val = 10,
+	},
+	[ACC_TRANSNODES] = {
+		.name = "transaction-nodes",
+		.descr = "Max. number of accessed nodes per transaction",
+		.val = 1024,
+	},
+	[ACC_NPERM] = {
+		.name = "node-permissions",
+		.descr = "Max. number of permissions per node",
+		.val = 5,
+	},
+	[ACC_PATHLEN] = {
+		.name = "path-max",
+		.descr = "Max. length of a node path",
+		.val = XENSTORE_REL_PATH_MAX,
+	},
+	[ACC_NODESZ] = {
+		.name = "node-size",
+		.descr = "Max. size of a node",
+		.val = 2048,
+	},
+};
+
+struct quota soft_quotas[ACC_N] = {
+	[ACC_MEM] = {
+		.name = "memory",
+		.descr = "Total Xenstore memory per domain (warning level)",
+		.val = 2 * 1024 * 1024,			/* 2.0 MB */
+	},
+};
 
 struct domain
 {
@@ -204,10 +258,10 @@ static bool domain_can_read(struct connection *conn)
 	if (domain_is_unprivileged(conn)) {
 		if (domain->wrl_credit < 0)
 			return false;
-		if (domain->acc[ACC_OUTST].val >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST].val >= hard_quotas[ACC_OUTST].val)
 			return false;
-		if (domain->acc[ACC_MEM].val >= quota_memory_per_domain_hard &&
-		    quota_memory_per_domain_hard)
+		if (domain->acc[ACC_MEM].val >= hard_quotas[ACC_MEM].val &&
+		    hard_quotas[ACC_MEM].val)
 			return false;
 	}
 
@@ -422,6 +476,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 {
 	struct domain *d = find_domain_struct(domid);
 	char *resp;
+	unsigned int i;
 
 	if (!d)
 		return ENOENT;
@@ -430,22 +485,15 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 
-#define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u\n", #t, \
-				      d->acc[e].val, d->acc[e].max); \
-	if (!resp) return ENOMEM
-
-	ent(nodes, ACC_NODES);
-	ent(watches, ACC_WATCH);
-	ent(transactions, ACC_TRANS);
-	ent(outstanding, ACC_OUTST);
-	ent(memory, ACC_MEM);
-	ent(transaction-nodes, ACC_TRANSNODES);
-	ent(node-permissions, ACC_NPERM);
-	ent(path-length, ACC_PATHLEN);
-	ent(node-size, ACC_NODESZ);
-
-#undef ent
+	for (i = 0; i < ACC_N; i++) {
+		if (!hard_quotas[i].name)
+			continue;
+		resp = talloc_asprintf_append(resp, "%-17s: %8u (max %8u)\n",
+					      hard_quotas[i].name,
+					      d->acc[i].val, d->acc[i].max);
+		if (!resp)
+			return ENOMEM;
+	}
 
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 
@@ -455,27 +503,21 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 int domain_max_global_acc(const void *ctx, struct connection *conn)
 {
 	char *resp;
+	unsigned int i;
 
 	resp = talloc_asprintf(ctx, "Max. seen accounting values:\n");
 	if (!resp)
 		return ENOMEM;
 
-#define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
-				      acc_global_max[e]);         \
-	if (!resp) return ENOMEM
-
-	ent(nodes, ACC_NODES);
-	ent(watches, ACC_WATCH);
-	ent(transactions, ACC_TRANS);
-	ent(outstanding, ACC_OUTST);
-	ent(memory, ACC_MEM);
-	ent(transaction-nodes, ACC_TRANSNODES);
-	ent(node-permissions, ACC_NPERM);
-	ent(path-length, ACC_PATHLEN);
-	ent(node-size, ACC_NODESZ);
-
-#undef ent
+	for (i = 0; i < ACC_N; i++) {
+		if (!hard_quotas[i].name)
+			continue;
+		resp = talloc_asprintf_append(resp, "%-17s: %8u\n",
+					      hard_quotas[i].name,
+					      hard_quotas[i].max);
+		if (!resp)
+			return ENOMEM;
+	}
 
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 
@@ -590,7 +632,7 @@ int acc_fix_domains(struct list_head *head, bool chk_quota, bool update)
 	list_for_each_entry(cd, head, list) {
 		cnt = domain_nbentry_fix(cd->domid, cd->acc[ACC_NODES], update);
 		if (!update) {
-			if (chk_quota && cnt >= quota_nb_entry_per_domain)
+			if (chk_quota && cnt >= hard_quotas[ACC_NODES].val)
 				return ENOSPC;
 			if (cnt < 0)
 				return ENOMEM;
@@ -1091,12 +1133,12 @@ static void domain_acc_valid_max(struct domain *d, enum accitem what,
 				 unsigned int val)
 {
 	assert(what < ARRAY_SIZE(d->acc));
-	assert(what < ARRAY_SIZE(acc_global_max));
+	assert(what < ARRAY_SIZE(hard_quotas));
 
 	if (val > d->acc[what].max)
 		d->acc[what].max = val;
-	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
-		acc_global_max[what] = val;
+	if (val > hard_quotas[what].max && domid_is_unprivileged(d->domid))
+		hard_quotas[what].max = val;
 }
 
 static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
@@ -1231,19 +1273,19 @@ void domain_reset_global_acc(void)
 	unsigned int i;
 
 	for (i = 0; i < ACC_N; i++)
-		acc_global_max[i] = 0;
+		hard_quotas[i].max = 0;
 
 	/* Set current max values seen. */
 	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
 }
 
 bool domain_max_chk(const struct connection *conn, enum accitem what,
-		    unsigned int val, unsigned int quota)
+		    unsigned int val)
 {
 	if (!conn || !conn->domain)
 		return false;
 
-	if (domain_is_unprivileged(conn) && val > quota)
+	if (domain_is_unprivileged(conn) && val > hard_quotas[what].val)
 		return true;
 
 	domain_acc_valid_max(conn->domain, what, val);
@@ -1292,8 +1334,7 @@ static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 	domain = conn->domain;
 	now = time(NULL);
 
-	if (mem >= quota_memory_per_domain_hard &&
-	    quota_memory_per_domain_hard) {
+	if (mem >= hard_quotas[ACC_MEM].val && hard_quotas[ACC_MEM].val) {
 		if (domain->hard_quota_reported)
 			return true;
 		syslog(LOG_ERR, "Domain %u exceeds hard memory quota, Xenstore interface to domain stalled\n",
@@ -1310,15 +1351,14 @@ static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 			syslog(LOG_INFO, "Domain %u below hard memory quota again\n",
 			       domain->domid);
 		}
-		if (mem >= quota_memory_per_domain_soft &&
-		    quota_memory_per_domain_soft &&
-		    !domain->soft_quota_reported) {
+		if (mem >= soft_quotas[ACC_MEM].val &&
+		    soft_quotas[ACC_MEM].val && !domain->soft_quota_reported) {
 			domain->mem_last_msg = now;
 			domain->soft_quota_reported = true;
 			syslog(LOG_WARNING, "Domain %u exceeds soft memory quota\n",
 			       domain->domid);
 		}
-		if (mem < quota_memory_per_domain_soft &&
+		if (mem < soft_quotas[ACC_MEM].val &&
 		    domain->soft_quota_reported) {
 			domain->mem_last_msg = now;
 			domain->soft_quota_reported = false;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 78ca434531..5123453f6c 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -40,6 +40,16 @@ enum accitem {
 	ACC_N,			/* Number of elements per domain. */
 };
 
+struct quota {
+	const char *name;
+	const char *descr;
+	unsigned int val;
+	unsigned int max;
+};
+
+extern struct quota hard_quotas[ACC_N];
+extern struct quota soft_quotas[ACC_N];
+
 void handle_event(void);
 
 void check_domains(void);
@@ -134,7 +144,7 @@ void acc_commit(struct connection *conn);
 int domain_max_global_acc(const void *ctx, struct connection *conn);
 void domain_reset_global_acc(void);
 bool domain_max_chk(const struct connection *conn, unsigned int what,
-		    unsigned int val, unsigned int quota);
+		    unsigned int val);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index cecaf27018..f2650f5d05 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -252,8 +252,7 @@ int access_node(struct connection *conn, struct node *node,
 
 	i = find_accessed_node(trans, node->name);
 	if (!i) {
-		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1,
-				   quota_trans_nodes)) {
+		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1)) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -479,7 +478,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (conn->transaction)
 		return EBUSY;
 
-	if (domain_transaction_get(conn) > quota_max_transaction)
+	if (domain_transaction_get(conn) > hard_quotas[ACC_TRANS].val)
 		return ENOSPC;
 
 	/* Attach transaction to ctx for autofree until it's complete */
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 61b1e3421e..e8eb35de02 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -239,7 +239,7 @@ int do_watch(const void *ctx, struct connection *conn, struct buffered_data *in)
 			return EEXIST;
 	}
 
-	if (domain_watch(conn) > quota_nb_watch_per_domain)
+	if (domain_watch(conn) > hard_quotas[ACC_WATCH].val)
 		return E2BIG;
 
 	watch = add_watch(conn, vec[0], vec[1], relative, false);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 08 11:58:38 2023
Message-ID: <63321442-ee0d-c525-ba20-e99cf135399c@suse.com>
Date: Mon, 8 May 2023 13:58:22 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 15/22] xen/pvcalls: Use alloc_ordered_workqueue() to
 create ordered workqueues
Content-Language: en-US
To: Tejun Heo <tj@kernel.org>, jiangshanlai@gmail.com
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230421025046.4008499-1-tj@kernel.org>
 <20230421025046.4008499-16-tj@kernel.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230421025046.4008499-16-tj@kernel.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 21.04.23 04:50, Tejun Heo wrote:
> BACKGROUND
> ==========
> 
> When multiple work items are queued to a workqueue, their execution order
> doesn't match the queueing order. They may get executed in any order and
> simultaneously. When fully serialized execution - one by one in the queueing
> order - is needed, an ordered workqueue should be used which can be created
> with alloc_ordered_workqueue().
> 
> However, alloc_ordered_workqueue() was a later addition. Before it, an
> ordered workqueue could be obtained by creating an UNBOUND workqueue with
> @max_active==1. This originally was an implementation side-effect which was
> broken by 4c16bd327c74 ("workqueue: restore WQ_UNBOUND/max_active==1 to be
> ordered"). Because there were users that depended on the ordered execution,
> 5c0338c68706 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered")
> made workqueue allocation path to implicitly promote UNBOUND workqueues w/
> @max_active==1 to ordered workqueues.
> 
> While this has worked okay, overloading the UNBOUND allocation interface
> this way creates other issues. It's difficult to tell whether a given
> workqueue actually needs to be ordered and users that legitimately want a
> min concurrency level wq unexpectedly gets an ordered one instead. With
> planned UNBOUND workqueue updates to improve execution locality and more
> prevalence of chiplet designs which can benefit from such improvements, this
> isn't a state we wanna be in forever.
> 
> This patch series audits all callsites that create an UNBOUND workqueue w/
> @max_active==1 and converts them to alloc_ordered_workqueue() as necessary.
> 
> WHAT TO LOOK FOR
> ================
> 
> The conversions are from
> 
>    alloc_workqueue(WQ_UNBOUND | flags, 1, args..)
> 
> to
> 
>    alloc_ordered_workqueue(flags, args...)
> 
> which don't cause any functional changes. If you know that fully ordered
> execution is not necessary, please let me know. I'll drop the conversion and
> instead add a comment noting the fact to reduce confusion while conversion
> is in progress.
> 
> If you aren't fully sure, it's completely fine to let the conversion
> through. The behavior will stay exactly the same and we can always
> reconsider later.
> 
> As there are follow-up workqueue core changes, I'd really appreciate if the
> patch can be routed through the workqueue tree w/ your acks. Thanks.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Cc: xen-devel@lists.xenproject.org

Acked-by: Juergen Gross <jgross@suse.com>


Juergen
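The conversion the cover letter describes, shown as a kernel-style fragment (not buildable outside the kernel; the workqueue name here is illustrative, not necessarily the one used by pvcalls):

```c
/* Before: ordering relied on the implicit promotion of UNBOUND
 * workqueues with max_active == 1 to ordered workqueues. */
wq = alloc_workqueue("pvcalls_wq", WQ_UNBOUND, 1);

/* After: the ordering requirement is stated explicitly and survives
 * future changes to UNBOUND workqueue concurrency. */
wq = alloc_ordered_workqueue("pvcalls_wq", 0);
```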


From xen-devel-bounces@lists.xenproject.org Mon May 08 12:00:22 2023
Message-ID: <1f0d8b88-7c5a-670a-c966-2fd7d764f2fd@suse.com>
Date: Mon, 8 May 2023 14:00:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 13/14 RESEND] xenpm: Add set-cpufreq-hwp subcommand
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-14-jandryuk@gmail.com>
 <46c46e0b-0fba-2f3f-6289-f68737a3d8d9@suse.com>
In-Reply-To: <46c46e0b-0fba-2f3f-6289-f68737a3d8d9@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dEZLenRpd0J3cWt6cVdyQ3JDbHVIaFBnMkFkOFRReExId2JIMVlBaGd3b0pN?=
 =?utf-8?B?T1EzUnFFclExZWY0K2lBRlJBQ1Y0RjZYbmpYaUJKYTlyU2JXTTJ6VFo1QTVN?=
 =?utf-8?B?WDd6S1hwdXFtaEsxVFd5UlYyVCsrQi9OY0RJVUN0aGJQVEh6WFNjWHpsNHhT?=
 =?utf-8?B?c3NFQ2VnZWh5MHFiWFY2aWw0NG9MLzhTZmRUSm5yeXREaWxXdm01aFY2MVBx?=
 =?utf-8?B?Mk9Ja3dqRjJmeWdBNVNZZDMwdDZGVmpPVkNHU2NUSXdVRmthMjBaK2l0R0o1?=
 =?utf-8?B?eUVsZUFYZytVZHRYbHN4QXF4dUxUbnU2UktScysvdGxuYnkyaHlWN1dKN1FX?=
 =?utf-8?B?MEgvRGQ3cHVQT2doK3l0emMxc04zYmMxdWRpalpPTHlHWFhEM2xURzhyY1Ux?=
 =?utf-8?B?WFZjOExQb2pxVkRHRXhPWGNJc2huUklNc0tnSXc3eSsyWmk0amlaZTg3ZlQ2?=
 =?utf-8?B?b1JVZnBWSHJUZVloRlp4RHNGQVA1aXJkYm1jVG1PNm1NMVdJL2VncHB0TTRm?=
 =?utf-8?B?WFJGQU5YNTdhNGlIK2RZSDZud25KYnI1aDNSRWhsSmNxV0Q1U3NwTWtMKzVr?=
 =?utf-8?B?RVJvOUFHRFJ0YXJMNThZNVorSUJJMTFNUGZRT0dHeUZIWXNNSHR3RVU1eno1?=
 =?utf-8?B?MVRQdzhpb0RVZ29xaEY2M1lPK0NjOTQ0ZnArbTFmTVcwcFVKcnF4QVNiTEhh?=
 =?utf-8?B?bUhydnYzaWZuVVI5S3lKN1JTazUrR0tFSmtqYVlLUUF1YzBpMGtFVkxzeU9t?=
 =?utf-8?B?am5BV0I5bWQ4N0wzMnpGOUowSUgxa0NmdTRjZVpoOTMvY3lQazBWTGdIYjVv?=
 =?utf-8?B?Q1BldnFLNHpHWG1PSWI0MGFidTl5UTlheVh4anBpZWRibitiN01zcFQzdzFz?=
 =?utf-8?B?TkpHQVN5MTB6NjNMckxEMXh5R3YvZjd6czh0MGRZSjhHMXpPVGZibHhMQXdQ?=
 =?utf-8?B?em9vL2s1RVpkSnJtNzJRZnEyS0lucWlTd0c4ZEVlY0NrRS9FdHZONm9QcEQ4?=
 =?utf-8?B?bGEyT1d1SEdWOCtZcWI5azBTTXJFZjcrczlsNlJYb2tOK3R4U0t6NjY4YXdi?=
 =?utf-8?B?SUZnUDZWNnhmT0pJOEdaY1lzYVZjRE5Wb253SXpuMUpkekdZMVVrbkZvaUQ0?=
 =?utf-8?B?K29lazVmV1RVY2pZTExlVndJay8vM1NTQ1pWMnRSYUNtWkNxZjBvd0hYSTJn?=
 =?utf-8?B?bDlkNnYwWGs0NzVVK3FmVXIvOU1vM3pGOFc3NTl6R0ZwUFBsb252TFVOWnBJ?=
 =?utf-8?B?UCt0KzExbHVLa0pKbFo4TzEyM2FLcFdoSnpLNG5LVk5mdjhaQ2FWTEVQTFJl?=
 =?utf-8?B?QmtDckxtZXhHMVFmdGRpTys2UC9SOEozeFM0bTc0aC9Zem9GaG9lK3I1Zm5R?=
 =?utf-8?B?ZGRBUktWMjB0RlNRWUZxOVBuOFRYOFBYTDhCL1JQcVc1TFlYSVdiTmdwcjVZ?=
 =?utf-8?B?KzVGZ1NmdVdOSFJpUzJNVjFFSnFvZVNwdisrZU16T2k3bHZNZ3Q4b1JxMkZJ?=
 =?utf-8?B?U0NFTXg4dVVrUGNhejA4OGhDU1RnemVERENVd3NLVVpYTFpNNGtaamNDOExS?=
 =?utf-8?B?Q3liRWNHVVA4bVFmenhzL3pLQzdrbWRlZG9ydXlpSk1TSGc1SjVvczVObk8z?=
 =?utf-8?B?UDkyanI1WGhHbFJTeDRUSTdvczRWbXE4T0Y0aHhTTmxHUms2Q2F2bkUreGoz?=
 =?utf-8?B?QVcwdzZIYVNScXhyZytCK29UZHd6b29aYk1SZDdtOU13UGs0QlFEbzd0TU9j?=
 =?utf-8?B?QXJOSXd2U2dKMVJuWUtmRG45SUl3NnIvZmpDSHB6ZVVlekNrMCtCSWN1b3cz?=
 =?utf-8?B?WnJDeXcrL1NwTGtHKzJTRjRvdUdhMGlycTVRZlN3QmJNc09jVG1XNStyUnJT?=
 =?utf-8?B?cU1BWDVjZDUyaFAvSGh0czAxNk0xUmFpTFBHdW13RmJnQ1JVdnhXNis2c3Zn?=
 =?utf-8?B?MlgrWml0NTF5QlVTaEhtcmVyV09zRjVvVGtyb1FjVVg3SWYvdHA5dzNmcWFJ?=
 =?utf-8?B?eENEaVdUOVF2WU5nTFVjRjNqa0o0RUliMExBWnlqazFBcFZlTS9TaFREV0JC?=
 =?utf-8?B?U1ZlbjFmUUdyc2ZaYy81cExZdTl2T0dRSkU4bG93UFJBM1A2K3BaZ1NqNkYy?=
 =?utf-8?Q?eNVF4LA/RJ85OomU8rwdr8zuB?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7272546f-c7b2-4185-2bb9-08db4fbbc3ef
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 12:00:06.9052
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: kerhk2eflNk7jvrzRSOBQIs2NbuhSlSSNr193ww4zNRLKVCRl/hY31WR9obtxsqmeOeZe6sE305zLJtDeC7Fzg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8509

On 08.05.2023 13:56, Jan Beulich wrote:
> On 01.05.2023 21:30, Jason Andryuk wrote:
>> +static int parse_hwp_opts(xc_set_hwp_para_t *set_hwp, int *cpuid,
>> +                          int argc, char *argv[])
>> +{
>> +    int i = 0;
>> +
>> +    if ( argc < 1 ) {
>> +        fprintf(stderr, "Missing arguments\n");
>> +        return -1;
>> +    }
>> +
>> +    if ( parse_cpuid_non_fatal(argv[i], cpuid) == 0 )
>> +    {
>> +        i++;
>> +    }
> 
> I don't think you need the earlier patch and the separate helper:
> Whether a CPU number is present can be told by checking
> isdigit(argv[i][0]).

Hmm, yes, there is "all", but your help text doesn't mention it and
since you're handling a variable number of arguments anyway, there's
no need for anyone to say "all" - they can simply omit the optional
argument.
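
Just to illustrate (this is not from the patch - the function name and
return convention here are made up for the sketch), the isdigit()-based
handling of an optional leading CPU number could look like:

```c
#include <ctype.h>
#include <stdlib.h>

/* Illustrative only: a CPU number is consumed only when the first
 * argument starts with a digit; otherwise -1 ("all CPUs") is assumed
 * and no argument is consumed. */
static int parse_optional_cpuid(int argc, char *argv[], int *cpuid)
{
    *cpuid = -1;                            /* default: all CPUs */

    if ( argc >= 1 && isdigit((unsigned char)argv[0][0]) )
    {
        *cpuid = atoi(argv[0]);
        return 1;                           /* one argument consumed */
    }

    return 0;                               /* nothing consumed */
}
```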

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 12:02:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:02:25 +0000
Message-ID: <938a8260-d05a-feef-82f2-ed680db1cf90@suse.com>
Date: Mon, 8 May 2023 14:01:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 12/14 RESEND] xenpm: Factor out a non-fatal cpuid_parse
 variant
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-13-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230501193034.88575-13-jandryuk@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 01.05.2023 21:30, Jason Andryuk wrote:
> Allow cpuid_parse to be re-used without terminating xenpm.  HWP will
> re-use it to optionally parse a cpuid.  Unlike other uses of
> cpuid_parse, parse_hwp_opts will take a variable number of arguments and
> cannot just check argc.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
> v2:
> Retained because cpuid_parse handles numeric cpu numbers and "all".

Assuming you can convince me of retaining this patch:

> --- a/tools/misc/xenpm.c
> +++ b/tools/misc/xenpm.c
> @@ -79,17 +79,26 @@ void help_func(int argc, char *argv[])
>      show_help();
>  }
>  
> -static void parse_cpuid(const char *arg, int *cpuid)
> +static int parse_cpuid_non_fatal(const char *arg, int *cpuid)
>  {
>      if ( sscanf(arg, "%d", cpuid) != 1 || *cpuid < 0 )
>      {
>          if ( strcasecmp(arg, "all") )
> -        {
> -            fprintf(stderr, "Invalid CPU identifier: '%s'\n", arg);
> -            exit(EINVAL);
> -        }
> +            return -1;
> +
>          *cpuid = -1;
>      }
> +
> +    return 0;
> +}

Looks like this function wants to return bool?
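
As a standalone sketch of that suggestion (detached here from xenpm's
actual file, so includes are spelled out), the bool form would be:

```c
#include <stdbool.h>
#include <stdio.h>
#include <strings.h>

/* Sketch of the bool-returning variant: true on success (with *cpuid
 * set, -1 meaning "all"), false when the argument is neither a
 * non-negative number nor "all". */
static bool parse_cpuid_non_fatal(const char *arg, int *cpuid)
{
    if ( sscanf(arg, "%d", cpuid) != 1 || *cpuid < 0 )
    {
        if ( strcasecmp(arg, "all") )
            return false;                   /* not a number, not "all" */

        *cpuid = -1;                        /* "all" selects every CPU */
    }

    return true;
}
```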

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 12:31:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:31:25 +0000
Message-ID: <0b2a4e8c-ab43-4e6e-2c51-027dcdf1425d@suse.com>
Date: Mon, 8 May 2023 14:31:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/4] x86emul: misc additions
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

This series adds support for a number of more or less recently announced
ISA extensions. The series interacts mildly (and only contextually) with
the AVX512-FP16 one. Note that the last patch is kind of incomplete: It
doesn't enable the feature for guest use, for lack of detail in the
specification.

1: x86emul: support LKGS
2: x86emul: support CMPccXADD
3: VMX: tertiary execution control infrastructure
4: x86emul+VMX: support {RD,WR}MSRLIST

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 12:31:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:31:55 +0000
Message-ID: <45cc879f-5c98-e0c5-e791-7c297ed1eb41@suse.com>
Date: Mon, 8 May 2023 14:31:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 1/4] x86emul: support LKGS
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <0b2a4e8c-ab43-4e6e-2c51-027dcdf1425d@suse.com>
In-Reply-To: <0b2a4e8c-ab43-4e6e-2c51-027dcdf1425d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0037.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

Provide support for this insn, which is a prerequisite to FRED. On the
CPUID side, introduce both its and FRED's feature bits at this occasion,
thus allowing the dependency between the two to be expressed right away.
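
For reference, the architectural behavior being emulated can be sketched
in plain (non-authoritative) C; the structure and function names below
are purely illustrative, not taken from the patch. LKGS loads the
selector as a MOV to GS would, but routes the new (32-bit) descriptor
base to the shadow GS base MSR (IA32_KERNEL_GS_BASE) rather than to the
active GS base, which is left untouched:

```c
#include <stdint.h>

/* Illustrative model only; field/function names are not Xen's. */
struct gs_state {
    uint16_t sel;          /* GS selector (loaded as usual) */
    uint64_t base;         /* active GS base - untouched by LKGS */
    uint64_t shadow_base;  /* IA32_KERNEL_GS_BASE */
};

static void lkgs(struct gs_state *gs, uint16_t sel, uint32_t desc_base)
{
    gs->sel = sel;               /* selector/attributes load as for MOV */
    gs->shadow_base = desc_base; /* descriptor base -> shadow GS MSR */
    /* gs->base deliberately left alone */
}
```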

While adding a testcase, also add a SWAPGS one. So as not to affect the
behavior of pre-existing tests, install the write_{segment,msr} hooks
only transiently.
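
The SWAPGS half of the testcase exercises a simple exchange of the two
bases; in terms of the harness's (hypothetical) globals it boils down to:

```c
#include <stdint.h>

/* SWAPGS: exchange the active GS base with the shadow one. */
static void swapgs(uint64_t *gs_base, uint64_t *gs_base_shadow)
{
    uint64_t tmp = *gs_base;

    *gs_base = *gs_base_shadow;
    *gs_base_shadow = tmp;
}
```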

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Instead of ->read_segment() we could of course also use ->read_msr() to
fetch the original GS base. I can't see a clear advantage to either
approach; the way it's done here matches how we handle SWAPGS.

For PV, save_segments() would need adjustment, but the insn being
restricted to ring 0 means PV guests can't use it anyway (unless we
wanted to emulate it as another privileged insn).
---
v2: Use X86_EXC_*. Add comments.

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -235,6 +235,8 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"fzrm",         0x00000007,  1, CPUID_REG_EAX, 10,  1},
         {"fsrs",         0x00000007,  1, CPUID_REG_EAX, 11,  1},
         {"fsrcs",        0x00000007,  1, CPUID_REG_EAX, 12,  1},
+        {"fred",         0x00000007,  1, CPUID_REG_EAX, 17,  1},
+        {"lkgs",         0x00000007,  1, CPUID_REG_EAX, 18,  1},
         {"wrmsrns",      0x00000007,  1, CPUID_REG_EAX, 19,  1},
         {"avx-ifma",     0x00000007,  1, CPUID_REG_EAX, 23,  1},
 
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -190,7 +190,8 @@ static const char *const str_7a1[32] =
     [10] = "fzrm",          [11] = "fsrs",
     [12] = "fsrcs",
 
-    /* 18 */                [19] = "wrmsrns",
+    /* 16 */                [17] = "fred",
+    [18] = "lkgs",          [19] = "wrmsrns",
 
     /* 22 */                [23] = "avx-ifma",
 };
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -326,6 +326,7 @@ static const struct {
     { { 0x00, 0x18 }, { 2, 2 }, T, R }, /* ltr */
     { { 0x00, 0x20 }, { 2, 2 }, T, R }, /* verr */
     { { 0x00, 0x28 }, { 2, 2 }, T, R }, /* verw */
+    { { 0x00, 0x30 }, { 0, 2 }, T, R, pfx_f2 }, /* lkgs */
     { { 0x01, 0x00 }, { 2, 2 }, F, W }, /* sgdt */
     { { 0x01, 0x08 }, { 2, 2 }, F, W }, /* sidt */
     { { 0x01, 0x10 }, { 2, 2 }, F, R }, /* lgdt */
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -666,6 +666,10 @@ static int blk(
     return x86_emul_blk((void *)offset, p_data, bytes, eflags, state, ctxt);
 }
 
+#ifdef __x86_64__
+static unsigned long gs_base, gs_base_shadow;
+#endif
+
 static int read_segment(
     enum x86_segment seg,
     struct segment_register *reg,
@@ -675,8 +679,30 @@ static int read_segment(
         return X86EMUL_UNHANDLEABLE;
     memset(reg, 0, sizeof(*reg));
     reg->p = 1;
+
+#ifdef __x86_64__
+    if ( seg == x86_seg_gs )
+        reg->base = gs_base;
+#endif
+
+    return X86EMUL_OKAY;
+}
+
+#ifdef __x86_64__
+static int write_segment(
+    enum x86_segment seg,
+    const struct segment_register *reg,
+    struct x86_emulate_ctxt *ctxt)
+{
+    if ( !is_x86_user_segment(seg) )
+        return X86EMUL_UNHANDLEABLE;
+
+    if ( seg == x86_seg_gs )
+        gs_base = reg->base;
+
     return X86EMUL_OKAY;
 }
+#endif
 
 static int read_msr(
     unsigned int reg,
@@ -689,6 +715,20 @@ static int read_msr(
         *val = ctxt->addr_size > 32 ? 0x500 /* LME|LMA */ : 0;
         return X86EMUL_OKAY;
 
+#ifdef __x86_64__
+    case 0xc0000101: /* GS_BASE */
+        if ( ctxt->addr_size < 64 )
+            break;
+        *val = gs_base;
+        return X86EMUL_OKAY;
+
+    case 0xc0000102: /* SHADOW_GS_BASE */
+        if ( ctxt->addr_size < 64 )
+            break;
+        *val = gs_base_shadow;
+        return X86EMUL_OKAY;
+#endif
+
     case 0xc0000103: /* TSC_AUX */
 #define TSC_AUX_VALUE 0xCACACACA
         *val = TSC_AUX_VALUE;
@@ -698,6 +738,31 @@ static int read_msr(
     return X86EMUL_UNHANDLEABLE;
 }
 
+#ifdef __x86_64__
+static int write_msr(
+    unsigned int reg,
+    uint64_t val,
+    struct x86_emulate_ctxt *ctxt)
+{
+    switch ( reg )
+    {
+    case 0xc0000101: /* GS_BASE */
+        if ( ctxt->addr_size < 64 || !is_canonical_address(val) )
+            break;
+        gs_base = val;
+        return X86EMUL_OKAY;
+
+    case 0xc0000102: /* SHADOW_GS_BASE */
+        if ( ctxt->addr_size < 64 || !is_canonical_address(val) )
+            break;
+        gs_base_shadow = val;
+        return X86EMUL_OKAY;
+    }
+
+    return X86EMUL_UNHANDLEABLE;
+}
+#endif
+
 #define INVPCID_ADDR 0x12345678
 #define INVPCID_PCID 0x123
 
@@ -1331,6 +1396,41 @@ int main(int argc, char **argv)
         printf("%u bytes read - ", bytes_read);
         goto fail;
     }
+    printf("okay\n");
+
+    emulops.write_segment = write_segment;
+    emulops.write_msr     = write_msr;
+
+    printf("%-40s", "Testing swapgs...");
+    instr[0] = 0x0f; instr[1] = 0x01; instr[2] = 0xf8;
+    regs.eip = (unsigned long)&instr[0];
+    gs_base = 0xffffeeeecccc8888UL;
+    gs_base_shadow = 0x0000111122224444UL;
+    rc = x86_emulate(&ctxt, &emulops);
+    if ( (rc != X86EMUL_OKAY) ||
+         (regs.eip != (unsigned long)&instr[3]) ||
+         (gs_base != 0x0000111122224444UL) ||
+         (gs_base_shadow != 0xffffeeeecccc8888UL) )
+        goto fail;
+    printf("okay\n");
+
+    printf("%-40s", "Testing lkgs 2(%rdx)...");
+    instr[0] = 0xf2; instr[1] = 0x0f; instr[2] = 0x00; instr[3] = 0x72; instr[4] = 0x02;
+    regs.eip = (unsigned long)&instr[0];
+    regs.edx = (unsigned long)res;
+    res[0]   = 0x00004444;
+    res[1]   = 0x8888cccc;
+    i = cp.extd.nscb; cp.extd.nscb = true; /* for AMD */
+    rc = x86_emulate(&ctxt, &emulops);
+    if ( (rc != X86EMUL_OKAY) ||
+         (regs.eip != (unsigned long)&instr[5]) ||
+         (gs_base != 0x0000111122224444UL) ||
+         gs_base_shadow )
+        goto fail;
+
+    cp.extd.nscb = i;
+    emulops.write_segment = NULL;
+    emulops.write_msr     = NULL;
 #endif
     printf("okay\n");
 
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -86,6 +86,7 @@ bool emul_test_init(void)
     cp.feat.adx = true;
     cp.feat.avx512pf = cp.feat.avx512f;
     cp.feat.rdpid = true;
+    cp.feat.lkgs = true;
     cp.feat.wrmsrns = true;
     cp.extd.clzero = true;
 
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -734,8 +734,12 @@ decode_twobyte(struct x86_emulate_state
         case 0:
             s->desc |= DstMem | SrcImplicit | Mov;
             break;
+        case 6:
+            if ( !(s->modrm_reg & 1) && mode_64bit() )
+            {
         case 2: case 4:
-            s->desc |= SrcMem16;
+                s->desc |= SrcMem16;
+            }
             break;
         }
         break;
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -583,6 +583,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
 #define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
+#define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
 #define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
 #define vcpu_has_avx_ifma()    (ctxt->cpuid->feat.avx_ifma)
 #define vcpu_has_avx_vnni_int8() (ctxt->cpuid->feat.avx_vnni_int8)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2853,8 +2853,35 @@ x86_emulate(
                 break;
             }
             break;
-        default:
-            generate_exception_if(true, X86_EXC_UD);
+        case 6: /* lkgs */
+            generate_exception_if((modrm_reg & 1) || vex.pfx != vex_f2,
+                                  X86_EXC_UD);
+            generate_exception_if(!mode_64bit() || !mode_ring0(), X86_EXC_UD);
+            vcpu_must_have(lkgs);
+            fail_if(!ops->read_segment || !ops->read_msr ||
+                    !ops->write_segment || !ops->write_msr);
+            if ( (rc = ops->read_msr(MSR_SHADOW_GS_BASE, &msr_val,
+                                     ctxt)) != X86EMUL_OKAY ||
+                 (rc = ops->read_segment(x86_seg_gs, &sreg,
+                                         ctxt)) != X86EMUL_OKAY )
+                goto done;
+            dst.orig_val = sreg.base; /* Preserve full GS Base. */
+            if ( (rc = protmode_load_seg(x86_seg_gs, src.val, false, &sreg,
+                                         ctxt, ops)) != X86EMUL_OKAY ||
+                 /* Write (32-bit) base into SHADOW_GS. */
+                 (rc = ops->write_msr(MSR_SHADOW_GS_BASE, sreg.base,
+                                      ctxt)) != X86EMUL_OKAY )
+                goto done;
+            sreg.base = dst.orig_val; /* Reinstate full GS Base. */
+            if ( (rc = ops->write_segment(x86_seg_gs, &sreg,
+                                          ctxt)) != X86EMUL_OKAY )
+            {
+                /* Best effort unwind (i.e. no real error checking). */
+                if ( ops->write_msr(MSR_SHADOW_GS_BASE, msr_val,
+                                    ctxt) == X86EMUL_EXCEPTION )
+                    x86_emul_reset_event(ctxt);
+                goto done;
+            }
             break;
         }
         break;
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -281,6 +281,8 @@ XEN_CPUFEATURE(AVX512_BF16,  10*32+ 5) /
 XEN_CPUFEATURE(FZRM,         10*32+10) /*A  Fast Zero-length REP MOVSB */
 XEN_CPUFEATURE(FSRS,         10*32+11) /*A  Fast Short REP STOSB */
 XEN_CPUFEATURE(FSRCS,        10*32+12) /*A  Fast Short REP CMPSB/SCASB */
+XEN_CPUFEATURE(FRED,         10*32+17) /*   Flexible Return and Event Delivery */
+XEN_CPUFEATURE(LKGS,         10*32+18) /*S  Load Kernel GS Base */
 XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*S  WRMSR Non-Serialising */
 XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
 
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -295,6 +295,9 @@ def crunch_numbers(state):
 
         # In principle the TSXLDTRK insns could also be considered independent.
         RTM: [TSXLDTRK],
+
+        # FRED builds on the LKGS instruction.
+        LKGS: [FRED],
     }
 
     deep_features = tuple(sorted(deps.keys()))



From xen-devel-bounces@lists.xenproject.org Mon May 08 12:32:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:32:58 +0000
Message-ID: <33865bc0-2c90-0501-0ef5-65818973f417@suse.com>
Date: Mon, 8 May 2023 14:32:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 2/4] x86emul: support CMPccXADD
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <0b2a4e8c-ab43-4e6e-2c51-027dcdf1425d@suse.com>
In-Reply-To: <0b2a4e8c-ab43-4e6e-2c51-027dcdf1425d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Unconditionally wire this through the ->rmw() hook. Since x86_emul_rmw()
now wants to construct and invoke a stub, make stub_exn available to it
via a new field in the emulator state structure.
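
As a reminder of the insn semantics (sketched here as plain, non-atomic
C for the "below" form, CMPBXADD; names are ours, not the emulator's):
memory is always read and its old value always lands in the first
register operand, EFLAGS are set from comparing that value with the
first register, and only when the condition holds is the sum with the
second register written back.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Non-atomic model of CMPBXADD m64, r1, r2. The real insn performs the
 * whole read/compare/conditional-write sequence as one locked operation.
 */
static uint64_t cmpbxadd64(uint64_t *mem, uint64_t *r1, uint64_t r2)
{
    uint64_t tmp = *mem;        /* memory is always read */
    bool below = tmp < *r1;     /* EFLAGS come from tmp CMP r1 */

    if ( below )                /* condition met -> write back the sum */
        *mem = tmp + r2;
    *r1 = tmp;                  /* old memory value always goes to r1 */
    return tmp;
}
```

The taken/not-taken expectations here mirror the values used in the new
testcase below.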

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Use X86_EXC_*. Move past introduction of stub_exn in struct
    x86_emulate_state. Keep feature at just "a" for now.
---
SDE: -grr or -srf

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -232,6 +232,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
 
         {"avx-vnni",     0x00000007,  1, CPUID_REG_EAX,  4,  1},
         {"avx512-bf16",  0x00000007,  1, CPUID_REG_EAX,  5,  1},
+        {"cmpccxadd",    0x00000007,  1, CPUID_REG_EAX,  7,  1},
         {"fzrm",         0x00000007,  1, CPUID_REG_EAX, 10,  1},
         {"fsrs",         0x00000007,  1, CPUID_REG_EAX, 11,  1},
         {"fsrcs",        0x00000007,  1, CPUID_REG_EAX, 12,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -186,6 +186,7 @@ static const char *const str_7d0[32] =
 static const char *const str_7a1[32] =
 {
     [ 4] = "avx-vnni",      [ 5] = "avx512-bf16",
+    /* 6 */                 [ 7] = "cmpccxadd",
 
     [10] = "fzrm",          [11] = "fsrs",
     [12] = "fsrcs",
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1403,6 +1403,22 @@ static const struct vex {
     { { 0xdd }, 2, T, R, pfx_66, WIG, Ln }, /* vaesenclast */
     { { 0xde }, 2, T, R, pfx_66, WIG, Ln }, /* vaesdec */
     { { 0xdf }, 2, T, R, pfx_66, WIG, Ln }, /* vaesdeclast */
+    { { 0xe0 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpoxadd */
+    { { 0xe1 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnoxadd */
+    { { 0xe2 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpbxadd */
+    { { 0xe3 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnbxadd */
+    { { 0xe4 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpexadd */
+    { { 0xe5 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnexadd */
+    { { 0xe6 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpbexadd */
+    { { 0xe7 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpaxadd */
+    { { 0xe8 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpsxadd */
+    { { 0xe9 }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnsxadd */
+    { { 0xea }, 2, F, W, pfx_66, Wn, L0 }, /* cmppxadd */
+    { { 0xeb }, 2, F, W, pfx_66, Wn, L0 }, /* cmpnpxadd */
+    { { 0xec }, 2, F, W, pfx_66, Wn, L0 }, /* cmplxadd */
+    { { 0xed }, 2, F, W, pfx_66, Wn, L0 }, /* cmpgexadd */
+    { { 0xee }, 2, F, W, pfx_66, Wn, L0 }, /* cmplexadd */
+    { { 0xef }, 2, F, W, pfx_66, Wn, L0 }, /* cmpgxadd */
     { { 0xf2 }, 2, T, R, pfx_no, Wn, L0 }, /* andn */
     { { 0xf3, 0x08 }, 2, T, R, pfx_no, Wn, L0 }, /* blsr */
     { { 0xf3, 0x10 }, 2, T, R, pfx_no, Wn, L0 }, /* blsmsk */
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -1398,6 +1398,78 @@ int main(int argc, char **argv)
     }
     printf("okay\n");
 
+    printf("%-40s", "Testing cmpbxadd %rbx,%r9,(%rdx)...");
+    if ( stack_exec && cpu_has_cmpccxadd )
+    {
+        instr[0] = 0xc4; instr[1] = 0x62; instr[2] = 0xe1; instr[3] = 0xe2; instr[4] = 0x0a;
+        regs.rip = (unsigned long)&instr[0];
+        regs.eflags = EFLAGS_ALWAYS_SET;
+        res[0] = 0x11223344;
+        res[1] = 0x01020304;
+        regs.rdx = (unsigned long)res;
+        regs.r9  = 0x0001020300112233UL;
+        regs.rbx = 0x0101010101010101UL;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[5]) ||
+             (regs.r9 != 0x0102030411223344UL) ||
+             (regs.rbx != 0x0101010101010101UL) ||
+             ((regs.eflags & EFLAGS_MASK) !=
+              (X86_EFLAGS_PF | EFLAGS_ALWAYS_SET)) ||
+             (res[0] != 0x11223344) ||
+             (res[1] != 0x01020304) )
+            goto fail;
+
+        regs.rip = (unsigned long)&instr[0];
+        regs.r9 <<= 8;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[5]) ||
+             (regs.r9 != 0x0102030411223344UL) ||
+             (regs.rbx != 0x0101010101010101UL) ||
+             ((regs.eflags & EFLAGS_MASK) !=
+              (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_SF |
+               EFLAGS_ALWAYS_SET)) ||
+             (res[0] != 0x12233445) ||
+             (res[1] != 0x02030405) )
+            goto fail;
+        printf("okay\n");
+
+        printf("%-40s", "Testing cmpsxadd %r9d,%ebx,4(%r10)...");
+        instr[1] = 0xc2; instr[2] = 0x31; instr[3] = 0xe8; instr[4] = 0x5a; instr[5] = 0x04;
+        regs.rip = (unsigned long)&instr[0];
+        res[2] = res[0] = ~0;
+        regs.r10 = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[6]) ||
+             (regs.r9 != 0x0102030411223344UL) ||
+             (regs.rbx != 0x02030405) ||
+             ((regs.eflags & EFLAGS_MASK) != EFLAGS_ALWAYS_SET) ||
+             (res[0] + 1) ||
+             (res[1] != 0x02030405) ||
+             (res[2] + 1) )
+            goto fail;
+
+        regs.rip = (unsigned long)&instr[0];
+        regs.rbx <<= 8;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[6]) ||
+             (regs.r9 != 0x0102030411223344UL) ||
+             (regs.rbx != 0x02030405) ||
+             ((regs.eflags & EFLAGS_MASK) !=
+              (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_SF |
+               EFLAGS_ALWAYS_SET)) ||
+             (res[0] + 1) ||
+             (res[1] != 0x13253749) ||
+             (res[2] + 1) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     emulops.write_segment = write_segment;
     emulops.write_msr     = write_msr;
 
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -185,6 +185,7 @@ void wrpkru(unsigned int val);
 #define cpu_has_serialize  cp.feat.serialize
 #define cpu_has_avx_vnni   (cp.feat.avx_vnni && xcr0_mask(6))
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
+#define cpu_has_cmpccxadd  cp.feat.cmpccxadd
 #define cpu_has_avx_ifma   (cp.feat.avx_ifma && xcr0_mask(6))
 #define cpu_has_avx_vnni_int8 (cp.feat.avx_vnni_int8 && xcr0_mask(6))
 #define cpu_has_avx_ne_convert (cp.feat.avx_ne_convert && xcr0_mask(6))
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -170,6 +170,7 @@ extern struct cpuinfo_x86 boot_cpu_data;
 /* CPUID level 0x00000007:1.eax */
 #define cpu_has_avx_vnni        boot_cpu_has(X86_FEATURE_AVX_VNNI)
 #define cpu_has_avx512_bf16     boot_cpu_has(X86_FEATURE_AVX512_BF16)
+#define cpu_has_cmpccxadd       boot_cpu_has(X86_FEATURE_CMPCCXADD)
 #define cpu_has_avx_ifma        boot_cpu_has(X86_FEATURE_AVX_IFMA)
 
 /* CPUID level 0x00000007:1.edx */
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -433,6 +433,7 @@ static const struct ext0f38_table {
     [0xcf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0xdb] = { .simd_size = simd_packed_int, .two_op = 1 },
     [0xdc ... 0xdf] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0xe0 ... 0xef] = { .to_mem = 1 },
     [0xf0] = { .two_op = 1 },
     [0xf1] = { .to_mem = 1, .two_op = 1 },
     [0xf2 ... 0xf3] = {},
@@ -924,6 +925,8 @@ decode_0f38(struct x86_emulate_state *s,
             ctxt->opcode |= MASK_INSR(s->vex.pfx, X86EMUL_OPC_PFX_MASK);
         break;
 
+    case X86EMUL_OPC_VEX_66(0, 0xe0) ...
+         X86EMUL_OPC_VEX_66(0, 0xef): /* cmp<cc>xadd */
     case X86EMUL_OPC_VEX(0, 0xf2):    /* andn */
     case X86EMUL_OPC_VEX(0, 0xf3):    /* Grp 17 */
     case X86EMUL_OPC_VEX(0, 0xf5):    /* bzhi */
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -253,6 +253,7 @@ struct x86_emulate_state {
         rmw_btc,
         rmw_btr,
         rmw_bts,
+        rmw_cmpccxadd,
         rmw_dec,
         rmw_inc,
         rmw_neg,
@@ -583,6 +584,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
 #define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
+#define vcpu_has_cmpccxadd()   (ctxt->cpuid->feat.cmpccxadd)
 #define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
 #define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
 #define vcpu_has_avx_ifma()    (ctxt->cpuid->feat.avx_ifma)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -6888,6 +6888,15 @@ x86_emulate(
 
 #endif /* !X86EMUL_NO_SIMD */
 
+    case X86EMUL_OPC_VEX_66(0x0f38, 0xe0) ...
+         X86EMUL_OPC_VEX_66(0x0f38, 0xef): /* cmp<cc>xadd r,r,m */
+        generate_exception_if(!mode_64bit() || dst.type != OP_MEM || vex.l,
+                              X86_EXC_UD);
+        host_and_vcpu_must_have(cmpccxadd);
+        fail_if(!ops->rmw);
+        state->rmw = rmw_cmpccxadd;
+        break;
+
     case X86EMUL_OPC(0x0f38, 0xf0): /* movbe m,r */
     case X86EMUL_OPC(0x0f38, 0xf1): /* movbe r,m */
         vcpu_must_have(movbe);
@@ -7949,14 +7958,20 @@ x86_emulate(
     {
         ea.val = src.val;
         op_bytes = dst.bytes;
+        state->stub_exn = &stub_exn;
         rc = ops->rmw(dst.mem.seg, dst.mem.off, dst.bytes, &_regs.eflags,
                       state, ctxt);
+#ifdef __XEN__
+        if ( rc == X86EMUL_stub_failure )
+            goto emulation_stub_failure;
+#endif
         if ( rc != X86EMUL_OKAY )
             goto done;
 
         /* Some operations require a register to be written. */
         switch ( state->rmw )
         {
+        case rmw_cmpccxadd:
         case rmw_xchg:
         case rmw_xadd:
             switch ( dst.bytes )
@@ -8233,6 +8248,7 @@ int x86_emul_rmw(
     uint32_t *eflags,
     struct x86_emulate_state *state,
     struct x86_emulate_ctxt *ctxt)
+#define stub_exn (*state->stub_exn) /* for invoke_stub() */
 {
     unsigned long *dst = ptr;
 
@@ -8298,6 +8314,37 @@ int x86_emul_rmw(
 #undef BINOP
 #undef SHIFT
 
+#ifdef __x86_64__
+    case rmw_cmpccxadd:
+    {
+        struct x86_emulate_stub stub = {};
+        uint8_t *buf = get_stub(stub);
+        typeof(state->vex) *pvex = container_of(buf + 1, typeof(state->vex),
+                                                raw[0]);
+        unsigned long dummy;
+
+        buf[0] = 0xc4;
+        *pvex = state->vex;
+        pvex->b = 1;
+        pvex->r = 1;
+        pvex->reg = 0xf; /* rAX */
+        buf[3] = ctxt->opcode;
+        buf[4] = 0x11; /* reg=rDX r/m=(%RCX) */
+        buf[5] = 0xc3;
+
+        *eflags &= ~EFLAGS_MASK;
+        invoke_stub("",
+                    _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),
+                    "+m" (*dst), "+d" (state->ea.val),
+                    [tmp] "=&r" (dummy), [eflags] "+g" (*eflags)
+                    : "a" (*decode_vex_gpr(state->vex.reg, ctxt->regs, ctxt)),
+                      "c" (dst), [mask] "i" (EFLAGS_MASK));
+
+        put_stub(stub);
+        break;
+    }
+#endif
+
     case rmw_not:
         switch ( state->op_bytes )
         {
@@ -8393,7 +8440,13 @@ int x86_emul_rmw(
 #undef JCXZ
 
     return X86EMUL_OKAY;
+
+#if defined(__XEN__) && defined(__x86_64__)
+ emulation_stub_failure:
+    return X86EMUL_stub_failure;
+#endif
 }
+#undef stub_exn
 
 static void __init __maybe_unused build_assertions(void)
 {
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -278,6 +278,7 @@ XEN_CPUFEATURE(SSBD,          9*32+31) /
 /* Intel-defined CPU features, CPUID level 0x00000007:1.eax, word 10 */
 XEN_CPUFEATURE(AVX_VNNI,     10*32+ 4) /*A  AVX-VNNI Instructions */
 XEN_CPUFEATURE(AVX512_BF16,  10*32+ 5) /*A  AVX512 BFloat16 Instructions */
+XEN_CPUFEATURE(CMPCCXADD,    10*32+ 7) /*a  CMPccXADD Instructions */
 XEN_CPUFEATURE(FZRM,         10*32+10) /*A  Fast Zero-length REP MOVSB */
 XEN_CPUFEATURE(FSRS,         10*32+11) /*A  Fast Short REP STOSB */
 XEN_CPUFEATURE(FSRCS,        10*32+12) /*A  Fast Short REP CMPSB/SCASB */




From xen-devel-bounces@lists.xenproject.org Mon May 08 12:33:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:33:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531503.827225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw03d-0004MY-3C; Mon, 08 May 2023 12:33:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531503.827225; Mon, 08 May 2023 12:33:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw03d-0004MP-0F; Mon, 08 May 2023 12:33:37 +0000
Received: by outflank-mailman (input) for mailman id 531503;
 Mon, 08 May 2023 12:33:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pw03b-0003kR-Uu
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 12:33:36 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2053.outbound.protection.outlook.com [40.107.7.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8c1e543f-ed9c-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 14:33:35 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9318.eurprd04.prod.outlook.com (2603:10a6:102:2a5::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.31; Mon, 8 May
 2023 12:33:05 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 12:33:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c1e543f-ed9c-11ed-b226-6b7b168915f2
Message-ID: <a1e4d155-f316-79a8-14d1-b236a18abcba@suse.com>
Date: Mon, 8 May 2023 14:33:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 3/4] VMX: tertiary execution control infrastructure
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <0b2a4e8c-ab43-4e6e-2c51-027dcdf1425d@suse.com>
In-Reply-To: <0b2a4e8c-ab43-4e6e-2c51-027dcdf1425d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0107.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

This is a prereq to enabling the MSRLIST feature.

Note that the PROCBASED_CTLS3 MSR differs from the other VMX feature
reporting MSRs in that all 64 of its bits report allowed-1 settings.

vVMX code is left alone, though, for the time being.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -164,6 +164,7 @@ static int cf_check parse_ept_param_runt
 u32 vmx_pin_based_exec_control __read_mostly;
 u32 vmx_cpu_based_exec_control __read_mostly;
 u32 vmx_secondary_exec_control __read_mostly;
+uint64_t vmx_tertiary_exec_control __read_mostly;
 u32 vmx_vmexit_control __read_mostly;
 u32 vmx_vmentry_control __read_mostly;
 u64 vmx_ept_vpid_cap __read_mostly;
@@ -229,10 +230,32 @@ static u32 adjust_vmx_controls(
     return ctl;
 }
 
-static bool_t cap_check(const char *name, u32 expected, u32 saw)
+static uint64_t adjust_vmx_controls2(
+    const char *name, uint64_t ctl_min, uint64_t ctl_opt, unsigned int msr,
+    bool *mismatch)
+{
+    uint64_t vmx_msr, ctl = ctl_min | ctl_opt;
+
+    rdmsrl(msr, vmx_msr);
+
+    ctl &= vmx_msr; /* bit == 0 ==> must be zero */
+
+    /* Ensure minimum (required) set of control bits are supported. */
+    if ( ctl_min & ~ctl )
+    {
+        *mismatch = true;
+        printk("VMX: CPU%u has insufficient %s (%#lx; requires %#lx)\n",
+               smp_processor_id(), name, ctl, ctl_min);
+    }
+
+    return ctl;
+}
+
+static bool cap_check(
+    const char *name, unsigned long expected, unsigned long saw)
 {
     if ( saw != expected )
-        printk("VMX %s: saw %#x expected %#x\n", name, saw, expected);
+        printk("VMX %s: saw %#lx expected %#lx\n", name, saw, expected);
     return saw != expected;
 }
 
@@ -242,6 +265,7 @@ static int vmx_init_vmcs_config(bool bsp
     u32 _vmx_pin_based_exec_control;
     u32 _vmx_cpu_based_exec_control;
     u32 _vmx_secondary_exec_control = 0;
+    uint64_t _vmx_tertiary_exec_control = 0;
     u64 _vmx_ept_vpid_cap = 0;
     u64 _vmx_misc_cap = 0;
     u32 _vmx_vmexit_control;
@@ -275,7 +299,8 @@ static int vmx_init_vmcs_config(bool bsp
     opt = (CPU_BASED_ACTIVATE_MSR_BITMAP |
            CPU_BASED_TPR_SHADOW |
            CPU_BASED_MONITOR_TRAP_FLAG |
-           CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
+           CPU_BASED_ACTIVATE_SECONDARY_CONTROLS |
+           CPU_BASED_ACTIVATE_TERTIARY_CONTROLS);
     _vmx_cpu_based_exec_control = adjust_vmx_controls(
         "CPU-Based Exec Control", min, opt,
         MSR_IA32_VMX_PROCBASED_CTLS, &mismatch);
@@ -339,6 +364,15 @@ static int vmx_init_vmcs_config(bool bsp
             MSR_IA32_VMX_PROCBASED_CTLS2, &mismatch);
     }
 
+    if ( _vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_TERTIARY_CONTROLS )
+    {
+        uint64_t opt = 0;
+
+        _vmx_tertiary_exec_control = adjust_vmx_controls2(
+            "Tertiary Exec Control", 0, opt,
+            MSR_IA32_VMX_PROCBASED_CTLS3, &mismatch);
+    }
+
     /* The IA32_VMX_EPT_VPID_CAP MSR exists only when EPT or VPID available */
     if ( _vmx_secondary_exec_control & (SECONDARY_EXEC_ENABLE_EPT |
                                         SECONDARY_EXEC_ENABLE_VPID) )
@@ -469,6 +503,7 @@ static int vmx_init_vmcs_config(bool bsp
         vmx_pin_based_exec_control = _vmx_pin_based_exec_control;
         vmx_cpu_based_exec_control = _vmx_cpu_based_exec_control;
         vmx_secondary_exec_control = _vmx_secondary_exec_control;
+        vmx_tertiary_exec_control  = _vmx_tertiary_exec_control;
         vmx_ept_vpid_cap           = _vmx_ept_vpid_cap;
         vmx_vmexit_control         = _vmx_vmexit_control;
         vmx_vmentry_control        = _vmx_vmentry_control;
@@ -505,6 +540,9 @@ static int vmx_init_vmcs_config(bool bsp
             "Secondary Exec Control",
             vmx_secondary_exec_control, _vmx_secondary_exec_control);
         mismatch |= cap_check(
+            "Tertiary Exec Control",
+            vmx_tertiary_exec_control, _vmx_tertiary_exec_control);
+        mismatch |= cap_check(
             "VMExit Control",
             vmx_vmexit_control, _vmx_vmexit_control);
         mismatch |= cap_check(
@@ -1082,6 +1120,7 @@ static int construct_vmcs(struct vcpu *v
         v->arch.hvm.vmx.exec_control |= CPU_BASED_RDTSC_EXITING;
 
     v->arch.hvm.vmx.secondary_exec_control = vmx_secondary_exec_control;
+    v->arch.hvm.vmx.tertiary_exec_control  = vmx_tertiary_exec_control;
 
     /*
      * Disable features which we don't want active by default:
@@ -1136,6 +1175,10 @@ static int construct_vmcs(struct vcpu *v
         __vmwrite(SECONDARY_VM_EXEC_CONTROL,
                   v->arch.hvm.vmx.secondary_exec_control);
 
+    if ( cpu_has_vmx_tertiary_exec_control )
+        __vmwrite(TERTIARY_VM_EXEC_CONTROL,
+                  v->arch.hvm.vmx.tertiary_exec_control);
+
     /* MSR access bitmap. */
     if ( cpu_has_vmx_msr_bitmap )
     {
@@ -2071,10 +2114,12 @@ void vmcs_dump_vcpu(struct vcpu *v)
                vmr(HOST_PERF_GLOBAL_CTRL));
 
     printk("*** Control State ***\n");
-    printk("PinBased=%08x CPUBased=%08x SecondaryExec=%08x\n",
+    printk("PinBased=%08x CPUBased=%08x\n",
            vmr32(PIN_BASED_VM_EXEC_CONTROL),
-           vmr32(CPU_BASED_VM_EXEC_CONTROL),
-           vmr32(SECONDARY_VM_EXEC_CONTROL));
+           vmr32(CPU_BASED_VM_EXEC_CONTROL));
+    printk("SecondaryExec=%08x TertiaryExec=%08lx\n",
+           vmr32(SECONDARY_VM_EXEC_CONTROL),
+           vmr(TERTIARY_VM_EXEC_CONTROL));
     printk("EntryControls=%08x ExitControls=%08x\n", vmentry_ctl, vmexit_ctl);
     printk("ExceptionBitmap=%08x PFECmask=%08x PFECmatch=%08x\n",
            vmr32(EXCEPTION_BITMAP),
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -114,6 +114,7 @@ struct vmx_vcpu {
     /* Cache of cpu execution control. */
     u32                  exec_control;
     u32                  secondary_exec_control;
+    uint64_t             tertiary_exec_control;
     u32                  exception_bitmap;
 
     uint64_t             shadow_gs;
@@ -196,6 +197,7 @@ void vmx_vmcs_reload(struct vcpu *v);
 #define CPU_BASED_RDTSC_EXITING               0x00001000
 #define CPU_BASED_CR3_LOAD_EXITING            0x00008000
 #define CPU_BASED_CR3_STORE_EXITING           0x00010000
+#define CPU_BASED_ACTIVATE_TERTIARY_CONTROLS  0x00020000
 #define CPU_BASED_CR8_LOAD_EXITING            0x00080000
 #define CPU_BASED_CR8_STORE_EXITING           0x00100000
 #define CPU_BASED_TPR_SHADOW                  0x00200000
@@ -260,6 +262,13 @@ extern u32 vmx_vmentry_control;
 #define SECONDARY_EXEC_NOTIFY_VM_EXITING        0x80000000
 extern u32 vmx_secondary_exec_control;
 
+#define TERTIARY_EXEC_LOADIWKEY_EXITING         BIT(0, UL)
+#define TERTIARY_EXEC_ENABLE_HLAT               BIT(1, UL)
+#define TERTIARY_EXEC_EPT_PAGING_WRITE          BIT(2, UL)
+#define TERTIARY_EXEC_GUEST_PAGING_VERIFY       BIT(3, UL)
+#define TERTIARY_EXEC_IPI_VIRT                  BIT(4, UL)
+extern uint64_t vmx_tertiary_exec_control;
+
 #define VMX_EPT_EXEC_ONLY_SUPPORTED                         0x00000001
 #define VMX_EPT_WALK_LENGTH_4_SUPPORTED                     0x00000040
 #define VMX_EPT_MEMORY_TYPE_UC                              0x00000100
@@ -295,6 +304,8 @@ extern u64 vmx_ept_vpid_cap;
     (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)
 #define cpu_has_vmx_secondary_exec_control \
     (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)
+#define cpu_has_vmx_tertiary_exec_control \
+    (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_TERTIARY_CONTROLS)
 #define cpu_has_vmx_ept \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT)
 #define cpu_has_vmx_dt_exiting \
@@ -418,6 +429,7 @@ enum vmcs_field {
     VIRT_EXCEPTION_INFO             = 0x0000202a,
     XSS_EXIT_BITMAP                 = 0x0000202c,
     TSC_MULTIPLIER                  = 0x00002032,
+    TERTIARY_VM_EXEC_CONTROL        = 0x00002034,
     GUEST_PHYSICAL_ADDRESS          = 0x00002400,
     VMCS_LINK_POINTER               = 0x00002800,
     GUEST_IA32_DEBUGCTL             = 0x00002802,
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -320,6 +320,7 @@
 #define MSR_IA32_VMX_TRUE_EXIT_CTLS             0x48f
 #define MSR_IA32_VMX_TRUE_ENTRY_CTLS            0x490
 #define MSR_IA32_VMX_VMFUNC                     0x491
+#define MSR_IA32_VMX_PROCBASED_CTLS3            0x492
 
 /* K7/K8 MSRs. Not complete. See the architecture manual for a more
    complete list. */
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -760,6 +760,12 @@ void vmx_update_secondary_exec_control(s
                   v->arch.hvm.vmx.secondary_exec_control);
 }
 
+void vmx_update_tertiary_exec_control(struct vcpu *v)
+{
+    __vmwrite(TERTIARY_VM_EXEC_CONTROL,
+              v->arch.hvm.vmx.tertiary_exec_control);
+}
+
 void vmx_update_exception_bitmap(struct vcpu *v)
 {
     u32 bitmap = unlikely(v->arch.hvm.vmx.vmx_realmode)
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -80,6 +80,7 @@ void vmx_realmode(struct cpu_user_regs *
 void vmx_update_exception_bitmap(struct vcpu *v);
 void vmx_update_cpu_exec_control(struct vcpu *v);
 void vmx_update_secondary_exec_control(struct vcpu *v);
+void vmx_update_tertiary_exec_control(struct vcpu *v);
 
 #define POSTED_INTR_ON  0
 #define POSTED_INTR_SN  1



From xen-devel-bounces@lists.xenproject.org Mon May 08 12:34:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:34:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531511.827235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw04d-0004yE-H0; Mon, 08 May 2023 12:34:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531511.827235; Mon, 08 May 2023 12:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw04d-0004y7-Do; Mon, 08 May 2023 12:34:39 +0000
Received: by outflank-mailman (input) for mailman id 531511;
 Mon, 08 May 2023 12:34:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pw04c-0004xq-Ev
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 12:34:38 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2055.outbound.protection.outlook.com [40.107.7.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b1017243-ed9c-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 14:34:36 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9318.eurprd04.prod.outlook.com (2603:10a6:102:2a5::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.31; Mon, 8 May
 2023 12:34:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 12:34:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1017243-ed9c-11ed-8611-37d641c3527e
Message-ID: <87e7af8b-67e7-1bf9-ee91-48547b2e5a39@suse.com>
Date: Mon, 8 May 2023 14:33:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 4/4] x86emul+VMX: support {RD,WR}MSRLIST
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
References: <0b2a4e8c-ab43-4e6e-2c51-027dcdf1425d@suse.com>
In-Reply-To: <0b2a4e8c-ab43-4e6e-2c51-027dcdf1425d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0005.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

These are "compound" instructions, each issuing a series of RDMSRs or
WRMSRs respectively. In the emulator we can therefore implement them
using the existing msr_{read,write}() hooks. The memory accesses rely on
the HVM ->read() / ->write() hooks already being linear-address
(x86_seg_none) aware (by way of hvmemul_virtual_to_linear() handling
this case).

Preemption is checked for in WRMSRLIST handling only, as only MSR
writes are expected to potentially take long.
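For clarity, the per-element behavior that both emulation paths follow can be modeled as in the sketch below. This is only an illustration of the loop structure (lowest set bit of RCX selects the list entry, indices wider than 32 bits fault, the bit is cleared once the entry is committed); wrmsrlist_model() and its callback are hypothetical names, not part of the patch:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical model of the WRMSRLIST loop: RSI points at an array of
 * 64-bit MSR indices, RDI at an array of 64-bit values, and each set
 * bit n in RCX selects entry n of both arrays.  Bits are cleared as
 * entries are processed, so the remaining mask in RCX doubles as the
 * resume point if the instruction is interrupted.  wrmsr_cb stands in
 * for the emulator's ->write_msr() hook; returning false models a
 * fault terminating the instruction.
 */
static bool wrmsrlist_model(uint64_t *rcx, const uint64_t *idx_list,
                            const uint64_t *val_list,
                            bool (*wrmsr_cb)(uint32_t msr, uint64_t val))
{
    while ( *rcx )
    {
        unsigned int n = __builtin_ffsll(*rcx) - 1;

        /* MSR indices must fit in 32 bits; wider values raise #GP. */
        if ( idx_list[n] != (uint32_t)idx_list[n] )
            return false;
        if ( !wrmsr_cb((uint32_t)idx_list[n], val_list[n]) )
            return false;
        *rcx &= ~(1ULL << n);
    }
    return true;
}
```

RDMSRLIST follows the same skeleton with the data flow reversed (read the MSR, store to the RDI-based array).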

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TODO: Once VMX tertiary execution control bit is known (see //todo)
      further adjust cpufeatureset.h.

RFC: In vmx_vmexit_handler() handling is forwarded to the emulator
     blindly. Alternatively we could consult the exit qualification and
     process just a single MSR at a time (without involving the
     emulator), exiting back to the guest after every iteration. (I
     don't think a mix of both models makes a lot of sense.)

With the VMX side of the spec still unclear (tertiary execution control
bit still unspecified in ISE 048) we can't enable the insn yet for (HVM)
guest use. The precise behavior of MSR_BARRIER is also not spelled out,
so the (minimal) implementation is a guess for now.
---
v2: Use X86_EXC_*. Add preemption checking to WRMSRLIST handling. Remove
    the feature from "max" when the VMX counterpart isn't available.

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -240,6 +240,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"lkgs",         0x00000007,  1, CPUID_REG_EAX, 18,  1},
         {"wrmsrns",      0x00000007,  1, CPUID_REG_EAX, 19,  1},
         {"avx-ifma",     0x00000007,  1, CPUID_REG_EAX, 23,  1},
+        {"msrlist",      0x00000007,  1, CPUID_REG_EAX, 27,  1},
 
         {"avx-vnni-int8",0x00000007,  1, CPUID_REG_EDX,  4,  1},
         {"avx-ne-convert",0x00000007, 1, CPUID_REG_EDX,  5,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -195,6 +195,8 @@ static const char *const str_7a1[32] =
     [18] = "lkgs",          [19] = "wrmsrns",
 
     /* 22 */                [23] = "avx-ifma",
+
+    /* 26 */                [27] = "msrlist",
 };
 
 static const char *const str_e21a[32] =
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -342,6 +342,8 @@ static const struct {
     { { 0x01, 0xc4 }, { 2, 2 }, F, N }, /* vmxoff */
     { { 0x01, 0xc5 }, { 2, 2 }, F, N }, /* pconfig */
     { { 0x01, 0xc6 }, { 2, 2 }, F, N }, /* wrmsrns */
+    { { 0x01, 0xc6 }, { 0, 2 }, F, W, pfx_f2 }, /* rdmsrlist */
+    { { 0x01, 0xc6 }, { 0, 2 }, F, R, pfx_f3 }, /* wrmsrlist */
     { { 0x01, 0xc8 }, { 2, 2 }, F, N }, /* monitor */
     { { 0x01, 0xc9 }, { 2, 2 }, F, N }, /* mwait */
     { { 0x01, 0xca }, { 2, 2 }, F, N }, /* clac */
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -589,6 +589,7 @@ static int read(
     default:
         if ( !is_x86_user_segment(seg) )
             return X86EMUL_UNHANDLEABLE;
+    case x86_seg_none:
         bytes_read += bytes;
         break;
     }
@@ -619,7 +620,7 @@ static int write(
     if ( verbose )
         printf("** %s(%u, %p,, %u,)\n", __func__, seg, (void *)offset, bytes);
 
-    if ( !is_x86_user_segment(seg) )
+    if ( !is_x86_user_segment(seg) && seg != x86_seg_none )
         return X86EMUL_UNHANDLEABLE;
     memcpy((void *)offset, p_data, bytes);
     return X86EMUL_OKAY;
@@ -711,6 +712,10 @@ static int read_msr(
 {
     switch ( reg )
     {
+    case 0x0000002f: /* BARRIER */
+        *val = 0;
+        return X86EMUL_OKAY;
+
     case 0xc0000080: /* EFER */
         *val = ctxt->addr_size > 32 ? 0x500 /* LME|LMA */ : 0;
         return X86EMUL_OKAY;
@@ -1499,9 +1504,53 @@ int main(int argc, char **argv)
          (gs_base != 0x0000111122224444UL) ||
          gs_base_shadow )
         goto fail;
+    printf("okay\n");
 
     cp.extd.nscb = i;
     emulops.write_segment = NULL;
+
+    printf("%-40s", "Testing rdmsrlist...");
+    instr[0] = 0xf2; instr[1] = 0x0f; instr[2] = 0x01; instr[3] = 0xc6;
+    regs.rip = (unsigned long)&instr[0];
+    regs.rsi = (unsigned long)(res + 0x80);
+    regs.rdi = (unsigned long)(res + 0x80 + 0x40 * 2);
+    regs.rcx = 0x0002000100008000UL;
+    gs_base_shadow = 0x0000222244446666UL;
+    memset(res + 0x80, ~0, 0x40 * 8 * 2);
+    res[0x80 + 0x0f * 2] = 0xc0000101; /* GS_BASE */
+    res[0x80 + 0x0f * 2 + 1] = 0;
+    res[0x80 + 0x20 * 2] = 0xc0000102; /* SHADOW_GS_BASE */
+    res[0x80 + 0x20 * 2 + 1] = 0;
+    res[0x80 + 0x31 * 2] = 0x2f; /* BARRIER */
+    res[0x80 + 0x31 * 2 + 1] = 0;
+    rc = x86_emulate(&ctxt, &emulops);
+    if ( (rc != X86EMUL_OKAY) ||
+         (regs.rip != (unsigned long)&instr[4]) ||
+         regs.rcx ||
+         (res[0x80 + (0x40 + 0x0f) * 2] != (unsigned int)gs_base) ||
+         (res[0x80 + (0x40 + 0x0f) * 2 + 1] != (gs_base >> (8 * sizeof(int)))) ||
+         (res[0x80 + (0x40 + 0x20) * 2] != (unsigned int)gs_base_shadow) ||
+         (res[0x80 + (0x40 + 0x20) * 2 + 1] != (gs_base_shadow >> (8 * sizeof(int)))) ||
+         res[0x80 + (0x40 + 0x31) * 2] || res[0x80 + (0x40 + 0x31) * 2 + 1] )
+        goto fail;
+    printf("okay\n");
+
+    printf("%-40s", "Testing wrmsrlist...");
+    instr[0] = 0xf3; instr[1] = 0x0f; instr[2] = 0x01; instr[3] = 0xc6;
+    regs.eip = (unsigned long)&instr[0];
+    regs.rsi -= 0x11 * 8;
+    regs.rdi -= 0x11 * 8;
+    regs.rcx = 0x0002000100000000UL;
+    res[0x80 + 0x0f * 2] = 0xc0000102; /* SHADOW_GS_BASE */
+    res[0x80 + 0x20 * 2] = 0xc0000101; /* GS_BASE */
+    rc = x86_emulate(&ctxt, &emulops);
+    if ( (rc != X86EMUL_OKAY) ||
+         (regs.rip != (unsigned long)&instr[4]) ||
+         regs.rcx ||
+         (gs_base != 0x0000222244446666UL) ||
+         (gs_base_shadow != 0x0000111122224444UL) )
+        goto fail;
+
     emulops.write_msr     = NULL;
 #endif
     printf("okay\n");
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -88,6 +88,7 @@ bool emul_test_init(void)
     cp.feat.rdpid = true;
     cp.feat.lkgs = true;
     cp.feat.wrmsrns = true;
+    cp.feat.msrlist = true;
     cp.extd.clzero = true;
 
     if ( cpu_has_xsave )
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -601,6 +601,9 @@ static void __init calculate_hvm_max_pol
             __clear_bit(X86_FEATURE_XSAVES, fs);
     }
 
+    if ( !cpu_has_vmx_msrlist )
+        __clear_bit(X86_FEATURE_MSRLIST, fs);
+
     /*
      * Xen doesn't use PKS, so the guest support for it has opted to not use
      * the VMCS load/save controls for efficiency reasons.  This depends on
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -830,6 +830,20 @@ static void cf_check vmx_cpuid_policy_ch
     else
         vmx_set_msr_intercept(v, MSR_PKRS, VMX_MSR_RW);
 
+    if ( cp->feat.msrlist )
+    {
+        vmx_clear_msr_intercept(v, MSR_BARRIER, VMX_MSR_RW);
+        v->arch.hvm.vmx.tertiary_exec_control |= TERTIARY_EXEC_ENABLE_MSRLIST;
+        vmx_update_tertiary_exec_control(v);
+    }
+    else if ( v->arch.hvm.vmx.tertiary_exec_control &
+              TERTIARY_EXEC_ENABLE_MSRLIST )
+    {
+        vmx_set_msr_intercept(v, MSR_BARRIER, VMX_MSR_RW);
+        v->arch.hvm.vmx.tertiary_exec_control &= ~TERTIARY_EXEC_ENABLE_MSRLIST;
+        vmx_update_tertiary_exec_control(v);
+    }
+
  out:
     vmx_vmcs_exit(v);
 
@@ -3700,6 +3714,22 @@ gp_fault:
     return X86EMUL_EXCEPTION;
 }
 
+static bool cf_check is_msrlist(
+    const struct x86_emulate_state *state, const struct x86_emulate_ctxt *ctxt)
+{
+
+    if ( ctxt->opcode == X86EMUL_OPC(0x0f, 0x01) )
+    {
+        unsigned int rm, reg;
+        int mode = x86_insn_modrm(state, &rm, &reg);
+
+        /* This also includes WRMSRNS; should be okay. */
+        return mode == 3 && rm == 6 && !reg;
+    }
+
+    return false;
+}
+
 static void vmx_do_extint(struct cpu_user_regs *regs)
 {
     unsigned long vector;
@@ -4507,6 +4537,17 @@ void vmx_vmexit_handler(struct cpu_user_
         }
         break;
 
+    case EXIT_REASON_RDMSRLIST:
+    case EXIT_REASON_WRMSRLIST:
+        if ( vmx_guest_x86_mode(v) != 8 || !currd->arch.cpuid->feat.msrlist )
+        {
+            ASSERT_UNREACHABLE();
+            hvm_inject_hw_exception(X86_EXC_UD, X86_EVENT_NO_EC);
+        }
+        else if ( !hvm_emulate_one_insn(is_msrlist, "MSR list") )
+            hvm_inject_hw_exception(X86_EXC_GP, 0);
+        break;
+
     case EXIT_REASON_VMXOFF:
     case EXIT_REASON_VMXON:
     case EXIT_REASON_VMCLEAR:
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -267,6 +267,7 @@ extern u32 vmx_secondary_exec_control;
 #define TERTIARY_EXEC_EPT_PAGING_WRITE          BIT(2, UL)
 #define TERTIARY_EXEC_GUEST_PAGING_VERIFY       BIT(3, UL)
 #define TERTIARY_EXEC_IPI_VIRT                  BIT(4, UL)
+#define TERTIARY_EXEC_ENABLE_MSRLIST            0//todo
 extern uint64_t vmx_tertiary_exec_control;
 
 #define VMX_EPT_EXEC_ONLY_SUPPORTED                         0x00000001
@@ -352,6 +353,8 @@ extern u64 vmx_ept_vpid_cap;
     (vmx_secondary_exec_control & SECONDARY_EXEC_BUS_LOCK_DETECTION)
 #define cpu_has_vmx_notify_vm_exiting \
     (vmx_secondary_exec_control & SECONDARY_EXEC_NOTIFY_VM_EXITING)
+#define cpu_has_vmx_msrlist \
+    (vmx_tertiary_exec_control & TERTIARY_EXEC_ENABLE_MSRLIST)
 
 #define VMCS_RID_TYPE_MASK              0x80000000
 
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -201,6 +201,8 @@ static inline void pi_clear_sn(struct pi
 #define EXIT_REASON_XRSTORS             64
 #define EXIT_REASON_BUS_LOCK            74
 #define EXIT_REASON_NOTIFY              75
+#define EXIT_REASON_RDMSRLIST           78
+#define EXIT_REASON_WRMSRLIST           79
 /* Remember to also update VMX_PERF_EXIT_REASON_SIZE! */
 
 /*
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -24,6 +24,8 @@
 #define  APIC_BASE_ENABLE                   (_AC(1, ULL) << 11)
 #define  APIC_BASE_ADDR_MASK                0x000ffffffffff000ULL
 
+#define MSR_BARRIER                         0x0000002f
+
 #define MSR_TEST_CTRL                       0x00000033
 #define  TEST_CTRL_SPLITLOCK_DETECT         (_AC(1, ULL) << 29)
 #define  TEST_CTRL_SPLITLOCK_DISABLE        (_AC(1, ULL) << 31)
--- a/xen/arch/x86/include/asm/perfc_defn.h
+++ b/xen/arch/x86/include/asm/perfc_defn.h
@@ -6,7 +6,7 @@ PERFCOUNTER_ARRAY(exceptions,
 
 #ifdef CONFIG_HVM
 
-#define VMX_PERF_EXIT_REASON_SIZE 76
+#define VMX_PERF_EXIT_REASON_SIZE 80
 #define VMEXIT_NPF_PERFC 143
 #define SVM_PERF_EXIT_REASON_SIZE (VMEXIT_NPF_PERFC + 1)
 PERFCOUNTER_ARRAY(vmexits,              "vmexits",
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -72,6 +72,12 @@ int guest_rdmsr(struct vcpu *v, uint32_t
     case MSR_AMD_PPIN:
         goto gp_fault;
 
+    case MSR_BARRIER:
+        if ( !cp->feat.msrlist )
+            goto gp_fault;
+        *val = 0;
+        break;
+
     case MSR_IA32_FEATURE_CONTROL:
         /*
          * Architecturally, availability of this MSR is enumerated by the
@@ -341,6 +347,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t
         uint64_t rsvd;
 
         /* Read-only */
+    case MSR_BARRIER:
     case MSR_IA32_PLATFORM_ID:
     case MSR_CORE_CAPABILITIES:
     case MSR_INTEL_CORE_THREAD_COUNT:
--- a/xen/arch/x86/x86_emulate/0f01.c
+++ b/xen/arch/x86/x86_emulate/0f01.c
@@ -11,6 +11,7 @@
 #include "private.h"
 
 #ifdef __XEN__
+#include <xen/event.h>
 #include <asm/prot-key.h>
 #endif
 
@@ -28,6 +29,7 @@ int x86emul_0f01(struct x86_emulate_stat
     switch ( s->modrm )
     {
         unsigned long base, limit, cr0, cr0w, cr4;
+        unsigned int n;
         struct segment_register sreg;
         uint64_t msr_val;
 
@@ -42,6 +44,64 @@ int x86emul_0f01(struct x86_emulate_stat
                                 ((uint64_t)regs->r(dx) << 32) | regs->eax,
                                 ctxt);
             goto done;
+
+        case vex_f3: /* wrmsrlist */
+            vcpu_must_have(msrlist);
+            generate_exception_if(!mode_64bit(), X86_EXC_UD);
+            generate_exception_if(!mode_ring0() || (regs->r(si) & 7) ||
+                                  (regs->r(di) & 7),
+                                  X86_EXC_GP, 0);
+            fail_if(!ops->write_msr);
+            while ( regs->r(cx) )
+            {
+                n = __builtin_ffsl(regs->r(cx)) - 1;
+                if ( (rc = ops->read(x86_seg_none, regs->r(si) + n * 8,
+                                     &msr_val, 8, ctxt)) != X86EMUL_OKAY )
+                    break;
+                generate_exception_if(msr_val != (uint32_t)msr_val,
+                                      X86_EXC_GP, 0);
+                base = msr_val;
+                if ( (rc = ops->read(x86_seg_none, regs->r(di) + n * 8,
+                                     &msr_val, 8, ctxt)) != X86EMUL_OKAY ||
+                     (rc = ops->write_msr(base, msr_val, ctxt)) != X86EMUL_OKAY )
+                    break;
+                regs->r(cx) &= ~(1UL << n);
+
+#ifdef __XEN__
+                if ( regs->r(cx) && local_events_need_delivery() )
+                {
+                    rc = X86EMUL_RETRY;
+                    break;
+                }
+#endif
+            }
+            goto done;
+
+        case vex_f2: /* rdmsrlist */
+            vcpu_must_have(msrlist);
+            generate_exception_if(!mode_64bit(), X86_EXC_UD);
+            generate_exception_if(!mode_ring0() || (regs->r(si) & 7) ||
+                                  (regs->r(di) & 7),
+                                  X86_EXC_GP, 0);
+            fail_if(!ops->read_msr || !ops->write);
+            while ( regs->r(cx) )
+            {
+                n = __builtin_ffsl(regs->r(cx)) - 1;
+                if ( (rc = ops->read(x86_seg_none, regs->r(si) + n * 8,
+                                     &msr_val, 8, ctxt)) != X86EMUL_OKAY )
+                    break;
+                generate_exception_if(msr_val != (uint32_t)msr_val,
+                                      X86_EXC_GP, 0);
+                if ( (rc = ops->read_msr(msr_val, &msr_val,
+                                         ctxt)) != X86EMUL_OKAY ||
+                     (rc = ops->write(x86_seg_none, regs->r(di) + n * 8,
+                                      &msr_val, 8, ctxt)) != X86EMUL_OKAY )
+                    break;
+                regs->r(cx) &= ~(1UL << n);
+            }
+            if ( rc != X86EMUL_OKAY )
+                ctxt->regs->r(cx) = regs->r(cx);
+            goto done;
         }
         generate_exception(X86_EXC_UD);
 
--- a/xen/arch/x86/x86_emulate/private.h
+++ b/xen/arch/x86/x86_emulate/private.h
@@ -588,6 +588,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_lkgs()        (ctxt->cpuid->feat.lkgs)
 #define vcpu_has_wrmsrns()     (ctxt->cpuid->feat.wrmsrns)
 #define vcpu_has_avx_ifma()    (ctxt->cpuid->feat.avx_ifma)
+#define vcpu_has_msrlist()     (ctxt->cpuid->feat.msrlist)
 #define vcpu_has_avx_vnni_int8() (ctxt->cpuid->feat.avx_vnni_int8)
 #define vcpu_has_avx_ne_convert() (ctxt->cpuid->feat.avx_ne_convert)
 
--- a/xen/arch/x86/x86_emulate/util.c
+++ b/xen/arch/x86/x86_emulate/util.c
@@ -100,6 +100,9 @@ bool cf_check x86_insn_is_mem_access(con
         break;
 
     case X86EMUL_OPC(0x0f, 0x01):
+        /* {RD,WR}MSRLIST */
+        if ( mode_64bit() && s->modrm == 0xc6 )
+            return s->vex.pfx >= vex_f3;
         /* Cover CLZERO. */
         return (s->modrm_rm & 7) == 4 && (s->modrm_reg & 7) == 7;
     }
@@ -160,7 +163,11 @@ bool cf_check x86_insn_is_mem_write(cons
         case 0xff: /* Grp5 */
             break;
 
-        case X86EMUL_OPC(0x0f, 0x01): /* CLZERO is the odd one. */
+        case X86EMUL_OPC(0x0f, 0x01):
+            /* RDMSRLIST */
+            if ( mode_64bit() && s->modrm == 0xc6 )
+                return s->vex.pfx == vex_f2;
+            /* CLZERO is another odd one. */
             return (s->modrm_rm & 7) == 4 && (s->modrm_reg & 7) == 7;
 
         default:
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -286,6 +286,7 @@ XEN_CPUFEATURE(FRED,         10*32+17) /
 XEN_CPUFEATURE(LKGS,         10*32+18) /*S  Load Kernel GS Base */
 XEN_CPUFEATURE(WRMSRNS,      10*32+19) /*S  WRMSR Non-Serialising */
 XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
+XEN_CPUFEATURE(MSRLIST,      10*32+27) /*   MSR list instructions */
 
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */



From xen-devel-bounces@lists.xenproject.org Mon May 08 12:34:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:34:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531505.827245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw04v-0005TU-Sy; Mon, 08 May 2023 12:34:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531505.827245; Mon, 08 May 2023 12:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw04v-0005TN-QL; Mon, 08 May 2023 12:34:57 +0000
Received: by outflank-mailman (input) for mailman id 531505;
 Mon, 08 May 2023 12:33:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a0X6=A5=amd.com=RaghavendraPrasad.Mallela@srs-se1.protection.inumbo.net>)
 id 1pw03o-0003kR-3C
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 12:33:48 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20624.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::624])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 92141580-ed9c-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 14:33:45 +0200 (CEST)
Received: from MN0PR12MB6079.namprd12.prod.outlook.com (2603:10b6:208:3c9::13)
 by BL0PR12MB5009.namprd12.prod.outlook.com (2603:10b6:208:1c2::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 12:33:41 +0000
Received: from MN0PR12MB6079.namprd12.prod.outlook.com
 ([fe80::e454:fcba:69ae:728f]) by MN0PR12MB6079.namprd12.prod.outlook.com
 ([fe80::e454:fcba:69ae:728f%7]) with mapi id 15.20.6363.032; Mon, 8 May 2023
 12:33:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92141580-ed9c-11ed-b226-6b7b168915f2
From: "Mallela, RaghavendraPrasad (Raghavendra Prasad)"
	<RaghavendraPrasad.Mallela@amd.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"xen-users@lists.xenproject.org" <xen-users@lists.xenproject.org>
CC: "SHARMA, JYOTIRMOY" <JYOTIRMOY.SHARMA@amd.com>, "Stabellini, Stefano"
	<stefano.stabellini@amd.com>
Subject: Camera Virtualization
Thread-Topic: Camera Virtualization
Thread-Index: AdmBp7+i8bgPkutESA2yV0wZFwMI9w==
Date: Mon, 8 May 2023 12:33:41 +0000
Message-ID:
 <MN0PR12MB6079CF8A38A6EB9FA7B97F29F5719@MN0PR12MB6079.namprd12.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_MN0PR12MB6079CF8A38A6EB9FA7B97F29F5719MN0PR12MB6079namp_"
MIME-Version: 1.0

--_000_MN0PR12MB6079CF8A38A6EB9FA7B97F29F5719MN0PR12MB6079namp_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

[AMD Official Use Only - General]


Hello all,

We want to virtualize a camera that uses the V4L2 Linux drivers, i.e. we
want to use a camera application in DomU.
We searched online and found two approaches to virtualizing the camera.

FE and BE:
The frontend driver is available at https://github.com/andr2000/linux/commits/camera_front_v1/drivers/media/xen
The backend driver is available at https://github.com/andr2000/camera_be

VirtIO implementation:
Collabora implemented a VirtIO camera, available at https://gitlab.collabora.com/collabora/virtio-camera

Has anyone used the above implementations?
Please guide us on which approach is best for camera virtualization.

Raghavendra M


--_000_MN0PR12MB6079CF8A38A6EB9FA7B97F29F5719MN0PR12MB6079namp_--


From xen-devel-bounces@lists.xenproject.org Mon May 08 12:42:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:42:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531521.827254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0CU-0007A9-P8; Mon, 08 May 2023 12:42:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531521.827254; Mon, 08 May 2023 12:42:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0CU-0007A2-MZ; Mon, 08 May 2023 12:42:46 +0000
Received: by outflank-mailman (input) for mailman id 531521;
 Mon, 08 May 2023 12:41:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HagA=A5=huawei.com=linyunsheng@srs-se1.protection.inumbo.net>)
 id 1pw0Aw-00077B-Sm
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 12:41:10 +0000
Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 98b1103a-ed9d-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 14:41:08 +0200 (CEST)
Received: from dggpemm500005.china.huawei.com (unknown [172.30.72.55])
 by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4QFLTg4rC7zpWDH;
 Mon,  8 May 2023 20:39:51 +0800 (CST)
Received: from localhost.localdomain (10.69.192.56) by
 dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Mon, 8 May 2023 20:41:03 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98b1103a-ed9d-11ed-b226-6b7b168915f2
From: Yunsheng Lin <linyunsheng@huawei.com>
To: <netdev@vger.kernel.org>
CC: <linux-rdma@vger.kernel.org>, <virtualization@lists.linux-foundation.org>,
	<xen-devel@lists.xenproject.org>, <bpf@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <edumazet@google.com>, <davem@davemloft.net>,
	<kuba@kernel.org>, <pabeni@redhat.com>, <alexanderduyck@fb.com>,
	<jbrouer@redhat.com>, <ilias.apalodimas@linaro.org>
Subject: [PATCH RFC 2/2] net: remove __skb_frag_set_page()
Date: Mon, 8 May 2023 20:39:22 +0800
Message-ID: <20230508123922.39284-3-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20230508123922.39284-1-linyunsheng@huawei.com>
References: <20230508123922.39284-1-linyunsheng@huawei.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.69.192.56]
X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To
 dggpemm500005.china.huawei.com (7.185.36.74)
X-CFilter-Loop: Reflected

The remaining callers that pass a NULL page to __skb_frag_set_page()
appear to be doing defensive programming: shinfo->nr_frags has already
been decremented, so the frag slot is no longer in use. Remove those
calls, and remove the now-unused __skb_frag_set_page() helper.
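
The pattern being cleaned up can be sketched as follows. This is a
minimal illustration with simplified stand-in types (struct page,
skb_frag_t and skb_shared_info here are not the real skbuff.h
definitions), showing why the NULL store is dead once nr_frags has
been decremented:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures -- illustrative
 * only, not the real skbuff.h definitions. */
struct page { int id; };

typedef struct {
	struct page *bv_page;
} skb_frag_t;

struct skb_shared_info {
	unsigned int nr_frags;
	skb_frag_t frags[4];
};

/* Once nr_frags is decremented, the slot at index nr_frags is no
 * longer part of the skb, so storing NULL into it afterwards (as the
 * removed __skb_frag_set_page() calls did) is a dead store. */
static struct page *reuse_last_frag_page(struct skb_shared_info *shinfo)
{
	shinfo->nr_frags--;
	return shinfo->frags[shinfo->nr_frags].bv_page;
}
```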

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 drivers/net/ethernet/broadcom/bnx2.c      |  1 -
 drivers/net/ethernet/broadcom/bnxt/bnxt.c |  1 -
 include/linux/skbuff.h                    | 12 ------------
 3 files changed, 14 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
index 466e1d62bcf6..0d917a9699c5 100644
--- a/drivers/net/ethernet/broadcom/bnx2.c
+++ b/drivers/net/ethernet/broadcom/bnx2.c
@@ -2955,7 +2955,6 @@ bnx2_reuse_rx_skb_pages(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr,
 		shinfo = skb_shinfo(skb);
 		shinfo->nr_frags--;
 		page = skb_frag_page(&shinfo->frags[shinfo->nr_frags]);
-		__skb_frag_set_page(&shinfo->frags[shinfo->nr_frags], NULL);
 
 		cons_rx_pg->page = page;
 		dev_kfree_skb(skb);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index efaff5018af8..f3f08660ec30 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -1105,7 +1105,6 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 			unsigned int nr_frags;
 
 			nr_frags = --shinfo->nr_frags;
-			__skb_frag_set_page(&shinfo->frags[nr_frags], NULL);
 			cons_rx_buf->page = page;
 
 			/* Update prod since possibly some pages have been
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 0d1027ea81e0..a3c448757b4e 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3491,18 +3491,6 @@ static inline void skb_frag_page_copy(skb_frag_t *fragto,
 	fragto->bv_page = fragfrom->bv_page;
 }
 
-/**
- * __skb_frag_set_page - sets the page contained in a paged fragment
- * @frag: the paged fragment
- * @page: the page to set
- *
- * Sets the fragment @frag to contain @page.
- */
-static inline void __skb_frag_set_page(skb_frag_t *frag, struct page *page)
-{
-	frag->bv_page = page;
-}
-
 bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio);
 
 /**
-- 
2.33.0



From xen-devel-bounces@lists.xenproject.org Mon May 08 12:42:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:42:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531522.827260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0CV-0007DX-1r; Mon, 08 May 2023 12:42:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531522.827260; Mon, 08 May 2023 12:42:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0CU-0007DQ-UK; Mon, 08 May 2023 12:42:46 +0000
Received: by outflank-mailman (input) for mailman id 531522;
 Mon, 08 May 2023 12:41:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HagA=A5=huawei.com=linyunsheng@srs-se1.protection.inumbo.net>)
 id 1pw0Ax-00077B-Hm
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 12:41:11 +0000
Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 98b13109-ed9d-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 14:41:07 +0200 (CEST)
Received: from dggpemm500005.china.huawei.com (unknown [172.30.72.56])
 by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4QFLQG4gGLz18LFB;
 Mon,  8 May 2023 20:36:54 +0800 (CST)
Received: from localhost.localdomain (10.69.192.56) by
 dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Mon, 8 May 2023 20:41:02 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98b13109-ed9d-11ed-b226-6b7b168915f2
From: Yunsheng Lin <linyunsheng@huawei.com>
To: <netdev@vger.kernel.org>
CC: <linux-rdma@vger.kernel.org>, <virtualization@lists.linux-foundation.org>,
	<xen-devel@lists.xenproject.org>, <bpf@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <edumazet@google.com>, <davem@davemloft.net>,
	<kuba@kernel.org>, <pabeni@redhat.com>, <alexanderduyck@fb.com>,
	<jbrouer@redhat.com>, <ilias.apalodimas@linaro.org>
Subject: [PATCH RFC 1/2] net: introduce and use skb_frag_fill_page_desc()
Date: Mon, 8 May 2023 20:39:21 +0800
Message-ID: <20230508123922.39284-2-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20230508123922.39284-1-linyunsheng@huawei.com>
References: <20230508123922.39284-1-linyunsheng@huawei.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.69.192.56]
X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To
 dggpemm500005.china.huawei.com (7.185.36.74)
X-CFilter-Loop: Reflected

Most users fill the page descriptor of an skb frag by calling
__skb_frag_set_page(), skb_frag_off_set() and skb_frag_size_set()
in sequence. Introduce skb_frag_fill_page_desc() to do all three
in one call.

net/bpf/test_run.c does not call skb_frag_off_set() to set the
offset; its "copy_from_user(page_address(page), ...)" suggests it
assumes the offset is zero, so call skb_frag_fill_page_desc() with
an offset of zero for this case.

Also remove skb_frag_set_page(), which no longer has any users.
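
The shape of the new helper can be sketched as below. This is a
self-contained illustration with simplified stand-in types (struct
page and skb_frag_t here are not the real skbuff.h definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel types -- illustrative only. */
struct page { int id; };

typedef struct {
	struct page *bv_page;
	unsigned int bv_offset;
	unsigned int bv_len;
} skb_frag_t;

static inline void skb_frag_size_set(skb_frag_t *frag, unsigned int size)
{
	frag->bv_len = size;
}

/* The new helper: fill page, offset and size of a frag in a single
 * call instead of three separate setters. */
static inline void skb_frag_fill_page_desc(skb_frag_t *frag,
					   struct page *page,
					   int off, int size)
{
	frag->bv_page   = page;
	frag->bv_offset = off;
	skb_frag_size_set(frag, size);
}
```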

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 .../net/ethernet/aquantia/atlantic/aq_ring.c  |  6 ++--
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     |  5 ++--
 drivers/net/ethernet/chelsio/cxgb3/sge.c      |  5 ++--
 drivers/net/ethernet/emulex/benet/be_main.c   | 30 +++++++++----------
 drivers/net/ethernet/freescale/enetc/enetc.c  |  5 ++--
 .../net/ethernet/fungible/funeth/funeth_rx.c  |  5 ++--
 drivers/net/ethernet/marvell/mvneta.c         |  5 ++--
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  4 +--
 drivers/net/ethernet/sun/cassini.c            |  8 ++---
 drivers/net/virtio_net.c                      |  4 +--
 drivers/net/vmxnet3/vmxnet3_drv.c             |  4 +--
 drivers/net/xen-netback/netback.c             |  4 +--
 include/linux/skbuff.h                        | 27 +++++++----------
 net/bpf/test_run.c                            |  3 +-
 net/core/gro.c                                |  4 +--
 net/core/pktgen.c                             | 13 ++++----
 net/core/skbuff.c                             |  7 ++---
 net/tls/tls_device.c                          | 10 +++----
 net/xfrm/xfrm_ipcomp.c                        |  5 +---
 19 files changed, 62 insertions(+), 92 deletions(-)

diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
index 7f933175cbda..4de22eed099a 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
@@ -532,10 +532,10 @@ static bool aq_add_rx_fragment(struct device *dev,
 					      buff_->rxdata.pg_off,
 					      buff_->len,
 					      DMA_FROM_DEVICE);
-		skb_frag_off_set(frag, buff_->rxdata.pg_off);
-		skb_frag_size_set(frag, buff_->len);
 		sinfo->xdp_frags_size += buff_->len;
-		__skb_frag_set_page(frag, buff_->rxdata.page);
+		skb_frag_fill_page_desc(frag, buff_->rxdata.page,
+					buff_->rxdata.pg_off,
+					buff_->len);
 
 		buff_->is_cleaned = 1;
 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index dcd9367f05af..efaff5018af8 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -1085,9 +1085,8 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 			    RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT;
 
 		cons_rx_buf = &rxr->rx_agg_ring[cons];
-		skb_frag_off_set(frag, cons_rx_buf->offset);
-		skb_frag_size_set(frag, frag_len);
-		__skb_frag_set_page(frag, cons_rx_buf->page);
+		skb_frag_fill_page_desc(frag, cons_rx_buf->page,
+					cons_rx_buf->offset, frag_len);
 		shinfo->nr_frags = i + 1;
 		__clear_bit(cons, rxr->rx_agg_bmap);
 
diff --git a/drivers/net/ethernet/chelsio/cxgb3/sge.c b/drivers/net/ethernet/chelsio/cxgb3/sge.c
index efa7f401529e..2e9a74fe0970 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/sge.c
@@ -2184,9 +2184,8 @@ static void lro_add_page(struct adapter *adap, struct sge_qset *qs,
 	len -= offset;
 
 	rx_frag += nr_frags;
-	__skb_frag_set_page(rx_frag, sd->pg_chunk.page);
-	skb_frag_off_set(rx_frag, sd->pg_chunk.offset + offset);
-	skb_frag_size_set(rx_frag, len);
+	skb_frag_fill_page_desc(rx_frag, sd->pg_chunk.page,
+				sd->pg_chunk.offset + offset, len);
 
 	skb->len += len;
 	skb->data_len += len;
diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 7e408bcc88de..49b6a0e28840 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -2343,11 +2343,10 @@ static void skb_fill_rx_data(struct be_rx_obj *rxo, struct sk_buff *skb,
 		hdr_len = ETH_HLEN;
 		memcpy(skb->data, start, hdr_len);
 		skb_shinfo(skb)->nr_frags = 1;
-		skb_frag_set_page(skb, 0, page_info->page);
-		skb_frag_off_set(&skb_shinfo(skb)->frags[0],
-				 page_info->page_offset + hdr_len);
-		skb_frag_size_set(&skb_shinfo(skb)->frags[0],
-				  curr_frag_len - hdr_len);
+		skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[0],
+					page_info->page,
+					page_info->page_offset + hdr_len,
+					curr_frag_len - hdr_len);
 		skb->data_len = curr_frag_len - hdr_len;
 		skb->truesize += rx_frag_size;
 		skb->tail += hdr_len;
@@ -2369,16 +2368,16 @@ static void skb_fill_rx_data(struct be_rx_obj *rxo, struct sk_buff *skb,
 		if (page_info->page_offset == 0) {
 			/* Fresh page */
 			j++;
-			skb_frag_set_page(skb, j, page_info->page);
-			skb_frag_off_set(&skb_shinfo(skb)->frags[j],
-					 page_info->page_offset);
-			skb_frag_size_set(&skb_shinfo(skb)->frags[j], 0);
+			skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[j],
+						page_info->page,
+						page_info->page_offset,
+						curr_frag_len);
 			skb_shinfo(skb)->nr_frags++;
 		} else {
 			put_page(page_info->page);
+			skb_frag_size_add(&skb_shinfo(skb)->frags[j], curr_frag_len);
 		}
 
-		skb_frag_size_add(&skb_shinfo(skb)->frags[j], curr_frag_len);
 		skb->len += curr_frag_len;
 		skb->data_len += curr_frag_len;
 		skb->truesize += rx_frag_size;
@@ -2451,14 +2450,15 @@ static void be_rx_compl_process_gro(struct be_rx_obj *rxo,
 		if (i == 0 || page_info->page_offset == 0) {
 			/* First frag or Fresh page */
 			j++;
-			skb_frag_set_page(skb, j, page_info->page);
-			skb_frag_off_set(&skb_shinfo(skb)->frags[j],
-					 page_info->page_offset);
-			skb_frag_size_set(&skb_shinfo(skb)->frags[j], 0);
+			skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[j],
+						page_info->page,
+						page_info->page_offset,
+						curr_frag_len);
 		} else {
 			put_page(page_info->page);
+			skb_frag_size_add(&skb_shinfo(skb)->frags[j], curr_frag_len);
 		}
-		skb_frag_size_add(&skb_shinfo(skb)->frags[j], curr_frag_len);
+
 		skb->truesize += rx_frag_size;
 		remaining -= curr_frag_len;
 		memset(page_info, 0, sizeof(*page_info));
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 3c4fa26f0f9b..63854294ac33 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1445,9 +1445,8 @@ static void enetc_add_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i,
 		xdp_buff_set_frag_pfmemalloc(xdp_buff);
 
 	frag = &shinfo->frags[shinfo->nr_frags];
-	skb_frag_off_set(frag, rx_swbd->page_offset);
-	skb_frag_size_set(frag, size);
-	__skb_frag_set_page(frag, rx_swbd->page);
+	skb_frag_fill_page_desc(frag, rx_swbd->page, rx_swbd->page_offset,
+				size);
 
 	shinfo->nr_frags++;
 }
diff --git a/drivers/net/ethernet/fungible/funeth/funeth_rx.c b/drivers/net/ethernet/fungible/funeth/funeth_rx.c
index 29a6c2ede43a..7e2584895de3 100644
--- a/drivers/net/ethernet/fungible/funeth/funeth_rx.c
+++ b/drivers/net/ethernet/fungible/funeth/funeth_rx.c
@@ -323,9 +323,8 @@ static int fun_gather_pkt(struct funeth_rxq *q, unsigned int tot_len,
 		if (ref_ok)
 			ref_ok |= buf->node;
 
-		__skb_frag_set_page(frags, buf->page);
-		skb_frag_off_set(frags, q->buf_offset);
-		skb_frag_size_set(frags++, frag_len);
+		skb_frag_fill_page_desc(frags++, buf->page, q->buf_offset,
+					frag_len);
 
 		tot_len -= frag_len;
 		if (!tot_len)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 2cad76d0a50e..01b0312977d6 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2369,9 +2369,8 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 	if (data_len > 0 && sinfo->nr_frags < MAX_SKB_FRAGS) {
 		skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags++];
 
-		skb_frag_off_set(frag, pp->rx_offset_correction);
-		skb_frag_size_set(frag, data_len);
-		__skb_frag_set_page(frag, page);
+		skb_frag_fill_page_desc(frag, page,
+					pp->rx_offset_correction, data_len);
 
 		if (!xdp_buff_has_frags(xdp)) {
 			sinfo->xdp_frags_size = *size;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 69634829558e..704b022cd1f0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -491,9 +491,7 @@ mlx5e_add_skb_shared_info_frag(struct mlx5e_rq *rq, struct skb_shared_info *sinf
 	}
 
 	frag = &sinfo->frags[sinfo->nr_frags++];
-	__skb_frag_set_page(frag, frag_page->page);
-	skb_frag_off_set(frag, frag_offset);
-	skb_frag_size_set(frag, len);
+	skb_frag_fill_page_desc(frag, frag_page->page, frag_offset, len);
 
 	if (page_is_pfmemalloc(frag_page->page))
 		xdp_buff_set_frag_pfmemalloc(xdp);
diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c
index 4ef05bad4613..2d52f54ebb45 100644
--- a/drivers/net/ethernet/sun/cassini.c
+++ b/drivers/net/ethernet/sun/cassini.c
@@ -1998,10 +1998,8 @@ static int cas_rx_process_pkt(struct cas *cp, struct cas_rx_comp *rxc,
 		skb->truesize += hlen - swivel;
 		skb->len      += hlen - swivel;
 
-		__skb_frag_set_page(frag, page->buffer);
+		skb_frag_fill_page_desc(frag, page->buffer, off, hlen - swivel);
 		__skb_frag_ref(frag);
-		skb_frag_off_set(frag, off);
-		skb_frag_size_set(frag, hlen - swivel);
 
 		/* any more data? */
 		if ((words[0] & RX_COMP1_SPLIT_PKT) && ((dlen -= hlen) > 0)) {
@@ -2024,10 +2022,8 @@ static int cas_rx_process_pkt(struct cas *cp, struct cas_rx_comp *rxc,
 			skb->len      += hlen;
 			frag++;
 
-			__skb_frag_set_page(frag, page->buffer);
+			skb_frag_fill_page_desc(frag, page->buffer, 0, hlen);
 			__skb_frag_ref(frag);
-			skb_frag_off_set(frag, 0);
-			skb_frag_size_set(frag, hlen);
 			RX_USED_ADD(page, hlen + cp->crc_size);
 		}
 
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 8d8038538fc4..b4c0d1acb3ae 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1153,9 +1153,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 		}
 
 		frag = &shinfo->frags[shinfo->nr_frags++];
-		__skb_frag_set_page(frag, page);
-		skb_frag_off_set(frag, offset);
-		skb_frag_size_set(frag, len);
+		skb_frag_fill_page_desc(frag, page, offset, len);
 		if (page_is_pfmemalloc(page))
 			xdp_buff_set_frag_pfmemalloc(xdp);
 
diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index f2b76ee866a4..7fa74b8b2100 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -686,9 +686,7 @@ vmxnet3_append_frag(struct sk_buff *skb, struct Vmxnet3_RxCompDesc *rcd,
 
 	BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS);
 
-	__skb_frag_set_page(frag, rbi->page);
-	skb_frag_off_set(frag, 0);
-	skb_frag_size_set(frag, rcd->len);
+	skb_frag_fill_page_desc(frag, rbi->page, 0, rcd->len);
 	skb->data_len += rcd->len;
 	skb->truesize += PAGE_SIZE;
 	skb_shinfo(skb)->nr_frags++;
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index c1501f41e2d8..3d79b35eb577 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1128,9 +1128,7 @@ static int xenvif_handle_frag_list(struct xenvif_queue *queue, struct sk_buff *s
 			BUG();
 
 		offset += len;
-		__skb_frag_set_page(&frags[i], page);
-		skb_frag_off_set(&frags[i], 0);
-		skb_frag_size_set(&frags[i], len);
+		skb_frag_fill_page_desc(&frags[i], page, 0, len);
 	}
 
 	/* Release all the original (foreign) frags. */
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 738776ab8838..0d1027ea81e0 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2411,6 +2411,15 @@ static inline unsigned int skb_pagelen(const struct sk_buff *skb)
 	return skb_headlen(skb) + __skb_pagelen(skb);
 }
 
+static inline void skb_frag_fill_page_desc(skb_frag_t *frag,
+					   struct page *page,
+					   int off, int size)
+{
+	frag->bv_page             = page;
+	frag->bv_offset           = off;
+	skb_frag_size_set(frag, size);
+}
+
 static inline void __skb_fill_page_desc_noacc(struct skb_shared_info *shinfo,
 					      int i, struct page *page,
 					      int off, int size)
@@ -2422,9 +2431,7 @@ static inline void __skb_fill_page_desc_noacc(struct skb_shared_info *shinfo,
 	 * that not all callers have unique ownership of the page but rely
 	 * on page_is_pfmemalloc doing the right thing(tm).
 	 */
-	frag->bv_page		  = page;
-	frag->bv_offset		  = off;
-	skb_frag_size_set(frag, size);
+	skb_frag_fill_page_desc(frag, page, off, size);
 }
 
 /**
@@ -3496,20 +3503,6 @@ static inline void __skb_frag_set_page(skb_frag_t *frag, struct page *page)
 	frag->bv_page = page;
 }
 
-/**
- * skb_frag_set_page - sets the page contained in a paged fragment of an skb
- * @skb: the buffer
- * @f: the fragment offset
- * @page: the page to set
- *
- * Sets the @f'th fragment of @skb to contain @page.
- */
-static inline void skb_frag_set_page(struct sk_buff *skb, int f,
-				     struct page *page)
-{
-	__skb_frag_set_page(&skb_shinfo(skb)->frags[f], page);
-}
-
 bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio);
 
 /**
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index e79e3a415ca9..98143b86a9dd 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -1415,11 +1415,10 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 			}
 
 			frag = &sinfo->frags[sinfo->nr_frags++];
-			__skb_frag_set_page(frag, page);
 
 			data_len = min_t(u32, kattr->test.data_size_in - size,
 					 PAGE_SIZE);
-			skb_frag_size_set(frag, data_len);
+			skb_frag_fill_page_desc(frag, page, 0, data_len);
 
 			if (copy_from_user(page_address(page), data_in + size,
 					   data_len)) {
diff --git a/net/core/gro.c b/net/core/gro.c
index 2d84165cb4f1..6783a47a6136 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -239,9 +239,7 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
 
 		pinfo->nr_frags = nr_frags + 1 + skbinfo->nr_frags;
 
-		__skb_frag_set_page(frag, page);
-		skb_frag_off_set(frag, first_offset);
-		skb_frag_size_set(frag, first_size);
+		skb_frag_fill_page_desc(frag, page, first_offset, first_size);
 
 		memcpy(frag + 1, skbinfo->frags, sizeof(*frag) * skbinfo->nr_frags);
 		/* We dont need to clear skbinfo->nr_frags here */
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index 760238196db1..f56b8d697014 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -2785,14 +2785,17 @@ static void pktgen_finalize_skb(struct pktgen_dev *pkt_dev, struct sk_buff *skb,
 					break;
 			}
 			get_page(pkt_dev->page);
-			skb_frag_set_page(skb, i, pkt_dev->page);
-			skb_frag_off_set(&skb_shinfo(skb)->frags[i], 0);
+
 			/*last fragment, fill rest of data*/
 			if (i == (frags - 1))
-				skb_frag_size_set(&skb_shinfo(skb)->frags[i],
-				    (datalen < PAGE_SIZE ? datalen : PAGE_SIZE));
+				skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[i],
+							pkt_dev->page, 0,
+							(datalen < PAGE_SIZE ?
+							 datalen : PAGE_SIZE));
 			else
-				skb_frag_size_set(&skb_shinfo(skb)->frags[i], frag_len);
+				skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[i],
+							pkt_dev->page, 0, frag_len);
+
 			datalen -= skb_frag_size(&skb_shinfo(skb)->frags[i]);
 			skb->len += skb_frag_size(&skb_shinfo(skb)->frags[i]);
 			skb->data_len += skb_frag_size(&skb_shinfo(skb)->frags[i]);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 2112146092bf..06a94cb50945 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4241,10 +4241,9 @@ static inline skb_frag_t skb_head_frag_to_page_desc(struct sk_buff *frag_skb)
 	struct page *page;
 
 	page = virt_to_head_page(frag_skb->head);
-	__skb_frag_set_page(&head_frag, page);
-	skb_frag_off_set(&head_frag, frag_skb->data -
-			 (unsigned char *)page_address(page));
-	skb_frag_size_set(&head_frag, skb_headlen(frag_skb));
+	skb_frag_fill_page_desc(&head_frag, page, frag_skb->data -
+				(unsigned char *)page_address(page),
+				skb_headlen(frag_skb));
 	return head_frag;
 }
 
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index a7cc4f9faac2..daeff54bdbfa 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -268,9 +268,8 @@ static void tls_append_frag(struct tls_record_info *record,
 		skb_frag_size_add(frag, size);
 	} else {
 		++frag;
-		__skb_frag_set_page(frag, pfrag->page);
-		skb_frag_off_set(frag, pfrag->offset);
-		skb_frag_size_set(frag, size);
+		skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
+					size);
 		++record->num_frags;
 		get_page(pfrag->page);
 	}
@@ -357,9 +356,8 @@ static int tls_create_new_record(struct tls_offload_context_tx *offload_ctx,
 		return -ENOMEM;
 
 	frag = &record->frags[0];
-	__skb_frag_set_page(frag, pfrag->page);
-	skb_frag_off_set(frag, pfrag->offset);
-	skb_frag_size_set(frag, prepend_size);
+	skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
+				prepend_size);
 
 	get_page(pfrag->page);
 	pfrag->offset += prepend_size;
diff --git a/net/xfrm/xfrm_ipcomp.c b/net/xfrm/xfrm_ipcomp.c
index 80143360bf09..9c0fa0e1786a 100644
--- a/net/xfrm/xfrm_ipcomp.c
+++ b/net/xfrm/xfrm_ipcomp.c
@@ -74,14 +74,11 @@ static int ipcomp_decompress(struct xfrm_state *x, struct sk_buff *skb)
 		if (!page)
 			return -ENOMEM;
 
-		__skb_frag_set_page(frag, page);
-
 		len = PAGE_SIZE;
 		if (dlen < len)
 			len = dlen;
 
-		skb_frag_off_set(frag, 0);
-		skb_frag_size_set(frag, len);
+		skb_frag_fill_page_desc(frag, page, 0, len);
 		memcpy(skb_frag_address(frag), scratch, len);
 
 		skb->truesize += len;
-- 
2.33.0



From xen-devel-bounces@lists.xenproject.org Mon May 08 12:42:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:42:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531523.827266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0CV-0007LJ-CJ; Mon, 08 May 2023 12:42:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531523.827266; Mon, 08 May 2023 12:42:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0CV-0007KO-7N; Mon, 08 May 2023 12:42:47 +0000
Received: by outflank-mailman (input) for mailman id 531523;
 Mon, 08 May 2023 12:41:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HagA=A5=huawei.com=linyunsheng@srs-se1.protection.inumbo.net>)
 id 1pw0Ax-00077H-NL
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 12:41:11 +0000
Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 98a72755-ed9d-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 14:41:08 +0200 (CEST)
Received: from dggpemm500005.china.huawei.com (unknown [172.30.72.53])
 by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4QFLSw0tyHzsRLP;
 Mon,  8 May 2023 20:39:12 +0800 (CST)
Received: from localhost.localdomain (10.69.192.56) by
 dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Mon, 8 May 2023 20:41:02 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98a72755-ed9d-11ed-8611-37d641c3527e
From: Yunsheng Lin <linyunsheng@huawei.com>
To: <netdev@vger.kernel.org>
CC: <linux-rdma@vger.kernel.org>, <virtualization@lists.linux-foundation.org>,
	<xen-devel@lists.xenproject.org>, <bpf@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <edumazet@google.com>, <davem@davemloft.net>,
	<kuba@kernel.org>, <pabeni@redhat.com>, <alexanderduyck@fb.com>,
	<jbrouer@redhat.com>, <ilias.apalodimas@linaro.org>
Subject: [PATCH RFC 0/2] introduce skb_frag_fill_page_desc()
Date: Mon, 8 May 2023 20:39:20 +0800
Message-ID: <20230508123922.39284-1-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.69.192.56]
X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To
 dggpemm500005.china.huawei.com (7.185.36.74)
X-CFilter-Loop: Reflected

Most users call __skb_frag_set_page()/skb_frag_off_set()/
skb_frag_size_set() together to fill the page desc for an skb
frag. It does not make much sense to call __skb_frag_set_page()
without also calling skb_frag_off_set(), as the offset may depend
on whether the page is a head page or a tail page, so add
skb_frag_fill_page_desc() to fill all three fields of the page
desc for an skb frag in one call.
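
As a rough, self-contained sketch of the shape of the new helper:
the types below are simplified stand-ins (struct page and skb_frag_t
here are mock-ups, not the kernel definitions), so this illustrates
the idea rather than the exact kernel code.

```c
#include <assert.h>
#include <stddef.h>

/* Mock stand-ins for the kernel types, for illustration only. */
struct page { int id; };

typedef struct skb_frag {
	struct page *page;
	unsigned int offset;
	unsigned int size;
} skb_frag_t;

/* One call fills all three fields of the frag's page desc together,
 * so a caller cannot set the page while forgetting the offset that
 * depends on it. */
static void skb_frag_fill_page_desc(skb_frag_t *frag, struct page *page,
				    int off, int size)
{
	frag->page = page;
	frag->offset = off;
	frag->size = size;
}
```

Keeping the three assignments behind a single entry point is also what
makes the future head-page checking described below possible: there is
exactly one place where a page enters a frag.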

In the future, we can ensure the page in the frag is either the
head page of a compound page or a base page. If it is not, we can
warn about it, convert the tail page to its head page and update
the offset accordingly. Whenever such a warning fires, the caller
is fixed to fill the head page into the frag; once all callers
are fixed, both the warning and the conversion can be removed.

In this way, we can drop the compound_head() calls and use
page_ref_*() directly, as in the below cases:
https://elixir.bootlin.com/linux/latest/source/net/core/page_pool.c#L881
https://elixir.bootlin.com/linux/latest/source/include/linux/skbuff.h#L3383

It may also make it easier to convert the networking stack to use
folios.
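
The simplification being aimed at can be shown with a small mock model
(again, struct page and the two ref helpers below are simplified
illustrations, not the kernel code): today a frag may hold a tail
page, so taking a reference has to go through compound_head() first;
with a head-page guarantee that lookup disappears.

```c
#include <assert.h>

/* Mock model: a compound page is represented by its head page, and
 * every tail page points back at it; a base page points at itself. */
struct page {
	struct page *head;
	int refcount;
};

static struct page *compound_head(struct page *p) { return p->head; }

/* Today: the frag may hold a tail page, so the head must be looked
 * up before touching the refcount. */
static void frag_ref_today(struct page *p)
{
	compound_head(p)->refcount++;
}

/* With the head-page guarantee: the stored page is already the head
 * (or a base page), so the lookup can be dropped. */
static void frag_ref_then(struct page *head)
{
	head->refcount++;
}
```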


Yunsheng Lin (2):
  net: introduce and use skb_frag_fill_page_desc()
  net: remove __skb_frag_set_page()

 .../net/ethernet/aquantia/atlantic/aq_ring.c  |  6 +--
 drivers/net/ethernet/broadcom/bnx2.c          |  1 -
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     |  6 +--
 drivers/net/ethernet/chelsio/cxgb3/sge.c      |  5 +--
 drivers/net/ethernet/emulex/benet/be_main.c   | 30 +++++++-------
 drivers/net/ethernet/freescale/enetc/enetc.c  |  5 +--
 .../net/ethernet/fungible/funeth/funeth_rx.c  |  5 +--
 drivers/net/ethernet/marvell/mvneta.c         |  5 +--
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  4 +-
 drivers/net/ethernet/sun/cassini.c            |  8 +---
 drivers/net/virtio_net.c                      |  4 +-
 drivers/net/vmxnet3/vmxnet3_drv.c             |  4 +-
 drivers/net/xen-netback/netback.c             |  4 +-
 include/linux/skbuff.h                        | 39 +++++--------------
 net/bpf/test_run.c                            |  3 +-
 net/core/gro.c                                |  4 +-
 net/core/pktgen.c                             | 13 ++++---
 net/core/skbuff.c                             |  7 ++--
 net/tls/tls_device.c                          | 10 ++---
 net/xfrm/xfrm_ipcomp.c                        |  5 +--
 20 files changed, 62 insertions(+), 106 deletions(-)

-- 
2.33.0



From xen-devel-bounces@lists.xenproject.org Mon May 08 12:57:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:57:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531540.827285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0Qj-0001WF-MA; Mon, 08 May 2023 12:57:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531540.827285; Mon, 08 May 2023 12:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0Qj-0001W8-JO; Mon, 08 May 2023 12:57:29 +0000
Received: by outflank-mailman (input) for mailman id 531540;
 Mon, 08 May 2023 12:57:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pw0Qi-0001W0-16
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 12:57:28 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0605.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e0f2bc2f-ed9f-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 14:57:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8674.eurprd04.prod.outlook.com (2603:10a6:20b:429::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 12:57:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 12:57:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0f2bc2f-ed9f-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PoHMv6woAZ5h7Z6SDA9NvQP/DYc1BEofPVeZgcdZHlvQTCRCYTQS+j4H8TXufbkJdZl8Oebmfjh4w5jTV+AMO+uR3lWl8qD49P1j8GPOfl1hXgGO1E44q/S+9vCmlriDaNXXkhYMwBR9LxlQObqng2++toPoi72+p6h7B0nihdw2ZpQ7EqF9NAxUpWy1adVc5iN4w4f8jL97xBoG+BnE1bLhc0dkX3PMUzGURoCT3HdwAOy4EkhO3ohxtGJKW8vTMTCB5bGTjpEYsftrxu3AuPsngrNn0dlyAMeeUrGGQgD/n4vlwlwMpySqJuDShGDQsWhPuJ/1Hp/gQTMe+h7IBg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oZd97VnJuH85htric0UZmxZ08snwYPDQyqs2n1VbUkU=;
 b=PYs2CFXbojMmjlHo3W2lQACey/mjp27KcUpbufkv90Lu3brNnWbL8gJ9aiJd14tDsLYnqnbUmqMEQv6ZzQTHBpTQvAPsbWkKylvIN4r+H3c+8c4tULsaxlBkfJz2YXr3kauCX9aOw0ti40qh7Pd0wsLROM4jvtEJz5Su7jJfmlYMuJdCdqVkfJN0/WPa2W/IRZBKPNDBYU3JmZBi6ubniUS4oY0YmdpkMBP58abz3M1hz+KPwZVXaXq34m/HfyLIeg8MMa2LDxDT81wcDOp3kpb1wzTzr5I/zh89pcg2kMpjgdxzSmtTYACo4psoB557bCZUkiazCBIcnYCW3osibQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oZd97VnJuH85htric0UZmxZ08snwYPDQyqs2n1VbUkU=;
 b=5TsnUWAb2BTfL9BlFENtWRf2qPZNZkkhC0AmkTk5+oRlcB6jaflcRfP8P7hwKMRj23/kz7X+aNX6NM6+MIABCa/Bze6Dxm/vthS/zBoT6PX6nH0XXfClbecz+rE12eLvy5xq9PMwnLfuZWtLN4S/C4HHJ7okt+jWzerjlLIqn5b+KBq5esT7CQv0tUyf2WKaVpqp9pOreliQRlt04/l6j1VodE5OZ46Pyfl3GLhSvThqR0s50gd3cPcYKrixnPH/D8OPk6C3KBVduDvjDun42NaydFB2r+5dRawLhP9Tb0Qt+soPIPw+uuJHHB+UZE+5GNVJo/0p+fdf80Vru/MdYg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a261b10b-25e4-3374-d6c4-05b307619d81@suse.com>
Date: Mon, 8 May 2023 14:57:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] build: improve macro references
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0002.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8674:EE_
X-MS-Office365-Filtering-Correlation-Id: 8ebec046-321a-464a-cc9e-08db4fc3c414
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0/hwL6mqy69Eqp4vzxXT8WdzFnMRlFKGs+jJQUwjpSYWQu85jZUbGOw1GrMp9VfKEJbGBSnSM0OumwE2vbJwgKRNlP7QaVncUvtYEpr9tNzrof4RJaJN47/ekMb5KNde6LtMz0cqB0q8FowWhmc7qVSF8xq45pAxjXvHpCZICkwF+NXL/sJCuJ7FQYOeDYCECNpzBmv3D8KJUJR39W54o9kP7pK/Yi9QsxBs/ifeMZcBNTaxyh8ja1LE2ViTGU0RlTe0X1JUC7exjHm+BNuP1SpN1sg9Eyr6lHXyuCuIAg+t6VUzIUM125GrDF04vhUV8CAaIU/AuQ/rDyDdOXgfh2QyvQKrYbwMDlF1ZCn0p3zbuVOQ34L8LyAv6QTm+PgOxzTTmUBEdY4jgi5XxuslQgj6p+kx3QoQMHEdYPvNsTnH+ryL5mmL1dkMNWOOT5SOPkw4mS1e6iuxEqQvvb2sLYuxdUP2Hw9YpbZQi9KQPuopbNFlNut+zXgpdU/VGoOf7WpDebX1YI1dKFwxC7/CHfzwOhdE5x/c67ibIZpegHcF9hzg6htGim3AzpUoIWWhHxV8lTRemEfqgzHIbczP2Qp7Ayqp9p4NGIveh5aLPLbbT6mncTb8mfo5IHRHccvWIFnZGsHfOjyEOvj9vyTWuQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(376002)(346002)(396003)(366004)(39860400002)(451199021)(31686004)(54906003)(2906002)(8936002)(8676002)(316002)(478600001)(6916009)(4326008)(66476007)(5660300002)(66556008)(41300700001)(7416002)(66946007)(6486002)(6512007)(26005)(6506007)(186003)(36756003)(2616005)(558084003)(38100700002)(86362001)(31696002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UUgyVXFOWnlkVHplb1huVE5vREpPdnhqY2wvcGtLMm9tQVhTdEFYTFpWa1Js?=
 =?utf-8?B?bkkzSVArQ085Mko3MjFSMDJBVHZWY1FwOXFZZVJlOGlmeFpoNzRXRzdJYzd3?=
 =?utf-8?B?cVlESW9BNFo0andNemZPVHI0eXVZcmNtb08zK01HMERoeEd3ZEdOallpbWpJ?=
 =?utf-8?B?dVNMbzdURy9lQlM5RFd5L2x5RlJZRUNUT3hReTVVcFlPUVVHT1krYytmY1Fa?=
 =?utf-8?B?UWZkZkxRZzVva2x1c0FTQ1oreW5ybEV3Y25ucnloZlZZcUtjb3BZZ1VFZmV4?=
 =?utf-8?B?Z1N5aVZwZldoaE5kV2UzZVdwQ3o2eTlzKzIxZFdzVTFyM3hORFM4cVlEemhO?=
 =?utf-8?B?d2krbVdZbzh4NVlJeXJxUzk2T1RleTFEZTQ2eEI2Q1JNYnBaVW1HM2srVWJZ?=
 =?utf-8?B?MzZwa3lWWWlad1FSN1NFWGp5VW1jUmdHSlRzdm9NV0F4aHUralhmNGZMdHNx?=
 =?utf-8?B?N3RiZlpDTVQ3dGRsU0dpZ3hWMWo5d3NtQ1NoUThnd3JGUTVPZmpIRzlhT3Ez?=
 =?utf-8?B?TnlXalBWWGVMZkFXTExOYU1wYmltUFpmYi9CMFRxN1cyaDZtVG1nZXdFY09S?=
 =?utf-8?B?c2dTckNkTk1CekhXOXBFMzB1ZmUrakFwNUNKb3NtVWpDdlpKd1E2WTc2UjZr?=
 =?utf-8?B?eHNvamd5L2N5aW5hWHd4bWJSUGJ3WEVBQjlqdHdaeHBrYmpsLzB3NHMrbXFZ?=
 =?utf-8?B?Q05oaDZhYzczcXc1bnc3ZjBWc1Y3YUJybjNRbHRuZ085Q2pjR2tMUXdocFZr?=
 =?utf-8?B?SjFEWGw2VWdKRmgxSEl4TEdaQW9XZWlpYWNWSTdQbUxzdHQzWDRNeWwra1VF?=
 =?utf-8?B?Um5uQW1IK3N5akVjWnYxZU5IRmNGS2ZnUUI4UkhXYWFXQVIvcmhXVFgxc2di?=
 =?utf-8?B?RkNNUENBY0ZiL2JDYy95bnltcm04ME4xSTR4WjlhOHJZbEtsTTAzbkZSdGtX?=
 =?utf-8?B?VGduSUtTcFdFV2RGQXhWYnMyb0ZFWXlkenlmVEhlTkVkaHlxcXc4ZGJnbXVx?=
 =?utf-8?B?UkZzcDhReXVlMXZ6Ry9Mc0RodW9WVnhOcFBNN1g2alJJTmVhUDdNU3liZnJ0?=
 =?utf-8?B?SGV0bXRmc1lwc0pMb2dPZFdhakxrR2crRUEwRDFVWVM1dXNOSFlPeUlicDA0?=
 =?utf-8?B?bmNwVFdBdzdZbUMzK0F4L3RGbUN0bW8zUVRWQmJZVkRPbVlleVZkdWJpYzlB?=
 =?utf-8?B?Y2lTWWkwdWErV1ZZWTh3UVYzclltWnR1VUNVZU1MQ2ZSWUR3YWRqdW9QZG1C?=
 =?utf-8?B?ZUJwd0VBRGZCNEZRV3BKYTlqQitmcVlTZWRtU3YvWGJWYUFyVU04SWZJT1c4?=
 =?utf-8?B?RGlFZkVpZEtaSFJTa0tSN01Pa0pMbHBJcElkUHdYZTR6SWNQakkyODRZb1Fl?=
 =?utf-8?B?bWpHckRmNVA1eFZZMndlcjF4K3JYOEYzUWlKQ2s4NENyNjVTaExJREZyL05R?=
 =?utf-8?B?cklZUzNtZVc3d0dhNTQ3Smg3OHloWlNCc3V3L2prRXhwNlVGenFVUjJLTmdX?=
 =?utf-8?B?SnQrT2ZWU3Q0MVppbnZKTDBNc1dTOWFsZ0lIQjg4RW94Z1psMHdkRFI5cUtz?=
 =?utf-8?B?WFV0SjRocVp4Y3Z4bElmNmhHaklmNnJ6QXhjYU1QTERIbm1nU1VxRUI0bXZU?=
 =?utf-8?B?OGloeFhFNmdJeFBLUHZxTFBZKzFwTmx4QkU4dXRDQkk0eXNROXZ4d2pvT1dK?=
 =?utf-8?B?RlZ6dVJ2Z1hqT2g0ZndER2IyRHJSVmZ6eEd3NmhwOGdKdUljc0VXeDJvQm1B?=
 =?utf-8?B?ZEY4T1JNNitvcldiT1cyeGo1TVk4Z1YvRWlZZjNmem5CQ0RsUjBNemZrYXZv?=
 =?utf-8?B?WitSTUNtZU52NDc1ckhpalBRMWxML3ExQ3RYRVBCZ1FyL1pnUGZpUUpFR1NC?=
 =?utf-8?B?SnB0U3ZFZW9PQlB3c3g5c0lRK01hemo2blZRWEozeWpGN2F3NVdhSDZ4TDBt?=
 =?utf-8?B?cjBzV1JMSXowODQ1c0pXVDNwaFM4dTdWZitiYXJsNEVieXlRcCtZK3d0NzdV?=
 =?utf-8?B?SC9oVW9JUnJGTGh4M2pjOFJjWkRqL2lTQzlYNTZIejB5TlJqR2RXTndVc1Zj?=
 =?utf-8?B?RjlJbEpMZ0dqZUlrUXJNdEYvUkJDRTE5dHc5eVRFcEhvOTJpVlFYTU9USStP?=
 =?utf-8?Q?1rrWuVzHJshELcADOTFDjBjnM?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ebec046-321a-464a-cc9e-08db4fc3c414
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 12:57:23.2058
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bvejrorUrFcg2ywYVIHIlIDz9VfJWT78JklpUOYjdhHeXvchdGXOSSge86AqR7kGnT4Kec7EjqApsSC7rl43jA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8674

Patch 1 is re-posted merely because the RISC-V ack is still missing;
the new patch 2 contextually depends on it.

1: shorten macro references
2: use $(dot-target)

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 12:58:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531542.827294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0RR-000230-VU; Mon, 08 May 2023 12:58:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531542.827294; Mon, 08 May 2023 12:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0RR-00022t-Sx; Mon, 08 May 2023 12:58:13 +0000
Received: by outflank-mailman (input) for mailman id 531542;
 Mon, 08 May 2023 12:58:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pw0RQ-00022e-Oi
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 12:58:12 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on060f.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::60f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fbe7fe27-ed9f-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 14:58:10 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8674.eurprd04.prod.outlook.com (2603:10a6:20b:429::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 12:58:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 12:58:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbe7fe27-ed9f-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AgoIzciQBZD2QD6lfkwZoZWIIe0fFDaUwyz7kDDe1GAeAyNXTGRk4sVONe/kzjP7WkLwmDY4Sguao5ZCNtUEAtiJr4xKoRjIilpGB1h1hy6B/0lqQthp72gdSyflS0MBykDFH12TGXrzN1yZHtkg3MRnW55d6aY5AWWJEcX7cZh8mSUtsu4xN4CieXr+lG9ZmYnE7qnmHwHcTTvpjxGUFEuqTjVIzbbbKeISs+oJHxu4qSDVPWZRoKxdTjjbVquxRIOXV042sU09RCynsrhtfokXQHHYYvQk293CFZWLsiYEBXWcLCbDuqm2v0xC/G4rimKmcpVrSj9kOzC2WFpAHQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fglwV5hSyATtXnFfIAt/M+iOmywYI9rQMwaKbY12iBs=;
 b=li2L7Mrs3Zt3e0BJZw4puX2LA9M2pt9gpW0CgKMhKsAfF7h6mknONhKorQlhew/w3f3ZnKtbzbax5rJ7wZnWF6NeLzjbECb2F3rQqP2ecifU5RfwgOTULvUPptaEa6XRAsh/fN+M6/sWWT0zb84224kCkGHY2ayrGzAApQbRZH4eFFnKw5GLN+EYqsiLhmGBtgG0rhhDJ+dSUUVollBqb6hYse8BtG/iPH4jwNTjilY7LMd1sMUvLTTAjwg1yeWO3VtZ+Lt+RheVeo280TYp6jTmVmmhmwOqBM/fdVq3wgTe6eozLLQ7C1CbHPu43G6Ix56Tf7rxK3CQ58+bhieCjA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fglwV5hSyATtXnFfIAt/M+iOmywYI9rQMwaKbY12iBs=;
 b=EqZCbrQu5RlTIcHiurZkKOwHqWTgRb07ab3GPerxUnbSDOb+OvtVWUuh960C8mYQasEfYSvC0+zdVMPGGHLFqzL/DBoM6pNwkmDbqnXbq+fb9pNPBR7kXF6XEFLNAIac86ZwnnOaxCZbptMPR6JTQ9lWNxy3ngV8sLWQUfyto4vl80OdLm5W78COojIlCdr0bLElPda3tGgpB3/QT4Oz2heKDbRkYxBCAi3oGNvzGPnErXxoZjNaQO9EV84O9Z0Cbwfne1tNKO+mm0wCgwEcRLSNYQRQpDgl8vvcGpjF9rbOuNADUQ7r4XUkrMd1PjJ2Q7AYl7OEGcxMqY8XvWdSTw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <aa76774a-0d73-989e-e054-1b30490160e1@suse.com>
Date: Mon, 8 May 2023 14:58:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 1/2] build: shorten macro references
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <a261b10b-25e4-3374-d6c4-05b307619d81@suse.com>
In-Reply-To: <a261b10b-25e4-3374-d6c4-05b307619d81@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0004.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8674:EE_
X-MS-Office365-Filtering-Correlation-Id: 86563cf3-bae1-4a0b-5659-08db4fc3df4d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pLKH/3zYOnjIB3eINaldLzjZfjqiwG91OAFqByml+dq3liBo73mCuo7gxHtI3SmqIsH6Fl2kgJjpVVv6enO26MBY4YarPiG4NV4lcwWJsKdtG3BPbSJUUruTnVC3Tb8J3ZP3FsY1b72iywh90gB6sCqWRb7yv/QbcXw+vxzJCd/jp5pTTmlphtI2oVfRKXqSIpssJjfVFYXmOjux4iez0P907IiT9QauOK/EcKSVLM3VC47LwtOmjiawj4Q7VjULjl8u2A7AcP1TDVmmPu7KzyQoiGDfZjTGElsgzkN1rBjMan/lebzxTDmhtew7x1URCLSz7389KCsOgXJ8btRT25ODT2+t+6rHeK3YLmH56b0rVx4GBDx1PYzm8Xi1vRUcoMGguSF/n0obzUyrzoonoZUFk7Yi4AeFGA1vSZqRPjqbTpnYls9GeOHgISXFM027oJYVRtcLn8uGX2yw3kJec5CjdIT5tPNSPraxEy9XcwvlESCbWrv/qSoG7aVaiLgKB/1451KW1TM72frM4XdcmrZSu7Tm1U2XmaHo6ziHgnawPOCuZIOqYMZq6Z3+ae7AUElqVupcLt9SU8zD2JzKAyjM//MhMKiCwaMB9USjOMJHA5FkYilOizrNxtms1s0gZI2Iu+doYT6UTAs08uKEDQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(376002)(346002)(396003)(366004)(39860400002)(451199021)(31686004)(54906003)(2906002)(8936002)(8676002)(316002)(478600001)(6916009)(4326008)(66476007)(5660300002)(66556008)(41300700001)(7416002)(66946007)(6486002)(6512007)(26005)(6506007)(186003)(36756003)(2616005)(38100700002)(86362001)(31696002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?b1E4R0xnLzl4WWpqa2VHdmlYVUxSdCtWR2Y3cjY3T2d0ODE5eUZiajNkSHFW?=
 =?utf-8?B?d2xYWWNDMVMwK1dPUHkrQWR0VVg3VFRxVXF2VWZtWnlEd2hhVTNYODk5eXF6?=
 =?utf-8?B?azY1ZVJiSG43bHZaa3hxWWYvdnEzalF4OGwyQnZuWkNDdkpha3U0UDljb0VK?=
 =?utf-8?B?Q0lHOGF0TW92cmhscFRkYWlMZTF5UWsxQjA3TjVvclBhd1dCeElYeFhvVWxT?=
 =?utf-8?B?c0RhTG5pY1NabFNyWXdhVW1VSFRsRnpFbG9QbzBnL3d5VEVTT3V5Rzl4UmJz?=
 =?utf-8?B?YXg4SnVlNU5JT0FZQkpMeUFsRC9iY25MMWVyZFJ2MWZudnp3dk16eUYvSWhF?=
 =?utf-8?B?dUo5YUphc09lMjc1NE5Kai9PKzdISkhvTlhLM1BBRzlXNkFzK2YxNTliZDU0?=
 =?utf-8?B?YmNJTno1YUl1YmJIM3FWQzh6eUNMMURwekY2ZEV6c24wNlNlNUR1Ti8zckNB?=
 =?utf-8?B?OGVPUEplU3JodXJaZ01xN1BnNzVLdFhia0lxbEFYWnk2Y3VnaEpNMGs1OXJ1?=
 =?utf-8?B?SzhVSm14K0V2WVphaVpmOU9LK2VoYlp5aUtjR1BKYm10SEtKZW9TeVNrYThy?=
 =?utf-8?B?MzJPSWNYMElodjc4OXFaWk9qRS9Ncmx2ZjJMQlZmWXdnTStjMVdmLzA2MlFQ?=
 =?utf-8?B?NGZtVkNMalJkN3lodjRGVlUvQ1dNdU52TVFQZVNVYXFsQ0dqNU54YmZ0ZGhO?=
 =?utf-8?B?L2xnUGt3WTlXcWpPL3NIZHJpKytxQklLcGpGU0Z3dXE1Vm8yUktLVGhtV2c4?=
 =?utf-8?B?TWhzenNMUWNScGlBMk05bVlMOTVacGhIZmFJelBDem0zUVZibDVhUC9iVCtQ?=
 =?utf-8?B?cWlWRXN2SC9vdkl5Z0hjbmw5cGZ1UnZXQWh6U01HNkRQSWxWVDdtWEZDWlB0?=
 =?utf-8?B?Wmw2SkxyTlBvdXp2TVJiSDFCSjk1ZWJWSms1ZFY0Zm1NVFVRVWpqZ1BqNVlQ?=
 =?utf-8?B?WjduZmYzT1VPZlgwVHFIUEVNWDBTVTZYeXg4RmFVUEVpNWI1WTAzcXp3ZGI4?=
 =?utf-8?B?bThZNlpKUHU3eDIyd29HSWN0bnczdjRPVnJqYWZzUUs2Y0RsMFVRaDJNelRN?=
 =?utf-8?B?SGN5NlY2UTY5R0xva1hTcXZlQWZOY2FGaHBQdlUxTy95enJsb2paRnM1QUlo?=
 =?utf-8?B?REF5Uy9Ic0l2djZUWXlrYjR5TUg3MGRoSUlCMlkxNTBTU2lUU2ZOUm1OemRo?=
 =?utf-8?B?Rk9GZHNqQXZZWVZBNCt0NVFjTDR0OVp5bGQzQ1d3MFlBQmIxTWFZcmdmQmlG?=
 =?utf-8?B?VUFMZ0NsS3V1SUYyVnFacUR5c3BQY2JjMjZRaDFiTVAvbkY2TjdiZnNjSW50?=
 =?utf-8?B?Q2lMaGJXbHlWbStJbmt2MFBjdzVCU0VnTVF4ZnVuZEV6enJ2YnZsR1c0RTk2?=
 =?utf-8?B?aG9hRklBQm1GTUN6QlRxZ1VoQjY0V2pDa2txbDJZM2k2NHdieFIzWkJtbkNn?=
 =?utf-8?B?eHlmczArWHJxY3JUVWREanBJdWo1cTdHVUtaOUQ1V2kyOXZBNDFUUngyWCto?=
 =?utf-8?B?Qk9jdlMwMWs1OWJmZzlieHpWMUJkaENMdUI3eFk4NmFpcUkxUlI3eWYvNm9p?=
 =?utf-8?B?Sjk5VjNFUUp2TTFmaW52b3lHNzdXRnVnVzQ3anBVQkp4eU05anpMZkhGOVpl?=
 =?utf-8?B?Y01BbHRCWVJnajhibXJ5clJ0MWZaQ2p6NXhUeWpBTENlNXR6YkowckNDZm9Q?=
 =?utf-8?B?emk3T3RUWFh1Nm85RVdNc1VIenQwSWZIekxxN1hROVNUK0l3Mm11T0pWSHU1?=
 =?utf-8?B?b1ZMWXZkNWgwTFpjVGcrb1dDbzJrb3IvdHozUHRpbWlvdi9vM3duWGJnRVNt?=
 =?utf-8?B?TlFNQWZFb0VzQVFSMFk2YkY1VGpDTkNKeGVJYXp1Z3lxTUV1SXJIQkNyb1M1?=
 =?utf-8?B?VDMyK3A0R3g4aFdBdVJpcDFNV2dSNXAxQlBqV3RaOTVIOGxYbE0xS1dTWm52?=
 =?utf-8?B?SXVjUms5UDRtYlBnRmdpRzlRRU4reDgzbzI4c1VpUEUzSW0xLzBqWUVON1g0?=
 =?utf-8?B?ZEtLOHArM29wNkg4UUtvVjZjejQwa2d2RHhOeFh2V2xvTkdYOTZtRmVoY1Ix?=
 =?utf-8?B?TGlZMFNFK3g3dndra2o3bkhHaFJha2tFU2t3Y3E4OWRTTUwxcnpyaEhyNG1L?=
 =?utf-8?Q?gl4+ccmfAI3kuKRlfmGmEttP0?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 86563cf3-bae1-4a0b-5659-08db4fc3df4d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 12:58:08.7816
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IhxlAmTa2/E1NFyj36HhP3C8r8ha9giQVFN8vzFiQnVubcU5cHl9NCVKrtBElBYaE+RGoR9iyBiReCZXRaZGAw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8674

Presumably by copy-and-paste we've accumulated a number of instances
of $(@D)/$(@F), which really is nothing other than $@. The split form
is only needed when we want to e.g. insert a leading . at the
beginning of the file name portion of the full name.
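
A quick way to see the equivalence (and the one case where the split
form is still needed) is a throwaway makefile; the /tmp/demo.mk path
and the target name are of course just illustrative:

```shell
# Write a makefile whose recipe prints $@ alongside its split and
# "dotted" forms (printf's single quotes keep the make variables
# literal, and \t supplies the recipe's leading tab).
printf 'build/xen-syms:\n\t@echo "whole: $@"\n\t@echo "split: $(@D)/$(@F)"\n\t@echo "dotted: $(@D)/.$(@F).map"\n' > /tmp/demo.mk

# "whole" and "split" print the same path; only the dotted variant,
# which injects a . before the file name portion, differs.
make -s -f /tmp/demo.mk build/xen-syms
```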

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
v2: Insert blanks after ">".

--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -104,9 +104,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
 	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
 	    $(@D)/.$(@F).1.o -o $@
-	$(NM) -pa --format=sysv $(@D)/$(@F) \
+	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
-		>$(@D)/$(@F).map
+		> $@.map
 	rm -f $(@D)/.$(@F).[0-9]*
 
 .PHONY: include
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -10,9 +10,9 @@ $(TARGET): $(TARGET)-syms
 
 $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
-	$(NM) -pa --format=sysv $(@D)/$(@F) \
+	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
-		>$(@D)/$(@F).map
+		> $@.map
 
 $(obj)/xen.lds: $(src)/xen.lds.S FORCE
 	$(call if_changed_dep,cpp_lds_S)
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -150,9 +150,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
 	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
 	    $(orphan-handling-y) $(@D)/.$(@F).1.o -o $@
-	$(NM) -pa --format=sysv $(@D)/$(@F) \
+	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
-		>$(@D)/$(@F).map
+		> $@.map
 	rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
 ifeq ($(CONFIG_XEN_IBT),y)
 	$(SHELL) $(srctree)/tools/check-endbr.sh $@
@@ -224,8 +224,9 @@ endif
 	$(MAKE) $(build)=$(@D) .$(@F).1r.o .$(@F).1s.o
 	$(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
 	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
-	$(NM) -pa --format=sysv $(@D)/$(@F) \
-		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
+	$(NM) -pa --format=sysv $@ \
+		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
+		> $@.map
 ifeq ($(CONFIG_DEBUG_INFO),y)
 	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:$(space))$(OBJCOPY) -O elf64-x86-64 $@ $@.elf
 endif



From xen-devel-bounces@lists.xenproject.org Mon May 08 12:59:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 12:59:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531547.827305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0S6-0002aa-BY; Mon, 08 May 2023 12:58:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531547.827305; Mon, 08 May 2023 12:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0S6-0002aT-6v; Mon, 08 May 2023 12:58:54 +0000
Received: by outflank-mailman (input) for mailman id 531547;
 Mon, 08 May 2023 12:58:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pw0S5-0002aJ-4r
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 12:58:53 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0622.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::622])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1437805a-eda0-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 14:58:51 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8674.eurprd04.prod.outlook.com (2603:10a6:20b:429::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 12:58:49 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 12:58:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1437805a-eda0-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lqWCsZ3wiHEq++gmGNGyHg3Vhmx0W7jmgW0yyPO2i7+uf/UQT2O4dQo4YpsDZ5hVNpexIYKY780zIAsjXcr9qRO2F6Z4XCcRHLWbcuwAjPwfm2KPp8RdaPKAsIxWn4oEtR7K1qRgXqEfYxaAoAo463xk8N2+2QEUg1eQ8REoRnO2uMV0sOzUdl6StiBy1mGtHPnvpw9+UmHaNOVWV//Vthj4I15tng3G5/SE7zuYEYVOk9cVYRGdTF9CqahG9cIuX/2BjRBnhcX3njR4RyvYo8IRKbrlfFo+OBq5qktAFtrh6yXeNOnxrZgwo36tI8u38yo6kfev/5ok8XD4CiU7Yw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bHHkwJRRJgd96xEOMAOZhq2iePtG+6G1NYpCQL/zg1U=;
 b=kIpnEUTTqWCEgkJZWBta46WAgUEk2hAXj78hfhnn/Rs0fCeLY5kyiIfEwqNx0FQcaEudWIGI6b26Y9Nef/y8KZ47rKIL23kFa+mDY6ypTXU4m84bVrSzXNPJcihmFwSdasDL0YdP3Krw5VribuoyfChU/+7G/X0Yz0pUarWt3ny/lI69MSDBtPWJ9yMruo3+VSDzKj59grs97voQeEM9qV2IuKNh0ADHcmwKV55+XHJqcm8UY2gcKt+jVfdKT6p2ZAcBU4Y2iKqL13qmFGbMl3Xu53gqTKlh8a9WYiM5B5zSOMyU69IPVcYiB2BqVttoUptpld1d88qz3Xpk8V9dlw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bHHkwJRRJgd96xEOMAOZhq2iePtG+6G1NYpCQL/zg1U=;
 b=5zgk+P+vClAQrjKkL+zGFmZMjg+N0Y87gBYJ4nEBNLPKD/W/m3F7t683lncQVOw/z5h7rcJLACtUe6WkXQeQvEt9utJ8DHBCncQx6jBSYrrxMaiCSOL8xKi4JkfCVEEfu9zY+NYDtX52YMUDw0Gp3ei3tHjj2lXOTe0XObNSTdAb4V+Pvpj5+AIHONHSxWPkjsL/3nOHzskr0S2+EvorLpfJpKKlW1iynUSQTqFT/ReFc4sKqOpj+cTeLgd0I/BGSoj1Kdqb1a4H8F/MJfOxHzwHexXBY8uMr0/aBk1a/JpAJ6Gt3BU8hixmSr1OET1WdyDgKqX0EONnwRsseueJIA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9bc7544b-659f-4c09-f54d-647641483605@suse.com>
Date: Mon, 8 May 2023 14:58:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v2 2/2] build: use $(dot-target)
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <a261b10b-25e4-3374-d6c4-05b307619d81@suse.com>
In-Reply-To: <a261b10b-25e4-3374-d6c4-05b307619d81@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0179.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8674:EE_
X-MS-Office365-Filtering-Correlation-Id: cb520c61-304b-4e10-e692-08db4fc3f796
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+qcLVVbR2tPQvT/Hwjz1Tf79eKVRjPm75hAUbw2M9KbIVeViFSRSuL2kj3xsGGAesHdxJurjZd/rItXUqgB2FBrxy6/LcaY4unMxwEjVqCAaGDa+2L5UXOOZGkyoBfFVlHcEHl1WauJLBA9ZTXYsQUMRGBrmcQqA73j2nlc2RZSfuhlJ8o4dkCm64Hadv2MQdvGZ65dUEl5QaRIPSfBVhyKYJVE4IvqY8GOSqgHOxn/HFFUHsRa/lunvp1kh8BSWkN25O0XOpSReZMdlXua0LaMELicgYjnHPosDj/UEghmk0eEWuUB9ARH4N+Vs121KRgrsbNzdIQFdv2u1dGq9mbC23K43b/yFdtz+ewmjQiLsHwIMVB1cqfoFpIUYYz46LKZuB7R11RvtNmodIsHgQm/0fG7AdwUv4Ee/DHcuDJAPyEV27wgvz9EZkrOmPPvTwnQ2mWMygw38TyQJGxmK+TW/8ZIvM9UMv4V+g+YBQyF40b4KkI5NRObQmAsGt5lBryy4Vxoy4zRoYYcVF8GRBwd+W9vVtvZMGbN0NBtJUz4ytyGV8NTAFFpWpjHiz2vYDrxhNwuXO+Ufmzdo3uDkqW/h0llGFzkTUhoO2m+11btDg/NEX75sWpiAuTzJic0IUAr9Gy5d3lP6KWUPdjQ1oQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(376002)(346002)(396003)(366004)(39860400002)(451199021)(31686004)(54906003)(2906002)(8936002)(8676002)(316002)(478600001)(6916009)(4326008)(66476007)(5660300002)(66556008)(41300700001)(66946007)(6486002)(6512007)(26005)(6506007)(186003)(36756003)(2616005)(38100700002)(86362001)(31696002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VjQvOUVCWVNpVmZrVHFobDlFNHlpMmJBYUgyL20vRUxvUTlIUmRjTC92MDkv?=
 =?utf-8?B?OGM4Z3JiV1BQT09PLzhTcFBMSXdxbkw1SUFSTWx3RHJyZGtXTHg0OGsydGJj?=
 =?utf-8?B?WjlqMkF5SlhJaHk4THRCb3NvcVpSWXpzMjJmc3NRLzZrTHk3dnhKMmlYUjQv?=
 =?utf-8?B?YXpNZm9Wd09HNnZ1T1BoeHRxWXVxWERvMlZjMjltNmxyTW1YZjBhWE5FV2Zw?=
 =?utf-8?B?SWI1M3VZTVQ1QUFvZUxXdlJEK1NtYUJyNVF1bElwSUxnd3AyNDVMdW1KQnlC?=
 =?utf-8?B?Rm4vZmdFRzNhUytGaDVhS1lDSWFWcWMrNzlRcVFvcUVmekFPaE9BcmdjdEZT?=
 =?utf-8?B?ZnN0eG5jKytCRlNHZkhQTGl2Q3NiSWMwcUlPMWxMd1Vra0s3RzVFQ2s0M1Z6?=
 =?utf-8?B?UzY5TmVzaEtBY0ViZkdOekkybGdmeFFCejRVSWxHU2RjaFRzc1hobWJGaloy?=
 =?utf-8?B?MXVYRzFObHE4ZDdNbGNMTUtyeW9RckF6dFFwWlhKenR3S1p5T1dQa0RTem5P?=
 =?utf-8?B?Unc3Zm4yOFlSN0Y5VjFGUjN5MkhFSXgzM0Y1T2V6c0doU0s1UExpbjZ2SGsv?=
 =?utf-8?B?b3hiSmZFV0xwV3NqSlBGR211Y1V3d1ZYYTVGdmdHMkRaOHJxSDQ5cHhWbkJr?=
 =?utf-8?B?M0ZlTVBaWFl2dzdDMmNjUTBmT3BvaEZFdEdxWUFuWEc5d2Q2cWp0dDQxclFN?=
 =?utf-8?B?RHg3UzN4bkQ2TGdBRGhjaVFsZlJnMGNzN0t1bmpZSzhROUYvZjlwelF0VXU4?=
 =?utf-8?B?cVVBaGNVdDI1MWY5VE1xYWF4Y3hDdnU5ZkZHdXMvMUxCVmh2b0grNk8zUHVm?=
 =?utf-8?B?SVd2ekRYRkZhZmRaeFZQcEg0RTFtd1FQbXA0VUFGSzByaE1pSXJsa2VjVGJP?=
 =?utf-8?B?YVVQcHBqSEhCQ3JwMWVwT2NJS2FOTUlJbVNlT2dDRnpnS2wrcDJ6Zmk1WlZV?=
 =?utf-8?B?eUpwNG9OemFIK09NMHY1OTVqd1R4OE1HcUQyWHRlYkV1cUQwYmFGWG5EeEZM?=
 =?utf-8?B?ZlJDY2xpWUdaR0pqaGxkY05wcFU0K2R1RUhpdTkwa083TkhzenRoOHN4RXFX?=
 =?utf-8?B?aUF2Nk5VUlRqblNsSWRyVWhBZTllbi9LU0o5WVNjSmN4OVhsWUNObGNLRTRr?=
 =?utf-8?B?Qll5VkpybHFSVjFVNFlLRVl6TzNzai9VSWgra0dmVklmSlIwVVUyZkx5MDB6?=
 =?utf-8?B?Q3Y3NVo2d0h3dkIrVmpwYmY4cW1JV1lCU1R1bThOZ1paWjZDZ09GaGxGeHFp?=
 =?utf-8?B?STZlcHp0SVJEeU85V0VPVHFVMG1aTmRqTWdUQ2Y0OVB1VEswU1ZBZDBXQlhm?=
 =?utf-8?B?QVJTMXpxMGo2em5QWDZzeW8wSUNqNk5PYzFXMEFyMjlWOVBSYmMyWDJGdWlX?=
 =?utf-8?B?eW5CZ3RWdjJTcmpxdnF4UUtXUUsraXduNUR4ekZCV00wSk95SmYrcEFNbnhM?=
 =?utf-8?B?M1pFZmJUczhZS0RNakNpZlZ5U3RHR1BWaVo3RkdJbm5ITldTZW1qN2c4YkRH?=
 =?utf-8?B?MUdlS0d0T1JIdkc5cE5xTHBWMDhnT25oRzcxdXNUaUpmQ2lwR2ZTR1pocFJv?=
 =?utf-8?B?MUFETHBsSlh1NkVvVnp2eng5cjI4MW5pd2lKYTZzTVRsYmN2VmVxd081QU5I?=
 =?utf-8?B?TlFMTWpVR1JxT1FvZDhFaEtFWnNTUzRLaEdoYm9GdlFEMkpDamJOYkxBaE8z?=
 =?utf-8?B?OEwwR3NoakZaVDlMNk9QYWhRdWVGTFVDU2hhb1IrYW8yVDdVZUlFNEJ4akhY?=
 =?utf-8?B?N3ZJZEE3cWdKNkdWMklSS3NXR1UzS2srbHhHQllJVDR4SkU0U2MwNjZUWHE2?=
 =?utf-8?B?RmlKSjNwOGJiYVRVN1k1TnVmUUl5SkRiNXZOd1BUeGQyL0pPNTgrNjlpaEVl?=
 =?utf-8?B?NUhHdnpVOUpMaENJaCtYcldJK0JXR3BXNlRWWTF1dGVKTlJWd0VqN2NtTnFF?=
 =?utf-8?B?cFFTT3ZCaTA0Nk9mdU04MEk4MGgzaUxET2NxY0tVZE9BRFdVSE1nSVRrVTlG?=
 =?utf-8?B?TUFMUkxId1pCUm05ZXhDWUx1bTgxVHBxSmM1Qno3eVE1SStGM1dEOENLVmh0?=
 =?utf-8?B?bkU1RlFUdUY0S1pnbzhjV2ZWWENOOWxQMlkxSDh0emxXQW1ndENkaTAzSlFV?=
 =?utf-8?Q?5F+2T5miaiaDt/hVFXK6/oBnS?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cb520c61-304b-4e10-e692-08db4fc3f796
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 12:58:49.5245
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Wjonmp//N/rTjPMR8ZsH+G6Ut/Uy+D2Kkb0m4ymSAXwbMdK1KYaEEHTFQquZG0OrYCOhle7zwkYwDD5nc0FLgw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8674

While the result is slightly longer, I agree with Andrew that using
$(dot-target) helps readability. Where the lines are being touched
anyway, also wrap some overly long ones.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

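For reference (not part of the patch): dot-target, as in Linux's
Kbuild.include, is the target path with a dot prefixed to the file name,
so $(dot-target) and the open-coded $(@D)/.$(@F) expand identically. A
minimal stand-alone sketch of the equivalence (the demo makefile path,
target name, and use of .RECIPEPREFIX are illustrative only; GNU make
>= 3.82 assumed):

```shell
# Hypothetical demo: define dot-target the way Kbuild.include does and
# show that it matches the open-coded $(@D)/.$(@F) spelling.
cat > /tmp/dot-target-demo.mk <<'EOF'
.RECIPEPREFIX := >
dot-target = $(dir $@).$(notdir $@)

demo/xen-syms:
>@echo '$(dot-target).0'
>@echo '$(@D)/.$(@F).0'
EOF
# Both lines print demo/.xen-syms.0
make -s -f /tmp/dot-target-demo.mk demo/xen-syms
```

The only difference between the two spellings is that $(dir ...) keeps
the trailing slash which the open-coded form adds back explicitly, so
the concatenated results coincide.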
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -93,17 +93,19 @@ endif
 
 $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< \
-	    $(objtree)/common/symbols-dummy.o -o $(@D)/.$(@F).0
-	$(NM) -pa --format=sysv $(@D)/.$(@F).0 \
-		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).0.S
-	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).0.o
+	    $(objtree)/common/symbols-dummy.o -o $(dot-target).0
+	$(NM) -pa --format=sysv $(dot-target).0 \
+		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
+		> $(dot-target).0.S
+	$(MAKE) $(build)=$(@D) $(dot-target).0.o
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< \
-	    $(@D)/.$(@F).0.o -o $(@D)/.$(@F).1
-	$(NM) -pa --format=sysv $(@D)/.$(@F).1 \
-		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).1.S
-	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
+	    $(dot-target).0.o -o $(dot-target).1
+	$(NM) -pa --format=sysv $(dot-target).1 \
+		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
+		> $(dot-target).1.S
+	$(MAKE) $(build)=$(@D) $(dot-target).1.o
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
-	    $(@D)/.$(@F).1.o -o $@
+	    $(dot-target).1.o -o $@
 	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
 		> $@.map
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -123,7 +123,7 @@ syms-warn-dup-$(CONFIG_ENFORCE_UNIQUE_SY
 
 orphan-handling-$(call ld-option,--orphan-handling=warn) += --orphan-handling=warn
 
-$(TARGET): TMP = $(@D)/.$(@F).elf32
+$(TARGET): TMP = $(dot-target).elf32
 $(TARGET): $(TARGET)-syms $(efi-y) $(obj)/boot/mkelf32
 	$(obj)/boot/mkelf32 $(notes_phdrs) $(TARGET)-syms $(TMP) $(XEN_IMG_OFFSET) \
 	               `$(NM) $(TARGET)-syms | sed -ne 's/^\([^ ]*\) . __2M_rwdata_end$$/0x\1/p'`
@@ -137,23 +137,23 @@ CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_E
 
 $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
-	    $(objtree)/common/symbols-dummy.o -o $(@D)/.$(@F).0
-	$(NM) -pa --format=sysv $(@D)/.$(@F).0 \
+	    $(objtree)/common/symbols-dummy.o -o $(dot-target).0
+	$(NM) -pa --format=sysv $(dot-target).0 \
 		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
-		>$(@D)/.$(@F).0.S
-	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).0.o
+		> $(dot-target).0.S
+	$(MAKE) $(build)=$(@D) $(dot-target).0.o
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
-	    $(@D)/.$(@F).0.o -o $(@D)/.$(@F).1
-	$(NM) -pa --format=sysv $(@D)/.$(@F).1 \
+	    $(dot-target).0.o -o $(dot-target).1
+	$(NM) -pa --format=sysv $(dot-target).1 \
 		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort $(syms-warn-dup-y) \
-		>$(@D)/.$(@F).1.S
-	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
+		> $(dot-target).1.S
+	$(MAKE) $(build)=$(@D) $(dot-target).1.o
 	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
-	    $(orphan-handling-y) $(@D)/.$(@F).1.o -o $@
+	    $(orphan-handling-y) $(dot-target).1.o -o $@
 	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
 		> $@.map
-	rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
+	rm -f $(dot-target).[0-9]* $(@D)/..$(@F).[0-9]*
 ifeq ($(CONFIG_XEN_IBT),y)
 	$(SHELL) $(srctree)/tools/check-endbr.sh $@
 endif
@@ -210,27 +210,34 @@ ifeq ($(CONFIG_DEBUG_INFO),y)
 endif
 	$(foreach base, $(VIRT_BASE) $(ALT_BASE), \
 	          $(LD) $(call EFI_LDFLAGS,$(base)) -T $(obj)/efi.lds -N $< $(relocs-dummy) \
-	                $(objtree)/common/symbols-dummy.o $(note_file_option) -o $(@D)/.$(@F).$(base).0 &&) :
-	$(MKRELOC) $(foreach base,$(VIRT_BASE) $(ALT_BASE),$(@D)/.$(@F).$(base).0) >$(@D)/.$(@F).0r.S
-	$(NM) -pa --format=sysv $(@D)/.$(@F).$(VIRT_BASE).0 \
-		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).0s.S
+	                $(objtree)/common/symbols-dummy.o $(note_file_option) \
+	                -o $(dot-target).$(base).0 &&) :
+	$(MKRELOC) $(foreach base,$(VIRT_BASE) $(ALT_BASE),$(dot-target).$(base).0) \
+		> $(dot-target).0r.S
+	$(NM) -pa --format=sysv $(dot-target).$(VIRT_BASE).0 \
+		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
+		> $(dot-target).0s.S
 	$(MAKE) $(build)=$(@D) .$(@F).0r.o .$(@F).0s.o
 	$(foreach base, $(VIRT_BASE) $(ALT_BASE), \
 	          $(LD) $(call EFI_LDFLAGS,$(base)) -T $(obj)/efi.lds -N $< \
-	                $(@D)/.$(@F).0r.o $(@D)/.$(@F).0s.o $(note_file_option) -o $(@D)/.$(@F).$(base).1 &&) :
-	$(MKRELOC) $(foreach base,$(VIRT_BASE) $(ALT_BASE),$(@D)/.$(@F).$(base).1) >$(@D)/.$(@F).1r.S
-	$(NM) -pa --format=sysv $(@D)/.$(@F).$(VIRT_BASE).1 \
-		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).1s.S
+	                $(dot-target).0r.o $(dot-target).0s.o $(note_file_option) \
+	                -o $(dot-target).$(base).1 &&) :
+	$(MKRELOC) $(foreach base,$(VIRT_BASE) $(ALT_BASE),$(dot-target).$(base).1) \
+		> $(dot-target).1r.S
+	$(NM) -pa --format=sysv $(dot-target).$(VIRT_BASE).1 \
+		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
+		> $(dot-target).1s.S
 	$(MAKE) $(build)=$(@D) .$(@F).1r.o .$(@F).1s.o
 	$(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
-	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
+	      $(dot-target).1r.o $(dot-target).1s.o $(orphan-handling-y) \
+	      $(note_file_option) -o $@
 	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
 		> $@.map
 ifeq ($(CONFIG_DEBUG_INFO),y)
 	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:$(space))$(OBJCOPY) -O elf64-x86-64 $@ $@.elf
 endif
-	rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
+	rm -f $(dot-target).[0-9]* $(@D)/..$(@F).[0-9]*
 ifeq ($(CONFIG_XEN_IBT),y)
 	$(SHELL) $(srctree)/tools/check-endbr.sh $@
 endif



From xen-devel-bounces@lists.xenproject.org Mon May 08 13:01:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 13:01:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531554.827315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0U1-000488-Q6; Mon, 08 May 2023 13:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531554.827315; Mon, 08 May 2023 13:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0U1-000481-NA; Mon, 08 May 2023 13:00:53 +0000
Received: by outflank-mailman (input) for mailman id 531554;
 Mon, 08 May 2023 13:00:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw0U0-00047l-MC; Mon, 08 May 2023 13:00:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw0U0-0007aH-JG; Mon, 08 May 2023 13:00:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw0Tz-0001dC-Te; Mon, 08 May 2023 13:00:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pw0Tz-0001Dh-TB; Mon, 08 May 2023 13:00:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0UbqY4+zBLVl0og9KoSVd61a0MTwPxmC32m3yj5RHZY=; b=Xxoq0gVY1t7bpTytF66gdaSP4e
	+yMmep7vvsFcBzGHmHCMX7+XCQEV1W5PTxgVNPcrxz91hWTa802XYb6iIFhWg5wSaEUzpIUEGe8Cz
	Zcyv4SBVVzky1A3KgBn2kjgKc5Z0opQpRm0y8aNCC+Uoz6qEtmv5pyxNsZ81zan105zU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180574-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180574: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ac9a78681b921877518763ba0e89202254349d1b
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 May 2023 13:00:51 +0000

flight 180574 linux-linus real [real]
flight 180576 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180574/
http://logs.test-lab.xenproject.org/osstest/logs/180576/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ac9a78681b921877518763ba0e89202254349d1b
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   21 days
Failing since        180281  2023-04-17 06:24:36 Z   21 days   38 attempts
Testing same since   180571  2023-05-07 22:11:42 Z    0 days    2 attempts

------------------------------------------------------------
2359 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 296910 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 08 13:18:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 13:18:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531565.827324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0l0-0005pE-8f; Mon, 08 May 2023 13:18:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531565.827324; Mon, 08 May 2023 13:18:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0l0-0005p7-64; Mon, 08 May 2023 13:18:26 +0000
Received: by outflank-mailman (input) for mailman id 531565;
 Mon, 08 May 2023 13:18:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pw0kz-0005od-B9
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 13:18:25 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20623.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::623])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cfb06236-eda2-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 15:18:24 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6960.eurprd04.prod.outlook.com (2603:10a6:803:12d::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 13:18:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 13:18:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfb06236-eda2-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KfWN+H1uAMUpoMXaldDpKZ/Z42pc2mh7q4TGXESMhcPIwv1NrCn+wWkbGz/XeQyNH4sR/5nvBVBuAm2+WFBF6757/GCfO0SQ4mAr+rAKCABa3XIi06dg6OxYqklW8PUS7xFVh4SkUcKq305gtGqDbI/XRTCat8BMXFrjtQ6HjdUaLWgdC01nEpJmvKySjCyyFj/66IELgQGFBQthxL12G9108+3ok8hhYnBFwVXRB79zxlbXLCwk00oBhMaNYUAe/yHakeXw0Ft4O7mD8LfYoTfdJY+0fZwUFGFR1jfhZbuL2BkRs4Y1n1TfnLQQQbSjlqpNpwWURT9EUOob8H2wRA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fWRPL6YneaoJ91jGEXzcRq759h8bxvZnyP2YvrxalZQ=;
 b=JezcqsJmsQEbTP45NYM7m2gml4Xw+4tG3wE02FRrI5AyPoaMcNl411mqeflATSJRUzu3YTNhJcg4h5NaOslhneUSy0YEDobDqgf66nAoiRIyePOi5c9ChpjF4EBXsEgYJFawRFiAPlRD+67VHCxLhN3J138s0RIH9bLnCxKsxCcXYTGuZf4Vx2tQeIPo5brXsdnZHFtOaydM0RsVTQsjYmqjVB/mMbEHJaWq9+n9EvhYfTLlKefMX7/yza0wzivpH3glF3HBvsMO4b65WFiQDh6qoZVbbHgky3Nq1MI+WafxuMIJITtoh+rxo+NWDXVrJ1HKR3/V2vtZ/gDvKQepYw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fWRPL6YneaoJ91jGEXzcRq759h8bxvZnyP2YvrxalZQ=;
 b=Glm/1kuGBMwg1S9vunwndR48fmCjYwRwcKR51FHsXYvJct9aosctaLMEnJuoB5DPUBB+0kEBGudI0caRC929CCFdqddKvrXLs+dkzHn9sHchEz007Onx7QAESE2SCLvR2pZ3xJEK8Jr5+Z/mVACFqvQnvObdEm8RqcGPFsIreQFOqguQ1WAtP7owkv9M2Mb1y/whweHCXLFefG0MrGFs2peM1AOEHpj62rW46iluvcAnQ66ixHOGfj+gCsgSkhsgRenYwYDnCl/vJqCc+s3oWRt69M6ePF+RXDGS/MzprsGOlqG346fWIC/eoL0wMUm2n0oNjAH10z4cZFd2vXylIQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d8c9728b-b615-7229-2e76-106d81802a20@suse.com>
Date: Mon, 8 May 2023 15:18:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 3/3] x86: Use CpuidUserDis if an AMD HVM guest toggles
 CPUID faulting
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
 <20230505175705.18098-4-alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230505175705.18098-4-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0210.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ad::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB6960:EE_
X-MS-Office365-Filtering-Correlation-Id: 281687fc-17d3-4262-a086-08db4fc6b2ca
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 281687fc-17d3-4262-a086-08db4fc6b2ca
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 13:18:22.6322
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /8Gb5HPm8gE9PVNJvmQOVHVh8hk9MoI6+9m7jaFjdydQSe21v/rXHpIfO+Fx16JmWlvVtAL/DDKAYvFxx+CVqw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6960

On 05.05.2023 19:57, Alejandro Vallejo wrote:
> This is in order to aid guests of AMD hardware that we have exposed
> CPUID faulting to. If they try to modify the Intel MSR that enables
> the feature, trigger levelling so AMD's version of it (CpuidUserDis)
> is used instead.
> 
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---
>  xen/arch/x86/msr.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)

Don't you also need to update cpu-policy.c:calculate_host_policy()
for the guest to actually know it can use the functionality? Which
in turn would appear to require some form of adjustment to
lib/x86/policy.c:x86_cpu_policies_are_compatible().
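As a toy illustration only (not the actual x86_cpu_policies_are_compatible() implementation, and FEATURESET_WORDS plus the bitmap layout are invented here), the kind of subset check such a compatibility routine has to perform on featuresets is:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: featureset width and layout are invented here. */
#define FEATURESET_WORDS 4

/*
 * A guest policy can only be compatible with the host policy if it asks
 * for no feature bit the host cannot provide, i.e. the guest featureset
 * must be a subset of the host featureset.
 */
static bool featuresets_compatible(const uint32_t host[FEATURESET_WORDS],
                                   const uint32_t guest[FEATURESET_WORDS])
{
    for ( unsigned int i = 0; i < FEATURESET_WORDS; i++ )
        if ( guest[i] & ~host[i] )
            return false;

    return true;
}
```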

> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -431,6 +431,13 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>      {
>          bool old_cpuid_faulting = msrs->misc_features_enables.cpuid_faulting;
>  
> +        /*
> +         * The boot CPU must support Intel's CPUID faulting _or_
> +         * AMD's CpuidUserDis.
> +         */
> +        bool can_fault_cpuid = cpu_has_cpuid_faulting ||
> +                               boot_cpu_has(X86_FEATURE_CPUID_USER_DIS);

I'd like to suggest that in such a comment it not be emphasized that
it's the boot CPU (alone) we check. In fact I'm not convinced any
comment is needed here at all. I'm further inclined to suggest to
omit this (single-use) variable altogether.
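As a standalone illustration of the combined condition (cpu_has_cpuid_faulting and the CpuidUserDis feature flag are mocked as plain booleans here, not the real Xen predicates):

```c
#include <stdbool.h>

/*
 * Mocked stand-ins for the Xen predicates discussed above; in the real
 * code these are cpu_has_cpuid_faulting and
 * boot_cpu_has(X86_FEATURE_CPUID_USER_DIS).
 */
static bool cpu_has_cpuid_faulting;
static bool cpu_has_cpuid_user_dis;

/*
 * Either Intel's CPUID faulting or AMD's CpuidUserDis lets Xen honour a
 * guest write enabling CPUID faulting via MSR_MISC_FEATURES_ENABLES.
 */
static bool cpuid_faulting_available(void)
{
    return cpu_has_cpuid_faulting || cpu_has_cpuid_user_dis;
}
```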

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 13:33:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 13:33:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531575.827335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0zU-0008SH-Gq; Mon, 08 May 2023 13:33:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531575.827335; Mon, 08 May 2023 13:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw0zU-0008SA-Dv; Mon, 08 May 2023 13:33:24 +0000
Received: by outflank-mailman (input) for mailman id 531575;
 Mon, 08 May 2023 13:33:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw0zS-0008S0-Tk; Mon, 08 May 2023 13:33:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw0zS-0008I2-OR; Mon, 08 May 2023 13:33:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw0zS-0002Pj-CA; Mon, 08 May 2023 13:33:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pw0zS-0001Z8-Bg; Mon, 08 May 2023 13:33:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oxj5pLGURD1+vKzwlYEowwAh0303qy0RZFRiWUrOUMo=; b=u/2nXEEkXx2ph3Qk/+lMjiSGYM
	MBEVzcdlWFvFiuXNhjZlKvdVoIkwCMGo3cuMNQ3QOmLWi8dUgFohbfd8Y/Hwm6rfSl8f6Iye2tjgV
	FFruKaYK3L71KFjEP08KTSGT9VEogjjMGu5FWu7TJdUTuY4nXEqY43DaZ+t0g3mAfOhc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180575-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180575: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d89492456f79e014679cb6c29b144ea26b910918
X-Osstest-Versions-That:
    ovmf=8dbf868e02c71b407e31f9b41b5266169c702812
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 May 2023 13:33:22 +0000

flight 180575 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180575/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d89492456f79e014679cb6c29b144ea26b910918
baseline version:
 ovmf                 8dbf868e02c71b407e31f9b41b5266169c702812

Last test of basis   180573  2023-05-08 02:10:45 Z    0 days
Testing same since   180575  2023-05-08 11:43:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Liu <linus.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8dbf868e02..d89492456f  d89492456f79e014679cb6c29b144ea26b910918 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 08 14:13:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 14:13:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531585.827345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw1bh-0004gk-IH; Mon, 08 May 2023 14:12:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531585.827345; Mon, 08 May 2023 14:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw1bh-0004gd-FR; Mon, 08 May 2023 14:12:53 +0000
Received: by outflank-mailman (input) for mailman id 531585;
 Mon, 08 May 2023 14:12:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHwn=A5=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pw1bf-0004gX-Q5
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 14:12:51 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20601.outbound.protection.outlook.com
 [2a01:111:f400:7eae::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 698ea6ab-edaa-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 16:12:50 +0200 (CEST)
Received: from BN8PR04CA0019.namprd04.prod.outlook.com (2603:10b6:408:70::32)
 by DM6PR12MB4090.namprd12.prod.outlook.com (2603:10b6:5:217::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 14:12:44 +0000
Received: from BN8NAM11FT087.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:70:cafe::26) by BN8PR04CA0019.outlook.office365.com
 (2603:10b6:408:70::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32 via Frontend
 Transport; Mon, 8 May 2023 14:12:44 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT087.mail.protection.outlook.com (10.13.177.24) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.33 via Frontend Transport; Mon, 8 May 2023 14:12:44 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 8 May
 2023 09:12:43 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 8 May
 2023 07:12:42 -0700
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 8 May 2023 09:12:42 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 698ea6ab-edaa-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iXQyMI35v3ND2QJyPMNGVlEIRsOUo+Io2Rj/9CeUuGSe+0OVfaKmOKnzoRKcYufRRgyGTHralB96HNGX68B06LzTda+N4lh3IVMNosqCafD1ZI6qOGA6i/MgBsROtllKNf7gi2KPf7ENdThnui/l6V3I5hqWx74FbJHRv0xgpSKNopOwkzxk8wUuvoeClJBBptnTFuMgUm/xWwBH4opFjuNNS6cJdeKYhVotDYDn8Fi7dXDE/WRu2eTdRrzpaE1brwRtULEzTXf+wHAi+8YfDAdd0pmCwyfGO3zghPRA8ffEPL7zsi9e8+veLUyeSe02jFLH/Q8LGkBpd4yEyDw85g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GeJaDIJJReWZD9EkDq23oZ61Jfq8KZPalADMHBqnzQ4=;
 b=JnunmsxoHvie1DOg1XFccF+Kjz81S3R3CLZejx5z5VmzjKIN7okwyTIVzAHGy0VByGfQapo5cpPoJ3pO1P3mjxrlD9Z59Zf5A1cN6XHUg0QFtKJ0Tnr0Rzq5/iAzPi+lHeUenbSyq/oxegVkLBKZ5Y8JTgd4MHPknNNoCGuV+3/yUTPHd1qxUDhTwxD2WU+9BRquFE1MP1ug46fBrd7rtwr+2nMtNFmAAxoIkFoxcaBg5mpAdpg8Gff/74U7/xtBnfx/Gtl92mHyxdU2UhFaxnKiospUoEjl2NW4iqEQHXrr8xtSvTK/Ng4Sq48v7NixP6OiyQdrYfkuokjV1EmMcA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GeJaDIJJReWZD9EkDq23oZ61Jfq8KZPalADMHBqnzQ4=;
 b=y1KpK0j85jCbv77MCwgciHaenNWmaUVPRIHBzMK2NsJapOqmYVcrspMO6xTPEafmeD2YfCsBc0C4vrzdA5gDLL7mnk2KgQQGEmJWTpseVl4eitBgSaPKkpMGxJ7um3L0kowyYNyl62I4M3WGMnIFGu3+hmME7hjH23Jb4Lnt85E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <fffda0cc-0074-69b0-b8c6-cb867ff4c5a5@amd.com>
Date: Mon, 8 May 2023 10:12:41 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v1 3/6] iommu/arm: Introduce iommu_add_dt_pci_device API
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Paul Durrant
	<paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-4-stewart.hildebrand@amd.com>
 <0e7404fe-e249-7b3f-0628-b8b8b1925765@suse.com>
 <6758ef37-bf66-99c3-1fd4-f2edfc12513a@amd.com>
 <39f9548b-6a07-3d74-1975-8f17153f2d14@suse.com>
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <39f9548b-6a07-3d74-1975-8f17153f2d14@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT087:EE_|DM6PR12MB4090:EE_
X-MS-Office365-Filtering-Correlation-Id: 948428bc-5a85-461c-5224-08db4fce4ae9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 14:12:44.1406
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 948428bc-5a85-461c-5224-08db4fce4ae9
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT087.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4090

On 5/5/23 02:20, Jan Beulich wrote:
> On 04.05.2023 23:54, Stewart Hildebrand wrote:
>> On 5/2/23 03:44, Jan Beulich wrote:
>>> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>>>> @@ -228,6 +229,9 @@ int iommu_release_dt_devices(struct domain *d);
>>>>   *      (IOMMU is not enabled/present or device is not connected to it).
>>>>   */
>>>>  int iommu_add_dt_device(struct dt_device_node *np);
>>>> +#ifdef CONFIG_HAS_PCI
>>>> +int iommu_add_dt_pci_device(uint8_t devfn, struct pci_dev *pdev);
>>>> +#endif
>>>
>>> Why the first parameter? Doesn't the 2nd one describe the device in full?
>>
>> It's related to phantom device/function handling, although this series unfortunately does not properly handle phantom devices.
>>
>>> If this is about phantom devices, then I'd expect the function to take
>>> care of those (like iommu_add_device() does), rather than its caller.
>>
>> In the next patch ("[PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of arch hook"), we will invoke iommu_add_dt_pci_device(devfn, pdev) from iommu_add_device().
> 
> Which I think I said there already I consider wrong.

Okay, I'm following now. Right, so, the first parameter is not needed.

>> Since iommu_add_device() iterates over the phantom functions, it would be redundant to also have such a loop inside of iommu_add_dt_pci_device().
>>
>> If we are to properly handle phantom devices on ARM, the SMMU drivers (smmu.c/smmu-v3.c) would need some more work. In patches 5/6 and 6/6 in this series, we have:
>>
>> if ( devfn != pdev->devfn )
>>     return -EOPNOTSUPP;
>>
>> Also, the ARM SMMU drivers in Xen currently only support a single AXI stream ID per device, so some development would need to occur in order to support phantom devices.

I spoke too soon; the SMMU drivers do appear to support multiple IDs.

>>
>> Should phantom device support be part of this series, or would it be acceptable to introduce phantom device support on ARM as part of a future series?
> 
> I wouldn't view this as a strict requirement, so long as it is made clear in
> respective patch descriptions.

After doing a little more investigation, I should be able to add in the plumbing to iterate over the phantom functions in iommu_add_dt_pci_device(). Stay tuned for v2 of this series.
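The existing phantom loop in iommu_add_device() advances devfn by pdev->phantom_stride until it leaves the device's slot; the stepping logic, extracted into a standalone sketch with mocked-up types (not the real Xen structures), looks like:

```c
#include <stdint.h>

/* Mocked-up stand-ins for the Xen definitions, for illustration only. */
#define PCI_SLOT(devfn) (((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn) ((devfn) & 0x07)

struct pci_dev {
    uint8_t devfn;
    uint8_t phantom_stride;   /* 0 if the device has no phantom functions */
};

/*
 * Enumerate devfn values for a device and its phantom functions, mirroring
 * the loop in iommu_add_device(): start at pdev->devfn and advance by
 * phantom_stride while staying within the same slot.  Returns the number
 * of devfns written to out[] (at most 8, one per function of the slot).
 */
static unsigned int enumerate_devfns(const struct pci_dev *pdev,
                                     uint8_t out[8])
{
    unsigned int n = 0;
    uint8_t devfn = pdev->devfn;

    out[n++] = devfn;
    if ( !pdev->phantom_stride )
        return n;

    for ( ; ; )
    {
        devfn += pdev->phantom_stride;
        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
            return n;
        out[n++] = devfn;
    }
}
```

For example, a stride of 2 starting at devfn 16 (slot 2, function 0) yields functions 0, 2, 4 and 6 of that slot and stops once the next step crosses into slot 3.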

>> Lastly, I'd like to check my understanding since phantom devices are new to me. Here's my understanding:
>>
>> A phantom device is a device that advertises itself as single function, but actually has multiple phantom functions. These phantom functions will have unique requestor IDs (RID). The RID is essentially the BDF. To use a phantom device with Xen, we specify the pci-phantom command line option, and we identify phantom devices/functions in code by devfn != pdev->devfn.
> 
> The command line option is there only to work around errata, i.e. devices
> behaving as if they had phantom functions without advertising themselves
> as such. See our use of PCI_EXP_DEVCAP_PHANTOM. As you can see, this being
> PCIe only, and legacy PCI device bevaing this way would require use of the
> command line option.
> 
>> On ARM, we need to map/translate a BDF to an AXI stream ID in order for the SMMU to identify the device and apply translation. That BDF -> stream ID mapping is defined by the iommu-map/iommu-map-mask property in the device tree [1]. The BDF -> AXI stream ID mapping in DT could allow phantom devices (i.e. devices with phantom functions) to use different stream IDs based on the (phantom) function.
>>
>> So, in theory, on ARM, there is a possibility we may have a device that advertises itself as single function, but will issue AXI transactions with multiple different AXI stream IDs due to phantom functions. In this case, we will want each AXI stream ID to be programmed into the SMMU to avoid SMMU faults.
> 
> Right, which of course first requires that you know the mapping between
> these IDs.

Thanks for clarifying!

Stewart


From xen-devel-bounces@lists.xenproject.org Mon May 08 14:16:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 14:16:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531590.827354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw1ez-0005Hq-1L; Mon, 08 May 2023 14:16:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531590.827354; Mon, 08 May 2023 14:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw1ey-0005Hj-Ux; Mon, 08 May 2023 14:16:16 +0000
Received: by outflank-mailman (input) for mailman id 531590;
 Mon, 08 May 2023 14:16:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHwn=A5=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pw1ex-0005HZ-Ev
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 14:16:15 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2060b.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::60b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e0ceb653-edaa-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 16:16:10 +0200 (CEST)
Received: from MW4PR04CA0310.namprd04.prod.outlook.com (2603:10b6:303:82::15)
 by PH7PR12MB8013.namprd12.prod.outlook.com (2603:10b6:510:27c::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 14:16:06 +0000
Received: from CO1NAM11FT023.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:82:cafe::67) by MW4PR04CA0310.outlook.office365.com
 (2603:10b6:303:82::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32 via Frontend
 Transport; Mon, 8 May 2023 14:16:05 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT023.mail.protection.outlook.com (10.13.175.35) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.32 via Frontend Transport; Mon, 8 May 2023 14:16:05 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 8 May
 2023 09:16:04 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 8 May 2023 09:16:04 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0ceb653-edaa-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hQMT5htmryGeYazIRpQR41F4hFEmIRZ3+fgpR/huG7vdquIwh2SWUILEqNx0WJq3dqSea+wgdrYf393oA7AvC7QBW6k99gJWao4a1Z5MQWvBcxOB/lu28/64g0FAmBPduDRG6XAtku59m7e5GJOrYQq80XI98j3IKPnhmm26ZHf9LlGU7I5zfe+Jpl0wGqdAIbFdudlbWqXZk8LZtZR44xnnAOuTe12Olg3ci3mzds8ZjE9tZvUPoq9b1ud+T6dqWBs9Sj0l8SJQFwUlQsO/vLCww17bWnwxTRWwE+9UCzCM05rESCK8FzXo/Zc7DZM7pX5wCIx8MIBnkUpbMupUUQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZxyRAMPyNArOT31T7QCsvLYjetxz3AUpOEcPbV16r98=;
 b=KXvyD6+41+XPNgDNNNciwTeWID18+Q7nMoGmWA8cxV3fyQEWQNTU902VMmr/Kj5fM+c33/MM5vx3UD4x9uwAhy+jxy0ityh+cYiRG7tv02bEDjr/B49wIsvoi+6g2OXDhJ55Q4fLg0psZ0voegv1xMbzLTQOFJfT0wK6c6V/WL1vdo3nDLUWbOxf5ws16FVv8cH4d5XRwRPEPfF50efNJW7IQ1EKIFdGJnMQnFUlAm/Eo7Ydfamj07PkQWjY7LcOZWi9Bvi+5F3nAC2cSndVMc//1+kENIbHjoACiSAYhafld7l4TQsmlqtM/qwm8H/gyZrjk0mFaeDbEVUw6WELgg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZxyRAMPyNArOT31T7QCsvLYjetxz3AUpOEcPbV16r98=;
 b=LLouR+ChQQE1xice0b9QgKYKdT/f9cKIM475uXls+1IebGx9O2WUBFGw4tAoDWBmSjxPF4FXJwgovvR083ftD7C7QaN6lMzXCrJNZGdPQX9hVaRuSIbR1yjBKiJThyIcx1PlONKWd5eq8r7QjU6KrCWJ5o6FugLa2qXg1c4l57g=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <8591236e-5dc2-7da7-fe3a-7cb2ae1ed7d0@amd.com>
Date: Mon, 8 May 2023 10:16:03 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of
 arch hook
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant
	<paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-5-stewart.hildebrand@amd.com>
 <ced11c6e-caf7-3a19-92c8-5c11b18952b6@suse.com>
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <ced11c6e-caf7-3a19-92c8-5c11b18952b6@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT023:EE_|PH7PR12MB8013:EE_
X-MS-Office365-Filtering-Correlation-Id: 5af86cf0-4090-4b29-454c-08db4fcec31b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pOG1PcJYov8H5dMUKjYBRyneTA6SS0OQM8wdxbjTPsyOP2kpoKnhQWogvJt1xqg8+Vf593dMmd4ivQYWfiKALQRctFpiipXXLtV2cXO8St1IcD9Dem8kaJZIMyvD3k9oDph6WStGQXoIbDibf6NUmD/aXwB99elabRD8ny5TdDgX6J2FTvrFG2jIAcNOcz+rGX1UPDaW52AU2Bt0i4A9JZYiUvcxqiGcyKhY0FXt+K3I52EhOhlPVUE6YyauVi7RBofTvh4zRaZmwYcLBBkdsdKZXL7OnWn+YQv6c3qFhixU2OKtezK9g7vHY0os4FjtUN2iFTfjDRXHGxEguJqmencXoaZsw6Gjk+q73NvDe6PE0r8K5R+qKApLVUvSRGWNO+eFa7Fdpr4LQ93qoppfXWW7QqeiuHz/q8O3H7S6mXMFIXLeCYPFTqBFfVMXEr1ToyYh2mX0nVAyxZ1WTKy7aCJsXlLaNxUTSY1ue/r23zxHOXk68WPHukJPL5Fe0hArgqf1+7Kw4iZsE5XaVt92HwxmrmpI14Hb7k6Bp4AEbhh0gsReMBZ109fAKcUG8dGqaMfw5Pi3uCMNofKf48YmS8FS6P/5b+GTWni7rC4BK8IJ5rBDtsIsVJgS1k2iSxGTebGr/bCTvcmNqUXEcuG4kfVk9FcMbvBT9n2RulFTlvukN6c+o/5FXlCyuppzW+5A1nku7FmptCpdPPNVDksR3DETWslTOSIb3Q99h6+mDNerZCSDYJqxVEHsdlWcnh2TOhXBcS8/OoZwtha3kbbe/A==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(396003)(39860400002)(136003)(451199021)(36840700001)(46966006)(40470700004)(40480700001)(81166007)(356005)(82740400003)(36860700001)(47076005)(2616005)(426003)(26005)(186003)(336012)(83380400001)(53546011)(70586007)(2906002)(8676002)(44832011)(8936002)(36756003)(478600001)(31696002)(40460700003)(54906003)(16576012)(86362001)(316002)(6916009)(5660300002)(4326008)(41300700001)(70206006)(82310400005)(31686004)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 14:16:05.6540
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5af86cf0-4090-4b29-454c-08db4fcec31b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT023.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB8013

On 5/2/23 03:50, Jan Beulich wrote:
> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> @@ -1305,7 +1305,7 @@ __initcall(setup_dump_pcidevs);
>>
>>  static int iommu_add_device(struct pci_dev *pdev)
>>  {
>> -    const struct domain_iommu *hd;
>> +    const struct domain_iommu *hd __maybe_unused;
>>      int rc;
>>      unsigned int devfn = pdev->devfn;
>>
>> @@ -1318,17 +1318,30 @@ static int iommu_add_device(struct pci_dev *pdev)
>>      if ( !is_iommu_enabled(pdev->domain) )
>>          return 0;
>>
>> +#ifdef CONFIG_HAS_DEVICE_TREE
>> +    rc = iommu_add_dt_pci_device(devfn, pdev);
>> +#else
>>      rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>> -    if ( rc || !pdev->phantom_stride )
>> +#endif
>> +    if ( rc < 0 || !pdev->phantom_stride )
>> +    {
>> +        if ( rc < 0 )
>> +            printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>> +                   &pdev->sbdf, rc);
>>          return rc;
>> +    }
>>
>>      for ( ; ; )
>>      {
>>          devfn += pdev->phantom_stride;
>>          if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
>>              return 0;
>> +#ifdef CONFIG_HAS_DEVICE_TREE
>> +        rc = iommu_add_dt_pci_device(devfn, pdev);
>> +#else
>>          rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>> -        if ( rc )
>> +#endif
>> +        if ( rc < 0 )
>>              printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>>                     &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
>>      }
> 
> Such #ifdef-ary may be okay at the call site(s), but replacing a per-
> IOMMU hook with a system-wide DT function here looks wrong to me.

Perhaps a better approach would be to rely on the existing iommu add_device call.

This might look something like:

#ifdef CONFIG_HAS_DEVICE_TREE
    rc = iommu_add_dt_pci_device(pdev);
    if ( !rc ) /* or rc >= 0, or something... */
#endif
        rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));

There would be no need to call iommu_add_dt_pci_device() in the loop iterating over phantom functions, as I will handle those inside iommu_add_dt_pci_device() in v2.

> The != 0 => < 0 changes also would want, after respective auditing,
> clarifying that they're indeed no functional change to existing code.

Okay.

Stewart


From xen-devel-bounces@lists.xenproject.org Mon May 08 14:40:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 14:40:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531596.827364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw224-0000FQ-05; Mon, 08 May 2023 14:40:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531596.827364; Mon, 08 May 2023 14:40:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw223-0000FJ-Tj; Mon, 08 May 2023 14:40:07 +0000
Received: by outflank-mailman (input) for mailman id 531596;
 Mon, 08 May 2023 14:31:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7VId=A5=corigine.com=simon.horman@srs-se1.protection.inumbo.net>)
 id 1pw1tt-0007go-PR
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 14:31:41 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2071c.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::71c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 098fad6e-edad-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 16:31:38 +0200 (CEST)
Received: from PH0PR13MB4842.namprd13.prod.outlook.com (2603:10b6:510:78::6)
 by BY3PR13MB4913.namprd13.prod.outlook.com (2603:10b6:a03:364::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 14:31:29 +0000
Received: from PH0PR13MB4842.namprd13.prod.outlook.com
 ([fe80::f416:544d:18b7:bb34]) by PH0PR13MB4842.namprd13.prod.outlook.com
 ([fe80::f416:544d:18b7:bb34%5]) with mapi id 15.20.6363.032; Mon, 8 May 2023
 14:31:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 098fad6e-edad-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JOPnvtcWRS00h2H1ELQximqgO0pZdIvZXY6u1deEUuC//PltgduwnDCGdKZZArok2EVEDmfyLhb7b5YqaCLYBN0GUitpjBKi/PWuQASYSjQ31i1jKwxrFNO60orrSlp6BUpIhnUd9HZGvRhVeRIEr7hypPXGtlAvnLlk8OXZEUs0aS+cNIJpRUUlmtW0fCKlOpMQZtczLbHt7KCPutJ7PzJtLDrNWGkEQ7LzuJygSlhx3d6pS9qeHzNyt0suI+AjsZMMUkThrFeq66m/e8lImKVbRNNhL94l+rtbyExYq0R5kkDbY8t7iZUEraMZA10e80ksSL1a15BBEPwMXnUktQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QWHFB7hQg4+3+Z/Kov4mvbp9ZaOMHz+OyvOUtUL64qU=;
 b=O6wj9HG0SuckzQ6nfabxIDbVQPSBah283PxlOayQ2YTpeuIDbvSlK9dDvidWpVhR6wXXe88LKyEqSjzygdbEKSTWnQ/5F+juT3tN1xGN5xrM8DjTyy6bamk8VcSnopf4liQbknCTURS7fAmDAbQTNBL4Io6qOyx1xLO/sIUqzFp/0+OOorf3JErUhSX4K2qLJrXAVb96Yz6DhSodLdNM/49gJVIvCVf6taovKT/DsFMM9UEkxOxOfjIEenb+z7n0LQzTSsAJP2BgsDXo7p25eKald2Sl+PdfDMWVGEjbOOXU2/rJv2UQ7qD/8IeHFHdEtT6N6vf7eS0CEU07Q6m74A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=corigine.com; dmarc=pass action=none header.from=corigine.com;
 dkim=pass header.d=corigine.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=corigine.onmicrosoft.com; s=selector2-corigine-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QWHFB7hQg4+3+Z/Kov4mvbp9ZaOMHz+OyvOUtUL64qU=;
 b=UZ1kFLh6N/UUZaysCnvKi1HOrNlAY8XLjlJMKi3hV0cwb3kaxDDHzEDpBUlelrlm8CMmqqLEgYJWsh7t0gKlJdrWpFX3MaWjDKcW0Or7ZAIBEvz8gsP3Z4pEu2agITrfDlavuFsttW5cx4FZn1nqfMr56cmxrEGe4vW0JzFkiOQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=corigine.com;
Date: Mon, 8 May 2023 16:31:22 +0200
From: Simon Horman <simon.horman@corigine.com>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, bpf@vger.kernel.org,
	linux-kernel@vger.kernel.org, edumazet@google.com,
	davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
	alexanderduyck@fb.com, jbrouer@redhat.com,
	ilias.apalodimas@linaro.org
Subject: Re: [PATCH RFC 2/2] net: remove __skb_frag_set_page()
Message-ID: <ZFkHulUs7d1xWKSa@corigine.com>
References: <20230508123922.39284-1-linyunsheng@huawei.com>
 <20230508123922.39284-3-linyunsheng@huawei.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508123922.39284-3-linyunsheng@huawei.com>
X-ClientProxiedBy: AS4PR10CA0014.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:20b:5dc::18) To PH0PR13MB4842.namprd13.prod.outlook.com
 (2603:10b6:510:78::6)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR13MB4842:EE_|BY3PR13MB4913:EE_
X-MS-Office365-Filtering-Correlation-Id: 9731ce74-4abe-44d5-f641-08db4fd0e9a0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5dNvbxKWS9XzMLeoQtaatZgxcWBBz0PUQmxiJb5DU6EhqHZYM/maOgqjTbW1wWVVQCn7N2FpD3XUVNS01jpdjX3ZCRfMFB1GCr+Th+UbTe2B3sc0d3JgLpFpmCFklegFCmxtbazSb9gRkgpFZU31nxG3h9XixQuTCo8Mz5vgJMq9dTQUg2Z9BnFoougkMrjdt5/dHqIT766496PQMXBoc3BbyMpmfupZFfC7n/rfy7FnpmBBC+lPMwutxg6HSHMQMDF82PxrJ5LaQUJG8T0pVNYwhE3Vc3+2W4B4jOBK5hwYTsIjXWjFHOmy2whe7MWrcvLfnobwONoBx6lBl1NtV7F7JcwYz47SUYhl83P2Xi0oE8/ybKWgdZVmRcEdLipIAuqfWXNZD9EXMSbPxSrMmxosTjtETuTl0c2/VgbQ8b99vs82/snW3GrGNCi1RgbeJ2Gx+7NEY8Tppqu0t1MxYwxfXIvitcq4tFNmGAjt9Sh82tZIhhYz5g9sSF0FoQeaGRa6TS3VWZcI9MCF7VjrVH2eqpOkyN8bVeDkD/Upgk2AAizzhxc1GlFyQ/7KBkn/Q097geVLTF1AL/JE8wAkq4d7RHSojIHeGfJ5reJM07w=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR13MB4842.namprd13.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(39840400004)(376002)(396003)(346002)(366004)(451199021)(2616005)(186003)(36756003)(38100700002)(86362001)(2906002)(6486002)(8936002)(8676002)(316002)(6666004)(41300700001)(4744005)(7416002)(5660300002)(66946007)(6916009)(66556008)(4326008)(66476007)(478600001)(44832011)(6506007)(6512007);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?us-ascii?Q?cN84bJh2d6pjo0zorxwYHiIln/mvP1cDBKvJYx3LiRZBb2LQFvS6Sb4Mc6ha?=
 =?us-ascii?Q?vGMVsIiRmKzBcrt8yZIjX18ZdrWyLoIXExonKUj4/k4IdkJdCNe447+wL8b5?=
 =?us-ascii?Q?ahXgms2O0O12VPiGtrGlxnnoJ5KfxhnwOHqOjwBgbIvlFPTHiXMutDAPi8YX?=
 =?us-ascii?Q?Pub8/S1Urv2UTYiDgUL00tGIINYWWMn7HotskbgaqFJUvJMkXflsFaskbUs2?=
 =?us-ascii?Q?+5Z4zQHoqIij9U9qn0jwjnzIzv+0ioLxQRTp5mZfDpKSMpTjNSUNbEE6Aa/q?=
 =?us-ascii?Q?I3RR58AjwQKz+mb8KjK1pBPcpsz7+VTjYKAv3wVD6B+UHLVd/9bYQ2AwnGQd?=
 =?us-ascii?Q?wdAjbSL6NO9wTjvPGPmA7QOmEMG/DWRvuEBJGfJcMpGU972nbJtwrZ9+aMHt?=
 =?us-ascii?Q?M5mAeVmc+5Dc7/Syx1ZIE1NUUS/skhEJAUUhJofS1KUrC4/SQMJ0T9tbC6tN?=
 =?us-ascii?Q?YfWihH40Q7asygFRlRMJPtjuMcpwbP0+2WVZILJeg61DToT9zLGs9S3cmK+2?=
 =?us-ascii?Q?B7i+d0qkWE3uL7Wki5pfpJrxxVNMCcr7icPdVvMpon3QSXb3PAbUzdmnu15E?=
 =?us-ascii?Q?o5njwCeA2P/yMg9qxxEj4xik4GlgXSaSsOK0YjyG8D3hCjB1lDMYia7kVChg?=
 =?us-ascii?Q?kl+VcngiXTZunIU2eBAA2/JjlT0UC7XkmwgC2O1z8Ou+CbbLFE6PSz2hQraC?=
 =?us-ascii?Q?qGouR+rDsi5plrNZhLN861PdMsP0jYdNrEqUAzlxhE36sOToQFiAeKGATnAB?=
 =?us-ascii?Q?fevHO3bqjE3saHpYqXNrANdn7wQLJnzpOAeU8PFt94ApXGjfFcdkzbiTQxrR?=
 =?us-ascii?Q?XOPmoUlTeJD1fwzi+fpS5WIDUJ7kaXuAVgBZ1RB1+O/S+qFKwi4qPMz3GpU6?=
 =?us-ascii?Q?ab6vA/LYAx3JjgHTjNLXLs4AntbbjGSXyn5iAtxELhpQgCmOAruhmQGVYRHZ?=
 =?us-ascii?Q?qP4ZuGBeV0eBT71zotsyaBgY/Krm8nvLc0s9IFd16Vr8ogBmO49J2pnwlpIH?=
 =?us-ascii?Q?Vz873wyoLbMAbYqKJnk+cbG9nfHwVn/nvxGRhI4L750q3o+0vBtUwgDceaPt?=
 =?us-ascii?Q?/KdA7uHiuOreiVAYb/RAMMiylSAjX52Ktm5M2ZmbJEXJXYtSDe70JE3tIeKz?=
 =?us-ascii?Q?UnxvNtWB8UKjRw3BFL+arxq0Fej8lT0UyuuJRwihQm9mdag5C82Iq/ZJidUE?=
 =?us-ascii?Q?9U/QMOCJXW06ByoK9WdjYYEfn24zhgE187lcH5ce7xQv9cVg2CqW9UUbhnmH?=
 =?us-ascii?Q?1qFOGd0tpM6Yj7Eq7Crm6qTXzbMTstKE5Ub2hVtIEIgYI20+NK075z6iLRSf?=
 =?us-ascii?Q?ux3bIATB+1NInBgGd1GZqEB7abQdmqfeyLcLuaGFmYXHu47Z+3mYZDObuk8/?=
 =?us-ascii?Q?2Mzs1FatRNSYF5UIoxnxXpZklLGEN+i3j3jWPS1fKlQtGUF1ZLM7egYWknwb?=
 =?us-ascii?Q?Oy9Pgvril1LAQOAXwp9HG/3QH2LisP7PKgcZwbreiHkNUOA8sAJ2vf40cr4K?=
 =?us-ascii?Q?J7gjpcti57++EJgcmXwfTf8Khtnhgh8PLeqAT6dR43XTFNg3xawHsPhTTWfZ?=
 =?us-ascii?Q?TNFidIuQqtCldx2Mdoqf7PgkJ/ZmyvGxhZckJean4Fj81Z2LDvYfA7kbHqXb?=
 =?us-ascii?Q?Fj7UDQXbqY6n6IPfqcksqcGghpacO97ACz+saMfiTZKcPY7akUDAaokSl9s6?=
 =?us-ascii?Q?Ryvtxg=3D=3D?=
X-OriginatorOrg: corigine.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9731ce74-4abe-44d5-f641-08db4fd0e9a0
X-MS-Exchange-CrossTenant-AuthSource: PH0PR13MB4842.namprd13.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 14:31:29.6325
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: fe128f2c-073b-4c20-818e-7246a585940c
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: u6hSWBRqOay5n8SVgV3DO8Jpj+AO+w/kQ2MhQ0MC5eFU8Many1hPg/r4wpMpQTXka5VdQx+3HXd4ZAJ+v+eaNix/d8aRuy1JBKZ02wKUGvE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY3PR13MB4913

On Mon, May 08, 2023 at 08:39:22PM +0800, Yunsheng Lin wrote:
> The remaining users calling __skb_frag_set_page() with
> page being NULL seem to be doing defensive programming, as
> shinfo->nr_frags is already decremented, so remove them.
> 
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>

...

> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index efaff5018af8..f3f08660ec30 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> @@ -1105,7 +1105,6 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
>  			unsigned int nr_frags;
>  
>  			nr_frags = --shinfo->nr_frags;

Hi Yunsheng,

nr_frags is now unused, other than being set on the line above.
This local variable can probably be removed.

> -			__skb_frag_set_page(&shinfo->frags[nr_frags], NULL);
>  			cons_rx_buf->page = page;
>  

...


From xen-devel-bounces@lists.xenproject.org Mon May 08 14:45:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 14:45:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531603.827375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw27S-0000xG-Mf; Mon, 08 May 2023 14:45:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531603.827375; Mon, 08 May 2023 14:45:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw27S-0000x9-Jd; Mon, 08 May 2023 14:45:42 +0000
Received: by outflank-mailman (input) for mailman id 531603;
 Mon, 08 May 2023 14:45:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y8Ur=A5=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pw27Q-0000x3-Pq
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 14:45:41 +0000
Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com
 [66.111.4.29]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fe45e7e3-edae-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 16:45:38 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 2B0C35C0198;
 Mon,  8 May 2023 10:45:36 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 08 May 2023 10:45:36 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 8 May 2023 10:45:33 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe45e7e3-edae-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1683557136; x=1683643536; bh=GalsNTfBovyiQ3QbCYOoCXADHgE91GHkBa5
	DzKiFpNI=; b=oxAmWlhgkuGMEXFUunr3DNH47p7sDDyAhVCjzNf1POru5DcOEjg
	ZNKo3q+5t342BaL2JbQv4H3sZV7NEc3OtlQaj8KMTts16ffvbkJq+TNqJad5V2Ij
	Z/vvIJwZO5FQuzSGXaV55dZ8irXMU4R9NftdW4J88eGoOm+7OFbQw4KelssJjf/3
	xPL+hau3pSRwBgEWXYzx+Do4uWs9CHYHNK7g2PvRu1YQ7OD5Q3/YqzTFd/HncLTs
	X8FrcU8tnH4xQdyV3VB1HU7qolgGT2KPQRcKayyTZeHPmLH6uR3bQ6cVYUvgeM5l
	rjuLcpzQwBea0e+xYEkMMGHXyneV/N/OP9A==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1683557136; x=1683643536; bh=GalsNTfBovyiQ
	3QbCYOoCXADHgE91GHkBa5DzKiFpNI=; b=EMILixeZBAaHHJgYOgzUtcNpUVCfF
	Ib9jyNVK2C/cckyKIr18HtNT02OaOw5bbZECLwtKC4yj0GL8YXrSHii8HLl/GV3B
	N+svB//1zwstwrQkXQCdbMNzNJoFlY9DNBYGLzMQ6+rY8y1Cb2fyQhnN2TQBXZAf
	+CB32UbcCrtZpfMcbHU3LdVKG2h3usuMZ8kZGzryfXa7hgijRKoB3N7SlhhUGedu
	PPFM2ed/nfDPWKVbpJfK4GzvPCgi3fJ3STUJX8UGaOKNG5GrlpZmILKpyWUrVRH4
	DbExOUgqHUFPRuBLFzAtdWni0ZmuObKUcGUCj8edNPN5Bfuu9Yd9QiEPw==
X-ME-Sender: <xms:DwtZZOgx7f_GEfFJiJonXKNbRaUXpzDelJq-asUJIktvq9u8vyYcJw>
    <xme:DwtZZPDLqoxCCPZO12aPgYnIvIzmdV7HBjahQpTC99G4Zs0hDIn2y3jdL5cwSzOG1
    mrvn87s5NJtfw>
X-ME-Received: <xmr:DwtZZGHNknDML4PknwaNswF9Yr1NyHo2fi06rn-1RL2tOluKCfx1CXgq-RZ7>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeefkedgkeduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:DwtZZHQEVosl-Yng-HJEnoZV86KdPQI_chJD9df6n5Tnu1gPouEGdg>
    <xmx:DwtZZLzh04plgcs5CGsiz-jGTjMwASj4mra3HrLOUUOaalk-t36tQw>
    <xmx:DwtZZF6hPBurLQ6MA3scwPWd1-dwFAOnbXbbaz_nKG5avrXJk14DFA>
    <xmx:EAtZZNuOMtAei1IpTCvJZXR9DaFLSIDPCAOEquVKjyPxdg6JujwTKA>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 8 May 2023 16:45:30 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3] ns16550: enable memory decoding on MMIO-based PCI
 console card
Message-ID: <ZFkLChRN7L+SjXyg@mail-itl>
References: <20230505214810.406061-1-marmarek@invisiblethingslab.com>
 <763df4ee-95d7-be20-212f-7450f3fd431e@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="MYnSo1kS/nFwL/fs"
Content-Disposition: inline
In-Reply-To: <763df4ee-95d7-be20-212f-7450f3fd431e@suse.com>


--MYnSo1kS/nFwL/fs
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 8 May 2023 16:45:30 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3] ns16550: enable memory decoding on MMIO-based PCI
 console card

On Mon, May 08, 2023 at 11:01:18AM +0200, Jan Beulich wrote:
> On 05.05.2023 23:48, Marek Marczykowski-Górecki wrote:
> > pci_serial_early_init() enables PCI_COMMAND_IO for IO-based UART
> > devices, add setting PCI_COMMAND_MEMORY for MMIO-based UART devices too.
> > Note the MMIO-based devices in practice need a "pci" sub-option,
> > otherwise a few parameters are not initialized (including bar_idx,
> > reg_shift, reg_width etc). The "pci" is not supposed to be used with
> > explicit BDF, so do not key setting PCI_COMMAND_MEMORY on explicit BDF
> > being set. Contrary to the IO-based UART, pci_serial_early_init() will
> > not attempt to set BAR0 address, even if user provided io_base manually
> > - in most cases, those are with an offset and the current cmdline syntax
> > doesn't allow expressing it. Due to this, enable PCI_COMMAND_MEMORY only
> > if uart->bar is already populated. In similar spirit, this patch does
> > not support setting BAR0 of the bridge.
> >
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> with ...
>
> > --- a/xen/drivers/char/ns16550.c
> > +++ b/xen/drivers/char/ns16550.c
> > @@ -272,6 +272,14 @@ static int cf_check ns16550_getc(struct serial_port *port, char *pc)
> >  static void pci_serial_early_init(struct ns16550 *uart)
> >  {
> >  #ifdef NS16550_PCI
> > +    if ( uart->bar && uart->io_base >= 0x10000)
>
> ... (nit) the missing blank inserted, which I'll be happy to do while
> committing.

Thanks!

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--MYnSo1kS/nFwL/fs
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRZCwoACgkQ24/THMrX
1yweDgf/VYoswpWnZMdtshzSzmFxmIltDCjRlXb2573ThI7Wwjx6t2M28F+om1qh
J8bkD+fISFpIZnuDB9+x/BntvgwvpCsUXX0U/kuldnYTFi5MuJeopjlTn6u8db35
MnJQ4gTIghv2IBLZTHHIbWDMf/l+eWKARRrxUG31vvYsooiptRi1gLGYtcQpPjXs
UWRi3DmEqR4kGS/1QRUgIthuzv9zPfppFAD83OPZ11CTJhoeKtUxyhMjKAopQk7N
wtbNQOFDONcZOKNaq+/obF7oZEmoGLVvIjcxd+9Na5IalOrGEoNbVOgPh4/CV+yP
KpYsHSyW1hHaoi+83yn3DEGXzZFpfg==
=ltDc
-----END PGP SIGNATURE-----

--MYnSo1kS/nFwL/fs--


From xen-devel-bounces@lists.xenproject.org Mon May 08 14:51:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 14:51:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531608.827385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw2DE-0002Q1-BQ; Mon, 08 May 2023 14:51:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531608.827385; Mon, 08 May 2023 14:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw2DE-0002Pu-7s; Mon, 08 May 2023 14:51:40 +0000
Received: by outflank-mailman (input) for mailman id 531608;
 Mon, 08 May 2023 14:51:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnRg=A5=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pw2DC-0002Po-AL
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 14:51:38 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20601.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d50847cd-edaf-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 16:51:37 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9635.eurprd04.prod.outlook.com (2603:10a6:10:31f::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Mon, 8 May
 2023 14:51:35 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.031; Mon, 8 May 2023
 14:51:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d50847cd-edaf-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ddRfcf6ZrdbL89IckrwLHwNf4gq+xT8+OrlUqAMslMuoKDAbufhocNJEIBql/DP9kJePRcf2KeY2vsLv4rs2cgu2kMcadU+SX+Wn9YkvSJuMZiAyWIR8QJEk19Eo8l0IH9+9AP2dtQpuWbjPa+GUVJ62zEE/QaqdC+d78dZ7m/kp8/VvWhxUae908YBC22aN4N3hlB/fyQZoFiF034kp3+xduoP42EvpIjZyFQKEM2IXHy+UTkyjs/L+9MY3dbPquBk4aiYz50FK91c80kbpZdByMVUQa5UMJzrj3s9TAoGesVMuz7vdytc0EGvwBJec9kDN0ERgViMIBxTNBoVCWg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yAGXXn/fj1TWLxwadDhIw8IrxVrgzziidtcjMLL+RBw=;
 b=BcR2QKbQIpMeYYsxPbH37dSzdQ/Z+1WsV+84J3H6o78QmGxutnKhaq2WHb/N6jVy9Y4sJ9vvai0B+o3Z0ILDjcKawUJqlYx9jxz/q+MXZL/c31rlY7V/c2i+x257lTey6jlcT2GFJF5HkxlBtQ4fT6NYvXHNpXPj5B/jUpmmBgKZJL7DQ00U7sGw4Db2ElLfmEcZkzUNhaPSXtxk8HyGV7fJ5nyrUgq6veWYwRn105OeBWQpzVaQEbMkgX0LGeU1Lm/HllW4GJZdE6iD6hbmEeKTb60gbUOvzhL7Afzquusgc3UvKtNtIYelPZjG/fMoAk74KttQSnQLWqtC/50xuQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yAGXXn/fj1TWLxwadDhIw8IrxVrgzziidtcjMLL+RBw=;
 b=5NWz4z/+IeeKkXBUTncI8qa/5u3guiE76Mmp5DPOlwNYLHjinIEt/IPDU+DqdXizzwAVhEQyspOrMMfdKghHiHLGnlluYWaBx8tV6VsYoOMWVCv5IHxhkgb4RK/KohAQwNLPpOBhV8W1TXjWxi+IDFhJl2BnPW0EbfmcNh+WoA6VaEclV/G6licaJQtZz4GfarqtfJcemapmRQLuEcwLZaXxhp/19pzZ+6bLK1I/6YavJBFuLYkiuWFzmoZGOWs5LwWJpO+WsVbpPvX4JfQjDiQwjhrDV+Xni2oVgZf0Ik77PulcCcZq3ocQ3WQZOTmC5N3z4MdaAMEG4+U5FoKPKw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <462657d4-72e7-6266-6ea5-2b9e443f9813@suse.com>
Date: Mon, 8 May 2023 16:51:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of
 arch hook
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-5-stewart.hildebrand@amd.com>
 <ced11c6e-caf7-3a19-92c8-5c11b18952b6@suse.com>
 <8591236e-5dc2-7da7-fe3a-7cb2ae1ed7d0@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <8591236e-5dc2-7da7-fe3a-7cb2ae1ed7d0@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0172.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b4::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9635:EE_
X-MS-Office365-Filtering-Correlation-Id: c8354ee9-00bc-41b4-67d0-08db4fd3b783
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c8354ee9-00bc-41b4-67d0-08db4fd3b783
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 May 2023 14:51:34.1122
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bUN+gU34u+P+vV39s7VwJGf5HbDhnU+lezm+p91NKNaVymutdTA5uYUIHx2K5KA4hAImD+8z7oBUD3Jhy+sxuw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9635

On 08.05.2023 16:16, Stewart Hildebrand wrote:
> On 5/2/23 03:50, Jan Beulich wrote:
>> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>>> --- a/xen/drivers/passthrough/pci.c
>>> +++ b/xen/drivers/passthrough/pci.c
>>> @@ -1305,7 +1305,7 @@ __initcall(setup_dump_pcidevs);
>>>
>>>  static int iommu_add_device(struct pci_dev *pdev)
>>>  {
>>> -    const struct domain_iommu *hd;
>>> +    const struct domain_iommu *hd __maybe_unused;
>>>      int rc;
>>>      unsigned int devfn = pdev->devfn;
>>>
>>> @@ -1318,17 +1318,30 @@ static int iommu_add_device(struct pci_dev *pdev)
>>>      if ( !is_iommu_enabled(pdev->domain) )
>>>          return 0;
>>>
>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>> +    rc = iommu_add_dt_pci_device(devfn, pdev);
>>> +#else
>>>      rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>> -    if ( rc || !pdev->phantom_stride )
>>> +#endif
>>> +    if ( rc < 0 || !pdev->phantom_stride )
>>> +    {
>>> +        if ( rc < 0 )
>>> +            printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>>> +                   &pdev->sbdf, rc);
>>>          return rc;
>>> +    }
>>>
>>>      for ( ; ; )
>>>      {
>>>          devfn += pdev->phantom_stride;
>>>          if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
>>>              return 0;
>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>> +        rc = iommu_add_dt_pci_device(devfn, pdev);
>>> +#else
>>>          rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>> -        if ( rc )
>>> +#endif
>>> +        if ( rc < 0 )
>>>              printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>>>                     &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
>>>      }
>>
>> Such #ifdef-ary may be okay at the call site(s), but replacing a per-
>> IOMMU hook with a system-wide DT function here looks wrong to me.
> 
> Perhaps a better approach would be to rely on the existing iommu add_device call.
> 
> This might look something like:
> 
> #ifdef CONFIG_HAS_DEVICE_TREE
>     rc = iommu_add_dt_pci_device(pdev);
>     if ( !rc ) /* or rc >= 0, or something... */
> #endif
>         rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
> 
> There would be no need to call iommu_add_dt_pci_device() in the loop iterating over phantom functions, as I will handle those inside iommu_add_dt_pci_device() in v2.

But that still leaves #ifdef-ary inside the function. If for whatever reason
the hd->platform_ops hook isn't suitable (which I still don't understand),
then - as said - I'd view such #ifdef as possibly okay at the call site of
iommu_add_device(), but not inside.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 08 15:22:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 15:22:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531622.827395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw2gV-0005xH-Lj; Mon, 08 May 2023 15:21:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531622.827395; Mon, 08 May 2023 15:21:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw2gV-0005xA-IP; Mon, 08 May 2023 15:21:55 +0000
Received: by outflank-mailman (input) for mailman id 531622;
 Mon, 08 May 2023 15:21:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw2gU-0005x0-4R; Mon, 08 May 2023 15:21:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw2gU-0002Lp-1s; Mon, 08 May 2023 15:21:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw2gT-00061W-KW; Mon, 08 May 2023 15:21:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pw2gT-0001HJ-K0; Mon, 08 May 2023 15:21:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PyNgi0ojZt4A+GcLwAii4XZMRRqfzom2tmBpFk9imag=; b=e2QLxvHIPN8FtmKYco7USgLrm8
	KYPMhNdEuMjAKFD1Aa4DlCFyu7qfLUPCazzPGLfu1zBGfheqrUD2mXePWVFkcJ9f9CQIUzJeO/sPu
	WRdOzpitVoXTwLDh+d0MomI3vhBmEnocvNdtIV1KSN5toUwWS75EFR76OWcIA5x87N40=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180577-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180577: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a16fb78515d54be95f81c0d1c0a3a7b954a54d0a
X-Osstest-Versions-That:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 May 2023 15:21:53 +0000

flight 180577 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180577/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a16fb78515d54be95f81c0d1c0a3a7b954a54d0a
baseline version:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f

Last test of basis   180541  2023-05-05 08:01:57 Z    3 days
Testing same since   180577  2023-05-08 13:03:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Jennifer Herbert <jennifer.herbert@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e93e635e14..a16fb78515  a16fb78515d54be95f81c0d1c0a3a7b954a54d0a -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 08 15:41:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 15:41:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531632.827405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw2yn-0008P5-8o; Mon, 08 May 2023 15:40:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531632.827405; Mon, 08 May 2023 15:40:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw2yn-0008Oy-5W; Mon, 08 May 2023 15:40:49 +0000
Received: by outflank-mailman (input) for mailman id 531632;
 Mon, 08 May 2023 15:40:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fEaP=A5=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pw2ym-0008Os-9l
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 15:40:48 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.217]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b2ddfa58-edb6-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 17:40:46 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz48Fec1Vb
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 May 2023 17:40:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2ddfa58-edb6-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683560438; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=LiemzYZrSKziM+y0QPXCKHxyEer0ip+WguU0P9L3lHLZHV+GHN4u66Lh8mHiiLC2R1
    SfMTx6UHgQUicJLkwv597duEDnSOTYDrfnkEIW8bGkLHB/rMSs8z3EMXWPqR95TN24we
    dmYofIL6GbFa/nE+xurTSJHbuXL5dHQafvHc9quXu1cuk/NAMjt8ZTKt7ZzFHyoKUxJh
    9DbBaK7IdsMKMSAOK8nB7DegbfFUbfRCe5iknXxiuZYN0sgxMIQ1ytsomYo2ZreVkWva
    VG7Hb6c9lamGd8jezH72pDtatkwIIc78wZbCyrV8k7O+c1E7H+8kD124eVIoYwKxrXgM
    8LDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683560438;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=GD4pD2lagECFcggREhpTpdjv2A2W+Z6MdThgJKqvj/8=;
    b=KJE4h3TpSUrwe25SXEJMpm3b6rtyVAHYE/yPZ5FBiRl7OG+ZjpBTK9ev5vp0ahF1lC
    L2tvrc93nC+DrPrKLWCOIG5obPjrHQwxYZnZR5K8zf3AXsojG4YZJ4xtcwQIP0TdVfpW
    /eAMR+ek7cycL9FSAmzWNnOmcQibcB05v9fwJ/43QqJYPCPKBdYA+l95AjuBRqkwO66I
    IhUqDdWoMHbz/6+NuFuQWhiuhJjOmDb7GCPeR25oVl41F23XY9AKqvjDt34g4HAXsEbN
    cdFdfWhTED0tCUZbvE2kWNOwhZfc3sWB2RTE+AVMy+suqRPkWm1A1Pd1ywLJqTS4Zus2
    jfMg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683560438;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=GD4pD2lagECFcggREhpTpdjv2A2W+Z6MdThgJKqvj/8=;
    b=jjUN8//RmNZlKrZjZPfjh4TNkajm2kxsYnIITkJbPM/WeCavI4c4cWRfwCUMymMfET
    6R1UWi6uoyP6aRBPEzDPkJMhCPBK8hSTLBuTlHnKTFoAfvf+eMDoQvzocZ76smVWbfxj
    WDWs1nTUn9nSHwnTftOJBqfX0Q36+DcWq4WEClbaqJULI42ZKeYr8+gE36q1El6G7RjO
    Byyf8feYqqCgyDi2zXlRBDq0GkS1SYwaSt5LUw55p6cdaYmJy1wdDWwtDZVSkjVjbgFE
    dFUQfCrHP1WK9QBVNJ1SntAjn9V8UgXj7x6AHgfutiPU7ynIFv28Extff8CdI2V6oKaF
    /ylg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683560438;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=GD4pD2lagECFcggREhpTpdjv2A2W+Z6MdThgJKqvj/8=;
    b=bwHBSPksmyEi/70m2Fo195zb2zZuw2V8D4hYnF6DH2rxvdFWtC2uQHnKurGsbUpqs0
    1WWCqNJW9kLTgQN/afCA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4ARWIYxzstZKeVom+bauo0LKSCjuo5iX5xLikmg=="
Date: Mon, 8 May 2023 17:40:31 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v1] tools: convert bitfields to unsigned type
Message-ID: <20230508174031.585a3288.olaf@aepfle.de>
In-Reply-To: <324fd699-dcd8-9881-a276-167be38622b1@suse.com>
References: <20230503150142.4987-1-olaf@aepfle.de>
	<d8224847-ffc5-3faf-1bc5-6ad3d7966b26@suse.com>
	<20230508120038.74246111.olaf@aepfle.de>
	<324fd699-dcd8-9881-a276-167be38622b1@suse.com>
X-Mailer: Claws Mail 20230504T161344.b05adb60 has a software problem, nothing can be done about it.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/Q0Z3jrQHA4gLbAwmBtbq3Fq";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/Q0Z3jrQHA4gLbAwmBtbq3Fq
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Mon, 8 May 2023 13:23:27 +0200 Juergen Gross <jgross@suse.com>:

> I have found 18 lines using "unsigned int" for bitfields in this file.

There is indeed a mix of both variants in this file.
I scrolled just down to line 6999, only looking for ':1'.


Olaf

--Sig_/Q0Z3jrQHA4gLbAwmBtbq3Fq
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRZF+8ACgkQ86SN7mm1
DoCJ4A//Q79FIn5hVwGknH3nmxusiPSNdYqcfyRo0QJrCR9gDnZiVMb0zQOgeO61
6fCcMX9UcV1/Dp2M+Z5K6Sgj4g0hLxBiQ+NrHMh4O2tHaPPe++l97i6HCC4b00EY
zdwaW1/oYbO9rXFG45aIlc4CmBvvc2bKEn/6mt1HBsR1IXWQBDBawX+G1CjmOWFR
0niXKVIFkzqaROtaHjwVnCvSahKNRNqhrcXycV0hxYO4ftmtkBbPhhqZofuYnNqy
TYpEXf/A6JgpryoDHOeDD5LfIcETgIQcnZfrS4Jp8e0xWtBlbs5ag3ljISbxHpAf
7hc7RaL+uBrnqQ7YVOGautRMjxHgDsRZfmn4o7LBqsM2wfIIfcVJjLUZB5ChPQ8R
2Ox6xsMPn4RQl9HciQG9wSrfp5cRRzHD8HHrU8Mg+kSEVyc1k7wBlE4Qa2VS55Fu
XW3DykpUd6Wgjr1YZsacDYRpQQtt0dnTO3WvtBsm6VU4VqXmDgApf2dSzPseg7Zh
TO2Ra5SHeSOb4bbXNPjalqKH4zqMAKNoBykHjsUJzdRH4wkeAInZ51t7XdqmWuCQ
HI3f///lG44uwyJQtHzA+mlQQPAKrjq/+AfhSQB27gUii4j613ZTT8dtvLXjQuE1
1gCFWcTs8vAfpfUS0wV6SA731xTtP9PECfQRe9g0LyG0CAwB64U=
=pZnR
-----END PGP SIGNATURE-----

--Sig_/Q0Z3jrQHA4gLbAwmBtbq3Fq--


From xen-devel-bounces@lists.xenproject.org Mon May 08 15:53:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 15:53:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531645.827414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw3B6-0001af-GI; Mon, 08 May 2023 15:53:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531645.827414; Mon, 08 May 2023 15:53:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw3B6-0001aY-Df; Mon, 08 May 2023 15:53:32 +0000
Received: by outflank-mailman (input) for mailman id 531645;
 Mon, 08 May 2023 15:53:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pw3B5-0001a6-Bf
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 15:53:31 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 79e06d9a-edb8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 17:53:29 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 02C4F21DB3;
 Mon,  8 May 2023 15:53:29 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 809EE1346B;
 Mon,  8 May 2023 15:53:28 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id y8mUHfgaWWQfOQAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 15:53:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79e06d9a-edb8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683561209; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=enCmPKVL/QYyWJsvjc1siLNro32aXyEZpjP4ToWLEys=;
	b=ithED1KI1jXI4dpx1wT8kE5Q567qTNV290ZWYDHVsYekplRXDrxtJQ+EvU5F7c90fBjv19
	TB5/cxMwkMWg5B9aAl5f6H/hSokrfNUEno+6Xt156PNgM0Dk2JrTcpTPyZexNh7ip4e8ou
	tWXTL7Rb7w06LMHD/hNPWswPX0spdsY=
Message-ID: <3ce78def-659c-a3c2-3633-26e5b0a212b1@suse.com>
Date: Mon, 8 May 2023 17:53:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/pci/xen: populate MSI sysfs entries
Content-Language: en-US
To: Maximilian Heyne <mheyne@amazon.de>
Cc: Bjorn Helgaas <bhelgaas@google.com>, Thomas Gleixner
 <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 Marc Zyngier <maz@kernel.org>, Kevin Tian <kevin.tian@intel.com>,
 Jason Gunthorpe <jgg@ziepe.ca>, Ashok Raj <ashok.raj@intel.com>,
 "Ahmed S. Darwish" <darwi@linutronix.de>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 xen-devel@lists.xenproject.org, linux-pci@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20230503131656.15928-1-mheyne@amazon.de>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230503131656.15928-1-mheyne@amazon.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------qUSNF0DqSSSO09uggR0ul3al"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------qUSNF0DqSSSO09uggR0ul3al
Content-Type: multipart/mixed; boundary="------------2VgdL2ARHnFdKP06t0hm3NRa";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Maximilian Heyne <mheyne@amazon.de>
Cc: Bjorn Helgaas <bhelgaas@google.com>, Thomas Gleixner
 <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 Marc Zyngier <maz@kernel.org>, Kevin Tian <kevin.tian@intel.com>,
 Jason Gunthorpe <jgg@ziepe.ca>, Ashok Raj <ashok.raj@intel.com>,
 "Ahmed S. Darwish" <darwi@linutronix.de>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 xen-devel@lists.xenproject.org, linux-pci@vger.kernel.org,
 linux-kernel@vger.kernel.org
Message-ID: <3ce78def-659c-a3c2-3633-26e5b0a212b1@suse.com>
Subject: Re: [PATCH] x86/pci/xen: populate MSI sysfs entries
References: <20230503131656.15928-1-mheyne@amazon.de>
In-Reply-To: <20230503131656.15928-1-mheyne@amazon.de>

--------------2VgdL2ARHnFdKP06t0hm3NRa
Content-Type: multipart/mixed; boundary="------------5l1refLpYoFuAkuHwmsfSyyJ"

--------------5l1refLpYoFuAkuHwmsfSyyJ
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.05.23 15:16, Maximilian Heyne wrote:
> Commit bf5e758f02fc ("genirq/msi: Simplify sysfs handling") reworked the
> creation of sysfs entries for MSI IRQs. The creation used to be in
> msi_domain_alloc_irqs_descs_locked after calling ops->domain_alloc_irqs.
> Then it moved into __msi_domain_alloc_irqs which is an implementation of
> domain_alloc_irqs. However, Xen comes with the only other implementation
> of domain_alloc_irqs and hence doesn't run the sysfs population code
> anymore.
> 
> Commit 6c796996ee70 ("x86/pci/xen: Fixup fallout from the PCI/MSI
> overhaul") set the flag MSI_FLAG_DEV_SYSFS for the xen msi_domain_info
> but that doesn't actually have an effect because Xen uses it's own
> domain_alloc_irqs implementation.
> 
> Fix this by making use of the fallback functions for sysfs population.
> 
> Fixes: bf5e758f02fc ("genirq/msi: Simplify sysfs handling")
> Signed-off-by: Maximilian Heyne <mheyne@amazon.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------5l1refLpYoFuAkuHwmsfSyyJ
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------5l1refLpYoFuAkuHwmsfSyyJ--

--------------2VgdL2ARHnFdKP06t0hm3NRa--

--------------qUSNF0DqSSSO09uggR0ul3al
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRZGvgFAwAAAAAACgkQsN6d1ii/Ey93
MQf/cxKVXhKe0gAEAMln7DmZGr6S7zdybp9iDiXoHMLt9maAVZlKd7AoiwoDqsSRJCkaxoZWtPfC
1ShfFUXQrH3X+RXM0xoOZbwErxXDCnT06ZAtV4muq8RUWfw6wTppfnvd0t0WxuYhEtvdbU40zH0u
+K26n/zsBHSUygZ6kpnHLvMAI/BarkuOamBar8fkKQ3SxhKdOtpNWSn7UgmyvO0tDyepG3GciLy6
cMrI5ibjjWIOeOJD93KNpihT0QlSlVVfQ2MYr58QUyTmi4VLtIJ2LPI/H6whMc1d399u16pzjpfJ
gIE01xfP/1/0PoBJhnL3Y+9LNzY1ZGzF/iKfnTxP7Q==
=cQFh
-----END PGP SIGNATURE-----

--------------qUSNF0DqSSSO09uggR0ul3al--


From xen-devel-bounces@lists.xenproject.org Mon May 08 15:57:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 15:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531649.827425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw3En-0002Bl-0X; Mon, 08 May 2023 15:57:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531649.827425; Mon, 08 May 2023 15:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw3Em-0002Be-Sr; Mon, 08 May 2023 15:57:20 +0000
Received: by outflank-mailman (input) for mailman id 531649;
 Mon, 08 May 2023 15:57:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOTd=A5=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pw3El-0002BY-Tr
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 15:57:19 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0250432e-edb9-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 17:57:18 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id ED6AE21E9C;
 Mon,  8 May 2023 15:57:17 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id BB4481346B;
 Mon,  8 May 2023 15:57:17 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id EbczLN0bWWQqOwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 08 May 2023 15:57:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0250432e-edb9-11ed-b226-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683561437; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uJsZCdyVn5207Vye65VsGviklEd6AocSi/qGBd7aHkI=;
	b=AgB+dU+oP9plsZBFhGD+WPMqLBj8LtSrFz8Z8g1HZIZuZZbycHFYaCWnAtY0KNbh/Cybig
	ZtQUnMFz3v2c6S/MyUMiXXKSL9ACf8v3Cf7Hhp3Fhosbhb3IVhvKXHwR4w2dlipITC5BVg
	xvd+S3fAJywGgjvvAPWaXgABg+jnQIo=
Message-ID: <db39c117-d6fa-ab4d-1683-05fccdeda404@suse.com>
Date: Mon, 8 May 2023 17:57:17 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] xen/pvcalls-back: fix double frees with
 pvcalls_new_active_socket()
Content-Language: en-US
To: Dan Carpenter <dan.carpenter@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, kernel-janitors@vger.kernel.org
References: <e5f98dc2-0305-491f-a860-71bbd1398a2f@kili.mountain>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <e5f98dc2-0305-491f-a860-71bbd1398a2f@kili.mountain>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------GQgp9aeO0OpOmH1pBanwsEqn"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------GQgp9aeO0OpOmH1pBanwsEqn
Content-Type: multipart/mixed; boundary="------------qLcoMchTk1vUtg8ADvS0YEfR";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Dan Carpenter <dan.carpenter@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, kernel-janitors@vger.kernel.org
Message-ID: <db39c117-d6fa-ab4d-1683-05fccdeda404@suse.com>
Subject: Re: [PATCH] xen/pvcalls-back: fix double frees with
 pvcalls_new_active_socket()
References: <e5f98dc2-0305-491f-a860-71bbd1398a2f@kili.mountain>
In-Reply-To: <e5f98dc2-0305-491f-a860-71bbd1398a2f@kili.mountain>

--------------qLcoMchTk1vUtg8ADvS0YEfR
Content-Type: multipart/mixed; boundary="------------XKdA7KkFzwvrNm5B0HXEBVTK"

--------------XKdA7KkFzwvrNm5B0HXEBVTK
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 03.05.23 17:11, Dan Carpenter wrote:
> In the pvcalls_new_active_socket() function, most error paths call
> pvcalls_back_release_active(fedata->dev, fedata, map) which calls
> sock_release() on "sock".  The bug is that the caller also frees sock.
> 
> Fix this by making every error path in pvcalls_new_active_socket()
> release the sock, and don't free it in the caller.
> 
> Fixes: 5db4d286a8ef ("xen/pvcalls: implement connect command")
> Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------XKdA7KkFzwvrNm5B0HXEBVTK
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------XKdA7KkFzwvrNm5B0HXEBVTK--

--------------qLcoMchTk1vUtg8ADvS0YEfR--

--------------GQgp9aeO0OpOmH1pBanwsEqn
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRZG90FAwAAAAAACgkQsN6d1ii/Ey9p
CAf/UjzlrR65HDDHfmlqqsZfml3j7ueSzKVzrj7j4Ak/rlQupbNWb/03mifKyEBPoBJ8+qEV2yhZ
VGm1S9mL6iXq4+tJWIwiHEAB+a/JkCREjiMARkT7s3OuFi60SDWIbJJzNeZFk2zf+EhhvKrygjP5
19rB2nfHdjc1kRuHTh8jwUKihsxam/LuprDgq77gN+3HO/Y4AjR+/AIrtGifuza+do4YPc9ECHv4
xrO8kjf04JUQVZsS7vDDNBQc4NJChg/vIFP7P56hJdSjTl+DU5hLqtAldYs2ModCd43J0gGoKyTV
rEPYka4jLtmTJFzyLYxwFnMkII2qzV3lcc9pHpb31w==
=IZyW
-----END PGP SIGNATURE-----

--------------GQgp9aeO0OpOmH1pBanwsEqn--


From xen-devel-bounces@lists.xenproject.org Mon May 08 16:24:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 16:24:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531662.827434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw3et-00066X-0s; Mon, 08 May 2023 16:24:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531662.827434; Mon, 08 May 2023 16:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw3es-00066Q-UX; Mon, 08 May 2023 16:24:18 +0000
Received: by outflank-mailman (input) for mailman id 531662;
 Mon, 08 May 2023 16:24:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw3er-00066G-N5; Mon, 08 May 2023 16:24:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw3er-0004Fd-Jq; Mon, 08 May 2023 16:24:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw3er-0008Na-84; Mon, 08 May 2023 16:24:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pw3er-0006IE-7Y; Mon, 08 May 2023 16:24:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7T/Mf8EIeVQAtInc3rUYrcfmXoVWfLE/3x/1YA1UFkg=; b=RBjTSEB5GemIBRusAkzl42JITy
	awyInhWcIVNAdfW4IQ1qFsBgz1ZKkHEn6Df+U72zBvUtfmELqHRU1JKMTA/2Y0ceZX8UDjWu/dmqX
	bsJ25w62o2b2xJWkknsDzTCLOsZgDanXxpAkWJFFu2AELatJgRbKhN2cMCgBaDCsmSbo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180579-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180579: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6eeb58ece38060be3b0f7111649a93cc8b2dca49
X-Osstest-Versions-That:
    ovmf=d89492456f79e014679cb6c29b144ea26b910918
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 May 2023 16:24:17 +0000

flight 180579 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180579/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6eeb58ece38060be3b0f7111649a93cc8b2dca49
baseline version:
 ovmf                 d89492456f79e014679cb6c29b144ea26b910918

Last test of basis   180575  2023-05-08 11:43:41 Z    0 days
Testing same since   180579  2023-05-08 13:40:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Minh Nguyen <minhnguyen1@os.amperecomputing.com>
  Nhi Pham <nhi@os.amperecomputing.com>
  Vu Nguyen <vunguyen@os.amperecomputing.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d89492456f..6eeb58ece3  6eeb58ece38060be3b0f7111649a93cc8b2dca49 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 08 16:46:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 16:46:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531668.827444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw40S-00009X-P5; Mon, 08 May 2023 16:46:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531668.827444; Mon, 08 May 2023 16:46:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw40S-00009Q-MJ; Mon, 08 May 2023 16:46:36 +0000
Received: by outflank-mailman (input) for mailman id 531668;
 Mon, 08 May 2023 16:46:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fEaP=A5=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pw40Q-000094-Mv
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 16:46:34 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.23]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2bcf59e-edbf-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 18:46:32 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz48GkL1ei
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 May 2023 18:46:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2bcf59e-edbf-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683564381; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=aQOBJvQhWb1t5tnqeVzHgWO/YtjOx3rNfKHaYf6UwWeC+MkYJiCW6nVylCffa88lSq
    rxywtfvj3AtpTtoxlKpAeE4d7M7PP7Jpm/WbNNNcb73Ij/TqE2erALOQFRhZcwmgtt6x
    Mjhg75RwKlvhrs0+85eJmM+nVXGzhGsJ1XrhOP5Y0zXGoj3blIyezNeI45wgpqvuOk9p
    AD7tSc+6GhenaDRaGJaO+oiPusdN7s/FMVw0+oRj4hoeDUcNWNYoyPMOF6qQUJWpOBHG
    jyayKvdqHKxnVoPxjQJapLW/IHBC5li5C7gPqcf2HYrd8c1PQfjUGkIAuAmKHcb38HoR
    cgJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683564381;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=ckiWL2I0mA50zkvrZeK7N1+RfhrIURHzeKf776Hu3nE=;
    b=sA2c/UTSPr2n7ftMly/N4kZ+4i/RdAraMSv6OKQJzoMMS31a4YiFXk317NdQ8JDecl
    UZ3YSpJiHy4jOftrucQE2aQwBKabxU/6Mg86o/MfTEktpulYnCrA7ngx18K8YJ77gPkF
    VcB3npY+fxOlecInxj2vvi026+WWT2z/rOzFnVQjfCNjXeTnpza3x/GBuYUsV5L/2qUa
    KhDA1kudFXhh8eMON1SO2Vlb9IHdbexSPZwtOh3HjlI1sppc0BI/FEEw9UzINc3agVbZ
    KuVMnRfa1JoWmeg0klSPaOyyt6lvUt6NsboUaR+VItTy7lxuhOGUSwd7l7lrYt3cI9TU
    DBmw==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683564381;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=ckiWL2I0mA50zkvrZeK7N1+RfhrIURHzeKf776Hu3nE=;
    b=DYlKtcZN7zjBC8XassEpnx4dkpk7e+6kblL6C/sXsfFuSVud0v/JrJWX0/jFftfBNB
    EQeYLWdSZ3xsZ/WJfX0hz/gA5177SlhTDVIx3qo40NVDsNmMiDickNOZWWDt1Vw6+Uh6
    SvgFrMp4rP2ik9yU5YUcD4szOKk3myQheW9DQTPuvCKUjdUzHBoRHJ1dMDsY53UuRiUq
    27NNBL2ui3LtZN7Kb6ZQSI5kZSZ5qRYHtZgT5EOKBMLGh2mbGDdSGdP44WlhGzOg8IxW
    jmkSnshwncGbb/Q7VO/lZW7t1AF8l8nhXMaLC45fp1m8xgIWfIpPIGWeskhPLyzyvfZY
    /6+Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683564381;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=ckiWL2I0mA50zkvrZeK7N1+RfhrIURHzeKf776Hu3nE=;
    b=R/ogvXlolyxT7xoTrk9JeByV/QKhwCZGA0U4eQmaLX3o5UhMKPO6iBpLAuUruMwFvW
    odSG+o0vcbpzzlmCDfCQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xr137Gpot26qU4O0oDB37weYobhAHKAaiA4NsOg=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v2] tools: convert bitfields to unsigned type
Date: Mon,  8 May 2023 16:46:18 +0000
Message-Id: <20230508164618.21496-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

clang complains about the signed type:

implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Wsingle-bit-bitfield-constant-conversion]

The potential ABI change in libxenvchan is covered by the Xen-version-based SONAME.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
v2: cover one more case in xenalyze

 tools/include/libxenvchan.h | 6 +++---
 tools/xentrace/xenalyze.c   | 8 ++++----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/include/libxenvchan.h b/tools/include/libxenvchan.h
index 30cc73cf97..3d3b8aa8dd 100644
--- a/tools/include/libxenvchan.h
+++ b/tools/include/libxenvchan.h
@@ -79,11 +79,11 @@ struct libxenvchan {
 	xenevtchn_handle *event;
 	uint32_t event_port;
 	/* informative flags: are we acting as server? */
-	int is_server:1;
+	unsigned int is_server:1;
 	/* true if server remains active when client closes (allows reconnection) */
-	int server_persist:1;
+	unsigned int server_persist:1;
 	/* true if operations should block instead of returning 0 */
-	int blocking:1;
+	unsigned int blocking:1;
 	/* communication rings */
 	struct libxenvchan_ring read, write;
 	/**
diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index 12dcca9646..a50538e9a8 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -1377,7 +1377,7 @@ struct hvm_data {
     tsc_t exit_tsc, arc_cycles, entry_tsc;
     unsigned long long rip;
     unsigned exit_reason, event_handler;
-    int short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
+    unsigned int short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
 
     /* Immediate processing */
     void *d;
@@ -8235,13 +8235,13 @@ void mem_set_p2m_entry_process(struct pcpu_info *p)
 
     struct {
         uint64_t gfn, mfn;
-        int p2mt;
-        int d:16,order:16;
+        uint32_t p2mt;
+        uint16_t d, order;
     } *r = (typeof(r))ri->d;
 
     if ( opt.dump_all )
     {
-        printf(" %s set_p2m_entry d%d o%d t %d g %llx m %llx\n",
+        printf(" %s set_p2m_entry d%u o%u t %u g %llx m %llx\n",
                ri->dump_header,
                r->d, r->order,
                r->p2mt,


From xen-devel-bounces@lists.xenproject.org Mon May 08 17:15:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 17:15:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531682.827454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw4Ro-0003dl-TF; Mon, 08 May 2023 17:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531682.827454; Mon, 08 May 2023 17:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw4Ro-0003de-Pm; Mon, 08 May 2023 17:14:52 +0000
Received: by outflank-mailman (input) for mailman id 531682;
 Mon, 08 May 2023 17:14:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fEaP=A5=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pw4Rn-0003cn-K1
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 17:14:51 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [85.215.255.53]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d70ff816-edc3-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 19:14:50 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz48HEj1iE
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 May 2023 19:14:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d70ff816-edc3-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683566085; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=tE6YHuHy5lVgdjuDoGDmTwSBgXKtuS1zDa+h/S810xaL3xbUpjKNwYeQHYaBy2RvFl
    XpZuEd8VLlxKqHAEIc6ga5n58BAziTQas967IHhukhcGU0EDv2VsvjZ7VEaqrihJ2Y9/
    i0Vh4ngEwa0bPncFQPvXLsOHMVorH7YoZ/dzlDlOWjPIgm8mif3tAGTWiCrnKCCsRpx5
    3p479Zye8lgUFoWvbLH1feNVbJbESCTKFSG1ntDBw8jU7975kRDXEu7X0WX7prpco4nY
    8IA6BOJYLF39Ln+EoVt4zk6X+h1daiCLOmcQ3KbsYczTIh/wB+29XuPf70rWREdCAzxP
    zuOw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683566085;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=y0EEjOUz8ZL2cVZEIRcJ/bDf+Ceoz247L+uHTHcTrAo=;
    b=b05BlS4jHDUWuizJL3NhuJotZtOLwERDH4ZMFsEmvbs1CocwbRm41A3tZEBaDUksS1
    bObD3J8ePYHUk56GTqUyNvuIIyr0TpW5RGBlTbFg2B+8kZ1iKGYar6S9pIcE4v2EoUEQ
    Aiif6AdOVQ2p43r4o49yQt8SkH3UCdRzJGUEJWmir2/yaSTfISyhavMHQmrOrdzTssTL
    Uy+8oDr+ixK9Q8+QuM/NvGZlMiF9/OM7ERoOIn7MdwiqlqeRQq061xtFbRdponjbMUyj
    par6x6S8IudAulCdUcHgTeeAc3I/J5elU9eXBxQl5yTrnaQwhbczQ43Nu5CdN53R0mnr
    5qZg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683566085;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=y0EEjOUz8ZL2cVZEIRcJ/bDf+Ceoz247L+uHTHcTrAo=;
    b=WfTnRdjagb5Qlqlvp773aVY/NuQaV7jGY/PlS4dKa71IgRcOYO6ew27kbArysI/G4B
    NBRDn20uoaNrc2WE+6ct3b07XSY+Eoa/8cKTCYhlLLZhNWGaTLBR2TL8/TFPfD+ATE/0
    bwCc52HlIp3OxyY0qeId8K1ML943XhRviSODWacIDH44dHDBWemI/90VI1gNvxd5moF3
    T8Xq2hYL9hi0hZGiVXsvkdRGXIB+RIRQqYV71l+1Aa8nN9VqmR4okXlvqENWBrs8AphC
    SfYsqLG4ToE8gaC/ITXOTvYJCXu3m+WyZLHmMWfgBelPihBJsltJyS2mJYNi+wQ2M48c
    HlOQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683566085;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=y0EEjOUz8ZL2cVZEIRcJ/bDf+Ceoz247L+uHTHcTrAo=;
    b=mCQQC+FJHISBFELfPTHBHvCVe1SjmVm6K6kDqRAgGoenlST0+wBHtE71uQq1U4//KB
    rFjD9jOEkEWyFKMDY4Bg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xr137Gpot26qU4O0oDB37weYobhAHKAaiA4NsOg=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v2] Fix install.sh for systemd
Date: Mon,  8 May 2023 17:14:37 +0000
Message-Id: <20230508171437.27424-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

On a Fedora system, running `sudo sh install.sh` breaks the
system.  The installation clobbers /var/run, a symlink to /run.  A
subsequent boot fails once /var/run and /run have diverged, since
accesses through /var/run can't find items that now exist only in /run
and vice versa.

Skip populating /var/run/xen during make install.
The directory is already created by some scripts. Adjust all remaining
scripts to create XEN_RUN_DIR at runtime.

XEN_RUN_STORED is covered by XEN_RUN_DIR because xenstored is usually
started afterwards.

Reported-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/Makefile                                     | 2 --
 tools/hotplug/FreeBSD/rc.d/xencommons.in           | 1 +
 tools/hotplug/FreeBSD/rc.d/xendriverdomain.in      | 1 +
 tools/hotplug/Linux/init.d/xendriverdomain.in      | 1 +
 tools/hotplug/Linux/systemd/xenconsoled.service.in | 2 +-
 tools/hotplug/NetBSD/rc.d/xendriverdomain.in       | 2 +-
 6 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/Makefile b/tools/Makefile
index 4906fdbc23..1ff90ddfa0 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -58,9 +58,7 @@ build all: subdirs-all
 install:
 	$(INSTALL_DIR) -m 700 $(DESTDIR)$(XEN_DUMP_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_LOG_DIR)
-	$(INSTALL_DIR) $(DESTDIR)$(XEN_RUN_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_LIB_DIR)
-	$(INSTALL_DIR) $(DESTDIR)$(XEN_RUN_STORED)
 	$(INSTALL_DIR) $(DESTDIR)$(PKG_INSTALLDIR)
 	$(MAKE) subdirs-install
 
diff --git a/tools/hotplug/FreeBSD/rc.d/xencommons.in b/tools/hotplug/FreeBSD/rc.d/xencommons.in
index 7f7cda289f..1cf217d418 100644
--- a/tools/hotplug/FreeBSD/rc.d/xencommons.in
+++ b/tools/hotplug/FreeBSD/rc.d/xencommons.in
@@ -34,6 +34,7 @@ xen_startcmd()
 	local time=0
 	local timeout=30
 
+	mkdir -p "@XEN_RUN_DIR@"
 	xenstored_pid=$(check_pidfile ${XENSTORED_PIDFILE} ${XENSTORED})
 	if test -z "$xenstored_pid"; then
 		printf "Starting xenservices: xenstored, xenconsoled."
diff --git a/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in b/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
index a032822e33..030d104024 100644
--- a/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
+++ b/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
@@ -27,6 +27,7 @@ xendriverdomain_start()
 {
 	printf "Starting xenservices: xl devd."
 
+	mkdir -p "@XEN_RUN_DIR@"
 	PATH="${bindir}:${sbindir}:$PATH" ${sbindir}/xl devd --pidfile ${XLDEVD_PIDFILE} ${XLDEVD_ARGS}
 
 	printf "\n"
diff --git a/tools/hotplug/Linux/init.d/xendriverdomain.in b/tools/hotplug/Linux/init.d/xendriverdomain.in
index c63060f62a..1055d0b942 100644
--- a/tools/hotplug/Linux/init.d/xendriverdomain.in
+++ b/tools/hotplug/Linux/init.d/xendriverdomain.in
@@ -49,6 +49,7 @@ fi
 
 do_start () {
 	echo Starting xl devd...
+	mkdir -m700 -p @XEN_RUN_DIR@
 	${sbindir}/xl devd --pidfile=$XLDEVD_PIDFILE $XLDEVD_ARGS
 }
 do_stop () {
diff --git a/tools/hotplug/Linux/systemd/xenconsoled.service.in b/tools/hotplug/Linux/systemd/xenconsoled.service.in
index 1f03de9041..d84c09aa9c 100644
--- a/tools/hotplug/Linux/systemd/xenconsoled.service.in
+++ b/tools/hotplug/Linux/systemd/xenconsoled.service.in
@@ -11,7 +11,7 @@ Environment=XENCONSOLED_TRACE=none
 Environment=XENCONSOLED_LOG_DIR=@XEN_LOG_DIR@/console
 EnvironmentFile=-@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
 ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
-ExecStartPre=/bin/mkdir -p ${XENCONSOLED_LOG_DIR}
+ExecStartPre=/bin/mkdir -p ${XENCONSOLED_LOG_DIR} @XEN_RUN_DIR@
 ExecStart=@sbindir@/xenconsoled -i --log=${XENCONSOLED_TRACE} --log-dir=${XENCONSOLED_LOG_DIR} $XENCONSOLED_ARGS
 
 [Install]
diff --git a/tools/hotplug/NetBSD/rc.d/xendriverdomain.in b/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
index f47b0b189c..23a5352502 100644
--- a/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
+++ b/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
@@ -23,7 +23,7 @@ XLDEVD_PIDFILE="@XEN_RUN_DIR@/xldevd.pid"
 
 xendriverdomain_precmd()
 {
-	:
+	mkdir -p "@XEN_RUN_DIR@"
 }
 
 xendriverdomain_startcmd()


From xen-devel-bounces@lists.xenproject.org Mon May 08 17:17:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 17:17:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531687.827464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw4U6-0004CF-8W; Mon, 08 May 2023 17:17:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531687.827464; Mon, 08 May 2023 17:17:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw4U6-0004C8-5e; Mon, 08 May 2023 17:17:14 +0000
Received: by outflank-mailman (input) for mailman id 531687;
 Mon, 08 May 2023 17:17:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fEaP=A5=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pw4U4-0004C2-TP
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 17:17:12 +0000
Received: from mo4-p02-ob.smtp.rzone.de (mo4-p02-ob.smtp.rzone.de
 [85.215.255.82]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2b8a7262-edc4-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 19:17:12 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz48HH81ia
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 8 May 2023 19:17:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b8a7262-edc4-11ed-b226-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683566228; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=W5kIyWZBRtirVB8R8ABRp+GuE5Wdlf/KkMElpiFRHAMl2TRgu409DRaik8UZw4yFGA
    yQf9a4wkZpC5JiSKUQF7fVv3BZX8Ytg5tHqBWdA1/hK0Jm/NfQ83S1BZfFAFDqDQz93k
    pXP0Z8Wm7W69gVy9kZsoXWbmpdxn6dXbolcUIH3py5K+u2oskLf9p9IipiwzZXVhN97c
    cRqh/5Ye3oql+1/dUnpZq31cnE2t8q/cIWar5JH6Bw5j18k8xN0MjLP+d5D88oeLgWQq
    KDrrRyPki4+isDRxIpfX6QJf61lfjzDz04C0GTg2qow2rHm1ulMZxy91fXEydMVOclQP
    YdMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683566228;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=JJjjfI8cGGCBUnmE7uP1IovAWHC+stQjDcKDTynV5EM=;
    b=anb+H1BggY9Al/43BWLcTC6X3IacWo/qQC4SmycI/M9NADwN4GrV4JoUpnxeo24WRd
    r/ucRm0likxPTCNxVPO/V8tcGo6V3MA5w/PbmVa+0+nwnxS3rJferrioafr2A5XXfdwA
    sK51kyLMgCO8gvpTs91SvPRyjByZT4iX1L6WNI7lqIWu8ZxR3VW2Vun8ScM4S/9YNPBy
    gcLzH87e4K6F4Kqnd7L84QKzb8fguresfuLOBS3ykjqFcJRHnLoyXK+XOK4kr/kLHSs6
    8ILHMf9hE8bjPc3g03O/hGhmjVPF6XrzBQwi3v6D5gDl/mYNnat3bgTOxxoF5/HJbAOF
    1C5Q==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo02
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683566228;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=JJjjfI8cGGCBUnmE7uP1IovAWHC+stQjDcKDTynV5EM=;
    b=tEAdgWKY31K7pXA5G0cggg6Y4/f8HBazh+jQCxX7I8ez45pRqBiwcrdNJ7R8MAMVup
    JdG7DyBG26j1slFLo87EuG4HPfaK/2l6IdnXYVyAYIRwIR5qvj+ktiHmFMwXFq6ALsAQ
    TjDCsaHPMokqOHPDrv4ljRAqQpzXBqVSwUPzlIskyVCGyCenMT53ZK3dYPh9e78Wf6c3
    YYTvmQ0BIKSj6MjGIOZvMFCW+M/kEBgoEImNphIasmA9XSKAWR6SYCUiWpirrqDkUa90
    +ISilLlNm7Un2ghGS960EeZDZhHmbIOQYf8XHnPTPYqym7QVSeoS8woIkm7GB1lTgugo
    q3rQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683566228;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=JJjjfI8cGGCBUnmE7uP1IovAWHC+stQjDcKDTynV5EM=;
    b=W9ktGnwlemDpkX5JnkPK040mp1hkOagfgwP/AQz/vdF6GheLbB8plTGXInrSFpgpRI
    Qaeo7Cbqb5ErMQRGPRCg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4ARWIYxzstZKeVom+bauo0LKSCjuo5iX5xLikmg=="
Date: Mon, 8 May 2023 19:17:06 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org, Wei Liu
 <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] Fix install.sh for systemd
Message-ID: <20230508191706.52cd21f4.olaf@aepfle.de>
In-Reply-To: <CAKf6xpt-h7sMjznhbn1RvdT_kn1iri5oXq+_ugjWib6YyuCx+w@mail.gmail.com>
References: <20230425194622.114869-1-jandryuk@gmail.com>
	<20230426091533.68324d8d.olaf@aepfle.de>
	<650a7f6e-be82-0312-05f2-bb69e51e828d@suse.com>
	<20230426104754.78845a19.olaf@aepfle.de>
	<9dfb4f01-979e-e225-214e-34ddb51a9199@suse.com>
	<20230426124051.24c2a9a6.olaf@aepfle.de>
	<CAKf6xpt-h7sMjznhbn1RvdT_kn1iri5oXq+_ugjWib6YyuCx+w@mail.gmail.com>
X-Mailer: Claws Mail 20230504T161344.b05adb60 has a software problem, nothing to be done about it.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/kem2mED/kyqZYZrf0a0a5uZ";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/kem2mED/kyqZYZrf0a0a5uZ
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Wed, 26 Apr 2023 09:31:44 -0400 Jason Andryuk <jandryuk@gmail.com>:

> On Wed, Apr 26, 2023 at 6:40 AM Olaf Hering <olaf@aepfle.de> wrote:
> > +++ b/tools/hotplug/Linux/init.d/xendriverdomain.in
> > @@ -49,6 +49,7 @@ fi
> >
> >  do_start () {
> >         echo Starting xl devd...
> > +       mkdir -m700 -p ${XEN_RUN_DIR}
> This one should be "@XEN_RUN_DIR@"?

I changed it, it was a result of copy&paste from another file.

Since hotplug.sh is included in many scripts, all such users could be
converted to a shell variable. But this would be a separate change.
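As a rough sketch of that separate change (the variable default and the helper name here are illustrative, not taken from the Xen tree), hotplug.sh could define the directory once so every consumer creates it the same way:

```shell
#!/bin/sh
# Illustrative sketch only: in the real tree, @XEN_RUN_DIR@ would be
# substituted at build time; the default shown here is an assumption.
: "${XEN_RUN_DIR:=/var/run/xen}"

ensure_xen_run_dir () {
	# -m 700 restricts the directory to its owner, matching the
	# init.d change discussed above.
	mkdir -m 700 -p "$XEN_RUN_DIR"
}
```

Each init script would then call `ensure_xen_run_dir` instead of open-coding its own mkdir.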


Olaf

--Sig_/kem2mED/kyqZYZrf0a0a5uZ
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRZLpIACgkQ86SN7mm1
DoDHRw//SmBrqDmjqaXIMqVXSNyzP2zU33erPjdoQixcOeigyLisfJZUMfCDSnOV
oWKgd6wsEIdNP/0iXfoXwXwL8LYfM16gIaJ40SJ8ZFryTF4TC9Z1ySnJPp1rfjG+
R5/j/4caUJA0BW3mKAtENZ1YScQU2vM76IgeXcXfxIOhZLKnfFiolXI02ZR16OR9
kRrR0H3JisqEvv60xj8J1D2y4mUw3nbnB6Bim1Aqnyby/XuJP6DbZPtSmsGUhg4a
KNTtu137lZELkE4t1H55zlUFle/d/s0JLu1qstD6O633K/DbemPME6mNaDrrE74k
8MgHpM2nsEpPcnnpSxzwSfSv8tk8pOnO+i/LBjZaTQq0uqTlWcLVzxP87ffb/qgT
oGe8Qx/stp7H5rL4OOXl0GhlSyTgcEDvJhckg+FIk9wJ2riGEvs6Kr3aXj29PDTi
EjFO6D7y9V+txzdtu+xyxkHBcXYEZJvAlyEf4QFcndq1F9pSdoguMcT/Jwwd6HoG
C2RCX5SSykX0nDqE56RhAG6v2uqppk1bbrInpogMJxUWssB5FWNn/x2SWJY3UBO9
WOHfHdfM6faZSj2dmbps6YJEf8yUaYpQ9Ux+21Vq3POyaJU/R3iBkcd/1ayaAOef
FFIMgLJmtWkJ17+Btip0y0zz0YQFpLddeqsg1G6vmlAmS/woLDw=
=VZWr
-----END PGP SIGNATURE-----

--Sig_/kem2mED/kyqZYZrf0a0a5uZ--


From xen-devel-bounces@lists.xenproject.org Mon May 08 17:30:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 17:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531694.827475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw4gF-0005kq-CA; Mon, 08 May 2023 17:29:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531694.827475; Mon, 08 May 2023 17:29:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw4gF-0005kj-8n; Mon, 08 May 2023 17:29:47 +0000
Received: by outflank-mailman (input) for mailman id 531694;
 Mon, 08 May 2023 17:29:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hkai=A5=gmail.com=wei.liu.linux@srs-se1.protection.inumbo.net>)
 id 1pw4gE-0005kd-BZ
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 17:29:46 +0000
Received: from mail-pf1-f170.google.com (mail-pf1-f170.google.com
 [209.85.210.170]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ebfd1c59-edc5-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 19:29:45 +0200 (CEST)
Received: by mail-pf1-f170.google.com with SMTP id
 d2e1a72fcca58-6439bbc93b6so3004198b3a.1
 for <xen-devel@lists.xenproject.org>; Mon, 08 May 2023 10:29:45 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([20.69.120.36])
 by smtp.gmail.com with ESMTPSA id
 i4-20020aa787c4000000b0063d2d9990ecsm232036pfo.87.2023.05.08.10.29.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 08 May 2023 10:29:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebfd1c59-edc5-11ed-b226-6b7b168915f2
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683566983; x=1686158983;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=3rfke9qB1DRHiCuMEOyzzXuV8oi0P7zjijqnIYynp3o=;
        b=PXcyir2XDkklOuGPiNzcz5Kv++6+ZR18LlHpIM5iK2H/hLAGM6dLX8P3ym0awZFide
         hyl3t4W+OWFLBV9Pdp9aqsiq+4xZmRRWwA3TxhTfX+t5zusnI7K9+rSNVDjcTqxZqXdH
         5/8OPK7mgTLz0RSFtEPsSBOsaU9gsids09s9zpPzlDsHr3pyToTQLjhfl6ymlXCdxUfJ
         yCDN7xJld0ez525YX03T0zOgnCkp02Uxnb42EH/3Crfk/jDbROadD1lj4EggnwZBmGAj
         Ex4VH3ZAxsfi9VCsYKXZUhoKAuM+5etoDw3GYPKuBzmtXOklDfV65eZTcxzo5xQOM8XJ
         hWdg==
X-Gm-Message-State: AC+VfDymeHvftx864ayDzXBbxi/zFeL18p0LeMQ4/iIom4ZFn+DkXROx
	APk6uaoIEMURJMP40pfxS+k=
X-Google-Smtp-Source: ACHHUZ75U4lCk4iijzyP5tOZ0JsVrz/EkvkWJ7HQDBXg3iSt23KVdFrierjfNZySz59EZ1uNpntsKQ==
X-Received: by 2002:a05:6a20:a107:b0:ff:7c74:a799 with SMTP id q7-20020a056a20a10700b000ff7c74a799mr11302220pzk.9.1683566983497;
        Mon, 08 May 2023 10:29:43 -0700 (PDT)
Date: Mon, 8 May 2023 17:29:41 +0000
From: Wei Liu <wei.liu@kernel.org>
To: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>
Cc: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	Mihai =?utf-8?B?RG9uyJt1?= <mdontu@bitdefender.com>,
	=?utf-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?utf-8?Q?=C8=98tefan_=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
	x86@kernel.org, xen-devel@lists.xenproject.org,
	Wei Liu <wei.liu@kernel.org>
Subject: Re: [PATCH v1 3/9] virt: Implement Heki common code
Message-ID: <ZFkxhWhjyIzrPkt8@liuwe-devbox-debian-v2>
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-4-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230505152046.6575-4-mic@digikod.net>

On Fri, May 05, 2023 at 05:20:40PM +0200, Mickaël Salaün wrote:
> From: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> 
> Hypervisor Enforced Kernel Integrity (Heki) is a feature that will use
> the hypervisor to enhance guest virtual machine security.
> 
> Configuration
> =============
> 
> Define the config variables for the feature. This feature depends on
> support from the architecture as well as the hypervisor.
> 
> Enabling HEKI
> =============
> 
> Define a kernel command line parameter "heki" to turn the feature on or
> off. By default, Heki is on.

For such a newfangled feature can we have it off by default? Especially
when there are unsolved issues around dynamically loaded code.

> 
[...]
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 3604074a878b..5cf5a7a97811 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -297,6 +297,7 @@ config X86
>  	select FUNCTION_ALIGNMENT_4B
>  	imply IMA_SECURE_AND_OR_TRUSTED_BOOT    if EFI
>  	select HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
> +	select ARCH_SUPPORTS_HEKI		if X86_64

Why is there a restriction on X86_64?

>  
>  config INSTRUCTION_DECODER
>  	def_bool y
> diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/asm/sections.h
> index a6e8373a5170..42ef1e33b8a5 100644
> --- a/arch/x86/include/asm/sections.h
> +++ b/arch/x86/include/asm/sections.h
[...]
>  
> +#ifdef CONFIG_HEKI
> +
> +/*
> + * Gather all of the statically defined sections so heki_late_init() can
> + * protect these sections in the host page table.
> + *
> + * The sections are defined under "SECTIONS" in vmlinux.lds.S
> + * Keep this array in sync with SECTIONS.
> + */

This seems a bit fragile, because it requires constant attention from
people who care about this functionality. Can this table be
automatically generated?

Thanks,
Wei.

> +struct heki_va_range __initdata heki_va_ranges[] = {
> +	{
> +		.va_start = _stext,
> +		.va_end = _etext,
> +		.attributes = HEKI_ATTR_MEM_NOWRITE | HEKI_ATTR_MEM_EXEC,
> +	},
> +	{
> +		.va_start = __start_rodata,
> +		.va_end = __end_rodata,
> +		.attributes = HEKI_ATTR_MEM_NOWRITE,
> +	},
> +#ifdef CONFIG_UNWINDER_ORC
> +	{
> +		.va_start = __start_orc_unwind_ip,
> +		.va_end = __stop_orc_unwind_ip,
> +		.attributes = HEKI_ATTR_MEM_NOWRITE,
> +	},
> +	{
> +		.va_start = __start_orc_unwind,
> +		.va_end = __stop_orc_unwind,
> +		.attributes = HEKI_ATTR_MEM_NOWRITE,
> +	},
> +	{
> +		.va_start = orc_lookup,
> +		.va_end = orc_lookup_end,
> +		.attributes = HEKI_ATTR_MEM_NOWRITE,
> +	},
> +#endif /* CONFIG_UNWINDER_ORC */
> +};
> +


From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531750.827566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lp-0005wj-8s; Mon, 08 May 2023 19:43:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531750.827566; Mon, 08 May 2023 19:43:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lp-0005vp-1q; Mon, 08 May 2023 19:43:41 +0000
Received: by outflank-mailman (input) for mailman id 531750;
 Mon, 08 May 2023 19:43:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6ln-0004GB-6u
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:39 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a0e19a96-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0e19a96-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185217.618219949@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575018;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Nq+EVjEHwGvHRlseUbUpbAQd2uYa+p7t28i6JGIOJ78=;
	b=l050jCE5Rr8v/gugpUq1Ss+TlvOTAU6bHW5tmCokiaoqtw5CJrnjAQqxTBqzm46r10H8V/
	PN/C85twXsUJTaTtz1N6lrClgA/Gywes1nzaKTKsIwa99M6sRy4sGklJiGciZ0ImlUVQ80
	BCh6GCvig4h53NBkh/6pnZ/bEmQ5POZmqaNW5u1zjMqAwqiP+mkv0AM8PZR2tBTf7vMFzv
	Vk5CiVSe4cZsa28TSPkqrniRc0oqqy9ISXDKayjUa0EChnVUegmMMNAcBSr7NGh9c0+rJ5
	gYcfpjceJWyADjwArci8tisYfWTvchOJq06bNOx2RJdGtIqY/RHVI8A1x+wuBg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575018;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Nq+EVjEHwGvHRlseUbUpbAQd2uYa+p7t28i6JGIOJ78=;
	b=OBLEtPDPwv/0EcuR38ddNfg8WWZsz2t8tgAu+g3ffoBnaypodVLNrO9gR6tjF5qfnG9zHm
	9+H9v5nuCGMGmXAw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 07/36] x86/smpboot: Restrict soft_restart_cpu() to SEV
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:37 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the CPU0 hotplug cruft is gone, the only user is AMD SEV.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/kernel/callthunks.c |    2 +-
 arch/x86/kernel/head_32.S    |   14 --------------
 arch/x86/kernel/head_64.S    |    2 +-
 3 files changed, 2 insertions(+), 16 deletions(-)
---

--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -133,7 +133,7 @@ static bool skip_addr(void *dest)
 	/* Accounts directly */
 	if (dest == ret_from_fork)
 		return true;
-#ifdef CONFIG_HOTPLUG_CPU
+#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_AMD_MEM_ENCRYPT)
 	if (dest == soft_restart_cpu)
 		return true;
 #endif
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -138,20 +138,6 @@ SYM_CODE_START(startup_32)
 	jmp .Ldefault_entry
 SYM_CODE_END(startup_32)
 
-#ifdef CONFIG_HOTPLUG_CPU
-/*
- * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
- * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
- * unplug. Everything is set up already except the stack.
- */
-SYM_FUNC_START(soft_restart_cpu)
-	movl initial_stack, %ecx
-	movl %ecx, %esp
-	call *(initial_code)
-1:	jmp 1b
-SYM_FUNC_END(soft_restart_cpu)
-#endif
-
 /*
  * Non-boot CPU entry point; entered from trampoline.S
  * We can't lgdt here, because lgdt itself uses a data segment, but
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -375,7 +375,7 @@ SYM_CODE_END(secondary_startup_64)
 #include "verify_cpu.S"
 #include "sev_verify_cbit.S"
 
-#ifdef CONFIG_HOTPLUG_CPU
+#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_AMD_MEM_ENCRYPT)
 /*
  * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
  * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531749.827562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lo-0005td-SD; Mon, 08 May 2023 19:43:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531749.827562; Mon, 08 May 2023 19:43:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lo-0005tS-OT; Mon, 08 May 2023 19:43:40 +0000
Received: by outflank-mailman (input) for mailman id 531749;
 Mon, 08 May 2023 19:43:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lm-0004Y5-Ro
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:38 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9feb5f3f-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:43:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9feb5f3f-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185217.564850214@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575016;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=dWwLkoa+CxUjsSK7qf2Y5jZ2cGq23YEvo1MW7nbjWN0=;
	b=Lbsq5RCoehz/zeQ46Kql0DxBJ16qoGQI1FM4XaAZJbLbjajj7ITHBDJGs95sO0pMop0NoQ
	7f4Rhr7rPQAc/6P66xjPMduhyAGVTekvjPLIl4pGJBWHVmMSCtw2Den9wSosh90IVyneVN
	d0wYFDA8BRyvXCF0AlMOKoc/w2SrWhQdphMR4YYOq67IfoCSmiWvqziIPGaTXrjFLVNYT5
	v2aExqc7xj8oXY+cp3ngOjhAtQEX6T8jphKJj54KFkmr8IOd75w8gei4vtXfH0pQZXJnmF
	FjzIikFSa9a+embThcwnko4mbExq6fkzBS8G3txl24LTaj5eef7mkZL4VtFyCQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575016;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=dWwLkoa+CxUjsSK7qf2Y5jZ2cGq23YEvo1MW7nbjWN0=;
	b=yBxVgF62QfhOKRNoOvqWQGE+yadZ+RtYU17bgeMhSlxSxnbKX3E7zuIX3FHR8qFHE44iZ8
	MVVgGQHKacYfRdAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 06/36] x86/smpboot: Remove the CPU0 hotplug kludge
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:36 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

This was introduced with commit e1c467e69040 ("x86, hotplug: Wake up CPU0
via NMI instead of INIT, SIPI, SIPI") to eventually support physical
hotplug of CPU0:

 "We'll change this code in the future to wake up hard offlined CPU0 if
  real platform and request are available."

11 years later this has not happened and physical hotplug is not officially
supported. Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/apic.h   |    1 
 arch/x86/include/asm/smp.h    |    1 
 arch/x86/kernel/smpboot.c     |  170 +++---------------------------------------
 drivers/acpi/processor_idle.c |    4 
 4 files changed, 14 insertions(+), 162 deletions(-)
---

--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -377,7 +377,6 @@ extern struct apic *__apicdrivers[], *__
  * APIC functionality to boot other CPUs - only used on SMP:
  */
 #ifdef CONFIG_SMP
-extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
 extern int lapic_can_unplug_cpu(void);
 #endif
 
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -130,7 +130,6 @@ void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 int wbinvd_on_all_cpus(void);
-void cond_wakeup_cpu0(void);
 
 void native_smp_send_reschedule(int cpu);
 void native_send_call_func_ipi(const struct cpumask *mask);
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -216,9 +216,6 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
-static int cpu0_logical_apicid;
-static int enable_start_cpu0;
-
 /*
  * Activate a secondary processor.
  */
@@ -241,8 +238,6 @@ static void notrace start_secondary(void
 	x86_cpuinit.early_percpu_clock_init();
 	smp_callin();
 
-	enable_start_cpu0 = 0;
-
 	/* otherwise gcc will move up smp_processor_id before the cpu_init */
 	barrier();
 	/* Check TSC synchronization with the control CPU: */
@@ -410,7 +405,7 @@ void smp_store_cpu_info(int id)
 	c->cpu_index = id;
 	/*
 	 * During boot time, CPU0 has this setup already. Save the info when
-	 * bringing up AP or offlined CPU0.
+	 * bringing up an AP.
 	 */
 	identify_secondary_cpu(c);
 	c->initialized = true;
@@ -807,51 +802,14 @@ static void __init smp_quirk_init_udelay
 }
 
 /*
- * Poke the other CPU in the eye via NMI to wake it up. Remember that the normal
- * INIT, INIT, STARTUP sequence will reset the chip hard for us, and this
- * won't ... remember to clear down the APIC, etc later.
- */
-int
-wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip)
-{
-	u32 dm = apic->dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
-	unsigned long send_status, accept_status = 0;
-	int maxlvt;
-
-	/* Target chip */
-	/* Boot on the stack */
-	/* Kick the second */
-	apic_icr_write(APIC_DM_NMI | dm, apicid);
-
-	pr_debug("Waiting for send to finish...\n");
-	send_status = safe_apic_wait_icr_idle();
-
-	/*
-	 * Give the other CPU some time to accept the IPI.
-	 */
-	udelay(200);
-	if (APIC_INTEGRATED(boot_cpu_apic_version)) {
-		maxlvt = lapic_get_maxlvt();
-		if (maxlvt > 3)			/* Due to the Pentium erratum 3AP.  */
-			apic_write(APIC_ESR, 0);
-		accept_status = (apic_read(APIC_ESR) & 0xEF);
-	}
-	pr_debug("NMI sent\n");
-
-	if (send_status)
-		pr_err("APIC never delivered???\n");
-	if (accept_status)
-		pr_err("APIC delivery error (%lx)\n", accept_status);
-
-	return (send_status | accept_status);
-}
-
-static int
-wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
+ * Wake up AP by INIT, INIT, STARTUP sequence.
+ */
+static int wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
 {
 	unsigned long send_status = 0, accept_status = 0;
 	int maxlvt, num_starts, j;
 
+	preempt_disable();
 	maxlvt = lapic_get_maxlvt();
 
 	/*
@@ -957,6 +915,7 @@ wakeup_secondary_cpu_via_init(int phys_a
 	if (accept_status)
 		pr_err("APIC delivery error (%lx)\n", accept_status);
 
+	preempt_enable();
 	return (send_status | accept_status);
 }
 
@@ -997,67 +956,6 @@ static void announce_cpu(int cpu, int ap
 			node, cpu, apicid);
 }
 
-static int wakeup_cpu0_nmi(unsigned int cmd, struct pt_regs *regs)
-{
-	int cpu;
-
-	cpu = smp_processor_id();
-	if (cpu == 0 && !cpu_online(cpu) && enable_start_cpu0)
-		return NMI_HANDLED;
-
-	return NMI_DONE;
-}
-
-/*
- * Wake up AP by INIT, INIT, STARTUP sequence.
- *
- * Instead of waiting for STARTUP after INITs, BSP will execute the BIOS
- * boot-strap code which is not a desired behavior for waking up BSP. To
- * void the boot-strap code, wake up CPU0 by NMI instead.
- *
- * This works to wake up soft offlined CPU0 only. If CPU0 is hard offlined
- * (i.e. physically hot removed and then hot added), NMI won't wake it up.
- * We'll change this code in the future to wake up hard offlined CPU0 if
- * real platform and request are available.
- */
-static int
-wakeup_cpu_via_init_nmi(int cpu, unsigned long start_ip, int apicid,
-	       int *cpu0_nmi_registered)
-{
-	int id;
-	int boot_error;
-
-	preempt_disable();
-
-	/*
-	 * Wake up AP by INIT, INIT, STARTUP sequence.
-	 */
-	if (cpu) {
-		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
-		goto out;
-	}
-
-	/*
-	 * Wake up BSP by nmi.
-	 *
-	 * Register a NMI handler to help wake up CPU0.
-	 */
-	boot_error = register_nmi_handler(NMI_LOCAL,
-					  wakeup_cpu0_nmi, 0, "wake_cpu0");
-
-	if (!boot_error) {
-		enable_start_cpu0 = 1;
-		*cpu0_nmi_registered = 1;
-		id = apic->dest_mode_logical ? cpu0_logical_apicid : apicid;
-		boot_error = wakeup_secondary_cpu_via_nmi(id, start_ip);
-	}
-
-out:
-	preempt_enable();
-
-	return boot_error;
-}
-
 int common_cpu_up(unsigned int cpu, struct task_struct *idle)
 {
 	int ret;
@@ -1086,8 +984,7 @@ int common_cpu_up(unsigned int cpu, stru
  * Returns zero if CPU booted OK, else error code from
  * ->wakeup_secondary_cpu.
  */
-static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
-		       int *cpu0_nmi_registered)
+static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
 	/* start_ip had better be page-aligned! */
 	unsigned long start_ip = real_mode_header->trampoline_start;
@@ -1120,7 +1017,6 @@ static int do_boot_cpu(int apicid, int c
 	 * This grunge runs the startup process for
 	 * the targeted processor.
 	 */
-
 	if (x86_platform.legacy.warm_reset) {
 
 		pr_debug("Setting warm reset code and vector.\n");
@@ -1149,15 +1045,14 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use a method from the APIC driver if one defined, with wakeup
 	 *   straight to 64-bit mode preferred over wakeup to RM.
 	 * Otherwise,
-	 * - Use an INIT boot APIC message for APs or NMI for BSP.
+	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
 		boot_error = apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
 		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
 	else
-		boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
-						     cpu0_nmi_registered);
+		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
 
 	if (!boot_error) {
 		/*
@@ -1206,9 +1101,8 @@ static int do_boot_cpu(int apicid, int c
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
-	int cpu0_nmi_registered = 0;
 	unsigned long flags;
-	int err, ret = 0;
+	int err;
 
 	lockdep_assert_irqs_enabled();
 
@@ -1247,11 +1141,10 @@ int native_cpu_up(unsigned int cpu, stru
 	if (err)
 		return err;
 
-	err = do_boot_cpu(apicid, cpu, tidle, &cpu0_nmi_registered);
+	err = do_boot_cpu(apicid, cpu, tidle);
 	if (err) {
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-		ret = -EIO;
-		goto unreg_nmi;
+		return err;
 	}
 
 	/*
@@ -1267,15 +1160,7 @@ int native_cpu_up(unsigned int cpu, stru
 		touch_nmi_watchdog();
 	}
 
-unreg_nmi:
-	/*
-	 * Clean up the nmi handler. Do this after the callin and callout sync
-	 * to avoid impact of possible long unregister time.
-	 */
-	if (cpu0_nmi_registered)
-		unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");
-
-	return ret;
+	return 0;
 }
 
 /**
@@ -1373,14 +1258,6 @@ static void __init smp_cpu_index_default
 	}
 }
 
-static void __init smp_get_logical_apicid(void)
-{
-	if (x2apic_mode)
-		cpu0_logical_apicid = apic_read(APIC_LDR);
-	else
-		cpu0_logical_apicid = GET_APIC_LOGICAL_ID(apic_read(APIC_LDR));
-}
-
 void __init smp_prepare_cpus_common(void)
 {
 	unsigned int i;
@@ -1443,8 +1320,6 @@ void __init native_smp_prepare_cpus(unsi
 	/* Setup local timer */
 	x86_init.timers.setup_percpu_clockev();
 
-	smp_get_logical_apicid();
-
 	pr_info("CPU0: ");
 	print_cpu_info(&cpu_data(0));
 
@@ -1752,18 +1627,6 @@ void play_dead_common(void)
 	local_irq_disable();
 }
 
-/**
- * cond_wakeup_cpu0 - Wake up CPU0 if needed.
- *
- * If NMI wants to wake up CPU0, start CPU0.
- */
-void cond_wakeup_cpu0(void)
-{
-	if (smp_processor_id() == 0 && enable_start_cpu0)
-		start_cpu0();
-}
-EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
-
 /*
  * We need to flush the caches before going to sleep, lest we have
  * dirty data in our caches when we come back up.
@@ -1831,8 +1694,6 @@ static inline void mwait_play_dead(void)
 		__monitor(mwait_ptr, 0, 0);
 		mb();
 		__mwait(eax, 0);
-
-		cond_wakeup_cpu0();
 	}
 }
 
@@ -1841,11 +1702,8 @@ void __noreturn hlt_play_dead(void)
 	if (__this_cpu_read(cpu_info.x86) >= 4)
 		wbinvd();
 
-	while (1) {
+	while (1)
 		native_halt();
-
-		cond_wakeup_cpu0();
-	}
 }
 
 void native_play_dead(void)
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -597,10 +597,6 @@ static int acpi_idle_play_dead(struct cp
 			io_idle(cx->address);
 		} else
 			return -ENODEV;
-
-#if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU)
-		cond_wakeup_cpu0();
-#endif
 	}
 
 	/* Never reached */




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531744.827505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lg-0004Ju-7H; Mon, 08 May 2023 19:43:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531744.827505; Mon, 08 May 2023 19:43:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lg-0004Iq-38; Mon, 08 May 2023 19:43:32 +0000
Received: by outflank-mailman (input) for mailman id 531744;
 Mon, 08 May 2023 19:43:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lf-0004GB-0k
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:31 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9b2f47c9-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b2f47c9-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185217.287533369@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575008;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=TIFcpQKDkUA3tA8gujt3zKN0CCqeTM3e8ZIS8vByT1c=;
	b=Wf4mqDxFrL1eWhI5cpdoRdFllQIJr50VWM3iPhe3WsbZ/Iw75OzzMXvNpDiX4/F1RlUka/
	FuaadHz8U/UJgoRuNI+eiueis1okBnBTdsW7K3h/5cJff54TU9cNXi2Eo25oxatOtt8Ped
	uqIwnl5vpuaeMpBtb7hOTKugxduUe8Augp4O7OKVi2nBKgfwEY926yQ5SmqvUi7xIbg97V
	fQtSCNhqHK3HwtnrdayfMk5OYNYQqF7Po1kCsNkOoJVvrU6F54wA+I/A0Pn8m/abs0ww8P
	YoKjgZ69Z9J+x/KjldafuRP3SwFynPh5Zr2/PtLG0H6Hu60gav2dDq7D6S6DFA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575008;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=TIFcpQKDkUA3tA8gujt3zKN0CCqeTM3e8ZIS8vByT1c=;
	b=LftWUnhcnLdQ0KvHaFRp+CDhSV7AuCp+oemkbqukqx/mhhoAfEVrKqqXitjKTnGoiLrrvy
	h0+2Iv+MwcjSr/DA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 01/36] x86/smpboot: Cleanup
 topology_phys_to_logical_pkg()/die()
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:28 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Make topology_phys_to_logical_pkg_die() static as it's only used in
smpboot.c and fixup the kernel-doc warnings for both functions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/topology.h |    3 ---
 arch/x86/kernel/smpboot.c       |   10 ++++++----
 2 files changed, 6 insertions(+), 7 deletions(-)
---

--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -139,7 +139,6 @@ static inline int topology_max_smt_threa
 int topology_update_package_map(unsigned int apicid, unsigned int cpu);
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
-int topology_phys_to_logical_die(unsigned int die, unsigned int cpu);
 bool topology_is_primary_thread(unsigned int cpu);
 bool topology_smt_supported(void);
 #else
@@ -149,8 +148,6 @@ topology_update_package_map(unsigned int
 static inline int
 topology_update_die_map(unsigned int dieid, unsigned int cpu) { return 0; }
 static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
-static inline int topology_phys_to_logical_die(unsigned int die,
-		unsigned int cpu) { return 0; }
 static inline int topology_max_die_per_package(void) { return 1; }
 static inline int topology_max_smt_threads(void) { return 1; }
 static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -288,6 +288,7 @@ bool topology_smt_supported(void)
 
 /**
  * topology_phys_to_logical_pkg - Map a physical package id to a logical
+ * @phys_pkg:	The physical package id to map
  *
  * Returns logical package id or -1 if not found
  */
@@ -304,15 +305,17 @@ int topology_phys_to_logical_pkg(unsigne
 	return -1;
 }
 EXPORT_SYMBOL(topology_phys_to_logical_pkg);
+
 /**
  * topology_phys_to_logical_die - Map a physical die id to logical
+ * @die_id:	The physical die id to map
+ * @cur_cpu:	The CPU for which the mapping is done
  *
  * Returns logical die id or -1 if not found
  */
-int topology_phys_to_logical_die(unsigned int die_id, unsigned int cur_cpu)
+static int topology_phys_to_logical_die(unsigned int die_id, unsigned int cur_cpu)
 {
-	int cpu;
-	int proc_id = cpu_data(cur_cpu).phys_proc_id;
+	int cpu, proc_id = cpu_data(cur_cpu).phys_proc_id;
 
 	for_each_possible_cpu(cpu) {
 		struct cpuinfo_x86 *c = &cpu_data(cpu);
@@ -323,7 +326,6 @@ int topology_phys_to_logical_die(unsigne
 	}
 	return -1;
 }
-EXPORT_SYMBOL(topology_phys_to_logical_die);
 
 /**
  * topology_update_package_map - Update the physical to logical package map




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531743.827502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lf-0004GW-VK; Mon, 08 May 2023 19:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531743.827502; Mon, 08 May 2023 19:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lf-0004GP-Se; Mon, 08 May 2023 19:43:31 +0000
Received: by outflank-mailman (input) for mailman id 531743;
 Mon, 08 May 2023 19:43:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6le-0004GB-BE
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:30 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9a860726-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a860726-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508181633.089804905@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575007;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc; bh=cSONB8rAwM/DhP8pLyA+5ZPUbC2lOuuQwaVdJRn5WDM=;
	b=TUKnHdyKCm2jhBxD5IlUGRPEG/QaxWUEEk5j+F5TiwGQh75MzfKnWvvga1ogV0FWpqsAk0
	Z/VqPwKzVgozvsmFgeZHc6fKtgAWVGs7xGme4vq54PLJoTakzMDfDNVCbcO2rny0q4HnR6
	qlFXb43Qt1L8lTxJDtVVOKBuzxeXABCLtAFoVNNIdEV1p8ow/ukbkpiR2uIVB/FYpjA6p3
	h6hKX4cZFsqIirm+tNd9G+ACRA1lcKi+gtx7U3umktHuz9juvrtG2mgbwLKrM2SK5QyvS3
	r9jvL2zUpq3y0guhjpTb2zQUgu9MH/0dN+oTlFfspWYXJjDrCu5rao9nchjzhw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575007;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc; bh=cSONB8rAwM/DhP8pLyA+5ZPUbC2lOuuQwaVdJRn5WDM=;
	b=i5YP5va30wPZDaaNSG4+BZNoSkYS91jOjq0F96kx0dzX5lZgVd2tGOGcKaXMK8erIbX4td
	IGoM8wVZMNo8tHAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 00/36] cpu/hotplug, x86: Reworked parallel CPU bringup
Date: Mon,  8 May 2023 21:43:26 +0200 (CEST)

Hi!

This is version 3 of the reworked parallel bringup series. Version 2 can be
found here:

   https://lore.kernel.org/lkml/20230504185733.126511787@linutronix.de

This is just a quick respin to address the following details:

  1) Drop the two extended topology leaf patches as they are no longer
     relevant (Andrew Cooper)

  2) Make the announce_cpu() fixup work for real (Michael Kelley)

Other than that there are no changes and the other details are all the same
as in V2.

Thanks,

	tglx







From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531747.827542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lk-0005Fx-3L; Mon, 08 May 2023 19:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531747.827542; Mon, 08 May 2023 19:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lj-0005F8-Uo; Mon, 08 May 2023 19:43:35 +0000
Received: by outflank-mailman (input) for mailman id 531747;
 Mon, 08 May 2023 19:43:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6li-0004GB-EW
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:34 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9dfae0e5-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9dfae0e5-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185217.458393998@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575013;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=YVvAw0WAsVmopE5OIHIsJcpjHcWT9bVzoHoSegD/6OE=;
	b=G8jY8noRcocsOMuudUmnigMS5g2D2j/illqfnrAKMDSwZNAyNByW7W91CB5A0qyup8xoLU
	ju1ecTOFYCXG1ZOzwfeLNIWu4WEoDoD6p+1D+AqZKyOSugeGaOehdmkoA/nlGtcew2jUOh
	DzAZFh9YTiogntfcZ4P9+UPNch+6emIC0ae/WUtPo0aud1Iu4QPNUMUDCBmri2TlGBp9jT
	AA6L+H6VK8adYXgBNCf9r3PVCVhzwleTI15/nfJTmsMFQVq+7axqKZpItAeazqI90AqRIf
	nMTxkJ3sBvJwkZTMIR7xYS+DUgVAJRQlg/jUyMsipvPSuiu4phOaq20wyRyl4g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575013;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=YVvAw0WAsVmopE5OIHIsJcpjHcWT9bVzoHoSegD/6OE=;
	b=s+ViDYzSQVsqYkVETBJLWwXeFyinDS8d3eNTrYKk8dlzptBDC/K8uzHkO4TXLBIVOdRrGS
	hKy0R6/dTZ++mCAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch v3 04/36] x86/smpboot: Rename start_cpu0() to soft_restart_cpu()
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:33 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

This is used in the SEV play_dead() implementation to re-online CPUs. But
that has nothing to do with CPU0.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/cpu.h   |    2 +-
 arch/x86/kernel/callthunks.c |    2 +-
 arch/x86/kernel/head_32.S    |   10 +++++-----
 arch/x86/kernel/head_64.S    |   10 +++++-----
 arch/x86/kernel/sev.c        |    2 +-
 5 files changed, 13 insertions(+), 13 deletions(-)
---

--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -30,7 +30,7 @@ struct x86_cpu {
 #ifdef CONFIG_HOTPLUG_CPU
 extern int arch_register_cpu(int num);
 extern void arch_unregister_cpu(int);
-extern void start_cpu0(void);
+extern void soft_restart_cpu(void);
 #ifdef CONFIG_DEBUG_HOTPLUG_CPU0
 extern int _debug_hotplug_cpu(int cpu, int action);
 #endif
--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -134,7 +134,7 @@ static bool skip_addr(void *dest)
 	if (dest == ret_from_fork)
 		return true;
 #ifdef CONFIG_HOTPLUG_CPU
-	if (dest == start_cpu0)
+	if (dest == soft_restart_cpu)
 		return true;
 #endif
 #ifdef CONFIG_FUNCTION_TRACER
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -140,16 +140,16 @@ SYM_CODE_END(startup_32)
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
- * up already except stack. We just set up stack here. Then call
- * start_secondary().
+ * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
+ * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
+ * unplug. Everything is set up already except the stack.
  */
-SYM_FUNC_START(start_cpu0)
+SYM_FUNC_START(soft_restart_cpu)
 	movl initial_stack, %ecx
 	movl %ecx, %esp
 	call *(initial_code)
 1:	jmp 1b
-SYM_FUNC_END(start_cpu0)
+SYM_FUNC_END(soft_restart_cpu)
 #endif
 
 /*
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -377,11 +377,11 @@ SYM_CODE_END(secondary_startup_64)
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
- * up already except stack. We just set up stack here. Then call
- * start_secondary() via .Ljump_to_C_code.
+ * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
+ * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
+ * unplug. Everything is set up already except the stack.
  */
-SYM_CODE_START(start_cpu0)
+SYM_CODE_START(soft_restart_cpu)
 	ANNOTATE_NOENDBR
 	UNWIND_HINT_END_OF_STACK
 
@@ -390,7 +390,7 @@ SYM_CODE_START(start_cpu0)
 	movq	TASK_threadsp(%rcx), %rsp
 
 	jmp	.Ljump_to_C_code
-SYM_CODE_END(start_cpu0)
+SYM_CODE_END(soft_restart_cpu)
 #endif
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1328,7 +1328,7 @@ static void sev_es_play_dead(void)
 	 * If we get here, the VCPU was woken up again. Jump to CPU
 	 * startup code to get it back online.
 	 */
-	start_cpu0();
+	soft_restart_cpu();
 }
 #else  /* CONFIG_HOTPLUG_CPU */
 #define sev_es_play_dead	native_play_dead




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531748.827552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6ll-0005XG-CK; Mon, 08 May 2023 19:43:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531748.827552; Mon, 08 May 2023 19:43:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6ll-0005X7-8y; Mon, 08 May 2023 19:43:37 +0000
Received: by outflank-mailman (input) for mailman id 531748;
 Mon, 08 May 2023 19:43:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lk-0004GB-CL
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:36 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9ef6a7fd-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ef6a7fd-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185217.511579580@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575015;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=SosWLf7ByWh3YMm1K9ndiB9cuSmcQMV7NcxWhF9A3kI=;
	b=RxIY2+NGJMVR7IR/st90ytSTmlXQsFTiFea99c/lKKxj1EB1q7hquMHWhtzdUNbWMNcbrM
	CnwJRgUBfMd2jL4iqtNC/N/c0jTglRRfq2OxvXjlOeJUe9UV5oIypcCkWD/al9xzufAzFi
	MPKvMUBIHPUjsOZ5nZYGDi0di8/Lx5vCXWMf6RqNjjuVGFMedci3sMZdVSB+S1ZFC6xrJb
	okqsHQtnu340k8mccnfdU1E4uotcI24/fyWVoduhFx1fhit6P/t9BzkXewcwomKJxshvcQ
	bS6VrEc7RyZlHtIKOWsn9nV7oGXuQ5C8JHSKVY1pW+2LCdY/5v0m9cy4sSFTNw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575015;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=SosWLf7ByWh3YMm1K9ndiB9cuSmcQMV7NcxWhF9A3kI=;
	b=fN1ysvSc4RZkQU6RxlQ3edeUZTJeyhE4L52lxIV6AQR8CpkXxixTOvSAkXm7BXO2oN4A4n
	CA7Uf5KlMR0J2bDQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 05/36] x86/topology: Remove CPU0 hotplug option
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:34 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

This was introduced together with commit e1c467e69040 ("x86, hotplug: Wake
up CPU0 via NMI instead of INIT, SIPI, SIPI") to eventually support
physical hotplug of CPU0:

 "We'll change this code in the future to wake up hard offlined CPU0 if
  real platform and request are available."

11 years later this has not happened and physical hotplug is not officially
supported. Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 Documentation/admin-guide/kernel-parameters.txt |   14 ---
 Documentation/core-api/cpu_hotplug.rst          |   13 ---
 arch/x86/Kconfig                                |   43 ----------
 arch/x86/include/asm/cpu.h                      |    3 
 arch/x86/kernel/topology.c                      |   98 ------------------------
 arch/x86/power/cpu.c                            |   37 ---------
 6 files changed, 6 insertions(+), 202 deletions(-)
---

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -818,20 +818,6 @@
 			Format:
 			<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
 
-	cpu0_hotplug	[X86] Turn on CPU0 hotplug feature when
-			CONFIG_BOOTPARAM_HOTPLUG_CPU0 is off.
-			Some features depend on CPU0. Known dependencies are:
-			1. Resume from suspend/hibernate depends on CPU0.
-			Suspend/hibernate will fail if CPU0 is offline and you
-			need to online CPU0 before suspend/hibernate.
-			2. PIC interrupts also depend on CPU0. CPU0 can't be
-			removed if a PIC interrupt is detected.
-			It's said poweroff/reboot may depend on CPU0 on some
-			machines although I haven't seen such issues so far
-			after CPU0 is offline on a few tested machines.
-			If the dependencies are under your control, you can
-			turn on cpu0_hotplug.
-
 	cpuidle.off=1	[CPU_IDLE]
 			disable the cpuidle sub-system
 
--- a/Documentation/core-api/cpu_hotplug.rst
+++ b/Documentation/core-api/cpu_hotplug.rst
@@ -127,17 +127,8 @@ Once the CPU is shutdown, it will be rem
  $ echo 1 > /sys/devices/system/cpu/cpu4/online
  smpboot: Booting Node 0 Processor 4 APIC 0x1
 
-The CPU is usable again. This should work on all CPUs. CPU0 is often special
-and excluded from CPU hotplug. On X86 the kernel option
-*CONFIG_BOOTPARAM_HOTPLUG_CPU0* has to be enabled in order to be able to
-shutdown CPU0. Alternatively the kernel command option *cpu0_hotplug* can be
-used. Some known dependencies of CPU0:
-
-* Resume from hibernate/suspend. Hibernate/suspend will fail if CPU0 is offline.
-* PIC interrupts. CPU0 can't be removed if a PIC interrupt is detected.
-
-Please let Fenghua Yu <fenghua.yu@intel.com> know if you find any dependencies
-on CPU0.
+The CPU is usable again. This should work on all CPUs, but CPU0 is often special
+and excluded from CPU hotplug.
 
 The CPU hotplug coordination
 ============================
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2305,49 +2305,6 @@ config HOTPLUG_CPU
 	def_bool y
 	depends on SMP
 
-config BOOTPARAM_HOTPLUG_CPU0
-	bool "Set default setting of cpu0_hotpluggable"
-	depends on HOTPLUG_CPU
-	help
-	  Set whether default state of cpu0_hotpluggable is on or off.
-
-	  Say Y here to enable CPU0 hotplug by default. If this switch
-	  is turned on, there is no need to give cpu0_hotplug kernel
-	  parameter and the CPU0 hotplug feature is enabled by default.
-
-	  Please note: there are two known CPU0 dependencies if you want
-	  to enable the CPU0 hotplug feature either by this switch or by
-	  cpu0_hotplug kernel parameter.
-
-	  First, resume from hibernate or suspend always starts from CPU0.
-	  So hibernate and suspend are prevented if CPU0 is offline.
-
-	  Second dependency is PIC interrupts always go to CPU0. CPU0 can not
-	  offline if any interrupt can not migrate out of CPU0. There may
-	  be other CPU0 dependencies.
-
-	  Please make sure the dependencies are under your control before
-	  you enable this feature.
-
-	  Say N if you don't want to enable CPU0 hotplug feature by default.
-	  You still can enable the CPU0 hotplug feature at boot by kernel
-	  parameter cpu0_hotplug.
-
-config DEBUG_HOTPLUG_CPU0
-	def_bool n
-	prompt "Debug CPU0 hotplug"
-	depends on HOTPLUG_CPU
-	help
-	  Enabling this option offlines CPU0 (if CPU0 can be offlined) as
-	  soon as possible and boots up userspace with CPU0 offlined. User
-	  can online CPU0 back after boot time.
-
-	  To debug CPU0 hotplug, you need to enable CPU0 offline/online
-	  feature by either turning on CONFIG_BOOTPARAM_HOTPLUG_CPU0 during
-	  compilation or giving cpu0_hotplug kernel parameter at boot.
-
-	  If unsure, say N.
-
 config COMPAT_VDSO
 	def_bool n
 	prompt "Disable the 32-bit vDSO (needed for glibc 2.3.3)"
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -31,9 +31,6 @@ struct x86_cpu {
 extern int arch_register_cpu(int num);
 extern void arch_unregister_cpu(int);
 extern void soft_restart_cpu(void);
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-extern int _debug_hotplug_cpu(int cpu, int action);
-#endif
 #endif
 
 extern void ap_init_aperfmperf(void);
--- a/arch/x86/kernel/topology.c
+++ b/arch/x86/kernel/topology.c
@@ -38,102 +38,12 @@
 static DEFINE_PER_CPU(struct x86_cpu, cpu_devices);
 
 #ifdef CONFIG_HOTPLUG_CPU
-
-#ifdef CONFIG_BOOTPARAM_HOTPLUG_CPU0
-static int cpu0_hotpluggable = 1;
-#else
-static int cpu0_hotpluggable;
-static int __init enable_cpu0_hotplug(char *str)
-{
-	cpu0_hotpluggable = 1;
-	return 1;
-}
-
-__setup("cpu0_hotplug", enable_cpu0_hotplug);
-#endif
-
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-/*
- * This function offlines a CPU as early as possible and allows userspace to
- * boot up without the CPU. The CPU can be onlined back by user after boot.
- *
- * This is only called for debugging CPU offline/online feature.
- */
-int _debug_hotplug_cpu(int cpu, int action)
-{
-	int ret;
-
-	if (!cpu_is_hotpluggable(cpu))
-		return -EINVAL;
-
-	switch (action) {
-	case 0:
-		ret = remove_cpu(cpu);
-		if (!ret)
-			pr_info("DEBUG_HOTPLUG_CPU0: CPU %u is now offline\n", cpu);
-		else
-			pr_debug("Can't offline CPU%d.\n", cpu);
-		break;
-	case 1:
-		ret = add_cpu(cpu);
-		if (ret)
-			pr_debug("Can't online CPU%d.\n", cpu);
-
-		break;
-	default:
-		ret = -EINVAL;
-	}
-
-	return ret;
-}
-
-static int __init debug_hotplug_cpu(void)
+int arch_register_cpu(int cpu)
 {
-	_debug_hotplug_cpu(0, 0);
-	return 0;
-}
-
-late_initcall_sync(debug_hotplug_cpu);
-#endif /* CONFIG_DEBUG_HOTPLUG_CPU0 */
-
-int arch_register_cpu(int num)
-{
-	struct cpuinfo_x86 *c = &cpu_data(num);
-
-	/*
-	 * Currently CPU0 is only hotpluggable on Intel platforms. Other
-	 * vendors can add hotplug support later.
-	 * Xen PV guests don't support CPU0 hotplug at all.
-	 */
-	if (c->x86_vendor != X86_VENDOR_INTEL ||
-	    cpu_feature_enabled(X86_FEATURE_XENPV))
-		cpu0_hotpluggable = 0;
-
-	/*
-	 * Two known BSP/CPU0 dependencies: Resume from suspend/hibernate
-	 * depends on BSP. PIC interrupts depend on BSP.
-	 *
-	 * If the BSP dependencies are under control, one can tell kernel to
-	 * enable BSP hotplug. This basically adds a control file and
-	 * one can attempt to offline BSP.
-	 */
-	if (num == 0 && cpu0_hotpluggable) {
-		unsigned int irq;
-		/*
-		 * We won't take down the boot processor on i386 if some
-		 * interrupts only are able to be serviced by the BSP in PIC.
-		 */
-		for_each_active_irq(irq) {
-			if (!IO_APIC_IRQ(irq) && irq_has_action(irq)) {
-				cpu0_hotpluggable = 0;
-				break;
-			}
-		}
-	}
-	if (num || cpu0_hotpluggable)
-		per_cpu(cpu_devices, num).cpu.hotpluggable = 1;
+	struct x86_cpu *xc = per_cpu_ptr(&cpu_devices, cpu);
 
-	return register_cpu(&per_cpu(cpu_devices, num).cpu, num);
+	xc->cpu.hotpluggable = cpu > 0;
+	return register_cpu(&xc->cpu, cpu);
 }
 EXPORT_SYMBOL(arch_register_cpu);
 
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -351,43 +351,6 @@ static int bsp_pm_callback(struct notifi
 	case PM_HIBERNATION_PREPARE:
 		ret = bsp_check();
 		break;
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-	case PM_RESTORE_PREPARE:
-		/*
-		 * When system resumes from hibernation, online CPU0 because
-		 * 1. it's required for resume and
-		 * 2. the CPU was online before hibernation
-		 */
-		if (!cpu_online(0))
-			_debug_hotplug_cpu(0, 1);
-		break;
-	case PM_POST_RESTORE:
-		/*
-		 * When a resume really happens, this code won't be called.
-		 *
-		 * This code is called only when user space hibernation software
-		 * prepares for snapshot device during boot time. So we just
-		 * call _debug_hotplug_cpu() to restore to CPU0's state prior to
-		 * preparing the snapshot device.
-		 *
-		 * This works for normal boot case in our CPU0 hotplug debug
-		 * mode, i.e. CPU0 is offline and user mode hibernation
-		 * software initializes during boot time.
-		 *
-		 * If CPU0 is online and user application accesses snapshot
-		 * device after boot time, this will offline CPU0 and user may
-		 * see different CPU0 state before and after accessing
-		 * the snapshot device. But hopefully this is not a case when
-		 * user debugging CPU0 hotplug. Even if users hit this case,
-		 * they can easily online CPU0 back.
-		 *
-		 * To simplify this debug code, we only consider normal boot
-		 * case. Otherwise we need to remember CPU0's state and restore
-		 * to that state and resolve racy conditions etc.
-		 */
-		_debug_hotplug_cpu(0, 0);
-		break;
-#endif
 	default:
 		break;
 	}




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531746.827527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6li-0004nA-Sp; Mon, 08 May 2023 19:43:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531746.827527; Mon, 08 May 2023 19:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6li-0004me-Mi; Mon, 08 May 2023 19:43:34 +0000
Received: by outflank-mailman (input) for mailman id 531746;
 Mon, 08 May 2023 19:43:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lg-0004Y5-Ts
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:32 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9c161b0a-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:43:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c161b0a-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185217.347553670@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575010;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=S9LsovJrwK7U1/6r/D/tJ1OdnJxm0UpeZcX2eXUL15I=;
	b=C5NEL7Sc3xFfKL0CxGkuq3u1SI/3MBmp8kg2JO4gnaU2mJzbEGpxk1b+m4QK7KTPpH4pVj
	Je5YQoIAQw415ZdCKvoZrpOxEKSNL3+6l5SO/FrWT4Ni5Ew2CIek1ECZ03FWVa1H7bKFIY
	LtzTqoDPPuHjD06VN2npWHg8rtixxSjX0ftouZEEd0Sf7pS2qYVGbOuU6h7xDtyTYslukj
	RHbaiW/MH2YP0HMjKAgYUFdeqyVOkzsdpwguiZgepsxXygbQqBa+QVNMrwxQBM3ZdZlm5u
	Q1RjNUaXEpp4826qTOW50h9LZGHuPZq497+v7TvVBOTfkKOpxxTiLHhDN67CUw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575010;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=S9LsovJrwK7U1/6r/D/tJ1OdnJxm0UpeZcX2eXUL15I=;
	b=zSCuoKEKQteV6nPjwNnN8p1R7pY/jEKhW4RIT67a4neXwQLEhCVtxH6R7aW9FeOFUi4146
	WzFetV3Y74q63+Bg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 02/36] cpu/hotplug: Mark arch_disable_smp_support() and
 bringup_nonboot_cpus() __init
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:29 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

No point in keeping them around after init, so mark them __init.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/kernel/smpboot.c |    4 ++--
 kernel/cpu.c              |    2 +-
 kernel/smp.c              |    2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)
---

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1269,9 +1269,9 @@ int native_cpu_up(unsigned int cpu, stru
 }
 
 /**
- * arch_disable_smp_support() - disables SMP support for x86 at runtime
+ * arch_disable_smp_support() - Disables SMP support for x86 at boottime
  */
-void arch_disable_smp_support(void)
+void __init arch_disable_smp_support(void)
 {
 	disable_ioapic_support();
 }
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1502,7 +1502,7 @@ int bringup_hibernate_cpu(unsigned int s
 	return 0;
 }
 
-void bringup_nonboot_cpus(unsigned int setup_max_cpus)
+void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
 {
 	unsigned int cpu;
 
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -892,7 +892,7 @@ EXPORT_SYMBOL(setup_max_cpus);
  * SMP mode to <NUM>.
  */
 
-void __weak arch_disable_smp_support(void) { }
+void __weak __init arch_disable_smp_support(void) { }
 
 static int __init nosmp(char *str)
 {




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531745.827522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6li-0004ks-JA; Mon, 08 May 2023 19:43:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531745.827522; Mon, 08 May 2023 19:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6li-0004kd-Ez; Mon, 08 May 2023 19:43:34 +0000
Received: by outflank-mailman (input) for mailman id 531745;
 Mon, 08 May 2023 19:43:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lg-0004GB-Sv
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:32 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9d08aa25-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d08aa25-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185217.405187204@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575012;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=HvbbbIiCrteaI1bndI/LX3oIuQfCpfxndnpojb5RiKc=;
	b=BfN8q4YGZrvvr3hwXOfVCQXAM0cSaY1SEoSphmNTsut/rUAni3XtelvN4CHoNkC2+34Ymi
	TExugThNdfAh5Fp1/heIMO4QUnZPoX/5/+WK/OdLeyR3RF9vIZeGhZsFY4OLhi5HdKiIER
	/YIJWQAT9QzRFGpqH7CAPTxY9UkNRCpCd4alyqj4+UmOC+6AxHr5xWXJsqeSXPmKwFhv1A
	ubCJW6/Egd3JKzrvz6jHP0y5xHT+UfGDn1rLC92NRh5X9Mx/ylbzv+ZZQOLMPjHx0w/hZY
	PLLJPmQ9Y1thsJxqAo8DO9T8VrZNf4ZCHtIxyjWM81zadG9cxhkbFRB3VFi7xA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575012;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=HvbbbIiCrteaI1bndI/LX3oIuQfCpfxndnpojb5RiKc=;
	b=TGUqtBVLw37zc5671ZnaxyCwD1VN/ytP+JSMajykmRFyOCcIZ2eBHNHYynrijOrC/OQEpW
	IDlSUjUrSbEGPyCw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 03/36] x86/smpboot: Avoid pointless delay calibration if
 TSC is synchronized
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:31 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

When the TSC is synchronized across sockets, there is no reason to
calibrate the delay loop for the first CPU which comes up on a socket.

Just reuse the existing calibration value.

This removes 100ms of pointlessly wasted time from CPU hotplug per socket.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/kernel/smpboot.c |   38 ++++++++++++++++++++++++--------------
 arch/x86/kernel/tsc.c     |   20 ++++++++++++++++----
 2 files changed, 40 insertions(+), 18 deletions(-)
---

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -178,10 +178,7 @@ static void smp_callin(void)
 	 */
 	apic_ap_setup();
 
-	/*
-	 * Save our processor parameters. Note: this information
-	 * is needed for clock calibration.
-	 */
+	/* Save our processor parameters. */
 	smp_store_cpu_info(cpuid);
 
 	/*
@@ -192,14 +189,6 @@ static void smp_callin(void)
 
 	ap_init_aperfmperf();
 
-	/*
-	 * Get our bogomips.
-	 * Update loops_per_jiffy in cpu_data. Previous call to
-	 * smp_store_cpu_info() stored a value that is close but not as
-	 * accurate as the value just calculated.
-	 */
-	calibrate_delay();
-	cpu_data(cpuid).loops_per_jiffy = loops_per_jiffy;
 	pr_debug("Stack at about %p\n", &cpuid);
 
 	wmb();
@@ -212,8 +201,24 @@ static void smp_callin(void)
 	cpumask_set_cpu(cpuid, cpu_callin_mask);
 }
 
+static void ap_calibrate_delay(void)
+{
+	/*
+	 * Calibrate the delay loop and update loops_per_jiffy in cpu_data.
+	 * smp_store_cpu_info() stored a value that is close but not as
+	 * accurate as the value just calculated.
+	 *
+	 * As this is invoked after the TSC synchronization check,
+	 * calibrate_delay_is_known() will skip the calibration routine
+	 * when TSC is synchronized across sockets.
+	 */
+	calibrate_delay();
+	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
+}
+
 static int cpu0_logical_apicid;
 static int enable_start_cpu0;
+
 /*
  * Activate a secondary processor.
  */
@@ -240,10 +245,15 @@ static void notrace start_secondary(void
 
 	/* otherwise gcc will move up smp_processor_id before the cpu_init */
 	barrier();
+	/* Check TSC synchronization with the control CPU: */
+	check_tsc_sync_target();
+
 	/*
-	 * Check TSC synchronization with the boot CPU:
+	 * Calibrate the delay loop after the TSC synchronization check.
+	 * This allows to skip the calibration when TSC is synchronized
+	 * across sockets.
 	 */
-	check_tsc_sync_target();
+	ap_calibrate_delay();
 
 	speculative_store_bypass_ht_init();
 
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1598,10 +1598,7 @@ void __init tsc_init(void)
 
 #ifdef CONFIG_SMP
 /*
- * If we have a constant TSC and are using the TSC for the delay loop,
- * we can skip clock calibration if another cpu in the same socket has already
- * been calibrated. This assumes that CONSTANT_TSC applies to all
- * cpus in the socket - this should be a safe assumption.
+ * Check whether existing calibration data can be reused.
  */
 unsigned long calibrate_delay_is_known(void)
 {
@@ -1609,6 +1606,21 @@ unsigned long calibrate_delay_is_known(v
 	int constant_tsc = cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC);
 	const struct cpumask *mask = topology_core_cpumask(cpu);
 
+	/*
+	 * If TSC has constant frequency and TSC is synchronized across
+	 * sockets then reuse CPU0 calibration.
+	 */
+	if (constant_tsc && !tsc_unstable)
+		return cpu_data(0).loops_per_jiffy;
+
+	/*
+	 * If TSC has constant frequency and TSC is not synchronized across
+	 * sockets and this is not the first CPU in the socket, then reuse
+	 * the calibration value of an already online CPU on that socket.
+	 *
+	 * This assumes that CONSTANT_TSC is consistent for all CPUs in a
+	 * socket.
+	 */
 	if (!constant_tsc || !mask)
 		return 0;
 




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531751.827582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lr-0006V7-HB; Mon, 08 May 2023 19:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531751.827582; Mon, 08 May 2023 19:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lr-0006Uv-Bm; Mon, 08 May 2023 19:43:43 +0000
Received: by outflank-mailman (input) for mailman id 531751;
 Mon, 08 May 2023 19:43:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lq-0004GB-Gf
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:42 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a2c93318-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2c93318-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185217.725507622@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575021;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=l2ZAdiTSxc4Ph8TyhAnDuEUhjcNUoY+vPQjmK9rgSOw=;
	b=gaT1OuAZ/GfjuakxTEJ2KYFHHzuQGSX55lDDjTfN7H14JbYbNLMM4m5KEMtJmbQjf5dGd8
	u6xaJZJ4RDdDY8AEhdNSWuCFNXk+TTJsi8/yzR94kBccocw9u25EGL8O6o4TOYanqr+EXt
	NhIFRD+lDCv3rQ1zF9CsAlmwrDH3WUd0arfpWQUXnSr9BmI2oikrsACB3KufetuUHeTXx6
	Dfmr70iicTai9wWJcUCYXfrT+e3fQNAy7f7bAiHZh1hN8Pw4PKOZApdT4wEEG7OYby4pO/
	J0L7EIGZ0mmjG5HI8U9pfnUs9mQdXGK6TXfo6ya7SFw/bbUl1OddNzvQqalhdg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575021;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=l2ZAdiTSxc4Ph8TyhAnDuEUhjcNUoY+vPQjmK9rgSOw=;
	b=DHa8pWb6DR3SZOX4vG2QWSchsl3pME0lzD5CYQrhDWXg/TvCY5mWEc+2tWrjc+r9aOnyjo
	ukEASnIuuQDnS6AQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 09/36] x86/smpboot: Get rid of cpu_init_secondary()
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:41 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The synchronization of the AP with the control CPU is an SMP boot problem
and has nothing to do with cpu_init().

Open code cpu_init_secondary() in start_secondary() and move
wait_for_master_cpu() into the SMP boot code.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/processor.h |    1 -
 arch/x86/kernel/cpu/common.c     |   27 ---------------------------
 arch/x86/kernel/smpboot.c        |   24 +++++++++++++++++++-----
 3 files changed, 19 insertions(+), 33 deletions(-)
---

--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -551,7 +551,6 @@ extern void switch_gdt_and_percpu_base(i
 extern void load_direct_gdt(int);
 extern void load_fixmap_gdt(int);
 extern void cpu_init(void);
-extern void cpu_init_secondary(void);
 extern void cpu_init_exception_handling(void);
 extern void cr4_init(void);
 
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2123,19 +2123,6 @@ static void dbg_restore_debug_regs(void)
 #define dbg_restore_debug_regs()
 #endif /* ! CONFIG_KGDB */
 
-static void wait_for_master_cpu(int cpu)
-{
-#ifdef CONFIG_SMP
-	/*
-	 * wait for ACK from master CPU before continuing
-	 * with AP initialization
-	 */
-	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
-	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
-		cpu_relax();
-#endif
-}
-
 static inline void setup_getcpu(int cpu)
 {
 	unsigned long cpudata = vdso_encode_cpunode(cpu, early_cpu_to_node(cpu));
@@ -2239,8 +2226,6 @@ void cpu_init(void)
 	struct task_struct *cur = current;
 	int cpu = raw_smp_processor_id();
 
-	wait_for_master_cpu(cpu);
-
 	ucode_cpu_init(cpu);
 
 #ifdef CONFIG_NUMA
@@ -2293,18 +2278,6 @@ void cpu_init(void)
 	load_fixmap_gdt(cpu);
 }
 
-#ifdef CONFIG_SMP
-void cpu_init_secondary(void)
-{
-	/*
-	 * Relies on the BP having set-up the IDT tables, which are loaded
-	 * on this CPU in cpu_init_exception_handling().
-	 */
-	cpu_init_exception_handling();
-	cpu_init();
-}
-#endif
-
 #ifdef CONFIG_MICROCODE_LATE_LOADING
 /**
  * store_cpu_caps() - Store a snapshot of CPU capabilities
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -220,6 +220,17 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
+static void wait_for_master_cpu(int cpu)
+{
+	/*
+	 * Wait for release by control CPU before continuing with AP
+	 * initialization.
+	 */
+	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
+	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
+		cpu_relax();
+}
+
 /*
  * Activate a secondary processor.
  */
@@ -237,13 +248,16 @@ static void notrace start_secondary(void
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 #endif
+	cpu_init_exception_handling();
+
 	/*
-	 * Sync point with wait_cpu_initialized(). Before proceeding through
-	 * cpu_init(), the AP will call wait_for_master_cpu() which sets its
-	 * own bit in cpu_initialized_mask and then waits for the BSP to set
-	 * its bit in cpu_callout_mask to release it.
+	 * Sync point with wait_cpu_initialized(). Sets AP in
+	 * cpu_initialized_mask and then waits for the control CPU
+	 * to release it.
 	 */
-	cpu_init_secondary();
+	wait_for_master_cpu(raw_smp_processor_id());
+
+	cpu_init();
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531752.827592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6ls-0006m6-R7; Mon, 08 May 2023 19:43:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531752.827592; Mon, 08 May 2023 19:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6ls-0006ks-LP; Mon, 08 May 2023 19:43:44 +0000
Received: by outflank-mailman (input) for mailman id 531752;
 Mon, 08 May 2023 19:43:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lr-0004Y5-58
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:43 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a1cf5cbf-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:43:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1cf5cbf-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185217.671595388@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575020;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=0q1aETu6bFeN3kD2/0xHoCf/51ZXAzUMjn9pP2AnX0s=;
	b=cVh3jWdyX+zcrRH936wg6/kt9IRtzA5ofxglmadS4D7yqxXjL+vTnFNlQsOMEiiXu8XUFn
	oOgQ7PQuxs7incQUDCiP1F1nybF0Kpl9Rx7yedKtJv18/UtElco0r+y+ebysJ8v8BmVIMf
	tzuEGK9VFeV1JAjK0Rj0BmHngpw8FuJJSGigGCnANdFiBVVRiGWMdyHiHbTp9127mH2nr9
	ExHcBiTrkoX8YaNmTVJycyoKtdznubwLMVJnHxRoPZNsXHvb5tjynNcMh7iWyq39QScM1f
	ydIGmovKSBPRr3E0k5xQldSo30IfRQmhDh/Bo/a3TW3dGSYKRC7eC/Me28ck1Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575020;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=0q1aETu6bFeN3kD2/0xHoCf/51ZXAzUMjn9pP2AnX0s=;
	b=wG2ubOdBEVCcWDlSU4MGXW9jUu84w7IsaGsYdweGau1DadNC3D/VSWHEhMpzsnuTaDB8jW
	jkp8XTT9SW4h9uCw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into separate
 phases and document them
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:39 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

There are four logical parts to what native_cpu_up() does on the BSP (or
on the controlling CPU for a later hotplug):

 1) Wake the AP by sending the INIT/SIPI/SIPI sequence.

 2) Wait for the AP to make it as far as wait_for_master_cpu() which
    sets that CPU's bit in cpu_initialized_mask, then sets the bit in
    cpu_callout_mask to let the AP proceed through cpu_init().

 3) Wait for the AP to finish cpu_init() and get as far as the
    smp_callin() call, which sets that CPU's bit in cpu_callin_mask.

 4) Perform the TSC synchronization and wait for the AP to actually
    mark itself online in cpu_online_mask.

In preparation for allowing these phases to operate in parallel on multiple
APs, split them out into separate functions and document the interactions
a little more clearly in both the BSP and AP code paths.

No functional change intended.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/kernel/smpboot.c |  187 +++++++++++++++++++++++++++++-----------------
 1 file changed, 121 insertions(+), 66 deletions(-)
---

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -193,6 +193,10 @@ static void smp_callin(void)
 
 	wmb();
 
+	/*
+	 * This runs the AP through all the cpuhp states to its target
+	 * state (CPUHP_ONLINE in the case of serial bringup).
+	 */
 	notify_cpu_starting(cpuid);
 
 	/*
@@ -233,14 +237,31 @@ static void notrace start_secondary(void
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 #endif
+	/*
+	 * Sync point with wait_cpu_initialized(). Before proceeding through
+	 * cpu_init(), the AP will call wait_for_master_cpu() which sets its
+	 * own bit in cpu_initialized_mask and then waits for the BSP to set
+	 * its bit in cpu_callout_mask to release it.
+	 */
 	cpu_init_secondary();
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
+
+	/*
+	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
+	 * but just sets the bit to let the controlling CPU (BSP) know that
+	 * it's got this far.
+	 */
 	smp_callin();
 
-	/* otherwise gcc will move up smp_processor_id before the cpu_init */
+	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
 	barrier();
-	/* Check TSC synchronization with the control CPU: */
+
+	/*
+	 * Check TSC synchronization with the control CPU, which will do
+	 * its part of this from wait_cpu_online(), making it an implicit
+	 * synchronization point.
+	 */
 	check_tsc_sync_target();
 
 	/*
@@ -259,6 +280,7 @@ static void notrace start_secondary(void
 	 * half valid vector space.
 	 */
 	lock_vector_lock();
+	/* Sync point with do_wait_cpu_online() */
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
@@ -981,17 +1003,13 @@ int common_cpu_up(unsigned int cpu, stru
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
- * Returns zero if CPU booted OK, else error code from
+ * Returns zero if startup was successfully sent, else error code from
  * ->wakeup_secondary_cpu.
  */
 static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
-	/* start_ip had better be page-aligned! */
 	unsigned long start_ip = real_mode_header->trampoline_start;
 
-	unsigned long boot_error = 0;
-	unsigned long timeout;
-
 #ifdef CONFIG_X86_64
 	/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
 	if (apic->wakeup_secondary_cpu_64)
@@ -1048,60 +1066,89 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
-		boot_error = apic->wakeup_secondary_cpu_64(apicid, start_ip);
+		return apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
-		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
-	else
-		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
+		return apic->wakeup_secondary_cpu(apicid, start_ip);
 
-	if (!boot_error) {
-		/*
-		 * Wait 10s total for first sign of life from AP
-		 */
-		boot_error = -1;
-		timeout = jiffies + 10*HZ;
-		while (time_before(jiffies, timeout)) {
-			if (cpumask_test_cpu(cpu, cpu_initialized_mask)) {
-				/*
-				 * Tell AP to proceed with initialization
-				 */
-				cpumask_set_cpu(cpu, cpu_callout_mask);
-				boot_error = 0;
-				break;
-			}
-			schedule();
-		}
-	}
+	return wakeup_secondary_cpu_via_init(apicid, start_ip);
+}
 
-	if (!boot_error) {
-		/*
-		 * Wait till AP completes initial initialization
-		 */
-		while (!cpumask_test_cpu(cpu, cpu_callin_mask)) {
-			/*
-			 * Allow other tasks to run while we wait for the
-			 * AP to come online. This also gives a chance
-			 * for the MTRR work(triggered by the AP coming online)
-			 * to be completed in the stop machine context.
-			 */
-			schedule();
-		}
-	}
+static int wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
+{
+	unsigned long timeout;
 
-	if (x86_platform.legacy.warm_reset) {
-		/*
-		 * Cleanup possible dangling ends...
-		 */
-		smpboot_restore_warm_reset_vector();
+	/*
+	 * Wait up to 10s for the CPU to report in.
+	 */
+	timeout = jiffies + 10*HZ;
+	while (time_before(jiffies, timeout)) {
+		if (cpumask_test_cpu(cpu, mask))
+			return 0;
+
+		schedule();
 	}
+	return -1;
+}
 
-	return boot_error;
+/*
+ * Bringup step two: Wait for the target AP to reach cpu_init_secondary()
+ * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
+ * to proceed.  The AP will then proceed past setting its 'callin' bit
+ * and end up waiting in check_tsc_sync_target() until we reach
+ * do_wait_cpu_online() to tend to it.
+ */
+static int wait_cpu_initialized(unsigned int cpu)
+{
+	/*
+	 * Wait for first sign of life from AP.
+	 */
+	if (wait_cpu_cpumask(cpu, cpu_initialized_mask))
+		return -1;
+
+	cpumask_set_cpu(cpu, cpu_callout_mask);
+	return 0;
 }
 
-int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+/*
+ * Bringup step three: Wait for the target AP to reach smp_callin().
+ * The AP is not waiting for us here so we don't need to parallelise
+ * this step. Not entirely clear why we care about this, since we just
+ * proceed directly to TSC synchronization which is the next sync
+ * point with the AP anyway.
+ */
+static void wait_cpu_callin(unsigned int cpu)
+{
+	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
+		schedule();
+}
+
+/*
+ * Bringup step four: Synchronize the TSC and wait for the target AP
+ * to reach set_cpu_online() in start_secondary().
+ */
+static void wait_cpu_online(unsigned int cpu)
 {
-	int apicid = apic->cpu_present_to_apicid(cpu);
 	unsigned long flags;
+
+	/*
+	 * Check TSC synchronization with the AP (keep irqs disabled
+	 * while doing so):
+	 */
+	local_irq_save(flags);
+	check_tsc_sync_source(cpu);
+	local_irq_restore(flags);
+
+	/*
+	 * Wait for the AP to mark itself online, so the core caller
+	 * can drop sparse_irq_lock.
+	 */
+	while (!cpu_online(cpu))
+		schedule();
+}
+
+static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
+{
+	int apicid = apic->cpu_present_to_apicid(cpu);
 	int err;
 
 	lockdep_assert_irqs_enabled();
@@ -1142,25 +1189,33 @@ int native_cpu_up(unsigned int cpu, stru
 		return err;
 
 	err = do_boot_cpu(apicid, cpu, tidle);
-	if (err) {
+	if (err)
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-		return err;
-	}
 
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
+	return err;
+}
 
-	while (!cpu_online(cpu)) {
-		cpu_relax();
-		touch_nmi_watchdog();
-	}
+int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	int ret;
 
-	return 0;
+	ret = native_kick_ap(cpu, tidle);
+	if (ret)
+		goto out;
+
+	ret = wait_cpu_initialized(cpu);
+	if (ret)
+		goto out;
+
+	wait_cpu_callin(cpu);
+	wait_cpu_online(cpu);
+
+out:
+	/* Cleanup possible dangling ends... */
+	if (x86_platform.legacy.warm_reset)
+		smpboot_restore_warm_reset_vector();
+
+	return ret;
 }
 
 /**




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531753.827602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lu-000798-K3; Mon, 08 May 2023 19:43:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531753.827602; Mon, 08 May 2023 19:43:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lu-000782-Dr; Mon, 08 May 2023 19:43:46 +0000
Received: by outflank-mailman (input) for mailman id 531753;
 Mon, 08 May 2023 19:43:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lt-0004Y5-3r
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:45 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a3ccb2dc-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:43:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3ccb2dc-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185217.779457690@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575023;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=hGYrzvnylJOzDXAat75Sb7qGF0AwMNGvLNaazC51fRk=;
	b=CF0aLnbIsxwy11PPaGSo9/qeT6BDOr9z6WOZ1n5alNxMpu2v9rWR0KwWbfAKKSqQHnu2Hn
	NnnSo+WQJLNlwX0V8RASx7bUePgeQQucrN8jozqHWn53CWFEZvXdW6Y2qKuXf2sMuO8LFd
	vPFSd9/cKf6jPiVXrKOCNBpqoC5p2O4AreHmLXAsBLw7vhozQ4sYeaLwrKGhc6krbZC3oC
	pq7zyVxdn/hqe1GUvju73Vol+z3Myrurw5iGjanI0WSm1tR559wW7/7IR9FR2DUET3Pet6
	d9J7XLK6/DSrSBUNRugaS2ELxaaKrUV3h9/hQgurMpa7sjjcuL3FKGuJi92j9w==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575023;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=hGYrzvnylJOzDXAat75Sb7qGF0AwMNGvLNaazC51fRk=;
	b=zBhk/M4G/ZDNvqYEqQApQE+J8NOSpsSrbhi32/stcCYNqlxvc9b49l1CCvmbf0GB531PSJ
	vM0+Vo+LFsbAPKDw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 10/36] [patch V2 10/38] x86/cpu/cacheinfo: Remove
 cpu_callout_mask dependency
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:42 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

cpu_callout_mask is used for the stop machine based MTRR/PAT init.

In preparation for moving the BP/AP synchronization to the core hotplug
code, use a private CPU mask for cacheinfo and manage it in the
starting/dying hotplug states.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/kernel/cpu/cacheinfo.c |   21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)
---

--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -39,6 +39,8 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t
 /* Shared L2 cache maps */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 
+static cpumask_var_t cpu_cacheinfo_mask;
+
 /* Kernel controls MTRR and/or PAT MSRs. */
 unsigned int memory_caching_control __ro_after_init;
 
@@ -1172,8 +1174,10 @@ void cache_bp_restore(void)
 		cache_cpu_init();
 }
 
-static int cache_ap_init(unsigned int cpu)
+static int cache_ap_online(unsigned int cpu)
 {
+	cpumask_set_cpu(cpu, cpu_cacheinfo_mask);
+
 	if (!memory_caching_control || get_cache_aps_delayed_init())
 		return 0;
 
@@ -1191,11 +1195,17 @@ static int cache_ap_init(unsigned int cp
 	 *      lock to prevent MTRR entry changes
 	 */
 	stop_machine_from_inactive_cpu(cache_rendezvous_handler, NULL,
-				       cpu_callout_mask);
+				       cpu_cacheinfo_mask);
 
 	return 0;
 }
 
+static int cache_ap_offline(unsigned int cpu)
+{
+	cpumask_clear_cpu(cpu, cpu_cacheinfo_mask);
+	return 0;
+}
+
 /*
  * Delayed cache initialization for all AP's
  */
@@ -1210,9 +1220,12 @@ void cache_aps_init(void)
 
 static int __init cache_ap_register(void)
 {
+	zalloc_cpumask_var(&cpu_cacheinfo_mask, GFP_KERNEL);
+	cpumask_set_cpu(smp_processor_id(), cpu_cacheinfo_mask);
+
 	cpuhp_setup_state_nocalls(CPUHP_AP_CACHECTRL_STARTING,
 				  "x86/cachectrl:starting",
-				  cache_ap_init, NULL);
+				  cache_ap_online, cache_ap_offline);
 	return 0;
 }
-core_initcall(cache_ap_register);
+early_initcall(cache_ap_register);




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531754.827607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lv-0007GI-FR; Mon, 08 May 2023 19:43:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531754.827607; Mon, 08 May 2023 19:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lu-0007E9-WD; Mon, 08 May 2023 19:43:47 +0000
Received: by outflank-mailman (input) for mailman id 531754;
 Mon, 08 May 2023 19:43:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lt-0004GB-P8
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:45 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a4b9aeee-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4b9aeee-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185217.839754797@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575024;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=yktK43K4z764grg4BmbpcU4theMKSXX2FJzQQqhK61Y=;
	b=GsDJK3tbruELvka1DJRhV/NBIkEqIqahXpk513esIXbZhzEt6LuYtLjMDEjqNsgBuNYBBU
	aogjszWzzpxIt3n/b6+BHZRPD/S6sHUTKNWfWdl+Xo0wlW8L3AezHNpIPhfG93IQnTMvae
	qL9nms1W2O4Q3FhzXhqnL6cf8KHcJYkuSVfabZ9D24qarslVQp6/AGSHZJDpf9mdAlyCIJ
	8lekxmNQK/wp/CGIZjgt/h3NQHQ7GA1QXrdHbpcPshOA1VWYYllwHhFTGFo5DbH6WHnVTj
	oRS58V/dRLnQ+faNYo+4+8F9UBa9CvlRP+S+Biy7Pmm3Y33zvsFUXJAJeAv3BA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575024;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=yktK43K4z764grg4BmbpcU4theMKSXX2FJzQQqhK61Y=;
	b=CE24QWMJnuzTO1rGtdtOgbz8b1f7ITzm1kMtGlaJ2lzycziTmEj4vdfomqHlwq+kXe//gG
	VsHYBMMe+F6IB+Bw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 11/36] [patch V2 11/38] x86/smpboot: Move synchronization
 masks to SMP boot code
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:44 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The usage is in smpboot.c and not in the CPU initialization code.

The XEN_PV usage of cpu_callout_mask is obsolete as cpu_init() no longer
waits, and cacheinfo has its own CPU mask now, so cpu_callout_mask can be
made static too.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/cpumask.h |    5 -----
 arch/x86/kernel/cpu/common.c   |   17 -----------------
 arch/x86/kernel/smpboot.c      |   16 ++++++++++++++++
 arch/x86/xen/smp_pv.c          |    3 ---
 4 files changed, 16 insertions(+), 25 deletions(-)
---

--- a/arch/x86/include/asm/cpumask.h
+++ b/arch/x86/include/asm/cpumask.h
@@ -4,11 +4,6 @@
 #ifndef __ASSEMBLY__
 #include <linux/cpumask.h>
 
-extern cpumask_var_t cpu_callin_mask;
-extern cpumask_var_t cpu_callout_mask;
-extern cpumask_var_t cpu_initialized_mask;
-extern cpumask_var_t cpu_sibling_setup_mask;
-
 extern void setup_cpu_local_masks(void);
 
 /*
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -67,14 +67,6 @@
 
 u32 elf_hwcap2 __read_mostly;
 
-/* all of these masks are initialized in setup_cpu_local_masks() */
-cpumask_var_t cpu_initialized_mask;
-cpumask_var_t cpu_callout_mask;
-cpumask_var_t cpu_callin_mask;
-
-/* representing cpus for which sibling maps can be computed */
-cpumask_var_t cpu_sibling_setup_mask;
-
 /* Number of siblings per CPU package */
 int smp_num_siblings = 1;
 EXPORT_SYMBOL(smp_num_siblings);
@@ -169,15 +161,6 @@ static void ppin_init(struct cpuinfo_x86
 	clear_cpu_cap(c, info->feature);
 }
 
-/* correctly size the local cpu masks */
-void __init setup_cpu_local_masks(void)
-{
-	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callin_mask);
-	alloc_bootmem_cpumask_var(&cpu_callout_mask);
-	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
-}
-
 static void default_init(struct cpuinfo_x86 *c)
 {
 #ifdef CONFIG_X86_64
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,13 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
+/* All of these masks are initialized in setup_cpu_local_masks() */
+static cpumask_var_t cpu_initialized_mask;
+static cpumask_var_t cpu_callout_mask;
+static cpumask_var_t cpu_callin_mask;
+/* Representing CPUs for which sibling maps can be computed */
+static cpumask_var_t cpu_sibling_setup_mask;
+
 /* Logical package management. We might want to allocate that dynamically */
 unsigned int __max_logical_packages __read_mostly;
 EXPORT_SYMBOL(__max_logical_packages);
@@ -1548,6 +1555,15 @@ early_param("possible_cpus", _setup_poss
 		set_cpu_possible(i, true);
 }
 
+/* correctly size the local cpu masks */
+void __init setup_cpu_local_masks(void)
+{
+	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
+	alloc_bootmem_cpumask_var(&cpu_callin_mask);
+	alloc_bootmem_cpumask_var(&cpu_callout_mask);
+	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
+}
+
 #ifdef CONFIG_HOTPLUG_CPU
 
 /* Recompute SMT state for all CPUs on offline */
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -254,15 +254,12 @@ cpu_initialize_context(unsigned int cpu,
 	struct desc_struct *gdt;
 	unsigned long gdt_mfn;
 
-	/* used to tell cpu_init() that it can proceed with initialization */
-	cpumask_set_cpu(cpu, cpu_callout_mask);
 	if (cpumask_test_and_set_cpu(cpu, xen_cpu_initialized_map))
 		return 0;
 
 	ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
 	if (ctxt == NULL) {
 		cpumask_clear_cpu(cpu, xen_cpu_initialized_map);
-		cpumask_clear_cpu(cpu, cpu_callout_mask);
 		return -ENOMEM;
 	}
 




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:43:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531756.827622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lx-0007vJ-MJ; Mon, 08 May 2023 19:43:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531756.827622; Mon, 08 May 2023 19:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6lx-0007tc-FE; Mon, 08 May 2023 19:43:49 +0000
Received: by outflank-mailman (input) for mailman id 531756;
 Mon, 08 May 2023 19:43:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6lw-0004Y5-F4
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:48 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5b515e3-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:43:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5b515e3-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185217.898282342@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575026;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=WRyTY63fFKNT8NNe9pdrXSxP97ZO9fcfInZz6PguA8M=;
	b=1cHMH34GyxfIxkvmlBecr+5mzqthYk0E5c5P4+XqqcL8e6XurHWWxoOFIeJNtH/QSimVPW
	18V4fky6NzTdKRpcATR0yTdNUZbSMxJNRORImkmezodXTP7pe74xPQQAr7BpbhkgTBLmJ9
	9+OjV54jcwTxOTA+n9J3KUmsDNgl2gb3hdT1mCXG5keQW7p23s7uTtMRmJhNI8qHCATek2
	VFYxci2MRuYhxjE+yAzrFzousZ3auxKM0KBHlEanvL+LGxBtvIkKatD++/+hdxq+0bTgJP
	Q8JQ6UU2zRO8a9/lotv6c8YTYGM7ip1GcZN4ZI6sLMyO//NiTtDVmoUIAZOoIQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575026;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=WRyTY63fFKNT8NNe9pdrXSxP97ZO9fcfInZz6PguA8M=;
	b=a+XTx0gdMg20467/DY65JXjTt0K7JO0GQA4Eac40m+yQL/vp4DDGDsEbZstdM86jzLh2iO
	AyCWRRDCHHJEPTBg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 12/36] x86/smpboot: Make TSC synchronization function
 call based
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:46 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Spin-waiting on the control CPU until the AP reaches the TSC
synchronization code is just a waste, especially when no synchronization
is required.

As the synchronization has to run with interrupts disabled, the control
CPU part can be done from an SMP function call. The upcoming AP issues
that call asynchronously, and only when synchronization is actually
required.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/tsc.h |    2 --
 arch/x86/kernel/smpboot.c  |   20 +++-----------------
 arch/x86/kernel/tsc_sync.c |   36 +++++++++++-------------------------
 3 files changed, 14 insertions(+), 44 deletions(-)
---

--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -55,12 +55,10 @@ extern bool tsc_async_resets;
 #ifdef CONFIG_X86_TSC
 extern bool tsc_store_and_check_tsc_adjust(bool bootcpu);
 extern void tsc_verify_tsc_adjust(bool resume);
-extern void check_tsc_sync_source(int cpu);
 extern void check_tsc_sync_target(void);
 #else
 static inline bool tsc_store_and_check_tsc_adjust(bool bootcpu) { return false; }
 static inline void tsc_verify_tsc_adjust(bool resume) { }
-static inline void check_tsc_sync_source(int cpu) { }
 static inline void check_tsc_sync_target(void) { }
 #endif
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -278,11 +278,7 @@ static void notrace start_secondary(void
 	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
 	barrier();
 
-	/*
-	 * Check TSC synchronization with the control CPU, which will do
-	 * its part of this from wait_cpu_online(), making it an implicit
-	 * synchronization point.
-	 */
+	/* Check TSC synchronization with the control CPU. */
 	check_tsc_sync_target();
 
 	/*
@@ -1144,21 +1140,11 @@ static void wait_cpu_callin(unsigned int
 }
 
 /*
- * Bringup step four: Synchronize the TSC and wait for the target AP
- * to reach set_cpu_online() in start_secondary().
+ * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
 {
-	unsigned long flags;
-
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
-
 	/*
 	 * Wait for the AP to mark itself online, so the core caller
 	 * can drop sparse_irq_lock.
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -245,7 +245,6 @@ bool tsc_store_and_check_tsc_adjust(bool
  */
 static atomic_t start_count;
 static atomic_t stop_count;
-static atomic_t skip_test;
 static atomic_t test_runs;
 
 /*
@@ -344,21 +343,14 @@ static inline unsigned int loop_timeout(
 }
 
 /*
- * Source CPU calls into this - it waits for the freshly booted
- * target CPU to arrive and then starts the measurement:
+ * The freshly booted CPU initiates this via an async SMP function call.
  */
-void check_tsc_sync_source(int cpu)
+static void check_tsc_sync_source(void *__cpu)
 {
+	unsigned int cpu = (unsigned long)__cpu;
 	int cpus = 2;
 
 	/*
-	 * No need to check if we already know that the TSC is not
-	 * synchronized or if we have no TSC.
-	 */
-	if (unsynchronized_tsc())
-		return;
-
-	/*
 	 * Set the maximum number of test runs to
 	 *  1 if the CPU does not provide the TSC_ADJUST MSR
 	 *  3 if the MSR is available, so the target can try to adjust
@@ -368,16 +360,9 @@ void check_tsc_sync_source(int cpu)
 	else
 		atomic_set(&test_runs, 3);
 retry:
-	/*
-	 * Wait for the target to start or to skip the test:
-	 */
-	while (atomic_read(&start_count) != cpus - 1) {
-		if (atomic_read(&skip_test) > 0) {
-			atomic_set(&skip_test, 0);
-			return;
-		}
+	/* Wait for the target to start. */
+	while (atomic_read(&start_count) != cpus - 1)
 		cpu_relax();
-	}
 
 	/*
 	 * Trigger the target to continue into the measurement too:
@@ -397,14 +382,14 @@ void check_tsc_sync_source(int cpu)
 	if (!nr_warps) {
 		atomic_set(&test_runs, 0);
 
-		pr_debug("TSC synchronization [CPU#%d -> CPU#%d]: passed\n",
+		pr_debug("TSC synchronization [CPU#%d -> CPU#%u]: passed\n",
 			smp_processor_id(), cpu);
 
 	} else if (atomic_dec_and_test(&test_runs) || random_warps) {
 		/* Force it to 0 if random warps brought us here */
 		atomic_set(&test_runs, 0);
 
-		pr_warn("TSC synchronization [CPU#%d -> CPU#%d]:\n",
+		pr_warn("TSC synchronization [CPU#%d -> CPU#%u]:\n",
 			smp_processor_id(), cpu);
 		pr_warn("Measured %Ld cycles TSC warp between CPUs, "
 			"turning off TSC clock.\n", max_warp);
@@ -457,11 +442,12 @@ void check_tsc_sync_target(void)
 	 * SoCs the TSC is frequency synchronized, but still the TSC ADJUST
 	 * register might have been wreckaged by the BIOS..
 	 */
-	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable) {
-		atomic_inc(&skip_test);
+	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable)
 		return;
-	}
 
+	/* Kick the control CPU into the TSC synchronization function */
+	smp_call_function_single(cpumask_first(cpu_online_mask), check_tsc_sync_source,
+				 (unsigned long *)(unsigned long)cpu, 0);
 retry:
 	/*
 	 * Register this CPU's participation and wait for the




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:52 2023
Message-ID: <20230508185217.956149661@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 13/36] x86/smpboot: Remove cpu_callin_mask
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:47 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that TSC synchronization is SMP function call based, there is no reason
to wait for the AP to be set in cpu_callin_mask. The control CPU waits for
the AP to set itself in the online mask anyway.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/kernel/smpboot.c |   61 +++++++---------------------------------------
 1 file changed, 10 insertions(+), 51 deletions(-)
---

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -104,7 +104,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_info);
 /* All of these masks are initialized in setup_cpu_local_masks() */
 static cpumask_var_t cpu_initialized_mask;
 static cpumask_var_t cpu_callout_mask;
-static cpumask_var_t cpu_callin_mask;
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -167,21 +166,16 @@ static inline void smpboot_restore_warm_
  */
 static void smp_callin(void)
 {
-	int cpuid;
+	int cpuid = smp_processor_id();
 
 	/*
 	 * If waken up by an INIT in an 82489DX configuration
-	 * cpu_callout_mask guarantees we don't get here before
-	 * an INIT_deassert IPI reaches our local APIC, so it is
-	 * now safe to touch our local APIC.
-	 */
-	cpuid = smp_processor_id();
-
-	/*
-	 * the boot CPU has finished the init stage and is spinning
-	 * on callin_map until we finish. We are free to set up this
-	 * CPU, first the APIC. (this is probably redundant on most
-	 * boards)
+	 * cpu_callout_mask guarantees we don't get here before an
+	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
+	 * touch our local APIC.
+	 *
+	 * Set up this CPU, first the APIC, which is probably redundant on
+	 * most boards.
 	 */
 	apic_ap_setup();
 
@@ -192,7 +186,7 @@ static void smp_callin(void)
 	 * The topology information must be up to date before
 	 * calibrate_delay() and notify_cpu_starting().
 	 */
-	set_cpu_sibling_map(raw_smp_processor_id());
+	set_cpu_sibling_map(cpuid);
 
 	ap_init_aperfmperf();
 
@@ -205,11 +199,6 @@ static void smp_callin(void)
 	 * state (CPUHP_ONLINE in the case of serial bringup).
 	 */
 	notify_cpu_starting(cpuid);
-
-	/*
-	 * Allow the master to continue.
-	 */
-	cpumask_set_cpu(cpuid, cpu_callin_mask);
 }
 
 static void ap_calibrate_delay(void)
@@ -268,11 +257,6 @@ static void notrace start_secondary(void
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 
-	/*
-	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
-	 * but just sets the bit to let the controlling CPU (BSP) know that
-	 * it's got this far.
-	 */
 	smp_callin();
 
 	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
@@ -1112,7 +1096,7 @@ static int wait_cpu_cpumask(unsigned int
  * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
  * to proceed.  The AP will then proceed past setting its 'callin' bit
  * and end up waiting in check_tsc_sync_target() until we reach
- * do_wait_cpu_online() to tend to it.
+ * wait_cpu_online() to tend to it.
  */
 static int wait_cpu_initialized(unsigned int cpu)
 {
@@ -1127,20 +1111,7 @@ static int wait_cpu_initialized(unsigned
 }
 
 /*
- * Bringup step three: Wait for the target AP to reach smp_callin().
- * The AP is not waiting for us here so we don't need to parallelise
- * this step. Not entirely clear why we care about this, since we just
- * proceed directly to TSC synchronization which is the next sync
- * point with the AP anyway.
- */
-static void wait_cpu_callin(unsigned int cpu)
-{
-	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
-		schedule();
-}
-
-/*
- * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * Bringup step three: Wait for the target AP to reach set_cpu_online() in
  * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
@@ -1170,14 +1141,6 @@ static int native_kick_ap(unsigned int c
 	}
 
 	/*
-	 * Already booted CPU?
-	 */
-	if (cpumask_test_cpu(cpu, cpu_callin_mask)) {
-		pr_debug("do_boot_cpu %d Already started\n", cpu);
-		return -ENOSYS;
-	}
-
-	/*
 	 * Save current MTRR state in case it was changed since early boot
 	 * (e.g. by the ACPI SMI) to initialize new CPUs with MTRRs in sync:
 	 */
@@ -1214,7 +1177,6 @@ int native_cpu_up(unsigned int cpu, stru
 	if (ret)
 		goto out;
 
-	wait_cpu_callin(cpu);
 	wait_cpu_online(cpu);
 
 out:
@@ -1330,7 +1292,6 @@ void __init smp_prepare_cpus_common(void
 	 * Setup boot CPU information
 	 */
 	smp_store_boot_cpu_info(); /* Final full version of the data */
-	cpumask_copy(cpu_callin_mask, cpumask_of(0));
 	mb();
 
 	for_each_possible_cpu(i) {
@@ -1545,7 +1506,6 @@ early_param("possible_cpus", _setup_poss
 void __init setup_cpu_local_masks(void)
 {
 	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callin_mask);
 	alloc_bootmem_cpumask_var(&cpu_callout_mask);
 	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
 }
@@ -1609,7 +1569,6 @@ static void remove_cpu_from_maps(int cpu
 {
 	set_cpu_online(cpu, false);
 	cpumask_clear_cpu(cpu, cpu_callout_mask);
-	cpumask_clear_cpu(cpu, cpu_callin_mask);
 	/* was set by cpu_init() */
 	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	numa_remove_cpu(cpu);




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:53 2023
Message-ID: <20230508185218.013044883@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 14/36] cpu/hotplug: Rework sparse_irq locking in
 bringup_cpu()
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:49 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

There is no harm in holding the sparse_irq lock until the upcoming CPU
completes in cpuhp_online_idle(). This allows the cpu_online()
synchronization to be removed from architecture code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 kernel/cpu.c |   28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)
---

--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -558,7 +558,7 @@ static int cpuhp_kick_ap(int cpu, struct
 	return ret;
 }
 
-static int bringup_wait_for_ap(unsigned int cpu)
+static int bringup_wait_for_ap_online(unsigned int cpu)
 {
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 
@@ -579,15 +579,12 @@ static int bringup_wait_for_ap(unsigned
 	 */
 	if (!cpu_smt_allowed(cpu))
 		return -ECANCELED;
-
-	if (st->target <= CPUHP_AP_ONLINE_IDLE)
-		return 0;
-
-	return cpuhp_kick_ap(cpu, st, st->target);
+	return 0;
 }
 
 static int bringup_cpu(unsigned int cpu)
 {
+	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
@@ -606,10 +603,23 @@ static int bringup_cpu(unsigned int cpu)
 
 	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
-	irq_unlock_sparse();
 	if (ret)
-		return ret;
-	return bringup_wait_for_ap(cpu);
+		goto out_unlock;
+
+	ret = bringup_wait_for_ap_online(cpu);
+	if (ret)
+		goto out_unlock;
+
+	irq_unlock_sparse();
+
+	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+		return 0;
+
+	return cpuhp_kick_ap(cpu, st, st->target);
+
+out_unlock:
+	irq_unlock_sparse();
+	return ret;
 }
 
 static int finish_cpu(unsigned int cpu)




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:54 2023
Message-ID: <20230508185218.070008578@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 15/36] x86/smpboot: Remove wait for cpu_online()
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:51 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the core code drops sparse_irq_lock after the idle thread has
synchronized, it is pointless to wait for the AP to mark itself online.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/kernel/smpboot.c |   26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)
---

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -281,7 +281,6 @@ static void notrace start_secondary(void
 	 * half valid vector space.
 	 */
 	lock_vector_lock();
-	/* Sync point with do_wait_cpu_online() */
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
@@ -1110,20 +1109,6 @@ static int wait_cpu_initialized(unsigned
 	return 0;
 }
 
-/*
- * Bringup step three: Wait for the target AP to reach set_cpu_online() in
- * start_secondary().
- */
-static void wait_cpu_online(unsigned int cpu)
-{
-	/*
-	 * Wait for the AP to mark itself online, so the core caller
-	 * can drop sparse_irq_lock.
-	 */
-	while (!cpu_online(cpu))
-		schedule();
-}
-
 static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
@@ -1170,16 +1155,9 @@ int native_cpu_up(unsigned int cpu, stru
 	int ret;
 
 	ret = native_kick_ap(cpu, tidle);
-	if (ret)
-		goto out;
-
-	ret = wait_cpu_initialized(cpu);
-	if (ret)
-		goto out;
-
-	wait_cpu_online(cpu);
+	if (!ret)
+		ret = wait_cpu_initialized(cpu);
 
-out:
 	/* Cleanup possible dangling ends... */
 	if (x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:43:57 2023
Message-ID: <20230508185218.127315637@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 16/36] x86/xen/smp_pv: Remove wait for CPU online
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:52 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the core code drops sparse_irq_lock after the idle thread has
synchronized, it's pointless to wait for the AP to mark itself online.

Whether the control CPU runs in a wait loop or sleeps in the core code
waiting for the online operation to complete makes no difference.
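
The observable behaviour is the same either way, which a minimal userspace sketch can illustrate. All names below (`fake_report_state`, `old_xen_cpu_up`, `new_xen_cpu_up`, `poll_count`) are illustrative stand-ins for the kernel's `cpu_report_state()` loop and the hotplug core's wait, not real kernel APIs:

```c
#include <stdbool.h>

/* Model: the AP needs a few polls before it marks itself online. */
enum cpu_state { CPU_BROUGHT_UP, CPU_ONLINE };

int poll_count;

static enum cpu_state fake_report_state(void)
{
	return ++poll_count >= 3 ? CPU_ONLINE : CPU_BROUGHT_UP;
}

/* Old Xen path: the arch code busy-waits (yielding to the hypervisor)
 * until the AP reports itself online. */
bool old_xen_cpu_up(void)
{
	poll_count = 0;
	while (fake_report_state() != CPU_ONLINE)
		;	/* HYPERVISOR_sched_op(SCHEDOP_yield, NULL) went here */
	return true;
}

/* New path: the arch hook returns right after the (modelled) VCPUOP_up
 * hypercall; an equivalent sleeping wait now happens in the hotplug
 * core before the CPU is declared up, so the result is identical. */
bool new_xen_cpu_up(void)
{
	return true;	/* wait for CPU_ONLINE moved into the core code */
}
```

Both entry points succeed under the same condition; only where the waiting happens differs.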

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/xen/smp_pv.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
---

--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -340,11 +340,11 @@ static int xen_pv_cpu_up(unsigned int cp
 
 	xen_pmu_init(cpu);
 
-	rc = HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL);
-	BUG_ON(rc);
-
-	while (cpu_report_state(cpu) != CPU_ONLINE)
-		HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
+	/*
+	 * Why is this a BUG? If the hypercall fails then everything can be
+	 * rolled back, no?
+	 */
+	BUG_ON(HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL));
 
 	return 0;
 }




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531785.827683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pc-0005bd-Ko; Mon, 08 May 2023 19:47:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531785.827683; Mon, 08 May 2023 19:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pc-0005Y7-E3; Mon, 08 May 2023 19:47:36 +0000
Received: by outflank-mailman (input) for mailman id 531785;
 Mon, 08 May 2023 19:47:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mE-0004Y5-Hm
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:06 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b0758392-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:44:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0758392-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185218.525901017@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575044;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=p1bTZcu/tFFl7ZFXZHrWEevEsbPzi9oWCJbzAEjoRvU=;
	b=ro1G7PpJ83X6+w8xekc1Z+eWqsLGpjtOzo4PP6r5XIV4J/QSzXaoSgHSrgfE8UoFXJAtVZ
	ddPmGBBvhizIfL5MKQAHxiueGiD7SaKd8SVF5xoPIMD87HT7xLkes0Mcannkaqabpdim7S
	3JJn+0HsTPLVLlf7o4k4xojJ0ha9PM09J1xPq/22Ii89ON2+9r5obP9K+fx8VKe6+Ofd/g
	p7PvjLDXUy1B93tSth06IG3OPn/RKQSfM5HzHrKJn2PQ/N/AB81EoHej+RLOqJotuRy5u1
	RqZ9dU8p9XQKefuuGDTYHgGekuJbpsfN6iEzcsmbpxKC9VJxb72TeG+/ixZKCA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575044;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=p1bTZcu/tFFl7ZFXZHrWEevEsbPzi9oWCJbzAEjoRvU=;
	b=+JfvJZKkBGEOkJ0Rz4Oc2ENXcMdWjC9X0M7zDLfKJYbzqf6/zAZPUYwnaqTXpxX83jhfM1
	Eek8gh5PJeZxIPBw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 23/36] [patch V2 23/38] csky/smp: Switch to hotplug core
 state synchronization
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:04 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.
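
The shape of the conversion can be sketched in userspace. The names below mirror the kernel helpers (`cpuhp_ap_report_dead()`, `arch_cpuhp_cleanup_dead_cpu()`) but the state word is a stand-in for the hotplug core's bookkeeping, not the real implementation:

```c
#include <stdbool.h>

int ap_dead;	/* 0 = AP running, 1 = AP has reported itself dead */

/* Dying CPU side, called from arch_cpu_idle_dead(): a single report
 * to the core replaces the old cpu_report_death(). */
void model_cpuhp_ap_report_dead(void)
{
	ap_dead = 1;
}

/* Control CPU side.  Old: __cpu_die() had to poll with a 5 second
 * timeout via cpu_wait_death() and handle failure itself.  New: the
 * core waits for the report and invokes arch_cpuhp_cleanup_dead_cpu()
 * only once it has arrived, so the arch hook reduces to the final
 * pr_notice(). */
bool model_cleanup_dead_cpu(void)
{
	return ap_dead == 1;	/* guaranteed by the core before the call */
}
```

This is why the csky `__cpu_die()` body shrinks to a single notice: the timeout and error path now live in the generic code.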

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/csky/Kconfig           |    1 +
 arch/csky/include/asm/smp.h |    2 +-
 arch/csky/kernel/smp.c      |    8 ++------
 3 files changed, 4 insertions(+), 7 deletions(-)
---

--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -96,6 +96,7 @@ config CSKY
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select MAY_HAVE_SPARSE_IRQ
 	select MODULES_USE_ELF_RELA if MODULES
 	select OF
--- a/arch/csky/include/asm/smp.h
+++ b/arch/csky/include/asm/smp.h
@@ -23,7 +23,7 @@ void __init set_send_ipi(void (*func)(co
 
 int __cpu_disable(void);
 
-void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 
 #endif /* CONFIG_SMP */
 
--- a/arch/csky/kernel/smp.c
+++ b/arch/csky/kernel/smp.c
@@ -291,12 +291,8 @@ int __cpu_disable(void)
 	return 0;
 }
 
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: shutdown failed\n", cpu);
-		return;
-	}
 	pr_notice("CPU%u: shutdown\n", cpu);
 }
 
@@ -304,7 +300,7 @@ void __noreturn arch_cpu_idle_dead(void)
 {
 	idle_task_exit();
 
-	cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	while (!secondary_stack)
 		arch_cpu_idle();




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531784.827677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pc-0005WG-AZ; Mon, 08 May 2023 19:47:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531784.827677; Mon, 08 May 2023 19:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pc-0005Ur-5P; Mon, 08 May 2023 19:47:36 +0000
Received: by outflank-mailman (input) for mailman id 531784;
 Mon, 08 May 2023 19:47:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mJ-0004Y5-88
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:11 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b35bcc9b-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:44:09 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b35bcc9b-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185218.696553809@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575049;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=3tzd3hEIt5mPiFNhB7mGn6+j5PGwWpl970beJ0OUc/w=;
	b=C9XC5JI8zhwnnoQtLMIubfpUrFZX045J0qH0esdV5efaL1DfV2riv76m/cVjfmJF0fN9eF
	Dg9M8YbSEWBbAOdQKsXxR5bxpQ7opMqC9KZ7LG3mnhfF2YiHhuyNa2uxYzQh+4oA7V/0+Y
	cYy3VXWPussLtl1gvcbp245haJ5E4+9ssxa8XZloSJNIOQi4zAFNxq77aqQQmu5H6aGbpR
	XU+KqCkEIHDUAbHLQ4CVUIfkHOtLfNXQ5l22br1Rpz/dhbFOz00hUN+6cf2I0hZ8tRIa+X
	2TI0s8Q/f+BqbSNzvYN5ujjjPbaZrYZMoD26kttduUli9cAwakgVH+aefPZcgg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575049;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=3tzd3hEIt5mPiFNhB7mGn6+j5PGwWpl970beJ0OUc/w=;
	b=iemtyuzb0UI5fQ6Aq9I7tT3PTD5UcdGEt1e/CnmmJ3j7zbcR7ak/dMGG4/JiCkppeVPKaa
	xBpQm1dpjEEo1dBQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Palmer Dabbelt <palmer@rivosinc.com>
Subject: [patch v3 26/36] riscv: Switch to hotplug core state synchronization
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:08 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/riscv/Kconfig              |    1 +
 arch/riscv/include/asm/smp.h    |    2 +-
 arch/riscv/kernel/cpu-hotplug.c |   14 +++++++-------
 3 files changed, 9 insertions(+), 8 deletions(-)
---

--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -122,6 +122,7 @@ config RISCV
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
--- a/arch/riscv/include/asm/smp.h
+++ b/arch/riscv/include/asm/smp.h
@@ -70,7 +70,7 @@ asmlinkage void smp_callin(void);
 
 #if defined CONFIG_HOTPLUG_CPU
 int __cpu_disable(void);
-void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 #endif /* CONFIG_HOTPLUG_CPU */
 
 #else
--- a/arch/riscv/kernel/cpu-hotplug.c
+++ b/arch/riscv/kernel/cpu-hotplug.c
@@ -8,6 +8,7 @@
 #include <linux/sched.h>
 #include <linux/err.h>
 #include <linux/irq.h>
+#include <linux/cpuhotplug.h>
 #include <linux/cpu.h>
 #include <linux/sched/hotplug.h>
 #include <asm/irq.h>
@@ -49,17 +50,15 @@ int __cpu_disable(void)
 	return ret;
 }
 
+#ifdef CONFIG_HOTPLUG_CPU
 /*
- * Called on the thread which is asking for a CPU to be shutdown.
+ * Called on the thread which is asking for a CPU to be shutdown, if the
+ * CPU reported dead to the hotplug core.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
 	int ret = 0;
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU %u: didn't die\n", cpu);
-		return;
-	}
 	pr_notice("CPU%u: off\n", cpu);
 
 	/* Verify from the firmware if the cpu is really stopped*/
@@ -76,9 +75,10 @@ void __noreturn arch_cpu_idle_dead(void)
 {
 	idle_task_exit();
 
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	cpu_ops[smp_processor_id()]->cpu_stop();
 	/* It should never reach here */
 	BUG();
 }
+#endif




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531782.827672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pc-0005T0-27; Mon, 08 May 2023 19:47:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531782.827672; Mon, 08 May 2023 19:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pb-0005Sa-Ti; Mon, 08 May 2023 19:47:35 +0000
Received: by outflank-mailman (input) for mailman id 531782;
 Mon, 08 May 2023 19:47:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mY-0004GB-Vg
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:26 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bd2a9f97-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:44:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd2a9f97-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185219.230287961@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575065;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=3NBWNJYu3OFWIUCmAc5GXW8QVPdUswJXSMxcyKJq1tg=;
	b=K6CA2AWS2i4RYcmGQF5izhhtRvEH4Lqy1zX4leSX1xCbJyu2fRUt2bE/H/K1KBaD4mhPbX
	DXbEEuDfyoRsRnWOS+Az/SwmgLHy0tkGFwSi+d9yJ7tMxgkcQqSUw6UAeTo9FkUMHY3IUg
	T/DcJN7g9UpndwKZO+NAaBwlV2ov/Q5kI7PbzfEk3wlLHqSoSVXJmbX27DqtE0LGcHeJ5/
	uMD6H0ks6CuxavlZamlO82S2b8EL75FhHxK4jbfs4N8WF5LaIuaXPH1KPg0hu/77zltt9Q
	FxeGNYCpIMpzLNNpAb0Z1xP21rZ/1pqRbky5sY5CgPzxIzrF2nbI5kpsiH2C4A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575065;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=3NBWNJYu3OFWIUCmAc5GXW8QVPdUswJXSMxcyKJq1tg=;
	b=wmV5ftrL9XveDdj42u5NoraYPGMS4sHOk5l7sWwCFijJk9PW7V8wwmY2iynLMZfkCTudZu
	C7051zfZnT0NNeAQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 36/36] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:25 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Implement the validation function which tells the core code whether
parallel bringup is possible.

The only condition for now is that the kernel does not run in an encrypted
guest as these will trap the RDMSR via #VC, which cannot be handled at that
point in early startup.

There was an earlier variant for AMD-SEV which used the GHCB protocol for
retrieving the APIC ID via CPUID, but there is no guarantee that the
initial APIC ID in CPUID is the same as the real APIC ID. There is no
enforcement from the secure firmware and the hypervisor can assign APIC IDs
as it sees fit as long as the ACPI/MADT table is consistent with that
assignment.

Unfortunately there is no RDMSR GHCB protocol at the moment, so enabling
AMD-SEV guests for parallel startup needs some more thought.

Intel-TDX provides a secure RDMSR hypercall, but supporting that is outside
the scope of this change.
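
The resulting gate is a simple predicate. A hedged sketch of that decision follows; `init_parallel_bringup()` and its boolean parameter are stand-ins for the real `arch_cpuhp_init_parallel_bringup()` and `cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)`, and the flag value is illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define STARTUP_READ_APICID	0x80000000u	/* illustrative flag value */

uint32_t smpboot_control;

/* Parallel bringup is allowed only when the guest's register state is
 * not encrypted: the early startup RDMSR of the APIC ID would trap
 * (#VC/#VE) before any exception handler exists. */
bool init_parallel_bringup(bool guest_state_encrypted)
{
	if (guest_state_encrypted)
		return false;			/* fall back to serial bringup */

	smpboot_control = STARTUP_READ_APICID;	/* APs read their own APIC ID */
	return true;
}
```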

Fixup announce_cpu() as e.g. on Hyper-V CPU1 is the secondary sibling of
CPU0, which makes the @cpu == 1 logic in announce_cpu() fall apart.
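
The fix replaces the `cpu == 1` test with a static `first` flag keyed on call order. A minimal userspace model of that pattern (writing into a buffer instead of the kernel log, with `announce_cpu_model()` as an illustrative stand-in):

```c
#include <stdio.h>
#include <string.h>

/* Print the SMP banner exactly once, on the first announced CPU,
 * regardless of which CPU number that happens to be. */
static int first = 1;

int announce_cpu_model(int cpu, char *buf, size_t len)
{
	int n = 0;

	if (first) {
		n += snprintf(buf + n, len - n,
			      "x86: Booting SMP configuration:\n");
		first = 0;
	}
	n += snprintf(buf + n, len - n, "#%d", cpu);
	return n;
}
```

With this, a topology where CPU1 is CPU0's SMT sibling still prints the banner once and only once.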

[ mikelley: Reported the announce_cpu() fallout ]

Originally-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>

---
V2: Fixup announce_cpu() - Michael Kelley
V3: Fixup announce_cpu() for real - Michael Kelley
---
 arch/x86/Kconfig             |    3 -
 arch/x86/kernel/cpu/common.c |    6 --
 arch/x86/kernel/smpboot.c    |   87 +++++++++++++++++++++++++++++++++++--------
 3 files changed, 75 insertions(+), 21 deletions(-)
---

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,8 +274,9 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_PARALLEL			if SMP && X86_64
 	select HOTPLUG_SMT			if SMP
-	select HOTPLUG_SPLIT_STARTUP		if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP && X86_32
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2128,11 +2128,7 @@ static inline void setup_getcpu(int cpu)
 }
 
 #ifdef CONFIG_X86_64
-static inline void ucode_cpu_init(int cpu)
-{
-	if (cpu)
-		load_ucode_ap();
-}
+static inline void ucode_cpu_init(int cpu) { }
 
 static inline void tss_setup_ist(struct tss_struct *tss)
 {
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -58,6 +58,7 @@
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
 #include <linux/cpuhotplug.h>
+#include <linux/mc146818rtc.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -75,7 +76,7 @@
 #include <asm/fpu/api.h>
 #include <asm/setup.h>
 #include <asm/uv/uv.h>
-#include <linux/mc146818rtc.h>
+#include <asm/microcode.h>
 #include <asm/i8259.h>
 #include <asm/misc.h>
 #include <asm/qspinlock.h>
@@ -128,7 +129,6 @@ int arch_update_cpu_topology(void)
 	return retval;
 }
 
-
 static unsigned int smpboot_warm_reset_vector_count;
 
 static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
@@ -229,16 +229,43 @@ static void notrace start_secondary(void
 	 */
 	cr4_init();
 
-#ifdef CONFIG_X86_32
-	/* switch away from the initial page table */
-	load_cr3(swapper_pg_dir);
-	__flush_tlb_all();
-#endif
+	/*
+	 * 32-bit specific. 64-bit reaches this code with the correct page
+	 * table established. Yet another historical divergence.
+	 */
+	if (IS_ENABLED(CONFIG_X86_32)) {
+		/* switch away from the initial page table */
+		load_cr3(swapper_pg_dir);
+		__flush_tlb_all();
+	}
+
 	cpu_init_exception_handling();
 
 	/*
-	 * Synchronization point with the hotplug core. Sets the
-	 * synchronization state to ALIVE and waits for the control CPU to
+	 * 32-bit systems load the microcode from the ASM startup code for
+	 * historical reasons.
+	 *
+	 * On 64-bit systems load it before reaching the AP alive
+	 * synchronization point below so it is not part of the full per
+	 * CPU serialized bringup part when "parallel" bringup is enabled.
+	 *
+	 * That's even safe when hyperthreading is enabled in the CPU as
+	 * the core code starts the primary threads first and leaves the
+	 * secondary threads waiting for SIPI. Loading microcode on
+	 * physical cores concurrently is a safe operation.
+	 *
+	 * This covers both the Intel specific issue that concurrent
+	 * microcode loading on SMT siblings must be prohibited and the
+	 * vendor independent issue that microcode loading which changes
+	 * CPUID, MSRs etc. must be strictly serialized to maintain
+	 * software state correctness.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64))
+		load_ucode_ap();
+
+	/*
+	 * Synchronization point with the hotplug core. Sets this CPU's
+	 * synchronization state to ALIVE and spin-waits for the control CPU to
 	 * release this CPU for further bringup.
 	 */
 	cpuhp_ap_sync_alive();
@@ -924,9 +951,9 @@ static int wakeup_secondary_cpu_via_init
 /* reduce the number of lines printed when booting a large cpu count system */
 static void announce_cpu(int cpu, int apicid)
 {
+	static int width, node_width, first = 1;
 	static int current_node = NUMA_NO_NODE;
 	int node = early_cpu_to_node(cpu);
-	static int width, node_width;
 
 	if (!width)
 		width = num_digits(num_possible_cpus()) + 1; /* + '#' sign */
@@ -934,10 +961,10 @@ static void announce_cpu(int cpu, int ap
 	if (!node_width)
 		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */
 
-	if (cpu == 1)
-		printk(KERN_INFO "x86: Booting SMP configuration:\n");
-
 	if (system_state < SYSTEM_RUNNING) {
+		if (first)
+			pr_info("x86: Booting SMP configuration:\n");
+
 		if (node != current_node) {
 			if (current_node > (-1))
 				pr_cont("\n");
@@ -948,11 +975,11 @@ static void announce_cpu(int cpu, int ap
 		}
 
 		/* Add padding for the BSP */
-		if (cpu == 1)
+		if (first)
 			pr_cont("%*s", width + 1, " ");
+		first = 0;
 
 		pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);
-
 	} else
 		pr_info("Booting Node %d Processor %d APIC 0x%x\n",
 			node, cpu, apicid);
@@ -1242,6 +1269,36 @@ void __init smp_prepare_cpus_common(void
 	set_cpu_sibling_map(0);
 }
 
+#ifdef CONFIG_X86_64
+/* Establish whether parallel bringup can be supported. */
+bool __init arch_cpuhp_init_parallel_bringup(void)
+{
+	/*
+	 * Encrypted guests require special handling. They enforce X2APIC
+	 * mode but the RDMSR to read the APIC ID is intercepted and raises
+	 * #VC or #VE which cannot be handled in the early startup code.
+	 *
+	 * AMD-SEV does not provide a RDMSR GHCB protocol so the early
+	 * startup code cannot directly communicate with the secure
+	 * firmware. The alternative solution to retrieve the APIC ID via
+	 * CPUID(0xb), which is covered by the GHCB protocol, is not viable
+	 * either because there is no enforcement of the CPUID(0xb)
+	 * provided "initial" APIC ID to be the same as the real APIC ID.
+	 *
+	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
+	 * implemented separately in the low level startup ASM code.
+	 */
+	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
+		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
+		return false;
+	}
+
+	smpboot_control = STARTUP_READ_APICID;
+	pr_debug("Parallel CPU startup enabled: 0x%08x\n", smpboot_control);
+	return true;
+}
+#endif
+
 /*
  * Prepare for SMP bootup.
  * @max_cpus: configured maximum number of CPUs, It is a legacy parameter



From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531786.827701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pe-0006A8-48; Mon, 08 May 2023 19:47:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531786.827701; Mon, 08 May 2023 19:47:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pe-000697-0E; Mon, 08 May 2023 19:47:38 +0000
Received: by outflank-mailman (input) for mailman id 531786;
 Mon, 08 May 2023 19:47:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mX-0004GB-Dl
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:25 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bc32fc7e-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:44:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc32fc7e-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185219.176824543@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575064;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=aMiZ8OZ6vDaTJDCLruBwx0VdtP/fC3sP/hWfkyTStdQ=;
	b=yuDnwG7zN3SLTmatxCLQi6G3M2r5niq2jqlm9COB92lTFmTMZaYgZryIJ+rzTHsCHONba6
	vWo0dxyJs4ESGXbgE0lFP3hCB3L6owshuQVhMMGyIHPKxrbdF1XesOdpTLgoP9tB48W8oh
	mouC0wKak7b/sNfOgtGV8ddHlzYxwPPZi9y6XIo/ya2YHeA0kMFIv3cjw+uxxSpUsxWDUr
	9/ia6goJRJHQwRV1fF/1M6+Sa2hR4Xaf5N6IFcpZwOon0DvYoweKWPqBcr5ZaIifGmqw9E
	7BWWTQJenDmhznIV/zB3Diw5xpR2mScuFAmY1LhjlhzH3b1XVCnrQpwUO8SbWw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575064;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=aMiZ8OZ6vDaTJDCLruBwx0VdtP/fC3sP/hWfkyTStdQ=;
	b=z+Qn0Pl/QKCqJ4Eb8oVMoc5d6bP6ce6mJl3mcAgcLTAEC9qajSvfk3Z5fjH3P38dAJ+nTB
	ojjVcTl/IJG6D+Dg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject:
 [patch v3 35/36] x86/smpboot: Support parallel startup of secondary CPUs
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:23 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

In parallel startup mode the APs are kicked alive by the control CPU in quick
succession and run through the early startup code in parallel. The real-mode
startup code is already serialized with a bit-spinlock to protect the
real-mode stack.

In parallel startup mode the smpboot_control variable obviously cannot
contain the Linux CPU number, so the APs have to determine their Linux CPU
number on their own. This is required to find the CPU's per-CPU offset,
which in turn is needed to locate the idle task stack and other per-CPU
data.

To achieve this, export the cpuid_to_apicid[] array so that each AP can
find its own CPU number by searching therein based on its APIC ID.

Introduce a flag in the top bits of smpboot_control which indicates that
the AP should find its CPU number by reading the APIC ID from the APIC.

This is required because CPUID-based APIC ID retrieval can only provide the
initial APIC ID, which might have been overridden by the firmware. Some AMD
APUs come up with APIC ID = initial APIC ID + 0x10, so an APIC ID to CPU
number lookup based on CPUID would fail miserably. Virtualization can also
make its own APIC ID assignments. The only requirement is that the APIC IDs
are consistent with the ACPI/MADT table.

For the boot CPU, or in case parallel bringup is disabled, the control bits
are empty and the CPU number is directly available in bits 0-23 of
smpboot_control.

[ tglx: Initial proof of concept patch with bitlock and APIC ID lookup ]
[ dwmw2: Rework and testing, commit message, CPUID 0x1 and CPU0 support ]
[ seanc: Fix stray override of initial_gs in common_cpu_up() ]
[ Oleksandr Natalenko: reported suspend/resume issue fixed in
  x86_acpi_suspend_lowlevel ]
[ tglx: Make it read the APIC ID from the APIC instead of using CPUID,
  	split the bitlock part out ]

Co-developed-by: Thomas Gleixner <tglx@linutronix.de>
Co-developed-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/apic.h    |    2 +
 arch/x86/include/asm/apicdef.h |    5 ++-
 arch/x86/include/asm/smp.h     |    6 +++
 arch/x86/kernel/acpi/sleep.c   |    9 +++++
 arch/x86/kernel/apic/apic.c    |    2 -
 arch/x86/kernel/head_64.S      |   62 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/smpboot.c      |    2 -
 7 files changed, 84 insertions(+), 4 deletions(-)
---

--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -55,6 +55,8 @@ extern int local_apic_timer_c2_ok;
 extern int disable_apic;
 extern unsigned int lapic_timer_period;
 
+extern int cpuid_to_apicid[];
+
 extern enum apic_intr_mode_id apic_intr_mode;
 enum apic_intr_mode_id {
 	APIC_PIC,
--- a/arch/x86/include/asm/apicdef.h
+++ b/arch/x86/include/asm/apicdef.h
@@ -138,7 +138,8 @@
 #define		APIC_EILVT_MASKED	(1 << 16)
 
 #define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
-#define APIC_BASE_MSR	0x800
+#define APIC_BASE_MSR		0x800
+#define APIC_X2APIC_ID_MSR	0x802
 #define XAPIC_ENABLE	(1UL << 11)
 #define X2APIC_ENABLE	(1UL << 10)
 
@@ -162,6 +163,7 @@
 #define APIC_CPUID(apicid)	((apicid) & XAPIC_DEST_CPUS_MASK)
 #define NUM_APIC_CLUSTERS	((BAD_APICID + 1) >> XAPIC_DEST_CPUS_SHIFT)
 
+#ifndef __ASSEMBLY__
 /*
  * the local APIC register structure, memory mapped. Not terribly well
  * tested, but we might eventually use this one in the future - the
@@ -435,4 +437,5 @@ enum apic_delivery_modes {
 	APIC_DELIVERY_MODE_EXTINT	= 7,
 };
 
+#endif /* !__ASSEMBLY__ */
 #endif /* _ASM_X86_APICDEF_H */
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -200,4 +200,10 @@ extern unsigned long apic_mmio_base;
 
 #endif /* !__ASSEMBLY__ */
 
+/* Control bits for startup_64 */
+#define STARTUP_READ_APICID	0x80000000
+
+/* Top 8 bits are reserved for control */
+#define STARTUP_PARALLEL_MASK	0xFF000000
+
 #endif /* _ASM_X86_SMP_H */
--- a/arch/x86/kernel/acpi/sleep.c
+++ b/arch/x86/kernel/acpi/sleep.c
@@ -16,6 +16,7 @@
 #include <asm/cacheflush.h>
 #include <asm/realmode.h>
 #include <asm/hypervisor.h>
+#include <asm/smp.h>
 
 #include <linux/ftrace.h>
 #include "../../realmode/rm/wakeup.h"
@@ -127,7 +128,13 @@ int x86_acpi_suspend_lowlevel(void)
 	 * value is in the actual %rsp register.
 	 */
 	current->thread.sp = (unsigned long)temp_stack + sizeof(temp_stack);
-	smpboot_control = smp_processor_id();
+	/*
+	 * Ensure the CPU knows which one it is when it comes back, if
+	 * it isn't in parallel mode and expected to work that out for
+	 * itself.
+	 */
+	if (!(smpboot_control & STARTUP_PARALLEL_MASK))
+		smpboot_control = smp_processor_id();
 #endif
 	initial_code = (unsigned long)wakeup_long64;
 	saved_magic = 0x123456789abcdef0L;
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2380,7 +2380,7 @@ static int nr_logical_cpuids = 1;
 /*
  * Used to store mapping between logical CPU IDs and APIC IDs.
  */
-static int cpuid_to_apicid[] = {
+int cpuid_to_apicid[] = {
 	[0 ... NR_CPUS - 1] = -1,
 };
 
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -24,7 +24,9 @@
 #include "../entry/calling.h"
 #include <asm/export.h>
 #include <asm/nospec-branch.h>
+#include <asm/apicdef.h>
 #include <asm/fixmap.h>
+#include <asm/smp.h>
 
 /*
  * We are not able to switch in one step to the final KERNEL ADDRESS SPACE
@@ -234,8 +236,68 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	ANNOTATE_NOENDBR // above
 
 #ifdef CONFIG_SMP
+	/*
+	 * For parallel boot, the APIC ID is read from the APIC, and then
+	 * used to look up the CPU number.  For booting a single CPU, the
+	 * CPU number is encoded in smpboot_control.
+	 *
+	 * Bit 31	STARTUP_READ_APICID (Read APICID from APIC)
+	 * Bit 0-23	CPU# if STARTUP_xx flags are not set
+	 */
 	movl	smpboot_control(%rip), %ecx
+	testl	$STARTUP_READ_APICID, %ecx
+	jnz	.Lread_apicid
+	/*
+	 * No control bit set, single CPU bringup. CPU number is provided
+	 * in bit 0-23. This is also the boot CPU case (CPU number 0).
+	 */
+	andl	$(~STARTUP_PARALLEL_MASK), %ecx
+	jmp	.Lsetup_cpu
+
+.Lread_apicid:
+	/* Check whether X2APIC mode is already enabled */
+	mov	$MSR_IA32_APICBASE, %ecx
+	rdmsr
+	testl	$X2APIC_ENABLE, %eax
+	jnz	.Lread_apicid_msr
+
+	/* Read the APIC ID from the fix-mapped MMIO space. */
+	movq	apic_mmio_base(%rip), %rcx
+	addq	$APIC_ID, %rcx
+	movl	(%rcx), %eax
+	shr	$24, %eax
+	jmp	.Llookup_AP
+
+.Lread_apicid_msr:
+	mov	$APIC_X2APIC_ID_MSR, %ecx
+	rdmsr
+
+.Llookup_AP:
+	/* EAX contains the APIC ID of the current CPU */
+	xorq	%rcx, %rcx
+	leaq	cpuid_to_apicid(%rip), %rbx
+
+.Lfind_cpunr:
+	cmpl	(%rbx,%rcx,4), %eax
+	jz	.Lsetup_cpu
+	inc	%ecx
+#ifdef CONFIG_FORCE_NR_CPUS
+	cmpl	$NR_CPUS, %ecx
+#else
+	cmpl	nr_cpu_ids(%rip), %ecx
+#endif
+	jb	.Lfind_cpunr
+
+	/*  APIC ID not found in the table. Drop the trampoline lock and bail. */
+	movq	trampoline_lock(%rip), %rax
+	lock
+	btrl	$0, (%rax)
+
+1:	cli
+	hlt
+	jmp	1b
 
+.Lsetup_cpu:
 	/* Get the per cpu offset for the given CPU# which is in ECX */
 	movq	__per_cpu_offset(,%rcx,8), %rdx
 #else
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1002,7 +1002,7 @@ static int do_boot_cpu(int apicid, int c
 	if (IS_ENABLED(CONFIG_X86_32)) {
 		early_gdt_descr.address = (unsigned long)get_cpu_gdt_rw(cpu);
 		initial_stack  = idle->thread.sp;
-	} else {
+	} else if (!(smpboot_control & STARTUP_PARALLEL_MASK)) {
 		smpboot_control = cpu;
 	}
 




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531789.827707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pe-0006G2-Ki; Mon, 08 May 2023 19:47:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531789.827707; Mon, 08 May 2023 19:47:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pe-0006En-CI; Mon, 08 May 2023 19:47:38 +0000
Received: by outflank-mailman (input) for mailman id 531789;
 Mon, 08 May 2023 19:47:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mW-0004Y5-IW
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:24 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb3b1bbb-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:44:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb3b1bbb-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185219.123719053@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575062;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=6WUCUG1CCgVCdNI4CLJSTwJWrFInFv2K4B15hgoY0Kw=;
	b=2Bs14rnpSX2uI3czdlUR6dxpOrB8cEiRoXze+GgpUpTrI18Aq7yjGjS1XrIAZlUcMz3uMd
	81Ee1d8CAuUMu/Cea+MINgrvBWHwWj7aDMuNr9CKHOJX1LeWVUg0eeYcWQU9yCm43mA5jI
	nFRUbISTZF5fup1DHvzKGJ2Ro5zcwHRYKT108/qXaIregxzdzIyi6qtb0OSPviLcC3U5Mv
	hOogc1YJR/1XoEgdejSpRreCY8gNmg522997lkxJczmDMtlDTe0lWlFMtvRe2lPbkikk5B
	AJmhANjWq9Jk3iE2vBuDwP6T9O4FvPd4wQJubneDx2wjQYPHoPvHEFI2vraUuA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575062;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=6WUCUG1CCgVCdNI4CLJSTwJWrFInFv2K4B15hgoY0Kw=;
	b=hCFX0auSqHWTem7lXZILDzRTaJhCgFeurJJC+rgTLqc9kN9+C6WS+qXnmDMKwZNzqpefKl
	nTGLqooH8Ea+o8DQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch v3 34/36] x86/smpboot: Implement a bit spinlock to protect the
 realmode stack
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:22 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Parallel AP bringup requires that the APs can run fully parallel through
the early startup code including the real mode trampoline.

To prepare for this, implement a bit-spinlock to serialize access to the
real-mode stack, so that APs coming up in parallel do not corrupt each
other's stacks while going through the real-mode startup code.

Co-developed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/realmode.h      |    3 +++
 arch/x86/kernel/head_64.S            |   13 +++++++++++++
 arch/x86/realmode/init.c             |    3 +++
 arch/x86/realmode/rm/trampoline_64.S |   27 ++++++++++++++++++++++-----
 4 files changed, 41 insertions(+), 5 deletions(-)
---

--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -52,6 +52,7 @@ struct trampoline_header {
 	u64 efer;
 	u32 cr4;
 	u32 flags;
+	u32 lock;
 #endif
 };
 
@@ -64,6 +65,8 @@ extern unsigned long initial_stack;
 extern unsigned long initial_vc_handler;
 #endif
 
+extern u32 *trampoline_lock;
+
 extern unsigned char real_mode_blob[];
 extern unsigned char real_mode_relocs[];
 
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -252,6 +252,17 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	movq	TASK_threadsp(%rax), %rsp
 
 	/*
+	 * Now that this CPU is running on its own stack, drop the realmode
+	 * protection. For the boot CPU the pointer is NULL!
+	 */
+	movq	trampoline_lock(%rip), %rax
+	testq	%rax, %rax
+	jz	.Lsetup_gdt
+	lock
+	btrl	$0, (%rax)
+
+.Lsetup_gdt:
+	/*
 	 * We must switch to a new descriptor in kernel space for the GDT
 	 * because soon the kernel won't have access anymore to the userspace
 	 * addresses where we're currently running on. We have to do that here
@@ -433,6 +444,8 @@ SYM_DATA(initial_code,	.quad x86_64_star
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 SYM_DATA(initial_vc_handler,	.quad handle_vc_boot_ghcb)
 #endif
+
+SYM_DATA(trampoline_lock, .quad 0);
 	__FINITDATA
 
 	__INIT
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -154,6 +154,9 @@ static void __init setup_real_mode(void)
 
 	trampoline_header->flags = 0;
 
+	trampoline_lock = &trampoline_header->lock;
+	*trampoline_lock = 0;
+
 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
 
 	/* Map the real mode stub as virtual == physical */
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -37,6 +37,24 @@
 	.text
 	.code16
 
+.macro LOAD_REALMODE_ESP
+	/*
+	 * Make sure only one CPU fiddles with the realmode stack
+	 */
+.Llock_rm\@:
+	btl	$0, tr_lock
+	jnc	2f
+	pause
+	jmp	.Llock_rm\@
+2:
+	lock
+	btsl	$0, tr_lock
+	jc	.Llock_rm\@
+
+	# Setup stack
+	movl	$rm_stack_end, %esp
+.endm
+
 	.balign	PAGE_SIZE
 SYM_CODE_START(trampoline_start)
 	cli			# We should be safe anyway
@@ -49,8 +67,7 @@ SYM_CODE_START(trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 
 	call	verify_cpu		# Verify the cpu supports long mode
 	testl   %eax, %eax		# Check for return code
@@ -93,8 +110,7 @@ SYM_CODE_START(sev_es_trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 
 	jmp	.Lswitch_to_protected
 SYM_CODE_END(sev_es_trampoline_start)
@@ -177,7 +193,7 @@ SYM_CODE_START(pa_trampoline_compat)
 	 * In compatibility mode.  Prep ESP and DX for startup_32, then disable
 	 * paging and complete the switch to legacy 32-bit mode.
 	 */
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 	movw	$__KERNEL_DS, %dx
 
 	movl	$(CR0_STATE & ~X86_CR0_PG), %eax
@@ -241,6 +257,7 @@ SYM_DATA_START(trampoline_header)
 	SYM_DATA(tr_efer,		.space 8)
 	SYM_DATA(tr_cr4,		.space 4)
 	SYM_DATA(tr_flags,		.space 4)
+	SYM_DATA(tr_lock,		.space 4)
 SYM_DATA_END(trampoline_header)
 
 #include "trampoline_common.S"




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531791.827721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pg-0006i2-1k; Mon, 08 May 2023 19:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531791.827721; Mon, 08 May 2023 19:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pf-0006hS-P6; Mon, 08 May 2023 19:47:39 +0000
Received: by outflank-mailman (input) for mailman id 531791;
 Mon, 08 May 2023 19:47:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mT-0004Y5-83
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:21 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b9373003-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:44:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9373003-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185219.016841363@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575059;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=209TLBvRBeEafFURgM0Y1ht7eQ7Y0xFjxVmui7BNI1o=;
	b=n00hUXstsdyfUPFcrDAYr9qfqXyIIcwLYM+NMXKmbpRXuAwK6NgEAWLRBbou/zHFnXIEod
	8New7so5w76Tb6aP+ykkJPN+I2CJwBwHZMzePhRQEpwGX52DB5Z1gdG558koxONpDK5vHV
	Nw3vZn8ZYXWXENECHg8/+P4uLipUY5x32lSL0Ye9CYi84jhqQ0/1HAPCNaEXglJl15NFbz
	zfcfgBtb+CGqOrVf91xrpFIeSR0Cfa5oQ5LpNyu3yK2Hb8OUT10cao1Vz6GemRAHhna574
	D0JjHpAmGP3ofXOD9DehizPjgrf0doKL7iorp17NI6K6epm+KVwswpLQO+EHLA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575059;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=209TLBvRBeEafFURgM0Y1ht7eQ7Y0xFjxVmui7BNI1o=;
	b=HZPvGLiAxw+6fgkpd1LobrgVgwanW2gWFTwlrLJ+Qy03Ucp5K4TQSj/S8iTBeqfZ2ooZAu
	VgXE2xAvT1s4U6AA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch v3 32/36] cpu/hotplug: Allow "parallel" bringup up to
 CPUHP_BP_KICK_AP_STATE
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:18 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

There is often significant latency in the early stages of CPU bringup, and
time is wasted by waking each CPU (e.g. with INIT/SIPI/SIPI on x86) and
then waiting for it to respond before moving on to the next.

Allow a platform to enable parallel setup, which brings all to-be-onlined
CPUs up to the CPUHP_BP_KICK_AP state. While this state advancement on the
control CPU (BP) is single-threaded, the important part is the last state,
CPUHP_BP_KICK_AP, which wakes the to-be-onlined CPUs.

This allows the CPUs to run up to the first synchronization point,
cpuhp_ap_sync_alive(), where they wait for the control CPU to release them
one by one for the full onlining procedure.

This parallelism depends on the CPU hotplug core sync mechanism, which
ensures that CPUs brought up in parallel wait for release before touching
any state that would make them visible to anything outside the hotplug
control mechanism.

To handle the SMT constraints of x86 correctly, the bringup happens in two
iterations when CONFIG_HOTPLUG_SMT is enabled. The control CPU brings up
the primary SMT threads of each core first, which can load the microcode
without the need to rendezvous with the thread siblings. Once that's
completed it brings up the secondary SMT threads.

Co-developed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 Documentation/admin-guide/kernel-parameters.txt |    6 +
 arch/Kconfig                                    |    4 
 include/linux/cpuhotplug.h                      |    1 
 kernel/cpu.c                                    |  103 ++++++++++++++++++++++--
 4 files changed, 109 insertions(+), 5 deletions(-)
---

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -838,6 +838,12 @@
 			on every CPU online, such as boot, and resume from suspend.
 			Default: 10000
 
+	cpuhp.parallel=
+			[SMP] Enable/disable parallel bringup of secondary CPUs
+			Format: <bool>
+			Default is enabled if CONFIG_HOTPLUG_PARALLEL=y. Otherwise
+			the parameter has no effect.
+
 	crash_kexec_post_notifiers
 			Run kdump after running panic-notifiers and dumping
 			kmsg. This only for the users who doubt kdump always
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -53,6 +53,10 @@ config HOTPLUG_SPLIT_STARTUP
 	bool
 	select HOTPLUG_CORE_SYNC_FULL
 
+config HOTPLUG_PARALLEL
+	bool
+	select HOTPLUG_SPLIT_STARTUP
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -524,6 +524,7 @@ void cpuhp_ap_sync_alive(void);
 void arch_cpuhp_sync_state_poll(void);
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
 int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle);
+bool arch_cpuhp_init_parallel_bringup(void);
 
 #ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
 void cpuhp_ap_report_dead(void);
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -649,8 +649,23 @@ bool cpu_smt_possible(void)
 		cpu_smt_control != CPU_SMT_NOT_SUPPORTED;
 }
 EXPORT_SYMBOL_GPL(cpu_smt_possible);
+
+static inline bool cpuhp_smt_aware(void)
+{
+	return topology_smt_supported();
+}
+
+static inline const struct cpumask *cpuhp_get_primary_thread_mask(void)
+{
+	return cpu_primary_thread_mask;
+}
 #else
 static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
+static inline bool cpuhp_smt_aware(void) { return false; }
+static inline const struct cpumask *cpuhp_get_primary_thread_mask(void)
+{
+	return cpu_present_mask;
+}
 #endif
 
 static inline enum cpuhp_state
@@ -1743,16 +1758,94 @@ int bringup_hibernate_cpu(unsigned int s
 	return 0;
 }
 
-void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
+static void __init cpuhp_bringup_mask(const struct cpumask *mask, unsigned int ncpus,
+				      enum cpuhp_state target)
 {
 	unsigned int cpu;
 
-	for_each_present_cpu(cpu) {
-		if (num_online_cpus() >= setup_max_cpus)
+	for_each_cpu(cpu, mask) {
+		struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+
+		if (!--ncpus)
 			break;
-		if (!cpu_online(cpu))
-			cpu_up(cpu, CPUHP_ONLINE);
+
+		if (cpu_up(cpu, target) && can_rollback_cpu(st)) {
+			/*
+			 * If this failed then cpu_up() might have only
+			 * rolled back to CPUHP_BP_KICK_AP for the final
+			 * online. Clean it up. NOOP if already rolled back.
+			 */
+			WARN_ON(cpuhp_invoke_callback_range(false, cpu, st, CPUHP_OFFLINE));
+		}
+	}
+}
+
+#ifdef CONFIG_HOTPLUG_PARALLEL
+static bool __cpuhp_parallel_bringup __ro_after_init = true;
+
+static int __init parallel_bringup_parse_param(char *arg)
+{
+	return kstrtobool(arg, &__cpuhp_parallel_bringup);
+}
+early_param("cpuhp.parallel", parallel_bringup_parse_param);
+
+/*
+ * On architectures which have enabled parallel bringup this invokes all BP
+ * prepare states for each of the to be onlined APs first. The last state
+ * sends the startup IPI to the APs. The APs proceed through the low level
+ * bringup code in parallel and then wait for the control CPU to release
+ * them one by one for the final onlining procedure.
+ *
+ * This avoids waiting for each AP to respond to the startup IPI in
+ * CPUHP_BRINGUP_CPU.
+ */
+static bool __init cpuhp_bringup_cpus_parallel(unsigned int ncpus)
+{
+	const struct cpumask *mask = cpu_present_mask;
+
+	if (__cpuhp_parallel_bringup)
+		__cpuhp_parallel_bringup = arch_cpuhp_init_parallel_bringup();
+	if (!__cpuhp_parallel_bringup)
+		return false;
+
+	if (cpuhp_smt_aware()) {
+		const struct cpumask *pmask = cpuhp_get_primary_thread_mask();
+		static struct cpumask tmp_mask __initdata;
+
+		/*
+		 * X86 requires to prevent that SMT siblings stopped while
+		 * the primary thread does a microcode update for various
+		 * reasons. Bring the primary threads up first.
+		 */
+		cpumask_and(&tmp_mask, mask, pmask);
+		cpuhp_bringup_mask(&tmp_mask, ncpus, CPUHP_BP_KICK_AP);
+		cpuhp_bringup_mask(&tmp_mask, ncpus, CPUHP_ONLINE);
+		/* Account for the online CPUs */
+		ncpus -= num_online_cpus();
+		if (!ncpus)
+			return true;
+		/* Create the mask for secondary CPUs */
+		cpumask_andnot(&tmp_mask, mask, pmask);
+		mask = &tmp_mask;
 	}
+
+	/* Bring the not-yet started CPUs up */
+	cpuhp_bringup_mask(mask, ncpus, CPUHP_BP_KICK_AP);
+	cpuhp_bringup_mask(mask, ncpus, CPUHP_ONLINE);
+	return true;
+}
+#else
+static inline bool cpuhp_bringup_cpus_parallel(unsigned int ncpus) { return false; }
+#endif /* CONFIG_HOTPLUG_PARALLEL */
+
+void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
+{
+	/* Try parallel bringup optimization if enabled */
+	if (cpuhp_bringup_cpus_parallel(setup_max_cpus))
+		return;
+
+	/* Full per CPU serialized bringup */
+	cpuhp_bringup_mask(cpu_present_mask, setup_max_cpus, CPUHP_ONLINE);
 }
 
 #ifdef CONFIG_PM_SLEEP_SMP




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531792.827728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pg-0006qx-SZ; Mon, 08 May 2023 19:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531792.827728; Mon, 08 May 2023 19:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pg-0006pV-JT; Mon, 08 May 2023 19:47:40 +0000
Received: by outflank-mailman (input) for mailman id 531792;
 Mon, 08 May 2023 19:47:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mA-0004GB-3l
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:02 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ae6cacde-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:44:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae6cacde-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185218.413540718@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575041;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=38nzuLvukobfje6Ccppy0BhMMd+JJk7+K0e3k3P15Wg=;
	b=x5uunjrfWzFDxVEDVb12W2XtAz1xdLN26NVHk7bMNEu2wIJNaum7qGKPG0dROx5n0HhLC+
	FzBIUNB9IVFoDocC8+UO+48hmJZTHkBagkRSbcBi10tPYzBz2N+KBu3+t/DwlLjK8qgbHm
	Zyl2slxhswI4y0jqXj5mdHQ7jfB6T53pzIasGJrkcbdNjcE/FqTbh3cIh4s2CI2dpENglT
	jZGblTZh1KTfeLW19FXrR8et4/tPPGpgkP+CMulFHzL902fUKU7E5w1yXi6/rZi2EGU8tO
	CSr4Jls1MoB22W6EUMLB/NCwWjqQzCr6exaZi0Y+dcSIy9UXOvzD4rMbjXRcvg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575041;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=38nzuLvukobfje6Ccppy0BhMMd+JJk7+K0e3k3P15Wg=;
	b=OFaURbehZY+bjBk6HdPJsuKX1EooATeyF0S/YDbEsa+/7byZljllwL7I/PloYv0yfuXe68
	qGHlWRIfIq35xrAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 21/36] [patch V2 21/38] ARM: smp: Switch to hotplug core
 state synchronization
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:00 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/arm/Kconfig           |    1 +
 arch/arm/include/asm/smp.h |    2 +-
 arch/arm/kernel/smp.c      |   18 +++++++-----------
 3 files changed, 9 insertions(+), 12 deletions(-)
---

--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -124,6 +124,7 @@ config ARM
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UID16
 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_REL
 	select NEED_DMA_MAP_STATE
--- a/arch/arm/include/asm/smp.h
+++ b/arch/arm/include/asm/smp.h
@@ -64,7 +64,7 @@ extern void secondary_startup_arm(void);
 
 extern int __cpu_disable(void);
 
-extern void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 
 extern void arch_send_call_function_single_ipi(int cpu);
 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -288,15 +288,11 @@ int __cpu_disable(void)
 }
 
 /*
- * called on the thread which is asking for a CPU to be shutdown -
- * waits until shutdown has completed, or it is timed out.
+ * called on the thread which is asking for a CPU to be shut down, after
+ * the shutdown has completed.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
 	pr_debug("CPU%u: shutdown\n", cpu);
 
 	clear_tasks_mm_cpumask(cpu);
@@ -336,11 +332,11 @@ void __noreturn arch_cpu_idle_dead(void)
 	flush_cache_louis();
 
 	/*
-	 * Tell __cpu_die() that this CPU is now safe to dispose of.  Once
-	 * this returns, power and/or clocks can be removed at any point
-	 * from this CPU and its cache by platform_cpu_kill().
+	 * Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose
+	 * of. Once this returns, power and/or clocks can be removed at
+	 * any point from this CPU and its cache by platform_cpu_kill().
 	 */
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	/*
 	 * Ensure that the cache lines associated with that completion are




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531795.827733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6ph-0006zW-Gb; Mon, 08 May 2023 19:47:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531795.827733; Mon, 08 May 2023 19:47:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6ph-0006xp-1q; Mon, 08 May 2023 19:47:41 +0000
Received: by outflank-mailman (input) for mailman id 531795;
 Mon, 08 May 2023 19:47:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mO-0004Y5-Eu
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:16 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b658fe77-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:44:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b658fe77-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185218.856286839@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575054;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=S1u0Rr9+GOXQtpnupcrVF2C6hZgp3Qx9VyPMOZrmGUo=;
	b=oISYDK58uyxEqUGHyMJl0yjUCJNOL5/7EGvOZggKHrsRbzI+Zfvw/3LqoASrcYSbpSofdk
	smv5bYqSUgI37EvCpKk1bIpdP6plTO/QVQFaxEN/bX+VNPLK6e9fxhxLFDKIXJiJ8+kSiF
	cQZuVcY4mwsCMKCnQL+NO+4vqzlN0ZiMTQ6qbV+dFndB1QkTtSbQxRgBNU1jh3+qlg559v
	oxQCzKKAnn8gCc+1y89JF3fvz5z1Xn99zAaSiyaLuoBN6+lOS/ZxYYzwXkZkWIhNnC+9+0
	MzcZbiKvl223hH1JPAYQnBF6gNdXhbpmdcQST4cZ+iIm+WtHU0bzyy3U3O2+oQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575054;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=S1u0Rr9+GOXQtpnupcrVF2C6hZgp3Qx9VyPMOZrmGUo=;
	b=AZidL7FELSVa1ikq306vI32X2wbks/dMZeZ9LLRMIVoxLB22DPBpW8f+vd00DqVCG0ZwZH
	HJ9bhkUAS1wb3vBg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 29/36] [patch V2 29/38] cpu/hotplug: Provide a split up
 CPUHP_BRINGUP mechanism
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:13 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The bringup logic of a to-be-onlined CPU consists of several parts, which
are currently handled as a single hotplug state:

  1) Control CPU issues the wake-up

  2) To be onlined CPU starts up, does the minimal initialization,
     reports to be alive and waits for release into the complete bring-up.

  3) Control CPU waits for the alive report and releases the upcoming CPU
     for the complete bring-up.

Allow splitting this into two states:

  1) Control CPU issues the wake-up

     After that the to-be-onlined CPU starts up, does the minimal
     initialization, reports to be alive and waits for release into the
     full bring-up. As this can run after the control CPU has dropped the
     hotplug locks, the code which is executed on the AP before it reports
     alive has to be carefully audited to ensure that it does not violate
     any of the hotplug constraints, especially that it does not modify
     any of the various cpumasks.

     This is really only meant to avoid waiting for the AP to react to the
     wake-up. Of course, an architecture can carefully move strictly
     CPU-related setup functionality, e.g. microcode loading, before the
     synchronization point to save further pointless waiting time.

  2) Control CPU waits for the alive report and releases the upcoming CPU
     for the complete bring-up.

This split allows running all to-be-onlined CPUs up to state #1 on the
control CPU and then, at a later point, running state #2. This spares
some of the latencies of the fully serialized per-CPU bringup by avoiding
the per-CPU wakeup/wait serialization. The assumption is that the first
AP is already waiting when the last AP has been woken up. This obviously
depends on the hardware latencies, and depending on the timings this
might still not completely eliminate all wait scenarios.

This split is just a preparatory step for enabling the parallel bringup
later. The boot-time bringup is still fully serialized. It has a separate
config switch so that architectures which want to support parallel bringup
can test the split of the CPUHP_BRINGUP step separately.

To enable this, the architecture must support the CPU hotplug core sync
mechanism and must be audited to ensure that there are no implicit hotplug
state dependencies which require a fully serialized bringup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/Kconfig               |    4 ++
 include/linux/cpuhotplug.h |    4 ++
 kernel/cpu.c               |   70 +++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 76 insertions(+), 2 deletions(-)
---

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -49,6 +49,10 @@ config HOTPLUG_CORE_SYNC_FULL
 	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select HOTPLUG_CORE_SYNC
 
+config HOTPLUG_SPLIT_STARTUP
+	bool
+	select HOTPLUG_CORE_SYNC_FULL
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -133,6 +133,7 @@ enum cpuhp_state {
 	CPUHP_MIPS_SOC_PREPARE,
 	CPUHP_BP_PREPARE_DYN,
 	CPUHP_BP_PREPARE_DYN_END		= CPUHP_BP_PREPARE_DYN + 20,
+	CPUHP_BP_KICK_AP,
 	CPUHP_BRINGUP_CPU,
 
 	/*
@@ -517,9 +518,12 @@ void cpuhp_online_idle(enum cpuhp_state
 static inline void cpuhp_online_idle(enum cpuhp_state state) { }
 #endif
 
+struct task_struct;
+
 void cpuhp_ap_sync_alive(void);
 void arch_cpuhp_sync_state_poll(void);
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
+int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle);
 
 #ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
 void cpuhp_ap_report_dead(void);
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -761,6 +761,47 @@ static int bringup_wait_for_ap_online(un
 	return 0;
 }
 
+#ifdef CONFIG_HOTPLUG_SPLIT_STARTUP
+static int cpuhp_kick_ap_alive(unsigned int cpu)
+{
+	if (!cpuhp_can_boot_ap(cpu))
+		return -EAGAIN;
+
+	return arch_cpuhp_kick_ap_alive(cpu, idle_thread_get(cpu));
+}
+
+static int cpuhp_bringup_ap(unsigned int cpu)
+{
+	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+	int ret;
+
+	/*
+	 * Some architectures have to walk the irq descriptors to
+	 * setup the vector space for the cpu which comes online.
+	 * Prevent irq alloc/free across the bringup.
+	 */
+	irq_lock_sparse();
+
+	ret = cpuhp_bp_sync_alive(cpu);
+	if (ret)
+		goto out_unlock;
+
+	ret = bringup_wait_for_ap_online(cpu);
+	if (ret)
+		goto out_unlock;
+
+	irq_unlock_sparse();
+
+	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+		return 0;
+
+	return cpuhp_kick_ap(cpu, st, st->target);
+
+out_unlock:
+	irq_unlock_sparse();
+	return ret;
+}
+#else
 static int bringup_cpu(unsigned int cpu)
 {
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
@@ -777,7 +818,6 @@ static int bringup_cpu(unsigned int cpu)
 	 */
 	irq_lock_sparse();
 
-	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
 	if (ret)
 		goto out_unlock;
@@ -801,6 +841,7 @@ static int bringup_cpu(unsigned int cpu)
 	irq_unlock_sparse();
 	return ret;
 }
+#endif
 
 static int finish_cpu(unsigned int cpu)
 {
@@ -1940,13 +1981,38 @@ static struct cpuhp_step cpuhp_hp_states
 		.startup.single		= timers_prepare_cpu,
 		.teardown.single	= timers_dead_cpu,
 	},
-	/* Kicks the plugged cpu into life */
+
+#ifdef CONFIG_HOTPLUG_SPLIT_STARTUP
+	/*
+	 * Kicks the AP alive. AP will wait in cpuhp_ap_sync_alive() until
+	 * the next step will release it.
+	 */
+	[CPUHP_BP_KICK_AP] = {
+		.name			= "cpu:kick_ap",
+		.startup.single		= cpuhp_kick_ap_alive,
+	},
+
+	/*
+	 * Waits for the AP to reach cpuhp_ap_sync_alive() and then
+	 * releases it for the complete bringup.
+	 */
+	[CPUHP_BRINGUP_CPU] = {
+		.name			= "cpu:bringup",
+		.startup.single		= cpuhp_bringup_ap,
+		.teardown.single	= finish_cpu,
+		.cant_stop		= true,
+	},
+#else
+	/*
+	 * All-in-one CPU bringup state which includes the kick alive.
+	 */
 	[CPUHP_BRINGUP_CPU] = {
 		.name			= "cpu:bringup",
 		.startup.single		= bringup_cpu,
 		.teardown.single	= finish_cpu,
 		.cant_stop		= true,
 	},
+#endif
 	/* Final state before CPU kills itself */
 	[CPUHP_AP_IDLE_DEAD] = {
 		.name			= "idle:dead",




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531796.827739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pi-00077S-48; Mon, 08 May 2023 19:47:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531796.827739; Mon, 08 May 2023 19:47:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6ph-00074D-N0; Mon, 08 May 2023 19:47:41 +0000
Received: by outflank-mailman (input) for mailman id 531796;
 Mon, 08 May 2023 19:47:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6m5-0004GB-K5
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:57 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ab9a95f9-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab9a95f9-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185218.240871842@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575036;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/nqzDu5xfHXjN7n3gGaKe3Qp9xgjuZPS5IpPjnAT+f4=;
	b=Y9osvnDWdCFR42wS3Sq9455tPTExWH6chOdVK5t+mElOJ7bmi/rBIUzKmrZFhLMAryyWI9
	42k5ASwHdZKGQdxaJo7QXjrfzOtyqL45DfKI3fQNAjyGFICTT/FAvhaW+FpMAE/idsBhLd
	szNwKYPH7EhFVR3AkF2Ir5xe1+1ylMBRrM5mJp72cb88zxQyAVq9cHCCmxD2WbikZ+TEzW
	o3Vo+irhC2HNz6lB8iyeyWXwtTlgXUlJKwaa+XkY0BRUkQCMhfQNf/vofmV4FtVDwALeB6
	HMpdhCe+3uQSS6EgGde3TvQ49KubdZK17ErrXoAMJsr0uMdRNS+eaVdMHMxoew==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575036;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/nqzDu5xfHXjN7n3gGaKe3Qp9xgjuZPS5IpPjnAT+f4=;
	b=5t4mMAnQWernMb+O3GXjrwM8U6FkU2jP3+/SU36NRq2BqU7xPkCTiHjSiyuzYfTY/O48DX
	SQFLh5owzs5njNDw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 18/36] [patch V2 18/38] cpu/hotplug: Add CPU state tracking
 and synchronization
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:55 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The CPU state tracking and synchronization mechanism in smpboot.c is
completely independent of the hotplug code and all logic around it is
implemented in architecture specific code.

Except for the state reporting of the AP, there is absolutely nothing
architecture-specific, and the synchronization and decision functions can
be moved into the generic hotplug core code.

Provide an integrated variant and add the core synchronization and decision
points. This comes in two flavours:

  1) DEAD state synchronization

     Updated by the architecture code once the AP reaches the point where
     it is ready to be torn down by the control CPU, e.g. by removing
     power or clocks, or by tearing it down via the hypervisor.

     The control CPU waits for this state to be reached with a timeout. If
     the state is reached an architecture specific cleanup function is
     invoked.

  2) Full state synchronization

     This extends #1 with AP alive synchronization. This is new
     functionality which allows architecture-specific wait mechanisms,
     e.g. cpumasks, to be replaced completely.

     It also prevents an AP which is in a limbo state from being brought
     up again. This can happen when an AP failed to report the dead state
     during a previous offline operation.

The dead synchronization is what most architectures use. Only x86 makes a
bringup decision based on that state at the moment.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/Kconfig               |   15 +++
 include/linux/cpuhotplug.h |   12 ++
 kernel/cpu.c               |  193 ++++++++++++++++++++++++++++++++++++++++++++-
 kernel/smpboot.c           |    2 
 4 files changed, 221 insertions(+), 1 deletion(-)
---

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -34,6 +34,21 @@ config ARCH_HAS_SUBPAGE_FAULTS
 config HOTPLUG_SMT
 	bool
 
+# Selected by HOTPLUG_CORE_SYNC_DEAD or HOTPLUG_CORE_SYNC_FULL
+config HOTPLUG_CORE_SYNC
+	bool
+
+# Basic CPU dead synchronization selected by architecture
+config HOTPLUG_CORE_SYNC_DEAD
+	bool
+	select HOTPLUG_CORE_SYNC
+
+# Full CPU synchronization with alive state selected by architecture
+config HOTPLUG_CORE_SYNC_FULL
+	bool
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
+	select HOTPLUG_CORE_SYNC
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -517,4 +517,16 @@ void cpuhp_online_idle(enum cpuhp_state
 static inline void cpuhp_online_idle(enum cpuhp_state state) { }
 #endif
 
+void cpuhp_ap_sync_alive(void);
+void arch_cpuhp_sync_state_poll(void);
+void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
+void cpuhp_ap_report_dead(void);
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu);
+#else
+static inline void cpuhp_ap_report_dead(void) { }
+static inline void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu) { }
+#endif
+
 #endif
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -17,6 +17,7 @@
 #include <linux/cpu.h>
 #include <linux/oom.h>
 #include <linux/rcupdate.h>
+#include <linux/delay.h>
 #include <linux/export.h>
 #include <linux/bug.h>
 #include <linux/kthread.h>
@@ -59,6 +60,7 @@
  * @last:	For multi-instance rollback, remember how far we got
  * @cb_state:	The state for a single callback (install/uninstall)
  * @result:	Result of the operation
+ * @ap_sync_state:	State for AP synchronization
  * @done_up:	Signal completion to the issuer of the task for cpu-up
  * @done_down:	Signal completion to the issuer of the task for cpu-down
  */
@@ -76,6 +78,7 @@ struct cpuhp_cpu_state {
 	struct hlist_node	*last;
 	enum cpuhp_state	cb_state;
 	int			result;
+	atomic_t		ap_sync_state;
 	struct completion	done_up;
 	struct completion	done_down;
 #endif
@@ -276,6 +279,182 @@ static bool cpuhp_is_atomic_state(enum c
 	return CPUHP_AP_IDLE_DEAD <= state && state < CPUHP_AP_ONLINE;
 }
 
+/* Synchronization state management */
+enum cpuhp_sync_state {
+	SYNC_STATE_DEAD,
+	SYNC_STATE_KICKED,
+	SYNC_STATE_SHOULD_DIE,
+	SYNC_STATE_ALIVE,
+	SYNC_STATE_SHOULD_ONLINE,
+	SYNC_STATE_ONLINE,
+};
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC
+/**
+ * cpuhp_ap_update_sync_state - Update synchronization state during bringup/teardown
+ * @state:	The synchronization state to set
+ *
+ * No synchronization point. Just update of the synchronization state.
+ */
+static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state)
+{
+	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
+	int sync = atomic_read(st);
+
+	while (!atomic_try_cmpxchg(st, &sync, state));
+}
+
+void __weak arch_cpuhp_sync_state_poll(void) { cpu_relax(); }
+
+static bool cpuhp_wait_for_sync_state(unsigned int cpu, enum cpuhp_sync_state state,
+				      enum cpuhp_sync_state next_state)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	ktime_t now, end, start = ktime_get();
+	int sync;
+
+	end = start + 10ULL * NSEC_PER_SEC;
+
+	sync = atomic_read(st);
+	while (1) {
+		if (sync == state) {
+			if (!atomic_try_cmpxchg(st, &sync, next_state))
+				continue;
+			return true;
+		}
+
+		now = ktime_get();
+		if (now > end) {
+			/* Timeout. Leave the state unchanged */
+			return false;
+		} else if (now - start < NSEC_PER_MSEC) {
+			/* Poll for one millisecond */
+			arch_cpuhp_sync_state_poll();
+		} else {
+			usleep_range_state(USEC_PER_MSEC, 2 * USEC_PER_MSEC, TASK_UNINTERRUPTIBLE);
+		}
+		sync = atomic_read(st);
+	}
+	return true;
+}
+#else  /* CONFIG_HOTPLUG_CORE_SYNC */
+static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state) { }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC */
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
+/**
+ * cpuhp_ap_report_dead - Update synchronization state to DEAD
+ *
+ * No synchronization point. Just update of the synchronization state.
+ */
+void cpuhp_ap_report_dead(void)
+{
+	cpuhp_ap_update_sync_state(SYNC_STATE_DEAD);
+}
+
+void __weak arch_cpuhp_cleanup_dead_cpu(unsigned int cpu) { }
+
+/*
+ * Late CPU shutdown synchronization point. Cannot use cpuhp_state::done_down
+ * because the AP cannot issue complete() at this stage.
+ */
+static void cpuhp_bp_sync_dead(unsigned int cpu)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	int sync = atomic_read(st);
+
+	do {
+		/* CPU can have reported dead already. Don't overwrite that! */
+		if (sync == SYNC_STATE_DEAD)
+			break;
+	} while (!atomic_try_cmpxchg(st, &sync, SYNC_STATE_SHOULD_DIE));
+
+	if (cpuhp_wait_for_sync_state(cpu, SYNC_STATE_DEAD, SYNC_STATE_DEAD)) {
+		/* CPU reached dead state. Invoke the cleanup function */
+		arch_cpuhp_cleanup_dead_cpu(cpu);
+		return;
+	}
+
+	/* No further action possible. Emit message and give up. */
+	pr_err("CPU%u failed to report dead state\n", cpu);
+}
+#else /* CONFIG_HOTPLUG_CORE_SYNC_DEAD */
+static inline void cpuhp_bp_sync_dead(unsigned int cpu) { }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC_DEAD */
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_FULL
+/**
+ * cpuhp_ap_sync_alive - Synchronize AP with the control CPU once it is alive
+ *
+ * Updates the AP synchronization state to SYNC_STATE_ALIVE and waits
+ * for the BP to release it.
+ */
+void cpuhp_ap_sync_alive(void)
+{
+	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
+
+	cpuhp_ap_update_sync_state(SYNC_STATE_ALIVE);
+
+	/* Wait for the control CPU to release it. */
+	while (atomic_read(st) != SYNC_STATE_SHOULD_ONLINE)
+		cpu_relax();
+}
+
+static bool cpuhp_can_boot_ap(unsigned int cpu)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	int sync = atomic_read(st);
+
+again:
+	switch (sync) {
+	case SYNC_STATE_DEAD:
+		/* CPU is properly dead */
+		break;
+	case SYNC_STATE_KICKED:
+		/* CPU did not come up in previous attempt */
+		break;
+	case SYNC_STATE_ALIVE:
+		/* CPU is stuck in cpuhp_ap_sync_alive(). */
+		break;
+	default:
+		/* CPU failed to report online or dead and is in limbo state. */
+		return false;
+	}
+
+	/* Prepare for booting */
+	if (!atomic_try_cmpxchg(st, &sync, SYNC_STATE_KICKED))
+		goto again;
+
+	return true;
+}
+
+void __weak arch_cpuhp_cleanup_kick_cpu(unsigned int cpu) { }
+
+/*
+ * Early CPU bringup synchronization point. Cannot use cpuhp_state::done_up
+ * because the AP cannot issue complete() so early in the bringup.
+ */
+static int cpuhp_bp_sync_alive(unsigned int cpu)
+{
+	int ret = 0;
+
+	if (!IS_ENABLED(CONFIG_HOTPLUG_CORE_SYNC_FULL))
+		return 0;
+
+	if (!cpuhp_wait_for_sync_state(cpu, SYNC_STATE_ALIVE, SYNC_STATE_SHOULD_ONLINE)) {
+		pr_err("CPU%u failed to report alive state\n", cpu);
+		ret = -EIO;
+	}
+
+	/* Let the architecture cleanup the kick alive mechanics. */
+	arch_cpuhp_cleanup_kick_cpu(cpu);
+	return ret;
+}
+#else /* CONFIG_HOTPLUG_CORE_SYNC_FULL */
+static inline int cpuhp_bp_sync_alive(unsigned int cpu) { return 0; }
+static inline bool cpuhp_can_boot_ap(unsigned int cpu) { return true; }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC_FULL */
+
 /* Serializes the updates to cpu_online_mask, cpu_present_mask */
 static DEFINE_MUTEX(cpu_add_remove_lock);
 bool cpuhp_tasks_frozen;
@@ -588,6 +767,9 @@ static int bringup_cpu(unsigned int cpu)
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
+	if (!cpuhp_can_boot_ap(cpu))
+		return -EAGAIN;
+
 	/*
 	 * Reset stale stack state from the last time this CPU was online.
 	 */
@@ -606,6 +788,10 @@ static int bringup_cpu(unsigned int cpu)
 	if (ret)
 		goto out_unlock;
 
+	ret = cpuhp_bp_sync_alive(cpu);
+	if (ret)
+		goto out_unlock;
+
 	ret = bringup_wait_for_ap_online(cpu);
 	if (ret)
 		goto out_unlock;
@@ -1109,6 +1295,8 @@ static int takedown_cpu(unsigned int cpu
 	/* This actually kills the CPU. */
 	__cpu_die(cpu);
 
+	cpuhp_bp_sync_dead(cpu);
+
 	tick_cleanup_dead_cpu(cpu);
 	rcutree_migrate_callbacks(cpu);
 	return 0;
@@ -1355,8 +1543,10 @@ void cpuhp_online_idle(enum cpuhp_state
 	if (state != CPUHP_AP_ONLINE_IDLE)
 		return;
 
+	cpuhp_ap_update_sync_state(SYNC_STATE_ONLINE);
+
 	/*
-	 * Unpart the stopper thread before we start the idle loop (and start
+	 * Unpark the stopper thread before we start the idle loop (and start
 	 * scheduling); this ensures the stopper task is always available.
 	 */
 	stop_machine_unpark(smp_processor_id());
@@ -2733,6 +2923,7 @@ void __init boot_cpu_hotplug_init(void)
 {
 #ifdef CONFIG_SMP
 	cpumask_set_cpu(smp_processor_id(), &cpus_booted_once_mask);
+	atomic_set(this_cpu_ptr(&cpuhp_state.ap_sync_state), SYNC_STATE_ONLINE);
 #endif
 	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
 	this_cpu_write(cpuhp_state.target, CPUHP_ONLINE);
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -326,6 +326,7 @@ void smpboot_unregister_percpu_thread(st
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
 
+#ifndef CONFIG_HOTPLUG_CORE_SYNC
 static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
 
 /*
@@ -488,3 +489,4 @@ bool cpu_report_death(void)
 }
 
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC */




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531805.827762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pm-0008AX-MX; Mon, 08 May 2023 19:47:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531805.827762; Mon, 08 May 2023 19:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pm-00089t-GO; Mon, 08 May 2023 19:47:46 +0000
Received: by outflank-mailman (input) for mailman id 531805;
 Mon, 08 May 2023 19:47:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mP-0004Y5-Rf
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:17 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7436e10-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:44:16 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7436e10-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185218.909138139@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575056;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=kwmqj85oJPkZidDSe6wGXZxA0ULq99wzodNu0T1h+kk=;
	b=wg1uBLGtSoxcaJSU0iEE4x0UcfBChTrLt0gyE4M5PDuGfK5fjT22t8ZTBBC2tynpzI6HHh
	6Y7wX0lp4t6osRCyrG3TT1YgnOHCxIFJnJkEGdJWSzb30rEUBKI5nxlG8O7tNjZVGxOjHb
	SANeefFOJGw6DUXAVXM56kO2xHkef7AW/0tPOsy0YwgLmgzJMjuxG1N7Phieoka0kZqbc+
	SwamRLfl76kZk/gDSsJbzFLV/CAY9FezjErFfEBAxnMp56mRQxgPmpsjeFT7Dxl3HIxRAN
	Y1+kfoeoNJM9pmRMjeHWlK/tpJMX+RNO6kr+gmNFYozmaU/eCEE9q4itJyCt1A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575056;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=kwmqj85oJPkZidDSe6wGXZxA0ULq99wzodNu0T1h+kk=;
	b=hBj0Uqf6QiSA9MqtLkZVoZtNNYgv3nBK9gQda39ezyZ0ekHS2uiMmy5gmQpbYtX4lR0BfY
	+Xmymckx4LlMyoDw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 30/36] x86/smpboot: Enable split CPU startup
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:15 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The x86 CPU bringup currently does AP wake-up, waits for the AP to
respond and then releases it for full bringup.

This can safely be split into a wake-up state and a separate
wait+release state.

Provide the required functions and enable the split CPU bringup, which
prepares for parallel bringup, where the bringup of the non-boot CPUs takes
two iterations: One to prepare and wake all APs and the second to wait and
release them. Depending on timing this can eliminate the wait time
completely.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/Kconfig           |    2 +-
 arch/x86/include/asm/smp.h |    9 ++-------
 arch/x86/kernel/smp.c      |    2 +-
 arch/x86/kernel/smpboot.c  |    8 ++++----
 arch/x86/xen/smp_pv.c      |    4 ++--
 5 files changed, 10 insertions(+), 15 deletions(-)
---

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,8 +274,8 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
-	select HOTPLUG_CORE_SYNC_FULL		if SMP
 	select HOTPLUG_SMT			if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -40,7 +40,7 @@ struct smp_ops {
 
 	void (*cleanup_dead_cpu)(unsigned cpu);
 	void (*poll_sync_state)(void);
-	int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
+	int (*kick_ap_alive)(unsigned cpu, struct task_struct *tidle);
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
 	void (*play_dead)(void);
@@ -80,11 +80,6 @@ static inline void smp_cpus_done(unsigne
 	smp_ops.smp_cpus_done(max_cpus);
 }
 
-static inline int __cpu_up(unsigned int cpu, struct task_struct *tidle)
-{
-	return smp_ops.cpu_up(cpu, tidle);
-}
-
 static inline int __cpu_disable(void)
 {
 	return smp_ops.cpu_disable();
@@ -124,7 +119,7 @@ void native_smp_prepare_cpus(unsigned in
 void calculate_max_logical_packages(void);
 void native_smp_cpus_done(unsigned int max_cpus);
 int common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
-int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
+int native_kick_ap(unsigned int cpu, struct task_struct *tidle);
 int native_cpu_disable(void);
 void __noreturn hlt_play_dead(void);
 void native_play_dead(void);
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -268,7 +268,7 @@ struct smp_ops smp_ops = {
 #endif
 	.smp_send_reschedule	= native_smp_send_reschedule,
 
-	.cpu_up			= native_cpu_up,
+	.kick_ap_alive		= native_kick_ap,
 	.cpu_disable		= native_cpu_disable,
 	.play_dead		= native_play_dead,
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1058,7 +1058,7 @@ static int do_boot_cpu(int apicid, int c
 	return ret;
 }
 
-static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
+int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
 	int err;
@@ -1094,15 +1094,15 @@ static int native_kick_ap(unsigned int c
 	return err;
 }
 
-int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle)
 {
-	return native_kick_ap(cpu, tidle);
+	return smp_ops.kick_ap_alive(cpu, tidle);
 }
 
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu)
 {
 	/* Cleanup possible dangling ends... */
-	if (smp_ops.cpu_up == native_cpu_up && x86_platform.legacy.warm_reset)
+	if (smp_ops.kick_ap_alive == native_kick_ap && x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();
 }
 
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -314,7 +314,7 @@ cpu_initialize_context(unsigned int cpu,
 	return 0;
 }
 
-static int xen_pv_cpu_up(unsigned int cpu, struct task_struct *idle)
+static int xen_pv_kick_ap(unsigned int cpu, struct task_struct *idle)
 {
 	int rc;
 
@@ -438,7 +438,7 @@ static const struct smp_ops xen_smp_ops
 	.smp_prepare_cpus = xen_pv_smp_prepare_cpus,
 	.smp_cpus_done = xen_smp_cpus_done,
 
-	.cpu_up = xen_pv_cpu_up,
+	.kick_ap_alive = xen_pv_kick_ap,
 	.cpu_die = xen_pv_cpu_die,
 	.cleanup_dead_cpu = xen_pv_cleanup_dead_cpu,
 	.poll_sync_state = xen_pv_poll_sync_state,




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531809.827769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pn-0008Md-RS; Mon, 08 May 2023 19:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531809.827769; Mon, 08 May 2023 19:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pn-0008Kx-I3; Mon, 08 May 2023 19:47:47 +0000
Received: by outflank-mailman (input) for mailman id 531809;
 Mon, 08 May 2023 19:47:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mQ-0004GB-Gu
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:18 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b83f9151-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:44:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b83f9151-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185218.962208640@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575057;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=7J6qDuLCrgdvR1kG/InEcQTSLCUqJ2Io10qHE6FCHaM=;
	b=0Hv0AspymKyYz8qW0hj6Fb0vUtVHsdGmkvWcUakIBtshYh+zVy/vlvZHR1/vJJLPlas4D8
	PRzzSySwBkg1+JPbOBNLLdT/hp7seWkZBvoftuEF+vw8QuQjWUJ4K4oLdS5xDTwEjJHE7A
	gNOv1k4J4BrzZeGVnOAKkMmcblUqN/FYihcmxHZv2nEClm+/K+rhv+En8fpkOo6ZMhekbP
	fp0J8Mycqhuh4vaLIP3ZaeDDy7dR7H2M1G6jiZF8zqKNY6sfFE8cTQVtJvDtVzZmYHGBDI
	A83cW4yHAFFLmlY1u2nMBkF0J3x6Dy2QTQscDGxCprr2i8ihRMilmaPq8wHcbw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575057;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=7J6qDuLCrgdvR1kG/InEcQTSLCUqJ2Io10qHE6FCHaM=;
	b=3bCOAPoiV1ViDB0Ns+pjgu6fuq8OhbzSh5tZOLe9PQ/YwBWzawJVTSPOhF+994y0l1r8Gv
	bZKIMX08LnDG7gCA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:17 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Make the primary thread tracking cpumask based in preparation for simpler
handling of parallel bootup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/apic.h     |    2 --
 arch/x86/include/asm/topology.h |   19 +++++++++++++++----
 arch/x86/kernel/apic/apic.c     |   20 +++++++++-----------
 arch/x86/kernel/smpboot.c       |   12 +++---------
 4 files changed, 27 insertions(+), 26 deletions(-)
---

--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -506,10 +506,8 @@ extern int default_check_phys_apicid_pre
 #endif /* CONFIG_X86_LOCAL_APIC */
 
 #ifdef CONFIG_SMP
-bool apic_id_is_primary_thread(unsigned int id);
 void apic_smt_update(void);
 #else
-static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
 static inline void apic_smt_update(void) { }
 #endif
 
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -31,9 +31,9 @@
  * CONFIG_NUMA.
  */
 #include <linux/numa.h>
+#include <linux/cpumask.h>
 
 #ifdef CONFIG_NUMA
-#include <linux/cpumask.h>
 
 #include <asm/mpspec.h>
 #include <asm/percpu.h>
@@ -139,9 +139,20 @@ static inline int topology_max_smt_threa
 int topology_update_package_map(unsigned int apicid, unsigned int cpu);
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
-bool topology_is_primary_thread(unsigned int cpu);
 bool topology_smt_supported(void);
-#else
+
+extern struct cpumask __cpu_primary_thread_mask;
+#define cpu_primary_thread_mask ((const struct cpumask *)&__cpu_primary_thread_mask)
+
+/**
+ * topology_is_primary_thread - Check whether CPU is the primary SMT thread
+ * @cpu:	CPU to check
+ */
+static inline bool topology_is_primary_thread(unsigned int cpu)
+{
+	return cpumask_test_cpu(cpu, cpu_primary_thread_mask);
+}
+#else /* CONFIG_SMP */
 #define topology_max_packages()			(1)
 static inline int
 topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; }
@@ -152,7 +163,7 @@ static inline int topology_max_die_per_p
 static inline int topology_max_smt_threads(void) { return 1; }
 static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
 static inline bool topology_smt_supported(void) { return false; }
-#endif
+#endif /* !CONFIG_SMP */
 
 static inline void arch_fix_phys_package_id(int num, u32 slot)
 {
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2386,20 +2386,16 @@ bool arch_match_cpu_phys_id(int cpu, u64
 }
 
 #ifdef CONFIG_SMP
-/**
- * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
- * @apicid: APIC ID to check
- */
-bool apic_id_is_primary_thread(unsigned int apicid)
+static void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid)
 {
-	u32 mask;
-
-	if (smp_num_siblings == 1)
-		return true;
 	/* Isolate the SMT bit(s) in the APICID and check for 0 */
-	mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
-	return !(apicid & mask);
+	u32 mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
+
+	if (smp_num_siblings == 1 || !(apicid & mask))
+		cpumask_set_cpu(cpu, &__cpu_primary_thread_mask);
 }
+#else
+static inline void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid) { }
 #endif
 
 /*
@@ -2544,6 +2540,8 @@ int generic_processor_info(int apicid, i
 	set_cpu_present(cpu, true);
 	num_processors++;
 
+	cpu_mark_primary_thread(cpu, apicid);
+
 	return cpu;
 }
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -102,6 +102,9 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
+/* CPUs which are the primary SMT threads */
+struct cpumask __cpu_primary_thread_mask __read_mostly;
+
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -283,15 +286,6 @@ static void notrace start_secondary(void
 }
 
 /**
- * topology_is_primary_thread - Check whether CPU is the primary SMT thread
- * @cpu:	CPU to check
- */
-bool topology_is_primary_thread(unsigned int cpu)
-{
-	return apic_id_is_primary_thread(per_cpu(x86_cpu_to_apicid, cpu));
-}
-
-/**
  * topology_smt_supported - Check whether SMT is supported by the CPUs
  */
 bool topology_smt_supported(void)




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531806.827768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pn-0008F8-L0; Mon, 08 May 2023 19:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531806.827768; Mon, 08 May 2023 19:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pm-0008CV-VJ; Mon, 08 May 2023 19:47:46 +0000
Received: by outflank-mailman (input) for mailman id 531806;
 Mon, 08 May 2023 19:47:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mL-0004GB-TI
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:13 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b55ad22d-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:44:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b55ad22d-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185218.802324532@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575052;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=HVjUF1EbgoHvP2JET34IHqSOI4Hcbuoi5HyMhczyA/0=;
	b=f87S8Ive1ie4K+DuiLg7usxJY5CRCksJCqSgCkxD5uZH7y4x+OJFuZWNm0Taip1XPPiDD6
	MNH0b+hp2o5RbiOVox1gno/kYayr67OXDsY3LQy/XL1EajtZ65KomNmMKrjSPGzPHIQ6mt
	Bc0Q0qOP9/5ItgD7CpD7wNvTDGfz7YuG1nQmd5y2Lv904N1i53sg9+MQ+HioP83fuoz27l
	K0h3votn/bkWzE6uLiONR6KdsjL/p4GqVASZ+gwe2bmO48I4HcfU9n8eyvXRqDwkYb7s44
	Njvuj3M2dK+aBiLKQsshzndlNWX014TMRWinVJi/9FwvNn2qCERvNBTYOkZNDg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575052;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=HVjUF1EbgoHvP2JET34IHqSOI4Hcbuoi5HyMhczyA/0=;
	b=0jP1z8zyc9UIx2xHWlL0pghnJTSR5ytXBwb/OqMmY//ybrAvid6zf5b8IChN8VPr3CVlpG
	1ard2l2bnsfXlBBQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch v3 28/36] cpu/hotplug: Reset task stack state in _cpu_up()
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:12 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

Commit dce1ca0525bf ("sched/scs: Reset task stack state in bringup_cpu()")
ensured that the shadow call stack and KASAN poisoning were removed from
a CPU's stack each time that CPU is brought up, not just once.

This is not incorrect. However, with parallel bringup the idle thread setup
will happen at a different step. As a consequence the cleanup in
bringup_cpu() would be too late.

Move the SCS/KASAN cleanup to the generic _cpu_up() function instead,
which already ensures that the new CPU's stack is available, purely to
allow for early failure. This happens while the CPU to be brought up is
still in the CPUHP_OFFLINE state, so the cleanup is correctly done any
time the CPU has been taken down far enough that it is needed.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 kernel/cpu.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
---

--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -771,12 +771,6 @@ static int bringup_cpu(unsigned int cpu)
 		return -EAGAIN;
 
 	/*
-	 * Reset stale stack state from the last time this CPU was online.
-	 */
-	scs_task_reset(idle);
-	kasan_unpoison_task_stack(idle);
-
-	/*
 	 * Some architectures have to walk the irq descriptors to
 	 * setup the vector space for the cpu which comes online.
 	 * Prevent irq alloc/free across the bringup.
@@ -1583,6 +1577,12 @@ static int _cpu_up(unsigned int cpu, int
 			ret = PTR_ERR(idle);
 			goto out;
 		}
+
+		/*
+		 * Reset stale stack state from the last time this CPU was online.
+		 */
+		scs_task_reset(idle);
+		kasan_unpoison_task_stack(idle);
 	}
 
 	cpuhp_tasks_frozen = tasks_frozen;




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531814.827777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pp-0000AC-3b; Mon, 08 May 2023 19:47:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531814.827777; Mon, 08 May 2023 19:47:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6po-0008WT-MG; Mon, 08 May 2023 19:47:48 +0000
Received: by outflank-mailman (input) for mailman id 531814;
 Mon, 08 May 2023 19:47:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mD-0004Y5-Hj
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:05 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id af9025bb-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:44:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af9025bb-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185218.472786557@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575043;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/iWTZekX5kQrNbDZ4qzevgkextn6cmGYi8Qwr8K2wd0=;
	b=lOv5CV/JU1toOLovRLqZlTvcRCZ8ltOCGkp6Tu2FEmBz8l2BhYz8Z5gaEUl2tvbeMaUktu
	naeM1OYPPOb/TMivhvW1pX2QwmNH1QEr1EYdTMeEJicu1SC1DRtYs7/3EGdirqV+PjAVb4
	fEhKbiUZ+Ma4Qf+7o2PWQhcdfityfGcgCU0bnUUvTHKObm1jWKBFtPw2zMaDQdulrkbawI
	+xLgFfoRyCWbU/EZ1Kt9ViC/DeiNSYKzoja1i6f6FtvQCBkZdP+kJFuq9OJu18AL6ddLsp
	pM+tuBp7BqzPFWb2FuWk5s0zh/tMRx3ZrKdi4WuN3YRev8DqBKVO4GAYXrxTKQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575043;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/iWTZekX5kQrNbDZ4qzevgkextn6cmGYi8Qwr8K2wd0=;
	b=KbtCrmqgK+WkhRxMq4lpiF9CN1ShevgS463Q/CW1CH+7hVRAoy+HCJrLtfTvSHjNpPNlk0
	gXiYZp/kgdvkEdAQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch v3 22/36] arm64: smp: Switch to hotplug core state synchronization
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:02 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Michael Kelley <mikelley@microsoft.com>

---
 arch/arm64/Kconfig           |    1 +
 arch/arm64/include/asm/smp.h |    2 +-
 arch/arm64/kernel/smp.c      |   14 +++++---------
 3 files changed, 7 insertions(+), 10 deletions(-)
---

--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -222,6 +222,7 @@ config ARM64
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -99,7 +99,7 @@ static inline void arch_send_wakeup_ipi_
 
 extern int __cpu_disable(void);
 
-extern void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 extern void __noreturn cpu_die(void);
 extern void __noreturn cpu_die_early(void);
 
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -332,17 +332,13 @@ static int op_cpu_kill(unsigned int cpu)
 }
 
 /*
- * called on the thread which is asking for a CPU to be shutdown -
- * waits until shutdown has completed, or it is timed out.
+ * Called on the thread which is asking for a CPU to be shutdown after the
+ * shutdown completed.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
 	int err;
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
 	pr_debug("CPU%u: shutdown\n", cpu);
 
 	/*
@@ -369,8 +365,8 @@ void __noreturn cpu_die(void)
 
 	local_daif_mask();
 
-	/* Tell __cpu_die() that this CPU is now safe to dispose of */
-	(void)cpu_report_death();
+	/* Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose of */
+	cpuhp_ap_report_dead();
 
 	/*
 	 * Actually shutdown the CPU. This must never fail. The specific hotplug




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531815.827788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pr-0000nG-4x; Mon, 08 May 2023 19:47:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531815.827788; Mon, 08 May 2023 19:47:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pq-0000jr-OU; Mon, 08 May 2023 19:47:50 +0000
Received: by outflank-mailman (input) for mailman id 531815;
 Mon, 08 May 2023 19:47:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mG-0004Y5-4M
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:08 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b1711425-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:44:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1711425-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185218.578643863@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575046;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=i+dpCSWhwU+6bxlys4Er8+x/ffOHETyJWzg+gvnggWo=;
	b=3YwCUqtcytN4aZ861iesyIkZVFpl/8iL7JOPtbTJ0r8YY9O0hA7PFKQWP/VhOladl6YVHd
	oXNqHi6CaE4fWqff5AfQx90V5PHNtVWab0q5q+CzuVLQ5ClyvMceIGCQoA0U4NdI74Hytu
	Y3+HQXGflBDUkC3QrjVDyzDuu8vc8DF+sOcub1w+SQcfQYuMDSr3sl8AKEXss71YCzdSNs
	K7FTL0G2EvbmjosCTFqcQQ7qoBmKw+7tjhjizRZuxtJ5Q9gW/83BR4ZSbhIvjd7P97RrbH
	UnZ5DsNlB/OjkMSIOge9mTydw1E/0zyqU08V3gaeRZYJK9GS0p6p4RbTCnf4YQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575046;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=i+dpCSWhwU+6bxlys4Er8+x/ffOHETyJWzg+gvnggWo=;
	b=0A6YOWIgV3ASET2u6qXUkyJUtq2zMc+7jX705tummCTsCytwDoD9pqfjOP2BrOuDpfQPwG
	r85LDKJye1tnlVAA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 24/36] MIPS: SMP_CPS: Switch to hotplug core state
 synchronization
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:05 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. This unfortunately requires adding dead reporting to the non-CPS
platforms, as CPS is the only user, but it allows an overall consolidation
of this functionality.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/mips/Kconfig               |    1 +
 arch/mips/cavium-octeon/smp.c   |    1 +
 arch/mips/include/asm/smp-ops.h |    1 +
 arch/mips/kernel/smp-bmips.c    |    1 +
 arch/mips/kernel/smp-cps.c      |   14 +++++---------
 arch/mips/kernel/smp.c          |    8 ++++++++
 arch/mips/loongson64/smp.c      |    1 +
 7 files changed, 18 insertions(+), 9 deletions(-)
---

--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2285,6 +2285,7 @@ config MIPS_CPS
 	select MIPS_CM
 	select MIPS_CPS_PM if HOTPLUG_CPU
 	select SMP
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select SYNC_R4K if (CEVT_R4K || CSRC_R4K)
 	select SYS_SUPPORTS_HOTPLUG_CPU
 	select SYS_SUPPORTS_SCHED_SMT if CPU_MIPSR6
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -345,6 +345,7 @@ void play_dead(void)
 	int cpu = cpu_number_map(cvmx_get_core_num());
 
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 	octeon_processor_boot = 0xff;
 	per_cpu(cpu_state, cpu) = CPU_DEAD;
 
--- a/arch/mips/include/asm/smp-ops.h
+++ b/arch/mips/include/asm/smp-ops.h
@@ -33,6 +33,7 @@ struct plat_smp_ops {
 #ifdef CONFIG_HOTPLUG_CPU
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
+	void (*cleanup_dead_cpu)(unsigned cpu);
 #endif
 #ifdef CONFIG_KEXEC
 	void (*kexec_nonboot_cpu)(void);
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -392,6 +392,7 @@ static void bmips_cpu_die(unsigned int c
 void __ref play_dead(void)
 {
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 
 	/* flush data cache */
 	_dma_cache_wback_inv(0, ~0);
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -503,8 +503,7 @@ void play_dead(void)
 		}
 	}
 
-	/* This CPU has chosen its way out */
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	cps_shutdown_this_cpu(cpu_death);
 
@@ -527,7 +526,9 @@ static void wait_for_sibling_halt(void *
 	} while (!(halted & TCHALT_H));
 }
 
-static void cps_cpu_die(unsigned int cpu)
+static void cps_cpu_die(unsigned int cpu) { }
+
+static void cps_cleanup_dead_cpu(unsigned cpu)
 {
 	unsigned core = cpu_core(&cpu_data[cpu]);
 	unsigned int vpe_id = cpu_vpe_id(&cpu_data[cpu]);
@@ -535,12 +536,6 @@ static void cps_cpu_die(unsigned int cpu
 	unsigned stat;
 	int err;
 
-	/* Wait for the cpu to choose its way out */
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU%u: didn't offline\n", cpu);
-		return;
-	}
-
 	/*
 	 * Now wait for the CPU to actually offline. Without doing this that
 	 * offlining may race with one or more of:
@@ -624,6 +619,7 @@ static const struct plat_smp_ops cps_smp
 #ifdef CONFIG_HOTPLUG_CPU
 	.cpu_disable		= cps_cpu_disable,
 	.cpu_die		= cps_cpu_die,
+	.cleanup_dead_cpu	= cps_cleanup_dead_cpu,
 #endif
 #ifdef CONFIG_KEXEC
 	.kexec_nonboot_cpu	= cps_kexec_nonboot_cpu,
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -690,6 +690,14 @@ void flush_tlb_one(unsigned long vaddr)
 EXPORT_SYMBOL(flush_tlb_page);
 EXPORT_SYMBOL(flush_tlb_one);
 
+#ifdef CONFIG_HOTPLUG_CPU
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
+	if (mp_ops->cleanup_dead_cpu)
+		mp_ops->cleanup_dead_cpu(cpu);
+}
+#endif
+
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 
 static void tick_broadcast_callee(void *info)
--- a/arch/mips/loongson64/smp.c
+++ b/arch/mips/loongson64/smp.c
@@ -775,6 +775,7 @@ void play_dead(void)
 	void (*play_dead_at_ckseg1)(int *);
 
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 
 	prid_imp = read_c0_prid() & PRID_IMP_MASK;
 	prid_rev = read_c0_prid() & PRID_REV_MASK;




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531816.827793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6ps-00011T-4V; Mon, 08 May 2023 19:47:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531816.827793; Mon, 08 May 2023 19:47:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pr-0000zA-K3; Mon, 08 May 2023 19:47:51 +0000
Received: by outflank-mailman (input) for mailman id 531816;
 Mon, 08 May 2023 19:47:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6m4-0004GB-CO
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:43:56 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aaa15205-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:43:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aaa15205-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185218.183895955@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575034;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=JFZgPCXfk4YqrkWgB2iOW/w2n4X/0HLpXczGoD+v1Vc=;
	b=dUhrJJpkQ4m0PAJlXgFQayguIDRYj4DbGKW+17HOquchCD5OkCNG5p+ivglj6Zp0QU+I0O
	sjDl2ZSA2qdGYXZhUvFPnbPFXnAtHugvx2GI+wQbJ19qs9tz5O+rWnDYC86GVW88KAAeKE
	Y1tCVHUnxg/G/GD2y2S3k8oy8xdR5u5wsZrKD02jMrshweqjIs0USPtzbNHtBjzyPjuZYP
	bnHk9lUQIZYMlOyOmAOBq9jij07nAxoPfgUgAqbfHUg2W+wm48VsfVgoz2u1VnDCsiAyNh
	0QgIUd501thJfEkajmTlocFpAZEG7qATCMndUXAIFuG7nTRHlZ0qw32Bk/xexg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575034;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=JFZgPCXfk4YqrkWgB2iOW/w2n4X/0HLpXczGoD+v1Vc=;
	b=w6a2Le7411PlZc47ukf/lNrWYBsIfR8a67l/rnpzfLfXCU2HTbN7zdy8QxM7z6Q1BG3dst
	HEreuKgJSAkc7vCQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 17/36] x86/xen/hvm: Get rid of DEAD_FROZEN handling
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:54 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

No point in this conditional voodoo. Uninitializing the lock mechanism is
safe to call unconditionally, even if it was already invoked when the CPU
died.

Remove the invocation of xen_smp_intr_free() as that has been already
cleaned up in xen_cpu_dead_hvm().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/xen/enlighten_hvm.c |   11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
---

--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -161,13 +161,12 @@ static int xen_cpu_up_prepare_hvm(unsign
 	int rc = 0;
 
 	/*
-	 * This can happen if CPU was offlined earlier and
-	 * offlining timed out in common_cpu_die().
+	 * If a CPU was offlined earlier and offlining timed out then the
+	 * lock mechanism is still initialized. Uninit it unconditionally
+	 * as it's safe to call even if already uninited. Interrupts and
+	 * timer have already been handled in xen_cpu_dead_hvm().
 	 */
-	if (cpu_report_state(cpu) == CPU_DEAD_FROZEN) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-	}
+	xen_uninit_lock_cpu(cpu);
 
 	if (cpu_acpi_id(cpu) != U32_MAX)
 		per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:47:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:47:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531819.827804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pu-0001V6-3W; Mon, 08 May 2023 19:47:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531819.827804; Mon, 08 May 2023 19:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6pt-0001Ox-Az; Mon, 08 May 2023 19:47:53 +0000
Received: by outflank-mailman (input) for mailman id 531819;
 Mon, 08 May 2023 19:47:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6m8-0004Y5-5l
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:00 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac877e76-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:43:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac877e76-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185218.297596481@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575038;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=XQhJ+YJbo0fWro2ruFVKTtipcCHrQUMCrvr4JNW/4mg=;
	b=4HZ0gY7/qD3Jz2ZNiBgE//AXQYDH/HeFjZDh4WjRi76jcqRWL6r1IWSArSsgLl2nlNQFR6
	g3oTCdwyHjjeeSH6oYG4P6dDHDqPyrm4XsYm9t4qbqxeqX6w9TMM6v51TRol9UuzaVyIPk
	O8sLnZGl0vrJEr5urbSAYXzeA0ustuVEtlwl/eDhsKlBBr4ajx+3nr3yXmbpdfkWsah1Lk
	odDCU1HmUFyvF+/Y8OceVNiymvFXX2ukC/Gxal6fq+3UJSIiNFsGWISmsq8MY4xbzh0Twx
	2YA2DgvNmC6y7rxMRpj43UMudtuAPgUtDjD8EJmrsJnCusfYrI4PMDU4n2KO7A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575038;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=XQhJ+YJbo0fWro2ruFVKTtipcCHrQUMCrvr4JNW/4mg=;
	b=K1/k56pLiZSNgtdyMkDW8r9Mred4paXTZmDhXCdZZLJnLWAWM+LVFaD5vra+tfT4KvzwFJ
	SeTJHuTiUJpAhFAw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject:
 [patch v3 19/36] x86/smpboot: Switch to hotplug core state synchronization
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:57 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The new AP state tracking and synchronization mechanism in the CPU hotplug
core code allows the removal of quite a bit of x86-specific code:

  1) The AP alive synchronization based on cpumasks

  2) The decision whether an AP can be brought up again

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>

---
V2: Use for_each_online_cpu() - Brian
---
 arch/x86/Kconfig           |    1 
 arch/x86/include/asm/smp.h |    7 +
 arch/x86/kernel/smp.c      |    1 
 arch/x86/kernel/smpboot.c  |  161 ++++++++++-----------------------------------
 arch/x86/xen/smp_hvm.c     |   16 +---
 arch/x86/xen/smp_pv.c      |   39 ++++++----
 6 files changed, 73 insertions(+), 152 deletions(-)
---

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,6 +274,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_CORE_SYNC_FULL		if SMP
 	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -38,6 +38,8 @@ struct smp_ops {
 	void (*crash_stop_other_cpus)(void);
 	void (*smp_send_reschedule)(int cpu);
 
+	void (*cleanup_dead_cpu)(unsigned cpu);
+	void (*poll_sync_state)(void);
 	int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
@@ -90,7 +92,8 @@ static inline int __cpu_disable(void)
 
 static inline void __cpu_die(unsigned int cpu)
 {
-	smp_ops.cpu_die(cpu);
+	if (smp_ops.cpu_die)
+		smp_ops.cpu_die(cpu);
 }
 
 static inline void __noreturn play_dead(void)
@@ -123,8 +126,6 @@ void native_smp_cpus_done(unsigned int m
 int common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_disable(void);
-int common_cpu_die(unsigned int cpu);
-void native_cpu_die(unsigned int cpu);
 void __noreturn hlt_play_dead(void);
 void native_play_dead(void);
 void play_dead_common(void);
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -269,7 +269,6 @@ struct smp_ops smp_ops = {
 	.smp_send_reschedule	= native_smp_send_reschedule,
 
 	.cpu_up			= native_cpu_up,
-	.cpu_die		= native_cpu_die,
 	.cpu_disable		= native_cpu_disable,
 	.play_dead		= native_play_dead,
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -57,6 +57,7 @@
 #include <linux/pgtable.h>
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
+#include <linux/cpuhotplug.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -101,9 +102,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
-/* All of these masks are initialized in setup_cpu_local_masks() */
-static cpumask_var_t cpu_initialized_mask;
-static cpumask_var_t cpu_callout_mask;
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -169,8 +167,8 @@ static void smp_callin(void)
 	int cpuid = smp_processor_id();
 
 	/*
-	 * If waken up by an INIT in an 82489DX configuration
-	 * cpu_callout_mask guarantees we don't get here before an
+	 * If waken up by an INIT in an 82489DX configuration the alive
+	 * synchronization guarantees we don't get here before an
 	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
 	 * touch our local APIC.
 	 *
@@ -216,17 +214,6 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
-static void wait_for_master_cpu(int cpu)
-{
-	/*
-	 * Wait for release by control CPU before continuing with AP
-	 * initialization.
-	 */
-	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
-	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
-		cpu_relax();
-}
-
 /*
  * Activate a secondary processor.
  */
@@ -247,11 +234,11 @@ static void notrace start_secondary(void
 	cpu_init_exception_handling();
 
 	/*
-	 * Sync point with wait_cpu_initialized(). Sets AP in
-	 * cpu_initialized_mask and then waits for the control CPU
-	 * to release it.
+	 * Synchronization point with the hotplug core. Sets the
+	 * synchronization state to ALIVE and waits for the control CPU to
+	 * release this CPU for further bringup.
 	 */
-	wait_for_master_cpu(raw_smp_processor_id());
+	cpuhp_ap_sync_alive();
 
 	cpu_init();
 	rcu_cpu_starting(raw_smp_processor_id());
@@ -284,7 +271,6 @@ static void notrace start_secondary(void
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
-	cpu_set_state_online(smp_processor_id());
 	x86_platform.nmi_init();
 
 	/* enable local interrupts */
@@ -735,9 +721,9 @@ static void impress_friends(void)
 	 * Allow the user to impress friends.
 	 */
 	pr_debug("Before bogomips\n");
-	for_each_possible_cpu(cpu)
-		if (cpumask_test_cpu(cpu, cpu_callout_mask))
-			bogosum += cpu_data(cpu).loops_per_jiffy;
+	for_each_online_cpu(cpu)
+		bogosum += cpu_data(cpu).loops_per_jiffy;
+
 	pr_info("Total of %d processors activated (%lu.%02lu BogoMIPS)\n",
 		num_online_cpus(),
 		bogosum/(500000/HZ),
@@ -1009,6 +995,7 @@ int common_cpu_up(unsigned int cpu, stru
 static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
 	unsigned long start_ip = real_mode_header->trampoline_start;
+	int ret;
 
 #ifdef CONFIG_X86_64
 	/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -1049,13 +1036,6 @@ static int do_boot_cpu(int apicid, int c
 		}
 	}
 
-	/*
-	 * AP might wait on cpu_callout_mask in cpu_init() with
-	 * cpu_initialized_mask set if previous attempt to online
-	 * it timed-out. Clear cpu_initialized_mask so that after
-	 * INIT/SIPI it could start with a clean state.
-	 */
-	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	smp_mb();
 
 	/*
@@ -1066,47 +1046,16 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
-		return apic->wakeup_secondary_cpu_64(apicid, start_ip);
+		ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
-		return apic->wakeup_secondary_cpu(apicid, start_ip);
-
-	return wakeup_secondary_cpu_via_init(apicid, start_ip);
-}
-
-static int wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
-{
-	unsigned long timeout;
-
-	/*
-	 * Wait up to 10s for the CPU to report in.
-	 */
-	timeout = jiffies + 10*HZ;
-	while (time_before(jiffies, timeout)) {
-		if (cpumask_test_cpu(cpu, mask))
-			return 0;
-
-		schedule();
-	}
-	return -1;
-}
-
-/*
- * Bringup step two: Wait for the target AP to reach cpu_init_secondary()
- * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
- * to proceed.  The AP will then proceed past setting its 'callin' bit
- * and end up waiting in check_tsc_sync_target() until we reach
- * wait_cpu_online() to tend to it.
- */
-static int wait_cpu_initialized(unsigned int cpu)
-{
-	/*
-	 * Wait for first sign of life from AP.
-	 */
-	if (wait_cpu_cpumask(cpu, cpu_initialized_mask))
-		return -1;
+		ret = apic->wakeup_secondary_cpu(apicid, start_ip);
+	else
+		ret = wakeup_secondary_cpu_via_init(apicid, start_ip);
 
-	cpumask_set_cpu(cpu, cpu_callout_mask);
-	return 0;
+	/* If the wakeup mechanism failed, cleanup the warm reset vector */
+	if (ret)
+		arch_cpuhp_cleanup_kick_cpu(cpu);
+	return ret;
 }
 
 static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
@@ -1131,11 +1080,6 @@ static int native_kick_ap(unsigned int c
 	 */
 	mtrr_save_state();
 
-	/* x86 CPUs take themselves offline, so delayed offline is OK. */
-	err = cpu_check_up_prepare(cpu);
-	if (err && err != -EBUSY)
-		return err;
-
 	/* the FPU context is blank, nobody can own it */
 	per_cpu(fpu_fpregs_owner_ctx, cpu) = NULL;
 
@@ -1152,17 +1096,29 @@ static int native_kick_ap(unsigned int c
 
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
-	int ret;
-
-	ret = native_kick_ap(cpu, tidle);
-	if (!ret)
-		ret = wait_cpu_initialized(cpu);
+	return native_kick_ap(cpu, tidle);
+}
 
+void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu)
+{
 	/* Cleanup possible dangling ends... */
-	if (x86_platform.legacy.warm_reset)
+	if (smp_ops.cpu_up == native_cpu_up && x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();
+}
 
-	return ret;
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
+	if (smp_ops.cleanup_dead_cpu)
+		smp_ops.cleanup_dead_cpu(cpu);
+
+	if (system_state == SYSTEM_RUNNING)
+		pr_info("CPU %u is now offline\n", cpu);
+}
+
+void arch_cpuhp_sync_state_poll(void)
+{
+	if (smp_ops.poll_sync_state)
+		smp_ops.poll_sync_state();
 }
 
 /**
@@ -1354,9 +1310,6 @@ void __init native_smp_prepare_boot_cpu(
 	if (!IS_ENABLED(CONFIG_SMP))
 		switch_gdt_and_percpu_base(me);
 
-	/* already set me in cpu_online_mask in boot_cpu_init() */
-	cpumask_set_cpu(me, cpu_callout_mask);
-	cpu_set_state_online(me);
 	native_pv_lock_init();
 }
 
@@ -1483,8 +1436,6 @@ early_param("possible_cpus", _setup_poss
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
-	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callout_mask);
 	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
 }
 
@@ -1546,9 +1497,6 @@ static void remove_siblinginfo(int cpu)
 static void remove_cpu_from_maps(int cpu)
 {
 	set_cpu_online(cpu, false);
-	cpumask_clear_cpu(cpu, cpu_callout_mask);
-	/* was set by cpu_init() */
-	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	numa_remove_cpu(cpu);
 }
 
@@ -1599,36 +1547,11 @@ int native_cpu_disable(void)
 	return 0;
 }
 
-int common_cpu_die(unsigned int cpu)
-{
-	int ret = 0;
-
-	/* We don't do anything here: idle task is faking death itself. */
-
-	/* They ack this in play_dead() by setting CPU_DEAD */
-	if (cpu_wait_death(cpu, 5)) {
-		if (system_state == SYSTEM_RUNNING)
-			pr_info("CPU %u is now offline\n", cpu);
-	} else {
-		pr_err("CPU %u didn't die...\n", cpu);
-		ret = -1;
-	}
-
-	return ret;
-}
-
-void native_cpu_die(unsigned int cpu)
-{
-	common_cpu_die(cpu);
-}
-
 void play_dead_common(void)
 {
 	idle_task_exit();
 
-	/* Ack it */
-	(void)cpu_report_death();
-
+	cpuhp_ap_report_dead();
 	/*
 	 * With physical CPU hotplug, we should halt the cpu
 	 */
@@ -1730,12 +1653,6 @@ int native_cpu_disable(void)
 	return -ENOSYS;
 }
 
-void native_cpu_die(unsigned int cpu)
-{
-	/* We said "no" in __cpu_disable */
-	BUG();
-}
-
 void native_play_dead(void)
 {
 	BUG();
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -55,18 +55,16 @@ static void __init xen_hvm_smp_prepare_c
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-static void xen_hvm_cpu_die(unsigned int cpu)
+static void xen_hvm_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (common_cpu_die(cpu) == 0) {
-		if (xen_have_vector_callback) {
-			xen_smp_intr_free(cpu);
-			xen_uninit_lock_cpu(cpu);
-			xen_teardown_timer(cpu);
-		}
+	if (xen_have_vector_callback) {
+		xen_smp_intr_free(cpu);
+		xen_uninit_lock_cpu(cpu);
+		xen_teardown_timer(cpu);
 	}
 }
 #else
-static void xen_hvm_cpu_die(unsigned int cpu)
+static void xen_hvm_cleanup_dead_cpu(unsigned int cpu)
 {
 	BUG();
 }
@@ -77,7 +75,7 @@ void __init xen_hvm_smp_init(void)
 	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
 	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
 	smp_ops.smp_cpus_done = xen_smp_cpus_done;
-	smp_ops.cpu_die = xen_hvm_cpu_die;
+	smp_ops.cleanup_dead_cpu = xen_hvm_cleanup_dead_cpu;
 
 	if (!xen_have_vector_callback) {
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -62,6 +62,7 @@ static void cpu_bringup(void)
 	int cpu;
 
 	cr4_init();
+	cpuhp_ap_sync_alive();
 	cpu_init();
 	touch_softlockup_watchdog();
 
@@ -83,7 +84,7 @@ static void cpu_bringup(void)
 
 	set_cpu_online(cpu, true);
 
-	cpu_set_state_online(cpu);  /* Implies full memory barrier. */
+	smp_mb();
 
 	/* We can take interrupts now: we're officially "up". */
 	local_irq_enable();
@@ -323,14 +324,6 @@ static int xen_pv_cpu_up(unsigned int cp
 
 	xen_setup_runstate_info(cpu);
 
-	/*
-	 * PV VCPUs are always successfully taken down (see 'while' loop
-	 * in xen_cpu_die()), so -EBUSY is an error.
-	 */
-	rc = cpu_check_up_prepare(cpu);
-	if (rc)
-		return rc;
-
 	/* make sure interrupts start blocked */
 	per_cpu(xen_vcpu, cpu)->evtchn_upcall_mask = 1;
 
@@ -349,6 +342,11 @@ static int xen_pv_cpu_up(unsigned int cp
 	return 0;
 }
 
+static void xen_pv_poll_sync_state(void)
+{
+	HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
+}
+
 #ifdef CONFIG_HOTPLUG_CPU
 static int xen_pv_cpu_disable(void)
 {
@@ -364,18 +362,18 @@ static int xen_pv_cpu_disable(void)
 
 static void xen_pv_cpu_die(unsigned int cpu)
 {
-	while (HYPERVISOR_vcpu_op(VCPUOP_is_up,
-				  xen_vcpu_nr(cpu), NULL)) {
+	while (HYPERVISOR_vcpu_op(VCPUOP_is_up, xen_vcpu_nr(cpu), NULL)) {
 		__set_current_state(TASK_UNINTERRUPTIBLE);
 		schedule_timeout(HZ/10);
 	}
+}
 
-	if (common_cpu_die(cpu) == 0) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-		xen_teardown_timer(cpu);
-		xen_pmu_finish(cpu);
-	}
+static void xen_pv_cleanup_dead_cpu(unsigned int cpu)
+{
+	xen_smp_intr_free(cpu);
+	xen_uninit_lock_cpu(cpu);
+	xen_teardown_timer(cpu);
+	xen_pmu_finish(cpu);
 }
 
 static void __noreturn xen_pv_play_dead(void) /* used only with HOTPLUG_CPU */
@@ -397,6 +395,11 @@ static void xen_pv_cpu_die(unsigned int
 	BUG();
 }
 
+static void xen_pv_cleanup_dead_cpu(unsigned int cpu)
+{
+	BUG();
+}
+
 static void __noreturn xen_pv_play_dead(void)
 {
 	BUG();
@@ -437,6 +440,8 @@ static const struct smp_ops xen_smp_ops
 
 	.cpu_up = xen_pv_cpu_up,
 	.cpu_die = xen_pv_cpu_die,
+	.cleanup_dead_cpu = xen_pv_cleanup_dead_cpu,
+	.poll_sync_state = xen_pv_poll_sync_state,
 	.cpu_disable = xen_pv_cpu_disable,
 	.play_dead = xen_pv_play_dead,
 




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:48:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:48:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531846.827832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6q2-0003dY-C4; Mon, 08 May 2023 19:48:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531846.827832; Mon, 08 May 2023 19:48:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6q2-0003d7-4Z; Mon, 08 May 2023 19:48:02 +0000
Received: by outflank-mailman (input) for mailman id 531846;
 Mon, 08 May 2023 19:48:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mG-0004GB-SD
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:08 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b266ad52-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:44:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b266ad52-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185218.643400362@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575047;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=hx5fWQqpBJqFhc3VD4cFl0lkC1fEFtezXeQ/kkMDuYM=;
	b=dMSiYiyMCD3/kfD6ycKkp08Cs2yGyR46ibmTbUZ4EPuuwnERkLqo1DPunwf9Bd+Lh282Kk
	iyz5crAqn34/Z1DU1FdeHq8w2ICxkwvwPupWSapQM5bKWcd06oqKMF65CPiBv3i5Jdq5GK
	No9UCLkO9a75FWDWNjS5cUfUlOLyZkVKjYuIut6Mde17+klVjF/vsV/x/8XZgC94CcLGgL
	esCNz6eRqlPWNCIMyIyPzBMIvNBVYenPTUQOWaJr4izZS/Q4K0oHTNZvGzzHj78cMSROKA
	7adYzjGge8G4Bi2MTD6vyk5VOcXBsF4npN+FHo+bMYD6KtCMSa31T9g3imKuVQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575047;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=hx5fWQqpBJqFhc3VD4cFl0lkC1fEFtezXeQ/kkMDuYM=;
	b=5H4Y1AwaHO6MuLieAAEsSuc7IElwyyxQTr8AXtmBcHORWW1C5BYTyvmgYBKAcPiBa35CdM
	lrZnB79si3SXI4BQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 25/36] parisc: Switch to hotplug core state synchronization
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:07 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/parisc/Kconfig          |    1 +
 arch/parisc/kernel/process.c |    4 ++--
 arch/parisc/kernel/smp.c     |    7 +++----
 3 files changed, 6 insertions(+), 6 deletions(-)
---

--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -57,6 +57,7 @@ config PARISC
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select GENERIC_SCHED_CLOCK
 	select GENERIC_IRQ_MIGRATION if SMP
 	select HAVE_UNSTABLE_SCHED_CLOCK if SMP
--- a/arch/parisc/kernel/process.c
+++ b/arch/parisc/kernel/process.c
@@ -166,8 +166,8 @@ void __noreturn arch_cpu_idle_dead(void)
 
 	local_irq_disable();
 
-	/* Tell __cpu_die() that this CPU is now safe to dispose of. */
-	(void)cpu_report_death();
+	/* Tell the core that this CPU is now safe to dispose of. */
+	cpuhp_ap_report_dead();
 
 	/* Ensure that the cache lines are written out. */
 	flush_cache_all_local();
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -500,11 +500,10 @@ int __cpu_disable(void)
 void __cpu_die(unsigned int cpu)
 {
 	pdc_cpu_rendezvous_lock();
+}
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
 	pr_info("CPU%u: is shutting down\n", cpu);
 
 	/* set task's state to interruptible sleep */




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:48:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:48:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531848.827837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6q3-0003jH-2w; Mon, 08 May 2023 19:48:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531848.827837; Mon, 08 May 2023 19:48:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6q2-0003hx-Nl; Mon, 08 May 2023 19:48:02 +0000
Received: by outflank-mailman (input) for mailman id 531848;
 Mon, 08 May 2023 19:48:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mJ-0004GB-V4
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:11 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b458d500-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:44:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b458d500-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185218.749244209@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575051;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=4bj8ZvYTo7h4ovclEum+b6WsfhLEcO8uNjPMgn9Q5x4=;
	b=luptazBGQl/6gwCsUF/buYWxkBAEoJNYHx7Kxq0OeppfP1uF6F7NgOCw3LqRVTaT31I9h4
	D5XOFiRqBn9YmzEHb1c8fA6wh7ZzDtnqoGVuN2a4blplprkdtDd8R1N5BUJ7dY//ynam4z
	9nOeEaXVbMYGsvANvqaAzHwNiaL+IZrNUZNLb8PdkXNIsgQJ3JnLHxTcMWQHPktfZ7p1+I
	valh5s3/2GsrRw0m8GhQn7EqA30eWJMKuxbBma4fWLJDjf3ostL0yMHCO5slcxoZThfFtJ
	ateRKwD80dVM75IN8sPMyv58NxXIKSeUEGcWX63SCtgZbvKN9Mfqco6vLkvfkg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575051;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=4bj8ZvYTo7h4ovclEum+b6WsfhLEcO8uNjPMgn9Q5x4=;
	b=Irrw/2b9WZnFSWO4H5T5IOf9dImpiZr2fFBeiX3HwHII5+jw+y4xy+lPuveYMabu3EDI57
	iXBQKFEgrrdQwlAQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 27/36] cpu/hotplug: Remove unused state functions
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:10 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

All users converted to the hotplug core mechanism.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 include/linux/cpu.h |    2 -
 kernel/smpboot.c    |   75 ----------------------------------------------------
 2 files changed, 77 deletions(-)
---

--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -193,8 +193,6 @@ static inline void play_idle(unsigned lo
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-bool cpu_wait_death(unsigned int cpu, int seconds);
-bool cpu_report_death(void);
 void cpuhp_report_idle_dead(void);
 #else
 static inline void cpuhp_report_idle_dead(void) { }
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -325,78 +325,3 @@ void smpboot_unregister_percpu_thread(st
 	cpus_read_unlock();
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
-
-#ifndef CONFIG_HOTPLUG_CORE_SYNC
-static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
-
-#ifdef CONFIG_HOTPLUG_CPU
-/*
- * Wait for the specified CPU to exit the idle loop and die.
- */
-bool cpu_wait_death(unsigned int cpu, int seconds)
-{
-	int jf_left = seconds * HZ;
-	int oldstate;
-	bool ret = true;
-	int sleep_jf = 1;
-
-	might_sleep();
-
-	/* The outgoing CPU will normally get done quite quickly. */
-	if (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) == CPU_DEAD)
-		goto update_state_early;
-	udelay(5);
-
-	/* But if the outgoing CPU dawdles, wait increasingly long times. */
-	while (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) != CPU_DEAD) {
-		schedule_timeout_uninterruptible(sleep_jf);
-		jf_left -= sleep_jf;
-		if (jf_left <= 0)
-			break;
-		sleep_jf = DIV_ROUND_UP(sleep_jf * 11, 10);
-	}
-update_state_early:
-	oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-update_state:
-	if (oldstate == CPU_DEAD) {
-		/* Outgoing CPU died normally, update state. */
-		smp_mb(); /* atomic_read() before update. */
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_POST_DEAD);
-	} else {
-		/* Outgoing CPU still hasn't died, set state accordingly. */
-		if (!atomic_try_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
-					&oldstate, CPU_BROKEN))
-			goto update_state;
-		ret = false;
-	}
-	return ret;
-}
-
-/*
- * Called by the outgoing CPU to report its successful death.  Return
- * false if this report follows the surviving CPU's timing out.
- *
- * A separate "CPU_DEAD_FROZEN" is used when the surviving CPU
- * timed out.  This approach allows architectures to omit calls to
- * cpu_check_up_prepare() and cpu_set_state_online() without defeating
- * the next cpu_wait_death()'s polling loop.
- */
-bool cpu_report_death(void)
-{
-	int oldstate;
-	int newstate;
-	int cpu = smp_processor_id();
-
-	oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-	do {
-		if (oldstate != CPU_BROKEN)
-			newstate = CPU_DEAD;
-		else
-			newstate = CPU_DEAD_FROZEN;
-	} while (!atomic_try_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
-				     &oldstate, newstate));
-	return newstate == CPU_DEAD;
-}
-
-#endif /* #ifdef CONFIG_HOTPLUG_CPU */
-#endif /* !CONFIG_HOTPLUG_CORE_SYNC */




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:48:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:48:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531851.827843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6q4-00041T-BQ; Mon, 08 May 2023 19:48:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531851.827843; Mon, 08 May 2023 19:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6q3-0003xB-Up; Mon, 08 May 2023 19:48:03 +0000
Received: by outflank-mailman (input) for mailman id 531851;
 Mon, 08 May 2023 19:48:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6m9-0004Y5-HR
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:01 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ad7f2531-edd8-11ed-8611-37d641c3527e;
 Mon, 08 May 2023 21:43:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad7f2531-edd8-11ed-8611-37d641c3527e
Message-ID: <20230508185218.354392116@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575039;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=XHpNsJs5kSVxMByUTgiKML4QfVv98dxDjLlAQchlqtk=;
	b=Ki8zOz4u2YoGQTHxRWvDCdJpS9rUfG7Rt8el1sEQonOE93FQ2t/Ouh6saOF5zSGR/sUwqy
	RGwztYnIhjpNbF8b2PGA4CfI6IE5cG/4pkd5Q33EfIphmoL8ll1Ul99/6dF/RDkEojrYjr
	EB5pH//XxOfXbRz5/okajpcb/SRh9nL9WoyZaL+b0N+3wBhaYWzQ0JAad4Cix7Pxxq45EN
	qoUV3fffrtSRueog+x/R851Gsai56IVP/bi8/TS243MY3shbpmrcRDIo+w8Qw8dLxfZrAJ
	3jwxSYmT//Hz16/1k1wHeOTfsamwVnUrPFdDAmzXRODZAUOo/yfyrnt7BniKSQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575039;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=XHpNsJs5kSVxMByUTgiKML4QfVv98dxDjLlAQchlqtk=;
	b=nQqE0LEqpdin/188ms5Eh3E7UO6uD16LhW/aIhEdUgOoCpO7X2+xXn/WMrcSUx9PqrxIEc
	Yg+NuqTDvr5KtUBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 20/36] cpu/hotplug: Remove cpu_report_state() and related
 unused cruft
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:43:59 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

No more users.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 include/linux/cpu.h |    2 -
 kernel/smpboot.c    |   90 ----------------------------------------------------
 2 files changed, 92 deletions(-)
---

--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -184,8 +184,6 @@ void arch_cpu_idle_enter(void);
 void arch_cpu_idle_exit(void);
 void __noreturn arch_cpu_idle_dead(void);
 
-int cpu_report_state(int cpu);
-int cpu_check_up_prepare(int cpu);
 void cpu_set_state_online(int cpu);
 void play_idle_precise(u64 duration_ns, u64 latency_ns);
 
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -329,97 +329,7 @@ EXPORT_SYMBOL_GPL(smpboot_unregister_per
 #ifndef CONFIG_HOTPLUG_CORE_SYNC
 static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
 
-/*
- * Called to poll specified CPU's state, for example, when waiting for
- * a CPU to come online.
- */
-int cpu_report_state(int cpu)
-{
-	return atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-}
-
-/*
- * If CPU has died properly, set its state to CPU_UP_PREPARE and
- * return success.  Otherwise, return -EBUSY if the CPU died after
- * cpu_wait_death() timed out.  And yet otherwise again, return -EAGAIN
- * if cpu_wait_death() timed out and the CPU still hasn't gotten around
- * to dying.  In the latter two cases, the CPU might not be set up
- * properly, but it is up to the arch-specific code to decide.
- * Finally, -EIO indicates an unanticipated problem.
- *
- * Note that it is permissible to omit this call entirely, as is
- * done in architectures that do no CPU-hotplug error checking.
- */
-int cpu_check_up_prepare(int cpu)
-{
-	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
-		return 0;
-	}
-
-	switch (atomic_read(&per_cpu(cpu_hotplug_state, cpu))) {
-
-	case CPU_POST_DEAD:
-
-		/* The CPU died properly, so just start it up again. */
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
-		return 0;
-
-	case CPU_DEAD_FROZEN:
-
-		/*
-		 * Timeout during CPU death, so let caller know.
-		 * The outgoing CPU completed its processing, but after
-		 * cpu_wait_death() timed out and reported the error. The
-		 * caller is free to proceed, in which case the state
-		 * will be reset properly by cpu_set_state_online().
-		 * Proceeding despite this -EBUSY return makes sense
-		 * for systems where the outgoing CPUs take themselves
-		 * offline, with no post-death manipulation required from
-		 * a surviving CPU.
-		 */
-		return -EBUSY;
-
-	case CPU_BROKEN:
-
-		/*
-		 * The most likely reason we got here is that there was
-		 * a timeout during CPU death, and the outgoing CPU never
-		 * did complete its processing.  This could happen on
-		 * a virtualized system if the outgoing VCPU gets preempted
-		 * for more than five seconds, and the user attempts to
-		 * immediately online that same CPU.  Trying again later
-		 * might return -EBUSY above, hence -EAGAIN.
-		 */
-		return -EAGAIN;
-
-	case CPU_UP_PREPARE:
-		/*
-		 * Timeout while waiting for the CPU to show up. Allow to try
-		 * again later.
-		 */
-		return 0;
-
-	default:
-
-		/* Should not happen.  Famous last words. */
-		return -EIO;
-	}
-}
-
-/*
- * Mark the specified CPU online.
- *
- * Note that it is permissible to omit this call entirely, as is
- * done in architectures that do no CPU-hotplug error checking.
- */
-void cpu_set_state_online(int cpu)
-{
-	(void)atomic_xchg(&per_cpu(cpu_hotplug_state, cpu), CPU_ONLINE);
-}
-
 #ifdef CONFIG_HOTPLUG_CPU
-
 /*
  * Wait for the specified CPU to exit the idle loop and die.
  */




From xen-devel-bounces@lists.xenproject.org Mon May 08 19:48:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 19:48:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531860.827862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6q8-00054j-5n; Mon, 08 May 2023 19:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531860.827862; Mon, 08 May 2023 19:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw6q7-00052m-RJ; Mon, 08 May 2023 19:48:07 +0000
Received: by outflank-mailman (input) for mailman id 531860;
 Mon, 08 May 2023 19:48:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=49Re=A5=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pw6mT-0004GB-QM
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 19:44:21 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ba44802f-edd8-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 21:44:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba44802f-edd8-11ed-b226-6b7b168915f2
Message-ID: <20230508185219.070274100@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683575061;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=rNSjycztoby2JMftnREWdlBREDazVhJLVJwM8M2fPkk=;
	b=OpZe9lLKyKO6h5IfhQv2j9WvmIn2yf65zqaHV7y3wQleqSqr+f6etMGdIS9Efc2LFhQWS0
	5ftRjwED05+6AZBpiAUMBSD+9Bs4XI7ErbN7ZPJB/dWcAHVXO0162v680HbB1xi5BIj6b6
	2VmnddYXWQAy5QrMp52rjUb0ug4Nu5d4+araY9u5t682iYzuMEwChigCf9eS54/VMeAbyp
	FHpBHrvUOTNlGmggxAiWaYQoqPnvlTGipZEoQY9ETwz7wg7cLp6+qhnkF8oJI3QMQlbXXd
	MwefEeZHkRCA3XHukRzDC06izW3/O9wI5c+MU2Y7YPlYsyj6psxCoZFPmRCp2g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683575061;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=rNSjycztoby2JMftnREWdlBREDazVhJLVJwM8M2fPkk=;
	b=L+J15yNAu9kuVnMklFhkCkOJaiMb9jA+tCWcI3g0yk74oGzDCt/9UexPGrZp3cttAT1h/e
	UE8ijpRbZRAquYDA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: [patch v3 33/36] x86/apic: Save the APIC virtual base address
References: <20230508181633.089804905@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon,  8 May 2023 21:44:20 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

For parallel CPU bringup it's required to read the APIC ID in the low-level
startup code. The virtual APIC base address is a constant because it's a
fixmap address. Exposing that constant, which is composed via macros, to
assembly code is non-trivial due to header inclusion hell.

Apart from that, it's constant only because of the vsyscall ABI
requirement. Once vsyscall is out of the picture, the fixmap can be placed
at runtime.

Avoid header hell, stay flexible and store the address in a variable which
can be exposed to the low level startup code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>


---
 arch/x86/include/asm/smp.h  |    1 +
 arch/x86/kernel/apic/apic.c |    4 ++++
 2 files changed, 5 insertions(+)
---

--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -196,6 +196,7 @@ extern void nmi_selftest(void);
 #endif
 
 extern unsigned int smpboot_control;
+extern unsigned long apic_mmio_base;
 
 #endif /* !__ASSEMBLY__ */
 
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -101,6 +101,9 @@ static int apic_extnmi __ro_after_init =
  */
 static bool virt_ext_dest_id __ro_after_init;
 
+/* For parallel bootup. */
+unsigned long apic_mmio_base __ro_after_init;
+
 /*
  * Map cpu index to physical APIC ID
  */
@@ -2163,6 +2166,7 @@ void __init register_lapic_address(unsig
 
 	if (!x2apic_mode) {
 		set_fixmap_nocache(FIX_APIC_BASE, address);
+		apic_mmio_base = APIC_BASE;
 		apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
 			    APIC_BASE, address);
 	}




From xen-devel-bounces@lists.xenproject.org Mon May 08 20:53:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 20:53:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531909.827871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw7rF-00079A-QL; Mon, 08 May 2023 20:53:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531909.827871; Mon, 08 May 2023 20:53:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw7rF-000793-Nf; Mon, 08 May 2023 20:53:21 +0000
Received: by outflank-mailman (input) for mailman id 531909;
 Mon, 08 May 2023 20:53:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw7rD-00078t-LS; Mon, 08 May 2023 20:53:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw7rD-0002T9-J5; Mon, 08 May 2023 20:53:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw7rC-000661-Vm; Mon, 08 May 2023 20:53:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pw7rC-0000Zp-UQ; Mon, 08 May 2023 20:53:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hNnDSUl9bZpecRe2kcUwMrPPEQpb64bR+kWhZ8EwxO0=; b=MK4TAmzuCiRG+tpt3hAeDIk0ER
	7unkdsyt+j+71WMhPpz/DEb9MbEdPFlkpjZ5NF0hY7J9ly0DmgICFe3nGL1xpVO6vaFhN4DvcfYRz
	4mWu2ymuEmlvFovW0dWWnehKbRKVlQt98V5GTGJV4N+g5cgGwnH0zAmwL0gBetmLw6Vs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180578-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180578: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ac9a78681b921877518763ba0e89202254349d1b
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 May 2023 20:53:18 +0000

flight 180578 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180578/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180574

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ac9a78681b921877518763ba0e89202254349d1b
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   22 days
Failing since        180281  2023-04-17 06:24:36 Z   21 days   39 attempts
Testing same since   180571  2023-05-07 22:11:42 Z    0 days    3 attempts

------------------------------------------------------------
2359 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 296910 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 08 21:12:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 21:12:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531915.827882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw89E-0001Cz-AQ; Mon, 08 May 2023 21:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531915.827882; Mon, 08 May 2023 21:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw89E-0001Cs-7J; Mon, 08 May 2023 21:11:56 +0000
Received: by outflank-mailman (input) for mailman id 531915;
 Mon, 08 May 2023 21:11:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hkai=A5=gmail.com=wei.liu.linux@srs-se1.protection.inumbo.net>)
 id 1pw89D-0001Cm-0G
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 21:11:55 +0000
Received: from mail-pl1-f169.google.com (mail-pl1-f169.google.com
 [209.85.214.169]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f3b1f9d1-ede4-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 23:11:53 +0200 (CEST)
Received: by mail-pl1-f169.google.com with SMTP id
 d9443c01a7336-1aad6f2be8eso47446215ad.3
 for <xen-devel@lists.xenproject.org>; Mon, 08 May 2023 14:11:52 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([20.69.120.36])
 by smtp.gmail.com with ESMTPSA id
 k14-20020a170902760e00b0019aeddce6casm7648553pll.205.2023.05.08.14.11.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 08 May 2023 14:11:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3b1f9d1-ede4-11ed-b226-6b7b168915f2
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683580311; x=1686172311;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=NFnNeAlUj9C+huN2rBEcxkVDG8OipZYXhgLegmZBDMk=;
        b=T2Gihf/TB44KBcPKItW7mTEncGsIA+xFO2igp+BadB2b9EAQ8JFjoTr6VumeRXkmMV
         2carEAlSnI9T5CmasnTDIpArCW3KRHr2J/zVEwn+17X8VTZzQHKsX5ecIYNgZMNPFr7D
         oUNKPo1viLNPDjH2JCDjYMMLGrDdrGq4DQomkMX7UDwPrXk9sVLp6Yx6qcvYISjmxV96
         dVFa8xzFOCeP8Vgz16w++S7QCKXRqUpbqQsE5QaPtkBhFdqg6b8jPHR5M8CcRvLY3fma
         0W1q0CDnTbgqz593EV8hMIaQdiP4K4ZXa9JPkD7aw5ir2SVylcNlOt6hR4aJmSR3qx8s
         UTKQ==
X-Gm-Message-State: AC+VfDwQOdRSoONmLnfu67cO5O4e6hMpqU38g38t04KFUVNUqromtnOQ
	eqDegmbsbHuW5ohnALlkxsk=
X-Google-Smtp-Source: ACHHUZ6BtFvSoF+wcEO4loE9NZdFfCT+I/PFTdAZhqBSeRBB8gma6+CJUbrsA8zRjfVrnuCD3KU/Lw==
X-Received: by 2002:a17:90a:65cb:b0:248:8399:1f7c with SMTP id i11-20020a17090a65cb00b0024883991f7cmr11239406pjs.38.1683580310757;
        Mon, 08 May 2023 14:11:50 -0700 (PDT)
Date: Mon, 8 May 2023 21:11:48 +0000
From: Wei Liu <wei.liu@kernel.org>
To: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>
Cc: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	Mihai =?utf-8?B?RG9uyJt1?= <mdontu@bitdefender.com>,
	=?utf-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?utf-8?Q?=C8=98tefan_=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
	x86@kernel.org, xen-devel@lists.xenproject.org,
	Wei Liu <wei.liu@kernel.org>
Subject: Re: [PATCH v1 5/9] KVM: x86: Add new hypercall to lock control
 registers
Message-ID: <ZFlllHjntehpthma@liuwe-devbox-debian-v2>
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-6-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230505152046.6575-6-mic@digikod.net>

On Fri, May 05, 2023 at 05:20:42PM +0200, Mickaël Salaün wrote:
> This enables guests to lock their CR0 and CR4 registers with a subset of
> X86_CR0_WP, X86_CR4_SMEP, X86_CR4_SMAP, X86_CR4_UMIP, X86_CR4_FSGSBASE
> and X86_CR4_CET flags.
> 
> The new KVM_HC_LOCK_CR_UPDATE hypercall takes two arguments.  The first
> is to identify the control register, and the second is a bit mask to
> pin (i.e. mark as read-only).
> 
> These register flags should already be pinned by Linux guests, but once
> the kernel is compromised, this self-protection mechanism could be
> disabled, which is not possible with this dedicated hypercall.
> 
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Signed-off-by: Mickaël Salaün <mic@digikod.net>
> Link: https://lore.kernel.org/r/20230505152046.6575-6-mic@digikod.net
[...]
>  	hw_cr4 = (cr4_read_shadow() & X86_CR4_MCE) | (cr4 & ~X86_CR4_MCE);
>  	if (is_unrestricted_guest(vcpu))
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index ffab64d08de3..a529455359ac 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7927,11 +7927,77 @@ static unsigned long emulator_get_cr(struct x86_emulate_ctxt *ctxt, int cr)
>  	return value;
>  }
>  
> +#ifdef CONFIG_HEKI
> +
> +extern unsigned long cr4_pinned_mask;
> +

Can this be moved to a header file?

> +static int heki_lock_cr(struct kvm *const kvm, const unsigned long cr,
> +			unsigned long pin)
> +{
> +	if (!pin)
> +		return -KVM_EINVAL;
> +
> +	switch (cr) {
> +	case 0:
> +		/* Cf. arch/x86/kernel/cpu/common.c */
> +		if (!(pin & X86_CR0_WP))
> +			return -KVM_EINVAL;
> +
> +		if ((read_cr0() & pin) != pin)
> +			return -KVM_EINVAL;
> +
> +		atomic_long_or(pin, &kvm->heki_pinned_cr0);
> +		return 0;
> +	case 4:
> +		/* Checks for irrelevant bits. */
> +		if ((pin & cr4_pinned_mask) != pin)
> +			return -KVM_EINVAL;
> +

It is enforcing the host mask on the guest, right? If the guest's set is a
superset of the host's, then it will get rejected.


> +		/* Ignores bits not present in host. */
> +		pin &= __read_cr4();
> +		atomic_long_or(pin, &kvm->heki_pinned_cr4);
> +		return 0;
> +	}
> +	return -KVM_EINVAL;
> +}
> +
> +int heki_check_cr(const struct kvm *const kvm, const unsigned long cr,
> +		  const unsigned long val)
> +{
> +	unsigned long pinned;
> +
> +	switch (cr) {
> +	case 0:
> +		pinned = atomic_long_read(&kvm->heki_pinned_cr0);
> +		if ((val & pinned) != pinned) {
> +			pr_warn_ratelimited(
> +				"heki-kvm: Blocked CR0 update: 0x%lx\n", val);

I think the message would be more useful if it contained the VM and VCPU
identifiers.

Thanks,
Wei.


From xen-devel-bounces@lists.xenproject.org Mon May 08 21:18:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 21:18:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531923.827892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw8Ff-0001uf-51; Mon, 08 May 2023 21:18:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531923.827892; Mon, 08 May 2023 21:18:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw8Ff-0001uY-1L; Mon, 08 May 2023 21:18:35 +0000
Received: by outflank-mailman (input) for mailman id 531923;
 Mon, 08 May 2023 21:18:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hkai=A5=gmail.com=wei.liu.linux@srs-se1.protection.inumbo.net>)
 id 1pw8Fe-0001uS-Ch
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 21:18:34 +0000
Received: from mail-pg1-f174.google.com (mail-pg1-f174.google.com
 [209.85.215.174]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2a80f5a-ede5-11ed-b226-6b7b168915f2;
 Mon, 08 May 2023 23:18:33 +0200 (CEST)
Received: by mail-pg1-f174.google.com with SMTP id
 41be03b00d2f7-52c30fbccd4so4605531a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 08 May 2023 14:18:33 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([20.69.120.36])
 by smtp.gmail.com with ESMTPSA id
 c35-20020a631c23000000b00513ec871c01sm6699252pgc.16.2023.05.08.14.18.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 08 May 2023 14:18:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2a80f5a-ede5-11ed-b226-6b7b168915f2
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683580712; x=1686172712;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Z9LwMl3SaVI5b8sezFDQVcUBEvYrPsuHk5i8fgGhI70=;
        b=dRQ1o2a4swNAtvDeAqk3PDLdsymy1sdiyBuuLqn4CboEusFBPZexHSnnvjPJTWgSpe
         zCUqFQyMIf7rkb+qBz8Mtte+Vz8mQkmmq7vXnaLN5oZsrYJwtyAVg+VUGMkLrYXajkNE
         uIsTfxF5YT05vwcRwaMFAkIpVT0rseiBwvSF6CV34GHSdhL06ypC+eLcPa3lHEUbP+9j
         Cf8iip0ZU9V91tJdIE+PXW46H7ug0NcQnbIuYU/a6Y402qJ3gRakFK7jkNrp6kuVd+Zv
         sV+u6DrSu9+P+c77oK+FC7CesSPowOxXqDHIzVn+1/ZwY5cMU/Jeidz5CxnY/g9q8Ipg
         BcIg==
X-Gm-Message-State: AC+VfDxieX762xRAOk/wOH6GIjUGYXkBABinObUxujFviZSMMZidKtL0
	9TfMQfFBwtqHb+jRKFpTL8g=
X-Google-Smtp-Source: ACHHUZ6kteuJnNZnZsRe+bisWrFQUndJSqajeCoEM2/rFLcMVskwkNxu1jPZifG3ayQwvJCBSdGnnA==
X-Received: by 2002:a05:6a20:1591:b0:f0:ec64:f3de with SMTP id h17-20020a056a20159100b000f0ec64f3demr15058351pzj.25.1683580711711;
        Mon, 08 May 2023 14:18:31 -0700 (PDT)
Date: Mon, 8 May 2023 21:18:29 +0000
From: Wei Liu <wei.liu@kernel.org>
To: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>
Cc: Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
	Kees Cook <keescook@chromium.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	Liran Alon <liran.alon@oracle.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	Mihai =?utf-8?B?RG9uyJt1?= <mdontu@bitdefender.com>,
	=?utf-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?utf-8?Q?=C8=98tefan_=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
	x86@kernel.org, xen-devel@lists.xenproject.org,
	Wei Liu <wei.liu@kernel.org>
Subject: Re: [PATCH v1 6/9] KVM: x86: Add Heki hypervisor support
Message-ID: <ZFlnJRsJh2fX3IJb@liuwe-devbox-debian-v2>
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-7-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20230505152046.6575-7-mic@digikod.net>

On Fri, May 05, 2023 at 05:20:43PM +0200, Mickaël Salaün wrote:
> From: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> 
> Each supported hypervisor in x86 implements a struct x86_hyper_init to
> define the init functions for the hypervisor.  Define a new init_heki()
> entry point in struct x86_hyper_init.  Hypervisors that support Heki
> must define this init_heki() function.  Call init_heki() of the chosen
> hypervisor in init_hypervisor_platform().
> 
> Create a heki_hypervisor structure that each hypervisor can fill
> with its data and functions. This will allow the Heki feature to work
> in a hypervisor agnostic way.
> 
> Declare and initialize a "heki_hypervisor" structure for KVM so KVM can
> support Heki.  Define the init_heki() function for KVM.  In init_heki(),
> set the hypervisor field in the generic "heki" structure to the KVM
> "heki_hypervisor".  After this point, generic Heki code can access the
> KVM Heki data and functions.
> 
[...]
> +static void kvm_init_heki(void)
> +{
> +	long err;
> +
> +	if (!kvm_para_available())
> +		/* Cannot make KVM hypercalls. */
> +		return;
> +
> +	err = kvm_hypercall3(KVM_HC_LOCK_MEM_PAGE_RANGES, -1, -1, -1);

Why not do a proper version or capability check here? That way, if the
ABI or the set of supported features ever changes, we have something to
rely on.

Thanks,
Wei.


From xen-devel-bounces@lists.xenproject.org Mon May 08 21:33:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 21:33:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531928.827901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw8UE-0004JH-Bu; Mon, 08 May 2023 21:33:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531928.827901; Mon, 08 May 2023 21:33:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pw8UE-0004JA-9B; Mon, 08 May 2023 21:33:38 +0000
Received: by outflank-mailman (input) for mailman id 531928;
 Mon, 08 May 2023 21:33:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw8UC-0004Ih-KA; Mon, 08 May 2023 21:33:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw8UC-0003MS-Ht; Mon, 08 May 2023 21:33:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pw8UC-0007l7-64; Mon, 08 May 2023 21:33:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pw8UC-0006O6-5j; Mon, 08 May 2023 21:33:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YzKdj2q9rZV8iecyJaiAQVw9XNRhMFpMLQpRfKrT5CM=; b=Uc2cmsDQECULQceSiIRF8sL0C0
	scd/pb3/dYxhPnmQ1QJdSmbffLHGycM4o5L8+/Py52CPh6/+PitQHJhorkWpWYsfgIwNG0b7Fk0LS
	XU4eetoJeKWuoMWFsJh7vYIU2mVvMgWJmAP3oddMr15JC9jIp8P3MBSVQET2eh01kalA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180581-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180581: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5215cd5baf6609e54050c69909273b7f5161c59e
X-Osstest-Versions-That:
    ovmf=6eeb58ece38060be3b0f7111649a93cc8b2dca49
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 08 May 2023 21:33:36 +0000

flight 180581 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180581/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5215cd5baf6609e54050c69909273b7f5161c59e
baseline version:
 ovmf                 6eeb58ece38060be3b0f7111649a93cc8b2dca49

Last test of basis   180579  2023-05-08 13:40:39 Z    0 days
Testing same since   180581  2023-05-08 19:13:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6eeb58ece3..5215cd5baf  5215cd5baf6609e54050c69909273b7f5161c59e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 08 23:59:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 08 May 2023 23:59:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531940.827912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwAl9-0001W6-JH; Mon, 08 May 2023 23:59:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531940.827912; Mon, 08 May 2023 23:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwAl9-0001Vz-Ew; Mon, 08 May 2023 23:59:15 +0000
Received: by outflank-mailman (input) for mailman id 531940;
 Mon, 08 May 2023 23:59:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FoPu=A5=gmail.com=htejun@srs-se1.protection.inumbo.net>)
 id 1pwAl8-0001Vt-9S
 for xen-devel@lists.xenproject.org; Mon, 08 May 2023 23:59:14 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 534dac68-edfc-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 01:59:11 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id
 d9443c01a7336-1ab14cb3aaeso36432505ad.2
 for <xen-devel@lists.xenproject.org>; Mon, 08 May 2023 16:59:11 -0700 (PDT)
Received: from localhost
 (2603-800c-1a02-1bae-a7fa-157f-969a-4cde.res6.spectrum.com.
 [2603:800c:1a02:1bae:a7fa:157f:969a:4cde])
 by smtp.gmail.com with ESMTPSA id
 t24-20020a170902b21800b001a505f04a06sm45882plr.190.2023.05.08.16.59.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 08 May 2023 16:59:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 534dac68-edfc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683590350; x=1686182350;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:sender:from:to:cc:subject:date:message-id
         :reply-to;
        bh=2YJeCVahMakSFFIeHkE0L5vaAYWzeS9qst+vBo++bi0=;
        b=TPN5cpDiUj++UaBuLClkHYKSZJQANDd2z/20VxIZVWHMxUq8xHaiESgezp9rxD7Gua
         kwQsYUXcUlV+z/TE10zkBILwoSSFW9HIsSnlJiytJoJy7FpjLcJSkMTQ5FwrG0ZhPqpv
         mNklxZ3suiwvDBCj7EVZBtRDY5IACqgNnR8WuwKVGfhpG2I7i1f+Mkf7pj595ha9os6j
         TJJUiDx5lGMYJli6SFP6m5BzTEv5qxMT9T42KuwnS05NEL9gk7D+TTxRe4Ne2ObBXqzJ
         tDju3HLcbYeZK1v2hcxZ5e6OH+eh3geSlD4uCw+dEJXNmHBvdg/GJYTwaPZT50EygaGc
         RMPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683590350; x=1686182350;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:sender:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2YJeCVahMakSFFIeHkE0L5vaAYWzeS9qst+vBo++bi0=;
        b=N5CDiXqdbeTJeLyORa6q43ZQYcTEFRMLxDRKlMvDnsOILvQOXXDuvb7Wu95SRH+43y
         yinCKEaLaHTGixDMJ4vYrBlmh/TqfBt2ot/s6cvSh2tLGzo2OMu5y/kxURA9X8pll49e
         n8XO9BcabH3wO7SQB+ZHThHM69M+grJ6zmckRm8j8xDJE1znbIpt/TNkltVOEeeTCWQT
         9yP2tkn0Hf+zlPvGY1bPy4pD4lWMX0wWc2M6SH3ZQ4xOkjagrvlRy23RKTkXzxov4KfW
         LRtzqLy2euO319cT6jFh9G9L/GQK4JylLU24wRfxpGA2/UqUUJBIuJzbpsi7ISW8e3kF
         FnJg==
X-Gm-Message-State: AC+VfDz3aaEpPpzvPnsXWxczmI/Y1Nk0LJMq/HxsvhR5PASC6vd/ktea
	lbq+kGgwWDIYnlbg57bgbdY=
X-Google-Smtp-Source: ACHHUZ7b+WkCZIE72n5uctblOVhF8S4nx7C/p93x2y2+UjUpxZ3gKtCgsPeB10q3QmlKSF1IQOuFjA==
X-Received: by 2002:a17:902:a583:b0:1aa:f818:7a23 with SMTP id az3-20020a170902a58300b001aaf8187a23mr12371985plb.27.1683590349547;
        Mon, 08 May 2023 16:59:09 -0700 (PDT)
Sender: Tejun Heo <htejun@gmail.com>
Date: Mon, 8 May 2023 13:59:07 -1000
From: Tejun Heo <tj@kernel.org>
To: jiangshanlai@gmail.com
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 15/22] xen/pvcalls: Use alloc_ordered_workqueue() to
 create ordered workqueues
Message-ID: <ZFmMy_QxaOzIoy0P@slm.duckdns.org>
References: <20230421025046.4008499-1-tj@kernel.org>
 <20230421025046.4008499-16-tj@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230421025046.4008499-16-tj@kernel.org>

Applied to wq/for-6.5-cleanup-ordered.

Thanks.

-- 
tejun


From xen-devel-bounces@lists.xenproject.org Tue May 09 00:51:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 00:51:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531944.827922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwBZV-0008QZ-F9; Tue, 09 May 2023 00:51:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531944.827922; Tue, 09 May 2023 00:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwBZV-0008QS-C5; Tue, 09 May 2023 00:51:17 +0000
Received: by outflank-mailman (input) for mailman id 531944;
 Tue, 09 May 2023 00:51:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lV9y=A6=huawei.com=linyunsheng@srs-se1.protection.inumbo.net>)
 id 1pwBZU-0008QM-Mt
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 00:51:16 +0000
Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 96be2e78-ee03-11ed-b226-6b7b168915f2;
 Tue, 09 May 2023 02:51:12 +0200 (CEST)
Received: from dggpemm500005.china.huawei.com (unknown [172.30.72.53])
 by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4QFfh404RszpWF8;
 Tue,  9 May 2023 08:49:56 +0800 (CST)
Received: from [10.69.30.204] (10.69.30.204) by dggpemm500005.china.huawei.com
 (7.185.36.74) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Tue, 9 May
 2023 08:51:07 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96be2e78-ee03-11ed-b226-6b7b168915f2
Subject: Re: [PATCH RFC 2/2] net: remove __skb_frag_set_page()
To: Simon Horman <simon.horman@corigine.com>
CC: <netdev@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
	<virtualization@lists.linux-foundation.org>,
	<xen-devel@lists.xenproject.org>, <bpf@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <edumazet@google.com>, <davem@davemloft.net>,
	<kuba@kernel.org>, <pabeni@redhat.com>, <alexanderduyck@fb.com>,
	<jbrouer@redhat.com>, <ilias.apalodimas@linaro.org>
References: <20230508123922.39284-1-linyunsheng@huawei.com>
 <20230508123922.39284-3-linyunsheng@huawei.com>
 <ZFkHulUs7d1xWKSa@corigine.com>
From: Yunsheng Lin <linyunsheng@huawei.com>
Message-ID: <ca5dabb4-d875-7845-553f-915b3ce85be1@huawei.com>
Date: Tue, 9 May 2023 08:51:07 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
 Thunderbird/52.2.0
MIME-Version: 1.0
In-Reply-To: <ZFkHulUs7d1xWKSa@corigine.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.69.30.204]
X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To
 dggpemm500005.china.huawei.com (7.185.36.74)
X-CFilter-Loop: Reflected

On 2023/5/8 22:31, Simon Horman wrote:
> On Mon, May 08, 2023 at 08:39:22PM +0800, Yunsheng Lin wrote:
>> The remaining users calling __skb_frag_set_page() with
>> page being NULL seems to doing defensive programming, as
>> shinfo->nr_frags is already decremented, so remove them.
>>
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> 
> ...
> 
>> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>> index efaff5018af8..f3f08660ec30 100644
>> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>> @@ -1105,7 +1105,6 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
>>  			unsigned int nr_frags;
>>  
>>  			nr_frags = --shinfo->nr_frags;
> 
> Hi Yunsheng,
> 
> nr_frags is now  unused, other than being set on the line above.
> Probably this local variable can be removed.

Yes, will remove that.
Thanks.
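For reference, the removal Simon suggests would presumably look something like the following against the quoted hunk (a sketch only, not Yunsheng's actual follow-up patch):

```diff
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
-			unsigned int nr_frags;
-
-			nr_frags = --shinfo->nr_frags;
+			--shinfo->nr_frags;
```

The decrement's side effect on `shinfo->nr_frags` is kept; only the write to the now-unused local is dropped.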


From xen-devel-bounces@lists.xenproject.org Tue May 09 01:26:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 01:26:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531951.827944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwC7D-0001wT-6W; Tue, 09 May 2023 01:26:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531951.827944; Tue, 09 May 2023 01:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwC7D-0001wM-3C; Tue, 09 May 2023 01:26:07 +0000
Received: by outflank-mailman (input) for mailman id 531951;
 Tue, 09 May 2023 01:26:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwC7C-0001wC-1y; Tue, 09 May 2023 01:26:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwC7B-0007mK-Vi; Tue, 09 May 2023 01:26:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwC7B-0008TF-Cr; Tue, 09 May 2023 01:26:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwC7B-0003VR-CP; Tue, 09 May 2023 01:26:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AqTehIPyTriyPxUrAkxoTYK8iEuuU/f6ysUwPTqJt8s=; b=D2YorN+tbMEd2gcndeilR97xtC
	RsCjDQsuXEhll3mh25osrjvmenNnHSpd9A23VDw/gozxtLg7TlW71py5hm6iF6OihVEWxrp+vJheg
	eCH51nX0nJBDrlEIGedNsNBKJJsfR2j6UBHfEK3yTcxFnfz4f55RGoDdPrybsta0a7+g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180580-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180580: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a16fb78515d54be95f81c0d1c0a3a7b954a54d0a
X-Osstest-Versions-That:
    xen=e93e635e142d45e3904efb4a05e2b3b52a708b4f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 May 2023 01:26:05 +0000

flight 180580 xen-unstable real [real]
flight 180583 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180580/
http://logs.test-lab.xenproject.org/osstest/logs/180583/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install         fail pass in 180583-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 180583 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180572
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180572
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180572
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180572
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180572
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180572
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180572
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180572
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180572
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180572
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180572
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180572
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  a16fb78515d54be95f81c0d1c0a3a7b954a54d0a
baseline version:
 xen                  e93e635e142d45e3904efb4a05e2b3b52a708b4f

Last test of basis   180572  2023-05-08 01:53:35 Z    0 days
Testing same since   180580  2023-05-08 15:37:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Jennifer Herbert <jennifer.herbert@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e93e635e14..a16fb78515  a16fb78515d54be95f81c0d1c0a3a7b954a54d0a -> master


From xen-devel-bounces@lists.xenproject.org Tue May 09 06:53:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 06:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531968.827954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwHE3-0001rh-56; Tue, 09 May 2023 06:53:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531968.827954; Tue, 09 May 2023 06:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwHE3-0001ra-1G; Tue, 09 May 2023 06:53:31 +0000
Received: by outflank-mailman (input) for mailman id 531968;
 Tue, 09 May 2023 06:53:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ur7t=A6=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pwHE0-0001rP-MH
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 06:53:29 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2fea2a31-ee36-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 08:53:22 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id
 d9443c01a7336-1ab13da70a3so53776715ad.1
 for <xen-devel@lists.xenproject.org>; Mon, 08 May 2023 23:53:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fea2a31-ee36-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683615201; x=1686207201;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=5m8y8qVVSPfQ/fcaKQsBhFMEoo/I0V1CEIhBkXCm97g=;
        b=gWAE6hIn1z/KtzhyKwhEGQV8g/1Wi7Hr8hS7u4tgRcT5wm+jY8ODLb4GVRG3kEJYwB
         lBSDyrtsfO3EkS981tYdtW7LSe/pemsc5YHLhD9cchAuHpR+p3EyQI6xGPifpmgu1Fo9
         qjVSnyj0qqoytSsC130yZ2o70iK1EuAhrutHKtooZQWntIqNOZTtd5RRKJzZvfQ3TNH4
         g8iAJbydiu/mW69n2yBBpUSI4d42n8WWDbxvi+ct7GWDhkwLi+RbeSqwIdRbbgP2j56n
         +kVQEdF8nPxU6cZ8bdNPQwJTUStqPh2g99aoiok6W9XYALVwGNEPDlpoNMf5MyUgwxWQ
         GXhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683615201; x=1686207201;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=5m8y8qVVSPfQ/fcaKQsBhFMEoo/I0V1CEIhBkXCm97g=;
        b=im7jhyfq1eksz5Epn+O7vkiwvoBtcnM9ocOTXsucgHtgDsVIhYhoycjFUU4T+41T3n
         +PRYML9ZtIaCGlqd2JZrHwxYpFIXPh2kcgnDZ3rnzMbFiTvU78+kbkVUtAE1qWGvBjL+
         ukpESEx57BadqmOW8+IdXxUuKb5BCv7RBQHURqmhJTcY0QZlhIY9gqjMSy2eKWO+EshW
         rxPY29ZApNhv3RghNL7futbb78TPcNq/HRDT4BOLHi6BGYArGklN0P33AfgFK5ySHKdp
         JB2yBFktyxRSN04yBTYJ0a6H2MJSrDsOh3kYKq7T2d+yTyiU6olCeTfEnmcM2ULZxrPJ
         F1Qg==
X-Gm-Message-State: AC+VfDxsoqusvkAkNdIWPscy9nT0CFT2osyzL9zPXs4as2aPNQ7d7J7U
	vX2oKogTT4m5b5XB3xzf5w0jtc7yyvojxBrh24M=
X-Google-Smtp-Source: ACHHUZ4qPRDtKPZ+fz9q/4M3Obw7XM/rNocBKNqPEIYW304IifHDbAvgqsf5n0TajM1QGdYvkX9wN64ukfe9AwJZHto=
X-Received: by 2002:a17:903:1206:b0:19c:be57:9c82 with SMTP id
 l6-20020a170903120600b0019cbe579c82mr14881245plh.65.1683615200462; Mon, 08
 May 2023 23:53:20 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2tEbV0Y=p=NhT_8H1SeBzqXxUTS5R9pZu3_UYn5zU952A@mail.gmail.com>
 <CA+SAi2s7jUf4ZB6WCDqEbG5jV1A5XV=bJDiGOseQBBG+Xt9_vQ@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
 <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com> <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
 <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com> <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
In-Reply-To: <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Tue, 9 May 2023 09:58:04 +0300
Message-ID: <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com
Content-Type: multipart/alternative; boundary="000000000000f64c8c05fb3d35d5"

--000000000000f64c8c05fb3d35d5
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello guys,

I have a couple more questions.
Have you ever run Xen with cache coloring on the Zynq UltraScale+ MPSoC
zcu102 (xczu15eg)?
When did you last run Xen with cache coloring?
Which kernel version did you use for Dom0 the last time you ran Xen with
cache coloring?

Regards,
Oleg

Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:

> Hi Michal,
>
> Thanks.
>
> Regards,
> Oleg
>
> Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
>
>> Hi Oleg,
>>
>> Replying, so that you do not need to wait for Stefano.
>>
>> On 05/05/2023 10:28, Oleg Nikitenko wrote:
>> >
>> >
>> >
>> > Hello Stefano,
>> >
>> > I would like to try the Xen cache coloring feature from this repo:
>> https://xenbits.xen.org/git-http/xen.git
>> > Could you tell me which branch I should use?
>> The cache coloring feature is not part of the upstream tree; it is still
>> under review.
>> You can only find it integrated in the Xilinx Xen tree.
>>
>> ~Michal
>>
>> >
>> > Regards,
>> > Oleg
>> >
>> > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org
>> <mailto:sstabellini@kernel.org>>:
>> >
>> >     I am familiar with the zcu102, but I don't know how you could
>> possibly
>> >     generate an SError.
>> >
>> >     I suggest trying ImageBuilder [1] to generate the boot
>> >     configuration as a test, because it is known to work well on the
>> zcu102.
>> >
>> >     [1] https://gitlab.com/xen-project/imagebuilder
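
For readers unfamiliar with ImageBuilder, a minimal `uboot-script-gen` config
for a zcu102-style Xen/Dom0 boot might look like the sketch below. This is an
illustrative sketch only: the file names, memory bounds, and Xen command line
are placeholder assumptions, not values taken from this thread; consult
ImageBuilder's README for the authoritative variable list.

```shell
# ImageBuilder config sketch for a zcu102-style board (placeholder values)
MEMORY_START="0x0"          # base of DDR as seen by u-boot
MEMORY_END="0x80000000"     # adjust to the board's real DDR size
DEVICE_TREE="system.dtb"    # board device tree passed to Xen
XEN="xen"                   # Xen hypervisor binary
XEN_CMD="console=dtuart dom0_mem=1600M"  # example Xen command line
DOM0_KERNEL="Image"         # Dom0 Linux kernel image
DOM0_RAMDISK="initrd.cpio"  # Dom0 initrd
NUM_DOMUS=0                 # no DomUs in this minimal setup
UBOOT_SOURCE="boot.source"  # generated u-boot script source
UBOOT_SCRIPT="boot.scr"     # compiled boot script output
```

The boot script would then be generated with something along the lines of
`./scripts/uboot-script-gen -c <config> -d . -t tftp` (or `-t sd` for SD-card
boot), and the resulting boot.scr loaded by u-boot.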
>> >
>> >
>> >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
>> >     > Hello Stefano,
>> >     >
>> >     > Thanks for clarification.
>> >     > We use neither ImageBuilder nor a u-boot boot script.
>> >     > The model is zcu102-compatible.
>> >     >
>> >     > Regards,
>> >     > O.
>> >     >
>> >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <
>> sstabellini@kernel.org <mailto:sstabellini@kernel.org>>:
>> >     >       This is interesting. Are you using Xilinx hardware by any
>> chance? If so,
>> >     >       which board?
>> >     >
>> >     >       Are you using ImageBuilder to generate your boot.scr boot
>> script? If so,
>> >     >       could you please post your ImageBuilder config file? If
>> not, can you
>> >     >       post the source of your uboot boot script?
>> >     >
>> >     >       SErrors are supposed to be related to a hardware failure of
>> some kind.
>> >     >       You are not supposed to be able to trigger an SError easily
>> by
>> >     >       "mistake". I have not seen SErrors due to wrong cache
>> coloring
>> >     >       configurations on any Xilinx board before.
>> >     >
>> >     >       The differences between Xen with and without cache coloring
>> from a
>> >     >       hardware perspective are:
>> >     >
>> >     >       - With cache coloring, the SMMU is enabled and does address
>> translations
>> >     >         even for dom0. Without cache coloring the SMMU could be
>> disabled, and
>> >     >         if enabled, the SMMU doesn't do any address translations
>> for Dom0. If
>> >     >         there is a hardware failure related to SMMU address
>> translation it
>> >     >         could only trigger with cache coloring. This would be my
>> normal
>> >     >         suggestion for you to explore, but the failure happens
>> too early
>> >     >         before any DMA-capable device is programmed. So I don't
>> think this can
>> >     >         be the issue.
>> >     >
>> >     >       - With cache coloring, the memory allocation is very
>> different so you'll
>> >     >         end up using different DDR regions for Dom0. So if your
>> DDR is
>> >     >         defective, you might only see a failure with cache
>> coloring enabled
>> >     >         because you end up using different regions.
>> >     >
>> >     >
>> >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
>> >     >       > Hi Stefano,
>> >     >       >
>> >     >       > Thank you.
>> >     >       > If I build Xen without color support, this error does not occur.
>> >     >       > All the domains boot well.
>> >     >       > Hence it cannot be a hardware issue.
>> >     >       > This panic occurred while unpacking the rootfs.
>> >     >       > I attached the boot log of Xen/Dom0 without coloring.
>> >     >       > The highlighted strings are printed exactly after the place
>> where the panic first occurred.
>> >     >       >
>> >     >       >  Xen 4.16.1-pre
>> >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none))
>> (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
>> >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300
>> git:321687b231-dirty
>> >     >       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
>> >     >       > (XEN) Processor: 00000000410fd034: "ARM Limited",
>> variant: 0x0, part 0xd03, rev 0x4
>> >     >       > (XEN) 64-bit Execution:
>> >     >       > (XEN)   Processor Features: 0000000000002222
>> 0000000000000000
>> >     >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32
>> EL0:64+32
>> >     >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
>> >     >       > (XEN)   Debug Features: 0000000010305106 0000000000000000
>> >     >       > (XEN)   Auxiliary Features: 0000000000000000
>> 0000000000000000
>> >     >       > (XEN)   Memory Model Features: 0000000000001122
>> 0000000000000000
>> >     >       > (XEN)   ISA Features:  0000000000011120 0000000000000000
>> >     >       > (XEN) 32-bit Execution:
>> >     >       > (XEN)   Processor Features:
>> 0000000000000131:0000000000011011
>> >     >       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2
>> Jazelle
>> >     >       > (XEN)     Extensions: GenericTimer Security
>> >     >       > (XEN)   Debug Features: 0000000003010066
>> >     >       > (XEN)   Auxiliary Features: 0000000000000000
>> >     >       > (XEN)   Memory Model Features: 0000000010201105
>> 0000000040000000
>> >     >       > (XEN)                          0000000001260000
>> 0000000002102211
>> >     >       > (XEN)   ISA Features: 0000000002101110 0000000013112111
>> 0000000021232042
>> >     >       > (XEN)                 0000000001112131 0000000000011142
>> 0000000000011121
>> >     >       > (XEN) Using SMC Calling Convention v1.2
>> >     >       > (XEN) Using PSCI v1.1
>> >     >       > (XEN) SMP: Allowing 4 CPUs
>> >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq:
>> 100000 KHz
>> >     >       > (XEN) GICv2 initialization:
>> >     >       > (XEN)         gic_dist_addr=00000000f9010000
>> >     >       > (XEN)         gic_cpu_addr=00000000f9020000
>> >     >       > (XEN)         gic_hyp_addr=00000000f9040000
>> >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
>> >     >       > (XEN)         gic_maintenance_irq=25
>> >     >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
>> >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
>> >     >       > (XEN) Using scheduler: null Scheduler (null)
>> >     >       > (XEN) Initializing null scheduler
>> >     >       > (XEN) WARNING: This is experimental software in
>> development.
>> >     >       > (XEN) Use at your own risk.
>> >     >       > (XEN) Allocated console ring of 32 KiB.
>> >     >       > (XEN) CPU0: Guest atomics will try 12 times before
>> pausing the domain
>> >     >       > (XEN) Bringing up CPU1
>> >     >       > (XEN) CPU1: Guest atomics will try 13 times before
>> pausing the domain
>> >     >       > (XEN) CPU 1 booted.
>> >     >       > (XEN) Bringing up CPU2
>> >     >       > (XEN) CPU2: Guest atomics will try 13 times before
>> pausing the domain
>> >     >       > (XEN) CPU 2 booted.
>> >     >       > (XEN) Bringing up CPU3
>> >     >       > (XEN) CPU3: Guest atomics will try 13 times before
>> pausing the domain
>> >     >       > (XEN) Brought up 4 CPUs
>> >     >       > (XEN) CPU 3 booted.
>> >     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware
>> configuration...
>> >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
>> >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
>> >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48
>> register groups, mask 0x7fff
>> >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0
>> >     >       > stage-2 only)
>> >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA ->
>> 48-bit PA
>> >     >       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master
>> devices
>> >     >       > (XEN) I/O virtualisation enabled
>> >     >       > (XEN)  - Dom0 mode: Relaxed
>> >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>> >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR
>> 0x0000000080023558
>> >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per
>> sched-resource
>> >     >       > (XEN) alternatives: Patching with alt table
>> 00000000002cc5c8 -> 00000000002ccb2c
>> >     >       > (XEN) *** LOADING DOMAIN 0 ***
>> >     >       > (XEN) Loading d0 kernel from boot module @
>> 0000000001000000
>> >     >       > (XEN) Loading ramdisk from boot module @ 0000000002000000
>> >     >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
>> >     >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
>> >     >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
>> >     >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
>> >     >       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
>> >     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr
>> 0x000000087bf94000
>> >     >       > (XEN) Allocating PPI 16 for event channel interrupt
>> >     >       > (XEN) Extended region 0: 0x81200000->0xa0000000
>> >     >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
>> >     >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
>> >     >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
>> >     >       > (XEN) Extended region 4: 0x100000000->0x600000000
>> >     >       > (XEN) Extended region 5: 0x880000000->0x8000000000
>> >     >       > (XEN) Extended region 6: 0x8001000000->0x10000000000
>> >     >       > (XEN) Loading zImage from 0000000001000000 to
>> 0000000010000000-0000000010e41008
>> >     >       > (XEN) Loading d0 initrd from 0000000002000000 to
>> 0x0000000013600000-0x000000001ff3a617
>> >     >       > (XEN) Loading d0 DTB to
>> 0x0000000013400000-0x000000001340cbdc
>> >     >       > (XEN) Initial low memory virq threshold set at 0x4000
>> pages.
>> >     >       > (XEN) Std. Loglevel: All
>> >     >       > (XEN) Guest Loglevel: All
>> >     >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times
>> to switch input)
>> >     >       > (XEN) null.c:353: 0 <-- d0v0
>> >     >       > (XEN) Freed 356kB init memory.
>> >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
>> >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
>> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
>> to ICACTIVER4
>> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
>> to ICACTIVER8
>> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
>> to ICACTIVER12
>> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
>> to ICACTIVER16
>> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
>> to ICACTIVER20
>> >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff
>> to ICACTIVER0
>> >     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000
>> [0x410fd034]
>> >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1
>> (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU
>> >     >       Binutils)
>> >     >       > 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
>> >     >       > [    0.000000] Machine model: D14 Viper Board - White Unit
>> >     >       > [    0.000000] Xen 4.16 support found
>> >     >       > [    0.000000] Zone ranges:
>> >     >       > [    0.000000]   DMA      [mem
>> 0x0000000010000000-0x000000007fffffff]
>> >     >       > [    0.000000]   DMA32    empty
>> >     >       > [    0.000000]   Normal   empty
>> >     >       > [    0.000000] Movable zone start for each node
>> >     >       > [    0.000000] Early memory node ranges
>> >     >       > [    0.000000]   node   0: [mem
>> 0x0000000010000000-0x000000001fffffff]
>> >     >       > [    0.000000]   node   0: [mem
>> 0x0000000022000000-0x0000000022147fff]
>> >     >       > [    0.000000]   node   0: [mem
>> 0x0000000022200000-0x0000000022347fff]
>> >     >       > [    0.000000]   node   0: [mem
>> 0x0000000024000000-0x0000000027ffffff]
>> >     >       > [    0.000000]   node   0: [mem
>> 0x0000000030000000-0x000000007fffffff]
>> >     >       > [    0.000000] Initmem setup node 0 [mem
>> 0x0000000010000000-0x000000007fffffff]
>> >     >       > [    0.000000] On node 0, zone DMA: 8192 pages in
>> unavailable ranges
>> >     >       > [    0.000000] On node 0, zone DMA: 184 pages in
>> unavailable ranges
>> >     >       > [    0.000000] On node 0, zone DMA: 7352 pages in
>> unavailable ranges
>> >     >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
>> >     >       > [    0.000000] psci: probing for conduit method from DT.
>> >     >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
>> >     >       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
>> >     >       > [    0.000000] psci: Trusted OS migration not required
>> >     >       > [    0.000000] psci: SMC Calling Convention v1.1
>> >     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0
>> d32744 u65536
>> >     >       > [    0.000000] Detected VIPT I-cache on CPU0
>> >     >       > [    0.000000] CPU features: kernel page table isolation
>> forced ON by KASLR
>> >     >       > [    0.000000] CPU features: detected: Kernel page table
>> isolation (KPTI)
>> >     >       > [    0.000000] Built 1 zonelists, mobility grouping on.
>> Total pages: 403845
>> >     >       > [    0.000000] Kernel command line: console=hvc0
>> earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0
>> >     >       maxcpus=2
>> >     >       > [    0.000000] Unknown kernel command line parameters
>> "earlyprintk=xen fips=1", will be passed to user space.
>> >     >       > [    0.000000] Dentry cache hash table entries: 262144
>> (order: 9, 2097152 bytes, linear)
>> >     >       > [    0.000000] Inode-cache hash table entries: 131072
>> (order: 8, 1048576 bytes, linear)
>> >     >       > [    0.000000] mem auto-init: stack:off, heap alloc:on,
>> heap free:on
>> >     >       > [    0.000000] mem auto-init: clearing system memory may
>> take some time...
>> >     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K
>> kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss,
>> >     >       256944K reserved,
>> >     >       > 262144K cma-reserved)
>> >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0,
>> CPUs=2, Nodes=1
>> >     >       > [    0.000000] rcu: Hierarchical RCU implementation.
>> >     >       > [    0.000000] rcu: RCU event tracing is enabled.
>> >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8
>> to nr_cpu_ids=2.
>> >     >       > [    0.000000] rcu: RCU calculated value of
>> scheduler-enlistment delay is 25 jiffies.
>> >     >       > [    0.000000] rcu: Adjusting geometry for
>> rcu_fanout_leaf=16, nr_cpu_ids=2
>> >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated
>> irqs: 0
>> >     >       > [    0.000000] Root IRQ handler: gic_handle_irq
>> >     >       > [    0.000000] arch_timer: cp15 timer(s) running at
>> 100.00MHz (virt).
>> >     >       > [    0.000000] clocksource: arch_sys_counter: mask:
>> 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
>> >     >       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution
>> 10ns, wraps every 4398046511100ns
>> >     >       > [    0.000258] Console: colour dummy device 80x25
>> >     >       > [    0.310231] printk: console [hvc0] enabled
>> >     >       > [    0.314403] Calibrating delay loop (skipped), value
>> calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
>> >     >       > [    0.324851] pid_max: default: 32768 minimum: 301
>> >     >       > [    0.329706] LSM: Security Framework initializing
>> >     >       > [    0.334204] Yama: becoming mindful.
>> >     >       > [    0.337865] Mount-cache hash table entries: 4096
>> (order: 3, 32768 bytes, linear)
>> >     >       > [    0.345180] Mountpoint-cache hash table entries: 4096
>> (order: 3, 32768 bytes, linear)
>> >     >       > [    0.354743] xen:grant_table: Grant tables using
>> version 1 layout
>> >     >       > [    0.359132] Grant table initialized
>> >     >       > [    0.362664] xen:events: Using FIFO-based ABI
>> >     >       > [    0.366993] Xen: initializing cpu0
>> >     >       > [    0.370515] rcu: Hierarchical SRCU implementation.
>> >     >       > [    0.375930] smp: Bringing up secondary CPUs ...
>> >     >       > (XEN) null.c:353: 1 <-- d0v1
>> >     >       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff
>> to ICACTIVER0
>> >     >       > [    0.382549] Detected VIPT I-cache on CPU1
>> >     >       > [    0.388712] Xen: initializing cpu1
>> >     >       > [    0.388743] CPU1: Booted secondary processor
>> 0x0000000001 [0x410fd034]
>> >     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
>> >     >       > [    0.406941] SMP: Total of 2 processors activated.
>> >     >       > [    0.411698] CPU features: detected: 32-bit EL0 Support
>> >     >       > [    0.416888] CPU features: detected: CRC32 instructions
>> >     >       > [    0.422121] CPU: All CPU(s) started at EL1
>> >     >       > [    0.426248] alternatives: patching kernel code
>> >     >       > [    0.431424] devtmpfs: initialized
>> >     >       > [    0.441454] KASLR enabled
>> >     >       > [    0.441602] clocksource: jiffies: mask: 0xffffffff
>> max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
>> >     >       > [    0.448321] futex hash table entries: 512 (order: 3,
>> 32768 bytes, linear)
>> >     >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE
>> protocol family
>> >     >       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool
>> for atomic allocations
>> >     >       > [    0.503772] DMA: preallocated 256 KiB
>> GFP_KERNEL|GFP_DMA pool for atomic allocations
>> >     >       > [    0.511610] DMA: preallocated 256 KiB
>> GFP_KERNEL|GFP_DMA32 pool for atomic allocations
>> >     >       > [    0.519478] audit: initializing netlink subsys
>> (disabled)
>> >     >       > [    0.524985] audit: type=2000 audit(0.336:1):
>> state=initialized audit_enabled=0 res=1
>> >     >       > [    0.529169] thermal_sys: Registered thermal governor
>> 'step_wise'
>> >     >       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4
>> watchpoint registers.
>> >     >       > [    0.545608] ASID allocator initialised with 32768
>> entries
>> >     >       > [    0.551030] xen:swiotlb_xen: Warning: only able to
>> allocate 4 MB for software IO TLB
>> >     >       > [    0.559332] software IO TLB: mapped [mem
>> 0x0000000011800000-0x0000000011c00000] (4MB)
>> >     >       > [    0.583565] HugeTLB registered 1.00 GiB page size,
>> pre-allocated 0 pages
>> >     >       > [    0.584721] HugeTLB registered 32.0 MiB page size,
>> pre-allocated 0 pages
>> >     >       > [    0.591478] HugeTLB registered 2.00 MiB page size,
>> pre-allocated 0 pages
>> >     >       > [    0.598225] HugeTLB registered 64.0 KiB page size,
>> pre-allocated 0 pages
>> >     >       > [    0.636520] DRBG: Continuing without Jitter RNG
>> >     >       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
>> >     >       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
>> >     >       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
>> >     >       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
>> >     >       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
>> >     >       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
>> >     >       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
>> >     >       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
>> >     >       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
>> >     >       > [    1.350132] raid6: int64x8  xor()   773 MB/s
>> >     >       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
>> >     >       > [    1.486349] raid6: int64x4  xor()   851 MB/s
>> >     >       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
>> >     >       > [    1.622561] raid6: int64x2  xor()   744 MB/s
>> >     >       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
>> >     >       > [    1.758770] raid6: int64x1  xor()   517 MB/s
>> >     >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177
>> MB/s
>> >     >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
>> >     >       > [    1.767957] raid6: using neon recovery algorithm
>> >     >       > [    1.772824] xen:balloon: Initialising balloon driver
>> >     >       > [    1.778021] iommu: Default domain type: Translated
>> >     >       > [    1.782584] iommu: DMA domain TLB invalidation policy:
>> strict mode
>> >     >       > [    1.789149] SCSI subsystem initialized
>> >     >       > [    1.792820] usbcore: registered new interface driver
>> usbfs
>> >     >       > [    1.798254] usbcore: registered new interface driver
>> hub
>> >     >       > [    1.803626] usbcore: registered new device driver usb
>> >     >       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
>> >     >       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright
>> 2005-2007 Rodolfo Giometti <giometti@linux.it>
>> >     >       > [    1.822903] PTP clock support registered
>> >     >       > [    1.826893] EDAC MC: Ver: 3.0.0
>> >     >       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400:
>> Registered ZynqMP IPI mbox with TX/RX channels.
>> >     >       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600:
>> Registered ZynqMP IPI mbox with TX/RX channels.
>> >     >       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800:
>> Registered ZynqMP IPI mbox with TX/RX channels.
>> >     >       > [    1.855907] FPGA manager framework
>> >     >       > [    1.859952] clocksource: Switched to clocksource
>> arch_sys_counter
>> >     >       > [    1.871712] NET: Registered PF_INET protocol family
>> >     >       > [    1.871838] IP idents hash table entries: 32768
>> (order: 6, 262144 bytes, linear)
>> >     >       > [    1.879392] tcp_listen_portaddr_hash hash table
>> entries: 1024 (order: 2, 16384 bytes, linear)
>> >     >       > [    1.887078] Table-perturb hash table entries: 65536
>> (order: 6, 262144 bytes, linear)
>> >     >       > [    1.894846] TCP established hash table entries: 16384
>> (order: 5, 131072 bytes, linear)
>> >     >       > [    1.902900] TCP bind hash table entries: 16384 (order:
>> 6, 262144 bytes, linear)
>> >     >       > [    1.910350] TCP: Hash tables configured (established
>> 16384 bind 16384)
>> >     >       > [    1.916778] UDP hash table entries: 1024 (order: 3,
>> 32768 bytes, linear)
>> >     >       > [    1.923509] UDP-Lite hash table entries: 1024 (order:
>> 3, 32768 bytes, linear)
>> >     >       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol
>> family
>> >     >       > [    1.936834] RPC: Registered named UNIX socket
>> transport module.
>> >     >       > [    1.942342] RPC: Registered udp transport module.
>> >     >       > [    1.947088] RPC: Registered tcp transport module.
>> >     >       > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel
>> transport module.
>> >     >       > [    1.958334] PCI: CLS 0 bytes, default 64
>> >     >       > [    1.962709] Trying to unpack rootfs image as
>> initramfs...
>> >     >       > [    1.977090] workingset: timestamp_bits=62 max_order=19
>> bucket_order=0
>> >     >       > [    1.982863] Installing knfsd (copyright (C) 1996
>> okir@monad.swb.de).
>> >     >       > [    2.021045] NET: Registered PF_ALG protocol family
>> >     >       > [    2.021122] xor: measuring software checksum speed
>> >     >       > [    2.029347]    8regs           :  2366 MB/sec
>> >     >       > [    2.033081]    32regs          :  2802 MB/sec
>> >     >       > [    2.038223]    arm64_neon      :  2320 MB/sec
>> >     >       > [    2.038385] xor: using function: 32regs (2802 MB/sec)
>> >     >       > [    2.043614] Block layer SCSI generic (bsg) driver
>> version 0.4 loaded (major 247)
>> >     >       > [    2.050959] io scheduler mq-deadline registered
>> >     >       > [    2.055521] io scheduler kyber registered
>> >     >       > [    2.068227] xen:xen_evtchn: Event-channel device
>> installed
>> >     >       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ
>> sharing disabled
>> >     >       > [    2.076190] cacheinfo: Unable to detect cache
>> hierarchy for CPU 0
>> >     >       > [    2.085548] brd: module loaded
>> >     >       > [    2.089290] loop: module loaded
>> >     >       > [    2.089341] Invalid max_queues (4), will use default
>> max: 2.
>> >     >       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
>> >     >       > [    2.098655] xen_netfront: Initialising Xen virtual
>> ethernet driver
>> >     >       > [    2.104156] usbcore: registered new interface driver
>> rtl8150
>> >     >       > [    2.109813] usbcore: registered new interface driver
>> r8152
>> >     >       > [    2.115367] usbcore: registered new interface driver
>> asix
>> >     >       > [    2.120794] usbcore: registered new interface driver
>> ax88179_178a
>> >     >       > [    2.126934] usbcore: registered new interface driver
>> cdc_ether
>> >     >       > [    2.132816] usbcore: registered new interface driver
>> cdc_eem
>> >     >       > [    2.138527] usbcore: registered new interface driver
>> net1080
>> >     >       > [    2.144256] usbcore: registered new interface driver
>> cdc_subset
>> >     >       > [    2.150205] usbcore: registered new interface driver
>> zaurus
>> >     >       > [    2.155837] usbcore: registered new interface driver
>> cdc_ncm
>> >     >       > [    2.161550] usbcore: registered new interface driver
>> r8153_ecm
>> >     >       > [    2.168240] usbcore: registered new interface driver
>> cdc_acm
>> >     >       > [    2.173109] cdc_acm: USB Abstract Control Model driver
>> for USB modems and ISDN adapters
>> >     >       > [    2.181358] usbcore: registered new interface driver
>> uas
>> >     >       > [    2.186547] usbcore: registered new interface driver
>> usb-storage
>> >     >       > [    2.192643] usbcore: registered new interface driver
>> ftdi_sio
>> >     >       > [    2.198384] usbserial: USB Serial support registered
>> for FTDI USB Serial Device
>> >     >       > [    2.206118] udc-core: couldn't find an available UDC -
>> added [g_mass_storage] to list of pending drivers
>> >     >       > [    2.215332] i2c_dev: i2c /dev entries driver
>> >     >       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s,
>> nowayout=0)
>> >     >       > [    2.225923] device-mapper: uevent: version 1.0.3
>> >     >       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl
>> (2021-03-22) initialised: dm-devel@redhat.com
>> >     >       > [    2.239315] EDAC MC0: Giving out device to module 1
>> controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
>> >     >       > [    2.249405] EDAC DEVICE0: Giving out device to module
>> zynqmp-ocm-edac controller zynqmp_ocm: DEV
>> >     >       ff960000.memory-controller (INTERRUPT)
>> >     >       > [    2.261719] sdhci: Secure Digital Host Controller
>> Interface driver
>> >     >       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
>> >     >       > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver
>> helper
>> >     >       > [    2.278157] ledtrig-cpu: registered to indicate
>> activity on CPUs
>> >     >       > [    2.283816] zynqmp_firmware_probe Platform Management
>> API v1.1
>> >     >       > [    2.289554] zynqmp_firmware_probe Trustzone version
>> v1.0
>> >     >       > [    2.327875] securefw securefw: securefw probed
>> >     >       > [    2.328324] alg: No test for xilinx-zynqmp-aes
>> (zynqmp-aes)
>> >     >       > [    2.332563] zynqmp_aes
>> firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
>> >     >       > [    2.341183] alg: No test for xilinx-zynqmp-rsa
>> (zynqmp-rsa)
>> >     >       > [    2.347667] remoteproc remoteproc0:
>> ff9a0000.rf5ss:r5f_0 is available
>> >     >       > [    2.353003] remoteproc remoteproc1:
>> ff9a0000.rf5ss:r5f_1 is available
>> >     >       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA
>> Manager registered
>> >     >       > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen
>> Proxy registered
>> >     >       > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree
>> Probing
>> >     >       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version:
>> 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>> >     >       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to
>> register tamper handler. Retrying...
>> >     >       > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device
>> Tree Probing
>> >     >       > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device
>> registered
>> >     >       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device
>> Tree Probing
>> >     >       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build
>> parameters: VTI Count: 512 Event Count: 32
>> >     >       > [    2.420856] default preset
>> >     >       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device
>> registered
>> >     >       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device
>> Tree Probing
>> >     >       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device
>> registered
>> >     >       > [    2.441976] vmcu driver init
>> >     >       > [    2.444922] VMCU: : (240:0) registered
>> >     >       > [    2.444956] In K81 Updater init
>> >     >       > [    2.449003] pktgen: Packet Generator for packet
>> performance testing. Version: 2.75
>> >     >       > [    2.468833] Initializing XFRM netlink socket
>> >     >       > [    2.468902] NET: Registered PF_PACKET protocol family
>> >     >       > [    2.472729] Bridge firewalling registered
>> >     >       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
>> >     >       > [    2.481341] registered taskstats version 1
>> >     >       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic,
>> zoned=no, fsverity=no
>> >     >       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000
>> (irq = 36, base_baud = 6250000) is a xuartps
>> >     >       > [    2.507103] of-fpga-region fpga-full: FPGA Region
>> probed
>> >     >       > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller:
>> ZynqMP DMA driver Probe success
>> >     >       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
>> >     >       > [    2.946467] 2 fixed-partitions partitions found on MTD
>> device spi0.0
>> >     >       > [    2.952393] Creating 2 MTD partitions on "spi0.0":
>> >     >       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
>> >     >       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
>> >     >       > [    2.968694] macb ff0b0000.ethernet: Not enabling
>> partial store and forward
>> >     >       > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM
>> rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
>> >     >       > [    2.984472] macb ff0c0000.ethernet: Not enabling
>> partial store and forward
>> >     >       > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM
>> rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
>> >     >       > [    3.001043] viper_enet viper_enet: Viper power GPIOs
>> initialised
>> >     >       > [    3.007313] viper_enet viper_enet vnet0
>> (uninitialized): Validate interface QSGMII
>> >     >       > [    3.014914] viper_enet viper_enet vnet1
>> (uninitialized): Validate interface QSGMII
>> >     >       > [    3.022138] viper_enet viper_enet vnet1
>> (uninitialized): Validate interface type 18
>> >     >       > [    3.030274] viper_enet viper_enet vnet2
>> (uninitialized): Validate interface QSGMII
>> >     >       > [    3.037785] viper_enet viper_enet vnet3
>> (uninitialized): Validate interface QSGMII
>> >     >       > [    3.045301] viper_enet viper_enet: Viper enet
>> registered
>> >     >       > [    3.050958] xilinx-axipmon ffa00000.perf-monitor:
>> Probed Xilinx APM
>> >     >       > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor:
>> Probed Xilinx APM
>> >     >       > [    3.063538] xilinx-axipmon fd490000.perf-monitor:
>> Probed Xilinx APM
>> >     >       > [    3.069920] xilinx-axipmon ffa10000.perf-monitor:
>> Probed Xilinx APM
>> >     >       > [    3.097729] si70xx: probe of 2-0040 failed with error
>> -5
>> >     >       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx
>> Watchdog Timer with timeout 60s
>> >     >       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx
>> Watchdog Timer with timeout 10s
>> >     >       > [    3.112457] viper-tamper viper-tamper: Device
>> registered
>> >     >       > [    3.117593] active_bank active_bank: boot bank: 1
>> >     >       > [    3.122184] active_bank active_bank: boot mode: (0x02)
>> qspi32
>> >     >       > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree
>> Probing
>> >     >       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version:
>> 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>> >     >       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler
>> registered
>> >     >       > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
>> >     >       > [    3.153007] lpc55_l2 spi1.0: registered handler for
>> protocol 0
>> >     >       > [    3.158582] lpc55_user lpc55_user: The major number
>> for your device is 236
>> >     >       > [    3.165976] lpc55_l2 spi1.0: registered handler for
>> protocol 1
>> >     >       > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
>> bad result: 1
>> >     >       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
>> >     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
>> >     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
>> >     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
>> >     >       > [    3.202932] mmc0: SDHCI controller on ff160000.mmc
>> [ff160000.mmc] using ADMA 64-bit
>> >     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
>> >     >       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
>> >     >       > [    3.284438] mmc0: new HS200 MMC card at address 0001
>> >     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
>> >     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
>> >     >       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
>> >     >       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
>> >     >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB,
>> chardev (244:0)
>> >     >       > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
>> bad result: 1
>> >     >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to
>> read the hardware clock
>> >     >       > [    3.591252] cdns-i2c ff020000.i2c: recovery
>> information complete
>> >     >       > [    3.597085] at24 0-0050: supply vcc not found, using
>> dummy regulator
>> >     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
>> >     >       > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
>> >     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
>> >     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
>> >     >       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
>> >     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
>> >     >       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
>> >     >       > [    3.639104] k81_bootloader 0-0010: probe
>> >     >       > [    3.641628] VMCU: : (235:0) registered
>> >     >       > [    3.641635] k81_bootloader 0-0010: probe completed
>> >     >       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio
>> ff020000 irq 28
>> >     >       > [    3.669154] cdns-i2c ff030000.i2c: recovery
>> information complete
>> >     >       > [    3.675412] lm75 1-0048: supply vs not found, using
>> dummy regulator
>> >     >       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
>> >     >       > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
>> >     >       > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
>> >     >       > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
>> >     >       > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
>> >     >       > [    3.705157] pca954x 1-0070: registered 4 multiplexed
>> busses for I2C switch pca9546
>> >     >       > [    3.713049] at24 1-0054: supply vcc not found, using
>> dummy regulator
>> >     >       > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM,
>> read-only
>> >     >       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio
>> ff030000 irq 29
>> >     >       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum
>> power 2.0W
>> >     >       > [    3.737549] sfp_register_socket: got sfp_bus
>> >     >       > [    3.740709] sfp_register_socket: register sfp_bus
>> >     >       > [    3.745459] sfp_register_bus: ops ok!
>> >     >       > [    3.749179] sfp_register_bus: Try to attach
>> >     >       > [    3.753419] sfp_register_bus: Attach succeeded
>> >     >       > [    3.757914] sfp_register_bus: upstream ops attach
>> >     >       > [    3.762677] sfp_register_bus: Bus registered
>> >     >       > [    3.766999] sfp_register_socket: register sfp_bus
>> succeeded
>> >     >       > [    3.775870] of_cfs_init
>> >     >       > [    3.776000] of_cfs_init: OK
>> >     >       > [    3.778211] clk: Not disabling unused clocks
>> >     >       > [   11.278477] Freeing initrd memory: 206056K
>> >     >       > [   11.279406] Freeing unused kernel memory: 1536K
>> >     >       > [   11.314006] Checked W+X mappings: passed, no W+X pages
>> found
>> >     >       > [   11.314142] Run /init as init process
>> >     >       > INIT: version 3.01 booting
>> >     >       > fsck (busybox 1.35.0)
>> >     >       > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600
>> blocks
>> >     >       > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600
>> blocks
>> >     >       > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
>> >     >       > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous),
>> 663/16384 blocks
>> >     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem
>> without journal. Opts: (null). Quota mode: disabled.
>> >     >       > Starting random number generator daemon.
>> >     >       > [   11.580662] random: crng init done
>> >     >       > Starting udev
>> >     >       > [   11.613159] udevd[142]: starting version 3.2.10
>> >     >       > [   11.620385] udevd[143]: starting eudev-3.2.10
>> >     >       > [   11.704481] macb ff0b0000.ethernet control_red:
>> renamed from eth0
>> >     >       > [   11.720264] macb ff0c0000.ethernet control_black:
>> renamed from eth1
>> >     >       > [   12.063396] ip_local_port_range: prefer different
>> parity for start/end values.
>> >     >       > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
>> bad result: 1
>> >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>> >     >       > Mon Feb 27 08:40:53 UTC 2023
>> >     >       > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time:
>> bad result
>> >     >       > hwclock: RTC_SET_TIME: Invalid exchange
>> >     >       > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
>> bad result: 1
>> >     >       > Starting mcud
>> >     >       > INIT: Entering runlevel: 5
>> >     >       > Configuring network interfaces... done.
>> >     >       > resetting network interface
>> >     >       > [   12.718295] macb ff0b0000.ethernet control_red: PHY
>> [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>> >     >       > [   12.723919] macb ff0b0000.ethernet control_red:
>> configuring for phy/gmii link mode
>> >     >       > [   12.732151] pps pps0: new PPS source ptp0
>> >     >       > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp
>> clock registered.
>> >     >       > [   12.745724] macb ff0c0000.ethernet control_black: PHY
>> [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY]
>> >     >       (irq=POLL)
>> >     >       > [   12.753469] macb ff0c0000.ethernet control_black:
>> configuring for phy/gmii link mode
>> >     >       > [   12.761804] pps pps1: new PPS source ptp1
>> >     >       > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp
>> clock registered.
>> >     >       > Auto-negotiation: off
>> >     >       > Auto-negotiation: off
>> >     >       > [   16.828151] macb ff0b0000.ethernet control_red: unable
>> to generate target frequency: 125000000 Hz
>> >     >       > [   16.834553] macb ff0b0000.ethernet control_red: Link
>> is Up - 1Gbps/Full - flow control off
>> >     >       > [   16.860552] macb ff0c0000.ethernet control_black:
>> unable to generate target frequency: 125000000 Hz
>> >     >       > [   16.867052] macb ff0c0000.ethernet control_black: Link
>> is Up - 1Gbps/Full - flow control off
>> >     >       > Starting Failsafe Secure Shell server in port 2222: sshd
>> >     >       > done.
>> >     >       > Starting rpcbind daemon...done.
>> >     >       >
>> >     >       > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time:
>> bad result: 1
>> >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>> >     >       > Starting State Manager Service
>> >     >       > Start state-manager restarter...
>> >     >       > (XEN) d0v1 Forwarding AES operation: 3254779951
>> >     >       > Starting /usr/sbin/xenstored....[   17.265256] BTRFS:
>> device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744
>> >     >       /dev/dm-0
>> >     >       > scanned by udevd (385)
>> >     >       > [   17.349933] BTRFS info (device dm-0): disk space
>> caching is enabled
>> >     >       > [   17.350670] BTRFS info (device dm-0): has skinny
>> extents
>> >     >       > [   17.364384] BTRFS info (device dm-0): enabling ssd
>> optimizations
>> >     >       > [   17.830462] BTRFS: device fsid
>> 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6
>> /dev/mapper/client_prov scanned by
>> >     >       mkfs.btrfs
>> >     >       > (526)
>> >     >       > [   17.872699] BTRFS info (device dm-1): using free space
>> tree
>> >     >       > [   17.872771] BTRFS info (device dm-1): has skinny
>> extents
>> >     >       > [   17.878114] BTRFS info (device dm-1): flagging fs with
>> big metadata feature
>> >     >       > [   17.894289] BTRFS info (device dm-1): enabling ssd
>> optimizations
>> >     >       > [   17.895695] BTRFS info (device dm-1): checking UUID
>> tree
>> >     >       >
>> >     >       > Setting domain 0 name, domid and JSON config...
>> >     >       > Done setting up Dom0
>> >     >       > Starting xenconsoled...
>> >     >       > Starting QEMU as disk backend for dom0
>> >     >       > Starting domain watchdog daemon: xenwatchdogd startup
>> >     >       >
>> >     >       > [   18.408647] BTRFS: device fsid
>> 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6
>> /dev/mapper/client_config scanned by
>> >     >       mkfs.btrfs
>> >     >       > (574)
>> >     >       > [done]
>> >     >       > [   18.465552] BTRFS info (device dm-2): using free space
>> tree
>> >     >       > [   18.465629] BTRFS info (device dm-2): has skinny
>> extents
>> >     >       > [   18.471002] BTRFS info (device dm-2): flagging fs with
>> big metadata feature
>> >     >       > Starting crond: [   18.482371] BTRFS info (device dm-2):
>> enabling ssd optimizations
>> >     >       > [   18.486659] BTRFS info (device dm-2): checking UUID
>> tree
>> >     >       > OK
>> >     >       > starting rsyslogd ... Log partition ready after 0 poll
>> loops
>> >     >       > done
>> >     >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is
>> unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
>> >     >       > [   18.670637] BTRFS: device fsid
>> 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned
>> by udevd (518)
>> >     >       >
>> >     >       > Please insert USB token and enter your role in login
>> prompt.
>> >     >       >
>> >     >       > login:
>> >     >       >
>> >     >       > Regards,
>> >     >       > O.
>> >     >       >
>> >     >       >
>> >     >       > On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <
>> sstabellini@kernel.org>:
>> >     >       >       Hi Oleg,
>> >     >       >
>> >     >       >       Here is the issue from your logs:
>> >     >       >
>> >     >       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
>> >     >       >
>> >     >       >       SErrors are special signals to notify software of
>> serious hardware
>> >     >       >       errors.  Something is going very wrong. Defective
>> hardware is a
>> >     >       >       possibility.  Another possibility is software
>> accessing address ranges
>> >     >       >       that it is not supposed to; that sometimes causes
>> SErrors.
>> >     >       >
>> >     >       >       Cheers,
>> >     >       >
>> >     >       >       Stefano
>> >     >       >
>> >     >       >
>> >     >       >
>> >     >       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>> >     >       >
>> >     >       >       > Hello,
>> >     >       >       >
>> >     >       >       > Thanks guys.
>> >     >       >       > I found out where the problem was.
>> >     >       >       > Now dom0 boots further, but I have hit a new problem:
>> >     >       >       > a kernel panic during Dom0 loading.
>> >     >       >       > Maybe someone can suggest something?
>> >     >       >       >
>> >     >       >       > Regards,
>> >     >       >       > O.
>> >     >       >       >
>> >     >       >       > [    3.771362] sfp_register_bus: upstream ops
>> attach
>> >     >       >       > [    3.776119] sfp_register_bus: Bus registered
>> >     >       >       > [    3.780459] sfp_register_socket: register
>> sfp_bus succeeded
>> >     >       >       > [    3.789399] of_cfs_init
>> >     >       >       > [    3.789499] of_cfs_init: OK
>> >     >       >       > [    3.791685] clk: Not disabling unused clocks
>> >     >       >       > [   11.010355] SError Interrupt on CPU0, code
>> 0xbe000000 -- SError
>> >     >       >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0
>> Not tainted 5.15.72-xilinx-v2022.1 #1
>> >     >       >       > [   11.010393] Workqueue: events_unbound
>> async_run_entry_fn
>> >     >       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN
>> -UAO -TCO -DIT -SSBS BTYPE=--)
>> >     >       >       > [   11.010422] pc : simple_write_end+0xd0/0x130
>> >     >       >       > [   11.010431] lr :
>> generic_perform_write+0x118/0x1e0
>> >     >       >       > [   11.010438] sp : ffffffc00809b910
>> >     >       >       > [   11.010441] x29: ffffffc00809b910 x28:
>> 0000000000000000 x27: ffffffef69ba88c0
>> >     >       >       > [   11.010451] x26: 0000000000003eec x25:
>> ffffff807515db00 x24: 0000000000000000
>> >     >       >       > [   11.010459] x23: ffffffc00809ba90 x22:
>> 0000000002aac000 x21: ffffff807315a260
>> >     >       >       > [   11.010472] x20: 0000000000001000 x19:
>> fffffffe02000000 x18: 0000000000000000
>> >     >       >       > [   11.010481] x17: 00000000ffffffff x16:
>> 0000000000008000 x15: 0000000000000000
>> >     >       >       > [   11.010490] x14: 0000000000000000 x13:
>> 0000000000000000 x12: 0000000000000000
>> >     >       >       > [   11.010498] x11: 0000000000000000 x10:
>> 0000000000000000 x9 : 0000000000000000
>> >     >       >       > [   11.010507] x8 : 0000000000000000 x7 :
>> ffffffef693ba680 x6 : 000000002d89b700
>> >     >       >       > [   11.010515] x5 : fffffffe02000000 x4 :
>> ffffff807315a3c8 x3 : 0000000000001000
>> >     >       >       > [   11.010524] x2 : 0000000002aab000 x1 :
>> 0000000000000001 x0 : 0000000000000005
>> >     >       >       > [   11.010534] Kernel panic - not syncing:
>> Asynchronous SError Interrupt
>> >     >       >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0
>> Not tainted 5.15.72-xilinx-v2022.1 #1
>> >     >       >       > [   11.010545] Hardware name: D14 Viper Board -
>> White Unit (DT)
>> >     >       >       > [   11.010548] Workqueue: events_unbound
>> async_run_entry_fn
>> >     >       >       > [   11.010556] Call trace:
>> >     >       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
>> >     >       >       > [   11.010567]  show_stack+0x18/0x2c
>> >     >       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
>> >     >       >       > [   11.010583]  dump_stack+0x18/0x34
>> >     >       >       > [   11.010588]  panic+0x14c/0x2f8
>> >     >       >       > [   11.010597]  print_tainted+0x0/0xb0
>> >     >       >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
>> >     >       >       > [   11.010614]  do_serror+0x28/0x60
>> >     >       >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
>> >     >       >       > [   11.010628]  el1h_64_error+0x78/0x7c
>> >     >       >       > [   11.010633]  simple_write_end+0xd0/0x130
>> >     >       >       > [   11.010639]  generic_perform_write+0x118/0x1e0
>> >     >       >       > [   11.010644]
>>  __generic_file_write_iter+0x138/0x1c4
>> >     >       >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
>> >     >       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
>> >     >       >       > [   11.010665]  kernel_write+0x88/0x160
>> >     >       >       > [   11.010673]  xwrite+0x44/0x94
>> >     >       >       > [   11.010680]  do_copy+0xa8/0x104
>> >     >       >       > [   11.010686]  write_buffer+0x38/0x58
>> >     >       >       > [   11.010692]  flush_buffer+0x4c/0xbc
>> >     >       >       > [   11.010698]  __gunzip+0x280/0x310
>> >     >       >       > [   11.010704]  gunzip+0x1c/0x28
>> >     >       >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>> >     >       >       > [   11.010715]  do_populate_rootfs+0x80/0x164
>> >     >       >       > [   11.010722]  async_run_entry_fn+0x48/0x164
>> >     >       >       > [   11.010728]  process_one_work+0x1e4/0x3a0
>> >     >       >       > [   11.010736]  worker_thread+0x7c/0x4c0
>> >     >       >       > [   11.010743]  kthread+0x120/0x130
>> >     >       >       > [   11.010750]  ret_from_fork+0x10/0x20
>> >     >       >       > [   11.010757] SMP: stopping secondary CPUs
>> >     >       >       > [   11.010784] Kernel Offset: 0x2f61200000 from
>> 0xffffffc008000000
>> >     >       >       > [   11.010788] PHYS_OFFSET: 0x0
>> >     >       >       > [   11.010790] CPU features: 0x00000401,00000842
>> >     >       >       > [   11.010795] Memory Limit: none
>> >     >       >       > [   11.277509] ---[ end Kernel panic - not
>> syncing: Asynchronous SError Interrupt ]---
>> >     >       >       >
>> >     >       >       > On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
>> >     >       >       >       Hi Oleg,
>> >     >       >       >
>> >     >       >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
>> >     >       >       >       >
>> >     >       >       >       >
>> >     >       >       >       >
>> >     >       >       >       > Hello Michal,
>> >     >       >       >       >
>> >     >       >       >       > I was not able to enable earlyprintk in
>> Xen so far.
>> >     >       >       >       > I decided to choose another way.
>> >     >       >       >       > This is the Xen command line that I found
>> out in full.
>> >     >       >       >       >
>> >     >       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>> >     >       >       >       Yes, adding a printk() in Xen was also a
>> good idea.
>> >     >       >       >
>> >     >       >       >       >
>> >     >       >       >       > So you are absolutely right about the
>> command line.
>> >     >       >       >       > Now I am going to find out why xen did
>> not have the correct parameters from the device tree.
>> >     >       >       >       Maybe you will find this document helpful:
>> >     >       >       >
>> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
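[Editor's note: booting.txt describes passing Xen's command line through the device tree /chosen node. A minimal sketch of that node, with property names per that document; the argument values here are illustrative, not Oleg's exact setup:]

```dts
/ {
    chosen {
        /* Command line consumed by the Xen hypervisor itself */
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M";
        /* Command line handed to the dom0 kernel */
        xen,dom0-bootargs = "console=hvc0";
    };
};
```

If U-Boot or the boot script regenerates or overwrites /chosen, properties set in the source .dts may never reach Xen, which would match the symptoms discussed in this thread.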
>> >     >       >       >
>> >     >       >       >       ~Michal
>> >     >       >       >
>> >     >       >       >       >
>> >     >       >       >       > Regards,
>> >     >       >       >       > Oleg
>> >     >       >       >       >
>> >     >       >       >       > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
>> >     >       >       >       >
>> >     >       >       >       >
>> >     >       >       >       >     On 21/04/2023 10:04, Oleg Nikitenko
>> wrote:
>> >     >       >       >       >     >
>> >     >       >       >       >     >
>> >     >       >       >       >     >
>> >     >       >       >       >     > Hello Michal,
>> >     >       >       >       >     >
>> >     >       >       >       >     > Yes, I use yocto.
>> >     >       >       >       >     >
>> >     >       >       >       >     > Yesterday all day long I tried to
>> follow your suggestions.
>> >     >       >       >       >     > I faced a problem.
>> >     >       >       >       >     > Manually in the xen config build
>> file I pasted the strings:
>> >     >       >       >       >     In the .config file or in some Yocto
>> file (listing additional Kconfig options) added to SRC_URI?
>> >     >       >       >       >     You shouldn't really modify .config
>> file but if you do, you should execute "make olddefconfig"
>> >     >       afterwards.
>> >     >       >       >       >
>> >     >       >       >       >     >
>> >     >       >       >       >     > CONFIG_EARLY_PRINTK
>> >     >       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
>> >     >       >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
>> >     >       >       >       >     I hope you added =y to them.
>> >     >       >       >       >
>> >     >       >       >       >     Anyway, you have at least the
>> following solutions:
>> >     >       >       >       >     1) Run bitbake xen -c menuconfig to
>> properly set early printk
>> >     >       >       >       >     2) Find out how you enable other
>> Kconfig options in your project (e.g. CONFIG_COLORING=y that is not
>> >     >       enabled by
>> >     >       >       default)
>> >     >       >       >       >     3) Append the following to
>> "xen/arch/arm/configs/arm64_defconfig":
>> >     >       >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
>> >     >       >       >       >
>> >     >       >       >       >     ~Michal
>> >     >       >       >       >
>> >     >       >       >       >     >
>> >     >       >       >       >     > Host hangs in build time.
>> >     >       >       >       >     > Maybe I did not set something in
>> the config build file ?
>> >     >       >       >       >     >
>> >     >       >       >       >     > Regards,
>> >     >       >       >       >     > Oleg
>> >     >       >       >       >     >
>> >     >       >       >       >     > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>> >     >       >       >       >     >
>> >     >       >       >       >     >     Thanks Michal,
>> >     >       >       >       >     >
>> >     >       >       >       >     >     You gave me an idea.
>> >     >       >       >       >     >     I am going to try it today.
>> >     >       >       >       >     >
>> >     >       >       >       >     >     Regards,
>> >     >       >       >       >     >     O.
>> >     >       >       >       >     >
>> >     >       >       >       >     >     On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>> >     >       >       >       >     >
>> >     >       >       >       >     >         Thanks Stefano.
>> >     >       >       >       >     >
>> >     >       >       >       >     >         I am going to do it today.
>> >     >       >       >       >     >
>> >     >       >       >       >     >         Regards,
>> >     >       >       >       >     >         O.
>> >     >       >       >       >     >
>> >     >       >       >       >     >         On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> >     >       >       >       >     >
>> >     >       >       >       >     >             On Wed, 19 Apr 2023,
>> Oleg Nikitenko wrote:
>> >     >       >       >       >     >             > Hi Michal,
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             > I corrected xen's
>> command line.
>> >     >       >       >       >     >             > Now it is
>> >     >       >       >       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>> >     >       >       >       >     >
>> >     >       >       >       >     >             4 colors is way too
>> many for Xen; just do xen_colors=0-0. There is no
>> >     >       >       >       >     >             advantage in using more
>> than 1 color for Xen.
>> >     >       >       >       >     >
>> >     >       >       >       >     >             4 colors is too few for
>> dom0, if you are giving 1600M of memory to Dom0.
>> >     >       >       >       >     >             Each color is 256M. For
>> 1600M you should give at least 7 colors. Try:
>> >     >       >       >       >     >
>> >     >       >       >       >     >             xen_colors=0-0
>> dom0_colors=1-8
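[Editor's note: the arithmetic behind that sizing advice can be sketched as follows. The 4 GiB total RAM and 16 available colors are assumed platform figures consistent with the boot log below, not values stated by Stefano; the helper is illustrative, not Xen code:]

```python
import math

# Rule-of-thumb sizing for dom0 cache colors (illustrative sketch).
# Assumed figures: 4096 MiB RAM split across 16 colors, so each color
# can serve 4096 / 16 = 256 MiB of allocatable memory.
TOTAL_RAM_MIB = 4096
NUM_COLORS = 16
PER_COLOR_MIB = TOTAL_RAM_MIB // NUM_COLORS  # 256 MiB per color

def colors_needed(dom0_mem_mib):
    """Minimum number of colors whose pooled capacity covers dom0_mem."""
    return math.ceil(dom0_mem_mib / PER_COLOR_MIB)

print(colors_needed(1600))  # 7: hence "at least 7 colors" for dom0_mem=1600M
```

With only 4 colors (0-3 or 4-7), the colored pool holds at most 4 x 256M = 1024M, which cannot satisfy a 1600M dom0 allocation.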
>> >     >       >       >       >     >
>> >     >       >       >       >     >
>> >     >       >       >       >     >
>> >     >       >       >       >     >             > Unfortunately the
>> result was the same.
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             > (XEN)  - Dom0 mode:
>> Relaxed
>> >     >       >       >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>> >     >       >       >       >     >             > (XEN) P2M: 3 levels
>> with order-1 root, VTCR 0x0000000080023558
>> >     >       >       >       >     >             > (XEN) Scheduling
>> granularity: cpu, 1 CPU per sched-resource
>> >     >       >       >       >     >             > (XEN) Coloring
>> general information
>> >     >       >       >       >     >             > (XEN) Way size: 64kB
>> >     >       >       >       >     >             > (XEN) Max. number of
>> colors available: 16
>> >     >       >       >       >     >             > (XEN) Xen color(s): [ 0 ]
>> >     >       >       >       >     >             > (XEN) alternatives:
>> Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>> >     >       >       >       >     >             > (XEN) Color array
>> allocation failed for dom0
>> >     >       >       >       >     >             > (XEN)
>> >     >       >       >       >     >             > (XEN)
>> ****************************************
>> >     >       >       >       >     >             > (XEN) Panic on CPU 0:
>> >     >       >       >       >     >             > (XEN) Error creating
>> domain 0
>> >     >       >       >       >     >             > (XEN)
>> ****************************************
>> >     >       >       >       >     >             > (XEN)
>> >     >       >       >       >     >             > (XEN) Reboot in five
>> seconds...
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             > I am going to find
>> out how command line arguments passed and parsed.
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             > Regards,
>> >     >       >       >       >     >             > Oleg
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>> >     >       >       >       >     >             >       Hi Michal,
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             > You pointed me right at
>> the problem. Thank you.
>> >     >       >       >       >     >             > I am going to use
>> your point.
>> >     >       >       >       >     >             > Let's see what
>> happens.
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             > Regards,
>> >     >       >       >       >     >             > Oleg
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
>> >     >       >       >       >     >             >       Hi Oleg,
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >       On 19/04/2023
>> 09:03, Oleg Nikitenko wrote:
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       > Hello Stefano,
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       > Thanks for
>> the clarification.
>> >     >       >       >       >     >             >       > My company
>> uses yocto for image generation.
>> >     >       >       >       >     >             >       > What kind of
>> information do you need to consult me in this case ?
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       > Maybe module
>> sizes/addresses which were mentioned by @Julien Grall <julien@xen.org>?
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >       Sorry for
>> jumping into discussion, but FWICS the Xen command line you provided
>> >     >       seems to be
>> >     >       >       not the
>> >     >       >       >       one
>> >     >       >       >       >     >             >       Xen booted
>> with. The error you are observing most likely is due to dom0 colors
>> >     >       >       configuration not
>> >     >       >       >       being
>> >     >       >       >       >     >             >       specified (i.e.
>> lack of dom0_colors=<> parameter). Although in the command line you
>> >     >       >       provided, this
>> >     >       >       >       parameter
>> >     >       >       >       >     >             >       is set, I
>> strongly doubt that this is the actual command line in use.
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >       You wrote:
>> >     >       >       >       >     >             >
>>  xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >       but:
>> >     >       >       >       >     >             >       1) way_szize
>> has a typo
>> >     >       >       >       >     >             >       2) you
>> specified 4 colors (0-3) for Xen, but the boot log says that Xen has only
>> >     >       one:
>> >     >       >       >       >     >             >       (XEN) Xen
>> color(s): [ 0 ]
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >       This makes me
>> believe that no colors configuration actually ends up in the command line
>> >     >       that Xen
>> >     >       >       booted
>> >     >       >       >       with.
>> >     >       >       >       >     >             >       Single color
>> for Xen is a "default if not specified" and way size was probably
>> >     >       calculated
>> >     >       >       by asking
>> >     >       >       >       HW.
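[Editor's note: the way-size and color figures in the boot log hang together arithmetically. For a set-associative cache, the way size is the total size divided by the number of ways, and the number of usable page colors is the way size divided by the page size. A small sketch; the 1 MiB / 16-way L2 numbers are assumed ZynqMP (Cortex-A53 cluster) figures used only to reproduce the 64kB / 16-color values from the log:]

```python
PAGE_SIZE = 4 * 1024  # 4 KiB pages

def way_size(cache_size, ways):
    """One way of a set-associative cache: the stride at which
    physical addresses start aliasing into the same cache sets."""
    return cache_size // ways

def max_colors(cache_size, ways, page_size=PAGE_SIZE):
    """Number of distinct page colors the cache can keep apart."""
    return way_size(cache_size, ways) // page_size

# An assumed 1 MiB, 16-way L2 reproduces the boot-log figures:
print(way_size(1024 * 1024, 16))    # 65536 -> "Way size: 64kB"
print(max_colors(1024 * 1024, 16))  # 16    -> "Max. number of colors available: 16"
```

This is also why passing way_size=65536 explicitly is redundant here: Xen derives the same value by querying the cache geometry from hardware.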
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >       So I would
>> suggest to first cross-check the command line in use.
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >       ~Michal
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       > Regards,
>> >     >       >       >       >     >             >       > Oleg
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >     On Tue,
>> 18 Apr 2023, Oleg Nikitenko wrote:
>> >     >       >       >       >     >             >       >     > Hi
>> Julien,
>> >     >       >       >       >     >             >       >     >
>> >     >       >       >       >     >             >       >     > >> This
>> feature has not been merged in Xen upstream yet
>> >     >       >       >       >     >             >       >     >
>> >     >       >       >       >     >             >       >     > > would
>> assume that upstream + the series on the ML [1] work
>> >     >       >       >       >     >             >       >     >
>> >     >       >       >       >     >             >       >     > Please
>> clarify this point.
>> >     >       >       >       >     >             >       >     > Because
>> the two thoughts are controversial.
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >     Hi Oleg,
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >     As Julien
>> wrote, there is nothing controversial. As you are aware,
>> >     >       >       >       >     >             >       >     Xilinx
>> maintains a separate Xen tree specific for Xilinx here:
>> >     >       >       >       >     >             >       >
>> https://github.com/xilinx/xen
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >     and the
>> branch you are using (xlnx_rebase_4.16) comes from there.
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >     Instead,
>> the upstream Xen tree lives here:
>> >     >       >       >       >     >             >       >
>> https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >     The Cache
>> Coloring feature that you are trying to configure is present
>> >     >       >       >       >     >             >       >     in
>> xlnx_rebase_4.16, but not yet present upstream (there is an
>> >     >       >       >       >     >             >       >
>>  outstanding patch series to add cache coloring to Xen upstream but it
>> >     >       >       >       >     >             >       >     hasn't
>> been merged yet.)
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >     Anyway,
>> if you are using xlnx_rebase_4.16 it doesn't matter too much for
>> >     >       >       >       >     >             >       >     you as
>> you already have Cache Coloring as a feature there.
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >     I take it
>> you are using ImageBuilder to generate the boot configuration? If
>> >     >       >       >       >     >             >       >     so,
>> please post the ImageBuilder config file that you are using.
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >       >     But from
>> the boot message, it looks like the colors configuration for
>> >     >       >       >       >     >             >       >     Dom0 is
>> incorrect.
>> >     >       >       >       >     >             >       >
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >             >
>> >     >       >       >       >     >
>> >     >       >       >       >
>> >     >       >       >
>> >     >       >       >
>> >     >       >       >
>> >     >       >
>> >     >       >
>> >     >       >
>> >     >
>> >     >
>> >     >
>> >
>>
>

wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IEhpIFN0ZWZhbm8sPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
VGhhbmsgeW91Ljxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSWYgSSBidWls
ZCB4ZW4gd2l0aG91dCBjb2xvcnMgc3VwcG9ydCB0aGVyZSBpcyBub3QgdGhpcyBlcnJvci48YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IEFsbCB0aGUgZG9tYWlucyBhcmUgYm9v
dGVkIHdlbGwuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBIZW5zZSBpdCBj
YW4gbm90IGJlIGEgaGFyZHdhcmUgaXNzdWUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBUaGlzIHBhbmljIGFycml2ZWQgZHVyaW5nIHVucGFja2luZyB0aGUgcm9vdGZzLjxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSGVyZSBJIGF0dGFjaGVkIHRoZSBi
b290IGxvZyB4ZW4vRG9tMCB3aXRob3V0IGNvbG9yLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgQSBoaWdobGlnaHRlZCBzdHJpbmdzIHByaW50ZWQgZXhhY3RseSBhZnRlciB0
aGUgcGxhY2Ugd2hlcmUgMS1zdCB0aW1lIHBhbmljIGFycml2ZWQuPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
wqBYZW4gNC4xNi4xLXByZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhF
TikgWGVuIHZlcnNpb24gNC4xNi4xLXByZSAobm9sZTIzOTBAKG5vbmUpKSAoYWFyY2g2NC1wb3J0
YWJsZS1saW51eC1nY2MgKEdDQykgMTEuMy4wKSBkZWJ1Zz15IDIwMjMtMDQtMjE8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIExhdGVzdCBDaGFuZ2VTZXQ6IFdlZCBB
cHIgMTkgMTI6NTY6MTQgMjAyMyArMDMwMCBnaXQ6MzIxNjg3YjIzMS1kaXJ0eTxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgYnVpbGQtaWQ6IGMxODQ3MjU4ZmRiMWI3
OTU2MmZjNzEwZGRhNDAwMDhmOTZjMGZkZTU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IChYRU4pIFByb2Nlc3NvcjogMDAwMDAwMDA0MTBmZDAzNDogJnF1b3Q7QVJNIExpbWl0
ZWQmcXVvdDssIHZhcmlhbnQ6IDB4MCwgcGFydCAweGQwMyxyZXYgMHg0PGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSA2NC1iaXQgRXhlY3V0aW9uOjxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgUHJvY2Vzc29yIEZlYXR1cmVzOiAw
MDAwMDAwMDAwMDAyMjIyIDAwMDAwMDAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIEV4Y2VwdGlvbiBMZXZlbHM6IEVMMzo2NCszMiBFTDI6
NjQrMzIgRUwxOjY0KzMyIEVMMDo2NCszMjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgwqAgwqAgRXh0ZW5zaW9uczogRmxvYXRpbmdQb2ludCBBZHZhbmNlZFNJTUQ8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIERlYnVnIEZlYXR1
cmVzOiAwMDAwMDAwMDEwMzA1MTA2IDAwMDAwMDAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIEF1eGlsaWFyeSBGZWF0dXJlczogMDAwMDAwMDAw
MDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSDCoCBNZW1vcnkgTW9kZWwgRmVhdHVyZXM6IDAwMDAwMDAwMDAwMDExMjIgMDAw
MDAwMDAwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
wqAgSVNBIEZlYXR1cmVzOiDCoDAwMDAwMDAwMDAwMTExMjAgMDAwMDAwMDAwMDAwMDAwMDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgMzItYml0IEV4ZWN1dGlvbjo8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIFByb2Nlc3NvciBG
ZWF0dXJlczogMDAwMDAwMDAwMDAwMDEzMTowMDAwMDAwMDAwMDExMDExPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCDCoCBJbnN0cnVjdGlvbiBTZXRzOiBBQXJj
aDMyIEEzMiBUaHVtYiBUaHVtYi0yIEphemVsbGU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IChYRU4pIMKgIMKgIEV4dGVuc2lvbnM6IEdlbmVyaWNUaW1lciBTZWN1cml0eTxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgRGVidWcgRmVhdHVy
ZXM6IDAwMDAwMDAwMDMwMTAwNjY8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IChYRU4pIMKgIEF1eGlsaWFyeSBGZWF0dXJlczogMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgTWVtb3J5IE1vZGVsIEZlYXR1cmVz
OiAwMDAwMDAwMDEwMjAxMTA1IDAwMDAwMDAwNDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgMDAwMDAwMDAwMTI2MDAwMCAwMDAwMDAwMDAyMTAyMjExPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCBJU0EgRmVhdHVyZXM6IDAwMDAwMDAwMDIxMDExMTAg
MDAwMDAwMDAxMzExMjExMSAwMDAwMDAwMDIxMjMyMDQyPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyAoWEVOKSDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCAwMDAwMDAwMDAxMTEy
MTMxIDAwMDAwMDAwMDAwMTExNDIgMDAwMDAwMDAwMDAxMTEyMTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgVXNpbmcgU01DIENhbGxpbmcgQ29udmVudGlvbiB2MS4y
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBVc2luZyBQU0NJIHYx
LjE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIFNNUDogQWxsb3dp
bmcgNCBDUFVzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBHZW5l
cmljIFRpbWVyIElSUTogcGh5cz0zMCBoeXA9MjYgdmlydD0yNyBGcmVxOiAxMDAwMDAgS0h6PGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBHSUN2MiBpbml0aWFsaXph
dGlvbjo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKg
IMKgIGdpY19kaXN0X2FkZHI9MDAwMDAwMDBmOTAxMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgKFhFTikgwqAgwqAgwqAgwqAgZ2ljX2NwdV9hZGRyPTAwMDAwMDAwZjkw
MjAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKg
IMKgIGdpY19oeXBfYWRkcj0wMDAwMDAwMGY5MDQwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyAoWEVOKSDCoCDCoCDCoCDCoCBnaWNfdmNwdV9hZGRyPTAwMDAwMDAwZjkw
NjAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKg
IMKgIGdpY19tYWludGVuYW5jZV9pcnE9MjU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IChYRU4pIEdJQ3YyOiBBZGp1c3RpbmcgQ1BVIGludGVyZmFjZSBiYXNlIHRvIDB4Zjkw
MmYwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEdJQ3YyOiAx
OTIgbGluZXMsIDQgY3B1cywgc2VjdXJlIChJSUQgMDIwMDE0M2IpLjxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgVXNpbmcgc2NoZWR1bGVyOiBudWxsIFNjaGVkdWxl
ciAobnVsbCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEluaXRp
YWxpemluZyBudWxsIHNjaGVkdWxlcjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgKFhFTikgV0FSTklORzogVGhpcyBpcyBleHBlcmltZW50YWwgc29mdHdhcmUgaW4gZGV2ZWxv
cG1lbnQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBVc2UgYXQg
eW91ciBvd24gcmlzay48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4p
IEFsbG9jYXRlZCBjb25zb2xlIHJpbmcgb2YgMzIgS2lCLjxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgKFhFTikgQ1BVMDogR3Vlc3QgYXRvbWljcyB3aWxsIHRyeSAxMiB0aW1l
cyBiZWZvcmUgcGF1c2luZyB0aGUgZG9tYWluPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyAoWEVOKSBCcmluZ2luZyB1cCBDUFUxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyAoWEVOKSBDUFUxOiBHdWVzdCBhdG9taWNzIHdpbGwgdHJ5IDEzIHRpbWVzIGJl
Zm9yZSBwYXVzaW5nIHRoZSBkb21haW48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IChYRU4pIENQVSAxIGJvb3RlZC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IChYRU4pIEJyaW5naW5nIHVwIENQVTI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IChYRU4pIENQVTI6IEd1ZXN0IGF0b21pY3Mgd2lsbCB0cnkgMTMgdGltZXMgYmVmb3Jl
IHBhdXNpbmcgdGhlIGRvbWFpbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgQ1BVIDIgYm9vdGVkLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgQnJpbmdpbmcgdXAgQ1BVMzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgKFhFTikgQ1BVMzogR3Vlc3QgYXRvbWljcyB3aWxsIHRyeSAxMyB0aW1lcyBiZWZvcmUgcGF1
c2luZyB0aGUgZG9tYWluPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVO
KSBCcm91Z2h0IHVwIDQgQ1BVczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgQ1BVIDMgYm9vdGVkLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgc21tdTogL2F4aS9zbW11QGZkODAwMDAwOiBwcm9iaW5nIGhhcmR3YXJlIGNvbmZpZ3Vy
YXRpb24uLi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6
IC9heGkvc21tdUBmZDgwMDAwMDogU01NVXYyIHdpdGg6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyAoWEVOKSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IHN0YWdlIDIgdHJh
bnNsYXRpb248YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6
IC9heGkvc21tdUBmZDgwMDAwMDogc3RyZWFtIG1hdGNoaW5nIHdpdGggNDggcmVnaXN0ZXIgZ3Jv
dXBzLCBtYXNrIDB4N2ZmZiZsdDsyJmd0O3NtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogMTYgY29u
dGV4dDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJhbmtzICgwPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBzdGFnZS0yIG9ubHkpPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IFN0YWdl
LTI6IDQ4LWJpdCBJUEEgLSZndDsgNDgtYml0IFBBPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyAoWEVOKSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IHJlZ2lzdGVyZWQgMjkg
bWFzdGVyIGRldmljZXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4p
IEkvTyB2aXJ0dWFsaXNhdGlvbiBlbmFibGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyAoWEVOKSDCoC0gRG9tMCBtb2RlOiBSZWxheGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBQMk06IDQwLWJpdCBJUEEgd2l0aCA0MC1iaXQgUEEgYW5k
IDgtYml0IFZNSUQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIFAy
TTogMyBsZXZlbHMgd2l0aCBvcmRlci0xIHJvb3QsIFZUQ1IgMHgwMDAwMDAwMDgwMDIzNTU4PGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBTY2hlZHVsaW5nIGdyYW51
bGFyaXR5OiBjcHUsIDEgQ1BVIHBlciBzY2hlZC1yZXNvdXJjZTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgYWx0ZXJuYXRpdmVzOiBQYXRjaGluZyB3aXRoIGFsdCB0
YWJsZSAwMDAwMDAwMDAwMmNjNWM4IC0mZ3Q7IDAwMDAwMDAwMDAyY2NiMmM8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pICoqKiBMT0FESU5HIERPTUFJTiAwICoqKjxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgTG9hZGluZyBkMCBrZXJu
ZWwgZnJvbSBib290IG1vZHVsZSBAIDAwMDAwMDAwMDEwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIExvYWRpbmcgcmFtZGlzayBmcm9tIGJvb3QgbW9kdWxl
IEAgMDAwMDAwMDAwMjAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgQWxsb2NhdGluZyAxOjEgbWFwcGluZ3MgdG90YWxsaW5nIDE2MDBNQiBmb3IgZG9tMDo8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEJBTktbMF0gMHgwMDAw
MDAxMDAwMDAwMC0weDAwMDAwMDIwMDAwMDAwICgyNTZNQik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEJBTktbMV0gMHgwMDAwMDAyNDAwMDAwMC0weDAwMDAwMDI4
MDAwMDAwICg2NE1CKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
QkFOS1syXSAweDAwMDAwMDMwMDAwMDAwLTB4MDAwMDAwODAwMDAwMDAgKDEyODBNQik8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEdyYW50IHRhYmxlIHJhbmdlOiAw
eDAwMDAwMDAwZTAwMDAwLTB4MDAwMDAwMDBlNDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogZDA6IHAybWFkZHIg
MHgwMDAwMDAwODdiZjk0MDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAo
WEVOKSBBbGxvY2F0aW5nIFBQSSAxNiBmb3IgZXZlbnQgY2hhbm5lbCBpbnRlcnJ1cHQ8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiAwOiAw
eDgxMjAwMDAwLSZndDsweGEwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBFeHRlbmRlZCByZWdpb24gMTogMHhiMTIwMDAwMC0mZ3Q7MHhjMDAwMDAwMDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9u
IDI6IDB4YzgwMDAwMDAtJmd0OzB4ZTAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiAzOiAweGYwMDAwMDAwLSZndDsweGY5MDAw
MDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBFeHRlbmRlZCBy
ZWdpb24gNDogMHgxMDAwMDAwMDAtJmd0OzB4NjAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBFeHRlbmRlZCByZWdpb24gNTogMHg4ODAwMDAwMDAtJmd0
OzB4ODAwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
RXh0ZW5kZWQgcmVnaW9uIDY6IDB4ODAwMTAwMDAwMC0mZ3Q7MHgxMDAwMDAwMDAwMDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgTG9hZGluZyB6SW1hZ2UgZnJvbSAw
MDAwMDAwMDAxMDAwMDAwIHRvIDAwMDAwMDAwMTAwMDAwMDAtMDAwMDAwMDAxMGU0MTAwODxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgTG9hZGluZyBkMCBpbml0cmQg
ZnJvbSAwMDAwMDAwMDAyMDAwMDAwIHRvIDB4MDAwMDAwMDAxMzYwMDAwMC0weDAwMDAwMDAwMWZm
M2E2MTc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIExvYWRpbmcg
ZDAgRFRCIHRvIDB4MDAwMDAwMDAxMzQwMDAwMC0weDAwMDAwMDAwMTM0MGNiZGM8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEluaXRpYWwgbG93IG1lbW9yeSB2aXJx
IHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgKFhFTikgU3RkLiBMb2dsZXZlbDogQWxsPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBHdWVzdCBMb2dsZXZlbDogQWxsPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSAqKiogU2VyaWFsIGlucHV0IHRvIERPTTAgKHR5
cGUgJiMzOTtDVFJMLWEmIzM5OyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQpPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBudWxsLmM6MzUzOiAwICZsdDstLSBk
MHYwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBGcmVlZCAzNTZr
QiBpbml0IG1lbW9yeS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4p
IGQwdjAgVW5oYW5kbGVkIFNNQy9IVkM6IDB4ODQwMDAwNTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIGQwdjAgVW5oYW5kbGVkIFNNQy9IVkM6IDB4ODYwMGZmMDE8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIGQwdjA6IHZHSUNEOiB1
bmhhbmRsZWQgd29yZCB3cml0ZSAweDAwMDAwMGZmZmZmZmZmIHRvIElDQUNUSVZFUjQ8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIGQwdjA6IHZHSUNEOiB1bmhhbmRs
ZWQgd29yZCB3cml0ZSAweDAwMDAwMGZmZmZmZmZmIHRvIElDQUNUSVZFUjg8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIGQwdjA6IHZHSUNEOiB1bmhhbmRsZWQgd29y
ZCB3cml0ZSAweDAwMDAwMGZmZmZmZmZmIHRvIElDQUNUSVZFUjEyPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3Jp
dGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIxNjxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4
MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMjA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IChYRU4pIGQwdjA6IHZHSUNEOiB1bmhhbmRsZWQgd29yZCB3cml0ZSAweDAwMDAw
MGZmZmZmZmZmIHRvIElDQUNUSVZFUjA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjAwMDAwMF0gQm9vdGluZyBMaW51eCBvbiBwaHlzaWNhbCBDUFUgMHgwMDAw
MDAwMDAwIFsweDQxMGZkMDM0XTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuMDAwMDAwXSBMaW51eCB2ZXJzaW9uIDUuMTUuNzIteGlsaW54LXYyMDIyLjEgKG9l
LXVzZXJAb2UtaG9zdCkgKGFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2NjIChHQ0MpIDExLjMuMCwg
R05VIGxkIChHTlU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBCaW51dGlscyk8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IDIuMzguMjAyMjA3MDgpICMxIFNNUCBU
dWUgRmViIDIxIDA1OjQ3OjU0IFVUQyAyMDIzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE1hY2hpbmUgbW9kZWw6IEQxNCBWaXBlciBCb2FyZCAt
IFdoaXRlIFVuaXQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjAwMDAwMF0gWGVuIDQuMTYgc3VwcG9ydCBmb3VuZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBab25lIHJhbmdlczo8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gwqAgRE1BIMKgIMKgIMKgW21l
bSAweDAwMDAwMDAwMTAwMDAwMDAtMHgwMDAwMDAwMDdmZmZmZmZmXTxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSDCoCBETUEzMiDCoCDCoGVtcHR5
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIMKg
IE5vcm1hbCDCoCBlbXB0eTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDAuMDAwMDAwXSBNb3ZhYmxlIHpvbmUgc3RhcnQgZm9yIGVhY2ggbm9kZTxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBFYXJseSBtZW1vcnkg
bm9kZSByYW5nZXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjAwMDAwMF0gwqAgbm9kZSDCoCAwOiBbbWVtIDB4MDAwMDAwMDAxMDAwMDAwMC0weDAwMDAwMDAw
MWZmZmZmZmZdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4w
MDAwMDBdIMKgIG5vZGUgwqAgMDogW21lbSAweDAwMDAwMDAwMjIwMDAwMDAtMHgwMDAwMDAwMDIy
MTQ3ZmZmXTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAw
MDAwXSDCoCBub2RlIMKgIDA6IFttZW0gMHgwMDAwMDAwMDIyMjAwMDAwLTB4MDAwMDAwMDAyMjM0
N2ZmZl08YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAw
MF0gwqAgbm9kZSDCoCAwOiBbbWVtIDB4MDAwMDAwMDAyNDAwMDAwMC0weDAwMDAwMDAwMjdmZmZm
ZmZdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBd
IMKgIG5vZGUgwqAgMDogW21lbSAweDAwMDAwMDAwMzAwMDAwMDAtMHgwMDAwMDAwMDdmZmZmZmZm
XTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBJ
bml0bWVtIHNldHVwIG5vZGUgMCBbbWVtIDB4MDAwMDAwMDAxMDAwMDAwMC0weDAwMDAwMDAwN2Zm
ZmZmZmZdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAw
MDBdIE9uIG5vZGUgMCwgem9uZSBETUE6IDgxOTIgcGFnZXMgaW4gdW5hdmFpbGFibGUgcmFuZ2Vz
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE9u
IG5vZGUgMCwgem9uZSBETUE6IDE4NCBwYWdlcyBpbiB1bmF2YWlsYWJsZSByYW5nZXM8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gT24gbm9kZSAw
LCB6b25lIERNQTogNzM1MiBwYWdlcyBpbiB1bmF2YWlsYWJsZSByYW5nZXM8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gY21hOiBSZXNlcnZlZCAy
NTYgTWlCIGF0IDB4MDAwMDAwMDA2ZTAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBwc2NpOiBwcm9iaW5nIGZvciBjb25kdWl0IG1ldGhv
ZCBmcm9tIERULjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
MDAwMDAwXSBwc2NpOiBQU0NJdjEuMSBkZXRlY3RlZCBpbiBmaXJtd2FyZS48YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcHNjaTogVXNpbmcgc3Rh
bmRhcmQgUFNDSSB2MC4yIGZ1bmN0aW9uIElEczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBwc2NpOiBUcnVzdGVkIE9TIG1pZ3JhdGlvbiBub3Qg
cmVxdWlyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAw
MDAwMF0gcHNjaTogU01DIENhbGxpbmcgQ29udmVudGlvbiB2MS4xPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHBlcmNwdTogRW1iZWRkZWQgMTYg
cGFnZXMvY3B1IHMzMjc5MiByMCBkMzI3NDQgdTY1NTM2PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIERldGVjdGVkIFZJUFQgSS1jYWNoZSBvbiBD
UFUwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBd
IENQVSBmZWF0dXJlczoga2VybmVsIHBhZ2UgdGFibGUgaXNvbGF0aW9uIGZvcmNlZCBPTiBieSBL
QVNMUjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAw
XSBDUFUgZmVhdHVyZXM6IGRldGVjdGVkOiBLZXJuZWwgcGFnZSB0YWJsZSBpc29sYXRpb24gKEtQ
VEkpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBd
IEJ1aWx0IDEgem9uZWxpc3RzLCBtb2JpbGl0eSBncm91cGluZyBvbi7CoCBUb3RhbCBwYWdlczog
NDAzODQ1PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAw
MDBdIEtlcm5lbCBjb21tYW5kIGxpbmU6IGNvbnNvbGU9aHZjMCBlYXJseWNvbj14ZW4gZWFybHlw
cmludGs9eGVuIGNsa19pZ25vcmVfdW51c2VkIGZpcHM9MSByb290PS9kZXYvcmFtMDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoG1heGNwdXM9Mjxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBVbmtub3duIGtlcm5lbCBjb21tYW5kIGxp
bmUgcGFyYW1ldGVycyAmcXVvdDtlYXJseXByaW50az14ZW4gZmlwcz0xJnF1b3Q7LCB3aWxsIGJl
IHBhc3NlZCB0byB1c2VyIHNwYWNlLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDAuMDAwMDAwXSBEZW50cnkgY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAyNjIx
NDQgKG9yZGVyOiA5LCAyMDk3MTUyIGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIElub2RlLWNhY2hlIGhhc2ggdGFibGUg
ZW50cmllczogMTMxMDcyIChvcmRlcjogOCwgMTA0ODU3NiBieXRlcywgbGluZWFyKTxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBtZW0gYXV0by1p
bml0OiBzdGFjazpvZmYsIGhlYXAgYWxsb2M6b24sIGhlYXAgZnJlZTpvbjxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBtZW0gYXV0by1pbml0OiBj
bGVhcmluZyBzeXN0ZW0gbWVtb3J5IG1heSB0YWtlIHNvbWUgdGltZS4uLjxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBNZW1vcnk6IDExMjE5MzZL
LzE2NDEwMjRLIGF2YWlsYWJsZSAoOTcyOEsga2VybmVsIGNvZGUsIDgzNksgcndkYXRhLCAyMzk2
SyByb2RhdGEsIDE1MzZLIGluaXQsIDI2MksgYnNzLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoDI1Njk0NEsgcmVzZXJ2ZWQsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAyNjIxNDRLIGNtYS1yZXNlcnZlZCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gU0xVQjogSFdhbGlnbj02NCwgT3JkZXI9MC0zLCBNaW5P
YmplY3RzPTAsIENQVXM9MiwgTm9kZXM9MTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDAuMDAwMDAwXSByY3U6IEhpZXJhcmNoaWNhbCBSQ1UgaW1wbGVtZW50YXRp
b24uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBd
IHJjdTogUkNVIGV2ZW50IHRyYWNpbmcgaXMgZW5hYmxlZC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcmN1OiBSQ1UgcmVzdHJpY3RpbmcgQ1BV
cyBmcm9tIE5SX0NQVVM9OCB0byBucl9jcHVfaWRzPTIuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHJjdTogUkNVIGNhbGN1bGF0ZWQgdmFsdWUg
b2Ygc2NoZWR1bGVyLWVubGlzdG1lbnQgZGVsYXkgaXMgMjUgamlmZmllcy48YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcmN1OiBBZGp1c3Rpbmcg
Z2VvbWV0cnkgZm9yIHJjdV9mYW5vdXRfbGVhZj0xNiwgbnJfY3B1X2lkcz0yPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE5SX0lSUVM6IDY0LCBu
cl9pcnFzOiA2NCwgcHJlYWxsb2NhdGVkIGlycXM6IDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gUm9vdCBJUlEgaGFuZGxlcjogZ2ljX2hhbmRs
ZV9pcnE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAw
MF0gYXJjaF90aW1lcjogY3AxNSB0aW1lcihzKSBydW5uaW5nIGF0IDEwMC4wME1IeiAodmlydCku
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIGNs
b2Nrc291cmNlOiBhcmNoX3N5c19jb3VudGVyOiBtYXNrOiAweGZmZmZmZmZmZmZmZmZmIG1heF9j
eWNsZXM6IDB4MTcxMDI0ZTdlMCwgbWF4X2lkbGVfbnM6IDQ0MDc5NTIwNTMxNSBuczxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBzY2hlZF9jbG9j
azogNTYgYml0cyBhdCAxMDBNSHosIHJlc29sdXRpb24gMTBucywgd3JhcHMgZXZlcnkgNDM5ODA0
NjUxMTEwMG5zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4w
MDAyNThdIENvbnNvbGU6IGNvbG91ciBkdW1teSBkZXZpY2UgODB4MjU8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjMxMDIzMV0gcHJpbnRrOiBjb25zb2xlIFto
dmMwXSBlbmFibGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MC4zMTQ0MDNdIENhbGlicmF0aW5nIGRlbGF5IGxvb3AgKHNraXBwZWQpLCB2YWx1ZSBjYWxjdWxh
dGVkIHVzaW5nIHRpbWVyIGZyZXF1ZW5jeS4uIDIwMC4wMCBCb2dvTUlQUyAobHBqPTQwMDAwMCk8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjMyNDg1MV0gcGlk
X21heDogZGVmYXVsdDogMzI3NjggbWluaW11bTogMzAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zMjk3MDZdIExTTTogU2VjdXJpdHkgRnJhbWV3b3JrIGlu
aXRpYWxpemluZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
MzM0MjA0XSBZYW1hOiBiZWNvbWluZyBtaW5kZnVsLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDAuMzM3ODY1XSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVudHJp
ZXM6IDQwOTYgKG9yZGVyOiAzLCAzMjc2OCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzQ1MTgwXSBNb3VudHBvaW50LWNhY2hlIGhh
c2ggdGFibGUgZW50cmllczogNDA5NiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzLCBsaW5lYXIpPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zNTQ3NDNdIHhlbjpn
cmFudF90YWJsZTogR3JhbnQgdGFibGVzIHVzaW5nIHZlcnNpb24gMSBsYXlvdXQ8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM1OTEzMl0gR3JhbnQgdGFibGUg
aW5pdGlhbGl6ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjM2MjY2NF0geGVuOmV2ZW50czogVXNpbmcgRklGTy1iYXNlZCBBQkk8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM2Njk5M10gWGVuOiBpbml0aWFsaXppbmcg
Y3B1MDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzcwNTE1
XSByY3U6IEhpZXJhcmNoaWNhbCBTUkNVIGltcGxlbWVudGF0aW9uLjxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzc1OTMwXSBzbXA6IEJyaW5naW5nIHVwIHNl
Y29uZGFyeSBDUFVzIC4uLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhF
TikgbnVsbC5jOjM1MzogMSAmbHQ7LS0gZDB2MTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgKFhFTikgZDB2MTogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAw
ZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDAuMzgyNTQ5XSBEZXRlY3RlZCBWSVBUIEktY2FjaGUgb24gQ1BVMTxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzg4NzEyXSBYZW46IGluaXRp
YWxpemluZyBjcHUxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MC4zODg3NDNdIENQVTE6IEJvb3RlZCBzZWNvbmRhcnkgcHJvY2Vzc29yIDB4MDAwMDAwMDAwMSBb
MHg0MTBmZDAzNF08YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjM4ODgyOV0gc21wOiBCcm91Z2h0IHVwIDEgbm9kZSwgMiBDUFVzPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40MDY5NDFdIFNNUDogVG90YWwgb2YgMiBwcm9j
ZXNzb3JzIGFjdGl2YXRlZC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAwLjQxMTY5OF0gQ1BVIGZlYXR1cmVzOiBkZXRlY3RlZDogMzItYml0IEVMMCBTdXBwb3J0
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40MTY4ODhdIENQ
VSBmZWF0dXJlczogZGV0ZWN0ZWQ6IENSQzMyIGluc3RydWN0aW9uczxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDIyMTIxXSBDUFU6IEFsbCBDUFUocykgc3Rh
cnRlZCBhdCBFTDE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjQyNjI0OF0gYWx0ZXJuYXRpdmVzOiBwYXRjaGluZyBrZXJuZWwgY29kZTxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDMxNDI0XSBkZXZ0bXBmczogaW5pdGlh
bGl6ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQ0MTQ1
NF0gS0FTTFIgZW5hYmxlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDAuNDQxNjAyXSBjbG9ja3NvdXJjZTogamlmZmllczogbWFzazogMHhmZmZmZmZmZiBtYXhf
Y3ljbGVzOiAweGZmZmZmZmZmLCBtYXhfaWRsZV9uczogNzY0NTA0MTc4NTEwMDAwMCBuczxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDQ4MzIxXSBmdXRleCBo
YXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzLCBsaW5lYXIpPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40OTYxODNdIE5FVDog
UmVnaXN0ZXJlZCBQRl9ORVRMSU5LL1BGX1JPVVRFIHByb3RvY29sIGZhbWlseTxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDk4Mjc3XSBETUE6IHByZWFsbG9j
YXRlZCAyNTYgS2lCIEdGUF9LRVJORUwgcG9vbCBmb3IgYXRvbWljIGFsbG9jYXRpb25zPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41MDM3NzJdIERNQTogcHJl
YWxsb2NhdGVkIDI1NiBLaUIgR0ZQX0tFUk5FTHxHRlBfRE1BIHBvb2wgZm9yIGF0b21pYyBhbGxv
Y2F0aW9uczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNTEx
NjEwXSBETUE6IHByZWFsbG9jYXRlZCAyNTYgS2lCIEdGUF9LRVJORUx8R0ZQX0RNQTMyIHBvb2wg
Zm9yIGF0b21pYyBhbGxvY2F0aW9uczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDAuNTE5NDc4XSBhdWRpdDogaW5pdGlhbGl6aW5nIG5ldGxpbmsgc3Vic3lzIChk
aXNhYmxlZCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjUy
NDk4NV0gYXVkaXQ6IHR5cGU9MjAwMCBhdWRpdCgwLjMzNjoxKTogc3RhdGU9aW5pdGlhbGl6ZWQg
YXVkaXRfZW5hYmxlZD0wIHJlcz0xPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC41MjkxNjldIHRoZXJtYWxfc3lzOiBSZWdpc3RlcmVkIHRoZXJtYWwgZ292ZXJu
b3IgJiMzOTtzdGVwX3dpc2UmIzM5Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDAuNTMzMDIzXSBody1icmVha3BvaW50OiBmb3VuZCA2IGJyZWFrcG9pbnQgYW5k
IDQgd2F0Y2hwb2ludCByZWdpc3RlcnMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMC41NDU2MDhdIEFTSUQgYWxsb2NhdG9yIGluaXRpYWxpc2VkIHdpdGggMzI3
NjggZW50cmllczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
NTUxMDMwXSB4ZW46c3dpb3RsYl94ZW46IFdhcm5pbmc6IG9ubHkgYWJsZSB0byBhbGxvY2F0ZSA0
IE1CIGZvciBzb2Z0d2FyZSBJTyBUTEI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjU1OTMzMl0gc29mdHdhcmUgSU8gVExCOiBtYXBwZWQgW21lbSAweDAwMDAw
MDAwMTE4MDAwMDAtMHgwMDAwMDAwMDExYzAwMDAwXSAoNE1CKTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNTgzNTY1XSBIdWdlVExCIHJlZ2lzdGVyZWQgMS4w
MCBHaUIgcGFnZSBzaXplLCBwcmUtYWxsb2NhdGVkIDAgcGFnZXM8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjU4NDcyMV0gSHVnZVRMQiByZWdpc3RlcmVkIDMy
LjAgTWlCIHBhZ2Ugc2l6ZSwgcHJlLWFsbG9jYXRlZCAwIHBhZ2VzPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41OTE0NzhdIEh1Z2VUTEIgcmVnaXN0ZXJlZCAy
LjAwIE1pQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlczxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNTk4MjI1XSBIdWdlVExCIHJlZ2lzdGVyZWQg
NjQuMCBLaUIgcGFnZSBzaXplLCBwcmUtYWxsb2NhdGVkIDAgcGFnZXM8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjYzNjUyMF0gRFJCRzogQ29udGludWluZyB3
aXRob3V0IEppdHRlciBSTkc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAwLjczNzE4N10gcmFpZDY6IG5lb254OCDCoCBnZW4oKSDCoDIxNDMgTUIvczxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuODA1Mjk0XSByYWlkNjogbmVv
bng4IMKgIHhvcigpIMKgMTU4OSBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMC44NzM0MDZdIHJhaWQ2OiBuZW9ueDQgwqAgZ2VuKCkgwqAyMTc3IE1CL3M8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjk0MTQ5OV0gcmFp
ZDY6IG5lb254NCDCoCB4b3IoKSDCoDE1NTYgTUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDEuMDA5NjEyXSByYWlkNjogbmVvbngyIMKgIGdlbigpIMKgMjA3
MiBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS4wNzc3
MTVdIHJhaWQ2OiBuZW9ueDIgwqAgeG9yKCkgwqAxNDMwIE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjE0NTgzNF0gcmFpZDY6IG5lb254MSDCoCBnZW4o
KSDCoDE3NjkgTUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDEuMjEzOTM1XSByYWlkNjogbmVvbngxIMKgIHhvcigpIMKgMTIxNCBNQi9zPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS4yODIwNDZdIHJhaWQ2OiBpbnQ2NHg4
IMKgZ2VuKCkgwqAxMzY2IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAxLjM1MDEzMl0gcmFpZDY6IGludDY0eDggwqB4b3IoKSDCoCA3NzMgTUIvczxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuNDE4MjU5XSByYWlkNjog
aW50NjR4NCDCoGdlbigpIMKgMTYwMiBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
> [    1.486349] raid6: int64x4  xor()   851 MB/s
> > > [    1.554464] raid6: int64x2  gen()  1396 MB/s
> > > [    1.622561] raid6: int64x2  xor()   744 MB/s
> > > [    1.690687] raid6: int64x1  gen()  1033 MB/s
> > > [    1.758770] raid6: int64x1  xor()   517 MB/s
> > > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> > > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> > > [    1.767957] raid6: using neon recovery algorithm
> > > [    1.772824] xen:balloon: Initialising balloon driver
> > > [    1.778021] iommu: Default domain type: Translated
> > > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
> > > [    1.789149] SCSI subsystem initialized
> > > [    1.792820] usbcore: registered new interface driver usbfs
> > > [    1.798254] usbcore: registered new interface driver hub
> > > [    1.803626] usbcore: registered new device driver usb
> > > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> > > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> > > [    1.822903] PTP clock support registered
> > > [    1.826893] EDAC MC: Ver: 3.0.0
> > > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> > > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> > > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> > > [    1.855907] FPGA manager framework
> > > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> > > [    1.871712] NET: Registered PF_INET protocol family
> > > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> > > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> > > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> > > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> > > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
> > > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
> > > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
> > > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
> > > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> > > [    1.936834] RPC: Registered named UNIX socket transport module.
> > > [    1.942342] RPC: Registered udp transport module.
> > > [    1.947088] RPC: Registered tcp transport module.
> > > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> > > [    1.958334] PCI: CLS 0 bytes, default 64
> > > [    1.962709] Trying to unpack rootfs image as initramfs...
> > > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> > > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
> > > [    2.021045] NET: Registered PF_ALG protocol family
> > > [    2.021122] xor: measuring software checksum speed
> > > [    2.029347]    8regs           :  2366 MB/sec
> > > [    2.033081]    32regs          :  2802 MB/sec
> > > [    2.038223]    arm64_neon      :  2320 MB/sec
> > > [    2.038385] xor: using function: 32regs (2802 MB/sec)
> > > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
> > > [    2.050959] io scheduler mq-deadline registered
> > > [    2.055521] io scheduler kyber registered
> > > [    2.068227] xen:xen_evtchn: Event-channel device installed
> > > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> > > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
> > > [    2.085548] brd: module loaded
> > > [    2.089290] loop: module loaded
> > > [    2.089341] Invalid max_queues (4), will use default max: 2.
> > > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> > > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
> > > [    2.104156] usbcore: registered new interface driver rtl8150
> > > [    2.109813] usbcore: registered new interface driver r8152
> > > [    2.115367] usbcore: registered new interface driver asix
> > > [    2.120794] usbcore: registered new interface driver ax88179_178a
> > > [    2.126934] usbcore: registered new interface driver cdc_ether
> > > [    2.132816] usbcore: registered new interface driver cdc_eem
> > > [    2.138527] usbcore: registered new interface driver net1080
> > > [    2.144256] usbcore: registered new interface driver cdc_subset
> > > [    2.150205] usbcore: registered new interface driver zaurus
> > > [    2.155837] usbcore: registered new interface driver cdc_ncm
> > > [    2.161550] usbcore: registered new interface driver r8153_ecm
> > > [    2.168240] usbcore: registered new interface driver cdc_acm
> > > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
> > > [    2.181358] usbcore: registered new interface driver uas
> > > [    2.186547] usbcore: registered new interface driver usb-storage
> > > [    2.192643] usbcore: registered new interface driver ftdi_sio
> > > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
> > > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
> > > [    2.215332] i2c_dev: i2c /dev entries driver
> > > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> > > [    2.225923] device-mapper: uevent: version 1.0.3
> > > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
> > > [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
> > > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV
> > ff960000.memory-controller (INTERRUPT)
> > > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
> > > [    2.267487] sdhci: Copyright(c) Pierre Ossman
> > > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
> > > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
> > > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
> > > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
> > > [    2.327875] securefw securefw: securefw probed
> > > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
> > > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
> > > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
> > > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
> > > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
> > > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
> > > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
> > > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
> > > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> > > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
> > > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
> > > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
> > > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
> > > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
> > > [    2.420856] default preset
> > > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
> > > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
> > > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
> > > [    2.441976] vmcu driver init
> > > [    2.444922] VMCU: : (240:0) registered
> > > [    2.444956] In K81 Updater init
> > > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
> > > [    2.468833] Initializing XFRM netlink socket
> > > [    2.468902] NET: Registered PF_PACKET protocol family
> > > [    2.472729] Bridge firewalling registered
> > > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
> > > [    2.481341] registered taskstats version 1
> > > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
> > > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
> > > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
> > > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
> > > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
> > > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> > > [    2.952393] Creating 2 MTD partitions on "spi0.0":
> > > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> > > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> > > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
> > > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
> > > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
> > > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
> > > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
> > > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
> > > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
> > > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
> > > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
> > > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
> > > [    3.045301] viper_enet viper_enet: Viper enet registered
> > > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
> > > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
> > > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
> > > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
> > > [    3.097729] si70xx: probe of 2-0040 failed with error -5
> > > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> > > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> > > [    3.112457] viper-tamper viper-tamper: Device registered
> > > [    3.117593] active_bank active_bank: boot bank: 1
> > > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> > > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
> > > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> > > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
> > > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> > > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> > > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
> > > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> > > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> > > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> > > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> > > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> > > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> > > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> > > [    3.215694] lpc55_l2 spi1.0: rx error: -110
> > > [    3.284438] mmc0: new HS200 MMC card at address 0001
> > > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> > > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> > > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> > > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> > > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> > > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> > > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
> > > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
> > > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> > > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> > > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> > > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> > > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> > > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> > > [    3.633253] lpc55_l2 spi1.0: rx error: -110
> > > [    3.639104] k81_bootloader 0-0010: probe
> > > [    3.641628] VMCU: : (235:0) registered
> > > [    3.641635] k81_bootloader 0-0010: probe completed
> > > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> > > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> > > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> > > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> > > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> > > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> > > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> > > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> > > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> > > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> > > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> > > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> > > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> > > [    3.737549] sfp_register_socket: got sfp_bus
> > > [    3.740709] sfp_register_socket: register sfp_bus
> > > [    3.745459] sfp_register_bus: ops ok!
> > > [    3.749179] sfp_register_bus: Try to attach
> > > [    3.753419] sfp_register_bus: Attach succeeded
> > > [    3.757914] sfp_register_bus: upstream ops attach
> > > [    3.762677] sfp_register_bus: Bus registered
> > > [    3.766999] sfp_register_socket: register sfp_bus succeeded
> > > [    3.775870] of_cfs_init
> > > [    3.776000] of_cfs_init: OK
> > > [    3.778211] clk: Not disabling unused clocks
> > > [   11.278477] Freeing initrd memory: 206056K
> > > [   11.279406] Freeing unused kernel memory: 1536K
> > > [   11.314006] Checked W+X mappings: passed, no W+X pages found
> > > [   11.314142] Run /init as init process
> > > INIT: version 3.01 booting
> > > fsck (busybox 1.35.0)
> > > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> > > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> > > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> > > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> > > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> > > Starting random number generator daemon.
> > > [   11.580662] random: crng init done
> > > Starting udev
> > > [   11.613159] udevd[142]: starting version 3.2.10
> > > [   11.620385] udevd[143]: starting eudev-3.2.10
> > > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> > > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> > > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> > > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > hwclock: RTC_RD_TIME: Invalid exchange
> > > Mon Feb 27 08:40:53 UTC 2023
> > > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> > > hwclock: RTC_SET_TIME: Invalid exchange
> > > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > Starting mcud
> > > INIT: Entering runlevel: 5
> > > Configuring network interfaces... done.
> > > resetting network interface
> > > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> > > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> > > [   12.732151] pps pps0: new PPS source ptp0
> > > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> > > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY]
> > (irq=POLL)
> > > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> > > [   12.761804] pps pps1: new PPS source ptp1
> > > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> > > Auto-negotiation: off
> > > Aut
by1uZWdvdGlhdGlvbjogb2ZmPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIDE2LjgyODE1MV0gbWFjYiBmZjBiMDAwMC5ldGhlcm5ldCBjb250cm9sX3JlZDogdW5hYmxl
IHRvIGdlbmVyYXRlIHRhcmdldCBmcmVxdWVuY3k6IDEyNTAwMDAwMCBIejxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNi44MzQ1NTNdIG1hY2IgZmYwYjAwMDAuZXRo
ZXJuZXQgY29udHJvbF9yZWQ6IExpbmsgaXMgVXAgLSAxR2Jwcy9GdWxsIC0gZmxvdyBjb250cm9s
IG9mZjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNi44NjA1NTJd
IG1hY2IgZmYwYzAwMDAuZXRoZXJuZXQgY29udHJvbF9ibGFjazogdW5hYmxlIHRvIGdlbmVyYXRl
IHRhcmdldCBmcmVxdWVuY3k6IDEyNTAwMDAwMCBIejxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCAxNi44NjcwNTJdIG1hY2IgZmYwYzAwMDAuZXRoZXJuZXQgY29udHJv
bF9ibGFjazogTGluayBpcyBVcCAtIDFHYnBzL0Z1bGwgLSBmbG93IGNvbnRyb2wgb2ZmPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBTdGFydGluZyBGYWlsc2FmZSBTZWN1cmUg
U2hlbGwgc2VydmVyIGluIHBvcnQgMjIyMjogc3NoZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgZG9uZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFN0
YXJ0aW5nIHJwY2JpbmQgZGFlbW9uLi4uZG9uZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE3LjA5
MzAxOV0gcnRjLWxwYzU1IHJ0Y19scGM1NTogbHBjNTVfcnRjX2dldF90aW1lOiBiYWQgcmVzdWx0
OiAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBod2Nsb2NrOiBSVENfUkRf
VElNRTogSW52YWxpZCBleGNoYW5nZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgU3RhcnRpbmcgU3RhdGUgTWFuYWdlciBTZXJ2aWNlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBTdGFydCBzdGF0ZS1tYW5hZ2VyIHJlc3RhcnRlci4uLjxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MSBGb3J3YXJkaW5nIEFFUyBvcGVy
YXRpb246IDMyNTQ3Nzk5NTE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFN0
YXJ0aW5nIC91c3Ivc2Jpbi94ZW5zdG9yZWQuLi4uWyDCoCAxNy4yNjUyNTZdIEJUUkZTOiBkZXZp
Y2UgZnNpZCA4MGVmYzIyNC1jMjAyLTRmOGUtYTk0OS00ZGFlN2YwNGEwYWEgZGV2aWQgMSB0cmFu
c2lkIDc0NDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoC9kZXYvZG0tMDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgc2Nhbm5lZCBieSB1ZGV2ZCAoMzg1KTxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy4zNDk5MzNdIEJUUkZTIGlu
Zm8gKGRldmljZSBkbS0wKTogZGlzayBzcGFjZSBjYWNoaW5nIGlzIGVuYWJsZWQ8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTcuMzUwNjcwXSBCVFJGUyBpbmZvIChk
ZXZpY2UgZG0tMCk6IGhhcyBza2lubnkgZXh0ZW50czxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCAxNy4zNjQzODRdIEJUUkZTIGluZm8gKGRldmljZSBkbS0wKTogZW5h
Ymxpbmcgc3NkIG9wdGltaXphdGlvbnM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgMTcuODMwNDYyXSBCVFJGUzogZGV2aWNlIGZzaWQgMjdmZjY2NmItZjRlNS00Zjkw
LTkwNTQtYzIxMGRiNWIyZTJlIGRldmlkIDEgdHJhbnNpZCA2IC9kZXYvbWFwcGVyL2NsaWVudF9w
cm92IHNjYW5uZWQgYnk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBta2ZzLmJ0cmZz
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoNTI2KTxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy44NzI2OTldIEJUUkZTIGluZm8gKGRldmlj
ZSBkbS0xKTogdXNpbmcgZnJlZSBzcGFjZSB0cmVlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDE3Ljg3Mjc3MV0gQlRSRlMgaW5mbyAoZGV2aWNlIGRtLTEpOiBoYXMg
c2tpbm55IGV4dGVudHM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
MTcuODc4MTE0XSBCVFJGUyBpbmZvIChkZXZpY2UgZG0tMSk6IGZsYWdnaW5nIGZzIHdpdGggYmln
IG1ldGFkYXRhIGZlYXR1cmU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgMTcuODk0Mjg5XSBCVFJGUyBpbmZvIChkZXZpY2UgZG0tMSk6IGVuYWJsaW5nIHNzZCBvcHRp
bWl6YXRpb25zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE3Ljg5
NTY5NV0gQlRSRlMgaW5mbyAoZGV2aWNlIGRtLTEpOiBjaGVja2luZyBVVUlEIHRyZWU8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBTZXR0aW5nIGRvbWFpbiAwIG5hbWUsIGRvbWlkIGFuZCBKU09OIGNvbmZpZy4u
Ljxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgRG9uZSBzZXR0aW5nIHVwIERv
bTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFN0YXJ0aW5nIHhlbmNvbnNv
bGVkLi4uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBTdGFydGluZyBRRU1V
IGFzIGRpc2sgYmFja2VuZCBmb3IgZG9tMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgU3RhcnRpbmcgZG9tYWluIHdhdGNoZG9nIGRhZW1vbjogeGVud2F0Y2hkb2dkIHN0YXJ0
dXA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE4LjQwODY0N10gQlRSRlM6IGRldmljZSBmc2lkIDVl
MDhkNWU5LWJjMmEtNDZiOS1hZjZhLTQ0YzcwODdiODkyMSBkZXZpZCAxIHRyYW5zaWQgNiAvZGV2
L21hcHBlci9jbGllbnRfY29uZmlnIHNjYW5uZWQgYnk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqBta2ZzLmJ0cmZzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAo
NTc0KTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgW2RvbmVdPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE4LjQ2NTU1Ml0gQlRSRlMgaW5mbyAo
ZGV2aWNlIGRtLTIpOiB1c2luZyBmcmVlIHNwYWNlIHRyZWU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTguNDY1NjI5XSBCVFJGUyBpbmZvIChkZXZpY2UgZG0tMik6
IGhhcyBza2lubnkgZXh0ZW50czxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCAxOC40NzEwMDJdIEJUUkZTIGluZm8gKGRldmljZSBkbS0yKTogZmxhZ2dpbmcgZnMgd2l0
aCBiaWcgbWV0YWRhdGEgZmVhdHVyZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgU3RhcnRpbmcgY3JvbmQ6IFsgwqAgMTguNDgyMzcxXSBCVFJGUyBpbmZvIChkZXZpY2UgZG0t
Mik6IGVuYWJsaW5nIHNzZCBvcHRpbWl6YXRpb25zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDE4LjQ4NjY1OV0gQlRSRlMgaW5mbyAoZGV2aWNlIGRtLTIpOiBjaGVj
a2luZyBVVUlEIHRyZWU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE9LPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBzdGFydGluZyByc3lzbG9nZCAuLi4g
TG9nIHBhcnRpdGlvbiByZWFkeSBhZnRlciAwIHBvbGwgbG9vcHM8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IGRvbmU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IHJzeXNsb2dkOiBjYW5ub3QgY29ubmVjdCB0byA8YSBocmVmPSJodHRwOi8vMTcyLjE4LjAu
MTo1MTQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPjE3Mi4xOC4wLjE6NTE0PC9h
PiAmbHQ7PGEgaHJlZj0iaHR0cDovLzE3Mi4xOC4wLjE6NTE0IiByZWw9Im5vcmVmZXJyZXIiIHRh
cmdldD0iX2JsYW5rIj5odHRwOi8vMTcyLjE4LjAuMTo1MTQ8L2E+Jmd0OzogTmV0d29yayBpcyB1
bnJlYWNoYWJsZSBbdjguMjIwOC4wIHRyeSA8YSBocmVmPSJodHRwczovL3d3dy5yc3lzbG9nLmNv
bS9lLzIwMjciIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vd3d3LnJz
eXNsb2cuY29tL2UvMjAyNzwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vd3d3LnJzeXNsb2cuY29t
L2UvMjAyNyIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly93d3cucnN5
c2xvZy5jb20vZS8yMDI3PC9hPiZndDsgXTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxOC42NzA2MzddIEJUUkZTOiBkZXZpY2UgZnNpZCAzOWQ3ZDllMS05NjdkLTQ3
OGUtOTRhZS02OTBkZWI3MjIwOTUgZGV2aWQgMSB0cmFuc2lkIDYwOCAvZGV2L2RtLTMgc2Nhbm5l
ZCBieSB1ZGV2ZCAoNTE4KTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFBsZWFzZSBpbnNlcnQgVVNCIHRva2Vu
IGFuZCBlbnRlciB5b3VyIHJvbGUgaW4gbG9naW4gcHJvbXB0Ljxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IGxv
Z2luOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBPLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyDQv9C9LCAyNCDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMjM6MzksIFN0ZWZhbm8g
U3RhYmVsbGluaSAmbHQ7PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRh
cmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhy
ZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJl
bGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0Ozo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBIaSBPbGVnLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBI
ZXJlIGlzIHRoZSBpc3N1ZSBmcm9tIHlvdXIgbG9nczo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgU0Vycm9yIEludGVycnVwdCBvbiBDUFUwLCBjb2RlIDB4YmUwMDAwMDAgLS0gU0Vycm9yPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoFNFcnJvcnMgYXJlIHNwZWNpYWwgc2lnbmFscyB0byBu
b3RpZnkgc29mdHdhcmUgb2Ygc2VyaW91cyBoYXJkd2FyZTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGVycm9ycy7CoCBTb21ldGhpbmcgaXMgZ29pbmcgdmVy
eSB3cm9uZy4gRGVmZWN0aXZlIGhhcmR3YXJlIGlzIGE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBwb3NzaWJpbGl0eS7CoCBBbm90aGVyIHBvc3NpYmlsaXR5
IGlmIHNvZnR3YXJlIGFjY2Vzc2luZyBhZGRyZXNzIHJhbmdlczxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHRoYXQgaXQgaXMgbm90IHN1cHBvc2VkIHRvLCBz
b21ldGltZXMgaXQgY2F1c2VzIFNFcnJvcnMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoENo
ZWVycyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU3RlZmFubzxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoE9uIE1vbiwgMjQgQXByIDIwMjMsIE9sZWcgTmlr
aXRlbmtvIHdyb3RlOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IEhlbGxvLDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFRoYW5rcyBndXlzLjxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSSBmb3Vu
ZCBvdXQgd2hlcmUgdGhlIHByb2JsZW0gd2FzLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgTm93IGRvbTAgYm9vdGVkIG1vcmUuIEJ1dCBJIGhhdmUg
YSBuZXcgb25lLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgVGhpcyBpcyBhIGtlcm5lbCBwYW5pYyBkdXJpbmcgRG9tMCBsb2FkaW5nLjxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgTWF5YmUgc29tZW9u
ZSBpcyBhYmxlIHRvIHN1Z2dlc3Qgc29tZXRoaW5nID88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgTy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy43NzEzNjJdIHNmcF9yZWdpc3Rlcl9idXM6IHVw
c3RyZWFtIG9wcyBhdHRhY2g8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjc3NjExOV0gc2ZwX3JlZ2lzdGVyX2J1czogQnVzIHJlZ2lz
dGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAzLjc4MDQ1OV0gc2ZwX3JlZ2lzdGVyX3NvY2tldDogcmVnaXN0ZXIgc2ZwX2J1cyBz
dWNjZWVkZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAzLjc4OTM5OV0gb2ZfY2ZzX2luaXQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjc4OTQ5OV0gb2ZfY2ZzX2luaXQ6
IE9LPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMy43OTE2ODVdIGNsazogTm90IGRpc2FibGluZyB1bnVzZWQgY2xvY2tzPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDM1
NV0gU0Vycm9yIEludGVycnVwdCBvbiBDUFUwLCBjb2RlIDB4YmUwMDAwMDAgLS0gU0Vycm9yPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEx
LjAxMDM4MF0gQ1BVOiAwIFBJRDogOSBDb21tOiBrd29ya2VyL3U0OjAgTm90IHRhaW50ZWQgNS4x
NS43Mi14aWxpbngtdjIwMjIuMSAjMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTAzOTNdIFdvcmtxdWV1ZTogZXZlbnRzX3VuYm91
bmQgYXN5bmNfcnVuX2VudHJ5X2ZuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQxNF0gcHN0YXRlOiA2MDAwMDAwNSAoblpDdiBk
YWlmIC1QQU4gLVVBTyAtVENPIC1ESVQgLVNTQlMgQlRZUEU9LS0pPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQyMl0gcGMgOiBz
aW1wbGVfd3JpdGVfZW5kKzB4ZDAvMHgxMzA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDMxXSBsciA6IGdlbmVyaWNfcGVyZm9y
bV93cml0ZSsweDExOC8weDFlMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA0MzhdIHNwIDogZmZmZmZmYzAwODA5YjkxMDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4w
MTA0NDFdIHgyOTogZmZmZmZmYzAwODA5YjkxMCB4Mjg6IDAwMDAwMDAwMDAwMDAwMDAgeDI3OiBm
ZmZmZmZlZjY5YmE4OGMwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ1MV0geDI2OiAwMDAwMDAwMDAwMDAzZWVjIHgyNTogZmZm
ZmZmODA3NTE1ZGIwMCB4MjQ6IDAwMDAwMDAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDU5XSB4MjM6IGZmZmZm
ZmMwMDgwOWJhOTAgeDIyOiAwMDAwMDAwMDAyYWFjMDAwIHgyMTogZmZmZmZmODA3MzE1YTI2MDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
MS4wMTA0NzJdIHgyMDogMDAwMDAwMDAwMDAwMTAwMCB4MTk6IGZmZmZmZmZlMDIwMDAwMDAgeDE4
OiAwMDAwMDAwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ4MV0geDE3OiAwMDAwMDAwMGZmZmZmZmZmIHgxNjog
MDAwMDAwMDAwMDAwODAwMCB4MTU6IDAwMDAwMDAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDkwXSB4MTQ6IDAw
MDAwMDAwMDAwMDAwMDAgeDEzOiAwMDAwMDAwMDAwMDAwMDAwIHgxMjogMDAwMDAwMDAwMDAwMDAw
MDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCAxMS4wMTA0OThdIHgxMTogMDAwMDAwMDAwMDAwMDAwMCB4MTA6IDAwMDAwMDAwMDAwMDAwMDAg
eDkgOiAwMDAwMDAwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDUwN10geDggOiAwMDAwMDAwMDAwMDAwMDAwIHg3
IDogZmZmZmZmZWY2OTNiYTY4MCB4NiA6IDAwMDAwMDAwMmQ4OWI3MDA8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTE1XSB4NSA6
IGZmZmZmZmZlMDIwMDAwMDAgeDQgOiBmZmZmZmY4MDczMTVhM2M4IHgzIDogMDAwMDAwMDAwMDAw
MTAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCAxMS4wMTA1MjRdIHgyIDogMDAwMDAwMDAwMmFhYjAwMCB4MSA6IDAwMDAwMDAwMDAwMDAw
MDEgeDAgOiAwMDAwMDAwMDAwMDAwMDA1PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDUzNF0gS2VybmVsIHBhbmljIC0gbm90IHN5
bmNpbmc6IEFzeW5jaHJvbm91cyBTRXJyb3IgSW50ZXJydXB0PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDUzOV0gQ1BVOiAwIFBJ
RDogOSBDb21tOiBrd29ya2VyL3U0OjAgTm90IHRhaW50ZWQgNS4xNS43Mi14aWxpbngtdjIwMjIu
MSAjMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCAxMS4wMTA1NDVdIEhhcmR3YXJlIG5hbWU6IEQxNCBWaXBlciBCb2FyZCAtIFdoaXRlIFVu
aXQgKERUKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCAxMS4wMTA1NDhdIFdvcmtxdWV1ZTogZXZlbnRzX3VuYm91bmQgYXN5bmNfcnVuX2Vu
dHJ5X2ZuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIDExLjAxMDU1Nl0gQ2FsbCB0cmFjZTo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTU4XSDCoGR1bXBfYmFja3RyYWNl
KzB4MC8weDFjNDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxMS4wMTA1NjddIMKgc2hvd19zdGFjaysweDE4LzB4MmM8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTc0XSDC
oGR1bXBfc3RhY2tfbHZsKzB4N2MvMHhhMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1ODNdIMKgZHVtcF9zdGFjaysweDE4LzB4
MzQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgMTEuMDEwNTg4XSDCoHBhbmljKzB4MTRjLzB4MmY4PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDU5N10gwqBwcmludF90YWlu
dGVkKzB4MC8weGIwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDExLjAxMDYwNl0gwqBhcm02NF9zZXJyb3JfcGFuaWMrMHg2Yy8weDdjPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEx
LjAxMDYxNF0gwqBkb19zZXJyb3IrMHgyOC8weDYwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDYyMV0gwqBlbDFoXzY0X2Vycm9y
X2hhbmRsZXIrMHgzMC8weDUwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDYyOF0gwqBlbDFoXzY0X2Vycm9yKzB4NzgvMHg3Yzxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
MS4wMTA2MzNdIMKgc2ltcGxlX3dyaXRlX2VuZCsweGQwLzB4MTMwPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDYzOV0gwqBnZW5l
cmljX3BlcmZvcm1fd3JpdGUrMHgxMTgvMHgxZTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjQ0XSDCoF9fZ2VuZXJpY19maWxl
X3dyaXRlX2l0ZXIrMHgxMzgvMHgxYzQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjUwXSDCoGdlbmVyaWNfZmlsZV93cml0ZV9p
dGVyKzB4NzgvMHhkMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCAxMS4wMTA2NTZdIMKgX19rZXJuZWxfd3JpdGUrMHhmYy8weDJhYzxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4w
MTA2NjVdIMKga2VybmVsX3dyaXRlKzB4ODgvMHgxNjA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjczXSDCoHh3cml0ZSsweDQ0
LzB4OTQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgMTEuMDEwNjgwXSDCoGRvX2NvcHkrMHhhOC8weDEwNDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2ODZdIMKgd3JpdGVf
YnVmZmVyKzB4MzgvMHg1ODxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2OTJdIMKgZmx1c2hfYnVmZmVyKzB4NGMvMHhiYzxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4w
MTA2OThdIMKgX19ndW56aXArMHgyODAvMHgzMTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzA0XSDCoGd1bnppcCsweDFjLzB4
Mjg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgMTEuMDEwNzA5XSDCoHVucGFja190b19yb290ZnMrMHgxNzAvMHgyYjA8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzE1XSDC
oGRvX3BvcHVsYXRlX3Jvb3RmcysweDgwLzB4MTY0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDcyMl0gwqBhc3luY19ydW5fZW50
cnlfZm4rMHg0OC8weDE2NDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3MjhdIMKgcHJvY2Vzc19vbmVfd29yaysweDFlNC8weDNh
MDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCAxMS4wMTA3MzZdIMKgd29ya2VyX3RocmVhZCsweDdjLzB4NGMwPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDc0M10gwqBrdGhy
ZWFkKzB4MTIwLzB4MTMwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDExLjAxMDc1MF0gwqByZXRfZnJvbV9mb3JrKzB4MTAvMHgyMDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4w
MTA3NTddIFNNUDogc3RvcHBpbmcgc2Vjb25kYXJ5IENQVXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzg0XSBLZXJuZWwgT2Zm
c2V0OiAweDJmNjEyMDAwMDAgZnJvbSAweGZmZmZmZmMwMDgwMDAwMDA8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzg4XSBQSFlT
X09GRlNFVDogMHgwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDExLjAxMDc5MF0gQ1BVIGZlYXR1cmVzOiAweDAwMDAwNDAxLDAwMDAwODQy
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IDExLjAxMDc5NV0gTWVtb3J5IExpbWl0OiBub25lPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjI3NzUwOV0gLS0tWyBlbmQgS2VybmVs
IHBhbmljIC0gbm90IHN5bmNpbmc6IEFzeW5jaHJvbm91cyBTRXJyb3IgSW50ZXJydXB0IF0tLS08
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDQv9GCLCAyMSDQ
sNC/0YAuIDIwMjPigK/Qsy4g0LIgMTU6NTIsIE1pY2hhbCBPcnplbCAmbHQ7PGEgaHJlZj0ibWFp
bHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFt
ZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29t
IiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDs6PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
SGkgT2xlZyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgT24gMjEvMDQvMjAyMyAxNDo0OSwgT2xlZyBOaWtpdGVua28gd3JvdGU6PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IEhlbGxvIE1pY2hh
bCw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBJIHdhcyBub3QgYWJsZSB0byBlbmFibGUgZWFybHlwcmlu
dGsgaW4gdGhlIHhlbiBmb3Igbm93Ljxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSSBkZWNpZGVkIHRvIGNob29zZSBhbm90
aGVyIHdheS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFRoaXMgaXMgYSB4ZW4mIzM5O3MgY29tbWFuZCBsaW5lIHRoYXQg
SSBmb3VuZCBvdXQgY29tcGxldGVseS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSAkJCQkIGNv
bnNvbGU9ZHR1YXJ0IGR0dWFydD1zZXJpYWwwIGRvbTBfbWVtPTE2MDBNIGRvbTBfbWF4X3ZjcHVz
PTIgZG9tMF92Y3B1c19waW4gYm9vdHNjcnViPTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqB2d2ZpPW5hdGl2ZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoHNjaGVkPW51bGw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB0aW1lcl9zbG9wPTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBZZXMsIGFkZGluZyBhIHBy
aW50aygpIGluIFhlbiB3YXMgYWxzbyBhIGdvb2QgaWRlYS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgU28geW91
IGFyZSBhYnNvbHV0ZWx5IHJpZ2h0IGFib3V0IGEgY29tbWFuZCBsaW5lLjxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgTm93
IEkgYW0gZ29pbmcgdG8gZmluZCBvdXQgd2h5IHhlbiBkaWQgbm90IGhhdmUgdGhlIGNvcnJlY3Qg
cGFyYW1ldGVycyBmcm9tIHRoZSBkZXZpY2UgdHJlZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBNYXliZSB5b3Ugd2lsbCBmaW5k
IHRoaXMgZG9jdW1lbnQgaGVscGZ1bDo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqA8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20v
WGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJl
ZS9ib290aW5nLnR4dCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9n
aXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0v
ZGV2aWNlLXRyZWUvYm9vdGluZy50eHQ8L2E+ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5j
b20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2Ut
dHJlZS9ib290aW5nLnR4dCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6
Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9h
cm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQ8L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB+TWljaGFsPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBPbGVnPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg0L/RgiwgMjEg0LDQ
v9GALiAyMDIz4oCv0LMuINCyIDExOjE2LCBNaWNoYWwgT3J6ZWwgJmx0OzxhIGhyZWY9Im1haWx0
bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQu
Y29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIg
dGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsgJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNo
YWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9y
emVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7
Jmd0OyZndDs6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBPbiAyMS8wNC8y
MDIzIDEwOjA0LCBPbGVnIE5pa2l0ZW5rbyB3cm90ZTo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDsgSGVsbG8gTWljaGFsLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7IFllcywgSSB1c2UgeW9jdG8uPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDsgWWVzdGVyZGF5IGFsbCBkYXkgbG9uZyBJIHRyaWVkIHRv
IGZvbGxvdyB5b3VyIHN1Z2dlc3Rpb25zLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgSSBmYWNlZCBh
IHByb2JsZW0uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBNYW51YWxseSBpbiB0aGUgeGVuIGNvbmZp
ZyBidWlsZCBmaWxlIEkgcGFzdGVkIHRoZSBzdHJpbmdzOjxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEluIHRo
ZSAuY29uZmlnIGZpbGUgb3IgaW4gc29tZSBZb2N0byBmaWxlIChsaXN0aW5nIGFkZGl0aW9uYWwg
S2NvbmZpZyBvcHRpb25zKSBhZGRlZCB0byBTUkNfVVJJPzxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoFlvdSBz
aG91bGRuJiMzOTt0IHJlYWxseSBtb2RpZnkgLmNvbmZpZyBmaWxlIGJ1dCBpZiB5b3UgZG8sIHlv
dSBzaG91bGQgZXhlY3V0ZSAmcXVvdDttYWtlIG9sZGRlZmNvbmZpZyZxdW90Ozxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoGFmdGVyd2FyZHMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IENPTkZJR19FQVJMWV9QUklOVEs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7IENPTkZJR19FQVJMWV9QUklOVEtfWllOUU1QPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0OyBDT05GSUdfRUFSTFlfVUFSVF9DSE9JQ0VfQ0FERU5DRTxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oEkgaG9wZSB5b3UgYWRkZWQgPXkgdG8gdGhlbS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
QW55d2F5LCB5b3UgaGF2ZSBhdCBsZWFzdCB0aGUgZm9sbG93aW5nIHNvbHV0aW9uczo8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAxKSBSdW4gYml0YmFrZSB4ZW4gLWMgbWVudWNvbmZpZyB0byBwcm9wZXJseSBz
ZXQgZWFybHkgcHJpbnRrPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgMikgRmluZCBvdXQgaG93IHlvdSBlbmFi
bGUgb3RoZXIgS2NvbmZpZyBvcHRpb25zIGluIHlvdXIgcHJvamVjdCAoZS5nLiBDT05GSUdfQ09M
T1JJTkc9eSB0aGF0IGlzIG5vdDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGVuYWJs
ZWQgYnk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBkZWZh
dWx0KTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoDMpIEFwcGVuZCB0aGUgZm9sbG93aW5nIHRvICZxdW90O3hl
> [...]n/arch/arm/configs/arm64_defconfig":
>
> CONFIG_EARLY_PRINTK_ZYNQMP=y
>
> ~Michal
>
> > Host hangs in build time.
> > Maybe I did not set something in the config build file?
> >
> > Regards,
> > Oleg
> >
> > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > >
> > > Thanks Michal,
> > >
> > > You gave me an idea.
> > > I am going to try it today.
> > >
> > > Regards,
> > > O.
> > >
> > > On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > >
> > > > Thanks Stefano.
> > > > I am going to do it today.
> > > >
> > > > Regards,
> > > > O.
> > > >
> > > > On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > >
> > > > > On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> > > > > > Hi Michal,
> > > > > >
> > > > > > I corrected xen's command line.
> > > > > > Now it is
> > > > > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> > > > >
> > > > > 4 colors is way too many for xen, just do xen_colors=0-0. There is no
> > > > > advantage in using more than 1 color for Xen.
> > > > >
> > > > > 4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
> > > > > Each color is 256M. For 1600M you should give at least 7 colors. Try:
> > > > >
> > > > > xen_colors=0-0 dom0_colors=1-8
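Stefano's sizing rule above can be sketched as a quick check. This is an illustrative helper, not Xen code; the 256M-per-color figure is the one quoted for this platform.

```python
# Sketch of the color-count arithmetic quoted above (illustrative only,
# not part of Xen). On the platform discussed, each cache color covers
# 256M of dom0 memory, so dom0 needs ceil(dom0_mem / 256M) colors.
import math

def min_dom0_colors(dom0_mem_mib, mib_per_color=256):
    """Smallest number of colors whose capacity covers dom0_mem_mib."""
    return math.ceil(dom0_mem_mib / mib_per_color)

# dom0_mem=1600M: 1600 / 256 = 6.25, so at least 7 colors are needed,
# which is why dom0_colors=4-7 (4 colors, i.e. 1024M) is not enough.
print(min_dom0_colors(1600))  # -> 7
```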
> > > > > > Unfortunately the result was the same.
> > > > > >
> > > > > > (XEN)  - Dom0 mode: Relaxed
> > > > > > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> > > > > > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> > > > > > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> > > > > > (XEN) Coloring general information
> > > > > > (XEN) Way size: 64kB
> > > > > > (XEN) Max. number of colors available: 16
> > > > > > (XEN) Xen color(s): [ 0 ]
> > > > > > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> > > > > > (XEN) Color array allocation failed for dom0
> > > > > > (XEN)
> > > > > > (XEN) ****************************************
> > > > > > (XEN) Panic on CPU 0:
> > > > > > (XEN) Error creating domain 0
> > > > > > (XEN) ****************************************
> > > > > > (XEN)
> > > > > > (XEN) Reboot in five seconds...
> > > > > >
> > > > > > I am going to find out how command line arguments are passed and parsed.
> > > > > >
> > > > > > Regards,
> > > > > > Oleg
> > > > > >
> > > > > > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > >
> > > > > > > Hi Michal,
> > > > > > >
> > > > > > > You put my nose into the problem. Thank you.
> > > > > > > I am going to use your point.
> > > > > > > Let's see what happens.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Oleg
> > > > > > >
> > > > > > > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > > > > >
> > > > > > > > Hi Oleg,
> > > > > > > >
> > > > > > > > On 19/04/2023 09:03, Oleg Nikitenko wrote:
> > > > > > > > > Hello Stefano,
> > > > > > > > >
> > > > > > > > > Thanks for the clarification.
> > > > > > > > > My company uses yocto for image generation.
> > > > > > > > > What kind of information do you need to consult me in this case?
> > > > > > > > > Maybe modules sizes/addresses which were mentioned by @Julien Grall <julien@xen.org>?
> > > > > > > >
> > > > > > > > Sorry for jumping into discussion, but FWICS the Xen command line you
> > > > > > > > provided seems to be not the one Xen booted with. The error you are
> > > > > > > > observing is most likely due to the dom0 colors configuration not being
> > > > > > > > specified (i.e. lack of a dom0_colors=<> parameter). Although in the
> > > > > > > > command line you provided this parameter is set, I strongly doubt that
> > > > > > > > this is the actual command line in use.
> > > > > > > >
> > > > > > > > You wrote:
> > > > > > > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> > > > > > > > but:
> > > > > > > > 1) way_szize has a typo
> > > > > > > > 2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
> > > > > > > > (XEN) Xen color(s): [ 0 ]
> > > > > > > >
> > > > > > > > This makes me believe that no colors configuration actually ended up in
> > > > > > > > the command line that Xen booted with. A single color for Xen is the
> > > > > > > > "default if not specified", and the way size was probably calculated by
> > > > > > > > asking the HW.
> > > > > > > >
> > > > > > > > So I would suggest to first cross-check the command line in use.
> > > > > > > >
> > > > > > > > ~Michal
> > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > > Oleg
> > > > > > > > >
> > > > > > > > > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > > >
> > > > > > > > > > On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > > > > Hi Julien,
> > > > > > > > > > >
> > > > > > > > > > > >> This feature has not been merged in Xen upstream yet
> > > > > > > > > > > > would assume that upstream + the series on the ML [1] work
> > > > > > > > > > >
> > > > > > > > > > > Please clarify this point.
> > > > > > > > > > > Because the two thoughts are controversial.
> > > > > > > > > >
> > > > > > > > > > Hi Oleg,
> > > > > > > > > >
> > > > > > > > > > As Julien wrote, there is nothing controversial. As you are aware,
> > > > > > > > > > Xilinx maintains a separate Xen tree specific for Xilinx here:
> > > > > > > > > > https://github.com/xilinx/xen [...]
IMKgIMKgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwv
YT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVm
ZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4m
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hl
bjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwv
YT4mZ3Q7Jmd0OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVu
IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiBy
ZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54
L3hlbjwvYT4mZ3Q7ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIg
cmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlu
eC94ZW48L2E+ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVs
PSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW48L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9y
ZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9h
PiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZl
cnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0i
bm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVu
PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9y
ZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9h
PiZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgYW5kIHRoZSBicmFuY2ggeW91IGFyZSB1
c2luZyAoeGxueF9yZWJhc2VfNC4xNikgY29tZXMgZnJvbSB0aGVyZS48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoEluc3RlYWQsIHRoZSB1cHN0cmVhbSBYZW4gdHJlZSBsaXZlcyBoZXJlOjxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oDxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3Vt
bWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhl
bi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6
Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVm
ZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9
eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJodHRwczovL3hlbmJp
dHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIg
dGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0
O2E9c3VtbWFyeTwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdl
Yi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+
aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZn
dDsmZ3Q7ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVu
LmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
eGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4gJmx0OzxhIGhy
ZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIg
cmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcv
Z2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0
cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVu
Lm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0
PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3Vt
bWFyeTwvYT4mZ3Q7Jmd0OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3Jn
L2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9i
bGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5
PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5n
aXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hl
bmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZs
dDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1
bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54
ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4gJmx0OzxhIGhyZWY9Imh0dHBz
Oi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3Jl
ZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9w
PXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZndDsmZ3Q7ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJp
dHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIg
dGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0
O2E9c3VtbWFyeTwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdl
Yi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+
aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczov
L3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8YSBo
cmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnki
IHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3Jn
L2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqBUaGUgQ2FjaGUgQ29sb3JpbmcgZmVhdHVyZSB0aGF0IHlvdSBhcmUgdHJ5aW5n
IHRvIGNvbmZpZ3VyZSBpcyBwcmVzZW50PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgaW4geGxueF9yZWJhc2VfNC4xNiwg
YnV0IG5vdCB5ZXQgcHJlc2VudCB1cHN0cmVhbSAodGhlcmUgaXMgYW48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBvdXRz
dGFuZGluZyBwYXRjaCBzZXJpZXMgdG8gYWRkIGNhY2hlIGNvbG9yaW5nIHRvIFhlbiB1cHN0cmVh
bSBidXQgaXQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqBoYXNuJiMzOTt0IGJlZW4gbWVyZ2VkIHlldC4pPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqBBbnl3YXksIGlmIHlvdSBhcmUgdXNpbmcgeGxueF9yZWJhc2VfNC4xNiBpdCBk
b2VzbiYjMzk7dCBtYXR0ZXIgdG9vIG11Y2ggZm9yPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgeW91IGFzIHlvdSBhbHJl
YWR5IGhhdmUgQ2FjaGUgQ29sb3JpbmcgYXMgYSBmZWF0dXJlIHRoZXJlLjxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgSSB0YWtlIHlvdSBhcmUgdXNpbmcgSW1hZ2VCdWlsZGVyIHRvIGdlbmVyYXRlIHRoZSBi
b290IGNvbmZpZ3VyYXRpb24/IElmPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgc28sIHBsZWFzZSBwb3N0IHRoZSBJbWFn
ZUJ1aWxkZXIgY29uZmlnIGZpbGUgdGhhdCB5b3UgYXJlIHVzaW5nLjxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBC
dXQgZnJvbSB0aGUgYm9vdCBtZXNzYWdlLCBpdCBsb29rcyBsaWtlIHRoZSBjb2xvcnMgY29uZmln
dXJhdGlvbiBmb3I8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBEb20wIGlzIGluY29ycmVjdC48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDsgPGJyPg0KJmd0OyA8YnI+DQo8L2Js
b2NrcXVvdGU+PC9kaXY+DQo8L2Jsb2NrcXVvdGU+PC9kaXY+DQo=
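An ImageBuilder config of the kind requested above might look like the sketch below. Every value is a placeholder to be adjusted for the actual board, and the `way_size`/`xen_colors`/`dom0_colors` command-line option names are those used by the xlnx_rebase_4.16 tree's cache-coloring support, not by upstream Xen:

```
# Hypothetical ImageBuilder config for a colored Dom0 on xlnx_rebase_4.16.
# All paths and memory values below are placeholders.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

DEVICE_TREE="system.dtb"
XEN="xen"
DOM0_KERNEL="Image"
DOM0_RAMDISK="dom0-rootfs.cpio.gz"

# Coloring options as in the Xilinx tree; Xen and Dom0 get disjoint color sets.
XEN_CMD="console=dtuart dom0_mem=1024M way_size=65536 xen_colors=0-3 dom0_colors=4-7"

NUM_DOMUS=0

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```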
--000000000000f64c8c05fb3d35d5--


From xen-devel-bounces@lists.xenproject.org Tue May 09 07:10:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 07:10:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.531975.827963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwHU8-0004Uc-Mm; Tue, 09 May 2023 07:10:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 531975.827963; Tue, 09 May 2023 07:10:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwHU8-0004UV-K7; Tue, 09 May 2023 07:10:08 +0000
Received: by outflank-mailman (input) for mailman id 531975;
 Tue, 09 May 2023 07:10:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8dUO=A6=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pwHU6-0004UP-MU
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 07:10:06 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 861745a1-ee38-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 09:10:05 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id F31A01F45A;
 Tue,  9 May 2023 07:10:04 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C6005139B3;
 Tue,  9 May 2023 07:10:04 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id NsYJL8zxWWS3HAAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 09 May 2023 07:10:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 861745a1-ee38-11ed-b227-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683616205; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Nr6sbJPcZ0amzdla/OGNm5kfmiUJwrr6bXmyCciMiNo=;
	b=sjpn9UqCABm+jh9+9zxNPf3OuvMIEsW+Qs4/ir3YUsk492Hml1iTd40ewLCoyO5hfKp7Hw
	tKNg2nDQVVhNVE5XZz28lrUcfAAOGc8w1xVEhhDRHAa86v96+BKYUei7yqVbNTJ4KYg8cD
	v1Xeifqsf3k6rOQmNndBOSs49LGEOK4=
Message-ID: <0e7c1819-f611-1ba1-9f5a-3295eae7f95d@suse.com>
Date: Tue, 9 May 2023 09:10:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2] tools: convert bitfields to unsigned type
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <20230508164618.21496-1-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230508164618.21496-1-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------FijHca0A7NcSsaFZsH2RsFYv"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------FijHca0A7NcSsaFZsH2RsFYv
Content-Type: multipart/mixed; boundary="------------myB8dc4MuSE18IXYazDCN9VE";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
Message-ID: <0e7c1819-f611-1ba1-9f5a-3295eae7f95d@suse.com>
Subject: Re: [PATCH v2] tools: convert bitfields to unsigned type
References: <20230508164618.21496-1-olaf@aepfle.de>
In-Reply-To: <20230508164618.21496-1-olaf@aepfle.de>

--------------myB8dc4MuSE18IXYazDCN9VE
Content-Type: multipart/mixed; boundary="------------0P9cVVRs7fsnXncdrNUt2qh6"

--------------0P9cVVRs7fsnXncdrNUt2qh6
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 08.05.23 18:46, Olaf Hering wrote:
> clang complains about the signed type:
> 
> implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Wsingle-bit-bitfield-constant-conversion]
> 
> The potential ABI change in libxenvchan is covered by the Xen version based SONAME.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------0P9cVVRs7fsnXncdrNUt2qh6
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------0P9cVVRs7fsnXncdrNUt2qh6--

--------------myB8dc4MuSE18IXYazDCN9VE--

--------------FijHca0A7NcSsaFZsH2RsFYv
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRZ8cwFAwAAAAAACgkQsN6d1ii/Ey+B
fgf/fOoGMrP46JTa1DIHq46O64fgzD+gBmIMnoZIjfG/HBQe5sgVULg1dZAfjV7fZqhWSB6otjk/
GS3gyXqISlQfN77bITjsVkECGbhp4VFbGcgtOApjA+PzLFNS3owXgqJ7ZFqfDUhHYsyliI+G8ffC
Sfs91UPk1GR4AF5GgNqbQ3psULeclZJCT13WtK0m9/2eg16nB9vQeCdW9Kb7pV6kGG+IhJos3HVd
ZtGWBOPTSZsXLsZoL6Lup8uKYsNo53j7v/GUzB+mnWxNqh6fBBRqSFBFk16UMO4/vutrkJPq/OEg
ZdsXND9xmV283BtI/sSQ2ji5yvk2Qi+NGHtAJaaJkw==
=uxQS
-----END PGP SIGNATURE-----

--------------FijHca0A7NcSsaFZsH2RsFYv--


From xen-devel-bounces@lists.xenproject.org Tue May 09 07:52:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 07:52:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532015.827991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwI9E-0001ar-7B; Tue, 09 May 2023 07:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532015.827991; Tue, 09 May 2023 07:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwI9E-0001ak-3y; Tue, 09 May 2023 07:52:36 +0000
Received: by outflank-mailman (input) for mailman id 532015;
 Tue, 09 May 2023 07:52:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwI9D-0001aa-9C; Tue, 09 May 2023 07:52:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwI9D-0001RG-6h; Tue, 09 May 2023 07:52:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwI9C-0008RY-R4; Tue, 09 May 2023 07:52:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwI9C-0003kj-Qb; Tue, 09 May 2023 07:52:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Y999lLf6z/EDax7FraFGxLyzXuFPVTjcbg9bvTEq4AI=; b=ejLLwX0bqEAMgG9uKJeH6UgDY8
	azON12q7828L7FhBh30AbiiDL+ce4U3XAB8jTf6tAm5viVTer3ZFY8SHeyKeggJVVvy7UulwiI/KW
	8o/4kiq0jq6WZUTH6n501yq8kLmqVkX157A56rzMycCx1Kezeb7ShWfCZPWWZ7FE+NtE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180582-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180582: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ba0ad6ed89fd5dada3b7b65ef2b08e95d449d4ab
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 May 2023 07:52:34 +0000

flight 180582 linux-linus real [real]
flight 180585 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180582/
http://logs.test-lab.xenproject.org/osstest/logs/180585/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ba0ad6ed89fd5dada3b7b65ef2b08e95d449d4ab
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   22 days
Failing since        180281  2023-04-17 06:24:36 Z   22 days   40 attempts
Testing same since   180582  2023-05-08 21:11:46 Z    0 days    1 attempts

------------------------------------------------------------
2359 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 296937 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 09 08:50:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 08:50:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532037.828001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwJ3J-0008US-Se; Tue, 09 May 2023 08:50:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532037.828001; Tue, 09 May 2023 08:50:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwJ3J-0008UL-Pm; Tue, 09 May 2023 08:50:33 +0000
Received: by outflank-mailman (input) for mailman id 532037;
 Tue, 09 May 2023 08:50:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W85P=A6=bounce.vates.fr=bounce-md_30504962.645a0955.v1-94f408e6aef34506a031dc1ad4551bf0@srs-se1.protection.inumbo.net>)
 id 1pwJ3I-0008UF-E9
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 08:50:32 +0000
Received: from mail145-2.atl61.mandrillapp.com
 (mail145-2.atl61.mandrillapp.com [198.2.145.2])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8cc6b881-ee46-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 10:50:30 +0200 (CEST)
Received: from pmta06.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail145-2.atl61.mandrillapp.com (Mailchimp) with ESMTP id 4QFsLY103BzQXhVH1
 for <xen-devel@lists.xenproject.org>; Tue,  9 May 2023 08:50:29 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 94f408e6aef34506a031dc1ad4551bf0; Tue, 09 May 2023 08:50:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cc6b881-ee46-11ed-b227-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1683622229; x=1683882729; i=yann.dirson@vates.fr;
	bh=XH6A9kRnaKfrtYq4xDKHmzCwIBK2FhX1ZqRruWUSQMc=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=OBtd2a3RMp3AR4Zbcl5JuEp474EoCJtS2oBwJYnJV14QHgWOfx1/lpnbPpyAB/g7w
	 OK9qtfb5HbZ31RhyV+4Q3YhrzgMuDvv/Q+KwO7cN56P3vNSxpGzuo767q4HXPylLu2
	 iQuBm1q4AmkFj5REA+Es+Q+pIhfplbR8NRIpmNMI=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1683622229; h=From : 
 Subject : Message-Id : To : Cc : References : In-Reply-To : Date : 
 MIME-Version : Content-Type : Content-Transfer-Encoding : From : 
 Subject : Date : X-Mandrill-User : List-Unsubscribe; 
 bh=XH6A9kRnaKfrtYq4xDKHmzCwIBK2FhX1ZqRruWUSQMc=; 
 b=MbrYoUxC8pdOc6NtZhJ43fBb9aRZ/VORsu+B6wbw+XMV463zWpA0Vr7/MooIWjOMDKGEvY
 4k8d+aCSJsc/xEbWVbRYEygdpSRfCIrW9yHNiAyMVqgF6fkf7D6MRPI1X300MLJuDBp7+GYD
 c6szoPfn3/w75oI4ipAeYDuwAn5jU=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?Re:=20xenstored:=20EACCESS=20error=20accessing=20control/feature-balloon=201?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9664e26d-c55d-4176-ac64-680cfb7c5564
X-Bm-Transport-Timestamp: 1683622225995
Message-Id: <9a5045b8-43f3-d418-9e77-418b6db91f71@vates.fr>
To: zithro <slack@rabbit.lu>, xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, =?utf-8?Q?Edwin=20T=C3=B6r=C3=B6k?= <edwin.torok@cloud.com>, Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu> <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com> <678df1ff-df18-b063-eda3-2a1aed6d40f8@vates.fr> <50bf6b82-965b-d17c-7c5a-49c703991504@rabbit.lu> <f44261a2-df39-f69a-9798-dc1d656e6dac@vates.fr> <a51e0f7e-aed0-2ec9-f451-2e750636fb78@rabbit.lu>
In-Reply-To: <a51e0f7e-aed0-2ec9-f451-2e750636fb78@rabbit.lu>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.94f408e6aef34506a031dc1ad4551bf0?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230509:md
Date: Tue, 09 May 2023 08:50:29 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit


On 5/4/23 20:04, zithro wrote:
> On 04 May 2023 17:59, Yann Dirson wrote:
>>
>> On 5/4/23 15:58, zithro wrote:
>>> Hi,
>>>
>>> [ snipped for brevity, report summary:
>>> XAPI daemon in domU tries to write to a non-existent xenstore node in
>>> a non-XAPI dom0 ]
>>>
>>> On 12 Apr 2023 18:41, Yann Dirson wrote:
>>>> Is there anything besides XAPI using this node, or the other data
>>>> published by xe-daemon?
>>>
>>> On my vanilla Xen (i.e. non-XAPI), I have no "balloon"-related node
>>> in xenstore (in either dom0 or domU, but I'm not using ballooning
>>> in either).
>>>
>>>> Maybe the original issue is just that there is no reason to have
>>>> xe-guest-utilities installed in this setup?
>>>
>>> That's what I thought, as I'm not using XAPI, so maybe the problem
>>> should be raised with the TrueNAS team instead? I posted on their forum
>>> but got no answer.
>>> I killed the 'xe-daemon' in both setups without loss of functionality.
>>>
>>> My wild guess is that 'xe-daemon', 'xe-update-guest-attrs' and all
>>> 'xenstore* commands' are leftovers from when Xen was working as a dom0
>>> under FreeBSD (why would a *domU* have them?).
>>
>> That would not be correct: xenstore* are useful in guests, should you
>> want to read/write to the XenStore manually or from scripts;
>
> Didn't know that, can you give some use cases (or URLs) for which they
> are useful, with or without XAPI?
> I've read xenstore* man pages and could not infer a use case.
> Although I may already see some: updating ballooned memory values, or,
> as Julien Grall pointed out, updating "feature-s3/4" values?

You can find other examples at
https://xenbits.xen.org/docs/unstable/misc/xenstore-paths.html#domain-controlled-paths
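
As a rough sketch of what "manually or from scripts" can look like from
inside a guest (this assumes the xenstore CLI tools are installed and a
xenstored is reachable; the "data/example" key is purely illustrative,
not a standard path):

```shell
#!/bin/sh
# Hypothetical example of manual XenStore access from a domU.
# The guard makes this a harmless no-op on hosts without the tools.
if command -v xenstore-read >/dev/null 2>&1; then
    # A guest can read its own domain ID...
    domid=$(xenstore-read domid)
    # ...and publish arbitrary values under its domain-controlled
    # data/ subtree, then read them back.
    xenstore-write "/local/domain/${domid}/data/example" "hello"
    xenstore-read "/local/domain/${domid}/data/example"
else
    echo "xenstore tools not available; skipping"
fi
```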

>
> PS: small mistake in "man/xenstore-write.1.html" (from at least 4.14
> onward): the synopsis reads "xenstore-read" instead of "xenstore-write".
Patch sent, thanks.
> Also, the -s option disappeared from unstable, although that may be
> expected. I don't know its purpose either.

See 
https://github.com/xen-project/xen/commit/c65687ed16d2289ec91036ec2862a4b4bd34ea4f

Best regards,

-- 
Yann Dirson | Vates Platform Developer
XCP-ng & Xen Orchestra - Vates solutions
w: vates.tech | xcp-ng.org | xen-orchestra.com





From xen-devel-bounces@lists.xenproject.org Tue May 09 09:01:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 09:01:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532043.828010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwJE4-0001hl-Vr; Tue, 09 May 2023 09:01:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532043.828010; Tue, 09 May 2023 09:01:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwJE4-0001he-TI; Tue, 09 May 2023 09:01:40 +0000
Received: by outflank-mailman (input) for mailman id 532043;
 Tue, 09 May 2023 09:01:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JiHS=A6=bounce.vates.fr=bounce-md_30504962.645a0bf0.v1-623ad5ea264e45fbb42cd0bfa6510f2e@srs-se1.protection.inumbo.net>)
 id 1pwJE3-0001hY-KP
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 09:01:39 +0000
Received: from mail179-5.suw41.mandrillapp.com
 (mail179-5.suw41.mandrillapp.com [198.2.179.5])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1a60effb-ee48-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 11:01:37 +0200 (CEST)
Received: from pmta12.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail179-5.suw41.mandrillapp.com (Mailchimp) with ESMTP id 4QFsbN1GZszG0CBdJ
 for <xen-devel@lists.xenproject.org>; Tue,  9 May 2023 09:01:36 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 623ad5ea264e45fbb42cd0bfa6510f2e; Tue, 09 May 2023 09:01:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a60effb-ee48-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1683622896; x=1683883396; i=yann.dirson@vates.fr;
	bh=8BqKJOEwx5qtlEBsdJyBSQKBsIITldio/65tB71HYkg=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=nEsvgiS4ZVzrK5gVS5VR5zSsuFKqphc4UzRE8FQjBGW+M/KeDbfQRuCux4jCERkRT
	 4c3KpxyyVrB/IHpXpLMV5p9rmUBSFjI7KMJMTpsMjBKCufZzPOHrXkHYNZD2dhQ0k5
	 vBoJYTXifJbmJp+uxePogjBPOH74I9z13Najcruc=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1683622896; h=From : 
 Subject : To : Cc : Message-Id : Date : MIME-Version : Content-Type : 
 Content-Transfer-Encoding : From : Subject : Date : X-Mandrill-User : 
 List-Unsubscribe; bh=8BqKJOEwx5qtlEBsdJyBSQKBsIITldio/65tB71HYkg=; 
 b=YlyTa1mPIPqCxh9Eerffl1AkKLmo1rXKoqbRytSM6WSaM8+5+miHOdvMe4ATvCHCan4zt0
 JDGgL7ZgReAfK6KpRhPtU0LRpM4yOLg0aCQVKvolpdfPOZhYn3slA3nHZLKXr8D+QX2J1qhH
 vLsQPe5oFwknyW9UvjMLwr/K87gw4=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?[PATCH]=20docs/man:=20fix=20xenstore-write=20synopsis?=
X-Mailer: git-send-email 2.30.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9664e26d-c55d-4176-ac64-680cfb7c5564
X-Bm-Transport-Timestamp: 1683622890131
To: xen-devel@lists.xenproject.org
Cc: Yann Dirson <yann.dirson@vates.fr>, zithro <slack@rabbit.lu>
Message-Id: <20230509090123.781644-1-yann.dirson@vates.fr>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.623ad5ea264e45fbb42cd0bfa6510f2e?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230509:md
Date: Tue, 09 May 2023 09:01:36 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Reported-by: zithro <slack@rabbit.lu>
Signed-off-by: Yann Dirson <yann.dirson@vates.fr>
---
 docs/man/xenstore-write.1.pod | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/man/xenstore-write.1.pod b/docs/man/xenstore-write.1.pod
index a0b1bca333..74f80f7b1b 100644
--- a/docs/man/xenstore-write.1.pod
+++ b/docs/man/xenstore-write.1.pod
@@ -4,7 +4,7 @@ xenstore-write - write Xenstore values
 
 =head1 SYNOPSIS
 
-B<xenstore-read> [I<OPTION>]... I<PATH> I<VALUE>...
+B<xenstore-write> [I<OPTION>]... I<PATH> I<VALUE>...
 
 =head1 DESCRIPTION
 
-- 
2.30.2



Yann Dirson | Vates Platform Developer

XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com


From xen-devel-bounces@lists.xenproject.org Tue May 09 09:20:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 09:20:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532049.828021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwJW8-0004EI-Fy; Tue, 09 May 2023 09:20:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532049.828021; Tue, 09 May 2023 09:20:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwJW8-0004EB-C3; Tue, 09 May 2023 09:20:20 +0000
Received: by outflank-mailman (input) for mailman id 532049;
 Tue, 09 May 2023 09:20:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1Yk8=A6=omp.ru=s.shtylyov@srs-se1.protection.inumbo.net>)
 id 1pwJW6-0004E5-Vl
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 09:20:18 +0000
Received: from mx01.omp.ru (mx01.omp.ru [90.154.21.10])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b540e484-ee4a-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 11:20:16 +0200 (CEST)
Received: from [192.168.1.103] (178.176.73.203) by msexch01.omp.ru
 (10.188.4.12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.986.14; Tue, 9 May 2023
 12:20:06 +0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b540e484-ee4a-11ed-8611-37d641c3527e
Subject: Re: [patch v3 33/36] x86/apic: Save the APIC virtual base address
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
CC: <x86@kernel.org>, David Woodhouse <dwmw2@infradead.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>, Arjan van de
 Veen <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul
 McKenney <paulmck@kernel.org>, Tom Lendacky <thomas.lendacky@amd.com>, Sean
 Christopherson <seanjc@google.com>, Oleksandr Natalenko
	<oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>, "Guilherme
 G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, <xen-devel@lists.xenproject.org>,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	<linux-arm-kernel@lists.infradead.org>, Catalin Marinas
	<catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
	<guoren@kernel.org>, <linux-csky@vger.kernel.org>, Thomas Bogendoerfer
	<tsbogend@alpha.franken.de>, <linux-mips@vger.kernel.org>, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
	<deller@gmx.de>, <linux-parisc@vger.kernel.org>, Paul Walmsley
	<paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
	<linux-riscv@lists.infradead.org>, Mark Rutland <mark.rutland@arm.com>, Sabin
 Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
	<mikelley@microsoft.com>
References: <20230508181633.089804905@linutronix.de>
 <20230508185219.070274100@linutronix.de>
From: Sergey Shtylyov <s.shtylyov@omp.ru>
Organization: Open Mobile Platform
Message-ID: <a6f48a7b-484c-31af-f568-cb1de0d766d4@omp.ru>
Date: Tue, 9 May 2023 12:20:06 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20230508185219.070274100@linutronix.de>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [178.176.73.203]
X-ClientProxiedBy: msexch01.omp.ru (10.188.4.12) To msexch01.omp.ru
 (10.188.4.12)
X-KSE-ServerInfo: msexch01.omp.ru, 9
X-KSE-AntiSpam-Interceptor-Info: scan successful
X-KSE-AntiSpam-Version: 5.9.59, Database issued on: 05/09/2023 08:58:12
X-KSE-AntiSpam-Status: KAS_STATUS_NOT_DETECTED
X-KSE-AntiSpam-Method: none
X-KSE-AntiSpam-Rate: 59
X-KSE-AntiSpam-Info: Lua profiles 177218 [May 07 2023]
X-KSE-AntiSpam-Info: Version: 5.9.59.0
X-KSE-AntiSpam-Info: Envelope from: s.shtylyov@omp.ru
X-KSE-AntiSpam-Info: LuaCore: 510 510 bc345371020d3ce827abc4c710f5f0ecf15eaf2e
X-KSE-AntiSpam-Info: {rep_avail}
X-KSE-AntiSpam-Info: {Tracking_from_domain_doesnt_match_to}
X-KSE-AntiSpam-Info: {relay has no DNS name}
X-KSE-AntiSpam-Info: {SMTP from is not routable}
X-KSE-AntiSpam-Info: {Found in DNSBL: 178.176.73.203 in (user)
 b.barracudacentral.org}
X-KSE-AntiSpam-Info: {Found in DNSBL: 178.176.73.203 in (user)
 dbl.spamhaus.org}
X-KSE-AntiSpam-Info:
	178.176.73.203:7.4.1,7.7.3;d41d8cd98f00b204e9800998ecf8427e.com:7.1.1;omp.ru:7.1.1;127.0.0.199:7.1.2
X-KSE-AntiSpam-Info: {iprep_blacklist}
X-KSE-AntiSpam-Info: ApMailHostAddress: 178.176.73.203
X-KSE-AntiSpam-Info: {DNS response errors}
X-KSE-AntiSpam-Info: Rate: 59
X-KSE-AntiSpam-Info: Status: not_detected
X-KSE-AntiSpam-Info: Method: none
X-KSE-AntiSpam-Info: Auth:dmarc=temperror header.from=omp.ru;spf=temperror
 smtp.mailfrom=omp.ru;dkim=none
X-KSE-Antiphishing-Info: Clean
X-KSE-Antiphishing-ScanningType: Heuristic
X-KSE-Antiphishing-Method: None
X-KSE-Antiphishing-Bases: 05/09/2023 09:07:00
X-KSE-Antivirus-Interceptor-Info: scan successful
X-KSE-Antivirus-Info: Clean, bases: 5/9/2023 6:00:00 AM
X-KSE-Attachment-Filter-Triggered-Rules: Clean
X-KSE-Attachment-Filter-Triggered-Filters: Clean
X-KSE-BulkMessagesFiltering-Scan-Result: InTheLimit

Hello!

On 5/8/23 10:44 PM, Thomas Gleixner wrote:

> From: Thomas Gleixner <tglx@linutronix.de>
> 
> For parallel CPU brinugp it's required to read the APIC ID in the low level
> startup code. The virtual APIC base address is a constant because its a
> fix-mapped address. Exposing that constant which is composed via macros to
> assembly code is non-trivial dues to header inclusion hell.

   s/dues/due/?

> Aside of that it's constant only because of the vsyscall ABI
> requirement. Once vsyscall is out of the picture the fixmap can be placed
> at runtime.
> 
> Avoid header hell, stay flexible and store the address in a variable which
> can be exposed to the low level startup code.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Michael Kelley <mikelley@microsoft.com>

[...]

MBR, Sergey


From xen-devel-bounces@lists.xenproject.org Tue May 09 09:22:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 09:22:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532053.828031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwJYV-0004mj-TC; Tue, 09 May 2023 09:22:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532053.828031; Tue, 09 May 2023 09:22:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwJYV-0004mc-QD; Tue, 09 May 2023 09:22:47 +0000
Received: by outflank-mailman (input) for mailman id 532053;
 Tue, 09 May 2023 09:22:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vSxc=A6=citrix.com=prvs=486b9cf0a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pwJYU-0004mT-GE
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 09:22:46 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0adbe900-ee4b-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 11:22:43 +0200 (CEST)
Received: from mail-dm6nam11lp2175.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 May 2023 05:22:35 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6755.namprd03.prod.outlook.com (2603:10b6:510:122::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Tue, 9 May
 2023 09:22:31 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 09:22:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <f310cd4e-8bef-e0f8-f49c-74eb58e47268@citrix.com>
Date: Tue, 9 May 2023 10:22:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] docs/man: fix xenstore-write synopsis
Content-Language: en-GB
To: Yann Dirson <yann.dirson@vates.fr>, xen-devel@lists.xenproject.org
Cc: zithro <slack@rabbit.lu>
References: <20230509090123.781644-1-yann.dirson@vates.fr>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230509090123.781644-1-yann.dirson@vates.fr>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 09/05/2023 10:01 am, Yann Dirson wrote:
> Reported-by: zithro <slack@rabbit.lu>
> Signed-off-by: Yann Dirson <yann.dirson@vates.fr>

Oops.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 09 10:06:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 10:06:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532060.828041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwKEA-0000up-3u; Tue, 09 May 2023 10:05:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532060.828041; Tue, 09 May 2023 10:05:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwKEA-0000ui-0N; Tue, 09 May 2023 10:05:50 +0000
Received: by outflank-mailman (input) for mailman id 532060;
 Tue, 09 May 2023 10:05:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SDpd=A6=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pwKE8-0000uX-4u
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 10:05:48 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 107b4676-ee51-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 12:05:46 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pwKCp-00F8Kf-BN; Tue, 09 May 2023 10:04:27 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id AE21E300451;
 Tue,  9 May 2023 12:04:21 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 36C9B20B0882F; Tue,  9 May 2023 12:04:21 +0200 (CEST)
Date: Tue, 9 May 2023 12:04:21 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
Message-ID: <20230509100421.GU83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508185217.671595388@linutronix.de>

On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:

> @@ -233,14 +237,31 @@ static void notrace start_secondary(void
>  	load_cr3(swapper_pg_dir);
>  	__flush_tlb_all();
>  #endif
> +	/*
> +	 * Sync point with wait_cpu_initialized(). Before proceeding through
> +	 * cpu_init(), the AP will call wait_for_master_cpu() which sets its
> +	 * own bit in cpu_initialized_mask and then waits for the BSP to set
> +	 * its bit in cpu_callout_mask to release it.
> +	 */
>  	cpu_init_secondary();
>  	rcu_cpu_starting(raw_smp_processor_id());
>  	x86_cpuinit.early_percpu_clock_init();
> +
> +	/*
> +	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
> +	 * but just sets the bit to let the controlling CPU (BSP) know that
> +	 * it's got this far.
> +	 */
>  	smp_callin();
>  
> -	/* otherwise gcc will move up smp_processor_id before the cpu_init */
> +	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
>  	barrier();

Not to the detriment of this patch, but this barrier() and its comment
seem weird vs smp_callin(). That function ends with an atomic bitop (it
has to, at the very least it must not be weaker than store-release) but
also has an explicit wmb() to order setup vs CPU_STARTING.

(arguably that should be a full fence *AND* get a comment)

There is no way the smp_processor_id() referred to in this comment can
land before cpu_init() even without the barrier().

> -	/* Check TSC synchronization with the control CPU: */
> +
> +	/*
> +	 * Check TSC synchronization with the control CPU, which will do
> +	 * its part of this from wait_cpu_online(), making it an implicit
> +	 * synchronization point.
> +	 */
>  	check_tsc_sync_target();
>  
>  	/*


From xen-devel-bounces@lists.xenproject.org Tue May 09 10:06:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 10:06:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532061.828051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwKEB-00019p-Es; Tue, 09 May 2023 10:05:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532061.828051; Tue, 09 May 2023 10:05:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwKEB-00019i-Be; Tue, 09 May 2023 10:05:51 +0000
Received: by outflank-mailman (input) for mailman id 532061;
 Tue, 09 May 2023 10:05:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vSxc=A6=citrix.com=prvs=486b9cf0a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pwKE9-0000uX-H3
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 10:05:49 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10c7f51e-ee51-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 12:05:47 +0200 (CEST)
Received: from mail-bn8nam12lp2172.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 May 2023 06:05:44 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB5893.namprd03.prod.outlook.com (2603:10b6:510:32::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Tue, 9 May
 2023 10:05:39 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 10:05:39 +0000
Message-ID: <d0794e7d-604e-044f-000e-3a0bde126425@citrix.com>
Date: Tue, 9 May 2023 11:05:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/3] x86: Use CpuidUserDis if an AMD HVM guest toggles
 CPUID faulting
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
 <20230505175705.18098-4-alejandro.vallejo@cloud.com>
 <d8c9728b-b615-7229-2e76-106d81802a20@suse.com>
In-Reply-To: <d8c9728b-b615-7229-2e76-106d81802a20@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 08/05/2023 2:18 pm, Jan Beulich wrote:
> On 05.05.2023 19:57, Alejandro Vallejo wrote:
>> This is in order to aid guests of AMD hardware that we have exposed
>> CPUID faulting to. If they try to modify the Intel MSR that enables
>> the feature, trigger levelling so AMD's version of it (CpuidUserDis)
>> is used instead.
>>
>> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
>> ---
>>  xen/arch/x86/msr.c | 9 ++++++++-
>>  1 file changed, 8 insertions(+), 1 deletion(-)
> Don't you also need to update cpu-policy.c:calculate_host_policy()
> for the guest to actually know it can use the functionality? Which
> in turn would appear to require some form of adjustment to
> lib/x86/policy.c:x86_cpu_policies_are_compatible().

I asked Alejandro to do it like this.

Advertising this to guests requires plumbing another MSR into the
infrastructure, which isn't quite set up properly yet and is in flux
from my work.

For now, this just lets Xen enforce the policy over PV guests, which is
an improvement in and of itself.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 09 10:19:58 2023
Date: Tue, 9 May 2023 12:19:02 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
Message-ID: <20230509101902.GV83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508185217.671595388@linutronix.de>


Again, not really this patch, but since I had to look at this code ....

On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:
> @@ -1048,60 +1066,89 @@ static int do_boot_cpu(int apicid, int c

> 	/*
> 	 * AP might wait on cpu_callout_mask in cpu_init() with
> 	 * cpu_initialized_mask set if previous attempt to online
> 	 * it timed-out. Clear cpu_initialized_mask so that after
> 	 * INIT/SIPI it could start with a clean state.
> 	 */
> 	cpumask_clear_cpu(cpu, cpu_initialized_mask);
> 	smp_mb();

^^^ that barrier is weird too, cpumask_clear_cpu() is an atomic op and
implies much the same (this is x86 after all). If you want to be super
explicit about it write:

	smp_mb__after_atomic();

(which is a no-op) but then it still very much requires a comment as to
what exactly it orders against what.
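
The point about the barrier being implied can be seen in a userspace
analogue using C11 atomics (names are illustrative, not the kernel's):
a sequentially-consistent atomic read-modify-write already orders the
clear, so the explicit fence mostly serves as documentation, much like
smp_mb__after_atomic() on x86.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Userspace analogue of the pattern under discussion: clear a CPU's bit
 * in a mask with an atomic read-modify-write, then make the ordering
 * intent explicit.  On x86 a locked RMW is already a full barrier,
 * which is why smp_mb__after_atomic() is a no-op there; the fence below
 * chiefly documents *what* is ordered against *what*. */
static _Atomic uint64_t cpu_initialized_mask;

static void clear_cpu_initialized(unsigned int cpu)
{
    atomic_fetch_and(&cpu_initialized_mask, ~(1ULL << cpu));

    /* Orders the clear above before any subsequent INIT/SIPI-style
     * wakeup stores (illustrative; the seq_cst RMW above already
     * provides this ordering). */
    atomic_thread_fence(memory_order_seq_cst);
}
```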


> 	/*
> 	 * Wake up a CPU in different cases:
> 	 * - Use a method from the APIC driver if one defined, with wakeup
> 	 *   straight to 64-bit mode preferred over wakeup to RM.
> 	 * Otherwise,
> 	 * - Use an INIT boot APIC message
> 	 */
>  	if (apic->wakeup_secondary_cpu_64)
> +		return apic->wakeup_secondary_cpu_64(apicid, start_ip);
>  	else if (apic->wakeup_secondary_cpu)
> +		return apic->wakeup_secondary_cpu(apicid, start_ip);
>  
> +	return wakeup_secondary_cpu_via_init(apicid, start_ip);
> +}


From xen-devel-bounces@lists.xenproject.org Tue May 09 10:25:43 2023
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?[PATCH]=20docs:=20fix=20xenstore-paths=20doc=20structure?=
X-Mailer: git-send-email 2.30.2
To: xen-devel@lists.xenproject.org
Cc: Yann Dirson <yann.dirson@vates.fr>
Message-Id: <20230509102455.813997-1-yann.dirson@vates.fr>
Date: Tue, 09 May 2023 10:25:24 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

We currently have "Per Domain Paths" as an empty section, whereas it
looks like "General Paths" was not intended to include all of the
following sections.

Signed-off-by: Yann Dirson <yann.dirson@vates.fr>
---
 docs/misc/xenstore-paths.pandoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index bffb8ea544..f07ef90f63 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -129,7 +129,7 @@ create writable subdirectories as necessary.
 
 ## Per Domain Paths
 
-## General Paths
+### General Paths
 
 #### ~/vm = PATH []
 
-- 
2.30.2



Yann Dirson | Vates Platform Developer

XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com


From xen-devel-bounces@lists.xenproject.org Tue May 09 10:33:00 2023
Date: Tue, 9 May 2023 12:31:46 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
Message-ID: <20230509103146.GW83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508185217.671595388@linutronix.de>


And since I'm commenting on existing things anyway, let me continue...

On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:

> +static int wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
> +{
> +	unsigned long timeout;
>  
> +	/*
> +	 * Wait up to 10s for the CPU to report in.
> +	 */
> +	timeout = jiffies + 10*HZ;
> +	while (time_before(jiffies, timeout)) {
> +		if (cpumask_test_cpu(cpu, mask))
> +			return 0;
> +
> +		schedule();
>  	}
> +	return -1;
> +}

> +/*
> + * Bringup step three: Wait for the target AP to reach smp_callin().
> + * The AP is not waiting for us here so we don't need to parallelise
> + * this step. Not entirely clear why we care about this, since we just
> + * proceed directly to TSC synchronization which is the next sync
> + * point with the AP anyway.
> + */
> +static void wait_cpu_callin(unsigned int cpu)
> +{
> +	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
> +		schedule();
> +}
> +
> +/*
> + * Bringup step four: Synchronize the TSC and wait for the target AP
> + * to reach set_cpu_online() in start_secondary().
> + */
> +static void wait_cpu_online(unsigned int cpu)
>  {
>  	unsigned long flags;
> +
> +	/*
> +	 * Check TSC synchronization with the AP (keep irqs disabled
> +	 * while doing so):
> +	 */
> +	local_irq_save(flags);
> +	check_tsc_sync_source(cpu);
> +	local_irq_restore(flags);
> +
> +	/*
> +	 * Wait for the AP to mark itself online, so the core caller
> +	 * can drop sparse_irq_lock.
> +	 */
> +	while (!cpu_online(cpu))
> +		schedule();
> +}

These schedule() loops make me itch... this is basically Ye Olde yield()
loop with all its known 'benefits'.

Now, I don't think it's horribly broken, we're explicitly waiting on
another CPU and can't have priority inversions, but yuck!

It could all be somewhat cleaned up with wait_var_event{_timeout}() and
wake_up_var(), but I'm really not sure that's worth it. But at least it
requires a comment to justify.
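
The wait_var_event()/wake_up_var() pairing being suggested centralises
the loop behind a named wait/wake API. A minimal userspace stand-in
(names invented for the sketch; the real kernel API actually sleeps the
waiter on a hashed waitqueue rather than spinning):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

/* Sketch of the wait_var_event()/wake_up_var() idea: the waiter blocks
 * on a condition tied to a variable, and the updater wakes it
 * explicitly, instead of an open-coded yield loop at every call site.
 * This stand-in still yields, but the loop lives in one place and the
 * wake side is explicit. */
static atomic_bool cpu_online_flag;

static void mark_online_and_wake(void)
{
    atomic_store(&cpu_online_flag, true);   /* ~ wake_up_var(&flag) */
}

static void wait_until_online(void)
{
    /* ~ wait_var_event(&flag, atomic_load(&flag)) */
    while (!atomic_load(&cpu_online_flag))
        sched_yield();
}
```

In real use the two functions run on different CPUs; calling the wake
side first below just keeps the demonstration single-threaded.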


From xen-devel-bounces@lists.xenproject.org Tue May 09 10:42:59 2023
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH] iommu/vtd: fix address translation for superpages
Date: Tue,  9 May 2023 12:41:46 +0200
Message-Id: <20230509104146.61178-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

When translating an address that falls inside a superpage in the
IOMMU page tables, the fetching of the PTE physical address field
wasn't using dma_pte_addr(), which caused the returned data to be
corrupted, as it contained bits unrelated to the address field.

Fix this by re-using the value of pte_maddr, which is obtained using
dma_pte_addr().  This change requires adjusting the preceding error
path to zero pte_maddr.

Fixes: c71e55501a61 ('VT-d: have callers specify the target level for page table walks')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/drivers/passthrough/vtd/iommu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 130a159cde07..819e996e6269 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -359,16 +359,18 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
 
             if ( !alloc )
             {
-                pte_maddr = 0;
                 if ( !dma_pte_present(*pte) )
+                {
+                    pte_maddr = 0;
                     break;
+                }
 
                 /*
                  * When the leaf entry was requested, pass back the full PTE,
                  * with the address adjusted to account for the residual of
                  * the walk.
                  */
-                pte_maddr = pte->val +
+                pte_maddr +=
                     (addr & ((1UL << level_to_offset_bits(level)) - 1) &
                      PAGE_MASK);
                 if ( !target )
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue May 09 10:50:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 10:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532094.828101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwKv7-0000fI-Ms; Tue, 09 May 2023 10:50:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532094.828101; Tue, 09 May 2023 10:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwKv7-0000fB-JQ; Tue, 09 May 2023 10:50:13 +0000
Received: by outflank-mailman (input) for mailman id 532094;
 Tue, 09 May 2023 10:50:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SDpd=A6=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pwKv5-0000f5-UP
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 10:50:12 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 447996ec-ee57-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 12:50:10 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pwKuD-0066WL-1Q; Tue, 09 May 2023 10:49:19 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id C2CAA300023;
 Tue,  9 May 2023 12:49:15 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id A111020B0882C; Tue,  9 May 2023 12:49:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 447996ec-ee57-11ed-b227-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=8P7wZHGsQrdcODAexcyC5HmNjq7UyxY+a/r9BHYLyPc=; b=LZcrlvoWgQ+AuQIzSKoAbD0LMI
	b6BHDQIQNgv1TgrA6LONLCnHui8tdPe3S7PqOkv3sYRCrscZ+XMID94S8/TszoyfPgDPSAtQ+UTEb
	h4a3MCLNpB/XRmwM0n8A1ogc7k4Bpmn3jlbth/DX7axlvB31Yl6fCYMH6bJHqtyrRocreKHmePIjY
	oGkzR9e3lZU/07+fFI0IW2dT3qtKo/wVGsmWPmVsGg8bTZtIl1kVM5ehFmk3qTLmzEE7S3RcUzrrq
	Hzf61paJrYytBCotmCbO9yxt1Ffzzoe6S7f87moZIuIvJFRKis9FFf32XMpNwcYOWQPfdMZxjOYjd
	W8dWFyOA==;
Date: Tue, 9 May 2023 12:49:15 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: Re: [patch v3 13/36] x86/smpboot: Remove cpu_callin_mask
Message-ID: <20230509104915.GX83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.956149661@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508185217.956149661@linutronix.de>

On Mon, May 08, 2023 at 09:43:47PM +0200, Thomas Gleixner wrote:

> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c

> @@ -167,21 +166,16 @@ static inline void smpboot_restore_warm_
>   */
>  static void smp_callin(void)
>  {
> -	int cpuid;
> +	int cpuid = smp_processor_id();
>  
>  	/*
>  	 * If waken up by an INIT in an 82489DX configuration
> -	 * cpu_callout_mask guarantees we don't get here before
> -	 * an INIT_deassert IPI reaches our local APIC, so it is
> -	 * now safe to touch our local APIC.
> -	 */
> -	cpuid = smp_processor_id();
> -
> -	/*
> -	 * the boot CPU has finished the init stage and is spinning
> -	 * on callin_map until we finish. We are free to set up this
> -	 * CPU, first the APIC. (this is probably redundant on most
> -	 * boards)
> +	 * cpu_callout_mask guarantees we don't get here before an
> +	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
> +	 * touch our local APIC.
> +	 *
> +	 * Set up this CPU, first the APIC, which is probably redundant on
> +	 * most boards.
>  	 */
>  	apic_ap_setup();
>  
> @@ -192,7 +186,7 @@ static void smp_callin(void)
>  	 * The topology information must be up to date before
>  	 * calibrate_delay() and notify_cpu_starting().
>  	 */
> -	set_cpu_sibling_map(raw_smp_processor_id());
> +	set_cpu_sibling_map(cpuid);
>  
>  	ap_init_aperfmperf();
>  
> @@ -205,11 +199,6 @@ static void smp_callin(void)
>  	 * state (CPUHP_ONLINE in the case of serial bringup).
>  	 */
>  	notify_cpu_starting(cpuid);
> -
> -	/*
> -	 * Allow the master to continue.
> -	 */
> -	cpumask_set_cpu(cpuid, cpu_callin_mask);
>  }
>  
>  static void ap_calibrate_delay(void)
> @@ -268,11 +257,6 @@ static void notrace start_secondary(void
>  	rcu_cpu_starting(raw_smp_processor_id());
>  	x86_cpuinit.early_percpu_clock_init();
>  
> -	/*
> -	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
> -	 * but just sets the bit to let the controlling CPU (BSP) know that
> -	 * it's got this far.
> -	 */
>  	smp_callin();
>  
>  	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */

Good riddance to that mask; however, is smp_callin() still an appropriate
name for that function?

Would smp_starting() -- seeing how this kicks off CPU_STARTING -- not be a
better name?


From xen-devel-bounces@lists.xenproject.org Tue May 09 10:54:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 10:54:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532099.828110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwKzR-0001FI-8e; Tue, 09 May 2023 10:54:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532099.828110; Tue, 09 May 2023 10:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwKzR-0001FB-5Z; Tue, 09 May 2023 10:54:41 +0000
Received: by outflank-mailman (input) for mailman id 532099;
 Tue, 09 May 2023 10:54:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PO4B=A6=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pwKzP-0001F5-RP
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 10:54:40 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7e88::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e20dfcbe-ee57-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 12:54:37 +0200 (CEST)
Received: from DM6PR02CA0160.namprd02.prod.outlook.com (2603:10b6:5:332::27)
 by SA1PR12MB8597.namprd12.prod.outlook.com (2603:10b6:806:251::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18; Tue, 9 May
 2023 10:54:31 +0000
Received: from DM6NAM11FT014.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:332:cafe::52) by DM6PR02CA0160.outlook.office365.com
 (2603:10b6:5:332::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18 via Frontend
 Transport; Tue, 9 May 2023 10:54:31 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT014.mail.protection.outlook.com (10.13.173.132) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.33 via Frontend Transport; Tue, 9 May 2023 10:54:31 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 9 May
 2023 05:54:31 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 9 May
 2023 05:54:30 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 9 May 2023 05:54:28 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e20dfcbe-ee57-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gcFZK9uExUejsgeeBoGCqyKir+KhLgVi1YFVlEYnnaW1xBu6GgjS+6AsBzzBdgnaPu3UpF/QP5pdEZydTksGLVcJPWh2RYXL//nyNcb3d5hbCwNho7kefll4+pTQ1GFTa+9L/6iAxbLhQEXi9CmhbPXXIc8/gFBPOkAHRZ7Ut0iS5iH92ECy32DcN84Qq6f+hgUqiaLztr7eacR64xLjTx2+DsJy4ESrY2prO2irw44p2QXoiWgbL4Xzxxr8/BR48B7vAO6Sts2Ds7tEi7yxGtPzB8xKAAmmjM55oYy5Qp3j0eJwa5taLGbdPTHARiwOpwpJIlFJ23m2XiKAOXHmbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pExdquudE4JcTAfm19Hr5GI1d+g1k/9tY22rsnGMN9Q=;
 b=R43Et2GP4gvP8L4MCZAj87mqsv+xk83K9a1jmvj0anrGdetJhJjI7kEZt42QQdXXcSpYN3mujamTi7hBMgT4sthn+0TAc6957gikZ4jEjFdSVpvz8VL+gvlqwVlktvdV63ww39uAbdlRtR4MzBI5/S90NMN/cPN8cmhyZeIISOc+RvDEVzOTSqQ7+6u3z9kPNAlfjHa0VGwC3eCmbEywK8T0WNnGNuoqDN22NDbZ69Zant/4A9dWA26GOl3zcpr+OWFqH4D1O/U8C2fvRjsm0Wo7RZQ0fKOoWCFtwJ67BgIj0pI8PwxTYMTceVKv8eSr3ozL3H0KrCdHDhNYRLuYNg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pExdquudE4JcTAfm19Hr5GI1d+g1k/9tY22rsnGMN9Q=;
 b=NSB7FRG3ENDsIUETiReXfLThITK2tt90e253IlWn0d+7rKXr4a9nUILbxchfQjJQ34MTuyXjsSwJuLC8Mm6GK98emJINacUV69vMZ5buY0WdRcOwvHVasJSfN7VpXhDHhEET9NgMubWLngQ67Yvyil7CilBFuaQ9F3ZXYH2Axgg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <3f2bb14e-34fd-6a74-5644-1c6470c7cc37@amd.com>
Date: Tue, 9 May 2023 12:54:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN][PATCH v6 05/19] xen/arm: Add CONFIG_OVERLAY_DTB
To: Henry Wang <Henry.Wang@arm.com>, Vikram Garhwal <vikram.garhwal@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-6-vikram.garhwal@amd.com>
 <AS8PR08MB799123AE54B0ABE907F2FBB8926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <AS8PR08MB799123AE54B0ABE907F2FBB8926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT014:EE_|SA1PR12MB8597:EE_
X-MS-Office365-Filtering-Correlation-Id: 46f017b4-32b0-4327-c4d8-08db507bc4db
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 10:54:31.6212
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 46f017b4-32b0-4327-c4d8-08db507bc4db
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT014.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB8597



On 04/05/2023 06:11, Henry Wang wrote:
> 
> 
> Hi Vikram,
> 
>> -----Original Message-----
>> Subject: [XEN][PATCH v6 05/19] xen/arm: Add CONFIG_OVERLAY_DTB
>>
>> Introduce a config option where the user can enable support for
>> adding/removing
>> device tree nodes using a device tree binary overlay.
> 
> May I please also suggest adding a CHANGELOG entry in the "### Added"
> section? I personally think this series deserves a CHANGELOG entry but I
> am open to others' opinion though. Thanks!
Yes, this definitely deserves an entry.

Also, please mention the SUPPORT (and CHANGELOG) changes in the commit msg
(for now it only covers the Kconfig option).

~Michal


From xen-devel-bounces@lists.xenproject.org Tue May 09 11:03:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 11:03:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532103.828120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwL7t-0002m9-2W; Tue, 09 May 2023 11:03:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532103.828120; Tue, 09 May 2023 11:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwL7s-0002m2-W6; Tue, 09 May 2023 11:03:24 +0000
Received: by outflank-mailman (input) for mailman id 532103;
 Tue, 09 May 2023 11:03:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SDpd=A6=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pwL7s-0002lw-13
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 11:03:24 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c09315e-ee59-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 13:03:21 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pwL6v-0066iJ-0M; Tue, 09 May 2023 11:02:25 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 2422B300451;
 Tue,  9 May 2023 13:02:24 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 057BF201DB6CD; Tue,  9 May 2023 13:02:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c09315e-ee59-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=JF1V+c3UL+N9ld1zX5rd8Mu0N4lob7gKoBwZz1mVIMw=; b=aql03NPFGAMO5Kl80coujejdq3
	VXWbx83fXzAKkdlxYwSALK8eVIvquLc6Ikp1W0vcqSV2KZFsK0+H7qSrgqzM5hHqyQaI6c6dtK5Bo
	uua/mR63aSWOGyFaB0ob+GKwxTQPCDfzgAhA8c/Y3JJbxMLyT0F0HwllrurWg1mR1JDZsedVWxS9j
	FAK3tthBDoVdkkdMJDLY3MllM6CGIfUH146sLI/18R1vwH5QDSoPiTiUXSiyb2KYPpHMa/vs9cEhb
	BfS09qrwDutlOjjfAgyNrNhgddilyXlvHpPbGL7Qr4vOvio7lk2WWrZ7JrQu/SxY7BUXwYFCQ9zlK
	wcsdOk9g==;
Date: Tue, 9 May 2023 13:02:23 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: Re: [patch v3 14/36] [patch V2 14/38] cpu/hotplug: Rework sparse_irq
 locking in bringup_cpu()
Message-ID: <20230509110223.GY83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.013044883@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508185218.013044883@linutronix.de>

On Mon, May 08, 2023 at 09:43:49PM +0200, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> There is no harm to hold sparse_irq lock until the upcoming CPU completes
> in cpuhp_online_idle(). This allows to remove cpu_online() synchronization
> from architecture code.

Uuuuuhh.. damn. Can you at the very least amend the comment near
irq_lock_sparse() to mention these extra duties?



From xen-devel-bounces@lists.xenproject.org Tue May 09 11:04:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 11:04:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532109.828131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwL8q-0003ME-FT; Tue, 09 May 2023 11:04:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532109.828131; Tue, 09 May 2023 11:04:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwL8q-0003M7-Bj; Tue, 09 May 2023 11:04:24 +0000
Received: by outflank-mailman (input) for mailman id 532109;
 Tue, 09 May 2023 11:04:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7D+C=A6=citrix.com=prvs=486391a49=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pwL8p-0003Lz-AK
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 11:04:23 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3f820814-ee59-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 13:04:21 +0200 (CEST)
Received: from mail-bn8nam12lp2171.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 May 2023 07:04:19 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by CH2PR03MB5368.namprd03.prod.outlook.com (2603:10b6:610:9d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Tue, 9 May
 2023 11:04:16 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%4]) with mapi id 15.20.6363.032; Tue, 9 May 2023
 11:04:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f820814-ee59-11ed-b227-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683630261;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=CdHTu5cLFhVVdekV7oETNew1x2H7T1z1YCZ4sBCK6tE=;
  b=Xoy+2JyAyyXwU91PkdX8tBc/UVG0DnIw2Jey2zKT6yFbcGfIcmQznLmB
   ntQdvS+Xz0djJhsbdevvLIBaaNrMW/MWphgcy1Lvy9z6YrzdFMljsORMg
   wUPnh39HFWDdGJU321fI8af5F8rBBw72k3DGTh4cbZcALJoY4u7NPLGZ+
   c=;
X-IronPort-RemoteIP: 104.47.55.171
X-IronPort-MID: 108776587
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,261,1677560400"; 
   d="scan'208";a="108776587"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=K6399ZA2Lw2MTkbtY75Ek8PEkSQjCkgWKIo3OUkvDWAyZBiJywN+wRXidEdeY3EHarQBNLZkrPQrNvbXHxRWe366iLyTIXvmpyzu8N5KUJYQv8W7/75vkLbsDOZkb/KR966ocs+M+rbjTKY7C9+ez1M4q6Ql07+t8ZnEvYfmrU36/uGITR+f2rgrvqALOCli+K8x638eRjyr9u9LhbcT+DEEk3uNrPc18TNqX1Qlx/63vkOXb02QCs6KgvFCYnUdDgXhQUE/vr1kOuq56TFfCjmfgOKaCMsbaTc4rPrm+4fTVB2Uks48uCxGRzPdYKkRsRvLZQ6mGZajuZ7LKpoCKg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EEVdwpNoDpCMI+qjQBMvK79jWyy35hxraOfYHbD9/PY=;
 b=O9+8EH3RpAZ0E5HHTlmp1MC3Du80FxScR3EmY7iA1PjdVCCkQEmIjue6xOxQOpky5gzffzr0IBop4lC2Mob5jCvAvf+k9FUQSoaoqy7e883YZVJhpmsCS2no7vnsfzh3pL/9FGz5RhtkLQ5iySZDKm8V9dtKSKb+ZU1hkD5PaWAUQBm8CQtq04By0JwHyfLwd41tY+tShKNNCWf3kQrgSK76Q+h2XK/iogWX8DEsiFDhZUjamO8YYJOQGFs+XUYEWdKgIjRySKyQaq79CQ+TXePmZ7PA1Ev5a5P4JGSwPe6AU058SRjG8CTDxLg0djdxAZRfOP9X2R0s6O1B3eKbCQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EEVdwpNoDpCMI+qjQBMvK79jWyy35hxraOfYHbD9/PY=;
 b=SQ8SaFduFzXJCyNRdcxffYddyCSBTyffjZyXhaSvRU3RavUUCvyl668UwjGbrZW7gT28vqoxVcvtHSlf8WzYWLws85lybgb9u9+UtvETS6fbKcNHXCxANoCbRHRxI3UMiwTZK3+QvuTluaBFzNIfUXMYfCpEbfguFk5NZOgPALQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH] x86/iommu: fix wrong iterator type in arch_iommu_hwdom_init()
Date: Tue,  9 May 2023 13:03:25 +0200
Message-Id: <20230509110325.61750-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

The 'i' iterator index stores a pdx, not a pfn, and hence the initial
assignment from start (which holds a pfn) needs a conversion from pfn
to pdx.

Fixes: 6b4f6a31ace1 ('x86/PVH: de-duplicate mappings for first Mb of Dom0 memory')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/drivers/passthrough/x86/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cb0788960a08..6bc79e7ec843 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -406,7 +406,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
      */
     start = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;
 
-    for ( i = start, count = 0; i < top; )
+    for ( i = pfn_to_pdx(start), count = 0; i < top; )
     {
         unsigned long pfn = pdx_to_pfn(i);
         unsigned int perms = hwdom_iommu_map(d, pfn, max_pfn);
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Tue May 09 11:08:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 11:08:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532114.828140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwLCp-00044R-UR; Tue, 09 May 2023 11:08:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532114.828140; Tue, 09 May 2023 11:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwLCp-00044K-Rm; Tue, 09 May 2023 11:08:31 +0000
Received: by outflank-mailman (input) for mailman id 532114;
 Tue, 09 May 2023 11:08:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SDpd=A6=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pwLCm-000449-Ga
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 11:08:30 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d2a659d2-ee59-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 13:08:27 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pwLBl-00FBZV-VA; Tue, 09 May 2023 11:07:26 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 20197300023;
 Tue,  9 May 2023 13:07:23 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 0504E20B0882C; Tue,  9 May 2023 13:07:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2a659d2-ee59-11ed-b227-6b7b168915f2
Date: Tue, 9 May 2023 13:07:22 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: Re: [patch v3 18/36] [patch V2 18/38] cpu/hotplug: Add CPU state
 tracking and synchronization
Message-ID: <20230509110722.GZ83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.240871842@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508185218.240871842@linutronix.de>

On Mon, May 08, 2023 at 09:43:55PM +0200, Thomas Gleixner wrote:

> +static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state)
> +{
> +	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
> +	int sync = atomic_read(st);
> +
> +	while (!atomic_try_cmpxchg(st, &sync, state));
> +}

Why isn't:

	atomic_set(st, state);

any good?



From xen-devel-bounces@lists.xenproject.org Tue May 09 11:30:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 11:30:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532119.828151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwLXW-0006c0-N2; Tue, 09 May 2023 11:29:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532119.828151; Tue, 09 May 2023 11:29:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwLXW-0006bt-Jm; Tue, 09 May 2023 11:29:54 +0000
Received: by outflank-mailman (input) for mailman id 532119;
 Tue, 09 May 2023 11:29:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PO4B=A6=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pwLXU-0006aG-O6
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 11:29:52 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20603.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::603])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cef9cece-ee5c-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 13:29:51 +0200 (CEST)
Received: from BN8PR07CA0016.namprd07.prod.outlook.com (2603:10b6:408:ac::29)
 by IA1PR12MB6627.namprd12.prod.outlook.com (2603:10b6:208:3a1::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Tue, 9 May
 2023 11:29:47 +0000
Received: from BN8NAM11FT030.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ac:cafe::56) by BN8PR07CA0016.outlook.office365.com
 (2603:10b6:408:ac::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18 via Frontend
 Transport; Tue, 9 May 2023 11:29:47 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT030.mail.protection.outlook.com (10.13.177.146) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.18 via Frontend Transport; Tue, 9 May 2023 11:29:47 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 9 May
 2023 06:29:46 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 9 May
 2023 06:29:42 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 9 May 2023 06:29:41 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cef9cece-ee5c-11ed-b227-6b7b168915f2
Message-ID: <080bd1fe-a58c-5bdf-eef5-995420001ca4@amd.com>
Date: Tue, 9 May 2023 13:29:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN][PATCH v6 08/19] xen/device-tree: Add
 device_tree_find_node_by_path() to find nodes in device tree
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>, Vikram Garhwal <vikram.garhwal@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "sstabellini@kernel.org" <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-9-vikram.garhwal@amd.com>
 <AS8PR08MB79910CFF4439E503046660EF926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <AS8PR08MB79910CFF4439E503046660EF926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit



On 04/05/2023 06:23, Henry Wang wrote:
> 
> 
> Hi Vikram,
> 
>> -----Original Message-----
>> Subject: [XEN][PATCH v6 08/19] xen/device-tree: Add
>> device_tree_find_node_by_path() to find nodes in device tree
>>
>> Add device_tree_find_node_by_path() to find a matching node with path for
>> a
>> dt_device_node.
>>
>> Reason behind this function:
>>     Each time overlay nodes are added using .dtbo, a new fdt(memcpy of
>>     device_tree_flattened) is created and updated with overlay nodes. This
>>     updated fdt is further unflattened to a dt_host_new. Next, we need to find
>>     the overlay nodes in dt_host_new, find the overlay node's parent in dt_host
>>     and add the nodes as child under their parent in the dt_host. Thus we need
>>     this function to search for node in different unflattened device trees.
>>
>> Also, make dt_find_node_by_path() static inline.
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>> ---
>>  xen/common/device_tree.c      |  5 +++--
>>  xen/include/xen/device_tree.h | 17 +++++++++++++++--
>>  2 files changed, 18 insertions(+), 4 deletions(-)
>>
> 
> [...]
> 
>>  /**
>> - * dt_find_node_by_path - Find a node matching a full DT path
>> + * device_tree_find_node_by_path - Generic function to find a node
>> matching the
>> + * full DT path for any given unflatten device tree
>> + * @dt_node: The device tree to search
> 
> I noticed that you missed Michal's comment here about renaming the
> "dt_node" here to "dt" to match below function prototype...
This is one thing. The other is that in v5 you said this is to be a generic function
that can search from the middle of a device tree. This means that the parameter should be
named "node" or "from" and the description needs to say "The node to start searching from".
Also, given the lack of an ->allnext restriction, you can mention that the search is
inclusive (i.e. the passed node will also be searched).

~Michal


From xen-devel-bounces@lists.xenproject.org Tue May 09 11:36:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 11:36:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532124.828160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwLe0-00082g-Ba; Tue, 09 May 2023 11:36:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532124.828160; Tue, 09 May 2023 11:36:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwLe0-00082Z-8t; Tue, 09 May 2023 11:36:36 +0000
Received: by outflank-mailman (input) for mailman id 532124;
 Tue, 09 May 2023 11:36:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SDpd=A6=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pwLdy-00082T-ON
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 11:36:34 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bfebc1a3-ee5d-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 13:36:34 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pwLdH-00678Y-0c; Tue, 09 May 2023 11:35:52 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 8F613302F3D;
 Tue,  9 May 2023 13:35:48 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 6D04F20C342B1; Tue,  9 May 2023 13:35:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfebc1a3-ee5d-11ed-b227-6b7b168915f2
Date: Tue, 9 May 2023 13:35:48 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>
Subject: Re: [patch v3 18/36] [patch V2 18/38] cpu/hotplug: Add CPU state
 tracking and synchronization
Message-ID: <20230509113548.GD38236@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.240871842@linutronix.de>
 <20230509110722.GZ83892@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230509110722.GZ83892@hirez.programming.kicks-ass.net>

On Tue, May 09, 2023 at 01:07:23PM +0200, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:43:55PM +0200, Thomas Gleixner wrote:
> 
> > +static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state)
> > +{
> > +	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
> > +	int sync = atomic_read(st);
> > +
> > +	while (!atomic_try_cmpxchg(st, &sync, state));
> > +}
> 
> Why isn't:
> 
> 	atomic_set(st, state);
> 
> any good?

Hmm, should at the very least be atomic_set_release(), but if you want
the full barrier then:

	(void)atomic_xchg(st, state);

is the much saner way to write the above.


From xen-devel-bounces@lists.xenproject.org Tue May 09 12:07:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 12:07:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532139.828171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwM7q-0003KQ-1g; Tue, 09 May 2023 12:07:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532139.828171; Tue, 09 May 2023 12:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwM7p-0003KJ-V3; Tue, 09 May 2023 12:07:25 +0000
Received: by outflank-mailman (input) for mailman id 532139;
 Tue, 09 May 2023 12:07:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwM7o-0003KA-EB
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 12:07:24 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0d69057c-ee62-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 14:07:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d69057c-ee62-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683634040;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L8MCFM0FDR2P5xT6w7iIAowrB6rGKJkA6dzl9HivHWk=;
	b=WQKf9JTpFPSQGT0uFg5INijnVR8R5fqcikzHfuS5DVBLQGQ7GwJQT+EVszHAmjT/3/UBuz
	qVYnARbWYomJ6gliEb43t5la/h6VifzuO+zL/GRl0NjzHXmgK5N/FGwqL06FgLMkljqBFm
	XyfTJfmksGnqa6BjZhyerBbGE6iuFhkjn27fQrUXAQ1QApceQPW64uUNRla+fMpap+pUkk
	SWZ9l1mk0piAcuHN5EvFC700NqxfjfjIo2TZ35Uf44N6qz9M7DRMymepQA2ZWVsdTMQEJG
	ZopQRoGBKBzZCgD4GsYTMKx5vSxJBSs3/zJnl8HdNjGWI5X3/Fou00r0/4324g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683634040;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L8MCFM0FDR2P5xT6w7iIAowrB6rGKJkA6dzl9HivHWk=;
	b=HEbhRwUpE3y701WVF4If5EM5F8H8s6m0+F3iRYhv9+fKfUn3XmN66AS4uimVHFaZrmTvUY
	YoYD1EwiDwf+ofDQ==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
In-Reply-To: <20230509100421.GU83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
 <20230509100421.GU83892@hirez.programming.kicks-ass.net>
Date: Tue, 09 May 2023 14:07:20 +0200
Message-ID: <87pm791zev.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 12:04, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:
>> +	/*
>> +	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
>> +	 * but just sets the bit to let the controlling CPU (BSP) know that
>> +	 * it's got this far.
>> +	 */
>>  	smp_callin();
>>  
>> -	/* otherwise gcc will move up smp_processor_id before the cpu_init */
>> +	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
>>  	barrier();
>
> Not to the detriment of this patch, but this barrier() and its comment
> seem weird vs smp_callin(). That function ends with an atomic bitop (it
> has to, at the very least it must not be weaker than store-release) but
> also has an explicit wmb() to order setup vs CPU_STARTING.
>
> (arguably that should be a full fence *AND* get a comment)
>
> There is no way the smp_processor_id() referred to in this comment can
> land before cpu_init() even without the barrier().

Right. Let me clean that up.
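For context, barrier() is a pure compiler barrier: it emits no instruction and only forbids the compiler from moving memory accesses across it. A GCC-style sketch (illustrative only; any opaque function call or atomic RMW, such as the bitop at the end of smp_callin(), already has the same effect on the compiler):

```c
#include <assert.h>

/* GCC/Clang idiom behind the kernel's barrier() macro */
#define barrier() __asm__ __volatile__("" ::: "memory")

static int setup_done;

static int read_after_setup(void)
{
    setup_done = 1;
    barrier();      /* compiler may not hoist the load above the store */
    return setup_done;
}
```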

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Tue May 09 12:08:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 12:08:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532144.828181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwM8y-0003tP-BE; Tue, 09 May 2023 12:08:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532144.828181; Tue, 09 May 2023 12:08:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwM8y-0003tI-89; Tue, 09 May 2023 12:08:36 +0000
Received: by outflank-mailman (input) for mailman id 532144;
 Tue, 09 May 2023 12:08:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwM8x-0003oR-1W
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 12:08:35 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 372c68cf-ee62-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 14:08:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 372c68cf-ee62-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683634111;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=auvnPlCdrxbwfVCryOkTHE8CzjY0504gnLY8d2dRonQ=;
	b=wafQfE3T818swYq6uLvh8KHiTe44B+AlTcCuev0YTvhSecK7fkanUUlrOFuq1HA2ASd8t+
	BU6tHwwDMQA0mZaxMylheStrGpAitYQ7e4+1xdIBRK50qlGuPKGUbpzNoPFc/U7pFg4vp7
	3txn/u3LsCuYBQ0eNUlaTm7ZlGVH1WlsNSZfP3xwKGSzwPqUJorHyzC7YM1MHYvCO7Lmx3
	BiJQOvEs+Zr/r7cQvp3o3X/HSCezUruhmuDtMYKpw4lmxWL4y4XK9Xj0Ms0sscYlMfQk/M
	8L9/ObIQjuOr/DXSdupvrplmykX+MkwMcI4Kqk1nxIkYGFdBkt+5ECVVyItbUg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683634111;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=auvnPlCdrxbwfVCryOkTHE8CzjY0504gnLY8d2dRonQ=;
	b=iQxVDqhAuaPEJSQzwxbX/REaKwQ08dwCm2L5m3NrRVeFnmsUvKPu/IfGcUkN/jYTMsh6S3
	yqlu1nzMP1/UIxCA==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
In-Reply-To: <20230509101902.GV83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
 <20230509101902.GV83892@hirez.programming.kicks-ass.net>
Date: Tue, 09 May 2023 14:08:31 +0200
Message-ID: <87mt2d1zcw.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 12:19, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:
>> @@ -1048,60 +1066,89 @@ static int do_boot_cpu(int apicid, int c
>
> 	/*
> 	 * AP might wait on cpu_callout_mask in cpu_init() with
> 	 * cpu_initialized_mask set if previous attempt to online
> 	 * it timed-out. Clear cpu_initialized_mask so that after
> 	 * INIT/SIPI it could start with a clean state.
> 	 */
> 	cpumask_clear_cpu(cpu, cpu_initialized_mask);
> 	smp_mb();
>
> ^^^ that barrier is weird too, cpumask_clear_cpu() is an atomic op and
> implies much the same (this is x86 after all). If you want to be super
> explicit about it write:
>
> 	smp_mb__after_atomic();
>
> (which is a no-op) but then it still very much requires a comment as to
> what exactly it orders against what.

As this is gone a few patches later, I'll just be lazy and leave it alone.
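For reference, the point about the redundant barrier can be sketched with C11 atomics (user-space stand-ins for cpumask_clear_cpu() and smp_mb(); names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_ulong cpu_initialized_mask;

static void clear_cpu_bit(int cpu)
{
    /* seq_cst RMW: compiles to a LOCK'd AND on x86, which is itself
     * a full barrier -- analogous to cpumask_clear_cpu() */
    atomic_fetch_and(&cpu_initialized_mask, ~(1UL << cpu));

    /* redundant on x86, mirroring the smp_mb() in the quoted hunk;
     * in kernel terms this would be smp_mb__after_atomic(), a no-op
     * after a fully-ordered RMW */
    atomic_thread_fence(memory_order_seq_cst);
}
```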


From xen-devel-bounces@lists.xenproject.org Tue May 09 12:09:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 12:09:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532149.828191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwM9l-0004QW-JC; Tue, 09 May 2023 12:09:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532149.828191; Tue, 09 May 2023 12:09:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwM9l-0004QP-GT; Tue, 09 May 2023 12:09:25 +0000
Received: by outflank-mailman (input) for mailman id 532149;
 Tue, 09 May 2023 12:09:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwM9l-0004Pg-0M
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 12:09:25 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5607d547-ee62-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 14:09:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5607d547-ee62-11ed-b227-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683634163;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JzzIG9diO0rBmRqXNA2vITQ7laniNDqcQjUI61xR0zc=;
	b=yGBgjZXX7JUQf9ai2BxNAcE40QqlbbCZvKDivZ8fc+Ky9mAI/0ueQdDzNtDVuilsrvxJJO
	lJ5mhaodIDGsYlQ0IXQvZ9tZytQody5uNCpERHh6I46d500WQf2vjLSAaSUonj1VM7S27k
	ijKtUP6m6oR8ZuUZRNmjosyunXeQRrX26+o4C4sXaWq1kq0u9zQ+mYZBmKYBPh+6Nuyz6J
	ha7BRefsXdcuOh0ABNgFpkVgNGLsV2g+kzTN5Fz65Tux9qtwNHNwfsc6BXGq7Rw/qb5o8h
	P5C95248r6aRH+rMJbxtb6dCXh+b7CBcpY6Wfvf0ny/k2Ev6eJ3Z3YSX7s1qRw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683634163;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JzzIG9diO0rBmRqXNA2vITQ7laniNDqcQjUI61xR0zc=;
	b=fUDzvynzHE21pHjVwgqzy6eWSyFgbVajRkQas+BUCgv8eUYACTe86SoKSEy94dBf+BbPut
	fM8AU/kPvb2mGGDg==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
In-Reply-To: <20230509103146.GW83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
 <20230509103146.GW83892@hirez.programming.kicks-ass.net>
Date: Tue, 09 May 2023 14:09:23 +0200
Message-ID: <87jzxh1zbg.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 12:31, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:
>> +	/*
>> +	 * Wait for the AP to mark itself online, so the core caller
>> +	 * can drop sparse_irq_lock.
>> +	 */
>> +	while (!cpu_online(cpu))
>> +		schedule();
>> +}
>
> These schedule() loops make me itch... this is basically Ye Olde yield()
> loop with all its known 'benefits'.
>
> Now, I don't think it's horribly broken, we're explicitly waiting on
> another CPU and can't have priority inversions, but yuck!
>
> It could all be somewhat cleaned up with wait_var_event{_timeout}() and
> wake_up_var(), but I'm really not sure that's worth it. But at least it
> requires a comment to justify.

All of them are gone with the later patches, and the control CPU then
just waits for the completion in the core code.
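The two waiting styles under discussion can be sketched with POSIX threads standing in for schedule() and wait_var_event()/wake_up_var(); everything below is illustrative, not the kernel code:

```c
#include <assert.h>
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool cpu_online_flag;

/* The pattern being criticised: re-check the condition in a loop,
 * yielding each time -- ~ the while (!cpu_online(cpu)) schedule(). */
static void wait_polling(void)
{
    while (!cpu_online_flag)
        sched_yield();
}

/* The suggested shape: sleep until the other side signals,
 * ~ wait_var_event(). */
static void wait_event_style(void)
{
    pthread_mutex_lock(&lock);
    while (!cpu_online_flag)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

/* ~ wake_up_var(): the AP marks itself online and wakes the waiter. */
static void mark_online(void)
{
    pthread_mutex_lock(&lock);
    cpu_online_flag = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}
```

The polling form burns CPU and relies on the scheduler's yield semantics; the event form sleeps, which is why the core-code completion is the cleaner endpoint.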



From xen-devel-bounces@lists.xenproject.org Tue May 09 12:09:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 12:09:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532152.828201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMA7-0004tJ-T6; Tue, 09 May 2023 12:09:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532152.828201; Tue, 09 May 2023 12:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMA7-0004tC-Pp; Tue, 09 May 2023 12:09:47 +0000
Received: by outflank-mailman (input) for mailman id 532152;
 Tue, 09 May 2023 12:09:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwMA6-0004sZ-L9
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 12:09:46 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 629a00e2-ee62-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 14:09:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 629a00e2-ee62-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683634184;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JdyszAQFsS8NQ584CZnckJnpEhoUaXW9tnBOiTa+9+E=;
	b=ka9gkcGHPU/F0K+ngaz5jaCDeTzITcmJantNFEH1Q/42sdZ4zT1IqtMce7W/ZjZD1bK1+j
	/LNwDssKYbtn14emuSZbyff8IjECHOj44E32PB3y66ONqTaivar6HFMvTYpEn79bGOVSI8
	sOjrIz9MvrC0jruLu+aWlOCPHHLUW+sovMocfUIwQQzie8Jw2EWX+LtwgcvbMB/fb++PT7
	3V5LLpihTVTj/4ipngAecsP92q7WSv1hEdEi47Ocn0+UqB1lNCmcxgzxFfcOzQz6hdRbtH
	MZzhHOOgMIT3YAtQVGXh4XejswO34svSG9XWtwbNWw47oZdXADP7zBo3uAbL7Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683634184;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=JdyszAQFsS8NQ584CZnckJnpEhoUaXW9tnBOiTa+9+E=;
	b=GXVTcObOnQaoFBT3flZUDZ+yT6E9sycBY79vdaY5U1FPyYZPuFxqBXtxFTxmCYvrEjr620
	6gR4tFX9TtdUtDAw==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>
Subject: Re: [patch v3 13/36] x86/smpboot: Remove cpu_callin_mask
In-Reply-To: <20230509104915.GX83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.956149661@linutronix.de>
 <20230509104915.GX83892@hirez.programming.kicks-ass.net>
Date: Tue, 09 May 2023 14:09:44 +0200
Message-ID: <87h6sl1zav.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 12:49, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:43:47PM +0200, Thomas Gleixner wrote:
>> -	/*
>> -	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
>> -	 * but just sets the bit to let the controlling CPU (BSP) know that
>> -	 * it's got this far.
>> -	 */
>>  	smp_callin();
>>  
>>  	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
>
> Good riddance to that mask; however is smp_callin() still an appropriate
> name for that function?
>
> Would smp_starting() -- seeing how this kicks off CPU_STARTING -- not
> be a better name?

Something like that, yes.


From xen-devel-bounces@lists.xenproject.org Tue May 09 12:10:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 12:10:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532154.828211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMAU-0006G5-59; Tue, 09 May 2023 12:10:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532154.828211; Tue, 09 May 2023 12:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMAU-0006Fy-29; Tue, 09 May 2023 12:10:10 +0000
Received: by outflank-mailman (input) for mailman id 532154;
 Tue, 09 May 2023 12:10:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwMAS-0004sZ-DP
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 12:10:08 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6fcd4982-ee62-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 14:10:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fcd4982-ee62-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683634206;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=VtDcNKByG3QGqBaMWteEmBlx8o79YaY4wIW5CrphOKI=;
	b=ozV/VUOhCbdvdcwqxdtfXZdcaA90ARKwOnYQBRouWIahnMXGWDMqDNjRrHC/BBYWjq5SZl
	0zVkjwMN1HrH8Nj5PlcDw7JkrmEq1wl2opWaW4iHlUWYz9rS55vTCEV7bvHci0YQS0msKe
	OiKR9k0tA7izR2CPXGYcFUsQfK7A+x/0KRvkRZX+qrF7KiQIDhvs0oyXBpq1rxJdGhfGCh
	MYC6WnITSxgPyRjiVJDU49KaDy0xrS7hBHILcujYL+/hxIaH+NoGh3k9eNjkZ5FjVDbyZQ
	FdCg0ek5JAc2T8OI8FHkzUUTTsC4j7rKC+fqFmbcQZsqVeexs2oybxw7tmiGyQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683634206;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=VtDcNKByG3QGqBaMWteEmBlx8o79YaY4wIW5CrphOKI=;
	b=ZLjODydBGy8cvSTodcStIAcmZAy/XrGuM8TLpXQA6c10hgPy1SCLXFLe3hFmS5CqBcJDXp
	pZHsXusMPUdgkGCg==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>
Subject: Re: [patch v3 14/36] [patch V2 14/38] cpu/hotplug: Rework
 sparse_irq locking in bringup_cpu()
In-Reply-To: <20230509110223.GY83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.013044883@linutronix.de>
 <20230509110223.GY83892@hirez.programming.kicks-ass.net>
Date: Tue, 09 May 2023 14:10:06 +0200
Message-ID: <87ednp1za9.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 13:02, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:43:49PM +0200, Thomas Gleixner wrote:
>> From: Thomas Gleixner <tglx@linutronix.de>
>> 
>> There is no harm in holding the sparse_irq lock until the upcoming CPU
>> completes in cpuhp_online_idle(). This allows removing the cpu_online()
>> synchronization from architecture code.
>
> Uuuuuhh.. damn. Can you at the very least amend the comment near
> irq_lock_sparse() to mention these extra duties?

Will do.


From xen-devel-bounces@lists.xenproject.org Tue May 09 12:12:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 12:12:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532162.828221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMCV-0006y6-IN; Tue, 09 May 2023 12:12:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532162.828221; Tue, 09 May 2023 12:12:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMCV-0006xz-FY; Tue, 09 May 2023 12:12:15 +0000
Received: by outflank-mailman (input) for mailman id 532162;
 Tue, 09 May 2023 12:12:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwMCU-0006xp-7l
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 12:12:14 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bb1754fc-ee62-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 14:12:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb1754fc-ee62-11ed-b227-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683634332;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=OeDWxrG4XUYobVqWgAe3O2JPVJDj89o+5jrCfskGJg8=;
	b=hl/TK4RBTejer0s1ss/VQWoTE3DJHIErNy5ObBMt3F35AuGijMu5SQIYqEXIyyWfguIJ74
	IuVIsOwOjI+DOYpi1oyZvtSrA4xy0VqeURht3o6yDrYLZd0dCj9sQmFOFO0ByR3FGuzx1X
	DX/4mqvZ8z2EcTfmL6YwZj0AheRNPYO2Sw/CFuIRodeNmQ9+fMov+UrbzmNB3mOPDT4LpP
	XixRF4zedTnImudhtqeC8vGLPe0PN84sIltm+fYhikFa7xWpApb8aT+EKAws94oS5Kw77W
	ZD3DPvNRfQa9kERCm1KsnYEOdLoCE866a9iJjH9pstUyHznEaCqcjcX+ulrAEw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683634332;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=OeDWxrG4XUYobVqWgAe3O2JPVJDj89o+5jrCfskGJg8=;
	b=C/RxUkGIvC1trkLde7ELOcrRJTFX/YyTBVeqxDPOcpgYwMXYyUXrtmmSUhQ5qz65ax/jQH
	CAvGiAfvhE+ZmGAQ==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>
Subject: Re: [patch v3 18/36] cpu/hotplug: Add CPU state
 tracking and synchronization
In-Reply-To: <20230509110722.GZ83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.240871842@linutronix.de>
 <20230509110722.GZ83892@hirez.programming.kicks-ass.net>
Date: Tue, 09 May 2023 14:12:12 +0200
Message-ID: <87bkit1z6r.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 13:07, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:43:55PM +0200, Thomas Gleixner wrote:
>
>> +static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state)
>> +{
>> +	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
>> +	int sync = atomic_read(st);
>> +
>> +	while (!atomic_try_cmpxchg(st, &sync, state));
>> +}
>
> Why isn't:
>
> 	atomic_set(st, state);
>
> any good?

Good question. It's only the other side (control CPU) which needs to be
careful.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Tue May 09 12:43:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 12:43:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532168.828232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMgk-0002Bk-55; Tue, 09 May 2023 12:43:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532168.828232; Tue, 09 May 2023 12:43:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMgk-0002Bd-0J; Tue, 09 May 2023 12:43:30 +0000
Received: by outflank-mailman (input) for mailman id 532168;
 Tue, 09 May 2023 12:43:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwMgi-0002BR-Q7; Tue, 09 May 2023 12:43:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwMgi-0000CQ-O4; Tue, 09 May 2023 12:43:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwMgi-0007KN-8J; Tue, 09 May 2023 12:43:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwMgi-0002G9-7U; Tue, 09 May 2023 12:43:28 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lJDvUXt0S5ipVy0wybxj7NYoLAMI4STfVas8fZQCqe0=; b=LmuM1S+WHZ5GpIKF1za8TKXGSD
	oxo0wOB8nZYXi/8k8mwRx/fGv9peliGl+kFS99MKxRFSpYC17gvnMu5GrOEIL/9OUBoMKjLhB5KWy
	lZkQ89DoF8WQHc0D8zb1Fgv0gG6aiqu7IrmBOr44s0Q/MSdgfpu48rJVr/8rmMRrp1Cc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180588-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180588: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4
X-Osstest-Versions-That:
    xen=a16fb78515d54be95f81c0d1c0a3a7b954a54d0a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 May 2023 12:43:28 +0000

flight 180588 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180588/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4
baseline version:
 xen                  a16fb78515d54be95f81c0d1c0a3a7b954a54d0a

Last test of basis   180577  2023-05-08 13:03:54 Z    0 days
Testing same since   180588  2023-05-09 10:01:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Yann Dirson <yann.dirson@vates.fr>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a16fb78515..8b1ac353b4  8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 09 12:47:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 12:47:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532174.828241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMkf-0002qs-KM; Tue, 09 May 2023 12:47:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532174.828241; Tue, 09 May 2023 12:47:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMkf-0002ql-HE; Tue, 09 May 2023 12:47:33 +0000
Received: by outflank-mailman (input) for mailman id 532174;
 Tue, 09 May 2023 12:47:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vSxc=A6=citrix.com=prvs=486b9cf0a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pwMke-0002pI-6H
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 12:47:32 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a78cfba0-ee67-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 14:47:29 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 May 2023 08:47:20 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6590.namprd03.prod.outlook.com (2603:10b6:510:bd::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Tue, 9 May
 2023 12:47:17 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 12:47:17 +0000
X-Inumbo-ID: a78cfba0-ee67-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683636449;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=LpLOEd0MUaX+8R/j7ZUV6UhwMA5YT1XIl3diOMCxcwE=;
  b=Kghwq6aIROqRkmmbg3YYyFkPK+l2H4j+BmwTVsL/a+O4qadSnU/io26E
   rtwgosr168mWZblqWr92zLDbYAhF5tB2UYXnVMc+T1E+tDgs92WiJtrKQ
   0IQrlHaUUe8JcIPAPfgWZUm31qHt9eTYYsl4xfSyUih6IeF+/3dJ5g7hD
   8=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 107140754
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VXODCYkUa0V1PDCahs8rEguJm+eZMCePeZ+Qcoa4vR3K1kUHLnEg0MRXXnLE+BvN+ZbHmq0D3oq/GW0mYlaC5Tl2sDl3xpMQ3rFjCNg0AZITRQoIMcyI89Dpl+N42G/4QaX2Br/GN+6b40A5gS6fFCA0EEwtCqqGh7dosYqICM9NeCZct78vCDaZWAlVJF9ebDD1CafkJuMdzt1wdvIqp72kn1SXsUHA/1HLBy2UmJJmnxTa4X79A07gntO0f418PcG4e+jtEtckLv0bxLQ7I2SgMSk0/kYwjiFoqHHvseyqGDytJuAez0SmsiXXHss2Hz57mJHafifIdk/gDbCatw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Rrt8osdwbVzW+5B6L7lGqi/dGheX57n61YDc+SVC4xU=;
 b=Fr1XP4mvtXLOR8Br1UO1wdn2LsMpguUOokKIJo1kUWPGp4yOrAHBtt9hkO8untzQcuEXCAEOxN4u07PWFoxqRjUoVi8gX+1o9z6BPUmTA8OjiLFORbbz40JKOlSb5SnMGMBIofDB4cDyxmk8ATMa5pHzeoQDMQZUIVKWnosqxAxPyaQnA2xzmbosl/AOG/xIGHoE1GnRagPboQvPblcprmPrpJtLfV1kmZELxzZnKQeJdo4EfHUnU4H8W+XvQcJw1ljUjvZzOewM0jXYoLYiASExQSxewNDJ6a2IVt487KHTHEkpSeJ5qdX7Leg/FaQtzh3oSExrN6mtjX2rhXfteg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rrt8osdwbVzW+5B6L7lGqi/dGheX57n61YDc+SVC4xU=;
 b=aXhX2j2ZXHDiwUdavcrbzE7uofHvBP1RsSL+nMRkpauuIoCSdQPcrig4DosgpurZeTTy8NvaDUkrShj+gmWn5ylWfVuWWns+UzvkDFXIfUKXVWO80WFcLASHPNDJCOKICuYw+wnTsDslFGUXNPKwgIeFfS4Omr3/1tL7k73wvpk=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <0785a316-1920-f5de-61d3-ed21ddbff0b9@citrix.com>
Date: Tue, 9 May 2023 13:47:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2] Fix install.sh for systemd
Content-Language: en-GB
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230508171437.27424-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230508171437.27424-1-olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0381.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18f::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH0PR03MB6590:EE_
X-MS-Office365-Filtering-Correlation-Id: b7f2066d-1acc-4129-06e4-08db508b8506
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b7f2066d-1acc-4129-06e4-08db508b8506
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 12:47:16.9960
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EugyueS/cygIKTH5GQaqYM5Cpfgos3M7UHqS2+TcJ3JRlapopJMVbLQfdHBqDl5EErf5x+TApc9x1/e6GjHHr64vg8CllKrVJPIKsLziBck=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6590

On 08/05/2023 6:14 pm, Olaf Hering wrote:
> On a Fedora system, running `sudo sh install.sh` breaks the system:
> the installation clobbers /var/run, a symlink to /run.  A subsequent
> boot fails because /var/run and /run are then separate directories, so
> accesses through /var/run can't find items that now exist only in /run,
> and vice versa.
>
> Skip populating /var/run/xen during make install.
> The directory is already created by some scripts. Adjust all remaining
> scripts to create XEN_RUN_DIR at runtime.
>
> XEN_RUN_STORED is covered by XEN_RUN_DIR because xenstored is usually
> started afterwards.
>
> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

TBH, I think this goes to show how few people do `make install` by hand
on a system, rather than using a packaged version.

One query...

> diff --git a/tools/hotplug/Linux/init.d/xendriverdomain.in b/tools/hotplug/Linux/init.d/xendriverdomain.in
> index c63060f62a..1055d0b942 100644
> --- a/tools/hotplug/Linux/init.d/xendriverdomain.in
> +++ b/tools/hotplug/Linux/init.d/xendriverdomain.in
> @@ -49,6 +49,7 @@ fi
>  
>  do_start () {
>  	echo Starting xl devd...
> +	mkdir -m700 -p @XEN_RUN_DIR@

Why is this 700, and the others just using regular perms?

Also, doesn't it want quoting like the other examples too?

~Andrew
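For illustration, a minimal sketch of the quoted form being asked about (the variable XEN_RUN_DIR and the path /tmp/xen-run-demo are placeholders standing in for the configure-time @XEN_RUN_DIR@ substitution, not values from the real script):

```shell
#!/bin/sh
# Placeholder for the @XEN_RUN_DIR@ value substituted at build time.
XEN_RUN_DIR="/tmp/xen-run-demo"

# Quoting the expansion keeps the command correct even if the
# configured path ever contains spaces or shell metacharacters.
mkdir -m700 -p "$XEN_RUN_DIR"

# -m applies to the final path component being created, overriding
# the umask, so a freshly created directory comes out as mode 700.
stat -c '%a' "$XEN_RUN_DIR"

rm -rf "$XEN_RUN_DIR"
```

Note that -m does not change the mode of a directory that already exists, which matters if another script created XEN_RUN_DIR first with different permissions.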


From xen-devel-bounces@lists.xenproject.org Tue May 09 12:59:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 12:59:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532178.828250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMwI-0004Te-Ld; Tue, 09 May 2023 12:59:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532178.828250; Tue, 09 May 2023 12:59:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwMwI-0004TX-Is; Tue, 09 May 2023 12:59:34 +0000
Received: by outflank-mailman (input) for mailman id 532178;
 Tue, 09 May 2023 12:59:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1Tft=A6=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pwMwH-0004TR-02
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 12:59:33 +0000
Received: from mail-lf1-x131.google.com (mail-lf1-x131.google.com
 [2a00:1450:4864:20::131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 568bf67a-ee69-11ed-b227-6b7b168915f2;
 Tue, 09 May 2023 14:59:31 +0200 (CEST)
Received: by mail-lf1-x131.google.com with SMTP id
 2adb3069b0e04-4efe8b3f3f7so6645779e87.2
 for <xen-devel@lists.xenproject.org>; Tue, 09 May 2023 05:59:31 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 w9-20020a05651204c900b004efee4ff266sm342645lfq.67.2023.05.09.05.59.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 09 May 2023 05:59:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 568bf67a-ee69-11ed-b227-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683637171; x=1686229171;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=y5CMwoqofI9LEoZLJQCKYdWLtvfFfqvin2ORGtdJFUg=;
        b=V2sj66xGcl9kG8fhhCxF7FEpFvBTAogGCd7t4mWZRrBoWmZJ+ZetMO62caqauuGZ+e
         ph7fIZJQRPbeqESrodUA2n/gwS00gLjMvxZgi8GkVvZCZwKQNgdYeeV+6Rtsm/DJOm4D
         PFnSNyfZyo3CZ8Z972t8oOLN7LhvgpKeUAmWOJix21Hjkm/UW9sdX97JljJ9WIjKIjD6
         O17aqVRqyBlPeadGA0WABKBTU9cj80wXjZkHVZXi4i94LrKikLeif237hW+ASJhV/56U
         5hSujLT7OzEXKG8ay6xmyT4CbHQEJKXIqK6MJ2zGYNibvng/54UaE6Jbnh/lsSAXz2Qs
         7+cg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683637171; x=1686229171;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=y5CMwoqofI9LEoZLJQCKYdWLtvfFfqvin2ORGtdJFUg=;
        b=KKONdegKKEMrlrNwYQyEsZTJQS2GVIsSK/Y5Gwpoi2k7pwN+oaYB8VOIaI3s/asRfz
         AgTvf6YkBaN7x/Z+CDh4RJEfdQQYplvMo0wtvi7Tgg08G9LcNfZtAVKnVsgg2X1U1Iar
         tveESxBAUDnIjBaN0ypq1x7rz47BRw74CHi+ZhXEuAB7LGJXn6UsQU86FxVMiNcaVLeK
         DL9o9FYD26wHZo+5VJ9PceDNREysaqGtMRkVyM2TrrvDjiIn9IMeJys+BLTNrHrGu03q
         nIRoIj6U1QYQjcLv5DHDmKVWFSFETiwIT9FPIvwCTDsHaBKDaic+PXiWeEB5bTs3m2AS
         +6Ow==
X-Gm-Message-State: AC+VfDzzrnyDSXjon5wFJch0mLdhUjNzzPQpReIrzucd0AvAFcGoZfZl
	jd7mWV7pEW8gu4RLIGBXPe4=
X-Google-Smtp-Source: ACHHUZ4F7sCvYJEA+jyik17gr5DxHRj39it0ZagwWk/DfHU2Ai7YhMdKyzjhrLQjkVWJYcsO9uHP6A==
X-Received: by 2002:ac2:47e5:0:b0:4dd:ce0b:7692 with SMTP id b5-20020ac247e5000000b004ddce0b7692mr713852lfp.46.1683637170556;
        Tue, 09 May 2023 05:59:30 -0700 (PDT)
Message-ID: <db6fe5a3db067ae3429d4b83766508233dfc9ca8.camel@gmail.com>
Subject: Re: [PATCH v6 2/4] xen/riscv: introduce setup_initial_pages
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Tue, 09 May 2023 15:59:29 +0300
In-Reply-To: <0533b045-f4cb-0834-ae88-9229bd816cf2@suse.com>
References: <cover.1683131359.git.oleksii.kurochko@gmail.com>
	 <d1a6fb6112b61000645eb1a4ab9ade8a208d4204.1683131359.git.oleksii.kurochko@gmail.com>
	 <0533b045-f4cb-0834-ae88-9229bd816cf2@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.48.1 (3.48.1-1.fc38) 
MIME-Version: 1.0

On Mon, 2023-05-08 at 10:58 +0200, Jan Beulich wrote:
> On 03.05.2023 18:31, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/include/asm/config.h
> > +++ b/xen/arch/riscv/include/asm/config.h
> > @@ -70,12 +70,23 @@
> >   name:
> >  #endif
> >
> > -#define XEN_VIRT_START  _AT(UL, 0x80200000)
> > +#ifdef CONFIG_RISCV_64
> > +#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
> > +#else
> > +#error "RV32 isn't supported"
> > +#endif
> >
> >  #define SMP_CACHE_BYTES (1 << 6)
> >
> >  #define STACK_SIZE PAGE_SIZE
> >
> > +#ifdef CONFIG_RISCV_64
> > +#define CONFIG_PAGING_LEVELS 3
> > +#define RV_STAGE1_MODE SATP_MODE_SV39
> > +#else
>
> #define CONFIG_PAGING_LEVELS 2
>
> (or else I would think ...
>
> > +#define RV_STAGE1_MODE SATP_MODE_SV32
>
> ... this and hence the entire "#else" should also be omitted)
Agree. It will be better to set CONFIG_PAGING_LEVELS for RV32 too.

>
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/current.h
> > @@ -0,0 +1,10 @@
> > +#ifndef __ASM_CURRENT_H
> > +#define __ASM_CURRENT_H
> > +
> > +#define switch_stack_and_jump(stack, fn)    \
> > +    asm volatile (                          \
> > +            "mv sp, %0 \n"                  \
> > +            "j " #fn :: "r" (stack) :       \
> > +    )
>
> Nit: Style. Our normal way of writing this would be
>
> #define switch_stack_and_jump(stack, fn)    \
>     asm volatile (                          \
>             "mv sp, %0\n"                   \
>             "j " #fn :: "r" (stack) )
>
> i.e. unnecessary colon omitted, no trailing blank on any generated
> assembly line, and closing paren not placed on its own line (only
> closing figure braces would normally live on their own lines).
Thanks for the clarification.

>
> However, as to the 3rd colon: Can you really get away here without a
> memory clobber (i.e. the construct being a full compiler barrier)?
I am not 100% sure, so to be safe I would add a memory clobber. Thanks.

>
> Further I think you want to make the use of "fn" visible to the
> compiler, by using an "X" constraint just like Arm does.
>
> Finally I think you want to add unreachable(), like both Arm and x86
> have it. Plus the "noreturn" on relevant functions.
It makes sense. I'll take it into account.

>
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/page.h
> > @@ -0,0 +1,62 @@
> > +#ifndef _ASM_RISCV_PAGE_H
> > +#define _ASM_RISCV_PAGE_H
> > +
> > +#include <xen/const.h>
> > +#include <xen/types.h>
> > +
> > +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
> > +
> > +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> > +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> > +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> > +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> > +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> > +
> > +#define PTE_VALID                   BIT(0, UL)
> > +#define PTE_READABLE
wqDCoMKgwqAgQklUKDEsIFVMKQo+ID4gKyNkZWZpbmUgUFRFX1dSSVRBQkxFwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgIEJJVCgyLCBVTCkKPiA+ICsjZGVmaW5lIFBURV9FWEVDVVRBQkxF
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgQklUKDMsIFVMKQo+ID4gKyNkZWZpbmUgUFRFX1VT
RVLCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBCSVQoNCwgVUwpCj4gPiAr
I2RlZmluZSBQVEVfR0xPQkFMwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBCSVQo
NSwgVUwpCj4gPiArI2RlZmluZSBQVEVfQUNDRVNTRUTCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqAgQklUKDYsIFVMKQo+ID4gKyNkZWZpbmUgUFRFX0RJUlRZwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgIEJJVCg3LCBVTCkKPiA+ICsjZGVmaW5lIFBURV9SU1fCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChCSVQoOCwgVUwpIHwgQklUKDksIFVM
KSkKPiA+ICsKPiA+ICsjZGVmaW5lIFBURV9MRUFGX0RFRkFVTFTCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIChQVEVfVkFMSUQgfCBQVEVfUkVBREFCTEUgfAo+ID4gUFRFX1dSSVRBQkxFKQo+ID4gKyNk
ZWZpbmUgUFRFX1RBQkxFwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChQVEVf
VkFMSUQpCj4gPiArCj4gPiArLyogQ2FsY3VsYXRlIHRoZSBvZmZzZXRzIGludG8gdGhlIHBhZ2V0
YWJsZXMgZm9yIGEgZ2l2ZW4gVkEgKi8KPiA+ICsjZGVmaW5lIHB0X2xpbmVhcl9vZmZzZXQobHZs
LCB2YSnCoMKgICgodmEpID4+Cj4gPiBYRU5fUFRfTEVWRUxfU0hJRlQobHZsKSkKPiA+ICsKPiA+
ICsjZGVmaW5lIHB0X2luZGV4KGx2bCwgdmEpIHB0X2xpbmVhcl9vZmZzZXQobHZsLCAodmEpICYK
PiA+IFhFTl9QVF9MRVZFTF9NQVNLKGx2bCkpCj4gCj4gTWF5YmUgYmV0dGVyCj4gCj4gI2RlZmlu
ZSBwdF9pbmRleChsdmwsIHZhKSAocHRfbGluZWFyX29mZnNldChsdmwsIHZhKSAmIFZQTl9NQVNL
KQo+IAo+IGFzIHRoZSBpbnZvbHZlZCBjb25zdGFudCB3aWxsIGJlIGVhc2llciB0byB1c2UgZm9y
IHRoZSBjb21waWxlcj8KQnV0IFZQTl9NQVNLIHNob3VsZCBiZSBzaGlmdGVkIGJ5IGxldmVsIHNo
aWZ0IHZhbHVlLgoKT3IgZGlkIHlvdSBtZWFuIHRoYXQgaXQgd2lsbCBiZSBiZXR0ZXIgdG8gcHJl
LWNhbGN1bGF0ZSBNQVNLIGZvciBlYWNoCmxldmVsIGluc3RlYWQgb2YgZGVmaW5lIE1BU0sgYXMg
KFZQTl9NQVNLIDw8ClhFTl9QVF9MRVZFTF9TSElGVChsdmwpKS4KClByb2JhYmx5IEkgaGF2ZSB0
byByZS1jaGVjayB0aGF0IGJ1dCB0YWtpbmcgaW50byBhY2NvdW50IHRoYXQgYWxsCnZhbHVlcyBh
cmUgZGVmaW5lZCBkdXJpbmcgY29tcGlsZSB0aW1lIHNvIHRoZXkgd2lsbCBiZSBwcmUtY2FsY3Vs
YXRlZApzbyBpdCBzaG91bGRuJ3QgYmUgYmlnIGRpZmZlcmVuY2UgZm9yIHRoZSBjb21waWxlci4K
PiAKPiA+ICsvKiBQYWdlIFRhYmxlIGVudHJ5ICovCj4gPiArdHlwZWRlZiBzdHJ1Y3Qgewo+ID4g
KyNpZmRlZiBDT05GSUdfUklTQ1ZfNjQKPiA+ICvCoMKgwqAgdWludDY0X3QgcHRlOwo+ID4gKyNl
bHNlCj4gPiArwqDCoMKgIHVpbnQzMl90IHB0ZTsKPiA+ICsjZW5kaWYKPiA+ICt9IHB0ZV90Owo+
ID4gKwo+ID4gKyNkZWZpbmUgcHRlX3RvX2FkZHIoeCkgKCgocGFkZHJfdCkoeCkgPj4gUFRFX1BQ
Tl9TSElGVCkgPDwKPiA+IFBBR0VfU0hJRlQpCj4gPiArCj4gPiArI2RlZmluZSBhZGRyX3RvX3B0
ZSh4KSAoKCh4KSA+PiBQQUdFX1NISUZUKSA8PCBQVEVfUFBOX1NISUZUKQo+IAo+IEFyZSB0aGVz
ZSB0d28gbWFjcm9zIHVzZWZ1bCBvbiB0aGVpciBvd24/IEkgYXNrIGJlY2F1c2UgSSBzdGlsbAo+
IGNvbnNpZGVyCj4gdGhlbSBzb21ld2hhdCBtaXNuYW1lZCwgYXMgdGhleSBkb24ndCBwcm9kdWNl
IC8gY29uc3VtZSBhIFBURSAoYnV0Cj4gdGhlCj4gcmF3IHZhbHVlKS4gWWV0IGdlbmVyYWxseSB5
b3Ugd2FudCB0byBhdm9pZCBhbnkgY29kZSBvcGVyYXRpbmcgb24gcmF3Cj4gdmFsdWVzLCB1c2lu
ZyBwdGVfdCBpbnN0ZWFkLiBJT1cgSSB3b3VsZCBob3BlIHRvIGJlIGFibGUgdG8gY29udmluY2UK
PiB5b3UgdG8gLi4uCj4gCj4gPiArc3RhdGljIGlubGluZSBwdGVfdCBwYWRkcl90b19wdGUocGFk
ZHJfdCBwYWRkciwKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHVuc2lnbmVkIGludCBwZXJtaXNzaW9ucykKPiA+ICt7
Cj4gPiArwqDCoMKgIHJldHVybiAocHRlX3QpIHsgLnB0ZSA9IGFkZHJfdG9fcHRlKHBhZGRyKSB8
IHBlcm1pc3Npb25zIH07Cj4gPiArfQo+ID4gKwo+ID4gK3N0YXRpYyBpbmxpbmUgcGFkZHJfdCBw
dGVfdG9fcGFkZHIocHRlX3QgcHRlKQo+ID4gK3sKPiA+ICvCoMKgwqAgcmV0dXJuIHB0ZV90b19h
ZGRyKHB0ZS5wdGUpOwo+ID4gK30KPiAKPiAuLi4gZXhwYW5kIHRoZW0gaW4gdGhlIHR3byBpbmxp
bmUgZnVuY3Rpb25zIGFuZCB0aGVuIGRyb3AgdGhlIG1hY3Jvcy4KSSB0aG91Z2h0IGFib3V0IGRy
b3AgdGhlIG1hY3JvcyBhcyB0aGV5IGFyZSB1c2VkIG5vdyBvbmx5IGluIHRoZQptZW50aW9uZWQg
YWJvdmUgZnVuY3Rpb25zLiAoIGluaXRpYWxseSB0aGUgbWFjcm9zIHdlcmUgaW50cm9kdWNlZCBm
b3IKYW5vdGhlciBjb2RlIGJ1dCB0aGF0IGNvZGUgd2FzIHJlLXdyaXR0ZW4gYW5kIHRoZSBtYWNy
b3MgYXJlbid0IG5lZWRlZAphbnltb3JlKS4KU28geWVzLCBsZXRzIHJlbW92ZSB0aGVtLgoKPiAK
PiA+IC0tLSAvZGV2L251bGwKPiA+ICsrKyBiL3hlbi9hcmNoL3Jpc2N2L21tLmMKPiA+IEBAIC0w
LDAgKzEsMzE1IEBACj4gPiArI2luY2x1ZGUgPHhlbi9jb21waWxlci5oPgo+ID4gKyNpbmNsdWRl
IDx4ZW4vaW5pdC5oPgo+ID4gKyNpbmNsdWRlIDx4ZW4va2VybmVsLmg+Cj4gPiArI2luY2x1ZGUg
PHhlbi9wZm4uaD4KPiA+ICsKPiA+ICsjaW5jbHVkZSA8YXNtL2Vhcmx5X3ByaW50ay5oPgo+ID4g
KyNpbmNsdWRlIDxhc20vY29uZmlnLmg+Cj4gCj4gTm8gaW5jbHVzaW9uIG9mIGFzbS9jb25maWcu
aCAob3IgeGVuL2NvbmZpZy5oKSBpbiBhbnkgbmV3IGNvZGUKPiBwbGVhc2UuCj4gRm9yIHF1aXRl
IHNvbWUgdGltZSB4ZW4vY29uZmlnLmggaGFzIGJlZW4gZm9yY2VmdWxseSBpbmNsdWRlZAo+IGV2
ZXJ5d2hlcmUKPiBieSB0aGUgYnVpbGQgc3lzdGVtLgpTdXJlLiBJJ2xsIHJlbW92ZSB0aGUgaW5j
bHVzaW9uIHRoYW4uCgo+IAo+ID4gKy8qCj4gPiArICogSXQgaXMgZXhwZWN0ZWQgdGhhdCBYZW4g
d29uJ3QgYmUgbW9yZSB0aGVuIDIgTUIuCj4gPiArICogVGhlIGNoZWNrIGluIHhlbi5sZHMuUyBn
dWFyYW50ZWVzIHRoYXQuCj4gPiArICogQXQgbGVhc3QgMyBwYWdlIChpbiBjYXNlIG9mIFN2Mzkg
KQo+ID4gKyAqIHRhYmxlcyBhcmUgbmVlZGVkIHRvIGNvdmVyIDIgTUIuCj4gCj4gSSBndWVzcyB0
aGlzIHJlYWRzIGEgbGl0dGxlIGJldHRlciBhcyAiQXQgbGVhc3QgMyBwYWdlIHRhYmxlcyAoaW4K
PiBjYXNlIG9mCj4gU3YzOSkgLi4uIiBvciAiQXQgbGVhc3QgMyAoaW4gY2FzZSBvZiBTdjM5KSBw
YWdlIHRhYmxlcyAuLi4iIEFsc28KPiBjb3VsZCBJCj4gdGFsayB5b3UgaW50byB1c2luZyB0aGUg
ZnVsbCA4MCBjb2x1bW5zIGluc3RlYWQgb2Ygd3JhcHBpbmcgZWFybHkgaW4KPiB0aGUKPiBtaWRk
bGUgb2YgYSBzZW50ZW5jZT8KU3VyZSwgSSdsbCB1c2UgZnVsbCA4MCBjb2x1bW5zLgoKPiAKPiA+
IE9uZSBmb3IgZWFjaCBwYWdlIGxldmVsCj4gPiArICogdGFibGUgd2l0aCBQQUdFX1NJWkUgPSA0
IEtiCj4gPiArICoKPiA+ICsgKiBPbmUgTDAgcGFnZSB0YWJsZSBjYW4gY292ZXIgMiBNQgo+ID4g
KyAqICg1MTIgZW50cmllcyBvZiBvbmUgcGFnZSB0YWJsZSAqIFBBR0VfU0laRSkuCj4gPiArICoK
PiA+ICsgKi8KPiA+ICsjZGVmaW5lIFBHVEJMX0lOSVRJQUxfQ09VTlQgKChDT05GSUdfUEFHSU5H
X0xFVkVMUyAtIDEpICsgMSkKPiAKPiBIbW0sIGRpZCB5b3UgbG9zZSB0aGUgcGFydCBvZiB0aGUg
Y29tbWVudCBleHBsYWluaW5nIHRoZSAiKyAxIj8gT3IKPiBkaWQKPiB0aGUgbmVlZCBmb3IgdGhh
dCBhY3R1YWxseSBnbyBhd2F5IChhbmQgeW91IHNob3VsZCBkcm9wIGl0IGhlcmUpPwpJIGxvc3Qg
aXQgc28gaXQgc2hvdWxkIGJlIGJhY2tlZC4gVGhhbmtzLgoKPiAKPiA+ICsjZGVmaW5lIFBHVEJM
X0VOVFJZX0FNT1VOVMKgIChQQUdFX1NJWkUgLyBzaXplb2YocHRlX3QpKQo+IAo+IElzbid0IHRo
aXMgUEFHRVRBQkxFX0VOVFJJRVMgKGFuZCB0aGUgY29uc3RhbnQgaGVuY2UgdW5uZWNlc3Nhcnkp
PwpJdCBpcy4gSSdsbCByZS11c2UgUEFHRVRBQkxFX0VOVFJJRVMKPiAKPiA+ICtzdGF0aWMgdm9p
ZCBfX2luaXQgc2V0dXBfaW5pdGlhbF9tYXBwaW5nKHN0cnVjdCBtbXVfZGVzYwo+ID4gKm1tdV9k
ZXNjLAo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHVuc2lnbmVkIGxvbmcgbWFwX3N0YXJ0
LAo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHVuc2lnbmVkIGxvbmcgbWFwX2VuZCwKPiA+
ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB1bnNpZ25lZCBsb25nIHBhX3N0YXJ0LAo+ID4gK8Kg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgIHVuc2lnbmVkIGludCBwZXJtaXNzaW9ucykKPiAKPiBXaGF0
IHVzZSBpcyB0aGUgbGFzdCBwYXJhbWV0ZXIsIHdoZW4gdGhlIHNvbGUgY2FsbGVyIHBhc3Nlcwo+
IFBURV9MRUFGX0RFRkFVTFQgKGkuZS4gYSBidWlsZC10aW1lIGNvbnN0YW50KT8gVGhlbiAuLi4K
SXQncyBub3QgdXNlZCBhbnltb3JlLiBhbmQgdGFraW5nIGludG8gYWNjb3VudCB0aGF0IEkgYm9v
dCBEb20wCnN1Y2Nlc3NmdWxseSBhbmQgdXNlZCBmb3Igc2V0dXBfaW5pdGlhbF9tYXBwaW5nKCkg
YWx3YXlzClBURV9MRUFGX0RFRkFVTFQgc28gaSdsbCBkcm9wIGl0Lgo+IAo+ID4gK3sKPiA+ICvC
oMKgwqAgdW5zaWduZWQgaW50IGluZGV4Owo+ID4gK8KgwqDCoCBwdGVfdCAqcGd0Ymw7Cj4gPiAr
wqDCoMKgIHVuc2lnbmVkIGxvbmcgcGFnZV9hZGRyOwo+ID4gK8KgwqDCoCBwdGVfdCBwdGVfdG9f
YmVfd3JpdHRlbjsKPiA+ICvCoMKgwqAgdW5zaWduZWQgbG9uZyBwYWRkcjsKPiA+ICvCoMKgwqAg
dW5zaWduZWQgaW50IHRtcF9wZXJtaXNzaW9uczsKPiAKPiAuLi4gdGhpcyB2YXJpYWJsZSAod2hp
Y2ggd291bGQgYmV0dGVyIGJlIG9mIG1vcmUgbmFycm93IHNjb3BlIGFueXdheSwKPiBsaWtlCj4g
cGVyaGFwcyBzZXZlcmFsIG1vcmUgb2YgdGhlIGFib3ZlKSBjb3VsZCBhbHNvIGhhdmUgaXRzIHRt
cF8gcHJlZml4Cj4gZHJvcHBlZAo+IGFmYWljdC4KPiAKPiA+ICvCoMKgwqAgaWYgKCAodW5zaWdu
ZWQgbG9uZylfc3RhcnQgJSBYRU5fUFRfTEVWRUxfU0laRSgwKSApCj4gPiArwqDCoMKgIHsKPiA+
ICvCoMKgwqDCoMKgwqDCoCBlYXJseV9wcmludGsoIihYRU4pIFhlbiBzaG91bGQgYmUgbG9hZGVk
IGF0IDRrCj4gPiBib3VuZGFyeVxuIik7Cj4gPiArwqDCoMKgwqDCoMKgwqAgZGllKCk7Cj4gPiAr
wqDCoMKgIH0KPiA+ICsKPiA+ICvCoMKgwqAgaWYgKCBtYXBfc3RhcnQgJiB+WEVOX1BUX0xFVkVM
X01BUF9NQVNLKDApIHx8Cj4gPiArwqDCoMKgwqDCoMKgwqDCoCBwYV9zdGFydCAmIH5YRU5fUFRf
TEVWRUxfTUFQX01BU0soMCkgKQo+ID4gK8KgwqDCoCB7Cj4gPiArwqDCoMKgwqDCoMKgwqAgZWFy
bHlfcHJpbnRrKCIoWEVOKSBtYXAgYW5kIHBhIHN0YXJ0IGFkZHJlc3NlcyBzaG91bGQgYmUKPiA+
IGFsaWduZWRcbiIpOwo+ID4gK8KgwqDCoMKgwqDCoMKgIC8qIHBhbmljKCksIEJVRygpIG9yIEFT
U0VSVCgpIGFyZW4ndCByZWFkeSBub3cuICovCj4gPiArwqDCoMKgwqDCoMKgwqAgZGllKCk7Cj4g
PiArwqDCoMKgIH0KPiA+ICsKPiA+ICvCoMKgwqAgZm9yICggcGFnZV9hZGRyID0gbWFwX3N0YXJ0
OyBwYWdlX2FkZHIgPCBtYXBfZW5kOyBwYWdlX2FkZHIgKz0KPiA+IFhFTl9QVF9MRVZFTF9TSVpF
KDApICkKPiAKPiBOaXQ6IFN0eWxlIChsaW5lIGxvb2tzIHRvIGJlIHRvbyBsb25nIG5vdykuCj4g
Cj4gwqDCoMKgIGZvciAoIHBhZ2VfYWRkciA9IG1hcF9zdGFydDsgcGFnZV9hZGRyIDwgbWFwX2Vu
ZDsKPiDCoMKgwqDCoMKgwqDCoMKgwqAgcGFnZV9hZGRyICs9IFhFTl9QVF9MRVZFTF9TSVpFKDAp
ICkKPiAKPiBpcyB0aGUgd2F5IHdlIHdvdWxkIHR5cGljYWxseSB3cmFwIGl0LCBidXQKPiAKPiDC
oMKgwqAgZm9yICggcGFnZV9hZGRyID0gbWFwX3N0YXJ0Owo+IMKgwqDCoMKgwqDCoMKgwqDCoCBw
YWdlX2FkZHIgPCBtYXBfZW5kOwo+IMKgwqDCoMKgwqDCoMKgwqDCoCBwYWdlX2FkZHIgKz0gWEVO
X1BUX0xFVkVMX1NJWkUoMCkgKQo+IAo+IHdvdWxkIG9mIGNvdXJzZSBhbHNvIGJlIG9rYXkgaWYg
eW91IHByZWZlcnJlZCB0aGF0LgpMYXN0IG9uZSBvcHRpb24gbG9va3MgYmV0dGVyIHRvIG1lLiBU
aGFua3MuCj4gCj4gPiArwqDCoMKgIHsKPiA+ICvCoMKgwqDCoMKgwqDCoCBwZ3RibCA9IG1tdV9k
ZXNjLT5wZ3RibF9iYXNlOwo+ID4gKwo+ID4gK8KgwqDCoMKgwqDCoMKgIHN3aXRjaCAoIG1tdV9k
ZXNjLT5udW1fbGV2ZWxzICkKPiA+ICvCoMKgwqDCoMKgwqDCoCB7Cj4gPiArwqDCoMKgwqDCoMKg
wqAgY2FzZSA0OiAvKiBMZXZlbCAzICovCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBIQU5E
TEVfUEdUQkwoMyk7Cj4gPiArwqDCoMKgwqDCoMKgwqAgY2FzZSAzOiAvKiBMZXZlbCAyICovCj4g
PiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBIQU5ETEVfUEdUQkwoMik7Cj4gPiArwqDCoMKgwqDC
oMKgwqAgY2FzZSAyOiAvKiBMZXZlbCAxICovCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBI
QU5ETEVfUEdUQkwoMSk7Cj4gPiArwqDCoMKgwqDCoMKgwqAgY2FzZSAxOiAvKiBMZXZlbCAwICov
Cj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBpbmRleCA9IHB0X2luZGV4KDAsIHBhZ2VfYWRk
cik7Cj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBwYWRkciA9IChwYWdlX2FkZHIgLSBtYXBf
c3RhcnQpICsgcGFfc3RhcnQ7Cj4gPiArCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB0bXBf
cGVybWlzc2lvbnMgPSBwZXJtaXNzaW9uczsKPiA+ICsKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIGlmICggaXNfa2VybmVsX3RleHQoTElOS19UT19MT0FEKHBhZ2VfYWRkcikpIHx8Cj4gPiAr
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgaXNfa2VybmVsX2luaXR0ZXh0
KExJTktfVE9fTE9BRChwYWdlX2FkZHIpKSApCj4gCj4gTml0OiBTdHlsZSAoYW5kIEknbSBwcmV0
dHkgc3VyZSBJIGRpZCBjb21tZW50IG9uIHRoaXMga2luZCBvZiB0b28KPiBkZWVwCj4gaW5kZW50
YXRpb24gYWxyZWFkeSkuClNvcnJ5LiBJJ2xsIGRvdWJsZSBjaGVjayB0aGF0IGFuZCBmaXguCgo+
IAo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB0bXBfcGVybWlzc2lvbnMgPQo+
ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIFBURV9FWEVDVVRBQkxF
IHwgUFRFX1JFQURBQkxFIHwgUFRFX1ZBTElEOwo+ID4gKwo+ID4gK8KgwqDCoMKgwqDCoMKgwqDC
oMKgwqAgaWYgKCBpc19rZXJuZWxfcm9kYXRhKExJTktfVE9fTE9BRChwYWdlX2FkZHIpKSApCj4g
PiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHRtcF9wZXJtaXNzaW9ucyA9IFBURV9S
RUFEQUJMRSB8IFBURV9WQUxJRDsKPiA+ICsKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHB0
ZV90b19iZV93cml0dGVuID0gcGFkZHJfdG9fcHRlKHBhZGRyLAo+ID4gdG1wX3Blcm1pc3Npb25z
KTsKPiA+ICsKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGlmICggIXB0ZV9pc192YWxpZChw
Z3RibFtpbmRleF0pICkKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgcGd0Ymxb
aW5kZXhdID0gcHRlX3RvX2JlX3dyaXR0ZW47Cj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBl
bHNlCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB7Cj4gPiArwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgIC8qCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgICogZ2V0
IGFuIGFkcmVzc2VzIG9mIGN1cnJlbnQgcHRlIGFuZCB0aGF0IG9uZSB0bwo+ID4gK8KgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAqIGJlIHdyaXR0ZW4KPiA+ICvCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgKi8KPiAKPiBOaXQ6IFN0eWxlIChvbmUgbWlzc2luZyBibGFuayBlYWNo
IGluIHRoZSBsYXN0IHRocmVlIGxpbmVzLCBhbmQKPiBjb21tZW50Cj4gdGV4dCB3YW50cyB0byBz
dGFydCB3aXRoIGEgY2FwaXRhbCBsZXR0ZXIpLgpUaGFua3MuCgo+IAo+ID4gK8KgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoCB1bnNpZ25lZCBsb25nIGN1cnJfcHRlID0KPiA+ICvCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBwZ3RibFtpbmRleF0ucHRlICYgfihQVEVf
RElSVFkgfAo+ID4gUFRFX0FDQ0VTU0VEKTsKPiAKPiBXaGlsZSB0ZWNobmljYWxseSAidW5zaWdu
ZWQgbG9uZyIgaXMgb2theSB0byB1c2UgaGVyZSAoYWZhaWN0KSwgSSdkCj4gc3RpbGwKPiByZWNv
bW1lbmQgLi4uCj4gCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHB0ZV90b19i
ZV93cml0dGVuLnB0ZSAmPSB+KFBURV9ESVJUWSB8Cj4gPiBQVEVfQUNDRVNTRUQpOwo+ID4gKwo+
ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBpZiAoIGN1cnJfcHRlICE9IHB0ZV90
b19iZV93cml0dGVuLnB0ZSApCj4gCj4gLi4uIGRvaW5nIGV2ZXJ5dGhpbmcgaW4gdGVybXMgb2Yg
cHRlX3Q6Cj4gCj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHB0ZV90IGN1cnJfcHRl
ID0gcGd0YmxbaW5kZXhdOwo+IAo+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBjdXJy
X3B0ZS5wdGUgJj0gfihQVEVfRElSVFkgfCBQVEVfQUNDRVNTRUQpOwo+IMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoCBwdGVfdG9fYmVfd3JpdHRlbi5wdGUgJj0gfihQVEVfRElSVFkgfCBQ
VEVfQUNDRVNTRUQpOwo+IAo+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBpZiAoIGN1
cnJfcHRlLnB0ZSAhPSBwdGVfdG9fYmVfd3JpdHRlbi5wdGUgKQo+IAo+IG9yIHBlcmhhcHMgZXZl
biBzaW1wbHkKSXQgbWFrZXMgc2Vuc2UgZm9yIG1lIHRvIHVzZSBwdGVfdCBzbyB3aWxsIHN3aXRj
aCB0byBpdC4KCj4gCj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGlmICggKHBndGJs
W2luZGV4XS5wdGUgXiBwdGVfdG9fYmVfd3JpdHRlbi5wdGUpICYKPiDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgfihQVEVfRElSVFkgfCBQVEVfQUNDRVNTRUQpICkK
PiAKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgewo+ID4gK8KgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGVhcmx5X3ByaW50aygiUFRFIG92ZXJyaWRkZW4g
aGFzIG9jY3VycmVkXG4iKTsKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoCAvKiBwYW5pYygpLCA8YXNtL2J1Zy5oPiBhcmVuJ3QgcmVhZHkgbm93LiAqLwo+ID4gK8Kg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGRpZSgpOwo+ID4gK8KgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB9Cj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB9Cj4g
PiArwqDCoMKgwqDCoMKgwqAgfQo+ID4gK8KgwqDCoCB9Cj4gPiArwqDCoMKgICN1bmRlZiBIQU5E
TEVfUEdUQkwKPiAKPiBOaXQ6IFN1Y2ggYW4gI3VuZGVmLCBldmVuIGlmIHRlY2huaWNhbGx5IG9r
YXkgZWl0aGVyIHdheSwgd291bGQgaW1vCj4gYmV0dGVyIGxpdmUgaW4gdGhlIHNhbWUgc2NvcGUg
KGFuZCBoYXZlIHRoZSBzYW1lIGluZGVudGF0aW9uKSBhcyB0aGUKPiBjb3JyZXNwb25kaW5nICNk
ZWZpbmUuIFNpbmNlIHlvdXIgI2RlZmluZSBpcyBhaGVhZCBvZiB0aGUgZnVuY3Rpb24sCj4gaXQK
PiB3b3VsZCAuLi4KPiAKPiA+ICt9Cj4gCj4gLi4uIHRoZW4gbGl2ZSBoZXJlLgpUaGFua3MuIEkn
bGwgdGFrZSBpbnRvIGFjY291bnQuCgo+IAo+ID4gK3N0YXRpYyB2b2lkIF9faW5pdCBjYWxjX3Bn
dGJsX2x2bHNfbnVtKHN0cnVjdMKgIG1tdV9kZXNjICptbXVfZGVzYykKPiAKPiBOaXQ6IERvdWJs
ZSBibGFuayBhZnRlciAic3RydWN0Ii4Kd2lsbCByZW1vdmUgaXQuIHRoYW5rcy4KCj4gCj4gPiAr
ewo+ID4gK8KgwqDCoCB1bnNpZ25lZCBsb25nIHNhdHBfbW9kZSA9IFJWX1NUQUdFMV9NT0RFOwo+
ID4gKwo+ID4gK8KgwqDCoCAvKiBOdW1iZXIgb2YgcGFnZSB0YWJsZSBsZXZlbHMgKi8KPiA+ICvC
oMKgwqAgc3dpdGNoIChzYXRwX21vZGUpCj4gPiArwqDCoMKgIHsKPiA+ICvCoMKgwqAgY2FzZSBT
QVRQX01PREVfU1YzMjoKPiA+ICvCoMKgwqDCoMKgwqDCoCBtbXVfZGVzYy0+bnVtX2xldmVscyA9
IDI7Cj4gPiArwqDCoMKgwqDCoMKgwqAgYnJlYWs7Cj4gPiArwqDCoMKgIGNhc2UgU0FUUF9NT0RF
X1NWMzk6Cj4gPiArwqDCoMKgwqDCoMKgwqAgbW11X2Rlc2MtPm51bV9sZXZlbHMgPSAzOwo+ID4g
K8KgwqDCoMKgwqDCoMKgIGJyZWFrOwo+ID4gK8KgwqDCoCBjYXNlIFNBVFBfTU9ERV9TVjQ4Ogo+
ID4gK8KgwqDCoMKgwqDCoMKgIG1tdV9kZXNjLT5udW1fbGV2ZWxzID0gNDsKPiA+ICvCoMKgwqDC
oMKgwqDCoCBicmVhazsKPiA+ICvCoMKgwqAgZGVmYXVsdDoKPiA+ICvCoMKgwqDCoMKgwqDCoCBl
YXJseV9wcmludGsoIihYRU4pIFVuc3VwcG9ydGVkIFNBVFBfTU9ERVxuIik7Cj4gPiArwqDCoMKg
wqDCoMKgwqAgZGllKCk7Cj4gPiArwqDCoMKgIH0KPiA+ICt9Cj4gCj4gRG8geW91IHJlYWxseSBu
ZWVkIHRoaXMgZnVuY3Rpb24gYW55bW9yZT8gSXNuJ3QgaXQgbm93IHNpbXBseQo+IAo+IMKgwqDC
oCBtbXVfZGVzYy5udW1fbGV2ZWxzID0gQ09ORklHX1BBR0lOR19MRVZFTFM/Cj4gCj4gaW4gdGhl
IGNhbGxlcj8gV0hpY2ggY291bGQgdGhlbiBldmVuIGJlIHRoZSB2YXJpYWJsZSdzIGluaXRpYWxp
emVyCj4gdGhlcmU/CllvdSBhcmUgcmlnaHQgYWZ0ZXIgaW50cm9kdWN0aW9uIG9mIENPTkZJR19Q
QUdJTkdfTEVWRUxTIGZvciBSSVNDLVYgd2UKY2FuIHJlbW92ZSB0aGUgY29kZSB5b3UgbWVudGlv
bmVkLgo+IAo+ID4gK3N0YXRpYyBib29sIF9faW5pdCBjaGVja19wZ3RibF9tb2RlX3N1cHBvcnQo
c3RydWN0IG1tdV9kZXNjCj4gPiAqbW11X2Rlc2MsCj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqAgdW5zaWduZWQgbG9uZwo+ID4gbG9hZF9zdGFydCwKPiA+ICvCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoCB1bnNpZ25lZCBsb25nCj4gPiBzYXRwX21vZGUpCj4gPiArewo+ID4g
K8KgwqDCoCBib29sIGlzX21vZGVfc3VwcG9ydGVkID0gZmFsc2U7Cj4gPiArwqDCoMKgIHVuc2ln
bmVkIGludCBpbmRleDsKPiA+ICvCoMKgwqAgdW5zaWduZWQgaW50IHBhZ2VfdGFibGVfbGV2ZWwg
PSAobW11X2Rlc2MtPm51bV9sZXZlbHMgLSAxKTsKPiA+ICvCoMKgwqAgdW5zaWduZWQgbGV2ZWxf
bWFwX21hc2sgPQo+ID4gWEVOX1BUX0xFVkVMX01BUF9NQVNLKHBhZ2VfdGFibGVfbGV2ZWwpOwo+
ID4gKwo+ID4gK8KgwqDCoCB1bnNpZ25lZCBsb25nIGFsaWduZWRfbG9hZF9zdGFydCA9IGxvYWRf
c3RhcnQgJgo+ID4gbGV2ZWxfbWFwX21hc2s7Cj4gPiArwqDCoMKgIHVuc2lnbmVkIGxvbmcgYWxp
Z25lZF9wYWdlX3NpemUgPQo+ID4gWEVOX1BUX0xFVkVMX1NJWkUocGFnZV90YWJsZV9sZXZlbCk7
Cj4gPiArwqDCoMKgIHVuc2lnbmVkIGxvbmcgeGVuX3NpemUgPSAodW5zaWduZWQgbG9uZykoX2Vu
ZCAtIHN0YXJ0KTsKPiA+ICsKPiA+ICvCoMKgwqAgaWYgKCAobG9hZF9zdGFydCArIHhlbl9zaXpl
KSA+IChhbGlnbmVkX2xvYWRfc3RhcnQgKwo+ID4gYWxpZ25lZF9wYWdlX3NpemUpICkKPiA+ICvC
oMKgwqAgewo+ID4gK8KgwqDCoMKgwqDCoMKgIGVhcmx5X3ByaW50aygicGxlYXNlIHBsYWNlIFhl
biB0byBiZSBpbiByYW5nZSBvZiBQQUdFX1NJWkUKPiA+ICIKPiA+ICvCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgICJ3aGVyZSBQQUdFX1NJWkUgaXMgWEVOX1BUX0xFVkVM
X1NJWkUoIHtMMyB8Cj4gPiBMMiB8IEwxfSApICIKPiA+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgICJkZXBlbmRpbmcgb24gZXhwZWN0ZWQgU0FUUF9NT0RFIFxuIgo+
ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgIlhFTl9QVF9MRVZF
TF9TSVpFIGlzIGRlZmluZWQgaW4KPiA+IDxhc20vcGFnZS5oPlxuIik7Cj4gPiArwqDCoMKgwqDC
oMKgwqAgZGllKCk7Cj4gPiArwqDCoMKgIH0KPiA+ICsKPiA+ICvCoMKgwqAgaW5kZXggPSBwdF9p
bmRleChwYWdlX3RhYmxlX2xldmVsLCBhbGlnbmVkX2xvYWRfc3RhcnQpOwo+ID4gK8KgwqDCoCBz
dGFnZTFfcGd0Ymxfcm9vdFtpbmRleF0gPSBwYWRkcl90b19wdGUoYWxpZ25lZF9sb2FkX3N0YXJ0
LAo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIFBURV9MRUFGX0RFRkFVTFQgfAo+
ID4gUFRFX0VYRUNVVEFCTEUpOwo+ID4gKwo+ID4gK8KgwqDCoCBhc20gdm9sYXRpbGUgKCAic2Zl
bmNlLnZtYSIgKTsKPiA+ICvCoMKgwqAgY3NyX3dyaXRlKENTUl9TQVRQLAo+ID4gK8KgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgIFBGTl9ET1dOKCh1bnNpZ25lZCBsb25nKXN0YWdlMV9wZ3RibF9y
b290KSB8Cj4gPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgc2F0cF9tb2RlIDw8IFNBVFBf
TU9ERV9TSElGVCk7Cj4gPiArCj4gPiArwqDCoMKgIGlmICggKGNzcl9yZWFkKENTUl9TQVRQKSA+
PiBTQVRQX01PREVfU0hJRlQpID09IHNhdHBfbW9kZSApCj4gPiArwqDCoMKgwqDCoMKgwqAgaXNf
bW9kZV9zdXBwb3J0ZWQgPSB0cnVlOwo+IAo+IEp1c3QgYXMgYSByZW1hcms6IFdoaWxlIHBsYWlu
IHNoaWZ0IGlzIGtpbmQgb2Ygb2theSBoZXJlLCBiZWNhdXNlIHRoZQo+IGZpZWxkCj4gaXMgdGhl
IHRvcC1tb3N0IG9uZSBpbiB0aGUgcmVnaXN0ZXIsIGdlbmVyYWxseSB5b3Ugd2lsbCB3YW50IHRv
IGdldAo+IHVzZWQgdG8KPiBNQVNLX0VYVFIoKSAoYW5kIE1BU0tfSU5TUigpKSBpbiBjYXNlcyBs
aWtlIHRoaXMgb25lLgpUaGFua3MuIEknbGwgbG9vayBhdCB0aGVtLiAKPiAKPiA+ICvCoMKgwqAg
Y3NyX3dyaXRlKENTUl9TQVRQLCAwKTsKPiA+ICsKPiA+ICvCoMKgwqAgLyogQ2xlYW4gTU1VIHJv
b3QgcGFnZSB0YWJsZSAqLwo+ID4gK8KgwqDCoCBzdGFnZTFfcGd0Ymxfcm9vdFtpbmRleF0gPSBw
YWRkcl90b19wdGUoMHgwLCAweDApOwo+ID4gKwo+ID4gK8KgwqDCoCBhc20gdm9sYXRpbGUgKCAi
c2ZlbmNlLnZtYSIgKTsKPiAKPiBEb2Vzbid0IHRoaXMgd2FudCB0byBtb3ZlIGJldHdlZW4gdGhl
IFNBVFAgd3JpdGUgYW5kIHRoZSBjbGVhcmluZyBvZgo+IHRoZQo+IHJvb3QgdGFibGUgc2xvdD8g
QWxzbyBoZXJlIGFuZCBlbHNld2hlcmUgLSBkb24ndCB0aGVzZSBhc20oKXMgbmVlZAo+IG1lbW9y
eQo+IGNsb2JiZXJzPyBBbmQgYW55d2F5IC0gY291bGQgSSB0YWxrIHlvdSBpbnRvIGludHJvZHVj
aW5nIGFuIGlubGluZQo+IHdyYXBwZXIKPiAoZS5nLiBuYW1lZCBzZmVuY2Vfdm1hKCkpIHNvIGFs
bCB1c2VzIGVuZCB1cCBjb25zaXN0ZW50PwpJIHRoaW5rIGNsZWFyaW5nIG9mIHJvb3QgcGFnZSB0
YWJsZSBzaG91bGQgYmUgZG9uZSBiZWZvcmUgInNmZW5jZS52bWEiCmJlY2F1c2Ugd2UgaGF2ZSB0
byBmaXJzdCBjbGVhciBzbG90IG9mIE1NVSdzIHJvb3QgcGFnZSB0YWJsZSBhbmQgdGhlbgptYWtl
IHVwZGF0aW5nIHJvb3QgcGFnZSB0YWJsZSB2aXNpYmxlIGZvciBhbGwuICggYnkgdXNhZ2Ugb2Yg
c2ZlbmNlCmluc3RydWN0aW9uICkKCldlIGNhbiBhZGQgbWVtb3J5IGNsb2JiZXIgdG8gYmUgc3Vy
ZSBidXQgaXQgbG9va3MgbGlrZSBpdCBpcyBhCmR1cGxpY2F0aW9uIG9mICJzZmVuY2UiIGluc3Ry
dWN0aW9uLgoKSSBhZ3JlZSB0aGF0IGl0IHdvdWxkIGJlIG5pY2UgdG8gaW50cm9kdWNlIGEgd3Jh
cHBlci4KCj4gCj4gPiArdm9pZCBfX2luaXQgc2V0dXBfaW5pdGlhbF9wYWdldGFibGVzKHZvaWQp
Cj4gPiArewo+ID4gK8KgwqDCoCBzdHJ1Y3QgbW11X2Rlc2MgbW11X2Rlc2MgPSB7IDAsIDAsIE5V
TEwsIE5VTEwgfTsKPiA+ICsKPiA+ICvCoMKgwqAgLyoKPiA+ICvCoMKgwqDCoCAqIEFjY2VzcyB0
byBfc3RhcmQsIF9lbmQgaXMgYWx3YXlzIFBDLXJlbGF0aXZlCj4gCj4gTml0OiBUeXBvLWVkIHN5
bWJvbCBuYW1lLiBBbHNvIC4uLgo+IAo+ID4gK8KgwqDCoMKgICogdGhlcmVieSB3aGVuIGFjY2Vz
cyB0aGVtIHdlIHdpbGwgZ2V0IGxvYWQgYWRyZXNzZXMKPiA+ICvCoMKgwqDCoCAqIG9mIHN0YXJ0
IGFuZCBlbmQgb2YgWGVuCj4gPiArwqDCoMKgwqAgKiBUbyBnZXQgbGlua2VyIGFkZHJlc3NlcyBM
T0FEX1RPX0xJTksoKSBpcyByZXF1aXJlZAo+ID4gK8KgwqDCoMKgICogdG8gdXNlCj4gPiArwqDC
oMKgwqAgKi8KPiAKPiBzZWUgdGhlIGVhcmxpZXIgbGluZSB3cmFwcGluZyByZW1hcmsgYWdhaW4u
IEZpbmFsbHkgaW4gbXVsdGktc2VudGVuY2UKPiBjb21tZW50cyBmdWxsIHN0b3BzIGFyZSByZXF1
aXJlZC4KRnVsbCBzdG9wcyBtZWFuICcuJyBhdCB0aGUgZW5kIG9mIHNlbnRlbmNlcz8KPiAKPiA+
ICvCoMKgwqAgdW5zaWduZWQgbG9uZyBsb2FkX3N0YXJ0wqDCoMKgID0gKHVuc2lnbmVkIGxvbmcp
X3N0YXJ0Owo+ID4gK8KgwqDCoCB1bnNpZ25lZCBsb25nIGxvYWRfZW5kwqDCoMKgwqDCoCA9ICh1
bnNpZ25lZCBsb25nKV9lbmQ7Cj4gPiArwqDCoMKgIHVuc2lnbmVkIGxvbmcgbGlua2VyX3N0YXJ0
wqAgPSBMT0FEX1RPX0xJTksobG9hZF9zdGFydCk7Cj4gPiArwqDCoMKgIHVuc2lnbmVkIGxvbmcg
bGlua2VyX2VuZMKgwqDCoCA9IExPQURfVE9fTElOSyhsb2FkX2VuZCk7Cj4gPiArCj4gPiArwqDC
oMKgIGlmICggKGxpbmtlcl9zdGFydCAhPSBsb2FkX3N0YXJ0KSAmJgo+ID4gK8KgwqDCoMKgwqDC
oMKgwqAgKGxpbmtlcl9zdGFydCA8PSBsb2FkX2VuZCkgJiYgKGxvYWRfc3RhcnQgPD0gbGlua2Vy
X2VuZCkKPiA+ICkKPiA+ICvCoMKgwqAgewo+ID4gK8KgwqDCoMKgwqDCoMKgIGVhcmx5X3ByaW50
aygiKFhFTikgbGlua2VyIGFuZCBsb2FkIGFkZHJlc3MgcmFuZ2VzCj4gPiBvdmVybGFwXG4iKTsK
PiA+ICvCoMKgwqDCoMKgwqDCoCBkaWUoKTsKPiA+ICvCoMKgwqAgfQo+ID4gKwo+ID4gK8KgwqDC
oCBjYWxjX3BndGJsX2x2bHNfbnVtKCZtbXVfZGVzYyk7Cj4gPiArCj4gPiArwqDCoMKgIGlmICgg
IWNoZWNrX3BndGJsX21vZGVfc3VwcG9ydCgmbW11X2Rlc2MsIGxvYWRfc3RhcnQsCj4gPiBSVl9T
VEFHRTFfTU9ERSkgKQo+IAo+IEl0IGlzIGFnYWluIHF1ZXN0aW9uYWJsZSBoZXJlIHdoZXRoZXIg
cGFzc2luZyBhIGNvbnN0YW50IHRvIGEKPiBmdW5jdGlvbgo+IGF0IGl0cyBzb2xlIGNhbGwgc2l0
ZSBpcyByZWFsbHkgdXNlZnVsLgpJdCBsb29rcyBsaWtlIHRoZXJlIGlzIG5vIHRvbyBtdWNoIHNl
bnNlIHRvIHBhc3MgYSBjb25zdGFudCBkaXJlY3RseSB0bwphIGZ1bmN0aW9uLiBTbyBJJ2xsIHVw
ZGF0ZSB0aGF0Lgo+IAo+ID4gK3ZvaWQgX19pbml0IG5vaW5saW5lIGVuYWJsZV9tbXUoKQo+ID4g
K3sKPiA+ICvCoMKgwqAgLyoKPiA+ICvCoMKgwqDCoCAqIENhbGN1bGF0ZSBhIGxpbmtlciB0aW1l
IGFkZHJlc3Mgb2YgdGhlIG1tdV9pc19lbmFibGVkCj4gPiArwqDCoMKgwqAgKiBsYWJlbCBhbmQg
dXBkYXRlIENTUl9TVFZFQyB3aXRoIGl0Lgo+ID4gK8KgwqDCoMKgICogTU1VIGlzIGNvbmZpZ3Vy
ZWQgaW4gYSB3YXkgd2hlcmUgbGlua2VyIGFkZHJlc3NlcyBhcmUKPiA+IG1hcHBlZAo+ID4gK8Kg
wqDCoMKgICogb24gbG9hZCBhZGRyZXNzZXMgc28gaW4gYSBjYXNlIHdoZW4gbGlua2VyIGFkZHJl
c3NlcyBhcmUKPiA+IG5vdCBlcXVhbAo+ID4gK8KgwqDCoMKgICogdG8gbG9hZCBhZGRyZXNzZXMs
IGFmdGVyIE1NVSBpcyBlbmFibGVkLCBpdCB3aWxsIGNhdXNlCj4gPiArwqDCoMKgwqAgKiBhbiBl
eGNlcHRpb24gYW5kIGp1bXAgdG8gbGlua2VyIHRpbWUgYWRkcmVzc2VzLgo+ID4gK8KgwqDCoMKg
ICogT3RoZXJ3aXNlIGlmIGxvYWQgYWRkcmVzc2VzIGFyZSBlcXVhbCB0byBsaW5rZXIgYWRkcmVz
c2VzCj4gPiB0aGUgY29kZQo+ID4gK8KgwqDCoMKgICogYWZ0ZXIgbW11X2lzX2VuYWJsZWQgbGFi
ZWwgd2lsbCBiZSBleGVjdXRlZCB3aXRob3V0Cj4gPiBleGNlcHRpb24uCj4gPiArwqDCoMKgwqAg
Ki8KPiA+ICvCoMKgwqAgY3NyX3dyaXRlKENTUl9TVFZFQywgTE9BRF9UT19MSU5LKCh1bnNpZ25l
ZAo+ID4gbG9uZykmJm1tdV9pc19lbmFibGVkKSk7Cj4gPiArCj4gPiArwqDCoMKgIC8qIEVuc3Vy
ZSBwYWdlIHRhYmxlIHdyaXRlcyBwcmVjZWRlIGxvYWRpbmcgdGhlIFNBVFAgKi8KPiA+ICvCoMKg
wqAgYXNtIHZvbGF0aWxlICggInNmZW5jZS52bWEiICk7Cj4gPiArCj4gPiArwqDCoMKgIC8qIEVu
YWJsZSB0aGUgTU1VIGFuZCBsb2FkIHRoZSBuZXcgcGFnZXRhYmxlIGZvciBYZW4gKi8KPiA+ICvC
oMKgwqAgY3NyX3dyaXRlKENTUl9TQVRQLAo+ID4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
IFBGTl9ET1dOKCh1bnNpZ25lZCBsb25nKXN0YWdlMV9wZ3RibF9yb290KSB8Cj4gPiArwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqAgUlZfU1RBR0UxX01PREUgPDwgU0FUUF9NT0RFX1NISUZUKTsK
PiA+ICsKPiA+ICvCoMKgwqAgYXNtIHZvbGF0aWxlICggIi5hbGlnbiAyIiApOwo+IAo+IE1heSBJ
IHN1Z2dlc3QgdG8gYXZvaWQgLmFsaWduLCBhbmQgdG8gcHJlZmVyIC5iYWxpZ24gYW5kIC5wMmFs
aWduCj4gaW5zdGVhZD8KPiBUaGlzIGhlbHBzIHBlb3BsZSBjb21pbmcgZnJvbSBkaWZmZXJlbnQg
YXJjaGl0ZWN0dXJlcywgYXMgd2hpY2ggb2YKPiB0aGUgdHdvCj4gLmFsaWduIHJlc29sdmVzIHRv
IGlzIGFyY2gtZGVwZW5kZW50LgpTdXJlLiBUaGFua3MuIEkgdGhvdWdodCB0aGF0IC5wMmFsaWdu
IG1vc3RseSB1c2VkIGZvciBhc3NlbWJseSBjb2RlLgpCdXQgeWVhaCBJJ2xsIHJlLXVzZSBpdCBp
bnN0ZWFkIG9mICcgYXNtIHZvbGF0aWxlICggIi5hbGlnbiAyIiApOycuCgo+IAo+ID4gLS0tIGEv
eGVuL2FyY2gvcmlzY3YvcmlzY3Y2NC9oZWFkLlMKPiA+ICsrKyBiL3hlbi9hcmNoL3Jpc2N2L3Jp
c2N2NjQvaGVhZC5TCj4gPiBAQCAtMSw0ICsxLDUgQEAKPiA+IMKgI2luY2x1ZGUgPGFzbS9hc20u
aD4KPiA+ICsjaW5jbHVkZSA8YXNtL2FzbS1vZmZzZXRzLmg+Cj4gPiDCoCNpbmNsdWRlIDxhc20v
cmlzY3ZfZW5jb2RpbmcuaD4KPiA+IMKgCj4gPiDCoMKgwqDCoMKgwqDCoMKgIC5zZWN0aW9uIC50
ZXh0LmhlYWRlciwgImF4IiwgJXByb2diaXRzCj4gCj4gSG93IGRvZXMgYSBuZWVkIGZvciB0aGlz
IGFyaXNlIGFsbCBvZiB0aGUgc3VkZGVuLCB3aXRob3V0IG90aGVyCj4gY2hhbmdlcwo+IHRvIHRo
aXMgZmlsZT8gSXMgdGhpcyBtYXliZSBhIGxlZnRvdmVyIHdoaWNoIHdhbnRzIGRyb3BwaW5nPwpZ
ZWFoLCBpdCBpcyByZWFsbHkgd2VpcmQuIEknbGwgY2hlY2sgdGhhdC4KCj4gCj4gPiAtLS0gYS94
ZW4vYXJjaC9yaXNjdi94ZW4ubGRzLlMKPiA+ICsrKyBiL3hlbi9hcmNoL3Jpc2N2L3hlbi5sZHMu
Uwo+ID4gQEAgLTEzNyw2ICsxMzcsNyBAQCBTRUNUSU9OUwo+ID4gwqDCoMKgwqAgX19pbml0X2Vu
ZCA9IC47Cj4gPiDCoAo+ID4gwqDCoMKgwqAgLmJzcyA6IHvCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgIC8qIEJTUyAqLwo+ID4gK8KgwqDCoMKgwqDCoMKgIC4gPSBBTElH
TihQT0lOVEVSX0FMSUdOKTsKPiA+IMKgwqDCoMKgwqDCoMKgwqAgX19ic3Nfc3RhcnQgPSAuOwo+
ID4gwqDCoMKgwqDCoMKgwqDCoCAqKC5ic3Muc3RhY2tfYWxpZ25lZCkKPiA+IMKgwqDCoMKgwqDC
oMKgwqAgLiA9IEFMSUdOKFBBR0VfU0laRSk7Cj4gCj4gRG9lc24ndCB0aGlzIHdhbnQgdG8gYmUg
YSBzZXBhcmF0ZSBjaGFuZ2U/ClllcywgaXQgc2hvdWxkIGJlLiBJJ2xsIGRvIHRoYXQgaW4gbmV3
IHZlcnN0aW9uIG9mIHRoZSBwYXRjaCBzZXJpZXMuCgp+IE9sZWtzaWkK



From xen-devel-bounces@lists.xenproject.org Tue May 09 13:05:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 13:05:16 +0000
X-Inumbo-ID: 1baa26f7-ee6a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683637503;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=bFfA2u1bFld5ekEC5yY1hWxiny2s4iBrvx0O8m8QGTw=;
  b=ZA1q/jruHG+NTxIvgiI+c8A2J+/Zcr9/e6egsZWQRKb7F1TIrCoBmDvU
   bUcPFmC2kEzwI/20ldaOUP4gUhaNYef6Djv5BsCZoOXWS/KhluM0pBghe
   5twLYzumU/8jNJs7fPno5m0+OtjRyiUcjSXC7aDcw4p11VPbxGNM3EKOc
   c=;
X-IronPort-RemoteIP: 104.47.59.172
X-IronPort-MID: 107717895
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:gtnfa6MFLjwGtzvvrR2IlsFynXyQoLVcMsEvi/4bfWQNrUoigWZSn
 GEeD26GbqyPM2Okftwlb4S2908AsZXSydFrGgto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tF5g1mP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0sFNPm9B2
 KwmERsITyuituedx7ypFNA506zPLOGzVG8ekldJ6GiDSNoDH9XESaiM4sJE1jAtgMwIBezZe
 8cSdTtoalLHfgFLPVAUTpk5mY9EhFGmK2Ee9A3T+PtxujeJpOBy+OGF3N79U9qGX8hK2G2fo
 XrL5T/RCRAGLt2PjzGC9xpAg8eWxXykAd1CReDQGvhC0Xyo4jQWFT8se1KbvdKJ0Uy4ecBBA
 hlBksYphe1onKCxdfH/VRClpH+PvjYHRsFdVeY97Wml1a788wufQG8eQVZpeNEg8cM7WzEu/
 luIhM/yQyxitqWPTnCQ/avSqim9URX5NkcHbC4ACAcAvd/qpdhpigqVF4k5VqmoktfyBDf8h
 SiQqzQzjKkSishN0Lin+VfAgHSnoZ2hohMJ2zg7l1mNtmtRDLNJraTygbQHxZ6s9Lqkc2Q=
IronPort-HdrOrdr: A9a23:QBNhg6Ppz/HZuMBcTgWjsMiBIKoaSvp037BK7S1MoH1uA6mlfq
 WV9sjzuiWatN98Yh8dcLO7Scu9qBHnlaKdiLN5VduftWHd01dAR7sSjrcKrQeAJ8X/nNQtr5
 uJccJFeaDN5Y4Rt7eH3OG6eexQv+Vu6MqT9IPjJ+8Gd3ATV0lnhT0JbTqzIwlNayRtI4E2L5
 aY7tovnUvaRZxGBv7LYEXsRoL41qT2qK4=
X-Talos-CUID: 9a23:pQfrfGw0b+695tmHbvQwBgUzIP8rfVCN8k77Jm6DKFdsY7qbWwGprfY=
X-Talos-MUID: 9a23:nGWtGQW3MXcYz+Pq/G7NvxU6MdpK2YqnKFwmoc4EveuWKwUlbg==
X-IronPort-AV: E=Sophos;i="5.99,262,1677560400"; 
   d="scan'208";a="107717895"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <18090224-6838-8ed4-6be5-ae10a334e277@citrix.com>
Date: Tue, 9 May 2023 14:04:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/6] x86/cpu-policy: Drop build time cross-checks of
 featureset sizes
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-2-andrew.cooper3@citrix.com>
 <6531df09-7f7f-a1e2-5993-bd2a14e22421@suse.com>
In-Reply-To: <6531df09-7f7f-a1e2-5993-bd2a14e22421@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LNXP265CA0005.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5e::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB5328:EE_
X-MS-Office365-Filtering-Correlation-Id: 908d2fcc-4275-42f8-ea88-08db508dfb57
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 908d2fcc-4275-42f8-ea88-08db508dfb57
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 13:04:54.5556
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5328

On 08/05/2023 7:47 am, Jan Beulich wrote:
> On 04.05.2023 21:39, Andrew Cooper wrote:
>> These BUILD_BUG_ON()s exist to cover the curious absence of a diagnostic for
>> code which looks like:
>>
>>   uint32_t foo[1] = { 1, 2, 3 };
>>
>> However, GCC 12 at least does now warn for this:
>>
>>   foo.c:1:24: error: excess elements in array initializer [-Werror]
>>       1 | uint32_t foo[1] = { 1, 2, 3 };
>>         |                        ^
>>   foo.c:1:24: note: (near initialization for 'foo')
> I'm pretty sure all gcc versions we support diagnose such cases. In turn
> the arrays in question don't have explicit dimensions at their
> definition sites, and hence they derive their dimensions from their
> initializers. So the build-time-checks are about the arrays in fact
> obtaining the right dimensions, i.e. the initializers being suitable.
>
> With the core part of the reasoning not being applicable, I'm afraid I
> can't even say "okay with an adjusted description".

Now I'm extra confused.

I put those BUILD_BUG_ON()s in because I was not getting a diagnostic
when I was expecting one, and there was a bug in the original featureset
work caused by this going wrong.

But godbolt seems to agree that even GCC 4.1 notices.

Maybe it was some other error (C file not seeing the header properly?)
which disappeared across the upstream review?


Either way, these aren't appropriate, and need deleting before patch 5,
because the check is no longer valid when a featureset can be longer
than the autogen length.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 09 13:15:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 13:15:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532189.828272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNBZ-0007bl-DH; Tue, 09 May 2023 13:15:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532189.828272; Tue, 09 May 2023 13:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNBZ-0007be-8V; Tue, 09 May 2023 13:15:21 +0000
Received: by outflank-mailman (input) for mailman id 532189;
 Tue, 09 May 2023 13:15:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SDpd=A6=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pwNBX-0007bW-6W
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 13:15:20 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8ace1a6e-ee6b-11ed-b228-6b7b168915f2;
 Tue, 09 May 2023 15:15:18 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux))
 id 1pwNA0-00FGci-C9; Tue, 09 May 2023 13:13:44 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id B23CD30026A;
 Tue,  9 May 2023 15:13:40 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 9497920B41FB2; Tue,  9 May 2023 15:13:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ace1a6e-ee6b-11ed-b228-6b7b168915f2
Date: Tue, 9 May 2023 15:13:40 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Ven <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 34/36] x86/smpboot: Implement a bit spinlock to
 protect the realmode stack
Message-ID: <20230509131340.GA83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185219.123719053@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508185219.123719053@linutronix.de>

On Mon, May 08, 2023 at 09:44:22PM +0200, Thomas Gleixner wrote:

> @@ -252,6 +252,17 @@ SYM_INNER_LABEL(secondary_startup_64_no_
>  	movq	TASK_threadsp(%rax), %rsp
>  
>  	/*
> +	 * Now that this CPU is running on its own stack, drop the realmode
> +	 * protection. For the boot CPU the pointer is NULL!
> +	 */
> +	movq	trampoline_lock(%rip), %rax
	movl	$0, (%rax)

> +.Lsetup_gdt:
> +	/*
>  	 * We must switch to a new descriptor in kernel space for the GDT
>  	 * because soon the kernel won't have access anymore to the userspace
>  	 * addresses where we're currently running on. We have to do that here

> --- a/arch/x86/realmode/rm/trampoline_64.S
> +++ b/arch/x86/realmode/rm/trampoline_64.S
> @@ -37,6 +37,24 @@
>  	.text
>  	.code16
>  
> +.macro LOAD_REALMODE_ESP
> +	/*
> +	 * Make sure only one CPU fiddles with the realmode stack
> +	 */
> +.Llock_rm\@:
> +	btl	$0, tr_lock
> +	jnc	2f
> +	pause
> +	jmp	.Llock_rm\@
> +2:
> +	lock
> +	btsl	$0, tr_lock
> +	jc	.Llock_rm\@

Do we really care about performance here, or should we pick the simpler
form? Also, 'lock' is a prefix, not an instruction.

.Llock_rm\@:
	lock btsl	$0, tr_lock
	jnc		2f
	pause
	jmp		.Llock_rm\@
2:
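For reference, the two acquire styles being compared here, the patch's
test-and-test-and-set loop versus the single locked-op loop above, can
be sketched with C11 atomics (the tr_lock naming mirrors the asm; this
is an illustration of the technique, not the proposed code):

```c
#include <stdatomic.h>

static atomic_uint tr_lock;  /* bit 0 is the lock bit */

/* Simpler form: retry the locked test-and-set directly.
 * fetch_or returns the old value; old bit 0 set means contended. */
static void lock_simple(void)
{
    while (atomic_fetch_or_explicit(&tr_lock, 1u,
                                    memory_order_acquire) & 1u)
        ;  /* spin; a pause/cpu_relax() hint would go here */
}

/* Test-and-test-and-set form from the patch: spin on a plain read
 * first so contended waiters don't hammer the bus with locked ops. */
static void lock_ttas(void)
{
    for (;;) {
        while (atomic_load_explicit(&tr_lock,
                                    memory_order_relaxed) & 1u)
            ;  /* read-only spin */
        if (!(atomic_fetch_or_explicit(&tr_lock, 1u,
                                       memory_order_acquire) & 1u))
            return;  /* old bit was clear: we own the lock */
    }
}

static void unlock(void)
{
    atomic_store_explicit(&tr_lock, 0u, memory_order_release);
}
```

Both are correct; the read-only pre-check only pays off under real
contention, which is the "do we care about performance here" question.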

> +
> +	# Setup stack
> +	movl	$rm_stack_end, %esp
> +.endm
> +
>  	.balign	PAGE_SIZE


From xen-devel-bounces@lists.xenproject.org Tue May 09 13:17:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 13:17:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532193.828281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwND3-00088p-Oe; Tue, 09 May 2023 13:16:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532193.828281; Tue, 09 May 2023 13:16:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwND3-00088g-Jc; Tue, 09 May 2023 13:16:53 +0000
Received: by outflank-mailman (input) for mailman id 532193;
 Tue, 09 May 2023 13:16:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwND3-00088N-0d; Tue, 09 May 2023 13:16:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwND2-0000z2-TK; Tue, 09 May 2023 13:16:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwND2-00088o-EN; Tue, 09 May 2023 13:16:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwND2-0002ev-Dt; Tue, 09 May 2023 13:16:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180584-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180584: tolerable FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 May 2023 13:16:52 +0000

flight 180584 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180584/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 180580 pass in 180584
 test-amd64-i386-pair         11 xen-install/dst_host       fail pass in 180580
 test-amd64-coresched-i386-xl 20 guest-localmigrate/x10     fail pass in 180580
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180580
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180580

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180580
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180580
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180580
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180580
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180580
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180580
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180580
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180580
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180580
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180580
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180580
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180580
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  a16fb78515d54be95f81c0d1c0a3a7b954a54d0a
baseline version:
 xen                  a16fb78515d54be95f81c0d1c0a3a7b954a54d0a

Last test of basis   180584  2023-05-09 01:53:21 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue May 09 13:19:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 13:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532201.828291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNFM-0000PO-7h; Tue, 09 May 2023 13:19:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532201.828291; Tue, 09 May 2023 13:19:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNFM-0000PH-4j; Tue, 09 May 2023 13:19:16 +0000
Received: by outflank-mailman (input) for mailman id 532201;
 Tue, 09 May 2023 13:19:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jxqG=A6=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pwNFL-0000P6-2r
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 13:19:15 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.23]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16b57bf8-ee6c-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 15:19:12 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz49DJ95vk
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 9 May 2023 15:19:09 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16b57bf8-ee6c-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1683638349; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=GRiBNU2amoEyyJJhmK5Is9uO3Ekfn8meC6iqfYwqjOijRLCZYAhNuju6sKXtBpi5Sj
    mKc9Ss7O5jYYaJvxyRanUNnBfswav+UvEinVP0MnSqA8Tgc5hSrtSWfhw3wHlBA72NOF
    jReajvxj4YomOfuAXXDcqCST33uuzkm04A48BMhr5Lk3l3zdCLBievNV+s29UyCirse+
    JBIcT8OwY7+Zub4y5sEkAY9R25rV82GYrBl97fRHfnMrjB5yMVcMSFbzbePQXrtNrAEu
    dgWJwVZVvd6u7C6aeNri5j50Y2SkJlWCQ4rdIUwWXXthzSM+gsB6SBtAnhHvRYrXH2zP
    HTCw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683638349;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=rpuULCw5A7ilt2bXrFeg0Vy8Af4vXwqUM0zRdW+DNBM=;
    b=Mgk9WGkXHV45+gDRCXJoqNsLz0r4mz1JWNfXseGYkesIOojnVIaV+/WU1J8AfDBbj2
    3zsZIL0ENyUfX10mt8gAeUgl6aPBrIz10/rxj96M31EeDF4AMHRDnpmxoAmrb/yxeBU1
    Z6wUWoRJ3blFrreD3nzMLB8JmJ9ZNGho+hlmHCpWdVPySl+aEtdjHuh8pdXCmIk77gAX
    kcE/GXJI1qqj+cHLWKkhSojILDAfb0xbdj9LFmEXlaWp3MLbp3EPWGyFdLionrvP+PMH
    pn1NkQIxBDpb1/CAWgtCYik8Rdzq7Lq+MX1JPNxNhSxfk+mKjXHF2AA50Kt/4J3aVyD6
    +XAg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683638349;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=rpuULCw5A7ilt2bXrFeg0Vy8Af4vXwqUM0zRdW+DNBM=;
    b=JurmNDJ24tIKdKnUeKka3qhwunZJD9BCwS3I9mSJ93JZgUj08ClUWtIxuPvZO0CRaT
    zySmTWVmNl3QYtsNhYnGUrYwx1DCoAiWY/h9Ue/OwN/f2p6WlWqT6ONkZknR3/Ew0NGB
    qpLUCkyDCeVrT/rSK0f9G3EAtz8QomDxEvj9wEUtASPX5sILyIEZkt6pxvdszotRJEQQ
    fSFXcdXhNQY4qxqmW6aUPtyyWVoBHhQryf2H47c100zSqHb9NacVvJE0mapw/7gsEPng
    VuvUE0M383L+zjfqGZ6QuzzthpdPNoYDAnX6WCo7M0uA364gyzvedYMCRc/GSFDxNvOV
    2AGQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683638349;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=rpuULCw5A7ilt2bXrFeg0Vy8Af4vXwqUM0zRdW+DNBM=;
    b=nxwhfRaeM5n8fK+0IxqCzztC4mY5uR60xqLFvmI6fIoE1e3BHIbL3qhli9epoaVBaG
    sBTPJNg2NP1ZGdp3VfDQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4ARWIYxzstZKeVom+bauo0LKSCjuo5iX5xLikmg=="
Date: Tue, 9 May 2023 15:19:02 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jason Andryuk <jandryuk@gmail.com>, Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] Fix install.sh for systemd
Message-ID: <20230509151902.391b1bff.olaf@aepfle.de>
In-Reply-To: <0785a316-1920-f5de-61d3-ed21ddbff0b9@citrix.com>
References: <20230508171437.27424-1-olaf@aepfle.de>
	<0785a316-1920-f5de-61d3-ed21ddbff0b9@citrix.com>
X-Mailer: Claws Mail 20230504T161344.b05adb60 hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/Uod6fEfg8EmAfoVpyfp8Ud7";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/Uod6fEfg8EmAfoVpyfp8Ud7
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 9 May 2023 13:47:11 +0100 Andrew Cooper <andrew.cooper3@citrix.com>:

> > +++ b/tools/hotplug/Linux/init.d/xendriverdomain.in
> > @@ -49,6 +49,7 @@ fi
> > 
> >  do_start () {
> >  	echo Starting xl devd...
> > +	mkdir -m700 -p @XEN_RUN_DIR@
> 
> Why is this 700, and the others just using regular perms?

I think the permissions are just copy&paste from elsewhere.
I have to check why -m700 is used anyway, and why it is not used consistently.

> Also, doesn't it want quoting like the other examples too?

There is a mix of quoting and non-quoting. Not sure if having spaces in any of the path names does actually work today.

I will double check this detail as well.
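
The quoting question above can be sketched as follows. This is a hedged illustration, not the patch itself: XEN_RUN_DIR here is a hypothetical stand-in for the value produced by the @XEN_RUN_DIR@ configure substitution, and `stat -c` assumes GNU coreutils.

```shell
# Quoting the substituted path so a directory name containing spaces
# survives word splitting; -m 700 sets the mode of the final component
# regardless of umask.
XEN_RUN_DIR="${TMPDIR:-/tmp}/xen-run-demo"
mkdir -m 700 -p "$XEN_RUN_DIR"
# print the resulting permission bits
stat -c %a "$XEN_RUN_DIR"
```

Unquoted, `mkdir -m700 -p $XEN_RUN_DIR` would split a path such as `/var/run/my xen` into two arguments, which is the breakage the quoted form avoids.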

Olaf

--Sig_/Uod6fEfg8EmAfoVpyfp8Ud7
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRaSEYACgkQ86SN7mm1
DoDB+xAAkapUr5jz3U2KvFAEawzjuRpqfUUe4k6MThz2aTInWcgRZ/ZIzlbVKyXZ
yhzRawy3YX2c2cPjz+DCO4U941BP9i29LFVpXrplWQ9lxP5BwjBgvDMkNx7dvBho
lJ6wWQLWxYhgHPH1dLV/V1TLyj2k2eJHsJEH3zMEPUvqw/YT6Ex/B3p9J3ukqUt0
evsjoOZZcyI/WqAw2ihz3FRvUTv7bJLKjC0KzwmCjbjR9PsvGsuUVxXPpws4w1Ki
/eOCoN3ZwmAicCe9VQ7aGR6moVZhEjN+Nm+AoTsnPWHuypdihLAMiZRcNn5loRqf
yY7cyr/h2P4wqomsHLeUee5tJ0wTFfO6lcCJ2tndSWKQggP8YHe9gDktSPzYoawN
XTgFf2OJvMhj3iYkhHwpzw98nb3DSEK5+VyhrUkemeRZ4uMH+r1PjpPldY+zTAdm
KmUnDhcSnbsm9u0XdngmJA2gDk9CuFWkYzCTPFPlJk8GBDiTPFXAljOTC+j1lX3A
qd7974W3pWQgtHt8SLJM6ysUZpa7zKbxGyrZpy+nXLhYUG+GPfGIFYiWLtVFRo1Q
7br3a8u3zeNFVPTP1j753KveqqnKQKQzUVRLSbkUZcvuHov6Bt8/AFwNZxiSNcKF
g2nBXy8ChMaKUJx6vlg/dSEhyfn/nHBYp4s2xPeMNbQvwZUUXis=
=VaDW
-----END PGP SIGNATURE-----

--Sig_/Uod6fEfg8EmAfoVpyfp8Ud7--


From xen-devel-bounces@lists.xenproject.org Tue May 09 13:47:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 13:47:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532218.828301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNgq-0003vy-EV; Tue, 09 May 2023 13:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532218.828301; Tue, 09 May 2023 13:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNgq-0003vr-B1; Tue, 09 May 2023 13:47:40 +0000
Received: by outflank-mailman (input) for mailman id 532218;
 Tue, 09 May 2023 13:47:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwNgp-0003vl-Nx
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 13:47:39 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f278ad7-ee70-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 15:47:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f278ad7-ee70-11ed-b229-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683640056;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=EiOLX/A8rFePD0DrbaikIW8S9TmO2h3FaFar7PON5hU=;
	b=IDDtornM7g68QBrYy4uAnmWXjYq0L0EFNdR49PbDsHqDTgeml6v7RJL2t0m5sUXtVghPxN
	Y9xm+Z8m/e3hKUrE5pyar8/KK0qEKQPlYka0/Tb3iMx+mgfeyMNL/O25jjjjRzlsdTv+0Z
	8shI/utWzvYwqBv97NrrZDv7w4aKvhnGNE6KrmZ++Tu6qGaUft6vGLNO3kbS/OKo+Q0Re5
	6QCYNHWmtfZwiB9EeOwWBrdDOurFiMNsscJ4jOZMujRqBwDip8CpD7bfEN2VdQMrgNJt2m
	19jTmwkZ2nc1PLZHDKShrWj69L1Yd8FKDMqN/Q3H65LVSvPfuBkvDkQSP/O+tg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683640056;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=EiOLX/A8rFePD0DrbaikIW8S9TmO2h3FaFar7PON5hU=;
	b=Qut7XwyPl0DgMnjgEWltV14QsYg9li7QYZrUkmEwn/8iVTGLbJy2fiNLjt0flPIbrA0rho
	MvlgsTmYsK7KiKDA==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 34/36] x86/smpboot: Implement a bit spinlock to
 protect the realmode stack
In-Reply-To: <20230509131340.GA83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185219.123719053@linutronix.de>
 <20230509131340.GA83892@hirez.programming.kicks-ass.net>
Date: Tue, 09 May 2023 15:47:36 +0200
Message-ID: <87v8h1zkef.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 15:13, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:44:22PM +0200, Thomas Gleixner wrote:
>
> Do we really care about performance here; or should we pick the simpler
> form? Also, 'lock' is a prefix, not an instruction.

Right. KISS is the way to go.


From xen-devel-bounces@lists.xenproject.org Tue May 09 13:58:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 13:58:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532228.828310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNrP-0005SU-Ca; Tue, 09 May 2023 13:58:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532228.828310; Tue, 09 May 2023 13:58:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNrP-0005SN-9q; Tue, 09 May 2023 13:58:35 +0000
Received: by outflank-mailman (input) for mailman id 532228;
 Tue, 09 May 2023 13:58:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SDpd=A6=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pwNrN-0005SH-ID
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 13:58:34 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9465cec7-ee71-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 15:58:31 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pwNqk-0069ad-04; Tue, 09 May 2023 13:57:54 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 51BE730026A;
 Tue,  9 May 2023 15:57:49 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 3402820B08835; Tue,  9 May 2023 15:57:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9465cec7-ee71-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=hI8/gqpI8woXdIossg1TLqD9QGRFHNBoUAZE5mW3mjQ=; b=GG2RnOQApmwDBLG6Gsm3njtVQf
	QxqK9VGUrgp9oG3wZracPmQrcB+PBRcVxw73VjgtK1EMmZyecLmKDRZHvAfJpdPcc5RHZtTaq4xJD
	Mbjqc7jH+sKLynVrQ64G6thUi4zwqh8rCYsg9Lp1iTW79Slo5Qe2L9QpI3n9DvhT3p7bA6TVmgAX9
	8FkXE5EqF+OkD2XK8OFB/cQwLHDQ75opJ0AVpsGCIfoG/ikF4CYJw/Z3nYXvLfqJ6eD5BtnvJz2Bw
	RLxcdIrVYS3ih1v97QwYZHxN2Pa2PpFaGa3IzYQuFTqIQDj8U8Z4tCDPnIp0p0jNTeEO6UhbNFTql
	XWuyju0w==;
Date: Tue, 9 May 2023 15:57:49 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 35/36] x86/smpboot: Support parallel startup of
 secondary CPUs
Message-ID: <20230509135749.GB83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185219.176824543@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508185219.176824543@linutronix.de>

On Mon, May 08, 2023 at 09:44:23PM +0200, Thomas Gleixner wrote:
> +	/*  APIC ID not found in the table. Drop the trampoline lock and bail. */
> +	movq	trampoline_lock(%rip), %rax

Again:

	movl	$0, (%rax)

is sufficient for unlock.

> +	lock
> +	btrl	$0, (%rax)


From xen-devel-bounces@lists.xenproject.org Tue May 09 14:03:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 14:03:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532233.828320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNwR-0006xv-Ud; Tue, 09 May 2023 14:03:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532233.828320; Tue, 09 May 2023 14:03:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwNwR-0006xo-Rv; Tue, 09 May 2023 14:03:47 +0000
Received: by outflank-mailman (input) for mailman id 532233;
 Tue, 09 May 2023 14:03:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vSxc=A6=citrix.com=prvs=486b9cf0a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pwNwQ-0006xP-HK
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 14:03:46 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e694ffd-ee72-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 16:03:44 +0200 (CEST)
Received: from mail-bn8nam12lp2170.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 May 2023 10:03:36 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5363.namprd03.prod.outlook.com (2603:10b6:a03:225::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Tue, 9 May
 2023 14:03:34 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 14:03:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e694ffd-ee72-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683641024;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=AqTAeukQRylbsfIdYnWv38oZH9fwytMBSHnXx258v3g=;
  b=Y42yKCMC9INQYyWdSaPf1kEv6axokBYqPX8nJW1CQ4e2rT7tBM/5+Uku
   5i3bZ1/JNOxnTu5Vebj6ltNtOn3PFnsXst+G2BEEn5FI5pjOhQ5YBFCL0
   PuAdlFRn6pE5W1WSRnxGvG0U+DU6pHqEXawQUMYRE2lN3bkPC/4ltL1WF
   I=;
X-IronPort-RemoteIP: 104.47.55.170
X-IronPort-MID: 110846984
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:p7rKMatjY8Y0z4LafK8crfZ0eOfnVHtfMUV32f8akzHdYApBsoF/q
Message-ID: <e888dc16-66bf-fc15-9ddd-f10879b79a5d@citrix.com>
Date: Tue, 9 May 2023 15:03:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 5/6] x86/cpu-policy: Disentangle X86_NR_FEAT and
 FEATURESET_NR_ENTRIES
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-6-andrew.cooper3@citrix.com>
 <9613df3b-c49f-6034-ff99-7a6ea9286f0f@suse.com>
In-Reply-To: <9613df3b-c49f-6034-ff99-7a6ea9286f0f@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 08/05/2023 8:45 am, Jan Beulich wrote:
> On 04.05.2023 21:39, Andrew Cooper wrote:
>> When adding new words to a featureset, there is a reasonable amount of
>> boilerplate and it is preferable to split the addition into multiple patches.
>>
>> GCC 12 spotted a real (transient) error which occurs when splitting additions
>> like this.  Right now, FEATURESET_NR_ENTRIES is dynamically generated from the
>> highest numeric XEN_CPUFEATURE() value, and can be less than what the
>> FEATURESET_* constants suggest the length of a featureset bitmap ought to be.
>>
>> This causes the policy <-> featureset converters to genuinely access
>> out-of-bounds on the featureset array.
>>
>> Rework X86_NR_FEAT to be related to FEATURESET_* alone, allowing it
>> specifically to grow larger than FEATURESET_NR_ENTRIES.
>>
>> Reported-by: Jan Beulich <jbeulich@suse.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> While, like you, I could live with the previous patch even if I don't
> particularly like it, I'm not convinced of the route you take here.

It's the route you tentatively agreed to in
https://lore.kernel.org/xen-devel/a282c338-98ab-6c3f-314b-267a5a82bad1@suse.com/

> Can't
> we instead improve build-time checking, so the issue spotted late in the
> build by gcc12 can be spotted earlier and/or be connected better to the
> underlying reason?

I don't understand what you mean by this.  For the transient period of
time, Xen's idea of a featureset *is* longer than the autogen idea, hence
the work in this patch to decouple the two.

>
> One idea I have would deal with another aspect I don't like about our
> present XEN_CPUFEATURE() as well: The *32 that's there in every use of
> the macro. How about
>
> XEN_CPUFEATURE(FSRCS,        10, 12) /*A  Fast Short REP CMPSB/SCASB */
>
> as the common use and e.g.
>
> XEN_CPUFEATURE(16)
>
> or (if that ends up easier in gen-cpuid.py and/or the public header)
> something like
>
> XEN_CPUFEATURE(, 16, )
>
> as the placeholder required for (at least trailing) unpopulated slots? Of
> course the macro used may also be one of a different name, which may even
> be necessary to keep the public header reasonably simple; maybe as much
> as avoiding use of compiler extensions there. (This would then mean to
> leave alone XEN_CPUFEATURE(), and my secondary adjustment would perhaps
> become an independent change to make.)

Honestly, I don't want to hide the *32 part of the expression.  This
logic is already magic enough.

If we were to do something like this, I don't see what's wrong with just
having the value as a regular define at the end anyway.

One way or another with this approach, something needs updating in the
tail of cpufeatureset.h, and gen-cpuid.py can easily parse for a
specific named constant, and it will be less magic than overloading
XEN_CPUFEATURE().
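The tail-of-header constant mentioned above could look something like the
following (a hypothetical sketch only: the FSRCS entry copies the style
quoted earlier in the thread, but the macro body, the enum consumer and the
trailing value are illustrative, not Xen's actual code):

```c
/* One consumer's expansion of the X-macro list; real Xen has several. */
#define XEN_CPUFEATURE(name, value) FEAT_##name = (value),
enum feat_sketch {
    XEN_CPUFEATURE(FSRCS, 10*32 + 12) /*A  Fast Short REP CMPSB/SCASB */
};
#undef XEN_CPUFEATURE

/* The "regular define at the end": a named constant the generator
 * script could parse for directly, instead of overloading
 * XEN_CPUFEATURE() with placeholder entries.  Placeholder value. */
#define FEATURESET_NR_ENTRIES 13
```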

>> To preempt what I expect will be the first review question: no, FEATURESET_*
>> can't become an enumeration, because the constants undergo token concatenation
>> in the preprocessor as part of making DECL_BITFIELD() work.
> Just as a remark: I had trouble understanding this. Which was a result of
> you referring to token concatenation being the problem (which is fine when
> the results are enumerators), when really the issue is with the result of
> the concatenation wanting to be expanded to a literal number.
>
> Then again - do CPUID_BITFIELD_<n> really need to be named that way?
> Couldn't they equally well be CPUID_BITFIELD_1d, CPUID_BITFIELD_e1c, and
> alike, thus removing the need for intermediate macro expansion?

gen-cpuid.py doesn't know the short names; only Xen does, which is why
the expansion needs to know the name->word mapping.

I suppose this can be fixed, but it will require more magic comments and
more parsing to achieve.
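For reference, the token-concatenation constraint being discussed reduces
to a few lines (a sketch with made-up names that only mirror Xen's; the
real DECL_BITFIELD() machinery differs):

```c
/* Works: FEATURESET_1d is a #define, so it expands to the literal 0
 * during argument prescan, and the inner macro pastes a real token. */
#define FEATURESET_1d 0
#define CPUID_BITFIELD_0  unsigned leaf1_edx   /* stand-in field decl */
#define DECL_BITFIELD_(w) CPUID_BITFIELD_##w
#define DECL_BITFIELD(w)  DECL_BITFIELD_(w)    /* indirection forces expansion */

struct policy_sketch {
    DECL_BITFIELD(FEATURESET_1d);   /* becomes: unsigned leaf1_edx; */
};

/* Were FEATURESET_1d an enumerator instead, the preprocessor would not
 * expand it, and the paste would yield the undefined token
 * CPUID_BITFIELD_FEATURESET_1d, i.e. a compile error. */
```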

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 09 14:25:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 14:25:14 +0000
Message-ID: <f09c9127-1cc4-1ed9-0348-be12c0c999e8@suse.com>
Date: Tue, 9 May 2023 16:24:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 5/6] x86/cpu-policy: Disentangle X86_NR_FEAT and
 FEATURESET_NR_ENTRIES
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-6-andrew.cooper3@citrix.com>
 <9613df3b-c49f-6034-ff99-7a6ea9286f0f@suse.com>
 <e888dc16-66bf-fc15-9ddd-f10879b79a5d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e888dc16-66bf-fc15-9ddd-f10879b79a5d@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 09.05.2023 16:03, Andrew Cooper wrote:
> On 08/05/2023 8:45 am, Jan Beulich wrote:
>> On 04.05.2023 21:39, Andrew Cooper wrote:
>>> When adding new words to a featureset, there is a reasonable amount of
>>> boilerplate and it is preferable to split the addition into multiple patches.
>>>
>>> GCC 12 spotted a real (transient) error which occurs when splitting additions
>>> like this.  Right now, FEATURESET_NR_ENTRIES is dynamically generated from the
>>> highest numeric XEN_CPUFEATURE() value, and can be less than what the
>>> FEATURESET_* constants suggest the length of a featureset bitmap ought to be.
>>>
>>> This causes the policy <-> featureset converters to genuinely access
>>> out-of-bounds on the featureset array.
>>>
>>> Rework X86_NR_FEAT to be related to FEATURESET_* alone, allowing it
>>> specifically to grow larger than FEATURESET_NR_ENTRIES.
>>>
>>> Reported-by: Jan Beulich <jbeulich@suse.com>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> While, like you, I could live with the previous patch even if I don't
>> particularly like it, I'm not convinced of the route you take here.
> 
> It's the route you tentatively agreed to in
> https://lore.kernel.org/xen-devel/a282c338-98ab-6c3f-314b-267a5a82bad1@suse.com/

Right. Yet I deliberately said "may be the best" there, as something
better might turn up. And getting the two numbers to always agree, as
suggested, might end up being better.

>> Can't
>> we instead improve build-time checking, so the issue spotted late in the
>> build by gcc12 can be spotted earlier and/or be connected better to the
>> underlying reason?
> 
> I don't understand what you mean by this.  For the transient period of
> time, Xen's idea of a featureset *is* longer than the autogen idea, hence
> the work in this patch to decouple the two.

Well, this part of my reply was just aiming at diagnosing the issue as
early and as clearly as possible, so that one can easily and quickly
adjust whatever is missing in a change being worked on. The main goal of
course needs to be that this can't easily go entirely unnoticed (as had
happened, which is what prompted this attempt of yours to address the
issue). I.e. diagnosing late is still far better than failing to
diagnose at all (without the compiler spotting it).
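One shape an earlier, clearer diagnostic could take is a static assertion
at the point where the two counts meet (a sketch only: the constant names
come from the thread, but the values and the check itself are hypothetical,
not what the patch actually does):

```c
/* Placeholder values standing in for the generated and derived counts. */
#define FEATURESET_NR_ENTRIES 17   /* autogenerated by gen-cpuid.py */
#define X86_NR_FEAT           17   /* implied by the FEATURESET_* constants */

/* Fails the build with a readable message as soon as the headers are
 * included, instead of a late array-bounds warning deep inside the
 * policy <-> featureset converters. */
_Static_assert(X86_NR_FEAT >= FEATURESET_NR_ENTRIES,
               "FEATURESET_* constants imply a longer bitmap than generated");
```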

>> One idea I have would deal with another aspect I don't like about our
>> present XEN_CPUFEATURE() as well: The *32 that's there in every use of
>> the macro. How about
>>
>> XEN_CPUFEATURE(FSRCS,        10, 12) /*A  Fast Short REP CMPSB/SCASB */
>>
>> as the common use and e.g.
>>
>> XEN_CPUFEATURE(16)
>>
>> or (if that ends up easier in gen-cpuid.py and/or the public header)
>> something like
>>
>> XEN_CPUFEATURE(, 16, )
>>
>> as the placeholder required for (at least trailing) unpopulated slots? Of
>> course the macro used may also be one of a different name, which may even
>> be necessary to keep the public header reasonably simple; maybe as much
>> as avoiding use of compiler extensions there. (This would then mean to
>> leave alone XEN_CPUFEATURE(), and my secondary adjustment would perhaps
>> become an independent change to make.)
> 
> Honestly, I don't want to hide the *32 part of the expression.  This
> logic is already magic enough.

Well, I certainly wouldn't insist, but to me it looks pretty odd to have
it on all the lines.

> If we were to do something like this, I don't see what's wrong with just
> having the value as a regular define at the end anyway.
> 
> One way or another with this approach, something needs updating in the
> tail of cpufeatureset.h, and gen-cpuid.py can easily parse for a
> specific named constant, and it will be less magic than overloading
> XEN_CPUFEATURE().

If less overloading is deemed better - fine with me. Looking at the
script, I wasn't sure that hunting for an entirely different construct
would end up being any tidier.

What isn't really clear to me from your reply: Are you okay with trying
such an alternative approach? Or are you opposed to it? Or something in
the middle, like being okay with it as long as you're not the one who
has to try it out?

>>> To preempt what I expect will be the first review question: no, FEATURESET_*
>>> can't become an enumeration, because the constants undergo token concatenation
>>> in the preprocessor as part of making DECL_BITFIELD() work.
>> Just as a remark: I had trouble understanding this. Which was a result of
>> you referring to token concatenation being the problem (which is fine when
>> the results are enumerators), when really the issue is with the result of
>> the concatenation wanting to be expanded to a literal number.
>>
>> Then again - do CPUID_BITFIELD_<n> really need to be named that way?
>> Couldn't they equally well be CPUID_BITFIELD_1d, CPUID_BITFIELD_e1c, and
>> alike, thus removing the need for intermediate macro expansion?
> 
> gen-cpuid.py doesn't know the short names; only Xen does, which is why
> the expansion needs to know the name->word mapping.
> 
> I suppose this can be fixed, but it will require more magic comments and
> more parsing to achieve.

Okay, let's leave this entire aspect aside for now. It started from a
not-to-be-committed remark only anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 09 14:28:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 14:28:54 +0000
X-Inumbo-ID: ca5d7d86-ee75-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CQDjRF/Cxw7ML6Ch8Ua5pDbmScLjmLUA7o6EHKRnr8YXDpAH2kLHEIF2mrdoeMEqTKBYolLNTpGYli7lYOmZ5cOqDX0e/Y1JhQWr40V4BIi7IowLwsYxZ3tzMdoloW6XMbWcenaJCzoRtLMCEjj7A3Th0gl13tsGLdMepeClKI0CcmB9h9/oxdqYUR6N2UjwrVnLSOn+PTwyiT0RD2bIyM8zni2EiBElnlZHE4DtBclNbpXuOLaZfM6HBJpZvdgSFztSEkrU9rVT5k1K5BGtYmiEIRpQlLnqyKS+upKl0Z83KekWkxCQqsk97eSvzOu60OrTlKUtWwpgZMS3KA3AhA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pljH4mRZhaIosXef+zzUdjXC2LaBSHD20eiATYnOl3Y=;
 b=KNzSvf9bHZIutt2F8EUPPORaretEU0UhGOs4X7rlw26MOV2crnFo9hHnVLp+/9rlcPxxpi9BlflzTXBKdivv4sGwX0KyWxn7mguGuzNNp6DoffolU6IE+fGwx4jjJOeEB1XmIQK7QuDkrZHgFa2rXcbdGpFVCPrvStZ1gsZUXuNdYCcw0HAsjqbGQGIeGUfA1FDJ8ayg4kK1vSBvTxiz9LcUrpWyx8DXVqmeIGvFTecsAJFBtk0/dizyPEMPmR88PCKpCBfk+n9VGLDS4BvPKQscwiNCUkxgRCuChWUxaYLrVpZwUVqlgtRO+xNnB8QWl1j++oOJNCv+1u8uGdFktA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bbbd4c8b-e681-f785-b85c-474b380d6160@suse.com>
Date: Tue, 9 May 2023 16:28:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/6] x86/cpu-policy: Drop build time cross-checks of
 featureset sizes
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-2-andrew.cooper3@citrix.com>
 <6531df09-7f7f-a1e2-5993-bd2a14e22421@suse.com>
 <18090224-6838-8ed4-6be5-ae10a334e277@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <18090224-6838-8ed4-6be5-ae10a334e277@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0230.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b2::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com

On 09.05.2023 15:04, Andrew Cooper wrote:
> On 08/05/2023 7:47 am, Jan Beulich wrote:
>> On 04.05.2023 21:39, Andrew Cooper wrote:
>>> These BUILD_BUG_ON()s exist to cover the curious absence of a diagnostic for
>>> code which looks like:
>>>
>>>   uint32_t foo[1] = { 1, 2, 3 };
>>>
>>> However, GCC 12 at least does now warn for this:
>>>
>>>   foo.c:1:24: error: excess elements in array initializer [-Werror]
>>>     884 | uint32_t foo[1] = { 1, 2, 3 };
>>>         |                        ^
>>>   foo.c:1:24: note: (near initialization for 'foo')
>> I'm pretty sure all gcc versions we support diagnose such cases. In turn
>> the arrays in question don't have explicit dimensions at their
>> definition sites, and hence they derive their dimensions from their
>> initializers. So the build-time-checks are about the arrays in fact
>> obtaining the right dimensions, i.e. the initializers being suitable.
>>
>> With the core part of the reasoning not being applicable, I'm afraid I
>> can't even say "okay with an adjusted description".
> 
> Now I'm extra confused.
> 
> I put those BUILD_BUG_ON()'s in because I was not getting a diagnostic
> when I was expecting one, and there was a bug in the original featureset
> work caused by this going wrong.
> 
> But godbolt seems to agree that even GCC 4.1 notices.
> 
> Maybe it was some other error (C file not seeing the header properly?)
> which disappeared across the upstream review?

Or maybe, by mistake, too few initializer fields? But what exactly it
was probably doesn't matter. If this patch is to stay (see below), some
different description will be needed anyway (or the change be folded
into the one actually invalidating those BUILD_BUG_ON()s).
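
For the record, the distinction being discussed can be shown with a minimal standalone snippet (names made up, not from the patch): with an explicit dimension GCC diagnoses excess initializers, while an array without one silently derives its dimension from the initializer, which is exactly why a build-time size check is the only guard in the latter case.

```c
#include <stdint.h>

/* With an explicit dimension, excess initializers are diagnosed:
 *   uint32_t fixed[3] = { 1, 2, 3, 4 };   // error: excess elements
 * Without one, the array takes its dimension from the initializer,
 * so a mismatch can only be caught by a build-time size check. */
uint32_t fixed[3]  = { 1, 2, 3 };
uint32_t derived[] = { 1, 2, 3 };

/* Poor man's BUILD_BUG_ON(): compilation fails (negative array size)
 * if 'derived' does not end up with the expected element count. */
typedef char derived_size_check[
    (sizeof(derived) / sizeof(derived[0]) == 3) ? 1 : -1];
```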

> Either way, these aren't appropriate, and need deleting before patch 5,
> because the check is no longer valid when a featureset can be longer
> than the autogen length.

Well, they need deleting if we stick to the approach chosen there right
now. If we switched to my proposed alternative, they better would stay.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 09 14:30:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 14:30:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532251.828351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwOMU-00035g-LP; Tue, 09 May 2023 14:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532251.828351; Tue, 09 May 2023 14:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwOMU-00035Z-Hu; Tue, 09 May 2023 14:30:42 +0000
Received: by outflank-mailman (input) for mailman id 532251;
 Tue, 09 May 2023 14:30:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PO4B=A6=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pwOMS-00035T-OC
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 14:30:40 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20608.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 117723fb-ee76-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 16:30:38 +0200 (CEST)
Received: from BN9PR03CA0965.namprd03.prod.outlook.com (2603:10b6:408:109::10)
 by CH2PR12MB4874.namprd12.prod.outlook.com (2603:10b6:610:64::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Tue, 9 May
 2023 14:30:35 +0000
Received: from BN8NAM11FT003.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:109:cafe::86) by BN9PR03CA0965.outlook.office365.com
 (2603:10b6:408:109::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33 via Frontend
 Transport; Tue, 9 May 2023 14:30:35 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT003.mail.protection.outlook.com (10.13.177.90) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6363.33 via Frontend Transport; Tue, 9 May 2023 14:30:34 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 9 May
 2023 09:30:34 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 9 May 2023 09:30:32 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 117723fb-ee76-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <8c96d009-bd81-66c7-fd23-1f11bc07e72f@amd.com>
Date: Tue, 9 May 2023 16:30:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN][PATCH v6 15/19] xen/arm: Implement device tree node removal
 functionalities
Content-Language: en-US
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-16-vikram.garhwal@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230502233650.20121-16-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-OriginatorOrg: amd.com



On 03/05/2023 01:36, Vikram Garhwal wrote:
> Introduce sysctl XEN_SYSCTL_dt_overlay to remove device-tree nodes added using
> device tree overlay.
> 
> xl dt-overlay remove file.dtbo:
>     Removes all the nodes in a given dtbo.
>     First, removes IRQ permissions and MMIO accesses. Next, it finds the nodes
>     in dt_host and delete the device node entries from dt_host.
> 
>     The nodes get removed only if it is not used by any of dom0 or domio.
> 
> Also, added overlay_track struct to keep the track of added node through device
> tree overlay. overlay_track has dt_host_new which is unflattened form of updated
> fdt and name of overlay nodes. When a node is removed, we also free the memory
> used by overlay_track for the particular overlay node.
> 
> Nested overlay removal is supported in sequential manner only i.e. if
> overlay_child nests under overlay_parent, it is assumed that user first removes
> overlay_child and then removes overlay_parent.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/arch/arm/sysctl.c        |  16 +-
>  xen/common/Makefile          |   1 +
>  xen/common/dt-overlay.c      | 419 +++++++++++++++++++++++++++++++++++
>  xen/include/public/sysctl.h  |  23 ++
>  xen/include/xen/dt-overlay.h |  58 +++++
>  5 files changed, 516 insertions(+), 1 deletion(-)
>  create mode 100644 xen/common/dt-overlay.c
>  create mode 100644 xen/include/xen/dt-overlay.h
> 
> diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
> index b0a78a8b10..456358166c 100644
> --- a/xen/arch/arm/sysctl.c
> +++ b/xen/arch/arm/sysctl.c
> @@ -9,6 +9,7 @@
>  
>  #include <xen/types.h>
>  #include <xen/lib.h>
> +#include <xen/dt-overlay.h>
>  #include <xen/errno.h>
>  #include <xen/hypercall.h>
>  #include <public/sysctl.h>
> @@ -21,7 +22,20 @@ void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
>  long arch_do_sysctl(struct xen_sysctl *sysctl,
>                      XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>  {
> -    return -ENOSYS;
> +    long ret = 0;
> +
> +    switch ( sysctl->cmd )
> +    {
> +    case XEN_SYSCTL_dt_overlay:
> +        ret = dt_sysctl(&sysctl->u.dt_overlay);
> +        break;
> +
> +    default:
> +        ret = -ENOSYS;
> +        break;
> +    }
> +
> +    return ret;
>  }
>  
>  /*
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 46049eac35..e7e96b1087 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -8,6 +8,7 @@ obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
>  obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
>  obj-$(CONFIG_IOREQ_SERVER) += dm.o
>  obj-y += domain.o
> +obj-$(CONFIG_OVERLAY_DTB) += dt-overlay.o
>  obj-y += event_2l.o
>  obj-y += event_channel.o
>  obj-y += event_fifo.o
> diff --git a/xen/common/dt-overlay.c b/xen/common/dt-overlay.c
> new file mode 100644
> index 0000000000..b89cceab84
> --- /dev/null
> +++ b/xen/common/dt-overlay.c
> @@ -0,0 +1,419 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
GPL-2.0-only according to the latest series from Andrew (GPL-2.0 is a deprecated tag)

[...]

> +
> +    /* Remove mmio access. */
> +    for ( i = 0; i < naddr; i++ )
> +    {
> +        uint64_t addr, size;
> +
> +        rc = dt_device_get_address(device_node, i, &addr, &size);
Given that Ayan's 32-bit series might be merged first, this will have to be changed (to use paddr_t for addr,size, etc.)

> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> +                   i, dt_node_full_name(device_node));
> +            return rc;
> +        }
> +
> +        rc = iomem_deny_access(d, paddr_to_pfn(addr),
> +                               paddr_to_pfn(PAGE_ALIGN(addr + size - 1)));
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Unable to remove dom%d access to"
> +                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
> +                   d->domain_id,
> +                   addr & PAGE_MASK, PAGE_ALIGN(addr + size) - 1);
> +            return rc;
> +        }
> +
> +    }
> +
> +    return rc;
> +}
> +
> +/* Removes all descendants of the given node. */
> +static int remove_all_descendant_nodes(struct dt_device_node *device_node)
> +{
> +    int rc = 0;
> +    struct dt_device_node *child_node;
> +
> +    for ( child_node = device_node->child; child_node != NULL;
> +         child_node = child_node->sibling )
> +    {
> +        if ( child_node->child )
> +            remove_all_descendant_nodes(child_node);
> +
> +        rc = handle_remove_irq_iommu(child_node);
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    return rc;
> +}
> +
> +/* Remove nodes from dt_host. */
> +static int remove_nodes(const struct overlay_track *tracker)
> +{
> +    int rc = 0;
> +    struct dt_device_node *overlay_node;
> +    unsigned int j;
> +
> +    for ( j = 0; j < tracker->num_nodes; j++ )
> +    {
> +        overlay_node = (struct dt_device_node *)tracker->nodes_address[j];
> +        if ( overlay_node == NULL )
> +        {
> +            printk(XENLOG_ERR "Device %s is not present in the tree. Removing nodes failed\n",
> +                   overlay_node->full_name);
> +            return -EINVAL;
> +        }
> +
> +        rc = remove_all_descendant_nodes(overlay_node);
> +
> +        /* All children nodes are unmapped. Now remove the node itself. */
> +        rc = handle_remove_irq_iommu(overlay_node);
> +        if ( rc )
> +            return rc;
> +
> +        read_lock(&dt_host->lock);
> +
> +        rc = dt_overlay_remove_node(overlay_node);
> +        if ( rc )
> +        {
> +            read_unlock(&dt_host->lock);
> +
> +            return rc;
> +        }
> +
> +        read_unlock(&dt_host->lock);
> +    }
> +
> +    return rc;
> +}
> +
> +/*
> + * First finds the device node to remove. Check if the device is being used by
> + * any dom and finally remove it from dt_host. IOMMU is already being taken care
> + * while destroying the domain.
> + */
> +static long handle_remove_overlay_nodes(void *overlay_fdt,
> +                                        uint32_t overlay_fdt_size)
> +{
> +    int rc = 0;
As always, please do not initialize variables when it is not required (this applies to all the patches).

> +    struct overlay_track *entry, *temp, *track;
> +    bool found_entry = false;
> +
> +    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
> +    if ( rc )
> +        return rc;
> +
> +    if ( overlay_node_count(overlay_fdt) == 0 )
> +        return -EINVAL;
> +
> +    spin_lock(&overlay_lock);
> +
> +    /*
> +     * First check if dtbo is correct i.e. it should one of the dtbo which was
> +     * used when dynamically adding the node.
> +     * Limitation: Cases with same node names but different property are not
> +     * supported currently. We are relying on user to provide the same dtbo
> +     * as it was used when adding the nodes.
> +     */
> +    list_for_each_entry_safe( entry, temp, &overlay_tracker, entry )
> +    {
> +        if ( memcmp(entry->overlay_fdt, overlay_fdt, overlay_fdt_size) == 0 )
> +        {
> +            track = entry;
> +            found_entry = true;
> +            break;
> +        }
> +    }
> +
> +    if ( found_entry == false )
> +    {
> +        rc = -EINVAL;
> +
> +        printk(XENLOG_ERR "Cannot find any matching tracker with input dtbo."
> +               " Removing nodes is supported for only prior added dtbo. Please"
> +               " provide a valid dtbo which was used to add the nodes.\n");
This will be quite a large single line. Maybe split it?

> +        goto out;
> +
> +    }
> +
> +    rc = remove_nodes(entry);
> +
Remove this blank line so that the check directly follows the assignment.

> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Removing node failed\n");
> +        goto out;
> +    }
> +
> +    list_del(&entry->entry);
> +
> +    xfree(entry->dt_host_new);
> +    xfree(entry->fdt);
> +    xfree(entry->overlay_fdt);
> +
> +    xfree(entry->nodes_address);
> +
> +    xfree(entry);
> +
> +out:
> +    spin_unlock(&overlay_lock);
> +    return rc;
> +}
> +
> +long dt_sysctl(struct xen_sysctl_dt_overlay *op)
> +{
> +    long ret;
> +    void *overlay_fdt;
> +
> +    if ( op->overlay_fdt_size == 0 || op->overlay_fdt_size > KB(500) )
> +        return -EINVAL;
> +
> +    overlay_fdt = xmalloc_bytes(op->overlay_fdt_size);
> +
> +    if ( overlay_fdt == NULL )
> +        return -ENOMEM;
> +
> +    ret = copy_from_guest(overlay_fdt, op->overlay_fdt, op->overlay_fdt_size);
> +    if ( ret )
> +    {
> +        gprintk(XENLOG_ERR, "copy from guest failed\n");
> +        xfree(overlay_fdt);
> +
> +        return -EFAULT;
> +    }
> +
> +    switch ( op->overlay_op )
> +    {
> +    case XEN_SYSCTL_DT_OVERLAY_REMOVE:
> +        ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
> +        xfree(overlay_fdt);
this xfree and ...
> +
> +        break;
> +
> +    default:
> +        xfree(overlay_fdt);
this one can be removed by adding a single xfree before the final return
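i.e. roughly the following shape (a self-contained sketch, with stub stand-ins for the Xen helpers so it compiles on its own; not the actual patch):

```c
#include <errno.h>
#include <stdlib.h>

/* Stand-ins for the Xen interfaces, so the shape builds standalone. */
#define XEN_SYSCTL_DT_OVERLAY_REMOVE 2
static void *xmalloc_bytes(size_t n) { return malloc(n); }
static void xfree(void *p) { free(p); }
static long handle_remove_overlay_nodes(void *fdt, unsigned int size)
{ (void)fdt; (void)size; return 0; }

/* Suggested shape: a single xfree() after the switch covers all paths. */
static long dt_sysctl_shape(unsigned char op_code, unsigned int fdt_size)
{
    long ret = 0;
    void *overlay_fdt = xmalloc_bytes(fdt_size);

    if ( overlay_fdt == NULL )
        return -ENOMEM;

    switch ( op_code )
    {
    case XEN_SYSCTL_DT_OVERLAY_REMOVE:
        ret = handle_remove_overlay_nodes(overlay_fdt, fdt_size);
        break;

    default:
        break; /* 'ret' unchanged, as in the original */
    }

    xfree(overlay_fdt); /* one free instead of one per case */

    return ret;
}
```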

> +        break;
> +    }
> +
> +    return ret;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index 2b24d6bfd0..28f7fba98b 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -1057,6 +1057,24 @@ typedef struct xen_sysctl_cpu_policy xen_sysctl_cpu_policy_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpu_policy_t);
>  #endif
>  
> +#if defined(__arm__) || defined (__aarch64__)
> +/*
> + * XEN_SYSCTL_dt_overlay
> + * Performs addition/removal of device tree nodes under parent node using dtbo.
> + * This does in three steps:
> + *  - Adds/Removes the nodes from dt_host.
> + *  - Adds/Removes IRQ permission for the nodes.
> + *  - Adds/Removes MMIO accesses.
> + */
> +struct xen_sysctl_dt_overlay {
> +    XEN_GUEST_HANDLE_64(void) overlay_fdt;  /* IN: overlay fdt. */
> +    uint32_t overlay_fdt_size;              /* IN: Overlay dtb size. */
> +#define XEN_SYSCTL_DT_OVERLAY_ADD                   1
> +#define XEN_SYSCTL_DT_OVERLAY_REMOVE                2
> +    uint8_t overlay_op;                     /* IN: Add or remove. */
> +};
> +#endif
> +
>  struct xen_sysctl {
>      uint32_t cmd;
>  #define XEN_SYSCTL_readconsole                    1
> @@ -1087,6 +1105,7 @@ struct xen_sysctl {
>  #define XEN_SYSCTL_livepatch_op                  27
>  /* #define XEN_SYSCTL_set_parameter              28 */
>  #define XEN_SYSCTL_get_cpu_policy                29
> +#define XEN_SYSCTL_dt_overlay                    30
>      uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
>      union {
>          struct xen_sysctl_readconsole       readconsole;
> @@ -1117,6 +1136,10 @@ struct xen_sysctl {
>  #if defined(__i386__) || defined(__x86_64__)
>          struct xen_sysctl_cpu_policy        cpu_policy;
>  #endif
> +
> +#if defined(__arm__) || defined (__aarch64__)
> +        struct xen_sysctl_dt_overlay        dt_overlay;
> +#endif
>          uint8_t                             pad[128];
>      } u;
>  };
> diff --git a/xen/include/xen/dt-overlay.h b/xen/include/xen/dt-overlay.h
> new file mode 100644
> index 0000000000..5b369f8eb7
> --- /dev/null
> +++ b/xen/include/xen/dt-overlay.h
> @@ -0,0 +1,58 @@
> + /* SPDX-License-Identifier: GPL-2.0 */
GPL-2.0-only according to the latest series from Andrew (GPL-2.0 is a deprecated tag)

> + /*
> + * xen/dt-overlay.h
> + *
> + * Device tree overlay support in Xen.
> + *
> + * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
> + * Written by Vikram Garhwal <vikram.garhwal@amd.com>
> + *
> + */
> +#ifndef __XEN_DT_OVERLAY_H__
> +#define __XEN_DT_OVERLAY_H__
> +
> +#include <xen/list.h>
> +#include <xen/libfdt/libfdt.h>
> +#include <xen/device_tree.h>
> +#include <xen/rangeset.h>
> +
> +/*
> + * overlay_node_track describes information about added nodes through dtbo.
The structure is called overlay_track.

> + * @entry: List pointer.
> + * @dt_host_new: Pointer to the updated dt_host_new unflattened 'updated fdt'.
This reads strangely, would you mind fixing it?

> + * @fdt: Stores the fdt.
From here ...

> + * @nodes_fullname: Stores the full name of nodes.
> + * @nodes_irq: Stores the IRQ added from overlay dtb.
> + * @node_num_irq: Stores num of IRQ for each node in overlay dtb.
... to here, these params do not reflect the struct members. Please fix. Also, you can omit the word "Stores"
and just write e.g. "total number of ...".
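Something along these lines, perhaps (my wording; members copied from the actual struct, with stubs so it builds standalone):

```c
/* Stubs so the sketch compiles outside the Xen tree. */
struct list_head { struct list_head *next, *prev; };
struct dt_device_node;

/*
 * struct overlay_track - tracks nodes added through a device tree overlay.
 * @entry: List pointer.
 * @dt_host_new: Unflattened device tree of the updated fdt.
 * @fdt: Updated host fdt.
 * @overlay_fdt: Copy of the overlay dtbo used when adding the nodes.
 * @nodes_address: Addresses of the top-level nodes added by the overlay.
 * @num_nodes: Total number of nodes in the overlay dtb.
 */
struct overlay_track {
    struct list_head entry;
    struct dt_device_node *dt_host_new;
    void *fdt;
    void *overlay_fdt;
    unsigned long *nodes_address;
    unsigned int num_nodes;
};
```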

> + * @num_nodes: Stores total number of nodes in overlay dtb.
> + */
> +struct overlay_track {
> +    struct list_head entry;
> +    struct dt_device_node *dt_host_new;
> +    void *fdt;
> +    void *overlay_fdt;
> +    unsigned long *nodes_address;
> +    unsigned int num_nodes;
> +};
> +
> +struct xen_sysctl_dt_overlay;
> +
> +#ifdef CONFIG_OVERLAY_DTB
> +long dt_sysctl(struct xen_sysctl_dt_overlay *op);
> +#else
> +static inline long dt_sysctl(struct xen_sysctl_dt_overlay *op)
> +{
> +    return -ENOSYS;
xen/errno.h is not included, which results in a compilation error on my side.

> +}
> +#endif
add empty line here

> +#endif
Add a /* __XEN_DT_OVERLAY_H__ */ comment next to the last #endif.

> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */

~Michal


From xen-devel-bounces@lists.xenproject.org Tue May 09 14:38:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 14:38:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532258.828360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwOUF-0003oY-HA; Tue, 09 May 2023 14:38:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532258.828360; Tue, 09 May 2023 14:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwOUF-0003oR-Ea; Tue, 09 May 2023 14:38:43 +0000
Received: by outflank-mailman (input) for mailman id 532258;
 Tue, 09 May 2023 14:38:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S+Ht=A6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwOUD-0003oL-U0
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 14:38:41 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20619.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::619])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2fac8634-ee77-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 16:38:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8629.eurprd04.prod.outlook.com (2603:10a6:10:2dc::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Tue, 9 May
 2023 14:38:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 14:38:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fac8634-ee77-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <da4592b3-b5f2-50d8-b474-9b2340b5bb81@suse.com>
Date: Tue, 9 May 2023 16:38:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v6 2/4] xen/riscv: introduce setup_initial_pages
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1683131359.git.oleksii.kurochko@gmail.com>
 <d1a6fb6112b61000645eb1a4ab9ade8a208d4204.1683131359.git.oleksii.kurochko@gmail.com>
 <0533b045-f4cb-0834-ae88-9229bd816cf2@suse.com>
 <db6fe5a3db067ae3429d4b83766508233dfc9ca8.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <db6fe5a3db067ae3429d4b83766508233dfc9ca8.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0188.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8629:EE_
X-MS-Office365-Filtering-Correlation-Id: fc71fba0-48b7-47c9-4ff0-08db509b1322
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fc71fba0-48b7-47c9-4ff0-08db509b1322
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 14:38:37.5619
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mNgRRtNOqBn6yKh9Caio6Cn/QuoeyGTnMcCkPpOPj/fXoJ3nuH24uPNH0491FRP7jogvAIR3rLMKlg1HHqLwag==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8629

On 09.05.2023 14:59, Oleksii wrote:
> On Mon, 2023-05-08 at 10:58 +0200, Jan Beulich wrote:
>> On 03.05.2023 18:31, Oleksii Kurochko wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/page.h
>>> @@ -0,0 +1,62 @@
>>> +#ifndef _ASM_RISCV_PAGE_H
>>> +#define _ASM_RISCV_PAGE_H
>>> +
>>> +#include <xen/const.h>
>>> +#include <xen/types.h>
>>> +
>>> +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
>>> +
>>> +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
>>> +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
>>> +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
>>> +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
>>> +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
>>> +
>>> +#define PTE_VALID                   BIT(0, UL)
>>> +#define PTE_READABLE                BIT(1, UL)
>>> +#define PTE_WRITABLE                BIT(2, UL)
>>> +#define PTE_EXECUTABLE              BIT(3, UL)
>>> +#define PTE_USER                    BIT(4, UL)
>>> +#define PTE_GLOBAL                  BIT(5, UL)
>>> +#define PTE_ACCESSED                BIT(6, UL)
>>> +#define PTE_DIRTY                   BIT(7, UL)
>>> +#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
>>> +
>>> +#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
>>> +#define PTE_TABLE                   (PTE_VALID)
>>> +
>>> +/* Calculate the offsets into the pagetables for a given VA */
>>> +#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
>>> +
>>> +#define pt_index(lvl, va) pt_linear_offset(lvl, (va) & XEN_PT_LEVEL_MASK(lvl))
>>
>> Maybe better
>>
>> #define pt_index(lvl, va) (pt_linear_offset(lvl, va) & VPN_MASK)
>>
>> as the involved constant will be easier to use for the compiler?
> But VPN_MASK should be shifted by the level shift value.

Why? pt_linear_offset() already does the necessary shifting.

>>> +    csr_write(CSR_SATP, 0);
>>> +
>>> +    /* Clean MMU root page table */
>>> +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
>>> +
>>> +    asm volatile ( "sfence.vma" );
>>
>> Doesn't this want to move between the SATP write and the clearing of
>> the
>> root table slot? Also here and elsewhere - don't these asm()s need
>> memory
>> clobbers? And anyway - could I talk you into introducing an inline
>> wrapper
>> (e.g. named sfence_vma()) so all uses end up consistent?
> I think the clearing of the root page table should be done before
> "sfence.vma": we have to first clear the slot of the MMU's root page
> table and then make the updated root page table visible to all (by use
> of the sfence instruction).

I disagree. The SATP write has removed the connection of the CPU
to the page tables. That's the action you want to fence, not the
altering of some (then) no longer referenced data structure.

>>> +void __init setup_initial_pagetables(void)
>>> +{
>>> +    struct mmu_desc mmu_desc = { 0, 0, NULL, NULL };
>>> +
>>> +    /*
>>> +     * Access to _stard, _end is always PC-relative
>>
>> Nit: Typo-ed symbol name. Also ...
>>
>>> +     * thereby when access them we will get load adresses
>>> +     * of start and end of Xen
>>> +     * To get linker addresses LOAD_TO_LINK() is required
>>> +     * to use
>>> +     */
>>
>> see the earlier line wrapping remark again. Finally in multi-sentence
>> comments full stops are required.
> Full stops mean '.' at the end of sentences?

Yes. Please see ./CODING_STYLE.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 09 14:42:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 14:42:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532262.828370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwOXM-0005Dx-12; Tue, 09 May 2023 14:41:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532262.828370; Tue, 09 May 2023 14:41:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwOXL-0005Dq-UK; Tue, 09 May 2023 14:41:55 +0000
Received: by outflank-mailman (input) for mailman id 532262;
 Tue, 09 May 2023 14:41:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S+Ht=A6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwOXK-0005Dj-Kf
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 14:41:54 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a39f8a35-ee77-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 16:41:53 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8423.eurprd04.prod.outlook.com (2603:10a6:20b:3e3::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Tue, 9 May
 2023 14:41:51 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 14:41:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a39f8a35-ee77-11ed-b229-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b8e8132e-0cd7-8d1e-308a-afb1963d6b2a@suse.com>
Date: Tue, 9 May 2023 16:41:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 3/3] x86: Use CpuidUserDis if an AMD HVM guest toggles
 CPUID faulting
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
 <20230505175705.18098-4-alejandro.vallejo@cloud.com>
 <d8c9728b-b615-7229-2e76-106d81802a20@suse.com>
 <d0794e7d-604e-044f-000e-3a0bde126425@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d0794e7d-604e-044f-000e-3a0bde126425@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0125.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8423:EE_
X-MS-Office365-Filtering-Correlation-Id: e191654c-04f7-46bf-bcac-08db509b867f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e191654c-04f7-46bf-bcac-08db509b867f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 14:41:51.1508
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3nw74ovqB3zwZ6yEAFqcybWjyhhbNmI+tirszdC5MJk9qJ1M7Bw41p4yQhK1Htca30hj/hr9izVtgfrk/zRqSw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8423

On 09.05.2023 12:05, Andrew Cooper wrote:
> On 08/05/2023 2:18 pm, Jan Beulich wrote:
>> On 05.05.2023 19:57, Alejandro Vallejo wrote:
>>> This is in order to aid guests of AMD hardware that we have exposed
>>> CPUID faulting to. If they try to modify the Intel MSR that enables
>>> the feature, trigger levelling so AMD's version of it (CpuidUserDis)
>>> is used instead.
>>>
>>> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
>>> ---
>>>  xen/arch/x86/msr.c | 9 ++++++++-
>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>> Don't you also need to update cpu-policy.c:calculate_host_policy()
>> for the guest to actually know it can use the functionality? Which
>> in turn would appear to require some form of adjustment to
>> lib/x86/policy.c:x86_cpu_policies_are_compatible().
> 
> I asked Alejandro to do it like this.
> 
> Advertising this to guests requires plumbing another MSR into the
> infrastructure which isn't quite set up properly yet, and is in flux
> from my work.
> 
> For now, this just lets Xen enforce the policy over PV guests, which is
> an improvement in and of itself.

But as per the title this patch is about HVM guests (aiui the PV aspect
is taken care of already without the patch here). In any event - if the
omissions are intentional (for the time being), then I think that wants
mentioning in the description.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 09 14:52:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 14:52:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532272.828381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwOha-0006jG-Vd; Tue, 09 May 2023 14:52:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532272.828381; Tue, 09 May 2023 14:52:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwOha-0006j9-SN; Tue, 09 May 2023 14:52:30 +0000
Received: by outflank-mailman (input) for mailman id 532272;
 Tue, 09 May 2023 14:52:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vSxc=A6=citrix.com=prvs=486b9cf0a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pwOha-0006j3-5E
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 14:52:30 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c926c8d-ee79-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 16:52:27 +0200 (CEST)
Received: from mail-bn1nam02lp2042.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.42])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 May 2023 10:52:19 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5017.namprd03.prod.outlook.com (2603:10b6:5:1ee::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Tue, 9 May
 2023 14:52:17 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 14:52:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c926c8d-ee79-11ed-8611-37d641c3527e
X-IronPort-RemoteIP: 104.47.51.42
X-IronPort-MID: 107735470
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <728a0931-30d0-a4b5-4963-46b0fdf85cff@citrix.com>
Date: Tue, 9 May 2023 15:52:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [XEN][PATCH v6 15/19] xen/arm: Implement device tree node removal
 functionalities
Content-Language: en-GB
To: Michal Orzel <michal.orzel@amd.com>,
 Vikram Garhwal <vikram.garhwal@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-16-vikram.garhwal@amd.com>
 <8c96d009-bd81-66c7-fd23-1f11bc07e72f@amd.com>
In-Reply-To: <8c96d009-bd81-66c7-fd23-1f11bc07e72f@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 09/05/2023 3:30 pm, Michal Orzel wrote:
> On 03/05/2023 01:36, Vikram Garhwal wrote:
>> diff --git a/xen/include/xen/dt-overlay.h b/xen/include/xen/dt-overlay.h
>> new file mode 100644
>> index 0000000000..5b369f8eb7
>> --- /dev/null
>> +++ b/xen/include/xen/dt-overlay.h
>> @@ -0,0 +1,58 @@
>> + /* SPDX-License-Identifier: GPL-2.0 */
> GPL-2.0-only according to the latest series from Andrew (GPL-2.0 is a deprecated tag)

Well, or "-or-later" at your choosing/as applicable, but one of the
explicitly suffixed forms please.
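
For a new file that would be, e.g. (a fragment; whether "-only" or "-or-later"
applies is the contributor's choice):

```c
/* SPDX-License-Identifier: GPL-2.0-only */
```

(or `GPL-2.0-or-later` for the other explicitly suffixed form).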

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 09 14:57:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 14:57:31 +0000
Message-ID: <645a5f50.df0a0220.9d880.f133@mx.google.com>
Date: Tue, 9 May 2023 15:57:17 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 3/3] x86: Use CpuidUserDis if an AMD HVM guest toggles
 CPUID faulting
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
 <20230505175705.18098-4-alejandro.vallejo@cloud.com>
 <d8c9728b-b615-7229-2e76-106d81802a20@suse.com>
 <d0794e7d-604e-044f-000e-3a0bde126425@citrix.com>
 <b8e8132e-0cd7-8d1e-308a-afb1963d6b2a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <b8e8132e-0cd7-8d1e-308a-afb1963d6b2a@suse.com>

On Tue, May 09, 2023 at 04:41:49PM +0200, Jan Beulich wrote:
> > I asked Alejandro to do it like this.
> > 
> > Advertising this to guests requires plumbing another MSR into the
> > infrastructure which isn't quite set up properly yet, and is in flux
> > from my work.
> > 
> > For now, this just lets Xen enforce the policy over PV guests, which is
> > an improvement in and of itself.
> 
> But as per the title this patch is about HVM guests (aiui the PV aspect
> is taken care of already without the patch here). In any event - if the
> omissions are intentional (for the time being), then I think that wants
> mentioning in the description.
> 
> Jan

HVM guests are always exposed the Intel interface (emulated if not natively
available). The HVM max policy forces it on, and I don't see anything in
the default policy overriding it. My attempt here was to let AMD guests use
the emulated Intel MSR and trigger levelling that would itself rely on
AMD's CpuidUserDis without guest intervention. That said, several cans of
worms exist in maintaining this internal routing. I'll get rid of that last
patch and leave HVM guests alone for the time being. They are functionally
correct (albeit their CPUID invocations take 2 faults, whereas 1 would suffice).

Alejandro


From xen-devel-bounces@lists.xenproject.org Tue May 09 15:06:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 15:06:16 +0000
Date: Tue, 9 May 2023 16:05:45 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
CC: <xen-devel@lists.xen.org>, Juergen Gross <jgross@suse.com>, Julien Grall
	<julien@xen.org>, Vincent Guittot <vincent.guittot@linaro.org>,
	<stratos-dev@op-lists.linaro.org>, Alex Bennée
	<alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, Erik Schilling
	<erik.schilling@linaro.org>
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
Message-ID: <e6b10450-50c2-468c-88ba-36e0274b5970@perard>
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
 <92c7f972-f617-40fc-bc5d-582c8154d03c@perard>
 <20230505093835.jcbwo6zjk5hcjvsm@vireshk-i7>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230505093835.jcbwo6zjk5hcjvsm@vireshk-i7>

On Fri, May 05, 2023 at 03:08:35PM +0530, Viresh Kumar wrote:
> Hi Anthony,
> 
> On 02-05-23, 15:44, Anthony PERARD wrote:
> > > diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c
> > > index faada49e184e..e1f15344ef97 100644
> > > --- a/tools/libs/light/libxl_virtio.c
> > > +++ b/tools/libs/light/libxl_virtio.c
> > > @@ -48,11 +48,13 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid,
> > >      flexarray_append_pair(back, "base", GCSPRINTF("%#"PRIx64, virtio->base));
> > >      flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type));
> > >      flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport));
> > > +    flexarray_append_pair(back, "forced_grant", GCSPRINTF("%u", virtio->forced_grant));
> > >  
> > >      flexarray_append_pair(front, "irq", GCSPRINTF("%u", virtio->irq));
> > >      flexarray_append_pair(front, "base", GCSPRINTF("%#"PRIx64, virtio->base));
> > >      flexarray_append_pair(front, "type", GCSPRINTF("%s", virtio->type));
> > >      flexarray_append_pair(front, "transport", GCSPRINTF("%s", transport));
> > > +    flexarray_append_pair(front, "forced_grant", GCSPRINTF("%u", virtio->forced_grant));
> > 
> > This "forced_grant" feels weird to me in the protocol, I feel like this
> > use of grant or not could be handled by the backend. For example in
> > "blkif" protocol, there's plenty of "feature-*" which allows both
> > front-end and back-end to advertise which feature they can or want to
> > use.
> > But maybe the fact that the device tree needs to be modified to be able
> > to accommodate grant mapping means that libxl needs to ask the backend to
> > use grant or not, and the frontend needs to know if it needs to use
> > grant.
> 
> I am not sure if I fully understand what you are suggesting here.

I guess the way virtio devices are implemented in libxl suggests to me
that they are just Xen PV devices. So I guess some documentation in the
tree would be useful, maybe some comments in libxl_virtio.c.

> The eventual frontend drivers (like drivers/i2c/busses/i2c-virtio.c)
> aren't Xen aware and the respective virtio protocol doesn't talk about
> how memory is mapped for the guest. The guest kernel allows both
> memory mapping models and the decision is made based on the presence
> or absence of the iommu node in the DT.
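
If I read the Linux "xen,grant-dma" binding correctly, the DT decision
described above looks roughly like this (a sketch; the node names, addresses,
and specifier value are illustrative):

```dts
/* A pseudo-IOMMU node tells the guest to use grant mappings for DMA. */
xen_iommu: xen-iommu {
    compatible = "xen,grant-dma";
    #iommu-cells = <1>;
};

virtio@2000000 {
    compatible = "virtio,mmio";
    reg = <0x0 0x2000000 0x0 0x200>;
    /* The single cell encodes the backend domid (0 for Dom0).  Omitting
     * the iommus property makes the guest fall back to plain memory
     * (foreign) mappings. */
    iommus = <&xen_iommu 0>;
};
```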

So, virtio frontends don't know about xenstore? In that case, there's
no need to have all those nodes in xenstore under the frontend path.

I guess the nodes for the backends are at least somewhat useful for
libxl to reload the configuration of the virtio device. But even that
probably isn't useful if we can't hot-plug or hot-unplug virtio devices.

Are the xenstore nodes for the backend actually being used by a virtio
backend?

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 09 15:35:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 15:35:07 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen developer discussion <xen-devel@lists.xenproject.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, Juergen Gross
	<jgross@suse.com>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Samuel Holland <samuel@sholland.org>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, Marc Zyngier <maz@kernel.org>, Jane Malalane
	<jane.malalane@citrix.com>, David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [PATCH] xen/evtchn: Introduce new IOCTL to bind static evtchn
Thread-Topic: [PATCH] xen/evtchn: Introduce new IOCTL to bind static evtchn
Thread-Index: AQHZec44jeWWxzG+E02fa4Z9QZ57Yq9MdoEAgAWtTIA=
Date: Tue, 9 May 2023 15:34:24 +0000
Message-ID: <97F690E0-B78C-4B00-9079-84B202F39674@arm.com>
References:
 <48d30a439e37f6917b9a667289792c2b3f548d6d.1682685294.git.rahul.singh@arm.com>
 <alpine.DEB.2.22.394.2305051750070.974517@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2305051750070.974517@ubuntu-linux-20-04-desktop>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7158:EE_|AS8PR08MB9120:EE_|DBAEUR03FT055:EE_|AS8PR08MB8635:EE_
X-MS-Office365-Filtering-Correlation-Id: 4f0094b6-9067-4d73-a34c-08db50a2e379
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: multipart/alternative;
	boundary="_000_97F690E0B78C4B00907984B202F39674armcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9120
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4576f4f5-fa7c-410f-c34d-08db50a2ddf6
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 15:34:33.3912
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4f0094b6-9067-4d73-a34c-08db50a2e379
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8635

--_000_97F690E0B78C4B00907984B202F39674armcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Hi Stefano,

Thanks for the review.

On 6 May 2023, at 1:52 am, Stefano Stabellini <sstabellini@kernel.org> wrote:

> On Fri, 28 Apr 2023, Rahul Singh wrote:
>> Xen 4.17 supports the creation of static evtchns. To allow a user-space
>> application to bind static evtchns, introduce the new ioctl
>> "IOCTL_EVTCHN_BIND_STATIC". The existing IOCTL does more than binding,
>> which is why we need a new IOCTL that only binds static event channels.
>>
>> Also, static evtchns are meant to be available for use during the
>> lifetime of the guest. When the application exits, __unbind_from_irq()
>> ends up being called from the release() fop, and because of that the
>> static evtchns get closed. To avoid closing a static event channel, add
>> the new bool field "is_static" in "struct irq_info" to mark the event
>> channel static when it is created.
>>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>
> I think the patch is OK, but evtchn_bind_to_user() on the error path
> calls EVTCHNOP_close. Could that be a problem for static evtchns? I
> wonder if we need to skip that EVTCHNOP_close call too.
>
> err:
> 	/* bind failed, should close the port now */
> 	close.port = port;
> 	if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0)
> 		BUG();
> 	del_evtchn(u, evtchn);

Yes, we need to avoid closing the static event channel on the error path
as well. I will fix this in the next version.

Regards,
Rahul


--_000_97F690E0B78C4B00907984B202F39674armcom_--


From xen-devel-bounces@lists.xenproject.org Tue May 09 16:00:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:00:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532300.828421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwPks-0006ga-VR; Tue, 09 May 2023 15:59:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532300.828421; Tue, 09 May 2023 15:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwPks-0006gT-Rv; Tue, 09 May 2023 15:59:58 +0000
Received: by outflank-mailman (input) for mailman id 532300;
 Tue, 09 May 2023 15:59:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vSxc=A6=citrix.com=prvs=486b9cf0a=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pwPks-0006gM-74
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 15:59:58 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 89862df1-ee82-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 17:59:55 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 09 May 2023 11:59:45 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB7012.namprd03.prod.outlook.com (2603:10b6:303:1a7::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18; Tue, 9 May
 2023 15:59:43 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 15:59:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89862df1-ee82-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683647995;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=W9gH6urSQ59K0ZNTftgzaVYfZVX6edf0EvkUySIZl+0=;
  b=CfQBGjtQgamkqucl4gRnUrBBSfondg4FlI65fhmRkMVs8eIt95M9HxSJ
   aFGsBP3aiR4xg6YQKQOZEO4dk3lxny+4CQpfpGJGbVaGD0Lp8/MWUXXzl
   wJQcJizhyKzbW0i+vUPdOy/R/lkV9terbLKBzCT4qlrCPE2Zrw7z7w1aj
   w=;
X-IronPort-RemoteIP: 104.47.58.169
X-IronPort-MID: 107173637
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,262,1677560400"; 
   d="scan'208";a="107173637"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AkvkqYWUH0cLPGfLd0sXi0jaQLqO4CG8XTHHa6qEJQs=;
 b=tCTAyn/5oLTiI5a81kNZJZidzzbMCUXR9ieLaSp/m8TU666zlHYN/lxDYu2EfkLm9LkPJVy9pXLFJFx3qRs1Kc9ll6WPtod6rrgofEfGLlx7Yr67GPdO7KOc1D9AVMCKv75gQLc13xZN/eUkFpJlpZAk3NWLkMtcGrsCUnzwdtI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <742a5807-dd53-0cd1-d478-aed567d5c4f5@citrix.com>
Date: Tue, 9 May 2023 16:59:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/6] x86/cpu-policy: Drop build time cross-checks of
 featureset sizes
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-2-andrew.cooper3@citrix.com>
 <6531df09-7f7f-a1e2-5993-bd2a14e22421@suse.com>
 <18090224-6838-8ed4-6be5-ae10a334e277@citrix.com>
 <bbbd4c8b-e681-f785-b85c-474b380d6160@suse.com>
In-Reply-To: <bbbd4c8b-e681-f785-b85c-474b380d6160@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO6P265CA0013.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:339::19) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MW4PR03MB7012:EE_
X-MS-Office365-Filtering-Correlation-Id: 5a7b4d7a-4781-442c-9b3d-08db50a66701
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?aG03aGlJRHl1RGdGSkZycGlsRjNwamJuWlFOZElYNm9sRDMzQ3Z0N1kzK2NY?=
 =?utf-8?B?RURDUGp6a3M2UzJ5cFNBbFlJYVR5YXAzSGd0b1VhQy9kUHhtYXNhSWRScGhZ?=
 =?utf-8?B?SEsrMWtLSE4vd3hEOUtGZlRiQXhTZXY3OWlHZE00bFBsMEpRbHFZQnY0WCtz?=
 =?utf-8?B?czhzWEVtQndQL1cxWU5ybG54WHF1d0Q5cFk0N1ZFTGtxc1ZDRFVuRkN4ZXc3?=
 =?utf-8?B?dnNkVlkyNXVkQVlmT2l4TS9hK2tkTFhSQmQrMzk2T0k5blV2encvZmZoMlBh?=
 =?utf-8?B?dW4vRnY2ZjgwU0dmVzJtL2RwdG1PU2VMRVhEVnBkbEF6OVVHK1RzRkxIdzBB?=
 =?utf-8?B?T2VuZTB1ZFRnK1hZM2puSlVlTUVPNUZ0WkxrampvYTc5VE1RL3F5THgvc2VN?=
 =?utf-8?B?YXMwdDZsS2VJRjJkTmJpZ1lEb0pNcEFxNVd0WXplamMvYlp5cHdUQnhldEVJ?=
 =?utf-8?B?Z0l3QkNUNk9tU21nUmo4dWtIMktwMDhCTm5OcTh3QkZ4ejZTYlMwTTMzTjRD?=
 =?utf-8?B?M3JNb0h3SitNbFRXaU5VSCt2T0EzSFhmdWM3Lyt6dVM4VW5xK2MvVFBORis3?=
 =?utf-8?B?a2dGT01mbExjQW4rT2creVlPN0VKazJFd3cwUjBLU0ZoSStFYXpWQlpiODN3?=
 =?utf-8?B?aXFXTWNqTEFabTZhQUN4UDFCbHdIWDFibVQrQmhHa2QxNnZZZm5zZDdIY0cr?=
 =?utf-8?B?enRZVTNnbmlGKzZudzE3WVE2aHJMSlNtVm4yTWY2U2pnWk0rbGdmbkNkL1dl?=
 =?utf-8?B?alM5S2g2STkzZEx6NEZ6ZSszRFlhVTliUjNIajVNN01oREdoZjZFYWJGdE1h?=
 =?utf-8?B?Vngzd0NNUjVHOWwyb1UrdWZFTkZ6SU5pWWVWZ0xGaFFPMnlDNjBjUzR3LzdP?=
 =?utf-8?B?UW9aTHJpV0FoTVJnUUVpTGVZdVVFSG1zTGdtWnREZFp1ZWIxN1ZZUjROZk5o?=
 =?utf-8?B?OEFDUXFWVUtVQ2ZDSFBHMmY1cVpFdi9BVGhGUHZhVHp1ODRndXl3VDZvZXp5?=
 =?utf-8?B?OEpuWDl2dlNHRjd1VGZoc2didi9yZzZQNERna2x2ckZ1RXN1djlkWk0xVGgw?=
 =?utf-8?B?MzVDTjJ5Q3ZLaFVNN3VqZVh6bEE0SmxtVDBWN3VYYWtZQ25JRFptY2JCcGds?=
 =?utf-8?B?SnN4bVdyZTJzY2tVd0ozZ3FiSmE3aGJiOSt6dmxUMG85aE93ZnRuZXE1d0lT?=
 =?utf-8?B?eGdtUXhaWk9pbmJ2NjhTZFRzcW04WkU5SEphQ0VSOGN2dmd0ZUQyb0ZIQmNG?=
 =?utf-8?B?SllEdldrYlFTWTJad3ljaDBtdldKNEUvQzdUcklkcUI1QXdtRzVpbFlKWkRo?=
 =?utf-8?B?RDJYemUrb2FZTkxVbkVaSjdDeUl6Z1lmSEVNdm5jcjBzbkdKelJrb3J5T1Q5?=
 =?utf-8?B?K1V6Uk8vOW9nT2xFYkh5NGJPTEt3SHk1eVNWVWNaekJCWGEzMG8vVzQzQ3U5?=
 =?utf-8?B?U0hGM2J3a3F3cElmWXJPNkxjSVQ5ck8wSm1ib1RmbjVOM21LalJZczJFOHZS?=
 =?utf-8?B?TzhRRU9LRWI1NFMweUw3YlVzK251MURLZU8xVVlwZ05lTGNTUDVZZm5TTEJu?=
 =?utf-8?B?SXRPOHlpRWFHWHo2UU1ZaHBrQmpTNnRIbUtzYUhyVlFwTHEyZUx1ellOanBU?=
 =?utf-8?B?L1gwU2JxL3RoYjJVaHRzWjZ1VHY3VmVvM291ZFBzYURSUi9FRUlaK05jR3J2?=
 =?utf-8?B?aGJxU3JTdHRFV2dIVXpsME5VTy9MRzZuUjBibWhmRmM4WVIrSk9NUlhJeHRN?=
 =?utf-8?B?S29HRHI5UkNjTjlxTXVxT1NMZjBVMW1MVXNUcm1sMW52WVZITmN6dTN3OUZO?=
 =?utf-8?B?cWptQUFjTnpvRy9RWGY4emQzUmVtZlRWaUplNWJpMFpRRjRZWW8rZnVlUTZ3?=
 =?utf-8?B?YW5EYnJUaGpUTi81N3FFZnhka0FZVUVXdDVIcEJ2d0xWRVhkbkJRczIxalNY?=
 =?utf-8?B?WlRSOFp3SHRSYU90YWI5dytsd3daQ05lV0FjR1EyTDF6WGU1YlNBaUcxdjB1?=
 =?utf-8?B?NHBHdHBzcnpFN1dxTzVqSlVPeWxTRVJDU05qbW50ZVg2UExBNlkyME96NlpP?=
 =?utf-8?B?eHZUb2VwVmV6Mjg2YXVDa0JKMExHYlRtVSs5ZmFkZ2xQMHc2emRDQW9DUlhB?=
 =?utf-8?B?S1h4MDBYM3NGZGdQOURCNVJqUWtIbURyZGQxR0oyYU5BMHZrYSt1c01DWUpB?=
 =?utf-8?B?cGc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	ND8uhVtwjN21bm3EQD6ysD3MAheEhDzSzjCMENXgM8wePXoGapZGf0tjSZlS5PIlwjI7swIXjbfU5xM54ogL25lnz0D3XNsOc0P9AeSHLiVfIoisZz+dXI8I72HFvAwX1QZi/RSiHWsJ14kSzSCKyghpejiausgJATfrcXkx/5CeOJE+Uj9Vfch/DYRmEYjmVRkBTREmup7KqLIBvCIWH+0VRvS59EXchyWfT5BEobIaFH26jcwKUrSabr5NeOkDTl2I+RWAYBiraHlqK/R/KKDqgB6lkahS6fAzc3IRcOGEqZgkRxY301D6TFuontb5nrtJxtmcbnb+IczodK6RFGsfbgWA/I5ttwUtIOjM5b4g/yh0gSqEDscrGIMoO8N1DfotqfVbe6Pu7SqdGf0fKSZm7VgA8wJWBeeF3oD5BDpl4S1BRZmBtpqq8pFyInoPKxsDAqpH+RkM1w6krjDhInqEY8eEFUtTFZ6D6JPNKhxMKhYFZ/zs+yagvrSq9Pj3naic4lbEfLzBK6VUOn2kUw4MdELKcugH/qR1aJ/H4rdnW3NjJhcvHi0O43jl9O/+7HXB8Iw09gKDA7oiJ2rGcYbRlydcCnpAgnsT7d/OpSkiV2jABJ2ZxgqLVgwOdao+DtNyGt8jA8YkPpVJy1JbOW/VqNrjMpcUPDTU8qXV9zLzcWN70BYUYZ51WXgy3+kCkIsyfKJTCJMocwWDkJeNcgVSWApoWWHIq35a4LmgTyC60mdPbZpN4Zvlww7ODNyDnnYfhc19i/m6iCw5c3OMa4gHhJNUQQluntAuFXHk/+kS1gFZbN3PLBlfxlmepT2q
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 15:59:42.8893
 (UTC)

On 09/05/2023 3:28 pm, Jan Beulich wrote:
> On 09.05.2023 15:04, Andrew Cooper wrote:
>> On 08/05/2023 7:47 am, Jan Beulich wrote:
>>> On 04.05.2023 21:39, Andrew Cooper wrote:
>>>> These BUILD_BUG_ON()s exist to cover the curious absence of a diagnostic for
>>>> code which looks like:
>>>>
>>>>   uint32_t foo[1] = { 1, 2, 3 };
>>>>
>>>> However, GCC 12 at least does now warn for this:
>>>>
>>>>   foo.c:1:24: error: excess elements in array initializer [-Werror]
>>>>     884 | uint32_t foo[1] = { 1, 2, 3 };
>>>>         |                        ^
>>>>   foo.c:1:24: note: (near initialization for 'foo')
>>> I'm pretty sure all gcc versions we support diagnose such cases. In turn
>>> the arrays in question don't have explicit dimensions at their
>>> definition sites, and hence they derive their dimensions from their
>>> initializers. So the build-time-checks are about the arrays in fact
>>> obtaining the right dimensions, i.e. the initializers being suitable.
>>>
>>> With the core part of the reasoning not being applicable, I'm afraid I
>>> can't even say "okay with an adjusted description".
>> Now I'm extra confused.
>>
>> I put those BUILD_BUG_ON()'s in because I was not getting a diagnostic
>> when I was expecting one, and there was a bug in the original featureset
>> work caused by this going wrong.
>>
>> But godbolt seems to agree that even GCC 4.1 notices.
>>
>> Maybe it was some other error (C file not seeing the header properly?)
>> which disappeared across the upstream review?
> Or maybe, by mistake, too few initializer fields? But what exactly it
> was probably doesn't matter. If this patch is to stay (see below), some
> different description will be needed anyway (or the change be folded
> into the one actually invalidating those BUILD_BUG_ON()s).
>
>> Either way, these aren't appropriate, and need deleting before patch 5,
>> because the check is no longer valid when a featureset can be longer
>> than the autogen length.
> Well, they need deleting if we stick to the approach chosen there right
> now. If we switched to my proposed alternative, they better would stay.

Given that all versions of GCC do warn, I don't see any justification
for them to stay.

i.e. this should be committed, even if the commit message says "no idea
why they were added originally, but they're superfluous in the logic as
it exists today".
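As an aside, the distinction Jan draws can be sketched in a few lines. This is illustrative only: `featureset`, `EXPECTED_WORDS`, and this minimal `BUILD_BUG_ON` are made-up stand-ins, not the Xen originals.

```c
#include <stdint.h>

/* Case 1: explicit dimension -- every supported GCC already errors:
 *     uint32_t foo[1] = { 1, 2, 3 };   // error: excess elements
 *
 * Case 2: the dimension is derived from the initializer, so no such
 * diagnostic is possible; a build-time check pins the resulting size.
 */
static const uint32_t featureset[] = { 1, 2, 3 };   /* size inferred */

#define EXPECTED_WORDS 3

/* Minimal BUILD_BUG_ON: a negative array size if cond is non-zero. */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

static void check_featureset(void)
{
    BUILD_BUG_ON(sizeof(featureset) / sizeof(featureset[0]) !=
                 EXPECTED_WORDS);
}
```

If the initializer above gained or lost an element, the compilation would fail in `check_featureset()` rather than silently changing the array's size.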

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 09 16:06:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:06:58 +0000
Message-ID: <bc750b8d-77be-f967-907a-4f19c783ad99@suse.com>
Date: Tue, 9 May 2023 18:06:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] iommu/vtd: fix address translation for superpages
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20230509104146.61178-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230509104146.61178-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 16:06:46.9524
 (UTC)

On 09.05.2023 12:41, Roger Pau Monne wrote:
> When translating an address that falls inside of a superpage in the
> IOMMU page tables the fetching of the PTE physical address field
> wasn't using dma_pte_addr(), which caused the returned data to be
> corrupt as it would contain bits not related to the address field.

I'm afraid I don't understand:

> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -359,16 +359,18 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
>  
>              if ( !alloc )
>              {
> -                pte_maddr = 0;
>                  if ( !dma_pte_present(*pte) )
> +                {
> +                    pte_maddr = 0;
>                      break;
> +                }
>  
>                  /*
>                   * When the leaf entry was requested, pass back the full PTE,
>                   * with the address adjusted to account for the residual of
>                   * the walk.
>                   */
> -                pte_maddr = pte->val +
> +                pte_maddr +=
>                      (addr & ((1UL << level_to_offset_bits(level)) - 1) &
>                       PAGE_MASK);

With this change you're now violating what the comment says (plus what
the comment ahead of the function says). And it says what it says for
a reason - see intel_iommu_lookup_page(), which I think your change is
breaking.

Note also the following code:

                if ( !target )
                    break;
            }

            pte_maddr = level - 1;

IOW the local variable is overwritten right away unless target == 0.
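For readers following along, here is a toy sketch of the masking under discussion. The constants and helpers below are simplified stand-ins, not the real Xen/VT-d definitions: a PTE packs flag bits (read, write, superpage) alongside the address bits, so deriving an address from the raw value keeps those flags in the result unless they are masked off first.

```c
#include <stdint.h>

#define PAGE_SHIFT    12
#define PAGE_SIZE     (1UL << PAGE_SHIFT)
#define PAGE_MASK     (~(PAGE_SIZE - 1))
#define DMA_PTE_READ  (1UL << 0)
#define DMA_PTE_WRITE (1UL << 1)
#define DMA_PTE_SP    (1UL << 7)              /* superpage flag */
#define ADDR_MASK     0x000ffffffffff000UL    /* address bits 51:12 */

/* Strip the R/W/SP and other flag bits, keeping only the address. */
static uint64_t dma_pte_addr(uint64_t pte)
{
    return pte & ADDR_MASK;
}

/* Combine the PTE's address field with the residual offset of a walk
 * that stopped at a superpage covering `offset_bits` of address. */
static uint64_t translate(uint64_t pte, uint64_t addr,
                          unsigned int offset_bits)
{
    uint64_t residual = addr & ((1UL << offset_bits) - 1) & PAGE_MASK;

    return dma_pte_addr(pte) + residual;
}
```

Whether the full PTE value (flags included) or only the masked address should be passed back is exactly the point of contention above: callers such as intel_iommu_lookup_page() may rely on seeing the flag bits.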

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 09 16:07:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:07:20 +0000
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Tim Deegan <tim@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>
Subject: [PATCH v4 0/3] Rationalize usage of xc_domain_getinfo{,list}()
Date: Tue,  9 May 2023 17:07:09 +0100
Message-Id: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The first 4 patches of v2 already made it to staging. v4 includes a fix
so libxl preserves its previous error handling behaviour, and a style change.

Original cover letter:

xc_domain_getinfo() returns the list of domains with domid >= first_domid.
It does so by repeatedly invoking XEN_DOMCTL_getdomaininfo, which leads to
unintuitive behaviour (asking for domid=1 might succeed, returning domid=2).
Furthermore, N hypercalls are required, whereas the equivalent functionality
can be achieved with XEN_SYSCTL_getdomaininfo.

Ideally, we want a DOMCTL interface that operates over a single precisely
specified domain and a SYSCTL interface that can be used for bulk queries.

All callers of xc_domain_getinfo() that are better off using SYSCTL are
migrated to use that instead. That includes callers performing domain
discovery and those requesting info for more than 1 domain per hypercall.

A new xc_domain_getinfo_single() is introduced with stricter semantics than
xc_domain_getinfo() (failing if the domid isn't found), so the remaining
callers can be migrated to it.

With no callers left the xc_dominfo_t structure and the xc_domain_getinfo()
call itself can be cleanly removed, and the DOMCTL interface simplified to
only use its fastpath.

With the DOMCTL amended, the new xc_domain_getinfo_single() drops its
stricter check, becoming a simple wrapper that invokes the hypercall itself.
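The two lookup semantics described in the cover letter can be contrasted with toy data. The helper names and the domid list below are hypothetical, not the real libxc API:

```c
#include <stdint.h>
#include <stddef.h>

static const uint16_t live_domids[] = { 0, 2, 5 };   /* domid 1 is gone */
#define NR_DOMS (sizeof(live_domids) / sizeof(live_domids[0]))

/* Old-style semantics: return the first live domid >= wanted, or -1.
 * Asking for domid 1 "succeeds" but hands back domid 2. */
static int getinfo_ge(uint16_t wanted)
{
    for (size_t i = 0; i < NR_DOMS; i++)
        if (live_domids[i] >= wanted)
            return live_domids[i];
    return -1;
}

/* New-style semantics: succeed only on an exact match. */
static int getinfo_single(uint16_t wanted)
{
    for (size_t i = 0; i < NR_DOMS; i++)
        if (live_domids[i] == wanted)
            return live_domids[i];
    return -1;
}
```

The exact-match variant turns the "asked for 1, got 2" surprise into an explicit failure the caller must handle.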

Alejandro Vallejo (3):
  tools: Modify single-domid callers of xc_domain_getinfolist()
  tools: Use new xc function for some xc_domain_getinfo() calls
  domctl: Modify XEN_DOMCTL_getdomaininfo to fail if domid is not found

 tools/console/client/main.c             |  7 +--
 tools/debugger/kdd/kdd-xen.c            |  5 +-
 tools/include/xenctrl.h                 | 43 -------------
 tools/libs/ctrl/xc_domain.c             | 82 ++-----------------------
 tools/libs/ctrl/xc_pagetab.c            |  7 +--
 tools/libs/ctrl/xc_private.c            |  9 +--
 tools/libs/ctrl/xc_private.h            |  7 ++-
 tools/libs/guest/xg_core.c              | 23 +++----
 tools/libs/guest/xg_core.h              |  6 +-
 tools/libs/guest/xg_core_arm.c          | 10 +--
 tools/libs/guest/xg_core_x86.c          | 18 +++---
 tools/libs/guest/xg_cpuid_x86.c         | 40 ++++++------
 tools/libs/guest/xg_dom_boot.c          | 16 ++---
 tools/libs/guest/xg_domain.c            |  8 +--
 tools/libs/guest/xg_offline_page.c      | 12 ++--
 tools/libs/guest/xg_private.h           |  1 +
 tools/libs/guest/xg_resume.c            | 20 +++---
 tools/libs/guest/xg_sr_common.h         |  2 +-
 tools/libs/guest/xg_sr_restore.c        | 17 ++---
 tools/libs/guest/xg_sr_restore_x86_pv.c |  2 +-
 tools/libs/guest/xg_sr_save.c           | 27 ++++----
 tools/libs/guest/xg_sr_save_x86_pv.c    |  6 +-
 tools/libs/light/libxl_dom.c            | 17 ++---
 tools/libs/light/libxl_dom_suspend.c    |  7 +--
 tools/libs/light/libxl_domain.c         | 18 +++---
 tools/libs/light/libxl_mem.c            |  4 +-
 tools/libs/light/libxl_sched.c          | 30 ++++-----
 tools/libs/light/libxl_x86_acpi.c       |  6 +-
 tools/misc/xen-hvmcrash.c               |  6 +-
 tools/misc/xen-lowmemd.c                |  6 +-
 tools/misc/xen-mfndump.c                | 22 +++----
 tools/misc/xen-vmtrace.c                |  6 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c     |  6 +-
 tools/vchan/vchan-socket-proxy.c        |  6 +-
 tools/xenpaging/xenpaging.c             | 10 +--
 tools/xenstore/xenstored_domain.c       | 15 +++--
 tools/xentrace/xenctx.c                 |  8 +--
 xen/common/domctl.c                     | 32 +---------
 38 files changed, 190 insertions(+), 377 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 09 16:07:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:07:22 +0000
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Christian Lindig <christian.lindig@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>
Subject: [PATCH v4 1/3] tools: Modify single-domid callers of xc_domain_getinfolist()
Date: Tue,  9 May 2023 17:07:10 +0100
Message-Id: <20230509160712.11685-2-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
References: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xc_domain_getinfolist() internally relies on a sysctl that performs
a linear search for the domids. Many callers of xc_domain_getinfolist()
that require information about one precise domid are much better off
calling xc_domain_getinfo_single() instead, which uses the getdomaininfo
domctl and ensures the returned domid matches the requested one. The
domctl also finds the domid faster, because it uses hashed lists.
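
The v4 note below mentions preserving the distinction between
"domain-not-found" and other errors. A minimal standalone sketch of that
mapping pattern (the ERROR_* values here are illustrative placeholders,
not libxl's actual numeric definitions):

```c
#include <errno.h>

/* Placeholder error codes standing in for libxl's ERROR_* values
 * (assumption: the real definitions live in libxl.h). */
#define ERROR_FAIL            -3
#define ERROR_DOMAIN_NOTFOUND -6

/* After a failed xc_domain_getinfo_single() call, ESRCH means the
 * domain does not exist; every other errno is a generic failure. */
static int map_getinfo_error(int saved_errno)
{
    return saved_errno == ESRCH ? ERROR_DOMAIN_NOTFOUND : ERROR_FAIL;
}
```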

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Christian Lindig <christian.lindig@cloud.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Christian Lindig <christian.lindig@citrix.com>

v4:
  libxl:
    * Preserve distinction between "domain-not-found" and other errors
    * Renamed the new rc variable to r, per the coding style
---
 tools/libs/light/libxl_dom.c         | 17 ++++++-----------
 tools/libs/light/libxl_dom_suspend.c |  7 +------
 tools/libs/light/libxl_domain.c      | 18 ++++++++----------
 tools/libs/light/libxl_mem.c         |  4 ++--
 tools/libs/light/libxl_sched.c       | 14 +++++---------
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  6 ++----
 tools/xenpaging/xenpaging.c          | 10 +++++-----
 7 files changed, 29 insertions(+), 47 deletions(-)

diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index 25fb716084..94fef37401 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -32,9 +32,9 @@ libxl_domain_type libxl__domain_type(libxl__gc *gc, uint32_t domid)
     xc_domaininfo_t info;
     int ret;
 
-    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
-    if (ret != 1 || info.domain != domid) {
-        LOG(ERROR, "unable to get domain type for domid=%"PRIu32, domid);
+    ret = xc_domain_getinfo_single(ctx->xch, domid, &info);
+    if (ret < 0) {
+        LOGED(ERROR, domid, "unable to get dominfo");
         return LIBXL_DOMAIN_TYPE_INVALID;
     }
     if (info.flags & XEN_DOMINF_hvm_guest) {
@@ -70,15 +70,10 @@ int libxl__domain_cpupool(libxl__gc *gc, uint32_t domid)
     xc_domaininfo_t info;
     int ret;
 
-    ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
-    if (ret != 1)
+    ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
+    if (ret < 0)
     {
-        LOGE(ERROR, "getinfolist failed %d", ret);
-        return ERROR_FAIL;
-    }
-    if (info.domain != domid)
-    {
-        LOGE(ERROR, "got info for dom%d, wanted dom%d\n", info.domain, domid);
+        LOGED(ERROR, domid, "get domaininfo failed");
         return ERROR_FAIL;
     }
     return info.cpupool;
diff --git a/tools/libs/light/libxl_dom_suspend.c b/tools/libs/light/libxl_dom_suspend.c
index 4fa22bb739..6091a5f3f6 100644
--- a/tools/libs/light/libxl_dom_suspend.c
+++ b/tools/libs/light/libxl_dom_suspend.c
@@ -332,13 +332,8 @@ static void suspend_common_wait_guest_check(libxl__egc *egc,
     /* Convenience aliases */
     const uint32_t domid = dsps->domid;
 
-    ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
+    ret = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (ret < 0) {
-        LOGED(ERROR, domid, "unable to check for status of guest");
-        goto err;
-    }
-
-    if (!(ret == 1 && info.domain == domid)) {
         LOGED(ERROR, domid, "guest we were suspending has been destroyed");
         goto err;
     }
diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 7f0986c185..5ee1544d9c 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -349,15 +349,11 @@ int libxl_domain_info(libxl_ctx *ctx, libxl_dominfo *info_r,
     int ret;
     GC_INIT(ctx);
 
-    ret = xc_domain_getinfolist(ctx->xch, domid, 1, &xcinfo);
-    if (ret<0) {
-        LOGED(ERROR, domid, "Getting domain info list");
-        GC_FREE;
-        return ERROR_FAIL;
-    }
-    if (ret==0 || xcinfo.domain != domid) {
+    ret = xc_domain_getinfo_single(ctx->xch, domid, &xcinfo);
+    if (ret < 0) {
+        LOGED(ERROR, domid, "Getting domain info");
         GC_FREE;
-        return ERROR_DOMAIN_NOTFOUND;
+        return errno == ESRCH ? ERROR_DOMAIN_NOTFOUND : ERROR_FAIL;
     }
 
     if (info_r)
@@ -1657,14 +1653,16 @@ int libxl__resolve_domid(libxl__gc *gc, const char *name, uint32_t *domid)
 libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
                                        int *nr_vcpus_out, int *nr_cpus_out)
 {
+    int r;
     GC_INIT(ctx);
     libxl_vcpuinfo *ptr, *ret;
     xc_domaininfo_t domaininfo;
     xc_vcpuinfo_t vcpuinfo;
     unsigned int nr_vcpus;
 
-    if (xc_domain_getinfolist(ctx->xch, domid, 1, &domaininfo) != 1) {
-        LOGED(ERROR, domid, "Getting infolist");
+    r = xc_domain_getinfo_single(ctx->xch, domid, &domaininfo);
+    if (r < 0) {
+        LOGED(ERROR, domid, "Getting dominfo");
         GC_FREE;
         return NULL;
     }
diff --git a/tools/libs/light/libxl_mem.c b/tools/libs/light/libxl_mem.c
index 92ec09f4cf..44e554adba 100644
--- a/tools/libs/light/libxl_mem.c
+++ b/tools/libs/light/libxl_mem.c
@@ -323,8 +323,8 @@ retry_transaction:
     libxl__xs_printf(gc, t, GCSPRINTF("%s/memory/target", dompath),
                      "%"PRIu64, new_target_memkb);
 
-    r = xc_domain_getinfolist(ctx->xch, domid, 1, &info);
-    if (r != 1 || info.domain != domid) {
+    r = xc_domain_getinfo_single(ctx->xch, domid, &info);
+    if (r < 0) {
         abort_transaction = 1;
         rc = ERROR_FAIL;
         goto out;
diff --git a/tools/libs/light/libxl_sched.c b/tools/libs/light/libxl_sched.c
index 7c53dc60e6..841c05b0ef 100644
--- a/tools/libs/light/libxl_sched.c
+++ b/tools/libs/light/libxl_sched.c
@@ -219,13 +219,11 @@ static int sched_credit_domain_set(libxl__gc *gc, uint32_t domid,
     xc_domaininfo_t domaininfo;
     int rc;
 
-    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &domaininfo);
+    rc = xc_domain_getinfo_single(CTX->xch, domid, &domaininfo);
     if (rc < 0) {
-        LOGED(ERROR, domid, "Getting domain info list");
-        return ERROR_FAIL;
+        LOGED(ERROR, domid, "Getting domain info");
+        return errno == ESRCH ? ERROR_INVAL : ERROR_FAIL;
     }
-    if (rc != 1 || domaininfo.domain != domid)
-        return ERROR_INVAL;
 
     rc = xc_sched_credit_domain_get(CTX->xch, domid, &sdom);
     if (rc != 0) {
@@ -426,13 +424,11 @@ static int sched_credit2_domain_set(libxl__gc *gc, uint32_t domid,
     xc_domaininfo_t info;
     int rc;
 
-    rc = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
+    rc = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (rc < 0) {
         LOGED(ERROR, domid, "Getting domain info");
-        return ERROR_FAIL;
+        return errno == ESRCH ? ERROR_INVAL : ERROR_FAIL;
     }
-    if (rc != 1 || info.domain != domid)
-        return ERROR_INVAL;
 
     rc = xc_sched_credit2_domain_get(CTX->xch, domid, &sdom);
     if (rc != 0) {
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index 6ec9ed6d1e..f686db3124 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -497,10 +497,8 @@ CAMLprim value stub_xc_domain_getinfo(value xch_val, value domid)
 	xc_domaininfo_t info;
 	int ret;
 
-	ret = xc_domain_getinfolist(xch, Int_val(domid), 1, &info);
-	if (ret != 1)
-		failwith_xc(xch);
-	if (info.domain != Int_val(domid))
+	ret = xc_domain_getinfo_single(xch, Int_val(domid), &info);
+	if (ret < 0)
 		failwith_xc(xch);
 
 	result = alloc_domaininfo(&info);
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 6e5490315d..c7a9a82477 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -169,8 +169,8 @@ static int xenpaging_get_tot_pages(struct xenpaging *paging)
     xc_domaininfo_t domain_info;
     int rc;
 
-    rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1, &domain_info);
-    if ( rc != 1 )
+    rc = xc_domain_getinfo_single(xch, paging->vm_event.domain_id, &domain_info);
+    if ( rc < 0 )
     {
         PERROR("Error getting domain info");
         return -1;
@@ -424,9 +424,9 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
     {
-        rc = xc_domain_getinfolist(xch, paging->vm_event.domain_id, 1,
-                                   &domain_info);
-        if ( rc != 1 )
+        rc = xc_domain_getinfo_single(xch, paging->vm_event.domain_id,
+                                      &domain_info);
+        if ( rc < 0 )
         {
             PERROR("Error getting domain info");
             goto err;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 09 16:07:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:07:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532310.828461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwPs4-0001Cw-Ho; Tue, 09 May 2023 16:07:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532310.828461; Tue, 09 May 2023 16:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwPs4-0001Cp-Dh; Tue, 09 May 2023 16:07:24 +0000
Received: by outflank-mailman (input) for mailman id 532310;
 Tue, 09 May 2023 16:07:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kELI=A6=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1pwPs3-0000D0-9k
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 16:07:23 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 942bfa1c-ee83-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 18:07:21 +0200 (CEST)
Received: by mail-wm1-x334.google.com with SMTP id
 5b1f17b1804b1-3f42c865534so7986465e9.2
 for <xen-devel@lists.xenproject.org>; Tue, 09 May 2023 09:07:21 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 z8-20020adfec88000000b003062675d4c9sm14721479wrn.39.2023.05.09.09.07.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 09 May 2023 09:07:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 942bfa1c-ee83-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683648440; x=1686240440;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=FdWx27kUD3c/g7EnD0npPaYTUD8O2LdCMzKt1vZWnvs=;
        b=KySMYdqEVQpQdQFmCmq5jpRPclpN6C82tDy22MG3CR/zw+++MKOINzj3NhEzxPIzeS
         4eq1SSpdMnzzjfR7etWyHEXQv6Gc5IkhcWXpuXjb9URL+4XITQd1NQFCiqI6X2FjAias
         vMySbXkFKKhsazRpGPfxGcVO/tCag28r7XQgw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683648440; x=1686240440;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=FdWx27kUD3c/g7EnD0npPaYTUD8O2LdCMzKt1vZWnvs=;
        b=Mgy2e46pcBhV3tHde49XALpGkSSFJ5Jm90oYJ/zTycWToUsT0DwGKednvaZ49c/cDF
         3EGmbIkVnxvU/O0CXDjjt39LcfGllu6eOOW5Xi6PDS4qGaaKpxFjfnukqpBwK08tQ9cc
         T4W9sJpLx9iUtg7UFqIOQmPRHLkQGDjw2Rns+DYhJn6cutd0QMXRyWLT1GK0T/eq1Sqo
         SK4BCxao1GAW2ZErOSAOBjYjQwmbq+Bwmo3WxnnD3+EI4KV9oXo++MjdtYBAI5zy7An4
         196tom6Rv+cYdaw9DMaePNU24fRocUXc5TpZvL6PIbg3605u9pZyoIGpKZ9zQpuH/5Hx
         Ciew==
X-Gm-Message-State: AC+VfDxC4gCnPYbtHiSqL0gX+WyB7zikaZprnGUqI8hbeW0oDz2gpmra
	2AEb3nMihwJ0ohEOuq6JUBZ9cwXykBxTtOkSO0M=
X-Google-Smtp-Source: ACHHUZ4+VlZnDWiF1GGVi4KD31HWPi7em92+f1lJPMyFKWcMxT6v2fAPb61P+syOAkQG17RYjhF2NQ==
X-Received: by 2002:a1c:f406:0:b0:3f0:7e56:82a4 with SMTP id z6-20020a1cf406000000b003f07e5682a4mr10834428wma.18.1683648439985;
        Tue, 09 May 2023 09:07:19 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v4 2/3] tools: Use new xc function for some xc_domain_getinfo() calls
Date: Tue,  9 May 2023 17:07:11 +0100
Message-Id: <20230509160712.11685-3-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
References: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move calls that require information about a single, precisely identified
domain to the new xc_domain_getinfo_single().
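
A side effect of the conversion, visible throughout the diff below, is
that the pre-decoded xc_dominfo_t fields (e.g. .hvm) give way to raw flag
tests on xc_domaininfo_t.flags. A self-contained sketch of that pattern
(the flag value and struct here are stand-ins; the real
XEN_DOMINF_hvm_guest lives in Xen's public domctl.h):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for Xen's XEN_DOMINF_hvm_guest bit; the actual
 * value is defined in xen/include/public/domctl.h. */
#define XEN_DOMINF_hvm_guest (1u << 1)

/* Minimal stand-in for the flags field of xc_domaininfo_t. */
typedef struct { uint32_t flags; } fake_domaininfo_t;

/* Old code read info->hvm directly; with xc_domaininfo_t the caller
 * tests the flag bit itself, as the converted call sites do. */
static bool is_hvm(const fake_domaininfo_t *info)
{
    return info->flags & XEN_DOMINF_hvm_guest;
}
```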

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Juergen Gross <jgross@suse.com>

v4: Changed a line in libxl_x86_acpi.c to print errno/domid instead of rc
---
 tools/console/client/main.c             |  7 ++---
 tools/debugger/kdd/kdd-xen.c            |  5 ++--
 tools/libs/ctrl/xc_domain.c             |  9 +++---
 tools/libs/ctrl/xc_pagetab.c            |  7 ++---
 tools/libs/ctrl/xc_private.c            |  9 +++---
 tools/libs/ctrl/xc_private.h            |  7 +++--
 tools/libs/guest/xg_core.c              | 23 ++++++--------
 tools/libs/guest/xg_core.h              |  6 ++--
 tools/libs/guest/xg_core_arm.c          | 10 +++----
 tools/libs/guest/xg_core_x86.c          | 18 +++++------
 tools/libs/guest/xg_cpuid_x86.c         | 40 +++++++++++++------------
 tools/libs/guest/xg_dom_boot.c          | 16 +++-------
 tools/libs/guest/xg_domain.c            |  8 ++---
 tools/libs/guest/xg_offline_page.c      | 12 ++++----
 tools/libs/guest/xg_private.h           |  1 +
 tools/libs/guest/xg_resume.c            | 20 ++++++-------
 tools/libs/guest/xg_sr_common.h         |  2 +-
 tools/libs/guest/xg_sr_restore.c        | 17 ++++-------
 tools/libs/guest/xg_sr_restore_x86_pv.c |  2 +-
 tools/libs/guest/xg_sr_save.c           | 27 +++++++----------
 tools/libs/guest/xg_sr_save_x86_pv.c    |  6 ++--
 tools/libs/light/libxl_sched.c          | 16 +++++-----
 tools/libs/light/libxl_x86_acpi.c       |  6 ++--
 tools/misc/xen-hvmcrash.c               |  6 ++--
 tools/misc/xen-lowmemd.c                |  6 ++--
 tools/misc/xen-mfndump.c                | 22 ++++++--------
 tools/misc/xen-vmtrace.c                |  6 ++--
 tools/vchan/vchan-socket-proxy.c        |  6 ++--
 tools/xenstore/xenstored_domain.c       | 15 +++++-----
 tools/xentrace/xenctx.c                 |  8 ++---
 30 files changed, 159 insertions(+), 184 deletions(-)

diff --git a/tools/console/client/main.c b/tools/console/client/main.c
index 1a6fa162f7..6775006488 100644
--- a/tools/console/client/main.c
+++ b/tools/console/client/main.c
@@ -408,17 +408,16 @@ int main(int argc, char **argv)
 	if (dom_path == NULL)
 		err(errno, "xs_get_domain_path()");
 	if (type == CONSOLE_INVAL) {
-		xc_dominfo_t xcinfo;
+		xc_domaininfo_t xcinfo;
 		xc_interface *xc_handle = xc_interface_open(0,0,0);
 		if (xc_handle == NULL)
 			err(errno, "Could not open xc interface");
-		if ( (xc_domain_getinfo(xc_handle, domid, 1, &xcinfo) != 1) ||
-		     (xcinfo.domid != domid) ) {
+		if (xc_domain_getinfo_single(xc_handle, domid, &xcinfo) < 0) {
 			xc_interface_close(xc_handle);
 			err(errno, "Failed to get domain information");
 		}
 		/* default to pv console for pv guests and serial for hvm guests */
-		if (xcinfo.hvm)
+		if (xcinfo.flags & XEN_DOMINF_hvm_guest)
 			type = CONSOLE_SERIAL;
 		else
 			type = CONSOLE_PV;
diff --git a/tools/debugger/kdd/kdd-xen.c b/tools/debugger/kdd/kdd-xen.c
index e78c9311c4..e63e267023 100644
--- a/tools/debugger/kdd/kdd-xen.c
+++ b/tools/debugger/kdd/kdd-xen.c
@@ -570,7 +570,7 @@ kdd_guest *kdd_guest_init(char *arg, FILE *log, int verbosity)
     kdd_guest *g = NULL;
     xc_interface *xch = NULL;
     uint32_t domid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
 
     g = calloc(1, sizeof (kdd_guest));
     if (!g) 
@@ -590,7 +590,8 @@ kdd_guest *kdd_guest_init(char *arg, FILE *log, int verbosity)
     g->domid = domid;
 
     /* Check that the domain exists and is HVM */
-    if (xc_domain_getinfo(xch, domid, 1, &info) != 1 || !info.hvm)
+    if (xc_domain_getinfo_single(xch, domid, &info) < 0 ||
+        !(info.flags & XEN_DOMINF_hvm_guest))
         goto err;
 
     snprintf(g->id, (sizeof g->id) - 1, 
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index d5f0923088..66179e6f12 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -1960,15 +1960,14 @@ int xc_domain_memory_mapping(
     uint32_t add_mapping)
 {
     DECLARE_DOMCTL;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int ret = 0, rc;
     unsigned long done = 0, nr, max_batch_sz;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get info for domain");
-        return -EINVAL;
+        PERROR("Could not get info for dom%u", domid);
+        return -1;
     }
     if ( !xc_core_arch_auto_translated_physmap(&info) )
         return 0;
diff --git a/tools/libs/ctrl/xc_pagetab.c b/tools/libs/ctrl/xc_pagetab.c
index db25c20247..d9f886633a 100644
--- a/tools/libs/ctrl/xc_pagetab.c
+++ b/tools/libs/ctrl/xc_pagetab.c
@@ -29,17 +29,16 @@
 unsigned long xc_translate_foreign_address(xc_interface *xch, uint32_t dom,
                                            int vcpu, unsigned long long virt)
 {
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     uint64_t paddr, mask, pte = 0;
     int size, level, pt_levels = 2;
     void *map;
 
-    if (xc_domain_getinfo(xch, dom, 1, &dominfo) != 1 
-        || dominfo.domid != dom)
+    if (xc_domain_getinfo_single(xch, dom, &dominfo) < 0)
         return 0;
 
     /* What kind of paging are we dealing with? */
-    if (dominfo.hvm) {
+    if (dominfo.flags & XEN_DOMINF_hvm_guest) {
         struct hvm_hw_cpu ctx;
         if (xc_domain_hvm_getcontext_partial(xch, dom,
                                              HVM_SAVE_CODE(CPU), vcpu,
diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
index 2f99a7d2cf..6293a45531 100644
--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -441,11 +441,12 @@ int xc_machphys_mfn_list(xc_interface *xch,
 
 long xc_get_tot_pages(xc_interface *xch, uint32_t domid)
 {
-    xc_dominfo_t info;
-    if ( (xc_domain_getinfo(xch, domid, 1, &info) != 1) ||
-         (info.domid != domid) )
+    xc_domaininfo_t info;
+
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
         return -1;
-    return info.nr_pages;
+
+    return info.tot_pages;
 }
 
 int xc_copy_to_domain_page(xc_interface *xch,
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index 80dc464c93..8faabaea67 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -16,6 +16,7 @@
 #ifndef XC_PRIVATE_H
 #define XC_PRIVATE_H
 
+#include <inttypes.h>
 #include <unistd.h>
 #include <stdarg.h>
 #include <stdio.h>
@@ -420,12 +421,12 @@ void *xc_vm_event_enable(xc_interface *xch, uint32_t domain_id, int param,
 int do_dm_op(xc_interface *xch, uint32_t domid, unsigned int nr_bufs, ...);
 
 #if defined (__i386__) || defined (__x86_64__)
-static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+static inline int xc_core_arch_auto_translated_physmap(const xc_domaininfo_t *info)
 {
-    return info->hvm;
+    return info->flags & XEN_DOMINF_hvm_guest;
 }
 #elif defined (__arm__) || defined(__aarch64__)
-static inline int xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
+static inline int xc_core_arch_auto_translated_physmap(const xc_domaininfo_t *info)
 {
     return 1;
 }
diff --git a/tools/libs/guest/xg_core.c b/tools/libs/guest/xg_core.c
index c52f1161c1..f83436d6cb 100644
--- a/tools/libs/guest/xg_core.c
+++ b/tools/libs/guest/xg_core.c
@@ -349,7 +349,7 @@ elfnote_dump_none(xc_interface *xch, void *args, dumpcore_rtn_t dump_rtn)
 static int
 elfnote_dump_core_header(
     xc_interface *xch,
-    void *args, dumpcore_rtn_t dump_rtn, const xc_dominfo_t *info,
+    void *args, dumpcore_rtn_t dump_rtn, const xc_domaininfo_t *info,
     int nr_vcpus, unsigned long nr_pages)
 {
     int sts;
@@ -361,7 +361,8 @@ elfnote_dump_core_header(
     
     elfnote.descsz = sizeof(header);
     elfnote.type = XEN_ELFNOTE_DUMPCORE_HEADER;
-    header.xch_magic = info->hvm ? XC_CORE_MAGIC_HVM : XC_CORE_MAGIC;
+    header.xch_magic = (info->flags & XEN_DOMINF_hvm_guest) ? XC_CORE_MAGIC_HVM
+                                                            : XC_CORE_MAGIC;
     header.xch_nr_vcpus = nr_vcpus;
     header.xch_nr_pages = nr_pages;
     header.xch_page_size = PAGE_SIZE;
@@ -423,7 +424,7 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
                                 void *args,
                                 dumpcore_rtn_t dump_rtn)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     shared_info_any_t *live_shinfo = NULL;
     struct domain_info_context _dinfo = {};
     struct domain_info_context *dinfo = &_dinfo;
@@ -468,15 +469,15 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         goto out;
     }
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get info for domain");
+        PERROR("Could not get info for dom%u", domid);
         goto out;
     }
     /* Map the shared info frame */
     live_shinfo = xc_map_foreign_range(xch, domid, PAGE_SIZE,
                                        PROT_READ, info.shared_info_frame);
-    if ( !live_shinfo && !info.hvm )
+    if ( !live_shinfo && !(info.flags & XEN_DOMINF_hvm_guest) )
     {
         PERROR("Couldn't map live_shinfo");
         goto out;
@@ -517,12 +518,6 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         dinfo->guest_width = sizeof(unsigned long);
     }
 
-    if ( domid != info.domid )
-    {
-        PERROR("Domain %d does not exist", domid);
-        goto out;
-    }
-
     ctxt = calloc(sizeof(*ctxt), info.max_vcpu_id + 1);
     if ( !ctxt )
     {
@@ -560,9 +555,9 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
      * all the array...
      *
      * We don't want to use the total potential size of the memory map
-     * since that is usually much higher than info.nr_pages.
+     * since that is usually much higher than info.tot_pages.
      */
-    nr_pages = info.nr_pages;
+    nr_pages = info.tot_pages;
 
     if ( !auto_translated_physmap )
     {
diff --git a/tools/libs/guest/xg_core.h b/tools/libs/guest/xg_core.h
index aaca9e0a8b..ff577dad31 100644
--- a/tools/libs/guest/xg_core.h
+++ b/tools/libs/guest/xg_core.h
@@ -134,15 +134,15 @@ typedef struct xc_core_memory_map xc_core_memory_map_t;
 struct xc_core_arch_context;
 int xc_core_arch_memory_map_get(xc_interface *xch,
                                 struct xc_core_arch_context *arch_ctxt,
-                                xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                                xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                                 xc_core_memory_map_t **mapp,
                                 unsigned int *nr_entries);
 int xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo,
-                         xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                         xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                          xen_pfn_t **live_p2m);
 
 int xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo,
-                                  xc_dominfo_t *info,
+                                  xc_domaininfo_t *info,
                                   shared_info_any_t *live_shinfo,
                                   xen_pfn_t **live_p2m);
 
diff --git a/tools/libs/guest/xg_core_arm.c b/tools/libs/guest/xg_core_arm.c
index de30cf0c31..34276152da 100644
--- a/tools/libs/guest/xg_core_arm.c
+++ b/tools/libs/guest/xg_core_arm.c
@@ -33,14 +33,14 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
 
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
-                            xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                            xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                             xc_core_memory_map_t **mapp,
                             unsigned int *nr_entries)
 {
     xen_pfn_t p2m_size = 0;
     xc_core_memory_map_t *map;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &p2m_size) < 0 )
         return -1;
 
     map = malloc(sizeof(*map));
@@ -59,7 +59,7 @@ xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unus
 }
 
 static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     errno = ENOSYS;
@@ -67,14 +67,14 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                               shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
diff --git a/tools/libs/guest/xg_core_x86.c b/tools/libs/guest/xg_core_x86.c
index c5e4542ccc..dbd3a440f7 100644
--- a/tools/libs/guest/xg_core_x86.c
+++ b/tools/libs/guest/xg_core_x86.c
@@ -49,14 +49,14 @@ xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
 
 int
 xc_core_arch_memory_map_get(xc_interface *xch, struct xc_core_arch_context *unused,
-                            xc_dominfo_t *info, shared_info_any_t *live_shinfo,
+                            xc_domaininfo_t *info, shared_info_any_t *live_shinfo,
                             xc_core_memory_map_t **mapp,
                             unsigned int *nr_entries)
 {
     xen_pfn_t p2m_size = 0;
     xc_core_memory_map_t *map;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &p2m_size) < 0 )
         return -1;
 
     map = malloc(sizeof(*map));
@@ -314,24 +314,24 @@ xc_core_arch_map_p2m_tree_rw(xc_interface *xch, struct domain_info_context *dinf
 }
 
 static int
-xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m, int rw)
 {
     xen_pfn_t *p2m_frame_list = NULL;
     uint64_t p2m_cr3;
-    uint32_t dom = info->domid;
+    uint32_t dom = info->domain;
     int ret = -1;
     int err;
 
-    if ( xc_domain_nr_gpfns(xch, info->domid, &dinfo->p2m_size) < 0 )
+    if ( xc_domain_nr_gpfns(xch, info->domain, &dinfo->p2m_size) < 0 )
     {
         ERROR("Could not get maximum GPFN!");
         goto out;
     }
 
-    if ( dinfo->p2m_size < info->nr_pages  )
+    if ( dinfo->p2m_size < info->tot_pages  )
     {
-        ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
+        ERROR("p2m_size < nr_pages -1 (%lx < %"PRIx64, dinfo->p2m_size, info->tot_pages - 1);
         goto out;
     }
 
@@ -366,14 +366,14 @@ out:
 }
 
 int
-xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                         shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 0);
 }
 
 int
-xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_dominfo_t *info,
+xc_core_arch_map_p2m_writable(xc_interface *xch, struct domain_info_context *dinfo, xc_domaininfo_t *info,
                               shared_info_any_t *live_shinfo, xen_pfn_t **live_p2m)
 {
     return xc_core_arch_map_p2m_rw(xch, dinfo, info, live_shinfo, live_p2m, 1);
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index bd16a87e48..57221ffea8 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -281,7 +281,8 @@ static int xc_cpuid_xend_policy(
     xc_interface *xch, uint32_t domid, const struct xc_xend_cpuid *xend)
 {
     int rc;
-    xc_dominfo_t di;
+    bool hvm;
+    xc_domaininfo_t di;
     unsigned int nr_leaves, nr_msrs;
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     /*
@@ -291,13 +292,13 @@ static int xc_cpuid_xend_policy(
     xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
     unsigned int nr_host, nr_def, nr_cur;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
+    if ( (rc = xc_domain_getinfo_single(xch, domid, &di)) < 0 )
     {
-        ERROR("Failed to obtain d%d info", domid);
-        rc = -ESRCH;
+        PERROR("Failed to obtain d%d info", domid);
+        rc = -errno;
         goto fail;
     }
+    hvm = di.flags & XEN_DOMINF_hvm_guest;
 
     rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
     if ( rc )
@@ -330,12 +331,12 @@ static int xc_cpuid_xend_policy(
     /* Get the domain type's default policy. */
     nr_msrs = 0;
     nr_def = nr_leaves;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
-                                           : XEN_SYSCTL_cpu_policy_pv_default,
+    rc = get_system_cpu_policy(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+                                        : XEN_SYSCTL_cpu_policy_pv_default,
                                &nr_def, def, &nr_msrs, NULL);
     if ( rc )
     {
-        PERROR("Failed to obtain %s def policy", di.hvm ? "hvm" : "pv");
+        PERROR("Failed to obtain %s def policy", hvm ? "hvm" : "pv");
         rc = -errno;
         goto fail;
     }
@@ -428,7 +429,8 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
                           const struct xc_xend_cpuid *xend)
 {
     int rc;
-    xc_dominfo_t di;
+    bool hvm;
+    xc_domaininfo_t di;
     unsigned int i, nr_leaves, nr_msrs;
     xen_cpuid_leaf_t *leaves = NULL;
     struct cpu_policy *p = NULL;
@@ -436,13 +438,13 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
     uint32_t len = ARRAY_SIZE(host_featureset);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
+    if ( (rc = xc_domain_getinfo_single(xch, domid, &di)) < 0 )
     {
-        ERROR("Failed to obtain d%d info", domid);
-        rc = -ESRCH;
+        PERROR("Failed to obtain d%d info", domid);
+        rc = -errno;
         goto out;
     }
+    hvm = di.flags & XEN_DOMINF_hvm_guest;
 
     rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
     if ( rc )
@@ -475,12 +477,12 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
 
     /* Get the domain's default policy. */
     nr_msrs = 0;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
-                                           : XEN_SYSCTL_cpu_policy_pv_default,
+    rc = get_system_cpu_policy(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+                                        : XEN_SYSCTL_cpu_policy_pv_default,
                                &nr_leaves, leaves, &nr_msrs, NULL);
     if ( rc )
     {
-        PERROR("Failed to obtain %s default policy", di.hvm ? "hvm" : "pv");
+        PERROR("Failed to obtain %s default policy", hvm ? "hvm" : "pv");
         rc = -errno;
         goto out;
     }
@@ -514,7 +516,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         p->feat.hle = test_bit(X86_FEATURE_HLE, host_featureset);
         p->feat.rtm = test_bit(X86_FEATURE_RTM, host_featureset);
 
-        if ( di.hvm )
+        if ( hvm )
         {
             p->feat.mpx = test_bit(X86_FEATURE_MPX, host_featureset);
         }
@@ -571,7 +573,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     {
         p->extd.itsc = itsc;
 
-        if ( di.hvm )
+        if ( hvm )
         {
             p->basic.pae = pae;
             p->basic.vmx = nested_virt;
@@ -579,7 +581,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         }
     }
 
-    if ( !di.hvm )
+    if ( !hvm )
     {
         /*
          * On hardware without CPUID Faulting, PV guests see real topology.
diff --git a/tools/libs/guest/xg_dom_boot.c b/tools/libs/guest/xg_dom_boot.c
index 263a3f4c85..6e0847e718 100644
--- a/tools/libs/guest/xg_dom_boot.c
+++ b/tools/libs/guest/xg_dom_boot.c
@@ -164,7 +164,7 @@ void *xc_dom_boot_domU_map(struct xc_dom_image *dom, xen_pfn_t pfn,
 
 int xc_dom_boot_image(struct xc_dom_image *dom)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int rc;
 
     DOMPRINTF_CALLED(dom->xch);
@@ -174,19 +174,11 @@ int xc_dom_boot_image(struct xc_dom_image *dom)
         return rc;
 
     /* collect some info */
-    rc = xc_domain_getinfo(dom->xch, dom->guest_domid, 1, &info);
-    if ( rc < 0 )
-    {
-        xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
-                     "%s: getdomaininfo failed (rc=%d)", __FUNCTION__, rc);
-        return rc;
-    }
-    if ( rc == 0 || info.domid != dom->guest_domid )
+    if ( xc_domain_getinfo_single(dom->xch, dom->guest_domid, &info) < 0 )
     {
         xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
-                     "%s: Huh? No domains found (nr_domains=%d) "
-                     "or domid mismatch (%d != %d)", __FUNCTION__,
-                     rc, info.domid, dom->guest_domid);
+                     "%s: getdomaininfo failed (errno=%d)",
+                     __func__, errno);
         return -1;
     }
     dom->shared_info_mfn = info.shared_info_frame;
diff --git a/tools/libs/guest/xg_domain.c b/tools/libs/guest/xg_domain.c
index f0e7748449..198f6f904a 100644
--- a/tools/libs/guest/xg_domain.c
+++ b/tools/libs/guest/xg_domain.c
@@ -37,7 +37,7 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
 {
     struct domain_info_context _di;
 
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     shared_info_any_t *live_shinfo;
     xen_capabilities_info_t xen_caps = "";
     unsigned long i;
@@ -49,9 +49,9 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
         return -1;
     }
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get domain info");
+        PERROR("Could not get dominfo for dom%u", domid);
         return -1;
     }
 
@@ -86,7 +86,7 @@ int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
                                        info.shared_info_frame);
     if ( !live_shinfo )
     {
-        PERROR("Could not map the shared info frame (MFN 0x%lx)",
+        PERROR("Could not map the shared info frame (MFN 0x%"PRIx64")",
                info.shared_info_frame);
         return -1;
     }
diff --git a/tools/libs/guest/xg_offline_page.c b/tools/libs/guest/xg_offline_page.c
index 8f0a252417..5f61d49456 100644
--- a/tools/libs/guest/xg_offline_page.c
+++ b/tools/libs/guest/xg_offline_page.c
@@ -366,7 +366,7 @@ static int clear_pte(xc_interface *xch, uint32_t domid,
  */
 
 static int is_page_exchangable(xc_interface *xch, uint32_t domid, xen_pfn_t mfn,
-                               xc_dominfo_t *info)
+                               xc_domaininfo_t *info)
 {
     uint32_t status;
     int rc;
@@ -377,7 +377,7 @@ static int is_page_exchangable(xc_interface *xch, uint32_t domid, xen_pfn_t mfn,
         DPRINTF("Dom0's page can't be LM");
         return 0;
     }
-    if (info->hvm)
+    if (info->flags & XEN_DOMINF_hvm_guest)
     {
         DPRINTF("Currently we can only live change PV guest's page\n");
         return 0;
@@ -458,7 +458,7 @@ err0:
 /* The domain should be suspended when called here */
 int xc_exchange_page(xc_interface *xch, uint32_t domid, xen_pfn_t mfn)
 {
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xc_domain_meminfo minfo;
     struct xc_mmu *mmu = NULL;
     struct pte_backup old_ptes = {NULL, 0, 0};
@@ -473,13 +473,13 @@ int xc_exchange_page(xc_interface *xch, uint32_t domid, xen_pfn_t mfn)
     xen_pfn_t *m2p_table;
     unsigned long max_mfn;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        ERROR("Could not get domain info");
+        PERROR("Could not get domain info for dom%u", domid);
         return -1;
     }
 
-    if (!info.shutdown || info.shutdown_reason != SHUTDOWN_suspend)
+    if (!dominfo_shutdown_with(&info, SHUTDOWN_suspend))
     {
         errno = EINVAL;
         ERROR("Can't exchange page unless domain is suspended\n");
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index e729a8106c..d73947094f 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -16,6 +16,7 @@
 #ifndef XG_PRIVATE_H
 #define XG_PRIVATE_H
 
+#include <inttypes.h>
 #include <unistd.h>
 #include <errno.h>
 #include <fcntl.h>
diff --git a/tools/libs/guest/xg_resume.c b/tools/libs/guest/xg_resume.c
index 77e2451a3c..c85d09a7f5 100644
--- a/tools/libs/guest/xg_resume.c
+++ b/tools/libs/guest/xg_resume.c
@@ -26,28 +26,28 @@
 static int modify_returncode(xc_interface *xch, uint32_t domid)
 {
     vcpu_guest_context_any_t ctxt;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     xen_capabilities_info_t caps;
     struct domain_info_context _dinfo = {};
     struct domain_info_context *dinfo = &_dinfo;
     int rc;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
-        PERROR("Could not get domain info");
+        PERROR("Could not get info for dom%u", domid);
         return -1;
     }
 
-    if ( !info.shutdown || (info.shutdown_reason != SHUTDOWN_suspend) )
+    if ( !dominfo_shutdown_with(&info, SHUTDOWN_suspend) )
     {
         ERROR("Dom %d not suspended: (shutdown %d, reason %d)", domid,
-              info.shutdown, info.shutdown_reason);
+              info.flags & XEN_DOMINF_shutdown,
+              dominfo_shutdown_reason(&info));
         errno = EINVAL;
         return -1;
     }
 
-    if ( info.hvm )
+    if ( info.flags & XEN_DOMINF_hvm_guest )
     {
         /* HVM guests without PV drivers have no return code to modify. */
         uint64_t irq = 0;
@@ -133,7 +133,7 @@ static int xc_domain_resume_hvm(xc_interface *xch, uint32_t domid)
 static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
 {
     DECLARE_DOMCTL;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     int i, rc = -1;
 #if defined(__i386__) || defined(__x86_64__)
     struct domain_info_context _dinfo = { .guest_width = 0,
@@ -146,7 +146,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
     xen_pfn_t *p2m = NULL;
 #endif
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         PERROR("Could not get domain info");
         return rc;
@@ -156,7 +156,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
      * (x86 only) Rewrite store_mfn and console_mfn back to MFN (from PFN).
      */
 #if defined(__i386__) || defined(__x86_64__)
-    if ( info.hvm )
+    if ( info.flags & XEN_DOMINF_hvm_guest )
         return xc_domain_resume_hvm(xch, domid);
 
     if ( xc_domain_get_guest_width(xch, domid, &dinfo->guest_width) != 0 )
diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 36d45ef56f..2f058ee3a6 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -220,7 +220,7 @@ struct xc_sr_context
     /* Plain VM, or checkpoints over time. */
     xc_stream_type_t stream_type;
 
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
 
     union /* Common save or restore data. */
     {
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 7314a24cf9..06231ca826 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -852,6 +852,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
                       xc_stream_type_t stream_type,
                       struct restore_callbacks *callbacks, int send_back_fd)
 {
+    bool hvm;
     xen_pfn_t nr_pfns;
     struct xc_sr_context ctx = {
         .xch = xch,
@@ -887,20 +888,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         break;
     }
 
-    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
+    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
     {
-        PERROR("Failed to get domain info");
-        return -1;
-    }
-
-    if ( ctx.dominfo.domid != dom )
-    {
-        ERROR("Domain %u does not exist", dom);
+        PERROR("Failed to get dominfo for dom%u", dom);
         return -1;
     }
 
+    hvm = ctx.dominfo.flags & XEN_DOMINF_hvm_guest;
     DPRINTF("fd %d, dom %u, hvm %u, stream_type %d",
-            io_fd, dom, ctx.dominfo.hvm, stream_type);
+            io_fd, dom, hvm, stream_type);
 
     ctx.domid = dom;
 
@@ -914,8 +910,7 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     ctx.restore.p2m_size = nr_pfns;
-    ctx.restore.ops = ctx.dominfo.hvm
-        ? restore_ops_x86_hvm : restore_ops_x86_pv;
+    ctx.restore.ops = hvm ? restore_ops_x86_hvm : restore_ops_x86_pv;
 
     if ( restore(&ctx) )
         return -1;
diff --git a/tools/libs/guest/xg_sr_restore_x86_pv.c b/tools/libs/guest/xg_sr_restore_x86_pv.c
index dc50b0f5a8..eaeb97f4a0 100644
--- a/tools/libs/guest/xg_sr_restore_x86_pv.c
+++ b/tools/libs/guest/xg_sr_restore_x86_pv.c
@@ -903,7 +903,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
         ctx->dominfo.shared_info_frame);
     if ( !guest_shinfo )
     {
-        PERROR("Failed to map Shared Info at mfn %#lx",
+        PERROR("Failed to map Shared Info at mfn %#"PRIx64,
                ctx->dominfo.shared_info_frame);
         goto err;
     }
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 9853d8d846..3b2c5222e4 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -336,19 +336,18 @@ static int suspend_domain(struct xc_sr_context *ctx)
     }
 
     /* Refresh domain information. */
-    if ( (xc_domain_getinfo(xch, ctx->domid, 1, &ctx->dominfo) != 1) ||
-         (ctx->dominfo.domid != ctx->domid) )
+    if ( xc_domain_getinfo_single(xch, ctx->domid, &ctx->dominfo) < 0 )
     {
         PERROR("Unable to refresh domain information");
         return -1;
     }
 
     /* Confirm the domain has actually been paused. */
-    if ( !ctx->dominfo.shutdown ||
-         (ctx->dominfo.shutdown_reason != SHUTDOWN_suspend) )
+    if ( !dominfo_shutdown_with(&ctx->dominfo, SHUTDOWN_suspend) )
     {
         ERROR("Domain has not been suspended: shutdown %d, reason %d",
-              ctx->dominfo.shutdown, ctx->dominfo.shutdown_reason);
+              ctx->dominfo.flags & XEN_DOMINF_shutdown,
+              dominfo_shutdown_reason(&ctx->dominfo));
         return -1;
     }
 
@@ -893,8 +892,7 @@ static int save(struct xc_sr_context *ctx, uint16_t guest_type)
         if ( rc )
             goto err;
 
-        if ( !ctx->dominfo.shutdown ||
-             (ctx->dominfo.shutdown_reason != SHUTDOWN_suspend) )
+        if ( !dominfo_shutdown_with(&ctx->dominfo, SHUTDOWN_suspend) )
         {
             ERROR("Domain has not been suspended");
             rc = -1;
@@ -989,6 +987,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
         .fd = io_fd,
         .stream_type = stream_type,
     };
+    bool hvm;
 
     /* GCC 4.4 (of CentOS 6.x vintage) can' t initialise anonymous unions. */
     ctx.save.callbacks = callbacks;
@@ -996,17 +995,13 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
     ctx.save.debug = !!(flags & XCFLAGS_DEBUG);
     ctx.save.recv_fd = recv_fd;
 
-    if ( xc_domain_getinfo(xch, dom, 1, &ctx.dominfo) != 1 )
+    if ( xc_domain_getinfo_single(xch, dom, &ctx.dominfo) < 0 )
     {
         PERROR("Failed to get domain info");
         return -1;
     }
 
-    if ( ctx.dominfo.domid != dom )
-    {
-        ERROR("Domain %u does not exist", dom);
-        return -1;
-    }
+    hvm = ctx.dominfo.flags & XEN_DOMINF_hvm_guest;
 
     /* Sanity check stream_type-related parameters */
     switch ( stream_type )
@@ -1018,7 +1013,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
         assert(callbacks->checkpoint && callbacks->postcopy);
         /* Fallthrough */
     case XC_STREAM_PLAIN:
-        if ( ctx.dominfo.hvm )
+        if ( hvm )
             assert(callbacks->switch_qemu_logdirty);
         break;
 
@@ -1028,11 +1023,11 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
     }
 
     DPRINTF("fd %d, dom %u, flags %u, hvm %d",
-            io_fd, dom, flags, ctx.dominfo.hvm);
+            io_fd, dom, flags, hvm);
 
     ctx.domid = dom;
 
-    if ( ctx.dominfo.hvm )
+    if ( hvm )
     {
         ctx.save.ops = save_ops_x86_hvm;
         return save(&ctx, DHDR_TYPE_X86_HVM);
diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/guest/xg_sr_save_x86_pv.c
index 4964f1f7b8..f3d7a7a71a 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -20,7 +20,7 @@ static int map_shinfo(struct xc_sr_context *ctx)
         xch, ctx->domid, PAGE_SIZE, PROT_READ, ctx->dominfo.shared_info_frame);
     if ( !ctx->x86.pv.shinfo )
     {
-        PERROR("Failed to map shared info frame at mfn %#lx",
+        PERROR("Failed to map shared info frame at mfn %#"PRIx64,
                ctx->dominfo.shared_info_frame);
         return -1;
     }
@@ -943,7 +943,7 @@ static int normalise_pagetable(struct xc_sr_context *ctx, const uint64_t *src,
 #ifdef __i386__
             if ( mfn == INVALID_MFN )
             {
-                if ( !ctx->dominfo.paused )
+                if ( !(ctx->dominfo.flags & XEN_DOMINF_paused) )
                     errno = EAGAIN;
                 else
                 {
@@ -965,7 +965,7 @@ static int normalise_pagetable(struct xc_sr_context *ctx, const uint64_t *src,
 
             if ( !mfn_in_pseudophysmap(ctx, mfn) )
             {
-                if ( !ctx->dominfo.paused )
+                if ( !(ctx->dominfo.flags & XEN_DOMINF_paused) )
                     errno = EAGAIN;
                 else
                 {
diff --git a/tools/libs/light/libxl_sched.c b/tools/libs/light/libxl_sched.c
index 841c05b0ef..2d6635dae7 100644
--- a/tools/libs/light/libxl_sched.c
+++ b/tools/libs/light/libxl_sched.c
@@ -498,10 +498,10 @@ static int sched_rtds_vcpu_get(libxl__gc *gc, uint32_t domid,
 {
     uint32_t num_vcpus;
     int i, r, rc;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -552,10 +552,10 @@ static int sched_rtds_vcpu_get_all(libxl__gc *gc, uint32_t domid,
 {
     uint32_t num_vcpus;
     int i, r, rc;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -602,10 +602,10 @@ static int sched_rtds_vcpu_set(libxl__gc *gc, uint32_t domid,
     int r, rc;
     int i;
     uint16_t max_vcpuid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
@@ -662,11 +662,11 @@ static int sched_rtds_vcpu_set_all(libxl__gc *gc, uint32_t domid,
     int r, rc;
     int i;
     uint16_t max_vcpuid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct xen_domctl_schedparam_vcpu *vcpus;
     uint32_t num_vcpus;
 
-    r = xc_domain_getinfo(CTX->xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(CTX->xch, domid, &info);
     if (r < 0) {
         LOGED(ERROR, domid, "Getting domain info");
         rc = ERROR_FAIL;
diff --git a/tools/libs/light/libxl_x86_acpi.c b/tools/libs/light/libxl_x86_acpi.c
index 22eb160659..620f3c700c 100644
--- a/tools/libs/light/libxl_x86_acpi.c
+++ b/tools/libs/light/libxl_x86_acpi.c
@@ -87,16 +87,16 @@ static int init_acpi_config(libxl__gc *gc,
 {
     xc_interface *xch = dom->xch;
     uint32_t domid = dom->guest_domid;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     struct hvm_info_table *hvminfo;
     int i, r, rc;
 
     config->dsdt_anycpu = config->dsdt_15cpu = dsdt_pvh;
     config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_pvh_len;
 
-    r = xc_domain_getinfo(xch, domid, 1, &info);
+    r = xc_domain_getinfo_single(xch, domid, &info);
     if (r < 0) {
-        LOG(ERROR, "getdomaininfo failed (rc=%d)", r);
+        LOGED(ERROR, domid, "getdomaininfo failed");
         rc = ERROR_FAIL;
         goto out;
     }
diff --git a/tools/misc/xen-hvmcrash.c b/tools/misc/xen-hvmcrash.c
index 4f0dabcb18..1d058fa40a 100644
--- a/tools/misc/xen-hvmcrash.c
+++ b/tools/misc/xen-hvmcrash.c
@@ -48,7 +48,7 @@ main(int argc, char **argv)
 {
     int domid;
     xc_interface *xch;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     int ret;
     uint32_t len;
     uint8_t *buf;
@@ -66,13 +66,13 @@ main(int argc, char **argv)
         exit(1);
     }
 
-    ret = xc_domain_getinfo(xch, domid, 1, &dominfo);
+    ret = xc_domain_getinfo_single(xch, domid, &dominfo);
     if (ret < 0) {
         perror("xc_domain_getinfo");
         exit(1);
     }
 
-    if (!dominfo.hvm) {
+    if (!(dominfo.flags & XEN_DOMINF_hvm_guest)) {
         fprintf(stderr, "domain %d is not HVM\n", domid);
         exit(1);
     }
diff --git a/tools/misc/xen-lowmemd.c b/tools/misc/xen-lowmemd.c
index a3a2741242..9d5cb549a8 100644
--- a/tools/misc/xen-lowmemd.c
+++ b/tools/misc/xen-lowmemd.c
@@ -38,7 +38,7 @@ void cleanup(void)
 #define BUFSZ 512
 void handle_low_mem(void)
 {
-    xc_dominfo_t  dom0_info;
+    xc_domaininfo_t dom0_info;
     xc_physinfo_t info;
     unsigned long long free_pages, dom0_pages, diff, dom0_target;
     char data[BUFSZ], error[BUFSZ];
@@ -58,13 +58,13 @@ void handle_low_mem(void)
         return;
     diff = THRESHOLD_PG - free_pages; 
 
-    if (xc_domain_getinfo(xch, 0, 1, &dom0_info) < 1)
+    if (xc_domain_getinfo_single(xch, 0, &dom0_info) < 0)
     {
         perror("Failed to get dom0 info");
         return;
     }
 
-    dom0_pages = (unsigned long long) dom0_info.nr_pages;
+    dom0_pages = (unsigned long long) dom0_info.tot_pages;
     printf("Dom0 pages: 0x%llx:%llu\n", dom0_pages, dom0_pages);
     dom0_target = dom0_pages - diff;
     if (dom0_target <= DOM0_FLOOR_PG)
diff --git a/tools/misc/xen-mfndump.c b/tools/misc/xen-mfndump.c
index b32c95e262..8863ece3f5 100644
--- a/tools/misc/xen-mfndump.c
+++ b/tools/misc/xen-mfndump.c
@@ -74,7 +74,7 @@ int dump_m2p_func(int argc, char *argv[])
 int dump_p2m_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     unsigned long i;
     int domid;
 
@@ -85,8 +85,7 @@ int dump_p2m_func(int argc, char *argv[])
     }
     domid = atoi(argv[0]);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -158,7 +157,7 @@ int dump_p2m_func(int argc, char *argv[])
 int dump_ptes_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     void *page = NULL;
     unsigned long i, max_mfn;
     int domid, pte_num, rc = 0;
@@ -172,8 +171,7 @@ int dump_ptes_func(int argc, char *argv[])
     domid = atoi(argv[0]);
     mfn = strtoul(argv[1], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -266,7 +264,7 @@ int dump_ptes_func(int argc, char *argv[])
 int lookup_pte_func(int argc, char *argv[])
 {
     struct xc_domain_meminfo minfo;
-    xc_dominfo_t info;
+    xc_domaininfo_t info;
     void *page = NULL;
     unsigned long i, j;
     int domid, pte_num;
@@ -280,8 +278,7 @@ int lookup_pte_func(int argc, char *argv[])
     domid = atoi(argv[0]);
     mfn = strtoul(argv[1], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
-         info.domid != domid )
+    if ( xc_domain_getinfo_single(xch, domid, &info) < 0 )
     {
         ERROR("Failed to obtain info for domain %d\n", domid);
         return -1;
@@ -336,7 +333,7 @@ int lookup_pte_func(int argc, char *argv[])
 
 int memcmp_mfns_func(int argc, char *argv[])
 {
-    xc_dominfo_t info1, info2;
+    xc_domaininfo_t info1, info2;
     void *page1 = NULL, *page2 = NULL;
     int domid1, domid2;
     xen_pfn_t mfn1, mfn2;
@@ -352,9 +349,8 @@ int memcmp_mfns_func(int argc, char *argv[])
     mfn1 = strtoul(argv[1], NULL, 16);
     mfn2 = strtoul(argv[3], NULL, 16);
 
-    if ( xc_domain_getinfo(xch, domid1, 1, &info1) != 1 ||
-         xc_domain_getinfo(xch, domid2, 1, &info2) != 1 ||
-         info1.domid != domid1 || info2.domid != domid2)
+    if ( xc_domain_getinfo_single(xch, domid1, &info1) < 0 ||
+         xc_domain_getinfo_single(xch, domid2, &info2) < 0)
     {
         ERROR("Failed to obtain info for domains\n");
         return -1;
diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
index 5b688a54af..ba2ce17a17 100644
--- a/tools/misc/xen-vmtrace.c
+++ b/tools/misc/xen-vmtrace.c
@@ -133,15 +133,15 @@ int main(int argc, char **argv)
 
     while ( !interrupted )
     {
-        xc_dominfo_t dominfo;
+        xc_domaininfo_t dominfo;
 
         if ( get_more_data() )
             goto out;
 
         usleep(1000 * 100);
 
-        if ( xc_domain_getinfo(xch, domid, 1, &dominfo) != 1 ||
-             dominfo.domid != domid || dominfo.shutdown )
+        if ( xc_domain_getinfo_single(xch, domid, &dominfo) < 0 ||
+             (dominfo.flags & XEN_DOMINF_shutdown) )
         {
             if ( get_more_data() )
                 goto out;
diff --git a/tools/vchan/vchan-socket-proxy.c b/tools/vchan/vchan-socket-proxy.c
index e1d959c6d1..9c4c336b03 100644
--- a/tools/vchan/vchan-socket-proxy.c
+++ b/tools/vchan/vchan-socket-proxy.c
@@ -222,7 +222,7 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
     struct libxenvchan *ctrl = NULL;
     struct xs_handle *xs = NULL;
     xc_interface *xc = NULL;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
     char **watch_ret;
     unsigned int watch_num;
     int ret;
@@ -254,12 +254,12 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
         if (ctrl)
             break;
 
-        ret = xc_domain_getinfo(xc, domid, 1, &dominfo);
+        ret = xc_domain_getinfo_single(xc, domid, &dominfo);
         /* break the loop if domain is definitely not there anymore, but
          * continue if it is or the call failed (like EPERM) */
         if (ret == -1 && errno == ESRCH)
             break;
-        if (ret == 1 && (dominfo.domid != (uint32_t)domid || dominfo.dying))
+        if (ret == 0 && (dominfo.flags & XEN_DOMINF_dying))
             break;
     }
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f62be2245c..aeb7595ae1 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -339,15 +339,14 @@ static int destroy_domain(void *_domain)
 	return 0;
 }
 
-static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
+static bool get_domain_info(unsigned int domid, xc_domaininfo_t *dominfo)
 {
-	return xc_domain_getinfo(*xc_handle, domid, 1, dominfo) == 1 &&
-	       dominfo->domid == domid;
+	return xc_domain_getinfo_single(*xc_handle, domid, dominfo) == 0;
 }
 
 static int check_domain(const void *k, void *v, void *arg)
 {
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 	struct connection *conn;
 	bool dom_valid;
 	struct domain *domain = v;
@@ -360,12 +359,12 @@ static int check_domain(const void *k, void *v, void *arg)
 		return 0;
 	}
 	if (dom_valid) {
-		if ((dominfo.crashed || dominfo.shutdown)
+		if ((dominfo.flags & XEN_DOMINF_shutdown)
 		    && !domain->shutdown) {
 			domain->shutdown = true;
 			*notify = true;
 		}
-		if (!dominfo.dying)
+		if (!(dominfo.flags & XEN_DOMINF_dying))
 			return 0;
 	}
 	if (domain->conn) {
@@ -486,7 +485,7 @@ static struct domain *find_or_alloc_domain(const void *ctx, unsigned int domid)
 static struct domain *find_or_alloc_existing_domain(unsigned int domid)
 {
 	struct domain *domain;
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 
 	domain = find_domain_struct(domid);
 	if (!domain && get_domain_info(domid, &dominfo))
@@ -1010,7 +1009,7 @@ int domain_alloc_permrefs(struct node_perms *perms)
 {
 	unsigned int i, domid;
 	struct domain *d;
-	xc_dominfo_t dominfo;
+	xc_domaininfo_t dominfo;
 
 	for (i = 0; i < perms->num; i++) {
 		domid = perms->p[i].id;
diff --git a/tools/xentrace/xenctx.c b/tools/xentrace/xenctx.c
index 85ba0c0fa6..9acb9db460 100644
--- a/tools/xentrace/xenctx.c
+++ b/tools/xentrace/xenctx.c
@@ -92,7 +92,7 @@ static struct xenctx {
     int do_stack;
 #endif
     int kernel_start_set;
-    xc_dominfo_t dominfo;
+    xc_domaininfo_t dominfo;
 } xenctx;
 
 struct symbol {
@@ -989,7 +989,7 @@ static void dump_ctx(int vcpu)
 
 #if defined(__i386__) || defined(__x86_64__)
     {
-        if (xenctx.dominfo.hvm) {
+        if (xenctx.dominfo.flags & XEN_DOMINF_hvm_guest) {
             struct hvm_hw_cpu cpuctx;
             xen_capabilities_info_t xen_caps = "";
             if (xc_domain_hvm_getcontext_partial(
@@ -1269,9 +1269,9 @@ int main(int argc, char **argv)
         exit(-1);
     }
 
-    ret = xc_domain_getinfo(xenctx.xc_handle, xenctx.domid, 1, &xenctx.dominfo);
+    ret = xc_domain_getinfo_single(xenctx.xc_handle, xenctx.domid, &xenctx.dominfo);
     if (ret < 0) {
-        perror("xc_domain_getinfo");
+        perror("xc_domain_getinfo_single");
         exit(-1);
     }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 09 16:07:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:07:28 +0000
X-Google-Smtp-Source: ACHHUZ4hST7B0s6Bpmr+TX+xyHq63ScfSVVrzgqYaS3Wc82T0K9gxOlyRi3zklQr+zUDJ0Xea2mZKg==
X-Received: by 2002:a05:6000:1b8f:b0:306:2b31:5935 with SMTP id r15-20020a0560001b8f00b003062b315935mr8463700wru.55.1683648441777;
        Tue, 09 May 2023 09:07:21 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v4 3/3] domctl: Modify XEN_DOMCTL_getdomaininfo to fail if domid is not found
Date: Tue,  9 May 2023 17:07:12 +0100
Message-Id: <20230509160712.11685-4-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
References: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It previously mimicked the getdomaininfo sysctl semantics by returning
information for the first existing domid equal to or higher than the
requested one. This unintuitive behaviour has caused quite a few mistakes
and makes the call needlessly slow in its error path.

This patch removes the fallback search, returning -ESRCH if the requested
domain doesn't exist. Domain discovery can still be done through the sysctl
interface as that performs a linear search on the list of domains.

With this modification the xc_domain_getinfo() function is deprecated and
removed to make sure it's not mistakenly used expecting the old behaviour.
The new xc wrapper is xc_domain_getinfo_single().

All previous callers of xc_domain_getinfo() have been updated to use
xc_domain_getinfo_single() or xc_domain_getinfolist() instead. This also
means xc_dominfo_t is no longer used by anything and can be purged.

Resolves: xen-project/xen#105
Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>

---
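As a side note, the behavioural change can be modelled with a small
standalone C sketch (the domid table and helper names below are purely
illustrative, not code from this series):

```c
#include <errno.h>
#include <stddef.h>

/* Illustrative stand-in for the hypervisor's domain list, sorted by domid. */
static const unsigned int domids[] = { 0, 3, 7 };

/* Old XEN_DOMCTL_getdomaininfo behaviour: fall back to the first
 * existing domid >= the requested one. */
static int lookup_old(unsigned int domid, unsigned int *found)
{
    for ( size_t i = 0; i < sizeof(domids) / sizeof(domids[0]); i++ )
        if ( domids[i] >= domid )
        {
            *found = domids[i];
            return 0;
        }
    return -ESRCH;
}

/* New behaviour: exact match only; -ESRCH otherwise. */
static int lookup_new(unsigned int domid, unsigned int *found)
{
    for ( size_t i = 0; i < sizeof(domids) / sizeof(domids[0]); i++ )
        if ( domids[i] == domid )
        {
            *found = domids[i];
            return 0;
        }
    return -ESRCH;
}
```

With the table above, the old semantics would answer a query for domid 1
with information about domid 3; the new semantics fail with -ESRCH, and
domain enumeration is left to the sysctl interface.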
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 tools/include/xenctrl.h     | 43 ----------------------
 tools/libs/ctrl/xc_domain.c | 73 -------------------------------------
 xen/common/domctl.c         | 32 +---------------
 3 files changed, 2 insertions(+), 146 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 086314d28a..dba33d5d0f 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -444,28 +444,6 @@ typedef struct xc_core_header {
  * DOMAIN MANAGEMENT FUNCTIONS
  */
 
-typedef struct xc_dominfo {
-    uint32_t      domid;
-    uint32_t      ssidref;
-    unsigned int  dying:1, crashed:1, shutdown:1,
-                  paused:1, blocked:1, running:1,
-                  hvm:1, debugged:1, xenstore:1, hap:1;
-    unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
-    unsigned long nr_pages; /* current number, not maximum */
-    unsigned long nr_outstanding_pages;
-    unsigned long nr_shared_pages;
-    unsigned long nr_paged_pages;
-    unsigned long shared_info_frame;
-    uint64_t      cpu_time;
-    unsigned long max_memkb;
-    unsigned int  nr_online_vcpus;
-    unsigned int  max_vcpu_id;
-    xen_domain_handle_t handle;
-    unsigned int  cpupool;
-    uint8_t       gpaddr_bits;
-    struct xen_arch_domainconfig arch_config;
-} xc_dominfo_t;
-
 typedef xen_domctl_getdomaininfo_t xc_domaininfo_t;
 
 static inline unsigned int dominfo_shutdown_reason(const xc_domaininfo_t *info)
@@ -721,27 +699,6 @@ int xc_domain_getinfo_single(xc_interface *xch,
                              uint32_t domid,
                              xc_domaininfo_t *info);
 
-/**
- * This function will return information about one or more domains. It is
- * designed to iterate over the list of domains. If a single domain is
- * requested, this function will return the next domain in the list - if
- * one exists. It is, therefore, important in this case to make sure the
- * domain requested was the one returned.
- *
- * @parm xch a handle to an open hypervisor interface
- * @parm first_domid the first domain to enumerate information from.  Domains
- *                   are currently enumerate in order of creation.
- * @parm max_doms the number of elements in info
- * @parm info an array of max_doms size that will contain the information for
- *            the enumerated domains.
- * @return the number of domains enumerated or -1 on error
- */
-int xc_domain_getinfo(xc_interface *xch,
-                      uint32_t first_domid,
-                      unsigned int max_doms,
-                      xc_dominfo_t *info);
-
-
 /**
  * This function will set the execution context for the specified vcpu.
  *
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 66179e6f12..724fa6f753 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -357,85 +357,12 @@ int xc_domain_getinfo_single(xc_interface *xch,
     if ( do_domctl(xch, &domctl) < 0 )
         return -1;
 
-    if ( domctl.u.getdomaininfo.domain != domid )
-    {
-        errno = ESRCH;
-        return -1;
-    }
-
     if ( info )
         *info = domctl.u.getdomaininfo;
 
     return 0;
 }
 
-int xc_domain_getinfo(xc_interface *xch,
-                      uint32_t first_domid,
-                      unsigned int max_doms,
-                      xc_dominfo_t *info)
-{
-    unsigned int nr_doms;
-    uint32_t next_domid = first_domid;
-    DECLARE_DOMCTL;
-    int rc = 0;
-
-    memset(info, 0, max_doms*sizeof(xc_dominfo_t));
-
-    for ( nr_doms = 0; nr_doms < max_doms; nr_doms++ )
-    {
-        domctl.cmd = XEN_DOMCTL_getdomaininfo;
-        domctl.domain = next_domid;
-        if ( (rc = do_domctl(xch, &domctl)) < 0 )
-            break;
-        info->domid      = domctl.domain;
-
-        info->dying    = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_dying);
-        info->shutdown = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_shutdown);
-        info->paused   = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_paused);
-        info->blocked  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_blocked);
-        info->running  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_running);
-        info->hvm      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hvm_guest);
-        info->debugged = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_debugged);
-        info->xenstore = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_xs_domain);
-        info->hap      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hap);
-
-        info->shutdown_reason =
-            (domctl.u.getdomaininfo.flags>>XEN_DOMINF_shutdownshift) &
-            XEN_DOMINF_shutdownmask;
-
-        if ( info->shutdown && (info->shutdown_reason == SHUTDOWN_crash) )
-        {
-            info->shutdown = 0;
-            info->crashed  = 1;
-        }
-
-        info->ssidref  = domctl.u.getdomaininfo.ssidref;
-        info->nr_pages = domctl.u.getdomaininfo.tot_pages;
-        info->nr_outstanding_pages = domctl.u.getdomaininfo.outstanding_pages;
-        info->nr_shared_pages = domctl.u.getdomaininfo.shr_pages;
-        info->nr_paged_pages = domctl.u.getdomaininfo.paged_pages;
-        info->max_memkb = domctl.u.getdomaininfo.max_pages << (PAGE_SHIFT-10);
-        info->shared_info_frame = domctl.u.getdomaininfo.shared_info_frame;
-        info->cpu_time = domctl.u.getdomaininfo.cpu_time;
-        info->nr_online_vcpus = domctl.u.getdomaininfo.nr_online_vcpus;
-        info->max_vcpu_id = domctl.u.getdomaininfo.max_vcpu_id;
-        info->cpupool = domctl.u.getdomaininfo.cpupool;
-        info->gpaddr_bits = domctl.u.getdomaininfo.gpaddr_bits;
-        info->arch_config = domctl.u.getdomaininfo.arch_config;
-
-        memcpy(info->handle, domctl.u.getdomaininfo.handle,
-               sizeof(xen_domain_handle_t));
-
-        next_domid = (uint16_t)domctl.domain + 1;
-        info++;
-    }
-
-    if ( nr_doms == 0 )
-        return rc;
-
-    return nr_doms;
-}
-
 int xc_domain_getinfolist(xc_interface *xch,
                           uint32_t first_domain,
                           unsigned int max_domains,
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ad71ad8a4c..24a14996e6 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -314,7 +314,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         /* fall through */
     default:
         d = rcu_lock_domain_by_id(op->domain);
-        if ( !d && op->cmd != XEN_DOMCTL_getdomaininfo )
+        if ( !d )
             return -ESRCH;
     }
 
@@ -534,42 +534,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     case XEN_DOMCTL_getdomaininfo:
     {
-        domid_t dom = DOMID_INVALID;
-
-        if ( !d )
-        {
-            ret = -EINVAL;
-            if ( op->domain >= DOMID_FIRST_RESERVED )
-                break;
-
-            rcu_read_lock(&domlist_read_lock);
-
-            dom = op->domain;
-            for_each_domain ( d )
-                if ( d->domain_id >= dom )
-                    break;
-        }
-
-        ret = -ESRCH;
-        if ( d == NULL )
-            goto getdomaininfo_out;
-
         ret = xsm_getdomaininfo(XSM_HOOK, d);
         if ( ret )
-            goto getdomaininfo_out;
+            break;
 
         getdomaininfo(d, &op->u.getdomaininfo);
 
         op->domain = op->u.getdomaininfo.domain;
         copyback = 1;
-
-    getdomaininfo_out:
-        /* When d was non-NULL upon entry, no cleanup is needed. */
-        if ( dom == DOMID_INVALID )
-            break;
-
-        rcu_read_unlock(&domlist_read_lock);
-        d = NULL;
         break;
     }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 09 16:13:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532325.828481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwPxl-000428-Ii; Tue, 09 May 2023 16:13:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532325.828481; Tue, 09 May 2023 16:13:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwPxl-000421-Fz; Tue, 09 May 2023 16:13:17 +0000
Received: by outflank-mailman (input) for mailman id 532325;
 Tue, 09 May 2023 16:13:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DnBy=A6=rabbit.lu=slack@srs-se1.protection.inumbo.net>)
 id 1pwPxj-00041v-Fk
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 16:13:15 +0000
Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com
 [2a00:1450:4864:20::336])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6595d632-ee84-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 18:13:12 +0200 (CEST)
Received: by mail-wm1-x336.google.com with SMTP id
 5b1f17b1804b1-3f195b164c4so40383015e9.1
 for <xen-devel@lists.xenproject.org>; Tue, 09 May 2023 09:13:13 -0700 (PDT)
Received: from [192.168.2.1] (82-64-138-184.subs.proxad.net. [82.64.138.184])
 by smtp.googlemail.com with ESMTPSA id
 p13-20020a7bcc8d000000b003f4289b18a7sm4568730wma.5.2023.05.09.09.13.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 09 May 2023 09:13:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6595d632-ee84-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=rabbit-lu.20221208.gappssmtp.com; s=20221208; t=1683648792; x=1686240792;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=zZL5/yJ0py80FB7KCRxPZ4CEHo/l8qnSLGQegUdOStw=;
        b=E0Do2rJAH8C9Bb1YZ8sKe9/hCwpL0bItmV+hDtIzjnsyYgbNx9idKtIks76Gpn9Scf
         Ktbmy6E8utPfROM+S5jHLcLlLUcF5+421uJ/x0EkZoZq79MdCe99icuU/AFPWrOv7zDM
         fhH84n58hZ44w4qNc144fQAccjG3UAxog7q3KrESczB/WrPUjJ2BpBYhTsjQpdLuE7HY
         dQzA/V4yMNNIwv/a3KqSGcrnSgeCmD2tYBrPZXl7LU2jb5OKIKEYKybbrqY2UhAOOOux
         kYA4jkFo4u36TSqkTpXfFQUO2iTfRN0R4zTXaWJ4FUSX2H6jw5AFd0jl/2hs5RI1axRU
         Pppg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683648792; x=1686240792;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=zZL5/yJ0py80FB7KCRxPZ4CEHo/l8qnSLGQegUdOStw=;
        b=B9ywZD15g0vR/SpYpKOGJ4DOXtppLn6QWI55Jl9C+4g2JcalyP0LJIEWAyJJZpnbks
         Fz7VMqZzX3ptZc9sqZgTPWuzINuw+HmYB/GfqHWp1PqhnuqjedXTPDeOq1431zQkvdyn
         poS8Spnn6T1t4IwTKQaRHQmwgQTWS0ZS4PxjGQz9L7k7p3H7u9PqVv35qh4IP9br0JRn
         t6Yxr4Wq+59a0P4HRv2bHGGJDo8zejFPUYeHJ4z4P+Uf9lR8aBnHLOcyoxv+M7whLzvP
         KsJ44mZaL6B/5nD+u/ijfRDI1ckmtjZUCvEhe/1OzNYY6UwDSze2gB7U17z5r8N6J59f
         7zdw==
X-Gm-Message-State: AC+VfDwCWDXZEgNjEDsX1gyaBRR3cK1vYm98J67D+5LjneQlBVv1gtFU
	R2+6WGniLDfnW6l8IJpNKwjZvcsVgrvS+4wfcjaAXg==
X-Google-Smtp-Source: ACHHUZ4xJF31OFqZKnjqn2rIjsFe4D2XckHUK2oZpnmMXoAutKwB45h7WrFwn0TK1gkLOLUosD5gLw==
X-Received: by 2002:a7b:cd94:0:b0:3f4:266d:5901 with SMTP id y20-20020a7bcd94000000b003f4266d5901mr4918059wmj.35.1683648792474;
        Tue, 09 May 2023 09:13:12 -0700 (PDT)
Message-ID: <bea250ed-a24f-8983-42fd-d11c6072bc1e@rabbit.lu>
Date: Tue, 9 May 2023 18:13:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: xenstored: EACCESS error accessing control/feature-balloon 1
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Yann Dirson <yann.dirson@vates.fr>, Juergen Gross <jgross@suse.com>,
 =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edwin.torok@cloud.com>,
 Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <3065c524-07c7-6458-ff4c-ba68ff78c946@rabbit.lu>
 <474b531f-2067-a5d4-8b01-5b8ef1f7061d@citrix.com>
 <678df1ff-df18-b063-eda3-2a1aed6d40f8@vates.fr>
 <50bf6b82-965b-d17c-7c5a-49c703991504@rabbit.lu>
 <f44261a2-df39-f69a-9798-dc1d656e6dac@vates.fr>
 <a51e0f7e-aed0-2ec9-f451-2e750636fb78@rabbit.lu>
 <9a5045b8-43f3-d418-9e77-418b6db91f71@vates.fr>
From: zithro <slack@rabbit.lu>
In-Reply-To: <9a5045b8-43f3-d418-9e77-418b6db91f71@vates.fr>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 09 May 2023 10:50, Yann Dirson wrote:
> On 5/4/23 20:04, zithro wrote:
>> On 04 May 2023 17:59, Yann Dirson wrote:
>>>
>>> On 5/4/23 15:58, zithro wrote:
>>>> Hi,
>>>>
>>>> [ snipped for brevity, report summary:
>>>> XAPI daemon in domU tries to write to a non-existent xenstore node in
>>>> a non-XAPI dom0 ]
>>>>
>>>> On 12 Apr 2023 18:41, Yann Dirson wrote:
>>>>> Is there anything besides XAPI using this node, or the other data
>>>>> published by xe-daemon?
>>>>
>>>> On my vanilla Xen (ie. non-XAPI), I have no node about "balloon"-ing
>>>> in xenstore (either dom0 or domU nodes, but I'm not using ballooning
>>>> in both).
>>>>
>>>>> Maybe the original issue is just that there is no reason to have
>>>>> xe-guest-utilities installed in this setup?
>>>>
>>>> That's what I thought as I'm not using XAPI, so maybe the problem
>>>> should only be addressed to the truenas team ? I posted on their forum
>>>> but got no answer.
>>>> I killed the 'xe-daemon' in both setups without loss of functionality.
>>>>
>>>> My wild guess is that 'xe-daemon', 'xe-update-guest-attrs' and all
>>>> 'xenstore* commands' are leftovers from when Xen was working as a dom0
>>>> under FreeBSD (why would a *domU* have them ?).
>>>
>>> That would not be correct: xenstore* are useful in guests, should you
>>> want to read/write to the XenStore manually or from scripts;
>>
>> Didn't know that, can you give some use cases (or URLs) for which it
>> is useful, with or without XAPI ?
> 
> You can get other examples in
> https://xenbits.xen.org/docs/unstable/misc/xenstore-paths.html#domain-controlled-paths

Thanks, I should pay more attention to headings ;)
Comparing the docs with the xe-daemon, I spotted some inconsistencies,
so I posted a bug report on the TrueNAS bug tracker:
https://ixsystems.atlassian.net/browse/NAS-121872
(It's a bit messy though, /me not proud).
I didn't check whether FreeBSD and TrueNAS carry changes from upstream,
but it may be of interest to Citrix devs too?
Apologies in advance for mistakes due to misunderstanding/noobiness.

>> PS: small mistake in "man/xenstore-write.1.html" (from at least 4.14
>> onward): the synopsis reads "xenstore-read" instead of "xenstore-write".
> Patch sent, thanks.

No prob and happy to help back !
(@Yann: please tell that to your colleague C. Schultz, so he's more 
inclined to lemme write articles for the Xen docs ^_^).

>> Also, the -s option disappeared from unstable, although that may be
>> expected. I don't know its purpose either.
> 
> See
> https://github.com/xen-project/xen/commit/c65687ed16d2289ec91036ec2862a4b4bd34ea4f

Ah yes, understood.
To complete that commit, I found a few other places in
"xenstore_client.c" where "-s" has not been removed, in the usage()
function:

Example for line 97, diff style:
- errx(1, "Usage: %s %s[-h] [-p] [-s] [-R] key [...]", progname, mstr);
+ errx(1, "Usage: %s %s[-h] [-p] [-R] key [...]", progname, mstr);

The same deletion is needed on lines 100, 103, 109, 112 and 115.


KR,
zithro/Cyril


From xen-devel-bounces@lists.xenproject.org Tue May 09 16:15:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:15:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532330.828491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwPzi-0004be-VB; Tue, 09 May 2023 16:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532330.828491; Tue, 09 May 2023 16:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwPzi-0004bX-S1; Tue, 09 May 2023 16:15:18 +0000
Received: by outflank-mailman (input) for mailman id 532330;
 Tue, 09 May 2023 16:15:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S+Ht=A6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwPzh-0004bR-MH
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 16:15:17 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2062d.outbound.protection.outlook.com
 [2a01:111:f400:7d00::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae815f62-ee84-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 18:15:15 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB8049.eurprd04.prod.outlook.com (2603:10a6:20b:24c::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Tue, 9 May
 2023 16:15:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 16:15:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae815f62-ee84-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h3tMZ6ZYotA8Ewj5tUIu1Zj2u/lpR+hB2P0aeadaas8qRryrHVUOCqK8/QGuYAWCYY4IH//qFulHtVNPO3Jt0IihYewOdHTq5mI5jXSOXdsU/DUl3xyMlWKZi/jdaUE48unrUIjEbmU5KaxTT4C74G6Ik6+XbAsXOGPkhJrrtuXHqqa8I0p0c2p+5vNH4SjoLGsYCoMIgZiLVsnCPkaMA8I3jX1kNq0BcEDZefqIBydVJZy3T5IOziw5JjqvwjfXVE0/att9TRJAPCyIRg10c/qljkYAEAfMhr9xbe9hDCPvHYyOVC4nEhxw09h6qPi2wNGZDC4R0aeugErKfy4+Ww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+FgRHNRZj2MMTYp9um+pHQVsDSUaxkxUPLIFhyLAB1k=;
 b=W+V3QvBxqvug9uz+8va+V0BE1iwd3uWMEHJkCIyxUq1fHRFRgNPhbywF6KbiwWSo2H3qUi/MLcH+yjewSal4DqblBqDUBDBULanHgOKkT+XfPvkwahldj+gOYk3DtYfYM4bGHMY79wGybu0FRza1RkxX1us+ATa4qxd7y9M7VQdMkusmTQoKEppsWLkhQnT7BHgvvto/wrW1S6YhOea6qQ4C5Uy6aA/qH+aOVm/EcMyVEoIWeUE77lz4ggE7TXVfKduG2LI/1F+UPrABAABwkZxeVQmE2Q5EXGy8IcoN8+l1lkt/OSS7DmLAihBgY/+PXcSHqdx0oKDSzGXB/z3wCA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+FgRHNRZj2MMTYp9um+pHQVsDSUaxkxUPLIFhyLAB1k=;
 b=auXvdM3Kgq+oM6v8dAh7iP7im5kzGGj81esxqR0a9w5er7DeqfZ+HVdUYmCThyV5wrUETBXVfB17HZxCb7lPB3avLo7dPSGe/XjCvBBvjJ5t22AiUwmKM7Jo6N+C6D2/RrxPJUbMSIAyn8+HP3lxly4yEXrULLRHCfi1zPmBWw4LpHtGszQ2Bc+XP8R6DATZx17IHV8LeBw77Vr7P85SJynqdbES8o4FR6B/kYnElqMJ7e8MuNIxG+8kCZWxpITakqChoujNRtSU7ghkNR8kyd5te6IlnqRCHLAcDMZMLldQKQ96tICB/unuQv7OTq3opSIaYl61MzopVAYA3ZYwkg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c8cb1df9-33af-8cae-291d-9a86a3b7f6b9@suse.com>
Date: Tue, 9 May 2023 18:15:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/6] x86/cpu-policy: Drop build time cross-checks of
 featureset sizes
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-2-andrew.cooper3@citrix.com>
 <6531df09-7f7f-a1e2-5993-bd2a14e22421@suse.com>
 <18090224-6838-8ed4-6be5-ae10a334e277@citrix.com>
 <bbbd4c8b-e681-f785-b85c-474b380d6160@suse.com>
 <742a5807-dd53-0cd1-d478-aed567d5c4f5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <742a5807-dd53-0cd1-d478-aed567d5c4f5@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0144.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB8049:EE_
X-MS-Office365-Filtering-Correlation-Id: 34c28b2d-a34e-4b8e-295b-08db50a891e6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 34c28b2d-a34e-4b8e-295b-08db50a891e6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 16:15:13.7216
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ipPCiu5B2TuTPhCBjfCdcOQNnDgDJWUo9lZIpU6hXf5GM5QlQ4hKAns+Zh1whG9notgm9UQYqspQHe3rPD2M0Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB8049

On 09.05.2023 17:59, Andrew Cooper wrote:
> On 09/05/2023 3:28 pm, Jan Beulich wrote:
>> On 09.05.2023 15:04, Andrew Cooper wrote:
>>> On 08/05/2023 7:47 am, Jan Beulich wrote:
>>>> On 04.05.2023 21:39, Andrew Cooper wrote:
>>>>> These BUILD_BUG_ON()s exist to cover the curious absence of a diagnostic for
>>>>> code which looks like:
>>>>>
>>>>>   uint32_t foo[1] = { 1, 2, 3 };
>>>>>
>>>>> However, GCC 12 at least does now warn for this:
>>>>>
>>>>>   foo.c:1:24: error: excess elements in array initializer [-Werror]
>>>>>     884 | uint32_t foo[1] = { 1, 2, 3 };
>>>>>         |                        ^
>>>>>   foo.c:1:24: note: (near initialization for 'foo')
>>>> I'm pretty sure all gcc versions we support diagnose such cases. In turn
>>>> the arrays in question don't have explicit dimensions at their
>>>> definition sites, and hence they derive their dimensions from their
>>>> initializers. So the build-time-checks are about the arrays in fact
>>>> obtaining the right dimensions, i.e. the initializers being suitable.
>>>>
>>>> With the core part of the reasoning not being applicable, I'm afraid I
>>>> can't even say "okay with an adjusted description".
>>> Now I'm extra confused.
>>>
>>> I put those BUILD_BUG_ON()'s in because I was not getting a diagnostic
>>> when I was expecting one, and there was a bug in the original featureset
>>> work caused by this going wrong.
>>>
>>> But godbolt seems to agree that even GCC 4.1 notices.
>>>
>>> Maybe it was some other error (C file not seeing the header properly?)
>>> which disappeared across the upstream review?
>> Or maybe, by mistake, too few initializer fields? But what exactly it
>> was probably doesn't matter. If this patch is to stay (see below), some
>> different description will be needed anyway (or the change be folded
>> into the one actually invalidating those BUILD_BUG_ON()s).
>>
>>> Either way, these aren't appropriate, and need deleting before patch 5,
>>> because the check is no longer valid when a featureset can be longer
>>> than the autogen length.
>> Well, they need deleting if we stick to the approach chosen there right
>> now. If we switched to my proposed alternative, they better would stay.
> 
> Given that all versions of GCC do warn, I don't see any justification
> for them to stay.

All versions warn when the variable declarations / definitions specify a
dimension and there are then excess initializers. Yet none of the five
affected arrays specify a dimension in their definitions.

Even if dimensions were added, we'd then have covered only half of
what the BUILD_BUG_ON()s cover right now: There could then be fewer
than intended initializer fields, and things may still be screwed. I
think it was for this very reason that BUILD_BUG_ON() was chosen.

Jan

> i.e. this should be committed, even if the commit message says "no idea
> why they were added originally, but they're superfluous in the logic as
> it exists today".
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Tue May 09 16:25:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:25:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532361.828517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQ9J-0007A4-Au; Tue, 09 May 2023 16:25:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532361.828517; Tue, 09 May 2023 16:25:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQ9J-00079x-8K; Tue, 09 May 2023 16:25:13 +0000
Received: by outflank-mailman (input) for mailman id 532361;
 Tue, 09 May 2023 16:25:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S+Ht=A6=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwQ9H-00079k-In
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 16:25:11 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0615.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::615])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1124a157-ee86-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 18:25:10 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6921.eurprd04.prod.outlook.com (2603:10a6:10:119::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Tue, 9 May
 2023 16:25:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Tue, 9 May 2023
 16:25:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1124a157-ee86-11ed-b229-6b7b168915f2
Message-ID: <56c5d0f1-bc47-8824-9515-239647015d47@suse.com>
Date: Tue, 9 May 2023 18:25:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/iommu: fix wrong iterator type in
 arch_iommu_hwdom_init()
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230509110325.61750-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230509110325.61750-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0118.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6921:EE_
X-MS-Office365-Filtering-Correlation-Id: d7399060-1f28-4b28-1ad6-08db50a9f3e4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d7399060-1f28-4b28-1ad6-08db50a9f3e4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2023 16:25:07.6241
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6921

On 09.05.2023 13:03, Roger Pau Monne wrote:
> The 'i' iterator index stores a pdx, not a pfn, and hence the initial
> assignment of start (which stores a pfn) needs a conversion from pfn
> to pdx.

Strictly speaking: Yes. But pdx compression skips the bottom MAX_ORDER
bits, so ...

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -406,7 +406,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>       */
>      start = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;

... with this, ...

> -    for ( i = start, count = 0; i < top; )
> +    for ( i = pfn_to_pdx(start), count = 0; i < top; )

... this is an expensive identity transformation. Could I talk you into
adding

    ASSERT(start == pfn_to_pdx(start));

instead (or the corresponding BUG_ON() if you'd prefer that, albeit then
the expensive identity transformation will still be there even in release
builds; not that it matters all that much right here, but still)?

In any event, with no real bug fixed (unless I'm overlooking something),
I would suggest dropping the Fixes: tag.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 09 16:29:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:29:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532368.828528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQDa-0007pL-T1; Tue, 09 May 2023 16:29:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532368.828528; Tue, 09 May 2023 16:29:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQDa-0007pE-Ou; Tue, 09 May 2023 16:29:38 +0000
Received: by outflank-mailman (input) for mailman id 532368;
 Tue, 09 May 2023 16:29:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwQDZ-0007p4-Ij; Tue, 09 May 2023 16:29:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwQDZ-0005zD-FC; Tue, 09 May 2023 16:29:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwQDY-0007gq-RQ; Tue, 09 May 2023 16:29:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwQDY-0001LA-Qq; Tue, 09 May 2023 16:29:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180587-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180587: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ba0ad6ed89fd5dada3b7b65ef2b08e95d449d4ab
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 May 2023 16:29:36 +0000

flight 180587 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180587/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180582
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180582

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 180582 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 180582 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 180582 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 180582 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 180582 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 180582 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 180582 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 180582 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 180582 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 180582 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 180582 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 180582 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 180582 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 180582 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 180582 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 180582 never pass
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ba0ad6ed89fd5dada3b7b65ef2b08e95d449d4ab
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   22 days
Failing since        180281  2023-04-17 06:24:36 Z   22 days   41 attempts
Testing same since   180582  2023-05-08 21:11:46 Z    0 days    2 attempts

------------------------------------------------------------
2359 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 296937 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 09 16:43:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:43:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532377.828558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQRI-0002M4-QL; Tue, 09 May 2023 16:43:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532377.828558; Tue, 09 May 2023 16:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQRI-0002Lv-MD; Tue, 09 May 2023 16:43:48 +0000
Received: by outflank-mailman (input) for mailman id 532377;
 Tue, 09 May 2023 16:43:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kELI=A6=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1pwQRH-0002Ky-BX
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 16:43:47 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a9d017aa-ee88-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 18:43:45 +0200 (CEST)
Received: by mail-wr1-x430.google.com with SMTP id
 ffacd0b85a97d-3078fa679a7so2934622f8f.3
 for <xen-devel@lists.xenproject.org>; Tue, 09 May 2023 09:43:45 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 s9-20020a5d5109000000b002ffbf2213d4sm14754606wrt.75.2023.05.09.09.43.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 09 May 2023 09:43:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9d017aa-ee88-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683650624; x=1686242624;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=oQyCUxTpPixPBVFBdSiN0k/YI9aqhLZJQE2OAMb3ZSg=;
        b=XKh1o0IgXOTR8pP6xG5QopyY1l3y18i6SbkyqgaYeApyoLGSwmQqskMhCdfK8PiKlx
         hZsFMYdruCgUuqpKvRiOHLxtzSm7F6qfb92tbMj9hM/rf5Dg3lGcdNBbPmOqlN87lNNX
         esDFxd4EnbwvdY1XizIi7/oaISq0xuXxA9EM8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683650624; x=1686242624;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=oQyCUxTpPixPBVFBdSiN0k/YI9aqhLZJQE2OAMb3ZSg=;
        b=Yq7+wJ8TLQdz8yR4jjPX/5m+pzg9xShn4d9VRWlBPlN3WTov1LVtDWh1av7vHqPQ4/
         OU7ElZe2dwRcOBwxQiV3o2PM7cuJhOm+cZjhg6sxxd5kQXgENCVURtpblbtJtIRr9Fcp
         27MSbE9PiIHAsT1GJ9K7iboODimQtQ2YmZ9cHpZs3VVpeLqIpMPEJer4m/DkbaMqGy6L
         FXNcvcBS99W1l1id79uYCSFBNWvKFF/gQEPq+K1YGO4onu9qOWLw9VJ6fS5kI2E6anlp
         Utr594ffQ5DqtET5Olm3WTzuAKbhP810X9IpxTtVa0bJheqAtEAYB0/xelj2V/oZHo2F
         q79g==
X-Gm-Message-State: AC+VfDyLlfxcdVFu7QWw61WIt95gR4LVUYQzhJf0M4E4ZfPEnR1mzUaS
	iyvhGADQYwcZk9Wf8zf4ElyV4vY5ddp5OSQ+lkQ=
X-Google-Smtp-Source: ACHHUZ66heKHNtZecnybWt7F3Z5YP/uYyRzYh7uooIzmomVHTOppE0QrYKMEGi9+Efhd8qFzR3KEag==
X-Received: by 2002:adf:fa06:0:b0:2fb:600e:55bd with SMTP id m6-20020adffa06000000b002fb600e55bdmr10065258wrr.39.1683650623951;
        Tue, 09 May 2023 09:43:43 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/3] x86: Refactor conditional guard in probe_cpuid_faulting()
Date: Tue,  9 May 2023 17:43:35 +0100
Message-Id: <20230509164336.12523-3-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move vendor-specific checks to the vendor-specific callers.

No functional change.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v2:
  * Factored out from patch 2 of v1
---
 xen/arch/x86/cpu/amd.c    | 10 +++++++++-
 xen/arch/x86/cpu/common.c | 11 -----------
 xen/arch/x86/cpu/intel.c  |  9 ++++++++-
 3 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index caafe44740..899bae7a10 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -271,7 +271,15 @@ static void __init noinline amd_init_levelling(void)
 {
 	const struct cpuidmask *m = NULL;
 
-	if (probe_cpuid_faulting())
+	/*
+	 * If there's support for CpuidUserDis or CPUID faulting then
+	 * we can skip levelling because CPUID accesses are trapped anyway.
+	 *
+	 * CPUID faulting is an Intel feature analogous to CpuidUserDis, so
+	 * that can only be present when Xen is itself virtualized (because
+	 * it can be emulated)
+	 */
+	if (cpu_has_hypervisor && probe_cpuid_faulting())
 		return;
 
 	probe_masking_msrs();
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index edc4db1335..4bfaac4590 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -131,17 +131,6 @@ bool __init probe_cpuid_faulting(void)
 	uint64_t val;
 	int rc;
 
-	/*
-	 * Don't bother looking for CPUID faulting if we aren't virtualised on
-	 * AMD or Hygon hardware - it won't be present.  Likewise for Fam0F
-	 * Intel hardware.
-	 */
-	if (((boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
-	     ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
-	      boot_cpu_data.x86 == 0xf)) &&
-	    !cpu_has_hypervisor)
-		return false;
-
 	if ((rc = rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val)) == 0)
 		raw_cpu_policy.platform_info.cpuid_faulting =
 			val & MSR_PLATFORM_INFO_CPUID_FAULTING;
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 71fc1a1e18..a04414ba1d 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -226,7 +226,14 @@ static void cf_check intel_ctxt_switch_masking(const struct vcpu *next)
  */
 static void __init noinline intel_init_levelling(void)
 {
-	if (probe_cpuid_faulting())
+	/*
+	 * Intel Fam0f is old enough that probing for CPUID faulting support
+	 * introduces spurious #GP(0) when the appropriate MSRs are read,
+	 * so skip it altogether. In the case where Xen is virtualized these
+	 * MSRs may be emulated though, so we allow it in that case.
+	 */
+	if ((boot_cpu_data.x86 != 0xf || cpu_has_hypervisor) &&
+	    probe_cpuid_faulting())
 		return;
 
 	probe_masking_msrs();
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 09 16:43:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:43:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532376.828544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQRF-0001ua-Dv; Tue, 09 May 2023 16:43:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532376.828544; Tue, 09 May 2023 16:43:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQRF-0001u9-8O; Tue, 09 May 2023 16:43:45 +0000
Received: by outflank-mailman (input) for mailman id 532376;
 Tue, 09 May 2023 16:43:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kELI=A6=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1pwQRD-0001qp-VG
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 16:43:43 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a8d2fb1d-ee88-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 18:43:43 +0200 (CEST)
Received: by mail-wm1-x329.google.com with SMTP id
 5b1f17b1804b1-3f42ba32e24so7279585e9.3
 for <xen-devel@lists.xenproject.org>; Tue, 09 May 2023 09:43:43 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 s9-20020a5d5109000000b002ffbf2213d4sm14754606wrt.75.2023.05.09.09.43.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 09 May 2023 09:43:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8d2fb1d-ee88-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683650622; x=1686242622;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2enxHJVvB53hhS6w5p4MreN4cLpgSnyXWT9wrZUvS/4=;
        b=Jx1Y62UZFeAOx2gMKCjK4jBHh3yhgyKAn9n7FuUAvNutUqg8yAeuRofpXYb8vErxfI
         7mrXPYxG+D4C8qtZvknk5JZ1buBWqM91LA8CGQylC4OiOQNkJyFM8JFHan2FaHKM8qrS
         D4S45+nSoit9/DdwrTanwMs6g87mBGlF5IcYU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683650622; x=1686242622;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2enxHJVvB53hhS6w5p4MreN4cLpgSnyXWT9wrZUvS/4=;
        b=F+RtZctmdlk1u4hnyJaebmVIITPuxovDcCUH9M5PiGIjHzVJw7EtwKmAwU83XB4gCr
         gv67lLrH4IIwNdMleLU/5Umq13hBN0JARzZmpFVMdzfnK20FgWWdvJzXCvGqFUiAmu0F
         7lxRxns8kfYX/WGGxZP8/KRI8Oxr+6Zveqvi2jtRruz9GvAtHZKgh4Vzhe1FW9BJD50z
         U9cg1MTMQ0FlNGKa8lkYIu7yBpwj5UR6opRa5bBBkjJtJX5YsP4WzTJ8EGDBAeXg9qpP
         m49XQ7mRPXR0X89DCO2UG62FjtKJcz9g67wytX6YxRGX5mWELFF02gfKtLLnyDXlZKRz
         LMeA==
X-Gm-Message-State: AC+VfDxVHJPam9f7x1Yy0O51mWMYpNhF+7+6Z8QCqLZTQbof7kq6G37A
	qQwBzninmoSbTx5llR5TlwYsCiZvflXaM9BplII=
X-Google-Smtp-Source: ACHHUZ581+1hPFk3uezFqtzRz8gKAsnHQjClnXioOR9gPVCwBkgpowVceLxszo8PH6MPDLUn03qoRg==
X-Received: by 2002:a05:600c:2206:b0:3f4:2220:28cc with SMTP id z6-20020a05600c220600b003f4222028ccmr5959782wml.9.1683650622346;
        Tue, 09 May 2023 09:43:42 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 1/3] x86: Add AMD's CpuidUserDis bit definitions
Date: Tue,  9 May 2023 17:43:34 +0100
Message-Id: <20230509164336.12523-2-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AMD reports support for CpuidUserDis in CPUID and provides the toggle in HWCR.
Add the positions of both bits to xen and the tools.

No functional change.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
 tools/libs/light/libxl_cpuid.c              | 1 +
 tools/misc/xen-cpuid.c                      | 2 ++
 xen/arch/x86/include/asm/msr-index.h        | 1 +
 xen/include/public/arch-x86/cpufeatureset.h | 1 +
 4 files changed, 5 insertions(+)

diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index 5f0bf93810..4d2fab5414 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -317,6 +317,7 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
 
         {"lfence+",      0x80000021, NA, CPUID_REG_EAX,  2,  1},
         {"nscb",         0x80000021, NA, CPUID_REG_EAX,  6,  1},
+        {"cpuid-user-dis", 0x80000021, NA, CPUID_REG_EAX, 17,  1},
 
         {"maxhvleaf",    0x40000000, NA, CPUID_REG_EAX,  0,  8},
 
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index d7efc59d31..8ec143ebc8 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -199,6 +199,8 @@ static const char *const str_e21a[32] =
 {
     [ 2] = "lfence+",
     [ 6] = "nscb",
+
+    /* 16 */                [17] = "cpuid-user-dis",
 };
 
 static const char *const str_7b1[32] =
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index fa771ed0b5..082fb2e0d9 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -337,6 +337,7 @@
 
 #define MSR_K8_HWCR			0xc0010015
 #define K8_HWCR_TSC_FREQ_SEL		(1ULL << 24)
+#define K8_HWCR_CPUID_USER_DIS		(1ULL << 35)
 
 #define MSR_K7_FID_VID_CTL		0xc0010041
 #define MSR_K7_FID_VID_STATUS		0xc0010042
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 12e3dc80c6..623dcb1bce 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -287,6 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
 XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
+XEN_CPUFEATURE(CPUID_USER_DIS,     11*32+17) /*   CPUID disable for non-privileged software */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ebx, word 12 */
 XEN_CPUFEATURE(INTEL_PPIN,         12*32+ 0) /*   Protected Processor Inventory Number */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 09 16:43:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:43:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532378.828568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQRK-0002cm-0Y; Tue, 09 May 2023 16:43:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532378.828568; Tue, 09 May 2023 16:43:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQRJ-0002cd-TN; Tue, 09 May 2023 16:43:49 +0000
Received: by outflank-mailman (input) for mailman id 532378;
 Tue, 09 May 2023 16:43:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kELI=A6=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1pwQRI-0002Ky-DQ
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 16:43:48 +0000
Received: from mail-wr1-x436.google.com (mail-wr1-x436.google.com
 [2a00:1450:4864:20::436])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aabb4d0a-ee88-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 18:43:46 +0200 (CEST)
Received: by mail-wr1-x436.google.com with SMTP id
 ffacd0b85a97d-3064099f9b6so3915731f8f.1
 for <xen-devel@lists.xenproject.org>; Tue, 09 May 2023 09:43:46 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 s9-20020a5d5109000000b002ffbf2213d4sm14754606wrt.75.2023.05.09.09.43.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 09 May 2023 09:43:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aabb4d0a-ee88-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683650625; x=1686242625;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=16GgcLP2BBqsFeayUECe7ljf4/vLwO1azevfRpDiAnE=;
        b=HuBJiOT6QALlrn9nb56NQbzi6cla2ddOsWT7xW5nGmKpcyzIYNsdiPxKyWV9v1KN4B
         /uIHJ7Xy/LdlgRK3eynfeh8W3+AQGlV/DVg0JCRSqLnHxZqU4FpCzVgbw242PvnRAFrb
         KZv11Jspf0AuzoLRYVG11tRamYzXLmkAKH/E8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683650625; x=1686242625;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=16GgcLP2BBqsFeayUECe7ljf4/vLwO1azevfRpDiAnE=;
        b=Qe/kYIS/nk4zN18OEe5bPE3UVMbbzCbwoaeuyPGvzySAp3ncE6zy/6k2xNL1Kp9pNc
         Nx3ZcYLAKr6+0ne2bchZYI4JN4BIS4zgoCEwweD1Qz0VnMZT/rLBGrLGXVrzhOaUTF7P
         xDRK/eo4d1jaAujPuko+JE1jJo0ROxDqVlEUCCVBPRievF5Sr+ns6k3Q2/jExK7T6mmj
         87BNNRjVN1Xz+5LLUF8qzX19ldiXo5oBH43uuJOSibZAdf/g/FbTlhBbUDlYeDJIs7n6
         BqQ5fBlkAdrjMtJPSBSbB5s8at0AHvDND1iX12oNse57p3B/dyg0TvCHenGO8hCHz2Cr
         lXNA==
X-Gm-Message-State: AC+VfDwvdx8U8E18cBtw9bqGMkmX9WxVLoCW19jJVHwOLdPkrqgM2GVg
	WMtSKKlDhionSoW90SrKDSBHu3dMoQtX09Q4oK4=
X-Google-Smtp-Source: ACHHUZ7NMADEFOOpRrJnC2oe43bnCiHaWZNRSN6qIYJRojhJXSRulKGcG6OnzfawBJdlc/o7zsGVEg==
X-Received: by 2002:a5d:5701:0:b0:307:2ba7:c617 with SMTP id a1-20020a5d5701000000b003072ba7c617mr10532538wrv.52.1683650625641;
        Tue, 09 May 2023 09:43:45 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 3/3] x86: Add support for CpuidUserDis
Date: Tue,  9 May 2023 17:43:36 +0100
Message-Id: <20230509164336.12523-4-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Because CpuidUserDis is reported in CPUID itself, the extended leaf
containing that bit must be retrieved before calling c_early_init().

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v2:
 * Style fixes
 * MSR index inlined in rdmsr/wrmsr
 * Swapped Intel's conditional guard so the typically-true condition goes first
 * Factored the vendor-specific non-functional changes into a separate patch
---
 xen/arch/x86/cpu/amd.c         | 20 ++++++++++++++++-
 xen/arch/x86/cpu/common.c      | 40 +++++++++++++++++++++++-----------
 xen/arch/x86/cpu/intel.c       |  5 ++++-
 xen/arch/x86/include/asm/amd.h |  1 +
 4 files changed, 51 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 899bae7a10..cc9c89fd19 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -279,8 +279,12 @@ static void __init noinline amd_init_levelling(void)
 	 * that can only be present when Xen is itself virtualized (because
 	 * it can be emulated)
 	 */
-	if (cpu_has_hypervisor && probe_cpuid_faulting())
+	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
+	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
+		expected_levelling_cap |= LCAP_faulting;
+		levelling_caps |=  LCAP_faulting;
 		return;
+	}
 
 	probe_masking_msrs();
 
@@ -371,6 +375,20 @@ static void __init noinline amd_init_levelling(void)
 		ctxt_switch_masking = amd_ctxt_switch_masking;
 }
 
+void amd_set_cpuid_user_dis(bool enable)
+{
+	const uint64_t bit = K8_HWCR_CPUID_USER_DIS;
+	uint64_t val;
+
+	rdmsrl(MSR_K8_HWCR, val);
+
+	if (!!(val & bit) == enable)
+		return;
+
+	val ^= bit;
+	wrmsrl(MSR_K8_HWCR, val);
+}
+
 /*
  * Check for the presence of an AMD erratum. Arguments are defined in amd.h 
  * for each known erratum. Return 1 if erratum is found.
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 4bfaac4590..9bbb385db4 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -4,6 +4,7 @@
 #include <xen/param.h>
 #include <xen/smp.h>
 
+#include <asm/amd.h>
 #include <asm/cpu-policy.h>
 #include <asm/current.h>
 #include <asm/debugreg.h>
@@ -144,8 +145,6 @@ bool __init probe_cpuid_faulting(void)
 		return false;
 	}
 
-	expected_levelling_cap |= LCAP_faulting;
-	levelling_caps |=  LCAP_faulting;
 	setup_force_cpu_cap(X86_FEATURE_CPUID_FAULTING);
 
 	return true;
@@ -168,8 +167,10 @@ static void set_cpuid_faulting(bool enable)
 void ctxt_switch_levelling(const struct vcpu *next)
 {
 	const struct domain *nextd = next ? next->domain : NULL;
+	bool enable_cpuid_faulting;
 
-	if (cpu_has_cpuid_faulting) {
+	if (cpu_has_cpuid_faulting ||
+	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
 		/*
 		 * No need to alter the faulting setting if we are switching
 		 * to idle; it won't affect any code running in idle context.
@@ -190,12 +191,18 @@ void ctxt_switch_levelling(const struct vcpu *next)
 		 * an interim escape hatch in the form of
 		 * `dom0=no-cpuid-faulting` to restore the older behaviour.
 		 */
-		set_cpuid_faulting(nextd && (opt_dom0_cpuid_faulting ||
-					     !is_control_domain(nextd) ||
-					     !is_pv_domain(nextd)) &&
-				   (is_pv_domain(nextd) ||
-				    next->arch.msrs->
-				    misc_features_enables.cpuid_faulting));
+		enable_cpuid_faulting = nextd && (opt_dom0_cpuid_faulting ||
+		                                  !is_control_domain(nextd) ||
+		                                  !is_pv_domain(nextd)) &&
+		                        (is_pv_domain(nextd) ||
+		                         next->arch.msrs->
+		                         misc_features_enables.cpuid_faulting);
+
+		if (cpu_has_cpuid_faulting)
+			set_cpuid_faulting(enable_cpuid_faulting);
+		else
+			amd_set_cpuid_user_dis(enable_cpuid_faulting);
+
 		return;
 	}
 
@@ -404,6 +411,17 @@ static void generic_identify(struct cpuinfo_x86 *c)
 	c->apicid = phys_pkg_id((ebx >> 24) & 0xFF, 0);
 	c->phys_proc_id = c->apicid;
 
+	eax = cpuid_eax(0x80000000);
+	if ((eax >> 16) == 0x8000)
+		c->extended_cpuid_level = eax;
+
+	/*
+	 * These AMD-defined flags are out of place, but we need
+	 * them early for the CPUID faulting probe code
+	 */
+	if (c->extended_cpuid_level >= 0x80000021)
+		c->x86_capability[FEATURESET_e21a] = cpuid_eax(0x80000021);
+
 	if (this_cpu->c_early_init)
 		this_cpu->c_early_init(c);
 
@@ -420,10 +438,6 @@ static void generic_identify(struct cpuinfo_x86 *c)
 	     (cpuid_ecx(CPUID_PM_LEAF) & CPUID6_ECX_APERFMPERF_CAPABILITY) )
 		__set_bit(X86_FEATURE_APERFMPERF, c->x86_capability);
 
-	eax = cpuid_eax(0x80000000);
-	if ((eax >> 16) == 0x8000)
-		c->extended_cpuid_level = eax;
-
 	/* AMD-defined flags: level 0x80000001 */
 	if (c->extended_cpuid_level >= 0x80000001)
 		cpuid(0x80000001, &tmp, &tmp,
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index a04414ba1d..bbe7b7d1ce 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -233,8 +233,11 @@ static void __init noinline intel_init_levelling(void)
 	 * MSRs may be emulated though, so we allow it in that case.
 	 */
 	if ((boot_cpu_data.x86 != 0xf || cpu_has_hypervisor) &&
-	    probe_cpuid_faulting())
+	    probe_cpuid_faulting()) {
+		expected_levelling_cap |= LCAP_faulting;
+		levelling_caps |=  LCAP_faulting;
 		return;
+	}
 
 	probe_masking_msrs();
 
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index a975d3de26..09ee52dc1c 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -155,5 +155,6 @@ extern bool amd_legacy_ssbd;
 extern bool amd_virt_spec_ctrl;
 bool amd_setup_legacy_ssbd(void);
 void amd_set_legacy_ssbd(bool enable);
+void amd_set_cpuid_user_dis(bool enable);
 
 #endif /* __AMD_H__ */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 09 16:43:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:43:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532375.828538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQRF-0001r9-45; Tue, 09 May 2023 16:43:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532375.828538; Tue, 09 May 2023 16:43:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQRF-0001r0-0E; Tue, 09 May 2023 16:43:45 +0000
Received: by outflank-mailman (input) for mailman id 532375;
 Tue, 09 May 2023 16:43:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kELI=A6=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1pwQRD-0001qp-6N
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 16:43:43 +0000
Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com
 [2a00:1450:4864:20::332])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a7fb8427-ee88-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 18:43:42 +0200 (CEST)
Received: by mail-wm1-x332.google.com with SMTP id
 5b1f17b1804b1-3f315735514so216226995e9.1
 for <xen-devel@lists.xenproject.org>; Tue, 09 May 2023 09:43:42 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 s9-20020a5d5109000000b002ffbf2213d4sm14754606wrt.75.2023.05.09.09.43.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 09 May 2023 09:43:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7fb8427-ee88-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683650621; x=1686242621;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=6Vv9h8xRR34WXMgEze0CmFq4k7f1bgK9vFoprNoHdzc=;
        b=DSNgs0hmVkJMJlA6yBYq9TF7QVy7IfG0z9GULd8GywiDk2FKO04piCfpzWMnSrV65/
         0Dx/auYQzpE38dxMuoz3BE8Sat8RAQddKRB5IpenwTa6fSShArj4Qc/7zu7+qHUqX4HE
         9KQyOMVQMq5/Erb6CZvgu8xWl+ID87LEoFqcY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683650621; x=1686242621;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=6Vv9h8xRR34WXMgEze0CmFq4k7f1bgK9vFoprNoHdzc=;
        b=j0yIIbCZG7BktbBLhWtq2qt4L6hg/I8e3QyQZJRpsS3gRYo1K25nAT5fZcg2ujFLFp
         6MQZ5NlRHfuMnExHqlkjjB2Cm8hEVc7FeiygvpHoTnOIzgKwW0DzDsQczk8PwruqFCIf
         nIOhHmBCsGmN7Ak39yXzeBEzKDTr1l2okIEmq3oZYwfQJN+FDwa9rMvZ8on4ZaUSyXmv
         IJ+Ytwfi9hyIHOChnZN7qi7WsCex+FZGwr+oRfiyKJRquGZCvstwnX8SGVQbw3KbpOo4
         MEui0KscKmxPS2G6Oj0XqWqZdKt0GpoDnmE0HZLWUn9Kc3lA+FxLAfIUik2gh1V8h2Ya
         aLAw==
X-Gm-Message-State: AC+VfDySimX1gV8RaTfxpHAEpXLprQ+eoFnTUU5NZY9W4/M3f9invn5H
	XGb9QF45q29PPNbj9eh4wuka4tYC84bmbcPiHgo=
X-Google-Smtp-Source: ACHHUZ7zPKxYr8ChuNd7x3xyoGnskZRptPhUxOU9LOoGvv57zoOdccEaZNbbEjBeGCUKb+Os2c74EQ==
X-Received: by 2002:adf:feca:0:b0:2fb:92c7:b169 with SMTP id q10-20020adffeca000000b002fb92c7b169mr14253235wrs.10.1683650620761;
        Tue, 09 May 2023 09:43:40 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 0/3] Add CpuidUserDis support
Date: Tue,  9 May 2023 17:43:33 +0100
Message-Id: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

v2:
 * Style changes
 * Remove v1/patch3: HVM not to be addressed by this series
 * Add one patch between v1/patch1 and v1/patch2 with the vendor-specific
   refactor of probe_cpuid_faulting()

AMD now supports trapping the CPUID instruction from ring>0 to ring0
(CpuidUserDis), akin to Intel's "CPUID faulting". It differs in that the
toggle bit lives in a different MSR (HWCR) and the support bit is reported
in CPUID itself rather than in yet another MSR. This series enables AMD
hosts to use it, when supported, to provide correct CPUID contents to PV
guests.

Patch 1 merely adds the CPUID and MSR bit definitions.

Patch 2 moves vendor-specific code from probe_cpuid_faulting() into
amd.c/intel.c.

Patch 3 adds support for CpuidUserDis, hooking it into the probing path and
the context-switching path.

Alejandro Vallejo (3):
  x86: Add AMD's CpuidUserDis bit definitions
  x86: Refactor conditional guard in probe_cpuid_faulting()
  x86: Add support for CpuidUserDis

 tools/libs/light/libxl_cpuid.c              |  1 +
 tools/misc/xen-cpuid.c                      |  2 +
 xen/arch/x86/cpu/amd.c                      | 28 ++++++++++-
 xen/arch/x86/cpu/common.c                   | 51 +++++++++++----------
 xen/arch/x86/cpu/intel.c                    | 12 ++++-
 xen/arch/x86/include/asm/amd.h              |  1 +
 xen/arch/x86/include/asm/msr-index.h        |  1 +
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 8 files changed, 71 insertions(+), 26 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 09 16:48:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 16:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532392.828578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQVP-0004Dh-Hc; Tue, 09 May 2023 16:48:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532392.828578; Tue, 09 May 2023 16:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwQVP-0004Da-Ef; Tue, 09 May 2023 16:48:03 +0000
Received: by outflank-mailman (input) for mailman id 532392;
 Tue, 09 May 2023 16:48:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Biuh=A6=citrix.com=prvs=486aa7bf2=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pwQVN-0004DS-To
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 16:48:01 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4054dd7e-ee89-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 18:47:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4054dd7e-ee89-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683650879;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=SlO5o6QMvWhWZy6Z5a5MuJGBudeCec69EU74MjRd1Wc=;
  b=LFxdp3oppPagKPwdmu8joI0J9vLSDIlFZ80dwrLlzd7wQLiFLXiR09FV
   2I3JAbX3CFf6ucBtVYyMsIg8YiV8jpmzp2WfGfes8DRWr7utZRKa/sWC6
   HngUob+ejzDdrQDZr96W6ZE7ssIU6kLyIWzqzC88VrkSU0pTZrljNSff4
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110875268
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:ArTh6aJLd15VoWIwFE+Rx5UlxSXFcZb7ZxGr2PjKsXjdYENS1T0Gx
 2QXW2iCP/aKM2rxe492ao+y8EgG6JWDztJlTwNlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4wRvPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c4tBHBCp
 PExcgkJRSCzrtKJkei2cthz05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TTHJ0FwRvC+
 DKuE2LRLgErJoaE2xu/8H+GoeDjmnjASdg9C+jtnhJtqALKnTFCYPEMbnO8pfC3okezQ9xbJ
 goY90IGvaU0sUCmUNT5dxm5u2Kf+A4RXcJKFO834x3LzbDbiy6GAkAUQzgHb8Yp3Oc/XTEw3
 0WFt8/oDzdo9raSTBqgGqy89G3of3JPdClbOHFCFFFeizX+nG0tpkjKX9oyHYfvt9neKQHZ8
 w/b9iUGtqpG2KbnyJ6H1VzAhjutoL3AQQg0+hjbUwqZ0+9pWGK2T9f2sAaGtJ6sOK7cFwDc5
 yZcx6By+chUVfmweDqxrPLh9V1Dz9KMK3XijFFmBPHNHBz9qif4Lei8DNyTTXqF0/romxezO
 Cc/WisLvve/2UeXgVdfOd7ZNijT5fGI+S7Zfv7VdMFSRZN6aRWK+ipjDWbJgTC2zRd1wP9gZ
 MfLGSpJMUv29Iw9lGbmLwvj+eVDKt8CKZP7GsmgkkXPPUu2b3+JU7YVWGazghQCxPrc+m39q
 o8PX/ZmPj0DCIUSlAGLq99MRb3LRFBnba3LRzt/LbHbe1U6RDl4VJc8A9oJIuRYokicrc+Ql
 lnVZ6OS4ACXaaHvQelSVk1eVQ==
IronPort-HdrOrdr: A9a23:gzMUWayNziQEcQXCgc6dKrPwKL1zdoMgy1knxilNoH1uA6qlfq
 WV9sjzuiWE7wr5J0tApTntAtjkfZqkz/JICOoqU4tKPjOIhILAFugLgLcKpQeQeBEWntQ36U
 4KSdkdNDSfNykfsS43iDPZL+od
X-Talos-CUID: 9a23:69SCf2D+kFAfpDP6E3Ni5UJFF+l/S3n2/Ef5E3DpU2tReaLAHA==
X-Talos-MUID: =?us-ascii?q?9a23=3AQETH+w9zv/3YB0JYoC8wzhWQf+t40rb0E1Isq5c?=
 =?us-ascii?q?Lke6LGSsrACbHoyviFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,262,1677560400"; 
   d="scan'208";a="110875268"
Date: Tue, 9 May 2023 17:47:33 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] tools: drop bogus and obsolete ptyfuncs.m4
Message-ID: <9fd06ad0-4c21-43be-ac48-8d30844535ad@perard>
References: <20230502204800.10733-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230502204800.10733-1-olaf@aepfle.de>

On Tue, May 02, 2023 at 08:48:00PM +0000, Olaf Hering wrote:
> According to openpty(3) it is required to include <pty.h> to get the
> prototypes for openpty() and login_tty(). But this is not what the
> function AX_CHECK_PTYFUNCS actually does. It makes no attempt to include
> the required header.
> 
> The two source files which call openpty() and login_tty() already contain
> the conditionals to include the required header.
> 
> Remove the bogus m4 file to fix build with clang, which complains about
> calls to undeclared functions.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

That change isn't enough, and I'm not convinced that the macro needs to be
removed.

First, AX_CHECK_PTYFUNCS is still called in "tools/configure.ac".

Then, AX_CHECK_PTYFUNCS defines INCLUDE_LIBUTIL_H and PTYFUNCS_LIBS.
Those two are still used in the tree.

Also, that macro isn't just about the header, but also about the
needed library.
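
For illustration, the consumer pattern those results support looks roughly
like this. Only the INCLUDE_LIBUTIL_H name comes from this thread; the
surrounding fragment is an assumption about how such a conditional include
is typically written, not a quote from the tree:

```c
/* Sketch only: INCLUDE_LIBUTIL_H is assumed to expand to the header name
 * configure detected; the fallback reflects the openpty(3) man page. */
#ifdef INCLUDE_LIBUTIL_H
#include INCLUDE_LIBUTIL_H   /* e.g. <libutil.h> on the BSDs */
#else
#include <pty.h>             /* glibc home of the openpty() prototype */
#endif
```

and the link step would add PTYFUNCS_LIBS (e.g. -lutil) where needed.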

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 09 17:52:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 17:52:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532403.828587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwRVD-0003Am-3o; Tue, 09 May 2023 17:51:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532403.828587; Tue, 09 May 2023 17:51:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwRVD-0003Af-0q; Tue, 09 May 2023 17:51:55 +0000
Received: by outflank-mailman (input) for mailman id 532403;
 Tue, 09 May 2023 17:51:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hm7m=A6=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pwRVA-0003AY-Vk
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 17:51:53 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c0b60cc-ee92-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 19:51:50 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-59-PpY0sT0QMaWcAjHy4w3mDQ-1; Tue, 09 May 2023 13:51:42 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 92D92100F651;
 Tue,  9 May 2023 17:51:41 +0000 (UTC)
Received: from localhost (unknown [10.39.195.128])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 2336840C6E67;
 Tue,  9 May 2023 17:51:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c0b60cc-ee92-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683654708;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=YSdkeye67d1BrBNCM3zk98YAc8rNSrDW0Ne/KD+GhLE=;
	b=if8DEJVqJdBu2KnAHtRbUqVg9s7CCbGEDxlp5HlfZFf8Glq5bm8zmBT0SmrMlsmBdKUCiN
	K4+kkRFJ3etTuv7exP/cjTQFgzlQ4fi9sfBgqNHMTZ7vnhr0a/UxcFDMDyW9esk1yA94/l
	2qM06dW9Nsz9GXxabH0RUOS1abNQKFY=
X-MC-Unique: PpY0sT0QMaWcAjHy4w3mDQ-1
Date: Tue, 9 May 2023 13:51:38 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>, Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com, Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: Re: [PATCH v5 00/21] block: remove aio_disable_external() API
Message-ID: <20230509175138.GC1018047@fedora>
References: <20230504195327.695107-1-stefanha@redhat.com>
 <ZFQnSjGiEWuSFWTh@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="zYR6uqo+9eG5ZY9J"
Content-Disposition: inline
In-Reply-To: <ZFQnSjGiEWuSFWTh@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2


--zYR6uqo+9eG5ZY9J
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, May 04, 2023 at 11:44:42PM +0200, Kevin Wolf wrote:
> Am 04.05.2023 um 21:53 hat Stefan Hajnoczi geschrieben:
> > v5:
> > - Use atomic accesses for in_flight counter in vhost-user-server.c [Kevin]
> > - Stash SCSIDevice id/lun values for VIRTIO_SCSI_T_TRANSPORT_RESET event
> >   before unrealizing the SCSIDevice [Kevin]
> > - Keep vhost-user-blk export .detach() callback so ctx is set to NULL [Kevin]
> > - Narrow BdrvChildClass and BlockDriver drained_{begin/end/poll} callbacks from
> >   IO_OR_GS_CODE() to GLOBAL_STATE_CODE() [Kevin]
> > - Include Kevin's "block: Fix use after free in blockdev_mark_auto_del()" to
> >   fix a latent bug that was exposed by this series
> 
> I only just finished reviewing v4 when you had already sent v5, but it
> hadn't arrived yet. I had a few more comments on what are now patches
> 17, 18, 19 and 21 in v5. I think they all still apply.

I'm not sure which comments from v4 still apply. In my email client all
your replies were already read when I sent v5.

Maybe you can share the Message-Id of something I still need to address?

Thanks,
Stefan

--zYR6uqo+9eG5ZY9J
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRaiCoACgkQnKSrs4Gr
c8jaCggAiPAw+aTVrrMgYTa8ZedRV1wLjpV5lST53EB62gJPS57Ru3GedV9WCoBj
l/s8rZSmlc+p/iO4BTaYSoonSePjzs81JlhDXB3WRYVAFUYplCHGdlOfhkOfueND
T/OCXMkwooUDcUgSJKUCM+gjFw52J2Xppw1HlzOSOxSGmuyJsNPPUlrfjiNDwxGq
THTaMGzpNPvLEQLhw9cLzrg1eE6W6z3vNeJULIYIwYfQE9cIbeo2Hx8c1rmFhasF
mjL0+4PXOjH2CcotBw8EfyUsx3GV1BbjB/RhbEzr7SG4YCC748WCx2xatAIorzZd
Vf68V/bHVGXLlN2FDniw5dFCdMSzUA==
=Vtb2
-----END PGP SIGNATURE-----

--zYR6uqo+9eG5ZY9J--



From xen-devel-bounces@lists.xenproject.org Tue May 09 17:59:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 17:59:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532411.828597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwRca-0003wW-1y; Tue, 09 May 2023 17:59:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532411.828597; Tue, 09 May 2023 17:59:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwRcZ-0003wO-VR; Tue, 09 May 2023 17:59:31 +0000
Received: by outflank-mailman (input) for mailman id 532411;
 Tue, 09 May 2023 17:59:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwRcY-0003wI-S0
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 17:59:31 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3dd5e8e2-ee93-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 19:59:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3dd5e8e2-ee93-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683655167;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Xum3/PoQhkj8+lZ0cf9sSFx6MXDYkjs5zNnUN+Pv1LU=;
	b=Nkxxvef/gwwBJnsMEIx+ow4OwBi+0uHyp5RBmTf5nu2aiSk0OvZ3yifQpp8iu0sBDEUJcj
	a1p/dXIoIW4cJOIsg3u9nq+3ZMGA79rnrh+h78apzOr+Woxv52xV94jRRDOLibuFSvjBwU
	SUohqreBen1xyv3eEH/AIWmtrxMik/GCCKHEY26BeCG6MKFWZnPkyvuY6oXd/duWViLVW4
	n9vlzd7qcknLzQIMlzIHP9a/aIzu+mUh7LOMVIovsp4Zig3bcOdy74WIwY4m1K8dtzitFX
	d+4WVMFQBWUbsdvHH09vwdfFlTvUiShojFdmigWEUDG66lPt7MAObiHtgpD6iQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683655167;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Xum3/PoQhkj8+lZ0cf9sSFx6MXDYkjs5zNnUN+Pv1LU=;
	b=hLLA1dOtcFDkk6Slof++fN9bjbnwHItgF9jR7A+4LDWa4ZEHpxGnn+ndm2NGXZQsn8wojt
	9d63Rl+ydABO0vAQ==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom
 Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
In-Reply-To: <87pm791zev.ffs@tglx>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
 <20230509100421.GU83892@hirez.programming.kicks-ass.net>
 <87pm791zev.ffs@tglx>
Date: Tue, 09 May 2023 19:59:26 +0200
Message-ID: <87o7mtz8qp.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 14:07, Thomas Gleixner wrote:
> On Tue, May 09 2023 at 12:04, Peter Zijlstra wrote:
>> On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:
>>> +	/*
>>> +	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
>>> +	 * but just sets the bit to let the controlling CPU (BSP) know that
>>> +	 * it's got this far.
>>> +	 */
>>>  	smp_callin();
>>>  
>>> -	/* otherwise gcc will move up smp_processor_id before the cpu_init */
>>> +	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
>>>  	barrier();
>>
>> Not to the detriment of this patch, but this barrier() and it's comment
>> seem weird vs smp_callin(). That function ends with an atomic bitop (it
>> has to, at the very least it must not be weaker than store-release) but
>> also has an explicit wmb() to order setup vs CPU_STARTING.
>>
>> (arguably that should be a full fence *AND* get a comment)
>>
>> There is no way the smp_processor_id() referred to in this comment can
>> land before cpu_init() even without the barrier().
>
> Right. Let me clean that up.

So I went and tried to figure out where this comes from. It's from
d8f19f2cac70 ("[PATCH] x86-64 merge") in the history tree, one of those
well-documented combo patches which change the world and some more. The
context back then was:

	/*
	 * Dont put anything before smp_callin(), SMP
	 * booting is too fragile that we want to limit the
	 * things done here to the most necessary things.
	 */
	cpu_init();
	smp_callin();

	Dprintk("cpu %d: waiting for commence\n", smp_processor_id()); 

That still does not explain what the barrier is doing. I tried to
harvest mailing list archives, but did not find anything. The list used
back then, discuss@x86-64.org, was never publicly archived... Boris gave
me a tarball, but this 'barrier()' add-on was nowhere discussed in public.

As the barrier has no obvious value, I'm adding a patch upfront which
removes it.
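
For context, barrier() is purely a compiler fence. A minimal sketch of the
usual idiom, an assumption based on the classic GCC definition rather than
a quote from the tree:

```c
/* Assumption: the classic GCC compiler-fence idiom. It forbids the
 * *compiler* from reordering memory accesses across it, but emits no CPU
 * fence instruction, which is why it is weaker than smp_mb()/wmb(). */
#define barrier() __asm__ __volatile__("" ::: "memory")

/* Tiny demonstration: the load of *b cannot be hoisted above the load
 * of *a by the compiler, though the CPU is still free to reorder. */
static int ordered_sum(volatile const int *a, volatile const int *b)
{
    int x = *a;
    barrier();
    return x + *b;
}
```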

Thanks,

        tglx






From xen-devel-bounces@lists.xenproject.org Tue May 09 18:03:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 18:03:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532415.828609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwRgX-0005SD-JJ; Tue, 09 May 2023 18:03:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532415.828609; Tue, 09 May 2023 18:03:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwRgX-0005S6-En; Tue, 09 May 2023 18:03:37 +0000
Received: by outflank-mailman (input) for mailman id 532415;
 Tue, 09 May 2023 18:03:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwRgW-0005S0-0n
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 18:03:36 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d017b748-ee93-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 20:03:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d017b748-ee93-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683655413;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=H8lMKg/Pl8bB6qnYC08+nBqTm61HWmUyuaLIaj1gFuQ=;
	b=p/DT4GAI2EtIvZG/E48g6Y7O0PSW6I7fXi+Elpt8is0SHczyNHYG4WQo1p8SYYocLErxKG
	tmvgrxjIF+UxfScN840YPnkbKctQmYUakhWVsbINnsqgaMlRICOjt3Qfc6Up2X1bjDtoAO
	ar+k+iD6nwCsxY3BjhAFgFJUIEJHTt1PrRE8rbODQZgYjBKNgq6ykZLI1Uu1z7pabrzvU3
	BwVO+gXxHyh4uHTSjhXVxYRN+IF1E8Jy0qeflsgMafTVMfAIUm1oLIOiNDcRtX6ADo3QXJ
	K1YoDfr+wSZLKitYMptlFZ4BJxPZWh1AEKhWq/8P8VH08qDCjiggV3Xig5/J1w==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683655413;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=H8lMKg/Pl8bB6qnYC08+nBqTm61HWmUyuaLIaj1gFuQ=;
	b=o/GeRRk5OesDmJnkkrzpp/P4MfatYoS6Fv9RucWpaz9lKWhE91kzNOmkETuNZ878NLl+Jz
	nDab+Y4hdtD/OPAA==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
In-Reply-To: <20230509101902.GV83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
 <20230509101902.GV83892@hirez.programming.kicks-ass.net>
Date: Tue, 09 May 2023 20:03:32 +0200
Message-ID: <87lehxz8jv.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 12:19, Peter Zijlstra wrote:
> Again, not really this patch, but since I had to look at this code ....
>
> On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:
>> @@ -1048,60 +1066,89 @@ static int do_boot_cpu(int apicid, int c
>
> 	/*
> 	 * AP might wait on cpu_callout_mask in cpu_init() with
> 	 * cpu_initialized_mask set if previous attempt to online
> 	 * it timed-out. Clear cpu_initialized_mask so that after
> 	 * INIT/SIPI it could start with a clean state.
> 	 */
> 	cpumask_clear_cpu(cpu, cpu_initialized_mask);
> 	smp_mb();
>
> ^^^ that barrier is weird too, cpumask_clear_cpu() is an atomic op and
> implies much the same (this is x86 after all). If you want to be super
> explicit about it write:
>
> 	smp_mb__after_atomic();
>
> (which is a no-op) but then it still very much requires a comment as to
> what exactly it orders against what.

I won't bother either, as that mask is gone a few patches later.


From xen-devel-bounces@lists.xenproject.org Tue May 09 18:21:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 18:21:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532421.828618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwRxq-0007wm-1Y; Tue, 09 May 2023 18:21:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532421.828618; Tue, 09 May 2023 18:21:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwRxp-0007wf-Td; Tue, 09 May 2023 18:21:29 +0000
Received: by outflank-mailman (input) for mailman id 532421;
 Tue, 09 May 2023 18:21:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m/BS=A6=kernel.org=helgaas@srs-se1.protection.inumbo.net>)
 id 1pwRxo-0007wZ-7M
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 18:21:28 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4ecb5b22-ee96-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 20:21:26 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D050064791;
 Tue,  9 May 2023 18:21:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AE77FC433EF;
 Tue,  9 May 2023 18:21:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ecb5b22-ee96-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683656484;
	bh=wTrGWZ/prXac+ksuuHC0df7jUrXSG9zBg0tuZm4eZAk=;
	h=Date:From:To:Cc:Subject:In-Reply-To:From;
	b=eWEhqVADxc/GqJSwXnOQ3JLmV9EVm67GRKXU/8kCH+e7bYAn4tbjZ8OZRHVaX3ZcO
	 S0xv0AhCQd5NINM++YJU/lVWNxRqARJ2JdTZgkf8DKQ/aBVDAb43Y4DpvrsggmiDhc
	 VLuK4RypC1++gIMHtYJwDcVU3u+My56Q4IZYK0at1LgOku1qb8yZmqculMrILZXTXZ
	 neG8vnzAeERR8/VCvqWmCzd1JtSknWdohvz8yYije8I6fBKpAtZuUigoK/yqxiyRiQ
	 4E280Lp7b1rI5aQ6wbmUv7nAdHleWjJ8JY7/fVD00VEW304Ke/w4PCv+lqMsKKDnXi
	 XzOdVIUpJ2dRg==
Date: Tue, 9 May 2023 13:21:22 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Rich Felker <dalias@libc.org>, linux-sh@vger.kernel.org,
	linux-pci@vger.kernel.org,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-mips@vger.kernel.org, Bjorn Helgaas <bhelgaas@google.com>,
	Andrew Lunn <andrew@lunn.ch>, sparclinux@vger.kernel.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Gregory Clement <gregory.clement@bootlin.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Russell King <linux@armlinux.org.uk>, linux-acpi@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>, xen-devel@lists.xenproject.org,
	Matt Turner <mattst88@gmail.com>,
	Anatolij Gustschin <agust@denx.de>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	=?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	linux-arm-kernel@lists.infradead.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	Randy Dunlap <rdunlap@infradead.org>, linux-kernel@vger.kernel.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	linux-alpha@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	"David S. Miller" <davem@davemloft.net>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>
Subject: Re: [PATCH v8 0/7] Add pci_dev_for_each_resource() helper and update
 users
Message-ID: <20230509182122.GA1259567@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230404161101.GA3554747@bhelgaas>

On Tue, Apr 04, 2023 at 11:11:01AM -0500, Bjorn Helgaas wrote:
> On Thu, Mar 30, 2023 at 07:24:27PM +0300, Andy Shevchenko wrote:
> > Provide two new helper macros to iterate over PCI device resources and
> > convert users.

> Applied 2-7 to pci/resource for v6.4, thanks, I really like this!

This is 09cc90063240 ("PCI: Introduce pci_dev_for_each_resource()")
upstream now.

Coverity complains about each use, sample below from
drivers/pci/vgaarb.c.  I didn't investigate at all, so it might be a
false positive; just FYI.

	  1. Condition screen_info.capabilities & (2U /* 1 << 1 */), taking true branch.
  556        if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE)
  557                base |= (u64)screen_info.ext_lfb_base << 32;
  558
  559        limit = base + size;
  560
  561        /* Does firmware framebuffer belong to us? */
	  2. Condition __b < PCI_NUM_RESOURCES, taking true branch.
	  3. Condition (r = &pdev->resource[__b]) , (__b < PCI_NUM_RESOURCES), taking true branch.
	  6. Condition __b < PCI_NUM_RESOURCES, taking true branch.
	  7. cond_at_most: Checking __b < PCI_NUM_RESOURCES implies that __b may be up to 16 on the true branch.
	  8. Condition (r = &pdev->resource[__b]) , (__b < PCI_NUM_RESOURCES), taking true branch.
	  11. incr: Incrementing __b. The value of __b may now be up to 17.
	  12. alias: Assigning: r = &pdev->resource[__b]. r may now point to as high as element 17 of pdev->resource (which consists of 17 64-byte elements).
	  13. Condition __b < PCI_NUM_RESOURCES, taking true branch.
	  14. Condition (r = &pdev->resource[__b]) , (__b < PCI_NUM_RESOURCES), taking true branch.
  562        pci_dev_for_each_resource(pdev, r) {
	  4. Condition resource_type(r) != 512, taking true branch.
	  9. Condition resource_type(r) != 512, taking true branch.

  CID 1529911 (#1 of 1): Out-of-bounds read (OVERRUN)
  15. overrun-local: Overrunning array of 1088 bytes at byte offset 1088 by dereferencing pointer r. [show details]
  563                if (resource_type(r) != IORESOURCE_MEM)
	  5. Continuing loop.
	  10. Continuing loop.
  564                        continue;


From xen-devel-bounces@lists.xenproject.org Tue May 09 18:28:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 18:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532426.828627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwS4n-0000BF-Ld; Tue, 09 May 2023 18:28:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532426.828627; Tue, 09 May 2023 18:28:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwS4n-0000B8-Iy; Tue, 09 May 2023 18:28:41 +0000
Received: by outflank-mailman (input) for mailman id 532426;
 Tue, 09 May 2023 18:28:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pwS4l-0000B2-Rh
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 18:28:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwS4l-0000ET-1V; Tue, 09 May 2023 18:28:39 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.0.228]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwS4k-00052W-Rj; Tue, 09 May 2023 18:28:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=FpjfXe36wPgPDDQbxBoRCXUy070Hu51w/3Tt+2bcAWA=; b=3mazdo2AX4vBj+50TGDSo6zUWC
	iYXYUObziY6DIT8cKorfbBx9OpRnGeGjE1kRea7VAoG7XUFlWEZvdXD3s4LbZAaYNS8JiRhios80U
	xLLJPTphqqqM4j8aZ/oRoUVEp88cceqBCgvcS5LU8I3HmF8JoBB2BmrCtSuDhJ5ibPn0=;
Message-ID: <a6f1f4f3-3423-b818-732b-1ac7e9a38251@xen.org>
Date: Tue, 9 May 2023 19:28:37 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 04/14] tools/xenstore: add framework to commit
 accounting data on success only
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-5-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230508114754.31514-5-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 08/05/2023 12:47, Juergen Gross wrote:
> Instead of modifying accounting data and undoing those modifications in
> case of an error during further processing, add a framework for
> collecting the needed changes and committing them only when the whole
> operation has succeeded.
> 
> This scheme can reuse large parts of the per transaction accounting.
> The changed_domain handling can be reused, but it should be possible for
> the array size of the accounting data to differ between the two use cases.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 09 18:35:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 18:35:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532431.828638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSBM-0001fA-Bu; Tue, 09 May 2023 18:35:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532431.828638; Tue, 09 May 2023 18:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSBM-0001f3-7b; Tue, 09 May 2023 18:35:28 +0000
Received: by outflank-mailman (input) for mailman id 532431;
 Tue, 09 May 2023 18:35:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=i4ct=A6=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1pwSBK-0001ex-B2
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 18:35:26 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 42c75559-ee98-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 20:35:25 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-393-DaIJTyJ1OoywXMxBU_OwCg-1; Tue, 09 May 2023 14:35:19 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6CB1E38184E0;
 Tue,  9 May 2023 18:35:18 +0000 (UTC)
Received: from redhat.com (unknown [10.39.194.192])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 0084140C2063;
 Tue,  9 May 2023 18:35:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42c75559-ee98-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683657323;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hK66GgZK4oBJTlqAp6Gt40la0pcXeIc5sZ5w97pfWSM=;
	b=dtOCS4w3fYTvi2jAsD99shMd27u5kkI2v6iVqXB6Obu46+6yn7uvQwfCQsOuJSt0c362Fl
	eSUP5A2K5LHUhdPUabQKgDPmxW3atW+KCjryVjR10zL5NYYlfxxAq6+jMc5xvT/fWTbnGx
	ETae1VkSQFWmdGh/LHMWvraoy5h91og=
X-MC-Unique: DaIJTyJ1OoywXMxBU_OwCg-1
Date: Tue, 9 May 2023 20:35:12 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>, Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Daniel P. Berrangé <berrange@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com, Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: Re: [PATCH v5 00/21] block: remove aio_disable_external() API
Message-ID: <ZFqSYJaOeKwU1DIo@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
 <ZFQnSjGiEWuSFWTh@redhat.com>
 <20230509175138.GC1018047@fedora>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="+wtMkZiYhpfvLEAI"
Content-Disposition: inline
In-Reply-To: <20230509175138.GC1018047@fedora>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1


--+wtMkZiYhpfvLEAI
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Am 09.05.2023 um 19:51 hat Stefan Hajnoczi geschrieben:
> On Thu, May 04, 2023 at 11:44:42PM +0200, Kevin Wolf wrote:
> > Am 04.05.2023 um 21:53 hat Stefan Hajnoczi geschrieben:
> > > v5:
> > > - Use atomic accesses for in_flight counter in vhost-user-server.c [Kevin]
> > > - Stash SCSIDevice id/lun values for VIRTIO_SCSI_T_TRANSPORT_RESET event
> > >   before unrealizing the SCSIDevice [Kevin]
> > > - Keep vhost-user-blk export .detach() callback so ctx is set to NULL [Kevin]
> > > - Narrow BdrvChildClass and BlockDriver drained_{begin/end/poll} callbacks from
> > >   IO_OR_GS_CODE() to GLOBAL_STATE_CODE() [Kevin]
> > > - Include Kevin's "block: Fix use after free in blockdev_mark_auto_del()" to
> > >   fix a latent bug that was exposed by this series
> >
> > I only just finished reviewing v4 when you had already sent v5, but it
> > hadn't arrived yet. I had a few more comments on what are now patches
> > 17, 18, 19 and 21 in v5. I think they all still apply.
>
> I'm not sure which comments from v4 still apply. In my email client all
> your replies were already read when I sent v5.

Yes, but I added some more replies after you had sent v5 (and before I
fetched mail again to actually see v5).

> Maybe you can share the Message-Id of something I still need to address?

I thought the patch numbers identified them and were easier, but sure:

Message-ID: <ZFQc89cFJuoGF+qI@redhat.com>
Message-ID: <ZFQgBvWShB4NCymj@redhat.com>
Message-ID: <ZFQivbkVPcX3nECA@redhat.com>
Message-ID: <ZFQk2TdhZ6DiwM4t@redhat.com>

Kevin

--+wtMkZiYhpfvLEAI
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAmRakmAACgkQfwmycsiP
L9aOzw//eu0aqw74PsvIWZfWFl7KSwRHbJx2OOazG9pWzqezZB/pFVvTwtGHL4+Q
ltXsY1aWDqysXFihI1AGxyqK16v0o8oSdxIk+dDAdU8FGgqJu70yk1nH7fYJUnW4
CYITZ0TE4w8OESRt4E219tfd/JsZmKzHnmY5cULkogXGhwSRZlekD3BTB5rXAhGh
gOOR97HNmyWa5/syO1Z5XNd6G79z6AWUg1es9IxEaYziuViRPjTPMVfpbm0BiF8I
O46S+Iz9S2yFAT18pOdnp5Vq51qerrNQzuhJZ++SktVRgUIjLoHtwxnGTf2eB7Jk
Rf9K9rBu0H2Yoj0Lgt7uOvNSGqnH+9gcYiZltqseArCwsozeJhZDwMnYpEe/USYP
PROvYwT7il2VsqYgKIvVuUcw3ev3tH9aoreHe1e6uYjfBFpVDKL0yCc3SLZ/Kezw
hd8Vpa0Hff0zf8cbhb3IEkP0wr0W4I6ccMSzvTvJoG8hC+YVHu0vSqZ9eQYLFhnf
P8R9xnNDt5RaLyzNcCAuD2D4NAOK/D2tnJ7BpDhaxqqT1AoQPHvBj3vIW8b56/rk
PWJ1h6qNz524X617jMUKC8cMoyUuVFt5mghZyT3krbT49IIteY2H2vS0AnRrLSSs
+qsNnB6Ol7zV2j/aFK+IqKsc6rFIRnFm6LiI/JBhIdbduWMpv3c=
=67t7
-----END PGP SIGNATURE-----

--+wtMkZiYhpfvLEAI--



From xen-devel-bounces@lists.xenproject.org Tue May 09 18:46:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 18:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532435.828647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSMB-0003Ek-9L; Tue, 09 May 2023 18:46:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532435.828647; Tue, 09 May 2023 18:46:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSMB-0003Ed-6U; Tue, 09 May 2023 18:46:39 +0000
Received: by outflank-mailman (input) for mailman id 532435;
 Tue, 09 May 2023 18:46:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pwSMA-0003EX-2y
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 18:46:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSM9-0000h0-D4; Tue, 09 May 2023 18:46:37 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.0.228]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSM9-0005nT-6G; Tue, 09 May 2023 18:46:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Cte6XHuqNiyt7bJbnbNBdUlKdoCQf/YJv2ZJHQr6nrU=; b=DfvHfXMotQSiJbsOIQAlq8EaFQ
	GvLUFFRdbEcA2DrSfP8+ERVHat/Uh6hw8NcgkWyLvqG5Z0yQhrcaqFh4RaN8UMgVsdQrlVPe8tJLD
	LGfGJRzcyNqxbQ4lMGLkzHvc30Yzajyt+JRL9ZnyT/c0CnpKXIBjf4j5hYyD760sbGuw=;
Message-ID: <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
Date: Tue, 9 May 2023 19:46:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 05/14] tools/xenstore: use accounting buffering for
 node accounting
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-6-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230508114754.31514-6-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 08/05/2023 12:47, Juergen Gross wrote:
> Add the node accounting to the accounting information buffering in
> order to avoid having to undo it in case of failure.
> 
> This requires calling domain_nbentry_dec() before any changes to the
> database, as it can now return an error.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V5:
> - add error handling after domain_nbentry_dec() calls (Julien Grall)
> ---
>   tools/xenstore/xenstored_core.c   | 29 +++++++----------------------
>   tools/xenstore/xenstored_domain.h |  4 ++--
>   2 files changed, 9 insertions(+), 24 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 8392bdec9b..22da434e2a 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -1454,7 +1454,6 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
>   static int destroy_node(struct connection *conn, struct node *node)
>   {
>   	destroy_node_rm(conn, node);
> -	domain_nbentry_dec(conn, get_node_owner(node));
>   
>   	/*
>   	 * It is not possible to easily revert the changes in a transaction.
> @@ -1645,6 +1644,9 @@ static int delnode_sub(const void *ctx, struct connection *conn,
>   	if (ret > 0)
>   		return WALK_TREE_SUCCESS_STOP;
>   
> +	if (domain_nbentry_dec(conn, get_node_owner(node)))
> +		return WALK_TREE_ERROR_STOP;

I think there is a potential issue with the buffering here: in case of 
failure, the node could have been removed, but the quota would not be 
properly accounted for.

Also, I think a comment would be warranted to explain why we are returning 
WALK_TREE_ERROR_STOP here when...

> +
>   	/* In case of error stop the walk. */
>   	if (!ret && do_tdb_delete(conn, &key, &node->acc))
>   		return WALK_TREE_SUCCESS_STOP;

... this is not the case when do_tdb_delete() fails for some reason.

> @@ -1657,8 +1659,6 @@ static int delnode_sub(const void *ctx, struct connection *conn,
>   	watch_exact = strcmp(root, node->name);
>   	fire_watches(conn, ctx, node->name, node, watch_exact, NULL);
>   
> -	domain_nbentry_dec(conn, get_node_owner(node));
> -
>   	return WALK_TREE_RM_CHILDENTRY;
>   }
>   
> @@ -1797,29 +1797,14 @@ static int do_set_perms(const void *ctx, struct connection *conn,
>   		return EPERM;
>   
>   	old_perms = node->perms;
> -	domain_nbentry_dec(conn, get_node_owner(node));
> +	if (domain_nbentry_dec(conn, get_node_owner(node)))
> +		return ENOMEM;
>   	node->perms = perms;
> -	if (domain_nbentry_inc(conn, get_node_owner(node))) {
> -		node->perms = old_perms;
> -		/*
> -		 * This should never fail because we had a reference on the
> -		 * domain before and Xenstored is single-threaded.
> -		 */
> -		domain_nbentry_inc(conn, get_node_owner(node));
> +	if (domain_nbentry_inc(conn, get_node_owner(node)))
>   		return ENOMEM;
> -	}
> -
> -	if (write_node(conn, node, false)) {
> -		int saved_errno = errno;
>   
> -		domain_nbentry_dec(conn, get_node_owner(node));
> -		node->perms = old_perms;
> -		/* No failure possible as above. */
> -		domain_nbentry_inc(conn, get_node_owner(node));
> -
> -		errno = saved_errno;
> +	if (write_node(conn, node, false))
>   		return errno;
> -	}
>   
>   	fire_watches(conn, ctx, name, node, false, &old_perms);
>   	send_ack(conn, XS_SET_PERMS);
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index e40657216b..466549709f 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -25,9 +25,9 @@
>    * a per transaction array.
>    */
>   enum accitem {
> +	ACC_NODES,
>   	ACC_REQ_N,		/* Number of elements per request. */
> -	ACC_NODES = ACC_REQ_N,
> -	ACC_TR_N,		/* Number of elements per transaction. */
> +	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
>   	ACC_CHD_N = ACC_TR_N,	/* max(ACC_REQ_N, ACC_TR_N), for changed dom. */
>   	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
>   };


Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 09 18:48:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 18:48:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532441.828658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSOA-0003sg-Pk; Tue, 09 May 2023 18:48:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532441.828658; Tue, 09 May 2023 18:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSOA-0003sZ-MJ; Tue, 09 May 2023 18:48:42 +0000
Received: by outflank-mailman (input) for mailman id 532441;
 Tue, 09 May 2023 18:48:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pwSO9-0003sR-VU
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 18:48:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSO9-0000kJ-BN; Tue, 09 May 2023 18:48:41 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.0.228]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSO9-0005qs-5m; Tue, 09 May 2023 18:48:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Mt0OIZGEiU3mU2IsfnTro9G8M3qLcOeJASvvzdcfPCQ=; b=Zg1pXIZnGfMrXU3u6Hnf2iLLXu
	xYvwWe0M0cPG9IEIIqiZFwUBwczt0Fr2wObn9jt+FaNSiorfBYlFmIEZKgbxwzrGrz3kvQ6EcrDML
	JxIQ2vGN08BS0VB8US3tiNHH5tmI2gdjkZ28uikR9i5SSCBBwiYE2hAwc9vtSTP5IEa8=;
Message-ID: <0398e6f6-c2c5-9329-627d-2dfaa818e406@xen.org>
Date: Tue, 9 May 2023 19:48:39 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 07/14] tools/xenstore: use accounting data array for
 per-domain values
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-8-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230508114754.31514-8-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 08/05/2023 12:47, Juergen Gross wrote:
> Add the accounting of per-domain usage of Xenstore memory, watches, and
> outstanding requests to the array-based mechanism.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

> ---
> V5:
> - drop domid parameter from domain_outstanding_inc() (Julien Grall)
> ---
>   tools/xenstore/xenstored_core.c   |   4 +-
>   tools/xenstore/xenstored_domain.c | 109 +++++++++++-------------------
>   tools/xenstore/xenstored_domain.h |   8 ++-
>   3 files changed, 48 insertions(+), 73 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 4d1debeba1..e7f86f9487 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -255,7 +255,7 @@ static void free_buffered_data(struct buffered_data *out,
>   			req->pend.ref.event_cnt--;
>   			if (!req->pend.ref.event_cnt && !req->on_out_list) {
>   				if (req->on_ref_list) {
> -					domain_outstanding_domid_dec(
> +					domain_outstanding_dec(conn,
>   						req->pend.ref.domid);
>   					list_del(&req->list);
>   				}
> @@ -271,7 +271,7 @@ static void free_buffered_data(struct buffered_data *out,
>   		out->on_ref_list = true;
>   		return;
>   	} else
> -		domain_outstanding_dec(conn);
> +		domain_outstanding_dec(conn, conn->id);
>   
>   	talloc_free(out);
>   }
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 7770c4f395..a35ed97fd7 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -72,19 +72,12 @@ struct domain
>   	/* Accounting data for this domain. */
>   	unsigned int acc[ACC_N];
>   
> -	/* Amount of memory allocated for this domain. */
> -	int memory;
> +	/* Memory quota data for this domain. */
>   	bool soft_quota_reported;
>   	bool hard_quota_reported;
>   	time_t mem_last_msg;
>   #define MEM_WARN_MINTIME_SEC 10
>   
> -	/* number of watch for this domain */
> -	int nbwatch;
> -
> -	/* Number of outstanding requests. */
> -	int nboutstanding;
> -
>   	/* write rate limit */
>   	wrl_creditt wrl_credit; /* [ -wrl_config_writecost, +_dburst ] */
>   	struct wrl_timestampt wrl_timestamp;
> @@ -200,14 +193,15 @@ static bool domain_can_write(struct connection *conn)
>   
>   static bool domain_can_read(struct connection *conn)
>   {
> -	struct xenstore_domain_interface *intf = conn->domain->interface;
> +	struct domain *domain = conn->domain;
> +	struct xenstore_domain_interface *intf = domain->interface;
>   
>   	if (domain_is_unprivileged(conn)) {
> -		if (conn->domain->wrl_credit < 0)
> +		if (domain->wrl_credit < 0)
>   			return false;
> -		if (conn->domain->nboutstanding >= quota_req_outstanding)
> +		if (domain->acc[ACC_OUTST] >= quota_req_outstanding)
>   			return false;
> -		if (conn->domain->memory >= quota_memory_per_domain_hard &&
> +		if (domain->acc[ACC_MEM] >= quota_memory_per_domain_hard &&
>   		    quota_memory_per_domain_hard)
>   			return false;
>   	}
> @@ -438,10 +432,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>   	if (!resp) return ENOMEM
>   
>   	ent(nodes, d->acc[ACC_NODES]);
> -	ent(watches, d->nbwatch);
> +	ent(watches, d->acc[ACC_WATCH]);
>   	ent(transactions, ta);
> -	ent(outstanding, d->nboutstanding);
> -	ent(memory, d->memory);
> +	ent(outstanding, d->acc[ACC_OUTST]);
> +	ent(memory, d->acc[ACC_MEM]);
>   
>   #undef ent
>   
> @@ -1187,14 +1181,16 @@ unsigned int domain_nbentry(struct connection *conn)
>   	       ? domain_acc_add(conn, conn->id, ACC_NODES, 0, true) : 0;
>   }
>   
> -static bool domain_chk_quota(struct domain *domain, int mem)
> +static bool domain_chk_quota(struct connection *conn, unsigned int mem)
>   {
>   	time_t now;
> +	struct domain *domain;
>   
> -	if (!domain || !domid_is_unprivileged(domain->domid) ||
> -	    (domain->conn && domain->conn->is_ignored))
> +	if (!conn || !domid_is_unprivileged(conn->id) ||
> +	    conn->is_ignored)
>   		return false;
>   
> +	domain = conn->domain;
>   	now = time(NULL);
>   
>   	if (mem >= quota_memory_per_domain_hard &&
> @@ -1239,80 +1235,57 @@ static bool domain_chk_quota(struct domain *domain, int mem)
>   int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
>   		      bool no_quota_check)
>   {
> -	struct domain *domain;
> +	int ret;
>   
> -	domain = find_domain_struct(domid);
> -	if (domain) {
> -		/*
> -		 * domain_chk_quota() will print warning and also store whether
> -		 * the soft/hard quota has been hit. So check no_quota_check
> -		 * *after*.
> -		 */
> -		if (domain_chk_quota(domain, domain->memory + mem) &&
> -		    !no_quota_check)
> -			return ENOMEM;
> -		domain->memory += mem;
> -	} else {
> -		/*
> -		 * The domain the memory is to be accounted for should always
> -		 * exist, as accounting is done either for a domain related to
> -		 * the current connection, or for the domain owning a node
> -		 * (which is always existing, as the owner of the node is
> -		 * tested to exist and deleted or replaced by domid 0 if not).
> -		 * So not finding the related domain MUST be an error in the
> -		 * data base.
> -		 */
> -		errno = ENOENT;
> -		corrupt(NULL, "Accounting called for non-existing domain %u\n",
> -			domid);
> -		return ENOENT;
> -	}
> +	ret = domain_acc_add(conn, domid, ACC_MEM, 0, true);
> +	if (ret < 0)
> +		return -ret;
> +
> +	/*
> +	 * domain_chk_quota() will print warning and also store whether the
> +	 * soft/hard quota has been hit. So check no_quota_check *after*.
> +	 */
> +	if (domain_chk_quota(conn, ret + mem) && !no_quota_check)
> +		return ENOMEM;
> +
> +	/*
> +	 * The domain the memory is to be accounted for should always exist,
> +	 * as accounting is done either for a domain related to the current
> +	 * connection, or for the domain owning a node (which is always
> +	 * existing, as the owner of the node is tested to exist and deleted
> +	 * or replaced by domid 0 if not).
> +	 * So not finding the related domain MUST be an error in the data base.
> +	 */
> +	domain_acc_add(conn, domid, ACC_MEM, mem, true);
>   
>   	return 0;
>   }
>   
>   void domain_watch_inc(struct connection *conn)
>   {
> -	if (!conn || !conn->domain)
> -		return;
> -	conn->domain->nbwatch++;
> +	domain_acc_add(conn, conn->id, ACC_WATCH, 1, true);
>   }
>   
>   void domain_watch_dec(struct connection *conn)
>   {
> -	if (!conn || !conn->domain)
> -		return;
> -	if (conn->domain->nbwatch)
> -		conn->domain->nbwatch--;
> +	domain_acc_add(conn, conn->id, ACC_WATCH, -1, true);
>   }
>   
>   int domain_watch(struct connection *conn)
>   {
>   	return (domain_is_unprivileged(conn))
> -		? conn->domain->nbwatch
> +		? domain_acc_add(conn, conn->id, ACC_WATCH, 0, true)
>   		: 0;
>   }
>   
>   void domain_outstanding_inc(struct connection *conn)
>   {
> -	if (!conn || !conn->domain)
> -		return;
> -	conn->domain->nboutstanding++;
> +	domain_acc_add(conn, conn->id, ACC_OUTST, 1, true);
>   }
>   
> -void domain_outstanding_dec(struct connection *conn)
> +void domain_outstanding_dec(struct connection *conn, unsigned int domid)
>   {
> -	if (!conn || !conn->domain)
> -		return;
> -	conn->domain->nboutstanding--;
> -}
> -
> -void domain_outstanding_domid_dec(unsigned int domid)
> -{
> -	struct domain *d = find_domain_by_domid(domid);
> -
> -	if (d)
> -		d->nboutstanding--;
> +	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
>   }
>   
>   static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index b94548fd7d..086133407b 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -29,7 +29,10 @@ enum accitem {
>   	ACC_REQ_N,		/* Number of elements per request. */
>   	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
>   	ACC_CHD_N = ACC_TR_N,	/* max(ACC_REQ_N, ACC_TR_N), for changed dom. */
> -	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
> +	ACC_WATCH = ACC_TR_N,
> +	ACC_OUTST,
> +	ACC_MEM,
> +	ACC_N,			/* Number of elements per domain. */
>   };
>   
>   void handle_event(void);
> @@ -109,8 +112,7 @@ void domain_watch_inc(struct connection *conn);
>   void domain_watch_dec(struct connection *conn);
>   int domain_watch(struct connection *conn);
>   void domain_outstanding_inc(struct connection *conn);
> -void domain_outstanding_dec(struct connection *conn);
> -void domain_outstanding_domid_dec(unsigned int domid);
> +void domain_outstanding_dec(struct connection *conn, unsigned int domid);
>   int domain_get_quota(const void *ctx, struct connection *conn,
>   		     unsigned int domid);
>   

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 09 18:51:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 18:51:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532445.828667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSQR-0005Ht-57; Tue, 09 May 2023 18:51:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532445.828667; Tue, 09 May 2023 18:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSQR-0005Hm-2Q; Tue, 09 May 2023 18:51:03 +0000
Received: by outflank-mailman (input) for mailman id 532445;
 Tue, 09 May 2023 18:51:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pwSQQ-0005Hg-3V
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 18:51:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSQP-0000n0-G4; Tue, 09 May 2023 18:51:01 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.228]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSQP-0005s8-AH; Tue, 09 May 2023 18:51:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=739NiJzUDoX5x+NVpd1GQWU9M4J/I0b8LtRpGLWxTAg=; b=f4N5MU5Svkzj6iWljmmYGm+aR0
	whc1TjtlgkoZH5T8ziYH1NE+NZTECbpADCaY6RG2/vgYA8yEiMfc9BMND/nvyHQt5a7m/91t9o9Ui
	vHdmR7L9EgplP1m7MLKM/7J2Pur0Iu942X4SZp16iS1K5VV1IJ0cdn9eJW5itCqsH5Xk=;
Message-ID: <81b70f6b-2953-4d31-f25e-5c96fff50f60@xen.org>
Date: Tue, 9 May 2023 19:50:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 10/14] tools/xenstore: switch transaction accounting to
 generic accounting
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-11-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230508114754.31514-11-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 08/05/2023 12:47, Juergen Gross wrote:
> As transaction accounting is active for unprivileged domains only, it
> can easily be added to the generic per-domain accounting.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V5:
> - use list_empty(&conn->transaction_list) for detection of "no
>    transaction active" (Julien Grall)
> ---
>   tools/xenstore/xenstored_core.c        |  3 +--
>   tools/xenstore/xenstored_core.h        |  1 -
>   tools/xenstore/xenstored_domain.c      | 21 ++++++++++++++++++---
>   tools/xenstore/xenstored_domain.h      |  4 ++++
>   tools/xenstore/xenstored_transaction.c | 14 ++++++--------
>   5 files changed, 29 insertions(+), 14 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 3caf9e45dc..c98d30561f 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2087,7 +2087,7 @@ static void consider_message(struct connection *conn)
>   	 * stalled. This will ignore new requests until Live-Update happened
>   	 * or it was aborted.
>   	 */
> -	if (lu_is_pending() && conn->transaction_started == 0 &&
> +	if (lu_is_pending() && list_empty(&conn->transaction_list) &&
>   	    conn->in->hdr.msg.type == XS_TRANSACTION_START) {
>   		trace("Delaying transaction start for connection %p req_id %u\n",
>   		      conn, conn->in->hdr.msg.req_id);
> @@ -2194,7 +2194,6 @@ struct connection *new_connection(const struct interface_funcs *funcs)
>   	new->funcs = funcs;
>   	new->is_ignored = false;
>   	new->is_stalled = false;
> -	new->transaction_started = 0;
>   	INIT_LIST_HEAD(&new->out_list);
>   	INIT_LIST_HEAD(&new->acc_list);
>   	INIT_LIST_HEAD(&new->ref_list);
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 5a11dc1231..3564d85d7d 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -151,7 +151,6 @@ struct connection
>   	/* List of in-progress transactions. */
>   	struct list_head transaction_list;
>   	uint32_t next_transaction_id;
> -	unsigned int transaction_started;
>   	time_t ta_start_time;
>   
>   	/* List of delayed requests. */
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 03825ca24b..25c6d20036 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -417,12 +417,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>   {
>   	struct domain *d = find_domain_struct(domid);
>   	char *resp;
> -	int ta;
>   
>   	if (!d)
>   		return ENOENT;
>   
> -	ta = d->conn ? d->conn->transaction_started : 0;
>   	resp = talloc_asprintf(ctx, "Domain %u:\n", domid);
>   	if (!resp)
>   		return ENOMEM;
> @@ -433,7 +431,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>   
>   	ent(nodes, d->acc[ACC_NODES]);
>   	ent(watches, d->acc[ACC_WATCH]);
> -	ent(transactions, ta);
> +	ent(transactions, d->acc[ACC_TRANS]);
>   	ent(outstanding, d->acc[ACC_OUTST]);
>   	ent(memory, d->acc[ACC_MEM]);
>   
> @@ -1298,6 +1296,23 @@ void domain_outstanding_dec(struct connection *conn, unsigned int domid)
>   	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
>   }
>   
> +void domain_transaction_inc(struct connection *conn)
> +{
> +	domain_acc_add(conn, conn->id, ACC_TRANS, 1, true);
> +}
> +
> +void domain_transaction_dec(struct connection *conn)
> +{
> +	domain_acc_add(conn, conn->id, ACC_TRANS, -1, true);
> +}
> +
> +unsigned int domain_transaction_get(struct connection *conn)
> +{
> +	return (domain_is_unprivileged(conn))
> +		? domain_acc_add(conn, conn->id, ACC_TRANS, 0, true)
> +		: 0;
> +}
> +
>   static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
>   static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
>   static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index 086133407b..01b6f1861b 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -32,6 +32,7 @@ enum accitem {
>   	ACC_WATCH = ACC_TR_N,
>   	ACC_OUTST,
>   	ACC_MEM,
> +	ACC_TRANS,
>   	ACC_N,			/* Number of elements per domain. */
>   };
>   
> @@ -113,6 +114,9 @@ void domain_watch_dec(struct connection *conn);
>   int domain_watch(struct connection *conn);
>   void domain_outstanding_inc(struct connection *conn);
>   void domain_outstanding_dec(struct connection *conn, unsigned int domid);
> +void domain_transaction_inc(struct connection *conn);
> +void domain_transaction_dec(struct connection *conn);
> +unsigned int domain_transaction_get(struct connection *conn);
>   int domain_get_quota(const void *ctx, struct connection *conn,
>   		     unsigned int domid);
>   
> diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
> index 11c8bcec84..b9e9d76a1f 100644
> --- a/tools/xenstore/xenstored_transaction.c
> +++ b/tools/xenstore/xenstored_transaction.c
> @@ -479,8 +479,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
>   	if (conn->transaction)
>   		return EBUSY;
>   
> -	if (domain_is_unprivileged(conn) &&
> -	    conn->transaction_started > quota_max_transaction)
> +	if (domain_transaction_get(conn) > quota_max_transaction)
>   		return ENOSPC;
>   
>   	/* Attach transaction to ctx for autofree until it's complete */
> @@ -502,12 +501,12 @@ int do_transaction_start(const void *ctx, struct connection *conn,
>   	} while (!IS_ERR(exists));
>   
>   	/* Now we own it. */

This comment now feels a bit misplaced. I think ...

> +	if (list_empty(&conn->transaction_list))
> +		conn->ta_start_time = time(NULL);
... the two lines should be added before it.

With that:

Acked-by: Julien Grall <jgrall@amazon.com>

>   	list_add_tail(&trans->list, &conn->transaction_list);
>   	talloc_steal(conn, trans);
>   	talloc_set_destructor(trans, destroy_transaction);
> -	if (!conn->transaction_started)
> -		conn->ta_start_time = time(NULL);
> -	conn->transaction_started++;
> +	domain_transaction_inc(conn);
>   	wrl_ntransactions++;
>   
>   	snprintf(id_str, sizeof(id_str), "%u", trans->id);
> @@ -533,8 +532,8 @@ int do_transaction_end(const void *ctx, struct connection *conn,
>   
>   	conn->transaction = NULL;
>   	list_del(&trans->list);
> -	conn->transaction_started--;
> -	if (!conn->transaction_started)
> +	domain_transaction_dec(conn);
> +	if (list_empty(&conn->transaction_list))
>   		conn->ta_start_time = 0;
>   
>   	chk_quota = trans->node_created && domain_is_unprivileged(conn);
> @@ -588,7 +587,6 @@ void conn_delete_all_transactions(struct connection *conn)
>   
>   	assert(conn->transaction == NULL);
>   
> -	conn->transaction_started = 0;
>   	conn->ta_start_time = 0;
>   }
>   

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 09 18:52:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 18:52:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532449.828678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSRd-0005pY-F0; Tue, 09 May 2023 18:52:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532449.828678; Tue, 09 May 2023 18:52:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSRd-0005pR-CF; Tue, 09 May 2023 18:52:17 +0000
Received: by outflank-mailman (input) for mailman id 532449;
 Tue, 09 May 2023 18:52:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwSRb-0005p9-ID; Tue, 09 May 2023 18:52:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwSRb-0000o1-Dz; Tue, 09 May 2023 18:52:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwSRa-00079Q-Sd; Tue, 09 May 2023 18:52:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwSRa-00045G-S9; Tue, 09 May 2023 18:52:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uZdeg8M8oJ7rlTCVGEDJ1vnRMp8Eg/j9boFgPbCvWgU=; b=5aD9ahc42AcXmVbdDOSe9vvH9S
	xN02lJ5C065kbCLDc89PRZdqau8n6NJtHtOyAyKxXFwp9GVRxo9bahl27ESGYK9FGrixmgWWjFFzM
	zFZZkUi7F21oIhAQEN6+4SFFcyVplYFW7xMXbbpgQkWFEyZsuZZdi9Zi8PB0OJxbpFgM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180586-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180586: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=271477b59e723250f17a7e20f139262057921b6a
X-Osstest-Versions-That:
    qemuu=792f77f376adef944f9a03e601f6ad90c2f891b2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 May 2023 18:52:14 +0000

flight 180586 qemu-mainline real [real]
flight 180591 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180586/
http://logs.test-lab.xenproject.org/osstest/logs/180591/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           8 xen-boot            fail pass in 180591-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 180591 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 180591 never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install             fail like 180559
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180559
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180559
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180559
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180559
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180559
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180559
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180559
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180559
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                271477b59e723250f17a7e20f139262057921b6a
baseline version:
 qemuu                792f77f376adef944f9a03e601f6ad90c2f891b2

Last test of basis   180559  2023-05-06 14:16:57 Z    3 days
Testing same since   180586  2023-05-09 06:38:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juan Quintela <quintela@redhat.com>
  Lukas Straub <lukasstraub2@web.de>
  Peter Xu <peterx@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   792f77f376..271477b59e  271477b59e723250f17a7e20f139262057921b6a -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue May 09 18:55:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 18:55:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532457.828688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSUm-0006X0-3j; Tue, 09 May 2023 18:55:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532457.828688; Tue, 09 May 2023 18:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSUm-0006Wt-0h; Tue, 09 May 2023 18:55:32 +0000
Received: by outflank-mailman (input) for mailman id 532457;
 Tue, 09 May 2023 18:55:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=i4ct=A6=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1pwSUk-0006Wj-9V
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 18:55:30 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10881ce9-ee9b-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 20:55:29 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-347-hns5wC0vOUi9L7gWVTn1Ow-1; Tue, 09 May 2023 14:55:21 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D4CC2811E7C;
 Tue,  9 May 2023 18:55:20 +0000 (UTC)
Received: from redhat.com (unknown [10.39.194.192])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 34EDC492B00;
 Tue,  9 May 2023 18:55:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10881ce9-ee9b-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683658528;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=4IcDDuIOaLXI9DAvceuatjdKsKxdhTRUwJDbKrSPK+8=;
	b=cVm5IPHZ9zfSUzvLTAnc3nRSJ46HzZwOY23l70Bodl/CeednPytYyKsF86y+BIsfjj1Axp
	M7WpacPKGpvQba4FBQ16jUF58yBqSg1M5AUh8JhhzMPtoYtKbELvYeU5qCLQjKbkTs+tnZ
	SkvflBg5vHCHMkRizq+do6hlEANa0kM=
X-MC-Unique: hns5wC0vOUi9L7gWVTn1Ow-1
Date: Tue, 9 May 2023 20:55:14 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>, Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com, Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v5 05/21] virtio-scsi: stop using aio_disable_external()
 during unplug
Message-ID: <ZFqXEihzG18me26X@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
 <20230504195327.695107-6-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230504195327.695107-6-stefanha@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

Am 04.05.2023 um 21:53 hat Stefan Hajnoczi geschrieben:
> This patch is part of an effort to remove the aio_disable_external()
> API because it does not fit in a multi-queue block layer world where
> many AioContexts may be submitting requests to the same disk.
> 
> The SCSI emulation code is already in good shape to stop using
> aio_disable_external(). It was only used by commit 9c5aad84da1c
> ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
> disk") to ensure that virtio_scsi_hotunplug() works while the guest
> driver is submitting I/O.
> 
> Ensure virtio_scsi_hotunplug() is safe as follows:
> 
> 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
>    device_set_realized() calls qatomic_set(&dev->realized, false) so
>    that future scsi_device_get() calls return NULL because they exclude
>    SCSIDevices with realized=false.
> 
>    That means virtio-scsi will reject new I/O requests to this
>    SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
>    virtio_scsi_hotunplug() is still executing. We are protected against
>    new requests!
> 
> 2. scsi_device_unrealize() already contains a call to

I think you mean scsi_qdev_unrealize(). Can be fixed while applying.

>    scsi_device_purge_requests() so that in-flight requests are cancelled
>    synchronously. This ensures that no in-flight requests remain once
>    qdev_simple_device_unplug_cb() returns.
> 
> Thanks to these two conditions we don't need aio_disable_external()
> anymore.
> 
> Cc: Zhengui Li <lizhengui@huawei.com>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

Kevin



From xen-devel-bounces@lists.xenproject.org Tue May 09 19:07:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 19:07:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532462.828698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSgY-000892-4L; Tue, 09 May 2023 19:07:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532462.828698; Tue, 09 May 2023 19:07:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSgY-00088v-1R; Tue, 09 May 2023 19:07:42 +0000
Received: by outflank-mailman (input) for mailman id 532462;
 Tue, 09 May 2023 19:07:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=i4ct=A6=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1pwSgW-00088p-Ci
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 19:07:40 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c2a87f95-ee9c-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 21:07:37 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-529-B9dAK25vNzSSerVDhtPZ3A-1; Tue, 09 May 2023 15:07:32 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6C0DE872011;
 Tue,  9 May 2023 19:07:30 +0000 (UTC)
Received: from redhat.com (unknown [10.39.194.192])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C565640C6E67;
 Tue,  9 May 2023 19:07:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2a87f95-ee9c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683659256;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3tJ3xOkIIK9sjNiEc9UslJdWtKHgMpCwgj/6/bqEnvw=;
	b=dO7BhPExgWVSedrdEDDS0Yo9SP/3abgm/piFKl4HpwT7OswgDHiGJYIMhFkUVHYncWwMrb
	MG6vzm6KFRzt2HAOtkx8KUE5NGszaXCmwn15PpUXphNJfDgWCTz4RnaIeIbPgOLTmBHnL5
	bYYSGP5vlc6TwCnFl6EYmJwsWeToMUM=
X-MC-Unique: B9dAK25vNzSSerVDhtPZ3A-1
Date: Tue, 9 May 2023 21:07:23 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>, Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com, Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Subject: Re: [PATCH v5 00/21] block: remove aio_disable_external() API
Message-ID: <ZFqZ6zUS9QOEXxhz@redhat.com>
References: <20230504195327.695107-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230504195327.695107-1-stefanha@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

Am 04.05.2023 um 21:53 hat Stefan Hajnoczi geschrieben:
> v5:
> - Use atomic accesses for in_flight counter in vhost-user-server.c [Kevin]
> - Stash SCSIDevice id/lun values for VIRTIO_SCSI_T_TRANSPORT_RESET event
>   before unrealizing the SCSIDevice [Kevin]
> - Keep vhost-user-blk export .detach() callback so ctx is set to NULL [Kevin]
> - Narrow BdrvChildClass and BlockDriver drained_{begin/end/poll} callbacks from
>   IO_OR_GS_CODE() to GLOBAL_STATE_CODE() [Kevin]
> - Include Kevin's "block: Fix use after free in blockdev_mark_auto_del()" to
>   fix a latent bug that was exposed by this series
> 
> v4:
> - Remove external_disable_cnt variable [Philippe]
> - Add Patch 1 to fix assertion failure in .drained_end() -> blk_get_aio_context()
> v3:
> - Resend full patch series. v2 was sent in the middle of a git rebase and was
>   missing patches. [Eric]
> - Apply Reviewed-by tags.
> v2:
> - Do not rely on BlockBackend request queuing, implement .drained_begin/end()
>   instead in xen-block, virtio-blk, and virtio-scsi [Paolo]
> - Add qdev_is_realized() API [Philippe]
> - Add patch to avoid AioContext lock around blk_exp_ref/unref() [Paolo]
> - Add patch to call .drained_begin/end() from main loop thread to simplify
>   callback implementations
> 
> The aio_disable_external() API temporarily suspends file descriptor monitoring
> in the event loop. The block layer uses this to prevent new I/O requests being
> submitted from the guest and elsewhere between bdrv_drained_begin() and
> bdrv_drained_end().
> 
> While the block layer still needs to prevent new I/O requests in drained
> sections, the aio_disable_external() API can be replaced with
> .drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
> BlockDevOps.
> 
> This newer .drained_begin/end/poll() approach is attractive because it works
> without specifying a specific AioContext. The block layer is moving towards
> multi-queue and that means multiple AioContexts may be processing I/O
> simultaneously.
> 
> The aio_disable_external() API was always somewhat hacky. It suspends all file
> descriptors that were registered with is_external=true, even if they have
> nothing to do with the BlockDriverState graph nodes that are being drained.
> It's better to solve a block layer problem in the block layer than to have an
> odd event loop API solution.
> 
> The approach in this patch series is to implement BlockDevOps
> .drained_begin/end() callbacks that temporarily stop file descriptor handlers.
> This ensures that new I/O requests are not submitted in drained sections.

Patches 2-16: Reviewed-by: Kevin Wolf <kwolf@redhat.com>



From xen-devel-bounces@lists.xenproject.org Tue May 09 19:21:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 19:21:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532473.828708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwStZ-000271-7y; Tue, 09 May 2023 19:21:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532473.828708; Tue, 09 May 2023 19:21:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwStZ-00026u-4Y; Tue, 09 May 2023 19:21:09 +0000
Received: by outflank-mailman (input) for mailman id 532473;
 Tue, 09 May 2023 19:21:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pwStY-00026o-AH
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 19:21:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwStX-0001Xk-9X; Tue, 09 May 2023 19:21:07 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.0.228]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwStX-00079a-3c; Tue, 09 May 2023 19:21:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=mHt51Vtzo2mlWRAJ6DVCVqnXZcgvCer1AAjKv0QYQgg=; b=VrJ0Df6XAVZq03lvngDAbssqX2
	yqMJY5nFLxtYHvgB5VQ+iyQnhbPubEOHx6TZ4y+Dr1e8erXi3a5iRGhKlmf2NGLdvHwmLkOrlttAF
	xzyOwmQiHlzHRr016cBB0diBlDJU/4DsD8YTfcQjOI8WIRqwhV9ChW0852zCQKejL/uE=;
Message-ID: <90f5dfd0-e18a-7fcb-9048-057a0656a2b3@xen.org>
Date: Tue, 9 May 2023 20:21:05 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 12/14] tools/xenstore: use generic accounting for
 remaining quotas
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-13-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230508114754.31514-13-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 08/05/2023 12:47, Juergen Gross wrote:
> The maxrequests, node size, number of node permissions, and path length
> quota are a little bit special, as they are either active in
> transactions only (maxrequests), or they are just per item instead of
> count values. Nevertheless being able to know the maximum number of
> those quota related values per domain would be beneficial, so add them
> to the generic accounting.
> 
> The per domain value will never show current numbers other than zero,
> but the maximum number seen can be gathered the same way as the number
> of nodes during a transaction.
> 
> To be able to use the const qualifier for a new function switch
> domain_is_unprivileged() to take a const pointer, too.
> 
> For printing the quota/max values, adapt the print format string to
> the longest quota name (now 17 characters long).
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V5:
> - add comment (Julien Grall)
> - add missing quota printing (Julien Grall)
> ---
>   tools/xenstore/xenstored_core.c        | 15 +++++----
>   tools/xenstore/xenstored_core.h        |  2 +-
>   tools/xenstore/xenstored_domain.c      | 45 +++++++++++++++++++++-----
>   tools/xenstore/xenstored_domain.h      |  6 ++++
>   tools/xenstore/xenstored_transaction.c |  4 +--
>   tools/xenstore/xenstored_watch.c       |  2 +-
>   6 files changed, 55 insertions(+), 19 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index c98d30561f..fce73b883e 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -799,8 +799,9 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
>   		+ node->perms.num * sizeof(node->perms.p[0])
>   		+ node->datalen + node->childlen;
>   
> -	if (!no_quota_check && domain_is_unprivileged(conn) &&
> -	    data.dsize >= quota_max_entry_size) {
> +	/* Call domain_max_chk() in any case in order to record max values. */
> +	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
> +	    && !no_quota_check) {
>   		errno = ENOSPC;
>   		return errno;
>   	}
> @@ -1170,7 +1171,7 @@ static bool valid_chars(const char *node)
>   		       "0123456789-/_@") == strlen(node));
>   }
>   
> -bool is_valid_nodename(const char *node)
> +bool is_valid_nodename(const struct connection *conn, const char *node)
>   {
>   	int local_off = 0;
>   	unsigned int domid;
> @@ -1190,7 +1191,8 @@ bool is_valid_nodename(const char *node)
>   	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
>   		local_off = 0;
>   
> -	if (strlen(node) > local_off + quota_max_path_len)
> +	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
> +			   quota_max_path_len))
>   		return false;
>   
>   	return valid_chars(node);
> @@ -1252,7 +1254,7 @@ static struct node *get_node_canonicalized(struct connection *conn,
>   	*canonical_name = canonicalize(conn, ctx, name);
>   	if (!*canonical_name)
>   		return NULL;
> -	if (!is_valid_nodename(*canonical_name)) {
> +	if (!is_valid_nodename(conn, *canonical_name)) {
>   		errno = EINVAL;
>   		return NULL;
>   	}
> @@ -1778,8 +1780,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
>   		return EINVAL;
>   
>   	perms.num--;
> -	if (domain_is_unprivileged(conn) &&
> -	    perms.num > quota_nb_perms_per_node)
> +	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
>   		return ENOSPC;
>   
>   	permstr = in->buffer + strlen(in->buffer) + 1;
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 3564d85d7d..9339820156 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -258,7 +258,7 @@ void check_store(void);
>   void corrupt(struct connection *conn, const char *fmt, ...);
>   
>   /* Is this a valid node name? */
> -bool is_valid_nodename(const char *node);
> +bool is_valid_nodename(const struct connection *conn, const char *node);
>   
>   /* Get name of parent node. */
>   char *get_parent(const void *ctx, const char *node);
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 6f3a27765a..b3a719569e 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -431,7 +431,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>   		return ENOMEM;
>   
>   #define ent(t, e) \
> -	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u\n", #t, \
> +	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u\n", #t, \
>   				      d->acc[e].val, d->acc[e].max); \
>   	if (!resp) return ENOMEM
>   
> @@ -440,6 +440,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
>   	ent(transactions, ACC_TRANS);
>   	ent(outstanding, ACC_OUTST);
>   	ent(memory, ACC_MEM);
> +	ent(transaction-nodes, ACC_TRANSNODES);
> +	ent(node-permissions, ACC_NPERM);
> +	ent(path-length, ACC_PATHLEN);
> +	ent(node-size, ACC_NODESZ);
>   
>   #undef ent
>   
> @@ -457,7 +461,7 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
>   		return ENOMEM;
>   
>   #define ent(t, e) \
> -	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
> +	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
>   				      acc_global_max[e]);         \
>   	if (!resp) return ENOMEM
>   
> @@ -466,6 +470,10 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
>   	ent(transactions, ACC_TRANS);
>   	ent(outstanding, ACC_OUTST);
>   	ent(memory, ACC_MEM);
> +	ent(transaction-nodes, ACC_TRANSNODES);
> +	ent(node-permissions, ACC_NPERM);
> +	ent(path-length, ACC_PATHLEN);
> +	ent(node-size, ACC_NODESZ);
>   
>   #undef ent
>   
> @@ -1079,12 +1087,22 @@ int domain_adjust_node_perms(struct node *node)
>   	return 0;
>   }
>   
> +static void domain_acc_valid_max(struct domain *d, enum accitem what,
> +				 unsigned int val)
> +{
> +	assert(what < ARRAY_SIZE(d->acc));
> +	assert(what < ARRAY_SIZE(acc_global_max));
> +
> +	if (val > d->acc[what].max)
> +		d->acc[what].max = val;
> +	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
> +		acc_global_max[what] = val;
> +}
> +
>   static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
>   {
>   	unsigned int val;
>   
> -	assert(what < ARRAY_SIZE(d->acc));
> -

I didn't have a chance to reply on your comment on the previous version. 
So doing it here:

 > Following this reasoning I'd need to put it into even more functions.

Possibly. But for now, the discussion is about not removing the existing 
one (see more below).

 > And an
assert() triggering a little bit late is no real problem, as it will abort
xenstored anyway.

Not really. Xenstored would only be aborted if the condition is false. 
If it is not, we would return an error. That said, the condition could 
change to be true in some cases.

But now you are relying on the compiler to never optimize out the check. 
We know that compilers can remove NULL check if a pointer was 
dereferenced beforehand. I wouldn't be surprised if they can do the same 
trick with accessing an array first and then checking the bound. So the 
abort() may actually never happen.

 > Additionally with the global and the per-domain arrays now covering all
possible quotas, it would even be reasonable to drop the assert()s in
domain_acc_valid_max() completely.

I have two concerns:
   * This is the state after this series. But I don't see what would 
prevent any change in the future.
   * If I am not mistaken none of the compilers properly enforce the 
enum in C. So you could in theory pass an outside value without the 
compiler shouting at you.

So to me, this does not warrant completely dropping the assert(). If we 
discard the latter point, the ideal would be a BUILD_BUG_ON() to tie the 
enum to the array, but IIRC it is not possible to use BUILD_BUG_ON() 
with an enum. Therefore, the assert() is the best option at the moment.

Ideally, we should add the assert() in other places. But, this is not 
something I think should be requested here. My only request is to not 
remove the existing one.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 09 19:22:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 19:22:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532477.828718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSuT-0002d6-HP; Tue, 09 May 2023 19:22:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532477.828718; Tue, 09 May 2023 19:22:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSuT-0002cz-E0; Tue, 09 May 2023 19:22:05 +0000
Received: by outflank-mailman (input) for mailman id 532477;
 Tue, 09 May 2023 19:22:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pwSuS-0002ct-7k
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 19:22:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSuR-0001Yj-Jp; Tue, 09 May 2023 19:22:03 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.0.228]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSuR-0007HH-E0; Tue, 09 May 2023 19:22:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=gENtAu0xcMiHrwsyqO5fY15RBn0iF+ALNYrU7xr6PBU=; b=fVv+XBgVuLjtaT2xBR8cG/1ucJ
	M+HZ1Fts8EeqadtQAi2LuBnT9mgKjy7gbqmtzMErB5m1075IUFoGmkJNnmAfb6T5ROgr/tUsBiVgg
	b0EZrpV/nC8wRAyjlmjD1Ad+YxFwuqMEEHuUITEIW+IVHu3o7uJF3/y9MZJHH0oJ/WvU=;
Message-ID: <216773a4-0b23-6af3-90a6-050af2d8cb7b@xen.org>
Date: Tue, 9 May 2023 20:22:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 13/14] tools/xenstore: switch get_optval_int() to
 get_optval_uint()
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-14-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230508114754.31514-14-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 08/05/2023 12:47, Juergen Gross wrote:
> Let get_optval_int() return an unsigned value and rename it
> accordingly.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 09 19:24:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 19:24:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532491.828728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSwM-0003DZ-Tj; Tue, 09 May 2023 19:24:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532491.828728; Tue, 09 May 2023 19:24:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwSwM-0003DS-Py; Tue, 09 May 2023 19:24:02 +0000
Received: by outflank-mailman (input) for mailman id 532491;
 Tue, 09 May 2023 19:24:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pwSwL-0003DM-Py
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 19:24:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSwL-0001cI-9G; Tue, 09 May 2023 19:24:01 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.228]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwSwL-0007LD-3z; Tue, 09 May 2023 19:24:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=xgd3dabGCyxd1bHDiz5yr2QmYjSUogMyQF8yrndJ24Y=; b=arSGNCXhj3oX8KhiBxzRuxfRbu
	3vPof/kpm4G4ZHV45hUxibsAmE1rKPw8i+equPlSA8ACSv2vLvFAaoIXNd2rxybNVTnUDz1ERRMlS
	ZFRn2LeZcNU5+1bfXtJAKOsl8H2zQ9/5xuHWItPB+J3xI3mvctxvE3IyDMpJqfXkmDVk=;
Message-ID: <2a47f26e-a011-ce6c-cc88-c0e111a558a8@xen.org>
Date: Tue, 9 May 2023 20:23:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 14/14] tools/xenstore: switch quota management to be
 table based
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-15-jgross@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230508114754.31514-15-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 08/05/2023 12:47, Juergen Gross wrote:
> @@ -2714,15 +2710,19 @@ static unsigned int get_optval_uint(const char *arg)
>   
>   static bool what_matches(const char *arg, const char *what)
>   {
> -	unsigned int what_len = strlen(what);
> +	unsigned int what_len;
> +
> +	if (!what)
> +		return false;
>   
> +	what_len = strlen(what);

NIT: Keep the newline before the return.
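As a standalone illustration of why the added guard matters, here is a minimal sketch — the prefix-matching body at the end is illustrative only, not the actual xenstore helper — showing that a NULL `what` must be rejected before it ever reaches strlen():

```c
#include <stdbool.h>
#include <string.h>

/*
 * Sketch of the patched helper: "what" is checked for NULL before being
 * passed to strlen(), which has undefined behaviour on a NULL pointer.
 * The prefix comparison below is illustrative, not the real xenstore code.
 */
static bool what_matches(const char *arg, const char *what)
{
	unsigned int what_len;

	if (!what)
		return false;

	what_len = strlen(what);

	return !strncmp(arg, what, what_len);
}
```

Without the guard, a caller passing a NULL `what` (e.g. when an optional quota name is absent) would crash inside strlen() rather than simply failing the match.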

Acked-by: Julien Grall <jgrall@amazon.com>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 09 19:29:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 19:29:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532514.828738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwT1x-0003vW-HJ; Tue, 09 May 2023 19:29:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532514.828738; Tue, 09 May 2023 19:29:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwT1x-0003vP-EQ; Tue, 09 May 2023 19:29:49 +0000
Received: by outflank-mailman (input) for mailman id 532514;
 Tue, 09 May 2023 19:29:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pwT1v-0003vJ-QL
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 19:29:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwT1v-0001iK-CL; Tue, 09 May 2023 19:29:47 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.0.228]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwT1v-0007QI-55; Tue, 09 May 2023 19:29:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=/mgowuNY58cywWzCLp+qkvB4sb4g+S5sA3URmT0u5ms=; b=T0UpQYqY4p5g821wl3TBmYJlA5
	baAQAUfJevUW/jgVixvL/wUpI9sL5+NCIPBGMARrm8B7ZuZc3RTez2jQ9dQDTbrf6rKPqk66Nmp9p
	KNKtmlswOFKQYZJbdyhQvgUOThEk0DXo1nA0gLMJyzHuZwt4OuCuM7U6Bn7Fz6nH7Y5M=;
Message-ID: <c9b39323-1ad4-ed01-bb89-09276b39def7@xen.org>
Date: Tue, 9 May 2023 20:29:44 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 03/12] xen/arm: Introduce a wrapper for
 dt_device_get_address() to handle paddr_t
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-4-ayan.kumar.halder@amd.com>
 <37c9a45f-ae07-8d47-093a-6cf7501389d4@xen.org>
 <d15ba304-8f79-f80a-b0ac-4dccebded17c@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d15ba304-8f79-f80a-b0ac-4dccebded17c@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 03/05/2023 15:06, Ayan Kumar Halder wrote:
> 
> On 03/05/2023 12:25, Julien Grall wrote:
>> Hi Ayan,
> Hi Julien,
>>
>> On 28/04/2023 18:55, Ayan Kumar Halder wrote:
>>> dt_device_get_address() can accept uint64_t only for address and size.
>>> However, the address/size denotes physical addresses. Thus, they should
>>> be represented by 'paddr_t'.
>>> Consequently, we introduce a wrapper for dt_device_get_address(), i.e.
>>> dt_device_get_paddr(), which accepts address/size as paddr_t and in turn
>>> invokes dt_device_get_address() after converting address/size to
>>> uint64_t.
>>>
>>> The reason for introducing this is that in future 'paddr_t' may not
>>> always be 64-bit. Thus, we need an explicit wrapper to do the type
>>> conversion and return an error in case of truncation.
>>>
>>> With this, callers can now invoke dt_device_get_paddr().
>>>
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>> ---
>>> Changes from -
>>>
>>> v1 - 1. New patch.
>>>
>>> v2 - 1. Extracted part of "[XEN v2 05/11] xen/arm: Use paddr_t 
>>> instead of u64 for address/size"
>>> into this patch.
>>>
>>> 2. dt_device_get_address() callers now invoke dt_device_get_paddr() 
>>> instead.
>>>
>>> 3. Logged error in case of truncation.
>>>
>>> v3 - 1. Modified the truncation checks as "dt_addr != (paddr_t)dt_addr".
>>> 2. Some sanity fixes.
>>>
>>> v4 - 1. Some sanity fixes.
>>> 2. Preserved the declaration of dt_device_get_address() in
>>> xen/include/xen/device_tree.h. The reason being it is currently used by
>>> ns16550.c. This driver requires some more changes as pointed by Jan in
>>> https://lore.kernel.org/xen-devel/6196e90f-752e-e61a-45ce-37e46c22b812@suse.com/
>>> which is to be addressed as a separate series.
>>>
>>> v5 - 1. Removed initialization of variables.
>>> 2. In dt_device_get_paddr(), added the check
>>> if ( !addr )
>>>      return -EINVAL;
>>>
>>>   xen/arch/arm/domain_build.c                | 10 +++---
>>>   xen/arch/arm/gic-v2.c                      | 10 +++---
>>>   xen/arch/arm/gic-v3-its.c                  |  4 +--
>>>   xen/arch/arm/gic-v3.c                      | 10 +++---
>>>   xen/arch/arm/pci/pci-host-common.c         |  6 ++--
>>>   xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
>>>   xen/arch/arm/platforms/brcm.c              |  6 ++--
>>>   xen/arch/arm/platforms/exynos5.c           | 32 +++++++++---------
>>>   xen/arch/arm/platforms/sunxi.c             |  2 +-
>>>   xen/arch/arm/platforms/xgene-storm.c       |  2 +-
>>>   xen/common/device_tree.c                   | 39 ++++++++++++++++++++++
>>>   xen/drivers/char/cadence-uart.c            |  4 +--
>>>   xen/drivers/char/exynos4210-uart.c         |  4 +--
>>>   xen/drivers/char/imx-lpuart.c              |  4 +--
>>>   xen/drivers/char/meson-uart.c              |  4 +--
>>>   xen/drivers/char/mvebu-uart.c              |  4 +--
>>>   xen/drivers/char/omap-uart.c               |  4 +--
>>>   xen/drivers/char/pl011.c                   |  6 ++--
>>>   xen/drivers/char/scif-uart.c               |  4 +--
>>
>> What about the call in xen/drivers/char/ns16550.c?
> 
> Refer to 
> https://lore.kernel.org/xen-devel/6196e90f-752e-e61a-45ce-37e46c22b812@suse.com/ , where Jan mentioned that this driver needs some prior cleanup.
> 
> So, I decided to take it out and do it after the current series has been 
> committed.
> 
> See 
> https://patchew.org/Xen/20230413173735.48387-1-ayan.kumar.halder@amd.com/ , Jan agreed to this.
> 
> Is this ok with you?

I am OK with that. Can you mention it in the commit message? This would 
help the reviewer or a future reader to understand why you left out 
ns16550 (this is the only call left).
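For readers following the thread, the truncation check described in the commit message (the v3 form, "dt_addr != (paddr_t)dt_addr") can be sketched standalone. Everything here is illustrative — the stubbed DT lookup, the 32-bit paddr_t, and the exact error codes are assumptions for the sketch, not the actual Xen code:

```c
#include <errno.h>
#include <stdint.h>

/* Illustrative: pretend paddr_t is 32-bit so truncation can actually occur. */
typedef uint32_t paddr_t;

/* Stub standing in for dt_device_get_address(); the real code reads the DT. */
static int dt_device_get_address(uint64_t *addr, uint64_t *size)
{
	*addr = UINT64_C(0x880000000);	/* does not fit in 32 bits */
	*size = UINT64_C(0x10000);
	return 0;
}

/*
 * Sketch of the wrapper's shape: fetch the values as uint64_t, then fail
 * with an error instead of silently truncating on conversion to paddr_t.
 */
static int dt_device_get_paddr(paddr_t *addr, paddr_t *size)
{
	uint64_t dt_addr, dt_size;
	int ret;

	if (!addr)
		return -EINVAL;

	ret = dt_device_get_address(&dt_addr, &dt_size);
	if (ret)
		return ret;

	if (dt_addr != (uint64_t)(paddr_t)dt_addr)
		return -ERANGE;	/* address would be truncated */
	*addr = (paddr_t)dt_addr;

	if (size) {
		if (dt_size != (uint64_t)(paddr_t)dt_size)
			return -ERANGE;	/* size would be truncated */
		*size = (paddr_t)dt_size;
	}

	return 0;
}
```

With a 64-bit paddr_t the casts become no-ops and the checks compile away, which is why the wrapper costs nothing on today's configurations while protecting a future 32-bit paddr_t.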

With the remarks from Michal addressed:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 09 19:49:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 19:49:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532527.828748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwTKi-0006RN-5z; Tue, 09 May 2023 19:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532527.828748; Tue, 09 May 2023 19:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwTKi-0006RG-1x; Tue, 09 May 2023 19:49:12 +0000
Received: by outflank-mailman (input) for mailman id 532527;
 Tue, 09 May 2023 19:49:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=deGQ=A6=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pwTKg-0006RA-4d
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 19:49:10 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8ec8ad45-eea2-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 21:49:07 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0E86862C79;
 Tue,  9 May 2023 19:49:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2AFE1C433EF;
 Tue,  9 May 2023 19:49:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ec8ad45-eea2-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683661745;
	bh=vQxm3KrJ1zf77P5ZIze9jYa+8HMWqSPOPIOpkEql+pw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=C4NaBg804GDIaWuWMpmDUb/m3VMGg3hdGQk87ZdiB73lO0g2ZS39CWKgkBkfJ2DE7
	 k2PaDMCreN6C2kn86Fel/hDUnHoE49akqX/GE5VFBl619N7lxZQhnhV3I1oRkatwF5
	 cSaxrh7L4g8YXyyVFoRJiKGl1TzFP3/kIiG948DvGmmje4nJ0WsO2BOgecidMZd3MW
	 7kGwPkrWgNdph0VShXOIZFX5olz+icBHjfmLFo2LKgCmXLgfHvnQOudSX/VsPDn45G
	 nx3vOG6y7EME54V/B9B6h/TmpHVM6oSy4+iOz9tQ09baPdOjA0Fm6CJmqBbEUYEme4
	 aQM5Iic/2v5Sw==
Date: Tue, 9 May 2023 12:49:02 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Oleg Nikitenko <oleshiiwood@gmail.com>
cc: Michal Orzel <michal.orzel@amd.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Carlo Nonato <carlo.nonato@minervasys.tech>, Stewart.Hildebrand@amd.com
Subject: Re: xen cache colors in ARM
In-Reply-To: <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
Message-ID: <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com> <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com> <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com> <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com> <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop> <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop> <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com> <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop> <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com> <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com> <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1540650302-1683661744=:974517"


--8323329-1540650302-1683661744=:974517
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
(twice a year) is tested with cache coloring enabled. The last Petalinux
release is 2023.1 and the kernel used is this:
https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS


On Tue, 9 May 2023, Oleg Nikitenko wrote:
> Hello guys,
> 
> I have a couple of more questions.
> Have you ever run xen with cache coloring on a Zynq UltraScale+ MPSoC zcu102 xczu15eg?
> When did you last run xen with cache coloring?
> What kernel version did you use for Dom0 when you last ran xen with cache coloring?
> 
> Regards,
> Oleg
> 
> Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:
>       Hi Michal,
> 
> Thanks.
> 
> Regards,
> Oleg
> 
> Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
>       Hi Oleg,
> 
>       Replying, so that you do not need to wait for Stefano.
> 
>       On 05/05/2023 10:28, Oleg Nikitenko wrote:
>       >       
>       >
>       >
>       > Hello Stefano,
>       >
>       > I would like to try the xen cache coloring property from this repo: https://xenbits.xen.org/git-http/xen.git
>       > Could you tell me what branch I should use?
>       The cache coloring feature is not part of the upstream tree; it is still under review.
>       You can only find it integrated in the Xilinx Xen tree.
> 
>       ~Michal
> 
>       >
>       > Regards,
>       > Oleg
>       >
>       > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org>>:
>       >
>       >     I am familiar with the zcu102 but I don't know how you could possibly
>       >     generate a SError.
>       >
>       >     I suggest to try to use ImageBuilder [1] to generate the boot
>       >     configuration as a test because that is known to work well for zcu102.
>       >
>       >     [1] https://gitlab.com/xen-project/imagebuilder <https://gitlab.com/xen-project/imagebuilder>
>       >
>       >
>       >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
>       >     > Hello Stefano,
>       >     >
>       >     > Thanks for clarification.
>       >     > We use neither ImageBuilder nor a uboot boot script.
>       >     > The model is zcu102-compatible.
>       >     >
>       >     > Regards,
>       >     > O.
>       >     >
>       >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org>>:
>       >     >       This is interesting. Are you using Xilinx hardware by any chance? If so,
>       >     >       which board?
>       >     >
>       >     >       Are you using ImageBuilder to generate your boot.scr boot script? If so,
>       >     >       could you please post your ImageBuilder config file? If not, can you
>       >     >       post the source of your uboot boot script?
>       >     >
>       >     >       SErrors are supposed to be related to a hardware failure of some kind.
>       >     >       You are not supposed to be able to trigger an SError easily by
>       >     >       "mistake". I have not seen SErrors due to wrong cache coloring
>       >     >       configurations on any Xilinx board before.
>       >     >
>       >     >       The differences between Xen with and without cache coloring from a
>       >     >       hardware perspective are:
>       >     >
>       >     >       - With cache coloring, the SMMU is enabled and does address translations
>       >     >         even for dom0. Without cache coloring the SMMU could be disabled, and
>       >     >         if enabled, the SMMU doesn't do any address translations for Dom0. If
>       >     >         there is a hardware failure related to SMMU address translation it
>       >     >         could only trigger with cache coloring. This would be my normal
>       >     >         suggestion for you to explore, but the failure happens too early
>       >     >         before any DMA-capable device is programmed. So I don't think this can
>       >     >         be the issue.
>       >     >
>       >     >       - With cache coloring, the memory allocation is very different so you'll
>       >     >         end up using different DDR regions for Dom0. So if your DDR is
>       >     >         defective, you might only see a failure with cache coloring enabled
>       >     >         because you end up using different regions.
>       >     >
>       >     >
>       >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
>       >     >       > Hi Stefano,
>       >     >       >
>       >     >       > Thank you.
>       >     >       > If I build xen without color support, this error does not occur.
>       >     >       > All the domains boot well.
>       >     >       > Hence it cannot be a hardware issue.
>       >     >       > This panic arrived while unpacking the rootfs.
>       >     >       > Here I attached the xen/Dom0 boot log without coloring.
>       >     >       > The highlighted strings are printed exactly after the place where the panic first arrived.
>       >     >       >
>       >     >       >  Xen 4.16.1-pre
>       >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
>       >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
>       >     >       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
>       >     >       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03,rev 0x4
>       >     >       > (XEN) 64-bit Execution:
>       >     >       > (XEN)   Processor Features: 0000000000002222 0000000000000000
>       >     >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
>       >     >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
>       >     >       > (XEN)   Debug Features: 0000000010305106 0000000000000000
>       >     >       > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
>       >     >       > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
>       >     >       > (XEN)   ISA Features:  0000000000011120 0000000000000000
>       >     >       > (XEN) 32-bit Execution:
>       >     >       > (XEN)   Processor Features: 0000000000000131:0000000000011011
>       >     >       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
>       >     >       > (XEN)     Extensions: GenericTimer Security
>       >     >       > (XEN)   Debug Features: 0000000003010066
>       >     >       > (XEN)   Auxiliary Features: 0000000000000000
>       >     >       > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
>       >     >       > (XEN)                          0000000001260000 0000000002102211
>       >     >       > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
>       >     >       > (XEN)                 0000000001112131 0000000000011142 0000000000011121
>       >     >       > (XEN) Using SMC Calling Convention v1.2
>       >     >       > (XEN) Using PSCI v1.1
>       >     >       > (XEN) SMP: Allowing 4 CPUs
>       >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
>       >     >       > (XEN) GICv2 initialization:
>       >     >       > (XEN)         gic_dist_addr=00000000f9010000
>       >     >       > (XEN)         gic_cpu_addr=00000000f9020000
>       >     >       > (XEN)         gic_hyp_addr=00000000f9040000
>       >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
>       >     >       > (XEN)         gic_maintenance_irq=25
>       >     >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
>       >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
>       >     >       > (XEN) Using scheduler: null Scheduler (null)
>       >     >       > (XEN) Initializing null scheduler
>       >     >       > (XEN) WARNING: This is experimental software in development.
>       >     >       > (XEN) Use at your own risk.
>       >     >       > (XEN) Allocated console ring of 32 KiB.
>       >     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
>       >     >       > (XEN) Bringing up CPU1
>       >     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
>       >     >       > (XEN) CPU 1 booted.
>       >     >       > (XEN) Bringing up CPU2
>       >     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
>       >     >       > (XEN) CPU 2 booted.
>       >     >       > (XEN) Bringing up CPU3
>       >     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
>       >     >       > (XEN) Brought up 4 CPUs
>       >     >       > (XEN) CPU 3 booted.
>       >     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
>       >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
>       >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
>       >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
>       >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
>       >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
>       >     >       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
>       >     >       > (XEN) I/O virtualisation enabled
>       >     >       > (XEN)  - Dom0 mode: Relaxed
>       >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>       >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>       >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>       >     >       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
>       >     >       > (XEN) *** LOADING DOMAIN 0 ***
>       >     >       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
>       >     >       > (XEN) Loading ramdisk from boot module @ 0000000002000000
>       >     >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
>       >     >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
>       >     >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
>       >     >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
>       >     >       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
>       >     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
>       >     >       > (XEN) Allocating PPI 16 for event channel interrupt
>       >     >       > (XEN) Extended region 0: 0x81200000->0xa0000000
>       >     >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
>       >     >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
>       >     >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
>       >     >       > (XEN) Extended region 4: 0x100000000->0x600000000
>       >     >       > (XEN) Extended region 5: 0x880000000->0x8000000000
>       >     >       > (XEN) Extended region 6: 0x8001000000->0x10000000000
>       >     >       > (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
>       >     >       > (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
>       >     >       > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
>       >     >       > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>       >     >       > (XEN) Std. Loglevel: All
>       >     >       > (XEN) Guest Loglevel: All
>       >     >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
>       >     >       > (XEN) null.c:353: 0 <-- d0v0
>       >     >       > (XEN) Freed 356kB init memory.
>       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
>       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
>       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
>       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
>       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
>       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
>       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
>       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>       >     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
>       >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
>       >     >       > [    0.000000] Machine model: D14 Viper Board - White Unit
>       >     >       > [    0.000000] Xen 4.16 support found
>       >     >       > [    0.000000] Zone ranges:
>       >     >       > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
>       >     >       > [    0.000000]   DMA32    empty
>       >     >       > [    0.000000]   Normal   empty
>       >     >       > [    0.000000] Movable zone start for each node
>       >     >       > [    0.000000] Early memory node ranges
>       >     >       > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
>       >     >       > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
>       >     >       > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
>       >     >       > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
>       >     >       > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
>       >     >       > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
>       >     >       > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
>       >     >       > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
>       >     >       > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
>       >     >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
>       >     >       > [    0.000000] psci: probing for conduit method from DT.
>       >     >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
>       >     >       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
>       >     >       > [    0.000000] psci: Trusted OS migration not required
>       >     >       > [    0.000000] psci: SMC Calling Convention v1.1
>       >     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
>       >     >       > [    0.000000] Detected VIPT I-cache on CPU0
>       >     >       > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
>       >     >       > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
>       >     >       > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
>       >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
>       >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
>       >     >       > [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
>       >     >       > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
>       >     >       > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
>       >     >       > [    0.000000] mem auto-init: clearing system memory may take some time...
>       >     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
>       >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
>       >     >       > [    0.000000] rcu: Hierarchical RCU implementation.
>       >     >       > [    0.000000] rcu: RCU event tracing is enabled.
>       >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
>       >     >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
>       >     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
>       >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
>       >     >       > [    0.000000] Root IRQ handler: gic_handle_irq
>       >     >       > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
>       >     >       > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
>       >     >       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
>       >     >       > [    0.000258] Console: colour dummy device 80x25
>       >     >       > [    0.310231] printk: console [hvc0] enabled
>       >     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
>       >     >       > [    0.324851] pid_max: default: 32768 minimum: 301
>       >     >       > [    0.329706] LSM: Security Framework initializing
>       >     >       > [    0.334204] Yama: becoming mindful.
>       >     >       > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>       >     >       > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>       >     >       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
>       >     >       > [    0.359132] Grant table initialized
>       >     >       > [    0.362664] xen:events: Using FIFO-based ABI
>       >     >       > [    0.366993] Xen: initializing cpu0
>       >     >       > [    0.370515] rcu: Hierarchical SRCU implementation.
>       >     >       > [    0.375930] smp: Bringing up secondary CPUs ...
>       >     >       > (XEN) null.c:353: 1 <-- d0v1
>       >     >       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>       >     >       > [    0.382549] Detected VIPT I-cache on CPU1
>       >     >       > [    0.388712] Xen: initializing cpu1
>       >     >       > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
>       >     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
>       >     >       > [    0.406941] SMP: Total of 2 processors activated.
>       >     >       > [    0.411698] CPU features: detected: 32-bit EL0 Support
>       >     >       > [    0.416888] CPU features: detected: CRC32 instructions
>       >     >       > [    0.422121] CPU: All CPU(s) started at EL1
>       >     >       > [    0.426248] alternatives: patching kernel code
>       >     >       > [    0.431424] devtmpfs: initialized
>       >     >       > [    0.441454] KASLR enabled
>       >     >       > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
>       >     >       > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
>       >     >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
>       >     >       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
>       >     >       > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
>       >     >       > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
>       >     >       > [    0.519478] audit: initializing netlink subsys (disabled)
>       >     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
>       >     >       > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
>       >     >       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
>       >     >       > [    0.545608] ASID allocator initialised with 32768 entries
>       >     >       > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
>       >     >       > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
>       >     >       > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>       >     >       > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
>       >     >       > [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
>       >     >       > [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
>       >     >       > [    0.636520] DRBG: Continuing without Jitter RNG
>       >     >       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
>       >     >       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
>       >     >       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
>       >     >       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
>       >     >       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
>       >     >       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
>       >     >       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
>       >     >       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
>       >     >       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
>       >     >       > [    1.350132] raid6: int64x8  xor()   773 MB/s
>       >     >       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
>       >     >       > [    1.486349] raid6: int64x4  xor()   851 MB/s
>       >     >       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
>       >     >       > [    1.622561] raid6: int64x2  xor()   744 MB/s
>       >     >       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
>       >     >       > [    1.758770] raid6: int64x1  xor()   517 MB/s
>       >     >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
>       >     >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
>       >     >       > [    1.767957] raid6: using neon recovery algorithm
>       >     >       > [    1.772824] xen:balloon: Initialising balloon driver
>       >     >       > [    1.778021] iommu: Default domain type: Translated
>       >     >       > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
>       >     >       > [    1.789149] SCSI subsystem initialized
>       >     >       > [    1.792820] usbcore: registered new interface driver usbfs
>       >     >       > [    1.798254] usbcore: registered new interface driver hub
>       >     >       > [    1.803626] usbcore: registered new device driver usb
>       >     >       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
>       >     >       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
>       >     >       > [    1.822903] PTP clock support registered
>       >     >       > [    1.826893] EDAC MC: Ver: 3.0.0
>       >     >       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
>       >     >       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
>       >     >       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
>       >     >       > [    1.855907] FPGA manager framework
>       >     >       > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
>       >     >       > [    1.871712] NET: Registered PF_INET protocol family
>       >     >       > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
>       >     >       > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
>       >     >       > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
>       >     >       > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
>       >     >       > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
>       >     >       > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
>       >     >       > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
>       >     >       > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
>       >     >       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
>       >     >       > [    1.936834] RPC: Registered named UNIX socket transport module.
>       >     >       > [    1.942342] RPC: Registered udp transport module.
>       >     >       > [    1.947088] RPC: Registered tcp transport module.
>       >     >       > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
>       >     >       > [    1.958334] PCI: CLS 0 bytes, default 64
>       >     >       > [    1.962709] Trying to unpack rootfs image as initramfs...
>       >     >       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
>       >     >       > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
>       >     >       > [    2.021045] NET: Registered PF_ALG protocol family
>       >     >       > [    2.021122] xor: measuring software checksum speed
>       >     >       > [    2.029347]    8regs           :  2366 MB/sec
>       >     >       > [    2.033081]    32regs          :  2802 MB/sec
>       >     >       > [    2.038223]    arm64_neon      :  2320 MB/sec
>       >     >       > [    2.038385] xor: using function: 32regs (2802 MB/sec)
>       >     >       > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
>       >     >       > [    2.050959] io scheduler mq-deadline registered
>       >     >       > [    2.055521] io scheduler kyber registered
>       >     >       > [    2.068227] xen:xen_evtchn: Event-channel device installed
>       >     >       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
>       >     >       > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
>       >     >       > [    2.085548] brd: module loaded
>       >     >       > [    2.089290] loop: module loaded
>       >     >       > [    2.089341] Invalid max_queues (4), will use default max: 2.
>       >     >       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
>       >     >       > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
>       >     >       > [    2.104156] usbcore: registered new interface driver rtl8150
>       >     >       > [    2.109813] usbcore: registered new interface driver r8152
>       >     >       > [    2.115367] usbcore: registered new interface driver asix
>       >     >       > [    2.120794] usbcore: registered new interface driver ax88179_178a
>       >     >       > [    2.126934] usbcore: registered new interface driver cdc_ether
>       >     >       > [    2.132816] usbcore: registered new interface driver cdc_eem
>       >     >       > [    2.138527] usbcore: registered new interface driver net1080
>       >     >       > [    2.144256] usbcore: registered new interface driver cdc_subset
>       >     >       > [    2.150205] usbcore: registered new interface driver zaurus
>       >     >       > [    2.155837] usbcore: registered new interface driver cdc_ncm
>       >     >       > [    2.161550] usbcore: registered new interface driver r8153_ecm
>       >     >       > [    2.168240] usbcore: registered new interface driver cdc_acm
>       >     >       > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
>       >     >       > [    2.181358] usbcore: registered new interface driver uas
>       >     >       > [    2.186547] usbcore: registered new interface driver usb-storage
>       >     >       > [    2.192643] usbcore: registered new interface driver ftdi_sio
>       >     >       > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
>       >     >       > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
>       >     >       > [    2.215332] i2c_dev: i2c /dev entries driver
>       >     >       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
>       >     >       > [    2.225923] device-mapper: uevent: version 1.0.3
>       >     >       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
>       >     >       > [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
>       >     >       > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
>       >     >       > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
>       >     >       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
>       >     >       > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
>       >     >       > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
>       >     >       > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
>       >     >       > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
>       >     >       > [    2.327875] securefw securefw: securefw probed
>       >     >       > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
>       >     >       > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
>       >     >       > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
>       >     >       > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
>       >     >       > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
>       >     >       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
>       >     >       > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
>       >     >       > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
>       >     >       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>       >     >       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
>       >     >       > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
>       >     >       > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
>       >     >       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
>       >     >       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
>       >     >       > [    2.420856] default preset
>       >     >       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
>       >     >       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
>       >     >       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
>       >     >       > [    2.441976] vmcu driver init
>       >     >       > [    2.444922] VMCU: : (240:0) registered
>       >     >       > [    2.444956] In K81 Updater init
>       >     >       > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
>       >     >       > [    2.468833] Initializing XFRM netlink socket
>       >     >       > [    2.468902] NET: Registered PF_PACKET protocol family
>       >     >       > [    2.472729] Bridge firewalling registered
>       >     >       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
>       >     >       > [    2.481341] registered taskstats version 1
>       >     >       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
>       >     >       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
>       >     >       > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
>       >     >       > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
>       >     >       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
>       >     >       > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
>       >     >       > [    2.952393] Creating 2 MTD partitions on "spi0.0":
>       >     >       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
>       >     >       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
>       >     >       > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
>       >     >       > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
>       >     >       > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
>       >     >       > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
>       >     >       > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
>       >     >       > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
>       >     >       > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
>       >     >       > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
>       >     >       > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
>       >     >       > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
>       >     >       > [    3.045301] viper_enet viper_enet: Viper enet registered
>       >     >       > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
>       >     >       > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
>       >     >       > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
>       >     >       > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
>       >     >       > [    3.097729] si70xx: probe of 2-0040 failed with error -5
>       >     >       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
>       >     >       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
>       >     >       > [    3.112457] viper-tamper viper-tamper: Device registered
>       >     >       > [    3.117593] active_bank active_bank: boot bank: 1
>       >     >       > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
>       >     >       > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
>       >     >       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>       >     >       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
>       >     >       > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
>       >     >       > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
>       >     >       > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
>       >     >       > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
>       >     >       > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       >     >       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
>       >     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
>       >     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
>       >     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
>       >     >       > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
>       >     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
>       >     >       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
>       >     >       > [    3.284438] mmc0: new HS200 MMC card at address 0001
>       >     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
>       >     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
>       >     >       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
>       >     >       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
>       >     >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
>       >     >       > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       >     >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
>       >     >       > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
>       >     >       > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
>       >     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
>       >     >       > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
>       >     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
>       >     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
>       >     >       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
>       >     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
>       >     >       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
>       >     >       > [    3.639104] k81_bootloader 0-0010: probe
>       >     >       > [    3.641628] VMCU: : (235:0) registered
>       >     >       > [    3.641635] k81_bootloader 0-0010: probe completed
>       >     >       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
>       >     >       > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
>       >     >       > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
>       >     >       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
>       >     >       > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
>       >     >       > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
>       >     >       > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
>       >     >       > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
>       >     >       > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
>       >     >       > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
>       >     >       > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
>       >     >       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
>       >     >       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
>       >     >       > [    3.737549] sfp_register_socket: got sfp_bus
>       >     >       > [    3.740709] sfp_register_socket: register sfp_bus
>       >     >       > [    3.745459] sfp_register_bus: ops ok!
>       >     >       > [    3.749179] sfp_register_bus: Try to attach
>       >     >       > [    3.753419] sfp_register_bus: Attach succeeded
>       >     >       > [    3.757914] sfp_register_bus: upstream ops attach
>       >     >       > [    3.762677] sfp_register_bus: Bus registered
>       >     >       > [    3.766999] sfp_register_socket: register sfp_bus succeeded
>       >     >       > [    3.775870] of_cfs_init
>       >     >       > [    3.776000] of_cfs_init: OK
>       >     >       > [    3.778211] clk: Not disabling unused clocks
>       >     >       > [   11.278477] Freeing initrd memory: 206056K
>       >     >       > [   11.279406] Freeing unused kernel memory: 1536K
>       >     >       > [   11.314006] Checked W+X mappings: passed, no W+X pages found
>       >     >       > [   11.314142] Run /init as init process
>       >     >       > INIT: version 3.01 booting
>       >     >       > fsck (busybox 1.35.0)
>       >     >       > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
>       >     >       > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
>       >     >       > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
>       >     >       > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
>       >     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
>       >     >       > Starting random number generator daemon.
>       >     >       > [   11.580662] random: crng init done
>       >     >       > Starting udev
>       >     >       > [   11.613159] udevd[142]: starting version 3.2.10
>       >     >       > [   11.620385] udevd[143]: starting eudev-3.2.10
>       >     >       > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
>       >     >       > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
>       >     >       > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
>       >     >       > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>       >     >       > Mon Feb 27 08:40:53 UTC 2023
>       >     >       > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
>       >     >       > hwclock: RTC_SET_TIME: Invalid exchange
>       >     >       > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       >     >       > Starting mcud
>       >     >       > INIT: Entering runlevel: 5
>       >     >       > Configuring network interfaces... done.
>       >     >       > resetting network interface
>       >     >       > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>       >     >       > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
>       >     >       > [   12.732151] pps pps0: new PPS source ptp0
>       >     >       > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
>       >     >       > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>       >     >       > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
>       >     >       > [   12.761804] pps pps1: new PPS source ptp1
>       >     >       > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
>       >     >       > Auto-negotiation: off
>       >     >       > Auto-negotiation: off
>       >     >       > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
>       >     >       > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
>       >     >       > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
>       >     >       > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
>       >     >       > Starting Failsafe Secure Shell server in port 2222: sshd
>       >     >       > done.
>       >     >       > Starting rpcbind daemon...done.
>       >     >       >
>       >     >       > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>       >     >       > Starting State Manager Service
>       >     >       > Start state-manager restarter...
>       >     >       > (XEN) d0v1 Forwarding AES operation: 3254779951
>       >     >       > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
>       >     >       > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
>       >     >       > [   17.350670] BTRFS info (device dm-0): has skinny extents
>       >     >       > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
>       >     >       > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
>       >     >       > [   17.872699] BTRFS info (device dm-1): using free space tree
>       >     >       > [   17.872771] BTRFS info (device dm-1): has skinny extents
>       >     >       > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
>       >     >       > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
>       >     >       > [   17.895695] BTRFS info (device dm-1): checking UUID tree
>       >     >       >
>       >     >       > Setting domain 0 name, domid and JSON config...
>       >     >       > Done setting up Dom0
>       >     >       > Starting xenconsoled...
>       >     >       > Starting QEMU as disk backend for dom0
>       >     >       > Starting domain watchdog daemon: xenwatchdogd startup
>       >     >       >
>       >     >       > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
>       >     >       > [done]
>       >     >       > [   18.465552] BTRFS info (device dm-2): using free space tree
>       >     >       > [   18.465629] BTRFS info (device dm-2): has skinny extents
>       >     >       > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
>       >     >       > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
>       >     >       > [   18.486659] BTRFS info (device dm-2): checking UUID tree
>       >     >       > OK
>       >     >       > starting rsyslogd ... Log partition ready after 0 poll loops
>       >     >       > done
>       >     >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
>       >     >       > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>       >     >       >
>       >     >       > Please insert USB token and enter your role in login prompt.
>       >     >       >
>       >     >       > login:
>       >     >       >
>       >     >       > Regards,
>       >     >       > O.
>       >     >       >
>       >     >       >
>       Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org>:
>       >     >       >       Hi Oleg,
>       >     >       >
>       >     >       >       Here is the issue from your logs:
>       >     >       >
>       >     >       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
>       >     >       >
>       >     >       >       SErrors are special signals that notify software of serious hardware
>       >     >       >       errors.  Something is going very wrong. Defective hardware is one
>       >     >       >       possibility.  Another is software accessing address ranges that it is
>       >     >       >       not supposed to, which sometimes causes SErrors.
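The `0xbe000000` code reported with the SError is an ESR_ELx syndrome value, and its fields can be decoded mechanically. A minimal sketch in Python, assuming the architected ESR layout from the Arm ARM (the `decode_serror_esr` helper is illustrative, not part of any kernel or Xen tooling):

```python
# Decode an AArch64 SError syndrome value using the ESR_ELx field layout:
#   EC   [31:26] - Exception Class (0x2f = SError interrupt)
#   IL   [25]    - Instruction Length bit (RES1 for SError)
#   IDS  [24]    - 1 = implementation-defined ISS encoding
#   DFSC [5:0]   - Data Fault Status Code (meaningful when IDS == 0)
def decode_serror_esr(esr: int) -> dict:
    return {
        "EC": hex((esr >> 26) & 0x3F),
        "IL": (esr >> 25) & 0x1,
        "IDS": (esr >> 24) & 0x1,
        "DFSC": hex(esr & 0x3F),
    }

# The value from the log above:
print(decode_serror_esr(0xbe000000))
# {'EC': '0x2f', 'IL': 1, 'IDS': 0, 'DFSC': '0x0'}
```

Here EC = 0x2f confirms the SError class, and IDS = 0 with DFSC = 0 ("uncategorized") means the syndrome alone does not identify a faulting component, which is consistent with the advice above to suspect hardware or stray accesses.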
>       >     >       >
>       >     >       >       Cheers,
>       >     >       >
>       >     >       >       Stefano
>       >     >       >
>       >     >       >
>       >     >       >
>       >     >       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>       >     >       >
>       >     >       >       > Hello,
>       >     >       >       >
>       >     >       >       > Thanks guys.
>       >     >       >       > I found out where the problem was.
>       >     >       >       > Now dom0 boots further, but I have hit a new problem:
>       >     >       >       > a kernel panic during Dom0 loading.
>       >     >       >       > Maybe someone is able to suggest something?
>       >     >       >       >
>       >     >       >       > Regards,
>       >     >       >       > O.
>       >     >       >       >
>       >     >       >       > [    3.771362] sfp_register_bus: upstream ops attach
>       >     >       >       > [    3.776119] sfp_register_bus: Bus registered
>       >     >       >       > [    3.780459] sfp_register_socket: register sfp_bus succeeded
>       >     >       >       > [    3.789399] of_cfs_init
>       >     >       >       > [    3.789499] of_cfs_init: OK
>       >     >       >       > [    3.791685] clk: Not disabling unused clocks
>       >     >       >       > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>       >     >       >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>       >     >       >       > [   11.010393] Workqueue: events_unbound async_run_entry_fn
>       >     >       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>       >     >       >       > [   11.010422] pc : simple_write_end+0xd0/0x130
>       >     >       >       > [   11.010431] lr : generic_perform_write+0x118/0x1e0
>       >     >       >       > [   11.010438] sp : ffffffc00809b910
>       >     >       >       > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>       >     >       >       > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
>       >     >       >       > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
>       >     >       >       > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
>       >     >       >       > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
>       >     >       >       > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
>       >     >       >       > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
>       >     >       >       > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
>       >     >       >       > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
>       >     >       >       > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
>       >     >       >       > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
>       >     >       >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>       >     >       >       > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
>       >     >       >       > [   11.010548] Workqueue: events_unbound async_run_entry_fn
>       >     >       >       > [   11.010556] Call trace:
>       >     >       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
>       >     >       >       > [   11.010567]  show_stack+0x18/0x2c
>       >     >       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
>       >     >       >       > [   11.010583]  dump_stack+0x18/0x34
>       >     >       >       > [   11.010588]  panic+0x14c/0x2f8
>       >     >       >       > [   11.010597]  print_tainted+0x0/0xb0
>       >     >       >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
>       >     >       >       > [   11.010614]  do_serror+0x28/0x60
>       >     >       >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
>       >     >       >       > [   11.010628]  el1h_64_error+0x78/0x7c
>       >     >       >       > [   11.010633]  simple_write_end+0xd0/0x130
>       >     >       >       > [   11.010639]  generic_perform_write+0x118/0x1e0
>       >     >       >       > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
>       >     >       >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
>       >     >       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
>       >     >       >       > [   11.010665]  kernel_write+0x88/0x160
>       >     >       >       > [   11.010673]  xwrite+0x44/0x94
>       >     >       >       > [   11.010680]  do_copy+0xa8/0x104
>       >     >       >       > [   11.010686]  write_buffer+0x38/0x58
>       >     >       >       > [   11.010692]  flush_buffer+0x4c/0xbc
>       >     >       >       > [   11.010698]  __gunzip+0x280/0x310
>       >     >       >       > [   11.010704]  gunzip+0x1c/0x28
>       >     >       >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>       >     >       >       > [   11.010715]  do_populate_rootfs+0x80/0x164
>       >     >       >       > [   11.010722]  async_run_entry_fn+0x48/0x164
>       >     >       >       > [   11.010728]  process_one_work+0x1e4/0x3a0
>       >     >       >       > [   11.010736]  worker_thread+0x7c/0x4c0
>       >     >       >       > [   11.010743]  kthread+0x120/0x130
>       >     >       >       > [   11.010750]  ret_from_fork+0x10/0x20
>       >     >       >       > [   11.010757] SMP: stopping secondary CPUs
>       >     >       >       > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
>       >     >       >       > [   11.010788] PHYS_OFFSET: 0x0
>       >     >       >       > [   11.010790] CPU features: 0x00000401,00000842
>       >     >       >       > [   11.010795] Memory Limit: none
>       >     >       >       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>       >     >       >       >
>       >     >       >       > Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com>:
>       >     >       >       >       Hi Oleg,
>       >     >       >       >
>       >     >       >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
>       >     >       >       >       >       
>       >     >       >       >       >
>       >     >       >       >       >
>       >     >       >       >       > Hello Michal,
>       >     >       >       >       >
>       >     >       >       >       > I was not able to enable earlyprintk in the xen for now.
>       >     >       >       >       > I decided to choose another way.
>       >     >       >       >       > This is a xen's command line that I found out completely.
>       >     >       >       >       >
>       >     >       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>       >     >       >       >       Yes, adding a printk() in Xen was also a good idea.
>       >     >       >       >
>       >     >       >       >       >
>       >     >       >       >       > So you are absolutely right about the command line.
>       >     >       >       >       > Now I am going to find out why xen did not get the correct parameters from the device tree.
>       >     >       >       >       Maybe you will find this document helpful:
>       >     >       >       >       https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>       >     >       >       >
>       >     >       >       >       ~Michal
>       >     >       >       >
>       >     >       >       >       >
>       >     >       >       >       > Regards,
>       >     >       >       >       > Oleg
>       >     >       >       >       >
>       >     >       >       >       > Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com>:
>       >     >       >       >       >
>       >     >       >       >       >
>       >     >       >       >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
>       >     >       >       >       >     >       
>       >     >       >       >       >     >
>       >     >       >       >       >     >
>       >     >       >       >       >     > Hello Michal,
>       >     >       >       >       >     >
>       >     >       >       >       >     > Yes, I use yocto.
>       >     >       >       >       >     >
>       >     >       >       >     > Yesterday I spent all day trying to follow your suggestions.
>       >     >       >       >     > I ran into a problem.
>       >     >       >       >     > I manually pasted the following strings into the xen build config file:
>       >     >       >       >     In the .config file, or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>       >     >       >       >     You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>       >     >       >       >       >
>       >     >       >       >       >     >
>       >     >       >       >       >     > CONFIG_EARLY_PRINTK
>       >     >       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
>       >     >       >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
>       >     >       >       >       >     I hope you added =y to them.
>       >     >       >       >       >
>       >     >       >       >       >     Anyway, you have at least the following solutions:
>       >     >       >       >       >     1) Run bitbake xen -c menuconfig to properly set early printk
>       >     >       >       >     2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
>       >     >       >       >       >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>       >     >       >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
>       >     >       >       >       >
>       >     >       >       >       >     ~Michal
>       >     >       >       >       >
>       >     >       >       >       >     >
>       >     >       >       >     > The host hangs at build time.
>       >     >       >       >     > Maybe I did not set something in the build config file?
>       >     >       >       >       >     >
>       >     >       >       >       >     > Regards,
>       >     >       >       >       >     > Oleg
>       >     >       >       >       >     >
>       >     >       >       >     > Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
>       >     >       >       >       >     >
>       >     >       >       >       >     >     Thanks Michal,
>       >     >       >       >       >     >
>       >     >       >       >       >     >     You gave me an idea.
>       >     >       >       >       >     >     I am going to try it today.
>       >     >       >       >       >     >
>       >     >       >       >       >     >     Regards,
>       >     >       >       >       >     >     O.
>       >     >       >       >       >     >
>       >     >       >       >     >     Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
>       >     >       >       >       >     >
>       >     >       >       >       >     >         Thanks Stefano.
>       >     >       >       >       >     >
>       >     >       >       >       >     >         I am going to do it today.
>       >     >       >       >       >     >
>       >     >       >       >       >     >         Regards,
>       >     >       >       >       >     >         O.
>       >     >       >       >       >     >
>       >     >       >       >     >         Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
>       >     >       >       >       >     >
>       >     >       >       >       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>       >     >       >       >       >     >             > Hi Michal,
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             > I corrected xen's command line.
>       >     >       >       >       >     >             > Now it is
>       >     >       >       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
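For comparison, a minimal chosen node carrying these bootargs in the device tree would look roughly like this. The structure follows Xen's docs/misc/arm/device-tree/booting.txt; the xen,dom0-bootargs value here is just an illustrative placeholder, not taken from Oleg's setup:

```dts
/* Sketch of a chosen node passing command lines to Xen and dom0 */
chosen {
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
        xen,dom0-bootargs = "console=hvc0 earlycon=xen";
};
```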
>       >     >       >       >       >     >
>       >     >       >       >       >     >             4 colors is way too many for xen, just do xen_colors=0-0. There is no
>       >     >       >       >       >     >             advantage in using more than 1 color for Xen.
>       >     >       >       >       >     >
>       >     >       >       >     >             4 colors is too few for dom0 if you are giving 1600M of memory to
>       >     >       >       >     >             Dom0. Each color is 256M. For 1600M you should give at least 7
>       >     >       >       >     >             colors. Try:
>       >     >       >       >       >     >
>       >     >       >       >       >     >             xen_colors=0-0 dom0_colors=1-8
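The arithmetic behind this suggestion can be checked quickly; assuming 256M per color as stated above, the minimum color count for dom0 is a ceiling division:

```shell
# Minimum number of 256M colors needed to cover dom0_mem (in MB):
# ceil(1600 / 256) = 7, so dom0 needs at least 7 colors
# (the suggested range 1-8 gives 8, i.e. one color of headroom).
dom0_mem_mb=1600
color_mb=256
colors=$(( (dom0_mem_mb + color_mb - 1) / color_mb ))
echo "$colors"
```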
>       >     >       >       >       >     >
>       >     >       >       >       >     >
>       >     >       >       >       >     >
>       >     >       >       >       >     >             > Unfortunately the result was the same.
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             > (XEN)  - Dom0 mode: Relaxed
>       >     >       >       >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>       >     >       >       >       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>       >     >       >       >       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>       >     >       >       >       >     >             > (XEN) Coloring general information
>       >     >       >       >       >     >             > (XEN) Way size: 64kB
>       >     >       >       >       >     >             > (XEN) Max. number of colors available: 16
>       >     >       >       >       >     >             > (XEN) Xen color(s): [ 0 ]
>       >     >       >       >       >     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 ->
>       00000000002ccc0c
>       >     >       >       >       >     >             > (XEN) Color array allocation failed for dom0
>       >     >       >       >       >     >             > (XEN)
>       >     >       >       >       >     >             > (XEN) ****************************************
>       >     >       >       >       >     >             > (XEN) Panic on CPU 0:
>       >     >       >       >       >     >             > (XEN) Error creating domain 0
>       >     >       >       >       >     >             > (XEN) ****************************************
>       >     >       >       >       >     >             > (XEN)
>       >     >       >       >       >     >             > (XEN) Reboot in five seconds...
>       >     >       >       >       >     >             >
>       >     >       >       >     >             > I am going to find out how command line arguments are passed and parsed.
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             > Regards,
>       >     >       >       >       >     >             > Oleg
>       >     >       >       >       >     >             >
>       >     >       >       >     >             > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>       >     >       >       >       >     >             >       Hi Michal,
>       >     >       >       >       >     >             >
>       >     >       >       >     >             > You pointed me right at the problem. Thank you.
>       >     >       >       >       >     >             > I am going to use your point.
>       >     >       >       >       >     >             > Let's see what happens.
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             > Regards,
>       >     >       >       >       >     >             > Oleg
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             >
>       >     >       >       >     >             > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
>       >     >       >       >       >     >             >       Hi Oleg,
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>       >     >       >       >       >     >             >       >       
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       > Hello Stefano,
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       > Thanks for the clarification.
>       >     >       >       >       >     >             >       > My company uses yocto for image generation.
>       >     >       >       >     >             >       > What information do you need in order to advise me in this case?
>       >     >       >       >       >     >             >       >
>       >     >       >       >     >             >       > Maybe the module sizes/addresses which were mentioned by Julien Grall <julien@xen.org>?
>       >     >       >       >       >     >             >
>       >     >       >       >     >             >       Sorry for jumping into the discussion, but FWICS the Xen command line
>       >     >       >       >     >             >       you provided does not seem to be the one Xen booted with. The error
>       >     >       >       >     >             >       you are observing is most likely due to the dom0 colors configuration
>       >     >       >       >     >             >       not being specified (i.e. lack of a dom0_colors=<> parameter).
>       >     >       >       >     >             >       Although this parameter is set in the command line you provided, I
>       >     >       >       >     >             >       strongly doubt that this is the actual command line in use.
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             >       You wrote:
>       >     >       >       >     >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             >       but:
>       >     >       >       >       >     >             >       1) way_szize has a typo
>       >     >       >       >     >             >       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>       >     >       >       >     >             >       (XEN) Xen color(s): [ 0 ]
>       >     >       >       >       >     >             >
>       >     >       >       >     >             >       This makes me believe that no colors configuration actually ended up
>       >     >       >       >     >             >       in the command line that Xen booted with. A single color for Xen is
>       >     >       >       >     >             >       the "default if not specified", and the way size was probably
>       >     >       >       >     >             >       calculated by querying the HW.
>       >     >       >       >       >     >             >
>       >     >       >       >     >             >       So I would suggest first cross-checking the command line in use.
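One quick way to cross-check is to read back the line Xen itself prints early at boot. On a live system `xl dmesg` serves this purpose; the sketch below greps a saved serial capture instead, with sample content created inline so the snippet is self-contained (the file path and log text are illustrative, not from Oleg's board):

```shell
# Xen echoes the command line it actually parsed in its boot messages;
# recover it from a captured serial log.
cat > /tmp/xen-boot.log <<'EOF'
(XEN) Xen version 4.16
(XEN) Command line: console=dtuart dtuart=serial0 dom0_mem=1600M
EOF
grep '(XEN) Command line:' /tmp/xen-boot.log
```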
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             >       ~Michal
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       > Regards,
>       >     >       >       >       >     >             >       > Oleg
>       >     >       >       >       >     >             >       >
>       >     >       >       >     >             >       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>       >     >       >       >       >     >             >       >     > Hi Julien,
>       >     >       >       >       >     >             >       >     >
>       >     >       >       >       >     >             >       >     > >> This feature has not been merged in Xen upstream yet
>       >     >       >       >       >     >             >       >     >
>       >     >       >       >       >     >             >       >     > > would assume that upstream + the series on the ML [1]
>       work
>       >     >       >       >       >     >             >       >     >
>       >     >       >       >       >     >             >       >     > Please clarify this point.
>       >     >       >       >       >     >             >       >     > Because the two thoughts are controversial.
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >     Hi Oleg,
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >     As Julien wrote, there is nothing controversial. As you
>       are aware,
>       >     >       >       >       >     >             >       >     Xilinx maintains a separate Xen tree specific for Xilinx
>       here:
>       >     >       >       >     >             >       >     https://github.com/xilinx/xen
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >     and the branch you are using (xlnx_rebase_4.16) comes
>       from there.
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >     Instead, the upstream Xen tree lives here:
>       >     >       >       >     >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >
>       >     >       >       >     >             >       >     The Cache Coloring feature that you are trying to configure is
>       >     >       >       >     >             >       >     present in xlnx_rebase_4.16, but not yet upstream (there is an
>       >     >       >       >     >             >       >     outstanding patch series to add cache coloring to Xen upstream,
>       >     >       >       >     >             >       >     but it has not been merged yet).
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't
>       matter too much for
>       >     >       >       >       >     >             >       >     you as you already have Cache Coloring as a feature
>       there.
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >
>       >     >       >       >     >             >       >     I take it you are using ImageBuilder to generate the boot
>       >     >       >       >     >             >       >     configuration? If so, please post the ImageBuilder config file
>       >     >       >       >     >             >       >     that you are using.
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >       >     But from the boot message, it looks like the colors
>       configuration for
>       >     >       >       >       >     >             >       >     Dom0 is incorrect.
>       >     >       >       >       >     >             >       >
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >             >
>       >     >       >       >       >     >
>       >     >       >       >       >
>       >     >       >       >
>       >     >       >       >
>       >     >       >       >
>       >     >       >
>       >     >       >
>       >     >       >
>       >     >
>       >     >
>       >     >
>       >
> 
> 
> 
--8323329-1540650302-1683661744=:974517--


From xen-devel-bounces@lists.xenproject.org Tue May 09 20:11:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 20:11:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532540.828758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwTg1-0001S6-1z; Tue, 09 May 2023 20:11:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532540.828758; Tue, 09 May 2023 20:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwTg0-0001Rz-UQ; Tue, 09 May 2023 20:11:12 +0000
Received: by outflank-mailman (input) for mailman id 532540;
 Tue, 09 May 2023 20:11:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5VI=A6=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pwTfz-0001Rt-PA
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 20:11:11 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a1adf6fe-eea5-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 22:11:07 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1adf6fe-eea5-11ed-b229-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683663065;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=GDHRmjiERMZtAMWS8NswxcpsZgr8pqyf8cBJN0UwKq4=;
	b=qRz1Wt3JIbNYLISqAbJr1CMERh+G2fMl0sqNUINGFnuAglUDS20PR5mShGM7iQQQaOXzUO
	6p1nBmlJSh5FISuh/gxPpyFl5N/5lIcPzXiWCbOhc/BY9nLjejGZdozuMFQuDWHB0czbQ3
	N334596N1dIxTBUtYyzDiKqjfnYYlcCALsa05iyDdZ772j72RpkHn+oDxyP4ObL7l+piSu
	ZhZkepOsryy7f/3f8Xvt0ySi2CMPDVnx0tsF8XLS0QXVKSABEb0ZDeaHvlrFZfnZHg6sEY
	EkUkcA416BPYsfyNhOSk3xIjz0pEBaTy2eNw1t0P+tqJe8e17B+vC1YPI9z1/Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683663065;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=GDHRmjiERMZtAMWS8NswxcpsZgr8pqyf8cBJN0UwKq4=;
	b=Xlcesbr/mowxxv4d+xv21wM9Nh14MEM6CP2UULmrKClB0/ijd/XoMlkIURymgqxjfMwTM5
	2x4mgfol9jpnTzDw==
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
In-Reply-To: <20230509100421.GU83892@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
 <20230509100421.GU83892@hirez.programming.kicks-ass.net>
Date: Tue, 09 May 2023 22:11:05 +0200
Message-ID: <87fs85z2na.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 09 2023 at 12:04, Peter Zijlstra wrote:
> On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:
> Not to the detriment of this patch, but this barrier() and its comment
> seem weird vs smp_callin(). That function ends with an atomic bitop (it
> has to, at the very least it must not be weaker than store-release) but
> also has an explicit wmb() to order setup vs CPU_STARTING.
>
> (arguably that should be a full fence *AND* get a comment)

TBH: I'm grasping for something 'arguable': What's the point of this
wmb() or even a mb()?

Most of the [w]mb()'s in smpboot.c except those in mwait_play_dead()
have a very distinct voodoo programming smell.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Tue May 09 20:14:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 20:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532556.828768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwTjQ-00022D-G0; Tue, 09 May 2023 20:14:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532556.828768; Tue, 09 May 2023 20:14:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwTjQ-000226-D2; Tue, 09 May 2023 20:14:44 +0000
Received: by outflank-mailman (input) for mailman id 532556;
 Tue, 09 May 2023 20:14:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mGMN=A6=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1pwTjP-00021y-5u
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 20:14:43 +0000
Received: from mail.skyhub.de (mail.skyhub.de [5.9.137.197])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 21c10306-eea6-11ed-b229-6b7b168915f2;
 Tue, 09 May 2023 22:14:41 +0200 (CEST)
Received: from zn.tnic (p5de8e8ea.dip0.t-ipconnect.de [93.232.232.234])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 1E5041EC051E;
 Tue,  9 May 2023 22:14:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21c10306-eea6-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1683663281;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=8FPEI6O6QAK3kRZvBqgdS/NqiP5GyoVTXmMIcWstwBU=;
	b=n6OGzuOgVvJpy46SSxzCzljQXgU88uLFdQj2tc18ANXJPJ/n6DZlbqX2Ya1zBVzJP82imI
	IzJlBM+TdVBQJB0VJYtKy4Jsd+Pjk9LIoJnvGZ5lLIkJansEWD5y7tusJyVZlQ3232xtIz
	IdrfYuxRa58gLxRAKyiKW9shhR18t08=
Date: Tue, 9 May 2023 22:14:37 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
	mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
References: <20230502120931.20719-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230502120931.20719-1-jgross@suse.com>

On Tue, May 02, 2023 at 02:09:15PM +0200, Juergen Gross wrote:
> This series tries to fix the rather special case of PAT being available
> without having MTRRs (either due to CONFIG_MTRR being not set, or
> because the feature has been disabled e.g. by a hypervisor).

More weird stuff. With the series:

[root@vh: ~> cat /proc/mtrr 
cat: /proc/mtrr: Input/output error

before:

[root@vh: ~> cat /proc/mtrr 
reg00: base=0x000000000 (    0MB), size= 2048MB, count=1: write-back
reg01: base=0x080000000 ( 2048MB), size= 1024MB, count=1: write-back
reg02: base=0x0c0000000 ( 3072MB), size=  256MB, count=1: write-back
reg03: base=0x0ff000000 ( 4080MB), size=   16MB, count=1: write-protect

I think it wrongly determines that MTRRs are disabled by BIOS:

MTRRs disabled by BIOS
x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT

which is obviously wrong.

But more debugging later.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Tue May 09 20:44:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 20:44:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532570.828777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwUBW-0005Zz-OE; Tue, 09 May 2023 20:43:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532570.828777; Tue, 09 May 2023 20:43:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwUBW-0005Zs-Le; Tue, 09 May 2023 20:43:46 +0000
Received: by outflank-mailman (input) for mailman id 532570;
 Tue, 09 May 2023 20:43:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hm7m=A6=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pwUBV-0005Zj-5X
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 20:43:45 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2df323d5-eeaa-11ed-8611-37d641c3527e;
 Tue, 09 May 2023 22:43:41 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-467-h2450ozEN8ukawalTzR5Sg-1; Tue, 09 May 2023 16:43:36 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 42C0F80C8C4;
 Tue,  9 May 2023 20:43:35 +0000 (UTC)
Received: from localhost (unknown [10.39.192.39])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 394F41121314;
 Tue,  9 May 2023 20:43:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2df323d5-eeaa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683665019;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=73rXAUSbDMMD9aLijjtyw6szEUieY4YD79d0W2REdQU=;
	b=JdltbhtyPtZpUTxq+FUe7F1Pl4hU/F0+3zOkt4niyRjJ8BhDMqtnQJbZXuuEa9iY0nqSJg
	OJ7lBdq9HM6BkhBJARaQMeoHuCCGnzNNBGsmXgHpq1V2jTrYr7ksOBr3XMfXdLUigRYkhy
	FWMkqEr9j/vuxJ2OU9FUs+vSxXX5pUI=
X-MC-Unique: h2450ozEN8ukawalTzR5Sg-1
Date: Tue, 9 May 2023 16:43:32 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Peter Lieven <pl@kamp.de>, Stefan Weil <sw@weilnetz.de>,
	Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com, Juan Quintela <quintela@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: Re: [PATCH v5 05/21] virtio-scsi: stop using aio_disable_external()
 during unplug
Message-ID: <20230509204332.GB1165676@fedora>
References: <20230504195327.695107-1-stefanha@redhat.com>
 <20230504195327.695107-6-stefanha@redhat.com>
 <ZFqXEihzG18me26X@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="pP+x37X0vy9h+g7D"
Content-Disposition: inline
In-Reply-To: <ZFqXEihzG18me26X@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3


--pP+x37X0vy9h+g7D
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, May 09, 2023 at 08:55:14PM +0200, Kevin Wolf wrote:
> Am 04.05.2023 um 21:53 hat Stefan Hajnoczi geschrieben:
> > This patch is part of an effort to remove the aio_disable_external()
> > API because it does not fit in a multi-queue block layer world where
> > many AioContexts may be submitting requests to the same disk.
> >
> > The SCSI emulation code is already in good shape to stop using
> > aio_disable_external(). It was only used by commit 9c5aad84da1c
> > ("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
> > disk") to ensure that virtio_scsi_hotunplug() works while the guest
> > driver is submitting I/O.
> >
> > Ensure virtio_scsi_hotunplug() is safe as follows:
> >
> > 1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
> >    device_set_realized() calls qatomic_set(&dev->realized, false) so
> >    that future scsi_device_get() calls return NULL because they exclude
> >    SCSIDevices with realized=false.
> >
> >    That means virtio-scsi will reject new I/O requests to this
> >    SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
> >    virtio_scsi_hotunplug() is still executing. We are protected against
> >    new requests!
> >
> > 2. scsi_device_unrealize() already contains a call to
>
> I think you mean scsi_qdev_unrealize(). Can be fixed while applying.

Yes, it should be scsi_qdev_unrealize(). I'll review your other comments
and fix this if I need to respin.

Stefan

--pP+x37X0vy9h+g7D
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRasHQACgkQnKSrs4Gr
c8idhggAxwT1miphed6iDRzm0lcoIB+ERb0L9uu4SUcMFT6DDuoxIYBkdVwfzFye
/+ZEEQZrR0wW4GassEZR0evyH6N86XQCcbxZieYhvMS4vOlMtn6TLPOBO1kuV7bk
P/c4/aESmuRlERwSQ2xzmc3NsVc6jZI0onjQLNuC4wmEuWszLPfVNJ/oH46QjMOd
n9/RU5gOM/yajjrNHCnCE71xPsWQaP608ibSbsCzLbq9ByeaEqivhU7knUEQFcCC
KWWLL57nVhG2Lvvrjb1NiYDxq4NmadDxVFMQNm8jcqzuxLhZOwTqn1oxQk6A8P/x
A4btIkywXTRHK/EuAXKBCrBh+/7lSw==
=776u
-----END PGP SIGNATURE-----

--pP+x37X0vy9h+g7D--



From xen-devel-bounces@lists.xenproject.org Tue May 09 22:59:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 22:59:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532581.828788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwWIG-0002Kl-4T; Tue, 09 May 2023 22:58:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532581.828788; Tue, 09 May 2023 22:58:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwWIG-0002Ke-04; Tue, 09 May 2023 22:58:52 +0000
Received: by outflank-mailman (input) for mailman id 532581;
 Tue, 09 May 2023 22:58:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jxqG=A6=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pwWIE-0002KX-Rx
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 22:58:51 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [85.215.255.53]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e6d2a48-eebd-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 00:58:48 +0200 (CEST)
Received: from aepfle.de by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz49MwU7VE
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 10 May 2023 00:58:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e6d2a48-eebd-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1683673110; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=eZ1Xom1Goq0t0rDLQFjHxTIvR8GF9UFHiztm7ZLPmDIP0fFpWB3eD2NmIGqoeIwoFl
    UIvVVvwMZm+zPF2ghgAfQa6p89Yr741UO+3SCTQHVgF0va0TR4wCIxmUxgPQERrnKJam
    RfVyy5p89WDEWSh6bSZIpSrQ5VL8E6N8eIUJeP0EZ23CRJbmmfDPgVzXv7yDDtb+upXv
    zuJfmNGVD5SyN/6Xg4cHokERLz+EWafb9iM7/J051UboDX/LqCtP0ZEIoSL01qkViGlJ
    EmA/DVDBV1QlL8xJit6aaZgt4ADEvjNJDaTnUTPaj9u3pfbVHL0fVdgNPMylUWHGDGEr
    oxUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683673110;
    s=strato-dkim-0002; d=strato.com;
    h=In-Reply-To:References:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=GCP5/CljJD+CfipxX1GkhTK2QgMNJf/4Nny5lGr0f04=;
    b=cRUuD9cGfKlEtCzb7BlTSVPbFMUF6G907RL/iPoIp69Vw+i0bHc3oRsTKY7Ko/I8JK
    V9tJ4LT5VJcWimcyKS99DKQt6pc5kitto/mKV/Yoir34iWXaEPXRQiqBGQUgoooIBVYn
    8s2w+ULF6ZPquxMRsa+NmJwJJtVX4+/WMmhKekyvDjQoBlEq4tcUxDklqpB159Q1Nsgp
    KkHLz1a2nUxlushsIreIUEpTvF7vh9okq9o6TvZSUfWymTsTQRErIqkPMip27sJmXSw9
    r+FpPX4wSwfzzoNJ0pqH9LMFMDokbYBRKAflW6qorO8B3Y7DCRjcKWU4DRXQpHz2NXYr
    fOBQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683673110;
    s=strato-dkim-0002; d=aepfle.de;
    h=In-Reply-To:References:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=GCP5/CljJD+CfipxX1GkhTK2QgMNJf/4Nny5lGr0f04=;
    b=dAwxwnZPL9D6YZ3nj2hZLhNIw7LYa0niHD7BslvqySjsORmQlsHXffrdCKc0FRIqFg
    1n4ytzZwPDHFNx4sgX6zyw4P518WcHaOj7FYXe/ci8rRdtva4e3ScdW5CA8A9x9Svy9H
    F19ukFWXMPC2YuZxY3IpiAHduyJ1+gHjQZ2SaZuTc16yO9IATobnvtQ9WeLeC1lzDTvr
    wqgKkHrqW6+2aDLlPZxPB18tGCVM5R/12AXAe0WmS8nXuz4Gm2CbbgoqLRIfYIRPVC9+
    rs46q/8Vro/tykWVupZ8g1LjgjXIixcrnMCEk22cp6xuBatTEtORKuKGWK2bYkB5x+RN
    46+w==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683673110;
    s=strato-dkim-0003; d=aepfle.de;
    h=In-Reply-To:References:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=GCP5/CljJD+CfipxX1GkhTK2QgMNJf/4Nny5lGr0f04=;
    b=nJCegGckozv2Rmq6r9oY9NISiqdVzUEISVzj5dczOpyX/FcHtlOkJY/hs6aw+JYzti
    I4MybNJ0EfHeeKOHSXAQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QDiZbDmui9LcK/RdXt7GAQpV1nK0bLkUAKSOj2nbrD7pC72l5quxWIls5ia2dFakLOv3Iu"
Date: Wed, 10 May 2023 00:58:27 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: John Snow <jsnow@redhat.com>, xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>, qemu-devel@nongnu.org,
	qemu-block@nongnu.org,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [PATCH v2] piix: fix regression during unplug in Xen HVM domUs
Message-ID: <20230509225818.GA16290@aepfle.de>
References: <20210317070046.17860-1-olaf@aepfle.de>
 <4441d32f-bd52-9408-cabc-146b59f0e4dc@redhat.com>
 <20210325121219.7b5daf76.olaf@aepfle.de>
 <dae251e1-f808-708e-902c-05cfcbbea9cf@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="azLHFNyN32YCQGCU"
Content-Disposition: inline
In-Reply-To: <dae251e1-f808-708e-902c-05cfcbbea9cf@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
Content-Transfer-Encoding: 7bit


--azLHFNyN32YCQGCU
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Resuming this old thread about an unfixed bug, which was introduced in qemu-4.2:

qemu ends up in piix_ide_reset from pci_unplug_disks.
This was not the case prior to 4.2; the removed call to
qemu_register_reset(piix3_reset, d) in
ee358e919e385fdc79d59d0d47b4a81e349cd5c9 apparently did nothing.

In my debugging (with v8.0.0) it turned out that the three pci_set_word
calls cause the domU to hang. In fact, it is just the last one:

   pci_set_byte(pci_conf + 0x20, 0x01);  /* BMIBA: 20-23h */

It changes the value from 0xc121 to 0x1.

The question is: what does this do in practice?

Starting with recent qemu (like 7.2), the domU sometimes proceeds with
these messages:

    [    1.631161] uhci_hcd 0000:00:01.2: host system error, PCI problems?
    [    1.634965] uhci_hcd 0000:00:01.2: host controller process error, something bad happened!
    [    1.634965] uhci_hcd 0000:00:01.2: host controller halted, very bad!
    [    1.634965] uhci_hcd 0000:00:01.2: HC died; cleaning up
    Loading basic drivers...[    2.398048] Disabling IRQ #23

Is anyone familiar enough with PIIX3 to know how these devices
interact?

00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
00:03.0 VGA compatible controller: Cirrus Logic GD 5446


Thanks,
Olaf

On Thu, Mar 25, Paolo Bonzini wrote:

> On 25/03/21 12:12, Olaf Hering wrote:
> > Am Mon, 22 Mar 2021 18:09:17 -0400
> > schrieb John Snow <jsnow@redhat.com>:
> >
> > > My understanding is that XEN has some extra disks that it unplugs when
> > > it later figures out it doesn't need them. How exactly this works is
> > > something I've not looked into too closely.
> >
> > It has no extra disks, why would it?
> >
> > I assume each virtualization variant has some sort of unplug if it has
> > to support guests that lack PV/virtio/enlightened/whatever drivers.
>
> No, it's Xen only and really should be legacy.  Ideally one would just
> have devices supported at all levels from firmware to kernel.
>
> > > So if these IDE devices have been "unplugged" already, we avoid
> > > resetting them here. What about this reset causes the bug you describe
> > > in the commit message?
> > >
> > > Does this reset now happen earlier/later as compared to what it did
> > > prior to ee358e91 ?
> >
> > Prior to this commit, piix_ide_reset was only called when the entire
> > emulated machine was reset. Like: never. With this commit,
> > piix_ide_reset will be called from pci_piix3_xen_ide_unplug. For some
> > reason it confuses the emulated USB hardware. Why it confuses it,
> > no idea.
>
> > I wonder what the purpose of the qdev_reset_all() call really is. It
> > is 10 years old. It might be stale.
>
> piix_ide_reset is only calling ide_bus_reset, and from there ide_reset and
> bmdma_reset.  All of these functions do just two things: reset internal
> registers and ensure pending I/O is completed or canceled.  The latter is
> indeed unnecessary; drain/flush/detach is already done before the call to
> qdev_reset_all.
>
> But the fact that it breaks USB is weird.  That's the part that needs to
> be debugged, because changing IDE to unbreak USB needs an explanation even
> if it's the right thing to do.
>
> If you don't want to debug it, removing the qdev_reset_all call might do
> the job; you'll have to see what the Xen maintainers think of it.  But if
> you don't debug the USB issue now, it will come back later almost surely.
>
> Paolo

--azLHFNyN32YCQGCU
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRa0AUACgkQ86SN7mm1
DoDskBAAmoEFUprqfEqC/CGSWwEP9PGpwslHV/gA+3OQTGCY33DFWwSFhpCEs/o4
pc1cGyq3sTYNLJNHO14XOMMQ+q0EGMaJ3CcWj4i0/aYYerXLgpcsMSAQbt6kaZWC
11j/9XIrBvpg6wGDeZz1RHO2qUoBycVvamv4rWKWcBEyS/YqtgF9m3YRa5HKY69x
WkdAB2D9SMJGlSoe3+q6iH9yQ0jPrfHlDnxZvOxXQzx9VohdxqftxSknZtNR6bsZ
8k/M3krPuNIPhoz72VT5dWUcai5xmxpqdXCenztnk1lync/7550d0mC10vFCKIGu
QWs7eDgwJFLqZCWKGT2NR7p0+rLaUwh04GtDQoeHFlQGlkBJ1tCZ/cUpuTPgz4yu
fWXnrno78j6jvGEeifAEJXcY+tqvdr8kvOEwxcQTx/Pdgb2va8+x6WkIdU6xYQkC
mgwleaTC8Pud+1Q55dojJZVk/lfS2gQoC8VXvsocFuloQxTQCdgzIYSBANNgubVg
tFeTnDA8OXoANHxZa6TfdcPkCESe5u6q6+ug3bFvbzC1gpROv0mcQamBChXyk3OC
2ou0V5Fvm2IAN4j30v+C5KVCvRMOaA/jpz/9ia4XfqkJGiaAF5VMKYVboGOJTErE
OgKsASt88TF85smm5qMpmEhqsM7xlX6cyygCDyNSVoyjzKdXqE8=
=2FAJ
-----END PGP SIGNATURE-----

--azLHFNyN32YCQGCU--


From xen-devel-bounces@lists.xenproject.org Tue May 09 23:37:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 23:37:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532589.828798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwWt2-0006w6-Sj; Tue, 09 May 2023 23:36:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532589.828798; Tue, 09 May 2023 23:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwWt2-0006vz-Q0; Tue, 09 May 2023 23:36:52 +0000
Received: by outflank-mailman (input) for mailman id 532589;
 Tue, 09 May 2023 23:36:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mGMN=A6=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1pwWt1-0006vt-GB
 for xen-devel@lists.xenproject.org; Tue, 09 May 2023 23:36:51 +0000
Received: from mail.skyhub.de (mail.skyhub.de [5.9.137.197])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5c55ad76-eec2-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 01:36:50 +0200 (CEST)
Received: from zn.tnic (p5de8e8ea.dip0.t-ipconnect.de [93.232.232.234])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 6BA1D1EC0338;
 Wed, 10 May 2023 01:36:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c55ad76-eec2-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1683675405;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=5lxz18Yo5hMImpwUjevLsHYDBAq+H3HMlQ4JsgvZNfQ=;
	b=ViBwKc/E9IwalgWRG3DAMDt1hENvVWCAvfGWo6kvQLcZaCN/W2U+PshsC+Sx7GbxRQ37Ra
	jHwik+DHKo+xHqa20cn0nYZe9SzXWFt4Kl9kgOWriVsLyd3kk0m9fU0QJL5+ZgYsnH7c4M
	Pq3h+M+VcNVK8HRaDnJtcn8Stpmew8s=
Date: Wed, 10 May 2023 01:36:41 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
	mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>

On Tue, May 09, 2023 at 10:14:37PM +0200, Borislav Petkov wrote:
> On Tue, May 02, 2023 at 02:09:15PM +0200, Juergen Gross wrote:
> > This series tries to fix the rather special case of PAT being available
> > without having MTRRs (either due to CONFIG_MTRR being not set, or
> > because the feature has been disabled e.g. by a hypervisor).
> 
> More weird stuff. With the series:

Yah, that was me.

That ->enabled thing is *two* bits. FFS.

More staring at this tomorrow, on a clear head.

diff --git a/arch/x86/include/uapi/asm/mtrr.h b/arch/x86/include/uapi/asm/mtrr.h
index a28e6bbd8f21..f476a1355182 100644
--- a/arch/x86/include/uapi/asm/mtrr.h
+++ b/arch/x86/include/uapi/asm/mtrr.h
@@ -84,7 +84,7 @@ typedef __u8 mtrr_type;
 struct mtrr_state_type {
 	struct mtrr_var_range var_ranges[MTRR_MAX_VAR_RANGES];
 	mtrr_type fixed_ranges[MTRR_NUM_FIXED_RANGES];
-	bool enabled;
+	unsigned char enabled;
 	bool have_fixed;
 	mtrr_type def_type;
 };

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Tue May 09 23:54:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 09 May 2023 23:54:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532595.828807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwX9u-0000wu-9N; Tue, 09 May 2023 23:54:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532595.828807; Tue, 09 May 2023 23:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwX9u-0000wn-6f; Tue, 09 May 2023 23:54:18 +0000
Received: by outflank-mailman (input) for mailman id 532595;
 Tue, 09 May 2023 23:54:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwX9s-0000wd-QC; Tue, 09 May 2023 23:54:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwX9s-0007yj-Nj; Tue, 09 May 2023 23:54:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwX9s-00006h-8p; Tue, 09 May 2023 23:54:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwX9s-0003DJ-8L; Tue, 09 May 2023 23:54:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ifVGkEf6VcLqpZy3zfIP5aNNJt6tyNuS0G67L+zfzc0=; b=W/YaJSN2MSNPWDl3EoipsDrDnv
	z37QwZAWSssf84zTBx4C/f8Ylbuh97yc2J579Elzh+TafVhUpC3WHcEqR+un0qxSz/HkPokcsO6i0
	3mI2kN0cpMak263X4/3SixtYN6RtmJ27vIi3SkDgK8b5t6yFlkF8vdHZQxaUV99o0XKo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180593-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180593: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=bee67e0c142af6599a85aa7640094816b8a24c4f
X-Osstest-Versions-That:
    ovmf=5215cd5baf6609e54050c69909273b7f5161c59e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 09 May 2023 23:54:16 +0000

flight 180593 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180593/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 bee67e0c142af6599a85aa7640094816b8a24c4f
baseline version:
 ovmf                 5215cd5baf6609e54050c69909273b7f5161c59e

Last test of basis   180581  2023-05-08 19:13:38 Z    1 days
Testing same since   180593  2023-05-09 22:10:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Michael Brown <mcb30@ipxe.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5215cd5baf..bee67e0c14  bee67e0c142af6599a85aa7640094816b8a24c4f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 10 00:09:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 00:09:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532603.828818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwXOb-0003Fl-IL; Wed, 10 May 2023 00:09:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532603.828818; Wed, 10 May 2023 00:09:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwXOb-0003Fe-Co; Wed, 10 May 2023 00:09:29 +0000
Received: by outflank-mailman (input) for mailman id 532603;
 Wed, 10 May 2023 00:09:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwXOa-0003FU-3W; Wed, 10 May 2023 00:09:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwXOa-0000d4-0d; Wed, 10 May 2023 00:09:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwXOZ-0000U5-GJ; Wed, 10 May 2023 00:09:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwXOZ-0005io-Fu; Wed, 10 May 2023 00:09:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OuDmtMS1+c4JMxe/LkanGAQW+E8tsQOWP+j7xIZTrgs=; b=MPmLee3jess755VcqESdNwKxZe
	6e24kLuQzXNltlheR9geRfHuB6DsMaIAwHe3EI3Fnd2T8YjBhWPugdvA66UQY+ARnrCzY9ThfCzoy
	bWPN+x7Jw7PLJ4k6WaiyRj373G6SNsG+4cRrjn9Vh/A9i2EjUZQjINP1MZc55MLIqpSU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180592-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180592: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ed6b7c0266e512c1207c07911da14e684f47b909
X-Osstest-Versions-That:
    xen=8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 00:09:27 +0000

flight 180592 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180592/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ed6b7c0266e512c1207c07911da14e684f47b909
baseline version:
 xen                  8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4

Last test of basis   180588  2023-05-09 10:01:52 Z    0 days
Testing same since   180592  2023-05-09 21:03:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8b1ac353b4..ed6b7c0266  ed6b7c0266e512c1207c07911da14e684f47b909 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 10 01:48:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 01:48:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532619.828840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwYvj-0003PH-Pb; Wed, 10 May 2023 01:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532619.828840; Wed, 10 May 2023 01:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwYvj-0003P9-Kr; Wed, 10 May 2023 01:47:47 +0000
Received: by outflank-mailman (input) for mailman id 532619;
 Wed, 10 May 2023 01:47:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwYvi-0003Oz-4I; Wed, 10 May 2023 01:47:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwYvi-0001Uj-15; Wed, 10 May 2023 01:47:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwYvh-0002n2-FL; Wed, 10 May 2023 01:47:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwYvh-0008Sd-Dt; Wed, 10 May 2023 01:47:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UjfL3c7ZqzNspfUhmZSTc+UA0R3yxjffAV2hmdXubg4=; b=uU1wv4Yr5qY/wKZw0uA6S2PR0r
	smSn/GE9kwinN4GUpyrXft5ioz+9wVdSQuMmj9lDm30qUtVWmCD+tQH97+1FOUmvwjmakus1HKygE
	vCtJ+QbtEVb6aXu4HON5wXYmXa1fR0CwSex8PvVESQuWMhLEt9ZOk2KCbWt6RNNX36bc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180589-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180589: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4
X-Osstest-Versions-That:
    xen=a16fb78515d54be95f81c0d1c0a3a7b954a54d0a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 01:47:45 +0000

flight 180589 xen-unstable real [real]
flight 180594 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180589/
http://logs.test-lab.xenproject.org/osstest/logs/180594/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180594-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair         11 xen-install/dst_host         fail  like 180584
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180584
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180584
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180584
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180584
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180584
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180584
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180584
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180584
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180584
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180584
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180584
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180584
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4
baseline version:
 xen                  a16fb78515d54be95f81c0d1c0a3a7b954a54d0a

Last test of basis   180584  2023-05-09 01:53:21 Z    0 days
Testing same since   180589  2023-05-09 13:39:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Yann Dirson <yann.dirson@vates.fr>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a16fb78515..8b1ac353b4  8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4 -> master


From xen-devel-bounces@lists.xenproject.org Wed May 10 02:14:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 02:14:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532625.828849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwZL5-00077A-NS; Wed, 10 May 2023 02:13:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532625.828849; Wed, 10 May 2023 02:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwZL5-000773-Kh; Wed, 10 May 2023 02:13:59 +0000
Received: by outflank-mailman (input) for mailman id 532625;
 Wed, 10 May 2023 02:13:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwZL3-00076t-PJ; Wed, 10 May 2023 02:13:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwZL3-0002eU-NO; Wed, 10 May 2023 02:13:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwZL3-0003VL-7t; Wed, 10 May 2023 02:13:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwZL3-0008Lc-7P; Wed, 10 May 2023 02:13:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YAafwJ9bByC9CX90S6jMx3uI1Bm/8d3VvqwfJo0Mnvg=; b=7AoXpFHzrVu8djY6S8TnXi+Aqk
	eqVnD3Q0w2/3z4qUFWvocFpfs7qhQ0uLF1xa03csBQKhCcZkZKNzDlvZ/TW3ZfcjVTmGH4rGp9WMr
	ubCDLGylchKWsiVAVxZB86fF/kQ4VilJhYtF3Il3f3QcABSWUxw2afTLby353lGej+W4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180595-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180595: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e97b9b4e5a4bd53fd5f18c44390b266a2a89881a
X-Osstest-Versions-That:
    ovmf=bee67e0c142af6599a85aa7640094816b8a24c4f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 02:13:57 +0000

flight 180595 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180595/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e97b9b4e5a4bd53fd5f18c44390b266a2a89881a
baseline version:
 ovmf                 bee67e0c142af6599a85aa7640094816b8a24c4f

Last test of basis   180593  2023-05-09 22:10:44 Z    0 days
Testing same since   180595  2023-05-10 00:40:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   bee67e0c14..e97b9b4e5a  e97b9b4e5a4bd53fd5f18c44390b266a2a89881a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 10 04:09:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 04:09:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532636.828859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwb8Y-0001uT-SF; Wed, 10 May 2023 04:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532636.828859; Wed, 10 May 2023 04:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwb8Y-0001uM-Pi; Wed, 10 May 2023 04:09:10 +0000
Received: by outflank-mailman (input) for mailman id 532636;
 Wed, 10 May 2023 04:09:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwb8X-0001uC-Jo; Wed, 10 May 2023 04:09:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwb8X-0005iw-HP; Wed, 10 May 2023 04:09:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwb8W-0007gZ-UE; Wed, 10 May 2023 04:09:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwb8W-0006fN-TS; Wed, 10 May 2023 04:09:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/b8LMUoz5KG+A2pgQcVDNL6baDwunkkh0b6/aYA35xI=; b=HAzIu+n0uzKf4D+Lgkea8rFzVY
	svV6JN3UCOZWbw3bxosNWrb4c6gtDvjS+wz1LEYApBPABUaLOGDEi1rPQvyXOepjsEzuLCvtvu5W1
	8tVPXXMfLXmG7KyxyCfxNFLbul0x4yWA/PqbcGklss9mPxuagYDRgoaG8z8K6hOq1Las=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180590-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180590: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ba0ad6ed89fd5dada3b7b65ef2b08e95d449d4ab
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 04:09:08 +0000

flight 180590 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180590/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build   fail in 180587 REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail in 180587 pass in 180590
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180582
 test-amd64-amd64-xl-vhd      21 guest-start/debian.repeat  fail pass in 180587
 test-amd64-amd64-xl-qemut-win7-amd64 18 guest-localmigrate/x10 fail pass in 180587

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 180587 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 180587 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 180587 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 180587 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 180587 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 180587 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 180587 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 180587 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 180587 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop  fail in 180587 like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ba0ad6ed89fd5dada3b7b65ef2b08e95d449d4ab
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   23 days
Failing since        180281  2023-04-17 06:24:36 Z   22 days   42 attempts
Testing same since   180582  2023-05-08 21:11:46 Z    1 days    3 attempts

------------------------------------------------------------
2359 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 296937 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 10 05:14:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 05:14:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532612.828882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwc9N-0001Mk-WD; Wed, 10 May 2023 05:14:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532612.828882; Wed, 10 May 2023 05:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwc9N-0001Lj-Oo; Wed, 10 May 2023 05:14:05 +0000
Received: by outflank-mailman (input) for mailman id 532612;
 Wed, 10 May 2023 00:18:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VlwF=A7=hotmail.com=rafael_andreas@srs-se1.protection.inumbo.net>)
 id 1pwXXT-0004li-Sy
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 00:18:39 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02olkn20822.outbound.protection.outlook.com
 [2a01:111:f400:fe13::822])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 36252708-eec8-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 02:18:38 +0200 (CEST)
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM (2603:10a6:10:3bf::6) by
 AM9P192MB0871.EURP192.PROD.OUTLOOK.COM (2603:10a6:20b:1fb::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.33; Wed, 10 May 2023 00:18:38 +0000
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047]) by DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047%6]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 00:18:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36252708-eec8-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JpxuxsysUsDSy+raGfwZoQG51JT6QRIdG++XXpcSoUesQw9F8jwVr0CgyxdY5R1VWVm5xIdvfHI3sWj6Z0zJJQVYrvodrggawSqPEizDJuK7Jnrj8qjgZhvnW4eb8YyOOcvawrzlq6Hzk/idD2yl1nWTqFB4j3yphKgZ+G3GHeahTanKOZ9jhTI5OZPRvtzbUuBuUFyz4tx2uI6vdxgj+JtpecFWHwvtaQ9dGM7L55c88nhG0qC6TVGbyCDBr1ZetSQnlkrB25+06IgTPZP7cUxjU59sMb0EWsMq5qpe5Akaaj/zFEQDctr5asjw4mPRl1HJG60hHHzmkr+IBo9+LQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Co0A8XpUzSdvSffHfW1JcfS0NgGlehxe6pGZTt39HIE=;
 b=T5BOE9bB0TEiD5VWiQpmV7JGoup47DkMRhy1Y6WnVG6R2Nz1dioJx3xrlqr1RpOh3FRtFmDx0gBOiksojwbH5dXSMEqhHvL8Kh0gJ7fNogjiTh/s9lIWBkABKx4fkgKywqHX3q0V8Boavg0lJXNNgQAVHYwvEuBIux+5J9L71X51i9v0TbF2pBXAUZIR5ClNSc9LK/zUtly9yxCA7QHxbVeyAxg0OK0qusSGVfSQtsZujKowVj5SFYkEMs701RmSmruWKJDz+6+Rmmp23tTVDq2OuwMfIzrllGOmbp4eZV7Sn648HQa/x43ojBic33Omlf6dOCsLZpEL0EKEKRpYgw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none; dmarc=none;
 dkim=none; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=hotmail.com;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Co0A8XpUzSdvSffHfW1JcfS0NgGlehxe6pGZTt39HIE=;
 b=RIat4IdLGcLGFwu7+3bnNxBZdsF8F5KYs7DP+LXUPfgq9THaa7pZ3Os8sFfsjAwB5fdvJH8a97htQNrNib0c/qCJjmclRDcHntrpmJEHkBlK7ucGLq2HUTK6M35AbXNY/AOYFub+JqqhpiCH22WjkJxEB2ArhChsMEZK7HZ89DCHKfykWiRH7syXrbPvpQs8lN3Th/iDFcumvSTmhVFoZr4zjg0zHcOv1H8loSrUPA51+ZRWKsJMSo2ZS6Km0ofGotSrUD/OfWCesFINzSX1+3omg2HMZRW2oOhwnZ2Be6bFTaf8SqGtF6gcBCz3l8hMe5Z6ar0poKpGVNYgUppsmA==
From: =?UTF-8?q?Rafa=C3=ABl=20Kooi?= <rafael_andreas@hotmail.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Rafa=C3=ABl=20Kooi?= <rafael_andreas@hotmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [XEN PATCH 2/2] x86/Dom0: Use streaming decompression for ZSTD compressed kernels
Date: Wed, 10 May 2023 02:18:22 +0200
Message-ID:
 <DU0P192MB1700F6DFE45A48D90B5FE679E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <cover.1683673597.git.rafael_andreas@hotmail.com>
References: <cover.1683673597.git.rafael_andreas@hotmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-TMN: [HcumDLXUQx2Ka6y8dbImpEzA8tER5uiK1S3VfeyYV5wT2Hi4Nw5fZ9XqlsqI5KUK]
X-ClientProxiedBy: LO2P265CA0052.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::16) To DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:10:3bf::6)
X-Microsoft-Original-Message-ID:
 <c99bfbb52fa28dbc50c588828e2e325ba12be52a.1683673598.git.rafael_andreas@hotmail.com>
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DU0P192MB1700:EE_|AM9P192MB0871:EE_
X-MS-Office365-Filtering-Correlation-Id: 91a01a52-a7fb-4686-ab79-08db50ec19c7
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: sct-15-20-4755-11-msonline-outlook-fb43a.templateTenant
X-MS-Exchange-CrossTenant-Network-Message-Id: 91a01a52-a7fb-4686-ab79-08db50ec19c7
X-MS-Exchange-CrossTenant-AuthSource: DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 00:18:38.1978
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg:
	00000000-0000-0000-0000-000000000000
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9P192MB0871

On Arch Linux, Dom0 kernel decompression fails when Xen has been unified
with the kernel and initramfs into a single EFI binary. Switch to
streaming decompression, which works for both streaming and
non-streaming ZSTD content.

Signed-off-by: Rafaël Kooi <rafael_andreas@hotmail.com>
---
 xen/common/decompress.c | 37 +++++++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/xen/common/decompress.c b/xen/common/decompress.c
index 989336983f..cde754ffb1 100644
--- a/xen/common/decompress.c
+++ b/xen/common/decompress.c
@@ -3,11 +3,26 @@
 #include <xen/string.h>
 #include <xen/decompress.h>
 
+typedef struct _ZSTD_state
+{
+    void *write_buf;
+    unsigned int write_pos;
+} ZSTD_state;
+
 static void __init cf_check error(const char *msg)
 {
     printk("%s\n", msg);
 }
 
+static int __init cf_check ZSTD_flush(void *buf, unsigned int pos,
+                                      void *userptr)
+{
+    ZSTD_state *state = userptr;
+    memcpy(state->write_buf + state->write_pos, buf, pos);
+    state->write_pos += pos;
+    return pos;
+}
+
 int __init decompress(void *inbuf, unsigned int len, void *outbuf)
 {
 #if 0 /* Not needed here yet. */
@@ -17,22 +32,32 @@ int __init decompress(void *inbuf, unsigned int len, void *outbuf)
 #endif
 
     if ( len >= 3 && !memcmp(inbuf, "\x42\x5a\x68", 3) )
-        return bunzip2(inbuf, len, NULL, NULL, outbuf, NULL, error);
+        return bunzip2(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);
 
     if ( len >= 6 && !memcmp(inbuf, "\3757zXZ", 6) )
-        return unxz(inbuf, len, NULL, NULL, outbuf, NULL, error);
+        return unxz(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);
 
     if ( len >= 2 && !memcmp(inbuf, "\135\000", 2) )
-        return unlzma(inbuf, len, NULL, NULL, outbuf, NULL, error);
+        return unlzma(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);
 
     if ( len >= 5 && !memcmp(inbuf, "\x89LZO", 5) )
-        return unlzo(inbuf, len, NULL, NULL, outbuf, NULL, error);
+        return unlzo(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);
 
     if ( len >= 2 && !memcmp(inbuf, "\x02\x21", 2) )
-	return unlz4(inbuf, len, NULL, NULL, outbuf, NULL, error);
+        return unlz4(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);
 
     if ( len >= 4 && !memcmp(inbuf, "\x28\xb5\x2f\xfd", 4) )
-	return unzstd(inbuf, len, NULL, NULL, outbuf, NULL, error);
+    {
+        /*
+         * NOTE (Rafaël): On Arch Linux the kernel is compressed in a way
+         * that requires streaming ZSTD decompression; without it,
+         * decompression fails for a unified EFI binary. A non-unified
+         * binary still works, presumably via kernel self-decompression.
+         */
+
+        ZSTD_state state = (ZSTD_state){ outbuf, 0 };
+        return unzstd(inbuf, len, NULL, ZSTD_flush, NULL, NULL, error, &state);
+    }
 
     return 1;
 }
-- 
2.40.0
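The dispatch in decompress() above keys each decompressor on the leading magic bytes of the payload. A minimal standalone sketch of the ZSTD signature check (the function name `is_zstd` is illustrative, not part of the Xen tree):

```c
#include <string.h>

/*
 * ZSTD frames start with the little-endian magic number 0xFD2FB528,
 * i.e. the byte sequence 28 b5 2f fd, as checked in decompress().
 */
static int is_zstd(const void *inbuf, unsigned int len)
{
    return len >= 4 && !memcmp(inbuf, "\x28\xb5\x2f\xfd", 4);
}
```

The same pattern covers the other formats: bzip2 is identified by "BZh" (42 5a 68), xz by fd '7zXZ', and so on, each guarded by a length check so short inputs never read past the buffer.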



From xen-devel-bounces@lists.xenproject.org Wed May 10 05:14:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 05:14:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532611.828876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwc9N-0001Ey-KY; Wed, 10 May 2023 05:14:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532611.828876; Wed, 10 May 2023 05:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwc9N-0001Ek-EU; Wed, 10 May 2023 05:14:05 +0000
Received: by outflank-mailman (input) for mailman id 532611;
 Wed, 10 May 2023 00:18:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VlwF=A7=hotmail.com=rafael_andreas@srs-se1.protection.inumbo.net>)
 id 1pwXXS-0004li-Si
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 00:18:39 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02olkn20822.outbound.protection.outlook.com
 [2a01:111:f400:fe13::822])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 358ca281-eec8-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 02:18:38 +0200 (CEST)
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM (2603:10a6:10:3bf::6) by
 AM9P192MB0871.EURP192.PROD.OUTLOOK.COM (2603:10a6:20b:1fb::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.33; Wed, 10 May 2023 00:18:36 +0000
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047]) by DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047%6]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 00:18:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 358ca281-eec8-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mOEebGwVICUAf+35vIFZxHWtEeFJhyzGW5z1UKCJ0K/+KmFyo5xg+Pv4KR6HTKN9rzisSUl/sp29JXcW4JjNhyubBNDoQ0D5bmfHYwMAz3M4N3Ts8RL7bO8Af6jhJ7HK4Z33xo+poLIEKYBz8qZPFc69UT027pDW+NuOTRh7o+EMYkyj+8zDiCR9Vd9t1t7eD1teuFQ8qORT+MYzpyz7ahXKYhE/YQihP65Lv6heUJy8wEswqApBcWEqA1x2EG8YNuY8uhEMWejt5biyJbt0z8TwvnQRQfzqGT2kqvmXvLOb3Gui4gezsGI8HY5cIH+7cp5REs6/OXuyg1UOfOGoDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ET36M7cTJdg4CJ6NvtMlwqnICtej3ldCzYBd3kr2cj0=;
 b=YuLCM9JoFlMUB4UQNDkDR5C/o9F6Fi8AFSrnxJG8Et0jIZsGB9NpQ9kX/43nZydoBJzupohR3Ghtk108K5o9cSvWkhS0+OSYqLweuuaRfKvOD60tnnLVhrtdmj+HRlNY1JhxDRSuQmJ9SBP6f856BPe48kIDgW6qadct+gVMieiEQ00qQdDYS4mylhkFQP7p+HnL6ynJX0y7OG8M17k19bzPw9af6c7+QJtl3mIXXJ5CncPvN8Iz8O0ZczKLOsXJ43RYhDl9wt+OFpmiFbIIKdOvI4oGPVaswufpX3DpreizKSgyvYjSMLupbZWixJL4iP0UjKpd2tqRjKBnVxXcMA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none; dmarc=none;
 dkim=none; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=hotmail.com;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ET36M7cTJdg4CJ6NvtMlwqnICtej3ldCzYBd3kr2cj0=;
 b=oNExkE8UPjaq1+li7g+wT2UsONIAiV3UH9MAdKqTBTw17wIGc4GUznLazP7RlaQ6wUIQgxZ0TFL67QmaXGpT25Wr1HnM3dtzpzLOUDBh1mhmsxMxrnvGJ5rdf4WoSpXTVwoj3qVFAKUMgQrywV8cb4MlKITx/pZ5ZvdPFXKRVaNlmbRlFXJZQiT6brrn2HKJnpMQZcp/SOOItnq6I5OCalhIXJo6uQCcAduR1mfPbfXPIJTL3ZQYcDf44oiFzC0EQj9vkIOKc2WGn//KYe1PRJroLHaNZmW9pznxOEW5FYlqT1pusP6Xhlj7RMA6r0KpFIENlWhpo5NtdFOLUipkvw==
From: =?UTF-8?q?Rafa=C3=ABl=20Kooi?= <rafael_andreas@hotmail.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Rafa=C3=ABl=20Kooi?= <rafael_andreas@hotmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [XEN PATCH 1/2] xen/decompress: Add a user pointer for bookkeeping in the callbacks
Date: Wed, 10 May 2023 02:18:21 +0200
Message-ID:
 <DU0P192MB1700684CB8DF7B3845B64126E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <cover.1683673597.git.rafael_andreas@hotmail.com>
References: <cover.1683673597.git.rafael_andreas@hotmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-TMN: [tSTPrSG+RMNi4PjUJiX8cv+iUoVEEwY8OMWFWNwu3UdjJOU6en9lzcwWbm24R6GF]
X-ClientProxiedBy: LO2P265CA0052.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::16) To DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:10:3bf::6)
X-Microsoft-Original-Message-ID:
 <06c6d6eb57f2dffa036af44ac8d9cc3e9df6aad0.1683673598.git.rafael_andreas@hotmail.com>
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DU0P192MB1700:EE_|AM9P192MB0871:EE_
X-MS-Office365-Filtering-Correlation-Id: c2488a15-447a-4104-3ccb-08db50ec18a7
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: sct-15-20-4755-11-msonline-outlook-fb43a.templateTenant
X-MS-Exchange-CrossTenant-Network-Message-Id: c2488a15-447a-4104-3ccb-08db50ec18a7
X-MS-Exchange-CrossTenant-AuthSource: DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 00:18:36.3371
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg:
	00000000-0000-0000-0000-000000000000
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9P192MB0871

Before this change the callbacks had to be either completely stateless
or rely on global state for bookkeeping. Where bookkeeping is needed,
global state is not an option in decompress.c because the linker
disallows .data and .bss sections there. This change allows localized
bookkeeping by passing a user pointer through to the callbacks.

Signed-off-by: Rafaël Kooi <rafael_andreas@hotmail.com>
---
 xen/common/bunzip2.c         | 23 +++++++++++++----------
 xen/common/unlz4.c           | 15 ++++++++-------
 xen/common/unlzma.c          | 30 ++++++++++++++++++------------
 xen/common/unlzo.c           | 13 +++++++------
 xen/common/unxz.c            | 11 ++++++-----
 xen/common/unzstd.c          | 13 +++++++------
 xen/include/xen/decompress.h | 10 +++++++---
 7 files changed, 66 insertions(+), 49 deletions(-)

diff --git a/xen/common/bunzip2.c b/xen/common/bunzip2.c
index 4466426941..a854046524 100644
--- a/xen/common/bunzip2.c
+++ b/xen/common/bunzip2.c
@@ -1,4 +1,4 @@
-/* vi: set sw = 4 ts = 4: */
+/* vi: set sw=4 ts=4: */
 /*	Small bzip2 deflate implementation, by Rob Landley (rob@landley.net).
 
 	Based on bzip2 decompression code by Julian R Seward (jseward@acm.org),
@@ -86,7 +86,7 @@ struct bunzip_data {
 	/* State for interrupting output loop */
 	int writeCopies, writePos, writeRunCountdown, writeCount, writeCurrent;
 	/* I/O tracking data (file handles, buffers, positions, etc.) */
-	int (*fill)(void*, unsigned int);
+	int (*fill)(void*, unsigned int, void*);
 	int inbufCount, inbufPos /*, outbufPos*/;
 	unsigned char *inbuf /*,*outbuf*/;
 	unsigned int inbufBitCount, inbufBits;
@@ -99,6 +99,7 @@ struct bunzip_data {
 	unsigned char selectors[32768];		/* nSelectors = 15 bits */
 	struct group_data groups[MAX_GROUPS];	/* Huffman coding tables */
 	int io_error;			/* non-zero if we have IO error */
+	void *userptr;			/* user pointer to pass to fill */
 };
 
 
@@ -117,7 +118,7 @@ static unsigned int __init get_bits(struct bunzip_data *bd, char bits_wanted)
 		if (bd->inbufPos == bd->inbufCount) {
 			if (bd->io_error)
 				return 0;
-			bd->inbufCount = bd->fill(bd->inbuf, BZIP2_IOBUF_SIZE);
+			bd->inbufCount = bd->fill(bd->inbuf, BZIP2_IOBUF_SIZE, bd->userptr);
 			if (bd->inbufCount <= 0) {
 				bd->io_error = RETVAL_UNEXPECTED_INPUT_EOF;
 				return 0;
@@ -612,7 +613,7 @@ decode_next_byte:
 	goto decode_next_byte;
 }
 
-static int __init cf_check nofill(void *buf, unsigned int len)
+static int __init cf_check nofill(void *buf, unsigned int len, void *userptr)
 {
 	return -1;
 }
@@ -621,7 +622,7 @@ static int __init cf_check nofill(void *buf, unsigned int len)
    a complete bunzip file (len bytes long).  If in_fd!=-1, inbuf and len are
    ignored, and data is read from file handle into temporary buffer. */
 static int __init start_bunzip(struct bunzip_data **bdp, void *inbuf, int len,
-			       int (*fill)(void*, unsigned int))
+			       int (*fill)(void*, unsigned int, void*), void *userptr)
 {
 	struct bunzip_data *bd;
 	unsigned int i, j, c;
@@ -644,6 +645,7 @@ static int __init start_bunzip(struct bunzip_data **bdp, void *inbuf, int len,
 		bd->fill = fill;
 	else
 		bd->fill = nofill;
+	bd->userptr = userptr;
 
 	/* Init the CRC32 table (big endian) */
 	for (i = 0; i < 256; i++) {
@@ -671,10 +673,11 @@ static int __init start_bunzip(struct bunzip_data **bdp, void *inbuf, int len,
 /* Example usage: decompress src_fd to dst_fd.  (Stops at end of bzip2 data,
    not end of file.) */
 int __init bunzip2(unsigned char *buf, unsigned int len,
-		   int(*fill)(void*, unsigned int),
-		   int(*flush)(void*, unsigned int),
+		   int(*fill)(void*, unsigned int, void*),
+		   int(*flush)(void*, unsigned int, void*),
 		   unsigned char *outbuf, unsigned int *pos,
-		   void(*error)(const char *x))
+		   void(*error)(const char *x),
+		   void *userptr)
 {
 	struct bunzip_data *bd;
 	int i = -1;
@@ -696,7 +699,7 @@ int __init bunzip2(unsigned char *buf, unsigned int len,
 		i = RETVAL_OUT_OF_MEMORY;
 		goto exit_0;
 	}
-	i = start_bunzip(&bd, inbuf, len, fill);
+	i = start_bunzip(&bd, inbuf, len, fill, userptr);
 	if (!i) {
 		for (;;) {
 			i = read_bunzip(bd, outbuf, BZIP2_IOBUF_SIZE);
@@ -705,7 +708,7 @@ int __init bunzip2(unsigned char *buf, unsigned int len,
 			if (!flush)
 				outbuf += i;
 			else
-				if (i != flush(outbuf, i)) {
+				if (i != flush(outbuf, i, userptr)) {
 					i = RETVAL_UNEXPECTED_OUTPUT_EOF;
 					break;
 				}
diff --git a/xen/common/unlz4.c b/xen/common/unlz4.c
index 2096b98f36..00c179732e 100644
--- a/xen/common/unlz4.c
+++ b/xen/common/unlz4.c
@@ -23,10 +23,11 @@
 #define ARCHIVE_MAGICNUMBER 0x184C2102
 
 int __init unlz4(unsigned char *input, unsigned int in_len,
-		 int (*fill)(void *, unsigned int),
-		 int (*flush)(void *, unsigned int),
+		 int (*fill)(void *, unsigned int, void*),
+		 int (*flush)(void *, unsigned int, void*),
 		 unsigned char *output, unsigned int *posp,
-		 void (*error)(const char *x))
+		 void (*error)(const char *x),
+		 void *userptr)
 {
 	int ret = -1;
 	size_t chunksize = 0;
@@ -75,7 +76,7 @@ int __init unlz4(unsigned char *input, unsigned int in_len,
 		*posp = 0;
 
 	if (fill)
-		fill(inp, 4);
+		fill(inp, 4, userptr);
 
 	chunksize = get_unaligned_le32(inp);
 	if (chunksize == ARCHIVE_MAGICNUMBER) {
@@ -92,7 +93,7 @@ int __init unlz4(unsigned char *input, unsigned int in_len,
 	for (;;) {
 
 		if (fill)
-			fill(inp, 4);
+			fill(inp, 4, userptr);
 
 		chunksize = get_unaligned_le32(inp);
 		if (chunksize == ARCHIVE_MAGICNUMBER) {
@@ -113,7 +114,7 @@ int __init unlz4(unsigned char *input, unsigned int in_len,
 				error("chunk length is longer than allocated");
 				goto exit_2;
 			}
-			fill(inp, chunksize);
+			fill(inp, chunksize, userptr);
 		}
 #if defined(__XEN__) || defined(__MINIOS__)
 		if (out_len >= uncomp_chunksize) {
@@ -133,7 +134,7 @@ int __init unlz4(unsigned char *input, unsigned int in_len,
 		}
 
 		ret = -1;
-		if (flush && flush(outp, dest_len) != dest_len)
+		if (flush && flush(outp, dest_len, userptr) != dest_len)
 			goto exit_2;
 		if (output)
 			outp += dest_len;
diff --git a/xen/common/unlzma.c b/xen/common/unlzma.c
index 6cd99023ad..d5dbc44881 100644
--- a/xen/common/unlzma.c
+++ b/xen/common/unlzma.c
@@ -59,7 +59,7 @@ static long long __init read_int(unsigned char *ptr, int size)
 #define LZMA_IOBUF_SIZE	0x10000
 
 struct rc {
-	int (*fill)(void*, unsigned int);
+	int (*fill)(void*, unsigned int, void*);
 	uint8_t *ptr;
 	uint8_t *buffer;
 	uint8_t *buffer_end;
@@ -68,6 +68,7 @@ struct rc {
 	uint32_t range;
 	uint32_t bound;
 	void (*error)(const char *);
+	void *userptr;
 };
 
 
@@ -76,7 +77,7 @@ struct rc {
 #define RC_MODEL_TOTAL_BITS 11
 
 
-static int __init cf_check nofill(void *buffer, unsigned int len)
+static int __init cf_check nofill(void *buffer, unsigned int len, void *userptr)
 {
 	return -1;
 }
@@ -84,7 +85,7 @@ static int __init cf_check nofill(void *buffer, unsigned int len)
 /* Called twice: once at startup and once in rc_normalize() */
 static void __init rc_read(struct rc *rc)
 {
-	rc->buffer_size = rc->fill((char *)rc->buffer, LZMA_IOBUF_SIZE);
+	rc->buffer_size = rc->fill((char *)rc->buffer, LZMA_IOBUF_SIZE, rc->userptr);
 	if (rc->buffer_size <= 0)
 		rc->error("unexpected EOF");
 	rc->ptr = rc->buffer;
@@ -93,8 +94,9 @@ static void __init rc_read(struct rc *rc)
 
 /* Called once */
 static inline void __init rc_init(struct rc *rc,
-				  int (*fill)(void*, unsigned int),
-				  unsigned char *buffer, int buffer_size)
+				  int (*fill)(void*, unsigned int, void*),
+				  unsigned char *buffer, int buffer_size,
+				  void *userptr)
 {
 	if (fill)
 		rc->fill = fill;
@@ -104,6 +106,7 @@ static inline void __init rc_init(struct rc *rc,
 	rc->buffer_size = buffer_size;
 	rc->buffer_end = rc->buffer + rc->buffer_size;
 	rc->ptr = rc->buffer;
+	rc->userptr = userptr;
 
 	rc->code = 0;
 	rc->range = 0xFFFFFFFF;
@@ -274,8 +277,9 @@ struct writer {
 	size_t buffer_pos;
 	int bufsize;
 	size_t global_pos;
-	int(*flush)(void*, unsigned int);
+	int(*flush)(void*, unsigned int, void*);
 	struct lzma_header *header;
+	void *userptr;
 };
 
 struct cstate {
@@ -313,7 +317,7 @@ static inline int __init write_byte(struct writer *wr, uint8_t byte)
 	if (wr->flush && wr->buffer_pos == wr->header->dict_size) {
 		wr->buffer_pos = 0;
 		wr->global_pos += wr->header->dict_size;
-		if (wr->flush((char *)wr->buffer, wr->header->dict_size)
+		if (wr->flush((char *)wr->buffer, wr->header->dict_size, wr->userptr)
 				!= wr->header->dict_size)
 			return -1;
 	}
@@ -529,10 +533,11 @@ static inline int __init process_bit1(struct writer *wr, struct rc *rc,
 
 
 int __init unlzma(unsigned char *buf, unsigned int in_len,
-		  int(*fill)(void*, unsigned int),
-		  int(*flush)(void*, unsigned int),
+		  int(*fill)(void*, unsigned int, void*),
+		  int(*flush)(void*, unsigned int, void*),
 		  unsigned char *output, unsigned int *posp,
-		  void(*error)(const char *x))
+		  void(*error)(const char *x),
+		  void *userptr)
 {
 	struct lzma_header header;
 	int lc, pb, lp;
@@ -566,8 +571,9 @@ int __init unlzma(unsigned char *buf, unsigned int in_len,
 	wr.global_pos = 0;
 	wr.previous_byte = 0;
 	wr.buffer_pos = 0;
+	wr.userptr = userptr;
 
-	rc_init(&rc, fill, inbuf, in_len);
+	rc_init(&rc, fill, inbuf, in_len, userptr);
 
 	for (i = 0; i < sizeof(header); i++) {
 		if (rc.ptr >= rc.buffer_end)
@@ -644,7 +650,7 @@ int __init unlzma(unsigned char *buf, unsigned int in_len,
 
 	if (posp)
 		*posp = rc.ptr-rc.buffer;
-	if (!wr.flush || wr.flush(wr.buffer, wr.buffer_pos) == wr.buffer_pos)
+	if (!wr.flush || wr.flush(wr.buffer, wr.buffer_pos, wr.userptr) == wr.buffer_pos)
 		ret = 0;
 exit_3:
 	large_free(p);
diff --git a/xen/common/unlzo.c b/xen/common/unlzo.c
index 74056778eb..8908790425 100644
--- a/xen/common/unlzo.c
+++ b/xen/common/unlzo.c
@@ -115,10 +115,11 @@ static int __init parse_header(u8 *input, int *skip, int in_len)
 }
 
 int __init unlzo(unsigned char *input, unsigned int in_len,
-		 int (*fill) (void *, unsigned int),
-		 int (*flush) (void *, unsigned int),
+		 int (*fill) (void *, unsigned int, void*),
+		 int (*flush) (void *, unsigned int, void*),
 		 unsigned char *output, unsigned int *posp,
-		 void (*error) (const char *x))
+		 void (*error) (const char *x),
+		 void *userptr)
 {
 	u8 r = 0;
 	int skip = 0;
@@ -161,7 +162,7 @@ int __init unlzo(unsigned char *input, unsigned int in_len,
 		*posp = 0;
 
 	if (fill)
-		fill(in_buf, lzo1x_worst_compress(LZO_BLOCK_SIZE));
+		fill(in_buf, lzo1x_worst_compress(LZO_BLOCK_SIZE), userptr);
 
 	if (!parse_header(input, &skip, in_len)) {
 		error("invalid header");
@@ -227,7 +228,7 @@ int __init unlzo(unsigned char *input, unsigned int in_len,
 			}
 		}
 
-		if (flush && flush(out_buf, dst_len) != dst_len)
+		if (flush && flush(out_buf, dst_len, userptr) != dst_len)
 			goto exit_2;
 		if (output)
 			out_buf += dst_len;
@@ -235,7 +236,7 @@ int __init unlzo(unsigned char *input, unsigned int in_len,
 			*posp += src_len + 12;
 		if (fill) {
 			in_buf = in_buf_save;
-			fill(in_buf, lzo1x_worst_compress(LZO_BLOCK_SIZE));
+			fill(in_buf, lzo1x_worst_compress(LZO_BLOCK_SIZE), userptr);
 		} else {
 			in_buf += src_len;
 			in_len -= src_len;
diff --git a/xen/common/unxz.c b/xen/common/unxz.c
index 17aead0adf..6f005170d3 100644
--- a/xen/common/unxz.c
+++ b/xen/common/unxz.c
@@ -158,10 +158,11 @@
  * fill() and flush() won't be used.
  */
 int __init unxz(unsigned char *in, unsigned int in_size,
-		int (*fill)(void *dest, unsigned int size),
-		int (*flush)(void *src, unsigned int size),
+		int (*fill)(void *dest, unsigned int size, void *userptr),
+		int (*flush)(void *src, unsigned int size, void *userptr),
 		unsigned char *out, unsigned int *in_used,
-		void (*error)(const char *x))
+		void (*error)(const char *x),
+		void *userptr)
 {
 	struct xz_buf b;
 	struct xz_dec *s;
@@ -213,7 +214,7 @@ int __init unxz(unsigned char *in, unsigned int in_size,
 
 				b.in_pos = 0;
 
-				in_size = fill(in, XZ_IOBUF_SIZE);
+				in_size = fill(in, XZ_IOBUF_SIZE, userptr);
 				if ((int) in_size < 0) {
 					/*
 					 * This isn't an optimal error code
@@ -236,7 +237,7 @@ int __init unxz(unsigned char *in, unsigned int in_size,
 				 * returned by xz_dec_run(), but probably
 				 * it's not too bad.
 				 */
-				if (flush(b.out, b.out_pos) != (int)b.out_pos)
+				if (flush(b.out, b.out_pos, userptr) != (int)b.out_pos)
 					ret = XZ_BUF_ERROR;
 
 				b.out_pos = 0;
diff --git a/xen/common/unzstd.c b/xen/common/unzstd.c
index 47073dd3e3..09e8fdef04 100644
--- a/xen/common/unzstd.c
+++ b/xen/common/unzstd.c
@@ -143,10 +143,11 @@ out:
 }
 
 int __init unzstd(unsigned char *in_buf, unsigned int in_len,
-		  int (*fill)(void*, unsigned int),
-		  int (*flush)(void*, unsigned int),
+		  int (*fill)(void*, unsigned int, void*),
+		  int (*flush)(void*, unsigned int, void*),
 		  unsigned char *out_buf, unsigned int *in_pos,
-		  void (*error)(const char *x))
+		  void (*error)(const char *x),
+		  void *userptr)
 {
 	ZSTD_inBuffer in;
 	ZSTD_outBuffer out;
@@ -190,7 +191,7 @@ int __init unzstd(unsigned char *in_buf, unsigned int in_len,
 	}
 	/* Read the first chunk, since we need to decode the frame header. */
 	if (fill != NULL)
-		in_len = fill(in_buf, ZSTD_IOBUF_SIZE);
+		in_len = fill(in_buf, ZSTD_IOBUF_SIZE, userptr);
 	if ((int)in_len < 0) {
 		error("ZSTD-compressed data is truncated");
 		err = -1;
@@ -267,7 +268,7 @@ int __init unzstd(unsigned char *in_buf, unsigned int in_len,
 		if (in.pos == in.size) {
 			if (in_pos != NULL)
 				*in_pos += in.pos;
-			in_len = fill ? fill(in_buf, ZSTD_IOBUF_SIZE) : -1;
+			in_len = fill ? fill(in_buf, ZSTD_IOBUF_SIZE, userptr) : -1;
 			if ((int)in_len < 0) {
 				error("ZSTD-compressed data is truncated");
 				err = -1;
@@ -283,7 +284,7 @@ int __init unzstd(unsigned char *in_buf, unsigned int in_len,
 			goto out;
 		/* Flush all of the data produced if using flush(). */
 		if (flush != NULL && out.pos > 0) {
-			if (out.pos != flush(out.dst, out.pos)) {
+			if (out.pos != flush(out.dst, out.pos, userptr)) {
 				error("Failed to flush()");
 				err = -1;
 				goto out;
diff --git a/xen/include/xen/decompress.h b/xen/include/xen/decompress.h
index f5bc17f2b6..804dbca963 100644
--- a/xen/include/xen/decompress.h
+++ b/xen/include/xen/decompress.h
@@ -2,10 +2,11 @@
 #define __XEN_GENERIC_H
 
 typedef int decompress_fn(unsigned char *inbuf, unsigned int len,
-                          int (*fill)(void*, unsigned int),
-                          int (*flush)(void*, unsigned int),
+                          int (*fill)(void*, unsigned int, void*),
+                          int (*flush)(void*, unsigned int, void*),
                           unsigned char *outbuf, unsigned int *posp,
-                          void (*error)(const char *x));
+                          void (*error)(const char *x),
+                          void *userptr);
 
 /* inbuf   - input buffer
  * len     - len of pre-read data in inbuf
@@ -15,6 +16,7 @@ typedef int decompress_fn(unsigned char *inbuf, unsigned int len,
  * posp    - if non-null, input position (number of bytes read) will be
  *           returned here
  * error   - error reporting function
+ * userptr - user pointer passed through to the fill and flush callbacks
  *
  * If len != 0, inbuf should contain all the necessary input data, and fill
  * should be NULL
@@ -29,6 +31,8 @@ typedef int decompress_fn(unsigned char *inbuf, unsigned int len,
  * decompressor (outbuf = NULL), and the flush function will be called to
  * flush the output buffer at the appropriate time (decompressor and stream
  * dependent).
+ *
+ * The third argument of fill and flush is the user pointer, intended for bookkeeping.
  */
 
 decompress_fn bunzip2, unxz, unlzma, unlzo, unlz4, unzstd;
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Wed May 10 05:14:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 05:14:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532609.828869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwc9N-0001Ba-8a; Wed, 10 May 2023 05:14:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532609.828869; Wed, 10 May 2023 05:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwc9N-0001BT-5t; Wed, 10 May 2023 05:14:05 +0000
Received: by outflank-mailman (input) for mailman id 532609;
 Wed, 10 May 2023 00:18:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VlwF=A7=hotmail.com=rafael_andreas@srs-se1.protection.inumbo.net>)
 id 1pwXXO-0004li-5W
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 00:18:34 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02olkn20800.outbound.protection.outlook.com
 [2a01:111:f400:fe13::800])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 32251c1f-eec8-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 02:18:32 +0200 (CEST)
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM (2603:10a6:10:3bf::6) by
 AM9P192MB0871.EURP192.PROD.OUTLOOK.COM (2603:10a6:20b:1fb::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.33; Wed, 10 May 2023 00:18:31 +0000
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047]) by DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047%6]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 00:18:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32251c1f-eec8-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e6ggHOMYqA+VDkqkLM68zo3xoa3Im/SBUvsAq441pKH/4JAmGm6lBaPQSyIhOVrb7TQrnLvqtXP5f7ZcAv0Xh5k4OA2qVebfapADHlVxPgr+MFHNkNsK/JZN9huKiLMSFFAFF85a/X6CDsPTE289L3oXx+iJYGSEyQxnSNQdvPAVTy3zvyEIdt7OBg/yB4sap+vMuJOC18XB5R3k9HGZZ37VNUfl5ZELpCXoscROY6AsrENtwk4hjb2iBl8Eq66nHXlbk4VewO8S/0J6l0aKW9rgxN2bl5Ug0scGB4+y5jh8A7eHcSdssx2frZH8b1AFK1KZtVO+6A1J8+pxqYPq8g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0HVxw76X4sNRQ9I6fgOOtesQKW3CkebbZBE9wbgYkqE=;
 b=oJg1Kn+8Rd1cLUYZQZJ+2xmot3wGRrKFPcCbsVZubCR+uKfFAg6E9SQAAXmFb0GedBelIAm4IH1W/PJr2eNzavj0xNWt7mR6cKkuDLXyPUUeMPgB6Hq0O3oybrq8L3uULH+eN/bZraKHEiUO+rV2B8QEpOSI4A/CzaN9+Am9i1hncTKRF0IiFqnG5uxzmdySvOfXESI7XLMBLQKeHkqtLrBPoH6qieUtVhen4Uk8I1o1B9TzzAd80sAXbPL348i558qYtw9yWctdfbkyPH0iB7cnf/gwkTRbk6otmcV0VYtr0DSD0sG5FirzVgaai9CGMZuI2okPOQGGYEt87sJ/HQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none; dmarc=none;
 dkim=none; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=hotmail.com;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0HVxw76X4sNRQ9I6fgOOtesQKW3CkebbZBE9wbgYkqE=;
 b=eEjrJN9Cqodg2ENtj2iZJMJA2zMvzbV81Xq0POlDJc+WWGFB1w53H0oZdeJ/Ya9BeVHyaaOXOzspGju+c7zsdt7sQSTSiijo0swh/33ClDbN0TMmMPh0b8tXDVPASO481sSLR91YQU5yOzRrAy1zCATdxzEwuRCYVDxd4dOGDrwF4xO+BeL4FEydKjzVxv4hA30mV3AcOf8Fr+qs53tQMc2EhPOQS4mAKt3KSicr3TyfKnmpg+BQ3B+GPM7WhrddGqhkoKcF9EbuL8v/fVuovROm7NznNWr5ZgcI5kAlmk+7hsgXBEnqMaSDB2N6MLNVTh3B4UIOzA9jDIZxhegytw==
From: =?UTF-8?q?Rafa=C3=ABl=20Kooi?= <rafael_andreas@hotmail.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Rafa=C3=ABl=20Kooi?= <rafael_andreas@hotmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [XEN PATCH 0/2] Use streaming decompression for ZSTD kernels
Date: Wed, 10 May 2023 02:18:20 +0200
Message-ID:
 <DU0P192MB170087F1C604F82B946E0CD5E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-TMN: [beReuaQEGpvvNbuXWl9DsV1UepjdXlqA/OFaDoYM0Eh/m0k42bwxKZ3y6Hazqxye]
X-ClientProxiedBy: LO2P265CA0052.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:60::16) To DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:10:3bf::6)
X-Microsoft-Original-Message-ID:
 <cover.1683673597.git.rafael_andreas@hotmail.com>
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DU0P192MB1700:EE_|AM9P192MB0871:EE_
X-MS-Office365-Filtering-Correlation-Id: e80615e6-f4ff-4213-4d0c-08db50ec1586
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: sct-15-20-4755-11-msonline-outlook-fb43a.templateTenant
X-MS-Exchange-CrossTenant-Network-Message-Id: e80615e6-f4ff-4213-4d0c-08db50ec1586
X-MS-Exchange-CrossTenant-AuthSource: DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 00:18:31.0806
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg:
	00000000-0000-0000-0000-000000000000
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9P192MB0871

I've attempted to get Xen to boot Arch Linux as a unified EFI binary.
Using https://xenbits.xen.org/docs/unstable/misc/efi.html as my source
of information, I was able to build a unified binary. When trying to
boot the kernel, however, Xen complains that the stream is corrupt
("ZSTD-compressed data is corrupt"). I have reproduced the issue
locally in user space, and confirmed that it is also present in the
latest ZSTD version.

With streaming decompression the kernel gets unpacked properly, and the
output is identical to that of `cat kernel.zst | unzstd > bzImage`.

A problem I ran into was that adding bookkeeping state to decompress.c
would result in either a .data or a .bss.* section being emitted, which
the linker then complains about. Since I am not familiar with this code,
or with why it is laid out this way, I opted instead to add a user
pointer to the internal decompression API.

Rafaël Kooi (2):
  xen/decompress: Add a user pointer for book keeping in the callbacks
  x86/Dom0: Use streaming decompression for ZSTD compressed kernels

 xen/common/bunzip2.c         | 23 ++++++++++++----------
 xen/common/decompress.c      | 37 ++++++++++++++++++++++++++++++------
 xen/common/unlz4.c           | 15 ++++++++-------
 xen/common/unlzma.c          | 30 +++++++++++++++++------------
 xen/common/unlzo.c           | 13 +++++++------
 xen/common/unxz.c            | 11 ++++++-----
 xen/common/unzstd.c          | 13 +++++++------
 xen/include/xen/decompress.h | 10 +++++++---
 8 files changed, 97 insertions(+), 55 deletions(-)

--
2.40.0



From xen-devel-bounces@lists.xenproject.org Wed May 10 05:47:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 05:47:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532656.828899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwcfI-0005sP-J7; Wed, 10 May 2023 05:47:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532656.828899; Wed, 10 May 2023 05:47:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwcfI-0005sI-GN; Wed, 10 May 2023 05:47:04 +0000
Received: by outflank-mailman (input) for mailman id 532656;
 Wed, 10 May 2023 05:47:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwcfG-0005s8-Hl; Wed, 10 May 2023 05:47:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwcfG-0008Do-FO; Wed, 10 May 2023 05:47:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwcfF-0004oa-PM; Wed, 10 May 2023 05:47:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwcfF-0002nb-Ot; Wed, 10 May 2023 05:47:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NEQClUeghl037EI3zjq4a1teGpGVd/bvrbk80Z/HdZI=; b=bkzSFYWcVF75+9EHRTm+PvrI7s
	jwYrjnOERLilKwSsYrrZCOa5gwmegm/8qQsFJ9yxAKj6es9R22GnsXSoFS+rCIGqsqRf0bVzWLOCo
	b13nrCIECd0tNtqw/fahV2GQWQxlcGaLXSH8BWh/fjGsHj5yGbs1DzknUC8APz1zmz/A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180597-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180597: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=9165a7e95ec6263c04c8babfdbe8bee133959300
X-Osstest-Versions-That:
    ovmf=e97b9b4e5a4bd53fd5f18c44390b266a2a89881a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 05:47:01 +0000

flight 180597 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180597/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9165a7e95ec6263c04c8babfdbe8bee133959300
baseline version:
 ovmf                 e97b9b4e5a4bd53fd5f18c44390b266a2a89881a

Last test of basis   180595  2023-05-10 00:40:40 Z    0 days
Testing same since   180597  2023-05-10 03:12:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e97b9b4e5a..9165a7e95e  9165a7e95ec6263c04c8babfdbe8bee133959300 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 10 07:48:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 07:48:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532671.828910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pweYG-0001MN-9E; Wed, 10 May 2023 07:47:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532671.828910; Wed, 10 May 2023 07:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pweYG-0001MG-54; Wed, 10 May 2023 07:47:56 +0000
Received: by outflank-mailman (input) for mailman id 532671;
 Wed, 10 May 2023 07:47:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rbbI=A7=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pweYD-0001M4-7f
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 07:47:54 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [81.169.146.165]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f6d6652d-ef06-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 09:47:51 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4A7lY8oh
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 10 May 2023 09:47:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6d6652d-ef06-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683704854; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=c+CEDxctFw9MGyl+F0u5UkTweo5v0QOac6s+9XQ6Ohek4sU9Nb3NjkBNLbSQc3X29p
    2CdyvI911gHLpvKz5hoLu9JjuuFsgzR03VUCr3SZIXms9NAuW4Vy/4dkmPrw/6Bmb3VE
    UaKnuTGT9JE19bACRywlR5HYy5J9SP5i+dEpyT0WfPDm+I1nkuMMxICFkySkA8PBh//v
    zYzqBsOlfTKBjA5deCHf/cJKV82V3nAqC+1UXGM60F9mI92Y7GYFpBzciNLxVDzBA3FR
    xyNUlQNb3+DhS06zMkpwAnhugHKGDm9+gjh8CWGnwJ9h94hc8eWIx8zj2CkabD+uw4zd
    DTUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683704854;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=fpMbxRbaAtliiEqD3yXPNT5MQEGuwjFCZILVH7eZVLE=;
    b=XkgGWnjmjh0LovQ0y1CUSjs5+WREcOsUOUk2zLuJUHTOca4+aEyZByxpsEApoFkNRP
    3vb/lhdYvmeiRPtqOpm+hO1CnErrXS3BbUw3z7UXTxAPf6IM6u/fYlRmL/f1ncMbd5Vd
    vc4PvG8GsrAWhNm9AWYo+sKFlMdH0Hxq7+tfiMdv+1UbkfeuqJ8o9sBaFNC3dvXfYQ+N
    K2RHk8N/icC5Qdh+7z6PT7Qfi2nSU6NZlRagbdQzoEUpOpF1fUI/D3C/cK+CJ4y26ogE
    OikicXS+faaliiYj5wACVqMsdvoINRbdK0p9qbyFkdc7RigZiHZqSIuBUk/WE3Gi70iT
    ei1Q==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683704854;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=fpMbxRbaAtliiEqD3yXPNT5MQEGuwjFCZILVH7eZVLE=;
    b=VtAUsFP4F+JSerLh8MZ7ULl38l9Hj85X55W1atDN/NNfPVXUtGM+qCJwizusxv+wTa
    k1OQywRxHqXhEF3+6Tp3C0rZfJ0hnxu3t5FZPVAgSMlFNIiQrS/l2caM/mw8BlrSQ+bc
    I1o3q2aH4XKDsO5iMgxenQnNE1DsBRu47zqWRl4wMrLEtwXfffrG1DHxQn4SEg/rAcDK
    2xPb1GStX2e7yg6n1ZFTCf/sY+bfKs97VNWNNnBmaIX5Ji+bO9t9VZPxvKRa6rB2yuIN
    4v4Fkyoq3W4RFutJiRg+JvonSO8WuY1KmqSLnkXwsfRM4BxaiI4/ehGUgDCBfKozldBN
    2IiQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683704854;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=fpMbxRbaAtliiEqD3yXPNT5MQEGuwjFCZILVH7eZVLE=;
    b=krAmvBX6oRNRFqJB0vvQOHessJgvwMw5kZBnFSJhW2UNLeDlEB/rHcpLzraKPVRrcM
    4iIe2hNTHbCfynLvd7Dg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4ARWIYxzstZKeVom+bauo0LKSCjuo5iX5xLikmg=="
Date: Wed, 10 May 2023 09:47:19 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Paolo Bonzini <pbonzini@redhat.com>, John Snow <jsnow@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, qemu-devel@nongnu.org,
 qemu-block@nongnu.org, Philippe =?UTF-8?B?TWF0aGlldS1EYXVkw6k=?=
 <f4bug@amsat.org>
Subject: Re: [PATCH v2] piix: fix regression during unplug in Xen HVM domUs
Message-ID: <20230510094719.26fb79e5.olaf@aepfle.de>
In-Reply-To: <20230509225818.GA16290@aepfle.de>
References: <20210317070046.17860-1-olaf@aepfle.de>
	<4441d32f-bd52-9408-cabc-146b59f0e4dc@redhat.com>
	<20210325121219.7b5daf76.olaf@aepfle.de>
	<dae251e1-f808-708e-902c-05cfcbbea9cf@redhat.com>
	<20230509225818.GA16290@aepfle.de>
X-Mailer: Claws Mail 20230504T161344.b05adb60 has a software problem, nothing to be done about it.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/R/2sS42QIzwYzqn.D6NtiKf";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/R/2sS42QIzwYzqn.D6NtiKf
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Wed, 10 May 2023 00:58:27 +0200 Olaf Hering <olaf@aepfle.de>:

> In my debugging (with v8.0.0) it turned out that the three pci_set_word
> calls cause the domU to hang. In fact, it is just the last one:
>
>    pci_set_byte(pci_conf + 0x20, 0x01);  /* BMIBA: 20-23h */
>
> It changes the value from 0xc121 to 0x1.

If I disable just "pci_set_word(pci_conf + PCI_COMMAND, 0x0000);" it works as well.
It changes the value from 0x5 to 0.

In general I feel it is wrong to fiddle with PCI from the host side.
This is most likely not the intention of the Xen unplug protocol.
I'm sure the guest does not expect such changes under the hood.
It happens to work by luck with pvops kernels because their PCI discovery
is done after the unplug.

So, what do we do here to get this off the table?


Olaf

--Sig_/R/2sS42QIzwYzqn.D6NtiKf
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRbTAcACgkQ86SN7mm1
DoBRfg//cAC+i271H3HLPzevJR+ToZ3Ywxfz+e53Xk6RnVZfy79aWT9delbKNvoo
/sEmxurXHIjU2vHju5bE7C4s8J8oTjjY/vgJrnBp0IIoDNF7s2DNPdpWP31bdC32
AzWMm2kzCWYbTf2k0ByGQ2FlyxJi17fbXaIgwMsmhG6WhCbf2CTD+ZPoW5DLPlI6
nhJaFpge2Lw5PjebKkZh/eHB9GmQ7o/Z1fs2VnghfJeeyeBAtx7UuBwol3ZX6rTS
V76Ftd+PcXPvrw3UkUyj6VE38YJT95bIHpEf4oR9iiPlvE/Lv8wYq7fdCdbvkKQd
yIyJzrg8S2YAFWxBVpQVNOYXhFTbhgyQr331OslGBGMRaZ/F9vQkuBzK+d6pVAHv
bTJJhDYqY1kr+gJerGPW5+gWoaCegEIzMj7fjYNVWHCh51hG7eJw8cYkFEPWApIP
XhKyulCg+vaCRO1/TTsrq2mXQ8GDUp+Xm1iorJ69CC9i2qQ8fWM8EJ8lVZZddas2
3tSwjtdrt/MWvmetZK0yM4jZint6nj0w4iTOs/MR8Kya/3bLbKRNDZczsIrNd8p6
5UrR33AMreG41Nkob/IRltibnHEDhHPgK9GJfC2yWj49Tf4H8PYTlT6HxOIeyX4r
OXp83UJ40RKuyvu7GJIxNfOgFdLTeZZEQSxAkxW4lrTFxuTQgls=
=+t5t
-----END PGP SIGNATURE-----

--Sig_/R/2sS42QIzwYzqn.D6NtiKf--


From xen-devel-bounces@lists.xenproject.org Wed May 10 07:54:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 07:54:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532676.828919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pweej-0002pC-Tc; Wed, 10 May 2023 07:54:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532676.828919; Wed, 10 May 2023 07:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pweej-0002p5-Qo; Wed, 10 May 2023 07:54:37 +0000
Received: by outflank-mailman (input) for mailman id 532676;
 Wed, 10 May 2023 07:54:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pweei-0002oz-PM
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 07:54:36 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20616.outbound.protection.outlook.com
 [2a01:111:f400:7d00::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7fa0dbd-ef07-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 09:54:35 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS5PR04MB10017.eurprd04.prod.outlook.com (2603:10a6:20b:67c::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Wed, 10 May
 2023 07:54:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 07:54:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7fa0dbd-ef07-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R3j3MOJ/7t6BwTPpsjK8TXvSl1FLjwbx0Ks7u0jnkGgxwinqyB9gLLXVTAG/Jm/Rt/c15rOT466QXNjO4QCx6mSA5AIlGC5q8ASsudTOymdFeL1rUpg5Ocrktt8x5KNaWR1bU17J+PWQed40VTFhHiYMgC2gqgl5b1ez9wRmltz1zJG1jGIrR2On2oGaDx1YlK9EML16NBG93TxJkLYswphCh7JEWNZpNAZzyK93TGqtp97jQE3Mkr6qelA8GdLOWMabO+pPdslSSUuwNFb+zTurfzBCxU9mo53uUHbEvEy0aMWMyMxDIJUEiNBGger1Y6undyySriRHt7cS0Abumw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=udbh7q+uh6IremICmSasivqBR73eF4POi85bbc4gJLA=;
 b=fwDY1pYYynoigeqqhGATx6XBmKHexrqSifzeKzfPlWpwOBj8Ltc86TUYs/qrPzIQC1JV6MbVB016kuDLqgaYijVJF6N/7Ynj+06CKf/o0/wwg3WosOdMfCWfsLnAaCyS7SR3qlQ1+ch/f2655Yg4HVlE/VJUIjEBMZ5+IXFd0Q6EdHoSdKf988w1Ea1O+rCqkoD3DWBtwWbaxf+ZOoR2bQsCeKxxDnRPBsi2yUqQ1ehxNK7SmQp/1awUjdeia9A9599IzSLVS4Yo36rHDENRWsscYOKW5JmTuIyOZPcu+P+p0dH5C7kawc0VQZ44+Y4RYAVYdAQ1DS7lbcU7Js33PA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=udbh7q+uh6IremICmSasivqBR73eF4POi85bbc4gJLA=;
 b=IiFcMrA19fsnJzdsNRh8izEowyrBNtYGMs9aooC/dzap+f2wJDjhY9bgX/8XZnBFfcAkshLIN+Kf4xnSS5hpSsHW0Z/i5LB0J0JzgiyjpJ6v63q53loAPstbMguvZIW+j9NKDJEmbGBKCEPzfggkuE/MOHjE6BR9yIwXyydjzt6wb7JYrGPebHP8s30pkyTxI3VPCAp24kaP6WZY1ayk/kJthvDeCEUbgc00JsZj4SzAS+/6JDXVSqy/u1GhGO9TfY+k8m0tmwIUURM8LVwouISY5MR7xNyc+DFUe9VL4GVxo+IFud7pxEZmWXq6mocUVoxKB7pGRYnTdqaqtLurLg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <dc899580-e002-09c3-ba81-c8b828b05e08@suse.com>
Date: Wed, 10 May 2023 09:54:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN PATCH 0/2] Use streaming decompression for ZSTD kernels
To: =?UTF-8?Q?Rafa=c3=abl_Kooi?= <rafael_andreas@hotmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <DU0P192MB170087F1C604F82B946E0CD5E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU0P192MB170087F1C604F82B946E0CD5E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0168.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b4::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS5PR04MB10017:EE_
X-MS-Office365-Filtering-Correlation-Id: 9bdfa85b-39cc-4e10-d269-08db512bcb17
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9bdfa85b-39cc-4e10-d269-08db512bcb17
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 07:54:33.7529
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Vp/VVRpfyyqnW1I874RHs2zFpRCAtRBtcqn4YFRa+eSQa3TajNx2NznfkvestLQLq7JASx9evID0X34DcTwgDA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS5PR04MB10017

On 10.05.2023 02:18, Rafaël Kooi wrote:
> A problem I ran into was that adding bookkeeping to decompress.c would
> result in either a .data section being added or a .bss.* section. The
> linker would complain about this. And since I am not familiar with this
> code, and why it is this way, I opted to add a user-pointer to the
> internal decompression API.

At least gunzip likewise uses global variables to track state. Since you
aren't being explicit, I guess you mean "Error: size of <file>:<section> is
<size>", which is emitted by our build system, not the linker? That's easy to
overcome without touching all decompressors - use __initdata; see
common/gunzip.c.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 10 08:03:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 08:03:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532686.828929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwen3-00050z-81; Wed, 10 May 2023 08:03:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532686.828929; Wed, 10 May 2023 08:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwen3-00050s-5I; Wed, 10 May 2023 08:03:13 +0000
Received: by outflank-mailman (input) for mailman id 532686;
 Wed, 10 May 2023 08:03:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwen1-00050k-Lb
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 08:03:11 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061c.outbound.protection.outlook.com
 [2a01:111:f400:7d00::61c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1a3328aa-ef09-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 10:03:09 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GVXPR04MB10047.eurprd04.prod.outlook.com (2603:10a6:150:117::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.19; Wed, 10 May
 2023 08:03:05 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 08:03:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a3328aa-ef09-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L5RY3f4CO4rCqU7IDnvx3lUUT/gv13SDXAh8lMH1udCL3LOqe0Cow8Cb4fl8XTbm4RjU0iG1h2IN5va8Nqd2Gy10GEcLO4YGt40RYyGIP2s3LWju37OntGhEFSnlyErbPCPuz/Ar1/Xakd6KEaJRDyldcbTFApjA/LJrvzmOZzIaaPuFNOWa8Oy6DdWkTOMkXgVhKioTItbubtqfoGAuy9MEeo/eM6wdqYcZQbLnCt6LzHg4p+9RNXBoyXqbnubgv2J5rJc0UR9hR5GkiZcn1BlbxeWeSRXta9vb5aEPyJRNm7P4katUdhwcwWdriKtG3g3o/r45SyHzLaU+a1NaYA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pVAIGQMnGL6MDJ42uGSoqBIUzYduE/3zimVh4z8RD6M=;
 b=Jm5pHSC3OKLeuq4ilPtvcVdjKiur92osYcJ8K86xyDipQhOl+3cggZQaPEA9l2Hc87/R6VQLsCfBs5h6ux1oukBDMq/cHXKuVUjxavB+RE1UFiP2r8R1kitcHL+Q0s1gHZsev2KzJSxnYoFXCxsr13KNfz2sQ6ftYS0T8Q5q6eEwOxvLob+L39EFeD/MYXI8XxMawN4RMbKkN5N2RSOAX90gIQAv7FZPLEnH9EuylaaeEVUVfNdgZNjpoer7kL+eBD+Kpl+iLR2zlm6AdCAzPptZWpoClU1KthN2zrFqb6mOoVr4b0E5Dl9bLVslxUz6BW0J7I6q5ORrO8ujnhzwoA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pVAIGQMnGL6MDJ42uGSoqBIUzYduE/3zimVh4z8RD6M=;
 b=hLH66T6C6R7r3n5fwA2u2f6bzXHNQ+kECO0S7GaSXyD2xkOXhQsu6LnUbpa+teqyt9THAngh4EcEq9f0VxmDEpw9/hXqNwpQpsORXjpMQnLPrr5VID3Yz3Lp8/vUHIGhvfixKIFYvxkuANDomaVtztZoSU2WTyx0gcGLDtj7QqHs2lyLr8uqtMjTctUlEEz9dYdXYl0IuGRG1DHIr5EPBrirjvzhpwgVbeUCkqC0+XlTcFKA7IWEJh0rmLV7M1UkDLIey+AnJ5C+Nb7ni6u5sJTNDhh+JaC4lUa3J5olPDh//RPpyy2/5ukj9ixDIiWBdS8X9s4bh2afT66pLK085g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <283798a0-0c69-7705-aade-6cd6b2c5f3c4@suse.com>
Date: Wed, 10 May 2023 10:03:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN PATCH 2/2] x86/Dom0: Use streaming decompression for ZSTD
 compressed kernels
Content-Language: en-US
To: =?UTF-8?Q?Rafa=c3=abl_Kooi?= <rafael_andreas@hotmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1683673597.git.rafael_andreas@hotmail.com>
 <DU0P192MB1700F6DFE45A48D90B5FE679E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU0P192MB1700F6DFE45A48D90B5FE679E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0089.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GVXPR04MB10047:EE_
X-MS-Office365-Filtering-Correlation-Id: 198e4ee5-f0e0-440a-dd0e-08db512cfc20
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 198e4ee5-f0e0-440a-dd0e-08db512cfc20
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 08:03:05.5138
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Fb9w3b4AXi/whWb88kfnizLMsFPO/NW3XzkI18k5V4XsgmzpeZEhsYrODj3cTcYilmgE2jnRlpGwJpCSLsZGLA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR04MB10047

On 10.05.2023 02:18, Rafaël Kooi wrote:
> On Arch Linux kernel decompression will fail when Xen has been unified
> with the kernel and initramfs as a single binary. This change works for
> both streaming and non-streaming ZSTD content.

This could do with a better explanation of what "unified" means here, and
of how streaming decompression actually makes a difference.

> --- a/xen/common/decompress.c
> +++ b/xen/common/decompress.c
> @@ -3,11 +3,26 @@
>  #include <xen/string.h>
>  #include <xen/decompress.h>
>  
> +typedef struct _ZSTD_state
> +{
> +    void *write_buf;
> +    unsigned int write_pos;
> +} ZSTD_state;
> +
>  static void __init cf_check error(const char *msg)
>  {
>      printk("%s\n", msg);
>  }
>  
> +static int __init cf_check ZSTD_flush(void *buf, unsigned int pos,
> +                                      void *userptr)
> +{
> +    ZSTD_state *state = (ZSTD_state*)userptr;
> +    memcpy(state->write_buf + state->write_pos, buf, pos);
> +    state->write_pos += pos;
> +    return pos;
> +}

This doesn't really belong here, but will (I expect) go away anyway once
you drop the earlier patch.

> @@ -17,22 +32,32 @@ int __init decompress(void *inbuf, unsigned int len, void *outbuf)
>  #endif
>  
>      if ( len >= 3 && !memcmp(inbuf, "\x42\x5a\x68", 3) )
> -        return bunzip2(inbuf, len, NULL, NULL, outbuf, NULL, error);
> +        return bunzip2(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);
>  
>      if ( len >= 6 && !memcmp(inbuf, "\3757zXZ", 6) )
> -        return unxz(inbuf, len, NULL, NULL, outbuf, NULL, error);
> +        return unxz(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);
>  
>      if ( len >= 2 && !memcmp(inbuf, "\135\000", 2) )
> -        return unlzma(inbuf, len, NULL, NULL, outbuf, NULL, error);
> +        return unlzma(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);
>  
>      if ( len >= 5 && !memcmp(inbuf, "\x89LZO", 5) )
> -        return unlzo(inbuf, len, NULL, NULL, outbuf, NULL, error);
> +        return unlzo(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);
>  
>      if ( len >= 2 && !memcmp(inbuf, "\x02\x21", 2) )
> -	return unlz4(inbuf, len, NULL, NULL, outbuf, NULL, error);
> +        return unlz4(inbuf, len, NULL, NULL, outbuf, NULL, error, NULL);

This also looks wrong here - if the earlier patch were to be kept, I expect
all these adjustments would have to move there. Otherwise the build would be
broken with just the 1st patch in place.

>      if ( len >= 4 && !memcmp(inbuf, "\x28\xb5\x2f\xfd", 4) )
> -	return unzstd(inbuf, len, NULL, NULL, outbuf, NULL, error);
> +    {
> +        // NOTE (Rafaël): On Arch Linux the kernel is compressed in a way
> +        // that requires streaming ZSTD decompression. Otherwise decompression
> +        // will fail when using a unified EFI binary. Somehow decompression
> +        // works when not using a unified EFI binary, I suspect this is the
> +        // kernel self decompressing. Or there is a code path that I am not
> +        // aware of that takes care of the use case properly.

Along the lines of what I've said for the description, this wants to avoid
terms like "somehow" if at all possible.

We also don't normally put our names in such comments.

Finally please see ./CODING_STYLE for how we expect comments to be
formatted.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 10 08:15:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 08:15:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532691.828940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwez6-0006ch-BT; Wed, 10 May 2023 08:15:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532691.828940; Wed, 10 May 2023 08:15:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwez6-0006ca-8Q; Wed, 10 May 2023 08:15:40 +0000
Received: by outflank-mailman (input) for mailman id 532691;
 Wed, 10 May 2023 08:15:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwez4-0006cQ-Mw
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 08:15:38 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20631.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d769ac69-ef0a-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 10:15:36 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7672.eurprd04.prod.outlook.com (2603:10a6:20b:23e::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Wed, 10 May
 2023 08:15:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 08:15:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d769ac69-ef0a-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=g5C3D/jZEFfnCcULQAWGM3/tXogwdQpun9MwJgdvdgg8U20jbDZRFTIsHZCoPzN5C9zRaikoTMCgqSOlG4MDBOR/+e5AJOgaV8cra6WTp5Xf6w31CHFPhDnBosxF34eyE072O9aNYqwiyAreHlOV3/9Huk96sHQf50GDLnUAAAuaVdDG5LVGYYrySy/ZY4EYg/oznhIDGb7moytf6kFn+diUoNDARfYj/WPbNX0mSzTn5/KP3WpTmJE2ij4vdMNAL5oefbjm5Mgmebb4GF4BqnYWFl7jBYk1AAcHNJLDrkp0vLAmpN/yUriyoPS8n4BiafcARnsab1TiluyYKg0zoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=2nTaZG5JSa3b9kU2fi1/OejC6IzTEZM3QCV2ZpRwL1A=;
 b=SKD9RaQNMlMTC53HLWjXBfJ1WecnnXd/3Dn2BJYUwHpO8OzLotA5b1tk1Wwn1wrdnN/+60TRzKsbx3mkcb0FeFCr6+v9/xn9btnZo7EccHN73n1ReR/1nAWPpC9SupYVgI/53aPyFB/U4XK/JvnDmbh4atRJ6gavYxIEbr5kihUe5/iKFWAi4mw6CXuguBe2Bz9DguY9vFt8ujxeqcLA/mzrNBTLeutN5LuHqk0cNeOQQuxCTbWGaBdzhLjCGwAq2Wu+NxJb0o0lA9p3UpcFG78DbMWqf+/7rbn1QWXF21bWkY98gfEbn/XkkFGiYO3GSOLwZ6uR9PIXc0qsOee/dw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2nTaZG5JSa3b9kU2fi1/OejC6IzTEZM3QCV2ZpRwL1A=;
 b=fbveqnn6nYNlteY6DCyd8AAWCM9Q7C1s2yWiyyILTjrlbO0QIpBeUZkMM8ffjFYYh9URogy5CLfWjM4xoAxoG7dvgypf6Jy8irs/tSVReWcICHhJgSwDvoMRRHkVjmtISzyllvTfapcbFWXTL6KykC1OzSGutN3TYvvjYohjIvszlPgdP5sVo0kc6VteCz+3wDfc7CbEtaF/xcBwoJtHURll3tP/qFCnIsub7/yap6BUAGIzl+9C9RXYVt0J3gnSrwzj9v69yMXxRcLPhxBExw5guEThy4TwQ8xAldXTsNDEM+IJzw+VJxdsqWL7stSBNGzWHffu0M7/qLBS/daZDw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7d41940f-e671-954c-1afc-510e4fa674fa@suse.com>
Date: Wed, 10 May 2023 10:15:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 3/3] x86: Use CpuidUserDis if an AMD HVM guest toggles
 CPUID faulting
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
 <20230505175705.18098-4-alejandro.vallejo@cloud.com>
 <d8c9728b-b615-7229-2e76-106d81802a20@suse.com>
 <d0794e7d-604e-044f-000e-3a0bde126425@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d0794e7d-604e-044f-000e-3a0bde126425@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0130.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7672:EE_
X-MS-Office365-Filtering-Correlation-Id: 4b309d85-d4c6-4cd9-ab87-08db512eba31
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b309d85-d4c6-4cd9-ab87-08db512eba31
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 08:15:33.8723
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: C/xOZ9Oc+8keQ6SRf/SkYf7Sg5u/1/mj9DSC8frpIyRPX76OLDAGaGbSWb9HEw8sNEzJuhJ9T8J/1W3SQ7uvig==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7672

On 09.05.2023 12:05, Andrew Cooper wrote:
> On 08/05/2023 2:18 pm, Jan Beulich wrote:
>> On 05.05.2023 19:57, Alejandro Vallejo wrote:
>>> This is in order to aid guests of AMD hardware that we have exposed
>>> CPUID faulting to. If they try to modify the Intel MSR that enables
>>> the feature, trigger levelling so AMD's version of it (CpuidUserDis)
>>> is used instead.
>>>
>>> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
>>> ---
>>>  xen/arch/x86/msr.c | 9 ++++++++-
>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>> Don't you also need to update cpu-policy.c:calculate_host_policy()
>> for the guest to actually know it can use the functionality? Which
>> in turn would appear to require some form of adjustment to
>> lib/x86/policy.c:x86_cpu_policies_are_compatible().
> 
> I asked Alejandro to do it like this.
> 
> Advertising this to guests requires plumbing another MSR into the
> infrastructure which isn't quite set up properly yet, and is in flux
> from my work.

Maybe there was some misunderstanding here, which I realize only now. I
wasn't asking to expose the AMD feature; instead I was after
    /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
    /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
    p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;

wanting to be extended by "|| boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)".
That, afaict, has no connection to plumbing yet another MSR.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 10 08:27:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 08:27:51 +0000
Date: Wed, 10 May 2023 10:27:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] iommu/vtd: fix address translation for superpages
Message-ID: <ZFtVYEVsELGfZxik@Air-de-Roger>
References: <20230509104146.61178-1-roger.pau@citrix.com>
 <bc750b8d-77be-f967-907a-4f19c783ad99@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <bc750b8d-77be-f967-907a-4f19c783ad99@suse.com>

On Tue, May 09, 2023 at 06:06:45PM +0200, Jan Beulich wrote:
> On 09.05.2023 12:41, Roger Pau Monne wrote:
> > When translating an address that falls inside of a superpage in the
> > IOMMU page tables the fetching of the PTE physical address field
> > wasn't using dma_pte_addr(), which caused the returned data to be
> > corrupt as it would contain bits not related to the address field.
> 
> I'm afraid I don't understand:
> 
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -359,16 +359,18 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
> >  
> >              if ( !alloc )
> >              {
> > -                pte_maddr = 0;
> >                  if ( !dma_pte_present(*pte) )
> > +                {
> > +                    pte_maddr = 0;
> >                      break;
> > +                }
> >  
> >                  /*
> >                   * When the leaf entry was requested, pass back the full PTE,
> >                   * with the address adjusted to account for the residual of
> >                   * the walk.
> >                   */
> > -                pte_maddr = pte->val +
> > +                pte_maddr +=
> >                      (addr & ((1UL << level_to_offset_bits(level)) - 1) &
> >                       PAGE_MASK);
> 
> With this change you're now violating what the comment says (plus what
> the comment ahead of the function says). And it says what it says for
> a reason - see intel_iommu_lookup_page(), which I think your change is
> breaking.

Hm, but the code in intel_iommu_lookup_page() is now wrong, as it takes
the bits in DMA_PTE_CONTIG_MASK as part of the physical address when
doing the conversion to mfn?  maddr_to_mfn() doesn't perform any
masking to remove the bits above PADDR_BITS.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 10 08:32:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 08:32:43 +0000
Date: Wed, 10 May 2023 10:32:13 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] x86/iommu: fix wrong iterator type in
 arch_iommu_hwdom_init()
Message-ID: <ZFtWc3ks5f0kMAQT@Air-de-Roger>
References: <20230509110325.61750-1-roger.pau@citrix.com>
 <56c5d0f1-bc47-8824-9515-239647015d47@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <56c5d0f1-bc47-8824-9515-239647015d47@suse.com>

On Tue, May 09, 2023 at 06:25:05PM +0200, Jan Beulich wrote:
> On 09.05.2023 13:03, Roger Pau Monne wrote:
> > The 'i' iterator index stores a pdx, not a pfn, and hence the initial
> > assignment of start (which stores a pfn) needs a conversion from pfn
> > to pdx.
> 
> Strictly speaking: Yes. But pdx compression skips the bottom MAX_ORDER
> bits, so ...

Oh, that wasn't obvious to me.

> > --- a/xen/drivers/passthrough/x86/iommu.c
> > +++ b/xen/drivers/passthrough/x86/iommu.c
> > @@ -406,7 +406,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
> >       */
> >      start = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;
> 
> ... with this, ...
> 
> > -    for ( i = start, count = 0; i < top; )
> > +    for ( i = pfn_to_pdx(start), count = 0; i < top; )
> 
> ... this is an expensive identity transformation. Could I talk you into
> adding
> 
>     ASSERT(start == pfn_to_pdx(start));
> 
> instead (or the corresponding BUG_ON() if you'd prefer that, albeit then
> the expensive identity transformation will still be there even in release
> builds; not that it matters all that much right here, but still)?

So far the value of start is not influenced by hardware, so having an
assert should be fine.

Given that the assignment is done just once at the start of the loop,
I don't see it being that relevant to the performance of this piece of
code TBH, especially since we already do a pdx_to_pfn() for each loop
iteration, so my preference would be to use the proposed change.

> In any event, with no real bug fixed (unless I'm overlooking something),
> I would suggest to drop the Fixes: tag.

Right, I could drop that.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 10 08:41:08 2023
Date: Wed, 10 May 2023 10:39:24 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch v3 08/36] x86/smpboot: Split up native_cpu_up() into
 separate phases and document them
Message-ID: <20230510083924.GI4253@hirez.programming.kicks-ass.net>
References: <20230508181633.089804905@linutronix.de>
 <20230508185217.671595388@linutronix.de>
 <20230509100421.GU83892@hirez.programming.kicks-ass.net>
 <87fs85z2na.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87fs85z2na.ffs@tglx>

On Tue, May 09, 2023 at 10:11:05PM +0200, Thomas Gleixner wrote:
> On Tue, May 09 2023 at 12:04, Peter Zijlstra wrote:
> > On Mon, May 08, 2023 at 09:43:39PM +0200, Thomas Gleixner wrote:
> > Not to the detriment of this patch, but this barrier() and it's comment
> > seem weird vs smp_callin(). That function ends with an atomic bitop (it
> > has to, at the very least it must not be weaker than store-release) but
> > also has an explicit wmb() to order setup vs CPU_STARTING.
> >
> > (arguably that should be a full fence *AND* get a comment)
> 
> TBH: I'm grasping for something 'arguable': What's the point of this
> wmb() or even a mb()?
> 
> Most of the [w]mb()'s in smpboot.c except those in mwait_play_dead()
> have a very distinct voodoo programming smell.

Oh, fully agreed, esp. without a comment these things are hugely suspect.
I could not immediately see a purpose either.

My "arguably" was about whether it is needed at all: IF it is, then it
would make more sense to me to also constrain loads. But I'd be more than
happy to see the whole thing go, though perhaps not in this series?


From xen-devel-bounces@lists.xenproject.org Wed May 10 09:49:19 2023
Message-ID: <9b8f033d-321c-412d-d0f3-51d29ac8d238@suse.com>
Date: Wed, 10 May 2023 11:48:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN PATCH 2/2] x86/Dom0: Use streaming decompression for ZSTD
 compressed kernels
Content-Language: en-US
To: =?UTF-8?Q?Rafa=c3=abl_Kooi?= <rafael_andreas@hotmail.com>
References: <cover.1683673597.git.rafael_andreas@hotmail.com>
 <DU0P192MB1700F6DFE45A48D90B5FE679E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
 <283798a0-0c69-7705-aade-6cd6b2c5f3c4@suse.com>
 <DU0P192MB1700F7BB44DC06D67D7AE345E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU0P192MB1700F7BB44DC06D67D7AE345E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

First of all - please don't drop Cc-s when replying. I'm restoring
xen-devel@ here at least.

On 10.05.2023 10:51, Rafaël Kooi wrote:
> On 10/05/2023 10:03, Jan Beulich wrote:
>> On 10.05.2023 02:18, Rafaël Kooi wrote:
>>> On Arch Linux kernel decompression will fail when Xen has been unified
>>> with the kernel and initramfs as a single binary. This change works for
>>> both streaming and non-streaming ZSTD content.
>>
>> This could do with better explaining what "unified" means, and how
>> streaming decompression actually makes a difference.
>>
> 
> I don't mind explaining it further, but with the EFI documentation for
> it existing on xenbits, should I just refer to that?

You may of course refer to existing documentation. Iirc that doesn't
cover any compression aspects, though.

>>> --- a/xen/common/decompress.c
>>> +++ b/xen/common/decompress.c
>>> @@ -3,11 +3,26 @@
>>>   #include <xen/string.h>
>>>   #include <xen/decompress.h>
>>>   
>>> +typedef struct _ZSTD_state
>>> +{
>>> +    void *write_buf;
>>> +    unsigned int write_pos;
>>> +} ZSTD_state;
>>> +
>>>   static void __init cf_check error(const char *msg)
>>>   {
>>>       printk("%s\n", msg);
>>>   }
>>>   
>>> +static int __init cf_check ZSTD_flush(void *buf, unsigned int pos,
>>> +                                      void *userptr)
>>> +{
>>> +    ZSTD_state *state = (ZSTD_state*)userptr;
>>> +    memcpy(state->write_buf + state->write_pos, buf, pos);
>>> +    state->write_pos += pos;
>>> +    return pos;
>>> +}
>>
>> This doesn't really belong here, but will (I expect) go away anyway once
>> you drop the earlier patch.
>>
> 
> The ZSTD_flush will have to stay, as that is how the decompressor
> starts streaming decompression. The difference will be that the
> bookkeeping will be "global" (to the translation unit).

But this bookkeeping should be entirely in zstd code (i.e. presumably
unzstd.c).

>>>       if ( len >= 4 && !memcmp(inbuf, "\x28\xb5\x2f\xfd", 4) )
>>> -	return unzstd(inbuf, len, NULL, NULL, outbuf, NULL, error);
>>> +    {
>>> +        // NOTE (Rafaël): On Arch Linux the kernel is compressed in a way
>>> +        // that requires streaming ZSTD decompression. Otherwise decompression
>>> +        // will fail when using a unified EFI binary. Somehow decompression
>>> +        // works when not using a unified EFI binary, I suspect this is the
>>> +        // kernel self decompressing. Or there is a code path that I am not
>>> +        // aware of that takes care of the use case properly.
>>
>> Along the lines of what I've said for the description, this wants to avoid
>> terms like "somehow" if at all possible.
> 
> I've used the term "somehow" because I don't know why decompression
> works when Xen loads the kernel from the EFI file system. I assume the
> kernel still gets unpacked by Xen, right? Or does the kernel unpack
> itself?

The handling of Dom0 kernel decompression ought to be entirely independent
of EFI vs legacy. Unless I'm wrong there (mis-remembering), your
mentioning of EFI is potentially misleading. And yes, at least on x86 the
kernel is decompressed by Xen (by peeking into the supplied bzImage). The
difference between a plain bzImage and a "unified EFI binary" is what you
will want to outline in the description (and at least mention in the
comment). What I'm wondering is whether there simply is an issue with size
determination when the kernel is taken from the .kernel section.

> When I present the v2 of this patch, do I add you as a reviewer? Or will
> that be done by the merger?

I'm afraid I don't understand the question. You will continue to Cc
respective maintainers, which will include me. In case you refer to a
Reviewed-by: tag - you can only add such tags once they were offered to
you by the respective person. For this specific one it doesn't mean "an
earlier version of this was looked at by <person>" but "this is deemed
okay by <person>".

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 10 10:01:16 2023
Message-ID: <5bde1d79-03ef-6f8b-4b28-8d7e6867ba18@suse.com>
Date: Wed, 10 May 2023 12:00:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] iommu/vtd: fix address translation for superpages
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20230509104146.61178-1-roger.pau@citrix.com>
 <bc750b8d-77be-f967-907a-4f19c783ad99@suse.com>
 <ZFtVYEVsELGfZxik@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFtVYEVsELGfZxik@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 10.05.2023 10:27, Roger Pau Monné wrote:
> On Tue, May 09, 2023 at 06:06:45PM +0200, Jan Beulich wrote:
>> On 09.05.2023 12:41, Roger Pau Monne wrote:
>>> When translating an address that falls inside of a superpage in the
>>> IOMMU page tables the fetching of the PTE physical address field
>>> wasn't using dma_pte_addr(), which caused the returned data to be
>>> corrupt as it would contain bits not related to the address field.
>>
>> I'm afraid I don't understand:
>>
>>> --- a/xen/drivers/passthrough/vtd/iommu.c
>>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>>> @@ -359,16 +359,18 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
>>>  
>>>              if ( !alloc )
>>>              {
>>> -                pte_maddr = 0;
>>>                  if ( !dma_pte_present(*pte) )
>>> +                {
>>> +                    pte_maddr = 0;
>>>                      break;
>>> +                }
>>>  
>>>                  /*
>>>                   * When the leaf entry was requested, pass back the full PTE,
>>>                   * with the address adjusted to account for the residual of
>>>                   * the walk.
>>>                   */
>>> -                pte_maddr = pte->val +
>>> +                pte_maddr +=
>>>                      (addr & ((1UL << level_to_offset_bits(level)) - 1) &
>>>                       PAGE_MASK);
>>
>> With this change you're now violating what the comment says (plus what
>> the comment ahead of the function says). And it says what it says for
>> a reason - see intel_iommu_lookup_page(), which I think your change is
>> breaking.
> 
> Hm, but the code in intel_iommu_lookup_page() is now wrong as it takes
> the bits in DMA_PTE_CONTIG_MASK as part of the physical address when
> doing the conversion to mfn?  maddr_to_mfn() doesn't perform a any
> masking to remove the bits above PADDR_BITS.

Oh, right. But that's a missing dma_pte_addr() in intel_iommu_lookup_page()
then. (It would likely be better anyway to switch "uint64_t val" to
"struct dma_pte pte" there, to make it more visible that it's a PTE
we're dealing with.) I indeed overlooked this aspect when doing the earlier
change.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 10 10:03:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 10:03:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532726.829000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwgff-0003zb-IY; Wed, 10 May 2023 10:03:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532726.829000; Wed, 10 May 2023 10:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwgff-0003zS-Fi; Wed, 10 May 2023 10:03:43 +0000
Received: by outflank-mailman (input) for mailman id 532726;
 Wed, 10 May 2023 10:03:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwgfd-0003zK-R8
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 10:03:41 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0614.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::614])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f05de1a9-ef19-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 12:03:41 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6961.eurprd04.prod.outlook.com (2603:10a6:208:180::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18; Wed, 10 May
 2023 10:03:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 10:03:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f05de1a9-ef19-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3c633370-bedc-96d2-1a33-271c6588da30@suse.com>
Date: Wed, 10 May 2023 12:03:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/iommu: fix wrong iterator type in
 arch_iommu_hwdom_init()
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230509110325.61750-1-roger.pau@citrix.com>
 <56c5d0f1-bc47-8824-9515-239647015d47@suse.com>
 <ZFtWc3ks5f0kMAQT@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFtWc3ks5f0kMAQT@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0031.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6961:EE_
X-MS-Office365-Filtering-Correlation-Id: 55172830-4ed3-4182-c129-08db513dd28b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 55172830-4ed3-4182-c129-08db513dd28b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 10:03:37.2163
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 31AECp7XDGaiAqOWcQLVs24Y+IpIu3itEM0waYKlAFM5yyN+kPbv1YhCHtOi/j44dMmM23reKHA1iGVFQrjoAQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6961

On 10.05.2023 10:32, Roger Pau Monné wrote:
> On Tue, May 09, 2023 at 06:25:05PM +0200, Jan Beulich wrote:
>> On 09.05.2023 13:03, Roger Pau Monne wrote:
>>> The 'i' iterator index stores a pdx, not a pfn, and hence the initial
>>> assignment of start (which stores a pfn) needs a conversion from pfn
>>> to pdx.
>>
>> Strictly speaking: Yes. But pdx compression skips the bottom MAX_ORDER
>> bits, so ...
> 
> Oh, that wasn't obvious to me.
> 
>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -406,7 +406,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>>>       */
>>>      start = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;
>>
>> ... with this, ...
>>
>>> -    for ( i = start, count = 0; i < top; )
>>> +    for ( i = pfn_to_pdx(start), count = 0; i < top; )
>>
>> ... this is an expensive identity transformation. Could I talk you into
>> adding
>>
>>     ASSERT(start == pfn_to_pdx(start));
>>
>> instead (or the corresponding BUG_ON() if you'd prefer that, albeit then
>> the expensive identity transformation will still be there even in release
>> builds; not that it matters all that much right here, but still)?
> 
> So far the value of start is not influenced by hardware, so having an
> assert should be fine.
> 
> Given that the assignment is just done once at the start of the loop
> I don't see it being that relevant to the performance of this piece of
> code TBH, all the more so since we do a pdx_to_pfn() for each loop
> iteration, so my preference would be to use the proposed change.

Well, okay, but then please also have the description "softened" a
little (it isn't really "needs", but e.g. "better would undergo"),
alongside ...

>> In any event, with no real bug fixed (unless I'm overlooking something),
>> I would suggest to drop the Fixes: tag.
> 
> Right, I could drop that.

... this.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 10 10:04:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 10:04:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532730.829010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwggf-0004Ww-T5; Wed, 10 May 2023 10:04:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532730.829010; Wed, 10 May 2023 10:04:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwggf-0004Wp-QA; Wed, 10 May 2023 10:04:45 +0000
Received: by outflank-mailman (input) for mailman id 532730;
 Wed, 10 May 2023 10:04:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwggd-0004Wj-Pp
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 10:04:43 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on060f.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::60f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 157ac523-ef1a-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 12:04:43 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6961.eurprd04.prod.outlook.com (2603:10a6:208:180::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18; Wed, 10 May
 2023 10:04:41 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 10:04:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 157ac523-ef1a-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0342c320-d046-a591-b41b-6cec3c6ecbdc@suse.com>
Date: Wed, 10 May 2023 12:04:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/iommu: fix wrong iterator type in
 arch_iommu_hwdom_init()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230509110325.61750-1-roger.pau@citrix.com>
 <56c5d0f1-bc47-8824-9515-239647015d47@suse.com>
 <ZFtWc3ks5f0kMAQT@Air-de-Roger>
 <3c633370-bedc-96d2-1a33-271c6588da30@suse.com>
In-Reply-To: <3c633370-bedc-96d2-1a33-271c6588da30@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0010.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6961:EE_
X-MS-Office365-Filtering-Correlation-Id: c6ebf0d9-f0ea-4a95-fc30-08db513df8ca
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c6ebf0d9-f0ea-4a95-fc30-08db513df8ca
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 10:04:41.3572
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mwW4MNlequGIe+8PADRkxAdbB3s9uu90WxeRyp5jM4lHzkJNyVqNeUR5rzZsVuCMlxWt5AkRRfskvHxL38HlOg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6961

On 10.05.2023 12:03, Jan Beulich wrote:
> On 10.05.2023 10:32, Roger Pau Monné wrote:
>> On Tue, May 09, 2023 at 06:25:05PM +0200, Jan Beulich wrote:
>>> On 09.05.2023 13:03, Roger Pau Monne wrote:
>>>> The 'i' iterator index stores a pdx, not a pfn, and hence the initial
>>>> assignment of start (which stores a pfn) needs a conversion from pfn
>>>> to pdx.
>>>
>>> Strictly speaking: Yes. But pdx compression skips the bottom MAX_ORDER
>>> bits, so ...
>>
>> Oh, that wasn't obvious to me.
>>
>>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>>> @@ -406,7 +406,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>>>>       */
>>>>      start = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;
>>>
>>> ... with this, ...
>>>
>>>> -    for ( i = start, count = 0; i < top; )
>>>> +    for ( i = pfn_to_pdx(start), count = 0; i < top; )
>>>
>>> ... this is an expensive identity transformation. Could I talk you into
>>> adding
>>>
>>>     ASSERT(start == pfn_to_pdx(start));
>>>
>>> instead (or the corresponding BUG_ON() if you'd prefer that, albeit then
>>> the expensive identity transformation will still be there even in release
>>> builds; not that it matters all that much right here, but still)?
>>
>> So far the value of start is not influenced by hardware, so having an
>> assert should be fine.
>>
>> Given that the assignment is just done once at the start of the loop
>> I don't see it being that relevant to the performance of this piece of
>> code TBH, all the more so since we do a pdx_to_pfn() for each loop
>> iteration, so my preference would be to use the proposed change.
> 
> Well, okay, but then please also have the description "softened" a
> little (it isn't really "needs", but e.g. "better would undergo"),

And in the title then perhaps s/fix wrong/adjust/.

Jan

> alongside ...
> 
>>> In any event, with no real bug fixed (unless I'm overlooking something),
>>> I would suggest to drop the Fixes: tag.
>>
>> Right, I could drop that.
> 
> ... this.
> 
> Jan
> 



From xen-devel-bounces@lists.xenproject.org Wed May 10 10:19:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 10:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532736.829020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwguW-0006DG-8e; Wed, 10 May 2023 10:19:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532736.829020; Wed, 10 May 2023 10:19:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwguW-0006D9-59; Wed, 10 May 2023 10:19:04 +0000
Received: by outflank-mailman (input) for mailman id 532736;
 Wed, 10 May 2023 10:19:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m/7K=A7=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pwguV-0006D1-6K
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 10:19:03 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20622.outbound.protection.outlook.com
 [2a01:111:f400:7eab::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1429ba99-ef1c-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 12:19:01 +0200 (CEST)
Received: from MN2PR17CA0011.namprd17.prod.outlook.com (2603:10b6:208:15e::24)
 by MN0PR12MB6296.namprd12.prod.outlook.com (2603:10b6:208:3d3::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Wed, 10 May
 2023 10:18:56 +0000
Received: from BL02EPF0000EE3C.namprd05.prod.outlook.com
 (2603:10b6:208:15e:cafe::a5) by MN2PR17CA0011.outlook.office365.com
 (2603:10b6:208:15e::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.19 via Frontend
 Transport; Wed, 10 May 2023 10:18:56 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BL02EPF0000EE3C.mail.protection.outlook.com (10.167.241.132) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.20 via Frontend Transport; Wed, 10 May 2023 10:18:56 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 10 May
 2023 05:18:55 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Wed, 10 May 2023 05:18:54 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1429ba99-ef1c-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <27af742a-c8c2-006e-fcd8-bbad116e1908@amd.com>
Date: Wed, 10 May 2023 12:18:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN][PATCH v6 16/19] xen/arm: Implement device tree node
 addition functionalities
To: Vikram Garhwal <vikram.garhwal@amd.com>, <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-17-vikram.garhwal@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230502233650.20121-17-vikram.garhwal@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit



On 03/05/2023 01:36, Vikram Garhwal wrote:
> Update sysctl XEN_SYSCTL_dt_overlay to enable support for dtbo nodes addition
> using device tree overlay.
> 
> xl dt-overlay add file.dtbo:
>     Each time overlay nodes are added using .dtbo, a new fdt(memcpy of
>     device_tree_flattened) is created and updated with overlay nodes. This
>     updated fdt is further unflattened to a dt_host_new. Next, it checks if any
>     of the overlay nodes already exists in the dt_host. If overlay nodes doesn't
>     exist then find the overlay nodes in dt_host_new, find the overlay node's
>     parent in dt_host and add the nodes as child under their parent in the
>     dt_host. The node is attached as the last node under target parent.
> 
>     Finally, add IRQs, add device to IOMMUs, set permissions and map MMIO for the
>     overlay node.
> 
> When a node is added using overlay, a new entry is allocated in the
> overlay_track to keep the track of memory allocation due to addition of overlay
> node. This is helpful for freeing the memory allocated when a device tree node
> is removed.
> 
> The main purpose of this to address first part of dynamic programming i.e.
> making xen aware of new device tree node which means updating the dt_host with
> overlay node information. Here we are adding/removing node from dt_host, and
> checking/setting IOMMU and IRQ permission but never mapping them to any domain.
> Right now, mapping/Un-mapping will happen only when a new domU is
> created/destroyed using "xl create".
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> ---
>  xen/common/dt-overlay.c | 510 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 510 insertions(+)
> 
> diff --git a/xen/common/dt-overlay.c b/xen/common/dt-overlay.c
> index b89cceab84..09ea46111b 100644
> --- a/xen/common/dt-overlay.c
> +++ b/xen/common/dt-overlay.c
> @@ -33,6 +33,25 @@ static struct dt_device_node *
>      return child_node;
>  }
>  
> +/*
> + * Returns next node to the input node. If node has children then return
> + * last descendant's next node.
> +*/
> +static struct dt_device_node *
> +dt_find_next_node(struct dt_device_node *dt, const struct dt_device_node *node)
> +{
> +    struct dt_device_node *np;
> +
> +    dt_for_each_device_node(dt, np)
> +        if ( np == node )
> +            break;
> +
> +    if ( np->child )
> +        np = find_last_descendants_node(np);
> +
> +    return np->allnext;
> +}
> +
>  static int dt_overlay_remove_node(struct dt_device_node *device_node)
>  {
>      struct dt_device_node *np;
> @@ -106,6 +125,76 @@ static int dt_overlay_remove_node(struct dt_device_node *device_node)
>      return 0;
>  }
>  
> +static int dt_overlay_add_node(struct dt_device_node *device_node,
> +                               const char *parent_node_path)
> +{
> +    struct dt_device_node *parent_node;
> +    struct dt_device_node *next_node;
> +
> +    parent_node = dt_find_node_by_path(parent_node_path);
> +
> +    if ( parent_node == NULL )
> +    {
> +        dt_dprintk("Parent node %s not found. Overlay node will not be added\n",
> +                   parent_node_path);
> +        return -EINVAL;
> +    }
> +
> +    /* If parent has no child. */
> +    if ( parent_node->child == NULL )
> +    {
> +        next_node = parent_node->allnext;
> +        device_node->parent = parent_node;
> +        parent_node->allnext = device_node;
> +        parent_node->child = device_node;
> +    }
> +    else
> +    {
> +        struct dt_device_node *np;
> +        /* If parent has at least one child node.
Incorrect comment style; it should be:
/*
 *
 */

> +         * Iterate to the last child node of parent.
> +         */
> +        for ( np = parent_node->child; np->sibling != NULL; np = np->sibling );
> +
> +        /* Iterate over all child nodes of np node. */
> +        if ( np->child )
> +        {
> +            struct dt_device_node *np_last_descendant;
> +
> +            np_last_descendant = find_last_descendants_node(np);
> +
> +            next_node = np_last_descendant->allnext;
> +            np_last_descendant->allnext = device_node;
> +        }
> +        else
> +        {
> +            next_node = np->allnext;
> +            np->allnext = device_node;
> +        }
> +
> +        device_node->parent = parent_node;
> +        np->sibling = device_node;
> +        np->sibling->sibling = NULL;
> +    }
> +
> +    /* Iterate over all child nodes of device_node to add children too. */
> +    if ( device_node->child )
> +    {
> +        struct dt_device_node *device_node_last_descendant;
> +
> +        device_node_last_descendant = find_last_descendants_node(device_node);
Missing empty line here.

> +        /* Plug next_node at the end of last children of device_node. */
> +        device_node_last_descendant->allnext = next_node;
> +    }
> +    else
> +    {
> +        /* Now plug next_node at the end of device_node. */
> +        device_node->allnext = next_node;
> +    }
> +
> +    return 0;
> +}
> +
>  /* Basic sanity check for the dtbo tool stack provided to Xen. */
>  static int check_overlay_fdt(const void *overlay_fdt, uint32_t overlay_fdt_size)
>  {
> @@ -145,6 +234,82 @@ static unsigned int overlay_node_count(const void *overlay_fdt)
>      return num_overlay_nodes;
>  }
>  
> +/*
> + * overlay_get_nodes_info gets full name with path for all the nodes which
> + * are in one level of __overlay__ tag. This is useful when checking node for
> + * duplication i.e. dtbo tries to add nodes which already exists in device tree.
> + */
> +static int overlay_get_nodes_info(const void *fdto, char ***nodes_full_path,
> +                                  unsigned int num_overlay_nodes)
> +{
> +    int fragment;
> +
> +    *nodes_full_path = xzalloc_bytes(num_overlay_nodes * sizeof(char *));
In the error cases, this function does not free the memory it allocated and relies on the caller
to do it. Wouldn't it be better to do the allocation in handle_add_overlay_nodes()?

> +
> +    if ( *nodes_full_path == NULL )
> +        return -ENOMEM;
> +
> +    fdt_for_each_subnode(fragment, fdto, 0)
> +    {
> +        int target;
> +        int overlay;
> +        int subnode;
> +        const char *target_path;
> +
> +        target = fdt_overlay_target_offset(device_tree_flattened, fdto,
> +                                           fragment, &target_path);
> +        if ( target < 0 )
> +            return target;
> +
> +        if ( target_path == NULL )
> +            return -EINVAL;
> +
> +        overlay = fdt_subnode_offset(fdto, fragment, "__overlay__");
> +
> +        /*
> +         * overlay value can be < 0. But fdt_for_each_subnode() loop checks for
> +         * overlay >= 0. So, no need for a overlay>=0 check here.
> +         */
> +        fdt_for_each_subnode(subnode, fdto, overlay)
> +        {
> +            const char *node_name = NULL;
> +            int node_name_len;
> +            unsigned int target_path_len = strlen(target_path);
> +            unsigned int node_full_name_len;
> +            unsigned int node_num = 0;
How is this supposed to work, given that you set node_num to 0 at the start of each iteration
and only increment it at the end? It will always be 0 whenever it is used as an index.

> +
> +            node_name = fdt_get_name(fdto, subnode, &node_name_len);
> +
> +            if ( node_name == NULL )
> +                return node_name_len;
> +
> +            /*
> +             * Magic number 2 is for adding '/' and '\0'. This is done to keep
> +             * the node_full_path in the correct full node name format.
> +             */
> +            node_full_name_len = target_path_len + node_name_len + 2;
> +
> +            (*nodes_full_path)[node_num] = xmalloc_bytes(node_full_name_len);
> +
> +            if ( (*nodes_full_path)[node_num] == NULL )
> +                return -ENOMEM;
> +
> +            memcpy((*nodes_full_path)[node_num], target_path, target_path_len);
> +
> +            (*nodes_full_path)[node_num][target_path_len] = '/';
> +
> +            memcpy((*nodes_full_path)[node_num] + target_path_len + 1,
> +                    node_name, node_name_len);
> +
> +            (*nodes_full_path)[node_num][node_full_name_len - 1] = '\0';
> +
> +            node_num++;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
>  static int handle_remove_irq_iommu(struct dt_device_node *device_node)
>  {
>      int rc = 0;
> @@ -371,6 +536,344 @@ out:
>      return rc;
>  }
>  
> +/*
> + * Handles IRQ and IOMMU mapping for the overlay_node and all descendants of the
> + * overlay_node.
> + */
> +static int handle_add_irq_iommu(struct domain *d,
If you only allow adding nodes to hwdom, what's the point of this parameter?
It is also not consistent with the function that does the removal.

> +                                struct dt_device_node *overlay_node)
> +{
> +    int rc;
> +    unsigned int naddr, i, len;
> +    struct dt_device_node *np;
> +
> +    /* First let's handle the interrupts. */
> +    rc = handle_device_interrupts(d, overlay_node, false);
Similarly to your comment about skip_mapping, it would be welcome to explain why false is passed here.

> +    if ( rc < 0 )
> +    {
> +        printk(XENLOG_ERR "Failed to retrieve interrupts configuration\n");
> +        return rc;
> +    }
> +
> +    /* Check if iommu property exists. */
> +    if ( dt_get_property(overlay_node, "iommus", &len) )
> +    {
> +        /* Add device to IOMMUs. */
> +        rc = iommu_add_dt_device(overlay_node);
> +        if ( rc < 0 )
> +        {
> +            printk(XENLOG_ERR "Failed to add %s to the IOMMU\n",
> +                   dt_node_full_name(overlay_node));
> +            return rc;
> +        }
> +    }
> +
> +    /* Set permissions. */
> +    naddr = dt_number_of_address(overlay_node);
> +
> +    dt_dprintk("%s naddr = %u\n", dt_node_full_name(overlay_node), naddr);
> +
> +    /* Give permission to map MMIOs */
> +    for ( i = 0; i < naddr; i++ )
> +    {
> +        uint64_t addr, size;
Make sure to check Ayan's 32-bit paddr series status as it impacts parts of your code.

> +
> +        /*
> +         * For now, we skip_mapping which means it will only permit iomem access
> +         * to hardware_domain using iomem_permit_access() but will never map as
> +         * map_range_p2mt() will not be called.
> +         */
> +        struct map_range_data mr_data = { .d = d,
> +                                          .p2mt = p2m_mmio_direct_c,
> +                                          .skip_mapping = true
> +                                        };
> +
> +        rc = dt_device_get_address(overlay_node, i, &addr, &size);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
> +                   i, dt_node_full_name(overlay_node));
> +            return rc;
> +        }
> +
> +        rc = map_range_to_domain(overlay_node, addr, size, &mr_data);
> +        if ( rc )
> +            return rc;
> +    }
> +
> +    /* Map IRQ and IOMMU for overlay_node's children. */
> +    for ( np = overlay_node->child; np != NULL; np = np->sibling )
> +    {
> +        rc = handle_add_irq_iommu(d, np);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
> +            return rc;
> +        }
> +    }
> +
> +    return rc;
> +}
> +
> +/*
> + * Adds device tree nodes under target node.
> + * We use tr->dt_host_new to unflatten the updated device_tree_flattened. This
> + * is done to avoid the removal of device_tree generation, iomem regions mapping
> + * to hardware domain done by handle_node().
> + */
> +static long handle_add_overlay_nodes(void *overlay_fdt,
> +                                     uint32_t overlay_fdt_size)
> +{
> +    int rc, j, i;
> +    struct dt_device_node *overlay_node;
> +    struct overlay_track *tr = NULL;
> +    char **nodes_full_path = NULL;
> +    unsigned int new_fdt_size;
> +
> +    tr = xzalloc(struct overlay_track);
> +    if ( tr == NULL )
> +        return -ENOMEM;
> +
> +    new_fdt_size = fdt_totalsize(device_tree_flattened) +
> +                                 fdt_totalsize(overlay_fdt);
> +
> +    tr->fdt = xzalloc_bytes(new_fdt_size);
> +    if ( tr->fdt == NULL )
> +    {
> +        xfree(tr);
> +        return -ENOMEM;
> +    }
> +
> +    tr->num_nodes = overlay_node_count(overlay_fdt);
> +    if ( tr->num_nodes == 0 )
> +    {
> +        xfree(tr->fdt);
> +        xfree(tr);
> +        return -ENOMEM;
> +    }
> +
> +    tr->nodes_address = xzalloc_bytes(tr->num_nodes * sizeof(unsigned long));
> +    if ( tr->nodes_address == NULL )
> +    {
> +        xfree(tr->fdt);
> +        xfree(tr);
> +        return -ENOMEM;
> +    }
> +
> +    rc = check_overlay_fdt(overlay_fdt, overlay_fdt_size);
> +    if ( rc )
> +    {
> +        xfree(tr->nodes_address);
> +        xfree(tr->fdt);
> +        xfree(tr);
> +        return rc;
> +    }
> +
> +    /*
> +     * Keep a copy of overlay_fdt as fdt_overlay_apply will change the input
> +     * overlay's content(magic) when applying overlay.
> +     */
> +    tr->overlay_fdt = xzalloc_bytes(overlay_fdt_size);
> +    if ( tr->overlay_fdt == NULL )
> +    {
> +        xfree(tr->nodes_address);
> +        xfree(tr->fdt);
> +        xfree(tr);
> +        return -ENOMEM;
> +    }
> +
> +    memcpy(tr->overlay_fdt, overlay_fdt, overlay_fdt_size);
> +
> +    spin_lock(&overlay_lock);
> +
> +    memcpy(tr->fdt, device_tree_flattened,
> +           fdt_totalsize(device_tree_flattened));
> +
> +    /* Open tr->fdt with more space to accommodate the overlay_fdt. */
> +    rc = fdt_open_into(tr->fdt, tr->fdt, new_fdt_size);
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Increasing tr->fdt size failed with error %d\n",
The user does not know what tr->fdt is, so it would be better to make this message more user friendly.

> +               rc);
> +        goto err;
> +    }
> +
> +    /*
> +     * overlay_get_nodes_info is called to get the node information from dtbo.
> +     * This is done before fdt_overlay_apply() because the overlay apply will
> +     * erase the magic of overlay_fdt.
> +     */
> +    rc = overlay_get_nodes_info(overlay_fdt, &nodes_full_path,
> +                                tr->num_nodes);
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Getting nodes information failed with error %d\n",
> +               rc);
> +        goto err;
> +    }
> +
> +    rc = fdt_overlay_apply(tr->fdt, overlay_fdt);
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "Adding overlay node failed with error %d\n", rc);
> +        goto err;
> +    }
> +
> +    /*
> +     * Check if any of the node already exists in dt_host. If node already exits
> +     * we can return here as this overlay_fdt is not suitable for overlay ops.
> +     */
> +    for ( j = 0; j < tr->num_nodes; j++ )
> +    {
> +        overlay_node = dt_find_node_by_path(nodes_full_path[j]);
> +        if ( overlay_node != NULL )
> +        {
> +            printk(XENLOG_ERR "node %s exists in device tree\n",
> +                   nodes_full_path[j]);
> +            rc = -EINVAL;
> +            goto err;
> +        }
> +    }
> +
> +    /* Unflatten the tr->fdt into a new dt_host. */
> +    rc = unflatten_device_tree(tr->fdt, &tr->dt_host_new);
> +    if ( rc )
> +    {
> +        printk(XENLOG_ERR "unflatten_device_tree failed with error %d\n", rc);
> +        goto err;
> +    }
> +
> +    for ( j = 0; j < tr->num_nodes; j++ )
> +    {
> +        struct dt_device_node *prev_node, *next_node;
> +
> +        dt_dprintk("Adding node: %s\n", nodes_full_path[j]);
> +
> +        /* Find the newly added node in tr->dt_host_new by it's full path. */
> +        overlay_node = device_tree_find_node_by_path(tr->dt_host_new,
> +                                                     nodes_full_path[j]);
> +        if ( overlay_node == NULL )
> +        {
> +            /* Sanity check. But code will never come here. */
> +            ASSERT_UNREACHABLE();
> +            goto remove_node;
> +        }
> +
> +        /*
> +         * Find previous and next node to overlay_node in dt_host_new. We will
> +         * need these nodes to fix the dt_host_new mapping. When overlay_node is
> +         * take out of dt_host_new tree and added to dt_host, link between
> +         * previous node and next_node is broken. We will need to refresh
> +         * dt_host_new with correct linking for any other overlay nodes
> +         * extraction in future.
> +         */
> +        dt_for_each_device_node(tr->dt_host_new, prev_node)
> +            if ( prev_node->allnext == overlay_node )
> +                break;
> +
> +        next_node = dt_find_next_node(tr->dt_host_new, overlay_node);
> +
> +        read_lock(&dt_host->lock);
> +
> +        /* Add the node to dt_host. */
> +        rc = dt_overlay_add_node(overlay_node, overlay_node->parent->full_name);
> +        if ( rc )
> +        {
> +            read_unlock(&dt_host->lock);
> +
> +            /* Node not added in dt_host. */
> +            goto remove_node;
> +        }
> +
> +        read_unlock(&dt_host->lock);
> +
> +        prev_node->allnext = next_node;
> +
> +        overlay_node = dt_find_node_by_path(overlay_node->full_name);
> +        if ( overlay_node == NULL )
> +        {
> +            /* Sanity check. But code will never come here. */
> +            ASSERT_UNREACHABLE();
> +            goto remove_node;
> +        }
> +
> +        rc = handle_add_irq_iommu(hardware_domain, overlay_node);
> +        if ( rc )
> +        {
> +            printk(XENLOG_ERR "Adding IRQ and IOMMU failed\n");
> +            return rc;
This is definitely wrong as you will return with spinlock held and memory not freed.

> +        }
> +
> +        /* Keep overlay_node address in tracker. */
> +        tr->nodes_address[j] = (unsigned long)overlay_node;
> +    }
> +
> +    INIT_LIST_HEAD(&tr->entry);
> +    list_add_tail(&tr->entry, &overlay_tracker);
> +
> +    spin_unlock(&overlay_lock);
> +
> +    if ( nodes_full_path != NULL )
> +    {
> +        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
> +              i++ )
> +        {
> +            xfree(nodes_full_path[i]);
> +        }
> +        xfree(nodes_full_path);
> +    }
This block of code is identical to the one at the end so some cleanup is necessary.

> +
> +    return rc;
> +
> +/*
> + * Failure case. We need to remove the nodes, free tracker(if tr exists) and
> + * tr->dt_host_new.
> + */
> +remove_node:
> +    tr->num_nodes = j;
> +    rc = remove_nodes(tr);
> +
> +    if ( rc )
> +    {
> +        /*
> +         * User needs to provide right overlay. Incorrect node information
> +         * example parent node doesn't exist in dt_host etc can cause memory
> +         * leaks as removing_nodes() will fail and this means nodes memory is
> +         * not freed from tracker. Which may cause memory leaks. Ideally, these
> +         * device tree related mistakes will be caught by fdt_overlay_apply()
> +         * but given that we don't manage that code keeping this warning message
> +         * is better here.
> +         */
> +        printk(XENLOG_ERR "Removing node failed.\n");
> +        spin_unlock(&overlay_lock);
> +        return rc;
> +    }
> +
> +err:
> +    spin_unlock(&overlay_lock);
> +
> +    if ( tr->dt_host_new )
> +        xfree(tr->dt_host_new);
> +
> +    xfree(tr->overlay_fdt);
> +    xfree(tr->nodes_address);
> +    xfree(tr->fdt);
> +
> +    if ( nodes_full_path != NULL )
> +    {
> +        for ( i = 0; i < tr->num_nodes && nodes_full_path[i] != NULL;
> +              i++ )
> +        {
> +            xfree(nodes_full_path[i]);
> +        }
> +        xfree(nodes_full_path);
> +    }
> +
> +    xfree(tr);
> +
> +    return rc;
> +}
> +
>  long dt_sysctl(struct xen_sysctl_dt_overlay *op)
>  {
>      long ret;
> @@ -395,6 +898,13 @@ long dt_sysctl(struct xen_sysctl_dt_overlay *op)
>  
>      switch ( op->overlay_op )
>      {
> +    case XEN_SYSCTL_DT_OVERLAY_ADD:
> +        ret = handle_add_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
> +        if ( ret )
> +            xfree(overlay_fdt);
See my comment about xfree in previous patch

> +
> +        break;
> +
>      case XEN_SYSCTL_DT_OVERLAY_REMOVE:
>          ret = handle_remove_overlay_nodes(overlay_fdt, op->overlay_fdt_size);
>          xfree(overlay_fdt);

~Michal


From xen-devel-bounces@lists.xenproject.org Wed May 10 10:22:56 2023
Date: Wed, 10 May 2023 12:22:02 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] iommu/vtd: fix address translation for superpages
Message-ID: <ZFtwSjuZaz05DIY0@Air-de-Roger>
References: <20230509104146.61178-1-roger.pau@citrix.com>
 <bc750b8d-77be-f967-907a-4f19c783ad99@suse.com>
 <ZFtVYEVsELGfZxik@Air-de-Roger>
 <5bde1d79-03ef-6f8b-4b28-8d7e6867ba18@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5bde1d79-03ef-6f8b-4b28-8d7e6867ba18@suse.com>
MIME-Version: 1.0

On Wed, May 10, 2023 at 12:00:51PM +0200, Jan Beulich wrote:
> On 10.05.2023 10:27, Roger Pau Monné wrote:
> > On Tue, May 09, 2023 at 06:06:45PM +0200, Jan Beulich wrote:
> >> On 09.05.2023 12:41, Roger Pau Monne wrote:
> >>> When translating an address that falls inside of a superpage in the
> >>> IOMMU page tables the fetching of the PTE physical address field
> >>> wasn't using dma_pte_addr(), which caused the returned data to be
> >>> corrupt as it would contain bits not related to the address field.
> >>
> >> I'm afraid I don't understand:
> >>
> >>> --- a/xen/drivers/passthrough/vtd/iommu.c
> >>> +++ b/xen/drivers/passthrough/vtd/iommu.c
> >>> @@ -359,16 +359,18 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
> >>>  
> >>>              if ( !alloc )
> >>>              {
> >>> -                pte_maddr = 0;
> >>>                  if ( !dma_pte_present(*pte) )
> >>> +                {
> >>> +                    pte_maddr = 0;
> >>>                      break;
> >>> +                }
> >>>  
> >>>                  /*
> >>>                   * When the leaf entry was requested, pass back the full PTE,
> >>>                   * with the address adjusted to account for the residual of
> >>>                   * the walk.
> >>>                   */
> >>> -                pte_maddr = pte->val +
> >>> +                pte_maddr +=
> >>>                      (addr & ((1UL << level_to_offset_bits(level)) - 1) &
> >>>                       PAGE_MASK);
> >>
> >> With this change you're now violating what the comment says (plus what
> >> the comment ahead of the function says). And it says what it says for
> >> a reason - see intel_iommu_lookup_page(), which I think your change is
> >> breaking.
> > 
> > Hm, but the code in intel_iommu_lookup_page() is now wrong as it takes
> > the bits in DMA_PTE_CONTIG_MASK as part of the physical address when
> > doing the conversion to mfn?  maddr_to_mfn() doesn't perform any
> > masking to remove the bits above PADDR_BITS.
> 
> Oh, right. But that's a missing dma_pte_addr() in intel_iommu_lookup_page()
> then. (It would likely be better anyway to switch "uint64_t val" to
> "struct dma_pte pte" there, to make more visible that it's a PTE we're
> dealing with.) I indeed overlooked this aspect when doing the earlier
> change.

I guess I'm still confused, as the other return value for target == 0
(when the address is not part of a superpage) does return
dma_pte_addr(pte).  I think that needs further fixing then.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 10 10:52:51 2023
Message-ID: <645b776e.5d0a0220.3ff50.bf0e@mx.google.com>
X-Google-Original-Message-ID: <ZFt3bI2Nl8CgLsuU@EMEAENGAAD19049.>
Date: Wed, 10 May 2023 11:52:28 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 3/3] x86: Use CpuidUserDis if an AMD HVM guest toggles
 CPUID faulting
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
 <20230505175705.18098-4-alejandro.vallejo@cloud.com>
 <d8c9728b-b615-7229-2e76-106d81802a20@suse.com>
 <d0794e7d-604e-044f-000e-3a0bde126425@citrix.com>
 <7d41940f-e671-954c-1afc-510e4fa674fa@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <7d41940f-e671-954c-1afc-510e4fa674fa@suse.com>

On Wed, May 10, 2023 at 10:15:31AM +0200, Jan Beulich wrote:
> On 09.05.2023 12:05, Andrew Cooper wrote:
> > On 08/05/2023 2:18 pm, Jan Beulich wrote:
> >> On 05.05.2023 19:57, Alejandro Vallejo wrote:
> >>> This is in order to aid guests of AMD hardware that we have exposed
> >>> CPUID faulting to. If they try to modify the Intel MSR that enables
> >>> the feature, trigger levelling so AMD's version of it (CpuidUserDis)
> >>> is used instead.
> >>>
> >>> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> >>> ---
> >>>  xen/arch/x86/msr.c | 9 ++++++++-
> >>>  1 file changed, 8 insertions(+), 1 deletion(-)
> >> Don't you also need to update cpu-policy.c:calculate_host_policy()
> >> for the guest to actually know it can use the functionality? Which
> >> in turn would appear to require some form of adjustment to
> >> lib/x86/policy.c:x86_cpu_policies_are_compatible().
> > 
> > I asked Alejandro to do it like this.
> > 
> > Advertising this to guests requires plumbing another MSR into the
> > infrastructure which isn't quite set up properly yet, and is in flux
> > from my work.
> 
> Maybe there was some misunderstanding here, as I realize only now. I
> wasn't asking to expose the AMD feature, but instead I was after
> 
>     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
>     /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
>     p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
> 
> wanting to be extended by "|| boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)".
> That, afaict, has no connection to plumbing yet another MSR.
> 
> Jan

Let me backtrack a little. There are two different (but related) matters under
discussion:

 1. Trapping PV guest attempts to run the cpuid instruction
 2. Providing a virtualized CPUID faulting interface so a guest kernel
    can intercept the CPUID instructions of code running under it.

This series was meant to fix (1) on AMD hardware. I did go a bit
beyond in v1, trying to provide support for a unified faulting mechanism
on AMD, but providing a virtualized CPUID faulting interface needs to be
done a bit more carefully than that. Doing it partially now just adds tech
debt to be paid when CpuidUserDis is exposed to the domains.

Changing the policy to expose the Intel interface when AMD is the backend
falls on (2), which is probably better off done separately in a unified way.

v2 removes the changes to x86/msr.c so only (1) is addressed.

Does this make sense?

Alejandro


From xen-devel-bounces@lists.xenproject.org Wed May 10 11:28:21 2023
Message-ID: <645b7fc4.1c0a0220.714be.162d@mx.google.com>
X-Google-Original-Message-ID: <ZFt/wdGirBAVh0FL@EMEAENGAAD19049.>
Date: Wed, 10 May 2023 12:28:01 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 0/3] Add CpuidUserDis support
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
 <05b27f59-f906-79af-6c3c-6b6fae7cb1a9@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <05b27f59-f906-79af-6c3c-6b6fae7cb1a9@suse.com>

On Mon, May 08, 2023 at 11:06:31AM +0200, Jan Beulich wrote:
> On 05.05.2023 19:57, Alejandro Vallejo wrote:
> > Nowadays AMD supports trapping the CPUID instruction from ring3 to ring0,
> 
> Since it's relevant for PV32: Their doc talks about CPL > 0, i.e. not just
> ring 3. Therefore I wonder whether ...

Very true. It's CPL > 0, not just ring 3. Noted and changed in v2.

> 
> > (CpuidUserDis)
> 
> ... we shouldn't deviate from the PM and avoid the misleading use of "user"
> in our internal naming.
> 
> Jan
> 

IMO it's going to be confusing enough as is. We'll eventually have
virtualized versions of both Intel and AMD that may or may not be
cross-hooked with each other (e.g. a virtualized Intel interface on
an AMD host) due to backward compatibility. That means we'll probably
want:

 1. A name for the Intel mechanism, currently "CPUID faulting"
 2. A name for the AMD mechanism, currently "CpuidUserDis"
 3. A generic name for the cpuid-can-be-trapped behaviour, currently
    overloaded with the Intel name (but could do with a Xen-specific one).
    It doesn't matter a lot now, but it will once the AMD interface is
    virtualized.

Sure, we could give it an alternative name on AMD, but we still need yet
another one to disambiguate the trapping behaviour from the specific
mechanism that does it.

Using the AMD manual name does have the upside that it's easier to check
the manual without having to remember the AMD-specific feature that
corresponds to a Xen-specific name. That said, if you have a good suggestion
I'm happy to change it, so long as the end result is that (1), (2) and (3)
have unambiguous names.

Alejandro


From xen-devel-bounces@lists.xenproject.org Wed May 10 11:48:25 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180596-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180596: tolerable FAIL - PUSHED
X-Osstest-Versions-This:
    xen=ed6b7c0266e512c1207c07911da14e684f47b909
X-Osstest-Versions-That:
    xen=8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 11:48:08 +0000

flight 180596 xen-unstable real [real]
flight 180602 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180596/
http://logs.test-lab.xenproject.org/osstest/logs/180602/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install   fail pass in 180602-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180589
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180589
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180589
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180589
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180589
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180589
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180589
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180589
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180589
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180589
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180589
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180589
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  ed6b7c0266e512c1207c07911da14e684f47b909
baseline version:
 xen                  8b1ac353b4db7c5bb2f82cb6afee9cc641e756a4

Last test of basis   180589  2023-05-09 13:39:59 Z    0 days
Testing same since   180596  2023-05-10 01:53:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   8b1ac353b4..ed6b7c0266  ed6b7c0266e512c1207c07911da14e684f47b909 -> master


From xen-devel-bounces@lists.xenproject.org Wed May 10 12:42:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 12:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532778.829077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwj98-0006yy-JP; Wed, 10 May 2023 12:42:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532778.829077; Wed, 10 May 2023 12:42:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwj98-0006yr-Ga; Wed, 10 May 2023 12:42:18 +0000
Received: by outflank-mailman (input) for mailman id 532778;
 Wed, 10 May 2023 12:42:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwj97-0006yh-A2; Wed, 10 May 2023 12:42:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwj97-0001aD-7r; Wed, 10 May 2023 12:42:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwj96-0000Ph-OH; Wed, 10 May 2023 12:42:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwj96-0006iv-Ni; Wed, 10 May 2023 12:42:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zUCd6CdzUSlY6LJW78wfkcyjIA2GfdFevU1THHy6Uto=; b=qXglOlDl+N8yNOB0ZeUg0Ftm7l
	hxFXPkXZ++QXaXp0MU377GqYLIHhkfyX59dQcD9cyTtIs4WZwzrZRJyG0r952W4Xpxn8l7wdP7VoK
	h1KLFmzyw09SWQHX5NyXFwm7qiLFAhhbdFLqHoqWeCX0mjA3c8K4+XW0z512kRVngJsE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180601-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180601: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=373a95532a785108f625877d993451d4a6d36713
X-Osstest-Versions-That:
    ovmf=9165a7e95ec6263c04c8babfdbe8bee133959300
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 12:42:16 +0000

flight 180601 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180601/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 373a95532a785108f625877d993451d4a6d36713
baseline version:
 ovmf                 9165a7e95ec6263c04c8babfdbe8bee133959300

Last test of basis   180597  2023-05-10 03:12:17 Z    0 days
Testing same since   180601  2023-05-10 10:12:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9165a7e95e..373a95532a  373a95532a785108f625877d993451d4a6d36713 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 10 12:54:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 12:54:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532793.829088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwjKs-00009S-MM; Wed, 10 May 2023 12:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532793.829088; Wed, 10 May 2023 12:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwjKs-00009L-IF; Wed, 10 May 2023 12:54:26 +0000
Received: by outflank-mailman (input) for mailman id 532793;
 Wed, 10 May 2023 12:54:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ibwy=A7=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pwjKq-00009F-Pe
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 12:54:24 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9135dd4-ef31-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 14:54:22 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 1541D1F6E6;
 Wed, 10 May 2023 12:54:22 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id E345113519;
 Wed, 10 May 2023 12:54:21 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id gA72Nf2TW2RTfgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 10 May 2023 12:54:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9135dd4-ef31-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683723262; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=15ULA9v4ixy4aWl5RkeidEJQKePXQf3PS38Ydm98OBs=;
	b=F8kDotC7BG0gEXaqzehQ5u7I9GL2eP42Rz4T77VpAm/pVb24Puaoa6iilKJyKaawJQfT0/
	FSSD8aV4e7W+V2ujS+pd+fAaVyqDoOesUgLGxU7WZwGp47RLzoltQjzRIcDJs+/RSTgzV/
	M/6amc+v2TibHlQ6RniuUNvbImG8+6A=
Message-ID: <9745474f-db84-c8f3-3662-95728d4d5bd3@suse.com>
Date: Wed, 10 May 2023 14:54:21 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-6-jgross@suse.com>
 <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v5 05/14] tools/xenstore: use accounting buffering for
 node accounting
In-Reply-To: <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------1E0AaHszEBAx6SZLBetrjz0o"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------1E0AaHszEBAx6SZLBetrjz0o
Content-Type: multipart/mixed; boundary="------------rss0Gaot1iYqJzjU3aM9SBQz";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <9745474f-db84-c8f3-3662-95728d4d5bd3@suse.com>
Subject: Re: [PATCH v5 05/14] tools/xenstore: use accounting buffering for
 node accounting
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-6-jgross@suse.com>
 <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
In-Reply-To: <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>

--------------rss0Gaot1iYqJzjU3aM9SBQz
Content-Type: multipart/mixed; boundary="------------GdoHBGFGqum09XITwnpwb00G"

--------------GdoHBGFGqum09XITwnpwb00G
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 09.05.23 20:46, Julien Grall wrote:
> Hi Juergen,
> 
> On 08/05/2023 12:47, Juergen Gross wrote:
>> Add the node accounting to the accounting information buffering in
>> order to avoid having to undo it in case of failure.
>>
>> This requires to call domain_nbentry_dec() before any changes to the
>> data base, as it can return an error now.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V5:
>> - add error handling after domain_nbentry_dec() calls (Julien Grall)
>> ---
>>   tools/xenstore/xenstored_core.c   | 29 +++++++----------------------
>>   tools/xenstore/xenstored_domain.h |  4 ++--
>>   2 files changed, 9 insertions(+), 24 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>> index 8392bdec9b..22da434e2a 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -1454,7 +1454,6 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
>>   static int destroy_node(struct connection *conn, struct node *node)
>>   {
>>       destroy_node_rm(conn, node);
>> -    domain_nbentry_dec(conn, get_node_owner(node));
>>       /*
>>        * It is not possible to easily revert the changes in a transaction.
>> @@ -1645,6 +1644,9 @@ static int delnode_sub(const void *ctx, struct connection *conn,
>>       if (ret > 0)
>>           return WALK_TREE_SUCCESS_STOP;
>> +    if (domain_nbentry_dec(conn, get_node_owner(node)))
>> +        return WALK_TREE_ERROR_STOP;
> 
> I think there is a potential issue with the buffering here. In case of
> failure, the node could have been removed, but the quota would not be
> properly accounted.

You mean the case where another node has been deleted and due to accounting
buffering the corrected accounting data wouldn't be committed?

This is no problem, as an error returned by delnode_sub() will result in
corrupt() being called, which in turn will correct the accounting data.

> Also, I think a comment would be warrant to explain why we are returning
> WALK_TREE_ERROR_STOP here when...
> 
>> +
>>       /* In case of error stop the walk. */
>>       if (!ret && do_tdb_delete(conn, &key, &node->acc))
>>           return WALK_TREE_SUCCESS_STOP;
> 
> ... this is not the case when do_tdb_delete() fails for some reasons.

The main idea was that the remove is working from the leafs towards the root.
In case one entry can't be removed, we should just stop.

OTOH returning WALK_TREE_ERROR_STOP might be cleaner, as this would make sure
that accounting data is repaired afterwards. Even if do_tdb_delete() can't
really fail in normal configurations, as the only failure reasons are:

- the node isn't found (quite unlikely, as we just read it before entering
  delnode_sub()), or
- writing the updated data base failed (in normal configurations it is in
  already allocated memory, so no way to fail that)

I think I'll switch to return WALK_TREE_ERROR_STOP here.


Juergen
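The leaf-to-root delete walk and its two stop codes debated in this message can be sketched as a toy model. Everything below (walk_delete(), the single-child node layout, the failure flags, the repair_needed flag standing in for corrupt()) is a hypothetical illustration of the control flow, not the real xenstored code:

```c
#include <assert.h>

enum walk_ret {
    WALK_TREE_OK,
    WALK_TREE_SUCCESS_STOP,   /* stop walking; state is consistent        */
    WALK_TREE_ERROR_STOP,     /* stop walking; accounting needs repairing */
};

struct node {
    struct node *child;       /* single-child chain keeps the model tiny  */
    int quota_failure;        /* simulates domain_nbentry_dec() failing   */
    int delete_failure;       /* simulates do_tdb_delete() failing        */
    int deleted;
};

static int repair_needed;     /* models corrupt() being invoked */

/* Delete bottom-up: recurse to the leaf first, then delete on the way
 * back towards the root, stopping at the first failure. */
static enum walk_ret delnode_sub(struct node *n)
{
    if (n->child) {
        enum walk_ret r = delnode_sub(n->child);
        if (r != WALK_TREE_OK)
            return r;
    }
    /* Accounting is adjusted before touching the database; a failure
     * here must abort the walk and trigger a repair. */
    if (n->quota_failure)
        return WALK_TREE_ERROR_STOP;
    /* Database delete: the thread's conclusion is to return
     * WALK_TREE_ERROR_STOP here too, so the buffered accounting data
     * is reconciled afterwards rather than silently left stale. */
    if (n->delete_failure)
        return WALK_TREE_ERROR_STOP;
    n->deleted = 1;
    return WALK_TREE_OK;
}

static enum walk_ret walk_delete(struct node *root)
{
    enum walk_ret r = delnode_sub(root);
    if (r == WALK_TREE_ERROR_STOP)
        repair_needed = 1;    /* the repair pass fixes the accounting */
    return r;
}
```

The point of preferring WALK_TREE_ERROR_STOP over WALK_TREE_SUCCESS_STOP is visible in the model: any failure, however unlikely, funnels into the single repair path, so the buffered quota accounting can never drift from the database.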
--------------GdoHBGFGqum09XITwnpwb00G--

--------------rss0Gaot1iYqJzjU3aM9SBQz--


--------------1E0AaHszEBAx6SZLBetrjz0o--


From xen-devel-bounces@lists.xenproject.org Wed May 10 13:17:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 13:17:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532798.829098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwjhJ-0002tH-E4; Wed, 10 May 2023 13:17:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532798.829098; Wed, 10 May 2023 13:17:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwjhJ-0002tA-Ab; Wed, 10 May 2023 13:17:37 +0000
Received: by outflank-mailman (input) for mailman id 532798;
 Wed, 10 May 2023 13:17:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwjhI-0002t4-3F
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 13:17:36 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20608.outbound.protection.outlook.com
 [2a01:111:f400:fe16::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0551a46d-ef35-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 15:17:32 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9637.eurprd04.prod.outlook.com (2603:10a6:102:272::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Wed, 10 May
 2023 13:17:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 13:17:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0551a46d-ef35-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MHvcEEGW4owMrKE8MSJ6k6Ar4e/9rFA1zZ4kjaNvz9lKokiUgzBirA6ZLRyiCDOC37ZvzXfNswXBLyvt4kY+gtSL4PrQRkm0jj1Yr8ys+FIXa6RmXXQH/CAN8Xg0pFe8khNCek7zPy7RmzYiyImQJpYStUaMC1OCjlNvZYHmo11xh/fzkhx5LaHas9KD13Jic75LTC3gJMK2ERaCWxstSxYyv3vsxPQzmBIzlwSuqck01v3aOx/fRpdGqU42zq/QoTEKgdlj1uOTFkm/IpW4ZOOhUK5M1mpCIt4KN7eQKSdsUXI2zmauFIbOHhqTXvUEnZqybu6ukURWLhZAUASgKA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zkoh9VifVEwZQzVn2Y2CdZmtitVrvTfVply5lW35MOk=;
 b=YgDMzpK4JVH1wlcB93aZlREzHVY9o2sIE+TDrK+FX32O8TmN7AYnO21pRPQzwCXFxfayR9uv1BBkUK8IE0OCKKfOv2c7wru3UsdjJkOnwJOO3gy2/ehePaXnht80laNkevBeWtGv3QFRuEaX/JYWjfRueftV4dFeO49bHwMGT85ffYN5KyAYP44nVlnF+6PHGT1YXu4IwOm53lFn94/foGE08cs9Bw35qqRO9AXTXOgXcC340RTYe/bRuo68z8gdHV/xAKfmBP4Yj9lMb9hbjhy4+3Yn2hZdnXdR1CL1rSNrejHY5Vv5ZnyY7QNutADr0+mCjj7y6rc6JhwbW/reCg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zkoh9VifVEwZQzVn2Y2CdZmtitVrvTfVply5lW35MOk=;
 b=Ypf/j8aFF5lAfx9Yd/6hfYLvkt4BZEbRYbtf6NvB74i6JzyU6cY5I0ZMvBtCdQYRO23oIRyF60dZBv16dgJbT+YASKRwCxmxNTJ1TielVm1ke4sKlOY1/MBXLeXXGQsxGyQ6IVuypbwP8WSV9fNiXO7MV12TLQUlSRfXkJmYndpQOKDzX9xb2jJxQ4UufnqIfHVDLE5KQii4CWmhARM7LnFsXuf6nkqSZTlsRj3vcdMVZLjJcJqfZ2kp2UieRXjyU5m5A2QZwYNXBB5hJJnB2MjYsPqJYTz5Dbwk7y8EGBJ2FeB2CD7c/Jh+2DfdbW5xifXjxeWUekleW8+FNASO/g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d32a49b4-5d68-9844-cc08-271a488534ef@suse.com>
Date: Wed, 10 May 2023 15:17:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 3/3] x86: Use CpuidUserDis if an AMD HVM guest toggles
 CPUID faulting
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230505175705.18098-1-alejandro.vallejo@cloud.com>
 <20230505175705.18098-4-alejandro.vallejo@cloud.com>
 <d8c9728b-b615-7229-2e76-106d81802a20@suse.com>
 <d0794e7d-604e-044f-000e-3a0bde126425@citrix.com>
 <7d41940f-e671-954c-1afc-510e4fa674fa@suse.com>
 <645b776e.5d0a0220.3ff50.bf0e@mx.google.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <645b776e.5d0a0220.3ff50.bf0e@mx.google.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0091.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9637:EE_
X-MS-Office365-Filtering-Correlation-Id: e44d9632-9d7f-45b7-77da-08db5158e765
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CKnWuHyeRsQtoqU4qOkJdjV5c1a4KYwUXyzndytTyu4HLvYerlipFuAYgSgZEdorWEieZL5cJ6YkmfD+0Lw3LW6lG03kdNjgyfM3GIvmpKC5aumM7EYItjMdLWeJHDmEOUIx5nnchAiCFQ/imUSZuTOK41TEzmgFybeusyTAm/ia2udNylOefjyT4NWP9IGUpewrrbWn9tvbyNavFZKfP+K83Nhpep7l3CXCoeJ8weq5y1h60tYg5BaO7bR2vsv+jFFwffKZ80h2Xx9h/8BHt/CxuGnREVl0O/q9Uj+v9pI579vEK5wOQzvMcFiz0fI/LiK3nobQz+HwMoa61B4GhCyJwJPjUn3MKB3SBSASWyaRqaFiHbHuv2dHRwYX+NRSyEA9NcugLi6UA/zYTvv1dUEXWlKblDUREFPCrlUerEuS0z34UKYREvhRsAE2OQ/UXM1wpMAx71V5f6aw33K1nrW1549joknXZFoeZ2SIBKsx0ErvwMFHuJtA2rkrgjcf72b7XB7PHVeX5qLcgxBqA0fMq6pWbNaI+b66W7lnVJSOHMq1TOW21Au/C7DKf22Zb09MfQlTrZtAsfCSAkzsQ6NqsqsMfhR7BVC7rKZFZjoU9PyhJ8Q1S8LJO8Mb+QzeWpdNspeHE87NNdyjMsCgKg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(376002)(346002)(396003)(136003)(366004)(451199021)(5660300002)(54906003)(31686004)(478600001)(26005)(6486002)(41300700001)(8676002)(316002)(8936002)(6506007)(6512007)(4326008)(66556008)(66476007)(66946007)(6916009)(53546011)(2616005)(2906002)(83380400001)(186003)(31696002)(86362001)(38100700002)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?b0g5djJVd3RxblkxK3ZLdUVQQmxhZDFobk9xTlpSTGFwaDMxOFNwL0c2RGh0?=
 =?utf-8?B?RTNndzNBVHEzTVYrSUtuc3hCMmRuUVkyYXNvTG5kZ2N3R2tZWklDcExWTkNR?=
 =?utf-8?B?RFEvRXFiaGExMEYzVXhUL1RmdVZ4MGpnQUF1ZE5RZGhuWkZ1U2FwbWYxYnhE?=
 =?utf-8?B?NHkzbTNHeTBlazk2S1MvalFaZkUvOHFlZ1RtVjFISVF1em0xTk9FeWFZUkIx?=
 =?utf-8?B?MzMyOVZoa251SDJGQlFJZUxGZFNIRm95OXlWcjNHQTRoVGw3dTMxdGszWENu?=
 =?utf-8?B?WEIvbmZ3T250c2krVjdUdUR5ZTMyR1c5UnVudzQ0WE5yNEo5WjJhc2N4a0Rn?=
 =?utf-8?B?OTU5WkNaZ0NJUHVwNDE1UG1ZN0RZSW5IbC8wRGNIWmluaWtMRm9RTEErNW1m?=
 =?utf-8?B?SFVkazU0bkpsY2FlZXJMVHdqY3E0QnJFTHVBME8xcHcwWXkyQXN3MmY1cGxG?=
 =?utf-8?B?Lyt2UEtnU1hGL0dZNjI3cmR1VXNmdnZQOHpSbUZwYVZYTVhKbTlMSGFnNHBh?=
 =?utf-8?B?WXVIQjg0RDIvNmZBaEZqUUswRWEzc2d2MVJnc2hqY3AzWVFsUWMxNVpyMTRD?=
 =?utf-8?B?dGxiQ29kZUpHRmYyMGZJdlM2QmZ1elVvUDZLUnMxWlE1SndMUHZZbGpPeUtH?=
 =?utf-8?B?bnIwU0ViT2h4RG0vc2I3eW9lRmN1Z1hwZU9HUGgvbjAxbTVRMGVpSFRJZnlk?=
 =?utf-8?B?bWdpRk9QZWhlcGYzMmhnQXZmN3FLd25LbUNnU0FhMVlqWGwrK2laL1cxa1BJ?=
 =?utf-8?B?dnhwYWZVUlFaSmVaRjhLSE1oODhSaXdGNW8xWWxIaURXRkdzWVNyVGpGd0ZT?=
 =?utf-8?B?VVZtMEUyQ2pYVVpuL3l5M29Pd2xnZE9UdFgxbnV0V1lMdmNjU2pLYWVZVWpG?=
 =?utf-8?B?eFIzQllaRmptTGdDVVVBNWdzdFVJZHRXaHF2QnJId2NlM2dSWlZJeGs3Ui9I?=
 =?utf-8?B?OGJQcWxkMGhxbStTVmRhZVptd1BIblZLa3NIRW40aVRMclVsWjVnTWhRRDln?=
 =?utf-8?B?U1YwZFN2Qk1tZUpqQ1JScnFybjdnaGVZdGR2R0U1eU1ydjJLbFQ3RHZIdGZy?=
 =?utf-8?B?RXRWbk11QnhqcmpEU1V5NXhmcHBETXNuWmpvanVEMTFkMnk3TkIwbzJOaDBD?=
 =?utf-8?B?N3N3S1doM2VPREVJRkF5dGk4RU9vUnZCSVJYWFJqZXM4MEplQTN0SkIySHoy?=
 =?utf-8?B?dkpBaytxWlFOdTJ5SVZSNUswMTFFTG5mVC9RaC9UT2h1NFdkb1RoOU9MN0NB?=
 =?utf-8?B?bkQ1SjNoVU40d1g3Y3JHQWo2Vmp0SnRtUWJqZkRQcXFpQXMrTzhsSDV2QTE2?=
 =?utf-8?B?dFEwUjV2VGpORGUvYXJLbm9yY2RYOGJkSDdnb3Y5ZkJiMVh2UkpFUm0xRWdX?=
 =?utf-8?B?SkdTSWlySmJkMTFuU0h2SmR4cnFFK2t3OWI5cnlwdURDcklSaTV1Z1lNV3lE?=
 =?utf-8?B?Y0ViK2EvdzlpQzJudTE3ekN0ckJXRllIUjJPR1JrSXJvODB0K0tXTnVXaHZB?=
 =?utf-8?B?VUJjbGlHbVJNc01ZR3lRR2QzcUFYVGtSTU91NzNpRXhOOEo5OXRuUEFLOEtJ?=
 =?utf-8?B?NjFEei9pampuOEkxV2I4dUR2TWcyd0NWSThZL1FnWnUxN05vOUI1WU94RWlY?=
 =?utf-8?B?Q1VFM1RnRmJiQjQ4b0lWeGFQUjJGdk5TSHl5VG1FaXUya2kyYjd3ODlBSEM3?=
 =?utf-8?B?cndNWnVjT2Q3TWJIQWJCcTRPTnl3U3g4ejhaZzg2VXZqOTVjVktyZjBKSjls?=
 =?utf-8?B?ODJObmhBOHpmUm80VzFmN0kzM0l1OUV4YUZPanpRMHRyeGJ3WXJZcTVMeU1F?=
 =?utf-8?B?dU5NcElzb1ZPZW5iV2p1UkNudjVKSjlKdlhkanJGYVRVaUJRRkRTWUhaaUZE?=
 =?utf-8?B?QVFmQVA0clp0Z0NGOWZERm1rWktjWUpCRzZNS0ZQS1ZTTzN0azlxNUk4T05G?=
 =?utf-8?B?a0w2b0kzemlkTHNXRE1USWk2TmN0MUxaVWZ5djkxQ3hHR3JwTW5kdXV5RGJ4?=
 =?utf-8?B?TVovT0lyOUtlb1Y5ZDA4WTQ4VXZZWmtCdnQyby9YdSsvczhPTGk5aitsRHFq?=
 =?utf-8?B?NDd3bDRyMlBIZ2QvaWh1OUdrVTdvRlJBRTdyMXBGNWlYZFVHa0NBbzZkTnNV?=
 =?utf-8?Q?GtzigkbbYxtC5iI2RSjjOs5xW?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e44d9632-9d7f-45b7-77da-08db5158e765
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 13:17:28.7515
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0bcPTBpDcoxiW5tp8EDcrt9cucYmHll/D5UowLJjX9MRvbD5MimCHCB2/uxREGEjAl224zfvHkDKdytieKX38Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9637

On 10.05.2023 12:52, Alejandro Vallejo wrote:
> On Wed, May 10, 2023 at 10:15:31AM +0200, Jan Beulich wrote:
>> On 09.05.2023 12:05, Andrew Cooper wrote:
>>> On 08/05/2023 2:18 pm, Jan Beulich wrote:
>>>> On 05.05.2023 19:57, Alejandro Vallejo wrote:
>>>>> This is in order to aid guests of AMD hardware that we have exposed
>>>>> CPUID faulting to. If they try to modify the Intel MSR that enables
>>>>> the feature, trigger levelling so AMD's version of it (CpuidUserDis)
>>>>> is used instead.
>>>>>
>>>>> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
>>>>> ---
>>>>>  xen/arch/x86/msr.c | 9 ++++++++-
>>>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>> Don't you also need to update cpu-policy.c:calculate_host_policy()
>>>> for the guest to actually know it can use the functionality? Which
>>>> in turn would appear to require some form of adjustment to
>>>> lib/x86/policy.c:x86_cpu_policies_are_compatible().
>>>
>>> I asked Alejandro to do it like this.
>>>
>>> Advertising this to guests requires plumbing another MSR into the
>>> infrastructure which isn't quite set up properly yet, and is in flux
>>> from my work.
>>
>> Maybe there was some misunderstanding here, as I realize only now. I
>> wasn't asking to expose the AMD feature, but instead I was after
>>
>>     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
>>     /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
>>     p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
>>
>> wanting to be extended by "|| boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)".
>> That, afaict, has no connection to plumbing yet another MSR.
> 
> Let me backtrack a little. There are two different (but related) matters
> under discussion:
> 
>  1. Trapping PV guest attempts to run the cpuid instruction
>  2. Providing a virtualized CPUID faulting interface so a guest kernel
>     can intercept the CPUID instructions of code running under it.
> 
> This series was meant to provide a fix for (1) on AMD hardware. I did go a bit
> beyond in v1, trying to provide support for a unified faulting mechanism
> on AMD, but providing a virtualized CPUID faulting interface needs to be
> done a bit more carefully than that. Doing it partially now just adds tech
> debt to be paid when CpuidUserDis is exposed to the domains.
> 
> Changing the policy to expose the Intel interface when AMD is the backend
> falls on (2), which is probably better off done separately in a unified way.
> 
> v2 removes the changes to x86/msr.c so only (1) is addressed.
> 
> Does this make sense?

Certainly. Nevertheless I would like to understand what Andrew meant,
even if only for staying better in sync with all the work he has been
doing (and is still planning to do) in the area of CPU policies and
leveling.
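(For reference, the one-line extension Jan describes above would amount to roughly the following against cpu-policy.c:calculate_host_policy(). This is an illustrative, untested sketch of the suggestion, not part of the posted series:)

```diff
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ static void __init calculate_host_policy(void)
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
-    p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
+    p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting ||
+                                      boot_cpu_has(X86_FEATURE_CPUID_USER_DIS);
```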

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 10 13:30:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 13:30:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532808.829118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwjtm-0005a5-SV; Wed, 10 May 2023 13:30:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532808.829118; Wed, 10 May 2023 13:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwjtm-0005Zx-Pc; Wed, 10 May 2023 13:30:30 +0000
Received: by outflank-mailman (input) for mailman id 532808;
 Wed, 10 May 2023 13:30:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SHWl=A7=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1pwjtl-0005Ki-TK
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 13:30:29 +0000
Received: from mail.skyhub.de (mail.skyhub.de [2a01:4f8:190:11c2::b:1457])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d4605425-ef36-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 15:30:29 +0200 (CEST)
Received: from zn.tnic (p5de8e8ea.dip0.t-ipconnect.de [93.232.232.234])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 4C15A1EC066E;
 Wed, 10 May 2023 15:30:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4605425-ef36-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1683725428;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=Oc3RW44mGf8OYw/mNzGyRLjYbfWEFY3fZ/ETgZKmRPc=;
	b=jV4B4Iueyphgm33/i6nlFz+SYXoKsBUzU8Ri7y4TYU68qN7/dmfMJlbK1MNpXw1TR90Kpl
	d1/rbYqm0aRl90Tzw/hOuuMoNnSYbkigi+LV1eHzl3IeCJdrbGH/onF7EABu+OV28GcY+p
	ZJ+8upyZumiT7YyvTUED0tVtgBdB47U=
Date: Wed, 10 May 2023 15:30:24 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
	mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>

On Wed, May 10, 2023 at 01:36:41AM +0200, Borislav Petkov wrote:
> More staring at this tomorrow, on a clear head.

Yeah, I'm going to leave it as is. Tried doing a union with bitfields,
but it doesn't get any prettier.

Next crapola:

The Intel box says now:

[    8.138683] sgx: EPC section 0x80200000-0x85ffffff
[    8.204838] pmd_set_huge: Cannot satisfy [mem 0x80200000-0x80400000] with a huge-page mapping due to MTRR override, uniform: 0

(I've extended the debug output).

and that happens because

[    8.174229] mtrr_type_lookup: mtrr_state_set: 1
[    8.178909] mtrr_type_lookup: start: 0x80200000, cache_map[3].start: 0x88800000

that's

	 if (start < cache_map[i].start) {

in mtrr_type_lookup(). I fail to see how that check would work for the
range 0x80200000-0x80400000 and the MTRR map is:

[    0.000587] MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
[    0.000588]   0: 0000000000000000-000000000009ffff write-back
[    0.000589]   1: 00000000000a0000-00000000000bffff uncachable
[    0.000590]   2: 00000000000c0000-00000000000fffff write-protect
[    0.000591]   3: 0000000088800000-00000000ffffffff uncachable

so the UC range comes after the one we request.

[    8.186372] mtrr_type_lookup: type: 0x6, cache_map[3].type: 0x0

now the next type merging happens and the 3rd region's type is UC, ofc.

[    8.192433] type_merge: type: 0x6, new_type: 0x0, effective_type: 0x0, clear uniform

we clear uniform and we fail:

[    8.200331] mtrr_type_lookup: ret, uniform: 0

So this map lookup thing is wrong in this case.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed May 10 13:30:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 13:30:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532807.829107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwjtl-0005Kx-IG; Wed, 10 May 2023 13:30:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532807.829107; Wed, 10 May 2023 13:30:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwjtl-0005Kq-FY; Wed, 10 May 2023 13:30:29 +0000
Received: by outflank-mailman (input) for mailman id 532807;
 Wed, 10 May 2023 13:30:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwjtj-0005Ki-DL
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 13:30:27 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0630.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d28e1e94-ef36-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 15:30:26 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7621.eurprd04.prod.outlook.com (2603:10a6:20b:299::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.19; Wed, 10 May
 2023 13:30:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 13:30:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d28e1e94-ef36-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=G+Nz5ZA0Igd7picUaiEoj8G/JQpNfZ3QWwtyzxxNExZd4ocYo5m8I4CvfHHh5NLTEP3ZkdQd0v3nabR2qzSLgYvL3oX7QlQO3hXhkt3p2J+L9OADdRn8MJnsFqHN3iGZp3IVmrTyies4Fp1LtYk1O5EPSA1cGajbHgKTEQNq3TK9RfUamjGBPA6o4dIPvVzy7TxTw8EBZBhntG2tjE3TIZlO3l7RYPpHi4l/N9jlkxjrGfKvPJMHrJqKaLfx5whLdPwnU7J4eMFDQr/gpJCbnNoneb9n3G9NAZgDvkPIAeC266ZQCWd4tlE308N6wAFs7+xYrZtbwWyfYdC0QrfKKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nkwdbCza5GI8/fMe9/WG6chBxRImyMQTKLkUScg/9gU=;
 b=Kr2Y3bRRqRHITPod+Jjw3pbEIKUWLXkNSf5cgpEqGV1X1y9HixsMCNB1LzOvMQ1ds43vteIg/CEBqxcTvyf11vpkBiGy8tvYr+RMtBRHdx8V+QQudBr29iaDEJNqU9uIrZn4eipAL3aKmkDsiqwZOjLYu4nurWQfTNl8CD9qNDO4ku9UDGNRwne/R+1Ng5n5ArVUVVe/nWK0ewmi2PPH/jg5GkwFzUKBJuuNGdAq0gAs47zPzzktLBbKuQkl62ckv+TXYWo5Umu5KYJ4Q+tCjdutxoBBXVi1A7NNGeO/c3LVfC3Y+nzIg2bc308McQb38j6q2TdfRzzT7lTmB37MMw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nkwdbCza5GI8/fMe9/WG6chBxRImyMQTKLkUScg/9gU=;
 b=u/vnipHb+NO991qh16Cl0rzCZ2GulFgPeWTCN4CWP4vZYs8fNG6KVyDSRLGNs2KeDfTb3WU05aAWJcwfpZG/QosAPHo+LUFyI7X46A8fZwu/yAYV3JIYTtmJKUsXZU+2D0DCbtIJAQW3vZQxJG4FF4Wxf0zABLzu+BGXDj1B1ivOC76PBzPCWdKcxkOoyggTrhjzLENdna4EfoYyLPGRMV6HF2w45VckynxAhCe0uj+w0Co6d2ptZC5eH16adeaHr2NOKaAuDKmIr/soSBfPTNC5/mjVUSsUjKXpSq0wQqSXwA7+0iE702pmWR7nLT5v782XBt0NI/8iVI1aATGMsA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f53d0041-d694-1221-475e-648a2afd2ff9@suse.com>
Date: Wed, 10 May 2023 15:30:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] iommu/vtd: fix address translation for superpages
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20230509104146.61178-1-roger.pau@citrix.com>
 <bc750b8d-77be-f967-907a-4f19c783ad99@suse.com>
 <ZFtVYEVsELGfZxik@Air-de-Roger>
 <5bde1d79-03ef-6f8b-4b28-8d7e6867ba18@suse.com>
 <ZFtwSjuZaz05DIY0@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFtwSjuZaz05DIY0@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0103.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7621:EE_
X-MS-Office365-Filtering-Correlation-Id: 4bc7d5fa-a82c-4a95-8611-08db515ab578
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3Le/s2WHu3KAS5Ek72KiDIHqW6wOxTK1oUWC6HR0sSUchPFRJwX1C7ZJN0IAT6BjRk6JrKhRqzm+NlbiBZtdSHZUzocVDT8oqGKRPohpfUVpBg/f+sQrLJhUAJVIHlD+26Qas05TCzTAOWxHcH574s17wrjYk/5Z0wHlgwyO7R4cDpNR1kbhICu6xJr8hXJSZpLJx7f2e3dMAAr5tirtWZPDVI9kne3Y660b+bSrwIalgqjTSQ56/jD9uHe1tQelvjD3uZ3Do0kONMShg7byxIbxiIZyMW5HL16nujBd/Tpg5ZzsExX8eSkkZf0QR5FT7Ok6RtF3S2V3Vk/Mf6Q539PgBbbRhFZsj0C3ADE4aoNs8cxtll2InWfiN3ZI6A8zToZFIk8BaG9s0o5RvlDG9DxIUDKsLZEaGPnMwlhZLnNfmP1SZ8rMZdUjC7DFdTMVgQzbsOD0kXHcMpm2OKIwRPRccm7P2RmBFrzVFz5qRA5SZCeTprqYgOTqU8D3PPeZnsYD5mJMaDrRRHK0idltPPJ+/MyWgk2/tEShqt4jS+JJww+pR+0OHhcbSwMjSv2cCM2p1QkdzXprrn91dTLxNIjOjSebkrYtVw/+90+bFZd0olycFTnrwaQ4yPTE0FbVwhoOHjk6GhMd3g+5O0vxVA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(136003)(376002)(39860400002)(346002)(366004)(451199021)(31686004)(66899021)(6666004)(66476007)(6916009)(4326008)(478600001)(6486002)(66946007)(66556008)(316002)(86362001)(31696002)(36756003)(83380400001)(26005)(53546011)(6512007)(186003)(2616005)(6506007)(5660300002)(2906002)(8936002)(8676002)(41300700001)(38100700002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UlNqZkZ5c0NnelJTVkozZXpzc3o2c0U5Y2dIZ3lwU2xQUHhTUlFiYThXRmwy?=
 =?utf-8?B?ak1FTmg0TFZYcCtFQmZtSk5NTHdZY1ZCSWx5RXNrbXhZdDdUV2FtcnJoYm94?=
 =?utf-8?B?eldlcFc1QWFwWG40NmhxSDdDc0FYcHJWQ080R0s1NnVxOS9UMHQvZm5IMm5E?=
 =?utf-8?B?d0FuMXRRTmUxYkd5UkluOUt0NStOVHI2Umc0ZzNCWWNBMkROUFRVcHpwa2cz?=
 =?utf-8?B?ZVFRQldwdll0MVpySzAyUmgxWm5sQVNCM2lqbFd2a2xLNEd5NTFabXdlT2lM?=
 =?utf-8?B?bmRETVlxQi9BZ1RvdDVuY01zNEtkendrRnlFa1lkR2lXSCtDS1hxd092WTE4?=
 =?utf-8?B?SlpjMC9xY1djd2RkRDlDTWxKWk1sOU1Vbk5sd2lLV0hLOHpYd3ZNZzZpVmlI?=
 =?utf-8?B?dE9MdGRsdmI5VkpZWDg1ajNRTk9SQ3R1TTA1OUNGRTZtdlRWbmlEd1Z1ajJp?=
 =?utf-8?B?bkNVaEc2THh5ZG43VGhUSjA4N29sSXkrZHhUS3ZrYWI0TDgwdnRNdTZzY3BD?=
 =?utf-8?B?K1hTeU1nT25uei8xdjFQTUk2RXpMOHZFSjlYdnhWUS8yZXZadjRMU3loS1Rr?=
 =?utf-8?B?ZGg3ZUFob0ZKVUVPTC9ZcnhxY1RtNkVnN3RkaFp1U3VBZnlxeFVKT0xlbGdO?=
 =?utf-8?B?MFFPUXY1ZFZzdzRZQXIxSEZjRGdYT05iMDdwekIra29vWGNFZnpBSHVjNXRl?=
 =?utf-8?B?ZHhCSVdjRkFZMFJSTTR3cWtPTzZrbEN2OThRRFNFSUt6cXM1UExOd1dpUWJY?=
 =?utf-8?B?dGlyOVE5VnZncDd5b3oweFNQMmpFY3poaGNWenRTYzZ2YUp5VGQrSkdEanEz?=
 =?utf-8?B?bEdsYUxENE5wWUtWTEtnZ0djVUJ0Rjh1dWNpOUMzNnZhOS84YlFObFJpTlVK?=
 =?utf-8?B?dC9nYlNsaUkrVDdHb3Jvd3VSWk0rdUVJZG5oS2xWdzRKZ0d3SDVuSU0xYTht?=
 =?utf-8?B?QTBQOTg4VmNLVExybjB1ZGVLSll6SjFtM21lbHNySTdGUERSYndYbTR1T3Iy?=
 =?utf-8?B?V085c0RXZ1RWQy9IWEd3L2NwdUFUU2pmZVB5Q2VoOHNVK1FhSGJVZWdHQUwv?=
 =?utf-8?B?YzdMYkhlUkErMHo3Q1piM2tSTWpJejNZZjBOZFJZWkxhVUJkU0d3bi9URTYr?=
 =?utf-8?B?RDVzbUJJZXpCT0VjOEwwRmo5RFRBaFVpaTYrazkyejBaTFJNN3RlQ21LcXpN?=
 =?utf-8?B?V1BkOWhWV2RmVkdkZUxYRFkxb1IyWkN4djV1RnZ2ZFUzZUpiWDNldUNLZTdr?=
 =?utf-8?B?OFhPZ21wSndaRGxIdEFTVjNta3lRNXdlQlNXNDlKMEJaYUozZitRdDBoUXFO?=
 =?utf-8?B?L0dHUjlrYjJGa0Qramhmb29KbzR3UXlkeFBSK25VUUtHcUsrcTZ5WEFEckMv?=
 =?utf-8?B?cSt0dDdjb2k5b0VzMFhUM3ZMR1E2b1d4eHRraGNYOWNkT28wam80TWVLaURk?=
 =?utf-8?B?dk9zNk14V2FEV2xGVGxDdmpzaU00ZzRjSUxwV09aWG9uemtYVU05ZkdNUWtu?=
 =?utf-8?B?dHV4WE1lMnJQcGgwb2NpUVEwV0FKaEZjUlJIbzQvd1pBbnc5UjMvNS9PeHV4?=
 =?utf-8?B?QWdldStWcldwMklTVFNnY09XUVV6UndPSTZGMlhCaDV6M1NXcGFrcjFGZ0or?=
 =?utf-8?B?R2VGc0xSa0luc1VxcUJ4Z2l2bU4xaUNQMDAwR0FCa1RFUm84OG1ETDFaMXg4?=
 =?utf-8?B?L2dvUnQ1aUw0aGdld2ZsWXMyQlZleWxWTnBtaWUrbmQxTzkxc0ZQaEhUbWc3?=
 =?utf-8?B?blJ0VjAreExsMElkZG54T05RdTB4d04wd3dkMDYvN29GclVnRTJWazVteVE2?=
 =?utf-8?B?SjgwaXVMKzlGbGFLWlY3M25iZlgvVjZoUFdla1JRM1R3cDJodEFsUHVXdnUz?=
 =?utf-8?B?Z1N6ZjFkM0FkdW1iRCtWa3Zna2Y5d2VJeWVDMm5qQ3hBSkZVZHc1ZjlwOVF6?=
 =?utf-8?B?eTI1SXI5azZrK01kdVRkekxTT2RZcXJzc25Fbnd3NUFrQ1VrbmhmMHRsSmZS?=
 =?utf-8?B?R0dkeGN5cTQyOXRncDJWSUIwYWM3dm00Vk1HUjA3SzVrR3JhV3pXL3NvcGp0?=
 =?utf-8?B?NlZVRkFMb3huT1oyaE5LbVVlcDZIbUFRcTZ6UkJoYW1aYmVuUndub1VpalJ5?=
 =?utf-8?Q?Nv1nr9brkSu3qWtWD+0o4Gxwq?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4bc7d5fa-a82c-4a95-8611-08db515ab578
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 13:30:23.8508
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3XrTXsSWSbFSFyC7U3i5c38VwhsgiqHE4lQm2+Y6JlO4CbmVe+Smr70Gmm2Y6G3Ql+oBViYGZgtdJVwTvObCcQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7621

On 10.05.2023 12:22, Roger Pau Monné wrote:
> On Wed, May 10, 2023 at 12:00:51PM +0200, Jan Beulich wrote:
>> On 10.05.2023 10:27, Roger Pau Monné wrote:
>>> On Tue, May 09, 2023 at 06:06:45PM +0200, Jan Beulich wrote:
>>>> On 09.05.2023 12:41, Roger Pau Monne wrote:
>>>>> When translating an address that falls inside of a superpage in the
>>>>> IOMMU page tables the fetching of the PTE physical address field
>>>>> wasn't using dma_pte_addr(), which caused the returned data to be
>>>>> corrupt as it would contain bits not related to the address field.
>>>>
>>>> I'm afraid I don't understand:
>>>>
>>>>> --- a/xen/drivers/passthrough/vtd/iommu.c
>>>>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>>>>> @@ -359,16 +359,18 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
>>>>>  
>>>>>              if ( !alloc )
>>>>>              {
>>>>> -                pte_maddr = 0;
>>>>>                  if ( !dma_pte_present(*pte) )
>>>>> +                {
>>>>> +                    pte_maddr = 0;
>>>>>                      break;
>>>>> +                }
>>>>>  
>>>>>                  /*
>>>>>                   * When the leaf entry was requested, pass back the full PTE,
>>>>>                   * with the address adjusted to account for the residual of
>>>>>                   * the walk.
>>>>>                   */
>>>>> -                pte_maddr = pte->val +
>>>>> +                pte_maddr +=
>>>>>                      (addr & ((1UL << level_to_offset_bits(level)) - 1) &
>>>>>                       PAGE_MASK);
>>>>
>>>> With this change you're now violating what the comment says (plus what
>>>> the comment ahead of the function says). And it says what it says for
>>>> a reason - see intel_iommu_lookup_page(), which I think your change is
>>>> breaking.
>>>
>>> Hm, but the code in intel_iommu_lookup_page() is now wrong as it takes
>>> the bits in DMA_PTE_CONTIG_MASK as part of the physical address when
>>> doing the conversion to mfn?  maddr_to_mfn() doesn't perform any
>>> masking to remove the bits above PADDR_BITS.
>>
>> Oh, right. But that's a missing dma_pte_addr() in intel_iommu_lookup_page()
>> then. (It would likely be better anyway to switch "uint64_t val" to
>> "struct dma_pte pte" there, to make more visible that it's a PTE we're
>> dealing with.) I indeed overlooked this aspect when doing the earlier
>> change.
> 
> I guess I'm still confused, as the other return value for target == 0
> (when the address is not part of a superpage) does return
> dma_pte_addr(pte).  I think that needs further fixing then.

Hmm, indeed. But I think it's worse than this: addr_to_dma_page_maddr()
also does one too many iterations in that case. All "normal" callers
supply a positive "target". We need to terminate the walk at level 1
also when target == 0.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 10 13:31:16 2023
Date: Wed, 10 May 2023 15:31:14 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
	mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230510133114.GCZFucoi32pjSFbEHr@fat_crate.local>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>

On Wed, May 10, 2023 at 03:30:24PM +0200, Borislav Petkov wrote:
> So this map lookup thing is wrong in this case.

Btw, current tree is:

https://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git/log/?h=rc1-mtrr

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed May 10 13:55:19 2023
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-5-jandryuk@gmail.com>
 <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com> <CAKf6xpuzk6vLjbNAHzEmVpq8sDAO8O-cRFVStQxNqyD5ERr+Yg@mail.gmail.com>
 <7921d24d-7d4d-8829-44bf-b8c2ecd001c8@suse.com> <CAKf6xpt0n2GO1PuqeaiWi6iOoeBt0ULoKJ4xgeiTZo2Rh67kVg@mail.gmail.com>
 <4bf60ae8-7757-7440-1a4c-95260c0f0578@suse.com>
In-Reply-To: <4bf60ae8-7757-7440-1a4c-95260c0f0578@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 10 May 2023 09:54:40 -0400
Message-ID: <CAKf6xpuZRgQSe7=ST1sa=_vNOvDeC+bnDG4deb9m=A2M5+X2Eg@mail.gmail.com>
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP) driver
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, May 8, 2023 at 2:33 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 05.05.2023 17:35, Jason Andryuk wrote:
> > On Fri, May 5, 2023 at 3:01 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 04.05.2023 18:56, Jason Andryuk wrote:
> >>> On Thu, May 4, 2023 at 9:11 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>> On 01.05.2023 21:30, Jason Andryuk wrote:
> >>>>> --- a/docs/misc/xen-command-line.pandoc
> >>>>> +++ b/docs/misc/xen-command-line.pandoc
> >>>>> @@ -499,7 +499,7 @@ If set, force use of the performance counters for oprofile, rather than detectin
> >>>>>  available support.
> >>>>>
> >>>>>  ### cpufreq
> >>>>> -> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<maxfreq>][,[<minfreq>][,[verbose]]]]} | dom0-kernel`
> >>>>> +> `= none | {{ <boolean> | xen } [:[powersave|performance|ondemand|userspace][,<hdc>][,[<hwp>]][,[<maxfreq>]][,[<minfreq>]][,[verbose]]]} | dom0-kernel`
> >>>>
> >>>> Considering you use a special internal governor, the 4 governor alternatives are
> >>>> meaningless for hwp. Hence at the command line level recognizing "hwp" as if it
> >>>> was another governor name would seem better to me. This would then also get rid
> >>>> of one of the two special "no-" prefix parsing cases (which I'm not overly
> >>>> happy about).
> >>>>
> >>>> Even if not done that way I'm puzzled by the way you spell out the interaction
> >>>> of "hwp" and "hdc": As you say in the description, "hdc" is meaningful only when
> >>>> "hwp" was specified, so even if not merged with the governors group "hwp" should
> >>>> come first, and "hdc" ought to be rejected if "hwp" wasn't first specified. (The
> >>>> way you've spelled it out it actually looks to be kind of the other way around.)
> >>>
> >>> I placed them in alphabetical order, but, yes, it doesn't make sense.
> >>>
> >>>> Strictly speaking "maxfreq" and "minfreq" also should be objected to when "hwp"
> >>>> was specified.
> >>>>
> >>>> Overall I'm getting the impression that beyond your "verbose" related adjustment
> >>>> more is needed, if you're meaning to get things closer to how we parse the
> >>>> option (splitting across multiple lines to help see what I mean):
> >>>>
> >>>> `= none
> >>>>  | {{ <boolean> | xen } [:{powersave|performance|ondemand|userspace}
> >>>>                           [{,hwp[,hdc]|[,maxfreq=<maxfreq>[,minfreq=<minfreq>]}]
> >>>>                           [,verbose]]}
> >>>>  | dom0-kernel`
> >>>>
> >>>> (We're still parsing in a more relaxed way, e.g. minfreq may come ahead of
> >>>> maxfreq, but better be more tight in the doc than too relaxed.)
> >>>>
> >>>> Furthermore while max/min freq don't apply directly, there are still two MSRs
> >>>> controlling bounds at the package and logical processor levels.
> >>>
> >>> Well, we only program the logical processor level MSRs because we
> >>> don't have a good idea of the packages to know when we can skip
> >>> writing an MSR.
> >>>
> >>> How about this:
> >>> `= none
> >>>  | {{ <boolean> | xen } {
> >>> [:{powersave|performance|ondemand|userspace}[,maxfreq=<maxfreq>[,minfreq=<minfreq>]]
> >>>                         | [:hwp[,hdc]] }
> >>>                           [,verbose]]}
> >>>  | dom0-kernel`
> >>
> >> Looks right, yes.
> >
> > There is a wrinkle to using the hwp governor.  The hwp governor was
> > named "hwp-internal", so it needs to be renamed to "hwp" for use with
> > command line parsing.  That means the checking for "-internal" needs
> > to change to just "hwp" which removes the generality of the original
> > implementation.
>
> I'm afraid I don't see why this would strictly be necessary or a
> consequence.

Maybe I took your comment too far when you mentioned using hwp as a
fake governor.  I used the actual HWP struct cpufreq_governor to hook
into cpufreq_cmdline_parse().  cpufreq_cmdline_parse() uses that
name for comparison.  But the naming stops being an issue if struct
cpufreq_governor gains a bool .internal flag.  That flag also makes
the registration checks clearer.

> > The other issue is that if you select "hwp" as the governor, but HWP
> > hardware support is not available, then hwp_available() needs to reset
> > the governor back to the default.  This feels like a layering
> > violation.
>
> Layering violation - yes. But why would the governor need resetting in
> this case? If HWP was asked for but isn't available, I don't think any
> other cpufreq handling (and hence governor) should be put in place.
> And turning off cpufreq altogether (if necessary in the first place)
> wouldn't, to me, feel as much like a layering violation.

My goal was for Xen to use HWP if available and fall back to the acpi
cpufreq driver if not.  That to me seems more user-friendly than
disabling cpufreq.

            if ( hwp_available() )
                ret = hwp_register_driver();
            else
                ret = cpufreq_register_driver(&acpi_cpufreq_driver);

If we are setting cpufreq_opt_governor to enter hwp_available(), but
then HWP isn't available, it seems to me that we need to reset
cpufreq_opt_governor when hwp_available() returns false.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 10 14:09:02 2023
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-7-jandryuk@gmail.com>
 <bdd7aa43-3fb7-0a26-4041-b476b285d14e@suse.com>
In-Reply-To: <bdd7aa43-3fb7-0a26-4041-b476b285d14e@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 10 May 2023 10:08:38 -0400
Message-ID: <CAKf6xpsi11BbSxUW5h1Yx=BmZX5w5+dBD4rWtB8gTNnw_dwk1A@mail.gmail.com>
Subject: Re: [PATCH v3 06/14 RESEND] xen/x86: Tweak PDC bits when using HWP
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, May 8, 2023 at 5:53 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 01.05.2023 21:30, Jason Andryuk wrote:
> > --- a/xen/arch/x86/acpi/cpufreq/hwp.c
> > +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> > @@ -13,6 +13,8 @@
> >  #include <asm/msr.h>
> >  #include <acpi/cpufreq/cpufreq.h>
> >
> > +static bool hwp_in_use;
>
> __ro_after_init again, please.

Of course.  (I'd already made the change locally after the earlier ones.)

> > --- a/xen/include/acpi/pdc_intel.h
> > +++ b/xen/include/acpi/pdc_intel.h
> > @@ -17,6 +17,7 @@
> >  #define ACPI_PDC_C_C1_FFH            (0x0100)
> >  #define ACPI_PDC_C_C2C3_FFH          (0x0200)
> >  #define ACPI_PDC_SMP_P_HWCOORD               (0x0800)
> > +#define ACPI_PDC_CPPC_NTV_INT                (0x1000)
>
> I can probably live with NTV (albeit I'd prefer NATIVE), but INT is too
> ambiguous for my taste: Can at least that become INTR, please?

Sounds good.  I'm switching to ACPI_PDC_CPPC_NATIVE_INTR.

> With at least the minimal adjustments
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thank you.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 10 14:20:48 2023
Message-ID: <e9dd85d3-0e97-cb96-561e-75b23386a7a3@suse.com>
Date: Wed, 10 May 2023 16:19:57 +0200
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP)
 driver
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-5-jandryuk@gmail.com>
 <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com>
 <CAKf6xpuzk6vLjbNAHzEmVpq8sDAO8O-cRFVStQxNqyD5ERr+Yg@mail.gmail.com>
 <7921d24d-7d4d-8829-44bf-b8c2ecd001c8@suse.com>
 <CAKf6xpt0n2GO1PuqeaiWi6iOoeBt0ULoKJ4xgeiTZo2Rh67kVg@mail.gmail.com>
 <4bf60ae8-7757-7440-1a4c-95260c0f0578@suse.com>
 <CAKf6xpuZRgQSe7=ST1sa=_vNOvDeC+bnDG4deb9m=A2M5+X2Eg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xpuZRgQSe7=ST1sa=_vNOvDeC+bnDG4deb9m=A2M5+X2Eg@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.05.2023 15:54, Jason Andryuk wrote:
> On Mon, May 8, 2023 at 2:33 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 05.05.2023 17:35, Jason Andryuk wrote:
>>> On Fri, May 5, 2023 at 3:01 AM Jan Beulich <jbeulich@suse.com> wrote:
>>> The other issue is that if you select "hwp" as the governor, but HWP
>>> hardware support is not available, then hwp_available() needs to reset
>>> the governor back to the default.  This feels like a layering
>>> violation.
>>
>> Layering violation - yes. But why would the governor need resetting in
>> this case? If HWP was asked for but isn't available, I don't think any
>> other cpufreq handling (and hence governor) should be put in place.
>> And turning off cpufreq altogether (if necessary in the first place)
>> wouldn't, to me, feel as much like a layering violation.
> 
> My goal was for Xen to use HWP if available and fall back to the acpi
> cpufreq driver if not.  That to me seems more user-friendly than
> disabling cpufreq.
> 
>             if ( hwp_available() )
>                 ret = hwp_register_driver();
>             else
>                 ret = cpufreq_register_driver(&acpi_cpufreq_driver);

That's fine as a (future) default, but for now using hwp requires a
command line option, and if that option says "hwp" then it ought to
be hwp imo.

> If we are setting cpufreq_opt_governor to enter hwp_available(), but
> then HWP isn't available, it seems to me that we need to reset
> cpufreq_opt_governor when hwp_available() returns false.

This may be necessary in the future, but shouldn't be necessary right
now. It's not entirely clear to me what that future is going to look
like, command line option wise.
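The trade-off being discussed can be sketched as a tiny model (illustrative Python, not Xen code; the function name select_driver and the explicit-request flag are assumptions for the example, mirroring the hwp_available()/register-driver snippet quoted above):

```python
from typing import Optional

def select_driver(hwp_available: bool, hwp_requested: bool) -> Optional[str]:
    """Model of the cpufreq driver-selection policies under discussion.

    - If HWP hardware support is present, use it.
    - If "hwp" was explicitly requested but is unavailable, register no
      driver at all rather than silently substituting another one.
    - Otherwise fall back to the ACPI cpufreq driver (the proposed
      future default).
    """
    if hwp_available:
        return "hwp"
    if hwp_requested:
        return None  # leave cpufreq off: an explicit "hwp" should mean hwp
    return "acpi-cpufreq"
```

Under this model an explicit but unsatisfiable "hwp" request disables cpufreq instead of falling back, which is the behaviour argued for while the option remains opt-in.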

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 10 14:20:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 14:20:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532845.829167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwkgQ-00051T-IT; Wed, 10 May 2023 14:20:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532845.829167; Wed, 10 May 2023 14:20:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwkgQ-00051K-Fm; Wed, 10 May 2023 14:20:46 +0000
Received: by outflank-mailman (input) for mailman id 532845;
 Wed, 10 May 2023 14:20:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2wMR=A7=bounce.vates.fr=bounce-md_30504962.645ba838.v1-8c1a117748d2476faf56e5359e5ae519@srs-se1.protection.inumbo.net>)
 id 1pwkgP-00050N-AB
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 14:20:45 +0000
Received: from mail179-5.suw41.mandrillapp.com
 (mail179-5.suw41.mandrillapp.com [198.2.179.5])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d7b18ecc-ef3d-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 16:20:41 +0200 (CEST)
Received: from pmta12.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail179-5.suw41.mandrillapp.com (Mailchimp) with ESMTP id 4QGcd43JjKzG0CNqp
 for <xen-devel@lists.xenproject.org>; Wed, 10 May 2023 14:20:40 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 8c1a117748d2476faf56e5359e5ae519; Wed, 10 May 2023 14:20:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7b18ecc-ef3d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1683728440; x=1683988940; i=yann.dirson@vates.fr;
	bh=AKM5zlXqg5wAV/fR8iZ6sMC6JAr9mU0kItj363ZI22Y=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=FRzrsj/K84wL8v7PuGRvopKfuePCYePd8Pms/aogvM+dNbxvyByJcJzGNd4Cz2Q0z
	 RjE7gpPaioz1hjTYaXaHqgZ4Lni07XN2gAkRpxjBP3MTkUO9C15IAcBk62Fk9H6bPo
	 CBsx3afMj4noCWrjkoR+XL5Z5VlFpPw7Z27LaVgI=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1683728440; h=From : 
 Subject : To : Cc : Message-Id : Date : MIME-Version : Content-Type : 
 Content-Transfer-Encoding : From : Subject : Date : X-Mandrill-User : 
 List-Unsubscribe; bh=AKM5zlXqg5wAV/fR8iZ6sMC6JAr9mU0kItj363ZI22Y=; 
 b=IV+jiBcv2wMmTcLKczq8UJK9aOCy0OPzjZ3SSgdxEQwM5LX7YNR0SUn3ZcjOefNBAXJyHX
 ESZIxExkEHcujaTRT4UxE3hB9hPR+IuoUHejZt36TNvmL5GJb4e35gTp/ohmXKPk3S184VmB
 SsdDaz6h8wMUCnrk6LmKQxvdJAJkw=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?[PATCH=200/3]=20officializing=20xenstore=20control/feature-balloon=20entry?=
X-Mailer: git-send-email 2.30.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9664e26d-c55d-4176-ac64-680cfb7c5564
X-Bm-Transport-Timestamp: 1683728433184
To: xen-devel@lists.xenproject.org
Cc: xihuan.yang@citrix.com, min.li1@citrix.com, Yann Dirson <yann.dirson@vates.fr>
Message-Id: <20230510142011.1120417-1-yann.dirson@vates.fr>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.8c1a117748d2476faf56e5359e5ae519?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230510:md
Date: Wed, 10 May 2023 14:20:40 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

The main topic of this patch series is the ~/control/feature-balloon
entry used by XAPI, prompted by the report of xe-guest-utilities on
FreeBSD not being able to report the feature when using just libxl on
the host.

The first patch is a bit off-topic, but included here because it fixes
the text from which this feature description was adapted.

Yann Dirson (3):
  docs: fix complex-and-wrong xenstore-path wording
  docs: document ~/control/feature-balloon
  libxl: create ~/control/feature-balloon

 docs/misc/xenstore-paths.pandoc | 16 ++++++++++------
 tools/libs/light/libxl_create.c |  3 +++
 2 files changed, 13 insertions(+), 6 deletions(-)

-- 
2.30.2



Yann Dirson | Vates Platform Developer

XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com


From xen-devel-bounces@lists.xenproject.org Wed May 10 14:20:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 14:20:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532849.829198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwkgX-0005sR-C1; Wed, 10 May 2023 14:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532849.829198; Wed, 10 May 2023 14:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwkgX-0005sI-8e; Wed, 10 May 2023 14:20:53 +0000
Received: by outflank-mailman (input) for mailman id 532849;
 Wed, 10 May 2023 14:20:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9+Fk=A7=bounce.vates.fr=bounce-md_30504962.645ba840.v1-ff8208a920b241e18369e93a82372d38@srs-se1.protection.inumbo.net>)
 id 1pwkgV-00050N-SX
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 14:20:51 +0000
Received: from mail145-2.atl61.mandrillapp.com
 (mail145-2.atl61.mandrillapp.com [198.2.145.2])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dc518e69-ef3d-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 16:20:49 +0200 (CEST)
Received: from pmta06.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail145-2.atl61.mandrillapp.com (Mailchimp) with ESMTP id 4QGcdD3vF4zQXlPFj
 for <xen-devel@lists.xenproject.org>; Wed, 10 May 2023 14:20:48 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 ff8208a920b241e18369e93a82372d38; Wed, 10 May 2023 14:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc518e69-ef3d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1683728448; x=1683988948; i=yann.dirson@vates.fr;
	bh=ZwL8RO7m/XPAyFca6sOKxBF5d9RSDwmjnqFvJINXGyI=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=lutkl4UF1sjyKfPv3tb8+clPcK7umW8O0rxkERBr5Uu9sLMkTZRRa7SQTLrV0ZJ5b
	 3NDfiH04If4CwUlycWr+J4d6y3dRz4J4+F0bu/8lKQvTbNrO477V0Z2dGxLoIlfT0M
	 EIzH4HwLogCUoksCcML4iXo6V9P+RgebWWLBFaRw=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1683728448; h=From : 
 Subject : To : Cc : Message-Id : In-Reply-To : References : Date : 
 MIME-Version : Content-Type : Content-Transfer-Encoding : From : 
 Subject : Date : X-Mandrill-User : List-Unsubscribe; 
 bh=ZwL8RO7m/XPAyFca6sOKxBF5d9RSDwmjnqFvJINXGyI=; 
 b=Z6eABEsqG28450+P64bvOre/1WX7u0ACZAgBxq16N6/VIcxrbxsO20uRKO81ZJj6zRTFqi
 nLCBy20au5vShYLtS2jx6w4bS6ZR7KJP+xJGTPyywCZtAS6rYIUOCVGpGWAfl6sodB9IqpNR
 W1YWYHwVENgQchqPDUnc2UBGCuEtk=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?[PATCH=203/3]=20libxl:=20create=20~/control/feature-balloon?=
X-Mailer: git-send-email 2.30.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9664e26d-c55d-4176-ac64-680cfb7c5564
X-Bm-Transport-Timestamp: 1683728447378
To: xen-devel@lists.xenproject.org
Cc: xihuan.yang@citrix.com, min.li1@citrix.com, Yann Dirson <yann.dirson@vates.fr>, zithro <slack@rabbit.lu>
Message-Id: <20230510142011.1120417-4-yann.dirson@vates.fr>
In-Reply-To: <20230510142011.1120417-1-yann.dirson@vates.fr>
References: <20230510142011.1120417-1-yann.dirson@vates.fr>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.ff8208a920b241e18369e93a82372d38?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230510:md
Date: Wed, 10 May 2023 14:20:48 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Like other feature controls, it has to be created by the toolstack
before the guest can advertise the feature.

Reported-by: zithro <slack@rabbit.lu>
Signed-off-by: Yann Dirson <yann.dirson@vates.fr>
---
 tools/libs/light/libxl_create.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index ec8eab02c2..85eccc90cd 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -863,6 +863,9 @@ retry_transaction:
     libxl__xs_mknod(gc, t,
                     GCSPRINTF("%s/control/sysrq", dom_path),
                     rwperm, ARRAY_SIZE(rwperm));
+    libxl__xs_mknod(gc, t,
+                    GCSPRINTF("%s/control/feature-balloon", dom_path),
+                    rwperm, ARRAY_SIZE(rwperm));
 
     libxl__xs_mknod(gc, t,
                     GCSPRINTF("%s/data", dom_path),
-- 
2.30.2



Yann Dirson | Vates Platform Developer

XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com


From xen-devel-bounces@lists.xenproject.org Wed May 10 14:20:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 14:20:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532846.829178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwkgT-0005IV-Py; Wed, 10 May 2023 14:20:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532846.829178; Wed, 10 May 2023 14:20:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwkgT-0005IK-N0; Wed, 10 May 2023 14:20:49 +0000
Received: by outflank-mailman (input) for mailman id 532846;
 Wed, 10 May 2023 14:20:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Youv=A7=bounce.vates.fr=bounce-md_30504962.645ba83d.v1-744e931c24524562bfbf8de669d17973@srs-se1.protection.inumbo.net>)
 id 1pwkgS-0004iD-EI
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 14:20:48 +0000
Received: from mail145-2.atl61.mandrillapp.com
 (mail145-2.atl61.mandrillapp.com [198.2.145.2])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dad8aec9-ef3d-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 16:20:47 +0200 (CEST)
Received: from pmta06.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail145-2.atl61.mandrillapp.com (Mailchimp) with ESMTP id 4QGcd956WjzQXlPFg
 for <xen-devel@lists.xenproject.org>; Wed, 10 May 2023 14:20:45 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 744e931c24524562bfbf8de669d17973; Wed, 10 May 2023 14:20:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dad8aec9-ef3d-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1683728445; x=1683988945; i=yann.dirson@vates.fr;
	bh=zAC0Ae3ADV1q0oYr8YU1ywFE0/ZKiWQ1G1f22odEco4=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=Kj1Ho4iXXfCMhGxo3aF+cXoQGKCGeR4P19lSWLIjUBbPsKvH7Q2NL8HrUw0HJcvk4
	 oCN4Bgp1w6Fc1dR1z9hn5kxP14MUlcMCBXMp+vaXMJmNuEmsfNdim7RBIZezTMffjJ
	 3vXWtT9qGf1zd2fQenvD2Xwm3SWwSkuKtBG6GEk4=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1683728445; h=From : 
 Subject : To : Cc : Message-Id : In-Reply-To : References : Date : 
 MIME-Version : Content-Type : Content-Transfer-Encoding : From : 
 Subject : Date : X-Mandrill-User : List-Unsubscribe; 
 bh=zAC0Ae3ADV1q0oYr8YU1ywFE0/ZKiWQ1G1f22odEco4=; 
 b=VsFXzHXGU9IlXJo58FzA8k4ETK4yIM/0qkcbYa2wEMjImyq0KBQTrHwN11Qk0uplLjex3o
 lcCboXhNDg01TfOaBoa+MBvTawS3o4Df3M3T2A6F46OGV6LygGFFLbqkGP9QirohernNbc63
 asF5ammT7nJKv5uPMtX0nx9vvEQyo=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?[PATCH=201/3]=20docs:=20fix=20complex-and-wrong=20xenstore-path=20wording?=
X-Mailer: git-send-email 2.30.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9664e26d-c55d-4176-ac64-680cfb7c5564
X-Bm-Transport-Timestamp: 1683728439569
To: xen-devel@lists.xenproject.org
Cc: xihuan.yang@citrix.com, min.li1@citrix.com, Yann Dirson <yann.dirson@vates.fr>
Message-Id: <20230510142011.1120417-2-yann.dirson@vates.fr>
In-Reply-To: <20230510142011.1120417-1-yann.dirson@vates.fr>
References: <20230510142011.1120417-1-yann.dirson@vates.fr>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.744e931c24524562bfbf8de669d17973?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230510:md
Date: Wed, 10 May 2023 14:20:45 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

"0 or 1 ... to indicate whether it is capable or incapable, respectively"
is luckily just swapped words.  Making this shorter will
make the reading easier.
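The corrected semantics can be illustrated with a small sketch (illustrative Python, not part of the patch; the helper name guest_capable is hypothetical):

```python
from typing import Optional

def guest_capable(value: str) -> Optional[bool]:
    """Interpret a ~/control/feature-* xenstore value.

    "" means the toolstack initialized the node but the guest has not
    reported yet; "1" means capable; "0" means incapable.
    """
    if value == "1":
        return True
    if value == "0":
        return False
    return None
```

A toolstack would sample such a value at the point of issuing the corresponding PV control command and respond accordingly.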

Signed-off-by: Yann Dirson <yann.dirson@vates.fr>
---
 docs/misc/xenstore-paths.pandoc | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index f07ef90f63..a604f6b1c6 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -454,9 +454,8 @@ The precise protocol is not yet documented.
 #### ~/control/feature-suspend = (""|"0"|"1") [w]
 
 These may be initialized to "" by the toolstack and may then be set
-to 0 or 1 by a guest to indicate whether it is capable or incapable,
-respectively, of responding to the corresponding command when written
-to ~/control/shutdown.
+to 0 or 1 by a guest to indicate whether it is capable of responding
+to the corresponding command when written to ~/control/shutdown.
 A toolstack may then sample the feature- value at the point of issuing
 a PV control command and respond accordingly:
 
@@ -507,9 +506,8 @@ string back to the control node.
 #### ~/control/feature-laptop-slate-mode = (""|"0"|"1") [w]
 
 This may be initialized to "" by the toolstack and may then be set
-to 0 or 1 by a guest to indicate whether it is capable or incapable,
-respectively, of responding to a mode value written to
-~/control/laptop-slate-mode.
+to 0 or 1 by a guest to indicate whether it is capable of responding
+to a mode value written to ~/control/laptop-slate-mode.
 
 ### Domain Controlled Paths
 
-- 
2.30.2



Yann Dirson | Vates Platform Developer

XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com


From xen-devel-bounces@lists.xenproject.org Wed May 10 14:20:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 14:20:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532848.829189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwkgW-0005bu-5I; Wed, 10 May 2023 14:20:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532848.829189; Wed, 10 May 2023 14:20:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwkgW-0005bj-0C; Wed, 10 May 2023 14:20:52 +0000
Received: by outflank-mailman (input) for mailman id 532848;
 Wed, 10 May 2023 14:20:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pRsz=A7=bounce.vates.fr=bounce-md_30504962.645ba840.v1-79b70f81d945472da965fc1d3c800edb@srs-se1.protection.inumbo.net>)
 id 1pwkgU-00050N-SL
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 14:20:50 +0000
Received: from mail179-5.suw41.mandrillapp.com
 (mail179-5.suw41.mandrillapp.com [198.2.179.5])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dc275159-ef3d-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 16:20:49 +0200 (CEST)
Received: from pmta12.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail179-5.suw41.mandrillapp.com (Mailchimp) with ESMTP id 4QGcdD1wztzG0CPF8
 for <xen-devel@lists.xenproject.org>; Wed, 10 May 2023 14:20:48 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 79b70f81d945472da965fc1d3c800edb; Wed, 10 May 2023 14:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc275159-ef3d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1683728448; x=1683988948; i=yann.dirson@vates.fr;
	bh=zh8bKuTJDXhG/twy2I/iCoJrgotk0dfjxsaWcv5SUiY=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=con50Mxc/lRoi20nT7KPR1TmqwK4XIu8wlbc9wkoVawOg4ybxoor6ylrPCfzDUKWe
	 8CTw+fAMTOY9Xckla4nxmyNgiIhjJu3zXcmJl9cwTnj9YHiW39WbvvsGuJ1Pq4CoJb
	 L20kIEoh1uSrjbcvX/3ZtaC53cDc2v/qCbm7qaBw=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1683728448; h=From : 
 Subject : To : Cc : Message-Id : In-Reply-To : References : Date : 
 MIME-Version : Content-Type : Content-Transfer-Encoding : From : 
 Subject : Date : X-Mandrill-User : List-Unsubscribe; 
 bh=zh8bKuTJDXhG/twy2I/iCoJrgotk0dfjxsaWcv5SUiY=; 
 b=N3FXD+417OnrqbjigIfwiEMoA7Ugbz0IAVTDbXhkRzJ6X5h0KekRvP9oZ/ddJ4SDC6bg7o
 v+eAcmKSEIRBNEaydIpQUbVCpclEQH1G2ErQbxSowgusidN4C46/J/d2cOXGA7jYL8ns6tSy
 3JsIDF4kMeBW8QSRfcbywZ3PLXVhk=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?[PATCH=202/3]=20docs:=20document=20~/control/feature-balloon?=
X-Mailer: git-send-email 2.30.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9664e26d-c55d-4176-ac64-680cfb7c5564
X-Bm-Transport-Timestamp: 1683728444990
To: xen-devel@lists.xenproject.org
Cc: xihuan.yang@citrix.com, min.li1@citrix.com, Yann Dirson <yann.dirson@vates.fr>
Message-Id: <20230510142011.1120417-3-yann.dirson@vates.fr>
In-Reply-To: <20230510142011.1120417-1-yann.dirson@vates.fr>
References: <20230510142011.1120417-1-yann.dirson@vates.fr>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.79b70f81d945472da965fc1d3c800edb?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230510:md
Date: Wed, 10 May 2023 14:20:48 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

This flag has been in use by XAPI for a long time (it was already
present when the GitHub xen-api/squeezed repo was created in 2013), and
could be used by other toolstacks.

Signed-off-by: Yann Dirson <yann.dirson@vates.fr>
---
 docs/misc/xenstore-paths.pandoc | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index a604f6b1c6..6c4e2c3da2 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -509,6 +509,12 @@ This may be initialized to "" by the toolstack and may then be set
 to 0 or 1 by a guest to indicate whether it is capable of responding
 to a mode value written to ~/control/laptop-slate-mode.
 
+#### ~/control/feature-balloon
+
+This may be initialized to "" by the toolstack and may then be set to
+0 or 1 by a guest to indicate whether it is capable of memory
+ballooning, and responds to values written to ~/memory/target.
+
 ### Domain Controlled Paths
 
 #### ~/data/* [w]
-- 
2.30.2



Yann Dirson | Vates Platform Developer

XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com


From xen-devel-bounces@lists.xenproject.org Wed May 10 14:42:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 14:42:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532874.829208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwl1L-00015V-0s; Wed, 10 May 2023 14:42:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532874.829208; Wed, 10 May 2023 14:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwl1K-00015O-UE; Wed, 10 May 2023 14:42:22 +0000
Received: by outflank-mailman (input) for mailman id 532874;
 Wed, 10 May 2023 14:42:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e4GJ=A7=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pwl1K-00015I-Js
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 14:42:22 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id de8db5ac-ef40-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 16:42:21 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-965d2749e2eso1030987266b.1
 for <xen-devel@lists.xenproject.org>; Wed, 10 May 2023 07:42:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de8db5ac-ef40-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683729740; x=1686321740;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=83YNIqPiNpjGrPITv2xRDuTxFYp7ZVQdya3neYw7aEY=;
        b=N2Uhr2yN86qrROtaUAzgCc5rnQv4HFgiV7dRYt/vgA2mJs0O03w+kfXUD3YR4fbRuJ
         Qcwb/ir0cM3hw9WJ+ewDq+xe//7wgDnPVSnfek4z4titPXZHvKZao3eHPf0kSB9rP2+6
         qSWCIgW+kN6Z7C//mwEpf/qhLUqDIRIQAk/P8CAZBVdntF7Iyo8uc+VCqt5JGSULFWwU
         Cir3oDd6qilfJbKQ5a0D8gv1p7jSy6w/gIUeWBEPYGLPrBfvkm+TGjbwOtw9HLpcCllq
         Sw6gu8mtA3HUgitnlZed+B0ocWwqX/Gu6S9GNcyxXYrSB4Da4MVv4jybCOZrn7/OrzAd
         BpDg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683729740; x=1686321740;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=83YNIqPiNpjGrPITv2xRDuTxFYp7ZVQdya3neYw7aEY=;
        b=FzlUKjwu84Ce2B6lm6jQAfXZDv6mV5Rl5qvmLm4bws2BpHtbBzrC1+PuGZBciAtSkb
         Di49hMTK/i473gNw3EjY/R7vawiR+/JxPZU8lXtO4Kv5D84Q2Mh/8r1+67IXmCZXVy6H
         9bGBeHznwVIf4YS+825bcdINA+IBX62ir5zgS+LpOUBqxiXR7UBplJSrpVYQ7M7MmDW+
         FOoUQMoZjA6ANiG8DMMr5wsWWLeF/XJK5Imzp0PsgwQVGfreaRShPe+W0JkTRHU+G3Km
         cdLReVKScfKnx7r0IMUBfUwZZJL40QBG1Wof/iQ+jGwVMbwrYilsmMEhkBWfc4Hp71CG
         kByA==
X-Gm-Message-State: AC+VfDzX2Ns4V11BE96ZW1AlQfcE422pAriqh2esX4XV8ehPUwogbnUi
	AXjdqQW63rWYVBlML1pSZei02/hqKc4eNMebb8U=
X-Google-Smtp-Source: ACHHUZ6TO7lPHNH+bKvUr4CrwpQOrZgtyft6XS7sSLk0GVNo5CgNr6qEPUE4OUn/fndTnUvdkQfy+3GzzuNNnPu/XbA=
X-Received: by 2002:a17:907:318b:b0:962:9ffa:be19 with SMTP id
 xe11-20020a170907318b00b009629ffabe19mr16116821ejb.5.1683729740576; Wed, 10
 May 2023 07:42:20 -0700 (PDT)
MIME-Version: 1.0
References: <20230508171437.27424-1-olaf@aepfle.de>
In-Reply-To: <20230508171437.27424-1-olaf@aepfle.de>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 10 May 2023 10:42:08 -0400
Message-ID: <CAKf6xpv8Oj4k6bf6fcHk=0gqEAeP95OSygCkA2HZw4dcThKWSA@mail.gmail.com>
Subject: Re: [PATCH v2] Fix install.sh for systemd
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, 
	Anthony PERARD <anthony.perard@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, May 8, 2023 at 1:14 PM Olaf Hering <olaf@aepfle.de> wrote:
>
> On a fedora system, if you run `sudo sh install.sh` you break your
> system.  The installation clobbers /var/run, a symlink to /run.  A
> subsequent boot fails when /var/run and /run are different since
> accesses through /var/run can't find items that now only exist in /run
> and vice-versa.
>
> Skip populating /var/run/xen during make install.
> The directory is already created by some scripts. Adjust all remaining
> scripts to create XEN_RUN_DIR at runtime.
>
> XEN_RUN_STORED is covered by XEN_RUN_DIR because xenstored is usually
> started afterwards.
>
> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Tested-by: Jason Andryuk <jandryuk@gmail.com>

I tested with Fedora/systemd.

Thanks, Olaf.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 10 14:48:43 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180603-mainreport@xen.org>
Subject: [ovmf test] 180603: all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 14:48:37 +0000

flight 180603 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180603/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e6447d2a08f5ca585816d093e79a01dad3781f98
baseline version:
 ovmf                 373a95532a785108f625877d993451d4a6d36713

Last test of basis   180601  2023-05-10 10:12:15 Z    0 days
Testing same since   180603  2023-05-10 13:12:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   373a95532a..e6447d2a08  e6447d2a08f5ca585816d093e79a01dad3781f98 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 10 15:07:26 2023
Message-ID: <6d62ba23-d247-08da-3a84-ed8d1cdc4271@citrix.com>
Date: Wed, 10 May 2023 16:06:26 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/6] x86/cpu-policy: Drop build time cross-checks of
 featureset sizes
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-2-andrew.cooper3@citrix.com>
 <6531df09-7f7f-a1e2-5993-bd2a14e22421@suse.com>
 <18090224-6838-8ed4-6be5-ae10a334e277@citrix.com>
 <bbbd4c8b-e681-f785-b85c-474b380d6160@suse.com>
 <742a5807-dd53-0cd1-d478-aed567d5c4f5@citrix.com>
 <c8cb1df9-33af-8cae-291d-9a86a3b7f6b9@suse.com>
In-Reply-To: <c8cb1df9-33af-8cae-291d-9a86a3b7f6b9@suse.com>

On 09/05/2023 5:15 pm, Jan Beulich wrote:
> On 09.05.2023 17:59, Andrew Cooper wrote:
>> On 09/05/2023 3:28 pm, Jan Beulich wrote:
>>> On 09.05.2023 15:04, Andrew Cooper wrote:
>>>> On 08/05/2023 7:47 am, Jan Beulich wrote:
>>>>> On 04.05.2023 21:39, Andrew Cooper wrote:
>>>>>> These BUILD_BUG_ON()s exist to cover the curious absence of a diagnostic for
>>>>>> code which looks like:
>>>>>>
>>>>>>   uint32_t foo[1] = { 1, 2, 3 };
>>>>>>
>>>>>> However, GCC 12 at least does now warn for this:
>>>>>>
>>>>>>   foo.c:1:24: error: excess elements in array initializer [-Werror]
>>>>>>     884 | uint32_t foo[1] = { 1, 2, 3 };
>>>>>>         |                        ^
>>>>>>   foo.c:1:24: note: (near initialization for 'foo')
>>>>> I'm pretty sure all gcc versions we support diagnose such cases. In turn
>>>>> the arrays in question don't have explicit dimensions at their
>>>>> definition sites, and hence they derive their dimensions from their
>>>>> initializers. So the build-time-checks are about the arrays in fact
>>>>> obtaining the right dimensions, i.e. the initializers being suitable.
>>>>>
>>>>> With the core part of the reasoning not being applicable, I'm afraid I
>>>>> can't even say "okay with an adjusted description".
>>>> Now I'm extra confused.
>>>>
>>>> I put those BUILD_BUG_ON()'s in because I was not getting a diagnostic
>>>> when I was expecting one, and there was a bug in the original featureset
>>>> work caused by this going wrong.
>>>>
>>>> But godbolt seems to agree that even GCC 4.1 notices.
>>>>
>>>> Maybe it was some other error (C file not seeing the header properly?)
>>>> which disappeared across the upstream review?
>>> Or maybe, by mistake, too few initializer fields? But what exactly it
>>> was probably doesn't matter. If this patch is to stay (see below), some
>>> different description will be needed anyway (or the change be folded
>>> into the one actually invalidating those BUILD_BUG_ON()s).
>>>
>>>> Either way, these aren't appropriate, and need deleting before patch 5,
>>>> because the check is no longer valid when a featureset can be longer
>>>> than the autogen length.
>>> Well, they need deleting if we stick to the approach chosen there right
>>> now. If we switched to my proposed alternative, they better would stay.
>> Given that all versions of GCC do warn, I don't see any justification
>> for them to stay.
> All versions warn when the variable declarations / definitions have a
> dimension specified, and then there are excess initializers. Yet none
> of the five affected arrays have a dimension specified in their
> definitions.
>
> Even if dimensions were added, we'd then have only covered half of
> what the BUILD_BUG_ON()s cover right now: There could then be fewer
> than intended initializer fields, and things may still be screwed. I
> think it was for this very reason why BUILD_BUG_ON() was chosen.

???

The dimensions already exist, as proved by the fact GCC can spot the
violation.

On the other hand, zero extending a featureset is explicitly how they're
supposed to work.  How do you think Xapi has coped with migration
compatibility over the years, not to mention the microcode changes which
lengthen a featureset?

So no, there was never any problem with constructs of the form
`uint32_t foo[10] = { 1, }` in the first place.

The BUILD_BUG_ON()s therefore serve no purpose at all.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 10 15:15:04 2023
Date: Wed, 10 May 2023 16:14:39 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Christian Lindig <christian.lindig@cloud.com>,
	Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>, Christian Lindig
	<christian.lindig@citrix.com>
Subject: Re: [PATCH v4 1/3] tools: Modify single-domid callers of
 xc_domain_getinfolist()
Message-ID: <f75f5786-6b1f-47a7-ac3d-e8af79d5f750@perard>
References: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
 <20230509160712.11685-2-alejandro.vallejo@cloud.com>
In-Reply-To: <20230509160712.11685-2-alejandro.vallejo@cloud.com>

On Tue, May 09, 2023 at 05:07:10PM +0100, Alejandro Vallejo wrote:
> xc_domain_getinfolist() internally relies on a sysctl that performs
> a linear search for the domids. Many callers of xc_domain_getinfolist()
> that need information about one specific domid are better off calling
> xc_domain_getinfo_single() instead, which uses the getdomaininfo
> domctl and ensures the returned domid matches the requested one. The
> domctl also finds the domid faster, because it uses hashed lists.
> 
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Christian Lindig <christian.lindig@cloud.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed May 10 15:20:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 15:20:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532903.829248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwlbk-0006Y4-Q4; Wed, 10 May 2023 15:20:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532903.829248; Wed, 10 May 2023 15:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwlbk-0006Xx-M6; Wed, 10 May 2023 15:20:00 +0000
Received: by outflank-mailman (input) for mailman id 532903;
 Wed, 10 May 2023 15:19:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=N27j=A7=citrix.com=prvs=487c252bd=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pwlbj-0006Xr-7f
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 15:19:59 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1e52eeda-ef46-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 17:19:57 +0200 (CEST)
Received: from mail-dm3nam02lp2040.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 May 2023 11:19:54 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA0PR03MB5466.namprd03.prod.outlook.com (2603:10b6:806:c0::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Wed, 10 May
 2023 15:19:52 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::b0b8:8f54:2603:54ec%4]) with mapi id 15.20.6363.032; Wed, 10 May 2023
 15:19:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Wed, 10 May 2023 17:19:47 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] iommu/vtd: fix address translation for superpages
Message-ID: <ZFu2EzMmQvfpE7tJ@Air-de-Roger>
References: <20230509104146.61178-1-roger.pau@citrix.com>
 <bc750b8d-77be-f967-907a-4f19c783ad99@suse.com>
 <ZFtVYEVsELGfZxik@Air-de-Roger>
 <5bde1d79-03ef-6f8b-4b28-8d7e6867ba18@suse.com>
 <ZFtwSjuZaz05DIY0@Air-de-Roger>
 <f53d0041-d694-1221-475e-648a2afd2ff9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f53d0041-d694-1221-475e-648a2afd2ff9@suse.com>
MIME-Version: 1.0

On Wed, May 10, 2023 at 03:30:21PM +0200, Jan Beulich wrote:
> On 10.05.2023 12:22, Roger Pau Monné wrote:
> > On Wed, May 10, 2023 at 12:00:51PM +0200, Jan Beulich wrote:
> >> On 10.05.2023 10:27, Roger Pau Monné wrote:
> >>> On Tue, May 09, 2023 at 06:06:45PM +0200, Jan Beulich wrote:
> >>>> On 09.05.2023 12:41, Roger Pau Monne wrote:
> >>>>> When translating an address that falls inside of a superpage in the
> >>>>> IOMMU page tables the fetching of the PTE physical address field
> >>>>> wasn't using dma_pte_addr(), which caused the returned data to be
> >>>>> corrupt as it would contain bits not related to the address field.
> >>>>
> >>>> I'm afraid I don't understand:
> >>>>
> >>>>> --- a/xen/drivers/passthrough/vtd/iommu.c
> >>>>> +++ b/xen/drivers/passthrough/vtd/iommu.c
> >>>>> @@ -359,16 +359,18 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
> >>>>>  
> >>>>>              if ( !alloc )
> >>>>>              {
> >>>>> -                pte_maddr = 0;
> >>>>>                  if ( !dma_pte_present(*pte) )
> >>>>> +                {
> >>>>> +                    pte_maddr = 0;
> >>>>>                      break;
> >>>>> +                }
> >>>>>  
> >>>>>                  /*
> >>>>>                   * When the leaf entry was requested, pass back the full PTE,
> >>>>>                   * with the address adjusted to account for the residual of
> >>>>>                   * the walk.
> >>>>>                   */
> >>>>> -                pte_maddr = pte->val +
> >>>>> +                pte_maddr +=
> >>>>>                      (addr & ((1UL << level_to_offset_bits(level)) - 1) &
> >>>>>                       PAGE_MASK);
> >>>>
> >>>> With this change you're now violating what the comment says (plus what
> >>>> the comment ahead of the function says). And it says what it says for
> >>>> a reason - see intel_iommu_lookup_page(), which I think your change is
> >>>> breaking.
> >>>
> >>> Hm, but the code in intel_iommu_lookup_page() is now wrong as it takes
> >>> the bits in DMA_PTE_CONTIG_MASK as part of the physical address when
> >>> doing the conversion to mfn?  maddr_to_mfn() doesn't perform any
> >>> masking to remove the bits above PADDR_BITS.
> >>
> >> Oh, right. But that's a missing dma_pte_addr() in intel_iommu_lookup_page()
> >> then. (It would likely be better anyway to switch "uint64_t val" to
> >> "struct dma_pte pte" there, to make more visible that it's a PTE we're
> >> dealing with.) I indeed overlooked this aspect when doing the earlier
> >> change.
> > 
> > I guess I'm still confused, as the other return value for target == 0
> > (when the address is not part of a superpage) does return
> > dma_pte_addr(pte).  I think that needs further fixing then.
> 
> Hmm, indeed. But I think it's worse than this: addr_to_dma_page_maddr()
> also does one too many iterations in that case. All "normal" callers
> supply a positive "target". We need to terminate the walk at level 1
> also when target == 0.

Don't we do that already due to the following check:

if ( --level == target )
    break;

which prevents mapping the PTE address as a page table directory?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 10 15:21:59 2023
Date: Wed, 10 May 2023 16:19:59 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v4 2/3] tools: Use new xc function for some
 xc_domain_getinfo() calls
Message-ID: <4672e3b5-870a-4a83-aba5-1d31d014fd6b@perard>
References: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
 <20230509160712.11685-3-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230509160712.11685-3-alejandro.vallejo@cloud.com>

On Tue, May 09, 2023 at 05:07:11PM +0100, Alejandro Vallejo wrote:
> Move calls that require information about a single precisely identified
> domain to the new xc_domain_getinfo_single().
> 
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed May 10 15:39:46 2023
Message-ID: <747aa708-9bc5-f112-fdaf-039716a35c90@suse.com>
Date: Wed, 10 May 2023 17:39:24 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v5 12/14] tools/xenstore: use generic accounting for
 remaining quotas
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-13-jgross@suse.com>
 <90f5dfd0-e18a-7fcb-9048-057a0656a2b3@xen.org>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <90f5dfd0-e18a-7fcb-9048-057a0656a2b3@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------yQSLl45Mqv8zYhK4frJpF7MS"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------yQSLl45Mqv8zYhK4frJpF7MS
Content-Type: multipart/mixed; boundary="------------xShV1sJ6vodFv7RD3nXISiMA";
 protected-headers="v1"

--------------xShV1sJ6vodFv7RD3nXISiMA
Content-Type: multipart/mixed; boundary="------------0asyYPejNPaaJ0gmqCmMnfzM"

--------------0asyYPejNPaaJ0gmqCmMnfzM
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 09.05.23 21:21, Julien Grall wrote:
> Hi Juergen,
> 
> On 08/05/2023 12:47, Juergen Gross wrote:
>> The maxrequests, node size, number of node permissions, and path length
>> quota are a little bit special, as they are either active in
>> transactions only (maxrequests), or they are just per item instead of
>> count values. Nevertheless being able to know the maximum number of
>> those quota related values per domain would be beneficial, so add them
>> to the generic accounting.
>>
>> The per domain value will never show current numbers other than zero,
>> but the maximum number seen can be gathered the same way as the number
>> of nodes during a transaction.
>>
>> To be able to use the const qualifier for a new function switch
>> domain_is_unprivileged() to take a const pointer, too.
>>
>> For printing the quota/max values, adapt the print format string to
>> the longest quota name (now 17 characters long).
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V5:
>> - add comment (Julien Grall)
>> - add missing quota printing (Julien Grall)
>> ---
>>   tools/xenstore/xenstored_core.c        | 15 +++++----
>>   tools/xenstore/xenstored_core.h        |  2 +-
>>   tools/xenstore/xenstored_domain.c      | 45 +++++++++++++++++++++-----
>>   tools/xenstore/xenstored_domain.h      |  6 ++++
>>   tools/xenstore/xenstored_transaction.c |  4 +--
>>   tools/xenstore/xenstored_watch.c       |  2 +-
>>   6 files changed, 55 insertions(+), 19 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>> index c98d30561f..fce73b883e 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -799,8 +799,9 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, 
>> struct node *node,
>>           + node->perms.num * sizeof(node->perms.p[0])
>>           + node->datalen + node->childlen;
>> -    if (!no_quota_check && domain_is_unprivileged(conn) &&
>> -        data.dsize >= quota_max_entry_size) {
>> +    /* Call domain_max_chk() in any case in order to record max values. */
>> +    if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
>> +        && !no_quota_check) {
>>           errno = ENOSPC;
>>           return errno;
>>       }
>> @@ -1170,7 +1171,7 @@ static bool valid_chars(const char *node)
>>                  "0123456789-/_@") == strlen(node));
>>   }
>> -bool is_valid_nodename(const char *node)
>> +bool is_valid_nodename(const struct connection *conn, const char *node)
>>   {
>>       int local_off = 0;
>>       unsigned int domid;
>> @@ -1190,7 +1191,8 @@ bool is_valid_nodename(const char *node)
>>       if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
>>           local_off = 0;
>> -    if (strlen(node) > local_off + quota_max_path_len)
>> +    if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
>> +               quota_max_path_len))
>>           return false;
>>       return valid_chars(node);
>> @@ -1252,7 +1254,7 @@ static struct node *get_node_canonicalized(struct 
>> connection *conn,
>>       *canonical_name = canonicalize(conn, ctx, name);
>>       if (!*canonical_name)
>>           return NULL;
>> -    if (!is_valid_nodename(*canonical_name)) {
>> +    if (!is_valid_nodename(conn, *canonical_name)) {
>>           errno = EINVAL;
>>           return NULL;
>>       }
>> @@ -1778,8 +1780,7 @@ static int do_set_perms(const void *ctx, struct 
>> connection *conn,
>>           return EINVAL;
>>       perms.num--;
>> -    if (domain_is_unprivileged(conn) &&
>> -        perms.num > quota_nb_perms_per_node)
>> +    if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
>>           return ENOSPC;
>>       permstr = in->buffer + strlen(in->buffer) + 1;
>> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
>> index 3564d85d7d..9339820156 100644
>> --- a/tools/xenstore/xenstored_core.h
>> +++ b/tools/xenstore/xenstored_core.h
>> @@ -258,7 +258,7 @@ void check_store(void);
>>   void corrupt(struct connection *conn, const char *fmt, ...);
>>   /* Is this a valid node name? */
>> -bool is_valid_nodename(const char *node);
>> +bool is_valid_nodename(const struct connection *conn, const char *node);
>>   /* Get name of parent node. */
>>   char *get_parent(const void *ctx, const char *node);
>> diff --git a/tools/xenstore/xenstored_domain.c 
>> b/tools/xenstore/xenstored_domain.c
>> index 6f3a27765a..b3a719569e 100644
>> --- a/tools/xenstore/xenstored_domain.c
>> +++ b/tools/xenstore/xenstored_domain.c
>> @@ -431,7 +431,7 @@ int domain_get_quota(const void *ctx, struct connection 
>> *conn,
>>           return ENOMEM;
>>   #define ent(t, e) \
>> -    resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u\n", #t, \
>> +    resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u\n", #t, \
>>                       d->acc[e].val, d->acc[e].max); \
>>       if (!resp) return ENOMEM
>> @@ -440,6 +440,10 @@ int domain_get_quota(const void *ctx, struct connection 
>> *conn,
>>       ent(transactions, ACC_TRANS);
>>       ent(outstanding, ACC_OUTST);
>>       ent(memory, ACC_MEM);
>> +    ent(transaction-nodes, ACC_TRANSNODES);
>> +    ent(node-permissions, ACC_NPERM);
>> +    ent(path-length, ACC_PATHLEN);
>> +    ent(node-size, ACC_NODESZ);
>>   #undef ent
>> @@ -457,7 +461,7 @@ int domain_max_global_acc(const void *ctx, struct 
>> connection *conn)
>>           return ENOMEM;
>>   #define ent(t, e) \
>> -    resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
>> +    resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
>>                          acc_global_max[e]);         \
>>       if (!resp) return ENOMEM
>> @@ -466,6 +470,10 @@ int domain_max_global_acc(const void *ctx, struct 
>> connection *conn)
>>       ent(transactions, ACC_TRANS);
>>       ent(outstanding, ACC_OUTST);
>>       ent(memory, ACC_MEM);
>> +    ent(transaction-nodes, ACC_TRANSNODES);
>> +    ent(node-permissions, ACC_NPERM);
>> +    ent(path-length, ACC_PATHLEN);
>> +    ent(node-size, ACC_NODESZ);
>>   #undef ent
>> @@ -1079,12 +1087,22 @@ int domain_adjust_node_perms(struct node *node)
>>       return 0;
>>   }
>> +static void domain_acc_valid_max(struct domain *d, enum accitem what,
>> +                unsigned int val)
>> +{
>> +    assert(what < ARRAY_SIZE(d->acc));
>> +    assert(what < ARRAY_SIZE(acc_global_max));
>> +
>> +    if (val > d->acc[what].max)
>> +        d->acc[what].max = val;
>> +    if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
>> +        acc_global_max[what] = val;
>> +}
>> +
>>   static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
>>   {
>>       unsigned int val;
>> -    assert(what < ARRAY_SIZE(d->acc));
>> -
> 
> I didn't have a chance to reply on your comment on the previous version. So 
> doing it here:
> 
>  > Following this reasoning I'd need to put it into even more functions.
> 
> Possibly. But for now, the discussion is about not removing the existing one 
> (see more below).
> 
>  > And an
> assert() triggering a little bit late is no real problem, as it will abort
> xenstored anyway.
> 
> Not really. Xenstored would only be aborted if the condition is false. If it is 
> not, we would return an er
cm9yLiBUaGF0IHNhaWQsIHRoZSBjb25kaXRpb24gdGhhdCBhIGNoYW5nZSB0byBiZSB0cnVl
IA0KPiBpbiBzb21lIGNvbmRpdGlvbi4NCj4gDQo+IEJ1dCBub3cgeW91IGFyZSByZWx5aW5n
IG9uIHRoZSBjb21waWxlciB0byBuZXZlciBvcHRpbWl6ZSBvdXQgdGhlIGNoZWNrLiBXZSBr
bm93IA0KPiB0aGF0IGNvbXBpbGVycyBjYW4gcmVtb3ZlIE5VTEwgY2hlY2sgaWYgYSBwb2lu
dGVyIHdhcyBkZXJlZmVyZW5jZWQgYmVmb3JlaGFuZC4gSSANCj4gd291bGRuJ3QgYmUgc3Vy
cHJpc2VkIGlmIHRoZXkgY2FuIGRvIHRoZSBzYW1lIHRyaWNrIHdpdGggYWNjZXNzaW5nIGFu
IGFycmF5IA0KPiBmaXJzdCBhbmQgdGhlbiBjaGVja2luZyB0aGUgYm91bmQuIFNvIHRoZSBh
Ym9ydCgpIG1heSBhY3R1YWxseSBuZXZlciBoYXBwZW4uDQo+IA0KPiAgPiBBZGRpdGlvbmFs
bHkgd2l0aCB0aGUgZ2xvYmFsIGFuZCB0aGUgcGVyLWRvbWFpbiBhcnJheXMgbm93IGNvdmVy
aW5nIGFsbA0KPiBwb3NzaWJsZSBxdW90YXMsIGl0IHdvdWxkIGV2ZW4gYmUgcmVhc29uYWJs
ZSB0byBkcm9wIHRoZSBhc3NlcnQoKXMgaW4NCj4gZG9tYWluX2FjY192YWxpZF9tYXgoKSBj
b21wbGV0ZWx5Lg0KPiANCj4gSSBoYXZlIHR3byBjb25jZXJuczoNCj4gIMKgICogVGhpcyBp
cyB0aGUgc3RhdGUgYWZ0ZXIgdGhpcyBzZXJpZXMuIEJ1dCBJIGRvbid0IHNlZSB3aGF0IHdv
dWxkIHByZXZlbnQgYW55IA0KPiBjaGFuZ2UgaW4gdGhlIGZ1dHVyZS4NCj4gIMKgICogSWYg
SSBhbSBub3QgbWlzdGFrZW4gbm9uZSBvZiB0aGUgY29tcGlsZXJzIHByb3Blcmx5IGVuZm9y
Y2UgdGhlIGVudW0gaW4gQy4gDQo+IFNvIHlvdSBjb3VsZCBpbiB0aGVvcnkgcGFzcyBhbiBv
dXRzaWRlIHZhbHVlIHdpdGhvdXQgdGhlIGNvbXBpbGVyIHNob3V0aW5nIGF0IHlvdS4NCj4g
DQo+IFNvIHRvIG1lLCB0aGlzIGlzIG5vdCB3YXJyYW50IHRvIGNvbXBsZXRlbHkgZHJvcCB0
aGUgYXNzZXJ0KCkuIElmIHdlIGRpc2NhcmQgdGhlIA0KPiBsYXR0ZXIgcG9pbnQsIHRoZSBp
ZGVhbCB3b3VsZCBiZSBhIEJVSUxEX0JVR19PTigpIHRvIHRpZSB0aGUgZW51bSB3aXRoIHRo
ZSBhcnJheSANCj4gYnV0IElJUkMgaXQgaXMgbm90IHBvc3NpYmxlIHRvIHVzZSBCVUlMRF9C
VUdfT04oKSB3aXRoIGFuIGVudW0uIFRoZXJlZm9yZSwgdGhlIA0KPiBhc3NlcnQoKSBzaG91
bGQgdGhlIGJlc3QgYXQgdGhlIG1vbWVudC4NCj4gDQo+IElkZWFsbHksIHdlIHNob3VsZCBh
ZGQgdGhlIGFzc2VydCgpIGluIG90aGVyIHBsYWNlcy4gQnV0LCB0aGlzIGlzIG5vdCBzb21l
dGhpbmcgDQo+IEkgdGhpbmsgc2hvdWxkIGJlIHJlcXVlc3RlZCBoZXJlLiBNeSBvbmx5IHJl
cXVlc3QgaXMgdG8gbm90IHJlbW92aW5nIHRoZSANCj4gZXhpc3Rpbmcgb25lLg0KDQpPa2F5
LCBJJ2xsIGtlZXAgaXQuDQoNCg0KSnVlcmdlbg0K
--------------0asyYPejNPaaJ0gmqCmMnfzM
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------0asyYPejNPaaJ0gmqCmMnfzM--

--------------xShV1sJ6vodFv7RD3nXISiMA--

--------------yQSLl45Mqv8zYhK4frJpF7MS
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRbuqwFAwAAAAAACgkQsN6d1ii/Ey+Y
3wf/WAQJNbvkiFmTIOJUVnm+T7PO3o6pibhB/pgijn/x3OXFwXDYQqxAmpb16OI9UcfSTK8hfP8a
Eq5m22dFGe2DT8+DP5LdoLi/xS93ZzKKkiafQv6xiapGmwk8ryxLFeRJEW8zQ+VI7SfsCFid1CoA
UwQau5KiqhaZ1pXCB0ve0njsGau+QH8bzfnfC4h1k91lmpMDkyoXo7R/1VRtaXCZdRpiSdTDh2OP
AFbrZGNaFyyiiT7f8FN1f4wnL9Mb0+hbDTEYf8NcQAeXWUqZZyfH5KfuGDx7BEBv1s/XPi8WcA88
kGzBHumtcxNXEOkP6J6Ff6IAiG7M+wx44rtXynNQ7A==
=6KWi
-----END PGP SIGNATURE-----

--------------yQSLl45Mqv8zYhK4frJpF7MS--


From xen-devel-bounces@lists.xenproject.org Wed May 10 15:53:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 15:53:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532933.829277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwm7y-0003fk-QE; Wed, 10 May 2023 15:53:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532933.829277; Wed, 10 May 2023 15:53:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwm7y-0003fd-NS; Wed, 10 May 2023 15:53:18 +0000
Received: by outflank-mailman (input) for mailman id 532933;
 Wed, 10 May 2023 15:53:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ibwy=A7=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pwm7x-0003f4-KO
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 15:53:17 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c7391b8a-ef4a-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 17:53:17 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 833EA1F891;
 Wed, 10 May 2023 15:53:16 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id F39EB13519;
 Wed, 10 May 2023 15:53:15 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id HfDlOeu9W2RyXwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 10 May 2023 15:53:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7391b8a-ef4a-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683733996; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Y1+dOc+fU/giFca0/pJ0B98dgvyjV6/LNsSqdAV5vqM=;
	b=Yac6J7hOywHho1Z/BenEB5NDrTUgFxVxCw6BVvxzlIwBOUaEhbei2THICBEdPcJruXWpCH
	z60ILTCLLj40bT8LGoW8vWhcjYHDhw9qHt1oNmLZLyLhyyDNKnqhEPaBknp28/FaNEKZMx
	/+R0DjBqPRR10nuTYvepW5nIXGAua90=
Message-ID: <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
Date: Wed, 10 May 2023 17:53:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: Borislav Petkov <bp@alien8.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
 mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
 Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
In-Reply-To: <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------PawQkyaJ0Yt3Fc51Y0RtiNKB"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------PawQkyaJ0Yt3Fc51Y0RtiNKB
Content-Type: multipart/mixed; boundary="------------09wjA0c3y8VRVbtv595YsEal";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Borislav Petkov <bp@alien8.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
 mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
 Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
Message-ID: <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
In-Reply-To: <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>

--------------09wjA0c3y8VRVbtv595YsEal
Content-Type: multipart/mixed; boundary="------------00hWa8W2Pg0H9d9bJ6b3L0Kg"

--------------00hWa8W2Pg0H9d9bJ6b3L0Kg
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 10.05.23 15:30, Borislav Petkov wrote:
> On Wed, May 10, 2023 at 01:36:41AM +0200, Borislav Petkov wrote:
>> More staring at this tomorrow, on a clear head.
> 
> Yeah, I'm going to leave it as is. Tried doing a union with bitfields
> but doesn't get any prettier.
> 
> Next crapola:
> 
> The Intel box says now:
> 
> [    8.138683] sgx: EPC section 0x80200000-0x85ffffff
> [    8.204838] pmd_set_huge: Cannot satisfy [mem 0x80200000-0x80400000] with a huge-page mapping due to MTRR override, uniform: 0
> 
> (I've extended the debug output).
> 
> and that happens because
> 
> [    8.174229] mtrr_type_lookup: mtrr_state_set: 1
> [    8.178909] mtrr_type_lookup: start: 0x80200000, cache_map[3].start: 0x88800000
> 
> that's
> 
> 	 if (start < cache_map[i].start) {
> 
> in mtrr_type_lookup(). I fail to see how that check would work for the
> range 0x80200000-0x80400000 and the MTRR map is:
> 
> [    0.000587] MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
> [    0.000588]   0: 0000000000000000-000000000009ffff write-back
> [    0.000589]   1: 00000000000a0000-00000000000bffff uncachable
> [    0.000590]   2: 00000000000c0000-00000000000fffff write-protect
> [    0.000591]   3: 0000000088800000-00000000ffffffff uncachable
> 
> so the UC range comes after this one we request.
> 
> [    8.186372] mtrr_type_lookup: type: 0x6, cache_map[3].type: 0x0
> 
> now the next type merging happens and the 3rd region's type is UC, ofc.
> 
> [    8.192433] type_merge: type: 0x6, new_type: 0x0, effective_type: 0x0, clear uniform
> 
> we clear uniform and we fail:
> 
> [    8.200331] mtrr_type_lookup: ret, uniform: 0
> 
> So this map lookup thing is wrong in this case.
> 

Urgh, yes, there is something missing:

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 031f7ea8e72b..9544e7d13bb3 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -521,8 +521,12 @@ u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform)
         for (i = 0; i < cache_map_n && start < end; i++) {
                 if (start >= cache_map[i].end)
                         continue;
-               if (start < cache_map[i].start)
+               if (start < cache_map[i].start) {
                         type = type_merge(type, mtrr_state.def_type, uniform);
+                       start = cache_map[i].start;
+                       if (end <= start)
+                               break;
+               }
                 type = type_merge(type, cache_map[i].type, uniform);

                 start = cache_map[i].end;


Juergen

--------------00hWa8W2Pg0H9d9bJ6b3L0Kg--

--------------09wjA0c3y8VRVbtv595YsEal--

--------------PawQkyaJ0Yt3Fc51Y0RtiNKB
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRbvesFAwAAAAAACgkQsN6d1ii/Ey+Q
QwgAkpBgR/BUh+gEt6C3HKEER4gJpOBy/GfvWju6wuNWbbsnRR5YMHjahzDRHwVwEL4wk23hbFL9
tZqcuAWRmsBH2yuzGo17qyBI/9Icyi+nmqzyOyYI4+gjU2b7jGpT+VOOUOTTiE9Ndi+/kLlwOppn
o+2JwtymthEaNlIYCofIeAIicK+03tS00OQUndDO4pZJQJXLRPFHrz1yqjRKpDqfUDdL3xhdk1w+
7msuQhcRvlXKoaO9QGFxOCIcnlgiHltgvoJ6OvIMNVZLeNh72YZpidVVm/fjiIseLFWf34o4GABI
YrBpTJZNnYaBnTKj5FqvoFu8AdQCSsO5WEJSmtEDEQ==
=AoW+
-----END PGP SIGNATURE-----

--------------PawQkyaJ0Yt3Fc51Y0RtiNKB--


From xen-devel-bounces@lists.xenproject.org Wed May 10 16:36:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 16:36:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532943.829288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwmnC-0000Jo-0t; Wed, 10 May 2023 16:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532943.829288; Wed, 10 May 2023 16:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwmnB-0000J3-Tq; Wed, 10 May 2023 16:35:53 +0000
Received: by outflank-mailman (input) for mailman id 532943;
 Wed, 10 May 2023 16:35:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwmnA-0000It-Hn; Wed, 10 May 2023 16:35:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwmnA-0007Qd-E7; Wed, 10 May 2023 16:35:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwmn9-0006IO-PL; Wed, 10 May 2023 16:35:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwmn9-0000Ac-Ov; Wed, 10 May 2023 16:35:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=H3lmxjuL8gkymlQCAa1L8w0af6LtD90SKbKnsg59uMM=; b=DSic0T3+iZebTUaE0FdehT6Sfs
	VOL/PQ0/vHeqc19xH0Wtea4uszwl0QKMhBMgNVVEIs5Iw0mFsMlTOlcrQz74CbEBN+pRQiLJfkbW5
	7JgtdPU8ewjQ7R//EyoL8RA6oyrkB59iUrCEcF1+jhaS2SBFd48gxGq8aYKZUaI6O0KU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180598-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180598: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=16a8829130ca22666ac6236178a6233208d425c3
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 16:35:51 +0000

flight 180598 linux-linus real [real]
flight 180605 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180598/
http://logs.test-lab.xenproject.org/osstest/logs/180605/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180605-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                16a8829130ca22666ac6236178a6233208d425c3
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   23 days
Failing since        180281  2023-04-17 06:24:36 Z   23 days   43 attempts
Testing same since   180598  2023-05-10 04:12:30 Z    0 days    1 attempts

------------------------------------------------------------
2361 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 297524 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 10 17:00:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 17:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532954.829299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwnB1-0003oW-Tq; Wed, 10 May 2023 17:00:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532954.829299; Wed, 10 May 2023 17:00:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwnB1-0003oP-PK; Wed, 10 May 2023 17:00:31 +0000
Received: by outflank-mailman (input) for mailman id 532954;
 Wed, 10 May 2023 17:00:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0RdQ=A7=citrix.com=prvs=48752e3ca=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pwnB0-0003oJ-52
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 17:00:30 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2902bee6-ef54-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 19:00:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2902bee6-ef54-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683738028;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=razkh7WSXxnGlMs6f+lGuSwuIwxW+cZSqfJrJE8zXMM=;
  b=iVYMh/FfbH7h9s4w5l0KB1dyJUZtc+sLZB0nvw0VJ1nAIMbPSf10hiZX
   dy/usfQdcYQmR9mvpDecq0D12VCkXYO0jKrYDckA5O0xEi9xQwonc1pAk
   sfuqPiDUb+AfVojjebp3iqFK2h1723xLlwvHu5t0QOi2WiI7YekKoht+K
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108442796
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:H060h6KP3PT8mUkvFE+R+ZUlxSXFcZb7ZxGr2PjKsXjdYENShjBVn
 zcfCDuBbqzeajbxKdAjPY21ph8GvJ7Qz9ZhG1RlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4wVmPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5TX10Tr
 6VHMwkHQQi8gOmznaq4GvlV05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TTHZUJwhzH9
 zyuE2LRKS1dEu6F9TC+2WuhtN3Nu32jCZMMLejtnhJtqALKnTFCYPEMbnO5ruO+kVWWQM9EJ
 gof/S9Gha82/UKDR9TlURm15nKJ1jYMVtwVH+Ak5QWlzqvP/x3fFmUCViRGatEtqIkxXzNC/
 kCNt8PkA3poqrL9YXuF+62dtz+aJSkfJmhEbigBJTbp+PG6/tt11EiWCI8+Tujs1Iad9SzML
 y6iiHYC2u9K0tUy3YqjwlfMhm+0pcjZd1tgjunIZV6N4gR8bY+jQoWn71nH8PpNRLqkokm9U
 GsswJbHsr1XZX2ZvGnUGbhWQun1jxqQGGeE6WODCaXN4NhEF5SLWYlLqA9zK05yWirvUW+4O
 RSD0e+9CXI6AZdLUUOVS9jpYyjJ5fK6fTgAahwzRoUmX3SJXFXblByCnGbJt4wXrGAikLskJ
 bCQetu2AHARBMxPlWTmHLlAjuN7nnxjmAs/oKwXKDz4uYdymVbPEetVWLdwRrpRAFy4TPX9r
 I8EapriJ+R3W+zieCjHmbMuwaQxBSFjX/je8pUHHtNv1yI6QAnN/deNm+J+E2Gk9owJ/tr1E
 oaVBBEAkQCm3yebQehIA1g6AI7SsV9EhSpTFUQR0ZyAgRDPva7HAH8jSqYK
IronPort-HdrOrdr: A9a23:9VqTHa2/FN51js7rEm2CyAqjBSByeYIsimQD101hICG9Kvbo7v
 xG785rliMc6QxhGk3I/OrqBEDuewK4yXcY2+cs1PKZLW/bUQiTXfVfBOnZslnd8kTFn4Y2uc
 hdmupFebrN5DNB7foSlTPIcerIt+P3k5xA692+85/BJjsGV4hQqyNCTiqLGEx/QwdLQbAjEo
 CH28ZBrz28PVwKc8WSHBA+Lp7+juyOsKijTQ8NBhYh5gXLpyiv8qTGHx+R2Qpbey9TwI0l7X
 POn2XCl+yeWrCAu1fhPl3ont5rcejau5Z+7Qu3+4QowwDX+02VjUJaKvK/VX4O0a+SAR0R4a
 HxSl8bTr9OA/S7RBD0nfM4sDOQkQrGrUWSvmOwkD/tp9f0Syk9DNcEjYVFcgHB405lp91k1r
 lXtljpxKa/ICmw7RgV3eK4Jy1Chw6xuz4vgOQTh3tQXc8Xb6JQt5UW+AdQHI0bFCz35Yg7GK
 02Zfuskcp+YBefdTTUr2NvyNujUjA6GQqHWFELvoiQ3yJNlH50wkMEzIgUn2sG9pg6V55Yjt
 60QJhAhfVLVIsbfKh9DOAOTY++DXHMWwvFNCaILVHuBMg8SgzwQl7MkcoIDc2RCeA1JcEJ6e
 78uXtjxBMPR34=
X-Talos-CUID: 9a23:awk5hmxyL6FFml7mt7VXBgUwRPx4KCSC107aDFeHDm1sFaWWGU+prfY=
X-Talos-MUID: =?us-ascii?q?9a23=3AXHsBnQ1iS29B5gPsTApFPkOqYTUjyJjpLnIXrZ8?=
 =?us-ascii?q?9kNCJBx5yHWnF3SWpe9py?=
X-IronPort-AV: E=Sophos;i="5.99,265,1677560400"; 
   d="scan'208";a="108442796"
Date: Wed, 10 May 2023 18:00:21 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Juergen Gross <jgross@suse.com>
CC: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v2] tools: convert bitfields to unsigned type
Message-ID: <b85759aa-183e-4cf3-9137-1427c947c160@perard>
References: <20230508164618.21496-1-olaf@aepfle.de>
 <0e7c1819-f611-1ba1-9f5a-3295eae7f95d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <0e7c1819-f611-1ba1-9f5a-3295eae7f95d@suse.com>

On Tue, May 09, 2023 at 09:10:04AM +0200, Juergen Gross wrote:
> On 08.05.23 18:46, Olaf Hering wrote:
> > clang complains about the signed type:
> > 
> > implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1 [-Wsingle-bit-bitfield-constant-conversion]
> > 
> > The potential ABI change in libxenvchan is covered by the Xen version based SONAME.
> > 
> > Signed-off-by: Olaf Hering <olaf@aepfle.de>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed May 10 17:19:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 17:19:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532966.829308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwnTS-0005Yc-GL; Wed, 10 May 2023 17:19:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532966.829308; Wed, 10 May 2023 17:19:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwnTS-0005YV-De; Wed, 10 May 2023 17:19:34 +0000
Received: by outflank-mailman (input) for mailman id 532966;
 Wed, 10 May 2023 17:19:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwnTR-0005YL-9Z; Wed, 10 May 2023 17:19:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwnTR-00005V-84; Wed, 10 May 2023 17:19:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwnTQ-0007TI-Tw; Wed, 10 May 2023 17:19:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwnTQ-0003xN-TM; Wed, 10 May 2023 17:19:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a+uRgkP7C1QrVdjsIrUSkGKrMopYVdx2rI43qyIQrF0=; b=Gc7C4+sghQPftkfr+G7l27jjJl
	UBZNO2Lur9D1RZETV3f2RPpCc+cmDJ/ZHCMpakWKJ1VcdInvRgdfGITVdezjmGmia14e/kdGXxtYz
	crxdlKhtvOtgZfWpt+bz4NIjif1QtKzw4AFb7Rb4P9VAKooJUH3MnQ9KKrwtRAtquliw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180604-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180604: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6fb2760dc8939b16a906b8e6bb224764907168f3
X-Osstest-Versions-That:
    ovmf=e6447d2a08f5ca585816d093e79a01dad3781f98
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 17:19:32 +0000

flight 180604 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180604/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6fb2760dc8939b16a906b8e6bb224764907168f3
baseline version:
 ovmf                 e6447d2a08f5ca585816d093e79a01dad3781f98

Last test of basis   180603  2023-05-10 13:12:05 Z    0 days
Testing same since   180604  2023-05-10 15:12:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Jiewen Yao <Jiewen.yao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e6447d2a08..6fb2760dc8  6fb2760dc8939b16a906b8e6bb224764907168f3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 10 17:29:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 17:29:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532973.829317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwnca-000783-Bl; Wed, 10 May 2023 17:29:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532973.829317; Wed, 10 May 2023 17:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwnca-00077w-9C; Wed, 10 May 2023 17:29:00 +0000
Received: by outflank-mailman (input) for mailman id 532973;
 Wed, 10 May 2023 17:28:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwncZ-00077m-9j; Wed, 10 May 2023 17:28:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwncZ-0000FP-5G; Wed, 10 May 2023 17:28:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwncY-0007g4-PW; Wed, 10 May 2023 17:28:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwncY-0001F2-P4; Wed, 10 May 2023 17:28:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yXpyR7tL6aCO633kHyLTwrXwTktkNg6RfkaWRKBe3LU=; b=gDJ8RNhP6h3RXyty+DhsRaHoQc
	pcDmem8vu3J777wrt5jUeD4dpW/okMSMDLR8TtfqvKCMoQ3XkykUA0Bk9B7HLKCmgT2MPb0ZFRUak
	7FCgjwLQP5kabaY+VEAbwPn4iHTYW/F5ass+Jgw6T7rP5HndAgiKLj6FHEyBWGWU/iG4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180599-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180599: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=3d6bc5c61101aadd6fca5d558a44a1cba8120178
X-Osstest-Versions-That:
    libvirt=9b8bb536ff999fa61e41869bd98a026b8e23378f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 17:28:58 +0000

flight 180599 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180599/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180552
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180552
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180552
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 libvirt              3d6bc5c61101aadd6fca5d558a44a1cba8120178
baseline version:
 libvirt              9b8bb536ff999fa61e41869bd98a026b8e23378f

Last test of basis   180552  2023-05-06 04:18:49 Z    4 days
Testing same since   180599  2023-05-10 04:18:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Erik Skultety <eskultet@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   9b8bb536ff..3d6bc5c611  3d6bc5c61101aadd6fca5d558a44a1cba8120178 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 10 17:30:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 17:30:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532979.829328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwndw-0008VT-NB; Wed, 10 May 2023 17:30:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532979.829328; Wed, 10 May 2023 17:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwndw-0008VM-K2; Wed, 10 May 2023 17:30:24 +0000
Received: by outflank-mailman (input) for mailman id 532979;
 Wed, 10 May 2023 17:30:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VlwF=A7=hotmail.com=rafael_andreas@srs-se1.protection.inumbo.net>)
 id 1pwndu-0007fU-S5
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 17:30:23 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05olkn2081c.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::81c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5738935c-ef58-11ed-b229-6b7b168915f2;
 Wed, 10 May 2023 19:30:22 +0200 (CEST)
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM (2603:10a6:10:3bf::6) by
 GV1P192MB1954.EURP192.PROD.OUTLOOK.COM (2603:10a6:150:89::7) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6363.32; Wed, 10 May 2023 17:30:19 +0000
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047]) by DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047%6]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 17:30:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5738935c-ef58-11ed-b229-6b7b168915f2
Message-ID:
 <DU0P192MB17005DCCA86057EA3DF33171E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
Date: Wed, 10 May 2023 19:30:17 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN PATCH 2/2] x86/Dom0: Use streaming decompression for ZSTD
 compressed kernels
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1683673597.git.rafael_andreas@hotmail.com>
 <DU0P192MB1700F6DFE45A48D90B5FE679E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
 <283798a0-0c69-7705-aade-6cd6b2c5f3c4@suse.com>
 <DU0P192MB1700F7BB44DC06D67D7AE345E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
 <9b8f033d-321c-412d-d0f3-51d29ac8d238@suse.com>
From: =?UTF-8?Q?Rafa=c3=abl_Kooi?= <rafael_andreas@hotmail.com>
In-Reply-To: <9b8f033d-321c-412d-d0f3-51d29ac8d238@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 10/05/2023 11:48, Jan Beulich wrote:
> First of all - please don't drop Cc-s when replying. I'm restoring
> xen-devel@ here at least.
> 

Apologies, I didn't notice that replying dropped the Cc-s. Should I send
the emails again with the proper Cc-s?

> On 10.05.2023 10:51, Rafaël Kooi wrote:
>> On 10/05/2023 10:03, Jan Beulich wrote:
>>> On 10.05.2023 02:18, Rafaël Kooi wrote:
>>>> On Arch Linux kernel decompression will fail when Xen has been unified
>>>> with the kernel and initramfs as a single binary. This change works for
>>>> both streaming and non-streaming ZSTD content.
>>>
>>> This could do with better explaining what "unified" means, and how
>>> streaming decompression actually makes a difference.
>>>
>>
>> I don't mind explaining it further, but given the EFI documentation
>> for it already exists on xenbits, should I just refer to that?
> 
> You may of course refer to existing documentation. Iirc that doesn't
> cover any compression aspects, though.
> 

Right, I'll think about what ends up being the clearest explanation.

>>>> --- a/xen/common/decompress.c
>>>> +++ b/xen/common/decompress.c
>>>> @@ -3,11 +3,26 @@
>>>>    #include <xen/string.h>
>>>>    #include <xen/decompress.h>
>>>>    
>>>> +typedef struct _ZSTD_state
>>>> +{
>>>> +    void *write_buf;
>>>> +    unsigned int write_pos;
>>>> +} ZSTD_state;
>>>> +
>>>>    static void __init cf_check error(const char *msg)
>>>>    {
>>>>        printk("%s\n", msg);
>>>>    }
>>>>    
>>>> +static int __init cf_check ZSTD_flush(void *buf, unsigned int pos,
>>>> +                                      void *userptr)
>>>> +{
>>>> +    ZSTD_state *state = (ZSTD_state*)userptr;
>>>> +    memcpy(state->write_buf + state->write_pos, buf, pos);
>>>> +    state->write_pos += pos;
>>>> +    return pos;
>>>> +}
>>>
>>> This doesn't really belong here, but will (I expect) go away anyway once
>>> you drop the earlier patch.
>>>
>>
>> The ZSTD_flush will have to stay, as that is how the decompressor will
>> start streaming decompression. The difference will be that the book
>> keeping will be "global" (to the translation unit).
> 
> But this bookkeeping should be entirely in zstd code (i.e. presumably
> unzstd.c).
> 

The implementation of the decompression functions seems to indicate
otherwise. Referring to `unzstd` in unzstd.c, the function takes the
streaming decompression path if either `fill` or `flush` has been
supplied. I cross-checked with unlzma.c and unxz.c, and those seem to
behave similarly with regard to flushing the output data. The `flush`
function is passed a buffer holding a chunk of decompressed data,
with `pos` being the size of the chunk. For the sake of consistency I
don't think it's a good idea to deviate from this behavior in just
unzstd.c.

I could wrap the decompression code in another file and function, but
in my opinion it should stay here and be renamed to something generic
like `stream_flush` or `chunk_flush`.
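To make the flush-callback contract discussed above concrete, here is a
minimal sketch of the pattern: the decompressor hands each decompressed
chunk to the callback, which appends it to a caller-owned buffer. The
names `stream_state` and `stream_flush` are illustrative only, not the
identifiers any final patch would necessarily use.

```c
#include <string.h>

/* Illustrative bookkeeping for streamed output (hypothetical names). */
typedef struct stream_state {
    unsigned char *write_buf;   /* caller-owned destination buffer */
    unsigned int write_pos;     /* bytes flushed so far */
} stream_state;

/*
 * Called once per decompressed chunk: append the chunk after the data
 * already flushed.  Returning the consumed size signals success to the
 * decompressor, matching the unlzma.c/unxz.c convention.
 */
static int stream_flush(void *buf, unsigned int pos, void *userptr)
{
    stream_state *state = userptr;

    memcpy(state->write_buf + state->write_pos, buf, pos);
    state->write_pos += pos;

    return pos;
}
```

Passing such a callback as `flush` is what steers `unzstd` onto its
streaming decompression path.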

>>>>        if ( len >= 4 && !memcmp(inbuf, "\x28\xb5\x2f\xfd", 4) )
>>>> -	return unzstd(inbuf, len, NULL, NULL, outbuf, NULL, error);
>>>> +    {
>>>> +        // NOTE (Rafaël): On Arch Linux the kernel is compressed in a way
>>>> +        // that requires streaming ZSTD decompression. Otherwise decompression
>>>> +        // will fail when using a unified EFI binary. Somehow decompression
>>>> +        // works when not using a unified EFI binary, I suspect this is the
>>>> +        // kernel self decompressing. Or there is a code path that I am not
>>>> +        // aware of that takes care of the use case properly.
>>>
>>> Along the lines of what I've said for the description, this wants to avoid
>>> terms like "somehow" if at all possible.
>>
>> I've used the term "somehow" because I don't know why decompression
>> works when Xen loads the kernel from the EFI file system. I assume the
>> kernel still gets unpacked by Xen, right? Or does the kernel unpack
>> itself?
> 
> The handling of Dom0 kernel decompression ought to be entirely independent
> of EFI vs legacy. Unless I'm wrong with that (mis-remembering), you
> mentioning EFI is potentially misleading. And yes, at least on x86 the
> kernel is decompressed by Xen (by peeking into the supplied bzImage). The
> difference between a plain bzImage and a "unified EFI binary" is what you
> will want to outline in the description (and at least mention in the
> comment). What I'm wondering is whether there simply is an issue with size
> determination when the kernel is taken from the .kernel section.
> 

Assuming you are talking about size determination of the compressed
bzImage: upon further investigation I noticed a discrepancy between
the size of the ZSTD stream and the size reported by the vmlinuz-*
header. I made the streaming decompression code output how many bytes
it reads from the extracted-but-still-compressed bzImage. The code
reads 12,327,560 bytes, but the size of the compressed bzImage in the
header is 12,327,564 bytes. In xen/arch/x86/bzImage.c, `decompress`
is called with `orig_image_len`, whereas the function `output_length`
calculates the end address and subtracts 4 bytes from it. If I remove
the last 4 bytes from the compressed bzImage, then `unzstd
bzImage.zst -o bzImage` unpacks it; otherwise it complains with
`zstd: /*stdin*\: unknown header`. With this new information I think
the correct solution is to try calling `decompress` a second time
with `orig_image_len - 4` if it fails.
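The retry idea at the end of the previous paragraph can be sketched as
follows. `fake_decompress` is a stand-in for the real `decompress` call
(it merely insists on the exact stream length), and the byte counts
reuse the numbers measured above; this is a sketch of the proposed
control flow, not the actual bzImage.c code.

```c
/* Stand-in for decompress(): succeeds only for the true stream size. */
static int fake_decompress(const void *in, unsigned int len)
{
    const unsigned int true_stream_len = 12327560u;

    (void)in;
    return len == true_stream_len ? 0 : -1;
}

/*
 * Proposed flow: try the full length first; on failure, retry with the
 * trailing 4 bytes (the appended length word) stripped.
 */
static int decompress_with_retry(const void *in, unsigned int orig_image_len)
{
    int rc = fake_decompress(in, orig_image_len);

    if ( rc != 0 && orig_image_len > 4 )
        rc = fake_decompress(in, orig_image_len - 4);

    return rc;
}
```

With the header reporting 12,327,564 bytes, the first attempt fails and
the retry at 12,327,560 bytes succeeds.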

>> When I present the v2 of this patch, do I add you as a reviewer? Or will
>> that be done by the merger?
> 
> I'm afraid I don't understand the question. You will continue to Cc
> respective maintainers, which will include me. In case you refer to a
> Reviewed-by: tag - you can only add such tags once they were offered to
> you by the respective person. For this specific one it doesn't mean "an
> earlier version of this was looked at by <person>" but "this is deemed
> okay by <person>".
> 
> Jan

I meant the "Reviewed-by:" tag indeed. Thanks again.

Rafaël


From xen-devel-bounces@lists.xenproject.org Wed May 10 17:50:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 17:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532987.829338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwnx0-0002dc-7w; Wed, 10 May 2023 17:50:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532987.829338; Wed, 10 May 2023 17:50:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwnx0-0002dC-4m; Wed, 10 May 2023 17:50:06 +0000
Received: by outflank-mailman (input) for mailman id 532987;
 Wed, 10 May 2023 17:50:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e4GJ=A7=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pwnwz-0002YO-64
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 17:50:05 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 17440444-ef5b-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 19:50:03 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-50bd37ca954so71378593a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 10 May 2023 10:50:03 -0700 (PDT)
X-Inumbo-ID: 17440444-ef5b-11ed-8611-37d641c3527e
MIME-Version: 1.0
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-8-jandryuk@gmail.com>
 <7db38688-1233-bc16-03f3-7afdc3394054@suse.com> <9cf71407-6209-296a-489a-9732b1928246@suse.com>
In-Reply-To: <9cf71407-6209-296a-489a-9732b1928246@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 10 May 2023 13:49:50 -0400
Message-ID: <CAKf6xptOf7eSzstzjfbbSU0tMBpNjtPEwt2uNzj=2TucrgFRiA@mail.gmail.com>
Subject: Re: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to userspace
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, May 8, 2023 at 6:26 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 01.05.2023 21:30, Jason Andryuk wrote:
> > Extend xen_get_cpufreq_para to return hwp parameters.  These match the
> > hardware rather closely.
> >
> > We need the features bitmask to indicate fields supported by the actual
> > hardware.
> >
> > The use of uint8_t parameters matches the hardware size.  uint32_t
> > entries grow the sysctl_t past the build assertion in setup.c.  The
> > uint8_t ranges are supported across multiple generations, so hopefully
> > they won't change.
>
> Still it feels a little odd for values to be this narrow. Aiui the
> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
> used by HWP. So you could widen the union in struct
> xen_get_cpufreq_para (in a binary but not necessarily source compatible
> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
> placed scaling_cur_freq could be included as well ...

The values are narrow, but they match the hardware.  It works for HWP,
so there is no need to change at this time AFAICT.

Do you want me to make this change?

> > --- a/xen/arch/x86/acpi/cpufreq/hwp.c
> > +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> > @@ -506,6 +506,31 @@ static const struct cpufreq_driver __initconstrel hwp_cpufreq_driver =
> >      .update = hwp_cpufreq_update,
> >  };
> >
> > +int get_hwp_para(const struct cpufreq_policy *policy,
>
> While I don't really mind a policy being passed into here, ...
>
> > +                 struct xen_hwp_para *hwp_para)
> > +{
> > +    unsigned int cpu = policy->cpu;
>
> ... this is its only use afaics, and hence the caller could as well pass
> in just a CPU number?

Sounds good.

> > --- a/xen/include/public/sysctl.h
> > +++ b/xen/include/public/sysctl.h
> > @@ -292,6 +292,31 @@ struct xen_ondemand {
> >      uint32_t up_threshold;
> >  };
> >
> > +struct xen_hwp_para {
> > +    /*
> > +     * bits 6:0   - 7bit mantissa
> > +     * bits 9:7   - 3bit base-10 exponent
> > +     * btis 15:10 - Unused - must be 0
> > +     */
> > +#define HWP_ACT_WINDOW_MANTISSA_MASK  0x7f
> > +#define HWP_ACT_WINDOW_EXPONENT_MASK  0x7
> > +#define HWP_ACT_WINDOW_EXPONENT_SHIFT 7
> > +    uint16_t activity_window;
> > +    /* energy_perf range 0-255 if 1. Otherwise 0-15 */
> > +#define XEN_SYSCTL_HWP_FEAT_ENERGY_PERF (1 << 0)
> > +    /* activity_window supported if 1 */
> > +#define XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  (1 << 1)
> > +    uint8_t features; /* bit flags for features */
> > +    uint8_t lowest;
> > +    uint8_t most_efficient;
> > +    uint8_t guaranteed;
> > +    uint8_t highest;
> > +    uint8_t minimum;
> > +    uint8_t maximum;
> > +    uint8_t desired;
> > +    uint8_t energy_perf;
>
> These fields could do with some more commentary. To be honest I had
> trouble figuring (from the SDM) what exact meaning specific numeric
> values have. Readers of this header should at the very least be told
> where they can turn to in order to understand what these fields
> communicate. (FTAOD this could be section names, but please not
> section numbers. The latter are fine to use in a discussion, but
> they're changing too frequently to make them useful in code
> comments.)

Sounds good.  I'll add some description.

> > +};
>
> Also, if you decide to stick to uint8_t, then the trailing padding
> field (another uint8_t) wants making explicit. I'm on the edge
> whether to ask to also check the field: Right here the struct is
> "get only", and peeking ahead you look to be introducing a separate
> sub-op for "set". Perhaps if you added /* OUT */ at the top of the
> new struct? (But if you don't check the field for being zero, then
> you'll want to set it to zero for forward compatibility.)

Thanks for catching.  I'll add the padding field and zero it.
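The agreed-upon change can be sketched like this: the nine uint8_t
fields after the uint16_t leave one byte of trailing padding, which is
made explicit and zeroed for forward compatibility. The field set
mirrors the quoted patch; `hwp_para_sketch`, `pad`, and the fill
helper are illustrative, not the final public-interface names.

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the struct with the trailing padding made explicit. */
struct hwp_para_sketch {
    uint16_t activity_window;
    uint8_t features;
    uint8_t lowest;
    uint8_t most_efficient;
    uint8_t guaranteed;
    uint8_t highest;
    uint8_t minimum;
    uint8_t maximum;
    uint8_t desired;
    uint8_t energy_perf;
    uint8_t pad;    /* explicit trailing padding; always written as 0 */
};

/* Producer side: zero everything first so `pad` stays 0. */
static void fill_hwp_para(struct hwp_para_sketch *p)
{
    memset(p, 0, sizeof(*p));
    p->lowest = 1;      /* example values only */
    p->highest = 49;
}
```

Zeroing via `memset` keeps the padding byte deterministic, so it can
later be given a meaning without breaking old consumers.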

On Mon, May 8, 2023 at 6:47 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 08.05.2023 12:25, Jan Beulich wrote:
> > On 01.05.2023 21:30, Jason Andryuk wrote:
> >> Extend xen_get_cpufreq_para to return hwp parameters.  These match the
> >> hardware rather closely.
> >>
> >> We need the features bitmask to indicate fields supported by the actual
> >> hardware.
> >>
> >> The use of uint8_t parameters matches the hardware size.  uint32_t
> >> entries grow the sysctl_t past the build assertion in setup.c.  The
> >> uint8_t ranges are supported across multiple generations, so hopefully
> >> they won't change.
> >
> > Still it feels a little odd for values to be this narrow. Aiui the
> > scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
> > used by HWP. So you could widen the union in struct
> > xen_get_cpufreq_para (in a binary but not necessarily source compatible
> > manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
> > placed scaling_cur_freq could be included as well ...
>
> Having seen patch 9 now as well, I wonder whether here (or in a separate
> patch) you don't want to limit providing inapplicable data (for example
> not filling *scaling_available_governors would even avoid an allocation,
> thus removing a possible reason for failure), while there (or again in a
> separate patch) you'd also limit what the tool reports (inapplicable
> output causes confusion / questions at best).

The xenpm output only shows relevant information:

# xenpm get-cpufreq-para 11
cpu id               : 11
affected_cpus        : 11
cpuinfo frequency    : base [1600000] max [4900000]
scaling_driver       : hwp-cpufreq
scaling_avail_gov    : hwp
current_governor     : hwp
hwp variables        :
  hardware limits    : lowest [1] most_efficient [11]
                     : guaranteed [11] highest [49]
  configured limits  : min [1] max [255] energy_perf [128]
                     : activity_window [0 hardware selected]
                     : desired [0 hw autonomous]
turbo mode           : enabled

The scaling_*_freq values, policy->{min,max,cur}, are filled in with
base, max and 0 in hwp_get_cpu_speeds(), so the returned values are
not totally invalid.  Since governor registration is restricted to
internal governors when HWP is active, only the single governor is
available.  I think it's okay as-is, but let me know if you want
something changed.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 10 18:12:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 18:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.532995.829347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwoI3-0005LS-2s; Wed, 10 May 2023 18:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 532995.829347; Wed, 10 May 2023 18:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwoI3-0005LL-0M; Wed, 10 May 2023 18:11:51 +0000
Received: by outflank-mailman (input) for mailman id 532995;
 Wed, 10 May 2023 18:11:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e4GJ=A7=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pwoI1-0005LD-EN
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 18:11:49 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 20b971cd-ef5e-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 20:11:47 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-50bc0ced1d9so11371986a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 10 May 2023 11:11:47 -0700 (PDT)
X-Inumbo-ID: 20b971cd-ef5e-11ed-8611-37d641c3527e
MIME-Version: 1.0
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-10-jandryuk@gmail.com>
 <256fc66c-066f-3f0c-b34b-a237e9268f22@suse.com>
In-Reply-To: <256fc66c-066f-3f0c-b34b-a237e9268f22@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 10 May 2023 14:11:34 -0400
Message-ID: <CAKf6xpu=KiSkjGpyRYBCpYh67XhdtmjvwLjthkpTbE+HoNQm7g@mail.gmail.com>
Subject: Re: [PATCH v3 09/14 RESEND] xenpm: Print HWP parameters
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, May 8, 2023 at 6:43 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 01.05.2023 21:30, Jason Andryuk wrote:
> > Print HWP-specific parameters.  Some are always present, but others
> > depend on hardware support.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> > v2:
> > Style fixes
> > Declare i outside loop
> > Replace repeated hardware/configured limits with spaces
> > Fixup for hw_ removal
> > Use XEN_HWP_GOVERNOR
> > Use HWP_ACT_WINDOW_EXPONENT_*
> > Remove energy_perf hw autonomous - 0 doesn't mean autonomous
> > ---
> >  tools/misc/xenpm.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 65 insertions(+)
> >
> > diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
> > index ce8d7644d0..b2defde0d4 100644
> > --- a/tools/misc/xenpm.c
> > +++ b/tools/misc/xenpm.c
> > @@ -708,6 +708,44 @@ void start_gather_func(int argc, char *argv[])
> >      pause();
> >  }
> >
> > +static void calculate_hwp_activity_window(const xc_hwp_para_t *hwp,
> > +                                          unsigned int *activity_window,
> > +                                          const char **units)
>
> The function's return value would be nice to use for one of the two
> values that are being returned.

Ok, I'll return activity_window.

> > +{
> > +    unsigned int mantissa = hwp->activity_window & HWP_ACT_WINDOW_MANTISSA_MASK;
> > +    unsigned int exponent =
> > +        (hwp->activity_window >> HWP_ACT_WINDOW_EXPONENT_SHIFT) &
> > +            HWP_ACT_WINDOW_EXPONENT_MASK;
>
> I wish we had MASK_EXTR() in common-macros.h. While really a comment on
> patch 7 - HWP_ACT_WINDOW_EXPONENT_SHIFT is redundant information and
> should imo be omitted from the public interface, in favor of just a
> (suitably shifted) mask value. Also note how those constants all lack
> proper XEN_ prefixes.

I'll add a patch adding MASK_EXTR() & MASK_INSR() to common-macros.h
and use those - is there any reason not to do that?

I'll also add XEN_ prefixes.
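For reference, the mask-only extract/insert helpers being discussed
take roughly this shape (this is the common Xen idiom; treat it as a
sketch rather than the exact lines that will land in common-macros.h).
`(m) & -(m)` isolates the mask's lowest set bit, so dividing or
multiplying by it shifts the field without needing a separate
`_SHIFT` constant:

```c
/* Extract the field selected by mask m from value v. */
#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))

/* Insert value v into the field selected by mask m. */
#define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))
```

With these, the activity-window exponent becomes
`MASK_EXTR(hwp->activity_window, 0x380)` and the shift constant can be
dropped from the public interface, as suggested above.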

> > +    unsigned int multiplier = 1;
> > +    unsigned int i;
> > +
> > +    if ( hwp->activity_window == 0 )
> > +    {
> > +        *units = "hardware selected";
> > +        *activity_window = 0;
> > +
> > +        return;
> > +    }
>
> While in line with documentation, any mantissa of 0 results in a 0us
> window, which I assume would then also mean "hardware selected".

I hadn't considered that.  The hardware seems to allow writing a 0
mantissa with a non-0 exponent, and from the SDM it's unclear what
that would mean.  The code as written would display "0 us", "0 ms",
or "0 s" - not "0 hardware selected".  Do you want more explicit
printing for those cases?  I think it's fine to keep the outputs
distinct: "0 hardware selected" is the known valid value that works
as expected, and printing the other encodings differently seems right
since we don't really know what they mean.

> > @@ -773,6 +811,33 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
> >                 p_cpufreq->scaling_cur_freq);
> >      }
> >
> > +    if ( strcmp(p_cpufreq->scaling_governor, XEN_HWP_GOVERNOR) == 0 )
> > +    {
> > +        const xc_hwp_para_t *hwp = &p_cpufreq->u.hwp_para;
> > +
> > +        printf("hwp variables        :\n");
> > +        printf("  hardware limits    : lowest [%u] most_efficient [%u]\n",
>
> Here and ...
>
> > +               hwp->lowest, hwp->most_efficient);
> > +        printf("                     : guaranteed [%u] highest [%u]\n",
> > +               hwp->guaranteed, hwp->highest);
> > +        printf("  configured limits  : min [%u] max [%u] energy_perf [%u]\n",
>
> ... here I wonder what use the underscores are in produced output. I'd
> use blanks. If you really want a separator there, then please use
> dashes.

I'll use blanks.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 10 19:34:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 19:34:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533008.829358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwpZl-0005gP-VM; Wed, 10 May 2023 19:34:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533008.829358; Wed, 10 May 2023 19:34:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwpZl-0005gI-RP; Wed, 10 May 2023 19:34:13 +0000
Received: by outflank-mailman (input) for mailman id 533008;
 Wed, 10 May 2023 19:34:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PWYj=A7=citrix.com=prvs=487e665c6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pwpZk-0005gC-Bq
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 19:34:12 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a12d6ff3-ef69-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 21:34:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a12d6ff3-ef69-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683747248;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=zo3AkidUaDbd0s0WimnpWcrKKliyz/SIRsjhAoV2Jz4=;
  b=Pq5WlovHQNu0arzcZuQ0r57obcir1zj3DbXFPG57ICzH9Y5TYDd+T+S3
   VFoHjtXHwJxekiTXVG2TciwCRcE+OqrgT3LgFY9E2rZ3jDrBnPs0YY9jb
   yWHP5CForSiKlfFhMlwZTLW3Z6CRzJY8+1fzU701eOXwZO6gH6tyjHaLM
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108977908
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,265,1677560400"; 
   d="scan'208";a="108977908"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH] x86: Use printk_once() instead of opencoding it
Date: Wed, 10 May 2023 20:33:57 +0100
Message-ID: <20230510193357.12278-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Technically our helper post-dates all of these examples, but it's good cleanup
nevertheless.  None of these examples should be using fully locked
test_and_set_bool() in the first place.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/cpu/amd.c     | 22 ++++++++--------------
 xen/arch/x86/hvm/vmx/vmx.c | 18 ++++++------------
 xen/arch/x86/srat.c        |  8 ++------
 xen/arch/x86/time.c        | 19 ++++---------------
 4 files changed, 20 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index caafe4474021..630adead2fc1 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -1061,25 +1061,19 @@ static void cf_check init_amd(struct cpuinfo_x86 *c)
 
 		rdmsrl(MSR_AMD64_LS_CFG, value);
 		if (!(value & (1 << 15))) {
-			static bool_t warned;
-
-			if (c == &boot_cpu_data || opt_cpu_info ||
-			    !test_and_set_bool(warned))
-				printk(KERN_WARNING
-				       "CPU%u: Applying workaround for erratum 793\n",
-				       smp_processor_id());
+			if (c == &boot_cpu_data || opt_cpu_info)
+				printk_once(XENLOG_WARNING
+					    "CPU%u: Applying workaround for erratum 793\n",
+					    smp_processor_id());
 			wrmsrl(MSR_AMD64_LS_CFG, value | (1 << 15));
 		}
 	} else if (c->x86 == 0x12) {
 		rdmsrl(MSR_AMD64_DE_CFG, value);
 		if (!(value & (1U << 31))) {
-			static bool warned;
-
-			if (c == &boot_cpu_data || opt_cpu_info ||
-			    !test_and_set_bool(warned))
-				printk(KERN_WARNING
-				       "CPU%u: Applying workaround for erratum 665\n",
-				       smp_processor_id());
+			if (c == &boot_cpu_data || opt_cpu_info)
+				printk_once(XENLOG_WARNING
+					    "CPU%u: Applying workaround for erratum 665\n",
+					    smp_processor_id());
 			wrmsrl(MSR_AMD64_DE_CFG, value | (1U << 31));
 		}
 	}
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 096c69251d58..0f392fc0d4fe 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1183,16 +1183,11 @@ static void cf_check vmx_get_segment_register(
      */
     if ( unlikely(!vmx_vmcs_try_enter(v)) )
     {
-        static bool_t warned;
+        printk_once(XENLOG_WARNING "Segment register inaccessible for %pv\n"
+                    "(If you see this outside of debugging activity,"
+                    " please report to xen-devel@lists.xenproject.org)\n",
+                    v);
 
-        if ( !warned )
-        {
-            warned = 1;
-            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
-                   "(If you see this outside of debugging activity,"
-                   " please report to xen-devel@lists.xenproject.org)\n",
-                   v);
-        }
         memset(reg, 0, sizeof(*reg));
         return;
     }
@@ -2301,10 +2296,9 @@ static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
 static void cf_check vmx_handle_eoi(uint8_t vector, int isr)
 {
     uint8_t old_svi = set_svi(isr);
-    static bool warned;
 
-    if ( vector != old_svi && !test_and_set_bool(warned) )
-        printk(XENLOG_WARNING "EOI for %02x but SVI=%02x\n", vector, old_svi);
+    if ( vector != old_svi )
+        printk_once(XENLOG_WARNING "EOI for %02x but SVI=%02x\n", vector, old_svi);
 }
 
 static void cf_check vmx_enable_msr_interception(struct domain *d, uint32_t msr)
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 56749ddca526..3f70338e6e23 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -55,7 +55,6 @@ nodeid_t setup_node(unsigned pxm)
 {
 	nodeid_t node;
 	unsigned idx;
-	static bool warned;
 	static unsigned nodes_found;
 
 	BUILD_BUG_ON(MAX_NUMNODES >= NUMA_NO_NODE);
@@ -75,11 +74,8 @@ nodeid_t setup_node(unsigned pxm)
 		if (pxm2node[idx].node == NUMA_NO_NODE)
 			goto finish;
 
-	if (!warned) {
-		printk(KERN_WARNING "SRAT: Too many proximity domains (%#x)\n",
-		       pxm);
-		warned = true;
-	}
+	printk_once(XENLOG_WARNING "SRAT: Too many proximity domains (%#x)\n",
+		    pxm);
 
 	return NUMA_NO_NODE;
 
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index bc75e1ae7d42..f5e30d4e0236 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -876,13 +876,8 @@ static void cf_check plt_overflow(void *unused)
         plt_stamp64 += plt_mask + 1;
     }
     if ( i != 0 )
-    {
-        static bool warned_once;
-
-        if ( !test_and_set_bool(warned_once) )
-            printk("Platform timer appears to have unexpectedly wrapped "
-                   "%u%s times.\n", i, (i == 10) ? " or more" : "");
-    }
+        printk_once("Platform timer appears to have unexpectedly wrapped "
+                    "%u%s times.\n", i, (i == 10) ? " or more" : "");
 
     spin_unlock_irq(&platform_timer_lock);
 
@@ -2156,14 +2151,8 @@ void init_percpu_time(void)
         }
         else if ( adj != tsc_adjust[socket] )
         {
-            static bool __read_mostly warned;
-
-            if ( !warned )
-            {
-                warned = true;
-                printk(XENLOG_WARNING
-                       "Differing TSC ADJUST values within socket(s) - fixing all\n");
-            }
+            printk_once(XENLOG_WARNING
+                        "Differing TSC ADJUST values within socket(s) - fixing all\n");
             wrmsrl(MSR_IA32_TSC_ADJUST, tsc_adjust[socket]);
         }
     }

base-commit: 31c65549746179e16cf3f82b694b4b1e0b7545ca
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 10 20:13:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 20:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533013.829368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwqBx-0001zO-Uv; Wed, 10 May 2023 20:13:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533013.829368; Wed, 10 May 2023 20:13:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwqBx-0001zH-S1; Wed, 10 May 2023 20:13:41 +0000
Received: by outflank-mailman (input) for mailman id 533013;
 Wed, 10 May 2023 20:13:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PWYj=A7=citrix.com=prvs=487e665c6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pwqBw-0001zB-6c
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 20:13:40 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 24a8173b-ef6f-11ed-8611-37d641c3527e;
 Wed, 10 May 2023 22:13:37 +0200 (CEST)
Received: from mail-mw2nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 10 May 2023 16:13:34 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5355.namprd03.prod.outlook.com (2603:10b6:5:246::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Wed, 10 May
 2023 20:13:32 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Wed, 10 May 2023
 20:13:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24a8173b-ef6f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683749617;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=leXcpVaqqP8/gqGufFnWQM2LWpzMebms9ji6OtL8E4w=;
  b=LI6AXwUuqpYAW6+KOHnGUBck56q7MSRwt+DXW9e/4ZeemzC0WgJNqT1L
   XtCuXAxF2fmRmeg8ASD+FD/gaCOybKy+hZ9lfnL8bBF/Q5y+WDT19iarF
   8anXRGTM5bdStIoagD95B/LM1V/sZXEp6Ltd/12rsF7IGqwMgxNQeqyn2
   c=;
X-IronPort-RemoteIP: 104.47.55.108
X-IronPort-MID: 108461880
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,265,1677560400"; 
   d="scan'208";a="108461880"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XNYchtnQGr1T52aYNs+XuWuJ26zlEdOUQ/HmfBPouME=;
 b=wNLasuuNBGSGoUciyKtF7Wrv1SPcq1iQqqF++MdzJZLTSD3wujlI8/gxiToPNjcb6vnRG8B6XKmdz27rLsU7fyZlKk0+fbZ78GQfQB5I+QIG2B5R2NuzLHJwf4zfN9RXRH6VSXPTSH3OzEMGnAG1TE+e1qIZh3uXXjEHOzP5sO4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <e4385229-a804-191f-c656-b605c498a292@citrix.com>
Date: Wed, 10 May 2023 21:13:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 5/6] x86/cpu-policy: Disentangle X86_NR_FEAT and
 FEATURESET_NR_ENTRIES
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-6-andrew.cooper3@citrix.com>
 <9613df3b-c49f-6034-ff99-7a6ea9286f0f@suse.com>
 <e888dc16-66bf-fc15-9ddd-f10879b79a5d@citrix.com>
 <f09c9127-1cc4-1ed9-0348-be12c0c999e8@suse.com>
In-Reply-To: <f09c9127-1cc4-1ed9-0348-be12c0c999e8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0145.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:188::6) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM6PR03MB5355:EE_
X-MS-Office365-Filtering-Correlation-Id: d85493ae-7f5d-433f-4874-08db51930696
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d85493ae-7f5d-433f-4874-08db51930696
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2023 20:13:31.9125
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: e2k4VibIfwNR+Q0L3l1uPPl5cfUgr+Fw9HthzonIH8l85seMsXTDYAz0k7ZOWCsbFomVUCv+3tB2xs8KN2NX2HsOLW8BP0WpkOcmttANFMc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5355

On 09/05/2023 3:24 pm, Jan Beulich wrote:
> On 09.05.2023 16:03, Andrew Cooper wrote:
>> On 08/05/2023 8:45 am, Jan Beulich wrote:
>>> On 04.05.2023 21:39, Andrew Cooper wrote:
>>>> When adding new words to a featureset, there is a reasonable amount of
>>>> boilerplate and it is preferable to split the addition into multiple patches.
>>>>
>>>> GCC 12 spotted a real (transient) error which occurs when splitting additions
>>>> like this.  Right now, FEATURESET_NR_ENTRIES is dynamically generated from the
>>>> highest numeric XEN_CPUFEATURE() value, and can be less than what the
>>>> FEATURESET_* constants suggest the length of a featureset bitmap ought to be.
>>>>
>>>> This causes the policy <-> featureset converters to genuinely access
>>>> out-of-bounds on the featureset array.
>>>>
>>>> Rework X86_NR_FEAT to be related to FEATURESET_* alone, allowing it
>>>> specifically to grow larger than FEATURESET_NR_ENTRIES.
>>>>
>>>> Reported-by: Jan Beulich <jbeulich@suse.com>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> While, like you, I could live with the previous patch even if I don't
>>> particularly like it, I'm not convinced of the route you take here.
>> It's the route you tentatively agreed to in
>> https://lore.kernel.org/xen-devel/a282c338-98ab-6c3f-314b-267a5a82bad1@suse.com/
> Right. Yet I deliberately said "may be the best" there, as something
> better might turn up. And getting the two numbers to always agree, as
> suggested, might end up being better.

Then don't write "yes" if what you actually mean is "I'd prefer a
different option if possible", which is a "no".

I cannot read your mind, and we both know I do not have time to waste on
this task.

And now I have to go and spend yet more time doing it differently.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 10 21:19:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 10 May 2023 21:19:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533022.829378 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwrDB-0000Mq-Qo; Wed, 10 May 2023 21:19:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533022.829378; Wed, 10 May 2023 21:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwrDB-0000Mj-Ns; Wed, 10 May 2023 21:19:01 +0000
Received: by outflank-mailman (input) for mailman id 533022;
 Wed, 10 May 2023 21:19:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pwrDA-0000Md-45
 for xen-devel@lists.xenproject.org; Wed, 10 May 2023 21:19:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwrD9-0005f2-PI; Wed, 10 May 2023 21:18:59 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=[192.168.6.195]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pwrD9-00043x-JD; Wed, 10 May 2023 21:18:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=z0fUX7B9LM6LF/MijMo79FjjtItfb1cp+WlY7ZGNAhk=; b=tkf0Pbfk+qImamxAxPWmhxTwVM
	NP5kEM+AWJIUAyRbT3IhcQUFES3KHmOn4T36WkyonKpqBLDbYMdESM2tyJiivMJzyydiJ91bCdf+u
	+Ps+jvzG5SMku2F6GGPDROxRx6qbQcfsaSFF5Q3moQcVQ20BpVtY7NDrnoB2cHEg/2aU=;
Message-ID: <6ff7e2e7-9689-13f4-749d-1b041540fd41@xen.org>
Date: Wed, 10 May 2023 22:18:57 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [RFC PATCH] xen/arm: arm32: Enable smpboot on Arm32 based systems
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com
References: <20230502105849.40677-1-ayan.kumar.halder@amd.com>
 <2d764f29-2eb9-ecff-84cd-9baf12961c54@xen.org>
 <e9a95271-021f-523a-770a-302c638bfe73@amd.com>
 <556611a5-dc9a-8155-650d-327b6853f761@xen.org>
 <22c4da5c-9ad7-68ba-b005-8ba18e584bf6@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <22c4da5c-9ad7-68ba-b005-8ba18e584bf6@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Ayan,

On 04/05/2023 09:57, Ayan Kumar Halder wrote:
> 
> On 03/05/2023 18:43, Julien Grall wrote:
>> Hi Ayan,
> Hi Julien,
>>
>> On 03/05/2023 17:49, Ayan Kumar Halder wrote:
>>>
>>> On 03/05/2023 08:40, Julien Grall wrote:
>>>> Hi,
>>> Hi Julien,
>>>>
>>>> Title: Did you mean "Enable spin table"?
>>> Yes, that would be more concrete.
>>>>
>>>> On 02/05/2023 11:58, Ayan Kumar Halder wrote:
>>>>> On some of the Arm32 based systems (eg Cortex-R52), smpboot is 
>>>>> supported.
>>>>
>>>> Same here.
>>> Yes
>>>>
>>>>> In these systems PSCI may not always be supported. In case of 
>>>>> Cortex-R52, there
>>>>> is no EL3 or secure mode. Thus, PSCI is not supported as it 
>>>>> requires EL3.
>>>>>
>>>>> Thus, we use 'spin-table' mechanism to boot the secondary cpus. The 
>>>>> primary
>>>>> cpu provides the startup address of the secondary cores. This 
>>>>> address is
>>>>> provided using the 'cpu-release-addr' property.
>>>>>
>>>>> To support smpboot, we have copied the code from 
>>>>> xen/arch/arm/arm64/smpboot.c
>>>>
>>>> I would rather prefer if we don't duplicate the code but instead 
>>>> move the logic in common code.
>>> Ack
>>>>
>>>>> with the following changes :-
>>>>>
>>>>> 1. 'enable-method' is an optional property. Refer to the comment in
>>>>> https://www.kernel.org/doc/Documentation/devicetree/bindings/arm/cpus.yaml
>>>>> "      # On ARM 32-bit systems this property is optional"
>>>>
>>>> Looking at this list, "spin-table" doesn't seem to be supported
>>>> for 32-bit systems. 
>>>
>>> However, looking at 
>>> https://developer.arm.com/documentation/den0013/d/Multi-core-processors/Booting-SMP-systems/SMP-boot-in-Linux , it seems "spin-table" is a valid boot mechanism for Armv7 cpus.
>>
>> I am not able to find the associated code in Linux 32-bit. Do you have 
>> any pointer?
> 
> Unfortunately, no.
> 
> I see that in Linux, "spin-table" support is added in
> arch/arm64/kernel/smp_spin_table.c. So, there seems to be no Arm32
> support for this.
>>
>>>
>>>
>>>> Can you point me to the discussion/patch where this would be added?
>>>
>>> Actually, this is the first discussion I am having with regards to 
>>> adding a "spin-table" support on Arm32.
>>
>> I was asking for the discussion on the Device-Tree/Linux ML or code.
>> I don't really want to do a "spin-table" support if this is not even 
>> supported in Linux.
> 
> I see your point. But that brings me to my next question: how do I
> parse cpu-node-specific properties for Arm32 CPUs?

I was probably not very clear in my previous message. What I don't want 
is for Xen to use unofficial bindings.

IOW, if the existing binding doesn't allow spin-table on arm32 (IMHO it
is not clear) and it makes sense to use it, then we should first send a
patch against the documentation and get it merged before Xen can use
the properties.

> 
> In 
> https://www.kernel.org/doc/Documentation/devicetree/bindings/arm/cpus.yaml , I see some of the properties valid for Arm32 cpus.
> 
> For example:-
> 
> secondary-boot-reg
> rockchip,pmu
> 
> Also, it says "additionalProperties: true", which means I can add
> platform-specific properties under the cpu node.

For clarification, are you saying the bindings below are not yet
official? If so, then this should first be discussed with the
Device-Tree folks.

> Please correct me if I am mistaken.
> 
> My cpu nodes will look like this :-
> 
>          cpu@1 {
>              device_type = "cpu";
>              compatible = "arm,armv8";
>              reg = <0x00 0x01>;
>              enable-method = "spin-table";

The enable-method "spin-table" is generic and expects a property like
cpu-release-addr. Yet...

>              amd-cpu-release-addr = <0xEB58C010>; /* might also use 
> "secondary-boot-reg" */
>              amd-cpu-reset-addr = <0xEB58C000>;
>              amd-cpu-reset-delay = <0xF00000>;
>              amd-cpu-re

... these are all AMD properties. What are the reasons to not use the 
generic "spin-table" bindings?

If you can't use it, then I think you should define a new type of 
enable-method.
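
For comparison, the generic binding needs only the standard property. The node below is a sketch following the arm64 spin-table examples in the Linux booting documentation; the compatible string and release address are arbitrary illustrative values:

```dts
cpu@1 {
	device_type = "cpu";
	compatible = "arm,cortex-a53";
	reg = <0x0 0x1>;
	enable-method = "spin-table";
	/* Zero-initialised location the primary writes the entry point to. */
	cpu-release-addr = <0x0 0x8000fff8>;
};
```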

>              phandle = <0x03>;
>          };
> 
>          cpu@2 {
>              device_type = "cpu";
>              compatible = "arm,armv8";
>              reg = <0x00 0x02>;
>              enable-method = "spin-table";
>              amd-cpu-release-addr = <0xEB59C010>; /* might also use 
> "secondary-boot-reg" */
>              amd-cpu-reset-addr = <0xEB59C000>;
>              amd-cpu-reset-delay = <0xF00000>;
>              amd-cpu-re
>              phandle = <0x03>;
>          };
> 
> If the reasoning makes sense, then does the following proposed change
> look sane?

I would first like to understand a bit more about how the bindings were 
created and whether this was discussed with the Device-Tree community.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 10 21:31:44 2023
Message-ID: <91380959-b63d-7d4f-0920-7d87dc0fc19d@xen.org>
Date: Wed, 10 May 2023 22:31:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 05/14] tools/xenstore: use accounting buffering for
 node accounting
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-6-jgross@suse.com>
 <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
 <9745474f-db84-c8f3-3662-95728d4d5bd3@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <9745474f-db84-c8f3-3662-95728d4d5bd3@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 10/05/2023 13:54, Juergen Gross wrote:
> On 09.05.23 20:46, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 08/05/2023 12:47, Juergen Gross wrote:
>>> Add the node accounting to the accounting information buffering in
>>> order to avoid having to undo it in case of failure.
>>>
>>> This requires calling domain_nbentry_dec() before any changes to the
>>> database, as it can now return an error.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V5:
>>> - add error handling after domain_nbentry_dec() calls (Julien Grall)
>>> ---
>>>   tools/xenstore/xenstored_core.c   | 29 +++++++----------------------
>>>   tools/xenstore/xenstored_domain.h |  4 ++--
>>>   2 files changed, 9 insertions(+), 24 deletions(-)
>>>
>>> diff --git a/tools/xenstore/xenstored_core.c 
>>> b/tools/xenstore/xenstored_core.c
>>> index 8392bdec9b..22da434e2a 100644
>>> --- a/tools/xenstore/xenstored_core.c
>>> +++ b/tools/xenstore/xenstored_core.c
>>> @@ -1454,7 +1454,6 @@ static void destroy_node_rm(struct connection 
>>> *conn, struct node *node)
>>>   static int destroy_node(struct connection *conn, struct node *node)
>>>   {
>>>       destroy_node_rm(conn, node);
>>> -    domain_nbentry_dec(conn, get_node_owner(node));
>>>       /*
>>>        * It is not possible to easily revert the changes in a 
>>> transaction.
>>> @@ -1645,6 +1644,9 @@ static int delnode_sub(const void *ctx, struct 
>>> connection *conn,
>>>       if (ret > 0)
>>>           return WALK_TREE_SUCCESS_STOP;
>>> +    if (domain_nbentry_dec(conn, get_node_owner(node)))
>>> +        return WALK_TREE_ERROR_STOP;
>>
>> I think there is a potential issue with the buffering here. In case of 
>> failure, the node could have been removed, but the quota would not be 
>> properly accounted.
> 
> You mean the case where another node has been deleted and due to accounting
> buffering the corrected accounting data wouldn't be committed?
> 
> This is no problem, as an error returned by delnode_sub() will result in
> corrupt() being called, which in turn will correct the accounting data.

To me corrupt() is a big hammer and it feels wrong to call it when I
think we have an easier/faster way to deal with the issue. Could we
instead call acc_commit() before returning?

> 
>> Also, I think a comment would be warranted to explain why we are
>> returning WALK_TREE_ERROR_STOP here when...
>>
>>> +
>>>       /* In case of error stop the walk. */
>>>       if (!ret && do_tdb_delete(conn, &key, &node->acc))
>>>           return WALK_TREE_SUCCESS_STOP;
>>
>> ... this is not the case when do_tdb_delete() fails for some reasons.
> 
> The main idea was that the removal works from the leaves towards the
> root. In case one entry can't be removed, we should just stop.
> 
> OTOH returning WALK_TREE_ERROR_STOP might be cleaner, as this would make 
> sure
> that accounting data is repaired afterwards. Even if do_tdb_delete() can't
> really fail in normal configurations, as the only failure reasons are:
> 
> - the node isn't found (quite unlikely, as we just read it before entering
>    delnode_sub()), or
> - writing the updated data base failed (in normal configurations it is in
>    already allocated memory, so no way to fail that)
> 
> I think I'll switch to return WALK_TREE_ERROR_STOP here.

See above for a different proposal.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 10 22:46:31 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180607-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180607: tolerable all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 22:46:10 +0000

flight 180607 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180607/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  31c65549746179e16cf3f82b694b4b1e0b7545ca
baseline version:
 xen                  ed6b7c0266e512c1207c07911da14e684f47b909

Last test of basis   180592  2023-05-09 21:03:16 Z    1 days
Testing same since   180607  2023-05-10 19:03:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@cloud.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ed6b7c0266..31c6554974  31c65549746179e16cf3f82b694b4b1e0b7545ca -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 10 23:29:08 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180600-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180600: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 10 May 2023 23:29:00 +0000

flight 180600 qemu-mainline real [real]
flight 180608 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180600/
http://logs.test-lab.xenproject.org/osstest/logs/180608/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair        11 xen-install/dst_host fail pass in 180608-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180586
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180586
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180586
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180586
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180586
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180586
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180586
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180586
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                577e648bdb524d1984659baf1bd6165de2edae83
baseline version:
 qemuu                271477b59e723250f17a7e20f139262057921b6a

Last test of basis   180586  2023-05-09 06:38:31 Z    1 days
Testing same since   180600  2023-05-10 05:40:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Babu Moger <babu.moger@amd.com>
  Kim Phillips <kim.phillips@amd.com>
  Michael Roth <michael.roth@amd.com>
  Michael S. Tsirkin <mst@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Santosh Shukla <santosh.shukla@amd.com>
  Thomas Huth <thuth@redhat.com>
  Wei Huang <wei.huang2@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   271477b59e..577e648bdb  577e648bdb524d1984659baf1bd6165de2edae83 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu May 11 01:07:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 01:07:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533052.829418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwuls-0007Xf-8c; Thu, 11 May 2023 01:07:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533052.829418; Thu, 11 May 2023 01:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwuls-0007XY-4L; Thu, 11 May 2023 01:07:04 +0000
Received: by outflank-mailman (input) for mailman id 533052;
 Thu, 11 May 2023 01:07:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6h4R=BA=hotmail.com=rafael_andreas@srs-se1.protection.inumbo.net>)
 id 1pwulo-0007XS-SB
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 01:07:01 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01olkn2071.outbound.protection.outlook.com [40.92.65.71])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 20bb1517-ef98-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 03:06:58 +0200 (CEST)
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM (2603:10a6:10:3bf::6) by
 GV1P192MB1692.EURP192.PROD.OUTLOOK.COM (2603:10a6:150:50::9) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6387.21; Thu, 11 May 2023 01:06:29 +0000
Received: from DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047]) by DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 ([fe80::5056:b334:c71f:b047%6]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 01:06:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20bb1517-ef98-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j5+T3q3BLC986g/fpHXkHPntRmNZp/j3tKwmcTu3FXaNVr/Ne/xp0rvkK6FnzTmVXhkoG16ZCAmjuwbsP8TCm7CxMZc3Ysm4hzLf/5rEJq4BvHQI6zLPXdhqO2/2Ns2hjPU0T7fZXAcHazQ87//tTPm2jH5sP9W4yy9+gFjQRxu25rEexu1arNSZn3sSVABQUaXQI5oHMZ5RuWMH0Y+sdQMe0cq8hN8wmlPQwdxBjzldF2x5gIuUhYjXvEuwFF2zWOS/Zkm7jV+xpDvoq90W/rBsE+4zh1MCCa+EP8FOH5C7V6WZGykKwbp7MOfpqRrhWvC/1BuiCetiWDVrkRKPog==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=a8efdFnv7QyOwR7v+2Lo1+b/mnbyjE1GIiKJ+okTHsg=;
 b=C9dFN37JEl0pvmtHsfcuup/fhUH8IVVTV4dAhbrwJkfKcDCw7W67WasYhgiT0+A2v9eYy5OuommszxxG6rNT6iH6b9mvfEjwQCJfEQmbRlbRPBsp+zYkPpTOZNna6dTGIUvPTIYZzN0xG/d0auiElZYpf9r8+BB/lTN8o7QFBVd+ufnSakTcEnLntgCxc5qDfAoj7n+miSTb5vgAz7hhGWoy7JZyFl8uIK8US7ZgIBFS477WhG+KQQk/p15ljDMsvt3ysa9N/qW0ltCXX9eiS09eSRqSobMUfipEwkUFtPGKm4QH2q5vrxMEWoqITioOmtCauCerY37No9IVOabf+A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none; dmarc=none;
 dkim=none; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=hotmail.com;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a8efdFnv7QyOwR7v+2Lo1+b/mnbyjE1GIiKJ+okTHsg=;
 b=DUhRVo/ez7DJt9TuArR4ydBkZOMclV7esK+ViohNZSnE18EByHLVl0UI1WC5bE9aeuCoMpz/KNoXLrfaL7oXU8IXGJ0psDS3f33rB1ryTo+wQ8PERLbDmpanUoi00RdL/gCUPJh+4dblxgeHwrS2mlsqghFMqRlPKaOCc6p9vvKp4b0P0Q6WhGSGDzqoH9GMJKGiCmDUh8JwCbhf7E3qDU65rE6iaAR83g7ckoIsxF6KA0c1Ziv7G2qb9LHYmdkRuIyTbAAGFMlDQE44px/MfyMUOTuskxxZiNiipWVQ0UgJXm6gywHNPh8rcDcvVtveGyMrK9FheYZU5y3u1CfkWw==
Message-ID:
 <DU0P192MB1700618BAD55DBD486F27D56E3749@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
Date: Thu, 11 May 2023 03:06:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: DROPME [XEN PATCH 0/2] Use streaming decompression for ZSTD
 kernels
Content-Language: en-US
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>
References: <DU0P192MB170087F1C604F82B946E0CD5E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
From: =?UTF-8?Q?Rafa=c3=abl_Kooi?= <rafael_andreas@hotmail.com>
In-Reply-To: <DU0P192MB170087F1C604F82B946E0CD5E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-TMN: [PiJqp1sftzF4McVFj2RHPyaxk95Su+QqrQkVZvwjYeqdQoyWeyTVQeoIUexkRF2Z]
X-ClientProxiedBy: LO4P265CA0044.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ac::23) To DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:10:3bf::6)
X-Microsoft-Original-Message-ID:
 <33adcacb-9201-f588-028e-59a31fdfc7dc@hotmail.com>
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DU0P192MB1700:EE_|GV1P192MB1692:EE_
X-MS-Office365-Filtering-Correlation-Id: 49d9608c-0e84-4c90-428c-08db51bbf31e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RhSLZgEbWASSNgWec9NtqtcDdXwKclnhLqWWa5DGsSFPXZo1DzNECloqTkm3h4RtwPJ3wvDtq4qbkK4AQCJ8+oEAgghGLK6/gnxlCiSkcfk8r5Oi7Xj7mhpi9CUibQTUdwzrgxue7S86m22N3x2eEn3gA8SuZI4g56qJicQE/o4v7R/CZQNXe9QBsUhk02q66yPTOyF/eg4O2X+byLecXbAgl4LZOcWhXZs+Mv8VwvgQ9fvHd6/l6YPlgVygWcG1mgyoFlY1Y8evy6lDw9Y8FPCOWNrCoCDZ+b2gRxUHlVMziL26EvhT7zHQoPo/hTmJe+chzWG9sEd4CwOBG7bGbVmJ1iGCxR+ykY6lmGiBXgqhNOtQgYe6eTXD1J7bmbc1Sa6V9kZ+bf8Vay7id4XA0wHoR+t1JT805uwIxkDAah4MEwNhYdXI1Hp0L86twQGRcBst8IlntOCNcoDnZepiWW49Rnhuueg7srhaVWUje4FNhV31EYjuvi5OYjWugYGFOIx/M7CWRa9FAHV2LlM/Ig+Ni7E9X62Hm32IBhBn4L0b/tBkhfGJJbOras/9zOEnrP5r+BZUzmAy9YBSXHWzIGIgWTEUAKUhcUBlv4zJiKc=
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?c1NEcnVpYmQ3OWU1Mmp2anRiY2k4VnA2L1VRcW1uZGcwWFc4Y0tZUnpJWk4y?=
 =?utf-8?B?K0RacG05Y1JlN3VsUVF0ckx3VzVYb29TR0p5UHBMbUFmbXJseWNncVAvUkIw?=
 =?utf-8?B?ck1FeXFPck04dUdudEJqcUkwb3RMaU01aTE1UEJlSitaWnJLRlZBdCtPSWda?=
 =?utf-8?B?RUM5SXNmUk8rRkVXSWlWcmtIOWdVd0dlWlM5NkwzUk5sVmwxdVZVcmRCazBX?=
 =?utf-8?B?cFpCaDJRaVl6aEpOMkdEOEhDanM1bzBrOTVGTXhjVzZmYkZnbUdhUERhNFFq?=
 =?utf-8?B?WitERXU5UGF3MlJXZEc5QndDYUxmd1YyMEp2TjBnYUc3dVdVdXpOaGRrS1pY?=
 =?utf-8?B?ejhBQmpNVlRNdGxjQWQ5RUZIM3M3RzlFQks4RHJtUHVib05SdTZlTFJaeFRK?=
 =?utf-8?B?RElRRXFtc0Nad05wVDFZRVpna0k2Yk8waVVVY3Q5ZFd6QlVYa0tSOHNEd0NC?=
 =?utf-8?B?by8zK29zUlM5VDVOdEtUUEt1UHRyR2h1d0p5NjVkSnYxUXkzNXZKbXdueW1G?=
 =?utf-8?B?NjlzSVltK0dEZTdSbUdKYjBRb2J2NU5hTmxMT0lYczRPZnhjL3l0V0wwWWpR?=
 =?utf-8?B?S3kxaUREanhCbDB4WEd4Y3ZqbmVpciswYjFET3hLZm9ZbGhCUWdoZllKOXZG?=
 =?utf-8?B?dXlrWm5CNzZwd2plYlN2UFBQV0s4RXJkMHZRcXFGMVh5R3pIeFFaa1g5bFZ5?=
 =?utf-8?B?dUJ4VksvZ3QxRTRhRFAvV3lCK21hVjI4NHpvQU45bE9OcnpUL0l2Y3cxL1hO?=
 =?utf-8?B?eHlReWxIdCt4ekdHbFQrc2VBblFvK1JyRk5ZTlBnK1pLeUhGZk1mZEpzT3Rx?=
 =?utf-8?B?aEpjaW41d2F2OUxMZEUzeTl2OVk0UmxpWEgyV2NVVkN6elpBNFlEM3FUVTM1?=
 =?utf-8?B?cnNVdkVSZ3Ixa2N3M0cyRXhtNCtNdTdJZW50MFlrbS8ycnRVaFFrU2xkZzRt?=
 =?utf-8?B?RTdNTExjNXNYWFVOOFcwU2c5OWhjK21neWEzNGdRT3ljdE9Wd09qeWFFOHky?=
 =?utf-8?B?TWEyOHBORVp5RGJpT1M0WEl6T3pwQ2dTWmEybHU2V0JLSVoxK0M0aE1BQzQ1?=
 =?utf-8?B?ZFRXOTc4MHZhQjkwVDRYVGR2QlNoRWdrRm0zSWNPMEFpck80VXNMeEJnQTRX?=
 =?utf-8?B?bzIzTGxTbVAwU3ZSa1RRUlBrT0VTU1JIb1V4TmdaQjltaGNOSW9ScURhSVBl?=
 =?utf-8?B?SmhUc3dua2xlaGJOcXBkN29wNlZPMWxMbWI1QXFacVZybHljZ0l2VDZEOXJR?=
 =?utf-8?B?bXJDTlI1Q3hOczZiZG5iQVZOUlZQYnl5SmorQW1uWitoMGU3UExTTEVKRHdt?=
 =?utf-8?B?K3NYbmFERXE5NWxuMnVuTHNSRFY5K2RlVEJpRHRoODl4R1lwMVRicG1pMU9T?=
 =?utf-8?B?djBLOEw4bCtQREQvNWdwLy8xSWhVQ1ZiOG13Rk1vdkxyNG1Ecnp5UW5jc2JZ?=
 =?utf-8?B?S3dveHorQ2VMazlxbjJMQ1NiVi9md0ZSZExPZmlndm5Oc3RieThTZmZZRDg1?=
 =?utf-8?B?VzJZSlVYRDdNWXhCVHhiT21BWTRhRmJveXU3a3lqa3RCSkhzVkdyc3hTbGJC?=
 =?utf-8?B?aGcwc2NZWmNXZVdqWGdKNENqR0FyTFlWa1NDQ2xPQWliclAyMXY0aWV0ZmNC?=
 =?utf-8?B?YllkNCt6M2hPamtPVjV6WmFpeGdxT09MTXltMTZ2dDlZY2QyU3M4KzZWdnJ6?=
 =?utf-8?B?QU4rY3ZBY29nd2NWczcvdUpGUTZLR3hYVGlrbjc5T09mSmExVTFTZXllMGNS?=
 =?utf-8?Q?4ElVVSPqWn7HObU46k=3D?=
X-OriginatorOrg: sct-15-20-4755-11-msonline-outlook-fb43a.templateTenant
X-MS-Exchange-CrossTenant-Network-Message-Id: 49d9608c-0e84-4c90-428c-08db51bbf31e
X-MS-Exchange-CrossTenant-AuthSource: DU0P192MB1700.EURP192.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 01:06:28.7394
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg:
	00000000-0000-0000-0000-000000000000
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1P192MB1692

On 10/05/2023 02:18, Rafaël Kooi wrote:
> I've attempted to get Xen to boot Arch Linux as a unified EFI binary.
> Using https://xenbits.xen.org/docs/unstable/misc/efi.html as my source
> of information, I've been able to build a unified binary. When trying to
> boot the kernel Xen complains that the stream is corrupt
> ("ZSTD-compressed data is corrupt"). I've been able to reproduce the
> issue locally in user-mode, and confirmed that the issue is also present
> in the latest ZSTD version.
> 
> Using streaming decompression the kernel gets unpacked properly and the
> output is the same as if doing `cat kernel.zst | unzstd > bzImage`.
> 
> A problem I ran into was that adding bookkeeping state to decompress.c
> would result in either a .data or a .bss.* section being added, which
> the linker then complains about. Since I am not familiar with this
> code, or with why it is laid out this way, I opted instead to add a
> user pointer to the internal decompression API.
> 
> Rafaël Kooi (2):
>    xen/decompress: Add a user pointer for book keeping in the callbacks
>    x86/Dom0: Use streaming decompression for ZSTD compressed kernels
> 
>   xen/common/bunzip2.c         | 23 ++++++++++++----------
>   xen/common/decompress.c      | 37 ++++++++++++++++++++++++++++++------
>   xen/common/unlz4.c           | 15 ++++++++-------
>   xen/common/unlzma.c          | 30 +++++++++++++++++------------
>   xen/common/unlzo.c           | 13 +++++++------
>   xen/common/unxz.c            | 11 ++++++-----
>   xen/common/unzstd.c          | 13 +++++++------
>   xen/include/xen/decompress.h | 10 +++++++---
>   8 files changed, 97 insertions(+), 55 deletions(-)
> 
> --
> 2.40.0
> 

This patch can be dropped in its entirety. The corruption it works
around turned out to be a symptom of two unrelated problems: the SSD
in my laptop is dead, and I had messed up the alignment of the
sections inserted into xen.efi.

I plan to add some sanity checks to the EFI boot loader code to inform
the user if the sections are misaligned. That's a much more
user-friendly error than whatever the decompressor happens to report.

To Jan, sorry for wasting your time, and thanks again for being patient
with me.

Rafaël


From xen-devel-bounces@lists.xenproject.org Thu May 11 01:15:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 01:15:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533058.829428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwutY-0000dt-1G; Thu, 11 May 2023 01:15:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533058.829428; Thu, 11 May 2023 01:15:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwutX-0000dm-Tk; Thu, 11 May 2023 01:14:59 +0000
Received: by outflank-mailman (input) for mailman id 533058;
 Thu, 11 May 2023 01:14:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UrGG=BA=huawei.com=linyunsheng@srs-se1.protection.inumbo.net>)
 id 1pwutW-0000dg-HQ
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 01:14:58 +0000
Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3b42950c-ef99-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 03:14:54 +0200 (CEST)
Received: from dggpemm500005.china.huawei.com (unknown [172.30.72.57])
 by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4QGv5h6001zsRVs;
 Thu, 11 May 2023 09:12:56 +0800 (CST)
Received: from localhost.localdomain (10.69.192.56) by
 dggpemm500005.china.huawei.com (7.185.36.74) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 11 May 2023 09:14:49 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b42950c-ef99-11ed-8611-37d641c3527e
From: Yunsheng Lin <linyunsheng@huawei.com>
To: <davem@davemloft.net>, <kuba@kernel.org>, <pabeni@redhat.com>
CC: <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>, Yunsheng Lin
	<linyunsheng@huawei.com>, Leon Romanovsky <leonro@nvidia.com>, Simon Horman
	<simon.horman@corigine.com>, Igor Russkikh <irusskikh@marvell.com>, Eric
 Dumazet <edumazet@google.com>, Michael Chan <michael.chan@broadcom.com>, Raju
 Rangoju <rajur@chelsio.com>, Ajit Khaparde <ajit.khaparde@broadcom.com>,
	Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>, Somnath Kotur
	<somnath.kotur@broadcom.com>, Claudiu Manoil <claudiu.manoil@nxp.com>,
	Dimitris Michailidis <dmichail@fungible.com>, Thomas Petazzoni
	<thomas.petazzoni@bootlin.com>, Saeed Mahameed <saeedm@nvidia.com>, Leon
 Romanovsky <leon@kernel.org>, "Michael S. Tsirkin" <mst@redhat.com>, Jason
 Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>, Ronak
 Doshi <doshir@vmware.com>, VMware PV-Drivers Reviewers
	<pv-drivers@vmware.com>, Wei Liu <wei.liu@kernel.org>, Paul Durrant
	<paul@xen.org>, Alexei Starovoitov <ast@kernel.org>, Daniel Borkmann
	<daniel@iogearbox.net>, Andrii Nakryiko <andrii@kernel.org>, Martin KaFai Lau
	<martin.lau@linux.dev>, Song Liu <song@kernel.org>, Yonghong Song
	<yhs@fb.com>, John Fastabend <john.fastabend@gmail.com>, KP Singh
	<kpsingh@kernel.org>, Stanislav Fomichev <sdf@google.com>, Hao Luo
	<haoluo@google.com>, Jiri Olsa <jolsa@kernel.org>, Boris Pismenny
	<borisp@nvidia.com>, Steffen Klassert <steffen.klassert@secunet.com>, Herbert
 Xu <herbert@gondor.apana.org.au>, <linux-rdma@vger.kernel.org>,
	<virtualization@lists.linux-foundation.org>,
	<xen-devel@lists.xenproject.org>, <bpf@vger.kernel.org>
Subject: [PATCH net-next v2 1/2] net: introduce and use skb_frag_fill_page_desc()
Date: Thu, 11 May 2023 09:12:12 +0800
Message-ID: <20230511011213.59091-2-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20230511011213.59091-1-linyunsheng@huawei.com>
References: <20230511011213.59091-1-linyunsheng@huawei.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.69.192.56]
X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To
 dggpemm500005.china.huawei.com (7.185.36.74)
X-CFilter-Loop: Reflected

Most callers use __skb_frag_set_page()/skb_frag_off_set()/
skb_frag_size_set() to fill in the page descriptor for an
skb frag. Introduce skb_frag_fill_page_desc() to do all
three in one call.

net/bpf/test_run.c does not call skb_frag_off_set() to set
the offset; "copy_from_user(page_address(page), ...)" and
'shinfo' being part of the 'data' kzalloc()ed in
bpf_test_init() suggest it relies on the offset already
being zero, so call skb_frag_fill_page_desc() with a zero
offset in that case.

Also remove skb_frag_set_page(), as it no longer has any
users.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
---
 .../net/ethernet/aquantia/atlantic/aq_ring.c  |  6 ++--
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     |  5 ++-
 drivers/net/ethernet/chelsio/cxgb3/sge.c      |  5 ++-
 drivers/net/ethernet/emulex/benet/be_main.c   | 32 ++++++++++---------
 drivers/net/ethernet/freescale/enetc/enetc.c  |  5 ++-
 .../net/ethernet/fungible/funeth/funeth_rx.c  |  5 ++-
 drivers/net/ethernet/marvell/mvneta.c         |  5 ++-
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  4 +--
 drivers/net/ethernet/sun/cassini.c            |  8 ++---
 drivers/net/virtio_net.c                      |  4 +--
 drivers/net/vmxnet3/vmxnet3_drv.c             |  4 +--
 drivers/net/xen-netback/netback.c             |  4 +--
 include/linux/skbuff.h                        | 27 ++++++----------
 net/bpf/test_run.c                            |  3 +-
 net/core/gro.c                                |  4 +--
 net/core/pktgen.c                             | 13 +++++---
 net/core/skbuff.c                             |  7 ++--
 net/tls/tls_device.c                          | 10 +++---
 net/xfrm/xfrm_ipcomp.c                        |  5 +--
 19 files changed, 64 insertions(+), 92 deletions(-)

diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
index 7f933175cbda..4de22eed099a 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
@@ -532,10 +532,10 @@ static bool aq_add_rx_fragment(struct device *dev,
 					      buff_->rxdata.pg_off,
 					      buff_->len,
 					      DMA_FROM_DEVICE);
-		skb_frag_off_set(frag, buff_->rxdata.pg_off);
-		skb_frag_size_set(frag, buff_->len);
 		sinfo->xdp_frags_size += buff_->len;
-		__skb_frag_set_page(frag, buff_->rxdata.page);
+		skb_frag_fill_page_desc(frag, buff_->rxdata.page,
+					buff_->rxdata.pg_off,
+					buff_->len);
 
 		buff_->is_cleaned = 1;
 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index dcd9367f05af..efaff5018af8 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -1085,9 +1085,8 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
 			    RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT;
 
 		cons_rx_buf = &rxr->rx_agg_ring[cons];
-		skb_frag_off_set(frag, cons_rx_buf->offset);
-		skb_frag_size_set(frag, frag_len);
-		__skb_frag_set_page(frag, cons_rx_buf->page);
+		skb_frag_fill_page_desc(frag, cons_rx_buf->page,
+					cons_rx_buf->offset, frag_len);
 		shinfo->nr_frags = i + 1;
 		__clear_bit(cons, rxr->rx_agg_bmap);
 
diff --git a/drivers/net/ethernet/chelsio/cxgb3/sge.c b/drivers/net/ethernet/chelsio/cxgb3/sge.c
index efa7f401529e..2e9a74fe0970 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/sge.c
@@ -2184,9 +2184,8 @@ static void lro_add_page(struct adapter *adap, struct sge_qset *qs,
 	len -= offset;
 
 	rx_frag += nr_frags;
-	__skb_frag_set_page(rx_frag, sd->pg_chunk.page);
-	skb_frag_off_set(rx_frag, sd->pg_chunk.offset + offset);
-	skb_frag_size_set(rx_frag, len);
+	skb_frag_fill_page_desc(rx_frag, sd->pg_chunk.page,
+				sd->pg_chunk.offset + offset, len);
 
 	skb->len += len;
 	skb->data_len += len;
diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 7e408bcc88de..3164ed205cf7 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -2343,11 +2343,10 @@ static void skb_fill_rx_data(struct be_rx_obj *rxo, struct sk_buff *skb,
 		hdr_len = ETH_HLEN;
 		memcpy(skb->data, start, hdr_len);
 		skb_shinfo(skb)->nr_frags = 1;
-		skb_frag_set_page(skb, 0, page_info->page);
-		skb_frag_off_set(&skb_shinfo(skb)->frags[0],
-				 page_info->page_offset + hdr_len);
-		skb_frag_size_set(&skb_shinfo(skb)->frags[0],
-				  curr_frag_len - hdr_len);
+		skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[0],
+					page_info->page,
+					page_info->page_offset + hdr_len,
+					curr_frag_len - hdr_len);
 		skb->data_len = curr_frag_len - hdr_len;
 		skb->truesize += rx_frag_size;
 		skb->tail += hdr_len;
@@ -2369,16 +2368,17 @@ static void skb_fill_rx_data(struct be_rx_obj *rxo, struct sk_buff *skb,
 		if (page_info->page_offset == 0) {
 			/* Fresh page */
 			j++;
-			skb_frag_set_page(skb, j, page_info->page);
-			skb_frag_off_set(&skb_shinfo(skb)->frags[j],
-					 page_info->page_offset);
-			skb_frag_size_set(&skb_shinfo(skb)->frags[j], 0);
+			skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[j],
+						page_info->page,
+						page_info->page_offset,
+						curr_frag_len);
 			skb_shinfo(skb)->nr_frags++;
 		} else {
 			put_page(page_info->page);
+			skb_frag_size_add(&skb_shinfo(skb)->frags[j],
+					  curr_frag_len);
 		}
 
-		skb_frag_size_add(&skb_shinfo(skb)->frags[j], curr_frag_len);
 		skb->len += curr_frag_len;
 		skb->data_len += curr_frag_len;
 		skb->truesize += rx_frag_size;
@@ -2451,14 +2451,16 @@ static void be_rx_compl_process_gro(struct be_rx_obj *rxo,
 		if (i == 0 || page_info->page_offset == 0) {
 			/* First frag or Fresh page */
 			j++;
-			skb_frag_set_page(skb, j, page_info->page);
-			skb_frag_off_set(&skb_shinfo(skb)->frags[j],
-					 page_info->page_offset);
-			skb_frag_size_set(&skb_shinfo(skb)->frags[j], 0);
+			skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[j],
+						page_info->page,
+						page_info->page_offset,
+						curr_frag_len);
 		} else {
 			put_page(page_info->page);
+			skb_frag_size_add(&skb_shinfo(skb)->frags[j],
+					  curr_frag_len);
 		}
-		skb_frag_size_add(&skb_shinfo(skb)->frags[j], curr_frag_len);
+
 		skb->truesize += rx_frag_size;
 		remaining -= curr_frag_len;
 		memset(page_info, 0, sizeof(*page_info));
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 3c4fa26f0f9b..63854294ac33 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1445,9 +1445,8 @@ static void enetc_add_rx_buff_to_xdp(struct enetc_bdr *rx_ring, int i,
 		xdp_buff_set_frag_pfmemalloc(xdp_buff);
 
 	frag = &shinfo->frags[shinfo->nr_frags];
-	skb_frag_off_set(frag, rx_swbd->page_offset);
-	skb_frag_size_set(frag, size);
-	__skb_frag_set_page(frag, rx_swbd->page);
+	skb_frag_fill_page_desc(frag, rx_swbd->page, rx_swbd->page_offset,
+				size);
 
 	shinfo->nr_frags++;
 }
diff --git a/drivers/net/ethernet/fungible/funeth/funeth_rx.c b/drivers/net/ethernet/fungible/funeth/funeth_rx.c
index 29a6c2ede43a..7e2584895de3 100644
--- a/drivers/net/ethernet/fungible/funeth/funeth_rx.c
+++ b/drivers/net/ethernet/fungible/funeth/funeth_rx.c
@@ -323,9 +323,8 @@ static int fun_gather_pkt(struct funeth_rxq *q, unsigned int tot_len,
 		if (ref_ok)
 			ref_ok |= buf->node;
 
-		__skb_frag_set_page(frags, buf->page);
-		skb_frag_off_set(frags, q->buf_offset);
-		skb_frag_size_set(frags++, frag_len);
+		skb_frag_fill_page_desc(frags++, buf->page, q->buf_offset,
+					frag_len);
 
 		tot_len -= frag_len;
 		if (!tot_len)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 2cad76d0a50e..01b0312977d6 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2369,9 +2369,8 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 	if (data_len > 0 && sinfo->nr_frags < MAX_SKB_FRAGS) {
 		skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags++];
 
-		skb_frag_off_set(frag, pp->rx_offset_correction);
-		skb_frag_size_set(frag, data_len);
-		__skb_frag_set_page(frag, page);
+		skb_frag_fill_page_desc(frag, page,
+					pp->rx_offset_correction, data_len);
 
 		if (!xdp_buff_has_frags(xdp)) {
 			sinfo->xdp_frags_size = *size;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 69634829558e..704b022cd1f0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -491,9 +491,7 @@ mlx5e_add_skb_shared_info_frag(struct mlx5e_rq *rq, struct skb_shared_info *sinf
 	}
 
 	frag = &sinfo->frags[sinfo->nr_frags++];
-	__skb_frag_set_page(frag, frag_page->page);
-	skb_frag_off_set(frag, frag_offset);
-	skb_frag_size_set(frag, len);
+	skb_frag_fill_page_desc(frag, frag_page->page, frag_offset, len);
 
 	if (page_is_pfmemalloc(frag_page->page))
 		xdp_buff_set_frag_pfmemalloc(xdp);
diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c
index 4ef05bad4613..2d52f54ebb45 100644
--- a/drivers/net/ethernet/sun/cassini.c
+++ b/drivers/net/ethernet/sun/cassini.c
@@ -1998,10 +1998,8 @@ static int cas_rx_process_pkt(struct cas *cp, struct cas_rx_comp *rxc,
 		skb->truesize += hlen - swivel;
 		skb->len      += hlen - swivel;
 
-		__skb_frag_set_page(frag, page->buffer);
+		skb_frag_fill_page_desc(frag, page->buffer, off, hlen - swivel);
 		__skb_frag_ref(frag);
-		skb_frag_off_set(frag, off);
-		skb_frag_size_set(frag, hlen - swivel);
 
 		/* any more data? */
 		if ((words[0] & RX_COMP1_SPLIT_PKT) && ((dlen -= hlen) > 0)) {
@@ -2024,10 +2022,8 @@ static int cas_rx_process_pkt(struct cas *cp, struct cas_rx_comp *rxc,
 			skb->len      += hlen;
 			frag++;
 
-			__skb_frag_set_page(frag, page->buffer);
+			skb_frag_fill_page_desc(frag, page->buffer, 0, hlen);
 			__skb_frag_ref(frag);
-			skb_frag_off_set(frag, 0);
-			skb_frag_size_set(frag, hlen);
 			RX_USED_ADD(page, hlen + cp->crc_size);
 		}
 
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a12ae26db0e2..fe048d4bec39 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1153,9 +1153,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 		}
 
 		frag = &shinfo->frags[shinfo->nr_frags++];
-		__skb_frag_set_page(frag, page);
-		skb_frag_off_set(frag, offset);
-		skb_frag_size_set(frag, len);
+		skb_frag_fill_page_desc(frag, page, offset, len);
 		if (page_is_pfmemalloc(page))
 			xdp_buff_set_frag_pfmemalloc(xdp);
 
diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
index f2b76ee866a4..7fa74b8b2100 100644
--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -686,9 +686,7 @@ vmxnet3_append_frag(struct sk_buff *skb, struct Vmxnet3_RxCompDesc *rcd,
 
 	BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS);
 
-	__skb_frag_set_page(frag, rbi->page);
-	skb_frag_off_set(frag, 0);
-	skb_frag_size_set(frag, rcd->len);
+	skb_frag_fill_page_desc(frag, rbi->page, 0, rcd->len);
 	skb->data_len += rcd->len;
 	skb->truesize += PAGE_SIZE;
 	skb_shinfo(skb)->nr_frags++;
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index c1501f41e2d8..3d79b35eb577 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1128,9 +1128,7 @@ static int xenvif_handle_frag_list(struct xenvif_queue *queue, struct sk_buff *s
 			BUG();
 
 		offset += len;
-		__skb_frag_set_page(&frags[i], page);
-		skb_frag_off_set(&frags[i], 0);
-		skb_frag_size_set(&frags[i], len);
+		skb_frag_fill_page_desc(&frags[i], page, 0, len);
 	}
 
 	/* Release all the original (foreign) frags. */
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 738776ab8838..30be21c7d05f 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2411,6 +2411,15 @@ static inline unsigned int skb_pagelen(const struct sk_buff *skb)
 	return skb_headlen(skb) + __skb_pagelen(skb);
 }
 
+static inline void skb_frag_fill_page_desc(skb_frag_t *frag,
+					   struct page *page,
+					   int off, int size)
+{
+	frag->bv_page = page;
+	frag->bv_offset = off;
+	skb_frag_size_set(frag, size);
+}
+
 static inline void __skb_fill_page_desc_noacc(struct skb_shared_info *shinfo,
 					      int i, struct page *page,
 					      int off, int size)
@@ -2422,9 +2431,7 @@ static inline void __skb_fill_page_desc_noacc(struct skb_shared_info *shinfo,
 	 * that not all callers have unique ownership of the page but rely
 	 * on page_is_pfmemalloc doing the right thing(tm).
 	 */
-	frag->bv_page		  = page;
-	frag->bv_offset		  = off;
-	skb_frag_size_set(frag, size);
+	skb_frag_fill_page_desc(frag, page, off, size);
 }
 
 /**
@@ -3496,20 +3503,6 @@ static inline void __skb_frag_set_page(skb_frag_t *frag, struct page *page)
 	frag->bv_page = page;
 }
 
-/**
- * skb_frag_set_page - sets the page contained in a paged fragment of an skb
- * @skb: the buffer
- * @f: the fragment offset
- * @page: the page to set
- *
- * Sets the @f'th fragment of @skb to contain @page.
- */
-static inline void skb_frag_set_page(struct sk_buff *skb, int f,
-				     struct page *page)
-{
-	__skb_frag_set_page(&skb_shinfo(skb)->frags[f], page);
-}
-
 bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio);
 
 /**
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index e79e3a415ca9..98143b86a9dd 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -1415,11 +1415,10 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
 			}
 
 			frag = &sinfo->frags[sinfo->nr_frags++];
-			__skb_frag_set_page(frag, page);
 
 			data_len = min_t(u32, kattr->test.data_size_in - size,
 					 PAGE_SIZE);
-			skb_frag_size_set(frag, data_len);
+			skb_frag_fill_page_desc(frag, page, 0, data_len);
 
 			if (copy_from_user(page_address(page), data_in + size,
 					   data_len)) {
diff --git a/net/core/gro.c b/net/core/gro.c
index 2d84165cb4f1..6783a47a6136 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -239,9 +239,7 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
 
 		pinfo->nr_frags = nr_frags + 1 + skbinfo->nr_frags;
 
-		__skb_frag_set_page(frag, page);
-		skb_frag_off_set(frag, first_offset);
-		skb_frag_size_set(frag, first_size);
+		skb_frag_fill_page_desc(frag, page, first_offset, first_size);
 
 		memcpy(frag + 1, skbinfo->frags, sizeof(*frag) * skbinfo->nr_frags);
 		/* We dont need to clear skbinfo->nr_frags here */
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index 760238196db1..f56b8d697014 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -2785,14 +2785,17 @@ static void pktgen_finalize_skb(struct pktgen_dev *pkt_dev, struct sk_buff *skb,
 					break;
 			}
 			get_page(pkt_dev->page);
-			skb_frag_set_page(skb, i, pkt_dev->page);
-			skb_frag_off_set(&skb_shinfo(skb)->frags[i], 0);
+
 			/*last fragment, fill rest of data*/
 			if (i == (frags - 1))
-				skb_frag_size_set(&skb_shinfo(skb)->frags[i],
-				    (datalen < PAGE_SIZE ? datalen : PAGE_SIZE));
+				skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[i],
+							pkt_dev->page, 0,
+							(datalen < PAGE_SIZE ?
+							 datalen : PAGE_SIZE));
 			else
-				skb_frag_size_set(&skb_shinfo(skb)->frags[i], frag_len);
+				skb_frag_fill_page_desc(&skb_shinfo(skb)->frags[i],
+							pkt_dev->page, 0, frag_len);
+
 			datalen -= skb_frag_size(&skb_shinfo(skb)->frags[i]);
 			skb->len += skb_frag_size(&skb_shinfo(skb)->frags[i]);
 			skb->data_len += skb_frag_size(&skb_shinfo(skb)->frags[i]);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 26a586007d8b..c0bd428df3bf 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4249,10 +4249,9 @@ static inline skb_frag_t skb_head_frag_to_page_desc(struct sk_buff *frag_skb)
 	struct page *page;
 
 	page = virt_to_head_page(frag_skb->head);
-	__skb_frag_set_page(&head_frag, page);
-	skb_frag_off_set(&head_frag, frag_skb->data -
-			 (unsigned char *)page_address(page));
-	skb_frag_size_set(&head_frag, skb_headlen(frag_skb));
+	skb_frag_fill_page_desc(&head_frag, page, frag_skb->data -
+				(unsigned char *)page_address(page),
+				skb_headlen(frag_skb));
 	return head_frag;
 }
 
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index a7cc4f9faac2..daeff54bdbfa 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -268,9 +268,8 @@ static void tls_append_frag(struct tls_record_info *record,
 		skb_frag_size_add(frag, size);
 	} else {
 		++frag;
-		__skb_frag_set_page(frag, pfrag->page);
-		skb_frag_off_set(frag, pfrag->offset);
-		skb_frag_size_set(frag, size);
+		skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
+					size);
 		++record->num_frags;
 		get_page(pfrag->page);
 	}
@@ -357,9 +356,8 @@ static int tls_create_new_record(struct tls_offload_context_tx *offload_ctx,
 		return -ENOMEM;
 
 	frag = &record->frags[0];
-	__skb_frag_set_page(frag, pfrag->page);
-	skb_frag_off_set(frag, pfrag->offset);
-	skb_frag_size_set(frag, prepend_size);
+	skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
+				prepend_size);
 
 	get_page(pfrag->page);
 	pfrag->offset += prepend_size;
diff --git a/net/xfrm/xfrm_ipcomp.c b/net/xfrm/xfrm_ipcomp.c
index 80143360bf09..9c0fa0e1786a 100644
--- a/net/xfrm/xfrm_ipcomp.c
+++ b/net/xfrm/xfrm_ipcomp.c
@@ -74,14 +74,11 @@ static int ipcomp_decompress(struct xfrm_state *x, struct sk_buff *skb)
 		if (!page)
 			return -ENOMEM;
 
-		__skb_frag_set_page(frag, page);
-
 		len = PAGE_SIZE;
 		if (dlen < len)
 			len = dlen;
 
-		skb_frag_off_set(frag, 0);
-		skb_frag_size_set(frag, len);
+		skb_frag_fill_page_desc(frag, page, 0, len);
 		memcpy(skb_frag_address(frag), scratch, len);
 
 		skb->truesize += len;
-- 
2.33.0



From xen-devel-bounces@lists.xenproject.org Thu May 11 04:27:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 04:27:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533094.829455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwxtI-0004oD-Pe; Thu, 11 May 2023 04:26:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533094.829455; Thu, 11 May 2023 04:26:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwxtI-0004o6-Lk; Thu, 11 May 2023 04:26:56 +0000
Received: by outflank-mailman (input) for mailman id 533094;
 Thu, 11 May 2023 04:26:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwxtI-0004nw-5T; Thu, 11 May 2023 04:26:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwxtI-0006sC-1y; Thu, 11 May 2023 04:26:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pwxtH-0001uj-KU; Thu, 11 May 2023 04:26:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pwxtH-00085Y-J9; Thu, 11 May 2023 04:26:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=F9lUFqDkr7NvT3GSs1VGoMXoFkS/ereG8RWYEtcaACc=; b=Gih2PGvd6V+luVAwFmsdlDP56t
	Hs1DmCW4KsoSF9QerUroPArJCVQHPxSRmvOXSrbub1ekKrtq3GPsnnqOCTmWtMrJlePPqrybFn9Lk
	CO0eJS40ieT4UYoqcM9mtv0VzV56hB2p1bHB/jtwHvTWIve0JKua4QOT1aUkMipeI6X8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180606-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180606: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ad2fd53a7870a395b8564697bef6c329d017c6c9
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 04:26:55 +0000

flight 180606 linux-linus real [real]
flight 180612 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180606/
http://logs.test-lab.xenproject.org/osstest/logs/180612/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180612-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ad2fd53a7870a395b8564697bef6c329d017c6c9
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   24 days
Failing since        180281  2023-04-17 06:24:36 Z   23 days   44 attempts
Testing same since   180606  2023-05-10 16:41:43 Z    0 days    1 attempts

------------------------------------------------------------
2365 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 297690 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 11 05:25:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 05:25:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533101.829465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwynq-0003Mk-1W; Thu, 11 May 2023 05:25:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533101.829465; Thu, 11 May 2023 05:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwynp-0003Md-Ua; Thu, 11 May 2023 05:25:21 +0000
Received: by outflank-mailman (input) for mailman id 533101;
 Thu, 11 May 2023 05:25:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d7zm=BA=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pwyno-0003MX-QH
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 05:25:20 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 37842da9-efbc-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 07:25:18 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id ECDF41FD7F;
 Thu, 11 May 2023 05:25:17 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C266C134B2;
 Thu, 11 May 2023 05:25:17 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Gx75LT18XGQXJQAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 11 May 2023 05:25:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37842da9-efbc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1683782717; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=KS7xG2fXBnwEWqZoii+gVbaS9E2m6HU4IykCzTn/790=;
	b=JfkVGyUkfshLObmpMrq1GhTlzc95O6EvHMBmCHUSoWHY2yEASwksGOqay5/ejA4ej9VILB
	nWSiFsvVm/8cpQv6fbro611vH0CoZSlCcQ3dny9Rf/mAmvGDayiyMgYlUXBiuJbruPO+ft
	bYYAw0+llB5qK1bA/AS6BbGQhFA7Xqo=
Message-ID: <de77cc78-e07a-934f-e241-15fe851706df@suse.com>
Date: Thu, 11 May 2023 07:25:17 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-6-jgross@suse.com>
 <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
 <9745474f-db84-c8f3-3662-95728d4d5bd3@suse.com>
 <91380959-b63d-7d4f-0920-7d87dc0fc19d@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v5 05/14] tools/xenstore: use accounting buffering for
 node accounting
In-Reply-To: <91380959-b63d-7d4f-0920-7d87dc0fc19d@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------Oxa8OChYtf600Y8nslRhB0at"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------Oxa8OChYtf600Y8nslRhB0at
Content-Type: multipart/mixed; boundary="------------qjlX8CfRPQMwjt9HJoa6GDl0";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <de77cc78-e07a-934f-e241-15fe851706df@suse.com>
Subject: Re: [PATCH v5 05/14] tools/xenstore: use accounting buffering for
 node accounting
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-6-jgross@suse.com>
 <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
 <9745474f-db84-c8f3-3662-95728d4d5bd3@suse.com>
 <91380959-b63d-7d4f-0920-7d87dc0fc19d@xen.org>
In-Reply-To: <91380959-b63d-7d4f-0920-7d87dc0fc19d@xen.org>

--------------qjlX8CfRPQMwjt9HJoa6GDl0
Content-Type: multipart/mixed; boundary="------------0Zm7E48OkOpN6PDYFH0NFfEj"

--------------0Zm7E48OkOpN6PDYFH0NFfEj
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 10.05.23 23:31, Julien Grall wrote:
> Hi,
> 
> On 10/05/2023 13:54, Juergen Gross wrote:
>> On 09.05.23 20:46, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 08/05/2023 12:47, Juergen Gross wrote:
>>>> Add the node accounting to the accounting information buffering in
>>>> order to avoid having to undo it in case of failure.
>>>>
>>>> This requires to call domain_nbentry_dec() before any changes to the
>>>> data base, as it can return an error now.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>> V5:
>>>> - add error handling after domain_nbentry_dec() calls (Julien Grall)
>>>> ---
>>>>   tools/xenstore/xenstored_core.c   | 29 +++++++----------------------
>>>>   tools/xenstore/xenstored_domain.h |  4 ++--
>>>>   2 files changed, 9 insertions(+), 24 deletions(-)
>>>>
>>>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>>>> index 8392bdec9b..22da434e2a 100644
>>>> --- a/tools/xenstore/xenstored_core.c
>>>> +++ b/tools/xenstore/xenstored_core.c
>>>> @@ -1454,7 +1454,6 @@ static void destroy_node_rm(struct connection *conn, 
>>>> struct node *node)
>>>>   static int destroy_node(struct connection *conn, struct node *node)
>>>>   {
>>>>       destroy_node_rm(conn, node);
>>>> -    domain_nbentry_dec(conn, get_node_owner(node));
>>>>       /*
>>>>        * It is not possible to easily revert the changes in a transaction.
>>>> @@ -1645,6 +1644,9 @@ static int delnode_sub(const void *ctx, struct 
>>>> connection *conn,
>>>>       if (ret > 0)
>>>>           return WALK_TREE_SUCCESS_STOP;
>>>> +    if (domain_nbentry_dec(conn, get_node_owner(node)))
>>>> +        return WALK_TREE_ERROR_STOP;
>>>
>>> I think there is a potential issue with the buffering here. In case of 
>>> failure, the node could have been removed, but the quota would not be 
>>> properly accounted.
>>
>> You mean the case where another node has been deleted and due to accounting
>> buffering the corrected accounting data wouldn't be committed?
>>
>> This is no problem, as an error returned by delnode_sub() will result in
>> corrupt() being called, which in turn will correct the accounting data.
> 
> To me corrupt() is a big hammer and it feels wrong to call it when I think we 
> have easier/faster way to deal with the issue. Could we instead call 
> acc_commit() before returning?

You are aware that this is a very problematic situation we are in?

We couldn't allocate a small amount of memory (around 64 bytes)! Xenstored
will probably die within milliseconds. Using the big hammer in such a
situation is fine IMO. It will maybe result in solving the problem by
freeing of memory (quite unlikely, though), but it won't leave xenstored
in a worse state than with your suggestion.

And calling acc_commit() here wouldn't really help, as accounting data
couldn't be recorded, so there are missing updates anyway due to the failed
call of domain_nbentry_dec().

>>> Also, I think a comment would be warrant to explain why we are returning 
>>> WALK_TREE_ERROR_STOP here when...
>>>
>>>> +
>>>>       /* In case of error stop the walk. */
>>>>       if (!ret && do_tdb_delete(conn, &key, &node->acc))
>>>>           return WALK_TREE_SUCCESS_STOP;
>
Pj4NCj4+PiAuLi4gdGhpcyBpcyBub3QgdGhlIGNhc2Ugd2hlbiBkb190ZGJfZGVsZXRlKCkg
ZmFpbHMgZm9yIHNvbWUgcmVhc29ucy4NCj4+DQo+PiBUaGUgbWFpbiBpZGVhIHdhcyB0aGF0
IHRoZSByZW1vdmUgaXMgd29ya2luZyBmcm9tIHRoZSBsZWFmcyB0b3dhcmRzIHRoZSByb290
Lg0KPj4gSW4gY2FzZSBvbmUgZW50cnkgY2FuJ3QgYmUgcmVtb3ZlZCwgd2Ugc2hvdWxkIGp1
c3Qgc3RvcC4NCj4+DQo+PiBPVE9IIHJldHVybmluZyBXQUxLX1RSRUVfRVJST1JfU1RPUCBt
aWdodCBiZSBjbGVhbmVyLCBhcyB0aGlzIHdvdWxkIG1ha2Ugc3VyZQ0KPj4gdGhhdCBhY2Nv
dW50aW5nIGRhdGEgaXMgcmVwYWlyZWQgYWZ0ZXJ3YXJkcy4gRXZlbiBpZiBkb190ZGJfZGVs
ZXRlKCkgY2FuJ3QNCj4+IHJlYWxseSBmYWlsIGluIG5vcm1hbCBjb25maWd1cmF0aW9ucywg
YXMgdGhlIG9ubHkgZmFpbHVyZSByZWFzb25zIGFyZToNCj4+DQo+PiAtIHRoZSBub2RlIGlz
bid0IGZvdW5kIChxdWl0ZSB1bmxpa2VseSwgYXMgd2UganVzdCByZWFkIGl0IGJlZm9yZSBl
bnRlcmluZw0KPj4gwqDCoCBkZWxub2RlX3N1YigpKSwgb3INCj4+IC0gd3JpdGluZyB0aGUg
dXBkYXRlZCBkYXRhIGJhc2UgZmFpbGVkIChpbiBub3JtYWwgY29uZmlndXJhdGlvbnMgaXQg
aXMgaW4NCj4+IMKgwqAgYWxyZWFkeSBhbGxvY2F0ZWQgbWVtb3J5LCBzbyBubyB3YXkgdG8g
ZmFpbCB0aGF0KQ0KPj4NCj4+IEkgdGhpbmsgSSdsbCBzd2l0Y2ggdG8gcmV0dXJuIFdBTEtf
VFJFRV9FUlJPUl9TVE9QIGhlcmUuDQo+IA0KPiBTZWUgYWJvdmUgZm9yIGEgZGlmZmVyZW50
IHByb3Bvc2FsLg0KDQpXaXRob3V0IGRlbGV0aW5nIHRoZSBub2RlIGluIHRoZSBkYXRhIGJh
c2UgdGhpcyB3b3VsZCBiZSBhbm90aGVyIGFjY291bnRpbmcNCmRhdGEgaW5jb25zaXN0ZW5j
eSwgc28gY2FsbGluZyBjb3JydXB0KCkgaXMgdGhlIGNvcnJlY3QgY2xlYW51cCBtZWFzdXJl
Lg0KDQoNCkp1ZXJnZW4NCg==
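
[Editor's note: the stop-on-error pattern discussed above can be modeled outside
xenstored. Everything in this sketch is invented for illustration (the toy quota
array, the `dec_fails` flag, the flat walk loop); only the WALK_TREE_* return
convention and the "decrement the accounting before touching the data base"
ordering come from the thread. It is not the xenstored source.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define WALK_TREE_OK         0
#define WALK_TREE_ERROR_STOP (-1)

struct toy_node {
    int owner;           /* owning domain id */
    bool deleted;
};

static int quota[4];     /* per-domain entry counts (toy stand-in) */
static bool dec_fails;   /* simulate an allocation failure in the dec path */

static int toy_nbentry_dec(int owner)
{
    if (dec_fails)
        return -1;       /* e.g. no memory to record the accounting change */
    quota[owner]--;
    return 0;
}

static int toy_delnode_sub(struct toy_node *node)
{
    /* Decrement the accounting *before* touching the data base: it is the
     * only step here that can fail, so on failure nothing was modified yet
     * and the caller can fall back to a global repair (corrupt() in the
     * real code). */
    if (toy_nbentry_dec(node->owner))
        return WALK_TREE_ERROR_STOP;
    node->deleted = true;
    return WALK_TREE_OK;
}

/* Walk leaf-to-root; on error stop immediately and report to the caller. */
static int toy_walk_delete(struct toy_node *nodes, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (toy_delnode_sub(&nodes[i]) == WALK_TREE_ERROR_STOP)
            return -1;
    return 0;
}
```

On the failure path, no node is marked deleted after the failing decrement, so
the only inconsistency left to repair is the one the failed call itself caused.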

From xen-devel-bounces@lists.xenproject.org Thu May 11 05:36:53 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180611-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180611: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=f47415e0315e2b0cf7bf1d00634f46deee802227
X-Osstest-Versions-That:
    ovmf=6fb2760dc8939b16a906b8e6bb224764907168f3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 05:36:38 +0000

flight 180611 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180611/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f47415e0315e2b0cf7bf1d00634f46deee802227
baseline version:
 ovmf                 6fb2760dc8939b16a906b8e6bb224764907168f3

Last test of basis   180604  2023-05-10 15:12:03 Z    0 days
Testing same since   180611  2023-05-11 00:42:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6fb2760dc8..f47415e031  f47415e0315e2b0cf7bf1d00634f46deee802227 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 11 06:08:48 2023
Message-ID: <8c4012a2-a0a7-b6f3-e931-c793659ad5a7@suse.com>
Date: Thu, 11 May 2023 08:08:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [XEN PATCH 2/2] x86/Dom0: Use streaming decompression for ZSTD
 compressed kernels
Content-Language: en-US
To: Rafaël Kooi <rafael_andreas@hotmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1683673597.git.rafael_andreas@hotmail.com>
 <DU0P192MB1700F6DFE45A48D90B5FE679E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
 <283798a0-0c69-7705-aade-6cd6b2c5f3c4@suse.com>
 <DU0P192MB1700F7BB44DC06D67D7AE345E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
 <9b8f033d-321c-412d-d0f3-51d29ac8d238@suse.com>
 <DU0P192MB17005DCCA86057EA3DF33171E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU0P192MB17005DCCA86057EA3DF33171E3779@DU0P192MB1700.EURP192.PROD.OUTLOOK.COM>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.05.2023 19:30, Rafaël Kooi wrote:
> On 10/05/2023 11:48, Jan Beulich wrote:
>> On 10.05.2023 10:51, Rafaël Kooi wrote:
>>> On 10/05/2023 10:03, Jan Beulich wrote:
>>>> On 10.05.2023 02:18, Rafaël Kooi wrote:
>>>>> --- a/xen/common/decompress.c
>>>>> +++ b/xen/common/decompress.c
>>>>> @@ -3,11 +3,26 @@
>>>>>    #include <xen/string.h>
>>>>>    #include <xen/decompress.h>
>>>>>    
>>>>> +typedef struct _ZSTD_state
>>>>> +{
>>>>> +    void *write_buf;
>>>>> +    unsigned int write_pos;
>>>>> +} ZSTD_state;
>>>>> +
>>>>>    static void __init cf_check error(const char *msg)
>>>>>    {
>>>>>        printk("%s\n", msg);
>>>>>    }
>>>>>    
>>>>> +static int __init cf_check ZSTD_flush(void *buf, unsigned int pos,
>>>>> +                                      void *userptr)
>>>>> +{
>>>>> +    ZSTD_state *state = (ZSTD_state*)userptr;
>>>>> +    memcpy(state->write_buf + state->write_pos, buf, pos);
>>>>> +    state->write_pos += pos;
>>>>> +    return pos;
>>>>> +}
>>>>
>>>> This doesn't really belong here, but will (I expect) go away anyway once
>>>> you drop the earlier patch.
>>>>
>>>
>>> The ZSTD_flush will have to stay, as that is how the decompressor will
>>> start streaming decompression. The difference will be that the
>>> bookkeeping will be "global" (to the translation unit).
>>
>> But this bookkeeping should be entirely in zstd code (i.e. presumably
>> unzstd.c).
>>
> 
> The implementation of the decompression functions seems to indicate
> otherwise. Referring to unzstd.c:`unzstd`, the function will take the
> streaming decompression path if either `fill` or `flush` has been
> supplied. I cross-checked with unlzma.c and unxz.c, and they seem to
> have similar behavior with regard to flushing the output data. The
> `flush` function is passed a buffer holding a chunk of decompressed data,
> with `pos` being the size of the chunk. For the sake of consistency I
> don't think it's a good idea to deviate from this behavior in just unzstd.c.

Well, so far we don't use any flush functions, do we? The question
could therefore also be put differently: in how far is the flush
function you introduce zstd-specific? If it isn't, other than the
fact that it is only passed to unzstd(), perhaps it shouldn't be
named as if it were zstd-specific?
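
[Editor's note: the flush contract described above can be sketched outside of
Xen. The callback below mirrors the ZSTD_state/ZSTD_flush pair from the patch,
but is renamed to stress that nothing in it is zstd-specific; the "decompressor"
is a stand-in that merely emits fixed chunks, the way unzstd()/unxz() hand each
produced chunk to `flush` when one is supplied. All names here are invented.]

```c
#include <assert.h>
#include <string.h>

struct flush_state {
    unsigned char *write_buf;   /* destination for decompressed output */
    unsigned int write_pos;     /* bytes written so far */
};

typedef int (*flush_fn)(void *buf, unsigned int pos, void *userptr);

/* Contract: receive a chunk of produced output (buf, pos) and return the
 * number of bytes consumed; anything short of pos signals an error. */
static int flush_to_buf(void *buf, unsigned int pos, void *userptr)
{
    struct flush_state *state = userptr;

    memcpy(state->write_buf + state->write_pos, buf, pos);
    state->write_pos += pos;
    return pos;
}

/* Stand-in for a streaming decompressor: produce output chunk by chunk and
 * hand each one to the flush callback. */
static int fake_stream_decompress(const char **chunks, int n,
                                  flush_fn flush, void *userptr)
{
    for (int i = 0; i < n; i++) {
        unsigned int len = (unsigned int)strlen(chunks[i]);

        if (flush((void *)chunks[i], len, userptr) != (int)len)
            return -1;          /* short flush: abort */
    }
    return 0;
}
```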

>>>>>        if ( len >= 4 && !memcmp(inbuf, "\x28\xb5\x2f\xfd", 4) )
>>>>> -	return unzstd(inbuf, len, NULL, NULL, outbuf, NULL, error);
>>>>> +    {
>>>>> +        // NOTE (Rafaël): On Arch Linux the kernel is compressed in a way
>>>>> +        // that requires streaming ZSTD decompression. Otherwise decompression
>>>>> +        // will fail when using a unified EFI binary. Somehow decompression
>>>>> +        // works when not using a unified EFI binary, I suspect this is the
>>>>> +        // kernel self decompressing. Or there is a code path that I am not
>>>>> +        // aware of that takes care of the use case properly.
>>>>
>>>> Along the lines of what I've said for the description, this wants to avoid
>>>> terms like "somehow" if at all possible.
>>>
>>> I've used the term "somehow" because I don't know why decompression
>>> works when Xen loads the kernel from the EFI file system. I assume the
>>> kernel still gets unpacked by Xen, right? Or does the kernel unpack
>>> itself?
>>
>> The handling of Dom0 kernel decompression ought to be entirely independent
>> of EFI vs legacy. Unless I'm wrong with that (mis-remembering), you
>> mentioning EFI is potentially misleading. And yes, at least on x86 the
>> kernel is decompressed by Xen (by peeking into the supplied bzImage). The
>> difference between a plain bzImage and a "unified EFI binary" is what you
>> will want to outline in the description (and at least mention in the
>> comment). What I'm wondering is whether there simply is an issue with size
>> determination when the kernel is taken from the .kernel section.
>>
> 
> Assuming you are talking about size determination of the compressed
> bzImage: upon further investigation I noticed a discrepancy between the
> size of the ZSTD stream and the size reported by the vmlinuz-* header. When
> using the streaming decompression code I made it output how many bytes
> it reads from the extracted-but-still-compressed bzImage. The code
> reads 12,327,560 bytes, but the size of the compressed bzImage in the
> header is 12,327,564 bytes. In xen/arch/x86/bzImage.c, `decompress` is
> called with `orig_image_len`, whereas the function `output_length`
> calculates the end address and subtracts 4 bytes from it. If I
> remove the last 4 bytes from the compressed bzImage then `unzstd` will
> unpack it with `unzstd bzImage.zst -o bzImage`; otherwise it
> complains with `zstd: /*stdin*\: unknown header`. With this new
> information I think the correct solution is to try calling `decompress`
> a second time with `orig_image_len - 4` if it fails.

That's not very likely to be an appropriate solution. If the sizes
diverge by 4, that difference needs explaining. Once understood, it'll
hopefully become clear under what conditions (if any) to adjust the
length right away, without any need to retry.
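[Editor's note: a hedged sketch of the arithmetic under discussion. The
numbers in the thread (12,327,560 vs 12,327,564) suggest a 4-byte trailer
after the compressed stream; whether such a trailer is actually present for
zstd bzImages, and what it contains, is exactly the open question here. The
helpers below are invented and only show how a declared length would be split
into stream length plus trailer, reading the trailer as a little-endian u32.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TRAILER_LEN 4U

/* Read a little-endian u32 from p. */
static uint32_t read_le32(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Value of the assumed 4-byte trailer at the end of the image. */
static uint32_t trailer_value(const uint8_t *img, size_t declared_len)
{
    return read_le32(img + declared_len - TRAILER_LEN);
}

/* Length of the compressed stream proper, excluding the trailer
 * (0 if the declared length cannot even hold a trailer). */
static size_t stream_len(size_t declared_len)
{
    return declared_len < TRAILER_LEN ? 0 : declared_len - TRAILER_LEN;
}
```

With the thread's figures, `stream_len(12327564)` gives the 12,327,560 bytes
the streaming code reports reading.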

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 06:22:00 2023
Message-ID: <80ccf9c7-5d22-b368-dac6-01fe6cec7add@suse.com>
Date: Thu, 11 May 2023 08:21:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to
 userspace
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-8-jandryuk@gmail.com>
 <7db38688-1233-bc16-03f3-7afdc3394054@suse.com>
 <9cf71407-6209-296a-489a-9732b1928246@suse.com>
 <CAKf6xptOf7eSzstzjfbbSU0tMBpNjtPEwt2uNzj=2TucrgFRiA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xptOf7eSzstzjfbbSU0tMBpNjtPEwt2uNzj=2TucrgFRiA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0014.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::24) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB9095:EE_
X-MS-Office365-Filtering-Correlation-Id: a376ab8f-a4e9-41a0-ae9c-08db51e800c4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	8wKG5l3kFI3mcwdriiCCbJpM42EaF9DwyragNL9iu7fPIJ7pmk5cQnh52UxlbP2FOUhngWBVkW3tu6sJ8mc0WHHmHNSerU4iC0pJJ96H5zSA7JbADXsR3/7Yd5Z09oh4t5XnWT6Xh3MDLZJowcMAOTn/qRsysmeWsRNRG9hTqM1ueoKfZ5Sa5vZUduSNs6WMY+uGT0Ajpa707f8Fga2cxHmyvS2S3eaTqbR87n9+xf215eZS/48pf0kqgfWEXAgTCjGzxJBaEMn2tkJjxFmFD0KIKZvP5FhqGcM4lFsXBWaPqH1tPrNQ0q+lBt2BtNwPi1Xyy3IBS+YL/eV+Iz28+0r+Re++PniUHhMvO0b+Zdn/m0jaFQWS1v24xQCi98io/sZ22RGFLRf8mEK0byqx1rJ7AeIlRSlVEQl/21XWTux6J+ZqzgVHUSYgIEaQcxAjWPNIWpGfxy+VEvKmOaVdKE+P8RUO0CnI9Vx3JGkS91XY/mHsTT5U9QFhTaoQcoDgxfuOaSYM38546lH+NgGbdKDxfLbP/teyjG8naLZuZj7ApWAI/8eO/4GM+o4Hkjrn8Ep93MQHSykIKy5qCPKPIygZZ0xq5PHxLluCosg5emQB/sV3zISVksDXhRGmMxkB5/WAh2miBCWYLS+YGfp72w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(366004)(136003)(396003)(376002)(346002)(451199021)(31686004)(66899021)(66556008)(66946007)(6916009)(66476007)(478600001)(6486002)(6666004)(54906003)(316002)(86362001)(36756003)(4326008)(31696002)(83380400001)(2616005)(6506007)(2906002)(6512007)(53546011)(26005)(8936002)(8676002)(5660300002)(41300700001)(38100700002)(186003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RzBMT3IycDBtQlVlc2g2NDZ3czZkZGNuK1ViUHdJRmlaajFtSEJPTXIwcS9u?=
 =?utf-8?B?aVZucGNkQ3M4RkVtallNNFgvRTdWdC8zYkR0eTJZVVpoNS9CZng3MTBFMHcz?=
 =?utf-8?B?REluR09sZnIrMjJxMzRGRmZJcVBwOTR0aTJGaWlnMGw5a0xWVmdYaXFqdEds?=
 =?utf-8?B?L2llZURKKy9OMFpDUXFzcldCRzRDM2J3YjZVUEdPcGVic0o0ekZra04yMGY5?=
 =?utf-8?B?ZFRhUDFYOVpJRHBiUUtsa2Z1d1F4VkkzYkNYZ3JIekJ1UTJZRmo5TjdKcEFV?=
 =?utf-8?B?b1g1Y3hUQ21IZzNVK2RzcUo5RFhOQ1NhQ2ZJaEphUHNtWlNFUUUwVmVaUndp?=
 =?utf-8?B?ZHNHTkxpT1VkM0lnUTQrU2JwUWY2YVJ6aTR0ajlNUlUzSnFGT0M4YTFEQkxn?=
 =?utf-8?B?cFp2TlBzcnNQSXJJRVhmNGl5RXRha3FYOW5oQjVFZ2MwYzlmZFJET25oVCtq?=
 =?utf-8?B?M05xZHhSd1lEQmJWQU9YalpxUGJsMDI5YytNenlYTG1BTFlzV01YOHpieG9a?=
 =?utf-8?B?SC9RWlFUM0lnREdZOWF4QytVSlhKS3lXemswdUZlaHZtTlM5alFQalFuMXlo?=
 =?utf-8?B?dVRPdCtjdXg1aCtYVGJsWmNhZjVJY0c4eEVicUs4SVBLcEwxdE05dmhybHc2?=
 =?utf-8?B?cmVPdlBoNXkrQkpHMjNXbFZOZmhnOXRnQTlSTWtUR3dhQm9PUG1HRU9kYmly?=
 =?utf-8?B?dFRpRHlFN0hpb3A5UWlENVNrMW91dFZ6RmpqVWtIU21TSWg5Q3U3b2Z3MEdE?=
 =?utf-8?B?dVRubEk2QmNCKzh6ZzJMTFhGa3dUbXVlWDMyRmpkQXdid3J0Wm9jdWFaOG5k?=
 =?utf-8?B?NjJUTVhQdTlZSDlCbTI4LzlsVVc0MkdXcjgxc2RvSDVrTTdNRXI5RzdFcTNz?=
 =?utf-8?B?T0JueHJzWVVmdFJnRmdxQ1pTcTg4dFYvSFR5cm82Y2NTQnkyNFlTcExHejRK?=
 =?utf-8?B?eHZkL0EvbXZ3Wm1RdzJsVXMzaXdLTWk5cDdmQnRJeGhPdWpjRytNMzVFdDJx?=
 =?utf-8?B?REVPaG5GSWo1VEtid0ZxQ3lkNGUrcHVRZFZEdFlzOW9aQVh4Z2tGNDVtYlQv?=
 =?utf-8?B?MXVGMUJlQmEyNlI4cktvRDUvaE5BYjhWQkg2ZmJBWDdSM2JnWTFpUkxhbDhm?=
 =?utf-8?B?OGtkODFIS0dtVXRTMWdRZWVqL3lhbXVXSzhVejA0MHh2M2NNaU1rZldrV3F5?=
 =?utf-8?B?UjJRVEU4VWt6VU8zVTZrMDFPRm9tL04xdFkwN1BNNEdlQkVVTTF1V3g0ekpT?=
 =?utf-8?B?dEpQTmVlVnY4RTBHOHE0WldUVEhDSUNvY0EyWTJpcThvWVVOakxweStPQjBF?=
 =?utf-8?B?amZBVkpaSFB3azd6M0IxTHlPNHZYaGxMNDhUdGZSSzJNVHVRNXdKMGhSRS9l?=
 =?utf-8?B?REg3YnF4YW5kQUhiaXNzOVpzZnhrc3NwemdzYWlTNzhsMXgvOEJzaEtpeXQ0?=
 =?utf-8?B?dzZ5S2trN25nQlBhUnlTY2VaNFF3UTBKdmlZcm1oS01qeVExbThzSEUxRFJR?=
 =?utf-8?B?OWJac2t6TGFQNnhXM25DbVJnNTcwUy9jUysxNzhGQUZoNTdDVU1sVFcwcHZs?=
 =?utf-8?B?RXczN1FRcmZ6SFpaeU9JR2R3UXNnQ2ZpbnRrMGRGNTZlaEVOS1lBMnc2RTJZ?=
 =?utf-8?B?b0FSVlZBY1VNd3NZdk91MkU4andLSy9HS0M1Ty9lck1MMUtLdWJnbVkwMWhl?=
 =?utf-8?B?eDVUYlI0d3F1KzgwdjgrU2FDeFlCMmduQVVtQmFGQnFJOFNPNER4UzFqSTNo?=
 =?utf-8?B?b05jZy9UTkRIY2tRTTBra2N3MVgxY1ZwcXpucng4MHBwQTIyWHFvVDNVMEto?=
 =?utf-8?B?Mnc3QzVzamEvNjJaUzNhbnJ4WnZIYkRnY09vY09NYm1JLzBoS0J5ZTh5UEkr?=
 =?utf-8?B?OWtvaFB4TzZZNjNIdFh1VGdBSVZsWTRnRlBWeGdaYlRPdDkwbjUvNFNLVGdP?=
 =?utf-8?B?TmxqUmMzSGdBaDJpUEdIblFPZUh6ZkIyY2NWQldXNkppMzgzOW5FRHlpN2pC?=
 =?utf-8?B?LzFvTkxaMkV5T05SK0VUeG5SZkhNclVsankvZU1SNHNRQlUveW1wMUhFdll3?=
 =?utf-8?B?Mkp6VCtSNFRyczl2anl6VGoxRmNrWDJvKzBrVnVvTjZHSHhXMmx3MXU1WHFv?=
 =?utf-8?Q?QIRiUhyEPV+i8eLfYBZ7EGwQP?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a376ab8f-a4e9-41a0-ae9c-08db51e800c4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 06:21:49.1588
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xQJRbL11fruAqd9F0wXM4hCtSJXRHj/bzPb1S0R+NxPbuOzELH9p2Kl6JAPGc1PxyG/vqT/7npbQDFkWDkAlXg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB9095

On 10.05.2023 19:49, Jason Andryuk wrote:
> On Mon, May 8, 2023 at 6:26 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>> Extend xen_get_cpufreq_para to return hwp parameters.  These match the
>>> hardware rather closely.
>>>
>>> We need the features bitmask to indicate the fields supported by the actual
>>> hardware.
>>>
>>> The use of uint8_t parameters matches the hardware size.  uint32_t
>>> entries would grow the sysctl_t past the build assertion in setup.c.  The
>>> uint8_t ranges are supported across multiple generations, so hopefully
>>> they won't change.
>>
>> Still it feels a little odd for values to be this narrow. Aiui the
>> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
>> used by HWP. So you could widen the union in struct
>> xen_get_cpufreq_para (in a binary but not necessarily source compatible
>> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
>> placed scaling_cur_freq could be included as well ...
> 
> The values are narrow, but they match the hardware.  It works for HWP,
> so there is no need to change at this time AFAICT.
> 
> Do you want me to make this change?

Well, much depends on what these 8-bit values actually express (I did
raise this question in one of the replies to your patches, as I wasn't
able to find anything in the SDM). That'll then hopefully allow us to
make an educated prediction of how likely it is that a future
variant of HWP would want to widen them. (Was it energy_perf that went
from 4 to 8 bits at some point, which you even comment upon in the
public header?)

> On Mon, May 8, 2023 at 6:47 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 08.05.2023 12:25, Jan Beulich wrote:
>>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>>> Extend xen_get_cpufreq_para to return hwp parameters.  These match the
>>>> hardware rather closely.
>>>>
>>>> We need the features bitmask to indicate the fields supported by the actual
>>>> hardware.
>>>>
>>>> The use of uint8_t parameters matches the hardware size.  uint32_t
>>>> entries would grow the sysctl_t past the build assertion in setup.c.  The
>>>> uint8_t ranges are supported across multiple generations, so hopefully
>>>> they won't change.
>>>
>>> Still it feels a little odd for values to be this narrow. Aiui the
>>> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
>>> used by HWP. So you could widen the union in struct
>>> xen_get_cpufreq_para (in a binary but not necessarily source compatible
>>> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
>>> placed scaling_cur_freq could be included as well ...
>>
>> Having seen patch 9 now as well, I wonder whether here (or in a separate
>> patch) you don't want to limit providing inapplicable data (for example
>> not filling *scaling_available_governors would even avoid an allocation,
>> thus removing a possible reason for failure), while there (or again in a
>> separate patch) you'd also limit what the tool reports (inapplicable
>> output causes confusion / questions at best).
> 
> The xenpm output only shows relevant information:
> 
> # xenpm get-cpufreq-para 11
> cpu id               : 11
> affected_cpus        : 11
> cpuinfo frequency    : base [1600000] max [4900000]
> scaling_driver       : hwp-cpufreq
> scaling_avail_gov    : hwp
> current_governor     : hwp
> hwp variables        :
>   hardware limits    : lowest [1] most_efficient [11]
>                      : guaranteed [11] highest [49]
>   configured limits  : min [1] max [255] energy_perf [128]
>                      : activity_window [0 hardware selected]
>                      : desired [0 hw autonomous]
> turbo mode           : enabled
> 
> The scaling_*_freq values, policy->{min,max,cur}, are filled in with
> base, max and 0 in hwp_get_cpu_speeds(), so the returned values are
> not totally invalid.  Since governor registration is restricted to
> internal governors when HWP is active, only the single governor is
> available.  I think it's okay as-is, but let me know if you want
> something changed.

Well, the main connection here was to the possible overloading of
space in the sysctl struct, by widening what the union covers. That
can of course only be done for fields which don't convey useful data.
If we go without the further overloading, I guess we can for now leave
the tool as you have it, and deal with possible tidying later on.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 06:25:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 06:25:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533120.829505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwzjv-0003MZ-2R; Thu, 11 May 2023 06:25:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533120.829505; Thu, 11 May 2023 06:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwzju-0003MS-Vp; Thu, 11 May 2023 06:25:22 +0000
Received: by outflank-mailman (input) for mailman id 533120;
 Thu, 11 May 2023 06:25:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwzjt-0003MK-Cm
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 06:25:21 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0614.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::614])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99c66f46-efc4-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 08:25:19 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9095.eurprd04.prod.outlook.com (2603:10a6:20b:446::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 06:25:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 06:25:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99c66f46-efc4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZeeUcyFCRI82HHdCj8qbYaZlTDR9M0IHjFz+XiBDfVSjSekrN2hxLmTaUkkUOfSdNUmA1RHdmk3RqMNOkSBM8tJb2+EuZwGhPA22Xfs+kaHTlds63fWnKEVMnVTvCocKEYAT59snP6Sa82AcULbURKYygog6yoVhtAiOP+6tLcbFXv1MJxC29NmGoYFCuuyagSlPDP5AJQzz/+uLOB2C0FnzkSH78jt5Gzkjq52YeerQb/NhCo5JYlyek32O44PxfuNJDlEa83UzrUKzHWUIk7eqMDskxdELBT1EBpwWg1CSQs0TLQTRZqRv7bbDarhHCZ2vj9sgMT1Sst9Q47BzVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=BXrwMBcrWt2CFGtRdG26GLGiO6lMsVKCulPnAkeuZQg=;
 b=G7omN41XQ9NAc2tNaA+HGrbkwcodEry6uA73zXUqAYui9cIG6u6GHrcf8pE2A+LxbMAUKJDo+/z97KJbih2T3fkhWuOPVJqR7KoxLEgc/jjF/8HPRXy+OkeF5mREl+HUvoE0Ag8C/OBuaBbFlKQZclHS3V566suiVsN/aztqHpwxUKuUeh/2NnCEXX05SHWG4rkkNkTALjKYPmaHe2AbcDjDA36NEFzU7hnl8zSqFxrf42/LktNBhWnwfnyFufX3Q+CuSIH4rLun9R/hvWjDJ0wI3HkBChzds+ysSsrLcSaBSkiV2XPv2As+O4JD1GgyMVlBCXHa2bXEG4SzwUmgLA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BXrwMBcrWt2CFGtRdG26GLGiO6lMsVKCulPnAkeuZQg=;
 b=AkaJpJvIDPjCPEVqsgGNf0oLTAK1ZUPS9lTZNEKfKKJzdRr9RwLRdy7ZUd83iB1nb5m6jjs687s8SGP9ul1kqFE/0b8Ow8m+uAANmAPH8Ejds7dYTQA6EgP6ogIpLf4jsxfNtuZDdYL8JPMRZAtJqTqrbjKrBZ7npYFYbMvU8m9HiKXl7CJuBvqAHAEenl/W/unZ/H4IKWy2iYwzV0wFNInTiWomPNndLXLb63mF6ofakyehzYigZSvZVwJs1DdQO365p8+tIqLpj/1ejfuJ6ct8j48Pkn5CzvDiAUKAcNhQ8FBErS6HXralOQq7xXEtfOw1hmkcQF78hpMt89CLOw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <da5579aa-c37e-a67f-002a-23b6f6cec7e2@suse.com>
Date: Thu, 11 May 2023 08:25:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 09/14 RESEND] xenpm: Print HWP parameters
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-10-jandryuk@gmail.com>
 <256fc66c-066f-3f0c-b34b-a237e9268f22@suse.com>
 <CAKf6xpu=KiSkjGpyRYBCpYh67XhdtmjvwLjthkpTbE+HoNQm7g@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xpu=KiSkjGpyRYBCpYh67XhdtmjvwLjthkpTbE+HoNQm7g@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0029.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB9095:EE_
X-MS-Office365-Filtering-Correlation-Id: cab958de-d93e-44ab-1e71-08db51e87ca5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ApKWb8Q3XDgyfI96uvschLwkUVpjhyzqKSGx4+WQ2XRqjBTsRbwcIbM5OfwKcRZZbz3zMVz8/sR3vvHJBWmrlwt9wKRq/PZ2Ua6Jv5JmMU9lg7nooQX+r0UCmE99f7VStOwDfC9DbKw/ahwqrzxadismmsigCxeD8LEEannd6GDyhz6w6xG30I7koBnQW0mTqe1+N+nTeD8dzkc+FJvJztEeLSx1nrMy33zQKhCAlvXpMOC3Cm0F7ecQTmUwMSk/lCDlDVzR4cgpIwQjwis1aKwStIQegWMDwB7LsJaJp+5/Ez1tKmp+FCUmIVn55qgGzbu0r7+05L76WqKJsFkYXCHVovOTsL2XUB2iZNQzgGAyFxAwgLB+OOO96WON63zkQKaMx02UsGpjZvvzXYsnd2mGsF/q2pPQSd91wnommv9y8JJAkmram6wOua4HHvpo8SaUmb8imNyboSNvlw0qzomXTSarQs2XKoW1v3K7cqg/kEl8EEv382UxHxtLvx7Zp5Sie2lvF9Xp/5ANzEn20yu9stFlOWnX+ANfV0gU3YHeddKItHuWXAbdAHFsa4/PmPQOGMphyCkTPHECTIA0FCm7SO2XucIdlEUtu9nz8cBUGmiarMvhNBQEWIJqlD2v1dYKp54RGEroY9ArcrCmUQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(366004)(136003)(396003)(376002)(346002)(451199021)(31686004)(66556008)(66946007)(6916009)(66476007)(478600001)(6486002)(6666004)(54906003)(316002)(86362001)(36756003)(4326008)(31696002)(2616005)(6506007)(2906002)(6512007)(53546011)(26005)(8936002)(8676002)(5660300002)(41300700001)(38100700002)(186003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?T1FraGtwNnNiYjUyVjVtTitPTXJWZ2pjWXFpRHNKS2lBRENZRCs1bVNwbWlZ?=
 =?utf-8?B?NHZsTE14Vmw4blVXR3pTZVRaVUVPUU5vQ0svelRneE1QY2VFVE5wdWNCRzFU?=
 =?utf-8?B?SVRGQzhGWjlrR29Eakk1aDFaY2ZqRGRmZ1d2NWxxNkxqYXBxcWhrUENsSGVn?=
 =?utf-8?B?aU9UdGxXanN2b1BPTHVhaTdybXd0OG1yWnRscGFzeXR2T0xXQ3R4QjV1Wlpz?=
 =?utf-8?B?UzhWQlJCanpLNDhuQnBTK0lJR3EwWWdSUkwrMXBaNkk5WHpOMVFhOVhKT25r?=
 =?utf-8?B?bHZ1WURuTk5xaGZOZEJEWkNrT05yVlRxT285SzU4Mzk1S3ZmSlVFK0ZCMlVH?=
 =?utf-8?B?VTZRNHcvQ0JYUW5rMEJCN3JkeGkvMTNzdzRpaFphRjJzZDFKb3VCeVUvVGRn?=
 =?utf-8?B?WUVqT0pSY2lJdGdtS2VRTHlPUGViMnBRc3I0M3NpeHJsWG9jSXNYWUN5NHRv?=
 =?utf-8?B?NlZpSFdkd2hsSlgzVlpLMHlBVkduNkI1c0RkS1lvNGhST0FTUE5JM3c3Kzkr?=
 =?utf-8?B?Z1FWSFZrRW55UkdpaXJvYmZ2MEFBd001L0hIaXRWYnhjNlJOSGViM1ljOGpS?=
 =?utf-8?B?UFZWcnBORzVQVlUxUXFGWUpDU0dLVE9QSU5WeHRzODZrWnpXdFlkYWc5TTlx?=
 =?utf-8?B?ZDgydW11cmhQUzhSeDE4bFRkODRYQW5OVWhZc1VKN0MweE9ZZEx3UW0rOHlu?=
 =?utf-8?B?WVJIaUowUS9QWVhIU1h4ak5NTFRydkVtbDdLMU9mOERhTjhSb3VWMWdsQjRw?=
 =?utf-8?B?MXRrZExLek9KSmdoNWF0S1lvRkp5aTExOFB1WlEvby9DcDZEMS9MY25nUi9x?=
 =?utf-8?B?Rm0zaWdaQ3dVN3MxTDdlNmMrZUNoSU9ORVgvcGxUaVlZcThoazAvV1BEb2to?=
 =?utf-8?B?bkZxZEE3TjY1Z25oL1NtMmVrUEpuWDg0TTBlcDdCaVhVT1dCdnY1UzJyOTFO?=
 =?utf-8?B?b3RaZnFMNnlGVVdVZlJSTitib3BLVzJMVndlTEkzVHhMVGJEQm5wUjlhNi8x?=
 =?utf-8?B?aEJYUnVvcWxxNFNPZ3NTRzVsRUJBa0lIK3FDRlBXSmFnOFRBQisweHFWcmtC?=
 =?utf-8?B?T201a0t5NVQrVmlNYWZvc1hsTi9WNWJ5VGpvVTRNcGJRWkJIQW5ISXBoY01G?=
 =?utf-8?B?YUViaHJ2WE16OVVJb2JBcC9CVzc2N2FzUUtZN0Y5M2hLRUJra3ZIUEtmVlV4?=
 =?utf-8?B?SDg0NEdiSjg4YjdtbjlDVzZzaDA3MXlTNkFSUzlWRjIxTkduQVhvWDB5dFFO?=
 =?utf-8?B?K29CeXRZdGNLalIzandlT3FyVWF4OEdVbjdYSVJVTmh4NU1BcmpQL1h1M2U0?=
 =?utf-8?B?S3dVekVobkJ6QThiQWkzdmtYRXFHelZnVzhrbWx4Y1BUVkhBb0YzMDl6WnRp?=
 =?utf-8?B?Z0NzNm01MHR1VWp5Q1RSUFcvTWFIT09VYitRWmtNTDVHTWFTZjZiUXp2Tzdo?=
 =?utf-8?B?ZFR3ODFVdnVlTW05YmdPbjYrZUJKaXE4djhlb2Nja25vaWVuZkwvU0hnMlNY?=
 =?utf-8?B?eDh2WVJxa1psYkRKUnNCWHhFLzhEM0hvVkYyU0F5QjVQRnJCMXZhVW52cmpN?=
 =?utf-8?B?RkdQQ2hpaWozQXl4V0o3Z1cwSFNObkdsMFU1alltVWJkcGdxbkU2by8vL0lU?=
 =?utf-8?B?dTZVd1RaSWV1UCtyaUZxT0dZbmV5aEVHMHJ6bDVVV1BFWCttd0JRdTh1bWtK?=
 =?utf-8?B?VDF4ODBNWVhpajFIdGhxSHZpZGM2NW1hSjhqUmlFcFpTTjBBdVpLN2ZiWjFj?=
 =?utf-8?B?MG84K094ZkNxZGlnYVp2TS9jWUYxSVpJVTdrSmFZR0VscERabVpPaWxpZkpU?=
 =?utf-8?B?djhkWFFaaXdxd0N2R0Jyc21QZE1UT01kaDZEeC9OZ3ZtSlFtdjVDY3pGZlRG?=
 =?utf-8?B?akZ2M0hzcXg5MUlndEpwZ2JBNE5XMDVRVms0cTJYNHU3RThKbE9zN01taDlM?=
 =?utf-8?B?aUhEdHdhejdTR3NFMXZGZHJNS1lMVkdCM1Q4c2RCbVR5TVhmTHZFdmtQbEdN?=
 =?utf-8?B?VkpsOTRkWmRUSGMyaURsajJZUVNBMXNuSzNvNjA0U0dOL0ttL3p0S1VvVGRo?=
 =?utf-8?B?aitqOTlDc3BjU2N5bHhXUHVJa0dRNFNvTFFicTJodmNpd2oyendwbjJzVUVt?=
 =?utf-8?Q?prXjksmm+b0lhH/ZyLKpYgQQH?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cab958de-d93e-44ab-1e71-08db51e87ca5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 06:25:16.9882
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8U2/E0nNeX0ieeHZEIZThXEBfizFFhhkL0jztCYyjjxBFtWa6Z6Mr/CtTKj8w5Y9+RMuSN0Lw7oigQTvK8pqcQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB9095

On 10.05.2023 20:11, Jason Andryuk wrote:
> On Mon, May 8, 2023 at 6:43 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>> --- a/tools/misc/xenpm.c
>>> +++ b/tools/misc/xenpm.c
>>> @@ -708,6 +708,44 @@ void start_gather_func(int argc, char *argv[])
>>>      pause();
>>>  }
>>>
>>> +static void calculate_hwp_activity_window(const xc_hwp_para_t *hwp,
>>> +                                          unsigned int *activity_window,
>>> +                                          const char **units)
>>> +{
>>> +    unsigned int mantissa = hwp->activity_window & HWP_ACT_WINDOW_MANTISSA_MASK;
>>> +    unsigned int exponent =
>>> +        (hwp->activity_window >> HWP_ACT_WINDOW_EXPONENT_SHIFT) &
>>> +            HWP_ACT_WINDOW_EXPONENT_MASK;
>>
>> I wish we had MASK_EXTR() in common-macros.h. While really a comment on
>> patch 7 - HWP_ACT_WINDOW_EXPONENT_SHIFT is redundant information and
>> should imo be omitted from the public interface, in favor of just a
>> (suitably shifted) mask value. Also note how those constants all lack
>> proper XEN_ prefixes.
> 
> I'll add a patch adding MASK_EXTR() & MASK_INSR() to common-macros.h
> and use those - is there any reason not to do that?

I don't think there is, but I'm also not a maintainer of that code.

>>> +    unsigned int multiplier = 1;
>>> +    unsigned int i;
>>> +
>>> +    if ( hwp->activity_window == 0 )
>>> +    {
>>> +        *units = "hardware selected";
>>> +        *activity_window = 0;
>>> +
>>> +        return;
>>> +    }
>>
>> While in line with documentation, any mantissa of 0 results in a 0us
>> window, which I assume would then also mean "hardware selected".
> 
> I hadn't considered that.  The hardware seems to allow you to write a
> 0 mantissa, non-0 exponent.  From the SDM, it's unclear what that
> would mean.  The code as written would display "0 us", "0 ms", or "0
> s" - not "0 hardware selected".  Do you want more explicit printing
> for those cases?  I think it's fine to have a distinction in the
> output: "0 hardware selected" is the known valid value that is
> working as expected, and having the other ones look different seems
> good to me since we don't really know what they mean.

Keeping things as they are - apart from perhaps adding a respective
comment - is okay, as long as we don't know any better.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 06:27:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 06:27:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533127.829514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwzlw-0003zI-It; Thu, 11 May 2023 06:27:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533127.829514; Thu, 11 May 2023 06:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pwzlw-0003zB-Fl; Thu, 11 May 2023 06:27:28 +0000
Received: by outflank-mailman (input) for mailman id 533127;
 Thu, 11 May 2023 06:27:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pwzlv-0003yO-Ec
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 06:27:27 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0617.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::617])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e52f144c-efc4-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 08:27:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9095.eurprd04.prod.outlook.com (2603:10a6:20b:446::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 06:27:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 06:27:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e52f144c-efc4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bGHiv9JIS6vUaA4IXhbq5sa6WALxzZTJszV4ba6+2y3DdevlaxSh8Kho2a9fChRgsHMWZ0MlCzJBQF4SRBC9hrfPRXT591Jo91lVJnPwUGJ7BibbxdogwEsq974MDGG10gzoIellER3PE8vDyWJ0U6IhWgzH8LkzxmIilyoo6AxYFP1dQI1Ep8xKppQafgKoQ9dpiVFX/F7U9XF718wXnX9uqIKhix69upKDaPX8LJIw7e+IR7ffee1B0OaHL1ucwcJhUwtHrv+liwKkYzY1V4cy1GPSO3hxWQJUPPD73NAkU/BsurHFMqwGhePCSPsbCaFvxVEEPvbaKf/a4PksHQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1/eE/QlgO9DdRdDNB43Un3ARKvzaZpNOV2K9WEcTCgc=;
 b=KnahjXu5TLgcjFFGJdbdxalpNIr8Io7LFNgnz5kH9hJQbE3mklIkt6lNS8fki89ecxBj0AMKat6d2gr3tHJ+6ckSPrFiW2WFGj3RYUPnxO8dXpG+GWeA9K+CwbQrQREX2D0HEQ9XhASJAxTn5cg2ODUTt3bv13YxVFrR5LtnOsk1ciXaH0WzquZdik5nBCTVJRnQXNzjPIq0+/6hJdeqyZ3nIAi6wIJRsiTHzSfmC1AEtR6WB7vKsNOrIrRKuddOMlmN47U++m7AfZLt13gNJLltN31xFfs5Hr839HvT607XmvpKubHundvJ2AeQ5WCi1L6Sd7upxB1CaVWa18t1oA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1/eE/QlgO9DdRdDNB43Un3ARKvzaZpNOV2K9WEcTCgc=;
 b=k4qneEeo8IaJ6pTdgRQvCY2R4T6434lLlIrrfuO+LLqfbjf3emiUUuzMOOlHSAHwinnc4QiVkmfoVmYJnjl/UEjhV3cna1cWwYqopHh32jjfkkHIS4xtzS2Y9bO7NW69vHPLnrS2kh4VS/xTwsSoVWdbk0pyUqXf/w6T9p/dbS4u3wGm8IKEtYxzgec37BrSB7/XsfbGKleStUHmknzrlxY1dUj39ValWEj0Uu3gZgpVPf/JuO2e4uQjBNnOmFkl5UNex2QoxysvdvvYVEMgmdLz1Vd41uDCdFwpppbtVkI+e/XMqQFSIBJbUlq4iZIdvZvgXtkSFYvBUUMcpC45oQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <87629ef5-fb35-60e8-0d67-ded6f777f9bc@suse.com>
Date: Thu, 11 May 2023 08:27:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] iommu/vtd: fix address translation for superpages
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20230509104146.61178-1-roger.pau@citrix.com>
 <bc750b8d-77be-f967-907a-4f19c783ad99@suse.com>
 <ZFtVYEVsELGfZxik@Air-de-Roger>
 <5bde1d79-03ef-6f8b-4b28-8d7e6867ba18@suse.com>
 <ZFtwSjuZaz05DIY0@Air-de-Roger>
 <f53d0041-d694-1221-475e-648a2afd2ff9@suse.com>
 <ZFu2EzMmQvfpE7tJ@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZFu2EzMmQvfpE7tJ@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.05.2023 17:19, Roger Pau Monné wrote:
> On Wed, May 10, 2023 at 03:30:21PM +0200, Jan Beulich wrote:
>> On 10.05.2023 12:22, Roger Pau Monné wrote:
>>> On Wed, May 10, 2023 at 12:00:51PM +0200, Jan Beulich wrote:
>>>> On 10.05.2023 10:27, Roger Pau Monné wrote:
>>>>> On Tue, May 09, 2023 at 06:06:45PM +0200, Jan Beulich wrote:
>>>>>> On 09.05.2023 12:41, Roger Pau Monne wrote:
>>>>>>> When translating an address that falls inside of a superpage in the
>>>>>>> IOMMU page tables the fetching of the PTE physical address field
>>>>>>> wasn't using dma_pte_addr(), which caused the returned data to be
>>>>>>> corrupt as it would contain bits not related to the address field.
>>>>>>
>>>>>> I'm afraid I don't understand:
>>>>>>
>>>>>>> --- a/xen/drivers/passthrough/vtd/iommu.c
>>>>>>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>>>>>>> @@ -359,16 +359,18 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
>>>>>>>  
>>>>>>>              if ( !alloc )
>>>>>>>              {
>>>>>>> -                pte_maddr = 0;
>>>>>>>                  if ( !dma_pte_present(*pte) )
>>>>>>> +                {
>>>>>>> +                    pte_maddr = 0;
>>>>>>>                      break;
>>>>>>> +                }
>>>>>>>  
>>>>>>>                  /*
>>>>>>>                   * When the leaf entry was requested, pass back the full PTE,
>>>>>>>                   * with the address adjusted to account for the residual of
>>>>>>>                   * the walk.
>>>>>>>                   */
>>>>>>> -                pte_maddr = pte->val +
>>>>>>> +                pte_maddr +=
>>>>>>>                      (addr & ((1UL << level_to_offset_bits(level)) - 1) &
>>>>>>>                       PAGE_MASK);
>>>>>>
>>>>>> With this change you're now violating what the comment says (plus what
>>>>>> the comment ahead of the function says). And it says what it says for
>>>>>> a reason - see intel_iommu_lookup_page(), which I think your change is
>>>>>> breaking.
>>>>>
>>>>> Hm, but the code in intel_iommu_lookup_page() is now wrong as it takes
>>>>> the bits in DMA_PTE_CONTIG_MASK as part of the physical address when
>>>>> doing the conversion to mfn?  maddr_to_mfn() doesn't perform any
>>>>> masking to remove the bits above PADDR_BITS.
>>>>
>>>> Oh, right. But that's a missing dma_pte_addr() in intel_iommu_lookup_page()
>>>> then. (It would likely be better anyway to switch "uint64_t val" to
>>>> "struct dma_pte pte" there, to make more visible that it's a PTE we're
>>>> dealing with.) I indeed overlooked this aspect when doing the earlier
>>>> change.
>>>
>>> I guess I'm still confused, as the other return value for target == 0
>>> (when the address is not part of a superpage) does return
>>> dma_pte_addr(pte).  I think that needs further fixing then.
>>
>> Hmm, indeed. But I think it's worse than this: addr_to_dma_page_maddr()
>> also does one too many iterations in that case. All "normal" callers
>> supply a positive "target". We need to terminate the walk at level 1
>> also when target == 0.
> 
> Don't we do that already due to the following check:
> 
> if ( --level == target )
>     break;
> 
> Which prevents mapping the PTE address as a page table directory?

I don't think this is enough - this code, afaict right now, is only
sufficient when target >= 1.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 06:43:56 2023
Message-ID: <62b721a0-7d09-751f-5d95-086584f3d7e5@suse.com>
Date: Thu, 11 May 2023 08:43:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/6] x86/cpu-policy: Drop build time cross-checks of
 featureset sizes
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-2-andrew.cooper3@citrix.com>
 <6531df09-7f7f-a1e2-5993-bd2a14e22421@suse.com>
 <18090224-6838-8ed4-6be5-ae10a334e277@citrix.com>
 <bbbd4c8b-e681-f785-b85c-474b380d6160@suse.com>
 <742a5807-dd53-0cd1-d478-aed567d5c4f5@citrix.com>
 <c8cb1df9-33af-8cae-291d-9a86a3b7f6b9@suse.com>
 <6d62ba23-d247-08da-3a84-ed8d1cdc4271@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6d62ba23-d247-08da-3a84-ed8d1cdc4271@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.05.2023 17:06, Andrew Cooper wrote:
> On 09/05/2023 5:15 pm, Jan Beulich wrote:
>> On 09.05.2023 17:59, Andrew Cooper wrote:
>>> On 09/05/2023 3:28 pm, Jan Beulich wrote:
>>>> On 09.05.2023 15:04, Andrew Cooper wrote:
>>>>> On 08/05/2023 7:47 am, Jan Beulich wrote:
>>>>>> On 04.05.2023 21:39, Andrew Cooper wrote:
>>>>>>> These BUILD_BUG_ON()s exist to cover the curious absence of a diagnostic for
>>>>>>> code which looks like:
>>>>>>>
>>>>>>>   uint32_t foo[1] = { 1, 2, 3 };
>>>>>>>
>>>>>>> However, GCC 12 at least does now warn for this:
>>>>>>>
>>>>>>>   foo.c:1:24: error: excess elements in array initializer [-Werror]
>>>>>>>     884 | uint32_t foo[1] = { 1, 2, 3 };
>>>>>>>         |                        ^
>>>>>>>   foo.c:1:24: note: (near initialization for 'foo')
>>>>>> I'm pretty sure all gcc versions we support diagnose such cases. In turn
>>>>>> the arrays in question don't have explicit dimensions at their
>>>>>> definition sites, and hence they derive their dimensions from their
>>>>>> initializers. So the build-time-checks are about the arrays in fact
>>>>>> obtaining the right dimensions, i.e. the initializers being suitable.
>>>>>>
>>>>>> With the core part of the reasoning not being applicable, I'm afraid I
>>>>>> can't even say "okay with an adjusted description".
>>>>> Now I'm extra confused.
>>>>>
>>>>> I put those BUILD_BUG_ON()'s in because I was not getting a diagnostic
>>>>> when I was expecting one, and there was a bug in the original featureset
>>>>> work caused by this going wrong.
>>>>>
>>>>> But godbolt seems to agree that even GCC 4.1 notices.
>>>>>
>>>>> Maybe it was some other error (C file not seeing the header properly?)
>>>>> which disappeared across the upstream review?
>>>> Or maybe, by mistake, too few initializer fields? But what exactly it
>>>> was probably doesn't matter. If this patch is to stay (see below), some
>>>> different description will be needed anyway (or the change be folded
>>>> into the one actually invalidating those BUILD_BUG_ON()s).
>>>>
>>>>> Either way, these aren't appropriate, and need deleting before patch 5,
>>>>> because the check is no longer valid when a featureset can be longer
>>>>> than the autogen length.
>>>> Well, they need deleting if we stick to the approach chosen there right
>>>> now. If we switched to my proposed alternative, they better would stay.
>>> Given that all versions of GCC do warn, I don't see any justification
>>> for them to stay.
>> All versions warn when the variable declarations / definitions have a
>> dimension specified, and then there are excess initializers. Yet none
>> of the five affected arrays have a dimension specified in their
>> definitions.
>>
>> Even if dimensions were added, we'd then have only covered half of
>> what the BUILD_BUG_ON()s cover right now: There could then be fewer
>> than intended initializer fields, and things may still be screwed. I
>> think it was for this very reason why BUILD_BUG_ON() was chosen.
> 
> ???
> 
> The dimensions already exist, as proved by the fact GCC can spot the
> violation.

Where? Quoting cpu-policy.c:

const uint32_t known_features[] = INIT_KNOWN_FEATURES;

static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
static const uint32_t __initconst hvm_hap_max_featuremask[] =
    INIT_HVM_HAP_MAX_FEATURES;
static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
static const uint32_t __initconst hvm_shadow_def_featuremask[] =
    INIT_HVM_SHADOW_DEF_FEATURES;
static const uint32_t __initconst hvm_hap_def_featuremask[] =
    INIT_HVM_HAP_DEF_FEATURES;
static const uint32_t deep_features[] = INIT_DEEP_FEATURES;

I notice that known_features[], as an exception, has its dimension declared
in cpuid.h.

> On the other hand, zero extending a featureset is explicitly how they're
> supposed to work.  How do you think Xapi has coped with migration
> compatibility over the years, not to mention the microcode changes which
> lengthen a featureset?
> 
> So no, there was never any problem with constructs of the form uint32_t
> foo[10] = { 1, } in the first place.
> 
> The BUILD_BUG_ON()s therefore serve no purpose at all.

As per above the very minimum would be to accompany their dropping with
adding of explicitly specified dimensions for all the static arrays. I'm
not entirely certain about the other side (the zero-extension), but I'd
likely end up simply trusting you on that.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 06:46:26 2023
Message-ID: <995d1c8a-860d-fab6-4f92-7f1f7510b7e4@suse.com>
Date: Thu, 11 May 2023 08:46:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 5/6] x86/cpu-policy: Disentangle X86_NR_FEAT and
 FEATURESET_NR_ENTRIES
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230504193924.3305496-1-andrew.cooper3@citrix.com>
 <20230504193924.3305496-6-andrew.cooper3@citrix.com>
 <9613df3b-c49f-6034-ff99-7a6ea9286f0f@suse.com>
 <e888dc16-66bf-fc15-9ddd-f10879b79a5d@citrix.com>
 <f09c9127-1cc4-1ed9-0348-be12c0c999e8@suse.com>
 <e4385229-a804-191f-c656-b605c498a292@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e4385229-a804-191f-c656-b605c498a292@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.05.2023 22:13, Andrew Cooper wrote:
> On 09/05/2023 3:24 pm, Jan Beulich wrote:
>> On 09.05.2023 16:03, Andrew Cooper wrote:
>>> On 08/05/2023 8:45 am, Jan Beulich wrote:
>>>> On 04.05.2023 21:39, Andrew Cooper wrote:
>>>>> When adding new words to a featureset, there is a reasonable amount of
>>>>> boilerplate and it is preferable to split the addition into multiple patches.
>>>>>
>>>>> GCC 12 spotted a real (transient) error which occurs when splitting additions
>>>>> like this.  Right now, FEATURESET_NR_ENTRIES is dynamically generated from the
>>>>> highest numeric XEN_CPUFEATURE() value, and can be less than what the
>>>>> FEATURESET_* constants suggest the length of a featureset bitmap ought to be.
>>>>>
>>>>> This causes the policy <-> featureset converters to genuinely access
>>>>> out-of-bounds on the featureset array.
>>>>>
>>>>> Rework X86_NR_FEAT to be related to FEATURESET_* alone, allowing it
>>>>> specifically to grow larger than FEATURESET_NR_ENTRIES.
>>>>>
>>>>> Reported-by: Jan Beulich <jbeulich@suse.com>
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> While, like you, I could live with the previous patch even if I don't
>>>> particularly like it, I'm not convinced of the route you take here.
>>> It's the route you tentatively agreed to in
>>> https://lore.kernel.org/xen-devel/a282c338-98ab-6c3f-314b-267a5a82bad1@suse.com/
>> Right. Yet I deliberately said "may be the best" there, as something
>> better might turn up. And getting the two numbers to always agree, as
>> suggested, might end up being better.
> 
> Then don't write "yes" if what you actually mean is "I'd prefer a
> different option if possible", which is a "no".
> 
> I cannot read your mind, and we both know I do not have time to waste on
> this task.
> 
> And now I have to go and spend yet more time doing it differently.

I'm sorry for that. Yet please also allow for me to re-consider an earlier
voiced view, once I see things in more detail.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 06:48:40 2023
Message-ID: <46701a92-2ada-abb4-e1a7-c8b1bfc5aa2c@suse.com>
Date: Thu, 11 May 2023 08:48:22 +0200
Subject: Re: [PATCH] x86: Use printk_once() instead of opencoding it
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230510193357.12278-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230510193357.12278-1-andrew.cooper3@citrix.com>

On 10.05.2023 21:33, Andrew Cooper wrote:
> Technically our helper post-dates all of these examples, but it's good cleanup
> nevertheless.  None of these examples should be using fully locked
> test_and_set_bool() in the first place.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Thu May 11 07:28:45 2023
Date: Thu, 11 May 2023 12:58:31 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xen.org, Juergen Gross <jgross@suse.com>,
 Julien Grall <julien@xen.org>, Vincent Guittot <vincent.guittot@linaro.org>,
 stratos-dev@op-lists.linaro.org, Alex Bennée <alex.bennee@linaro.org>,
 Mathieu Poirier <mathieu.poirier@linaro.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>,
 Erik Schilling <erik.schilling@linaro.org>
Subject: Re: [PATCH] libxl: arm: Allow grant mappings for backends running on
 Dom0
Message-ID: <20230511072831.4hubr2qxlkaov4am@vireshk-i7>
References: <817f0320316dd144826add0ac834618026b91160.1680165772.git.viresh.kumar@linaro.org>
 <92c7f972-f617-40fc-bc5d-582c8154d03c@perard>
 <20230505093835.jcbwo6zjk5hcjvsm@vireshk-i7>
 <e6b10450-50c2-468c-88ba-36e0274b5970@perard>
In-Reply-To: <e6b10450-50c2-468c-88ba-36e0274b5970@perard>

On 09-05-23, 16:05, Anthony PERARD wrote:
> I guess the way virtio devices are implemented in libxl suggests to me
> that they are just Xen PV devices. So I guess some documentation in the
> tree would be useful, maybe some comments in libxl_virtio.c.

Our use case is just PV devices; I am not sure whether someone else may
want to use it differently.

I will add the comment.

> > The eventual frontend drivers (like drivers/i2c/busses/i2c-virtio.c)
> > aren't Xen aware and the respective virtio protocol doesn't talk about
> > how memory is mapped for the guest. The guest kernel allows both
> > memory mapping models and the decision is made based on the presence
> > or absence of the iommu node in the DT.
> 
> So, virtio's frontend doesn't know about xenstore?

Right.

> In this case, there's
> no need to have all those nodes in xenstore under the frontend path.

Yeah, I ended up copying them from disk or kbd I guess, but I am not
using them for sure.

> I guess the nodes for the backends are at least somewhat useful for
> libxl to reload the configuration of the virtio device. But even that
> probably isn't useful if we can't hot-plug or hot-unplug virtio devices.
> 
> Are the xenstore nodes for the backend actually being used by a virtio
> backend?

Yes, I am using them on the backend side to read device information
whenever a new guest comes in with a bunch of devices.

-- 
viresh


From xen-devel-bounces@lists.xenproject.org Thu May 11 07:51:29 2023
From: Viresh Kumar <viresh.kumar@linaro.org>
To: xen-devel@lists.xen.org, Juergen Gross <jgross@suse.com>,
 Julien Grall <julien@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
 Vincent Guittot <vincent.guittot@linaro.org>, stratos-dev@op-lists.linaro.org,
 Alex Bennée <alex.bennee@linaro.org>,
 Mathieu Poirier <mathieu.poirier@linaro.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>,
 Erik Schilling <erik.schilling@linaro.org>
Subject: [PATCH V2 1/2] libxl: virtio: Remove unused frontend nodes
Date: Thu, 11 May 2023 13:20:42 +0530
Message-Id: <782a7b3f54c36a3930a031647f6778e8dd02131d.1683791298.git.viresh.kumar@linaro.org>

The libxl_virtio file works only for PV devices and the nodes under the
frontend path are not used by anyone. Remove them.

While at it, also add a comment to the file describing what devices this
file is used for.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V2: New patch.

 tools/libs/light/libxl_virtio.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c
index faada49e184e..eadcb7124c3f 100644
--- a/tools/libs/light/libxl_virtio.c
+++ b/tools/libs/light/libxl_virtio.c
@@ -10,6 +10,9 @@
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  * GNU Lesser General Public License for more details.
+ *
+ * This is used for PV (paravirtualized) devices only and the frontend isn't
+ * aware of the xenstore.
  */
 
 #include "libxl_internal.h"
@@ -49,11 +52,6 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid,
     flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type));
     flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport));
 
-    flexarray_append_pair(front, "irq", GCSPRINTF("%u", virtio->irq));
-    flexarray_append_pair(front, "base", GCSPRINTF("%#"PRIx64, virtio->base));
-    flexarray_append_pair(front, "type", GCSPRINTF("%s", virtio->type));
-    flexarray_append_pair(front, "transport", GCSPRINTF("%s", transport));
-
     return 0;
 }
 
-- 
2.31.1.272.g89b43f80a514



From xen-devel-bounces@lists.xenproject.org Thu May 11 07:51:29 2023
From: Viresh Kumar <viresh.kumar@linaro.org>
To: xen-devel@lists.xen.org, Juergen Gross <jgross@suse.com>,
 Julien Grall <julien@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
 Vincent Guittot <vincent.guittot@linaro.org>, stratos-dev@op-lists.linaro.org,
 Alex Bennée <alex.bennee@linaro.org>,
 Mathieu Poirier <mathieu.poirier@linaro.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>,
 Erik Schilling <erik.schilling@linaro.org>
Subject: [PATCH V2 2/2] libxl: arm: Add grant_usage parameter for virtio devices
Date: Thu, 11 May 2023 13:20:43 +0530
Message-Id: <ccf5b1402fb7156be0ef33b44f7b114efbe76319.1683791298.git.viresh.kumar@linaro.org>
In-Reply-To: <782a7b3f54c36a3930a031647f6778e8dd02131d.1683791298.git.viresh.kumar@linaro.org>
References: <782a7b3f54c36a3930a031647f6778e8dd02131d.1683791298.git.viresh.kumar@linaro.org>

Currently, the grant-mapping related device tree properties are added
only if the backend domain is not Dom0. While Dom0 is privileged and can
foreign-map the entire guest memory, it may still be desirable for Dom0
to access the guest's memory via grant mappings, and hence map only what
is required.

This commit adds a "grant_usage" parameter for virtio devices, which
provides finer control over this behaviour.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V1->V2:

- Instead of just 0 or 1, the argument can now take multiple values and
  control the functionality in a finer-grained way.

- Update .gen.go files as well.

- Don't add nodes under frontend path.
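For illustration (not part of the patch), a guest config line exercising the new parameter might look like the following; the backend domain name and device type string are made up, following the virtio device spec syntax in xl.cfg:

```
virtio = [ "backend=DomD, type=virtio,device22, transport=mmio, grant_usage=enabled" ]
```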

 docs/man/xl.cfg.5.pod.in             | 12 ++++++++++++
 tools/golang/xenlight/helpers.gen.go |  2 ++
 tools/golang/xenlight/types.gen.go   |  8 ++++++++
 tools/libs/light/libxl_arm.c         | 27 ++++++++++++++++++---------
 tools/libs/light/libxl_types.idl     |  7 +++++++
 tools/libs/light/libxl_virtio.c      | 19 +++++++++++++++++++
 tools/xl/xl_parse.c                  |  3 +++
 7 files changed, 69 insertions(+), 9 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 24ac92718288..0405f6efe62a 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1619,6 +1619,18 @@ hexadecimal format, without the "0x" prefix and all in lower case, like
 Specifies the transport mechanism for the Virtio device, only "mmio" is
 supported for now.
 
+=item B<grant_usage=STRING>
+
+Specifies the grant usage policy for the Virtio device. This can be set to
+one of the following values:
+
+- "default": The default grant setting is used, i.e. grants are enabled if
+  backend-domid != 0.
+
+- "enabled": Grants are always enabled.
+
+- "disabled": Grants are always disabled.
+
 =back
 
 =item B<tee="STRING">
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 0a203d22321f..71d9c24e382c 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1792,6 +1792,7 @@ func (x *DeviceVirtio) fromC(xc *C.libxl_device_virtio) error {
 x.BackendDomname = C.GoString(xc.backend_domname)
 x.Type = C.GoString(xc._type)
 x.Transport = VirtioTransport(xc.transport)
+x.GrantUsage = VirtioGrantUsage(xc.grant_usage)
 x.Devid = Devid(xc.devid)
 x.Irq = uint32(xc.irq)
 x.Base = uint64(xc.base)
@@ -1809,6 +1810,7 @@ xc.backend_domname = C.CString(x.BackendDomname)}
 if x.Type != "" {
 xc._type = C.CString(x.Type)}
 xc.transport = C.libxl_virtio_transport(x.Transport)
+xc.grant_usage = C.libxl_virtio_grant_usage(x.GrantUsage)
 xc.devid = C.libxl_devid(x.Devid)
 xc.irq = C.uint32_t(x.Irq)
 xc.base = C.uint64_t(x.Base)
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index a7c17699f80e..8f7234d3494a 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -261,6 +261,13 @@ VirtioTransportUnknown VirtioTransport = 0
 VirtioTransportMmio VirtioTransport = 1
 )
 
+type VirtioGrantUsage int
+const(
+VirtioGrantUsageDefault VirtioGrantUsage = 0
+VirtioGrantUsageDisabled VirtioGrantUsage = 1
+VirtioGrantUsageEnabled VirtioGrantUsage = 2
+)
+
 type Passthrough int
 const(
 PassthroughDefault Passthrough = 0
@@ -683,6 +690,7 @@ BackendDomid Domid
 BackendDomname string
 Type string
 Transport VirtioTransport
+GrantUsage VirtioGrantUsage
 Devid Devid
 Irq uint32
 Base uint64
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 97c80d7ed0fa..9cd7dbef0237 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -922,7 +922,8 @@ static int make_xen_iommu_node(libxl__gc *gc, void *fdt)
 
 /* The caller is responsible to complete / close the fdt node */
 static int make_virtio_mmio_node_common(libxl__gc *gc, void *fdt, uint64_t base,
-                                        uint32_t irq, uint32_t backend_domid)
+                                        uint32_t irq, uint32_t backend_domid,
+                                        bool use_grant)
 {
     int res;
     gic_interrupt intr;
@@ -945,7 +946,7 @@ static int make_virtio_mmio_node_common(libxl__gc *gc, void *fdt, uint64_t base,
     res = fdt_property(fdt, "dma-coherent", NULL, 0);
     if (res) return res;
 
-    if (backend_domid != LIBXL_TOOLSTACK_DOMID) {
+    if (use_grant) {
         uint32_t iommus_prop[2];
 
         iommus_prop[0] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU);
@@ -959,11 +960,12 @@ static int make_virtio_mmio_node_common(libxl__gc *gc, void *fdt, uint64_t base,
 }
 
 static int make_virtio_mmio_node(libxl__gc *gc, void *fdt, uint64_t base,
-                                 uint32_t irq, uint32_t backend_domid)
+                                 uint32_t irq, uint32_t backend_domid,
+                                 bool use_grant)
 {
     int res;
 
-    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid);
+    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid, use_grant);
     if (res) return res;
 
     return fdt_end_node(fdt);
@@ -1019,11 +1021,11 @@ static int make_virtio_mmio_node_gpio(libxl__gc *gc, void *fdt)
 
 static int make_virtio_mmio_node_device(libxl__gc *gc, void *fdt, uint64_t base,
                                         uint32_t irq, const char *type,
-                                        uint32_t backend_domid)
+                                        uint32_t backend_domid, bool use_grant)
 {
     int res;
 
-    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid);
+    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid, use_grant);
     if (res) return res;
 
     /* Add device specific nodes */
@@ -1363,22 +1365,29 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
                     iommu_needed = true;
 
                 FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
-                                           disk->backend_domid) );
+                                           disk->backend_domid,
+                                           disk->backend_domid != LIBXL_TOOLSTACK_DOMID) );
             }
         }
 
         for (i = 0; i < d_config->num_virtios; i++) {
             libxl_device_virtio *virtio = &d_config->virtios[i];
+            bool use_grant = false;
 
             if (virtio->transport != LIBXL_VIRTIO_TRANSPORT_MMIO)
                 continue;
 
-            if (virtio->backend_domid != LIBXL_TOOLSTACK_DOMID)
+            if ((virtio->grant_usage == LIBXL_VIRTIO_GRANT_USAGE_ENABLED) ||
+                ((virtio->grant_usage == LIBXL_VIRTIO_GRANT_USAGE_DEFAULT) &&
+                 (virtio->backend_domid != LIBXL_TOOLSTACK_DOMID))) {
+                use_grant = true;
                 iommu_needed = true;
+            }
 
             FDT( make_virtio_mmio_node_device(gc, fdt, virtio->base,
                                               virtio->irq, virtio->type,
-                                              virtio->backend_domid) );
+                                              virtio->backend_domid,
+                                              use_grant) );
         }
 
         /*
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index c10292e0d7e3..17228817f9b7 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -283,6 +283,12 @@ libxl_virtio_transport = Enumeration("virtio_transport", [
     (1, "MMIO"),
     ])
 
+libxl_virtio_grant_usage = Enumeration("virtio_grant_usage", [
+    (0, "DEFAULT"),
+    (1, "DISABLED"),
+    (2, "ENABLED"),
+    ])
+
 libxl_passthrough = Enumeration("passthrough", [
     (0, "default"),
     (1, "disabled"),
@@ -740,6 +746,7 @@ libxl_device_virtio = Struct("device_virtio", [
     ("backend_domname", string),
     ("type", string),
     ("transport", libxl_virtio_transport),
+    ("grant_usage", libxl_virtio_grant_usage),
     ("devid", libxl_devid),
     # Note that virtio-mmio parameters (irq and base) are for internal
     # use by libxl and can't be modified.
diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c
index eadcb7124c3f..0a0fae967a0f 100644
--- a/tools/libs/light/libxl_virtio.c
+++ b/tools/libs/light/libxl_virtio.c
@@ -46,11 +46,13 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid,
                                       flexarray_t *ro_front)
 {
     const char *transport = libxl_virtio_transport_to_string(virtio->transport);
+    const char *grant_usage = libxl_virtio_grant_usage_to_string(virtio->grant_usage);
 
     flexarray_append_pair(back, "irq", GCSPRINTF("%u", virtio->irq));
     flexarray_append_pair(back, "base", GCSPRINTF("%#"PRIx64, virtio->base));
     flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type));
     flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport));
+    flexarray_append_pair(back, "grant_usage", GCSPRINTF("%s", grant_usage));
 
     return 0;
 }
@@ -102,6 +104,23 @@ static int libxl__virtio_from_xenstore(libxl__gc *gc, const char *libxl_path,
         }
     }
 
+    tmp = NULL;
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+				GCSPRINTF("%s/grant_usage", be_path), &tmp);
+    if (rc) goto out;
+
+    if (!tmp || !strcmp(tmp, "default")) {
+        virtio->grant_usage = LIBXL_VIRTIO_GRANT_USAGE_DEFAULT;
+    } else {
+        if (!strcmp(tmp, "enabled")) {
+            virtio->grant_usage = LIBXL_VIRTIO_GRANT_USAGE_ENABLED;
+        } else if (!strcmp(tmp, "disabled")) {
+            virtio->grant_usage = LIBXL_VIRTIO_GRANT_USAGE_DISABLED;
+        } else {
+            return ERROR_INVAL;
+        }
+    }
+
     tmp = NULL;
     rc = libxl__xs_read_checked(gc, XBT_NULL,
 				GCSPRINTF("%s/type", be_path), &tmp);
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 1f6f47daf4e1..d2de3abfcd78 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1215,6 +1215,9 @@ static int parse_virtio_config(libxl_device_virtio *virtio, char *token)
     } else if (MATCH_OPTION("transport", token, oparg)) {
         rc = libxl_virtio_transport_from_string(oparg, &virtio->transport);
         if (rc) return rc;
+    } else if (MATCH_OPTION("grant_usage", token, oparg)) {
+        rc = libxl_virtio_grant_usage_from_string(oparg, &virtio->grant_usage);
+        if (rc) return rc;
     } else {
         fprintf(stderr, "Unknown string \"%s\" in virtio spec\n", token);
         return -1;
-- 
2.31.1.272.g89b43f80a514



From xen-devel-bounces@lists.xenproject.org Thu May 11 08:14:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 08:14:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533171.829585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px1RT-0002kY-9V; Thu, 11 May 2023 08:14:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533171.829585; Thu, 11 May 2023 08:14:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px1RT-0002kR-6k; Thu, 11 May 2023 08:14:27 +0000
Received: by outflank-mailman (input) for mailman id 533171;
 Thu, 11 May 2023 08:14:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bPy6=BA=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1px1RR-0002kL-Rb
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 08:14:25 +0000
Received: from mail-lf1-x132.google.com (mail-lf1-x132.google.com
 [2a00:1450:4864:20::132])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d740b8bd-efd3-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 10:14:24 +0200 (CEST)
Received: by mail-lf1-x132.google.com with SMTP id
 2adb3069b0e04-4f13c577e36so9189882e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 01:14:24 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 m10-20020a056512014a00b004f11d1ab605sm1017969lfo.295.2023.05.11.01.14.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 May 2023 01:14:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d740b8bd-efd3-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683792864; x=1686384864;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=+88Dancfok8X5Zw7Ulq5o58+aWZxHOF5SDxRA4KA8Gc=;
        b=aqrV06ZY3sXs0Bmwf9sjE2Y4ws5fg48Q1D8AcT56hgvxNSUQmiRzIHcCHPiOoHmdMD
         qLWfmCALyq8AYTcG/YMLY2rT7Au5/yO98qVr0X4I1+4DgEZfgUR+NV5qmgIT+YlQOt40
         GUDTqzhE17rHc7Zd6/FlY9buTcTUFTo28itTHwpOAgq32K0DmwiLQs7kYgkKzBWZu/o0
         0cup8dNJhN3OY7tNqwIhc/qJ8JqpIN+fkYLyM8c74zzEzPGzmCvXor9u0Y1FOMMPvr/g
         Wr2KJSa0SAvyII6nnaFsp1rayE/LSwxbgRzy9780Sm+1lJwassIhe53jUtKq5pAqfr3W
         zDmA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683792864; x=1686384864;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=+88Dancfok8X5Zw7Ulq5o58+aWZxHOF5SDxRA4KA8Gc=;
        b=ZF3stv6s3kAI3BX36qsxgnb7bYx0qkwu6dk1vcJh4lagJY9MV6Uid3g93m8gZmU/zb
         JtXhNRbwQLSEB3hL3xMzkiiz/GAKE3LKpH8LDnr1HODW9xGpiWJnNWRQlmjC80L9uuyp
         fR276pSy0SvegQtOyadA6G+KXsdskaeMpa8JE9d2GW+ZbxhSGsUqq9SNzia3eHIbSvmU
         kfkhfS1qgrYaUDzG9vlLTVCQAy+/qpAfviB0cD+7JIt3Sp5BOUjPAlVnCqng85YfvTnF
         v4YzlCalLGbBvD3bHAN8kKMAeTs3XZPcZSwg0QnlyfGVpWOoEjeGZwbKm2WjRcY1AVEG
         JP7g==
X-Gm-Message-State: AC+VfDyrTgs6u16KpeIDCVHMUYdXkKEPhNPPhTJSwsnBJIKvpiSDqZaL
	Qlab0eY1ijEImseGi4IRyCQ=
X-Google-Smtp-Source: ACHHUZ7FiiTygJjFnZyJ7sJe/ZTGELVOizsHEOdop2p25NpGGNDnKAX4ZPG8oHZtcYJnQ10KjSS0NQ==
X-Received: by 2002:ac2:4c82:0:b0:4ed:b4f9:28c7 with SMTP id d2-20020ac24c82000000b004edb4f928c7mr2722192lfl.6.1683792863942;
        Thu, 11 May 2023 01:14:23 -0700 (PDT)
Message-ID: <b0ea83c0e870567089b574056308b831c6cafea3.camel@gmail.com>
Subject: Re: [PATCH v6 2/4] xen/riscv: introduce setup_initial_pages
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Thu, 11 May 2023 11:14:22 +0300
In-Reply-To: <da4592b3-b5f2-50d8-b474-9b2340b5bb81@suse.com>
References: <cover.1683131359.git.oleksii.kurochko@gmail.com>
	 <d1a6fb6112b61000645eb1a4ab9ade8a208d4204.1683131359.git.oleksii.kurochko@gmail.com>
	 <0533b045-f4cb-0834-ae88-9229bd816cf2@suse.com>
	 <db6fe5a3db067ae3429d4b83766508233dfc9ca8.camel@gmail.com>
	 <da4592b3-b5f2-50d8-b474-9b2340b5bb81@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.1 (3.48.1-1.fc38) 
MIME-Version: 1.0

On Tue, 2023-05-09 at 16:38 +0200, Jan Beulich wrote:
> On 09.05.2023 14:59, Oleksii wrote:
> > On Mon, 2023-05-08 at 10:58 +0200, Jan Beulich wrote:
> > > On 03.05.2023 18:31, Oleksii Kurochko wrote:
> > > > --- /dev/null
> > > > +++ b/xen/arch/riscv/include/asm/page.h
> > > > @@ -0,0 +1,62 @@
> > > > +#ifndef _ASM_RISCV_PAGE_H
> > > > +#define _ASM_RISCV_PAGE_H
> > > > +
> > > > +#include <xen/const.h>
> > > > +#include <xen/types.h>
> > > > +
> > > > +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
> > > > +
> > > > +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> > > > +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> > > > +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> > > > +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> > > > +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> > > > +
> > > > +#define PTE_VALID                   BIT(0, UL)
> > > > +#define PTE_READABLE                BIT(1, UL)
> > > > +#define PTE_WRITABLE                BIT(2, UL)
> > > > +#define PTE_EXECUTABLE              BIT(3, UL)
> > > > +#define PTE_USER                    BIT(4, UL)
> > > > +#define PTE_GLOBAL                  BIT(5, UL)
> > > > +#define PTE_ACCESSED                BIT(6, UL)
> > > > +#define PTE_DIRTY                   BIT(7, UL)
> > > > +#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
> > > > +
> > > > +#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
> > > > +#define PTE_TABLE                   (PTE_VALID)
> > > > +
> > > > +/* Calculate the offsets into the pagetables for a given VA */
> > > > +#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
> > > > +
> > > > +#define pt_index(lvl, va) pt_linear_offset(lvl, (va) & XEN_PT_LEVEL_MASK(lvl))
> > >
> > > Maybe better
> > >
> > > #define pt_index(lvl, va) (pt_linear_offset(lvl, va) & VPN_MASK)
> > >
> > > as the involved constant will be easier to use for the compiler?
> > But VPN_MASK should be shifted by level shift value.
>
> Why? pt_linear_offset() already does the necessary shifting.
I misread how you proposed to define pt_index(). I thought you meant
"pt_linear_offset(lvl, va & VPN_MASK)" instead of
"(pt_linear_offset(lvl, va) & VPN_MASK)".

So you are right, we can rewrite it as you proposed.

>
> > > > +    csr_write(CSR_SATP, 0);
> > > > +
> > > > +    /* Clean MMU root page table */
> > > > +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
> > > > +
> > > > +    asm volatile ( "sfence.vma" );
> > >
> > > Doesn't this want to move between the SATP write and the clearing of
> > > the root table slot? Also here and elsewhere - don't these asm()s
> > > need memory clobbers? And anyway - could I talk you into introducing
> > > an inline wrapper (e.g. named sfence_vma()) so all uses end up
> > > consistent?
> > I think the clearing of the root page table should be done before
> > "sfence.vma", because we have to first clear the slot of the MMU's
> > root page table and then make the update of the root page table
> > visible to all (by use of the sfence instruction).
>
> I disagree. The SATP write has removed the connection of the CPU
> to the page tables. That's the action you want to fence, not the
> altering of some (then) no longer referenced data structure.
From that point of view, you are right. Thanks for the clarification.
>
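A wrapper along the lines suggested above might look like the following sketch (RISC-V-only; the name and the clobber list are assumptions, not code from this series):

```c
/* Possible inline wrapper for the "sfence.vma" instruction.  The
 * "memory" clobber keeps the compiler from moving page-table stores
 * across the fence.  Sketch only. */
static inline void sfence_vma(void)
{
    asm volatile ( "sfence.vma" ::: "memory" );
}
```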
> > > > +void __init setup_initial_pagetables(void)
> > > > +{
> > > > +    struct mmu_desc mmu_desc = { 0, 0, NULL, NULL };
> > > > +
> > > > +    /*
> > > > +     * Access to _stard, _end is always PC-relative
> > >
> > > Nit: Typo-ed symbol name. Also ...
> > >
> > > > +     * thereby when access them we will get load adresses
> > > > +     * of start and end of Xen
> > > > +     * To get linker addresses LOAD_TO_LINK() is required
> > > > +     * to use
> > > > +     */
> > >
> > > see the earlier line wrapping remark again. Finally in multi-sentence
> > > comments full stops are required.
> > Full stops mean '.' at the end of sentences?
>
> Yes. Please see ./CODING_STYLE.
Thanks. I'll read it again.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Thu May 11 08:42:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 08:42:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533176.829595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px1sI-0006N8-Dw; Thu, 11 May 2023 08:42:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533176.829595; Thu, 11 May 2023 08:42:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px1sI-0006N1-AJ; Thu, 11 May 2023 08:42:10 +0000
Received: by outflank-mailman (input) for mailman id 533176;
 Thu, 11 May 2023 08:42:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px1sG-0006Mp-NE; Thu, 11 May 2023 08:42:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px1sG-0005St-JZ; Thu, 11 May 2023 08:42:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px1sG-0006Sk-0B; Thu, 11 May 2023 08:42:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1px1sF-0002b0-Vx; Thu, 11 May 2023 08:42:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gxnW++5t0WuZp0qp+qzsdijABbBwPQBzAAuaftuyI7M=; b=RQJ0cQu2JW6idsd47U/CFDDfHI
	/NgOW9qY97+euz2V882hBlBSqS/GIqjwgfVA5Og6f6eC+cQAgnpZE4nP7Z3imWofH2IpcBD1Da51H
	huYo3l0jR/k6Fl7MZjaEaxQWFOIsHo/JfXMG/6l1XFTsm0YALq9SV+YDhdNd85+2xjn0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180615-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180615: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c6382ba0f2aec5200ee03e5d97018c303ddc64d8
X-Osstest-Versions-That:
    ovmf=f47415e0315e2b0cf7bf1d00634f46deee802227
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 08:42:07 +0000

flight 180615 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180615/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c6382ba0f2aec5200ee03e5d97018c303ddc64d8
baseline version:
 ovmf                 f47415e0315e2b0cf7bf1d00634f46deee802227

Last test of basis   180611  2023-05-11 00:42:25 Z    0 days
Testing same since   180615  2023-05-11 05:41:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f47415e031..c6382ba0f2  c6382ba0f2aec5200ee03e5d97018c303ddc64d8 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 11 09:21:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 09:21:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533183.829605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px2UA-0002f7-A0; Thu, 11 May 2023 09:21:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533183.829605; Thu, 11 May 2023 09:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px2UA-0002f0-6c; Thu, 11 May 2023 09:21:18 +0000
Received: by outflank-mailman (input) for mailman id 533183;
 Thu, 11 May 2023 09:21:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px2U8-0002ec-My
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 09:21:16 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20620.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2b533834-efdd-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 11:21:11 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8979.eurprd04.prod.outlook.com (2603:10a6:20b:42e::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.19; Thu, 11 May
 2023 09:21:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 09:21:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b533834-efdd-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cTUKtgGV2RV7tKZxj/riMk5gC+KJiA7Oc7EbhZ9iINtujobNr2HB+aA/291H+aObW59UM5xXj54xjITqF3QvQViSwsr45lmEjlxxonuoTMzDvWrA9a/0c4gBBlvJ6yRsRUlOcBMb+4hKhAt+ISd9aDJ59UURDz7CvjpNlZbKLr5w0Fli2dbghABt4ztxqYyNy/oKITMqkmQPQi+ROv55Cqoe+f1Z6ePmwwz6uXYqjoIZK4gdLtxSLt8b/sg6wISwTSucSsSibKMTAt5f9MLSKDhRl7O9BfzgRD0LdyxtIezEnCT33m53EMWdybS6bT1omWH8IMsItl84SGEOandh0A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jBY1Gs29LACMSjd3zFO4liO1LG+k66su+TGcjyEQdNA=;
 b=BQ84eOKwUsSb2tDq8Jg+pvRVXn1u6MOrVxVZiFm4i5lpIDvnb70an0XhkmTN/KkDZydGWrfhfUUofbQMc2jYu8JnbfwgfrkrNYGYuuMlmeTkvDyfDVq8671jRO/mLmoFW8e/qn5dLYr4F3djXN62DPMmdNDgYL2XAZI8D0iqs3b75BWjRibphhZdDNF/S/bD4yVDPcWHV/m4LOjLnIunjiglwhrNmWm5nb7NfBu2OoXRdHXmMGIACidCWxo/U2TXEvVKsxNpBWNnGjRYFMV+xWw/6XIs/GsF7In07C6dOEVGqv9cu1XMa6+EdW+N93G/U/vb0HOlOw4oWO8XMfeFSw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bb215c55-5064-7f48-820c-bf41d01529bd@suse.com>
Date: Thu, 11 May 2023 11:21:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/3] docs: document ~/control/feature-balloon
Content-Language: en-US
To: Yann Dirson <yann.dirson@vates.fr>, xen-devel@lists.xenproject.org
Cc: xihuan.yang@citrix.com, min.li1@citrix.com
References: <20230510142011.1120417-1-yann.dirson@vates.fr>
 <20230510142011.1120417-3-yann.dirson@vates.fr>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230510142011.1120417-3-yann.dirson@vates.fr>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0123.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8979:EE_
X-MS-Office365-Filtering-Correlation-Id: e7f771f2-9a48-4c55-799d-08db52010e82
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e7f771f2-9a48-4c55-799d-08db52010e82
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 09:21:09.6400
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lpIx0xMX7x385JaDJIKRGnmqC7eRnYJUg1SuK9NMH3DI1wBOFNBHighSxvHzn1wkvmg5XOQYd4lGBE2lSAya3g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8979

On 10.05.2023 16:20, Yann Dirson wrote:
> --- a/docs/misc/xenstore-paths.pandoc
> +++ b/docs/misc/xenstore-paths.pandoc
> @@ -509,6 +509,12 @@ This may be initialized to "" by the toolstack and may then be set
>  to 0 or 1 by a guest to indicate whether it is capable of responding
>  to a mode value written to ~/control/laptop-slate-mode.
>  
> +#### ~/control/feature-balloon
> +
> +This may be initialized to "" by the toolstack and may then be set to
> +0 or 1 by a guest to indicate whether it is capable of memory
> +ballooning, and responds to values written to ~/memory/target.

Besides correctly saying "may", I guess this wants to go further and also
clarify what the (intended) behavior is when the node is absent. Aiui PV
guests are always expected to have a balloon driver, so the assumed
value likely needs to be "1" there. Furthermore I'm afraid it doesn't
really become clear of what value this node is if it's only optionally
present, while its absence doesn't really allow uniform assumptions
about a default value.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 09:41:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 09:41:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533188.829615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px2nY-0005Eu-VJ; Thu, 11 May 2023 09:41:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533188.829615; Thu, 11 May 2023 09:41:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px2nY-0005En-Qs; Thu, 11 May 2023 09:41:20 +0000
Received: by outflank-mailman (input) for mailman id 533188;
 Thu, 11 May 2023 09:41:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px2nX-0005Eh-DL
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 09:41:19 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20616.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fa296340-efdf-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 11:41:17 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8617.eurprd04.prod.outlook.com (2603:10a6:20b:438::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.21; Thu, 11 May
 2023 09:41:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 09:41:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa296340-efdf-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8995344d-cd14-d66b-efb6-e4ac7be6d457@suse.com>
Date: Thu, 11 May 2023 11:41:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 1/3] x86: Add AMD's CpuidUserDis bit definitions
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
 <20230509164336.12523-2-alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230509164336.12523-2-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0063.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8617:EE_
X-MS-Office365-Filtering-Correlation-Id: 26ace943-83a6-4bb3-451c-08db5203dd23
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 26ace943-83a6-4bb3-451c-08db5203dd23
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 09:41:15.3612
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: pY4tXoi3fnSOTZpGtKVX9LeLfSmnYFBqAw5dRVRdNExLG6IYRTUdndN6x7DzMmeVSfe6wEI8FcbPiZdja8FumQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8617

On 09.05.2023 18:43, Alejandro Vallejo wrote:
> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -287,6 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
>  /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
>  XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
>  XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
> +XEN_CPUFEATURE(CPUID_USER_DIS,     11*32+17) /*   CPUID disable for non-privileged software */

While I can accept your argument towards staying close to AMD's doc
with the name, I'd really like to ask that the comment then be
disambiguated: "non-privileged" is more likely to mean CPL=3 than all
of CPL>0. Since "not fully privileged" is getting a little long,
maybe indeed say "CPL > 0 software"? I would then offer you my R-b,
if only I could find proof of the HWCR bit being bit 35. The PM
mentions it only by name, and the PPRs I've checked all have it
marked reserved.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 09:58:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 09:58:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533194.829625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px33j-000713-Hm; Thu, 11 May 2023 09:58:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533194.829625; Thu, 11 May 2023 09:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px33j-00070w-DW; Thu, 11 May 2023 09:58:03 +0000
Received: by outflank-mailman (input) for mailman id 533194;
 Thu, 11 May 2023 09:58:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OCYL=BA=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1px33g-00070q-L2
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 09:58:01 +0000
Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com
 [2607:f8b0:4864:20::102e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4b55fefa-efe2-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 11:57:53 +0200 (CEST)
Received: by mail-pj1-x102e.google.com with SMTP id
 98e67ed59e1d1-24e5d5782edso7927543a91.0
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 02:57:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b55fefa-efe2-11ed-b229-6b7b168915f2
X-Gm-Message-State: AC+VfDyya5cKWuZ1+FfGsmg+EVl/qUFJ5WHCdz7jTn80HVL80I6ShTKS
	Bggkf7Ev0d2GyaMtfMP171pkiS2SIbCjOAJaw1Q+ZTo9LMfJsQ==
X-Google-Smtp-Source: ACHHUZ6BuR1YLI/kunghylYnccROGknQMf0aDfnYfFpLk1USxiTrgSX+I3fKzZ7rzEJH0/43dmUuu+ilR2PEayUNZMA=
X-Received: by 2002:a17:90a:fa5:b0:24d:f992:5286 with SMTP id
 34-20020a17090a0fa500b0024df9925286mr9336318pjz.36.1683799071155; Thu, 11 May
 2023 02:57:51 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2uPnpwNowMWPdcbSkF=iNe9Xr5LQMtmtF-G7dKNswog_g@mail.gmail.com>
 <cc6380b9-b452-6492-75ab-fc0825b223d3@amd.com> <CA+SAi2upd1P=KzbQS2BpD5zr3+OA=mrq7JiC7Zou9XSEJ_OYhA@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
 <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com> <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com> <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Thu, 11 May 2023 13:02:40 +0300
Message-ID: <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Michal Orzel <michal.orzel@amd.com>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com
Content-Type: multipart/mixed; boundary="00000000000082ed6405fb680561"

--00000000000082ed6405fb680561
Content-Type: multipart/alternative; boundary="00000000000082ed6105fb68055f"

--00000000000082ed6105fb68055f
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello,

Thanks Stefano.
Then the next question.
I cloned the xen repo from the Xilinx site https://github.com/Xilinx/xen.git
and managed to build the xlnx_rebase_4.17 branch in my environment.
I built it without coloring first and did not find any coloring footprints
in this branch; I realized coloring is not in xlnx_rebase_4.17 yet.
I then switched to the master branch, where all the coloring sources are
present, and tried to build again, but got a lot of errors. You may see a
log in the attachment.
So here is the question:
What branch of xen did you use when you last tested cache colors?

Regards,
Oleg


Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org>:

> We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
> (twice a year) is tested with cache coloring enabled. The last Petalinux
> release is 2023.1 and the kernel used is this:
> https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
>
>
> On Tue, 9 May 2023, Oleg Nikitenko wrote:
> > Hello guys,
> >
> > I have a couple of more questions.
> > Have you ever run xen with the cache coloring at Zynq UltraScale+ MPSoC
> zcu102 xczu15eg ?
> > When did you run xen with the cache coloring last time ?
> > What kernel version did you use for Dom0 when you ran xen with the cache
> > coloring last time ?
> >
> > Regards,
> > Oleg
> >
> > Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       Hi Michal,
> >
> > Thanks.
> >
> > Regards,
> > Oleg
> >
> > Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
> >       Hi Oleg,
> >
> >       Replying, so that you do not need to wait for Stefano.
> >
> >       On 05/05/2023 10:28, Oleg Nikitenko wrote:
> >       >
> >       >
> >       >
> >       > Hello Stefano,
> >       >
> >       > I would like to try a xen cache color property from this repo
> https://xenbits.xen.org/git-http/xen.git
> >       <https://xenbits.xen.org/git-http/xen.git>
> >       > Could you tell what branch I should use ?
> >       Cache coloring feature is not part of the upstream tree and it is
> still under review.
> >       You can only find it integrated in the Xilinx Xen tree.
> >
> >       ~Michal
> >
> >       >
> >       > Regards,
> >       > Oleg
> >       >
> >       > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org>>:
> >       >
> >       >     I am familiar with the zcu102 but I don't know how you could
> >       >     possibly generate a SError.
> >       >
> >       >     I suggest to try to use ImageBuilder [1] to generate the boot
> >       >     configuration as a test because that is known to work well
> >       >     for zcu102.
> >       >
> >       >     [1] https://gitlab.com/xen-project/imagebuilder <
> https://gitlab.com/xen-project/imagebuilder>
> >       >
> >       >
> >       >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> >       >     > Hello Stefano,
> >       >     >
> >       >     > Thanks for clarification.
> >       >     > We use neither ImageBuilder nor a uboot boot script.
> >       >     > The model is zcu102-compatible.
> >       >     >
> >       >     > Regards,
> >       >     > O.
> >       >     >
> >       >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org <mailto:sstabellini@kernel.org>>:
> >       >     >       This is interesting. Are you using Xilinx hardware
> >       >     >       by any chance? If so, which board?
> >       >     >
> >       >     >       Are you using ImageBuilder to generate your boot.scr
> >       >     >       boot script? If so, could you please post your
> >       >     >       ImageBuilder config file? If not, can you post the
> >       >     >       source of your uboot boot script?
> >       >     >
> >       >     >       SErrors are supposed to be related to a hardware
> >       >     >       failure of some kind. You are not supposed to be able
> >       >     >       to trigger an SError easily by "mistake". I have not
> >       >     >       seen SErrors due to wrong cache coloring configurations
> >       >     >       on any Xilinx board before.
> >       >     >
> >       >     >       The differences between Xen with and without cache
> >       >     >       coloring from a hardware perspective are:
> >       >     >
> >       >     >       - With cache coloring, the SMMU is enabled and does
> >       >     >         address translations even for dom0. Without cache
> >       >     >         coloring the SMMU could be disabled, and if enabled,
> >       >     >         the SMMU doesn't do any address translations for Dom0.
> >       >     >         If there is a hardware failure related to SMMU address
> >       >     >         translation it could only trigger with cache coloring.
> >       >     >         This would be my normal suggestion for you to explore,
> >       >     >         but the failure happens too early before any
> >       >     >         DMA-capable device is programmed. So I don't think
> >       >     >         this can be the issue.
> >       >     >
> >       >     >       - With cache coloring, the memory allocation is very
> >       >     >         different so you'll end up using different DDR regions
> >       >     >         for Dom0. So if your DDR is defective, you might only
> >       >     >         see a failure with cache coloring enabled because you
> >       >     >         end up using different regions.
> >       >     >
> >       >     >
> >       >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> >       >     >       > Hi Stefano,
> >       >     >       >
> >       >     >       > Thank you.
> >       >     >       > If I build xen without colors support, this error
> >       >     >       > does not occur.
> >       >     >       > All the domains boot well.
> >       >     >       > Hence it cannot be a hardware issue.
> >       >     >       > This panic arrived while unpacking the rootfs.
> >       >     >       > Here I attached the xen/Dom0 boot log without
> >       >     >       > coloring.
> >       >     >       > The highlighted strings are printed exactly after the
> >       >     >       > place where the panic first arrived.
> >       >     >       >
> >       >     >       >  Xen 4.16.1-pre
> >       >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none))
> (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y
> >       2023-04-21
> >       >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023
> +0300 git:321687b231-dirty
> >       >     >       > (XEN) build-id:
> c1847258fdb1b79562fc710dda40008f96c0fde5
> >       >     >       > (XEN) Processor: 00000000410fd034: "ARM Limited",
> variant: 0x0, part 0xd03,rev 0x4
> >       >     >       > (XEN) 64-bit Execution:
> >       >     >       > (XEN)   Processor Features: 0000000000002222
> 0000000000000000
> >       >     >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32
> EL1:64+32 EL0:64+32
> >       >     >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
> >       >     >       > (XEN)   Debug Features: 0000000010305106
> 0000000000000000
> >       >     >       > (XEN)   Auxiliary Features: 0000000000000000
> 0000000000000000
> >       >     >       > (XEN)   Memory Model Features: 0000000000001122
> 0000000000000000
> >       >     >       > (XEN)   ISA Features:  0000000000011120
> 0000000000000000
> >       >     >       > (XEN) 32-bit Execution:
> >       >     >       > (XEN)   Processor Features:
> 0000000000000131:0000000000011011
> >       >     >       > (XEN)     Instruction Sets: AArch32 A32 Thumb
> Thumb-2 Jazelle
> >       >     >       > (XEN)     Extensions: GenericTimer Security
> >       >     >       > (XEN)   Debug Features: 0000000003010066
> >       >     >       > (XEN)   Auxiliary Features: 0000000000000000
> >       >     >       > (XEN)   Memory Model Features: 0000000010201105
> 0000000040000000
> >       >     >       > (XEN)                          0000000001260000
> 0000000002102211
> >       >     >       > (XEN)   ISA Features: 0000000002101110
> 0000000013112111 0000000021232042
> >       >     >       > (XEN)                 0000000001112131
> 0000000000011142 0000000000011121
> >       >     >       > (XEN) Using SMC Calling Convention v1.2
> >       >     >       > (XEN) Using PSCI v1.1
> >       >     >       > (XEN) SMP: Allowing 4 CPUs
> >       >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> >       >     >       > (XEN) GICv2 initialization:
> >       >     >       > (XEN)         gic_dist_addr=00000000f9010000
> >       >     >       > (XEN)         gic_cpu_addr=00000000f9020000
> >       >     >       > (XEN)         gic_hyp_addr=00000000f9040000
> >       >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
> >       >     >       > (XEN)         gic_maintenance_irq=25
> >       >     >       > (XEN) GICv2: Adjusting CPU interface base to
> 0xf902f000
> >       >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID
> 0200143b).
> >       >     >       > (XEN) Using scheduler: null Scheduler (null)
> >       >     >       > (XEN) Initializing null scheduler
> >       >     >       > (XEN) WARNING: This is experimental software in
> development.
> >       >     >       > (XEN) Use at your own risk.
> >       >     >       > (XEN) Allocated console ring of 32 KiB.
> >       >     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
> >       >     >       > (XEN) Bringing up CPU1
> >       >     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
> >       >     >       > (XEN) CPU 1 booted.
> >       >     >       > (XEN) Bringing up CPU2
> >       >     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
> >       >     >       > (XEN) CPU 2 booted.
> >       >     >       > (XEN) Bringing up CPU3
> >       >     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
> >       >     >       > (XEN) Brought up 4 CPUs
> >       >     >       > (XEN) CPU 3 booted.
> >       >     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware
> configuration...
> >       >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
> >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2
> translation
> >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
> >       >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
> >       >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit
> IPA -> 48-bit PA
> >       >     >       > (XEN) smmu: /axi/smmu@fd800000: registered 29
> master devices
> >       >     >       > (XEN) I/O virtualisation enabled
> >       >     >       > (XEN)  - Dom0 mode: Relaxed
> >       >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> >       >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR
> 0x0000000080023558
> >       >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per
> sched-resource
> >       >     >       > (XEN) alternatives: Patching with alt table
> 00000000002cc5c8 -> 00000000002ccb2c
> >       >     >       > (XEN) *** LOADING DOMAIN 0 ***
> >       >     >       > (XEN) Loading d0 kernel from boot module @
> 0000000001000000
> >       >     >       > (XEN) Loading ramdisk from boot module @
> 0000000002000000
> >       >     >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
> >       >     >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000
> (256MB)
> >       >     >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000
> (64MB)
> >       >     >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000
> (1280MB)
> >       >     >       > (XEN) Grant table range:
> 0x00000000e00000-0x00000000e40000
> >       >     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr
> 0x000000087bf94000
> >       >     >       > (XEN) Allocating PPI 16 for event channel interrupt
> >       >     >       > (XEN) Extended region 0: 0x81200000->0xa0000000
> >       >     >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
> >       >     >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
> >       >     >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
> >       >     >       > (XEN) Extended region 4: 0x100000000->0x600000000
> >       >     >       > (XEN) Extended region 5: 0x880000000->0x8000000000
> >       >     >       > (XEN) Extended region 6:
> 0x8001000000->0x10000000000
> >       >     >       > (XEN) Loading zImage from 0000000001000000 to
> 0000000010000000-0000000010e41008
> >       >     >       > (XEN) Loading d0 initrd from 0000000002000000 to
> 0x0000000013600000-0x000000001ff3a617
> >       >     >       > (XEN) Loading d0 DTB to
> 0x0000000013400000-0x000000001340cbdc
> >       >     >       > (XEN) Initial low memory virq threshold set at
> 0x4000 pages.
> >       >     >       > (XEN) Std. Loglevel: All
> >       >     >       > (XEN) Guest Loglevel: All
> >       >     >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a'
> three times to switch input)
> >       >     >       > (XEN) null.c:353: 0 <-- d0v0
> >       >     >       > (XEN) Freed 356kB init memory.
> >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
> >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
> >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER4
> >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER8
> >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER12
> >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER16
> >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER20
> >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER0
> >       >     >       > [    0.000000] Booting Linux on physical CPU
> 0x0000000000 [0x410fd034]
> >       >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> >       >     >       > [    0.000000] Machine model: D14 Viper Board -
> White Unit
> >       >     >       > [    0.000000] Xen 4.16 support found
> >       >     >       > [    0.000000] Zone ranges:
> >       >     >       > [    0.000000]   DMA      [mem
> 0x0000000010000000-0x000000007fffffff]
> >       >     >       > [    0.000000]   DMA32    empty
> >       >     >       > [    0.000000]   Normal   empty
> >       >     >       > [    0.000000] Movable zone start for each node
> >       >     >       > [    0.000000] Early memory node ranges
> >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000010000000-0x000000001fffffff]
> >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000022000000-0x0000000022147fff]
> >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000022200000-0x0000000022347fff]
> >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000024000000-0x0000000027ffffff]
> >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000030000000-0x000000007fffffff]
> >       >     >       > [    0.000000] Initmem setup node 0 [mem
> 0x0000000010000000-0x000000007fffffff]
> >       >     >       > [    0.000000] On node 0, zone DMA: 8192 pages in
> unavailable ranges
> >       >     >       > [    0.000000] On node 0, zone DMA: 184 pages in
> unavailable ranges
> >       >     >       > [    0.000000] On node 0, zone DMA: 7352 pages in
> unavailable ranges
> >       >     >       > [    0.000000] cma: Reserved 256 MiB at
> 0x000000006e000000
> >       >     >       > [    0.000000] psci: probing for conduit method
> from DT.
> >       >     >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
> >       >     >       > [    0.000000] psci: Using standard PSCI v0.2
> function IDs
> >       >     >       > [    0.000000] psci: Trusted OS migration not
> required
> >       >     >       > [    0.000000] psci: SMC Calling Convention v1.1
> >       >     >       > [    0.000000] percpu: Embedded 16 pages/cpu
> s32792 r0 d32744 u65536
> >       >     >       > [    0.000000] Detected VIPT I-cache on CPU0
> >       >     >       > [    0.000000] CPU features: kernel page table
> isolation forced ON by KASLR
> >       >     >       > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
> >       >     >       > [    0.000000] Built 1 zonelists, mobility
> grouping on.  Total pages: 403845
> >       >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> >       >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
> >       >     >       > [    0.000000] Dentry cache hash table entries:
> 262144 (order: 9, 2097152 bytes, linear)
> >       >     >       > [    0.000000] Inode-cache hash table entries:
> 131072 (order: 8, 1048576 bytes, linear)
> >       >     >       > [    0.000000] mem auto-init: stack:off, heap
> alloc:on, heap free:on
> >       >     >       > [    0.000000] mem auto-init: clearing system
> memory may take some time...
> >       >     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
> >       >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> >       >     >       > [    0.000000] rcu: Hierarchical RCU
> implementation.
> >       >     >       > [    0.000000] rcu: RCU event tracing is enabled.
> >       >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> >       >     >       > [    0.000000] rcu: RCU calculated value of
> scheduler-enlistment delay is 25 jiffies.
> >       >     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
> >       >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64,
> preallocated irqs: 0
> >       >     >       > [    0.000000] Root IRQ handler: gic_handle_irq
> >       >     >       > [    0.000000] arch_timer: cp15 timer(s) running
> at 100.00MHz (virt).
> >       >     >       > [    0.000000] clocksource: arch_sys_counter:
> mask: 0xffffffffffffff max_cycles: 0x171024e7e0,
> >       max_idle_ns: 440795205315 ns
> >       >     >       > [    0.000000] sched_clock: 56 bits at 100MHz,
> resolution 10ns, wraps every 4398046511100ns
> >       >     >       > [    0.000258] Console: colour dummy device 80x25
> >       >     >       > [    0.310231] printk: console [hvc0] enabled
> >       >     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> >       >     >       > [    0.324851] pid_max: default: 32768 minimum: 301
> >       >     >       > [    0.329706] LSM: Security Framework initializing
> >       >     >       > [    0.334204] Yama: becoming mindful.
> >       >     >       > [    0.337865] Mount-cache hash table entries:
> 4096 (order: 3, 32768 bytes, linear)
> >       >     >       > [    0.345180] Mountpoint-cache hash table
> entries: 4096 (order: 3, 32768 bytes, linear)
> >       >     >       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
> >       >     >       > [    0.359132] Grant table initialized
> >       >     >       > [    0.362664] xen:events: Using FIFO-based ABI
> >       >     >       > [    0.366993] Xen: initializing cpu0
> >       >     >       > [    0.370515] rcu: Hierarchical SRCU
> implementation.
> >       >     >       > [    0.375930] smp: Bringing up secondary CPUs ...
> >       >     >       > (XEN) null.c:353: 1 <-- d0v1
> >       >     >       > (XEN) d0v1: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER0
> >       >     >       > [    0.382549] Detected VIPT I-cache on CPU1
> >       >     >       > [    0.388712] Xen: initializing cpu1
> >       >     >       > [    0.388743] CPU1: Booted secondary processor
> 0x0000000001 [0x410fd034]
> >       >     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
> >       >     >       > [    0.406941] SMP: Total of 2 processors
> activated.
> >       >     >       > [    0.411698] CPU features: detected: 32-bit EL0
> Support
> >       >     >       > [    0.416888] CPU features: detected: CRC32
> instructions
> >       >     >       > [    0.422121] CPU: All CPU(s) started at EL1
> >       >     >       > [    0.426248] alternatives: patching kernel code
> >       >     >       > [    0.431424] devtmpfs: initialized
> >       >     >       > [    0.441454] KASLR enabled
> >       >     >       > [    0.441602] clocksource: jiffies: mask:
> 0xffffffff max_cycles: 0xffffffff, max_idle_ns:
> >       7645041785100000 ns
> >       >     >       > [    0.448321] futex hash table entries: 512
> (order: 3, 32768 bytes, linear)
> >       >     >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
> >       >     >       > [    0.498277] DMA: preallocated 256 KiB
> GFP_KERNEL pool for atomic allocations
> >       >     >       > [    0.503772] DMA: preallocated 256 KiB
> GFP_KERNEL|GFP_DMA pool for atomic allocations
> >       >     >       > [    0.511610] DMA: preallocated 256 KiB
> GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> >       >     >       > [    0.519478] audit: initializing netlink subsys
> (disabled)
> >       >     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> >       >     >       > [    0.529169] thermal_sys: Registered thermal
> governor 'step_wise'
> >       >     >       > [    0.533023] hw-breakpoint: found 6 breakpoint
> and 4 watchpoint registers.
> >       >     >       > [    0.545608] ASID allocator initialised with
> 32768 entries
> >       >     >       > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> >       >     >       > [    0.559332] software IO TLB: mapped [mem
> 0x0000000011800000-0x0000000011c00000] (4MB)
> >       >     >       > [    0.583565] HugeTLB registered 1.00 GiB page
> size, pre-allocated 0 pages
> >       >     >       > [    0.584721] HugeTLB registered 32.0 MiB page
> size, pre-allocated 0 pages
> >       >     >       > [    0.591478] HugeTLB registered 2.00 MiB page
> size, pre-allocated 0 pages
> >       >     >       > [    0.598225] HugeTLB registered 64.0 KiB page
> size, pre-allocated 0 pages
> >       >     >       > [    0.636520] DRBG: Continuing without Jitter RNG
> >       >     >       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
> >       >     >       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
> >       >     >       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
> >       >     >       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
> >       >     >       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
> >       >     >       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
> >       >     >       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
> >       >     >       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
> >       >     >       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
> >       >     >       > [    1.350132] raid6: int64x8  xor()   773 MB/s
> >       >     >       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
> >       >     >       > [    1.486349] raid6: int64x4  xor()   851 MB/s
> >       >     >       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
> >       >     >       > [    1.622561] raid6: int64x2  xor()   744 MB/s
> >       >     >       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
> >       >     >       > [    1.758770] raid6: int64x1  xor()   517 MB/s
> >       >     >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> >       >     >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw
> enabled
> >       >     >       > [    1.767957] raid6: using neon recovery algorithm
> >       >     >       > [    1.772824] xen:balloon: Initialising balloon
> driver
> >       >     >       > [    1.778021] iommu: Default domain type:
> Translated
> >       >     >       > [    1.782584] iommu: DMA domain TLB invalidation
> policy: strict mode
> >       >     >       > [    1.789149] SCSI subsystem initialized
> >       >     >       > [    1.792820] usbcore: registered new interface
> driver usbfs
> >       >     >       > [    1.798254] usbcore: registered new interface
> driver hub
> >       >     >       > [    1.803626] usbcore: registered new device
> driver usb
> >       >     >       > [    1.808761] pps_core: LinuxPPS API ver. 1
> registered
> >       >     >       > [    1.813716] pps_core: Software ver. 5.3.6 -
> Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it
> >       <mailto:giometti@linux.it>>
> >       >     >       > [    1.822903] PTP clock support registered
> >       >     >       > [    1.826893] EDAC MC: Ver: 3.0.0
> >       >     >       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400:
> Registered ZynqMP IPI mbox with TX/RX channels.
> >       >     >       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600:
> Registered ZynqMP IPI mbox with TX/RX channels.
> >       >     >       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800:
> Registered ZynqMP IPI mbox with TX/RX channels.
> >       >     >       > [    1.855907] FPGA manager framework
> >       >     >       > [    1.859952] clocksource: Switched to
> clocksource arch_sys_counter
> >       >     >       > [    1.871712] NET: Registered PF_INET protocol
> family
> >       >     >       > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> >       >     >       > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> >       >     >       > [    1.887078] Table-perturb hash table entries:
> 65536 (order: 6, 262144 bytes, linear)
> >       >     >       > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> >       >     >       > [    1.902900] TCP bind hash table entries: 16384
> (order: 6, 262144 bytes, linear)
> >       >     >       > [    1.910350] TCP: Hash tables configured
> (established 16384 bind 16384)
> >       >     >       > [    1.916778] UDP hash table entries: 1024
> (order: 3, 32768 bytes, linear)
> >       >     >       > [    1.923509] UDP-Lite hash table entries: 1024
> (order: 3, 32768 bytes, linear)
> >       >     >       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL
> protocol family
> >       >     >       > [    1.936834] RPC: Registered named UNIX socket
> transport module.
> >       >     >       > [    1.942342] RPC: Registered udp transport
> module.
> >       >     >       > [    1.947088] RPC: Registered tcp transport
> module.
> >       >     >       > [    1.951843] RPC: Registered tcp NFSv4.1
> backchannel transport module.
> >       >     >       > [    1.958334] PCI: CLS 0 bytes, default 64
> >       >     >       > [    1.962709] Trying to unpack rootfs image as
> initramfs...
> >       >     >       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> >       >     >       > [    1.982863] Installing knfsd (copyright (C)
> 1996 okir@monad.swb.de <mailto:okir@monad.swb.de>).
> >       >     >       > [    2.021045] NET: Registered PF_ALG protocol
> family
> >       >     >       > [    2.021122] xor: measuring software checksum
> speed
> >       >     >       > [    2.029347]    8regs           :  2366 MB/sec
> >       >     >       > [    2.033081]    32regs          :  2802 MB/sec
> >       >     >       > [    2.038223]    arm64_neon      :  2320 MB/sec
> >       >     >       > [    2.038385] xor: using function: 32regs (2802
> MB/sec)
> >       >     >       > [    2.043614] Block layer SCSI generic (bsg)
> driver version 0.4 loaded (major 247)
> >       >     >       > [    2.050959] io scheduler mq-deadline registered
> >       >     >       > [    2.055521] io scheduler kyber registered
> >       >     >       > [    2.068227] xen:xen_evtchn: Event-channel
> device installed
> >       >     >       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> >       >     >       > [    2.076190] cacheinfo: Unable to detect cache
> hierarchy for CPU 0
> >       >     >       > [    2.085548] brd: module loaded
> >       >     >       > [    2.089290] loop: module loaded
> >       >     >       > [    2.089341] Invalid max_queues (4), will use
> default max: 2.
> >       >     >       > [    2.094565] tun: Universal TUN/TAP device
> driver, 1.6
> >       >     >       > [    2.098655] xen_netfront: Initialising Xen
> virtual ethernet driver
> >       >     >       > [    2.104156] usbcore: registered new interface
> driver rtl8150
> >       >     >       > [    2.109813] usbcore: registered new interface
> driver r8152
> >       >     >       > [    2.115367] usbcore: registered new interface
> driver asix
> >       >     >       > [    2.120794] usbcore: registered new interface
> driver ax88179_178a
> >       >     >       > [    2.126934] usbcore: registered new interface
> driver cdc_ether
> >       >     >       > [    2.132816] usbcore: registered new interface
> driver cdc_eem
> >       >     >       > [    2.138527] usbcore: registered new interface
> driver net1080
> >       >     >       > [    2.144256] usbcore: registered new interface
> driver cdc_subset
> >       >     >       > [    2.150205] usbcore: registered new interface
> driver zaurus
> >       >     >       > [    2.155837] usbcore: registered new interface
> driver cdc_ncm
> >       >     >       > [    2.161550] usbcore: registered new interface
> driver r8153_ecm
> >       >     >       > [    2.168240] usbcore: registered new interface
> driver cdc_acm
> >       >     >       > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
> >       >     >       > [    2.181358] usbcore: registered new interface
> driver uas
> >       >     >       > [    2.186547] usbcore: registered new interface
> driver usb-storage
> >       >     >       > [    2.192643] usbcore: registered new interface
> driver ftdi_sio
> >       >     >       > [    2.198384] usbserial: USB Serial support
> registered for FTDI USB Serial Device
> >       >     >       > [    2.206118] udc-core: couldn't find an
> available UDC - added [g_mass_storage] to list of pending
> >       drivers
> >       >     >       > [    2.215332] i2c_dev: i2c /dev entries driver
> >       >     >       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> >       >     >       > [    2.225923] device-mapper: uevent: version 1.0.3
> >       >     >       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl
> (2021-03-22) initialised: dm-devel@redhat.com
> >       <mailto:dm-devel@redhat.com>
> >       >     >       > [    2.239315] EDAC MC0: Giving out device to
> module 1 controller synps_ddr_controller: DEV synps_edac
> >       (INTERRUPT)
> >       >     >       > [    2.249405] EDAC DEVICE0: Giving out device to
> module zynqmp-ocm-edac controller zynqmp_ocm: DEV
> >       >     >       ff960000.memory-controller (INTERRUPT)
> >       >     >       > [    2.261719] sdhci: Secure Digital Host
> Controller Interface driver
> >       >     >       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
> >       >     >       > [    2.271890] sdhci-pltfm: SDHCI platform and OF
> driver helper
> >       >     >       > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
> >       >     >       > [    2.283816] zynqmp_firmware_probe Platform
> Management API v1.1
> >       >     >       > [    2.289554] zynqmp_firmware_probe Trustzone
> version v1.0
> >       >     >       > [    2.327875] securefw securefw: securefw probed
> >       >     >       > [    2.328324] alg: No test for xilinx-zynqmp-aes
> (zynqmp-aes)
> >       >     >       > [    2.332563] zynqmp_aes
> firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
> >       >     >       > [    2.341183] alg: No test for xilinx-zynqmp-rsa
> (zynqmp-rsa)
> >       >     >       > [    2.347667] remoteproc remoteproc0:
> ff9a0000.rf5ss:r5f_0 is available
> >       >     >       > [    2.353003] remoteproc remoteproc1:
> ff9a0000.rf5ss:r5f_1 is available
> >       >     >       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP
> FPGA Manager registered
> >       >     >       > [    2.366540] viper-xen-proxy viper-xen-proxy:
> Viper Xen Proxy registered
> >       >     >       > [    2.372525] viper-vdpp a4000000.vdpp: Device
> Tree Probing
> >       >     >       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP
> Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >       >     >       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
> >       >     >       > [    2.394094] viper-vdpp-net a5000000.vdpp_net:
> Device Tree Probing
> >       >     >       > [    2.399854] viper-vdpp-net a5000000.vdpp_net:
> Device registered
> >       >     >       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
> >       >     >       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
> >       >     >       > [    2.420856] default preset
> >       >     >       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
> >       >     >       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng:
> Device Tree Probing
> >       >     >       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng:
> Device registered
> >       >     >       > [    2.441976] vmcu driver init
> >       >     >       > [    2.444922] VMCU: : (240:0) registered
> >       >     >       > [    2.444956] In K81 Updater init
> >       >     >       > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
> >       >     >       > [    2.468833] Initializing XFRM netlink socket
> >       >     >       > [    2.468902] NET: Registered PF_PACKET protocol
> family
> >       >     >       > [    2.472729] Bridge firewalling registered
> >       >     >       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
> >       >     >       > [    2.481341] registered taskstats version 1
> >       >     >       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
> >       >     >       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
> >       >     >       > [    2.507103] of-fpga-region fpga-full: FPGA
> Region probed
> >       >     >       > [    2.512986] xilinx-zynqmp-dma
> fd500000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.520267] xilinx-zynqmp-dma
> fd510000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.528239] xilinx-zynqmp-dma
> fd520000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.536152] xilinx-zynqmp-dma
> fd530000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.544153] xilinx-zynqmp-dma
> fd540000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.552127] xilinx-zynqmp-dma
> fd550000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.560178] xilinx-zynqmp-dma
> ffa80000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.567987] xilinx-zynqmp-dma
> ffa90000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.576018] xilinx-zynqmp-dma
> ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.583889] xilinx-zynqmp-dma
> ffab0000.dma-controller: ZynqMP DMA driver Probe success
> >       >     >       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072
> Kbytes)
> >       >     >       > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> >       >     >       > [    2.952393] Creating 2 MTD partitions on
> "spi0.0":
> >       >     >       > [    2.957231] 0x000004000000-0x000008000000 :
> "bank A"
> >       >     >       > [    2.963332] 0x000000000000-0x000004000000 :
> "bank B"
> >       >     >       > [    2.968694] macb ff0b0000.ethernet: Not
> enabling partial store and forward
> >       >     >       > [    2.975333] macb ff0b0000.ethernet eth0:
> Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25
> >       (18:41:fe:0f:ff:02)
> >       >     >       > [    2.984472] macb ff0c0000.ethernet: Not
> enabling partial store and forward
> >       >     >       > [    2.992144] macb ff0c0000.ethernet eth1:
> Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26
> >       (18:41:fe:0f:ff:03)
> >       >     >       > [    3.001043] viper_enet viper_enet: Viper power
> GPIOs initialised
> >       >     >       > [    3.007313] viper_enet viper_enet vnet0
> (uninitialized): Validate interface QSGMII
> >       >     >       > [    3.014914] viper_enet viper_enet vnet1
> (uninitialized): Validate interface QSGMII
> >       >     >       > [    3.022138] viper_enet viper_enet vnet1
> (uninitialized): Validate interface type 18
> >       >     >       > [    3.030274] viper_enet viper_enet vnet2
> (uninitialized): Validate interface QSGMII
> >       >     >       > [    3.037785] viper_enet viper_enet vnet3
> (uninitialized): Validate interface QSGMII
> >       >     >       > [    3.045301] viper_enet viper_enet: Viper enet
> registered
> >       >     >       > [    3.050958] xilinx-axipmon
> ffa00000.perf-monitor: Probed Xilinx APM
> >       >     >       > [    3.057135] xilinx-axipmon
> fd0b0000.perf-monitor: Probed Xilinx APM
> >       >     >       > [    3.063538] xilinx-axipmon
> fd490000.perf-monitor: Probed Xilinx APM
> >       >     >       > [    3.069920] xilinx-axipmon
> ffa10000.perf-monitor: Probed Xilinx APM
> >       >     >       > [    3.097729] si70xx: probe of 2-0040 failed with error -5
> >       >     >       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx
> Watchdog Timer with timeout 60s
> >       >     >       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx
> Watchdog Timer with timeout 10s
> >       >     >       > [    3.112457] viper-tamper viper-tamper: Device
> registered
> >       >     >       > [    3.117593] active_bank active_bank: boot bank: 1
> >       >     >       > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> >       >     >       > [    3.128247] viper-vdpp a4000000.vdpp: Device
> Tree Probing
> >       >     >       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP
> Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >       >     >       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper
> handler registered
> >       >     >       > [    3.147438] viper-vdpp a4000000.vdpp: Device
> registered
> >       >     >       > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> >       >     >       > [    3.158582] lpc55_user lpc55_user: The major
> number for your device is 236
> >       >     >       > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> >       >     >       > [    3.181999] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >       >     >       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as
> rtc0
> >       >     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not
> ready?
> >       >     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not
> ready?
> >       >     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not
> ready?
> >       >     >       > [    3.202932] mmc0: SDHCI controller on
> ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> >       >     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not
> ready?
> >       >     >       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
> >       >     >       > [    3.284438] mmc0: new HS200 MMC card at address 0001
> >       >     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> >       >     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> >       >     >       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> >       >     >       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> >       >     >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00
> MiB, chardev (244:0)
> >       >     >       > [    3.582676] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >       >     >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys:
> unable to read the hardware clock
> >       >     >       > [    3.591252] cdns-i2c ff020000.i2c: recovery
> information complete
> >       >     >       > [    3.597085] at24 0-0050: supply vcc not found,
> using dummy regulator
> >       >     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not
> ready?
> >       >     >       > [    3.608093] at24 0-0050: 256 byte spd EEPROM,
> read-only
> >       >     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not
> ready?
> >       >     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not
> ready?
> >       >     >       > [    3.624224] rtc-rv3028 0-0052: registered as
> rtc1
> >       >     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not
> ready?
> >       >     >       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
> >       >     >       > [    3.639104] k81_bootloader 0-0010: probe
> >       >     >       > [    3.641628] VMCU: : (235:0) registered
> >       >     >       > [    3.641635] k81_bootloader 0-0010: probe
> completed
> >       >     >       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> >       >     >       > [    3.669154] cdns-i2c ff030000.i2c: recovery
> information complete
> >       >     >       > [    3.675412] lm75 1-0048: supply vs not found,
> using dummy regulator
> >       >     >       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> >       >     >       > [    3.686548] i2c i2c-1: Added multiplexed i2c
> bus 3
> >       >     >       > [    3.690795] i2c i2c-1: Added multiplexed i2c
> bus 4
> >       >     >       > [    3.695629] i2c i2c-1: Added multiplexed i2c
> bus 5
> >       >     >       > [    3.700492] i2c i2c-1: Added multiplexed i2c
> bus 6
> >       >     >       > [    3.705157] pca954x 1-0070: registered 4
> multiplexed busses for I2C switch pca9546
> >       >     >       > [    3.713049] at24 1-0054: supply vcc not found,
> using dummy regulator
> >       >     >       > [    3.720067] at24 1-0054: 1024 byte 24c08
> EEPROM, read-only
> >       >     >       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> >       >     >       > [    3.731272] sfp viper_enet:sfp-eth1: Host
> maximum power 2.0W
> >       >     >       > [    3.737549] sfp_register_socket: got sfp_bus
> >       >     >       > [    3.740709] sfp_register_socket: register
> sfp_bus
> >       >     >       > [    3.745459] sfp_register_bus: ops ok!
> >       >     >       > [    3.749179] sfp_register_bus: Try to attach
> >       >     >       > [    3.753419] sfp_register_bus: Attach succeeded
> >       >     >       > [    3.757914] sfp_register_bus: upstream ops
> attach
> >       >     >       > [    3.762677] sfp_register_bus: Bus registered
> >       >     >       > [    3.766999] sfp_register_socket: register
> sfp_bus succeeded
> >       >     >       > [    3.775870] of_cfs_init
> >       >     >       > [    3.776000] of_cfs_init: OK
> >       >     >       > [    3.778211] clk: Not disabling unused clocks
> >       >     >       > [   11.278477] Freeing initrd memory: 206056K
> >       >     >       > [   11.279406] Freeing unused kernel memory: 1536K
> >       >     >       > [   11.314006] Checked W+X mappings: passed, no
> W+X pages found
> >       >     >       > [   11.314142] Run /init as init process
> >       >     >       > INIT: version 3.01 booting
> >       >     >       > fsck (busybox 1.35.0)
> >       >     >       > /dev/mmcblk0p1: clean, 12/102400 files,
> 238162/409600 blocks
> >       >     >       > /dev/mmcblk0p2: clean, 12/102400 files,
> 171972/409600 blocks
> >       >     >       > /dev/mmcblk0p3 was not cleanly unmounted, check
> forced.
> >       >     >       > /dev/mmcblk0p3: 20/4096 files (0.0%
> non-contiguous), 663/16384 blocks
> >       >     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted
> filesystem without journal. Opts: (null). Quota mode:
> >       disabled.
> >       >     >       > Starting random number generator daemon.
> >       >     >       > [   11.580662] random: crng init done
> >       >     >       > Starting udev
> >       >     >       > [   11.613159] udevd[142]: starting version 3.2.10
> >       >     >       > [   11.620385] udevd[143]: starting eudev-3.2.10
> >       >     >       > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> >       >     >       > [   11.720264] macb ff0c0000.ethernet
> control_black: renamed from eth1
> >       >     >       > [   12.063396] ip_local_port_range: prefer
> different parity for start/end values.
> >       >     >       > [   12.084801] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
> >       >     >       > Mon Feb 27 08:40:53 UTC 2023
> >       >     >       > [   12.115309] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_set_time: bad result
> >       >     >       > hwclock: RTC_SET_TIME: Invalid exchange
> >       >     >       > [   12.131027] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >       >     >       > Starting mcud
> >       >     >       > INIT: Entering runlevel: 5
> >       >     >       > Configuring network interfaces... done.
> >       >     >       > resetting network interface
> >       >     >       > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> >       >     >       > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> >       >     >       > [   12.732151] pps pps0: new PPS source ptp0
> >       >     >       > [   12.735563] macb ff0b0000.ethernet:
> gem-ptp-timer ptp clock registered.
> >       >     >       > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> >       >     >       > [   12.753469] macb ff0c0000.ethernet
> control_black: configuring for phy/gmii link mode
> >       >     >       > [   12.761804] pps pps1: new PPS source ptp1
> >       >     >       > [   12.765398] macb ff0c0000.ethernet:
> gem-ptp-timer ptp clock registered.
> >       >     >       > Auto-negotiation: off
> >       >     >       > Auto-negotiation: off
> >       >     >       > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> >       >     >       > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> >       >     >       > [   16.860552] macb ff0c0000.ethernet
> control_black: unable to generate target frequency: 125000000 Hz
> >       >     >       > [   16.867052] macb ff0c0000.ethernet
> control_black: Link is Up - 1Gbps/Full - flow control off
> >       >     >       > Starting Failsafe Secure Shell server in port
> 2222: sshd
> >       >     >       > done.
> >       >     >       > Starting rpcbind daemon...done.
> >       >     >       >
> >       >     >       > [   17.093019] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
> >       >     >       > Starting State Manager Service
> >       >     >       > Start state-manager restarter...
> >       >     >       > (XEN) d0v1 Forwarding AES operation: 3254779951
> >       >     >       > Starting /usr/sbin/xenstored....[   17.265256]
> BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa
> >       devid 1 transid 744
> >       >     >       /dev/dm-0
> >       >     >       > scanned by udevd (385)
> >       >     >       > [   17.349933] BTRFS info (device dm-0): disk
> space caching is enabled
> >       >     >       > [   17.350670] BTRFS info (device dm-0): has
> skinny extents
> >       >     >       > [   17.364384] BTRFS info (device dm-0): enabling
> ssd optimizations
> >       >     >       > [   17.830462] BTRFS: device fsid
> 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6
> >       /dev/mapper/client_prov scanned by
> >       >     >       mkfs.btrfs
> >       >     >       > (526)
> >       >     >       > [   17.872699] BTRFS info (device dm-1): using
> free space tree
> >       >     >       > [   17.872771] BTRFS info (device dm-1): has
> skinny extents
> >       >     >       > [   17.878114] BTRFS info (device dm-1): flagging
> fs with big metadata feature
> >       >     >       > [   17.894289] BTRFS info (device dm-1): enabling
> ssd optimizations
> >       >     >       > [   17.895695] BTRFS info (device dm-1): checking
> UUID tree
> >       >     >       >
> >       >     >       > Setting domain 0 name, domid and JSON config...
> >       >     >       > Done setting up Dom0
> >       >     >       > Starting xenconsoled...
> >       >     >       > Starting QEMU as disk backend for dom0
> >       >     >       > Starting domain watchdog daemon: xenwatchdogd
> startup
> >       >     >       >
> >       >     >       > [   18.408647] BTRFS: device fsid
> 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6
> >       /dev/mapper/client_config scanned by
> >       >     >       mkfs.btrfs
> >       >     >       > (574)
> >       >     >       > [done]
> >       >     >       > [   18.465552] BTRFS info (device dm-2): using
> free space tree
> >       >     >       > [   18.465629] BTRFS info (device dm-2): has
> skinny extents
> >       >     >       > [   18.471002] BTRFS info (device dm-2): flagging
> fs with big metadata feature
> >       >     >       > Starting crond: [   18.482371] BTRFS info (device
> dm-2): enabling ssd optimizations
> >       >     >       > [   18.486659] BTRFS info (device dm-2): checking
> UUID tree
> >       >     >       > OK
> >       >     >       > starting rsyslogd ... Log partition ready after 0
> poll loops
> >       >     >       > done
> >       >     >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> >       >     >       > [   18.670637] BTRFS: device fsid
> 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3
> >       scanned by udevd (518)
> >       >     >       >
> >       >     >       > Please insert USB token and enter your role in
> login prompt.
> >       >     >       >
> >       >     >       > login:
> >       >     >       >
> >       >     >       > Regards,
> >       >     >       > O.
> >       >     >       >
> >       >     >       >
> >       >     >       > Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org>:
> >       >     >       >       Hi Oleg,
> >       >     >       >
> >       >     >       >       Here is the issue from your logs:
> >       >     >       >
> >       >     >       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
> >       >     >       >
> >       >     >       >       SErrors are special signals to notify
> software of serious hardware
> >       >     >       >       errors.  Something is going very wrong.
> Defective hardware is a
> >       >     >       >       possibility.  Another possibility is software accessing address ranges
> >       >     >       >       that it is not supposed to; that sometimes causes SErrors.
> >       >     >       >
> >       >     >       >       Cheers,
> >       >     >       >
> >       >     >       >       Stefano
> >       >     >       >
> >       >     >       >
> >       >     >       >
> >       >     >       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> >       >     >       >
> >       >     >       >       > Hello,
> >       >     >       >       >
> >       >     >       >       > Thanks guys.
> >       >     >       >       > I found out where the problem was.
> >       >     >       >       > Now dom0 boots further, but I have a new problem.
> >       >     >       >       > This is a kernel panic during Dom0 loading.
> >       >     >       >       > Maybe someone is able to suggest something?
> >       >     >       >       >
> >       >     >       >       > Regards,
> >       >     >       >       > O.
> >       >     >       >       >
> >       >     >       >       > [    3.771362] sfp_register_bus: upstream
> ops attach
> >       >     >       >       > [    3.776119] sfp_register_bus: Bus
> registered
> >       >     >       >       > [    3.780459] sfp_register_socket:
> register sfp_bus succeeded
> >       >     >       >       > [    3.789399] of_cfs_init
> >       >     >       >       > [    3.789499] of_cfs_init: OK
> >       >     >       >       > [    3.791685] clk: Not disabling unused
> clocks
> >       >     >       >       > [   11.010355] SError Interrupt on CPU0,
> code 0xbe000000 -- SError
> >       >     >       >       > [   11.010380] CPU: 0 PID: 9 Comm:
> kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> >       >     >       >       > [   11.010393] Workqueue: events_unbound
> async_run_entry_fn
> >       >     >       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> >       >     >       >       > [   11.010422] pc :
> simple_write_end+0xd0/0x130
> >       >     >       >       > [   11.010431] lr :
> generic_perform_write+0x118/0x1e0
> >       >     >       >       > [   11.010438] sp : ffffffc00809b910
> >       >     >       >       > [   11.010441] x29: ffffffc00809b910 x28:
> 0000000000000000 x27: ffffffef69ba88c0
> >       >     >       >       > [   11.010451] x26: 0000000000003eec x25:
> ffffff807515db00 x24: 0000000000000000
> >       >     >       >       > [   11.010459] x23: ffffffc00809ba90 x22:
> 0000000002aac000 x21: ffffff807315a260
> >       >     >       >       > [   11.010472] x20: 0000000000001000 x19:
> fffffffe02000000 x18: 0000000000000000
> >       >     >       >       > [   11.010481] x17: 00000000ffffffff x16:
> 0000000000008000 x15: 0000000000000000
> >       >     >       >       > [   11.010490] x14: 0000000000000000 x13:
> 0000000000000000 x12: 0000000000000000
> >       >     >       >       > [   11.010498] x11: 0000000000000000 x10:
> 0000000000000000 x9 : 0000000000000000
> >       >     >       >       > [   11.010507] x8 : 0000000000000000 x7 :
> ffffffef693ba680 x6 : 000000002d89b700
> >       >     >       >       > [   11.010515] x5 : fffffffe02000000 x4 :
> ffffff807315a3c8 x3 : 0000000000001000
> >       >     >       >       > [   11.010524] x2 : 0000000002aab000 x1 :
> 0000000000000001 x0 : 0000000000000005
> >       >     >       >       > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> >       >     >       >       > [   11.010539] CPU: 0 PID: 9 Comm:
> kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> >       >     >       >       > [   11.010545] Hardware name: D14 Viper
> Board - White Unit (DT)
> >       >     >       >       > [   11.010548] Workqueue: events_unbound
> async_run_entry_fn
> >       >     >       >       > [   11.010556] Call trace:
> >       >     >       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
> >       >     >       >       > [   11.010567]  show_stack+0x18/0x2c
> >       >     >       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> >       >     >       >       > [   11.010583]  dump_stack+0x18/0x34
> >       >     >       >       > [   11.010588]  panic+0x14c/0x2f8
> >       >     >       >       > [   11.010597]  print_tainted+0x0/0xb0
> >       >     >       >       > [   11.010606]
>  arm64_serror_panic+0x6c/0x7c
> >       >     >       >       > [   11.010614]  do_serror+0x28/0x60
> >       >     >       >       > [   11.010621]
>  el1h_64_error_handler+0x30/0x50
> >       >     >       >       > [   11.010628]  el1h_64_error+0x78/0x7c
> >       >     >       >       > [   11.010633]  simple_write_end+0xd0/0x130
> >       >     >       >       > [   11.010639]
>  generic_perform_write+0x118/0x1e0
> >       >     >       >       > [   11.010644]
>  __generic_file_write_iter+0x138/0x1c4
> >       >     >       >       > [   11.010650]
>  generic_file_write_iter+0x78/0xd0
> >       >     >       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
> >       >     >       >       > [   11.010665]  kernel_write+0x88/0x160
> >       >     >       >       > [   11.010673]  xwrite+0x44/0x94
> >       >     >       >       > [   11.010680]  do_copy+0xa8/0x104
> >       >     >       >       > [   11.010686]  write_buffer+0x38/0x58
> >       >     >       >       > [   11.010692]  flush_buffer+0x4c/0xbc
> >       >     >       >       > [   11.010698]  __gunzip+0x280/0x310
> >       >     >       >       > [   11.010704]  gunzip+0x1c/0x28
> >       >     >       >       > [   11.010709]
>  unpack_to_rootfs+0x170/0x2b0
> >       >     >       >       > [   11.010715]
>  do_populate_rootfs+0x80/0x164
> >       >     >       >       > [   11.010722]
>  async_run_entry_fn+0x48/0x164
> >       >     >       >       > [   11.010728]
>  process_one_work+0x1e4/0x3a0
> >       >     >       >       > [   11.010736]  worker_thread+0x7c/0x4c0
> >       >     >       >       > [   11.010743]  kthread+0x120/0x130
> >       >     >       >       > [   11.010750]  ret_from_fork+0x10/0x20
> >       >     >       >       > [   11.010757] SMP: stopping secondary CPUs
> >       >     >       >       > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> >       >     >       >       > [   11.010788] PHYS_OFFSET: 0x0
> >       >     >       >       > [   11.010790] CPU features:
> 0x00000401,00000842
> >       >     >       >       > [   11.010795] Memory Limit: none
> >       >     >       >       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> >       >     >       >       >
> >       >     >       >       > Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com>:
> >       >     >       >       >       Hi Oleg,
> >       >     >       >       >
> >       >     >       >       >       On 21/04/2023 14:49, Oleg Nikitenko
> wrote:
> >       >     >       >       >       >
> >       >     >       >       >       >
> >       >     >       >       >       >
> >       >     >       >       >       > Hello Michal,
> >       >     >       >       >       >
> >       >     >       >       >       > I was not able to enable
> earlyprintk in the xen for now.
> >       >     >       >       >       > I decided to choose another way.
> >       >     >       >       >       > This is a xen's command line that
> I found out completely.
> >       >     >       >       >       >
> >       >     >       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> >       >     >       >       >       Yes, adding a printk() in Xen was
> also a good idea.
> >       >     >       >       >
> >       >     >       >       >       >
> >       >     >       >       >       > So you are absolutely right about
> a command line.
> >       >     >       >       >       > Now I am going to find out why xen did not have the correct parameters from the device tree.
> >       >     >       >       >       Maybe you will find this document
> helpful:
> >       >     >       >       >
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> >       >     >       >       >
> >       >     >       >       >       ~Michal
> >       >     >       >       >
> >       >     >       >       >       >
> >       >     >       >       >       > Regards,
> >       >     >       >       >       > Oleg
> >       >     >       >       >       >
> >       >     >       >       >       > Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com>:
> >       >     >       >       >       >
> >       >     >       >       >       >
> >       >     >       >       >       >     On 21/04/2023 10:04, Oleg
> Nikitenko wrote:
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >
> >       >     >       >       >       >     > Hello Michal,
> >       >     >       >       >       >     >
> >       >     >       >       >       >     > Yes, I use yocto.
> >       >     >       >       >       >     >
> >       >     >       >       >       >     > Yesterday all day long I
> tried to follow your suggestions.
> >       >     >       >       >       >     > I faced a problem.
> >       >     >       >       >       >     > Manually in the xen config
> build file I pasted the strings:
> >       >     >       >       >       >     In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
> >       >     >       >       >       >     You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
> >       >     >       >       >       >
> >       >     >       >       >       >     >
> >       >     >       >       >       >     > CONFIG_EARLY_PRINTK
> >       >     >       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
> >       >     >       >       >       >     >
> CONFIG_EARLY_UART_CHOICE_CADENCE
> >       >     >       >       >     I hope you added =y to them.
> >       >     >       >       >       >
> >       >     >       >       >       >     Anyway, you have at least the
> following solutions:
> >       >     >       >       >       >     1) Run bitbake xen -c
> menuconfig to properly set early printk
> >       >     >       >       >       >     2) Find out how you enable
> other Kconfig options in your project (e.g.
> >       CONFIG_COLORING=y that is not
> >       >     >       enabled by
> >       >     >       >       default)
> >       >     >       >       >       >     3) Append the following to
> "xen/arch/arm/configs/arm64_defconfig":
> >       >     >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
> >       >     >       >       >       >
> >       >     >       >       >       >     ~Michal
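[Editor's note] Option 3 above can be sketched as a one-line append (an illustration, not the thread's exact Yocto workflow; the real file is xen/arch/arm/configs/arm64_defconfig in a Xen tree, and the stand-in file name here is hypothetical so the snippet runs anywhere):

```shell
# Append the early-printk option to the defconfig instead of hand-editing
# .config. Substitute xen/arch/arm/configs/arm64_defconfig for the stand-in
# file when working in an actual Xen source tree.
defconfig=arm64_defconfig.sample
printf 'CONFIG_EARLY_PRINTK_ZYNQMP=y\n' >> "$defconfig"
grep 'CONFIG_EARLY_PRINTK_ZYNQMP' "$defconfig"
```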
> >       >     >       >       >       >
> >       >     >       >       >       >     >
> >       >     >       >       >       >     > Host hangs in build time.
> >       >     >       >       >       >     > Maybe I did not set
> something in the config build file ?
> >       >     >       >       >       >     >
> >       >     >       >       >       >     > Regards,
> >       >     >       >       >       >     > Oleg
> >       >     >       >       >       >     >
> >       >     >       >       >     > Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >     Thanks Michal,
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >     You gave me an idea.
> >       >     >       >       >       >     >     I am going to try it
> today.
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >     Regards,
> >       >     >       >       >       >     >     O.
> >       >     >       >       >       >     >
> >       >     >       >       >     >     Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >         Thanks Stefano.
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >         I am going to do it
> today.
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >         Regards,
> >       >     >       >       >       >     >         O.
> >       >     >       >       >       >     >
> >       >     >       >       >     >         Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >             On Wed, 19 Apr
> 2023, Oleg Nikitenko wrote:
> >       >     >       >       >       >     >             > Hi Michal,
> >       >     >       >       >       >     >             >
> >       >     >       >       >       >     >             > I corrected
> xen's command line.
> >       >     >       >       >       >     >             > Now it is
> >       >     >       >       >       >     >             >
> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >             4 colors is way
> too many for xen, just do xen_colors=0-0. There is no
> >       >     >       >       >       >     >             advantage in
> using more than 1 color for Xen.
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >             4 colors is too
> few for dom0, if you are giving 1600M of memory to
> >       Dom0.
> >       >     >       >       >       >     >             Each color is
> 256M. For 1600M you should give at least 7 colors. Try:
> >       >     >       >       >       >     >
> >       >     >       >       >     >             xen_colors=0-0 dom0_colors=1-8
> >       >     >       >       >       >     >
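[Editor's note] The color arithmetic above can be checked with a short shell sketch (assuming, as stated in the thread, 256 MiB of dom0 memory per color):

```shell
# ceil(dom0_mem / granule): a 1600 MiB dom0 at 256 MiB per color needs
# 7 colors, so a range like dom0_colors=1-8 (8 colors) leaves headroom.
dom0_mem_mib=1600
granule_mib=256
echo $(( (dom0_mem_mib + granule_mib - 1) / granule_mib ))   # prints 7
```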
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >
> >       >     >       >       >       >     >             > Unfortunately
> the result was the same.
> >       >     >       >       >       >     >             >
> >       >     >       >       >       >     >             > (XEN)  - Dom0
> mode: Relaxed
> >       >     >       >       >       >     >             > (XEN) P2M:
> 40-bit IPA with 40-bit PA and 8-bit VMID
> >       >     >       >       >       >     >             > (XEN) P2M: 3
> levels with order-1 root, VTCR 0x0000000080023558
> >       >     >       >       >       >     >             > (XEN)
> Scheduling granularity: cpu, 1 CPU per sched-resource
> >       >     >       >       >     >             > (XEN) Coloring general information
> >       >     >       >       >       >     >             > (XEN) Way
> size: 64kB
> >       >     >       >       >       >     >             > (XEN) Max.
> number of colors available: 16
> >       >     >       >       >       >     >             > (XEN) Xen
> color(s): [ 0 ]
> >       >     >       >       >       >     >             > (XEN)
> alternatives: Patching with alt table 00000000002cc690 ->
> >       00000000002ccc0c
> >       >     >       >       >       >     >             > (XEN) Color
> array allocation failed for dom0
> >       >     >       >       >       >     >             > (XEN)
> >       >     >       >       >       >     >             > (XEN)
> ****************************************
> >       >     >       >       >     >             > (XEN) Panic on CPU 0:
> >       >     >       >       >       >     >             > (XEN) Error
> creating domain 0
> >       >     >       >       >       >     >             > (XEN)
> ****************************************
> >       >     >       >       >       >     >             > (XEN)
> >       >     >       >       >       >     >             > (XEN) Reboot
> in five seconds...
> >       >     >       >       >       >     >             >
> >       >     >       >       >       >     >             > I am going to
> find out how command line arguments passed and parsed.
> >       >     >       >       >       >     >             >
> >       >     >       >       >       >     >             > Regards,
> >       >     >       >       >       >     >             > Oleg
> >       >     >       >       >       >     >             >
> On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > Hi Michal,
> >
> > You put my nose into the problem. Thank you.
> > I am going to use your point.
> > Let's see what happens.
> >
> > Regards,
> > Oleg
> >
> > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
> > > Hi Oleg,
> > >
> > > On 19/04/2023 09:03, Oleg Nikitenko wrote:
> > > > Hello Stefano,
> > > >
> > > > Thanks for the clarification.
> > > > My company uses Yocto for image generation.
> > > > What kind of information do you need to consult me in this case?
> > > >
> > > > Maybe module sizes/addresses, which were mentioned by @Julien Grall
> > > > <julien@xen.org>?
> > >
> > > Sorry for jumping into the discussion, but FWICS the Xen command line
> > > you provided seems not to be the one Xen booted with. The error you are
> > > observing is most likely due to the dom0 colors configuration not being
> > > specified (i.e. lack of a dom0_colors=<> parameter). Although in the
> > > command line you provided this parameter is set, I strongly doubt that
> > > this is the actual command line in use.
> > >
> > > You wrote:
> > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> > >
> > > but:
> > > 1) way_szize has a typo
> > > 2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen
> > >    has only one:
> > >    (XEN) Xen color(s): [ 0 ]
> > >
> > > This makes me believe that no colors configuration actually ended up in
> > > the command line that Xen booted with. A single color for Xen is the
> > > "default if not specified", and the way size was probably calculated by
> > > asking the HW.
> > >
> > > So I would suggest first cross-checking the command line in use.
> > >
> > > ~Michal
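[Editorial note: with the typo corrected, the bootargs line under discussion could be written in the device tree along these lines. This is a sketch only; whether `way_size` is the exact option name in your tree should be checked against the coloring patches.]

```dts
/* Hypothetical /chosen fragment: way_size spelled correctly, with the
 * color assignments the boot log should then report. */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
};
```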
> > > >
> > > > Regards,
> > > > Oleg
> > > >
> > > > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini
> > > > <sstabellini@kernel.org> wrote:
> > > > > On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> > > > > > Hi Julien,
> > > > > >
> > > > > > >> This feature has not been merged in Xen upstream yet
> > > > > >
> > > > > > > would assume that upstream + the series on the ML [1] work
> > > > > >
> > > > > > Please clarify this point.
> > > > > > Because the two thoughts are controversial.
> > > > >
> > > > > Hi Oleg,
> > > > >
> > > > > As Julien wrote, there is nothing controversial. As you are aware,
> > > > > Xilinx maintains a separate Xen tree specific for Xilinx here:
> > > > > https://github.com/xilinx/xen
> > > > >
> > > > > and the branch you are using (xlnx_rebase_4.16) comes from there.
> > > > >
> > > > > Instead, the upstream Xen tree lives here:
> > > > > https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> > > > >
> > > > > The Cache Coloring feature that you are trying to configure is
> > > > > present in xlnx_rebase_4.16, but not yet present upstream (there is
> > > > > an outstanding patch series to add cache coloring to Xen upstream,
> > > > > but it hasn't been merged yet.)
> > > > >
> > > > > Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much
> > > > > for you, as you already have Cache Coloring as a feature there.
> > > > >
> > > > > I take it you are using ImageBuilder to generate the boot
> > > > > configuration? If so, please post the ImageBuilder config file that
> > > > > you are using.
> > > > >
> > > > > But from the boot message, it looks like the colors configuration
> > > > > for Dom0 is incorrect.
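[Editorial note: an ImageBuilder (uboot-script-gen) config for a zcu102-style board typically looks something like the sketch below. The variable names follow ImageBuilder's documented conventions, but the file names, addresses, and XEN_CMD contents here are illustrative placeholders, not a verified configuration.]

```shell
# Hypothetical ImageBuilder config sketch -- paths and values are placeholders.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

DEVICE_TREE="system.dtb"
XEN="xen"
XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1600M xen_colors=0-3 dom0_colors=4-7"

DOM0_KERNEL="Image"
DOM0_RAMDISK="rootfs.cpio.gz"

NUM_DOMUS=0
```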

IChYRU4pIHNtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogU01NVXYyIHdpdGg6PGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBzbW11OiAvYXhp
L3NtbXVAZmQ4MDAwMDA6IHN0YWdlIDIgdHJhbnNsYXRpb248YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBmZDgw
MDAwMDogc3RyZWFtIG1hdGNoaW5nIHdpdGggNDggcmVnaXN0ZXIgZ3JvdXBzLCBtYXNrIDB4N2Zm
ZiZsdDsyJmd0O3NtbXU6PGJyPg0KJmd0O8KgIMKgIMKgIMKgL2F4aS9zbW11QGZkODAwMDAwOiAx
NiBjb250ZXh0PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
YmFua3MgKDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IHN0YWdlLTIgb25seSk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogU3RhZ2UtMjogNDgt
Yml0IElQQSAtJmd0OyA0OC1iaXQgUEE8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogcmVnaXN0
ZXJlZCAyOSBtYXN0ZXIgZGV2aWNlczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQ8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgLSBE
b20wIG1vZGU6IFJlbGF4ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIFAyTTogNDAtYml0IElQQSB3aXRoIDQwLWJpdCBQQSBhbmQgOC1i
aXQgVk1JRDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgKFhFTikgUDJNOiAzIGxldmVscyB3aXRoIG9yZGVyLTEgcm9vdCwgVlRDUiAweDAwMDAwMDAw
ODAwMjM1NTg8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IChYRU4pIFNjaGVkdWxpbmcgZ3JhbnVsYXJpdHk6IGNwdSwgMSBDUFUgcGVyIHNjaGVkLXJl
c291cmNlPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyAoWEVOKSBhbHRlcm5hdGl2ZXM6IFBhdGNoaW5nIHdpdGggYWx0IHRhYmxlIDAwMDAwMDAwMDAy
Y2M1YzggLSZndDsgMDAwMDAwMDAwMDJjY2IyYzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgKioqIExPQURJTkcgRE9NQUlOIDAgKioqPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBM
b2FkaW5nIGQwIGtlcm5lbCBmcm9tIGJvb3QgbW9kdWxlIEAgMDAwMDAwMDAwMTAwMDAwMDxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgTG9h
ZGluZyByYW1kaXNrIGZyb20gYm9vdCBtb2R1bGUgQCAwMDAwMDAwMDAyMDAwMDAwPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBBbGxvY2F0
aW5nIDE6MSBtYXBwaW5ncyB0b3RhbGxpbmcgMTYwME1CIGZvciBkb20wOjxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQkFOS1swXSAweDAw
MDAwMDEwMDAwMDAwLTB4MDAwMDAwMjAwMDAwMDAgKDI1Nk1CKTxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQkFOS1sxXSAweDAwMDAwMDI0
MDAwMDAwLTB4MDAwMDAwMjgwMDAwMDAgKDY0TUIpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBCQU5LWzJdIDB4MDAwMDAwMzAwMDAwMDAt
MHgwMDAwMDA4MDAwMDAwMCAoMTI4ME1CKTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgR3JhbnQgdGFibGUgcmFuZ2U6IDB4MDAwMDAwMDBl
MDAwMDAtMHgwMDAwMDAwMGU0MDAwMDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgc21tdTogL2F4aS9zbW11QGZkODAwMDAwOiBkMDogcDJt
YWRkciAweDAwMDAwMDA4N2JmOTQwMDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEFsbG9jYXRpbmcgUFBJIDE2IGZvciBldmVudCBjaGFu
bmVsIGludGVycnVwdDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDA6IDB4ODEyMDAwMDAtJmd0OzB4YTAwMDAw
MDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIEV4dGVuZGVkIHJlZ2lvbiAxOiAweGIxMjAwMDAwLSZndDsweGMwMDAwMDAwPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBFeHRlbmRl
ZCByZWdpb24gMjogMHhjODAwMDAwMC0mZ3Q7MHhlMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDM6
IDB4ZjAwMDAwMDAtJmd0OzB4ZjkwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiA0OiAweDEwMDAwMDAw
MC0mZ3Q7MHg2MDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiA1OiAweDg4MDAwMDAwMC0mZ3Q7MHg4
MDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBFeHRlbmRlZCByZWdpb24gNjogMHg4MDAxMDAwMDAwLSZndDsweDEwMDAwMDAw
MDAwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAo
WEVOKSBMb2FkaW5nIHpJbWFnZSBmcm9tIDAwMDAwMDAwMDEwMDAwMDAgdG8gMDAwMDAwMDAxMDAw
MDAwMC0wMDAwMDAwMDEwZTQxMDA4PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBMb2FkaW5nIGQwIGluaXRyZCBmcm9tIDAwMDAwMDAwMDIw
MDAwMDAgdG8gMHgwMDAwMDAwMDEzNjAwMDAwLTB4MDAwMDAwMDAxZmYzYTYxNzxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgTG9hZGluZyBk
MCBEVEIgdG8gMHgwMDAwMDAwMDEzNDAwMDAwLTB4MDAwMDAwMDAxMzQwY2JkYzxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgSW5pdGlhbCBs
b3cgbWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBTdGQuIExvZ2xl
dmVsOiBBbGw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IChYRU4pIEd1ZXN0IExvZ2xldmVsOiBBbGw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pICoqKiBTZXJpYWwgaW5wdXQgdG8gRE9NMCAo
dHlwZSAmIzM5O0NUUkwtYSYjMzk7IHRocmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCk8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIG51bGwu
YzozNTM6IDAgJmx0Oy0tIGQwdjA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEZyZWVkIDM1NmtCIGluaXQgbWVtb3J5Ljxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MCBVbmhh
bmRsZWQgU01DL0hWQzogMHg4NDAwMDA1MDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MCBVbmhhbmRsZWQgU01DL0hWQzogMHg4NjAw
ZmYwMTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYg
dG8gSUNBQ1RJVkVSNDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAw
ZmZmZmZmZmYgdG8gSUNBQ1RJVkVSODxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRl
IDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMTI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIGQwdjA6IHZHSUNEOiB1bmhhbmRsZWQg
d29yZCB3cml0ZSAweDAwMDAwMGZmZmZmZmZmIHRvIElDQUNUSVZFUjE2PGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBkMHYwOiB2R0lDRDog
dW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIyMDxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2
MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJ
VkVSMDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuMDAwMDAwXSBCb290aW5nIExpbnV4IG9uIHBoeXNpY2FsIENQVSAweDAwMDAwMDAw
MDAgWzB4NDEwZmQwMzRdPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIExpbnV4IHZlcnNpb24gNS4xNS43Mi14aWxpbngt
djIwMjIuMSAob2UtdXNlckBvZS1ob3N0KSAoYWFyY2g2NC1wb3J0YWJsZS1saW51eC1nY2MgKEdD
Qyk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAxMS4zLjAsIEdOVSBsZCAoR05VPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgQmludXRpbHMpPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAyLjM4LjIwMjIwNzA4KSAjMSBT
TVAgVHVlIEZlYiAyMSAwNTo0Nzo1NCBVVEMgMjAyMzxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBNYWNoaW5lIG1vZGVs
OiBEMTQgVmlwZXIgQm9hcmQgLSBXaGl0ZSBVbml0PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIFhlbiA0LjE2IHN1cHBv
cnQgZm91bmQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjAwMDAwMF0gWm9uZSByYW5nZXM6PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIMKgIERNQSDCoCDC
oCDCoFttZW0gMHgwMDAwMDAwMDEwMDAwMDAwLTB4MDAwMDAwMDA3ZmZmZmZmZl08YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAw
MF0gwqAgRE1BMzIgwqAgwqBlbXB0eTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSDCoCBOb3JtYWwgwqAgZW1wdHk8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjAwMDAwMF0gTW92YWJsZSB6b25lIHN0YXJ0IGZvciBlYWNoIG5vZGU8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gRWFy
bHkgbWVtb3J5IG5vZGUgcmFuZ2VzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIMKgIG5vZGUgwqAgMDogW21lbSAweDAw
MDAwMDAwMTAwMDAwMDAtMHgwMDAwMDAwMDFmZmZmZmZmXTxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSDCoCBub2RlIMKg
IDA6IFttZW0gMHgwMDAwMDAwMDIyMDAwMDAwLTB4MDAwMDAwMDAyMjE0N2ZmZl08YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAw
MF0gwqAgbm9kZSDCoCAwOiBbbWVtIDB4MDAwMDAwMDAyMjIwMDAwMC0weDAwMDAwMDAwMjIzNDdm
ZmZdPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMC4wMDAwMDBdIMKgIG5vZGUgwqAgMDogW21lbSAweDAwMDAwMDAwMjQwMDAwMDAtMHgw
MDAwMDAwMDI3ZmZmZmZmXTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSDCoCBub2RlIMKgIDA6IFttZW0gMHgwMDAwMDAw
MDMwMDAwMDAwLTB4MDAwMDAwMDA3ZmZmZmZmZl08YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gSW5pdG1lbSBzZXR1cCBu
b2RlIDAgW21lbSAweDAwMDAwMDAwMTAwMDAwMDAtMHgwMDAwMDAwMDdmZmZmZmZmXTxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAw
MDAwXSBPbiBub2RlIDAsIHpvbmUgRE1BOiA4MTkyIHBhZ2VzIGluIHVuYXZhaWxhYmxlIHJhbmdl
czxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDAuMDAwMDAwXSBPbiBub2RlIDAsIHpvbmUgRE1BOiAxODQgcGFnZXMgaW4gdW5hdmFpbGFi
bGUgcmFuZ2VzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMC4wMDAwMDBdIE9uIG5vZGUgMCwgem9uZSBETUE6IDczNTIgcGFnZXMgaW4g
dW5hdmFpbGFibGUgcmFuZ2VzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIGNtYTogUmVzZXJ2ZWQgMjU2IE1pQiBhdCAw
eDAwMDAwMDAwNmUwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcHNjaTogcHJvYmluZyBmb3IgY29uZHVpdCBt
ZXRob2QgZnJvbSBEVC48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcHNjaTogUFNDSXYxLjEgZGV0ZWN0ZWQgaW4gZmly
bXdhcmUuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC4wMDAwMDBdIHBzY2k6IFVzaW5nIHN0YW5kYXJkIFBTQ0kgdjAuMiBmdW5jdGlv
biBJRHM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAwLjAwMDAwMF0gcHNjaTogVHJ1c3RlZCBPUyBtaWdyYXRpb24gbm90IHJlcXVpcmVk
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMC4wMDAwMDBdIHBzY2k6IFNNQyBDYWxsaW5nIENvbnZlbnRpb24gdjEuMTxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAw
XSBwZXJjcHU6IEVtYmVkZGVkIDE2IHBhZ2VzL2NwdSBzMzI3OTIgcjAgZDMyNzQ0IHU2NTUzNjxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDAuMDAwMDAwXSBEZXRlY3RlZCBWSVBUIEktY2FjaGUgb24gQ1BVMDxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBDUFUg
ZmVhdHVyZXM6IGtlcm5lbCBwYWdlIHRhYmxlIGlzb2xhdGlvbiBmb3JjZWQgT04gYnkgS0FTTFI8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAwLjAwMDAwMF0gQ1BVIGZlYXR1cmVzOiBkZXRlY3RlZDogS2VybmVsIHBhZ2UgdGFibGUgaXNv
bGF0aW9uIChLUFRJKTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBCdWlsdCAxIHpvbmVsaXN0cywgbW9iaWxpdHkgZ3Jv
dXBpbmcgb24uwqAgVG90YWwgcGFnZXM6IDQwMzg0NTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBLZXJuZWwgY29tbWFu
ZCBsaW5lOiBjb25zb2xlPWh2YzAgZWFybHljb249eGVuIGVhcmx5cHJpbnRrPXhlbiBjbGtfaWdu
b3JlX3VudXNlZCBmaXBzPTE8YnI+DQomZ3Q7wqAgwqAgwqAgwqByb290PS9kZXYvcmFtMDxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoG1heGNwdXM9Mjxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
MDAwMDAwXSBVbmtub3duIGtlcm5lbCBjb21tYW5kIGxpbmUgcGFyYW1ldGVycyAmcXVvdDtlYXJs
eXByaW50az14ZW4gZmlwcz0xJnF1b3Q7LCB3aWxsIGJlIHBhc3NlZCB0byB1c2VyPGJyPg0KJmd0
O8KgIMKgIMKgIMKgc3BhY2UuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVu
dHJpZXM6IDI2MjE0NCAob3JkZXI6IDksIDIwOTcxNTIgYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAw
MF0gSW5vZGUtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAxMzEwNzIgKG9yZGVyOiA4LCAxMDQ4
NTc2IGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIG1lbSBhdXRvLWluaXQ6IHN0YWNrOm9mZiwg
aGVhcCBhbGxvYzpvbiwgaGVhcCBmcmVlOm9uPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIG1lbSBhdXRvLWluaXQ6IGNs
ZWFyaW5nIHN5c3RlbSBtZW1vcnkgbWF5IHRha2Ugc29tZSB0aW1lLi4uPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE1l
bW9yeTogMTEyMTkzNksvMTY0MTAyNEsgYXZhaWxhYmxlICg5NzI4SyBrZXJuZWwgY29kZSwgODM2
SyByd2RhdGEsIDIzOTZLIHJvZGF0YSwgMTUzNks8YnI+DQomZ3Q7wqAgwqAgwqAgwqBpbml0LCAy
NjJLIGJzcyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAy
NTY5NDRLIHJlc2VydmVkLDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgMjYyMTQ0SyBjbWEtcmVzZXJ2ZWQpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIFNMVUI6IEhXYWxp
Z249NjQsIE9yZGVyPTAtMywgTWluT2JqZWN0cz0wLCBDUFVzPTIsIE5vZGVzPTE8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAw
MF0gcmN1OiBIaWVyYXJjaGljYWwgUkNVIGltcGxlbWVudGF0aW9uLjxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSByY3U6
IFJDVSBldmVudCB0cmFjaW5nIGlzIGVuYWJsZWQuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHJjdTogUkNVIHJlc3Ry
aWN0aW5nIENQVXMgZnJvbSBOUl9DUFVTPTggdG8gbnJfY3B1X2lkcz0yLjxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBy
Y3U6IFJDVSBjYWxjdWxhdGVkIHZhbHVlIG9mIHNjaGVkdWxlci1lbmxpc3RtZW50IGRlbGF5IGlz
IDI1IGppZmZpZXMuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHJjdTogQWRqdXN0aW5nIGdlb21ldHJ5IGZvciByY3Vf
ZmFub3V0X2xlYWY9MTYsIG5yX2NwdV9pZHM9Mjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBOUl9JUlFTOiA2NCwgbnJf
aXJxczogNjQsIHByZWFsbG9jYXRlZCBpcnFzOiAwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIFJvb3QgSVJRIGhhbmRs
ZXI6IGdpY19oYW5kbGVfaXJxPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIGFyY2hfdGltZXI6IGNwMTUgdGltZXIocykg
cnVubmluZyBhdCAxMDAuMDBNSHogKHZpcnQpLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBjbG9ja3NvdXJjZTogYXJj
aF9zeXNfY291bnRlcjogbWFzazogMHhmZmZmZmZmZmZmZmZmZiBtYXhfY3ljbGVzOiAweDE3MTAy
NGU3ZTAsPGJyPg0KJmd0O8KgIMKgIMKgIMKgbWF4X2lkbGVfbnM6IDQ0MDc5NTIwNTMxNSBuczxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDAuMDAwMDAwXSBzY2hlZF9jbG9jazogNTYgYml0cyBhdCAxMDBNSHosIHJlc29sdXRpb24gMTBu
cywgd3JhcHMgZXZlcnkgNDM5ODA0NjUxMTEwMG5zPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAyNThdIENvbnNvbGU6IGNvbG91
ciBkdW1teSBkZXZpY2UgODB4MjU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjMxMDIzMV0gcHJpbnRrOiBjb25zb2xlIFtodmMwXSBl
bmFibGVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC4zMTQ0MDNdIENhbGlicmF0aW5nIGRlbGF5IGxvb3AgKHNraXBwZWQpLCB2YWx1
ZSBjYWxjdWxhdGVkIHVzaW5nIHRpbWVyIGZyZXF1ZW5jeS4uIDIwMC4wMCBCb2dvTUlQUzxicj4N
CiZndDvCoCDCoCDCoCDCoChscGo9NDAwMDAwKTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzI0ODUxXSBwaWRfbWF4OiBkZWZhdWx0
OiAzMjc2OCBtaW5pbXVtOiAzMDE8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjMyOTcwNl0gTFNNOiBTZWN1cml0eSBGcmFtZXdvcmsg
aW5pdGlhbGl6aW5nPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC4zMzQyMDRdIFlhbWE6IGJlY29taW5nIG1pbmRmdWwuPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zMzc4
NjVdIE1vdW50LWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNDA5NiAob3JkZXI6IDMsIDMyNzY4
IGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4zNDUxODBdIE1vdW50cG9pbnQtY2FjaGUgaGFzaCB0YWJsZSBl
bnRyaWVzOiA0MDk2IChvcmRlcjogMywgMzI3NjggYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM1NDc0M10g
eGVuOmdyYW50X3RhYmxlOiBHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91dDxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
MzU5MTMyXSBHcmFudCB0YWJsZSBpbml0aWFsaXplZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzYyNjY0XSB4ZW46ZXZlbnRzOiBV
c2luZyBGSUZPLWJhc2VkIEFCSTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzY2OTkzXSBYZW46IGluaXRpYWxpemluZyBjcHUwPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MC4zNzA1MTVdIHJjdTogSGllcmFyY2hpY2FsIFNSQ1UgaW1wbGVtZW50YXRpb24uPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zNzU5
MzBdIHNtcDogQnJpbmdpbmcgdXAgc2Vjb25kYXJ5IENQVXMgLi4uPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBudWxsLmM6MzUzOiAxICZs
dDstLSBkMHYxPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBkMHYxOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZm
ZmZmZiB0byBJQ0FDVElWRVIwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zODI1NDldIERldGVjdGVkIFZJUFQgSS1jYWNoZSBvbiBD
UFUxPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMC4zODg3MTJdIFhlbjogaW5pdGlhbGl6aW5nIGNwdTE8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM4ODc0M10gQ1BVMTog
Qm9vdGVkIHNlY29uZGFyeSBwcm9jZXNzb3IgMHgwMDAwMDAwMDAxIFsweDQxMGZkMDM0XTxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
Mzg4ODI5XSBzbXA6IEJyb3VnaHQgdXAgMSBub2RlLCAyIENQVXM8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQwNjk0MV0gU01QOiBU
b3RhbCBvZiAyIHByb2Nlc3NvcnMgYWN0aXZhdGVkLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDExNjk4XSBDUFUgZmVhdHVyZXM6
IGRldGVjdGVkOiAzMi1iaXQgRUwwIFN1cHBvcnQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQxNjg4OF0gQ1BVIGZlYXR1cmVzOiBk
ZXRlY3RlZDogQ1JDMzIgaW5zdHJ1Y3Rpb25zPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40MjIxMjFdIENQVTogQWxsIENQVShzKSBz
dGFydGVkIGF0IEVMMTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuNDI2MjQ4XSBhbHRlcm5hdGl2ZXM6IHBhdGNoaW5nIGtlcm5lbCBj
b2RlPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMC40MzE0MjRdIGRldnRtcGZzOiBpbml0aWFsaXplZDxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDQxNDU0XSBLQVNMUiBl
bmFibGVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC40NDE2MDJdIGNsb2Nrc291cmNlOiBqaWZmaWVzOiBtYXNrOiAweGZmZmZmZmZm
IG1heF9jeWNsZXM6IDB4ZmZmZmZmZmYsIG1heF9pZGxlX25zOjxicj4NCiZndDvCoCDCoCDCoCDC
oDc2NDUwNDE3ODUxMDAwMDAgbnM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQ0ODMyMV0gZnV0ZXggaGFzaCB0YWJsZSBlbnRyaWVz
OiA1MTIgKG9yZGVyOiAzLCAzMjc2OCBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDk2MTgzXSBORVQ6IFJl
Z2lzdGVyZWQgUEZfTkVUTElOSy9QRl9ST1VURSBwcm90b2NvbCBmYW1pbHk8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQ5ODI3N10g
RE1BOiBwcmVhbGxvY2F0ZWQgMjU2IEtpQiBHRlBfS0VSTkVMIHBvb2wgZm9yIGF0b21pYyBhbGxv
Y2F0aW9uczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDAuNTAzNzcyXSBETUE6IHByZWFsbG9jYXRlZCAyNTYgS2lCIEdGUF9LRVJORUx8
R0ZQX0RNQSBwb29sIGZvciBhdG9taWMgYWxsb2NhdGlvbnM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjUxMTYxMF0gRE1BOiBwcmVh
bGxvY2F0ZWQgMjU2IEtpQiBHRlBfS0VSTkVMfEdGUF9ETUEzMiBwb29sIGZvciBhdG9taWMgYWxs
b2NhdGlvbnM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjUxOTQ3OF0gYXVkaXQ6IGluaXRpYWxpemluZyBuZXRsaW5rIHN1YnN5cyAo
ZGlzYWJsZWQpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMC41MjQ5ODVdIGF1ZGl0OiB0eXBlPTIwMDAgYXVkaXQoMC4zMzY6MSk6IHN0
YXRlPWluaXRpYWxpemVkIGF1ZGl0X2VuYWJsZWQ9MCByZXM9MTxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNTI5MTY5XSB0aGVybWFs
X3N5czogUmVnaXN0ZXJlZCB0aGVybWFsIGdvdmVybm9yICYjMzk7c3RlcF93aXNlJiMzOTs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjUzMzAyM10gaHctYnJlYWtwb2ludDogZm91bmQgNiBicmVha3BvaW50IGFuZCA0IHdhdGNocG9p
bnQgcmVnaXN0ZXJzLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuNTQ1NjA4XSBBU0lEIGFsbG9jYXRvciBpbml0aWFsaXNlZCB3aXRo
IDMyNzY4IGVudHJpZXM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjU1MTAzMF0geGVuOnN3aW90bGJfeGVuOiBXYXJuaW5nOiBvbmx5
IGFibGUgdG8gYWxsb2NhdGUgNCBNQiBmb3Igc29mdHdhcmUgSU8gVExCPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41NTkzMzJdIHNv
ZnR3YXJlIElPIFRMQjogbWFwcGVkIFttZW0gMHgwMDAwMDAwMDExODAwMDAwLTB4MDAwMDAwMDAx
MWMwMDAwMF0gKDRNQik8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjU4MzU2NV0gSHVnZVRMQiByZWdpc3RlcmVkIDEuMDAgR2lCIHBh
Z2Ugc2l6ZSwgcHJlLWFsbG9jYXRlZCAwIHBhZ2VzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41ODQ3MjFdIEh1Z2VUTEIgcmVnaXN0
ZXJlZCAzMi4wIE1pQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlczxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNTkxNDc4
XSBIdWdlVExCIHJlZ2lzdGVyZWQgMi4wMCBNaUIgcGFnZSBzaXplLCBwcmUtYWxsb2NhdGVkIDAg
cGFnZXM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAwLjU5ODIyNV0gSHVnZVRMQiByZWdpc3RlcmVkIDY0LjAgS2lCIHBhZ2Ugc2l6ZSwg
cHJlLWFsbG9jYXRlZCAwIHBhZ2VzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC42MzY1MjBdIERSQkc6IENvbnRpbnVpbmcgd2l0aG91
dCBKaXR0ZXIgUk5HPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC43MzcxODddIHJhaWQ2OiBuZW9ueDggwqAgZ2VuKCkgwqAyMTQzIE1C
L3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAwLjgwNTI5NF0gcmFpZDY6IG5lb254OCDCoCB4b3IoKSDCoDE1ODkgTUIvczxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuODcz
NDA2XSByYWlkNjogbmVvbng0IMKgIGdlbigpIMKgMjE3NyBNQi9zPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC45NDE0OTldIHJhaWQ2
OiBuZW9ueDQgwqAgeG9yKCkgwqAxNTU2IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjAwOTYxMl0gcmFpZDY6IG5lb254MiDC
oCBnZW4oKSDCoDIwNzIgTUIvczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuMDc3NzE1XSByYWlkNjogbmVvbngyIMKgIHhvcigpIMKg
MTQzMCBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMS4xNDU4MzRdIHJhaWQ2OiBuZW9ueDEgwqAgZ2VuKCkgwqAxNzY5IE1CL3M8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAxLjIxMzkzNV0gcmFpZDY6IG5lb254MSDCoCB4b3IoKSDCoDEyMTQgTUIvczxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuMjgyMDQ2
XSByYWlkNjogaW50NjR4OCDCoGdlbigpIMKgMTM2NiBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS4zNTAxMzJdIHJhaWQ2OiBp
bnQ2NHg4IMKgeG9yKCkgwqAgNzczIE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjQxODI1OV0gcmFpZDY6IGludDY0eDQgwqBn
ZW4oKSDCoDE2MDIgTUIvczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDEuNDg2MzQ5XSByYWlkNjogaW50NjR4NCDCoHhvcigpIMKgIDg1
MSBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMS41NTQ0NjRdIHJhaWQ2OiBpbnQ2NHgyIMKgZ2VuKCkgwqAxMzk2IE1CL3M8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAx
LjYyMjU2MV0gcmFpZDY6IGludDY0eDIgwqB4b3IoKSDCoCA3NDQgTUIvczxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuNjkwNjg3XSBy
YWlkNjogaW50NjR4MSDCoGdlbigpIMKgMTAzMyBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43NTg3NzBdIHJhaWQ2OiBpbnQ2
NHgxIMKgeG9yKCkgwqAgNTE3IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjc1ODgwOV0gcmFpZDY6IHVzaW5nIGFsZ29yaXRo
bSBuZW9ueDQgZ2VuKCkgMjE3NyBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43NjI5NDFdIHJhaWQ2OiAuLi4uIHhvcigpIDE1
NTYgTUIvcywgcm13IGVuYWJsZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjc2Nzk1N10gcmFpZDY6IHVzaW5nIG5lb24gcmVjb3Zl
cnkgYWxnb3JpdGhtPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMS43NzI4MjRdIHhlbjpiYWxsb29uOiBJbml0aWFsaXNpbmcgYmFsbG9v
biBkcml2ZXI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAxLjc3ODAyMV0gaW9tbXU6IERlZmF1bHQgZG9tYWluIHR5cGU6IFRyYW5zbGF0
ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAxLjc4MjU4NF0gaW9tbXU6IERNQSBkb21haW4gVExCIGludmFsaWRhdGlvbiBwb2xpY3k6
IHN0cmljdCBtb2RlPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMS43ODkxNDldIFNDU0kgc3Vic3lzdGVtIGluaXRpYWxpemVkPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43
OTI4MjBdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnM8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAx
Ljc5ODI1NF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWI8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAx
LjgwMzYyNl0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRyaXZlciB1c2I8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjgw
ODc2MV0gcHBzX2NvcmU6IExpbnV4UFBTIEFQSSB2ZXIuIDEgcmVnaXN0ZXJlZDxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuODEzNzE2
XSBwcHNfY29yZTogU29mdHdhcmUgdmVyLiA1LjMuNiAtIENvcHlyaWdodCAyMDA1LTIwMDcgUm9k
b2xmbyBHaW9tZXR0aSAmbHQ7PGEgaHJlZj0ibWFpbHRvOmdpb21ldHRpQGxpbnV4Lml0IiB0YXJn
ZXQ9Il9ibGFuayI+Z2lvbWV0dGlAbGludXguaXQ8L2E+PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86Z2lvbWV0dGlAbGludXguaXQiIHRhcmdldD0iX2JsYW5r
Ij5naW9tZXR0aUBsaW51eC5pdDwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuODIyOTAzXSBQVFAgY2xvY2sgc3Vw
cG9ydCByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMS44MjY4OTNdIEVEQUMgTUM6IFZlcjogMy4wLjA8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjgzMDM3
NV0genlucW1wLWlwaS1tYm94IG1haWxib3hAZmY5OTA0MDA6IFJlZ2lzdGVyZWQgWnlucU1QIElQ
SSBtYm94IHdpdGggVFgvUlggY2hhbm5lbHMuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS44Mzg4NjNdIHp5bnFtcC1pcGktbWJveCBt
YWlsYm94QGZmOTkwNjAwOiBSZWdpc3RlcmVkIFp5bnFNUCBJUEkgbWJveCB3aXRoIFRYL1JYIGNo
YW5uZWxzLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDEuODQ3MzU2XSB6eW5xbXAtaXBpLW1ib3ggbWFpbGJveEBmZjk5MDgwMDogUmVn
aXN0ZXJlZCBaeW5xTVAgSVBJIG1ib3ggd2l0aCBUWC9SWCBjaGFubmVscy48YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjg1NTkwN10g
RlBHQSBtYW5hZ2VyIGZyYW1ld29yazxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
> > > > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> > > > [    1.871712] NET: Registered PF_INET protocol family
> > > > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> > > > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> > > > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> > > > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> > > > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
> > > > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
> > > > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
> > > > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
> > > > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> > > > [    1.936834] RPC: Registered named UNIX socket transport module.
> > > > [    1.942342] RPC: Registered udp transport module.
> > > > [    1.947088] RPC: Registered tcp transport module.
> > > > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> > > > [    1.958334] PCI: CLS 0 bytes, default 64
> > > > [    1.962709] Trying to unpack rootfs image as initramfs...
> > > > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> > > > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
> > > > [    2.021045] NET: Registered PF_ALG protocol family
> > > > [    2.021122] xor: measuring software checksum speed
> > > > [    2.029347]    8regs           :  2366 MB/sec
> > > > [    2.033081]    32regs          :  2802 MB/sec
> > > > [    2.038223]    arm64_neon      :  2320 MB/sec
> > > > [    2.038385] xor: using function: 32regs (2802 MB/sec)
> > > > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
> > > > [    2.050959] io scheduler mq-deadline registered
> > > > [    2.055521] io scheduler kyber registered
> > > > [    2.068227] xen:xen_evtchn: Event-channel device installed
> > > > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> > > > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
> > > > [    2.085548] brd: module loaded
> > > > [    2.089290] loop: module loaded
> > > > [    2.089341] Invalid max_queues (4), will use default max: 2.
> > > > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> > > > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
> > > > [    2.104156] usbcore: registered new interface driver rtl8150
> > > > [    2.109813] usbcore: registered new interface driver r8152
> > > > [    2.115367] usbcore: registered new interface driver asix
> > > > [    2.120794] usbcore: registered new interface driver ax88179_178a
> > > > [    2.126934] usbcore: registered new interface driver cdc_ether
> > > > [    2.132816] usbcore: registered new interface driver cdc_eem
> > > > [    2.138527] usbcore: registered new interface driver net1080
> > > > [    2.144256] usbcore: registered new interface driver cdc_subset
> > > > [    2.150205] usbcore: registered new interface driver zaurus
> > > > [    2.155837] usbcore: registered new interface driver cdc_ncm
> > > > [    2.161550] usbcore: registered new interface driver r8153_ecm
> > > > [    2.168240] usbcore: registered new interface driver cdc_acm
> > > > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
> > > > [    2.181358] usbcore: registered new interface driver uas
> > > > [    2.186547] usbcore: registered new interface driver usb-storage
> > > > [    2.192643] usbcore: registered new interface driver ftdi_sio
> > > > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
> > > > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
> > > > [    2.215332] i2c_dev: i2c /dev entries driver
> > > > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> > > > [    2.225923] device-mapper: uevent: version 1.0.3
> > > > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
> > > > [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
> > > > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
> > > > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
> > > > [    2.267487] sdhci: Copyright(c) Pierre Ossman
> > > > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
> > > > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
> > > > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
> > > > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
> > > > [    2.327875] securefw securefw: securefw probed
> > > > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
> > > > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
> > > > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
> > > > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
> > > > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
> > > > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
> > > > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
> > > > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
> > > > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> > > > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
> > > > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
> > > > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
> > > > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
> > > > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
> > > > [    2.420856] default preset
> > > > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
> > > > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
> > > > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
> > > > [    2.441976] vmcu driver init
> > > > [    2.444922] VMCU: : (240:0) registered
> > > > [    2.444956] In K81 Updater init
> > > > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
> > > > [    2.468833] Initializing XFRM netlink socket
> > > > [    2.468902] NET: Registered PF_PACKET protocol family
> > > > [    2.472729] Bridge firewalling registered
> > > > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
> > > > [    2.481341] registered taskstats version 1
> > > > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
> > > > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
> > > > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
> > > > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
> > > > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
> > > > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> > > > [    2.952393] Creating 2 MTD partitions on "spi0.0":
> > > > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> > > > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> > > > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
> > > > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
> > > > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
> > > > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
> > > > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
> > > > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
> > > > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
> > > > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
> > > > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
> > > > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
> > > > [    3.045301] viper_enet viper_enet: Viper enet registered
> > > > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
> > > > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
> > > > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
> > > > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
> > > > [    3.097729] si70xx: probe of 2-0040 failed with error -5
> > > > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> > > > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> > > > [    3.112457] viper-tamper viper-tamper: Device registered
> > > > [    3.117593] active_bank active_bank: boot bank: 1
> > > > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> > > > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
> > > > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> > > > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
> > > > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> > > > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> > > > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
> > > > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> > > > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> > > > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> > > > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> > > > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> > > > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> > > > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> > > > [    3.215694] lpc55_l2 spi1.0: rx error: -110
> > > > [    3.284438] mmc0: new HS200 MMC card at address 0001
> > > > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> > > > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> > > > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> > > > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> > > > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> > > > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> > > > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
> > > > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
> > > > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> > > > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> > > > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> > > > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> > > > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> > > > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> > > > [    3.633253] lpc55_l2 spi1.0: rx error: -110
> > > > [    3.639104] k81_bootloader 0-0010: probe
> > > > [    3.641628] VMCU: : (235:0) registered
> > > > [    3.641635] k81_bootloader 0-0010: probe completed
> > > > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> > > > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> > > > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> > > > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> > > > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> > > > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> > > > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> > > > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> > > > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> > > > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> > > > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> > > > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> > > > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> > > > [    3.737549] sfp_register_socket: got sfp_bus
> > > > [    3.740709] sfp_register_socket: register sfp_bus
> > > > [    3.745459] sfp_register_bus: ops ok!
> > > > [    3.749179] sfp_register_bus: Try to attach
> > > > [    3.753419] sfp_register_bus: Attach succeeded
> > > > [    3.757914] sfp_register_bus: upstream ops attach
> > > > [    3.762677] sfp_register_bus: Bus registered
> > > > [    3.766999] sfp_register_socket: register sfp_bus succeeded
> > > > [    3.775870] of_cfs_init
> > > > [    3.776000] of_cfs_init: OK
> > > > [    3.778211] clk: Not disabling unused clocks
> > > > [   11.278477] Freeing initrd memory: 206056K
> > > > [   11.279406] Freeing unused kernel memory: 1536K
> > > > [   11.314006] Checked W+X mappings: passed, no W+X pages found
> > > > [   11.314142] Run /init as init process
> > > > INIT: version 3.01 booting
> > > > fsck (busybox 1.35.0)
> > > > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> > > > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> > > > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> > > > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> > > > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> > > > Starting random number generator daemon.
> > > > [   11.580662] random: crng init done
> > > > Starting udev
> > > > [   11.613159] udevd[142]: starting version 3.2.10
> > > > [   11.620385] udevd[143]: starting eudev-3.2.10
> > > > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> > > > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> > > > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> > > > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > > hwclock: RTC_RD_TIME: Invalid exchange
> > > > Mon Feb 27 08:40:53 UTC 2023
> > > > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> > > > hwclock: RTC_SET_TIME: Invalid exchange
> > > > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > > Starting mcud
> > > > INIT: Entering runlevel: 5
> > > > Configuring network interfaces... done.
> > > > resetting network interface
> > > > [   12.718295] macb ff0b0000.ethernet c
b250cm9sX3JlZDogUEhZIFtmZjBiMDAwMC5ldGhlcm5ldC1mZmZmZmZmZjowMl0gZHJpdmVyIFtY
aWxpbng8YnI+DQomZ3Q7wqAgwqAgwqAgwqBQQ1MvUE1BIFBIWV0gKGlycT1QT0xMKTxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMi43MjM5
MTldIG1hY2IgZmYwYjAwMDAuZXRoZXJuZXQgY29udHJvbF9yZWQ6IGNvbmZpZ3VyaW5nIGZvciBw
aHkvZ21paSBsaW5rIG1vZGU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgMTIuNzMyMTUxXSBwcHMgcHBzMDogbmV3IFBQUyBzb3VyY2UgcHRw
MDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCAxMi43MzU1NjNdIG1hY2IgZmYwYjAwMDAuZXRoZXJuZXQ6IGdlbS1wdHAtdGltZXIgcHRwIGNs
b2NrIHJlZ2lzdGVyZWQuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDEyLjc0NTcyNF0gbWFjYiBmZjBjMDAwMC5ldGhlcm5ldCBjb250cm9s
X2JsYWNrOiBQSFkgW2ZmMGMwMDAwLmV0aGVybmV0LWZmZmZmZmZmOjAxXSBkcml2ZXIgW1hpbGlu
eDxicj4NCiZndDvCoCDCoCDCoCDCoFBDUy9QTUEgUEhZXTxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoChpcnE9UE9MTCk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTIuNzUzNDY5XSBtYWNiIGZmMGMw
MDAwLmV0aGVybmV0IGNvbnRyb2xfYmxhY2s6IGNvbmZpZ3VyaW5nIGZvciBwaHkvZ21paSBsaW5r
IG1vZGU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgMTIuNzYxODA0XSBwcHMgcHBzMTogbmV3IFBQUyBzb3VyY2UgcHRwMTxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMi43NjUzOThd
IG1hY2IgZmYwYzAwMDAuZXRoZXJuZXQ6IGdlbS1wdHAtdGltZXIgcHRwIGNsb2NrIHJlZ2lzdGVy
ZWQuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBB
dXRvLW5lZ290aWF0aW9uOiBvZmY8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IEF1dG8tbmVnb3RpYXRpb246IG9mZjxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNi44MjgxNTFdIG1hY2IgZmYw
YjAwMDAuZXRoZXJuZXQgY29udHJvbF9yZWQ6IHVuYWJsZSB0byBnZW5lcmF0ZSB0YXJnZXQgZnJl
cXVlbmN5OiAxMjUwMDAwMDAgSHo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTYuODM0NTUzXSBtYWNiIGZmMGIwMDAwLmV0aGVybmV0IGNv
bnRyb2xfcmVkOiBMaW5rIGlzIFVwIC0gMUdicHMvRnVsbCAtIGZsb3cgY29udHJvbCBvZmY8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTYu
ODYwNTUyXSBtYWNiIGZmMGMwMDAwLmV0aGVybmV0IGNvbnRyb2xfYmxhY2s6IHVuYWJsZSB0byBn
ZW5lcmF0ZSB0YXJnZXQgZnJlcXVlbmN5OiAxMjUwMDAwMDAgSHo8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTYuODY3MDUyXSBtYWNiIGZm
MGMwMDAwLmV0aGVybmV0IGNvbnRyb2xfYmxhY2s6IExpbmsgaXMgVXAgLSAxR2Jwcy9GdWxsIC0g
ZmxvdyBjb250cm9sIG9mZjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgU3RhcnRpbmcgRmFpbHNhZmUgU2VjdXJlIFNoZWxsIHNlcnZlciBpbiBwb3J0
IDIyMjI6IHNzaGQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IGRvbmUuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBTdGFydGluZyBycGNiaW5kIGRhZW1vbi4uLmRvbmUuPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy4wOTMwMTldIHJ0Yy1scGM1NSBy
dGNfbHBjNTU6IGxwYzU1X3J0Y19nZXRfdGltZTogYmFkIHJlc3VsdDogMTxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgaHdjbG9jazogUlRDX1JEX1RJ
TUU6IEludmFsaWQgZXhjaGFuZ2U8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFN0YXJ0aW5nIFN0YXRlIE1hbmFnZXIgU2VydmljZTxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgU3RhcnQgc3RhdGUtbWFu
YWdlciByZXN0YXJ0ZXIuLi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIGQwdjEgRm9yd2FyZGluZyBBRVMgb3BlcmF0aW9uOiAzMjU0Nzc5
OTUxPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBT
dGFydGluZyAvdXNyL3NiaW4veGVuc3RvcmVkLi4uLlsgwqAgMTcuMjY1MjU2XSBCVFJGUzogZGV2
aWNlIGZzaWQgODBlZmMyMjQtYzIwMi00ZjhlLWE5NDktNGRhZTdmMDRhMGFhPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgZGV2aWQgMSB0cmFuc2lkIDc0NDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoC9kZXYvZG0tMDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgc2Nhbm5lZCBieSB1ZGV2ZCAoMzg1KTxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy4zNDk5MzNd
IEJUUkZTIGluZm8gKGRldmljZSBkbS0wKTogZGlzayBzcGFjZSBjYWNoaW5nIGlzIGVuYWJsZWQ8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
MTcuMzUwNjcwXSBCVFJGUyBpbmZvIChkZXZpY2UgZG0tMCk6IGhhcyBza2lubnkgZXh0ZW50czxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
Ny4zNjQzODRdIEJUUkZTIGluZm8gKGRldmljZSBkbS0wKTogZW5hYmxpbmcgc3NkIG9wdGltaXph
dGlvbnM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgMTcuODMwNDYyXSBCVFJGUzogZGV2aWNlIGZzaWQgMjdmZjY2NmItZjRlNS00ZjkwLTkw
NTQtYzIxMGRiNWIyZTJlIGRldmlkIDEgdHJhbnNpZCA2PGJyPg0KJmd0O8KgIMKgIMKgIMKgL2Rl
di9tYXBwZXIvY2xpZW50X3Byb3Ygc2Nhbm5lZCBieTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoG1rZnMuYnRyZnM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7ICg1MjYpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE3Ljg3MjY5OV0gQlRSRlMgaW5mbyAo
ZGV2aWNlIGRtLTEpOiB1c2luZyBmcmVlIHNwYWNlIHRyZWU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTcuODcyNzcxXSBCVFJGUyBpbmZv
IChkZXZpY2UgZG0tMSk6IGhhcyBza2lubnkgZXh0ZW50czxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy44NzgxMTRdIEJUUkZTIGluZm8g
KGRldmljZSBkbS0xKTogZmxhZ2dpbmcgZnMgd2l0aCBiaWcgbWV0YWRhdGEgZmVhdHVyZTxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy44
OTQyODldIEJUUkZTIGluZm8gKGRldmljZSBkbS0xKTogZW5hYmxpbmcgc3NkIG9wdGltaXphdGlv
bnM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgMTcuODk1Njk1XSBCVFJGUyBpbmZvIChkZXZpY2UgZG0tMSk6IGNoZWNraW5nIFVVSUQgdHJl
ZTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFNldHRpbmcg
ZG9tYWluIDAgbmFtZSwgZG9taWQgYW5kIEpTT04gY29uZmlnLi4uPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBEb25lIHNldHRpbmcgdXAgRG9tMDxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgU3RhcnRp
bmcgeGVuY29uc29sZWQuLi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFN0YXJ0aW5nIFFFTVUgYXMgZGlzayBiYWNrZW5kIGZvciBkb20wPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBTdGFydGluZyBk
b21haW4gd2F0Y2hkb2cgZGFlbW9uOiB4ZW53YXRjaGRvZ2Qgc3RhcnR1cDxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTguNDA4NjQ3XSBCVFJGUzog
ZGV2aWNlIGZzaWQgNWUwOGQ1ZTktYmMyYS00NmI5LWFmNmEtNDRjNzA4N2I4OTIxIGRldmlkIDEg
dHJhbnNpZCA2PGJyPg0KJmd0O8KgIMKgIMKgIMKgL2Rldi9tYXBwZXIvY2xpZW50X2NvbmZpZyBz
Y2FubmVkIGJ5PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
bWtmcy5idHJmczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKDU3NCk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFtkb25lXTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCAxOC40NjU1NTJdIEJUUkZTIGluZm8gKGRldmljZSBkbS0yKTogdXNpbmcg
ZnJlZSBzcGFjZSB0cmVlPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDE4LjQ2NTYyOV0gQlRSRlMgaW5mbyAoZGV2aWNlIGRtLTIpOiBoYXMg
c2tpbm55IGV4dGVudHM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgMTguNDcxMDAyXSBCVFJGUyBpbmZvIChkZXZpY2UgZG0tMik6IGZsYWdn
aW5nIGZzIHdpdGggYmlnIG1ldGFkYXRhIGZlYXR1cmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFN0YXJ0aW5nIGNyb25kOiBbIMKgIDE4LjQ4MjM3
MV0gQlRSRlMgaW5mbyAoZGV2aWNlIGRtLTIpOiBlbmFibGluZyBzc2Qgb3B0aW1pemF0aW9uczxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
OC40ODY2NTldIEJUUkZTIGluZm8gKGRldmljZSBkbS0yKTogY2hlY2tpbmcgVVVJRCB0cmVlPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBPSzxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgc3RhcnRpbmcg
cnN5c2xvZ2QgLi4uIExvZyBwYXJ0aXRpb24gcmVhZHkgYWZ0ZXIgMCBwb2xsIGxvb3BzPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBkb25lPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyByc3lzbG9nZDog
Y2Fubm90IGNvbm5lY3QgdG8gPGEgaHJlZj0iaHR0cDovLzE3Mi4xOC4wLjE6NTE0IiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj4xNzIuMTguMC4xOjUxNDwvYT4gJmx0OzxhIGhyZWY9
Imh0dHA6Ly8xNzIuMTguMC4xOjUxNCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+
aHR0cDovLzE3Mi4xOC4wLjE6NTE0PC9hPiZndDs6IE5ldHdvcmsgaXMgdW5yZWFjaGFibGUgW3Y4
LjIyMDguMCB0cnk8YnI+DQomZ3Q7wqAgwqAgwqAgwqA8YSBocmVmPSJodHRwczovL3d3dy5yc3lz
bG9nLmNvbS9lLzIwMjciIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
d3d3LnJzeXNsb2cuY29tL2UvMjAyNzwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vd3d3LnJzeXNs
b2cuY29tL2UvMjAyNyIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly93
d3cucnN5c2xvZy5jb20vZS8yMDI3PC9hPiZndDsgXTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxOC42NzA2MzddIEJUUkZTOiBkZXZpY2Ug
ZnNpZCAzOWQ3ZDllMS05NjdkLTQ3OGUtOTRhZS02OTBkZWI3MjIwOTUgZGV2aWQgMSB0cmFuc2lk
IDYwOCAvZGV2L2RtLTM8YnI+DQomZ3Q7wqAgwqAgwqAgwqBzY2FubmVkIGJ5IHVkZXZkICg1MTgp
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgUGxlYXNlIGlu
c2VydCBVU0IgdG9rZW4gYW5kIGVudGVyIHlvdXIgcm9sZSBpbiBsb2dpbiBwcm9tcHQuPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgbG9naW46PGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgUmVnYXJkcyw8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE8uPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7INC/0L0sIDI0INCw0L/RgC4gMjAyM+KAr9Cz
LiDQsiAyMzozOSwgU3RlZmFubyBTdGFiZWxsaW5pICZsdDs8YSBocmVmPSJtYWlsdG86c3N0YWJl
bGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8
L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRh
cmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7Ojxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEhp
IE9sZWcsPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoEhlcmUgaXMgdGhlIGlzc3VlIGZyb20geW91ciBsb2dzOjxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTRXJyb3IgSW50ZXJydXB0
IG9uIENQVTAsIGNvZGUgMHhiZTAwMDAwMCAtLSBTRXJyb3I8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU0Vycm9ycyBhcmUgc3BlY2lhbCBz
aWduYWxzIHRvIG5vdGlmeSBzb2Z0d2FyZSBvZiBzZXJpb3VzIGhhcmR3YXJlPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZXJyb3Jz
LsKgIFNvbWV0aGluZyBpcyBnb2luZyB2ZXJ5IHdyb25nLiBEZWZlY3RpdmUgaGFyZHdhcmUgaXMg
YTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoHBvc3NpYmlsaXR5LsKgIEFub3RoZXIgcG9zc2liaWxpdHkgaWYgc29mdHdhcmUgYWNj
ZXNzaW5nIGFkZHJlc3MgcmFuZ2VzPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdGhhdCBpdCBpcyBub3Qgc3VwcG9zZWQgdG8sIHNv
bWV0aW1lcyBpdCBjYXVzZXMgU0Vycm9ycy48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgQ2hlZXJzLDxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTdGVmYW5vPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgT24gTW9uLCAyNCBBcHIgMjAyMywgT2xl
ZyBOaWtpdGVua28gd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSGVsbG8sPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgVGhhbmtz
IGd1eXMuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBJIGZvdW5kIG91dCB3aGVyZSB0aGUgcHJvYmxlbSB3YXMuPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBOb3cgZG9tMCBib290ZWQgbW9yZS4gQnV0IEkgaGF2ZSBhIG5ldyBvbmUuPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBUaGlzIGlzIGEga2VybmVsIHBhbmljIGR1cmluZyBEb20wIGxvYWRpbmcuPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBN
YXliZSBzb21lb25lIGlzIGFibGUgdG8gc3VnZ2VzdCBzb21ldGhpbmcgPzxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBPLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAz
Ljc3MTM2Ml0gc2ZwX3JlZ2lzdGVyX2J1czogdXBzdHJlYW0gb3BzIGF0dGFjaDxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDMuNzc2MTE5XSBzZnBfcmVnaXN0ZXJfYnVzOiBCdXMgcmVnaXN0ZXJlZDxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDMuNzgwNDU5XSBzZnBfcmVnaXN0ZXJfc29ja2V0OiByZWdpc3RlciBzZnBfYnVz
IHN1Y2NlZWRlZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzg5Mzk5XSBvZl9jZnNfaW5pdDxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDMuNzg5NDk5XSBvZl9jZnNfaW5pdDogT0s8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjc5
MTY4NV0gY2xrOiBOb3QgZGlzYWJsaW5nIHVudXNlZCBjbG9ja3M8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEu
MDEwMzU1XSBTRXJyb3IgSW50ZXJydXB0IG9uIENQVTAsIGNvZGUgMHhiZTAwMDAwMCAtLSBTRXJy
b3I8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwMzgwXSBDUFU6IDAgUElEOiA5IENvbW06IGt3b3JrZXIv
dTQ6MCBOb3QgdGFpbnRlZCA1LjE1LjcyLXhpbGlueC12MjAyMi4xICMxPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IDExLjAxMDM5M10gV29ya3F1ZXVlOiBldmVudHNfdW5ib3VuZCBhc3luY19ydW5fZW50cnlfZm48
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDE0XSBwc3RhdGU6IDYwMDAwMDA1IChuWkN2IGRhaWYgLVBB
TiAtVUFPIC1UQ08gLURJVCAtU1NCUyBCVFlQRT0tLSk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDIy
XSBwYyA6IHNpbXBsZV93cml0ZV9lbmQrMHhkMC8weDEzMDxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA0
MzFdIGxyIDogZ2VuZXJpY19wZXJmb3JtX3dyaXRlKzB4MTE4LzB4MWUwPGJyPg0KJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IDExLjAxMDQzOF0gc3AgOiBmZmZmZmZjMDA4MDliOTEwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ0
MV0geDI5OiBmZmZmZmZjMDA4MDliOTEwIHgyODogMDAwMDAwMDAwMDAwMDAwMCB4Mjc6IGZmZmZm
ZmVmNjliYTg4YzA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDUxXSB4MjY6IDAwMDAwMDAwMDAwMDNl
ZWMgeDI1OiBmZmZmZmY4MDc1MTVkYjAwIHgyNDogMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCAxMS4wMTA0NTldIHgyMzogZmZmZmZmYzAwODA5YmE5MCB4MjI6IDAwMDAwMDAwMDJhYWMw
MDAgeDIxOiBmZmZmZmY4MDczMTVhMjYwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ3Ml0geDIwOiAw
MDAwMDAwMDAwMDAxMDAwIHgxOTogZmZmZmZmZmUwMjAwMDAwMCB4MTg6IDAwMDAwMDAwMDAwMDAw
MDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDgxXSB4MTc6IDAwMDAwMDAwZmZmZmZmZmYgeDE2OiAw
MDAwMDAwMDAwMDA4MDAwIHgxNTogMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4w
MTA0OTBdIHgxNDogMDAwMDAwMDAwMDAwMDAwMCB4MTM6IDAwMDAwMDAwMDAwMDAwMDAgeDEyOiAw
MDAwMDAwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ5OF0geDExOiAwMDAwMDAwMDAw
MDAwMDAwIHgxMDogMDAwMDAwMDAwMDAwMDAwMCB4OSA6IDAwMDAwMDAwMDAwMDAwMDA8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgMTEuMDEwNTA3XSB4OCA6IDAwMDAwMDAwMDAwMDAwMDAgeDcgOiBmZmZmZmZlZjY5
M2JhNjgwIHg2IDogMDAwMDAwMDAyZDg5YjcwMDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1MTVdIHg1
IDogZmZmZmZmZmUwMjAwMDAwMCB4NCA6IGZmZmZmZjgwNzMxNWEzYzggeDMgOiAwMDAwMDAwMDAw
MDAxMDAwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDUyNF0geDIgOiAwMDAwMDAwMDAyYWFiMDAwIHgx
IDogMDAwMDAwMDAwMDAwMDAwMSB4MCA6IDAwMDAwMDAwMDAwMDAwMDU8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
MTEuMDEwNTM0XSBLZXJuZWwgcGFuaWMgLSBub3Qgc3luY2luZzogQXN5bmNocm9ub3VzIFNFcnJv
ciBJbnRlcnJ1cHQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTM5XSBDUFU6IDAgUElEOiA5IENvbW06
IGt3b3JrZXIvdTQ6MCBOb3QgdGFpbnRlZCA1LjE1LjcyLXhpbGlueC12MjAyMi4xICMxPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIDExLjAxMDU0NV0gSGFyZHdhcmUgbmFtZTogRDE0IFZpcGVyIEJvYXJkIC0gV2hp
dGUgVW5pdCAoRFQpPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDU0OF0gV29ya3F1ZXVlOiBldmVudHNf
dW5ib3VuZCBhc3luY19ydW5fZW50cnlfZm48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTU2XSBDYWxs
IHRyYWNlOjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1NThdIMKgZHVtcF9iYWNrdHJhY2UrMHgwLzB4
MWM0PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDU2N10gwqBzaG93X3N0YWNrKzB4MTgvMHgyYzxicj4N
CiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxMS4wMTA1NzRdIMKgZHVtcF9zdGFja19sdmwrMHg3Yy8weGEwPGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIDExLjAxMDU4M10gwqBkdW1wX3N0YWNrKzB4MTgvMHgzNDxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
MS4wMTA1ODhdIMKgcGFuaWMrMHgxNGMvMHgyZjg8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTk3XSDC
oHByaW50X3RhaW50ZWQrMHgwLzB4YjA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjA2XSDCoGFybTY0
X3NlcnJvcl9wYW5pYysweDZjLzB4N2M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjE0XSDCoGRvX3Nl
cnJvcisweDI4LzB4NjA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjIxXSDCoGVsMWhfNjRfZXJyb3Jf
aGFuZGxlcisweDMwLzB4NTA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjI4XSDCoGVsMWhfNjRfZXJy
b3IrMHg3OC8weDdjPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDYzM10gwqBzaW1wbGVfd3JpdGVfZW5k
KzB4ZDAvMHgxMzA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjM5XSDCoGdlbmVyaWNfcGVyZm9ybV93
cml0ZSsweDExOC8weDFlMDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2NDRdIMKgX19nZW5lcmljX2Zp
bGVfd3JpdGVfaXRlcisweDEzOC8weDFjNDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2NTBdIMKgZ2Vu
ZXJpY19maWxlX3dyaXRlX2l0ZXIrMHg3OC8weGQwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDY1Nl0g
wqBfX2tlcm5lbF93cml0ZSsweGZjLzB4MmFjPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDY2NV0gwqBr
ZXJuZWxfd3JpdGUrMHg4OC8weDE2MDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2NzNdIMKgeHdyaXRl
KzB4NDQvMHg5NDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2ODBdIMKgZG9fY29weSsweGE4LzB4MTA0
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDExLjAxMDY4Nl0gwqB3cml0ZV9idWZmZXIrMHgzOC8weDU4PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIDExLjAxMDY5Ml0gwqBmbHVzaF9idWZmZXIrMHg0Yy8weGJjPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIDExLjAxMDY5OF0gwqBfX2d1bnppcCsweDI4MC8weDMxMDxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4w
MTA3MDRdIMKgZ3VuemlwKzB4MWMvMHgyODxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3MDldIMKgdW5w
YWNrX3RvX3Jvb3RmcysweDE3MC8weDJiMDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3MTVdIMKgZG9f
cG9wdWxhdGVfcm9vdGZzKzB4ODAvMHgxNjQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzIyXSDCoGFz
eW5jX3J1bl9lbnRyeV9mbisweDQ4LzB4MTY0PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDcyOF0gwqBw
cm9jZXNzX29uZV93b3JrKzB4MWU0LzB4M2EwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDczNl0gwqB3
b3JrZXJfdGhyZWFkKzB4N2MvMHg0YzA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzQzXSDCoGt0aHJl
YWQrMHgxMjAvMHgxMzA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzUwXSDCoHJldF9mcm9tX2Zvcmsr
MHgxMC8weDIwPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDc1N10gU01QOiBzdG9wcGluZyBzZWNvbmRh
cnkgQ1BVczxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3ODRdIEtlcm5lbCBPZmZzZXQ6IDB4MmY2MTIw
MDAwMCBmcm9tIDB4ZmZmZmZmYzAwODAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3ODhdIFBI
WVNfT0ZGU0VUOiAweDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzkwXSBDUFUgZmVhdHVyZXM6IDB4
MDAwMDA0MDEsMDAwMDA4NDI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzk1XSBNZW1vcnkgTGltaXQ6
IG5vbmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMjc3NTA5XSAtLS1bIGVuZCBLZXJuZWwgcGFuaWMgLSBu
b3Qgc3luY2luZzogQXN5bmNocm9ub3VzIFNFcnJvciBJbnRlcnJ1cHQgXS0tLTxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7INC/0YIsIDIxINCw0L/RgC4gMjAyM+KAr9CzLiDQsiAxNTo1MiwgTWljaGFsIE9y
emVsICZsdDs8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2Js
YW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86
bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNv
bTwvYT4mZ3Q7Jmd0Ozo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBIaSBPbGVnLDxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqBPbiAyMS8wNC8yMDIzIDE0OjQ5LCBPbGVnIE5pa2l0ZW5rbyB3cm90
ZTo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqA8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
SGVsbG8gTWljaGFsLDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IEkgd2FzIG5vdCBhYmxlIHRvIGVuYWJsZSBlYXJseXByaW50ayBpbiB0aGUgeGVuIGZv
ciBub3cuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBJIGRlY2lkZWQgdG8gY2hvb3NlIGFub3Ro
ZXIgd2F5Ljxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgVGhpcyBpcyBhIHhlbiYjMzk7cyBjb21t
YW5kIGxpbmUgdGhhdCBJIGZvdW5kIG91dCBjb21wbGV0ZWx5Ljxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pICQkJCQgY29uc29sZT1kdHVhcnQg
ZHR1YXJ0PXNlcmlhbDAgZG9tMF9tZW09MTYwME0gZG9tMF9tYXhfdmNwdXM9MiBkb20wX3ZjcHVz
X3Bpbjxicj4NCiZndDvCoCDCoCDCoCDCoGJvb3RzY3J1Yj0wPGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdndmaT1uYXRpdmU8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBzY2hlZD1udWxsPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgdGltZXJfc2xvcD0wPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgWWVzLCBh
ZGRpbmcgYSBwcmludGsoKSBpbiBYZW4gd2FzIGFsc28gYSBnb29kIGlkZWEuPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFNvIHlvdSBh
cmUgYWJzb2x1dGVseSByaWdodCBhYm91dCBhIGNvbW1hbmQgbGluZS48YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IE5vdyBJIGFtIGdvaW5nIHRvIGZpbmQgb3V0IHdoeSB4ZW4gZGlkIG5vdCBoYXZl
IHRoZSBjb3JyZWN0IHBhcmFtZXRlcnMgZnJvbSB0aGUgZGV2aWNlPGJyPg0KJmd0O8KgIMKgIMKg
IMKgdHJlZS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBNYXliZSB5b3Ugd2lsbCBmaW5kIHRoaXMgZG9j
dW1lbnQgaGVscGZ1bDo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqA8YSBocmVmPSJodHRwczovL2dpdGh1
Yi5jb20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZp
Y2UtdHJlZS9ib290aW5nLnR4dCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0
cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2L2RvY3MvbWlz
Yy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQ8L2E+PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmx0
OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2IveGxueF9yZWJhc2Vf
> > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> >
> > ~Michal
>
> Regards,
> Oleg
>
> On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
> >
> > On 21/04/2023 10:04, Oleg Nikitenko wrote:
> > > Hello Michal,
> > >
> > > Yes, I use yocto.
> > > Yesterday all day long I tried to follow your suggestions.
> > > I faced a problem.
> > > Manually in the xen config build file I pasted the strings:
> >
> > In the .config file or in some Yocto file (listing additional Kconfig options) added
> > to SRC_URI?
> > You shouldn't really modify .config file but if you do, you should execute "make
> > olddefconfig" afterwards.
> >
> > > CONFIG_EARLY_PRINTK
> > > CONFIG_EARLY_PRINTK_ZYNQMP
> > > CONFIG_EARLY_UART_CHOICE_CADENCE
> >
> > I hope you added =y to them.
> >
> > Anyway, you have at least the following solutions:
> > 1) Run bitbake xen -c menuconfig to properly set early printk
> > 2) Find out how you enable other Kconfig options in your project (e.g.
> >    CONFIG_COLORING=y that is not enabled by default)
> > 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
> >    CONFIG_EARLY_PRINTK_ZYNQMP=y
> >
> > ~Michal
> >
> > > Host hangs in build time.
> > > Maybe I did not set something in the config build file ?
> > >
> > > Regards,
> > > Oleg
> > >
> > > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > Thanks Michal,
> > > > You gave me an idea.
> > > > I am going to try it today.
> > > >
> > > > Regards,
> > > > O.
> > > >
> > > > On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > Thanks Stefano.
> > > > > I am going to do it today.
> > > > >
> > > > > Regards,
> > > > > O.
> > > > >
> > > > > On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > Hi Michal,
> > > > > > >
> > > > > > > I corrected xen's command line.
> > > > > > > Now it is
> > > > > > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
> > > > > > > dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> > > > > > > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> > > > > >
> > > > > > 4 colors is way too many for xen, just do xen_colors=0-0. There is no
> > > > > > advantage in using more than 1 color for Xen.
> > > > > >
> > > > > > 4 colors is too few for dom0, if you are giving 1600M of memory to
> > > > > > Dom0. Each color is 256M. For 1600M you should give at least 7 colors. Try:
> > > > > >
> > > > > > xen_colors=0-0 dom0_colors=1-8
> > > > > >
> > > > > > > Unfortunately the result was the same.
> > > > > > >
> > > > > > > (XEN)  - Dom0 mode: Relaxed
> > > > > > > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> > > > > > > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> > > > > > > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> > > > > > > (XEN) Coloring general information
> > > > > > > (XEN) Way size: 64kB
> > > > > > > (XEN) Max. number of colors available: 16
> > > > > > > (XEN) Xen color(s): [ 0 ]
> > > > > > > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> > > > > > > (XEN) Color array allocation failed for dom0
> > > > > > > (XEN)
> > > > > > > (XEN) ****************************************
> > > > > > > (XEN) Panic on CPU 0:
> > > > > > > (XEN) Error creating domain 0
> > > > > > > (XEN) ****************************************
> > > > > > > (XEN)
> > > > > > > (XEN) Reboot in five seconds...
> > > > > > >
> > > > > > > I am going to find out how command line arguments passed and parsed.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Oleg
> > > > > > >
> > > > > > > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > > > Hi Michal,
> > > > > > > >
> > > > > > > > You put my nose into the problem. Thank you.
> > > > > > > > I am going to use your point.
> > > > > > > > Let's see what happens.
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Oleg
> > > > > > > >
> > > > > > > > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > > > > > > Hi Oleg,
> > > > > > > > >
> > > > > > > > > On 19/04/2023 09:03, Oleg Nikitenko wrote:
> > > > > > > > > > Hello Stefano,
> > > > > > > > > >
> > > > > > > > > > Thanks for the clarification.
> > > > > > > > > > My company uses yocto for image generation.
> > > > > > > > > > What kind of information do you need to consult me in this case ?
> > > > > > > > > > Maybe modules sizes/addresses which were mentioned by @Julien Grall
> > > > > > > > > > <julien@xen.org> ?
> > > > > > > > >
> > > > > > > > > Sorry for jumping into discussion, but FWICS the Xen command line you
> > > > > > > > > provided seems to be not the one Xen booted with. The error you are
> > > > > > > > > observing most likely is due to dom0 colors configuration not being
> > > > > > > > > specified (i.e. lack of dom0_colors=<> parameter). Although in the
> > > > > > > > > command line you provided, this parameter is set, I strongly doubt
> > > > > > > > > that this is the actual command line in use.
> > > > > > > > >
> > > > > > > > > You wrote:
> > > > > > > > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
> > > > > > > > > dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> > > > > > > > > timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> > > > > > > > >
> > > > > > > > > but:
> > > > > > > > > 1) way_szize has a typo
oCDCoCDCoDIpIHlvdSBzcGVjaWZpZWQgNCBjb2xvcnMgKDAtMykgZm9yIFhlbiwgYnV0IHRoZSBi
b290IGxvZyBzYXlzPGJyPg0KJmd0O8KgIMKgIMKgIMKgdGhhdCBYZW4gaGFzIG9ubHk8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBvbmU6PGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgKFhFTikgWGVuIGNvbG9yKHMpOiBbIDAgXTxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBUaGlzIG1ha2Vz
IG1lIGJlbGlldmUgdGhhdCBubyBjb2xvcnMgY29uZmlndXJhdGlvbiBhY3R1YWxseSBlbmQ8YnI+
DQomZ3Q7wqAgwqAgwqAgwqB1cCBpbiBjb21tYW5kIGxpbmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB0aGF0IFhlbjxicj4NCiZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJvb3RlZDxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoHdpdGguPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU2luZ2xlIGNvbG9yIGZvciBYZW4gaXMgYSAm
cXVvdDtkZWZhdWx0IGlmIG5vdCBzcGVjaWZpZWQmcXVvdDsgYW5kIHdheTxicj4NCiZndDvCoCDC
oCDCoCDCoHNpemUgd2FzIHByb2JhYmx5PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgY2FsY3VsYXRlZDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJ5IGFza2luZzxicj4NCiZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoEhXLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTbyBJIHdvdWxkIHN1Z2dlc3QgdG8gZmlyc3QgY3Jv
c3MtY2hlY2sgdGhlIGNvbW1hbmQgbGluZSBpbjxicj4NCiZndDvCoCDCoCDCoCDCoHVzZS48YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgfk1pY2hhbDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgUmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE9sZWc8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDQstGCLCAxOCDQsNC/0YAuIDIwMjPigK/Qsy4g0LIg
MjA6NDQsIFN0ZWZhbm8gU3RhYmVsbGluaTxicj4NCiZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVm
PSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVs
bGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwv
YT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFy
Z2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwu
b3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsi
PnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZzwvYT48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZzwvYT4mZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpz
c3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9y
ZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFu
ayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdl
dD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxp
bmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9h
Pjxicj4NCiZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9h
PiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqBPbiBUdWUsIDE4IEFwciAyMDIzLCBPbGVnIE5pa2l0ZW5rbyB3cm90ZTo8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IEhpIEp1bGllbiw8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyAmZ3Q7Jmd0OyBUaGlz
IGZlYXR1cmUgaGFzIG5vdCBiZWVuIG1lcmdlZCBpbiBYZW4gdXBzdHJlYW0geWV0PGJyPg0KJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgJmd0
OyB3b3VsZCBhc3N1bWUgdGhhdCB1cHN0cmVhbSArIHRoZSBzZXJpZXMgb24gdGhlIE1MIFsxXTxi
cj4NCiZndDvCoCDCoCDCoCDCoHdvcms8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBQbGVhc2UgY2xhcmlmeSB0aGlzIHBvaW50Ljxi
cj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgQmVjYXVzZSB0aGUgdHdvIHRob3VnaHRzIGFy
ZSBjb250cm92ZXJzaWFsLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqBIaSBPbGVnLDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqBBcyBKdWxpZW4gd3JvdGUsIHRoZXJlIGlzIG5vdGhpbmcgY29udHJvdmVyc2lhbC4gQXMgeW91
PGJyPg0KJmd0O8KgIMKgIMKgIMKgYXJlIGF3YXJlLDxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oFhpbGlueCBtYWludGFpbnMgYSBzZXBhcmF0ZSBYZW4gdHJlZSBzcGVjaWZpYyBmb3IgWGlsaW54
PGJyPg0KJmd0O8KgIMKgIMKgIMKgaGVyZTo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqA8YSBo
cmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJn
ZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+PGJyPg0KJmd0O8Kg
IMKgIMKgIMKgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hl
bjwvYT4mZ3Q7ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVs
PSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW48L2E+ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJu
b3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48
L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94
aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4i
IHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxp
bngveGVuPC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9n
aXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBz
Oi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3Jl
ZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+
ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVy
cmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0
OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZl
cnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPjxi
cj4NCiZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54
L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW48L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0
aHViLmNvbS94aWxpbngveGVuPC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJl
Zj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0
PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJlZj0i
aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJf
YmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsmZ3Q7Jmd0OyZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgYW5kIHRoZSBicmFu
Y2ggeW91IGFyZSB1c2luZyAoeGxueF9yZWJhc2VfNC4xNikgY29tZXM8YnI+DQomZ3Q7wqAgwqAg
wqAgwqBmcm9tIHRoZXJlLjxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgSW5zdGVhZCwgdGhlIHVwc3RyZWFtIFhlbiB0cmVlIGxp
dmVzIGhlcmU6PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgPGEgaHJlZj0iaHR0cHM6Ly94ZW5i
aXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIi
IHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdp
dDthPXN1bW1hcnk8L2E+PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmx0OzxhIGhyZWY9Imh0dHBzOi8v
eGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVy
cmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5PC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0i
aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3
ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmx0OzxhIGhy
ZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIg
cmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcv
Z2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZndDsmZ3Q7ICZsdDs8YSBocmVmPSJodHRw
czovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9y
ZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT48YnI+DQomZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0i
aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3
ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDs8
YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1h
cnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4u
b3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT48YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
bHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMu
eGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0OyZndDsmZ3Q7ICZsdDs8
YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1h
cnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4u
b3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT48YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
bHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMu
eGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVu
LmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
eGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT48YnI+DQomZ3Q7
wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9w
PXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRw
czovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0OyZn
dDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0
O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5i
aXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPjxicj4NCiZndDvCoCDC
oCDCoCDCoCZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVu
LmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
eGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4mZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdl
Yi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+
aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPjxi
cj4NCiZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxh
bmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwv
YT4mZ3Q7Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBUaGUgQ2FjaGUgQ29sb3JpbmcgZmVhdHVyZSB0aGF0
IHlvdSBhcmUgdHJ5aW5nIHRvPGJyPg0KJmd0O8KgIMKgIMKgIMKgY29uZmlndXJlIGlzIHByZXNl
bnQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBpbiB4bG54X3JlYmFzZV80LjE2LCBidXQgbm90
IHlldCBwcmVzZW50IHVwc3RyZWFtICh0aGVyZTxicj4NCiZndDvCoCDCoCDCoCDCoGlzIGFuPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgb3V0c3RhbmRpbmcgcGF0Y2ggc2VyaWVzIHRvIGFkZCBj
YWNoZSBjb2xvcmluZyB0byBYZW48YnI+DQomZ3Q7wqAgwqAgwqAgwqB1cHN0cmVhbSBidXQgaXQ8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBoYXNuJiMzOTt0IGJlZW4gbWVyZ2VkIHlldC4pPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqBBbnl3YXksIGlmIHlvdSBhcmUgdXNpbmcgeGxueF9yZWJhc2VfNC4xNiBpdCBkb2VzbiYj
Mzk7dDxicj4NCiZndDvCoCDCoCDCoCDCoG1hdHRlciB0b28gbXVjaCBmb3I8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqB5b3UgYXMgeW91IGFscmVhZHkgaGF2ZSBDYWNoZSBDb2xvcmluZyBhcyBh
IGZlYXR1cmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqB0aGVyZS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEkgdGFrZSB5b3UgYXJl
IHVzaW5nIEltYWdlQnVpbGRlciB0byBnZW5lcmF0ZSB0aGUgYm9vdDxicj4NCiZndDvCoCDCoCDC
oCDCoGNvbmZpZ3VyYXRpb24/IElmPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgc28sIHBsZWFz
ZSBwb3N0IHRoZSBJbWFnZUJ1aWxkZXIgY29uZmlnIGZpbGUgdGhhdCB5b3UgYXJlPGJyPg0KJmd0
O8KgIMKgIMKgIMKgdXNpbmcuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoEJ1dCBmcm9tIHRoZSBib290IG1lc3NhZ2UsIGl0IGxvb2tzIGxpa2UgdGhlIGNvbG9y
czxicj4NCiZndDvCoCDCoCDCoCDCoGNvbmZpZ3VyYXRpb24gZm9yPGJyPg0KJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgRG9tMCBpcyBpbmNvcnJlY3QuPGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDsgPGJy
Pg0KJmd0OyA8YnI+DQomZ3Q7IDwvYmxvY2txdW90ZT48L2Rpdj4NCg==
--00000000000082ed6105fb68055f--
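For reference, a corrected device-tree fragment implied by the review above would look roughly like this. This is a sketch only: the /chosen node wrapper is assumed (the thread quotes only the xen,xen-bootargs property itself), "way_size" is the spelling the "way_szize has a typo" remark implies, and the memory, scheduler, and color values are taken verbatim from the quoted command line.

```dts
/ {
	chosen {
		/* Hypervisor boot arguments as quoted in the thread,
		 * with the misspelled "way_szize" corrected to
		 * "way_size". Whether these arguments actually reach
		 * Xen still needs to be cross-checked, as suggested. */
		xen,xen-bootargs = "console=dtuart dtuart=serial0 \
			dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin \
			bootscrub=0 vwfi=native sched=null timer_slop=0 \
			way_size=65536 xen_colors=0-3 dom0_colors=4-7";
	};
};
```

If the boot log still reports "(XEN) Xen color(s): [ 0 ]" after this change, that would confirm the arguments are not making it into the booted configuration.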
--00000000000082ed6405fb680561
Content-Type: text/x-log; charset="UTF-7"; name="build.log"
Content-Disposition: attachment; filename="build.log"
Content-Transfer-Encoding: base64
Content-ID: <f_lhiykq4s0>
X-Attachment-Id: f_lhiykq4s0

REVCVUc6IEV4ZWN1dGluZyBweXRob24gZnVuY3Rpb24gYXV0b3Rvb2xzX2FjbG9jYWxzCkRFQlVH
OiBTSVRFIGZpbGVzIFsnZW5kaWFuLWxpdHRsZScsICdiaXQtNjQnLCAnYXJtLWNvbW1vbicsICdh
cm0tNjQnLCAnY29tbW9uLWxpbnV4JywgJ2NvbW1vbi1nbGliYycsICdhYXJjaDY0LWxpbnV4Jywg
J2NvbW1vbiddCkRFQlVHOiBQeXRob24gZnVuY3Rpb24gYXV0b3Rvb2xzX2FjbG9jYWxzIGZpbmlz
aGVkCkRFQlVHOiBFeGVjdXRpbmcgc2hlbGwgZnVuY3Rpb24gZG9fY29tcGlsZQpOT1RFOiBtYWtl
IC1qIDggU1REVkdBX1JPTT0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUt
ZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290L3Vzci9zaGFyZS9maXJtd2FyZS92
Z2FiaW9zLTAuOGEuYmluIENJUlJVU1ZHQV9ST009L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQv
d2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29s
cy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdC91c3Ivc2hh
cmUvZmlybXdhcmUvdmdhYmlvcy0wLjhhLmNpcnJ1cy5iaW4gU0VBQklPU19ST009L2hvbWUvbm9s
ZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRh
YmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNp
cGUtc3lzcm9vdC91c3Ivc2hhcmUvZmlybXdhcmUvYmlvcy5iaW4gRVRIRVJCT09UX1JPTVM9L2hv
bWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhh
LXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1y
MC9yZWNpcGUtc3lzcm9vdC91c3Ivc2hhcmUvZmlybXdhcmUvcnRsODEzOS5yb20gV0dFVD0vYmlu
L2ZhbHNlIEdJVD0vYmluL2ZhbHNlIFhFTl9CVUlMRF9EQVRFPTIwMjMtMDUtMTEgWEVOX0JVSUxE
X1RJTUU9MDk6MTY6MjYgWEVOX0NPTkZJR19FWFBFUlQ9eSBkZWJ1Zz1uIHRvb2xzIFBZVEhPTj0v
aG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12
OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZk
LXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZS91c3IvYmluL3B5dGhvbjMtbmF0aXZlL3B5dGhvbjMg
RVhUUkFfQ0ZMQUdTX1hFTl9UT09MUz0gLW1hcmNoPWFybXY4LWErY3JjK2NyeXB0byAtZnN0YWNr
LXByb3RlY3Rvci1zdHJvbmcgIC1PMiAtRF9GT1JUSUZZX1NPVVJDRT0yIC1XZm9ybWF0IC1XZm9y
bWF0LXNlY3VyaXR5IC1XZXJyb3I9Zm9ybWF0LXNlY3VyaXR5ICAtTzIgLXBpcGUgLWcgLWZlbGlt
aW5hdGUtdW51c2VkLWRlYnVnLXR5cGVzIC1mbWFjcm8tcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5
MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUt
bGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3Jj
L2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAg
ICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVp
bGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10
b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAg
ICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVp
cmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytz
dGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3Q9ICAgICAgICAgICAgICAg
ICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0
ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQu
MTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZT0gCm1h
a2UgLUMgdG9vbHMvaW5jbHVkZSBpbnN0YWxsCm1ha2VbMV06IEVudGVyaW5nIGRpcmVjdG9yeSAn
L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJt
djhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2
ZC1yMC9naXQvdG9vbHMvaW5jbHVkZScKbWFrZSAtQyB4ZW4tZm9yZWlnbgptYWtlWzJdOiBFbnRl
cmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2luY2x1ZGUveGVuLWZvcmVpZ24nCm1rZGly
 -p xen/libelf
mkdir -p xen-xsm/flask
ln -sf /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/COPYING xen
cd /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/xsm/flask/ && \
	/bin/sh policy/mkflask.sh /home/nole2390/irene/build/white-eirene-evt0/tmp/hosttools/awk /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-xsm/flask policy/initial_sids
ln -sf /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/arch-arm.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/arch-x86_32.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/arch-x86_64.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/argo.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/callback.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/device_tree_defs.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/dom0_ops.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/domctl.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/elfnote.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/errno.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/event_channel.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/features.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/grant_table.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/kexec.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/memory.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/nmi.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/physdev.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/platform.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/pmu.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/sched.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/sysctl.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/tmem.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/trace.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/vcpu.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/version.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/vm_event.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/xencomm.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/xen-compat.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/xen.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/xenoprof.h xen
ln -sf /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/arch-x86 /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/arch-arm /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/hvm /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/io /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/public/xsm xen
ln -sf ../xen-sys/Linux xen/sys
ln -sf /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/xen/libelf.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/xen/elfstructs.h xen/libelf/
ln -s ../xen-foreign xen/foreign
touch xen-xsm/.dir
ln -sf /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../xen/include/acpi acpi
touch xen/.dir
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot-native/usr/bin/python3-native/python3 mkheader.py arm32 arm32.h.tmp /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/arch-arm.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/xen.h
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot-native/usr/bin/python3-native/python3 mkheader.py arm64 arm64.h.tmp /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/arch-arm.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/xen.h
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot-native/usr/bin/python3-native/python3 mkheader.py x86_32 x86_32.h.tmp /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/arch-x86/xen-x86_32.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/arch-x86/xen.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/xen.h
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot-native/usr/bin/python3-native/python3 mkheader.py x86_64 x86_64.h.tmp /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/arch-x86/xen-x86_64.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/arch-x86/xen.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign/../../../xen/include/public/xen.h
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot-native/usr/bin/python3-native/python3 mkchecker.py checker.c arm32 arm64 x86_32 x86_64
#Avoid mixing an alignment directive with a uint64_t cast or sizeof expression
sed 's/(__align8__ \(uint64_t\))/(\1)/g' < x86_32.h.tmp > x86_32.h.tmp2
rm x86_32.h.tmp
if ! cmp -s x86_32.h.tmp2 x86_32.h; then mv -f x86_32.h.tmp2 x86_32.h; else rm -f x86_32.h.tmp2; fi
#Avoid mixing an alignment directive with a uint64_t cast or sizeof expression
sed 's/(__align8__ \(uint64_t\))/(\1)/g' < arm32.h.tmp > arm32.h.tmp2
rm arm32.h.tmp
if ! cmp -s arm32.h.tmp2 arm32.h; then mv -f arm32.h.tmp2 arm32.h; else rm -f arm32.h.tmp2; fi
#Avoid mixing an alignment directive with a uint64_t cast or sizeof expression
sed 's/(__align8__ \(uint64_t\))/(\1)/g' < arm64.h.tmp > arm64.h.tmp2
rm arm64.h.tmp
if ! cmp -s arm64.h.tmp2 arm64.h; then mv -f arm64.h.tmp2 arm64.h; else rm -f arm64.h.tmp2; fi
#Avoid mixing an alignment directive with a uint64_t cast or sizeof expression
sed 's/(__align8__ \(uint64_t\))/(\1)/g' < x86_64.h.tmp > x86_64.h.tmp2
rm x86_64.h.tmp
if ! cmp -s x86_64.h.tmp2 x86_64.h; then mv -f x86_64.h.tmp2 x86_64.h; else rm -f x86_64.h.tmp2; fi
gcc  -Wall -Werror -Wstrict-prototypes -O2 -fomit-frame-pointer -fno-strict-aliasing -Wdeclaration-after-statement -D__XEN_TOOLS__ -o checker checker.c
./checker > tmp.size
diff -u reference.size tmp.size
rm tmp.size
make[2]: Leaving directory '/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/xen-foreign'
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/arch-x86
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/arch-x86/hvm
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/arch-arm
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/arch-arm/hvm
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/foreign
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/hvm
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/io
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/sys
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/xsm
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/COPYING /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/*.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/arch-x86/*.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/arch-x86
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/arch-x86/hvm/*.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/arch-x86/hvm
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/arch-arm/hvm/*.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/arch-arm/hvm
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/foreign/*.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/foreign
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/hvm/*.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/hvm
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/io/*.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/io
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/sys/*.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/sys
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include/../../tools/cross-install -m0644 -p xen/xsm/*.h /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/include/xen/xsm
make[1]: Leaving directory '/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/include'
make -C tools install
make[1]: Entering directory '/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools'
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/../tools/cross-install -d -m0755 -p -m 700 /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/var/lib/xen/dump
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/var/log/xen
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/var/run/xen
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/var/lib/xen
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/var/run/xenstored
/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/../tools/cross-install -d -m0755 -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/dist/install/usr/lib/pkgconfig
make subdirs-install
make[2]: Entering directory '/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools'
make[3]: Entering directory '/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools'
make -C libs install
make[4]: Entering directory '/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/libs'
make[5]: Entering directory '/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/libs'
make -C toolcore install
make[6]: Entering directory '/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/libs/toolcore'
make libs
make[7]: Entering directory '/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/libs/toolcore'
/home/nole2390/irene/build/white-eirene-evt0/tmp/hosttools/perl /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/libs/toolcore/../../../tools/include/xen-external/bsd-sys-queue-h-seddery /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/libs/toolcore/../../../tools/include/xen-external/bsd-sys-queue.h --prefix=xentoolcore >include/_xentoolcore_list.h.new
mkdir -p /home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/pkg-config
if ! cmp -s include/_xentoolcore_list.h.new include/_xentoolcore_list.h; then mv -f include/_xentoolcore_list.h.new include/_xentoolcore_list.h; else rm -f include/_xentoolcore_list.h.new; fi
for i in include/xentoolcore.h include/xentoolcore_internal.h include/_xentoolcore_list.h; do \
    aarch64-portable-linux-gcc  --sysroot=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot  -x c -ansi -Wall -Werror -I/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/libs/toolcore/../../../tools/include \
          -S -o /dev/null $i || exit 1; \
    echo $i; \
done >headers.chk.new
aarch64-portable-linux-gcc  --sysroot=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot   -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O2 -fomit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MF .handlereg.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE  -march=armv8-a+crc+crypto -fstack-protector-strong  -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security  -O2 -pipe -g -feliminate-unused-debug-types -fmacro-prefix-map=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0=/usr/src/debug/xen-tools/4.17+stableAUTOINC+a632370f6d-r0                      -fdebug-prefix-map=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0=/usr/src/debug/xen-tools/4.17+stableAUTOINC+a632370f6d-r0                      -fdebug-prefix-map=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot=                      -fdebug-prefix-map=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot-native=  -Werror -Wmissing-prototypes -I./include -I/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/libs/toolcore/../../../tools/include   -c -o handlereg.o handlereg.c 
aarch64-portable-linux-gcc  --sysroot=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot   -DPIC -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O2 -fomit-frame-pointer -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MF .handlereg.opic.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE  -march=armv8-a+crc+crypto -fstack-protector-strong  -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security  -O2 -pipe -g -feliminate-unused-debug-types -fmacro-prefix-map=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0=/usr/src/debug/xen-tools/4.17+stableAUTOINC+a632370f6d-r0                      -fdebug-prefix-map=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0=/usr/src/debug/xen-tools/4.17+stableAUTOINC+a632370f6d-r0                      -fdebug-prefix-map=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot=                      -fdebug-prefix-map=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot-native=  -Werror -Wmissing-prototypes -I./include -I/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/git/tools/libs/toolcore/../../../tools/include   -fPIC -c -o handlereg.opic handlereg.c 
mv headers.chk.new headers.chk
aarch64-portable-linux-gcc  --sysroot=/home/nole2390/irene/build/white-eirene-evt0/tmp/work/armv8a-portable-linux/xen-tools/4.17+stableAUTOINC+a632370f6d-r0/recipe-sysroot                                -pthread -Wl,-soname -Wl,libxentoolcore.so.1 -shared -Wl,--version-script=libxentoolcore.map -o libxentoolcore.so.1.0 handlereg.opic  
aarch64-portable-linux-ar rc libxentoolcore.a handlereg.o
ln -sf libxentoolcore.so.1.0 
bGlieGVudG9vbGNvcmUuc28uMQpsbiAtc2YgbGlieGVudG9vbGNvcmUuc28uMSBsaWJ4ZW50b29s
Y29yZS5zbwptYWtlWzddOiBMZWF2aW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUv
YnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hl
bi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy90
b29sY29yZScKL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy90b29sY29yZS8uLi8uLi8uLi90b29scy9jcm9z
cy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1l
aXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcr
c3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC9kaXN0L2luc3RhbGwvdXNyL2xpYgovaG9t
ZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEt
cG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIw
L2dpdC90b29scy9saWJzL3Rvb2xjb3JlLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQg
LW0wNzU1IC1wIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3Rt
cC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5D
K2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvaW5jbHVkZQovaG9tZS9ub2xlMjM5
MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUt
bGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29s
cy9saWJzL3Rvb2xjb3JlLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNzU1IC1wIGxp
YnhlbnRvb2xjb3JlLnNvLjEuMCAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJl
bmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3Rh
YmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC9kaXN0L2luc3RhbGwvdXNyL2xpYgovaG9tZS9u
b2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9y
dGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dp
dC90b29scy9saWJzL3Rvb2xjb3JlLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0
IC1wIGxpYnhlbnRvb2xjb3JlLmEgL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWly
ZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0
YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvZGlzdC9pbnN0YWxsL3Vzci9saWIKbG4gLXNm
IGxpYnhlbnRvb2xjb3JlLnNvLjEuMCAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1l
aXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcr
c3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC9kaXN0L2luc3RhbGwvdXNyL2xpYi9saWJ4
ZW50b29sY29yZS5zby4xCmxuIC1zZiBsaWJ4ZW50b29sY29yZS5zby4xIC9ob21lL25vbGUyMzkw
L2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1s
aW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3Qv
aW5zdGFsbC91c3IvbGliL2xpYnhlbnRvb2xjb3JlLnNvCi9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvdG9v
bGNvcmUvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgaW5jbHVkZS94ZW50
b29sY29yZS5oIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3Rt
cC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5D
K2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvaW5jbHVkZQovaG9tZS9ub2xlMjM5
MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUt
bGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29s
cy9saWJzL3Rvb2xjb3JlLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIHhl
bnRvb2xjb3JlLnBjIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvbGliL3BrZ2NvbmZpZwptYWtl
WzZdOiBMZWF2aW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUt
ZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3
K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy90b29sY29yZScKbWFr
ZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRl
LWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4x
NytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMnCm1ha2VbNV06IEVu
dGVyaW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5l
LWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicycKbWFrZSAtQyB0b29sbG9nIGlu
c3RhbGwKbWFrZVs2XTogRW50ZXJpbmcgZGlyZWN0b3J5ICcvaG9tZS9ub2xlMjM5MC9pcmVuZS9i
dWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVu
LXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL3Rv
b2xsb2cnCm1ha2UgbGlicwptYWtlWzddOiBFbnRlcmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUy
MzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJs
ZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rv
b2xzL2xpYnMvdG9vbGxvZycKZm9yIGkgaW4gaW5jbHVkZS94ZW50b29sbG9nLmg7IGRvIFwKICAg
IGFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2NjICAtLXN5c3Jvb3Q9L2hvbWUvbm9sZTIzOTAvaXJl
bmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4
L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9v
dCAgLXggYyAtYW5zaSAtV2FsbCAtV2Vycm9yIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQv
d2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29s
cy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy90b29sbG9n
Ly4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgXAogICAgICAgICAgLVMgLW8gL2Rldi9udWxsICRpIHx8
IGV4aXQgMTsgXAogICAgZWNobyAkaTsgXApkb25lID5oZWFkZXJzLmNoay5uZXcKYWFyY2g2NC1w
b3J0YWJsZS1saW51eC1nY2MgIC0tc3lzcm9vdD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93
aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xz
LzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290ICAgLURCVUlM
RF9JRCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVduby11bnVzZWQtYnV0LXNldC12
YXJpYWJsZSAtV25vLXVudXNlZC1sb2NhbC10eXBlZGVmcyAgIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtRF9fWEVOX0lOVEVSRkFDRV9WRVJTSU9OX189X19YRU5fTEFURVNUX0lOVEVSRkFDRV9W
RVJTSU9OX18gLU1NRCAtTUYgLnh0bF9jb3JlLm8uZCAtRF9MQVJHRUZJTEVfU09VUkNFIC1EX0xB
UkdFRklMRTY0X1NPVVJDRSAgLW1hcmNoPWFybXY4LWErY3JjK2NyeXB0byAtZnN0YWNrLXByb3Rl
Y3Rvci1zdHJvbmcgIC1PMiAtRF9GT1JUSUZZX1NPVVJDRT0yIC1XZm9ybWF0IC1XZm9ybWF0LXNl
Y3VyaXR5IC1XZXJyb3I9Zm9ybWF0LXNlY3VyaXR5ICAtTzIgLXBpcGUgLWcgLWZlbGltaW5hdGUt
dW51c2VkLWRlYnVnLXR5cGVzIC1mbWFjcm8tcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVu
ZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgv
eGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVn
L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAg
ICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hp
dGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80
LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZk
ZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3Q9ICAgICAgICAgICAgICAgICAgICAg
IC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJl
bmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3Rh
YmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZT0gIC1XZXJyb3Ig
LVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9i
dWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVu
LXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL3Rv
b2xsb2cvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1jIC1vIHh0bF9jb3JlLm8geHRsX2NvcmUu
YyAKYWFyY2g2NC1wb3J0YWJsZS1saW51eC1nY2MgIC0tc3lzcm9vdD0vaG9tZS9ub2xlMjM5MC9p
cmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGlu
dXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNy
b290ICAgLURCVUlMRF9JRCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1X
c3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVduby11bnVz
ZWQtYnV0LXNldC12YXJpYWJsZSAtV25vLXVudXNlZC1sb2NhbC10eXBlZGVmcyAgIC1PMiAtZm9t
aXQtZnJhbWUtcG9pbnRlciAtRF9fWEVOX0lOVEVSRkFDRV9WRVJTSU9OX189X19YRU5fTEFURVNU
X0lOVEVSRkFDRV9WRVJTSU9OX18gLU1NRCAtTUYgLnh0bF9sb2dnZXJfc3RkaW8uby5kIC1EX0xB
UkdFRklMRV9TT1VSQ0UgLURfTEFSR0VGSUxFNjRfU09VUkNFICAtbWFyY2g9YXJtdjgtYStjcmMr
Y3J5cHRvIC1mc3RhY2stcHJvdGVjdG9yLXN0cm9uZyAgLU8yIC1EX0ZPUlRJRllfU09VUkNFPTIg
LVdmb3JtYXQgLVdmb3JtYXQtc2VjdXJpdHkgLVdlcnJvcj1mb3JtYXQtc2VjdXJpdHkgIC1PMiAt
cGlwZSAtZyAtZmVsaW1pbmF0ZS11bnVzZWQtZGVidWctdHlwZXMgLWZtYWNyby1wcmVmaXgtbWFw
PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2Fy
bXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBm
NmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcw
ZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xl
MjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFi
bGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Iv
c3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAg
ICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUv
YnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hl
bi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdD0g
ICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2ly
ZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51
eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jv
b3QtbmF0aXZlPSAgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4vaW5jbHVkZSAtSS9o
b21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4
YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQt
cjAvZ2l0L3Rvb2xzL2xpYnMvdG9vbGxvZy8uLi8uLi8uLi90b29scy9pbmNsdWRlICAgLWMgLW8g
eHRsX2xvZ2dlcl9zdGRpby5vIHh0bF9sb2dnZXJfc3RkaW8uYyAKYWFyY2g2NC1wb3J0YWJsZS1s
aW51eC1nY2MgIC0tc3lzcm9vdD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJl
bmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3Rh
YmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290ICAgLURQSUMgLURCVUlMRF9J
RCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlw
ZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVduby11bnVzZWQtYnV0LXNldC12YXJp
YWJsZSAtV25vLXVudXNlZC1sb2NhbC10eXBlZGVmcyAgIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRl
ciAtRF9fWEVOX0lOVEVSRkFDRV9WRVJTSU9OX189X19YRU5fTEFURVNUX0lOVEVSRkFDRV9WRVJT
SU9OX18gLU1NRCAtTUYgLnh0bF9jb3JlLm9waWMuZCAtRF9MQVJHRUZJTEVfU09VUkNFIC1EX0xB
UkdFRklMRTY0X1NPVVJDRSAgLW1hcmNoPWFybXY4LWErY3JjK2NyeXB0byAtZnN0YWNrLXByb3Rl
Y3Rvci1zdHJvbmcgIC1PMiAtRF9GT1JUSUZZX1NPVVJDRT0yIC1XZm9ybWF0IC1XZm9ybWF0LXNl
Y3VyaXR5IC1XZXJyb3I9Zm9ybWF0LXNlY3VyaXR5ICAtTzIgLXBpcGUgLWcgLWZlbGltaW5hdGUt
dW51c2VkLWRlYnVnLXR5cGVzIC1mbWFjcm8tcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVu
ZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgv
eGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVn
L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAg
ICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hp
dGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80
LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZk
ZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3Q9ICAgICAgICAgICAgICAgICAgICAg
IC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJl
bmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3Rh
YmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZT0gIC1XZXJyb3Ig
LVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9i
dWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVu
LXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL3Rv
b2xsb2cvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgIC1mUElDIC1jIC1vIHh0bF9jb3JlLm9waWMg
eHRsX2NvcmUuYyAKYWFyY2g2NC1wb3J0YWJsZS1saW51eC1nY2MgIC0tc3lzcm9vdD0vaG9tZS9u
b2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9y
dGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3Jl
Y2lwZS1zeXNyb290ICAgLURQSUMgLURCVUlMRF9JRCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3Rk
PWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0
ZW1lbnQgLVduby11bnVzZWQtYnV0LXNldC12YXJpYWJsZSAtV25vLXVudXNlZC1sb2NhbC10eXBl
ZGVmcyAgIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtRF9fWEVOX0lOVEVSRkFDRV9WRVJTSU9O
X189X19YRU5fTEFURVNUX0lOVEVSRkFDRV9WRVJTSU9OX18gLU1NRCAtTUYgLnh0bF9sb2dnZXJf
c3RkaW8ub3BpYy5kIC1EX0xBUkdFRklMRV9TT1VSQ0UgLURfTEFSR0VGSUxFNjRfU09VUkNFICAt
bWFyY2g9YXJtdjgtYStjcmMrY3J5cHRvIC1mc3RhY2stcHJvdGVjdG9yLXN0cm9uZyAgLU8yIC1E
X0ZPUlRJRllfU09VUkNFPTIgLVdmb3JtYXQgLVdmb3JtYXQtc2VjdXJpdHkgLVdlcnJvcj1mb3Jt
YXQtc2VjdXJpdHkgIC1PMiAtcGlwZSAtZyAtZmVsaW1pbmF0ZS11bnVzZWQtZGVidWctdHlwZXMg
LWZtYWNyby1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVu
ZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFi
bGVBVVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3Rh
YmxlQVVUT0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJl
Zml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAv
d29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQyth
NjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hv
bWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhh
LXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1y
MC9yZWNpcGUtc3lzcm9vdD0gICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFw
PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2Fy
bXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBm
NmQtcjAvcmVjaXBlLXN5c3Jvb3QtbmF0aXZlPSAgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBl
cyAtSS4vaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvdG9vbGxvZy8uLi8uLi8uLi90b29s
cy9pbmNsdWRlICAgLWZQSUMgLWMgLW8geHRsX2xvZ2dlcl9zdGRpby5vcGljIHh0bF9sb2dnZXJf
c3RkaW8uYyAKbWtkaXIgLXAgL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5l
LWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvcGtnLWNvbmZpZwptdiBoZWFkZXJzLmNo
ay5uZXcgaGVhZGVycy5jaGsKYWFyY2g2NC1wb3J0YWJsZS1saW51eC1hciByYyBsaWJ4ZW50b29s
bG9nLmEgeHRsX2NvcmUubyB4dGxfbG9nZ2VyX3N0ZGlvLm8KYWFyY2g2NC1wb3J0YWJsZS1saW51
eC1nY2MgIC0tc3lzcm9vdD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUt
ZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290ICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgLXB0aHJlYWQgLVdsLC1zb25hbWUgLVdsLGxpYnhlbnRvb2xsb2cuc28uMSAt
c2hhcmVkIC1XbCwtLXZlcnNpb24tc2NyaXB0PWxpYnhlbnRvb2xsb2cubWFwIC1vIGxpYnhlbnRv
b2xsb2cuc28uMS4wIHh0bF9jb3JlLm9waWMgeHRsX2xvZ2dlcl9zdGRpby5vcGljICAKbG4gLXNm
IGxpYnhlbnRvb2xsb2cuc28uMS4wIGxpYnhlbnRvb2xsb2cuc28uMQpsbiAtc2YgbGlieGVudG9v
bGxvZy5zby4xIGxpYnhlbnRvb2xsb2cuc28KbWFrZVs3XTogTGVhdmluZyBkaXJlY3RvcnkgJy9o
b21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4
YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQt
cjAvZ2l0L3Rvb2xzL2xpYnMvdG9vbGxvZycKL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hp
dGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80
LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy90b29sbG9nLy4u
Ly4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9ob21lL25vbGUyMzkwL2ly
ZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51
eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5z
dGFsbC91c3IvbGliCi9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvdG9vbGxvZy8uLi8uLi8uLi90b29scy9j
cm9zcy1pbnN0YWxsIC1kIC1tMDc1NSAtcCAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0
ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQu
MTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC9kaXN0L2luc3RhbGwvdXNyL2luY2x1
ZGUKL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsv
YXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3
MGY2ZC1yMC9naXQvdG9vbHMvbGlicy90b29sbG9nLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3Rh
bGwgLW0wNzU1IC1wIGxpYnhlbnRvb2xsb2cuc28uMS4wIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91
c3IvbGliCi9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93
b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvdG9vbGxvZy8uLi8uLi8uLi90b29scy9jcm9zcy1p
bnN0YWxsIC1tMDY0NCAtcCBsaWJ4ZW50b29sbG9nLmEgL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVp
bGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10
b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvZGlzdC9pbnN0YWxsL3Vz
ci9saWIKbG4gLXNmIGxpYnhlbnRvb2xsb2cuc28uMS4wIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91
c3IvbGliL2xpYnhlbnRvb2xsb2cuc28uMQpsbiAtc2YgbGlieGVudG9vbGxvZy5zby4xIC9ob21l
L25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1w
b3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAv
Z2l0L2Rpc3QvaW5zdGFsbC91c3IvbGliL2xpYnhlbnRvb2xsb2cuc28KL2hvbWUvbm9sZTIzOTAv
aXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxp
bnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMv
bGlicy90b29sbG9nLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1wIGluY2x1
ZGUveGVudG9vbGxvZy5oIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvaW5jbHVkZQovaG9tZS9u
b2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9y
dGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dp
dC90b29scy9saWJzL3Rvb2xsb2cvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQg
LXAgeGVudG9vbGxvZy5wYyAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUt
ZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC9kaXN0L2luc3RhbGwvdXNyL2xpYi9wa2djb25maWcK
bWFrZVs2XTogTGVhdmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvdG9vbGxvZycK
bWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMnCm1ha2VbNV06
IEVudGVyaW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWly
ZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0
YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicycKbWFrZSAtQyBldnRjaG4g
aW5zdGFsbAptYWtlWzZdOiBFbnRlcmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMv
ZXZ0Y2huJwptYWtlIGxpYnMKbWFrZVs3XTogRW50ZXJpbmcgZGlyZWN0b3J5ICcvaG9tZS9ub2xl
MjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFi
bGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90
b29scy9saWJzL2V2dGNobicKZm9yIGkgaW4gaW5jbHVkZS94ZW5ldnRjaG4uaDsgZG8gXAogICAg
YWFyY2g2NC1wb3J0YWJsZS1saW51eC1nY2MgIC0tc3lzcm9vdD0vaG9tZS9ub2xlMjM5MC9pcmVu
ZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgv
eGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290
ICAteCBjIC1hbnNpIC1XYWxsIC1XZXJyb3IgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93
aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xz
LzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2V2dGNobi8u
Li8uLi8uLi90b29scy9pbmNsdWRlIFwKICAgICAgICAgIC1TIC1vIC9kZXYvbnVsbCAkaSB8fCBl
eGl0IDE7IFwKICAgIGVjaG8gJGk7IFwKZG9uZSA+aGVhZGVycy5jaGsubmV3CmFhcmNoNjQtcG9y
dGFibGUtbGludXgtZ2NjICAtLXN5c3Jvb3Q9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hp
dGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80
LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdCAgIC1EQlVJTERf
SUQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5
cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1Xbm8tdW51c2VkLWJ1dC1zZXQtdmFy
aWFibGUgLVduby11bnVzZWQtbG9jYWwtdHlwZWRlZnMgICAtTzIgLWZvbWl0LWZyYW1lLXBvaW50
ZXIgLURfX1hFTl9JTlRFUkZBQ0VfVkVSU0lPTl9fPV9fWEVOX0xBVEVTVF9JTlRFUkZBQ0VfVkVS
U0lPTl9fIC1NTUQgLU1GIC5jb3JlLm8uZCAtRF9MQVJHRUZJTEVfU09VUkNFIC1EX0xBUkdFRklM
RTY0X1NPVVJDRSAgLW1hcmNoPWFybXY4LWErY3JjK2NyeXB0byAtZnN0YWNrLXByb3RlY3Rvci1z
dHJvbmcgIC1PMiAtRF9GT1JUSUZZX1NPVVJDRT0yIC1XZm9ybWF0IC1XZm9ybWF0LXNlY3VyaXR5
IC1XZXJyb3I9Zm9ybWF0LXNlY3VyaXR5ICAtTzIgLXBpcGUgLWcgLWZlbGltaW5hdGUtdW51c2Vk
LWRlYnVnLXR5cGVzIC1mbWFjcm8tcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWls
ZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRv
b2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10
b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAg
ICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWly
ZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0
YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMvNC4xNytz
dGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1w
cmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3Rt
cC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5D
K2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3Q9ICAgICAgICAgICAgICAgICAgICAgIC1mZGVi
dWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0
MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZT0gIC1XZXJyb3IgLVdtaXNz
aW5nLXByb3RvdHlwZXMgLUkuL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93
aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xz
LzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2V2dGNobi8u
Li8uLi8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRl
LWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4x
NytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZXZ0Y2huLy4uLy4u
Ly4uL3Rvb2xzL2xpYnMvdG9vbGxvZy9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVp
bGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10
b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9ldnRj
aG4vLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93
aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xz
LzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2V2dGNobi8u
Li8uLi8uLi90b29scy9saWJzL3Rvb2xjb3JlL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVu
ZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgv
eGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJz
L2V2dGNobi8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBjb3JlLm8gY29yZS5jIAphYXJj
aDY0LXBvcnRhYmxlLWxpbnV4LWdjYyAgLS1zeXNyb290PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QgICAt
REJVSUxEX0lEIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3Qt
cHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV25vLXVudXNlZC1idXQt
c2V0LXZhcmlhYmxlIC1Xbm8tdW51c2VkLWxvY2FsLXR5cGVkZWZzICAgLU8yIC1mb21pdC1mcmFt
ZS1wb2ludGVyIC1EX19YRU5fSU5URVJGQUNFX1ZFUlNJT05fXz1fX1hFTl9MQVRFU1RfSU5URVJG
QUNFX1ZFUlNJT05fXyAtTU1EIC1NRiAubGludXguby5kIC1EX0xBUkdFRklMRV9TT1VSQ0UgLURf
TEFSR0VGSUxFNjRfU09VUkNFICAtbWFyY2g9YXJtdjgtYStjcmMrY3J5cHRvIC1mc3RhY2stcHJv
dGVjdG9yLXN0cm9uZyAgLU8yIC1EX0ZPUlRJRllfU09VUkNFPTIgLVdmb3JtYXQgLVdmb3JtYXQt
c2VjdXJpdHkgLVdlcnJvcj1mb3JtYXQtc2VjdXJpdHkgIC1PMiAtcGlwZSAtZyAtZmVsaW1pbmF0
ZS11bnVzZWQtZGVidWctdHlwZXMgLWZtYWNyby1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2ly
ZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51
eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVi
dWcveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAg
ICAgICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93
aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xz
LzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29s
cy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAt
ZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5l
LWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdD0gICAgICAgICAgICAgICAgICAg
ICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVp
cmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytz
dGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QtbmF0aXZlPSAgLVdlcnJv
ciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4vaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMv
ZXZ0Y2huLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVp
bGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10
b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9ldnRj
aG4vLi4vLi4vLi4vdG9vbHMvbGlicy90b29sbG9nL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9p
cmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGlu
dXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9s
aWJzL2V2dGNobi8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMv
ZXZ0Y2huLy4uLy4uLy4uL3Rvb2xzL2xpYnMvdG9vbGNvcmUvaW5jbHVkZSAtSS9ob21lL25vbGUy
MzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJs
ZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rv
b2xzL2xpYnMvZXZ0Y2huLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1jIC1vIGxpbnV4Lm8gbGlu
dXguYyAKYWFyY2g2NC1wb3J0YWJsZS1saW51eC1nY2MgIC0tc3lzcm9vdD0vaG9tZS9ub2xlMjM5
MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUt
bGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1z
eXNyb290ICAgLURQSUMgLURCVUlMRF9JRCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5
IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQg
LVduby11bnVzZWQtYnV0LXNldC12YXJpYWJsZSAtV25vLXVudXNlZC1sb2NhbC10eXBlZGVmcyAg
IC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtRF9fWEVOX0lOVEVSRkFDRV9WRVJTSU9OX189X19Y
RU5fTEFURVNUX0lOVEVSRkFDRV9WRVJTSU9OX18gLU1NRCAtTUYgLmNvcmUub3BpYy5kIC1EX0xB
UkdFRklMRV9TT1VSQ0UgLURfTEFSR0VGSUxFNjRfU09VUkNFICAtbWFyY2g9YXJtdjgtYStjcmMr
Y3J5cHRvIC1mc3RhY2stcHJvdGVjdG9yLXN0cm9uZyAgLU8yIC1EX0ZPUlRJRllfU09VUkNFPTIg
LVdmb3JtYXQgLVdmb3JtYXQtc2VjdXJpdHkgLVdlcnJvcj1mb3JtYXQtc2VjdXJpdHkgIC1PMiAt
cGlwZSAtZyAtZmVsaW1pbmF0ZS11bnVzZWQtZGVidWctdHlwZXMgLWZtYWNyby1wcmVmaXgtbWFw
PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2Fy
bXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBm
NmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcw
ZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xl
MjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFi
bGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Iv
c3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAg
ICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUv
YnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hl
bi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdD0g
ICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2ly
ZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51
eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jv
b3QtbmF0aXZlPSAgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4vaW5jbHVkZSAtSS9o
b21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4
YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQt
cjAvZ2l0L3Rvb2xzL2xpYnMvZXZ0Y2huLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL2hvbWUv
bm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBv
cnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9n
aXQvdG9vbHMvbGlicy9ldnRjaG4vLi4vLi4vLi4vdG9vbHMvbGlicy90b29sbG9nL2luY2x1ZGUg
LUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9h
cm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcw
ZjZkLXIwL2dpdC90b29scy9saWJzL2V2dGNobi8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtSS9o
b21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4
YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQt
cjAvZ2l0L3Rvb2xzL2xpYnMvZXZ0Y2huLy4uLy4uLy4uL3Rvb2xzL2xpYnMvdG9vbGNvcmUvaW5j
bHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93
b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZXZ0Y2huLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
IC1mUElDIC1jIC1vIGNvcmUub3BpYyBjb3JlLmMgCmFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2Nj
ICAtLXN5c3Jvb3Q9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAv
dG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9J
TkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdCAgIC1EUElDIC1EQlVJTERfSUQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1Xbm8tdW51c2VkLWJ1dC1zZXQtdmFyaWFibGUgLVdu
by11bnVzZWQtbG9jYWwtdHlwZWRlZnMgICAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLURfX1hF
Tl9JTlRFUkZBQ0VfVkVSU0lPTl9fPV9fWEVOX0xBVEVTVF9JTlRFUkZBQ0VfVkVSU0lPTl9fIC1N
TUQgLU1GIC5saW51eC5vcGljLmQgLURfTEFSR0VGSUxFX1NPVVJDRSAtRF9MQVJHRUZJTEU2NF9T
T1VSQ0UgIC1tYXJjaD1hcm12OC1hK2NyYytjcnlwdG8gLWZzdGFjay1wcm90ZWN0b3Itc3Ryb25n
ICAtTzIgLURfRk9SVElGWV9TT1VSQ0U9MiAtV2Zvcm1hdCAtV2Zvcm1hdC1zZWN1cml0eSAtV2Vy
cm9yPWZvcm1hdC1zZWN1cml0eSAgLU8yIC1waXBlIC1nIC1mZWxpbWluYXRlLXVudXNlZC1kZWJ1
Zy10eXBlcyAtZm1hY3JvLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hp
dGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80
LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZk
ZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4
LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29y
ay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMy
MzcwZjZkLXIwL3JlY2lwZS1zeXNyb290PSAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXBy
ZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdC1uYXRpdmU9ICAtV2Vycm9yIC1XbWlzc2luZy1w
cm90b3R5cGVzIC1JLi9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUt
ZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3
K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9ldnRjaG4vLi4vLi4v
Li4vdG9vbHMvaW5jbHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJl
bmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3Rh
YmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2V2dGNobi8uLi8uLi8uLi90
b29scy9saWJzL3Rvb2xsb2cvaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZXZ0Y2huLy4u
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUt
ZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3
K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9ldnRjaG4vLi4vLi4v
Li4vdG9vbHMvbGlicy90b29sY29yZS9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVp
bGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10
b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9ldnRj
aG4vLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWZQSUMgLWMgLW8gbGludXgub3BpYyBsaW51eC5j
IApta2RpciAtcCAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90
bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lO
QythNjMyMzcwZjZkLXIwL2dpdC90b29scy9wa2ctY29uZmlnCm12IGhlYWRlcnMuY2hrLm5ldyBo
ZWFkZXJzLmNoawphYXJjaDY0LXBvcnRhYmxlLWxpbnV4LWdjYyAgLS1zeXNyb290PS9ob21lL25v
bGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0
YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVj
aXBlLXN5c3Jvb3QgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAtcHRocmVhZCAtV2ws
LXNvbmFtZSAtV2wsbGlieGVuZXZ0Y2huLnNvLjEgLXNoYXJlZCAtV2wsLS12ZXJzaW9uLXNjcmlw
dD1saWJ4ZW5ldnRjaG4ubWFwIC1vIGxpYnhlbmV2dGNobi5zby4xLjEgY29yZS5vcGljIGxpbnV4
Lm9waWMgICAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAv
d29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQyth
NjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2V2dGNobi8uLi8uLi8uLi90b29scy9saWJzL3Rv
b2xsb2cvbGlieGVudG9vbGxvZy5zbyAgIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRl
LWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4x
NytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZXZ0Y2huLy4uLy4u
Ly4uL3Rvb2xzL2xpYnMvdG9vbGNvcmUvbGlieGVudG9vbGNvcmUuc28gCmFhcmNoNjQtcG9ydGFi
bGUtbGludXgtYXIgcmMgbGlieGVuZXZ0Y2huLmEgY29yZS5vIGxpbnV4Lm8KbG4gLXNmIGxpYnhl
bmV2dGNobi5zby4xLjEgbGlieGVuZXZ0Y2huLnNvLjEKbG4gLXNmIGxpYnhlbmV2dGNobi5zby4x
IGxpYnhlbmV2dGNobi5zbwptYWtlWzddOiBMZWF2aW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIz
OTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxl
LWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9v
bHMvbGlicy9ldnRjaG4nCi9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZXZ0Y2huLy4uLy4uLy4uL3Rvb2xz
L2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvbGli
Ci9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2Fy
bXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBm
NmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZXZ0Y2huLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwg
LWQgLW0wNzU1IC1wIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvaW5jbHVkZQovaG9tZS9ub2xl
MjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFi
bGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90
b29scy9saWJzL2V2dGNobi8uLi8uLi8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDc1NSAtcCBs
aWJ4ZW5ldnRjaG4uc28uMS4xIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVu
ZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFi
bGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvbGliCi9ob21lL25v
bGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0
YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0
L3Rvb2xzL2xpYnMvZXZ0Y2huLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLW0wNjQ0IC1w
IGxpYnhlbmV2dGNobi5hIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvbGliCmxuIC1zZiBsaWJ4
ZW5ldnRjaG4uc28uMS4xIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvbGliL2xpYnhlbmV2dGNo
bi5zby4xCmxuIC1zZiBsaWJ4ZW5ldnRjaG4uc28uMSAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWls
ZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRv
b2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC9kaXN0L2luc3RhbGwvdXNy
L2xpYi9saWJ4ZW5ldnRjaG4uc28KL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWly
ZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0
YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9ldnRjaG4vLi4vLi4vLi4v
dG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgaW5jbHVkZS94ZW5ldnRjaG4uaCAvaG9tZS9u
b2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9y
dGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dp
dC9kaXN0L2luc3RhbGwvdXNyL2luY2x1ZGUKL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hp
dGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80
LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9ldnRjaG4vLi4v
Li4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgeGVuZXZ0Y2huLnBjIC9ob21lL25v
bGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0
YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0
L2Rpc3QvaW5zdGFsbC91c3IvbGliL3BrZ2NvbmZpZwptYWtlWzZdOiBMZWF2aW5nIGRpcmVjdG9y
eSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsv
YXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3
MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9ldnRjaG4nCm1ha2VbNV06IExlYXZpbmcgZGlyZWN0b3J5
ICcvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9h
cm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcw
ZjZkLXIwL2dpdC90b29scy9saWJzJwptYWtlWzVdOiBFbnRlcmluZyBkaXJlY3RvcnkgJy9ob21l
L25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1w
b3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAv
Z2l0L3Rvb2xzL2xpYnMnCm1ha2UgLUMgZ250dGFiIGluc3RhbGwKbWFrZVs2XTogRW50ZXJpbmcg
ZGlyZWN0b3J5ICcvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90
bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lO
QythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2dudHRhYicKbWFrZSBsaWJzCm1ha2VbN106
IEVudGVyaW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWly
ZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0
YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWInCmZvciBpIGlu
IGluY2x1ZGUveGVuZ250dGFiLmg7IGRvIFwKICAgIGFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2Nj
ICAtLXN5c3Jvb3Q9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAv
dG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9J
TkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdCAgLXggYyAtYW5zaSAtV2FsbCAtV2Vycm9y
IC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsv
YXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3
MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSBcCiAg
ICAgICAgICAtUyAtbyAvZGV2L251bGwgJGkgfHwgZXhpdCAxOyBcCiAgICBlY2hvICRpOyBcCmRv
bmUgPmhlYWRlcnMuY2hrLm5ldwphYXJjaDY0LXBvcnRhYmxlLWxpbnV4LWdjYyAgLS1zeXNyb290
PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2Fy
bXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBm
NmQtcjAvcmVjaXBlLXN5c3Jvb3QgICAtREJVSUxEX0lEIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1z
dGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0
YXRlbWVudCAtV25vLXVudXNlZC1idXQtc2V0LXZhcmlhYmxlIC1Xbm8tdW51c2VkLWxvY2FsLXR5
cGVkZWZzICAgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1EX19YRU5fSU5URVJGQUNFX1ZFUlNJ
T05fXz1fX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyAtTU1EIC1NRiAuZ250dGFiX2Nv
cmUuby5kIC1EX0xBUkdFRklMRV9TT1VSQ0UgLURfTEFSR0VGSUxFNjRfU09VUkNFICAtbWFyY2g9
YXJtdjgtYStjcmMrY3J5cHRvIC1mc3RhY2stcHJvdGVjdG9yLXN0cm9uZyAgLU8yIC1EX0ZPUlRJ
RllfU09VUkNFPTIgLVdmb3JtYXQgLVdmb3JtYXQtc2VjdXJpdHkgLVdlcnJvcj1mb3JtYXQtc2Vj
dXJpdHkgIC1PMiAtcGlwZSAtZyAtZmVsaW1pbmF0ZS11bnVzZWQtZGVidWctdHlwZXMgLWZtYWNy
by1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1h
cD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9h
cm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcw
ZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3
MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9s
ZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRh
YmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNp
cGUtc3lzcm9vdD0gICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21l
L25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1w
b3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAv
cmVjaXBlLXN5c3Jvb3QtbmF0aXZlPSAgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAtSS4v
aW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3Rt
cC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5D
K2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4uLy4uLy4uL3Rvb2xzL2luY2x1
ZGUgIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dv
cmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYz
MjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvbGlicy90b29s
bG9nL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0
MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2dudHRhYi8uLi8uLi8uLi90b29scy9p
bmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3Rt
cC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5D
K2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4uLy4uLy4uL3Rvb2xzL2xpYnMv
dG9vbGNvcmUvaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVu
ZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFi
bGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4uLy4uLy4uL3Rv
b2xzL2luY2x1ZGUgIC1jIC1vIGdudHRhYl9jb3JlLm8gZ250dGFiX2NvcmUuYyAKYWFyY2g2NC1w
b3J0YWJsZS1saW51eC1nY2MgIC0tc3lzcm9vdD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93
aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xz
LzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290ICAgLURCVUlM
RF9JRCAtZm5vLXN0cmljdC1hbGlhc2luZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3Rv
dHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRlci1zdGF0ZW1lbnQgLVduby11bnVzZWQtYnV0LXNldC12
YXJpYWJsZSAtV25vLXVudXNlZC1sb2NhbC10eXBlZGVmcyAgIC1PMiAtZm9taXQtZnJhbWUtcG9p
bnRlciAtRF9fWEVOX0lOVEVSRkFDRV9WRVJTSU9OX189X19YRU5fTEFURVNUX0lOVEVSRkFDRV9W
RVJTSU9OX18gLU1NRCAtTUYgLmdudHNocl9jb3JlLm8uZCAtRF9MQVJHRUZJTEVfU09VUkNFIC1E
X0xBUkdFRklMRTY0X1NPVVJDRSAgLW1hcmNoPWFybXY4LWErY3JjK2NyeXB0byAtZnN0YWNrLXBy
b3RlY3Rvci1zdHJvbmcgIC1PMiAtRF9GT1JUSUZZX1NPVVJDRT0yIC1XZm9ybWF0IC1XZm9ybWF0
LXNlY3VyaXR5IC1XZXJyb3I9Zm9ybWF0LXNlY3VyaXR5ICAtTzIgLXBpcGUgLWcgLWZlbGltaW5h
dGUtdW51c2VkLWRlYnVnLXR5cGVzIC1mbWFjcm8tcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9p
cmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGlu
dXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2Rl
YnVnL3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAg
ICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQv
d2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29s
cy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9v
bHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAg
LWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVu
ZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFi
bGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3Q9ICAgICAgICAgICAgICAgICAg
ICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1l
aXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcr
c3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZT0gIC1XZXJy
b3IgLVdtaXNzaW5nLXByb3RvdHlwZXMgLUkuL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVu
ZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgv
eGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJz
L2dudHRhYi8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250
dGFiLy4uLy4uLy4uL3Rvb2xzL2xpYnMvdG9vbGxvZy9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAv
aXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxp
bnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMv
bGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVu
ZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgv
eGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJz
L2dudHRhYi8uLi8uLi8uLi90b29scy9saWJzL3Rvb2xjb3JlL2luY2x1ZGUgLUkvaG9tZS9ub2xl
MjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFi
bGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90
b29scy9saWJzL2dudHRhYi8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtYyAtbyBnbnRzaHJfY29y
ZS5vIGdudHNocl9jb3JlLmMgCmFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2NjICAtLXN5c3Jvb3Q9
L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJt
djhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2
ZC1yMC9yZWNpcGUtc3lzcm9vdCAgIC1EQlVJTERfSUQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0
ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3Rh
dGVtZW50IC1Xbm8tdW51c2VkLWJ1dC1zZXQtdmFyaWFibGUgLVduby11bnVzZWQtbG9jYWwtdHlw
ZWRlZnMgICAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLURfX1hFTl9JTlRFUkZBQ0VfVkVSU0lP
Tl9fPV9fWEVOX0xBVEVTVF9JTlRFUkZBQ0VfVkVSU0lPTl9fIC1NTUQgLU1GIC5saW51eC5vLmQg
LURfTEFSR0VGSUxFX1NPVVJDRSAtRF9MQVJHRUZJTEU2NF9TT1VSQ0UgIC1tYXJjaD1hcm12OC1h
K2NyYytjcnlwdG8gLWZzdGFjay1wcm90ZWN0b3Itc3Ryb25nICAtTzIgLURfRk9SVElGWV9TT1VS
Q0U9MiAtV2Zvcm1hdCAtV2Zvcm1hdC1zZWN1cml0eSAtV2Vycm9yPWZvcm1hdC1zZWN1cml0eSAg
LU8yIC1waXBlIC1nIC1mZWxpbWluYXRlLXVudXNlZC1kZWJ1Zy10eXBlcyAtZm1hY3JvLXByZWZp
eC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dv
cmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYz
MjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21l
L25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1w
b3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjA9
L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIw
ICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9p
cmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGlu
dXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNy
b290PSAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIz
OTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxl
LWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUt
c3lzcm9vdC1uYXRpdmU9ICAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLi9pbmNsdWRl
IC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsv
YXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3
MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLUkv
aG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12
OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZk
LXIwL2dpdC90b29scy9saWJzL2dudHRhYi8uLi8uLi8uLi90b29scy9saWJzL3Rvb2xsb2cvaW5j
bHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93
b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUg
IC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsv
YXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3
MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvbGlicy90b29sY29y
ZS9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAv
dG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9J
TkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvaW5j
bHVkZSAgLWMgLW8gbGludXgubyBsaW51eC5jIAphYXJjaDY0LXBvcnRhYmxlLWxpbnV4LWdjYyAg
LS1zeXNyb290PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3Rt
cC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5D
K2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QgICAtRFBJQyAtREJVSUxEX0lEIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV25vLXVudXNlZC1idXQtc2V0LXZhcmlhYmxlIC1Xbm8t
dW51c2VkLWxvY2FsLXR5cGVkZWZzICAgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1EX19YRU5f
SU5URVJGQUNFX1ZFUlNJT05fXz1fX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyAtTU1E
IC1NRiAuZ250dGFiX2NvcmUub3BpYy5kIC1EX0xBUkdFRklMRV9TT1VSQ0UgLURfTEFSR0VGSUxF
NjRfU09VUkNFICAtbWFyY2g9YXJtdjgtYStjcmMrY3J5cHRvIC1mc3RhY2stcHJvdGVjdG9yLXN0
cm9uZyAgLU8yIC1EX0ZPUlRJRllfU09VUkNFPTIgLVdmb3JtYXQgLVdmb3JtYXQtc2VjdXJpdHkg
LVdlcnJvcj1mb3JtYXQtc2VjdXJpdHkgIC1PMiAtcGlwZSAtZyAtZmVsaW1pbmF0ZS11bnVzZWQt
ZGVidWctdHlwZXMgLWZtYWNyby1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxk
L3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9v
bHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRv
b2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAg
IC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJl
bmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3Rh
YmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0
YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXBy
ZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdD0gICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1
Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QtbmF0aXZlPSAgLVdlcnJvciAtV21pc3Np
bmctcHJvdG90eXBlcyAtSS4vaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4u
Ly4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUt
ZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3
K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4v
Li4vdG9vbHMvbGlicy90b29sbG9nL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWls
ZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRv
b2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2dudHRh
Yi8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4u
Ly4uLy4uL3Rvb2xzL2xpYnMvdG9vbGNvcmUvaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMv
Z250dGFiLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1mUElDIC1jIC1vIGdudHRhYl9jb3JlLm9w
aWMgZ250dGFiX2NvcmUuYyAKYWFyY2g2NC1wb3J0YWJsZS1saW51eC1nY2MgIC0tc3lzcm9vdD0v
aG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12
OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZk
LXIwL3JlY2lwZS1zeXNyb290ICAgLURQSUMgLURCVUlMRF9JRCAtZm5vLXN0cmljdC1hbGlhc2lu
ZyAtc3RkPWdudTk5IC1XYWxsIC1Xc3RyaWN0LXByb3RvdHlwZXMgLVdkZWNsYXJhdGlvbi1hZnRl
ci1zdGF0ZW1lbnQgLVduby11bnVzZWQtYnV0LXNldC12YXJpYWJsZSAtV25vLXVudXNlZC1sb2Nh
bC10eXBlZGVmcyAgIC1PMiAtZm9taXQtZnJhbWUtcG9pbnRlciAtRF9fWEVOX0lOVEVSRkFDRV9W
RVJTSU9OX189X19YRU5fTEFURVNUX0lOVEVSRkFDRV9WRVJTSU9OX18gLU1NRCAtTUYgLmdudHNo
cl9jb3JlLm9waWMuZCAtRF9MQVJHRUZJTEVfU09VUkNFIC1EX0xBUkdFRklMRTY0X1NPVVJDRSAg
LW1hcmNoPWFybXY4LWErY3JjK2NyeXB0byAtZnN0YWNrLXByb3RlY3Rvci1zdHJvbmcgIC1PMiAt
RF9GT1JUSUZZX1NPVVJDRT0yIC1XZm9ybWF0IC1XZm9ybWF0LXNlY3VyaXR5IC1XZXJyb3I9Zm9y
bWF0LXNlY3VyaXR5ICAtTzIgLXBpcGUgLWcgLWZlbGltaW5hdGUtdW51c2VkLWRlYnVnLXR5cGVz
IC1mbWFjcm8tcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJl
bmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3Rh
YmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0
YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXBy
ZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5D
K2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9o
b21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4
YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQt
cjAvcmVjaXBlLXN5c3Jvb3Q9ICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1h
cD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9h
cm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcw
ZjZkLXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZT0gIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlw
ZXMgLUkuL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUt
ZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2dudHRhYi8uLi8uLi8uLi90b29s
cy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4uLy4uLy4uL3Rvb2xzL2xp
YnMvdG9vbGxvZy9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWly
ZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0
YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4v
dG9vbHMvaW5jbHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUt
ZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2dudHRhYi8uLi8uLi8uLi90b29s
cy9saWJzL3Rvb2xjb3JlL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0
ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQu
MTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2dudHRhYi8uLi8u
Li8uLi90b29scy9pbmNsdWRlICAtZlBJQyAtYyAtbyBnbnRzaHJfY29yZS5vcGljIGdudHNocl9j
b3JlLmMgCmFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2NjICAtLXN5c3Jvb3Q9L2hvbWUvbm9sZTIz
OTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxl
LWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUt
c3lzcm9vdCAgIC1EUElDIC1EQlVJTERfSUQgLWZuby1zdHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5
OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50
IC1Xbm8tdW51c2VkLWJ1dC1zZXQtdmFyaWFibGUgLVduby11bnVzZWQtbG9jYWwtdHlwZWRlZnMg
ICAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLURfX1hFTl9JTlRFUkZBQ0VfVkVSU0lPTl9fPV9f
WEVOX0xBVEVTVF9JTlRFUkZBQ0VfVkVSU0lPTl9fIC1NTUQgLU1GIC5saW51eC5vcGljLmQgLURf
TEFSR0VGSUxFX1NPVVJDRSAtRF9MQVJHRUZJTEU2NF9TT1VSQ0UgIC1tYXJjaD1hcm12OC1hK2Ny
YytjcnlwdG8gLWZzdGFjay1wcm90ZWN0b3Itc3Ryb25nICAtTzIgLURfRk9SVElGWV9TT1VSQ0U9
MiAtV2Zvcm1hdCAtV2Zvcm1hdC1zZWN1cml0eSAtV2Vycm9yPWZvcm1hdC1zZWN1cml0eSAgLU8y
IC1waXBlIC1nIC1mZWxpbWluYXRlLXVudXNlZC1kZWJ1Zy10eXBlcyAtZm1hY3JvLXByZWZpeC1t
YXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsv
YXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3
MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIz
NzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25v
bGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0
YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vz
ci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwICAg
ICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVu
ZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgv
eGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290
PSAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAv
aXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxp
bnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lz
cm9vdC1uYXRpdmU9ICAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVzIC1JLi9pbmNsdWRlIC1J
L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJt
djhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2
ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLUkvaG9t
ZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEt
cG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIw
L2dpdC90b29scy9saWJzL2dudHRhYi8uLi8uLi8uLi90b29scy9saWJzL3Rvb2xsb2cvaW5jbHVk
ZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3Jr
L2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIz
NzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1J
L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJt
djhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2
ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvbGlicy90b29sY29yZS9p
bmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvaW5jbHVk
ZSAgLWZQSUMgLWMgLW8gbGludXgub3BpYyBsaW51eC5jIApta2RpciAtcCAvaG9tZS9ub2xlMjM5
MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUt
bGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29s
cy9wa2ctY29uZmlnCm12IGhlYWRlcnMuY2hrLm5ldyBoZWFkZXJzLmNoawphYXJjaDY0LXBvcnRh
YmxlLWxpbnV4LWFyIHJjIGxpYnhlbmdudHRhYi5hIGdudHRhYl9jb3JlLm8gZ250c2hyX2NvcmUu
byBsaW51eC5vCmFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2NjICAtLXN5c3Jvb3Q9L2hvbWUvbm9s
ZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRh
YmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNp
cGUtc3lzcm9vdCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIC1wdGhyZWFkIC1XbCwt
c29uYW1lIC1XbCxsaWJ4ZW5nbnR0YWIuc28uMSAtc2hhcmVkIC1XbCwtLXZlcnNpb24tc2NyaXB0
PWxpYnhlbmdudHRhYi5tYXAgLW8gbGlieGVuZ250dGFiLnNvLjEuMiBnbnR0YWJfY29yZS5vcGlj
IGdudHNocl9jb3JlLm9waWMgbGludXgub3BpYyAgIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxk
L3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9v
bHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFi
Ly4uLy4uLy4uL3Rvb2xzL2xpYnMvdG9vbGxvZy9saWJ4ZW50b29sbG9nLnNvICAgL2hvbWUvbm9s
ZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRh
YmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQv
dG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvbGlicy90b29sY29yZS9saWJ4ZW50b29s
Y29yZS5zbyAKbG4gLXNmIGxpYnhlbmdudHRhYi5zby4xLjIgbGlieGVuZ250dGFiLnNvLjEKbG4g
LXNmIGxpYnhlbmdudHRhYi5zby4xIGxpYnhlbmdudHRhYi5zbwptYWtlWzddOiBMZWF2aW5nIGRp
cmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWInCi9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMv
Z250dGFiLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9ob21lL25v
bGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0
YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0
L2Rpc3QvaW5zdGFsbC91c3IvbGliCi9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVp
cmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytz
dGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4uLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91
c3IvaW5jbHVkZQovaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90
bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lO
QythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2dudHRhYi8uLi8uLi8uLi90b29scy9jcm9z
cy1pbnN0YWxsIC1tMDc1NSAtcCBsaWJ4ZW5nbnR0YWIuc28uMS4yIC9ob21lL25vbGUyMzkwL2ly
ZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51
eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5z
dGFsbC91c3IvbGliCi9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZ250dGFiLy4uLy4uLy4uL3Rvb2xzL2Ny
b3NzLWluc3RhbGwgLW0wNjQ0IC1wIGxpYnhlbmdudHRhYi5hIC9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFs
bC91c3IvbGliCmxuIC1zZiBsaWJ4ZW5nbnR0YWIuc28uMS4yIC9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFs
bC91c3IvbGliL2xpYnhlbmdudHRhYi5zby4xCmxuIC1zZiBsaWJ4ZW5nbnR0YWIuc28uMSAvaG9t
ZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEt
cG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIw
L2dpdC9kaXN0L2luc3RhbGwvdXNyL2xpYi9saWJ4ZW5nbnR0YWIuc28KL2hvbWUvbm9sZTIzOTAv
aXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxp
bnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMv
bGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgaW5jbHVk
ZS94ZW5nbnR0YWIuaCAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0
MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwL2dpdC9kaXN0L2luc3RhbGwvdXNyL2luY2x1ZGUKL2hvbWUvbm9s
ZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRh
YmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQv
dG9vbHMvbGlicy9nbnR0YWIvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAg
eGVuZ250dGFiLnBjIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91c3IvbGliL3BrZ2NvbmZpZwptYWtl
WzZdOiBMZWF2aW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUt
ZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3
K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9nbnR0YWInCm1ha2Vb
NV06IExlYXZpbmcgZGlyZWN0b3J5ICcvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1l
aXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcr
c3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzJwptYWtlWzVdOiBFbnRl
cmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMnCm1ha2UgLUMgY2FsbCBpbnN0YWxs
Cm1ha2VbNl06IEVudGVyaW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQv
d2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29s
cy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsJwpt
YWtlIGxpYnMKbWFrZVs3XTogRW50ZXJpbmcgZGlyZWN0b3J5ICcvaG9tZS9ub2xlMjM5MC9pcmVu
ZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgv
eGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJz
L2NhbGwnCmZvciBpIGluIGluY2x1ZGUveGVuY2FsbC5oOyBkbyBcCiAgICBhYXJjaDY0LXBvcnRh
YmxlLWxpbnV4LWdjYyAgLS1zeXNyb290PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRl
LWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4x
NytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QgIC14IGMgLWFuc2kg
LVdhbGwgLVdlcnJvciAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8uLi8uLi90b29scy9p
bmNsdWRlIFwKICAgICAgICAgIC1TIC1vIC9kZXYvbnVsbCAkaSB8fCBleGl0IDE7IFwKICAgIGVj
aG8gJGk7IFwKZG9uZSA+aGVhZGVycy5jaGsubmV3CmFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2Nj
ICAtLXN5c3Jvb3Q9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAv
dG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9J
TkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdCAgIC1EQlVJTERfSUQgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50IC1Xbm8tdW51c2VkLWJ1dC1zZXQtdmFyaWFibGUgLVduby11bnVz
ZWQtbG9jYWwtdHlwZWRlZnMgICAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLURfX1hFTl9JTlRF
UkZBQ0VfVkVSU0lPTl9fPV9fWEVOX0xBVEVTVF9JTlRFUkZBQ0VfVkVSU0lPTl9fIC1NTUQgLU1G
IC5jb3JlLm8uZCAtRF9MQVJHRUZJTEVfU09VUkNFIC1EX0xBUkdFRklMRTY0X1NPVVJDRSAgLW1h
cmNoPWFybXY4LWErY3JjK2NyeXB0byAtZnN0YWNrLXByb3RlY3Rvci1zdHJvbmcgIC1PMiAtRF9G
T1JUSUZZX1NPVVJDRT0yIC1XZm9ybWF0IC1XZm9ybWF0LXNlY3VyaXR5IC1XZXJyb3I9Zm9ybWF0
LXNlY3VyaXR5ICAtTzIgLXBpcGUgLWcgLWZlbGltaW5hdGUtdW51c2VkLWRlYnVnLXR5cGVzIC1m
bWFjcm8tcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUt
ZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZp
eC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dv
cmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYz
MjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21l
L25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1w
b3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAv
cmVjaXBlLXN5c3Jvb3Q9ICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0v
aG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12
OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZk
LXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZT0gIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMg
LUkuL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0
MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvaW5j
bHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAv
d29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQyth
NjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvbGlicy90b29s
bG9nL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0
MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvaW5j
bHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAv
d29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQyth
NjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvbGlicy90b29s
Y29yZS9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsLy4uLy4uLy4uL3Rvb2xzL2lu
Y2x1ZGUgIC1jIC1vIGNvcmUubyBjb3JlLmMgCmFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2NjICAt
LXN5c3Jvb3Q9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdCAgIC1EQlVJTERfSUQgLWZuby1zdHJpY3QtYWxp
YXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRpb24t
YWZ0ZXItc3RhdGVtZW50IC1Xbm8tdW51c2VkLWJ1dC1zZXQtdmFyaWFibGUgLVduby11bnVzZWQt
bG9jYWwtdHlwZWRlZnMgICAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLURfX1hFTl9JTlRFUkZB
Q0VfVkVSU0lPTl9fPV9fWEVOX0xBVEVTVF9JTlRFUkZBQ0VfVkVSU0lPTl9fIC1NTUQgLU1GIC5i
dWZmZXIuby5kIC1EX0xBUkdFRklMRV9TT1VSQ0UgLURfTEFSR0VGSUxFNjRfU09VUkNFICAtbWFy
Y2g9YXJtdjgtYStjcmMrY3J5cHRvIC1mc3RhY2stcHJvdGVjdG9yLXN0cm9uZyAgLU8yIC1EX0ZP
UlRJRllfU09VUkNFPTIgLVdmb3JtYXQgLVdmb3JtYXQtc2VjdXJpdHkgLVdlcnJvcj1mb3JtYXQt
c2VjdXJpdHkgIC1PMiAtcGlwZSAtZyAtZmVsaW1pbmF0ZS11bnVzZWQtZGVidWctdHlwZXMgLWZt
YWNyby1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4
LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29y
ay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMy
MzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYz
MjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUv
bm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBv
cnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9y
ZWNpcGUtc3lzcm9vdD0gICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9o
b21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4
YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQt
cjAvcmVjaXBlLXN5c3Jvb3QtbmF0aXZlPSAgLVdlcnJvciAtV21pc3NpbmctcHJvdG90eXBlcyAt
SS4vaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8uLi8uLi90b29scy9pbmNs
dWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93
b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8uLi8uLi90b29scy9saWJzL3Rvb2xs
b2cvaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8uLi8uLi90b29scy9pbmNs
dWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93
b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8uLi8uLi90b29scy9saWJzL3Rvb2xj
b3JlL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0
MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvaW5j
bHVkZSAgLWMgLW8gYnVmZmVyLm8gYnVmZmVyLmMgCmFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2Nj
ICAtLXN5c3Jvb3Q9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAv
dG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9J
TkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdCAgIC1EQlVJTERfSUQgLWZuby1zdHJpY3Qt
YWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVjbGFyYXRp
b24tYWZ0ZXItc3RhdGVtZW50IC1Xbm8tdW51c2VkLWJ1dC1zZXQtdmFyaWFibGUgLVduby11bnVz
ZWQtbG9jYWwtdHlwZWRlZnMgICAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLURfX1hFTl9JTlRF
UkZBQ0VfVkVSU0lPTl9fPV9fWEVOX0xBVEVTVF9JTlRFUkZBQ0VfVkVSU0lPTl9fIC1NTUQgLU1G
IC5saW51eC5vLmQgLURfTEFSR0VGSUxFX1NPVVJDRSAtRF9MQVJHRUZJTEU2NF9TT1VSQ0UgIC1t
YXJjaD1hcm12OC1hK2NyYytjcnlwdG8gLWZzdGFjay1wcm90ZWN0b3Itc3Ryb25nICAtTzIgLURf
Rk9SVElGWV9TT1VSQ0U9MiAtV2Zvcm1hdCAtV2Zvcm1hdC1zZWN1cml0eSAtV2Vycm9yPWZvcm1h
dC1zZWN1cml0eSAgLU8yIC1waXBlIC1nIC1mZWxpbWluYXRlLXVudXNlZC1kZWJ1Zy10eXBlcyAt
Zm1hY3JvLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5l
LWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMvNC4xNytzdGFi
bGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVm
aXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93
b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQyth
NjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9t
ZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEt
cG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIw
L3JlY2lwZS1zeXNyb290PSAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9
L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJt
djhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2
ZC1yMC9yZWNpcGUtc3lzcm9vdC1uYXRpdmU9ICAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVz
IC1JLi9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsLy4uLy4uLy4uL3Rvb2xzL2lu
Y2x1ZGUgIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsLy4uLy4uLy4uL3Rvb2xzL2xpYnMvdG9v
bGxvZy9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsLy4uLy4uLy4uL3Rvb2xzL2lu
Y2x1ZGUgIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsLy4uLy4uLy4uL3Rvb2xzL2xpYnMvdG9v
bGNvcmUvaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8uLi8uLi90b29scy9p
bmNsdWRlICAtYyAtbyBsaW51eC5vIGxpbnV4LmMgCmFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2Nj
ICAtLXN5c3Jvb3Q9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAv
dG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9J
TkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdCAgIC1EUElDIC1EQlVJTERfSUQgLWZuby1z
dHJpY3QtYWxpYXNpbmcgLXN0ZD1nbnU5OSAtV2FsbCAtV3N0cmljdC1wcm90b3R5cGVzIC1XZGVj
bGFyYXRpb24tYWZ0ZXItc3RhdGVtZW50IC1Xbm8tdW51c2VkLWJ1dC1zZXQtdmFyaWFibGUgLVdu
by11bnVzZWQtbG9jYWwtdHlwZWRlZnMgICAtTzIgLWZvbWl0LWZyYW1lLXBvaW50ZXIgLURfX1hF
Tl9JTlRFUkZBQ0VfVkVSU0lPTl9fPV9fWEVOX0xBVEVTVF9JTlRFUkZBQ0VfVkVSU0lPTl9fIC1N
TUQgLU1GIC5jb3JlLm9waWMuZCAtRF9MQVJHRUZJTEVfU09VUkNFIC1EX0xBUkdFRklMRTY0X1NP
VVJDRSAgLW1hcmNoPWFybXY4LWErY3JjK2NyeXB0byAtZnN0YWNrLXByb3RlY3Rvci1zdHJvbmcg
IC1PMiAtRF9GT1JUSUZZX1NPVVJDRT0yIC1XZm9ybWF0IC1XZm9ybWF0LXNlY3VyaXR5IC1XZXJy
b3I9Zm9ybWF0LXNlY3VyaXR5ICAtTzIgLXBpcGUgLWcgLWZlbGltaW5hdGUtdW51c2VkLWRlYnVn
LXR5cGVzIC1mbWFjcm8tcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0
ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQu
MTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80
LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRl
YnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgt
bWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3Jr
L2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIz
NzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3Q9ICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJl
Zml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAv
d29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQyth
NjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZT0gIC1XZXJyb3IgLVdtaXNzaW5nLXBy
b3RvdHlwZXMgLUkuL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1l
aXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcr
c3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4v
dG9vbHMvaW5jbHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUt
ZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMv
bGlicy90b29sbG9nL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1l
aXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcr
c3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4v
dG9vbHMvaW5jbHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUt
ZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMv
bGlicy90b29sY29yZS9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUt
ZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3
K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsLy4uLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1mUElDIC1jIC1vIGNvcmUub3BpYyBjb3JlLmMgCm12IGhlYWRlcnMu
Y2hrLm5ldyBoZWFkZXJzLmNoawphYXJjaDY0LXBvcnRhYmxlLWxpbnV4LWdjYyAgLS1zeXNyb290
PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2Fy
bXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBm
NmQtcjAvcmVjaXBlLXN5c3Jvb3QgICAtRFBJQyAtREJVSUxEX0lEIC1mbm8tc3RyaWN0LWFsaWFz
aW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9uLWFm
dGVyLXN0YXRlbWVudCAtV25vLXVudXNlZC1idXQtc2V0LXZhcmlhYmxlIC1Xbm8tdW51c2VkLWxv
Y2FsLXR5cGVkZWZzICAgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1EX19YRU5fSU5URVJGQUNF
X1ZFUlNJT05fXz1fX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyAtTU1EIC1NRiAuYnVm
ZmVyLm9waWMuZCAtRF9MQVJHRUZJTEVfU09VUkNFIC1EX0xBUkdFRklMRTY0X1NPVVJDRSAgLW1h
cmNoPWFybXY4LWErY3JjK2NyeXB0byAtZnN0YWNrLXByb3RlY3Rvci1zdHJvbmcgIC1PMiAtRF9G
T1JUSUZZX1NPVVJDRT0yIC1XZm9ybWF0IC1XZm9ybWF0LXNlY3VyaXR5IC1XZXJyb3I9Zm9ybWF0
LXNlY3VyaXR5ICAtTzIgLXBpcGUgLWcgLWZlbGltaW5hdGUtdW51c2VkLWRlYnVnLXR5cGVzIC1m
bWFjcm8tcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUt
ZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxl
QVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZp
eC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dv
cmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYz
MjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21l
L25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1w
b3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAv
cmVjaXBlLXN5c3Jvb3Q9ICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0v
aG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12
OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZk
LXIwL3JlY2lwZS1zeXNyb290LW5hdGl2ZT0gIC1XZXJyb3IgLVdtaXNzaW5nLXByb3RvdHlwZXMg
LUkuL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0
MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvaW5j
bHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAv
d29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQyth
NjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvbGlicy90b29s
bG9nL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0
MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvaW5j
bHVkZSAgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAv
d29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQyth
NjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvbGlicy90b29s
Y29yZS9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsLy4uLy4uLy4uL3Rvb2xzL2lu
Y2x1ZGUgIC1mUElDIC1jIC1vIGJ1ZmZlci5vcGljIGJ1ZmZlci5jIAphYXJjaDY0LXBvcnRhYmxl
LWxpbnV4LWdjYyAgLS1zeXNyb290PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVp
cmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytz
dGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QgICAtRFBJQyAtREJVSUxE
X0lEIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90
eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV25vLXVudXNlZC1idXQtc2V0LXZh
cmlhYmxlIC1Xbm8tdW51c2VkLWxvY2FsLXR5cGVkZWZzICAgLU8yIC1mb21pdC1mcmFtZS1wb2lu
dGVyIC1EX19YRU5fSU5URVJGQUNFX1ZFUlNJT05fXz1fX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZF
UlNJT05fXyAtTU1EIC1NRiAubGludXgub3BpYy5kIC1EX0xBUkdFRklMRV9TT1VSQ0UgLURfTEFS
R0VGSUxFNjRfU09VUkNFICAtbWFyY2g9YXJtdjgtYStjcmMrY3J5cHRvIC1mc3RhY2stcHJvdGVj
dG9yLXN0cm9uZyAgLU8yIC1EX0ZPUlRJRllfU09VUkNFPTIgLVdmb3JtYXQgLVdmb3JtYXQtc2Vj
dXJpdHkgLVdlcnJvcj1mb3JtYXQtc2VjdXJpdHkgIC1PMiAtcGlwZSAtZyAtZmVsaW1pbmF0ZS11
bnVzZWQtZGVidWctdHlwZXMgLWZtYWNyby1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcv
eGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAgICAg
ICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0
ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQu
MTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80
LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRl
YnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdD0gICAgICAgICAgICAgICAgICAgICAg
LWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVu
ZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFi
bGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QtbmF0aXZlPSAgLVdlcnJvciAt
V21pc3NpbmctcHJvdG90eXBlcyAtSS4vaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2Fs
bC8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8u
Li8uLi90b29scy9saWJzL3Rvb2xsb2cvaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2Fs
bC8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8u
Li8uLi90b29scy9saWJzL3Rvb2xjb3JlL2luY2x1ZGUgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9i
dWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVu
LXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2Nh
bGwvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAgLWZQSUMgLWMgLW8gbGludXgub3BpYyBsaW51eC5j
IApta2RpciAtcCAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90
bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lO
QythNjMyMzcwZjZkLXIwL2dpdC90b29scy9wa2ctY29uZmlnCmFhcmNoNjQtcG9ydGFibGUtbGlu
dXgtZ2NjICAtLXN5c3Jvb3Q9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5l
LWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdCAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIC1wdGhyZWFkIC1XbCwtc29uYW1lIC1XbCxsaWJ4ZW5jYWxsLnNvLjEgLXNo
YXJlZCAtV2wsLS12ZXJzaW9uLXNjcmlwdD1saWJ4ZW5jYWxsLm1hcCAtbyBsaWJ4ZW5jYWxsLnNv
LjEuMiBjb3JlLm9waWMgYnVmZmVyLm9waWMgbGludXgub3BpYyAgIC9ob21lL25vbGUyMzkwL2ly
ZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51
eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xp
YnMvY2FsbC8uLi8uLi8uLi90b29scy9saWJzL3Rvb2xsb2cvbGlieGVudG9vbGxvZy5zbyAgIC9o
b21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4
YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQt
cjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8uLi8uLi90b29scy9saWJzL3Rvb2xjb3JlL2xpYnhl
bnRvb2xjb3JlLnNvIAphYXJjaDY0LXBvcnRhYmxlLWxpbnV4LWFyIHJjIGxpYnhlbmNhbGwuYSBj
b3JlLm8gYnVmZmVyLm8gbGludXgubwpsbiAtc2YgbGlieGVuY2FsbC5zby4xLjIgbGlieGVuY2Fs
bC5zby4xCmxuIC1zZiBsaWJ4ZW5jYWxsLnNvLjEgbGlieGVuY2FsbC5zbwptYWtlWzddOiBMZWF2
aW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsJwovaG9tZS9ub2xlMjM5MC9p
cmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGlu
dXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9s
aWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtZCAtbTA3NTUgLXAgL2hvbWUv
bm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBv
cnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9n
aXQvZGlzdC9pbnN0YWxsL3Vzci9saWIKL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUt
ZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3
K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsLy4uLy4uLy4u
L3Rvb2xzL2Nyb3NzLWluc3RhbGwgLWQgLW0wNzU1IC1wIC9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rpc3QvaW5zdGFsbC91
c3IvaW5jbHVkZQovaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90
bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lO
QythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4vLi4vdG9vbHMvY3Jvc3Mt
aW5zdGFsbCAtbTA3NTUgLXAgbGlieGVuY2FsbC5zby4xLjIgL2hvbWUvbm9sZTIzOTAvaXJlbmUv
YnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hl
bi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvZGlzdC9pbnN0YWxs
L3Vzci9saWIKL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1w
L3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMr
YTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsLy4uLy4uLy4uL3Rvb2xzL2Nyb3NzLWlu
c3RhbGwgLW0wNjQ0IC1wIGxpYnhlbmNhbGwuYSAvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93
aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xz
LzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC9kaXN0L2luc3RhbGwvdXNyL2xp
YgpsbiAtc2YgbGlieGVuY2FsbC5zby4xLjIgL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hp
dGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80
LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvZGlzdC9pbnN0YWxsL3Vzci9saWIv
bGlieGVuY2FsbC5zby4xCmxuIC1zZiBsaWJ4ZW5jYWxsLnNvLjEgL2hvbWUvbm9sZTIzOTAvaXJl
bmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4
L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvZGlzdC9pbnN0
YWxsL3Vzci9saWIvbGlieGVuY2FsbC5zbwovaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0
ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQu
MTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2NhbGwvLi4vLi4v
Li4vdG9vbHMvY3Jvc3MtaW5zdGFsbCAtbTA2NDQgLXAgaW5jbHVkZS94ZW5jYWxsLmggL2hvbWUv
bm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBv
cnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9n
aXQvZGlzdC9pbnN0YWxsL3Vzci9pbmNsdWRlCi9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvY2FsbC8uLi8u
Li8uLi90b29scy9jcm9zcy1pbnN0YWxsIC1tMDY0NCAtcCB4ZW5jYWxsLnBjIC9ob21lL25vbGUy
MzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJs
ZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L2Rp
c3QvaW5zdGFsbC91c3IvbGliL3BrZ2NvbmZpZwptYWtlWzZdOiBMZWF2aW5nIGRpcmVjdG9yeSAn
L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJt
djhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2
ZC1yMC9naXQvdG9vbHMvbGlicy9jYWxsJwptYWtlWzVdOiBMZWF2aW5nIGRpcmVjdG9yeSAnL2hv
bWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhh
LXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1y
MC9naXQvdG9vbHMvbGlicycKbWFrZVs1XTogRW50ZXJpbmcgZGlyZWN0b3J5ICcvaG9tZS9ub2xl
MjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFi
bGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90
b29scy9saWJzJwptYWtlIC1DIGZvcmVpZ25tZW1vcnkgaW5zdGFsbAptYWtlWzZdOiBFbnRlcmlu
ZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1lbW9yeScKbWFrZSBsaWJz
Cm1ha2VbN106IEVudGVyaW5nIGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQv
d2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29s
cy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9mb3JlaWdu
bWVtb3J5Jwpmb3IgaSBpbiBpbmNsdWRlL3hlbmZvcmVpZ25tZW1vcnkuaDsgZG8gXAogICAgYWFy
Y2g2NC1wb3J0YWJsZS1saW51eC1nY2MgIC0tc3lzcm9vdD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9i
dWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVu
LXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290ICAt
eCBjIC1hbnNpIC1XYWxsIC1XZXJyb3IgLUkvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0
ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQu
MTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzL2ZvcmVpZ25tZW1v
cnkvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSBcCiAgICAgICAgICAtUyAtbyAvZGV2L251bGwgJGkg
fHwgZXhpdCAxOyBcCiAgICBlY2hvICRpOyBcCmRvbmUgPmhlYWRlcnMuY2hrLm5ldwphYXJjaDY0
LXBvcnRhYmxlLWxpbnV4LWdjYyAgLS1zeXNyb290PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxk
L3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9v
bHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QgICAtREJV
SUxEX0lEIC1mbm8tc3RyaWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJv
dG90eXBlcyAtV2RlY2xhcmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV25vLXVudXNlZC1idXQtc2V0
LXZhcmlhYmxlIC1Xbm8tdW51c2VkLWxvY2FsLXR5cGVkZWZzICAgLU8yIC1mb21pdC1mcmFtZS1w
b2ludGVyIC1EX19YRU5fSU5URVJGQUNFX1ZFUlNJT05fXz1fX1hFTl9MQVRFU1RfSU5URVJGQUNF
X1ZFUlNJT05fXyAtTU1EIC1NRiAuY29yZS5vLmQgLURfTEFSR0VGSUxFX1NPVVJDRSAtRF9MQVJH
RUZJTEU2NF9TT1VSQ0UgIC1tYXJjaD1hcm12OC1hK2NyYytjcnlwdG8gLWZzdGFjay1wcm90ZWN0
b3Itc3Ryb25nICAtTzIgLURfRk9SVElGWV9TT1VSQ0U9MiAtV2Zvcm1hdCAtV2Zvcm1hdC1zZWN1
cml0eSAtV2Vycm9yPWZvcm1hdC1zZWN1cml0eSAgLU8yIC1waXBlIC1nIC1mZWxpbWluYXRlLXVu
dXNlZC1kZWJ1Zy10eXBlcyAtZm1hY3JvLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUv
YnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hl
bi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAg
ICAgICAgLWZkZWJ1Zy1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRl
LWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4x
NytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQu
MTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVi
dWctcHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0
MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVU
T0lOQythNjMyMzcwZjZkLXIwL3JlY2lwZS1zeXNyb290PSAgICAgICAgICAgICAgICAgICAgICAt
ZmRlYnVnLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5l
LWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9yZWNpcGUtc3lzcm9vdC1uYXRpdmU9ICAtV2Vycm9yIC1X
bWlzc2luZy1wcm90b3R5cGVzIC1JLi9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVp
bGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10
b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9mb3Jl
aWdubWVtb3J5Ly4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUv
YnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hl
bi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9m
b3JlaWdubWVtb3J5Ly4uLy4uLy4uL3Rvb2xzL2xpYnMvdG9vbGxvZy9pbmNsdWRlIC1JL2hvbWUv
bm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBv
cnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9n
aXQvdG9vbHMvbGlicy9mb3JlaWdubWVtb3J5Ly4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL2hv
bWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhh
LXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1y
MC9naXQvdG9vbHMvbGlicy9mb3JlaWdubWVtb3J5Ly4uLy4uLy4uL3Rvb2xzL2xpYnMvdG9vbGNv
cmUvaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQw
L3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRP
SU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1lbW9yeS8uLi8uLi8uLi90
b29scy9pbmNsdWRlICAtYyAtbyBjb3JlLm8gY29yZS5jIAphYXJjaDY0LXBvcnRhYmxlLWxpbnV4
LWdjYyAgLS1zeXNyb290PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1l
dnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVB
VVRPSU5DK2E2MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QgICAtREJVSUxEX0lEIC1mbm8tc3Ry
aWN0LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xh
cmF0aW9uLWFmdGVyLXN0YXRlbWVudCAtV25vLXVudXNlZC1idXQtc2V0LXZhcmlhYmxlIC1Xbm8t
dW51c2VkLWxvY2FsLXR5cGVkZWZzICAgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1EX19YRU5f
SU5URVJGQUNFX1ZFUlNJT05fXz1fX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyAtTU1E
IC1NRiAubGludXguby5kIC1EX0xBUkdFRklMRV9TT1VSQ0UgLURfTEFSR0VGSUxFNjRfU09VUkNF
ICAtbWFyY2g9YXJtdjgtYStjcmMrY3J5cHRvIC1mc3RhY2stcHJvdGVjdG9yLXN0cm9uZyAgLU8y
IC1EX0ZPUlRJRllfU09VUkNFPTIgLVdmb3JtYXQgLVdmb3JtYXQtc2VjdXJpdHkgLVdlcnJvcj1m
b3JtYXQtc2VjdXJpdHkgIC1PMiAtcGlwZSAtZyAtZmVsaW1pbmF0ZS11bnVzZWQtZGVidWctdHlw
ZXMgLWZtYWNyby1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVp
cmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytz
dGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcr
c3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWct
cHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90
bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lO
QythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9J
TkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9
L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJt
djhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2
ZC1yMC9yZWNpcGUtc3lzcm9vdD0gICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgt
bWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3Jr
L2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIz
NzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QtbmF0aXZlPSAgLVdlcnJvciAtV21pc3NpbmctcHJvdG90
eXBlcyAtSS4vaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVu
ZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFi
bGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1lbW9yeS8uLi8u
Li8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVp
cmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytz
dGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1lbW9yeS8u
Li8uLi8uLi90b29scy9saWJzL3Rvb2xsb2cvaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMv
Zm9yZWlnbm1lbW9yeS8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2ly
ZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51
eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xp
YnMvZm9yZWlnbm1lbW9yeS8uLi8uLi8uLi90b29scy9saWJzL3Rvb2xjb3JlL2luY2x1ZGUgLUkv
aG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12
OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZk
LXIwL2dpdC90b29scy9saWJzL2ZvcmVpZ25tZW1vcnkvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAg
LWMgLW8gbGludXgubyBsaW51eC5jIAphYXJjaDY0LXBvcnRhYmxlLWxpbnV4LWdjYyAgLS1zeXNy
b290PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3Jr
L2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIz
NzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QgICAtRFBJQyAtREJVSUxEX0lEIC1mbm8tc3RyaWN0LWFs
aWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0aW9u
LWFmdGVyLXN0YXRlbWVudCAtV25vLXVudXNlZC1idXQtc2V0LXZhcmlhYmxlIC1Xbm8tdW51c2Vk
LWxvY2FsLXR5cGVkZWZzICAgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1EX19YRU5fSU5URVJG
QUNFX1ZFUlNJT05fXz1fX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyAtTU1EIC1NRiAu
Y29yZS5vcGljLmQgLURfTEFSR0VGSUxFX1NPVVJDRSAtRF9MQVJHRUZJTEU2NF9TT1VSQ0UgIC1t
YXJjaD1hcm12OC1hK2NyYytjcnlwdG8gLWZzdGFjay1wcm90ZWN0b3Itc3Ryb25nICAtTzIgLURf
Rk9SVElGWV9TT1VSQ0U9MiAtV2Zvcm1hdCAtV2Zvcm1hdC1zZWN1cml0eSAtV2Vycm9yPWZvcm1h
dC1zZWN1cml0eSAgLU8yIC1waXBlIC1nIC1mZWxpbWluYXRlLXVudXNlZC1kZWJ1Zy10eXBlcyAt
Zm1hY3JvLXByZWZpeC1tYXA9L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5l
LWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMD0vdXNyL3NyYy9kZWJ1Zy94ZW4tdG9vbHMvNC4xNytzdGFi
bGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAgICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVm
aXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93
b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQyth
NjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWctcHJlZml4LW1hcD0vaG9t
ZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEt
cG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIw
L3JlY2lwZS1zeXNyb290PSAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9
L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJt
djhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2
ZC1yMC9yZWNpcGUtc3lzcm9vdC1uYXRpdmU9ICAtV2Vycm9yIC1XbWlzc2luZy1wcm90b3R5cGVz
IC1JLi9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9mb3JlaWdubWVtb3J5Ly4uLy4uLy4u
L3Rvb2xzL2luY2x1ZGUgIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5l
LWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJs
ZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9mb3JlaWdubWVtb3J5Ly4uLy4u
Ly4uL3Rvb2xzL2xpYnMvdG9vbGxvZy9pbmNsdWRlIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVp
bGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10
b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9mb3Jl
aWdubWVtb3J5Ly4uLy4uLy4uL3Rvb2xzL2luY2x1ZGUgIC1JL2hvbWUvbm9sZTIzOTAvaXJlbmUv
YnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hl
bi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy9m
b3JlaWdubWVtb3J5Ly4uLy4uLy4uL3Rvb2xzL2xpYnMvdG9vbGNvcmUvaW5jbHVkZSAtSS9ob21l
L25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1w
b3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAv
Z2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1lbW9yeS8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtZlBJ
QyAtYyAtbyBjb3JlLm9waWMgY29yZS5jIAphYXJjaDY0LXBvcnRhYmxlLWxpbnV4LWdjYyAgLS1z
eXNyb290PS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93
b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2
MzIzNzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QgICAtRFBJQyAtREJVSUxEX0lEIC1mbm8tc3RyaWN0
LWFsaWFzaW5nIC1zdGQ9Z251OTkgLVdhbGwgLVdzdHJpY3QtcHJvdG90eXBlcyAtV2RlY2xhcmF0
aW9uLWFmdGVyLXN0YXRlbWVudCAtV25vLXVudXNlZC1idXQtc2V0LXZhcmlhYmxlIC1Xbm8tdW51
c2VkLWxvY2FsLXR5cGVkZWZzICAgLU8yIC1mb21pdC1mcmFtZS1wb2ludGVyIC1EX19YRU5fSU5U
RVJGQUNFX1ZFUlNJT05fXz1fX1hFTl9MQVRFU1RfSU5URVJGQUNFX1ZFUlNJT05fXyAtTU1EIC1N
RiAubGludXgub3BpYy5kIC1EX0xBUkdFRklMRV9TT1VSQ0UgLURfTEFSR0VGSUxFNjRfU09VUkNF
ICAtbWFyY2g9YXJtdjgtYStjcmMrY3J5cHRvIC1mc3RhY2stcHJvdGVjdG9yLXN0cm9uZyAgLU8y
IC1EX0ZPUlRJRllfU09VUkNFPTIgLVdmb3JtYXQgLVdmb3JtYXQtc2VjdXJpdHkgLVdlcnJvcj1m
b3JtYXQtc2VjdXJpdHkgIC1PMiAtcGlwZSAtZyAtZmVsaW1pbmF0ZS11bnVzZWQtZGVidWctdHlw
ZXMgLWZtYWNyby1wcmVmaXgtbWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVp
cmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytz
dGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjA9L3Vzci9zcmMvZGVidWcveGVuLXRvb2xzLzQuMTcr
c3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwICAgICAgICAgICAgICAgICAgICAgIC1mZGVidWct
cHJlZml4LW1hcD0vaG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90
bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lO
QythNjMyMzcwZjZkLXIwPS91c3Ivc3JjL2RlYnVnL3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9J
TkMrYTYzMjM3MGY2ZC1yMCAgICAgICAgICAgICAgICAgICAgICAtZmRlYnVnLXByZWZpeC1tYXA9
L2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJt
djhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2
ZC1yMC9yZWNpcGUtc3lzcm9vdD0gICAgICAgICAgICAgICAgICAgICAgLWZkZWJ1Zy1wcmVmaXgt
bWFwPS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3Jr
L2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIz
NzBmNmQtcjAvcmVjaXBlLXN5c3Jvb3QtbmF0aXZlPSAgLVdlcnJvciAtV21pc3NpbmctcHJvdG90
eXBlcyAtSS4vaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVu
ZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFi
bGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1lbW9yeS8uLi8u
Li8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVp
cmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytz
dGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1lbW9yeS8u
Li8uLi8uLi90b29scy9saWJzL3Rvb2xsb2cvaW5jbHVkZSAtSS9ob21lL25vbGUyMzkwL2lyZW5l
L2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94
ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMv
Zm9yZWlnbm1lbW9yeS8uLi8uLi8uLi90b29scy9pbmNsdWRlICAtSS9ob21lL25vbGUyMzkwL2ly
ZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51
eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xp
YnMvZm9yZWlnbm1lbW9yeS8uLi8uLi8uLi90b29scy9saWJzL3Rvb2xjb3JlL2luY2x1ZGUgLUkv
aG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12
OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZk
LXIwL2dpdC90b29scy9saWJzL2ZvcmVpZ25tZW1vcnkvLi4vLi4vLi4vdG9vbHMvaW5jbHVkZSAg
LWZQSUMgLWMgLW8gbGludXgub3BpYyBsaW51eC5jIAptdiBoZWFkZXJzLmNoay5uZXcgaGVhZGVy
cy5jaGsKbWtkaXIgLXAgL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvcGtnLWNvbmZpZwpsaW51eC5jOjE2NTo1MDog
ZXJyb3I6IGFyZ3VtZW50IDcgb2YgdHlwZSAnY29uc3QgeGVuX3Bmbl90W10nIHtha2EgJ2NvbnN0
IGxvbmcgdW5zaWduZWQgaW50W10nfSBkZWNsYXJlZCBhcyBhbiBvcmRpbmFyeSBhcnJheSBbLVdl
cnJvcj12bGEtcGFyYW1ldGVyXQogIDE2NSB8ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIGNvbnN0IHhlbl9wZm5fdCBhcnJbLypudW0qL10sIGludCBlcnJbLypudW0qL10pCiAgICAg
IHwgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgfn5+fn5+fn5+fn5+fn5+fl5+fn5+
fn5+fn5+fgpJbiBmaWxlIGluY2x1ZGVkIGZyb20gbGludXguYzoyOToKcHJpdmF0ZS5oOjM1OjUw
OiBub3RlOiBwcmV2aW91c2x5IGRlY2xhcmVkIGFzIGEgdmFyaWFibGUgbGVuZ3RoIGFycmF5ICdj
b25zdCB4ZW5fcGZuX3RbbnVtXScge2FrYSAnY29uc3QgbG9uZyB1bnNpZ25lZCBpbnRbbnVtXSd9
CiAgIDM1IHwgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3QgeGVuX3Bmbl90
IGFycltudW1dLCBpbnQgZXJyW251bV0pOwogICAgICB8ICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIH5+fn5+fn5+fn5+fn5+fn5efn5+fn5+fgpsaW51eC5jOjE2NTo2ODogZXJyb3I6
IGFyZ3VtZW50IDggb2YgdHlwZSAnaW50W10nIGRlY2xhcmVkIGFzIGFuIG9yZGluYXJ5IGFycmF5
IFstV2Vycm9yPXZsYS1wYXJhbWV0ZXJdCiAgMTY1IHwgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgY29uc3QgeGVuX3Bmbl90IGFyclsvKm51bSovXSwgaW50IGVyclsvKm51bSovXSkK
ICAgICAgfCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICB+fn5+Xn5+fn5+fn5+fn5+CkluIGZpbGUgaW5jbHVkZWQgZnJvbSBsaW51
eC5jOjI5Ogpwcml2YXRlLmg6MzU6NjQ6IG5vdGU6IHByZXZpb3VzbHkgZGVjbGFyZWQgYXMgYSB2
YXJpYWJsZSBsZW5ndGggYXJyYXkgJ2ludFtudW1dJwogICAzNSB8ICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIGNvbnN0IHhlbl9wZm5fdCBhcnJbbnVtXSwgaW50IGVycltudW1dKTsK
ICAgICAgfCAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIH5+fn5efn5+fn5+fgpsaW51eC5jOjE2NTo1MDogZXJyb3I6IGFyZ3VtZW50IDcg
b2YgdHlwZSAnY29uc3QgeGVuX3Bmbl90W10nIHtha2EgJ2NvbnN0IGxvbmcgdW5zaWduZWQgaW50
W10nfSBkZWNsYXJlZCBhcyBhbiBvcmRpbmFyeSBhcnJheSBbLVdlcnJvcj12bGEtcGFyYW1ldGVy
XQogIDE2NSB8ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNvbnN0IHhlbl9wZm5f
dCBhcnJbLypudW0qL10sIGludCBlcnJbLypudW0qL10pCiAgICAgIHwgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgfn5+fn5+fn5+fn5+fn5+fl5+fn5+fn5+fn5+fgpJbiBmaWxlIGlu
Y2x1ZGVkIGZyb20gbGludXguYzoyOToKcHJpdmF0ZS5oOjM1OjUwOiBub3RlOiBwcmV2aW91c2x5
IGRlY2xhcmVkIGFzIGEgdmFyaWFibGUgbGVuZ3RoIGFycmF5ICdjb25zdCB4ZW5fcGZuX3RbbnVt
XScge2FrYSAnY29uc3QgbG9uZyB1bnNpZ25lZCBpbnRbbnVtXSd9CiAgIDM1IHwgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgY29uc3QgeGVuX3Bmbl90IGFycltudW1dLCBpbnQgZXJy
W251bV0pOwogICAgICB8ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIH5+fn5+fn5+
fn5+fn5+fn5efn5+fn5+fgpsaW51eC5jOjE2NTo2ODogZXJyb3I6IGFyZ3VtZW50IDggb2YgdHlw
ZSAnaW50W10nIGRlY2xhcmVkIGFzIGFuIG9yZGluYXJ5IGFycmF5IFstV2Vycm9yPXZsYS1wYXJh
bWV0ZXJdCiAgMTY1IHwgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3QgeGVu
X3Bmbl90IGFyclsvKm51bSovXSwgaW50IGVyclsvKm51bSovXSkKICAgICAgfCAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB+fn5+
Xn5+fn5+fn5+fn5+CkluIGZpbGUgaW5jbHVkZWQgZnJvbSBsaW51eC5jOjI5Ogpwcml2YXRlLmg6
MzU6NjQ6IG5vdGU6IHByZXZpb3VzbHkgZGVjbGFyZWQgYXMgYSB2YXJpYWJsZSBsZW5ndGggYXJy
YXkgJ2ludFtudW1dJwogICAzNSB8ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGNv
bnN0IHhlbl9wZm5fdCBhcnJbbnVtXSwgaW50IGVycltudW1dKTsKICAgICAgfCAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIH5+fn5efn5+
fn5+fgpjYzE6IGFsbCB3YXJuaW5ncyBiZWluZyB0cmVhdGVkIGFzIGVycm9ycwptYWtlWzddOiAq
KiogWy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3Jr
L2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIz
NzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1lbW9yeS8uLi8uLi8uLi90b29scy9SdWxl
cy5tazoyMTM6IGxpbnV4Lm9dIEVycm9yIDEKbWFrZVs3XTogKioqIFdhaXRpbmcgZm9yIHVuZmlu
aXNoZWQgam9icy4uLi4KY2MxOiBhbGwgd2FybmluZ3MgYmVpbmcgdHJlYXRlZCBhcyBlcnJvcnMK
bWFrZVs3XTogTGVhdmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3do
aXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMv
NC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1l
bW9yeScKbWFrZVs2XTogTGVhdmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5lL2J1
aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4t
dG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9y
ZWlnbm1lbW9yeScKbWFrZVs1XTogTGVhdmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2ly
ZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51
eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xp
YnMnCm1ha2VbNF06IExlYXZpbmcgZGlyZWN0b3J5ICcvaG9tZS9ub2xlMjM5MC9pcmVuZS9idWls
ZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9ydGFibGUtbGludXgveGVuLXRv
b2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dpdC90b29scy9saWJzJwptYWtl
WzddOiAqKiogWy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRlLWVpcmVuZS1ldnQwL3Rt
cC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4xNytzdGFibGVBVVRPSU5D
K2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvZm9yZWlnbm1lbW9yeS8uLi8uLi8uLi90b29s
cy9SdWxlcy5tazoyMTA6IGxpbnV4Lm9waWNdIEVycm9yIDEKbWFrZVs2XTogKioqIFsvaG9tZS9u
b2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12OGEtcG9y
dGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZkLXIwL2dp
dC90b29scy9saWJzL2ZvcmVpZ25tZW1vcnkvLi4vLi4vLi4vdG9vbHMvbGlicy9saWJzLm1rOjQ1
OiBidWlsZF0gRXJyb3IgMgptYWtlWzVdOiAqKiogWy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxk
L3doaXRlLWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9v
bHMvNC4xNytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzL2xpYnMvLi4vLi4v
dG9vbHMvUnVsZXMubWs6MjM3OiBzdWJkaXItaW5zdGFsbC1mb3JlaWdubWVtb3J5XSBFcnJvciAy
Cm1ha2VbNF06ICoqKiBbL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvbGlicy8uLi8uLi90b29scy9SdWxlcy5tazoy
MzI6IHN1YmRpcnMtaW5zdGFsbF0gRXJyb3IgMgpFUlJPUjogb2VfcnVubWFrZSBmYWlsZWQKbWFr
ZVszXTogTGVhdmluZyBkaXJlY3RvcnkgJy9ob21lL25vbGUyMzkwL2lyZW5lL2J1aWxkL3doaXRl
LWVpcmVuZS1ldnQwL3RtcC93b3JrL2FybXY4YS1wb3J0YWJsZS1saW51eC94ZW4tdG9vbHMvNC4x
NytzdGFibGVBVVRPSU5DK2E2MzIzNzBmNmQtcjAvZ2l0L3Rvb2xzJwptYWtlWzJdOiBMZWF2aW5n
IGRpcmVjdG9yeSAnL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAv
dG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9J
TkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMnCm1ha2VbMV06IExlYXZpbmcgZGlyZWN0b3J5ICcv
aG9tZS9ub2xlMjM5MC9pcmVuZS9idWlsZC93aGl0ZS1laXJlbmUtZXZ0MC90bXAvd29yay9hcm12
OGEtcG9ydGFibGUtbGludXgveGVuLXRvb2xzLzQuMTcrc3RhYmxlQVVUT0lOQythNjMyMzcwZjZk
LXIwL2dpdC90b29scycKV0FSTklORzogZXhpdCBjb2RlIDEgZnJvbSBhIHNoZWxsIGNvbW1hbmQu
Cm1ha2VbM106ICoqKiBbL2hvbWUvbm9sZTIzOTAvaXJlbmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2
dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4L3hlbi10b29scy80LjE3K3N0YWJsZUFV
VE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvLi4vdG9vbHMvUnVsZXMubWs6MjM3OiBzdWJk
aXItaW5zdGFsbC1saWJzXSBFcnJvciAyCm1ha2VbMl06ICoqKiBbL2hvbWUvbm9sZTIzOTAvaXJl
bmUvYnVpbGQvd2hpdGUtZWlyZW5lLWV2dDAvdG1wL3dvcmsvYXJtdjhhLXBvcnRhYmxlLWxpbnV4
L3hlbi10b29scy80LjE3K3N0YWJsZUFVVE9JTkMrYTYzMjM3MGY2ZC1yMC9naXQvdG9vbHMvLi4v
dG9vbHMvUnVsZXMubWs6MjMyOiBzdWJkaXJzLWluc3RhbGxdIEVycm9yIDIKbWFrZVsxXTogKioq
IFtNYWtlZmlsZTo3MzogaW5zdGFsbF0gRXJyb3IgMgptYWtlOiAqKiogW01ha2VmaWxlOjEzNDog
aW5zdGFsbC10b29sc10gRXJyb3IgMgo=
--00000000000082ed6405fb680561--


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:16:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 10:16:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533200.829635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3LY-0001HS-Dh; Thu, 11 May 2023 10:16:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533200.829635; Thu, 11 May 2023 10:16:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3LY-0001HL-9q; Thu, 11 May 2023 10:16:28 +0000
Received: by outflank-mailman (input) for mailman id 533200;
 Thu, 11 May 2023 10:16:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/UO=BA=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1px3LW-0001HE-57
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 10:16:26 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on20622.outbound.protection.outlook.com
 [2a01:111:f400:7e83::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e0a35299-efe4-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 12:16:23 +0200 (CEST)
Received: from BN0PR04CA0157.namprd04.prod.outlook.com (2603:10b6:408:eb::12)
 by SN7PR12MB6742.namprd12.prod.outlook.com (2603:10b6:806:26e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22; Thu, 11 May
 2023 10:15:51 +0000
Received: from BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:eb:cafe::c2) by BN0PR04CA0157.outlook.office365.com
 (2603:10b6:408:eb::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22 via Frontend
 Transport; Thu, 11 May 2023 10:15:51 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT017.mail.protection.outlook.com (10.13.177.93) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.21 via Frontend Transport; Thu, 11 May 2023 10:15:51 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 05:15:50 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 05:15:48 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0a35299-efe4-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KDI3uoirfZMmSm1PVMaRnGSK/iYR/AKlBmPMxd09jxF9Y3nNos//ewhgiG68fVqWyWeO/0B3P8R9ffWZQUggRx6rNUeqjchybfTw/nA81LDdqkfubOaaLRBCwvkODlzaTFk82yX1f/VtYBwekJ09g62vpAZuq6vJSoNFWnzRUJ+k0nxdi/e+k0KS7zbb6PCVdUTuCs4CavEBww4d1eNfdJ/RVTF/BOaIGefYMOQDyxzpgeT9VS1Vx3GKdqPnV0SRo1uiSlbLoy6ZSAAS/YKU1AInKwr72ESpj4i4M7tk7AFz8h9UkvLfm+w3sZnh2KW34lZnMm3JBSEKNQVR/6ffgw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XXioXtcCU3QuRYrH1/5SG+i0rYuWO5H9vvQV0ZnrVz4=;
 b=aqguua0IhkUtrERG4TotfkIFj/SiW5zur00iLydR4Jq/g7/D2YYjgnN19fK+f7G44FUmp9jXmuxNLXlrPQPu6E39G1C7bb1VxZAYFuXePCQn8hs9T+/w2DJq5QeiAEkFjqgDP2p2Irf6IMCcioRg2oyfGo9CgrgOVkUm2Xh+2u/wa6/bph1+22+VzsppQyrozV0dsuVdAOmmtnJTgnjKG1a7+jZEBGLZyJaXZpb62oFRyLSEy0GfiVGAtXlV3qFBb8xaHb35aUk64G/+PCHPzNia7RDOEi0X+eADvfZwY8Q0Oq/XJfIIhDW6GI1yV7PK4t0hrLe2Z91M3mkglXleqQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=gmail.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XXioXtcCU3QuRYrH1/5SG+i0rYuWO5H9vvQV0ZnrVz4=;
 b=YokafbSMMHqV8YhID0QqliFH0Fwg1gjZshLuAMZUzBMgSyuvaeanLw5BuT8icndEw7P3Q5iNIXNTHGcwJi42y9WVzjg6lwd9edg12yqZZKPoWQFTNprz0uVj5eFwbIYpUXMUZIBSvka5w8ggaP7owvTQcbScN44CCh2lpAfTtHA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com>
Date: Thu, 11 May 2023 12:15:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: xen cache colors in ARM
To: Oleg Nikitenko <oleshiiwood@gmail.com>, Stefano Stabellini
	<sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>,
	<Stewart.Hildebrand@amd.com>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com>
 <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
 <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com>
 <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT017:EE_|SN7PR12MB6742:EE_
X-MS-Office365-Filtering-Correlation-Id: 9a6ddd2b-aa9b-4e44-cf14-08db5208b282
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ENpWlCbRCJTekKek3QebK9odpNRErgRykMK66NYF1ZbIgR/OvWo1A5TrPx1XvtAKeacyo2mNg/5tTUCeZN4QoqkCsbb3l8zXWt8STXipiiUiUahr9SA7UrEhVPpVosF9cNNhyPW+OlrvNREYAyHRGFPDZnR5pDaK+yoIrCdPHrF1yWikbD5YKvQMTXXmHcPSq7r8Y9B0XI496iNXn2UpKYxpCFTqE6pPoEPetAKLfz357lvrzQORE51oh/T0WN2bTEoyDEg7NKHSQwoqGvRvNZ0fv4ii9/9of1SCC4s3BHuHYBjCLVjyBOId0zZ/vudPwbGeUkTyjZqo1DK9nw/SzWN37mjFhQvfGSiUVAhTAWN1ZXNQlsHjVnldBwReqieQFzJDQLRM1TwmkMPNyZ2XPop6LPF5J3H1s0NAOmUixuepHjX0HwR/mcuv0P8r9w6LNUXzYF28exsIIRRhGYndc/pczA7uYOUT6qaXolb4MSE1B5o3gNi7m1yx4eNuhhjE7SjtkikxUTPBHga1/48MnAbiuyBxTKwXgteWRIPxIpQOFPPhg5W5A7fmEpjunLfeJuXsSKc8cVIsYEGY9+akI7Nvvfut9FYvRLD67TZMvgtTky+lQ8e5zfiVnQzbhTV3CpzfoXapOTZd/sQukuhG/8tmtztQDrMFFymgx5mQ5AAKVusywUHqaLCP8WeEU+XmInYlHg1Jvuqo/9+xZ4ymmG0+KyTiEWyfkP6IYBTyrk0epBtsaUPvr3MC/rwPeOWth6VZdV6pJthHw06s01CdFscerRv+XZ+FErWFyw31n3FAaJXbhZ6mEXhshway4KBbL6NWPDCCrnjCXo12cUaFhzqs8ZfW5woifq1+fU3w5VI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(6019001)(4636009)(396003)(376002)(136003)(39860400002)(346002)(279900001)(451199021)(46966006)(40470700004)(36840700001)(966005)(36860700001)(478600001)(40480700001)(2616005)(336012)(426003)(40460700003)(26005)(53546011)(356005)(81166007)(186003)(83380400001)(82740400003)(82310400005)(2906002)(30864003)(86362001)(36756003)(31686004)(4326008)(47076005)(31696002)(70586007)(70206006)(316002)(41300700001)(16576012)(54906003)(110136005)(44832011)(5660300002)(8936002)(8676002)(43740500002)(36900700001)(559001)(579004)(139555002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 10:15:51.0877
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9a6ddd2b-aa9b-4e44-cf14-08db5208b282
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT017.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB6742

Hi Oleg,

On 11/05/2023 12:02, Oleg Nikitenko wrote:
> 
> 
> Hello,
> 
> Thanks Stefano.
> Then the next question.
> I cloned the xen repo from the Xilinx site: https://github.com/Xilinx/xen.git
> I managed to build the xlnx_rebase_4.17 branch in my environment.
> I built it without coloring first, and I did not find any coloring footprints in this branch.
> I realized coloring is not in the xlnx_rebase_4.17 branch yet.
This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the docs:
https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst

It describes the feature and documents the required properties.

~Michal

> 
> 
> Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org>:
> 
>     We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
>     (twice a year) is tested with cache coloring enabled. The last Petalinux
>     release is 2023.1 and the kernel used is this:
>     https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
> 
> 
>     On Tue, 9 May 2023, Oleg Nikitenko wrote:
>     > Hello guys,
>     >
>     > I have a couple of more questions.
>     > Have you ever run xen with the cache coloring on a Zynq UltraScale+ MPSoC zcu102 (xczu15eg)?
>     > When did you run xen with the cache coloring last time ?
>     > What kernel version did you use for Dom0 when you ran xen with the cache coloring last time ?
>     >
>     > Regards,
>     > Oleg
>     >
>     > Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:
>     >       Hi Michal,
>     >
>     > Thanks.
>     >
>     > Regards,
>     > Oleg
>     >
>     > Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
>     >       Hi Oleg,
>     >
>     >       Replying, so that you do not need to wait for Stefano.
>     >
>     >       On 05/05/2023 10:28, Oleg Nikitenko wrote:
>     >       >       
>     >       >
>     >       >
>     >       > Hello Stefano,
>     >       >
>     >       > I would like to try the xen cache coloring property from this repo: https://xenbits.xen.org/git-http/xen.git
>     >       > Could you tell me what branch I should use?
>     >       Cache coloring feature is not part of the upstream tree and it is still under review.
>     >       You can only find it integrated in the Xilinx Xen tree.
>     >
>     >       ~Michal
>     >
>     >       >
>     >       > Regards,
>     >       > Oleg
>     >       >
>     >       > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org>:
>     >       >
>     >       >     I am familiar with the zcu102 but I don't know how you could possibly
>     >       >     generate a SError.
>     >       >
>     >       >     I suggest to try to use ImageBuilder [1] to generate the boot
>     >       >     configuration as a test because that is known to work well for zcu102.
>     >       >
>     >       >     [1] https://gitlab.com/xen-project/imagebuilder
>     >       >
>     >       >
>     >       >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
>     >       >     > Hello Stefano,
>     >       >     >
>     >       >     > Thanks for clarification.
>     >       >     > We neither use ImageBuilder nor a uboot boot script.
>     >       >     > A model is zcu102 compatible.
>     >       >     >
>     >       >     > Regards,
>     >       >     > O.
>     >       >     >
>     >       >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
>     >       >     >       This is interesting. Are you using Xilinx hardware by any chance? If so,
>     >       >     >       which board?
>     >       >     >
>     >       >     >       Are you using ImageBuilder to generate your boot.scr boot script? If so,
>     >       >     >       could you please post your ImageBuilder config file? If not, can you
>     >       >     >       post the source of your uboot boot script?
>     >       >     >
>     >       >     >       SErrors are supposed to be related to a hardware failure of some kind.
>     >       >     >       You are not supposed to be able to trigger an SError easily by
>     >       >     >       "mistake". I have not seen SErrors due to wrong cache coloring
>     >       >     >       configurations on any Xilinx board before.
>     >       >     >
>     >       >     >       The differences between Xen with and without cache coloring from a
>     >       >     >       hardware perspective are:
>     >       >     >
>     >       >     >       - With cache coloring, the SMMU is enabled and does address translations
>     >       >     >         even for dom0. Without cache coloring the SMMU could be disabled, and
>     >       >     >         if enabled, the SMMU doesn't do any address translations for Dom0. If
>     >       >     >         there is a hardware failure related to SMMU address translation it
>     >       >     >         could only trigger with cache coloring. This would be my normal
>     >       >     >         suggestion for you to explore, but the failure happens too early
>     >       >     >         before any DMA-capable device is programmed. So I don't think this can
>     >       >     >         be the issue.
>     >       >     >
>     >       >     >       - With cache coloring, the memory allocation is very different so you'll
>     >       >     >         end up using different DDR regions for Dom0. So if your DDR is
>     >       >     >         defective, you might only see a failure with cache coloring enabled
>     >       >     >         because you end up using different regions.
>     >       >     >
>     >       >     >
>     >       >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
>     >       >     >       > Hi Stefano,
>     >       >     >       >
>     >       >     >       > Thank you.
>     >       >     >       > If I build Xen without color support, this error does not occur.
>     >       >     >       > All the domains boot well.
>     >       >     >       > Hence it cannot be a hardware issue.
>     >       >     >       > The panic occurred while unpacking the rootfs.
>     >       >     >       > I have attached the Xen/Dom0 boot log without coloring.
>     >       >     >       > The highlighted lines were printed exactly after the point where the panic first occurred.
>     >       >     >       >
>     >       >     >       >  Xen 4.16.1-pre
>     >       >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
>     >       >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
>     >       >     >       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
>     >       >     >       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03,rev 0x4
>     >       >     >       > (XEN) 64-bit Execution:
>     >       >     >       > (XEN)   Processor Features: 0000000000002222 0000000000000000
>     >       >     >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
>     >       >     >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
>     >       >     >       > (XEN)   Debug Features: 0000000010305106 0000000000000000
>     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
>     >       >     >       > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
>     >       >     >       > (XEN)   ISA Features:  0000000000011120 0000000000000000
>     >       >     >       > (XEN) 32-bit Execution:
>     >       >     >       > (XEN)   Processor Features: 0000000000000131:0000000000011011
>     >       >     >       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
>     >       >     >       > (XEN)     Extensions: GenericTimer Security
>     >       >     >       > (XEN)   Debug Features: 0000000003010066
>     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000
>     >       >     >       > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
>     >       >     >       > (XEN)                          0000000001260000 0000000002102211
>     >       >     >       > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
>     >       >     >       > (XEN)                 0000000001112131 0000000000011142 0000000000011121
>     >       >     >       > (XEN) Using SMC Calling Convention v1.2
>     >       >     >       > (XEN) Using PSCI v1.1
>     >       >     >       > (XEN) SMP: Allowing 4 CPUs
>     >       >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
>     >       >     >       > (XEN) GICv2 initialization:
>     >       >     >       > (XEN)         gic_dist_addr=00000000f9010000
>     >       >     >       > (XEN)         gic_cpu_addr=00000000f9020000
>     >       >     >       > (XEN)         gic_hyp_addr=00000000f9040000
>     >       >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
>     >       >     >       > (XEN)         gic_maintenance_irq=25
>     >       >     >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
>     >       >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
>     >       >     >       > (XEN) Using scheduler: null Scheduler (null)
>     >       >     >       > (XEN) Initializing null scheduler
>     >       >     >       > (XEN) WARNING: This is experimental software in development.
>     >       >     >       > (XEN) Use at your own risk.
>     >       >     >       > (XEN) Allocated console ring of 32 KiB.
>     >       >     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
>     >       >     >       > (XEN) Bringing up CPU1
>     >       >     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
>     >       >     >       > (XEN) CPU 1 booted.
>     >       >     >       > (XEN) Bringing up CPU2
>     >       >     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
>     >       >     >       > (XEN) CPU 2 booted.
>     >       >     >       > (XEN) Bringing up CPU3
>     >       >     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
>     >       >     >       > (XEN) Brought up 4 CPUs
>     >       >     >       > (XEN) CPU 3 booted.
>     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
>     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
>     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
>     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
>     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
>     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
>     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
>     >       >     >       > (XEN) I/O virtualisation enabled
>     >       >     >       > (XEN)  - Dom0 mode: Relaxed
>     >       >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>     >       >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>     >       >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>     >       >     >       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
>     >       >     >       > (XEN) *** LOADING DOMAIN 0 ***
>     >       >     >       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
>     >       >     >       > (XEN) Loading ramdisk from boot module @ 0000000002000000
>     >       >     >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
>     >       >     >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
>     >       >     >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
>     >       >     >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
>     >       >     >       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
>     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
>     >       >     >       > (XEN) Allocating PPI 16 for event channel interrupt
>     >       >     >       > (XEN) Extended region 0: 0x81200000->0xa0000000
>     >       >     >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
>     >       >     >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
>     >       >     >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
>     >       >     >       > (XEN) Extended region 4: 0x100000000->0x600000000
>     >       >     >       > (XEN) Extended region 5: 0x880000000->0x8000000000
>     >       >     >       > (XEN) Extended region 6: 0x8001000000->0x10000000000
>     >       >     >       > (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
>     >       >     >       > (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
>     >       >     >       > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
>     >       >     >       > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>     >       >     >       > (XEN) Std. Loglevel: All
>     >       >     >       > (XEN) Guest Loglevel: All
>     >       >     >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
>     >       >     >       > (XEN) null.c:353: 0 <-- d0v0
>     >       >     >       > (XEN) Freed 356kB init memory.
>     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
>     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
>     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
>     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
>     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
>     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
>     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
>     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>     >       >     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
>     >       >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
>     >       >     >       > [    0.000000] Machine model: D14 Viper Board - White Unit
>     >       >     >       > [    0.000000] Xen 4.16 support found
>     >       >     >       > [    0.000000] Zone ranges:
>     >       >     >       > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
>     >       >     >       > [    0.000000]   DMA32    empty
>     >       >     >       > [    0.000000]   Normal   empty
>     >       >     >       > [    0.000000] Movable zone start for each node
>     >       >     >       > [    0.000000] Early memory node ranges
>     >       >     >       > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
>     >       >     >       > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
>     >       >     >       > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
>     >       >     >       > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
>     >       >     >       > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
>     >       >     >       > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
>     >       >     >       > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
>     >       >     >       > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
>     >       >     >       > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
>     >       >     >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
>     >       >     >       > [    0.000000] psci: probing for conduit method from DT.
>     >       >     >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
>     >       >     >       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
>     >       >     >       > [    0.000000] psci: Trusted OS migration not required
>     >       >     >       > [    0.000000] psci: SMC Calling Convention v1.1
>     >       >     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
>     >       >     >       > [    0.000000] Detected VIPT I-cache on CPU0
>     >       >     >       > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
>     >       >     >       > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
>     >       >     >       > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
>     >       >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
>     >       >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
>     >       >     >       > [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
>     >       >     >       > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
>     >       >     >       > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
>     >       >     >       > [    0.000000] mem auto-init: clearing system memory may take some time...
>     >       >     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
>     >       >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
>     >       >     >       > [    0.000000] rcu: Hierarchical RCU implementation.
>     >       >     >       > [    0.000000] rcu: RCU event tracing is enabled.
>     >       >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
>     >       >     >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
>     >       >     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
>     >       >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
>     >       >     >       > [    0.000000] Root IRQ handler: gic_handle_irq
>     >       >     >       > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
>     >       >     >       > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
>     >       >     >       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
>     >       >     >       > [    0.000258] Console: colour dummy device 80x25
>     >       >     >       > [    0.310231] printk: console [hvc0] enabled
>     >       >     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
>     >       >     >       > [    0.324851] pid_max: default: 32768 minimum: 301
>     >       >     >       > [    0.329706] LSM: Security Framework initializing
>     >       >     >       > [    0.334204] Yama: becoming mindful.
>     >       >     >       > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>     >       >     >       > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>     >       >     >       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
>     >       >     >       > [    0.359132] Grant table initialized
>     >       >     >       > [    0.362664] xen:events: Using FIFO-based ABI
>     >       >     >       > [    0.366993] Xen: initializing cpu0
>     >       >     >       > [    0.370515] rcu: Hierarchical SRCU implementation.
>     >       >     >       > [    0.375930] smp: Bringing up secondary CPUs ...
>     >       >     >       > (XEN) null.c:353: 1 <-- d0v1
>     >       >     >       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>     >       >     >       > [    0.382549] Detected VIPT I-cache on CPU1
>     >       >     >       > [    0.388712] Xen: initializing cpu1
>     >       >     >       > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
>     >       >     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
>     >       >     >       > [    0.406941] SMP: Total of 2 processors activated.
>     >       >     >       > [    0.411698] CPU features: detected: 32-bit EL0 Support
>     >       >     >       > [    0.416888] CPU features: detected: CRC32 instructions
>     >       >     >       > [    0.422121] CPU: All CPU(s) started at EL1
>     >       >     >       > [    0.426248] alternatives: patching kernel code
>     >       >     >       > [    0.431424] devtmpfs: initialized
>     >       >     >       > [    0.441454] KASLR enabled
>     >       >     >       > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
>     >       >     >       > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
>     >       >     >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
>     >       >     >       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
>     >       >     >       > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
>     >       >     >       > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
>     >       >     >       > [    0.519478] audit: initializing netlink subsys (disabled)
>     >       >     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
>     >       >     >       > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
>     >       >     >       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
>     >       >     >       > [    0.545608] ASID allocator initialised with 32768 entries
>     >       >     >       > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
>     >       >     >       > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
>     >       >     >       > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>     >       >     >       > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
>     >       >     >       > [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
>     >       >     >       > [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
>     >       >     >       > [    0.636520] DRBG: Continuing without Jitter RNG
>     >       >     >       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
>     >       >     >       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
>     >       >     >       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
>     >       >     >       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
>     >       >     >       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
>     >       >     >       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
>     >       >     >       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
>     >       >     >       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
>     >       >     >       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
>     >       >     >       > [    1.350132] raid6: int64x8  xor()   773 MB/s
>     >       >     >       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
>     >       >     >       > [    1.486349] raid6: int64x4  xor()   851 MB/s
>     >       >     >       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
>     >       >     >       > [    1.622561] raid6: int64x2  xor()   744 MB/s
>     >       >     >       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
>     >       >     >       > [    1.758770] raid6: int64x1  xor()   517 MB/s
>     >       >     >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
>     >       >     >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
>     >       >     >       > [    1.767957] raid6: using neon recovery algorithm
>     >       >     >       > [    1.772824] xen:balloon: Initialising balloon driver
>     >       >     >       > [    1.778021] iommu: Default domain type: Translated
>     >       >     >       > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
>     >       >     >       > [    1.789149] SCSI subsystem initialized
>     >       >     >       > [    1.792820] usbcore: registered new interface driver usbfs
>     >       >     >       > [    1.798254] usbcore: registered new interface driver hub
>     >       >     >       > [    1.803626] usbcore: registered new device driver usb
>     >       >     >       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
>     >       >     >       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
>     >       >     >       > [    1.822903] PTP clock support registered
>     >       >     >       > [    1.826893] EDAC MC: Ver: 3.0.0
>     >       >     >       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
>     >       >     >       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
>     >       >     >       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
>     >       >     >       > [    1.855907] FPGA manager framework
>     >       >     >       > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
>     >       >     >       > [    1.871712] NET: Registered PF_INET protocol family
>     >       >     >       > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
>     >       >     >       > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
>     >       >     >       > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
>     >       >     >       > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
>     >       >     >       > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
>     >       >     >       > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
>     >       >     >       > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
>     >       >     >       > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
>     >       >     >       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
>     >       >     >       > [    1.936834] RPC: Registered named UNIX socket transport module.
>     >       >     >       > [    1.942342] RPC: Registered udp transport module.
>     >       >     >       > [    1.947088] RPC: Registered tcp transport module.
>     >       >     >       > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
>     >       >     >       > [    1.958334] PCI: CLS 0 bytes, default 64
>     >       >     >       > [    1.962709] Trying to unpack rootfs image as initramfs...
>     >       >     >       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
>     >       >     >       > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
>     >       >     >       > [    2.021045] NET: Registered PF_ALG protocol family
>     >       >     >       > [    2.021122] xor: measuring software checksum speed
>     >       >     >       > [    2.029347]    8regs           :  2366 MB/sec
>     >       >     >       > [    2.033081]    32regs          :  2802 MB/sec
>     >       >     >       > [    2.038223]    arm64_neon      :  2320 MB/sec
>     >       >     >       > [    2.038385] xor: using function: 32regs (2802 MB/sec)
>     >       >     >       > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
>     >       >     >       > [    2.050959] io scheduler mq-deadline registered
>     >       >     >       > [    2.055521] io scheduler kyber registered
>     >       >     >       > [    2.068227] xen:xen_evtchn: Event-channel device installed
>     >       >     >       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
>     >       >     >       > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
>     >       >     >       > [    2.085548] brd: module loaded
>     >       >     >       > [    2.089290] loop: module loaded
>     >       >     >       > [    2.089341] Invalid max_queues (4), will use default max: 2.
>     >       >     >       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
>     >       >     >       > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
>     >       >     >       > [    2.104156] usbcore: registered new interface driver rtl8150
>     >       >     >       > [    2.109813] usbcore: registered new interface driver r8152
>     >       >     >       > [    2.115367] usbcore: registered new interface driver asix
>     >       >     >       > [    2.120794] usbcore: registered new interface driver ax88179_178a
>     >       >     >       > [    2.126934] usbcore: registered new interface driver cdc_ether
>     >       >     >       > [    2.132816] usbcore: registered new interface driver cdc_eem
>     >       >     >       > [    2.138527] usbcore: registered new interface driver net1080
>     >       >     >       > [    2.144256] usbcore: registered new interface driver cdc_subset
>     >       >     >       > [    2.150205] usbcore: registered new interface driver zaurus
>     >       >     >       > [    2.155837] usbcore: registered new interface driver cdc_ncm
>     >       >     >       > [    2.161550] usbcore: registered new interface driver r8153_ecm
>     >       >     >       > [    2.168240] usbcore: registered new interface driver cdc_acm
>     >       >     >       > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
>     >       >     >       > [    2.181358] usbcore: registered new interface driver uas
>     >       >     >       > [    2.186547] usbcore: registered new interface driver usb-storage
>     >       >     >       > [    2.192643] usbcore: registered new interface driver ftdi_sio
>     >       >     >       > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
>     >       >     >       > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
>     >       >     >       > [    2.215332] i2c_dev: i2c /dev entries driver
>     >       >     >       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
>     >       >     >       > [    2.225923] device-mapper: uevent: version 1.0.3
>     >       >     >       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
>     >       >     >       > [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
>     >       >     >       > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
>     >       >     >       > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
>     >       >     >       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
>     >       >     >       > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
>     >       >     >       > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
>     >       >     >       > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
>     >       >     >       > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
>     >       >     >       > [    2.327875] securefw securefw: securefw probed
>     >       >     >       > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
>     >       >     >       > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
>     >       >     >       > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
>     >       >     >       > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
>     >       >     >       > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
>     >       >     >       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
>     >       >     >       > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
>     >       >     >       > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
>     >       >     >       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>     >       >     >       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
>     >       >     >       > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
>     >       >     >       > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
>     >       >     >       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
>     >       >     >       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
>     >       >     >       > [    2.420856] default preset
>     >       >     >       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
>     >       >     >       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
>     >       >     >       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
>     >       >     >       > [    2.441976] vmcu driver init
>     >       >     >       > [    2.444922] VMCU: : (240:0) registered
>     >       >     >       > [    2.444956] In K81 Updater init
>     >       >     >       > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
>     >       >     >       > [    2.468833] Initializing XFRM netlink socket
>     >       >     >       > [    2.468902] NET: Registered PF_PACKET protocol family
>     >       >     >       > [    2.472729] Bridge firewalling registered
>     >       >     >       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
>     >       >     >       > [    2.481341] registered taskstats version 1
>     >       >     >       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
>     >       >     >       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
>     >       >     >       > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
>     >       >     >       > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
>     >       >     >       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
>     >       >     >       > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
>     >       >     >       > [    2.952393] Creating 2 MTD partitions on "spi0.0":
>     >       >     >       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
>     >       >     >       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
>     >       >     >       > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
>     >       >     >       > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
>     >       >     >       > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
>     >       >     >       > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
>     >       >     >       > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
>     >       >     >       > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
>     >       >     >       > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
>     >       >     >       > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
>     >       >     >       > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
>     >       >     >       > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
>     >       >     >       > [    3.045301] viper_enet viper_enet: Viper enet registered
>     >       >     >       > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
>     >       >     >       > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
>     >       >     >       > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
>     >       >     >       > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
>     >       >     >       > [    3.097729] si70xx: probe of 2-0040 failed with error -5
>     >       >     >       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
>     >       >     >       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
>     >       >     >       > [    3.112457] viper-tamper viper-tamper: Device registered
>     >       >     >       > [    3.117593] active_bank active_bank: boot bank: 1
>     >       >     >       > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
>     >       >     >       > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
>     >       >     >       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>     >       >     >       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
>     >       >     >       > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
>     >       >     >       > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
>     >       >     >       > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
>     >       >     >       > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
>     >       >     >       > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       >     >       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
>     >       >     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
>     >       >     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
>     >       >     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
>     >       >     >       > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
>     >       >     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
>     >       >     >       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
>     >       >     >       > [    3.284438] mmc0: new HS200 MMC card at address 0001
>     >       >     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
>     >       >     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
>     >       >     >       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
>     >       >     >       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
>     >       >     >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
>     >       >     >       > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       >     >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
>     >       >     >       > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
>     >       >     >       > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
>     >       >     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
>     >       >     >       > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
>     >       >     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
>     >       >     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
>     >       >     >       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
>     >       >     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
>     >       >     >       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
>     >       >     >       > [    3.639104] k81_bootloader 0-0010: probe
>     >       >     >       > [    3.641628] VMCU: : (235:0) registered
>     >       >     >       > [    3.641635] k81_bootloader 0-0010: probe completed
>     >       >     >       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
>     >       >     >       > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
>     >       >     >       > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
>     >       >     >       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
>     >       >     >       > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
>     >       >     >       > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
>     >       >     >       > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
>     >       >     >       > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
>     >       >     >       > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
>     >       >     >       > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
>     >       >     >       > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
>     >       >     >       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
>     >       >     >       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
>     >       >     >       > [    3.737549] sfp_register_socket: got sfp_bus
>     >       >     >       > [    3.740709] sfp_register_socket: register sfp_bus
>     >       >     >       > [    3.745459] sfp_register_bus: ops ok!
>     >       >     >       > [    3.749179] sfp_register_bus: Try to attach
>     >       >     >       > [    3.753419] sfp_register_bus: Attach succeeded
>     >       >     >       > [    3.757914] sfp_register_bus: upstream ops attach
>     >       >     >       > [    3.762677] sfp_register_bus: Bus registered
>     >       >     >       > [    3.766999] sfp_register_socket: register sfp_bus succeeded
>     >       >     >       > [    3.775870] of_cfs_init
>     >       >     >       > [    3.776000] of_cfs_init: OK
>     >       >     >       > [    3.778211] clk: Not disabling unused clocks
>     >       >     >       > [   11.278477] Freeing initrd memory: 206056K
>     >       >     >       > [   11.279406] Freeing unused kernel memory: 1536K
>     >       >     >       > [   11.314006] Checked W+X mappings: passed, no W+X pages found
>     >       >     >       > [   11.314142] Run /init as init process
>     >       >     >       > INIT: version 3.01 booting
>     >       >     >       > fsck (busybox 1.35.0)
>     >       >     >       > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
>     >       >     >       > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
>     >       >     >       > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
>     >       >     >       > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
>     >       >     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
>     >       >     >       > Starting random number generator daemon.
>     >       >     >       > [   11.580662] random: crng init done
>     >       >     >       > Starting udev
>     >       >     >       > [   11.613159] udevd[142]: starting version 3.2.10
>     >       >     >       > [   11.620385] udevd[143]: starting eudev-3.2.10
>     >       >     >       > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
>     >       >     >       > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
>     >       >     >       > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
>     >       >     >       > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>     >       >     >       > Mon Feb 27 08:40:53 UTC 2023
>     >       >     >       > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
>     >       >     >       > hwclock: RTC_SET_TIME: Invalid exchange
>     >       >     >       > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       >     >       > Starting mcud
>     >       >     >       > INIT: Entering runlevel: 5
>     >       >     >       > Configuring network interfaces... done.
>     >       >     >       > resetting network interface
>     >       >     >       > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>     >       >     >       > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
>     >       >     >       > [   12.732151] pps pps0: new PPS source ptp0
>     >       >     >       > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
>     >       >     >       > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>     >       >     >       > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
>     >       >     >       > [   12.761804] pps pps1: new PPS source ptp1
>     >       >     >       > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
>     >       >     >       > Auto-negotiation: off
>     >       >     >       > Auto-negotiation: off
>     >       >     >       > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
>     >       >     >       > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
>     >       >     >       > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
>     >       >     >       > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
>     >       >     >       > Starting Failsafe Secure Shell server in port 2222: sshd
>     >       >     >       > done.
>     >       >     >       > Starting rpcbind daemon...done.
>     >       >     >       >
>     >       >     >       > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>     >       >     >       > Starting State Manager Service
>     >       >     >       > Start state-manager restarter...
>     >       >     >       > (XEN) d0v1 Forwarding AES operation: 3254779951
>     >       >     >       > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
>     >       >     >       > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
>     >       >     >       > [   17.350670] BTRFS info (device dm-0): has skinny extents
>     >       >     >       > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
>     >       >     >       > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
>     >       >     >       > [   17.872699] BTRFS info (device dm-1): using free space tree
>     >       >     >       > [   17.872771] BTRFS info (device dm-1): has skinny extents
>     >       >     >       > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
>     >       >     >       > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
>     >       >     >       > [   17.895695] BTRFS info (device dm-1): checking UUID tree
>     >       >     >       >
>     >       >     >       > Setting domain 0 name, domid and JSON config...
>     >       >     >       > Done setting up Dom0
>     >       >     >       > Starting xenconsoled...
>     >       >     >       > Starting QEMU as disk backend for dom0
>     >       >     >       > Starting domain watchdog daemon: xenwatchdogd startup
>     >       >     >       >
>     >       >     >       > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
>     >       >     >       > [done]
>     >       >     >       > [   18.465552] BTRFS info (device dm-2): using free space tree
>     >       >     >       > [   18.465629] BTRFS info (device dm-2): has skinny extents
>     >       >     >       > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
>     >       >     >       > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
>     >       >     >       > [   18.486659] BTRFS info (device dm-2): checking UUID tree
>     >       >     >       > OK
>     >       >     >       > starting rsyslogd ... Log partition ready after 0 poll loops
>     >       >     >       > done
>     >       >     >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
>     >       >     >       > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>     >       >     >       >
>     >       >     >       > Please insert USB token and enter your role in login prompt.
>     >       >     >       >
>     >       >     >       > login:
>     >       >     >       >
>     >       >     >       > Regards,
>     >       >     >       > O.
>     >       >     >       >
>     >       >     >       >
>     >       >     >       > Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org>:
>     >       >     >       >       Hi Oleg,
>     >       >     >       >
>     >       >     >       >       Here is the issue from your logs:
>     >       >     >       >
>     >       >     >       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
>     >       >     >       >
>     >       >     >       >       SErrors are special signals that notify software of serious hardware
>     >       >     >       >       errors. Something is going very wrong, and defective hardware is one
>     >       >     >       >       possibility. Another possibility is software accessing address ranges
>     >       >     >       >       that it is not supposed to; that can sometimes cause SErrors.
>     >       >     >       >
>     >       >     >       >       Cheers,
>     >       >     >       >
>     >       >     >       >       Stefano
>     >       >     >       >
>     >       >     >       >
>     >       >     >       >
>     >       >     >       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>     >       >     >       >
>     >       >     >       >       > Hello,
>     >       >     >       >       >
>     >       >     >       >       > Thanks guys.
>     >       >     >       >       > I found out where the problem was.
>     >       >     >       >       > Now dom0 booted more. But I have a new one.
>     >       >     >       >       > This is a kernel panic during Dom0 loading.
>     >       >     >       >       > Maybe someone is able to suggest something ?
>     >       >     >       >       >
>     >       >     >       >       > Regards,
>     >       >     >       >       > O.
>     >       >     >       >       >
>     >       >     >       >       > [    3.771362] sfp_register_bus: upstream ops attach
>     >       >     >       >       > [    3.776119] sfp_register_bus: Bus registered
>     >       >     >       >       > [    3.780459] sfp_register_socket: register sfp_bus succeeded
>     >       >     >       >       > [    3.789399] of_cfs_init
>     >       >     >       >       > [    3.789499] of_cfs_init: OK
>     >       >     >       >       > [    3.791685] clk: Not disabling unused clocks
>     >       >     >       >       > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>     >       >     >       >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>     >       >     >       >       > [   11.010393] Workqueue: events_unbound async_run_entry_fn
>     >       >     >       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>     >       >     >       >       > [   11.010422] pc : simple_write_end+0xd0/0x130
>     >       >     >       >       > [   11.010431] lr : generic_perform_write+0x118/0x1e0
>     >       >     >       >       > [   11.010438] sp : ffffffc00809b910
>     >       >     >       >       > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>     >       >     >       >       > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
>     >       >     >       >       > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
>     >       >     >       >       > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
>     >       >     >       >       > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
>     >       >     >       >       > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
>     >       >     >       >       > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
>     >       >     >       >       > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
>     >       >     >       >       > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
>     >       >     >       >       > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
>     >       >     >       >       > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
>     >       >     >       >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>     >       >     >       >       > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
>     >       >     >       >       > [   11.010548] Workqueue: events_unbound async_run_entry_fn
>     >       >     >       >       > [   11.010556] Call trace:
>     >       >     >       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
>     >       >     >       >       > [   11.010567]  show_stack+0x18/0x2c
>     >       >     >       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
>     >       >     >       >       > [   11.010583]  dump_stack+0x18/0x34
>     >       >     >       >       > [   11.010588]  panic+0x14c/0x2f8
>     >       >     >       >       > [   11.010597]  print_tainted+0x0/0xb0
>     >       >     >       >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
>     >       >     >       >       > [   11.010614]  do_serror+0x28/0x60
>     >       >     >       >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
>     >       >     >       >       > [   11.010628]  el1h_64_error+0x78/0x7c
>     >       >     >       >       > [   11.010633]  simple_write_end+0xd0/0x130
>     >       >     >       >       > [   11.010639]  generic_perform_write+0x118/0x1e0
>     >       >     >       >       > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
>     >       >     >       >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
>     >       >     >       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
>     >       >     >       >       > [   11.010665]  kernel_write+0x88/0x160
>     >       >     >       >       > [   11.010673]  xwrite+0x44/0x94
>     >       >     >       >       > [   11.010680]  do_copy+0xa8/0x104
>     >       >     >       >       > [   11.010686]  write_buffer+0x38/0x58
>     >       >     >       >       > [   11.010692]  flush_buffer+0x4c/0xbc
>     >       >     >       >       > [   11.010698]  __gunzip+0x280/0x310
>     >       >     >       >       > [   11.010704]  gunzip+0x1c/0x28
>     >       >     >       >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>     >       >     >       >       > [   11.010715]  do_populate_rootfs+0x80/0x164
>     >       >     >       >       > [   11.010722]  async_run_entry_fn+0x48/0x164
>     >       >     >       >       > [   11.010728]  process_one_work+0x1e4/0x3a0
>     >       >     >       >       > [   11.010736]  worker_thread+0x7c/0x4c0
>     >       >     >       >       > [   11.010743]  kthread+0x120/0x130
>     >       >     >       >       > [   11.010750]  ret_from_fork+0x10/0x20
>     >       >     >       >       > [   11.010757] SMP: stopping secondary CPUs
>     >       >     >       >       > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
>     >       >     >       >       > [   11.010788] PHYS_OFFSET: 0x0
>     >       >     >       >       > [   11.010790] CPU features: 0x00000401,00000842
>     >       >     >       >       > [   11.010795] Memory Limit: none
>     >       >     >       >       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>     >       >     >       >       >
>     >       >     >       >       > Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com>:
>     >       >     >       >       >       Hi Oleg,
>     >       >     >       >       >
>     >       >     >       >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
>     >       >     >       >       >       >       
>     >       >     >       >       >       >
>     >       >     >       >       >       >
>     >       >     >       >       >       > Hello Michal,
>     >       >     >       >       >       >
>     >       >     >       >       >       > I was not able to enable earlyprintk in the xen for now.
>     >       >     >       >       >       > I decided to choose another way.
>     >       >     >       >       >       > This is a xen's command line that I found out completely.
>     >       >     >       >       >       >
>     >       >     >       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>     >       >     >       >       >       Yes, adding a printk() in Xen was also a good idea.
>     >       >     >       >       >
>     >       >     >       >       >       >
>     >       >     >       >       >       > So you are absolutely right about a command line.
>     >       >     >       >       >       > Now I am going to find out why xen did not have the correct parameters from the device tree.
>     >       >     >       >       >       Maybe you will find this document helpful:
>     >       >     >       >       https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>     >       >     >       >       >
>     >       >     >       >       >       ~Michal
>     >       >     >       >       >
>     >       >     >       >       >       >
>     >       >     >       >       >       > Regards,
>     >       >     >       >       >       > Oleg
>     >       >     >       >       >       >
>     >       >     >       >       > Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com>:
>     >       >     >       >       >       >
>     >       >     >       >       >       >
>     >       >     >       >       >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
>     >       >     >       >       >       >     >       
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     > Hello Michal,
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     > Yes, I use yocto.
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     > Yesterday all day long I tried to follow your suggestions.
>     >       >     >       >       >       >     > I faced a problem.
>     >       >     >       >       >       >     > Manually in the xen config build file I pasted the strings:
>     >       >     >       >       >     In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>     >       >     >       >       >     You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>     >       >     >       >       >       >
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     > CONFIG_EARLY_PRINTK
>     >       >     >       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
>     >       >     >       >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
>     >       >     >       >       >       >     I hope you added =y to them.
>     >       >     >       >       >       >
>     >       >     >       >       >       >     Anyway, you have at least the following solutions:
>     >       >     >       >       >       >     1) Run bitbake xen -c menuconfig to properly set early printk
>     >       >     >       >       >     2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
>     >       >     >       >       >       >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>     >       >     >       >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
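[Editor's sketch, not from the original thread: option 3 above can be scripted. The defconfig path is the usual in-tree location from the xlnx_rebase_4.16 layout referenced earlier; adjust it for your checkout. The `mkdir`/`touch` lines exist only so the standalone sketch runs anywhere.]

```shell
# Sketch of option 3: append CONFIG_EARLY_PRINTK_ZYNQMP=y to Xen's arm64
# defconfig. Path is illustrative (in-tree layout); adjust for your checkout.
DEFCONFIG="xen/arch/arm/configs/arm64_defconfig"
mkdir -p "$(dirname "$DEFCONFIG")"  # only needed so this standalone sketch runs
touch "$DEFCONFIG"

# Append the option only if it is not already set, so re-running is harmless.
grep -q '^CONFIG_EARLY_PRINTK_ZYNQMP=y' "$DEFCONFIG" || \
    echo 'CONFIG_EARLY_PRINTK_ZYNQMP=y' >> "$DEFCONFIG"
```

In a Yocto build, the equivalent is usually done with a Kconfig fragment added to SRC_URI in a xen bbappend rather than by editing the tree in place.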
>     >       >     >       >       >       >
>     >       >     >       >       >       >     ~Michal
>     >       >     >       >       >       >
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     > Host hangs in build time. 
>     >       >     >       >       >       >     > Maybe I did not set something in the config build file ?
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     > Regards,
>     >       >     >       >       >       >     > Oleg
>     >       >     >       >       >       >     >
>     >       >     >       >       >     > Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >     Thanks Michal,
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >     You gave me an idea.
>     >       >     >       >       >       >     >     I am going to try it today.
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >     Regards,
>     >       >     >       >       >       >     >     O.
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >     Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >         Thanks Stefano.
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >         I am going to do it today.
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >         Regards,
>     >       >     >       >       >       >     >         O.
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >         Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>     >       >     >       >       >       >     >             > Hi Michal,
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             > I corrected xen's command line.
>     >       >     >       >       >       >     >             > Now it is
>     >       >     >       >       >       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>     >       >     >       >       >       >     >             > dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>     >       >     >       >       >       >     >             > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >             4 colors is way too many for Xen; just do xen_colors=0-0. There is
>     >       >     >       >       >       >     >             no advantage in using more than 1 color for Xen.
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >             4 colors is too few for dom0 if you are giving 1600M of memory to
>     >       >     >       >       >       >     >             Dom0. Each color is 256M. For 1600M you should give at least 7
>     >       >     >       >       >       >     >             colors. Try:
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >             xen_colors=0-0 dom0_colors=1-8
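[Editor's note: the sizing rule above (colors available = way size / page size; memory per color = total RAM / number of colors) can be checked with a quick sketch. The 64 KiB way size comes from the boot log in the thread; the 4 GiB DRAM figure for the zcu102 board is an assumption, not stated in the thread.]

```shell
# Sketch of the cache-coloring arithmetic discussed above.
way_size=$((64 * 1024))    # bytes, from the Xen boot log ("Way size: 64kB")
page_size=$((4 * 1024))    # bytes, standard 4 KiB pages

# Colors available = way size / page size.
num_colors=$(( way_size / page_size ))

# Memory per color = total RAM / number of colors
# (4 GiB DRAM is an assumed figure for the board).
total_ram_mib=4096
mib_per_color=$(( total_ram_mib / num_colors ))

# Minimum colors for a 1600 MiB dom0, rounding up.
dom0_mem_mib=1600
colors_needed=$(( (dom0_mem_mib + mib_per_color - 1) / mib_per_color ))

echo "colors=$num_colors per_color=${mib_per_color}M needed=$colors_needed"
```

With these assumptions the sketch reproduces the numbers in the thread: 16 colors available, 256 MiB per color, and at least 7 colors for a 1600 MiB dom0.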
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >     >             > Unfortunately the result was the same.
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             > (XEN)  - Dom0 mode: Relaxed
>     >       >     >       >       >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>     >       >     >       >       >       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>     >       >     >       >       >       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>     >       >     >       >       >       >     >             > (XEN) Coloring general information
>     >       >     >       >       >       >     >             > (XEN) Way size: 64kB
>     >       >     >       >       >       >     >             > (XEN) Max. number of colors available: 16
>     >       >     >       >       >       >     >             > (XEN) Xen color(s): [ 0 ]
>     >       >     >       >       >       >     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>     >       >     >       >       >       >     >             > (XEN) Color array allocation failed for dom0
>     >       >     >       >       >       >     >             > (XEN)
>     >       >     >       >       >       >     >             > (XEN) ****************************************
>     >       >     >       >       >       >     >             > (XEN) Panic on CPU 0:
>     >       >     >       >       >       >     >             > (XEN) Error creating domain 0
>     >       >     >       >       >       >     >             > (XEN) ****************************************
>     >       >     >       >       >       >     >             > (XEN)
>     >       >     >       >       >       >     >             > (XEN) Reboot in five seconds...
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             > I am going to find out how the command line arguments are passed and parsed.
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             > Regards,
>     >       >     >       >       >       >     >             > Oleg
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>     >       >     >       >       >       >     >             >       Hi Michal,
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             > You pointed me right at the problem. Thank you.
>     >       >     >       >       >       >     >             > I am going to use your point.
>     >       >     >       >       >       >     >             > Let's see what happens.
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             > Regards,
>     >       >     >       >       >       >     >             > Oleg
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
>     >       >     >       >       >       >     >             >       Hi Oleg,
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>     >       >     >       >       >       >     >             >       >       
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       > Hello Stefano,
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       > Thanks for the clarification.
>     >       >     >       >       >       >     >             >       > My company uses Yocto for image generation.
>     >       >     >       >       >       >     >             >       > What kind of information do you need in order to advise me in this case?
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       > Maybe the module sizes/addresses which were mentioned by Julien Grall <julien@xen.org>?
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >       Sorry for jumping into the discussion, but FWICS the Xen command
>     >       >     >       >       >       >     >             >       line you provided does not seem to be the one Xen booted with.
>     >       >     >       >       >       >     >             >       The error you are observing is most likely due to the dom0 colors
>     >       >     >       >       >       >     >             >       configuration not being specified (i.e. lack of the dom0_colors=<>
>     >       >     >       >       >       >     >             >       parameter). Although this parameter is set in the command line you
>     >       >     >       >       >       >     >             >       provided, I strongly doubt that it is the actual command line in use.
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >       You wrote:
>     >       >     >       >       >       >     >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>     >       >     >       >       >       >     >             >       dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>     >       >     >       >       >       >     >             >       timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >       but:
>     >       >     >       >       >       >     >             >       1) way_szize has a typo
>     >       >     >       >       >       >     >             >       2) you specified 4 colors (0-3) for Xen, but the boot log says
>     >       >     >       >       >       >     >             >       that Xen has only one:
>     >       >     >       >       >       >     >             >       (XEN) Xen color(s): [ 0 ]
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >       This makes me believe that no colors configuration actually ended
>     >       >     >       >       >       >     >             >       up in the command line that Xen booted with. A single color for
>     >       >     >       >       >       >     >             >       Xen is the "default if not specified", and the way size was
>     >       >     >       >       >       >     >             >       probably calculated by asking the HW.
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >       So I would suggest first cross-checking the command line in use.
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >       ~Michal
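[Editor's note: pulling the thread's fixes together, a sketch of what the corrected property in the system device tree's /chosen node might look like. This combines the way_szize typo fix and Stefano's suggested color split; the node layout and the serial0 alias are assumptions based on the thread, not a verified configuration.]

```dts
/* Hypothetical /chosen fragment with the corrections discussed above. */
chosen {
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-0 dom0_colors=1-8";
};
```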
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       > Regards,
>     >       >     >       >       >       >     >             >       > Oleg
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>     >       >     >       >       >       >     >             >       >     > Hi Julien,
>     >       >     >       >       >       >     >             >       >     >
>     >       >     >       >       >       >     >             >       >     > >> This feature has not been merged in Xen upstream yet
>     >       >     >       >       >       >     >             >       >     >
>     >       >     >       >       >       >     >             >       >     > > would assume that upstream + the series on the ML [1]
>     >       work
>     >       >     >       >       >       >     >             >       >     >
>     >       >     >       >       >       >     >             >       >     > Please clarify this point.
>     >       >     >       >       >       >     >             >       >     > Because the two thoughts are controversial.
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >     Hi Oleg,
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >     As Julien wrote, there is nothing controversial. As you
>     >       are aware,
>     >       >     >       >       >       >     >             >       >     Xilinx maintains a separate Xen tree specific for Xilinx
>     >       here:
>     >       >     >       >       >       >     >             >       >     https://github.com/xilinx/xen
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >     and the branch you are using (xlnx_rebase_4.16) comes
>     >       from there.
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >     Instead, the upstream Xen tree lives here:
>     >       >     >       >       >       >     >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >     The Cache Coloring feature that you are trying to configure
>     >       >     >       >       >       >     >             >       >     is present in xlnx_rebase_4.16, but not yet upstream (there
>     >       >     >       >       >       >     >             >       >     is an outstanding patch series to add cache coloring to Xen
>     >       >     >       >       >       >     >             >       >     upstream, but it hasn't been merged yet).
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter
>     >       >     >       >       >       >     >             >       >     too much for you, as you already have Cache Coloring as a
>     >       >     >       >       >       >     >             >       >     feature there.
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >     I take it you are using ImageBuilder to generate the boot
>     >       >     >       >       >       >     >             >       >     configuration? If so, please post the ImageBuilder config
>     >       >     >       >       >       >     >             >       >     file that you are using.
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >       >     But from the boot message, it looks like the colors
>     >       configuration for
>     >       >     >       >       >       >     >             >       >     Dom0 is incorrect.
>     >       >     >       >       >       >     >             >       >
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >             >
>     >       >     >       >       >       >     >
>     >       >     >       >       >       >
>     >       >     >       >       >
>     >       >     >       >       >
>     >       >     >       >       >
>     >       >     >       >
>     >       >     >       >
>     >       >     >       >
>     >       >     >
>     >       >     >
>     >       >     >
>     >       >
>     >
>     >
>     > 
> 


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:28:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 10:28:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533206.829645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3Wb-000312-Lg; Thu, 11 May 2023 10:27:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533206.829645; Thu, 11 May 2023 10:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3Wb-00030v-I6; Thu, 11 May 2023 10:27:53 +0000
Received: by outflank-mailman (input) for mailman id 533206;
 Thu, 11 May 2023 10:27:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OCYL=BA=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1px3WZ-00030n-0B
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 10:27:51 +0000
Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com
 [2607:f8b0:4864:20::1036])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 777519c9-efe6-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 12:27:45 +0200 (CEST)
Received: by mail-pj1-x1036.google.com with SMTP id
 98e67ed59e1d1-24e2bbec3d5so6067347a91.3
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 03:27:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 777519c9-efe6-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683800864; x=1686392864;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=57Cc+W7gV5OdCLF7Rz2bIl7YNaBEtN2+dbrw/GGiiLQ=;
        b=EeWxZwdTCZmNrXGKRMuSgLC4zQSidodymzWdWfaNeC+eF2kwVWh7Q5ZQGD55aJzQtC
         md0sArJSclu1MvkxTUnFzc6fWraEg+E6V5sr5du87v/zsZGvTImYXufLSqlkoAZnVegN
         uCHDdyj1Mujv6J3riknsz/ef97pIrt8HNPxUA6nsJvnr+tuxsV2ecjuDl14zn+n5zVOy
         /uhZPFVtqmJqls1+tn2zMJc/+6YzkSn+5PP2oZaidKGksijCOHdJ19fSrwvt9oULk4Ic
         yHnImsbgASNNBD+lFmzx97vFug/fLeoIsRDgfmWD+YLp4X25nMc+vsSs4N6obo85ga37
         Ht+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683800864; x=1686392864;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=57Cc+W7gV5OdCLF7Rz2bIl7YNaBEtN2+dbrw/GGiiLQ=;
        b=bBGVMF9fxYr0xXPUyGcVrIQQ9oA5Tcc56Dr2stYsVlReAtP03/akO6GGQY4uy0J6xV
         yrQZjDvPLNL3hak/dLgsbL17UGewvPaX0u7tsPI/hGO/ESjt7bt6rHqqg7YEHIXsfSqb
         kLysc/sJ+TpOWTqbtNWdAlc0EjfiH89eJEcarU2h8fEfYDtFVEHn30TQ6Yaworppi2pd
         FEaqK9EYkeUodGko48SjVSjyJZ4jslPyYvPOD0JqbmS63gQ3zEfPqsQWoqX6ZklfatRB
         +0vbyme5u8Shi3zBBtbtFjxIREEKvWsv9TAD0fsh7RxGj65fwKuLbRZPhlIFxF7f+oC2
         MV6A==
X-Gm-Message-State: AC+VfDzU6qzicis6tko6ZL2F1o42kcsBGT2P76qHS5bhxnKlf6Pm/TSP
	dkYVjN/cfQcclcITUou/fTjJSqNGheB5+c346xE=
X-Google-Smtp-Source: ACHHUZ5te3up4we/9wvVz7ACNwpurDWcOg7E346SGt03TwXXoaGicZtMcNManiMorymiQvehTCGeZqKO1brRvVWzXIk=
X-Received: by 2002:a17:90a:73c7:b0:24d:ef91:b6d6 with SMTP id
 n7-20020a17090a73c700b0024def91b6d6mr19861708pjk.44.1683800863253; Thu, 11
 May 2023 03:27:43 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
 <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com> <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com> <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com>
In-Reply-To: <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Thu, 11 May 2023 13:32:32 +0300
Message-ID: <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com
Content-Type: multipart/alternative; boundary="00000000000053aaa105fb687067"

--00000000000053aaa105fb687067
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Michal,

Thanks.
This config option previously had the name CONFIG_COLORING.
That is what confused me.

Regards,
Oleg

Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com>:

> Hi Oleg,
>
> On 11/05/2023 12:02, Oleg Nikitenko wrote:
> >
> >
> >
> > Hello,
> >
> > Thanks Stefano.
> > Then the next question.
> > I cloned the xen repo from the Xilinx site: https://github.com/Xilinx/xen.git
> > I managed to build a xlnx_rebase_4.17 branch in my environment.
> > I built it without coloring first. I did not find any coloring footprints
> > in this branch.
> > I concluded that coloring is not in the xlnx_rebase_4.17 branch yet.
> This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the
> docs:
>
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
>
> It describes the feature and documents the required properties.
>
> ~Michal
>
> >
> >
> > Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org>:
> >
> >     We test Xen Cache Coloring regularly on zcu102. Every Petalinux
> >     release (twice a year) is tested with cache coloring enabled. The
> >     last Petalinux release is 2023.1 and the kernel used is this:
> >     https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
> >
> >
> >     On Tue, 9 May 2023, Oleg Nikitenko wrote:
> >     > Hello guys,
> >     >
> >     > I have a couple of more questions.
> >     > Have you ever run Xen with cache coloring on a Zynq UltraScale+
> >     > MPSoC zcu102 xczu15eg?
> >     > When did you last run Xen with cache coloring?
> >     > What kernel version did you use for Dom0 when you last ran Xen
> >     > with cache coloring?
> >     >
> >     > Regards,
> >     > Oleg
> >     >
> >     > Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >     >       Hi Michal,
> >     >
> >     > Thanks.
> >     >
> >     > Regards,
> >     > Oleg
> >     >
> >     > Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
> >     >       Hi Oleg,
> >     >
> >     >       Replying, so that you do not need to wait for Stefano.
> >     >
> >     >       On 05/05/2023 10:28, Oleg Nikitenko wrote:
> >     >       >
> >     >       >
> >     >       >
> >     >       > Hello Stefano,
> >     >       >
> >     >       > I would like to try the xen cache coloring feature from this
> >     >       > repo: https://xenbits.xen.org/git-http/xen.git
> >     >       > Could you tell me what branch I should use ?
> >     >       The cache coloring feature is not part of the upstream tree and
> it is still under review.
> >     >       You can only find it integrated in the Xilinx Xen tree.
> >     >
> >     >       ~Michal
> >     >
> >     >       >
> >     >       > Regards,
> >     >       > Oleg
> >     >       >
> >     >       > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org>:
> >     >       >
> >     >       >     I am familiar with the zcu102 but I don't know how you
> >     >       >     could possibly generate an SError.
> >     >       >
> >     >       >     I suggest trying ImageBuilder [1] to generate the boot
> >     >       >     configuration as a test, because that is known to work well for zcu102.
> >     >       >
> >     >       >     [1] https://gitlab.com/xen-project/imagebuilder
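For reference, a minimal ImageBuilder config for the `uboot-script-gen` tool looks roughly like this; the variable names come from the ImageBuilder README, but all file names and memory values below are placeholders, not taken from this thread:

```shell
# Minimal uboot-script-gen config (illustrative values only)
MEMORY_START="0x0"            # start of RAM usable by the boot script
MEMORY_END="0x80000000"       # end of RAM on a 2GB board

DEVICE_TREE="system.dtb"      # board device tree blob
XEN="xen"                     # Xen hypervisor binary
XEN_CMD="console=dtuart dom0_mem=1600M"

DOM0_KERNEL="Image"           # Dom0 Linux kernel
DOM0_RAMDISK="dom0-ramdisk.cpio"

NUM_DOMUS=0                   # no DomUs in this minimal setup

UBOOT_SOURCE="boot.source"    # generated u-boot script source
UBOOT_SCRIPT="boot.scr"       # compiled boot script output
```

The boot script is then generated with something like `bash ./scripts/uboot-script-gen -c <config> -d . -t tftp`; see the ImageBuilder README for the full option list.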
> >     >       >
> >     >       >
> >     >       >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> >     >       >     > Hello Stefano,
> >     >       >     >
> >     >       >     > Thanks for clarification.
> >     >       >     > We use neither ImageBuilder nor a u-boot boot script.
> >     >       >     > A model is zcu102 compatible.
> >     >       >     >
> >     >       >     > Regards,
> >     >       >     > O.
> >     >       >     >
> >     >       >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
> >     >       >     >       This is interesting. Are you using Xilinx
> hardware by any chance? If so,
> >     >       >     >       which board?
> >     >       >     >
> >     >       >     >       Are you using ImageBuilder to generate your
> boot.scr boot script? If so,
> >     >       >     >       could you please post your ImageBuilder config
> >     >       >     >       file? If not, can you post the source of your uboot boot script?
> >     >       >     >
> >     >       >     >       SErrors are supposed to be related to a
> hardware failure of some kind.
> >     >       >     >       You are not supposed to be able to trigger an
> SError easily by
> >     >       >     >       "mistake". I have not seen SErrors due to
> wrong cache coloring
> >     >       >     >       configurations on any Xilinx board before.
> >     >       >     >
> >     >       >     >       The differences between Xen with and without
> cache coloring from a
> >     >       >     >       hardware perspective are:
> >     >       >     >
> >     >       >     >       - With cache coloring, the SMMU is enabled and does address translations
> >     >       >     >         even for dom0. Without cache coloring the SMMU could be disabled, and
> >     >       >     >         if enabled, the SMMU doesn't do any address translations for Dom0. If
> >     >       >     >         there is a hardware failure related to SMMU address translation it
> >     >       >     >         could only trigger with cache coloring. This would be my normal
> >     >       >     >         suggestion for you to explore, but the failure happens too early,
> >     >       >     >         before any DMA-capable device is programmed. So I don't think this can
> >     >       >     >         be the issue.
> >     >       >     >
> >     >       >     >       - With cache coloring, the memory allocation is very different so you'll
> >     >       >     >         end up using different DDR regions for Dom0. So if your DDR is
> >     >       >     >         defective, you might only see a failure with cache coloring enabled
> >     >       >     >         because you end up using different regions.
> >     >       >     >
> >     >       >     >
> >     >       >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> >     >       >     >       > Hi Stefano,
> >     >       >     >       >
> >     >       >     >       > Thank you.
> >     >       >     >       > If I build xen without coloring support, this error does not occur.
> >     >       >     >       > All the domains boot well.
> >     >       >     >       > Hence it cannot be a hardware issue.
> >     >       >     >       > The panic arrived while unpacking the rootfs.
> >     >       >     >       > Here I attached the xen/Dom0 boot log without coloring.
> >     >       >     >       > The highlighted strings are printed exactly after
> >     >       >     >       > the place where the panic first arrived.
> >     >       >     >       >
> >     >       >     >       >  Xen 4.16.1-pre
> >     >       >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> >     >       >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14
> 2023 +0300 git:321687b231-dirty
> >     >       >     >       > (XEN) build-id:
> c1847258fdb1b79562fc710dda40008f96c0fde5
> >     >       >     >       > (XEN) Processor: 00000000410fd034: "ARM
> Limited", variant: 0x0, part 0xd03,rev 0x4
> >     >       >     >       > (XEN) 64-bit Execution:
> >     >       >     >       > (XEN)   Processor Features: 0000000000002222 0000000000000000
> >     >       >     >       > (XEN)     Exception Levels: EL3:64+32
> EL2:64+32 EL1:64+32 EL0:64+32
> >     >       >     >       > (XEN)     Extensions: FloatingPoint
> AdvancedSIMD
> >     >       >     >       > (XEN)   Debug Features: 0000000010305106
> 0000000000000000
> >     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> >     >       >     >       > (XEN)   Memory Model Features:
> 0000000000001122 0000000000000000
> >     >       >     >       > (XEN)   ISA Features:  0000000000011120
> 0000000000000000
> >     >       >     >       > (XEN) 32-bit Execution:
> >     >       >     >       > (XEN)   Processor Features:
> 0000000000000131:0000000000011011
> >     >       >     >       > (XEN)     Instruction Sets: AArch32 A32
> Thumb Thumb-2 Jazelle
> >     >       >     >       > (XEN)     Extensions: GenericTimer Security
> >     >       >     >       > (XEN)   Debug Features: 0000000003010066
> >     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000
> >     >       >     >       > (XEN)   Memory Model Features:
> 0000000010201105 0000000040000000
> >     >       >     >       > (XEN)
>  0000000001260000 0000000002102211
> >     >       >     >       > (XEN)   ISA Features: 0000000002101110
> 0000000013112111 0000000021232042
> >     >       >     >       > (XEN)                 0000000001112131
> 0000000000011142 0000000000011121
> >     >       >     >       > (XEN) Using SMC Calling Convention v1.2
> >     >       >     >       > (XEN) Using PSCI v1.1
> >     >       >     >       > (XEN) SMP: Allowing 4 CPUs
> >     >       >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> >     >       >     >       > (XEN) GICv2 initialization:
> >     >       >     >       > (XEN)         gic_dist_addr=00000000f9010000
> >     >       >     >       > (XEN)         gic_cpu_addr=00000000f9020000
> >     >       >     >       > (XEN)         gic_hyp_addr=00000000f9040000
> >     >       >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
> >     >       >     >       > (XEN)         gic_maintenance_irq=25
> >     >       >     >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
> >     >       >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID
> 0200143b).
> >     >       >     >       > (XEN) Using scheduler: null Scheduler (null)
> >     >       >     >       > (XEN) Initializing null scheduler
> >     >       >     >       > (XEN) WARNING: This is experimental software in development.
> >     >       >     >       > (XEN) Use at your own risk.
> >     >       >     >       > (XEN) Allocated console ring of 32 KiB.
> >     >       >     >       > (XEN) CPU0: Guest atomics will try 12 times
> before pausing the domain
> >     >       >     >       > (XEN) Bringing up CPU1
> >     >       >     >       > (XEN) CPU1: Guest atomics will try 13 times
> before pausing the domain
> >     >       >     >       > (XEN) CPU 1 booted.
> >     >       >     >       > (XEN) Bringing up CPU2
> >     >       >     >       > (XEN) CPU2: Guest atomics will try 13 times
> before pausing the domain
> >     >       >     >       > (XEN) CPU 2 booted.
> >     >       >     >       > (XEN) Bringing up CPU3
> >     >       >     >       > (XEN) CPU3: Guest atomics will try 13 times
> before pausing the domain
> >     >       >     >       > (XEN) Brought up 4 CPUs
> >     >       >     >       > (XEN) CPU 3 booted.
> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: registered
> 29 master devices
> >     >       >     >       > (XEN) I/O virtualisation enabled
> >     >       >     >       > (XEN)  - Dom0 mode: Relaxed
> >     >       >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and
> 8-bit VMID
> >     >       >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR
> 0x0000000080023558
> >     >       >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> >     >       >     >       > (XEN) alternatives: Patching with alt table
> 00000000002cc5c8 -> 00000000002ccb2c
> >     >       >     >       > (XEN) *** LOADING DOMAIN 0 ***
> >     >       >     >       > (XEN) Loading d0 kernel from boot module @
> 0000000001000000
> >     >       >     >       > (XEN) Loading ramdisk from boot module @
> 0000000002000000
> >     >       >     >       > (XEN) Allocating 1:1 mappings totalling
> 1600MB for dom0:
> >     >       >     >       > (XEN) BANK[0]
> 0x00000010000000-0x00000020000000 (256MB)
> >     >       >     >       > (XEN) BANK[1]
> 0x00000024000000-0x00000028000000 (64MB)
> >     >       >     >       > (XEN) BANK[2]
> 0x00000030000000-0x00000080000000 (1280MB)
> >     >       >     >       > (XEN) Grant table range:
> 0x00000000e00000-0x00000000e40000
> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr
> 0x000000087bf94000
> >     >       >     >       > (XEN) Allocating PPI 16 for event channel
> interrupt
> >     >       >     >       > (XEN) Extended region 0:
> 0x81200000->0xa0000000
> >     >       >     >       > (XEN) Extended region 1:
> 0xb1200000->0xc0000000
> >     >       >     >       > (XEN) Extended region 2:
> 0xc8000000->0xe0000000
> >     >       >     >       > (XEN) Extended region 3:
> 0xf0000000->0xf9000000
> >     >       >     >       > (XEN) Extended region 4:
> 0x100000000->0x600000000
> >     >       >     >       > (XEN) Extended region 5:
> 0x880000000->0x8000000000
> >     >       >     >       > (XEN) Extended region 6:
> 0x8001000000->0x10000000000
> >     >       >     >       > (XEN) Loading zImage from 0000000001000000
> to 0000000010000000-0000000010e41008
> >     >       >     >       > (XEN) Loading d0 initrd from
> 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
> >     >       >     >       > (XEN) Loading d0 DTB to
> 0x0000000013400000-0x000000001340cbdc
> >     >       >     >       > (XEN) Initial low memory virq threshold set
> at 0x4000 pages.
> >     >       >     >       > (XEN) Std. Loglevel: All
> >     >       >     >       > (XEN) Guest Loglevel: All
> >     >       >     >       > (XEN) *** Serial input to DOM0 (type
> 'CTRL-a' three times to switch input)
> >     >       >     >       > (XEN) null.c:353: 0 <-- d0v0
> >     >       >     >       > (XEN) Freed 356kB init memory.
> >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
> >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER4
> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER8
> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER12
> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER16
> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER20
> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER0
> >     >       >     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> >     >       >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> >     >       >     >       > [    0.000000] Machine model: D14 Viper
> Board - White Unit
> >     >       >     >       > [    0.000000] Xen 4.16 support found
> >     >       >     >       > [    0.000000] Zone ranges:
> >     >       >     >       > [    0.000000]   DMA      [mem
> 0x0000000010000000-0x000000007fffffff]
> >     >       >     >       > [    0.000000]   DMA32    empty
> >     >       >     >       > [    0.000000]   Normal   empty
> >     >       >     >       > [    0.000000] Movable zone start for each
> node
> >     >       >     >       > [    0.000000] Early memory node ranges
> >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000010000000-0x000000001fffffff]
> >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000022000000-0x0000000022147fff]
> >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000022200000-0x0000000022347fff]
> >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000024000000-0x0000000027ffffff]
> >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000030000000-0x000000007fffffff]
> >     >       >     >       > [    0.000000] Initmem setup node 0 [mem
> 0x0000000010000000-0x000000007fffffff]
> >     >       >     >       > [    0.000000] On node 0, zone DMA: 8192
> pages in unavailable ranges
> >     >       >     >       > [    0.000000] On node 0, zone DMA: 184
> pages in unavailable ranges
> >     >       >     >       > [    0.000000] On node 0, zone DMA: 7352
> pages in unavailable ranges
> >     >       >     >       > [    0.000000] cma: Reserved 256 MiB at
> 0x000000006e000000
> >     >       >     >       > [    0.000000] psci: probing for conduit
> method from DT.
> >     >       >     >       > [    0.000000] psci: PSCIv1.1 detected in
> firmware.
> >     >       >     >       > [    0.000000] psci: Using standard PSCI
> v0.2 function IDs
> >     >       >     >       > [    0.000000] psci: Trusted OS migration
> not required
> >     >       >     >       > [    0.000000] psci: SMC Calling Convention
> v1.1
> >     >       >     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
> >     >       >     >       > [    0.000000] Detected VIPT I-cache on CPU0
> >     >       >     >       > [    0.000000] CPU features: kernel page
> table isolation forced ON by KASLR
> >     >       >     >       > [    0.000000] CPU features: detected:
> Kernel page table isolation (KPTI)
> >     >       >     >       > [    0.000000] Built 1 zonelists, mobility
> grouping on.  Total pages: 403845
> >     >       >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> >     >       >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
> >     >       >     >       > [    0.000000] Dentry cache hash table
> entries: 262144 (order: 9, 2097152 bytes, linear)
> >     >       >     >       > [    0.000000] Inode-cache hash table
> entries: 131072 (order: 8, 1048576 bytes, linear)
> >     >       >     >       > [    0.000000] mem auto-init: stack:off,
> heap alloc:on, heap free:on
> >     >       >     >       > [    0.000000] mem auto-init: clearing
> system memory may take some time...
> >     >       >     >       > [    0.000000] Memory: 1121936K/1641024K
> available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K
> >     >       init, 262K bss,
> >     >       >     >       256944K reserved,
> >     >       >     >       > 262144K cma-reserved)
> >     >       >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> >     >       >     >       > [    0.000000] rcu: Hierarchical RCU
> implementation.
> >     >       >     >       > [    0.000000] rcu: RCU event tracing is
> enabled.
> >     >       >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> >     >       >     >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
> >     >       >     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
> >     >       >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64,
> preallocated irqs: 0
> >     >       >     >       > [    0.000000] Root IRQ handler:
> gic_handle_irq
> >     >       >     >       > [    0.000000] arch_timer: cp15 timer(s)
> running at 100.00MHz (virt).
> >     >       >     >       > [    0.000000] clocksource:
> arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0,
> >     >       max_idle_ns: 440795205315 ns
> >     >       >     >       > [    0.000000] sched_clock: 56 bits at
> 100MHz, resolution 10ns, wraps every 4398046511100ns
> >     >       >     >       > [    0.000258] Console: colour dummy device
> 80x25
> >     >       >     >       > [    0.310231] printk: console [hvc0] enabled
> >     >       >     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> >     >       >     >       > [    0.324851] pid_max: default: 32768
> minimum: 301
> >     >       >     >       > [    0.329706] LSM: Security Framework
> initializing
> >     >       >     >       > [    0.334204] Yama: becoming mindful.
> >     >       >     >       > [    0.337865] Mount-cache hash table
> entries: 4096 (order: 3, 32768 bytes, linear)
> >     >       >     >       > [    0.345180] Mountpoint-cache hash table
> entries: 4096 (order: 3, 32768 bytes, linear)
> >     >       >     >       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
> >     >       >     >       > [    0.359132] Grant table initialized
> >     >       >     >       > [    0.362664] xen:events: Using FIFO-based
> ABI
> >     >       >     >       > [    0.366993] Xen: initializing cpu0
> >     >       >     >       > [    0.370515] rcu: Hierarchical SRCU
> implementation.
> >     >       >     >       > [    0.375930] smp: Bringing up secondary
> CPUs ...
> >     >       >     >       > (XEN) null.c:353: 1 <-- d0v1
> >     >       >     >       > (XEN) d0v1: vGICD: unhandled word write
> 0x000000ffffffff to ICACTIVER0
> >     >       >     >       > [    0.382549] Detected VIPT I-cache on CPU1
> >     >       >     >       > [    0.388712] Xen: initializing cpu1
> >     >       >     >       > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
> >     >       >     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
> >     >       >     >       > [    0.406941] SMP: Total of 2 processors
> activated.
> >     >       >     >       > [    0.411698] CPU features: detected:
> 32-bit EL0 Support
> >     >       >     >       > [    0.416888] CPU features: detected: CRC32 instructions
> >     >       >     >       > [    0.422121] CPU: All CPU(s) started at EL1
> >     >       >     >       > [    0.426248] alternatives: patching kernel code
> >     >       >     >       > [    0.431424] devtmpfs: initialized
> >     >       >     >       > [    0.441454] KASLR enabled
> >     >       >     >       > [    0.441602] clocksource: jiffies: mask:
> 0xffffffff max_cycles: 0xffffffff, max_idle_ns:
> >     >       7645041785100000 ns
> >     >       >     >       > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
> >     >       >     >       > [    0.496183] NET: Registered
> PF_NETLINK/PF_ROUTE protocol family
> >     >       >     >       > [    0.498277] DMA: preallocated 256 KiB
> GFP_KERNEL pool for atomic allocations
> >     >       >     >       > [    0.503772] DMA: preallocated 256 KiB
> GFP_KERNEL|GFP_DMA pool for atomic allocations
> >     >       >     >       > [    0.511610] DMA: preallocated 256 KiB
> GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> >     >       >     >       > [    0.519478] audit: initializing netlink
> subsys (disabled)
> >     >       >     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> >     >       >     >       > [    0.529169] thermal_sys: Registered
> thermal governor 'step_wise'
> >     >       >     >       > [    0.533023] hw-breakpoint: found 6
> breakpoint and 4 watchpoint registers.
> >     >       >     >       > [    0.545608] ASID allocator initialised
> with 32768 entries
> >     >       >     >       > [    0.551030] xen:swiotlb_xen: Warning:
> only able to allocate 4 MB for software IO TLB
> >     >       >     >       > [    0.559332] software IO TLB: mapped [mem
> 0x0000000011800000-0x0000000011c00000] (4MB)
> >     >       >     >       > [    0.583565] HugeTLB registered 1.00 GiB
> page size, pre-allocated 0 pages
> >     >       >     >       > [    0.584721] HugeTLB registered 32.0 MiB
> page size, pre-allocated 0 pages
> >     >       >     >       > [    0.591478] HugeTLB registered 2.00 MiB
> page size, pre-allocated 0 pages
> >     >       >     >       > [    0.598225] HugeTLB registered 64.0 KiB
> page size, pre-allocated 0 pages
> >     >       >     >       > [    0.636520] DRBG: Continuing without
> Jitter RNG
> >     >       >     >       > [    0.737187] raid6: neonx8   gen()  2143
> MB/s
> >     >       >     >       > [    0.805294] raid6: neonx8   xor()  1589
> MB/s
> >     >       >     >       > [    0.873406] raid6: neonx4   gen()  2177
> MB/s
> >     >       >     >       > [    0.941499] raid6: neonx4   xor()  1556
> MB/s
> >     >       >     >       > [    1.009612] raid6: neonx2   gen()  2072
> MB/s
> >     >       >     >       > [    1.077715] raid6: neonx2   xor()  1430
> MB/s
> >     >       >     >       > [    1.145834] raid6: neonx1   gen()  1769
> MB/s
> >     >       >     >       > [    1.213935] raid6: neonx1   xor()  1214
> MB/s
> >     >       >     >       > [    1.282046] raid6: int64x8  gen()  1366
> MB/s
> >     >       >     >       > [    1.350132] raid6: int64x8  xor()   773
> MB/s
> >     >       >     >       > [    1.418259] raid6: int64x4  gen()  1602
> MB/s
> >     >       >     >       > [    1.486349] raid6: int64x4  xor()   851
> MB/s
> >     >       >     >       > [    1.554464] raid6: int64x2  gen()  1396
> MB/s
> >     >       >     >       > [    1.622561] raid6: int64x2  xor()   744
> MB/s
> >     >       >     >       > [    1.690687] raid6: int64x1  gen()  1033
> MB/s
> >     >       >     >       > [    1.758770] raid6: int64x1  xor()   517
> MB/s
> >     >       >     >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> >     >       >     >       > [    1.762941] raid6: .... xor() 1556 MB/s,
> rmw enabled
> >     >       >     >       > [    1.767957] raid6: using neon recovery
> algorithm
> >     >       >     >       > [    1.772824] xen:balloon: Initialising
> balloon driver
> >     >       >     >       > [    1.778021] iommu: Default domain type:
> Translated
> >     >       >     >       > [    1.782584] iommu: DMA domain TLB
> invalidation policy: strict mode
> >     >       >     >       > [    1.789149] SCSI subsystem initialized
> >     >       >     >       > [    1.792820] usbcore: registered new
> interface driver usbfs
> >     >       >     >       > [    1.798254] usbcore: registered new
> interface driver hub
> >     >       >     >       > [    1.803626] usbcore: registered new
> device driver usb
> >     >       >     >       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> >     >       >     >       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> >     >       >     >       > [    1.822903] PTP clock support registered
> >     >       >     >       > [    1.826893] EDAC MC: Ver: 3.0.0
> >     >       >     >       > [    1.830375] zynqmp-ipi-mbox
> mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> >     >       >     >       > [    1.838863] zynqmp-ipi-mbox
> mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> >     >       >     >       > [    1.847356] zynqmp-ipi-mbox
> mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> >     >       >     >       > [    1.855907] FPGA manager framework
> >     >       >     >       > [    1.859952] clocksource: Switched to
> clocksource arch_sys_counter
> >     >       >     >       > [    1.871712] NET: Registered PF_INET
> protocol family
> >     >       >     >       > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> >     >       >     >       > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> >     >       >     >       > [    1.887078] Table-perturb hash table
> entries: 65536 (order: 6, 262144 bytes, linear)
> >     >       >     >       > [    1.894846] TCP established hash table
> entries: 16384 (order: 5, 131072 bytes, linear)
> >     >       >     >       > [    1.902900] TCP bind hash table entries:
> 16384 (order: 6, 262144 bytes, linear)
> >     >       >     >       > [    1.910350] TCP: Hash tables configured
> (established 16384 bind 16384)
> >     >       >     >       > [    1.916778] UDP hash table entries: 1024
> (order: 3, 32768 bytes, linear)
> >     >       >     >       > [    1.923509] UDP-Lite hash table entries:
> 1024 (order: 3, 32768 bytes, linear)
> >     >       >     >       > [    1.930759] NET: Registered
> PF_UNIX/PF_LOCAL protocol family
> >     >       >     >       > [    1.936834] RPC: Registered named UNIX
> socket transport module.
> >     >       >     >       > [    1.942342] RPC: Registered udp transport module.
> >     >       >     >       > [    1.947088] RPC: Registered tcp transport module.
> >     >       >     >       > [    1.951843] RPC: Registered tcp NFSv4.1
> backchannel transport module.
> >     >       >     >       > [    1.958334] PCI: CLS 0 bytes, default 64
> >     >       >     >       > [    1.962709] Trying to unpack rootfs image as initramfs...
> >     >       >     >       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> >     >       >     >       > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
> >     >       >     >       > [    2.021045] NET: Registered PF_ALG
> protocol family
> >     >       >     >       > [    2.021122] xor: measuring software
> checksum speed
> >     >       >     >       > [    2.029347]    8regs           :  2366
> MB/sec
> >     >       >     >       > [    2.033081]    32regs          :  2802
> MB/sec
> >     >       >     >       > [    2.038223]    arm64_neon      :  2320
> MB/sec
> >     >       >     >       > [    2.038385] xor: using function: 32regs
> (2802 MB/sec)
> >     >       >     >       > [    2.043614] Block layer SCSI generic
> (bsg) driver version 0.4 loaded (major 247)
> >     >       >     >       > [    2.050959] io scheduler mq-deadline
> registered
> >     >       >     >       > [    2.055521] io scheduler kyber registered
> >     >       >     >       > [    2.068227] xen:xen_evtchn: Event-channel device installed
> >     >       >     >       > [    2.069281] Serial: 8250/16550 driver, 4
> ports, IRQ sharing disabled
> >     >       >     >       > [    2.076190] cacheinfo: Unable to detect
> cache hierarchy for CPU 0
> >     >       >     >       > [    2.085548] brd: module loaded
> >     >       >     >       > [    2.089290] loop: module loaded
> >     >       >     >       > [    2.089341] Invalid max_queues (4), will
> use default max: 2.
> >     >       >     >       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> >     >       >     >       > [    2.098655] xen_netfront: Initialising
> Xen virtual ethernet driver
> >     >       >     >       > [    2.104156] usbcore: registered new
> interface driver rtl8150
> >     >       >     >       > [    2.109813] usbcore: registered new
> interface driver r8152
> >     >       >     >       > [    2.115367] usbcore: registered new
> interface driver asix
> >     >       >     >       > [    2.120794] usbcore: registered new
> interface driver ax88179_178a
> >     >       >     >       > [    2.126934] usbcore: registered new
> interface driver cdc_ether
> >     >       >     >       > [    2.132816] usbcore: registered new
> interface driver cdc_eem
> >     >       >     >       > [    2.138527] usbcore: registered new
> interface driver net1080
> >     >       >     >       > [    2.144256] usbcore: registered new
> interface driver cdc_subset
> >     >       >     >       > [    2.150205] usbcore: registered new
> interface driver zaurus
> >     >       >     >       > [    2.155837] usbcore: registered new
> interface driver cdc_ncm
> >     >       >     >       > [    2.161550] usbcore: registered new
> interface driver r8153_ecm
> >     >       >     >       > [    2.168240] usbcore: registered new
> interface driver cdc_acm
> >     >       >     >       > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
> >     >       >     >       > [    2.181358] usbcore: registered new
> interface driver uas
> >     >       >     >       > [    2.186547] usbcore: registered new
> interface driver usb-storage
> >     >       >     >       > [    2.192643] usbcore: registered new
> interface driver ftdi_sio
> >     >       >     >       > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
> >     >       >     >       > [    2.206118] udc-core: couldn't find an
> available UDC - added [g_mass_storage] to list of pending
> >     >       drivers
> >     >       >     >       > [    2.215332] i2c_dev: i2c /dev entries
> driver
> >     >       >     >       > [    2.220467] xen_wdt xen_wdt: initialized
> (timeout=60s, nowayout=0)
> >     >       >     >       > [    2.225923] device-mapper: uevent:
> version 1.0.3
> >     >       >     >       > [    2.230668] device-mapper: ioctl:
> 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
> >     >       >     >       > [    2.239315] EDAC MC0: Giving out device
> to module 1 controller synps_ddr_controller: DEV synps_edac
> >     >       (INTERRUPT)
> >     >       >     >       > [    2.249405] EDAC DEVICE0: Giving out
> device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV
> >     >       >     >       ff960000.memory-controller (INTERRUPT)
> >     >       >     >       > [    2.261719] sdhci: Secure Digital Host
> Controller Interface driver
> >     >       >     >       > [    2.267487] sdhci: Copyright(c) Pierre
> Ossman
> >     >       >     >       > [    2.271890] sdhci-pltfm: SDHCI platform
> and OF driver helper
> >     >       >     >       > [    2.278157] ledtrig-cpu: registered to
> indicate activity on CPUs
> >     >       >     >       > [    2.283816] zynqmp_firmware_probe
> Platform Management API v1.1
> >     >       >     >       > [    2.289554] zynqmp_firmware_probe
> Trustzone version v1.0
> >     >       >     >       > [    2.327875] securefw securefw: securefw
> probed
> >     >       >     >       > [    2.328324] alg: No test for
> xilinx-zynqmp-aes (zynqmp-aes)
> >     >       >     >       > [    2.332563] zynqmp_aes
> firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
> >     >       >     >       > [    2.341183] alg: No test for
> xilinx-zynqmp-rsa (zynqmp-rsa)
> >     >       >     >       > [    2.347667] remoteproc remoteproc0:
> ff9a0000.rf5ss:r5f_0 is available
> >     >       >     >       > [    2.353003] remoteproc remoteproc1:
> ff9a0000.rf5ss:r5f_1 is available
> >     >       >     >       > [    2.362605] fpga_manager fpga0: Xilinx
> ZynqMP FPGA Manager registered
> >     >       >     >       > [    2.366540] viper-xen-proxy
> viper-xen-proxy: Viper Xen Proxy registered
> >     >       >     >       > [    2.372525] viper-vdpp a4000000.vdpp:
> Device Tree Probing
> >     >       >     >       > [    2.377778] viper-vdpp a4000000.vdpp:
> VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >     >       >     >       > [    2.386432] viper-vdpp a4000000.vdpp:
> Unable to register tamper handler. Retrying...
> >     >       >     >       > [    2.394094] viper-vdpp-net
> a5000000.vdpp_net: Device Tree Probing
> >     >       >     >       > [    2.399854] viper-vdpp-net
> a5000000.vdpp_net: Device registered
> >     >       >     >       > [    2.405931] viper-vdpp-stat
> a8000000.vdpp_stat: Device Tree Probing
> >     >       >     >       > [    2.412037] viper-vdpp-stat
> a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
> >     >       >     >       > [    2.420856] default preset
> >     >       >     >       > [    2.423797] viper-vdpp-stat
> a8000000.vdpp_stat: Device registered
> >     >       >     >       > [    2.430054] viper-vdpp-rng
> ac000000.vdpp_rng: Device Tree Probing
> >     >       >     >       > [    2.435948] viper-vdpp-rng
> ac000000.vdpp_rng: Device registered
> >     >       >     >       > [    2.441976] vmcu driver init
> >     >       >     >       > [    2.444922] VMCU: : (240:0) registered
> >     >       >     >       > [    2.444956] In K81 Updater init
> >     >       >     >       > [    2.449003] pktgen: Packet Generator for
> packet performance testing. Version: 2.75
> >     >       >     >       > [    2.468833] Initializing XFRM netlink
> socket
> >     >       >     >       > [    2.468902] NET: Registered PF_PACKET
> protocol family
> >     >       >     >       > [    2.472729] Bridge firewalling registered
> >     >       >     >       > [    2.476785] 8021q: 802.1Q VLAN Support
> v1.8
> >     >       >     >       > [    2.481341] registered taskstats version 1
> >     >       >     >       > [    2.486394] Btrfs loaded,
> crc32c=crc32c-generic, zoned=no, fsverity=no
> >     >       >     >       > [    2.503145] ff010000.serial: ttyPS1 at
> MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
> >     >       >     >       > [    2.507103] of-fpga-region fpga-full:
> FPGA Region probed
> >     >       >     >       > [    2.512986] xilinx-zynqmp-dma
> fd500000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.520267] xilinx-zynqmp-dma
> fd510000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.528239] xilinx-zynqmp-dma
> fd520000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.536152] xilinx-zynqmp-dma
> fd530000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.544153] xilinx-zynqmp-dma
> fd540000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.552127] xilinx-zynqmp-dma
> fd550000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.560178] xilinx-zynqmp-dma
> ffa80000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.567987] xilinx-zynqmp-dma
> ffa90000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.576018] xilinx-zynqmp-dma
> ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.583889] xilinx-zynqmp-dma
> ffab0000.dma-controller: ZynqMP DMA driver Probe success
> >     >       >     >       > [    2.946379] spi-nor spi0.0: mt25qu512a
> (131072 Kbytes)
> >     >       >     >       > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> >     >       >     >       > [    2.952393] Creating 2 MTD partitions on
> "spi0.0":
> >     >       >     >       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> >     >       >     >       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> >     >       >     >       > [    2.968694] macb ff0b0000.ethernet: Not
> enabling partial store and forward
> >     >       >     >       > [    2.975333] macb ff0b0000.ethernet eth0:
> Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25
> >     >       (18:41:fe:0f:ff:02)
> >     >       >     >       > [    2.984472] macb ff0c0000.ethernet: Not
> enabling partial store and forward
> >     >       >     >       > [    2.992144] macb ff0c0000.ethernet eth1:
> Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26
> >     >       (18:41:fe:0f:ff:03)
> >     >       >     >       > [    3.001043] viper_enet viper_enet: Viper
> power GPIOs initialised
> >     >       >     >       > [    3.007313] viper_enet viper_enet vnet0
> (uninitialized): Validate interface QSGMII
> >     >       >     >       > [    3.014914] viper_enet viper_enet vnet1
> (uninitialized): Validate interface QSGMII
> >     >       >     >       > [    3.022138] viper_enet viper_enet vnet1
> (uninitialized): Validate interface type 18
> >     >       >     >       > [    3.030274] viper_enet viper_enet vnet2
> (uninitialized): Validate interface QSGMII
> >     >       >     >       > [    3.037785] viper_enet viper_enet vnet3
> (uninitialized): Validate interface QSGMII
> >     >       >     >       > [    3.045301] viper_enet viper_enet: Viper
> enet registered
> >     >       >     >       > [    3.050958] xilinx-axipmon
> ffa00000.perf-monitor: Probed Xilinx APM
> >     >       >     >       > [    3.057135] xilinx-axipmon
> fd0b0000.perf-monitor: Probed Xilinx APM
> >     >       >     >       > [    3.063538] xilinx-axipmon
> fd490000.perf-monitor: Probed Xilinx APM
> >     >       >     >       > [    3.069920] xilinx-axipmon
> ffa10000.perf-monitor: Probed Xilinx APM
> >     >       >     >       > [    3.097729] si70xx: probe of 2-0040
> failed with error -5
> >     >       >     >       > [    3.098042] cdns-wdt fd4d0000.watchdog:
> Xilinx Watchdog Timer with timeout 60s
> >     >       >     >       > [    3.105111] cdns-wdt ff150000.watchdog:
> Xilinx Watchdog Timer with timeout 10s
> >     >       >     >       > [    3.112457] viper-tamper viper-tamper:
> Device registered
> >     >       >     >       > [    3.117593] active_bank active_bank: boot bank: 1
> >     >       >     >       > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> >     >       >     >       > [    3.128247] viper-vdpp a4000000.vdpp:
> Device Tree Probing
> >     >       >     >       > [    3.133439] viper-vdpp a4000000.vdpp:
> VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >     >       >     >       > [    3.142151] viper-vdpp a4000000.vdpp:
> Tamper handler registered
> >     >       >     >       > [    3.147438] viper-vdpp a4000000.vdpp:
> Device registered
> >     >       >     >       > [    3.153007] lpc55_l2 spi1.0: registered
> handler for protocol 0
> >     >       >     >       > [    3.158582] lpc55_user lpc55_user: The
> major number for your device is 236
> >     >       >     >       > [    3.165976] lpc55_l2 spi1.0: registered
> handler for protocol 1
> >     >       >     >       > [    3.181999] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >     >       >     >       > [    3.182856] rtc-lpc55 rtc_lpc55:
> registered as rtc0
> >     >       >     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu
> still not ready?
> >     >       >     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu
> still not ready?
> >     >       >     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu
> still not ready?
> >     >       >     >       > [    3.202932] mmc0: SDHCI controller on
> ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> >     >       >     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu
> still not ready?
> >     >       >     >       > [    3.215694] lpc55_l2 spi1.0: rx error:
> -110
> >     >       >     >       > [    3.284438] mmc0: new HS200 MMC card at
> address 0001
> >     >       >     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G
> 14.6 GiB
> >     >       >     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6
> p7 p8
> >     >       >     >       > [    3.293915] mmcblk0boot0: mmc0:0001
> SEM16G 4.00 MiB
> >     >       >     >       > [    3.299054] mmcblk0boot1: mmc0:0001
> SEM16G 4.00 MiB
> >     >       >     >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> >     >       >     >       > [    3.582676] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >     >       >     >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> >     >       >     >       > [    3.591252] cdns-i2c ff020000.i2c:
> recovery information complete
> >     >       >     >       > [    3.597085] at24 0-0050: supply vcc not
> found, using dummy regulator
> >     >       >     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu
> still not ready?
> >     >       >     >       > [    3.608093] at24 0-0050: 256 byte spd
> EEPROM, read-only
> >     >       >     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu
> still not ready?
> >     >       >     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu
> still not ready?
> >     >       >     >       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> >     >       >     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu
> still not ready?
> >     >       >     >       > [    3.633253] lpc55_l2 spi1.0: rx error:
> -110
> >     >       >     >       > [    3.639104] k81_bootloader 0-0010: probe
> >     >       >     >       > [    3.641628] VMCU: : (235:0) registered
> >     >       >     >       > [    3.641635] k81_bootloader 0-0010: probe
> completed
> >     >       >     >       > [    3.668346] cdns-i2c ff020000.i2c: 400
> kHz mmio ff020000 irq 28
> >     >       >     >       > [    3.669154] cdns-i2c ff030000.i2c:
> recovery information complete
> >     >       >     >       > [    3.675412] lm75 1-0048: supply vs not
> found, using dummy regulator
> >     >       >     >       > [    3.682920] lm75 1-0048: hwmon1: sensor
> 'tmp112'
> >     >       >     >       > [    3.686548] i2c i2c-1: Added multiplexed
> i2c bus 3
> >     >       >     >       > [    3.690795] i2c i2c-1: Added multiplexed
> i2c bus 4
> >     >       >     >       > [    3.695629] i2c i2c-1: Added multiplexed
> i2c bus 5
> >     >       >     >       > [    3.700492] i2c i2c-1: Added multiplexed
> i2c bus 6
> >     >       >     >       > [    3.705157] pca954x 1-0070: registered 4
> multiplexed busses for I2C switch pca9546
> >     >       >     >       > [    3.713049] at24 1-0054: supply vcc not
> found, using dummy regulator
> >     >       >     >       > [    3.720067] at24 1-0054: 1024 byte 24c08
> EEPROM, read-only
> >     >       >     >       > [    3.724761] cdns-i2c ff030000.i2c: 100
> kHz mmio ff030000 irq 29
> >     >       >     >       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> >     >       >     >       > [    3.737549] sfp_register_socket: got
> sfp_bus
> >     >       >     >       > [    3.740709] sfp_register_socket: register sfp_bus
> >     >       >     >       > [    3.745459] sfp_register_bus: ops ok!
> >     >       >     >       > [    3.749179] sfp_register_bus: Try to
> attach
> >     >       >     >       > [    3.753419] sfp_register_bus: Attach
> succeeded
> >     >       >     >       > [    3.757914] sfp_register_bus: upstream
> ops attach
> >     >       >     >       > [    3.762677] sfp_register_bus: Bus
> registered
> >     >       >     >       > [    3.766999] sfp_register_socket: register sfp_bus succeeded
> >     >       >     >       > [    3.775870] of_cfs_init
> >     >       >     >       > [    3.776000] of_cfs_init: OK
> >     >       >     >       > [    3.778211] clk: Not disabling unused
> clocks
> >     >       >     >       > [   11.278477] Freeing initrd memory: 206056K
> >     >       >     >       > [   11.279406] Freeing unused kernel memory: 1536K
> >     >       >     >       > [   11.314006] Checked W+X mappings: passed, no W+X pages found
> >     >       >     >       > [   11.314142] Run /init as init process
> >     >       >     >       > INIT: version 3.01 booting
> >     >       >     >       > fsck (busybox 1.35.0)
> >     >       >     >       > /dev/mmcblk0p1: clean, 12/102400 files,
> 238162/409600 blocks
> >     >       >     >       > /dev/mmcblk0p2: clean, 12/102400 files,
> 171972/409600 blocks
> >     >       >     >       > /dev/mmcblk0p3 was not cleanly unmounted,
> check forced.
> >     >       >     >       > /dev/mmcblk0p3: 20/4096 files (0.0%
> non-contiguous), 663/16384 blocks
> >     >       >     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted
> filesystem without journal. Opts: (null). Quota mode:
> >     >       disabled.
> >     >       >     >       > Starting random number generator daemon.
> >     >       >     >       > [   11.580662] random: crng init done
> >     >       >     >       > Starting udev
> >     >       >     >       > [   11.613159] udevd[142]: starting version
> 3.2.10
> >     >       >     >       > [   11.620385] udevd[143]: starting
> eudev-3.2.10
> >     >       >     >       > [   11.704481] macb ff0b0000.ethernet
> control_red: renamed from eth0
> >     >       >     >       > [   11.720264] macb ff0c0000.ethernet
> control_black: renamed from eth1
> >     >       >     >       > [   12.063396] ip_local_port_range: prefer
> different parity for start/end values.
> >     >       >     >       > [   12.084801] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
> >     >       >     >       > Mon Feb 27 08:40:53 UTC 2023
> >     >       >     >       > [   12.115309] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_set_time: bad result
> >     >       >     >       > hwclock: RTC_SET_TIME: Invalid exchange
> >     >       >     >       > [   12.131027] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >     >       >     >       > Starting mcud
> >     >       >     >       > INIT: Entering runlevel: 5
> >     >       >     >       > Configuring network interfaces... done.
> >     >       >     >       > resetting network interface
> >     >       >     >       > [   12.718295] macb ff0b0000.ethernet
> control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx
> >     >       PCS/PMA PHY] (irq=POLL)
> >     >       >     >       > [   12.723919] macb ff0b0000.ethernet
> control_red: configuring for phy/gmii link mode
> >     >       >     >       > [   12.732151] pps pps0: new PPS source ptp0
> >     >       >     >       > [   12.735563] macb ff0b0000.ethernet:
> gem-ptp-timer ptp clock registered.
> >     >       >     >       > [   12.745724] macb ff0c0000.ethernet
> control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx
> >     >       PCS/PMA PHY]
> >     >       >     >       (irq=POLL)
> >     >       >     >       > [   12.753469] macb ff0c0000.ethernet
> control_black: configuring for phy/gmii link mode
> >     >       >     >       > [   12.761804] pps pps1: new PPS source ptp1
> >     >       >     >       > [   12.765398] macb ff0c0000.ethernet:
> gem-ptp-timer ptp clock registered.
> >     >       >     >       > Auto-negotiation: off
> >     >       >     >       > Auto-negotiation: off
> >     >       >     >       > [   16.828151] macb ff0b0000.ethernet
> control_red: unable to generate target frequency: 125000000 Hz
> >     >       >     >       > [   16.834553] macb ff0b0000.ethernet
> control_red: Link is Up - 1Gbps/Full - flow control off
> >     >       >     >       > [   16.860552] macb ff0c0000.ethernet
> control_black: unable to generate target frequency: 125000000 Hz
> >     >       >     >       > [   16.867052] macb ff0c0000.ethernet
> control_black: Link is Up - 1Gbps/Full - flow control off
> >     >       >     >       > Starting Failsafe Secure Shell server in
> port 2222: sshd
> >     >       >     >       > done.
> >     >       >     >       > Starting rpcbind daemon...done.
> >     >       >     >       >
> >     >       >     >       > [   17.093019] rtc-lpc55 rtc_lpc55:
> lpc55_rtc_get_time: bad result: 1
> >     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
> >     >       >     >       > Starting State Manager Service
> >     >       >     >       > Start state-manager restarter...
> >     >       >     >       > (XEN) d0v1 Forwarding AES operation:
> 3254779951
> >     >       >     >       > Starting /usr/sbin/xenstored....[
> 17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa
> >     >       devid 1 transid 744
> >     >       >     >       /dev/dm-0
> >     >       >     >       > scanned by udevd (385)
> >     >       >     >       > [   17.349933] BTRFS info (device dm-0):
> disk space caching is enabled
> >     >       >     >       > [   17.350670] BTRFS info (device dm-0): has skinny extents
> >     >       >     >       > [   17.364384] BTRFS info (device dm-0):
> enabling ssd optimizations
> >     >       >     >       > [   17.830462] BTRFS: device fsid
> 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6
> >     >       /dev/mapper/client_prov scanned by
> >     >       >     >       mkfs.btrfs
> >     >       >     >       > (526)
> >     >       >     >       > [   17.872699] BTRFS info (device dm-1):
> using free space tree
> >     >       >     >       > [   17.872771] BTRFS info (device dm-1): has skinny extents
> >     >       >     >       > [   17.878114] BTRFS info (device dm-1):
> flagging fs with big metadata feature
> >     >       >     >       > [   17.894289] BTRFS info (device dm-1):
> enabling ssd optimizations
> >     >       >     >       > [   17.895695] BTRFS info (device dm-1):
> checking UUID tree
> >     >       >     >       >
> >     >       >     >       > Setting domain 0 name, domid and JSON
> config...
> >     >       >     >       > Done setting up Dom0
> >     >       >     >       > Starting xenconsoled...
> >     >       >     >       > Starting QEMU as disk backend for dom0
> >     >       >     >       > Starting domain watchdog daemon:
> xenwatchdogd startup
> >     >       >     >       >
> >     >       >     >       > [   18.408647] BTRFS: device fsid
> 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6
> >     >       /dev/mapper/client_config scanned by
> >     >       >     >       mkfs.btrfs
> >     >       >     >       > (574)
> >     >       >     >       > [done]
> >     >       >     >       > [   18.465552] BTRFS info (device dm-2):
> using free space tree
> >     >       >     >       > [   18.465629] BTRFS info (device dm-2): has skinny extents
> >     >       >     >       > [   18.471002] BTRFS info (device dm-2):
> flagging fs with big metadata feature
> >     >       >     >       > Starting crond: [   18.482371] BTRFS info
> (device dm-2): enabling ssd optimizations
> >     >       >     >       > [   18.486659] BTRFS info (device dm-2):
> checking UUID tree
> >     >       >     >       > OK
> >     >       >     >       > starting rsyslogd ... Log partition ready
> after 0 poll loops
> >     >       >     >       > done
> >     >       >     >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> >     >       >     >       > [   18.670637] BTRFS: device fsid
> 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3
> >     >       scanned by udevd (518)
> >     >       >     >       >
> >     >       >     >       > Please insert USB token and enter your role
> in login prompt.
> >     >       >     >       >
> >     >       >     >       > login:
> >     >       >     >       >
> >     >       >     >       > Regards,
> >     >       >     >       > O.
> >     >       >     >       >
> >     >       >     >       >
> >     >       >     >       > On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >     >       >     >       >       Hi Oleg,
> >     >       >     >       >
> >     >       >     >       >       Here is the issue from your logs:
> >     >       >     >       >
> >     >       >     >       >       SError Interrupt on CPU0, code
> 0xbe000000 -- SError
> >     >       >     >       >
> >     >       >     >       >       SErrors are special signals to notify
> software of serious hardware
> >     >       >     >       >       errors.  Something is going very
> wrong. Defective hardware is a
> >     >       >     >       >       possibility.  Another possibility if
> software accessing address ranges
> >     >       >     >       >       that it is not supposed to, sometimes
> it causes SErrors.
> >     >       >     >       >
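For reference, the `code 0xbe000000` in the panic line is the ESR_ELx syndrome value, which can be split into its ARMv8-A fields. A minimal sketch (the helper below is illustrative, not taken from any kernel source; field names follow the architecture manual):

```python
# Decode an ARMv8-A ESR_ELx value as reported for an SError, e.g.
# "SError Interrupt on CPU0, code 0xbe000000" from the log above.
def decode_serror_esr(esr: int) -> dict:
    return {
        "EC":  (esr >> 26) & 0x3F,  # exception class; 0x2F means SError interrupt
        "IL":  (esr >> 25) & 0x1,   # instruction length bit (RES1 for SError)
        "IDS": (esr >> 24) & 0x1,   # 1 = implementation-defined syndrome
        "ISS": esr & 0xFFFFFF,      # remaining syndrome bits
    }

fields = decode_serror_esr(0xBE000000)
assert fields["EC"] == 0x2F   # confirms the exception class really is SError
assert fields["IDS"] == 0     # architecturally-defined syndrome format
assert fields["ISS"] == 0     # but the ISS carries no further detail
print(fields)
```

With ISS = 0 the syndrome gives no extra information about the error source, which is consistent with Stefano's point that only the hardware (or an illegal access it observed) knows what went wrong.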
> >     >       >     >       >       Cheers,
> >     >       >     >       >
> >     >       >     >       >       Stefano
> >     >       >     >       >
> >     >       >     >       >
> >     >       >     >       >
> >     >       >     >       >       On Mon, 24 Apr 2023, Oleg Nikitenko
> wrote:
> >     >       >     >       >
> >     >       >     >       >       > Hello,
> >     >       >     >       >       >
> >     >       >     >       >       > Thanks guys.
> >     >       >     >       >       > I found out where the problem was.
> >     >       >     >       >       > Now dom0 booted more. But I have a
> new one.
> >     >       >     >       >       > This is a kernel panic during Dom0
> loading.
> >     >       >     >       >       > Maybe someone is able to suggest
> something ?
> >     >       >     >       >       >
> >     >       >     >       >       > Regards,
> >     >       >     >       >       > O.
> >     >       >     >       >       >
> >     >       >     >       >       > [    3.771362] sfp_register_bus:
> upstream ops attach
> >     >       >     >       >       > [    3.776119] sfp_register_bus: Bus registered
> >     >       >     >       >       > [    3.780459] sfp_register_socket:
> register sfp_bus succeeded
> >     >       >     >       >       > [    3.789399] of_cfs_init
> >     >       >     >       >       > [    3.789499] of_cfs_init: OK
> >     >       >     >       >       > [    3.791685] clk: Not disabling
> unused clocks
> >     >       >     >       >       > [   11.010355] SError Interrupt on
> CPU0, code 0xbe000000 -- SError
> >     >       >     >       >       > [   11.010380] CPU: 0 PID: 9 Comm:
> kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> >     >       >     >       >       > [   11.010393] Workqueue:
> events_unbound async_run_entry_fn
> >     >       >     >       >       > [   11.010414] pstate: 60000005
> (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> >     >       >     >       >       > [   11.010422] pc :
> simple_write_end+0xd0/0x130
> >     >       >     >       >       > [   11.010431] lr :
> generic_perform_write+0x118/0x1e0
> >     >       >     >       >       > [   11.010438] sp : ffffffc00809b910
> >     >       >     >       >       > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> >     >       >     >       >       > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> >     >       >     >       >       > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> >     >       >     >       >       > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> >     >       >     >       >       > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> >     >       >     >       >       > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> >     >       >     >       >       > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> >     >       >     >       >       > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> >     >       >     >       >       > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> >     >       >     >       >       > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> >     >       >     >       >       > [   11.010534] Kernel panic - not
> syncing: Asynchronous SError Interrupt
> >     >       >     >       >       > [   11.010539] CPU: 0 PID: 9 Comm:
> kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> >     >       >     >       >       > [   11.010545] Hardware name: D14
> Viper Board - White Unit (DT)
> >     >       >     >       >       > [   11.010548] Workqueue:
> events_unbound async_run_entry_fn
> >     >       >     >       >       > [   11.010556] Call trace:
> >     >       >     >       >       > [   11.010558]
>  dump_backtrace+0x0/0x1c4
> >     >       >     >       >       > [   11.010567]  show_stack+0x18/0x2c
> >     >       >     >       >       > [   11.010574]
>  dump_stack_lvl+0x7c/0xa0
> >     >       >     >       >       > [   11.010583]  dump_stack+0x18/0x34
> >     >       >     >       >       > [   11.010588]  panic+0x14c/0x2f8
> >     >       >     >       >       > [   11.010597]
>  print_tainted+0x0/0xb0
> >     >       >     >       >       > [   11.010606]
>  arm64_serror_panic+0x6c/0x7c
> >     >       >     >       >       > [   11.010614]  do_serror+0x28/0x60
> >     >       >     >       >       > [   11.010621]
>  el1h_64_error_handler+0x30/0x50
> >     >       >     >       >       > [   11.010628]
>  el1h_64_error+0x78/0x7c
> >     >       >     >       >       > [   11.010633]
>  simple_write_end+0xd0/0x130
> >     >       >     >       >       > [   11.010639]
>  generic_perform_write+0x118/0x1e0
> >     >       >     >       >       > [   11.010644]
>  __generic_file_write_iter+0x138/0x1c4
> >     >       >     >       >       > [   11.010650]
>  generic_file_write_iter+0x78/0xd0
> >     >       >     >       >       > [   11.010656]
>  __kernel_write+0xfc/0x2ac
> >     >       >     >       >       > [   11.010665]
>  kernel_write+0x88/0x160
> >     >       >     >       >       > [   11.010673]  xwrite+0x44/0x94
> >     >       >     >       >       > [   11.010680]  do_copy+0xa8/0x104
> >     >       >     >       >       > [   11.010686]
>  write_buffer+0x38/0x58
> >     >       >     >       >       > [   11.010692]
>  flush_buffer+0x4c/0xbc
> >     >       >     >       >       > [   11.010698]  __gunzip+0x280/0x310
> >     >       >     >       >       > [   11.010704]  gunzip+0x1c/0x28
> >     >       >     >       >       > [   11.010709]
>  unpack_to_rootfs+0x170/0x2b0
> >     >       >     >       >       > [   11.010715]
>  do_populate_rootfs+0x80/0x164
> >     >       >     >       >       > [   11.010722]
>  async_run_entry_fn+0x48/0x164
> >     >       >     >       >       > [   11.010728]
>  process_one_work+0x1e4/0x3a0
> >     >       >     >       >       > [   11.010736]
>  worker_thread+0x7c/0x4c0
> >     >       >     >       >       > [   11.010743]  kthread+0x120/0x130
> >     >       >     >       >       > [   11.010750]
>  ret_from_fork+0x10/0x20
> >     >       >     >       >       > [   11.010757] SMP: stopping
> secondary CPUs
> >     >       >     >       >       > [   11.010784] Kernel Offset:
> 0x2f61200000 from 0xffffffc008000000
> >     >       >     >       >       > [   11.010788] PHYS_OFFSET: 0x0
> >     >       >     >       >       > [   11.010790] CPU features:
> 0x00000401,00000842
> >     >       >     >       >       > [   11.010795] Memory Limit: none
> >     >       >     >       >       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> >     >       >     >       >       >
> >     >       >     >       >       > On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
> >     >       >     >       >       >       Hi Oleg,
> >     >       >     >       >       >
> >     >       >     >       >       >       On 21/04/2023 14:49, Oleg
> Nikitenko wrote:
> >     >       >     >       >       >       >
> >     >       >     >       >       >       >
> >     >       >     >       >       >       >
> >     >       >     >       >       >       > Hello Michal,
> >     >       >     >       >       >       >
> >     >       >     >       >       >       > I was not able to enable
> earlyprintk in the xen for now.
> >     >       >     >       >       >       > I decided to choose another
> way.
> >     >       >     >       >       >       > This is the Xen command line that I found out completely.
> >     >       >     >       >       >       >
> >     >       >     >       >       >       > (XEN) $$$$ console=3Ddtuart
> dtuart=3Dserial0 dom0_mem=3D1600M dom0_max_vcpus=3D2 dom0_vcpus_pin
> >     >       bootscrub=3D0
> >     >       >     >       vwfi=3Dnative
> >     >       >     >       >       sched=3Dnull
> >     >       >     >       >       >       timer_slop=3D0
> >     >       >     >       >       >       Yes, adding a printk() in Xen
> was also a good idea.
> >     >       >     >       >       >
> >     >       >     >       >       >       >
> >     >       >     >       >       >       > So you are absolutely right
> about a command line.
> >     >       >     >       >       >       > Now I am going to find out
> why xen did not have the correct parameters from the device
> >     >       tree.
> >     >       >     >       >       >       Maybe you will find this
> document helpful:
> >     >       >     >       >       >
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-=
tree/booting.txt
> <
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-=
tree/booting.txt
> >
> >     >       <
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-=
tree/booting.txt
> <
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-=
tree/booting.txt
> >>
> >     >       >     >       >       >
> >     >       >     >       >       >       ~Michal
> >     >       >     >       >       >
> >     >       >     >       >       >       >
> >     >       >     >       >       >       > Regards,
> >     >       >     >       >       >       > Oleg
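[For reference, the booting.txt document linked above describes passing the Xen and dom0 command lines through properties under the /chosen node of the host device tree. A minimal sketch; the argument values here are just the ones quoted earlier in this thread, and the dom0 arguments are illustrative:]

```dts
/* Sketch of a /chosen node carrying the Xen command line
 * (xen,xen-bootargs) and the dom0 command line (xen,dom0-bootargs),
 * per docs/misc/arm/device-tree/booting.txt. */
/ {
    chosen {
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0";
        xen,dom0-bootargs = "console=hvc0 earlycon=xen";
    };
};
```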

On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:

On 21/04/2023 10:04, Oleg Nikitenko wrote:
> Hello Michal,
>
> Yes, I use yocto.
>
> Yesterday all day long I tried to follow your suggestions.
> I faced a problem.
> Manually in the xen config build file I pasted the strings:
In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.

> CONFIG_EARLY_PRINTK
> CONFIG_EARLY_PRINTK_ZYNQMP
> CONFIG_EARLY_UART_CHOICE_CADENCE
I hope you added =y to them.

Anyway, you have at least the following solutions:
1) Run bitbake xen -c menuconfig to properly set early printk
2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
   CONFIG_EARLY_PRINTK_ZYNQMP=y

~Michal

> Host hangs in build time.
> Maybe I did not set something in the config build file?
>
> Regards,
> Oleg
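[Solution 3 above amounts to adding a single line to the in-tree defconfig before building. A sketch of the resulting fragment; the exact set of EARLY_PRINTK-related options depends on the tree in use:]

```
# appended to xen/arch/arm/configs/arm64_defconfig
CONFIG_EARLY_PRINTK_ZYNQMP=y
```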

On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:

Thanks Michal,

You gave me an idea.
I am going to try it today.

Regards,
O.

On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:

Thanks Stefano.

I am going to do it today.

Regards,
O.

On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:

On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> Hi Michal,
>
> I corrected xen's command line.
> Now it is
> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";

4 colors is way too many for Xen, just do xen_colors=0-0. There is no
advantage in using more than 1 color for Xen.

4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
Each color is 256M. For 1600M you should give at least 7 colors. Try:

    xen_colors=0-0 dom0_colors=1-8

> Unfortunately the result was the same.
>
> (XEN)  - Dom0 mode: Relaxed
> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> (XEN) Coloring general information
> (XEN) Way size: 64kB
> (XEN) Max. number of colors available: 16
> (XEN) Xen color(s): [ 0 ]
> (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> (XEN) Color array allocation failed for dom0
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Error creating domain 0
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
>
> I am going to find out how command line arguments are passed and parsed.
>
> Regards,
> Oleg
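[The arithmetic behind Stefano's advice can be sketched as follows. The way size and color count come from the boot log in this thread ("Way size: 64kB", "Max. number of colors available: 16"); the 4 GiB total RAM is an assumption for illustration (a zcu102-class board):]

```python
import math

# With 16 available colors, each color maps 1/16th of RAM.
total_ram_mib = 4096          # assumed board RAM (illustrative)
num_colors = 16               # "Max. number of colors available: 16" from the log
mib_per_color = total_ram_mib // num_colors  # 256 MiB of RAM per color

dom0_mem_mib = 1600           # dom0_mem=1600M from the command line
colors_needed = math.ceil(dom0_mem_mib / mib_per_color)

print(f"{mib_per_color} MiB per color, dom0 needs >= {colors_needed} colors")
# -> 256 MiB per color, dom0 needs >= 7 colors
```

This is why 4 colors (dom0_colors=4-7) cannot back 1600M of dom0 memory, and why the suggested dom0_colors=1-8 (8 colors) leaves headroom above the 7-color minimum.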

On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:

Hi Michal,

You put my nose into the problem. Thank you.
I am going to use your point.
Let's see what happens.

Regards,
Oleg

On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:

Hi Oleg,

On 19/04/2023 09:03, Oleg Nikitenko wrote:
> Hello Stefano,
>
> Thanks for the clarification.
> My company uses yocto for image generation.
> What kind of information do you need to consult me in this case?
>
> Maybe modules sizes/addresses which were mentioned by @Julien Grall <julien@xen.org>?

Sorry for jumping into the discussion, but FWICS the Xen command line you
provided seems to be not the one Xen booted with. The error you are
observing is most likely due to the dom0 colors configuration not being
specified (i.e. lack of a dom0_colors=<> parameter). Although in the
command line you provided this parameter is set, I strongly doubt that
this is the actual command line in use.

You wrote:
xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";

but:
1) way_szize has a typo
2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
   (XEN) Xen color(s): [ 0 ]

This makes me believe that no colors configuration actually ended up in
the command line that Xen booted with. A single color for Xen is the
default if not specified, and the way size was probably calculated by
asking the HW.

So I would suggest to first cross-check the command line in use.

~Michal

> Regards,
> Oleg
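[One way to cross-check the command line Xen will actually boot with, as suggested above, is to dump the /chosen node from the U-Boot prompt before booting. A sketch, assuming the DTB has already been loaded and its address is known to U-Boot:]

```
=> fdt addr ${fdt_addr}    # point U-Boot's fdt command at the loaded DTB
=> fdt print /chosen       # shows xen,xen-bootargs as Xen will see it
```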

On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:

On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> Hi Julien,
>
> >> This feature has not been merged in Xen upstream yet
>
> > would assume that upstream + the series on the ML [1] work
>
> Please clarify this point.
> Because the two thoughts are controversial.

Hi Oleg,

As Julien wrote, there is nothing controversial. As you are aware,
Xilinx maintains a separate Xen tree specific for Xilinx here:
https://github.com/xilinx/xen

and the branch you are using (xlnx_rebase_4.16) comes from there.

Instead, the upstream Xen tree lives here:
https://xenbits.xen.org/gitweb/?p=xen.git;a=summary

The Cache Coloring feature that you are trying to configure is present
in xlnx_rebase_4.16, but not yet present upstream (there is an
outstanding patch series to add cache coloring to Xen upstream but it
hasn't been merged yet.)

Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
you as you already have Cache Coloring as a feature there.

I take it you are using ImageBuilder to generate the boot configuration?
If so, please post the ImageBuilder config file that you are using.

But from the boot message, it looks like the colors configuration for
Dom0 is incorrect.
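[For context, an ImageBuilder config for uboot-script-gen is a plain file of shell variables along these lines. This is a hypothetical sketch; the variable names (XEN_CMD, DOM0_KERNEL, etc.) and values should be checked against the ImageBuilder README for your version:]

```
MEMORY_START="0x0"
MEMORY_END="0x80000000"
XEN="xen"
XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1600M xen_colors=0-0 dom0_colors=1-8"
DOM0_KERNEL="Image"
DOM0_RAMDISK="dom0-ramdisk.cpio"
NUM_DOMUS=0
```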

Hi Michal,

Thanks.
This compilation previously had a name CONFIG_COLORING.
It mixed me up.

Regards,
Oleg

On Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com> wrote:

Hi Oleg,

On 11/05/2023 12:02, Oleg Nikitenko wrote:
> Hello,
>
> Thanks Stefano.
> Then the next question.
> I cloned the xen repo from the xilinx site https://github.com/Xilinx/xen.git
> I managed to build a xlnx_rebase_4.17 branch in my environment.
> I did it without coloring first. I did not find any color footprints at this branch.
> I realized coloring is not in the xlnx_rebase_4.17 branch yet.
This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the docs:
https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst

It describes the feature and documents the required properties.

~Michal

> On Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
>     We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
>     (twice a year) is tested with cache coloring enabled. Th
ZSBsYXN0IFBldGFsaW51eDxicj4NCiZndDvCoCDCoCDCoHJlbGVhc2UgaXMgMjAyMy4xIGFuZCB0
aGUga2VybmVsIHVzZWQgaXMgdGhpczo8YnI+DQomZ3Q7wqAgwqAgwqA8YSBocmVmPSJodHRwczov
L2dpdGh1Yi5jb20vWGlsaW54L2xpbnV4LXhsbngvdHJlZS94bG54X3JlYmFzZV92Ni4xX0xUUyIg
cmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL1hpbGlu
eC9saW51eC14bG54L3RyZWUveGxueF9yZWJhc2VfdjYuMV9MVFM8L2E+ICZsdDs8YSBocmVmPSJo
dHRwczovL2dpdGh1Yi5jb20vWGlsaW54L2xpbnV4LXhsbngvdHJlZS94bG54X3JlYmFzZV92Ni4x
X0xUUyIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29t
L1hpbGlueC9saW51eC14bG54L3RyZWUveGxueF9yZWJhc2VfdjYuMV9MVFM8L2E+Jmd0Ozxicj4N
CiZndDsgPGJyPg0KJmd0OyA8YnI+DQomZ3Q7wqAgwqAgwqBPbiBUdWUsIDkgTWF5IDIwMjMsIE9s
ZWcgTmlraXRlbmtvIHdyb3RlOjxicj4NCiZndDvCoCDCoCDCoCZndDsgSGVsbG8gZ3V5cyw8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0OyBJIGhhdmUgYSBjb3VwbGUg
b2YgbW9yZSBxdWVzdGlvbnMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0OyBIYXZlIHlvdSBldmVyIHJ1
biB4ZW4gd2l0aCB0aGUgY2FjaGUgY29sb3JpbmcgYXQgWnlucSBVbHRyYVNjYWxlKyBNUFNvQyB6
Y3UxMDIgeGN6dTE1ZWcgPzxicj4NCiZndDvCoCDCoCDCoCZndDsgV2hlbiBkaWQgeW91IHJ1biB4
ZW4gd2l0aCB0aGUgY2FjaGUgY29sb3JpbmcgbGFzdCB0aW1lID88YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7IFdoYXQga2VybmVsIHZlcnNpb24gZGlkIHlvdSB1c2UgZm9yIERvbTAgd2hlbiB5b3UgcmFu
IHhlbiB3aXRoIHRoZSBjYWNoZSBjb2xvcmluZyBsYXN0IHRpbWUgPzxicj4NCiZndDvCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgJmd0
OyBPbGVnPGJyPg0KJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDsg0L/Rgiwg
NSDQvNCw0Y8gMjAyM+KAr9CzLiDQsiAxMTo0OCwgT2xlZyBOaWtpdGVua28gJmx0OzxhIGhyZWY9
Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29v
ZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdt
YWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0
Ozo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBIaSBNaWNoYWwsPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDsgVGhhbmtzLjxicj4NCiZndDvCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgJmd0
OyBPbGVnPGJyPg0KJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDsg0L/Rgiwg
NSDQvNCw0Y8gMjAyM+KAr9CzLiDQsiAxMTozNCwgTWljaGFsIE9yemVsICZsdDs8YSBocmVmPSJt
YWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxA
YW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5j
b20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7Jmd0Ozo8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBIaSBPbGVnLDxicj4NCiZndDvCoCDCoCDCoCZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBSZXBseWluZywgc28gdGhhdCB5b3Ug
ZG8gbm90IG5lZWQgdG8gd2FpdCBmb3IgU3RlZmFuby48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgT24gMDUvMDUvMjAyMyAxMDoyOCwgT2xlZyBO
aWtpdGVua28gd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IEhlbGxvIFN0ZWZhbm8sPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSSB3b3VsZCBsaWtlIHRvIHRyeSBh
IHhlbiBjYWNoZSBjb2xvciBwcm9wZXJ0eSBmcm9tIHRoaXMgcmVwb8KgIDxhIGhyZWY9Imh0dHBz
Oi8veGVuYml0cy54ZW4ub3JnL2dpdC1odHRwL3hlbi5naXQiIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdC1odHRwL3hlbi5naXQ8L2E+
ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXQtaHR0cC94ZW4uZ2l0IiBy
ZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXQtaHR0cC94ZW4uZ2l0PC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
bHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0LWh0dHAveGVuLmdpdCIgcmVs
PSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0
LWh0dHAveGVuLmdpdDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dp
dC1odHRwL3hlbi5naXQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
eGVuYml0cy54ZW4ub3JnL2dpdC1odHRwL3hlbi5naXQ8L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IENvdWxkIHlvdSB0ZWxsIHdob3QgYnJhbmNoIEkgc2hv
dWxkIHVzZSA/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgQ2FjaGUgY29sb3Jpbmcg
ZmVhdHVyZSBpcyBub3QgcGFydCBvZiB0aGUgdXBzdHJlYW0gdHJlZSBhbmQgaXQgaXMgc3RpbGwg
dW5kZXIgcmV2aWV3Ljxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoFlvdSBjYW4gb25s
eSBmaW5kIGl0IGludGVncmF0ZWQgaW4gdGhlIFhpbGlueCBYZW4gdHJlZS48YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgfk1pY2hhbDxicj4NCiZn
dDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgT2xlZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7INC/0YIsIDI4INCw0L/R
gC4gMjAyM+KAr9CzLiDQsiAwMDo1MSwgU3RlZmFubyBTdGFiZWxsaW5pICZsdDs8YSBocmVmPSJt
YWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsg
Jmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0
PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0i
bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGlu
aUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7Jmd0Ozo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgSSBh
bSBmYW1pbGlhciB3aXRoIHRoZSB6Y3UxMDIgYnV0IEkgZG9uJiMzOTt0IGtub3cgaG93IHlvdSBj
b3VsZCBwb3NzaWJseTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oGdlbmVyYXRlIGEgU0Vycm9yLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBJIHN1Z2dlc3QgdG8g
dHJ5IHRvIHVzZSBJbWFnZUJ1aWxkZXIgWzFdIHRvIGdlbmVyYXRlIHRoZSBib290PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgY29uZmlndXJhdGlvbiBhcyBhIHRl
c3QgYmVjYXVzZSB0aGF0IGlzIGtub3duIHRvIHdvcmsgd2VsbCBmb3IgemN1MTAyLjxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqBbMV0gPGEgaHJlZj0iaHR0cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9q
ZWN0L2ltYWdlYnVpbGRlciIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6
Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlcjwvYT4gJmx0OzxhIGhyZWY9Imh0
dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXIiIHJlbD0ibm9yZWZlcnJl
ciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1
aWxkZXI8L2E+Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0
L2ltYWdlYnVpbGRlciIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9n
aXRsYWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlcjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBz
Oi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXIiIHJlbD0ibm9yZWZlcnJlciIg
dGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxk
ZXI8L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoE9uIFRodSwgMjcgQXByIDIwMjMsIE9sZWcgTmlraXRlbmtvIHdy
b3RlOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgSGVs
bG8gU3RlZmFubyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBUaGFu
a3MgZm9yIGNsYXJpZmljYXRpb24uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0OyBXZSBuaWdodGVyIHVzZSBJbWFnZUJ1aWxkZXIgbm9yIHVib290IGJvb3Qg
c2NyaXB0Ljxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsg
QSBtb2RlbCBpcyB6Y3UxMDIgY29tcGF0aWJsZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0OyBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDsgTy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
OyDQstGCLCAyNSDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMjE6MjEsIFN0ZWZhbm8gU3RhYmVsbGlu
aSAmbHQ7PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2Js
YW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9i
bGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyZndDs6PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgVGhpcyBpcyBp
bnRlcmVzdGluZy4gQXJlIHlvdSB1c2luZyBYaWxpbnggaGFyZHdhcmUgYnkgYW55IGNoYW5jZT8g
SWYgc28sPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgd2hpY2ggYm9hcmQ/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoEFyZSB5b3UgdXNpbmcgSW1hZ2VCdWlsZGVyIHRvIGdlbmVyYXRlIHlv
dXIgYm9vdC5zY3IgYm9vdCBzY3JpcHQ/IElmIHNvLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGNvdWxkIHlvdSBwbGVhc2UgcG9zdCB5
b3VyIEltYWdlQnVpbGRlciBjb25maWcgZmlsZT8gSWYgbm90LCBjYW4geW91PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgcG9zdCB0aGUg
c291cmNlIG9mIHlvdXIgdWJvb3QgYm9vdCBzY3JpcHQ/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoFNFcnJvcnMgYXJlIHN1cHBvc2VkIHRvIGJlIHJl
bGF0ZWQgdG8gYSBoYXJkd2FyZSBmYWlsdXJlIG9mIHNvbWUga2luZC48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBZb3UgYXJlIG5vdCBz
dXBwb3NlZCB0byBiZSBhYmxlIHRvIHRyaWdnZXIgYW4gU0Vycm9yIGVhc2lseSBieTxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZxdW90
O21pc3Rha2UmcXVvdDsuIEkgaGF2ZSBub3Qgc2VlbiBTRXJyb3JzIGR1ZSB0byB3cm9uZyBjYWNo
ZSBjb2xvcmluZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoGNvbmZpZ3VyYXRpb25zIG9uIGFueSBYaWxpbnggYm9hcmQgYmVmb3JlLjxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBUaGUgZGlm
ZmVyZW5jZXMgYmV0d2VlbiBYZW4gd2l0aCBhbmQgd2l0aG91dCBjYWNoZSBjb2xvcmluZyBmcm9t
IGE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqBoYXJkd2FyZSBwZXJzcGVjdGl2ZSBhcmU6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoC0gV2l0aCBjYWNoZSBjb2xvcmluZywgdGhlIFNNTVUg
aXMgZW5hYmxlZCBhbmQgZG9lcyBhZGRyZXNzIHRyYW5zbGF0aW9uczxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoMKgIGV2ZW4gZm9yIGRv
bTAuIFdpdGhvdXQgY2FjaGUgY29sb3JpbmcgdGhlIFNNTVUgY291bGQgYmUgZGlzYWJsZWQsIGFu
ZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoMKgIGlmIGVuYWJsZWQsIHRoZSBTTU1VIGRvZXNuJiMzOTt0IGRvIGFueSBhZGRyZXNzIHRy
YW5zbGF0aW9ucyBmb3IgRG9tMC4gSWY8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqDCoCB0aGVyZSBpcyBhIGhhcmR3YXJlIGZhaWx1cmUg
cmVsYXRlZCB0byBTTU1VIGFkZHJlc3MgdHJhbnNsYXRpb24gaXQ8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqDCoCBjb3VsZCBvbmx5IHRy
aWdnZXIgd2l0aCBjYWNoZSBjb2xvcmluZy4gVGhpcyB3b3VsZCBiZSBteSBub3JtYWw8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqDCoCBz
dWdnZXN0aW9uIGZvciB5b3UgdG8gZXhwbG9yZSwgYnV0IHRoZSBmYWlsdXJlIGhhcHBlbnMgdG9v
IGVhcmx5PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgwqAgYmVmb3JlIGFueSBETUEtY2FwYWJsZSBkZXZpY2UgaXMgcHJvZ3JhbW1lZC4g
U28gSSBkb24mIzM5O3QgdGhpbmsgdGhpcyBjYW48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqDCoCBiZSB0aGUgaXNzdWUuPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoC0gV2l0aCBjYWNoZSBj
b2xvcmluZywgdGhlIG1lbW9yeSBhbGxvY2F0aW9uIGlzIHZlcnkgZGlmZmVyZW50IHNvIHlvdSYj
Mzk7bGw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqDCoCBlbmQgdXAgdXNpbmcgZGlmZmVyZW50IEREUiByZWdpb25zIGZvciBEb20wLiBT
byBpZiB5b3VyIEREUiBpczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoMKgIGRlZmVjdGl2ZSwgeW91IG1pZ2h0IG9ubHkgc2VlIGEgZmFp
bHVyZSB3aXRoIGNhY2hlIGNvbG9yaW5nIGVuYWJsZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqDCoCBiZWNhdXNlIHlvdSBlbmQgdXAg
dXNpbmcgZGlmZmVyZW50IHJlZ2lvbnMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqBPbiBUdWUsIDI1IEFwciAyMDIzLCBPbGVnIE5pa2l0ZW5rbyB3cm90ZTo8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IEhpIFN0ZWZhbm8sPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgVGhhbmsgeW91Ljxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSWYgSSBidWls
ZCB4ZW4gd2l0aG91dCBjb2xvcnMgc3VwcG9ydCB0aGVyZSBpcyBub3QgdGhpcyBlcnJvci48YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IEFsbCB0aGUgZG9tYWlucyBhcmUgYm9vdGVkIHdlbGwuPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBIZW5zZSBpdCBjYW4g
bm90IGJlIGEgaGFyZHdhcmUgaXNzdWUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBUaGlzIHBhbmljIGFycml2ZWQgZHVyaW5n
IHVucGFja2luZyB0aGUgcm9vdGZzLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSGVyZSBJIGF0dGFjaGVkIHRoZSBib290IGxv
ZyB4ZW4vRG9tMCB3aXRob3V0IGNvbG9yLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgQSBoaWdobGlnaHRlZCBzdHJpbmdzIHBy
aW50ZWQgZXhhY3RseSBhZnRlciB0aGUgcGxhY2Ugd2hlcmUgMS1zdCB0aW1lIHBhbmljIGFycml2
ZWQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgwqBYZW4gNC4xNi4xLXByZTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgWGVuIHZlcnNp
b24gNC4xNi4xLXByZSAobm9sZTIzOTBAKG5vbmUpKSAoYWFyY2g2NC1wb3J0YWJsZS1saW51eC1n
Y2MgKEdDQykgMTEuMy4wKSBkZWJ1Zz15PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
MjAyMy0wNC0yMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgTGF0ZXN0IENoYW5nZVNldDogV2VkIEFwciAxOSAxMjo1
NjoxNCAyMDIzICswMzAwIGdpdDozMjE2ODdiMjMxLWRpcnR5PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBidWlsZC1p
ZDogYzE4NDcyNThmZGIxYjc5NTYyZmM3MTBkZGE0MDAwOGY5NmMwZmRlNTxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
UHJvY2Vzc29yOiAwMDAwMDAwMDQxMGZkMDM0OiAmcXVvdDtBUk0gTGltaXRlZCZxdW90OywgdmFy
aWFudDogMHgwLCBwYXJ0IDB4ZDAzLHJldiAweDQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIDY0LWJpdCBFeGVjdXRp
b246PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyAoWEVOKSDCoCBQcm9jZXNzb3IgRmVhdHVyZXM6IDAwMDAwMDAwMDAwMDIyMjIg
MDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgwqAgRXhjZXB0aW9uIExldmVsczogRUwz
OjY0KzMyIEVMMjo2NCszMiBFTDE6NjQrMzIgRUwwOjY0KzMyPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCDCoCBF
eHRlbnNpb25zOiBGbG9hdGluZ1BvaW50IEFkdmFuY2VkU0lNRDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgRGVi
dWcgRmVhdHVyZXM6IDAwMDAwMDAwMTAzMDUxMDYgMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhF
TikgwqAgQXV4aWxpYXJ5IEZlYXR1cmVzOiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IChYRU4pIMKgIE1lbW9yeSBNb2RlbCBGZWF0dXJlczogMDAwMDAwMDAwMDAwMTEy
MiAwMDAwMDAwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCBJU0EgRmVhdHVyZXM6IMKgMDAwMDAw
MDAwMDAxMTEyMCAwMDAwMDAwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSAzMi1iaXQgRXhlY3V0aW9u
Ojxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgKFhFTikgwqAgUHJvY2Vzc29yIEZlYXR1cmVzOiAwMDAwMDAwMDAwMDAwMTMxOjAw
MDAwMDAwMDAwMTEwMTE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIEluc3RydWN0aW9uIFNldHM6IEFBcmNo
MzIgQTMyIFRodW1iIFRodW1iLTIgSmF6ZWxsZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgwqAgRXh0ZW5zaW9u
czogR2VuZXJpY1RpbWVyIFNlY3VyaXR5PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCBEZWJ1ZyBGZWF0dXJlczog
MDAwMDAwMDAwMzAxMDA2Njxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgQXV4aWxpYXJ5IEZlYXR1cmVzOiAwMDAw
MDAwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCBNZW1vcnkgTW9kZWwgRmVhdHVyZXM6IDAwMDAw
MDAwMTAyMDExMDUgMDAwMDAwMDA0MDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgwqAgwqAgwqAgwqAg
wqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAwMDAwMDAwMDAxMjYwMDAwIDAwMDAwMDAwMDIxMDIyMTE8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IChYRU4pIMKgIElTQSBGZWF0dXJlczogMDAwMDAwMDAwMjEwMTExMCAwMDAwMDAwMDEz
MTEyMTExIDAwMDAwMDAwMjEyMzIwNDI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIDAwMDAwMDAwMDExMTIxMzEgMDAwMDAwMDAwMDAxMTE0MiAwMDAwMDAwMDAwMDExMTIxPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBVc2luZyBTTUMgQ2FsbGluZyBDb252ZW50aW9uIHYxLjI8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4p
IFVzaW5nIFBTQ0kgdjEuMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgU01QOiBBbGxvd2luZyA0IENQVXM8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IChYRU4pIEdlbmVyaWMgVGltZXIgSVJROiBwaHlzPTMwIGh5cD0yNiB2aXJ0PTI3IEZyZXE6IDEw
MDAwMCBLSHo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEdJQ3YyIGluaXRpYWxpemF0aW9uOjxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
wqAgwqAgwqAgwqAgZ2ljX2Rpc3RfYWRkcj0wMDAwMDAwMGY5MDEwMDAwPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDC
oCDCoCDCoCDCoCBnaWNfY3B1X2FkZHI9MDAwMDAwMDBmOTAyMDAwMDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAg
wqAgwqAgwqAgZ2ljX2h5cF9hZGRyPTAwMDAwMDAwZjkwNDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKg
IMKgIMKgIGdpY192Y3B1X2FkZHI9MDAwMDAwMDBmOTA2MDAwMDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgwqAg
wqAgwqAgZ2ljX21haW50ZW5hbmNlX2lycT0yNTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgR0lDdjI6IEFkanVzdGlu
ZyBDUFUgaW50ZXJmYWNlIGJhc2UgdG8gMHhmOTAyZjAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgR0lDdjI6IDE5
MiBsaW5lcywgNCBjcHVzLCBzZWN1cmUgKElJRCAwMjAwMTQzYikuPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBVc2lu
ZyBzY2hlZHVsZXI6IG51bGwgU2NoZWR1bGVyIChudWxsKTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgSW5pdGlhbGl6
aW5nIG51bGwgc2NoZWR1bGVyPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBXQVJOSU5HOiBUaGlzIGlzIGV4cGVyaW1l
bnRhbCBzb2Z0d2FyZSBpbiBkZXZlbG9wbWVudC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIFVzZSBhdCB5b3VyIG93
biByaXNrLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgKFhFTikgQWxsb2NhdGVkIGNvbnNvbGUgcmluZyBvZiAzMiBLaUIuPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBDUFUwOiBHdWVzdCBhdG9taWNzIHdpbGwgdHJ5IDEyIHRpbWVzIGJlZm9yZSBw
YXVzaW5nIHRoZSBkb21haW48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEJyaW5naW5nIHVwIENQVTE8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIENQVTE6IEd1ZXN0IGF0b21pY3Mgd2lsbCB0cnkgMTMgdGltZXMgYmVmb3JlIHBhdXNpbmcg
dGhlIGRvbWFpbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQ1BVIDEgYm9vdGVkLjxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQnJpbmdp
bmcgdXAgQ1BVMjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQ1BVMjogR3Vlc3QgYXRvbWljcyB3aWxsIHRyeSAxMyB0
aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9tYWluPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBDUFUgMiBib290ZWQu
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyAoWEVOKSBCcmluZ2luZyB1cCBDUFUzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBDUFUzOiBHdWVzdCBh
dG9taWNzIHdpbGwgdHJ5IDEzIHRpbWVzIGJlZm9yZSBwYXVzaW5nIHRoZSBkb21haW48YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IChYRU4pIEJyb3VnaHQgdXAgNCBDUFVzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBDUFUgMyBib290ZWQuPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyAoWEVOKSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IHByb2JpbmcgaGFyZHdhcmUgY29uZmln
dXJhdGlvbi4uLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgc21tdTogL2F4aS9zbW11QGZkODAwMDAwOiBTTU1VdjIg
d2l0aDo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogc3RhZ2UgMiB0cmFu
c2xhdGlvbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgKFhFTikgc21tdTogL2F4aS9zbW11QGZkODAwMDAwOiBzdHJlYW0gbWF0
Y2hpbmcgd2l0aCA0OCByZWdpc3RlciBncm91cHMsIG1hc2sgMHg3ZmZmJmx0OzImZ3Q7c21tdTo8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAvYXhpL3NtbXVAZmQ4MDAwMDA6IDE2IGNv
bnRleHQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqBiYW5rcyAoMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgc3RhZ2UtMiBvbmx5KTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgc21tdTog
L2F4aS9zbW11QGZkODAwMDAwOiBTdGFnZS0yOiA0OC1iaXQgSVBBIC0mZ3Q7IDQ4LWJpdCBQQTxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgc21tdTogL2F4aS9zbW11QGZkODAwMDAwOiByZWdpc3RlcmVkIDI5IG1hc3Rl
ciBkZXZpY2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5hYmxlZDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgwqAtIERvbTAgbW9kZTogUmVsYXhlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgUDJNOiA0MC1iaXQgSVBB
IHdpdGggNDAtYml0IFBBIGFuZCA4LWJpdCBWTUlEPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBQMk06IDMgbGV2ZWxz
IHdpdGggb3JkZXItMSByb290LCBWVENSIDB4MDAwMDAwMDA4MDAyMzU1ODxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
U2NoZWR1bGluZyBncmFudWxhcml0eTogY3B1LCAxIENQVSBwZXIgc2NoZWQtcmVzb3VyY2U8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IChYRU4pIGFsdGVybmF0aXZlczogUGF0Y2hpbmcgd2l0aCBhbHQgdGFibGUgMDAwMDAwMDAw
MDJjYzVjOCAtJmd0OyAwMDAwMDAwMDAwMmNjYjJjPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSAqKiogTE9BRElORyBE
T01BSU4gMCAqKio8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIExvYWRpbmcgZDAga2VybmVsIGZyb20gYm9vdCBtb2R1
bGUgQCAwMDAwMDAwMDAxMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBMb2FkaW5nIHJhbWRpc2sgZnJvbSBi
b290IG1vZHVsZSBAIDAwMDAwMDAwMDIwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEFsbG9jYXRpbmcgMTox
IG1hcHBpbmdzIHRvdGFsbGluZyAxNjAwTUIgZm9yIGRvbTA6PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBCQU5LWzBd
IDB4MDAwMDAwMTAwMDAwMDAtMHgwMDAwMDAyMDAwMDAwMCAoMjU2TUIpPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBC
QU5LWzFdIDB4MDAwMDAwMjQwMDAwMDAtMHgwMDAwMDAyODAwMDAwMCAoNjRNQik8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIEJBTktbMl0gMHgwMDAwMDAzMDAwMDAwMC0weDAwMDAwMDgwMDAwMDAwICgxMjgwTUIpPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBHcmFudCB0YWJsZSByYW5nZTogMHgwMDAwMDAwMGUwMDAwMC0weDAwMDAwMDAw
ZTQwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyAoWEVOKSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IGQwOiBwMm1hZGRy
IDB4MDAwMDAwMDg3YmY5NDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQWxsb2NhdGluZyBQUEkgMTYgZm9yIGV2
ZW50IGNoYW5uZWwgaW50ZXJydXB0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBFeHRlbmRlZCByZWdpb24gMDogMHg4
MTIwMDAwMC0mZ3Q7MHhhMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDE6IDB4
YjEyMDAwMDAtJmd0OzB4YzAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiAyOiAw
eGM4MDAwMDAwLSZndDsweGUwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBFeHRlbmRlZCByZWdpb24gMzog
MHhmMDAwMDAwMC0mZ3Q7MHhmOTAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDQ6
IDB4MTAwMDAwMDAwLSZndDsweDYwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9u
IDU6IDB4ODgwMDAwMDAwLSZndDsweDgwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJl
Z2lvbiA2OiAweDgwMDEwMDAwMDAtJmd0OzB4MTAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIExvYWRp
bmcgekltYWdlIGZyb20gMDAwMDAwMDAwMTAwMDAwMCB0byAwMDAwMDAwMDEwMDAwMDAwLTAwMDAw
MDAwMTBlNDEwMDg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIExvYWRpbmcgZDAgaW5pdHJkIGZyb20gMDAwMDAwMDAw
MjAwMDAwMCB0byAweDAwMDAwMDAwMTM2MDAwMDAtMHgwMDAwMDAwMDFmZjNhNjE3PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAo
WEVOKSBMb2FkaW5nIGQwIERUQiB0byAweDAwMDAwMDAwMTM0MDAwMDAtMHgwMDAwMDAwMDEzNDBj
YmRjPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyAoWEVOKSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQgc2V0IGF0
IDB4NDAwMCBwYWdlcy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIFN0ZC4gTG9nbGV2ZWw6IEFsbDxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhF
TikgR3Vlc3QgTG9nbGV2ZWw6IEFsbDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
> > > > > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> > > > > (XEN) null.c:353: 0 <-- d0v0
> > > > > (XEN) Freed 356kB init memory.
> > > > > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
> > > > > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
> > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > > > > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> > > > > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> > > > > [    0.000000] Machine model: D14 Viper Board - White Unit
> > > > > [    0.000000] Xen 4.16 support found
> > > > > [    0.000000] Zone ranges:
> > > > > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
> > > > > [    0.000000]   DMA32    empty
> > > > > [    0.000000]   Normal   empty
> > > > > [    0.000000] Movable zone start for each node
> > > > > [    0.000000] Early memory node ranges
> > > > > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
> > > > > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
> > > > > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
> > > > > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
> > > > > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
> > > > > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
> > > > > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
> > > > > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
> > > > > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
> > > > > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
> > > > > [    0.000000] psci: probing for conduit method from DT.
> > > > > [    0.000000] psci: PSCIv1.1 detected in firmware.
> > > > > [    0.000000] psci: Using standard PSCI v0.2 function IDs
> > > > > [    0.000000] psci: Trusted OS migration not required
> > > > > [    0.000000] psci: SMC Calling Convention v1.1
> > > > > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
> > > > > [    0.000000] Detected VIPT I-cache on CPU0
> > > > > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
> > > > > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
> > > > > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
> > > > > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> > > > > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
> > > > > [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
> > > > > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> > > > > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
> > > > > [    0.000000] mem auto-init: clearing system memory may take some time...
> > > > > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
> > > > > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> > > > > [    0.000000] rcu: Hierarchical RCU implementation.
> > > > > [    0.000000] rcu: RCU event tracing is enabled.
> > > > > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> > > > > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
> > > > > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
> > > > > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> > > > > [    0.000000] Root IRQ handler: gic_handle_irq
> > > > > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
> > > > > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
> > > > > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
> > > > > [    0.000258] Console: colour dummy device 80x25
> > > > > [    0.310231] printk: console [hvc0] enabled
> > > > > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> > > > > [    0.324851] pid_max: default: 32768 minimum: 301
> > > > > [    0.329706] LSM: Security Framework initializing
> > > > > [    0.334204] Yama: becoming mindful.
> > > > > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> > > > > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> > > > > [    0.354743] xen:grant_table: Grant tables using version 1 layout
> > > > > [    0.359132] Grant table initialized
> > > > > [    0.362664] xen:events: Using FIFO-based ABI
> > > > > [    0.366993] Xen: initializing cpu0
> > > > > [    0.370515] rcu: Hierarchical SRCU implementation.
> > > > > [    0.375930] smp: Bringing up secondary CPUs ...
> > > > > (XEN) null.c:353: 1 <-- d0v1
> > > > > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > > > > [    0.382549] Detected VIPT I-cache on CPU1
> > > > > [    0.388712] Xen: initializing cpu1
> > > > > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
> > > > > [    0.388829] smp: Brought up 1 node, 2 CPUs
> > > > > [    0.406941] SMP: Total of 2 processors activated.
> > > > > [    0.411698] CPU features: detected: 32-bit EL0 Support
> > > > > [    0.416888] CPU features: detected: CRC32 instructions
> > > > > [    0.422121] CPU: All CPU(s) started at EL1
> > > > > [    0.426248] alternatives: patching kernel code
> > > > > [    0.431424] devtmpfs: initialized
> > > > > [    0.441454] KASLR enabled
> > > > > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
> > > > > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
> > > > > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
> > > > > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
> > > > > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
> > > > > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> > > > > [    0.519478] audit: initializing netlink subsys (disabled)
> > > > > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> > > > > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
> > > > > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> > > > > [    0.545608] ASID allocator initialised with 32768 entries
> > > > > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> > > > > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
> > > > > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
> > > > > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
> > > > > [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
> > > > > [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
> > > > > [    0.636520] DRBG: Continuing without Jitter RNG
> > > > > [    0.737187] raid6: neonx8   gen()  2143 MB/s
> > > > > [    0.805294] raid6: neonx8   xor()  1589 MB/s
> > > > > [    0.873406] raid6: neonx4   gen()  2177 MB/s
> > > > > [    0.941499] raid6: neonx4   xor()  1556 MB/s
> > > > > [    1.009612] raid6: neonx2   gen()  2072 MB/s
> > > > > [    1.077715] raid6: neonx2   xor()  1430 MB/s
> > > > > [    1.145834] raid6: neonx1   gen()  1769 MB/s
> > > > > [    1.213935] raid6: neonx1   xor()  1214 MB/s
> > > > > [    1.282046] raid6: int64x8  gen()  1366 MB/s
> > > > > [    1.350132] raid6: int64x8  xor()   773 MB/s
> > > > > [    1.418259] raid6: int64x4  gen()  1602 MB/s
> > > > > [    1.486349] raid6: int64x4  xor()   851 MB/s
> > > > > [    1.554464] raid6: int64x2  gen()  1396 MB/s
> > > > > [    1.622561] raid6: int64x2  xor()   744 MB/s
> > > > > [    1.690687] raid6: int64x1  gen()  1033 MB/s
> > > > > [    1.758770] raid6: int64x1  xor()   517 MB/s
> > > > > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> > > > > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> > > > > [    1.767957] raid6: using neon recovery algorithm
> > > > > [    1.772824] xen:balloon: Initialising balloon driver
> > > > > [    1.778021] iommu: Default domain type: Translated
> > > > > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
> > > > > [    1.789149] SCSI subsystem initialized
> > > > > [    1.792820] usbcore: registered new interface driver usbfs
> > > > > [    1.798254] usbcore: registered new interface driver hub
> > > > > [    1.803626] usbcore: registered new device driver usb
> > > > > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> > > > > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> > > > > [    1.822903] PTP clock support registered
> > > > > [    1.826893] EDAC MC: Ver: 3.0.0
> > > > > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> > > > > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> > > > > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> > > > > [    1.855907] FPGA manager framework
> > > > > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> > > > > [    1.871712] NET: Registered PF_INET protocol family
> > > > > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> > > > > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> > > > > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> > > > > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> > > > > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
> > > > > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
> > > > > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
> > > > > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
> > > > > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> > > > > [    1.936834] RPC: Registered named UNIX socket transport module.
> > > > > [    1.942342] RPC: Registered udp transport module.
> > > > > [    1.947088] RPC: Registered tcp transport module.
> > > > > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> > > > > [    1.958334] PCI: CLS 0 bytes, default 64
> > > > > [    1.962709] Trying to unpack rootfs image as initramfs...
> > > > > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> > > > > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
> > > > > [    2.021045] NET: Registered PF_ALG protocol family
> > > > > [    2.021122] xor: measuring software checksum speed
> > > > > [    2.029347]    8regs           :  2366 MB/sec
> > > > > [    2.033081]    32regs          :  2802 MB/sec
> > > > > [    2.038223]    arm64_neon      :  2320 MB/sec
> > > > > [    2.038385] xor: using function: 32regs (2802 MB/sec)
> > > > > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
> > > > > [    2.050959] io scheduler mq-deadline registered
> > > > > [    2.055521] io scheduler kyber registered
> > > > > [    2.068227] xen:xen_evtchn: Event-channel device installed
> > > > > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> > > > > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
> > > > > [    2.085548] brd: module loaded
> > > > > [    2.089290] loop: module loaded
> > > > > [    2.089341] Invalid max_queues (4), will use default max: 2.
> > > > > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> > > > > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
> > > > > [    2.104156] usbcore: registered new interface driver rtl8150
> > > > > [    2.109813] usbcore: registered new interface driver r8152
> > > > > [    2.115367] usbcore: registered new interface driver asix
> > > > > [    2.120794] usbcore: registered new interface driver ax88179_178a
> > > > > [    2.126934] usbcore: registered new interface driver cdc_ether
> > > > > [    2.132816] usbcore: registered new interface driver cdc_eem
> > > > > [    2.138527] usbcore: registered new interface driver net1080
> > > > > [    2.144256] usbcore: registered new interface driver cdc_subset
> > > > > [    2.150205] usbcore: registered new interface driver zaurus
> > > > > [    2.155837] usbcore: registered new interface driver cdc_ncm
> > > > > [    2.161550] usbcore: registered new interface driver r8153_ecm
> > > > > [    2.168240] usbcore: registered new interface driver cdc_acm
> > > > > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
> > > > > [    2.181358] usbcore: registered new interface driver uas
> > > > > [    2.186547] usbcore: registered new interface driver usb-storage
> > > > > [    2.192643] usbcore: registered new interface driver ftdi_sio
> > > > > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
> > > > > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
> > > > > [    2.215332] i2c_dev: i2c /dev entries driver
> > > > > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjIyNTkyM10gZGV2aWNlLW1hcHBlcjogdWV2ZW50OiB2
ZXJzaW9uIDEuMC4zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4yMzA2NjhdIGRldmljZS1tYXBwZXI6IGlvY3Rs
OiA0LjQ1LjAtaW9jdGwgKDIwMjEtMDMtMjIpIGluaXRpYWxpc2VkOiA8YSBocmVmPSJtYWlsdG86
ZG0tZGV2ZWxAcmVkaGF0LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPmRtLWRldmVsQHJlZGhhdC5jb208
L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20iIHRhcmdl
dD0iX2JsYW5rIj5kbS1kZXZlbEByZWRoYXQuY29tPC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpkbS1kZXZlbEByZWRoYXQu
Y29tIiB0YXJnZXQ9Il9ibGFuayI+ZG0tZGV2ZWxAcmVkaGF0LmNvbTwvYT4gJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPmRtLWRl
dmVsQHJlZGhhdC5jb208L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjIzOTMxNV0gRURBQyBN
QzA6IEdpdmluZyBvdXQgZGV2aWNlIHRvIG1vZHVsZSAxIGNvbnRyb2xsZXIgc3lucHNfZGRyX2Nv
bnRyb2xsZXI6IERFViBzeW5wc19lZGFjPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
KElOVEVSUlVQVCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI0OTQwNV0gRURBQyBERVZJQ0UwOiBHaXZpbmcg
b3V0IGRldmljZSB0byBtb2R1bGUgenlucW1wLW9jbS1lZGFjIGNvbnRyb2xsZXIgenlucW1wX29j
bTogREVWPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgZmY5NjAwMDAubWVtb3J5LWNvbnRyb2xsZXIgKElOVEVSUlVQVCk8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAyLjI2MTcxOV0gc2RoY2k6IFNlY3VyZSBEaWdpdGFsIEhvc3QgQ29udHJvbGxlciBJbnRl
cmZhY2UgZHJpdmVyPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4yNjc0ODddIHNkaGNpOiBDb3B5cmlnaHQoYykg
UGllcnJlIE9zc21hbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjcxODkwXSBzZGhjaS1wbHRmbTogU0RIQ0kg
cGxhdGZvcm0gYW5kIE9GIGRyaXZlciBoZWxwZXI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI3ODE1N10gbGVk
dHJpZy1jcHU6IHJlZ2lzdGVyZWQgdG8gaW5kaWNhdGUgYWN0aXZpdHkgb24gQ1BVczxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDIuMjgzODE2XSB6eW5xbXBfZmlybXdhcmVfcHJvYmUgUGxhdGZvcm0gTWFuYWdlbWVu
dCBBUEkgdjEuMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjg5NTU0XSB6eW5xbXBfZmlybXdhcmVfcHJvYmUg
VHJ1c3R6b25lIHZlcnNpb24gdjEuMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzI3ODc1XSBzZWN1cmVmdyBz
ZWN1cmVmdzogc2VjdXJlZncgcHJvYmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zMjgzMjRdIGFsZzogTm8g
dGVzdCBmb3IgeGlsaW54LXp5bnFtcC1hZXMgKHp5bnFtcC1hZXMpPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4z
MzI1NjNdIHp5bnFtcF9hZXMgZmlybXdhcmU6enlucW1wLWZpcm13YXJlOnp5bnFtcC1hZXM6IEFF
UyBTdWNjZXNzZnVsbHkgUmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzQxMTgzXSBhbGc6IE5v
IHRlc3QgZm9yIHhpbGlueC16eW5xbXAtcnNhICh6eW5xbXAtcnNhKTxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIu
MzQ3NjY3XSByZW1vdGVwcm9jIHJlbW90ZXByb2MwOiBmZjlhMDAwMC5yZjVzczpyNWZfMCBpcyBh
dmFpbGFibGU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjM1MzAwM10gcmVtb3RlcHJvYyByZW1vdGVwcm9jMTog
ZmY5YTAwMDAucmY1c3M6cjVmXzEgaXMgYXZhaWxhYmxlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zNjI2MDVd
IGZwZ2FfbWFuYWdlciBmcGdhMDogWGlsaW54IFp5bnFNUCBGUEdBIE1hbmFnZXIgcmVnaXN0ZXJl
ZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDIuMzY2NTQwXSB2aXBlci14ZW4tcHJveHkgdmlwZXIteGVuLXByb3h5
OiBWaXBlciBYZW4gUHJveHkgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzcyNTI1XSB2aXBl
ci12ZHBwIGE0MDAwMDAwLnZkcHA6IERldmljZSBUcmVlIFByb2Jpbmc8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAy
LjM3Nzc3OF0gdmlwZXItdmRwcCBhNDAwMDAwMC52ZHBwOiBWRFBQIFZlcnNpb246IDEuMy45LjAg
SW5mbzogMS41MTIuMTUuMCBLZXlMZW46IDMyPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zODY0MzJdIHZpcGVy
LXZkcHAgYTQwMDAwMDAudmRwcDogVW5hYmxlIHRvIHJlZ2lzdGVyIHRhbXBlciBoYW5kbGVyLiBS
ZXRyeWluZy4uLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzk0MDk0XSB2aXBlci12ZHBwLW5ldCBhNTAwMDAw
MC52ZHBwX25ldDogRGV2aWNlIFRyZWUgUHJvYmluZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzk5ODU0XSB2
aXBlci12ZHBwLW5ldCBhNTAwMDAwMC52ZHBwX25ldDogRGV2aWNlIHJlZ2lzdGVyZWQ8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAyLjQwNTkzMV0gdmlwZXItdmRwcC1zdGF0IGE4MDAwMDAwLnZkcHBfc3RhdDogRGV2
aWNlIFRyZWUgUHJvYmluZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDEyMDM3XSB2aXBlci12ZHBwLXN0YXQg
YTgwMDAwMDAudmRwcF9zdGF0OiBCdWlsZCBwYXJhbWV0ZXJzOiBWVEkgQ291bnQ6IDUxMiBFdmVu
dCBDb3VudDogMzI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQyMDg1Nl0gZGVmYXVsdCBwcmVzZXQ8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAyLjQyMzc5N10gdmlwZXItdmRwcC1zdGF0IGE4MDAwMDAwLnZkcHBfc3RhdDogRGV2
aWNlIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQzMDA1NF0gdmlwZXItdmRwcC1ybmcgYWMw
MDAwMDAudmRwcF9ybmc6IERldmljZSBUcmVlIFByb2Jpbmc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQzNTk0
OF0gdmlwZXItdmRwcC1ybmcgYWMwMDAwMDAudmRwcF9ybmc6IERldmljZSByZWdpc3RlcmVkPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi40NDE5NzZdIHZtY3UgZHJpdmVyIGluaXQ8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ0
NDkyMl0gVk1DVTogOiAoMjQwOjApIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ0NDk1Nl0g
SW4gSzgxIFVwZGF0ZXIgaW5pdDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDQ5MDAzXSBwa3RnZW46IFBhY2tl
dCBHZW5lcmF0b3IgZm9yIHBhY2tldCBwZXJmb3JtYW5jZSB0ZXN0aW5nLiBWZXJzaW9uOiAyLjc1
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMi40Njg4MzNdIEluaXRpYWxpemluZyBYRlJNIG5ldGxpbmsgc29ja2V0
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMi40Njg5MDJdIE5FVDogUmVnaXN0ZXJlZCBQRl9QQUNLRVQgcHJvdG9j
b2wgZmFtaWx5PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi40NzI3MjldIEJyaWRnZSBmaXJld2FsbGluZyByZWdp
c3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMi40NzY3ODVdIDgwMjFxOiA4MDIuMVEgVkxBTiBTdXBwb3J0
IHYxLjg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ4MTM0MV0gcmVnaXN0ZXJlZCB0YXNrc3RhdHMgdmVyc2lv
biAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMi40ODYzOTRdIEJ0cmZzIGxvYWRlZCwgY3JjMzJjPWNyYzMyYy1n
ZW5lcmljLCB6b25lZD1ubywgZnN2ZXJpdHk9bm88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjUwMzE0NV0gZmYw
MTAwMDAuc2VyaWFsOiB0dHlQUzEgYXQgTU1JTyAweGZmMDEwMDAwIChpcnEgPSAzNiwgYmFzZV9i
YXVkID0gNjI1MDAwMCkgaXMgYSB4dWFydHBzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MDcxMDNdIG9mLWZw
Z2EtcmVnaW9uIGZwZ2EtZnVsbDogRlBHQSBSZWdpb24gcHJvYmVkPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41
MTI5ODZdIHhpbGlueC16eW5xbXAtZG1hIGZkNTAwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAg
RE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MjAyNjddIHhpbGlueC16
eW5xbXAtZG1hIGZkNTEwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9i
ZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MjgyMzldIHhpbGlueC16eW5xbXAtZG1hIGZkNTIw
MDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMi41MzYxNTJdIHhpbGlueC16eW5xbXAtZG1hIGZkNTMwMDAwLmRtYS1jb250cm9s
bGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NDQx
NTNdIHhpbGlueC16eW5xbXAtZG1hIGZkNTQwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1B
IGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NTIxMjddIHhpbGlueC16eW5x
bXAtZG1hIGZkNTUwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBz
dWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NjAxNzhdIHhpbGlueC16eW5xbXAtZG1hIGZmYTgwMDAw
LmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMi41Njc5ODddIHhpbGlueC16eW5xbXAtZG1hIGZmYTkwMDAwLmRtYS1jb250cm9sbGVy
OiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NzYwMThd
IHhpbGlueC16eW5xbXAtZG1hIGZmYWEwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRy
aXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41ODM4ODldIHhpbGlueC16eW5xbXAt
ZG1hIGZmYWIwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNj
ZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMi45NDYzNzldIHNwaS1ub3Igc3BpMC4wOiBtdDI1cXU1MTJhICgx
MzEwNzIgS2J5dGVzKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuOTQ2NDY3XSAyIGZpeGVkLXBhcnRpdGlvbnMg
cGFydGl0aW9ucyBmb3VuZCBvbiBNVEQgZGV2aWNlIHNwaTAuMDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuOTUy
MzkzXSBDcmVhdGluZyAyIE1URCBwYXJ0aXRpb25zIG9uICZxdW90O3NwaTAuMCZxdW90Ozo8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAyLjk1NzIzMV0gMHgwMDAwMDQwMDAwMDAtMHgwMDAwMDgwMDAwMDAgOiAmcXVv
dDtiYW5rIEEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjk2MzMzMl0gMHgwMDAwMDAwMDAwMDAtMHgw
MDAwMDQwMDAwMDAgOiAmcXVvdDtiYW5rIEImcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjk2ODY5NF0g
bWFjYiBmZjBiMDAwMC5ldGhlcm5ldDogTm90IGVuYWJsaW5nIHBhcnRpYWwgc3RvcmUgYW5kIGZv
cndhcmQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjk3NTMzM10gbWFjYiBmZjBiMDAwMC5ldGhlcm5ldCBldGgw
OiBDYWRlbmNlIEdFTSByZXYgMHg1MDA3MDEwNiBhdCAweGZmMGIwMDAwIGlycSAyNTxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCgxODo0MTpmZTowZjpmZjowMik8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAyLjk4NDQ3Ml0gbWFjYiBmZjBjMDAwMC5ldGhlcm5ldDogTm90IGVuYWJsaW5nIHBhcnRpYWwg
c3RvcmUgYW5kIGZvcndhcmQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjk5MjE0NF0gbWFjYiBmZjBjMDAwMC5l
dGhlcm5ldCBldGgxOiBDYWRlbmNlIEdFTSByZXYgMHg1MDA3MDEwNiBhdCAweGZmMGMwMDAwIGly
cSAyNjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCgxODo0MTpmZTowZjpmZjowMyk8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAzLjAwMTA0M10gdmlwZXJfZW5ldCB2aXBlcl9lbmV0OiBWaXBlciBwb3dl
ciBHUElPcyBpbml0aWFsaXNlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMDA3MzEzXSB2aXBlcl9lbmV0IHZp
cGVyX2VuZXQgdm5ldDAgKHVuaW5pdGlhbGl6ZWQpOiBWYWxpZGF0ZSBpbnRlcmZhY2UgUVNHTUlJ
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMy4wMTQ5MTRdIHZpcGVyX2VuZXQgdmlwZXJfZW5ldCB2bmV0MSAodW5p
bml0aWFsaXplZCk6IFZhbGlkYXRlIGludGVyZmFjZSBRU0dNSUk8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjAy
MjEzOF0gdmlwZXJfZW5ldCB2aXBlcl9lbmV0IHZuZXQxICh1bmluaXRpYWxpemVkKTogVmFsaWRh
dGUgaW50ZXJmYWNlIHR5cGUgMTg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjAzMDI3NF0gdmlwZXJfZW5ldCB2
aXBlcl9lbmV0IHZuZXQyICh1bmluaXRpYWxpemVkKTogVmFsaWRhdGUgaW50ZXJmYWNlIFFTR01J
STxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDMuMDM3Nzg1XSB2aXBlcl9lbmV0IHZpcGVyX2VuZXQgdm5ldDMgKHVu
aW5pdGlhbGl6ZWQpOiBWYWxpZGF0ZSBpbnRlcmZhY2UgUVNHTUlJPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4w
NDUzMDFdIHZpcGVyX2VuZXQgdmlwZXJfZW5ldDogVmlwZXIgZW5ldCByZWdpc3RlcmVkPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMy4wNTA5NThdIHhpbGlueC1heGlwbW9uIGZmYTAwMDAwLnBlcmYtbW9uaXRvcjog
UHJvYmVkIFhpbGlueCBBUE08YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjA1NzEzNV0geGlsaW54LWF4aXBtb24g
ZmQwYjAwMDAucGVyZi1tb25pdG9yOiBQcm9iZWQgWGlsaW54IEFQTTxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMu
MDYzNTM4XSB4aWxpbngtYXhpcG1vbiBmZDQ5MDAwMC5wZXJmLW1vbml0b3I6IFByb2JlZCBYaWxp
bnggQVBNPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4wNjk5MjBdIHhpbGlueC1heGlwbW9uIGZmYTEwMDAwLnBl
cmYtbW9uaXRvcjogUHJvYmVkIFhpbGlueCBBUE08YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjA5NzcyOV0gc2k3
MHh4OiBwcm9iZSBvZiAyLTAwNDAgZmFpbGVkIHdpdGggZXJyb3IgLTU8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAz
LjA5ODA0Ml0gY2Rucy13ZHQgZmQ0ZDAwMDAud2F0Y2hkb2c6IFhpbGlueCBXYXRjaGRvZyBUaW1l
ciB3aXRoIHRpbWVvdXQgNjBzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4xMDUxMTFdIGNkbnMtd2R0IGZmMTUw
MDAwLndhdGNoZG9nOiBYaWxpbnggV2F0Y2hkb2cgVGltZXIgd2l0aCB0aW1lb3V0IDEwczxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDMuMTEyNDU3XSB2aXBlci10YW1wZXIgdmlwZXItdGFtcGVyOiBEZXZpY2UgcmVn
aXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTE3NTkzXSBhY3RpdmVfYmFuayBhY3RpdmVfYmFuazog
Ym9vdCBiYW5rOiAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4xMjIxODRdIGFjdGl2ZV9iYW5rIGFjdGl2ZV9i
YW5rOiBib290IG1vZGU6ICgweDAyKSBxc3BpMzI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjEyODI0N10gdmlw
ZXItdmRwcCBhNDAwMDAwMC52ZHBwOiBEZXZpY2UgVHJlZSBQcm9iaW5nPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
My4xMzM0MzldIHZpcGVyLXZkcHAgYTQwMDAwMDAudmRwcDogVkRQUCBWZXJzaW9uOiAxLjMuOS4w
IEluZm86IDEuNTEyLjE1LjAgS2V5TGVuOiAzMjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTQyMTUxXSB2aXBl
ci12ZHBwIGE0MDAwMDAwLnZkcHA6IFRhbXBlciBoYW5kbGVyIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAzLjE0NzQzOF0gdmlwZXItdmRwcCBhNDAwMDAwMC52ZHBwOiBEZXZpY2UgcmVnaXN0ZXJl
ZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDMuMTUzMDA3XSBscGM1NV9sMiBzcGkxLjA6IHJlZ2lzdGVyZWQgaGFu
ZGxlciBmb3IgcHJvdG9jb2wgMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTU4NTgyXSBscGM1NV91c2VyIGxw
YzU1X3VzZXI6IFRoZSBtYWpvciBudW1iZXIgZm9yIHlvdXIgZGV2aWNlIGlzIDIzNjxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDMuMTY1OTc2XSBscGM1NV9sMiBzcGkxLjA6IHJlZ2lzdGVyZWQgaGFuZGxlciBmb3Ig
cHJvdG9jb2wgMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTgxOTk5XSBydGMtbHBjNTUgcnRjX2xwYzU1OiBs
cGM1NV9ydGNfZ2V0X3RpbWU6IGJhZCByZXN1bHQ6IDE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjE4Mjg1Nl0g
cnRjLWxwYzU1IHJ0Y19scGM1NTogcmVnaXN0ZXJlZCBhcyBydGMwPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4x
ODg2NTZdIGxwYzU1X2wyIHNwaTEuMDogKDIpIG1jdSBzdGlsbCBub3QgcmVhZHk/PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMy4xOTM3NDRdIGxwYzU1X2wyIHNwaTEuMDogKDMpIG1jdSBzdGlsbCBub3QgcmVhZHk/
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMy4xOTg4NDhdIGxwYzU1X2wyIHNwaTEuMDogKDQpIG1jdSBzdGlsbCBu
b3QgcmVhZHk/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4yMDI5MzJdIG1tYzA6IFNESENJIGNvbnRyb2xsZXIg
b24gZmYxNjAwMDAubW1jIFtmZjE2MDAwMC5tbWNdIHVzaW5nIEFETUEgNjQtYml0PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMy4yMTA2ODldIGxwYzU1X2wyIHNwaTEuMDogKDUpIG1jdSBzdGlsbCBub3QgcmVhZHk/
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMy4yMTU2OTRdIGxwYzU1X2wyIHNwaTEuMDogcnggZXJyb3I6IC0xMTA8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAzLjI4NDQzOF0gbW1jMDogbmV3IEhTMjAwIE1NQyBjYXJkIGF0IGFkZHJl
c3MgMDAwMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMjg1MTc5XSBtbWNibGswOiBtbWMwOjAwMDEgU0VNMTZH
IDE0LjYgR2lCPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4yOTE3ODRdIMKgbW1jYmxrMDogcDEgcDIgcDMgcDQg
cDUgcDYgcDcgcDg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjI5MzkxNV0gbW1jYmxrMGJvb3QwOiBtbWMwOjAw
MDEgU0VNMTZHIDQuMDAgTWlCPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4yOTkwNTRdIG1tY2JsazBib290MTog
bW1jMDowMDAxIFNFTTE2RyA0LjAwIE1pQjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMzAzOTA1XSBtbWNibGsw
cnBtYjogbW1jMDowMDAxIFNFTTE2RyA0LjAwIE1pQiwgY2hhcmRldiAoMjQ0OjApPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMy41ODI2NzZdIHJ0Yy1scGM1NSBydGNfbHBjNTU6IGxwYzU1X3J0Y19nZXRfdGltZTog
YmFkIHJlc3VsdDogMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNTgzMzMyXSBydGMtbHBjNTUgcnRjX2xwYzU1
OiBoY3Rvc3lzOiB1bmFibGUgdG8gcmVhZCB0aGUgaGFyZHdhcmUgY2xvY2s8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAzLjU5MTI1Ml0gY2Rucy1pMmMgZmYwMjAwMDAuaTJjOiByZWNvdmVyeSBpbmZvcm1hdGlvbiBj
b21wbGV0ZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNTk3MDg1XSBhdDI0IDAtMDA1MDogc3VwcGx5IHZjYyBu
b3QgZm91bmQsIHVzaW5nIGR1bW15IHJlZ3VsYXRvcjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNjAzMDExXSBs
cGM1NV9sMiBzcGkxLjA6ICgyKSBtY3Ugc3RpbGwgbm90IHJlYWR5Pzxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMu
NjA4MDkzXSBhdDI0IDAtMDA1MDogMjU2IGJ5dGUgc3BkIEVFUFJPTSwgcmVhZC1vbmx5PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMy42MTM2MjBdIGxwYzU1X2wyIHNwaTEuMDogKDMpIG1jdSBzdGlsbCBub3QgcmVh
ZHk/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMy42MTkzNjJdIGxwYzU1X2wyIHNwaTEuMDogKDQpIG1jdSBzdGls
bCBub3QgcmVhZHk/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy42MjQyMjRdIHJ0Yy1ydjMwMjggMC0wMDUyOiBy
ZWdpc3RlcmVkIGFzIHJ0YzE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjYyODM0M10gbHBjNTVfbDIgc3BpMS4w
OiAoNSkgbWN1IHN0aWxsIG5vdCByZWFkeT88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjYzMzI1M10gbHBjNTVf
bDIgc3BpMS4wOiByeCBlcnJvcjogLTExMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNjM5MTA0XSBrODFfYm9v
dGxvYWRlciAwLTAwMTA6IHByb2JlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy42NDE2MjhdIFZNQ1U6IDogKDIz
NTowKSByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy42NDE2MzVdIGs4MV9ib290bG9hZGVyIDAt
MDAxMDogcHJvYmUgY29tcGxldGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy42NjgzNDZdIGNkbnMtaTJjIGZm
MDIwMDAwLmkyYzogNDAwIGtIeiBtbWlvIGZmMDIwMDAwIGlycSAyODxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMu
NjY5MTU0XSBjZG5zLWkyYyBmZjAzMDAwMC5pMmM6IHJlY292ZXJ5IGluZm9ybWF0aW9uIGNvbXBs
ZXRlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMy42NzU0MTJdIGxtNzUgMS0wMDQ4OiBzdXBwbHkgdnMgbm90IGZv
dW5kLCB1c2luZyBkdW1teSByZWd1bGF0b3I8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjY4MjkyMF0gbG03NSAx
LTAwNDg6IGh3bW9uMTogc2Vuc29yICYjMzk7dG1wMTEyJiMzOTs8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjY4
NjU0OF0gaTJjIGkyYy0xOiBBZGRlZCBtdWx0aXBsZXhlZCBpMmMgYnVzIDM8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAzLjY5MDc5NV0gaTJjIGkyYy0xOiBBZGRlZCBtdWx0aXBsZXhlZCBpMmMgYnVzIDQ8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAzLjY5NTYyOV0gaTJjIGkyYy0xOiBBZGRlZCBtdWx0aXBsZXhlZCBpMmMgYnVzIDU8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAzLjcwMDQ5Ml0gaTJjIGkyYy0xOiBBZGRlZCBtdWx0aXBsZXhlZCBpMmMg
YnVzIDY8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjcwNTE1N10gcGNhOTU0eCAxLTAwNzA6IHJlZ2lzdGVyZWQg
NCBtdWx0aXBsZXhlZCBidXNzZXMgZm9yIEkyQyBzd2l0Y2ggcGNhOTU0Njxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDMuNzEzMDQ5XSBhdDI0IDEtMDA1NDogc3VwcGx5IHZjYyBub3QgZm91bmQsIHVzaW5nIGR1bW15
IHJlZ3VsYXRvcjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzIwMDY3XSBhdDI0IDEtMDA1NDogMTAyNCBieXRl
IDI0YzA4IEVFUFJPTSwgcmVhZC1vbmx5PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy43MjQ3NjFdIGNkbnMtaTJj
IGZmMDMwMDAwLmkyYzogMTAwIGtIeiBtbWlvIGZmMDMwMDAwIGlycSAyOTxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDMuNzMxMjcyXSBzZnAgdmlwZXJfZW5ldDpzZnAtZXRoMTogSG9zdCBtYXhpbXVtIHBvd2VyIDIu
MFc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAzLjczNzU0OV0gc2ZwX3JlZ2lzdGVyX3NvY2tldDogZ290IHNmcF9i
dXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAzLjc0MDcwOV0gc2ZwX3JlZ2lzdGVyX3NvY2tldDogcmVnaXN0ZXIg
c2ZwX2J1czxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzQ1NDU5XSBzZnBfcmVnaXN0ZXJfYnVzOiBvcHMgb2sh
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMy43NDkxNzldIHNmcF9yZWdpc3Rlcl9idXM6IFRyeSB0byBhdHRhY2g8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAzLjc1MzQxOV0gc2ZwX3JlZ2lzdGVyX2J1czogQXR0YWNoIHN1Y2NlZWRl
ZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDMuNzU3OTE0XSBzZnBfcmVnaXN0ZXJfYnVzOiB1cHN0cmVhbSBvcHMg
YXR0YWNoPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMy43NjI2NzddIHNmcF9yZWdpc3Rlcl9idXM6IEJ1cyByZWdp
c3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMy43NjY5OTldIHNmcF9yZWdpc3Rlcl9zb2NrZXQ6IHJlZ2lz
dGVyIHNmcF9idXMgc3VjY2VlZGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy43NzU4NzBdIG9mX2Nmc19pbml0
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMy43NzYwMDBdIG9mX2Nmc19pbml0OiBPSzxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMu
Nzc4MjExXSBjbGs6IE5vdCBkaXNhYmxpbmcgdW51c2VkIGNsb2Nrczxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4y
Nzg0NzddIEZyZWVpbmcgaW5pdHJkIG1lbW9yeTogMjA2MDU2Szxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4yNzk0
MDZdIEZyZWVpbmcgdW51c2VkIGtlcm5lbCBtZW1vcnk6IDE1MzZLPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjMx
NDAwNl0gQ2hlY2tlZCBXK1ggbWFwcGluZ3M6IHBhc3NlZCwgbm8gVytYIHBhZ2VzIGZvdW5kPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIDExLjMxNDE0Ml0gUnVuIC9pbml0IGFzIGluaXQgcHJvY2Vzczxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSU5J
VDogdmVyc2lvbiAzLjAxIGJvb3Rpbmc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IGZzY2sgKGJ1c3lib3ggMS4zNS4wKTxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgL2Rldi9tbWNibGswcDE6IGNsZWFuLCAxMi8xMDI0MDAgZmlsZXMsIDIzODE2Mi80MDk2MDAg
YmxvY2tzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyAvZGV2L21tY2JsazBwMjogY2xlYW4sIDEyLzEwMjQwMCBmaWxlcywgMTcx
OTcyLzQwOTYwMCBibG9ja3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IC9kZXYvbW1jYmxrMHAzIHdhcyBub3QgY2xlYW5seSB1
bm1vdW50ZWQsIGNoZWNrIGZvcmNlZC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IC9kZXYvbW1jYmxrMHAzOiAyMC80MDk2IGZp
bGVzICgwLjAlIG5vbi1jb250aWd1b3VzKSwgNjYzLzE2Mzg0IGJsb2Nrczxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
MS41NTMwNzNdIEVYVDQtZnMgKG1tY2JsazBwMyk6IG1vdW50ZWQgZmlsZXN5c3RlbSB3aXRob3V0
IGpvdXJuYWwuIE9wdHM6IChudWxsKS4gUXVvdGEgbW9kZTo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqBkaXNhYmxlZC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFN0YXJ0aW5nIHJhbmRvbSBudW1iZXIgZ2VuZXJh
dG9yIGRhZW1vbi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuNTgwNjYyXSByYW5kb206IGNybmcgaW5pdCBkb25l
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBTdGFydGluZyB1ZGV2PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjYxMzE1OV0gdWRldmRbMTQyXTog
c3RhcnRpbmcgdmVyc2lvbiAzLjIuMTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuNjIwMzg1XSB1ZGV2ZFsxNDNd
OiBzdGFydGluZyBldWRldi0zLjIuMTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuNzA0NDgxXSBtYWNiIGZmMGIw
MDAwLmV0aGVybmV0IGNvbnRyb2xfcmVkOiByZW5hbWVkIGZyb20gZXRoMDxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
MS43MjAyNjRdIG1hY2IgZmYwYzAwMDAuZXRoZXJuZXQgY29udHJvbF9ibGFjazogcmVuYW1lZCBm
cm9tIGV0aDE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
> [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Mon Feb 27 08:40:53 UTC 2023
> [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> hwclock: RTC_SET_TIME: Invalid exchange
> [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> Starting mcud
> INIT: Entering runlevel: 5
> Configuring network interfaces... done.
> resetting network interface
> [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> [   12.732151] pps pps0: new PPS source ptp0
> [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> [   12.761804] pps pps1: new PPS source ptp1
> [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> Auto-negotiation: off
> Auto-negotiation: off
> [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> Starting Failsafe Secure Shell server in port 2222: sshd
> done.
> Starting rpcbind daemon...done.
>
> [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Starting State Manager Service
> Start state-manager restarter...
> (XEN) d0v1 Forwarding AES operation: 3254779951
> Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> [   17.350670] BTRFS info (device dm-0): has skinny extents
> [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> [   17.872699] BTRFS info (device dm-1): using free space tree
> [   17.872771] BTRFS info (device dm-1): has skinny extents
> [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> [   17.895695] BTRFS info (device dm-1): checking UUID tree
>
> Setting domain 0 name, domid and JSON config...
> Done setting up Dom0
> Starting xenconsoled...
> Starting QEMU as disk backend for dom0
> Starting domain watchdog daemon: xenwatchdogd startup
>
> [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> [done]
> [   18.465552] BTRFS info (device dm-2): using free space tree
> [   18.465629] BTRFS info (device dm-2): has skinny extents
> [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> [   18.486659] BTRFS info (device dm-2): checking UUID tree
> OK
> starting rsyslogd ... Log partition ready after 0 poll loops
> done
> rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>
> Please insert USB token and enter your role in login prompt.
>
> login:
>
> Regards,
> O.
>
> On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> Hi Oleg,
>>
>> Here is the issue from your logs:
>>
>>     SError Interrupt on CPU0, code 0xbe000000 -- SError
>>
>> SErrors are special signals to notify software of serious hardware
>> errors.  Something is going very wrong.  Defective hardware is a
>> possibility.  Another possibility is software accessing address ranges
>> that it is not supposed to; that sometimes causes SErrors.
>>
>> Cheers,
>> Stefano
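[Editorial note: the `code 0xbe000000` printed alongside the SError is the ESR_ELx syndrome value, whose field layout is defined by the Arm architecture. The sketch below splits that value into its architectural fields; the field names follow the Arm ARM, but this is an illustrative decoder written for this note, not the kernel's own code, and `decode_esr` is a name invented here.]

```python
# Minimal decoder for an AArch64 ESR_ELx value, as printed in
# "SError Interrupt on CPU0, code 0xbe000000 -- SError".
# Field positions follow the Arm ARM; sketch only, not Linux's decoder.

def decode_esr(esr: int) -> dict:
    """Split an ESR_ELx value into its architectural fields."""
    fields = {
        "EC": (esr >> 26) & 0x3F,    # Exception Class
        "IL": (esr >> 25) & 0x1,     # Instruction Length (RES1 for SError)
        "ISS": esr & 0x1FFFFFF,      # Instruction Specific Syndrome
    }
    if fields["EC"] == 0x2F:         # EC 0x2F = SError interrupt
        iss = fields["ISS"]
        fields["IDS"] = (iss >> 24) & 0x1   # implementation-defined syndrome
        fields["AET"] = (iss >> 10) & 0x7   # error type (meaningful when DFSC=0x11)
        fields["EA"] = (iss >> 9) & 0x1
        fields["DFSC"] = iss & 0x3F         # data fault status code
    return fields

print(decode_esr(0xBE000000))
```

For 0xbe000000 this yields EC=0x2F (confirming an SError) with ISS=0: the CPU reported no further syndrome, so the ESR alone does not identify the cause (stray device access and genuine hardware faults look the same here), which matches Stefano's two hypotheses.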
>>
>> On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>>> Hello,
>>>
>>> Thanks guys.
>>> I found out where the problem was.
>>> Now dom0 boots further, but I have a new one.
>>> This is a kernel panic during Dom0 loading.
>>> Maybe someone is able to suggest something?
>>>
>>> Regards,
>>> O.
>>>
>>> [    3.771362] sfp_register_bus: upstream ops attach
>>> [    3.776119] sfp_register_bus: Bus registered
>>> [    3.780459] sfp_register_socket: register sfp_bus succeeded
>>> [    3.789399] of_cfs_init
>>> [    3.789499] of_cfs_init: OK
>>> [    3.791685] clk: Not disabling unused clocks
>>> [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>>> [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>>> [   11.010393] Workqueue: events_unbound async_run_entry_fn
>>> [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>>> [   11.010422] pc : simple_write_end+0xd0/0x130
>>> [   11.010431] lr : generic_perform_write+0x118/0x1e0
>>> [   11.010438] sp : ffffffc00809b910
>>> [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>>> [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
>>> [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
>>> [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
>>> [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
>>> [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
>>> [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
>>> [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
>>> [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
>>> [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
>>> [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
>>> [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>>> [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
>>> [   11.010548] Workqueue: events_unbound async_run_entry_fn
>>> [   11.010556] Call trace:
>>> [   11.010558]  dump_backtrace+0x0/0x1c4
>>> [   11.010567]  show_stack+0x18/0x2c
>>> [   11.010574]  dump_stack_lvl+0x7c/0xa0
>>> [   11.010583]  dump_stack+0x18/0x34
>>> [   11.010588]  panic+0x14c/0x2f8
>>> [   11.010597]  print_tainted+0x0/0xb0
>>> [   11.010606]  arm64_serror_panic+0x6c/0x7c
>>> [   11.010614]  do_serror+0x28/0x60
>>> [   11.010621]  el1h_64_error_handler+0x30/0x50
>>> [   11.010628]  el1h_64_error+0x78/0x7c
>>> [   11.010633]  simple_write_end+0xd0/0x130
>>> [   11.010639]  generic_perform_write+0x118/0x1e0
>>> [   11.010644]  __generic_file_write_iter+0x138/0x1c4
>>> [   11.010650]  generic_file_write_iter+0x78/0xd0
>>> [   11.010656]  __kernel_write+0xfc/0x2ac
>>> [   11.010665]  kernel_write+0x88/0x160
>>> [   11.010673]  xwrite+0x44/0x94
>>> [   11.010680]  do_copy+0xa8/0x104
>>> [   11.010686]  write_buffer+0x38/0x58
>>> [   11.010692]  flush_buffer+0x4c/0xbc
>>> [   11.010698]  __gunzip+0x280/0x310
>>> [   11.010704]  gunzip+0x1c/0x28
>>> [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>>> [   11.010715]  do_populate_rootfs+0x80/0x164
>>> [   11.010722]  async_run_entry_fn+0x48/0x164
>>> [   11.010728]  process_one_work+0x1e4/0x3a0
>>> [   11.010736]  worker_thread+0x7c/0x4c0
>>> [   11.010743]  kthread+0x120/0x130
>>> [   11.010750]  ret_from_fork+0x10/0x20
>>> [   11.010757] SMP: stopping secondary CPUs
>>> [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
>>> [   11.010788] PHYS_OFFSET: 0x0
>>> [   11.010790] CPU features: 0x00000401,00000842
>>> [   11.010795] Memory Limit: none
>>> [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>>>
>>> On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
>>>> Hi Oleg,
>>>>
>>>> On 21/04/2023 14:49, Oleg Nikitenko wrote:
>>>>> Hello Michal,
>>>>>
>>>>> I was not able to enable earlyprintk in Xen for now.
>>>>> I decided to choose another way.
>>>>> This is the Xen command line that I found out completely:
>>>>>
>>>>> (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>>>>
>>>> Yes, adding a printk() in Xen was also a good idea.
>>>>
>>>>> So you are absolutely right about the command line.
>>>>> Now I am going to find out why Xen did not get the correct parameters from the device tree.
>>>>
>>>> Maybe you will find this document helpful:
>>>> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>>>>
>>>> ~Michal
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgUmVn
YXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE9sZWc8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
>> On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
>>
>>> On 21/04/2023 10:04, Oleg Nikitenko wrote:
>>>> Hello Michal,
>>>>
>>>> Yes, I use yocto.
>>>>
>>>> Yesterday all day long I tried to follow your suggestions.
>>>> I faced a problem.
>>>> Manually in the xen config build file I pasted the strings:
>>>
>>> In the .config file, or in some Yocto file (listing additional Kconfig
>>> options) added to SRC_URI?
>>> You shouldn't really modify the .config file, but if you do, you should
>>> execute "make olddefconfig" afterwards.
>>>> CONFIG_EARLY_PRINTK
>>>> CONFIG_EARLY_PRINTK_ZYNQMP
>>>> CONFIG_EARLY_UART_CHOICE_CADENCE
>>>
>>> I hope you added =y to them.
>>> Anyway, you have at least the following solutions:
>>> 1) Run bitbake xen -c menuconfig to properly set early printk
>>> 2) Find out how you enable other Kconfig options in your project
>>>    (e.g. CONFIG_COLORING=y that is not enabled by default)
>>> 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>>> CONFIG_EARLY_PRINTK_ZYNQMP=y
>>>
>>> ~Michal
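[Editor's note] Option 3 above can be sketched as a small script. This is an illustration only, not taken from any actual recipe: the mktemp file stands in for xen/arch/arm/configs/arm64_defconfig in a real build tree, and the option list is the one quoted in this thread.

```shell
#!/bin/sh
# Sketch: append the Kconfig options named in this thread to a defconfig,
# as in option 3. The mktemp file is a stand-in for
# xen/arch/arm/configs/arm64_defconfig -- adjust the path for a real tree.
set -eu

DEFCONFIG=$(mktemp)

# Each option gets "=y", as Michal points out it must.
for opt in CONFIG_EARLY_PRINTK \
           CONFIG_EARLY_PRINTK_ZYNQMP \
           CONFIG_EARLY_UART_CHOICE_CADENCE; do
    printf '%s=y\n' "$opt" >> "$DEFCONFIG"
done

cat "$DEFCONFIG"
```

After editing a real defconfig or .config, "make olddefconfig" re-resolves dependencies, per the advice above.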
>>>> Host hangs at build time.
>>>> Maybe I did not set something in the config build file?
>>>>
>>>> Regards,
>>>> Oleg
>>>> On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>>>>> Thanks Michal,
>>>>>
>>>>> You gave me an idea.
>>>>> I am going to try it today.
>>>>>
>>>>> Regards,
>>>>> O.
>>>>> On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>>>>>> Thanks Stefano.
>>>>>>
>>>>>> I am going to do it today.
>>>>>>
>>>>>> Regards,
>>>>>> O.
>>>>>> On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>> On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>>>>>>>> Hi Michal,
>>>>>>>>
>>>>>>>> I corrected xen's command line.
>>>>>>>> Now it is
>>>>>>>> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>>>>>>>> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>>>>>>>> timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>>>>>>> 4 colors is way too many for xen, just do xen_colors=0-0. There is
>>>>>>> no advantage in using more than 1 color for Xen.
>>>>>>>
>>>>>>> 4 colors is too few for dom0, if you are giving 1600M of memory to
>>>>>>> Dom0. Each color is 256M. For 1600M you should give at least
>>>>>>> 7 colors. Try:
>>>>>>>
>>>>>>>   xen_colors=0-0 dom0_colors=1-8
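[Editor's note] The sizing rule above can be sanity-checked with quick shell arithmetic. The 256M-per-color figure is the one quoted for this platform; the variable names are mine.

```shell
#!/bin/sh
# Sketch of the color-count arithmetic in Stefano's reply:
# with 256M per color, dom0_mem=1600M needs ceil(1600/256) colors.
dom0_mem_mb=1600
color_mb=256

# Integer ceiling division: (a + b - 1) / b.
colors_needed=$(( (dom0_mem_mb + color_mb - 1) / color_mb ))
echo "dom0 needs at least $colors_needed colors"
```

That gives 7, which is why dom0_colors=1-8 (eight colors) comfortably covers 1600M while xen_colors=0-0 keeps Xen on a single color.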
>>>>>>>> Unfortunately the result was the same.
>>>>>>>>
>>>>>>>> (XEN)  - Dom0 mode: Relaxed
>>>>>>>> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>>>>>>>> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>>>>>>>> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>>>>>>>> (XEN) Coloring general information
>>>>>>>> (XEN) Way size: 64kB
>>>>>>>> (XEN) Max. number of colors available: 16
>>>>>>>> (XEN) Xen color(s): [ 0 ]
>>>>>>>> (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>>>>>>>> (XEN) Color array allocation failed for dom0
>>>>>>>> (XEN)
>>>>>>>> (XEN) ****************************************
>>>>>>>> (XEN) Panic on CPU 0:
>>>>>>>> (XEN) Error creating domain 0
>>>>>>>> (XEN) ****************************************
>>>>>>>> (XEN)
>>>>>>>> (XEN) Reboot in five seconds...
>>>>>>>>
>>>>>>>> I am going to find out how command line arguments are passed and parsed.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Oleg
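[Editor's note] For reference, the command line with the suggested coloring values would sit in the device tree roughly as follows. This is a sketch assembled from the values quoted in this thread; the booting.txt document linked earlier describes the authoritative placement of xen,xen-bootargs under /chosen.

```dts
/* Sketch only: the thread's bootargs with xen_colors/dom0_colors
 * replaced by Stefano's suggestion. */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-0 dom0_colors=1-8";
};
```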
>>>>>>>> On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEhpIE1pY2hhbCw8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0OyBZb3UgcHV0IG15IG5vc2UgaW50byB0aGUgcHJvYmxlbS4gVGhh
bmsgeW91Ljxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgSSBhbSBnb2luZyB0byB1c2UgeW91ciBwb2ludC48YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7IExldCYjMzk7cyBzZWUgd2hhdCBoYXBwZW5zLjxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBPbGVnPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7INGB0YAsIDE5INCw0L/RgC4gMjAy
M+KAr9CzLiDQsiAxMDozNywgTWljaGFsIE9yemVsICZsdDs8YSBocmVmPSJtYWlsdG86bWljaGFs
Lm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4g
Jmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0i
X2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5j
b20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNo
YWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5j
b208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0
YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxh
IGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hh
bC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6
ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsm
Z3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJl
Zj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9y
emVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBh
bWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxh
bmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpt
aWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29t
PC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQu
Y29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWlj
aGFsLm9yemVsQGFtZC5jb208L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9
Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFt
ZC5jb208L2E+Jmd0OyZndDsmZ3Q7Jmd0OyZndDs6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
SGkgT2xlZyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgT24gMTkv
MDQvMjAyMyAwOTowMywgT2xlZyBOaWtpdGVua28gd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IEhlbGxvIFN0ZWZhbm8sPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgVGhhbmtzIGZvciB0aGUgY2xhcmlmaWNhdGlvbi48YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE15IGNvbXBhbnkgdXNlcyB5b2N0byBmb3IgaW1hZ2Ug
Z2VuZXJhdGlvbi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFdoYXQga2luZCBvZiBp
bmZvcm1hdGlvbiBkbyB5b3UgbmVlZCB0byBjb25zdWx0IG1lIGluIHRoaXM8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBjYXNlID88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBNYXliZSBtb2R1bGVzIHNpemVzL2FkZHJl
c3NlcyB3aGljaCB3ZXJlIG1lbnRpb25lZCBieSBASnVsaWVuPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgR3JhbGw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsi
Pmp1bGllbkB4ZW4ub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVu
QHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4gJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVu
Lm9yZzwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVu
QHhlbi5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0
YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9h
PiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxh
bmsiPmp1bGllbkB4ZW4ub3JnPC9hPiZndDsmZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsi
Pmp1bGllbkB4ZW4ub3JnPC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsi
Pmp1bGllbkB4ZW4ub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVu
Lm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiZndDsmZ3Q7ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVu
QHhlbi5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0
YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9h
PiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxh
bmsiPmp1bGllbkB4ZW4ub3JnPC9hPiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7ID88YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU29ycnkgZm9yIGp1bXBpbmcgaW50byBkaXNj
dXNzaW9uLCBidXQgRldJQ1MgdGhlIFhlbiBjb21tYW5kPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgbGluZSB5b3UgcHJvdmlkZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBzZWVtcyB0byBiZTxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oG5vdCB0aGU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBvbmU8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqBYZW4gYm9vdGVkIHdpdGguIFRoZSBlcnJvciB5b3UgYXJlIG9ic2Vydmlu
ZyBtb3N0IGxpa2VseSBpcyBkdWU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB0byBk
b20wIGNvbG9yczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGNvbmZpZ3VyYXRpb24gbm90PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgYmVpbmc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBzcGVj
aWZpZWQgKGkuZS4gbGFjayBvZiBkb20wX2NvbG9ycz0mbHQ7Jmd0OyBwYXJhbWV0ZXIpLiBBbHRo
b3VnaCBpbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHRoZSBjb21tYW5kIGxpbmUg
eW91PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgcHJvdmlkZWQsIHRoaXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqBwYXJhbWV0ZXI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBpcyBzZXQsIEkg
c3Ryb25nbHkgZG91YnQgdGhhdCB0aGlzIGlzIHRoZSBhY3R1YWwgY29tbWFuZCBsaW5lPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgaW4gdXNlLjxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBZb3Ugd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
eGVuLHhlbi1ib290YXJncyA9ICZxdW90O2NvbnNvbGU9ZHR1YXJ0IGR0dWFydD1zZXJpYWwwPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZG9tMF9tZW09MTYwME0gZG9tMF9tYXhfdmNw
dXM9Mjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoGRvbTBfdmNwdXNfcGluPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgYm9vdHNjcnViPTAgdndmaT1uYXRpdmU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqBzY2hlZD1udWxsIHRpbWVyX3Nsb3A9MCB3YXlfc3ppemU9NjU1MzYgeGVuX2NvbG9ycz0w
LTM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBkb20wX2NvbG9ycz00LTcmcXVvdDs7
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJ1dDo8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAxKSB3YXlfc3ppemUgaGFzIGEgdHlwbzxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoDIpIHlvdSBzcGVjaWZpZWQgNCBjb2xvcnMgKDAtMykgZm9yIFhlbiwgYnV0IHRo
ZSBib290IGxvZyBzYXlzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdGhhdCBYZW4g
aGFzIG9ubHk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqBvbmU6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgKFhFTikgWGVuIGNvbG9y
KHMpOiBbIDAgXTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBUaGlz
IG1ha2VzIG1lIGJlbGlldmUgdGhhdCBubyBjb2xvcnMgY29uZmlndXJhdGlvbiBhY3R1YWxseSBl
bmQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB1cCBpbiBjb21tYW5kIGxpbmU8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB0
aGF0IFhlbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJvb3RlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoHdpdGguPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU2luZ2xlIGNvbG9yIGZvciBY
ZW4gaXMgYSAmcXVvdDtkZWZhdWx0IGlmIG5vdCBzcGVjaWZpZWQmcXVvdDsgYW5kIHdheTxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHNpemUgd2FzIHByb2JhYmx5PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgY2FsY3VsYXRl
ZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoGJ5IGFza2luZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoEhXLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTbyBJIHdv
dWxkIHN1Z2dlc3QgdG8gZmlyc3QgY3Jvc3MtY2hlY2sgdGhlIGNvbW1hbmQgbGluZSBpbjxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHVzZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgfk1pY2hhbDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgUmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE9sZWc8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDQ
stGCLCAxOCDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMjA6NDQsIFN0ZWZhbm8gU3RhYmVsbGluaTxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJtYWlsdG86c3N0YWJl
bGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8
L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRh
cmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNz
dGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3Jn
PC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNz
dGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3Jn
PC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9y
ZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5z
c3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFu
ayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBr
ZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJu
ZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0
OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVm
PSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmc8L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0i
X2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlA
a2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsi
PnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNz
dGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwu
b3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5r
Ij5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5r
Ij5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7
Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0i
bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGlu
aUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBr
ZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdl
dD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxp
bmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3Rh
YmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9y
ZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIg
dGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0Ozxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9h
PiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7Jmd0
OyZndDs6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoE9uIFR1ZSwgMTggQXByIDIwMjMsIE9sZWcgTmlraXRlbmtvIHdy
b3RlOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgSGkgSnVsaWVu
LDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7ICZndDsmZ3Q7IFRoaXMgZmVhdHVyZSBoYXMg
bm90IGJlZW4gbWVyZ2VkIGluIFhlbiB1cHN0cmVhbSB5ZXQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0OyAmZ3Q7IHdvdWxkIGFzc3VtZSB0aGF0IHVwc3RyZWFtICsgdGhlIHNlcmllcyBvbiB0
aGUgTUwgWzFdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgd29yazxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7IFBsZWFzZSBjbGFyaWZ5IHRoaXMgcG9pbnQuPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBCZWNhdXNlIHRoZSB0d28gdGhvdWdodHMg
YXJlIGNvbnRyb3ZlcnNpYWwuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEhpIE9sZWcsPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEFzIEp1
bGllbiB3cm90ZSwgdGhlcmUgaXMgbm90aGluZyBjb250cm92ZXJzaWFsLiBBcyB5b3U8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBhcmUgYXdhcmUsPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgWGlsaW54IG1haW50YWlucyBhIHNlcGFyYXRlIFhlbiB0cmVlIHNw
ZWNpZmljIGZvciBYaWxpbng8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBoZXJlOjxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoDxhIGhyZWY9Imh0dHBzOi8vZ2l0
aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRw
czovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHVi
LmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczov
L2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwv
YT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVm
ZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4m
Z3Q7Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0i
bm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVu
PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9y
ZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9h
PiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwv
YT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVm
ZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4m
Z3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0
aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dp
dGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94
aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1
Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3Jl
ZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+
ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVy
cmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0
OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZl
cnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAm
> > > > https://github.com/xilinx/xen
> > > > and the branch you are using (xlnx_rebase_4.16) comes from there.
> > > >
> > > > Instead, the upstream Xen tree lives here:
> > > > https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> > > >
> > > > The Cache Coloring feature that you are trying to configure is present
> > > > in xlnx_rebase_4.16, but not yet present upstream (there is an
> > > > outstanding patch series to add cache coloring to Xen upstream but it
> > > > hasn't been merged yet.)
> > > >
> > > > Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much
> > > > for you as you already have Cache Coloring as a feature there.
> > > >
> > > > I take you are using ImageBuilder to generate the boot configuration?
> > > > If so, please post the ImageBuilder config file that you are using.
> > > >
> > > > But from the boot message, it looks like the colors configuration for
> > > > Dom0 is incorrect.


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:29:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 10:29:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533212.829655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3Y5-0003bE-AK; Thu, 11 May 2023 10:29:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533212.829655; Thu, 11 May 2023 10:29:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3Y5-0003b7-6w; Thu, 11 May 2023 10:29:25 +0000
Received: by outflank-mailman (input) for mailman id 533212;
 Thu, 11 May 2023 10:29:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px3Y3-0003ax-Hw; Thu, 11 May 2023 10:29:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px3Y3-0007qr-ED; Thu, 11 May 2023 10:29:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px3Y2-0001e3-Vw; Thu, 11 May 2023 10:29:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1px3Y2-0003qj-VS; Thu, 11 May 2023 10:29:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Jffg/ZE8cg5cPQYHQjekFmQkHm+EXtT+Vosy7ioxwzg=; b=VndBfJA81/3OikjxENj+GwNFLf
	50qwnCVtBntlTJahwppTJKA4+3lFZCPBfbEs5PsJUfDAFpbt95aC0eY/1vIMyqen2lDJ4BoieEYVG
	cGaIdFBIbJW2zlAfzDVwy+sb/Q7pipCFIlPMJPxuVYXxATmlVQCHjFKU9h6voNvk+KLQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180609-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180609: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=31c65549746179e16cf3f82b694b4b1e0b7545ca
X-Osstest-Versions-That:
    xen=ed6b7c0266e512c1207c07911da14e684f47b909
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 10:29:22 +0000

flight 180609 xen-unstable real [real]
flight 180616 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180609/
http://logs.test-lab.xenproject.org/osstest/logs/180616/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install   fail pass in 180616-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180596
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180596
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180596
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180596
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180596
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180596
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180596
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180596
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180596
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180596
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180596
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180596
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  31c65549746179e16cf3f82b694b4b1e0b7545ca
baseline version:
 xen                  ed6b7c0266e512c1207c07911da14e684f47b909

Last test of basis   180596  2023-05-10 01:53:37 Z    1 days
Testing same since   180609  2023-05-10 23:07:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@cloud.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ed6b7c0266..31c6554974  31c65549746179e16cf3f82b694b4b1e0b7545ca -> master


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:38:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 10:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533222.829681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3gr-0005O9-DO; Thu, 11 May 2023 10:38:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533222.829681; Thu, 11 May 2023 10:38:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3gr-0005O2-AY; Thu, 11 May 2023 10:38:29 +0000
Received: by outflank-mailman (input) for mailman id 533222;
 Thu, 11 May 2023 10:38:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lTi/=BA=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1px3gq-0005Nw-M3
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 10:38:28 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f6b911f1-efe7-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 12:38:27 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-3f423ac6e2dso34891405e9.2
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 03:38:27 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 n18-20020adfe792000000b002f7780eee10sm20108858wrm.59.2023.05.11.03.38.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 May 2023 03:38:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6b911f1-efe7-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1683801507; x=1686393507;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ZrjW1m3eB5fn7SCWngoeCBr0XcCdI5ZWnuS9TYA8QYY=;
        b=gm+eCL3zxJFTpyYEo0x7GH1kwxxf54VYmD0QlaovBjO6wLjBN2hb0m+eVci99ntZ9B
         lWDVsMhzvoov7nojJTAwR/ZMb4LqpvZnnxNtmNuBd2Zl/JMgQiEAntbt5+Y8flE9qgQ5
         O0uLl7t3XCb0OumJx8fnIqcr4gkGbby+qGzFE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683801507; x=1686393507;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZrjW1m3eB5fn7SCWngoeCBr0XcCdI5ZWnuS9TYA8QYY=;
        b=aJBSC7U4DEOHiK8ZFSkmZ+JeM9zFQ3XCWfoF5f6KeeyqWHrhLifFBdzY9MLT0R+1OA
         HaH+3Pw2YyZhAB3gbkbwFPO3Quo5vhvjcZff/OoutTPWoQ/t5Nmg8DKcpNY9aq+HGZSk
         AEKZGpxuWzF3vO/iia/+Ij6HlDKrPKX77tE4Vmy45QcH3fNeqQMy5HyPHUIHHsp1GkOg
         YBNm7YfjyGYopaexwG7FHosgIdTtuckFRAqGS3gXxY1dRC80+1gjPk13xTTqxNH8eN9N
         ZZ8+05v5JP1vUntcDZFUyGA1bCC3uGRUmRH1dOxNIv8SVnCxOnWZgBKSvDI5ktVJQBlB
         1viA==
X-Gm-Message-State: AC+VfDwVxqZ8m3OB3D6fONwwkCMPYy0jRO6L+8pwcvaSTg//+tVuG1S5
	pP4d7jtYoU1tZ7pepM7UwSzIrQ==
X-Google-Smtp-Source: ACHHUZ5ew7UZo5TK2EnyQopfReIYl4ZulUAxFsf0lwyatXUIW9HIy14LK5w9mKlh+Kh0ct7CGcphOw==
X-Received: by 2002:a7b:c84a:0:b0:3f4:241a:e651 with SMTP id c10-20020a7bc84a000000b003f4241ae651mr10115799wml.27.1683801507060;
        Thu, 11 May 2023 03:38:27 -0700 (PDT)
Message-ID: <645cc5a2.df0a0220.e9123.e022@mx.google.com>
X-Google-Original-Message-ID: <ZFzFnytU7LLCX1aF@EMEAENGAAD19049.>
Date: Thu, 11 May 2023 11:38:23 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 1/3] x86: Add AMD's CpuidUserDis bit definitions
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
 <20230509164336.12523-2-alejandro.vallejo@cloud.com>
 <8995344d-cd14-d66b-efb6-e4ac7be6d457@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8995344d-cd14-d66b-efb6-e4ac7be6d457@suse.com>

On Thu, May 11, 2023 at 11:41:13AM +0200, Jan Beulich wrote:
> On 09.05.2023 18:43, Alejandro Vallejo wrote:
> > --- a/xen/include/public/arch-x86/cpufeatureset.h
> > +++ b/xen/include/public/arch-x86/cpufeatureset.h
> > @@ -287,6 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
> >  /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
> >  XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
> >  XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
> > +XEN_CPUFEATURE(CPUID_USER_DIS,     11*32+17) /*   CPUID disable for non-privileged software */
> 
> While I can accept your argument towards staying close to AMD's doc
> with the name, I'd really like to ask that the comment then be
> disambiguated: "non-privileged" is more likely to mean CPL=3 than all
> of CPL>0. Since "not fully privileged" is getting a little long,
> maybe indeed say "CPL > 0 software"? [...]

Fair point. That was copied from AMD's PM, but there's no good reason to
keep it that way. I'll modify it as you pointed out.

> I would then offer you my R-b,
> if only I could find proof of the HWCR bit being bit 35. The PM
> mentions it only by name, and the PPRs I've checked all have it
> marked reserved.
> 
> Jan

It is in Vol 2 of the PM, section 3.2.10 on the HWCR. I'm looking at
revision 4.06, from January 2023.

---
CpuidUserDis. Bit 35. Setting this bit to 1 causes #GP(0) when the CPUID
instruction is executed by non-privileged software (CPL > 0) outside SMM.
Support for the CPUID User Disable feature is indicated by CPUID
Fn80000021_EAX[CpuidUserDis]=1.
---

Alejandro


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:41:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 10:41:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533225.829690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3je-0006lU-Pw; Thu, 11 May 2023 10:41:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533225.829690; Thu, 11 May 2023 10:41:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3je-0006lN-NJ; Thu, 11 May 2023 10:41:22 +0000
Received: by outflank-mailman (input) for mailman id 533225;
 Thu, 11 May 2023 10:41:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px3jd-0006lH-7P
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 10:41:21 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0620.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d3971e5-efe8-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 12:41:19 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9222.eurprd04.prod.outlook.com (2603:10a6:102:2a1::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.19; Thu, 11 May
 2023 10:41:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 10:41:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d3971e5-efe8-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QE7MxBRNVnR+RXSbUWUiAz4kFNqqSFKZYqwpXA2jUjar8LzjkrqgYmBzrju2+6Vrk6c68TAxert/1q2tK6hTDuCazO+dv6R+dLcMtF+zePmRvqINzTVb/2lE+vhX9UrXZliGPJ0UwXK4snxOsBHsX3hHx/fq3lP4FuR0Q4e7Q32WP6fuwKdRrPC3G3mHxnE4Y2J19otEXQfHxz7OP0zqyJpHUK9nkcZuoGw0hWUy5N/CvMRoRULsHn25THTHCQtGZ8OnewI/j98PmWsVEg4hvV7krk7dBQub0/IooJ89BU3ob2hmWaUFSRghOMMO9kM14PwTNB/jQhaJYip0O2MAMQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sYxnSEia4BAcz+5c5a5TbHBD4e9yV/qA5+E+R0E+xeg=;
 b=Msag0d6jcZSlNAQAZzU6ZCWkaL5nTIQEyFx4AKunGxj6/FNEBolaG1HgcfdzOr0zreESpmkwOJgepx38NirkW5MptXnPcdwbxvJtTJp7dom9b5sMMysKPwAt4zMd8wTlOl6YsIh4boHRDd8YV6mL2Q/i3N4tOrgLf3/fDhtMB4DZilQAg0CyeHFmJzj5ARmxeEh4Sd0EMJi48bJASnC/xurIAfSrhQUUSGtDuyD3/3dYvxraWEnb7Mz5OVB8PT13eg9qOYdL6VM/AeeXcuTNFatYDDbPGkyga3c1ISy+BhQy14BHhZ9/qq1bw7ucrww/ZIC4r9LQrPztjXZIIAnPSg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sYxnSEia4BAcz+5c5a5TbHBD4e9yV/qA5+E+R0E+xeg=;
 b=RiiwnysADnZOafrem3KDIBKwrnu3rL8GqxYH2pocs/Yd/J5hHSWSxKeEw6YS5pm4jYH+X9BhcONUzbMNx2KCS/HWIOoBmJem+iTs3Mk4uDFJVZeqsuvCkcHjI/xE5svobCwh1EsmqrKcKQz2wgiS7VIDLGHTdqf04xe/P1Z1IPUTQDqUjJ1+t44M4QfYUpULXKMvSDPvt2OfPybrg1rjmFGcy5tQJyl9EAAKAXRAMVO4WcFYCxbyZXFQHo9kqpXT+DI+ZhxHgMsSB1oqgkbT8dFiABo1m1dB0qOizJOdz1LUkonrjuhqXZzGDz70JGW6zDShj6RtSXix4ZEv/ONgRg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5676f943-d35b-3989-c2de-4c0a46b307fa@suse.com>
Date: Thu, 11 May 2023 12:41:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v4 3/3] domctl: Modify XEN_DOMCTL_getdomaininfo to fail if
 domid is not found
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
 <20230509160712.11685-4-alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230509160712.11685-4-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0078.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9222:EE_
X-MS-Office365-Filtering-Correlation-Id: a861448d-b56d-401f-8700-08db520c3f0e
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	anfwTergMSN2k95nD9sWutKXV3H9fJ7EzegbqIXCsmF6M1ToCWMThUFMHaQnlV8Lbr37nV+nHagk99w/zXEGcwVFFWQ8nBAQbSNrY9aA8Tzd8YVT5mPKsxErPnvdft5tLxMyK9xt2uQ74QAg/Q0+/Slu0rEQKHozvQcSxkk8A890AtOvfy0axIr4zYmN3br28+F2xUJ9dEbBOuQWuLQgAjIYZ/KVtd+KiHxU3oUM2LyHQAmL+jfTKRrCE7oQKSuOU+Em5WfvDo5yaeFyJm611q4ywQmqAQpRZkaySdBM2GZkmcZFCM7KIPGc76FyAaKPXM/MkrMmQLyL4zLgU4/oxkUvW15yEzdqV/i7m/qbPIEP4nvbZlx1KMzl12aLXQ+UI/LCpFC0wo8STqgeJd4BQD1BNKuSG+TYwuxhi+Nb7FuFfcY5zTi7vbff0IlGIuD9WR67VaIYlX6Be3/akDzpoSL9m43UwTZSA9iGFBHdVAAhiTANQvmCjYXTu/OdghsNrAd/LB6ijnebpEvzHnYSQZhgWVW92HuCe4OFFxyVBtL9oF3olp4S8JYpxrr04wuC/OwkAEWBmIQjOocaE/kmnSy5EHwut68ksR1R5LTrZcIBkOdoKtfbkg0mJHtC0QvmY2vvolCHZq/rq64mhyKB0Q==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(376002)(396003)(346002)(366004)(39860400002)(451199021)(31686004)(66476007)(4326008)(6486002)(66946007)(66556008)(478600001)(6916009)(316002)(54906003)(41300700001)(86362001)(31696002)(36756003)(83380400001)(186003)(6506007)(53546011)(6512007)(26005)(2616005)(5660300002)(8936002)(8676002)(38100700002)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?eGxpMVFHZ1hlcGRFclhjd1NhUERqWHhIUjhGQjRWVzlIMjgwNUpRajlNY2NW?=
 =?utf-8?B?VkdDSVozUllTTDg2YzhxQktDMGFLTXNBRHF3M2h3djFLRVJXa0pHcUw4ZXcz?=
 =?utf-8?B?ZWc5d1ZWRG8zNWg0czJKaFIvWGhEeldCajUyOU1rMVhlRWdsWGVBMk1TOUFC?=
 =?utf-8?B?YjJtMUE3VkY5Yldqd2NUTlkwaENnbHQyTTdtTlV3elFpd1hqaDIvTXFtRi90?=
 =?utf-8?B?WWpDYkJ2NW9Sc3hqV0svUitBWkpIK1dnUFJnMGNabENqdmxJcTNLanVlc3hK?=
 =?utf-8?B?N2Qzb2kxSnJjZUs4YUM1WVB1QVhoUXd3Q0VXNm0vSU1SVUJCaTlscWY0eXNq?=
 =?utf-8?B?azBHY2V1OWtOWW1QbC9oTHoxL2Q0aEc2REsxQmdFZi9XSDVBbUNuRW5WL2t4?=
 =?utf-8?B?SlBtOVpveVpyTVJ2TVR5R0Q1NmVGbEhEWi9QUk9XdHgxQ1dEOHZNOFpoMUxM?=
 =?utf-8?B?RVd2NE12RFE1K3E5NEEwM2JqTjFtczBQdExQdXl2WlZtK0RRdmYxTWloU1pC?=
 =?utf-8?B?emFENi9Udnp2VkpaeXpjVXhJSWpRaENMWXJJT2hwNzkxR3VaSUxvZHRFak5E?=
 =?utf-8?B?YXk2bE90TzBzVFdUU21xLzRmbEZSNVJpNitmSVdMOWN5WFNYWjdmMStqejVO?=
 =?utf-8?B?aXo0blo3RnRVMzRqYldCYmwrUTVKckJVdStMRGNteDlCOFZibThrWEh2R3Zk?=
 =?utf-8?B?ZkwyTzJvZFZPUG1oK2x6ZVY1UDFZV1c3RFpXcE5mNC90eEhGdUlzRzhxaUI4?=
 =?utf-8?B?dlhUbUN6Zi9WZm55VWVMdGUzUkpxS3pCUU15TFBKUUM2OXptWlA0L291V3Zz?=
 =?utf-8?B?b3hwR1VyQzlpNHlJUFJETFcydFBBMzl6RHFMRjhtZFB4OFRBMEMva1dMNzVy?=
 =?utf-8?B?VC9RZW5wbS8rSjJtMDZPUTl2bXQ2S01VK3pmTkd6N2lpRUhaN1VqSXlnOXRD?=
 =?utf-8?B?OG5McU93Tk0zTldzQjE3RUVPcVBOT01OMWxVd2FBcU0wclQ4QklocFFpZDcw?=
 =?utf-8?B?WGtQbTNYcjJlQUc2TEpWREE5SjZFSDdjQUZSUHdFekt5czBseEdQZWdwamxo?=
 =?utf-8?B?RlRkaXhESUhNa29rL2dFRnNYUlZrQTFMc05ZT1IrUTVWMHREc3lGT200Ymw2?=
 =?utf-8?B?eEUwYWp0RWlsa0VXOU9Dd0I5bGRvSEEyRmlRM1h5b2pmeXlmSGQwTUY3aXZ1?=
 =?utf-8?B?NExLckYxbTNPWTVjQ1J5ZDY4bmRTN1hmNVhkYm1vSUtoVCtHRnl1cEd4RHdE?=
 =?utf-8?B?cWcyQkptOG1kNk5JRFNFcHQ3cHh1NUJ6bDR3bXVTT3o3NERtZ0JlRkFzZXdo?=
 =?utf-8?B?T2J6K1hINk9tOFNxcUZKbUUzMlp4TEVSMFBDTkdtbHdYMnhxUnphOHR2QXhX?=
 =?utf-8?B?UlBub2pzbU1rWlZJVHllOGh6akpCTlZBcjZHeHY0SU4zNFVhRFBLYnVSbnpJ?=
 =?utf-8?B?MlJSa3NLRGh6UTc2djhtNlhZR3ptL0xmNmZpQ2FRUzFPUjl0YWZ5Ymk2RkdH?=
 =?utf-8?B?TzJNNXRtbmR1QktEWXZOTGl6SVNVOXlqM2hhR0g1aUJrdzFYbDBNNTBKQThE?=
 =?utf-8?B?c3JHN0MzeXVEdWIwZGcvckhPbW9BMU5qQ3VoWmd2bjdHdHJkbzJsMDUvNmhN?=
 =?utf-8?B?OEFlbmJ4UVd3MWI0UWttU2grOU9rWjRGanhJc0cyL3hsUHV3YnFWaUJBOUR4?=
 =?utf-8?B?aHYvQkVmb2JELzkwY3BOSVVUMTZOcFIvd1NyS0JRYjhEWVdvZWE0Nk52dTFY?=
 =?utf-8?B?dWs0S1lLb2srYlAxRXNQV2c0TlB0U3c1Q2MxNE40eHFzMjVCUFhvTWdPYVFI?=
 =?utf-8?B?U05rcTlvaUdBOFNheEZJYVJmRVNTOVo0a3dnVzczaWMwS1JyM2taYlBqUlZx?=
 =?utf-8?B?UkdNVkRHMUNQSFBPTDk3MjJTR2JocnQ5b0IwYUlaZ0N0aEh0MmRURk10Tkl1?=
 =?utf-8?B?L1ZFZk9DN2lOVk4reXdPNDdFWk1jdDRSTWk4N3Fvei9KZ1ZJVlhXWGZHWFA4?=
 =?utf-8?B?K0pYQjdaS3c2NmdORGo4SWd3UzhyODIyMlkyR29MaEZzWFJJRzd0cU5jdUFk?=
 =?utf-8?B?SDhlSnJ6V2xJNW5PUDdZUE85a1ErMlJ2ZTZtbGRmOE1CVFVUNkJ4cTJSU0Ji?=
 =?utf-8?Q?JZXDSeBs5Uzy9q+VNFGl7QqYE?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a861448d-b56d-401f-8700-08db520c3f0e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 10:41:15.5709
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: um3Z8FkSz8t8EzcC/0cayJ96OUDn9SzVv341Q0ecinEDJY9sumiLZPZxhO8kqsrOFQPYF9yi8Jit+BR68SueLw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9222

On 09.05.2023 18:07, Alejandro Vallejo wrote:
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -314,7 +314,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>          /* fall through */
>      default:
>          d = rcu_lock_domain_by_id(op->domain);
> -        if ( !d && op->cmd != XEN_DOMCTL_getdomaininfo )
> +        if ( !d )
>              return -ESRCH;
>      }
>  
> @@ -534,42 +534,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  
>      case XEN_DOMCTL_getdomaininfo:
>      {
> -        domid_t dom = DOMID_INVALID;
> -
> -        if ( !d )
> -        {
> -            ret = -EINVAL;
> -            if ( op->domain >= DOMID_FIRST_RESERVED )
> -                break;
> -
> -            rcu_read_lock(&domlist_read_lock);
> -
> -            dom = op->domain;
> -            for_each_domain ( d )
> -                if ( d->domain_id >= dom )
> -                    break;
> -        }
> -
> -        ret = -ESRCH;
> -        if ( d == NULL )
> -            goto getdomaininfo_out;
> -
>          ret = xsm_getdomaininfo(XSM_HOOK, d);
>          if ( ret )
> -            goto getdomaininfo_out;
> +            break;
>  
>          getdomaininfo(d, &op->u.getdomaininfo);
>  
>          op->domain = op->u.getdomaininfo.domain;
>          copyback = 1;
> -
> -    getdomaininfo_out:
> -        /* When d was non-NULL upon entry, no cleanup is needed. */
> -        if ( dom == DOMID_INVALID )
> -            break;
> -
> -        rcu_read_unlock(&domlist_read_lock);
> -        d = NULL;
>          break;
>      }
>  

I realize it's a little late for this to occur to me, but since this is a
binary-incompatible change, it should imo have been accompanied by a bump of
XEN_DOMCTL_INTERFACE_VERSION (which we haven't bumped yet in this release
cycle).

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:46:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 10:46:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533231.829701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3oN-0007SE-EZ; Thu, 11 May 2023 10:46:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533231.829701; Thu, 11 May 2023 10:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px3oN-0007S7-Bj; Thu, 11 May 2023 10:46:15 +0000
Received: by outflank-mailman (input) for mailman id 533231;
 Thu, 11 May 2023 10:46:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z+vG=BA=citrix.com=prvs=48888ca5b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1px3oM-0007S1-10
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 10:46:14 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0a899071-efe9-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 12:46:11 +0200 (CEST)
Received: from mail-dm6nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.103])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 May 2023 06:46:04 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5048.namprd03.prod.outlook.com (2603:10b6:a03:1e8::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 10:46:02 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 10:46:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a899071-efe9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683801971;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=vyIC6SoabYnCQGsG3dwqnSrdPo0DTqBLsuUDGGqBqhg=;
  b=XhncjtdxpKpaKbUGHKMoVeAq1rQFoYxkc9PYnItESatvvgRL7YX8MW06
   l0TN0vgfuTsCM41CnsxSTVE0iOeezKEsCr7gzmh+4XJHiH4/6/HTuOqMw
   6Ds23pK8u4QE7hMo7+X9upO25BYDPUTL8D2kKzG9WSZ1h+E2q9zpMeZ/r
   8=;
X-IronPort-RemoteIP: 104.47.58.103
X-IronPort-MID: 107409039
Message-ID: <574e4927-6b92-e33e-d2f2-d92f596912fa@citrix.com>
Date: Thu, 11 May 2023 11:45:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 3/3] domctl: Modify XEN_DOMCTL_getdomaininfo to fail if
 domid is not found
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230509160712.11685-1-alejandro.vallejo@cloud.com>
 <20230509160712.11685-4-alejandro.vallejo@cloud.com>
 <5676f943-d35b-3989-c2de-4c0a46b307fa@suse.com>
In-Reply-To: <5676f943-d35b-3989-c2de-4c0a46b307fa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 11/05/2023 11:41 am, Jan Beulich wrote:
> On 09.05.2023 18:07, Alejandro Vallejo wrote:
>> --- a/xen/common/domctl.c
>> +++ b/xen/common/domctl.c
>> @@ -314,7 +314,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>          /* fall through */
>>      default:
>>          d = rcu_lock_domain_by_id(op->domain);
>> -        if ( !d && op->cmd != XEN_DOMCTL_getdomaininfo )
>> +        if ( !d )
>>              return -ESRCH;
>>      }
>>  
>> @@ -534,42 +534,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>  
>>      case XEN_DOMCTL_getdomaininfo:
>>      {
>> -        domid_t dom = DOMID_INVALID;
>> -
>> -        if ( !d )
>> -        {
>> -            ret = -EINVAL;
>> -            if ( op->domain >= DOMID_FIRST_RESERVED )
>> -                break;
>> -
>> -            rcu_read_lock(&domlist_read_lock);
>> -
>> -            dom = op->domain;
>> -            for_each_domain ( d )
>> -                if ( d->domain_id >= dom )
>> -                    break;
>> -        }
>> -
>> -        ret = -ESRCH;
>> -        if ( d == NULL )
>> -            goto getdomaininfo_out;
>> -
>>          ret = xsm_getdomaininfo(XSM_HOOK, d);
>>          if ( ret )
>> -            goto getdomaininfo_out;
>> +            break;
>>  
>>          getdomaininfo(d, &op->u.getdomaininfo);
>>  
>>          op->domain = op->u.getdomaininfo.domain;
>>          copyback = 1;
>> -
>> -    getdomaininfo_out:
>> -        /* When d was non-NULL upon entry, no cleanup is needed. */
>> -        if ( dom == DOMID_INVALID )
>> -            break;
>> -
>> -        rcu_read_unlock(&domlist_read_lock);
>> -        d = NULL;
>>          break;
>>      }
>>  
> I realize it's a little late that this occurs to me, but this being a binary
> incompatible change it should imo have been accompanied by a bump of
> XEN_DOMCTL_INTERFACE_VERSION (which we haven't bumped yet in this release
> cycle).

Oh, sorry.  That's probably my fault.

Do you mind submitting a patch?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:46:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 10:46:39 +0000
Message-ID: <afce6377-4681-a171-e422-a084e934595a@suse.com>
Date: Thu, 11 May 2023 12:46:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 1/3] x86: Add AMD's CpuidUserDis bit definitions
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
 <20230509164336.12523-2-alejandro.vallejo@cloud.com>
 <8995344d-cd14-d66b-efb6-e4ac7be6d457@suse.com>
 <645cc5a2.df0a0220.e9123.e022@mx.google.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <645cc5a2.df0a0220.e9123.e022@mx.google.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 11.05.2023 12:38, Alejandro Vallejo wrote:
> On Thu, May 11, 2023 at 11:41:13AM +0200, Jan Beulich wrote:
>> On 09.05.2023 18:43, Alejandro Vallejo wrote:
>>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>>> @@ -287,6 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
>>>  /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
>>>  XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
>>>  XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
>>> +XEN_CPUFEATURE(CPUID_USER_DIS,     11*32+17) /*   CPUID disable for non-privileged software */
>>
>> While I can accept your argument towards staying close to AMD's doc
>> with the name, I'd really like to ask that the comment then be
>> disambiguated: "non-privileged" is more likely to mean CPL=3 than all
>> of CPL>0. Since "not fully privileged" is getting a little long,
>> maybe indeed say "CPL > 0 software"? [...]
> 
> Fair point. That was copied from AMD's PM, but there's no good reason to
> keep it that way. I'll modify it as you pointed out.
> 
>> I would then offer you my R-b,
>> if only I could find proof of the HWCR bit being bit 35. The PM
>> mentions it only by name, and the PPRs I've checked all have it
>> marked reserved.
> 
> It is in the Vol2 of the PM. Section 3.2.10 on the HWCR. I'm looking at
> revision 4.06, from January 2023.

Oh, my fault then: It didn't even occur to me to check Vol 2, as normally
it's the other way around: Only the PPRs can be sufficiently relied upon
to be at least halfway complete.

With the comment adjustment (which I'd also be okay to do while committing)
then
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:53:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 10:53:01 +0000
Message-ID: <4bc416aa-8ff7-f2d4-c452-7494f6d2473a@suse.com>
Date: Thu, 11 May 2023 12:52:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Alejandro Vallejo <alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] domctl: bump interface version
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

The change to XEN_DOMCTL_getdomaininfo was a binary incompatible one,
and the interface version wasn't bumped yet during the 4.18 release
cycle.

Fixes: 31c655497461 ("domctl: Modify XEN_DOMCTL_getdomaininfo to fail if domid is not found")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -21,7 +21,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:54:22 2023
Message-ID: <03cf36b0-9ac2-40b3-e6c5-5bd687e69e0a@citrix.com>
Date: Thu, 11 May 2023 11:53:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] domctl: bump interface version
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Alejandro Vallejo <alejandro.vallejo@cloud.com>
References: <4bc416aa-8ff7-f2d4-c452-7494f6d2473a@suse.com>
In-Reply-To: <4bc416aa-8ff7-f2d4-c452-7494f6d2473a@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 11/05/2023 11:52 am, Jan Beulich wrote:
> The change to XEN_DOMCTL_getdomaininfo was a binary incompatible one,
> and the interface version wasn't bumped yet during the 4.18 release
> cycle.
>
> Fixes: 31c655497461 ("domctl: Modify XEN_DOMCTL_getdomaininfo to fail if domid is not found")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks, and sorry for missing this.


From xen-devel-bounces@lists.xenproject.org Thu May 11 10:57:20 2023
Message-ID: <d0d1802a-76d2-9b8e-b71d-33c214a6b675@suse.com>
Date: Thu, 11 May 2023 12:57:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 2/3] x86: Refactor conditional guard in
 probe_cpuid_faulting()
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
 <20230509164336.12523-3-alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230509164336.12523-3-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 09.05.2023 18:43, Alejandro Vallejo wrote:
> Move vendor-specific checks to the vendor-specific callers.
> 
> No functional change.
> 
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

From xen-devel-bounces@lists.xenproject.org Thu May 11 11:06:01 2023
Message-ID: <1489425d-7627-30c6-bb0a-ca1145107f42@suse.com>
Date: Thu, 11 May 2023 13:05:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 3/3] x86: Add support for CpuidUserDis
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
 <20230509164336.12523-4-alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230509164336.12523-4-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0093.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS5PR04MB9972:EE_
X-MS-Office365-Filtering-Correlation-Id: 2ce3ed6a-575a-4e64-3d48-08db520faad8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3dPWAFWcgoeqbi453qPSnCqMQ2pki8k15afj06uZMoJOCZLsXSIe5j0ijQcrh2nv6UDHuxhpCzh9CmhdZHpGkHb4YvRJT/vuCM99kXle8KRdD4fwCM5JHowUhmAosc2wOFJ0QhR+Y57UZALp1g7/oB81x0myb6ewisPaVmmXnH/OsqFiyM3ZdYj0ylmNP9H3TUNbBuGbmql9BOiPrAxgmwC5CuMnn1v0Ob/a2BzJ8dx7N3AeZCIRKt3APWo90lzcEPB/GStRVEOufp9w4O/aLgoZRCKLg7hdw7sEKFBb7hFCDPY4z1eMyOdlto9AH6Bpb9qSD/lYXrbYkw1WBwGBAVSYRXK1/6fXgJ81k1ITyKcrrycpHLI6IdnSP6NwBJeC11Dx1pgOyKS8wrNi6bAC/eBagAMXxvO0yUBmBScxqLGdEyJPcJSmw3EZY7S0wm7uwbekQfUOpSJkcX2GgJGLRMk/TJjBKNqIm7iDA61gpTRQ4l3mTrLeWbT4jvyYWdVreBnnxL0jS4Y81m6weFKAX6CzkQxfEAaUL3ZOs0Gx4EgdEftTC6k+vNLklciDZt6Vks2pBt9lASzra7eyGdmodNTEgqNq+cuDHnX4vBYY3MWcQfzvKQdQPF0bMC5S5WPvs6dxIs0gXl6pQVueJ6pVLQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(39860400002)(366004)(376002)(136003)(346002)(451199021)(316002)(2906002)(2616005)(66946007)(53546011)(4326008)(6512007)(66556008)(66476007)(54906003)(41300700001)(6916009)(6486002)(6506007)(5660300002)(26005)(8936002)(478600001)(8676002)(186003)(38100700002)(86362001)(36756003)(31696002)(31686004)(66899021)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ektmSTNhTVNWK3BqL3MyTXY5WGh6cVlDRWlidmo0dTE2L1l0V3QvS3JsdCtl?=
 =?utf-8?B?Z0hjeGxhS2RyWHBRaWs1T3JtMEFCTXBIZUtSK2s2OE05b21wU1hJdzFsaVIz?=
 =?utf-8?B?M3JNUFh1SXpnUGo2UmtObVJycmpweko2OG15YlB5VktwdHRJTUFRTnBpOEZa?=
 =?utf-8?B?SlJZNW0rZzc2RU00d1VsaHpWWVJiaUlsWFJVMHl6eEhRU1FUMjRBUyttUU9O?=
 =?utf-8?B?TDZ0andnYWwzNlJ1NERpdmRZdzg4dVl6Y1liVG5sVlFwQ2ZMTHZoUGFQUDZj?=
 =?utf-8?B?VmY5blVIcjJZdXBmYzRXVzhxOUl2cmJrRElqZDU1ZW9JNDAxbGF4M2J2WGJO?=
 =?utf-8?B?d043dmZyRGYrRFE5VDFqeXBudFFwNVhtbUdVemIyN2dEZ0kwbmxZaFVTeXZs?=
 =?utf-8?B?TkdVMytKaVBmOEFwdlZYZzc5bGplS3JMMStFOTJDUm1RTlBHS2ZGWDVQNzAw?=
 =?utf-8?B?S0RXLzk5bk1rSStKazFOOVZnaHBKU1YzZ0JLRkhGWkRpaTJmQUN1b1ZNMzNk?=
 =?utf-8?B?L3lwTmRGVnE0aldONWFFNTBXSEw2UVV3WkVaK05xVjBZcksrUmxVSjF4R0pr?=
 =?utf-8?B?R2E3bmRsTVlYZDQwZE1tWXMrOFVSbW9oQWJhY1kwdXg1VG9POGNQc1B3Ukh6?=
 =?utf-8?B?S3hmUzlKQ0poZ2lvRXdISE5kVVFnNlA0SE9EeVVtN1g0eEJGa0NySEJucm54?=
 =?utf-8?B?WkZYdXJuWkttSFN4WlhZdXpDcWVrQ29Gam9OcXpDZTBoMERkMUdjK0N0bUF6?=
 =?utf-8?B?NzRseGZLdUNDMm1CMjNFTkZaS2o2REVMNXZJMEhHck12ekYwV1VHN2I3WnNP?=
 =?utf-8?B?TGJycTZxcnAySXZ4SkQyQ21QVzJqQy9VaHp1T1JaWStEN0g0UVBySStZWFNK?=
 =?utf-8?B?UlBYbGcySWpKWnpFTWU1ajZsWWljZzF5SStCYWZqYVZlVGNoMkpNY3R6T0pw?=
 =?utf-8?B?THNpZVR1TG5uam0vWDhjQ0pHTFBCSkZXcXNIT0paZTVyVHFJL1dESkh3eFQy?=
 =?utf-8?B?ampuZE5MbWlqYjY2UlpFdUR3QXlZdEdrUGxZeWxGTUVRSEpYSkhiMUl1dXRX?=
 =?utf-8?B?Q1VFTDZ5YkhUcm9sZSt1SVRpUVJjNEF3ay9rMjVYanZybHAxUUVPbW1QN1BL?=
 =?utf-8?B?Sk5MSXVjZldUSW82MitkUlc1WmdXcHlVTFp0aFpsM1kxSmkzVHJxVDdaVFFv?=
 =?utf-8?B?S2tWd3lBNHRHQnIwNXFRUkNqcnVRVW95NFVXOXExN1RnRjhLaVRoQVNiRGF3?=
 =?utf-8?B?Vmd5MUpzVnZSRDVCS0FkbEZBcVdpeVNNdnN0NTJHRmkydG1PYzd5SFAzTGRz?=
 =?utf-8?B?aTBaUFltR2ROeHQ5YktIYzZtbkpnSHBWaGRYbVVhMjVOak1IVWJSV00zdEtl?=
 =?utf-8?B?TTRNT0h4S2FGaHhqNThiZklBMVBPenl1ZC9XeWR4blZqTDlzdFovM0hucFZF?=
 =?utf-8?B?bHBpOGltbTV4cEZadU1RYmhPYmVlaS93ejYwdmNtdzZxYXlMQiswSkY3cnp2?=
 =?utf-8?B?RFpHUFlVOW9wOWdIZGJJV2hja3pnRFh1UlVxbWYwb3lEak5TTVp1S3lEMGlk?=
 =?utf-8?B?K21McGJCOEVtSkJvQVNCaGk3amR1aWVVU3ZSWnpPeUNWS3NEYUY5T1Q2R01Q?=
 =?utf-8?B?Rm8raWw4cWpMalI0anluN2Q3cC9MNUtSaElQSTQ1Mlkyd3Rtc09HWFQ5c29m?=
 =?utf-8?B?NVllZ2FMa0h6WStTclN1dmp2Zi9vdEZMZWVhaEtjRlhuM1BBSElrdmI4SGJM?=
 =?utf-8?B?ZUIycG5PT1VFblkweEU4V01jdnJzbTNXdFBvdVZXSnM2cksrWlhFTVd6QVpI?=
 =?utf-8?B?UjJTczFCWGJoYXRaVFhJa1NOZWNnL3ZNR0RPdmt0ckNiMTBUOWZtWllrbzAx?=
 =?utf-8?B?cXpJQVN6RXNkcEdpbkp1dWR4WlA5djFpcTF0NG43eDVpT0llNXFNSktSVVUw?=
 =?utf-8?B?STlodUJpcldRTkJ5ZXRFRzNpbmRuTnNjT3FGSTNoMUp4VlZuM2tkQzRaYVg3?=
 =?utf-8?B?amkwUzRhVjJzYkwrR1NJc3FuNG94K3RKWGdSaU5xNllobWI5bDFSaTdWcG9a?=
 =?utf-8?B?akl0QUtwY052UUtISHpmSEJmeHg0NHlWbUtiVkJRQVJrN0pUQzlkTUYwNFpi?=
 =?utf-8?Q?EXXcyNTTxKi/Bhbr5KP9Q5eGn?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ce3ed6a-575a-4e64-3d48-08db520faad8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 11:05:44.8955
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6ODPv7LlPFWmqFcBu2iBO4zNq3hLUPpxhStMSJbisnjXxsf7zF7RZxzUJKR00Hw9c5r/VeO+ZK2+cro3Ih2xxA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS5PR04MB9972

On 09.05.2023 18:43, Alejandro Vallejo wrote:
> Because CpuidUserDis is reported in CPUID itself, the extended leaf
> containing that bit must be retrieved before calling c_early_init().
> 
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>

Looks largely okay when taken together with patch 2, but ...

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -279,8 +279,12 @@ static void __init noinline amd_init_levelling(void)
>  	 * that can only be present when Xen is itself virtualized (because
>  	 * it can be emulated)
>  	 */
> -	if (cpu_has_hypervisor && probe_cpuid_faulting())
> +	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
> +	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {

... imo the probe_cpuid_faulting() call would better be avoided when
the CPUID bit is set.

> +		expected_levelling_cap |= LCAP_faulting;
> +		levelling_caps |=  LCAP_faulting;

Further the movement of these two lines from ...

> @@ -144,8 +145,6 @@ bool __init probe_cpuid_faulting(void)
>  		return false;
>  	}
>  
> -	expected_levelling_cap |= LCAP_faulting;
> -	levelling_caps |=  LCAP_faulting;
>  	setup_force_cpu_cap(X86_FEATURE_CPUID_FAULTING);

... here (and also to intel.c) should imo be part of patch 2. While
moving them, I think you also want to deal with the stray double
blank.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 11:08:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 11:08:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533259.829761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4AA-0004ex-MA; Thu, 11 May 2023 11:08:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533259.829761; Thu, 11 May 2023 11:08:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4AA-0004eq-IV; Thu, 11 May 2023 11:08:46 +0000
Received: by outflank-mailman (input) for mailman id 533259;
 Thu, 11 May 2023 11:08:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px4AA-0004eg-0k; Thu, 11 May 2023 11:08:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px4A9-0000Zx-Tz; Thu, 11 May 2023 11:08:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px4A9-0002nR-CP; Thu, 11 May 2023 11:08:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1px4A9-0000ct-Bx; Thu, 11 May 2023 11:08:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iKB1CT7HggMZ8y88Hxs8/vOwxhYgurPvfgWR6Cl89Y0=; b=aDnSPUEQz9E8n+lQs44LeBMWA3
	4dU84BMWEaS72RTuYvTE0f16Q4IYQXi3MojretUrclYmIjHzIBpYzQZ2u3xG9bn4+aEqxiSIqr89V
	llT2Ahex7SMiGlyJ0WI9Ir7OxogmdnMywDWe4vO0XQmDgnTEga27F2zUejQEMeneuQDw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180617-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180617: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0a0e60caf20ab621ee9c1fc66b82b739158c05cf
X-Osstest-Versions-That:
    ovmf=c6382ba0f2aec5200ee03e5d97018c303ddc64d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 11:08:45 +0000

flight 180617 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180617/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0a0e60caf20ab621ee9c1fc66b82b739158c05cf
baseline version:
 ovmf                 c6382ba0f2aec5200ee03e5d97018c303ddc64d8

Last test of basis   180615  2023-05-11 05:41:30 Z    0 days
Testing same since   180617  2023-05-11 09:10:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>
  Guo Gua <gua.guo@intel.com>
  Liming Gao <gaoliming@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c6382ba0f2..0a0e60caf2  0a0e60caf20ab621ee9c1fc66b82b739158c05cf -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 11 11:26:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 11:26:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533266.829771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4Qm-00079t-4s; Thu, 11 May 2023 11:25:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533266.829771; Thu, 11 May 2023 11:25:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4Qm-00079m-0i; Thu, 11 May 2023 11:25:56 +0000
Received: by outflank-mailman (input) for mailman id 533266;
 Thu, 11 May 2023 11:25:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/UO=BA=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1px4Qj-00079g-OC
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 11:25:53 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2061d.outbound.protection.outlook.com
 [2a01:111:f400:fe59::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 957d64ed-efee-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 13:25:51 +0200 (CEST)
Received: from MW4PR04CA0354.namprd04.prod.outlook.com (2603:10b6:303:8a::29)
 by DM4PR12MB8558.namprd12.prod.outlook.com (2603:10b6:8:187::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18; Thu, 11 May
 2023 11:25:37 +0000
Received: from CO1NAM11FT020.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:8a:cafe::76) by MW4PR04CA0354.outlook.office365.com
 (2603:10b6:303:8a::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20 via Frontend
 Transport; Thu, 11 May 2023 11:25:36 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT020.mail.protection.outlook.com (10.13.174.149) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.21 via Frontend Transport; Thu, 11 May 2023 11:25:36 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 06:25:35 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 06:25:34 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 11 May 2023 06:25:33 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 957d64ed-efee-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WQLHv2Ry8uXWR+tiaRUVug+11wmkCBinZQu7LPLVIjKbODYL713uVw7glKIQMP2CAd48z8htzd8GNANdTk0B1wP5FQXI1aAVaWzIWN5cY3Il2T1q1Pm4cWQYBvGnMsrBBA+vNDEz/48ZS8xFHlI86KxOvnDI29717P+dKkkYd7YPRhsuF+gkv6yVC/PLXKdTVjtCBwfPCeKwfmMhC8UAjTUBRkye+//KpJZr6hNXcJjCixpk1rv9hNo40l4d7BrDo8Wm/r+EzXRywlldVXmJLwl9tEG3lIaXbQU5E+PjpuOLHg0Dks4Z0fWMo/K/N/6ncJeyElaquXg4O44Bwc7nVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=o3ofw8QlulpeonaWRaWUpbP/xOt0kD1++n4rLpxTc3I=;
 b=fntYdlp4umvT14KNV4f1eWwuk+7LkzSAHjb7NFp7AMqiGaRVpk8Qy61byVPsPjq8/wfDv5oi+mD7D/79zLsfNcE7DD9HDma7iaa8J3dJBNRJI7FFgI8W610FIKUtQj0RhBzawzFFv/veAoEMMROiSGypYkSL3Swl/PWJ8qOPw4QANhT/hTZlNwpsY8ulEEVXPtUDVRnWNjZBl88LuBN7Py0Dz3qbfeFg6vBrGzcZoc8YN5i4Rkdv7tV30wAnY8vH2s+sif8Rm+CLUK2Ko6ppGas4y/z6VRqHrf0ZLdAc8Mcg6NtsQx0QX02HKy8qI6NkomrEg5cxezcYcvtCUe/oLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o3ofw8QlulpeonaWRaWUpbP/xOt0kD1++n4rLpxTc3I=;
 b=NPUurkfWL2FQcEXqJIBm4UtA5l5bR6Y/9HwWRAPUwpeX5MKWwbkclQ25CQjI2MmoslCCzpkL4ihmuMcj0agYNzBhR+OsjItCLoyEZJjIkM9CqrQm9buwgLVFxXHf13QW9VcoW8c/ZmadQ8w3z1rCbhOPDiXgIoUP3ObhUcIpWGE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: domain_build: Propagate return code of map_irq_to_domain()
Date: Thu, 11 May 2023 13:25:31 +0200
Message-ID: <20230511112531.705-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT020:EE_|DM4PR12MB8558:EE_
X-MS-Office365-Filtering-Correlation-Id: d8c4e5d0-f6e6-4148-8c1f-08db52127113
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Qyok2kwuWwXOC+3igOs0sr20Id1Fow11Zw9kbSZYnohNP8ZLCNU5ARrpBxDkzKfQ/3/kjYFWUXE0Ft2ogCb470If63huCqujOaI6i0Vq5ZTb9FvNRP5IKlHbYQ2n04py1cxsaTFmneIcJys5MgBPcVeRabGI/mE0t5ji6xuF5GrH5zFsymQIIU98FhEw7ywx2u/x3jiX7m1Z2fZJdvEfddHk9xOmMcnTwypgFMPeEZ96Z7sgByLavWCJLXMKZyrBnORvdLCFl3iC48NuqjyTWxS/2xIylMaPAvu4IbZVyHleBAeWUOvKwojCCQDW+ByDxL0Oi4C63R/nSnA+ecS7+2Cfbpp2+pX4JwxkRfhpbntUb3mzKmURNQxYA6IvW4LhF6aNM068IPY868uS/naBHD0ZRW/XiSluEhnd/QSRhkP2DStibwkAPo4quQgqcMCTh0FCwS2QIYErZU/SCFyL56etKWaRJa6xpeaiWtpJiUnYJAW2y0j2gdexoesl8cA8VXYTBRpbe1mne4QIYiPrQ7fbDwZD+aSV+euvvfFL7MJtCOY59nv+ai5dbKmG373YXS30Qsdhl7VfqFg4IT9n7QYB6LKlNQqC0lgECf6jm8W4OpPBtnlZdoZRv2MuPj6gDeoHfctKbVBptfShSc5Q5Wo+S4l38AQH6HuEjKZBug3l2V70dmnogTfNDowpuniatAZRO8t/MyqS8MydKln+5T+VMGpKC1n5Dpoh2VhWSbom+hKOeoLoIeFgxsAbkRGol9iG0hv0BPLDZif5a+/Yjw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(376002)(39860400002)(451199021)(40470700004)(46966006)(36840700001)(2906002)(2616005)(186003)(478600001)(40460700003)(36756003)(44832011)(356005)(40480700001)(316002)(41300700001)(26005)(82740400003)(83380400001)(81166007)(47076005)(1076003)(54906003)(70586007)(4326008)(6916009)(5660300002)(8676002)(70206006)(426003)(86362001)(336012)(36860700001)(8936002)(82310400005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 11:25:36.1333
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d8c4e5d0-f6e6-4148-8c1f-08db52127113
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT020.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB8558

In map_dt_irq_to_domain() we assign the return code of
map_irq_to_domain() to a variable without checking it for an error.
Fix this by propagating the return code directly, since it is the last
call in the function.

Take the opportunity to use the correct printk() format specifiers,
since both irq and domain id are of unsigned types.

Fixes: 467e5cbb2ffc ("xen: arm: consolidate mmio and irq mapping to dom0")
Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/arch/arm/domain_build.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f80fdd1af206..2c14718bff87 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2303,7 +2303,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
 
     if ( irq < NR_LOCAL_IRQS )
     {
-        printk(XENLOG_ERR "%s: IRQ%"PRId32" is not a SPI\n",
+        printk(XENLOG_ERR "%s: IRQ%u is not a SPI\n",
                dt_node_name(dev), irq);
         return -EINVAL;
     }
@@ -2313,14 +2313,14 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
     if ( res )
     {
         printk(XENLOG_ERR
-               "%s: Unable to setup IRQ%"PRId32" to dom%d\n",
+               "%s: Unable to setup IRQ%u to dom%u\n",
                dt_node_name(dev), irq, d->domain_id);
         return res;
     }
 
     res = map_irq_to_domain(d, irq, !mr_data->skip_mapping, dt_node_name(dev));
 
-    return 0;
+    return res;
 }
 
 int __init map_range_to_domain(const struct dt_device_node *dev,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 11:46:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 11:46:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533270.829781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4kd-0001PG-PB; Thu, 11 May 2023 11:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533270.829781; Thu, 11 May 2023 11:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4kd-0001P9-M4; Thu, 11 May 2023 11:46:27 +0000
Received: by outflank-mailman (input) for mailman id 533270;
 Thu, 11 May 2023 11:46:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DXxA=BA=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1px4kc-0001P1-DR
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 11:46:26 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on20605.outbound.protection.outlook.com
 [2a01:111:f400:7e83::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 73416516-eff1-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 13:46:23 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by SJ0PR12MB6990.namprd12.prod.outlook.com (2603:10b6:a03:449::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.29; Thu, 11 May
 2023 11:46:13 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::ef8d:bf8a:d296:ec2c]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::ef8d:bf8a:d296:ec2c%7]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 11:46:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73416516-eff1-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dOmxBFx+9O/X2BjuhUYLcc1pcE0O09o7YYacjI8LjQxBhtX8IDeReRFy16MkWQWU8F2ZKLkc9a3bqorMum0P+vRsSD0To/FuXeGcXCmT4XWx3kSjxKgDneHXr7+Y6peIT3b+AnPDgsTiAlst4qjYHUnz5IATONfK+Pj5TOuHwgJqibhCamJWARAZx7C2kMKKWuN8t1aSJe2N2ssUCR6H3VqkXl75/18GR41kyFTrGKzyofrEK2+05rlks/MXVElexD3FIvo08jwSw0nnu+jUVhVci2U2If0QlbJXl11gpWBSX0WudHcFnJac3GOyFaP2QD4GwofAuI5SkrgBjHEnGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3L93PlTDNF/+ZVKt9KlAsKMqQwonQfUPGmZBB3GjyjY=;
 b=aIEGJdwZI4SneVPzwjr3+cCq2h3iYT7jXNZUF0z+qAeo+UzrTsWqhEWe8IlxnMnXDVLgiQw/37kgqruWdc9YKYKIwghKl61yT+SC3om3hS+ITWbOqYQz1kZ/FVAoZI8c8HL9yA8PjRHu6xnZaT1lnU5dGcEJlxWpbPX+V3MEbP7eSYPePa//hErfcVVuHh+gSq3/OnKucD4X+AvNhZ3JYYuBrA82FcOFFrSlMWwm3R+ak8pylglz98ZwcroHMck2am4bT3kUhr9f8SDOhxYRS14zeHK1ASIXQCvQXWm4hZp0D/SyEC5J9RWVD1aqsKyRLRx7n7HIS0Y4h0u6rJCVkg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3L93PlTDNF/+ZVKt9KlAsKMqQwonQfUPGmZBB3GjyjY=;
 b=w9wMAIR+dw/1+vyKZgJvZqc6zD0QJxX8ZrAFoJzuBgeY8K8b7kJsx1jH12rpT3Y/qNwPmfWZS3J9yjwIvoQcnRCerNdxpt/f0Vnwr7gY7zqxkCG2YyBIQmCB/H8MpHmAx3StUdbYpLjNPh8Tcel6PT3GUVH5R4xWiOVAhYbXiAA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <4681a4d4-68d3-01cd-912c-bca2cdc83266@amd.com>
Date: Thu, 11 May 2023 12:45:59 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
From: Ayan Kumar Halder <ayankuma@amd.com>
Subject: Re: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-12-ayan.kumar.halder@amd.com>
 <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
In-Reply-To: <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0095.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2bc::19) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|SJ0PR12MB6990:EE_
X-MS-Office365-Filtering-Correlation-Id: 3d8dffde-a487-465f-c784-08db5215520e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6e5n35glgCd5lVf5Nofvs8tluwDvnP4mT3yw/mi/lQIAcBVsb3utJqVkrinaaadneP9rpdGn5T6YGd+4otaeb6fhbRzEJl831FVVjBqqr1z1D8eknYbtL9z0rUnPMlEtYeQ7SuEAp0uvYUAb49TX+JXJCj3NMHx393TZTZdl5Xv9HwB3Xw7Gbq/p9DjUR48/E8rTy9zJzuB2z72dbIqf8SZnLYBJYEb0TNJdKYZ1ft9IVVtjtQECPln4UY8cebs3uNHUXILlbfLLwqNqq11bQWoNyS721uYX1g0HqrhVLMeqrh5xEeGyY19wsJEMWTIbmAyIZlQ1EOz9feqJKIu7L+pJB2fLBK92iKX2P5XsQwUuwqmb8JGhdexBq+tZ2zOGoBlqCFMfmlkdKmXD7TsJcMKPxVKtdtmLuOXT5+EzLZLO3mosjFaD2bYbNwFJhQ/dDaDMTcfo1bZRj7xwwgJUna4mp1kXrFi4dB18wQn+FRXVbnE4nstxEiSqaNw4Wn9ysX3ivQbHORAeWWj2Y2Xuwyw7fgfJPvnDqKHWFbeBF0TQ/1T/au2aGrp4+/kTcif8cfzC0WgDfULs3ITvQ+eAhkxOi/GKbgbuKiQCpgToxuJJMuqKJ3a0G7HI2DBPrcvaQLgfH+ghwIwHF+tmPmZTkQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(39860400002)(366004)(396003)(136003)(451199021)(6486002)(6666004)(41300700001)(38100700002)(8936002)(6506007)(2906002)(53546011)(6512007)(26005)(478600001)(110136005)(186003)(31696002)(316002)(36756003)(7416002)(5660300002)(8676002)(4326008)(31686004)(66946007)(66476007)(66556008)(83380400001)(2616005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3d8dffde-a487-465f-c784-08db5215520e
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 11:46:12.9809
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR12MB6990


On 03/05/2023 13:20, Julien Grall wrote:
> Hi,

Hi Julien,

I have some clarifications.

>
> On 28/04/2023 18:55, Ayan Kumar Halder wrote:
>> Restructure the code so that one can use pa_range_info[] table for both
>> ARM_32 as well as ARM_64.
>>
>> Also, removed the hardcoding for P2M_ROOT_ORDER and P2M_ROOT_LEVEL as
>> p2m_root_order can be obtained from the pa_range_info[].root_order and
>> p2m_root_level can be obtained from pa_range_info[].sl0.
>>
>> Refer ARM DDI 0406C.d ID040418, B3-1345,
>> "Use of concatenated first-level translation tables
>>
>> ...However, a 40-bit input address range with a translation 
>> granularity of 4KB
>> requires a total of 28 bits of address resolution. Therefore, a stage 2
>> translation that supports a 40-bit input address range requires two 
>> concatenated
>> first-level translation tables,..."
>>
>> Thus, root-order is 1 for 40-bit IPA on ARM_32.
>>
>> Refer ARM DDI 0406C.d ID040418, B3-1348,
>>
>> "Determining the required first lookup level for stage 2 translations
>>
>> For a stage 2 translation, the output address range from the stage 1
>> translations determines the required input address range for the stage 2
>> translation. The permitted values of VTCR.SL0 are:
>>
>> 0b00 Stage 2 translation lookup must start at the second level.
>> 0b01 Stage 2 translation lookup must start at the first level.
>>
>> VTCR.T0SZ must indicate the required input address range. The size of 
>> the input
>> address region is 2^(32-T0SZ) bytes."
>>
>> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of 
>> input
>> address region is 2^40 bytes.
>>
>> Thus, pa_range_info[].t0sz = {1 (VTCR.S), 1000b (VTCR.T0SZ)}, i.e. 11000b,
>> which is 24.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>> Changes from -
>>
>> v3 - 1. New patch introduced in v4.
>> 2. Restructure the code such that pa_range_info[] is used both by 
>> ARM_32 as
>> well as ARM_64.
>>
>> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and 
>> P2M_ROOT_LEVEL.
>> The reason being root_order will not be always 1 (See the next patch).
>> 2. Updated the commit message to explain t0sz, sl0 and root_order 
>> values for
>> 32-bit IPA on Arm32.
>> 3. Some sanity fixes.
>>
>> v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range. ie
>> when PARange is 0, the PA size is 32, 1 -> 36 and so on. So 
>> pa_range_info[] has
>> been updated accordingly.
>> For ARM_32 pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do not 
>> support
>> 32-bit, 36-bit physical address range yet.
>>
>>   xen/arch/arm/include/asm/p2m.h |  8 +-------
>>   xen/arch/arm/p2m.c             | 32 ++++++++++++++++++--------------
>>   2 files changed, 19 insertions(+), 21 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/p2m.h 
>> b/xen/arch/arm/include/asm/p2m.h
>> index f67e9ddc72..4ddd4643d7 100644
>> --- a/xen/arch/arm/include/asm/p2m.h
>> +++ b/xen/arch/arm/include/asm/p2m.h
>> @@ -14,16 +14,10 @@
>>   /* Holds the bit size of IPAs in p2m tables.  */
>>   extern unsigned int p2m_ipa_bits;
>>   -#ifdef CONFIG_ARM_64
>>   extern unsigned int p2m_root_order;
>>   extern unsigned int p2m_root_level;
>> -#define P2M_ROOT_ORDER    p2m_root_order
>> +#define P2M_ROOT_ORDER p2m_root_order
>
> This looks like a spurious change.
>
>>   #define P2M_ROOT_LEVEL p2m_root_level
>> -#else
>> -/* First level P2M is always 2 consecutive pages */
>> -#define P2M_ROOT_ORDER    1
>> -#define P2M_ROOT_LEVEL 1
>> -#endif
>>     struct domain;
>>   diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 418997843d..1fe3cccf46 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -19,9 +19,9 @@
>>     #define INVALID_VMID 0 /* VMID 0 is reserved */
>>   -#ifdef CONFIG_ARM_64
>>   unsigned int __read_mostly p2m_root_order;
>>   unsigned int __read_mostly p2m_root_level;
>> +#ifdef CONFIG_ARM_64
>>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>   /* VMID is by default 8 bit width on AArch64 */
>>   #define MAX_VMID       max_vmid
>> @@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
>>       /* Setup Stage 2 address translation */
>>       register_t val = 
>> VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>>   -#ifdef CONFIG_ARM_32
>> -    if ( p2m_ipa_bits < 40 )
>> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
>> -              p2m_ipa_bits);
>> -
>> -    printk("P2M: 40-bit IPA\n");
>> -    p2m_ipa_bits = 40;
>> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
>> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
>> -#else /* CONFIG_ARM_64 */
>>       static const struct {
>>           unsigned int pabits; /* Physical Address Size */
>>           unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
>> @@ -2265,19 +2255,26 @@ void __init setup_virt_paging(void)
>>       } pa_range_info[] __initconst = {
>>           /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table 
>> D5-6 */
>>           /*      PA size, t0sz(min), root-order, sl0(max) */
>> +        [2] = { 40,      24/*24*/,  1,          1 },
>
> I don't like the fact that the index are not ordered anymore and...
>
>> +#ifdef CONFIG_ARM_64
>>           [0] = { 32,      32/*32*/,  0,          1 },
>>           [1] = { 36,      28/*28*/,  0,          1 },
>> -        [2] = { 40,      24/*24*/,  1,          1 },
>>           [3] = { 42,      22/*22*/,  3,          1 },
>>           [4] = { 44,      20/*20*/,  0,          2 },
>>           [5] = { 48,      16/*16*/,  0,          2 },
>>           [6] = { 52,      12/*12*/,  4,          2 },
>>           [7] = { 0 }  /* Invalid */
>> +#else
>> +        [0] = { 0 },  /* Invalid */
>> +        [1] = { 0 },  /* Invalid */
>> +        [3] = { 0 }  /* Invalid */
>> +#endif
>
> ... it is not clear to me why we are adding 3 extra entries. I think 
> it would be better if we do:
>
> #ifdef CONFIG_ARM_64
>    [0] ...
>    [1] ...
> #endif
>    [2] ...
> #ifdef CONFIG_ARM_64
>    [3] ...
>    [4] ...
>    ...
> #endif
>
>>       };
>>         unsigned int i;
>>       unsigned int pa_range = 0x10; /* Larger than any possible value */
>>   +#ifdef CONFIG_ARM_64
>>       /*
>>        * Restrict "p2m_ipa_bits" if needed. As P2M table is always 
>> configured
>>        * with IPA bits == PA bits, compare against "pabits".
>> @@ -2291,6 +2288,9 @@ void __init setup_virt_paging(void)
>>        */
>>       if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
>>           max_vmid = MAX_VMID_16_BIT;
>> +#else
>> +    p2m_ipa_bits = PADDR_BITS;
>> +#endif
> Why do we need to reset p2m_ipa_bits for Arm?

Ah, this is a mistake. I will remove this.

>
>>         /* Choose suitable "pa_range" according to the resulted 
>> "p2m_ipa_bits". */
>>       for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
>> @@ -2306,24 +2306,28 @@ void __init setup_virt_paging(void)
>>       if ( pa_range >= ARRAY_SIZE(pa_range_info) || 
>> !pa_range_info[pa_range].pabits )
>>           panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", 
>> pa_range);
>>   +#ifdef CONFIG_ARM_64
>>       val |= VTCR_PS(pa_range);
>>       val |= VTCR_TG0_4K;
>>         /* Set the VS bit only if 16 bit VMID is supported. */
>>       if ( MAX_VMID == MAX_VMID_16_BIT )
>>           val |= VTCR_VS;
>> +
>> +    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
>> +#endif
>> +
>>       val |= VTCR_SL0(pa_range_info[pa_range].sl0);
>>       val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
>>         p2m_root_order = pa_range_info[pa_range].root_order;
>>       p2m_root_level = 2 - pa_range_info[pa_range].sl0;
>> -    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
>
> I think this line should stay for 32-bit as well because we 
> p2m_ipa_bits should be based on the PA range we selected (see the loop 
> 'Choose suitable "pa_range"...').

This isn't true for ARM_32.

Refer ARM DDI 0406C.d ID040418, B3-1348, "Determining the required first
lookup level for stage 2 translations"

"...The size of this input address region is 2^(32-TxSZ) bytes, ..."

So suppose we had:

#ifdef CONFIG_ARM_32

p2m_ipa_bits = 32 - pa_range_info[pa_range].t0sz;

#endif

This will be a problem for the 40-bit IPA case, as t0sz is stored as 24
(the 5-bit encoding of -8).

So p2m_ipa_bits = 32 - 24 = 8 (which is incorrect; it should be 40).


To get around this issue, there are two possible solutions:

1. For ARM_32, do not modify p2m_ipa_bits. Thus p2m_ipa_bits will keep its
initialized value (i.e. PADDR_BITS).

2. t0sz should be a signed int for ARM_32 (so that it can hold -8) and an
unsigned int for ARM_64, i.e.:

     static const struct {
         unsigned int pabits; /* Physical Address Size */
#ifdef CONFIG_ARM_64
         unsigned int t0sz:5;   /* Desired T0SZ, minimum in comment */
#else
         signed int t0sz:5;   /* Desired T0SZ, minimum in comment */
#endif
         unsigned int root_order; /* Page order of the root of the p2m */
         unsigned int sl0;    /* Desired SL0, maximum in comment */
     } pa_range_info[] __initconst = {
....


I would prefer option 1 for the sake of fewer #ifdefs.

Happy to hear your opinions.

- Ayan

>
>>         printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n",
>>              p2m_ipa_bits,
>>              pa_range_info[pa_range].pabits,
>>              ( MAX_VMID == MAX_VMID_16_BIT ) ? 16 : 8);
>> -#endif
>> +
>>       printk("P2M: %d levels with order-%d root, VTCR 
>> 0x%"PRIregister"\n",
>>              4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val);
>
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Thu May 11 11:50:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 11:50:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb30fdde-eff1-11ed-b229-6b7b168915f2
Message-ID: <f83213df-2433-ec51-814c-436ce5ea4967@suse.com>
Date: Thu, 11 May 2023 13:50:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/vPIT: check/bound values loaded from state save record
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0062.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e5a1067e-b41e-413d-c5c3-08db5215ddf0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 11:50:07.5992
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8363

In particular pit_latch_status() and speaker_ioport_read() perform
calculations which assume in-bounds values. Several of the state save
record fields can hold wider ranges, though.

Note that ->gate should only be possible to be zero for channel 2;
enforce that as well.

Adjust pit_reset()'s writing of ->mode as well, to not unduly affect
the value pit_latch_status() may calculate. The chosen mode of 7 is
still one which cannot be established by writing the control word.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Of course an alternative would be to simply reject state save records
with out of bounds values.

--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -47,6 +47,7 @@
 #define RW_STATE_MSB 2
 #define RW_STATE_WORD0 3
 #define RW_STATE_WORD1 4
+#define RW_STATE_NUM 5
 
 static int cf_check handle_pit_io(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val);
@@ -426,6 +427,33 @@ static int cf_check pit_load(struct doma
     }
     
     /*
+     * Convert loaded values to be within valid range, for them to represent
+     * actually reachable state.  Uses of some of the values elsewhere assume
+     * this is the case.
+     */
+    for ( i = 0; i < ARRAY_SIZE(pit->hw.channels); ++i )
+    {
+        struct hvm_hw_pit_channel *ch = &pit->hw.channels[i];
+
+        /* pit_load_count() will convert 0 suitably back to 0x10000. */
+        ch->count &= 0xffff;
+        if ( ch->count_latched >= RW_STATE_NUM )
+            ch->count_latched = 0;
+        if ( ch->read_state >= RW_STATE_NUM )
+            ch->read_state = 0;
+        if ( ch->write_state >= RW_STATE_NUM )
+            ch->write_state = 0;
+        if ( ch->rw_mode > RW_STATE_WORD0 )
+            ch->rw_mode = 0;
+        if ( (ch->mode &= 7) > 5 )
+            ch->mode -= 4;
+        ch->bcd = !!ch->bcd;
+        ch->gate = i != 2 || ch->gate;
+    }
+
+    pit->hw.speaker_data_on = !!pit->hw.speaker_data_on;
+
+    /*
      * Recreate platform timers from hardware state.  There will be some 
      * time jitter here, but the wall-clock will have jumped massively, so 
      * we hope the guest can handle it.
@@ -464,7 +492,7 @@ void pit_reset(struct domain *d)
     for ( i = 0; i < 3; i++ )
     {
         s = &pit->hw.channels[i];
-        s->mode = 0xff; /* the init mode */
+        s->mode = 7; /* the init mode */
         s->gate = (i != 2);
         pit_load_count(pit, i, 0);
     }


From xen-devel-bounces@lists.xenproject.org Thu May 11 11:50:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 11:50:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bec8b00-eff2-11ed-8611-37d641c3527e
Message-ID: <060ae425-2daf-9ce4-d291-215d483d592d@suse.com>
Date: Thu, 11 May 2023 13:50:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/vPIC: check values loaded from state save record
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0068.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 792d20a9-834d-4a4c-f221-08db5215eede
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 11:50:35.9825
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2dHY9LnykJJGKvu94rhDk4ZbSNen1BGAB8GIHMZv6ZNXQ64mSFUAJgpU1ys3GwRKB6O3X2EEfI79wijnTgyKYA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR04MB9071

Loading is_master from the state save record can lead to out-of-bounds
accesses via at least the two container_of() uses by vpic_domain() and
__vpic_lock(). Calculate the field from the supplied instance number
instead. Adjust the public header comment accordingly.

For ELCR, follow what vpic_intercept_elcr_io()'s write path and
vpic_reset() do.

Convert ->int_output (which for whatever reason isn't a 1-bit bitfield)
to boolean, also taking ->init_state into account.

While there, also correct vpic_domain() itself to use its parameter in
both places.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Of course an alternative would be to simply reject state save records
with bogus values.

--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -35,7 +35,7 @@
 #include <asm/hvm/save.h>
 
 #define vpic_domain(v) (container_of((v), struct domain, \
-                        arch.hvm.vpic[!vpic->is_master]))
+                                     arch.hvm.vpic[!(v)->is_master]))
 #define __vpic_lock(v) &container_of((v), struct hvm_domain, \
                                         vpic[!(v)->is_master])->irq_lock
 #define vpic_lock(v)   spin_lock(__vpic_lock(v))
@@ -437,6 +437,14 @@ static int cf_check vpic_load(struct dom
     if ( hvm_load_entry(PIC, h, s) != 0 )
         return -EINVAL;
 
+    s->is_master = !inst;
+
+    s->elcr &= vpic_elcr_mask(s);
+    if ( s->is_master )
+        s->elcr |= 1 << 2;
+
+    s->int_output = !s->init_state && s->int_output;
+
     return 0;
 }
 
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -329,7 +329,10 @@ struct hvm_hw_vpic {
     /* Special mask mode excludes masked IRs from AEOI and priority checks. */
     uint8_t special_mask_mode:1;
 
-    /* Is this a master PIC or slave PIC? (NB. This is not programmable.) */
+    /*
+     * Is this the master PIC or a slave one? (NB. This is not programmable,
+     * and hence is ignored upon loading.)
+     */
     uint8_t is_master:1;
 
     /* Edge/trigger selection. */


From xen-devel-bounces@lists.xenproject.org Thu May 11 11:51:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 11:51:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533283.829811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4pF-0003zq-1d; Thu, 11 May 2023 11:51:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533283.829811; Thu, 11 May 2023 11:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4pE-0003zj-U6; Thu, 11 May 2023 11:51:12 +0000
Received: by outflank-mailman (input) for mailman id 533283;
 Thu, 11 May 2023 11:51:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px4pD-0003SF-Km
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 11:51:11 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0613.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1e667cac-eff2-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 13:51:09 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9071.eurprd04.prod.outlook.com (2603:10a6:150:22::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 11:51:07 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 11:51:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e667cac-eff2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PFO2N8g1N3L29rBDP2FEWSuT/Q78PjkkozzTuD5SiDey7pTKa72YA+SK6ycMHjYhpordTr0R+M8yMphxywIvB4qb4KL3+B2y8bfJLQ08mbBEZiGJggn3zt6PcIjIHKHijfERtDJZ80OqhBkGeOnFh35ABgF60VoDBhwxDaM13qO4ky80F3pxdsAn+G4bgGZA+30qidPKmi9CdnhUxz+k3VdoeKglNXbTSk6vS/YE8zCNbKC8flUCPNlqczQFNe+3MtHIdWvbiCkVUrC3OH5Y0vA5Ujlmjzjicqya7ML8h+92zvZVOqG5R01BbazrnMtUYXr7wkqPj4JkrDooiJ7pGg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mpgonQYUpEAl3AwPVAhrNHwGHxi1zyVswuSPi4ojvwU=;
 b=h/yOGzGTBVQQqe+IE1kPMMHvttNfp105yaRhV0UJ+ZxJ7MFEFru6THmhTJii9XMji2OuJiWmLbQIDpQUzPVy2xofFd35iYjjeYpxjjrxouMvKnltNe9DCVGdBMrl0i5rxrdrghryhsutNzQXG3JN4H4qHNMkJATHog3q952RksaZAJ6LAUe8Gg1ip5/8J48DRaDaRyLTaXW8UnBkrdGbQVahHV9a2Asklek3omrbvvfjVDLaFNgLOLS4l7eR15urUsC6qlvlgmhibr5hkJfVQYyy19+PWv078PPUgWGb5nB//Ti1W2KGuT1EoA4qYEXRrWmljTMpZCZ6M3HDxhCQzg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mpgonQYUpEAl3AwPVAhrNHwGHxi1zyVswuSPi4ojvwU=;
 b=D4oukvQKTxQ8eXVAfD3w4TL8nbZuX+FKpkQxNWQfmz/W9ZEzvrgzWz0HjhIL7iKTMvrkMSTjjk+y1MBVcaDspIIu/MT0vnlmQdoutIg+cYON2Hbo3Rto2VvU+1G0d1l2JugMVF7ayWYoyFVddLYbPCAhnDGO8Abk6p9J1/6uGTQxtoBVTUWOY3nAm0/FLcoZ3lXRaNYRX12muTiPrURfwCyd5z4+hepiNS9mWZb7MqHHSIer6p0tvaw0fVS0Sp5YcvRxsjGGi3HhuxvpgD+c4qJ3csQjd6nNNANaTyK49keO8eE4H99G7H7cyS2vOtJzyNdoR554ozZC3JalmNrcUA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <39903e79-af15-9017-e470-65124bd60847@suse.com>
Date: Thu, 11 May 2023 13:51:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/vRTC: minor adjustment to reads from index port
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0141.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GV1PR04MB9071:EE_
X-MS-Office365-Filtering-Correlation-Id: 0b0b0b17-c3d9-4f3d-9b4e-08db521601d9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	kpvtAOjYy/gARNh01M5gdl+u/L26WXcIcDmo44RtGWuPr9WDhiBjhPS/Ao2g4vupt72ey7k/DD8oraFIUH9UhZ3mQ38EM62tjbnLjPmJlDZNGHnC5sHzVPFnXmOxhOcrQUvoYNIGRRCRIMi2BKpS9lzui4IpOBXHeDPMc7Ms/8r7BGIvZvBjlRp2twQhDKHe+O2nQALAokwBd8sbXN/3TD6Y3kioW36GO13j82hxLrKEm1HCRFBxvG0g211tkOqcnE9LnxnFKZPiS8d7cRc5B8lkFTBx/0Xd6Pa73ScB/AWUpJyA/X9IxkObB0xWiG+U4sIICIBrWe364N5JkHLlWOTbVwcJCallWb7vy8rh18MpwjEcUe8KXCEkILjJRaE/MHJjNZPu8lfcC4ZbpBHhdwPADP+RXCz6ooHDowRENpCe2LHGVpn9pQLnaR2EGZ6JcsiAs36+MyLUECARR+fzEl8iKcYrs4zFmg2WDqmd/HQizb6Msv0KD1U5EGqszku+mNRId3xjSQ1F7VsOIM4I6KX+aLropvIvLSGmjuKgWGetDsA4NVIU1ar1WXcB+oNPxEUyz2TaJzkeIYv4QMUzKd7wRdFFN5U7o5ZkabOILaotdr0gy/QqIQ+ZwfSKFuDzzCWD0ny1cyMNiTUc2vV05w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(346002)(376002)(136003)(366004)(396003)(451199021)(66476007)(66556008)(316002)(5660300002)(66946007)(4326008)(6916009)(31686004)(54906003)(41300700001)(83380400001)(6486002)(38100700002)(8936002)(2616005)(8676002)(6512007)(26005)(186003)(6506007)(478600001)(2906002)(36756003)(4744005)(86362001)(31696002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?b0hxSGFvUDFCUmlwMU94czYyeUZBUVdBU0wzbVZzbkxZc21vRXlGdVU2SkNh?=
 =?utf-8?B?dXJia1B3WmVVK3hjY3J5dHBIYldJaDF1a0RNNUcwOXFUS3Z0TUloZGR6TWIr?=
 =?utf-8?B?Sk5VTDlmQ3gzbHlnWWxMb01Nak9WcFVHWjNyYk5rNmFQdm5NTkFHdHhPTXRG?=
 =?utf-8?B?SUtBdHdOWWFSd3A2ZWlpQy8yd2lCbE5DeG4xVzltenlVZEFRRzBIMll5QzY0?=
 =?utf-8?B?a3h6NW5sSkkyVHFVMWxsV3NXWEU5SlZLMmplTzI1YUN3NlBVR0RPeTFrV0N6?=
 =?utf-8?B?Q1MzOXhOUC9KTnJOQjU2TFlTcTVYdHVOQVBPVzV3RnJJdHBLNndrc0g2ZzEx?=
 =?utf-8?B?Mmc2eDBRNTgwSCtEeUJIMSttVWpJL3NtLzd1UnlIZUg1eWlHeVBwZ3k2c2kx?=
 =?utf-8?B?NDI2dXdnTDFhL0pwQUJjdTErbzlCVEU1enRpRDFGUG1PdUVPWmZTZE1QOTB6?=
 =?utf-8?B?VEcySWIrOGZEbnc3S1R0VE5SZmhHVnM4VFc4bkUybmhVcEtRbGpXT09mSklW?=
 =?utf-8?B?MUZxNHpWRVA3eFl3eXhhTTlHVnl1WDRFc3owVzZVWHFwZ1NTSmY4L3dJdk1t?=
 =?utf-8?B?T2dReFIvNUtTVmU1bmg2cnNqSUIzaUgxMkVoSW1aTkwwMFE1NlhETWhwWmNY?=
 =?utf-8?B?c0xoUmtJaFBBZzRpblFVUjBCWHJpRXpUQlNsUDR3ZTFBdnZ4SHR5MXpnQ0pE?=
 =?utf-8?B?ZEFYRUE1b3BDUmd0TUo5R3RVQW8zeXRiQkZaNktjeEVJbTQ3U2E0c2hpL3dq?=
 =?utf-8?B?RkJFSGpPazY0S1p1c2hvVWpiU2hBSGx4L0V5S2o0YzY5dmc5RHRkdGpmcTY3?=
 =?utf-8?B?SXphRDV5VjlaMEhSUHJaRmtqQ0xveHQ5ZUd4N1p6c3cyZ0MrYVhlcS9TZjVt?=
 =?utf-8?B?cFFuT2oybHJHRGM4UFZWKzd1QzhzQVFwRjBNN0lkb0V2dnlIQjMzdEdieUFG?=
 =?utf-8?B?YkI5cEJwTGpoVWZ3UXRoMWJjd202cGR1WTBudG05em90TGtXK29vbUd5WWlo?=
 =?utf-8?B?Uk5RMHVaSHVIU3FJM2s5WlY3SU52QmhqV3NVN0dEdFN2TGlhM2ZBc2dSOFdN?=
 =?utf-8?B?ZG1oWVU4a3RGOW9NYVorYWtHWGlGRm1tSE9mUHRnREREREJLbDlVN0dEaE01?=
 =?utf-8?B?SzlISThCdlV3d29RLzlmMWsxbS92Uno5U2RNaFJwck5aU2lSRUFPSEdHNmt1?=
 =?utf-8?B?L2Y4RGFpSHZuNHUxY2ducnd1TEp1ckpGQUVDWHBUZm4yMCtyOXcvQ0JXMWo4?=
 =?utf-8?B?TnlsNmhjWjRjek5PcjNyL20wZU0wODI3RkhpcUQ2OVlXbU9WaTBjcmlKUXdz?=
 =?utf-8?B?TVZ5emthVDIxVlAva052dzk0ZWhBZDNSQ2RqeDlXQ0cvenFzYXpnRDZ6MlVi?=
 =?utf-8?B?VGZ0dXFCRXFULzdBUkNOeitQVURUaHVkWEtWcmNFU0txaUl3czFscWlacWJW?=
 =?utf-8?B?UFpDZ0plZHZ3RldNNkJpZTBtUWl0cVJSa2JZQTRBdFBEMnc3ZnBOQlBycVBz?=
 =?utf-8?B?UnUwVlRybE80NEpWMUMxOXZBb3ZXazVJSGJzeDVnNWxXZEpxNFJpZjQxbHc5?=
 =?utf-8?B?RXRodkEvc2dlNmdoRjd5OTNIZ0d3QWliTEVvRWhlTzQyYStYVkxuMlhtLyth?=
 =?utf-8?B?RituN1lTazJ2QnAvUlJuWlVQYmxDSzFqL3RmR3pmTFdqRllIclBsdHprN2dD?=
 =?utf-8?B?QXF0TlV6VE9nc2tOemJVbm1qeUlMZEVrQU5ZZEpycGk2V01jVEZzWlF0eU1T?=
 =?utf-8?B?ejVFMVcyNnVmSHNvak1iV1NsbTg3VFBTL0JHdm9sOEorbHV5MEtkd0ZEY1Fn?=
 =?utf-8?B?WmhKWWVrZVBCb1pGWXhtbWdVOXVlU00raXFEVmFmYk4yU0U0ZWVJVjJYcjhB?=
 =?utf-8?B?L00xSVBEV0xodDVVQnQrVnRQdXZ5SUVCK0tIYjJVV3Nrbk56SWNYc2h2dFlh?=
 =?utf-8?B?dm16dEU0ZTQ4YU5iK2tIbXdNai9STVlPR1VQQ2NuUno1ZGhlTWJzS0MyTXN2?=
 =?utf-8?B?VVM2S0N6T2hDUnUrUEgvUmZtK1JWV21GYUxmQk1OdndVUFFwSnk3ZlVxa0w1?=
 =?utf-8?B?UjdsVlRxSGZud09GdGY5YVJlckM1bXpsekkyMGZtVjdnTFlpSFQvL2Q3Q1Qv?=
 =?utf-8?Q?BQCTmO5iyTYAf1EQM8oHrybKq?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b0b0b17-c3d9-4f3d-9b4e-08db521601d9
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 11:51:07.8234
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zsbEhomY/kKCtwP2YeMvsr+EaevzlClPaJ6dIFeHobJ+3FuFUoBi1PNDITIASIouVDTMcN8dbZmwGreOu/DLQQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR04MB9071

Whether a read of the (even) index port gets handled shouldn't depend on
the present value of the index register. Since the handling is done
outside of the lock anyway, pull it out into the sole caller and drop
the now-unneeded function parameter.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -645,14 +645,11 @@ static int update_in_progress(RTCState *
     return 0;
 }
 
-static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
+static uint32_t rtc_ioport_read(RTCState *s)
 {
     int ret;
     struct domain *d = vrtc_domain(s);
 
-    if ( (addr & 1) == 0 )
-        return 0xff;
-
     spin_lock(&s->lock);
 
     switch ( s->hw.cmos_index )
@@ -714,9 +711,14 @@ static int cf_check handle_rtc_io(
         if ( rtc_ioport_write(vrtc, port, (uint8_t)*val) )
             return X86EMUL_OKAY;
     }
+    else if ( !(port & 1) )
+    {
+        *val = 0xff;
+        return X86EMUL_OKAY;
+    }
     else if ( vrtc->hw.cmos_index < RTC_CMOS_SIZE )
     {
-        *val = rtc_ioport_read(vrtc, port);
+        *val = rtc_ioport_read(vrtc);
         return X86EMUL_OKAY;
     }
 


From xen-devel-bounces@lists.xenproject.org Thu May 11 11:55:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 11:55:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533293.829821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4t7-0004ml-LH; Thu, 11 May 2023 11:55:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533293.829821; Thu, 11 May 2023 11:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4t7-0004me-Hy; Thu, 11 May 2023 11:55:13 +0000
Received: by outflank-mailman (input) for mailman id 533293;
 Thu, 11 May 2023 11:55:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1px4t7-0004mW-21
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 11:55:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px4t6-0001if-OK; Thu, 11 May 2023 11:55:12 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.27.46]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px4t6-00005o-Hj; Thu, 11 May 2023 11:55:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=gxiGGCrxhqXIQ9pZXMT0X4e4p8ZlYDBRB2ae+JEsPiU=; b=K6kZkkReOr2ThlgUsGw3GOhQQt
	CJjgceCW9mZKkQ+27wv+/bjf0hxYkh2hq2RzQ1GPxz748NSCMw4PSY8rI+/rrUMONQIYNA7/ZYpZL
	jbtUWk+jPcMNxGBnlQkv7aLdUQnInIk4l1neVA2lGFJJ4JEs8MrekRYxD5Oo6pdN40nw=;
Message-ID: <09f4eef3-df60-ee90-bc5b-7e61ef9788a0@xen.org>
Date: Thu, 11 May 2023 12:55:10 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH] xen/arm: domain_build: Propagate return code of
 map_irq_to_domain()
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230511112531.705-1-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230511112531.705-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 11/05/2023 12:25, Michal Orzel wrote:
> In map_dt_irq_to_domain() we assign the return code of
> map_irq_to_domain() to a variable without checking it for an error.
> Fix it by propagating the return code directly, since this is the last
> call.
> 
> Take the opportunity to use the correct printk() format specifiers,
> since both irq and domain id are of unsigned types.

I would prefer this to be split into a separate patch because, while we 
want to backport the first part, I don't think the latter should be.

> 
> Fixes: 467e5cbb2ffc ("xen: arm: consolidate mmio and irq mapping to dom0")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
>   xen/arch/arm/domain_build.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f80fdd1af206..2c14718bff87 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2303,7 +2303,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>   
>       if ( irq < NR_LOCAL_IRQS )
>       {
> -        printk(XENLOG_ERR "%s: IRQ%"PRId32" is not a SPI\n",
> +        printk(XENLOG_ERR "%s: IRQ%u is not a SPI\n",
>                  dt_node_name(dev), irq);
>           return -EINVAL;
>       }
> @@ -2313,14 +2313,14 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>       if ( res )
>       {
>           printk(XENLOG_ERR
> -               "%s: Unable to setup IRQ%"PRId32" to dom%d\n",
> +               "%s: Unable to setup IRQ%u to dom%u\n",
>                  dt_node_name(dev), irq, d->domain_id);

Please switch to %pd when printing the domain.

>           return res;
>       }
>   
>       res = map_irq_to_domain(d, irq, !mr_data->skip_mapping, dt_node_name(dev));
>   
> -    return 0;
> +    return res;
>   }
>   
>   int __init map_range_to_domain(const struct dt_device_node *dev,

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 11 12:01:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:01:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533300.829830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4yl-0006UW-JE; Thu, 11 May 2023 12:01:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533300.829830; Thu, 11 May 2023 12:01:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px4yl-0006UP-GH; Thu, 11 May 2023 12:01:03 +0000
Received: by outflank-mailman (input) for mailman id 533300;
 Thu, 11 May 2023 12:01:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1px4yj-0006UJ-SE
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:01:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px4yi-0001rS-AM; Thu, 11 May 2023 12:01:00 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.27.46]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px4yi-0000Rg-35; Thu, 11 May 2023 12:01:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=/St88HNcxWkuf4pdozYzqNwz35Cafc4+MSjD0fPYxkk=; b=RuJCaYq9HWkmmCqq2+2aq8JdhQ
	Qkpvz9Afr2nMN4/xH7wpzNQJ/7mpTiL0nAmEtkrP+nkHnwNk2RZFdHJ271ngaTQR4z2QVa2o7zcoK
	FVXJ0sJkcX8dRKvHZ9pA3/46hIUdYFxWe+SVAWCiy1iPe8bX6Y9IJ7InGFkYFOGw8aDs=;
Message-ID: <1473f9f0-5040-c10e-0098-9ba650b5a3e1@xen.org>
Date: Thu, 11 May 2023 13:00:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH] xen/arm: pci: fix -Wtype-limits warning in
 pci-host-common.c
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230503191820.78322-1-stewart.hildebrand@amd.com>
 <5D298044-314C-473F-97AB-420DA3DA44A2@arm.com>
 <4ca00734-8e1e-fe5b-b2a0-6f08f3835433@xen.org>
 <837CD804-85F2-4D3C-87FD-3F65F22A8432@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <837CD804-85F2-4D3C-87FD-3F65F22A8432@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 04/05/2023 11:32, Bertrand Marquis wrote:
> Hi,
> 
>> On 4 May 2023, at 12:10, Julien Grall <julien@xen.org> wrote:
>>
>> Hi,
>>
>> On 04/05/2023 09:59, Bertrand Marquis wrote:
>>> Hi Stewart,
>>>> On 3 May 2023, at 21:18, Stewart Hildebrand <Stewart.Hildebrand@amd.com> wrote:
>>>>
>>>> When building with EXTRA_CFLAGS_XEN_CORE="-Wtype-limits", we observe the
>>>> following warning:
>>>>
>>>> arch/arm/pci/pci-host-common.c: In function ‘pci_host_common_probe’:
>>>> arch/arm/pci/pci-host-common.c:238:26: warning: comparison is always false due to limited range of data type [-Wtype-limits]
>>>>   238 |     if ( bridge->segment < 0 )
>>>>       |                          ^
>>>>
>>>> This is due to bridge->segment being an unsigned type. Fix it by introducing a
>>>> new variable of signed type to use in the condition.
>>>>
>>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
>>> I would see this as a bug fix more than a compiler warning fix as the error code was
>>> ignored before that.
>>
>> +1. Also there is a missing fixes tag. AFAICT this issue was introduced by 6ec9176d94ae.
>>
>>> Anyway:
>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> Just to clarify, you are happy with the current commit message? If so, I can commit it later on with the Reviewed-by + fixes tag.
> 
> Would be nice to add the proper fixes flag if you can do it on commit :-)

Sorry for the delay. This is now committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 11 12:03:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:03:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533309.829841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px519-0007AW-0G; Thu, 11 May 2023 12:03:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533309.829841; Thu, 11 May 2023 12:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px518-0007AP-TJ; Thu, 11 May 2023 12:03:30 +0000
Received: by outflank-mailman (input) for mailman id 533309;
 Thu, 11 May 2023 12:03:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px517-0007AC-Mu
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:03:29 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0625.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d737ea80-eff3-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 14:03:28 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8336.eurprd04.prod.outlook.com (2603:10a6:102:1c5::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.21; Thu, 11 May
 2023 12:03:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 12:03:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d737ea80-eff3-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kOKnbHRQ7qSA3rMMiJSiBKFak2GpNdzh/DoqUALpIXSHWkaZmOy1lqcPE2Ahkhzw19z9TAJjHl+y39pPwgbbfO3SOisDH9yk2+QSr+d2k2aP8wcvp4CAWekJXdI3XvoW63qiAyle1HBKDPBNs8nAr5WmTYeJYVXzJ/hdJGozz3Jt08tv2gL1koh7+QLg7UH9w0tSG5pywSbkmoC5XQ/SX+Z8JNE3T7CE7tTSTNI5Ldm68cyTz5ZCb7GmulvsCjbBO4sc71XGxFMo0wPYJU0Vu+s3ypmadUWWUt7UlsSDgZ7f6UU8Ih6RDQEkPph3if6Uni7xwJtl5VleKo3lQ1vFcw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=G4PgGZ3pjEle15bDbdUH/uSZ9bCgvWZfEO/3mozQMSI=;
 b=RT+wP2ZZOxemIGdKpS+usQBUYJp8f8jw5fi0jmHGWvL9JCE2g25/n4Vp2KP0vsQvZyXN+wK39kbZsjqiRCJzkSXD9mHkzeqSwHNnWNqufjWtkDPfIjTSKlesf3nQDV4nujyH9D0RKUM3PEDgdAptiUtzfXmVNs3TK/EhSHWOzVzxT/hjsGXWzQOl4Mi6KJ9+HYwoI+8ZuZjA8rb0k4UBxm4v0n88L+fbveEuXcfos838vP73BxwNwtMpwT/49CC2BbehL5qRI/wSlhd695KUfVSoCTyQQ1bD3RhHVH4o6Sxvps8QkIGuZnE5hUMAdvrYCplQnV/nbcDNIeYCbqA8+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G4PgGZ3pjEle15bDbdUH/uSZ9bCgvWZfEO/3mozQMSI=;
 b=SFVaSuhpr2LaNlBvP7RuBy7PGc9lT0E5g8oSUrHitsK0Yj8E2ko88SJ4Jc59HRCO1xLBJmLvgFzwvf8ftUWZz0bLjd0IpgTWRV+PZloQ1y4FfU+mQZCfQ+AvmgQYaTpK8wawtouImKNR3VrLEQl54qM+LfIubkVCG+a0lNe2YpRHmS7ngMxjRVj1AFlnfZR0GxfhEozeVKbMDD7OQvFmTVQj/a1OXt7qkfROMJAxwgnQll42EtzE5uBhJ4iAheCrlbeTb+PJ0dBPtqTvaA6gnZ5N0CtrguHwQmJXiyRmp0osSZs0Fc7R73BqtbsOk4IjHhZFmFsNrdvhPqWOGRi2xQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
Date: Thu, 11 May 2023 14:03:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/7] x86: Dom0 I/O port access permissions
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0115.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8336:EE_
X-MS-Office365-Filtering-Correlation-Id: 96ef3814-0a90-414a-b58d-08db5217b98b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 96ef3814-0a90-414a-b58d-08db5217b98b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 12:03:25.5955
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8336

Following on from the CMOS/RTC port aliasing change, there are quite
a few more missing restrictions, and there's more port aliasing to be
aware of.

The last two patches are pretty much RFC for now.

Of course an alternative to all of this would be to do away with all
policy-only ioports_deny_access() calls in dom0_setup_permissions(),
leaving in place only those which are truly required for functional
reasons.

1: don't allow Dom0 access to port CF9
2: don't allow Dom0 access to port 92
3: PVH: deny Dom0 access to the ISA DMA controller
4: detect PIC aliasing on ports other than 0x[2A][01]
5: detect PIT aliasing on ports other than 0x4[0-3]
6: don't allow Dom0 (direct) access to port F0
7: don't allow Dom0 access to ELCR ports
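
For illustration, the overall effect of the denials in this series can be
modeled with a toy per-domain port bitmap. This is a sketch only; Xen's
real implementation tracks permissions via rangesets through
ioports_deny_access(), and the names deny_ports, port_allowed and
setup_toy_permissions below are made up:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Toy model of per-domain I/O port permissions: one bit per port,
 * set = access denied.  A hypothetical stand-in for Xen's rangesets. */
static uint8_t ports_denied[0x10000 / 8];

static void deny_ports(unsigned int start, unsigned int end)
{
    for ( unsigned int p = start; p <= end; p++ )
        ports_denied[p / 8] |= 1u << (p % 8);
}

static bool port_allowed(unsigned int port)
{
    return !(ports_denied[port / 8] & (1u << (port % 8)));
}

/* Mirror of (a subset of) the denials added/kept by this series. */
static void setup_toy_permissions(void)
{
    memset(ports_denied, 0, sizeof(ports_denied));
    deny_ports(0x40, 0x43);   /* PIT */
    deny_ports(0x61, 0x61);   /* PIT channel 2 / PC speaker control */
    deny_ports(0x92, 0x92);   /* INIT# and alternative A20M# control */
    deny_ports(0xCF9, 0xCF9); /* reset control */
    deny_ports(0xCFC, 0xCFF); /* PCI config data */
}
```

Note that 0xCF8 (the PCI config address port, which gets special
treatment) stays accessible in this model while the denied ranges are
blocked.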

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 12:05:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:05:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533313.829851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px52s-0007l4-CW; Thu, 11 May 2023 12:05:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533313.829851; Thu, 11 May 2023 12:05:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px52s-0007kx-9f; Thu, 11 May 2023 12:05:18 +0000
Received: by outflank-mailman (input) for mailman id 533313;
 Thu, 11 May 2023 12:05:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px52r-0007kb-7u
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:05:17 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0611.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 172baf42-eff4-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 14:05:16 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7072.eurprd04.prod.outlook.com (2603:10a6:800:12c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 12:05:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 12:05:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 172baf42-eff4-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <74c9e6a8-9094-4646-d06f-cfe0a427bb37@suse.com>
Date: Thu, 11 May 2023 14:05:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH 1/7] x86: don't allow Dom0 access to port CF9
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
In-Reply-To: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0025.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::23) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7072:EE_
X-MS-Office365-Filtering-Correlation-Id: f45392f6-06ff-48dd-91b6-08db5217fa5c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f45392f6-06ff-48dd-91b6-08db5217fa5c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 12:05:14.3150
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7072

Writes to this port can initiate a machine reset, which we don't want
to permit Dom0 to invoke that way.

While there, insert blank lines and convert the sibling PCI config space
port numbers to upper case, matching the style used earlier in the
function.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -503,8 +503,13 @@ int __init dom0_setup_permissions(struct
     /* ACPI PM Timer. */
     if ( pmtmr_ioport )
         rc |= ioports_deny_access(d, pmtmr_ioport, pmtmr_ioport + 3);
-    /* PCI configuration space (NB. 0xcf8 has special treatment). */
-    rc |= ioports_deny_access(d, 0xcfc, 0xcff);
+
+    /* Reset control. */
+    rc |= ioports_deny_access(d, 0xCF9, 0xCF9);
+
+    /* PCI configuration space (NB. 0xCF8 has special treatment). */
+    rc |= ioports_deny_access(d, 0xCFC, 0xCFF);
+
 #ifdef CONFIG_HVM
     if ( is_hvm_domain(d) )
     {
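
For context, port CF9 is conventionally the PIIX/ICH-style Reset Control
Register. The bit layout sketched below follows that convention; it is
chipset-specific and given here as an assumption for illustration, not as
Xen code:

```c
#include <stdint.h>

/* Conventional PIIX/ICH Reset Control Register (I/O port 0xCF9) bits.
 * Chipset-specific; illustrative, not authoritative. */
#define CF9_SYS_RST  (1u << 1)  /* 0 = soft reset, 1 = hard reset */
#define CF9_RST_CPU  (1u << 2)  /* 0 -> 1 transition initiates the reset */
#define CF9_FULL_RST (1u << 3)  /* additionally cycle power planes */

/* Value typically written to request a hard reset: select the reset
 * type, then pulse RST_CPU (here both bits in one write). */
static inline uint8_t cf9_hard_reset_value(void)
{
    return CF9_SYS_RST | CF9_RST_CPU;
}
```

A single outb() of such a value to 0xCF9 from a sufficiently privileged
guest would reboot the host, which is what the denial above guards
against.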



From xen-devel-bounces@lists.xenproject.org Thu May 11 12:05:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:05:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533316.829861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px53Q-0008Bl-K9; Thu, 11 May 2023 12:05:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533316.829861; Thu, 11 May 2023 12:05:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px53Q-0008Be-HU; Thu, 11 May 2023 12:05:52 +0000
Received: by outflank-mailman (input) for mailman id 533316;
 Thu, 11 May 2023 12:05:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px53P-0008BT-Cf
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:05:51 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2ba90d4c-eff4-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 14:05:50 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7072.eurprd04.prod.outlook.com (2603:10a6:800:12c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 12:05:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 12:05:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ba90d4c-eff4-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ba1de950-185e-8d8e-e313-aef54b18a097@suse.com>
Date: Thu, 11 May 2023 14:05:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH 2/7] x86: don't allow Dom0 access to port 92
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
In-Reply-To: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0023.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7072:EE_
X-MS-Office365-Filtering-Correlation-Id: 33aa4a28-5125-4ced-661a-08db52180e4a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 33aa4a28-5125-4ced-661a-08db52180e4a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 12:05:47.6798
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7072

Somewhat like port CF9, this port may have a bit controlling the CPU's
INIT# signal, and it may also have a bit involved in driving A20M#.
Just as with CF9, we don't want to allow Dom0 to drive either of these.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -500,6 +500,10 @@ int __init dom0_setup_permissions(struct
     rc |= ioports_deny_access(d, 0x40, 0x43);
     /* PIT Channel 2 / PC Speaker Control. */
     rc |= ioports_deny_access(d, 0x61, 0x61);
+
+    /* INIT# and alternative A20M# control. */
+    rc |= ioports_deny_access(d, 0x92, 0x92);
+
     /* ACPI PM Timer. */
     if ( pmtmr_ioport )
         rc |= ioports_deny_access(d, pmtmr_ioport, pmtmr_ioport + 3);
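
For context, port 92 is System Control Port A in the common
PS/2-compatible layout. The bits sketched below are an assumption about
that layout for illustration, not Xen code:

```c
#include <stdint.h>

/* System Control Port A (I/O port 0x92), PS/2-compatible layout.
 * Illustrative only; some chipsets implement further bits. */
#define PORT92_FAST_RESET (1u << 0)  /* setting it pulses INIT# */
#define PORT92_A20_GATE   (1u << 1)  /* 1 = A20 enabled (A20M# deasserted) */

/* Compute a read-modify-write value that enables A20 without
 * accidentally asserting the fast-reset bit. */
static inline uint8_t port92_enable_a20(uint8_t current)
{
    return (uint8_t)((current | PORT92_A20_GATE) & ~PORT92_FAST_RESET);
}
```

The proximity of the two bits is exactly why a careless write to this
port can reset the CPU rather than just toggle the A20 gate.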



From xen-devel-bounces@lists.xenproject.org Thu May 11 12:06:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:06:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533321.829871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px544-0000PG-2P; Thu, 11 May 2023 12:06:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533321.829871; Thu, 11 May 2023 12:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px543-0000P9-VC; Thu, 11 May 2023 12:06:31 +0000
Received: by outflank-mailman (input) for mailman id 533321;
 Thu, 11 May 2023 12:06:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px542-0000KI-C4
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:06:30 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 426e7a90-eff4-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 14:06:28 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7072.eurprd04.prod.outlook.com (2603:10a6:800:12c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 12:06:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 12:06:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 426e7a90-eff4-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f0c67b05-c4d8-c588-d2de-e98dc819491c@suse.com>
Date: Thu, 11 May 2023 14:06:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH 3/7] x86/PVH: deny Dom0 access to the ISA DMA controller
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
In-Reply-To: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0018.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::23) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7072:EE_
X-MS-Office365-Filtering-Correlation-Id: de9a531f-cd8d-43a3-9e1f-08db521825d7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	YBmn72Z+88o0bYT92PouPSfxs2iuuPRfekHVGWd7ri6Eg7N/7RfG5m5/Jy01klu+sFGO0uc24lBNfD+ID2kKQJnfgcEYIiDZJ/RCsHfMubQBm//tiRGbSpKu+tPkkBkJMQ7yQeYQrEVkHZapWB5IV4lvHF9UtoJVFsFgOu5Uw88w6qLCobOu/03LAQbmhhz9E0UeN7rZ085tMJUT+rlRWJipUvn1RJDVUC+X0QgBaXqMJBXYuDENea26tkOZBPiN369t0GlFDPnZB2EtT0Y8+6utLlSAGpAV43yvrKaNpNxJCX/eZf2e/yzy8fVNilpPFLkhMYHIZO7bj0k+WJYzdZFHAAOzADsKwJqU1eos/SkYx4keRyjt3uI8UIocJpbflOJpTLedKvfgPpdLNdrTOLJCNRd/kZxdp43KpFvHDwy7PDt4JhplzfLZSto2JYjwMvgSZNSQkH1Crpe1MEHd3KQMMEC9A+Zsumeyegc44n6LF3K0yCCJKhXgBiu4RvOyorhDerB2SB/KtglrlZaJrqIZz2fTFEfBhSH6zPSWFUou8uo8KRhPRHly5yITPtNYEjvoXLO4TxqvBLKoelZYV2cwz3eAI7v/LEbBOxrrBSJdur1JKPVx+Cmc0EWY5xFjRLqoJjxs/GOljdIGOIW5m7gYBPeN00vfoxOrsV9+5sc=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(366004)(39860400002)(136003)(396003)(346002)(451199021)(31686004)(36756003)(2906002)(41300700001)(86362001)(38100700002)(8676002)(31696002)(5660300002)(8936002)(4326008)(316002)(83380400001)(478600001)(66946007)(6506007)(6916009)(186003)(6486002)(66476007)(66556008)(26005)(54906003)(6512007)(2616005)(43740500002)(45980500001)(309714004);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UWl2RXdYZ2o5R1RFcnNmQnh3Z0pKaFNvc2ovQWRwQlI1dGVvVTliZzlpMGdm?=
 =?utf-8?B?K3R0aDErbTZlN0VEazBPS21Kam1WQlFPNlZUK3hrWENqdWRWRC9ySUIyRU9K?=
 =?utf-8?B?MEs3c3RjZC9PNWZUL0Y0UGJaSW0vZ2s0V1BDTXJ5T2ZOTDF2M28vZ2xIOUlx?=
 =?utf-8?B?K2RMNkcxckZoS252NlBRYUJrc2xTVURSTHY2QVhPLzd4OWhCekVvbE5NUmhu?=
 =?utf-8?B?bGZQTnZ3VFFvK1R1MlYzVU1yNFNGdFZWRGdIQzN2cnJjVEljTS82dnBVdUdH?=
 =?utf-8?B?NmZJOWFyaEk4QkdKTHgrS2dNRGxXYWlmR3Z5T0dYME1YTlRCREh1eUFZbkRn?=
 =?utf-8?B?SjJ4SUo1ZzBwZkZSamVnbmQ0ZzJyTldGTVlkcnB4eFpScDlLQUZTU0RvbHdQ?=
 =?utf-8?B?LzFTZW1TSDJmeEE1Uk5hRHVYQW5xbjBOR2E3U2JFYlpsSXBYQ1JrUnYxVjNI?=
 =?utf-8?B?bGxaNmJQSVlYZ3VhL2NPbm9zNnIwT0ErQ2NkUHhiZ2Q3RVZxYXVMbVVjamxy?=
 =?utf-8?B?OGhWVjZoL2xvd2N1QkNEV0Q3eEgzVUhlY3pWVi9HU1VPTmMzN29jNXFhUTRl?=
 =?utf-8?B?bVBZTTR5bTRpTjFNQjhPVWpWS2x0Zm1SbVhFWnNMbjVZUlZGOHU2ZURUNllY?=
 =?utf-8?B?Q1FNelBlZGhOZjdSVTUwSW54T0hPR2FBeHBkQjZMc0hpT3hzaWtPcXFUYVhJ?=
 =?utf-8?B?SEVIbUJreGZ6cjd4NFdHTFgwdEt6UDlDWTJGMkdMMWMybXdxMCt4SEt2REdn?=
 =?utf-8?B?QWNNLzl4MnVIWFRld1JDZzhpaDZBaVZqYjlQSnJBOENaaTEvUUVNcHlmbGNW?=
 =?utf-8?B?dVc1eEVJOEdzaTNnbHg3VlIzYWNjWjhYMTBDWVk1SXdLaTdnY0JiRmtxY0pw?=
 =?utf-8?B?d1BaTHA1Ym1HTGZ4TzdZRWozbFdxdXFOTExkeTM4RjkrbmZaRkRZSG16OEFx?=
 =?utf-8?B?Tm5zZit5akY2TEhxTDJrQXBYc2MrbEE0eGFyRjlLUDROWWRHRFBkR0FmVFNy?=
 =?utf-8?B?QW5DMzNVZEVNZzA1bFk5bFJpMUlVTnFoNnRRc3hWWk5VbWdiMzZTNUIxQ0wv?=
 =?utf-8?B?M1hFVVgzWWt6T281MEZ6T1ROY1BLNTQveHBBczI3SjZoakZDKzZNUEJrUmtQ?=
 =?utf-8?B?NzhkRUVTeDN2RExOcGdaQlduUDZpZlhDamRTUENZQjBhZVhvV1g3NFhqNzh1?=
 =?utf-8?B?ZlVZbFFHanpjM09DODVNdU9YMU1EVE1QWUZqVjVaTmhDOExzb2U5ZEhDbEV4?=
 =?utf-8?B?cFI5bVVSeEZPUWsyM3B2Y1ErWHhXV3hPbnJjbFZrei9ZSzd4K1ZWVkdhVklC?=
 =?utf-8?B?c2JVT2pLc0VNWndac3R3aTc4SjdzL0Y4V0Z1ZVNwQ1NsUW5QMCtaem42UWkv?=
 =?utf-8?B?cmc2Z2VCcVBzNm5rVnYzajJSWWlUM2JLWDZzZDd5K3MzMS9aaElvRm1PQ3F2?=
 =?utf-8?B?WUxPeXpsN0dzOFBnRVRNZ2hydTJWOWd1Y3Nabzc5dDBQV2VXVHFsTFVUUE9D?=
 =?utf-8?B?aU9KRm94Y0huUVgwckh0a3IreTNrbXhoSmdOWmJuVDM5ZjR1M3lHWVhXR2Fs?=
 =?utf-8?B?SDVoQzdkaW9IQWJ2MTJESCszTTd1WkZhYkVMVFduNUIxaGN1YnEyT0VqbnRI?=
 =?utf-8?B?Q00rb1BObHE3cHZLWHRyZUlQL3ZCNFdnZnFYY1VET1FCT3RhZ1lVRFNrdnNM?=
 =?utf-8?B?aXhDb3doeEdtb2ZHZWUwMlp5NkNlTDhXUVNtQy9QS0V0Q3dnZFVTU21zZUFz?=
 =?utf-8?B?NlpIS2Fkdm5CVFZKbERJVWsyc3BvMktuQ1hURGpJNVkzdGNlU1BZSDAvQ0Q4?=
 =?utf-8?B?SGl0TURmVklwS0YvaVNXdWorVXdXRnFHMjJJTTNYVGpMdlp1UEp2TXcxdENI?=
 =?utf-8?B?cWhoZzJKSGZtUVhpV2czcFNuVU8xVWJERlFrQjVxclNGZGhHOVRnZWFjeWJ1?=
 =?utf-8?B?L0lHZ3FEcHNLcS9ySmZzMGtkUGtLU2VKMmMzTWNsRjAxYmtvT1RTL1VTdG0r?=
 =?utf-8?B?NEhOdGVGMXRndk9vMTAzWWs1QUhXWDRMMDY2R1A2WFZkRE9FYTVETC9tcGxr?=
 =?utf-8?B?WVo4eGtPUjNkMW45N3M0RzB2anJIOWVjMmFmY0FTNFE2NDhLeGtsWnZIR1RK?=
 =?utf-8?Q?KlfjQpz8fyUMeIvdBDASBTvp3?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: de9a531f-cd8d-43a3-9e1f-08db521825d7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 12:06:27.3118
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dj2AUvX/4qBhIP5qtpanOsatBrePftGCSjjHzxVUY2MCzzmeW0xrvjAo5D5RZ9pXN6C93DBgWDoEYRm1Q6xpkA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7072

Unlike PV, a PVH Dom0 has no sensible way of driving the address and
page registers correctly, as it would need to translate guest physical
addresses to host ones. Rather than allowing data corruption to occur
from e.g. the use of a legacy floppy drive, disallow access altogether.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
The possible aliases of the page registers (90-9F, except 92) aren't
covered. Unlike the possible alias range 10-1F, which I think is okay
to include here blindly, I guess we'd better probe for aliasing of these
if we wanted to deny access there as well. This is first and foremost
because the range has had wider use on PS/2 systems, and who knows what
has been re-used in that range beyond port 92.

--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -517,6 +517,13 @@ int __init dom0_setup_permissions(struct
 #ifdef CONFIG_HVM
     if ( is_hvm_domain(d) )
     {
+        /* ISA DMA controller, channels 0-3 (incl possible aliases). */
+        rc |= ioports_deny_access(d, 0x00, 0x1F);
+        /* ISA DMA controller, page registers (incl various reserved ones). */
+        rc |= ioports_deny_access(d, 0x80 + !!hvm_port80_allowed, 0x8F);
+        /* ISA DMA controller, channels 4-7 (incl usual aliases). */
+        rc |= ioports_deny_access(d, 0xC0, 0xDF);
+
         /* HVM debug console IO port. */
         rc |= ioports_deny_access(d, XEN_HVM_DEBUGCONS_IOPORT,
                                   XEN_HVM_DEBUGCONS_IOPORT);



From xen-devel-bounces@lists.xenproject.org Thu May 11 12:07:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:07:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533323.829881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px54P-0000s0-A2; Thu, 11 May 2023 12:06:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533323.829881; Thu, 11 May 2023 12:06:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px54P-0000rt-76; Thu, 11 May 2023 12:06:53 +0000
Received: by outflank-mailman (input) for mailman id 533323;
 Thu, 11 May 2023 12:06:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px54N-0000KI-Nq
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:06:51 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0602.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::602])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4efc1d0e-eff4-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 14:06:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7072.eurprd04.prod.outlook.com (2603:10a6:800:12c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 12:06:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 12:06:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4efc1d0e-eff4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i5Wgd725Hb5qq7bRrYHcOI4VKkdQuVNrO+SV8QoMAL50eW87DCuI/gtADXKGo0bsTw+LFtDeAJixHJhdxmBjNN3mMPZ83McpxKmt1orn9F8TmSDZP6SuZqMIsIom+TfN9uLMtI2ernXBFv7b4jeIMdFAZwK3yYzDYA5CwmvCJ9mc2NjoiRD16Q70vUSyxvI8H8yS27/GjqKd4FncQN0upa/6FmIe3Y+kJ1h1iqbHCwC8HbTeFZ9QUh3KH2ZAVdl1bQFtSAiVX61BzYQv7rvyP6n/DSbItfsIhqjqxStjjB0SvUBsvhcqnm2uPYdSghO+n9dKey986et2h3uCn0nnSQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rthJQS7crBKjl3ZFrOs6SFetUwu0mLQJ0JboEF0Txzg=;
 b=LKxZ3wFOrPdP/8LXdsJSK9EtYo9p3h03s44LSJFMXoSInx+Ts5c1pyR7eClW3+DN379bJpS5fZuxyZMZNb6GTcZwJ85Rq9P0TrB8+hlcsaWVLLBoBm0Rw2ypNqmxb3g4P3ipJDVCK8yeXNvl2OWy8aFhvCSZu8m7S6q0lCO8cKv+YxwRMpUVtKbkIlQI7Fw/r7c//7QZlEjGaR06keefI2OX+EfONJ8hzLFmt79Z0TedxSiQDi+I2rx5ccTm6KwsF3xvcEwqOCbH5WKVA8lbC7r+RnUm0iOn/BDERgROHZJKLjPD23yfmsvhb2UUQq8NPsqaNJ5I/hM6b+s8WdZNNQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rthJQS7crBKjl3ZFrOs6SFetUwu0mLQJ0JboEF0Txzg=;
 b=uMue81MVy677rVdiSdhiIU7aDIlX64wsbjVLgVmpeafo8DYKGe1u80XCVnmKe9Cqtb0fx49GcKTGFJ2Z3voVzfheyqklq5rJ/ZRJ8AY58FSCFgHlDx5cdyDQi8ruEbz2tB9u4GnK/3tPEXKmtrMHCd8MBRnHvGZlr4YcF26WUvT3yyP3StLGAHyIXbRHMRAnQnIL4vU7s+H5avvfNgUE4iOYPnEUBBBXPWGDT8jPpf7YJZ0HKhpC8nPYeHVVlvT/2jsXydUwJOvJw+HBz43I0hEjMa5sj/WImEvHgxNwPGjVFp5+DkVwdBXEf9nHivX0dDOdmGJlZ3/Th5uVbVUnsg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <27dd8f40-1ea6-1e7e-49c2-31936a17e9d7@suse.com>
Date: Thu, 11 May 2023 14:06:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH 4/7] x86: detect PIC aliasing on ports other than 0x[2A][01]
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
In-Reply-To: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0106.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7072:EE_
X-MS-Office365-Filtering-Correlation-Id: 4caf44d5-2e29-4e0e-fb79-08db52183275
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	s8jy3byeHlo2Xcd4ocY4hZvB2ZmHVmCdT43DG/AWaSdOYNOkpPj5cAWdlDE7c3AZdSzWBGwGers9sxh7NKOXiTiXSM8SeneGw8ik7DdTVOC8fUIXnghWqqPAfV3e6zPI7Site3o5PX/AvkWCPVTWOtGon8VgGmNPfrrLORn/HGl0r9wstinfa+RqrMC9nZzHY8240k2H0jKhHwghjNjHObi1zYUKwfxsdKBTUHCqhWUEyWknZ3XdcO8qIto2InEjSBpWYeUz18EwAQA310JpzU2TsMR1Xz7yi9JitTgQ6/orp8UsUYWm3uihbTxA2okRDlkb+z6bX/P14pifHDw+5fjWReKkf5tVE2ybiGMiY2Dx7HUpgUMolGeBQag3re/qY/hCPPu/OjYzUhkyoyJGs471LrtKVVjeuPQTG2PP2rlgAjwncB3Rr3dtw/psajLdPjkRrtLWhXn3RifBLsG6LW3AcUo71vZ39Th/ySb+Ds3r5ya9Pr732hLfINPH8Ljk0Fe2tvJOW+2CmlogyErQUvXz/kcUAgdqeeRSr/2A3n2vEqbBSF2Zugfca0mhD56xlgiWfDErgjFaWkD66g99bUl+0XGAvyE+96CDeWJwBFp2mSLOEgGTAGer1M7qr3Q5UrxQgzTMkzCY1DObU5dNQg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(366004)(39860400002)(136003)(396003)(346002)(451199021)(31686004)(36756003)(2906002)(41300700001)(86362001)(38100700002)(8676002)(31696002)(5660300002)(8936002)(4326008)(316002)(83380400001)(478600001)(66946007)(6506007)(6916009)(186003)(6486002)(66476007)(66556008)(26005)(54906003)(6512007)(2616005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VVZDWTk0eFBzM05nM28relRrS1lJSlBkM0ZWOERibi9FbnJ3T3lMT1hnYldC?=
 =?utf-8?B?dlZNNHZ5cEV4Q2RwRlhOOWxTVDkza2lFbUFjTTJ3V2JZYzR6c2J5Q3YwWTM2?=
 =?utf-8?B?Zk1pWjZ2SVV6N2VYSWxiM09KVUJ4WUpaaXVMb1dKeSsvTm5wTGxFMUpTREs2?=
 =?utf-8?B?SW9Tc1ZJOG9pcGd0V3NBYmxOdWFBRmZCSnpOYjhXQVJodHoxVTJnRkRGVTN4?=
 =?utf-8?B?NmNBdGZLVDMrSThGZnAvRGJkU3Bkd2RiYjBKc3VLeVB5SDRxVE84aUIzSXFw?=
 =?utf-8?B?QWtGQXVMV2xURVdJRVlBRzVPZk1xTW41R1BZRFZHY1ppZGtTbVBCa0Y5MHpT?=
 =?utf-8?B?Q0F4SExXOXNhL0plL0JxYkh3WHE0bTBKZTE2ZHR1ay82bG9RQWdreGwzdmE0?=
 =?utf-8?B?Q0pPaGhNZkhFeVpYcnZzZndSUUZHZ0o0Rnd6Q1NzamJWN2hrcDZ5Vjh2ZmQv?=
 =?utf-8?B?ZitsanNCZjhLZHBJWEVjaWtTZnJxTkFGUGw2QVlEQTZQbDUwamRucExxK0dy?=
 =?utf-8?B?SUxPajJYQ21DQ2hERER0OUh2VXQ5S3YvQzZjU2hsNldoTHg4Q0wxdDB3RXFG?=
 =?utf-8?B?MWdpUk5FUmxITVZNdFJRY3NkUTRMREpmR3lDRVN3ZHhzekZDdFhLV1FIMVZh?=
 =?utf-8?B?SUVyTWc0bnMycWhtQzhHSTZqSzIrRDU4NEtRb0NtU29FcStXN0JrR05HRXEy?=
 =?utf-8?B?YTJ4cnpLamxZekYxLzloOFA2Ky9GcmZPQm9Wb3hVOGFSb0RRQXRYL1J6WWln?=
 =?utf-8?B?b0xyanhRdUxmY0VKbEh6SDhBazBlRGpqVFdVM2YyQThyQU5ocnI4TW5wSWR1?=
 =?utf-8?B?cGxaKytZdENVYlh0ZlpqeEd5YXpRbmsxSzdGYnBJaTQzWmx3TXBtSTJpWUli?=
 =?utf-8?B?K2dVUWtaeEM0V1JzMTFxbmowSUE5ZmZUNW9pRnU0TzFSSlhDYnVDUEtRRUdV?=
 =?utf-8?B?TFIwZzN3UDVLYWU3cGJ6Q1REWExZKzJsUjV0Z0VFRFhqeHliLzVHZDVFM3NH?=
 =?utf-8?B?UFY3eDVOYXVpRWJYcUk5KzZ5bDF6dlYydExrRzRqM25OTGhzTWtBakVlUElZ?=
 =?utf-8?B?ZXVuU0tmejdlOWRYRGwzcURjK3Z1dFlJK0t2aG5SeFBsOU03YUV6MTEwbmZ3?=
 =?utf-8?B?Q3FiTkFPMXlZSkl5cW1sQVREWkVxWEhKNXhEWWlHMERuaG1qS1M4VTlpeDJj?=
 =?utf-8?B?Tno4NTQxSGFxTjNzdGIya2kxMEdtS1RtclhuRUdyeWVGc1k5Y0VVTk5YWDdl?=
 =?utf-8?B?VTdBSE84SDN4M1hJc0dJRkJUWDFTSXlnU0ZaZEpJK0Q0bHlxMWFZUFlSNjI4?=
 =?utf-8?B?L25QTHNVcStaVmlLVFc2N1NXbWZ4ZVlkREVGYUE4dnN6a2NhODY3MHhTelUv?=
 =?utf-8?B?eEJCcjJjZksxZldLNkVXc00xbXkrRHhTZ2Raa1hxQ0ExNTBzaXR3cHNhZ0pW?=
 =?utf-8?B?VzNNVU1MTXludUEzeW9nRmFzUEpNTk5vWEdUcFprMWV5SnRIUVBYeHI5cWpv?=
 =?utf-8?B?NTBpRFdJSGV0aUNSUDY0Mk1GRFR2YjB6VmVQYkVQWXZpdFFYblB2QVEyY25m?=
 =?utf-8?B?NVVJMmFoK0hkQVhvZk9JOUJ0a2NaekxINHVOc2FJZkZsRW1uV2dZcUVRVUEy?=
 =?utf-8?B?ay95VG42eVhWREVOUVNHOUVyS3ZtSjZZVWRLYngwTDBYVGtZbzJPYWsyOFRw?=
 =?utf-8?B?eG1wWExjNzJaWWpFQmg0c29ueWF2czRWSTZuU1dOcmF2SXQrQVIrMi9BY2FW?=
 =?utf-8?B?M1NXOEttRDM0ZHZNZFhUSHhCSTdMQmxid0J6TmdNNFpTY3piTWdYTCtET1BL?=
 =?utf-8?B?S2NWZzhseDFmdXVvY1hwcE5Gek1YclhPbTNzTFQyamN1bmpmUWhNTFlLV0Vt?=
 =?utf-8?B?a1M3b09pZ1U3WHVwRHJ5MDdROUlCN1JaV1gyUDg4eVduMGtiM0QvVTJSU2Rx?=
 =?utf-8?B?bm4wODBxaWtzT0JINlhoM1VuN1Y3TnV0bHBCQnlvZHRSZGN1Qnk4TnFab0RX?=
 =?utf-8?B?WEpnZFVQYTFxNUFxM0JmVWc3TVIvYksrS1Z5YlF0YWNhcjB6R09WelJFWW1Q?=
 =?utf-8?B?cHB5akNuc0VVdTc0bDVRa0ZOVFA1eWtMeThydTBmWWNBU1NoWUJZbWVZRVRz?=
 =?utf-8?Q?Ygf5yU/403EeUFysAnRJYkmg0?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4caf44d5-2e29-4e0e-fb79-08db52183275
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 12:06:48.3894
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: W+MUsw/B+bw9jgOXSYSjSUMmhUl/nuWAbauq2C/lHVjDCX5r89UnHLeOBD0lKdxjZzZgk3Fbo0r/9GX7gBf3tg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7072

... in order to also deny Dom0 access through the alias ports. Without
this we would only be giving the impression of denying access to both
PICs. Unlike for CMOS/RTC, do the detection very early, to avoid
disturbing normal operation later on.

Like for CMOS/RTC, a fundamental assumption of the probing is that reads
from a probed alias port won't have side effects in case it does not
actually alias the respective PIC's port.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -479,7 +479,7 @@ static void __init process_dom0_ioports_
 int __init dom0_setup_permissions(struct domain *d)
 {
     unsigned long mfn;
-    unsigned int i;
+    unsigned int i, offs;
     int rc;
 
     if ( pv_shim )
@@ -492,10 +492,17 @@ int __init dom0_setup_permissions(struct
 
     /* Modify I/O port access permissions. */
 
-    /* Master Interrupt Controller (PIC). */
-    rc |= ioports_deny_access(d, 0x20, 0x21);
-    /* Slave Interrupt Controller (PIC). */
-    rc |= ioports_deny_access(d, 0xA0, 0xA1);
+    for ( offs = 0, i = pic_alias_mask & -pic_alias_mask ?: 2;
+          offs <= pic_alias_mask; offs += i )
+    {
+        if ( offs & ~pic_alias_mask )
+            continue;
+        /* Master Interrupt Controller (PIC). */
+        rc |= ioports_deny_access(d, 0x20 + offs, 0x21 + offs);
+        /* Slave Interrupt Controller (PIC). */
+        rc |= ioports_deny_access(d, 0xA0 + offs, 0xA1 + offs);
+    }
+
     /* Interval Timer (PIT). */
     rc |= ioports_deny_access(d, 0x40, 0x43);
     /* PIT Channel 2 / PC Speaker Control. */
--- a/xen/arch/x86/i8259.c
+++ b/xen/arch/x86/i8259.c
@@ -19,6 +19,7 @@
 #include <xen/delay.h>
 #include <asm/apic.h>
 #include <asm/asm_defns.h>
+#include <asm/setup.h>
 #include <io_ports.h>
 #include <irq_vectors.h>
 
@@ -332,6 +333,55 @@ void __init make_8259A_irq(unsigned int
     irq_to_desc(irq)->handler = &i8259A_irq_type;
 }
 
+unsigned int __initdata pic_alias_mask;
+
+static void __init probe_pic_alias(void)
+{
+    unsigned int mask = 0x1e;
+    uint8_t val = 0;
+
+    /*
+     * The only properly r/w register is OCW1.  While keeping the master
+     * fully masked (thus also masking anything coming through the slave),
+     * write all possible 256 values to the slave's base port, and check
+     * whether the same value can then be read back through any of the
+     * possible alias ports.  Probing just the slave of course builds on the
+     * assumption that aliasing is identical for master and slave.
+     */
+
+    outb(0xff, 0x21); /* Fully mask master. */
+
+    do {
+        unsigned int offs;
+
+        outb(val, 0xa1);
+
+        /* Try to make sure we're actually having a PIC here. */
+        if ( inb(0xa1) != val )
+        {
+            mask = 0;
+            break;
+        }
+
+        for ( offs = mask & -mask; offs <= mask; offs <<= 1 )
+        {
+            if ( !(mask & offs) )
+                continue;
+            if ( inb(0xa1 + offs) != val )
+                mask &= ~offs;
+        }
+    } while ( mask && (val += 0x0d) );  /* Arbitrary uneven number. */
+
+    outb(cached_A1, 0xa1); /* Restore slave IRQ mask. */
+    outb(cached_21, 0x21); /* Restore master IRQ mask. */
+
+    if ( mask )
+    {
+        dprintk(XENLOG_INFO, "PIC aliasing mask: %02x\n", mask);
+        pic_alias_mask = mask;
+    }
+}
+
 static struct irqaction __read_mostly cascade = { no_action, "cascade", NULL};
 
 void __init init_IRQ(void)
@@ -342,6 +392,8 @@ void __init init_IRQ(void)
 
     init_8259A(0);
 
+    probe_pic_alias();
+
     for (irq = 0; platform_legacy_irq(irq); irq++) {
         struct irq_desc *desc = irq_to_desc(irq);
         
--- a/xen/arch/x86/include/asm/setup.h
+++ b/xen/arch/x86/include/asm/setup.h
@@ -52,6 +52,8 @@ extern uint8_t kbd_shift_flags;
 extern unsigned long highmem_start;
 #endif
 
+extern unsigned int pic_alias_mask;
+
 extern int8_t opt_smt;
 
 #ifdef CONFIG_SHADOW_PAGING



From xen-devel-bounces@lists.xenproject.org Thu May 11 12:07:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:07:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533329.829891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px54j-0001Nz-IY; Thu, 11 May 2023 12:07:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533329.829891; Thu, 11 May 2023 12:07:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px54j-0001Nq-Er; Thu, 11 May 2023 12:07:13 +0000
Received: by outflank-mailman (input) for mailman id 533329;
 Thu, 11 May 2023 12:07:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1px54i-0001L5-F6
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:07:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px54h-0001z7-Qn; Thu, 11 May 2023 12:07:11 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.27.46]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px54h-0000jY-KA; Thu, 11 May 2023 12:07:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=e9Tdnj091amBSRxdNv88yRqWE4IatFcZ8M1XocevZww=; b=NpNAyombeEwG5Tg8LlELK8XFDR
	GI8UFxjgM8YwxGeWruvR3vJ+bZz+8EIcXP9ORQExmG9I7/SJY/ZJ/C5mJrK5pKi9jxbImReRyFOBL
	1FBhVCwXGeKQZ8c4xAHxyKU1o1sOGMGAbZRsWufTvPCXf4VyAnTYRMz82vRplF33YA2w=;
Message-ID: <2aaf1cf4-baca-0974-ac0c-80328037ce52@xen.org>
Date: Thu, 11 May 2023 13:07:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v5 05/14] tools/xenstore: use accounting buffering for
 node accounting
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-6-jgross@suse.com>
 <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
 <9745474f-db84-c8f3-3662-95728d4d5bd3@suse.com>
 <91380959-b63d-7d4f-0920-7d87dc0fc19d@xen.org>
 <de77cc78-e07a-934f-e241-15fe851706df@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <de77cc78-e07a-934f-e241-15fe851706df@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 11/05/2023 06:25, Juergen Gross wrote:
> On 10.05.23 23:31, Julien Grall wrote:
>> On 10/05/2023 13:54, Juergen Gross wrote:
>>> On 09.05.23 20:46, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> On 08/05/2023 12:47, Juergen Gross wrote:
>>>>> Add the node accounting to the accounting information buffering in
>>>>> order to avoid having to undo it in case of failure.
>>>>>
>>>>> This requires to call domain_nbentry_dec() before any changes to the
>>>>> data base, as it can return an error now.
>>>>>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>> ---
>>>>> V5:
>>>>> - add error handling after domain_nbentry_dec() calls (Julien Grall)
>>>>> ---
>>>>>   tools/xenstore/xenstored_core.c   | 29 +++++++----------------------
>>>>>   tools/xenstore/xenstored_domain.h |  4 ++--
>>>>>   2 files changed, 9 insertions(+), 24 deletions(-)
>>>>>
>>>>> diff --git a/tools/xenstore/xenstored_core.c 
>>>>> b/tools/xenstore/xenstored_core.c
>>>>> index 8392bdec9b..22da434e2a 100644
>>>>> --- a/tools/xenstore/xenstored_core.c
>>>>> +++ b/tools/xenstore/xenstored_core.c
>>>>> @@ -1454,7 +1454,6 @@ static void destroy_node_rm(struct connection 
>>>>> *conn, struct node *node)
>>>>>   static int destroy_node(struct connection *conn, struct node *node)
>>>>>   {
>>>>>       destroy_node_rm(conn, node);
>>>>> -    domain_nbentry_dec(conn, get_node_owner(node));
>>>>>       /*
>>>>>        * It is not possible to easily revert the changes in a 
>>>>> transaction.
>>>>> @@ -1645,6 +1644,9 @@ static int delnode_sub(const void *ctx, 
>>>>> struct connection *conn,
>>>>>       if (ret > 0)
>>>>>           return WALK_TREE_SUCCESS_STOP;
>>>>> +    if (domain_nbentry_dec(conn, get_node_owner(node)))
>>>>> +        return WALK_TREE_ERROR_STOP;
>>>>
>>>> I think there is a potential issue with the buffering here. In case 
>>>> of failure, the node could have been removed, but the quota would 
>>>> not be properly accounted.
>>>
>>> You mean the case where another node has been deleted and due to 
>>> accounting
>>> buffering the corrected accounting data wouldn't be committed?
>>>
>>> This is no problem, as an error returned by delnode_sub() will result in
>>> corrupt() being called, which in turn will correct the accounting data.
>>
>> To me corrupt() is a big hammer and it feels wrong to call it when I 
>> think we have easier/faster way to deal with the issue. Could we 
>> instead call acc_commit() before returning?
> 
> You are aware that this is a very problematic situation we are in?

It is not very clear from the code. And that's why comments are always 
useful to clarify why corrupt() is the right call.

> 
> We couldn't allocate a small amount of memory (around 64 bytes)! 

So long as this is the only reason, then...

> Xenstored
> will probably die within milliseconds. Using the big hammer in such a
> situation is fine IMO. It will maybe result in solving the problem by
> freeing of memory (quite unlikely, though), but it won't leave xenstored
> in a worse state than with your suggestion.

... this might be OK. But in the past, we had a place where corrupt() 
could be reliably triggered by a guest. If you think that's not 
possible, then it should be properly documented.

> 
> And calling acc_commit() here wouldn't really help, as accounting data
> couldn't be recorded, so there are missing updates anyway due to the failed
> call of domain_nbentry_dec().

We are removing the node after the accounting is updated. So if the 
accounting fails, then it should still be correct for anything that was 
removed before.

> 
>>>> Also, I think a comment would be warrant to explain why we are 
>>>> returning WALK_TREE_ERROR_STOP here when...
>>>>
>>>>> +
>>>>>       /* In case of error stop the walk. */
>>>>>       if (!ret && do_tdb_delete(conn, &key, &node->acc))
>>>>>           return WALK_TREE_SUCCESS_STOP;
>>>>
>>>> ... this is not the case when do_tdb_delete() fails for some reasons.
>>>
>>> The main idea was that the remove is working from the leafs towards 
>>> the root.
>>> In case one entry can't be removed, we should just stop.
>>>
>>> OTOH returning WALK_TREE_ERROR_STOP might be cleaner, as this would 
>>> make sure
>>> that accounting data is repaired afterwards. Even if do_tdb_delete() 
>>> can't
>>> really fail in normal configurations, as the only failure reasons are:
>>>
>>> - the node isn't found (quite unlikely, as we just read it before 
>>> entering
>>>    delnode_sub()), or
>>> - writing the updated data base failed (in normal configurations it 
>>> is in
>>>    already allocated memory, so no way to fail that)
>>>
>>> I think I'll switch to return WALK_TREE_ERROR_STOP here.
>>
>> See above for a different proposal.
> 
> Without deleting the node in the data base this would be another accounting
> data inconsistency, so calling corrupt() is the correct cleanup measure.

Hmmm... I read this as being a pre-existing bug rather than one 
introduced by this patch. IIUC, then it should be fixed in a separate 
commit.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 11 12:07:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:07:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533330.829901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px54o-0001h9-W9; Thu, 11 May 2023 12:07:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533330.829901; Thu, 11 May 2023 12:07:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px54o-0001h2-Sy; Thu, 11 May 2023 12:07:18 +0000
Received: by outflank-mailman (input) for mailman id 533330;
 Thu, 11 May 2023 12:07:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px54o-0000KI-3M
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:07:18 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0601.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e8ca344-eff4-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 14:07:16 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7072.eurprd04.prod.outlook.com (2603:10a6:800:12c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 12:07:14 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 12:07:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e8ca344-eff4-11ed-8611-37d641c3527e
Message-ID: <042f76dd-d189-c40a-baec-68ded32aa797@suse.com>
Date: Thu, 11 May 2023 14:07:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH 5/7] x86: detect PIT aliasing on ports other than 0x4[0-3]
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
In-Reply-To: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0221.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ac::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

... in order to also deny Dom0 access through the alias ports. Without
this we would only give the impression of denying access to the PIT.
Unlike for CMOS/RTC, do the detection early, to avoid disturbing normal
operation later on (even if typically we won't use much of the PIT).

As for CMOS/RTC, a fundamental assumption of the probing is that reads
from the probed alias port won't have side effects (beyond those that
PIT reads have anyway) in case it does not alias the PIT's ports.

As to the port 0x61 accesses: Unlike other accesses we do, this masks
off the top four bits (in addition to the bottom two), following Intel
chipset documentation saying that these (read-only) bits should only be
written with zero.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
If Xen were running on top of another instance of itself (in HVM mode,
not PVH, i.e. not as a shim), I'm afraid our vPIT logic would not allow
the "Try to further make sure ..." check to pass in the Xen running on
top: We don't respect the gate bit being clear when handling counter
reads. (There are more unhandled [and unmentioned as being so] aspects
of PIT behavior, though it's unclear to what extent addressing at least
some of them would be useful.)

--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -504,7 +504,11 @@ int __init dom0_setup_permissions(struct
     }
 
     /* Interval Timer (PIT). */
-    rc |= ioports_deny_access(d, 0x40, 0x43);
+    for ( offs = 0, i = pit_alias_mask & -pit_alias_mask ?: 4;
+          offs <= pit_alias_mask; offs += i )
+        if ( !(offs & ~pit_alias_mask) )
+            rc |= ioports_deny_access(d, 0x40 + offs, 0x43 + offs);
+
     /* PIT Channel 2 / PC Speaker Control. */
     rc |= ioports_deny_access(d, 0x61, 0x61);
 
--- a/xen/arch/x86/include/asm/setup.h
+++ b/xen/arch/x86/include/asm/setup.h
@@ -53,6 +53,7 @@ extern unsigned long highmem_start;
 #endif
 
 extern unsigned int pic_alias_mask;
+extern unsigned int pit_alias_mask;
 
 extern int8_t opt_smt;
 
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -425,6 +425,69 @@ static struct platform_timesource __init
     .resume = resume_pit,
 };
 
+unsigned int __initdata pit_alias_mask;
+
+static void __init probe_pit_alias(void)
+{
+    unsigned int mask = 0x1c;
+    uint8_t val = 0;
+
+    /*
+     * Use channel 2 in mode 0 for probing.  In this mode even a non-initial
+     * count is loaded independent of counting being / becoming enabled.  Thus
+     * we have a 16-bit value fully under our control, to write and then check
+     * whether we can also read it back unaltered.
+     */
+
+    /* Turn off speaker output and disable channel 2 counting. */
+    outb(inb(0x61) & 0x0c, 0x61);
+
+    outb((2 << 6) | (3 << 4) | (0 << 1), PIT_MODE); /* Mode 0, LSB/MSB. */
+
+    do {
+        uint8_t val2;
+        unsigned int offs;
+
+        outb(val, PIT_CH2);
+        outb(val ^ 0xff, PIT_CH2);
+
+        /* Wait for the Null Count bit to clear. */
+        do {
+            /* Latch status. */
+            outb((3 << 6) | (1 << 5) | (1 << 3), PIT_MODE);
+
+            /* Try to make sure we're actually having a PIT here. */
+            val2 = inb(PIT_CH2);
+            if ( (val2 & ~(3 << 6)) != ((3 << 4) | (0 << 1)) )
+                return;
+        } while ( val2 & (1 << 6) );
+
+        /*
+         * Try to further make sure we're actually having a PIT here.
+         *
+         * NB: Deliberately |, not ||, as we always want both reads.
+         */
+        val2 = inb(PIT_CH2);
+        if ( (val2 ^ val) | (inb(PIT_CH2) ^ val ^ 0xff) )
+            return;
+
+        for ( offs = mask & -mask; offs <= mask; offs <<= 1 )
+        {
+            if ( !(mask & offs) )
+                continue;
+            val2 = inb(PIT_CH2 + offs);
+            if ( (val2 ^ val) | (inb(PIT_CH2 + offs) ^ val ^ 0xff) )
+                mask &= ~offs;
+        }
+    } while ( mask && (val += 0x0b) );  /* Arbitrary odd number. */
+
+    if ( mask )
+    {
+        dprintk(XENLOG_INFO, "PIT aliasing mask: %02x\n", mask);
+        pit_alias_mask = mask;
+    }
+}
+
 /************************************************************
  * PLATFORM TIMER 2: HIGH PRECISION EVENT TIMER (HPET)
  */
@@ -2390,6 +2453,8 @@ void __init early_time_init(void)
     }
 
     preinit_pit();
+    probe_pit_alias();
+
     tmp = init_platform_timer();
     plt_tsc.frequency = tmp;
 



From xen-devel-bounces@lists.xenproject.org Thu May 11 12:08:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533340.829911 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px55j-0002ma-9S; Thu, 11 May 2023 12:08:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533340.829911; Thu, 11 May 2023 12:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px55j-0002mT-5g; Thu, 11 May 2023 12:08:15 +0000
Received: by outflank-mailman (input) for mailman id 533340;
 Thu, 11 May 2023 12:08:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px55h-0002N3-Df
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:08:13 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060b.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 80769162-eff4-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 14:08:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9500.eurprd04.prod.outlook.com (2603:10a6:10:361::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 12:08:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 12:08:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80769162-eff4-11ed-b229-6b7b168915f2
Message-ID: <118fa3e5-e1ac-ab3e-8b86-1ec751513434@suse.com>
Date: Thu, 11 May 2023 14:08:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH 7/7] x86: don't allow Dom0 access to ELCR ports
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
In-Reply-To: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0102.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

Much like the other PIC ports, Dom0 has no business touching these. Even
our own uses are somewhat questionable, as the corresponding IO-APIC
code in Linux is enclosed in a CONFIG_EISA conditional; I don't think
there are any x86-64 EISA systems.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: For Linux's (matching our own) construct_default_ioirq_mptable()
     we may need to permit read access, at least for PVH, if such
     default table construction is assumed to be sensible there in the
     first place (we assume ACPI and no PIC for PVH Dom0, after all).

RFC: Linux further has ACPI boot code accessing ELCR
     (acpi_pic_sci_set_trigger() and acpi_register_gsi_pic()), which we
     have no equivalent of.

Taken together, perhaps the hiding needs to be limited to PVH Dom0?

--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -503,6 +503,9 @@ int __init dom0_setup_permissions(struct
         rc |= ioports_deny_access(d, 0xA0 + offs, 0xA1 + offs);
     }
 
+    /* ELCR of both PICs. */
+    rc |= ioports_deny_access(d, 0x4D0, 0x4D1);
+
     /* Interval Timer (PIT). */
     for ( offs = 0, i = pit_alias_mask & -pit_alias_mask ?: 4;
           offs <= pit_alias_mask; offs += i )



From xen-devel-bounces@lists.xenproject.org Thu May 11 12:12:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 12:12:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533346.829921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px59a-0004Ka-Px; Thu, 11 May 2023 12:12:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533346.829921; Thu, 11 May 2023 12:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px59a-0004KT-Lj; Thu, 11 May 2023 12:12:14 +0000
Received: by outflank-mailman (input) for mailman id 533346;
 Thu, 11 May 2023 12:12:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lTi/=BA=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1px59Z-0004KN-4p
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 12:12:13 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f059b70-eff5-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 14:12:11 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-30796c0cbcaso4873433f8f.1
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 05:12:11 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 b15-20020a5d4b8f000000b003064600cff9sm20246328wrt.38.2023.05.11.05.12.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 May 2023 05:12:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f059b70-eff5-11ed-b229-6b7b168915f2
Message-ID: <645cdb9a.5d0a0220.d3a29.e895@mx.google.com>
X-Google-Original-Message-ID: <ZFzbl2urdhmGUXKD@EMEAENGAAD19049.>
Date: Thu, 11 May 2023 13:12:07 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 3/3] x86: Add support for CpuidUserDis
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
 <20230509164336.12523-4-alejandro.vallejo@cloud.com>
 <1489425d-7627-30c6-bb0a-ca1145107f42@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1489425d-7627-30c6-bb0a-ca1145107f42@suse.com>

On Thu, May 11, 2023 at 01:05:42PM +0200, Jan Beulich wrote:
> > --- a/xen/arch/x86/cpu/amd.c
> > +++ b/xen/arch/x86/cpu/amd.c
> > @@ -279,8 +279,12 @@ static void __init noinline amd_init_levelling(void)
> >  	 * that can only be present when Xen is itself virtualized (because
> >  	 * it can be emulated)
> >  	 */
> > -	if (cpu_has_hypervisor && probe_cpuid_faulting())
> > +	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
> > +	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
> 
> ... imo the probe_cpuid_faulting() call would better be avoided when
> the CPUID bit is set.

I wrote it like that originally. However, it felt wrong to leave
raw_policy.platform_info unset, as it's set inside probe_cpuid_faulting().
While it's highly unlikely that a real AMD machine will have CPUID
faulting support, Xen might see both if it is itself virtualized under
Xen.

The crux of the matter here is whether we want the raw policy to be an
accurate representation of _all_ the features of the machine (real or
virtual), or whether we're OK with it lacking features we don't intend
to use in practice. It can certainly be argued either way. CpuidUserDis
naturally gets into the policy through CPUID leaf enumeration, so that's
done regardless.

My $0.02 is that "raw" means uncooked and as such should reflect the
actual physical features reported by the machine, but I could be
persuaded either way.

> 
> > +		expected_levelling_cap |= LCAP_faulting;
> > +		levelling_caps |=  LCAP_faulting;
> 
> Further the movement of these two lines from ...
> 
> > @@ -144,8 +145,6 @@ bool __init probe_cpuid_faulting(void)
> >  		return false;
> >  	}
> >  
> > -	expected_levelling_cap |= LCAP_faulting;
> > -	levelling_caps |=  LCAP_faulting;
> >  	setup_force_cpu_cap(X86_FEATURE_CPUID_FAULTING);
> 
> ... here (and also to intel.c) should imo be part of patch 2. While
> moving them, I think you also want to deal with the stray double
> blank.
> 
> Jan

Sure.

Alejandro


From xen-devel-bounces@lists.xenproject.org Thu May 11 12:13:15 2023
Message-ID: <1d9f1c3a-bd26-f44f-73d7-22492f40f7f0@amd.com>
Date: Thu, 11 May 2023 14:12:53 +0200
From: Michal Orzel <michal.orzel@amd.com>
Subject: Re: [PATCH] xen/arm: domain_build: Propagate return code of
 map_irq_to_domain()
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230511112531.705-1-michal.orzel@amd.com>
 <09f4eef3-df60-ee90-bc5b-7e61ef9788a0@xen.org>
In-Reply-To: <09f4eef3-df60-ee90-bc5b-7e61ef9788a0@xen.org>
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Julien,

On 11/05/2023 13:55, Julien Grall wrote:
> 
> 
> Hi Michal,
> 
> On 11/05/2023 12:25, Michal Orzel wrote:
>> In map_dt_irq_to_domain() we assign the return code of
>> map_irq_to_domain() to a variable without checking it for errors.
>> Fix it by propagating the return code directly, since this is the last
>> call in the function.
>>
>> Take the opportunity to use the correct printk() format specifiers,
>> since both irq and domain id are of unsigned types.
> 
> I would rather prefer this to be split into a separate patch because,
> while we want to backport the first part, I don't think the latter needs
> to be.
Sure. I will then fix the specifiers in both map_dt_irq_to_domain() and
map_irq_to_domain().

> 
>>
>> Fixes: 467e5cbb2ffc ("xen: arm: consolidate mmio and irq mapping to dom0")
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>>   xen/arch/arm/domain_build.c | 6 +++---
>>   1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index f80fdd1af206..2c14718bff87 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -2303,7 +2303,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>>
>>       if ( irq < NR_LOCAL_IRQS )
>>       {
>> -        printk(XENLOG_ERR "%s: IRQ%"PRId32" is not a SPI\n",
>> +        printk(XENLOG_ERR "%s: IRQ%u is not a SPI\n",
>>                  dt_node_name(dev), irq);
>>           return -EINVAL;
>>       }
>> @@ -2313,14 +2313,14 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>>       if ( res )
>>       {
>>           printk(XENLOG_ERR
>> -               "%s: Unable to setup IRQ%"PRId32" to dom%d\n",
>> +               "%s: Unable to setup IRQ%u to dom%u\n",
>>                  dt_node_name(dev), irq, d->domain_id);
> 
> Please switch to %pd when printing the domain.
ok

~Michal


From xen-devel-bounces@lists.xenproject.org Thu May 11 12:17:49 2023
Message-ID: <a8be0096-42af-6d7d-ff7b-b6128d996ccc@suse.com>
Date: Thu, 11 May 2023 14:07:40 +0200
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 6/7] x86: don't allow Dom0 (direct) access to port F0
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
In-Reply-To: <95129c04-f37c-9e26-e65d-786a1db2f003@suse.com>
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>

This port controls the driving of IGNNE# (if such emulation is enabled in
hardware), and hence would need proper handling in the hypervisor to be
safe to use by Dom0 (and full emulation for PVH/HVM DomU-s).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: Really this disabling of access would want to be conditional upon
     the functionality actually being enabled. For AMD this looks to be
     uniformly HWCR[8], but for Intel this is chipset-specific.

Ports F1 (and perhaps also further ones up to FF) ought to be applicable
to external coprocessors only, and hence are left alone here.

--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -515,6 +515,9 @@ int __init dom0_setup_permissions(struct
     /* INIT# and alternative A20M# control. */
     rc |= ioports_deny_access(d, 0x92, 0x92);
 
+    /* IGNNE# control. */
+    rc |= ioports_deny_access(d, 0xF0, 0xF0);
+
     /* ACPI PM Timer. */
     if ( pmtmr_ioport )
         rc |= ioports_deny_access(d, pmtmr_ioport, pmtmr_ioport + 3);



From xen-devel-bounces@lists.xenproject.org Thu May 11 12:44:29 2023
Message-ID: <64b3589b-b66e-fb44-a75d-ff5e7d00ea44@citrix.com>
Date: Thu, 11 May 2023 13:43:53 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/6] x86/cpu-policy: Drop build time cross-checks of
 featureset sizes
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
In-Reply-To: <62b721a0-7d09-751f-5d95-086584f3d7e5@suse.com>
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
 =?utf-8?B?T2lEeEw2bXI5ZjJMMDRielJNUnNMVFhHTlY3UXFDbXJRL2tFelpXWFdZTEx6?=
 =?utf-8?B?MVc2NDFXSHRKbEFsYzhYTnR4Z1ZhVUkyVnkvWW0wMnZ6a20rcUIrNEV4ZzBt?=
 =?utf-8?B?WW4vQU1iOXBjajZRU3p5TUcxTE4wbTdKa0VuRitCNW9MZUJISnM3cithV2xk?=
 =?utf-8?B?MXhSejlNWFRsM2ozVkRCZmJOSWU4WmlMQktTWlJpSE9QUUZhZ1dSSUhKSnZl?=
 =?utf-8?B?R2tkWXVHMytoL2ZlZGc0V0QxR2dzbDBsL0tpNnJTemdZdEcyMkhNa2dyWnRw?=
 =?utf-8?B?L1E9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	se9muSsW5VxqpN/E8V+1PoGEQo1IJjtJ5wIV+TBZg3R/ST9b2hA7h3IruoobLWeq5xkPnPJYZtlZsNpSeeFcGDI8ubeTUjclRya9Jag8BkZNRR7WfemVj3QwCGRekOQ/C6IG9BxQjxoe3PD5veVMMFMZEOZh9OUAVZq2/6WK3Xvh1w8FylteSpVCQgincPkA18vDu0WrhwfOhjByy7OZJzntdoPWspi0k+FQmPbzKLiL4CZTPT/o0+CJQ7W/wk3kHqCOfmR5zEF3aX/BdjBkLxYXF9ujNgg4kfKuVY4bFTctTGN2QthAw1bocaO79SGkpdmh+Q+cfGSv2Wcec9TkAcA0MewQFS1FPl6kwSOADU1dqDygVD3fbXZOPZB7neQQQ7yl8gwM+3TspkaB2p3c+Ms/zXxZ+u0VHX9KxFYGXKESp7l/nZT3kBksRpdzqiCuiPqswcOdtdAu2MTUnNN7OAOyMwxnYuX3M6/PBvqYzN05nYQjyB7phs2pflk6qGIah96uBoCMnuCt4mUGzY4itV1uy+HiejNvtIAuWd65suLbNBXM9IjO2w4OSontbeev2ualPabP/Bj1bbaLNPoBKJkHTbEuvTbvBzJWGqdkrDGeI77unP5ufGdgSy7kAH/yl1hEoINMe9HKWbW5xKtEpse2nw67aAvcfW8lri3kAgWi9YLA5qKAaM2bTBPK9HLRCxzIT7+Aubsgf5MlO6iCkrHPbYySEjcMDVfjn91haK+dik1dSvKPCuOLUSBQUgYuK4bpts07dsiFx0MoigNfdTMX85AMaJZq9Qhb9I/zKQz68DRlMWqmLscmuwu6XZbu+j9OeSRrXzuRzimZzH9okg==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e4cbd49-24ce-4663-0a9f-08db521d63f0
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 12:43:59.0716
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ssNzWB2+HyuPG6hXUPGKkLJAS7sSwgrCCjPTrqR4gJ9m6Zkw6O8Jx4H3LozKfeKSzJw2/7b3a6ySwHqZO/tFJkis3rLxthTxN8djxVf6MUg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5069

On 11/05/2023 7:43 am, Jan Beulich wrote:
> On 10.05.2023 17:06, Andrew Cooper wrote:
>> On 09/05/2023 5:15 pm, Jan Beulich wrote:
>>> On 09.05.2023 17:59, Andrew Cooper wrote:
>>>> On 09/05/2023 3:28 pm, Jan Beulich wrote:
>>>>> On 09.05.2023 15:04, Andrew Cooper wrote:
>>>>>> On 08/05/2023 7:47 am, Jan Beulich wrote:
>>>>>>> On 04.05.2023 21:39, Andrew Cooper wrote:
>>>>>>>> These BUILD_BUG_ON()s exist to cover the curious absence of a diagnostic for
>>>>>>>> code which looks like:
>>>>>>>>
>>>>>>>>   uint32_t foo[1] = { 1, 2, 3 };
>>>>>>>>
>>>>>>>> However, GCC 12 at least does now warn for this:
>>>>>>>>
>>>>>>>>   foo.c:1:24: error: excess elements in array initializer [-Werror]
>>>>>>>>     884 | uint32_t foo[1] = { 1, 2, 3 };
>>>>>>>>         |                        ^
>>>>>>>>   foo.c:1:24: note: (near initialization for 'foo')
>>>>>>> I'm pretty sure all gcc versions we support diagnose such cases. In turn
>>>>>>> the arrays in question don't have explicit dimensions at their
>>>>>>> definition sites, and hence they derive their dimensions from their
>>>>>>> initializers. So the build-time-checks are about the arrays in fact
>>>>>>> obtaining the right dimensions, i.e. the initializers being suitable.
>>>>>>>
>>>>>>> With the core part of the reasoning not being applicable, I'm afraid I
>>>>>>> can't even say "okay with an adjusted description".
>>>>>> Now I'm extra confused.
>>>>>>
>>>>>> I put those BUILD_BUG_ON()'s in because I was not getting a diagnostic
>>>>>> when I was expecting one, and there was a bug in the original featureset
>>>>>> work caused by this going wrong.
>>>>>>
>>>>>> But godbolt seems to agree that even GCC 4.1 notices.
>>>>>>
>>>>>> Maybe it was some other error (C file not seeing the header properly?)
>>>>>> which disappeared across the upstream review?
>>>>> Or maybe, by mistake, too few initializer fields? But what exactly it
>>>>> was probably doesn't matter. If this patch is to stay (see below), some
>>>>> different description will be needed anyway (or the change be folded
>>>>> into the one actually invalidating those BUILD_BUG_ON()s).
>>>>>
>>>>>> Either way, these aren't appropriate, and need deleting before patch 5,
>>>>>> because the check is no longer valid when a featureset can be longer
>>>>>> than the autogen length.
>>>>> Well, they need deleting if we stick to the approach chosen there right
>>>>> now. If we switched to my proposed alternative, they better would stay.
>>>> Given that all versions of GCC do warn, I don't see any justification
>>>> for them to stay.
>>> All versions warn when the variable declarations / definitions have a
>>> dimension specified, and then there are excess initializers. Yet none
>>> of the five affected arrays have a dimension specified in their
>>> definitions.
>>>
>>> Even if dimensions were added, we'd then have only covered half of
>>> what the BUILD_BUG_ON()s cover right now: There could then be fewer
>>> than intended initializer fields, and things may still be screwed. I
>>> think it was for this very reason why BUILD_BUG_ON() was chosen.
>> ???
>>
>> The dimensions already exist, as proved by the fact GCC can spot the
>> violation.
> Where? Quoting cpu-policy.c:
>
> const uint32_t known_features[] = INIT_KNOWN_FEATURES;
>
> static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
> static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
> static const uint32_t __initconst hvm_hap_max_featuremask[] =
>     INIT_HVM_HAP_MAX_FEATURES;
> static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
> static const uint32_t __initconst hvm_shadow_def_featuremask[] =
>     INIT_HVM_SHADOW_DEF_FEATURES;
> static const uint32_t __initconst hvm_hap_def_featuremask[] =
>     INIT_HVM_HAP_DEF_FEATURES;
> static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
>
> I notice that known_features[], as an exception, has its dimension declared
> in cpuid.h.

Ah.  I had indeed not spotted that.  Sorry.

It explains why all of my test builds (checking known_features[])
appeared to work.  I will rework these to have dimensions, because it
will remove some reasonably complex logic in gen-cpuid.py.

>
>> On the other hand, zero extending a featureset is explicitly how they're
>> supposed to work.  How do you think Xapi has coped with migration
>> compatibility over the years, not to mention the microcode changes which
>> lengthen a featureset?
>>
>> So no, there was never any problem with constructs of the form uint32_t
>> foo[10] = { 1, } in the first place.
>>
>> The BUILD_BUG_ON()s therefore serve no purpose at all.
> As per above the very minimum would be to accompany their dropping with
> adding of explicitly specified dimensions for all the static arrays. I'm
> not entirely certain about the other side (the zero-extension), but I'd
> likely end up simply trusting you on that.

https://godbolt.org/z/c13Kxcdsh

GCC (on both extremes that godbolt supports) zero extends to the
declaration dimension size.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 11 12:49:33 2023
Message-ID: <c2cfb758-4055-d25a-2971-10da82439b27@suse.com>
Date: Thu, 11 May 2023 14:49:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 3/3] x86: Add support for CpuidUserDis
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230509164336.12523-1-alejandro.vallejo@cloud.com>
 <20230509164336.12523-4-alejandro.vallejo@cloud.com>
 <1489425d-7627-30c6-bb0a-ca1145107f42@suse.com>
 <645cdb9a.5d0a0220.d3a29.e895@mx.google.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <645cdb9a.5d0a0220.d3a29.e895@mx.google.com>

On 11.05.2023 14:12, Alejandro Vallejo wrote:
> On Thu, May 11, 2023 at 01:05:42PM +0200, Jan Beulich wrote:
>>> --- a/xen/arch/x86/cpu/amd.c
>>> +++ b/xen/arch/x86/cpu/amd.c
>>> @@ -279,8 +279,12 @@ static void __init noinline amd_init_levelling(void)
>>>  	 * that can only be present when Xen is itself virtualized (because
>>>  	 * it can be emulated)
>>>  	 */
>>> -	if (cpu_has_hypervisor && probe_cpuid_faulting())
>>> +	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
>>> +	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
>>
>> ... imo the probe_cpuid_faulting() call would better be avoided when
>> the CPUID bit is set.
> 
> I wrote it like that originally. However, it felt wrong to leave
> raw_policy.platform_info unset, as it's set inside probe_cpuid_faulting().
> While it's highly unlikely a real AMD machine will have CPUID faulting
> support, Xen might see both if it's itself virtualized under Xen.
> 
> The crux of the matter here is whether we want the raw policy to be an
> accurate representation of _all_ the features of the machine (real or
> virtual) or we're ok with it not having features we don't intend to use in
> practice. It certainly can be argued either way. CpuidUserDis naturally
> gets to the policy through CPUID leaf enumeration, so that's done
> regardless.
> 
> My .02 is that raw means uncooked and as such should have the actual
> physical features reported by the machine, but I could be persuaded either
> way.

I think I would be okay if that was (in perhaps slightly abridged form)
made part of the description (or if the code comment there said so, then
also preventing someone [like me] coming and re-ordering the conditional).

Nevertheless having raw_policy populated like this seems a little fragile
in the first place. Andrew - any particular thoughts from you in this
regard?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 13:02:58 2023
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 2/2] xen/arm: domain_build: Fix format specifiers in map_{dt_}irq_to_domain()
Date: Thu, 11 May 2023 15:02:18 +0200
Message-ID: <20230511130218.22606-3-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230511130218.22606-1-michal.orzel@amd.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>

IRQ is of unsigned type, so %u should be used. When printing a domain id,
%pd is the correct format, for consistency.

Also, wherever possible, reduce the number of lines printk() calls are split
across.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Changes in v2:
 - split the v1 patch so that the format specifiers are handled separately
 - also fix map_irq_to_domain()
---
 xen/arch/arm/domain_build.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9dee1bb8f21c..71f307a572e9 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2265,8 +2265,7 @@ int __init map_irq_to_domain(struct domain *d, unsigned int irq,
     res = irq_permit_access(d, irq);
     if ( res )
     {
-        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
-               d->domain_id, irq);
+        printk(XENLOG_ERR "Unable to permit to %pd access to IRQ %u\n", d, irq);
         return res;
     }
 
@@ -2282,8 +2281,7 @@ int __init map_irq_to_domain(struct domain *d, unsigned int irq,
         res = route_irq_to_guest(d, irq, irq, devname);
         if ( res < 0 )
         {
-            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
-                   irq, d->domain_id);
+            printk(XENLOG_ERR "Unable to map IRQ%u to %pd\n", irq, d);
             return res;
         }
     }
@@ -2303,8 +2301,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
 
     if ( irq < NR_LOCAL_IRQS )
     {
-        printk(XENLOG_ERR "%s: IRQ%"PRId32" is not a SPI\n",
-               dt_node_name(dev), irq);
+        printk(XENLOG_ERR "%s: IRQ%u is not a SPI\n", dt_node_name(dev), irq);
         return -EINVAL;
     }
 
@@ -2312,9 +2309,8 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
     res = irq_set_spi_type(irq, dt_irq->type);
     if ( res )
     {
-        printk(XENLOG_ERR
-               "%s: Unable to setup IRQ%"PRId32" to dom%d\n",
-               dt_node_name(dev), irq, d->domain_id);
+        printk(XENLOG_ERR "%s: Unable to setup IRQ%u to %pd\n",
+               dt_node_name(dev), irq, d);
         return res;
     }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 13:02:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 13:02:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533370.829976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px5wR-00046z-Po; Thu, 11 May 2023 13:02:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533370.829976; Thu, 11 May 2023 13:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px5wR-00046X-Jk; Thu, 11 May 2023 13:02:43 +0000
Received: by outflank-mailman (input) for mailman id 533370;
 Thu, 11 May 2023 13:02:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/UO=BA=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1px5wQ-00044h-Cn
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 13:02:42 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1af868ec-effc-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 15:02:39 +0200 (CEST)
Received: from MW4PR03CA0193.namprd03.prod.outlook.com (2603:10b6:303:b8::18)
 by DM4PR12MB5214.namprd12.prod.outlook.com (2603:10b6:5:395::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.21; Thu, 11 May
 2023 13:02:30 +0000
Received: from CO1NAM11FT090.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:b8:cafe::1b) by MW4PR03CA0193.outlook.office365.com
 (2603:10b6:303:b8::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22 via Frontend
 Transport; Thu, 11 May 2023 13:02:30 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT090.mail.protection.outlook.com (10.13.175.152) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 13:02:28 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 08:02:27 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 06:02:27 -0700
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 11 May 2023 08:02:25 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1af868ec-effc-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IE3StwRwLf7uGbeD0OpAER5yj8HPvOhqilx4du5HCpKpb4PYOXh48VhTJPUEmDagZMuf6J8f3bckdoANVOa/AkgHNY/glqiGWUXwRU+VG29m8lGNZ0PwRGq+jjeEvz1+jThG0E1PSBQzITEvpzhBl5MtmB+wUxKigkVjKmQsV4orJAXkKgX6DM5pHfYPHqzD2iKBZtx3bSD1grdu+oZG6ipOKAVQbiO+Hd3YJvTKqFnnIj1f3LXp3ZiHSPJLwqEmsd2/Bm6+2giYnxaHwYZl3AqIpxOwkhpuh40DsHPPdidWJ46kUqU6aaWo/AfN+ZsX7l2Ix7uxp+gcucTtubYn5w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=s5it+s+Whb5mYuu1zowFBgJHBZo+jj7YCCm7iUuAZIM=;
 b=ZDMNMGoksvfWvBbzy6Pn3pyBMcOTDVKGyBO53i0BvCKxgr0xb58bT5eWw2PWcAv7BgY25VrkmtCAqBXeo47t5J5EgNou0iu34zoE+ws3qbAVUKF4RF5iOwn9lbe9mMzYdla99RxjX2dF8PHwop4a+R7EVaRKBXeduTkKE5CW0K8je7ixRhlDiWujfmQEJ7WmX2TxmOtB1Rdsu8/1PKOpzRm9NwzVe8bxJ2GFuupUr8XZBqo66rk/Tc/LGWrQPDKCWYLHOI4lI5TsugnjhgtmiqVlNxJ/LxMFiSGugZl99byjU1oroRUX+fz3QVb+S5GTZqPwlu2rNTV3vcw78twN7w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=s5it+s+Whb5mYuu1zowFBgJHBZo+jj7YCCm7iUuAZIM=;
 b=1ea/uTyTX953Gtha9S+SmalDqGmbvpDiT9XVmDyREVWOPddoR9GU2MkUorrAcOAbbpurlZZMzFNONxzRkexQuixx63gltLsvL/Xu/YVrwVHMk0Cm19NTu5LLlj5C+FK8sHEXNuCBvI9t35DecrzMm9BLOzE/r4rt9rI5W3zkWRA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 1/2] xen/arm: domain_build: Propagate return code of map_irq_to_domain()
Date: Thu, 11 May 2023 15:02:17 +0200
Message-ID: <20230511130218.22606-2-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230511130218.22606-1-michal.orzel@amd.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT090:EE_|DM4PR12MB5214:EE_
X-MS-Office365-Filtering-Correlation-Id: 3dc46db1-103b-4398-c552-08db521ff956
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Odc0GW3fYHxlXubtCRRgDdgl/rX2iDo/QT3Wh33P2joVbRKsaVJqmNmONEx9VFT5tvm7lSVl4sLukACgPn/5fFs77q8VjjisuNReSmuokbZNoJri/X0ZBdeCUaMISqUBLdqdsFlCwuoSM7alSY1IAwD7ybb+eWmcgwXeGwqPjobNWXDqQW5oxR957inqwNSXOT+3VDh/80QDG7RbnOTRyHh/i+ZU7gCJHyKqRpoHGBvq1eqMbeta7FQgtPJsCIsNti+7OSHsTsVHBno9BYu4N0wrmIWOs08LlZ3lI0Ajzl7QDbwgUJ1PXp+tQvrCjESqYJvoW8YbFE0RJGoA9o3Xf2UIujbMKZYW+dVxNGV25n/av/FHzgC1evy1UZQ+24SlXO/pre5TEPSo4tZUIlppiJfjsbtF5CpehfJKkN9Y9ZPVXxsBu2V1XtmMzEpSQuBCxEbGOurTmPJWnhbXmYDzKSqJVw0Hb41rGH04zGnA6KChJhrbUT9jpxfkNHfk6CnM252H6NRZTk3cdSK2/juouhOgTWts0x2d+EFsSMtM90uvmdTA75dDH3uRHr3WmYEIyA8Td3mE8/REwAHjeKmyhsyScYypCStLegPRAFSgDqTHpXfSB0+QcR1Qa9Juy1hWOE7mfi03Y5bCx70stHWeFrS4IK/43CF/3tzodb8Xk5iV3km4PFxaUtEHsCEMoP3lkwO0s2Y+u/gwAqvu8fltQYw59aUzn8FghNKPBdtylG/mDEELgcVxeym8RbLfJLPMp+8G5Ge0UUY/u94E25ugRg==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(346002)(136003)(396003)(451199021)(40470700004)(46966006)(36840700001)(70586007)(70206006)(81166007)(316002)(82740400003)(4326008)(6916009)(40460700003)(356005)(54906003)(82310400005)(478600001)(6666004)(86362001)(36756003)(186003)(40480700001)(26005)(1076003)(4744005)(426003)(336012)(2906002)(36860700001)(8936002)(8676002)(5660300002)(47076005)(83380400001)(44832011)(2616005)(41300700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 13:02:28.2171
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3dc46db1-103b-4398-c552-08db521ff956
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT090.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5214

In map_dt_irq_to_domain() we assign the return code of
map_irq_to_domain() to a variable without checking it for errors.
Fix it by propagating the return code directly, since this is the last
call in the function.

Fixes: 467e5cbb2ffc ("xen: arm: consolidate mmio and irq mapping to dom0")
Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Changes in v2:
 - split the patch so that a fix alone can be backported
---
 xen/arch/arm/domain_build.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f80fdd1af206..9dee1bb8f21c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2320,7 +2320,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
 
     res = map_irq_to_domain(d, irq, !mr_data->skip_mapping, dt_node_name(dev));
 
-    return 0;
+    return res;
 }
 
 int __init map_range_to_domain(const struct dt_device_node *dev,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 13:02:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 13:02:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533371.829990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px5wX-0004aw-3J; Thu, 11 May 2023 13:02:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533371.829990; Thu, 11 May 2023 13:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px5wX-0004an-0b; Thu, 11 May 2023 13:02:49 +0000
Received: by outflank-mailman (input) for mailman id 533371;
 Thu, 11 May 2023 13:02:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/UO=BA=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1px5wW-0004aL-7V
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 13:02:48 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20601.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1fb5c0b5-effc-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 15:02:47 +0200 (CEST)
Received: from SJ0PR13CA0050.namprd13.prod.outlook.com (2603:10b6:a03:2c2::25)
 by SA3PR12MB9158.namprd12.prod.outlook.com (2603:10b6:806:380::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Thu, 11 May
 2023 13:02:27 +0000
Received: from DM6NAM11FT073.eop-nam11.prod.protection.outlook.com
 (2603:10b6:a03:2c2:cafe::e5) by SJ0PR13CA0050.outlook.office365.com
 (2603:10b6:a03:2c2::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18 via Frontend
 Transport; Thu, 11 May 2023 13:02:26 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT073.mail.protection.outlook.com (10.13.173.152) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.21 via Frontend Transport; Thu, 11 May 2023 13:02:26 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 08:02:25 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 11 May 2023 08:02:24 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fb5c0b5-effc-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=b1nAQtKBXuOd3xRIAzEjDsOoKJe6RQOE0deRKw+8r/ojZIsRPQyeVH+kM4DYNnDR57ygYqlZbWIqOzn22pJPfWS/ARVdIoG39hZNg7zsgMkqPO7bRo4+G1CN8P6JoQuCs5Zk8ENF8gQKrqIcwIZciw5uw0QsYy0OCYsAU4J9vjcNkNoaXCg7tshMjamodjk+rAA5BdjWQdjIuFHs09gOnZTJOJBQKHpG/x6T3LmCZ1SQQWCnYgZ8gbQwFoAQizkaO/VEAyq4Rd0exu4ttPdoLnsVWF3ti/h4UZXtnPpslB9VzQXFhCFDyMoik0IYToae8kdlhYJyMRLd6WK5cygf9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bY1B3vA6JWcnz+nsPstgyoxa2lCfFZtRYM2UkDNGy0k=;
 b=e7JzJ1rWSZOr6D62TtkPoFh1RvROOjslZ2Wm/X6LzowAmwdV+oWkBE2vA3qWRL73yzsvcOm3nk91fZWPL+wq4+PlqdeQPgMv6s6I0R44k3hSlTGtgNodLAfwa3ba2M3UFNBAHvQV6Y20Z9d0uf2PrRIdlg9HmNdT4cdLHgBAxNHbbKpcr8ISJztsQ5GX4dhYAEZT9ZPwH+3jHBuER6P3+Gw4M+A+TYIq+px9z3j5MTyz6ByyFI5GVjL5rWSf/S8A2yy3+TXDhONs+9pUI/+K4Ao2FXZJw9FpVG85tcx7Lz7ZW23G8rlSuGdHcguGzURH682PrJw9w/1eQATg69gWrQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bY1B3vA6JWcnz+nsPstgyoxa2lCfFZtRYM2UkDNGy0k=;
 b=ggWhYb5hk/8itS6EJmF9Lhgk7YHWZ/8ldsQbBp05HAh2aRt1d+I4m481Y+w7v0lfvWm3XEBCg7uVrI2NqvMD6gp5juX8afsuk0rJVXWgbzoQAIduXRTTV8G6xLVO6Icp5/qls0rmC3JO2p75kblRc23S5NhGCvODbobbTUPkVHs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 0/2] xen/arm: domain_build: map_{dt_}irq_to_domain() fixes
Date: Thu, 11 May 2023 15:02:16 +0200
Message-ID: <20230511130218.22606-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT073:EE_|SA3PR12MB9158:EE_
X-MS-Office365-Filtering-Correlation-Id: 5fab323e-6a5a-444b-abcb-08db521ff81f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uC2c0/WaKm+jiSqFHdvAWDH8llij7kMlNK/ZE6Nc41AyWn5QAPeR4clAQVGyc5OKtscmm5AeSrZ1wN9YDORJqneknUud1AqRGCm+5kGgh4IYO4yHSmwLCzbJAy84SuDTSX5NRkCdJ9zPnkBvSbstqgaBN7MA+2TdifMKIp3Y+Upeg+ifh2bU/spRZEjQlmc5xG72/oT/liEDKmXokkkafsbut8kWs+3/o22Al6Vh4JWRmJZg3caRZyteXBVgwSgkk57FFYWwy/0jPNqmuggx51LhqSpyQPzOknMzpCW2Lp7l3Qq5Gq7BPRA5xLj1fGs//7+HSwGRXXzmUq0XfNe/3f9k68SNIUVUaOlQWNmHqCZ0a6rthr+tm5u3m8mMhV6tfnuPKmX7Z4L78/yF1/YnL1/1w3kuUHrM3eyY2hz1DFAovX13035XuOEODMOS/+iMWLlCfORbr9q2rc1jdJrp8DVA7Vjx7w2ImtB6TvzC+kjhGsKl18rCAFZQmKQnJj0DnJ46+dNz+pWtpOriU36DNc6tSY5xVLHLFoIJy3/T6I58XBI5ZPCn9y2UGPtkQAb+zEvv0Czr4pTx71KtnkcZzPa7FpaSmbHeCZJwyPCfpR1rnxiDlTZkHGcPnddRxu8S93HAI54pJPHLagYaVe941L13Cafo0D4WGx/89tNSNMq0vYaQBso4//MiC/fq+9+9rmspPiNiAT4R4ne7flEPfFZuWUocLwr/+XzZho5cmSI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(376002)(396003)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(336012)(40460700003)(426003)(83380400001)(2616005)(36860700001)(186003)(36756003)(82310400005)(86362001)(40480700001)(356005)(82740400003)(81166007)(2906002)(47076005)(316002)(8676002)(41300700001)(6666004)(4744005)(8936002)(54906003)(5660300002)(4326008)(70206006)(478600001)(6916009)(70586007)(26005)(1076003)(44832011)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 13:02:26.1627
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5fab323e-6a5a-444b-abcb-08db521ff81f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT073.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB9158

Propagate the return code of map_irq_to_domain() correctly and fix the
format specifiers.

Michal Orzel (2):
  xen/arm: domain_build: Propagate return code of map_irq_to_domain()
  xen/arm: domain_build: Fix format specifiers in
    map_{dt_}irq_to_domain()

 xen/arch/arm/domain_build.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 13:22:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 13:22:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533383.830001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px6Ew-0007sW-IT; Thu, 11 May 2023 13:21:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533383.830001; Thu, 11 May 2023 13:21:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px6Ew-0007sP-FF; Thu, 11 May 2023 13:21:50 +0000
Received: by outflank-mailman (input) for mailman id 533383;
 Thu, 11 May 2023 13:21:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z+vG=BA=citrix.com=prvs=48888ca5b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1px6Ev-0007sJ-F9
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 13:21:49 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c506effd-effe-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 15:21:43 +0200 (CEST)
Received: from mail-dm6nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 May 2023 09:21:38 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH0PR03MB5970.namprd03.prod.outlook.com (2603:10b6:610:e1::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Thu, 11 May
 2023 13:21:37 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 13:21:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c506effd-effe-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683811303;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=9WPipmuCY+y/9hC59mQdhfggkIBPL1jHCBEfpsN1Y0w=;
  b=Q4eWhih6NfuriL4fNn2zfa39uk6wQo/6PMHLvaEk5SCvDmOO/E7QrRRs
   HFCO9Z8yGn1p1VefoCc8QhsNkX0rmpEkoVxESbQbRd33eUUd2lsrbi2Jz
   lZhV5yoa6kQz2wnkLxOIbCLqSw8SFjbKG5IUnNmxiLmNvjsEHD1ihaSN4
   Q=;
X-IronPort-RemoteIP: 104.47.57.177
X-IronPort-MID: 107427063
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:6hKvcKrGA8h9TWXel7zez3sLPzdeBmLgZBIvgKrLsJaIsI4StFCzt
 garIBmGO6qLZDH1KIp3b4nk80MOsJLTztQ2HQJv/CpmQSMV85uZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WJwUmAWP6gR5weDziBNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAD4QcAvbub6y/J7lR8A1o8stJtnKMJxK7xmMzRmBZRonabbqZvyToPN9gnI3jM0IGuvCb
 c0EbzYpdA7HfxBEJlYQDtQ5gfusgX78NTZfrTp5p4JuuzSVkFM3jeeraYWKEjCJbZw9ckKwj
 2TK5WnmRDodM8SS02Gt+XOwnO7f2yj8Xer+EZXhrqE22gLLmDd75Bs+cgWEoN2ziESEQ9NNE
 mk2wBcelZMy3Rn+JjX6d1jiyJKehTYZWtFQGul87xufx6786gOVQGMDS1ZpeNEg8cM7WzEu/
 luIhM/yQyxitqWPTnCQ/avSqim9UQAONnMLbyIASQoD4vHgrZs1gxaJScxseIa3k9n0FDfY0
 z2M6i8kiN07h8MRy7+y+1yBhju2v4XIVSY8/ACRVWWghitHY4qia52t+ELsx/9KJ4aETXGMp
 HEB3cOZ6Ygz4YqlkSWMRKABGe+v7vPcaTnE2wcwQN8m6iin/GOlccZI+jZiKUx1M8ECPzj0f
 EvUvgAX75hWVJe3UZJKj0uKI5xC5cDd+R7NDJg4svImjkBNSTK6
IronPort-HdrOrdr: A9a23:+NVkzqMapNTVtsBcTvujsMiBIKoaSvp037B87TEXdfUzSL36qy
 nOppQmPHDP4wr5NEtLpTniAsi9qBHnmqKdurNhWYtKNTOO0FdASrsO0WKI+VPd8kPFmtK0es
 1bAs5D4HGbNykZsS5aijPIduod/A==
X-Talos-CUID: 9a23:u+wgCWy1AtRnGIMfoncMBgUaB94+KkOByEv8fUKRUENmRreFaFOPrfY=
X-Talos-MUID: 9a23:aGYFrQQ2sVx6mfGZRXTL2g5jEM1q6Z62CVI0tLc8hvm8DixZbmI=
X-IronPort-AV: E=Sophos;i="5.99,266,1677560400"; 
   d="scan'208";a="107427063"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lOOlIEBLvU2m1nqKneUZ2UnPWNstnLv4qD9seUg4BFx95ZsoLSCLeIDXad4KmFvJUu9unyayi1QL6x9LCUhU9inUIduqpeQ8SKcosiQdYlwj5DmUOYvkvd5jErOIA5kpPnMkIcIJ2mpuDn2eVr4xiVfcG8OwZlYES/Hu/Og1x2P2wjcfOeDw4rP6bEUMN/EL7E9gNhf1SDZzMcj+xCCN3rD3Osc5eoD1quEE4sv2UrAZe6vAgoBMe803Ke2QYZKRXm8JXAs/QTU3xriDSli82tPqheH6wM1BG+O6KBF82ftcJvprkPPNFb+3GBW+G6ZhVJKX14yoyfrMt06F+Odxeg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9WPipmuCY+y/9hC59mQdhfggkIBPL1jHCBEfpsN1Y0w=;
 b=PSfq+D3o1dSHSrwHDAICuFKThnwLcUCq00JgLrOs9ikZJ6vMQcc+WBIK688kUYOMDKlmhesJSHTfhGvts0cgSBeBLN0IWrhSieV+cml+QjFHDPJokXIr62l/1WDuMKQ1V+iVcCoPAltj018Zag3HbKfwVQJsiqvEAbrk1mmZ7YXdKDyhbZ4MBd7UjQyb68DEL262lFMvG2kiDn27FVBbJFUo7norgRgCjfRMrSnzi3uTlSUIl2SXr+KaX+COCBfQzWeNixJCgFk5Zz6afZeDNSgghhYqwdBpAHC/ntGRuY8JwKhNVzRNpnSdGSysym0qhk7QsmShfxO/Euo1AePEYA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9WPipmuCY+y/9hC59mQdhfggkIBPL1jHCBEfpsN1Y0w=;
 b=aCb6KPArKt15zG1Itr5q02k0CuaNU0PLl/OV/k3guezZNnEHndCplA90yH9xbzmMPXiLslJhu7vHGhBXwlM0kgcTPsxfsvZn/yHs3Ec0s4WBBR/Zr3APSmTYRQTFV6Y5Rs0Bp+UR5TjRz9kIsmDM4l6xXJqXfLDAoPYNvbobtK4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <a4289d04-4cef-e4a6-a3ca-378f3e56be5a@citrix.com>
Date: Thu, 11 May 2023 14:21:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2 2/2] xen/arm: domain_build: Fix format specifiers in
 map_{dt_}irq_to_domain()
Content-Language: en-GB
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>
 <20230511130218.22606-3-michal.orzel@amd.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230511130218.22606-3-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0190.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a4::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|CH0PR03MB5970:EE_
X-MS-Office365-Filtering-Correlation-Id: cc0c0d8c-3724-4749-6801-08db5222a5c9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0aAmVDvixqAgwxqyaqEgE7Z7VVgie0IH9KeMsjCbYpd+nY1QcDVVscFhNcrVvqyWloSVETEN+jFoMTyXh3oegVKwOJyLGRe7DvPrzQGDRL4lN1Ft1eZf/aA+FN3CCyhGBcn6jyNLa0NjG0ao4ZiCJLorO0Oi3cTcFmHSYRPLp2K80+Z1VxaeB49nwMoGgk7e4BJpESRFizvSoe4UZBjHv7b8Y0usgy6ybrdF8FjcDAbIjetpy/7Syx1lbEoHa7VGhBHMUj5qjH4h8QwwFuRR0GmXc94JK9GOZbZsItxtpI7hBgv/NSHPjuhcBV+4sb/PJpdF61AzP8X07c7oH86Jhre36F5jMn2U9F/7x2Zpxvzu6VHRvI1DSPBFvtENFZGOZhNzkKBRaA7Q090eVhJCDM7BOEWyQNnWPEJEG9ONO3kPoNSn4rP8K1yM9jx3VVQ3RCa6fQVye1U3CqLBn+F60SW0vxrcE/YXD+PZqdVVCIVK2p3NJn5YFPSVBGocn+MNrjTyrupQXt21+y1KvQHktzaYWkXE3rw/hcgvMH6qQy7gtmlE+pHJ+1FO7277h4Jzz8/PBlJKoEBQHtgA76MqAJX+8gGSVPR/XWjHoKertrCj6QTbc7POIlNHbua+71wCe0pZir33Ov1vabuUxB7lkw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(136003)(346002)(366004)(39860400002)(451199021)(31696002)(53546011)(38100700002)(82960400001)(6666004)(6486002)(31686004)(6506007)(86362001)(186003)(2616005)(2906002)(316002)(4744005)(4326008)(36756003)(41300700001)(8676002)(5660300002)(8936002)(6512007)(26005)(66556008)(54906003)(66476007)(66946007)(478600001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?STQ5eVBQV2EwNnoxSy9CTW5CK09VdmdQOUdzWEQwNlhES0lIbzJwTUREUFhH?=
 =?utf-8?B?R25qRDFjQTJDN28vUFZGZFZxUkRGQzcxbXA4TW9sbkJ3V0J4eUpPeTJuVzJq?=
 =?utf-8?B?STE5RDk5ZVNiMUZNWm5hcS9MVWhSQUZBbG9Hemg3bHpUejAwZ3Q2RWp3MEFZ?=
 =?utf-8?B?dmlPSU80UndKYllvR0ZnK2w5MjVPb2RBZS9OTW0zakY3UEp1dXZnTFczWVUz?=
 =?utf-8?B?ZDlMYnRnOHoyckhmMVBPWjdaVkdqYTJIU09vcU9jbjNSZEVGNzk1WDQ3eHhB?=
 =?utf-8?B?aEsyRmtvb1FYTTdSc0phUitYd29NOVQvR212bldYeFpZU21OL1o1QWh5bnNi?=
 =?utf-8?B?SmJ2QWo5azlqWTFiSTBlUDBSWVNBYU9RUFpJSGF4andQMjBYcjA5YmZ1RjJJ?=
 =?utf-8?B?YkpxMmV4WFBFZURta1NhUUhySnQ5YnlrdWxGVEhZSG93SlhOc05lK3pXMFhj?=
 =?utf-8?B?Q1g4TmRiVjBWUEhQeXBEZ0tMeFdlbE5hc1NoZzBkUzRuSEQ2U3pFRkI4c2dM?=
 =?utf-8?B?V0dHMXVzK2RZMlZNVXdIVTdqTGVsZVp3THpzYmJHUWJzdlBVTG9vR3o3TFRX?=
 =?utf-8?B?dGlHMFZPLzBoV0lmMVE1WW1LOUdYUTgxSGFab0gxMFE4YzhFbDRLZ0M2eUtO?=
 =?utf-8?B?TTlCOWw0YnV2VEh1YU5RQkx4bm8xREJCTndkZmZBRTlyOXdSWW5IL2lCcGlO?=
 =?utf-8?B?U09iZHFLeXJnQm5oalVZcjBTUzRpR3hGanI4c2xzQ1Q1aExCcXo3UUxLT2d6?=
 =?utf-8?B?MDc2eGszR1NLWVhRU2ZoM290RWRWRXN1UW9WRTY0S3hNMVNHeTJtMGo5Yllq?=
 =?utf-8?B?ZDhMNW43R0hUaW4rY3c2c1A1MzNnSXhxT3ZWOTdNY2ZtbGxiQVhQZVpTcm1Z?=
 =?utf-8?B?bEY0S0kzTnM5TGhCakczTTFKbVZlVDI1NHp1UlJzNFFEQTJjQnY5MzhkTXFq?=
 =?utf-8?B?ZzZzNHZWL3kvb3JVY3hqd3dpNGwvL3psaVJBQVQ5THlxZjVlOENFVnNhbUV6?=
 =?utf-8?B?NzNTYkhtbTBNb2krbzlRSzhxZnZ4c3ZZN3JIU1JpVmV5cmMrUS9rQjBzU1N0?=
 =?utf-8?B?eFoxUGgvNnFGS1ZNUE5kTTY2b2E5VHlBTXJoQU1VQS8xZk1PUDByVjVaL2tL?=
 =?utf-8?B?REVnSjY3SXRtbEhzVk8wZkVCMGFQSUhENzhiejAzZ1k4MFBuMzFQVHhCRFlo?=
 =?utf-8?B?MWhRNmdiaFdBRWtURXJ5bE1NYm5vemJpbTdWKzlqeEFqNGMvZ2o1NjAraTEr?=
 =?utf-8?B?aWpoaWY5NnNVUlVLdmlPcDdaMENVN2lsQUNQTE1udk4xMDVNeWxrVlJta1Zw?=
 =?utf-8?B?WlZ4OFJ4Nm5icUUyUXpYekUxd3ZldHhOcjI2ZlNKRmJidEhCbDV6ZXllS2px?=
 =?utf-8?B?dkZDSGJMN3Rpak9LY0pHWUlVN1R0RCtnUWhNZWpZNzdtdWl4WGRlZGpQRXRI?=
 =?utf-8?B?emhXZHFqanpyY2NVbndnS2pjNjVIenNHMGo3TDU3QjF2Y0hYQTdTZ2YwUmEz?=
 =?utf-8?B?UE5jVy9Ud0lYbUJCMmlqT2V0WTZ5UG83WFJLMGxMNW8vZzhwMmRiTW5uQ1Yr?=
 =?utf-8?B?OC9UaXBaRzYvWm03RWFXcTdlM0h2MWdNM2xkM3lwZ2tZNndKUVJaRmtNY0Rz?=
 =?utf-8?B?NDlLRldvYWhBWHZtRUJlNWloVW1MQkNnMkpuNVlyU1ZCaXRSV1pRci9lanA5?=
 =?utf-8?B?dFRXR1RrYzhGcExKMkFvNVBtdlNYRGpFbS85WHV0Z0RKVXRZYzU2dHR6Q3FG?=
 =?utf-8?B?SERjaURBVWdiVHVuNXd6OERwa0x1K2tjS0hibkNYYzREM2ptVVVBUlEwOE5z?=
 =?utf-8?B?WXdaT2pCZW5MVWtQcE1UNXFQVmVGZnByd3JXRW1BNHNHb3Y0OERHMzJxWTN2?=
 =?utf-8?B?K0RoQWNWMjYvKys4ZzJmeUVmYzVVenY2bzg4SFNrVURlZUdCVXVTcitHaDIr?=
 =?utf-8?B?a3ZUYThSN0ZjTHd1VHBmSzh1RHhLOEhLaG9MaEVPSmRzYU13OTg3Yks2TjZl?=
 =?utf-8?B?VW9hYzhORTdYNmx4UjROajFxVDNkNlIyL3hrYURiRG56VWNxWmZMVTd6RGo3?=
 =?utf-8?B?ZmxpejU1ZkRyUkEvUVMveFFneFhzbTM5N1dwL29TS2NmcnZLMUh4MEhWOE1w?=
 =?utf-8?B?ZWZBU1BiaXZpT0FaYnZ1N0tkdUJiK1o1aDRzdEEwVjgvbjJxWFpMYnpadTlk?=
 =?utf-8?B?eXc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	1Xk1a8W8IxKY62Mh96jhHea/ZeNlV9oSxJsdEYkpG6QZoXqi7lBqD0BBbgrd+DtZgs9CSc1esk+79JM9QRF3IlPBniSmvF/Yvh2ih2BL3nk/wk+41WumNJrTbPjOA9SZFLWjQ3AELaJqGXOr0ZcV51c/rwUHtzl9ITvy6QpWBDk3frmGKf91SdLIWMyWiXkYaTg88ulti6uV+8QUGnRc/yZICNGzvp07YshuZNzf/YQD3j2mFTvZwBu274SBUjMVJBD4PotY0PtOJTCxWGN5RPu6Z7czx2kLkDXfSwSi6QAfX1OKtL7077XpDmtiXvN0cZqmmzDKDhUPCR0lEFw4u//rBQ6BgFi1TQ7ufTX+RU1Fjwcd1Pwffp/zxHmqw2AHbkjv37SkzXCLIDbi7lU+GvUMZfo7tqiPtPFw6e3KvI7RRbhzlDvpb3IepSnGDzQ6rgjb0Qa8nKZZE5hs0J7dqB5mvgXuRitVSGcXlLKqrvoTUHfcgoIzr5Dv8ENUbL5amYYE16BFtBPpQOw0k+nhyLphrX52BEHZyhH/6IrEUHaYIfP+4iHxIhOp155b7Uuh6tEURqk4gvRf6XbhjKifFQWAPQOSr7BLgMpzofBHONH5Jl4OKcn1cWNb6uQkbfIv18OyuotVEN/iG0Et30qLs/sTepThpwnMkogs7KmQKYaJAgYvC0hLoaXng8ZO43BKf2bPM4MzyBuoVmWt1vDQzsCRR82l8oj2vkPYExoosY3WfQjSCKU5fObYTr97B5ejs+iVGg+LXS1KdAagFt66KCU+YxF4xJ3SSY3lz4AdBZuJOz4VqpKF2g8O8WBdlPDzrE3pKvUZQ26Ug1sWOrUGkNQFMJ/8DLIPYvhdOFfYpldcD9kKhtRyX+6kCkSpyVLJhbX/gby/847O8TGTch04RQ==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cc0c0d8c-3724-4749-6801-08db5222a5c9
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 13:21:37.0834
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: us+xLiPuZvnr+AOhxXR4CuxCizpVcw3EBrUUUVWLz/66eJbuT35J4sPeBEpaCHWLIISTBPwORiFSpmVQMy0DcqdZzvxhAioXezVQdVUrO1I=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB5970

On 11/05/2023 2:02 pm, Michal Orzel wrote:
> IRQ is of unsigned type so %u should be used. When printing domain id,
> %pd should be the correct format to maintain the consistency.
>
> Also, wherever possible, reduce the number of splitted lines for printk().

Very minor grammar note.  Just "split" here.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 11 13:50:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 13:50:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533390.830021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px6gG-0002xo-3g; Thu, 11 May 2023 13:50:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533390.830021; Thu, 11 May 2023 13:50:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px6gG-0002xG-0l; Thu, 11 May 2023 13:50:04 +0000
Received: by outflank-mailman (input) for mailman id 533390;
 Thu, 11 May 2023 13:50:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1K6=BA=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1px6gF-0002gS-Fv
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 13:50:03 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b9d8c7d0-f002-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 15:50:01 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id
 a640c23a62f3a-966400ee79aso1103821066b.0
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 06:50:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9d8c7d0-f002-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683813001; x=1686405001;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=rGpfGoxRzpaDiwFkADeVm0ofYJWAggtMDA6nm4FBXQo=;
        b=Ry0EGM6GP/o3Q15VSm0+WICH9Dy7UnRXhPCN8dbMkMKBhtm/AG5DljFjSepkgjl9kV
         mZR+YYIqBR6UYhunf58a2R3Vn2ve5R2Ph6xT1PvV1e03wfTCmanhF//V/FqA8GY0x5HB
         Bh25KQOfctsqlZoMSK09O8s0l0CBPeAmFoOY+PieevoYeF0C3/+MzbxlaUBMNx5dbYjP
         AwOhfwmi2h6GqFGijPOPKi17hmFR78i9R4fvBs/MHgycw8546sH4RuuzYUDQs/Ya+/yK
         As9b6w/ewT58RmaNqCPtPz6e8O435ERtnzbWF++NX2P+LwtNS7OQbf6nc6pErSRvCh/1
         CPig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683813001; x=1686405001;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=rGpfGoxRzpaDiwFkADeVm0ofYJWAggtMDA6nm4FBXQo=;
        b=O2cE0HacfNE1zHHGMFulgdIE+76EiNcPr6jBFZrpMllYlfJUyVskZuyTvuX83uQAlO
         YyUDdQd+Vhs/lSgHIrsSuPnYZ+NpneJlNjUC1IgHMJOX/G+6v/qMy+3MGOenrJ0uBrtB
         iMmtVG374jRP/1jwCVYSFD3XJhBga2oAwIb9vWGahF9joSk3KNv6sZ0eFNuQ/iPVMeV0
         4s50xv015JExNmoCkyDLBjqAXPPDuCX0Qg64MkVRH/h1YzxoriHCCscmG24S2P2aKDsU
         IlgtuUhNmd9+dqCT4W/A4rdA4+edXVEfuBHOph3XhVHeeyE1R3pqzBmSWpiLrtp8KWPX
         IUgQ==
X-Gm-Message-State: AC+VfDz2i5tJJwiNHzxl53MvuTyfIvtHQ6t4yEXKx6g6qmfMu37fL+r7
	OQczdo+tBFPebU8H5fwO4BxQZQlWDmLCu1gZArw=
X-Google-Smtp-Source: ACHHUZ7lnR79NCl/1Gp/0Wok1qwa6ajrGAlB4xFOe2wQQHe9UA5rNdZPvn9EhpBqbtfL3PaS5qAr7ZehP1PC8VWmUcI=
X-Received: by 2002:a17:907:3e1b:b0:967:3963:dab8 with SMTP id
 hp27-20020a1709073e1b00b009673963dab8mr13547819ejc.7.1683813001044; Thu, 11
 May 2023 06:50:01 -0700 (PDT)
MIME-Version: 1.0
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-8-jandryuk@gmail.com>
 <7db38688-1233-bc16-03f3-7afdc3394054@suse.com> <9cf71407-6209-296a-489a-9732b1928246@suse.com>
 <CAKf6xptOf7eSzstzjfbbSU0tMBpNjtPEwt2uNzj=2TucrgFRiA@mail.gmail.com> <80ccf9c7-5d22-b368-dac6-01fe6cec7add@suse.com>
In-Reply-To: <80ccf9c7-5d22-b368-dac6-01fe6cec7add@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 11 May 2023 09:49:48 -0400
Message-ID: <CAKf6xptLpj_L_G3Qk+KA-yaTcaMHLJLL9soFP9HD6Ro+8Lk7CA@mail.gmail.com>
Subject: Re: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to userspace
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 11, 2023 at 2:21 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 10.05.2023 19:49, Jason Andryuk wrote:
> > On Mon, May 8, 2023 at 6:26 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 01.05.2023 21:30, Jason Andryuk wrote:
> >>> Extend xen_get_cpufreq_para to return hwp parameters.  These match the
> >>> hardware rather closely.
> >>>
> >>> We need the features bitmask to indicate fields supported by the actual
> >>> hardware.
> >>>
> >>> The use of uint8_t parameters matches the hardware size.  uint32_t
> >>> entries grow the sysctl_t past the build assertion in setup.c.  The
> >>> uint8_t ranges are supported across multiple generations, so hopefully
> >>> they won't change.
> >>
> >> Still it feels a little odd for values to be this narrow. Aiui the
> >> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
> >> used by HWP. So you could widen the union in struct
> >> xen_get_cpufreq_para (in a binary but not necessarily source compatible
> >> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
> >> placed scaling_cur_freq could be included as well ...
> >
> > The values are narrow, but they match the hardware.  It works for HWP,
> > so there is no need to change at this time AFAICT.
> >
> > Do you want me to make this change?
>
> Well, much depends on what these 8-bit values actually express (I did
> raise this question in one of the replies to your patches, as I wasn't
> able to find anything in the SDM). That'll then hopefully allow to
> make some educated prediction on how likely it is that a future
> variant of hwp would want to widen them.

Sorry for not providing a reference earlier.  In the SDM,
HARDWARE-CONTROLLED PERFORMANCE STATES (HWP) section, there is this
second paragraph:
"""
In contrast, HWP is an implementation of the ACPI-defined
Collaborative Processor Performance Control (CPPC), which specifies
that the platform enumerates a continuous, abstract unit-less,
performance value scale that is not tied to a specific performance
state / frequency by definition. While the enumerated scale is roughly
linear in terms of a delivered integer workload performance result,
the OS is required to characterize the performance value range to
comprehend the delivered performance for an applied workload.
"""

The numbers are "continuous, abstract unit-less, performance value."
So there isn't much to go on there, but generally, smaller numbers
mean slower and bigger numbers mean faster.

Cross referencing the ACPI spec here:
https://uefi.org/specs/ACPI/6.5/08_Processor_Configuration_and_Control.html#collaborative-processor-performance-control

Scrolling down you can find the register entries such as

Highest Performance
Register or DWORD Attribute:  Read
Size:                         8-32 bits

AMD has its own pstate implementation that is similar to HWP.  Looking
at the Linux support, the AMD hardware also uses 8-bit values for the
comparable fields:
https://elixir.bootlin.com/linux/latest/source/arch/x86/include/asm/msr-index.h#L612

So Intel and AMD are 8-bit for now at least.  Something could do 32 bits
according to the ACPI spec.

8 bits of granularity for slow to fast seems like plenty to me.  I'm
not sure what one would gain from 16 or 32 bits, but I'm not designing
the hardware.  From the earlier xenpm output, "highest" was 49, so
still a decent amount of room in an 8-bit range.

> (Was it energy_perf that went
> from 4 to 8 bits at some point, which you even comment upon in the
> public header?)

energy_perf (Energy_Performance_Preference) had a fallback: "If
CPUID.06H:EAX[bit 10] indicates that this field is not supported, HWP
uses the value of the IA32_ENERGY_PERF_BIAS MSR to determine the
energy efficiency / performance preference."  So it had a different
range, but that was because it was being put into an older register.

However, I've removed that fallback code in v4.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 11 13:50:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 13:50:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533389.830012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px6gB-0002Km-PZ; Thu, 11 May 2023 13:49:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533389.830012; Thu, 11 May 2023 13:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px6gB-0002Kf-KG; Thu, 11 May 2023 13:49:59 +0000
Received: by outflank-mailman (input) for mailman id 533389;
 Thu, 11 May 2023 13:49:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1px6gA-0002KZ-2G
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 13:49:58 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2061b.outbound.protection.outlook.com
 [2a01:111:f400:7eab::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b4dacc47-f002-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 15:49:54 +0200 (CEST)
Received: from BN9PR03CA0376.namprd03.prod.outlook.com (2603:10b6:408:f7::21)
 by CH3PR12MB8075.namprd12.prod.outlook.com (2603:10b6:610:122::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Thu, 11 May
 2023 13:49:36 +0000
Received: from BN8NAM11FT071.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f7:cafe::b6) by BN9PR03CA0376.outlook.office365.com
 (2603:10b6:408:f7::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22 via Frontend
 Transport; Thu, 11 May 2023 13:49:36 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT071.mail.protection.outlook.com (10.13.177.92) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 13:49:36 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 08:49:35 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 08:49:35 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 08:49:34 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4dacc47-f002-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ANU9ngaayjkuMcLpE7Y6r/4X0GMKzDGCk/2XAbOab313QD4wnuURGscmXQWnCVu588rHfJdLdl3xQ9lMl7vFX+/cY2yfWyYvpQrVQZ7//FZDJu06MHrPYLeR2qmC1SdDBuwHrHPA3TKd5I7r/gkFiUcpgthWOWI8BCKvLVLa1sFIS1/4YbdSpAn8QJTlPPHJ3nh3y1r/SICNbo0NpgHy16K7alTev8JlrW4O4zzi56talld9SZPJUyMxg3a5DEHpuycnrMxVNNt3iLW7C9zRjkvtSfrAIxSzNh9YyfQ7sVSBEymQg1YnzThnWtDeKoZwp1b851WmKrFVQBvGC4o6BQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=aiKpn8nacWb/SqFV+/0oYlmZUWF/hJmp22PWlXeblk0=;
 b=hE4CgNnKH+ciCMcSj/WLHXaywWoE/6ILCvF3KYMMlLuvYu/x+hYIQpsiAyD8Ma1yv6mtMe0XvpcYkl5NLyrbaE/BpVlOXrdWJ1cnuF+mobV2zoL25jCuuTRlOm/mCJYRkz4pyuC668mOaPYclLGw1lR3/N3EUDZDkeCYoy51QU9OyglVEIk57t851iQyMrEov5b3oUHlCLmc8PuTUQosvKuVhvfArz9PJZknr+Yp2tb3X2wl/h8AxT4V9UE8sCd+gAfiUQWUxx6WIUV7GVcfhLXaA3lEFpTW4KCOu8yxvaOTnC1WdyZMccpscxk5qwgGDqWZ8jzn36Vo6ETUZD5OYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aiKpn8nacWb/SqFV+/0oYlmZUWF/hJmp22PWlXeblk0=;
 b=0jb30b7mWkei3J5q7t5icbOPLE2joK6P5pxjjcI/0RPUTx7AUwlwVSN1PDrCcDzjXnqyWhHxSSxWtu6gn26/GL7VbcoNB1ta0iPSjno1+v0r0P4FfL8/XqxgHq1+La/cw4ytvLStBLqwE0CkdU5xmIwUBkwmoVAg/u42mSkpq+g=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <03fa79ed-2b24-8329-36fd-dd8edc14fa72@amd.com>
Date: Thu, 11 May 2023 09:49:33 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of
 arch hook
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant
	<paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-5-stewart.hildebrand@amd.com>
 <ced11c6e-caf7-3a19-92c8-5c11b18952b6@suse.com>
 <8591236e-5dc2-7da7-fe3a-7cb2ae1ed7d0@amd.com>
 <462657d4-72e7-6266-6ea5-2b9e443f9813@suse.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <462657d4-72e7-6266-6ea5-2b9e443f9813@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT071:EE_|CH3PR12MB8075:EE_
X-MS-Office365-Filtering-Correlation-Id: 7d46e0f5-a9c2-4513-996b-08db52268ef6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 13:49:36.3349
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7d46e0f5-a9c2-4513-996b-08db52268ef6
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT071.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8075

On 5/8/23 10:51, Jan Beulich wrote:
> On 08.05.2023 16:16, Stewart Hildebrand wrote:
>> On 5/2/23 03:50, Jan Beulich wrote:
>>> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>>>> --- a/xen/drivers/passthrough/pci.c
>>>> +++ b/xen/drivers/passthrough/pci.c
>>>> @@ -1305,7 +1305,7 @@ __initcall(setup_dump_pcidevs);
>>>>
>>>>  static int iommu_add_device(struct pci_dev *pdev)
>>>>  {
>>>> -    const struct domain_iommu *hd;
>>>> +    const struct domain_iommu *hd __maybe_unused;
>>>>      int rc;
>>>>      unsigned int devfn = pdev->devfn;
>>>>
>>>> @@ -1318,17 +1318,30 @@ static int iommu_add_device(struct pci_dev *pdev)
>>>>      if ( !is_iommu_enabled(pdev->domain) )
>>>>          return 0;
>>>>
>>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>>> +    rc = iommu_add_dt_pci_device(devfn, pdev);
>>>> +#else
>>>>      rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>>> -    if ( rc || !pdev->phantom_stride )
>>>> +#endif
>>>> +    if ( rc < 0 || !pdev->phantom_stride )
>>>> +    {
>>>> +        if ( rc < 0 )
>>>> +            printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>>>> +                   &pdev->sbdf, rc);
>>>>          return rc;
>>>> +    }
>>>>
>>>>      for ( ; ; )
>>>>      {
>>>>          devfn += pdev->phantom_stride;
>>>>          if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
>>>>              return 0;
>>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>>> +        rc = iommu_add_dt_pci_device(devfn, pdev);
>>>> +#else
>>>>          rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>>> -        if ( rc )
>>>> +#endif
>>>> +        if ( rc < 0 )
>>>>              printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>>>>                     &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
>>>>      }
>>>
>>> Such #ifdef-ary may be okay at the call site(s), but replacing a per-
>>> IOMMU hook with a system-wide DT function here looks wrong to me.
>>
>> Perhaps a better approach would be to rely on the existing iommu add_device call.
>>
>> This might look something like:
>>
>> #ifdef CONFIG_HAS_DEVICE_TREE
>>     rc = iommu_add_dt_pci_device(pdev);
>>     if ( !rc ) /* or rc >= 0, or something... */
>> #endif
>>         rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>
>> There would be no need to call iommu_add_dt_pci_device() in the loop iterating over phantom functions, as I will handle those inside iommu_add_dt_pci_device() in v2.
> 
> But that still leaves #ifdef-ary inside the function. If for whatever reason
> the hd->platform_ops hook isn't suitable (which I still don't understand),

There's nothing wrong with the existing hd->platform_ops hook. We just need to ensure we've translated RID to AXI stream ID sometime before it.

> then - as said - I'd view such #ifdef as possibly okay at the call site of
> iommu_add_device(), but not inside.

I'll move the #ifdef-ary.


From xen-devel-bounces@lists.xenproject.org Thu May 11 14:00:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 14:00:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533400.830031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px6pt-0004XH-24; Thu, 11 May 2023 14:00:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533400.830031; Thu, 11 May 2023 14:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px6ps-0004XA-VQ; Thu, 11 May 2023 14:00:00 +0000
Received: by outflank-mailman (input) for mailman id 533400;
 Thu, 11 May 2023 14:00:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1px6ps-0004X4-4r
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 14:00:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px6pp-0004bJ-O4; Thu, 11 May 2023 13:59:57 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.27.46]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px6pp-00065i-1E; Thu, 11 May 2023 13:59:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=w0U2ROCuByYabEe51ltdyQZ0BmIHcwVWCroGR66TcbM=; b=rfgWtkjdcFgE/9ws/IvDh9omFU
	l1+6yRqdwkkLuPDJiVK0eIyZ6T8I5dOq7W0IQUagLrE1iYy4MsxgT4cNupxkR0zj+WDuDPHpdQC+d
	ur5gJgOSb6WKpoO0Ldux3zadY8xDGsmMjcWVaZ1umzi1kJV5SNU0ZljGgqOgeHOY7Quo=;
Message-ID: <f07ceaef-d525-faa8-8911-77c588e85102@xen.org>
Date: Thu, 11 May 2023 14:59:54 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of
 arch hook
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-5-stewart.hildebrand@amd.com>
 <ced11c6e-caf7-3a19-92c8-5c11b18952b6@suse.com>
 <8591236e-5dc2-7da7-fe3a-7cb2ae1ed7d0@amd.com>
 <462657d4-72e7-6266-6ea5-2b9e443f9813@suse.com>
 <03fa79ed-2b24-8329-36fd-dd8edc14fa72@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <03fa79ed-2b24-8329-36fd-dd8edc14fa72@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

Can you make sure to CC the Arm/SMMU maintainers as I feel it is 
important for them to also review the generic changes.

On 11/05/2023 14:49, Stewart Hildebrand wrote:
> On 5/8/23 10:51, Jan Beulich wrote:
>> On 08.05.2023 16:16, Stewart Hildebrand wrote:
>>> On 5/2/23 03:50, Jan Beulich wrote:
>>>> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>>>>> --- a/xen/drivers/passthrough/pci.c
>>>>> +++ b/xen/drivers/passthrough/pci.c
>>>>> @@ -1305,7 +1305,7 @@ __initcall(setup_dump_pcidevs);
>>>>>
>>>>>   static int iommu_add_device(struct pci_dev *pdev)
>>>>>   {
>>>>> -    const struct domain_iommu *hd;
>>>>> +    const struct domain_iommu *hd __maybe_unused;
>>>>>       int rc;
>>>>>       unsigned int devfn = pdev->devfn;
>>>>>
>>>>> @@ -1318,17 +1318,30 @@ static int iommu_add_device(struct pci_dev *pdev)
>>>>>       if ( !is_iommu_enabled(pdev->domain) )
>>>>>           return 0;
>>>>>
>>>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>>>> +    rc = iommu_add_dt_pci_device(devfn, pdev);
>>>>> +#else
>>>>>       rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>>>> -    if ( rc || !pdev->phantom_stride )
>>>>> +#endif
>>>>> +    if ( rc < 0 || !pdev->phantom_stride )
>>>>> +    {
>>>>> +        if ( rc < 0 )
>>>>> +            printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>>>>> +                   &pdev->sbdf, rc);
>>>>>           return rc;
>>>>> +    }
>>>>>
>>>>>       for ( ; ; )
>>>>>       {
>>>>>           devfn += pdev->phantom_stride;
>>>>>           if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
>>>>>               return 0;
>>>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>>>> +        rc = iommu_add_dt_pci_device(devfn, pdev);
>>>>> +#else
>>>>>           rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>>>> -        if ( rc )
>>>>> +#endif
>>>>> +        if ( rc < 0 )
>>>>>               printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>>>>>                      &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
>>>>>       }
>>>>
>>>> Such #ifdef-ary may be okay at the call site(s), but replacing a per-
>>>> IOMMU hook with a system-wide DT function here looks wrong to me.
>>>
>>> Perhaps a better approach would be to rely on the existing iommu add_device call.
>>>
>>> This might look something like:
>>>
>>> #ifdef CONFIG_HAS_DEVICE_TREE
>>>      rc = iommu_add_dt_pci_device(pdev);
>>>      if ( !rc ) /* or rc >= 0, or something... */
>>> #endif
>>>          rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>>
>>> There would be no need to call iommu_add_dt_pci_device() in the loop iterating over phantom functions, as I will handle those inside iommu_add_dt_pci_device() in v2.
>>
>> But that still leaves #ifdef-ary inside the function. If for whatever reason
>> the hd->platform_ops hook isn't suitable (which I still don't understand),
> 
> There's nothing wrong with the existing hd->platform_ops hook. We just need to ensure we've translated RID to AXI stream ID sometime before it.
> 
>> then - as said - I'd view such #ifdef as possibly okay at the call site of
>> iommu_add_device(), but not inside.
> 
> I'll move the #ifdef-ary.

I am not sure what #ifdef-ary you will have. However, at some point, we 
will also want to support ACPI on Arm. Both DT and ACPI can co-exist, 
and that would not work properly with the code you wrote.

If we need some DT specific call, then I think the call should happen 
with hd->platform_ops rather than in the generic infrastructure.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 11 14:11:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 14:11:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533405.830041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px70X-00073i-1V; Thu, 11 May 2023 14:11:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533405.830041; Thu, 11 May 2023 14:11:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px70W-00073b-V2; Thu, 11 May 2023 14:11:00 +0000
Received: by outflank-mailman (input) for mailman id 533405;
 Thu, 11 May 2023 14:10:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px70V-00073T-8j
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 14:10:59 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2062c.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a658fa90-f005-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 16:10:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7892.eurprd04.prod.outlook.com (2603:10a6:20b:235::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Thu, 11 May
 2023 14:10:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 14:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a658fa90-f005-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EhKtI0n8/ZME3zsVh1IjI5eNEIlHR+vUxpwEwP87lxiSqRZib8UuBaCYZTqLkEUzzuxujgSwG3Nfth4EQI7fYgdRP9S1eK4R/uqanJqAU771cVNlZScRE+9LRc3k5sDzBCJGMw6qKRCEDZHGi4Y4gOFAx6u2naQn8W+hmohQ+ZUGoiUYA/eiK6FbY/Oh5LGdApa2/Mi4GGD0omI3ZqnCl+gKHmm2VnbSLI3hJMQoI6gcsTsDCtVx22XL1fWQwp2gP68Ak1OrE2WT+vjyEPvxVKT7Xf1vkikc9m4jHG3TYNISKKjQJ38OEGtlEjTR3FT3ta2DvKiihtLh5pppHc45WQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=60h4Ji5aH6BWI3I0HTaMMmDfMGn15K/ZLYgwIKV8H2o=;
 b=D1YFlvTFyMtKS0uK+38NuaLrtuOaR83CXYiK1ALcQpYbbIfwmhWurGPo/4lTfynx5WkM94CZJOf3qPAPyGfJeAowSPsnHgRMkecdVoj0TZFLK/0pcLKAwzqVC5ONo4EH7UhOF28ITvHztoYJwoVmEfVk0+lCy5QeUXi2zYexM7xe/NZTG28k7WK55GLLiWd530f0OiCEx2kptur4A1HEIU1OV77Pif9KwqWS4lpVw2jG4EI/19tU/xCyPhljUJ/2Wm9yBIU/7B6pWwM8madC5asCgof8weUUjRgkvdVNKrM+9etvOr8kN0MiXzSjD12i4YtAZlo+aVj6MtFU6EZ8Dg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=60h4Ji5aH6BWI3I0HTaMMmDfMGn15K/ZLYgwIKV8H2o=;
 b=xdcyy0Y9Ohr4WiX31W4TCDfy2WyRrw7JGV/BWW0yDCsnJXD+maZfOELwAYQeIcRK5B0YP6udvcxeFniRUPAfqKcwRTfZTO887oevexoFKSP+NBkvzmHKWPX9Py/Cbn60btwzVObb2UF//7/3cH7DAb9Or9WCrReOvYFdWfE5uoy26BSFp0oSp466oAPCBHE1Ra/Jo4O8kAQGk42TdRJISMIQmr48xJyoC8pvpgJaCfTtNi7JrRZEAAKBf3voAhWc6eBnAgbddUPWK4NJvx8JASBF7ktAiKQknUQvD7p+E3ce65aJEy+W96fcyqedvSGMkW6l0K7e8OiboGBxdB5vRA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <559c7f4f-113e-8e58-d4d0-3c0c36f27960@suse.com>
Date: Thu, 11 May 2023 16:10:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to
 userspace
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-8-jandryuk@gmail.com>
 <7db38688-1233-bc16-03f3-7afdc3394054@suse.com>
 <9cf71407-6209-296a-489a-9732b1928246@suse.com>
 <CAKf6xptOf7eSzstzjfbbSU0tMBpNjtPEwt2uNzj=2TucrgFRiA@mail.gmail.com>
 <80ccf9c7-5d22-b368-dac6-01fe6cec7add@suse.com>
 <CAKf6xptLpj_L_G3Qk+KA-yaTcaMHLJLL9soFP9HD6Ro+8Lk7CA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xptLpj_L_G3Qk+KA-yaTcaMHLJLL9soFP9HD6Ro+8Lk7CA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0081.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7892:EE_
X-MS-Office365-Filtering-Correlation-Id: 088fe856-7f58-471f-a4d8-08db52298878
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ptmc2Et4InGtcHMv3cPfXAakV0L7oc/IO4n5d+oGbAt7P5syj9mwG6PXVy+F7zmMbPpPtLfLNsrVGhJZ5Xpj9tk7pKd7Jx+j72Uw8h96zkVHk4B3ATvahv8CEPSPcwn636p/QtvSKemkHazD9jKsqkpbBsx9mzB+HRZF4rgb+sDVfali4O/5IdK92/MSAKegWgDosPTiBqIlZ9P4RIAUiePxU6wCUWmeWq266HL6+2DQkw9jTWkUZrBagOq1TN4JhBLDKtF2D/9XVG7y1fUXfGcnbWM862uGrwt67VBXqbjwAa2253nffqOqzP5y2sqok82nQoFLPkGNVl38fmcBGpYjoGN+7udMncRvS70xXy4ZIzIJKCvhDx4eXlHDTMbJugsUGzirSCfs+f0cYPOalDZQoIp694D5AnqSoaWjhpF7PQ7Y287iHwzKpW23AlD8OTm2hOReb4tOBbiOhlGTV9S2vs2J4Lpey5cITJyNyxXEMw2G1cCNaL4R0T66o5asnOKvb9LVilZJXFpziCT7B5k5L4WnMyJiDmC5e1fmc9b7wjQ5we8WwbuyXoHHa8rS/0Ge7iZgmUPW4D+d/MIWg549s+cRIlhQweXKA1knfqOdiMTQ4tMTzpUYCqVYpuc99iaTAiRQxxvztTmEtJoaeg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(136003)(39860400002)(366004)(396003)(346002)(451199021)(86362001)(38100700002)(36756003)(31696002)(31686004)(8676002)(8936002)(966005)(26005)(6506007)(478600001)(5660300002)(186003)(53546011)(6486002)(2616005)(83380400001)(2906002)(66556008)(66476007)(316002)(6666004)(41300700001)(54906003)(66946007)(4326008)(6916009)(6512007)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SEx2WjduWnM1a0c0VjdvMnRYT0tBSzdiL2UvSnN1S2pwTjA2Z2wxd0NCdUhL?=
 =?utf-8?B?bnVYbWRLeXp5bnVZVkNDS1JnL0xRYXN6dUdTQTlwYWt4Zi9vZm1QditFR1dh?=
 =?utf-8?B?bG9EenB0SVNOTkdtSXVGZEdCNS9rRWlpUzcrNG5TcnZhYUFmRER3QzRiRnl3?=
 =?utf-8?B?WHJ2Q1Q0SGFQZmxCUXhSRzZvVGJGbjNMVGgyaFdubUFSeG9UNENzcmt6S1BM?=
 =?utf-8?B?cTZCS3pSTWlxb1VuSm5UVEZ0cWttQXBPSUxwNitwT1RRMG1rTUFXUVdud2x4?=
 =?utf-8?B?UkZzWm1USXhMOVV6M3pyQzBVSi9IS0h3bUEzakd2OEtUVjB1TmNhTVpqWnZK?=
 =?utf-8?B?MW5sT2pDYXg0MlNob1dRWlJkS1VYWkFENlg2M2hLUkVwa20wK0VvZEs2MVYx?=
 =?utf-8?B?YjZiZWMydUlIZFp0Y1VxMHB3SkwyUzRrc0pla1VNbDJpZXpoS2JyUzkwTVpS?=
 =?utf-8?B?ZmhmU3BWdDVKRSsrZGo5Q0NxOG96N0JYQjAvbHZYTHQzNUhqcFNKdk9JZTFB?=
 =?utf-8?B?akJnRk9qdFFkbzJLb0czTk52OG1sc1hCUTR2UnBqU0FReldGRWI4T2lnR2xv?=
 =?utf-8?B?RFZXMEdWelIzZzBkQXY0Sk9leUJ5V2ZSZmFCbjFQWk53WGtoZVhqcDk3aXgw?=
 =?utf-8?B?QzFEaVdoT3hpVk1jNW4xSzR6c1VHSVBYRmRuL2Zzak4weC9iQ3MvYVJ5UW0z?=
 =?utf-8?B?U1Z2ejYvL3ExT09NRjRpWGhtcFBCK29rNEFaVzlkSm9BZ2FsWVRKSlBucEVB?=
 =?utf-8?B?eGtmSnBaNnVPbVNiRXgvYXp4YmtLU2c5VVI1ZnY1aTVCTkZTMzVYcWtHdnBI?=
 =?utf-8?B?SUU1K2Y1U1R5dGtESDBoSWJuUmp5NmVkTitSc3BDVzVKRkRaM3gvSG5aczlt?=
 =?utf-8?B?ZlhXalo1L3dHeXNMUmgrd3ZKR1BFK1dQSE5DSVpYKzJWNFVaaUNZR296Q1RZ?=
 =?utf-8?B?WGZCTWV0bzY5RmIyaXZiY2poTm13OE5mbVNVZ28wV1RLOWg5V0dtMytPVVFs?=
 =?utf-8?B?UUFSUG1iMkQyR29HOWpiWjV1aDNIdThVOGpnVXZJdXl1ZEFtTU9oS2pseHlw?=
 =?utf-8?B?Q0g0TEU2Tmh6dlFOa3R6aDZMMEM1d1lZZlBEbGZFU1pTY0NiZENLUjBhbVlS?=
 =?utf-8?B?MEJCZTZ3VjlHYy9uS21Lc0RjU3FhRWxCNlZHTmpOREtWYjk0STRNaW1vcCtj?=
 =?utf-8?B?YXgwS0d0WmpjTk5hRzNVSHNVcU5VRy81aU5vYVEwRUN5bmhBTkMrQitjb3hH?=
 =?utf-8?B?UjNxTGZMWWlkeGN6UWFEMStHM1NwVm9NdDUvK3lpdlBqcXp2WEoyMC80MWR4?=
 =?utf-8?B?YmVNL2xvNS9tVFlIZWtFM0I4VmhmUkR1M0lLN3V4WlhFckFZV1Jjc2tDUjVu?=
 =?utf-8?B?R3FTRTJTZXZjOWpZYXNYM25lYjZBMTVWeExHbEtTamJuNG1KNEdHenlWZnc0?=
 =?utf-8?B?Z0ttdVQwYnlBbjBQQ1M0dUp5bjMzR0lRR3hYbGFHc0RSeTVwSWhzbEtpaVU4?=
 =?utf-8?B?MGI0ZEtJVC8xRnJQVXdaTWIwL0lPS3h6T2tZK1RXRkc4b21vc1ZFTXJGaW5s?=
 =?utf-8?B?V0paNWkwTzZZbGg4V2QrSWJ4Q2dDQXhHYTZsZjB1TEY5WVZTVWpZZWprMXlw?=
 =?utf-8?B?WUtGVkNjOU5ZaFRUeWNZNUhBWmFES3JqOFYwWEd1TXdDWFdnYkdVUnozUUZ0?=
 =?utf-8?B?TFlVcldrV2k2emFCMG1jNWtwZTVsblU0dHRhWEwwK09WdUhuZjFKdEpHYjhr?=
 =?utf-8?B?ZTVwM2laQVFLaXlIMWRvNVpBTURDRnNxMjBkcUZ5WWg3ZHBtU0pvcklBcHI1?=
 =?utf-8?B?NEZDN0JGbThhMnIrU0VGNFJ0U1RESXpOb1N2RHpiUDJ3V0VUYzJ5TGJLTmQ2?=
 =?utf-8?B?ZS9pMTZuTGtWaGswTXU2aWZCZU9ybWg2MEdScS9uekxVSkZDWFNUWDcwRGc3?=
 =?utf-8?B?M051Wk9tVCt6YnZFUG04bzFjbitoWTRiM0xOT3UzMlFOWmRSK212bm1RS0ZC?=
 =?utf-8?B?ZU5XTE1sRzVINmdEU05ScCtXNFIvaFUxeUZrM3hsQ2VTNWc2NDJlMHBZMHNa?=
 =?utf-8?B?enMxcDh2RW9LMzdrRndlZTFCQ3U3THAxRHp4bHc4WnhmQkkvbEEvd2E4bm4y?=
 =?utf-8?Q?iyNusO6Th/G/CLJwJ9+8yN2Eg?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 088fe856-7f58-471f-a4d8-08db52298878
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 14:10:54.3532
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: s4jB2rlhA0M6jcossvqzjXqUHy2r89MclCQT7yjRJe/P9fpJP+/HgOY6sdAFXUXif05OjWQ1PKbk/krVRmnGPw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7892

On 11.05.2023 15:49, Jason Andryuk wrote:
> On Thu, May 11, 2023 at 2:21 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 10.05.2023 19:49, Jason Andryuk wrote:
>>> On Mon, May 8, 2023 at 6:26 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>>>> Extend xen_get_cpufreq_para to return hwp parameters.  These match the
>>>>> hardware rather closely.
>>>>>
>>>>> We need the features bitmask to indicate the fields supported by the
>>>>> actual hardware.
>>>>>
>>>>> The use of uint8_t parameters matches the hardware size.  uint32_t
>>>>> entries would grow the sysctl_t past the build assertion in setup.c.
>>>>> The uint8_t ranges are supported across multiple generations, so
>>>>> hopefully they won't change.
>>>>
>>>> Still it feels a little odd for values to be this narrow. Aiui the
>>>> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
>>>> used by HWP. So you could widen the union in struct
>>>> xen_get_cpufreq_para (in a binary but not necessarily source compatible
>>>> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
>>>> placed scaling_cur_freq could be included as well ...
>>>
>>> The values are narrow, but they match the hardware.  It works for HWP,
>>> so there is no need to change at this time AFAICT.
>>>
>>> Do you want me to make this change?
>>
>> Well, much depends on what these 8-bit values actually express (I did
>> raise this question in one of the replies to your patches, as I wasn't
>> able to find anything in the SDM). That'll then hopefully allow making
>> an educated prediction on how likely it is that a future variant of
>> HWP would want to widen them.
> 
> Sorry for not providing a reference earlier.  In the SDM,
> HARDWARE-CONTROLLED PERFORMANCE STATES (HWP) section, there is this
> second paragraph:
> """
> In contrast, HWP is an implementation of the ACPI-defined
> Collaborative Processor Performance Control (CPPC), which specifies
> that the platform enumerates a continuous, abstract unit-less,
> performance value scale that is not tied to a specific performance
> state / frequency by definition. While the enumerated scale is roughly
> linear in terms of a delivered integer workload performance result,
> the OS is required to characterize the performance value range to
> comprehend the delivered performance for an applied workload.
> """
> 
> The numbers form a "continuous, abstract unit-less" performance value scale.
> So there isn't much to go on there, but generally, smaller numbers
> mean slower and bigger numbers mean faster.
> 
> Cross referencing the ACPI spec here:
> https://uefi.org/specs/ACPI/6.5/08_Processor_Configuration_and_Control.html#collaborative-processor-performance-control
> 
> Scrolling down you can find the register entries such as
> 
> Highest Performance
> Register or DWORD Attribute:  Read
> Size:                         8-32 bits
> 
> AMD has its own pstate implementation that is similar to HWP.  Looking
> at the Linux support, the AMD hardware also uses 8-bit values for the
> comparable fields:
> https://elixir.bootlin.com/linux/latest/source/arch/x86/include/asm/msr-index.h#L612
> 
> So Intel and AMD are 8-bit for now at least.  An implementation could
> use 32 bits according to the ACPI spec.
> 
> 8 bits of granularity for slow to fast seems like plenty to me.  I'm
> not sure what one would gain from 16 or 32 bits, but I'm not designing
> the hardware.  From the earlier xenpm output, "highest" was 49, so
> still a decent amount of room in an 8-bit range.

Hmm, thanks for the pointers. I'm still somewhat undecided. I guess I'm
okay with you keeping things as you have them. If and when needed we can
still rework the structure - it is possible to change it as it's (for
the time being at least) still an unstable interface.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 14:19:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 14:19:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533410.830051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px793-0007oP-Sc; Thu, 11 May 2023 14:19:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533410.830051; Thu, 11 May 2023 14:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px793-0007oI-Ob; Thu, 11 May 2023 14:19:49 +0000
Received: by outflank-mailman (input) for mailman id 533410;
 Thu, 11 May 2023 14:19:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px792-0007oC-G6
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 14:19:48 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20613.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e14ca37c-f006-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 16:19:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7399.eurprd04.prod.outlook.com (2603:10a6:10:1a8::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.19; Thu, 11 May
 2023 14:19:43 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 14:19:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e14ca37c-f006-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QJN+gHmpOrH9SdejrZCGFysRG4Rr5WzaDldsjcCK1M5AVCuQJYX54olFq/QErIB2VvGQQRb8dgw5kySmjII0C6/6qI7RFs9xLKNqVautPGGx5ZgG97EIF9tavVKh2mynYtOqG567rkRsMIgEnf0Km5WCQg5F8qT8uLi7YpZrAQLXiaLjPDBDDhGxE5TC/ah0nruy1csE2lJecp+HSHxHIAMfz79AR1Z5TtoitxnoFKr1mtx/6Fus3kpGE4CjaJS1L8lPBvOcrxlM3zizM9TqRfNHPd3/eH7G44j3YJrat8ljrAi406VEnjU+tsQLYHS+FyLJlTDiXEKk9j70+hSCBA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=s1rjBGOSvHegrtNBhVwTiHkfdYXlTGAw52sSIXeGlE4=;
 b=fdvoGLcW7N3ZuDlW+JFUVnuBCzPoD+XL3EUB2S4opYirGQJJ6Z52Aep0HFSU63PVj1RCR6IxzXBdMTP37ZQLaXd5eKTVTJ7jyXaaPxTYiC/6tWU2EXEQbwf22WG8pF4CUjm7mapH9I7MK7enzOIhifHkH/2nY4Js6NUA7KVMy4ME6UmQilqG86saCPpjqruhfo92vYlTz/6VO4r2WIGNhCiLmA7Y3knDi0T2Qs8c5Vjskx4BOAgLZELBcHJp3PbRJ6r2Q80E7G2aZnDvEVWhZWOp0Sk5Z3ZMeUDQMBUPVmPG4GEKYioQWIbDcnnA3USbGGU8/EE4/opv5dylDr+ymA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=s1rjBGOSvHegrtNBhVwTiHkfdYXlTGAw52sSIXeGlE4=;
 b=lXDaqvbsjMCycRDvR4qqfQ7y2dACMzO2lFTzFPP1sRdz/QrXvMDAImCJeWVIcU432we68Kl32ZZe3Ny06ERF8uCdQunD983f9nnM7r/jrBjYkqPZ5k0dKYeoT1lrdnAuDMSzPtTGWt2jcPKPO7bVgd07R9ckIX8K6PpLig50gZ9BHxWJpHy6P3qBptNv2liiNOgODQuvn3W7FAbh1qZTGlyFJqJ0/wctvHxNbmdNHYn/mmqbjmojH4mQw0Yo1ivJNpasl+45PRf+4S8EizsHQLgnfmiuN4Ld07HJvcFqGlJdOY0Iz6aPN3q7hejhs1SrGba94zGX/zpvFaBIVwS9mQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <904896c5-45dd-45a0-b38b-3f72701194f4@suse.com>
Date: Thu, 11 May 2023 16:19:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v1 4/6] pci/arm: Use iommu_add_dt_pci_device() instead of
 arch hook
Content-Language: en-US
To: Julien Grall <julien@xen.org>,
 Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230501200305.168058-1-stewart.hildebrand@amd.com>
 <20230501200305.168058-5-stewart.hildebrand@amd.com>
 <ced11c6e-caf7-3a19-92c8-5c11b18952b6@suse.com>
 <8591236e-5dc2-7da7-fe3a-7cb2ae1ed7d0@amd.com>
 <462657d4-72e7-6266-6ea5-2b9e443f9813@suse.com>
 <03fa79ed-2b24-8329-36fd-dd8edc14fa72@amd.com>
 <f07ceaef-d525-faa8-8911-77c588e85102@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f07ceaef-d525-faa8-8911-77c588e85102@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0120.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7399:EE_
X-MS-Office365-Filtering-Correlation-Id: a42c82a9-9bb9-4b84-81f3-08db522ac400
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/P7GaYssQzmY8SePEHyYCYTg1bDCrZV8LUIc3FK/sWdBlf1CztHoAFS9DQwleM4ndXiGRR1n6crLo3XgfJAYKW0iye/ekoqrt5Ob6S1scL6bgh4rPvBZF7XkmLnl7t7RCFuUsDZYCqi/64OhoZWR/tf6me9evvcv8Y4NWJpwe67PjwcEOWzQqbkOtUJOVaUkfK1zgX4U7bfoGKYZXr06bfVILlPxudLoD/NDWng6M9sy7WD9NuymhJxDGO1u63dM+4tWlu5uo811yX9gMvthQyzcZniMrrV6FhUcCRgANLJrWsbMizUs/TDoaT7IDlIrWnxQ4LA59HK0uEfCk/XP97Ep8OszuwOS7h9+MYEa0dQo8+YAUye4k4pN7LtD8nerrkkOaeCzA9l9N0Sqi7n2ng/cof8x7+90GIAd/X4ud3WHilnReTnxv1qEPQSjP6OTM5eNTXuECBImzvkcnTERN0ZmSym27QU1YPxnec1C0jMwuq+a7ZmD12QejorDU8OFx7e8kjxeCah0aLCRJJFtXDjqnD1earaL/u465xxLkU4dLUtMo0nxw1sv8PGOV5nre/ivTbxAHh7ufX958IpUXzb3MWDuXbNArqAfm7hPJi+04TvgPTJq/ayBuRSuoiajRFdKV92kbPf2WupJA9w/JA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(366004)(346002)(39860400002)(396003)(376002)(451199021)(5660300002)(41300700001)(8676002)(8936002)(31696002)(31686004)(86362001)(66946007)(66556008)(66476007)(2906002)(316002)(4326008)(54906003)(478600001)(110136005)(38100700002)(6486002)(6506007)(2616005)(53546011)(186003)(6512007)(26005)(36756003)(83380400001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?Y216NGJUeENFdEtHeGxsTGRSTmxYb2tTYTdITGtPZnNyclEyYlVtOGFRRGQx?=
 =?utf-8?B?RXdLaHVJNS92MkJoZGR1cEo4cHVjZjE3OUZIWEVwQWZXWW1FZzN1SW1ObGlH?=
 =?utf-8?B?T1FMRThSZGU4cWJ6SG1ieXlXMlZ6S1VlMzd5a0ZPT2dyaDJxQko4NGkyM2VT?=
 =?utf-8?B?UjErQzVlWThuS2RKeG10c3l5MTJTbzJHRjdjOHBmVzFKb0tBeUM4VDkzbFlF?=
 =?utf-8?B?cjRvcjBZQnFQbUI3cEdiMVFKdHpGU3dxNmNVQ1VEL1VQZzdOTEJHMTNvV0pX?=
 =?utf-8?B?UmdVdFh3T0FENk9oKzFQcHovYzh6c1lNb2YrVDZNb2szcXBHaXZGODNMU3ZM?=
 =?utf-8?B?a0NNMDFiK3dDS3E5OXArckgvcmRMTGpuMTQ5UmJPVGtvK1BIZXJyQllHNUNM?=
 =?utf-8?B?OHlmUTVtTTd2QjJ5cGRmcHY4U3pKNTRyMzh4RGpQOE9SVUgvUnI1eFkrRklN?=
 =?utf-8?B?NmNvRTJHd1dMNHE5SmRtbW5hb1Q5K0VjQ1JUd3ZWWU1RT1VIUmc5QWd2UVdX?=
 =?utf-8?B?dXR5UXFMQVQrdVFTYk1pOFZhZVQ4YjBjb2xUZlZBUHFuN0FjdEpqcWZoQzha?=
 =?utf-8?B?RjdXT09JT2llWWoyeUdvVmVINmFQdW90UWtUaUc0U041RXdpZjNJNDZOVENY?=
 =?utf-8?B?UE9hcElsUEVrYTFxcEZUQzY0NVI3bXJyNXdraGZCczdNRFZkU0pxYWdENnRk?=
 =?utf-8?B?dGhzcFlVVjhWWllOYmFNZGhxV08wNWxCUit6QUJ4azYvM2gzV0o3MDNxak16?=
 =?utf-8?B?dG9BanFOeG5VWDRUUFlXK25hMjZQSlVwZFdBTk5RYnJLWE9LSzc0RlQzMHR4?=
 =?utf-8?B?a1RzYUJiUDRjeXEvcnkvcGpyMktHTUFGOU9aYXk2b3FUMDVhYkc3K0JMNHZx?=
 =?utf-8?B?ZnNCbWNHTWlVMWF6TDMwZUcwUjF4YUNsVGppdHJ4K3VaYkpaMERqd0pMSDdl?=
 =?utf-8?B?K3J2NUlUbk5xM2wzSGhDckp6R1cwRmVOWTEwOS9uSkREZHZuZldQYmtGTEtF?=
 =?utf-8?B?NkZqY2t6VFFZU2VUK0VVUjBMYU12QnhscTRGKzlneTBCSHBsNGtBdVNZNjJJ?=
 =?utf-8?B?b2dWeEFPWFM0cDc3ZE1wMDI3SVprMEFKN3FVMHlER0hOMml0VU00aDRwSjE0?=
 =?utf-8?B?NkZpMjdvMVdORjZiK1loTWxMU3YyMmZlMkRFZlh3dTlyL1BYZ1VBaHpnaFAz?=
 =?utf-8?B?UThLajhjc2hhUnNOVGQxOFpacFkxaWIyYjVoMnM3TE9ZbVVJWm9DOWR3RWRC?=
 =?utf-8?B?M1lJVXhVOU9FYlFFSUh6L2R0cHVsWVkxVm0wejcrakc4VG1mRzc5SzcrVG5j?=
 =?utf-8?B?VU1vV3J4ODNSQTlYS1pWc2hTMndweWYyL1JnV0l3UUhrKzdBUWNGUmlERStX?=
 =?utf-8?B?ejJRY1pQdTgzVkZEcXFHL3VjK085bFFIcWxsRmNTSEw1ZnVtNEpyUzAxRDY4?=
 =?utf-8?B?UmxCK3JIUVh4SFFtak9WUTBCSGJsTzM2VkxWSlFVNTVCU1dWalZlMEFMcitY?=
 =?utf-8?B?ZTFQRHNJOXVOSC85WkxjYXNERVg0Y2dNY2dTaUZjVEtLNXkwK1VzQVU3cTNv?=
 =?utf-8?B?V25tRDd4emVSUWZPWmc3OW5ia2hnNUthMGw2dnEzNGpMYlBEb3VkUUtRcnlK?=
 =?utf-8?B?TEc2UzAwMUZLcXJmaUIvVGVaTVgySzFHOTJTRE5EQi9BSzU5OU45SFpmTk9u?=
 =?utf-8?B?VUhRVk9HNmd0VXBMdDFhMG5jRXFLclNpeGpFdXNtNjVQUVFZZXMwOU43ZWhy?=
 =?utf-8?B?V0NqTGdUajlXMWl5VkEvRldYWWU5bjI3MTNzaWF2ZWlPZW1hM1d6amNGZVVp?=
 =?utf-8?B?Q29Bc1hIRVdHWWlHZHpOYktHd2wrT2xTTENoMU5aTzJnREdIUDhXUVVIcnRH?=
 =?utf-8?B?R29idHBvNWpUVE1YT0pTM2pYOUVqQ1ZFRGZGV2ZzRDJkMVNSMy9yeU9STzlT?=
 =?utf-8?B?ZjI5Y1duTUNKZG16d3J0VWV1WFZRbTlrNXhYU0RYV2diQVJ6R05iRWh4OUZH?=
 =?utf-8?B?clpaZ1JGUDFPNWZCWjg1L0NYWlZjNFk1ekZDcVpURmszTlZUNzhhTGJUcTda?=
 =?utf-8?B?elB3MU15THlUei9Yd0dvN2haS0hiSVdCaHJWUVFsQ2gzUC83cFFRVDI1clZR?=
 =?utf-8?Q?ODWYHSGaa5cwPtl/HcaXEie3l?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a42c82a9-9bb9-4b84-81f3-08db522ac400
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 14:19:43.4605
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +jg8Q5KVCwfOAbmoIEJcpaE4Ozf7Zb7fJ5rwVxKbXjxVvFrM0GuQS7c8NrXH9lEIZSn0PEGmJ7CDjiQucrqRHQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7399

On 11.05.2023 15:59, Julien Grall wrote:
> On 11/05/2023 14:49, Stewart Hildebrand wrote:
>> On 5/8/23 10:51, Jan Beulich wrote:
>>> On 08.05.2023 16:16, Stewart Hildebrand wrote:
>>>> On 5/2/23 03:50, Jan Beulich wrote:
>>>>> On 01.05.2023 22:03, Stewart Hildebrand wrote:
>>>>>> --- a/xen/drivers/passthrough/pci.c
>>>>>> +++ b/xen/drivers/passthrough/pci.c
>>>>>> @@ -1305,7 +1305,7 @@ __initcall(setup_dump_pcidevs);
>>>>>>
>>>>>>   static int iommu_add_device(struct pci_dev *pdev)
>>>>>>   {
>>>>>> -    const struct domain_iommu *hd;
>>>>>> +    const struct domain_iommu *hd __maybe_unused;
>>>>>>       int rc;
>>>>>>       unsigned int devfn = pdev->devfn;
>>>>>>
>>>>>> @@ -1318,17 +1318,30 @@ static int iommu_add_device(struct pci_dev *pdev)
>>>>>>       if ( !is_iommu_enabled(pdev->domain) )
>>>>>>           return 0;
>>>>>>
>>>>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>>>>> +    rc = iommu_add_dt_pci_device(devfn, pdev);
>>>>>> +#else
>>>>>>       rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>>>>> -    if ( rc || !pdev->phantom_stride )
>>>>>> +#endif
>>>>>> +    if ( rc < 0 || !pdev->phantom_stride )
>>>>>> +    {
>>>>>> +        if ( rc < 0 )
>>>>>> +            printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>>>>>> +                   &pdev->sbdf, rc);
>>>>>>           return rc;
>>>>>> +    }
>>>>>>
>>>>>>       for ( ; ; )
>>>>>>       {
>>>>>>           devfn += pdev->phantom_stride;
>>>>>>           if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
>>>>>>               return 0;
>>>>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>>>>> +        rc = iommu_add_dt_pci_device(devfn, pdev);
>>>>>> +#else
>>>>>>           rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>>>>> -        if ( rc )
>>>>>> +#endif
>>>>>> +        if ( rc < 0 )
>>>>>>               printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
>>>>>>                      &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
>>>>>>       }
>>>>>
>>>>> Such #ifdef-ary may be okay at the call site(s), but replacing a per-
>>>>> IOMMU hook with a system-wide DT function here looks wrong to me.
>>>>
>>>> Perhaps a better approach would be to rely on the existing iommu add_device call.
>>>>
>>>> This might look something like:
>>>>
>>>> #ifdef CONFIG_HAS_DEVICE_TREE
>>>>      rc = iommu_add_dt_pci_device(pdev);
>>>>      if ( !rc ) /* or rc >= 0, or something... */
>>>> #endif
>>>>          rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
>>>>
>>>> There would be no need to call iommu_add_dt_pci_device() in the loop iterating over phantom functions, as I will handle those inside iommu_add_dt_pci_device() in v2.
>>>
>>> But that still leaves #ifdef-ary inside the function. If for whatever reason
>>> the hd->platform_ops hook isn't suitable (which I still don't understand),
>>
>> There's nothing wrong with the existing hd->platform_ops hook. We just need to ensure we've translated RID to AXI stream ID sometime before it.
>>
>>> then - as said - I'd view such #ifdef as possibly okay at the call site of
>>> iommu_add_device(), but not inside.
>>
>> I'll move the #ifdef-ary.
> 
> I am not sure what #ifdef-ary you will have. However, at some point, we 
> will also want to support ACPI on Arm. Both DT and ACPI co-exist and 
> this would not work properly with the code you wrote.
> 
> If we need some DT specific call, then I think the call should happen 
> with hd->platform_ops rather than in the generic infrastructure.

I'm not sure about this, to be honest. platform_ops is about the particular
underlying IOMMU. I would expect that the kind of IOMMU in use has nothing
to do with where the configuration information comes from? Instead I would
view the proposed #ifdef (a layer up) as an intermediate step towards a
further level of indirection there. Then again I will admit I don't really
understand why special-casing DT here is necessary in the first place.
There's nothing ACPI-ish in this code, even if the IOMMU-specific
information comes from ACPI on x86. I therefore wonder whether the approach
chosen perhaps isn't the right one (which might mean going through
platform_ops as we already do is the correct thing, and telling the DT case
from the [future] ACPI one ought to happen further down the call chain) ...

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 11 14:24:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 14:24:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533415.830061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7DD-0000rN-IF; Thu, 11 May 2023 14:24:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533415.830061; Thu, 11 May 2023 14:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7DD-0000rG-Cz; Thu, 11 May 2023 14:24:07 +0000
Received: by outflank-mailman (input) for mailman id 533415;
 Thu, 11 May 2023 14:24:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px7DC-0000r6-Qf; Thu, 11 May 2023 14:24:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px7DC-0005Ov-O9; Thu, 11 May 2023 14:24:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1px7DC-0004Lk-Ay; Thu, 11 May 2023 14:24:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1px7DC-0002m7-AZ; Thu, 11 May 2023 14:24:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wEcBMEvROCeQAhanLknPmalsaEwN0vU39RWOB1u3T2g=; b=2PoZxWYxjdhIoWxUO1eLdWQKdF
	VREudOR161kVB6sBA6mVJpVL99MmUNAqgPs/327LTBDZ6ISoijFHRmUCAy8WOFZnHqY6lvHvN9P52
	dxeHV3pxRcsB+CUTxUcc4PZuTuKoEftzHIrHXUHi/X/YwbA2pFe0vno9ANfqMwkvpPiw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180610-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180610: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d530697ca20e19f7a626f4c1c8b26fccd0dc4470
X-Osstest-Versions-That:
    qemuu=577e648bdb524d1984659baf1bd6165de2edae83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 14:24:06 +0000

flight 180610 qemu-mainline real [real]
flight 180620 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180610/
http://logs.test-lab.xenproject.org/osstest/logs/180620/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 180620-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180600
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180600
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180600
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180600
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180600
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180600
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180600
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180600
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d530697ca20e19f7a626f4c1c8b26fccd0dc4470
baseline version:
 qemuu                577e648bdb524d1984659baf1bd6165de2edae83

Last test of basis   180600  2023-05-10 05:40:33 Z    1 days
Testing same since   180610  2023-05-10 23:38:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Avihai Horon <avihaih@nvidia.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Kevin Wolf <kwolf@redhat.com>
  Lukas Straub <lukasstraub2@web.de>
  Markus Armbruster <armbru@redhat.com>
  Minwoo Im <minwoo.im@samsung.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Xu <peterx@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Hajnoczi <stefanha@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   577e648bdb..d530697ca2  d530697ca20e19f7a626f4c1c8b26fccd0dc4470 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu May 11 14:35:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 14:35:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533422.830071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7Ne-0002VD-HW; Thu, 11 May 2023 14:34:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533422.830071; Thu, 11 May 2023 14:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7Ne-0002V6-Ek; Thu, 11 May 2023 14:34:54 +0000
Received: by outflank-mailman (input) for mailman id 533422;
 Thu, 11 May 2023 14:34:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1px7Nd-0002UE-6d
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 14:34:53 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20603.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::603])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc673756-f008-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 16:34:50 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7769.eurprd04.prod.outlook.com (2603:10a6:10:1e0::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Thu, 11 May
 2023 14:34:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 14:34:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc673756-f008-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EKJWdoDEgCnkmh1XElHZreqgXaJBhQkBmAaBqD9ZpiUrSJxQ24/PiMeVLbWT95whs68Z/OvYY38JqUo8BKtM9uAPgmltajYdE87/sfPnob7HJqWtweGWZbPYoIpu3Goia67/IOssmkFA/iw8zQXuJ0ICYtNgUlOzWEmd+La1s4gAHPvdigrjzropQw4tl4k/f1k39xImsFgpLo8Iz/Nr3lCuoQZrwWsauNea9kNiXNILAtGqejqgNxmKTrEFJbwYpSU1tAsu9zRJz7bkpDYBrQnfLWWO64ZbSN7h1WFYV8CRipLc0nVP0OFBnUnLSR5dSbNks8FWbxlp2lXkEtE50A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PZtfQKxifRk6kmgzpC/rH44iign0BqEwrPFys6tO/Ug=;
 b=norvQMP1yau8l4GrzUlCmqyBTuqmBiYro2VVuzoxXnpZx/AbULYRRWz9oLHrIrYJfdMR6dg1nilZ3krkgIVfZZn+UoduCDmDawJmZcuuAJWuQjjMlUBA+4SpamoukHCxupbCLDij0Oe+fPbNCLWQjwuGtAE04xpG7U2sn0SWsfjqTLKOhCKLBXQdsn29cRw//KHeJ1idGvMxqWj3ltP2gIiGj1zLsrSqBfF9Rp+b3PgYnII7Rq+pAqnJdA2KOt4PYHFIyyr70TzDIWLTEaUyFxLLZbgfERgJ5qBlAAkHx7HGMIJDUQTjemwIrrw2X8wJzLPUyR+hw0msxcnNS/7ikA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PZtfQKxifRk6kmgzpC/rH44iign0BqEwrPFys6tO/Ug=;
 b=GmxCDxjbKfzwVcca7kMFUaqQhoQpfyTLSx5DWoz8ZR8qFFlICBNt3gD69bZIZHI/AVU3RJVtbF39z6e2L89y5mZnna0XWNVKlceiuce9Yx185eKtoHNpjRdWxaaGAuSWlvNizENhvf6hYAZEL4C/lT25uAIcpAoAw9yo2XL0Ks0PUWJZ10QGgnPVG4mxi5lJ+kFB/sTu8VfF+wCFeGhXdJvd9Zuj2Y0chLkg5xJY5/yx4wtw4x1QCNYELYlASZ2uybbLA4Apg0qIpp3+PycqgRUad4qjaF1GqESTBzig5y/DC0L1siCEfIX9NTAXD+pEvrWdU4wRfsnaKI6DB5FmIw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <82c8ce93-a9a7-9309-2b04-8092ca84e7d6@suse.com>
Date: Thu, 11 May 2023 16:34:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] SUPPORT.md: explicitly mention EFI (secure) boot status
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0075.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7769:EE_
X-MS-Office365-Filtering-Correlation-Id: f454d98c-6d97-4b22-d505-08db522cdf06
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EF8Dh6P/frSamtkFen6aIG0SPJ53woQGIuqfB1jiS6cDa6Ns6UnXGmjmA4u6FPE1lFZ41qomx/9HYMiX48WUXJN5r3pyF72AJ6zWKYrVDRhXVtqwrvgQabid3kUP9w2tLjrObB9AHY/sVbW1U19EBKNKLm4QWzUcQrAvIkV4UrFNcIR9kyHiAASX9mt1FG7eNG0NwVqvegYceThgnzWbdjdpS1vC6A4gAHLgXktPXbd9Odw9nvRZKiKac7H/PHIVBV1Fo1/VU4CmuAu1WxM+hLiava5AE0OxOdtOozHwCUE8r82Z5AGoJgzUxrekc0oOL2Ny3eMOsMvUlqD+uhSo7fAqSUN73HwK55QiqhogNXurwuIm2bYaNVEYkC6YPlGVVy9rrI8b9n0AtM7/s5YFJnWIqbOEjECNCvLwCSsbUwXAZES5ilLVR4AkatjtssIsoxw6zfs6CQq0K8Xz5MbhR8HCf9ilH3x5263NxZnF3HMU2TZpbNYCRoVyCsCZ4SH8IGvQLWS6quaI7YUoQ/WhdYrMtRsm8blYNusZgY2fu6t4NpGFLw58R1jKd4XoOwLjxms/zCvOMicMxDtSJLnUIae7TlWwxZ2l67yy1hErWSfeuk4OIifr6VU5EwddzUFLUdaQyxZNLY2Nb9yBuglIFA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(346002)(136003)(39860400002)(376002)(366004)(451199021)(41300700001)(26005)(186003)(2906002)(4744005)(6512007)(6506007)(5660300002)(38100700002)(8936002)(31686004)(6916009)(316002)(54906003)(478600001)(83380400001)(2616005)(8676002)(36756003)(6486002)(86362001)(6666004)(66476007)(66556008)(66946007)(4326008)(31696002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OWRrUWxNZTlaS21EOWRyMjJPM0JpdzNLYUx5L242bHluOTkxU2kxZzJhYk9v?=
 =?utf-8?B?dFl1ZmlFUUxkbGdvVnV1YkNyMWJRNlF4MWI1L0ZWRHhxTXBKUnA2RDRyQ1B0?=
 =?utf-8?B?VkNCYmZTbGNHaDJOMkJMcGdBeEZlR3dxUStzcVRCbkJleUxtcU9ySGM3d21F?=
 =?utf-8?B?dGkzY3F2RitmVjNYM2lyNm4xbGMzSDdNZG5NVHAyRmxPREllR0oxUGJKSU85?=
 =?utf-8?B?dWk5OW1qT3diR25HeDdrWW5MUXA3NkZxT1NNcW16bE1zRXl4NnNDRVlRRTN3?=
 =?utf-8?B?Q3drMDh4bGlUbVpKSmdjRDN1VFNpMmtBYVZMU05COWo2Q0Z5djQzZVd2a29M?=
 =?utf-8?B?UTdzS050U3dNVCtRQllveUZLY3ZrNEI5bWxZSjBQMFFwUEdZYTF1bVVsNkM2?=
 =?utf-8?B?cThHZmU5RmFEYzIzc3lDOU1uSnRNdDAxeVY0TnQwZmdQMm01dVFDellZRlh0?=
 =?utf-8?B?VFJreGppRmVyY0d0RVpEM0JibXVoeVpqU1lnRXJKSXFZTE5kY0Q4NnNyTVN4?=
 =?utf-8?B?WHFIQlJLdGFSeDhSWkMxeE5LLzgrcGVPVUJNZW0wdEFwSWgrVkc1bXdONEZr?=
 =?utf-8?B?S2hNMjhCdUQwa2xySXJuSEtPNTdzZmJkL3hTeThWK3dqTGs4S1gxdHVwT21V?=
 =?utf-8?B?ZFVEb3VSSWJndE5tNkk2VUdPZEVva2JWK1VGaHd3OTlTdHptYUlzd2U5TDda?=
 =?utf-8?B?NnVuay9IL1dSWUl0d1pic2FlWGd0NTdpbTB6TThIZWhrdVZTejA4ZVBQVjFr?=
 =?utf-8?B?Z3B0RXZWc0hmUzNxS29wSjRYVXVTU29vUEN3Mm02K3NTSFhUakZBQ2gxdlRC?=
 =?utf-8?B?Q25VQlJ1dmlkMTlXbFI5UDFJOVp4K2JScU5oMnBMc2VhUTF4bkJFeWp6blQ1?=
 =?utf-8?B?RWp5eFJUNHpxMnNaSUIxcUFCTVB0Q1R3REptamRTVTlIaS9JbThFNnVxTlFU?=
 =?utf-8?B?S25HNXAzdGh5RVp6YUV0MUdJd1l3SXZvamZEQW5WQzkzWTVPR2J5clJnVXpn?=
 =?utf-8?B?VzhuOXdIT3lMSCtRdjFjRVcrQTFwS0xLTEdNMC8wT2NMNmYreUtuTXhXZ0g2?=
 =?utf-8?B?aFpxSDg5ejN6eTFXOUQwcy8vTW9CNTVaelpUaURER1ZONTBpYndBN0plbEIv?=
 =?utf-8?B?bTRtNkdYdmRVK1RBbGk5OE9PdE9VT0RRM1g1aSs1bUJ4Y3NJa0RqTXRMbDJa?=
 =?utf-8?B?UTRHbzV6c3VZaERpY0tDY1Vobm9ZOVlJdnQveHNhR3QxRTJRRDdjdnh2NHZG?=
 =?utf-8?B?RlFuNkZTRjJKZmZuQnl6VzJTZUk5NlBGVlovZ2hpemp6NlBRL3FxTDZWNENT?=
 =?utf-8?B?KytLSmEwR3U4YnRFUHBXdXNKU0E5K2xIUjA2blZHSDBLcjRMMytOYVBtWis1?=
 =?utf-8?B?V21xek5zZVJwbDVqclJhUlFKY3dpT3RiREdWZ1BFZlIxTjNrTE1KRnZCVzJS?=
 =?utf-8?B?WHo0YWh2SHlua1Z2NGU4dzR5SUVNNzBFRDJzSFplWmRzeG9IdmQrZUJneTE0?=
 =?utf-8?B?cUp5dTY5NWU4cmVVaHFlTjhrWDVNcy93RjBNYWdCU1BxMWovSHJUeExjcUJi?=
 =?utf-8?B?VVdteENXVHV1RkJGaXl6c3ppb25sUmdCRXZiK3ZwYWdEekErbWFyU1Q5OXlq?=
 =?utf-8?B?ZElIVk05bGRienA3WUEyb0ozVzhSK2huazRPVWxycHh4N0ZUZWxXcEk4NkFT?=
 =?utf-8?B?Q3ZmRHVDUnllUHFrTGdpMitkZUJQa09pUjJRQXlPVkxrYXk3T2pqdStUSXZm?=
 =?utf-8?B?Y2xtd3NYU2NwVS9YSUZURy9kdWFqWjdRdVFBdzFTeWdZNmhqNjdVK1NHNFVF?=
 =?utf-8?B?WGo4SjRMa2tMVThBTGo0SDJKbFRwaEJFcm1UMmlOSHkrVk1YckpEYU81QVlJ?=
 =?utf-8?B?VUN1MUVLQWs2U29QSjNiekJJaHNuS1N3YUpHYkFsc2pQQ1dYbFNhZmdhWlJC?=
 =?utf-8?B?dFcwUWJ6cThPbTVEWTI4UWxPMFgxZGdNaGFCRzEzRFo0OXFSaVBidkJUMUp4?=
 =?utf-8?B?aURlZWJHOEwydFI5VkhHSTdPY0pPMy9JR2tYYVdOQ3ZwWU9XMVA1b0ZaSEI4?=
 =?utf-8?B?QzJOdHNVWHRUSlVhbFZFUCtEMXFtU285ZDJtQmQvZU5NdlMvWjRqOElwVlJ4?=
 =?utf-8?Q?GFNeXMMb8l8k3wG87jqUAgzVe?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f454d98c-6d97-4b22-d505-08db522cdf06
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 14:34:47.7779
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Fnwyy5RIPi9NRhny8qQW3QUuwphZJI5e2qbvFs1dyBlCDJHYQVRm7ox9xoQoRGCojaEDMYzniOJ4BTam4KEWuw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7769

While normal booting is properly supported on both x86 and Arm64, secure
boot reportedly requires quite a bit more work to be actually usable
(and providing the intended guarantees). The mere use of the shim
protocol for verifying the Dom0 kernel image isn't enough.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -63,6 +63,16 @@ For the Cortex A57 r0p0 - r1p1, see Erra
     Status, x86 PV: Supported
     Status, ARM: Experimental
 
+### Host EFI Boot
+
+    Status, x86: Supported
+    Status, Arm64: Supported
+
+### Host EFI Secure Boot
+
+    Status, x86: Experimental
+    Status, Arm64: Experimental
+
 ### x86/Intel Platform QoS Technologies
 
     Status: Tech Preview


From xen-devel-bounces@lists.xenproject.org Thu May 11 14:36:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 14:36:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533426.830081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7PH-00032w-Sr; Thu, 11 May 2023 14:36:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533426.830081; Thu, 11 May 2023 14:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7PH-00032p-Po; Thu, 11 May 2023 14:36:35 +0000
Received: by outflank-mailman (input) for mailman id 533426;
 Thu, 11 May 2023 14:36:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z+vG=BA=citrix.com=prvs=48888ca5b=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1px7PG-00032j-LL
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 14:36:34 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38840875-f009-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 16:36:33 +0200 (CEST)
Received: from mail-bn8nam04lp2047.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 May 2023 10:36:22 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6987.namprd03.prod.outlook.com (2603:10b6:a03:43b::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Thu, 11 May
 2023 14:36:19 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Thu, 11 May 2023
 14:36:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38840875-f009-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683815792;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=QynTMtKm9yTbukJYheanqMIgEUBj21sLJU4rzZoXO/Q=;
  b=gcjInLRjW5y1Yf65u1Y5b/QSlFW0ODK/wGBw3y1t8/Xkdp2M03G9lSER
   adYo6LJ1sEWd4R9+WZ+XjM01bxaRrCY3IGGUcN82hHrcCgeXnSTar4eJc
   c36KcH5mt/HlwPkybQNq/ip/L1ff3fnR4B1BZISVYCkTd5wDlyalA2rvw
   k=;
X-IronPort-RemoteIP: 104.47.74.47
X-IronPort-MID: 107438631
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: =?us-ascii?q?9a23=3A8SiGamr8Ihjv+QWr+RL599PmUf4nfj7d9GmLH06?=
 =?us-ascii?q?pOVpwVYCJCkLP44oxxg=3D=3D?=
X-Talos-MUID: 9a23:b7w2XQRij+K/Xkv3RXTVr2s6E+5y3Z+vKwMCkpsb69C7PjN/bmI=
X-IronPort-AV: E=Sophos;i="5.99,266,1677560400"; 
   d="scan'208";a="107438631"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CLLCMzxg6kkd4RPs8Lm9KwTuIU7xQOTZbNK6hANLQuTRrRrrYQMmyUFfoyY7x0srSGlk7QEyselGKZnEEZb8i+bkJIMXHMU9XNkePrdgGNdw+6AAapqiv7Cz9r0OAOcpuGRDqXHd+Yz4yBCuvScQA5dB0UVutI2FucwD+Ui9hANLD8IYajDdx9AwAuok1+oybDu+pe+MELchqXQUjQqvPcopQDImoe1JVotu09J4pGzWxlDGWsxz2OPNDQbJiCVu3Xoab+AhamXX5Qu7hUT2qJaUiD4GZaeH7JSd7TMFNtHxR+GvpAD4bCZKLkLGcGggXOdJi2Y27YwlLEbehX/g+Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QynTMtKm9yTbukJYheanqMIgEUBj21sLJU4rzZoXO/Q=;
 b=V9Zm3MUV5Y/3t8h+vmn/bEWfzxDwRjej2l62b12gc2cZvoiC7wB3LMi8WIF0xrqmfAaopbJwaOK0EcZoxGQHqA586erB83pHsgWEvRvpsrSPgES/ZRMEUyicK7Rpcr2nX4zchYCzsB9kyf5uK++MBXY8H10MyzkRvfQV9+WmHFqI8c+1Dek4NZ6EEu79Ghe/rBpKLGrPnH82F8Nj8+wGRKchbqF8t/BZMSYFQWDIQXjHGO10F+sPVpiAVfrrYoC4XZ3r7vpgaWHA5mKDuobGBJpPNbsKHE1ov0CMhmFUQMewFzyme2eG1j5oCMkdk+SVU89jZ1OfyT/iiuGJ7HT2+g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QynTMtKm9yTbukJYheanqMIgEUBj21sLJU4rzZoXO/Q=;
 b=awLT6aBX4ODvJz9N3kExb8VrZVvT7LiHJPdUZEtUtizD33kjag/0iAtRyozP5Wtc99dMkADJwgQJ+AZqM62kvF51VcndA8+xKgI+IYJx19ddH9yE+xLVdCBZfb1rDh0G1OF0461dqhWCGQWPr5O8aMNe8x39MMCYfZd4lWr6778=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <67778be8-ebed-e160-559a-9cbbab7c06da@citrix.com>
Date: Thu, 11 May 2023 15:36:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] SUPPORT.md: explicitly mention EFI (secure) boot status
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <82c8ce93-a9a7-9309-2b04-8092ca84e7d6@suse.com>
In-Reply-To: <82c8ce93-a9a7-9309-2b04-8092ca84e7d6@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P265CA0169.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:312::6) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6987:EE_
X-MS-Office365-Filtering-Correlation-Id: 65006e44-c7b9-49c8-bf0b-08db522d157b
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 65006e44-c7b9-49c8-bf0b-08db522d157b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 14:36:19.4032
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IBBi+2OZDDTJT3C/HewaxA8rRS2oLDmHlzAVHz/j4HWeRAIlYyc9yh3qjSIrbs2wgopQfYYO91FphFTcKS9fxeOparqCME2XhBYmbNtCE8s=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6987

On 11/05/2023 3:34 pm, Jan Beulich wrote:
> While normal booting is properly supported on both x86 and Arm64, secure
> boot reportedly requires quite a bit more work to be actually usable
> (and providing the intended guarantees). The mere use of the shim
> protocol for verifying the Dom0 kernel image isn't enough.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 11 14:37:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 14:37:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533431.830091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7Q6-0003ct-9B; Thu, 11 May 2023 14:37:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533431.830091; Thu, 11 May 2023 14:37:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7Q6-0003cm-6C; Thu, 11 May 2023 14:37:26 +0000
Received: by outflank-mailman (input) for mailman id 533431;
 Thu, 11 May 2023 14:37:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9C6y=BA=crudebyte.com=qemu_oss@srs-se1.protection.inumbo.net>)
 id 1px7Q4-0003Xv-JM
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 14:37:24 +0000
Received: from kylie.crudebyte.com (kylie.crudebyte.com [5.189.157.229])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56457d17-f009-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 16:37:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56457d17-f009-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=crudebyte.com; s=kylie; h=Content-Type:Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:
	Content-ID:Content-Description;
	bh=ZvFzNgT8EWhTMGeJIh+BNbNw30pANCcOXAnUd2eArkQ=; b=WYpY4PnnYEJXSWyMQhw2u2WvHa
	pPT/dMcCgEMXDKZLvpPZSSTwSlaMbzaTHVIpG3GjoKYVMS26qIivqMsg7W0qaySl7DwzQO20alYpv
	3JXaZV1W11VeMxEApEhg64edDAMkFIqJPGSkXEe35GX88YfQDsBMAxlBN0Yh6MSEl+BCcz6k7ii21
	3zwut/j/xJgi0nQR3Q2VgWUlgIcrTvym8cvvJPC5GQgLGj03vYltU+mkegabM+zeS7vimHx5ZTtst
	f1J+Q2LiXpUgFUDkS3fw6gJxjp1uqCaGndWLdeobxGzw0SCbLVAw8iRWYd4rfPF1SgGEHQV/mPHHM
	rnF5rIWv8JzA3FEFtyGXqkLP7sXpVtbnmBcinrr+YAgly/WeHiQ7uqpoR0bgDRBH6w0AIHsvKyBkQ
	IDsb+ksmeHH2ApBF+i6zZEbC2G3LTRqZ3dN3ELT86RYJDBy52zSKAWH6FJVdFEQWRDozxvax7Ogyr
	tWcCNWVjIFjLOMoFdK+mu42R4MKaBSHWiOBxLoaSa2xETEE6NEuzqJGJCjLHrjv1OJ8GS/Aml20mN
	JZlDzIM+XZ+GBbYCEENQcN3YjtV2QnF0rCFKKit067BsI5tTPB1pOv+6ZhDMLkHsl5bdhNNjdvAsZ
	P+v4epzQqVo1mFiuXDdUDabPn4j2jQ/GXO2l0LlwY=;
From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: qemu-devel@nongnu.org
Cc: Jason Andryuk <jandryuk@gmail.com>, Greg Kurz <groug@kaod.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] 9pfs/xen: Fix segfault on shutdown
Date: Thu, 11 May 2023 16:37:01 +0200
Message-ID: <2110128.8IRXTbt6Kt@silver>
In-Reply-To: <20230502143722.15613-1-jandryuk@gmail.com>
References: <20230502143722.15613-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"

On Tuesday, May 2, 2023 4:37:22 PM CEST Jason Andryuk wrote:
> xen_9pfs_free can't use gnttabdev since it is already closed and NULL-ed
> out when free is called.  Do the teardown in _disconnect().  This
> matches the setup done in _connect().
> 
> trace-events are also added for the XenDevOps functions.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  hw/9pfs/trace-events     |  5 +++++
>  hw/9pfs/xen-9p-backend.c | 36 +++++++++++++++++++++++-------------
>  2 files changed, 28 insertions(+), 13 deletions(-)

With the aforementioned 2 minor changes, queued on 9p.next:
https://github.com/cschoenebeck/qemu/commits/9p.next

Thanks!

Best regards,
Christian Schoenebeck




From xen-devel-bounces@lists.xenproject.org Thu May 11 14:37:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 14:37:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533432.830100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7Q8-0003sx-F0; Thu, 11 May 2023 14:37:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533432.830100; Thu, 11 May 2023 14:37:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7Q8-0003sk-CJ; Thu, 11 May 2023 14:37:28 +0000
Received: by outflank-mailman (input) for mailman id 533432;
 Thu, 11 May 2023 14:37:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1px7Q7-0003qb-5q
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 14:37:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px7Q6-0005dz-47; Thu, 11 May 2023 14:37:26 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.27.46]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1px7Q5-0000Gg-Ss; Thu, 11 May 2023 14:37:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ZYhj0Eshn4W2mHjc4M/pi9DEN1/5+B9kJarqvHqicHY=; b=YwuYaXhk+5Mwwu6s/r1P9lTWDX
	AlDO+xkOuePh/AhYLQ719tKz6MBjGLuy3e6UlMeyHDk5RddPOGzEwyKOAl7cNxl7BuOderEnV7GD0
	g6y8ILJfcPhAoNwMTORxi1DuwBm6XnW9JLEukdT/F9izY2d86jmn4KPXe448wgYCuU0k=;
Message-ID: <a15dee7c-c460-a581-3ae4-c7066223c403@xen.org>
Date: Thu, 11 May 2023 15:37:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH] SUPPORT.md: explicitly mention EFI (secure) boot status
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <82c8ce93-a9a7-9309-2b04-8092ca84e7d6@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <82c8ce93-a9a7-9309-2b04-8092ca84e7d6@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jan,

On 11/05/2023 15:34, Jan Beulich wrote:
> While normal booting is properly supported on both x86 and Arm64, secure
> boot reportedly requires quite a bit more work to be actually usable
> (and providing the intended guarantees). The mere use of the shim
> protocol for verifying the Dom0 kernel image isn't enough.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> 
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -63,6 +63,16 @@ For the Cortex A57 r0p0 - r1p1, see Erra
>       Status, x86 PV: Supported
>       Status, ARM: Experimental
>   
> +### Host EFI Boot
> +
> +    Status, x86: Supported
> +    Status, Arm64: Supported
> +
> +### Host EFI Secure Boot
> +
> +    Status, x86: Experimental
> +    Status, Arm64: Experimental
> +
>   ### x86/Intel Platform QoS Technologies
>   
>       Status: Tech Preview

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 11 14:52:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 14:52:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533439.830111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7eK-0006oS-ML; Thu, 11 May 2023 14:52:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533439.830111; Thu, 11 May 2023 14:52:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px7eK-0006oL-Ja; Thu, 11 May 2023 14:52:08 +0000
Received: by outflank-mailman (input) for mailman id 533439;
 Thu, 11 May 2023 14:52:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z6rJ=BA=citrix.com=prvs=488b3df2b=roger.pau@srs-se1.protection.inumbo.net>)
 id 1px7eJ-0006oF-DI
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 14:52:07 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 63e7dff3-f00b-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 16:52:04 +0200 (CEST)
Received: from mail-bn8nam12lp2171.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 11 May 2023 10:52:01 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SN7PR03MB7232.namprd03.prod.outlook.com (2603:10b6:806:2eb::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.32; Thu, 11 May
 2023 14:51:59 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6387.022; Thu, 11 May 2023
 14:51:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63e7dff3-f00b-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683816724;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=mrI2rusD0JviNDhvgeY+tdSjHbjiscnh4jCl5whg3SQ=;
  b=F2YQ5wwSpumw21kNBqZ0CC0FTdCMIXaonGWLVezYSn5nmhPxiFrNXDp8
   iTFtqDuvxiExgIiPy5ptOJpgoWeUMgOPpvP84B5oBIdq9XukLJN3eM21C
   d6OOoHVXMkuk+hOgUy5cM6bGx/VXRaOW9LN+vd8GqF3MdYkbAnJsSpKLC
   g=;
X-IronPort-RemoteIP: 104.47.55.171
X-IronPort-MID: 108568732
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:eNGsumFDPyd35PHxqmJ31k4rA+A8TET2wWXuLGGdKF5lD5yaHAo=
X-Talos-MUID: =?us-ascii?q?9a23=3AJn0aFQ1Yx/gZWryR3F8WRd1UXDUj0pyjB2EUwM4?=
 =?us-ascii?q?84cSdajw3Pgyg1Si5e9py?=
X-IronPort-AV: E=Sophos;i="5.99,266,1677560400"; 
   d="scan'208";a="108568732"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P5ShBfjRckqM7hncQynaown3a/AGEa4SUaOQsc3cI+ROz1Bn04AR4/whp7WvlGFcfLuDJTBCnphcda2R4g1NicMrFE0KIYDjVSfbvMvR4jZV4xdWh9qrBxz7DPlWj8qPgz9ciKn5kTFojT96Gbg/LQhcIE8lRxUNcezMOB0hNDurCRs7un/9Z8L5M7i/rxK4VDG2iEXRYj5oLxE/NA8VbSYJ9XVQnd2Faenh1FxI2z8MnX4GjDGnSrFoEjSUxuvwkYCYMDVwysD91sM4LCqneyBeXhUoLuYbyOcxecJHCjs913s92JXvcDBflx6lfXTzjRFpcXv0MtXMqpkLU6Wd/g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=WRLwXEHiNJLX7HjBGmjUrTtwbY+2Ehv3zGsT/ef0u38=;
 b=JBOLkziKLkcHIL7+C+VR+UoYx154PTCBRzjDjVTFqpC2HPCcGgpWDKVLk8bV8V76kV/NJA7p3diAV36k53392Sb8IcqbfGQYT3sGAR5okRQd813z5G9cfdjP/+ieFkvnjl/UsurYDe422++Q8gpceEn20Il//cwevFZEZ079YYOyUbn9mdS3dGVke9XIcvZj2zJ1NCy9mq/ri7se6Zn3ZYyAt8QE1MGoPh75Hk0hqIo2gOOTqN0W3f15PKv2GwuXRIJa84oIbvLKmiJJnAPMHtnSHr9X0lqSCIr6uYOuYLG9NuU9HsAsiDY1w98bVdSJZ08APe9OLuo71w/rkw+YbA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WRLwXEHiNJLX7HjBGmjUrTtwbY+2Ehv3zGsT/ef0u38=;
 b=EcMN6eKrd3Eyf5NOpq5me8tbKOdoFrlB+Uh8D6f6lzMjvOdD1gF/3Sx9ACj8nMJoFO+w7A9a+Ad3TsdV5+BZPRoqWaR/E70/TDHKD86ELARv8g0uVo0n8LK3Td8ApslfsX3zFkPhj5o8tNAN1V+1hNWemiPAUY/wAa9uNfC5jEk=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH] iommu/amd-vi: fix assert comparing boolean to enum
Date: Thu, 11 May 2023 16:51:52 +0200
Message-Id: <20230511145152.98328-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0141.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::33) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SN7PR03MB7232:EE_
X-MS-Office365-Filtering-Correlation-Id: 8c116db7-4e0f-4ff5-69c6-08db522f45c4
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8c116db7-4e0f-4ff5-69c6-08db522f45c4
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 14:51:59.2497
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7oMr0YUHJfEFLZrddAadlmzRULfIbQiQvculm03hFHh0jCZdeGpCC/idREC0AnUP7SgVPpkW/Uj06rU7jEArwA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR03MB7232

Or else, when iommu_intremap is set to iommu_intremap_full, the assert
triggers.

Fixes: 1ba66a870eba ('AMD/IOMMU: without XT, x2APIC needs to be forced into physical mode')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 4ba8e764b22f..94e37755064b 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -240,7 +240,7 @@ static int __must_check amd_iommu_setup_domain_device(
          */
         if ( dte->it_root )
             ASSERT(dte->int_ctl == IOMMU_DEV_TABLE_INT_CONTROL_TRANSLATED);
-        ASSERT(dte->iv == iommu_intremap);
+        ASSERT(dte->iv == !!iommu_intremap);
         ASSERT(dte->ex == ivrs_dev->dte_allow_exclusion);
         ASSERT(dte->sys_mgt == MASK_EXTR(ivrs_dev->device_flags,
                                          ACPI_IVHD_SYSTEM_MGMT));
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Thu May 11 14:54:04 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <5aedc6b9-2ef6-2c85-1949-b6cc333a7267@suse.com>
Date: Thu, 11 May 2023 16:53:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] iommu/amd-vi: fix assert comparing boolean to enum
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20230511145152.98328-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230511145152.98328-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 11.05.2023 16:51, Roger Pau Monne wrote:
> Or else when iommu_intremap is set to iommu_intremap_full the assert
> triggers.
> 
> Fixes: 1ba66a870eba ('AMD/IOMMU: without XT, x2APIC needs to be forced into physical mode')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Thu May 11 14:55:38 2023
Message-ID: <83be515a-a327-c5f2-68d5-88d2e00ba806@citrix.com>
Date: Thu, 11 May 2023 15:55:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] iommu/amd-vi: fix assert comparing boolean to enum
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <20230511145152.98328-1-roger.pau@citrix.com>
 <5aedc6b9-2ef6-2c85-1949-b6cc333a7267@suse.com>
In-Reply-To: <5aedc6b9-2ef6-2c85-1949-b6cc333a7267@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 11/05/2023 3:53 pm, Jan Beulich wrote:
> On 11.05.2023 16:51, Roger Pau Monne wrote:
>> Or else when iommu_intremap is set to iommu_intremap_full the assert
>> triggers.
>>
>> Fixes: 1ba66a870eba ('AMD/IOMMU: without XT, x2APIC needs to be forced into physical mode')
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

It occurs to me that this might be related to the reports we're still
getting of some problems on these systems...

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 11 14:59:14 2023
MIME-Version: 1.0
References: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
In-Reply-To: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 11 May 2023 10:58:48 -0400
Message-ID: <CAKf6xpspPdt6mM4MuL2-vwXHu23ahm874e4kZqROqCwC4cd=fA@mail.gmail.com>
Subject: Re: [PATCH v2 0/2] Add API for making parts of a MMIO page R/O and
 use it in XHCI console
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 5, 2023 at 5:26 PM Marek Marczykowski-Górecki
<marmarek@invisiblethingslab.com> wrote:
>
> On older systems, XHCI xcap had a layout that no other (interesting) registers
> were placed on the same page as the debug capability, so Linux was fine with
> making the whole page R/O. But at least on Tiger Lake and Alder Lake, Linux
> needs to write to some other registers on the same page too.
>
> Add a generic API for making just parts of an MMIO page R/O and use it to fix
> USB3 console with share=yes or share=hwdom options. More details in commit
> messages.
>
> Marek Marczykowski-Górecki (2):
>   x86/mm: add API for marking only part of a MMIO page read only
>   drivers/char: Use sub-page ro API to make just xhci dbc cap RO

Series:
Tested-by: Jason Andryuk <jandryuk@gmail.com>

I had the issue with a 10th Gen, Comet Lake, laptop.  With an HVM
usbvm with dbgp=xhci,share=1, Xen crashed the domain because of:
(XEN) d1v0 EPT violation 0xdaa (-w-/r-x) gpa 0x000000f1008470 mfn 0xcc328 type 5

The BAR is mfn 0xcc320-0xcc32f

Booting PV, it faulted at drivers/usb/host/pci-quirks.c:1170 which looks to be:
```
/* Disable any BIOS SMIs and clear all SMI events*/
writel(val, base + ext_cap_offset + XHCI_LEGACY_CONTROL_OFFSET);
```

Thanks for integrating XUE, Marek!

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 11 15:08:10 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180619-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180619: regressions - trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-build-prep:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bdb1184d4f6bf4e0121fda34a6c1cb51fe270e7d
X-Osstest-Versions-That:
    xen=31c65549746179e16cf3f82b694b4b1e0b7545ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 15:07:56 +0000

flight 180619 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180619/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   5 host-build-prep          fail REGR. vs. 180607

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bdb1184d4f6bf4e0121fda34a6c1cb51fe270e7d
baseline version:
 xen                  31c65549746179e16cf3f82b694b4b1e0b7545ca

Last test of basis   180607  2023-05-10 19:03:28 Z    0 days
Testing same since   180619  2023-05-11 12:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit bdb1184d4f6bf4e0121fda34a6c1cb51fe270e7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu May 11 13:13:55 2023 +0200

    domctl: bump interface version
    
    The change to XEN_DOMCTL_getdomaininfo was a binary incompatible one,
    and the interface version wasn't bumped yet during the 4.18 release
    cycle.
    
    Fixes: 31c655497461 ("domctl: Modify XEN_DOMCTL_getdomaininfo to fail if domid is not found")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 5b49f5e09df905a3688107a06bc022a590606555
Author: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Date:   Thu May 11 13:12:46 2023 +0200

    x86: Add AMD's CpuidUserDis bit definitions
    
    AMD reports support for CpuidUserDis in CPUID and provides the toggle in HWCR.
    This patch adds the positions of both of those bits to both xen and tools.
    
    No functional change.
    
    Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu May 11 15:21:02 2023
Message-ID: <773f3bb2-249d-c6e9-13b4-e7637e1035b3@amd.com>
Date: Thu, 11 May 2023 16:20:36 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
Subject: Re: [PATCH v2 1/2] xen/arm: domain_build: Propagate return code of
 map_irq_to_domain()
To: xen-devel@lists.xenproject.org, Michal Orzel <michal.orzel@amd.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>
 <20230511130218.22606-2-michal.orzel@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230511130218.22606-2-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0


On 11/05/2023 14:02, Michal Orzel wrote:
> In map_dt_irq_to_domain() we are assigning the return code of
> map_irq_to_domain() to a variable without checking it for an error.
> Fix it by propagating the return code directly, since this is the last
> call.
>
> Fixes: 467e5cbb2ffc ("xen: arm: consolidate mmio and irq mapping to dom0")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes in v2:
>   - split the patch so that a fix alone can be backported
> ---
>   xen/arch/arm/domain_build.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f80fdd1af206..9dee1bb8f21c 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2320,7 +2320,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>   
>       res = map_irq_to_domain(d, irq, !mr_data->skip_mapping, dt_node_name(dev));
>   
> -    return 0;
> +    return res;
>   }
>   
>   int __init map_range_to_domain(const struct dt_device_node *dev,


From xen-devel-bounces@lists.xenproject.org Thu May 11 15:23:20 2023
Date: Thu, 11 May 2023 17:22:48 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 0/2] Add API for making parts of a MMIO page R/O and
 use it in XHCI console
Message-ID: <ZF0ISD/uMns0aLtd@mail-itl>
References: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
 <CAKf6xpspPdt6mM4MuL2-vwXHu23ahm874e4kZqROqCwC4cd=fA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="7EIq/qd3Zxw5tvMI"
Content-Disposition: inline
In-Reply-To: <CAKf6xpspPdt6mM4MuL2-vwXHu23ahm874e4kZqROqCwC4cd=fA@mail.gmail.com>



On Thu, May 11, 2023 at 10:58:48AM -0400, Jason Andryuk wrote:
> On Fri, May 5, 2023 at 5:26 PM Marek Marczykowski-Górecki
> <marmarek@invisiblethingslab.com> wrote:
> >
> > On older systems, XHCI xcap had a layout such that no other (interesting)
> > registers were placed on the same page as the debug capability, so Linux
> > was fine with making the whole page R/O. But at least on Tiger Lake and
> > Alder Lake, Linux needs to write to some other registers on the same
> > page too.
> >
> > Add a generic API for making just parts of an MMIO page R/O and use it
> > to fix the USB3 console with the share=yes or share=hwdom options. More
> > details in the commit messages.
> >
> > Marek Marczykowski-Górecki (2):
> >   x86/mm: add API for marking only part of a MMIO page read only
> >   drivers/char: Use sub-page ro API to make just xhci dbc cap RO
>
> Series:
> Tested-by: Jason Andryuk <jandryuk@gmail.com>
>
> I had the issue with a 10th Gen, Comet Lake, laptop.  With an HVM
> usbvm with dbgp=xhci,share=1, Xen crashed the domain because of:
> (XEN) d1v0 EPT violation 0xdaa (-w-/r-x) gpa 0x000000f1008470 mfn 0xcc328 type 5

Hmm, this series is supposed to avoid exactly this issue. I tested it on
a 12th Gen machine, so maybe the 10th Gen has a slightly different layout.
Can you add a debug print before the subpage_mmio_ro_add() call in
xhci-dbc.c and see what area is getting protected?

> The BAR is mfn 0xcc320-0xcc32f
>
> Booting PV, it faulted at drivers/usb/host/pci-quirks.c:1170, which looks to be:
> ```
> /* Disable any BIOS SMIs and clear all SMI events*/
> writel(val, base + ext_cap_offset + XHCI_LEGACY_CONTROL_OFFSET);
> ```
>
> Thanks for integrating XUE, Marek!

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Thu May 11 15:23:21 2023
Message-ID: <c6e5fde1-6206-adce-65e8-fc651e954d1e@amd.com>
Date: Thu, 11 May 2023 16:23:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
Subject: Re: [PATCH v2 2/2] xen/arm: domain_build: Fix format specifiers in
 map_{dt_}irq_to_domain()
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>
 <20230511130218.22606-3-michal.orzel@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230511130218.22606-3-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0480.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::36) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|IA1PR12MB7565:EE_
X-MS-Office365-Filtering-Correlation-Id: c281c288-6525-4ceb-1ef8-08db52339f7a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c281c288-6525-4ceb-1ef8-08db52339f7a
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 15:23:07.8023
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Dq3QyEqeZm+f6vFvDRFQmQz48ZapuO8+1sT9PNsccL9koK0myaG7e5IGrLj2Gyore9NCMwpG8hTqqPDeofLJRQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB7565


On 11/05/2023 14:02, Michal Orzel wrote:
> IRQ is of unsigned type so %u should be used. When printing domain id,
> %pd should be the correct format to maintain the consistency.
>
> Also, wherever possible, reduce the number of split lines for printk().
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes in v2:
>   - split the v1 patch so that the format specifiers are handled separately
>   - also fix map_irq_to_domain()
> ---
>   xen/arch/arm/domain_build.c | 14 +++++---------
>   1 file changed, 5 insertions(+), 9 deletions(-)
>
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9dee1bb8f21c..71f307a572e9 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2265,8 +2265,7 @@ int __init map_irq_to_domain(struct domain *d, unsigned int irq,
>       res = irq_permit_access(d, irq);
>       if ( res )
>       {
> -        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
> -               d->domain_id, irq);
> +        printk(XENLOG_ERR "Unable to permit to %pd access to IRQ %u\n", d, irq);
>           return res;
>       }
>   
> @@ -2282,8 +2281,7 @@ int __init map_irq_to_domain(struct domain *d, unsigned int irq,
>           res = route_irq_to_guest(d, irq, irq, devname);
>           if ( res < 0 )
>           {
> -            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
> -                   irq, d->domain_id);
> +            printk(XENLOG_ERR "Unable to map IRQ%u to %pd\n", irq, d);
>               return res;
>           }
>       }
> @@ -2303,8 +2301,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>   
>       if ( irq < NR_LOCAL_IRQS )
>       {
> -        printk(XENLOG_ERR "%s: IRQ%"PRId32" is not a SPI\n",
> -               dt_node_name(dev), irq);
> +        printk(XENLOG_ERR "%s: IRQ%u is not a SPI\n", dt_node_name(dev), irq);
>           return -EINVAL;
>       }
>   
> @@ -2312,9 +2309,8 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>       res = irq_set_spi_type(irq, dt_irq->type);
>       if ( res )
>       {
> -        printk(XENLOG_ERR
> -               "%s: Unable to setup IRQ%"PRId32" to dom%d\n",
> -               dt_node_name(dev), irq, d->domain_id);
> +        printk(XENLOG_ERR "%s: Unable to setup IRQ%u to %pd\n",
> +               dt_node_name(dev), irq, d);
>           return res;
>       }
>   


From xen-devel-bounces@lists.xenproject.org Thu May 11 15:27:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 15:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533477.830191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px8CM-0006DS-66; Thu, 11 May 2023 15:27:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533477.830191; Thu, 11 May 2023 15:27:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px8CM-0006DL-3H; Thu, 11 May 2023 15:27:18 +0000
Received: by outflank-mailman (input) for mailman id 533477;
 Thu, 11 May 2023 15:27:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1K6=BA=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1px8CK-0006DF-Lr
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 15:27:16 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4f30a45c-f010-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 17:27:15 +0200 (CEST)
Received: by mail-ed1-x533.google.com with SMTP id
 4fb4d7f45d1cf-50bc2feb320so13637713a12.3
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 08:27:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f30a45c-f010-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683818835; x=1686410835;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Vtk5MqV3zMZ+NI6AhiF0E2y7HdDxZvVDkt8LnHerTJM=;
        b=OjrI5O9/UEBsNOcBAkwfOwK8qxL7NQVH1xrDFp9QO4BvOgE2hDalCPdorxGEN5+Prd
         mYt8alt6yS/keIFNRQmV5DKwjqxtHFfJVE8jkA20Pxmi9wB9WG2MFiG+6PHmhMFLNRkt
         HyghOw8mT9Pj9Uk5NRXCWzPrtekD0ulRRsSAxDPw9rrQMUei2N3xylXdScdqJ3YGrBEd
         +kOoYPI0hBDkT2CTUfIuf5MznPQCq2i6+tp6BVwpKBC7SpMbZvoyjz09fC44c24vQWM8
         G21wzeAh2IzTG7d8N+zK0l18gcswS1Tf+aA5KOGhqP1jFz1HJEWW4dwDqyls7zBTMeSO
         oLcw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683818835; x=1686410835;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Vtk5MqV3zMZ+NI6AhiF0E2y7HdDxZvVDkt8LnHerTJM=;
        b=Q5wbQ2gIvJyNka52/VK78iFrYhJpD5cDn+d56/Pnp6uKRIXMqdUF1nnBDcf0tp6UtM
         vuCN+5PM6KSdwhg7eQmCgcUJlHYhZyLWPHQ42HpPv0dxq7L5gx6DATMcsQEt2sNlYahi
         7+d3O/NEUXMAZe8jwoY8fHTatxYijvL0EFZ83uhevatG2IOJXQbnN0zqWZX5gMx67Z4q
         ifhIDxlOyi0Hgc78oP2qaFyH2r9LDpNGSs2evF5WjtdLbBlOGdikJttjsGVS3y8R1raY
         D9M6efqamqJwD9CRV+xwsPqmIg0WF2iRLLa9ph+hWOaFEE3xOzm2RHukKProO52fy+U6
         Lljg==
X-Gm-Message-State: AC+VfDz0Ypqg1Gq1ytGaATxifOC4hgIwTOYPi27nUDeYRboXuIfGZ0dg
	BKFrefB3gPRjVjMSW4FLmEU56XwSuhiaf7dtm91VD6Si
X-Google-Smtp-Source: ACHHUZ4eYXLD4gD3uhKBaMEEwvFvhn0Blmh9ChDyNU3HWbhw4o6yL02xnb3+mVteuMHbgNGoKrIUG2xVH/jbQQbHfNA=
X-Received: by 2002:a17:907:8a14:b0:94f:6218:191f with SMTP id
 sc20-20020a1709078a1400b0094f6218191fmr21655990ejc.52.1683818835233; Thu, 11
 May 2023 08:27:15 -0700 (PDT)
MIME-Version: 1.0
References: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
 <CAKf6xpspPdt6mM4MuL2-vwXHu23ahm874e4kZqROqCwC4cd=fA@mail.gmail.com> <ZF0ISD/uMns0aLtd@mail-itl>
In-Reply-To: <ZF0ISD/uMns0aLtd@mail-itl>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 11 May 2023 11:27:02 -0400
Message-ID: <CAKf6xpsLg-Dve9-uLysGv-hnkVuk0vPsxf=a6fc3ebduzryqbQ@mail.gmail.com>
Subject: Re: [PATCH v2 0/2] Add API for making parts of a MMIO page R/O and
 use it in XHCI console
To: =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 11, 2023 at 11:22 AM Marek Marczykowski-Górecki
<marmarek@invisiblethingslab.com> wrote:
>
> On Thu, May 11, 2023 at 10:58:48AM -0400, Jason Andryuk wrote:
> > On Fri, May 5, 2023 at 5:26 PM Marek Marczykowski-Górecki
> > <marmarek@invisiblethingslab.com> wrote:
> > >
> > > On older systems, XHCI xcap had a layout that no other (interesting) registers
> > > were placed on the same page as the debug capability, so Linux was fine with
> > > making the whole page R/O. But at least on Tiger Lake and Alder Lake, Linux
> > > needs to write to some other registers on the same page too.
> > >
> > > Add a generic API for making just parts of an MMIO page R/O and use it to fix
> > > USB3 console with share=yes or share=hwdom options. More details in commit
> > > messages.
> > >
> > > Marek Marczykowski-Górecki (2):
> > >   x86/mm: add API for marking only part of a MMIO page read only
> > >   drivers/char: Use sub-page ro API to make just xhci dbc cap RO
> >
> > Series:
> > Tested-by: Jason Andryuk <jandryuk@gmail.com>
> >
> > I had the issue with a 10th Gen, Comet Lake, laptop.  With an HVM
> > usbvm with dbgp=xhci,share=1, Xen crashed the domain because of:
> > (XEN) d1v0 EPT violation 0xdaa (-w-/r-x) gpa 0x000000f1008470 mfn 0xcc328 type 5
>
> Hmm, this series is supposed to avoid exactly this issue. I tested it on
> 12th Gen, so maybe 10th Gen has a bit different layout.
> Can you add a debug print before subpage_mmio_ro_add() call in
> xhci-dbc.c and see what area is getting protected?

Your series fixes the problem!

I just had the details from the original issue, so I included them.  I
was trying to highlight what this series fixed for me.  Sorry for the
confusion.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 11 15:28:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 15:28:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533480.830201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px8Dd-0006t5-Fv; Thu, 11 May 2023 15:28:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533480.830201; Thu, 11 May 2023 15:28:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px8Dd-0006sy-Cm; Thu, 11 May 2023 15:28:37 +0000
Received: by outflank-mailman (input) for mailman id 533480;
 Thu, 11 May 2023 15:28:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3dl8=BA=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1px8Db-0006ss-TF
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 15:28:35 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7c6ba083-f010-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 17:28:34 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 2F0245C0655;
 Thu, 11 May 2023 11:28:31 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Thu, 11 May 2023 11:28:31 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 11 May 2023 11:28:30 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c6ba083-f010-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1683818911; x=1683905311; bh=giWFEUQ7CavNhcZALMqM8wfoPHUPcBghLDI
	TMO2MFvk=; b=d6n9L6ceCtInBL0eUPhsbNGYEi6Bv3935fwMJDYpInSWb0ROfqQ
	YFZ8gaLe49td2kPoIfPY+DwO55x5UgeJfKngTUEIErXdUho4YF5m6zbP90nnEo5V
	OTNNc3SZZsXqlVqbBKrlQnboBNk87ioVq1w/4NNRkNX2mzVWO0hNQzpwEQTSySiG
	Mw06/oNygfRJYP0iv13HJaJExRrc9Muj03FbC3ulE5qsDS+kj0NWmeU/haScZxQv
	JM2IFBy8AH9CB5R5XE6yJjlfa3RSbtgwSUSrU7Q4KvCo43saY9QTjj1yYS4CJOU7
	Y382uSxMmDNxilkZCTH7q+7RRmvHqf7mfyw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1683818911; x=1683905311; bh=giWFEUQ7CavNh
	cZALMqM8wfoPHUPcBghLDITMO2MFvk=; b=gznDip2ddyPcfQWSjhklR7gcAqesU
	zh+034EKuajFTnaSkZIUcwndtQL3PUFcVbvZRUKjvPif3pRMD+j0IrxhVaiFcTP5
	F7ZYDOi2dtftRq+NSyjUJYRRNv8zSbim/t66+2mJ92aX1RRxYP/VoQO3r0ZumG1l
	2887gIEOB2za+7MKZX2XgtYpYmiWw69P5F5G1PkViPEbrmYNG/NJdiSfIfyJgT0C
	V3s8V7F5v5P6UBJSU89monZ/Yo1QyBhwb4YweRezELmP38x0Ss+iOBL9g86kbY02
	u/qGJlEEiumduYGssdNnKtsA1vMU/qpFmti4ztEFqh9jyV5S7McbBvjMg==
X-ME-Sender: <xms:nwldZGVNXD5o3g6jFyHkSo6Zg4Hkb4EV8RGHxxTpD7IHYUBiRwSfaA>
    <xme:nwldZCnO57EWzCe1MnH2f6gLLxdzpDQLZao4RakBqDFfaRj_PKSc5UN5xGUETsY7B
    mRY9ksvlvVfpg>
X-ME-Received: <xmr:nwldZKb1F5aiT2hkt7omPTBiZEn1oXB6S3DQwhVN9Ft_hGiNJQgFWn658PyWCCrDm0tou6GnJAHrWeZkAO8uKXmdA-YLCVFSYME>
X-ME-Proxy: <xmx:nwldZNUn0hzYLJ6if9AwRDSd286_eauLdoFg90NhkOwk4EzyqK2eXQ>
    <xmx:nwldZAkY5L9v6tv0WFLvpz8HB7wF95AAN-mMnZMNd641bQVKjSBOPg>
    <xmx:nwldZCfu6BjTqWF2Lt3QnNQgytGRFdqTufmubh1SlF9_7Kxfl_nPIA>
    <xmx:nwldZCQ9npqBXqP3VwAPoaE75uKjElT6ucLUEs7PqMr_5yQJ_upPBA>
Feedback-ID: i1568416f:Fastmail
Date: Thu, 11 May 2023 17:28:27 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 0/2] Add API for making parts of a MMIO page R/O and
 use it in XHCI console
Message-ID: <ZF0Jm5anuVGLHSL7@mail-itl>
References: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
 <CAKf6xpspPdt6mM4MuL2-vwXHu23ahm874e4kZqROqCwC4cd=fA@mail.gmail.com>
 <ZF0ISD/uMns0aLtd@mail-itl>
 <CAKf6xpsLg-Dve9-uLysGv-hnkVuk0vPsxf=a6fc3ebduzryqbQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="zdjTCr1l9KJcVKkv"
Content-Disposition: inline
In-Reply-To: <CAKf6xpsLg-Dve9-uLysGv-hnkVuk0vPsxf=a6fc3ebduzryqbQ@mail.gmail.com>


--zdjTCr1l9KJcVKkv
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Thu, 11 May 2023 17:28:27 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 0/2] Add API for making parts of a MMIO page R/O and
 use it in XHCI console

On Thu, May 11, 2023 at 11:27:02AM -0400, Jason Andryuk wrote:
> On Thu, May 11, 2023 at 11:22 AM Marek Marczykowski-Górecki
> <marmarek@invisiblethingslab.com> wrote:
> >
> > On Thu, May 11, 2023 at 10:58:48AM -0400, Jason Andryuk wrote:
> > > On Fri, May 5, 2023 at 5:26 PM Marek Marczykowski-Górecki
> > > <marmarek@invisiblethingslab.com> wrote:
> > > >
> > > > On older systems, XHCI xcap had a layout that no other (interesting) registers
> > > > were placed on the same page as the debug capability, so Linux was fine with
> > > > making the whole page R/O. But at least on Tiger Lake and Alder Lake, Linux
> > > > needs to write to some other registers on the same page too.
> > > >
> > > > Add a generic API for making just parts of an MMIO page R/O and use it to fix
> > > > USB3 console with share=yes or share=hwdom options. More details in commit
> > > > messages.
> > > >
> > > > Marek Marczykowski-Górecki (2):
> > > >   x86/mm: add API for marking only part of a MMIO page read only
> > > >   drivers/char: Use sub-page ro API to make just xhci dbc cap RO
> > >
> > > Series:
> > > Tested-by: Jason Andryuk <jandryuk@gmail.com>
> > >
> > > I had the issue with a 10th Gen, Comet Lake, laptop.  With an HVM
> > > usbvm with dbgp=xhci,share=1, Xen crashed the domain because of:
> > > (XEN) d1v0 EPT violation 0xdaa (-w-/r-x) gpa 0x000000f1008470 mfn 0xcc328 type 5
> >
> > Hmm, this series is supposed to avoid exactly this issue. I tested it on
> > 12th Gen, so maybe 10th Gen has a bit different layout.
> > Can you add a debug print before subpage_mmio_ro_add() call in
> > xhci-dbc.c and see what area is getting protected?
>
> Your series fixes the problem!
>
> I just had the details from the original issue, so I included them.  I
> was trying to highlight what this series fixed for me.  Sorry for the
> confusion.

Ah, then all is good, thanks for testing! :)

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--zdjTCr1l9KJcVKkv
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRdCZwACgkQ24/THMrX
1yzmzQgAgp8lD3YdqZnWNV9XnP4jWqNjenCU9UFyUDYps6sJmJW3J5VBttxs/o/o
0UTOMVY7H3KhOc4+noT7dZ2NzggwkwhLZoG0B+j40fN4OrDt1vpuI74xKqxz/LzM
Qaiy1ozLNCOX/yl2ar9+sDO7YSQQMKkk/JuPddqL9F8XwUn2Dilxs0XO54RczKxo
K4KWsnMHz4xS66dk/txpX5KlvZsjgi3iXAFMmbK9S7o11wPB81ehHZyXF19G5DYm
hEJG807TcYpmZXF/QK6H6p/Ln+UZVbYVdT53r44n6bbguQlCPnLlm7BhPkifD+2G
TCyGn9SqqZ2ex4+AC+TCRcldRDMzhQ==
=P8Pi
-----END PGP SIGNATURE-----

--zdjTCr1l9KJcVKkv--


From xen-devel-bounces@lists.xenproject.org Thu May 11 16:32:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 16:32:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533485.830211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9DD-0006mA-Vs; Thu, 11 May 2023 16:32:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533485.830211; Thu, 11 May 2023 16:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9DD-0006m3-Ss; Thu, 11 May 2023 16:32:15 +0000
Received: by outflank-mailman (input) for mailman id 533485;
 Thu, 11 May 2023 16:32:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fLOV=BA=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1px9DC-0006lx-Bh
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 16:32:14 +0000
Received: from mail.skyhub.de (mail.skyhub.de [5.9.137.197])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 61b83990-f019-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 18:32:12 +0200 (CEST)
Received: from zn.tnic (p5de8e8ea.dip0.t-ipconnect.de [93.232.232.234])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id D9DC31EC041F;
 Thu, 11 May 2023 18:32:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61b83990-f019-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1683822731;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=dygT3SQ1Awb918Yj04yvCpz0F75Gt7jJXYbHPMQO368=;
	b=R5T5ei9aRn2WmdaOiQI0b3lTEpke8mwH+YA13JUw69yB2L7ZadnZ0U2BTvfGyaOHt4JfRC
	3Aoy+LxV7mQkAih2yejm5Zzwfnof6sebecZ/7cyw7rLXxwO+GHLihccOrl0u8gG8MDSzAm
	XmteUQ7nficYxQuDo4lTodt4dy/cl80=
Date: Thu, 11 May 2023 18:32:08 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
	mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>

On Wed, May 10, 2023 at 05:53:15PM +0200, Juergen Gross wrote:
> Urgh, yes, there is something missing:
> 
> diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
> index 031f7ea8e72b..9544e7d13bb3 100644
> --- a/arch/x86/kernel/cpu/mtrr/generic.c
> +++ b/arch/x86/kernel/cpu/mtrr/generic.c
> @@ -521,8 +521,12 @@ u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform)
>         for (i = 0; i < cache_map_n && start < end; i++) {
>                 if (start >= cache_map[i].end)
>                         continue;

So the loop will go through the map until...

> -               if (start < cache_map[i].start)
> +               if (start < cache_map[i].start) {

... it reaches the first entry where that is true.

>                         type = type_merge(type, mtrr_state.def_type, uniform);

The @type argument is MTRR_TYPE_INVALID and def_type is WRBACK, so what
this'll do is simply get you the default WRBACK type:

type_merge:
        if (type == MTRR_TYPE_INVALID)
                return new_type;

> +                       start = cache_map[i].start;
> +                       if (end <= start)
> +                               break;

Now you break here because end <= start. Why?

You can just as well do:

	if (start < cache_map[i].start) {
		/* region non-overlapping with the region in the map */
		if (end <= cache_map[i].start)
			return type_merge(type, mtrr_state.def_type, uniform);

		... rest of the processing ...

In general, I get it that your code is slick but I want it to be
maintainable - not slick. I'd like for people looking at this to not
have to add a bunch of debugging output in order to swap the whole
thing back into their brains.

So mtrr_type_lookup() definitely needs comments explaining what goes
where.

You can send it as a diff ontop - I'll merge it.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Thu May 11 17:09:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 17:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533491.830231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9nR-0002at-TS; Thu, 11 May 2023 17:09:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533491.830231; Thu, 11 May 2023 17:09:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9nR-0002am-QX; Thu, 11 May 2023 17:09:41 +0000
Received: by outflank-mailman (input) for mailman id 533491;
 Thu, 11 May 2023 17:09:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bPy6=BA=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1px9nQ-0002Li-Ht
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 17:09:40 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9c21bbbd-f01e-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 19:09:37 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ac79d4858dso94947541fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 10:09:37 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 c9-20020a05651c014900b002aa3cff0529sm2443830ljd.74.2023.05.11.10.09.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 May 2023 10:09:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c21bbbd-f01e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683824977; x=1686416977;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=75YVbwcoEhV6c5MmFHSJFEPsCkZJGLQGgPUplClZnLQ=;
        b=OsC105rR0ZNVkg7vGNJd6ytk/frQyk6vIwX7D/9ZQZafRORAjrUbwmIpj/ScdOU3+X
         VuFncD3JgzSxXkLwAsljYYXk7JDwFoogdzcq08gpGiTmqxhcsPJFam8GP3/YATrcmQEZ
         00D6JZZRMmItChSCTdNAYQ0lBPdVTK4dhyLMmmdZU2TF0cZXNk6mNUfS1F6IS1r8Stas
         Z2k31nYL5ckEVnSZnks4jp0d/aJfqpp4rMpJ4pN0H7gaizLDh9cVA0e5qfM9pQuiX9HJ
         zvcqEvyR3SoBYR6JZhl90zsbAMCQsH+jpWSKgGQdRs5k+SJpB8ZB6mUsiiNgCNsbF0Ot
         +jzA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683824977; x=1686416977;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=75YVbwcoEhV6c5MmFHSJFEPsCkZJGLQGgPUplClZnLQ=;
        b=iX+3PJg6Rr5umPDURZ2YprQrAZL2PZEKWH4rd5Q1hJIgF7xD3jO70S8Db8G8U4gO7g
         hEtYvMbR1QwwGWWkULik3LRPoe4ytZ2Qw1XpS/Ds8PCK2TljvqZ1GAjjh6MCoWJyd+JH
         TYXBJRCh4U2skqo8Ep6mpx6wcyASNJyhtjh96Ovu12+VEeBRDoqZGbEK5C2NbpJcbJVY
         8Iv2BRdGGXT4byCbFbzq0UifueXFyr8rALoXRtBmoGU7ZD36iiYYOl+iKQor+zMGahdW
         iV2vSDbewUfoiYS57xOoeqzsxNmsrnrzCFyKyLRLg8zIhMBnuEL6uBZezizrg+yw/QWb
         Jrmw==
X-Gm-Message-State: AC+VfDwFxKQRsAwDEOwXb54zqYt2StOuE6Pd73owwNjd522QX/tcVohX
	5E4pWmm3IxUqn940PA6Hrj1THsZjS6w=
X-Google-Smtp-Source: ACHHUZ6ZSzuN6D0Y7dcswyHvx7MsYvOAOjDP/rRUDueTRYqjqRsId8YcoMtOapaQTFt/9UsQwU43iw==
X-Received: by 2002:a2e:9dca:0:b0:2ac:795a:5a90 with SMTP id x10-20020a2e9dca000000b002ac795a5a90mr3318934ljj.38.1683824976761;
        Thu, 11 May 2023 10:09:36 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v7 0/5] enable MMU for RISC-V
Date: Thu, 11 May 2023 20:09:28 +0300
Message-Id: <cover.1683824347.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following things:
1. Functionality to build the page tables for Xen that map the
   link-time address range to the load-time (physical) address range.
2. A check that Xen's size is less than the page size.
3. A check that the load address range doesn't overlap the linker
   address range.
4. Preparation for the proper switch to the virtual memory world.
5. Loading of the built page table into the SATP register.
6. Enabling of the MMU.

---
Changes in V7:
	- Fix the frametable range in the RV64 layout.
	- Add an SV39 ifdef to the RV64 layout comment to make it explicit
	  that the description is for SV39 mode.
	- Add a missed row to the RV64 layout table.
	- Define CONFIG_PAGING_LEVELS=2 for SATP_MODE_SV32.
	- Update the switch_stack_and_jump() macro: add constraint 'X' for fn,
	  a memory clobber, and wrap it into do {} while ( false ).
	- Add noreturn to the definition of enable_mmu().
	- Update pt_index() to "(pt_linear_offset(lvl, (va)) & VPN_MASK)".
	- Expand the pte_to_addr()/addr_to_pte() macros in the paddr_to_pte()
	  and pte_to_paddr() functions and drop them afterwards.
	- Remove the inclusion of <asm/config.h>.
	- Update the commit message around the definition of PGTBL_INITIAL_COUNT.
	- Remove PGTBL_ENTRY_AMOUNT and use PAGETABLE_ENTRIES instead.
	- Code style fixes.
	- Remove the permission argument of the setup_initial_mapping() function.
	- Remove calc_pgtbl_lvls_num() as it's no longer needed after the
	  definition of CONFIG_PAGING_LEVELS.
	- Introduce sfence_vma().
	- Remove the satp_mode argument from check_pgtbl_mode_support() and use
	  RV_STAGE1_MODE directly instead.
	- Change .align to .p2align.
	- Drop the inclusion of <asm/asm-offsets.h> from head.S; it isn't
	  necessary for the current patch series.
	- Create a separate patch for xen.lds.S.
---
Changes in V6:
	- Update the RV VM layout and the things related to it.
	- Move PAGE_SHIFT and PADDR_BITS to the top of page-bits.h.
	- Cast argument x of the pte_to_addr() macro to paddr_t to avoid the
	  risk of overflow for RV32.
	- Update the type of num_levels from 'unsigned long' to 'unsigned int'.
	- Define PGTBL_INITIAL_COUNT as ((CONFIG_PAGING_LEVELS - 1) + 1).
	- Update the type of the permission arguments from 'unsigned long' to
	  'unsigned int'.
	- Fix code style.
	- Switch the 'while' loop to a 'for' loop.
	- #undef HANDLE_PGTBL.
	- Clean the root page table after the MMU is disabled in the
	  check_pgtbl_mode_support() function.
	- Align __bss_start properly.
	- Remove unnecessary const for the paddr_to_pte(), pte_to_paddr(), and
	  pte_is_valid() functions.
	- Add the switch_stack_and_jump() macro and use it inside enable_mmu()
	  before the jump to the cont_after_mmu_is_enabled() function.

---
Changes in V5:
  * Rebase the patch series on top of current staging.
  * Update the cover letter: remove the info about the patches the MMU
    patch series was based on, as they were merged into staging.
  * Add a new patch with a description of the VM layout for RISC-V.
  * Indent the fields of the pte_t struct.
  * Rename addr_to_pte() and ppn_to_paddr() to match their content.
---
Changes in V4:
  * Use the GB() macro instead of defining SZ_1G.
  * Hardcode XEN_VIRT_START and add a comment (ADDRESS_SPACE_END + 1 - GB(1)).
  * Remove the unnecessary 'asm' word at the end of the #error.
  * Encapsulate the pte_t definition in a struct.
  * Rename addr_to_ppn() to ppn_to_paddr().
  * Change the type of the paddr argument from const unsigned long to paddr_t.
  * Update the prototype of pte_to_paddr().
  * Calculate the size of the Xen binary based on the number of page tables.
  * Use unsigned int instead of uint32_t, as the fixed-width type isn't
    warranted here.
  * Remove the extern of bss_{start,end}, as they aren't used in mm.c anymore.
  * Fix code style.
  * Add an argument for the HANDLE_PGTBL macro instead of the curr_lvl_num
    variable.
  * Mark enable_mmu() noinline to prevent it from being inlined under
    link-time optimization, given the nature of enable_mmu().
  * Add a function to check that SATP_MODE is supported.
  * Update the commit message.
  * Update setup_initial_pagetables() to set the correct PTE flags in one
    pass, instead of calling setup_pte_permissions() after
    setup_initial_pagetables(), as setup_initial_pagetables() isn't used to
    change permission flags.
---
Changes in V3:
  * Update the cover letter message: the patch series doesn't depend on
    [ RISC-V basic exception handling implementation ], as it was decided
    to enable the MMU before implementing exception handling. Also, the
    MMU patch series is based on two other patches which weren't merged:
    [1] and [2].
  - Update the commit message for [ [PATCH v3 1/3]
    xen/riscv: introduce setup_initial_pages ].
  - Update the definition of the pte_t structure to have a proper size of
    pte_t in the case of RV32.
  - Update asm/mm.h with new functions and remove unnecessary 'extern'.
  - Remove the LEVEL_* macros, as the XEN_PT_LEVEL_* ones are enough.
  - Update paddr_to_pte() to receive permissions as an argument.
  - Add a check that map_start and pa_start are properly aligned.
  - Move the defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, and PTE_PPN_SHIFT
    to <asm/page-bits.h>.
  - Rename PTE_SHIFT to PTE_PPN_SHIFT.
  - Refactor setup_initial_pagetables(): map all LINK addresses to LOAD
    addresses and afterwards set up the PTE permissions for the sections;
    update the check that the linker and load addresses don't overlap.
  - Refactor setup_initial_mapping(): allocate page tables 'dynamically'
    when necessary.
  - Rewrite enable_mmu() in C; add a check that map_start and pa_start are
    aligned on a 4k boundary.
  - Update the comment for the setup_initial_pagetables() function.
  - Add RV_STAGE1_MODE to support different MMU modes.
  - Update the commit message to note that the MMU is also enabled here.
  - Set XEN_VIRT_START very high so it doesn't overlap with the load
    address range.
  - Align the bss section.
---
Changes in V2:
  * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and introduce
    XEN_PT_LEVEL_*() and LEVEL_* instead.
  * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*().
  * Remove the clear_pagetables() function, as the page tables are zeroed
    during .bss initialization.
  * Rename _setup_initial_pagetables() to setup_initial_mapping().
  * Make PTE_DEFAULT equal to RX.
  * Update the prototype of setup_initial_mapping(): (..., bool writable) ->
    (..., UL flags).
  * Update the calls of setup_initial_mapping() according to the new
    prototype.
  * Remove the unnecessary call of:
    _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
  * Define index* in the loop of setup_initial_mapping().
  * Remove the attribute "__attribute__((section(".entry")))" for
    setup_initial_pagetables(), as we don't have such a section.
  * Make the arguments of paddr_to_pte() and pte_is_valid() const.
  * Use <xen/kernel.h> instead of declaring extern unsigned long _stext,
    _etext, _srodata, _erodata.
  * Update 'extern unsigned long __init_begin' to
    'extern unsigned long __init_begin[]'.
  * Use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))".
  * Set __section(".bss.page_aligned") for the page table arrays.
  * Fix indentations.
  * Change '__attribute__((section(".entry")))' to '__init'.
  * Remove the alignment of {map,pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
    setup_initial_mapping(), as they should already be aligned.
  * Remove clear_pagetables(), as the initial page tables will be zeroed
    during bss initialization.
  * Remove __attribute__((section(".entry"))) for setup_initial_pagetables(),
    as there is no such section in xen.lds.S.
  * Update the argument of pte_is_valid() to "const pte_t *p".
  * Remove the patch "[PATCH v1 3/3] automation: update RISC-V smoke test"
    from the patch series, as a simplified approach for the RISC-V smoke
    test was introduced by Andrew Cooper.
  * Add the patch [ xen/riscv: remove dummy_bss variable ], as there is no
    sense in the dummy_bss variable after the introduction of the initial
    page tables.
---

Oleksii Kurochko (5):
  xen/riscv: add VM space layout
  xen/riscv: introduce setup_initial_pages
  xen/riscv: align __bss_start
  xen/riscv: setup initial pagetables
  xen/riscv: remove dummy_bss variable

 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  50 ++++-
 xen/arch/riscv/include/asm/current.h   |  11 +
 xen/arch/riscv/include/asm/mm.h        |   9 +
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  58 ++++++
 xen/arch/riscv/include/asm/processor.h |   5 +
 xen/arch/riscv/mm.c                    | 277 +++++++++++++++++++++++++
 xen/arch/riscv/setup.c                 |  22 +-
 xen/arch/riscv/xen.lds.S               |   4 +
 10 files changed, 438 insertions(+), 9 deletions(-)
 create mode 100644 xen/arch/riscv/include/asm/current.h
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 17:09:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 17:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533493.830250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9nT-00035W-In; Thu, 11 May 2023 17:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533493.830250; Thu, 11 May 2023 17:09:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9nT-00035K-Fz; Thu, 11 May 2023 17:09:43 +0000
Received: by outflank-mailman (input) for mailman id 533493;
 Thu, 11 May 2023 17:09:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bPy6=BA=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1px9nS-0002Li-24
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 17:09:42 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9dc7fa0c-f01e-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 19:09:40 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2ac81d2bfbcso95684911fa.3
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 10:09:40 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 c9-20020a05651c014900b002aa3cff0529sm2443830ljd.74.2023.05.11.10.09.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 May 2023 10:09:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9dc7fa0c-f01e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683824980; x=1686416980;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nWsC0H8EDMmSieqZ7gOMXlH4Th+2i/GWuOb1eb4Ax/M=;
        b=RJ+GzWd2+M/UA2GASGdkSGFGLNbF7ILUaarJLuLahmcAmZLwXJBwplyASaKMNJL4wo
         8cy3Jw7hg/5gzdrZKh8iAceEJHbz3qZ7+8jwpX/g6qVBog732QR9JelzCULQm37M06Pm
         h0pE+D6mrelzUAcZFzkXdH1WuNu26DRlR8CaRy1APiDm6/K651UAKmt7EuoZy/YPRnGE
         /mbHWvFMFZdOOu8A2n3RCOWhjeTLABgLlNiOFWh0gGVv9qDV1uSFM6nM0iX2wBVUe5nK
         I6bOtd5P8x0G0D2q4LE/j5uHoElPe5G+bMfabqVDqQk092oFWOomTwtb5SrAK8l9SHsX
         WlXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683824980; x=1686416980;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=nWsC0H8EDMmSieqZ7gOMXlH4Th+2i/GWuOb1eb4Ax/M=;
        b=HBMXjZ6YlaBdMi8wHZbCHbwYTDdT/A8L/7FTZyVZvJwyD/Xg9rM1N1IG0u+AcbSKJI
         IlFlb/5MCzL+cfVc75E9gNNAfZ89N2nUhtNQUesEON3WC/9UNfxRt/0r15Qe+K9pReD8
         zyGiRMr7KzgBf2O8M6Ij5bdfOa3fXznfXaMc8NzSyx/sWlMXvlZCoet+NRGrBYj2caFJ
         XOZBkQLF79HFBlYAFWBVWAQBjYeqXZLjRRDc7XI/ghjCWvRxRTgh9kXq5defmlELTC/7
         1jBMunerRob/jWPtNqmqqJDoWvyzo8ECbcVq4cJxwe7GIAxOb7O6IduhtEPCi3Jqag17
         HLEw==
X-Gm-Message-State: AC+VfDz55vA1Ze00v/5dxOxoZXN5CkUCvL/O+TacM39/Tk5qBefIj5Bp
	L/xkdI7RCwHOh0+L2OkuGJQTWCuHMsM=
X-Google-Smtp-Source: ACHHUZ5t9Db1MsuXcfjj1l/E9PFrYldTbbi+EUASpKkr9atGawlHMITYeexvQlrD8+JnnfEbd2hQLA==
X-Received: by 2002:a2e:8456:0:b0:2a7:b163:6a40 with SMTP id u22-20020a2e8456000000b002a7b1636a40mr3441558ljh.12.1683824979545;
        Thu, 11 May 2023 10:09:39 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v7 3/5] xen/riscv: align __bss_start
Date: Thu, 11 May 2023 20:09:31 +0300
Message-Id: <2e9018989c628a519aadeae150786efe5e8054ab.1683824347.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1683824347.git.oleksii.kurochko@gmail.com>
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The .bss clearing loop requires __bss_start to be properly aligned.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V7:
 * the patch was introduced in the current patch series.
---
 xen/arch/riscv/xen.lds.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index fe475d096d..f9d89b69b9 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -137,6 +137,7 @@ SECTIONS
     __init_end = .;
 
     .bss : {                     /* BSS */
+        . = ALIGN(POINTER_ALIGN);
         __bss_start = .;
         *(.bss.stack_aligned)
         . = ALIGN(PAGE_SIZE);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 17:09:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 17:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533495.830256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9nT-00038s-WB; Thu, 11 May 2023 17:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533495.830256; Thu, 11 May 2023 17:09:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9nT-00037l-OV; Thu, 11 May 2023 17:09:43 +0000
Received: by outflank-mailman (input) for mailman id 533495;
 Thu, 11 May 2023 17:09:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bPy6=BA=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1px9nS-0002Li-9Y
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 17:09:42 +0000
Received: from mail-lf1-x12d.google.com (mail-lf1-x12d.google.com
 [2a00:1450:4864:20::12d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9dea8256-f01e-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 19:09:40 +0200 (CEST)
Received: by mail-lf1-x12d.google.com with SMTP id
 2adb3069b0e04-4f139de8cefso47444669e87.0
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 10:09:40 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 c9-20020a05651c014900b002aa3cff0529sm2443830ljd.74.2023.05.11.10.09.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 May 2023 10:09:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9dea8256-f01e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683824980; x=1686416980;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/UWycGli8sXbmjgB8uls+zGw7KkS5qUAPDnnmK+Uofw=;
        b=CFr0ljiWtQtZURNRmK+W0rNds1/uOMmxJC64s61zAMi4+24iYrCn4FBZr99w2rtNIk
         o8ufeYa4NDlXjF6jI0M6HxG8/AbLuv9RIo9quOjL+KaUQ7yMr3pqxoKbMgqu+uHJXevF
         E7JvsyetWYBiqvXzgmtX+OwL/4uekO/CO8421ruyQ7kOmZ3JacNaoPxm6U37bOHvnKDQ
         jPsq0uj+PxYf8+89xWHDtPoEM2mkbvK2rIvXgYC/sWYrqNBjnYf34Kv41wGBeroFbYOO
         LTwU+0OAXvPhCyXanDAK82CqYNUHuRVYStm9GB8Q/yWUEvxUbRCzvp3jFKdbnR0ZGAs+
         R2Sg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683824980; x=1686416980;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/UWycGli8sXbmjgB8uls+zGw7KkS5qUAPDnnmK+Uofw=;
        b=iwhNNct8dT9J5yOTjK63g1JDw7h59uhZrM8pi/nheSkCfsv38tSX0qx4v9OJz7kzI6
         3edu+GZ6xyfIsdnj2/FyCeCK36jGc2dQPL9gmrZTvKCHkHJiM9osuv5XBvgqgyBEfaMn
         kl3iENRngm/UAWmQ83X+o1G7U2UztmG1W8jUkNKMoJ0UNo+9m+Jli6jYmElaVVjA9hB6
         Yr/kdyUIDVgHWojlt1AXGlfJc6z+eS1JT3+sE6eYz16/Elit6CNO2pVGM3KPQeI0q5ds
         V7WPmIw3q7OkzC6knnrQ0EJFhCld0fbILR/fNjwSIggWoPB95ZaIdvnvWmjjXoxtiemG
         +mZw==
X-Gm-Message-State: AC+VfDzLNYiITQzNdjaYdqNtHYacwC3ZWj4BaRZCS2jOjVf5H5jpMOAq
	A/0soarl8CWrsJ2zZO6R2+XgK3yNv+E=
X-Google-Smtp-Source: ACHHUZ4WsiO6Zk3o1Dla3fKJOAOJt6pH061LDn1vWRyukcylQsuRZeFs8TO6qz8yfwniP+D95fSNGw==
X-Received: by 2002:a2e:94cf:0:b0:2a7:6b40:7ea2 with SMTP id r15-20020a2e94cf000000b002a76b407ea2mr3084289ljh.14.1683824980238;
        Thu, 11 May 2023 10:09:40 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v7 4/5] xen/riscv: setup initial pagetables
Date: Thu, 11 May 2023 20:09:32 +0300
Message-Id: <eb9dee5bcc5d57822abb3aa7dea85421d9f7467e.1683824347.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1683824347.git.oleksii.kurochko@gmail.com>
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch does two things:
1. Set up the initial page tables.
2. Enable the MMU, which ends up executing the code in
   cont_after_mmu_is_enabled().

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V7:
 - Nothing changed. Only rebase
---
Changes in V6:
 - Nothing changed. Only rebase
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 - update the commit message that MMU is also enabled here
 - remove early_printk("All set up\n") as it was moved to
   cont_after_mmu_is_enabled() function after MMU is enabled.
---
Changes in V2:
 * Update the commit message
---
 xen/arch/riscv/setup.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 315804aa87..cf5dc5824e 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -21,7 +21,10 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 {
     early_printk("Hello from C env\n");
 
-    early_printk("All set up\n");
+    setup_initial_pagetables();
+
+    enable_mmu();
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 17:09:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 17:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533492.830236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9nS-0002df-7B; Thu, 11 May 2023 17:09:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533492.830236; Thu, 11 May 2023 17:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1px9nS-0002dA-15; Thu, 11 May 2023 17:09:42 +0000
Received: by outflank-mailman (input) for mailman id 533492;
 Thu, 11 May 2023 17:09:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bPy6=BA=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1px9nR-0002Lh-3g
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 17:09:41 +0000
Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com
 [2a00:1450:4864:20::22b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9d54c99e-f01e-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 19:09:39 +0200 (CEST)
Received: by mail-lj1-x22b.google.com with SMTP id
 38308e7fff4ca-2ac8c0fbb16so82671501fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 10:09:39 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 c9-20020a05651c014900b002aa3cff0529sm2443830ljd.74.2023.05.11.10.09.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 11 May 2023 10:09:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d54c99e-f01e-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683824979; x=1686416979;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=vwLD+kKE8NLXZBOAdTX4gIPx6EQsrHZX6eN9m6ta0EI=;
        b=e7lb4FZv2O4Ig0A7pDKIpcy5DRfh+htlcl787kBs1VGjAJkSj2U1fFG/TFtTI+JOQw
         hEfzU8z4VOf9+w6LDiiHqT6WKx40OTob42n2EPqR4Lxzz9IMStTYZAMDOnJHCzsqojXU
         0mDmA44xVFkZ7hGEKiqeJf3u7wPJux/pur1/JKWyHkUH6/C1ruEL5pZBAvIdSJ5sw6cs
         ka1KgpvjG2kki6Jin4b/MlCP1xLctuWeKJ9b7FnGkiO2C0yvFDTBq8S1iN8UPPEWWf7t
         2Cyojv+eLSQTM8qmf/bdtKT8diOVQOEki4DY1/+cPrwjBmuhuOQPaylSYZXsm2YJRYyI
         /u2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683824979; x=1686416979;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=vwLD+kKE8NLXZBOAdTX4gIPx6EQsrHZX6eN9m6ta0EI=;
        b=EHFvnbUL2goxw3GKuicldrFOPFxvuRqTpT05qAroB3FXsttHvykOjfKAKBS24538Lc
         bap2zHOhGoi23v9pXAKH+etKEA3tCu1C/w6SB3tg1AKdrgb8URDCpC0hFslv6l9EcDvq
         CewZ/rCQ2XVZDd5XyvQEKFi0iQRSvKQne3w1JT7dVYBoi+kslGDoxfmM/QI5EtBRG8Je
         4gdZqmOWVL9d+IYVdXvugjOoaxtYRf7vgV66bzI26Ez32PhaN/kpCsUBU5rq+UXcTjRd
         0Prs/+DghVJQlDq75qWU+vdwWUbm8hh86vnNqnklBzyQEJcex5q7WGaIEAAqzwwDRNso
         nnqQ==
X-Gm-Message-State: AC+VfDzhA7zKNQ4OHFD3QVDYX4J+eIx82ZMb1VLHsyrTHbnduoWXLAIZ
	KPLYyAoWZocyYK3QpPNt1/T6v+OY7yw=
X-Google-Smtp-Source: ACHHUZ7uaG/9bFOJCEloPDk593SHBHPtihf5PvGSscZmr7THXwvexalXKaHPH4nnuWcNziE+IQGNVg==
X-Received: by 2002:a2e:9aca:0:b0:2a8:bc05:ab4 with SMTP id p10-20020a2e9aca000000b002a8bc050ab4mr3872501ljj.34.1683824978828;
        Thu, 11 May 2023 10:09:38 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v7 2/5] xen/riscv: introduce setup_initial_pages
Date: Thu, 11 May 2023 20:09:30 +0300
Message-Id: <632384e200b7de0fb4e2dae500a058c2a27628be.1683824347.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1683824347.git.oleksii.kurochko@gmail.com>
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The idea was taken from xvisor, but the following changes
were made:
* Use only the minimal part of the code sufficient to enable the MMU.
* Rename the {_}setup_initial_pagetables functions.
* Add an argument for setup_initial_mapping to have
  an opportunity to set PTE flags.
* Update the setup_initial_pagetables function to map sections
  with the correct PTE flags.
* Rewrite enable_mmu() in C.
* Map the linker address range to the load address range without
  a 1:1 mapping. It will be 1:1 only in the case when
  load_start_addr is equal to linker_start_addr.
* Add safety checks such as:
  * Xen's size is less than the page size.
  * The linker address range doesn't overlap the load address
    range.
* Rework the macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}.
* Change PTE_LEAF_DEFAULT to RW instead of RWX.
* Remove phys_offset as it is not used now.
* Remove the alignment of {map,pa}_start &= XEN_PT_LEVEL_MAP_MASK(0)
  in setup_initial_mapping(), as they should already be aligned.
  Add a check that {map,pa}_start are aligned.
* Remove clear_pagetables(), as the initial page tables will be
  zeroed during bss initialization.
* Remove __attribute__((section(".entry"))) for setup_initial_pagetables(),
  as there is no such section in xen.lds.S.
* Update the argument of pte_is_valid() to "const pte_t *p".
* Add a check that Xen's load address is aligned on a 4k boundary.
* Refactor setup_initial_pagetables() so it maps the linker address
  range to the load address range, and afterwards sets the needed
  permissions for specific sections ( such as .text, .rodata, etc );
  otherwise RW permissions are set by default.
* Add a function to check that the requested SATP_MODE is supported.
Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V7:
	- Define CONFIG_PAGING_LEVELS=2 for SATP_MODE_SV32.
	- Update the switch_stack_and_jump() macro: add constraint 'X' for fn,
	  a memory clobber, and wrap it into do {} while ( false ).
	- Add noreturn to the definition of enable_mmu().
	- Update pt_index() to "(pt_linear_offset(lvl, (va)) & VPN_MASK)".
	- Expand the pte_to_addr()/addr_to_pte() macros in the paddr_to_pte()
	  and pte_to_paddr() functions and drop them afterwards.
	- Remove the inclusion of <asm/config.h>.
	- Update the commit message around the definition of PGTBL_INITIAL_COUNT.
	- Remove PGTBL_ENTRY_AMOUNT and use PAGETABLE_ENTRIES instead.
	- Code style fixes.
	- Remove the permission argument of the setup_initial_mapping() function.
	- Remove calc_pgtbl_lvls_num() as it's no longer needed after the
	  definition of CONFIG_PAGING_LEVELS.
	- Introduce sfence_vma().
	- Remove the satp_mode argument from check_pgtbl_mode_support() and use
	  RV_STAGE1_MODE directly instead.
	- Change .align to .p2align.
	- Drop the inclusion of <asm/asm-offsets.h> from head.S; it isn't
	  necessary for the current patch series.
---
Changes in V6:
 	- move PAGE_SHIFT, PADDR_BITS to the top of page-bits.h
 	- cast argument x of the pte_to_addr() macro to paddr_t to avoid risk of overflow for RV32
 	- update type of num_levels from 'unsigned long' to 'unsigned int'
 	- define PGTBL_INITIAL_COUNT as ((CONFIG_PAGING_LEVELS - 1) + 1)
 	- update the type of permission arguments: changed from 'unsigned long' to 'unsigned int'
 	- fix code style
 	- switch the 'while' loop to a 'for' loop
 	- undef HANDLE_PGTBL
 	- clean root page table after MMU is disabled in check_pgtbl_mode_support() function
 	- align __bss_start properly
 	- remove unnecessary const for the paddr_to_pte, pte_to_paddr, pte_is_valid functions
 	- add the switch_stack_and_jump() macro and use it inside enable_mmu() before
 	  jumping to cont_after_mmu_is_enabled()
---
Changes in V5:
	* Indent fields of pte_t struct
	* Rename addr_to_pte() and ppn_to_paddr() to match their content
---
Changes in V4:
  * use the GB() macro instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove unnecessary 'asm' word at the end of #error
  * encapsulate pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr().
  * change type of paddr argument from const unsigned long to paddr_t
  * update the prototype of pte_to_paddr().
  * calculate the size of the Xen binary based on the number of page tables
  * use unsigned int instead of uint32_t as the fixed-width type
    isn't warranted.
  * remove extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add an argument for the HANDLE_PGTBL macro instead of the curr_lvl_num variable
  * mark enable_mmu() as noinline to prevent inlining under link-time
    optimization because of the nature of enable_mmu()
  * add function to check that SATP_MODE is supported.
  * update the commit message
  * update setup_initial_pagetables to set correct PTE flags in one pass
    instead of calling setup_pte_permissions after setup_initial_pagetables()
    as setup_initial_pagetables() isn't used to change permission flags.
---
Changes in V3:
 - update definition of pte_t structure to have a proper size of pte_t
   in case of RV32.
 - update asm/mm.h with new functions and remove unnecessary 'extern'.
 - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
 - update paddr_to_pte() to receive permissions as an argument.
 - add a check that map_start & pa_start are properly aligned.
 - move the defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to
   <asm/page-bits.h>
 - Rename PTE_SHIFT to PTE_PPN_SHIFT
 - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses
   and afterwards set up PTE permissions for sections; update the check that
   linker and load addresses don't overlap.
 - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is
   necessary.
 - rewrite enable_mmu in C; add the check that map_start and pa_start are
   aligned on 4k boundary.
 - update the comment for the setup_initial_pagetable function
 - Add RV_STAGE1_MODE to support different MMU modes
 - set XEN_VIRT_START very high to not overlap with load address range
 - align bss section
---
Changes in V2:
 * update the commit message:
 * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and
   introduce XEN_PT_LEVEL_*() and LEVEL_* instead
 * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*()
 * Remove the clear_pagetables() function as pagetables were zeroed during
   .bss initialization
 * Rename _setup_initial_pagetables() to setup_initial_mapping()
 * Make PTE_DEFAULT equal to RX.
 * Update prototype of setup_initial_mapping(..., bool writable) ->
   setup_initial_mapping(..., UL flags)
 * Update calls of setup_initial_mapping according to new prototype
 * Remove unnecessary call of:
   _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
 * Define index* in the loop of setup_initial_mapping
 * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
   as we don't have such section
 * make arguments of paddr_to_pte() and pte_is_valid() as const.
 * make xen_second_pagetable static.
 * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
 * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
 * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
 * set __section(".bss.page_aligned") for page tables arrays
 * fix indentations
 * Change '__attribute__((section(".entry")))' to '__init'
 * Remove phys_offset as it isn't used now.
 * Remove the alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
   setup_initial_mapping() as they should already be aligned.
 * Remove clear_pagetables() as initial pagetables will be
   zeroed during bss initialization
 * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
   as there is no such section in xen.lds.S
 * Update the argument of pte_is_valid() to "const pte_t *p"
---
 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  14 +-
 xen/arch/riscv/include/asm/current.h   |  11 +
 xen/arch/riscv/include/asm/mm.h        |   9 +
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  58 ++++++
 xen/arch/riscv/include/asm/processor.h |   5 +
 xen/arch/riscv/mm.c                    | 277 +++++++++++++++++++++++++
 xen/arch/riscv/setup.c                 |  11 +
 xen/arch/riscv/xen.lds.S               |   3 +
 10 files changed, 398 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/current.h
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 443f6bf15f..956ceb02df 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += entry.o
+obj-y += mm.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index a53833afee..01fd6a2818 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -75,12 +75,24 @@
   name:
 #endif
 
-#define XEN_VIRT_START  _AT(UL, 0x80200000)
+#ifdef CONFIG_RISCV_64
+#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
+#else
+#error "RV32 isn't supported"
+#endif
 
 #define SMP_CACHE_BYTES (1 << 6)
 
 #define STACK_SIZE PAGE_SIZE
 
+#ifdef CONFIG_RISCV_64
+#define CONFIG_PAGING_LEVELS 3
+#define RV_STAGE1_MODE SATP_MODE_SV39
+#else
+#define CONFIG_PAGING_LEVELS 2
+#define RV_STAGE1_MODE SATP_MODE_SV32
+#endif
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/include/asm/current.h b/xen/arch/riscv/include/asm/current.h
new file mode 100644
index 0000000000..d87e6717e0
--- /dev/null
+++ b/xen/arch/riscv/include/asm/current.h
@@ -0,0 +1,11 @@
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#define switch_stack_and_jump(stack, fn) do {               \
+    asm volatile (                                          \
+            "mv sp, %0\n"                                   \
+            "j " #fn :: "r" (stack), "X" (fn) : "memory" ); \
+    unreachable();                                          \
+} while ( false )
+
+#endif /* __ASM_CURRENT_H */
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
new file mode 100644
index 0000000000..e16ce66fae
--- /dev/null
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -0,0 +1,9 @@
+#ifndef _ASM_RISCV_MM_H
+#define _ASM_RISCV_MM_H
+
+void setup_initial_pagetables(void);
+
+void enable_mmu(void);
+void cont_after_mmu_is_enabled(void);
+
+#endif /* _ASM_RISCV_MM_H */
diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
index 1801820294..4a3e33589a 100644
--- a/xen/arch/riscv/include/asm/page-bits.h
+++ b/xen/arch/riscv/include/asm/page-bits.h
@@ -4,4 +4,14 @@
 #define PAGE_SHIFT              12 /* 4 KiB Pages */
 #define PADDR_BITS              56 /* 44-bit PPN */
 
+#ifdef CONFIG_RISCV_64
+#define PAGETABLE_ORDER         (9)
+#else /* CONFIG_RISCV_32 */
+#define PAGETABLE_ORDER         (10)
+#endif
+
+#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
+
+#define PTE_PPN_SHIFT           10
+
 #endif /* __RISCV_PAGE_BITS_H__ */
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
new file mode 100644
index 0000000000..68ebe0b7b3
--- /dev/null
+++ b/xen/arch/riscv/include/asm/page.h
@@ -0,0 +1,58 @@
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <xen/const.h>
+#include <xen/types.h>
+
+#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
+
+#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
+#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
+#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
+#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
+#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
+
+#define PTE_VALID                   BIT(0, UL)
+#define PTE_READABLE                BIT(1, UL)
+#define PTE_WRITABLE                BIT(2, UL)
+#define PTE_EXECUTABLE              BIT(3, UL)
+#define PTE_USER                    BIT(4, UL)
+#define PTE_GLOBAL                  BIT(5, UL)
+#define PTE_ACCESSED                BIT(6, UL)
+#define PTE_DIRTY                   BIT(7, UL)
+#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
+
+#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
+#define PTE_TABLE                   (PTE_VALID)
+
+/* Calculate the offsets into the pagetables for a given VA */
+#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
+
+#define pt_index(lvl, va) (pt_linear_offset(lvl, (va)) & VPN_MASK)
+
+/* Page Table entry */
+typedef struct {
+#ifdef CONFIG_RISCV_64
+    uint64_t pte;
+#else
+    uint32_t pte;
+#endif
+} pte_t;
+
+static inline pte_t paddr_to_pte(paddr_t paddr,
+                                 unsigned int permissions)
+{
+    return (pte_t) { .pte = (paddr >> PAGE_SHIFT) << PTE_PPN_SHIFT | permissions };
+}
+
+static inline paddr_t pte_to_paddr(pte_t pte)
+{
+    return ((paddr_t)pte.pte >> PTE_PPN_SHIFT) << PAGE_SHIFT;
+}
+
+static inline bool pte_is_valid(pte_t p)
+{
+    return p.pte & PTE_VALID;
+}
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/xen/arch/riscv/include/asm/processor.h b/xen/arch/riscv/include/asm/processor.h
index a71448e02e..187782bc75 100644
--- a/xen/arch/riscv/include/asm/processor.h
+++ b/xen/arch/riscv/include/asm/processor.h
@@ -69,6 +69,11 @@ static inline void die(void)
         wfi();
 }
 
+static inline void sfence_vma(void)
+{
+    __asm__ __volatile__ ("sfence.vma" ::: "memory");
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
new file mode 100644
index 0000000000..f2039db618
--- /dev/null
+++ b/xen/arch/riscv/mm.c
@@ -0,0 +1,277 @@
+#include <xen/compiler.h>
+#include <xen/init.h>
+#include <xen/kernel.h>
+#include <xen/pfn.h>
+
+#include <asm/early_printk.h>
+#include <asm/csr.h>
+#include <asm/current.h>
+#include <asm/mm.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+
+struct mmu_desc {
+    unsigned int num_levels;
+    unsigned int pgtbl_count;
+    pte_t *next_pgtbl;
+    pte_t *pgtbl_base;
+};
+
+extern unsigned char cpu0_boot_stack[STACK_SIZE];
+
+#define PHYS_OFFSET ((unsigned long)_start - XEN_VIRT_START)
+#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
+#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
+
+
+/*
+ * It is expected that Xen won't be larger than 2 MB.
+ * The check in xen.lds.S guarantees that.
+ * At least 3 page tables (in the case of Sv39) are needed to cover 2 MB:
+ * one for each page table level, with PAGE_SIZE = 4 KB.
+ *
+ * One L0 page table can cover 2 MB (512 entries per page table * PAGE_SIZE).
+ *
+ * One more page table might be needed in case Xen's load address
+ * isn't 2 MB aligned.
+ */
+#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_root[PAGETABLE_ENTRIES];
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_nonroot[PGTBL_INITIAL_COUNT * PAGETABLE_ENTRIES];
+
+#define HANDLE_PGTBL(curr_lvl_num)                                          \
+    index = pt_index(curr_lvl_num, page_addr);                              \
+    if ( pte_is_valid(pgtbl[index]) )                                       \
+    {                                                                       \
+        /* Find L{ 0-3 } table */                                           \
+        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);                        \
+    }                                                                       \
+    else                                                                    \
+    {                                                                       \
+        /* Allocate new L{0-3} page table */                                \
+        if ( mmu_desc->pgtbl_count == PGTBL_INITIAL_COUNT )                 \
+        {                                                                   \
+            early_printk("(XEN) No initial table available\n");             \
+            /* panic(), BUG() or ASSERT() aren't ready now. */              \
+            die();                                                          \
+        }                                                                   \
+        mmu_desc->pgtbl_count++;                                            \
+        pgtbl[index] = paddr_to_pte((unsigned long)mmu_desc->next_pgtbl,    \
+                                    PTE_VALID);                             \
+        pgtbl = mmu_desc->next_pgtbl;                                       \
+        mmu_desc->next_pgtbl += PAGETABLE_ENTRIES;                          \
+    }
+
+static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
+                                         unsigned long map_start,
+                                         unsigned long map_end,
+                                         unsigned long pa_start)
+{
+    unsigned int index;
+    pte_t *pgtbl;
+    unsigned long page_addr;
+
+    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
+    {
+        early_printk("(XEN) Xen should be loaded at a 4k boundary\n");
+        die();
+    }
+
+    if ( map_start & ~XEN_PT_LEVEL_MAP_MASK(0) ||
+         pa_start & ~XEN_PT_LEVEL_MAP_MASK(0) )
+    {
+        early_printk("(XEN) map and pa start addresses should be aligned\n");
+        /* panic(), BUG() or ASSERT() aren't ready now. */
+        die();
+    }
+
+    for ( page_addr = map_start;
+          page_addr < map_end;
+          page_addr += XEN_PT_LEVEL_SIZE(0) )
+    {
+        pgtbl = mmu_desc->pgtbl_base;
+
+        switch ( mmu_desc->num_levels )
+        {
+        case 4: /* Level 3 */
+            HANDLE_PGTBL(3);
+        case 3: /* Level 2 */
+            HANDLE_PGTBL(2);
+        case 2: /* Level 1 */
+            HANDLE_PGTBL(1);
+        case 1: /* Level 0 */
+            {
+                unsigned long paddr = (page_addr - map_start) + pa_start;
+                unsigned int permissions = PTE_LEAF_DEFAULT;
+                pte_t pte_to_be_written;
+
+                index = pt_index(0, page_addr);
+
+                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
+                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
+                    permissions =
+                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
+
+                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
+                    permissions = PTE_READABLE | PTE_VALID;
+
+                pte_to_be_written = paddr_to_pte(paddr, permissions);
+
+                if ( !pte_is_valid(pgtbl[index]) )
+                    pgtbl[index] = pte_to_be_written;
+                else
+                {
+                    if ( (pgtbl[index].pte ^ pte_to_be_written.pte) &
+                        ~(PTE_DIRTY | PTE_ACCESSED) )
+                    {
+                        early_printk("PTE override has occurred\n");
+                        /* panic(), <asm/bug.h> aren't ready now. */
+                        die();
+                    }
+                }
+            }
+        }
+    }
+}
+#undef HANDLE_PGTBL
+
+static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
+                                            unsigned long load_start)
+{
+    bool is_mode_supported = false;
+    unsigned int index;
+    unsigned int page_table_level = (mmu_desc->num_levels - 1);
+    unsigned long level_map_mask = XEN_PT_LEVEL_MAP_MASK(page_table_level);
+
+    unsigned long aligned_load_start = load_start & level_map_mask;
+    unsigned long aligned_page_size = XEN_PT_LEVEL_SIZE(page_table_level);
+    unsigned long xen_size = (unsigned long)(_end - start);
+
+    if ( (load_start + xen_size) > (aligned_load_start + aligned_page_size) )
+    {
+        early_printk("please place Xen within a single superpage, i.e. "
+                     "within XEN_PT_LEVEL_SIZE( {L3 | L2 | L1} ) "
+                     "depending on the expected SATP_MODE\n"
+                     "XEN_PT_LEVEL_SIZE is defined in <asm/page.h>\n");
+        die();
+    }
+
+    index = pt_index(page_table_level, aligned_load_start);
+    stage1_pgtbl_root[index] = paddr_to_pte(aligned_load_start,
+                                            PTE_LEAF_DEFAULT | PTE_EXECUTABLE);
+
+    sfence_vma();
+    csr_write(CSR_SATP,
+              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
+              RV_STAGE1_MODE << SATP_MODE_SHIFT);
+
+    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == RV_STAGE1_MODE )
+        is_mode_supported = true;
+
+    csr_write(CSR_SATP, 0);
+
+    sfence_vma();
+
+    /* Clean MMU root page table */
+    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
+
+    return is_mode_supported;
+}
+
+/*
+ * setup_initial_pagetables:
+ *
+ * Build the initial page tables for Xen:
+ *  1. Calculate the number of page table levels.
+ *  2. Initialize the MMU description structure.
+ *  3. Check that the linker address range doesn't overlap
+ *     with the load address range.
+ *  4. Map all linker addresses to load addresses (the mapping
+ *     isn't necessarily 1:1; it is 1:1 only when a linker
+ *     address is equal to its load address) with
+ *     RW permissions by default.
+ *  5. Set up the proper PTE permissions for each section.
+ */
+void __init setup_initial_pagetables(void)
+{
+    struct mmu_desc mmu_desc = { CONFIG_PAGING_LEVELS, 0, NULL, NULL };
+
+    /*
+     * Accesses to _start and _end are always PC-relative, so reading
+     * them yields the load addresses of the start and end of Xen.
+     * LOAD_TO_LINK() must be used to get the linker addresses.
+     */
+    unsigned long load_start    = (unsigned long)_start;
+    unsigned long load_end      = (unsigned long)_end;
+    unsigned long linker_start  = LOAD_TO_LINK(load_start);
+    unsigned long linker_end    = LOAD_TO_LINK(load_end);
+
+    if ( (linker_start != load_start) &&
+         (linker_start <= load_end) && (load_start <= linker_end) )
+    {
+        early_printk("(XEN) linker and load address ranges overlap\n");
+        die();
+    }
+
+    if ( !check_pgtbl_mode_support(&mmu_desc, load_start) )
+    {
+        early_printk("requested MMU mode isn't supported by CPU\n"
+                     "Please choose different in <asm/config.h>\n");
+        die();
+    }
+
+    mmu_desc.pgtbl_base = stage1_pgtbl_root;
+    mmu_desc.next_pgtbl = stage1_pgtbl_nonroot;
+
+    setup_initial_mapping(&mmu_desc,
+                          linker_start,
+                          linker_end,
+                          load_start);
+}
+
+void __init noreturn noinline enable_mmu(void)
+{
+    /*
+     * Calculate the link-time address of the mmu_is_enabled
+     * label and update CSR_STVEC with it.
+     * The MMU is configured so that linker addresses are mapped
+     * to load addresses, so when linker addresses are not equal
+     * to load addresses, enabling the MMU causes
+     * an exception and a jump to the link-time address.
+     * Otherwise, if load addresses are equal to linker addresses, the
+     * code after the mmu_is_enabled label runs without an exception.
+     */
+    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
+
+    /* Ensure page table writes precede loading the SATP */
+    sfence_vma();
+
+    /* Enable the MMU and load the new pagetable for Xen */
+    csr_write(CSR_SATP,
+              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
+              RV_STAGE1_MODE << SATP_MODE_SHIFT);
+
+    asm volatile ( ".p2align 2" );
+ mmu_is_enabled:
+    /*
+     * The stack should be re-initialized because:
+     * 1. Right now the address of the stack is a load-time
+     *    address, which causes an issue when the load start address
+     *    isn't equal to the linker start address.
+     * 2. The addresses on the stack are all load-time relative, which
+     *    is an issue when the load start address isn't equal to the
+     *    linker start address.
+     *
+     * We can't return to the caller because the stack was reset
+     * and the caller may have stashed some variables on it.
+     * Jump to a brand new function instead.
+     */
+
+    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
+                          cont_after_mmu_is_enabled);
+}
+
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 3786f337e0..315804aa87 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -2,6 +2,7 @@
 #include <xen/init.h>
 
 #include <asm/early_printk.h>
+#include <asm/mm.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -26,3 +27,13 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
     unreachable();
 }
+
+void __init noreturn cont_after_mmu_is_enabled(void)
+{
+    early_printk("All set up\n");
+
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index 31e0d3576c..fe475d096d 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -172,3 +172,6 @@ ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
 
 ASSERT(!SIZEOF(.got),      ".got non-empty")
 ASSERT(!SIZEOF(.got.plt),  ".got.plt non-empty")
+
+ASSERT(_end - _start <= MB(2), "Xen too large for early-boot assumptions")
+
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 17:09:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 17:09:45 +0000
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v7 1/5] xen/riscv: add VM space layout
Date: Thu, 11 May 2023 20:09:29 +0300
Message-Id: <7b03dbf21718ed9c05859a629f4442167d74553c.1683824347.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1683824347.git.oleksii.kurochko@gmail.com>
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Also, an explanation was added about why the top VA bits are ignored.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V7:
 - Fix range of frametable range in RV64 layout.
 - Add ifdef SV39 to the RV64 layout comment to make it explicit that the
   description is for SV39 mode.
 - Add missed row in the RV64 layout table.
---
Changes in V6:
 - update comment above the RISCV-64 layout table
 - add Slot column to the table with RISCV-64 Layout
 - update RV-64 layout table.
---
Changes in V5:
* the patch was introduced in the current patch series.
---
 xen/arch/riscv/include/asm/config.h | 36 +++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 763a922a04..a53833afee 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -4,6 +4,42 @@
 #include <xen/const.h>
 #include <xen/page-size.h>
 
+/*
+ * RISC-V64 Layout:
+ *
+ * #ifdef SV39
+ *
+ * From the riscv-privileged doc:
+ *   When mapping between narrower and wider addresses,
+ *   RISC-V zero-extends a narrower physical address to a wider size.
+ *   The mapping between 64-bit virtual addresses and the 39-bit usable
+ *   address space of Sv39 is not based on zero-extension but instead
+ *   follows an entrenched convention that allows an OS to use one or
+ *   a few of the most-significant bits of a full-size (64-bit) virtual
+ *   address to quickly distinguish user and supervisor address regions.
+ *
+ * It means that:
+ *   top VA bits are simply ignored for the purpose of translating to PA.
+ *
+ * ============================================================================
+ *    Start addr    |   End addr        |  Size  | Slot       |area description
+ * ============================================================================
+ * FFFFFFFFC0800000 |  FFFFFFFFFFFFFFFF |1016 MB | L2 511     | Unused
+ * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
+ * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
+ * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
+ *                 ...                  |  1 GB  | L2 510     | Unused
+ * 0000003200000000 |  0000007f40000000 | 309 GB | L2 200-509 | Direct map
+ *                 ...                  |  1 GB  | L2 199     | Unused
+ * 0000003100000000 |  00000031C0000000 |  3 GB  | L2 196-198 | Frametable
+ *                 ...                  |  1 GB  | L2 195     | Unused
+ * 0000003080000000 |  00000030C0000000 |  1 GB  | L2 194     | VMAP
+ *                 ...                  | 194 GB | L2 0 - 193 | Unused
+ * ============================================================================
+ *
+ * #endif
+ */
+
 #if defined(CONFIG_RISCV_64)
 # define LONG_BYTEORDER 3
 # define ELFSIZE 64
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 17:09:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 17:09:45 +0000
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e5329c5-f01e-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683824981; x=1686416981;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sf2FBCUITLNv92K79Iv4iXww3Xtu3EP3+kJgTTV/WyQ=;
        b=C+0/yYQ/2pH7HIj1RH6MaN0sFqlcocZq3ErT3et2CrJb2vdQyi5GSMRCD4S1GjeLRM
         y2JFyo9CZuyEoxBWR0WQczTPUXNZNAkRqf9HZfArg+tFVY5jC56RsezQEuCBNTFx2jqA
         nq6zHBSc2hhxyTTwySCsxljuAazrqhXiY+7ELuTVYU8hmfsetnCJUUiAkz2H/zPQ6+z8
         C5z/Cnt2hm+Y+ymhw0MVgX6IXD0mr1rdhPo5TSc11lp+xjnd+vWl1fMw/TXMoef2Sk0C
         OI+es7txnV3+hXIQHQ5J9Sf2d28PKARiQ0e75hgmY5TeFIH08rpziXwaLxsBhvfyTSaG
         DA3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683824981; x=1686416981;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=sf2FBCUITLNv92K79Iv4iXww3Xtu3EP3+kJgTTV/WyQ=;
        b=Zyg9rcpFwGFdRn2E+uz1jE18YGaSa2WwJWYtzoqgpeASCb++LCXxwEDguvOZNHAhMn
         msslw9WfxS0L6v1ojDjpT/i9s9sI/mXtLcWMkKy4nDjeWiipQc1+t3BYF6kjX9mR8QF6
         PryoVYoJiifDm/ezWazfJfPdV4xC4iQY5Mr6O+GrP3XuyBDA6zwJBEFflkymOXclCjHJ
         MJ8uOlP2laOt9A8S9U6r/iHj4KiT/3YrJjBKPgmFK7pRZfC1IPlmTf2hdCC8ZxkkK85f
         8VzuTpCbixUia8v384aFV0C+po/TBbxMQtfnGKkpgFgVUxf506aP0Y9r4SqutWGYoGL0
         dhQA==
X-Gm-Message-State: AC+VfDxMa5wkisn9G1gajT06T2tDO2vzEhEpUHLTyqt0fC0Q8Ixda9YD
	iTO/a66sBZ2xMuhxsrMLLdIDeHSLWj0=
X-Google-Smtp-Source: ACHHUZ4EyFu8mwEU1QYN86KZyNdN0+84ERXHNRwWS2bKpDaLjYjmYkAZX6xNdgA/4cykJvE19Bs+AA==
X-Received: by 2002:a2e:7004:0:b0:2a7:7055:97f5 with SMTP id l4-20020a2e7004000000b002a7705597f5mr3382400ljc.0.1683824980936;
        Thu, 11 May 2023 10:09:40 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v7 5/5] xen/riscv: remove dummy_bss variable
Date: Thu, 11 May 2023 20:09:33 +0300
Message-Id: <a8bb8b567166e9906815958317770c2c41106fe9.1683824347.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1683824347.git.oleksii.kurochko@gmail.com>
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

After the introduction of the initial page tables there is no longer any
need for the dummy_bss variable, as the .bss section will not be empty
anymore.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V6:
 - Nothing changed. Only rebase
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 * patch was introduced in this patch series (v3).
---
Changes in V2:
 * patch was introduced in this patch series (v2).
---
 xen/arch/riscv/setup.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index cf5dc5824e..845d18d86f 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -8,14 +8,6 @@
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
-/*  
- * To be sure that .bss isn't zero. It will simplify code of
- * .bss initialization.
- * TODO:
- *   To be deleted when the first real .bss user appears
- */
-int dummy_bss __attribute__((unused));
-
 void __init noreturn start_xen(unsigned long bootcpu_id,
                                paddr_t dtb_addr)
 {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 17:52:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 17:52:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533524.830281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxASU-0002Tp-Ly; Thu, 11 May 2023 17:52:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533524.830281; Thu, 11 May 2023 17:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxASU-0002Ti-JE; Thu, 11 May 2023 17:52:06 +0000
Received: by outflank-mailman (input) for mailman id 533524;
 Thu, 11 May 2023 17:52:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1K6=BA=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pxASU-0002Tc-6z
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 17:52:06 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 89ab3fab-f024-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 19:52:04 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-965cc5170bdso1329675366b.2
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 10:52:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89ab3fab-f024-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683827523; x=1686419523;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZSEK5kRaNfaTazE+L2wcv81AP85ltDVrZA2W8j/izv4=;
        b=r4F8DX5TRMCjOEpnUDkD6/KVwwOvJtr3iQa5BjXOxEeSRT71GrVreW1vg5ZRGFgqtV
         pvNUvzP4eJ3Q7CI9UBnxUPrdy21eSjOTEF1lFCCOK6WqePAs/zCe6MBtWlgSIgsm+pEE
         q2w9HvEP2wEVjRJcv4N+5iqOWb6vU3sEufAUkCiXZn9RUgmHJSvBbDCWOyl2lBIhJ2xL
         xMNZo33Bakk7ELbbfJo33i2uIAPaanDQXJjlBI404nEltehEkoWrTKV5rg4o8qVKZb9I
         yO1QLtYPd46j6cHFdFNqTmMDhkV6lBvXoJvNGCR2EOReuImI81R+UJr0wJcIvGVcgyIE
         xFtw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683827523; x=1686419523;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ZSEK5kRaNfaTazE+L2wcv81AP85ltDVrZA2W8j/izv4=;
        b=W0b/lwU0y2eTEVsexB1znLeMgiGztCiz7ctBPtexCOA7O/twXoEgMBPqqAkA4Pxq2K
         qyrXst66M8Zl+tvuB/k9mduoY6065OIT8BJusy9zwLF4hAHdyJ43vCAmv6Ldiv9NOe4j
         Hh6aeS781gmQfnEl8LEhgXm6xLsxFgh937aVM3i/l6kWc7SssB2O5xIKVgxuzfujCh/A
         VuU4UhiwMEPdSlWo1S3IOSBS14m1ILjfFqrmVRdyBmWEUfsOKQMZ/XnWPvB/zY5U7lys
         xmsPE5bHx9XzauQZ0px233nBS2tKx2C0crUI433DQzWI/LoaQ/Pitwjlp468VENWwAI3
         OHiA==
X-Gm-Message-State: AC+VfDy+8VDVzQvEfR9pgrzZneN550XqLlSnzy58zA8NGTXujGgg0rxz
	QNmh7WsFSj06HK1wIBDgJCdauTE0sLWoVRFi+AgSnJzC
X-Google-Smtp-Source: ACHHUZ6vhfG4DY6IBNq5pmsc4iJiRCN9IcyQg0GHOmEnY9cspA7pXFvIf5lz/IsMkH/YZ6q24NwcvtUSKIZLTwA80Bk=
X-Received: by 2002:a17:907:e91:b0:967:769e:a098 with SMTP id
 ho17-20020a1709070e9100b00967769ea098mr14240179ejc.15.1683827523190; Thu, 11
 May 2023 10:52:03 -0700 (PDT)
MIME-Version: 1.0
References: <f83213df-2433-ec51-814c-436ce5ea4967@suse.com>
In-Reply-To: <f83213df-2433-ec51-814c-436ce5ea4967@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 11 May 2023 13:51:51 -0400
Message-ID: <CAKf6xpvmdVT8QWFk_V8TCoZ1YHZecTUDT3x9HuRbGmUdGYKb-Q@mail.gmail.com>
Subject: Re: [PATCH] x86/vPIT: check/bound values loaded from state save record
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 11, 2023 at 7:50 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> In particular pit_latch_status() and speaker_ioport_read() perform
> calculations which assume in-bounds values. Several of the state save
> record fields can hold wider ranges, though.
>
> Note that ->gate should only be possible to be zero for channel 2;
> enforce that as well.
>
> Adjust pit_reset()'s writing of ->mode as well, to not unduly affect
> the value pit_latch_status() may calculate. The chosen mode of 7 is
> still one which cannot be established by writing the control word.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Of course an alternative would be to simply reject state save records
> with out of bounds values.
>
> --- a/xen/arch/x86/emul-i8254.c
> +++ b/xen/arch/x86/emul-i8254.c
> @@ -47,6 +47,7 @@
>  #define RW_STATE_MSB 2
>  #define RW_STATE_WORD0 3
>  #define RW_STATE_WORD1 4
> +#define RW_STATE_NUM 5
>
>  static int cf_check handle_pit_io(
>      int dir, unsigned int port, unsigned int bytes, uint32_t *val);
> @@ -426,6 +427,33 @@ static int cf_check pit_load(struct doma
>      }
>
>      /*
> +     * Convert loaded values to be within valid range, for them to represent
> +     * actually reachable state.  Uses of some of the values elsewhere assume
> +     * this is the case.
> +     */
> +    for ( i = 0; i < ARRAY_SIZE(pit->hw.channels); ++i )
> +    {
> +        struct hvm_hw_pit_channel *ch = &pit->hw.channels[i];
> +
> +        /* pit_load_count() will convert 0 suitably back to 0x10000. */
> +        ch->count &= 0xffff;
> +        if ( ch->count_latched >= RW_STATE_NUM )
> +            ch->count_latched = 0;
> +        if ( ch->read_state >= RW_STATE_NUM )
> +            ch->read_state = 0;
> +        if ( ch->read_state >= RW_STATE_NUM )
> +            ch->write_state = 0;

Should these both be write_state?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 11 18:00:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 18:00:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533528.830291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxAap-0004Ba-IR; Thu, 11 May 2023 18:00:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533528.830291; Thu, 11 May 2023 18:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxAap-0004BT-Df; Thu, 11 May 2023 18:00:43 +0000
Received: by outflank-mailman (input) for mailman id 533528;
 Thu, 11 May 2023 18:00:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxAao-0004BJ-Qd; Thu, 11 May 2023 18:00:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxAao-0002RE-Mt; Thu, 11 May 2023 18:00:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxAao-0006rG-3O; Thu, 11 May 2023 18:00:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxAao-0005Tf-2z; Thu, 11 May 2023 18:00:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Nxxz8qKJ9bA6ilkZ4tcVohfqFUjMYPXDQEfnBd3yOs4=; b=wDRdBx/6b142yr5C4Hb+04Qnhd
	5OLPmyRsszGQiH0FN9jRQxOCGXwmRbRl0d8TEP3LMObGEciMWHCiW5Ef1Fi8HUlnsZUHtHgVgu6WQ
	vB6R53SWk0Y8iA68fEmfu06Qi2LrlpWAKn+AZvZkHnj0cjR+f7WrTjKXZHb6jRDh9qOk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180613-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180613: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=db91bf2ba31766809f901e9b2bd02b4a9c917300
X-Osstest-Versions-That:
    libvirt=3d6bc5c61101aadd6fca5d558a44a1cba8120178
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 18:00:42 +0000

flight 180613 libvirt real [real]
flight 180623 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180613/
http://logs.test-lab.xenproject.org/osstest/logs/180623/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 180623-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180599
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180599
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180599
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 libvirt              db91bf2ba31766809f901e9b2bd02b4a9c917300
baseline version:
 libvirt              3d6bc5c61101aadd6fca5d558a44a1cba8120178

Last test of basis   180599  2023-05-10 04:18:55 Z    1 days
Testing same since   180613  2023-05-11 04:19:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Dankaházi (ifj.) István <dankahazi.istvan@gmail.com>
  Erik Skultety <eskultet@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   3d6bc5c611..db91bf2ba3  db91bf2ba31766809f901e9b2bd02b4a9c917300 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 11 19:17:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 19:17:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533541.830300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBn4-00042n-AU; Thu, 11 May 2023 19:17:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533541.830300; Thu, 11 May 2023 19:17:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBn4-00042g-7n; Thu, 11 May 2023 19:17:26 +0000
Received: by outflank-mailman (input) for mailman id 533541;
 Thu, 11 May 2023 19:17:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxBn3-00042a-HL
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 19:17:25 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2062f.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 73e57476-f030-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 21:17:22 +0200 (CEST)
Received: from DM6PR13CA0020.namprd13.prod.outlook.com (2603:10b6:5:bc::33) by
 BN9PR12MB5322.namprd12.prod.outlook.com (2603:10b6:408:103::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22; Thu, 11 May
 2023 19:17:18 +0000
Received: from DM6NAM11FT056.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:bc:cafe::19) by DM6PR13CA0020.outlook.office365.com
 (2603:10b6:5:bc::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.7 via Frontend
 Transport; Thu, 11 May 2023 19:17:18 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT056.mail.protection.outlook.com (10.13.173.99) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 19:17:17 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:17:16 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 14:17:14 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73e57476-f030-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=APPYOGhKMx3ts6jc9WSoBTvo6LCj+TSeWGxVj9VhIQzrpTzxhQUq7kMwypwdhNgQ3OY8bLXueHQCI8LTuqT+aKoABW+DzN5wxgQgVhinjQOg9FBCJ0lBh9uxjsn3ttwHiKA+lonYRH04ebhGLw+ZQSzKg68vTQdHDXS6qgxbddXjYkMol6mPWrCkM3flFAheK+sHNapJ9IpzLwfpTWt2p3iOgnmF8bwy752DeyItnLx/gnvhh48BD60ks96pagf27l+X3cyJ22qxTq2SmdUS2ewPTfjR1csnLeq1neGWmDQFDaW+jr4L1jSx8ecySpYLo/1dm/Yv1yzJz+uyJqWS8A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=m0+bgrDfozFwgaJjbhU1yrwEC2X4rAWcvmwSr5MYghU=;
 b=Ztqmtfzyyl3QwkAlP7TbPK/xljwlJpbRN7ncOpGawR54e4/61OwZQ5hUuvvYwg50U3sszsTKc6J7rjfiixsnoNjzDNqGG7rV81bz+UAJ124qsT3hmQ/aODyfcttDfggldWVcn+EmATXxso0j4/lRyFTvtHgWQ5L+m1RfXWZh1Njc118JBIpRVGk+tyibW6kDzTjuLX/23qQIllVDsMseNHbXVuQhURQMRiBZvfP4++mX3G9Y0KYexRRYLJX5YI2eS9GStuuYTcC4DQWt0qsnRunCyDvZDnJt9VEKPf50jz0EDUhUOnmPOyIYYsPt13EQAMq3pPfPtFVNlICDQ7woLw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=m0+bgrDfozFwgaJjbhU1yrwEC2X4rAWcvmwSr5MYghU=;
 b=qHM58tGhll2ghadVGdCJ2x5b10fD3JroQ/HOZcwt2LEHyWA8Ge88V8XmBlm6RVEZEqByTAB9fqBkMElh+BWgiTbjmfQndVx7cqeCmpTRKzwK1ck3NPCJkVS/M83mPdM3CqaatHtBt+e6VuzDh00LlFpglY+gmbnjCmTgzKiX18I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Rahul Singh <rahul.singh@arm.com>, Jan Beulich <jbeulich@suse.com>, "Paul
 Durrant" <paul@xen.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>
Subject: [PATCH v2 0/8] SMMU handling for PCIe Passthrough on ARM
Date: Thu, 11 May 2023 15:16:46 -0400
Message-ID: <20230511191654.400720-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT056:EE_|BN9PR12MB5322:EE_
X-MS-Office365-Filtering-Correlation-Id: 1d6b2846-75eb-47c5-bebe-08db525455de
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Zu7MhSvkpLg9Tat4Uy3N8cBzyW2XyyWDwM4J2KsVskuV698JvnOXTk8mESWh7FkXpdgt9i8jDSLYcwvPfevNDkAfgtizCAZyUMn28RGjq60CJhqjzkot9jSw1JBIiGtxJIpAz12RR6XdAb3rxK6xDcAslsazkmKGUFc76yrz2IPDApWZmvCcN+3MDJDAec7+fuDznMRAWMBNOqby0a2iInnx6KSr4VVCZlQix5XwHCuSztsYjBAZqY+y1bXlPSeliHaRYGmQMle1fuZ+Wh6MP3JYQxh6Ktt6wK5UNwNE4JYAxs3zIwJPKot2jsd2DPbeBGyBW8e8A02+PLH6Zd29LdGsL52x77fQzKKUAMUqehlq+QH0M/HqNdc6dVUa8IXvZrZAhLcz5NrnR9vmyFHpIi/wAG/RRoK3r6F7O+lDO4o+sW+tY2G1Zh827t5aaM4jroqmr6kBCSM1K3Ne8vlLi7kM9PRu6IzQRLkhgNQKZw5G+U7LVTGzsxQ8WSbrvSd7rMhqDp+du+jqRoxXLYJQ/GAAikBhg8w+hOuRBbZAz/Q+XIbwK5/CqUY6YrOoth0L9NpJewIWqoKZoudAwRx4j+H1hmZ796lY2iHeNc2KhKQa0CkXUSgROcyz6jJsCsMWYO/c0fFnYbtCh8+OI/69z1rgLu4bxghaOVBE1Qn5Iar0IXWKAQEs2p9ASxj88S4G2EoS5o37XZjDc8XWmAv1iqGt6CTjXVK6CtoTEXkAGGfIvI7UJAJUdHPmaWLW8KdZiPgiliWpMDzQ0WFUgY2/Jg==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(136003)(39860400002)(376002)(451199021)(46966006)(36840700001)(40470700004)(336012)(426003)(2616005)(26005)(36860700001)(1076003)(41300700001)(47076005)(83380400001)(186003)(6666004)(478600001)(40460700003)(54906003)(4326008)(6916009)(82740400003)(70206006)(70586007)(40480700001)(356005)(316002)(81166007)(5660300002)(44832011)(86362001)(2906002)(36756003)(82310400005)(8936002)(8676002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 19:17:17.3667
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1d6b2846-75eb-47c5-bebe-08db525455de
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT056.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5322

This series introduces SMMU handling for PCIe passthrough on ARM. These patches
are independent of the in-progress vPCI reference counting/locking work and can
be upstreamed on their own.

v1->v2:
* phantom device handling
* shuffle around iommu_add_dt_pci_device()

Oleksandr Andrushchenko (1):
  xen/arm: smmuv2: Add PCI devices support for SMMUv2

Oleksandr Tyshchenko (4):
  xen/arm: Move is_protected flag to struct device
  iommu/arm: Add iommu_dt_xlate()
  iommu/arm: Introduce iommu_add_dt_pci_device API
  pci/arm: Use iommu_add_dt_pci_device()

Rahul Singh (1):
  xen/arm: smmuv3: Add PCI devices support for SMMUv3

Stewart Hildebrand (2):
  iommu/arm: iommu_add_dt_pci_device phantom handling
  RFC: pci/arm: don't do iommu call for phantom functions

 xen/arch/arm/domain_build.c              |   4 +-
 xen/arch/arm/include/asm/device.h        |  13 ++
 xen/common/device_tree.c                 |   2 +-
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |   8 +-
 xen/drivers/passthrough/arm/smmu-v3.c    |  74 +++++++-
 xen/drivers/passthrough/arm/smmu.c       | 110 +++++++++---
 xen/drivers/passthrough/device_tree.c    | 218 ++++++++++++++++++++---
 xen/drivers/passthrough/pci.c            |  21 ++-
 xen/include/xen/device_tree.h            |  38 ++--
 xen/include/xen/iommu.h                  |   6 +-
 10 files changed, 417 insertions(+), 77 deletions(-)

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 19:17:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 19:17:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533542.830311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBnQ-0004W8-Mz; Thu, 11 May 2023 19:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533542.830311; Thu, 11 May 2023 19:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBnQ-0004Vz-K9; Thu, 11 May 2023 19:17:48 +0000
Received: by outflank-mailman (input) for mailman id 533542;
 Thu, 11 May 2023 19:17:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxBnP-00042a-7d
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 19:17:47 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20608.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 80e443ec-f030-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 21:17:45 +0200 (CEST)
Received: from DM6PR17CA0026.namprd17.prod.outlook.com (2603:10b6:5:1b3::39)
 by IA1PR12MB6652.namprd12.prod.outlook.com (2603:10b6:208:38a::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Thu, 11 May
 2023 19:17:40 +0000
Received: from DM6NAM11FT103.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1b3:cafe::76) by DM6PR17CA0026.outlook.office365.com
 (2603:10b6:5:1b3::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22 via Frontend
 Transport; Thu, 11 May 2023 19:17:40 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT103.mail.protection.outlook.com (10.13.172.75) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 19:17:39 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:17:39 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 14:17:37 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80e443ec-f030-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Rahul Singh <rahul.singh@arm.com>, Stewart Hildebrand
	<stewart.hildebrand@amd.com>
Subject: [PATCH v2 1/8] xen/arm: Move is_protected flag to struct device
Date: Thu, 11 May 2023 15:16:47 -0400
Message-ID: <20230511191654.400720-2-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230511191654.400720-1-stewart.hildebrand@amd.com>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT103:EE_|IA1PR12MB6652:EE_
X-MS-Office365-Filtering-Correlation-Id: 9a6e51ab-3b66-43a3-f43f-08db52546342
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 19:17:39.8338
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9a6e51ab-3b66-43a3-f43f-08db52546342
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT103.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6652

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This flag will be re-used for PCI devices in subsequent patches.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v1->v2:
* no change

downstream->v1:
* rebase
* s/dev_node->is_protected/dev_node->dev.is_protected/ in smmu.c
* s/dt_device_set_protected(dev_to_dt(dev))/device_set_protected(dev)/ in smmu-v3.c
* remove redundant device_is_protected checks in smmu-v3.c/ipmmu-vmsa.c

(cherry picked from commit 59753aac77528a584d3950936b853ebf264b68e7 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/arch/arm/domain_build.c              |  4 ++--
 xen/arch/arm/include/asm/device.h        | 13 +++++++++++++
 xen/common/device_tree.c                 |  2 +-
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |  8 +-------
 xen/drivers/passthrough/arm/smmu-v3.c    |  7 +------
 xen/drivers/passthrough/arm/smmu.c       |  2 +-
 xen/drivers/passthrough/device_tree.c    | 15 +++++++++------
 xen/include/xen/device_tree.h            | 13 -------------
 8 files changed, 28 insertions(+), 36 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f80fdd1af206..106f92c65a61 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2507,7 +2507,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
             return res;
         }
 
-        if ( dt_device_is_protected(dev) )
+        if ( device_is_protected(dt_to_dev(dev)) )
         {
             dt_dprintk("%s setup iommu\n", dt_node_full_name(dev));
             res = iommu_assign_dt_device(d, dev);
@@ -3007,7 +3007,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
         return res;
 
     /* If xen_force, we allow assignment of devices without IOMMU protection. */
-    if ( xen_force && !dt_device_is_protected(node) )
+    if ( xen_force && !device_is_protected(dt_to_dev(node)) )
         return 0;
 
     return iommu_assign_dt_device(kinfo->d, node);
diff --git a/xen/arch/arm/include/asm/device.h b/xen/arch/arm/include/asm/device.h
index b5d451e08776..086dde13eb6b 100644
--- a/xen/arch/arm/include/asm/device.h
+++ b/xen/arch/arm/include/asm/device.h
@@ -1,6 +1,8 @@
 #ifndef __ASM_ARM_DEVICE_H
 #define __ASM_ARM_DEVICE_H
 
+#include <xen/types.h>
+
 enum device_type
 {
     DEV_DT,
@@ -20,6 +22,7 @@ struct device
 #endif
     struct dev_archdata archdata;
     struct iommu_fwspec *iommu_fwspec; /* per-device IOMMU instance data */
+    bool is_protected; /* Shows that device is protected by IOMMU */
 };
 
 typedef struct device device_t;
@@ -94,6 +97,16 @@ int device_init(struct dt_device_node *dev, enum device_class class,
  */
 enum device_class device_get_class(const struct dt_device_node *dev);
 
+static inline void device_set_protected(struct device *device)
+{
+    device->is_protected = true;
+}
+
+static inline bool device_is_protected(const struct device *device)
+{
+    return device->is_protected;
+}
+
 #define DT_DEVICE_START(_name, _namestr, _class)                    \
 static const struct device_desc __dev_desc_##_name __used           \
 __section(".dev.info") = {                                          \
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7bda..1d5d7cb5f01b 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1874,7 +1874,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
         /* By default dom0 owns the device */
         np->used_by = 0;
         /* By default the device is not protected */
-        np->is_protected = false;
+        np->dev.is_protected = false;
         INIT_LIST_HEAD(&np->domain_list);
 
         if ( new_format )
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index 091f09b21752..039212a3a990 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -1288,14 +1288,8 @@ static int ipmmu_add_device(u8 devfn, struct device *dev)
     if ( !to_ipmmu(dev) )
         return -ENODEV;
 
-    if ( dt_device_is_protected(dev_to_dt(dev)) )
-    {
-        dev_err(dev, "Already added to IPMMU\n");
-        return -EEXIST;
-    }
-
     /* Let Xen know that the master device is protected by an IOMMU. */
-    dt_device_set_protected(dev_to_dt(dev));
+    device_set_protected(dev);
 
     dev_info(dev, "Added master device (IPMMU %s micro-TLBs %u)\n",
              dev_name(fwspec->iommu_dev), fwspec->num_ids);
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index bfdb62b395ad..4b452e6fdd00 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -1521,13 +1521,8 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
 	 */
 	arm_smmu_enable_pasid(master);
 
-	if (dt_device_is_protected(dev_to_dt(dev))) {
-		dev_err(dev, "Already added to SMMUv3\n");
-		return -EEXIST;
-	}
-
 	/* Let Xen know that the master device is protected by an IOMMU. */
-	dt_device_set_protected(dev_to_dt(dev));
+	device_set_protected(dev);
 
 	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b336..5b6024d579a8 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -838,7 +838,7 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 	master->of_node = dev_node;
 
 	/* Xen: Let Xen know that the device is protected by an SMMU */
-	dt_device_set_protected(dev_node);
+	device_set_protected(dev);
 
 	for (i = 0; i < fwspec->num_ids; ++i) {
 		if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 1c32d7b50cce..b5bd13393b56 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -34,7 +34,7 @@ int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev)
     if ( !is_iommu_enabled(d) )
         return -EINVAL;
 
-    if ( !dt_device_is_protected(dev) )
+    if ( !device_is_protected(dt_to_dev(dev)) )
         return -EINVAL;
 
     spin_lock(&dtdevs_lock);
@@ -65,7 +65,7 @@ int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev)
     if ( !is_iommu_enabled(d) )
         return -EINVAL;
 
-    if ( !dt_device_is_protected(dev) )
+    if ( !device_is_protected(dt_to_dev(dev)) )
         return -EINVAL;
 
     spin_lock(&dtdevs_lock);
@@ -87,7 +87,7 @@ static bool_t iommu_dt_device_is_assigned(const struct dt_device_node *dev)
 {
     bool_t assigned = 0;
 
-    if ( !dt_device_is_protected(dev) )
+    if ( !device_is_protected(dt_to_dev(dev)) )
         return 0;
 
     spin_lock(&dtdevs_lock);
@@ -141,12 +141,15 @@ int iommu_add_dt_device(struct dt_device_node *np)
         return -EINVAL;
 
     /*
-     * The device may already have been registered. As there is no harm in
-     * it just return success early.
+     * This is needed in case a device has both the iommus property and
+     * also appears in the mmu-masters list.
      */
-    if ( dev_iommu_fwspec_get(dev) )
+    if ( device_is_protected(dev) )
         return 0;
 
+    if ( dev_iommu_fwspec_get(dev) )
+        return -EEXIST;
+
     /*
      * According to the Documentation/devicetree/bindings/iommu/iommu.txt
      * from Linux.
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 19a74909cece..c1e4751a581f 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -90,9 +90,6 @@ struct dt_device_node {
     struct dt_device_node *next; /* TODO: Remove it. Only use to know the last children */
     struct dt_device_node *allnext;
 
-    /* IOMMU specific fields */
-    bool is_protected;
-
     /* HACK: Remove this if there is a need of space */
     bool_t static_evtchn_created;
 
@@ -302,16 +299,6 @@ static inline domid_t dt_device_used_by(const struct dt_device_node *device)
     return device->used_by;
 }
 
-static inline void dt_device_set_protected(struct dt_device_node *device)
-{
-    device->is_protected = true;
-}
-
-static inline bool dt_device_is_protected(const struct dt_device_node *device)
-{
-    return device->is_protected;
-}
-
 static inline bool_t dt_property_name_is_equal(const struct dt_property *pp,
                                                const char *name)
 {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 19:18:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 19:18:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533547.830321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBnm-00053o-0Y; Thu, 11 May 2023 19:18:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533547.830321; Thu, 11 May 2023 19:18:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBnl-00053e-Tv; Thu, 11 May 2023 19:18:09 +0000
Received: by outflank-mailman (input) for mailman id 533547;
 Thu, 11 May 2023 19:18:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxBnk-00042a-8s
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 19:18:08 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2061b.outbound.protection.outlook.com
 [2a01:111:f400:fe59::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8e1778b3-f030-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 21:18:06 +0200 (CEST)
Received: from DS7PR03CA0107.namprd03.prod.outlook.com (2603:10b6:5:3b7::22)
 by PH7PR12MB7115.namprd12.prod.outlook.com (2603:10b6:510:1ee::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6340.22; Thu, 11 May
 2023 19:18:03 +0000
Received: from DM6NAM11FT059.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b7:cafe::72) by DS7PR03CA0107.outlook.office365.com
 (2603:10b6:5:3b7::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20 via Frontend
 Transport; Thu, 11 May 2023 19:18:02 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT059.mail.protection.outlook.com (10.13.172.92) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 19:18:02 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:18:01 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 14:18:00 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e1778b3-f030-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Stewart Hildebrand
	<stewart.hildebrand@amd.com>
Subject: [PATCH v2 2/8] iommu/arm: Add iommu_dt_xlate()
Date: Thu, 11 May 2023 15:16:48 -0400
Message-ID: <20230511191654.400720-3-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230511191654.400720-1-stewart.hildebrand@amd.com>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT059:EE_|PH7PR12MB7115:EE_
X-MS-Office365-Filtering-Correlation-Id: 2d98ad87-1de2-499b-be23-08db525470b8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 19:18:02.4005
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d98ad87-1de2-499b-be23-08db525470b8
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT059.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB7115

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Move the code for processing a DT IOMMU specifier into a separate helper.
This helper will be re-used for adding PCI devices in subsequent patches,
as processing a DT PCI-IOMMU specifier requires exactly the same actions.

While at it, introduce NO_IOMMU to avoid the magic "1".

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com> # rename
---
v1->v2:
* no change

downstream->v1:
* trivial rebase
* s/dt_iommu_xlate/iommu_dt_xlate/

(cherry picked from commit c26bab0415ca303df86aba1d06ef8edc713734d3 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/drivers/passthrough/device_tree.c | 42 +++++++++++++++++----------
 1 file changed, 27 insertions(+), 15 deletions(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index b5bd13393b56..1b50f4670944 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -127,15 +127,39 @@ int iommu_release_dt_devices(struct domain *d)
     return 0;
 }
 
+/* This correlation must not be altered */
+#define NO_IOMMU    1
+
+static int iommu_dt_xlate(struct device *dev,
+                          struct dt_phandle_args *iommu_spec)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    int rc;
+
+    if ( !dt_device_is_available(iommu_spec->np) )
+        return NO_IOMMU;
+
+    rc = iommu_fwspec_init(dev, &iommu_spec->np->dev);
+    if ( rc )
+        return rc;
+
+    /*
+     * Provide DT IOMMU specifier which describes the IOMMU master
+     * interfaces of that device (device IDs, etc) to the driver.
+     * The driver is responsible to decide how to interpret them.
+     */
+    return ops->dt_xlate(dev, iommu_spec);
+}
+
 int iommu_add_dt_device(struct dt_device_node *np)
 {
     const struct iommu_ops *ops = iommu_get_ops();
     struct dt_phandle_args iommu_spec;
     struct device *dev = dt_to_dev(np);
-    int rc = 1, index = 0;
+    int rc = NO_IOMMU, index = 0;
 
     if ( !iommu_enabled )
-        return 1;
+        return NO_IOMMU;
 
     if ( !ops )
         return -EINVAL;
@@ -164,19 +188,7 @@ int iommu_add_dt_device(struct dt_device_node *np)
         if ( !ops->add_device || !ops->dt_xlate )
             return -EINVAL;
 
-        if ( !dt_device_is_available(iommu_spec.np) )
-            break;
-
-        rc = iommu_fwspec_init(dev, &iommu_spec.np->dev);
-        if ( rc )
-            break;
-
-        /*
-         * Provide DT IOMMU specifier which describes the IOMMU master
-         * interfaces of that device (device IDs, etc) to the driver.
-         * The driver is responsible to decide how to interpret them.
-         */
-        rc = ops->dt_xlate(dev, &iommu_spec);
+        rc = iommu_dt_xlate(dev, &iommu_spec);
         if ( rc )
             break;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 19:18:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 19:18:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533553.830330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBoD-0005i5-9k; Thu, 11 May 2023 19:18:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533553.830330; Thu, 11 May 2023 19:18:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBoD-0005hy-6w; Thu, 11 May 2023 19:18:37 +0000
Received: by outflank-mailman (input) for mailman id 533553;
 Thu, 11 May 2023 19:18:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxBoB-0005hc-Rh
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 19:18:35 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20620.outbound.protection.outlook.com
 [2a01:111:f400:7e89::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9e71650a-f030-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 21:18:34 +0200 (CEST)
Received: from DM6PR02CA0098.namprd02.prod.outlook.com (2603:10b6:5:1f4::39)
 by PH8PR12MB6745.namprd12.prod.outlook.com (2603:10b6:510:1c0::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18; Thu, 11 May
 2023 19:18:30 +0000
Received: from DM6NAM11FT055.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1f4:cafe::38) by DM6PR02CA0098.outlook.office365.com
 (2603:10b6:5:1f4::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.21 via Frontend
 Transport; Thu, 11 May 2023 19:18:30 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT055.mail.protection.outlook.com (10.13.173.103) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 19:18:29 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:18:29 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:18:28 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 14:18:27 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e71650a-f030-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kZylhLI3sSmFoFIcwsHD+R4LLsKF9jsiZkbSg36DgtQU2VMipJTF37SWZo+nUMDjs/TsjN8W61dngspvRE8gGQofJc7+4IXoa0e4uKT0JTx8E8k3JMa/v5yop+wl3S43Nc9mib1c8KxjXLz5/5K1ZSKrENdw2t6zWD8oaCC5ywdx8mygKLvhv8s+jznce5iFyzug+DF/wIFmisMr3N2KjEJ0qKZaAEeZtXekRCRIHhsm+W0HjAUnOUTR0/hc2TCkxqxdY5hIDun4wxHBgNtwePNN7KQ2wCXf5kpcHJp72jWsFk2wX7MqDIBLKpmLPqNb4d4C3nO9JocMtfjx/sCeoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YjmoY8fsHSqzorBGqSHDpm3AnWulRPSKxDcliH64UiM=;
 b=B1Xkajjn6BYLu6gi4dlwdebJX8Dj5FvxQ4iMcjgTSL7k7lDq+UW2EE9PpspaxD/F5BsQGrh5ixvCBpNTjNZUNJg3jDI+nNCR4lMA0NpM7qqrIqRZ4pnJGgMZO51JXMBZ/HWSJ+2Ow3L/DBudftTYOhYDY6i6MqWc3MpomO6YnCfB4gIzNjpBgljAjQ4RgSArwl4YXdDYkNnIGYYOJfk0LQQjQlA+PbEX2G9rM2vJqw0RH/kl7sm7ffnnGxcZkDe9JRt5gYVV8G8jFxSrFWWiBD7NUXBPBJ0PXN8TQludC5gE7aeKhDTCeaOdmQT4znZltq/MA0GrlA9E7BkfrZ2Xng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YjmoY8fsHSqzorBGqSHDpm3AnWulRPSKxDcliH64UiM=;
 b=WTa8ObgbqZ+3Btd+BKoGQmZNmM5nT+648LBW3ZFlyo1v82VxiMF2J1g52QGi+Mf1l16qt1pAxyUQ5ZNNFn0iTeiU3JZRBp59ImpGtSUaTNcK0Il/bVhG3atZSYEV56cgomrpPuSRQ/cHyvhcu9AdE5h6WCugVBLVt8fjVZnHql8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, "Stewart
 Hildebrand" <stewart.hildebrand@amd.com>
Subject: [PATCH v2 3/8] iommu/arm: Introduce iommu_add_dt_pci_device API
Date: Thu, 11 May 2023 15:16:49 -0400
Message-ID: <20230511191654.400720-4-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230511191654.400720-1-stewart.hildebrand@amd.com>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT055:EE_|PH8PR12MB6745:EE_
X-MS-Office365-Filtering-Correlation-Id: 8850e3da-9fe3-4b02-d190-08db52548103
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5LoKySNp3TsAYAH4AYg1lfWfUV+HSJ3y8Zy5kvyWQqviKFHsKY72LYNR8Ji6dcr8r2RYHhayMHg72zB3jz18zS5cDYf1VcJCYNVR1gCAR3zse2zyOlxYSZl5QAdMbX+cFaCDZCFa19CYReTuvcc/ZzzjC92QWocuM1oB4FU2YCazXl/i3td+swGbbNFdGjd8KQhRztEypZoWHsNhrGnyQW6vEBYzwWpAX4ABTBASzXBPC7y0AN7yF8zWhSlm0+wR3dHHDomjggOsfJaCmGAHDEncq23xvHxpPSvstciBoghaefLigBZ/SFn2WtBmSHtbfYMp80D7LgvA9yvTOco3Hz+ToJN9NV45e7HC9H+5wn45G23z1wj7WnhSTi/FV2p+HjKWOQO573qJHpN6SbvEQIIC01I3xtL+5vf540uH2L0dDHYKS62ZrjnK2rOdvhXZDE09dgiS24IZkK84RR9sKpH6AxT3huHE6cLk5y5BoatQXGe/NEEX/jhyJBixpAm8+sBDyrElKiBJOkfeHgM0I6ColMND2fvYEKhnSlijMfJ8TlSZpBEj5FJuOnfKfUWqlUekEK8F0GkH+erhx31ocieQWhZY+L96gOAOHlcIQx/nBAYRVRhtJZEBxe9ZayMWcIkbEfK9EEdiwCvvJADd0Hob1pgSm+kBw8a3nhTrfePOgTaufrrIhJs8UFj9DP8s4SwvSp3qfGTw0X58HCJ0Wip8erT3AVuM/OoQjKMqxTJ8NZndhlCTkCH+DYtdEkSpc9JXi1POdMVxl2Rmy+Neg1haZNeGYdNgZjimeIDm0nIujo+HqsEylnnGxEbvCo1U9duei52ub4DCMkRiCBcABA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(346002)(376002)(136003)(451199021)(46966006)(36840700001)(40470700004)(2616005)(186003)(8676002)(8936002)(40460700003)(6666004)(82310400005)(26005)(40480700001)(36756003)(5660300002)(1076003)(2906002)(356005)(44832011)(316002)(82740400003)(81166007)(478600001)(54906003)(70206006)(47076005)(83380400001)(6916009)(70586007)(966005)(4326008)(36860700001)(86362001)(336012)(426003)(41300700001)(21314003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 19:18:29.7517
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8850e3da-9fe3-4b02-d190-08db52548103
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT055.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB6745

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The main purpose of this patch is to add a way to register a PCI device
(which is behind the IOMMU) using the generic PCI-IOMMU DT bindings [1]
before assigning that device to a domain.

This behaves in almost the same way as the existing iommu_add_dt_device
API; the differences are the devices being handled and the DT bindings
being used.

The function of_map_id, which translates an ID through a downstream mapping
(and is also suitable for mapping a Requester ID), was borrowed from Linux
(v5.10-rc6) and adapted to the Xen code base.

XXX: I didn't port pci_for_each_dma_alias from Linux, which is part of the
PCI-IOMMU bindings infrastructure, as I don't have a good understanding of
how it is expected to work in a Xen environment.
Also, it is not completely clear whether we need to distinguish between
different PCI types here (DEV_TYPE_PCI, DEV_TYPE_PCI_HOST_BRIDGE, etc).
For example, how should we behave if the host bridge doesn't have
a stream ID (i.e. is not described in the iommu-map property):
simply fail, or bypass translation?

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-iommu.txt

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v1->v2:
* remove extra devfn parameter since pdev fully describes the device
* remove ops->add_device() call from iommu_add_dt_pci_device(). Instead, rely on
  the existing iommu call in iommu_add_device().
* move the ops->add_device and ops->dt_xlate checks earlier

downstream->v1:
* rebase
* add const qualifier to struct dt_device_node *np arg in dt_map_id()
* add const qualifier to struct dt_device_node *np declaration in iommu_add_pci_device()
* use stdint.h types instead of u8/u32/etc...
* rename functions:
  s/dt_iommu_xlate/iommu_dt_xlate/
  s/dt_map_id/iommu_dt_pci_map_id/
  s/iommu_add_pci_device/iommu_add_dt_pci_device/
* add device_is_protected check in iommu_add_dt_pci_device
* wrap prototypes in CONFIG_HAS_PCI

(cherry picked from commit 734e3bf6ee77e7947667ab8fa96c25b349c2e1da from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/drivers/passthrough/device_tree.c | 140 ++++++++++++++++++++++++++
 xen/include/xen/device_tree.h         |  25 +++++
 xen/include/xen/iommu.h               |   6 +-
 3 files changed, 170 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 1b50f4670944..5e462e5c2ca8 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -151,6 +151,146 @@ static int iommu_dt_xlate(struct device *dev,
     return ops->dt_xlate(dev, iommu_spec);
 }
 
+#ifdef CONFIG_HAS_PCI
+int iommu_dt_pci_map_id(const struct dt_device_node *np, uint32_t id,
+                        const char *map_name, const char *map_mask_name,
+                        struct dt_device_node **target, uint32_t *id_out)
+{
+    uint32_t map_mask, masked_id, map_len;
+    const __be32 *map = NULL;
+
+    if ( !np || !map_name || (!target && !id_out) )
+        return -EINVAL;
+
+    map = dt_get_property(np, map_name, &map_len);
+    if ( !map )
+    {
+        if ( target )
+            return -ENODEV;
+        /* Otherwise, no map implies no translation */
+        *id_out = id;
+        return 0;
+    }
+
+    if ( !map_len || map_len % (4 * sizeof(*map)) )
+    {
+        printk(XENLOG_ERR "%pOF: Error: Bad %s length: %d\n", np,
+            map_name, map_len);
+        return -EINVAL;
+    }
+
+    /* The default is to select all bits. */
+    map_mask = 0xffffffff;
+
+    /*
+     * Can be overridden by "{iommu,msi}-map-mask" property.
+     * If of_property_read_u32() fails, the default is used.
+     */
+    if ( map_mask_name )
+        dt_property_read_u32(np, map_mask_name, &map_mask);
+
+    masked_id = map_mask & id;
+    for ( ; (int)map_len > 0; map_len -= 4 * sizeof(*map), map += 4 )
+    {
+        struct dt_device_node *phandle_node;
+        uint32_t id_base = be32_to_cpup(map + 0);
+        uint32_t phandle = be32_to_cpup(map + 1);
+        uint32_t out_base = be32_to_cpup(map + 2);
+        uint32_t id_len = be32_to_cpup(map + 3);
+
+        if ( id_base & ~map_mask )
+        {
+            printk(XENLOG_ERR "%pOF: Invalid %s translation - %s-mask (0x%x) ignores id-base (0x%x)\n",
+                   np, map_name, map_name, map_mask, id_base);
+            return -EFAULT;
+        }
+
+        if ( masked_id < id_base || masked_id >= id_base + id_len )
+            continue;
+
+        phandle_node = dt_find_node_by_phandle(phandle);
+        if ( !phandle_node )
+            return -ENODEV;
+
+        if ( target )
+        {
+            if ( !*target )
+                *target = phandle_node;
+
+            if ( *target != phandle_node )
+                continue;
+        }
+
+        if ( id_out )
+            *id_out = masked_id - id_base + out_base;
+
+        printk(XENLOG_DEBUG "%pOF: %s, using mask %08x, id-base: %08x, out-base: %08x, length: %08x, id: %08x -> %08x\n",
+               np, map_name, map_mask, id_base, out_base, id_len, id,
+               masked_id - id_base + out_base);
+        return 0;
+    }
+
+    printk(XENLOG_ERR "%pOF: no %s translation for id 0x%x on %pOF\n",
+           np, map_name, id, target && *target ? *target : NULL);
+
+    /*
+     * NOTE: Linux bypasses translation without returning an error here,
+     * but should we behave in the same way on Xen? Restrict for now.
+     */
+    return -EFAULT;
+}
+
+int iommu_add_dt_pci_device(struct pci_dev *pdev)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    struct dt_phandle_args iommu_spec = { .args_count = 1 };
+    struct device *dev = pci_to_dev(pdev);
+    const struct dt_device_node *np;
+    int rc = NO_IOMMU;
+
+    if ( !iommu_enabled )
+        return NO_IOMMU;
+
+    if ( !ops )
+        return -EINVAL;
+
+    if ( device_is_protected(dev) )
+        return 0;
+
+    if ( dev_iommu_fwspec_get(dev) )
+        return -EEXIST;
+
+    np = pci_find_host_bridge_node(pdev);
+    if ( !np )
+        return -ENODEV;
+
+    /*
+     * The driver which supports generic PCI-IOMMU DT bindings must have
+     * these callbacks implemented.
+     */
+    if ( !ops->add_device || !ops->dt_xlate )
+        return -EINVAL;
+
+    /*
+     * According to the Documentation/devicetree/bindings/pci/pci-iommu.txt
+     * from Linux.
+     */
+    rc = iommu_dt_pci_map_id(np, PCI_BDF(pdev->bus, pdev->devfn), "iommu-map",
+                             "iommu-map-mask", &iommu_spec.np, iommu_spec.args);
+    if ( rc )
+        return rc == -ENODEV ? NO_IOMMU : rc;
+
+    rc = iommu_dt_xlate(dev, &iommu_spec);
+    if ( rc < 0 )
+    {
+        iommu_fwspec_free(pci_to_dev(pdev));
+        return -EINVAL;
+    }
+
+    return rc;
+}
+#endif /* CONFIG_HAS_PCI */
+
 int iommu_add_dt_device(struct dt_device_node *np)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index c1e4751a581f..dc40fdfb9231 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -852,6 +852,31 @@ int dt_count_phandle_with_args(const struct dt_device_node *np,
  */
 int dt_get_pci_domain_nr(struct dt_device_node *node);
 
+#ifdef CONFIG_HAS_PCI
+/**
+ * iommu_dt_pci_map_id - Translate an ID through a downstream mapping.
+ * @np: root complex device node.
+ * @id: device ID to map.
+ * @map_name: property name of the map to use.
+ * @map_mask_name: optional property name of the mask to use.
+ * @target: optional pointer to a target device node.
+ * @id_out: optional pointer to receive the translated ID.
+ *
+ * Given a device ID, look up the appropriate implementation-defined
+ * platform ID and/or the target device which receives transactions on that
+ * ID, as per the "iommu-map" and "msi-map" bindings. Either of @target or
+ * @id_out may be NULL if only the other is required. If @target points to
+ * a non-NULL device node pointer, only entries targeting that node will be
+ * matched; if it points to a NULL value, it will receive the device node of
+ * the first matching target phandle, with a reference held.
+ *
+ * Return: 0 on success or a standard error code on failure.
+ */
+int iommu_dt_pci_map_id(const struct dt_device_node *np, uint32_t id,
+                        const char *map_name, const char *map_mask_name,
+                        struct dt_device_node **target, uint32_t *id_out);
+#endif /* CONFIG_HAS_PCI */
+
 struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle);
 
 #ifdef CONFIG_DEVICE_TREE_DEBUG
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 405db59971c5..7cb4d2aa5511 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -219,7 +219,8 @@ int iommu_dt_domain_init(struct domain *d);
 int iommu_release_dt_devices(struct domain *d);
 
 /*
- * Helper to add master device to the IOMMU using generic IOMMU DT bindings.
+ * Helpers to add a master device to the IOMMU using generic (PCI-)IOMMU
+ * DT bindings.
  *
  * Return values:
  *  0 : device is protected by an IOMMU
@@ -228,6 +229,9 @@ int iommu_release_dt_devices(struct domain *d);
  *      (IOMMU is not enabled/present or device is not connected to it).
  */
 int iommu_add_dt_device(struct dt_device_node *np);
+#ifdef CONFIG_HAS_PCI
+int iommu_add_dt_pci_device(struct pci_dev *pdev);
+#endif
 
 int iommu_do_dt_domctl(struct xen_domctl *, struct domain *,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 19:19:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 19:19:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533560.830341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBos-0006P0-On; Thu, 11 May 2023 19:19:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533560.830341; Thu, 11 May 2023 19:19:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBos-0006Ot-Lj; Thu, 11 May 2023 19:19:18 +0000
Received: by outflank-mailman (input) for mailman id 533560;
 Thu, 11 May 2023 19:19:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxBor-0005hc-Er
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 19:19:17 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e88::604])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b83c3e05-f030-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 21:19:16 +0200 (CEST)
Received: from DS7PR05CA0036.namprd05.prod.outlook.com (2603:10b6:8:2f::25) by
 DS0PR12MB7876.namprd12.prod.outlook.com (2603:10b6:8:148::9) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6387.18; Thu, 11 May 2023 19:19:14 +0000
Received: from DM6NAM11FT101.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:2f:cafe::a1) by DS7PR05CA0036.outlook.office365.com
 (2603:10b6:8:2f::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20 via Frontend
 Transport; Thu, 11 May 2023 19:19:13 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT101.mail.protection.outlook.com (10.13.172.208) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 19:19:13 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:19:13 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 12:19:12 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 14:19:11 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b83c3e05-f030-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cHlBC3twaD0QcwAZzctAdvpgiJLTsz63RaOEVJ9msGHr8Y7j2zoLjVFuny0/5ogQz5rxJ1Q+NRSoufixqQCeNkpimpZkIgSnXM+DW6asFCLMt3NTAnSrj/zIIZzDF4SDUnnpnIQmdZhreh6fBzsq5UUaNJH+marOXj3Rh4VIPCOoaM4Er1HxwAQ4WA/mAd8e7gqWCy7igpxVN7DmiECKvErtNXC4AvuXQzOWZPDVHy31qAFbEQwbX9cg/ngqYz+6vZ9GyVd2pzE+w5CjK7moXejaEDWF6dvPNgbTDPDm47so+ox2K8d5JN3+5TDP5X6SFKK5Y5Jv9H9QAsua1ccshg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Inv85I3n2ejvqoWpRiGzQrvDhgSa+TdZRQ6ClAmebH8=;
 b=lvpyvywFyUV9sB/e1wCiHUb2rF9rs/1Dbcm/RSG+CROWgjBzD6OqdMKTbdGfwQzu63HecwTAokLVKuaNNm0+BlYsO8tniNiFiRXh5QkBWwxKY3UK0bBX5z9B2b9bzA+g3KXbsQpnOBdakYjMZF7bOVk8u/BhsgsgYYY19FVhP1j5FChuvVOKg7SwvWUFETOJC7L2xAmkLuALR10y5L/REupwNYyG/wK1rbjx/BjU8+Z+IwjCwRTN3SaSy7/NQwY+9x/cbnV6sZMZfKH4sagXUy8j5MV15h0kDnbe9Elpxrd3GVLaSIdynjDb7kq58V2YZYPNLc43WmGBh8zGYCcwZw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Inv85I3n2ejvqoWpRiGzQrvDhgSa+TdZRQ6ClAmebH8=;
 b=rXphUoHFH8OuKLb8YQ9JRF1clLyR1HiDbY5zUpbeYDDfkyyYPrDRtIzyUGU0APGXstcKp+GcMylqIr2q2mWjU3+o3pNH/xdOlNfC6fom4WnN4Ufuy8TrtNfKacvrCwvHeDzxPsRpwAdGvRBy2/A6AN6RXZsTVlqpD9rRD1AAKeA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Jan Beulich
	<jbeulich@suse.com>
Subject: [PATCH v2 4/8] iommu/arm: iommu_add_dt_pci_device phantom handling
Date: Thu, 11 May 2023 15:16:50 -0400
Message-ID: <20230511191654.400720-5-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230511191654.400720-1-stewart.hildebrand@amd.com>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT101:EE_|DS0PR12MB7876:EE_
X-MS-Office365-Filtering-Correlation-Id: 64e53b38-bdff-4ffc-663e-08db52549b27
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	hN4ihwctcp6vilATTmpnC9zylTk3X/7PDQiM6B0SHx5O93vqoFB0yPa2+soU8K5kbOhorNaF6ZOPvY46X55yrlZHP8fsxlC+PidT0sUWS0pV/cuaW9doAjAP8vGo/NMOcfKKg7cA1kLEeRz9Y7mq42IJo5tEwmIKRcRljiuAq5j7og4sZ22aP83FBhUZRTOFx6Wt4TnTe5G5TCWLYaM+7FOcEQYSS2bGGHRgKF0VPcoo3ARLbOIjTJJcddeOHmLwncCcuGMOPdDdNYXZnf3fdRZDKjdJSGe6RxzDHk+NCsBgjNCE3JGFxJsO7P+XKfyIfnwSugAi2bjeXr15SynAUvCLcIb+7vEd0NJ6FBBxalmKGGuk8WAcMCAocjYAaaVWAGcCTPxD8uWxvcwkiAd0ebPYDT8gexXVhAPuyR67IVJl5DBCBwUhAvF/TeVVWdOt5krNM6PFqjZtQSdDjo6NgNj2t7PZeIpA4vwR2NoyuWLdAFDgEvu/eCF/cac4Rh5qn9GCdlBTvaKvEfDjTmzL2E9OY3xkf6BIB912ls8Ciw3Xmakb6+V8rGN+Z6h78IU6uNhmGXyPwis13LtvFTMQU8e8/Tzw6PoorUBu98Qz85u/QCOc4+ZA/5s/ReaqtkA926s/o7fLOoRmxGRhJWsFbCmdgbOtdgGW6llTcKBZHmzt1F508Ov8ZCuLjPaoMFbmOPjl92oY/yFCXLRu4AgSQulznURIpD06wrd9ZaaAHKbfOrZh0X37AODzPQeG7TAl
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(39860400002)(376002)(136003)(396003)(451199021)(36840700001)(46966006)(40470700004)(47076005)(81166007)(54906003)(40480700001)(41300700001)(316002)(83380400001)(356005)(82740400003)(1076003)(26005)(82310400005)(8936002)(336012)(426003)(70586007)(6916009)(4326008)(70206006)(36860700001)(5660300002)(8676002)(86362001)(44832011)(186003)(966005)(478600001)(36756003)(6666004)(40460700003)(2616005)(2906002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 19:19:13.5900
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 64e53b38-bdff-4ffc-663e-08db52549b27
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT101.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB7876

Handle phantom functions in iommu_add_dt_pci_device(). Each phantom
function has a unique requester ID (RID)/BDF. On ARM, we need to
map/translate the RID/BDF to an AXI stream ID for each phantom function
according to the pci-iommu device tree mapping [1]. The RID/BDF -> AXI
stream ID mapping in DT may allow phantom devices (i.e. devices with
phantom functions) to use different AXI stream IDs depending on the
(phantom) function.

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-iommu.txt

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v1->v2:
* new patch

---
 xen/drivers/passthrough/device_tree.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 5e462e5c2ca8..ced911f4fb31 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -247,6 +247,7 @@ int iommu_add_dt_pci_device(struct pci_dev *pdev)
     struct device *dev = pci_to_dev(pdev);
     const struct dt_device_node *np;
     int rc = NO_IOMMU;
+    unsigned int devfn = pdev->devfn;
 
     if ( !iommu_enabled )
         return NO_IOMMU;
@@ -275,7 +276,7 @@ int iommu_add_dt_pci_device(struct pci_dev *pdev)
      * According to the Documentation/devicetree/bindings/pci/pci-iommu.txt
      * from Linux.
      */
-    rc = iommu_dt_pci_map_id(np, PCI_BDF(pdev->bus, pdev->devfn), "iommu-map",
+    rc = iommu_dt_pci_map_id(np, PCI_BDF(pdev->bus, devfn), "iommu-map",
                              "iommu-map-mask", &iommu_spec.np, iommu_spec.args);
     if ( rc )
         return rc == -ENODEV ? NO_IOMMU : rc;
@@ -286,6 +287,26 @@ int iommu_add_dt_pci_device(struct pci_dev *pdev)
         iommu_fwspec_free(pci_to_dev(pdev));
         return -EINVAL;
     }
+    for ( ; pdev->phantom_stride ; )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = iommu_dt_pci_map_id(np, PCI_BDF(pdev->bus, devfn), "iommu-map",
+                                 "iommu-map-mask", &iommu_spec.np, iommu_spec.args);
+        if ( rc )
+        {
+            printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
+                   &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
+            return rc == -ENODEV ? NO_IOMMU : rc;
+        }
+        rc = iommu_dt_xlate(dev, &iommu_spec);
+        if ( rc < 0 )
+        {
+            iommu_fwspec_free(pci_to_dev(pdev));
+            return -EINVAL;
+        }
+    }
 
     return rc;
 }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 19:21:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 19:21:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533563.830351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBqg-0007sp-4G; Thu, 11 May 2023 19:21:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533563.830351; Thu, 11 May 2023 19:21:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBqg-0007si-1S; Thu, 11 May 2023 19:21:10 +0000
Received: by outflank-mailman (input) for mailman id 533563;
 Thu, 11 May 2023 19:21:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxBqf-0007sN-33
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 19:21:09 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20625.outbound.protection.outlook.com
 [2a01:111:f400:7e88::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa77328d-f030-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 21:21:08 +0200 (CEST)
Received: from DS7PR03CA0128.namprd03.prod.outlook.com (2603:10b6:5:3b4::13)
 by SJ2PR12MB8884.namprd12.prod.outlook.com (2603:10b6:a03:547::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Thu, 11 May
 2023 19:21:01 +0000
Received: from DM6NAM11FT109.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b4:cafe::e) by DS7PR03CA0128.outlook.office365.com
 (2603:10b6:5:3b4::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22 via Frontend
 Transport; Thu, 11 May 2023 19:21:01 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT109.mail.protection.outlook.com (10.13.173.178) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 19:21:01 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:21:00 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 14:20:58 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa77328d-f030-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZQAw7w0st9xKb6p8aeneQt9eobiV7QsJIVBkAMup4rxrPMIehdZSrz4jKmTkHHuMw7nRfACVKXjwC7BAz4Gd3MbiI/phHO+K7tw0tbWE9h9WP7mPxpzKzmnVT0ZeTLkg8Dk1gEiqwaWw8rfv1ccgdqMZn2LcX0F/3ZM/fzZrLna5ZfbeWuiCsglANvkBTXeIdWSJCM6mEFL9eB189hwxY21+MjQZEWjqo0/C7WhKjHUGWIZzkf9C3lpqn82IIKZrpf6paQzSbPUiEgnTjC5t7HR1NYrdsjNSRXKiwkJgIQK2LMhpwAePKG1oQcc2vTMl2fBgP//mEiGgM7e7h6KFoA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GjAtPrq7Zr2LNV7JB6qbwRaGGVcDQHrRJHlX0JDBPbY=;
 b=mcP4s4F3g55OB3UvRSMRgfTjj1CuPGatFt3t5L5B2hSsR9cFyPoQmHp54Ngma4t3MzY1Vxt3pw9xHyhce0AVjsSNioEurVg2QoyD424VeMyRvPyb//wvaLnDWACn32HaT3/udMeFYhBYvIUgid/d470IOmFMpW9ZuN7yf2k1AGJAedL0OSa0bs6pcI1EjnPeImFHgBrchpA/XU7rK4itP3i4Y9S0A94WmPWTYr4tEqRVZuwaPMa9ia4TGp7O9Ykrwu6b7rTJIw++KJA8fLsUEjgV8x+ZGL5dKjve2WqEnSrzWlBLHH9BVHcM8PgKRjKy8KGr5shCRnkPg9zOJKb8uw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GjAtPrq7Zr2LNV7JB6qbwRaGGVcDQHrRJHlX0JDBPbY=;
 b=Wgq07Ka0qPjcMhcQXk04uu/i+6q+C+xpj/lQgAQh77IOJ/iWVtAzwLGj9krMS4DfTW0wJQOzThxFO8XB+zb7iRnLfFxpD7Mm2pQep6ovJYAASzZ3wKzB8rppR3DA2JNL4IHXv9nmvFJZ/55a7x967feqW2jvOY1EJD2EquVTwhU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Julien Grall
	<julien@xen.org>, Rahul Singh <rahul.singh@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [RFC PATCH v2 6/8] pci/arm: don't do iommu call for phantom functions
Date: Thu, 11 May 2023 15:16:52 -0400
Message-ID: <20230511191654.400720-7-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230511191654.400720-1-stewart.hildebrand@amd.com>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT109:EE_|SJ2PR12MB8884:EE_
X-MS-Office365-Filtering-Correlation-Id: ff383d8c-2656-4bb3-7375-08db5254db40
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	XEYm88fTWcuzOiKkobw4GSBwO2PuLjnE7oNBdzPsN8fxEnbpAsswhruMw+xQjn1YpIRlLY5M7uhq5WUzm+cgKip2CEyCST/uQsvrBF+9N8JohRFagw2rj1mRTh3hFGBlWqGS7vg9PcC3WW4Dwq5AhomUyPEMQ71H6mtbNY6OEzr4X4kfFaT1wJmgSyeKNi0LuQpK1qCQglKkJTMIdutP0689SxKMQ9WcxT69/mSVB2AV5a0j0m6Sn2b9awDi1drJxLXpjOq0Kc/ROePQhx7HOL8DAUTHIZU/FNV0xucHUBmilVfhY4SnDwxLEldmBaJqjk6b22Zfyv0KfvI5N68CagtkmnkASY0buvnszvpO/+6L1PzYvQROK+NiY0jRRAEq4FJPSC6WqLA1DnyV612j2KjyFB/D8a6DcYHYm855pPZL1qLzlXMKjB4hLZGgbMSLNHLAdZgPLhWVsIKGjqvi/HaRZEQDj6iKgaOU8W36ZazL7ij865i6DF1onY22OczlCFWNdXppIMcOYeRysUO8YvHAL/Zb0Htzc4D+xCd8bOgcp6hxtSzV1d0520B4zggRDD6pB16hmTxxa0wLh0yZ5bnnnhhT4h5HQtbEgpTzMpYiroanK8GR123Rew1d1hGlf57IkCmh0DcfY6EQ7XdfJwx3IRipCw7wNFgbAqq20Wyevz6H78RfIwSyJJsXa6KGLOfe65ntek4RRhrwj8CKIOm4oCV3w9vvg8X0wUez9JCMjpNquEhrKYprtGrhaJ7QDnQDdL8Cn8R9paMFkOdoAA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(376002)(136003)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(186003)(26005)(1076003)(356005)(81166007)(2906002)(41300700001)(4326008)(316002)(6916009)(82740400003)(47076005)(83380400001)(36860700001)(36756003)(86362001)(8676002)(2616005)(44832011)(336012)(426003)(8936002)(5660300002)(40480700001)(54906003)(478600001)(82310400005)(6666004)(40460700003)(70206006)(70586007)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 19:21:01.1440
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ff383d8c-2656-4bb3-7375-08db5254db40
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT109.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB8884

It's not necessary to add/remove/assign/deassign PCI phantom functions
for the ARM SMMU drivers. All associated AXI stream IDs are already added
during the IOMMU call for the base PCI device/function.

However, the ARM SMMU drivers can cope with the extra/unnecessary calls just
fine, so this patch is RFC as it's not strictly required.

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
I'm aware the indentation is wrong. I just wanted to keep the diffstat small
while this particular patch is RFC.

v1->v2:
* new patch
---
 xen/drivers/passthrough/pci.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 6dbaae682773..3823edf096eb 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -871,6 +871,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
     else
         target = hardware_domain;
 
+    if ( !IS_ENABLED(CONFIG_HAS_DEVICE_TREE) )
     while ( pdev->phantom_stride )
     {
         devfn += pdev->phantom_stride;
@@ -1335,7 +1336,7 @@ static int iommu_add_device(struct pci_dev *pdev)
         return 0;
 
     rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
-    if ( rc || !pdev->phantom_stride )
+    if ( rc || !pdev->phantom_stride || IS_ENABLED(CONFIG_HAS_DEVICE_TREE) )
         return rc;
 
     for ( ; ; )
@@ -1379,6 +1380,7 @@ static int iommu_remove_device(struct pci_dev *pdev)
     if ( !is_iommu_enabled(pdev->domain) )
         return 0;
 
+    if ( !IS_ENABLED(CONFIG_HAS_DEVICE_TREE) )
     for ( devfn = pdev->devfn ; pdev->phantom_stride; )
     {
         int rc;
@@ -1464,6 +1466,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
                           pci_to_dev(pdev), flag)) )
         goto done;
 
+    if ( !IS_ENABLED(CONFIG_HAS_DEVICE_TREE) )
     for ( ; pdev->phantom_stride; rc = 0 )
     {
         devfn += pdev->phantom_stride;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 19:22:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 19:22:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533567.830361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBrd-0008Qf-F4; Thu, 11 May 2023 19:22:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533567.830361; Thu, 11 May 2023 19:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBrd-0008QY-Bl; Thu, 11 May 2023 19:22:09 +0000
Received: by outflank-mailman (input) for mailman id 533567;
 Thu, 11 May 2023 19:22:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxBrb-0008QA-G5
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 19:22:07 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20620.outbound.protection.outlook.com
 [2a01:111:f400:7e89::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1bda4b36-f031-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 21:22:04 +0200 (CEST)
Received: from BY3PR03CA0009.namprd03.prod.outlook.com (2603:10b6:a03:39a::14)
 by MN0PR12MB6368.namprd12.prod.outlook.com (2603:10b6:208:3d2::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Thu, 11 May
 2023 19:22:00 +0000
Received: from DM6NAM11FT029.eop-nam11.prod.protection.outlook.com
 (2603:10b6:a03:39a:cafe::67) by BY3PR03CA0009.outlook.office365.com
 (2603:10b6:a03:39a::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20 via Frontend
 Transport; Thu, 11 May 2023 19:21:59 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT029.mail.protection.outlook.com (10.13.173.23) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 19:21:59 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:21:59 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 12:21:58 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 14:21:56 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bda4b36-f031-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JvMfM6sm2eQUSXF6J+K09dyMMFcKESUeF9HoASPMtNVKB4xPxheK4xAxOmofVP2/lFZCNfzvGhCYEXj7ZhpPclm8jNiCyT4j+MPsy3ekvPWAtpQCP5BiH0lGwwjPnieBIrpd/C4yfY6SK3zhXx0/9xHktUzIynCQFf8OOyWzrWkaagOB40efi3x5lRJkxsIj7CGI91VzKQjCndaWTCaQi2VumYvfYs/CWkg0aBTabOSlSmENmPw3f7ELwdW2jNzhDGv/13MLxegcHI5jjsneckxVVFDfXnV1lCzRm+LeuVF3jvDIHvcFMCACbJxwFuDRjUskYpQjosiVg6QSeD7iBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oJZqJ3OSmTgBvHf2UzJUuvXPD1qwSpScJxGorP0Ak6c=;
 b=dDTRKnBZh0Z84C5TflKobnntbh3kf6+WnpeZiKLs4wD6qCxzbE7SeLfBNA7N//upLmcSYbrK+Hzp/cYX4EdIL/isLdDxCv37f+998LAKICi5KDyN+R5LwWE1o5vvQVZxwvVy3PqOWtAimvRsHYvav7Q+0Jmqor7XZcrDNplE8a9oQaFWZGniD6rdne9colf1NdCVbexC8tiiBNEq88nefWrkXAHG3F1UGmJlWEpJP9cnhFVLhNNsPljs2eR0fifVKecfi4+4Se6IZBctBb9TZPcbDDLi+5D0Tq6KNKQe0vDR5OoxfFAzI3GjsYc8dGYHMnYQEH3rqucgmBVNl3NRkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oJZqJ3OSmTgBvHf2UzJUuvXPD1qwSpScJxGorP0Ak6c=;
 b=HJA+rY3eLG5bcfwDIqAz8BFSnwIU6mIqtaGBtgBIvLyrRGQ7eKJ9Bg9sDO5DpZ9rUuLfCHFxLRTKxh3wQ35QJzJQa9Q9Dplrj3Dl9Ew9SW/VZ8nYUzUpxAbAHj+dQXDshJuWHuX1Sd6DeMd8LVsLElYlbIFsxg6B6VLXpsKxhh8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Julien Grall
	<julien@xen.org>, Rahul Singh <rahul.singh@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Stewart Hildebrand
	<stewart.hildebrand@amd.com>
Subject: [PATCH v2 7/8] xen/arm: smmuv2: Add PCI devices support for SMMUv2
Date: Thu, 11 May 2023 15:16:53 -0400
Message-ID: <20230511191654.400720-8-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230511191654.400720-1-stewart.hildebrand@amd.com>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT029:EE_|MN0PR12MB6368:EE_
X-MS-Office365-Filtering-Correlation-Id: 87a366ff-48a8-4867-74cc-08db5254fe05
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	er6pSDsuUmE9S6VPddE5L9MyS3uJI8AcwGZYrQs2XFjOWiQYghEDUCNzlOxQJU6mek7kihlxFjgYdrdBAH8v9RO5otqafUb3FtznBLJq7CSBAb3W24CYgAEd/2ngkxhlUa6WvMPtbiIDuLtabneQ4YgHQpXNQ5A+FcWAmB3DoVgCMGsP5av7VMXASSs2yfTbVQHrj4ng4eQCh1lxF7FsVpzUHYt/5JL4IKcdwfvZUS+MDFiAT+M5+MyIzJ4uQ0E0mjG14frhvdijSMVgKJfcJsRcK5WOml/6A6m3JNqVJeaI4i6d7YHNCyiJ/QgF2xuwpRJOp3QousGLGLd3PwIUaUxcd6aAYito8JulTvWRx92RYpt6jUcKnRSSx/+6WIBoHSMTdB19K2/70SzTBKSBy1vAAtF+p1LqYwFSncL2ahXpWY1QfwPB0ZyIgwdvoPOinH0qwIbHqQHcrAnHH4GnrFBMPfLJKPQJSfFuOyc+DRa4vmJPfdyO6PUtTlkb32Fpb598mCrUNanbbyYFwVRidR0v37+xyV/Jk6m3QGY/sJNdlSqbYQlQEc7NIbYfcPqTj0OpQobv1FtioBSHyoiPgyp4iPrl2qEsrOqjnBjRPyy7gGDaRPrlZ4H9U82K8J2x+uaQagBYUOPvAbppBfO7XbQ6Omo41qumfd5MBpWQ/0k0hiIAknbSTxavF2OQzyWdJ+QdKnBzZr63sqQAMrmtaesPpYgq/BvpWdZE4med/LD+sYh96PPatSsFsrG7JHwDfZfbscRyauAY4t9okfZnZFOf3iMWMgeWXqRneZz16tI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(136003)(39860400002)(346002)(451199021)(40470700004)(36840700001)(46966006)(5660300002)(44832011)(54906003)(478600001)(6666004)(41300700001)(966005)(8936002)(8676002)(316002)(1076003)(26005)(4326008)(70206006)(6916009)(70586007)(2906002)(83380400001)(336012)(47076005)(40460700003)(426003)(2616005)(186003)(356005)(36860700001)(82740400003)(81166007)(40480700001)(86362001)(82310400005)(36756003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 19:21:59.4620
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 87a366ff-48a8-4867-74cc-08db5254fe05
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT029.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB6368

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v1->v2:
* ignore add_device/assign_device/reassign_device calls for phantom functions
  (i.e. devfn != pdev->devfn)

downstream->v1:
* wrap unused function in #if 0
* remove the remove_device() stub since it was submitted separately to the list
  [XEN][PATCH v5 07/17] xen/smmu: Add remove_device callback for smmu_iommu ops
  https://lists.xenproject.org/archives/html/xen-devel/2023-04/msg00432.html
* arm_smmu_(de)assign_dev: return error instead of crashing system
* update condition in arm_smmu_reassign_dev
* style fixup
* add && !is_hardware_domain(d) into condition in arm_smmu_assign_dev()

(cherry picked from commit 0c11a7f65f044c26d87d1e27ac6283ef1f9cfb7a from
 the downstream branch spider-master from
 https://github.com/xen-troops/xen.git)
---

This is a file imported from Linux with modifications for Xen. What should be
the coding style for Xen modifications?
---
 xen/drivers/passthrough/arm/smmu.c | 108 +++++++++++++++++++++++------
 1 file changed, 87 insertions(+), 21 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 5b6024d579a8..a8476a22b096 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -134,8 +134,20 @@ typedef enum irqreturn irqreturn_t;
 /* Device logger functions
  * TODO: Handle PCI
  */
-#define dev_print(dev, lvl, fmt, ...)						\
-	 printk(lvl "smmu: %s: " fmt, dt_node_full_name(dev_to_dt(dev)), ## __VA_ARGS__)
+#ifndef CONFIG_HAS_PCI
+#define dev_print(dev, lvl, fmt, ...)    \
+    printk(lvl "smmu: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#else
+#define dev_print(dev, lvl, fmt, ...) ({                                \
+    if ( !dev_is_pci((dev)) )                                           \
+        printk(lvl "smmu: %s: " fmt, dev_name((dev)), ## __VA_ARGS__);  \
+    else                                                                \
+    {                                                                   \
+        struct pci_dev *pdev = dev_to_pci((dev));                       \
+        printk(lvl "smmu: %pp: " fmt, &pdev->sbdf, ## __VA_ARGS__);     \
+    }                                                                   \
+})
+#endif
 
 #define dev_dbg(dev, fmt, ...) dev_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
 #define dev_notice(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
@@ -187,6 +199,7 @@ static void __iomem *devm_ioremap_resource(struct device *dev,
  * Xen: PCI functions
  * TODO: It should be implemented when PCI will be supported
  */
+#if 0 /* unused */
 #define to_pci_dev(dev)	(NULL)
 static inline int pci_for_each_dma_alias(struct pci_dev *pdev,
 					 int (*fn) (struct pci_dev *pdev,
@@ -196,6 +209,7 @@ static inline int pci_for_each_dma_alias(struct pci_dev *pdev,
 	BUG();
 	return 0;
 }
+#endif
 
 /* Xen: misc */
 #define PHYS_MASK_SHIFT		PADDR_BITS
@@ -632,7 +646,7 @@ struct arm_smmu_master_cfg {
 	for (i = 0; idx = cfg->smendx[i], i < num; ++i)
 
 struct arm_smmu_master {
-	struct device_node		*of_node;
+	struct device			*dev;
 	struct rb_node			node;
 	struct arm_smmu_master_cfg	cfg;
 };
@@ -724,7 +738,7 @@ arm_smmu_get_fwspec(struct arm_smmu_master_cfg *cfg)
 {
 	struct arm_smmu_master *master = container_of(cfg,
 			                                      struct arm_smmu_master, cfg);
-	return dev_iommu_fwspec_get(&master->of_node->dev);
+	return dev_iommu_fwspec_get(master->dev);
 }
 
 static void parse_driver_options(struct arm_smmu_device *smmu)
@@ -757,7 +771,7 @@ static struct device_node *dev_get_dev_node(struct device *dev)
 }
 
 static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu,
-						struct device_node *dev_node)
+						struct device *dev)
 {
 	struct rb_node *node = smmu->masters.rb_node;
 
@@ -766,9 +780,9 @@ static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu,
 
 		master = container_of(node, struct arm_smmu_master, node);
 
-		if (dev_node < master->of_node)
+		if (dev < master->dev)
 			node = node->rb_left;
-		else if (dev_node > master->of_node)
+		else if (dev > master->dev)
 			node = node->rb_right;
 		else
 			return master;
@@ -803,9 +817,9 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
 			= container_of(*new, struct arm_smmu_master, node);
 
 		parent = *new;
-		if (master->of_node < this->of_node)
+		if (master->dev < this->dev)
 			new = &((*new)->rb_left);
-		else if (master->of_node > this->of_node)
+		else if (master->dev > this->dev)
 			new = &((*new)->rb_right);
 		else
 			return -EEXIST;
@@ -824,18 +838,18 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 	struct arm_smmu_master *master;
 	struct device_node *dev_node = dev_get_dev_node(dev);
 
-	master = find_smmu_master(smmu, dev_node);
+	master = find_smmu_master(smmu, dev);
 	if (master) {
 		dev_err(dev,
 			"rejecting multiple registrations for master device %s\n",
-			dev_node->name);
+			dev_node ? dev_node->name : "");
 		return -EBUSY;
 	}
 
 	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
 	if (!master)
 		return -ENOMEM;
-	master->of_node = dev_node;
+	master->dev = dev;
 
 	/* Xen: Let Xen know that the device is protected by an SMMU */
 	device_set_protected(dev);
@@ -845,7 +859,7 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 		     (fwspec->ids[i] >= smmu->num_mapping_groups)) {
 			dev_err(dev,
 				"stream ID for master device %s greater than maximum allowed (%d)\n",
-				dev_node->name, smmu->num_mapping_groups);
+				dev_node ? dev_node->name : "", smmu->num_mapping_groups);
 			return -ERANGE;
 		}
 		master->cfg.smendx[i] = INVALID_SMENDX;
@@ -889,6 +903,15 @@ static int arm_smmu_dt_add_device_generic(u8 devfn, struct device *dev)
 	if (smmu == NULL)
 		return -ENXIO;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+		if ( devfn != pdev->devfn )
+			return 0;
+	}
+#endif
+
 	return arm_smmu_dt_add_device_legacy(smmu, dev, fwspec);
 }
 
@@ -912,11 +935,10 @@ static struct arm_smmu_device *find_smmu_for_device(struct device *dev)
 {
 	struct arm_smmu_device *smmu;
 	struct arm_smmu_master *master = NULL;
-	struct device_node *dev_node = dev_get_dev_node(dev);
 
 	spin_lock(&arm_smmu_devices_lock);
 	list_for_each_entry(smmu, &arm_smmu_devices, list) {
-		master = find_smmu_master(smmu, dev_node);
+		master = find_smmu_master(smmu, dev);
 		if (master)
 			break;
 	}
@@ -2006,6 +2028,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
 }
 #endif
 
+#if 0 /* Not used */
 static int __arm_smmu_get_pci_sid(struct pci_dev *pdev, u16 alias, void *data)
 {
 	*((u16 *)data) = alias;
@@ -2016,6 +2039,7 @@ static void __arm_smmu_release_pci_iommudata(void *data)
 {
 	kfree(data);
 }
+#endif
 
 static int arm_smmu_add_device(struct device *dev)
 {
@@ -2023,12 +2047,13 @@ static int arm_smmu_add_device(struct device *dev)
 	struct arm_smmu_master_cfg *cfg;
 	struct iommu_group *group;
 	void (*releasefn)(void *) = NULL;
-	int ret;
 
 	smmu = find_smmu_for_device(dev);
 	if (!smmu)
 		return -ENODEV;
 
+	/* There is no need to distinguish here, thanks to PCI-IOMMU DT bindings */
+#if 0
 	if (dev_is_pci(dev)) {
 		struct pci_dev *pdev = to_pci_dev(dev);
 		struct iommu_fwspec *fwspec;
@@ -2053,10 +2078,12 @@ static int arm_smmu_add_device(struct device *dev)
 				       &fwspec->ids[0]);
 		releasefn = __arm_smmu_release_pci_iommudata;
 		cfg->smmu = smmu;
-	} else {
+	} else
+#endif
+	{
 		struct arm_smmu_master *master;
 
-		master = find_smmu_master(smmu, dev->of_node);
+		master = find_smmu_master(smmu, dev);
 		if (!master) {
 			return -ENODEV;
 		}
@@ -2724,6 +2751,27 @@ static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
 			return -ENOMEM;
 	}
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) && !is_hardware_domain(d) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Assigning device %04x:%02x:%02x.%u to dom%d\n",
+		       pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+		       d->domain_id);
+
+		if ( devfn != pdev->devfn || pdev->domain == d )
+			return 0;
+
+		list_move(&pdev->domain_list, &d->pdev_list);
+		pdev->domain = d;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	if (!dev_iommu_group(dev)) {
 		ret = arm_smmu_add_device(dev);
 		if (ret)
@@ -2773,11 +2821,29 @@ out:
 	return ret;
 }
 
-static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+static int arm_smmu_deassign_dev(struct domain *d, u8 devfn, struct device *dev)
 {
 	struct iommu_domain *domain = dev_iommu_domain(dev);
 	struct arm_smmu_xen_domain *xen_domain;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Deassigning device %04x:%02x:%02x.%u from dom%d\n",
+		       pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+		       d->domain_id);
+
+		if ( devfn != pdev->devfn )
+			return 0;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	xen_domain = dom_iommu(d)->arch.priv;
 
 	if (!domain || domain->priv->cfg.domain != d) {
@@ -2805,13 +2871,13 @@ static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
 	int ret = 0;
 
 	/* Don't allow remapping on other domain than hwdom */
-	if ( t && !is_hardware_domain(t) )
+	if ( t && !is_hardware_domain(t) && t != dom_io )
 		return -EPERM;
 
 	if (t == s)
 		return 0;
 
-	ret = arm_smmu_deassign_dev(s, dev);
+	ret = arm_smmu_deassign_dev(s, devfn, dev);
 	if (ret)
 		return ret;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 19:22:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 19:22:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533571.830371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBrz-0000Xh-Rw; Thu, 11 May 2023 19:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533571.830371; Thu, 11 May 2023 19:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBrz-0000Xa-Oh; Thu, 11 May 2023 19:22:31 +0000
Received: by outflank-mailman (input) for mailman id 533571;
 Thu, 11 May 2023 19:22:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxBrz-0008QA-DH
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 19:22:31 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20631.outbound.protection.outlook.com
 [2a01:111:f400:7e89::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2ad39f39-f031-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 21:22:29 +0200 (CEST)
Received: from DS7P222CA0011.NAMP222.PROD.OUTLOOK.COM (2603:10b6:8:2e::18) by
 DM6PR12MB4171.namprd12.prod.outlook.com (2603:10b6:5:21f::18) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6387.22; Thu, 11 May 2023 19:22:24 +0000
Received: from DM6NAM11FT085.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:2e:cafe::24) by DS7P222CA0011.outlook.office365.com
 (2603:10b6:8:2e::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22 via Frontend
 Transport; Thu, 11 May 2023 19:22:24 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT085.mail.protection.outlook.com (10.13.172.236) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.20 via Frontend Transport; Thu, 11 May 2023 19:22:24 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:22:23 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:22:22 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 14:22:21 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ad39f39-f031-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bak56nci9K5jjkkIkXvkbiaY+3qyRk9Y/L9C3w3i5L8DqGHjZQqoLxpjtSs+9Z7wLfvCAinnXUnKiA30MxXnXwomtoxjIDGEkjCXKmOmkT/Ymxowvei3Yi9PsISYTALPuP3LTb+nh7MDPwc61xDRsZ8GwdytdJi/I2X9tzUwkB6rMzThDWNCVAiux5I39wKRtNG4tTDdI7UiU6M3z/Wu7Sifui/Y4VrL8D7Sbkq54AhY8bkzrlg4siiYR7wQoO0+piCQnXgY6AfhAPyjdsJj21QK65keRuZUfRAHUGnchdezRzeeU7TxPQqKNadNrIdUYsGFzTA5wfa2Fge9kcGFRw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SO3THuJ+/ik7n2bM+aWq9Ulbj6rZ/FkssTS50Og6lnI=;
 b=DcVPho3m/knVFB70n+uJdsR+l1LRyMEVYqut78+7RKqzAGXvc4v1b9+EJifgHOzUcSoZkHS64HeYASE8jloHotNwv5o1STGH1/qDDpAdbTnnSHivsr5jHDiQ+2p3Umsd9aDQnemmnCOEluOP08s0mog6BqvWs0TVUVSv9LOaTomnM+I3llOzfMJSGHQAM3Zufyqf+QXMVKzftqFG71+fp/6M0r6J5ueQJUx9G/LEA9Y+vsmLInpsZB6NipiaTZk8m1dv070z0R7ONBBWLZdWTDIOQnGNFz+jwVs/5eHrEFZ2FEuH0pCd+64kgdhVneldA4YNSemYOtepZ8Q+rE74lg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SO3THuJ+/ik7n2bM+aWq9Ulbj6rZ/FkssTS50Og6lnI=;
 b=4A5tPR31dR/4f0o4ukSFqtuV8joZhJa17RwUd9cWru37kLlJaiZso5W7W6iP0wjin7oZBXCCyZcjYlHqgjziPI/zZG/SYFtZ+lwVez7VgBktz3euXhezzif1CX/eG84daUWlPafePXTY1+gS6vdNzQ6fYJySXaZuH/LBCLaOp/I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Rahul Singh <rahul.singh@arm.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stewart Hildebrand <stewart.hildebrand@amd.com>
Subject: [PATCH v2 8/8] xen/arm: smmuv3: Add PCI devices support for SMMUv3
Date: Thu, 11 May 2023 15:16:54 -0400
Message-ID: <20230511191654.400720-9-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230511191654.400720-1-stewart.hildebrand@amd.com>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT085:EE_|DM6PR12MB4171:EE_
X-MS-Office365-Filtering-Correlation-Id: 686471a2-ae31-4dc7-22f2-08db52550cb2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	3sG1t+oyvh+DFdbRQmSOotQNDk8aoyCD9Gi82mgtcjL/LvUeDP0PUS4nfl7gXyjug8wk987CNowXe/bLDix0H46cqeBNRI3i8s58zVy5REQ3S4KTGV74WCkltexXERhLTxqhCNxSWOeuu/1S2PypELaqI33L8qF7goba0/XkXIl5NK7g6FfB3oC7sg6yPXg+HuuRYCrPPPrpxRuofOBqbaqFsjVJY1sF8V+sZZWggSKhQuhpClxcwRSFt/HVLgptrGaLF0swGTFAWUOv+V7dg6k0umUyPPOPn+7zKozSNp0gzvG/IAJm2m0Ej9UNrDh6G7s9UjXEEWXl46EIUV4ky65F3MZ/y5SeWEBAC0AGIxy75aCGkRBku7yMICPW2fv7ya8KuDaBrzx+E1JnjialBJ2pY/aGGhLGl1btBB24Dwyn1d18JfVQo88K8AL+QY7xg1eX5RjAAwVfUtaIRTXh8yMD5N2WHhCxF8ztg0WR3+FHlmXIZJyaAXgMVnpkdX4kY2JR+g7JeHnoVwmmJFuaU4Q7B2zwXAl/BoOftglE0z/U1reN5rK/D46dlvlKLQ1rN58jHqzKgPbHmAFFmldHE8B92J66upaWuGuj/PJ83DLc6uVc+wfOLIwEndDgAgm8h+vzPn6JxN3fWaBP/Un49hB366dB8oB0orAztl7lmXeh5exnsU4pz7yquItV3pWzbgX9AdUa8uxMkSuZuZ+/IEgGL2sPBsEtrDf62wOoyRoyxuGgcXHj6C57fFprkZvFsYZdM7AsZ4UvDEzoOQJtcA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(376002)(396003)(136003)(451199021)(40470700004)(36840700001)(46966006)(36756003)(86362001)(54906003)(316002)(6916009)(4326008)(70586007)(70206006)(966005)(478600001)(6666004)(40480700001)(82310400005)(8676002)(8936002)(41300700001)(2906002)(5660300002)(44832011)(81166007)(82740400003)(356005)(186003)(26005)(1076003)(336012)(36860700001)(426003)(83380400001)(47076005)(2616005)(40460700003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 19:22:24.0974
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 686471a2-ae31-4dc7-22f2-08db52550cb2
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT085.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4171

From: Rahul Singh <rahul.singh@arm.com>

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v1->v2:
* ignore add_device/assign_device/reassign_device calls for phantom functions
  (i.e. devfn != pdev->devfn)

downstream->v1:
* rebase
* move 2 replacements of s/dt_device_set_protected(dev_to_dt(dev))/device_set_protected(dev)/
  from this commit to ("xen/arm: Move is_protected flag to struct device")
  so as not to break the ability to bisect
* adjust patch title (remove stray space)
* arm_smmu_(de)assign_dev: return error instead of crashing system
* remove arm_smmu_remove_device() stub
* update condition in arm_smmu_reassign_dev
* style fixup

(cherry picked from commit 7ed6c3ab250d899fe6e893a514278e406a2893e8 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---

This file is imported from Linux with modifications for Xen. Which coding
style should be used for the Xen modifications?
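For reference, the phantom-function filtering described in the v1->v2 changelog above (ignoring add/assign/reassign calls when devfn != pdev->devfn) can be sketched as follows. This is a simplified illustration with hypothetical stand-in types, not the actual Xen structures:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for Xen's struct pci_dev; fields are illustrative. */
struct pci_dev {
    uint8_t devfn; /* devfn of the real function owning the device */
};

/*
 * A multi-function device may expose phantom functions that share the
 * main function's requester ID. Only the call whose devfn matches
 * pdev->devfn should allocate and attach SMMU state; calls for phantom
 * functions silently succeed without doing any work.
 */
static int add_device(uint8_t devfn, const struct pci_dev *pdev)
{
    if (devfn != pdev->devfn)
        return 0; /* phantom function: no-op, report success */
    /* ... real master allocation/attach would happen here ... */
    return 1; /* illustrative: indicates real work was done */
}
```

The same guard appears at the top of each of the add/assign/deassign paths in the patch, so all SMMU bookkeeping happens exactly once per physical device.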
---
 xen/drivers/passthrough/arm/smmu-v3.c | 67 +++++++++++++++++++++++++--
 1 file changed, 64 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 4b452e6fdd00..807cfe575345 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -1469,6 +1469,8 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 }
 /* Forward declaration */
 static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev);
+static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
+			struct device *dev, u32 flag);
 
 static int arm_smmu_add_device(u8 devfn, struct device *dev)
 {
@@ -1484,6 +1486,15 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
 	if (!smmu)
 		return -ENODEV;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+		if ( devfn != pdev->devfn )
+			return 0;
+	}
+#endif
+
 	master = xzalloc(struct arm_smmu_master);
 	if (!master)
 		return -ENOMEM;
@@ -1527,6 +1538,17 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
 	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		ret = arm_smmu_assign_dev(pdev->domain, devfn, dev, 0);
+		if (ret)
+			goto err_free_master;
+	}
+#endif
+
 	return 0;
 
 err_free_master:
@@ -2607,6 +2629,27 @@ static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
 	struct arm_smmu_domain *smmu_domain;
 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) && !is_hardware_domain(d) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Assigning device %04x:%02x:%02x.%u to dom%d\n",
+			pdev->seg, pdev->bus, PCI_SLOT(devfn),
+			PCI_FUNC(devfn), d->domain_id);
+
+		if ( devfn != pdev->devfn || pdev->domain == d )
+			return 0;
+
+		list_move(&pdev->domain_list, &d->pdev_list);
+		pdev->domain = d;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	spin_lock(&xen_domain->lock);
 
 	/*
@@ -2640,7 +2683,7 @@ out:
 	return ret;
 }
 
-static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+static int arm_smmu_deassign_dev(struct domain *d, uint8_t devfn, struct device *dev)
 {
 	struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
@@ -2652,6 +2695,24 @@ static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
 		return -ESRCH;
 	}
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Deassigning device %04x:%02x:%02x.%u from dom%d\n",
+			pdev->seg, pdev->bus, PCI_SLOT(devfn),
+			PCI_FUNC(devfn), d->domain_id);
+
+		if ( devfn != pdev->devfn )
+			return 0;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	spin_lock(&xen_domain->lock);
 
 	arm_smmu_detach_dev(master);
@@ -2671,13 +2732,13 @@ static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
 	int ret = 0;
 
 	/* Don't allow remapping on other domain than hwdom */
-	if ( t && !is_hardware_domain(t) )
+	if ( t && !is_hardware_domain(t) && (t != dom_io) )
 		return -EPERM;
 
 	if (t == s)
 		return 0;
 
-	ret = arm_smmu_deassign_dev(s, dev);
+	ret = arm_smmu_deassign_dev(s, devfn, dev);
 	if (ret)
 		return ret;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 19:27:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 19:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533578.830380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBws-0001PX-E9; Thu, 11 May 2023 19:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533578.830380; Thu, 11 May 2023 19:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxBws-0001PQ-BK; Thu, 11 May 2023 19:27:34 +0000
Received: by outflank-mailman (input) for mailman id 533578;
 Thu, 11 May 2023 19:27:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qWxp=BA=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxBpB-00042a-D8
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 19:19:37 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20629.outbound.protection.outlook.com
 [2a01:111:f400:7eab::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c30d4630-f030-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 21:19:35 +0200 (CEST)
Received: from DS7P222CA0023.NAMP222.PROD.OUTLOOK.COM (2603:10b6:8:2e::9) by
 DS7PR12MB5911.namprd12.prod.outlook.com (2603:10b6:8:7c::16) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6387.18; Thu, 11 May 2023 19:19:31 +0000
Received: from DM6NAM11FT093.eop-nam11.prod.protection.outlook.com
 (2603:10b6:8:2e:cafe::9e) by DS7P222CA0023.outlook.office365.com
 (2603:10b6:8:2e::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22 via Frontend
 Transport; Thu, 11 May 2023 19:19:31 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT093.mail.protection.outlook.com (10.13.172.235) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.22 via Frontend Transport; Thu, 11 May 2023 19:19:31 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 14:19:30 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 11 May
 2023 12:19:30 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 11 May 2023 14:19:28 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c30d4630-f030-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z62fgB6WMELun9BSFmliivebJ5ZvIqlHQuq5HKMM+7NuUREA8YK69GgS1H+Icfa7jriO+hm/znpTDCYgiWLRnm4rPeVG7/uyg+Ld35p66cRCGE8znaD6zdX1ULGZKBUWJhhmYMDfcGF///QIUM0V4UGx+/91mTbzhwFuhCP5tRAyDCnl3rRynzlEu0GiouQACMPbC5mMhwNLNNUfeadwzeP0CnvZF4tt+raXjqGMw5kOuOG6BTRM7XujSDtmdDvuWKM/BkmDO3OA59NTpgY4Gkf7pdqALB2puYkalPFYKZUVlDuHzQpAejuWRibBoJQUHVXzRM0OJrx8y00slHd9ZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bw0rFieP9TglgEJiPQkHGTzJjkGcjQqChjjGQBEv5dQ=;
 b=C+ZrViLwvKc3xnHec0guBvhmK0C8aJaIg5ARM4oghkIeY5At1hqYvhFM1++VgyfDAwy3A1eBnd1xrnPyvRUaey69xoYnhEpMGFz8hUo39+jsu0pF7yz/lmcsVRdhlxtiSh2kicwVRGDntt2rcKfWtzJYZUyQKvn1Pwv3YvtK5qDYAnmmq5ofhXrrrfcl87Kb1X2U3424vGKWmrnGcMri1mED0uineT3rVrJvaaqtABr00V45CStyRJe6UCgRR3rfFCFqs1FmHMRNlXGKrlm+8L6sg9BW7kS8QL6WNxIeRUO/gFX+AJRWc0aRKQ0g2ZEiOwwNbmjFuNNWcFCahnOzIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bw0rFieP9TglgEJiPQkHGTzJjkGcjQqChjjGQBEv5dQ=;
 b=4ie/rqrNqJOuaGk+X0TXvsO7gIVhKgqH4AZIGH67kU7nrY49OO1C4K+ldJ1dtfciTt/YPrzNCHj8KoLeipA4zTHBqE0G8BOkad1a/RnkDRN1pbSki2HM3d/qnhZnLKCswE7ExAnSXGH4CqOogddj0SmBeRiraQ84ZaGEAnyZujc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Julien Grall
	<julien@xen.org>, Rahul Singh <rahul.singh@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Stewart Hildebrand
	<stewart.hildebrand@amd.com>
Subject: [PATCH v2 5/8] pci/arm: Use iommu_add_dt_pci_device()
Date: Thu, 11 May 2023 15:16:51 -0400
Message-ID: <20230511191654.400720-6-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230511191654.400720-1-stewart.hildebrand@amd.com>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT093:EE_|DS7PR12MB5911:EE_
X-MS-Office365-Filtering-Correlation-Id: aedad1e9-6edc-487b-3cca-08db5254a5bd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/JCPZQF+b+gEQ7Aq5Z2HwxiuUbrh7bQR2eRmoN4NuvXTxiIukSybeSfgepwVVoAZ7pQ5LnNIrNsHJgzWq+dko+4KsW6Om/o/WJNMYQZjaciPQM+womfyzZ8T0J0f6q/gW/byBgC0r1YdC+sEU98NT0UWt65WNe3BBcEfY/gLbNTyEuJcspJ97swiiQf6MvYVl+FpIaurGUddLx8oib/libFWq1xyz3yU3P/NCfisU4z8dij0pwI585wSi/6eb+Z+N/1niE/jCCHIjz0+srQpJ8+IG6Y/ipvPa30hH2lyPHS9ieQMJltg6RDO2IqrO//9ChlwWljO9+r7XFh73/heAAFvb9xak55cCz9aMmc+LTZJRZ36eCgrVnOPVAhKOFcwq3KNvBYqtffWt03WXsc+84W9QsedDalDMW8RySk/PAkdZXTq5S6qEc3R33c/8EssaXnPFezaF9YfewLNAfO0o+a4I/l1xn8Rwqvh5PD1KxNHirCcMWfcViEbHjrWsoCEUvXSP3PidaAGkeiWVaanaOiwdh18btELwEfWxMivJahntk1dJKqViGXwKUdkSZyl2VYCmEvqv1gaAGB8k0sTr1+yRskLkANi1tjSTbI61Jr8epxi1uhjXhgVZGgR1uneZtSaS0lAqQlkbhknhbjRPOKWdVQDNFkdyNsak6xepvH0DbwgNWAq3MMGDAATROsQTz/MPwVgrudLXfRz+rj83tPFuy03yQf/KN48g6IgupOijicMirmg4ULctbbIMkIZhm/lh204Q7GZJqvP59gqrw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(376002)(346002)(136003)(451199021)(46966006)(40470700004)(36840700001)(40480700001)(81166007)(1076003)(316002)(186003)(6916009)(4326008)(86362001)(26005)(70206006)(70586007)(40460700003)(82310400005)(54906003)(356005)(478600001)(36756003)(44832011)(47076005)(5660300002)(6666004)(966005)(7416002)(2906002)(36860700001)(8936002)(41300700001)(8676002)(82740400003)(336012)(426003)(2616005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2023 19:19:31.3648
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: aedad1e9-6edc-487b-3cca-08db5254a5bd
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT093.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB5911

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

On Arm we need to parse the DT PCI-IOMMU specifier and provide it to
the driver (to describe the relationship between PCI devices and
IOMMUs) before adding a device to it.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v1->v2:
* new patch title (was: "pci/arm: Use iommu_add_dt_pci_device() instead of arch hook")
* move iommu_add_dt_pci_device() call (and associated #ifdef) to
  pci_add_device()
* use existing call to iommu_add_device()

downstream->v1:
* rebase
* add __maybe_unused attribute to const struct domain_iommu *hd;
* Rename: s/iommu_add_pci_device/iommu_add_dt_pci_device/
* guard iommu_add_dt_pci_device call with CONFIG_HAS_DEVICE_TREE instead of
  CONFIG_ARM

(cherry picked from commit 2b9d26badab8b24b5a80d028c4499a5022817213 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
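The error-unwind ordering the patch introduces in pci_add_device() (translate the DT specifier first; on a later iommu_add_device() failure, free the fwspec before undoing the rest) can be sketched with illustrative stubs. The stub names are hypothetical stand-ins, not the real Xen helpers:

```c
#include <assert.h>

/* Illustrative stubs and state standing in for the Xen helpers. */
static int dt_ret, iommu_ret;
static int fwspec_freed, vpci_removed;

static int iommu_add_dt_pci_device_stub(void) { return dt_ret; }
static int iommu_add_device_stub(void) { return iommu_ret; }
static void iommu_fwspec_free_stub(void) { fwspec_freed = 1; }
static void vpci_remove_device_stub(void) { vpci_removed = 1; }

/*
 * Mirrors the control flow added to pci_add_device(): perform the
 * DT PCI-IOMMU translation first; if the subsequent iommu_add_device()
 * fails, release the fwspec before unwinding the vPCI state.
 */
static int add_device_flow(void)
{
    int ret = iommu_add_dt_pci_device_stub();
    if (ret < 0)
        return ret;                /* translation failed: nothing to undo */
    ret = iommu_add_device_stub();
    if (ret) {
        iommu_fwspec_free_stub();  /* undo the translation first */
        vpci_remove_device_stub(); /* then the rest of the unwind */
    }
    return ret;
}
```

Freeing the fwspec in the failure path matters because the translation allocated per-device state that nothing else would release once the device is torn back down.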
---
 xen/drivers/passthrough/pci.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index b42acb8d7c09..6dbaae682773 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -34,6 +34,11 @@
 #include <xen/vpci.h>
 #include <xen/msi.h>
 #include <xsm/xsm.h>
+
+#ifdef CONFIG_HAS_DEVICE_TREE
+#include <asm/iommu_fwspec.h>
+#endif
+
 #include "ats.h"
 
 struct pci_seg {
@@ -762,9 +767,20 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
             pdev->domain = NULL;
             goto out;
         }
+#ifdef CONFIG_HAS_DEVICE_TREE
+        ret = iommu_add_dt_pci_device(pdev);
+        if ( ret < 0 )
+        {
+            printk(XENLOG_ERR "pci-iommu translation failed: %d\n", ret);
+            goto out;
+        }
+#endif
         ret = iommu_add_device(pdev);
         if ( ret )
         {
+#ifdef CONFIG_HAS_DEVICE_TREE
+            iommu_fwspec_free(pci_to_dev(pdev));
+#endif
             vpci_remove_device(pdev);
             list_del(&pdev->domain_list);
             pdev->domain = NULL;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 11 20:07:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 20:07:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533584.830391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxCYu-0006Hg-Ce; Thu, 11 May 2023 20:06:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533584.830391; Thu, 11 May 2023 20:06:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxCYu-0006HZ-9u; Thu, 11 May 2023 20:06:52 +0000
Received: by outflank-mailman (input) for mailman id 533584;
 Thu, 11 May 2023 20:06:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxCYt-0006HP-88; Thu, 11 May 2023 20:06:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxCYt-0005LV-5p; Thu, 11 May 2023 20:06:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxCYs-0001fW-NP; Thu, 11 May 2023 20:06:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxCYs-0007Zv-Mv; Thu, 11 May 2023 20:06:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nH1P42usMCXqjenbpGUw4NqZTK9LyqenElGWzClgWf8=; b=SobmkpCickiQ1D205UL9fbEua7
	OHSyVw/Wy/kmC1lc0J8oSH+MAyQPrqzFS6pxFIrC8DQizMS1sEHTx4Czb7rT5lXKzatp02Jq7lrc4
	9dFau05Hm6IAwN8L8MvsHgi0IdYrhlIZf+CMun9oVj19KpHFS0U+ImpiEYGEYQSSiMks=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180622-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180622: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cb781ae2c98de5d5742aa0de6850f371fb25825f
X-Osstest-Versions-That:
    xen=31c65549746179e16cf3f82b694b4b1e0b7545ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 20:06:50 +0000

flight 180622 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180622/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cb781ae2c98de5d5742aa0de6850f371fb25825f
baseline version:
 xen                  31c65549746179e16cf3f82b694b4b1e0b7545ca

Last test of basis   180607  2023-05-10 19:03:28 Z    1 days
Failing since        180619  2023-05-11 12:00:27 Z    0 days    2 attempts
Testing same since   180622  2023-05-11 16:02:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   31c6554974..cb781ae2c9  cb781ae2c98de5d5742aa0de6850f371fb25825f -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 11 20:12:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 20:12:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533590.830400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxCe9-0007r9-0S; Thu, 11 May 2023 20:12:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533590.830400; Thu, 11 May 2023 20:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxCe8-0007r2-U4; Thu, 11 May 2023 20:12:16 +0000
Received: by outflank-mailman (input) for mailman id 533590;
 Thu, 11 May 2023 20:12:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hqDu=BA=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pxCe7-0007qw-Hv
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 20:12:15 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1dcc3142-f038-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 22:12:13 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-302-MgAbwRbGPoSDIs3I0ntU4g-1; Thu, 11 May 2023 16:12:07 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E91CB29DD9A1;
 Thu, 11 May 2023 20:12:06 +0000 (UTC)
Received: from localhost (unknown [10.39.194.137])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 9072FC15BA0;
 Thu, 11 May 2023 20:12:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1dcc3142-f038-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683835932;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=T2V413X5+i82I8ibkxixPdg5TkN8XN0+FCl19AWIH/c=;
	b=QRh12wCp32nzI8mjp5Z2TWz64rT1/LF1FjWFoGSnLFddpF2IAKZTYzaVf1P4xlFrxGaiQk
	4Vr4yy3XX5vMN+vETUS8nNveV4odBfe/qmIdo+di4zLvD6XH8d3CRvXU3lsI1aJN7C/A4t
	Bote9QZzlaytpNl7sqIHOfQgqHRui5w=
X-MC-Unique: MgAbwRbGPoSDIs3I0ntU4g-1
Date: Wed, 10 May 2023 17:40:00 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 16/20] virtio: make it possible to detach host
 notifier from any thread
Message-ID: <20230510214000.GC1287730@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-17-stefanha@redhat.com>
 <ZFQc89cFJuoGF+qI@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="5oxBC66SYAo2b05z"
Content-Disposition: inline
In-Reply-To: <ZFQc89cFJuoGF+qI@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8


--5oxBC66SYAo2b05z
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, May 04, 2023 at 11:00:35PM +0200, Kevin Wolf wrote:
> Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > virtio_queue_aio_detach_host_notifier() does two things:
> > 1. It removes the fd handler from the event loop.
> > 2. It processes the virtqueue one last time.
> >
> > The first step can be performed by any thread and without taking the
> > AioContext lock.
> >
> > The second step may need the AioContext lock (depending on the device
> > implementation) and runs in the thread where request processing takes
> > place. virtio-blk and virtio-scsi therefore call
> > virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in
> > the AioContext.
> >
> > Scheduling a BH is undesirable for .drained_begin() functions. The next
> > patch will introduce a .drained_begin() function that needs to call
> > virtio_queue_aio_detach_host_notifier().
>
> Why is it undesirable? In my mental model, .drained_begin() is still
> free to start as many asynchronous things as it likes. The only
> important thing to take care of is that .drained_poll() returns true as
> long as the BH (or other asynchronous operation) is still pending.
>
> Of course, your way of doing things still seems to result in simpler
> code because you don't have to deal with a BH at all if you only really
> want the first part and not the second.

I have clarified this in the commit description. We can't wait
synchronously, but we could wait asynchronously as you described. It's
simpler to split the function instead of implementing async wait using
.drained_poll().

>
> > Move the virtqueue processing out to the callers of
> > virtio_queue_aio_detach_host_notifier() so that the function can be
> > called from any thread. This is in preparation for the next patch.
>
> Did you forget to remove it in virtio_queue_aio_detach_host_notifier()?
> If it's unchanged, I don't think the AioContext requirement is lifted.

Yes! Thank you :)

>
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  hw/block/dataplane/virtio-blk.c | 2 ++
> >  hw/scsi/virtio-scsi-dataplane.c | 9 +++++++++
> >  2 files changed, 11 insertions(+)
> > diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> > index b28d81737e..bd7cc6e76b 100644
> > --- a/hw/block/dataplane/virtio-blk.c
> > +++ b/hw/block/dataplane/virtio-blk.c
> > @@ -286,8 +286,10 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)
> >
> >      for (i = 0; i < s->conf->num_queues; i++) {
> >          VirtQueue *vq = virtio_get_queue(s->vdev, i);
> > +        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
> >
> >          virtio_queue_aio_detach_host_notifier(vq, s->ctx);
> > +        virtio_queue_host_notifier_read(host_notifier);
> >      }
> >  }
>
> The existing code in virtio_queue_aio_detach_host_notifier() has a
> comment before the read:
>
>     /* Test and clear notifier before after disabling event,
>      * in case poll callback didn't have time to run. */
>
> Do we want to keep it around in the new places? (And also fix the
> "before after", I suppose, or replace it with a similar, but better
> comment that explains why we're reading here.)

I will add the comment.

Stefan

--5oxBC66SYAo2b05z
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRcDzAACgkQnKSrs4Gr
c8i8mQf5AYxgNAuDvbOU1XvzA5Q6jeekOXOQm5LcsUhp7ukc76FAwjsIrpvmfZxR
8v6pkq1ReqvvojVK6qpC9HRB/WX94Ss+128FOa7TJ0pmD9jJHJbyHNEIg722GVk4
EdSE1Or7d97FsMOt5EqwJxQ8RtXgei/lpfyx4OhKXLLYQ+puTlrq8TW8O7wzBFDe
ZR9Sr8wDH9/yseUnOl17U6fJa9cUYroFGg6oZifWuTOPZAxwwgnryMOylwsknfUg
pSTtGXkph5Ugf+KOxdrG98eXR+MCiCs/DrnIUuy0RBf7jEqcqT4STNAqIVb8TJaa
gXzKpleSUOXubjuUJPcjpkQOP5ytMQ==
=8VSX
-----END PGP SIGNATURE-----

--5oxBC66SYAo2b05z--



From xen-devel-bounces@lists.xenproject.org Thu May 11 20:23:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 20:23:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533596.830411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxCol-0001Av-3l; Thu, 11 May 2023 20:23:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533596.830411; Thu, 11 May 2023 20:23:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxCol-0001Ao-0O; Thu, 11 May 2023 20:23:15 +0000
Received: by outflank-mailman (input) for mailman id 533596;
 Thu, 11 May 2023 20:23:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1K6=BA=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pxCoj-0001Ai-QF
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 20:23:13 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a65c936c-f039-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 22:23:11 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-956ff2399c9so1748501766b.3
 for <xen-devel@lists.xenproject.org>; Thu, 11 May 2023 13:23:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a65c936c-f039-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683836591; x=1686428591;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=yZFTuu27TWBz+rQB7H9iD5+gOGvCMbVa69FBH6J8ydc=;
        b=K6lG5bVIF7lTvl75uKXWNeYLa+L2CdDvwczbtmNV+I7srel/4AJ9M0fNy02414cmGX
         GdNLppTrKltvcHa3en+BM70atbZfqYGmCOFbPBp8qOoCKOD7SoDN6IYIpwXPN9ZPwft9
         XWEQSdPsq5/uSdJM1nsv6mltT8dHxlmCfuNgNsMxNEHOsSSol6oFCC5hRF+3joH6gmv/
         btRA4D3snaWhMZ4VrDnCYGfw2LNJMDXfBFgp1Ikn6JFKWbJasE2k6+g8iRX+Nr9CSs9S
         uwftHLveBGZ8JwOgoK5c8J3hl2juPyUJ+HLhzc/MiL33PZ/nzd+q82tPM5SP+8cxNAwz
         aAjg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683836591; x=1686428591;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=yZFTuu27TWBz+rQB7H9iD5+gOGvCMbVa69FBH6J8ydc=;
        b=iO/DhtYAZWuLvSDtBXM+mBe6VNlulcWZpVmJr10QS/gQ0m07RnqeLXbB8PgITlEbxh
         DKzBV5PoM2OWzTt5erlf1jU0GcXjnmMZGs6TxvxCcTFOwzGx6lQ2yHEPRA/Z/iQOEw7q
         6wQUVZrB7F8vF8k4cMQtLnEYBnYDycy2QVEsn6SSwQNW6TfG6N+nDBhvsF1uu/QywxX0
         4owSdjlDuxu6r9Fvtqy3sGFsERCGc0OkGBIYhXMv8RN1M5oBJon+1FYwCWDsOMbZ1FX3
         p6J0wqmNL/Y0xrutUDB7DiSu7DSUdLDQHXOgcKNSnN4uNv3Unx0M2ZF6L95Hk6brgx4t
         uqDg==
X-Gm-Message-State: AC+VfDxC16WNZokn8Mn04mOeus7NhG8vg015FR1SZtqhzfjk2Lhgp7UQ
	mVdQTDiZ4sOSFKaR8coClEFZ2llNLlusIA0q5NY=
X-Google-Smtp-Source: ACHHUZ49EdpcBrqeTCP2iuMZaZ33hrIx865LuA7k1/T0GoUUgo9sxv+OgAFyiujV4RFhqcof758VZuE1dBXrwVCtCIk=
X-Received: by 2002:a17:907:7d9f:b0:94f:7d45:4312 with SMTP id
 oz31-20020a1709077d9f00b0094f7d454312mr21590799ejc.29.1683836590763; Thu, 11
 May 2023 13:23:10 -0700 (PDT)
MIME-Version: 1.0
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-8-jandryuk@gmail.com>
 <7db38688-1233-bc16-03f3-7afdc3394054@suse.com> <9cf71407-6209-296a-489a-9732b1928246@suse.com>
 <CAKf6xptOf7eSzstzjfbbSU0tMBpNjtPEwt2uNzj=2TucrgFRiA@mail.gmail.com>
 <80ccf9c7-5d22-b368-dac6-01fe6cec7add@suse.com> <CAKf6xptLpj_L_G3Qk+KA-yaTcaMHLJLL9soFP9HD6Ro+8Lk7CA@mail.gmail.com>
 <559c7f4f-113e-8e58-d4d0-3c0c36f27960@suse.com>
In-Reply-To: <559c7f4f-113e-8e58-d4d0-3c0c36f27960@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 11 May 2023 16:22:58 -0400
Message-ID: <CAKf6xpvXsiac7WqEuj_e9GnuNMMEi-DZ-P0i1Hr79s2unGQZGQ@mail.gmail.com>
Subject: Re: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to userspace
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 11, 2023 at 10:10 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 11.05.2023 15:49, Jason Andryuk wrote:
> > On Thu, May 11, 2023 at 2:21 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 10.05.2023 19:49, Jason Andryuk wrote:
> >>> On Mon, May 8, 2023 at 6:26 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>
> >>>> On 01.05.2023 21:30, Jason Andryuk wrote:
> >>>>> Extend xen_get_cpufreq_para to return hwp parameters.  These match the
> >>>>> hardware rather closely.
> >>>>>
> >>>>> We need the features bitmask to indicate fields supported by the actual
> >>>>> hardware.
> >>>>>
> >>>>> The use of uint8_t parameters matches the hardware size.  uint32_t
> >>>>> entries grows the sysctl_t past the build assertion in setup.c.  The
> >>>>> uint8_t ranges are supported across multiple generations, so hopefully
> >>>>> they won't change.
> >>>>
> >>>> Still it feels a little odd for values to be this narrow. Aiui the
> >>>> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
> >>>> used by HWP. So you could widen the union in struct
> >>>> xen_get_cpufreq_para (in a binary but not necessarily source compatible
> >>>> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
> >>>> placed scaling_cur_freq could be included as well ...
> >>>
> >>> The values are narrow, but they match the hardware.  It works for HWP,
> >>> so there is no need to change at this time AFAICT.
> >>>
> >>> Do you want me to make this change?
> >>
> >> Well, much depends on what these 8-bit values actually express (I did
> >> raise this question in one of the replies to your patches, as I wasn't
> >> able to find anything in the SDM). That'll then hopefully allow to
> >> make some educated prediction on how likely it is that a future
> >> variant of hwp would want to widen them.
> >
> > Sorry for not providing a reference earlier.  In the SDM,
> > HARDWARE-CONTROLLED PERFORMANCE STATES (HWP) section, there is this
> > second paragraph:
> > """
> > In contrast, HWP is an implementation of the ACPI-defined
> > Collaborative Processor Performance Control (CPPC), which specifies
> > that the platform enumerates a continuous, abstract unit-less,
> > performance value scale that is not tied to a specific performance
> > state / frequency by definition. While the enumerated scale is roughly
> > linear in terms of a delivered integer workload performance result,
> > the OS is required to characterize the performance value range to
> > comprehend the delivered performance for an applied workload.
> > """
> >
> > The numbers are "continuous, abstract unit-less, performance value."
> > So there isn't much to go on there, but generally, smaller numbers
> > mean slower and bigger numbers mean faster.
> >
> > Cross referencing the ACPI spec here:
> > https://uefi.org/specs/ACPI/6.5/08_Processor_Configuration_and_Control.html#collaborative-processor-performance-control
> >
> > Scrolling down you can find the register entries such as
> >
> > Highest Performance
> > Register or DWORD Attribute:  Read
> > Size:                         8-32 bits
> >
> > AMD has its own pstate implementation that is similar to HWP.  Looking
> > at the Linux support, the AMD hardware also uses 8-bit values for the
> > comparable fields:
> > https://elixir.bootlin.com/linux/latest/source/arch/x86/include/asm/msr-index.h#L612
> >
> > So Intel and AMD are 8bit for now at least.  Something could do 32bits
> > according to the ACPI spec.
> >
> > 8 bits of granularity for slow to fast seems like plenty to me.  I'm
> > not sure what one would gain from 16 or 32 bits, but I'm not designing
> > the hardware.  From the earlier xenpm output, "highest" was 49, so
> > still a decent amount of room in an 8 bit range.
>
> Hmm, thanks for the pointers. I'm still somewhat undecided. I guess I'm
> okay with you keeping things as you have them. If and when needed we can
> still rework the structure - it is possible to change it as it's (for
> the time being at least) still an unstable interface.

With an anonymous union and anonymous struct, struct
xen_get_cpufreq_para can be re-arranged and compile without any
changes to other cpufreq code.  struct xen_hwp_para becomes 10
uint32_t's.  The old scaling is 3 * uint32_t + 16 bytes
CPUFREQ_NAME_LEN + 4 * uint32_t for xen_ondemand = 11 uint32_t.  So
int32_t turbo_enabled doesn't move and it's binary compatible.

Anonymous unions and structs aren't allowed in the public header
though, right?  So that would need to change, though it doesn't seem
too bad.  There isn't too much churn.

I have no plans to tackle AMD pstate.  But having glanced at it this
morning, maybe these hwp sysctls should be renamed cppc?  AMD pstate
and HWP are both implementations of CPPC, so that could be more future
proof?  But, again, I only glanced at the AMD stuff, so there may be
other changes needed.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 11 20:54:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 20:54:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533601.830420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxDIs-0004zJ-Fd; Thu, 11 May 2023 20:54:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533601.830420; Thu, 11 May 2023 20:54:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxDIs-0004zC-Cz; Thu, 11 May 2023 20:54:22 +0000
Received: by outflank-mailman (input) for mailman id 533601;
 Thu, 11 May 2023 20:54:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hqDu=BA=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pxDIr-0004z6-El
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 20:54:21 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ff12c282-f03d-11ed-b229-6b7b168915f2;
 Thu, 11 May 2023 22:54:19 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-664-pVDtR5lIO9KK6aA01R3HOQ-1; Thu, 11 May 2023 16:54:14 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4114F101A551;
 Thu, 11 May 2023 20:54:13 +0000 (UTC)
Received: from localhost (unknown [10.39.194.137])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4CE1D40C2076;
 Thu, 11 May 2023 20:54:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff12c282-f03d-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683838457;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=lKaLumOy3GlXOhr3lyW/BQUjBQQOyF+DHc2DmfELBtc=;
	b=XasFPHD/Tgv4t1j17HtYj7vhdzDyFETtmUO227IXV8vC1aMbAt9dZXURK50dzcu7hH2Gyk
	6/c6lKZ+R7tGVJ+AuqNFvOQOG+Zahu9OInSIgDag1NYx26Hgp+Zr/c3sCbk9PQT/8YIAxh
	Kb3wt25Y5wmXUpd+4FPLc8g4Ob++cXE=
X-MC-Unique: pVDtR5lIO9KK6aA01R3HOQ-1
Date: Thu, 11 May 2023 16:54:10 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 17/20] virtio-blk: implement
 BlockDevOps->drained_begin()
Message-ID: <20230511205410.GB1425915@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-18-stefanha@redhat.com>
 <ZFQgBvWShB4NCymj@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="7sdJLAkKzW28PHgM"
Content-Disposition: inline
In-Reply-To: <ZFQgBvWShB4NCymj@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1


--7sdJLAkKzW28PHgM
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, May 04, 2023 at 11:13:42PM +0200, Kevin Wolf wrote:
> Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > Detach ioeventfds during drained sections to stop I/O submission from
> > the guest. virtio-blk is no longer reliant on aio_disable_external()
> > after this patch. This will allow us to remove the
> > aio_disable_external() API once all other code that relies on it is
> > converted.
> >
> > Take extra care to avoid attaching/detaching ioeventfds if the data
> > plane is started/stopped during a drained section. This should be rare,
> > but maybe the mirror block job can trigger it.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  hw/block/dataplane/virtio-blk.c | 17 +++++++++------
> >  hw/block/virtio-blk.c           | 38 ++++++++++++++++++++++++++++++++-
> >  2 files changed, 48 insertions(+), 7 deletions(-)
> >
> > diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> > index bd7cc6e76b..d77fc6028c 100644
> > --- a/hw/block/dataplane/virtio-blk.c
> > +++ b/hw/block/dataplane/virtio-blk.c
> > @@ -245,13 +245,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
> >      }
> >
> >      /* Get this show started by hooking up our callbacks */
> > -    aio_context_acquire(s->ctx);
> > -    for (i = 0; i < nvqs; i++) {
> > -        VirtQueue *vq = virtio_get_queue(s->vdev, i);
> > +    if (!blk_in_drain(s->conf->conf.blk)) {
> > +        aio_context_acquire(s->ctx);
> > +        for (i = 0; i < nvqs; i++) {
> > +            VirtQueue *vq = virtio_get_queue(s->vdev, i);
> >
> > -        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> > +            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> > +        }
> > +        aio_context_release(s->ctx);
> >      }
> > -    aio_context_release(s->ctx);
> >      return 0;
> >
> >    fail_aio_context:
> > @@ -317,7 +319,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
> >      trace_virtio_blk_data_plane_stop(s);
> >
> >      aio_context_acquire(s->ctx);
> > -    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
> > +
> > +    if (!blk_in_drain(s->conf->conf.blk)) {
> > +        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
> > +    }
>
> So here we actually get a semantic change: What you described as the
> second part in the previous patch, processing the virtqueue one last
> time, isn't done any more if the device is drained.
>
> If it's okay to just skip this during drain, why do we need to do it
> outside of drain?

Yes, it's safe because virtio_blk_data_plane_stop() has two cases:
1. The device is being reset. It is not necessary to process new
   requests.
2. 'stop'/'cont'. 'cont' will call virtio_blk_data_plane_start() ->
   event_notifier_set() so new requests will be processed when the guest
   resumes execution.

That's why I think this is safe and the right thing to do.

However, your question led me to a pre-existing drain bug when a vCPU
resets the device during a drained section (e.g. when a mirror block job
has started a drained section and the main loop runs until the block job
exits). New requests must not be processed by
virtio_blk_data_plane_stop() because that would violate drain semantics.

It turns out requests are still processed because of
virtio_blk_data_plane_stop() -> virtio_bus_cleanup_host_notifier() ->
virtio_queue_host_notifier_read().

I think that should be handled in a separate patch series. It's not
related to aio_disable_external().

Stefan

--7sdJLAkKzW28PHgM
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRdVfIACgkQnKSrs4Gr
c8ig3wf/Ub9knJu/SlPq8GcOBNeoLnzjSUY4Sw4CMGQZP7HTtfyEzuHDcyem8dMb
hnX70qEgVgutOHbRBZG/cdU8MiCLqWxHqLm6PBJSEzyfI4MyTH1k3K5mN7MYlxqa
1Y6JQgCWYdoy+oGU1/MzZF6hYufpInTMPHl+Wkw6K5+1R9WrZB23PYeNgr0sfhFf
ln/aPF/FnkBUioLw4V0rto053D38fnFjNG62iD2z3CFiU9Pt+Qt2Z71TbMlIYk2m
eTIAXB4LI0ub9CYXVQSykpsYRTk2KzFdGR9VyeUiz5YIoLumSPMuOW+rs6Df1lww
Oy6Pkgt/sUviNAkQUANJ3x/oqLPSfg==
=RwYy
-----END PGP SIGNATURE-----

--7sdJLAkKzW28PHgM--



From xen-devel-bounces@lists.xenproject.org Thu May 11 21:23:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 21:23:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533636.830447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxDkj-0001LB-1t; Thu, 11 May 2023 21:23:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533636.830447; Thu, 11 May 2023 21:23:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxDki-0001L4-VP; Thu, 11 May 2023 21:23:08 +0000
Received: by outflank-mailman (input) for mailman id 533636;
 Thu, 11 May 2023 21:23:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hqDu=BA=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pxDkh-0001Ky-6h
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 21:23:07 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 03777d16-f042-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 23:23:04 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-475-3xO6SSSRNHmyv7vq4yn9MA-1; Thu, 11 May 2023 17:23:01 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B1EFD101A54F;
 Thu, 11 May 2023 21:23:00 +0000 (UTC)
Received: from localhost (unknown [10.39.194.137])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C76ED1121314;
 Thu, 11 May 2023 21:22:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03777d16-f042-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683840183;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=15TTAtwTe95TssE/QqqHEBqg7HkRY26GkLUlrAlX6ug=;
	b=IBSoeGcHRY15VNV+mWpW266dTOYTC1LlX/5veW5WpPm/kbrJEvnRqKlkhfSnvT1qYBIA/7
	q9fx/J8aKiO3/LGw4OPCGiFmzS6kFDXHquCxYt5wRxXEaUDPbJ8zpWqxUUsLVfdmQxeafy
	kxV/0jyTggNl8JR7Vg20/d9oc1VHZJA=
X-MC-Unique: 3xO6SSSRNHmyv7vq4yn9MA-1
Date: Thu, 11 May 2023 17:22:57 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 17/20] virtio-blk: implement
 BlockDevOps->drained_begin()
Message-ID: <20230511212257.GC1425915@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-18-stefanha@redhat.com>
 <ZFQgBvWShB4NCymj@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Oiaj1dMwGdbo2z3S"
Content-Disposition: inline
In-Reply-To: <ZFQgBvWShB4NCymj@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3


--Oiaj1dMwGdbo2z3S
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, May 04, 2023 at 11:13:42PM +0200, Kevin Wolf wrote:
> Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > Detach ioeventfds during drained sections to stop I/O submission from
> > the guest. virtio-blk is no longer reliant on aio_disable_external()
> > after this patch. This will allow us to remove the
> > aio_disable_external() API once all other code that relies on it is
> > converted.
> >
> > Take extra care to avoid attaching/detaching ioeventfds if the data
> > plane is started/stopped during a drained section. This should be rare,
> > but maybe the mirror block job can trigger it.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  hw/block/dataplane/virtio-blk.c | 17 +++++++++------
> >  hw/block/virtio-blk.c           | 38 ++++++++++++++++++++++++++++++++-
> >  2 files changed, 48 insertions(+), 7 deletions(-)
> >
> > diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> > index bd7cc6e76b..d77fc6028c 100644
> > --- a/hw/block/dataplane/virtio-blk.c
> > +++ b/hw/block/dataplane/virtio-blk.c
> > @@ -245,13 +245,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
> >      }
> >
> >      /* Get this show started by hooking up our callbacks */
> > -    aio_context_acquire(s->ctx);
> > -    for (i = 0; i < nvqs; i++) {
> > -        VirtQueue *vq = virtio_get_queue(s->vdev, i);
> > +    if (!blk_in_drain(s->conf->conf.blk)) {
> > +        aio_context_acquire(s->ctx);
> > +        for (i = 0; i < nvqs; i++) {
> > +            VirtQueue *vq = virtio_get_queue(s->vdev, i);
> >
> > -        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> > +            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> > +        }
> > +        aio_context_release(s->ctx);
> >      }
> > -    aio_context_release(s->ctx);
> >      return 0;
> >
> >    fail_aio_context:
> > @@ -317,7 +319,10 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
> >      trace_virtio_blk_data_plane_stop(s);
> >
> >      aio_context_acquire(s->ctx);
> > -    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
> > +
> > +    if (!blk_in_drain(s->conf->conf.blk)) {
> > +        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
> > +    }
>
> So here we actually get a semantic change: What you described as the
> second part in the previous patch, processing the virtqueue one last
> time, isn't done any more if the device is drained.
>
> If it's okay to just skip this during drain, why do we need to do it
> outside of drain?

I forgot to answer why we need to process requests one last time outside
drain.

This approach comes from how vhost uses ioeventfd. When switching from
vhost back to QEMU emulation, there's a chance that a final virtqueue
kick snuck in while ioeventfd was being disabled.

This is no longer the case with dataplane (it may have been in the past).
The only dataplane state transitions are device reset and 'stop'/'cont'.
Neither of these requires QEMU to process new requests while stopping
dataplane.

My confidence is not 100%, but still pretty high, that the
virtio_queue_host_notifier_read() call could be dropped from the
dataplane code. Since I'm not 100% sure, I didn't attempt that.

Stefan

--Oiaj1dMwGdbo2z3S
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRdXLEACgkQnKSrs4Gr
c8jaKwf/fkopq41ZCcEWQFE6onRfieKs07b9qdmYnv9Ly46C4LHZe6PGnV7WnARH
Io4P4+i62gfIqhSteatqauqvYq0TKi/1WtWlzzOZhBU4J2zQGkJ29d5j7Z5YCTb7
uNx4J0SysXXmIau/T/Tg0KbpTI8srRIYOW5j7qlrEiLcINJvXaK10xjyz3ANbGkl
uh/ieO4H2Yuf4IZn51Lm7xQhUfcHCVNOS2Or0s2zrhu4wI8UpJUIrcH56yQIoXis
isUj+u0Yh4B7BMeBVYSOfV4omMO8NhBALqjpJBJ4frnh+1RZ0aeZKB0PWPafYYrc
UkYTFhgJIK8F7OpXUii69DxnOfDAmQ==
=Ls32
-----END PGP SIGNATURE-----

--Oiaj1dMwGdbo2z3S--



From xen-devel-bounces@lists.xenproject.org Thu May 11 21:24:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 21:24:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533640.830457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxDm5-0001sl-Ca; Thu, 11 May 2023 21:24:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533640.830457; Thu, 11 May 2023 21:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxDm5-0001se-9i; Thu, 11 May 2023 21:24:33 +0000
Received: by outflank-mailman (input) for mailman id 533640;
 Thu, 11 May 2023 21:24:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hqDu=BA=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pxDm3-0001sQ-Ku
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 21:24:31 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 35f1b70f-f042-11ed-8611-37d641c3527e;
 Thu, 11 May 2023 23:24:29 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-442-MZoc4K02OLOSuJDjxdWiuQ-1; Thu, 11 May 2023 17:24:24 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5E2BA101A54F;
 Thu, 11 May 2023 21:24:23 +0000 (UTC)
Received: from localhost (unknown [10.39.194.137])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B7EF32026D16;
 Thu, 11 May 2023 21:24:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35f1b70f-f042-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1683840268;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Iunu2OSen4SsTvAlKu71PnM2BjLBjTVmzME/01ZQY0I=;
	b=aTM6/LKYQB/D7hSik0h6o3A7dNeh7On3hNy0DR4Xt5CqSREDWKQT0e+Az9C2l4ldzAJ/cU
	si7CQOo8krO36juMg5Ff95oKa21n+5BEc8ju1FMYhMzQlyTxO803ZolWW0BadaYzoKGtSq
	kb/LakLtcs5oY87MpyN9G6g85r8F4QE=
X-MC-Unique: MZoc4K02OLOSuJDjxdWiuQ-1
Date: Thu, 11 May 2023 17:24:21 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Julia Suvorova <jusual@redhat.com>, xen-devel@lists.xenproject.org,
	eesposit@redhat.com,
	Richard Henderson <richard.henderson@linaro.org>,
	Fam Zheng <fam@euphon.net>, "Michael S. Tsirkin" <mst@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Peter Lieven <pl@kamp.de>, Paul Durrant <paul@xen.org>,
	"Richard W.M. Jones" <rjones@redhat.com>, qemu-block@nongnu.org,
	Stefano Garzarella <sgarzare@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	Stefan Weil <sw@weilnetz.de>, Xie Yongji <xieyongji@bytedance.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Eduardo Habkost <eduardo@habkost.net>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>
Subject: Re: [PATCH v4 20/20] aio: remove aio_disable_external() API
Message-ID: <20230511212421.GD1425915@fedora>
References: <20230425172716.1033562-1-stefanha@redhat.com>
 <20230425172716.1033562-21-stefanha@redhat.com>
 <ZFQk2TdhZ6DiwM4t@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="hOsvZoyrJ+/f+ltH"
Content-Disposition: inline
In-Reply-To: <ZFQk2TdhZ6DiwM4t@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4


--hOsvZoyrJ+/f+ltH
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, May 04, 2023 at 11:34:17PM +0200, Kevin Wolf wrote:
> Am 25.04.2023 um 19:27 hat Stefan Hajnoczi geschrieben:
> > All callers now pass is_external=false to aio_set_fd_handler() and
> > aio_set_event_notifier(). The aio_disable_external() API that
> > temporarily disables fd handlers that were registered is_external=true
> > is therefore dead code.
> >
> > Remove aio_disable_external(), aio_enable_external(), and the
> > is_external arguments to aio_set_fd_handler() and
> > aio_set_event_notifier().
> >
> > The entire test-fdmon-epoll test is removed because its sole purpose was
> > testing aio_disable_external().
> >
> > Parts of this patch were generated using the following coccinelle
> > (https://coccinelle.lip6.fr/) semantic patch:
> >
> >   @@
> >   expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
> >   @@
> >   - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
> >   + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
> >
> >   @@
> >   expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
> >   @@
> >   - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
> >   + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
> >
> > Reviewed-by: Juan Quintela <quintela@redhat.com>
> > Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> > diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c
> > index 1683aa1105..6b6a1a91f8 100644
> > --- a/util/fdmon-epoll.c
> > +++ b/util/fdmon-epoll.c
> > @@ -133,13 +128,12 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
> >          return false;
> >      }
> >
> > -    /* Do not upgrade while external clients are disabled */
> > -    if (qatomic_read(&ctx->external_disable_cnt)) {
> > -        return false;
> > -    }
> > -
> > -    if (npfd < EPOLL_ENABLE_THRESHOLD) {
> > -        return false;
> > +    if (npfd >= EPOLL_ENABLE_THRESHOLD) {
> > +        if (fdmon_epoll_try_enable(ctx)) {
> > +            return true;
> > +        } else {
> > +            fdmon_epoll_disable(ctx);
> > +        }
> >      }
> >
> >      /* The list must not change while we add fds to epoll */
>
> I don't understand this hunk. Why are you changing more than just
> deleting the external_disable_cnt check?
>
> Is this a mismerge with your own commit e62da985?

Yes, it's a mismerge. Thanks for catching that!

Stefan

--hOsvZoyrJ+/f+ltH
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRdXQUACgkQnKSrs4Gr
c8jCAwgArfC5LraGY8jVq0rjlhqjo+1t+A8cy3ybRhi7AWg9HlCzom9EknmlRBb2
IFwC4528xXBwjj+W116JsMihzmJzH1z/QhcAfG9yuv5yRM0clnkYit5zYrajIIFW
4r37+T7TPpfIDfJce3jOBhB3or0bTez/YKmkfUE3E0vXwn8W4kvnP3rHUdm0XHN3
z/oTO5uOWntYkSzEsc5mUsTaMmeYuNUJEEFeNeOn9OVioSm2cbyWjFWcv37/K5RA
DTIzgFmxW0VHQGRfp9HanBR7feVw2SkC6yLeh5s1Q6qrRVYiXYVlWzDI87H6GtHp
fFBXo0mv/tcQgnU27cWMn0RHw2xGQQ==
=yBNf
-----END PGP SIGNATURE-----

--hOsvZoyrJ+/f+ltH--



From xen-devel-bounces@lists.xenproject.org Thu May 11 21:45:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 21:45:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533646.830468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxE6F-0004ix-6S; Thu, 11 May 2023 21:45:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533646.830468; Thu, 11 May 2023 21:45:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxE6F-0004iq-32; Thu, 11 May 2023 21:45:23 +0000
Received: by outflank-mailman (input) for mailman id 533646;
 Thu, 11 May 2023 21:45:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxE6E-0004ig-7a; Thu, 11 May 2023 21:45:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxE6E-0007tW-5C; Thu, 11 May 2023 21:45:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxE6D-0003xT-Hq; Thu, 11 May 2023 21:45:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxE6D-000469-HP; Thu, 11 May 2023 21:45:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=S2WYiM5aqiMiYDzqJFfLAAq3gecgRDSk//iwXYW8NSU=; b=p0Ex51l1Z70sc4BVhYrteKV0le
	+1AiJYlhjrByATWIiN9YZYpqMJL6Rk6TnVIx9x06r1Yk4sL/vHJprBKpcY/DfN1ed3n/a4682rzyq
	jCID751cYA3bBSt6/Av/WH3eI2TvzBmB0AWjiIZP/X/tK7YQcUbU8iqxtmPMd8YTcBOc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180614-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180614: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt-qcow2:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:host-install(5):broken:regression
    linux-linus:test-amd64-amd64-xl-shadow:guest-localmigrate/x10:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=80e62bc8487b049696e67ad133c503bf7f6806f7
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 21:45:21 +0000

flight 180614 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180614/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-qcow2    <job status>                 broken
 test-armhf-armhf-libvirt-qcow2  5 host-install(5)      broken REGR. vs. 180278
 test-amd64-amd64-xl-shadow   20 guest-localmigrate/x10   fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                80e62bc8487b049696e67ad133c503bf7f6806f7
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   25 days
Failing since        180281  2023-04-17 06:24:36 Z   24 days   45 attempts
Testing same since   180614  2023-05-11 04:29:59 Z    0 days    1 attempts

------------------------------------------------------------
2365 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               broken  
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt-qcow2 broken
broken-step test-armhf-armhf-libvirt-qcow2 host-install(5)

Not pushing.

(No revision log; it would be 297803 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 11 21:55:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 21:55:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533653.830478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxEFV-0006Nb-4z; Thu, 11 May 2023 21:54:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533653.830478; Thu, 11 May 2023 21:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxEFV-0006NU-0f; Thu, 11 May 2023 21:54:57 +0000
Received: by outflank-mailman (input) for mailman id 533653;
 Thu, 11 May 2023 21:54:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxEFT-0006NK-Na; Thu, 11 May 2023 21:54:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxEFT-000833-LJ; Thu, 11 May 2023 21:54:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxEFT-00049z-8I; Thu, 11 May 2023 21:54:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxEFT-0006cA-7t; Thu, 11 May 2023 21:54:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KwKpT1slvLltQUp533FJE+gXg+YVjh57VjtcjIy52IY=; b=YEaS2H+LJQ4U4oY0Z2P/R6UrHM
	6vX08QBUX5fPyq4PQx+ZjVjP0n8ilUsVi/Dp3DorZrCx45a1ikNYo3lG35HAJppV2nFoL/HfRZeoU
	B52xzFb5Q4JB2cZ9FiEvLn9AZsh0Y4qNp8pJBifeewINNFjPj0CNM5IJOiHsoDxw9zCA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180618-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180618: regressions - trouble: blocked/broken/fail/pass/starved
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:build-armhf:host-build-prep:fail:regression
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=31c65549746179e16cf3f82b694b4b1e0b7545ca
X-Osstest-Versions-That:
    xen=31c65549746179e16cf3f82b694b4b1e0b7545ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 11 May 2023 21:54:55 +0000

flight 180618 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180618/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   5 host-build-prep          fail REGR. vs. 180609

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 180609 pass in 180618
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180609

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 180609 blocked in 180618
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail in 180609 blocked in 180618
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 180609 blocked in 180618
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 180609 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 180609 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 180609 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 180609 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 180609 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 180609 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 180609 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 180609 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 180609 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 180609 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 180609 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 180609 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 180609 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 180609 never pass
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 180609 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 180609 never pass
 test-armhf-armhf-xl         15 migrate-support-check fail in 180609 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 180609 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 180609 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 180609 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 180609 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 180609 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 180609 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 180609 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 180609 never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail in 180609 never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 180609 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 180609 never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 180609 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 180609 never pass
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 180609 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 180609 never pass
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 180609 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180609
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180609
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180609
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180609
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180609
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180609
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180609
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180609
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180609
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  31c65549746179e16cf3f82b694b4b1e0b7545ca
baseline version:
 xen                  31c65549746179e16cf3f82b694b4b1e0b7545ca

Last test of basis   180618  2023-05-11 10:33:05 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu May 11 23:23:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 11 May 2023 23:23:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533662.830488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxFcT-0008Uc-Ln; Thu, 11 May 2023 23:22:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533662.830488; Thu, 11 May 2023 23:22:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxFcT-0008UV-IK; Thu, 11 May 2023 23:22:45 +0000
Received: by outflank-mailman (input) for mailman id 533662;
 Thu, 11 May 2023 23:22:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Eoy0=BA=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pxFcS-0008UP-N2
 for xen-devel@lists.xenproject.org; Thu, 11 May 2023 23:22:44 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ba26587c-f052-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 01:22:42 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8C56065283;
 Thu, 11 May 2023 23:22:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DE4F9C433D2;
 Thu, 11 May 2023 23:22:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba26587c-f052-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683847361;
	bh=fm56nYRPn8WoeCpx9USW1lPIRzH1T19T/S8+YXGvkn0=;
	h=From:To:Cc:Subject:Date:From;
	b=HF65XH8mKAPt4SOhO1tBxNFaOBqLVkppKVQ1I1ItoLkK5SvQA959/iCwyHeKBRQUo
	 E1Quahst0Dj1SYHf9W3KHxvXrLGZPAA/lIR9V1HWiknMYBjv0aES79TgnK0hCDl/Zn
	 Du7hMk+wDTW04yNdmOE7aqoxKyirlX+LiMXsVqBz9g3yk5bX3SSwMxIzPJewA6o+Yx
	 a4ajwIOVBS4gwmQKo5pVyvKf7W5tRm8djM8t7GmW4jUeUQPhwSFw9SNIxcARJUJTeq
	 TcuDx/i+K3OQgOOB9CautTkCX9nskijwsGPII+hgyfrfQEiyCbnfUOS7JdtmWJikYU
	 O5yNwNsi4f5dw==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com,
	andrew.cooper3@citrix.com,
	roger.pau@citrix.com,
	Bertrand.Marquis@arm.com,
	julien@xen.org,
	sstabellini@kernel.org,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: [PATCH] docs/misra: adds Mandatory rules
Date: Thu, 11 May 2023 16:22:37 -0700
Message-Id: <20230511232237.3720769-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@amd.com>

Add the Mandatory rules agreed by the MISRA C working group to
docs/misra/rules.rst.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 docs/misra/rules.rst | 62 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
index 83f01462f7..d5a6ee8cb6 100644
--- a/docs/misra/rules.rst
+++ b/docs/misra/rules.rst
@@ -204,6 +204,12 @@ existing codebase are work-in-progress.
        braces
      -
 
+   * - `Rule 12.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_12_05.c>`_
+     - Mandatory
+     - The sizeof operator shall not have an operand which is a function
+       parameter declared as "array of type"
+     -
+
    * - `Rule 13.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_06.c>`_
      - Mandatory
      - The operand of the sizeof operator shall not contain any
@@ -274,3 +280,59 @@ existing codebase are work-in-progress.
        in the same file as the #if #ifdef or #ifndef directive to which
        they are related
      -
+
+   * - `Rule 21.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_13.c>`_
+     - Mandatory
+     - Any value passed to a function in <ctype.h> shall be representable as an
+       unsigned char or be the value EOF
+     -
+
+   * - `Rule 21.17 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_17.c>`_
+     - Mandatory
+     - Use of the string handling functions from <string.h> shall not result in
+       accesses beyond the bounds of the objects referenced by their pointer
+       parameters
+     -
+
+   * - `Rule 21.18 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_18.c>`_
+     - Mandatory
+     - The size_t argument passed to any function in <string.h> shall have an
+       appropriate value
+     -
+
+   * - `Rule 21.19 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_19.c>`_
+     - Mandatory
+     - The pointers returned by the Standard Library functions localeconv,
+       getenv, setlocale, or strerror shall only be used as if they have
+       pointer to const-qualified type
+     -
+
+   * - `Rule 21.20 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_20.c>`_
+     - Mandatory
+     - The pointer returned by the Standard Library functions asctime, ctime,
+       gmtime, localtime, localeconv, getenv, setlocale, or strerror shall not
+       be used following a subsequent call to the same function
+     -
+
+   * - `Rule 22.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_02.c>`_
+     - Mandatory
+     - A block of memory shall only be freed if it was allocated by means of a
+       Standard Library function
+     -
+
+   * - `Rule 22.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_04.c>`_
+     - Mandatory
+     - There shall be no attempt to write to a stream which has been opened as
+       read-only
+     -
+
+   * - `Rule 22.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_05.c>`_
+     - Mandatory
+     - A pointer to a FILE object shall not be dereferenced
+     -
+
+   * - `Rule 22.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_06.c>`_
+     - Mandatory
+     - The value of a pointer to a FILE shall not be used after the associated
+       stream has been closed
+     -
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 12 01:02:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 01:02:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533666.830498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxHAq-00022Q-84; Fri, 12 May 2023 01:02:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533666.830498; Fri, 12 May 2023 01:02:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxHAq-00022J-5I; Fri, 12 May 2023 01:02:20 +0000
Received: by outflank-mailman (input) for mailman id 533666;
 Fri, 12 May 2023 01:02:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Sm6=BB=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pxHAo-00022C-96
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 01:02:18 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a149df75-f060-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 03:02:15 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id D43995C0AC4;
 Thu, 11 May 2023 21:02:12 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Thu, 11 May 2023 21:02:12 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 11 May 2023 21:02:11 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a149df75-f060-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1683853332; x=1683939732; bh=wnH0Qc70cVHDk1ei+4Ag0/AJcDNLo1aCfp8
	Wl4W7fW0=; b=fpcPDxra+f5FvEg6yHRTxN7qAvvf/cCCwCY7vQXWfyvthOy8iCe
	Gsgf6yZfEitBG7yZwQESQ8kkgMyYSSCtmKy0P3fSjfvcMfFrrqmGen5HAQEKJTwl
	GIA4LNLeAD3uDpzbH33pLCYqJbPYhv1gDoUMKyaMPfAOye2OGJFts68+KwJBajIH
	gCxCLHMjqjmSQGfrqQyM+nAx6SAiA4e6V9QhU+Vhif/x8mMFw63jN0gpK0btG2AJ
	ZzqjTIKDr0mlJIqEIkIrEp7lR8nTH69X/RUhJHhv2BCjrFiGoO2PZTgrZrB2anGD
	gR2kSP2Oc/UHpHhLkcQEJpV031/VtgicKyg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm3; t=1683853332; x=1683939732; bh=wnH0Qc70cVHDk
	1ei+4Ag0/AJcDNLo1aCfp8Wl4W7fW0=; b=elEyfU15rmHF25LNOKMjEpvg693U2
	grJlGQUwXQGOzwy3xXFkYsMIIU4ULNEGT8+opNZIhmsrgCDkfUWjqEEDl2Pq34Nw
	lEOaEjrQ0Aik21mE0Z6HjcYGzhu/hNdlQsXpVnD3zBy9T+CkEfcfHC73O5dUZtR1
	6/7FdcNjcqy5cMsLLWDm0+Ku9AXfqZqJnxeEn1CPqWCZpc6WAtcCjHKpdZppi+ke
	xOonHzMnI05DyQ6L9cf6MIqQwhJk97o+awD5VlfY0bJDTkHS0BfhVtcxet0Jtc/x
	bj5eER6a9S0VtJxBOWbbSK3MUQovzF47QjJ9Fsk7I/DS5AiTwNaLyl/IA==
X-ME-Sender: <xms:FJBdZBGuv8ac6YEpi4_iWe6veewN1rg7t4SJyHCQ3wHHjZO-N-N71w>
    <xme:FJBdZGV_olNl5Z-n4bc6oQCBRy4iN_oGcxi0ehvwNUzftHW2NO419FECy7oRLWh7S
    ZfC0G87jnIkHA>
X-ME-Received: <xmr:FJBdZDJUIZl0nl0Mr8Zd1zRdBQcAhO2_kMs3PJwvkC2Yy3T_fBbeqACECEl108V90URI33KRfs1C7sdSeiXl6EkrFU2_v0-gJMM>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeegledgfeekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:FJBdZHEx0WN8NWfXZlqZe08Ubv5cXyWSEZqXWmgY_tX9xsKOPevUSQ>
    <xmx:FJBdZHVX7UAafUDT7JYmVM6bTvQvZxtRVA2_UNNaBrh9D6h8smdFvQ>
    <xmx:FJBdZCP3Y3j2URNLNu3r_jrqrOjdzCkPbQuqmkK3AK-3bJWc5nXWiA>
    <xmx:FJBdZJK5KOHs0lbD_riupOyZOK9lyHEGCDbokpBGJEMQwMk2gKQ-ng>
Feedback-ID: i1568416f:Fastmail
Date: Fri, 12 May 2023 03:02:07 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP)
 driver
Message-ID: <ZF2QEBZz25Bi5R0l@mail-itl>
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-5-jandryuk@gmail.com>
 <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com>
 <CAKf6xpuzk6vLjbNAHzEmVpq8sDAO8O-cRFVStQxNqyD5ERr+Yg@mail.gmail.com>
 <7921d24d-7d4d-8829-44bf-b8c2ecd001c8@suse.com>
 <CAKf6xpt0n2GO1PuqeaiWi6iOoeBt0ULoKJ4xgeiTZo2Rh67kVg@mail.gmail.com>
 <4bf60ae8-7757-7440-1a4c-95260c0f0578@suse.com>
 <CAKf6xpuZRgQSe7=ST1sa=_vNOvDeC+bnDG4deb9m=A2M5+X2Eg@mail.gmail.com>
 <e9dd85d3-0e97-cb96-561e-75b23386a7a3@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="m636AhHkrpwoB9Kn"
Content-Disposition: inline
In-Reply-To: <e9dd85d3-0e97-cb96-561e-75b23386a7a3@suse.com>


--m636AhHkrpwoB9Kn
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 12 May 2023 03:02:07 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP)
 driver

On Wed, May 10, 2023 at 04:19:57PM +0200, Jan Beulich wrote:
> On 10.05.2023 15:54, Jason Andryuk wrote:
> > On Mon, May 8, 2023 at 2:33 AM Jan Beulich <jbeulich@suse.com> wrote:
> >> On 05.05.2023 17:35, Jason Andryuk wrote:
> >>> On Fri, May 5, 2023 at 3:01 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>> The other issue is that if you select "hwp" as the governor, but HWP
> >>> hardware support is not available, then hwp_available() needs to reset
> >>> the governor back to the default.  This feels like a layering
> >>> violation.
> >>
> >> Layering violation - yes. But why would the governor need resetting in
> >> this case? If HWP was asked for but isn't available, I don't think any
> >> other cpufreq handling (and hence governor) should be put in place.
> >> And turning off cpufreq altogether (if necessary in the first place)
> >> wouldn't, to me, feel as much like a layering violation.
> >
> > My goal was for Xen to use HWP if available and fallback to the acpi
> > cpufreq driver if not.  That to me seems more user-friendly than
> > disabling cpufreq.
> >
> >             if ( hwp_available() )
> >                 ret = hwp_register_driver();
> >             else
> >                 ret = cpufreq_register_driver(&acpi_cpufreq_driver);
>
> That's fine as a (future) default, but for now using hwp requires a
> command line option, and if that option says "hwp" then it ought to
> be hwp imo.

As a downstream distribution, I'd strongly prefer to have an option that
enables HWP when present and falls back to another driver otherwise,
even if that isn't the default upstream. I can't possibly require a large
group of users (either HWP-having or HWP-not-having) to edit the Xen
cmdline to get power management working well.

If the meaning of cpufreq=hwp absolutely must include "nothing if HWP
is not available", then maybe it should be named cpufreq=try-hwp
instead, or cpufreq=prefer-hwp, or something else like this?

> > If we are setting cpufreq_opt_governor to enter hwp_available(), but
> > then HWP isn't available, it seems to me that we need to reset
> > cpufreq_opt_governor when exiting hwp_available() false.
>
> This may be necessary in the future, but shouldn't be necessary right
> now. It's not entirely clear to me how that future is going to look,
> command line option wise.
>
> Jan
>

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--m636AhHkrpwoB9Kn
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRdkBAACgkQ24/THMrX
1yx07gf/UVS7bGprG0gm02N4z509P/jwKGMtyqc/NNae+FU3szbLDCnwzLw8Umad
wx4K+UoO3I4kg60Pqczja6+AVMobjxrjGcsNONmXHnSSt9DkgKX4xeSGKS1TJcks
AODBexbk6F73OtwzoKkJpiYvF896B7BwrtVy/Qimen6QlkBzZMDlk9YJT0JdYxaj
uIgf9whC8buo2VloyqSY4bYNlnJCIhaMVvqSlyLT0RYkC1xVHNbTPvqgyPKk1N6Q
2JK1M+4DQ9s5qbWl8z20QnHc+0EstjHIdnXqvPN66OpeR+j0/VwT62O6ss2EcvdQ
KaLGPnL7V2q+VGoWjyy0nkAJRMhTJg==
=UnGx
-----END PGP SIGNATURE-----

--m636AhHkrpwoB9Kn--


From xen-devel-bounces@lists.xenproject.org Fri May 12 02:20:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 02:20:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533671.830508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxINy-0002OM-VJ; Fri, 12 May 2023 02:19:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533671.830508; Fri, 12 May 2023 02:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxINy-0002OF-Qr; Fri, 12 May 2023 02:19:58 +0000
Received: by outflank-mailman (input) for mailman id 533671;
 Fri, 12 May 2023 02:19:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yer8=BB=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pxINx-0002O9-4g
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 02:19:57 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20614.outbound.protection.outlook.com
 [2a01:111:f400:fe12::614])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7aed6275-f06b-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 04:19:53 +0200 (CEST)
Received: from AS9PR05CA0213.eurprd05.prod.outlook.com (2603:10a6:20b:494::9)
 by DU2PR08MB7344.eurprd08.prod.outlook.com (2603:10a6:10:2f3::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.22; Fri, 12 May
 2023 02:19:50 +0000
Received: from AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:494:cafe::57) by AS9PR05CA0213.outlook.office365.com
 (2603:10a6:20b:494::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.23 via Frontend
 Transport; Fri, 12 May 2023 02:19:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT031.mail.protection.outlook.com (100.127.140.84) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6387.22 via Frontend Transport; Fri, 12 May 2023 02:19:49 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Fri, 12 May 2023 02:19:49 +0000
Received: from ee0fe7d44566.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4C687C4F-BEBC-48C5-AB41-AC6C5C238C7A.1; 
 Fri, 12 May 2023 02:19:43 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ee0fe7d44566.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 12 May 2023 02:19:43 +0000
Received: from GV2PR08MB8001.eurprd08.prod.outlook.com (2603:10a6:150:a9::12)
 by DB4PR08MB8031.eurprd08.prod.outlook.com (2603:10a6:10:389::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.18; Fri, 12 May
 2023 02:19:39 +0000
Received: from GV2PR08MB8001.eurprd08.prod.outlook.com
 ([fe80::b95e:f68f:2747:5b86]) by GV2PR08MB8001.eurprd08.prod.outlook.com
 ([fe80::b95e:f68f:2747:5b86%5]) with mapi id 15.20.6387.021; Fri, 12 May 2023
 02:19:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7aed6275-f06b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h/S0Jvd4rp5+8PVqjVmxESGkBJPv236sH7HvXuetmgE=;
 b=tOtTvb7ORrAaEOC7wVijs3Klj52WI5Yc5gjMdusJkpvGzsXfuJ00vppNj2DY6mKcDdOVtqvgCEm9Nen8+mDBb4KWiL7OVS9cWyyDPR4M1sAYKdP0Isw7fwYQZCcmgaessdUp/lHbNQGFoPV3WYIdSWjOUOC6vmN9yWYeSD4/Gug=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BzB6kl1CXd9QnG2pZ2izSew89MfGLF2js8k/RtXgT7QX5iyUMFq4Kz/UP8Ao+wxwshMNDUGQwHKufgrdU26vuiwfLcnTpMwLjPY+IF2AdlvxTDPgGjP5FygXc4FwCMVTgApKc4eVGlTGeGnDfVtnd6/WEgTZ43p9xl4JpFrUW6I9mMVYENmw236vypZwyf30v/aDmzqPTy92+EezDpYvvTJUazENNksTN0eLoQ/499pFwxZwh8dEf9Ip7kks62wPoEwO94vo9MSUknvbQlVz7aqrC4Qu3cv9LGf9H8HYWGml6elYyZhLSlWNkV1o4P7eTdF+nkJ5khGPw8KCqbsNDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=h/S0Jvd4rp5+8PVqjVmxESGkBJPv236sH7HvXuetmgE=;
 b=iTWjGUMP/Vy5Y/AC0YyT77RCCqDkWWBvqEu304HcmOWVBJ8QHAqiRxWMBqWDA3lWTZ3gev2IpSsKrv+K1mpX0rjG2DDXAAtsn8CkEe7rWFAYH8k8F/RxwX7U334ijxh8lUj4N6dZK32oeC/ydVW1Eqlx2uvOrzLi8JF1UezdGCeLNqokeKimw577EsW3anqJDYohCkAJtq6i0sS4Lr55W/F58XsqCJ2iZ2PdhIXNEkQzj5bEoHQkLe22A7Y1Mq8binG3i0asNvlBtuPSCtJHFeCUl2oX+CAIODlMYJKbRdWuyBqUeTeHP+gqeQjNjqovBXFO7GaV28i+xH+6l2rq5g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h/S0Jvd4rp5+8PVqjVmxESGkBJPv236sH7HvXuetmgE=;
 b=tOtTvb7ORrAaEOC7wVijs3Klj52WI5Yc5gjMdusJkpvGzsXfuJ00vppNj2DY6mKcDdOVtqvgCEm9Nen8+mDBb4KWiL7OVS9cWyyDPR4M1sAYKdP0Isw7fwYQZCcmgaessdUp/lHbNQGFoPV3WYIdSWjOUOC6vmN9yWYeSD4/Gug=
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 1/2] xen/arm: domain_build: Propagate return code of
 map_irq_to_domain()
Thread-Topic: [PATCH v2 1/2] xen/arm: domain_build: Propagate return code of
 map_irq_to_domain()
Thread-Index: AQHZhAjs3jiwxpILn0CHXeAkKfqtgq9V5/uA
Date: Fri, 12 May 2023 02:19:38 +0000
Message-ID:
 <GV2PR08MB80014689EBF579A9288481E492759@GV2PR08MB8001.eurprd08.prod.outlook.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>
 <20230511130218.22606-2-michal.orzel@amd.com>
In-Reply-To: <20230511130218.22606-2-michal.orzel@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 753080E4E6FF1347ABECCB3E4C8836AE.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	GV2PR08MB8001:EE_|DB4PR08MB8031:EE_|AM7EUR03FT031:EE_|DU2PR08MB7344:EE_
X-MS-Office365-Filtering-Correlation-Id: 0f7f32c5-b519-4649-64c2-08db528f5cf8
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 uf2BEmjUFY2YgMztHqq3J+m+5Ie1mRsgFL2jJ2nch7HjUhIIBlHsddC2AaacQQBK+bpQ7gD06n5LJN/fDZMZyWX+lKyS4zFuOB997broYN1htkx6hjPLwZdHSk4Vh4n9/DKIHrnWk7HCjnECsWhcCQ7oGG/plXRiMUnPPMfPqRnbx415bi7D29H2tAoxbxj0Ao8fZ9IofwH7HW+4EreNsKv8bTV3jRQFG+kY8lRs8gSV+PnlcXwjlrk6L3QSJvP7MRPM9IvMOonoz5vJ3LjhV236UpxtbI7lS3ko9TWwCShORF0LFx07llJBFF1h9WNpI22j+2KBKyC9C6nVFh1Qat48OyJOzsbWMAv+OboSP22adfqzi7L0z25SxpSC/jpJ+Ig8ZPaL59J4bmI456admu4/dnqtGtniVPxYvinQWM+PADjgkqUWrc9+rtuatw0RY6NWHspsmGwKp6gqr1RQFdCuQDZyyDHenbVm+432JR6WH4bB7kj4T+egaKUy3I+7a4P/xRcipvoN8ea/W17XlY0tA6ExxOGT6nzPvD3zvqNzmCVoSu6rJCinGjijojjs0NIOWnZJJwL/UR765HBFVKCZDs+IuYXs9wieZCLtrzXhCt4VguboPJNoKoeUKEOF
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:GV2PR08MB8001.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(136003)(376002)(346002)(396003)(39860400002)(451199021)(83380400001)(52536014)(71200400001)(478600001)(33656002)(186003)(55016003)(6506007)(26005)(9686003)(7696005)(122000001)(66476007)(66556008)(64756008)(2906002)(38100700002)(76116006)(66946007)(5660300002)(86362001)(4326008)(66446008)(41300700001)(38070700005)(8676002)(8936002)(316002)(4744005)(110136005)(54906003);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB8031
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7010c85b-1677-4cd2-cbd9-08db528f5694
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GJ+Iw8xSq6hzl1JJqXwXHGTtouJcbtgejNah+2JTQhIXtxscOv8ea7/d5zUNuUfQLKMe34z9iWk6eSra25k54unTD8TSjOOASfbS6vApwDfSI0lYCCHeMyA7OWFvuPXBm7nT+Z1imvezSgFGZd3TBHdfM3Y738IR5RLPKsM0LRAPCSh9Aia24qe/QE6xpQSkQyHisNU3OBhx71o4qTpDENXR/2VbEnS5hzeFmczdwRDf0i3FK0K6UdsphvHTTcKMZw9fWot3VIOG9Sf8QJk5cBORuopn2QUTN/yiJfc1uJ0vogGlWQZ4fjEZFAvstmIQ6IjYrHYav7UeNKzGSvN4oWY/8p17zpZKbv5eMtIrSbvSNsSWmZXR0Iviyw45uQVQ++z8J5PKMj6vI+sNkHafp+kdf+L1vzDfoEOWCQ4rRd3CIW3DRJW/rDZ/t9eVqQ5lkJiJFZpVmZ1UlvihJh9A2Plb5dO9CWxRUPXnHo8hATUH1pRwYvi/hpBMQhrkcEwXOBNGY8XVqk4zrC4Hnv0XHcMTFhv+CKuXTipV3WpzzuXkrCiOMXjnKSXhntFEx46iWuebcB+oSNcpYZ9UepyUt85MrcJ2ljVn6t5JgWxqA+ue/7Va8MUUA4EFCsUDahLxe2L8D/kX2PQ2bt1Ey9tfAZ5hY0MWEuNIjK5Y3nJZHgRuhyu4H7BXaiG+cXStaJ6LwTlodnKAsqzM+pCL/cA3dQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(376002)(39860400002)(396003)(451199021)(36840700001)(46966006)(40470700004)(4744005)(83380400001)(6506007)(5660300002)(33656002)(2906002)(8676002)(82310400005)(316002)(40480700001)(7696005)(40460700003)(110136005)(52536014)(41300700001)(4326008)(478600001)(55016003)(70586007)(54906003)(70206006)(47076005)(107886003)(9686003)(8936002)(186003)(82740400003)(336012)(81166007)(356005)(36860700001)(86362001)(26005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 02:19:49.6015
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0f7f32c5-b519-4649-64c2-08db528f5cf8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB7344

Hi Michal,

> -----Original Message-----
> Subject: [PATCH v2 1/2] xen/arm: domain_build: Propagate return code of
> map_irq_to_domain()
>
> From map_dt_irq_to_domain() we are assigning a return code of
> map_irq_to_domain() to a variable without checking it for an error.
> Fix it by propagating the return code directly since this is the last
> call.
>
> Fixes: 467e5cbb2ffc ("xen: arm: consolidate mmio and irq mapping to dom0")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri May 12 02:24:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 02:24:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533675.830518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxISV-0003ms-Fa; Fri, 12 May 2023 02:24:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533675.830518; Fri, 12 May 2023 02:24:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxISV-0003ml-Cn; Fri, 12 May 2023 02:24:39 +0000
Received: by outflank-mailman (input) for mailman id 533675;
 Fri, 12 May 2023 02:24:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yer8=BB=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pxIST-0003md-Rz
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 02:24:37 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2088.outbound.protection.outlook.com [40.107.13.88])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 238cb901-f06c-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 04:24:36 +0200 (CEST)
Received: from AM7PR02CA0011.eurprd02.prod.outlook.com (2603:10a6:20b:100::21)
 by VE1PR08MB5728.eurprd08.prod.outlook.com (2603:10a6:800:1a0::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.21; Fri, 12 May
 2023 02:24:06 +0000
Received: from AM7EUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::e0) by AM7PR02CA0011.outlook.office365.com
 (2603:10a6:20b:100::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22 via Frontend
 Transport; Fri, 12 May 2023 02:24:06 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT009.mail.protection.outlook.com (100.127.140.130) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6387.22 via Frontend Transport; Fri, 12 May 2023 02:24:05 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Fri, 12 May 2023 02:24:05 +0000
Received: from beae77381aa4.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 531E9221-B98E-44A4-9282-2D50403542FA.1; 
 Fri, 12 May 2023 02:23:59 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id beae77381aa4.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 12 May 2023 02:23:59 +0000
Received: from GV2PR08MB8001.eurprd08.prod.outlook.com (2603:10a6:150:a9::12)
 by AM0PR08MB5522.eurprd08.prod.outlook.com (2603:10a6:208:18c::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.19; Fri, 12 May
 2023 02:23:58 +0000
Received: from GV2PR08MB8001.eurprd08.prod.outlook.com
 ([fe80::b95e:f68f:2747:5b86]) by GV2PR08MB8001.eurprd08.prod.outlook.com
 ([fe80::b95e:f68f:2747:5b86%5]) with mapi id 15.20.6387.021; Fri, 12 May 2023
 02:23:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 238cb901-f06c-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yb7tjz3QY1WBFKT272NHsZ3wggB2VB3aucIwPEG2eHY=;
 b=v0N7MwbPaR7dGS23JrGL8exw7Fe8HsnfLVkISfSMWY9EquAtWuiV3eDmKFe5u/Fil4bwYUbAFgNyw5MrsV7iaFXqyK1cwVO1J4v7bMiMd2KCXNwXp0aLG7Jx0xUA9LHSfSjYH6l6dKUttLv9PcJS7duqI8nTrTrsPfok5jgtc58=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fhAY4fN8vLaxY2aVHGK5tJ/kBFqWmnU4Ej3mV1ipY7sX/46+bCdiBXNVait65d21SRlv2qYAHo8eSzDLX0FNStCcVClsGlin1vp6MoB6Nkobgy0aPAOdjKiH/wDJQHc2I4S7OgPfkOQOg68esPs5NZRu+slVFQjUCrKGP9sXIBc/nDyFVEDcBX+/mfBlP/Wovmv+MYoLUi+dUk8/ohJF5gM0gPyJiSgUQbQnVIADraME9gLhF31fagac4PTrTl9hCFuf2DKCkdfbgWOLDsj8J++3fxQnXHiBO76EM1C188c/WcgHFEuInjmBhfwwI1Qt6KjjuLvHLisnFASoY5EX+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yb7tjz3QY1WBFKT272NHsZ3wggB2VB3aucIwPEG2eHY=;
 b=mQzSjswNJUHjeKRQOCG4Qs1uKpZIsl9u0PXk7T0Gv5QoZLY1w4K+lKdc5YXLPvyJFlT1Z3QY//A/sqy8Jp3Vt1apVU4JD0t/a7HEcts6qUNQGMhA9cYx5UoyrgVH0TBi6iy9SBRvnMT6YQtEwlyBKUnnt01hh+z1+0mp6e5mVZoACOzmrScJN9mLwe1pY5PrSFdgg0g2jodq9OivYIe3yvsewLUgbSrdnD/zRg3yPhh8F9N7Lf0qbG3njsXB3l2p2DOdhkDDHv/Tl+8c07EhJurOjWM3p8bbRmsmDRwGXGWlS62bgqGuwQPcF3W4ujpzcjzavErs//cDthDy349wTQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yb7tjz3QY1WBFKT272NHsZ3wggB2VB3aucIwPEG2eHY=;
 b=v0N7MwbPaR7dGS23JrGL8exw7Fe8HsnfLVkISfSMWY9EquAtWuiV3eDmKFe5u/Fil4bwYUbAFgNyw5MrsV7iaFXqyK1cwVO1J4v7bMiMd2KCXNwXp0aLG7Jx0xUA9LHSfSjYH6l6dKUttLv9PcJS7duqI8nTrTrsPfok5jgtc58=
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH v2 2/2] xen/arm: domain_build: Fix format specifiers in
 map_{dt_}irq_to_domain()
Thread-Topic: [PATCH v2 2/2] xen/arm: domain_build: Fix format specifiers in
 map_{dt_}irq_to_domain()
Thread-Index: AQHZhAjssQBtAk/jHUK6Bly1mxSXQK9V6XHg
Date: Fri, 12 May 2023 02:23:57 +0000
Message-ID:
 <GV2PR08MB80015FF39CBE6D4DBCDA358092759@GV2PR08MB8001.eurprd08.prod.outlook.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>
 <20230511130218.22606-3-michal.orzel@amd.com>
In-Reply-To: <20230511130218.22606-3-michal.orzel@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 0FEA23880DCBC94AA481157902BB48B7.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	GV2PR08MB8001:EE_|AM0PR08MB5522:EE_|AM7EUR03FT009:EE_|VE1PR08MB5728:EE_
X-MS-Office365-Filtering-Correlation-Id: 01db313a-76ea-4d11-72b2-08db528ff598
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 IhzCWRDP0ue0mkD3eev51rgFnDO4FiIcMt0vFQ42QBOCU+R1dWZ2uxvBHdjQzX8/Uct1ogRKcORE4efTH1DDnSO+ExTNIGJXy3VXYuz/3HXfLnaGKGE+MUC6CLjnD1u/1ngqUOvR3/9T3NNIDfNbEO2qofzuNH9kMDVJPVuktBo/BrORYPlR/3fvrDgD/X+R/M3QwJ4KLwVgQMZmsi2bCkwLzWNZ4QqVxM+CyTaN6vsLMpmbC/l44D5NVkGLTbfQDMFiYHkwdDhx7OErBzte2RpO0JLThOtrva4ZzDgoRdZnQKZ5GZbWO3JQVYRqN2y9s8LH5AO40zW4nZng0na9E8WBt3UgX6SefYiUFa5dWjdw6MDmBIw57SGVD25UOzybLZH2ZZZdlBu1RjAAvMbXAcilb0Icenq1IiKR7ULlHPsqCLYq0kfHZXKp6CscU/LBBxLC3I7+x7AUqKQFD2EE+3ULAET8sJZYS6TPuVSfEdSE2aTQIccrpPMa9VBejPU4OdTIADyeaZ1MzDNKcsETVNWTVCDiOhxEwBFuZU8GDQ3zQSdFGNQ85mC+MU/eRdMLSYfJMqoUnU3ajQVaustF/+wtfyzkPxy/vL9zl3qfi0aloIId2IbKc0gynKM4nLvk
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:GV2PR08MB8001.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(366004)(136003)(376002)(39860400002)(346002)(451199021)(83380400001)(2906002)(4744005)(186003)(9686003)(71200400001)(64756008)(66946007)(66556008)(41300700001)(66446008)(316002)(4326008)(54906003)(66476007)(110136005)(76116006)(5660300002)(478600001)(52536014)(6506007)(26005)(8676002)(8936002)(7696005)(55016003)(38070700005)(38100700002)(33656002)(122000001)(86362001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5522
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6c423f20-3ada-4a04-7dd2-08db528ff0c2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	025+IckRu+1n+ptMLmRoaRGNdWKcsSjnqVgY+ARfLeNdDSN/9jnm1Au01dX3SD0BY2UZRY9PVHUaddwONm5AdHQYlDyNclajZnnvsO9IGW0jNPueVcynFReTcqoyxWQvV8bCsUffey5M8XnVdyU8qWhF6rX93gPAkOxqvThwAw0T5IDYFGSG49JH635vqX9mRP+UNPyxSrhahTY3L+3XqpHtmzuqVlHe9Skrlr21dL3Lic4DsF+/ntK8xbzMUumPBlMY/75OmOADgLGcmUCMO+5lxE7aNkoC29F39LvtCkO5U2sMZSfNgpKw0BVlJVHvNlR1hnllIt2X4cBx6MSfYf1AKne2ra2PBsqzyG3CqVcYsC1tjoyzkG1N4qfPVb17JWMSMEqi84vtmmQOSmE2R61S9Rg9Xx4j/GwYIVTrXtQ5TFQZsL4iZcQ2G34Djup9zEmxPGitw39REYQxZBLaaL0QV/GCw6bo7UKo+lS/eEoqAp5RJ1G8Je5tdZLxpxKN4rXTNu+uX5wOc4476L1+u7ztC3r603WWYZqb5XOZcykRnNaZ0VfOAmim0Vo+bI+54uXnOd4Gj5MGDSXOND7EZmi5r3wOyKZo4lHZ0k0ME2WgHj74TSS0UgjfGSN3kYBma7Bsa1IpsiiHi8AWH0wztRD4OmyxzAbk1SnsZ62RFm19lfLOu1/T4wcVlzaUfui0r7jeW3sUv9ADJmGB+ZAuOSnkAeXXMca14E7t9HJl8LcA7z+VfZ4MXPm/b7tKg+gMGQqwZezp/eSSbWd0pVtg3A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ErrorRetry;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(346002)(396003)(39860400002)(451199021)(40470700004)(36840700001)(46966006)(186003)(86362001)(4744005)(83380400001)(2906002)(356005)(55016003)(81166007)(41300700001)(107886003)(26005)(47076005)(336012)(9686003)(36860700001)(6506007)(110136005)(54906003)(33656002)(40460700003)(5660300002)(8676002)(8936002)(52536014)(7696005)(82310400005)(478600001)(4326008)(316002)(82740400003)(70206006)(70586007)(40480700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 02:24:05.6641
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 01db313a-76ea-4d11-72b2-08db528ff598
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5728

Hi Michal,

> -----Original Message-----
> Subject: [PATCH v2 2/2] xen/arm: domain_build: Fix format specifiers in
> map_{dt_}irq_to_domain()
>
> IRQ is of unsigned type so %u should be used. When printing domain id,
> %pd should be the correct format to maintain the consistency.
>
> Also, wherever possible, reduce the number of splitted lines for printk().
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry


From xen-devel-bounces@lists.xenproject.org Fri May 12 02:34:02 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180621-mainreport@xen.org>
Subject: [qemu-mainline test] 180621: regressions - trouble: blocked/broken/fail/pass
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 02:33:44 +0000

flight 180621 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180621/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180610
 build-armhf                   5 host-build-prep          fail REGR. vs. 180610
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180610

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180610
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180610
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180610
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180610
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180610
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180610
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                278238505d28d292927bff7683f39fb4fbca7fd1
baseline version:
 qemuu                d530697ca20e19f7a626f4c1c8b26fccd0dc4470

Last test of basis   180610  2023-05-10 23:38:37 Z    1 days
Testing same since   180621  2023-05-11 14:27:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dr. David Alan Gilbert <dave@treblig.org>
  Jamie Iles <quic_jiles@quicinc.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lukas Straub <lukasstraub2@web.de>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

(No revision log; it would be 1008 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 12 06:29:09 2023
Message-ID: <11531470-89a5-5bf5-3c13-3cfd0d614e07@suse.com>
Date: Fri, 12 May 2023 08:28:43 +0200
Subject: Re: [PATCH v3 04/14 RESEND] cpufreq: Add Hardware P-State (HWP)
 driver
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Cc: Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-5-jandryuk@gmail.com>
 <43c519f7-577d-f657-a4b1-1a31bf7f093a@suse.com>
 <CAKf6xpuzk6vLjbNAHzEmVpq8sDAO8O-cRFVStQxNqyD5ERr+Yg@mail.gmail.com>
 <7921d24d-7d4d-8829-44bf-b8c2ecd001c8@suse.com>
 <CAKf6xpt0n2GO1PuqeaiWi6iOoeBt0ULoKJ4xgeiTZo2Rh67kVg@mail.gmail.com>
 <4bf60ae8-7757-7440-1a4c-95260c0f0578@suse.com>
 <CAKf6xpuZRgQSe7=ST1sa=_vNOvDeC+bnDG4deb9m=A2M5+X2Eg@mail.gmail.com>
 <e9dd85d3-0e97-cb96-561e-75b23386a7a3@suse.com> <ZF2QEBZz25Bi5R0l@mail-itl>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZF2QEBZz25Bi5R0l@mail-itl>

On 12.05.2023 03:02, Marek Marczykowski-Górecki wrote:
> On Wed, May 10, 2023 at 04:19:57PM +0200, Jan Beulich wrote:
>> On 10.05.2023 15:54, Jason Andryuk wrote:
>>> On Mon, May 8, 2023 at 2:33 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 05.05.2023 17:35, Jason Andryuk wrote:
>>>>> On Fri, May 5, 2023 at 3:01 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>> The other issue is that if you select "hwp" as the governor, but HWP
>>>>> hardware support is not available, then hwp_available() needs to reset
>>>>> the governor back to the default.  This feels like a layering
>>>>> violation.
>>>>
>>>> Layering violation - yes. But why would the governor need resetting in
>>>> this case? If HWP was asked for but isn't available, I don't think any
>>>> other cpufreq handling (and hence governor) should be put in place.
>>>> And turning off cpufreq altogether (if necessary in the first place)
>>>> wouldn't, to me, feel as much like a layering violation.
>>>
>>> My goal was for Xen to use HWP if available and fall back to the acpi
>>> cpufreq driver if not.  That to me seems more user-friendly than
>>> disabling cpufreq.
>>>
>>>             if ( hwp_available() )
>>>                 ret = hwp_register_driver();
>>>             else
>>>                 ret = cpufreq_register_driver(&acpi_cpufreq_driver);
>>
>> That's fine as a (future) default, but for now using hwp requires a
>> command line option, and if that option says "hwp" then it ought to
>> be hwp imo.
> 
> As a downstream distribution, I'd strongly prefer to have an option that
> would enable HWP when present and fall back to another driver otherwise,
> even if that isn't the default upstream. I can't possibly require a large
> group of users (either HWP-having or HWP-not-having) to edit the Xen
> cmdline to get power management working well.
> 
> If the meaning for cpufreq=hwp absolutely must include "nothing if HWP
> is not available", then maybe it should be named cpufreq=try-hwp
> instead, or cpufreq=prefer-hwp or something else like this?

Any new sub-option needs to fit the existing ones in its meaning. I
could see e.g. "cpufreq=xen" alone to effect what you want (once hwp
becomes available for use by default). But (for now at least) I
continue to think that a request for "hwp" ought to mean HWP.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 12 06:32:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 06:32:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533693.830548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxMKZ-00086x-5b; Fri, 12 May 2023 06:32:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533693.830548; Fri, 12 May 2023 06:32:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxMKZ-00086q-2L; Fri, 12 May 2023 06:32:43 +0000
Received: by outflank-mailman (input) for mailman id 533693;
 Fri, 12 May 2023 06:32:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pxMKX-00086k-A7
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 06:32:41 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20628.outbound.protection.outlook.com
 [2a01:111:f400:fe13::628])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb05cced-f08e-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 08:32:40 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8779.eurprd04.prod.outlook.com (2603:10a6:20b:40a::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.23; Fri, 12 May
 2023 06:32:38 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Fri, 12 May 2023
 06:32:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb05cced-f08e-11ed-b229-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <759d186d-95a4-86cc-1713-468a4f6ca4a7@suse.com>
Date: Fri, 12 May 2023 08:32:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to
 userspace
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-8-jandryuk@gmail.com>
 <7db38688-1233-bc16-03f3-7afdc3394054@suse.com>
 <9cf71407-6209-296a-489a-9732b1928246@suse.com>
 <CAKf6xptOf7eSzstzjfbbSU0tMBpNjtPEwt2uNzj=2TucrgFRiA@mail.gmail.com>
 <80ccf9c7-5d22-b368-dac6-01fe6cec7add@suse.com>
 <CAKf6xptLpj_L_G3Qk+KA-yaTcaMHLJLL9soFP9HD6Ro+8Lk7CA@mail.gmail.com>
 <559c7f4f-113e-8e58-d4d0-3c0c36f27960@suse.com>
 <CAKf6xpvXsiac7WqEuj_e9GnuNMMEi-DZ-P0i1Hr79s2unGQZGQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xpvXsiac7WqEuj_e9GnuNMMEi-DZ-P0i1Hr79s2unGQZGQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0189.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8779:EE_
X-MS-Office365-Filtering-Correlation-Id: f5fd901f-e83c-4dd4-a831-08db52b2ae59
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f5fd901f-e83c-4dd4-a831-08db52b2ae59
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 06:32:38.7369
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8779

On 11.05.2023 22:22, Jason Andryuk wrote:
> On Thu, May 11, 2023 at 10:10 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 11.05.2023 15:49, Jason Andryuk wrote:
>>> On Thu, May 11, 2023 at 2:21 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 10.05.2023 19:49, Jason Andryuk wrote:
>>>>> On Mon, May 8, 2023 at 6:26 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>
>>>>>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>>>>>> Extend xen_get_cpufreq_para to return hwp parameters.  These match the
>>>>>>> hardware rather closely.
>>>>>>>
>>>>>>> We need the features bitmask to indicate fields supported by the actual
>>>>>>> hardware.
>>>>>>>
>>>>>>> The use of uint8_t parameters matches the hardware size.  uint32_t
>>>>>>> entries grow the sysctl_t past the build assertion in setup.c.  The
>>>>>>> uint8_t ranges are supported across multiple generations, so hopefully
>>>>>>> they won't change.
>>>>>>
>>>>>> Still it feels a little odd for values to be this narrow. Aiui the
>>>>>> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
>>>>>> used by HWP. So you could widen the union in struct
>>>>>> xen_get_cpufreq_para (in a binary but not necessarily source compatible
>>>>>> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
>>>>>> placed scaling_cur_freq could be included as well ...
>>>>>
>>>>> The values are narrow, but they match the hardware.  It works for HWP,
>>>>> so there is no need to change at this time AFAICT.
>>>>>
>>>>> Do you want me to make this change?
>>>>
>>>> Well, much depends on what these 8-bit values actually express (I did
>>>> raise this question in one of the replies to your patches, as I wasn't
>>>> able to find anything in the SDM). That'll then hopefully allow us to
>>>> make some educated prediction on how likely it is that a future
>>>> variant of hwp would want to widen them.
>>>
>>> Sorry for not providing a reference earlier.  In the SDM,
>>> HARDWARE-CONTROLLED PERFORMANCE STATES (HWP) section, there is this
>>> second paragraph:
>>> """
>>> In contrast, HWP is an implementation of the ACPI-defined
>>> Collaborative Processor Performance Control (CPPC), which specifies
>>> that the platform enumerates a continuous, abstract unit-less,
>>> performance value scale that is not tied to a specific performance
>>> state / frequency by definition. While the enumerated scale is roughly
>>> linear in terms of a delivered integer workload performance result,
>>> the OS is required to characterize the performance value range to
>>> comprehend the delivered performance for an applied workload.
>>> """
>>>
>>> The numbers are "continuous, abstract unit-less, performance value."
>>> So there isn't much to go on there, but generally, smaller numbers
>>> mean slower and bigger numbers mean faster.
>>>
>>> Cross referencing the ACPI spec here:
>>> https://uefi.org/specs/ACPI/6.5/08_Processor_Configuration_and_Control.html#collaborative-processor-performance-control
>>>
>>> Scrolling down you can find the register entries such as
>>>
>>> Highest Performance
>>> Register or DWORD Attribute:  Read
>>> Size:                         8-32 bits
>>>
>>> AMD has its own pstate implementation that is similar to HWP.  Looking
>>> at the Linux support, the AMD hardware also uses 8-bit values for the
>>> comparable fields:
>>> https://elixir.bootlin.com/linux/latest/source/arch/x86/include/asm/msr-index.h#L612
>>>
>>> So Intel and AMD are 8-bit for now at least.  Something could do 32 bits
>>> according to the ACPI spec.
>>>
>>> 8 bits of granularity for slow to fast seems like plenty to me.  I'm
>>> not sure what one would gain from 16 or 32 bits, but I'm not designing
>>> the hardware.  From the earlier xenpm output, "highest" was 49, so
>>> still a decent amount of room in an 8 bit range.
>>
>> Hmm, thanks for the pointers. I'm still somewhat undecided. I guess I'm
>> okay with you keeping things as you have them. If and when needed we can
>> still rework the structure - it is possible to change it as it's (for
>> the time being at least) still an unstable interface.
> 
> With an anonymous union and anonymous struct, struct
> xen_get_cpufreq_para can be re-arranged and will compile without any
> changes to other cpufreq code.  struct xen_hwp_para becomes 10
> uint32_t's.  The old scaling is 3 * uint32_t + 16 bytes
> CPUFREQ_NAME_LEN + 4 * uint32_t for xen_ondemand = 11 uint32_t.  So
> int32_t turbo_enabled doesn't move and it's binary compatible.
> 
> Anonymous unions and structs aren't allowed in the public header
> though, right?

Correct.

>  So that would need to change, though it doesn't seem
> too bad.  There isn't too much churn.
> 
> I have no plans to tackle AMD pstate.  But having glanced at it this
> morning, maybe these hwp sysctls should be renamed cppc?  AMD pstate
> and HWP are both implementations of CPPC, so that could be more future
> proof?  But, again, I only glanced at the AMD stuff, so there may be
> other changes needed.

I consider this naming change plan plausible. If further adjustments
end up necessary for AMD, that'll be no worse (but maybe better) than
if we have to go from HWP to a more general name altogether.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 12 06:37:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 06:37:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533699.830557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxMP0-0000NK-Ps; Fri, 12 May 2023 06:37:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533699.830557; Fri, 12 May 2023 06:37:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxMP0-0000ND-NA; Fri, 12 May 2023 06:37:18 +0000
Received: by outflank-mailman (input) for mailman id 533699;
 Fri, 12 May 2023 06:37:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxMOz-0000N3-28; Fri, 12 May 2023 06:37:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxMOy-0002y8-UR; Fri, 12 May 2023 06:37:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxMOy-0006rr-Im; Fri, 12 May 2023 06:37:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxMOy-0007Qz-IL; Fri, 12 May 2023 06:37:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180627-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180627: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c08a3a96fd19ac8adc75f00d373b4a1606b26c00
X-Osstest-Versions-That:
    ovmf=0a0e60caf20ab621ee9c1fc66b82b739158c05cf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 06:37:16 +0000

flight 180627 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180627/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c08a3a96fd19ac8adc75f00d373b4a1606b26c00
baseline version:
 ovmf                 0a0e60caf20ab621ee9c1fc66b82b739158c05cf

Last test of basis   180617  2023-05-11 09:10:41 Z    0 days
Testing same since   180627  2023-05-12 04:12:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Tinh Nguyen <tinhnguyen@os.amperecomputing.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0a0e60caf2..c08a3a96fd  c08a3a96fd19ac8adc75f00d373b4a1606b26c00 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 12 06:51:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 06:51:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533705.830568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxMc3-00033N-UX; Fri, 12 May 2023 06:50:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533705.830568; Fri, 12 May 2023 06:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxMc3-00033G-Rl; Fri, 12 May 2023 06:50:47 +0000
Received: by outflank-mailman (input) for mailman id 533705;
 Fri, 12 May 2023 06:50:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pxMc2-00033A-Ku
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 06:50:46 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061c.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 508793b5-f091-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 08:50:43 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9083.eurprd04.prod.outlook.com (2603:10a6:10:2f2::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.21; Fri, 12 May
 2023 06:50:41 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Fri, 12 May 2023
 06:50:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 508793b5-f091-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <71af3a38-378c-5147-1c44-ae84993a796c@suse.com>
Date: Fri, 12 May 2023 08:50:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/vPIT: check/bound values loaded from state save
 record
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <f83213df-2433-ec51-814c-436ce5ea4967@suse.com>
 <CAKf6xpvmdVT8QWFk_V8TCoZ1YHZecTUDT3x9HuRbGmUdGYKb-Q@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xpvmdVT8QWFk_V8TCoZ1YHZecTUDT3x9HuRbGmUdGYKb-Q@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0075.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::23) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB9083:EE_
X-MS-Office365-Filtering-Correlation-Id: 3b80f94a-6dd5-46f2-48d7-08db52b533d6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3b80f94a-6dd5-46f2-48d7-08db52b533d6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 06:50:41.7047
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB9083

On 11.05.2023 19:51, Jason Andryuk wrote:
> On Thu, May 11, 2023 at 7:50 AM Jan Beulich <jbeulich@suse.com> wrote:
>> @@ -426,6 +427,33 @@ static int cf_check pit_load(struct doma
>>      }
>>
>>      /*
>> +     * Convert loaded values to be within valid range, for them to represent
>> +     * actually reachable state.  Uses of some of the values elsewhere assume
>> +     * this is the case.
>> +     */
>> +    for ( i = 0; i < ARRAY_SIZE(pit->hw.channels); ++i )
>> +    {
>> +        struct hvm_hw_pit_channel *ch = &pit->hw.channels[i];
>> +
>> +        /* pit_load_count() will convert 0 suitably back to 0x10000. */
>> +        ch->count &= 0xffff;
>> +        if ( ch->count_latched >= RW_STATE_NUM )
>> +            ch->count_latched = 0;
>> +        if ( ch->read_state >= RW_STATE_NUM )
>> +            ch->read_state = 0;
>> +        if ( ch->read_state >= RW_STATE_NUM )
>> +            ch->write_state = 0;
> 
> Should these both be write_state?

Definitely. Thanks for noticing!

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 12 07:06:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 07:06:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533709.830578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxMqa-0004p7-7P; Fri, 12 May 2023 07:05:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533709.830578; Fri, 12 May 2023 07:05:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxMqa-0004p0-45; Fri, 12 May 2023 07:05:48 +0000
Received: by outflank-mailman (input) for mailman id 533709;
 Fri, 12 May 2023 07:05:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pxMqY-0004ou-UX
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 07:05:46 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20625.outbound.protection.outlook.com
 [2a01:111:f400:7d00::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6a5c01f0-f093-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 09:05:45 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8846.eurprd04.prod.outlook.com (2603:10a6:102:20d::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Fri, 12 May
 2023 07:05:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Fri, 12 May 2023
 07:05:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a5c01f0-f093-11ed-b229-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <71df3cfb-602d-e543-33bb-02e708e85f5e@suse.com>
Date: Fri, 12 May 2023 09:05:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 3/8] iommu/arm: Introduce iommu_add_dt_pci_device API
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
 <20230511191654.400720-4-stewart.hildebrand@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230511191654.400720-4-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0187.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8846:EE_
X-MS-Office365-Filtering-Correlation-Id: 5853ef20-4cd3-4992-0933-08db52b74d96
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5853ef20-4cd3-4992-0933-08db52b74d96
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 07:05:43.8638
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8846

On 11.05.2023 21:16, Stewart Hildebrand wrote:
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -219,7 +219,8 @@ int iommu_dt_domain_init(struct domain *d);
>  int iommu_release_dt_devices(struct domain *d);
>  
>  /*
> - * Helper to add master device to the IOMMU using generic IOMMU DT bindings.
> + * Helpers to add master device to the IOMMU using generic (PCI-)IOMMU
> + * DT bindings.
>   *
>   * Return values:
>   *  0 : device is protected by an IOMMU
> @@ -228,6 +229,9 @@ int iommu_release_dt_devices(struct domain *d);
>   *      (IOMMU is not enabled/present or device is not connected to it).
>   */
>  int iommu_add_dt_device(struct dt_device_node *np);
> +#ifdef CONFIG_HAS_PCI
> +int iommu_add_dt_pci_device(struct pci_dev *pdev);
> +#endif

Is the #ifdef really necessary?
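
As a point of reference, a prototype by itself is harmless in builds
that never call the function, so a sketch like the following (with a
hypothetical stand-in for struct pci_dev and a stand-in definition so
it links) compiles fine without any guard:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in -- not the real Xen type. */
struct pci_dev { int devfn; };

/*
 * Declared unconditionally: a !CONFIG_HAS_PCI build only breaks if
 * something in it actually calls the function, not from the
 * declaration itself.
 */
int iommu_add_dt_pci_device(struct pci_dev *pdev);

/* Stand-in definition so the sketch can be exercised. */
int iommu_add_dt_pci_device(struct pci_dev *pdev)
{
    return pdev ? 0 : -22 /* -EINVAL-like */;
}
```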

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 12 07:25:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 07:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533713.830588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxN9Y-0007ZC-P1; Fri, 12 May 2023 07:25:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533713.830588; Fri, 12 May 2023 07:25:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxN9Y-0007Z5-Lm; Fri, 12 May 2023 07:25:24 +0000
Received: by outflank-mailman (input) for mailman id 533713;
 Fri, 12 May 2023 07:25:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pxN9X-0007Yz-HZ
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 07:25:23 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::60d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 27a34578-f096-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 09:25:22 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7484.eurprd04.prod.outlook.com (2603:10a6:102:8d::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.23; Fri, 12 May
 2023 07:25:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Fri, 12 May 2023
 07:25:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27a34578-f096-11ed-b229-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <61ae93e8-ac8f-b373-4fa7-0a8aeb61ef4f@suse.com>
Date: Fri, 12 May 2023 09:25:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 5/8] pci/arm: Use iommu_add_dt_pci_device()
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
 <20230511191654.400720-6-stewart.hildebrand@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230511191654.400720-6-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0159.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7484:EE_
X-MS-Office365-Filtering-Correlation-Id: 1cc5424b-23e0-49a0-581d-08db52ba0a40
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1cc5424b-23e0-49a0-581d-08db52ba0a40
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 07:25:19.3805
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7484

On 11.05.2023 21:16, Stewart Hildebrand wrote:
> @@ -762,9 +767,20 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>              pdev->domain = NULL;
>              goto out;
>          }
> +#ifdef CONFIG_HAS_DEVICE_TREE
> +        ret = iommu_add_dt_pci_device(pdev);
> +        if ( ret < 0 )
> +        {
> +            printk(XENLOG_ERR "pci-iommu translation failed: %d\n", ret);
> +            goto out;
> +        }
> +#endif
>          ret = iommu_add_device(pdev);

Hmm, am I misremembering that in the earlier patch you had #else to
invoke the alternative behavior? Now you end up calling both functions;
if that's indeed intended, this may still want doing differently.
Looking at the earlier patch introducing the function, I can't infer
though whether that's intended: iommu_add_dt_pci_device() checks that
the add_device hook is present, but then I didn't find any use of this
hook. The revlog there suggests the check might be stale.

If indeed the function does only preparatory work, I don't see why it
would need naming "iommu_..."; I'd rather consider pci_add_dt_device()
then. Plus in such a case #ifdef-ary here probably wants avoiding by
introducing a suitable no-op stub for the !HAS_DEVICE_TREE case. Then
...

>          if ( ret )
>          {
> +#ifdef CONFIG_HAS_DEVICE_TREE
> +            iommu_fwspec_free(pci_to_dev(pdev));
> +#endif

... this (which I understand is doing the corresponding cleanup) then
also wants wrapping in a suitably named tiny helper function.

But yet further I'm then no longer convinced this is the right place
for the addition. pci_add_device() is backing physdev hypercalls. It
would seem to me that the function may want invoking yet one layer
further up, or it may even want invoking from a brand new DT-specific
physdev-op. This would then leave at least the x86-only paths (invoking
pci_add_device() from outside of pci_physdev_op()) entirely alone.
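
A rough sketch of the stub approach suggested above, using the
hypothetical names pci_add_dt_device() / pci_cleanup_dt_device() and a
stand-in struct pci_dev (illustrative assumptions, not the actual Xen
API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in -- not the real Xen type. */
struct pci_dev { int handled_by_dt; };

#ifdef CONFIG_HAS_DEVICE_TREE
int pci_add_dt_device(struct pci_dev *pdev);      /* real work here */
void pci_cleanup_dt_device(struct pci_dev *pdev);
#else
/* No-op stubs keep the common caller free of #ifdef-ary. */
static inline int pci_add_dt_device(struct pci_dev *pdev)
{
    return 0;
}

static inline void pci_cleanup_dt_device(struct pci_dev *pdev)
{
}
#endif

/* The call site then reads the same in both configurations. */
static int add_device_common(struct pci_dev *pdev)
{
    int ret = pci_add_dt_device(pdev);

    if ( ret )
        return ret;
    /* ... further setup; on its failure the cleanup helper would be
     * invoked here, again without any #ifdef at the call site:
     * pci_cleanup_dt_device(pdev);
     */
    return 0;
}
```

The !HAS_DEVICE_TREE build simply gets the no-op variants, so the
caller needs no conditional compilation at all.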

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 12 07:28:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 07:28:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533719.830598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxNCu-0008MN-AS; Fri, 12 May 2023 07:28:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533719.830598; Fri, 12 May 2023 07:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxNCu-0008MG-7n; Fri, 12 May 2023 07:28:52 +0000
Received: by outflank-mailman (input) for mailman id 533719;
 Fri, 12 May 2023 07:28:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pxNCt-0008M9-8i
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 07:28:51 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20625.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a39bf61f-f096-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 09:28:50 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9574.eurprd04.prod.outlook.com (2603:10a6:20b:4fc::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6363.33; Fri, 12 May
 2023 07:28:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::8e41:82b6:a27f:2e0c%4]) with mapi id 15.20.6363.033; Fri, 12 May 2023
 07:28:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a39bf61f-f096-11ed-b229-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b1403a6b-80f8-6277-5bd0-b21a2c8f0dd9@suse.com>
Date: Fri, 12 May 2023 09:28:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [RFC PATCH v2 6/8] pci/arm: don't do iommu call for phantom
 functions
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
 <20230511191654.400720-7-stewart.hildebrand@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230511191654.400720-7-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0060.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9574:EE_
X-MS-Office365-Filtering-Correlation-Id: 2a1ff127-1e95-469c-265d-08db52ba8610
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 07:28:47.3134
 (UTC)

On 11.05.2023 21:16, Stewart Hildebrand wrote:
> It's not necessary to add/remove/assign/deassign pci phantom functions
> for the ARM SMMU drivers. All associated AXI stream IDs are added during
> the iommu call for the base PCI device/function.
> 
> However, the ARM SMMU drivers can cope with the extra/unnecessary calls just
> fine, so this patch is RFC as it's not strictly required.

Tying the skipping to IS_ENABLED(CONFIG_HAS_DEVICE_TREE) goes against
one of Julien's earlier comments, namely that DT and ACPI will want to
co-exist at some point. So I think keeping the supposedly unnecessary
calls is going to be unavoidable, unless you have a runtime property
that you could check instead.
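
To illustrate the distinction (a hypothetical sketch; none of these
names exist in Xen): an IS_ENABLED() check fixes the behavior for the
whole build, whereas a property queried on the IOMMU instance can
differ between a DT-described and an ACPI-described IOMMU in the same
binary:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-IOMMU property, invented here for illustration;
 * not an existing Xen structure or field. */
struct iommu_info {
    /* true if the driver maps all stream IDs when the base PCI
     * device/function is added, making the phantom-function calls
     * redundant (as described for the ARM SMMU drivers above). */
    bool maps_whole_device;
};

static bool skip_phantom_fn_calls(const struct iommu_info *iommu)
{
    /* Runtime property: decided per IOMMU instance rather than per
     * build, so DT- and ACPI-described IOMMUs can coexist. */
    return iommu->maps_whole_device;
}
```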

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 12 07:45:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 07:45:49 +0000
Message-ID: <7ae95b59-3c13-373d-191b-97a80bddfc4f@suse.com>
Date: Fri, 12 May 2023 09:45:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v7 3/5] xen/riscv: align __bss_start
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
 <2e9018989c628a519aadeae150786efe5e8054ab.1683824347.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2e9018989c628a519aadeae150786efe5e8054ab.1683824347.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 11.05.2023 19:09, Oleksii Kurochko wrote:
> The .bss clearing loop requires proper alignment of __bss_start.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with two remarks, though:

While probably not very important yet for RISC-V (until there is at
least enough functionality to, say, boot Dom0), you may want to get
used to adding Fixes: tags in cases like this one.

> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -137,6 +137,7 @@ SECTIONS
>      __init_end = .;
>  
>      .bss : {                     /* BSS */
> +        . = ALIGN(POINTER_ALIGN);
>          __bss_start = .;
>          *(.bss.stack_aligned)
>          . = ALIGN(PAGE_SIZE);

While independent of the change here, this ALIGN() visible in context
is unnecessary, afaict. ALIGN() generally only makes sense when
there's a linker-script-defined symbol right afterwards. Taking the
case here, any contributions to .bss.page_aligned have to specify
proper alignment themselves anyway (or else they'd be dependent upon
linking order). Just like there's (correctly) no ALIGN(STACK_SIZE)
ahead of *(.bss.stack_aligned).
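
For context, the reason __bss_start itself needs the alignment is the
word-wise clearing loop; a minimal C sketch (not the actual Xen RISC-V
code) of such a loop:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal sketch of a word-wise .bss clearing loop, as commonly done
 * in early boot code.  It stores one pointer-sized word per
 * iteration, so both boundaries must be pointer-aligned -- hence the
 * ALIGN(POINTER_ALIGN) ahead of __bss_start in the linker script. */
static void clear_bss(uintptr_t start, uintptr_t end)
{
    uintptr_t *p;

    /* The loop is only correct when both bounds are word-aligned. */
    assert(start % sizeof(uintptr_t) == 0);
    assert(end % sizeof(uintptr_t) == 0);

    for ( p = (uintptr_t *)start; p < (uintptr_t *)end; p++ )
        *p = 0;
}
```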

The change here might be a good opportunity to drop that ALIGN() at
the same time. So long as you (and the maintainers) agree, I guess
the adjustment could easily be made while committing.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 12 08:10:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 08:10:17 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180624-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180624: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 08:10:00 +0000

flight 180624 xen-unstable real [real]
flight 180630 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180624/
http://logs.test-lab.xenproject.org/osstest/logs/180630/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-examine       6 xen-install         fail pass in 180630-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180609
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180609
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180609
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180618
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180618
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180618
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 180618
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180618
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180618
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180618
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180618
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180618
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180618
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  cb781ae2c98de5d5742aa0de6850f371fb25825f
baseline version:
 xen                  31c65549746179e16cf3f82b694b4b1e0b7545ca

Last test of basis   180618  2023-05-11 10:33:05 Z    0 days
Testing same since   180624  2023-05-11 22:08:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   31c6554974..cb781ae2c9  cb781ae2c98de5d5742aa0de6850f371fb25825f -> master


From xen-devel-bounces@lists.xenproject.org Fri May 12 08:23:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 08:23:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533743.830640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxO3G-0000Hf-Gd; Fri, 12 May 2023 08:22:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533743.830640; Fri, 12 May 2023 08:22:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxO3G-0000HX-Dy; Fri, 12 May 2023 08:22:58 +0000
Received: by outflank-mailman (input) for mailman id 533743;
 Fri, 12 May 2023 08:22:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxO3F-0000HH-Dq; Fri, 12 May 2023 08:22:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxO3F-0005zV-3v; Fri, 12 May 2023 08:22:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxO3E-0001NU-Ow; Fri, 12 May 2023 08:22:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxO3E-0005nO-OT; Fri, 12 May 2023 08:22:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dWCzwaXejHc5NsZ7zpLuZDX7orr5CiUIjcQneDeaZa8=; b=ZT7L01TB2HZDQLCU9OfDl7KljY
	RgFJhr30suPzTi+YcZ3gfykvS7WxjpOxPk/XCsO7ZICbjefVak73ESm+VonQRAh9fNuvsPFewRafj
	Aw/aBynjv6rOw9DWvOdoaqQ409OoN+UjstiCNYWCmjWdH276q6XQcMr3qsAn04nFKOIU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180625-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180625: regressions - FAIL
X-Osstest-Failures:
    linux-linus:build-arm64:xen-build:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-localmigrate/x10:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cc3c44c9fda264c6d401be04e95449a57c1231c6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 08:22:56 +0000

flight 180625 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180625/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 180278
 test-amd64-amd64-dom0pvh-xl-intel 20 guest-localmigrate/x10 fail REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                cc3c44c9fda264c6d401be04e95449a57c1231c6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   25 days
Failing since        180281  2023-04-17 06:24:36 Z   25 days   46 attempts
Testing same since   180625  2023-05-11 22:11:49 Z    0 days    1 attempts

------------------------------------------------------------
2380 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 300411 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 12 08:37:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 08:37:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533749.830650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxOH9-00020W-Nf; Fri, 12 May 2023 08:37:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533749.830650; Fri, 12 May 2023 08:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxOH9-00020P-Kf; Fri, 12 May 2023 08:37:19 +0000
Received: by outflank-mailman (input) for mailman id 533749;
 Fri, 12 May 2023 08:37:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IGko=BB=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pxOH8-00020G-5I
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 08:37:18 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32e4cc00-f0a0-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 10:37:15 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id
 38308e7fff4ca-2ac7c59665bso106678061fa.3
 for <xen-devel@lists.xenproject.org>; Fri, 12 May 2023 01:37:16 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 e4-20020a2eb1c4000000b002a76c16ad65sm2707537lja.87.2023.05.12.01.37.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 12 May 2023 01:37:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32e4cc00-f0a0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683880635; x=1686472635;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=5jshfeE58bBaVwCcbNiWX9ibjxktpjo8LIbRDvC11Sw=;
        b=WXuXNqVuQJ+Nbw6ciDkDs77md+jeK5/x06xH7tSe26w0c+35bKuV+55mRR7/Q8+S/T
         8WPorP7588Qan7LvRL78G7oqN31jkYdfwgR9rQ2WMXrKq1cksipCk6keY6KK0U6PV8gB
         aBF2DwZZCXLFX9y9vOLAIYMbdwBqsqLNzY9UGTaDZ63GzwrOsTvCXmVdDGJWaxz0go85
         e1xe0LAZ1H/St+RU7MG8mfD6bHErNFNEN/pwZqzrm3sX3sEeBcua2krgi0fk+ZiKiN5L
         TRzA9FApOvQdI0d7T6zqGBbnMqbaigfhCPB6YDNZMOzob8Qa4aoBQ1GkztshgL728zm2
         G5Pw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683880635; x=1686472635;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=5jshfeE58bBaVwCcbNiWX9ibjxktpjo8LIbRDvC11Sw=;
        b=kcwxRv3oBTEbTEVpK8fQPvVVmP+nprnMK0zuvguxb9mZ8x1T0bEYystDvt4lSIUztU
         dRxVxYS36f0Z6WN2oy/nburc2WEfya9E/wZ5TaKV3SXfpo36H85ujOI442/5ZCpLrv9X
         OlVNcG4oaN7IYssmd2eqYKrC8vxB3dr2HwuzDGKOxbbC99qsXsL7FVyiomtMGtxS3lMI
         8pqCT86EO/yRKO1/YxmiQllQrXU4+5xQS3fx1I5HXq3MUCTYNS2bu9FaCMzpmFvYb4jK
         2r0LvUXSRQezODFrEKq/++Px0bgdy0h2LKxqJMaH37zwvK8mgFw65wIpvihVt6zoz4tx
         4AoA==
X-Gm-Message-State: AC+VfDwnr4zTppWRmeSz4AN9t/525oXq95FOQbNkaSBjlaLIYWyVMlfB
	/ZSglX/X84QgffKY2QHUfRU=
X-Google-Smtp-Source: ACHHUZ7kXv038ojaQs1C6D0CxaqKtlAhukqZYVIzbZoDRFkR9MyYZrEJjwBea6rRQXUPMTnkI0skPw==
X-Received: by 2002:a2e:87cc:0:b0:2a7:b168:9e87 with SMTP id v12-20020a2e87cc000000b002a7b1689e87mr3785507ljj.18.1683880635090;
        Fri, 12 May 2023 01:37:15 -0700 (PDT)
Message-ID: <921898bbda0c125a958c47a667657fc69c22b2c3.camel@gmail.com>
Subject: Re: [PATCH v7 3/5] xen/riscv: align __bss_start
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Fri, 12 May 2023 11:37:13 +0300
In-Reply-To: <7ae95b59-3c13-373d-191b-97a80bddfc4f@suse.com>
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
	 <2e9018989c628a519aadeae150786efe5e8054ab.1683824347.git.oleksii.kurochko@gmail.com>
	 <7ae95b59-3c13-373d-191b-97a80bddfc4f@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.1 (3.48.1-1.fc38) 
MIME-Version: 1.0

On Fri, 2023-05-12 at 09:45 +0200, Jan Beulich wrote:
> On 11.05.2023 19:09, Oleksii Kurochko wrote:
> > bss clear cycle requires proper alignment of __bss_start.
> >
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with two remarks, though:
>
> While probably not very important yet for RISC-V (until there is at
> least enough functionality to, say, boot Dom0), you may want to get
> used to adding Fixes: tags in cases like this one.
Got it.

>
> > --- a/xen/arch/riscv/xen.lds.S
> > +++ b/xen/arch/riscv/xen.lds.S
> > @@ -137,6 +137,7 @@ SECTIONS
> >      __init_end = .;
> >
> >      .bss : {                     /* BSS */
> > +        . = ALIGN(POINTER_ALIGN);
> >          __bss_start = .;
> >          *(.bss.stack_aligned)
> >          . = ALIGN(PAGE_SIZE);
>
> While independent of the change here, this ALIGN() visible in context
> is unnecessary, afaict. ALIGN() generally only makes sense when
> there's a linker-script-defined symbol right afterwards. Taking the
> case here, any contributions to .bss.page_aligned have to specify
> proper alignment themselves anyway (or else they'd be dependent upon
> linking order). Just like there's (correctly) no ALIGN(STACK_SIZE)
> ahead of *(.bss.stack_aligned).
It makes sense.

>
> The change here might be a good opportunity to drop that ALIGN() at
> the same time. So long as you (and the maintainers) agree, I guess
> the adjustment could easily be made while committing.
I would agree with this. Thanks.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Fri May 12 08:54:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 08:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533754.830660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxOX3-0004jV-67; Fri, 12 May 2023 08:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533754.830660; Fri, 12 May 2023 08:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxOX3-0004jO-1P; Fri, 12 May 2023 08:53:45 +0000
Received: by outflank-mailman (input) for mailman id 533754;
 Fri, 12 May 2023 08:53:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxOX2-0004jE-1c; Fri, 12 May 2023 08:53:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxOX1-0006fH-UN; Fri, 12 May 2023 08:53:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxOX1-0002Hk-KE; Fri, 12 May 2023 08:53:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxOX1-0004lT-Jp; Fri, 12 May 2023 08:53:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jb0A/CrUvyljcxoFqhu6C0WCo3Rm9S9Bd3MvaSsiteA=; b=UDvs9qD595YzADj2oDEmQP4fow
	IlTyFycXa6c6TRBKqlIq1pEyZIxaPh+1k9dQUg7thXOhJFAwp0h0BgphFc3C8Zw8KT1IQHzbUcZmv
	nwoJqsj1LUMnuOgwZsuPw/9d81VgYN2AT4MJBs5mxecW67Vr1MTFcoMKW9cLHaBXiCIw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180629-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180629: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0b37723186ec1525b6caf14b0309fb0ed04084d7
X-Osstest-Versions-That:
    ovmf=c08a3a96fd19ac8adc75f00d373b4a1606b26c00
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 08:53:43 +0000

flight 180629 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180629/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0b37723186ec1525b6caf14b0309fb0ed04084d7
baseline version:
 ovmf                 c08a3a96fd19ac8adc75f00d373b4a1606b26c00

Last test of basis   180627  2023-05-12 04:12:20 Z    0 days
Testing same since   180629  2023-05-12 06:42:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Giri Mudusuru <girim@apple.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c08a3a96fd..0b37723186  0b37723186ec1525b6caf14b0309fb0ed04084d7 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 12 10:12:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 10:12:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533761.830670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxPks-0005e7-UO; Fri, 12 May 2023 10:12:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533761.830670; Fri, 12 May 2023 10:12:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxPks-0005e0-Qw; Fri, 12 May 2023 10:12:06 +0000
Received: by outflank-mailman (input) for mailman id 533761;
 Fri, 12 May 2023 10:12:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GX4r=BB=citrix.com=prvs=48968f2ef=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pxPkr-0005du-9K
 for xen-devel@lists.xen.org; Fri, 12 May 2023 10:12:05 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6ed7dab0-f0ad-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 12:12:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ed7dab0-f0ad-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683886323;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=FRXaMTqIjdb7DUaJ81ZOK3/3JsFUFm6MFLGfquu1aWs=;
  b=dyT5aRh0Zmr7u5QpM+vCBv9CMet8IHwm/c1xaBioagDI4BafSp+r/cKO
   l70+gKg3IMlMSQpvYOuF1JCzTFS6nIujVU3bGVaqGIRd8PV7d9Wnk8mwU
   Qt2nUfW0UNgnF6zby1+bicKRD1OfGcbW2dQnwKuFDchyzjNoyOu0WfpKd
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 111253143
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:Ykh3xaPoGx39LGHvrR31kMFynXyQoLVcMsEvi/4bfWQNrUolhGADn
 WEdXmCGPv7ZMTPxfY90O9629RxTv8DXytJkTQto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tF5wZmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0sFXHV9O3
 vk4EjdOPknEuLqZy4OaT+Y506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLo3mvuogX/uNSVVsluPqYI84nTJzRw327/oWDbQUoXTHJ8IxR/E/
 Qoq+Uz1PCsLEuWS1gGg3TXrwdLzoBu8QK8NQejQGvlC3wTImz175ActfV6yvfm4h1P4Q9VeM
 U0Z4AIqrK477kvtRd74NzWxpHOU+BQRXdxdHsU+6QeE0K2S5ByWbkADSjNCc8A3r88eSjkj1
 1vPlNTsbRRmqLCPQGiR3quVpzi1fyMSKAcqaDUFTk0e6NnipIUyiB3nStdlGbSyyNrvFlnY2
 CyQpTQ5nPAfgNAj0L3++VHcnynqopnPRxQyoALNUQqN/g5/IYKoeYGswVza9upbapaUSB+Gp
 ndss9af9u0VDdeOiSmEWs0JHbeg/fHDNyfT6XZtEIMm7C+F4GO4cMZb5zQWDFloNM0JfyOvb
 1LSpR9W+LdXPX2jd6gxZJi+Y+wo0KzhGNLNRv3SKN1UbfBMmBSvpX80IxTKhia0zRZqyPtkU
 XuGTSqyJVE6FZpn5z+WfeBegeZs+XgRzlLyQJ+umnxLzoGiTHKSTL4ENn6HYeY48L6IrW3pz
 jpPCyeZ404BCbOjO0E75aZWdAlXdiZjWfgavuQNLoa+zhxa9HbN4hM76ZcoYMRbkqtcjY8kF
 VntCxYDmDITaZAqQDhmi0yPipu1Bf6TTlphZ0TA2GpEPFB9CbtDFI9FK/MKkUAPrYSPN8JcQ
 fgfYNmnCf9SUDnB8Dl1RcCj/NcyL0zx1VzXYnfNjN0Dk3lIGWT0FiLMJFOzpEHi8ALt3SfBn
 1FQ/lyCGsdSL+iTJM3XdOiu3zuMgJTpo8orBxGgCoAKKC3RHH1Cd3SZYgkff5tddn0uB1Kyi
 26rPPvvjbCT/tZlq4WX3fzsQkXAO7IWI3e21lLztd6eXRQ2NEL/qWOceI5kpQzgaV4=
IronPort-HdrOrdr: A9a23:dPyVF6iUYJjeFp8nvvzLJeCgvnBQXiwji2hC6mlwRA09TyX5ra
 2TdTogtSMc6QxhPk3I/OrrBEDuexzhHPJOj7X5eI3SPjUO21HYS72Kj7GSoAEIcheWnoJgPO
 VbAs1D4bXLZmSS5vyKhDVQfexA/DGGmprY+ts3zR1WPH9Xg3cL1XYJNu6ZeHcGNDWvHfACZe
 OhDlIsnUvcRZwQBP7LfkUtbqz4iPDgsonpWhICDw5P0njzsdv5gISKaCRxx30lIkly/Ys=
X-Talos-CUID: =?us-ascii?q?9a23=3A4HSYo2oyUOCjk1VXCVX/+D3mUecFbkPd53LwGUj?=
 =?us-ascii?q?7WT8ud+W6Qm6x/awxxg=3D=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3AFPlHCQ4wMPQN6ErafO9nyV++xoxk3Z6/LnkAjq4?=
 =?us-ascii?q?jlI7DchFMBg/alBmoF9o=3D?=
X-IronPort-AV: E=Sophos;i="5.99,269,1677560400"; 
   d="scan'208";a="111253143"
Date: Fri, 12 May 2023 11:11:46 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
CC: <xen-devel@lists.xen.org>, Juergen Gross <jgross@suse.com>, Julien Grall
	<julien@xen.org>, Vincent Guittot <vincent.guittot@linaro.org>,
	<stratos-dev@op-lists.linaro.org>, Alex =?iso-8859-1?Q?Benn=E9e?=
	<alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, Erik Schilling
	<erik.schilling@linaro.org>
Subject: Re: [PATCH V2 1/2] libxl: virtio: Remove unused frontend nodes
Message-ID: <d48500d8-344b-4ad6-af23-dacad8fec2b2@perard>
References: <782a7b3f54c36a3930a031647f6778e8dd02131d.1683791298.git.viresh.kumar@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <782a7b3f54c36a3930a031647f6778e8dd02131d.1683791298.git.viresh.kumar@linaro.org>

On Thu, May 11, 2023 at 01:20:42PM +0530, Viresh Kumar wrote:
> The libxl_virtio file works for only PV devices and the nodes under the

Well, VirtIO devices are a kind of PV device, yes, just as there are Xen
PV devices. So this explanation doesn't really make much sense.

> frontend path are not used by anyone. Remove them.

Instead of describing what isn't used, it might be better to describe
what we try to achieve. Something like:

    Only the VirtIO backend will watch xenstore to find out when a new
    instance needs to be created for a guest, and read the parameters
    from there. VirtIO frontends are purely virtio, so they will not do
    anything with the xenstore nodes. The nodes can therefore be removed.


> While at it, also add a comment to the file describing what devices this
> file is used for.
> 
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> V2: New patch.
> 
>  tools/libs/light/libxl_virtio.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c
> index faada49e184e..eadcb7124c3f 100644
> --- a/tools/libs/light/libxl_virtio.c
> +++ b/tools/libs/light/libxl_virtio.c
> @@ -10,6 +10,9 @@
>   * but WITHOUT ANY WARRANTY; without even the implied warranty of
>   * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>   * GNU Lesser General Public License for more details.
> + *
> + * This is used for PV (paravirtualized) devices only and the frontend isn't
> + * aware of the xenstore.

It might be more common to put this kind of comment at the top, that is,
just before the "Copyright" line.

Also, the same remark as for the patch description applies: VirtIO
devices are PV devices, so the comment is unnecessary. What is less
obvious is why there is any xenstore interaction with a VirtIO device at
all.

Maybe a better description for the source file would be:

    Set up the VirtIO backend. This is intended to interact with a
    VirtIO backend that is watching xenstore, and to create new VirtIO
    devices with the parameters found in xenstore. (VirtIO frontends
    don't interact with xenstore.)


Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 12 10:34:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 10:34:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533767.830679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQ6G-0008TZ-Qe; Fri, 12 May 2023 10:34:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533767.830679; Fri, 12 May 2023 10:34:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQ6G-0008TS-Ns; Fri, 12 May 2023 10:34:12 +0000
Received: by outflank-mailman (input) for mailman id 533767;
 Fri, 12 May 2023 10:34:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxQ6F-0008TI-5U; Fri, 12 May 2023 10:34:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxQ6F-0000T3-3q; Fri, 12 May 2023 10:34:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxQ6E-0000oe-KV; Fri, 12 May 2023 10:34:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxQ6E-0006km-Ip; Fri, 12 May 2023 10:34:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gYNAQrw43SC2v1Ts3KzWZTDz98Pdru+tNks+Ft57XJo=; b=afUJJEgkzqY3+jyZbjQUa+7248
	Qnas4VeDeaBmmHfe3UAXgzQqdZWIGsahKod1LJgSHYH9b9u2vQMa2tD9V8nII8Cpm4dfOLD43WHR/
	xH8wAl013t+NoE0pjGq79zhpAMHshB/Fj89sZmeHWiNqRlOKwQNZKuu8/5JUfeMDoKWc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180631-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180631: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4c507d8a6b6e8be90881a335b0a66eb28e0f7737
X-Osstest-Versions-That:
    xen=cb781ae2c98de5d5742aa0de6850f371fb25825f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 10:34:10 +0000

flight 180631 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180631/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4c507d8a6b6e8be90881a335b0a66eb28e0f7737
baseline version:
 xen                  cb781ae2c98de5d5742aa0de6850f371fb25825f

Last test of basis   180622  2023-05-11 16:02:01 Z    0 days
Testing same since   180631  2023-05-12 08:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cb781ae2c9..4c507d8a6b  4c507d8a6b6e8be90881a335b0a66eb28e0f7737 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 12 10:44:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 10:44:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533773.830690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQFs-0001kO-Nn; Fri, 12 May 2023 10:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533773.830690; Fri, 12 May 2023 10:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQFs-0001kH-Kw; Fri, 12 May 2023 10:44:08 +0000
Received: by outflank-mailman (input) for mailman id 533773;
 Fri, 12 May 2023 10:44:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GX4r=BB=citrix.com=prvs=48968f2ef=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pxQFr-0001kB-3P
 for xen-devel@lists.xen.org; Fri, 12 May 2023 10:44:07 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e8c47ff8-f0b1-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 12:44:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8c47ff8-f0b1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683888244;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=Hz4U37AppMR36JTCO7x5tMz+33q6Ugp3iE+r7h2QfLY=;
  b=D2LFzzX6qhzqB3VcP77WnIQjYbfsJL0DpbGcjXhd5S4D5fIyeQaLT0AA
   hx6HwSzsNcENpI5vQc1vdU24TL2p2U9Bmw/48SW8NgV0SE5FOHuErQQR8
   dsD24QFGPcVAdPF1h/s56E+Y1Juw72uUJAKcvUoM0jKEgedudBwo8Ib+F
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108682800
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:d8G4HqyvEZQv4kFfAA96t+dnwSrEfRIJ4+MujC+fZmUNrF6WrkUPy
 2AZX2rQM/iMYjOhKtoiPYiypBsP6MLUnNJrHQNsqSAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRjPKoT5jcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KVgX3
 +I7AzMhVzuOvPu30IOHQ9E3hct2eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+hgGX/dDtJ7kqYv6Mt70DYzRBr0airO93QEjCPbZwNzhjI/
 DKepwwVBDkVbIGR+yTa9Umhi8LljzL8CZgQDp+3o6sCbFq7mTVIVUx+uUGAiee4kEOlW5RcN
 kkd4AIqrK477kvtScPyNzWorXjBshMCVt54F+wh9BrL2qfSpQGDCQAsTDFbb8c9nNQrXjFs3
 ViM9/vrGDhuvbu9WX+bsLCOoluaJykTJmIEeWkLUAoZ/97/iIUyiBvVSZBkCqHdpsbpAzjsx
 CvPoCUgr7ILyMoKzLmgu1TGhTu2od7OVAFdzgzTU3Lj5A5/YoOoT4ip71HB6rBHNonxZlyIo
 HgFltXY9OcPF5CAjgSJQeMEBrbv7PGAWBXbhVNsBIUw7DSF9HuqfIQW6zZ7TG9kKMcHPyTiY
 E7XvQJX67dXPX2jd6gxZJi+Y+wj1aX6HM7pfuzVZNFJJJN2cWe6EDpGPBDKmTq3yQ51zP95Y
 M3AGSqxMZoEIZ0+5iSVbOQx6JQm/Tk/1VLvTKigzBvyhNJye0WpYbsCNVKPaMUw46WFvBjZ/
 r5jCiea9/lMeLagO3eKqOb/OXhPdCFmXs6u96S7Y8bZemJb9Hcd5+g9KF/LU6hshOxrm+jB5
 RlRsWcImQOk1RUrxehnA02PiY8Dv74l9RrX3gR2Zz5EPkTPhq7xhJrzj7NtIdEaGBVLlJaYt
 cUtdcSaGehoQT/a4TkbZpSVhNU8JE/73lrUb3T8PWVXk3tcq+vhpLfZkvbHrnFSXkJbS+Nky
 1Ff6u8racVaHFkzZConQPmu00mwrRAgpQ6GZGOReoM7UBy1oOBXx9nZ0qdfzzckdU+SmVN3F
 m++XX8lmAU6i9ZrrYCZ3vza99vB/ikXNhMyIlQ3JI2ebUHylldPC6caOApUVVgxjF/JxZg=
IronPort-HdrOrdr: A9a23:u6/LXK/D4XnzVmvIvDtuk+DYI+orL9Y04lQ7vn2YSXRuHPBws/
 re+MjztCWE7Qr5N0tMpTntAsW9qDbnhPlICOoqTNWftWvd2FdARbsKheCJ/9SjIVycygc079
 YHT0EUMrzN5DZB4vrH3A==
X-Talos-CUID: =?us-ascii?q?9a23=3AlJEz+GnEI/a3hBbfoa1r6fSYNFrXOUSE5mnzCEy?=
 =?us-ascii?q?UME1kc6SUWW6s8Zs/kMU7zg=3D=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3A/niYKQ1PxPS0WB6E0INSG6KrgjUj7PzzBnITzpI?=
 =?us-ascii?q?9tsSpHz5hPGaHji6VTdpy?=
X-IronPort-AV: E=Sophos;i="5.99,269,1677560400"; 
   d="scan'208";a="108682800"
Date: Fri, 12 May 2023 11:43:51 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
CC: <xen-devel@lists.xen.org>, Juergen Gross <jgross@suse.com>, Julien Grall
	<julien@xen.org>, Vincent Guittot <vincent.guittot@linaro.org>,
	<stratos-dev@op-lists.linaro.org>, Alex =?iso-8859-1?Q?Benn=E9e?=
	<alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, Erik Schilling
	<erik.schilling@linaro.org>
Subject: Re: [PATCH V2 2/2] libxl: arm: Add grant_usage parameter for virtio
 devices
Message-ID: <5dc217d6-ca8f-4c5f-ad7c-2ab30d6647bd@perard>
References: <782a7b3f54c36a3930a031647f6778e8dd02131d.1683791298.git.viresh.kumar@linaro.org>
 <ccf5b1402fb7156be0ef33b44f7b114efbe76319.1683791298.git.viresh.kumar@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <ccf5b1402fb7156be0ef33b44f7b114efbe76319.1683791298.git.viresh.kumar@linaro.org>

On Thu, May 11, 2023 at 01:20:43PM +0530, Viresh Kumar wrote:
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 24ac92718288..0405f6efe62a 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -1619,6 +1619,18 @@ hexadecimal format, without the "0x" prefix and all in lower case, like
>  Specifies the transport mechanism for the Virtio device, only "mmio" is
>  supported for now.
>  
> +=item B<grant_usage=STRING>
> +
> +Specifies the grant usage details for the Virtio device. This can be set to
> +following values:
> +
> +- "default": The default grant setting will be used, enable grants if
> +  backend-domid != 0.

I don't think this "default" setting is useful. We could instead just
describe what the default is when the "grant_usage" setting is missing
from the configuration.

> +- "enabled": The grants are always enabled.
> +
> +- "disabled": The grants are always disabled.

> diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
> index c10292e0d7e3..17228817f9b7 100644
> --- a/tools/libs/light/libxl_types.idl
> +++ b/tools/libs/light/libxl_types.idl
> @@ -283,6 +283,12 @@ libxl_virtio_transport = Enumeration("virtio_transport", [
>      (1, "MMIO"),
>      ])
>  
> +libxl_virtio_grant_usage = Enumeration("virtio_grant_usage", [
> +    (0, "DEFAULT"),
> +    (1, "DISABLED"),
> +    (2, "ENABLED"),

libxl already provides this type; it's called "libxl_defbool". It can be
set to "default", "false" or "true".
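For reference, the tristate pattern behind libxl_defbool can be sketched like this. This is a self-contained illustration, not the real libxl code: the actual type lives in libxl's headers, and the `defbool_*` names and the struct layout here are illustrative stand-ins. The idea is simply an int where 0 means "default", negative means false and positive means true:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for libxl's tristate bool. */
typedef struct { int val; } defbool;

static void defbool_unset(defbool *db)       { db->val = 0; }
static void defbool_set(defbool *db, bool b) { db->val = b ? 1 : -1; }
static bool defbool_is_default(defbool db)   { return db.val == 0; }

/* Resolve "default" to a concrete value; this mirrors what a
 * *_setdefault phase does before the value is consumed. */
static void defbool_setdefault(defbool *db, bool b)
{
    if (defbool_is_default(*db))
        defbool_set(db, b);
}

static bool defbool_val(defbool db)
{
    assert(!defbool_is_default(db)); /* must be resolved first */
    return db.val > 0;
}
```

The point of the tristate is that "value not chosen by the user" is representable, so the default policy can be applied in exactly one place rather than re-derived by every consumer.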



> diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
> index 97c80d7ed0fa..9cd7dbef0237 100644
> --- a/tools/libs/light/libxl_arm.c
> +++ b/tools/libs/light/libxl_arm.c
> @@ -1363,22 +1365,29 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
>                      iommu_needed = true;
>  
>                  FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
> -                                           disk->backend_domid) );
> +                                           disk->backend_domid,
> +                                           disk->backend_domid != LIBXL_TOOLSTACK_DOMID) );
>              }
>          }
>  
>          for (i = 0; i < d_config->num_virtios; i++) {
>              libxl_device_virtio *virtio = &d_config->virtios[i];
> +            bool use_grant = false;
>  
>              if (virtio->transport != LIBXL_VIRTIO_TRANSPORT_MMIO)
>                  continue;
>  
> -            if (virtio->backend_domid != LIBXL_TOOLSTACK_DOMID)
> +            if ((virtio->grant_usage == LIBXL_VIRTIO_GRANT_USAGE_ENABLED) ||
> +                ((virtio->grant_usage == LIBXL_VIRTIO_GRANT_USAGE_DEFAULT) &&
> +                 (virtio->backend_domid != LIBXL_TOOLSTACK_DOMID))) {

I think libxl can select what the default value should be replaced with
before we start to set up the guest. There's a *_setdefault() phase
where we set the correct value when a configuration value hasn't been
set and thus a default value is used. I think this can be done in
    libxl__device_virtio_setdefault().
After that, virtio->grant_usage will be true or false, and that's the
value that should be given to the virtio backend via xenstore.
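The flow described above, resolving the tristate once in a setdefault step so that later consumers only ever see a concrete value, can be sketched as follows. This is a hedged, self-contained illustration: `virtio_setdefault`, `use_grant` and `TOOLSTACK_DOMID` are stand-in names, not the real libxl functions or constants:

```c
#include <assert.h>
#include <stdbool.h>

#define TOOLSTACK_DOMID 0   /* stand-in for LIBXL_TOOLSTACK_DOMID */

enum grant_usage { GRANT_DEFAULT, GRANT_DISABLED, GRANT_ENABLED };

struct virtio_dev {
    int backend_domid;
    enum grant_usage grant_usage;
};

/* Illustrative *_setdefault step: after this runs, grant_usage is
 * never GRANT_DEFAULT, so later code (device-tree generation,
 * xenstore writes) only sees a concrete enabled/disabled value. */
static void virtio_setdefault(struct virtio_dev *v)
{
    if (v->grant_usage == GRANT_DEFAULT)
        v->grant_usage = (v->backend_domid != TOOLSTACK_DOMID)
                       ? GRANT_ENABLED : GRANT_DISABLED;
}

/* Consumers no longer need the three-way test from the patch. */
static bool use_grant(const struct virtio_dev *v)
{
    assert(v->grant_usage != GRANT_DEFAULT);
    return v->grant_usage == GRANT_ENABLED;
}
```

With this shape, the string written to xenstore is always "enabled" or "disabled", and the backend never has to reimplement the default policy.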

> diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c
> index eadcb7124c3f..0a0fae967a0f 100644
> --- a/tools/libs/light/libxl_virtio.c
> +++ b/tools/libs/light/libxl_virtio.c
> @@ -46,11 +46,13 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid,
>                                        flexarray_t *ro_front)
>  {
>      const char *transport = libxl_virtio_transport_to_string(virtio->transport);
> +    const char *grant_usage = libxl_virtio_grant_usage_to_string(virtio->grant_usage);
>  
>      flexarray_append_pair(back, "irq", GCSPRINTF("%u", virtio->irq));
>      flexarray_append_pair(back, "base", GCSPRINTF("%#"PRIx64, virtio->base));
>      flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type));
>      flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport));
> +    flexarray_append_pair(back, "grant_usage", GCSPRINTF("%s", grant_usage));

Currently, this means that we store "default" in this node. That means
that both the virtio backend and libxl have to do a computation in order
to figure out whether "default" means "true" or "false", and both have
to arrive at the same result. I don't think this is necessary; libxl can
just say enabled or disabled. This would be done in libxl before we run
this function. See the previous comment on this patch.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 12 10:57:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 10:57:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533777.830700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQSB-0003Sv-RJ; Fri, 12 May 2023 10:56:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533777.830700; Fri, 12 May 2023 10:56:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQSB-0003So-O1; Fri, 12 May 2023 10:56:51 +0000
Received: by outflank-mailman (input) for mailman id 533777;
 Fri, 12 May 2023 10:56:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=stcw=BB=linux.intel.com=andriy.shevchenko@srs-se1.protection.inumbo.net>)
 id 1pxQSA-0003Sg-1S
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 10:56:50 +0000
Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aee7605b-f0b3-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 12:56:46 +0200 (CEST)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 12 May 2023 03:56:43 -0700
Received: from smile.fi.intel.com ([10.237.72.54])
 by fmsmga002.fm.intel.com with ESMTP; 12 May 2023 03:56:34 -0700
Received: from andy by smile.fi.intel.com with local (Exim 4.96)
 (envelope-from <andriy.shevchenko@linux.intel.com>)
 id 1pxQRq-0004Zv-0i; Fri, 12 May 2023 13:56:30 +0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aee7605b-f0b3-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1683889006; x=1715425006;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=aTme5qDkFV7Mcp0uaOboKBbeV5ze/dqrLO9cBA/tElE=;
  b=metEbHt10/j5rTqX/CuJRt8bYDclqhcaubvgP6l0z0dHj5dyc2asp7lb
   1xQm/358fwA4HRf+W8vjMV5Y1Kko27hVWmVCmy7jisGRQm/AeUFfrcrDQ
   U24a1rDUd/eIfSP31FQHEGWtoCJJggFuu/dSbp58COXtONJ+wT1Ev4yoo
   cupw6w15ojhJzKkVTH2KFwPRRfV7rt6BBj2zKV8swdlb36ZAexvTJk2QZ
   pGbFgxQQsPSLiz09IquI3dpt29kihROiRYx4EaKBzRH3v4igF4rLXQWPv
   cuyhhzvjvJo/9fqsVYF3p7PfRGC8vQnNDJTcpw8lw7JhkHQYg4+DJOU1I
   g==;
X-IronPort-AV: E=McAfee;i="6600,9927,10707"; a="414132443"
X-IronPort-AV: E=Sophos;i="5.99,269,1677571200"; 
   d="scan'208";a="414132443"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10707"; a="812041055"
X-IronPort-AV: E=Sophos;i="5.99,269,1677571200"; 
   d="scan'208";a="812041055"
Date: Fri, 12 May 2023 13:56:29 +0300
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: Bjorn Helgaas <helgaas@kernel.org>
Cc: Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Rich Felker <dalias@libc.org>, linux-sh@vger.kernel.org,
	linux-pci@vger.kernel.org,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-mips@vger.kernel.org, Bjorn Helgaas <bhelgaas@google.com>,
	Andrew Lunn <andrew@lunn.ch>, sparclinux@vger.kernel.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Gregory Clement <gregory.clement@bootlin.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Russell King <linux@armlinux.org.uk>, linux-acpi@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>, xen-devel@lists.xenproject.org,
	Matt Turner <mattst88@gmail.com>,
	Anatolij Gustschin <agust@denx.de>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	=?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	linux-arm-kernel@lists.infradead.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	Randy Dunlap <rdunlap@infradead.org>, linux-kernel@vger.kernel.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	linux-alpha@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	"David S. Miller" <davem@davemloft.net>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>
Subject: Re: [PATCH v8 0/7] Add pci_dev_for_each_resource() helper and update
 users
Message-ID: <ZF4bXaz2r75dlA5g@smile.fi.intel.com>
References: <20230404161101.GA3554747@bhelgaas>
 <20230509182122.GA1259567@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230509182122.GA1259567@bhelgaas>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo

On Tue, May 09, 2023 at 01:21:22PM -0500, Bjorn Helgaas wrote:
> On Tue, Apr 04, 2023 at 11:11:01AM -0500, Bjorn Helgaas wrote:
> > On Thu, Mar 30, 2023 at 07:24:27PM +0300, Andy Shevchenko wrote:
> > > Provide two new helper macros to iterate over PCI device resources and
> > > convert users.
> 
> > Applied 2-7 to pci/resource for v6.4, thanks, I really like this!
> 
> This is 09cc90063240 ("PCI: Introduce pci_dev_for_each_resource()")
> upstream now.
> 
> Coverity complains about each use,

This needs more clarification. Does Coverity complain about uses of the reduced
variant of the macro, or about all of them? If it is the former, I can only
speculate that Coverity (famous for false positives) simply doesn't understand
`for (type var; var ...)` code.

>	sample below from
> drivers/pci/vgaarb.c.  I didn't investigate at all, so it might be a
> false positive; just FYI.
> 
> 	  1. Condition screen_info.capabilities & (2U /* 1 << 1 */), taking true branch.
>   556        if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE)
>   557                base |= (u64)screen_info.ext_lfb_base << 32;
>   558
>   559        limit = base + size;
>   560
>   561        /* Does firmware framebuffer belong to us? */
> 	  2. Condition __b < PCI_NUM_RESOURCES, taking true branch.
> 	  3. Condition (r = &pdev->resource[__b]) , (__b < PCI_NUM_RESOURCES), taking true branch.
> 	  6. Condition __b < PCI_NUM_RESOURCES, taking true branch.
> 	  7. cond_at_most: Checking __b < PCI_NUM_RESOURCES implies that __b may be up to 16 on the true branch.
> 	  8. Condition (r = &pdev->resource[__b]) , (__b < PCI_NUM_RESOURCES), taking true branch.
> 	  11. incr: Incrementing __b. The value of __b may now be up to 17.
> 	  12. alias: Assigning: r = &pdev->resource[__b]. r may now point to as high as element 17 of pdev->resource (which consists of 17 64-byte elements).
> 	  13. Condition __b < PCI_NUM_RESOURCES, taking true branch.
> 	  14. Condition (r = &pdev->resource[__b]) , (__b < PCI_NUM_RESOURCES), taking true branch.
>   562        pci_dev_for_each_resource(pdev, r) {
> 	  4. Condition resource_type(r) != 512, taking true branch.
> 	  9. Condition resource_type(r) != 512, taking true branch.
> 
>   CID 1529911 (#1 of 1): Out-of-bounds read (OVERRUN)
>   15. overrun-local: Overrunning array of 1088 bytes at byte offset 1088 by dereferencing pointer r. [show details]
>   563                if (resource_type(r) != IORESOURCE_MEM)
> 	  5. Continuing loop.
> 	  10. Continuing loop.
>   564                        continue;

-- 
With Best Regards,
Andy Shevchenko




From xen-devel-bounces@lists.xenproject.org Fri May 12 11:19:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 11:19:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533781.830709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQnH-0006Vx-Du; Fri, 12 May 2023 11:18:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533781.830709; Fri, 12 May 2023 11:18:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQnH-0006Vq-BA; Fri, 12 May 2023 11:18:39 +0000
Received: by outflank-mailman (input) for mailman id 533781;
 Fri, 12 May 2023 11:18:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Uaj=BB=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pxQnG-0006Vg-1n
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 11:18:38 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.20]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bcf2b41d-f0b6-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 13:18:36 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4CBIXJFd
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 12 May 2023 13:18:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcf2b41d-f0b6-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683890313; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=FRe3TEsviGbwhVJHmGWu8+4MSEgkw3T0lAC2syXgRI1nGIqq4IFHA7nDWfp5zn0XGQ
    O5BzPCqdZVs6nyvvGOKjM66fhLQwN7GZw1Q3YU+q+4VzeKeaUrwTYAvqaJvSjcbxVH6e
    32/MhSugwJcx6icbAoSRkuTrHxdLc0PgaayPeO9YHmh6gb/JtQLy+FAUvDrmFe7kzDua
    6pBVen1s7R0KHbZ8oKMiTCt8h5lOjeRl35dS9R7HqLsJ2G5jrZPjPSaCNfCpBTnAelGO
    7CHdcp0xUYOMj79BBgnN4ET/xNQcmoDwXioMoelkK4Yu2IBCNcAnxZz/IkM2x0u6ZmgG
    v6XQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683890313;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=tigSnCt6TQ+e8MtVlCi7/8cx0Lj8IuTBbKJnyQt5R9A=;
    b=LVpkvAGQjOlKwXktpKmA7wDt7CqCtc4Uq9Xz9KTS57bpP8kdH0yRtlJJQjxB3NXxP6
    C7/xkL8zP2BpknsFVleJF390C7VaC3VUA+CzNTiF+hu2nHSZFN1+mNnT+WanF9Q4d6hm
    VmMyvrupmmexpRy2cLub6kk2KWu2Q9EfWdqpnZGfHTGDVADb+bPoojo2f6pS+8fZuU6O
    of7TfnubFVyONlq9tKtQB6hmJfBwC9aZSIE3cfrEsd7NLr9x+djzVnsgy0AOriWXbgQS
    p9uhYBmLuadbn+jLC+sle6EHT5HYFVTPOnLPEE4XAPdUuu6+Bb3WSfqaFrrIy9FQAW16
    TwAQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683890313;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=tigSnCt6TQ+e8MtVlCi7/8cx0Lj8IuTBbKJnyQt5R9A=;
    b=JOjldXokCJourXLUdec+xYfP6g2DvmiEgzgQ90vLixVHpBCWWxPAt8UssmKGAqFkQp
    sIoHJn4t0dgCVKqXqxincuKslzkPTPyykbF30skYQrNmyu+Ep+89kDfOKsDy8clsrNp4
    GUEvvgORgIF75oNqsT4qTaxDc4NvBGirxTLlDPmQEFUrv2lu+416p9S8JmJPUZAP71eU
    LSy7U4CTvw0Nd6myDQR31nn8vKIHfmOPUm6zKn7pr0dEieK0x1u+7X1g5VKPx5UMU9xY
    Q6fwLkqXKCQzxEftxejkBzDexgCkjrn2DcKGOx8SvwVRPt0JP92gx+yu0ZtnV8hRtRpY
    bxCg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683890313;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=tigSnCt6TQ+e8MtVlCi7/8cx0Lj8IuTBbKJnyQt5R9A=;
    b=hCYCpVVGIoXu0A9dIKa/bmac/BobJqKH+shMvSXjpx7iNAsLf6H8EMvFzHw3rJvuJX
    PABA0JDOP7r1drqVPWDA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4ARWIYxzstZKeVom+bauo0LKSCjuo5iX5xLikmg=="
Date: Fri, 12 May 2023 13:18:19 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jason Andryuk <jandryuk@gmail.com>, Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] Fix install.sh for systemd
Message-ID: <20230512131819.02b3a128.olaf@aepfle.de>
In-Reply-To: <0785a316-1920-f5de-61d3-ed21ddbff0b9@citrix.com>
References: <20230508171437.27424-1-olaf@aepfle.de>
	<0785a316-1920-f5de-61d3-ed21ddbff0b9@citrix.com>
X-Mailer: Claws Mail 20230504T161344.b05adb60 hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/kmuudUbtj62vrb0QAzfYPWU";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/kmuudUbtj62vrb0QAzfYPWU
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 9 May 2023 13:47:11 +0100 Andrew Cooper <andrew.cooper3@citrix.com>:

> Why is this 700, and the others just using regular perms?
> Also, doesn't it want quoting like the other examples too?

It is not clear why there is a single mkdir -m 0700 in the tree.
Most likely it will not give any extra security.

The scripts source hotplug.sh, which defines a variable XEN_RUN_DIR.
I think it is better to use the shell variable instead of hardcoded paths.
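As a rough sketch of that approach (the file contents and paths below are
illustrative stand-ins, not the actual tree):

```shell
# Stand-in for the generated hotplug.sh that defines XEN_RUN_DIR; the
# init scripts would source it instead of baking in @XEN_RUN_DIR@.
tmp=$(mktemp -d)
printf 'XEN_RUN_DIR="%s/run"\n' "$tmp" > "$tmp/hotplug.sh"

. "$tmp/hotplug.sh"          # pick up XEN_RUN_DIR from the sourced file
mkdir -p "${XEN_RUN_DIR}"    # use the variable, quoted, not a hardcoded path
test -d "${XEN_RUN_DIR}" && echo "created"

rm -rf "$tmp"
```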

Regarding quoting: there are many paths used without quoting.
For consistency, an additional (huge) change could be made to quote
everything. I am not sure it is worth the effort...

I will post a v3 with this relative change:

--- a/tools/hotplug/FreeBSD/rc.d/xencommons.in
+++ b/tools/hotplug/FreeBSD/rc.d/xencommons.in
@@ -34,7 +34,7 @@ xen_startcmd()
 	local time=0
 	local timeout=30
 
-	mkdir -p "@XEN_RUN_DIR@"
+	mkdir -p "${XEN_RUN_DIR}"
 	xenstored_pid=$(check_pidfile ${XENSTORED_PIDFILE} ${XENSTORED})
 	if test -z "$xenstored_pid"; then
 		printf "Starting xenservices: xenstored, xenconsoled."
--- a/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
+++ b/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
@@ -27,7 +27,7 @@ xendriverdomain_start()
 {
 	printf "Starting xenservices: xl devd."
 
-	mkdir -p "@XEN_RUN_DIR@"
+	mkdir -p "${XEN_RUN_DIR}"
 	PATH="${bindir}:${sbindir}:$PATH" ${sbindir}/xl devd --pidfile ${XLDEVD_PIDFILE} ${XLDEVD_ARGS}
 
 	printf "\n"
--- a/tools/hotplug/Linux/init.d/xendriverdomain.in
+++ b/tools/hotplug/Linux/init.d/xendriverdomain.in
@@ -49,7 +49,7 @@ fi
 
 do_start () {
 	echo Starting xl devd...
-	mkdir -m700 -p @XEN_RUN_DIR@
+	mkdir -p "${XEN_RUN_DIR}"
 	${sbindir}/xl devd --pidfile=$XLDEVD_PIDFILE $XLDEVD_ARGS
 }
 do_stop () {
--- a/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
+++ b/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
@@ -23,7 +23,7 @@ XLDEVD_PIDFILE="@XEN_RUN_DIR@/xldevd.pid"
 
 xendriverdomain_precmd()
 {
-	mkdir -p "@XEN_RUN_DIR@"
+	mkdir -p "${XEN_RUN_DIR}"
 }
 
 xendriverdomain_startcmd()


--Sig_/kmuudUbtj62vrb0QAzfYPWU
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmReIHsACgkQ86SN7mm1
DoDskBAAikvmuvF3oMZ4tZTsP4/IdSAHXs2PuwupI9S0+uN/kncGYT2iOAtwiyhd
w99nTHWWqQ7DHY3yGxXkwJgeBhlVLUUtLUDCQ7uldRv//C/uRybFEYtOgLtg//Mg
04fONaZaCMjfd4O3Y+h4V2jFrepKwtiYh1moC73s0vrmG1WQ0FtcPzQgh5eY7ezN
I6myUdOx1Dgm/B6JxnJi73VbpbyJEIH3m31RgGb9Wl1Kh3xrRLmSFujn39M3xSjs
16oW0wU1/TGJAvgtNMCJ+HIl5pNPKeIOknJFevt1BGrOxE6hRDa2dsjdjLw9zdCq
fKbQ5fUTKhUh6zL7grXUdluywnYzVc9eWtC5IFpYZMN3wyaJHOLYHe02DQ1bxrE2
T2v6k7XoivyTTiPRkg7Zd+/FYOuxR5HkLapjYnwZ4+0sTIVUqIwTIy9enEYxMyid
e4t9w/l/lswBujutEw5WaA9hxux3MPOlaQRPSqq2BeCPthadMmo/MHYpELCEMT/x
GbsnE2CCAhK5O9T7TGHaxLYSAzQvpr5VaYG6LSHW36KK39YjmTBMz+H5baONQddW
skrAGri/1/7cN0giU4smY7tGpkr07fRK8D1obwrDc6dXA81AcKA9dzDtSeH8rkq5
Ngk4GzmXylckaoCw/Tt9CQZH+tiEMBLhoKs38YglsFUoj04BNB4=
=Gy9K
-----END PGP SIGNATURE-----

--Sig_/kmuudUbtj62vrb0QAzfYPWU--


From xen-devel-bounces@lists.xenproject.org Fri May 12 11:22:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 11:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533786.830720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQqs-0007vB-Sd; Fri, 12 May 2023 11:22:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533786.830720; Fri, 12 May 2023 11:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQqs-0007v4-Po; Fri, 12 May 2023 11:22:22 +0000
Received: by outflank-mailman (input) for mailman id 533786;
 Fri, 12 May 2023 11:22:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zhAz=BB=citrix.com=prvs=489789326=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pxQqs-0007uw-07
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 11:22:22 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 414fbb4e-f0b7-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 13:22:20 +0200 (CEST)
Received: from mail-dm6nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 12 May 2023 07:22:16 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5630.namprd03.prod.outlook.com (2603:10b6:a03:28b::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22; Fri, 12 May
 2023 11:22:15 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::4fc:e616:1cf0:57bb%4]) with mapi id 15.20.6363.033; Fri, 12 May 2023
 11:22:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 414fbb4e-f0b7-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683890539;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=o9IqwSYdBzXQ+2BuXRI4uXzPjierc5LrVwnlz/pewMA=;
  b=YAPmSQCdI7a7GP3n67tLRWuZmzzPDhdMeiJDW8sRxpyWPNP3FCb0ryx/
   1plKNyJP+AL0dMh6PAa8IdYW4FcnwD93IH8IEabp8Z/DtEFBzJh9GI+gG
   Xh0upaBXAkmGZ+6vEaYdD5Sts3QV5vn3UXrzzjXQ10PmiQZ0WjT4QaRU9
   Y=;
X-IronPort-RemoteIP: 104.47.58.109
X-IronPort-MID: 108820881
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:7LFhO6IqTL64fSuWFE+R7ZQlxSXFcZb7ZxGr2PjKsXjdYENSgWcDm
 2FNUG+PPK6KMGagc9wlaI++8U4E6JfSytBlTgNlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4wVkPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5uA3Nnz
 +EZKgpSbz7Sm/Of6bWqc+1z05FLwMnDZOvzu1lG5BSAVbMKZM6GRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/dnpTGLnWSd05C0WDbRUvWMSd9YgQCzo
 WXe8n6iKhobKMae2XyO9XfEaurnxHurCNhKRO3hnhJsqF2Q9mIOJERPaQeAjMC8t1W9UIx+C
 VNBr0LCqoB3riRHVOLVWBmxrlaNswYSX9cWH+BSwBGAzO/Y7hiUAkAATyVdc5o2uckuXzso2
 1SV2dTzClRHsrSTRWiM67S8oja7OCxTJmgHDQcbSSMV7t+lp5s85i8jVf5mGa+xy9HwRzf5x
 mnSqDBk3u1Cy8kWy6+84FbLxSq2oYTERRI04QORWX+56gR+Z8iuYInABUXn0Mus5b2xFjGp1
 EXoUeDHhAzSJflhTBCwfdg=
IronPort-HdrOrdr: A9a23:szD9o6lEKTVzCQaXCqvzCrIp4XLpDfLa3DAbv31ZSRFFG/Fw9/
 rCoB3U73/JYVcqKRcdcLW7UpVoLkmyyXcY2+cs1PKZLWvbUQiTXeZfBOnZsl7d8kTFn4Yw6U
 4jSdkaNDSZNzNHZK3BkW2F+rgboeVu8MqT9JjjJ3UGd3AVV0m3hT0JezpyESdNNXl77YJSLu
 vk2iLezQDQBEj+aK6AdwE4dtmGnfLnvrT8byULAhY2gTP+8Q9BuNbBYmOlNg51aUI0/Ysf
X-Talos-CUID: 9a23:95rWe2/MreFjrqFQ6fWVv1cWCOkfb3zf9ijRDR6kGE9AT6ysTUDFrQ==
X-Talos-MUID: 9a23:w7Ja6grEJcxmru7fQxoezzFhBMl1z/33NBw2vKwDhtKHPnJRGSjI2Q==
X-IronPort-AV: E=Sophos;i="5.99,269,1677560400"; 
   d="scan'208";a="108820881"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lff92meAxxqWDgZl54ckOwthIlF769MnRvgFGTfeomulWQiDY7kBwN4fHU3EFyTPUY7rA1I/Q3+2m4pRMbnD2Q4YGIwLKv4k0EQ1ZBRQxi1bvg9gJANd95iU9O//66bf5bO0f3Bm0P31GUIDgInHq/PeKMiAUs4CdTq7UQcw3anQHrYWk3M8BLl282U0qpnOn+C5R8Ha5j4TLno6Wzw5shzA1heTRX0J/wuxkikNuL7hb1g3R1p3iVHPVsdwbQPNNW/aZjZUeWVvaq9nj6+6qfzMnRCC5iD7hyNNggAFJii9Z9tN1DMmOmYomgY9o1Yyn9RzarDAQVPlLNXxK1lZaA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=o9IqwSYdBzXQ+2BuXRI4uXzPjierc5LrVwnlz/pewMA=;
 b=k8ZatOay22zmMfm7NnIhFthxzLqA7WWAGqR6bbVmDsJ5/6LikLZf9ThuUZtpaD+beIRjE5cIz+ug7zlazm1YhK2+AQO9bdsvMshPXExQ7LFbBIKf4rNeaCV2PfPedUa8VxZOWfoBO+9W1MlQApiZp6U1YuMr6TFqGGula+42V7EgZEsM53hzvwQpgUgPK5Q9zqPdqqWmRUL0eWj6oJxU+EhCm8EcBy8j+c1g7wfS5+Ugz2fphoNmqA+RDjFIDgTdVGiEBdxHfcWE9hsyjIeTnzolcLl+GQ0BPS2n7JzFEP9kA7tFI3NUD6IOFf/kvKszSNRhNNilUj0c+LtNd5NbGA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o9IqwSYdBzXQ+2BuXRI4uXzPjierc5LrVwnlz/pewMA=;
 b=CGIRnRBVa4DBKuSXti123TqF+6ruakosHMaxzyz708X1oCAjJsQBH+9Mj+AJz12qNjfXWTDU9c3osHiKSgtcybNkYtmsm3y2lYQA4FehUD940DJu9drvdM6OUR1BCO0LFOBacY0uMDWMTs0nc+UDrK+3H7hb5vDkqNoW7tUWCOQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <3d0b1c1b-02e5-a0f3-221a-45dbdba1b411@citrix.com>
Date: Fri, 12 May 2023 12:22:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] Fix install.sh for systemd
Content-Language: en-GB
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Jason Andryuk <jandryuk@gmail.com>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508171437.27424-1-olaf@aepfle.de>
 <0785a316-1920-f5de-61d3-ed21ddbff0b9@citrix.com>
 <20230512131819.02b3a128.olaf@aepfle.de>
In-Reply-To: <20230512131819.02b3a128.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0600.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:295::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB5630:EE_
X-MS-Office365-Filtering-Correlation-Id: 6ede7ed7-5721-4c79-02f0-08db52db2314
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nSNozOxh8uR4ovwoMGBs8Gsfgh300Y0bMXxRKKzKo6d2RZeB0Yd3hFPACoBXonA5OHauui4l2Iuv/xjs07BFQSj/OprLC7/tmpF4VyIMPX9P6Gu1avQ2g6EbRkG6lUrFW4awwXvbVk4JuWt6K7tNddzFDvl9liB0HrqjGXesv8Fgi8RkhHEWrD+urkhrJ/3zeyty98Brb3tEX4/iBEVXRn2JxCb6s+skLEnsPh6Qa8xP4zBmFLlbgw2tXVHqv1glDsiSJTmY+BD1TREvxFEenlNAsjFJ7KcUnoIpJ0wO5P8mHJ5YSLJZwNvto9Y7K/Aj29CoAAghYvqsg/th7DAaWZ3Hkl6xU5ykAvw6q2R1ulbjdPmiItyDmdPNgNOLSlfqPJYAMot7kF4TWQUN+S5PGViqxH0A/9LMyqRRGvh/Y4UD0tpisla1NvzvaOdOzHXFNNFHvq4Hfefvdj8ZMWAAwVDf1LDwVARxf4556MiHCe7+YdZ63MAKqksVMI1nTTHAKd4Y6bocm6HH0ilACi+JBqqCanRoBqFSSvaDTMSNzH+7vL5IVcvrMl0OpKWqIdvfdA8RCqhVJwIeNIIMyGuudFKyFxOLSlT12/a3r9s3j8v+Sb6mhK5G4PvqeXvdIwt2uYXAXP48wKxlAaRzODSrjQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(376002)(366004)(346002)(39860400002)(451199021)(31686004)(107886003)(38100700002)(41300700001)(6486002)(53546011)(26005)(6506007)(83380400001)(2616005)(6512007)(186003)(6666004)(478600001)(6916009)(54906003)(8936002)(82960400001)(66556008)(4326008)(66476007)(66946007)(316002)(5660300002)(86362001)(2906002)(31696002)(36756003)(8676002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YTY4MXZVQitQbnJCWjhCaDB2aXZNTE51eUVudmlaTWZlLytMNWVtTzAyNUMv?=
 =?utf-8?B?ODNhVnJVZnhHNjdSa3dIUWo3UFVwZTBHcjk3MEtCNHhkd1d3RXl6TlZDWnM5?=
 =?utf-8?B?V1g2R0x6aU83ZmNjMWE3bTNQZUtNNW1Kc01IZHhSYTJONXRZMUo1cUxsSnBK?=
 =?utf-8?B?MTROTGc5d1NwVUtISGpFajlCSEVzT0cyczFxQy94elRkbHpTVC9wTWRvT21K?=
 =?utf-8?B?TWxQY2xQVWZXY2lQaEpDZW5DZWQvdGZNUWd6QnQyVkNrZnlwNDdzV0lyZG9X?=
 =?utf-8?B?VHhBaDNkVnZvZUJmY0djRWtFaWxMakNLaWNKVURzSEdWNUpxaHdqalhTQkEw?=
 =?utf-8?B?WUw4SXRkSXZPd1FERTFsQjFGbEVsQnZWUHlCc2I2Y2xlU3p4Z3cxZjIraHps?=
 =?utf-8?B?LzBxRURGUHI0MTNtZ0E1eENvdndWZi8zNFhvNW13MHVBOEN5aFhReitGNlMz?=
 =?utf-8?B?SHNWWE9oV0lseG56UHlURCttQnRVZG1HaVJLNFZEQlBqYngxa01YNUd4RTlL?=
 =?utf-8?B?VW1pcnhvSGU2UWZ5VGcrRml5R2dCTSt1Yys0U0pEaEFHS2IwYWU3MCtjL0lz?=
 =?utf-8?B?anA4MDRiUGcxemMyTWJUekZub3NTdklqV0dGRmFjUDdUVy9wdXFCZmY3azBP?=
 =?utf-8?B?Yit1clUrdERIRzE4ekJuZVJKcXM1ZjRISitwM0E2UGdkWnFZVlo0WG4vcTlM?=
 =?utf-8?B?UjJOVlkzNUdnVmgwWEVkNklwalB5NE5hYmorMzhEejRYcUhMOGh1U05vdnVN?=
 =?utf-8?B?dlNsaERNZGpBYWN1dlU2NTdGVkxjdkxiditPY3JjZTZYQzRqRE1MU0JkL2FH?=
 =?utf-8?B?Mld2SUNBbURpTmNtZzc0TXJMY2hrRWc1MDc1VkpXNzByc0pIbEVpckFvYUV6?=
 =?utf-8?B?WkEzM0hCY1FEOVNHcDNIeU1MRU9QdFNKTDNGWWVPdWgrWGNxY0Jpek5DMDdV?=
 =?utf-8?B?TVdrOXhVekhOSHFXY05pZ05jTkJRMDRVbVVReWNwcmk4b3IxMGRYanRKZSt2?=
 =?utf-8?B?UkplcXVXRlVEQXgwWkVSVnV2QlVDcTFTUkk3cU1LM2lZN1dRSExRbHFwdERk?=
 =?utf-8?B?TFFVYWR6azlrNXhKK0NONUx6ajhZQ3BmV3kyMkMrdW9iUHpKWEg4cUNYZVlH?=
 =?utf-8?B?Tng4dFRhc0p3d29zR0w2b21SbTVQQm01U09VcmZ4MnBvQ0loNVUrd1gwNy9O?=
 =?utf-8?B?dHZJREQ5aHVIblNDK3M3ekZrN1hnNlRKQ0JBdXNnUVhiQ1lTY2JzWGZ3cmlG?=
 =?utf-8?B?TVNXLzNKSCtpWncvdGIzcE5hZ3Z2eXpsejREK0tib1VMdDlycCtndGhTU1cy?=
 =?utf-8?B?WUtWMkVLNWxvVVVQTXNwRmtCalZBMHkxVHkrZlZhb3IwL1FJbm00RHFrTTZm?=
 =?utf-8?B?QUdNa2FKRlJzTTZuZEcrQTE1ZDA2YlQyQUlKWTd6c3VValRVQVNsZ2NZRktD?=
 =?utf-8?B?cGwzOVN1T3VPNC9rdG1rdzNpcFlIR01sSm1rWW1DaXV1T1NZQmt6aUE0a3JT?=
 =?utf-8?B?T25oL2ZDV0M1U1BON0J6Nk1YYVByVVNqc1VOTktGRGJuVW1tUks4cjBDUkho?=
 =?utf-8?B?bDV6Wjd4N1hrWnpIMkc1MVlReHpMUG9KTjR5dHJMZDRaN3Jid1I4T0QwTFda?=
 =?utf-8?B?S1NYbUQwcDRWNW5LeUpCd2RhRmVGbzRCbG9yU1A5bXhTSWJpT2VoNndyK00y?=
 =?utf-8?B?V1Z4Y01MbEszcCsxYUZJUEZmQ1cyRm9iYnp1RVFuVFJOWFdsVHBlTzd1cU5X?=
 =?utf-8?B?Y3I2S29UdzlFQ1g1TXFyZE9kWEYvUGFTeDFKNVlCUlBDSWFVTXZjQnA0dlJB?=
 =?utf-8?B?SERiRnI2dWNFSHdaaE95aHptVkViK0FYNVRCV2ViMm55ME0xeE9wdi9qZ1pn?=
 =?utf-8?B?WUtoODdod29iRW5tMXJQblRKSitON0NGdGNrT1l0YmhzY3BMbm9kZDRpcDRi?=
 =?utf-8?B?SmJMZDk2WEl4TEV2elhvc3hOVk5PekRtRmRqYlJmMjhiVC9RWDhTRGNSMmlt?=
 =?utf-8?B?NVNFbXh6dVpvaElJK056azNodkIxeHhGa1FsM0hoMi83NlE3OFg3dDJ6WWFU?=
 =?utf-8?B?bmpTaUlYd1NSVGZZVURheGZBempWRkhhcFVqL3U1QkQvaENwMW9mc1BQcXFi?=
 =?utf-8?B?NmZhYjdIRXJuVGdEUTcxQ1QwdGJjMmVzMFhuWWd6WTlvVXpoNWxCc3dCVCtH?=
 =?utf-8?B?RGc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	9h3uWc5TyXG7RGc5mKF+hfi9b76GMeotqRYRdVVsmM7JnSpzKHCXPLB1EkLnSrfzs4R297rFB8xs0vDo7cNzvW/fPKuWVq9FyTuMJnlKey8xpMlAANle0v7kSqr7/hT4ZTZvPqruoqN1C8PlDV9FMnD/uLbjU6QKB4Xs6t7OVkIhXNgGGEyF2SeO0qqeFQB8t7EXw6avgHZWc3Q9EZG0o1oP0P3Gqx3e+eVaqiJmW27T3OvPU1jcwjK/mJDa39tWOIcqZoGoBzRR5rm+jM2JHG5/WapuCiwHOyu2IjoqnqAjFLrDtfXgLo9mOCZFhQI2eFgfb7OwbeP52/s27HLN9cq2AWKAEl7HDvmQO4Dfg8fV6sxkIadr03cjmoYpqZJZhE04ucPCKa+WE+SE9yEAf8P/JqVwd0BWl/B+AZ9cBEXXWGBHEYMhwfQYvoKGNtLkH3PQQ5R30bKIePWzbhTU9rxf6dRFt8QuTbbwKaiusoPMQ+7G/Lil9vdrpNiTnWPXoEBrx6TwnXwUvvsWjvLPM5+uxfJBALxTYLyva5Xsxogm3As07iMz4KSIfoF6oZXQywil2TVdF3bJDenzlrhh0wthXo2bh/psyZsWCioaW/QUgJ2bOD/XFB445vfvPi4xu4c3lvEU9mJvmR6g8xOF/KXg9YxMA3fB9NTqqqz0ldy9zOGX4yHZOZMo7kADJdRzcq0wtpli28Oga6p/lyYR9tfBHrJ7g7BfGExjgFpXpwQXaGSoH60J5z6vGAvN5C7rPisoGFoszBGDJo19yySQafsP1LQq5RYRqn2IUc1oc0LyQ7EfM0fAEVgdr2aQmut0Jr8Az9eWACNeeV53NVQGAUGu2gq9scHdmD4hnGZA75c=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6ede7ed7-5721-4c79-02f0-08db52db2314
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 11:22:14.5665
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qVgPjZGvkhY9oMx3/jtZnPGjzBzqJnNzA1ieZQI9qcb/RxtSIA77KfUTmteQ+Hs7z9xoGW/GHEOkIwosawKyEh0h38vgf9kAmfS/n6f2TAk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5630

On 12/05/2023 12:18 pm, Olaf Hering wrote:
> Tue, 9 May 2023 13:47:11 +0100 Andrew Cooper <andrew.cooper3@citrix.com>:
>
>> Why is this 700, and the others just using regular perms?
>> Also, doesn't it want quoting like the other examples too?
> It is not clear why there is a single mkdir -m 0700 in the tree.
> Most likely it will not give any extra security.

I agree.  It's weird and doesn't have a good reason for being different.

> The scripts source hotplug.sh, which defines a variable XEN_RUN_DIR.
> I think it is better to use the shell variable instead of hardcoded paths.

Sounds good.  Does this allow for making any of these files no longer
preprocessed by ./configure?  (i.e. cease being .in files)

> Regarding quoting: there are many paths used without quoting.
> For consistency, an additional (huge) change could be made to quote
> everything. I am not sure it is worth the effort...

Perhaps, but variables should always be quoted.  At least make sure that
new additions (and edits) leave things quoted.
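A tiny demonstration (with a hypothetical value) of the word splitting that
quoting prevents:

```shell
path='xen run'          # a value containing a space

set -- $path            # unquoted: the shell splits it into two words
unquoted_words=$#

set -- "$path"          # quoted: it stays a single word
quoted_words=$#

echo "unquoted=$unquoted_words quoted=$quoted_words"
```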

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 12 11:29:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 11:29:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533792.830729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQxb-0000Md-Ml; Fri, 12 May 2023 11:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533792.830729; Fri, 12 May 2023 11:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxQxb-0000MW-K4; Fri, 12 May 2023 11:29:19 +0000
Received: by outflank-mailman (input) for mailman id 533792;
 Fri, 12 May 2023 11:29:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Uaj=BB=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pxQxa-0000MQ-4v
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 11:29:18 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [85.215.255.52]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3a2e6d43-f0b8-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 13:29:16 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4CBTCJIz
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 12 May 2023 13:29:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a2e6d43-f0b8-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1683890953; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=HqkWQ/TLEvR95XxBJ1DQ8CV31L6NrxpA3As6VHVlPVd4G9TQO2IWm7pfKlOtv9XqDM
    AFUFZ/ogsTPWLz1To3INbkl1Hk9/xDDtRN12yBL00iAv1yN9ZgmhVB3fLVtLI+jCsPHd
    ApbRUZ4XD9QVG+bz2SHMOMXtC2dSEvBD4euoEP41qqcsOTt8lzX2EVozW7iK63MYBj+l
    ii3/jObAopS3fD3WcLL4zALSc7EwykGY9kVn5YHTg49TsYgfXuYOG37usskN8moHaaNi
    of9rhDWXP9ejCuSLIm3GwkAfyKStvljPWDGrpgtF31GPgh5/v657Qr7sou9rAM7R3iMF
    fAfQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683890953;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=tE6ZYixG65UnttHybhoHc5k4Gq9LVP8y0+vdkXhUX+Q=;
    b=BgtmZG94n7YIRz4DjnU5g4LE6NAUOTajNfxxi9aLfZizYhwhriMcoqYecXFiC8rzND
    IOL2eogtKWPRsk64q8rvo4b8aXSO1O0wmUcaZVXkZJZJxVPE4q77x5FY8awseCGBTTZG
    Tz71P13548X4LH9pXxF4cRTF+FEbT1HM99l3JV5HoPQGHXYGswgiN0S8OT/ngWv1iz6W
    3iszQe3B7NV4pbRkP1tNaukpc7ZHF0j/eqz0yOZ9OdR9l+Wf0avAnFEr2tFxiXu5dTH7
    wn+MHF8E/0v6x3AnpY0vGiwdrxNFBsQBZBAtFvjRkQ0Pb5dyvQDMNXOEA5yyUa34RewO
    E1iQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683890953;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=tE6ZYixG65UnttHybhoHc5k4Gq9LVP8y0+vdkXhUX+Q=;
    b=rtuIG7ZX0jWdaR2OpQoO9i4uALwDIcx0flTuuKDl9KaQVkflt6WxJB8S1YhrXwJY8U
    3bJvcQ0hHBLkjXgVlohfqHKPIO4NSnspTe5FCQI/mjUjEvwM4Ef/TK/xhO0FV2WoSQQ/
    uSTygKfUEsBfdlP9Y6nl0lEUfmAgJ/nNIig37RNCHsA+nP1TerRFIiehoEsj8SdQW9bb
    2nD0u9FqHposWdH49S2JHiFl7oOLOEc1fJqALi6k0nRPtOrN5cW/1cxCgsBu0B4Zc5ZS
    Y19ksxIN4Bhd43uytQilaC4G6asVB/QlGWC7VDmmRVqpU32FbicVJ6nV30E1qNeOfgYB
    xShw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683890953;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=tE6ZYixG65UnttHybhoHc5k4Gq9LVP8y0+vdkXhUX+Q=;
    b=XKNwrKE16kVT5BXBvY+JXV4zmPc5KyADNgT9LCERpxC+3/k4IMZl4/t/K3qpSuFlBN
    mTqUODH5Ln6CTemJpvDg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4ARWIYxzstZKeVom+bauo0LKSCjuo5iX5xLikmg=="
Date: Fri, 12 May 2023 13:29:05 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jason Andryuk <jandryuk@gmail.com>, Wei
 Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] Fix install.sh for systemd
Message-ID: <20230512132905.23b87694.olaf@aepfle.de>
In-Reply-To: <3d0b1c1b-02e5-a0f3-221a-45dbdba1b411@citrix.com>
References: <20230508171437.27424-1-olaf@aepfle.de>
	<0785a316-1920-f5de-61d3-ed21ddbff0b9@citrix.com>
	<20230512131819.02b3a128.olaf@aepfle.de>
	<3d0b1c1b-02e5-a0f3-221a-45dbdba1b411@citrix.com>
X-Mailer: Claws Mail 20230504T161344.b05adb60 hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/+k8Kho5OzHg9lFOIZ9ZB9el";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/+k8Kho5OzHg9lFOIZ9ZB9el
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Fri, 12 May 2023 12:22:08 +0100 Andrew Cooper <andrew.cooper3@citrix.com>:

> Does this allow for making any of these files no longer
> preprocessed by ./configure ?  (i.e. cease being .in files)

The path to hotplugpath.sh is variable, so each consumer needs to be a .in file.


Olaf

--Sig_/+k8Kho5OzHg9lFOIZ9ZB9el
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmReIwEACgkQ86SN7mm1
DoAgcg/9GMkV6h2abB5E3pll5DBRROAChHnOW6vALdODJzvrMHPabTlnIzFX+PFq
geeTPV97A5BOllCpHQwJmGNab+vCp282mfLhb976253sU5Taha3qUndjXo+RSi+s
re/9w7C4+TP1H3/6VDnS4TEZ4HoaOgXMRAjzrj2SS81DE4AdnGdSKa861zVpXeLW
ifG02POOY1BM1GMUuH0lEUdZR3F75re28PzVNxVvJFCBGFbtiLa4t1qDTvbhQQ8x
7UMV5yZ5opOQQ/94JxQYk/adjuDyDsucCeRQAE3De0RzNk9niGlC0dMKQKbW4Ik9
HJn5hS0H8/wIWqak3RHI6EWe8/vueWNz7UQ3iLAKBhsHCQ095qVWRkN7i2NyLdxC
e5aBF/OjR21PTUjqrfKT3+sz8O5+GZDCuft33HmtipqyhBGYaSDkAypMNTt49cYD
dBKkwOqUCfQkqjYtbRTSu9GN5NU1YhQh0MWoYNWjDlcF4ELcbEnrh1OblHjm6JFX
PNAGIE1g9iStYxUeHCHkWx6MCLJeXXQ+aMgnqO6TEHHkW9rPULgx6iTuQ/LFv5Nb
6lQuS6TkDEsJSrBSazLFoikE5g7s15pLDwDMaQbBFpT6FgsWRY2aj48rHuk8z5cX
5y7CE7Q7eZndHNNwCx0L95Gkr+2PCsbjcmORuiLfxGQxf5g4SUU=
=5CZE
-----END PGP SIGNATURE-----

--Sig_/+k8Kho5OzHg9lFOIZ9ZB9el--


From xen-devel-bounces@lists.xenproject.org Fri May 12 11:37:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 11:37:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533797.830739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxR5X-0001pm-FJ; Fri, 12 May 2023 11:37:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533797.830739; Fri, 12 May 2023 11:37:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxR5X-0001pf-Cj; Fri, 12 May 2023 11:37:31 +0000
Received: by outflank-mailman (input) for mailman id 533797;
 Fri, 12 May 2023 11:37:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Uaj=BB=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pxR5W-0001pZ-7r
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 11:37:30 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [81.169.146.164]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 601b5111-f0b9-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 13:37:29 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4CBbOJLT
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 12 May 2023 13:37:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 601b5111-f0b9-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683891444; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=m0EwRS4obPPprrZpWB3J2CyEC7twPxJUOqHNGo926fbwiV/0FO/zHDBvfXkO6YphJY
    SjPCDZ9XIBr+h+BfV/uWH+2/vC41HgLlSwWebwM8j5eLwJb4jzorFd+QfXQlp+H6wN6T
    LNkufYO60H9ewBWNmpYlTQHeE29IuYw3wASu5WDYcXS9lnnMmRG6xEVJ7OfdGx/6u6ek
    URXFen2oNfxTXASIMLbZBpZVMOxRQQ19/rasSaFG96VgW6cMAnrMROFEBnBxDHDxQ6U5
    de7Iy2oHx/J2G+fgb+JcPmgadqQLIJ975hHsRAfISuwdcBaBfcBZ44YJxYVJd8PSvlHN
    xGsA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683891444;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=3/HjLg8WC0z8XKR8q3J8WROiVeYOfFeQJrGqHDVdwzY=;
    b=H4/QOwAZ8Vn99qhxVwRtAd7bL9A68E0EX/lW7Z26LlLkUzta3Ygck9awAYZ8sPzkMk
    yOcK42E0bFsFVIAXcTBNi1wCVtUP6ATe/7veKKuoeKhqczAjzU3XE99mk/LG2sn3IKuH
    ecGcdZJXttlpiOay7bzM8oEj3faDdMVya5pfAwHs7iFlzoZavEAKfuWk2NTtgupidNqk
    n8aqJEaBofbHdZtYU3MAFG1Kb16ocCoGqtJqZHDri1mdb0lO5Y5TgGp6q9hMyol2Tb8x
    sqywq4X0J+nHkd6TlM+PNRwPriSUG0bMsMFhEH1cQ/Q4/2e3fxO3NUha7T41TxN/07Sq
    T0NA==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683891444;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=3/HjLg8WC0z8XKR8q3J8WROiVeYOfFeQJrGqHDVdwzY=;
    b=JyTu814bNR382Bmdfdo8+wonA/oY50TPIBrdgqTgrPW2LtLZH9nkxqeIqqDQnRZNpw
    kKDXHrD5TMaOhQTgtptveNDzIPiWd9kLfrgxa51/QkxSByKED+JpqRxR31SwydaaflLf
    JdY/pFHbtal9C9Oz5wKJJh5apxxs8iKrpvutBT5smm6+lTYfbe/81MPkiH7NR+tS8v0F
    z+NiEod/RIppQ/L5ScZZVg8toNkb33UWJu1iNHpICHIxp5xq1MVpfhEyx8um2CneYDy4
    WOfZvasQr0OELrx293ZYMuV+qbbU73oqkK29PHOTAP9Qmg6SRo3hRG6C5kMPwJbSz9eb
    O01g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683891444;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=3/HjLg8WC0z8XKR8q3J8WROiVeYOfFeQJrGqHDVdwzY=;
    b=p5+k55pLv40PM75YSx6prQ7IPz9yYFBFFNstwjHdfyBsOjh4SDvrKz0hhbJBq1ukow
    9Tg6Ga8UTXI5j71Np0DA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xr137Gpot26qU4O0oDB37weYobhAHKAaiA4NsOg=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3] Fix install.sh for systemd
Date: Fri, 12 May 2023 11:36:44 +0000
Message-Id: <20230512113643.3549-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

On a Fedora system, running `sudo sh install.sh` breaks the system: the
installation clobbers /var/run, which is a symlink to /run. A subsequent
boot fails once /var/run and /run are distinct, since accesses through
/var/run can't find items that now exist only in /run, and vice versa.

Skip populating /var/run/xen during make install.
The directory is already created by some scripts. Adjust all remaining
scripts to create XEN_RUN_DIR at runtime.

Use the shell variable XEN_RUN_DIR instead of hardcoded paths.

XEN_RUN_STORED is covered by XEN_RUN_DIR because xenstored is usually
started afterwards.
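The runtime-creation idiom the adjusted scripts converge on can be sketched
as follows (a minimal illustration; `XEN_RUN_DIR` would normally be sourced
from the configure-generated hotplugpath.sh, and the default value shown
here is an assumption, not taken from this patch):

```shell
#!/bin/sh
# Sketch of the runtime-creation idiom used by the adjusted init scripts.
# XEN_RUN_DIR normally comes from the generated hotplugpath.sh; a typical
# value is assumed here for illustration only.
XEN_RUN_DIR="${XEN_RUN_DIR:-/run/xen}"

# Create the runtime directory on demand instead of at `make install`
# time, so a packaged /var/run -> /run symlink is never clobbered.
mkdir -p "${XEN_RUN_DIR}"
```

Creating the directory at service start also keeps it correct across
reboots on systems where /run is a tmpfs.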

Reported-by: Jason Andryuk <jandryuk@gmail.com>
Tested-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
v3: use variables and quote them, drop -m0700 from one mkdir call

 tools/Makefile                                     | 2 --
 tools/hotplug/FreeBSD/rc.d/xencommons.in           | 1 +
 tools/hotplug/FreeBSD/rc.d/xendriverdomain.in      | 1 +
 tools/hotplug/Linux/init.d/xendriverdomain.in      | 1 +
 tools/hotplug/Linux/systemd/xenconsoled.service.in | 2 +-
 tools/hotplug/NetBSD/rc.d/xendriverdomain.in       | 2 +-
 6 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/Makefile b/tools/Makefile
index 4906fdbc23..1ff90ddfa0 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -58,9 +58,7 @@ build all: subdirs-all
 install:
 	$(INSTALL_DIR) -m 700 $(DESTDIR)$(XEN_DUMP_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_LOG_DIR)
-	$(INSTALL_DIR) $(DESTDIR)$(XEN_RUN_DIR)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_LIB_DIR)
-	$(INSTALL_DIR) $(DESTDIR)$(XEN_RUN_STORED)
 	$(INSTALL_DIR) $(DESTDIR)$(PKG_INSTALLDIR)
 	$(MAKE) subdirs-install
 
diff --git a/tools/hotplug/FreeBSD/rc.d/xencommons.in b/tools/hotplug/FreeBSD/rc.d/xencommons.in
index 7f7cda289f..6f429e4b0c 100644
--- a/tools/hotplug/FreeBSD/rc.d/xencommons.in
+++ b/tools/hotplug/FreeBSD/rc.d/xencommons.in
@@ -34,6 +34,7 @@ xen_startcmd()
 	local time=0
 	local timeout=30
 
+	mkdir -p "${XEN_RUN_DIR}"
 	xenstored_pid=$(check_pidfile ${XENSTORED_PIDFILE} ${XENSTORED})
 	if test -z "$xenstored_pid"; then
 		printf "Starting xenservices: xenstored, xenconsoled."
diff --git a/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in b/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
index a032822e33..f487c43468 100644
--- a/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
+++ b/tools/hotplug/FreeBSD/rc.d/xendriverdomain.in
@@ -27,6 +27,7 @@ xendriverdomain_start()
 {
 	printf "Starting xenservices: xl devd."
 
+	mkdir -p "${XEN_RUN_DIR}"
 	PATH="${bindir}:${sbindir}:$PATH" ${sbindir}/xl devd --pidfile ${XLDEVD_PIDFILE} ${XLDEVD_ARGS}
 
 	printf "\n"
diff --git a/tools/hotplug/Linux/init.d/xendriverdomain.in b/tools/hotplug/Linux/init.d/xendriverdomain.in
index c63060f62a..17b381c3dc 100644
--- a/tools/hotplug/Linux/init.d/xendriverdomain.in
+++ b/tools/hotplug/Linux/init.d/xendriverdomain.in
@@ -49,6 +49,7 @@ fi
 
 do_start () {
 	echo Starting xl devd...
+	mkdir -p "${XEN_RUN_DIR}"
 	${sbindir}/xl devd --pidfile=$XLDEVD_PIDFILE $XLDEVD_ARGS
 }
 do_stop () {
diff --git a/tools/hotplug/Linux/systemd/xenconsoled.service.in b/tools/hotplug/Linux/systemd/xenconsoled.service.in
index 1f03de9041..d84c09aa9c 100644
--- a/tools/hotplug/Linux/systemd/xenconsoled.service.in
+++ b/tools/hotplug/Linux/systemd/xenconsoled.service.in
@@ -11,7 +11,7 @@ Environment=XENCONSOLED_TRACE=none
 Environment=XENCONSOLED_LOG_DIR=@XEN_LOG_DIR@/console
 EnvironmentFile=-@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
 ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
-ExecStartPre=/bin/mkdir -p ${XENCONSOLED_LOG_DIR}
+ExecStartPre=/bin/mkdir -p ${XENCONSOLED_LOG_DIR} @XEN_RUN_DIR@
 ExecStart=@sbindir@/xenconsoled -i --log=${XENCONSOLED_TRACE} --log-dir=${XENCONSOLED_LOG_DIR} $XENCONSOLED_ARGS
 
 [Install]
diff --git a/tools/hotplug/NetBSD/rc.d/xendriverdomain.in b/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
index f47b0b189c..87afc061ac 100644
--- a/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
+++ b/tools/hotplug/NetBSD/rc.d/xendriverdomain.in
@@ -23,7 +23,7 @@ XLDEVD_PIDFILE="@XEN_RUN_DIR@/xldevd.pid"
 
 xendriverdomain_precmd()
 {
-	:
+	mkdir -p "${XEN_RUN_DIR}"
 }
 
 xendriverdomain_startcmd()


From xen-devel-bounces@lists.xenproject.org Fri May 12 11:52:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 11:52:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533802.830749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxRJn-0004X8-OR; Fri, 12 May 2023 11:52:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533802.830749; Fri, 12 May 2023 11:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxRJn-0004X1-LR; Fri, 12 May 2023 11:52:15 +0000
Received: by outflank-mailman (input) for mailman id 533802;
 Fri, 12 May 2023 11:52:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Uaj=BB=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pxRJm-0004Wv-0P
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 11:52:14 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [81.169.146.167]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6e52901a-f0bb-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 13:52:12 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4CBq8JPe
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 12 May 2023 13:52:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e52901a-f0bb-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683892329; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=M4ovsXlVQlninO635eY0mKR8DuDh0BZQlf38BK/4NTBcRQj4b0Knz2L3molYqY77lU
    NLCLi+kfneJY6CnaEFsUDOC0IBlDGci8HuzwvgxoFfmvMXL6+dX5O8yMIX4+Ykrnlr0O
    Ns7v6eoWEDv3h7+xqaHn6p7qm5ZEm43rD7S1Uk4vETJcwK2Kwz7DQ+4LNOrKYkew7lEM
    E65gm2mL25BC7h21VBM9qeJBOHd67y4AuUtH2R8CQgfUVSmMRSniqG1vTx+d32isF7lG
    aLjo51pF6tmZMhA9h3NCJjifcTzObQMVvhE8vWwCP6pnIx9XpaZazaxI79xu24ITYxfb
    d6RQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683892329;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=4pO3OeqHNQDQyAaWXS+vdNOuErKYYtDSH/h/zsSid+E=;
    b=gqOFTSO+w3MnjYmwnNWfwKOlLE3UyOK5JFvO6vPsLM7vQ/sl9va5mX4H9fsl4xrjpp
    S8FdvkCCFnbNqRt661ua17QTnF/eUgQ1BXP1mUaeJcc8b+/4GYuMGrxhzKjETjyjcJwL
    /e5D2wGUZhr/UJMu0AVBcBPIWujrjXoRl2lkM2RyzZGWwqZapatfzUE5E78cP4lSbVOi
    n/q0W634+kPFWfx62vimGJmBEkbC7e60yFVfGGq0ZixwMKtPZqCPPBLtJYug76zFuplx
    w944s/Cuhzb4VQS9S8hXVQ+HYNjk+CfRc3fFsZEsTMYZfkhebFUJFIaA13FZglu8gr02
    NtiA==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683892329;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=4pO3OeqHNQDQyAaWXS+vdNOuErKYYtDSH/h/zsSid+E=;
    b=XfiOgOAlqR66VpE9RVaz0JMyxzuZkwZKlNIz7O84eknEqLBdAWLTg2UVY4+TE2Okep
    9YRx2mvgViiPMs3vmFPcf4wFTqEsDkZCSUwjmoiSfN2P4E1VgipzieTzkvunDM0TsEl+
    o12+u5bjg2uvyWbYRyG6x/vj1LLF5BquvmtVvEC86mIiN1aoAauHOk9Inj6kmV4Mugqn
    cYVXxQjuIryCaW2GvjsqkA4bjAB5zvvrA0iWr96STlioy3xQEJMcfJPEaynfA3Dx9VPL
    2JG9myK8gktv7kTPNix3Qz6zJJnvRiYlX1z5rgim45DR38FvxgDKcE1lgOhgFdIjX8yK
    N1nw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683892329;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=4pO3OeqHNQDQyAaWXS+vdNOuErKYYtDSH/h/zsSid+E=;
    b=AWsNT7YRCnt3GpKkb1O7y6HbyaXYAqOcHK/aqS5DSQvk5cSnGc+13zA0pDuv05P1+Y
    8ZOg2mo1/zo5waEXRcAA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4ARWIYxzstZKeVom+bauo0LKSCjuo5iX5xLikmg=="
Date: Fri, 12 May 2023 13:51:55 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] tools: drop bogus and obsolete ptyfuncs.m4
Message-ID: <20230512135155.07157aa8.olaf@aepfle.de>
In-Reply-To: <9fd06ad0-4c21-43be-ac48-8d30844535ad@perard>
References: <20230502204800.10733-1-olaf@aepfle.de>
	<9fd06ad0-4c21-43be-ac48-8d30844535ad@perard>
X-Mailer: Claws Mail 20230504T161344.b05adb60 hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/AMegc+nWY52/.ERzMOn52x4";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/AMegc+nWY52/.ERzMOn52x4
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 9 May 2023 17:47:33 +0100 Anthony PERARD <anthony.perard@citrix.com>:

> That change isn't enough. And I'm not convinced that it needs to be
> removed.

You are right, the provided functions must be removed as well.
My build scripts do not run autoreconf, perhaps it is about time to
change that.

> First, AX_CHECK_PTYFUNCS is still called in "tools/configure.ac".

This needs to be removed, yes.

> Then, AX_CHECK_PTYFUNCS define INCLUDE_LIBUTIL_H and PTYFUNCS_LIBS.

This is used inconsistently. Some places include <libutil.h>
unconditionally. -lutil is already used unconditionally via UTIL_LIBS.

> Also, that that macro isn't just about the header, but also about the
> needed library.

This is already covered.

Olaf

--Sig_/AMegc+nWY52/.ERzMOn52x4
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmReKFsACgkQ86SN7mm1
DoDXzw/+PKZdb1Z6tmsIT+GDx+aWdahQBGnmiRCF76Nx1HkBdkpoCA+cnr91uggy
rlTUaZNnTZSw1ie4rYLslltccr830w76BbmFLRjGPbznRlGgGFD44ZdK1axPs3yj
DeXzorWmxd+qpZApwWwnZXmbsq//a27s78XR7pRkegzTuvB9e+ZoZZDkhJKlTBgH
5VgSUrm+C6oEHq0UmpWgTecmJZecLpTQ2ZXZ/FnrI2UiHPNMJfBStpIVHW9akAJc
YyLrvlHyJsOXFwJhIODH2bn1gSoEiAp1lx5LgRxq1/W1XI/Es2Oebm470Z+RzHuU
7Yw1auxhV5LqRH65U0CqcVhk7fcnWfhXkXnmfntrAmzv75JzITFZwJrCkTZxVpJI
DZ4p2uq2VcO1pW5/rkV24EvWWm6In0rH5Yv9+PMp6Qo7OGlO4Bdu3OKEUEuls5vx
YtXyO6U3aTryiEaIG8ASLD51uK5j3cBRM+GMEp02uKmdDrvi2cfJQ9nR5JNq3a7p
dGGx02NMMFPU++wd6K44aM62cI5a9u8mDEMErShSBV2nfpTw6a4xU8CDJkyWS/qm
Qo1qGf+HvfoHLACGmtwsmrZFSYkvNzQ1w9gXr9q9Rg2qNPgcASa9LtnZ/TK1rrPG
OZtanD4EHPhqYG1DH0N6YvgAd/pT8vIHaRXft3Gkh1t8Mt3FtOI=
=5Rp/
-----END PGP SIGNATURE-----

--Sig_/AMegc+nWY52/.ERzMOn52x4--


From xen-devel-bounces@lists.xenproject.org Fri May 12 12:26:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 12:26:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533819.830760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxRqx-0000QM-NL; Fri, 12 May 2023 12:26:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533819.830760; Fri, 12 May 2023 12:26:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxRqx-0000QF-KX; Fri, 12 May 2023 12:26:31 +0000
Received: by outflank-mailman (input) for mailman id 533819;
 Fri, 12 May 2023 12:26:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Uaj=BB=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pxRqw-0000Q9-0g
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 12:26:30 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [81.169.146.166]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 377496e5-f0c0-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 14:26:27 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4CCQIJZJ
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 12 May 2023 14:26:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 377496e5-f0c0-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1683894378; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=DMMeweWM+GOw5a5NQGApknm8Bb6NFo+Q/LKHo/vyDgIx4cLdpWyhtwq6Jb0oFC2m5j
    choroABBkzYTT7hjvzmt04F3oFPVHXhuqSOfLnNZ59TB9xfbFfohKus0qe+VmdD3Xz+Z
    DVP1XlG098Xk3t1a75U3mgL9gEqKTK/5+b7APNjoU4RloP02dAWC6qRVpD6N6GFdt0lb
    KwKK0QaYdl7xWLLMC7efEucjdKkb5x/mEXgWWYbJxxN6O15vF6UjX+H8wr7EX3BIFN/J
    g4EuYUYkxUKHYVVD+CrwhLP50HmxFCQPdckPZ+xmxSvvKSeVR9C3Wqhkab67BhsrIZLX
    vx5Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1683894378;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=lI93bue9rL3OSJYRvI1INTpKr3FcpXHPrRG3HRdcxkI=;
    b=FTMvtzz7nJRY3TaWB4oX7ynIGCoyRpvN6HWOZJzFwTED8v0z1CDGj5N7GGauyKyEtN
    aca/N736R5a/0hULmSs3oU3F3exgeqSpF8afAzdXee35pSa2oHwUTHkFxadbrAQCfJER
    BsA1Pw+8Tfj+3I1IiBfJXpyKtoLSbG6gpNQG0qS3yQq+iNH2nRZI921bY1ayR//dMtDd
    rWncxMvpZoK4gGyTHyb03gIPIBp5NpHo0+4G2YchDwL2h5SztTiNH9BqmjZH8Wo79uzN
    FntQSBmxHcXo45BsAp4P3o575y0cDK7rFJH2L9jPxsuy99KsG0VRBGHxjWFQWfI3gohQ
    9UtQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1683894378;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=lI93bue9rL3OSJYRvI1INTpKr3FcpXHPrRG3HRdcxkI=;
    b=hvKAAk/SUXi/t5+UST7BxAjnDoO8YEZmAZXGowZZsUQm9v5H75KaibXFyB08q2xClG
    B8DdAJcIJZNv/hNnWDlNMS0C8L+UCFjctCDa5mSJ4Lyr4TH9acXpkuRy8P04lPAdmJVf
    gDnJcRaGunIkNWKCbjFgfy8oK8Qz9ByGTTWJICpnoVGkQzoLrnhJhe9vQoxRqt66BPCU
    DGGOVJCBB5UDcKieBDagbL6B6UW290Jq4NGibPHal/rW9z4ITxR01Zls5vCgscrx/9Ns
    0R8cM82FPxJ2ir2TEaGmoMH3VXp6TiZ0cNM0cFnyXLNjT2NbaT08R09Dnp46KHKVnkPG
    7YQw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1683894378;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=lI93bue9rL3OSJYRvI1INTpKr3FcpXHPrRG3HRdcxkI=;
    b=pY2RavDVbgn1pNE0A6SdhN0HwMQgVYWFTwbMqgNktaq2/9iniW+osG1S1flia7d/WV
    QgyunO8RmrJPVOe+zgBA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xr137Gpot26qU4O0oDB37weYobhAHKAaiA4NsOg=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v2] tools: drop bogus and obsolete ptyfuncs.m4
Date: Fri, 12 May 2023 12:26:14 +0000
Message-Id: <20230512122614.3724-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

According to openpty(3), <pty.h> must be included to get the prototypes
for openpty() and login_tty(). But this is not what the macro
AX_CHECK_PTYFUNCS actually does: it makes no attempt to include the
required header.

The two source files which call openpty() and login_tty() already contain
the conditionals to include the required header.

Remove the bogus m4 file to fix the build with clang, which complains
about calls to undeclared functions.

Remove usage of INCLUDE_LIBUTIL_H in libxl_bootloader.c, it is already
covered by inclusion of libxl_osdep.h.

Remove usage of PTYFUNCS_LIBS in libxl/Makefile, it is already covered
by UTIL_LIBS from config/StdGNU.mk.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
v2: remove consumers of the macros

 config/Tools.mk.in                  |  2 --
 m4/ptyfuncs.m4                      | 35 -----------------------------
 tools/configure.ac                  |  2 --
 tools/libs/light/Makefile           |  2 +-
 tools/libs/light/libxl_bootloader.c |  4 ----
 5 files changed, 1 insertion(+), 44 deletions(-)
 delete mode 100644 m4/ptyfuncs.m4

diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index 6abb377564..b7cc2961d8 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -31,8 +31,6 @@ PTHREAD_CFLAGS      := @PTHREAD_CFLAGS@
 PTHREAD_LDFLAGS     := @PTHREAD_LDFLAGS@
 PTHREAD_LIBS        := @PTHREAD_LIBS@
 
-PTYFUNCS_LIBS       := @PTYFUNCS_LIBS@
-
 LIBNL3_LIBS         := @LIBNL3_LIBS@
 LIBNL3_CFLAGS       := @LIBNL3_CFLAGS@
 XEN_TOOLS_RPATH     := @rpath@
diff --git a/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
deleted file mode 100644
index 3e37b5a23c..0000000000
--- a/m4/ptyfuncs.m4
+++ /dev/null
@@ -1,35 +0,0 @@
-AC_DEFUN([AX_CHECK_PTYFUNCS], [
-    dnl This is a workaround for a bug in Debian package
-    dnl libbsd-dev-0.3.0-1. Once we no longer support that
-    dnl package we can remove the addition of -Werror to
-    dnl CPPFLAGS.
-    AX_SAVEVAR_SAVE(CPPFLAGS)
-    CPPFLAGS="$CPPFLAGS -Werror"
-    AC_CHECK_HEADER([libutil.h],[
-      AC_DEFINE([INCLUDE_LIBUTIL_H],[<libutil.h>],[libutil header file name])
-    ])
-    AX_SAVEVAR_RESTORE(CPPFLAGS)
-    AC_CACHE_CHECK([for openpty et al], [ax_cv_ptyfuncs_libs], [
-        for ax_cv_ptyfuncs_libs in -lutil "" NOT_FOUND; do
-            if test "x$ax_cv_ptyfuncs_libs" = "xNOT_FOUND"; then
-                AC_MSG_FAILURE([Unable to find library for openpty and login_tty])
-            fi
-            AX_SAVEVAR_SAVE(LIBS)
-            LIBS="$LIBS $ax_cv_ptyfuncs_libs"
-            AC_LINK_IFELSE([AC_LANG_SOURCE([
-#ifdef INCLUDE_LIBUTIL_H
-#include INCLUDE_LIBUTIL_H
-#endif
-int main(void) {
-  openpty(0,0,0,0,0);
-  login_tty(0);
-}
-])],[
-                break
-            ],[])
-            AX_SAVEVAR_RESTORE(LIBS)
-        done
-    ])
-    PTYFUNCS_LIBS="$ax_cv_ptyfuncs_libs"
-    AC_SUBST(PTYFUNCS_LIBS)
-])
diff --git a/tools/configure.ac b/tools/configure.ac
index 9bcf42f233..3cccf41960 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -70,7 +70,6 @@ m4_include([../m4/uuid.m4])
 m4_include([../m4/pkg.m4])
 m4_include([../m4/curses.m4])
 m4_include([../m4/pthread.m4])
-m4_include([../m4/ptyfuncs.m4])
 m4_include([../m4/extfs.m4])
 m4_include([../m4/fetcher.m4])
 m4_include([../m4/ax_compare_version.m4])
@@ -416,7 +415,6 @@ AC_SUBST([ZLIB_CFLAGS])
 AC_SUBST([ZLIB_LIBS])
 AX_CHECK_EXTFS
 AX_CHECK_PTHREAD
-AX_CHECK_PTYFUNCS
 AC_CHECK_LIB([yajl], [yajl_alloc], [],
     [AC_MSG_ERROR([Could not find yajl])])
 AC_CHECK_LIB([z], [deflateCopy], [], [AC_MSG_ERROR([Could not find zlib])])
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 96daeabc47..5d7ff94b05 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -158,7 +158,7 @@ NO_HEADERS_CHK := y
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-LDLIBS-y += $(PTYFUNCS_LIBS)
+LDLIBS-y += $(UTIL_LIBS)
 LDLIBS-$(CONFIG_LIBNL) += $(LIBNL3_LIBS)
 LDLIBS-$(CONFIG_Linux) += -luuid
 LDLIBS-$(CONFIG_Linux) += -lrt
diff --git a/tools/libs/light/libxl_bootloader.c b/tools/libs/light/libxl_bootloader.c
index 18e9ebd714..1bc6e51827 100644
--- a/tools/libs/light/libxl_bootloader.c
+++ b/tools/libs/light/libxl_bootloader.c
@@ -19,10 +19,6 @@
 #include <utmp.h>
 #endif
 
-#ifdef INCLUDE_LIBUTIL_H
-#include INCLUDE_LIBUTIL_H
-#endif
-
 #include "libxl_internal.h"
 
 #define BOOTLOADER_BUF_OUT 65536


From xen-devel-bounces@lists.xenproject.org Fri May 12 12:46:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 12:46:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533824.830770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxSA4-0003Hp-4B; Fri, 12 May 2023 12:46:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533824.830770; Fri, 12 May 2023 12:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxSA4-0003Hi-1Q; Fri, 12 May 2023 12:46:16 +0000
Received: by outflank-mailman (input) for mailman id 533824;
 Fri, 12 May 2023 12:46:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zhAz=BB=citrix.com=prvs=489789326=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pxSA2-0003Hc-Ti
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 12:46:14 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f92cf0c2-f0c2-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 14:46:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f92cf0c2-f0c2-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1683895573;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=WAWUD+wNEpNVus+oB8cf4wiPLRwZUJ4zcYOUBw6BpT8=;
  b=LSQHzwW8hC2li6k5j8QrCXgkRTHjKzdreItniUAkNySD7GOF/f/S0VHQ
   giUKa+o0bzAN/jzkImwNBhDGlXiY+y1eHXNMR1iRjps4ImGgeRtlb5Fo3
   rbDqwLUe84HiWZSYO/cuWALIEBjOE/hM8czyhrQ5MA9cXLkrReyRgZlO6
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107567557
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:C0KgG66nVGQVCoh5dL8OHwxRtDHHchMFZxGqfqrLsTDasY5as4F+v
 jYZCjvXb/+ONjOhctF2PNmw9EoB78ODm9c3Gwts+ykwHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7ZwehBtC5gZlPa0S5geE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m6
 PsDJi40X0+6nOPs0qmKZ+k2h8YqM5y+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrlD5fydVtxS+oq0v7nKI5AdwzKLsIJzefdniqcB9xx7H+
 zyZojmmav0cHMafzQOF7kqovc3ekizncZM5SKWk0sc/1TV/wURMUUZLBDNXu8KRlUqWS99Zb
 UsO9UIGvaU0sUCmUNT5dxm5u2Kf+A4RXcJKFO834x3LzbDbiy6bDGUZSj9KaPQ9qdQ7Azct0
 ze0c8jBXGI19ufPEDTEq+nS9GnpUcQIEYMcTTIDVgUb2ui8mZoy1ADUf/tjSq+3h8KgTFkc3
 Au2hCQ5grwSi+sC2KO64U3LjlqQm3TZcuImzl6JBzz4t2uVcKbgPtX1sgaDsZ6sOa7DFjG8U
 G44d99yBQzkJbWEj2SzTeoEB9lFDN7VYWSH0TaD83TMnglBGkJPn6gKuFmSx28zaK7onAMFh
 2eN0T69HLcJYBOXgVZfOupd8fgCw6n6DsjCXfvJdNdIaZUZXFbZrHo+PhbKjz60zRVEfUQD1
 XCzL66R4YsyU/w7nFJauc9HuVPU+szO7TyKHs2qp/hW+bGfeGSUWd84Dbd6VchgtPnsiFyMo
 75i2z6il003vBvWPnOGrub+7DkicRAGOHwBg5AMKLPcc1c9SAnMyZb5mNscRmCspIwN/s+gw
 513chMwJIbX7ZEfFTi3Vw==
IronPort-HdrOrdr: A9a23:/tXzxaGVUGLdllpmpLqEyceALOsnbusQ8zAXPiFKOGVom6mj9/
 xG885rtiMc5AxhOk3I4OrwXpVoIkmskKKdn7NhR4tKNTOO0ACVxedZnPPfKlbbakrDH4BmpN
 xdm68XMrPN5Q8Tt6rHCBvRKbcdKMruys+Vbfe39R1QpRsDUcxdBq5Ce2KmLnE=
X-Talos-CUID: 9a23:qngoxm7b2vDLXXZ7Otss6UkPFN4cQyLk13aKMVSJJGYzSb+cVgrF
X-Talos-MUID: =?us-ascii?q?9a23=3ACpPyKQwaYqwLLGrpT17onvJmAzSaqL6uDHkNja0?=
 =?us-ascii?q?CgOuVOiNsEia20CyrZrZyfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="5.99,269,1677560400"; 
   d="scan'208";a="107567557"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH] x86/cpuid: Calculate FEATURESET_NR_ENTRIES more helpfully
Date: Fri, 12 May 2023 13:45:51 +0100
Message-ID: <20230512124551.443139-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

When adding new featureset words, it is convenient to split the work into
several patches.  However, GCC 12 spotted that the way we prefer to split the
work results in a real (transient) breakage whereby the policy <-> featureset
helpers perform out-of-bounds accesses on the featureset array.

Fix this by having gen-cpuid.py calculate FEATURESET_NR_ENTRIES from the
comments describing the word blocks, rather than from the XEN_CPUFEATURE()
with the greatest value.

For simplicity, require that the word blocks appear in order.  This can be
revisited if we find a good reason to have blocks out of order.
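
The calculation described above can be sketched as a small standalone
script (a simplified illustration with hypothetical regexes and a made-up
`parse()` helper, not the actual gen-cpuid.py): derive the entry count from
the "word $N" comments, insist the blocks are sequential, then cross-check
that no feature bit lands beyond the last declared word.

```python
import re

# Simplified sketch of the approach (hypothetical input format; the real
# gen-cpuid.py regexes are more involved).
word_re = re.compile(r"^/\* .* word (\d+) \*/$")
feat_re = re.compile(r"^XEN_CPUFEATURE\(\w+,\s*(\d+)\*32\+\s*(\d+)\)")

def parse(lines):
    nr_entries, last_word, max_bit = 0, -1, -1
    for l in lines:
        m = word_re.match(l)
        if m:
            word = int(m.group(1))
            # Word blocks must appear in order, starting from 0.
            if word != last_word + 1:
                raise ValueError("word %u out of order (last word %u)"
                                 % (word, last_word))
            last_word = word
            nr_entries = word + 1
            continue
        m = feat_re.match(l)
        if m:
            max_bit = max(max_bit, int(m.group(1)) * 32 + int(m.group(2)))
    # Highest feature bit must fit within the declared words.
    if (max_bit >> 5) + 1 > nr_entries:
        raise ValueError("bit %d exceeds NR_ENTRIES %d"
                         % (max_bit, nr_entries))
    return nr_entries
```

With this shape, adding a new word comment before any features in that word
keeps the array bound ahead of the feature definitions, so a series split
mid-word no longer produces out-of-bounds accesses.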

No functional change.

Reported-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

This supersedes the entire "x86: Fix transient build breakage with featureset
additions" series, but doesn't really feel as if it ought to be labelled v2
---
 xen/tools/gen-cpuid.py | 42 ++++++++++++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index 0edb7d4a19f8..e0664e0defe1 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -50,13 +50,37 @@ def parse_definitions(state):
         "\s+([\s\d]+\*[\s\d]+\+[\s\d]+)\)"
         "\s+/\*([\w!]*) .*$")
 
+    word_regex = re.compile(
+        r"^/\* .* word (\d*) \*/$")
+    last_word = -1
+
     this = sys.modules[__name__]
 
     for l in state.input.readlines():
-        # Short circuit the regex...
-        if not l.startswith("XEN_CPUFEATURE("):
+
+        # Short circuit the regexes...
+        if not (l.startswith("XEN_CPUFEATURE(") or
+                l.startswith("/* ")):
             continue
 
+        # Handle /* ... word $N */ lines
+        if l.startswith("/* "):
+
+            res = word_regex.match(l)
+            if res is None:
+                continue # Some other comment
+
+            word = int(res.groups()[0])
+
+            if word != last_word + 1:
+                raise Fail("Featureset word %u out of order (last word %u)"
+                           % (word, last_word))
+
+            last_word = word
+            state.nr_entries = word + 1
+            continue
+
+        # Handle XEN_CPUFEATURE( lines
         res = feat_regex.match(l)
 
         if res is None:
@@ -94,6 +118,15 @@ def parse_definitions(state):
     if len(state.names) == 0:
         raise Fail("No features found")
 
+    if state.nr_entries == 0:
+        raise Fail("No featureset word info found")
+
+    max_val = max(state.names.keys())
+    if (max_val >> 5) + 1 > state.nr_entries:
+        max_name = state.names[max_val]
+        raise Fail("Feature %s (%d*32+%d) exceeds FEATURESET_NR_ENTRIES (%d)"
+                   % (max_name, max_val >> 5, max_val & 31, state.nr_entries))
+
 def featureset_to_uint32s(fs, nr):
     """ Represent a featureset as a list of C-compatible uint32_t's """
 
@@ -122,9 +155,6 @@ def format_uint32s(state, featureset, indent):
 
 def crunch_numbers(state):
 
-    # Size of bitmaps
-    state.nr_entries = nr_entries = (max(state.names.keys()) >> 5) + 1
-
     # Features common between 1d and e1d.
     common_1d = (FPU, VME, DE, PSE, TSC, MSR, PAE, MCE, CX8, APIC,
                  MTRR, PGE, MCA, CMOV, PAT, PSE36, MMX, FXSR)
@@ -328,7 +358,7 @@ def crunch_numbers(state):
     state.nr_deep_deps = len(state.deep_deps.keys())
 
     # Calculate the bitfield name declarations
-    for word in range(nr_entries):
+    for word in range(state.nr_entries):
 
         names = []
         for bit in range(32):
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 12 14:21:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 14:21:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533835.830784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxTeA-0006d3-TK; Fri, 12 May 2023 14:21:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533835.830784; Fri, 12 May 2023 14:21:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxTeA-0006cw-PR; Fri, 12 May 2023 14:21:26 +0000
Received: by outflank-mailman (input) for mailman id 533835;
 Fri, 12 May 2023 14:21:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxTe9-0006cm-09; Fri, 12 May 2023 14:21:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxTe8-0005re-TX; Fri, 12 May 2023 14:21:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxTe8-0003KE-Ar; Fri, 12 May 2023 14:21:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxTe8-0006Vz-AM; Fri, 12 May 2023 14:21:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=A1H3R7zqYNBxqkSDnnS+pfsJb2ZkSrZdeqnW72G8HnI=; b=dr58TnsJkYNQRqZARuEhBK3uLT
	K6L5XptJLkIvx+h8wNHLpG+plU2HO2HlTrlmPsX0bYyXBTOqwL+aZZfbmE8ZVjnEATawkGM45psxE
	W5HRtvTdwj7FzkhLVw/0qowtlYzV7ht+sPNtrd+7SPROW7hTgrj0Wq8w9WFQ+UBN9D6M=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180626-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180626: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=278238505d28d292927bff7683f39fb4fbca7fd1
X-Osstest-Versions-That:
    qemuu=d530697ca20e19f7a626f4c1c8b26fccd0dc4470
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 14:21:24 +0000

flight 180626 qemu-mainline real [real]
flight 180634 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180626/
http://logs.test-lab.xenproject.org/osstest/logs/180634/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180610

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180610
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180610
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180610
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180610
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180610
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180610
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180610
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180610
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                278238505d28d292927bff7683f39fb4fbca7fd1
baseline version:
 qemuu                d530697ca20e19f7a626f4c1c8b26fccd0dc4470

Last test of basis   180610  2023-05-10 23:38:37 Z    1 days
Testing same since   180621  2023-05-11 14:27:55 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dr. David Alan Gilbert <dave@treblig.org>
  Jamie Iles <quic_jiles@quicinc.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lukas Straub <lukasstraub2@web.de>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1008 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 12 14:35:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 14:35:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533844.830797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxTs9-0008R9-Au; Fri, 12 May 2023 14:35:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533844.830797; Fri, 12 May 2023 14:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxTs9-0008Qz-6V; Fri, 12 May 2023 14:35:53 +0000
Received: by outflank-mailman (input) for mailman id 533844;
 Fri, 12 May 2023 14:35:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iM5+=BB=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pxTs6-0008Qm-Tl
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 14:35:51 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20618.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 47dd7626-f0d2-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 16:35:46 +0200 (CEST)
Received: from MW4PR03CA0329.namprd03.prod.outlook.com (2603:10b6:303:dd::34)
 by CH3PR12MB9025.namprd12.prod.outlook.com (2603:10b6:610:129::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.24; Fri, 12 May
 2023 14:35:44 +0000
Received: from CO1NAM11FT039.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:dd:cafe::2b) by MW4PR03CA0329.outlook.office365.com
 (2603:10b6:303:dd::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.24 via Frontend
 Transport; Fri, 12 May 2023 14:35:44 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT039.mail.protection.outlook.com (10.13.174.110) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.24 via Frontend Transport; Fri, 12 May 2023 14:35:43 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 12 May
 2023 09:35:42 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 12 May
 2023 07:35:42 -0700
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 12 May 2023 09:35:41 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47dd7626-f0d2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i5zHy2fZgZP7Zab2idBBOh+wAGtvokifu+k2p1p14sfuGFczqJr4gfBekRzWMACYf8Kh0+yOIRdREnWuXEvd36tWhvF/QmztdVgSw1Uh+u+loWG63iIUD5EIqjEjq7YhLMyGAHEi7u1xwXyKs7CRFSvW+DUuTrDfH+cNivbY8M7kv4umr51PLaq2C5sfsgnUt5SON+XdMFCWgbM93p5SykZ5tjCr0DQPUlqB7m2GQttsZUxRsNH5mojN9+JIiX9kSlUol1a2LPS1u7iV5LkDcBRO5OX5DiF7RVbt+2I6jabCR3T84UGCBhd5NqjQx2b9wKFRr0vDdZ79gjnJyOun4Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Z6zWCj0pfBOh+0Xh2L/OjheKNQMni1Q3R+HvQiATYJI=;
 b=GHjI84pCUIxLt/fXNapHLsu7Dyz7DRdB2D7szX0AzdxeYeXBuk/hh8ykH8bHIV7IM7s7e5FTgfdVDX4EJ0+FfVpsEzEZwma4AqB4eR4Mlaw6ypnSiHJbN59kOaRhBfvDuASFHgewTg1vwZxEhNOXuCaZPXXZUjoCZSN8kUul6HcfVBezTC5X0eQD3bRh5/6h8X9gq4JU1IIfWwEIxNlOkqOYR1kZO9MMqvNPEWSB6PyoiQYH1I6UB/UYTV7/8ID33Mqbs5i/7hsqhY19XAQmGEsUPT5s5jOuRC5lLHU3W27twUAJO3YEm6enKHJ2S8Gsu4mxLGsZU0QUum3Xf6uNvw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Z6zWCj0pfBOh+0Xh2L/OjheKNQMni1Q3R+HvQiATYJI=;
 b=yqGvyC6OMSDaw0mIGyAclNg3Nxq6tufycV9/+hroiVEThSanda8giz81BKcM4FNtprgoyIckoGIKrYi9lJQwlYiYm59Fm8qe7oUqgnqCfWVmDzJmxufAANq+EsoE+aM0ZlZIjrhqWMUEHhKmDsczCCH7/5u0YcWJ9J7FSxe1I/k=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Rahul Singh <rahul.singh@arm.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 0/2] xen/arm: smmuv3: Advertise coherent table walk
Date: Fri, 12 May 2023 16:35:33 +0200
Message-ID: <20230512143535.29679-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT039:EE_|CH3PR12MB9025:EE_
X-MS-Office365-Filtering-Correlation-Id: f5e8324d-69b3-4236-61fc-08db52f62ada
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	dJM/ZEWvuCO6cza77a6RW4YhGSdEHB3U4X24svcTKxcduNi7ihl0k7Yy6+rjrGnfTzibHQAwGi8LJjM4RQXcfZG/65jj/SRNmGKT1YnN+jsan1rka9tmIqw5xFqZePtWK43tzqXdFm0OBgTs1qt+bVHMzKn8y502xLcFbxYaudhci4zUdCUmbXKu/9BblxFDsS5Kr32CTdm+bfwnY1+KyV8+jzjGMkh6/uldlAgo2AGmbluTZ7MKIi+vlyLIkZMthLLqWhkMMnNSU19DgOubsYG8HBjnotHP9rpqwSCNPuCRqtgxP7L9xxoAZFncZ3eSP0jgM7F33EBLgIYzAMK8KBqH4eI7T44k9RTx2SPHeMfuOSYL2jbkyB4JX2rn7ecckp2UcARdSL0MvIB9KgDE0MqEKEF5jpvCSKJ0/epD8pMDi8Ei626yIVg+K5myz0E3jj9jVdFIZPtmp5rPq+uWdaDRLe9W2m4CBlywz4fZTsCKUeFX0sDDwvhjRtuQLWK42vpQfyV2zIEEKBWjdeapc7hGdHPnyhmljl2lbzTGS6/KFeTpZYnglASk0TkMvDIqp6vddJnonU5jL4lySr9vbSpGs6XzL6/lEIEDLXjOyXwKpzYkzYhp+eySKEcDqkIZsI27NnDI1R+Fis1MjfW0cc2NiclLRVP6JrNP/yp5QEHtoAZnjYgoTrtdi4GDB9yuo0dhPr7SW+gp+A+TwrG9PAvvXPy6z4Pe0whgXN09tmTn86tDI2cfzEJbki/gI9/6ze53agk7esktCzcVMciNGw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(346002)(39860400002)(396003)(451199021)(46966006)(36840700001)(40470700004)(82310400005)(40460700003)(54906003)(316002)(41300700001)(6666004)(36756003)(478600001)(86362001)(70206006)(70586007)(6916009)(40480700001)(1076003)(26005)(2616005)(186003)(83380400001)(4326008)(82740400003)(336012)(426003)(81166007)(356005)(5660300002)(8676002)(8936002)(44832011)(47076005)(36860700001)(4744005)(2906002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 14:35:43.5725
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f5e8324d-69b3-4236-61fc-08db52f62ada
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT039.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB9025

Based on the work done for SMMU v1,v2 by commit:
080dcb781e1bc3bb22f55a9dfdecb830ccbabe88

Michal Orzel (2):
  xen/arm: smmuv3: Constify arm_smmu_get_by_dev() parameter
  xen/arm: smmuv3: Advertise coherent table walk if supported

 xen/drivers/passthrough/arm/smmu-v3.c | 28 ++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 12 14:35:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 14:35:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533845.830803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxTs9-0008UY-JQ; Fri, 12 May 2023 14:35:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533845.830803; Fri, 12 May 2023 14:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxTs9-0008Sl-Dt; Fri, 12 May 2023 14:35:53 +0000
Received: by outflank-mailman (input) for mailman id 533845;
 Fri, 12 May 2023 14:35:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iM5+=BB=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pxTs7-0008Qm-Ig
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 14:35:51 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2061a.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::61a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4929d9d5-f0d2-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 16:35:49 +0200 (CEST)
Received: from MW4P222CA0023.NAMP222.PROD.OUTLOOK.COM (2603:10b6:303:114::28)
 by DS0PR12MB7945.namprd12.prod.outlook.com (2603:10b6:8:153::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22; Fri, 12 May
 2023 14:35:46 +0000
Received: from CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:114:cafe::5c) by MW4P222CA0023.outlook.office365.com
 (2603:10b6:303:114::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.24 via Frontend
 Transport; Fri, 12 May 2023 14:35:46 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT085.mail.protection.outlook.com (10.13.174.137) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.25 via Frontend Transport; Fri, 12 May 2023 14:35:45 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 12 May
 2023 09:35:44 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 12 May
 2023 07:35:44 -0700
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 12 May 2023 09:35:42 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4929d9d5-f0d2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZcUfNV3Eu4n4M078M5k5Wi7ZiitlFG6x6XAU+yZ7CH+xOcYeQ4kNzOkCL0HZw1EP1vcLvx4ZH7yuvBNmvVu8ySqXp2/NHfAx0rexXLnwvzoSnnZAG9Qs7yHqgF2k2y1+LxBCqXvpxjfzyk8YyiBJ2GOuo8OEwR+W87O5iN5hpiCloDeQJdvBPZBQP9D0jBNB7HA4f2O/5kvrmJUoGAJ0n8cgc7Mxno4NGhcJsqCGwHyWjUV2MwJrroDWHdiyEVnLWCTb7WapYP+ACn8M5J47H3/CCUBNtgBC7/cfS5/Xmu2Cf98V9HoDaEEuEQS0Qkgg6xi5S1r9rK+IX8jWPA/MMQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+EOFf57ukdpYkZPaaSySVqh3AqlVA38LGW3V5B/W0s8=;
 b=LQ5UIiKWIIa58EokrfDNQ9Fkmgmvi24rhB5YhqgBV8ZSmjVkmMBZaJLlUGEOY3cP2XQMKetq+TMa1KiuNQ7ee67t1aUif62dcZrNMdvSXqCtY/kMTKNS3XNXVbpwZD7jJgXhQKeGHlaLxJNu3NRRNvmFg/YPoStAt5DATst6svh9ySY+XuP28ULUIC2KsuViI86xlRCgKEPmlNFZHAIWTXUjViBMjTMG1BygVwpaB2xB8bPHbh0k5SGOUWFM4laJBj3VpbOAVFX3GoSNMG1I65dZuMHn1J3d6c96v5nPNVc83xJm1NLeRhuxxFD1mKkZOkI46RNsveFtlFrk+daD0w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+EOFf57ukdpYkZPaaSySVqh3AqlVA38LGW3V5B/W0s8=;
 b=h5pVzONiBlb8V2QyBzTIUel81nDn+a5fEN+PLBpxE2wo8zMd/H/0lu1G9AGBXk6qdFD1KtDQe8KlT30EvpqTPvxqhh5oLRVlDViOWTkZt5jLvlBAruN+7x6gkggBwbTPlWcjfa/jaOkzUwVt85jNuJpfHt7jOCJwELCWZ/wVeS8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Rahul Singh <rahul.singh@arm.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 1/2] xen/arm: smmuv3: Constify arm_smmu_get_by_dev() parameter
Date: Fri, 12 May 2023 16:35:34 +0200
Message-ID: <20230512143535.29679-2-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230512143535.29679-1-michal.orzel@amd.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT085:EE_|DS0PR12MB7945:EE_
X-MS-Office365-Filtering-Correlation-Id: 7db70770-7d3a-4302-02a7-08db52f62bea
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	x+XFmjcKupki8ViUA15LmCEOxre/e1uaatk2wjD+GZyo9L9EkZ/j3HQWkNK37KDdGrK5tp2B7zHDVozEhAjE34rXGPsLKzakY3CC9Z9WxMtdrY411j4lG0uYTVVj1V3gS8cSHeh3WDaqTHGPhnyQ3u/3IHJP2ym54K3UezfU0qQRRFTV2Gl+/ohIm0c0TK4hKMyTlwOGvqihSlXoA7Xjfg0Z+uZ7Zz/Z4rlI+LW3ADhj6eLTZosRjqBDh7XTEKwYOeOg118+z0aF5Ifpmml5zO5SdSdZ1YqJ2Wt2cMq5l7hKkSDknxvRQ917QftdOgVBZq3Bf6+5WrhupNHzj9Kez2dmNngnQao7YTa/ak+sEtg6jlt8VDrJpeTJOkOyjnBUwtUqHMLghPLpasQNSiS0nCj4Kterh5KU23Squ3WKJALqAoEHek5qQWZgEOTGA/64IlQyjbMxmKqMPy4LSYjftVawilH/f9yeSPe3YXwtXd0Ymi8ykj5FqVw2HBdGw12dRl3GRScUH0m/XG27O0Wf9yhBSJVRO6wFa87Q9/GoowZ3c3jI2bysFWfH1BiKNCn+wUCTkxXfiOX20KDFQRo4oQ9D5v+4dKe4njpcqArLeJYVK2osnkgjdmn4sB80XGU/PWpRG7zS4fY8MmviuGYYObGa4CLN7+UpT0877g2NOYS6YOfPvmCl5RsQJJIjsj/bAkFsK9cHY5vlUgyFCy/x8NwGj03r8tyMTgVhKAnYiI5F0HazLHBnEafhsPzbmKAyOZj2FR83mqkqh2hMBBrgow==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(376002)(136003)(346002)(451199021)(46966006)(40470700004)(36840700001)(54906003)(86362001)(82310400005)(82740400003)(478600001)(316002)(4326008)(6916009)(70206006)(70586007)(6666004)(356005)(5660300002)(41300700001)(81166007)(44832011)(40460700003)(26005)(8676002)(336012)(1076003)(8936002)(36756003)(186003)(426003)(83380400001)(47076005)(2906002)(2616005)(36860700001)(40480700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 14:35:45.3683
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7db70770-7d3a-4302-02a7-08db52f62bea
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT085.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB7945

This function does not modify its parameter 'dev', nor is it supposed
to. Therefore, constify it.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 xen/drivers/passthrough/arm/smmu-v3.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index bfdb62b395ad..bf053cdb6d5c 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -1468,7 +1468,7 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 	return sid < limit;
 }
 /* Forward declaration */
-static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev);
+static struct arm_smmu_device *arm_smmu_get_by_dev(const struct device *dev);
 
 static int arm_smmu_add_device(u8 devfn, struct device *dev)
 {
@@ -2556,7 +2556,7 @@ static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
 	return arm_smmu_iotlb_flush_all(d);
 }
 
-static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev)
+static struct arm_smmu_device *arm_smmu_get_by_dev(const struct device *dev)
 {
 	struct arm_smmu_device *smmu = NULL;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 12 14:35:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 14:35:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533846.830817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxTsA-0000UT-QY; Fri, 12 May 2023 14:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533846.830817; Fri, 12 May 2023 14:35:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxTsA-0000UM-Nf; Fri, 12 May 2023 14:35:54 +0000
Received: by outflank-mailman (input) for mailman id 533846;
 Fri, 12 May 2023 14:35:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iM5+=BB=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pxTs9-0008Qm-5H
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 14:35:53 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eae::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 49a3dc77-f0d2-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 16:35:49 +0200 (CEST)
Received: from MW4PR03CA0212.namprd03.prod.outlook.com (2603:10b6:303:b9::7)
 by MN0PR12MB6367.namprd12.prod.outlook.com (2603:10b6:208:3d3::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.21; Fri, 12 May
 2023 14:35:47 +0000
Received: from CO1NAM11FT076.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:b9:cafe::3e) by MW4PR03CA0212.outlook.office365.com
 (2603:10b6:303:b9::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.24 via Frontend
 Transport; Fri, 12 May 2023 14:35:47 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT076.mail.protection.outlook.com (10.13.174.152) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.23 via Frontend Transport; Fri, 12 May 2023 14:35:46 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 12 May
 2023 09:35:46 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 12 May
 2023 07:35:45 -0700
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Fri, 12 May 2023 09:35:44 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49a3dc77-f0d2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f0Tzh6k/Bu/yPrZNXWIm+QmDZqqEZwDPd5G+bQpJthVEBJzewbFwpcW7S4i0Yvc/Rc9uyLOfiyZOZQnz3RGHC9D8qx/aQKcROXlMl2d5VtHriM/yvlk3IV4Fe6YtZjvxr9XH1cwisjwOR3jPq3l575/7jzdsGsGGG6jhSyfTJA0imK7fT7HU/OEEUrmZM8MB1RucFEb/uABqw8QvhAzuGOdhe6Y8+A12/ja4qaV7RxB3VWdaYklA3khtn589nPqUCCluxh/WsbphDxHAjXf4fAmkZSbBWMPYgfFxeuExjwiZnEw1H/ehhu5i5rYkFi7aEkLQ1Z6Nq2ALfYi22WtuxQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ccLxKQQiah6/PEdbn1fz+Wj01++EGami0z+kF1CwjS0=;
 b=YwzYZXatsoAoW44buJ0ksHkBT5Mvzu0fPDcApSswtKCg/mMREZlTcDG2icRIhZsDi/9dQXOPDLVh3ptlUsNnjmn+DQYs4yeCzIepimSqPR9KdnJI3vN2EUrrGJmQCh4iXxarG7icJ8rVkB29vL6adOXWsJWhS+Sf9XXS6Fx1eZvhM6p6KbZyE3fdpJpWydnVF1rdc55KcdiBIg/fN2imestPqjvStSaBp8KoF+cx/SF7dKd81ev/FPHe04xDk/8ZBVzOvKbJDuvnk3l6wTRhJyRPStHSsyTXCvz1OPPlLu87h3lNnC4k6cS+CysSfHObWSKC/LL4ZJugkYJhXh+Tpw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ccLxKQQiah6/PEdbn1fz+Wj01++EGami0z+kF1CwjS0=;
 b=LdJV2ofAhTabjhQZc5daUtxocD8YA4LOfsSZSETw1jWBlfodY+IAZcQYO7JTWC0XiTX2Gu3QIqDmeN5q5Kd3VdSinrji+KQ3/BkNSDFBQefkADfb7QBjsE20Kx+TRus8NrTvQ662eyPXo4LwEsh95fMYcuBn4os+I6VHm+iQGRw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Rahul Singh <rahul.singh@arm.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if supported
Date: Fri, 12 May 2023 16:35:35 +0200
Message-ID: <20230512143535.29679-3-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230512143535.29679-1-michal.orzel@amd.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT076:EE_|MN0PR12MB6367:EE_
X-MS-Office365-Filtering-Correlation-Id: 15f307bf-0df1-4b57-5f94-08db52f62ccb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	E7FsncrfND1k6Ehp+lISo5GtQyyMH6ME2ymQMEQI10//pmQji4defAEowZ3mLXfSzIWja/ilA45mksxxMyu2zq4PmlMizXLwQrGL8uMNYPnH7K9E1VxMmBmAVyxkLAWUIoy9c6Fzo6RGqys2hrL1WGzx2/qNNTUgqsAICbVndaOmVTxJS/UAiHHxB5qg57ORgRgmWAg1FQvNMe3Y0iys7s5ESqlpZYBnAKBEswjLrQNeqxZgCK+2TA1hmI/w5V9Z7HUJi+8q8myGG7QUQbUsiuu6yLgNUBs5OYtopQT35Tt64KE1gN1IhO4grlPmhyTt7TccJAc5O1HTtYaGWzXDztdjzZqHAfql0oHD+KbzkcHcEHoBdQ+fjX5R1Y1JNl+7xrSXV8hJzCdxuArlC2i1bXqNRqN0qQZbUU//E8JPOe82I10blMw0jnrP373d1/cdBjujwW8GPVA9y46ri8e3byDvwNfbozwL4OhyUPGs0MW/15JfGvvvViZRmGApD+u7zu0P51ls/q/pIT5P0/bW18whBCZjL8FWodeGikg7yxpRNL4roJAVcBGXdb/Q9WEbkTzz8h1rJHJ4vK7Lx4gT4V00ZlPpHaJ+FVbhYnGKE9B7ztYm8bG9QfIOD48E8XLScE3+Egnz5G3P1mCAzA54CsYUn0krkRD8OpYXhkY/qDUlJbF2Fwdub1dZq8kCp0uZT3sWlO6C+kfgg0+Mm181qwSSDiU424IytDqqRnyIrjjQQKEdBN4j8ARkUalY0K6rzB/QFOe08SGCHu7V5e7t/w==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(136003)(396003)(376002)(451199021)(46966006)(36840700001)(40470700004)(5660300002)(36756003)(44832011)(40460700003)(8936002)(8676002)(47076005)(83380400001)(336012)(36860700001)(186003)(2906002)(2616005)(426003)(81166007)(356005)(82740400003)(82310400005)(86362001)(4326008)(26005)(1076003)(316002)(40480700001)(70206006)(70586007)(54906003)(6916009)(478600001)(6666004)(41300700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 14:35:46.8452
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 15f307bf-0df1-4b57-5f94-08db52f62ccb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT076.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB6367

At the moment, even when an SMMU is I/O coherent, we clean the updated
page tables because the coherency feature is not advertised. The SMMUv3
coherency feature means that page table walks and accesses to memory
structures and queues are I/O coherent (refer to ARM IHI 0070 E.A,
section 3.15).

Follow the same steps as were done for the SMMU v1,v2 driver by commit:
080dcb781e1bc3bb22f55a9dfdecb830ccbabe88

The same restrictions apply: in order to advertise the coherent table
walk platform feature, all the SMMU devices need to report the coherency
feature. This is because the page tables (shared with the CPU) are
populated before any device assignment, and if a device were behind a
non-coherent SMMU, we would have to scan the tables and clean the cache.

Note that the SBSA/BSA (refer to ARM DEN0094C 1.0C, section D) requires
that all SMMUv3 devices support I/O coherency.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
There are very few platforms out there with an SMMUv3, but I have never
seen one that is not I/O coherent.
---
 xen/drivers/passthrough/arm/smmu-v3.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index bf053cdb6d5c..2adaad0fa038 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -2526,6 +2526,15 @@ static const struct dt_device_match arm_smmu_of_match[] = {
 };
 
 /* Start of Xen specific code. */
+
+/*
+ * Platform features. It indicates the list of features supported by all
+ * SMMUs. Actually we only care about coherent table walk, which in case of
+ * SMMUv3 is implied by the overall coherency feature (refer ARM IHI 0070 E.A,
+ * section 3.15 and SMMU_IDR0.COHACC bit description).
+ */
+static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;
+
 static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 {
 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
@@ -2708,8 +2717,12 @@ static int arm_smmu_iommu_xen_domain_init(struct domain *d)
 	INIT_LIST_HEAD(&xen_domain->contexts);
 
 	dom_iommu(d)->arch.priv = xen_domain;
-	return 0;
 
+	/* Coherent walk can be enabled only when all SMMUs support it. */
+	if (platform_features & ARM_SMMU_FEAT_COHERENCY)
+		iommu_set_feature(d, IOMMU_FEAT_COHERENT_WALK);
+
+	return 0;
 }
 
 static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
@@ -2738,6 +2751,7 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
 				const void *data)
 {
 	int rc;
+	const struct arm_smmu_device *smmu;
 
 	/*
 	 * Even if the device can't be initialized, we don't want to
@@ -2751,6 +2765,14 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
 
 	iommu_set_ops(&arm_smmu_iommu_ops);
 
+	/* Find the just added SMMU and retrieve its features. */
+	smmu = arm_smmu_get_by_dev(dt_to_dev(dev));
+
+	/* It would be a bug not to find the SMMU we just added. */
+	BUG_ON(!smmu);
+
+	platform_features &= smmu->features;
+
 	return 0;
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 12 15:25:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 15:25:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533857.830827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxUdd-0007Dz-Cu; Fri, 12 May 2023 15:24:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533857.830827; Fri, 12 May 2023 15:24:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxUdd-0007Ds-9R; Fri, 12 May 2023 15:24:57 +0000
Received: by outflank-mailman (input) for mailman id 533857;
 Fri, 12 May 2023 15:24:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxUdb-0007Di-KF; Fri, 12 May 2023 15:24:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxUdb-0007GL-HK; Fri, 12 May 2023 15:24:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxUdb-00068N-2Q; Fri, 12 May 2023 15:24:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxUdb-0001xb-1w; Fri, 12 May 2023 15:24:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ks/U3sSgVal9DpYPuoGg79a0wQeJtdlaW2djJEtjvTk=; b=UE5f3q4KKSIrHNBAtmuYDfWUM4
	kU2JJ2r2QDg3hasOvyFJzmuUOlpaZCm1KcWYO6Lhig0FdI5KXUD1WDMKh8FZG+CnzP1qIZ5G9nfvc
	Zq8Jz+Mtcv7n3kZKrgz8ia5x67C60n0Qp8u3D0mZ2f4DHhinPn3B0ccbsPt8lMEJNrMg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180628-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180628: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=517d76466b2b798e492cde18b61e4b6db8bd4ce1
X-Osstest-Versions-That:
    libvirt=db91bf2ba31766809f901e9b2bd02b4a9c917300
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 15:24:55 +0000

flight 180628 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180628/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180613
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180613
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180613
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180613
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass

version targeted for testing:
 libvirt              517d76466b2b798e492cde18b61e4b6db8bd4ce1
baseline version:
 libvirt              db91bf2ba31766809f901e9b2bd02b4a9c917300

Last test of basis   180613  2023-05-11 04:19:02 Z    1 days
Testing same since   180628  2023-05-12 04:18:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   db91bf2ba3..517d76466b  517d76466b2b798e492cde18b61e4b6db8bd4ce1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 12 15:40:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 15:40:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533865.830837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxUsa-0001aE-QM; Fri, 12 May 2023 15:40:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533865.830837; Fri, 12 May 2023 15:40:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxUsa-0001a7-Nj; Fri, 12 May 2023 15:40:24 +0000
Received: by outflank-mailman (input) for mailman id 533865;
 Fri, 12 May 2023 15:40:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxUsZ-0001Zx-0u; Fri, 12 May 2023 15:40:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxUsY-0007eg-U9; Fri, 12 May 2023 15:40:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxUsY-0006hm-HW; Fri, 12 May 2023 15:40:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxUsY-0007S2-GE; Fri, 12 May 2023 15:40:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180635-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180635: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d3225577123767fd09c91201d27e9c91663ae132
X-Osstest-Versions-That:
    ovmf=0b37723186ec1525b6caf14b0309fb0ed04084d7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 15:40:22 +0000

flight 180635 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180635/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d3225577123767fd09c91201d27e9c91663ae132
baseline version:
 ovmf                 0b37723186ec1525b6caf14b0309fb0ed04084d7

Last test of basis   180629  2023-05-12 06:42:08 Z    0 days
Testing same since   180635  2023-05-12 13:40:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0b37723186..d322557712  d3225577123767fd09c91201d27e9c91663ae132 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 12 15:54:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 15:54:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533871.830847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxV6Q-0003HL-36; Fri, 12 May 2023 15:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533871.830847; Fri, 12 May 2023 15:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxV6P-0003HE-Vd; Fri, 12 May 2023 15:54:41 +0000
Received: by outflank-mailman (input) for mailman id 533871;
 Fri, 12 May 2023 15:54:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rLzh=BB=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pxV6O-0003H8-1s
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 15:54:40 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20611.outbound.protection.outlook.com
 [2a01:111:f400:7e88::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 44d493a9-f0dd-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 17:54:25 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by CH2PR12MB4053.namprd12.prod.outlook.com (2603:10b6:610:7c::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.24; Fri, 12 May
 2023 15:54:22 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::ef8d:bf8a:d296:ec2c]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::ef8d:bf8a:d296:ec2c%7]) with mapi id 15.20.6363.033; Fri, 12 May 2023
 15:54:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44d493a9-f0dd-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <0147e7c2-32b5-9185-b269-fb890ba6dfc9@amd.com>
Date: Fri, 12 May 2023 16:54:15 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] xen/arm: smmuv3: Constify arm_smmu_get_by_dev()
 parameter
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-2-michal.orzel@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230512143535.29679-2-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P302CA0007.GBRP302.PROD.OUTLOOK.COM
 (2603:10a6:600:2c2::14) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|CH2PR12MB4053:EE_
X-MS-Office365-Filtering-Correlation-Id: 60f11a52-48ab-42ef-b784-08db53012712
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 60f11a52-48ab-42ef-b784-08db53012712
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 15:54:22.1455
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +/yHBkDsf7if7Eh2EjgbmMiiXMWSMvt907ti7WU53SdqxejnBvCJdP7TfVvDzU6JU2OW1zE/HfjlUxqUbhh/TA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4053


On 12/05/2023 15:35, Michal Orzel wrote:
> This function does not modify its parameter 'dev' and is not supposed
> to. Therefore, constify it.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>


From xen-devel-bounces@lists.xenproject.org Fri May 12 17:00:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 17:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533877.830863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxW7k-0003aZ-1u; Fri, 12 May 2023 17:00:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533877.830863; Fri, 12 May 2023 17:00:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxW7j-0003aS-Tz; Fri, 12 May 2023 17:00:07 +0000
Received: by outflank-mailman (input) for mailman id 533877;
 Fri, 12 May 2023 17:00:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rLzh=BB=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pxW7Y-0002k1-Ai
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 17:00:06 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20627.outbound.protection.outlook.com
 [2a01:111:f400:7e89::627])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 51d14ad0-f0e6-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 18:59:13 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by DM8PR12MB5432.namprd12.prod.outlook.com (2603:10b6:8:32::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.22; Fri, 12 May
 2023 16:59:09 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::ef8d:bf8a:d296:ec2c]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::ef8d:bf8a:d296:ec2c%7]) with mapi id 15.20.6363.033; Fri, 12 May 2023
 16:59:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51d14ad0-f0e6-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <1dadc8b9-00be-55f9-e8b7-f867eacf20b1@amd.com>
Date: Fri, 12 May 2023 17:59:02 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <20230512143535.29679-3-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0411.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::20) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|DM8PR12MB5432:EE_
X-MS-Office365-Filtering-Correlation-Id: 930343de-02a9-4620-67db-08db530a33ed
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 930343de-02a9-4620-67db-08db530a33ed
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 16:59:09.1811
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KJvIVsdg9UiPpN0Yn6Q8VClry3/y8ubZk580L+UVStau/d2EEwYuSTWQAw5OJ3HT/Q9yb0yAR1sD65t2oobIXQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM8PR12MB5432

Hi Michal,

On 12/05/2023 15:35, Michal Orzel wrote:
> At the moment, even in the case of an SMMU being I/O coherent, we clean
> the updated PT as a result of not advertising the coherency feature. The
> SMMUv3 coherency feature means that page table walks and accesses to
> memory structures and queues are I/O coherent (refer to ARM IHI 0070
> E.A, 3.15).
>
> Follow the same steps that were done for the SMMU v1/v2 driver by commit
> 080dcb781e1bc3bb22f55a9dfdecb830ccbabe88.
>
> The same restrictions apply, meaning that in order to advertise the
> coherent table walk platform feature, all the SMMU devices need to
> report the coherency feature. This is because the page tables (we share
> them with the CPU) are populated before any device assignment, and in
> the case of a device sitting behind a non-coherent SMMU, we would have
> to scan the tables and clean the cache.
>
> It is to be noted that the SBSA/BSA (refer to ARM DEN0094C 1.0C,
> section D) requires that all SMMUv3 devices support I/O coherency.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> There are very few platforms out there with SMMUv3, but I have never
> seen an SMMUv3 that is not I/O coherent.
> ---
>   xen/drivers/passthrough/arm/smmu-v3.c | 24 +++++++++++++++++++++++-
>   1 file changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index bf053cdb6d5c..2adaad0fa038 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -2526,6 +2526,15 @@ static const struct dt_device_match arm_smmu_of_match[] = {
>   };
>   
>   /* Start of Xen specific code. */
> +
> +/*
> + * Platform features. It indicates the list of features supported by all
> + * SMMUs. Actually we only care about coherent table walk, which in case of
> + * SMMUv3 is implied by the overall coherency feature (refer ARM IHI 0070 E.A,
> + * section 3.15 and SMMU_IDR0.COHACC bit description).
> + */
> +static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;
> +
>   static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
>   {
>   	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> @@ -2708,8 +2717,12 @@ static int arm_smmu_iommu_xen_domain_init(struct domain *d)
>   	INIT_LIST_HEAD(&xen_domain->contexts);
>   
>   	dom_iommu(d)->arch.priv = xen_domain;
> -	return 0;
>   
> +	/* Coherent walk can be enabled only when all SMMUs support it. */
> +	if (platform_features & ARM_SMMU_FEAT_COHERENCY)
> +		iommu_set_feature(d, IOMMU_FEAT_COHERENT_WALK);
> +
> +	return 0;
>   }
>   
All good till here.
>   static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
> @@ -2738,6 +2751,7 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>   				const void *data)
>   {
>   	int rc;
> +	const struct arm_smmu_device *smmu;
>   
>   	/*
>   	 * Even if the device can't be initialized, we don't want to
> @@ -2751,6 +2765,14 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>   
>   	iommu_set_ops(&arm_smmu_iommu_ops);
>   
> +	/* Find the just added SMMU and retrieve its features. */
> +	smmu = arm_smmu_get_by_dev(dt_to_dev(dev));
> +
> +	/* It would be a bug not to find the SMMU we just added. */
> +	BUG_ON(!smmu);
> +
> +	platform_features &= smmu->features;
> +

Can you explain this change in the commit message?

- Ayan

>   	return 0;
>   }
>   


From xen-devel-bounces@lists.xenproject.org Fri May 12 19:49:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 19:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533888.830879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxYl7-0003E8-QU; Fri, 12 May 2023 19:48:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533888.830879; Fri, 12 May 2023 19:48:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxYl7-0003E1-No; Fri, 12 May 2023 19:48:57 +0000
Received: by outflank-mailman (input) for mailman id 533888;
 Fri, 12 May 2023 19:48:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t8+i=BB=kernel.org=helgaas@srs-se1.protection.inumbo.net>)
 id 1pxYl6-0003Dv-9O
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 19:48:56 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 05a12523-f0fe-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 21:48:53 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id DFCC365849;
 Fri, 12 May 2023 19:48:51 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 04CD3C433D2;
 Fri, 12 May 2023 19:48:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05a12523-f0fe-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683920931;
	bh=UJAu/sEPUEjYwNghUWoNUhNjtFQtSfleebB2APfRPuI=;
	h=Date:From:To:Cc:Subject:In-Reply-To:From;
	b=izATGEJPbIV3K4bCaiKo7Hs1883mTtDq56tAMqR9b8VXex2H1Ta5qXbJ34DUMlipI
	 hkP36Hf18gfQ/c5WhLTbqHxqmGDxT8c6GZsTNVnFEdOQaZaJ/oYwHmrit1EAw1x+1Y
	 2z22L2JvUWHTpA485tRqm3EMRgDgtReOCU64LBI1xffqjx2Y+7vetzAwosC65+lInq
	 x9YdWwtXI3O85iO05qxJMbcPnrLIE+gUjS7LacDVi8KOmMYZ7RSibPwc0Q194zjna+
	 tNHv7vuMAsogyAVpsXpRcwcmIepl/BM1Pky4mWnjJL9I8gbCdKyEWdBqbLqeB9zbcY
	 repkA1Uz/worA==
Date: Fri, 12 May 2023 14:48:49 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Rich Felker <dalias@libc.org>, linux-sh@vger.kernel.org,
	linux-pci@vger.kernel.org,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-kernel@vger.kernel.org,
	=?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Andrew Lunn <andrew@lunn.ch>, sparclinux@vger.kernel.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Gregory Clement <gregory.clement@bootlin.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Russell King <linux@armlinux.org.uk>, linux-acpi@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>, xen-devel@lists.xenproject.org,
	Matt Turner <mattst88@gmail.com>,
	Anatolij Gustschin <agust@denx.de>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	linux-arm-kernel@lists.infradead.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	linuxppc-dev@lists.ozlabs.org, Randy Dunlap <rdunlap@infradead.org>,
	linux-mips@vger.kernel.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	linux-alpha@vger.kernel.org,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>
Subject: Re: [PATCH v8 0/7] Add pci_dev_for_each_resource() helper and update
 users
Message-ID: <ZF6YIezraETr9iNM@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ZF4bXaz2r75dlA5g@smile.fi.intel.com>

On Fri, May 12, 2023 at 01:56:29PM +0300, Andy Shevchenko wrote:
> On Tue, May 09, 2023 at 01:21:22PM -0500, Bjorn Helgaas wrote:
> > On Tue, Apr 04, 2023 at 11:11:01AM -0500, Bjorn Helgaas wrote:
> > > On Thu, Mar 30, 2023 at 07:24:27PM +0300, Andy Shevchenko wrote:
> > > > Provide two new helper macros to iterate over PCI device resources and
> > > > convert users.
> > 
> > > Applied 2-7 to pci/resource for v6.4, thanks, I really like this!
> > 
> > This is 09cc90063240 ("PCI: Introduce pci_dev_for_each_resource()")
> > upstream now.
> > 
> > Coverity complains about each use,
> 
> This needs more clarification. Does Coverity flag uses of the reduced
> variant of the macro, or all of them? If the former, then I can
> speculate that Coverity (famous for false positives) simply doesn't
> understand the `for (type var; var ...)` construct.

True, Coverity finds false positives.  It flagged every use in
drivers/pci and drivers/pnp.  It didn't mention the arch/alpha, arm,
mips, powerpc, sh, or sparc uses, but I think it just didn't look at
those.

It flagged both:

  pbus_size_io    pci_dev_for_each_resource(dev, r)
  pbus_size_mem   pci_dev_for_each_resource(dev, r, i)

Here's a spreadsheet with a few more details (unfortunately I don't
know how to make it dump the actual line numbers or analysis like I
pasted below, so "pci_dev_for_each_resource" doesn't appear).  These
are mostly in the "Drivers-PCI" component.

https://docs.google.com/spreadsheets/d/1ohOJwxqXXoDUA0gwopgk-z-6ArLvhN7AZn4mIlDkHhQ/edit?usp=sharing

These particular reports are in the "High Impact Outstanding" tab.

> >	sample below from
> > drivers/pci/vgaarb.c.  I didn't investigate at all, so it might be a
> > false positive; just FYI.
> > 
> > 	  1. Condition screen_info.capabilities & (2U /* 1 << 1 */), taking true branch.
> >   556        if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE)
> >   557                base |= (u64)screen_info.ext_lfb_base << 32;
> >   558
> >   559        limit = base + size;
> >   560
> >   561        /* Does firmware framebuffer belong to us? */
> > 	  2. Condition __b < PCI_NUM_RESOURCES, taking true branch.
> > 	  3. Condition (r = &pdev->resource[__b]) , (__b < PCI_NUM_RESOURCES), taking true branch.
> > 	  6. Condition __b < PCI_NUM_RESOURCES, taking true branch.
> > 	  7. cond_at_most: Checking __b < PCI_NUM_RESOURCES implies that __b may be up to 16 on the true branch.
> > 	  8. Condition (r = &pdev->resource[__b]) , (__b < PCI_NUM_RESOURCES), taking true branch.
> > 	  11. incr: Incrementing __b. The value of __b may now be up to 17.
> > 	  12. alias: Assigning: r = &pdev->resource[__b]. r may now point to as high as element 17 of pdev->resource (which consists of 17 64-byte elements).
> > 	  13. Condition __b < PCI_NUM_RESOURCES, taking true branch.
> > 	  14. Condition (r = &pdev->resource[__b]) , (__b < PCI_NUM_RESOURCES), taking true branch.
> >   562        pci_dev_for_each_resource(pdev, r) {
> > 	  4. Condition resource_type(r) != 512, taking true branch.
> > 	  9. Condition resource_type(r) != 512, taking true branch.
> > 
> >   CID 1529911 (#1 of 1): Out-of-bounds read (OVERRUN)
> >   15. overrun-local: Overrunning array of 1088 bytes at byte offset 1088 by dereferencing pointer r. [show details]
> >   563                if (resource_type(r) != IORESOURCE_MEM)
> > 	  5. Continuing loop.
> > 	  10. Continuing loop.
> >   564                        continue;
> 
> -- 
> With Best Regards,
> Andy Shevchenko
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri May 12 20:23:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 20:23:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533894.830889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZIG-0007df-Ik; Fri, 12 May 2023 20:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533894.830889; Fri, 12 May 2023 20:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZIG-0007dY-Ft; Fri, 12 May 2023 20:23:12 +0000
Received: by outflank-mailman (input) for mailman id 533894;
 Fri, 12 May 2023 20:23:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxZIF-0007dO-8b; Fri, 12 May 2023 20:23:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxZIF-0006Kq-5E; Fri, 12 May 2023 20:23:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxZIE-0001Cg-NE; Fri, 12 May 2023 20:23:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxZIE-0005Hc-Mn; Fri, 12 May 2023 20:23:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y/fiFpA5wZtQ4Moz2ouJWDupa985KmoEdP3EGuzbQZM=; b=oNIXDtExh/lEaWirDLpHh4IoH6
	s6R/3ECa64iAICyN1if+jmQCEXKpbNPtVpAm+DcQJxxyDILHL4EH2poVwF+lSyN3IDYm0vIvPLtxr
	2kCzs0SdOx8nQqqrK6AEpaJKs48z+Bl9hFV95o4iRPzRnMhxWKq9FfsWACkdx8qkUVg4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180632-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180632: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-examine-uefi:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    xen-unstable:test-amd64-i386-freebsd10-i386:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-examine:xen-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cb781ae2c98de5d5742aa0de6850f371fb25825f
X-Osstest-Versions-That:
    xen=cb781ae2c98de5d5742aa0de6850f371fb25825f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 12 May 2023 20:23:10 +0000

flight 180632 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180632/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 180624 pass in 180632
 test-amd64-i386-examine-uefi  6 xen-install                fail pass in 180624
 test-amd64-i386-pair         10 xen-install/src_host       fail pass in 180624
 test-amd64-i386-freebsd10-i386  7 xen-install              fail pass in 180624

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine       6 xen-install                  fail  like 180624
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180624
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180624
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180624
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180624
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180624
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180624
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180624
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180624
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180624
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180624
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180624
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180624
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  cb781ae2c98de5d5742aa0de6850f371fb25825f
baseline version:
 xen                  cb781ae2c98de5d5742aa0de6850f371fb25825f

Last test of basis   180632  2023-05-12 08:12:24 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:03:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:03:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533900.830898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZvJ-0003ad-My; Fri, 12 May 2023 21:03:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533900.830898; Fri, 12 May 2023 21:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZvJ-0003aW-KF; Fri, 12 May 2023 21:03:33 +0000
Received: by outflank-mailman (input) for mailman id 533900;
 Fri, 12 May 2023 21:03:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X/Ay=BB=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pxZvI-0003aQ-E1
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:03:32 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70e92bd1-f108-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:03:28 +0200 (CEST)
Received: from MW4PR04CA0056.namprd04.prod.outlook.com (2603:10b6:303:6a::31)
 by DM4PR12MB7573.namprd12.prod.outlook.com (2603:10b6:8:10f::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.20; Fri, 12 May
 2023 21:03:25 +0000
Received: from CO1NAM11FT010.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:6a:cafe::3f) by MW4PR04CA0056.outlook.office365.com
 (2603:10b6:303:6a::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.24 via Frontend
 Transport; Fri, 12 May 2023 21:03:25 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT010.mail.protection.outlook.com (10.13.175.88) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6387.24 via Frontend Transport; Fri, 12 May 2023 21:03:25 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 12 May
 2023 16:03:21 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 12 May
 2023 16:03:21 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 12 May 2023 16:03:19 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70e92bd1-f108-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9VPlWYf129wYlKBXI6XeX6da7R6uiXrDylbJaC8/Pr8=;
 b=pe+h2PTvED0hA0k/WpmAFgN7sns8tgzxiAdRqrJ3PqOjNb0nANicmnzsE917fftKPwoEg8MS+LFlT/TypqLnLRU/9/yrVgIMPvQDeOwzYmFQuWWQY5P2/FiaqhQ0DKs72Nt63+S9wW4s9ntcv5JgaP6ARBR0lx008g88eG3izec=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <f7d78b4e-3a16-d342-59d2-caa4d2b75b9c@amd.com>
Date: Fri, 12 May 2023 17:03:18 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH v2 5/8] pci/arm: Use iommu_add_dt_pci_device()
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant
	<paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Julien Grall <julien@xen.org>, Rahul Singh <rahul.singh@arm.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<xen-devel@lists.xenproject.org>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
 <20230511191654.400720-6-stewart.hildebrand@amd.com>
 <61ae93e8-ac8f-b373-4fa7-0a8aeb61ef4f@suse.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <61ae93e8-ac8f-b373-4fa7-0a8aeb61ef4f@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT010:EE_|DM4PR12MB7573:EE_
X-MS-Office365-Filtering-Correlation-Id: c9f25149-2073-482f-be1e-08db532c53c1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2023 21:03:25.0523
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c9f25149-2073-482f-be1e-08db532c53c1
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT010.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB7573

On 5/12/23 03:25, Jan Beulich wrote:
> On 11.05.2023 21:16, Stewart Hildebrand wrote:
>> @@ -762,9 +767,20 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>>              pdev->domain = NULL;
>>              goto out;
>>          }
>> +#ifdef CONFIG_HAS_DEVICE_TREE
>> +        ret = iommu_add_dt_pci_device(pdev);
>> +        if ( ret < 0 )
>> +        {
>> +            printk(XENLOG_ERR "pci-iommu translation failed: %d\n", ret);
>> +            goto out;
>> +        }
>> +#endif
>>          ret = iommu_add_device(pdev);
> 
> Hmm, am I misremembering that in the earlier patch you had #else to
> invoke the alternative behavior?

You are remembering correctly. v1 had an #else, v2 does not.

> Now you end up calling both functions;
> if that's indeed intended,

Yes, this is intentional.

> this may still want doing differently.
> Looking at the earlier patch introducing the function, I can't infer
> though whether that's intended: iommu_add_dt_pci_device() checks that
> the add_device hook is present, but then I didn't find any use of this
> hook. The revlog there suggests the check might be stale.

Ah, right, the ops->add_device check is stale in the other patch. Good catch, I'll remove it there.

> If indeed the function does only preparatory work, I don't see why it
> would need naming "iommu_..."; I'd rather consider pci_add_dt_device()
> then.

The function has now been reduced to reading SMMU configuration data from DT and mapping RID/BDF -> AXI stream ID. However, it is still SMMU-related, and it still invokes another iommu_ops hook, dt_xlate (which performs yet another AXI stream ID translation, separate from what is being discussed here). Does this justify keeping "iommu_..." in the name? I'm not convinced pci_add_dt_device() is a good name for it either (more on this below).

> Plus in such a case #ifdef-ary here probably wants avoiding by
> introducing a suitable no-op stub for the !HAS_DEVICE_TREE case. Then
> ...
> 
>>          if ( ret )
>>          {
>> +#ifdef CONFIG_HAS_DEVICE_TREE
>> +            iommu_fwspec_free(pci_to_dev(pdev));
>> +#endif
> 
> ... this (which I understand is doing the corresponding cleanup) then
> also wants wrapping in a suitably named tiny helper function.

Sure, I'm on board with eliminating/reducing the #ifdef-ary where possible. Will do.

> But yet further I'm then no longer convinced this is the right place
> for the addition. pci_add_device() is backing physdev hypercalls. It
> would seem to me that the function may want invoking yet one layer
> further up, or it may even want invoking from a brand new DT-specific
> physdev-op. This would then leave at least the x86-only paths (invoking
> pci_add_device() from outside of pci_physdev_op()) entirely alone.

Let's establish that pci_add_device()/iommu_add_device() are already inherently performing tasks related to setting up a PCI device to work with an IOMMU.

The preparatory work in question needs to happen after:

  pci_add_device()
    -> alloc_pdev()

since we need to know all the possible RIDs (including those for phantom functions), but before the add_device iommu hook:

  pci_add_device()
    -> iommu_add_device()
      -> iommu_call(hd->platform_ops, add_device, ...)


The preparatory work (i.e. mapping RID/BDF -> AXI stream ID) is inherently associated with setting up a PCI device to work with an ARM SMMU (but not with any particular variant of the SMMU). The SMMU distinguishes which PCI device/function a given piece of DMA traffic belongs to based on the derived AXI stream ID (sideband data), not on the RID/BDF directly. See [1].

Moving the preparatory work one layer up would mean duplicating what alloc_pdev() is already doing to set up pdev->phantom_stride (which we need in order to figure out all RIDs for that particular device). Moving it down into the individual SMMU drivers (smmu_ops/platform_ops) would mean duplicating the special phantom function handling in each SMMU driver, further deviating from the Linux SMMU driver(s) they are based on.

It still feels to me like pci_add_device() (or iommu_add_device()) is the right place to perform the RID/BDF -> AXI stream ID mapping.

Since there's nothing inherently DT specific (or ACPI specific) about deriving sideband data from RID/BDF, let me propose a new name for the function (instead of iommu_add_dt_pci_device):

  iommu_derive_pci_device_sideband_IDs()


Now, as far as DT and ACPI co-existing goes, I admit I haven't tested with CONFIG_ACPI=y yet (there seem to be some issues when both CONFIG_ARM_SMMU_V3=y and CONFIG_ACPI=y are enabled, even in staging). But I do recognize that we need a way to support both CONFIG_HAS_DEVICE_TREE=y and CONFIG_ACPI=y simultaneously. Let me think on that for a bit...

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-iommu.txt


From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533906.830908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyg-0004FQ-8v; Fri, 12 May 2023 21:07:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533906.830908; Fri, 12 May 2023 21:07:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyg-0004FI-67; Fri, 12 May 2023 21:07:02 +0000
Received: by outflank-mailman (input) for mailman id 533906;
 Fri, 12 May 2023 21:07:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZye-0004F7-OB
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:01 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ee9a52b6-f108-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:06:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee9a52b6-f108-11ed-8611-37d641c3527e
Message-ID: <20230512203426.452963764@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925617;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc; bh=gnTvflJCY17ngq7v3t1VaZ+udg1fVO8xaD7NmVGfdtM=;
	b=hpItgE596OZsCHwd4RZltNwkaoUY+M4nxaNk5aijZSoKnJnX+Z1ftHJzFxXeLlwiR7CBVO
	kAeU5V6ipd8kuvNmo2jUYlgeJQZcAoQ/B2gtcPpMbKBJ5Hdb19cpdNKmIFAdiRriVPQFBq
	TB+mebv9nD57p/zwHVWvLj1xYW2llen6ecBlhNLpiB4mMX8ISxfhUt0cg8NuDkltKRGqwv
	kwkihKVbLzgwH4zA3a4gu5TjBLy8knJzdeBq4mfGSlsG3QssoI+I+YLWMjEKtJyMnCZYCb
	q0UpvbfNJvV5gxrJ96IhSco7wHR/BgKJzA+FPEk8wMxiA3axwjVqCn3rrplQeA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925617;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc; bh=gnTvflJCY17ngq7v3t1VaZ+udg1fVO8xaD7NmVGfdtM=;
	b=MX4k3gwiR3uPG3aHdqZkMbzsCUhWKscxACmgEvdnthaN7mX9Y50PkiqEAK2jw11B2rBJ5s
	0yIwakfLrEZSxWCw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Date: Fri, 12 May 2023 23:06:56 +0200 (CEST)

Hi!

This is version 4 of the reworked parallel bringup series. Version 3 can be
found here:

   https://lore.kernel.org/lkml/20230508181633.089804905@linutronix.de

This is just a reiteration to address the following details:

  1) Address review feedback (Peter Zijlstra)

  2) Fix a MIPS related build problem (0day)

Other than that there are no changes and the other details are all the same
as in V3 and V2.

It's also available from git:

    git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git hotplug

Diff to V3 below.

Thanks,

	tglx
---
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index f5e0f4235746..90c71d800b59 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -690,7 +690,7 @@ void flush_tlb_one(unsigned long vaddr)
 EXPORT_SYMBOL(flush_tlb_page);
 EXPORT_SYMBOL(flush_tlb_one);
 
-#ifdef CONFIG_HOTPLUG_CPU
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
 void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
 	if (mp_ops->cleanup_dead_cpu)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 0438802031c3..9cd77d319555 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -290,8 +290,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 
 	/*  APIC ID not found in the table. Drop the trampoline lock and bail. */
 	movq	trampoline_lock(%rip), %rax
-	lock
-	btrl	$0, (%rax)
+	movl	$0, (%rax)
 
 1:	cli
 	hlt
@@ -320,8 +319,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 	movq	trampoline_lock(%rip), %rax
 	testq	%rax, %rax
 	jz	.Lsetup_gdt
-	lock
-	btrl	$0, (%rax)
+	movl	$0, (%rax)
 
 .Lsetup_gdt:
 	/*
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 5caf4897b507..660709e94823 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -161,31 +161,28 @@ static inline void smpboot_restore_warm_reset_vector(void)
 
 }
 
-/*
- * Report back to the Boot Processor during boot time or to the caller processor
- * during CPU online.
- */
-static void smp_callin(void)
+/* Run the next set of setup steps for the upcoming CPU */
+static void ap_starting(void)
 {
 	int cpuid = smp_processor_id();
 
 	/*
-	 * If waken up by an INIT in an 82489DX configuration the alive
-	 * synchronization guarantees we don't get here before an
-	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
-	 * touch our local APIC.
+	 * If woken up by an INIT in an 82489DX configuration the alive
+	 * synchronization guarantees that the CPU does not reach this
+	 * point before an INIT_deassert IPI reaches the local APIC, so it
+	 * is now safe to touch the local APIC.
 	 *
 	 * Set up this CPU, first the APIC, which is probably redundant on
 	 * most boards.
 	 */
 	apic_ap_setup();
 
-	/* Save our processor parameters. */
+	/* Save the processor parameters. */
 	smp_store_cpu_info(cpuid);
 
 	/*
 	 * The topology information must be up to date before
-	 * calibrate_delay() and notify_cpu_starting().
+	 * notify_cpu_starting().
 	 */
 	set_cpu_sibling_map(cpuid);
 
@@ -197,7 +194,7 @@ static void smp_callin(void)
 
 	/*
 	 * This runs the AP through all the cpuhp states to its target
-	 * state (CPUHP_ONLINE in the case of serial bringup).
+	 * state CPUHP_ONLINE.
 	 */
 	notify_cpu_starting(cpuid);
 }
@@ -274,10 +271,7 @@ static void notrace start_secondary(void *unused)
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 
-	smp_callin();
-
-	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
-	barrier();
+	ap_starting();
 
 	/* Check TSC synchronization with the control CPU. */
 	check_tsc_sync_target();
diff --git a/arch/x86/realmode/rm/trampoline_64.S b/arch/x86/realmode/rm/trampoline_64.S
index 2dfb1c400167..c6de4deec746 100644
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -40,17 +40,13 @@
 .macro LOAD_REALMODE_ESP
 	/*
 	 * Make sure only one CPU fiddles with the realmode stack
-	 */
+	*/
 .Llock_rm\@:
-	btl	$0, tr_lock
-	jnc	2f
-	pause
-	jmp	.Llock_rm\@
+        lock btsl       $0, tr_lock
+        jnc             2f
+        pause
+        jmp             .Llock_rm\@
 2:
-	lock
-	btsl	$0, tr_lock
-	jc	.Llock_rm\@
-
 	# Setup stack
 	movl	$rm_stack_end, %esp
 .endm
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 60b4093fae9e..005f863a3d2b 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -294,14 +294,14 @@ enum cpuhp_sync_state {
  * cpuhp_ap_update_sync_state - Update synchronization state during bringup/teardown
  * @state:	The synchronization state to set
  *
- * No synchronization point. Just update of the synchronization state.
+ * No synchronization point. Just update of the synchronization state, but implies
+ * a full barrier so that the AP changes are visible before the control CPU proceeds.
  */
 static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state)
 {
 	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
-	int sync = atomic_read(st);
 
-	while (!atomic_try_cmpxchg(st, &sync, state));
+	(void)atomic_xchg(st, state);
 }
 
 void __weak arch_cpuhp_sync_state_poll(void) { cpu_relax(); }
@@ -829,7 +829,11 @@ static int bringup_cpu(unsigned int cpu)
 	/*
 	 * Some architectures have to walk the irq descriptors to
 	 * setup the vector space for the cpu which comes online.
-	 * Prevent irq alloc/free across the bringup.
+	 *
+	 * Prevent irq alloc/free across the bringup by acquiring the
+	 * sparse irq lock. Hold it until the upcoming CPU completes the
+	 * startup in cpuhp_online_idle() which allows to avoid
+	 * intermediate synchronization points in the architecture code.
 	 */
 	irq_lock_sparse();
 




From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533907.830919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyh-0004UY-GD; Fri, 12 May 2023 21:07:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533907.830919; Fri, 12 May 2023 21:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyh-0004UP-DD; Fri, 12 May 2023 21:07:03 +0000
Received: by outflank-mailman (input) for mailman id 533907;
 Fri, 12 May 2023 21:07:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyf-0004F7-EK
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:01 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ef257537-f108-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:06:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef257537-f108-11ed-8611-37d641c3527e
Message-ID: <20230512205255.493750666@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925619;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=s6VYVqWXqkUTfsBItuhC7K5jhMRHOO4jS7pH6HBneyQ=;
	b=D8Bi7VzTkuViWKmMifMnim1UAY/BXTvSuSeXwEuTUeLAvarTwWc9SV8KqlElC0pgUFTjWP
	2jbw6QOOcurfoCC/WTzJxcK+XMQbkO3EARTs1kYAYqGIivOYxqP7phYyepd0BKzmfTwpU6
	Fw27scx120Ouaogv6XXjSnLZ9E+e/mhGz0dBUDIWJbMwJM56liQqcXjp2J4SP+zDoj4UTD
	RkPo9Y/fmZ/WujXo2Jf2gMUDGJZzU645UEoHLJ97eXWUzvUjxZ2adQMv8d0FDDOk5xDpxw
	JWIBF5jgEjyZJRTxuuMA3y82/EpNmPxEMuOLMXTOL5dXeVw9uM4RVrDqbMI2mw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925619;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=s6VYVqWXqkUTfsBItuhC7K5jhMRHOO4jS7pH6HBneyQ=;
	b=LJpoYetOuh1jVxtlimZvIQSZscCqqjHl4hpuECjN5kaXN60+ti3C/hqHH5CB8z82fLbd2o
	Blu2a904F8bPRxDw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 01/37] x86/smpboot: Cleanup topology_phys_to_logical_pkg()/die()
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:06:58 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Make topology_phys_to_logical_die() static as it's only used in
smpboot.c, and fix up the kernel-doc warnings for both functions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/include/asm/topology.h |    3 ---
 arch/x86/kernel/smpboot.c       |   10 ++++++----
 2 files changed, 6 insertions(+), 7 deletions(-)
---

--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -139,7 +139,6 @@ static inline int topology_max_smt_threa
 int topology_update_package_map(unsigned int apicid, unsigned int cpu);
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
-int topology_phys_to_logical_die(unsigned int die, unsigned int cpu);
 bool topology_is_primary_thread(unsigned int cpu);
 bool topology_smt_supported(void);
 #else
@@ -149,8 +148,6 @@ topology_update_package_map(unsigned int
 static inline int
 topology_update_die_map(unsigned int dieid, unsigned int cpu) { return 0; }
 static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
-static inline int topology_phys_to_logical_die(unsigned int die,
-		unsigned int cpu) { return 0; }
 static inline int topology_max_die_per_package(void) { return 1; }
 static inline int topology_max_smt_threads(void) { return 1; }
 static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -288,6 +288,7 @@ bool topology_smt_supported(void)
 
 /**
  * topology_phys_to_logical_pkg - Map a physical package id to a logical
+ * @phys_pkg:	The physical package id to map
  *
  * Returns logical package id or -1 if not found
  */
@@ -304,15 +305,17 @@ int topology_phys_to_logical_pkg(unsigne
 	return -1;
 }
 EXPORT_SYMBOL(topology_phys_to_logical_pkg);
+
 /**
  * topology_phys_to_logical_die - Map a physical die id to logical
+ * @die_id:	The physical die id to map
+ * @cur_cpu:	The CPU for which the mapping is done
  *
  * Returns logical die id or -1 if not found
  */
-int topology_phys_to_logical_die(unsigned int die_id, unsigned int cur_cpu)
+static int topology_phys_to_logical_die(unsigned int die_id, unsigned int cur_cpu)
 {
-	int cpu;
-	int proc_id = cpu_data(cur_cpu).phys_proc_id;
+	int cpu, proc_id = cpu_data(cur_cpu).phys_proc_id;
 
 	for_each_possible_cpu(cpu) {
 		struct cpuinfo_x86 *c = &cpu_data(cpu);
@@ -323,7 +326,6 @@ int topology_phys_to_logical_die(unsigne
 	}
 	return -1;
 }
-EXPORT_SYMBOL(topology_phys_to_logical_die);
 
 /**
  * topology_update_package_map - Update the physical to logical package map





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533908.830929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyi-0004kq-O3; Fri, 12 May 2023 21:07:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533908.830929; Fri, 12 May 2023 21:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyi-0004kf-KQ; Fri, 12 May 2023 21:07:04 +0000
Received: by outflank-mailman (input) for mailman id 533908;
 Fri, 12 May 2023 21:07:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyg-0004FP-HK
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:02 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f00dd61b-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f00dd61b-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205255.551974164@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925620;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=AjuZnUvqwfsid4jtivwVWrZiWUy0MGdS0WowlWq+wK8=;
	b=BSrk1PoZLCui388wbL4NkY9LaBfz+vMljVOsc8GC5TTSN9IaN934ZXN1fRPgPIUcdUGSe3
	rs+LcXmfTyJYvxvJjsFf8f0ApOwip1PicNa981xcjH4MKVqkoKpyzlxwwHOYcfLC+B5mEd
	7Y+gbMD+piaijvIlyMzYPyuPxHohAIb3XSFige4l+xuoi82leA8PAeQ31hGSlVRwPD+wQ3
	n6n4pYD7mFa1Y0Rsvl1DCfEMOgpANdQ/E16QD6t2Dh7pRICCqjhN/gPgPrM+fPp2Si/ih2
	sU+p5Egl9N/3D2qoNiCRvjAFt/X3L02D3fNEkcYdWn40i1TAkciD/voa8T04sQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925620;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=AjuZnUvqwfsid4jtivwVWrZiWUy0MGdS0WowlWq+wK8=;
	b=n1wnoGuKdWFqJ6PX+VgQN0Ufd6Syw8of6US1WCmh8XhXmEUBV6cIJXGutet95c8P750ccR
	/kEaioxsm/Oq80Bw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 02/37] cpu/hotplug: Mark arch_disable_smp_support() and
 bringup_nonboot_cpus() __init
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:00 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

No point in keeping them around after boot: both are only used during the
early bringup of CPUs.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/kernel/smpboot.c |    4 ++--
 kernel/cpu.c              |    2 +-
 kernel/smp.c              |    2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)


--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1269,9 +1269,9 @@ int native_cpu_up(unsigned int cpu, stru
 }
 
 /**
- * arch_disable_smp_support() - disables SMP support for x86 at runtime
+ * arch_disable_smp_support() - Disables SMP support for x86 at boottime
  */
-void arch_disable_smp_support(void)
+void __init arch_disable_smp_support(void)
 {
 	disable_ioapic_support();
 }
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1502,7 +1502,7 @@ int bringup_hibernate_cpu(unsigned int s
 	return 0;
 }
 
-void bringup_nonboot_cpus(unsigned int setup_max_cpus)
+void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
 {
 	unsigned int cpu;
 
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -892,7 +892,7 @@ EXPORT_SYMBOL(setup_max_cpus);
  * SMP mode to <NUM>.
  */
 
-void __weak arch_disable_smp_support(void) { }
+void __weak __init arch_disable_smp_support(void) { }
 
 static int __init nosmp(char *str)
 {





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533909.830934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyj-0004oT-2W; Fri, 12 May 2023 21:07:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533909.830934; Fri, 12 May 2023 21:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyi-0004mq-Tf; Fri, 12 May 2023 21:07:04 +0000
Received: by outflank-mailman (input) for mailman id 533909;
 Fri, 12 May 2023 21:07:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyh-0004FP-6g
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:03 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f0f55002-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0f55002-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205255.608773568@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925622;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=r7mhuIr1UsffTwz/BZZT9WLBgbQp9Bjt9SYbqNDVnNo=;
	b=yfiqoUfo19pxfWqsqQNjGDRZah8o3a0ZerjqrGY5ai55QtwCUrIVOIn8alP1EE9LzZM7mi
	LPH8IwyPagJtgHY2f27LQ/LbPs0P4ko5todlsQYLwAgD7UCfloc08R4n7cM9Gt3+MVorMZ
	8MGTb5ruHSawih2o00yv0SEO+1JvnEAV/i4NFh7ghWWkK6lNXQJ/9EnCt4lw84NXftTo9P
	JcmKj1ni7f7obeT3ujrFyEcOECY66oLHfz5SlWUdoqP1oth/EWpvUCSzr8AOV4Shx8fhIg
	2DSis/F/UEQdEAn+yIXXpTQBwye68bk4hkn4J+KuIUIIIBTf7MxseOIWbQlN1Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925622;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=r7mhuIr1UsffTwz/BZZT9WLBgbQp9Bjt9SYbqNDVnNo=;
	b=x/vrMLP5hYljsVenHMq+Kmd30/I3mC2dGQvZTtFyNimzI+gVbhM9ZYe9TwhsHyUY8Y99el
	8jpNFGjpIWrX03Cg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 03/37] x86/smpboot: Avoid pointless delay calibration if
 TSC is synchronized
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:01 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

When the TSC is synchronized across sockets, there is no reason to
calibrate the delay for the first CPU which comes up on a socket.

Just reuse the existing calibration value.

This removes 100ms of pointlessly wasted time per socket from CPU hotplug.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/kernel/smpboot.c |   40 +++++++++++++++++++++++++---------------
 arch/x86/kernel/tsc.c     |   20 ++++++++++++++++----
 2 files changed, 41 insertions(+), 19 deletions(-)


--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -178,28 +178,17 @@ static void smp_callin(void)
 	 */
 	apic_ap_setup();
 
-	/*
-	 * Save our processor parameters. Note: this information
-	 * is needed for clock calibration.
-	 */
+	/* Save our processor parameters. */
 	smp_store_cpu_info(cpuid);
 
 	/*
 	 * The topology information must be up to date before
-	 * calibrate_delay() and notify_cpu_starting().
+	 * notify_cpu_starting().
 	 */
 	set_cpu_sibling_map(raw_smp_processor_id());
 
 	ap_init_aperfmperf();
 
-	/*
-	 * Get our bogomips.
-	 * Update loops_per_jiffy in cpu_data. Previous call to
-	 * smp_store_cpu_info() stored a value that is close but not as
-	 * accurate as the value just calculated.
-	 */
-	calibrate_delay();
-	cpu_data(cpuid).loops_per_jiffy = loops_per_jiffy;
 	pr_debug("Stack at about %p\n", &cpuid);
 
 	wmb();
@@ -212,8 +201,24 @@ static void smp_callin(void)
 	cpumask_set_cpu(cpuid, cpu_callin_mask);
 }
 
+static void ap_calibrate_delay(void)
+{
+	/*
+	 * Calibrate the delay loop and update loops_per_jiffy in cpu_data.
+	 * smp_store_cpu_info() stored a value that is close but not as
+	 * accurate as the value just calculated.
+	 *
+	 * As this is invoked after the TSC synchronization check,
+	 * calibrate_delay_is_known() will skip the calibration routine
+	 * when TSC is synchronized across sockets.
+	 */
+	calibrate_delay();
+	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
+}
+
 static int cpu0_logical_apicid;
 static int enable_start_cpu0;
+
 /*
  * Activate a secondary processor.
  */
@@ -240,10 +245,15 @@ static void notrace start_secondary(void
 
 	/* otherwise gcc will move up smp_processor_id before the cpu_init */
 	barrier();
+	/* Check TSC synchronization with the control CPU: */
+	check_tsc_sync_target();
+
 	/*
-	 * Check TSC synchronization with the boot CPU:
+	 * Calibrate the delay loop after the TSC synchronization check.
+	 * This allows to skip the calibration when TSC is synchronized
+	 * across sockets.
 	 */
-	check_tsc_sync_target();
+	ap_calibrate_delay();
 
 	speculative_store_bypass_ht_init();
 
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1598,10 +1598,7 @@ void __init tsc_init(void)
 
 #ifdef CONFIG_SMP
 /*
- * If we have a constant TSC and are using the TSC for the delay loop,
- * we can skip clock calibration if another cpu in the same socket has already
- * been calibrated. This assumes that CONSTANT_TSC applies to all
- * cpus in the socket - this should be a safe assumption.
+ * Check whether existing calibration data can be reused.
  */
 unsigned long calibrate_delay_is_known(void)
 {
@@ -1609,6 +1606,21 @@ unsigned long calibrate_delay_is_known(v
 	int constant_tsc = cpu_has(&cpu_data(cpu), X86_FEATURE_CONSTANT_TSC);
 	const struct cpumask *mask = topology_core_cpumask(cpu);
 
+	/*
+	 * If TSC has constant frequency and TSC is synchronized across
+	 * sockets then reuse CPU0 calibration.
+	 */
+	if (constant_tsc && !tsc_unstable)
+		return cpu_data(0).loops_per_jiffy;
+
+	/*
+	 * If TSC has constant frequency and TSC is not synchronized across
+	 * sockets and this is not the first CPU in the socket, then reuse
+	 * the calibration value of an already online CPU on that socket.
+	 *
+	 * This assumes that CONSTANT_TSC is consistent for all CPUs in a
+	 * socket.
+	 */
 	if (!constant_tsc || !mask)
 		return 0;
 



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533910.830949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyl-0005Iq-AO; Fri, 12 May 2023 21:07:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533910.830949; Fri, 12 May 2023 21:07:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyl-0005Id-5k; Fri, 12 May 2023 21:07:07 +0000
Received: by outflank-mailman (input) for mailman id 533910;
 Fri, 12 May 2023 21:07:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyj-0004F7-Hq
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:05 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f1e5a2dc-f108-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1e5a2dc-f108-11ed-8611-37d641c3527e
Message-ID: <20230512205255.662319599@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925623;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=10aHqiRf10a+4x+F15wqO9TaNeOMM+gljY64qYOf8Og=;
	b=LjbBQ3NLEkvc7e9R5u6yBkYS0iiwraQZmIUY+HCWYnQqRfXsTowBnBHcG1TtxBH3fAVoSQ
	ljm3T3p6GDG72K8bcRQxW8UQ8AFnJeYuwF6m3Rwl3xWoi8H8/BSIRJj1rVH+ip9//Ks2CE
	fFJHEmdVTk7moIEGWYEDWyAQEPoRHcgoEnf4wiilyclEx5NIwOXSKAxL/OmHZ9Qn9T7BaW
	lI3CrJ+EKcygt3YU47C0WJLEJrKKGyf0mIoV0OZUtQJOG6t2eGddOZYBHYJSTeG44wWBqW
	Kw9bLLMDxF8/F1fqZSwBPeGdCY5b1uQ6aFc1HXUNFM0V7oRK2uI9i5ZDa5ctow==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925623;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=10aHqiRf10a+4x+F15wqO9TaNeOMM+gljY64qYOf8Og=;
	b=WOpc7uYL/VidVOZn4G/ZP/A8TyN4gZsLcoALEzP0CZGwamlcMyB4G3NO+AsTxWEolHIdoB
	1lO2LOlMtVeUdXAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 04/37] x86/smpboot: Rename start_cpu0() to soft_restart_cpu()
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:03 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

start_cpu0() is used in the SEV play_dead() implementation to re-online
CPUs. But that has nothing to do with CPU0.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/include/asm/cpu.h   |    2 +-
 arch/x86/kernel/callthunks.c |    2 +-
 arch/x86/kernel/head_32.S    |   10 +++++-----
 arch/x86/kernel/head_64.S    |   10 +++++-----
 arch/x86/kernel/sev.c        |    2 +-
 5 files changed, 13 insertions(+), 13 deletions(-)

--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -30,7 +30,7 @@ struct x86_cpu {
 #ifdef CONFIG_HOTPLUG_CPU
 extern int arch_register_cpu(int num);
 extern void arch_unregister_cpu(int);
-extern void start_cpu0(void);
+extern void soft_restart_cpu(void);
 #ifdef CONFIG_DEBUG_HOTPLUG_CPU0
 extern int _debug_hotplug_cpu(int cpu, int action);
 #endif
--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -134,7 +134,7 @@ static bool skip_addr(void *dest)
 	if (dest == ret_from_fork)
 		return true;
 #ifdef CONFIG_HOTPLUG_CPU
-	if (dest == start_cpu0)
+	if (dest == soft_restart_cpu)
 		return true;
 #endif
 #ifdef CONFIG_FUNCTION_TRACER
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -140,16 +140,16 @@ SYM_CODE_END(startup_32)
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
- * up already except stack. We just set up stack here. Then call
- * start_secondary().
+ * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
+ * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
+ * unplug. Everything is set up already except the stack.
  */
-SYM_FUNC_START(start_cpu0)
+SYM_FUNC_START(soft_restart_cpu)
 	movl initial_stack, %ecx
 	movl %ecx, %esp
 	call *(initial_code)
 1:	jmp 1b
-SYM_FUNC_END(start_cpu0)
+SYM_FUNC_END(soft_restart_cpu)
 #endif
 
 /*
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -377,11 +377,11 @@ SYM_CODE_END(secondary_startup_64)
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
- * Boot CPU0 entry point. It's called from play_dead(). Everything has been set
- * up already except stack. We just set up stack here. Then call
- * start_secondary() via .Ljump_to_C_code.
+ * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
+ * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
+ * unplug. Everything is set up already except the stack.
  */
-SYM_CODE_START(start_cpu0)
+SYM_CODE_START(soft_restart_cpu)
 	ANNOTATE_NOENDBR
 	UNWIND_HINT_END_OF_STACK
 
@@ -390,7 +390,7 @@ SYM_CODE_START(start_cpu0)
 	movq	TASK_threadsp(%rcx), %rsp
 
 	jmp	.Ljump_to_C_code
-SYM_CODE_END(start_cpu0)
+SYM_CODE_END(soft_restart_cpu)
 #endif
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1328,7 +1328,7 @@ static void sev_es_play_dead(void)
 	 * If we get here, the VCPU was woken up again. Jump to CPU
 	 * startup code to get it back online.
 	 */
-	start_cpu0();
+	soft_restart_cpu();
 }
 #else  /* CONFIG_HOTPLUG_CPU */
 #define sev_es_play_dead	native_play_dead





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533911.830959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZym-0005af-Nl; Fri, 12 May 2023 21:07:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533911.830959; Fri, 12 May 2023 21:07:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZym-0005aQ-Ji; Fri, 12 May 2023 21:07:08 +0000
Received: by outflank-mailman (input) for mailman id 533911;
 Fri, 12 May 2023 21:07:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyl-0004F7-Cj
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:07 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2e1a438-f108-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2e1a438-f108-11ed-8611-37d641c3527e
Message-ID: <20230512205255.715707999@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925625;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=gxV9HvRA1jE0dGSE4CHHvLVIJlrSKDn4gpz4DP4Ulu8=;
	b=pMTNWJsceyKNdg61fjbgscmBiEya/qXyQ1hUAzJ7A7/eDZzveZgJQ1/Ofw9uXoUE0QwUxx
	zQMqliw+eqf0lOxHhpDPglVC9nQREayoLreN0YRA11SqKCWboSqP3a+xqKu7cQ4PyAYps3
	32ILY/vwdyGB79ZyA7SeZqzbdhT3PyOqjZVa+M9STgi/eJ3Iz+EGcHmN2p2V4IHTWmX4qQ
	TtsuPEsrbiwun4gHWBsg09kRwy+KArC18NO9wP18KIxcKmcvaY5Gla8Tt9W2MwGLiMWb77
	B0s97A4u0JnxD4936vMJ4C5LbS7RTpBC4LzVoMDgosuWPoxfYQEHzbTtdneXLA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925625;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=gxV9HvRA1jE0dGSE4CHHvLVIJlrSKDn4gpz4DP4Ulu8=;
	b=gM0S3YIFPYsQR5xI9sMpI9LN66f78xaB9ow7U++iKEWI+6+2HqGIc9AiXsr1YcxwC38vcW
	1TCJYhmxK4OuJJDA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 05/37] x86/topology: Remove CPU0 hotplug option
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:04 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

This was introduced together with commit e1c467e69040 ("x86, hotplug: Wake
up CPU0 via NMI instead of INIT, SIPI, SIPI") to eventually support
physical hotplug of CPU0:

 "We'll change this code in the future to wake up hard offlined CPU0 if
  real platform and request are available."

Eleven years later this has not happened and physical hotplug of CPU0 is
still not officially supported. Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 Documentation/admin-guide/kernel-parameters.txt |   14 ---
 Documentation/core-api/cpu_hotplug.rst          |   13 ---
 arch/x86/Kconfig                                |   43 ----------
 arch/x86/include/asm/cpu.h                      |    3 
 arch/x86/kernel/topology.c                      |   98 ------------------------
 arch/x86/power/cpu.c                            |   37 ---------
 6 files changed, 6 insertions(+), 202 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -818,20 +818,6 @@
 			Format:
 			<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
 
-	cpu0_hotplug	[X86] Turn on CPU0 hotplug feature when
-			CONFIG_BOOTPARAM_HOTPLUG_CPU0 is off.
-			Some features depend on CPU0. Known dependencies are:
-			1. Resume from suspend/hibernate depends on CPU0.
-			Suspend/hibernate will fail if CPU0 is offline and you
-			need to online CPU0 before suspend/hibernate.
-			2. PIC interrupts also depend on CPU0. CPU0 can't be
-			removed if a PIC interrupt is detected.
-			It's said poweroff/reboot may depend on CPU0 on some
-			machines although I haven't seen such issues so far
-			after CPU0 is offline on a few tested machines.
-			If the dependencies are under your control, you can
-			turn on cpu0_hotplug.
-
 	cpuidle.off=1	[CPU_IDLE]
 			disable the cpuidle sub-system
 
--- a/Documentation/core-api/cpu_hotplug.rst
+++ b/Documentation/core-api/cpu_hotplug.rst
@@ -127,17 +127,8 @@ Once the CPU is shutdown, it will be rem
  $ echo 1 > /sys/devices/system/cpu/cpu4/online
  smpboot: Booting Node 0 Processor 4 APIC 0x1
 
-The CPU is usable again. This should work on all CPUs. CPU0 is often special
-and excluded from CPU hotplug. On X86 the kernel option
-*CONFIG_BOOTPARAM_HOTPLUG_CPU0* has to be enabled in order to be able to
-shutdown CPU0. Alternatively the kernel command option *cpu0_hotplug* can be
-used. Some known dependencies of CPU0:
-
-* Resume from hibernate/suspend. Hibernate/suspend will fail if CPU0 is offline.
-* PIC interrupts. CPU0 can't be removed if a PIC interrupt is detected.
-
-Please let Fenghua Yu <fenghua.yu@intel.com> know if you find any dependencies
-on CPU0.
+The CPU is usable again. This should work on all CPUs, but CPU0 is often special
+and excluded from CPU hotplug.
 
 The CPU hotplug coordination
 ============================
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2305,49 +2305,6 @@ config HOTPLUG_CPU
 	def_bool y
 	depends on SMP
 
-config BOOTPARAM_HOTPLUG_CPU0
-	bool "Set default setting of cpu0_hotpluggable"
-	depends on HOTPLUG_CPU
-	help
-	  Set whether default state of cpu0_hotpluggable is on or off.
-
-	  Say Y here to enable CPU0 hotplug by default. If this switch
-	  is turned on, there is no need to give cpu0_hotplug kernel
-	  parameter and the CPU0 hotplug feature is enabled by default.
-
-	  Please note: there are two known CPU0 dependencies if you want
-	  to enable the CPU0 hotplug feature either by this switch or by
-	  cpu0_hotplug kernel parameter.
-
-	  First, resume from hibernate or suspend always starts from CPU0.
-	  So hibernate and suspend are prevented if CPU0 is offline.
-
-	  Second dependency is PIC interrupts always go to CPU0. CPU0 can not
-	  offline if any interrupt can not migrate out of CPU0. There may
-	  be other CPU0 dependencies.
-
-	  Please make sure the dependencies are under your control before
-	  you enable this feature.
-
-	  Say N if you don't want to enable CPU0 hotplug feature by default.
-	  You still can enable the CPU0 hotplug feature at boot by kernel
-	  parameter cpu0_hotplug.
-
-config DEBUG_HOTPLUG_CPU0
-	def_bool n
-	prompt "Debug CPU0 hotplug"
-	depends on HOTPLUG_CPU
-	help
-	  Enabling this option offlines CPU0 (if CPU0 can be offlined) as
-	  soon as possible and boots up userspace with CPU0 offlined. User
-	  can online CPU0 back after boot time.
-
-	  To debug CPU0 hotplug, you need to enable CPU0 offline/online
-	  feature by either turning on CONFIG_BOOTPARAM_HOTPLUG_CPU0 during
-	  compilation or giving cpu0_hotplug kernel parameter at boot.
-
-	  If unsure, say N.
-
 config COMPAT_VDSO
 	def_bool n
 	prompt "Disable the 32-bit vDSO (needed for glibc 2.3.3)"
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -31,9 +31,6 @@ struct x86_cpu {
 extern int arch_register_cpu(int num);
 extern void arch_unregister_cpu(int);
 extern void soft_restart_cpu(void);
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-extern int _debug_hotplug_cpu(int cpu, int action);
-#endif
 #endif
 
 extern void ap_init_aperfmperf(void);
--- a/arch/x86/kernel/topology.c
+++ b/arch/x86/kernel/topology.c
@@ -38,102 +38,12 @@
 static DEFINE_PER_CPU(struct x86_cpu, cpu_devices);
 
 #ifdef CONFIG_HOTPLUG_CPU
-
-#ifdef CONFIG_BOOTPARAM_HOTPLUG_CPU0
-static int cpu0_hotpluggable = 1;
-#else
-static int cpu0_hotpluggable;
-static int __init enable_cpu0_hotplug(char *str)
-{
-	cpu0_hotpluggable = 1;
-	return 1;
-}
-
-__setup("cpu0_hotplug", enable_cpu0_hotplug);
-#endif
-
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-/*
- * This function offlines a CPU as early as possible and allows userspace to
- * boot up without the CPU. The CPU can be onlined back by user after boot.
- *
- * This is only called for debugging CPU offline/online feature.
- */
-int _debug_hotplug_cpu(int cpu, int action)
-{
-	int ret;
-
-	if (!cpu_is_hotpluggable(cpu))
-		return -EINVAL;
-
-	switch (action) {
-	case 0:
-		ret = remove_cpu(cpu);
-		if (!ret)
-			pr_info("DEBUG_HOTPLUG_CPU0: CPU %u is now offline\n", cpu);
-		else
-			pr_debug("Can't offline CPU%d.\n", cpu);
-		break;
-	case 1:
-		ret = add_cpu(cpu);
-		if (ret)
-			pr_debug("Can't online CPU%d.\n", cpu);
-
-		break;
-	default:
-		ret = -EINVAL;
-	}
-
-	return ret;
-}
-
-static int __init debug_hotplug_cpu(void)
+int arch_register_cpu(int cpu)
 {
-	_debug_hotplug_cpu(0, 0);
-	return 0;
-}
-
-late_initcall_sync(debug_hotplug_cpu);
-#endif /* CONFIG_DEBUG_HOTPLUG_CPU0 */
-
-int arch_register_cpu(int num)
-{
-	struct cpuinfo_x86 *c = &cpu_data(num);
-
-	/*
-	 * Currently CPU0 is only hotpluggable on Intel platforms. Other
-	 * vendors can add hotplug support later.
-	 * Xen PV guests don't support CPU0 hotplug at all.
-	 */
-	if (c->x86_vendor != X86_VENDOR_INTEL ||
-	    cpu_feature_enabled(X86_FEATURE_XENPV))
-		cpu0_hotpluggable = 0;
-
-	/*
-	 * Two known BSP/CPU0 dependencies: Resume from suspend/hibernate
-	 * depends on BSP. PIC interrupts depend on BSP.
-	 *
-	 * If the BSP dependencies are under control, one can tell kernel to
-	 * enable BSP hotplug. This basically adds a control file and
-	 * one can attempt to offline BSP.
-	 */
-	if (num == 0 && cpu0_hotpluggable) {
-		unsigned int irq;
-		/*
-		 * We won't take down the boot processor on i386 if some
-		 * interrupts only are able to be serviced by the BSP in PIC.
-		 */
-		for_each_active_irq(irq) {
-			if (!IO_APIC_IRQ(irq) && irq_has_action(irq)) {
-				cpu0_hotpluggable = 0;
-				break;
-			}
-		}
-	}
-	if (num || cpu0_hotpluggable)
-		per_cpu(cpu_devices, num).cpu.hotpluggable = 1;
+	struct x86_cpu *xc = per_cpu_ptr(&cpu_devices, cpu);
 
-	return register_cpu(&per_cpu(cpu_devices, num).cpu, num);
+	xc->cpu.hotpluggable = cpu > 0;
+	return register_cpu(&xc->cpu, cpu);
 }
 EXPORT_SYMBOL(arch_register_cpu);
 
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -351,43 +351,6 @@ static int bsp_pm_callback(struct notifi
 	case PM_HIBERNATION_PREPARE:
 		ret = bsp_check();
 		break;
-#ifdef CONFIG_DEBUG_HOTPLUG_CPU0
-	case PM_RESTORE_PREPARE:
-		/*
-		 * When system resumes from hibernation, online CPU0 because
-		 * 1. it's required for resume and
-		 * 2. the CPU was online before hibernation
-		 */
-		if (!cpu_online(0))
-			_debug_hotplug_cpu(0, 1);
-		break;
-	case PM_POST_RESTORE:
-		/*
-		 * When a resume really happens, this code won't be called.
-		 *
-		 * This code is called only when user space hibernation software
-		 * prepares for snapshot device during boot time. So we just
-		 * call _debug_hotplug_cpu() to restore to CPU0's state prior to
-		 * preparing the snapshot device.
-		 *
-		 * This works for normal boot case in our CPU0 hotplug debug
-		 * mode, i.e. CPU0 is offline and user mode hibernation
-		 * software initializes during boot time.
-		 *
-		 * If CPU0 is online and user application accesses snapshot
-		 * device after boot time, this will offline CPU0 and user may
-		 * see different CPU0 state before and after accessing
-		 * the snapshot device. But hopefully this is not a case when
-		 * user debugging CPU0 hotplug. Even if users hit this case,
-		 * they can easily online CPU0 back.
-		 *
-		 * To simplify this debug code, we only consider normal boot
-		 * case. Otherwise we need to remember CPU0's state and restore
-		 * to that state and resolve racy conditions etc.
-		 */
-		_debug_hotplug_cpu(0, 0);
-		break;
-#endif
 	default:
 		break;
 	}





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533912.830968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyn-0005sQ-WD; Fri, 12 May 2023 21:07:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533912.830968; Fri, 12 May 2023 21:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyn-0005rP-Sb; Fri, 12 May 2023 21:07:09 +0000
Received: by outflank-mailman (input) for mailman id 533912;
 Fri, 12 May 2023 21:07:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZym-0004FP-BD
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:08 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f3d80c7a-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:07 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3d80c7a-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205255.768845190@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925627;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Vdgu0US5isMOkXvsbtEKbpQbMn5jiJ+2zBLMqmM0YL8=;
	b=rHjvfgvb2+9T3PZ89upCVf/htUJYRgaiE4wYsm0JoNYDceADItU6dQkHNUxMVT0mAKxQCQ
	g1/OIsowQWZ6Bng8y3rPxKD6j9JG6nO9AqcKXUIfNqadj2wqAP10ZTZqf/wQ8bZuweIxU8
	FbGJYBtbB0RLpj8XodKxfM0T9yYAZ8rC5lIdf6QnDplFrmHRxCno1cLmIswNLEF/jQuqCF
	ufPsxRi6VSkjYg9pqBKmigegl+aCRSB9QunUmA1+oo3u2YiKlvmXRZ/YpzUK4x67YwhumP
	TeVJDM0kZMKjpua2HnPGv2PoAp8D/THStbkBRKgjlP6r9216eUdapX+gssKumg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925627;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Vdgu0US5isMOkXvsbtEKbpQbMn5jiJ+2zBLMqmM0YL8=;
	b=Nt8LRINmVeDr769RErpnLaIeXfPELF9qhbpYFR7qzL7qqBvkabcnCfZ6XufKRvpBRaDcqh
	7k1uPZJfBkBIsNBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 06/37] x86/smpboot: Remove the CPU0 hotplug kludge
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:06 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

This was introduced with commit e1c467e69040 ("x86, hotplug: Wake up CPU0
via NMI instead of INIT, SIPI, SIPI") to eventually support physical
hotplug of CPU0:

 "We'll change this code in the future to wake up hard offlined CPU0 if
  real platform and request are available."

11 years later this has not happened and physical hotplug is not officially
supported. Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/include/asm/apic.h   |    1 
 arch/x86/include/asm/smp.h    |    1 
 arch/x86/kernel/smpboot.c     |  170 +++---------------------------------------
 drivers/acpi/processor_idle.c |    4 
 4 files changed, 14 insertions(+), 162 deletions(-)

--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -377,7 +377,6 @@ extern struct apic *__apicdrivers[], *__
  * APIC functionality to boot other CPUs - only used on SMP:
  */
 #ifdef CONFIG_SMP
-extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
 extern int lapic_can_unplug_cpu(void);
 #endif
 
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -130,7 +130,6 @@ void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 int wbinvd_on_all_cpus(void);
-void cond_wakeup_cpu0(void);
 
 void native_smp_send_reschedule(int cpu);
 void native_send_call_func_ipi(const struct cpumask *mask);
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -216,9 +216,6 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
-static int cpu0_logical_apicid;
-static int enable_start_cpu0;
-
 /*
  * Activate a secondary processor.
  */
@@ -241,8 +238,6 @@ static void notrace start_secondary(void
 	x86_cpuinit.early_percpu_clock_init();
 	smp_callin();
 
-	enable_start_cpu0 = 0;
-
 	/* otherwise gcc will move up smp_processor_id before the cpu_init */
 	barrier();
 	/* Check TSC synchronization with the control CPU: */
@@ -410,7 +405,7 @@ void smp_store_cpu_info(int id)
 	c->cpu_index = id;
 	/*
 	 * During boot time, CPU0 has this setup already. Save the info when
-	 * bringing up AP or offlined CPU0.
+	 * bringing up an AP.
 	 */
 	identify_secondary_cpu(c);
 	c->initialized = true;
@@ -807,51 +802,14 @@ static void __init smp_quirk_init_udelay
 }
 
 /*
- * Poke the other CPU in the eye via NMI to wake it up. Remember that the normal
- * INIT, INIT, STARTUP sequence will reset the chip hard for us, and this
- * won't ... remember to clear down the APIC, etc later.
- */
-int
-wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip)
-{
-	u32 dm = apic->dest_mode_logical ? APIC_DEST_LOGICAL : APIC_DEST_PHYSICAL;
-	unsigned long send_status, accept_status = 0;
-	int maxlvt;
-
-	/* Target chip */
-	/* Boot on the stack */
-	/* Kick the second */
-	apic_icr_write(APIC_DM_NMI | dm, apicid);
-
-	pr_debug("Waiting for send to finish...\n");
-	send_status = safe_apic_wait_icr_idle();
-
-	/*
-	 * Give the other CPU some time to accept the IPI.
-	 */
-	udelay(200);
-	if (APIC_INTEGRATED(boot_cpu_apic_version)) {
-		maxlvt = lapic_get_maxlvt();
-		if (maxlvt > 3)			/* Due to the Pentium erratum 3AP.  */
-			apic_write(APIC_ESR, 0);
-		accept_status = (apic_read(APIC_ESR) & 0xEF);
-	}
-	pr_debug("NMI sent\n");
-
-	if (send_status)
-		pr_err("APIC never delivered???\n");
-	if (accept_status)
-		pr_err("APIC delivery error (%lx)\n", accept_status);
-
-	return (send_status | accept_status);
-}
-
-static int
-wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
+ * Wake up AP by INIT, INIT, STARTUP sequence.
+ */
+static int wakeup_secondary_cpu_via_init(int phys_apicid, unsigned long start_eip)
 {
 	unsigned long send_status = 0, accept_status = 0;
 	int maxlvt, num_starts, j;
 
+	preempt_disable();
 	maxlvt = lapic_get_maxlvt();
 
 	/*
@@ -957,6 +915,7 @@ wakeup_secondary_cpu_via_init(int phys_a
 	if (accept_status)
 		pr_err("APIC delivery error (%lx)\n", accept_status);
 
+	preempt_enable();
 	return (send_status | accept_status);
 }
 
@@ -997,67 +956,6 @@ static void announce_cpu(int cpu, int ap
 			node, cpu, apicid);
 }
 
-static int wakeup_cpu0_nmi(unsigned int cmd, struct pt_regs *regs)
-{
-	int cpu;
-
-	cpu = smp_processor_id();
-	if (cpu == 0 && !cpu_online(cpu) && enable_start_cpu0)
-		return NMI_HANDLED;
-
-	return NMI_DONE;
-}
-
-/*
- * Wake up AP by INIT, INIT, STARTUP sequence.
- *
- * Instead of waiting for STARTUP after INITs, BSP will execute the BIOS
- * boot-strap code which is not a desired behavior for waking up BSP. To
- * void the boot-strap code, wake up CPU0 by NMI instead.
- *
- * This works to wake up soft offlined CPU0 only. If CPU0 is hard offlined
- * (i.e. physically hot removed and then hot added), NMI won't wake it up.
- * We'll change this code in the future to wake up hard offlined CPU0 if
- * real platform and request are available.
- */
-static int
-wakeup_cpu_via_init_nmi(int cpu, unsigned long start_ip, int apicid,
-	       int *cpu0_nmi_registered)
-{
-	int id;
-	int boot_error;
-
-	preempt_disable();
-
-	/*
-	 * Wake up AP by INIT, INIT, STARTUP sequence.
-	 */
-	if (cpu) {
-		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
-		goto out;
-	}
-
-	/*
-	 * Wake up BSP by nmi.
-	 *
-	 * Register a NMI handler to help wake up CPU0.
-	 */
-	boot_error = register_nmi_handler(NMI_LOCAL,
-					  wakeup_cpu0_nmi, 0, "wake_cpu0");
-
-	if (!boot_error) {
-		enable_start_cpu0 = 1;
-		*cpu0_nmi_registered = 1;
-		id = apic->dest_mode_logical ? cpu0_logical_apicid : apicid;
-		boot_error = wakeup_secondary_cpu_via_nmi(id, start_ip);
-	}
-
-out:
-	preempt_enable();
-
-	return boot_error;
-}
-
 int common_cpu_up(unsigned int cpu, struct task_struct *idle)
 {
 	int ret;
@@ -1086,8 +984,7 @@ int common_cpu_up(unsigned int cpu, stru
  * Returns zero if CPU booted OK, else error code from
  * ->wakeup_secondary_cpu.
  */
-static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
-		       int *cpu0_nmi_registered)
+static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
 	/* start_ip had better be page-aligned! */
 	unsigned long start_ip = real_mode_header->trampoline_start;
@@ -1120,7 +1017,6 @@ static int do_boot_cpu(int apicid, int c
 	 * This grunge runs the startup process for
 	 * the targeted processor.
 	 */
-
 	if (x86_platform.legacy.warm_reset) {
 
 		pr_debug("Setting warm reset code and vector.\n");
@@ -1149,15 +1045,14 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use a method from the APIC driver if one defined, with wakeup
 	 *   straight to 64-bit mode preferred over wakeup to RM.
 	 * Otherwise,
-	 * - Use an INIT boot APIC message for APs or NMI for BSP.
+	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
 		boot_error = apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
 		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
 	else
-		boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
-						     cpu0_nmi_registered);
+		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
 
 	if (!boot_error) {
 		/*
@@ -1206,9 +1101,8 @@ static int do_boot_cpu(int apicid, int c
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
-	int cpu0_nmi_registered = 0;
 	unsigned long flags;
-	int err, ret = 0;
+	int err;
 
 	lockdep_assert_irqs_enabled();
 
@@ -1247,11 +1141,10 @@ int native_cpu_up(unsigned int cpu, stru
 	if (err)
 		return err;
 
-	err = do_boot_cpu(apicid, cpu, tidle, &cpu0_nmi_registered);
+	err = do_boot_cpu(apicid, cpu, tidle);
 	if (err) {
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-		ret = -EIO;
-		goto unreg_nmi;
+		return err;
 	}
 
 	/*
@@ -1267,15 +1160,7 @@ int native_cpu_up(unsigned int cpu, stru
 		touch_nmi_watchdog();
 	}
 
-unreg_nmi:
-	/*
-	 * Clean up the nmi handler. Do this after the callin and callout sync
-	 * to avoid impact of possible long unregister time.
-	 */
-	if (cpu0_nmi_registered)
-		unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");
-
-	return ret;
+	return 0;
 }
 
 /**
@@ -1373,14 +1258,6 @@ static void __init smp_cpu_index_default
 	}
 }
 
-static void __init smp_get_logical_apicid(void)
-{
-	if (x2apic_mode)
-		cpu0_logical_apicid = apic_read(APIC_LDR);
-	else
-		cpu0_logical_apicid = GET_APIC_LOGICAL_ID(apic_read(APIC_LDR));
-}
-
 void __init smp_prepare_cpus_common(void)
 {
 	unsigned int i;
@@ -1443,8 +1320,6 @@ void __init native_smp_prepare_cpus(unsi
 	/* Setup local timer */
 	x86_init.timers.setup_percpu_clockev();
 
-	smp_get_logical_apicid();
-
 	pr_info("CPU0: ");
 	print_cpu_info(&cpu_data(0));
 
@@ -1752,18 +1627,6 @@ void play_dead_common(void)
 	local_irq_disable();
 }
 
-/**
- * cond_wakeup_cpu0 - Wake up CPU0 if needed.
- *
- * If NMI wants to wake up CPU0, start CPU0.
- */
-void cond_wakeup_cpu0(void)
-{
-	if (smp_processor_id() == 0 && enable_start_cpu0)
-		start_cpu0();
-}
-EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
-
 /*
  * We need to flush the caches before going to sleep, lest we have
  * dirty data in our caches when we come back up.
@@ -1831,8 +1694,6 @@ static inline void mwait_play_dead(void)
 		__monitor(mwait_ptr, 0, 0);
 		mb();
 		__mwait(eax, 0);
-
-		cond_wakeup_cpu0();
 	}
 }
 
@@ -1841,11 +1702,8 @@ void __noreturn hlt_play_dead(void)
 	if (__this_cpu_read(cpu_info.x86) >= 4)
 		wbinvd();
 
-	while (1) {
+	while (1)
 		native_halt();
-
-		cond_wakeup_cpu0();
-	}
 }
 
 void native_play_dead(void)
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -597,10 +597,6 @@ static int acpi_idle_play_dead(struct cp
 			io_idle(cx->address);
 		} else
 			return -ENODEV;
-
-#if defined(CONFIG_X86) && defined(CONFIG_HOTPLUG_CPU)
-		cond_wakeup_cpu0();
-#endif
 	}
 
 	/* Never reached */





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533919.831029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyy-0008Bz-5h; Fri, 12 May 2023 21:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533919.831029; Fri, 12 May 2023 21:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyx-0008AM-VQ; Fri, 12 May 2023 21:07:19 +0000
Received: by outflank-mailman (input) for mailman id 533919;
 Fri, 12 May 2023 21:07:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyw-0004F7-E2
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:18 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f98ec97d-f108-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:16 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f98ec97d-f108-11ed-8611-37d641c3527e
Message-ID: <20230512205256.091511483@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925636;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=JaqZxTMPaJm1imrOji7tK0gvC/N5/O2xb6G86Du4TqQ=;
	b=cDejufDlaxyCx5CGX4hiOWO/B+3Z4Wh19xsvilcT9IPO7FIYNiPHNJX+pqRc5X+N/dDVs5
	tKDZ+XQ4GQv9J6hR2SPc85jI+MqPX6q6xxXuGbFXUxAJjXbPQHMFewGqbQuhrfB7sY2u2u
	nkHDr5FQjj8WBPk6AwjKUsu+7FyPK4nSUfby3R43DpF7wYT+TQ8zFWGIr2lnlOG6YYmjM4
	uygdhGu263NI6nr6HYNtETYJgziVDxphZhDBdka0UYVVHe8n5+ats7GS5MAlTVrF+827ll
	IFxU3itG8+fVOGZr/SA1ZuDUQI+EuEEA8a+PjeX7nu7Hjbhz0RXLqD2NveT9KQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925636;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=JaqZxTMPaJm1imrOji7tK0gvC/N5/O2xb6G86Du4TqQ=;
	b=sOj3zUEdJE/NxvevxKX5WPFqqf/zY9uCI4lEroCJWyOhcwEBsNa+MdD4dK9wBn2igUgSdu
	8HJ4zRRiu3a17ODw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 12/37] x86/smpboot: Move synchronization masks to SMP boot code
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:16 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The usage is in smpboot.c and not in the CPU initialization code.

The XEN_PV usage of cpu_callout_mask is obsolete as cpu_init() no longer
waits and cacheinfo has its own CPU mask now, so cpu_callout_mask can be
made static too.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/include/asm/cpumask.h |    5 -----
 arch/x86/kernel/cpu/common.c   |   17 -----------------
 arch/x86/kernel/smpboot.c      |   16 ++++++++++++++++
 arch/x86/xen/smp_pv.c          |    3 ---
 4 files changed, 16 insertions(+), 25 deletions(-)
--- a/arch/x86/include/asm/cpumask.h
+++ b/arch/x86/include/asm/cpumask.h
@@ -4,11 +4,6 @@
 #ifndef __ASSEMBLY__
 #include <linux/cpumask.h>
 
-extern cpumask_var_t cpu_callin_mask;
-extern cpumask_var_t cpu_callout_mask;
-extern cpumask_var_t cpu_initialized_mask;
-extern cpumask_var_t cpu_sibling_setup_mask;
-
 extern void setup_cpu_local_masks(void);
 
 /*
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -67,14 +67,6 @@
 
 u32 elf_hwcap2 __read_mostly;
 
-/* all of these masks are initialized in setup_cpu_local_masks() */
-cpumask_var_t cpu_initialized_mask;
-cpumask_var_t cpu_callout_mask;
-cpumask_var_t cpu_callin_mask;
-
-/* representing cpus for which sibling maps can be computed */
-cpumask_var_t cpu_sibling_setup_mask;
-
 /* Number of siblings per CPU package */
 int smp_num_siblings = 1;
 EXPORT_SYMBOL(smp_num_siblings);
@@ -169,15 +161,6 @@ static void ppin_init(struct cpuinfo_x86
 	clear_cpu_cap(c, info->feature);
 }
 
-/* correctly size the local cpu masks */
-void __init setup_cpu_local_masks(void)
-{
-	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callin_mask);
-	alloc_bootmem_cpumask_var(&cpu_callout_mask);
-	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
-}
-
 static void default_init(struct cpuinfo_x86 *c)
 {
 #ifdef CONFIG_X86_64
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,13 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
+/* All of these masks are initialized in setup_cpu_local_masks() */
+static cpumask_var_t cpu_initialized_mask;
+static cpumask_var_t cpu_callout_mask;
+static cpumask_var_t cpu_callin_mask;
+/* Representing CPUs for which sibling maps can be computed */
+static cpumask_var_t cpu_sibling_setup_mask;
+
 /* Logical package management. We might want to allocate that dynamically */
 unsigned int __max_logical_packages __read_mostly;
 EXPORT_SYMBOL(__max_logical_packages);
@@ -1548,6 +1555,15 @@ early_param("possible_cpus", _setup_poss
 		set_cpu_possible(i, true);
 }
 
+/* correctly size the local cpu masks */
+void __init setup_cpu_local_masks(void)
+{
+	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
+	alloc_bootmem_cpumask_var(&cpu_callin_mask);
+	alloc_bootmem_cpumask_var(&cpu_callout_mask);
+	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
+}
+
 #ifdef CONFIG_HOTPLUG_CPU
 
 /* Recompute SMT state for all CPUs on offline */
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -254,15 +254,12 @@ cpu_initialize_context(unsigned int cpu,
 	struct desc_struct *gdt;
 	unsigned long gdt_mfn;
 
-	/* used to tell cpu_init() that it can proceed with initialization */
-	cpumask_set_cpu(cpu, cpu_callout_mask);
 	if (cpumask_test_and_set_cpu(cpu, xen_cpu_initialized_map))
 		return 0;
 
 	ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
 	if (ctxt == NULL) {
 		cpumask_clear_cpu(cpu, xen_cpu_initialized_map);
-		cpumask_clear_cpu(cpu, cpu_callout_mask);
 		return -ENOMEM;
 	}
 



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533916.831009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyu-0007Mw-TC; Fri, 12 May 2023 21:07:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533916.831009; Fri, 12 May 2023 21:07:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyu-0007Lu-Nb; Fri, 12 May 2023 21:07:16 +0000
Received: by outflank-mailman (input) for mailman id 533916;
 Fri, 12 May 2023 21:07:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyt-0004F7-7U
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:15 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f7a8aa7f-f108-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7a8aa7f-f108-11ed-8611-37d641c3527e
Message-ID: <20230512205255.981999763@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925633;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=q33LoArtTxRD/Ru/7HNy8KNvjUqDiwm6+syFdpZ/kWw=;
	b=298zc4LQSXT6SexxbOrNR+pxdBUapgKN2FhLEK6OUjzceaZSFfF3gQZ/P02XnQJKV7onMX
	G82tp6FGlvEJCHk/fYGghxxqy0XquQ6dVAokNULe1uFj1f2YvF9kZo12v5AuT4x9dpb6LM
	MIV+WOcXU9u/9w2UWdlkf7j62NoVP8gL3gHpZTNYTPR4F3tjV1vXnCvr9cNhZILyGgQ3o6
	luGpxQQIC7NP6ZTlVMhe86GpRZLF8muTdGi2x+adhcL/y1D6yQ/W8J/tsp4QGhjHHs1Kbg
	sDQlO7c/Pp6NHwxuRk82HeXQRbgkvp6C+hpt8zSSXeTW3e5BQ1ba3sKB1TDAhg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925633;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=q33LoArtTxRD/Ru/7HNy8KNvjUqDiwm6+syFdpZ/kWw=;
	b=o/qZFH2k47iHoIluy2YsRj5WdNcKSir3sAheA0GheiTuEAqo2lMSo8uRRCc3N6NV3aBHkN
	EzHISfCLZ+rbXGCg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 10/37] x86/smpboot: Get rid of cpu_init_secondary()
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:12 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The synchronization of the AP with the control CPU is an SMP boot problem
and has nothing to do with cpu_init().

Open code cpu_init_secondary() in start_secondary() and move
wait_for_master_cpu() into the SMP boot code.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/include/asm/processor.h |    1 -
 arch/x86/kernel/cpu/common.c     |   27 ---------------------------
 arch/x86/kernel/smpboot.c        |   24 +++++++++++++++++++-----
 3 files changed, 19 insertions(+), 33 deletions(-)

--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -551,7 +551,6 @@ extern void switch_gdt_and_percpu_base(i
 extern void load_direct_gdt(int);
 extern void load_fixmap_gdt(int);
 extern void cpu_init(void);
-extern void cpu_init_secondary(void);
 extern void cpu_init_exception_handling(void);
 extern void cr4_init(void);
 
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2123,19 +2123,6 @@ static void dbg_restore_debug_regs(void)
 #define dbg_restore_debug_regs()
 #endif /* ! CONFIG_KGDB */
 
-static void wait_for_master_cpu(int cpu)
-{
-#ifdef CONFIG_SMP
-	/*
-	 * wait for ACK from master CPU before continuing
-	 * with AP initialization
-	 */
-	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
-	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
-		cpu_relax();
-#endif
-}
-
 static inline void setup_getcpu(int cpu)
 {
 	unsigned long cpudata = vdso_encode_cpunode(cpu, early_cpu_to_node(cpu));
@@ -2239,8 +2226,6 @@ void cpu_init(void)
 	struct task_struct *cur = current;
 	int cpu = raw_smp_processor_id();
 
-	wait_for_master_cpu(cpu);
-
 	ucode_cpu_init(cpu);
 
 #ifdef CONFIG_NUMA
@@ -2293,18 +2278,6 @@ void cpu_init(void)
 	load_fixmap_gdt(cpu);
 }
 
-#ifdef CONFIG_SMP
-void cpu_init_secondary(void)
-{
-	/*
-	 * Relies on the BP having set-up the IDT tables, which are loaded
-	 * on this CPU in cpu_init_exception_handling().
-	 */
-	cpu_init_exception_handling();
-	cpu_init();
-}
-#endif
-
 #ifdef CONFIG_MICROCODE_LATE_LOADING
 /**
  * store_cpu_caps() - Store a snapshot of CPU capabilities
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -220,6 +220,17 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
+static void wait_for_master_cpu(int cpu)
+{
+	/*
+	 * Wait for release by control CPU before continuing with AP
+	 * initialization.
+	 */
+	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
+	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
+		cpu_relax();
+}
+
 /*
  * Activate a secondary processor.
  */
@@ -237,13 +248,16 @@ static void notrace start_secondary(void
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 #endif
+	cpu_init_exception_handling();
+
 	/*
-	 * Sync point with wait_cpu_initialized(). Before proceeding through
-	 * cpu_init(), the AP will call wait_for_master_cpu() which sets its
-	 * own bit in cpu_initialized_mask and then waits for the BSP to set
-	 * its bit in cpu_callout_mask to release it.
+	 * Sync point with wait_cpu_initialized(). Sets AP in
+	 * cpu_initialized_mask and then waits for the control CPU
+	 * to release it.
 	 */
-	cpu_init_secondary();
+	wait_for_master_cpu(raw_smp_processor_id());
+
+	cpu_init();
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533915.830998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZys-0006ng-IX; Fri, 12 May 2023 21:07:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533915.830998; Fri, 12 May 2023 21:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZys-0006mC-CC; Fri, 12 May 2023 21:07:14 +0000
Received: by outflank-mailman (input) for mailman id 533915;
 Fri, 12 May 2023 21:07:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyq-0004FP-UQ
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:12 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f6bde649-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6bde649-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205255.928917242@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925631;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Xd8AfLi0wqokosOeIBU6PRKsH+asSTGynrmD4wsPm3o=;
	b=OSWq/hFbqxJ6Ld5bfeAs5sTSUkgPJakeSoZZI0W04OZW4wK4OOwnMfEKPPf6LMMX3NpWwW
	Dw4oxAh67Lba1WppJhoyY3dyiIlcsvkzneZoqU7XnbQXHi8hmoJddUBFlvsLuYO7WNwcg9
	oZ7GEYlfUP3vYCsu28zEkP4RaHpa8n3Kt6TrKtLK1CQyXhpuha29p8BqlPkYBXAms2NGUm
	hNxWEb0zb4I54IvTNXGA1RBYiIyA1H5VtF9jtMNRk7K5f5CxwO1qZcIeVQoael1ouMlHHt
	tc5I9SZCozwjDydA+emJNdIPmTkPL/s1O/lL5z/4MOdeiiHhbgz4LufEJIMbKw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925631;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Xd8AfLi0wqokosOeIBU6PRKsH+asSTGynrmD4wsPm3o=;
	b=Wbi8Q504asBnwQfAHxOaKyo6ldSeRV/i3NagxF3YUutPr0oEnV00hx/+nvDf+RtAS3JZ5j
	jwje6NG5bt6L3sBQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch V4 09/37] x86/smpboot: Split up native_cpu_up() into separate
 phases and document them
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:11 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

There are four logical parts to what native_cpu_up() does on the BSP (or
on the controlling CPU for a later hotplug):

 1) Wake the AP by sending the INIT/SIPI/SIPI sequence.

 2) Wait for the AP to make it as far as wait_for_master_cpu(), which
    sets that CPU's bit in cpu_initialized_mask; then set the bit in
    cpu_callout_mask to let the AP proceed through cpu_init().

 3) Wait for the AP to finish cpu_init() and get as far as the
    smp_callin() call, which sets that CPU's bit in cpu_callin_mask.

 4) Perform the TSC synchronization and wait for the AP to actually
    mark itself online in cpu_online_mask.

In preparation for allowing these phases to operate in parallel on multiple
APs, split them out into separate functions and document the interactions
a little more clearly in both the BP and AP code paths.

No functional change intended.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/kernel/smpboot.c |  184 +++++++++++++++++++++++++++++-----------------
 1 file changed, 119 insertions(+), 65 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -193,6 +193,10 @@ static void smp_callin(void)
 
 	wmb();
 
+	/*
+	 * This runs the AP through all the cpuhp states to its target
+	 * state CPUHP_ONLINE.
+	 */
 	notify_cpu_starting(cpuid);
 
 	/*
@@ -233,12 +237,28 @@ static void notrace start_secondary(void
 	load_cr3(swapper_pg_dir);
 	__flush_tlb_all();
 #endif
+	/*
+	 * Sync point with wait_cpu_initialized(). Before proceeding through
+	 * cpu_init(), the AP will call wait_for_master_cpu() which sets its
+	 * own bit in cpu_initialized_mask and then waits for the BSP to set
+	 * its bit in cpu_callout_mask to release it.
+	 */
 	cpu_init_secondary();
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
+
+	/*
+	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
+	 * but just sets the bit to let the controlling CPU (BSP) know that
+	 * it's got this far.
+	 */
 	smp_callin();
 
-	/* Check TSC synchronization with the control CPU: */
+	/*
+	 * Check TSC synchronization with the control CPU, which will do
+	 * its part of this from wait_cpu_online(), making it an implicit
+	 * synchronization point.
+	 */
 	check_tsc_sync_target();
 
 	/*
@@ -257,6 +277,7 @@ static void notrace start_secondary(void
 	 * half valid vector space.
 	 */
 	lock_vector_lock();
+	/* Sync point with do_wait_cpu_online() */
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
@@ -979,17 +1000,13 @@ int common_cpu_up(unsigned int cpu, stru
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
- * Returns zero if CPU booted OK, else error code from
+ * Returns zero if startup was successfully sent, else error code from
  * ->wakeup_secondary_cpu.
  */
 static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
-	/* start_ip had better be page-aligned! */
 	unsigned long start_ip = real_mode_header->trampoline_start;
 
-	unsigned long boot_error = 0;
-	unsigned long timeout;
-
 #ifdef CONFIG_X86_64
 	/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
 	if (apic->wakeup_secondary_cpu_64)
@@ -1046,60 +1063,89 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
-		boot_error = apic->wakeup_secondary_cpu_64(apicid, start_ip);
+		return apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
-		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
-	else
-		boot_error = wakeup_secondary_cpu_via_init(apicid, start_ip);
+		return apic->wakeup_secondary_cpu(apicid, start_ip);
 
-	if (!boot_error) {
-		/*
-		 * Wait 10s total for first sign of life from AP
-		 */
-		boot_error = -1;
-		timeout = jiffies + 10*HZ;
-		while (time_before(jiffies, timeout)) {
-			if (cpumask_test_cpu(cpu, cpu_initialized_mask)) {
-				/*
-				 * Tell AP to proceed with initialization
-				 */
-				cpumask_set_cpu(cpu, cpu_callout_mask);
-				boot_error = 0;
-				break;
-			}
-			schedule();
-		}
-	}
+	return wakeup_secondary_cpu_via_init(apicid, start_ip);
+}
 
-	if (!boot_error) {
-		/*
-		 * Wait till AP completes initial initialization
-		 */
-		while (!cpumask_test_cpu(cpu, cpu_callin_mask)) {
-			/*
-			 * Allow other tasks to run while we wait for the
-			 * AP to come online. This also gives a chance
-			 * for the MTRR work(triggered by the AP coming online)
-			 * to be completed in the stop machine context.
-			 */
-			schedule();
-		}
-	}
+static int wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
+{
+	unsigned long timeout;
 
-	if (x86_platform.legacy.warm_reset) {
-		/*
-		 * Cleanup possible dangling ends...
-		 */
-		smpboot_restore_warm_reset_vector();
+	/*
+	 * Wait up to 10s for the CPU to report in.
+	 */
+	timeout = jiffies + 10*HZ;
+	while (time_before(jiffies, timeout)) {
+		if (cpumask_test_cpu(cpu, mask))
+			return 0;
+
+		schedule();
 	}
+	return -1;
+}
 
-	return boot_error;
+/*
+ * Bringup step two: Wait for the target AP to reach cpu_init_secondary()
+ * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
+ * to proceed.  The AP will then proceed past setting its 'callin' bit
+ * and end up waiting in check_tsc_sync_target() until we reach
+ * do_wait_cpu_online() to tend to it.
+ */
+static int wait_cpu_initialized(unsigned int cpu)
+{
+	/*
+	 * Wait for first sign of life from AP.
+	 */
+	if (wait_cpu_cpumask(cpu, cpu_initialized_mask))
+		return -1;
+
+	cpumask_set_cpu(cpu, cpu_callout_mask);
+	return 0;
 }
 
-int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+/*
+ * Bringup step three: Wait for the target AP to reach smp_callin().
+ * The AP is not waiting for us here so we don't need to parallelise
+ * this step. Not entirely clear why we care about this, since we just
+ * proceed directly to TSC synchronization which is the next sync
+ * point with the AP anyway.
+ */
+static void wait_cpu_callin(unsigned int cpu)
+{
+	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
+		schedule();
+}
+
+/*
+ * Bringup step four: Synchronize the TSC and wait for the target AP
+ * to reach set_cpu_online() in start_secondary().
+ */
+static void wait_cpu_online(unsigned int cpu)
 {
-	int apicid = apic->cpu_present_to_apicid(cpu);
 	unsigned long flags;
+
+	/*
+	 * Check TSC synchronization with the AP (keep irqs disabled
+	 * while doing so):
+	 */
+	local_irq_save(flags);
+	check_tsc_sync_source(cpu);
+	local_irq_restore(flags);
+
+	/*
+	 * Wait for the AP to mark itself online, so the core caller
+	 * can drop sparse_irq_lock.
+	 */
+	while (!cpu_online(cpu))
+		schedule();
+}
+
+static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
+{
+	int apicid = apic->cpu_present_to_apicid(cpu);
 	int err;
 
 	lockdep_assert_irqs_enabled();
@@ -1140,25 +1186,33 @@ int native_cpu_up(unsigned int cpu, stru
 		return err;
 
 	err = do_boot_cpu(apicid, cpu, tidle);
-	if (err) {
+	if (err)
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-		return err;
-	}
 
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
+	return err;
+}
 
-	while (!cpu_online(cpu)) {
-		cpu_relax();
-		touch_nmi_watchdog();
-	}
+int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+{
+	int ret;
 
-	return 0;
+	ret = native_kick_ap(cpu, tidle);
+	if (ret)
+		goto out;
+
+	ret = wait_cpu_initialized(cpu);
+	if (ret)
+		goto out;
+
+	wait_cpu_callin(cpu);
+	wait_cpu_online(cpu);
+
+out:
+	/* Cleanup possible dangling ends... */
+	if (x86_platform.legacy.warm_reset)
+		smpboot_restore_warm_reset_vector();
+
+	return ret;
 }
 
 /**



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533913.830972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyo-0005wV-Ca; Fri, 12 May 2023 21:07:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533913.830972; Fri, 12 May 2023 21:07:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyo-0005vH-5y; Fri, 12 May 2023 21:07:10 +0000
Received: by outflank-mailman (input) for mailman id 533913;
 Fri, 12 May 2023 21:07:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyn-0004FP-E9
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:09 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f4d7f447-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4d7f447-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205255.822234014@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925628;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Tu4ROvEgCDi3LP+wUPgYfwXC0d1wYCtN1rgZS5iXCTQ=;
	b=MZw4QYwhSKWUBgjySGtCLDIrNNjvvQrhL4KcBEOLiBVM/Xw3o0S0OP8efQHUilTkVgZ5uH
	V9lBZ9q2fPoE92w0/ZGYCX2Se/3i5TRE0KYKDmcJ9xA7PzNrkGdKRykS5AIxuIPSoIzOSR
	qquFenj8YNpFTpXPIh+OE91cddixF4zU9bT3S5YiNOAuuJODxHBH9C+Og6fOAQlNlkVK7/
	43Abm8kVyCrbP29RIsjsrJBt1KKm3ZqvVZMRUHpDw+q6Q/pktl3hX7SzxGL5NzDv4dD045
	lAy6zjykzhyBxbFzFjVgVFWfc60zNYE4XSYTbPM3UgP7tkubuyN97MhVhVFnTw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925628;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=Tu4ROvEgCDi3LP+wUPgYfwXC0d1wYCtN1rgZS5iXCTQ=;
	b=nDZua/3lTIIczi9cGRpK+vOoiWt3kdE9sPPGo3SiN4kVNdEYfjj0VQ75ukCQ/EVkS9lzyq
	osDYwtJ/lUnA8jCQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 07/37] x86/smpboot: Restrict soft_restart_cpu() to SEV
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:08 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the CPU0 hotplug cruft is gone, the only user is AMD SEV.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/kernel/callthunks.c |    2 +-
 arch/x86/kernel/head_32.S    |   14 --------------
 arch/x86/kernel/head_64.S    |    2 +-
 3 files changed, 2 insertions(+), 16 deletions(-)

--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -133,7 +133,7 @@ static bool skip_addr(void *dest)
 	/* Accounts directly */
 	if (dest == ret_from_fork)
 		return true;
-#ifdef CONFIG_HOTPLUG_CPU
+#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_AMD_MEM_ENCRYPT)
 	if (dest == soft_restart_cpu)
 		return true;
 #endif
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -138,20 +138,6 @@ SYM_CODE_START(startup_32)
 	jmp .Ldefault_entry
 SYM_CODE_END(startup_32)
 
-#ifdef CONFIG_HOTPLUG_CPU
-/*
- * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
- * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot
- * unplug. Everything is set up already except the stack.
- */
-SYM_FUNC_START(soft_restart_cpu)
-	movl initial_stack, %ecx
-	movl %ecx, %esp
-	call *(initial_code)
-1:	jmp 1b
-SYM_FUNC_END(soft_restart_cpu)
-#endif
-
 /*
  * Non-boot CPU entry point; entered from trampoline.S
  * We can't lgdt here, because lgdt itself uses a data segment, but
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -375,7 +375,7 @@ SYM_CODE_END(secondary_startup_64)
 #include "verify_cpu.S"
 #include "sev_verify_cbit.S"
 
-#ifdef CONFIG_HOTPLUG_CPU
+#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_AMD_MEM_ENCRYPT)
 /*
  * Entry point for soft restart of a CPU. Invoked from xxx_play_dead() for
  * restarting the boot CPU or for restarting SEV guest CPUs after CPU hot





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533917.831013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyv-0007So-Ed; Fri, 12 May 2023 21:07:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533917.831013; Fri, 12 May 2023 21:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyv-0007RI-8l; Fri, 12 May 2023 21:07:17 +0000
Received: by outflank-mailman (input) for mailman id 533917;
 Fri, 12 May 2023 21:07:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyt-0004FP-Pq
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:15 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f8a78f7c-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8a78f7c-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205256.035041005@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925635;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=eF12ofNTJSRvTVUxVjnfh7vrXqrzAu3KwLqma5Svbqk=;
	b=UCtRDgTYjJ+kttYU7M2mEpsUrBis05axy9leXWtF3JfKsHzJdY7XDzWQPHnc/2SfbSgxN3
	8m5ad//4LAOnFAOL/Bgz84MaRBcO8q4cF80CAbsgBeiJOqNvHtlhaoNC/64M0YgHrD7WZF
	m7FV4aKz3m5h6Cp6yr8xhr2AsQLLE9oZWQrHYW7hmApYt9s+zdNMxCD8Sk8khgr3V7jlH9
	24gCCAqrup+dWEuJ3AbsClaZO7eYMsrW2hnkYlv3wZF2rR4sXKxZwEhr7++CGNWie8aJ7i
	WfiNAqAgMWsw6ewnu1bT4mVJ2qzYDEvD4Kt4UFMTRKFhnF6SyLakq+SftbvVqQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925635;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=eF12ofNTJSRvTVUxVjnfh7vrXqrzAu3KwLqma5Svbqk=;
	b=JUU+NLGEbC5VmebxGTxNM/TRbUGPyU8jXOJXmTvwOZyX3Vb6ZSwKxKk/9v+Ol3pfmA4OJd
	pwHgp2Bl/LROYvCg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 11/37] x86/cpu/cacheinfo: Remove cpu_callout_mask dependency
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:14 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

cpu_callout_mask is used for the stop_machine() based MTRR/PAT init.

In preparation for moving the BP/AP synchronization to the core hotplug
code, use a private CPU mask for cacheinfo and manage it in the
starting/dying hotplug state.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/kernel/cpu/cacheinfo.c |   21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -39,6 +39,8 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t
 /* Shared L2 cache maps */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 
+static cpumask_var_t cpu_cacheinfo_mask;
+
 /* Kernel controls MTRR and/or PAT MSRs. */
 unsigned int memory_caching_control __ro_after_init;
 
@@ -1172,8 +1174,10 @@ void cache_bp_restore(void)
 		cache_cpu_init();
 }
 
-static int cache_ap_init(unsigned int cpu)
+static int cache_ap_online(unsigned int cpu)
 {
+	cpumask_set_cpu(cpu, cpu_cacheinfo_mask);
+
 	if (!memory_caching_control || get_cache_aps_delayed_init())
 		return 0;
 
@@ -1191,11 +1195,17 @@ static int cache_ap_init(unsigned int cp
 	 *      lock to prevent MTRR entry changes
 	 */
 	stop_machine_from_inactive_cpu(cache_rendezvous_handler, NULL,
-				       cpu_callout_mask);
+				       cpu_cacheinfo_mask);
 
 	return 0;
 }
 
+static int cache_ap_offline(unsigned int cpu)
+{
+	cpumask_clear_cpu(cpu, cpu_cacheinfo_mask);
+	return 0;
+}
+
 /*
  * Delayed cache initialization for all AP's
  */
@@ -1210,9 +1220,12 @@ void cache_aps_init(void)
 
 static int __init cache_ap_register(void)
 {
+	zalloc_cpumask_var(&cpu_cacheinfo_mask, GFP_KERNEL);
+	cpumask_set_cpu(smp_processor_id(), cpu_cacheinfo_mask);
+
 	cpuhp_setup_state_nocalls(CPUHP_AP_CACHECTRL_STARTING,
 				  "x86/cachectrl:starting",
-				  cache_ap_init, NULL);
+				  cache_ap_online, cache_ap_offline);
 	return 0;
 }
-core_initcall(cache_ap_register);
+early_initcall(cache_ap_register);



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533914.830988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyr-0006V6-1Z; Fri, 12 May 2023 21:07:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533914.830988; Fri, 12 May 2023 21:07:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyq-0006Ui-UI; Fri, 12 May 2023 21:07:12 +0000
Received: by outflank-mailman (input) for mailman id 533914;
 Fri, 12 May 2023 21:07:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyo-0004FP-Vs
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:10 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5c0cce5-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5c0cce5-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205255.875713771@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925630;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=CERacbFmAU0r2YyiKnQSCmUMCfPrzAHfru8m3ZNl0KY=;
	b=mlVKNpwrrZo8g1O5g2m+Me4aq4cJcRqHlfaggouQckkiLNuNl3gMI9wJGW+qTK77NJ8L1t
	BNZic2MIATVICLudtbI4AZFCsrK6yBKlMcV3RCwHbtApC9WMnfOTf/384jFhT6DDyDkJLU
	68mjGx+kFnbdgLpBHRHDsZybKUUSd4DCRTEkmrpezylPkqkyKvPz75OmYdO3ljKUmH8DT+
	qileb82DTQnWi+EWLiBSaCU365hMQBR6WvjZdiFAVtGRc9MA/Xyms7EJNSpFNCUC672GYD
	N9c4DHYS2yGrtOZTD5VUq08pLhp9Cqoy87Bcu1VdvrO1+17UGGzGP2jJGlQcVA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925630;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=CERacbFmAU0r2YyiKnQSCmUMCfPrzAHfru8m3ZNl0KY=;
	b=pI9Kl0/aVsNuuAI/L8tX3o8uA++5XGmGpbq0v8JJXHVuwg59C4dNMi5XaYjRi8u2L2df6G
	eiU05lrMbsKzBPAA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>,
 Peter Zijlstra <peterz@infradead.org>
Subject: [patch V4 08/37] x86/smpboot: Remove unnecessary barrier()
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:09 +0200 (CEST)

Peter stumbled over the barrier() after the invocation of smp_callin() in
start_secondary():

  "...this barrier() and it's comment seem weird vs smp_callin(). That
   function ends with an atomic bitop (it has to, at the very least it must
   not be weaker than store-release) but also has an explicit wmb() to order
   setup vs CPU_STARTING.

   There is no way the smp_processor_id() referred to in this comment can land
   before cpu_init() even without the barrier()."

The barrier() along with the comment was added in 2003 with commit
d8f19f2cac70 ("[PATCH] x86-64 merge") in the history tree, one of those
"well documented" combo patches of that era which changed the world and
some more. The context back then was:

	/*
	 * Dont put anything before smp_callin(), SMP
	 * booting is too fragile that we want to limit the
	 * things done here to the most necessary things.
	 */
	cpu_init();
	smp_callin();

+	/* otherwise gcc will move up smp_processor_id before the cpu_init */
+ 	barrier();

	Dprintk("cpu %d: waiting for commence\n", smp_processor_id()); 

Even back in 2003 the compiler was not allowed to reorder that
smp_processor_id() invocation before the cpu_init() function call.
Especially not as smp_processor_id() resolved to:

  asm volatile("movl %%gs:%c1,%0":"=r" (ret__):"i"(pda_offset(field)):"memory");

There is no trace of this change in any mailing list archive, including the
then-official x86_64 list discuss@x86-64.org, so there is nothing which
would explain the problem this change solved.

The debug prints are gone by now and the only smp_processor_id()
invocation today is farther down in start_secondary(), after locking
vector_lock, which itself prevents reordering.

Even if the compiler were allowed to reorder this, the code would still be
correct, as GSBASE is set up early in the assembly code and is valid by the
time the CPU reaches start_secondary(), while at the time this barrier was
added the GSBASE setup was still done in cpu_init().

As the barrier has zero value, remove it.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230509100421.GU83892@hirez.programming.kicks-ass.net
---
V4: New patch
---
 arch/x86/kernel/smpboot.c |    2 --
 1 file changed, 2 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -238,8 +238,6 @@ static void notrace start_secondary(void
 	x86_cpuinit.early_percpu_clock_init();
 	smp_callin();
 
-	/* otherwise gcc will move up smp_processor_id before the cpu_init */
-	barrier();
 	/* Check TSC synchronization with the control CPU: */
 	check_tsc_sync_target();
 



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533920.831037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyz-00009y-RF; Fri, 12 May 2023 21:07:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533920.831037; Fri, 12 May 2023 21:07:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyz-00007K-GR; Fri, 12 May 2023 21:07:21 +0000
Received: by outflank-mailman (input) for mailman id 533920;
 Fri, 12 May 2023 21:07:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyx-0004FP-Ad
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:19 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id faab64fd-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: faab64fd-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205256.148255496@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925638;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=InZPVVmeEePOxGmqWQGUV6XuxxQh3/m9otIbiRiszwU=;
	b=uFfds7mph6cSiXSh6HRHOhxz3QiI1I/G0oy4nU6wsamawNsp4y33wqQj2P2/ZZZq5TzErm
	P87k2m6ViGcmmyQjFqg5ZdG0v1+yc1F5oiwojzlLt2nBOxzNdmBLhT1vTcDTtEXKonfGXr
	bYpsNXmQl1MSuCdbaM9TtiuEz77gYzdG98tHSIcLcmERpaTowK6K4gPRU+YYaJVv9v5LiL
	v8epnr0nAY01obW4WKRBIziEyjsoX5x9wxUQ9rvuVeF34k+67u/mdE64pmL1vHsRQ73lvC
	VrYHgd1WIw63x42ClwSD5+jVlRXy0HbeNAyi4ur5Lncniu66hjKh4gNUmpYBvQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925638;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=InZPVVmeEePOxGmqWQGUV6XuxxQh3/m9otIbiRiszwU=;
	b=WGrbLbvy26+fMWG16p8ngiAMfUOAs6HPlstKKf3BN4q4Sg+vviA7h7o2vQ7SLWzvkmFDzQ
	MTBm5QuQ57VULiAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 13/37] x86/smpboot: Make TSC synchronization function call based
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:17 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Spin-waiting on the control CPU until the AP reaches the TSC
synchronization point is just a waste, especially when no synchronization
is required.

As the synchronization has to run with interrupts disabled, the control CPU
part can be done from an SMP function call. The upcoming AP issues that
call asynchronously, and only when synchronization is actually required.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/include/asm/tsc.h |    2 --
 arch/x86/kernel/smpboot.c  |   20 +++-----------------
 arch/x86/kernel/tsc_sync.c |   36 +++++++++++-------------------------
 3 files changed, 14 insertions(+), 44 deletions(-)

--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -55,12 +55,10 @@ extern bool tsc_async_resets;
 #ifdef CONFIG_X86_TSC
 extern bool tsc_store_and_check_tsc_adjust(bool bootcpu);
 extern void tsc_verify_tsc_adjust(bool resume);
-extern void check_tsc_sync_source(int cpu);
 extern void check_tsc_sync_target(void);
 #else
 static inline bool tsc_store_and_check_tsc_adjust(bool bootcpu) { return false; }
 static inline void tsc_verify_tsc_adjust(bool resume) { }
-static inline void check_tsc_sync_source(int cpu) { }
 static inline void check_tsc_sync_target(void) { }
 #endif
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -275,11 +275,7 @@ static void notrace start_secondary(void
 	 */
 	smp_callin();
 
-	/*
-	 * Check TSC synchronization with the control CPU, which will do
-	 * its part of this from wait_cpu_online(), making it an implicit
-	 * synchronization point.
-	 */
+	/* Check TSC synchronization with the control CPU. */
 	check_tsc_sync_target();
 
 	/*
@@ -1141,21 +1137,11 @@ static void wait_cpu_callin(unsigned int
 }
 
 /*
- * Bringup step four: Synchronize the TSC and wait for the target AP
- * to reach set_cpu_online() in start_secondary().
+ * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
 {
-	unsigned long flags;
-
-	/*
-	 * Check TSC synchronization with the AP (keep irqs disabled
-	 * while doing so):
-	 */
-	local_irq_save(flags);
-	check_tsc_sync_source(cpu);
-	local_irq_restore(flags);
-
 	/*
 	 * Wait for the AP to mark itself online, so the core caller
 	 * can drop sparse_irq_lock.
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -245,7 +245,6 @@ bool tsc_store_and_check_tsc_adjust(bool
  */
 static atomic_t start_count;
 static atomic_t stop_count;
-static atomic_t skip_test;
 static atomic_t test_runs;
 
 /*
@@ -344,21 +343,14 @@ static inline unsigned int loop_timeout(
 }
 
 /*
- * Source CPU calls into this - it waits for the freshly booted
- * target CPU to arrive and then starts the measurement:
+ * The freshly booted CPU initiates this via an async SMP function call.
  */
-void check_tsc_sync_source(int cpu)
+static void check_tsc_sync_source(void *__cpu)
 {
+	unsigned int cpu = (unsigned long)__cpu;
 	int cpus = 2;
 
 	/*
-	 * No need to check if we already know that the TSC is not
-	 * synchronized or if we have no TSC.
-	 */
-	if (unsynchronized_tsc())
-		return;
-
-	/*
 	 * Set the maximum number of test runs to
 	 *  1 if the CPU does not provide the TSC_ADJUST MSR
 	 *  3 if the MSR is available, so the target can try to adjust
@@ -368,16 +360,9 @@ void check_tsc_sync_source(int cpu)
 	else
 		atomic_set(&test_runs, 3);
 retry:
-	/*
-	 * Wait for the target to start or to skip the test:
-	 */
-	while (atomic_read(&start_count) != cpus - 1) {
-		if (atomic_read(&skip_test) > 0) {
-			atomic_set(&skip_test, 0);
-			return;
-		}
+	/* Wait for the target to start. */
+	while (atomic_read(&start_count) != cpus - 1)
 		cpu_relax();
-	}
 
 	/*
 	 * Trigger the target to continue into the measurement too:
@@ -397,14 +382,14 @@ void check_tsc_sync_source(int cpu)
 	if (!nr_warps) {
 		atomic_set(&test_runs, 0);
 
-		pr_debug("TSC synchronization [CPU#%d -> CPU#%d]: passed\n",
+		pr_debug("TSC synchronization [CPU#%d -> CPU#%u]: passed\n",
 			smp_processor_id(), cpu);
 
 	} else if (atomic_dec_and_test(&test_runs) || random_warps) {
 		/* Force it to 0 if random warps brought us here */
 		atomic_set(&test_runs, 0);
 
-		pr_warn("TSC synchronization [CPU#%d -> CPU#%d]:\n",
+		pr_warn("TSC synchronization [CPU#%d -> CPU#%u]:\n",
 			smp_processor_id(), cpu);
 		pr_warn("Measured %Ld cycles TSC warp between CPUs, "
 			"turning off TSC clock.\n", max_warp);
@@ -457,11 +442,12 @@ void check_tsc_sync_target(void)
 	 * SoCs the TSC is frequency synchronized, but still the TSC ADJUST
 	 * register might have been wreckaged by the BIOS..
 	 */
-	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable) {
-		atomic_inc(&skip_test);
+	if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable)
 		return;
-	}
 
+	/* Kick the control CPU into the TSC synchronization function */
+	smp_call_function_single(cpumask_first(cpu_online_mask), check_tsc_sync_source,
+				 (unsigned long *)(unsigned long)cpu, 0);
 retry:
 	/*
 	 * Register this CPU's participation and wait for the



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533922.831042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZz0-0000Gg-7q; Fri, 12 May 2023 21:07:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533922.831042; Fri, 12 May 2023 21:07:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZyz-0000Ew-TP; Fri, 12 May 2023 21:07:21 +0000
Received: by outflank-mailman (input) for mailman id 533922;
 Fri, 12 May 2023 21:07:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZyy-0004FP-Tc
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:20 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fb9bcad4-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb9bcad4-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205256.206394064@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925640;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=sV700lvJEg0dpnky36xy6IzX+5YcmbLPF3iuwfreLQI=;
	b=paQ53o7/mL5ElBtzALRvm5FWr+aqzidPhADEkGkNnFUgSdXdZKTpQ9LdM20zv3wNd69sIS
	it6M97+Fbbx8ktP1fteK/FRxSsUJ+d0bN0dEWXHrN+1EXE74+t5yfLHYlX4A2jlo5kaWpy
	u1Y4a1rKKqdKc7NvKZPW3+05W/WKM4dmC8NnWIBvmIiGSBT0dFtCcrPLjhVylwcQhwWEFh
	l3HKtTadZMnBQhRy9Lw4oQ4h0oFirsIvEgTyNYGnpZi6RiB0cbK/DRRrBfV3ithiX6HrKL
	M7rlozY1wPYf1JFhsOIHtY0UQp0uMjycyx8ukit8UWdY1BCjZKlup+pYCxy6HQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925640;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=sV700lvJEg0dpnky36xy6IzX+5YcmbLPF3iuwfreLQI=;
	b=sb9Uqj+LJNN+k1afPutXlTsDK2EI5uKAFco5RBP+AD3FcfTjEj4fGPWwaSSh6It8iDbVO5
	x1PNGDR7T4lyhTCA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 14/37] x86/smpboot: Remove cpu_callin_mask
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:19 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that TSC synchronization is SMP function call based, there is no reason
to wait for the AP to set itself in cpu_callin_mask. The control CPU waits
for the AP to set itself in the online mask anyway.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
V4: Rename smp_callin() to ap_starting() - Peter Z.
---
 arch/x86/kernel/smpboot.c |   74 +++++++++-------------------------------------
 1 file changed, 15 insertions(+), 59 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -104,7 +104,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_info);
 /* All of these masks are initialized in setup_cpu_local_masks() */
 static cpumask_var_t cpu_initialized_mask;
 static cpumask_var_t cpu_callout_mask;
-static cpumask_var_t cpu_callin_mask;
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -161,38 +160,30 @@ static inline void smpboot_restore_warm_
 
 }
 
-/*
- * Report back to the Boot Processor during boot time or to the caller processor
- * during CPU online.
- */
-static void smp_callin(void)
+/* Run the next set of setup steps for the upcoming CPU */
+static void ap_starting(void)
 {
-	int cpuid;
-
-	/*
-	 * If waken up by an INIT in an 82489DX configuration
-	 * cpu_callout_mask guarantees we don't get here before
-	 * an INIT_deassert IPI reaches our local APIC, so it is
-	 * now safe to touch our local APIC.
-	 */
-	cpuid = smp_processor_id();
+	int cpuid = smp_processor_id();
 
 	/*
-	 * the boot CPU has finished the init stage and is spinning
-	 * on callin_map until we finish. We are free to set up this
-	 * CPU, first the APIC. (this is probably redundant on most
-	 * boards)
+	 * If woken up by an INIT in an 82489DX configuration
+	 * cpu_callout_mask guarantees the CPU does not reach this point
+	 * before an INIT_deassert IPI reaches the local APIC, so it is now
+	 * safe to touch the local APIC.
+	 *
+	 * Set up this CPU, first the APIC, which is probably redundant on
+	 * most boards.
 	 */
 	apic_ap_setup();
 
-	/* Save our processor parameters. */
+	/* Save the processor parameters. */
 	smp_store_cpu_info(cpuid);
 
 	/*
 	 * The topology information must be up to date before
 	 * notify_cpu_starting().
 	 */
-	set_cpu_sibling_map(raw_smp_processor_id());
+	set_cpu_sibling_map(cpuid);
 
 	ap_init_aperfmperf();
 
@@ -205,11 +196,6 @@ static void smp_callin(void)
 	 * state CPUHP_ONLINE.
 	 */
 	notify_cpu_starting(cpuid);
-
-	/*
-	 * Allow the master to continue.
-	 */
-	cpumask_set_cpu(cpuid, cpu_callin_mask);
 }
 
 static void ap_calibrate_delay(void)
@@ -268,12 +254,7 @@ static void notrace start_secondary(void
 	rcu_cpu_starting(raw_smp_processor_id());
 	x86_cpuinit.early_percpu_clock_init();
 
-	/*
-	 * Sync point with wait_cpu_callin(). The AP doesn't wait here
-	 * but just sets the bit to let the controlling CPU (BSP) know that
-	 * it's got this far.
-	 */
-	smp_callin();
+	ap_starting();
 
 	/* Check TSC synchronization with the control CPU. */
 	check_tsc_sync_target();
@@ -1109,7 +1090,7 @@ static int wait_cpu_cpumask(unsigned int
  * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
  * to proceed.  The AP will then proceed past setting its 'callin' bit
  * and end up waiting in check_tsc_sync_target() until we reach
- * do_wait_cpu_online() to tend to it.
+ * wait_cpu_online() to tend to it.
  */
 static int wait_cpu_initialized(unsigned int cpu)
 {
@@ -1124,20 +1105,7 @@ static int wait_cpu_initialized(unsigned
 }
 
 /*
- * Bringup step three: Wait for the target AP to reach smp_callin().
- * The AP is not waiting for us here so we don't need to parallelise
- * this step. Not entirely clear why we care about this, since we just
- * proceed directly to TSC synchronization which is the next sync
- * point with the AP anyway.
- */
-static void wait_cpu_callin(unsigned int cpu)
-{
-	while (!cpumask_test_cpu(cpu, cpu_callin_mask))
-		schedule();
-}
-
-/*
- * Bringup step four: Wait for the target AP to reach set_cpu_online() in
+ * Bringup step three: Wait for the target AP to reach set_cpu_online() in
  * start_secondary().
  */
 static void wait_cpu_online(unsigned int cpu)
@@ -1167,14 +1135,6 @@ static int native_kick_ap(unsigned int c
 	}
 
 	/*
-	 * Already booted CPU?
-	 */
-	if (cpumask_test_cpu(cpu, cpu_callin_mask)) {
-		pr_debug("do_boot_cpu %d Already started\n", cpu);
-		return -ENOSYS;
-	}
-
-	/*
 	 * Save current MTRR state in case it was changed since early boot
 	 * (e.g. by the ACPI SMI) to initialize new CPUs with MTRRs in sync:
 	 */
@@ -1211,7 +1171,6 @@ int native_cpu_up(unsigned int cpu, stru
 	if (ret)
 		goto out;
 
-	wait_cpu_callin(cpu);
 	wait_cpu_online(cpu);
 
 out:
@@ -1327,7 +1286,6 @@ void __init smp_prepare_cpus_common(void
 	 * Setup boot CPU information
 	 */
 	smp_store_boot_cpu_info(); /* Final full version of the data */
-	cpumask_copy(cpu_callin_mask, cpumask_of(0));
 	mb();
 
 	for_each_possible_cpu(i) {
@@ -1542,7 +1500,6 @@ early_param("possible_cpus", _setup_poss
 void __init setup_cpu_local_masks(void)
 {
 	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callin_mask);
 	alloc_bootmem_cpumask_var(&cpu_callout_mask);
 	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
 }
@@ -1606,7 +1563,6 @@ static void remove_cpu_from_maps(int cpu
 {
 	set_cpu_online(cpu, false);
 	cpumask_clear_cpu(cpu, cpu_callout_mask);
-	cpumask_clear_cpu(cpu, cpu_callin_mask);
 	/* was set by cpu_init() */
 	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	numa_remove_cpu(cpu);



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533924.831057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZz3-00012J-2r; Fri, 12 May 2023 21:07:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533924.831057; Fri, 12 May 2023 21:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZz2-000101-K0; Fri, 12 May 2023 21:07:24 +0000
Received: by outflank-mailman (input) for mailman id 533924;
 Fri, 12 May 2023 21:07:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZz0-0004FP-ES
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:22 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fc9122c5-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc9122c5-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205256.263722880@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925641;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=6HcPZ1S0gKGilIW8DEzs+8FA47mShbE6fcxT9xps3qM=;
	b=wq6r7KiTpvagWfasCy7BpMaZc62MAFu3oUyN8IkTQnL9aLQt2J8dO9GuPVqDCozi0utj8B
	4IkD4xWAu5eTIAkrXomOEfRirhMpcSmjIuqx0o7p3I34k341WcbR0oawavbm30EBV5G/SQ
	pqVHGdr8O5AONZrQcM1b3A+l1qf3n3+2rHPsKgwNOlF2vQ3k5Npt0LMG9aJW+erlOl1Ibn
	pyPvj+RRfga1j85S0Dc2olvz/HEP4GwJI0tr9/CeQrVpZHiYB9toQXczVgnAW4f1NLnOxh
	6UTdAjUh2XFk2HO9QVoMZ6pTiLkYX7+zo/CatpFvhsqM4XVqS07nh/cjtfBz3A==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925641;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=6HcPZ1S0gKGilIW8DEzs+8FA47mShbE6fcxT9xps3qM=;
	b=kLfIJYpL8LcaVnGjQ4Dcmt10BIgjcw/Luxkj9XlqF3gpecWWEb7C3Qu2IrxE2br/1iTpq7
	mpK48/gsNDr9qICw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 15/37] cpu/hotplug: Rework sparse_irq locking in bringup_cpu()
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:21 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

There is no harm in holding the sparse_irq lock until the upcoming CPU
completes its startup in cpuhp_online_idle(). This allows the cpu_online()
synchronization to be removed from architecture code.
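
As an illustration, the resulting single-unlock control flow can be sketched
in plain C. This is a hypothetical userspace analogue, not kernel code: a
pthread mutex stands in for the sparse irq lock, and bringup(), arch_ok and
ap_ok are made-up names. Every failure path funnels through one unlock, just
as in the reworked bringup_cpu():

```c
#include <pthread.h>
#include <stdbool.h>

/* Stand-in for the sparse irq lock. */
static pthread_mutex_t sparse_lock = PTHREAD_MUTEX_INITIALIZER;

static int bringup(bool arch_ok, bool ap_ok)
{
	int ret = 0;

	/* Held across both the arch enable step and the wait for the AP. */
	pthread_mutex_lock(&sparse_lock);

	if (!arch_ok) {		/* analogous to __cpu_up() failing */
		ret = -1;
		goto out_unlock;
	}
	if (!ap_ok) {		/* analogous to bringup_wait_for_ap_online() failing */
		ret = -2;
		goto out_unlock;
	}

out_unlock:
	pthread_mutex_unlock(&sparse_lock);
	return ret;
}
```

If the lock were leaked on any path, the next call would deadlock; repeated
calls therefore double as a check that every exit releases it.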

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
V4: Amend comment about sparse irq lock - Peter Z.
---
 kernel/cpu.c |   34 ++++++++++++++++++++++++----------
 1 file changed, 24 insertions(+), 10 deletions(-)

--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -558,7 +558,7 @@ static int cpuhp_kick_ap(int cpu, struct
 	return ret;
 }
 
-static int bringup_wait_for_ap(unsigned int cpu)
+static int bringup_wait_for_ap_online(unsigned int cpu)
 {
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 
@@ -579,15 +579,12 @@ static int bringup_wait_for_ap(unsigned
 	 */
 	if (!cpu_smt_allowed(cpu))
 		return -ECANCELED;
-
-	if (st->target <= CPUHP_AP_ONLINE_IDLE)
-		return 0;
-
-	return cpuhp_kick_ap(cpu, st, st->target);
+	return 0;
 }
 
 static int bringup_cpu(unsigned int cpu)
 {
+	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
@@ -600,16 +597,33 @@ static int bringup_cpu(unsigned int cpu)
 	/*
 	 * Some architectures have to walk the irq descriptors to
 	 * setup the vector space for the cpu which comes online.
-	 * Prevent irq alloc/free across the bringup.
+	 *
+	 * Prevent irq alloc/free across the bringup by acquiring the
+	 * sparse irq lock. Hold it until the upcoming CPU completes the
+	 * startup in cpuhp_online_idle() which allows to avoid
+	 * intermediate synchronization points in the architecture code.
 	 */
 	irq_lock_sparse();
 
 	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
-	irq_unlock_sparse();
 	if (ret)
-		return ret;
-	return bringup_wait_for_ap(cpu);
+		goto out_unlock;
+
+	ret = bringup_wait_for_ap_online(cpu);
+	if (ret)
+		goto out_unlock;
+
+	irq_unlock_sparse();
+
+	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+		return 0;
+
+	return cpuhp_kick_ap(cpu, st, st->target);
+
+out_unlock:
+	irq_unlock_sparse();
+	return ret;
 }
 
 static int finish_cpu(unsigned int cpu)



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533929.831069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZz6-00022v-PB; Fri, 12 May 2023 21:07:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533929.831069; Fri, 12 May 2023 21:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZz6-00022F-Gy; Fri, 12 May 2023 21:07:28 +0000
Received: by outflank-mailman (input) for mailman id 533929;
 Fri, 12 May 2023 21:07:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZz5-0004F7-DF
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:27 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fe84adb9-f108-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe84adb9-f108-11ed-8611-37d641c3527e
Message-ID: <20230512205256.369512093@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925644;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PMK22/HxoxSZKsTx+MtkiiSihvL00YPGs/TFPEL6TlU=;
	b=okTigxEmoNFjuISzZ3w6/Y2KcJueXtK98ysR49cIhhmzWitrnTsrr30PPjCUMPXry4xftP
	J52lH1DVgnpuG/Rfq4wLyAmr6aBgTzg3Kcpzk53VYvl1lEluUMhVlu3W9w3D8aZ28W1fnU
	FYwd2Sfv1zxIrxvZkgBRqufQ3iNj4eO+ry/ohM+Ld/vwoFregiASpdqfC0/pYtJYsd1OgL
	Vlb3BzT7+3vJnR5zuW45X4PcDsWr4so0XRPW7RLUs94CbintoxNd6G1JVdbZNrIt24RqAz
	6rW20AhO6LWDKJUuEPLhETfdZ7aGAlZBRDu8S8t35nsN64xoJdMY+fi5MrLE5Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925644;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=PMK22/HxoxSZKsTx+MtkiiSihvL00YPGs/TFPEL6TlU=;
	b=6Ab3fDB1sVd28x5lyL9nkWKGbymDr5R+6KRuErqPsWXwifdYJwrujMAzLpyfU/ZQhUUikY
	XNL6casob2M1EbBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 17/37] x86/xen/smp_pv: Remove wait for CPU online
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:24 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the core code drops the sparse irq lock after the idle thread has
synchronized, there is no point in waiting for the AP to mark itself online.

Whether the control CPU spins in a wait loop or sleeps in the core code
until the online operation completes makes no difference.
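
For illustration, the removed wait was just a synchronization point on an
"online" flag. A hypothetical userspace analogue (not Xen code; the thread
and flag names are made up) shows why yielding in a loop here is equivalent
to sleeping in the core code, since neither side can proceed before the flag
is set:

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>

/* Stand-in for the per-CPU online state. */
static atomic_int cpu_online_flag;

static void *ap_thread(void *arg)
{
	(void)arg;
	/* ... AP startup work would happen here ... */
	atomic_store(&cpu_online_flag, 1);	/* analogous to set_cpu_online() */
	return NULL;
}

static int wait_for_ap(void)
{
	pthread_t ap;

	if (pthread_create(&ap, NULL, ap_thread, NULL))
		return -1;

	/* Analogous to the removed loop:
	 *   while (cpu_report_state(cpu) != CPU_ONLINE)
	 *           HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
	 */
	while (!atomic_load(&cpu_online_flag))
		sched_yield();

	pthread_join(ap, NULL);
	return 0;
}
```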

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/xen/smp_pv.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)


--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -340,11 +340,11 @@ static int xen_pv_cpu_up(unsigned int cp
 
 	xen_pmu_init(cpu);
 
-	rc = HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL);
-	BUG_ON(rc);
-
-	while (cpu_report_state(cpu) != CPU_ONLINE)
-		HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
+	/*
+	 * Why is this a BUG? If the hypercall fails then everything can be
+	 * rolled back, no?
+	 */
+	BUG_ON(HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL));
 
 	return 0;
 }





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533932.831076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZz8-0002O3-A9; Fri, 12 May 2023 21:07:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533932.831076; Fri, 12 May 2023 21:07:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZz8-0002MY-35; Fri, 12 May 2023 21:07:30 +0000
Received: by outflank-mailman (input) for mailman id 533932;
 Fri, 12 May 2023 21:07:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZz7-0004F7-Dh
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:29 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ff656733-f108-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff656733-f108-11ed-8611-37d641c3527e
Message-ID: <20230512205256.423407127@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925646;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=1rKmHmZAbJT08KPFiwa9L0tLfU68vKcUg6wSGkvdHSU=;
	b=oQbDXlWOfM0QoeIS7fBd1ZDXblWtvo4+j/S5K5bAfdSwYbOZAW2gSSiDW+OyvgXb/rXSk2
	+/cGl59AtTsBjSk7vwY4uPAQTNLGgW4cVw5R5p0BPvJlNfEfRufZHS7KlZQMVwbmLXJxKb
	s6aysUoLSt4mwWJ1tetWxrFJExUFkRYOeNoA8l6IhoUsrN3BFbahOePPwtYjw/So5TqD69
	2Cz8B/vuKEN0Yd2xMrYK0SQw8UE0HL8tHmnz5xhL98Md0D/lNdYrgp9dQF6YIxHSHgOtX9
	NYORCVxryHyjQrZpsLDTBMe0I9/XQv2spHgCmpdrV7IAP/NBWUx7dh/o0Bd+yA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925646;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=1rKmHmZAbJT08KPFiwa9L0tLfU68vKcUg6wSGkvdHSU=;
	b=UfaoCMCiIYhlxZAtqiJhn7AYfYyYM9daYEgmWZsmdMFm1YeoGErpKjOwH0iDKmu7TweDmj
	Ryr2+mWXOS82QjCw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 18/37] x86/xen/hvm: Get rid of DEAD_FROZEN handling
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:25 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

No point in this conditional voodoo. Uninitializing the lock mechanism is
safe to do unconditionally, even if it was already done when the CPU died.

Remove the invocation of xen_smp_intr_free(), as that has already been
cleaned up in xen_cpu_dead_hvm().
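
The reason the conditional can go is that an uninit routine which tracks
its own state is idempotent. A hypothetical sketch (not the real
xen_uninit_lock_cpu(); names are illustrative):

```c
#include <stdbool.h>

/* Stand-in for the per-CPU lock mechanism state. */
static bool lock_initialized;

static void init_lock_cpu(void)
{
	lock_initialized = true;
}

/* Safe to call unconditionally: a second call is a no-op. */
static void uninit_lock_cpu(void)
{
	if (!lock_initialized)	/* already uninitialized, nothing to do */
		return;
	lock_initialized = false;
}
```

Calling uninit_lock_cpu() once or twice leaves the same state, so the
caller need not remember whether the CPU's death path already ran it.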

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/xen/enlighten_hvm.c |   11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -161,13 +161,12 @@ static int xen_cpu_up_prepare_hvm(unsign
 	int rc = 0;
 
 	/*
-	 * This can happen if CPU was offlined earlier and
-	 * offlining timed out in common_cpu_die().
+	 * If a CPU was offlined earlier and offlining timed out then the
+	 * lock mechanism is still initialized. Uninit it unconditionally
+	 * as it's safe to call even if already uninited. Interrupts and
+	 * timer have already been handled in xen_cpu_dead_hvm().
 	 */
-	if (cpu_report_state(cpu) == CPU_DEAD_FROZEN) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-	}
+	xen_uninit_lock_cpu(cpu);
 
 	if (cpu_acpi_id(cpu) != U32_MAX)
 		per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533941.831089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZzD-0003la-Rm; Fri, 12 May 2023 21:07:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533941.831089; Fri, 12 May 2023 21:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZzD-0003jH-MP; Fri, 12 May 2023 21:07:35 +0000
Received: by outflank-mailman (input) for mailman id 533941;
 Fri, 12 May 2023 21:07:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZz2-0004FP-In
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:24 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fd7cf51e-f108-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd7cf51e-f108-11ed-b229-6b7b168915f2
Message-ID: <20230512205256.316417181@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925643;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=DlXLIimSUyDsLnpaLCie3XUbsPcjE+B3Y5/np73/syk=;
	b=0n8XyUqdMKHvhCV5NvBs8uq0bkyvVXbfLtaGYG0ApQJEOyilZiFp81ncYSBr6upR2FGS65
	C/nM69M5fEGpiISmkmAowedNeMh8stJLEoOo7kEL2Yn2+06PaBevfLVashkK38xEMls2k0
	f/2djYvKqyUSewvgr/tLWZAR3cGWoQ2BFlZWa6eUYCxL/R/BBbwHpxF3oihmcCNqrcO5vL
	DdSUgTidw3Y3j0rys0Ztwy/cnS3zHqjKWZhecFMnqr4ICUwW5Yz4Fvu/UCxs3vooavSjLb
	kpbMhzb9bUAKVmUd7V7KcFTAjo4mQTSyfyHcI0YMf+BOz6gUP+32KRGIcgecpA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925643;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=DlXLIimSUyDsLnpaLCie3XUbsPcjE+B3Y5/np73/syk=;
	b=NxwadJuaL5bNYJvtkxpDH+oWBvMqBd0ae6N3DUYGY2xW3p33/XvIRC9MwzW0Aq6/9Tjy03
	UcA8T+Ga+8Z9ZeBg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 16/37] x86/smpboot: Remove wait for cpu_online()
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:22 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the core code drops the sparse irq lock after the idle thread has
synchronized, there is no point in waiting for the AP to mark itself online.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/kernel/smpboot.c |   26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -275,7 +275,6 @@ static void notrace start_secondary(void
 	 * half valid vector space.
 	 */
 	lock_vector_lock();
-	/* Sync point with do_wait_cpu_online() */
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
@@ -1104,20 +1103,6 @@ static int wait_cpu_initialized(unsigned
 	return 0;
 }
 
-/*
- * Bringup step three: Wait for the target AP to reach set_cpu_online() in
- * start_secondary().
- */
-static void wait_cpu_online(unsigned int cpu)
-{
-	/*
-	 * Wait for the AP to mark itself online, so the core caller
-	 * can drop sparse_irq_lock.
-	 */
-	while (!cpu_online(cpu))
-		schedule();
-}
-
 static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
@@ -1164,16 +1149,9 @@ int native_cpu_up(unsigned int cpu, stru
 	int ret;
 
 	ret = native_kick_ap(cpu, tidle);
-	if (ret)
-		goto out;
-
-	ret = wait_cpu_initialized(cpu);
-	if (ret)
-		goto out;
-
-	wait_cpu_online(cpu);
+	if (!ret)
+		ret = wait_cpu_initialized(cpu);
 
-out:
 	/* Cleanup possible dangling ends... */
 	if (x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533944.831095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZzE-0003sQ-HU; Fri, 12 May 2023 21:07:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533944.831095; Fri, 12 May 2023 21:07:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZzE-0003rn-66; Fri, 12 May 2023 21:07:36 +0000
Received: by outflank-mailman (input) for mailman id 533944;
 Fri, 12 May 2023 21:07:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZz9-0004F7-Ox
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:31 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 015a0e4c-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 015a0e4c-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205256.529657366@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925649;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=kerkTYQkqrtQtKFBpQXgl/mifSw4AkZB0D4QOQ6VFhw=;
	b=w8TT4L11mJvC55XynJcsQ1xoK1/w/yAF5qx8yneeH/P8fnwkK4CXFMISyUnKUwSgvhoDAl
	gV92Sbp/uLLIrG95EpfdasX7LlHXR/IOYpUwLTDN9RFnQnSNOC6B4H2r+dRh/QmZVhmA2u
	vjBehWHGo06ekfCrcc276dJYGf3nQWs/O77pK47j1qubry0rpJTNL54W+W3ORR2GKSH0Xb
	0N2+ef/WZNj3LEIt56CzNn7hWkIntm/GrvS1mfBAvoAQGgk5acbRL64idfy9V027U20bzo
	J5GxBCTWsyOexHnxcpVt0yFQws5nbC6M7W6V5/kfYs0j+Unqz8VfU6Qa4a577g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925649;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=kerkTYQkqrtQtKFBpQXgl/mifSw4AkZB0D4QOQ6VFhw=;
	b=lLj76nZCgHPXiHbYjFpSOoqhYSZIa9ISK3t+HdT3HUFLjSF+gSBjS0S9mYqFOEdVtXnouT
	ImnaeP+WLwe2IlBg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 20/37] x86/smpboot: Switch to hotplug core state synchronization
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:29 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The new AP state tracking and synchronization mechanism in the CPU hotplug
core code makes it possible to remove quite a bit of x86-specific code:

  1) The AP alive synchronization based on cpumasks

  2) The decision whether an AP can be brought up again
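
The cpumask-based handshake being replaced can be pictured as a small state
machine. The following is a hypothetical userspace sketch, not the kernel
API (the state names and functions are illustrative): the AP advertises
ALIVE, the control CPU releases it, and the AP completes bringup, replacing
the cpu_initialized_mask/cpu_callout_mask bitmask dance with a single
per-CPU state variable:

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>

enum ap_state { AP_DEAD, AP_ALIVE, AP_RELEASED, AP_ONLINE };

static atomic_int state = AP_DEAD;

static void *ap_start(void *arg)
{
	(void)arg;
	atomic_store(&state, AP_ALIVE);		/* analogous to cpuhp_ap_sync_alive() */
	while (atomic_load(&state) != AP_RELEASED)
		sched_yield();			/* wait for the control CPU */
	atomic_store(&state, AP_ONLINE);	/* analogous to set_cpu_online() */
	return NULL;
}

static int control_bringup(void)
{
	pthread_t ap;

	if (pthread_create(&ap, NULL, ap_start, NULL))
		return -1;

	while (atomic_load(&state) != AP_ALIVE)	/* wait for the AP to report in */
		sched_yield();
	atomic_store(&state, AP_RELEASED);	/* release the AP for further bringup */

	pthread_join(ap, NULL);
	return atomic_load(&state) == AP_ONLINE ? 0 : -1;
}
```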

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
V2: Use for_each_online_cpu() - Brian
---
 arch/x86/Kconfig           |    1 
 arch/x86/include/asm/smp.h |    7 +
 arch/x86/kernel/smp.c      |    1 
 arch/x86/kernel/smpboot.c  |  165 +++++++++++----------------------------------
 arch/x86/xen/smp_hvm.c     |   16 +---
 arch/x86/xen/smp_pv.c      |   39 ++++++----
 6 files changed, 75 insertions(+), 154 deletions(-)


--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,6 +274,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_CORE_SYNC_FULL		if SMP
 	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -38,6 +38,8 @@ struct smp_ops {
 	void (*crash_stop_other_cpus)(void);
 	void (*smp_send_reschedule)(int cpu);
 
+	void (*cleanup_dead_cpu)(unsigned cpu);
+	void (*poll_sync_state)(void);
 	int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
@@ -90,7 +92,8 @@ static inline int __cpu_disable(void)
 
 static inline void __cpu_die(unsigned int cpu)
 {
-	smp_ops.cpu_die(cpu);
+	if (smp_ops.cpu_die)
+		smp_ops.cpu_die(cpu);
 }
 
 static inline void __noreturn play_dead(void)
@@ -123,8 +126,6 @@ void native_smp_cpus_done(unsigned int m
 int common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_disable(void);
-int common_cpu_die(unsigned int cpu);
-void native_cpu_die(unsigned int cpu);
 void __noreturn hlt_play_dead(void);
 void native_play_dead(void);
 void play_dead_common(void);
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -269,7 +269,6 @@ struct smp_ops smp_ops = {
 	.smp_send_reschedule	= native_smp_send_reschedule,
 
 	.cpu_up			= native_cpu_up,
-	.cpu_die		= native_cpu_die,
 	.cpu_disable		= native_cpu_disable,
 	.play_dead		= native_play_dead,
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -57,6 +57,7 @@
 #include <linux/pgtable.h>
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
+#include <linux/cpuhotplug.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -101,9 +102,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
-/* All of these masks are initialized in setup_cpu_local_masks() */
-static cpumask_var_t cpu_initialized_mask;
-static cpumask_var_t cpu_callout_mask;
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -166,10 +164,10 @@ static void ap_starting(void)
 	int cpuid = smp_processor_id();
 
 	/*
-	 * If woken up by an INIT in an 82489DX configuration
-	 * cpu_callout_mask guarantees the CPU does not reach this point
-	 * before an INIT_deassert IPI reaches the local APIC, so it is now
-	 * safe to touch the local APIC.
+	 * If woken up by an INIT in an 82489DX configuration the alive
+	 * synchronization guarantees that the CPU does not reach this
+	 * point before an INIT_deassert IPI reaches the local APIC, so it
+	 * is now safe to touch the local APIC.
 	 *
 	 * Set up this CPU, first the APIC, which is probably redundant on
 	 * most boards.
@@ -213,17 +211,6 @@ static void ap_calibrate_delay(void)
 	cpu_data(smp_processor_id()).loops_per_jiffy = loops_per_jiffy;
 }
 
-static void wait_for_master_cpu(int cpu)
-{
-	/*
-	 * Wait for release by control CPU before continuing with AP
-	 * initialization.
-	 */
-	WARN_ON(cpumask_test_and_set_cpu(cpu, cpu_initialized_mask));
-	while (!cpumask_test_cpu(cpu, cpu_callout_mask))
-		cpu_relax();
-}
-
 /*
  * Activate a secondary processor.
  */
@@ -244,11 +231,11 @@ static void notrace start_secondary(void
 	cpu_init_exception_handling();
 
 	/*
-	 * Sync point with wait_cpu_initialized(). Sets AP in
-	 * cpu_initialized_mask and then waits for the control CPU
-	 * to release it.
+	 * Synchronization point with the hotplug core. Sets the
+	 * synchronization state to ALIVE and waits for the control CPU to
+	 * release this CPU for further bringup.
 	 */
-	wait_for_master_cpu(raw_smp_processor_id());
+	cpuhp_ap_sync_alive();
 
 	cpu_init();
 	rcu_cpu_starting(raw_smp_processor_id());
@@ -278,7 +265,6 @@ static void notrace start_secondary(void
 	set_cpu_online(smp_processor_id(), true);
 	lapic_online();
 	unlock_vector_lock();
-	cpu_set_state_online(smp_processor_id());
 	x86_platform.nmi_init();
 
 	/* enable local interrupts */
@@ -729,9 +715,9 @@ static void impress_friends(void)
 	 * Allow the user to impress friends.
 	 */
 	pr_debug("Before bogomips\n");
-	for_each_possible_cpu(cpu)
-		if (cpumask_test_cpu(cpu, cpu_callout_mask))
-			bogosum += cpu_data(cpu).loops_per_jiffy;
+	for_each_online_cpu(cpu)
+		bogosum += cpu_data(cpu).loops_per_jiffy;
+
 	pr_info("Total of %d processors activated (%lu.%02lu BogoMIPS)\n",
 		num_online_cpus(),
 		bogosum/(500000/HZ),
@@ -1003,6 +989,7 @@ int common_cpu_up(unsigned int cpu, stru
 static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
 {
 	unsigned long start_ip = real_mode_header->trampoline_start;
+	int ret;
 
 #ifdef CONFIG_X86_64
 	/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -1043,13 +1030,6 @@ static int do_boot_cpu(int apicid, int c
 		}
 	}
 
-	/*
-	 * AP might wait on cpu_callout_mask in cpu_init() with
-	 * cpu_initialized_mask set if previous attempt to online
-	 * it timed-out. Clear cpu_initialized_mask so that after
-	 * INIT/SIPI it could start with a clean state.
-	 */
-	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	smp_mb();
 
 	/*
@@ -1060,47 +1040,16 @@ static int do_boot_cpu(int apicid, int c
 	 * - Use an INIT boot APIC message
 	 */
 	if (apic->wakeup_secondary_cpu_64)
-		return apic->wakeup_secondary_cpu_64(apicid, start_ip);
+		ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
 	else if (apic->wakeup_secondary_cpu)
-		return apic->wakeup_secondary_cpu(apicid, start_ip);
-
-	return wakeup_secondary_cpu_via_init(apicid, start_ip);
-}
-
-static int wait_cpu_cpumask(unsigned int cpu, const struct cpumask *mask)
-{
-	unsigned long timeout;
-
-	/*
-	 * Wait up to 10s for the CPU to report in.
-	 */
-	timeout = jiffies + 10*HZ;
-	while (time_before(jiffies, timeout)) {
-		if (cpumask_test_cpu(cpu, mask))
-			return 0;
-
-		schedule();
-	}
-	return -1;
-}
-
-/*
- * Bringup step two: Wait for the target AP to reach cpu_init_secondary()
- * and thus wait_for_master_cpu(), then set cpu_callout_mask to allow it
- * to proceed.  The AP will then proceed past setting its 'callin' bit
- * and end up waiting in check_tsc_sync_target() until we reach
- * wait_cpu_online() to tend to it.
- */
-static int wait_cpu_initialized(unsigned int cpu)
-{
-	/*
-	 * Wait for first sign of life from AP.
-	 */
-	if (wait_cpu_cpumask(cpu, cpu_initialized_mask))
-		return -1;
+		ret = apic->wakeup_secondary_cpu(apicid, start_ip);
+	else
+		ret = wakeup_secondary_cpu_via_init(apicid, start_ip);
 
-	cpumask_set_cpu(cpu, cpu_callout_mask);
-	return 0;
+	/* If the wakeup mechanism failed, cleanup the warm reset vector */
+	if (ret)
+		arch_cpuhp_cleanup_kick_cpu(cpu);
+	return ret;
 }
 
 static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
@@ -1125,11 +1074,6 @@ static int native_kick_ap(unsigned int c
 	 */
 	mtrr_save_state();
 
-	/* x86 CPUs take themselves offline, so delayed offline is OK. */
-	err = cpu_check_up_prepare(cpu);
-	if (err && err != -EBUSY)
-		return err;
-
 	/* the FPU context is blank, nobody can own it */
 	per_cpu(fpu_fpregs_owner_ctx, cpu) = NULL;
 
@@ -1146,17 +1090,29 @@ static int native_kick_ap(unsigned int c
 
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
-	int ret;
-
-	ret = native_kick_ap(cpu, tidle);
-	if (!ret)
-		ret = wait_cpu_initialized(cpu);
+	return native_kick_ap(cpu, tidle);
+}
 
+void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu)
+{
 	/* Cleanup possible dangling ends... */
-	if (x86_platform.legacy.warm_reset)
+	if (smp_ops.cpu_up == native_cpu_up && x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();
+}
 
-	return ret;
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
+	if (smp_ops.cleanup_dead_cpu)
+		smp_ops.cleanup_dead_cpu(cpu);
+
+	if (system_state == SYSTEM_RUNNING)
+		pr_info("CPU %u is now offline\n", cpu);
+}
+
+void arch_cpuhp_sync_state_poll(void)
+{
+	if (smp_ops.poll_sync_state)
+		smp_ops.poll_sync_state();
 }
 
 /**
@@ -1348,9 +1304,6 @@ void __init native_smp_prepare_boot_cpu(
 	if (!IS_ENABLED(CONFIG_SMP))
 		switch_gdt_and_percpu_base(me);
 
-	/* already set me in cpu_online_mask in boot_cpu_init() */
-	cpumask_set_cpu(me, cpu_callout_mask);
-	cpu_set_state_online(me);
 	native_pv_lock_init();
 }
 
@@ -1477,8 +1430,6 @@ early_param("possible_cpus", _setup_poss
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
-	alloc_bootmem_cpumask_var(&cpu_initialized_mask);
-	alloc_bootmem_cpumask_var(&cpu_callout_mask);
 	alloc_bootmem_cpumask_var(&cpu_sibling_setup_mask);
 }
 
@@ -1540,9 +1491,6 @@ static void remove_siblinginfo(int cpu)
 static void remove_cpu_from_maps(int cpu)
 {
 	set_cpu_online(cpu, false);
-	cpumask_clear_cpu(cpu, cpu_callout_mask);
-	/* was set by cpu_init() */
-	cpumask_clear_cpu(cpu, cpu_initialized_mask);
 	numa_remove_cpu(cpu);
 }
 
@@ -1593,36 +1541,11 @@ int native_cpu_disable(void)
 	return 0;
 }
 
-int common_cpu_die(unsigned int cpu)
-{
-	int ret = 0;
-
-	/* We don't do anything here: idle task is faking death itself. */
-
-	/* They ack this in play_dead() by setting CPU_DEAD */
-	if (cpu_wait_death(cpu, 5)) {
-		if (system_state == SYSTEM_RUNNING)
-			pr_info("CPU %u is now offline\n", cpu);
-	} else {
-		pr_err("CPU %u didn't die...\n", cpu);
-		ret = -1;
-	}
-
-	return ret;
-}
-
-void native_cpu_die(unsigned int cpu)
-{
-	common_cpu_die(cpu);
-}
-
 void play_dead_common(void)
 {
 	idle_task_exit();
 
-	/* Ack it */
-	(void)cpu_report_death();
-
+	cpuhp_ap_report_dead();
 	/*
 	 * With physical CPU hotplug, we should halt the cpu
 	 */
@@ -1724,12 +1647,6 @@ int native_cpu_disable(void)
 	return -ENOSYS;
 }
 
-void native_cpu_die(unsigned int cpu)
-{
-	/* We said "no" in __cpu_disable */
-	BUG();
-}
-
 void native_play_dead(void)
 {
 	BUG();
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -55,18 +55,16 @@ static void __init xen_hvm_smp_prepare_c
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-static void xen_hvm_cpu_die(unsigned int cpu)
+static void xen_hvm_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (common_cpu_die(cpu) == 0) {
-		if (xen_have_vector_callback) {
-			xen_smp_intr_free(cpu);
-			xen_uninit_lock_cpu(cpu);
-			xen_teardown_timer(cpu);
-		}
+	if (xen_have_vector_callback) {
+		xen_smp_intr_free(cpu);
+		xen_uninit_lock_cpu(cpu);
+		xen_teardown_timer(cpu);
 	}
 }
 #else
-static void xen_hvm_cpu_die(unsigned int cpu)
+static void xen_hvm_cleanup_dead_cpu(unsigned int cpu)
 {
 	BUG();
 }
@@ -77,7 +75,7 @@ void __init xen_hvm_smp_init(void)
 	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
 	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
 	smp_ops.smp_cpus_done = xen_smp_cpus_done;
-	smp_ops.cpu_die = xen_hvm_cpu_die;
+	smp_ops.cleanup_dead_cpu = xen_hvm_cleanup_dead_cpu;
 
 	if (!xen_have_vector_callback) {
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -62,6 +62,7 @@ static void cpu_bringup(void)
 	int cpu;
 
 	cr4_init();
+	cpuhp_ap_sync_alive();
 	cpu_init();
 	touch_softlockup_watchdog();
 
@@ -83,7 +84,7 @@ static void cpu_bringup(void)
 
 	set_cpu_online(cpu, true);
 
-	cpu_set_state_online(cpu);  /* Implies full memory barrier. */
+	smp_mb();
 
 	/* We can take interrupts now: we're officially "up". */
 	local_irq_enable();
@@ -323,14 +324,6 @@ static int xen_pv_cpu_up(unsigned int cp
 
 	xen_setup_runstate_info(cpu);
 
-	/*
-	 * PV VCPUs are always successfully taken down (see 'while' loop
-	 * in xen_cpu_die()), so -EBUSY is an error.
-	 */
-	rc = cpu_check_up_prepare(cpu);
-	if (rc)
-		return rc;
-
 	/* make sure interrupts start blocked */
 	per_cpu(xen_vcpu, cpu)->evtchn_upcall_mask = 1;
 
@@ -349,6 +342,11 @@ static int xen_pv_cpu_up(unsigned int cp
 	return 0;
 }
 
+static void xen_pv_poll_sync_state(void)
+{
+	HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
+}
+
 #ifdef CONFIG_HOTPLUG_CPU
 static int xen_pv_cpu_disable(void)
 {
@@ -364,18 +362,18 @@ static int xen_pv_cpu_disable(void)
 
 static void xen_pv_cpu_die(unsigned int cpu)
 {
-	while (HYPERVISOR_vcpu_op(VCPUOP_is_up,
-				  xen_vcpu_nr(cpu), NULL)) {
+	while (HYPERVISOR_vcpu_op(VCPUOP_is_up, xen_vcpu_nr(cpu), NULL)) {
 		__set_current_state(TASK_UNINTERRUPTIBLE);
 		schedule_timeout(HZ/10);
 	}
+}
 
-	if (common_cpu_die(cpu) == 0) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-		xen_teardown_timer(cpu);
-		xen_pmu_finish(cpu);
-	}
+static void xen_pv_cleanup_dead_cpu(unsigned int cpu)
+{
+	xen_smp_intr_free(cpu);
+	xen_uninit_lock_cpu(cpu);
+	xen_teardown_timer(cpu);
+	xen_pmu_finish(cpu);
 }
 
 static void __noreturn xen_pv_play_dead(void) /* used only with HOTPLUG_CPU */
@@ -397,6 +395,11 @@ static void xen_pv_cpu_die(unsigned int
 	BUG();
 }
 
+static void xen_pv_cleanup_dead_cpu(unsigned int cpu)
+{
+	BUG();
+}
+
 static void __noreturn xen_pv_play_dead(void)
 {
 	BUG();
@@ -437,6 +440,8 @@ static const struct smp_ops xen_smp_ops
 
 	.cpu_up = xen_pv_cpu_up,
 	.cpu_die = xen_pv_cpu_die,
+	.cleanup_dead_cpu = xen_pv_cleanup_dead_cpu,
+	.poll_sync_state = xen_pv_poll_sync_state,
 	.cpu_disable = xen_pv_cpu_disable,
 	.play_dead = xen_pv_play_dead,
 



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:07:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:07:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533945.831101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZzF-00045e-BA; Fri, 12 May 2023 21:07:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533945.831101; Fri, 12 May 2023 21:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxZzE-00041r-Ql; Fri, 12 May 2023 21:07:36 +0000
Received: by outflank-mailman (input) for mailman id 533945;
 Fri, 12 May 2023 21:07:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZz8-0004F7-Dc
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:30 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 006a0704-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 006a0704-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205256.476305035@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925648;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=8F6M/H3ECxhJ4BqhEzY6AuGn3oBBn//EycivI44L4rs=;
	b=lIEASxvORo0ppk6W8dhQ3RRe38WUWWZkZkTj4wJXekOjHLxjtLxHWZI5MmsGV1eYpOHRwz
	GzmeUyM2DfYz3jVvhPuEwnp2Cptdh5x8QIyt2fVac+rxhH0QBINjH/V2FKDgh5cIxcJtI6
	ZrhlCHaheKhVn+Kkg4J26sDcLYLTUYc/QN9YBRRJXXAM6G4y9OIFYSQhAaiZ6i9G2zclqI
	Gf0viMVU6Sr+F78hsSt7AUorMSN0TK2A21+28r0qT4HfTTmyJtXfnZepS8g4ys++/K4a+n
	cFQnNRTDZWjic7dAP1KxsD049sBrXPdy4fEQ+B8E4Tzr13RAGkF82n25fVLYSQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925648;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=8F6M/H3ECxhJ4BqhEzY6AuGn3oBBn//EycivI44L4rs=;
	b=IxI8KL8SmufrtflQC521t9GVZkc0WLtfWSTiRjyh60nybLltrHJuTzRI6QSR+WSYTNLr+o
	x5cJ1yJyEY1/T/CQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 19/37] cpu/hotplug: Add CPU state tracking and synchronization
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:27 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The CPU state tracking and synchronization mechanism in smpboot.c is
completely independent of the hotplug code and all logic around it is
implemented in architecture specific code.

Except for the state reporting of the AP there is absolutely nothing
architecture specific and the synchronization and decision functions can
be moved into the generic hotplug core code.

Provide an integrated variant and add the core synchronization and decision
points. This comes in two flavours:

  1) DEAD state synchronization

     Updated by the architecture code once the AP reaches the point where
     it is ready to be torn down by the control CPU, e.g. by removing power
     or clocks or tear down via the hypervisor.

     The control CPU waits for this state to be reached with a timeout. If
     the state is reached an architecture specific cleanup function is
     invoked.

  2) Full state synchronization

     This extends #1 with AP alive synchronization. This is new
     functionality, which allows architecture specific wait
     mechanisms, e.g. cpumasks, to be replaced completely.

     It also prevents an AP which is in a limbo state from being
     brought up again. This can happen when an AP failed to report
     dead state during a previous off-line operation.

The dead synchronization is what most architectures use. Only x86 makes a
bringup decision based on that state at the moment.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
V4: Remove the try_cmpxchg() loop in cpuhp_ap_update_sync_state() - Peter Z.
---
 arch/Kconfig               |   15 +++
 include/linux/cpuhotplug.h |   12 ++
 kernel/cpu.c               |  193 ++++++++++++++++++++++++++++++++++++++++++++-
 kernel/smpboot.c           |    2 
 4 files changed, 221 insertions(+), 1 deletion(-)
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -34,6 +34,21 @@ config ARCH_HAS_SUBPAGE_FAULTS
 config HOTPLUG_SMT
 	bool
 
+# Selected by HOTPLUG_CORE_SYNC_DEAD or HOTPLUG_CORE_SYNC_FULL
+config HOTPLUG_CORE_SYNC
+	bool
+
+# Basic CPU dead synchronization selected by architecture
+config HOTPLUG_CORE_SYNC_DEAD
+	bool
+	select HOTPLUG_CORE_SYNC
+
+# Full CPU synchronization with alive state selected by architecture
+config HOTPLUG_CORE_SYNC_FULL
+	bool
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
+	select HOTPLUG_CORE_SYNC
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -517,4 +517,16 @@ void cpuhp_online_idle(enum cpuhp_state
 static inline void cpuhp_online_idle(enum cpuhp_state state) { }
 #endif
 
+void cpuhp_ap_sync_alive(void);
+void arch_cpuhp_sync_state_poll(void);
+void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
+void cpuhp_ap_report_dead(void);
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu);
+#else
+static inline void cpuhp_ap_report_dead(void) { }
+static inline void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu) { }
+#endif
+
 #endif
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -17,6 +17,7 @@
 #include <linux/cpu.h>
 #include <linux/oom.h>
 #include <linux/rcupdate.h>
+#include <linux/delay.h>
 #include <linux/export.h>
 #include <linux/bug.h>
 #include <linux/kthread.h>
@@ -59,6 +60,7 @@
  * @last:	For multi-instance rollback, remember how far we got
  * @cb_state:	The state for a single callback (install/uninstall)
  * @result:	Result of the operation
+ * @ap_sync_state:	State for AP synchronization
  * @done_up:	Signal completion to the issuer of the task for cpu-up
  * @done_down:	Signal completion to the issuer of the task for cpu-down
  */
@@ -76,6 +78,7 @@ struct cpuhp_cpu_state {
 	struct hlist_node	*last;
 	enum cpuhp_state	cb_state;
 	int			result;
+	atomic_t		ap_sync_state;
 	struct completion	done_up;
 	struct completion	done_down;
 #endif
@@ -276,6 +279,182 @@ static bool cpuhp_is_atomic_state(enum c
 	return CPUHP_AP_IDLE_DEAD <= state && state < CPUHP_AP_ONLINE;
 }
 
+/* Synchronization state management */
+enum cpuhp_sync_state {
+	SYNC_STATE_DEAD,
+	SYNC_STATE_KICKED,
+	SYNC_STATE_SHOULD_DIE,
+	SYNC_STATE_ALIVE,
+	SYNC_STATE_SHOULD_ONLINE,
+	SYNC_STATE_ONLINE,
+};
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC
+/**
+ * cpuhp_ap_update_sync_state - Update synchronization state during bringup/teardown
+ * @state:	The synchronization state to set
+ *
+ * No synchronization point. Just update of the synchronization state, but implies
+ * a full barrier so that the AP changes are visible before the control CPU proceeds.
+ */
+static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state)
+{
+	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
+
+	(void)atomic_xchg(st, state);
+}
+
+void __weak arch_cpuhp_sync_state_poll(void) { cpu_relax(); }
+
+static bool cpuhp_wait_for_sync_state(unsigned int cpu, enum cpuhp_sync_state state,
+				      enum cpuhp_sync_state next_state)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	ktime_t now, end, start = ktime_get();
+	int sync;
+
+	end = start + 10ULL * NSEC_PER_SEC;
+
+	sync = atomic_read(st);
+	while (1) {
+		if (sync == state) {
+			if (!atomic_try_cmpxchg(st, &sync, next_state))
+				continue;
+			return true;
+		}
+
+		now = ktime_get();
+		if (now > end) {
+			/* Timeout. Leave the state unchanged */
+			return false;
+		} else if (now - start < NSEC_PER_MSEC) {
+			/* Poll for one millisecond */
+			arch_cpuhp_sync_state_poll();
+		} else {
+			usleep_range_state(USEC_PER_MSEC, 2 * USEC_PER_MSEC, TASK_UNINTERRUPTIBLE);
+		}
+		sync = atomic_read(st);
+	}
+	return true;
+}
+#else  /* CONFIG_HOTPLUG_CORE_SYNC */
+static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state) { }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC */
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
+/**
+ * cpuhp_ap_report_dead - Update synchronization state to DEAD
+ *
+ * No synchronization point. Just update of the synchronization state.
+ */
+void cpuhp_ap_report_dead(void)
+{
+	cpuhp_ap_update_sync_state(SYNC_STATE_DEAD);
+}
+
+void __weak arch_cpuhp_cleanup_dead_cpu(unsigned int cpu) { }
+
+/*
+ * Late CPU shutdown synchronization point. Cannot use cpuhp_state::done_down
+ * because the AP cannot issue complete() at this stage.
+ */
+static void cpuhp_bp_sync_dead(unsigned int cpu)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	int sync = atomic_read(st);
+
+	do {
+		/* CPU can have reported dead already. Don't overwrite that! */
+		if (sync == SYNC_STATE_DEAD)
+			break;
+	} while (!atomic_try_cmpxchg(st, &sync, SYNC_STATE_SHOULD_DIE));
+
+	if (cpuhp_wait_for_sync_state(cpu, SYNC_STATE_DEAD, SYNC_STATE_DEAD)) {
+		/* CPU reached dead state. Invoke the cleanup function */
+		arch_cpuhp_cleanup_dead_cpu(cpu);
+		return;
+	}
+
+	/* No further action possible. Emit message and give up. */
+	pr_err("CPU%u failed to report dead state\n", cpu);
+}
+#else /* CONFIG_HOTPLUG_CORE_SYNC_DEAD */
+static inline void cpuhp_bp_sync_dead(unsigned int cpu) { }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC_DEAD */
+
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_FULL
+/**
+ * cpuhp_ap_sync_alive - Synchronize AP with the control CPU once it is alive
+ *
+ * Updates the AP synchronization state to SYNC_STATE_ALIVE and waits
+ * for the BP to release it.
+ */
+void cpuhp_ap_sync_alive(void)
+{
+	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
+
+	cpuhp_ap_update_sync_state(SYNC_STATE_ALIVE);
+
+	/* Wait for the control CPU to release it. */
+	while (atomic_read(st) != SYNC_STATE_SHOULD_ONLINE)
+		cpu_relax();
+}
+
+static bool cpuhp_can_boot_ap(unsigned int cpu)
+{
+	atomic_t *st = per_cpu_ptr(&cpuhp_state.ap_sync_state, cpu);
+	int sync = atomic_read(st);
+
+again:
+	switch (sync) {
+	case SYNC_STATE_DEAD:
+		/* CPU is properly dead */
+		break;
+	case SYNC_STATE_KICKED:
+		/* CPU did not come up in previous attempt */
+		break;
+	case SYNC_STATE_ALIVE:
+		/* CPU is stuck in cpuhp_ap_sync_alive(). */
+		break;
+	default:
+		/* CPU failed to report online or dead and is in limbo state. */
+		return false;
+	}
+
+	/* Prepare for booting */
+	if (!atomic_try_cmpxchg(st, &sync, SYNC_STATE_KICKED))
+		goto again;
+
+	return true;
+}
+
+void __weak arch_cpuhp_cleanup_kick_cpu(unsigned int cpu) { }
+
+/*
+ * Early CPU bringup synchronization point. Cannot use cpuhp_state::done_up
+ * because the AP cannot issue complete() so early in the bringup.
+ */
+static int cpuhp_bp_sync_alive(unsigned int cpu)
+{
+	int ret = 0;
+
+	if (!IS_ENABLED(CONFIG_HOTPLUG_CORE_SYNC_FULL))
+		return 0;
+
+	if (!cpuhp_wait_for_sync_state(cpu, SYNC_STATE_ALIVE, SYNC_STATE_SHOULD_ONLINE)) {
+		pr_err("CPU%u failed to report alive state\n", cpu);
+		ret = -EIO;
+	}
+
+	/* Let the architecture cleanup the kick alive mechanics. */
+	arch_cpuhp_cleanup_kick_cpu(cpu);
+	return ret;
+}
+#else /* CONFIG_HOTPLUG_CORE_SYNC_FULL */
+static inline int cpuhp_bp_sync_alive(unsigned int cpu) { return 0; }
+static inline bool cpuhp_can_boot_ap(unsigned int cpu) { return true; }
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC_FULL */
+
 /* Serializes the updates to cpu_online_mask, cpu_present_mask */
 static DEFINE_MUTEX(cpu_add_remove_lock);
 bool cpuhp_tasks_frozen;
@@ -588,6 +767,9 @@ static int bringup_cpu(unsigned int cpu)
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
+	if (!cpuhp_can_boot_ap(cpu))
+		return -EAGAIN;
+
 	/*
 	 * Reset stale stack state from the last time this CPU was online.
 	 */
@@ -610,6 +792,10 @@ static int bringup_cpu(unsigned int cpu)
 	if (ret)
 		goto out_unlock;
 
+	ret = cpuhp_bp_sync_alive(cpu);
+	if (ret)
+		goto out_unlock;
+
 	ret = bringup_wait_for_ap_online(cpu);
 	if (ret)
 		goto out_unlock;
@@ -1113,6 +1299,8 @@ static int takedown_cpu(unsigned int cpu
 	/* This actually kills the CPU. */
 	__cpu_die(cpu);
 
+	cpuhp_bp_sync_dead(cpu);
+
 	tick_cleanup_dead_cpu(cpu);
 	rcutree_migrate_callbacks(cpu);
 	return 0;
@@ -1359,8 +1547,10 @@ void cpuhp_online_idle(enum cpuhp_state
 	if (state != CPUHP_AP_ONLINE_IDLE)
 		return;
 
+	cpuhp_ap_update_sync_state(SYNC_STATE_ONLINE);
+
 	/*
-	 * Unpart the stopper thread before we start the idle loop (and start
+	 * Unpark the stopper thread before we start the idle loop (and start
 	 * scheduling); this ensures the stopper task is always available.
 	 */
 	stop_machine_unpark(smp_processor_id());
@@ -2737,6 +2927,7 @@ void __init boot_cpu_hotplug_init(void)
 {
 #ifdef CONFIG_SMP
 	cpumask_set_cpu(smp_processor_id(), &cpus_booted_once_mask);
+	atomic_set(this_cpu_ptr(&cpuhp_state.ap_sync_state), SYNC_STATE_ONLINE);
 #endif
 	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
 	this_cpu_write(cpuhp_state.target, CPUHP_ONLINE);
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -326,6 +326,7 @@ void smpboot_unregister_percpu_thread(st
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
 
+#ifndef CONFIG_HOTPLUG_CORE_SYNC
 static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
 
 /*
@@ -488,3 +489,4 @@ bool cpu_report_death(void)
 }
 
 #endif /* #ifdef CONFIG_HOTPLUG_CPU */
+#endif /* !CONFIG_HOTPLUG_CORE_SYNC */



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:13:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:13:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533959.831119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa4d-0000B7-6K; Fri, 12 May 2023 21:13:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533959.831119; Fri, 12 May 2023 21:13:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa4d-0000B0-2o; Fri, 12 May 2023 21:13:11 +0000
Received: by outflank-mailman (input) for mailman id 533959;
 Fri, 12 May 2023 21:13:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0kdp=BB=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pxa4b-0000At-KC
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:13:09 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ca245d49-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:13:07 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 2367863658;
 Fri, 12 May 2023 21:13:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6B580C433D2;
 Fri, 12 May 2023 21:13:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca245d49-f109-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683925985;
	bh=Sj34y1UnMe+RdJ+d2LBaBf2d2ip7dGtT1ZKiNppIFOQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=n56X88njBufIX+d33OvAnz9LtQYd8cDaGAHCoscM3uoEmY1Ndn53CKeW8g6nt+DJP
	 DgvdodSZw9Paasj2pow+TxQ05txCX5hhy9rVNs2usgOfrhe+mRBy/nw9hW53UVM+m6
	 zLPgBAKe3vbivnJT5dC2ebwE7bJpSqd7qAQyjYQi5lxQp6n+bAQJEJ4SfTsvWIMisw
	 pZlz8c9VVw4uZRrOUX07QFB2fcmtPiH4HScrQdB50jcwG52YAILNoxGq5IshiV6kiA
	 gOKmwOGSzMhQk/8XtU4QYaODo9gcHCMgqw+6FxJGjsOYyEv1oYGjch1FJPHVomjI42
	 WWqdXucwQYsaw==
Date: Fri, 12 May 2023 14:12:57 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Olaf Hering <olaf@aepfle.de>
cc: Paolo Bonzini <pbonzini@redhat.com>, John Snow <jsnow@redhat.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, qemu-devel@nongnu.org, 
    qemu-block@nongnu.org, 
    =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [PATCH v2] piix: fix regression during unplug in Xen HVM domUs
In-Reply-To: <20230510094719.26fb79e5.olaf@aepfle.de>
Message-ID: <alpine.DEB.2.22.394.2305121411310.3748626@ubuntu-linux-20-04-desktop>
References: <20210317070046.17860-1-olaf@aepfle.de> <4441d32f-bd52-9408-cabc-146b59f0e4dc@redhat.com> <20210325121219.7b5daf76.olaf@aepfle.de> <dae251e1-f808-708e-902c-05cfcbbea9cf@redhat.com> <20230509225818.GA16290@aepfle.de>
 <20230510094719.26fb79e5.olaf@aepfle.de>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 10 May 2023, Olaf Hering wrote:
> Wed, 10 May 2023 00:58:27 +0200 Olaf Hering <olaf@aepfle.de>:
> 
> > In my debugging (with v8.0.0) it turned out the three pci_set_word
> > calls cause the domU to hang. In fact, it is just the last one:
> > 
> >    pci_set_byte(pci_conf + 0x20, 0x01);  /* BMIBA: 20-23h */
> > 
> > It changes the value from 0xc121 to 0x1.
> 
> If I disable just "pci_set_word(pci_conf + PCI_COMMAND, 0x0000);" it works as well.
> It changes the value from 0x5 to 0.
> 
> In general I feel it is wrong to fiddle with PCI from the host side.
> This is most likely not the intention of the Xen unplug protocol.
> I'm sure the guest does not expect such changes under the hood.
> It happens to work by luck with pvops kernels because their PCI discovery
> is done after the unplug.
> 
> So, what do we do here to get this off the table?

I don't have a concrete suggestion because I don't understand the root
cause of the issue. Looking back at Paolo's reply from 2021

https://marc.info/?l=xen-devel&m=161669099305992&w=2

I think he was right. We can either fix the root cause of the issue or
avoid calling qdev_reset_all on unplug. I am OK with either one.


From xen-devel-bounces@lists.xenproject.org Fri May 12 21:17:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:17:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533965.831136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa8s-0000rk-3R; Fri, 12 May 2023 21:17:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533965.831136; Fri, 12 May 2023 21:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa8r-0000rM-Rb; Fri, 12 May 2023 21:17:33 +0000
Received: by outflank-mailman (input) for mailman id 533965;
 Fri, 12 May 2023 21:17:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzJ-0004FP-MU
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:41 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 08144d87-f109-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08144d87-f109-11ed-b229-6b7b168915f2
Message-ID: <20230512205256.916055844@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925660;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=3WyGAeXCoINo+ZjbtqDSmG4Uv4XxBoW1eRT13EQq0fk=;
	b=gCuo784jqFHJl8ajBn8394w4OmHWarWHua689soLfocU0hu6yonNTwuMrKteuZS5UiIuEV
	qrcv15f4Q6TokbWN0YfdK3MSd6z8xiHfnEZzev7CF61wRptMWuiL7c2cva2fdafmy4x44i
	QLCrGnHJHlwNB/wrQX3E9sCIWLmnibPYc8f6Fvn0Mhtkx8DzedCkReA25Zy2fK9LabrFWb
	x1UPFk0hgEJqdWWpnRwgFxJ1i3v5p9suQyaN5MHQM1jUq9DQq3rPXVvibiH0XDHv/RMwo0
	mWl2Aksb+Mjpq0yv8/Ii5IfUB3vzQQo6B2UpFf/trJXJtP52QvDOQWDArAV10Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925660;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=3WyGAeXCoINo+ZjbtqDSmG4Uv4XxBoW1eRT13EQq0fk=;
	b=LWXqEciaP4I0I25ywyzxuHHtRRVAY1D7DccVqnOk2GJSZvUntz/BBxlff7vjipNfXjw8l7
	MDKU/BWHM78qyUDA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>,
 Palmer Dabbelt <palmer@rivosinc.com>
Subject: [patch V4 27/37] riscv: Switch to hotplug core state synchronization
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:40 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
---
 arch/riscv/Kconfig              |    1 +
 arch/riscv/include/asm/smp.h    |    2 +-
 arch/riscv/kernel/cpu-hotplug.c |   14 +++++++-------
 3 files changed, 9 insertions(+), 8 deletions(-)
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -122,6 +122,7 @@ config RISCV
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
--- a/arch/riscv/include/asm/smp.h
+++ b/arch/riscv/include/asm/smp.h
@@ -70,7 +70,7 @@ asmlinkage void smp_callin(void);
 
 #if defined CONFIG_HOTPLUG_CPU
 int __cpu_disable(void);
-void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 #endif /* CONFIG_HOTPLUG_CPU */
 
 #else
--- a/arch/riscv/kernel/cpu-hotplug.c
+++ b/arch/riscv/kernel/cpu-hotplug.c
@@ -8,6 +8,7 @@
 #include <linux/sched.h>
 #include <linux/err.h>
 #include <linux/irq.h>
+#include <linux/cpuhotplug.h>
 #include <linux/cpu.h>
 #include <linux/sched/hotplug.h>
 #include <asm/irq.h>
@@ -49,17 +50,15 @@ int __cpu_disable(void)
 	return ret;
 }
 
+#ifdef CONFIG_HOTPLUG_CPU
 /*
- * Called on the thread which is asking for a CPU to be shutdown.
+ * Called on the thread which is asking for a CPU to be shutdown, if the
+ * CPU reported dead to the hotplug core.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
 	int ret = 0;
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU %u: didn't die\n", cpu);
-		return;
-	}
 	pr_notice("CPU%u: off\n", cpu);
 
 	/* Verify from the firmware if the cpu is really stopped*/
@@ -76,9 +75,10 @@ void __noreturn arch_cpu_idle_dead(void)
 {
 	idle_task_exit();
 
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	cpu_ops[smp_processor_id()]->cpu_stop();
 	/* It should never reach here */
 	BUG();
 }
+#endif





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:17:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:17:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533963.831130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa8r-0000o0-OY; Fri, 12 May 2023 21:17:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533963.831130; Fri, 12 May 2023 21:17:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa8r-0000nt-K9; Fri, 12 May 2023 21:17:33 +0000
Received: by outflank-mailman (input) for mailman id 533963;
 Fri, 12 May 2023 21:17:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzE-0004F7-Ed
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:36 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 042fa99a-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 042fa99a-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205256.690926018@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925654;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/FbhQUaNPV/QmzMYEl0ZGQWeo+zsZNC/fHGpKaYgyVM=;
	b=sIkpRPZWtZDLRJsNbJqeMaIGrlSPPZQ54swrLTPGjSTFwGmKt9Oaa/S3e7+FMM/zwS3yh1
	zBi00At6yibSu1ow+h/o1wevr8Ct+nV3aQorHuUl5ADPJuwBO+bchPZ8ALQZ+kx6i0GW+w
	zqdpAbyLGX6msZM0R+So0xbvh7Y3sqfgTWEpUi4IiFsYvIytynrP4vP72VrzYLW0n/0gI5
	dcjuaB3bMiEaTjpeMR+ijjym0v0KwwT8U989zzsyAvMiMrnZ6zXBcVm77ZsVDRiZB9alDd
	vSvyAFgEvgFWYPzgBfw025kkJmJ87QGr2kj6vjn8as5Hgwu4BsQPx+xpR6SaPg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925654;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=/FbhQUaNPV/QmzMYEl0ZGQWeo+zsZNC/fHGpKaYgyVM=;
	b=dLO2baNTFk8RA1y+U+P6lN/kShhUhW/oiig8SXDmVflPM4GheqDBwaU3qN83KID4Lvii4k
	Tfhlc28wF/mJXYCQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 23/37] arm64: smp: Switch to hotplug core state synchronization
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:33 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/arm64/Kconfig           |    1 +
 arch/arm64/include/asm/smp.h |    2 +-
 arch/arm64/kernel/smp.c      |   14 +++++---------
 3 files changed, 7 insertions(+), 10 deletions(-)


--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -222,6 +222,7 @@ config ARM64
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
 	select KASAN_VMALLOC if KASAN
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -99,7 +99,7 @@ static inline void arch_send_wakeup_ipi_
 
 extern int __cpu_disable(void);
 
-extern void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 extern void __noreturn cpu_die(void);
 extern void __noreturn cpu_die_early(void);
 
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -332,17 +332,13 @@ static int op_cpu_kill(unsigned int cpu)
 }
 
 /*
- * called on the thread which is asking for a CPU to be shutdown -
- * waits until shutdown has completed, or it is timed out.
+ * Called on the thread which is asking for a CPU to be shutdown after the
+ * shutdown completed.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
 	int err;
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
 	pr_debug("CPU%u: shutdown\n", cpu);
 
 	/*
@@ -369,8 +365,8 @@ void __noreturn cpu_die(void)
 
 	local_daif_mask();
 
-	/* Tell __cpu_die() that this CPU is now safe to dispose of */
-	(void)cpu_report_death();
+	/* Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose of */
+	cpuhp_ap_report_dead();
 
 	/*
 	 * Actually shutdown the CPU. This must never fail. The specific hotplug





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:17:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:17:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533966.831142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa8s-0000zf-CG; Fri, 12 May 2023 21:17:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533966.831142; Fri, 12 May 2023 21:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa8s-0000xI-5A; Fri, 12 May 2023 21:17:34 +0000
Received: by outflank-mailman (input) for mailman id 533966;
 Fri, 12 May 2023 21:17:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzY-0004F7-IB
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:56 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0fdd00c1-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fdd00c1-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205257.355425551@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925674;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IP4bdlmLdqVc3LCZrNxGXTkJh6c/m5h1IVAGFT7rdmU=;
	b=r7gtP8cd7Oi46e4lE/ByHI+khRt47Q1tdhvFutLNb1nZ0EzY0luf/WbDQ0k8nKgYTLl4fZ
	6jVUGYKF/8lveenjXvvre5ndfySMPCdjlJpKvZns1rWXPEqcXr6eVN8NK8cZAbmnCQ6yQp
	Yx3aUoQDw2kxMWvLVB4RoPcVQxwMr1TEQt/JyflnlrUKkC5yB1WXFRYeLO2qGcavmQy/xb
	TWM6HpGmz8K6kbaRe7K69Mm4B+cOxkraoO55h4/F16N4d4HFqv7paqDclFhhm777QOq+fn
	+XmNnPyatRLfjMogFbBl6Kr8xcpvcs8sEUSklzYMFqy5aqxV2atkp8bBibVO7g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925674;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=IP4bdlmLdqVc3LCZrNxGXTkJh6c/m5h1IVAGFT7rdmU=;
	b=jvsTBFPAx2r/a9Hm2XoIkmPwImha2AedATS4Qf2wMfcTkLbpvnlZqm6V6xCv2m4a7aeYFX
	WwQng/2elbC/s/DA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch V4 35/37] x86/smpboot: Implement a bit spinlock to protect the
 realmode stack
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:53 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Parallel AP bringup requires that the APs can run fully in parallel through
the early startup code, including the real-mode trampoline.

To prepare for this, implement a bit-spinlock to serialize access to the
real-mode stack so that concurrently starting APs do not corrupt each
other's stacks while going through the real-mode startup code.
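The actual lock is the `lock btsl`/`pause` loop added to trampoline_64.S below. As a rough, hypothetical C11 model of the same protocol (the names `realmode_lock`/`realmode_unlock` are invented for illustration; the kernel's real mode code is assembly, not C):

```c
#include <stdatomic.h>

/* Bit 0 of a shared word serializes access to the shared real-mode
 * stack, mirroring the `lock btsl $0, tr_lock` loop in the patch.
 */
static atomic_uint tr_lock;

static void realmode_lock(void)
{
	/* Spin until this CPU is the one that flips bit 0 from 0 to 1
	 * (lock btsl sets the bit and returns the old value in CF). */
	while (atomic_fetch_or_explicit(&tr_lock, 1u,
					memory_order_acquire) & 1u)
		; /* pause / cpu_relax() would go here */
}

static void realmode_unlock(void)
{
	/* The patch releases with a plain `movl $0, (%rax)` once the AP
	 * runs on its own stack; a release store models that. */
	atomic_store_explicit(&tr_lock, 0u, memory_order_release);
}
```

Note the unlock happens much later, in head_64.S, once the AP has switched to its own kernel stack and no longer needs the shared real-mode stack.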

Co-developed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
V4: Simplify the lock implementation - Peter Z.
---
 arch/x86/include/asm/realmode.h      |    3 +++
 arch/x86/kernel/head_64.S            |   12 ++++++++++++
 arch/x86/realmode/init.c             |    3 +++
 arch/x86/realmode/rm/trampoline_64.S |   23 ++++++++++++++++++-----
 4 files changed, 36 insertions(+), 5 deletions(-)
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -52,6 +52,7 @@ struct trampoline_header {
 	u64 efer;
 	u32 cr4;
 	u32 flags;
+	u32 lock;
 #endif
 };
 
@@ -64,6 +65,8 @@ extern unsigned long initial_stack;
 extern unsigned long initial_vc_handler;
 #endif
 
+extern u32 *trampoline_lock;
+
 extern unsigned char real_mode_blob[];
 extern unsigned char real_mode_relocs[];
 
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -252,6 +252,16 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	movq	TASK_threadsp(%rax), %rsp
 
 	/*
+	 * Now that this CPU is running on its own stack, drop the realmode
+	 * protection. For the boot CPU the pointer is NULL!
+	 */
+	movq	trampoline_lock(%rip), %rax
+	testq	%rax, %rax
+	jz	.Lsetup_gdt
+	movl	$0, (%rax)
+
+.Lsetup_gdt:
+	/*
 	 * We must switch to a new descriptor in kernel space for the GDT
 	 * because soon the kernel won't have access anymore to the userspace
 	 * addresses where we're currently running on. We have to do that here
@@ -433,6 +443,8 @@ SYM_DATA(initial_code,	.quad x86_64_star
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 SYM_DATA(initial_vc_handler,	.quad handle_vc_boot_ghcb)
 #endif
+
+SYM_DATA(trampoline_lock, .quad 0);
 	__FINITDATA
 
 	__INIT
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -154,6 +154,9 @@ static void __init setup_real_mode(void)
 
 	trampoline_header->flags = 0;
 
+	trampoline_lock = &trampoline_header->lock;
+	*trampoline_lock = 0;
+
 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
 
 	/* Map the real mode stub as virtual == physical */
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -37,6 +37,20 @@
 	.text
 	.code16
 
+.macro LOAD_REALMODE_ESP
+	/*
+	 * Make sure only one CPU fiddles with the realmode stack
+	*/
+.Llock_rm\@:
+        lock btsl       $0, tr_lock
+        jnc             2f
+        pause
+        jmp             .Llock_rm\@
+2:
+	# Setup stack
+	movl	$rm_stack_end, %esp
+.endm
+
 	.balign	PAGE_SIZE
 SYM_CODE_START(trampoline_start)
 	cli			# We should be safe anyway
@@ -49,8 +63,7 @@ SYM_CODE_START(trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 
 	call	verify_cpu		# Verify the cpu supports long mode
 	testl   %eax, %eax		# Check for return code
@@ -93,8 +106,7 @@ SYM_CODE_START(sev_es_trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	# Setup stack
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 
 	jmp	.Lswitch_to_protected
 SYM_CODE_END(sev_es_trampoline_start)
@@ -177,7 +189,7 @@ SYM_CODE_START(pa_trampoline_compat)
 	 * In compatibility mode.  Prep ESP and DX for startup_32, then disable
 	 * paging and complete the switch to legacy 32-bit mode.
 	 */
-	movl	$rm_stack_end, %esp
+	LOAD_REALMODE_ESP
 	movw	$__KERNEL_DS, %dx
 
 	movl	$(CR0_STATE & ~X86_CR0_PG), %eax
@@ -241,6 +253,7 @@ SYM_DATA_START(trampoline_header)
 	SYM_DATA(tr_efer,		.space 8)
 	SYM_DATA(tr_cr4,		.space 4)
 	SYM_DATA(tr_flags,		.space 4)
+	SYM_DATA(tr_lock,		.space 4)
 SYM_DATA_END(trampoline_header)
 
 #include "trampoline_common.S"



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:17:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:17:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533977.831159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa94-0001wF-Qd; Fri, 12 May 2023 21:17:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533977.831159; Fri, 12 May 2023 21:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa94-0001w1-MM; Fri, 12 May 2023 21:17:46 +0000
Received: by outflank-mailman (input) for mailman id 533977;
 Fri, 12 May 2023 21:17:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzZ-0004FP-Uk
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:57 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10d3fdfc-f109-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10d3fdfc-f109-11ed-b229-6b7b168915f2
Message-ID: <20230512205257.411554373@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925675;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=HkbpL+B1Y6r1JdpJDFTNU1fkEbASEBEddrD3eS3vI9k=;
	b=qLhT9a5D6K6oU7T99IYHlQkEvakwR6lIzOFnJv/yHhn0sXuNpKkFhbOEKcvGXbBBQxIQWP
	Zf3GuVmh5j6E3zElkNNJjQKqASeg/zWDd8fBpcu5LZISPQ7gZXK+2XzuY0b7uXFSzVdssS
	ubfYflaKL2WZjOF6uBlRON47j5VgCv6b0jI7mNVtPxb1SGRa/FMQcvzdBiYEyvlbYIQ8tv
	16yQasf7F+i15KtF6TRUm+d4L8HwozBtts+ZN7Ind33VzLLSvWtDvzzBY1m5FdkgHeB5+5
	8FI3U3zhrRrAgsdFhRlZ5sy7Xx+ZEuxItTCmvBR1r6Ei6I0X7x5XCS9e2r5S2g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925675;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=HkbpL+B1Y6r1JdpJDFTNU1fkEbASEBEddrD3eS3vI9k=;
	b=3w7vG4iZWK+8aTvHpH0DZweTvPHFb4xkzDRr36AwYtst0DBORILo5kXlVqsA7w+2XpnmoK
	IzhkrrZIhSP2ASDA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject:
 [patch V4 36/37] x86/smpboot: Support parallel startup of secondary CPUs
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:55 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

In parallel startup mode the APs are kicked alive by the control CPU in
quick succession and run through the early startup code in parallel. The
real-mode startup code is already serialized with a bit-spinlock to
protect the real-mode stack.

In parallel startup mode the smpboot_control variable obviously cannot
contain the Linux CPU number, so the APs have to determine their Linux CPU
number on their own. This is required to find the CPU's per-CPU offset in
order to find the idle task stack and other per-CPU data.

To achieve this, export the cpuid_to_apicid[] array so that each AP can
find its own CPU number by searching therein based on its APIC ID.

Introduce a flag in the top bits of smpboot_control which indicates that
the AP should find its CPU number by reading the APIC ID from the APIC.

This is required because CPUID based APIC ID retrieval can only provide the
initial APIC ID, which might have been overruled by the firmware. Some AMD
APUs come up with APIC ID = initial APIC ID + 0x10, so the APIC ID to CPU
number lookup would fail miserably if based on CPUID. Also, virtualization
can make its own APIC ID assignments. The only requirement is that the
APIC IDs are consistent with the ACPI/MADT table.

For the boot CPU, or in case parallel bringup is disabled, the control bits
are empty and the CPU number is directly available in bits 0-23 of
smpboot_control.
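The lookup described above is implemented in assembly in head_64.S (the `.Lfind_cpunr` loop below); a hypothetical C sketch of the same scan, with the function name and signature invented for illustration:

```c
#define STARTUP_READ_APICID	0x80000000u
#define STARTUP_PARALLEL_MASK	0xFF000000u

/* Given the APIC ID read from the hardware, scan cpuid_to_apicid[]
 * for the matching Linux CPU number. Returns -1 when no entry
 * matches; in the real assembly that path drops the trampoline
 * lock and halts the AP.
 */
static int find_cpunr(const int *cpuid_to_apicid, int nr_cpu_ids,
		      unsigned int apicid)
{
	for (int cpu = 0; cpu < nr_cpu_ids; cpu++) {
		if (cpuid_to_apicid[cpu] == (int)apicid)
			return cpu;
	}
	return -1;
}
```

The non-parallel path simply masks off the control bits instead: `cpu = smpboot_control & ~STARTUP_PARALLEL_MASK`.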

[ tglx: Initial proof of concept patch with bitlock and APIC ID lookup ]
[ dwmw2: Rework and testing, commit message, CPUID 0x1 and CPU0 support ]
[ seanc: Fix stray override of initial_gs in common_cpu_up() ]
[ Oleksandr Natalenko: reported suspend/resume issue fixed in
  x86_acpi_suspend_lowlevel ]
[ tglx: Make it read the APIC ID from the APIC instead of using CPUID,
  	split the bitlock part out ]

Co-developed-by: Thomas Gleixner <tglx@linutronix.de>
Co-developed-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
V4: Remove the lock prefix in the error path - Peter Z.
---
 arch/x86/include/asm/apic.h    |    2 +
 arch/x86/include/asm/apicdef.h |    5 ++-
 arch/x86/include/asm/smp.h     |    6 ++++
 arch/x86/kernel/acpi/sleep.c   |    9 +++++-
 arch/x86/kernel/apic/apic.c    |    2 -
 arch/x86/kernel/head_64.S      |   61 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/smpboot.c      |    2 -
 7 files changed, 83 insertions(+), 4 deletions(-)
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -55,6 +55,8 @@ extern int local_apic_timer_c2_ok;
 extern int disable_apic;
 extern unsigned int lapic_timer_period;
 
+extern int cpuid_to_apicid[];
+
 extern enum apic_intr_mode_id apic_intr_mode;
 enum apic_intr_mode_id {
 	APIC_PIC,
--- a/arch/x86/include/asm/apicdef.h
+++ b/arch/x86/include/asm/apicdef.h
@@ -138,7 +138,8 @@
 #define		APIC_EILVT_MASKED	(1 << 16)
 
 #define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
-#define APIC_BASE_MSR	0x800
+#define APIC_BASE_MSR		0x800
+#define APIC_X2APIC_ID_MSR	0x802
 #define XAPIC_ENABLE	(1UL << 11)
 #define X2APIC_ENABLE	(1UL << 10)
 
@@ -162,6 +163,7 @@
 #define APIC_CPUID(apicid)	((apicid) & XAPIC_DEST_CPUS_MASK)
 #define NUM_APIC_CLUSTERS	((BAD_APICID + 1) >> XAPIC_DEST_CPUS_SHIFT)
 
+#ifndef __ASSEMBLY__
 /*
  * the local APIC register structure, memory mapped. Not terribly well
  * tested, but we might eventually use this one in the future - the
@@ -435,4 +437,5 @@ enum apic_delivery_modes {
 	APIC_DELIVERY_MODE_EXTINT	= 7,
 };
 
+#endif /* !__ASSEMBLY__ */
 #endif /* _ASM_X86_APICDEF_H */
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -200,4 +200,10 @@ extern unsigned long apic_mmio_base;
 
 #endif /* !__ASSEMBLY__ */
 
+/* Control bits for startup_64 */
+#define STARTUP_READ_APICID	0x80000000
+
+/* Top 8 bits are reserved for control */
+#define STARTUP_PARALLEL_MASK	0xFF000000
+
 #endif /* _ASM_X86_SMP_H */
--- a/arch/x86/kernel/acpi/sleep.c
+++ b/arch/x86/kernel/acpi/sleep.c
@@ -16,6 +16,7 @@
 #include <asm/cacheflush.h>
 #include <asm/realmode.h>
 #include <asm/hypervisor.h>
+#include <asm/smp.h>
 
 #include <linux/ftrace.h>
 #include "../../realmode/rm/wakeup.h"
@@ -127,7 +128,13 @@ int x86_acpi_suspend_lowlevel(void)
 	 * value is in the actual %rsp register.
 	 */
 	current->thread.sp = (unsigned long)temp_stack + sizeof(temp_stack);
-	smpboot_control = smp_processor_id();
+	/*
+	 * Ensure the CPU knows which one it is when it comes back, if
+	 * it isn't in parallel mode and expected to work that out for
+	 * itself.
+	 */
+	if (!(smpboot_control & STARTUP_PARALLEL_MASK))
+		smpboot_control = smp_processor_id();
 #endif
 	initial_code = (unsigned long)wakeup_long64;
 	saved_magic = 0x123456789abcdef0L;
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2380,7 +2380,7 @@ static int nr_logical_cpuids = 1;
 /*
  * Used to store mapping between logical CPU IDs and APIC IDs.
  */
-static int cpuid_to_apicid[] = {
+int cpuid_to_apicid[] = {
 	[0 ... NR_CPUS - 1] = -1,
 };
 
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -24,7 +24,9 @@
 #include "../entry/calling.h"
 #include <asm/export.h>
 #include <asm/nospec-branch.h>
+#include <asm/apicdef.h>
 #include <asm/fixmap.h>
+#include <asm/smp.h>
 
 /*
  * We are not able to switch in one step to the final KERNEL ADDRESS SPACE
@@ -234,8 +236,67 @@ SYM_INNER_LABEL(secondary_startup_64_no_
 	ANNOTATE_NOENDBR // above
 
 #ifdef CONFIG_SMP
+	/*
+	 * For parallel boot, the APIC ID is read from the APIC, and then
+	 * used to look up the CPU number.  For booting a single CPU, the
+	 * CPU number is encoded in smpboot_control.
+	 *
+	 * Bit 31	STARTUP_READ_APICID (Read APICID from APIC)
+	 * Bit 0-23	CPU# if STARTUP_xx flags are not set
+	 */
 	movl	smpboot_control(%rip), %ecx
+	testl	$STARTUP_READ_APICID, %ecx
+	jnz	.Lread_apicid
+	/*
+	 * No control bit set, single CPU bringup. CPU number is provided
+	 * in bit 0-23. This is also the boot CPU case (CPU number 0).
+	 */
+	andl	$(~STARTUP_PARALLEL_MASK), %ecx
+	jmp	.Lsetup_cpu
+
+.Lread_apicid:
+	/* Check whether X2APIC mode is already enabled */
+	mov	$MSR_IA32_APICBASE, %ecx
+	rdmsr
+	testl	$X2APIC_ENABLE, %eax
+	jnz	.Lread_apicid_msr
+
+	/* Read the APIC ID from the fix-mapped MMIO space. */
+	movq	apic_mmio_base(%rip), %rcx
+	addq	$APIC_ID, %rcx
+	movl	(%rcx), %eax
+	shr	$24, %eax
+	jmp	.Llookup_AP
+
+.Lread_apicid_msr:
+	mov	$APIC_X2APIC_ID_MSR, %ecx
+	rdmsr
+
+.Llookup_AP:
+	/* EAX contains the APIC ID of the current CPU */
+	xorq	%rcx, %rcx
+	leaq	cpuid_to_apicid(%rip), %rbx
+
+.Lfind_cpunr:
+	cmpl	(%rbx,%rcx,4), %eax
+	jz	.Lsetup_cpu
+	inc	%ecx
+#ifdef CONFIG_FORCE_NR_CPUS
+	cmpl	$NR_CPUS, %ecx
+#else
+	cmpl	nr_cpu_ids(%rip), %ecx
+#endif
+	jb	.Lfind_cpunr
+
+	/*  APIC ID not found in the table. Drop the trampoline lock and bail. */
+	movq	trampoline_lock(%rip), %rax
+	movl	$0, (%rax)
+
+1:	cli
+	hlt
+	jmp	1b
 
+.Lsetup_cpu:
 	/* Get the per cpu offset for the given CPU# which is in ECX */
 	movq	__per_cpu_offset(,%rcx,8), %rdx
 #else
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -996,7 +996,7 @@ static int do_boot_cpu(int apicid, int c
 	if (IS_ENABLED(CONFIG_X86_32)) {
 		early_gdt_descr.address = (unsigned long)get_cpu_gdt_rw(cpu);
 		initial_stack  = idle->thread.sp;
-	} else {
+	} else if (!(smpboot_control & STARTUP_PARALLEL_MASK)) {
 		smpboot_control = cpu;
 	}
 



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:17:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:17:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533978.831164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa95-0001zo-5g; Fri, 12 May 2023 21:17:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533978.831164; Fri, 12 May 2023 21:17:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa94-0001yq-WC; Fri, 12 May 2023 21:17:47 +0000
Received: by outflank-mailman (input) for mailman id 533978;
 Fri, 12 May 2023 21:17:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzP-0004FP-2c
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:47 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0afe24b2-f109-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0afe24b2-f109-11ed-b229-6b7b168915f2
Message-ID: <20230512205257.080801387@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925665;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=SxTJFZiyF2WGH/LZ05mC0rxxeGaROu74J7Cy/GPHn70=;
	b=K/FezNvBOJNnPeMjn0jwZGZLrdjicSHW686SvUJHgCP7i5tNRPZ0+TcBzpR3zsHIAjx7el
	oZD3T4LSHKe96Gl8Vn9tHLhiugtu0Peaz0vQATG5sk1xtglaKPxkN9q6sYgkMinNckV8/o
	10M+IlKGzHanW2pN6YNi9PrY5sZrLsfysPv/Rf/Y1Br/eUV1uOi0RLQrCO075ABqwfNzUn
	Qw3IiZGS0j3l+s+ARRAaaJktiUQ2fdrXWEZmaXcucSO1rkDizolMPZOl5m8lUPUZsNHIlu
	X7qCk8reaCGdfX2XcY6t80L+Yvj3odTymg6hokHUnI0YYCr7zk0zNLVIAAqpWw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925665;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=SxTJFZiyF2WGH/LZ05mC0rxxeGaROu74J7Cy/GPHn70=;
	b=ZlpauXamVislT6/DN2ZOHNylyjhLP1HBQpResarL3DaTxfAUdyt/zGjPUsAQBjx7GbeYCQ
	FfQ51EH8ZACSPgDw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 30/37] cpu/hotplug: Provide a split up CPUHP_BRINGUP mechanism
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:45 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The bringup logic of a to-be-onlined CPU consists of several parts, which
are currently treated as a single hotplug state:

  1) Control CPU issues the wake-up

  2) To be onlined CPU starts up, does the minimal initialization,
     reports to be alive and waits for release into the complete bring-up.

  3) Control CPU waits for the alive report and releases the upcoming CPU
     for the complete bring-up.

Allow splitting this into two states:

  1) Control CPU issues the wake-up

     After that the to-be-onlined CPU starts up, does the minimal
     initialization, reports to be alive and waits for release into the
     full bring-up. As this can run after the control CPU has dropped the
     hotplug locks, the code executed on the AP before it reports being
     alive has to be carefully audited not to violate any of the hotplug
     constraints, especially not modifying any of the various cpumasks.

     This is really only meant to avoid waiting for the AP to react to the
     wake-up. Of course an architecture can, with care, move strictly
     CPU-related setup functionality, e.g. microcode loading, before the
     synchronization point to save further pointless waiting time.

  2) Control CPU waits for the alive report and releases the upcoming CPU
     for the complete bring-up.

This allows the two states to be split up so that all to-be-onlined CPUs
are first run up to state #1 on the control CPU, and state #2 is run at a
later point. This spares some of the latency of the fully serialized
per-CPU bringup by avoiding the per-CPU wakeup/wait serialization. The
assumption is that the first AP is already waiting when the last AP has
been woken up. This obviously depends on the hardware latencies, and
depending on the timings this might still not completely eliminate all
wait scenarios.

This split is just a preparatory step for enabling the parallel bringup
later. The boot time bringup is still fully serialized. It has a separate
config switch so that architectures which want to support parallel bringup
can test the split of the CPUHP_BRINGUP step separately.

To enable this, the architecture must support the CPU hotplug core sync
mechanism and must be audited to ensure that there are no implicit hotplug
state dependencies which require a fully serialized bringup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/Kconfig               |    4 ++
 include/linux/cpuhotplug.h |    4 ++
 kernel/cpu.c               |   70 +++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 76 insertions(+), 2 deletions(-)
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -49,6 +49,10 @@ config HOTPLUG_CORE_SYNC_FULL
 	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select HOTPLUG_CORE_SYNC
 
+config HOTPLUG_SPLIT_STARTUP
+	bool
+	select HOTPLUG_CORE_SYNC_FULL
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -133,6 +133,7 @@ enum cpuhp_state {
 	CPUHP_MIPS_SOC_PREPARE,
 	CPUHP_BP_PREPARE_DYN,
 	CPUHP_BP_PREPARE_DYN_END		= CPUHP_BP_PREPARE_DYN + 20,
+	CPUHP_BP_KICK_AP,
 	CPUHP_BRINGUP_CPU,
 
 	/*
@@ -517,9 +518,12 @@ void cpuhp_online_idle(enum cpuhp_state
 static inline void cpuhp_online_idle(enum cpuhp_state state) { }
 #endif
 
+struct task_struct;
+
 void cpuhp_ap_sync_alive(void);
 void arch_cpuhp_sync_state_poll(void);
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
+int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle);
 
 #ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
 void cpuhp_ap_report_dead(void);
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -761,6 +761,47 @@ static int bringup_wait_for_ap_online(un
 	return 0;
 }
 
+#ifdef CONFIG_HOTPLUG_SPLIT_STARTUP
+static int cpuhp_kick_ap_alive(unsigned int cpu)
+{
+	if (!cpuhp_can_boot_ap(cpu))
+		return -EAGAIN;
+
+	return arch_cpuhp_kick_ap_alive(cpu, idle_thread_get(cpu));
+}
+
+static int cpuhp_bringup_ap(unsigned int cpu)
+{
+	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+	int ret;
+
+	/*
+	 * Some architectures have to walk the irq descriptors to
+	 * setup the vector space for the cpu which comes online.
+	 * Prevent irq alloc/free across the bringup.
+	 */
+	irq_lock_sparse();
+
+	ret = cpuhp_bp_sync_alive(cpu);
+	if (ret)
+		goto out_unlock;
+
+	ret = bringup_wait_for_ap_online(cpu);
+	if (ret)
+		goto out_unlock;
+
+	irq_unlock_sparse();
+
+	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+		return 0;
+
+	return cpuhp_kick_ap(cpu, st, st->target);
+
+out_unlock:
+	irq_unlock_sparse();
+	return ret;
+}
+#else
 static int bringup_cpu(unsigned int cpu)
 {
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
@@ -781,7 +822,6 @@ static int bringup_cpu(unsigned int cpu)
 	 */
 	irq_lock_sparse();
 
-	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
 	if (ret)
 		goto out_unlock;
@@ -805,6 +845,7 @@ static int bringup_cpu(unsigned int cpu)
 	irq_unlock_sparse();
 	return ret;
 }
+#endif
 
 static int finish_cpu(unsigned int cpu)
 {
@@ -1944,13 +1985,38 @@ static struct cpuhp_step cpuhp_hp_states
 		.startup.single		= timers_prepare_cpu,
 		.teardown.single	= timers_dead_cpu,
 	},
-	/* Kicks the plugged cpu into life */
+
+#ifdef CONFIG_HOTPLUG_SPLIT_STARTUP
+	/*
+	 * Kicks the AP alive. AP will wait in cpuhp_ap_sync_alive() until
+	 * the next step will release it.
+	 */
+	[CPUHP_BP_KICK_AP] = {
+		.name			= "cpu:kick_ap",
+		.startup.single		= cpuhp_kick_ap_alive,
+	},
+
+	/*
+	 * Waits for the AP to reach cpuhp_ap_sync_alive() and then
+	 * releases it for the complete bringup.
+	 */
+	[CPUHP_BRINGUP_CPU] = {
+		.name			= "cpu:bringup",
+		.startup.single		= cpuhp_bringup_ap,
+		.teardown.single	= finish_cpu,
+		.cant_stop		= true,
+	},
+#else
+	/*
+	 * All-in-one CPU bringup state which includes the kick alive.
+	 */
 	[CPUHP_BRINGUP_CPU] = {
 		.name			= "cpu:bringup",
 		.startup.single		= bringup_cpu,
 		.teardown.single	= finish_cpu,
 		.cant_stop		= true,
 	},
+#endif
 	/* Final state before CPU kills itself */
 	[CPUHP_AP_IDLE_DEAD] = {
 		.name			= "idle:dead",



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:17:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:17:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533980.831170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa95-000268-GR; Fri, 12 May 2023 21:17:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533980.831170; Fri, 12 May 2023 21:17:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa95-00023H-9F; Fri, 12 May 2023 21:17:47 +0000
Received: by outflank-mailman (input) for mailman id 533980;
 Fri, 12 May 2023 21:17:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzH-0004F7-OI
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:39 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 061b2b10-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 061b2b10-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205256.803238859@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925657;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=rAPxMfSCDqj/Bjl0E5i43Rztw8XcTG6Vczquydkj10k=;
	b=04qk/ZlFjsriOLCzUaO7G07gFfmf9cSDBVT9NtU+7tbOW/QWdBVDcKShT6npiFQr/WoEqd
	MWM0qCFWPxUFSHYUrnMv9aACc/F3hdH3zfjikD8cHjZGdvidCoMb/gZfrjULtp7/kLfW7o
	CJ3O88hhpiiQ/Ujm8sP+z35M3A8XzKzmV6u3fco0BDP/rCOeVluL2IgmeQrB1WxI+dCQ43
	38SrSq1kWMlzFo7HbfS1n+laBkEfc8Mx7KbtB2PLFCmjg/cgyNhU10VzWgLsswr/hyfi1r
	pEUYhkg8h2ej/QnNoIbv74MAldzPPnu0cIr7fw1oqsjpqV/mRP7E7ugEdZrzkg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925657;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=rAPxMfSCDqj/Bjl0E5i43Rztw8XcTG6Vczquydkj10k=;
	b=r1jGn7arz7wULDV9xkDaSal93zGzOXolBlXuS8ZVTApcC3PxDcswB0dG43ZqLydHIXZmQH
	Dpdh36zUoxzN3dDg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 25/37] MIPS: SMP_CPS: Switch to hotplug core state synchronization
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:37 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. This unfortunately requires adding dead reporting to the non-CPS
platforms, as CPS is the only user, but it allows an overall consolidation
of this functionality.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/mips/Kconfig               |    1 +
 arch/mips/cavium-octeon/smp.c   |    1 +
 arch/mips/include/asm/smp-ops.h |    1 +
 arch/mips/kernel/smp-bmips.c    |    1 +
 arch/mips/kernel/smp-cps.c      |   14 +++++---------
 arch/mips/kernel/smp.c          |    8 ++++++++
 arch/mips/loongson64/smp.c      |    1 +
 7 files changed, 18 insertions(+), 9 deletions(-)


--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2285,6 +2285,7 @@ config MIPS_CPS
 	select MIPS_CM
 	select MIPS_CPS_PM if HOTPLUG_CPU
 	select SMP
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select SYNC_R4K if (CEVT_R4K || CSRC_R4K)
 	select SYS_SUPPORTS_HOTPLUG_CPU
 	select SYS_SUPPORTS_SCHED_SMT if CPU_MIPSR6
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -345,6 +345,7 @@ void play_dead(void)
 	int cpu = cpu_number_map(cvmx_get_core_num());
 
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 	octeon_processor_boot = 0xff;
 	per_cpu(cpu_state, cpu) = CPU_DEAD;
 
--- a/arch/mips/include/asm/smp-ops.h
+++ b/arch/mips/include/asm/smp-ops.h
@@ -33,6 +33,7 @@ struct plat_smp_ops {
 #ifdef CONFIG_HOTPLUG_CPU
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
+	void (*cleanup_dead_cpu)(unsigned cpu);
 #endif
 #ifdef CONFIG_KEXEC
 	void (*kexec_nonboot_cpu)(void);
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -392,6 +392,7 @@ static void bmips_cpu_die(unsigned int c
 void __ref play_dead(void)
 {
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 
 	/* flush data cache */
 	_dma_cache_wback_inv(0, ~0);
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -503,8 +503,7 @@ void play_dead(void)
 		}
 	}
 
-	/* This CPU has chosen its way out */
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	cps_shutdown_this_cpu(cpu_death);
 
@@ -527,7 +526,9 @@ static void wait_for_sibling_halt(void *
 	} while (!(halted & TCHALT_H));
 }
 
-static void cps_cpu_die(unsigned int cpu)
+static void cps_cpu_die(unsigned int cpu) { }
+
+static void cps_cleanup_dead_cpu(unsigned cpu)
 {
 	unsigned core = cpu_core(&cpu_data[cpu]);
 	unsigned int vpe_id = cpu_vpe_id(&cpu_data[cpu]);
@@ -535,12 +536,6 @@ static void cps_cpu_die(unsigned int cpu
 	unsigned stat;
 	int err;
 
-	/* Wait for the cpu to choose its way out */
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU%u: didn't offline\n", cpu);
-		return;
-	}
-
 	/*
 	 * Now wait for the CPU to actually offline. Without doing this that
 	 * offlining may race with one or more of:
@@ -624,6 +619,7 @@ static const struct plat_smp_ops cps_smp
 #ifdef CONFIG_HOTPLUG_CPU
 	.cpu_disable		= cps_cpu_disable,
 	.cpu_die		= cps_cpu_die,
+	.cleanup_dead_cpu	= cps_cleanup_dead_cpu,
 #endif
 #ifdef CONFIG_KEXEC
 	.kexec_nonboot_cpu	= cps_kexec_nonboot_cpu,
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -690,6 +690,14 @@ void flush_tlb_one(unsigned long vaddr)
 EXPORT_SYMBOL(flush_tlb_page);
 EXPORT_SYMBOL(flush_tlb_one);
 
+#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
+	if (mp_ops->cleanup_dead_cpu)
+		mp_ops->cleanup_dead_cpu(cpu);
+}
+#endif
+
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 
 static void tick_broadcast_callee(void *info)
--- a/arch/mips/loongson64/smp.c
+++ b/arch/mips/loongson64/smp.c
@@ -775,6 +775,7 @@ void play_dead(void)
 	void (*play_dead_at_ckseg1)(int *);
 
 	idle_task_exit();
+	cpuhp_ap_report_dead();
 
 	prid_imp = read_c0_prid() & PRID_IMP_MASK;
 	prid_rev = read_c0_prid() & PRID_REV_MASK;



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:18:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.533984.831188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9C-000364-0Z; Fri, 12 May 2023 21:17:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 533984.831188; Fri, 12 May 2023 21:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9B-00035g-Rj; Fri, 12 May 2023 21:17:53 +0000
Received: by outflank-mailman (input) for mailman id 533984;
 Fri, 12 May 2023 21:17:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzb-0004F7-Iq
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:59 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 11bb9c99-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11bb9c99-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205257.467571745@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925677;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=09Gybl6gLYxqPWfBolvGD2u6nN/UgRkwP/RVYSeb51U=;
	b=lkhHN5ucCzBQq4wHtTdIuPxWfT+KaR8us2Fnqu6Cr5tOLNAYFJwbAjX6Z3ePYw7xBU5Rf+
	wiXaKi5wbkSJ0yiBL+vVDSE9lfXClJ13wICCfbrPv8NwRfgYgJmKsCHdwkitagDFT4Sy53
	3Mn9Rphq4Lwr1hRrlQYSh8G4siyYtpt76AklUE92TaBdwOhGU7rVMLqFcMcy15TW5md2c+
	3IsfJN76Y4Qt83mJf/v41gRvRUGzZI3ou8Af+vq5cz1lXYtnQvn0+VVBR6/3GVBOS0fRX9
	A3vvjzTjc3WS06TbYvPMJ2fPNMriQpu62Vf8hta3kuH4x12rVzRveCh1dJ+U4w==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925677;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=09Gybl6gLYxqPWfBolvGD2u6nN/UgRkwP/RVYSeb51U=;
	b=gsCVGrEF63u0kv3H/7VlEi7kjZ9EJYo14dXRVQK78ZLNGfUDOAVfm+otcHx6OPKkcYxyVn
	b0t3Yyn3VI1q72DQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 37/37] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:56 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Implement the validation function which tells the core code whether
parallel bringup is possible.

The only condition for now is that the kernel does not run in an encrypted
guest, as those trap the RDMSR via #VC, which cannot be handled at that
point in early startup.

There was an earlier variant for AMD-SEV which used the GHCB protocol for
retrieving the APIC ID via CPUID, but there is no guarantee that the
initial APIC ID in CPUID is the same as the real APIC ID. There is no
enforcement from the secure firmware and the hypervisor can assign APIC IDs
as it sees fit as long as the ACPI/MADT table is consistent with that
assignment.

Unfortunately there is no RDMSR GHCB protocol at the moment, so enabling
AMD-SEV guests for parallel startup needs some more thought.

Intel-TDX provides a secure RDMSR hypercall, but supporting that is outside
the scope of this change.

Fix up announce_cpu(), as e.g. on Hyper-V CPU1 is the secondary sibling of
CPU0, which makes the @cpu == 1 logic in announce_cpu() fall apart.

[ mikelley: Reported the announce_cpu() fallout ]

Originally-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
V2: Fixup announce_cpu() - Michael Kelley
V3: Fixup announce_cpu() for real - Michael Kelley
---
 arch/x86/Kconfig             |    3 -
 arch/x86/kernel/cpu/common.c |    6 --
 arch/x86/kernel/smpboot.c    |   87 +++++++++++++++++++++++++++++++++++--------
 3 files changed, 75 insertions(+), 21 deletions(-)
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,8 +274,9 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HOTPLUG_PARALLEL			if SMP && X86_64
 	select HOTPLUG_SMT			if SMP
-	select HOTPLUG_SPLIT_STARTUP		if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP && X86_32
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2128,11 +2128,7 @@ static inline void setup_getcpu(int cpu)
 }
 
 #ifdef CONFIG_X86_64
-static inline void ucode_cpu_init(int cpu)
-{
-	if (cpu)
-		load_ucode_ap();
-}
+static inline void ucode_cpu_init(int cpu) { }
 
 static inline void tss_setup_ist(struct tss_struct *tss)
 {
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -58,6 +58,7 @@
 #include <linux/overflow.h>
 #include <linux/stackprotector.h>
 #include <linux/cpuhotplug.h>
+#include <linux/mc146818rtc.h>
 
 #include <asm/acpi.h>
 #include <asm/cacheinfo.h>
@@ -75,7 +76,7 @@
 #include <asm/fpu/api.h>
 #include <asm/setup.h>
 #include <asm/uv/uv.h>
-#include <linux/mc146818rtc.h>
+#include <asm/microcode.h>
 #include <asm/i8259.h>
 #include <asm/misc.h>
 #include <asm/qspinlock.h>
@@ -128,7 +129,6 @@ int arch_update_cpu_topology(void)
 	return retval;
 }
 
-
 static unsigned int smpboot_warm_reset_vector_count;
 
 static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
@@ -226,16 +226,43 @@ static void notrace start_secondary(void
 	 */
 	cr4_init();
 
-#ifdef CONFIG_X86_32
-	/* switch away from the initial page table */
-	load_cr3(swapper_pg_dir);
-	__flush_tlb_all();
-#endif
+	/*
+	 * 32-bit specific. 64-bit reaches this code with the correct page
+	 * table established. Yet another historical divergence.
+	 */
+	if (IS_ENABLED(CONFIG_X86_32)) {
+		/* switch away from the initial page table */
+		load_cr3(swapper_pg_dir);
+		__flush_tlb_all();
+	}
+
 	cpu_init_exception_handling();
 
 	/*
-	 * Synchronization point with the hotplug core. Sets the
-	 * synchronization state to ALIVE and waits for the control CPU to
+	 * 32-bit systems load the microcode from the ASM startup code for
+	 * historical reasons.
+	 *
+	 * On 64-bit systems load it before reaching the AP alive
+	 * synchronization point below so it is not part of the full per
+	 * CPU serialized bringup part when "parallel" bringup is enabled.
+	 *
+	 * That's even safe when hyperthreading is enabled in the CPU as
+	 * the core code starts the primary threads first and leaves the
+	 * secondary threads waiting for SIPI. Loading microcode on
+	 * physical cores concurrently is a safe operation.
+	 *
+	 * This covers both the Intel specific issue that concurrent
+	 * microcode loading on SMT siblings must be prohibited and the
+	 * vendor independent issue that microcode loading which changes
+	 * CPUID, MSRs etc. must be strictly serialized to maintain
+	 * software state correctness.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64))
+		load_ucode_ap();
+
+	/*
+	 * Synchronization point with the hotplug core. Sets this CPU's
+	 * synchronization state to ALIVE and spin-waits for the control CPU to
 	 * release this CPU for further bringup.
 	 */
 	cpuhp_ap_sync_alive();
@@ -918,9 +945,9 @@ static int wakeup_secondary_cpu_via_init
 /* reduce the number of lines printed when booting a large cpu count system */
 static void announce_cpu(int cpu, int apicid)
 {
+	static int width, node_width, first = 1;
 	static int current_node = NUMA_NO_NODE;
 	int node = early_cpu_to_node(cpu);
-	static int width, node_width;
 
 	if (!width)
 		width = num_digits(num_possible_cpus()) + 1; /* + '#' sign */
@@ -928,10 +955,10 @@ static void announce_cpu(int cpu, int ap
 	if (!node_width)
 		node_width = num_digits(num_possible_nodes()) + 1; /* + '#' */
 
-	if (cpu == 1)
-		printk(KERN_INFO "x86: Booting SMP configuration:\n");
-
 	if (system_state < SYSTEM_RUNNING) {
+		if (first)
+			pr_info("x86: Booting SMP configuration:\n");
+
 		if (node != current_node) {
 			if (current_node > (-1))
 				pr_cont("\n");
@@ -942,11 +969,11 @@ static void announce_cpu(int cpu, int ap
 		}
 
 		/* Add padding for the BSP */
-		if (cpu == 1)
+		if (first)
 			pr_cont("%*s", width + 1, " ");
+		first = 0;
 
 		pr_cont("%*s#%d", width - num_digits(cpu), " ", cpu);
-
 	} else
 		pr_info("Booting Node %d Processor %d APIC 0x%x\n",
 			node, cpu, apicid);
@@ -1236,6 +1263,36 @@ void __init smp_prepare_cpus_common(void
 	set_cpu_sibling_map(0);
 }
 
+#ifdef CONFIG_X86_64
+/* Establish whether parallel bringup can be supported. */
+bool __init arch_cpuhp_init_parallel_bringup(void)
+{
+	/*
+	 * Encrypted guests require special handling. They enforce X2APIC
+	 * mode but the RDMSR to read the APIC ID is intercepted and raises
+	 * #VC or #VE which cannot be handled in the early startup code.
+	 *
+	 * AMD-SEV does not provide a RDMSR GHCB protocol so the early
+	 * startup code cannot directly communicate with the secure
+	 * firmware. The alternative solution to retrieve the APIC ID via
+	 * CPUID(0xb), which is covered by the GHCB protocol, is not viable
+	 * either because there is no enforcement of the CPUID(0xb)
+	 * provided "initial" APIC ID to be the same as the real APIC ID.
+	 *
+	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
+	 * implemented separately in the low level startup ASM code.
+	 */
+	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
+		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
+		return false;
+	}
+
+	smpboot_control = STARTUP_READ_APICID;
+	pr_debug("Parallel CPU startup enabled: 0x%08x\n", smpboot_control);
+	return true;
+}
+#endif
+
 /*
  * Prepare for SMP bootup.
  * @max_cpus: configured maximum number of CPUs. It is a legacy parameter



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:04 2023
Message-ID: <20230512205256.747254502@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 24/37] csky/smp: Switch to hotplug core state synchronization
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:35 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/csky/Kconfig           |    1 +
 arch/csky/include/asm/smp.h |    2 +-
 arch/csky/kernel/smp.c      |    8 ++------
 3 files changed, 4 insertions(+), 7 deletions(-)


--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -96,6 +96,7 @@ config CSKY
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select MAY_HAVE_SPARSE_IRQ
 	select MODULES_USE_ELF_RELA if MODULES
 	select OF
--- a/arch/csky/include/asm/smp.h
+++ b/arch/csky/include/asm/smp.h
@@ -23,7 +23,7 @@ void __init set_send_ipi(void (*func)(co
 
 int __cpu_disable(void);
 
-void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 
 #endif /* CONFIG_SMP */
 
--- a/arch/csky/kernel/smp.c
+++ b/arch/csky/kernel/smp.c
@@ -291,12 +291,8 @@ int __cpu_disable(void)
 	return 0;
 }
 
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: shutdown failed\n", cpu);
-		return;
-	}
 	pr_notice("CPU%u: shutdown\n", cpu);
 }
 
@@ -304,7 +300,7 @@ void __noreturn arch_cpu_idle_dead(void)
 {
 	idle_task_exit();
 
-	cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	while (!secondary_stack)
 		arch_cpu_idle();





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:07 2023
Message-ID: <20230512205257.186599880@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Subject: [patch V4 32/37] x86/apic: Provide cpu_primary_thread mask
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:48 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Make the primary thread tracking cpumask-based in preparation for simpler
handling of parallel bootup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/include/asm/apic.h     |    2 --
 arch/x86/include/asm/topology.h |   19 +++++++++++++++----
 arch/x86/kernel/apic/apic.c     |   20 +++++++++-----------
 arch/x86/kernel/smpboot.c       |   12 +++---------
 4 files changed, 27 insertions(+), 26 deletions(-)
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -506,10 +506,8 @@ extern int default_check_phys_apicid_pre
 #endif /* CONFIG_X86_LOCAL_APIC */
 
 #ifdef CONFIG_SMP
-bool apic_id_is_primary_thread(unsigned int id);
 void apic_smt_update(void);
 #else
-static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
 static inline void apic_smt_update(void) { }
 #endif
 
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -31,9 +31,9 @@
  * CONFIG_NUMA.
  */
 #include <linux/numa.h>
+#include <linux/cpumask.h>
 
 #ifdef CONFIG_NUMA
-#include <linux/cpumask.h>
 
 #include <asm/mpspec.h>
 #include <asm/percpu.h>
@@ -139,9 +139,20 @@ static inline int topology_max_smt_threa
 int topology_update_package_map(unsigned int apicid, unsigned int cpu);
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
-bool topology_is_primary_thread(unsigned int cpu);
 bool topology_smt_supported(void);
-#else
+
+extern struct cpumask __cpu_primary_thread_mask;
+#define cpu_primary_thread_mask ((const struct cpumask *)&__cpu_primary_thread_mask)
+
+/**
+ * topology_is_primary_thread - Check whether CPU is the primary SMT thread
+ * @cpu:	CPU to check
+ */
+static inline bool topology_is_primary_thread(unsigned int cpu)
+{
+	return cpumask_test_cpu(cpu, cpu_primary_thread_mask);
+}
+#else /* CONFIG_SMP */
 #define topology_max_packages()			(1)
 static inline int
 topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; }
@@ -152,7 +163,7 @@ static inline int topology_max_die_per_p
 static inline int topology_max_smt_threads(void) { return 1; }
 static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
 static inline bool topology_smt_supported(void) { return false; }
-#endif
+#endif /* !CONFIG_SMP */
 
 static inline void arch_fix_phys_package_id(int num, u32 slot)
 {
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2386,20 +2386,16 @@ bool arch_match_cpu_phys_id(int cpu, u64
 }
 
 #ifdef CONFIG_SMP
-/**
- * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
- * @apicid: APIC ID to check
- */
-bool apic_id_is_primary_thread(unsigned int apicid)
+static void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid)
 {
-	u32 mask;
-
-	if (smp_num_siblings == 1)
-		return true;
 	/* Isolate the SMT bit(s) in the APICID and check for 0 */
-	mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
-	return !(apicid & mask);
+	u32 mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
+
+	if (smp_num_siblings == 1 || !(apicid & mask))
+		cpumask_set_cpu(cpu, &__cpu_primary_thread_mask);
 }
+#else
+static inline void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid) { }
 #endif
 
 /*
@@ -2544,6 +2540,8 @@ int generic_processor_info(int apicid, i
 	set_cpu_present(cpu, true);
 	num_processors++;
 
+	cpu_mark_primary_thread(cpu, apicid);
+
 	return cpu;
 }
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -102,6 +102,9 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
+/* CPUs which are the primary SMT threads */
+struct cpumask __cpu_primary_thread_mask __read_mostly;
+
 /* Representing CPUs for which sibling maps can be computed */
 static cpumask_var_t cpu_sibling_setup_mask;
 
@@ -277,15 +280,6 @@ static void notrace start_secondary(void
 }
 
 /**
- * topology_is_primary_thread - Check whether CPU is the primary SMT thread
- * @cpu:	CPU to check
- */
-bool topology_is_primary_thread(unsigned int cpu)
-{
-	return apic_id_is_primary_thread(per_cpu(x86_cpu_to_apicid, cpu));
-}
-
-/**
  * topology_smt_supported - Check whether SMT is supported by the CPUs
  */
 bool topology_smt_supported(void)



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:07 2023
Message-ID: <20230512205256.972894276@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Subject: [patch V4 28/37] cpu/hotplug: Remove unused state functions
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:41 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

All users converted to the hotplug core mechanism.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 include/linux/cpu.h |    2 -
 kernel/smpboot.c    |   75 ----------------------------------------------------
 2 files changed, 77 deletions(-)


--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -193,8 +193,6 @@ static inline void play_idle(unsigned lo
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-bool cpu_wait_death(unsigned int cpu, int seconds);
-bool cpu_report_death(void);
 void cpuhp_report_idle_dead(void);
 #else
 static inline void cpuhp_report_idle_dead(void) { }
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -325,78 +325,3 @@ void smpboot_unregister_percpu_thread(st
 	cpus_read_unlock();
 }
 EXPORT_SYMBOL_GPL(smpboot_unregister_percpu_thread);
-
-#ifndef CONFIG_HOTPLUG_CORE_SYNC
-static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
-
-#ifdef CONFIG_HOTPLUG_CPU
-/*
- * Wait for the specified CPU to exit the idle loop and die.
- */
-bool cpu_wait_death(unsigned int cpu, int seconds)
-{
-	int jf_left = seconds * HZ;
-	int oldstate;
-	bool ret = true;
-	int sleep_jf = 1;
-
-	might_sleep();
-
-	/* The outgoing CPU will normally get done quite quickly. */
-	if (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) == CPU_DEAD)
-		goto update_state_early;
-	udelay(5);
-
-	/* But if the outgoing CPU dawdles, wait increasingly long times. */
-	while (atomic_read(&per_cpu(cpu_hotplug_state, cpu)) != CPU_DEAD) {
-		schedule_timeout_uninterruptible(sleep_jf);
-		jf_left -= sleep_jf;
-		if (jf_left <= 0)
-			break;
-		sleep_jf = DIV_ROUND_UP(sleep_jf * 11, 10);
-	}
-update_state_early:
-	oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-update_state:
-	if (oldstate == CPU_DEAD) {
-		/* Outgoing CPU died normally, update state. */
-		smp_mb(); /* atomic_read() before update. */
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_POST_DEAD);
-	} else {
-		/* Outgoing CPU still hasn't died, set state accordingly. */
-		if (!atomic_try_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
-					&oldstate, CPU_BROKEN))
-			goto update_state;
-		ret = false;
-	}
-	return ret;
-}
-
-/*
- * Called by the outgoing CPU to report its successful death.  Return
- * false if this report follows the surviving CPU's timing out.
- *
- * A separate "CPU_DEAD_FROZEN" is used when the surviving CPU
- * timed out.  This approach allows architectures to omit calls to
- * cpu_check_up_prepare() and cpu_set_state_online() without defeating
- * the next cpu_wait_death()'s polling loop.
- */
-bool cpu_report_death(void)
-{
-	int oldstate;
-	int newstate;
-	int cpu = smp_processor_id();
-
-	oldstate = atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-	do {
-		if (oldstate != CPU_BROKEN)
-			newstate = CPU_DEAD;
-		else
-			newstate = CPU_DEAD_FROZEN;
-	} while (!atomic_try_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
-				     &oldstate, newstate));
-	return newstate == CPU_DEAD;
-}
-
-#endif /* #ifdef CONFIG_HOTPLUG_CPU */
-#endif /* !CONFIG_HOTPLUG_CORE_SYNC */





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:08 2023
Message-ID: <20230512205256.582584351@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Subject: [patch V4 21/37] cpu/hotplug: Remove cpu_report_state() and related
 unused cruft
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:30 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

No more users.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 include/linux/cpu.h |    2 -
 kernel/smpboot.c    |   90 ----------------------------------------------------
 2 files changed, 92 deletions(-)


--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -184,8 +184,6 @@ void arch_cpu_idle_enter(void);
 void arch_cpu_idle_exit(void);
 void __noreturn arch_cpu_idle_dead(void);
 
-int cpu_report_state(int cpu);
-int cpu_check_up_prepare(int cpu);
 void cpu_set_state_online(int cpu);
 void play_idle_precise(u64 duration_ns, u64 latency_ns);
 
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -329,97 +329,7 @@ EXPORT_SYMBOL_GPL(smpboot_unregister_per
 #ifndef CONFIG_HOTPLUG_CORE_SYNC
 static DEFINE_PER_CPU(atomic_t, cpu_hotplug_state) = ATOMIC_INIT(CPU_POST_DEAD);
 
-/*
- * Called to poll specified CPU's state, for example, when waiting for
- * a CPU to come online.
- */
-int cpu_report_state(int cpu)
-{
-	return atomic_read(&per_cpu(cpu_hotplug_state, cpu));
-}
-
-/*
- * If CPU has died properly, set its state to CPU_UP_PREPARE and
- * return success.  Otherwise, return -EBUSY if the CPU died after
- * cpu_wait_death() timed out.  And yet otherwise again, return -EAGAIN
- * if cpu_wait_death() timed out and the CPU still hasn't gotten around
- * to dying.  In the latter two cases, the CPU might not be set up
- * properly, but it is up to the arch-specific code to decide.
- * Finally, -EIO indicates an unanticipated problem.
- *
- * Note that it is permissible to omit this call entirely, as is
- * done in architectures that do no CPU-hotplug error checking.
- */
-int cpu_check_up_prepare(int cpu)
-{
-	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
-		return 0;
-	}
-
-	switch (atomic_read(&per_cpu(cpu_hotplug_state, cpu))) {
-
-	case CPU_POST_DEAD:
-
-		/* The CPU died properly, so just start it up again. */
-		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
-		return 0;
-
-	case CPU_DEAD_FROZEN:
-
-		/*
-		 * Timeout during CPU death, so let caller know.
-		 * The outgoing CPU completed its processing, but after
-		 * cpu_wait_death() timed out and reported the error. The
-		 * caller is free to proceed, in which case the state
-		 * will be reset properly by cpu_set_state_online().
-		 * Proceeding despite this -EBUSY return makes sense
-		 * for systems where the outgoing CPUs take themselves
-		 * offline, with no post-death manipulation required from
-		 * a surviving CPU.
-		 */
-		return -EBUSY;
-
-	case CPU_BROKEN:
-
-		/*
-		 * The most likely reason we got here is that there was
-		 * a timeout during CPU death, and the outgoing CPU never
-		 * did complete its processing.  This could happen on
-		 * a virtualized system if the outgoing VCPU gets preempted
-		 * for more than five seconds, and the user attempts to
-		 * immediately online that same CPU.  Trying again later
-		 * might return -EBUSY above, hence -EAGAIN.
-		 */
-		return -EAGAIN;
-
-	case CPU_UP_PREPARE:
-		/*
-		 * Timeout while waiting for the CPU to show up. Allow to try
-		 * again later.
-		 */
-		return 0;
-
-	default:
-
-		/* Should not happen.  Famous last words. */
-		return -EIO;
-	}
-}
-
-/*
- * Mark the specified CPU online.
- *
- * Note that it is permissible to omit this call entirely, as is
- * done in architectures that do no CPU-hotplug error checking.
- */
-void cpu_set_state_online(int cpu)
-{
-	(void)atomic_xchg(&per_cpu(cpu_hotplug_state, cpu), CPU_ONLINE);
-}
-
 #ifdef CONFIG_HOTPLUG_CPU
-
 /*
  * Wait for the specified CPU to exit the idle loop and die.
  */





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:18:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534018.831239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9R-0005l8-1D; Fri, 12 May 2023 21:18:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534018.831239; Fri, 12 May 2023 21:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9Q-0005kW-PW; Fri, 12 May 2023 21:18:08 +0000
Received: by outflank-mailman (input) for mailman id 534018;
 Fri, 12 May 2023 21:18:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzQ-0004F7-GW
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:48 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 09efbfc1-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09efbfc1-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205257.027075560@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925664;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=QdUmczWTVhK7K/GEDp21ZZBW6QiyoqITsSHkVbZTwSo=;
	b=QxFsCk8M0IwU/AWXJTbXBb8OmrmczN7eSVShQBD/xtp9CSloxhz1odBQguueWNc/vEM52/
	0Kl3gldkf4/0GgZgMmCaVqarwnt6c2WxoW5biu9zyuJifvGtM9/Jij2+5GahK4gODm1Y1Q
	1hd8RJJvQbU0l+Gnh64z7qRhDPk39TNRP842ELWqPXiUeNBVOhZ36Jqvxt0WeLRCeD/TLr
	8ppgeI8QfGvfUVwsNNPzx6acsMuK+QD0Sy+DuKWmGes0FQJYbkh9eWywGTaBGzAG8N39CJ
	IMTWuS0FUzsmcC8AS5xDPopwnYu51b4FiPLMJd67kDoqwu/y1krpwzf6eMhGgw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925664;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=QdUmczWTVhK7K/GEDp21ZZBW6QiyoqITsSHkVbZTwSo=;
	b=kpZz6QyptBqFfMzJcx1ZcoW9ytkNNQFs/6P8KAIUryIP/ZE4SXsEP30lmowL1TlGesMnkc
	+R008NAdqKJ6jZCQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch V4 29/37] cpu/hotplug: Reset task stack state in _cpu_up()
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:43 +0200 (CEST)

From: David Woodhouse <dwmw@amazon.co.uk>

Commit dce1ca0525bf ("sched/scs: Reset task stack state in bringup_cpu()")
ensured that the shadow call stack and KASAN poisoning were removed from
a CPU's stack each time that CPU is brought up, not just once.

This is not incorrect. However, with parallel bringup the idle thread setup
will happen at a different step. As a consequence the cleanup in
bringup_cpu() would be too late.

Move the SCS/KASAN cleanup to the generic _cpu_up() function instead,
which already ensures that the new CPU's stack is available, purely to
allow for early failure. This occurs when the CPU to be brought up is
in the CPUHP_OFFLINE state, which should correctly do the cleanup any
time the CPU has been taken down to the point where such is needed.
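A standalone sketch of the resulting ordering may help (hypothetical simulation, not kernel code; the event names `fetch_idle_thread` and `kick_ap` are illustrative stand-ins for the real `_cpu_up()` steps): the point of the move is that the stale-stack reset now happens immediately after the idle thread is fetched, before any step at which the AP could start running on that stack.

```c
/* Hypothetical simulation of the _cpu_up() step ordering after this
 * patch. It only records event order; none of the real kernel
 * machinery is modeled. */
#include <assert.h>
#include <string.h>

static const char *events[4];
static int nevents;

static void record(const char *ev)
{
	events[nevents++] = ev;
}

/* Simplified _cpu_up() flow: the idle thread's stale SCS/KASAN stack
 * state is reset as soon as the thread is fetched, before the AP can
 * possibly begin executing on it (which with parallel bringup may be
 * earlier than bringup_cpu()). */
static void _cpu_up_sim(void)
{
	record("fetch_idle_thread");
	record("scs_kasan_reset");	/* moved here from bringup_cpu() */
	record("kick_ap");		/* AP may run from here on */
}
```

The invariant being preserved is simply that `scs_kasan_reset` precedes `kick_ap` on every bringup, not just the first one.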

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
---
 kernel/cpu.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)


--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -771,12 +771,6 @@ static int bringup_cpu(unsigned int cpu)
 		return -EAGAIN;
 
 	/*
-	 * Reset stale stack state from the last time this CPU was online.
-	 */
-	scs_task_reset(idle);
-	kasan_unpoison_task_stack(idle);
-
-	/*
 	 * Some architectures have to walk the irq descriptors to
 	 * setup the vector space for the cpu which comes online.
 	 *
@@ -1587,6 +1581,12 @@ static int _cpu_up(unsigned int cpu, int
 			ret = PTR_ERR(idle);
 			goto out;
 		}
+
+		/*
+		 * Reset stale stack state from the last time this CPU was online.
+		 */
+		scs_task_reset(idle);
+		kasan_unpoison_task_stack(idle);
 	}
 
 	cpuhp_tasks_frozen = tasks_frozen;



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:18:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534025.831247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9S-00064S-7r; Fri, 12 May 2023 21:18:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534025.831247; Fri, 12 May 2023 21:18:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9S-00063p-2Q; Fri, 12 May 2023 21:18:10 +0000
Received: by outflank-mailman (input) for mailman id 534025;
 Fri, 12 May 2023 21:18:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzB-0004FP-Kn
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:33 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0343edf5-f109-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0343edf5-f109-11ed-b229-6b7b168915f2
Message-ID: <20230512205256.635326070@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925652;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=rINXxhKXmpYAtWqMmPzc6OdjQErCtPtZZiG/o2u67XA=;
	b=zVr4ZNUNaCJyD2RgIxyzBhfIe89JftjURTbIdWm+bfCJWlFf4rsjq4OuozvjGhm9ULhHrC
	U+aEJu664cXclDTxQuAvnz47evxGKtnynt0q7QKC8sqCfjf36XHNHRGVlHpVCtZBasVbXa
	gv1iPwgpkiOTosy1dsjss8b8knMy3Hz3lj6ov/gir6MWLV2GXOwcnbFuE4PuQ8MSg84x92
	hnzVxYv6Z/LoV2my9lHN/jUVhkZbsxpowfX+bp0NKPfT/TaL1EN1Lbe/fjcyoBF18PLYQI
	MH9mNvHLkLUEnvXztxwCwTD8RTyo3FLLhL56NmTy/yerLcQyMUsA3XboFVowyA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925652;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=rINXxhKXmpYAtWqMmPzc6OdjQErCtPtZZiG/o2u67XA=;
	b=Oocpkedn7RFFiZEZ6uMicf8fkqVHkAt6v2WrHB3tn/jDxmbRHsz+Nfcb/Op9ZCsGgDieF+
	01qSbNlzEhoG1jDQ==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject:
 [patch V4 22/37] ARM: smp: Switch to hotplug core state synchronization
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:32 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/arm/Kconfig           |    1 +
 arch/arm/include/asm/smp.h |    2 +-
 arch/arm/kernel/smp.c      |   18 +++++++-----------
 3 files changed, 9 insertions(+), 12 deletions(-)
---

--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -124,6 +124,7 @@ config ARM
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_UID16
 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select IRQ_FORCED_THREADING
 	select MODULES_USE_ELF_REL
 	select NEED_DMA_MAP_STATE
--- a/arch/arm/include/asm/smp.h
+++ b/arch/arm/include/asm/smp.h
@@ -64,7 +64,7 @@ extern void secondary_startup_arm(void);
 
 extern int __cpu_disable(void);
 
-extern void __cpu_die(unsigned int cpu);
+static inline void __cpu_die(unsigned int cpu) { }
 
 extern void arch_send_call_function_single_ipi(int cpu);
 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -288,15 +288,11 @@ int __cpu_disable(void)
 }
 
 /*
- * called on the thread which is asking for a CPU to be shutdown -
- * waits until shutdown has completed, or it is timed out.
+ * called on the thread which is asking for a CPU to be shutdown after the
+ * shutdown completed.
  */
-void __cpu_die(unsigned int cpu)
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
 {
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_err("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
 	pr_debug("CPU%u: shutdown\n", cpu);
 
 	clear_tasks_mm_cpumask(cpu);
@@ -336,11 +332,11 @@ void __noreturn arch_cpu_idle_dead(void)
 	flush_cache_louis();
 
 	/*
-	 * Tell __cpu_die() that this CPU is now safe to dispose of.  Once
-	 * this returns, power and/or clocks can be removed at any point
-	 * from this CPU and its cache by platform_cpu_kill().
+	 * Tell cpuhp_bp_sync_dead() that this CPU is now safe to dispose
+	 * of. Once this returns, power and/or clocks can be removed at
+	 * any point from this CPU and its cache by platform_cpu_kill().
 	 */
-	(void)cpu_report_death();
+	cpuhp_ap_report_dead();
 
 	/*
 	 * Ensure that the cache lines associated with that completion are





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:18:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534028.831257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9T-0006Sh-JL; Fri, 12 May 2023 21:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534028.831257; Fri, 12 May 2023 21:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9T-0006S8-Eo; Fri, 12 May 2023 21:18:11 +0000
Received: by outflank-mailman (input) for mailman id 534028;
 Fri, 12 May 2023 21:18:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzJ-0004F7-FO
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:41 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 071160ff-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 071160ff-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205256.859920443@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925659;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=4KTNmmFPJXZljDpOFQFSwkn0vJx9v7UbMNIhdACke4E=;
	b=LDLOAD2i8A3n7wKvyw/OHjYjE9g2++H+Ra6znb14aQBES5w+tCFjEYDcf9R4VTOZvx9n/u
	aJPKVhpjaAwY/9/gG5nbXzIELvXFhBTj6ik9OFCKp0sTKTUqsxOkjkPU5EOD3pLZJZ4Q08
	mBjq2r7Ehgaq64GBwCCWKc9urhnQ2OFs9Pi8ZnDO+A9Gy5HJGaINPnYSxBdC6tTcGbRZFa
	FZhkykTgFM2zztGv9fUkORf1CGPgc7ibQGmtESQXEt7iMvWTsEjR7OCpRCp0F7VavIl+3g
	BX/UupJRv8rSiDtBIxr6kkKrNs6sYKYM7ptmqJMF/1rd4KRE6ZPzMWlA9jlybw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925659;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=4KTNmmFPJXZljDpOFQFSwkn0vJx9v7UbMNIhdACke4E=;
	b=Gt2zwUCq7iu8dnyaeJHibD3RPnYUHsIc76G4Lbvo/gsvpFIWUZKRoyxcLE+Vi7t38Qh+jZ
	on551572+Ltc/RBA==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 26/37] parisc: Switch to hotplug core state synchronization
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:38 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

Switch to the CPU hotplug core state tracking and synchronization
mechanism. No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/parisc/Kconfig          |    1 +
 arch/parisc/kernel/process.c |    4 ++--
 arch/parisc/kernel/smp.c     |    7 +++----
 3 files changed, 6 insertions(+), 6 deletions(-)


--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -57,6 +57,7 @@ config PARISC
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
 	select GENERIC_SCHED_CLOCK
 	select GENERIC_IRQ_MIGRATION if SMP
 	select HAVE_UNSTABLE_SCHED_CLOCK if SMP
--- a/arch/parisc/kernel/process.c
+++ b/arch/parisc/kernel/process.c
@@ -166,8 +166,8 @@ void __noreturn arch_cpu_idle_dead(void)
 
 	local_irq_disable();
 
-	/* Tell __cpu_die() that this CPU is now safe to dispose of. */
-	(void)cpu_report_death();
+	/* Tell the core that this CPU is now safe to dispose of. */
+	cpuhp_ap_report_dead();
 
 	/* Ensure that the cache lines are written out. */
 	flush_cache_all_local();
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -500,11 +500,10 @@ int __cpu_disable(void)
 void __cpu_die(unsigned int cpu)
 {
 	pdc_cpu_rendezvous_lock();
+}
 
-	if (!cpu_wait_death(cpu, 5)) {
-		pr_crit("CPU%u: cpu didn't die\n", cpu);
-		return;
-	}
+void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
+{
 	pr_info("CPU%u: is shutting down\n", cpu);
 
 	/* set task's state to interruptible sleep */





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:18:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534031.831264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9U-0006aU-81; Fri, 12 May 2023 21:18:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534031.831264; Fri, 12 May 2023 21:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9T-0006Z7-VH; Fri, 12 May 2023 21:18:11 +0000
Received: by outflank-mailman (input) for mailman id 534031;
 Fri, 12 May 2023 21:18:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzV-0004F7-HZ
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:53 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0dfb21e2-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:51 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0dfb21e2-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205257.240231377@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925670;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=m81D3nDf/Ldd8DuVFoioZX0PA8SIeNGwH2057r8GMno=;
	b=iJ35cdx8v18J/0cGFI4o1VXLLZdnkIko4bXOUYVtvqwDsNw74CR+LpFGs9LIoAVC3T7hi7
	dc0tRrWiWO4wC06n9RMswwP7pAMJGoUKbYcFT9fTUsgOratlkRnK2eSykPzemcWf0o70Ab
	DLVqxum0H5uEbU++/si9skYnkkYHRuLNRO4KEWLLGv/1LPbeUUCGkeVAUAWFNK6MRxw3ob
	p7jOuISbm6d1FYoaYVTrCQEtfSIvBbooWH7sIcvGmX+w76wRfVGgB55G9TZ6bM4r6Wt1KC
	sl80JEZsfrX0W0gRscaY3daheGYoOvayd+Q6CrCXuJzjI+EvpIoiXlHAST5WsA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925670;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=m81D3nDf/Ldd8DuVFoioZX0PA8SIeNGwH2057r8GMno=;
	b=ir949Ub28BoYQ8jmxON50pUJUyvSlsxOkeqN/nPFkQYD7n1qCJwGDVvYILtG2SfKKh3fqI
	A90aQ1nFBaivBfAg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: [patch V4 33/37] cpu/hotplug: Allow "parallel" bringup up to
 CPUHP_BP_KICK_AP_STATE
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:50 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

There is often significant latency in the early stages of CPU bringup, and
time is wasted by waking each CPU (e.g. with SIPI/INIT/INIT on x86) and
then waiting for it to respond before moving on to the next.

Allow a platform to enable parallel setup which brings all to-be-onlined
CPUs up to the CPUHP_BP_KICK_AP state. While this state advancement on the
control CPU (BP) is single-threaded, the important part is the last state,
CPUHP_BP_KICK_AP, which wakes the to-be-onlined CPUs up.

This allows the CPUs to run up to the first synchronization point
cpuhp_ap_sync_alive() where they wait for the control CPU to release them
one by one for the full onlining procedure.

This parallelism depends on the CPU hotplug core sync mechanism, which
ensures that the CPUs brought up in parallel wait for release before
touching any state which would make the CPU visible to anything outside
the hotplug control mechanism.

To handle the SMT constraints of X86 correctly the bringup happens in two
iterations when CONFIG_HOTPLUG_SMT is enabled. The control CPU brings up
the primary SMT threads of each core first, which can load the microcode
without the need to rendezvous with the thread siblings. Once that's
completed it brings up the secondary SMT threads.
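The two-iteration scheme can be illustrated with a standalone sketch (hypothetical and simplified: it only models the kick order, assuming a toy topology of two SMT threads per core where even-numbered CPUs are the primary threads; `cpuhp_parallel_kick_order()` and `cpu_is_primary_thread_sim()` are illustrative names, not kernel APIs):

```c
/* Hypothetical model of the two-pass parallel bringup order.
 * CPU 0 is the control CPU (BP) and is never kicked. */
#include <assert.h>
#include <stddef.h>

#define NR_CPUS_SIM	8
#define SMT_THREADS_SIM	2

/* Toy topology: thread 0 of each core is the primary SMT thread. */
static int cpu_is_primary_thread_sim(int cpu)
{
	return (cpu % SMT_THREADS_SIM) == 0;
}

/* Fill @order with the sequence in which APs would be kicked up to
 * CPUHP_BP_KICK_AP: pass 1 covers the primary SMT threads (so they
 * can load microcode without sibling rendezvous), pass 2 covers the
 * secondaries. Returns the number of CPUs recorded. */
static size_t cpuhp_parallel_kick_order(int order[], size_t max)
{
	size_t n = 0;
	int cpu;

	for (cpu = 1; cpu < NR_CPUS_SIM && n < max; cpu++)
		if (cpu_is_primary_thread_sim(cpu))
			order[n++] = cpu;

	for (cpu = 1; cpu < NR_CPUS_SIM && n < max; cpu++)
		if (!cpu_is_primary_thread_sim(cpu))
			order[n++] = cpu;

	return n;
}
```

With this topology the kick order is 2, 4, 6 (primaries) followed by 1, 3, 5, 7 (secondaries); every secondary is kicked only after all primaries, mirroring the constraint described above.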

Co-developed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 Documentation/admin-guide/kernel-parameters.txt |    6 +
 arch/Kconfig                                    |    4 
 include/linux/cpuhotplug.h                      |    1 
 kernel/cpu.c                                    |  103 ++++++++++++++++++++++--
 4 files changed, 109 insertions(+), 5 deletions(-)
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -838,6 +838,12 @@
 			on every CPU online, such as boot, and resume from suspend.
 			Default: 10000
 
+	cpuhp.parallel=
+			[SMP] Enable/disable parallel bringup of secondary CPUs
+			Format: <bool>
+			Default is enabled if CONFIG_HOTPLUG_PARALLEL=y. Otherwise
+			the parameter has no effect.
+
 	crash_kexec_post_notifiers
 			Run kdump after running panic-notifiers and dumping
 			kmsg. This only for the users who doubt kdump always
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -53,6 +53,10 @@ config HOTPLUG_SPLIT_STARTUP
 	bool
 	select HOTPLUG_CORE_SYNC_FULL
 
+config HOTPLUG_PARALLEL
+	bool
+	select HOTPLUG_SPLIT_STARTUP
+
 config GENERIC_ENTRY
 	bool
 
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -524,6 +524,7 @@ void cpuhp_ap_sync_alive(void);
 void arch_cpuhp_sync_state_poll(void);
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu);
 int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle);
+bool arch_cpuhp_init_parallel_bringup(void);
 
 #ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
 void cpuhp_ap_report_dead(void);
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -649,8 +649,23 @@ bool cpu_smt_possible(void)
 		cpu_smt_control != CPU_SMT_NOT_SUPPORTED;
 }
 EXPORT_SYMBOL_GPL(cpu_smt_possible);
+
+static inline bool cpuhp_smt_aware(void)
+{
+	return topology_smt_supported();
+}
+
+static inline const struct cpumask *cpuhp_get_primary_thread_mask(void)
+{
+	return cpu_primary_thread_mask;
+}
 #else
 static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
+static inline bool cpuhp_smt_aware(void) { return false; }
+static inline const struct cpumask *cpuhp_get_primary_thread_mask(void)
+{
+	return cpu_present_mask;
+}
 #endif
 
 static inline enum cpuhp_state
@@ -1747,16 +1762,94 @@ int bringup_hibernate_cpu(unsigned int s
 	return 0;
 }
 
-void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
+static void __init cpuhp_bringup_mask(const struct cpumask *mask, unsigned int ncpus,
+				      enum cpuhp_state target)
 {
 	unsigned int cpu;
 
-	for_each_present_cpu(cpu) {
-		if (num_online_cpus() >= setup_max_cpus)
+	for_each_cpu(cpu, mask) {
+		struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+
+		if (!--ncpus)
 			break;
-		if (!cpu_online(cpu))
-			cpu_up(cpu, CPUHP_ONLINE);
+
+		if (cpu_up(cpu, target) && can_rollback_cpu(st)) {
+			/*
+			 * If this failed then cpu_up() might have only
+			 * rolled back to CPUHP_BP_KICK_AP for the final
+			 * online. Clean it up. NOOP if already rolled back.
+			 */
+			WARN_ON(cpuhp_invoke_callback_range(false, cpu, st, CPUHP_OFFLINE));
+		}
+	}
+}
+
+#ifdef CONFIG_HOTPLUG_PARALLEL
+static bool __cpuhp_parallel_bringup __ro_after_init = true;
+
+static int __init parallel_bringup_parse_param(char *arg)
+{
+	return kstrtobool(arg, &__cpuhp_parallel_bringup);
+}
+early_param("cpuhp.parallel", parallel_bringup_parse_param);
+
+/*
+ * On architectures which have enabled parallel bringup this invokes all BP
+ * prepare states for each of the to be onlined APs first. The last state
+ * sends the startup IPI to the APs. The APs proceed through the low level
+ * bringup code in parallel and then wait for the control CPU to release
+ * them one by one for the final onlining procedure.
+ *
+ * This avoids waiting for each AP to respond to the startup IPI in
+ * CPUHP_BRINGUP_CPU.
+ */
+static bool __init cpuhp_bringup_cpus_parallel(unsigned int ncpus)
+{
+	const struct cpumask *mask = cpu_present_mask;
+
+	if (__cpuhp_parallel_bringup)
+		__cpuhp_parallel_bringup = arch_cpuhp_init_parallel_bringup();
+	if (!__cpuhp_parallel_bringup)
+		return false;
+
+	if (cpuhp_smt_aware()) {
+		const struct cpumask *pmask = cpuhp_get_primary_thread_mask();
+		static struct cpumask tmp_mask __initdata;
+
+		/*
+		 * For various reasons, x86 requires that the SMT siblings
+		 * are not running while the primary thread loads
+		 * microcode. Bring the primary threads up first.
+		 */
+		cpumask_and(&tmp_mask, mask, pmask);
+		cpuhp_bringup_mask(&tmp_mask, ncpus, CPUHP_BP_KICK_AP);
+		cpuhp_bringup_mask(&tmp_mask, ncpus, CPUHP_ONLINE);
+		/* Account for the online CPUs */
+		ncpus -= num_online_cpus();
+		if (!ncpus)
+			return true;
+		/* Create the mask for secondary CPUs */
+		cpumask_andnot(&tmp_mask, mask, pmask);
+		mask = &tmp_mask;
 	}
+
+	/* Bring the not-yet started CPUs up */
+	cpuhp_bringup_mask(mask, ncpus, CPUHP_BP_KICK_AP);
+	cpuhp_bringup_mask(mask, ncpus, CPUHP_ONLINE);
+	return true;
+}
+#else
+static inline bool cpuhp_bringup_cpus_parallel(unsigned int ncpus) { return false; }
+#endif /* CONFIG_HOTPLUG_PARALLEL */
+
+void __init bringup_nonboot_cpus(unsigned int setup_max_cpus)
+{
+	/* Try parallel bringup optimization if enabled */
+	if (cpuhp_bringup_cpus_parallel(setup_max_cpus))
+		return;
+
+	/* Full per CPU serialized bringup */
+	cpuhp_bringup_mask(cpu_present_mask, setup_max_cpus, CPUHP_ONLINE);
 }
 
 #ifdef CONFIG_PM_SLEEP_SMP



From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534044.831282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9f-00083d-9n; Fri, 12 May 2023 21:18:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534044.831282; Fri, 12 May 2023 21:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9f-00082J-3d; Fri, 12 May 2023 21:18:23 +0000
Received: by outflank-mailman (input) for mailman id 534044;
 Fri, 12 May 2023 21:18:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzV-0004FP-Ty
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:53 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0ee2342e-f109-11ed-b229-6b7b168915f2;
 Fri, 12 May 2023 23:07:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ee2342e-f109-11ed-b229-6b7b168915f2
Message-ID: <20230512205257.299231005@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925672;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=1NEBWklEox1EHC85h866UHlaj72PU/D3Dw3p+jMC/F0=;
	b=AxLTeRaqaMDLy4ioLj6zEL9HPmP3cx+s/iQLQ36KY9z4ru+T511Ht9KAXa6I3GInzxRIU1
	auZCeKjxgzcWperPNzAB6/xoxSxZZT3u87ymCz3VAy/9CvphzwHc79I4z9See9dnYiwzvz
	bYSv0JUTpntUltbnxM2mkF4WxFblikPEgXHSvRGwaVfq73pCN36um7FFZl4vXhlXIHBRyk
	wdNVsJkRU8XRhNBC0JX4ba3dJqo9Z0BrkVvSLSPvH7jhRowqf1Jy1tcFMxYdxDiUMofIyI
	JqzBNBvPaJetJyHovLVoc2IeuKZ0XpgQRph7CuJjaFv7vGl5Yd3+ReqjtvieoQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925672;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=1NEBWklEox1EHC85h866UHlaj72PU/D3Dw3p+jMC/F0=;
	b=RcKUBGkIIKdhAcY84iUu32ds5jxMMcMhQacX7+NkfuyvLzMFgdSYMRTqeLxsIuyjcb0y9C
	Eb2r1CihAReSwIBw==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 34/37] x86/apic: Save the APIC virtual base address
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:51 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

For parallel CPU bringup it's required to read the APIC ID in the low level
startup code. The virtual APIC base address is a constant because it's a
fixmapped address. Exposing that constant, which is composed via macros, to
assembly code is non-trivial due to header inclusion hell.

Aside from that, it's constant only because of the vsyscall ABI
requirement. Once vsyscall is out of the picture, the fixmap can be placed
at runtime.

Avoid the header hell, stay flexible, and store the address in a variable
which can be exposed to the low level startup code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
V4: Fixed changelog typo - Sergey
---
 arch/x86/include/asm/smp.h  |    1 +
 arch/x86/kernel/apic/apic.c |    4 ++++
 2 files changed, 5 insertions(+)
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -196,6 +196,7 @@ extern void nmi_selftest(void);
 #endif
 
 extern unsigned int smpboot_control;
+extern unsigned long apic_mmio_base;
 
 #endif /* !__ASSEMBLY__ */
 
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -101,6 +101,9 @@ static int apic_extnmi __ro_after_init =
  */
 static bool virt_ext_dest_id __ro_after_init;
 
+/* For parallel bootup. */
+unsigned long apic_mmio_base __ro_after_init;
+
 /*
  * Map cpu index to physical APIC ID
  */
@@ -2163,6 +2166,7 @@ void __init register_lapic_address(unsig
 
 	if (!x2apic_mode) {
 		set_fixmap_nocache(FIX_APIC_BASE, address);
+		apic_mmio_base = APIC_BASE;
 		apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
 			    APIC_BASE, address);
 	}





From xen-devel-bounces@lists.xenproject.org Fri May 12 21:18:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 12 May 2023 21:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534042.831279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9e-0007zU-V9; Fri, 12 May 2023 21:18:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534042.831279; Fri, 12 May 2023 21:18:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxa9e-0007z6-Qh; Fri, 12 May 2023 21:18:22 +0000
Received: by outflank-mailman (input) for mailman id 534042;
 Fri, 12 May 2023 21:18:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5PQu=BB=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1pxZzT-0004F7-H9
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 21:07:51 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0bf74c58-f109-11ed-8611-37d641c3527e;
 Fri, 12 May 2023 23:07:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bf74c58-f109-11ed-8611-37d641c3527e
Message-ID: <20230512205257.133453992@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1683925667;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=s6m9J4c1JvP70cnTGmFjRaHjxV+T4Hc55BQMBWx1oPc=;
	b=y2DUc1fXfsXkDW8K6WaMiHDe6gR4IEKxgaOddkKmSwSv+DqEsvnjWzXgPjyq32d1wpWK5Z
	DhULLjmoSJ6Twf97uZUUSDmOErh5HW4b2tsf6z1y+M36Y3f2xB7zi4wFgw7y4DDIduarbz
	yeLsAcWQsBCl07tIF0QqY/Jj/avRssPe2AvWtNVRytlleptjLfnchKkKtZCHOE5wyaDkBF
	6X8iVUK0NIWo0to7bK0cANrtWwlUAJ9ihLJH1D1NrqOK2ldq23z0J7UlmUsOUeID7o25Cf
	4F7tlHyUzLkqHXoKB/QYb/4diEkgPLMMdkcjx5AD44P4tw9zfHRvswMyp4a7FQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1683925667;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 references:references; bh=s6m9J4c1JvP70cnTGmFjRaHjxV+T4Hc55BQMBWx1oPc=;
	b=yCQezV+nLz69U6F8kXkQMvva3l/BtuBu7i4swHXcyBl7X7TgPYqW8jXLbGqot3jpiZi7IT
	CU7Sn35pJ28UiwBg==
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>,
 Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,
 linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>,
 linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: [patch V4 31/37] x86/smpboot: Enable split CPU startup
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Fri, 12 May 2023 23:07:46 +0200 (CEST)

From: Thomas Gleixner <tglx@linutronix.de>

The x86 CPU bringup state currently does the AP wake-up, waits for the AP
to respond, and then releases it for full bringup.

It is safe to split this into a wake-up state and a separate wait+release
state.

Provide the required functions and enable the split CPU bringup, which
prepares for parallel bringup, where the bringup of the non-boot CPUs takes
two iterations: One to prepare and wake all APs and the second to wait and
release them. Depending on timing this can eliminate the wait time
completely.
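The time saving can be sketched with a toy cost model (the numbers and names are illustrative assumptions, not measurements): serialized bringup pays the AP wake-up latency once per CPU, while split bringup sends all startup IPIs first so the latencies overlap.

```c
#include <assert.h>

/*
 * Toy cost model: each AP takes WAKE_LATENCY time units between
 * receiving the startup IPI and reporting alive. RELEASE_COST is the
 * (small) control-CPU work per AP in each phase.
 */
#define NCPUS		4
#define WAKE_LATENCY	100
#define RELEASE_COST	1

/* Serialized bringup: wait out each AP's wake-up latency in turn. */
static int serialized_bringup_time(void)
{
	int t = 0;

	for (int cpu = 0; cpu < NCPUS; cpu++)
		t += WAKE_LATENCY + RELEASE_COST;
	return t;
}

/* Split bringup: kick all APs first, so their wake-ups overlap. */
static int split_bringup_time(void)
{
	int t = 0;

	for (int cpu = 0; cpu < NCPUS; cpu++)
		t += RELEASE_COST;	/* phase 1: send all startup IPIs */
	t += WAKE_LATENCY;		/* all APs come up concurrently */
	for (int cpu = 0; cpu < NCPUS; cpu++)
		t += RELEASE_COST;	/* phase 2: wait for and release each AP */
	return t;
}
```

With these made-up constants, serialized bringup costs NCPUS full wake-up latencies while split bringup costs roughly one, which is the "eliminate the wait time" effect the changelog describes.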

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
---
 arch/x86/Kconfig           |    2 +-
 arch/x86/include/asm/smp.h |    9 ++-------
 arch/x86/kernel/smp.c      |    2 +-
 arch/x86/kernel/smpboot.c  |    8 ++++----
 arch/x86/xen/smp_pv.c      |    4 ++--
 5 files changed, 10 insertions(+), 15 deletions(-)
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,8 +274,8 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
-	select HOTPLUG_CORE_SYNC_FULL		if SMP
 	select HOTPLUG_SMT			if SMP
+	select HOTPLUG_SPLIT_STARTUP		if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_PER_CPU_EMBED_FIRST_CHUNK
 	select NEED_PER_CPU_PAGE_FIRST_CHUNK
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -40,7 +40,7 @@ struct smp_ops {
 
 	void (*cleanup_dead_cpu)(unsigned cpu);
 	void (*poll_sync_state)(void);
-	int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
+	int (*kick_ap_alive)(unsigned cpu, struct task_struct *tidle);
 	int (*cpu_disable)(void);
 	void (*cpu_die)(unsigned int cpu);
 	void (*play_dead)(void);
@@ -80,11 +80,6 @@ static inline void smp_cpus_done(unsigne
 	smp_ops.smp_cpus_done(max_cpus);
 }
 
-static inline int __cpu_up(unsigned int cpu, struct task_struct *tidle)
-{
-	return smp_ops.cpu_up(cpu, tidle);
-}
-
 static inline int __cpu_disable(void)
 {
 	return smp_ops.cpu_disable();
@@ -124,7 +119,7 @@ void native_smp_prepare_cpus(unsigned in
 void calculate_max_logical_packages(void);
 void native_smp_cpus_done(unsigned int max_cpus);
 int common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
-int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
+int native_kick_ap(unsigned int cpu, struct task_struct *tidle);
 int native_cpu_disable(void);
 void __noreturn hlt_play_dead(void);
 void native_play_dead(void);
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -268,7 +268,7 @@ struct smp_ops smp_ops = {
 #endif
 	.smp_send_reschedule	= native_smp_send_reschedule,
 
-	.cpu_up			= native_cpu_up,
+	.kick_ap_alive		= native_kick_ap,
 	.cpu_disable		= native_cpu_disable,
 	.play_dead		= native_play_dead,
 
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1052,7 +1052,7 @@ static int do_boot_cpu(int apicid, int c
 	return ret;
 }
 
-static int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
+int native_kick_ap(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
 	int err;
@@ -1088,15 +1088,15 @@ static int native_kick_ap(unsigned int c
 	return err;
 }
 
-int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
+int arch_cpuhp_kick_ap_alive(unsigned int cpu, struct task_struct *tidle)
 {
-	return native_kick_ap(cpu, tidle);
+	return smp_ops.kick_ap_alive(cpu, tidle);
 }
 
 void arch_cpuhp_cleanup_kick_cpu(unsigned int cpu)
 {
 	/* Cleanup possible dangling ends... */
-	if (smp_ops.cpu_up == native_cpu_up && x86_platform.legacy.warm_reset)
+	if (smp_ops.kick_ap_alive == native_kick_ap && x86_platform.legacy.warm_reset)
 		smpboot_restore_warm_reset_vector();
 }
 
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -314,7 +314,7 @@ cpu_initialize_context(unsigned int cpu,
 	return 0;
 }
 
-static int xen_pv_cpu_up(unsigned int cpu, struct task_struct *idle)
+static int xen_pv_kick_ap(unsigned int cpu, struct task_struct *idle)
 {
 	int rc;
 
@@ -438,7 +438,7 @@ static const struct smp_ops xen_smp_ops
 	.smp_prepare_cpus = xen_pv_smp_prepare_cpus,
 	.smp_cpus_done = xen_smp_cpus_done,
 
-	.cpu_up = xen_pv_cpu_up,
+	.kick_ap_alive = xen_pv_kick_ap,
 	.cpu_die = xen_pv_cpu_die,
 	.cleanup_dead_cpu = xen_pv_cleanup_dead_cpu,
 	.poll_sync_state = xen_pv_poll_sync_state,



From xen-devel-bounces@lists.xenproject.org Sat May 13 00:52:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 00:52:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534081.831299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxdV1-0008Bt-NB; Sat, 13 May 2023 00:52:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534081.831299; Sat, 13 May 2023 00:52:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxdV1-0008Bm-JQ; Sat, 13 May 2023 00:52:39 +0000
Received: by outflank-mailman (input) for mailman id 534081;
 Sat, 13 May 2023 00:52:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=g/2X=BC=m5p.com=ehem@srs-se1.protection.inumbo.net>)
 id 1pxdV0-0008Bf-2c
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 00:52:38 +0000
Received: from mailhost.m5p.com (mailhost.m5p.com [74.104.188.4])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7222bb3b-f128-11ed-b229-6b7b168915f2;
 Sat, 13 May 2023 02:52:35 +0200 (CEST)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 34D0qPv3060784
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO)
 for <xen-devel@lists.xenproject.org>; Fri, 12 May 2023 20:52:31 -0400 (EDT)
 (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 34D0qPmw060783
 for xen-devel@lists.xenproject.org; Fri, 12 May 2023 17:52:25 -0700 (PDT)
 (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7222bb3b-f128-11ed-b229-6b7b168915f2
Date: Fri, 12 May 2023 17:52:25 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Subject: PVH feature omission
Message-ID: <ZF7fSeYH/NK105EQ@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.6
X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on mattapan.m5p.com

From tools/libs/light/libxl_x86.c: libxl__arch_passthrough_mode_setdefault()

"passthrough not yet supported for x86 PVH guests\n"

So PVH is recommended for most situations, but it is /still/ impossible
to pass hardware devices to PVH domains?  It seems odd this has never been
addressed given how long PVH has been around.

The other tooling omission I noticed is that `xl` appears to have no
support for pvSCSI.  It might be infrequently used, but seems similarly
valuable for domains focused on handling storage.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat May 13 01:17:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 01:17:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534089.831309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxdsd-0000mv-MZ; Sat, 13 May 2023 01:17:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534089.831309; Sat, 13 May 2023 01:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxdsd-0000mo-Jm; Sat, 13 May 2023 01:17:03 +0000
Received: by outflank-mailman (input) for mailman id 534089;
 Sat, 13 May 2023 01:17:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lw2b=BC=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pxdsc-0000mh-4Q
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 01:17:02 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc193b1f-f12b-11ed-b229-6b7b168915f2;
 Sat, 13 May 2023 03:17:00 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 3FAE363255;
 Sat, 13 May 2023 01:16:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D980AC433EF;
 Sat, 13 May 2023 01:16:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc193b1f-f12b-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683940618;
	bh=30eKHqbzQynPu04VrMohs0VGvFQ+qPtTp6DAV8KaRgM=;
	h=Date:From:To:cc:Subject:From;
	b=beS5q9cNoQ3Sat/imqwBY0KHZY09BqY9sdd0uCEgKMryW5G7u1ymRKC8VGwW0Ppeh
	 UFOfbs+T6x5SAm4HesS8LH8Au/rUvD+W3lWcw9EvWiaJI/IT+azMcXuThPRRFgU0Eu
	 5uqTNY8nvHN2K3ANnR9+uQF5nT5KVTX87I9a+LrjuZ7oC1sbSNm+rqbCWeDBHjQ6HF
	 oRD4uIQBpggga+3DpNCMErJVSAaws5VmhziuySvnNnSTTXTbCVh+wvNGO/Fjegr0h4
	 n0Eo8clMqT+vrJbtJSPeSmbeaKQkb8kFDUvwf5GJDLjjFlh6qfYe0Lm6q7hHGyjIy5
	 42iN/okXoowyg==
Date: Fri, 12 May 2023 18:16:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: roger.pau@citrix.com, andrew.cooper3@citrix.com, jbeulich@suse.com
cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org, 
    Xenia.Ragiadakou@amd.com
Subject: [PATCH 0/2] PVH Dom0 on QEMU
Message-ID: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

These 2 patches are necessary to boot Xen and a PVH Dom0 on QEMU (with or
without KVM acceleration).

The first one is a genuine fix. The second one is a workaround: I don't
know the underlying root cause of the problem.

Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Sat May 13 01:17:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 01:17:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534090.831319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxdt3-0001ES-Uc; Sat, 13 May 2023 01:17:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534090.831319; Sat, 13 May 2023 01:17:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxdt3-0001EL-RS; Sat, 13 May 2023 01:17:29 +0000
Received: by outflank-mailman (input) for mailman id 534090;
 Sat, 13 May 2023 01:17:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lw2b=BC=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pxdt2-0001E1-Vd
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 01:17:28 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eb5e9da5-f12b-11ed-8611-37d641c3527e;
 Sat, 13 May 2023 03:17:26 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0358661DB1;
 Sat, 13 May 2023 01:17:25 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7C0C7C433D2;
 Sat, 13 May 2023 01:17:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb5e9da5-f12b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683940644;
	bh=2XytKpG4OSRs3JrzoCNRkbwec3tS/ewt5oNDo9tHoCI=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=clI0cieSNQPbJOut//0+xr5HlGcqxjGzeb4OQjYWS7GeROGV1/7ThtgGAqlS8WEDM
	 T8KNWl1ydCZMZloixzBO1PY+xv8aNNd+/3Ba0ToRPTOlVDZE/zXIIZwT8FBbQ911A7
	 ICmcO7ZhAeWUFc8FE/wykT8xWlN1c58TQmyNrCiu3FgoBhKvqfJfnfrEibnEZ+ABNH
	 up0Hnc5vZ2BJwXJ4gfGuoi7qKw4lXazOF2dX+s4d/2+4i3WGxGiC/B2q02zrKXD98H
	 mj7W3XjeQn6iQL7WR4vzlWyeHBLNSIi4YFq+HXUo7kCl8fUABZ9JtsPivC2jP8T+zf
	 cLj40ROvLnFwA==
From: Stefano Stabellini <sstabellini@kernel.org>
To: roger.pau@citrix.com,
	andrew.cooper3@citrix.com,
	jbeulich@suse.com
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT generation
Date: Fri, 12 May 2023 18:17:19 -0700
Message-Id: <20230513011720.3978354-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@amd.com>

Xen always generates an XSDT table, even if the firmware provided an RSDT
table. Instead of copying the XSDT header from the firmware table (which
might be missing), generate the XSDT header from a preset.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
 1 file changed, 9 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 307edc6a8c..5fde769863 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
                                       paddr_t *addr)
 {
     struct acpi_table_xsdt *xsdt;
-    struct acpi_table_header *table;
-    struct acpi_table_rsdp *rsdp;
     const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
     unsigned long size = sizeof(*xsdt);
     unsigned int i, j, num_tables = 0;
-    paddr_t xsdt_paddr;
     int rc;
+    struct acpi_table_header header = {
+        .signature    = "XSDT",
+        .length       = sizeof(struct acpi_table_header),
+        .revision     = 0x1,
+        .oem_id       = "Xen",
+        .oem_table_id = "HVM",
+        .oem_revision = 0,
+    };
 
     /*
      * Restore original DMAR table signature, we are going to filter it from
@@ -1001,26 +1006,7 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
         goto out;
     }
 
-    /* Copy the native XSDT table header. */
-    rsdp = acpi_os_map_memory(acpi_os_get_root_pointer(), sizeof(*rsdp));
-    if ( !rsdp )
-    {
-        printk("Unable to map RSDP\n");
-        rc = -EINVAL;
-        goto out;
-    }
-    xsdt_paddr = rsdp->xsdt_physical_address;
-    acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
-    table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
-    if ( !table )
-    {
-        printk("Unable to map XSDT\n");
-        rc = -EINVAL;
-        goto out;
-    }
-    xsdt->header = *table;
-    acpi_os_unmap_memory(table, sizeof(*table));
-
+    xsdt->header = header;
     /* Add the custom MADT. */
     xsdt->table_offset_entry[0] = madt_addr;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sat May 13 01:17:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 01:17:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534091.831325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxdt4-0001Hs-9H; Sat, 13 May 2023 01:17:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534091.831325; Sat, 13 May 2023 01:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxdt4-0001H2-2O; Sat, 13 May 2023 01:17:30 +0000
Received: by outflank-mailman (input) for mailman id 534091;
 Sat, 13 May 2023 01:17:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lw2b=BC=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pxdt3-0000mh-1G
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 01:17:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ec1f4da1-f12b-11ed-b229-6b7b168915f2;
 Sat, 13 May 2023 03:17:27 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 415B764F81;
 Sat, 13 May 2023 01:17:26 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B4A4CC4339C;
 Sat, 13 May 2023 01:17:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec1f4da1-f12b-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683940645;
	bh=Vqt3dc2X13y3hImbmGgang3paCHhm/n+3O0YoROZygc=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=rVLTB26Ei78khR18G+mDArFo+ngYBjl+oolbzCkU2dwF7MUexmfmpaPUabnzLbqsG
	 +vkE39MaENgM/uffCfXaQbrMKC9UG9+UqdGDcRW+W3cQbvroUU0w9UjZM1P9mxQltE
	 plccqG+mqnRPB9q/tVFzYSJW7cVN4Jylht74me8Ek9dTNMFglPSk77xDE3krNYbW7u
	 7ISjBh54NZ4HnItGmwFkKeyHBNb4Wia1H+KAfClw3D3kSJ39sDhce4modls5SI7mj2
	 nQNTkroWbKmPPh0qEsnpYZY4nsCo8hjTNOOeWZqQ3EhrXgNHe3/+iH+5bVMkks1UgU
	 K+1myxGuV7NMQ==
From: Stefano Stabellini <sstabellini@kernel.org>
To: roger.pau@citrix.com,
	andrew.cooper3@citrix.com,
	jbeulich@suse.com
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of mapping
Date: Fri, 12 May 2023 18:17:20 -0700
Message-Id: <20230513011720.3978354-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@amd.com>

Mapping the ACPI tables 1:1 to Dom0 PVH leads to memory corruption of
the tables in the guest. Instead, copy the tables to Dom0.

This is a workaround.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
As mentioned in the cover letter, this is an RFC workaround as I don't
know the cause of the underlying problem. I do know that this patch
solves what would otherwise be a hang at boot when Dom0 PVH attempts to
parse the ACPI tables.
---
 xen/arch/x86/hvm/dom0_build.c | 107 +++++++++-------------------------
 1 file changed, 27 insertions(+), 80 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 5fde769863..a6037fc6ed 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -73,32 +73,6 @@ static void __init print_order_stats(const struct domain *d)
             printk("order %2u allocations: %u\n", i, order_stats[i]);
 }
 
-static int __init modify_identity_mmio(struct domain *d, unsigned long pfn,
-                                       unsigned long nr_pages, const bool map)
-{
-    int rc;
-
-    for ( ; ; )
-    {
-        rc = map ?   map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn))
-                 : unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
-        if ( rc == 0 )
-            break;
-        if ( rc < 0 )
-        {
-            printk(XENLOG_WARNING
-                   "Failed to identity %smap [%#lx,%#lx) for d%d: %d\n",
-                   map ? "" : "un", pfn, pfn + nr_pages, d->domain_id, rc);
-            break;
-        }
-        nr_pages -= rc;
-        pfn += rc;
-        process_pending_softirqs();
-    }
-
-    return rc;
-}
-
 /* Populate a HVM memory range using the biggest possible order. */
 static int __init pvh_populate_memory_range(struct domain *d,
                                             unsigned long start,
@@ -967,6 +941,8 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
     unsigned long size = sizeof(*xsdt);
     unsigned int i, j, num_tables = 0;
     int rc;
+    struct acpi_table_fadt fadt;
+    unsigned long fadt_addr = 0, dsdt_addr = 0, facs_addr = 0, fadt_size = 0;
     struct acpi_table_header header = {
         .signature    = "XSDT",
         .length       = sizeof(struct acpi_table_header),
@@ -1013,10 +989,33 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
     /* Copy the addresses of the rest of the allowed tables. */
     for( i = 0, j = 1; i < acpi_gbl_root_table_list.count; i++ )
     {
+        void *table;
+
+        pvh_steal_ram(d, tables[i].length, 0, GB(4), addr);
+        table = acpi_os_map_memory(tables[i].address, tables[i].length);
+        hvm_copy_to_guest_phys(*addr, table, tables[i].length, d->vcpu[0]);
+        pvh_add_mem_range(d, *addr, *addr + tables[i].length, E820_ACPI);
+
+        if ( !strncmp(tables[i].signature.ascii, ACPI_SIG_FADT, ACPI_NAME_SIZE) )
+        {
+            memcpy(&fadt, table, tables[i].length);
+            fadt_addr = *addr;
+            fadt_size = tables[i].length;
+        }
+        else if ( !strncmp(tables[i].signature.ascii, ACPI_SIG_DSDT, ACPI_NAME_SIZE) )
+                dsdt_addr = *addr;
+        else if ( !strncmp(tables[i].signature.ascii, ACPI_SIG_FACS, ACPI_NAME_SIZE) )
+                facs_addr = *addr;
+
         if ( pvh_acpi_xsdt_table_allowed(tables[i].signature.ascii,
-                                         tables[i].address, tables[i].length) )
-            xsdt->table_offset_entry[j++] = tables[i].address;
+                    tables[i].address, tables[i].length) )
+            xsdt->table_offset_entry[j++] = *addr;
+
+        acpi_os_unmap_memory(table, tables[i].length);
     }
+    fadt.dsdt = dsdt_addr;
+    fadt.facs = facs_addr;
+    hvm_copy_to_guest_phys(fadt_addr, &fadt, fadt_size, d->vcpu[0]);
 
     xsdt->header.revision = 1;
     xsdt->header.length = size;
@@ -1055,9 +1054,7 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
 
 static int __init pvh_setup_acpi(struct domain *d, paddr_t start_info)
 {
-    unsigned long pfn, nr_pages;
     paddr_t madt_paddr, xsdt_paddr, rsdp_paddr;
-    unsigned int i;
     int rc;
     struct acpi_table_rsdp *native_rsdp, rsdp = {
         .signature = ACPI_SIG_RSDP,
@@ -1065,56 +1062,6 @@ static int __init pvh_setup_acpi(struct domain *d, paddr_t start_info)
         .length = sizeof(rsdp),
     };
 
-
-    /* Scan top-level tables and add their regions to the guest memory map. */
-    for( i = 0; i < acpi_gbl_root_table_list.count; i++ )
-    {
-        const char *sig = acpi_gbl_root_table_list.tables[i].signature.ascii;
-        unsigned long addr = acpi_gbl_root_table_list.tables[i].address;
-        unsigned long size = acpi_gbl_root_table_list.tables[i].length;
-
-        /*
-         * Make sure the original MADT is also mapped, so that Dom0 can
-         * properly access the data returned by _MAT methods in case it's
-         * re-using MADT memory.
-         */
-        if ( strncmp(sig, ACPI_SIG_MADT, ACPI_NAME_SIZE)
-             ? pvh_acpi_table_allowed(sig, addr, size)
-             : !acpi_memory_banned(addr, size) )
-             pvh_add_mem_range(d, addr, addr + size, E820_ACPI);
-    }
-
-    /* Identity map ACPI e820 regions. */
-    for ( i = 0; i < d->arch.nr_e820; i++ )
-    {
-        if ( d->arch.e820[i].type != E820_ACPI &&
-             d->arch.e820[i].type != E820_NVS )
-            continue;
-
-        pfn = PFN_DOWN(d->arch.e820[i].addr);
-        nr_pages = PFN_UP((d->arch.e820[i].addr & ~PAGE_MASK) +
-                          d->arch.e820[i].size);
-
-        /* Memory below 1MB has been dealt with by pvh_populate_p2m(). */
-        if ( pfn < PFN_DOWN(MB(1)) )
-        {
-            if ( pfn + nr_pages <= PFN_DOWN(MB(1)) )
-                continue;
-
-            /* This shouldn't happen, but is easy to deal with. */
-            nr_pages -= PFN_DOWN(MB(1)) - pfn;
-            pfn = PFN_DOWN(MB(1));
-        }
-
-        rc = modify_identity_mmio(d, pfn, nr_pages, true);
-        if ( rc )
-        {
-            printk("Failed to map ACPI region [%#lx, %#lx) into Dom0 memory map\n",
-                   pfn, pfn + nr_pages);
-            return rc;
-        }
-    }
-
     rc = pvh_setup_acpi_madt(d, &madt_paddr);
     if ( rc )
         return rc;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sat May 13 01:28:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 01:28:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534103.831339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxe3u-0003ZW-GV; Sat, 13 May 2023 01:28:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534103.831339; Sat, 13 May 2023 01:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxe3u-0003ZP-CC; Sat, 13 May 2023 01:28:42 +0000
Received: by outflank-mailman (input) for mailman id 534103;
 Sat, 13 May 2023 01:28:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lw2b=BC=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pxe3t-0003ZH-99
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 01:28:41 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c5dfdff-f12d-11ed-8611-37d641c3527e;
 Sat, 13 May 2023 03:28:38 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id AF48761DBC;
 Sat, 13 May 2023 01:28:37 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 57E94C433EF;
 Sat, 13 May 2023 01:28:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c5dfdff-f12d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1683941317;
	bh=g8o3uwylLBuE9F/y1l0IWZ3FGGtPG9MdJzEjSOTO+vI=;
	h=From:To:Cc:Subject:Date:From;
	b=QSyBF95jB/W1K3LaL71FEECsNrzKHfyzHIMPNMJy70rk3edF5aF+K4TYWyfN9woHH
	 705Ujnj5CeckOLx698n/tG7jlM9vBffwHwnMqnBwfTUDR5ceIirtkszrZOnA6CZ5Ch
	 qUyJEyg1d0dnfhmiM2Lollr1Jy84sjk7dPZBW5SkZc335+kYmyjVQKAhwM1E+8Geih
	 3D1a/uP0HRXMYd7EWU3BWVSWw1tDH+EDZfgaCWz2Xv0VCBFahSW0+uW5L2i1LWItHA
	 xD4sH2PTH+SdlpVRZGN7i1vlKSs5Dp1l/7Nsd/p4C7SBzvPspKH28FZvAN3BwNDkVp
	 aDoakhUviDg5A==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: marmarek@invisiblethingslab.com,
	cardoe@cardoe.com,
	sstabellini@kernel.org,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: [PATCH] automation: add a Dom0 PVH test based on Qubes' runner
Date: Fri, 12 May 2023 18:28:33 -0700
Message-Id: <20230513012833.3980872-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@amd.com>

Add a straightforward Dom0 PVH test based on the existing basic smoke
test for the Qubes runner.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 automation/gitlab-ci/test.yaml     |  8 ++++++++
 automation/scripts/qubes-x86-64.sh | 14 +++++++++-----
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 55ca0c27dc..9c0e08d456 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -149,6 +149,14 @@ adl-smoke-x86-64-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.12-gcc-debug
 
+adl-smoke-x86-64-dom0pvh-gcc-debug:
+  extends: .adl-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh dom0pvh 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
 adl-suspend-x86-64-gcc-debug:
   extends: .adl-x86-64
   script:
diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 056faf9e6d..35b9386e5d 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -5,6 +5,7 @@ set -ex
 test_variant=$1
 
 ### defaults
+dom0pvh=
 wait_and_wakeup=
 timeout=120
 domU_config='
@@ -18,8 +19,8 @@ vif = [ "bridge=xenbr0", ]
 disk = [ ]
 '
 
-### test: smoke test
-if [ -z "${test_variant}" ]; then
+### test: smoke test & smoke test PVH
+if [ -z "${test_variant}" ] || [ "${test_variant}" = "dom0pvh" ]; then
     passed="ping test passed"
     domU_check="
 ifconfig eth0 192.168.0.2
@@ -36,6 +37,9 @@ done
 tail -n 100 /var/log/xen/console/guest-domU.log
 echo \"${passed}\"
 "
+if [ "${test_variant}" = "dom0pvh" ]; then
+    dom0pvh="dom0=pvh"
+fi
 
 ### test: S3
 elif [ "${test_variant}" = "s3" ]; then
@@ -184,11 +188,11 @@ cd ..
 TFTP=/scratch/gitlab-runner/tftp
 CONTROLLER=control@thor.testnet
 
-echo '
-multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all dom0_mem=4G
+echo "
+multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all dom0_mem=4G $dom0pvh
 module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
 module2 (http)/gitlab-ci/initrd-dom0
-' > $TFTP/grub.cfg
+" > $TFTP/grub.cfg
 
 cp -f binaries/xen $TFTP/xen
 cp -f binaries/bzImage $TFTP/vmlinuz
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sat May 13 01:31:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 01:31:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534106.831349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxe6q-0004zh-Tn; Sat, 13 May 2023 01:31:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534106.831349; Sat, 13 May 2023 01:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxe6q-0004za-Qe; Sat, 13 May 2023 01:31:44 +0000
Received: by outflank-mailman (input) for mailman id 534106;
 Sat, 13 May 2023 01:31:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxe6p-0004zO-65; Sat, 13 May 2023 01:31:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxe6p-0004yO-2G; Sat, 13 May 2023 01:31:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxe6o-0007TB-Jb; Sat, 13 May 2023 01:31:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxe6o-00043X-JI; Sat, 13 May 2023 01:31:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gQ2gEhJj1SXQ8pEI+Nxgwbkd0faAWluFZDmmnrzxfiQ=; b=rL2SdwzHDtl7vxA+ruR9LrWwSS
	5XdZfQoMhSR6satUUThnmbH4j4/6+ejHI3UuyTMUYER1PtWjO4H5sEw3+FDjem9f2NBoCrTJ/DX8X
	Samp04n12jFHvkfOH42n0egFNj2CBHeC2lrzwveUaSdhHlNZ73YNm1FWeTSI5R2sIz7U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180633-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180633: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cc3c44c9fda264c6d401be04e95449a57c1231c6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 May 2023 01:31:42 +0000

flight 180633 linux-linus real [real]
flight 180640 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180633/
http://logs.test-lab.xenproject.org/osstest/logs/180640/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                cc3c44c9fda264c6d401be04e95449a57c1231c6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   26 days
Failing since        180281  2023-04-17 06:24:36 Z   25 days   47 attempts
Testing same since   180625  2023-05-11 22:11:49 Z    1 days    2 attempts

------------------------------------------------------------
2380 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 300411 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 13 01:43:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 01:43:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534113.831358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxeHw-0006aw-VZ; Sat, 13 May 2023 01:43:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534113.831358; Sat, 13 May 2023 01:43:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxeHw-0006ap-SC; Sat, 13 May 2023 01:43:12 +0000
Received: by outflank-mailman (input) for mailman id 534113;
 Sat, 13 May 2023 01:43:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cp9A=BC=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pxeHv-0006aj-V2
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 01:43:12 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8257891c-f12f-11ed-8611-37d641c3527e;
 Sat, 13 May 2023 03:43:08 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id BCB475C025F;
 Fri, 12 May 2023 21:43:06 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Fri, 12 May 2023 21:43:06 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 12 May 2023 21:43:05 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8257891c-f12f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1683942186; x=1684028586; bh=SeVijPJA3lT5I/vF0gZ0HZNMDzmtJKZk7U6
	iPzdbV0Y=; b=k1E40neXOXAuUps7OL9tFHwiMVQveii+c5eE3GitDARpVa3PW8C
	4iqiYqniv796yzbYfUDr8WWYne19pOcVa6/bO1jMQ94D2/Cc/p+9KiV2n6pmW+XN
	hAuPSieR6mOyWKoc7QdRIOmRvN6ohR0vWOvSTfRw035kccfIfroTDNrxTIlJbYHP
	WjJxC3RiS/B1bLiQ8fV1S4AXo3Owxq1k+UV58CULJTT8Q1jvvehnvedrteEzpq0r
	NNd5pRrjv3TAtrBewmhA2GlCJs5+A8Md722dMFAYD9+sSqgf+2mJRKibuUQlt+YT
	NSAqIUg5VhbZE6mJKuEUkyLArR54zBhoZNg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1683942186; x=1684028586; bh=SeVijPJA3lT5I
	/vF0gZ0HZNMDzmtJKZk7U6iPzdbV0Y=; b=ULPsbxsv0YE3wYa+TyUN2hcwj/QtU
	2OKyGSjxMWa2q4djc+erVL3O6fJAfCC+ib6jS3c/YICktrdyDlur4oVbGyheTbTp
	RoFXYf48lpJ5BsDiWntw2kajts1OZ16wuF/rXThotEyYq2p5cm0iJBI9zIq3eJAE
	uk3cfqh/xFMmy87kj0CVznYFta5UglGonOEHHtfjEGWpXkmShdqDI58Qh7efQqNU
	HAsjD71PSeM2P653znWYjY6j5hTKOg6CWlwIgX1FLGKTBKePuPJVfN8ht+CDupuH
	MlWiBuYX1jqfClWPIl8n11tFeEbQtr+CwSjI8+33VzrMLIUKBWBmakjvw==
X-ME-Sender: <xms:KuteZBEI9B8PllmZeZKJ7Cv6N9w0QoAxrx2nxtJMrWY6P7UgUvz_8w>
    <xme:KuteZGXRyE9knWiC-QoHkY03ozoNo6dkxauuIIltsD1BnTeikaR25lpsIZg0o9Vmq
    DeJAB-Rh49_Qw>
X-ME-Received: <xmr:KuteZDI-HvsF3ClRe3uJ7TdnbaD0ufTf8wGPJ_O6EQkVUCx76SkOMwWGV_Yc4gzp_3RwtYu51cMaBwShp0mR4904ayxXhdD3Soc>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeehuddgheduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:KuteZHHTvA49vT5tqT4MVzVrhgRTlsaeEKyJSz4qIgE469RV1gXOgQ>
    <xmx:KuteZHVRzRKWbHFwFqrmEptR7lUmXLvvRELTNUY9B69RU_2ZOECzqg>
    <xmx:KuteZCNpXYeXtUAye1vXWxBKH_ntHScK87hx6A_x2JjJ2FL3T8BjSw>
    <xmx:KuteZOd7VgebqLhlwidQSkghYOuZR4iJlAkyu4ZKS08uCWmK_pr25Q>
Feedback-ID: i1568416f:Fastmail
Date: Sat, 13 May 2023 03:43:02 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, cardoe@cardoe.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH] automation: add a Dom0 PVH test based on Qubes' runner
Message-ID: <ZF7rJggPX6rOfeiS@mail-itl>
References: <20230513012833.3980872-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="f2H6d/DJQANNBQfA"
Content-Disposition: inline
In-Reply-To: <20230513012833.3980872-1-sstabellini@kernel.org>


--f2H6d/DJQANNBQfA
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sat, 13 May 2023 03:43:02 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, cardoe@cardoe.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH] automation: add a Dom0 PVH test based on Qubes' runner

On Fri, May 12, 2023 at 06:28:33PM -0700, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
>=20
> Straightforward Dom0 PVH test based on the existing basic Smoke test for
> the Qubes runner.
>=20
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> ---
>  automation/gitlab-ci/test.yaml     |  8 ++++++++
>  automation/scripts/qubes-x86-64.sh | 14 +++++++++-----
>  2 files changed, 17 insertions(+), 5 deletions(-)
>=20
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.y=
aml
> index 55ca0c27dc..9c0e08d456 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -149,6 +149,14 @@ adl-smoke-x86-64-gcc-debug:
>      - *x86-64-test-needs
>      - alpine-3.12-gcc-debug
> =20
> +adl-smoke-x86-64-dom0pvh-gcc-debug:
> +  extends: .adl-x86-64
> +  script:
> +    - ./automation/scripts/qubes-x86-64.sh dom0pvh 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *x86-64-test-needs
> +    - alpine-3.12-gcc-debug
> +
>  adl-suspend-x86-64-gcc-debug:
>    extends: .adl-x86-64
>    script:
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qube=
s-x86-64.sh
> index 056faf9e6d..35b9386e5d 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -5,6 +5,7 @@ set -ex
>  test_variant=3D$1
> =20
>  ### defaults
> +dom0pvh=3D

Maybe better to name it extra_xen_opts=3D? I can see it being useful in
other tests in the future too.
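
For illustration only (not part of the patch): the suggestion above could
look roughly like this in qubes-x86-64.sh, with the dom0pvh-specific
variable generalized into extra_xen_opts (the name is only the reviewer's
proposal, the surrounding logic is a sketch):

```shell
# Sketch: a generic extra_xen_opts variable that any test variant can
# fill with additional Xen command-line options.
test_variant="dom0pvh"   # normally taken from the script's first argument
extra_xen_opts=

if [ "${test_variant}" = "dom0pvh" ]; then
    # Boot dom0 as a PVH guest instead of the default PV.
    extra_xen_opts="dom0=pvh"
fi

# The variable would then be appended to the multiboot2 line written
# into grub.cfg, alongside the existing console and loglvl options.
echo "extra Xen options: ${extra_xen_opts}"
```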

>  wait_and_wakeup=3D
>  timeout=3D120
>  domU_config=3D'
> @@ -18,8 +19,8 @@ vif =3D [ "bridge=3Dxenbr0", ]
>  disk =3D [ ]
>  '
> =20
> -### test: smoke test
> -if [ -z "${test_variant}" ]; then
> +### test: smoke test & smoke test PVH
> +if [ -z "${test_variant}" ] || [ "${test_variant}" =3D "dom0pvh" ]; then
>      passed=3D"ping test passed"
>      domU_check=3D"
>  ifconfig eth0 192.168.0.2
> @@ -36,6 +37,9 @@ done
>  tail -n 100 /var/log/xen/console/guest-domU.log
>  echo \"${passed}\"
>  "
> +if [ "${test_variant}" =3D "dom0pvh" ]; then
> +    dom0pvh=3D"dom0=3Dpvh"
> +fi
> =20
>  ### test: S3
>  elif [ "${test_variant}" =3D "s3" ]; then
> @@ -184,11 +188,11 @@ cd ..
>  TFTP=3D/scratch/gitlab-runner/tftp
>  CONTROLLER=3Dcontrol@thor.testnet
> =20
> -echo '
> -multiboot2 (http)/gitlab-ci/xen console=3Dcom1 com1=3D115200,8n1 loglvl=
=3Dall guest_loglvl=3Dall dom0_mem=3D4G
> +echo "
> +multiboot2 (http)/gitlab-ci/xen console=3Dcom1 com1=3D115200,8n1 loglvl=
=3Dall guest_loglvl=3Dall dom0_mem=3D4G $dom0pvh
>  module2 (http)/gitlab-ci/vmlinuz console=3Dhvc0 root=3D/dev/ram0
>  module2 (http)/gitlab-ci/initrd-dom0
> -' > $TFTP/grub.cfg
> +" > $TFTP/grub.cfg
> =20
>  cp -f binaries/xen $TFTP/xen
>  cp -f binaries/bzImage $TFTP/vmlinuz
> --=20
> 2.25.1
>=20
>=20

--=20
Best Regards,
Marek Marczykowski-G=C3=B3recki
Invisible Things Lab

--f2H6d/DJQANNBQfA
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRe6yYACgkQ24/THMrX
1yyHBQf/ZuZuWiV5FUg2rY+g9fndhIpRAaddR1DawyWYigfS60X/WOjs+ZE+F+vE
jsF56zSn94W19TeBkVKm6O8jv6EtYrBKa5HWdmLtxhOSoi98X7mqj/CftkrywI6m
9LlcajrLPsV/L5adpuLkP6oHWpCuHfqsPNSvEUr4TiiT+C81qR4x+5rAVr/hgrxY
aPrXRhzawn/AzpCgLNVgZH20HTTvxN5jpKqdx2yAg377rm2uwmLgq6BeBhJqkE7/
cSmhJlg52BedqJD4ibtJT/+YuXW1MWzG7AWhGX33/Pm0tlagVFVTgFZMTQPpUzr5
P5hMkakWt+poJcr+6h8h5WkD5EFJ0g==
=N17w
-----END PGP SIGNATURE-----

--f2H6d/DJQANNBQfA--


From xen-devel-bounces@lists.xenproject.org Sat May 13 02:13:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 02:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534121.831399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxel4-0002wt-5M; Sat, 13 May 2023 02:13:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534121.831399; Sat, 13 May 2023 02:13:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxel4-0002wk-2A; Sat, 13 May 2023 02:13:18 +0000
Received: by outflank-mailman (input) for mailman id 534121;
 Sat, 13 May 2023 02:13:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cp9A=BC=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pxel2-0002fe-Oh
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 02:13:16 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b63e5fc3-f133-11ed-b229-6b7b168915f2;
 Sat, 13 May 2023 04:13:14 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id F18755C0230;
 Fri, 12 May 2023 22:13:11 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Fri, 12 May 2023 22:13:11 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 12 May 2023 22:13:10 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b63e5fc3-f133-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1683943991; x=1684030391; bh=Pq
	0ODF4oWtMA348eaNF9YassconxWjLy3yNF0iJyH1E=; b=FTKYYRqT1YMETKA5PJ
	pyIdjmf74CAq9jXOhJxBz6HfcOc2lehOtt2pKxjmuxAdjmP9bgVHMi34DxUPWg5r
	rKc4nvtv9tDiZFKalPL2qpu4bCxxIv0U/j/yoP5pLUS8bRMLHCgN/ySqoJmRhRLm
	ksOpuDVfNBuN0FYU8RJ+UcmXwidhw52Gh0vrwYSvBqge9ulPCIdelepTO7OxCuB5
	SicX2MYSlB+qaLHBAZKLUMjRWh8fKbh0Ofeq72NDfp1VxVOXbtlDFkJJLOXaPOOk
	6VBv2PIzrAKCymQweTILDqwQHc4uv4kG+lzmhst3eXPiCyFGbY++6d9AUvwwIl+o
	uB9w==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=
	1683943991; x=1684030391; bh=Pq0ODF4oWtMA348eaNF9YassconxWjLy3yN
	F0iJyH1E=; b=oPq5DlDwW9cHaLN6XBZCy/0l1fSl9Nob/VInM2Yi4baMJ1+Y0I1
	sp1crTi8ISUHBVq/iyCpcnZSatpP56/ryA4CzSPcSB13/P8snSDDyDLuUR7H6mEw
	OsZY8nbTa6o5MUr2mAUREFZXa1fPkKuGRH+bBtb34DnmBcKtmEmDxruJhiQ2xdw6
	ezW+/y6lKeJ/BSHDiW2PRbE2y5c8xN1sHcVE9B8w1gJeCc5K/efK0x/lO/qF6WLR
	xU3PjBXryMInM+Xfw440agMXo5MSpXiyyIA25lNMzg0cavjewvNXDVWFsf+4GjbB
	nR2uEvutiMMkCc8qa4+Xk0kt0OZaEJ10jZA==
X-ME-Sender: <xms:N_JeZBMLdqaqGclVRNDY7I-6X4pEPXU4HwzijLEwB1psIgyS9TbVKg>
    <xme:N_JeZD_lr-W3irfQQ3eDK5EzT6MKEk6TlBmYDW2AqpEaW8kP7J4lsPbTR3a8jbig7
    7qW842pKruwhg>
X-ME-Received: <xmr:N_JeZASOT8NNa2p6ihHHvI1stCJ6lXhTpSZ_E-OUqwwFGRg8X3SH65nRaJh-lWamSm1v4zCByywqqMxlwogGq9GcStqIOJY1B8guZIRU7Ax4KhgbM3IC>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeehuddgheekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:N_JeZNt7Ckuw6i8pwVPMs-yA17PP18YlHoJwDwcOvhXVir19bF4Qgg>
    <xmx:N_JeZJdC5bKFKVpC9DkwJuFBu8c_3YNhBfEJW8_vU7Ik9NU8Fqk6fw>
    <xmx:N_JeZJ2gi_vE3eoUg_3EuHCjCeBwkadUm8aC4M3mKtlZTCfeZka_qQ>
    <xmx:N_JeZJEHRTaX_SzEsoFURRPj73ZPC5cTKx2I6ufHKRhWy1QpFm0HWQ>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 3/4] automation: add x86_64 tests on a AMD Zen3+ runner
Date: Sat, 13 May 2023 04:12:46 +0200
Message-Id: <741648760682e3097a0d984342e5cad9387172cf.1683943670.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
References: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This adds another physical runner to GitLab CI, running a similar set of
jobs to the one on the Alder Lake runner.

The machine specifically is a MinisForum UM773 Lite with an AMD Ryzen 7
7735HS.

The PV passthrough test is skipped as it currently fails on this system
with:
(d1) Can't find new memory area for initrd needed due to E820 map conflict

The S3 test is skipped as it currently fails - the system seems to
suspend properly (the power LED blinks), but when woken up the power LED
goes back to solid on and the fan spins at top speed, yet otherwise there
are no signs of life from the system (no output on the console, HDMI or
anything else).

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/gitlab-ci/test.yaml | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index cb7fd5c272e9..81d027532cca 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -108,6 +108,16 @@
   tags:
     - qubes-hw2
 
+.zen3p-x86-64:
+  # it's really similar to the above
+  extends: .adl-x86-64
+  variables:
+    PCIDEV: "01:00.0"
+    PCIDEV_INTR: "MSI-X"
+    CONSOLE_OPTS: "console=com1 com1=115200,8n1,pci,msi"
+  tags:
+    - qubes-hw11
+
 # Test jobs
 build-each-commit-gcc:
   extends: .test-jobs-common
@@ -176,6 +186,22 @@ adl-pci-hvm-x86-64-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.12-gcc-debug
 
+zen3p-smoke-x86-64-gcc-debug:
+  extends: .zen3p-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
+zen3p-pci-hvm-x86-64-gcc-debug:
+  extends: .zen3p-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh pci-hvm 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
 qemu-smoke-dom0-arm64-gcc:
   extends: .qemu-arm64
   script:
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Sat May 13 02:13:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 02:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534118.831369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxekz-0002Bh-7n; Sat, 13 May 2023 02:13:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534118.831369; Sat, 13 May 2023 02:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxekz-0002Ba-3r; Sat, 13 May 2023 02:13:13 +0000
Received: by outflank-mailman (input) for mailman id 534118;
 Sat, 13 May 2023 02:13:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cp9A=BC=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pxekx-0002BP-V9
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 02:13:11 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b3f5d380-f133-11ed-8611-37d641c3527e;
 Sat, 13 May 2023 04:13:09 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id 110ED5C0231;
 Fri, 12 May 2023 22:13:08 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Fri, 12 May 2023 22:13:08 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 12 May 2023 22:13:07 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3f5d380-f133-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm3; t=1683943988; x=1684030388; bh=p6TUSKPLj/zKTSgTeu3n1zyjI
	MOvCJeqWuFvF1tlZPA=; b=HxuCbmeQTT+p22bxT4bkEKecAUbpufbCttwOYBcXc
	3Mv8GFFe6U9bR+MxklkxWRxHz4P3Fsfck6s0tN+B8YIjOwh5KZknoEL+uEF20+80
	wB0IaSJnMyJTfk4i9j3AtCqQ9uaIxHZGrzs/q/G4HDZJquj/nJqgrhQDWTl9QH5h
	E/eN9x24ZjBskQo+fyi1TYFlahSNimF2JhYcKP0xeshjte5lKq4zlJe8FVfTJP3V
	vcVYk2HzvHaKnX6XLB70swOmz741PiRXPcF765q2eW3s6JGBbQcDMRnXTmAJQM+v
	9S/ffjRBQJ0hfKeHxz6/ftg9ETrLc/c5nPLsx7e5QyWCg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm1; t=1683943988; x=1684030388; bh=p
	6TUSKPLj/zKTSgTeu3n1zyjIMOvCJeqWuFvF1tlZPA=; b=WahvPY47yyVVU8PwX
	CmgcwNhtWHPky79+k3Adm4DcU9xL0+BQY15lUrNwuCb2O26tsbPHf0tSQwp09b9d
	laRXPXKV45OsPF1CCJeOuJ/IAtoGesfxl686FsDki0Y8tgWFJVgSpKJK1XP6vbvq
	8+pDGLrWtgfKxgruuazG8MbN8mbLQbGA/FYcnVZ4qW7gW/hPvxdOwhcwNiunuJNr
	1g24K4FvbE4wu6ia8+qQpHxAgjtw2pT2ElojF6eGzJJ1U8zQEUDDlPnY3MBFgGN9
	pPO6iuK33l7U6U3DsX7REbWU+UMaMulnby2yOxhQy2eZWq4tc7bzOGe36z/tw1E5
	wrcIw==
X-ME-Sender: <xms:M_JeZKHJ_Q2UbLf3CZ19Rl6W-xmdEiLn6qm_3xctM1p0tHUHLdpHGA>
    <xme:M_JeZLWbl75kuJoFi3Z8wY6jJvaG8H3FKG2rONOajCh_SGVJgqywMCsBZxogTd6OU
    r6Ljh-F1Njniw>
X-ME-Received: <xmr:M_JeZEJCrtQNbGgWieW4BhJcHjPteBBBYELQEkfFg24e4XnlUmvKMj6ARMt81Bomz6qVu1_1ruQ64MGelRxzkOnCoiEcCMlFjMkkGsHTfg99zuavHUMa>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeehuddgheekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucenucfjughrpefhvfevufffkffogggtgfesthekre
    dtredtjeenucfhrhhomhepofgrrhgvkhcuofgrrhgtiiihkhhofihskhhiqdfikphrvggt
    khhiuceomhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
    eqnecuggftrfgrthhtvghrnhepleekhfduleetleelleetteevfeefteffkeetteejheel
    gfegkeelgeehhfdthedvnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrg
    hilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdr
    tghomh
X-ME-Proxy: <xmx:M_JeZEHudLP6Aga0j4jNj_edwO8sNEQc3pfiNOSQ7F-QdqJ7C7OdzQ>
    <xmx:M_JeZAWcB7Vh1d8nsWUJ8EEJLQ4cDXrqsuuQQyG-LWeztkbKD2sfTg>
    <xmx:M_JeZHOI7JXClRICaNjQhZoX17w5_T3XLctnp7dP-VjnCCjay7QApw>
    <xmx:NPJeZDCwLRfP65DRexwtbL3vJRigX_aGrj-KW0SpDh-J3vcXoqCYfg>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH 0/4] automation: add AMD hw runner
Date: Sat, 13 May 2023 04:12:43 +0200
Message-Id: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This series adds another hardware runner to the CI.
See individual patch descriptions for details.

Marek Marczykowski-Górecki (4):
  automation: make console options configurable via variables
  automation: enable earlyprintk=xen for both dom0 and domU in hw tests
  automation: add x86_64 tests on a AMD Zen3+ runner
  automation: add PV passthrough tests on a AMD Zen3+ runner

 automation/gitlab-ci/test.yaml     | 36 +++++++++++++++++++++++++++++++-
 automation/scripts/qubes-x86-64.sh | 10 ++++-----
 2 files changed, 41 insertions(+), 5 deletions(-)

base-commit: 31c65549746179e16cf3f82b694b4b1e0b7545ca
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Sat May 13 02:13:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 02:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534122.831409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxel5-0003DU-Eb; Sat, 13 May 2023 02:13:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534122.831409; Sat, 13 May 2023 02:13:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxel5-0003DN-BU; Sat, 13 May 2023 02:13:19 +0000
Received: by outflank-mailman (input) for mailman id 534122;
 Sat, 13 May 2023 02:13:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cp9A=BC=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pxel3-0002fe-Oj
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 02:13:17 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b5852979-f133-11ed-b229-6b7b168915f2;
 Sat, 13 May 2023 04:13:15 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailout.nyi.internal (Postfix) with ESMTP id 819205C023B;
 Fri, 12 May 2023 22:13:10 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Fri, 12 May 2023 22:13:10 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 12 May 2023 22:13:09 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5852979-f133-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1683943990; x=1684030390; bh=Gi
	NsMeX+zewnzsWVeGVX0jm6GR36XVArwG8BgmM3BnY=; b=lpVHpz6lJzix7sHvDl
	pxFM8lhTF32KT/IQdzFFbEt8WDad2Fh1jV05CuW7sNeLVK0kYCf17yRjd4i+V/ID
	70AQQKsNPc3nzesy9TUt186fxhwWstlhzujnYL7cBkhG+o/3fKthL5Dz1ZCcxn2h
	wh2eXnlaUKAksi86HJvEUCF/6LOr3XUWxcJmTJA3AVZLJWscZbNf0xej3Kz6JN+a
	Ub0moDYNW+/LmdhF0SGAu/Asg9zsRr3gnZiY2WTC476cXt7q/qWPANITMhNSeL+C
	0HrRwOl9snaAzE9qxBJVDrEocZNLzxe1nvdKZIEtXVQ0xtxMtpDRc3wg88ffdTX8
	xJWg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=
	1683943990; x=1684030390; bh=GiNsMeX+zewnzsWVeGVX0jm6GR36XVArwG8
	BgmM3BnY=; b=Bzh0cg6liXki1XVf5GDnlHHOMudOggd+GR0IHG7rPBDzv4jlyZ9
	dhlNtcdq/AG23vcbCLbQ03jl3YSQKQ7jFXSVWiMueKc19+O+6vhfpK7FavwHt+76
	zRLp3uW6EDTTWTj36XSEgwem/B9qwc521QEH9372eeXac4dpMSyRqwQU7g/BRYJi
	miTYPcex5CNP14tMgreIyRKamsh99mphA9sGHOiHmu8P2Jupry2iO0UthDiaI8cN
	h2uWp/KQO+XDZY/QeKLFxnWNOm9qJQ05K6a9uPaFtNaI3wyIgp5wAlblJhYv+m+p
	36/tTWonYBRJLuFTOGFtZaRSq0k4tRtTWrw==
X-ME-Sender: <xms:NvJeZJN9cNvicmVEiNp4mdRPys920qpiqOG6U5fD9GeGxk3zpHZ6Mw>
    <xme:NvJeZL-lMQXGiqlDEsVBkmUpWYF6VvPoWo1opULr6IUlCDiICiQnCjjDINIF-j08M
    h7sj4XwlwtEEA>
X-ME-Received: <xmr:NvJeZISEgTZZ7G3slpGexG-Sos97aog4486PxDjEvfzP07UIQq2t-cTFNRHkU3GdsBI_dbB8KUbNStFB1biiqD5rPuz9b13NH17vmbUzFISv-1t8PS3l>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeehuddgheekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:NvJeZFtwx3gRlSj2E_cRys3AA96i4qNN0qByb1c4g6MM-BuNbOxeng>
    <xmx:NvJeZBddBFY853gATdwflFCqOApGDp4zfJ-xzsFmP4dnykTSIYt3RA>
    <xmx:NvJeZB3pmM9KuwcVeQB7kNO9CVYfin7_pB4ozmOf8b7TrDz3PCfeoA>
    <xmx:NvJeZBEM_E07lwNIuIdibg5oahmiWK7yNWNT7fc3Mu-9E0_H6CcrKw>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 2/4] automation: enable earlyprintk=xen for both dom0 and domU in hw tests
Date: Sat, 13 May 2023 04:12:45 +0200
Message-Id: <7247aca99f5faf35ff1c6efd048a10c08883bc41.1683943670.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
References: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Make it easier to debug early boot failures from the CI logs alone.
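As a minimal sketch, once earlyprintk=xen is on the kernel command line the
early boot messages end up in the captured serial log, so a failure can be
inspected with a simple grep. The log filename and its contents below are
illustrative stand-ins, not artifacts from this patch:

```shell
# Sketch: look for early kernel output in a captured serial log.
# "serial.log" and its contents are stand-ins for the real CI artifact.
log=serial.log
printf '(XEN) Xen version 4.18\nearly console in setup code\n' > "$log"
grep -c 'early' "$log"    # counts lines containing early-boot markers
```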

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
 automation/scripts/qubes-x86-64.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index ae766395d184..bd09451d7d28 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -80,7 +80,7 @@ type = "'${test_variant#pci-}'"
 name = "domU"
 kernel = "/boot/vmlinuz"
 ramdisk = "/boot/initrd-domU"
-extra = "root=/dev/ram0 console=hvc0"
+extra = "root=/dev/ram0 console=hvc0 earlyprintk=xen"
 memory = 512
 vif = [ ]
 disk = [ ]
@@ -186,7 +186,7 @@ CONTROLLER=control@thor.testnet
 
 echo "
 multiboot2 (http)/gitlab-ci/xen $CONSOLE_OPTS loglvl=all guest_loglvl=all dom0_mem=4G
-module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
+module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0 earlyprintk=xen
 module2 (http)/gitlab-ci/initrd-dom0
 " > $TFTP/grub.cfg
 
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Sat May 13 02:13:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 02:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534119.831375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxekz-0002F8-Gm; Sat, 13 May 2023 02:13:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534119.831375; Sat, 13 May 2023 02:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxekz-0002Ef-C0; Sat, 13 May 2023 02:13:13 +0000
Received: by outflank-mailman (input) for mailman id 534119;
 Sat, 13 May 2023 02:13:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cp9A=BC=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pxeky-0002BP-63
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 02:13:12 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b4a47d85-f133-11ed-8611-37d641c3527e;
 Sat, 13 May 2023 04:13:10 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 510185C0237;
 Fri, 12 May 2023 22:13:09 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Fri, 12 May 2023 22:13:09 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 12 May 2023 22:13:08 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4a47d85-f133-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1683943989; x=1684030389; bh=V7
	ePsmBxwb4ItsH2fnAJjUEBAyFuGVIcqpRBvQlY+2s=; b=LLc/DsmcaGuqkDA7/G
	XFHH5kQ2LiF0qC9oFphHZSgUjFKAu7ZRM/jb7Ip0KqwNB7Esi5RPASBpF3RccMyW
	xAE5ode4eqDXH+DqneIzd4FZNM9cHR6m/AEuQGQgn5r9TSW1hMnLUnlH5+bFReB2
	M4rHb3W0ZQLr2eqQNyuXFUiPJY6SQUVkPccQQoud5X461iaCA3joGb2mBB4fkK1V
	Wc+Q17j1hzOsPWxy8rFkYpKNbEeGRA3837/SzXoNUbvsb8RtXW4ZZLOOd26QmPTL
	UO7YGJzA6Vq4qWpV79nvHm0MueOHTuD6FifzvridwC8TIG3GE+axorqfbBLAX1UH
	raOQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=
	1683943989; x=1684030389; bh=V7ePsmBxwb4ItsH2fnAJjUEBAyFuGVIcqpR
	BvQlY+2s=; b=F1MDuF1Tst97M5BRqz5IENsS8LrbtTvuPham3NbN04BzdDG6fR6
	OMBcRANWBFTh4BXaFOuIYxW23tUhnbMglqX4sjT4I8D2u+cIIEHSD9TVxhWyfZeg
	uWujucWTjfh6rRVqA3sjVRAPWjzZDjwdxsf/81YWrY25ROMnOBWFTwlS2+Z4zbnT
	muHLSBu0xycQNGTpAxFHyaLwzm8o59yucvshLhtOaiIAcuWoendGPOtvgm/9+eSN
	VNBOS82KIuN4F26jp/3u5/aXtQMe80yMj+HrEzVZanZyVxkOJJJrHEVsPX8svRWs
	mCLDV3K5M/SbKSD6GFdff+gAPZb/Y0oFWFw==
X-ME-Sender: <xms:NfJeZE7KzUtqfSkjSUFLowQHWXUBotwHQxOW2cf6u-q3rQ0aDU9Lzw>
    <xme:NfJeZF5TP5b494gkHmIYMLofdL5wWOkhWT0Iv9OId9k04vp6wyvcpuglgu0C6VzrA
    _pRd8Cf-Rr4Vg>
X-ME-Received: <xmr:NfJeZDdL5TPkH4ugsSFVqOBrEGYrUfFXF9QvVSEPhNKmgNkrgLZZ7i5l5XkyKbUAl6ndUfln9AZUH6KbNzrru_VwpXR7AGKlkQI46gJyzkepTTSjAvA8>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeehuddgheejucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhephffvvefufffkofgjfhggtgfgsehtkeertdertdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    ueduhefgvdefheehudejheefudevueeghfekhfehleegveduteeuiedugffgffenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:NfJeZJLcy2fYvk2SdX38X02RMqZeQ4wGjKKwzNJwKQ7C--5kGzVeGA>
    <xmx:NfJeZILa04iwJDI5vQz8E44KXq7DC9Q_o_PlVobmUGeylLm7Fq7x1Q>
    <xmx:NfJeZKxxQ0WPBJq7Tb0spfQD7C5se7mnJmW-jGgRNmogxORCe2qhFQ>
    <xmx:NfJeZCgot6nmU6NJ4OBg1s9jhH_v6sPGySoJ6JPaz3oMNFOD5dCI2w>
Feedback-ID: i1568416f:Fastmail
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 1/4] automation: make console options configurable via variables
Date: Sat, 13 May 2023 04:12:44 +0200
Message-Id: <e0504797d1b3758c035cd82b2dc3b00d747ddcc8.1683943670.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
References: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This makes the test script more easily reusable with different runners,
where the console may be connected differently. Include both the console=
option and the configuration for the specific console chosen (like com1=
here) in the 'CONSOLE_OPTS' variable.
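For illustration, a runner whose serial console sits on a different port
could override the variable; the port and baud rate below are hypothetical,
not part of this patch:

```shell
# Hypothetical override for a runner with its serial console on com2
# (port and baud rate are illustrative):
CONSOLE_OPTS="console=com2 com2=57600,8n1"

# The script expands the variable into the generated grub.cfg line:
echo "multiboot2 (http)/gitlab-ci/xen $CONSOLE_OPTS loglvl=all guest_loglvl=all dom0_mem=4G"
```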

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
This will conflict with Stefano's patch, as both modify the multiboot2
line, but it shouldn't be too hard to resolve the conflict manually (both
replace the console options with a variable and add extra options at the
end).
---
 automation/gitlab-ci/test.yaml     | 1 +
 automation/scripts/qubes-x86-64.sh | 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 55ca0c27dc49..cb7fd5c272e9 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -96,6 +96,7 @@
     LOGFILE: smoke-test.log
     PCIDEV: "03:00.0"
     PCIDEV_INTR: "MSI-X"
+    CONSOLE_OPTS: "console=com1 com1=115200,8n1"
   artifacts:
     paths:
       - smoke.serial
diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 056faf9e6de8..ae766395d184 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -184,11 +184,11 @@ cd ..
 TFTP=/scratch/gitlab-runner/tftp
 CONTROLLER=control@thor.testnet
 
-echo '
-multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all dom0_mem=4G
+echo "
+multiboot2 (http)/gitlab-ci/xen $CONSOLE_OPTS loglvl=all guest_loglvl=all dom0_mem=4G
 module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
 module2 (http)/gitlab-ci/initrd-dom0
-' > $TFTP/grub.cfg
+" > $TFTP/grub.cfg
 
 cp -f binaries/xen $TFTP/xen
 cp -f binaries/bzImage $TFTP/vmlinuz
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Sat May 13 02:13:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 02:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534120.831389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxel2-0002gn-Vb; Sat, 13 May 2023 02:13:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534120.831389; Sat, 13 May 2023 02:13:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxel2-0002gb-Qr; Sat, 13 May 2023 02:13:16 +0000
Received: by outflank-mailman (input) for mailman id 534120;
 Sat, 13 May 2023 02:13:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cp9A=BC=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pxel2-0002fe-D6
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 02:13:16 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b6fa99e3-f133-11ed-b229-6b7b168915f2;
 Sat, 13 May 2023 04:13:14 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 32B6A5C0231;
 Fri, 12 May 2023 22:13:13 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Fri, 12 May 2023 22:13:13 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 12 May 2023 22:13:12 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6fa99e3-f133-11ed-b229-6b7b168915f2
X-Inumbo-ID: b6fa99e3-f133-11ed-b229-6b7b168915f2
From: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 4/4] automation: add PV passthrough tests on an AMD Zen3+ runner
Date: Sat, 13 May 2023 04:12:47 +0200
Message-Id: <de2a2841e44f44eb7dd56c0b9a2c27fe041051e9.1683943670.git-series.marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
References: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The PV passthrough test currently fails on this system
with:
(d1) Can't find new memory area for initrd needed due to E820 map conflict

Setting e820_host=1 does not help. So, add this test with
"allow_failure: true" until the problem is fixed.
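For reference, e820_host is a PV-guest xl config option that exposes (a copy
of) the host E820 memory map to the guest. A minimal sketch of setting it,
where every field except e820_host is illustrative:

```shell
# Sketch: a PV guest config fragment with e820_host enabled.
# Only e820_host is the option under discussion; other fields are examples.
cat > domU-e820.cfg <<'EOF'
name = "domU"
kernel = "/boot/vmlinuz"
e820_host = 1
EOF
grep 'e820_host' domU-e820.cfg
```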

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
I'm unsure whether this should be included. On one hand, the test case
will help verify a potential fix. On the other hand, until the problem is
fixed it will waste CI time.
---
 automation/gitlab-ci/test.yaml |  9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 81d027532cca..7becb7a6b782 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -194,6 +194,15 @@ zen3p-smoke-x86-64-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.12-gcc-debug
 
+zen3p-pci-pv-x86-64-gcc-debug:
+  extends: .zen3p-x86-64
+  allow_failure: true
+  script:
+    - ./automation/scripts/qubes-x86-64.sh pci-pv 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
 zen3p-pci-hvm-x86-64-gcc-debug:
   extends: .zen3p-x86-64
   script:
-- 
git-series 0.9.1


From xen-devel-bounces@lists.xenproject.org Sat May 13 03:55:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 03:55:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534148.831418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxgM0-00066w-1h; Sat, 13 May 2023 03:55:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534148.831418; Sat, 13 May 2023 03:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxgLz-00066p-VJ; Sat, 13 May 2023 03:55:31 +0000
Received: by outflank-mailman (input) for mailman id 534148;
 Sat, 13 May 2023 03:55:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxgLx-00066f-Sj; Sat, 13 May 2023 03:55:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxgLx-0000UD-LU; Sat, 13 May 2023 03:55:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxgLx-0005KN-C6; Sat, 13 May 2023 03:55:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxgLx-0001EI-Ba; Sat, 13 May 2023 03:55:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p9COexbxPTrzTjAskiIgzr1A96WyxwMzhtk/hHvWBlI=; b=yzqvW0mDtaNGAf0L4onsxnJM1l
	KAA/sfVTcR5dZMWXDa/w1bjDNMcy+43IDsW8tVNC7qhZbQbNTrlF2mFBEUhCQER4p9jy/3a9h+BaQ
	Rv7bt3bp9GU1fxVxnoEqK/mMqNSFmvBf58JvHjYVVCGPEkI7P2dm3gj5pPrkTP1XkqOw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180637-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180637: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=278238505d28d292927bff7683f39fb4fbca7fd1
X-Osstest-Versions-That:
    qemuu=d530697ca20e19f7a626f4c1c8b26fccd0dc4470
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 May 2023 03:55:29 +0000

flight 180637 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180637/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180610
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180610
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180610
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180610
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180610
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180610
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180610
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180610
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                278238505d28d292927bff7683f39fb4fbca7fd1
baseline version:
 qemuu                d530697ca20e19f7a626f4c1c8b26fccd0dc4470

Last test of basis   180610  2023-05-10 23:38:37 Z    2 days
Testing same since   180621  2023-05-11 14:27:55 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dr. David Alan Gilbert <dave@treblig.org>
  Jamie Iles <quic_jiles@quicinc.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lukas Straub <lukasstraub2@web.de>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   d530697ca2..278238505d  278238505d28d292927bff7683f39fb4fbca7fd1 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat May 13 10:38:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 10:38:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534191.831429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxmdi-0003yL-DP; Sat, 13 May 2023 10:38:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534191.831429; Sat, 13 May 2023 10:38:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxmdi-0003yE-A4; Sat, 13 May 2023 10:38:14 +0000
Received: by outflank-mailman (input) for mailman id 534191;
 Sat, 13 May 2023 10:38:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxmdg-0003y1-Rz; Sat, 13 May 2023 10:38:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxmdg-0002rZ-Ot; Sat, 13 May 2023 10:38:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxmdg-0001VY-5a; Sat, 13 May 2023 10:38:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxmdg-0007y3-4Z; Sat, 13 May 2023 10:38:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rXb4wQJqUtniM94XD9Tp+ogDTbeTw7SEL9o4Z4TPlQw=; b=UeKl7VLLNaFtF6w8BDzDvQ1+3C
	GGM7/rRIJStR9JAsyawiBfNEDjw9wRn2Gb2w1geEzHx4r4r11dj6JLFDh6IW2EG3OBBk8EGzjyhgW
	kAIqe5svIVKJWeJvG4AY15mVSWEbh8VpPtjilR5NKpSvteZzfCTgQB7x+ptdXKyFtv5s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180639-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180639: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4c507d8a6b6e8be90881a335b0a66eb28e0f7737
X-Osstest-Versions-That:
    xen=cb781ae2c98de5d5742aa0de6850f371fb25825f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 May 2023 10:38:12 +0000

flight 180639 xen-unstable real [real]
flight 180644 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180639/
http://logs.test-lab.xenproject.org/osstest/logs/180644/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180632

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair        11 xen-install/dst_host fail pass in 180644-retest
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail pass in 180644-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 180644 like 180632
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180632
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180632
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180632
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180632
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180632
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180632
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180632
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180632
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180632
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180632
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180632
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  4c507d8a6b6e8be90881a335b0a66eb28e0f7737
baseline version:
 xen                  cb781ae2c98de5d5742aa0de6850f371fb25825f

Last test of basis   180632  2023-05-12 08:12:24 Z    1 days
Testing same since   180639  2023-05-12 20:39:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Fri May 12 09:35:36 2023 +0200

    iommu/amd-vi: fix assert comparing boolean to enum
    
    Or else when iommu_intremap is set to iommu_intremap_full the assert
    triggers.
    
    Fixes: 1ba66a870eba ('AMD/IOMMU: without XT, x2APIC needs to be forced into physical mode')
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit d9dcd45c56baa2140857ec290962a1f9d71af73c
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri May 12 09:35:14 2023 +0200

    SUPPORT.md: explicitly mention EFI (secure) boot status
    
    While normal booting is properly supported on both x86 and Arm64, secure
    boot reportedly requires quite a bit more work to be actually usable
    (and providing the intended guarantees). The mere use of the shim
    protocol for verifying the Dom0 kernel image isn't enough.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat May 13 11:44:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 11:44:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534201.831441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxnfk-00032U-61; Sat, 13 May 2023 11:44:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534201.831441; Sat, 13 May 2023 11:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxnfk-00032N-36; Sat, 13 May 2023 11:44:24 +0000
Received: by outflank-mailman (input) for mailman id 534201;
 Sat, 13 May 2023 11:44:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z80I=BC=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1pxnfi-00032H-UG
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 11:44:22 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7f50626a-f183-11ed-8611-37d641c3527e;
 Sat, 13 May 2023 13:44:19 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id
 a640c23a62f3a-9661a1ff1e9so1328833466b.1
 for <xen-devel@lists.xenproject.org>; Sat, 13 May 2023 04:44:19 -0700 (PDT)
Received: from [127.0.0.1] (dynamic-077-013-174-037.77.13.pool.telefonica.de.
 [77.13.174.37]) by smtp.gmail.com with ESMTPSA id
 k18-20020a17090632d200b009661f07db93sm6684735ejk.223.2023.05.13.04.44.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 13 May 2023 04:44:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f50626a-f183-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1683978259; x=1686570259;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=V6rAM+uNA/W1vTTbidrPOvF1WB4ageDhKjMWnQRB0Dk=;
        b=hnfm3bmL71lHvrHS1ylREbQosLNuToMM5JyIfyAEk8e2Is0svS4qNdAaevNg8LmLtg
         Z9qtC/+TbRS3t70tUZ24A23mogaDQCbHnPU1ePpPOi43scxPTf6avbTZzuIu6Ix3zxKV
         Ib9nMn/Eg87HnHJevWS+wHz9blOY3hj6yovgkZPC+CV6dXBIKDE7nlW8mP6EZ9sV6bvC
         LT9zYI+tLoDLxCV+s5hGsdrC5G7jSHaRmau9ExngU+P8/LMoMpdmWyD5IYHdu2QLq2Aw
         3m/9bcgxLABPjhb4VSnzk1b+0lESQihCY+70Ll6pIaOxxGZuwB/dEFFz1tXqokr/48NC
         BMDg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1683978259; x=1686570259;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=V6rAM+uNA/W1vTTbidrPOvF1WB4ageDhKjMWnQRB0Dk=;
        b=RopOi939ZFHZFaa1hzauvHIShhyeQFuPZlAuYhzRLeFtzxndbfQBhLNRKMbUhJyTLm
         foMQcYLqvmmeGNEnXZJJNjWxCkJsvoSNehRyOZfecYoE1LVWP0E8CN9kINKF6My5iCBk
         pLNZBy/DSRqxGnCLXKDS/V9gHOPA/Hg1xRrm2OONfA7CBVEqIwmDZcFfzH3oVvcYpwy0
         yaYEfazFiXjcGDdD6jR0Qiui85oP5Ym58KdThJBnERjJqgnl7O9HE4tVO4bxbfWsOxmF
         TQ8ah+gkQxtO2UpcGyQJt16+tzBk4AFlbeTe1IefbGfIPNCbl77sMBGMij9mVANUtoC5
         NgeQ==
X-Gm-Message-State: AC+VfDzMXiL7ike7ZCtaHx0ebt83yL5fsqCkPU1ZMz2Hzfg3Mhtyi/pS
	04NHGrMPQ1AKkdcJ6MfPp88=
X-Google-Smtp-Source: ACHHUZ6JJgPDuB7Q8BYSij4drTRdfqqNWuU4pbKsBlUfNJbXghamkFlPpjRzt6VPnAgzf3POjvW/Yw==
X-Received: by 2002:a17:907:8a02:b0:967:d161:61c6 with SMTP id sc2-20020a1709078a0200b00967d16161c6mr19735527ejc.3.1683978259039;
        Sat, 13 May 2023 04:44:19 -0700 (PDT)
Date: Sat, 13 May 2023 11:44:14 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
CC: qemu-devel@nongnu.org, Richard Henderson <richard.henderson@linaro.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>, Eduardo Habkost <eduardo@habkost.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Chuck Zmudzinski <brchuckz@aol.com>, Aurelien Jarno <aurelien@aurel32.net>,
 Hervé Poussineau <hpoussin@reactos.org>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 0/7] Resolve TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <20230421033757-mutt-send-email-mst@kernel.org>
References: <20230403074124.3925-1-shentey@gmail.com> <20230421033757-mutt-send-email-mst@kernel.org>
Message-ID: <9EB9A984-61E5-4226-8352-B5DDC6E2C62E@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On 21 April 2023 07:38:10 UTC, "Michael S. Tsirkin" <mst@redhat.com> wrote:
>On Mon, Apr 03, 2023 at 09:41:17AM +0200, Bernhard Beschow wrote:
>> There is currently a dedicated PIIX3 device model for use under Xen. By
>> reusing existing PCI API during initialization, this device model can be
>> eliminated and the plain PIIX3 device model can be used instead.
>>
>> Resolving TYPE_PIIX3_XEN_DEVICE results in less code while also making
>> Xen agnostic towards the precise south bridge being used in the PC
>> machine. The latter might become particularly interesting once PIIX4
>> becomes usable in the PC machine, avoiding the "Frankenstein" use of
>> PIIX4_ACPI in PIIX3.
>
>xen stuff so I assume that tree?

Ping

>
>> Testing done:
>> - `make check`
>> - Run `xl create` with the following config:
>>     name = "Manjaro"
>>     type = 'hvm'
>>     memory = 1536
>>     apic = 1
>>     usb = 1
>>     disk = [ "file:manjaro-kde-21.2.6-220416-linux515.iso,hdc:cdrom,r" ]
>>     device_model_override = "/usr/bin/qemu-system-x86_64"
>>     vga = "stdvga"
>>     sdl = 1
>> - `qemu-system-x86_64 -M pc -m 2G -cpu host -accel kvm \
>>     -cdrom manjaro-kde-21.2.6-220416-linux515.iso`
>>
>> v4:
>> - Add patch fixing latent memory leak in pci_bus_irqs() (Anthony)
>>
>> v3:
>> - Rebase onto master
>>
>> v2:
>> - xen_piix3_set_irq() is already generic. Just rename it. (Chuck)
>>
>> Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
>>
>> Bernhard Beschow (7):
>>   include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
>>   hw/pci/pci.c: Don't leak PCIBus::irq_count[] in pci_bus_irqs()
>>   hw/isa/piix3: Reuse piix3_realize() in piix3_xen_realize()
>>   hw/isa/piix3: Wire up Xen PCI IRQ handling outside of PIIX3
>>   hw/isa/piix3: Avoid Xen-specific variant of piix3_write_config()
>>   hw/isa/piix3: Resolve redundant k->config_write assignments
>>   hw/isa/piix3: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>>
>>  include/hw/southbridge/piix.h |  1 -
>>  include/hw/xen/xen.h          |  2 +-
>>  hw/i386/pc_piix.c             | 36 +++++++++++++++++++--
>>  hw/i386/xen/xen-hvm.c         |  2 +-
>>  hw/isa/piix3.c                | 60 +----------------------------------
>>  hw/pci/pci.c                  |  2 ++
>>  stubs/xen-hw-stub.c           |  2 +-
>>  7 files changed, 39 insertions(+), 66 deletions(-)
>>
>> --
>> 2.40.0
>>
>


From xen-devel-bounces@lists.xenproject.org Sat May 13 15:30:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 15:30:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534252.831464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxrCQ-0000Wn-Uy; Sat, 13 May 2023 15:30:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534252.831464; Sat, 13 May 2023 15:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxrCQ-0000Wg-RD; Sat, 13 May 2023 15:30:22 +0000
Received: by outflank-mailman (input) for mailman id 534252;
 Sat, 13 May 2023 15:30:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxrCP-0000WW-9M; Sat, 13 May 2023 15:30:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxrCP-0001Nf-70; Sat, 13 May 2023 15:30:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxrCO-0001RV-LX; Sat, 13 May 2023 15:30:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxrCO-0003AJ-L3; Sat, 13 May 2023 15:30:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XKbwhur/ESM3/LJiXtF7kQzeYA14ukprrtTSfaLFLjA=; b=lmYhKa5JU+qAoFgsQoB/QGygqG
	Onc0G63+M13d66+U9w3ceMoRGBpIBa14w10rq2nnphQVfk2d2gyWVY4KHosEQT3+boMcyi+APXsMr
	Qe9TNgMlzv8xHC0lsmsH654f24nlPDOnjvzTckazbmb6dpFDSxnZUfGdL5zBuVfJpMV8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180641-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180641: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9a48d604672220545d209e9996c2a1edbb5637f6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 May 2023 15:30:20 +0000

flight 180641 linux-linus real [real]
flight 180648 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180641/
http://logs.test-lab.xenproject.org/osstest/logs/180648/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9a48d604672220545d209e9996c2a1edbb5637f6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   26 days
Failing since        180281  2023-04-17 06:24:36 Z   26 days   48 attempts
Testing same since   180641  2023-05-13 01:41:58 Z    0 days    1 attempts

------------------------------------------------------------
2383 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 300947 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 13 16:34:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 16:34:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534263.831474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxsBs-0007KP-Iw; Sat, 13 May 2023 16:33:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534263.831474; Sat, 13 May 2023 16:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxsBs-0007KI-Fz; Sat, 13 May 2023 16:33:52 +0000
Received: by outflank-mailman (input) for mailman id 534263;
 Sat, 13 May 2023 16:33:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxsBq-0007Jr-Iu; Sat, 13 May 2023 16:33:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxsBq-0003Gl-G6; Sat, 13 May 2023 16:33:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxsBq-0003qv-8N; Sat, 13 May 2023 16:33:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxsBq-0007Le-7r; Sat, 13 May 2023 16:33:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wiG8kLPCb5pdJFLgd0/d0VXGzWAiaWfWmojTPoFAuPY=; b=6Vq4jnQY2OZ/RvcL433NU/QqmD
	HNs1DHy6ugMJqRE80JAAaYe2W6Ujm22lJU2UcYCWpi4RC21YSG/ATumZrNi/6T4Tspr0Q4m1bpC0Q
	E4ipvX5E/Mu0lVSLjDpgaMmInBvu3yEdHyYJudfPn0IrKvklLgFaNVjtZaydNLqniRwE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180642-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180642: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-arm64-arm64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=4a681995bc9f0ba5df779c392b7bebf3470a3f9a
X-Osstest-Versions-That:
    libvirt=517d76466b2b798e492cde18b61e4b6db8bd4ce1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 May 2023 16:33:50 +0000

flight 180642 libvirt real [real]
flight 180650 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180642/
http://logs.test-lab.xenproject.org/osstest/logs/180650/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-qcow2 17 guest-start/debian.repeat fail pass in 180650-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180628
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180628
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180628
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              4a681995bc9f0ba5df779c392b7bebf3470a3f9a
baseline version:
 libvirt              517d76466b2b798e492cde18b61e4b6db8bd4ce1

Last test of basis   180628  2023-05-12 04:18:49 Z    1 days
Testing same since   180642  2023-05-13 04:21:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dankaházi (ifj.) István <dankahazi.istvan@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/libvirt.git
   517d76466b..4a681995bc  4a681995bc9f0ba5df779c392b7bebf3470a3f9a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 13 18:40:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 18:40:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534276.831493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxuA5-0003fz-FB; Sat, 13 May 2023 18:40:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534276.831493; Sat, 13 May 2023 18:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxuA5-0003fs-CB; Sat, 13 May 2023 18:40:09 +0000
Received: by outflank-mailman (input) for mailman id 534276;
 Sat, 13 May 2023 18:32:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QA79=BC=natalenko.name=oleksandr@srs-se1.protection.inumbo.net>)
 id 1pxu2r-0002gu-Qf
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 18:32:42 +0000
Received: from vulcan.natalenko.name (vulcan.natalenko.name [104.207.131.136])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 891d8ce1-f1bc-11ed-8611-37d641c3527e;
 Sat, 13 May 2023 20:32:38 +0200 (CEST)
Received: from spock.localnet (unknown [83.148.33.151])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by vulcan.natalenko.name (Postfix) with ESMTPSA id 3E98A130F201;
 Sat, 13 May 2023 20:32:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 891d8ce1-f1bc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=natalenko.name;
	s=dkim-20170712; t=1684002754;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vRMdV3qzOVMwmQcnB02GoMJuuadMUEoFe1zlRRYP5Zw=;
	b=BYkiLxOIWN8IVtlF+nUBPTMaSvVLLm07F0DHp73Z+Eb7fP5wfHLu6+z3tHtGudVAjPmDGo
	cMZS4YuquHcnYJ9oS7feOQiSz26jdOO3YZzY3i1btzZDXY+EaLoZDj8CRG0dAnOPQirDG2
	m5QBpNJvNrr34slkYe3VXn3/zoEgtxo=
From: Oleksandr Natalenko <oleksandr@natalenko.name>
To: LKML <linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org, David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>, Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
Subject: Re: [patch V4 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Date: Sat, 13 May 2023 20:32:31 +0200
Message-ID: <12207466.O9o76ZdvQC@natalenko.name>
In-Reply-To: <20230512203426.452963764@linutronix.de>
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="UTF-8"

Hello.

On Friday, 12 May 2023 23:06:56 CEST Thomas Gleixner wrote:
> Hi!
>
> This is version 4 of the reworked parallel bringup series. Version 3 can be
> found here:
>
>    https://lore.kernel.org/lkml/20230508181633.089804905@linutronix.de
>
> This is just a reiteration to address the following details:
>
>   1) Address review feedback (Peter Zijlstra)
>
>   2) Fix a MIPS related build problem (0day)
>
> Other than that there are no changes and the other details are all the same
> as in V3 and V2.
>
> It's also available from git:
>
>     git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git hotplug
>
> Diff to V3 below.
>
> Thanks,
>
> 	tglx

With this patchset:

```

[    0.137719] smpboot: Allowing 32 CPUs, 0 hotplug CPUs
[    0.777312] smpboot: CPU0: AMD Ryzen 9 5950X 16-Core Processor (family: 0x19, model: 0x21, stepping: 0x2)
[    0.777896] smpboot: Parallel CPU startup enabled: 0x80000000
```

Seems to survive suspend/resume cycle too.

Hence:

Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>

Thanks.

> ---
> diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
> index f5e0f4235746..90c71d800b59 100644
> --- a/arch/mips/kernel/smp.c
> +++ b/arch/mips/kernel/smp.c
> @@ -690,7 +690,7 @@ void flush_tlb_one(unsigned long vaddr)
>  EXPORT_SYMBOL(flush_tlb_page);
>  EXPORT_SYMBOL(flush_tlb_one);
>
> -#ifdef CONFIG_HOTPLUG_CPU
> +#ifdef CONFIG_HOTPLUG_CORE_SYNC_DEAD
>  void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
>  {
>  	if (mp_ops->cleanup_dead_cpu)
> diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> index 0438802031c3..9cd77d319555 100644
> --- a/arch/x86/kernel/head_64.S
> +++ b/arch/x86/kernel/head_64.S
> @@ -290,8 +290,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
>
>  	/*  APIC ID not found in the table. Drop the trampoline lock and bail. */
>  	movq	trampoline_lock(%rip), %rax
> -	lock
> -	btrl	$0, (%rax)
> +	movl	$0, (%rax)
>
>  1:	cli
>  	hlt
> @@ -320,8 +319,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
>  	movq	trampoline_lock(%rip), %rax
>  	testq	%rax, %rax
>  	jz	.Lsetup_gdt
> -	lock
> -	btrl	$0, (%rax)
> +	movl	$0, (%rax)
>
>  .Lsetup_gdt:
>  	/*
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 5caf4897b507..660709e94823 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -161,31 +161,28 @@ static inline void smpboot_restore_warm_reset_vector(void)
>
>  }
>
> -/*
> - * Report back to the Boot Processor during boot time or to the caller processor
> - * during CPU online.
> - */
> -static void smp_callin(void)
> +/* Run the next set of setup steps for the upcoming CPU */
> +static void ap_starting(void)
>  {
>  	int cpuid = smp_processor_id();
>
>  	/*
> -	 * If waken up by an INIT in an 82489DX configuration the alive
> -	 * synchronization guarantees we don't get here before an
> -	 * INIT_deassert IPI reaches our local APIC, so it is now safe to
> -	 * touch our local APIC.
> +	 * If woken up by an INIT in an 82489DX configuration the alive
> +	 * synchronization guarantees that the CPU does not reach this
> +	 * point before an INIT_deassert IPI reaches the local APIC, so it
> +	 * is now safe to touch the local APIC.
>  	 *
>  	 * Set up this CPU, first the APIC, which is probably redundant on
>  	 * most boards.
>  	 */
>  	apic_ap_setup();
>
> -	/* Save our processor parameters. */
> +	/* Save the processor parameters. */
>  	smp_store_cpu_info(cpuid);
>
>  	/*
>  	 * The topology information must be up to date before
> -	 * calibrate_delay() and notify_cpu_starting().
> +	 * notify_cpu_starting().
>  	 */
>  	set_cpu_sibling_map(cpuid);
>
> @@ -197,7 +194,7 @@ static void smp_callin(void)
>
>  	/*
>  	 * This runs the AP through all the cpuhp states to its target
> -	 * state (CPUHP_ONLINE in the case of serial bringup).
> +	 * state CPUHP_ONLINE.
>  	 */
>  	notify_cpu_starting(cpuid);
>  }
> @@ -274,10 +271,7 @@ static void notrace start_secondary(void *unused)
>  	rcu_cpu_starting(raw_smp_processor_id());
>  	x86_cpuinit.early_percpu_clock_init();
>
> -	smp_callin();
> -
> -	/* Otherwise gcc will move up smp_processor_id() before cpu_init() */
> -	barrier();
> +	ap_starting();
>
>  	/* Check TSC synchronization with the control CPU. */
>  	check_tsc_sync_target();
> diff --git a/arch/x86/realmode/rm/trampoline_64.S b/arch/x86/realmode/rm/trampoline_64.S
> index 2dfb1c400167..c6de4deec746 100644
> --- a/arch/x86/realmode/rm/trampoline_64.S
> +++ b/arch/x86/realmode/rm/trampoline_64.S
> @@ -40,17 +40,13 @@
>  .macro LOAD_REALMODE_ESP
>  	/*
>  	 * Make sure only one CPU fiddles with the realmode stack
> -	 */
> +	*/
>  .Llock_rm\@:
> -	btl	$0, tr_lock
> -	jnc	2f
> -	pause
> -	jmp	.Llock_rm\@
> +        lock btsl       $0, tr_lock
> +        jnc             2f
> +        pause
> +        jmp             .Llock_rm\@
>  2:
> -	lock
> -	btsl	$0, tr_lock
> -	jc	.Llock_rm\@
> -
>  	# Setup stack
>  	movl	$rm_stack_end, %esp
>  .endm
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 60b4093fae9e..005f863a3d2b 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -294,14 +294,14 @@ enum cpuhp_sync_state {
>   * cpuhp_ap_update_sync_state - Update synchronization state during bringup/teardown
>   * @state:	The synchronization state to set
>   *
> - * No synchronization point. Just update of the synchronization state.
> + * No synchronization point. Just update of the synchronization state, but implies
> + * a full barrier so that the AP changes are visible before the control CPU proceeds.
>   */
>  static inline void cpuhp_ap_update_sync_state(enum cpuhp_sync_state state)
>  {
>  	atomic_t *st = this_cpu_ptr(&cpuhp_state.ap_sync_state);
> -	int sync = atomic_read(st);
>
> -	while (!atomic_try_cmpxchg(st, &sync, state));
> +	(void)atomic_xchg(st, state);
>  }
>
>  void __weak arch_cpuhp_sync_state_poll(void) { cpu_relax(); }
> @@ -829,7 +829,11 @@ static int bringup_cpu(unsigned int cpu)
>  	/*
>  	 * Some architectures have to walk the irq descriptors to
>  	 * setup the vector space for the cpu which comes online.
> -	 * Prevent irq alloc/free across the bringup.
> +	 *
> +	 * Prevent irq alloc/free across the bringup by acquiring the
> +	 * sparse irq lock. Hold it until the upcoming CPU completes the
> +	 * startup in cpuhp_online_idle() which allows to avoid
> +	 * intermediate synchronization points in the architecture code.
>  	 */
>  	irq_lock_sparse();
>
>
>
>


-- 
Oleksandr Natalenko (post-factum)




From xen-devel-bounces@lists.xenproject.org Sat May 13 21:01:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 21:01:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534327.831511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxwMu-00015m-Hd; Sat, 13 May 2023 21:01:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534327.831511; Sat, 13 May 2023 21:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxwMu-00015f-Em; Sat, 13 May 2023 21:01:32 +0000
Received: by outflank-mailman (input) for mailman id 534327;
 Sat, 13 May 2023 21:01:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CHpO=BC=gmx.de=deller@srs-se1.protection.inumbo.net>)
 id 1pxwMt-00015X-3K
 for xen-devel@lists.xenproject.org; Sat, 13 May 2023 21:01:31 +0000
Received: from mout.gmx.net (mout.gmx.net [212.227.15.19])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5478a654-f1d1-11ed-b229-6b7b168915f2;
 Sat, 13 May 2023 23:01:29 +0200 (CEST)
Received: from [192.168.20.60] ([94.134.158.250]) by mail.gmx.net (mrgmx004
 [212.227.17.190]) with ESMTPSA (Nemesis) id 1M1Ygz-1puuDq2TVb-0036k5; Sat, 13
 May 2023 23:00:49 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5478a654-f1d1-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=gmx.de; s=s31663417;
	t=1684011649; i=deller@gmx.de;
	bh=S8BdFQeWXnw0k7xb/Pd2AhJXn5drG2VSOAIP2nWf5C4=;
	h=X-UI-Sender-Class:Date:Subject:To:Cc:References:From:In-Reply-To;
	b=mcUG47eX0Gn5veG6Ma/ul80RdUo6bsI8PVfZz0zKmUflnHrmh7e47BPa5vNQH5AE5
	 xN8t8csT87bACHCqA8OsuJ2hWYdHBaim0kPYxyJPyGV+wdFIZqdMZ2KWJkGpMA71P/
	 egT+C+AIP++kGZ4bJ4izMXA5xAwpUSXSFsk26PUoT8kKqTDoQAx1/dB+6F6qtKXT3r
	 GvfRk5+vnzNiG67CzR+x13ghBN34jZ4cPRgxhCtav+HMxKP6wiojHtau4ptA6nVxv6
	 cehya8971oTBpuL9wmCK7IO8pT5+JyFocI4nxvSEkGnCMrJGyPpXH1ZFrtHGaJENlu
	 oDzqQ+ljsk/Dw==
X-UI-Sender-Class: 724b4f7f-cbec-4199-ad4e-598c01a50d3a
Message-ID: <f918a8af-5a9b-1309-e19a-4f3f09ebab7f@gmx.de>
Date: Sat, 13 May 2023 23:00:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [patch V4 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-US
To: LKML <linux-kernel@vger.kernel.org>, Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org, David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>, Paul Menzel
 <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
 linux-parisc@vger.kernel.org, Paul Walmsley <paul.walmsley@sifive.com>,
 Palmer Dabbelt <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>
References: <20230512203426.452963764@linutronix.de>
 <12207466.O9o76ZdvQC@natalenko.name>
From: Helge Deller <deller@gmx.de>
In-Reply-To: <12207466.O9o76ZdvQC@natalenko.name>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: quoted-printable
X-Provags-ID: V03:K1:RpgEId2B9D5HBcL9fBRb+3JWwwhOteIZ1nytZcasaTAmfarvNwn
 fMFgkrHHFQUTus1MzALvRPziABh8c3qvbp7zrAUgbDN6l4imCV7IS4gxdfOdNFI3zgs3XcO
 11oz6jBIU+1Ra6lyKd+YTydDGuvPMiAY0/gQGWH75WYI+9H8qnSK0e5rvRvnBk3gV+e2mh4
 /Iype0ZePC7XiPoEhVRJQ==
X-Spam-Flag: NO
UI-OutboundReport: notjunk:1;M01:P0:ea8O6TsDxRo=;lqMJCETSNInGwiqOiM4sQMSPd/7
 etoWsuDh2oxQRCTcOm6dNI+74udQOMfHHsm5ViBWq91/zE7BHjEibpBxtH5P3frWwqkILGDJY
 77TFvDyNXPxd7F47SPcZN1Y3tC0PxkAf8D2O4cXgxGzfXcqh6ZU5OdAWofsVnHmYksoaSdtQK
 /sdWCcq3GNm8Op/iz5sawvt5qV1caIjxirPF8NcX0+HOQDWTmdrgC3KQzahSNe6sgpUYU6A0L
 mj7+P2ZJAxwQSxuoPapXn3E9fK1sETKVzH5EkA2MRPR0BSWfZGm58+nvXtGRoh46V31D9fD3Q
 NgyF0MBkq1h5MMqm1cehrsOqM2fbQUc0mMtImW91/3B0tJVWvww1nAEia9y/XTCE6oKIP/2gr
 +84+DVNsM/OlplgxXAFgsPN0eCrTWdSERofcLepJeNo96NsytLDtOEC7en5zvPV0KkUQe00QS
 7rl6hFiJ2MYRqfAsgTAw+jmhG8FlpDZPBv8Qq+5EkCGTbFJhfELHNgSJGuuUlUFEGFIKg7jXn
 TbfcaEF6rc4H+WtmaiTMOKNj/SAINwPT+rP+dvyLp2D+729pu0fnQ5k2EkmED+YwXds8vE225
 wFJ3/ZTMghskjLsZVfA+2lbtPTtpXWJ2p2zBBziSEqo2Kk1sKdgS8zq+73wIqibIA4jML/dNg
 OjnqB1D1UaKpKshMSbg63O+5KVXRw0oSgTABWdEzITLhLZrpla6aYa+4BT2wx4mVopNigTidP
 6HmdhUQpPsfRRdKs/r4DivnIBsaKGWFyGuHjs39P1LTXCnQY+bq7LDx/BcQ49EV022T0FmVHP
 KfxdeSRzDf8VaI7h0Pu+Db96WH9ssPLUVoP+CljJisx2W/IBROPWWx2uhxuL/WIb5UkL14enC
 X/qSeD3BduOA1I9b5VZJterCRzLCiAvjZkN8LbeJOzVdVWMV4EDWlFIivJLXJf7C5BFFtiTfU
 ffX/dDZ8QuTU2qdmz3lPDMkbZx0=

Hi Thomas,
> On Friday, 12 May 2023 23:06:56 CEST Thomas Gleixner wrote:
>> This is version 4 of the reworked parallel bringup series. Version 3 can be
>> found here:
>>
>>     https://lore.kernel.org/lkml/20230508181633.089804905@linutronix.de
>> ...
>>
>> Other than that there are no changes and the other details are all the same
>> as in V3 and V2.
>>
>> It's also available from git:
>>
>>      git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git hotplug

I tested your series on the parisc architecture just to make sure that it still works
with your patch applied.
On parisc the CPU bringup happens later in the boot process (after the inventory),
so your patch won't have a direct impact anyway.
But at least everything still works, incl. manual CPU enable/disable.

So, you may add
Tested-by: Helge Deller <deller@gmx.de> # parisc

Thanks!
Helge


From xen-devel-bounces@lists.xenproject.org Sat May 13 23:09:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 13 May 2023 23:09:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534340.831528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxyMS-0004zW-Bg; Sat, 13 May 2023 23:09:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534340.831528; Sat, 13 May 2023 23:09:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pxyMS-0004zP-8E; Sat, 13 May 2023 23:09:12 +0000
Received: by outflank-mailman (input) for mailman id 534340;
 Sat, 13 May 2023 23:09:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxyMQ-0004zF-Ni; Sat, 13 May 2023 23:09:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxyMQ-0004Dl-Kk; Sat, 13 May 2023 23:09:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pxyMQ-0000kN-3S; Sat, 13 May 2023 23:09:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pxyMQ-0004qg-2x; Sat, 13 May 2023 23:09:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KeLUEOujH4zS9RVLwMdp5DdngXdV8JkWQqCHwgpO7hQ=; b=Kz3LpC+hk6+oIGd9L9EkSso0jb
	jRxYAYzBOGE5fKqfOKIkiG6k5MsZoot1HAU79tYrBPs/LZ3kQOK4sAtckpxptuskGQIvUQ2NOIwY1
	/74zkj0vzJkwbf2dZ61DKegA9xQVjVNkwIZvABMRdEm4dfwt2+8sGY5brh58t8O4C/4k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180643-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180643: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=debca86cad28192f82f741700bc38845f17d5c10
X-Osstest-Versions-That:
    qemuu=278238505d28d292927bff7683f39fb4fbca7fd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 13 May 2023 23:09:10 +0000

flight 180643 qemu-mainline real [real]
flight 180655 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180643/
http://logs.test-lab.xenproject.org/osstest/logs/180655/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180655-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180637
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180637
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180637
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180637
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180637
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180637
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180637
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180637
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                debca86cad28192f82f741700bc38845f17d5c10
baseline version:
 qemuu                278238505d28d292927bff7683f39fb4fbca7fd1

Last test of basis   180637  2023-05-12 14:38:24 Z    1 days
Testing same since   180643  2023-05-13 08:38:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiko Odaki <akihiko.odaki@gmail.com>
  Fabiano Rosas <farosas@suse.de>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   278238505d..debca86cad  debca86cad28192f82f741700bc38845f17d5c10 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun May 14 01:03:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 May 2023 01:03:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534353.831558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1py08H-0007Xl-UZ; Sun, 14 May 2023 01:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534353.831558; Sun, 14 May 2023 01:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1py08H-0007Xe-Qq; Sun, 14 May 2023 01:02:41 +0000
Received: by outflank-mailman (input) for mailman id 534353;
 Sun, 14 May 2023 01:02:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1py08H-0007XU-0L; Sun, 14 May 2023 01:02:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1py08G-0005dU-Tx; Sun, 14 May 2023 01:02:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1py08G-0003My-Fd; Sun, 14 May 2023 01:02:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1py08G-0003Wf-F9; Sun, 14 May 2023 01:02:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7irWFzlytHoYN9QIPvliOJPjrJyE2Qt0d8hwTPRJ/4M=; b=krWPd5M0Lh0eUkcqGxLVkmiDxv
	n4hZwAn6bzq7aQRu31b7X+tHgapIA3VdklI/GCpHXYEWCdrzWWXkOJN895btUDQ/5Th86eFP6HIh3
	GSXzxkkxuw9gq/G+1WZUUKY5QhMfT21KeQ4Kjhj/K7UQdrHxKHmYI/e773tRzjAxksC0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180646-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180646: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4c507d8a6b6e8be90881a335b0a66eb28e0f7737
X-Osstest-Versions-That:
    xen=cb781ae2c98de5d5742aa0de6850f371fb25825f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 May 2023 01:02:40 +0000

flight 180646 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180646/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair     11 xen-install/dst_host fail in 180639 pass in 180646
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180639 pass in 180646
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail in 180639 pass in 180646
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install          fail pass in 180639
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 180639

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180632
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180632
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180632
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180632
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180632
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180632
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180632
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180632
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180632
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180632
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180632
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180632
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  4c507d8a6b6e8be90881a335b0a66eb28e0f7737
baseline version:
 xen                  cb781ae2c98de5d5742aa0de6850f371fb25825f

Last test of basis   180632  2023-05-12 08:12:24 Z    1 days
Testing same since   180639  2023-05-12 20:39:51 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cb781ae2c9..4c507d8a6b  4c507d8a6b6e8be90881a335b0a66eb28e0f7737 -> master


From xen-devel-bounces@lists.xenproject.org Sun May 14 06:05:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 May 2023 06:05:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534366.831570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1py4qy-0004EM-Su; Sun, 14 May 2023 06:05:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534366.831570; Sun, 14 May 2023 06:05:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1py4qy-0004EF-Q7; Sun, 14 May 2023 06:05:08 +0000
Received: by outflank-mailman (input) for mailman id 534366;
 Sun, 14 May 2023 06:05:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1py4qx-0004E3-VE; Sun, 14 May 2023 06:05:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1py4qx-0004nI-Sr; Sun, 14 May 2023 06:05:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1py4qx-0008Va-Gv; Sun, 14 May 2023 06:05:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1py4qx-0004Ko-GE; Sun, 14 May 2023 06:05:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g0gRHXP8yBksr0PmMDMmm3FrlHNGQg3jxNC47TdsUr0=; b=zwc7H0D0U1KwE5aSiQziZvDI3O
	Qx+Ky+IhRHaxHGMEyesZ0r4aORmFuu/XF9tFbDc5YIzGMMQ+PC4jTLke2fHcNrbn8JcqybN6GowCd
	t3N5seDAg8UFuQrvgRQcAGR/DfFCgmRTFfrFY3g7iRqlllwnvJO2SezwJxyLPyf3+IE8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180651-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180651: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d4d58949a6eac1c45ab022562c8494725e1ac094
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 May 2023 06:05:07 +0000

flight 180651 linux-linus real [real]
flight 180661 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180651/
http://logs.test-lab.xenproject.org/osstest/logs/180661/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d4d58949a6eac1c45ab022562c8494725e1ac094
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   27 days
Failing since        180281  2023-04-17 06:24:36 Z   26 days   49 attempts
Testing same since   180651  2023-05-13 15:33:17 Z    0 days    1 attempts

------------------------------------------------------------
2384 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 301061 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 14 10:48:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 May 2023 10:48:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534389.831581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1py9GO-0006lZ-7T; Sun, 14 May 2023 10:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534389.831581; Sun, 14 May 2023 10:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1py9GO-0006lS-45; Sun, 14 May 2023 10:47:40 +0000
Received: by outflank-mailman (input) for mailman id 534389;
 Sun, 14 May 2023 10:47:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1py9GM-0006lI-Td; Sun, 14 May 2023 10:47:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1py9GM-0003Hh-PM; Sun, 14 May 2023 10:47:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1py9GM-0000Cx-D3; Sun, 14 May 2023 10:47:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1py9GM-0003x7-Cb; Sun, 14 May 2023 10:47:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ifw/Ekum9yr/RTRtIZgpxgWJKHQfgNvgBPbM0/pS7a4=; b=jE9zsQJkQBntEE14mZceNwMjC5
	qwh+Ir4xR2dh9uHfdEYf4U7BKLFuJl89S0Gydw77BxAiLCSSC9B2nLamYlFmWchpWLAev/wwUffHf
	hWMCERq+8CqgjhNOg0t/ZHSC+yipZz5GqMDb4mGBCjc4zGTVZtbDLihJF7cBTD0v31Os=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180659-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180659: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8844bb8d896595ee1d25d21c770e6e6f29803097
X-Osstest-Versions-That:
    qemuu=debca86cad28192f82f741700bc38845f17d5c10
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 May 2023 10:47:38 +0000

flight 180659 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180659/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180643
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180643
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180643
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180643
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180643
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180643
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180643
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180643
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8844bb8d896595ee1d25d21c770e6e6f29803097
baseline version:
 qemuu                debca86cad28192f82f741700bc38845f17d5c10

Last test of basis   180643  2023-05-13 08:38:30 Z    1 days
Testing same since   180659  2023-05-14 01:40:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Richard Henderson <richard.henderson@linaro.org>
  Stafford Horne <shorne@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   debca86cad..8844bb8d89  8844bb8d896595ee1d25d21c770e6e6f29803097 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun May 14 14:50:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 May 2023 14:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534482.831625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyD2c-0007SB-3L; Sun, 14 May 2023 14:49:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534482.831625; Sun, 14 May 2023 14:49:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyD2c-0007S4-0B; Sun, 14 May 2023 14:49:42 +0000
Received: by outflank-mailman (input) for mailman id 534482;
 Sun, 14 May 2023 14:49:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyD2a-0007Ru-10; Sun, 14 May 2023 14:49:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyD2Z-0000Rr-To; Sun, 14 May 2023 14:49:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyD2Z-0005tF-E4; Sun, 14 May 2023 14:49:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyD2Z-0001uN-Db; Sun, 14 May 2023 14:49:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OLUGZ4erObsRLYTqhTCPOSnuF8F3Yj9bnjKgpDfYXlI=; b=yRe3408irgQmmmwW8X7ym1Myko
	hqor9GNW1XvsbXGni+9sUm2e/ufCSVEEiVPdUzix6GawfrY1kYII0R76vZfOygkTqjKC3GFW4uHrk
	BhxmBLU5vlk8xgGblpQVYwS1fYB6UJtu3Xw7Z0H4yJtmEWRVoOyrtqd6ZZ2nwpcKjDH0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180660-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180660: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:guest-start/redhat.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4c507d8a6b6e8be90881a335b0a66eb28e0f7737
X-Osstest-Versions-That:
    xen=4c507d8a6b6e8be90881a335b0a66eb28e0f7737
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 May 2023 14:49:39 +0000

flight 180660 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180660/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 180646 pass in 180660
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 180646 pass in 180660
 test-amd64-i386-qemut-rhel6hvm-amd 14 guest-start/redhat.repeat fail pass in 180646

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180646
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180646
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180646
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180646
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180646
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180646
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180646
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180646
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180646
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180646
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180646
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180646
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  4c507d8a6b6e8be90881a335b0a66eb28e0f7737
baseline version:
 xen                  4c507d8a6b6e8be90881a335b0a66eb28e0f7737

Last test of basis   180660  2023-05-14 01:52:00 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 14 20:07:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 May 2023 20:07:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534524.831634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyI0O-00060y-FE; Sun, 14 May 2023 20:07:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534524.831634; Sun, 14 May 2023 20:07:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyI0O-00060r-Ch; Sun, 14 May 2023 20:07:44 +0000
Received: by outflank-mailman (input) for mailman id 534524;
 Sun, 14 May 2023 20:07:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyI0M-00060h-DY; Sun, 14 May 2023 20:07:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyI0M-00088X-8U; Sun, 14 May 2023 20:07:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyI0L-0004aX-Q5; Sun, 14 May 2023 20:07:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyI0L-0004BT-PZ; Sun, 14 May 2023 20:07:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iP/fftCAeHVgeXL1ycIcr2xyZJ743rBi658VI0XeT68=; b=gbxzvcy+mpiqYu71rEqpKBIuku
	EtHbECCUlHouS25JGX9fSHSrDpWkTG5rMDQQ0E+psLELP1WMehQsQmYvee2UaonJ77JULNT2RnSUR
	xtzIofoEdhQhVcmLqS+u8IaszxM/zCr8ax97VR9RVJMEvMOrTO85KXnjHvIFFqvEbbCc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180662-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180662: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=bb7c241fae6228e89c0286ffd6f249b3b0dea225
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 14 May 2023 20:07:41 +0000

flight 180662 linux-linus real [real]
flight 180663 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180662/
http://logs.test-lab.xenproject.org/osstest/logs/180663/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                bb7c241fae6228e89c0286ffd6f249b3b0dea225
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   28 days
Failing since        180281  2023-04-17 06:24:36 Z   27 days   50 attempts
Testing same since   180662  2023-05-14 06:10:20 Z    0 days    1 attempts

------------------------------------------------------------
2386 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 301876 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 14 21:50:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 14 May 2023 21:50:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534535.831644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyJbr-00007s-JS; Sun, 14 May 2023 21:50:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534535.831644; Sun, 14 May 2023 21:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyJbr-00007l-Gh; Sun, 14 May 2023 21:50:31 +0000
Received: by outflank-mailman (input) for mailman id 534535;
 Sun, 14 May 2023 21:50:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zp1L=BD=igalia.com=gpiccoli@srs-se1.protection.inumbo.net>)
 id 1pyJbp-00007f-Bp
 for xen-devel@lists.xenproject.org; Sun, 14 May 2023 21:50:29 +0000
Received: from fanzine2.igalia.com (fanzine2.igalia.com [213.97.179.56])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 563365ce-f2a1-11ed-b229-6b7b168915f2;
 Sun, 14 May 2023 23:50:27 +0200 (CEST)
Received: from [177.189.3.227] (helo=[192.168.1.60])
 by fanzine2.igalia.com with esmtpsa 
 (Cipher TLS1.3:ECDHE_X25519__RSA_PSS_RSAE_SHA256__AES_128_GCM:128) (Exim)
 id 1pyJaT-009SDU-MC; Sun, 14 May 2023 23:49:06 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 563365ce-f2a1-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=igalia.com;
	s=20170329; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID:Sender:Reply-To:
	Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
	Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
	List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=zygIhDyV8ht+xJs1lmqnA3sy7ZHDhj2x5W8U9MyDTpM=; b=AtfoJ+yRkuDm9gCWovg7bgbFzv
	zvtG8omqD3Ee2e5vtlK1211bbhN7KzcG9ZOV/p9gXghTUS1HF9EaelwoM4SQo/b4sWNMb2wsJsVAh
	ryLGjfB/dL5jQW5LWnI6h4gq4lKj2LKWt1eQxsKNKi4CYTHPi2D+7G5csc6so1Mr3IZqjzYXfvDre
	Wa2IYRFjoBiG9xBoaM3MR/ceTkMFFWkX6SNnbTUn4IT7Xv2Ef1OD3QTBmf65tsdUdiBBhGBc7WMfJ
	YjLmEKzf9bN3ap+iE91yCmvTRvaupsrf+ACH5S2PsgkJyDm/ZJKotc9YiiD6FM6FqFSv7z1nQGLn3
	n3J6H6Qw==;
Message-ID: <b4733705-7014-49c6-57ab-a67459954f28@igalia.com>
Date: Sun, 14 May 2023 18:48:50 -0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [patch V4 00/37] cpu/hotplug, x86: Reworked parallel CPU bringup
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>
References: <20230512203426.452963764@linutronix.de>
From: "Guilherme G. Piccoli" <gpiccoli@igalia.com>
In-Reply-To: <20230512203426.452963764@linutronix.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 12/05/2023 18:06, Thomas Gleixner wrote:
> Hi!
> 
> This is version 4 of the reworked parallel bringup series. Version 3 can be
> found here:
> 
>    https://lore.kernel.org/lkml/20230508181633.089804905@linutronix.de


Hi Thomas, thanks for the series! I was able to test it on the Steam Deck
(on top of 6.4-rc2), and everything is working fine; I also tested S3
suspend/resume, which works as expected.

Some logs from boot time:


Parallel boot
[    0.239764] smp: Bringing up secondary CPUs ...
[...]
[    0.253130] smp: Brought up 1 node, 8 CPUs


Regular boot (with cpuhp.parallel=0)
[    0.240093] smp: Bringing up secondary CPUs ...
[...]
[    0.253475] smp: Brought up 1 node, 8 CPUs

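The bring-up duration in each log can be recovered from the timestamps alone. A minimal C sketch of that arithmetic (the log text is copied from the message above; the parsing helpers are illustrative and not part of the series):

```c
#include <stdio.h>

/* Parse the leading "[   seconds]" timestamp of a dmesg-style line,
 * e.g. "[    0.239764] smp: Bringing up secondary CPUs ...". */
static double dmesg_ts(const char *line)
{
    double t = 0.0;
    sscanf(line, "[%lf]", &t);
    return t;
}

/* Bring-up duration for one boot: the delta between the "Bringing up"
 * line and the "Brought up" line. */
static double bringup_seconds(const char *first, const char *last)
{
    return dmesg_ts(last) - dmesg_ts(first);
}
```

For the logs above this gives 0.253130 − 0.239764 ≈ 0.0134 s for the parallel boot and 0.253475 − 0.240093 ≈ 0.0134 s for the regular boot; on a small 8-CPU system like this the two are nearly identical.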

Feel free to add (to the series):

Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com> # Steam Deck

Cheers,


Guilherme


From xen-devel-bounces@lists.xenproject.org Mon May 15 02:14:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 02:14:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534546.831655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyNiy-0000Gl-0G; Mon, 15 May 2023 02:14:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534546.831655; Mon, 15 May 2023 02:14:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyNix-0000Gd-RE; Mon, 15 May 2023 02:14:07 +0000
Received: by outflank-mailman (input) for mailman id 534546;
 Mon, 15 May 2023 02:14:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hnKm=BE=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1pyNix-0000GP-9g
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 02:14:07 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061b.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28d1e525-f2c6-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 04:14:02 +0200 (CEST)
Received: from DB6PR07CA0162.eurprd07.prod.outlook.com (2603:10a6:6:43::16) by
 PA4PR08MB6304.eurprd08.prod.outlook.com (2603:10a6:102:e1::24) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6387.30; Mon, 15 May 2023 02:13:53 +0000
Received: from DBAEUR03FT058.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:43:cafe::1d) by DB6PR07CA0162.outlook.office365.com
 (2603:10a6:6:43::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.14 via Frontend
 Transport; Mon, 15 May 2023 02:13:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT058.mail.protection.outlook.com (100.127.142.120) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.13 via Frontend Transport; Mon, 15 May 2023 02:13:53 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Mon, 15 May 2023 02:13:53 +0000
Received: from 329b8123a383.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 85C33ADA-7D22-40CC-A1C6-97FBE7727AD8.1; 
 Mon, 15 May 2023 02:13:47 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 329b8123a383.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 15 May 2023 02:13:47 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by PAXPR08MB6560.eurprd08.prod.outlook.com (2603:10a6:102:12d::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 02:13:44 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%3]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 02:13:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28d1e525-f2c6-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CYAGLuYTobLBNzIiopQ7l4ZSPYI8Agebm1EmgHfrBJE=;
 b=bzAHGNH2a1pEJwVh/0oDdNAbq6p8xrONSO9CIkMlO2jvgw2vqjffnd5lTcLf9tlTyzAFeS/VJX5JnuB0UiILlqOdBI/L/CSwoDhlt2ansNb3ajdRW9orY/MwTRVt/s5FwzzmEn3lCa2lHDtAeKroU2ziHFlNu3L/g5zF+Eg/tJc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
From: Henry Wang <Henry.Wang@arm.com>
To: Michal Orzel <michal.orzel@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Rahul Singh
	<Rahul.Singh@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: [PATCH 1/2] xen/arm: smmuv3: Constify arm_smmu_get_by_dev()
 parameter
Thread-Topic: [PATCH 1/2] xen/arm: smmuv3: Constify arm_smmu_get_by_dev()
 parameter
Thread-Index: AQHZhN8XLdBPXF4izkqlyUf4//7Z469am+6Q
Date: Mon, 15 May 2023 02:13:44 +0000
Message-ID:
 <AS8PR08MB7991A6AF52CF606DA6A67B0592789@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-2-michal.orzel@amd.com>
In-Reply-To: <20230512143535.29679-2-michal.orzel@amd.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: B20E6FE70081764B8E2A13E388F40B0D.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|PAXPR08MB6560:EE_|DBAEUR03FT058:EE_|PA4PR08MB6304:EE_
X-MS-Office365-Filtering-Correlation-Id: 938e02c9-95e2-46c0-1f5d-08db54ea07dd
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6560
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f04e5c14-aec7-48a9-d385-08db54ea02a2
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 02:13:53.3913
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 938e02c9-95e2-46c0-1f5d-08db54ea07dd
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6304

Hi Michal,

> -----Original Message-----
> Subject: [PATCH 1/2] xen/arm: smmuv3: Constify arm_smmu_get_by_dev()
> parameter
> 
> This function does not modify its parameter 'dev' and it is not supposed
> to do it. Therefore, constify it.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
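The change under review follows a common C pattern: a lookup function that only reads through its pointer parameter declares that parameter pointer-to-const, so callers holding a const object can use it without a cast. A self-contained sketch of the idea (the names and data here are illustrative stand-ins, not the actual Xen SMMUv3 code):

```c
#include <stddef.h>

/* Hypothetical device/SMMU records, for illustration only. */
struct device { int id; };
struct smmu   { int dev_id; };

static struct smmu smmus[] = { { .dev_id = 1 }, { .dev_id = 2 } };

/* 'dev' is never written through, so it is declared const; the returned
 * record may still be modified by the caller, hence the non-const return. */
static struct smmu *smmu_get_by_dev(const struct device *dev)
{
    for (size_t i = 0; i < sizeof(smmus) / sizeof(smmus[0]); i++)
        if (smmus[i].dev_id == dev->id)
            return &smmus[i];
    return NULL;
}
```

Constifying such parameters costs nothing at runtime and lets the compiler reject accidental writes through the pointer.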


From xen-devel-bounces@lists.xenproject.org Mon May 15 03:11:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 03:11:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534550.831664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyOcW-00071j-5Q; Mon, 15 May 2023 03:11:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534550.831664; Mon, 15 May 2023 03:11:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyOcW-00071c-2z; Mon, 15 May 2023 03:11:32 +0000
Received: by outflank-mailman (input) for mailman id 534550;
 Mon, 15 May 2023 03:11:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyOcV-00071S-3U; Mon, 15 May 2023 03:11:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyOcV-0000tx-0B; Mon, 15 May 2023 03:11:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyOcU-00020M-G2; Mon, 15 May 2023 03:11:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyOcU-00049Y-Fc; Mon, 15 May 2023 03:11:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dKJ9rOjROPS7B8S6V6cb7XJXXVbG94sTg2ky4ijBvNg=; b=QbYbQNmITFnzGKsu0iUmoWLxU8
	InXtSb7eFYFYtdYtmC0R3dGio2ueCy+MHVBnftbzlP87hgyOvOMVoA4qApyD+wGGB68PKAFD3oh3B
	m6csvVDHtWyvkAPmL6fHjjZZevIxRGoqLH/KVLn3gmM6Rn8OZbUvemXNsYgIfMG9n2qE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180665-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180665: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=80bc13db83ddbd5bbe757a20abcdd34daf4871f8
X-Osstest-Versions-That:
    ovmf=d3225577123767fd09c91201d27e9c91663ae132
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 May 2023 03:11:30 +0000

flight 180665 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180665/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 80bc13db83ddbd5bbe757a20abcdd34daf4871f8
baseline version:
 ovmf                 d3225577123767fd09c91201d27e9c91663ae132

Last test of basis   180635  2023-05-12 13:40:43 Z    2 days
Testing same since   180665  2023-05-15 01:12:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>
  Guo Gua <gua.guo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d322557712..80bc13db83  80bc13db83ddbd5bbe757a20abcdd34daf4871f8 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 15 04:21:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 04:21:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534557.831675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyPhq-00067U-6L; Mon, 15 May 2023 04:21:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534557.831675; Mon, 15 May 2023 04:21:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyPhq-00067L-3G; Mon, 15 May 2023 04:21:06 +0000
Received: by outflank-mailman (input) for mailman id 534557;
 Mon, 15 May 2023 04:21:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyPho-000677-Mh; Mon, 15 May 2023 04:21:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyPho-0002bo-Kq; Mon, 15 May 2023 04:21:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyPho-0005om-8g; Mon, 15 May 2023 04:21:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyPho-0002F5-8D; Mon, 15 May 2023 04:21:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oH4AEPnGxVzsXS0u+Y5+S3XgOunTSMVxwHk0KNMgi7E=; b=eim6uxhcZ3sVAVIaie2gBF6pco
	N5OpSgSCmyoyhGgphzQsn8kE6YQI8QlaG+TH0r9mWZwxIf+KD4sYmOnfWP6Fkupi+sF8ZCU9WE22D
	CpLZAqFfK4RWk7r15xOIfQCwE8Qhu0hrL9NGqY/5U36XFYPtRoZG65Vc2FCxMmEknOn0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180664-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180664: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 May 2023 04:21:04 +0000

flight 180664 linux-linus real [real]
flight 180667 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180664/
http://logs.test-lab.xenproject.org/osstest/logs/180667/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180667-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   28 days
Failing since        180281  2023-04-17 06:24:36 Z   27 days   51 attempts
Testing same since   180664  2023-05-14 20:12:01 Z    0 days    1 attempts

------------------------------------------------------------
2389 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 302499 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 15 06:47:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 06:47:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534569.831686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyRyw-0003tL-Jd; Mon, 15 May 2023 06:46:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534569.831686; Mon, 15 May 2023 06:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyRyw-0003tE-F6; Mon, 15 May 2023 06:46:54 +0000
Received: by outflank-mailman (input) for mailman id 534569;
 Mon, 15 May 2023 06:46:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HQr=BE=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pyRyv-0003t6-M7
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 06:46:53 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20614.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::614])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 44291080-f2ec-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 08:46:50 +0200 (CEST)
Received: from MW2PR2101CA0018.namprd21.prod.outlook.com (2603:10b6:302:1::31)
 by MN0PR12MB5787.namprd12.prod.outlook.com (2603:10b6:208:376::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 06:46:46 +0000
Received: from CO1NAM11FT057.eop-nam11.prod.protection.outlook.com
 (2603:10b6:302:1:cafe::16) by MW2PR2101CA0018.outlook.office365.com
 (2603:10b6:302:1::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.4 via Frontend
 Transport; Mon, 15 May 2023 06:46:45 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CO1NAM11FT057.mail.protection.outlook.com (10.13.174.205) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.14 via Frontend Transport; Mon, 15 May 2023 06:46:45 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 15 May
 2023 01:46:44 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 15 May
 2023 01:46:32 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 15 May 2023 01:46:30 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44291080-f2ec-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AvaPYCm/YWSl2rscrd+0iR0LjXjJmYkqXN+3HHnjl+gaJLoOfJAZhMsc/DA1p+c46ebuaT5lcAs+ealuHOWffuvfDYCEXu/ojNi3dB4olQJz4BTz6dmLVaptD64j7z9SoAO+fuip9/qaoBmESbBVmmPqXLndx/th0KUGVmfjrw3Y9zRUsZMBzNHi1OlnyhnvB2X2g9NmVKhRf9yHHIQRVc8DUYr3p0uEGh4i9mggi1QtEBVYUwO29JnLfX/SvXZx91Iqg9ucEjLQyYdc+7V35ETZfhaRer5G6oM87PAj1Mvm/1ZnuB0evOoz1SHnJti89binX+kXhegNP9oaPaoEEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cQjuZJ+G8w7Al9J6uPj6xfFtjYRtiwRu9tQGQhkRkYQ=;
 b=KqNQKocoYfph6c+Lb8uEyYbZp7580Zbdj1vfK6nyh8njFpQWiDzjLZiBcjAhNeJnDbFBXmKzSsLK5tiwEWT8N56/IVHjW17raCHcChUseJyl5/yEQpQ/BQmB0Oc0dHrYcgdbB8oDkON5Shb6ATDaulnmadi3V7m0+8bhjN6c6VTCXNGKlxfKidfIcyT0DcIqsZap8bQi/mmqOBq0wLlHsucJy12I80GFw8QvoZegJFEBr0CWLHsvP2rgbdSTEjviuDo7OvXKCmizu8HC95SqeSgPknDQqIFMHN3cO9otqEEN8tMKC9kpYy6htOZ9C4MSVes89ZjfMQHhp36pIVhyFw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cQjuZJ+G8w7Al9J6uPj6xfFtjYRtiwRu9tQGQhkRkYQ=;
 b=xrb8gPYyMFplBoeOc8upQ8OCzti6eXTQ2P7/9l+7R2KiSYlpLuqzr94fulvZkFZnLePeQAvITG5tMPbdbX763BahDZhf9IHK4Tvlc6S+ycNHdLanLN3gwKuC+5sSgjSKlNvbIzWioDnUf67FCdRABkXirpN0VtS6b2EDeBIa8zk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <2cdb4e1c-5151-f820-5ceb-35f782842393@amd.com>
Date: Mon, 15 May 2023 08:46:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
To: Ayan Kumar Halder <ayankuma@amd.com>, <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <bertrand.marquis@arm.com>, Rahul Singh
	<rahul.singh@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
 <1dadc8b9-00be-55f9-e8b7-f867eacf20b1@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <1dadc8b9-00be-55f9-e8b7-f867eacf20b1@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT057:EE_|MN0PR12MB5787:EE_
X-MS-Office365-Filtering-Correlation-Id: 11c28288-e4c9-41fb-0fb8-08db55102650
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7VtMKZrN6H+5vwomrjYPRe7yyoo9+cyh8vQBjaV2ewxZnPmPql75TUAeC8l9dDOe6qwbHn6u/JJrGfPt2nCTvt3dmQQnJbUuXpqpXbiAxVNdMFPDD+2UrSdiaiVmHWP+JyhAtIm7EFEZt70bKjv3MTTzMb2mJaimalfWgbI8GUYuYfWOKmVzI0zxFx91G6piSsBHkpTitDr2p0xcT1OOEoQx6Z+73PLNkN76+Ak2vQonXo6nlz8ib5O1uRsw5hQbnarFHHAx04OR5moyc3855NVPw3Bby0Jowrd1jKJX5FSHbc6E703qDq4pHz82tABU4bDV8kwli5x5qgbLlyZZSDMXRsp0QRiXpvJuViByxwHTeKpj8w+zdJdpSRVp9pOZezPv6o1LEh9k7YTNu4Va3+9nj5YheXZqxTmDAmZJOr5fg7515SVtpTorYwRFeu6rKPxP/bLoK+HPwxAwoq5O/KUkucnYhmVJ07rauXAeCzIDHdhoUNtQGs+hv7bEseBeOWs+ywCoY6wMVKJIkwuDCMYLI1cHkFVm8oOmZOwP7K3qwCEZvU25+ovhpFl0W/mLPqIqMmkuEnnyBP29HjoZC8OOZ2gAd5bHM01ytLf2SILmUTxUFiNp5wTPESUFgPzjD9lTGLe4L4sOSiDxJefBqZgrarg9sUyDCMIVPPsgiEjRTiAXK/1xBwTV/RPZNrAcblhHtXjwtVcEb4UW22/Espk88VsIXHNrlEtn0u/lpIgLMKSn+ChABY62TrnB8HsecOHYW9FeLKBhu8VYTd4jbobIxJ+MMNTGCZ5AiwmyoSw=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(376002)(136003)(346002)(451199021)(36840700001)(40470700004)(46966006)(36860700001)(47076005)(186003)(2616005)(41300700001)(31686004)(6666004)(426003)(53546011)(336012)(83380400001)(26005)(40460700003)(478600001)(16576012)(54906003)(110136005)(70206006)(70586007)(4326008)(82740400003)(40480700001)(81166007)(356005)(316002)(5660300002)(8676002)(8936002)(44832011)(86362001)(31696002)(2906002)(82310400005)(36756003)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 06:46:45.1957
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 11c28288-e4c9-41fb-0fb8-08db55102650
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT057.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB5787

Hi Ayan,

On 12/05/2023 18:59, Ayan Kumar Halder wrote:
> Hi Michal,
> 
> On 12/05/2023 15:35, Michal Orzel wrote:
>> At the moment, even in case of a SMMU being I/O coherent, we clean the
>> updated PT as a result of not advertising the coherency feature. SMMUv3
>> coherency feature means that page table walks, accesses to memory
>> structures and queues are I/O coherent (refer ARM IHI 0070 E.A, 3.15).
>>
>> Follow the same steps that were done for SMMU v1,v2 driver by the commit:
>> 080dcb781e1bc3bb22f55a9dfdecb830ccbabe88
>>
>> The same restrictions apply, meaning that in order to advertise coherent
>> table walk platform feature, all the SMMU devices need to report coherency
>> feature. This is because the page tables (we are sharing them with CPU)
>> are populated before any device assignment and in case of a device being
>> behind non-coherent SMMU, we would have to scan the tables and clean
>> the cache.
>>
>> It is to be noted that the SBSA/BSA (refer ARM DEN0094C 1.0C, section D)
>> requires that all SMMUv3 devices support I/O coherency.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>> There are very few platforms out there with SMMUv3 but I have never seen
>> a SMMUv3 that is not I/O coherent.
>> ---
>>   xen/drivers/passthrough/arm/smmu-v3.c | 24 +++++++++++++++++++++++-
>>   1 file changed, 23 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index bf053cdb6d5c..2adaad0fa038 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -2526,6 +2526,15 @@ static const struct dt_device_match arm_smmu_of_match[] = {
>>   };
>>   
>>   /* Start of Xen specific code. */
>> +
>> +/*
>> + * Platform features. It indicates the list of features supported by all
>> + * SMMUs. Actually we only care about coherent table walk, which in case of
>> + * SMMUv3 is implied by the overall coherency feature (refer ARM IHI 0070 E.A,
>> + * section 3.15 and SMMU_IDR0.COHACC bit description).
>> + */
>> +static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;
>> +
>>   static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
>>   {
>>   	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>> @@ -2708,8 +2717,12 @@ static int arm_smmu_iommu_xen_domain_init(struct domain *d)
>>   	INIT_LIST_HEAD(&xen_domain->contexts);
>>   
>>   	dom_iommu(d)->arch.priv = xen_domain;
>> -	return 0;
>>   
>> +	/* Coherent walk can be enabled only when all SMMUs support it. */
>> +	if (platform_features & ARM_SMMU_FEAT_COHERENCY)
>> +		iommu_set_feature(d, IOMMU_FEAT_COHERENT_WALK);
>> +
>> +	return 0;
>>   }
>>   
> All good till here.
>>   static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
>> @@ -2738,6 +2751,7 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>>   				const void *data)
>>   {
>>   	int rc;
>> +	const struct arm_smmu_device *smmu;
>>   
>>   	/*
>>   	 * Even if the device can't be initialized, we don't want to
>> @@ -2751,6 +2765,14 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>>   
>>   	iommu_set_ops(&arm_smmu_iommu_ops);
>>   
>> +	/* Find the just added SMMU and retrieve its features. */
>> +	smmu = arm_smmu_get_by_dev(dt_to_dev(dev));
>> +
>> +	/* It would be a bug not to find the SMMU we just added. */
>> +	BUG_ON(!smmu);
>> +
>> +	platform_features &= smmu->features;
>> +
> 
> Can you explain this change in the commit message ?
I think it is already explained by saying that in order to advertise the *platform* feature, all
SMMUs need to report it. If at least one does not, the feature is disabled. This is exactly
what this line does: it ANDs the platform features with the features of the just-probed SMMU
(arm_smmu_dt_init() is called once for each SMMU).
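
To illustrate, the AND-accumulation described above can be sketched roughly like this
(a hypothetical standalone example, not the actual Xen driver code; the feature bit and
probe function names here are made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical feature bit, standing in for ARM_SMMU_FEAT_COHERENCY. */
#define FEAT_COHERENCY (1u << 0)

/*
 * Start with the feature assumed present. Each probed SMMU ANDs in its
 * own feature mask, so a single SMMU lacking the bit clears it for the
 * whole platform.
 */
static uint32_t platform_features = FEAT_COHERENCY;

/* Called once per SMMU, mirroring the per-device probe path. */
static void probe_smmu(uint32_t smmu_features)
{
    platform_features &= smmu_features;
}
```

After all probes, the coherent-walk feature is advertised only if
(platform_features & FEAT_COHERENCY) is still set.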

~Michal


From xen-devel-bounces@lists.xenproject.org Mon May 15 07:30:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 07:30:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534573.831700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pySfX-0000o6-4v; Mon, 15 May 2023 07:30:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534573.831700; Mon, 15 May 2023 07:30:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pySfX-0000n5-1K; Mon, 15 May 2023 07:30:55 +0000
Received: by outflank-mailman (input) for mailman id 534573;
 Mon, 15 May 2023 07:30:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xh4z=BE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pySfV-0000k9-PC
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 07:30:53 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20623.outbound.protection.outlook.com
 [2a01:111:f400:7d00::623])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6bbf71ad-f2f2-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 09:30:52 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8068.eurprd04.prod.outlook.com (2603:10a6:20b:3b5::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 07:30:50 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.029; Mon, 15 May 2023
 07:30:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bbf71ad-f2f2-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fCeaeO4oWmINosBRzuxAyCfJI6uEjqgk03fAwabFsDENaRKQ1En7DRY6bPWVa0QVtIY7R/w/uQCrbg8rY8sFv3JVDjnNI849IVTSOGsIg33K5Cso04PbLv+WVkhNj2WFQ9hhIJvcP9rdGJWoXmz/2yHIIiQT4x9oZfL5bpIt0Y4yZOKuaQlTLwfvKLMz12c9w7WFnP1jOKqLsY0ud/yI3XDWOq4V2IHqO/pEMGtzrP5VTs+ic5tfoZ3Fk1NSwsbtv1D5aglLxYZMlA5cFHtdTRXbdpkoNZu7vI/hztxiE+ad5ug9Q94rq7AZkXaJdAV+KJtSYNji30o5jKso+/eAzw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eDofIGobZV3ZfZtNUodE5fycUG4IC6S48IzkCYA1KAY=;
 b=FiIRiunyfnuxm5XX8q74pIetpzY3+kzMewRjHDZ1I+kOTxpdCyi797S/jHUF61DCtSJfQ8chYT8n8EorTxTK1EfLi3ykOecnFmRJrQ1l+7byC+wzUgsiv9budY6LOuDMQkL8bnsOtydNxOXm2nj5zFEHGKubj58ciGtEuAzKhnasxwAXFwjQZigIApWRUwNMhLLW/RN93cpFWNGy+EfBqcMlqc6Xgs9075g2IqGv816WfXwKyp+/MvpZ+HUvO9T/qJXw1BPO32s6+hs+jRdtkjcDlvD7+ZFZPXKCpGYazy/2mkFTXgzDgzOLUJ1BasNHdU0qYi3JT0PCbl2vHVCT+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eDofIGobZV3ZfZtNUodE5fycUG4IC6S48IzkCYA1KAY=;
 b=luMccrqeDmXgqEzF8zU4uUxHeHtWgBzRMdpt/Sh94u4A2ddmuZDGRu7lVNma75sMtLfQL2lTinJgMCFhGGjo2B0t99Y1aeW/qemLLjH6+hIx1b8feovrkEarjoSmX6HDHrx6brT5DTufj1tN/DXwTrLXd9kaguSSzS9YndDjr5tMPK5dlN8pYCqKgdFcI2uOt9LI5Wo5FIjKJx8QRKeHDdp/gnyWTVpwl93tmGkjQbTz6kzQDVxjYk583E1rmsi35bZpD/J7HJontbSplPdTel5wgnDYL1KhuAUjCwD6BkkTuDfbvmFOYZYlgAm49wQeN+p0ywvD9zs9r+Q7taE04Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cdb08ae5-1cbb-ac43-67bb-0c73bde7f479@suse.com>
Date: Mon, 15 May 2023 09:30:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v2 5/8] pci/arm: Use iommu_add_dt_pci_device()
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Julien Grall <julien@xen.org>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
 <20230511191654.400720-6-stewart.hildebrand@amd.com>
 <61ae93e8-ac8f-b373-4fa7-0a8aeb61ef4f@suse.com>
 <f7d78b4e-3a16-d342-59d2-caa4d2b75b9c@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f7d78b4e-3a16-d342-59d2-caa4d2b75b9c@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: VI1PR0102CA0023.eurprd01.prod.exchangelabs.com
 (2603:10a6:802::36) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8068:EE_
X-MS-Office365-Filtering-Correlation-Id: 01a622b9-848f-4de6-7bba-08db55164ec7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CSoXzY4dSEVBT5BhiQCrdCrJg3zAqVpqmFwq1HCqYVspZXmQlH1cxoAQb8v6JRLtQoja5Gb0RwOpzW7rWe7Cz9DrUBTjESiY0wyxK2y3e1vYj2nON0865KsbtBY8tW9bTn87Kf09BvmyRmsgAQcxqQ99zNN9ZgMo4sCtMCFLm6IjnCCiKJP62EtZyagZC9qupaE6UxArNN0YHhggBhQwyUfBB8HAWqCX+6LsE8X7gHj8WMOC8A2Tbmq/9yuwdGtF9hKx/8jFSxoeFmk9kT8TihNKlsHdLg0HSNwzo60ZnspD65eSNERFNTOwnbdrlNOrTPweXWLUPQwqIUvgv3125VeQp+x1lxWFk+5zRqpovCheeji81sS4aXWDO5AGDd4RkZmaK2XDipS0MZp2TIswSXtcqOOhlxFxXpZREsSOIxLefTmg/RuCAFWBo7tb3i8ruBDq6Z3wcK8AAYNhor2k9aZABIsjH4E2LQFT/QryDI8tkIj7nOBk7czKGZKXhSvP4Gyf2312QOzlJ1gK34IFPYzVwDk4EtrRCphuhGDUrNymR3X55WzrjCaS8TPj1g9oCzVdEzPFWw0wET3hvij/A1GTrfCGmZL1XUgt0jemhFQMSY0jdT+/NMYN/1GZOaOdIxu0ckHUHzz+/xLl9iqeeQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(346002)(39860400002)(366004)(396003)(136003)(451199021)(31686004)(66556008)(6916009)(4326008)(66476007)(478600001)(66946007)(54906003)(6506007)(186003)(53546011)(6512007)(38100700002)(2616005)(36756003)(8936002)(8676002)(2906002)(6486002)(316002)(31696002)(86362001)(5660300002)(41300700001)(7416002)(26005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bVM4eDVrakZGZi9CK2VBbWxlblV4WW1GanNLUEVINzJsUm1XMk5PU0NhZHYr?=
 =?utf-8?B?cU1sOWw4VktheGh2anhSTnZ1cEw0c05EODVleTdVZmRXUVJGcVlxUlorTlFt?=
 =?utf-8?B?bXFzM3NkV1JlNjZzUzJhWHBHVVF0aXVvckw4cDczdFZrcHcrdGQxM0hCcy8y?=
 =?utf-8?B?T3ZDeGwxbG13QTRUM2lzcHEzaUZjMXkrVGx3ckNqbmRWWW5wdUorclpXSE1C?=
 =?utf-8?B?Uk9NWkc5M0REWmtLeHFkY3dsVlBCTEtPeGpJSU5kQ01JUnhSMUJDSk9Yam01?=
 =?utf-8?B?UlNXMkQ1THhjeFI0QmhQUlVCcjNkV2owcjRFckNHMGNFVU44N05ySmJQaTIw?=
 =?utf-8?B?bEhkSDUyenZpamU4VmdLaEd0dElvb29wNXpWSzQwVFZ1RXJEbEF1SkF0dVND?=
 =?utf-8?B?N3drTk13M1owS3gwZUcxcCtZbzJkeWVYMmJzTERVQ01sOWVPZnlaNm53Q0tE?=
 =?utf-8?B?UzduclZoclRabWptMkRkV05LSlcvRWpaTmZRWThVM0ZCV3ZIbFl3cmpPTFJB?=
 =?utf-8?B?K1FsTXpVRG15SlVldjVoQkh1WFpnM1NuNEdkL1VkQWJyVGtnTlFtdmxDV1dJ?=
 =?utf-8?B?TEVuTHZCWWVkTFFzOVRXdlM3UnptSy9mZ3N2MzNLbHBwZTE3SW81SExlQzVz?=
 =?utf-8?B?TDJUZGwybHp1Mi9kd1dDaFdVOGovMld6MlFNOGpoRTZLc1RVajRFYVNEVFIv?=
 =?utf-8?B?WWU0akVDSktuRjQwdTRweWdHbDlWOTJ2blZCMHF1eHExU2svaSthQzhROXZj?=
 =?utf-8?B?QzJwWnZpcm1Ydng3YVVwU3JDMGlhejZuSDVGU2tzNnZTN1d5bE1jYTZLYVBL?=
 =?utf-8?B?MHFYMG5rbDlZTWZINVlYQzFhV1NYa0Z6RURrekFOMTZVbDRGZGg0OG9WNlVq?=
 =?utf-8?B?Z3lnbENCTkdHdnlwYkcwVUZrM3l3VVVVZTEwa3ZCNUJOM0ttaWd3Z2tRbzNj?=
 =?utf-8?B?Tk1jMGVzZldPZW5ueHNOdWpLM1crM3p1cWtXbDlvTFFBTGdFamcyVGFTSG96?=
 =?utf-8?B?YmFjVEd5WlcwWnVWeUtzT2hHdVhEYjQ3SFErUmpZVWk1T01zSVFPVS82TWg5?=
 =?utf-8?B?T28vQS95c3V6NUxCSEE4SE9lcVJpRTIxZ0RWcWFMVTlCd2RaWE9TcURBZWRG?=
 =?utf-8?B?QWQzOTVwZ1RKZWlWaTFJNEt5UE9ya0luTGl4RTdrWS84L3FJcTNibThBOC9M?=
 =?utf-8?B?OTJvZnVDRGtjaUNleWdrUXNxY1lFMWRZVXNxR0dDQVpkVjlvdlJ4ZXRsdkhU?=
 =?utf-8?B?S3lNbytsbk1vTE5rSk1Sd05VR0JndHFnZUs5dGlhd0VFNkJCK05FeWR6MmxY?=
 =?utf-8?B?NHZUR01sOUdrM1RJSEN5NCszZVgzTTMvOC9kSDJ1K3BmYnNpajR2VW4xN1ox?=
 =?utf-8?B?Y1Y0VE9wR1JmS25mbU5vcHdXS3VHcmlMRzVLSjBvVTlWQVFlb0NMN0pmK1VY?=
 =?utf-8?B?Ukp6aUhiRGpoSWVlWFc5cjVkRTk4TVNEVk1vV0NVUHpiSWpxY3VCdmhtYXFm?=
 =?utf-8?B?NlU5Q053UzJGQ0dDd1hWLzlqQ1J1SFNzdXdtZU9CQWJhK09xaXl0ellJTTdj?=
 =?utf-8?B?ZExTR0kyMEJxNlNTUFoyNFVEaEo5dUlWMDZWQk5WeEMvVjJkZThTdWxNVm5T?=
 =?utf-8?B?TUc1ZFdMSEZhcUk2b0M1bmJBWXJnbDVzQU40ck1rZmg1VmVNQmEvNjhrUzdk?=
 =?utf-8?B?VU11UE5Hd2dXNjhCeDI2NmFsazAwQWVMM2N6OElNZnJtbERkTnMrYlF6RUpK?=
 =?utf-8?B?Y1U2SmpwaWsyN0ZrKytvdlN2MWZoNm16VjljcjN5T2pEWmJYTXRJd1FYeUY4?=
 =?utf-8?B?T3NBS0Y3Uzh4N3RObG5KRU0wYkczdncyM0h0RW5Nb01YdjZreWVVdkpCaG1S?=
 =?utf-8?B?a3lnMjJYZm9RRmY2YzlMYnY3eE5mcHF0K2NzQ05kckZlOVNucFJ2ZlA2dDJJ?=
 =?utf-8?B?b25udHRBYjJuNFd5ZUg0dExybDZoOTBUdzlHZjd2TXMyQUdHSXRPMkYvU29u?=
 =?utf-8?B?YytJYnIvenBFcFpBaWkrVHVOSTY2VHNWYmZEcjBJNUR1cy9KSmVqMFhNNGg1?=
 =?utf-8?B?UHd2d0hmTTkyV1Z0OW84WDNuOHdJWXZYdU9CL2lOUVNkMlR6RllaSDFhT05y?=
 =?utf-8?Q?VkTj842C6QuSn09M1MGZx2asb?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 01a622b9-848f-4de6-7bba-08db55164ec7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 07:30:50.4008
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /5Llqd/tAt6sFFt8m7nB/raLtse34GIjmX9AlyVk5gYyGHRm6v7Tlac2Epr0KEUmzwLmthXCIwSQmyl34g98uQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8068

On 12.05.2023 23:03, Stewart Hildebrand wrote:
> On 5/12/23 03:25, Jan Beulich wrote:
>> On 11.05.2023 21:16, Stewart Hildebrand wrote:
>>> @@ -762,9 +767,20 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>>>              pdev->domain = NULL;
>>>              goto out;
>>>          }
>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>> +        ret = iommu_add_dt_pci_device(pdev);
>>> +        if ( ret < 0 )
>>> +        {
>>> +            printk(XENLOG_ERR "pci-iommu translation failed: %d\n", ret);
>>> +            goto out;
>>> +        }
>>> +#endif
>>>          ret = iommu_add_device(pdev);
>>
>> Hmm, am I misremembering that in the earlier patch you had #else to
>> invoke the alternative behavior?
> 
> You are remembering correctly. v1 had an #else, v2 does not.
> 
>> Now you end up calling both functions;
>> if that's indeed intended,
> 
> Yes, this is intentional.
> 
>> this may still want doing differently.
>> Looking at the earlier patch introducing the function, I can't infer
>> though whether that's intended: iommu_add_dt_pci_device() checks that
>> the add_device hook is present, but then I didn't find any use of this
>> hook. The revlog there suggests the check might be stale.
> 
> Ah, right, the ops->add_device check is stale in the other patch. Good catch, I'll remove it there.
> 
>> If indeed the function does only preparatory work, I don't see why it
>> would need naming "iommu_..."; I'd rather consider pci_add_dt_device()
>> then.
> 
> The function has now been reduced to reading SMMU configuration data from DT and mapping RID/BDF -> AXI stream ID. However, it is still SMMU related, and it is still invoking another iommu_ops hook function, dt_xlate (which is yet another AXI stream ID translation, separate from what is being discussed here). Does this justify keeping "iommu_..." in the name? I'm not convinced pci_add_dt_device() is a good name for it either (more on this below).

The function being SMMU-related pretty strongly suggests it wants to be
invoked via a hook. If the add_device() one isn't suitable, perhaps we
need a new (optional) prepare_device() one? With pci_add_device() then
calling iommu_prepare_device(), wrapping the hook invocation?

But just to be clear: a new hook would need sufficient justification that
the existing one is unsuitable.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 15 07:40:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 07:40:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534580.831715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pySoB-00020W-2T; Mon, 15 May 2023 07:39:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534580.831715; Mon, 15 May 2023 07:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pySoA-00020P-Vb; Mon, 15 May 2023 07:39:50 +0000
Received: by outflank-mailman (input) for mailman id 534580;
 Mon, 15 May 2023 07:39:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xh4z=BE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pySo9-00020J-Bi
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 07:39:49 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20606.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aadee101-f2f3-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 09:39:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7530.eurprd04.prod.outlook.com (2603:10a6:10:1f5::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 07:39:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.029; Mon, 15 May 2023
 07:39:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aadee101-f2f3-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mo/5m6QMUwvQnzs70+a9zDe6V7YmTq/LJoy2MgxXVvp5xm/4KXNiRDVyB0/uKJdlThu08f600HdjNNfQbYLdZdpC1COZ5KORkg8GXo3eZ42CtKSVLwo6MaefEQw20KEwxGk931Frd1jtc8dKucu1j3hQiGtSoA201+SMFhquM2bIDnds21rH9GK4TlLzPJjlxAu1ZAxXI6z2DOF4PfO+7bZX8OQ6YFlmhG1iLtEfKOy85rNuWfsbxn2VblwcbnSb/dRnvPqrVoXC8EfcBBy8spCT8Eqw3njxNcKRG/qJBrEF2PrpY5Gs7rq8o6eKXvVJpCoyOfiK+1EwQFB+zs39UQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FFpaxNqzsXx7Y8p/iSavuFVh18KQFKOrSR/YTLdVKDE=;
 b=BCTtQfwwTfZECZIDidvH3F/+xfhfBupxSZwqUsZnGGK2L019lre1Nb19g71UaBB6uS9UxoECjTrn2S1enMqrCRRyUGz7gKBG1/nJJFDTxAb68E6BH+mIml6YhK2DA0EnrzoyuZ41YKxM1l87vIrAU6v0r6vPvEKZXR1XXjCrrgMcH0VF7ruDZWsXQdPta4QGrmTzs2unXiV5p1lMts/YCVz4w0+4scAhmU4rWQvlpDJr1joArgQJPeW6fJOV2zbQnXQoXQG8SNl7BaEpHEs7y7V4y75fBvXGls5e4XFVc2zOQu6TkITMNNLiQPskM8RFPjNRfNPlqTAZ0Jd1B7edtw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FFpaxNqzsXx7Y8p/iSavuFVh18KQFKOrSR/YTLdVKDE=;
 b=mP/zo7oPQrHx91bzAgz8d+lGgPWdvO9MPyP5//e18ElMS60X9zR44WIq8UMn6TvrCtDZTuB/VHlpDhaBD5hKQH4sQJJs0bxfypyzuUPYiSt3Q/FFcB98LJrRFs7LBQqCnvtw5cvlciBiOo3vFDKpRv14IJamGh553a6BOXwmPd+WgC5IMLw3droaQCbB32YZvXN9Bc+9kZoUtO12kr19tSUYxWRNffLyIObWgSvjCv9H9Qc3dtEzjc3m5cZle9la0YrZbEVyMRbeiI21vVAwp9Hbn+du3uu0m5xab3Lhff/wBV47UHya6C0owvotAxYVcZbnTLuqaG9l1DZ4RfXIBg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b45f7a41-5173-6515-6368-b2374b060596@suse.com>
Date: Mon, 15 May 2023 09:39:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: PVH feature omission
Content-Language: en-US
To: Elliott Mitchell <ehem+xen@m5p.com>
References: <ZF7fSeYH/NK105EQ@mattapan.m5p.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZF7fSeYH/NK105EQ@mattapan.m5p.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0201.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ad::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7530:EE_
X-MS-Office365-Filtering-Correlation-Id: 8c9684d1-1811-4ce7-98cc-08db55178e06
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gCSugdKn5vvaeaj70eYcNr1jgNbwfyJ/CkW1k7xfwVEN2OIZZOLcf3R6H2SW1pN7zsRGFJy3Tw48pKAxkgv1SZoGr54bklu4F3ZGdMQJwYgDSxbzx+ftdUsBBqoZSqB5JJZ2HrEvKmBWEQV8wYUekDKILU1s7YqfJr/g12cfYJo/ea6SXLiAY+Ox2XkV8DzE23RgWl5hTJA2eYwWNNMXbVHp9RKwC9ATLDOBKhkoNFyALHcmagYgJ+pS6FE60GtASG/hjurJHvhVMCNgYmznkrsc+qqLQCpFNP7qxZsav+aAnccEw/Dghx4jrC1R31JETjqgehsV0Yovtdg0jvSiS0iJ1wCthU2BIMD+TdnjYpnRM3wIAI3q9MOK5JcRjA4uf0Hf03A/NL5smzQKGAjwcGYbrdcWiWey7MMA5WrYCAGwFZ9TuHCQdmh1SgdcPgjEwez74/UMOxER0aiKE/mU1MpUB2JhWTBS1DdHrTw39vQRF8Q4vXhT4i6v+H0CmU71+SV1beZMiQpNriwlGAvqxzVA4pwktS4hogwNJN2Wh+HjQEswgFQqeipcfL2DEv4UUk082rkO8riZOjp8YO13ohZI+/pf1adezcN7+E0biEHGfy4WAUzDkwWf+9ZhlGGUBarrb3UN2jgJTy2UYS55nw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(39860400002)(346002)(396003)(136003)(366004)(451199021)(38100700002)(2616005)(186003)(3480700007)(26005)(53546011)(83380400001)(2906002)(4744005)(6506007)(6512007)(5660300002)(41300700001)(8676002)(8936002)(7116003)(36756003)(31696002)(478600001)(6486002)(66476007)(66556008)(66946007)(316002)(4326008)(86362001)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VU5QcWo4QnY4OTdtQWRIOXZvTzVPZjZOSTUraDdBUTJjL1MxdHlwbmE2LzRi?=
 =?utf-8?B?ZXpqd2hHTXc1MU1tY005N1hpRC9zbXRTUzZnS09FUTBYTlJEK2FISlBDbkgr?=
 =?utf-8?B?WFV4aFpIZlRxckNERUhTOWhZLzNlNm1saDB4a01pVUZUSS9CdXJ3WmwxWlhi?=
 =?utf-8?B?bjg3bHpFTVlTbERkNDMwa3RYZjhLcVRqYUNtcktwRHU5Ykd5YytDMkl6WWNq?=
 =?utf-8?B?SkU1ZDQ5c0w2TFYxZ1RseWRkdWJjWDExek9jUlZoc3FxTmJISDdtbHhnRURD?=
 =?utf-8?B?UzZnU0FRZEhpT1pNYitvYklTNHZxU29VUTJtSVpXY1E2T1V5SGxia0I5eVBt?=
 =?utf-8?B?NWlLTVY2MkRseC9aTVdhQnFZM3lxMFZzSnB3RFI4UmVzM2NWWXZxNTBKSTBH?=
 =?utf-8?B?QmhkTUxtM1dwQ2s3QmNZZXZVOXZ6YXhCcTBoZ25EUHcxSE9NY1BVMnZqYkc2?=
 =?utf-8?B?c0sxbXVDdlVzK3h6OHRrKzlxVWhBMUNCRW1lRmNjTGtWM1o3NlE3MHdwcElI?=
 =?utf-8?B?QnNqUUZpMCswU0JMWUlGT2dBYlVtV0g1MmxudVZmR3VPcDlUS0QyUGo3MStY?=
 =?utf-8?B?ZWpNRndzV3c3VnQ3N1NvZXpsUktrdnRhb3U4ZWlwdGs4WUswRTh3MG93b1lN?=
 =?utf-8?B?UHNQUk1rcmQ4dmFzVEtqc1lFVEszRS9TY2dOWExwL2M5cnVHajd1ZWRrNW5x?=
 =?utf-8?B?Rk1DUU54VVJ3N25XOGZKMy9Lc0MxaXBGWU1UVTdvU1RqK2ZESlp3ZllRbFFE?=
 =?utf-8?B?K2RpNCtTUnJuQUljYlJnSFNIcytVM1FkbmVNbXZSVDBZcUZHTGNHYnlqbms5?=
 =?utf-8?B?bGt3ZjAyL3BaT2FZdTRYeGNRT3YwdkVnaHd6YVdGaHpucGVXSjgrU0l0d0lD?=
 =?utf-8?B?bk1HRUVTbjAxVjhBbzZOV3hnOERDbEJ2Yi9PMnVtNFFqVC9UZTIyRGMwVWpI?=
 =?utf-8?B?UnB5TUNhUXdReHJ0bmpwSnNvSEM5SFJEbU9xNlpjQkJzUitJc0NZV1FMcFZO?=
 =?utf-8?B?Qzl0U09qZEVWQk1YbkJsS0RhWldaQVN6d2hnd2pNUzRHU1NBTzViT2p6VFYz?=
 =?utf-8?B?STM0S2dnMGwvOWVQRHJ2OEJLblFhNVBYREpFZ1kzOUsxNGNxdmh3SHVrVnUv?=
 =?utf-8?B?dURjRzN3UHR2WHNqODdUNEFyN0FjYUY5UTUwM2w2blB0am56SGF1WW05WUFa?=
 =?utf-8?B?enhjVmUwRTVubnlFR29hYmhRa2lOMDIzU085MmpMOXFyWXNiMnFkZGlROHJa?=
 =?utf-8?B?eEpreVBxOCsxeGdFMStEd09jR1JJeDdzZzFtMUtqL0E2OUROV2lzYThkeXZV?=
 =?utf-8?B?MjNmc0ROdlRJRXlCUUF6VnlmK1lETGNzUU8vbmp6ckhQZ3l4d2c2QUFMTWRJ?=
 =?utf-8?B?K3RORVc4ODZYaGt0MzlpZnBDbFgzNWZYeGdHZFpseUF6SjNtMFpUc01MWk91?=
 =?utf-8?B?VE8xQ1VnalkvczNTVHUwenZTVHRSaXRxQWZSb2NSelI1SGZTclhZTHF1YUtv?=
 =?utf-8?B?Q2xUSEI0bWwrVHplTUs2SFFieThiU0ZzT1hWVUZiTGtQeWNsSHpUZjd4VWRo?=
 =?utf-8?B?Rm1BUXB0VWNFMDdRS2FodkkwYVhRT05OVFdBQ1dOTFdsWVkrd09xb2xvZEU1?=
 =?utf-8?B?VG54SE01UER0eFBwTmRjL2RtR1JwWDlWam9pdXRIVkpJSHBKZldzcUdjMWVN?=
 =?utf-8?B?Q05sQy96Kzk1WVFvbGRvRkxpWUxVVSs1cWFOUHR0V1BWUjdEdng4c0xPVzVB?=
 =?utf-8?B?MDRTdWZUT2F0T0ZlSmZ0WVNaVzZPb2I2ZVFuTzZBUzRkRDlXMXQyZ2JWWlYv?=
 =?utf-8?B?aE4wcFBCNHZ1ZWw2TndzditSYnE1WVlpWEJCN0c4NWlmaWFvVkk2cWxldi8w?=
 =?utf-8?B?NWE4aFRKK1BuZzJvaXF2ZXZsSTlMNTI3SDExRXV5TEdyQzJZWHhyL2dsMjRO?=
 =?utf-8?B?aHNxOGJnM3hZbXYrZkJ6ZnU3R2dYK0UreDErZTg1YmM4SWFhQW82TVpuLzZ1?=
 =?utf-8?B?d043SFVkcG9IQ3Q2cm81c1hmU0UyaU1WL3lQTVVUVkR1b3VLeGRpTmFmS3lz?=
 =?utf-8?B?bnpBVTZUM2IybVNNY1R1UDh5VzIxUElvdlUwRUpkL0NzRU1XYVZCMHpud1la?=
 =?utf-8?Q?rteFPlmSeAe+bKjOkCKh70ySi?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8c9684d1-1811-4ce7-98cc-08db55178e06
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 07:39:45.9853
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9X19f1vFXTkG0Xya+Ccf5MxR/M/EZB/z2dgnqewvcb0Zao+f40uFs2foplIVzTHDQEeck3YW2GSpU5zNOfZHpw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7530

On 13.05.2023 02:52, Elliott Mitchell wrote:
> From tools/libs/light/libxl_x86.c: libxl__arch_passthrough_mode_setdefault()
> 
> "passthrough not yet supported for x86 PVH guests\n"
> 
> So PVH is recommended for most situations, but it is /still/ impossible
> to pass hardware devices to PVH domains?  Seems odd this has never been
> addressed, given how long PVH has been around.

So if this is of importance to you, why don't you contribute patches? I'm
sure you're aware there are only so many people working on Xen, and that
any gap is very likely the result of a shortage of engineering resources,
including (assuming there were patches) limited reviewing bandwidth.

Jan

> The other tools omission I noticed is that `xl` appears to have no
> support for pvSCSI?  It might be infrequently used, but seems similarly
> valuable for domains focused on handling storage.
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon May 15 07:47:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 07:47:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534583.831724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pySv7-0003Sg-Qc; Mon, 15 May 2023 07:47:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534583.831724; Mon, 15 May 2023 07:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pySv7-0003SZ-Nu; Mon, 15 May 2023 07:47:01 +0000
Received: by outflank-mailman (input) for mailman id 534583;
 Mon, 15 May 2023 07:47:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xh4z=BE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pySv6-0003ST-D4
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 07:47:00 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id abfafa2d-f2f4-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 09:46:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8569.eurprd04.prod.outlook.com (2603:10a6:20b:434::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 07:46:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.029; Mon, 15 May 2023
 07:46:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abfafa2d-f2f4-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IAqaAcranOCHYApk8yRxSAMSNnO12Xi8y7C/Q/j4MFdyNf/C4eJb9+c5vvC0khuGbSClzrfMtuy337L4m+Wm0M6Ecg7KugPLZAfe0K/NBgHWiwnA72PTiTm3i3tGzo492tbRzLdVOKGx0ek5waunfEH9ch0ILBw5JW9FN83VZTWYjPvkws9FMrur889b/OLbROAu6AGm583rIVl+6+0SzCrbke/8oIfRbV0uy6uBbxqAOr5hCwHBhVP02sZE1t2+Arq+M0zQONqGlOo4ZvnPcenu/4RdW8e8kyrN+v6IDVbtB8FUcMm1os4Bcwb1AIrafFImJ1eaGBIVzqCEvdF91A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=r4s3gTU5b46ruyQ52PiBJHNQaP6RsxtFLi8QL1fpfaY=;
 b=CYrmISoSJIKVbnn6Ll7dA7l7DdCYQRz8FtMSKJ6MvEuehwXBe3r6ndgwBUWWJW9OxxViTRfmnYCWHmxsgPYAI2kQK7nZBzWj263Unr/fAggo2DEpWUeCJyJAhkEteiqMBrtKU+RRbOSqZNP3FJaflK3RoyujuOOaHPyl6xQaW3USYBoYVNcbI1gmJ3ea/ObZ/47/++n0xFJ1IMgZ98g5xkO4b+Do4chWsQt7YQdNSNHxfFc/i5N3AA1RYKzSsY1HuKfMAjIgdIxjocVOIjVqVdw6JscpuxeOqxJY69UCu47CeMGbW311fg9Dx9brciZq3fyuDzWLFoL1Bb8qPotxfQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r4s3gTU5b46ruyQ52PiBJHNQaP6RsxtFLi8QL1fpfaY=;
 b=xzFGGHzUA8+lTu94/5NqOhAZC/DrX0D5c2RZvqt/ibprW+STAFHHgyW/t4SjAS6Wwve7YkucRL5KneZVNaGzYMyyDo5cf1sJtTLTNbAUAZzufUke8k3TXU7Yz5v6BQ0dozYIn1FsE4lChj+2zTyNlOGuVAwat8ZDTlSz46jFLPKqspBOmbJFDVA2DpnifWR8cyDpJzJ4uZ6P0oFG6dNdMvgCRFIxJ99QhAM/g44KZRPeE7L9qgYiQEhjDAHXdbLbPRyCHVhXFibUhJfvB6n8UzIJdomVppHvWzZkGidGJ3N2gKARLI4Jw0JIhugxTJ0Hk95NFeEUqn5n3DkQ4OA+AA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <687ac0c3-9364-f03e-8f7e-afaf985618aa@suse.com>
Date: Mon, 15 May 2023 09:46:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/cpuid: Calculate FEATURESET_NR_ENTRIES more helpfully
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230512124551.443139-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230512124551.443139-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0116.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8569:EE_
X-MS-Office365-Filtering-Correlation-Id: 2726f24a-db7c-4686-756d-08db55188f36
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2726f24a-db7c-4686-756d-08db55188f36
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 07:46:57.5284
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Jy5qhNANzLSy5q2ZehpcWVOPHTdGc2zvhinETy5lnDT/l6zsEmES9SQga+yL/baVglTib6vRxONmokxYabBM3g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8569

On 12.05.2023 14:45, Andrew Cooper wrote:
> When adding new featureset words, it is convenient to split the work into
> several patches.  However, GCC 12 spotted that the way we prefer to split the
> work results in a real (transient) breakage whereby the policy <-> featureset
> helpers perform out-of-bounds accesses on the featureset array.
> 
> Fix this by having gen-cpuid.py calculate FEATURESET_NR_ENTRIES from the
> comments describing the word blocks, rather than from the XEN_CPUFEATURE()
> with the greatest value.
> 
> For simplicity, require that the word blocks appear in order.  This can be
> revisited if we find a good reason to have blocks out of order.
> 
> No functional change.
> 
> Reported-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

As far as my Python goes:
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Just one remark further down.

> This supersedes the entire "x86: Fix transient build breakage with featureset
> additions" series, but doesn't really feel as if it ought to be labelled v2

Thank you for re-doing this altogether. I think it's safer this way, and
its being less intrusive now is imo also a benefit.

> --- a/xen/tools/gen-cpuid.py
> +++ b/xen/tools/gen-cpuid.py
> @@ -50,13 +50,37 @@ def parse_definitions(state):
>          "\s+([\s\d]+\*[\s\d]+\+[\s\d]+)\)"
>          "\s+/\*([\w!]*) .*$")
>  
> +    word_regex = re.compile(
> +        r"^/\* .* word (\d*) \*/$")
> +    last_word = -1
> +
>      this = sys.modules[__name__]
>  
>      for l in state.input.readlines():
> -        # Short circuit the regex...
> -        if not l.startswith("XEN_CPUFEATURE("):
> +
> +        # Short circuit the regexes...
> +        if not (l.startswith("XEN_CPUFEATURE(") or
> +                l.startswith("/* ")):
>              continue
>  
> +        # Handle /* ... word $N */ lines
> +        if l.startswith("/* "):
> +
> +            res = word_regex.match(l)
> +            if res is None:
> +                continue # Some other comment
> +
> +            word = int(res.groups()[0])
> +
> +            if word != last_word + 1:
> +                raise Fail("Featureset word %u out of order (last word %u)"
> +                           % (word, last_word))
> +
> +            last_word = word
> +            state.nr_entries = word + 1
> +            continue
> +
> +        # Handle XEN_CPUFEATURE( lines
>          res = feat_regex.match(l)
>  
>          if res is None:
> @@ -94,6 +118,15 @@ def parse_definitions(state):
>      if len(state.names) == 0:
>          raise Fail("No features found")
>  
> +    if state.nr_entries == 0:
> +        raise Fail("No featureset word info found")
> +
> +    max_val = max(state.names.keys())
> +    if (max_val >> 5) + 1 > state.nr_entries:

Maybe

    if (max_val >> 5) >= state.nr_entries:

?
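(For non-negative integer operands the two forms are interchangeable; a
quick standalone check, not part of the patch, illustrates the equivalence
of `(max_val >> 5) + 1 > nr_entries` and `(max_val >> 5) >= nr_entries`:)

```python
# Illustration only: for integers x and n, (x >> 5) + 1 > n is the same
# condition as (x >> 5) >= n, since "a + 1 > n" <=> "a >= n" for integers.
def overflows_plus_one(max_val, nr_entries):
    return (max_val >> 5) + 1 > nr_entries

def overflows_ge(max_val, nr_entries):
    return (max_val >> 5) >= nr_entries

# Exhaustive check over a small range of feature numbers and word counts.
assert all(overflows_plus_one(v, n) == overflows_ge(v, n)
           for v in range(256) for n in range(16))
```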

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 15 08:08:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534595.831735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTFx-0006Zx-4z; Mon, 15 May 2023 08:08:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534595.831735; Mon, 15 May 2023 08:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTFx-0006Zq-1A; Mon, 15 May 2023 08:08:33 +0000
Received: by outflank-mailman (input) for mailman id 534595;
 Mon, 15 May 2023 08:08:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SknV=BE=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1pyTFu-0006ZJ-Ud
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:08:31 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20600.outbound.protection.outlook.com
 [2a01:111:f400:fe12::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac5ce13f-f2f7-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 10:08:28 +0200 (CEST)
Received: from AS4P190CA0003.EURP190.PROD.OUTLOOK.COM (2603:10a6:20b:5de::8)
 by AS8PR08MB6453.eurprd08.prod.outlook.com (2603:10a6:20b:31b::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.29; Mon, 15 May
 2023 08:08:23 +0000
Received: from AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:5de:cafe::29) by AS4P190CA0003.outlook.office365.com
 (2603:10a6:20b:5de::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30 via Frontend
 Transport; Mon, 15 May 2023 08:08:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT034.mail.protection.outlook.com (100.127.140.87) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.13 via Frontend Transport; Mon, 15 May 2023 08:08:23 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Mon, 15 May 2023 08:08:23 +0000
Received: from e0506ef73416.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A8E3240A-88D5-4BA6-8BDD-9F156F54FB0B.1; 
 Mon, 15 May 2023 08:08:12 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e0506ef73416.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 15 May 2023 08:08:12 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by AM7PR08MB5318.eurprd08.prod.outlook.com (2603:10a6:20b:104::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 08:08:10 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb%7]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 08:08:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac5ce13f-f2f7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BkCcm15jryxQsqbAW+jzj77yWrv/gfaCDXvDoGwKR+Y=;
 b=5Ct1/8P5/sP32TVdTGY65+O9cBAUkwbo3kGIjfggN/tAMV/RGgVVScADrjn3TsAvLl+fgOO7GMmFtF74sJyClABkHbS1nrXGjCenVA4AswcSn6o4RyvkB522Q/2ucOVcIOmUlDgfTg/cXKprG0l+MW7Q34mtnnfJuul8CaadVT8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: e6e637b9a5b9a686
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YLiw5WjUyupZuq8MgK0lubyGDNzRkvlIJKxfy8HkWMyV3UrdqeO92nz5rzQi/vkrCsgsaBfDjaJGZijyekh2k6tWVPn+g8VyA8PInW4xmigeb4JE1dNNGkQDSWC3Y2lXRs0EyIEY0jibdZFohroMoosKFnMQbDXtqL8f+YAFrQ2YeLKlxtwj6UzbvjNQzrFC4iVMynW/ciPrd3JYowlSJGpRAZIOboNn9v28wPvliWFem/BNh1Y2DlvbLRXYNEb+aZvxycU5Bsqh8DRaeeRgEWPJn0PTjoKFuegk9GgVexlmf1rQ28Z8yPVumLYPPUdb7G1YxT49iepO8HgXqiSs6g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=BkCcm15jryxQsqbAW+jzj77yWrv/gfaCDXvDoGwKR+Y=;
 b=Y3lW2r8l7h8nepquh5n9PJFmx7gy/ISR1Rt8ABNIC0vRFSr6xUWDqd4w8YXfpzp/d0rYKQPMUHbos0BK+XpgcMCYQrfrN3BFnGMuYomhshdo3VrwjZ7sh0N1kAyGY3/xMX136IIYpofPyU+zsDcE8/5laEbyOmHE0tn/YZFLvbh6USexlTHOHRGrVVmjmjq3PpEB7lPOyEs2kufepEaeGqGd8640qZparrKhabQjflPh54PVunOxJhJI9JHVuMI2UjFf1K0vbSIixCkdX7972HHh84PvRFhEvx6LXfvw67xvRq4lexFKgpCUfr17SVK0ASZuZoOhdG03g6fMnpbDiw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BkCcm15jryxQsqbAW+jzj77yWrv/gfaCDXvDoGwKR+Y=;
 b=5Ct1/8P5/sP32TVdTGY65+O9cBAUkwbo3kGIjfggN/tAMV/RGgVVScADrjn3TsAvLl+fgOO7GMmFtF74sJyClABkHbS1nrXGjCenVA4AswcSn6o4RyvkB522Q/2ucOVcIOmUlDgfTg/cXKprG0l+MW7Q34mtnnfJuul8CaadVT8=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Xen developer discussion <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 1/2] xen/arm: smmuv3: Constify arm_smmu_get_by_dev()
 parameter
Thread-Topic: [PATCH 1/2] xen/arm: smmuv3: Constify arm_smmu_get_by_dev()
 parameter
Thread-Index: AQHZhN8QV/wkEEzvWEaoo7ucvATCgK9a/vuA
Date: Mon, 15 May 2023 08:08:09 +0000
Message-ID: <A9FE3BF2-82F5-4872-A05F-7C79B984E274@arm.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-2-michal.orzel@amd.com>
In-Reply-To: <20230512143535.29679-2-michal.orzel@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7158:EE_|AM7PR08MB5318:EE_|AM7EUR03FT034:EE_|AS8PR08MB6453:EE_
X-MS-Office365-Filtering-Correlation-Id: deaadc02-f371-498f-e153-08db551b8dc6
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <DEC136E1FC01A4449A82DBC6D4EE37EA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5318
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8045210b-d9b3-46a6-982a-08db551b85c8
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 08:08:23.3531
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: deaadc02-f371-498f-e153-08db551b8dc6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6453

Hi Michal,

> On 12 May 2023, at 3:35 pm, Michal Orzel <michal.orzel@amd.com> wrote:
>
> This function does not modify its parameter 'dev' and it is not supposed
> to do so.  Therefore, constify it.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Mon May 15 08:20:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:20:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534601.831744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTRK-0000Zo-7t; Mon, 15 May 2023 08:20:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534601.831744; Mon, 15 May 2023 08:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTRK-0000Zh-5J; Mon, 15 May 2023 08:20:18 +0000
Received: by outflank-mailman (input) for mailman id 534601;
 Mon, 15 May 2023 08:20:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oWdR=BE=antgroup.com=houwenlong.hwl@srs-se1.protection.inumbo.net>)
 id 1pyTRI-0000Zb-QS
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:20:16 +0000
Received: from out0-199.mail.aliyun.com (out0-199.mail.aliyun.com
 [140.205.0.199]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4f8ed683-f2f9-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 10:20:13 +0200 (CEST)
Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com
 fp:SMTPD_---.T2DpdLD_1684138803) by smtp.aliyun-inc.com;
 Mon, 15 May 2023 16:20:04 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f8ed683-f2f9-11ed-b229-6b7b168915f2
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R111e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047212;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=28;SR=0;TI=SMTPD_---.T2DpdLD_1684138803;
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: linux-kernel@vger.kernel.org
Cc: "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Hou Wenlong" <houwenlong.hwl@antgroup.com>,
  "Alexey Makhalov" <amakhalov@vmware.com>,
  "Andrew Morton" <akpm@linux-foundation.org>,
  "Andy Lutomirski" <luto@kernel.org>,
  "Anshuman Khandual" <anshuman.khandual@arm.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Brian Gerst" <brgerst@gmail.com>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
  "David Woodhouse" <dwmw@amazon.co.uk>,
  "H. Peter Anvin" <hpa@zytor.com>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Josh Poimboeuf" <jpoimboe@kernel.org>,
  "Juergen Gross" <jgross@suse.com>,
  "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
  "=?UTF-8?B?TWlrZSBSYXBvcG9ydCAoSUJNKQ==?=" <rppt@kernel.org>,
  "Pasha Tatashin" <pasha.tatashin@soleen.com>,
  "Peter Zijlstra" <peterz@infradead.org>,
  "=?UTF-8?B?U3JpdmF0c2EgUy4gQmhhdCAoVk13YXJlKQ==?=" <srivatsa@csail.mit.edu>,
  "Suren Baghdasaryan" <surenb@google.com>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Usama Arif" <usama.arif@bytedance.com>,
   <virtualization@lists.linux-foundation.org>,
  "VMware PV-Drivers Reviewers" <pv-drivers@vmware.com>,
   <x86@kernel.org>,
   <xen-devel@lists.xenproject.org>
Subject: [PATCH RFC 0/4] x86/fixmap: Unify FIXADDR_TOP
Date: Mon, 15 May 2023 16:19:31 +0800
Message-Id: <cover.1684137557.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patchset unifies FIXADDR_TOP as a variable for x86, allowing the
fixmap area to be movable and relocated with the kernel image in the
x86/PIE patchset [0]. This enables the kernel image to be relocated in
the top 512G of the address space.

[0] https://lore.kernel.org/lkml/cover.1682673542.git.houwenlong.hwl@antgroup.com

Hou Wenlong (4):
  x86/vsyscall: Don't use set_fixmap() to map vsyscall page
  x86/xen: Pin up to VSYSCALL_ADDR when vsyscall page is out of fixmap
    area
  x86/fixmap: Move vsyscall page out of fixmap area
  x86/fixmap: Unify FIXADDR_TOP

 arch/x86/entry/vsyscall/vsyscall_64.c |  7 +-----
 arch/x86/include/asm/fixmap.h         | 28 ++++-------------------
 arch/x86/include/asm/paravirt.h       |  7 ++++++
 arch/x86/include/asm/paravirt_types.h |  4 ++++
 arch/x86/include/asm/vsyscall.h       | 13 +++++++++++
 arch/x86/kernel/head64.c              |  1 -
 arch/x86/kernel/head_64.S             |  6 ++---
 arch/x86/kernel/paravirt.c            |  4 ++++
 arch/x86/mm/dump_pagetables.c         |  3 ++-
 arch/x86/mm/fault.c                   |  1 -
 arch/x86/mm/init_64.c                 |  2 +-
 arch/x86/mm/ioremap.c                 |  5 ++---
 arch/x86/mm/pgtable.c                 | 13 +++++++++++
 arch/x86/mm/pgtable_32.c              |  3 ---
 arch/x86/xen/mmu_pv.c                 | 32 +++++++++++++++++++--------
 15 files changed, 77 insertions(+), 52 deletions(-)


base-commit: f585d5177e1aad174fd6da0e3936b682ed58ced0
--
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon May 15 08:20:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534602.831756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTRV-0000rc-IH; Mon, 15 May 2023 08:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534602.831756; Mon, 15 May 2023 08:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTRV-0000rV-DT; Mon, 15 May 2023 08:20:29 +0000
Received: by outflank-mailman (input) for mailman id 534602;
 Mon, 15 May 2023 08:20:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oWdR=BE=antgroup.com=houwenlong.hwl@srs-se1.protection.inumbo.net>)
 id 1pyTRU-0000r2-BN
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:20:28 +0000
Received: from out0-203.mail.aliyun.com (out0-203.mail.aliyun.com
 [140.205.0.203]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 55f924de-f2f9-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 10:20:23 +0200 (CEST)
Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com
 fp:SMTPD_---.T2DpdRe_1684138815) by smtp.aliyun-inc.com;
 Mon, 15 May 2023 16:20:16 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55f924de-f2f9-11ed-8611-37d641c3527e
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R201e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047187;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=21;SR=0;TI=SMTPD_---.T2DpdRe_1684138815;
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: linux-kernel@vger.kernel.org
Cc: "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Hou Wenlong" <houwenlong.hwl@antgroup.com>,
  "Andy Lutomirski" <luto@kernel.org>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
   <x86@kernel.org>,
  "H. Peter Anvin" <hpa@zytor.com>,
  "Juergen Gross" <jgross@suse.com>,
  "=?UTF-8?B?U3JpdmF0c2EgUy4gQmhhdCAoVk13YXJlKQ==?=" <srivatsa@csail.mit.edu>,
  "Alexey Makhalov" <amakhalov@vmware.com>,
  "VMware PV-Drivers Reviewers" <pv-drivers@vmware.com>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Suren Baghdasaryan" <surenb@google.com>,
  "Andrew Morton" <akpm@linux-foundation.org>,
  "=?UTF-8?B?TWlrZSBSYXBvcG9ydCAoSUJNKQ==?=" <rppt@kernel.org>,
  "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
   <virtualization@lists.linux-foundation.org>,
   <xen-devel@lists.xenproject.org>
Subject: [PATCH RFC 1/4] x86/vsyscall: Don't use set_fixmap() to map vsyscall page
Date: Mon, 15 May 2023 16:19:32 +0800
Message-Id: <7453c8b3b3b273e45c2541d09b950ffc4a97189d.1684137557.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1684137557.git.houwenlong.hwl@antgroup.com>
References: <cover.1684137557.git.houwenlong.hwl@antgroup.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to unify FIXADDR_TOP for x86 and allow the fixmap area to be
movable, the vsyscall page should be mapped individually. However, for
XENPV guests, the vsyscall page needs to be mapped into the user
pagetable as well. Therefore, a new PVMMU operation is introduced to
assist in mapping the vsyscall page.

Suggested-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
---
 arch/x86/entry/vsyscall/vsyscall_64.c |  3 +--
 arch/x86/include/asm/paravirt.h       |  7 +++++++
 arch/x86/include/asm/paravirt_types.h |  4 ++++
 arch/x86/include/asm/vsyscall.h       | 13 +++++++++++++
 arch/x86/kernel/paravirt.c            |  4 ++++
 arch/x86/xen/mmu_pv.c                 | 20 ++++++++++++++------
 6 files changed, 43 insertions(+), 8 deletions(-)

diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
index e0ca8120aea8..4373460ebbde 100644
--- a/arch/x86/entry/vsyscall/vsyscall_64.c
+++ b/arch/x86/entry/vsyscall/vsyscall_64.c
@@ -385,8 +385,7 @@ void __init map_vsyscall(void)
 	 * page.
 	 */
 	if (vsyscall_mode == EMULATE) {
-		__set_fixmap(VSYSCALL_PAGE, physaddr_vsyscall,
-			     PAGE_KERNEL_VVAR);
+		__set_vsyscall_page(physaddr_vsyscall, PAGE_KERNEL_VVAR);
 		set_vsyscall_pgtable_user_bits(swapper_pg_dir);
 	}
 
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index b49778664d2b..c9543d383df0 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -576,6 +576,13 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 {
 	pv_ops.mmu.set_fixmap(idx, phys, flags);
 }
+
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+static inline void __set_vsyscall_page(phys_addr_t phys, pgprot_t flags)
+{
+	pv_ops.mmu.set_vsyscall_page(phys, flags);
+}
+#endif
 #endif
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 4acbcddddc29..2dc9397e064d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -224,6 +224,10 @@ struct pv_mmu_ops {
 	   an mfn.  We can tell which is which from the index. */
 	void (*set_fixmap)(unsigned /* enum fixed_addresses */ idx,
 			   phys_addr_t phys, pgprot_t flags);
+
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+	void (*set_vsyscall_page)(phys_addr_t phys, pgprot_t flags);
+#endif
 #endif
 } __no_randomize_layout;
 
diff --git a/arch/x86/include/asm/vsyscall.h b/arch/x86/include/asm/vsyscall.h
index ab60a71a8dcb..73691fc60924 100644
--- a/arch/x86/include/asm/vsyscall.h
+++ b/arch/x86/include/asm/vsyscall.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_X86_VSYSCALL_H
 #define _ASM_X86_VSYSCALL_H
 
+#include <asm/pgtable.h>
 #include <linux/seqlock.h>
 #include <uapi/asm/vsyscall.h>
 
@@ -15,6 +16,18 @@ extern void set_vsyscall_pgtable_user_bits(pgd_t *root);
  */
 extern bool emulate_vsyscall(unsigned long error_code,
 			     struct pt_regs *regs, unsigned long address);
+static inline void native_set_vsyscall_page(phys_addr_t phys, pgprot_t flags)
+{
+	pgprot_val(flags) &= __default_kernel_pte_mask;
+	set_pte_vaddr(VSYSCALL_ADDR, pfn_pte(phys >> PAGE_SHIFT, flags));
+}
+
+#ifndef CONFIG_PARAVIRT_XXL
+#define __set_vsyscall_page	native_set_vsyscall_page
+#else
+#include <asm/paravirt.h>
+#endif
+
 #else
 static inline void map_vsyscall(void) {}
 static inline bool emulate_vsyscall(unsigned long error_code,
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index ac10b46c5832..13c81402f377 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -33,6 +33,7 @@
 #include <asm/tlb.h>
 #include <asm/io_bitmap.h>
 #include <asm/gsseg.h>
+#include <asm/vsyscall.h>
 
 /*
  * nop stub, which must not clobber anything *including the stack* to
@@ -357,6 +358,9 @@ struct paravirt_patch_template pv_ops = {
 	},
 
 	.mmu.set_fixmap		= native_set_fixmap,
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+	.mmu.set_vsyscall_page	= native_set_vsyscall_page,
+#endif
 #endif /* CONFIG_PARAVIRT_XXL */
 
 #if defined(CONFIG_PARAVIRT_SPINLOCKS)
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index b3b8d289b9ab..c42c60faa3bb 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -59,6 +59,7 @@
 
 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
+#include <asm/vsyscall.h>
 #include <asm/mmu_context.h>
 #include <asm/setup.h>
 #include <asm/paravirt.h>
@@ -2020,9 +2021,6 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	switch (idx) {
 	case FIX_BTMAP_END ... FIX_BTMAP_BEGIN:
-#ifdef CONFIG_X86_VSYSCALL_EMULATION
-	case VSYSCALL_PAGE:
-#endif
 		/* All local page mappings */
 		pte = pfn_pte(phys, prot);
 		break;
@@ -2058,14 +2056,21 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 	vaddr = __fix_to_virt(idx);
 	if (HYPERVISOR_update_va_mapping(vaddr, pte, UVMF_INVLPG))
 		BUG();
+}
 
 #ifdef CONFIG_X86_VSYSCALL_EMULATION
+static void xen_set_vsyscall_page(phys_addr_t phys, pgprot_t prot)
+{
+	pte_t pte = pfn_pte(phys >> PAGE_SHIFT, prot);
+
+	if (HYPERVISOR_update_va_mapping(VSYSCALL_ADDR, pte, UVMF_INVLPG))
+		BUG();
+
 	/* Replicate changes to map the vsyscall page into the user
 	   pagetable vsyscall mapping. */
-	if (idx == VSYSCALL_PAGE)
-		set_pte_vaddr_pud(level3_user_vsyscall, vaddr, pte);
-#endif
+	set_pte_vaddr_pud(level3_user_vsyscall, VSYSCALL_ADDR, pte);
 }
+#endif
 
 static void __init xen_post_allocator_init(void)
 {
@@ -2156,6 +2161,9 @@ static const typeof(pv_ops) xen_mmu_ops __initconst = {
 		},
 
 		.set_fixmap = xen_set_fixmap,
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+		.set_vsyscall_page = xen_set_vsyscall_page,
+#endif
 	},
 };
 
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon May 15 08:20:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:20:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534603.831765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTRX-00018L-P9; Mon, 15 May 2023 08:20:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534603.831765; Mon, 15 May 2023 08:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTRX-00018C-LU; Mon, 15 May 2023 08:20:31 +0000
Received: by outflank-mailman (input) for mailman id 534603;
 Mon, 15 May 2023 08:20:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oWdR=BE=antgroup.com=houwenlong.hwl@srs-se1.protection.inumbo.net>)
 id 1pyTRW-0000Zb-O6
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:20:30 +0000
Received: from out0-207.mail.aliyun.com (out0-207.mail.aliyun.com
 [140.205.0.207]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 591fb455-f2f9-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 10:20:29 +0200 (CEST)
Received: from localhost(mailfrom:houwenlong.hwl@antgroup.com
 fp:SMTPD_---.T2G7nz7_1684138818) by smtp.aliyun-inc.com;
 Mon, 15 May 2023 16:20:19 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 591fb455-f2f9-11ed-b229-6b7b168915f2
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R621e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018047194;MF=houwenlong.hwl@antgroup.com;NM=1;PH=DS;RN=12;SR=0;TI=SMTPD_---.T2G7nz7_1684138818;
From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: linux-kernel@vger.kernel.org
Cc: "Lai Jiangshan" <jiangshan.ljs@antgroup.com>,
  "Hou Wenlong" <houwenlong.hwl@antgroup.com>,
  "Juergen Gross" <jgross@suse.com>,
  "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
  "Thomas Gleixner" <tglx@linutronix.de>,
  "Ingo Molnar" <mingo@redhat.com>,
  "Borislav Petkov" <bp@alien8.de>,
  "Dave Hansen" <dave.hansen@linux.intel.com>,
   <x86@kernel.org>,
  "H. Peter Anvin" <hpa@zytor.com>,
   <xen-devel@lists.xenproject.org>
Subject: [PATCH RFC 2/4] x86/xen: Pin up to VSYSCALL_ADDR when vsyscall page is out of fixmap area
Date: Mon, 15 May 2023 16:19:33 +0800
Message-Id: <a0e2bcde86150831cf8a0da0e94f182668c5cc5e.1684137557.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1684137557.git.houwenlong.hwl@antgroup.com>
References: <cover.1684137557.git.houwenlong.hwl@antgroup.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If the vsyscall page is moved out of the fixmap area, then FIXADDR_TOP
would be below the vsyscall page. Therefore, the pagetable should be
pinned up to VSYSCALL_ADDR if vsyscall is enabled.

Suggested-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
---
 arch/x86/xen/mmu_pv.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index c42c60faa3bb..c1f298c31e64 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -587,6 +587,12 @@ static void xen_p4d_walk(struct mm_struct *mm, p4d_t *p4d,
 	xen_pud_walk(mm, pud, func, last, limit);
 }
 
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
+#define __KERNEL_MAP_TOP	(VSYSCALL_ADDR + PAGE_SIZE)
+#else
+#define __KERNEL_MAP_TOP	FIXADDR_TOP
+#endif
+
 /*
  * (Yet another) pagetable walker.  This one is intended for pinning a
  * pagetable.  This means that it walks a pagetable and calls the
@@ -594,7 +600,7 @@ static void xen_p4d_walk(struct mm_struct *mm, p4d_t *p4d,
  * at every level.  It walks the entire pagetable, but it only bothers
  * pinning pte pages which are below limit.  In the normal case this
  * will be STACK_TOP_MAX, but at boot we need to pin up to
- * FIXADDR_TOP.
+ * __KERNEL_MAP_TOP.
  *
  * We must skip the Xen hole in the middle of the address space, just after
  * the big x86-64 virtual hole.
@@ -609,7 +615,7 @@ static void __xen_pgd_walk(struct mm_struct *mm, pgd_t *pgd,
 
 	/* The limit is the last byte to be touched */
 	limit--;
-	BUG_ON(limit >= FIXADDR_TOP);
+	BUG_ON(limit >= __KERNEL_MAP_TOP);
 
 	/*
 	 * 64-bit has a great big hole in the middle of the address
@@ -797,7 +803,7 @@ static void __init xen_after_bootmem(void)
 #ifdef CONFIG_X86_VSYSCALL_EMULATION
 	SetPagePinned(virt_to_page(level3_user_vsyscall));
 #endif
-	xen_pgd_walk(&init_mm, xen_mark_pinned, FIXADDR_TOP);
+	xen_pgd_walk(&init_mm, xen_mark_pinned, __KERNEL_MAP_TOP);
 }
 
 static void xen_unpin_page(struct mm_struct *mm, struct page *page,
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon May 15 08:28:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:28:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534613.831774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTZM-0002Ks-I3; Mon, 15 May 2023 08:28:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534613.831774; Mon, 15 May 2023 08:28:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTZM-0002Kl-Es; Mon, 15 May 2023 08:28:36 +0000
Received: by outflank-mailman (input) for mailman id 534613;
 Mon, 15 May 2023 08:28:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CjwX=BE=citrix.com=prvs=492a8bb35=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pyTZK-0002Kf-Sp
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:28:35 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7960fc14-f2fa-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 10:28:32 +0200 (CEST)
Received: from mail-dm6nam11lp2174.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 15 May 2023 04:28:30 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BN9PR03MB6009.namprd03.prod.outlook.com (2603:10b6:408:132::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 08:28:28 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 08:28:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7960fc14-f2fa-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684139312;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=A+2w2uIL4CRfbOiI+bsEJtkIHLIf9kj5b50R9RFN5/U=;
  b=GPw+w98JjJd51EqAH2NeyX3uxvmBZwB7I7b/8S78gIGDqygXb7jUVAQF
   005ZKpVaXS5pyrD0b5E0/x3Z0Ob1p4hd5IGM0pn7txoqvtCuVCjTGF50n
   5B9BVMpdUb4Hd+eSbEamgXPvBnWgG6yfUPVdyhQ+Pi42ShK4loRHL0baR
   o=;
X-IronPort-RemoteIP: 104.47.57.174
X-IronPort-MID: 109048423
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:fLKyz6jJoxQkQ2olMeq/mDRNX161cxEKZh0ujC45NGQN5FlHY01je
 htvXTyCPPvcZ2GmKNlxPYzioE5UscXWnNZgQAZqripmFCMb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsy+qWj0N8klgZmP6sT4QaDzyB94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQkciopbS+4ltnvnq+ibbFhwcRzPvXkadZ3VnFIlVk1DN4AaLWaGuDmwIEd2z09wMdTAfzZe
 swVLyJ1awjNaAFOPVFRD48imOCvhT/0dDgwRFC9/PJrpTSMilEuluGybLI5efTTLSlRtlyfq
 W/cuXzwHzkRNcCFyCrD+XWp7gPKtXqjCdpOROHirZaGhnXC5FJJNyU4dmfjuNWZ00isVcgHF
 VI9r39GQa8asRbDosPGdxC4rXvHrhMac98NC6sx7wTl4rrZ5UOVC3YJShZFacc6r4kmSDoyz
 FiLktj1Qzt1v9WopWm1876VqXa+PHYTJGpbPCscF1Jav5/kvZ05iQ/JQpB7Cqmpg9bpGDb2h
 TeXsCw5gLZVhskOv0mmwW36b/uXjsChZmYICs//BApJMisRiFaZWrGV
IronPort-HdrOrdr: A9a23:MDw8Bqhc/e1UJP+uaElJKExd9nBQXssji2hC6mlwRA09TyX4ra
 2TdZEgvnXJYVkqKRIdcK+7Scu9qB/nm6KdgrN8AV7BZmnbUQKTRelfBODZrAEIdReeygdV79
 YET5RD
X-Talos-CUID: =?us-ascii?q?9a23=3A5+K0j2hlkROsm0OYkk51Kk5FITJuTifH6HTvGF6?=
 =?us-ascii?q?DUDh0UZrNRBiQqLFYnJ87?=
X-Talos-MUID: 9a23:/lQnVgRU8RC8ZYxERXS9pg84C/9GwJj2EWwtyJMUtdPcCzJZbmI=
X-IronPort-AV: E=Sophos;i="5.99,276,1677560400"; 
   d="scan'208";a="109048423"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QXHEofgmBqIPQiMOeEtMlejjaTUDmWNExm77eQ6REMAa+JeFsKebSICHUExS79ROKEUGxMTElKqngmzu24gBbBPuRJVLeXMrHUXx3tP0S5sMtpod/nBVfQ7iHE+JV1Idc41NdZvp3vaqATtsAut2ZlbVQys7FmU1ehFE3W+vvCL8rShhX9gYo4C9syDVJXCW7pPdQCtu2w9xozgzYeqQQXuV47CLkrf6XzI9VIj/3QyxfULtmz8v/JFt3cSVqRp0Tcw2333hMoPsY57wht919zYFSAKQYZsAbu8hmMTpOqf8++dueUvEYYDKDrZJ3XuGka9BWC8XG+cQCdLTW+4uZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ry+RzQIHanrhbQAx87mr7fxRbIKLdK9lz0ghfyCqjuA=;
 b=PqXnOgGOO2frL6oC1cFKxdoMQuJzQnkdIH8UG6OMGP82dIPplMHaAkpCKh4RflO1zdSoeHKB8xprHleuJeGHuXBCUYUvJmvBj0o1MunJ8RVQnfwYvz3P3Ju7nd2SML5sAAqKuhLGN7RlWo2DrbqnBydt9rNxHyVosPep687e+35iSwqWsv4SOvfALXzd5zfRXu+kihEt+U10a7GYZWdRM5ZGkuOY2BnLLdEMaPYf8M2UZLK1y+S1skEj3krH0OjvbkpWgPjSXAR8vJXWch/eX2Fl+v9HnMZqVx9XY6Glge3EmZ1FNOccp7lWdfviIefkKkt2Xr7+cQjjxEaXM9tauA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ry+RzQIHanrhbQAx87mr7fxRbIKLdK9lz0ghfyCqjuA=;
 b=QjQyYa6SvF1wEOJtUzeglWOa0y6skSkbsSS1irc0xNnSl9r3sUD9W/PwaAqt4qWUb4BZNAkkBXlRZt945u6gY5fZ++zw6+dG1SUH+0pWknQ91ukiwQjmK+yAScwKSkAPkoIiI/XNVUL2ZfcItSpwMxyyg/OSK9SRnR+GbxOudEs=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 15 May 2023 10:28:22 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: PVH feature omission
Message-ID: <ZGHtJlrucK+XcAJi@Air-de-Roger>
References: <ZF7fSeYH/NK105EQ@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <ZF7fSeYH/NK105EQ@mattapan.m5p.com>
X-ClientProxiedBy: LO4P123CA0545.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:319::13) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|BN9PR03MB6009:EE_
X-MS-Office365-Filtering-Correlation-Id: 37253f1a-4003-4f83-85c6-08db551e5b8e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gKEKnJjW4+nEtkrNWgiPW2f+q7o/HjzLhRncdMsdEghXj4qX4XrR6M2qJcUNPFQUAD35z6AoNT+3r4npej28WxvIrNLDbiLerlmSKJKYeZWDJ0t3lv5QomL52MbFFVZ8aPp96fepBL81UCDuju0Qf2XwZiKonbBZN0jcxLJnKTxKAIIghaNIxNz1GXnTVDY0j+yO+K/Wxj2RCtqo+LLY+vXZBjurd7+V3ApJFbgUPsB77g7Cc7P8A33tvCfWo12Y23TxcpSlwUjzrqiviuqwEJuLaQB7S6H1txXn/2Ep+O1X++QPC9uCdFfNiixFYwBbpC2kq5JoFx2tF66bQbJWmLyB4E/MT5QwPlCunW71wV4QKZ8jATP1aPlG0EmQBErJgLTcQbnSc94kKI1QRu4PqJBIu2A5MWi0rVr4OVdCEvcj1BXVR7botZe4wXQVjvdI5PJl63wqMfkPkH79pOWk3xWmffwV0qhdud0fd4PUoRvGyyuTrvKDHbecj4LH3sWnmqCOfbqCXFfLHAulSUH9Z/PNDf3PFHO0H9L3Lwk9u8B4LS7o3Bip5HnGtwT5l1iyi/q0puKPX7ttyo7q9lvT5gFtwxer5cGX1aF8Gl/OgfE=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(7916004)(396003)(376002)(346002)(136003)(39860400002)(366004)(451199021)(85182001)(86362001)(316002)(66556008)(66946007)(66476007)(4326008)(966005)(6486002)(478600001)(4744005)(33716001)(8936002)(8676002)(5660300002)(7116003)(6666004)(2906002)(38100700002)(3480700007)(82960400001)(41300700001)(6512007)(26005)(186003)(9686003)(6506007)(83380400001)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NkYyRjQ0VDBua0dkRTVWby9adXhGMlBMeU1UZWRxZ29rYzhqdWl0NjZiSjdN?=
 =?utf-8?B?QTNJR2ZjVjlLZ04xK05JVnFRVGFnYjdlbTRBSmFtVHJ4TjI1MDhNMlVOdFFM?=
 =?utf-8?B?M241SUUyNnB2MVNnQnhobkFkdWlMQVRuanlob1cxbGJYMG1TQTd0eWVmMEt5?=
 =?utf-8?B?MWVrMWN5Y3BWSDNzYk1UNzhZTDBTNEJaUkZBK2JSYkN4SjQ0SEdmNFE4MVl3?=
 =?utf-8?B?SXkrL2Y4VHI5SFkrTTNMMTcyYWRjbUVjMnc0aEVub3RSYjNLR2JLVmR0dk5m?=
 =?utf-8?B?dXRUVUlLYURkR09IelVxWWtZcDQyV241Um1lV2daMUMrT21kYVp0bDRJVTJq?=
 =?utf-8?B?NkZZOWp5ZzFxRDg2MHJOVUhxcE9mQnhZeVFZWlp2eVV0UXYveXdXTnVXN01w?=
 =?utf-8?B?YVRsNU1EcWJpY3R2MFZUTnI0dUd2OVpodElxblAzTFZNOEtCVWFYRk1GeU1W?=
 =?utf-8?B?OUJWRkxvSWtuREpNajJjQll6L3l0eXlQWGZTdXVDOHU0a3B0dEg5bkc5VGMx?=
 =?utf-8?B?Z3lyVUFmV1g1YWRxNjFUcXdxMHlRTlFEbXJZcXlwMTVpa1pFbnhLNDZIOFBT?=
 =?utf-8?B?V3VEb1FGRE9RdnpwUW5RVFZKTElDR3hiRlVFMUR3dk5GYWxrUk14WDZYSTJZ?=
 =?utf-8?B?MmhrSFVlaTltWDRYUFdQQkZrOGxNd1pBNFJ2SkorODVZc2pYOHdwL0YwMWVF?=
 =?utf-8?B?eGl1c0l0blV5NUFlaEYrLzliNGlZWlZISWR0bFpmNmpEOC96Nm4xSFJzMVBy?=
 =?utf-8?B?K204RFVVWjVSSlNiSkorcC9FRmhoQmdMUjZGcDJvTDEvbkF4cy9mLzdtdWtu?=
 =?utf-8?B?aVdaUWJJbVIvYjhDcDhaSUhDRGVseXN6ZFIyYldkTUI5OHExVTA4dkxndnVF?=
 =?utf-8?B?MzRQVUU2cWM2dnMrYkxHWmdQY3BPekdsL2pSb0FoL0xNSFI2VDR1blk2M09C?=
 =?utf-8?B?WVJaNCtsMEpuMk1tZUU0VjFrbFZsakJTdUdzT1hhUXIrdHdmR3cwM2RFemZz?=
 =?utf-8?B?d2VFaTVBUzZ2aGlLNWdRYnFhTmxmMklaS0pwS05DL1NSdUMzRFlEVEZSSEVu?=
 =?utf-8?B?UEFQVGE4b1U5VU1zV1FOYmRTRFRKVnEvWGtlMnp5N2dHRDNzbFlCa0ZKUmZZ?=
 =?utf-8?B?Q1dHQTdzaHBTQ2JldlhrZTRNenhlUHN2cHRNR2FjcGRZZ1d5ZXFiN1VMU1Bm?=
 =?utf-8?B?Q3ZBUWZnNzlUbnZNWUV5eEVzQU1jMGI0dVJrWnBuSFd0bUpSSGx3b3J3Nkpp?=
 =?utf-8?B?dXY2QmQzVy9ZMk1ZMGZCNUZLMkhlU0Eyelo1ZEQwVjFSd2FVcVVXdFhRWkxF?=
 =?utf-8?B?eFJBR3JHSXhuRFJVNllzNmF3Tmo0Znp6SnNKN0NyT0dwY0tjS003ZDZUVXN5?=
 =?utf-8?B?b0FKaWxkSmFrNlk4M0JucUFWdkdNT1hLS2FZeU5NRzc4V1VkSS9pSFg4Z3hz?=
 =?utf-8?B?MStrZ2Z6VGFZalBoY0QzZndSYjArVzVtTXBJUERza1NBRmgrVkMxaGNyQlAw?=
 =?utf-8?B?akQrODJtYVhIS2VaL3hjcWp3all4aktwOGV2Q0l4QjdQa0tzTUJCYzhVcWNH?=
 =?utf-8?B?U2lVNEltZjFPRFdlcmtsM0NwWlE0WnovaVdJUWRocXl1SzNxTEJwNUFMUUZm?=
 =?utf-8?B?UVRlZmFPbTVqS01rS2RZbmhhQUdJWDIwZi9USVBXRWR5TUVjb2c2OXVsUUNl?=
 =?utf-8?B?ZGJVOWNhbkNSS0IvZFdOeUl5T2VtcTRhbW96NGN1bnZhNk1YcmZiUTE1UkF1?=
 =?utf-8?B?dWU2Q2xIMktrUW82NVVjc1F4Z1dRY1h6MEM0SjZKTkJIQkt0cVFvTHFBWm5v?=
 =?utf-8?B?NHBrcm9MR0hMcXNxd1ZpUUlSRUJ4REhGby9QRE1kaXNjNVhCUEw3bzBsTjds?=
 =?utf-8?B?RDMvVHJBeTZpekNhNEJjaFZrWmkxUHZzRFhRd1k5S3ZGZ1M0bGFObTVncG15?=
 =?utf-8?B?NlZLWGZ0anoxNmVUcnV1YjNia1E4VzgyRXVGejhHQm1aNFdDWHk1Uk8rQ3NM?=
 =?utf-8?B?WkM4dE5uaERKZ3JXQUNFdkphNG40VS81Wi9EWEdzRDVxZWlQUUE2T0FBbHph?=
 =?utf-8?B?ak1pRUN4SEVmSzRPNHRLeGxZWDUydm94ZTdtb0xmaW91RFNqTUhIeHV4MWo1?=
 =?utf-8?Q?Ynd+PI7+jyX+gVukwNHAJLENE?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	eA9/5Yxnd6xlmxL2lZLu4DkpNyv1WkmRXtAsUrUVJW1QBbvTY1qWFNNuR9DmHhCDt2YiVSVV2PLW/K2V39qU2mKxNRuB7EYW+mRhy4IayICkWdcwkUBYzc1zlTa1pxp+KOWxzIqA1tuSChY0JifUAXt3MbFLUEzi7a0KcynGxO/uhzasI+IlnHvPXnpHtGQ1GQ8MHUzZlVGwcav0sdCtej6Km23EMrp0ea8d24SkCB5+7WC6bGuHL0wxlRPDstpNY2szTTANDUOaDkpYhCzJwhOMrCyfVmCbYBhyf+aakoMf43q9dbfk9dwbEplIm9t5Xlv4jJwOmoktvmmGUXSi/zVEmCvoiS0eFuraFXjKaJJk6HBkZ86CikR3AVTNA6RmeyJYtv4EVAP7gcFy3pZCrsxUsGJlktUmE+7uiG8dyySXEFQneEmFEA+5fy34pMEh0CkxESterVfKRHNdTE6cCl8Ub+ptzKTltQ6ZWh/Pzo48LWksPKPCXrg8qeJml4RRsix0BB9rvFhmSf6iM2BuzEpHHziB8Da2yjAVf7pNUi1CqGm/A0PHrdNWXSidO/meiLoWZF8sEmzqdyrYIBuBR53Su/Pa0Z4yyrRiivTB2hPn6PIjb0di3cYV1BmJI1PPbXvMQgEAap+eR3PXA6GNtJAWOfxqm3hYc17cv8GK2eU2ZQt1xHAD7j6fwMJGQp0WpNfGz4ibKS4DrP4kVNx05/OnM850WBiA31cVEvVk1TeYunsWfGF5cu4DbSUpl7e16roVcxgLF+hwk1eYXKu8ZJAPO/2xAYO8yP3zuHpkVoU=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 37253f1a-4003-4f83-85c6-08db551e5b8e
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 08:28:27.8224
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4XjBvSBcL6+NOkb5PjAX6khCH8WHl+dPvvo8QmM5+qJBHtvyRN79ELyNNmCeHasyw5zvIVQNsnDJIAxe5ksy/Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR03MB6009

On Fri, May 12, 2023 at 05:52:25PM -0700, Elliott Mitchell wrote:
> From tools/libs/light/libxl_x86.c: libxl__arch_passthrough_mode_setdefault()
> 
> "passthrough not yet supported for x86 PVH guests\n"
> 
> So PVH is recommended for most situations, but it is /still/ impossible
> to pass hardware devices to PVH domains?  Seems odd this has never been
> addressed with how long PVH has been around.

I have worked extensively on PVH but didn't get time to work on that
specific feature, and more pressing stuff has kept appearing.

AMD posted an RFC series not long ago:

https://lore.kernel.org/xen-devel/20230312075455.450187-1-ray.huang@amd.com/

That got quite a bit of feedback; we are currently waiting for an
updated version.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 15 08:31:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:31:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534618.831784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTcb-0003ny-3E; Mon, 15 May 2023 08:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534618.831784; Mon, 15 May 2023 08:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTcb-0003nr-0a; Mon, 15 May 2023 08:31:57 +0000
Received: by outflank-mailman (input) for mailman id 534618;
 Mon, 15 May 2023 08:31:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SknV=BE=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1pyTca-0003nl-G9
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:31:56 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2062a.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f0b9ec2e-f2fa-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 10:31:51 +0200 (CEST)
Received: from AM5PR0601CA0043.eurprd06.prod.outlook.com
 (2603:10a6:203:68::29) by AS2PR08MB9523.eurprd08.prod.outlook.com
 (2603:10a6:20b:60d::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 08:31:44 +0000
Received: from AM7EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:68:cafe::ef) by AM5PR0601CA0043.outlook.office365.com
 (2603:10a6:203:68::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30 via Frontend
 Transport; Mon, 15 May 2023 08:31:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT018.mail.protection.outlook.com (100.127.140.97) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.14 via Frontend Transport; Mon, 15 May 2023 08:31:43 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Mon, 15 May 2023 08:31:43 +0000
Received: from 601bc94e0a85.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7620B939-C8D5-4088-BD93-99BB05170463.1; 
 Mon, 15 May 2023 08:31:33 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 601bc94e0a85.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 15 May 2023 08:31:33 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by AS8PR08MB9696.eurprd08.prod.outlook.com (2603:10a6:20b:614::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 08:31:29 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb%7]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 08:31:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0b9ec2e-f2fa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZQtJC4hjArQPkI0YrSMQ6T1z7MzxtJjXJZwbRfdp108=;
 b=HNgSyHZG7xdhwiNxn34VEnzSGvDO4X78BLdz/uEu5DbC4iQoRWRnLbgJsPnHgQ4Kht7rt2+lK7fMTYRlikQFB1WHksKXDPRvhcE1Wb2gPxaUPpICPo6GOh+OBsMN8Uj5XtUwdOTvm+dFb+4YAQba/0oF9toox5seWpy4SQXpeYo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 95073a80a5afebf1
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iFpti8nXVybzuoQbdgHu68yiOX6TzUCAMrMfIx049kgF84+Y8wTpZLN7sPMwJxeLxsTQOiQP/qnSXkwg0190M6HKT85UwUvoWpzPPjBRzjMT+Dh2jGPTBrJ5zVRgRGQ3nfqWUrxbW2z4KMNU4gu1jPHs+HxN9hyPog4D3g7hE/EdIz8nLP45Jmd04KaWnpqT+cmEZQfsfNQlyjFpooeeqNsFv5R/WRZgKpqSQIrgnw1kOS9R7hni2c+/43A8+ZvioIi+fA1FLkIeRNIdWZkkLtSdAWA0toVm4MB3siGEuS+rNiLmJovIeUIgp+YI9EeQM16Qk9W9PZOtgOGuUP/1Mg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ZQtJC4hjArQPkI0YrSMQ6T1z7MzxtJjXJZwbRfdp108=;
 b=eHMoEG9fevQHMo6j90nR2CGMSMLYcrgoIA0tJUc/6l7b7lebpyCDu5Z2XTehekwF+yQUFky89bRMIup6RQpDaiS7lTXlbljriKc1felYi4vbZDN+lU0oZ4lc+1RlrAix2wXY9jkIRPKapAo+trBwyaa9t1N1AtPUiQW7wDIW8JJmsdIWcGJ5867jL/Sky2+7IDhblMWW/niYNZMpeRQTg69Vt9G/2us/FLDkpRHOuPbmCa+qCUg8tBmaoIPFyqoYD9xzUwKu/2fBKKUeETlU3xH+fBIA1nG1jRueyJfFGZBM4RnSEkA4LnZUJoV50wMlAmrhERAquHokWWmPxZLNyA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZQtJC4hjArQPkI0YrSMQ6T1z7MzxtJjXJZwbRfdp108=;
 b=HNgSyHZG7xdhwiNxn34VEnzSGvDO4X78BLdz/uEu5DbC4iQoRWRnLbgJsPnHgQ4Kht7rt2+lK7fMTYRlikQFB1WHksKXDPRvhcE1Wb2gPxaUPpICPo6GOh+OBsMN8Uj5XtUwdOTvm+dFb+4YAQba/0oF9toox5seWpy4SQXpeYo=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Xen developer discussion <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
Thread-Topic: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
Thread-Index: AQHZhN8Uzg8I+HOn4k6B42UqpaPyEa9bBX8A
Date: Mon, 15 May 2023 08:31:28 +0000
Message-ID: <CB2DA69D-2360-40A8-99F9-849A15392411@arm.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
In-Reply-To: <20230512143535.29679-3-michal.orzel@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7158:EE_|AS8PR08MB9696:EE_|AM7EUR03FT018:EE_|AS2PR08MB9523:EE_
X-MS-Office365-Filtering-Correlation-Id: d337cc9c-1157-4563-babb-08db551ed08d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 dIsUhmVF0aYj4oXNR/jMLNvORQQUdzK/AeA+u1zn3VaLFtCTuHgKjC6TgM5z8vNgGFpKnewgSGLhlHjkb8dWclAexD200c3gErhzk/WoinrDpTU9m65xvqDnqRjGrg5rE0qZ0gc8gomH3ivIWisKd/3qOLeu2bkSE8Y1H2xY2S1y9+soGCQkyjOUFi3rvZxKuHAudJnmDxYRoRvYuk9DzOlCHNTjXEsEzW2qw7bsdMx2FSKMF8yMN1l32iInRViidWBK77m9lIj33uLtjIpwi/cn1DmG64ZUCslQnOfc/jpelqEbo/u64qjrEOKgjUfdzVwuIYLxXo6Y7murY+a50ScofBrTP5pUim53edH9YHqNhnvVRq+leYwO48JZSWUrSncA0QyomQFLcLixr/6LkhU5RVQV7ybuQVEJXpoqHm/HKA2Iynb7J4tLuafo8H3ks/0JzYtvxiBeRdpToM9x922rbFiWuVYaYkk/RrUO6xOI20QWTmk0+xQw/45fYsWbcWpEjK05V0gIkAC0bZmGPC41mpCHc9L2JtdPb2e1Ve/8eC8jXaprReoGxiLUBh2V6iVJoZ9AoMKjMUfRWDK1p1ckI7VimnbN8F3ycRP2Yec1iaGJHFAu7K6eO0Ig46hgc66i3/PIQIgc6ewDepMvEg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7158.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(366004)(396003)(39860400002)(376002)(451199021)(83380400001)(64756008)(66476007)(66556008)(66946007)(66446008)(91956017)(76116006)(2616005)(6486002)(6512007)(26005)(6506007)(53546011)(478600001)(54906003)(71200400001)(186003)(5660300002)(86362001)(8676002)(8936002)(2906002)(33656002)(36756003)(41300700001)(38070700005)(4326008)(6916009)(316002)(122000001)(38100700002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <DA52AFA571B29A46AAE878C084408EA7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9696
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6b751bd3-4325-4168-b5a6-08db551ec7a4
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Sn0CZhRcQyBzPIx+f382VnHypSlK1oHbUBkTJYYk5hgVr7vqg+X+LUY76mzjNHPbhF+ddyRGNTM34ynfNr8JzXtowbtz0Hn5kXuSAukobTcl33VfwpH3OmTpAsz6949SsjDuEbk+W/y6XRHxZwO4kQSbJKsUga11Mkg1Ma+xexZrg5wnhpGT5cK5kxbMt8+pvTFWT6odKw2pO7pWQUunNuuxWtxm2Ms0dCR+2w2slbA/JkGTGERZKXjWKIbxKkcMmH9pIKLxcWUHcLTgpXIfPSpZzn/OGfdFa164skxvTQgpdeqzMbByJFUEpsvCIV3t51vBnaQAk73t9WnAqsd21C5HKAAk1KbbLcb3G38Lzm3FOWY6dyeFVlsfQQbitNqE+ddMnD9D3HtJlRBKJmwXENrhakKRJmvYoXRFDurrxbs2Eh+AdQ8FSbu+smTHGXLgWDIKlRiTVwf9A7VDU9PjDA8N611QatVJL8TmUlFcKbs8PW9rvoMDbKCJMoj3GthxnVYH6FBV6IViyxIgzO8W74qQeWHIGfFTZZ95n8p1W7gM5FDXOtyTk35sDgmRuaOgWpUBUs84sc3HIhOkMP6IcoMx0rsAPKELgdtWSrjoZz5VCk7c9nV2luMVCvWgWQnakkQY93ZttPFS42yhbN+571yoy89qg4z4h7mZNpsjDk9skRlCjfr8ipZvMMnU6zbZCwbSwUpxLErulqG/OBGz4mMR8DhH0vudR28j8+BpIEU=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(346002)(376002)(396003)(451199021)(36840700001)(40470700004)(46966006)(33656002)(36756003)(86362001)(54906003)(316002)(4326008)(70206006)(70586007)(6486002)(478600001)(82310400005)(40480700001)(8936002)(8676002)(5660300002)(6862004)(2906002)(81166007)(356005)(82740400003)(41300700001)(6512007)(2616005)(26005)(107886003)(186003)(53546011)(6506007)(36860700001)(83380400001)(336012)(47076005)(40460700003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 08:31:43.8761
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d337cc9c-1157-4563-babb-08db551ed08d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9523

Hi Michal,

> On 12 May 2023, at 3:35 pm, Michal Orzel <michal.orzel@amd.com> wrote:
>
> At the moment, even in the case of an SMMU being I/O coherent, we clean the
> updated PT as a result of not advertising the coherency feature. The SMMUv3
> coherency feature means that page table walks, accesses to memory
> structures and queues are I/O coherent (refer to ARM IHI 0070 E.A, 3.15).
>
> Follow the same steps that were done for the SMMU v1,v2 driver by commit:
> 080dcb781e1bc3bb22f55a9dfdecb830ccbabe88
>
> The same restrictions apply, meaning that in order to advertise the coherent
> table walk platform feature, all the SMMU devices need to report the
> coherency feature. This is because the page tables (we share them with the
> CPU) are populated before any device assignment, and in the case of a device
> behind a non-coherent SMMU we would have to scan the tables and clean
> the cache.
>
> It is to be noted that the SBSA/BSA (refer to ARM DEN0094C 1.0C, section D)
> requires that all SMMUv3 devices support I/O coherency.
>
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> There are very few platforms out there with SMMUv3, but I have never seen
> an SMMUv3 that is not I/O coherent.
> ---
> xen/drivers/passthrough/arm/smmu-v3.c | 24 +++++++++++++++++++++++-
> 1 file changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index bf053cdb6d5c..2adaad0fa038 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -2526,6 +2526,15 @@ static const struct dt_device_match arm_smmu_of_match[] = {
> };
>
> /* Start of Xen specific code. */
> +
> +/*
> + * Platform features. It indicates the list of features supported by all
> + * SMMUs. Actually we only care about coherent table walk, which in case of
> + * SMMUv3 is implied by the overall coherency feature (refer ARM IHI 0070 E.A,
> + * section 3.15 and SMMU_IDR0.COHACC bit description).
> + */
> +static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;
> +
> static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
> {
> 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
> @@ -2708,8 +2717,12 @@ static int arm_smmu_iommu_xen_domain_init(struct domain *d)
> 	INIT_LIST_HEAD(&xen_domain->contexts);
>
> 	dom_iommu(d)->arch.priv = xen_domain;
> -	return 0;
>
> +	/* Coherent walk can be enabled only when all SMMUs support it. */
> +	if (platform_features & ARM_SMMU_FEAT_COHERENCY)
> +		iommu_set_feature(d, IOMMU_FEAT_COHERENT_WALK);
> +
> +	return 0;
> }
>
> static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
> @@ -2738,6 +2751,7 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
> 				const void *data)
> {
> 	int rc;
> +	const struct arm_smmu_device *smmu;
>
> 	/*
> 	 * Even if the device can't be initialized, we don't want to
> @@ -2751,6 +2765,14 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>
> 	iommu_set_ops(&arm_smmu_iommu_ops);
>
> +	/* Find the just added SMMU and retrieve its features. */
> +	smmu = arm_smmu_get_by_dev(dt_to_dev(dev));
> +
> +	/* It would be a bug not to find the SMMU we just added. */
> +	BUG_ON(!smmu);
> +
> +	platform_features &= smmu->features;
> +
> 	return 0;
> }
>
> --
> 2.25.1
>
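The accumulation logic in the quoted patch (start from a full feature mask, AND in each probed SMMU's features, and advertise coherent walk only if the bit survives) can be sketched in isolation. The sketch below reuses the patch's names `platform_features` and `ARM_SMMU_FEAT_COHERENCY`, but the bit value and the two helper functions are hypothetical stand-ins, not Xen APIs.

```c
#include <stdint.h>

/* Stand-in for the driver's coherency feature bit; the value is illustrative. */
#define ARM_SMMU_FEAT_COHERENCY (1u << 0)

/*
 * Running intersection of the features reported by every SMMU probed so far.
 * It starts with the bit of interest set; a single SMMU lacking the feature
 * clears it for the whole platform.
 */
static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;

/* Hypothetical per-probe hook, mirroring the accumulation in arm_smmu_dt_init(). */
static void account_smmu_features(uint32_t smmu_features)
{
    platform_features &= smmu_features;
}

/* Mirrors the check done in arm_smmu_iommu_xen_domain_init(). */
static int platform_has_coherent_walk(void)
{
    return (platform_features & ARM_SMMU_FEAT_COHERENCY) != 0;
}
```

Because the update is a bitwise AND, probe order does not matter, and a later-probed non-coherent SMMU can only withdraw the platform feature, never grant it.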



From xen-devel-bounces@lists.xenproject.org Mon May 15 08:33:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:33:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534621.831794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTdf-0004Kg-E1; Mon, 15 May 2023 08:33:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534621.831794; Mon, 15 May 2023 08:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTdf-0004KZ-AK; Mon, 15 May 2023 08:33:03 +0000
Received: by outflank-mailman (input) for mailman id 534621;
 Mon, 15 May 2023 08:33:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SknV=BE=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1pyTde-0004KP-G1
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:33:02 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0604.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 19a82907-f2fb-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 10:33:00 +0200 (CEST)
Received: from AM3PR07CA0138.eurprd07.prod.outlook.com (2603:10a6:207:8::24)
 by DU0PR08MB10367.eurprd08.prod.outlook.com (2603:10a6:10:409::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 08:32:55 +0000
Received: from AM7EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:207:8:cafe::a0) by AM3PR07CA0138.outlook.office365.com
 (2603:10a6:207:8::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.14 via Frontend
 Transport; Mon, 15 May 2023 08:32:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT062.mail.protection.outlook.com (100.127.140.99) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.14 via Frontend Transport; Mon, 15 May 2023 08:32:55 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Mon, 15 May 2023 08:32:54 +0000
Received: from 8ee53e271a89.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B8A5B01C-6F65-4662-9014-50F330509756.1; 
 Mon, 15 May 2023 08:32:43 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8ee53e271a89.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 15 May 2023 08:32:43 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by GV1PR08MB7803.eurprd08.prod.outlook.com (2603:10a6:150:5a::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.29; Mon, 15 May
 2023 08:32:41 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a4ab:cdfa:b445:7ecb%7]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 08:32:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19a82907-f2fb-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9g3xWrl8pnopm5cFv/kjePR0wYB/NRo+6fpX922euJw=;
 b=tuZG/VGsXfN8wo2R1OaDivtaQp5pKoy3CFrx35pTWOEWxvWPCV2jW6cEj7yEsHOelzq2nhuui/yobN6eJqKkBWLSNFKgHN9uLbdvjXY+V7K2vOR/7OiAnhhlEgbdaBzkEHp8+xwvDx3lB5Iujf8jxBM4XtphFZrm46/mpWWWL5U=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: b10abeb8d71aa736
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZxBnVmrVBhk07qVv2+f62mH83nEcGczce+raWAdd82ylIfoTt/zbt+Zkv1ZzD76JMz+iL+d2dxFTVFfbBO8BWQ26+zuPXc0M3DmAR+KnZhF833M6wRRoQjmC9oJfowYTt1AJSRqOGoy85o9ibTeiWI7LB0uWvWlt1eFfohaSyWMWESvRCJC6Diw7CUC/aEVfSzZ6d6nKJ1VZkz2drgNhd35qe2La6uRhUaptFTZLRCt0xipVl664fwl/iyPeFy/kM3vCGi9DbSjaHNlsNK9XI3XvYkRSLxaEgKvUCGomzlSg3FhFsUTZSXr7DpUKXWoW2UJCyOKXjuBfW+KqDM0Qsg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9g3xWrl8pnopm5cFv/kjePR0wYB/NRo+6fpX922euJw=;
 b=ogo3+zb/rMOlPdEIn93Y7+GDtNGPp0Uaa0CSHHs4b6HN+1lpsqR15OcsRHh7e3uGeyYFcDX8sTRHp9hk0ev59XjG0I4Bx9cB3TR1AWOOo0EhRP+mkmA+r2mx7zdb3lRedYdHmIPczSoWiBRI7Gwyf/Lto6qfjnP5ciuq9I3KGzyomoVWjShYnDhe2RSGAVwdogAFkcKD4TIkiUkYlL18ryHFgSCb8dBIHQpb/LZil4+KvedJYsHm+8zJDKxV6dzp4LN3f5K41z+7MxZss9LeuL6/NL9Ilv8IrzunhLE9CqhTxMkcScAAeBLhTJOh/NcfrLFeXD+2YuS4W+DTvn4oGw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9g3xWrl8pnopm5cFv/kjePR0wYB/NRo+6fpX922euJw=;
 b=tuZG/VGsXfN8wo2R1OaDivtaQp5pKoy3CFrx35pTWOEWxvWPCV2jW6cEj7yEsHOelzq2nhuui/yobN6eJqKkBWLSNFKgHN9uLbdvjXY+V7K2vOR/7OiAnhhlEgbdaBzkEHp8+xwvDx3lB5Iujf8jxBM4XtphFZrm46/mpWWWL5U=
From: Rahul Singh <Rahul.Singh@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Xen developer discussion <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
Thread-Topic: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
Thread-Index: AQHZhN8Uzg8I+HOn4k6B42UqpaPyEa9bBdQA
Date: Mon, 15 May 2023 08:32:41 +0000
Message-ID: <4506DFC2-A6F0-42CA-AB2D-EA95ED4C970A@arm.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
In-Reply-To: <20230512143535.29679-3-michal.orzel@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7158:EE_|GV1PR08MB7803:EE_|AM7EUR03FT062:EE_|DU0PR08MB10367:EE_
X-MS-Office365-Filtering-Correlation-Id: 2eee98f3-748c-447e-f97f-08db551efafd
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 OW1pRH6PRgz1OwGwutYCi7UuxXTuLSv9LnKR6RC8spkCFRBlt2HHJgtmNw4gB9cqGT4npGbd2OTr8bYjzwTj8aNza+u0OCvQw4a64J6DZGjMysQpGsOMyqF7fgSPK9wS3yv1WNkllhi1ZXxgQ28AOP7J1jxgjKv8UINsNkCeqjxUA70KSu/kos0zovS19nya+RkoOIwXx3LvFWSbHbFZkYnGZNxbiapKBWilrGPKO+22zdrTsbt55Hwox4iKbK5KxfLJPT9CQbaZDk9UCG7GisA+C3KFTDcdc8ob5+qslFPej4YCqTTPMSF3s07kxccZL6FW/ZsYGRwBONt/QVbRifZ4CRbBtLX88D0cgMN51MwFMm8bf+qm3tC+rwCKBYh74nn23LLvqyuN72tsXy3ZWQj7ERWGjdmYdsrekAhdHNmUy/IwNfTuey8jJQOanQn/RR7sy1Lt1oo1RUOMbOPARi18mDeNqsC4fDmPGEu8ulm0saIF/q8DiyuA+crzQ6cy+GU7dnFAwE9Pae9ti4qvwlsA+Xe0CClt14HxXCfjziD8ePz75m5wnH4FXVmSLqXiPGtii5sdP3DD4zxQxPQwL+yOndlg+Bg3Li45Y+P8IFFiDG4DK1DlZwvReAs+2AZX0Xof5X0Q7IzQPW9zGHGlSQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7158.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(376002)(366004)(346002)(39860400002)(396003)(451199021)(66556008)(478600001)(66446008)(64756008)(66476007)(6916009)(4326008)(76116006)(91956017)(66946007)(54906003)(5660300002)(6512007)(6506007)(53546011)(186003)(38070700005)(2616005)(36756003)(38100700002)(83380400001)(8676002)(8936002)(2906002)(71200400001)(6486002)(316002)(41300700001)(86362001)(122000001)(33656002)(26005)(45980500001);DIR:OUT;SFP:1101;
Content-Type: multipart/alternative;
	boundary="_000_4506DFC2A6F042CAAB2DEA95ED4C970Aarmcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7803
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d00f81f0-dbf2-47b3-88e8-08db551ef2a2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	J9em+mbkztvUM+EPvHDzedk8ZXcbXtggKxSIMmu8mtWGCQOcQzeMWQ9THFSnvvVtMEdCm43PzZHW4auZz09dR6wSYKLDV39mG+yARGGMCpvwcMsCUZ6jHYqe+wAg9t7tDg7kcz7/ccR+aISHeh/JPgI+FAZ4E2CBQjQMKQWm1vCKeNTj0yXeZTR4ZoeGRNCNja1+gVCSqF1Bf7uDpvynxC2izxfOchx5/g7J/g/FdXVmWCL0gG4i5H8IE8bueVHBzZrFR32v0hmbPVUxpux//+6Nk6x3SIBUHoDS/4Yo4HrUFrQiZHThWAT1ichur8oaGCTaXB/wcNtZbhlFwoMpkmxlBTFbZavxnIdEyTUyCLD4iEGNxgUv3CCc0LuNbHRBcdKLv+VX27lPV20N1OjCOlxb5s5OGMkW5SVZeClplXeC/vJoH15TWmfpjgmBFtz3OtViBy7EG4o59zOzUehDRWm8zFEQ1ZYaNSyKerNNklosfcwA6xC6f6dm3m1f1BfXngNAOn/juqiU+oH7UqmE8Re1kwld2ugNySNf7IwcQGvxhgrYoavt8aTw55uyMj43dwitYfsmhYf8y2i6l22hVvsOHphmiAQldtDxCqz3SZxMm0+GXpJgXW8CThW3rPIA+ludwx6IBFcSVckjG2Up+peFwxjk71RnJTp6xq5typVGF0zi7ctGrd0auEbesPeND5O7MQkKhbW16p8ByNNQ1nnpSYGaVjRnztVb+LcwI7s=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(136003)(396003)(346002)(376002)(451199021)(46966006)(36840700001)(40470700004)(82740400003)(83380400001)(40460700003)(86362001)(82310400005)(356005)(81166007)(40480700001)(33656002)(6506007)(47076005)(336012)(6512007)(2616005)(53546011)(54906003)(186003)(26005)(2906002)(5660300002)(41300700001)(8676002)(8936002)(6862004)(316002)(107886003)(4326008)(36860700001)(70586007)(70206006)(6486002)(45080400002)(36756003)(478600001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 08:32:55.0916
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2eee98f3-748c-447e-f97f-08db551efafd
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB10367

--_000_4506DFC2A6F042CAAB2DEA95ED4C970Aarmcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi Michal,


On 12 May 2023, at 3:35 pm, Michal Orzel <michal.orzel@amd.com> wrote:

At the moment, even in the case of an SMMU being I/O coherent, we clean the
updated PT as a result of not advertising the coherency feature. The SMMUv3
coherency feature means that page table walks, accesses to memory
structures and queues are I/O coherent (refer to ARM IHI 0070 E.A, 3.15).

Follow the same steps that were done for the SMMU v1,v2 driver by commit:
080dcb781e1bc3bb22f55a9dfdecb830ccbabe88

The same restrictions apply, meaning that in order to advertise the coherent
table walk platform feature, all the SMMU devices need to report the
coherency feature. This is because the page tables (we share them with the
CPU) are populated before any device assignment, and in the case of a device
behind a non-coherent SMMU we would have to scan the tables and clean
the cache.

It is to be noted that the SBSA/BSA (refer to ARM DEN0094C 1.0C, section D)
requires that all SMMUv3 devices support I/O coherency.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul


--_000_4506DFC2A6F042CAAB2DEA95ED4C970Aarmcom_--


From xen-devel-bounces@lists.xenproject.org Mon May 15 08:46:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:46:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534625.831805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTqn-0005w7-Kl; Mon, 15 May 2023 08:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534625.831805; Mon, 15 May 2023 08:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTqn-0005w0-HG; Mon, 15 May 2023 08:46:37 +0000
Received: by outflank-mailman (input) for mailman id 534625;
 Mon, 15 May 2023 08:46:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Phvy=BE=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pyTql-0005vu-GQ
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:46:36 +0000
Received: from mail-pg1-x52a.google.com (mail-pg1-x52a.google.com
 [2607:f8b0:4864:20::52a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fbfd4b8b-f2fc-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 10:46:30 +0200 (CEST)
Received: by mail-pg1-x52a.google.com with SMTP id
 41be03b00d2f7-530638a60e1so5035933a12.2
 for <xen-devel@lists.xenproject.org>; Mon, 15 May 2023 01:46:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <43f5fdaa-47c7-6ec9-c477-dac62a5bceae@amd.com> <CA+SAi2uBmnUA0Z=+Ji_jaoOGjS8H8ea1_aRuRm=_B89oidxHCA@mail.gmail.com>
 <alpine.DEB.2.22.394.2304241337280.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com> <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
 <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com> <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
In-Reply-To: <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Mon, 15 May 2023 11:51:25 +0300
Message-ID: <CA+SAi2tErcaAkRT5zhTwSE=-jszwAWNtEAnm5jNGEP1NoqbQ3w@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com
Content-Type: multipart/alternative; boundary="00000000000092b17105fbb77d55"

--00000000000092b17105fbb77d55
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello guys,

Thanks a lot.
After working through a long list of problems, I was able to run Xen with
Dom0 using cache coloring.
One more question from my side.
I want to run a guest in colored mode too.
I added the line llc-colors = "9-13" to the guest config file and got this
error:
[  457.517004] loop0: detected capacity change from 0 to 385840
Parsing config from /xen/red_config.cfg
/xen/red_config.cfg:26: config parsing error near `-colors': lexical error
warning: Config file looks like it contains Python code.
warning:  Arbitrary Python is no longer supported.
warning:  See https://wiki.xen.org/wiki/PythonInXlConfig
Failed to parse config: Invalid argument
So my question is: can a color configuration be assigned to a DomU via its
config file? If so, what exact line should I use?
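A note on the parse error itself: the xl config lexer accepts only letters, digits, and underscores in option names, so `llc-colors` stops the lexer at `-colors` regardless of whether the option exists. If the Xilinx tree exposes a per-DomU coloring key, it would therefore use an underscore. The sketch below assumes a key named `llc_colors`; the actual key name must be checked against docs/misc/arm/cache-coloring.rst in the xlnx_rebase_4.17 branch:

```
# Hypothetical DomU config sketch -- the coloring key name and all values
# here are assumptions, not confirmed in this thread.
name = "red_domu"
kernel = "/xen/Image"
memory = 256
vcpus = 1
llc_colors = "9-13"    # assumed spelling; colors must not overlap Dom0's
```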

Regards,
Oleg

Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com>:

> Hi Michal,
>
> Thanks.
> This config option was previously named CONFIG_COLORING.
> That is what confused me.
>
> Regards,
> Oleg
>
> Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com>:
>
>> Hi Oleg,
>>
>> On 11/05/2023 12:02, Oleg Nikitenko wrote:
>> >
>> >
>> >
>> > Hello,
>> >
>> > Thanks Stefano.
>> > Then the next question.
>> > I cloned the Xen repo from the Xilinx site: https://github.com/Xilinx/xen.git
>> > I managed to build a xlnx_rebase_4.17 branch in my environment.
>> > I did it without coloring first. I did not find any traces of coloring in
>> this branch.
>> > I realized coloring is not in the xlnx_rebase_4.17 branch yet.
>> This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the
>> docs:
>>
>> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
>>
>> It describes the feature and documents the required properties.
>>
>> ~Michal
>>
>> >
>> >
>> > Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org>:
>> >
>> >     We test Xen Cache Coloring regularly on zcu102. Every Petalinux
>> release
>> >     (twice a year) is tested with cache coloring enabled. The last
>> Petalinux
>> >     release is 2023.1 and the kernel used is this:
>> >     https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
>> >
>> >
>> >     On Tue, 9 May 2023, Oleg Nikitenko wrote:
>> >     > Hello guys,
>> >     >
>> >     > I have a couple of more questions.
>> >     > Have you ever run Xen with cache coloring on a Zynq UltraScale+ MPSoC zcu102 (xczu15eg)?
>> >     > When did you run xen with the cache coloring last time ?
>> >     > What kernel version did you use for Dom0 when you ran xen with
>> the cache coloring last time ?
>> >     >
>> >     > Regards,
>> >     > Oleg
>> >     >
>> >     > Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:
>> >     >       Hi Michal,
>> >     >
>> >     > Thanks.
>> >     >
>> >     > Regards,
>> >     > Oleg
>> >     >
>> >     > Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
>> >     >       Hi Oleg,
>> >     >
>> >     >       Replying, so that you do not need to wait for Stefano.
>> >     >
>> >     >       On 05/05/2023 10:28, Oleg Nikitenko wrote:
>> >     >       >
>> >     >       >
>> >     >       >
>> >     >       > Hello Stefano,
>> >     >       >
>> >     >       > I would like to try the Xen cache coloring feature from this
>> >     >       > repo: https://xenbits.xen.org/git-http/xen.git
>> >     >       > Could you tell me what branch I should use?
>> >     >       The cache coloring feature is not part of the upstream tree and is still under review.
>> >     >       You can only find it integrated in the Xilinx Xen tree.
>> >     >
>> >     >       ~Michal
>> >     >
>> >     >       >
>> >     >       > Regards,
>> >     >       > Oleg
>> >     >       >
>> >     >       > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org>:
>> >     >       >
>> >     >       >     I am familiar with the zcu102 but I don't know how
>> you could possibly
>> >     >       >     generate a SError.
>> >     >       >
>> >     >       >     I suggest to try to use ImageBuilder [1] to generate
>> the boot
>> >     >       >     configuration as a test because that is known to work well for zcu102.
>> >     >       >
>> >     >       >     [1] https://gitlab.com/xen-project/imagebuilder
>> >     >       >
>> >     >       >
>> >     >       >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
>> >     >       >     > Hello Stefano,
>> >     >       >     >
>> >     >       >     > Thanks for clarification.
>> >     >       >     > We use neither ImageBuilder nor a u-boot boot script.
>> >     >       >     > A model is zcu102 compatible.
>> >     >       >     >
>> >     >       >     > Regards,
>> >     >       >     > O.
>> >     >       >     >
>> >     >       >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
>> >     >       >     >       This is interesting. Are you using Xilinx
>> hardware by any chance? If so,
>> >     >       >     >       which board?
>> >     >       >     >
>> >     >       >     >       Are you using ImageBuilder to generate your
>> boot.scr boot script? If so,
>> >     >       >     >       could you please post your ImageBuilder
>> config file? If not, can you
>> >     >       >     >       post the source of your uboot boot script?
>> >     >       >     >
>> >     >       >     >       SErrors are supposed to be related to a
>> hardware failure of some kind.
>> >     >       >     >       You are not supposed to be able to trigger an SError easily by
>> >     >       >     >       "mistake". I have not seen SErrors due to
>> wrong cache coloring
>> >     >       >     >       configurations on any Xilinx board before.
>> >     >       >     >
>> >     >       >     >       The differences between Xen with and without
>> cache coloring from a
>> >     >       >     >       hardware perspective are:
>> >     >       >     >
>> >     >       >     >       - With cache coloring, the SMMU is enabled
>> and does address translations
>> >     >       >     >         even for dom0. Without cache coloring the
>> SMMU could be disabled, and
>> >     >       >     >         if enabled, the SMMU doesn't do any address translations for Dom0. If
>> >     >       >     >         there is a hardware failure related to SMMU address translation it
>> >     >       >     >         could only trigger with cache coloring.
>> This would be my normal
>> >     >       >     >         suggestion for you to explore, but the
>> failure happens too early
>> >     >       >     >         before any DMA-capable device is
>> programmed. So I don't think this can
>> >     >       >     >         be the issue.
>> >     >       >     >
>> >     >       >     >       - With cache coloring, the memory allocation
>> is very different so you'll
>> >     >       >     >         end up using different DDR regions for
>> Dom0. So if your DDR is
>> >     >       >     >         defective, you might only see a failure
>> with cache coloring enabled
>> >     >       >     >         because you end up using different regions.
>> >     >       >     >
>> >     >       >     >
>> >     >       >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
>> >     >       >     >       > Hi Stefano,
>> >     >       >     >       >
>> >     >       >     >       > Thank you.
>> >     >       >     >       > If I build Xen without coloring support, this error does not occur.
>> >     >       >     >       > All the domains are booted well.
>> >     >       >     >       > Hence it cannot be a hardware issue.
>> >     >       >     >       > This panic arrived during unpacking the
>> rootfs.
>> >     >       >     >       > Here I attached the boot log xen/Dom0
>> without color.
>> >     >       >     >       > The highlighted strings are printed exactly after the place where the panic first appeared.
>> >     >       >     >       >
>> >     >       >     >       >  Xen 4.16.1-pre
>> >     >       >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
>> >     >       >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
>> >     >       >     >       > (XEN) build-id:
>> c1847258fdb1b79562fc710dda40008f96c0fde5
>> >     >       >     >       > (XEN) Processor: 00000000410fd034: "ARM
>> Limited", variant: 0x0, part 0xd03,rev 0x4
>> >     >       >     >       > (XEN) 64-bit Execution:
>> >     >       >     >       > (XEN)   Processor Features:
>> 0000000000002222 0000000000000000
>> >     >       >     >       > (XEN)     Exception Levels: EL3:64+32
>> EL2:64+32 EL1:64+32 EL0:64+32
>> >     >       >     >       > (XEN)     Extensions: FloatingPoint
>> AdvancedSIMD
>> >     >       >     >       > (XEN)   Debug Features: 0000000010305106
>> 0000000000000000
>> >     >       >     >       > (XEN)   Auxiliary Features:
>> 0000000000000000 0000000000000000
>> >     >       >     >       > (XEN)   Memory Model Features:
>> 0000000000001122 0000000000000000
>> >     >       >     >       > (XEN)   ISA Features:  0000000000011120
>> 0000000000000000
>> >     >       >     >       > (XEN) 32-bit Execution:
>> >     >       >     >       > (XEN)   Processor Features:
>> 0000000000000131:0000000000011011
>> >     >       >     >       > (XEN)     Instruction Sets: AArch32 A32
>> Thumb Thumb-2 Jazelle
>> >     >       >     >       > (XEN)     Extensions: GenericTimer Security
>> >     >       >     >       > (XEN)   Debug Features: 0000000003010066
>> >     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000
>> >     >       >     >       > (XEN)   Memory Model Features:
>> 0000000010201105 0000000040000000
>> >     >       >     >       > (XEN)
>>  0000000001260000 0000000002102211
>> >     >       >     >       > (XEN)   ISA Features: 0000000002101110
>> 0000000013112111 0000000021232042
>> >     >       >     >       > (XEN)                 0000000001112131
>> 0000000000011142 0000000000011121
>> >     >       >     >       > (XEN) Using SMC Calling Convention v1.2
>> >     >       >     >       > (XEN) Using PSCI v1.1
>> >     >       >     >       > (XEN) SMP: Allowing 4 CPUs
>> >     >       >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
>> >     >       >     >       > (XEN) GICv2 initialization:
>> >     >       >     >       > (XEN)         gic_dist_addr=00000000f9010000
>> >     >       >     >       > (XEN)         gic_cpu_addr=00000000f9020000
>> >     >       >     >       > (XEN)         gic_hyp_addr=00000000f9040000
>> >     >       >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
>> >     >       >     >       > (XEN)         gic_maintenance_irq=25
>> >     >       >     >       > (XEN) GICv2: Adjusting CPU interface base
>> to 0xf902f000
>> >     >       >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
>> >     >       >     >       > (XEN) Using scheduler: null Scheduler (null)
>> >     >       >     >       > (XEN) Initializing null scheduler
>> >     >       >     >       > (XEN) WARNING: This is experimental
>> software in development.
>> >     >       >     >       > (XEN) Use at your own risk.
>> >     >       >     >       > (XEN) Allocated console ring of 32 KiB.
>> >     >       >     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
>> >     >       >     >       > (XEN) Bringing up CPU1
>> >     >       >     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
>> >     >       >     >       > (XEN) CPU 1 booted.
>> >     >       >     >       > (XEN) Bringing up CPU2
>> >     >       >     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
>> >     >       >     >       > (XEN) CPU 2 booted.
>> >     >       >     >       > (XEN) Bringing up CPU3
>> >     >       >     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
>> >     >       >     >       > (XEN) Brought up 4 CPUs
>> >     >       >     >       > (XEN) CPU 3 booted.
>> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: probing
>> hardware configuration...
>> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2
>> with:
>> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2
>> translation
>> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
>> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
>> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2:
>> 48-bit IPA -> 48-bit PA
>> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: registered
>> 29 master devices
>> >     >       >     >       > (XEN) I/O virtualisation enabled
>> >     >       >     >       > (XEN)  - Dom0 mode: Relaxed
>> >     >       >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and
>> 8-bit VMID
>> >     >       >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>> >     >       >     >       > (XEN) Scheduling granularity: cpu, 1 CPU
>> per sched-resource
>> >     >       >     >       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
>> >     >       >     >       > (XEN) *** LOADING DOMAIN 0 ***
>> >     >       >     >       > (XEN) Loading d0 kernel from boot module @
>> 0000000001000000
>> >     >       >     >       > (XEN) Loading ramdisk from boot module @
>> 0000000002000000
>> >     >       >     >       > (XEN) Allocating 1:1 mappings totalling
>> 1600MB for dom0:
>> >     >       >     >       > (XEN) BANK[0]
>> 0x00000010000000-0x00000020000000 (256MB)
>> >     >       >     >       > (XEN) BANK[1]
>> 0x00000024000000-0x00000028000000 (64MB)
>> >     >       >     >       > (XEN) BANK[2]
>> 0x00000030000000-0x00000080000000 (1280MB)
>> >     >       >     >       > (XEN) Grant table range:
>> 0x00000000e00000-0x00000000e40000
>> >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: d0:
>> p2maddr 0x000000087bf94000
>> >     >       >     >       > (XEN) Allocating PPI 16 for event channel
>> interrupt
>> >     >       >     >       > (XEN) Extended region 0:
>> 0x81200000->0xa0000000
>> >     >       >     >       > (XEN) Extended region 1:
>> 0xb1200000->0xc0000000
>> >     >       >     >       > (XEN) Extended region 2:
>> 0xc8000000->0xe0000000
>> >     >       >     >       > (XEN) Extended region 3:
>> 0xf0000000->0xf9000000
>> >     >       >     >       > (XEN) Extended region 4:
>> 0x100000000->0x600000000
>> >     >       >     >       > (XEN) Extended region 5:
>> 0x880000000->0x8000000000
>> >     >       >     >       > (XEN) Extended region 6:
>> 0x8001000000->0x10000000000
>> >     >       >     >       > (XEN) Loading zImage from 0000000001000000
>> to 0000000010000000-0000000010e41008
>> >     >       >     >       > (XEN) Loading d0 initrd from
>> 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
>> >     >       >     >       > (XEN) Loading d0 DTB to
>> 0x0000000013400000-0x000000001340cbdc
>> >     >       >     >       > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>> >     >       >     >       > (XEN) Std. Loglevel: All
>> >     >       >     >       > (XEN) Guest Loglevel: All
>> >     >       >     >       > (XEN) *** Serial input to DOM0 (type
>> 'CTRL-a' three times to switch input)
>> >     >       >     >       > (XEN) null.c:353: 0 <-- d0v0
>> >     >       >     >       > (XEN) Freed 356kB init memory.
>> >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
>> >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
>> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
>> 0x000000ffffffff to ICACTIVER4
>> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
>> 0x000000ffffffff to ICACTIVER8
>> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
>> 0x000000ffffffff to ICACTIVER12
>> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
>> 0x000000ffffffff to ICACTIVER16
>> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
>> 0x000000ffffffff to ICACTIVER20
>> >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write
>> 0x000000ffffffff to ICACTIVER0
>> >     >       >     >       > [    0.000000] Booting Linux on physical
>> CPU 0x0000000000 [0x410fd034]
>> >     >       >     >       > [    0.000000] Linux version
>> 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc
>> (GCC)
>> >     >       11.3.0, GNU ld (GNU
>> >     >       >     >       Binutils)
>> >     >       >     >       > 2.38.20220708) #1 SMP Tue Feb 21 05:47:54
>> UTC 2023
>> >     >       >     >       > [    0.000000] Machine model: D14 Viper
>> Board - White Unit
>> >     >       >     >       > [    0.000000] Xen 4.16 support found
>> >     >       >     >       > [    0.000000] Zone ranges:
>> >     >       >     >       > [    0.000000]   DMA      [mem
>> 0x0000000010000000-0x000000007fffffff]
>> >     >       >     >       > [    0.000000]   DMA32    empty
>> >     >       >     >       > [    0.000000]   Normal   empty
>> >     >       >     >       > [    0.000000] Movable zone start for each
>> node
>> >     >       >     >       > [    0.000000] Early memory node ranges
>> >     >       >     >       > [    0.000000]   node   0: [mem
>> 0x0000000010000000-0x000000001fffffff]
>> >     >       >     >       > [    0.000000]   node   0: [mem
>> 0x0000000022000000-0x0000000022147fff]
>> >     >       >     >       > [    0.000000]   node   0: [mem
>> 0x0000000022200000-0x0000000022347fff]
>> >     >       >     >       > [    0.000000]   node   0: [mem
>> 0x0000000024000000-0x0000000027ffffff]
>> >     >       >     >       > [    0.000000]   node   0: [mem
>> 0x0000000030000000-0x000000007fffffff]
>> >     >       >     >       > [    0.000000] Initmem setup node 0 [mem
>> 0x0000000010000000-0x000000007fffffff]
>> >     >       >     >       > [    0.000000] On node 0, zone DMA: 8192
>> pages in unavailable ranges
>> >     >       >     >       > [    0.000000] On node 0, zone DMA: 184
>> pages in unavailable ranges
>> >     >       >     >       > [    0.000000] On node 0, zone DMA: 7352
>> pages in unavailable ranges
>> >     >       >     >       > [    0.000000] cma: Reserved 256 MiB at
>> 0x000000006e000000
>> >     >       >     >       > [    0.000000] psci: probing for conduit
>> method from DT.
>> >     >       >     >       > [    0.000000] psci: PSCIv1.1 detected in
>> firmware.
>> >     >       >     >       > [    0.000000] psci: Using standard PSCI
>> v0.2 function IDs
>> >     >       >     >       > [    0.000000] psci: Trusted OS migration
>> not required
>> >     >       >     >       > [    0.000000] psci: SMC Calling Convention v1.1
>> >     >       >     >       > [    0.000000] percpu: Embedded 16
>> pages/cpu s32792 r0 d32744 u65536
>> >     >       >     >       > [    0.000000] Detected VIPT I-cache on CPU0
>> >     >       >     >       > [    0.000000] CPU features: kernel page
>> table isolation forced ON by KASLR
>> >     >       >     >       > [    0.000000] CPU features: detected:
>> Kernel page table isolation (KPTI)
>> >     >       >     >       > [    0.000000] Built 1 zonelists, mobility
>> grouping on.  Total pages: 403845
>> >     >       >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
>> >     >       >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
>> >     >       >     >       > [    0.000000] Dentry cache hash table
>> entries: 262144 (order: 9, 2097152 bytes, linear)
>> >     >       >     >       > [    0.000000] Inode-cache hash table
>> entries: 131072 (order: 8, 1048576 bytes, linear)
>> >     >       >     >       > [    0.000000] mem auto-init: stack:off,
>> heap alloc:on, heap free:on
>> >     >       >     >       > [    0.000000] mem auto-init: clearing
>> system memory may take some time...
>> >     >       >     >       > [    0.000000] Memory: 1121936K/1641024K
>> available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K
>> >     >       init, 262K bss,
>> >     >       >     >       256944K reserved,
>> >     >       >     >       > 262144K cma-reserved)
>> >     >       >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
>> >     >       >     >       > [    0.000000] rcu: Hierarchical RCU
>> implementation.
>> >     >       >     >       > [    0.000000] rcu: RCU event tracing is
>> enabled.
>> >     >       >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
>> >     >       >     >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
>> >     >       >     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
>> >     >       >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64,
>> preallocated irqs: 0
>> >     >       >     >       > [    0.000000] Root IRQ handler:
>> gic_handle_irq
>> >     >       >     >       > [    0.000000] arch_timer: cp15 timer(s)
>> running at 100.00MHz (virt).
>> >     >       >     >       > [    0.000000] clocksource:
>> arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0,
>> >     >       max_idle_ns: 440795205315 ns
>> >     >       >     >       > [    0.000000] sched_clock: 56 bits at
>> 100MHz, resolution 10ns, wraps every 4398046511100ns
>> >     >       >     >       > [    0.000258] Console: colour dummy device 80x25
>> >     >       >     >       > [    0.310231] printk: console [hvc0]
>> enabled
>> >     >       >     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
>> >     >       >     >       > [    0.324851] pid_max: default: 32768
>> minimum: 301
>> >     >       >     >       > [    0.329706] LSM: Security Framework
>> initializing
>> >     >       >     >       > [    0.334204] Yama: becoming mindful.
>> >     >       >     >       > [    0.337865] Mount-cache hash table
>> entries: 4096 (order: 3, 32768 bytes, linear)
>> >     >       >     >       > [    0.345180] Mountpoint-cache hash table
>> entries: 4096 (order: 3, 32768 bytes, linear)
>> >     >       >     >       > [    0.354743] xen:grant_table: Grant
>> tables using version 1 layout
>> >     >       >     >       > [    0.359132] Grant table initialized
>> >     >       >     >       > [    0.362664] xen:events: Using FIFO-based ABI
>> >     >       >     >       > [    0.366993] Xen: initializing cpu0
>> >     >       >     >       > [    0.370515] rcu: Hierarchical SRCU
>> implementation.
>> >     >       >     >       > [    0.375930] smp: Bringing up secondary
>> CPUs ...
>> >     >       >     >       > (XEN) null.c:353: 1 <-- d0v1
>> >     >       >     >       > (XEN) d0v1: vGICD: unhandled word write
>> 0x000000ffffffff to ICACTIVER0
>> >     >       >     >       > [    0.382549] Detected VIPT I-cache on CPU1
>> >     >       >     >       > [    0.388712] Xen: initializing cpu1
>> >     >       >     >       > [    0.388743] CPU1: Booted secondary
>> processor 0x0000000001 [0x410fd034]
>> >     >       >     >       > [    0.388829] smp: Brought up 1 node, 2
>> CPUs
>> >     >       >     >       > [    0.406941] SMP: Total of 2 processors
>> activated.
>> >     >       >     >       > [    0.411698] CPU features: detected:
>> 32-bit EL0 Support
>> >     >       >     >       > [    0.416888] CPU features: detected:
>> CRC32 instructions
>> >     >       >     >       > [    0.422121] CPU: All CPU(s) started at
>> EL1
>> >     >       >     >       > [    0.426248] alternatives: patching
>> kernel code
>> >     >       >     >       > [    0.431424] devtmpfs: initialized
>> >     >       >     >       > [    0.441454] KASLR enabled
>> >     >       >     >       > [    0.441602] clocksource: jiffies: mask:
>> 0xffffffff max_cycles: 0xffffffff, max_idle_ns:
>> >     >       7645041785100000 ns
>> >     >       >     >       > [    0.448321] futex hash table entries:
>> 512 (order: 3, 32768 bytes, linear)
>> >     >       >     >       > [    0.496183] NET: Registered
>> PF_NETLINK/PF_ROUTE protocol family
>> >     >       >     >       > [    0.498277] DMA: preallocated 256 KiB
>> GFP_KERNEL pool for atomic allocations
>> >     >       >     >       > [    0.503772] DMA: preallocated 256 KiB
>> GFP_KERNEL|GFP_DMA pool for atomic allocations
>> >     >       >     >       > [    0.511610] DMA: preallocated 256 KiB
>> GFP_KERNEL|GFP_DMA32 pool for atomic allocations
>> >     >       >     >       > [    0.519478] audit: initializing netlink
>> subsys (disabled)
>> >     >       >     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
>> >     >       >     >       > [    0.529169] thermal_sys: Registered
>> thermal governor 'step_wise'
>> >     >       >     >       > [    0.533023] hw-breakpoint: found 6
>> breakpoint and 4 watchpoint registers.
>> >     >       >     >       > [    0.545608] ASID allocator initialised
>> with 32768 entries
>> >     >       >     >       > [    0.551030] xen:swiotlb_xen: Warning:
>> only able to allocate 4 MB for software IO TLB
>> >     >       >     >       > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
>> >     >       >     >       > [    0.583565] HugeTLB registered 1.00 GiB
>> page size, pre-allocated 0 pages
>> >     >       >     >       > [    0.584721] HugeTLB registered 32.0 MiB
>> page size, pre-allocated 0 pages
>> >     >       >     >       > [    0.591478] HugeTLB registered 2.00 MiB
>> page size, pre-allocated 0 pages
>> >     >       >     >       > [    0.598225] HugeTLB registered 64.0 KiB
>> page size, pre-allocated 0 pages
>> >     >       >     >       > [    0.636520] DRBG: Continuing without
>> Jitter RNG
>> >     >       >     >       > [    0.737187] raid6: neonx8   gen()  2143
>> MB/s
>> >     >       >     >       > [    0.805294] raid6: neonx8   xor()  1589
>> MB/s
>> >     >       >     >       > [    0.873406] raid6: neonx4   gen()  2177
>> MB/s
>> >     >       >     >       > [    0.941499] raid6: neonx4   xor()  1556
>> MB/s
>> >     >       >     >       > [    1.009612] raid6: neonx2   gen()  2072
>> MB/s
>> >     >       >     >       > [    1.077715] raid6: neonx2   xor()  1430
>> MB/s
>> >     >       >     >       > [    1.145834] raid6: neonx1   gen()  1769
>> MB/s
>> >     >       >     >       > [    1.213935] raid6: neonx1   xor()  1214
>> MB/s
>> >     >       >     >       > [    1.282046] raid6: int64x8  gen()  1366
>> MB/s
>> >     >       >     >       > [    1.350132] raid6: int64x8  xor()   773
>> MB/s
>> >     >       >     >       > [    1.418259] raid6: int64x4  gen()  1602
>> MB/s
>> >     >       >     >       > [    1.486349] raid6: int64x4  xor()   851
>> MB/s
>> >     >       >     >       > [    1.554464] raid6: int64x2  gen()  1396
>> MB/s
>> >     >       >     >       > [    1.622561] raid6: int64x2  xor()   744
>> MB/s
>> >     >       >     >       > [    1.690687] raid6: int64x1  gen()  1033
>> MB/s
>> >     >       >     >       > [    1.758770] raid6: int64x1  xor()   517
>> MB/s
>> >     >       >     >       > [    1.758809] raid6: using algorithm
>> neonx4 gen() 2177 MB/s
>> >     >       >     >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
>> >     >       >     >       > [    1.767957] raid6: using neon recovery
>> algorithm
>> >     >       >     >       > [    1.772824] xen:balloon: Initialising
>> balloon driver
>> >     >       >     >       > [    1.778021] iommu: Default domain type:
>> Translated
>> >     >       >     >       > [    1.782584] iommu: DMA domain TLB
>> invalidation policy: strict mode
>> >     >       >     >       > [    1.789149] SCSI subsystem initialized
>> >     >       >     >       > [    1.792820] usbcore: registered new
>> interface driver usbfs
>> >     >       >     >       > [    1.798254] usbcore: registered new
>> interface driver hub
>> >     >       >     >       > [    1.803626] usbcore: registered new
>> device driver usb
>> >     >       >     >       > [    1.808761] pps_core: LinuxPPS API ver.
>> 1 registered
>> >     >       >     >       > [    1.813716] pps_core: Software ver.
>> 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it <mailto:
>> giometti@linux.it>
>> >     >       <mailto:giometti@linux.it <mailto:giometti@linux.it>>>
>> >     >       >     >       > [    1.822903] PTP clock support registere=
d
>> >     >       >     >       > [    1.826893] EDAC MC: Ver: 3.0.0
>> >     >       >     >       > [    1.830375] zynqmp-ipi-mbox
>> mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
>> >     >       >     >       > [    1.838863] zynqmp-ipi-mbox
>> mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
>> >     >       >     >       > [    1.847356] zynqmp-ipi-mbox
>> mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
>> >     >       >     >       > [    1.855907] FPGA manager framework
>> >     >       >     >       > [    1.859952] clocksource: Switched to
>> clocksource arch_sys_counter
>> >     >       >     >       > [    1.871712] NET: Registered PF_INET
>> protocol family
>> >     >       >     >       > [    1.871838] IP idents hash table
>> entries: 32768 (order: 6, 262144 bytes, linear)
>> >     >       >     >       > [    1.879392] tcp_listen_portaddr_hash
>> hash table entries: 1024 (order: 2, 16384 bytes, linear)
>> >     >       >     >       > [    1.887078] Table-perturb hash table
>> entries: 65536 (order: 6, 262144 bytes, linear)
>> >     >       >     >       > [    1.894846] TCP established hash table
>> entries: 16384 (order: 5, 131072 bytes, linear)
>> >     >       >     >       > [    1.902900] TCP bind hash table entries=
:
>> 16384 (order: 6, 262144 bytes, linear)
>> >     >       >     >       > [    1.910350] TCP: Hash tables configured
>> (established 16384 bind 16384)
>> >     >       >     >       > [    1.916778] UDP hash table entries: 102=
4
>> (order: 3, 32768 bytes, linear)
>> >     >       >     >       > [    1.923509] UDP-Lite hash table entries=
:
>> 1024 (order: 3, 32768 bytes, linear)
>> >     >       >     >       > [    1.930759] NET: Registered
>> PF_UNIX/PF_LOCAL protocol family
>> >     >       >     >       > [    1.936834] RPC: Registered named UNIX
>> socket transport module.
>> >     >       >     >       > [    1.942342] RPC: Registered udp
>> transport module.
>> >     >       >     >       > [    1.947088] RPC: Registered tcp
>> transport module.
>> >     >       >     >       > [    1.951843] RPC: Registered tcp NFSv4.1
>> backchannel transport module.
>> >     >       >     >       > [    1.958334] PCI: CLS 0 bytes, default 6=
4
>> >     >       >     >       > [    1.962709] Trying to unpack rootfs
>> image as initramfs...
>> >     >       >     >       > [    1.977090] workingset:
>> timestamp_bits=3D62 max_order=3D19 bucket_order=3D0
>> >     >       >     >       > [    1.982863] Installing knfsd (copyright
>> (C) 1996 okir@monad.swb.de <mailto:okir@monad.swb.de> <mailto:
>> okir@monad.swb.de <mailto:okir@monad.swb.de>>).
>> >     >       >     >       > [    2.021045] NET: Registered PF_ALG
>> protocol family
>> >     >       >     >       > [    2.021122] xor: measuring software
>> checksum speed
>> >     >       >     >       > [    2.029347]    8regs           :  2366
>> MB/sec
>> >     >       >     >       > [    2.033081]    32regs          :  2802
>> MB/sec
>> >     >       >     >       > [    2.038223]    arm64_neon      :  2320
>> MB/sec
>> >     >       >     >       > [    2.038385] xor: using function: 32regs
>> (2802 MB/sec)
>> >     >       >     >       > [    2.043614] Block layer SCSI generic
>> (bsg) driver version 0.4 loaded (major 247)
>> >     >       >     >       > [    2.050959] io scheduler mq-deadline
>> registered
>> >     >       >     >       > [    2.055521] io scheduler kyber register=
ed
>> >     >       >     >       > [    2.068227] xen:xen_evtchn:
>> Event-channel device installed
>> >     >       >     >       > [    2.069281] Serial: 8250/16550 driver, =
4
>> ports, IRQ sharing disabled
>> >     >       >     >       > [    2.076190] cacheinfo: Unable to detect
>> cache hierarchy for CPU 0
>> >     >       >     >       > [    2.085548] brd: module loaded
>> >     >       >     >       > [    2.089290] loop: module loaded
>> >     >       >     >       > [    2.089341] Invalid max_queues (4), wil=
l
>> use default max: 2.
>> >     >       >     >       > [    2.094565] tun: Universal TUN/TAP
>> device driver, 1.6
>> >     >       >     >       > [    2.098655] xen_netfront: Initialising
>> Xen virtual ethernet driver
>> >     >       >     >       > [    2.104156] usbcore: registered new
>> interface driver rtl8150
>> >     >       >     >       > [    2.109813] usbcore: registered new
>> interface driver r8152
>> >     >       >     >       > [    2.115367] usbcore: registered new
>> interface driver asix
>> >     >       >     >       > [    2.120794] usbcore: registered new
>> interface driver ax88179_178a
>> >     >       >     >       > [    2.126934] usbcore: registered new
>> interface driver cdc_ether
>> >     >       >     >       > [    2.132816] usbcore: registered new
>> interface driver cdc_eem
>> >     >       >     >       > [    2.138527] usbcore: registered new
>> interface driver net1080
>> >     >       >     >       > [    2.144256] usbcore: registered new
>> interface driver cdc_subset
>> >     >       >     >       > [    2.150205] usbcore: registered new
>> interface driver zaurus
>> >     >       >     >       > [    2.155837] usbcore: registered new
>> interface driver cdc_ncm
>> >     >       >     >       > [    2.161550] usbcore: registered new
>> interface driver r8153_ecm
>> >     >       >     >       > [    2.168240] usbcore: registered new
>> interface driver cdc_acm
>> >     >       >     >       > [    2.173109] cdc_acm: USB Abstract
>> Control Model driver for USB modems and ISDN adapters
>> >     >       >     >       > [    2.181358] usbcore: registered new
>> interface driver uas
>> >     >       >     >       > [    2.186547] usbcore: registered new
>> interface driver usb-storage
>> >     >       >     >       > [    2.192643] usbcore: registered new
>> interface driver ftdi_sio
>> >     >       >     >       > [    2.198384] usbserial: USB Serial
>> support registered for FTDI USB Serial Device
>> >     >       >     >       > [    2.206118] udc-core: couldn't find an
>> available UDC - added [g_mass_storage] to list of pending
>> >     >       drivers
>> >     >       >     >       > [    2.215332] i2c_dev: i2c /dev entries
>> driver
>> >     >       >     >       > [    2.220467] xen_wdt xen_wdt: initialize=
d
>> (timeout=3D60s, nowayout=3D0)
>> >     >       >     >       > [    2.225923] device-mapper: uevent:
>> version 1.0.3
>> >     >       >     >       > [    2.230668] device-mapper: ioctl:
>> 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com <mailto:
>> dm-devel@redhat.com>
>> >     >       <mailto:dm-devel@redhat.com <mailto:dm-devel@redhat.com>>
>> >     >       >     >       > [    2.239315] EDAC MC0: Giving out device
>> to module 1 controller synps_ddr_controller: DEV synps_edac
>> >     >       (INTERRUPT)
>> >     >       >     >       > [    2.249405] EDAC DEVICE0: Giving out
>> device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV
>> >     >       >     >       ff960000.memory-controller (INTERRUPT)
>> >     >       >     >       > [    2.261719] sdhci: Secure Digital Host
>> Controller Interface driver
>> >     >       >     >       > [    2.267487] sdhci: Copyright(c) Pierre
>> Ossman
>> >     >       >     >       > [    2.271890] sdhci-pltfm: SDHCI platform
>> and OF driver helper
>> >     >       >     >       > [    2.278157] ledtrig-cpu: registered to
>> indicate activity on CPUs
>> >     >       >     >       > [    2.283816] zynqmp_firmware_probe
>> Platform Management API v1.1
>> >     >       >     >       > [    2.289554] zynqmp_firmware_probe
>> Trustzone version v1.0
>> >     >       >     >       > [    2.327875] securefw securefw: securefw
>> probed
>> >     >       >     >       > [    2.328324] alg: No test for
>> xilinx-zynqmp-aes (zynqmp-aes)
>> >     >       >     >       > [    2.332563] zynqmp_aes
>> firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
>> >     >       >     >       > [    2.341183] alg: No test for
>> xilinx-zynqmp-rsa (zynqmp-rsa)
>> >     >       >     >       > [    2.347667] remoteproc remoteproc0:
>> ff9a0000.rf5ss:r5f_0 is available
>> >     >       >     >       > [    2.353003] remoteproc remoteproc1:
>> ff9a0000.rf5ss:r5f_1 is available
>> >     >       >     >       > [    2.362605] fpga_manager fpga0: Xilinx
>> ZynqMP FPGA Manager registered
>> >     >       >     >       > [    2.366540] viper-xen-proxy
>> viper-xen-proxy: Viper Xen Proxy registered
>> >     >       >     >       > [    2.372525] viper-vdpp a4000000.vdpp:
>> Device Tree Probing
>> >     >       >     >       > [    2.377778] viper-vdpp a4000000.vdpp:
>> VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>> >     >       >     >       > [    2.386432] viper-vdpp a4000000.vdpp:
>> Unable to register tamper handler. Retrying...
>> >     >       >     >       > [    2.394094] viper-vdpp-net
>> a5000000.vdpp_net: Device Tree Probing
>> >     >       >     >       > [    2.399854] viper-vdpp-net
>> a5000000.vdpp_net: Device registered
>> >     >       >     >       > [    2.405931] viper-vdpp-stat
>> a8000000.vdpp_stat: Device Tree Probing
>> >     >       >     >       > [    2.412037] viper-vdpp-stat
>> a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
>> >     >       >     >       > [    2.420856] default preset
>> >     >       >     >       > [    2.423797] viper-vdpp-stat
>> a8000000.vdpp_stat: Device registered
>> >     >       >     >       > [    2.430054] viper-vdpp-rng
>> ac000000.vdpp_rng: Device Tree Probing
>> >     >       >     >       > [    2.435948] viper-vdpp-rng
>> ac000000.vdpp_rng: Device registered
>> >     >       >     >       > [    2.441976] vmcu driver init
>> >     >       >     >       > [    2.444922] VMCU: : (240:0) registered
>> >     >       >     >       > [    2.444956] In K81 Updater init
>> >     >       >     >       > [    2.449003] pktgen: Packet Generator fo=
r
>> packet performance testing. Version: 2.75
>> >     >       >     >       > [    2.468833] Initializing XFRM netlink
>> socket
>> >     >       >     >       > [    2.468902] NET: Registered PF_PACKET
>> protocol family
>> >     >       >     >       > [    2.472729] Bridge firewalling register=
ed
>> >     >       >     >       > [    2.476785] 8021q: 802.1Q VLAN Support
>> v1.8
>> >     >       >     >       > [    2.481341] registered taskstats versio=
n
>> 1
>> >     >       >     >       > [    2.486394] Btrfs loaded,
>> crc32c=3Dcrc32c-generic, zoned=3Dno, fsverity=3Dno
>> >     >       >     >       > [    2.503145] ff010000.serial: ttyPS1 at
>> MMIO 0xff010000 (irq =3D 36, base_baud =3D 6250000) is a xuartps
>> >     >       >     >       > [    2.507103] of-fpga-region fpga-full:
>> FPGA Region probed
>> >     >       >     >       > [    2.512986] xilinx-zynqmp-dma
>> fd500000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.520267] xilinx-zynqmp-dma
>> fd510000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.528239] xilinx-zynqmp-dma
>> fd520000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.536152] xilinx-zynqmp-dma
>> fd530000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.544153] xilinx-zynqmp-dma
>> fd540000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.552127] xilinx-zynqmp-dma
>> fd550000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.560178] xilinx-zynqmp-dma
>> ffa80000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.567987] xilinx-zynqmp-dma
>> ffa90000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.576018] xilinx-zynqmp-dma
>> ffaa0000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.583889] xilinx-zynqmp-dma
>> ffab0000.dma-controller: ZynqMP DMA driver Probe success
>> >     >       >     >       > [    2.946379] spi-nor spi0.0: mt25qu512a
>> (131072 Kbytes)
>> >     >       >     >       > [    2.946467] 2 fixed-partitions
>> partitions found on MTD device spi0.0
>> >     >       >     >       > [    2.952393] Creating 2 MTD partitions o=
n
>> "spi0.0":
>> >     >       >     >       > [    2.957231]
>> 0x000004000000-0x000008000000 : "bank A"
>> >     >       >     >       > [    2.963332]
>> 0x000000000000-0x000004000000 : "bank B"
>> >     >       >     >       > [    2.968694] macb ff0b0000.ethernet: Not
>> enabling partial store and forward
>> >     >       >     >       > [    2.975333] macb ff0b0000.ethernet eth0=
:
>> Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25
>> >     >       (18:41:fe:0f:ff:02)
>> >     >       >     >       > [    2.984472] macb ff0c0000.ethernet: Not
>> enabling partial store and forward
>> >     >       >     >       > [    2.992144] macb ff0c0000.ethernet eth1=
:
>> Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26
>> >     >       (18:41:fe:0f:ff:03)
>> >     >       >     >       > [    3.001043] viper_enet viper_enet: Vipe=
r
>> power GPIOs initialised
>> >     >       >     >       > [    3.007313] viper_enet viper_enet vnet0
>> (uninitialized): Validate interface QSGMII
>> >     >       >     >       > [    3.014914] viper_enet viper_enet vnet1
>> (uninitialized): Validate interface QSGMII
>> >     >       >     >       > [    3.022138] viper_enet viper_enet vnet1
>> (uninitialized): Validate interface type 18
>> >     >       >     >       > [    3.030274] viper_enet viper_enet vnet2
>> (uninitialized): Validate interface QSGMII
>> >     >       >     >       > [    3.037785] viper_enet viper_enet vnet3
>> (uninitialized): Validate interface QSGMII
>> >     >       >     >       > [    3.045301] viper_enet viper_enet: Vipe=
r
>> enet registered
>> >     >       >     >       > [    3.050958] xilinx-axipmon
>> ffa00000.perf-monitor: Probed Xilinx APM
>> >     >       >     >       > [    3.057135] xilinx-axipmon
>> fd0b0000.perf-monitor: Probed Xilinx APM
>> >     >       >     >       > [    3.063538] xilinx-axipmon
>> fd490000.perf-monitor: Probed Xilinx APM
>> >     >       >     >       > [    3.069920] xilinx-axipmon
>> ffa10000.perf-monitor: Probed Xilinx APM
>> >     >       >     >       > [    3.097729] si70xx: probe of 2-0040
>> failed with error -5
>> >     >       >     >       > [    3.098042] cdns-wdt fd4d0000.watchdog:
>> Xilinx Watchdog Timer with timeout 60s
>> >     >       >     >       > [    3.105111] cdns-wdt ff150000.watchdog:
>> Xilinx Watchdog Timer with timeout 10s
>> >     >       >     >       > [    3.112457] viper-tamper viper-tamper:
>> Device registered
>> >     >       >     >       > [    3.117593] active_bank active_bank:
>> boot bank: 1
>> >     >       >     >       > [    3.122184] active_bank active_bank:
>> boot mode: (0x02) qspi32
>> >     >       >     >       > [    3.128247] viper-vdpp a4000000.vdpp:
>> Device Tree Probing
>> >     >       >     >       > [    3.133439] viper-vdpp a4000000.vdpp:
>> VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>> >     >       >     >       > [    3.142151] viper-vdpp a4000000.vdpp:
>> Tamper handler registered
>> >     >       >     >       > [    3.147438] viper-vdpp a4000000.vdpp:
>> Device registered
>> >     >       >     >       > [    3.153007] lpc55_l2 spi1.0: registered
>> handler for protocol 0
>> >     >       >     >       > [    3.158582] lpc55_user lpc55_user: The
>> major number for your device is 236
>> >     >       >     >       > [    3.165976] lpc55_l2 spi1.0: registered
>> handler for protocol 1
>> >     >       >     >       > [    3.181999] rtc-lpc55 rtc_lpc55:
>> lpc55_rtc_get_time: bad result: 1
>> >     >       >     >       > [    3.182856] rtc-lpc55 rtc_lpc55:
>> registered as rtc0
>> >     >       >     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu
>> still not ready?
>> >     >       >     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu
>> still not ready?
>> >     >       >     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu
>> still not ready?
>> >     >       >     >       > [    3.202932] mmc0: SDHCI controller on
>> ff160000.mmc [ff160000.mmc] using ADMA 64-bit
>> >     >       >     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu
>> still not ready?
>> >     >       >     >       > [    3.215694] lpc55_l2 spi1.0: rx error:
>> -110
>> >     >       >     >       > [    3.284438] mmc0: new HS200 MMC card at
>> address 0001
>> >     >       >     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G
>> 14.6 GiB
>> >     >       >     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6
>> p7 p8
>> >     >       >     >       > [    3.293915] mmcblk0boot0: mmc0:0001
>> SEM16G 4.00 MiB
>> >     >       >     >       > [    3.299054] mmcblk0boot1: mmc0:0001
>> SEM16G 4.00 MiB
>> >     >       >     >       > [    3.303905] mmcblk0rpmb: mmc0:0001
>> SEM16G 4.00 MiB, chardev (244:0)
>> >     >       >     >       > [    3.582676] rtc-lpc55 rtc_lpc55:
>> lpc55_rtc_get_time: bad result: 1
>> >     >       >     >       > [    3.583332] rtc-lpc55 rtc_lpc55:
>> hctosys: unable to read the hardware clock
>> >     >       >     >       > [    3.591252] cdns-i2c ff020000.i2c:
>> recovery information complete
>> >     >       >     >       > [    3.597085] at24 0-0050: supply vcc not
>> found, using dummy regulator
>> >     >       >     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu
>> still not ready?
>> >     >       >     >       > [    3.608093] at24 0-0050: 256 byte spd
>> EEPROM, read-only
>> >     >       >     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu
>> still not ready?
>> >     >       >     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu
>> still not ready?
>> >     >       >     >       > [    3.624224] rtc-rv3028 0-0052:
>> registered as rtc1
>> >     >       >     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu
>> still not ready?
>> >     >       >     >       > [    3.633253] lpc55_l2 spi1.0: rx error:
>> -110
>> >     >       >     >       > [    3.639104] k81_bootloader 0-0010: prob=
e
>> >     >       >     >       > [    3.641628] VMCU: : (235:0) registered
>> >     >       >     >       > [    3.641635] k81_bootloader 0-0010: prob=
e
>> completed
>> >     >       >     >       > [    3.668346] cdns-i2c ff020000.i2c: 400
>> kHz mmio ff020000 irq 28
>> >     >       >     >       > [    3.669154] cdns-i2c ff030000.i2c:
>> recovery information complete
>> >     >       >     >       > [    3.675412] lm75 1-0048: supply vs not
>> found, using dummy regulator
>> >     >       >     >       > [    3.682920] lm75 1-0048: hwmon1: sensor
>> 'tmp112'
>> >     >       >     >       > [    3.686548] i2c i2c-1: Added multiplexe=
d
>> i2c bus 3
>> >     >       >     >       > [    3.690795] i2c i2c-1: Added multiplexe=
d
>> i2c bus 4
>> >     >       >     >       > [    3.695629] i2c i2c-1: Added multiplexe=
d
>> i2c bus 5
>> >     >       >     >       > [    3.700492] i2c i2c-1: Added multiplexe=
d
>> i2c bus 6
>> >     >       >     >       > [    3.705157] pca954x 1-0070: registered =
4
>> multiplexed busses for I2C switch pca9546
>> >     >       >     >       > [    3.713049] at24 1-0054: supply vcc not
>> found, using dummy regulator
>> >     >       >     >       > [    3.720067] at24 1-0054: 1024 byte 24c0=
8
>> EEPROM, read-only
>> >     >       >     >       > [    3.724761] cdns-i2c ff030000.i2c: 100
>> kHz mmio ff030000 irq 29
>> >     >       >     >       > [    3.731272] sfp viper_enet:sfp-eth1:
>> Host maximum power 2.0W
>> >     >       >     >       > [    3.737549] sfp_register_socket: got
>> sfp_bus
>> >     >       >     >       > [    3.740709] sfp_register_socket:
>> register sfp_bus
>> >     >       >     >       > [    3.745459] sfp_register_bus: ops ok!
>> >     >       >     >       > [    3.749179] sfp_register_bus: Try to
>> attach
>> >     >       >     >       > [    3.753419] sfp_register_bus: Attach
>> succeeded
>> >     >       >     >       > [    3.757914] sfp_register_bus: upstream
>> ops attach
>> >     >       >     >       > [    3.762677] sfp_register_bus: Bus
>> registered
>> >     >       >     >       > [    3.766999] sfp_register_socket:
>> register sfp_bus succeeded
>> >     >       >     >       > [    3.775870] of_cfs_init
>> >     >       >     >       > [    3.776000] of_cfs_init: OK
>> >     >       >     >       > [    3.778211] clk: Not disabling unused
>> clocks
>> >     >       >     >       > [   11.278477] Freeing initrd memory:
>> 206056K
>> >     >       >     >       > [   11.279406] Freeing unused kernel
>> memory: 1536K
>> >     >       >     >       > [   11.314006] Checked W+X mappings:
>> passed, no W+X pages found
>> >     >       >     >       > [   11.314142] Run /init as init process
>> >     >       >     >       > INIT: version 3.01 booting
>> >     >       >     >       > fsck (busybox 1.35.0)
>> >     >       >     >       > /dev/mmcblk0p1: clean, 12/102400 files,
>> 238162/409600 blocks
>> >     >       >     >       > /dev/mmcblk0p2: clean, 12/102400 files,
>> 171972/409600 blocks
>> >     >       >     >       > /dev/mmcblk0p3 was not cleanly unmounted,
>> check forced.
>> >     >       >     >       > /dev/mmcblk0p3: 20/4096 files (0.0%
>> non-contiguous), 663/16384 blocks
>> >     >       >     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounte=
d
>> filesystem without journal. Opts: (null). Quota mode:
>> >     >       disabled.
>> >     >       >     >       > Starting random number generator daemon.
>> >     >       >     >       > [   11.580662] random: crng init done
>> >     >       >     >       > Starting udev
>> >     >       >     >       > [   11.613159] udevd[142]: starting versio=
n
>> 3.2.10
>> >     >       >     >       > [   11.620385] udevd[143]: starting
>> eudev-3.2.10
>> >     >       >     >       > [   11.704481] macb ff0b0000.ethernet
>> control_red: renamed from eth0
>> >     >       >     >       > [   11.720264] macb ff0c0000.ethernet
>> control_black: renamed from eth1
>> >     >       >     >       > [   12.063396] ip_local_port_range: prefer
>> different parity for start/end values.
>> >     >       >     >       > [   12.084801] rtc-lpc55 rtc_lpc55:
>> lpc55_rtc_get_time: bad result: 1
>> >     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>> >     >       >     >       > Mon Feb 27 08:40:53 UTC 2023
>> >     >       >     >       > [   12.115309] rtc-lpc55 rtc_lpc55:
>> lpc55_rtc_set_time: bad result
>> >     >       >     >       > hwclock: RTC_SET_TIME: Invalid exchange
>> >     >       >     >       > [   12.131027] rtc-lpc55 rtc_lpc55:
>> lpc55_rtc_get_time: bad result: 1
>> >     >       >     >       > Starting mcud
>> >     >       >     >       > INIT: Entering runlevel: 5
>> >     >       >     >       > Configuring network interfaces... done.
>> >     >       >     >       > resetting network interface
>> >     >       >     >       > [   12.718295] macb ff0b0000.ethernet
>> control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx
>> >     >       PCS/PMA PHY] (irq=3DPOLL)
>> >     >       >     >       > [   12.723919] macb ff0b0000.ethernet
>> control_red: configuring for phy/gmii link mode
>> >     >       >     >       > [   12.732151] pps pps0: new PPS source pt=
p0
>> >     >       >     >       > [   12.735563] macb ff0b0000.ethernet:
>> gem-ptp-timer ptp clock registered.
>> >     >       >     >       > [   12.745724] macb ff0c0000.ethernet
>> control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx
>> >     >       PCS/PMA PHY]
>> >     >       >     >       (irq=3DPOLL)
>> >     >       >     >       > [   12.753469] macb ff0c0000.ethernet
>> control_black: configuring for phy/gmii link mode
>> >     >       >     >       > [   12.761804] pps pps1: new PPS source pt=
p1
>> >     >       >     >       > [   12.765398] macb ff0c0000.ethernet:
>> gem-ptp-timer ptp clock registered.
>> >     >       >     >       > Auto-negotiation: off
>> >     >       >     >       > Auto-negotiation: off
>> >     >       >     >       > [   16.828151] macb ff0b0000.ethernet
>> control_red: unable to generate target frequency: 125000000 Hz
>> >     >       >     >       > [   16.834553] macb ff0b0000.ethernet
>> control_red: Link is Up - 1Gbps/Full - flow control off
>> >     >       >     >       > [   16.860552] macb ff0c0000.ethernet
>> control_black: unable to generate target frequency: 125000000 Hz
>> >     >       >     >       > [   16.867052] macb ff0c0000.ethernet
>> control_black: Link is Up - 1Gbps/Full - flow control off
>> >     >       >     >       > Starting Failsafe Secure Shell server in
>> port 2222: sshd
>> >     >       >     >       > done.
>> >     >       >     >       > Starting rpcbind daemon...done.
>> >     >       >     >       >
>> >     >       >     >       > [   17.093019] rtc-lpc55 rtc_lpc55:
>> lpc55_rtc_get_time: bad result: 1
>> >     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>> >     >       >     >       > Starting State Manager Service
>> >     >       >     >       > Start state-manager restarter...
>> >     >       >     >       > (XEN) d0v1 Forwarding AES operation:
>> 3254779951
>> >     >       >     >       > Starting /usr/sbin/xenstored....[
>> 17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa
>> >     >       devid 1 transid 744
>> >     >       >     >       /dev/dm-0
>> >     >       >     >       > scanned by udevd (385)
>> >     >       >     >       > [   17.349933] BTRFS info (device dm-0):
>> disk space caching is enabled
>> >     >       >     >       > [   17.350670] BTRFS info (device dm-0):
>> has skinny extents
>> >     >       >     >       > [   17.364384] BTRFS info (device dm-0):
>> enabling ssd optimizations
>> >     >       >     >       > [   17.830462] BTRFS: device fsid
>> 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6
>> >     >       /dev/mapper/client_prov scanned by
>> >     >       >     >       mkfs.btrfs
>> >     >       >     >       > (526)
>> >     >       >     >       > [   17.872699] BTRFS info (device dm-1):
>> using free space tree
>> >     >       >     >       > [   17.872771] BTRFS info (device dm-1):
>> has skinny extents
>> >     >       >     >       > [   17.878114] BTRFS info (device dm-1):
>> flagging fs with big metadata feature
>> >     >       >     >       > [   17.894289] BTRFS info (device dm-1):
>> enabling ssd optimizations
>> >     >       >     >       > [   17.895695] BTRFS info (device dm-1):
>> checking UUID tree
>> >     >       >     >       >
>> >     >       >     >       > Setting domain 0 name, domid and JSON
>> config...
>> >     >       >     >       > Done setting up Dom0
>> >     >       >     >       > Starting xenconsoled...
>> >     >       >     >       > Starting QEMU as disk backend for dom0
>> >     >       >     >       > Starting domain watchdog daemon:
>> xenwatchdogd startup
>> >     >       >     >       >
>> >     >       >     >       > [   18.408647] BTRFS: device fsid
>> 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6
>> >     >       /dev/mapper/client_config scanned by
>> >     >       >     >       mkfs.btrfs
>> >     >       >     >       > (574)
>> >     >       >     >       > [done]
>> >     >       >     >       > [   18.465552] BTRFS info (device dm-2):
>> using free space tree
>> >     >       >     >       > [   18.465629] BTRFS info (device dm-2):
>> has skinny extents
>> >     >       >     >       > [   18.471002] BTRFS info (device dm-2):
>> flagging fs with big metadata feature
>> >     >       >     >       > Starting crond: [   18.482371] BTRFS info
>> (device dm-2): enabling ssd optimizations
>> >     >       >     >       > [   18.486659] BTRFS info (device dm-2):
>> checking UUID tree
>> >     >       >     >       > OK
>> >     >       >     >       > starting rsyslogd ... Log partition ready
>> after 0 poll loops
>> >     >       >     >       > done
>> >     >       >     >       > rsyslogd: cannot connect to 172.18.0.1:514
>> <http://172.18.0.1:514> <http://172.18.0.1:514 <http://172.18.0.1:514>>:
>> Network is unreachable [v8.2208.0 try
>> >     >       https://www.rsyslog.com/e/2027 <
>> https://www.rsyslog.com/e/2027> <https://www.rsyslog.com/e/2027 <
>> https://www.rsyslog.com/e/2027>> ]
>> >     >       >     >       > [   18.670637] BTRFS: device fsid
>> 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3
>> >     >       scanned by udevd (518)
>> >     >       >     >       >
>> >     >       >     >       > Please insert USB token and enter your rol=
e
>> in login prompt.
>> >     >       >     >       >
>> >     >       >     >       > login:
>> >     >       >     >       >
>> >     >       >     >       > Regards,
>> >     >       >     >       > O.
>> >     >       >     >       >
>> >     >       >     >       >
> Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org>:
> > Hi Oleg,
> >
> > Here is the issue from your logs:
> >
> >     SError Interrupt on CPU0, code 0xbe000000 -- SError
> >
> > SErrors are special signals to notify software of serious hardware
> > errors. Something is going very wrong. Defective hardware is one
> > possibility. Another is software accessing address ranges that it is
> > not supposed to; that sometimes causes SErrors.
> >
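The syndrome value quoted above can be unpacked by hand. A minimal sketch, assuming the 0xbe000000 the kernel prints is the raw ESR_ELx syndrome and using the architectural field layout (EC = bits[31:26], IL = bit[25], ISS = bits[24:0]); the helper name is illustrative, not from any of the logs:

```python
# Decode an arm64 ESR_ELx syndrome value such as the 0xbe000000 printed
# with "SError Interrupt on CPU0" above.
# Field layout per the Arm architecture: EC = bits[31:26], IL = bit[25],
# ISS = bits[24:0]. EC 0x2F is the "SError interrupt" class.
def decode_esr(esr: int) -> dict:
    ec = (esr >> 26) & 0x3F
    il = (esr >> 25) & 1
    iss = esr & 0x1FFFFFF
    return {
        "EC": hex(ec),
        "IL": il,
        "ISS": hex(iss),
        "is_serror": ec == 0x2F,
        # An all-zero ISS carries no further detail ("uncategorized"),
        # so the cause has to be found elsewhere (RAS nodes, bus
        # monitors, interconnect error registers).
        "uncategorized": iss == 0,
    }

print(decode_esr(0xBE000000))
```

For 0xbe000000 this gives EC 0x2f (an SError) with ISS 0, i.e. the syndrome itself is uncategorized, which is consistent with the advice above: the fault source has to be tracked down through the platform's own error reporting rather than the CPU exception syndrome.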
> > Cheers,
> >
> > Stefano
> >
> > On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> >
>> >     >       >     >       >       > Hello,
>> >     >       >     >       >       >
>> >     >       >     >       >       > Thanks guys.
>> >     >       >     >       >       > I found out where the problem was.
>> >     >       >     >       >       > Now dom0 booted more. But I have a
>> new one.
>> >     >       >     >       >       > This is a kernel panic during Dom0
>> loading.
>> >     >       >     >       >       > Maybe someone is able to suggest
>> something ?
>> >     >       >     >       >       >
>> >     >       >     >       >       > Regards,
>> >     >       >     >       >       > O.
>> >     >       >     >       >       >
> > [    3.771362] sfp_register_bus: upstream ops attach
> > [    3.776119] sfp_register_bus: Bus registered
> > [    3.780459] sfp_register_socket: register sfp_bus succeeded
> > [    3.789399] of_cfs_init
> > [    3.789499] of_cfs_init: OK
> > [    3.791685] clk: Not disabling unused clocks
> > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > [   11.010393] Workqueue: events_unbound async_run_entry_fn
> > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > [   11.010422] pc : simple_write_end+0xd0/0x130
> > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> > [   11.010438] sp : ffffffc00809b910
> > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> > [   11.010548] Workqueue: events_unbound async_run_entry_fn
> > [   11.010556] Call trace:
> > [   11.010558]  dump_backtrace+0x0/0x1c4
> > [   11.010567]  show_stack+0x18/0x2c
> > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> > [   11.010583]  dump_stack+0x18/0x34
> > [   11.010588]  panic+0x14c/0x2f8
> > [   11.010597]  print_tainted+0x0/0xb0
> > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> > [   11.010614]  do_serror+0x28/0x60
> > [   11.010621]  el1h_64_error_handler+0x30/0x50
> > [   11.010628]  el1h_64_error+0x78/0x7c
> > [   11.010633]  simple_write_end+0xd0/0x130
> > [   11.010639]  generic_perform_write+0x118/0x1e0
> > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> > [   11.010650]  generic_file_write_iter+0x78/0xd0
> > [   11.010656]  __kernel_write+0xfc/0x2ac
> > [   11.010665]  kernel_write+0x88/0x160
> > [   11.010673]  xwrite+0x44/0x94
> > [   11.010680]  do_copy+0xa8/0x104
> > [   11.010686]  write_buffer+0x38/0x58
> > [   11.010692]  flush_buffer+0x4c/0xbc
> > [   11.010698]  __gunzip+0x280/0x310
> > [   11.010704]  gunzip+0x1c/0x28
> > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> > [   11.010715]  do_populate_rootfs+0x80/0x164
> > [   11.010722]  async_run_entry_fn+0x48/0x164
> > [   11.010728]  process_one_work+0x1e4/0x3a0
> > [   11.010736]  worker_thread+0x7c/0x4c0
> > [   11.010743]  kthread+0x120/0x130
> > [   11.010750]  ret_from_fork+0x10/0x20
> > [   11.010757] SMP: stopping secondary CPUs
> > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> > [   11.010788] PHYS_OFFSET: 0x0
> > [   11.010790] CPU features: 0x00000401,00000842
> > [   11.010795] Memory Limit: none
> > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> >
> > On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
> > > Hi Oleg,
> > >
> > > On 21/04/2023 14:49, Oleg Nikitenko wrote:
> > > > Hello Michal,
> > > >
> > > > I was not able to enable earlyprintk in Xen for now.
> > > > I decided to take another way.
> > > > This is the complete Xen command line that I found out:
> > > >
> > > > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> > > Yes, adding a printk() in Xen was also a good idea.
> > >
> > > > So you are absolutely right about the command line.
> > > > Now I am going to find out why Xen did not get the correct parameters from the device tree.
> > > Maybe you will find this document helpful:
> > > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> > >
> > > ~Michal
> > >
> > > > Regards,
> > > > Oleg
> > > >
> > > > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > >
> > > > > On 21/04/2023 10:04, Oleg Nikitenko wrote:
> > > > > > Hello Michal,
> > > > > >
> > > > > > Yes, I use yocto.
> > > > > >
> > > > > > Yesterday all day long I tried to follow your suggestions.
> > > > > > I faced a problem.
> > > > > > Manually in the xen config build file I pasted the strings:
> > > > > In the .config file, or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
> > > > > You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
> > > > >
> > > > > > CONFIG_EARLY_PRINTK
> > > > > > CONFIG_EARLY_PRINTK_ZYNQMP
> > > > > > CONFIG_EARLY_UART_CHOICE_CADENCE
> > > > > I hope you added =y to them.
> > > > >
> > > > > Anyway, you have at least the following solutions:
> > > > > 1) Run "bitbake xen -c menuconfig" to properly set early printk
> > > > > 2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
> > > > > 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
> > > > >    CONFIG_EARLY_PRINTK_ZYNQMP=y
> > > > >
> > > > > ~Michal
> > > > >
> > > > > > The host hangs at build time.
> > > > > > Maybe I did not set something in the config build file ?
> > > > > >
> > > > > > Regards,
> > > > > > Oleg
> > > > > >
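As an aside, Michal's point about listing additional Kconfig options in a Yocto file added to SRC_URI is usually done with a config fragment plus a bbappend, rather than hand-editing .config. A minimal sketch follows; the layer path, fragment name, and the assumption that the xen recipe merges `.cfg` fragments are illustrative, not taken from this thread:

```shell
# Illustrative layer layout only -- directory and file names are assumptions.
mkdir -p meta-mylayer/recipes-extended/xen/files

# Kconfig fragment with the early printk options (note the explicit "=y").
cat > meta-mylayer/recipes-extended/xen/files/early-printk.cfg <<'EOF'
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
EOF

# bbappend that makes the fragment visible to the xen recipe.
cat > meta-mylayer/recipes-extended/xen/xen_%.bbappend <<'EOF'
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://early-printk.cfg"
EOF

# All three options are present and enabled in the fragment.
grep -c '=y' meta-mylayer/recipes-extended/xen/files/early-printk.cfg
```

If the recipe in use does not merge fragments, option 1 from the list above (`bitbake xen -c menuconfig`) is the safer route.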
> > > > > > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > >
> > > > > > > Thanks Michal,
> > > > > > >
> > > > > > > You gave me an idea.
> > > > > > > I am going to try it today.
> > > > > > >
> > > > > > > Regards,
> > > > > > > O.
> > > > > > >
> > > > > > > On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > > >
> > > > > > > > Thanks Stefano.
> > > > > > > >
> > > > > > > > I am going to do it today.
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > O.
> > > > > > > >
> > > > > > > > On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > > On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > > > Hi Michal,
> > > > > > > > > >
> > > > > > > > > > I corrected xen's command line.
> > > > > > > > > > Now it is
> > > > > > > > > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> > > > > > > > > > timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> > > > > > > > >
> > > > > > > > > 4 colors is way too many for Xen, just do xen_colors=0-0. There is no
> > > > > > > > > advantage in using more than 1 color for Xen.
> > > > > > > > >
> > > > > > > > > 4 colors is too few for dom0 if you are giving 1600M of memory to Dom0.
> > > > > > > > > Each color is 256M. For 1600M you should give at least 7 colors. Try:
> > > > > > > > >
> > > > > > > > >     xen_colors=0-0 dom0_colors=1-8
> > > > > > > > >
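Stefano's sizing rule above can be sanity-checked with quick shell arithmetic: with each cache color covering 256 MiB (16 colors on this platform, per the boot log), a 1600 MiB dom0 needs ceil(1600/256) colors:

```shell
# Colors needed to back dom0_mem with colored pages, using the thread's
# figure of 256 MiB per color on this board.
dom0_mem_mib=1600
color_size_mib=256
# Integer ceiling division.
colors_needed=$(( (dom0_mem_mib + color_size_mib - 1) / color_size_mib ))
echo "$colors_needed"
```

This prints 7, so the suggested dom0_colors=1-8 (8 colors) covers 1600M with one color of headroom.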
> > > > > > > > > > Unfortunately the result was the same.
> > > > > > > > > >
> > > > > > > > > > (XEN)  - Dom0 mode: Relaxed
> > > > > > > > > > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> > > > > > > > > > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> > > > > > > > > > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> > > > > > > > > > (XEN) Coloring general information
> > > > > > > > > > (XEN) Way size: 64kB
> > > > > > > > > > (XEN) Max. number of colors available: 16
> > > > > > > > > > (XEN) Xen color(s): [ 0 ]
> > > > > > > > > > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> > > > > > > > > > (XEN) Color array allocation failed for dom0
> > > > > > > > > > (XEN)
> > > > > > > > > > (XEN) ****************************************
> > > > > > > > > > (XEN) Panic on CPU 0:
> > > > > > > > > > (XEN) Error creating domain 0
> > > > > > > > > > (XEN) ****************************************
> > > > > > > > > > (XEN)
> > > > > > > > > > (XEN) Reboot in five seconds...
> > > > > > > > > >
> > > > > > > > > > I am going to find out how the command line arguments are passed and parsed.
> > > > > > > > > >
> > > > > > > > > > Regards,
> > > > > > > > > > Oleg
> > > > > > > > > >
> > > > > > > > > > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > > > > > > Hi Michal,
> > > > > > > > > > >
> > > > > > > > > > > You put my nose right into the problem. Thank you.
> > > > > > > > > > > I am going to use your point.
> > > > > > > > > > > Let's see what happens.
> > > > > > > > > > >
> > > > > > > > > > > Regards,
> > > > > > > > > > > Oleg
> > > > > > > > > > >
> > > > > > > > > > > On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > > > > > > > > > Hi Oleg,
> > > > > > > > > > > >
> > > > > > > > > > > > On 19/04/2023 09:03, Oleg Nikitenko wrote:
> > > > > > > > > > > > > Hello Stefano,
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thanks for the clarification.
> > > > > > > > > > > > > My company uses yocto for image generation.
> > > > > > > > > > > > > What kind of information do you need in order to advise me in this case ?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Maybe the modules' sizes/addresses which were mentioned by @Julien Grall <julien@xen.org> ?
> > > > > > > > > > > >
> > > > > > > > > > > > Sorry for jumping into the discussion, but FWICS the Xen command line you provided
> > > > > > > > > > > > seems not to be the one Xen booted with. The error you are observing is most likely
> > > > > > > > > > > > due to the dom0 colors configuration not being specified (i.e. lack of the
> > > > > > > > > > > > dom0_colors=<> parameter). Although in the command line you provided this parameter
> > > > > > > > > > > > is set, I strongly doubt that this is the actual command line in use.
> > > > > > > > > > > >
> > > > > > > > > > > > You wrote:
> > > > > > > > > > > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2
> > > > > > > > > > > > dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536
> > > > > > > > > > > > xen_colors=0-3 dom0_colors=4-7";
> > > > > > > > > > > >
> > > > > > > > > > > > but:
> > > > > > > > > > > > 1) way_szize has a typo
> > > > > > > > > > > > 2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
> > > > > > > > > > > >    (XEN) Xen color(s): [ 0 ]
> > > > > > > > > > > >
> > > > > > > > > > > > This makes me believe that no colors configuration actually ended up in the command
> > > > > > > > > > > > line that Xen booted with. A single color for Xen is the "default if not specified",
> > > > > > > > > > > > and the way size was probably calculated by asking the HW.
> > > > > > > > > > > >
> > > > > > > > > > > > So I would suggest to first cross-check the command line in use.
> > > > > > > > > > > >
> > > > > > > > > > > > ~Michal
> > > > > > > > > > > >
> > > > > > > > > > > > > Regards,
> > > > > > > > > > > > > Oleg
> > > > > > > > > > > > >
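Putting Michal's findings together (the way_szize typo, and the observation that the coloring arguments never reached Xen), a corrected chosen node would look roughly as below. This is a sketch only: it follows the property names from docs/misc/arm/device-tree/booting.txt and folds in Stefano's color suggestion from later in the thread, so it is not a verified configuration for this board.

```dts
/* Sketch of a corrected device-tree fragment for the Xilinx coloring setup. */
chosen {
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-0 dom0_colors=1-8";
};
```

If the boot log still shows "Xen color(s): [ 0 ]" with default way size after this change, the bootloader is likely passing a different /chosen node than the one being edited.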
> > > > > > > > > > > > > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > > > > > > > > Hi Julien,
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >> This feature has not been merged in Xen upstream yet
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > would assume that upstream + the series on the ML [1] work
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Please clarify this point.
> > > > > > > > > > > > > > > Because the two thoughts are controversial.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Hi Oleg,
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > As Julien wrote, there is nothing controversial. As you are aware,
> > > > > > > > > > > > > > Xilinx maintains a separate Xen tree specific for Xilinx here:
> > > > > > > > > > > > > > https://github.com/xilinx/xen
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > and the branch you are using (xlnx_rebase_4.16) comes from there.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Instead, the upstream Xen tree lives here:
> > > > > > > > > > > > > > https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>> >     >       >     >       >       >       <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>
>> >     >       <https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>> <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>
>> >     >       <https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
>> >     >       >     >       >       >       <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>
>> >     >       <https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>> <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>
>> >     >       <https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
>> >     >       >     >       >       >       <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>
>> >     >       <https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
>> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>>
>> >     >       >     >       >       >       >     >             >       =
>
>> >     >       >     >       >       >       >     >             >       =
>
>> >     >       >     >       >       >       >     >             >
>>  >     The Cache Coloring feature that you are trying to
>> >     >       configure is present
>> >     >       >     >       >       >       >     >             >
>>  >     in xlnx_rebase_4.16, but not yet present upstream (there
>> >     >       is an
>> >     >       >     >       >       >       >     >             >
>>  >     outstanding patch series to add cache coloring to Xen
>> >     >       upstream but it
>> >     >       >     >       >       >       >     >             >
>>  >     hasn't been merged yet.)
>> >     >       >     >       >       >       >     >             >       =
>
>> >     >       >     >       >       >       >     >             >       =
>
>> >     >       >     >       >       >       >     >             >
>>  >     Anyway, if you are using xlnx_rebase_4.16 it doesn't
>> >     >       matter too much for
>> >     >       >     >       >       >       >     >             >
>>  >     you as you already have Cache Coloring as a feature
>> >     >       there.
>> >     >       >     >       >       >       >     >             >       =
>
>> >     >       >     >       >       >       >     >             >       =
>
>> >     >       >     >       >       >       >     >             >
>>  >     I take you are using ImageBuilder to generate the boot
>> >     >       configuration? If
>> >     >       >     >       >       >       >     >             >
>>  >     so, please post the ImageBuilder config file that you are
>> >     >       using.
>> >     >       >     >       >       >       >     >             >       =
>
>> >     >       >     >       >       >       >     >             >
>>  >     But from the boot message, it looks like the colors
>> >     >       configuration for
>> >     >       >     >       >       >       >     >             >
>>  >     Dom0 is incorrect.
>> >     >       >     >       >       >       >     >             >       =
>
>> >     >       >     >       >       >       >     >             >
>> >     >       >     >       >       >       >     >             >
>> >     >       >     >       >       >       >     >             >
>> >     >       >     >       >       >       >     >
>> >     >       >     >       >       >       >
>> >     >       >     >       >       >
>> >     >       >     >       >       >
>> >     >       >     >       >       >
>> >     >       >     >       >
>> >     >       >     >       >
>> >     >       >     >       >
>> >     >       >     >
>> >     >       >     >
>> >     >       >     >
>> >     >       >
>> >     >
>> >     >
>> >     >
>> >
>>
>
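For reference, a minimal ImageBuilder config of the kind being asked for here might look like the sketch below. Every value is an illustrative assumption (paths, addresses, memory sizes, and especially any coloring argument on the Xen command line), not a known-good zcu102 configuration:

```shell
# Hypothetical ImageBuilder config sketch for a zcu102-style board.
# All file names and values below are placeholders to adapt.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

DEVICE_TREE="system.dtb"

XEN="xen"
# Dom0 colors are normally passed on the Xen command line; the exact
# parameter name depends on the tree in use (check its docs).
XEN_CMD="console=dtuart dom0_mem=1024M"

DOM0_KERNEL="Image"
DOM0_RAMDISK="rootfs.cpio.gz"

NUM_DOMUS=0

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

Feeding such a file to ImageBuilder's `scripts/uboot-script-gen -c <config> -d . -t tftp` generates a boot.scr with consistent load addresses, which removes hand-written boot script errors as a variable when debugging.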

Hello guys,

Thanks a lot.
After a long problem list I was able to run xen with Dom0 with a cache color.
One more question from my side.
I want to run a guest with color mode too.
I inserted a string into the guest config file: llc-colors = "9-13"
I got an error:

[  457.517004] loop0: detected capacity change from 0 to 385840
Parsing config from /xen/red_config.cfg
/xen/red_config.cfg:26: config parsing error near `-colors': lexical error
warning: Config file looks like it contains Python code.
warning:  Arbitrary Python is no longer supported.
warning:  See https://wiki.xen.org/wiki/PythonInXlConfig
Failed to parse config: Invalid argument

So this is the question:
Is it possible to assign a color mode for the DomU by config file?
If so, what string should I use?

Regards,
Oleg

On Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> Hi Michal,
>
> Thanks.
> This compilation previously had the name CONFIG_COLORING.
> It mixed me up.
>
> Regards,
> Oleg
>
> On Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com> wrote:
> > Hi Oleg,
> >
> > On 11/05/2023 12:02, Oleg Nikitenko wrote:
> > > Hello,
> > >
> > > Thanks Stefano.
> > > Then the next question.
> > > I cloned the xen repo from the xilinx site https://github.com/Xilinx/xen.git
> > > I managed to build the xlnx_rebase_4.17 branch in my environment.
> > > I did it without coloring first. I did not find any color footprints in this branch.
> > > I realized coloring is not in the xlnx_rebase_4.17 branch yet.
> >
> > This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the docs:
> > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
> >
> > It describes the feature and documents the required properties.
> >
> > ~Michal
> >
> > > On Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
> > > > (twice a year) is tested with cache coloring enabled. The last Petalinux
> > > > release is 2023.1 and the kernel used is this:
> > > > https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
> > > >
> > > > On Tue, 9 May 2023, Oleg Nikitenko wrote:
> > > > > Hello guys,
> > > > >
> > > > > I have a couple more questions.
> > > > > Have you ever run xen with the cache coloring on a Zynq UltraScale+ MPSoC zcu102 xczu15eg?
> > > > > When did you last run xen with the cache coloring?
> > > > > What kernel version did you use for Dom0 when you last ran xen with the cache coloring?
> > > > >
> > > > > Regards,
> > > > > Oleg
> > > > >
> > > > > On Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > Hi Michal,
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > Regards,
> > > > > > Oleg
> > > > > >
> > > > > > On Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > > > > Hi Oleg,
> > > > > > >
> > > > > > > Replying, so that you do not need to wait for Stefano.
> > > > > > >
> > > > > > > On 05/05/2023 10:28, Oleg Nikitenko wrote:
> > > > > > > > Hello Stefano,
> > > > > > > >
> > > > > > > > I would like to try the xen cache color property from this repo: https://xenbits.xen.org/git-http/xen.git
> > > > > > > > Could you tell what branch I should use?
> > > > > > >
> > > > > > > The cache coloring feature is not part of the upstream tree and is still under review.
> > > > > > > You can only find it integrated in the Xilinx Xen tree.
> > > > > > >
> > > > > > > ~Michal
> > > > > > >
> > > > > > > > Regards,
> > > > > > > > Oleg
> > > > > > > >
> > > > > > > > On Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > > I am familiar with the zcu102 but I don't know how you could possibly
> > > > > > > > > generate a SError.
> > > > > > > > >
> > > > > > > > > I suggest trying ImageBuilder [1] to generate the boot configuration as
> > > > > > > > > a test, because that is known to work well for zcu102.
> > > > > > > > >
> > > > > > > > > [1] https://gitlab.com/xen-project/imagebuilder
> > > > > > > > >
> > > > > > > > > On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > > > Hello Stefano,
> > > > > > > > > >
> > > > > > > > > > Thanks for the clarification.
> > > > > > > > > > We use neither ImageBuilder nor a u-boot boot script.
> > > > > > > > > > The model is zcu102 compatible.
> > > > > > > > > >
> > > > > > > > > > Regards,
> > > > > > > > > > O.
> > > > > > > > > >
> > > > > > > > > > On Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > > > > This is interesting. Are you using Xilinx hardware by any chance? If so,
> > > > > > > > > > > which board?
> > > > > > > > > > >
> > > > > > > > > > > Are you using ImageBuilder to generate your boot.scr boot script? If so,
> > > > > > > > > > > could you please post your ImageBuilder config file? If not, can you
> > > > > > > > > > > post the source of your uboot boot script?
> > > > > > > > > > >
> > > > > > > > > > > SErrors are supposed to be related to a hardware failure of some kind.
> > > > > > > > > > > You are not supposed to be able to trigger an SError easily by
> > > > > > > > > > > "mistake". I have not seen SErrors due to wrong cache coloring
> > > > > > > > > > > configurations on any Xilinx board before.
> > > > > > > > > > >
> > > > > > > > > > > The differences between Xen with and without cache coloring from a
> > > > > > > > > > > hardware perspective are:
> > > > > > > > > > >
> > > > > > > > > > > - With cache coloring, the SMMU is enabled and does address translations
> > > > > > > > > > >   even for dom0. Without cache coloring the SMMU could be disabled, and
> > > > > > > > > > >   if enabled, the SMMU doesn't do any address translations for Dom0. If
> > > > > > > > > > >   there is a hardware failure related to SMMU address translation, it
> > > > > > > > > > >   could only trigger with cache coloring. This would be my normal
> > > > > > > > > > >   suggestion for you to explore, but the failure happens too early,
> > > > > > > > > > >   before any DMA-capable device is programmed. So I don't think this
> > > > > > > > > > >   can be the issue.
> > > > > > > > > > >
> > > > > > > > > > > - With cache coloring, the memory allocation is very different, so
> > > > > > > > > > >   you'll end up using different DDR regions for Dom0. So if your DDR is
> > > > > > > > > > >   defective, you might only see a failure with cache coloring enabled,
> > > > > > > > > > >   because you end up using different regions.
> > > > > > > > > > >
> > > > > > > > > > > On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > > > > > Hi Stefano,
> > > > > > > > > > > >
> > > > > > > > > > > > Thank you.
> > > > > > > > > > > > If I build xen without colors support there is not this error.
> > > > > > > > > > > > All the domains are booted well.
> > > > > > > > > > > > Hence it cannot be a hardware issue.
> > > > > > > > > > > > This panic arrived during unpacking of the rootfs.
> > > > > > > > > > > > Here I attached the boot log of xen/Dom0 without color.
> > > > > > > > > > > > The highlighted strings are printed exactly after the place where the panic first arrived.
> > > > > > > > > > > >
> > > > > > > > > > > >  Xen 4.16.1-pre
> > > > > > > > > > > > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> > > > > > > > > > > > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
> > > > > > > > > > > > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
> > > > > > > > > > > > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
> > > > > > > > > > > > (XEN) 64-bit Execution:
> > > > > > > > > > > > (XEN)   Processor Features: 0000000000002222 0000000000000000
> > > > > > > > > > > > (XEN)   Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> > > > > > > > > > > > (XEN)   Extensions: FloatingPoint AdvancedSIMD
> > > > > > > > > > > > (XEN)   Debug Features: 0000000010305106 0000000000000000
> > > > > > > > > > > > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> > > > > > > > > > > > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
> > > > > > > > > > > > (XEN)   ISA Features:  0000000000011120 0000000000000000
> > > > > > > > > > > > (XEN) 32-bit Execution:
> > > > > > > > > > > > (XEN)   Processor Features: 0000000000000131:0000000000011011
> > > > > > > > > > > > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
> > > > > > > > > > > > (XEN)     Extensions: GenericTimer Security
> > > > > > > > > > > > (XEN)   Debug Features: 0000000003010066
> > > > > > > > > > > > (XEN)   Auxiliary Features: 0000000000000000
> > > > > > > > > > > > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
> > > > > > > > > > > > (XEN)                          0000000001260000 0000000002102211
> > > > > > > > > > > > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
> > > > > > > > > > > > (XEN)                 0000000001112131 0000000000011142 0000000000011121
> > > > > > > > > > > > (XEN) Using SMC Calling Convention v1.2
> > > > > > > > > > > > (XEN) Using PSCI v1.1
> > > > > > > > > > > > (XEN) SMP: Allowing 4 CPUs
> > > > > > > > > > > > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> > > > > > > > > > > > (XEN) GICv2 initialization:
> > > > > > > > > > > > (XEN)         gic_dist_addr=00000000f9010000
> > > > > > > > > > > > (XEN)         gic_cpu_addr=00000000f9020000
> > > > > > > > > > > > (XEN)         gic_hyp_addr=00000000f9040000
> > > > > > > > > > > > (XEN)         gic_vcpu_addr=00000000f9060000
> > > > > > > > > > > > (XEN)         gic_maintenance_irq=25
> > > > > > > > > > > > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
> > > > > > > > > > > > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
> > > > > > > > > > > > (XEN) Using scheduler: null Scheduler (null)
> > > > > > > > > > > > (XEN) Initializing null scheduler
> > > > > > > > > > > > (XEN) WARNING: This is experimental software in development.
> > > > > > > > > > > > (XEN) Use at your own risk.
> > > > > > > > > > > > (XEN) Allocated console ring of 32 KiB.
> > > > > > > > > > > > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
> > > > > > > > > > > > (XEN) Bringing up CPU1
> > > > > > > > > > > > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
> > > > > > > > > > > > (XEN) CPU 1 booted.
> > > > > > > > > > > > (XEN) Bringing up CPU2
> > > > > > > > > > > > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
> > > > > > > > > > > > (XEN) CPU 2 booted.
> > > > > > > > > > > > (XEN) Bringing up CPU3
> > > > > > > > > > > > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
> > > > > > > > > > > > (XEN) Brought up 4 CPUs
> > > > > > > > > > > > (XEN) CPU 3 booted.
> > > > > > > > > > > > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
> > > > > > > > > > > > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
> > > > > > > > > > > > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
> > > > > > > > > > > > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff<2>smmu: /axi/smmu@fd800000: 16 context banks (0
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IHN0YWdlLTIgb25seSk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogU3Rh
Z2UtMjogNDgtYml0IElQQSAtJmd0OyA0OC1iaXQgUEE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkv
c21tdUBmZDgwMDAwMDogcmVnaXN0ZXJlZCAyOSBtYXN0ZXIgZGV2aWNlczxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
SS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgLSBEb20wIG1vZGU6IFJl
bGF4ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIFAyTTogNDAtYml0IElQQSB3aXRoIDQwLWJpdCBQQSBhbmQgOC1i
aXQgVk1JRDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgKFhFTikgUDJNOiAzIGxldmVscyB3aXRoIG9yZGVyLTEgcm9vdCwgVlRD
UiAweDAwMDAwMDAwODAwMjM1NTg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIFNjaGVkdWxpbmcgZ3JhbnVsYXJpdHk6
IGNwdSwgMSBDUFUgcGVyIHNjaGVkLXJlc291cmNlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBhbHRlcm5hdGl2ZXM6
IFBhdGNoaW5nIHdpdGggYWx0IHRhYmxlIDAwMDAwMDAwMDAyY2M1YzggLSZndDsgMDAwMDAwMDAw
MDJjY2IyYzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgKFhFTikgKioqIExPQURJTkcgRE9NQUlOIDAgKioqPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVO
KSBMb2FkaW5nIGQwIGtlcm5lbCBmcm9tIGJvb3QgbW9kdWxlIEAgMDAwMDAwMDAwMTAwMDAwMDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgTG9hZGluZyByYW1kaXNrIGZyb20gYm9vdCBtb2R1bGUgQCAwMDAwMDAwMDAy
MDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyAoWEVOKSBBbGxvY2F0aW5nIDE6MSBtYXBwaW5ncyB0b3RhbGxpbmcgMTYw
ME1CIGZvciBkb20wOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQkFOS1swXSAweDAwMDAwMDEwMDAwMDAwLTB4MDAw
MDAwMjAwMDAwMDAgKDI1Nk1CKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQkFOS1sxXSAweDAwMDAwMDI0MDAwMDAw
LTB4MDAwMDAwMjgwMDAwMDAgKDY0TUIpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBCQU5LWzJdIDB4MDAwMDAwMzAw
MDAwMDAtMHgwMDAwMDA4MDAwMDAwMCAoMTI4ME1CKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgR3JhbnQgdGFibGUg
cmFuZ2U6IDB4MDAwMDAwMDBlMDAwMDAtMHgwMDAwMDAwMGU0MDAwMDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgc21t
dTogL2F4aS9zbW11QGZkODAwMDAwOiBkMDogcDJtYWRkciAweDAwMDAwMDA4N2JmOTQwMDA8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IChYRU4pIEFsbG9jYXRpbmcgUFBJIDE2IGZvciBldmVudCBjaGFubmVsIGludGVycnVwdDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDA6IDB4ODEyMDAwMDAtJmd0OzB4YTAwMDAwMDA8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiAxOiAweGIxMjAwMDAwLSZndDsweGMwMDAwMDAw
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyAoWEVOKSBFeHRlbmRlZCByZWdpb24gMjogMHhjODAwMDAwMC0mZ3Q7MHhlMDAwMDAw
MDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDM6IDB4ZjAwMDAwMDAtJmd0OzB4ZjkwMDAw
MDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiA0OiAweDEwMDAwMDAwMC0mZ3Q7MHg2MDAw
MDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiA1OiAweDg4MDAwMDAwMC0mZ3Q7MHg4
MDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBFeHRlbmRlZCByZWdpb24gNjogMHg4MDAxMDAwMDAwLSZn
dDsweDEwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBMb2FkaW5nIHpJbWFnZSBmcm9tIDAwMDAwMDAw
MDEwMDAwMDAgdG8gMDAwMDAwMDAxMDAwMDAwMC0wMDAwMDAwMDEwZTQxMDA4PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVO
KSBMb2FkaW5nIGQwIGluaXRyZCBmcm9tIDAwMDAwMDAwMDIwMDAwMDAgdG8gMHgwMDAwMDAwMDEz
NjAwMDAwLTB4MDAwMDAwMDAxZmYzYTYxNzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgTG9hZGluZyBkMCBEVEIgdG8g
MHgwMDAwMDAwMDEzNDAwMDAwLTB4MDAwMDAwMDAxMzQwY2JkYzxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgSW5pdGlh
bCBsb3cgbWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAo
WEVOKSBTdGQuIExvZ2xldmVsOiBBbGw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEd1ZXN0IExvZ2xldmVsOiBBbGw8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IChYRU4pICoqKiBTZXJpYWwgaW5wdXQgdG8gRE9NMCAodHlwZSAmIzM5O0NUUkwtYSYj
Mzk7IHRocmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIG51bGwuYzozNTM6
IDAgJmx0Oy0tIGQwdjA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEZyZWVkIDM1NmtCIGluaXQgbWVtb3J5Ljxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgKFhFTikgZDB2MCBVbmhhbmRsZWQgU01DL0hWQzogMHg4NDAwMDA1MDxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
ZDB2MCBVbmhhbmRsZWQgU01DL0hWQzogMHg4NjAwZmYwMTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MDogdkdJ
Q0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSNDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZm
ZmZmZmYgdG8gSUNBQ1RJVkVSODxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3
b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMTI8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIGQw
djA6IHZHSUNEOiB1bmhhbmRsZWQgd29yZCB3cml0ZSAweDAwMDAwMGZmZmZmZmZmIHRvIElDQUNU
SVZFUjE2PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyAoWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgw
MDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIyMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MDogdkdJQ0Q6IHVu
aGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuMDAwMDAwXSBCb290aW5nIExpbnV4IG9uIHBoeXNpY2FsIENQVSAweDAwMDAwMDAw
MDAgWzB4NDEwZmQwMzRdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIExpbnV4IHZlcnNpb24gNS4x
NS43Mi14aWxpbngtdjIwMjIuMSAob2UtdXNlckBvZS1ob3N0KSAoYWFyY2g2NC1wb3J0YWJsZS1s
aW51eC1nY2MgKEdDQyk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAxMS4zLjAsIEdO
VSBsZCAoR05VPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgQmludXRpbHMpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAyLjM4LjIwMjIwNzA4KSAjMSBTTVAgVHVlIEZl
YiAyMSAwNTo0Nzo1NCBVVEMgMjAyMzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBNYWNoaW5lIG1v
ZGVsOiBEMTQgVmlwZXIgQm9hcmQgLSBXaGl0ZSBVbml0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBd
IFhlbiA0LjE2IHN1cHBvcnQgZm91bmQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gWm9uZSByYW5n
ZXM6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIMKgIERNQSDCoCDCoCDCoFttZW0gMHgwMDAwMDAw
MDEwMDAwMDAwLTB4MDAwMDAwMDA3ZmZmZmZmZl08YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gwqAg
RE1BMzIgwqAgwqBlbXB0eTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSDCoCBOb3JtYWwgwqAgZW1w
dHk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gTW92YWJsZSB6b25lIHN0YXJ0IGZvciBlYWNoIG5v
ZGU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gRWFybHkgbWVtb3J5IG5vZGUgcmFuZ2VzPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC4wMDAwMDBdIMKgIG5vZGUgwqAgMDogW21lbSAweDAwMDAwMDAwMTAwMDAwMDAt
MHgwMDAwMDAwMDFmZmZmZmZmXTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSDCoCBub2RlIMKgIDA6
IFttZW0gMHgwMDAwMDAwMDIyMDAwMDAwLTB4MDAwMDAwMDAyMjE0N2ZmZl08YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAwLjAwMDAwMF0gwqAgbm9kZSDCoCAwOiBbbWVtIDB4MDAwMDAwMDAyMjIwMDAwMC0weDAwMDAw
MDAwMjIzNDdmZmZdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIMKgIG5vZGUgwqAgMDogW21lbSAw
eDAwMDAwMDAwMjQwMDAwMDAtMHgwMDAwMDAwMDI3ZmZmZmZmXTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAw
MDAwXSDCoCBub2RlIMKgIDA6IFttZW0gMHgwMDAwMDAwMDMwMDAwMDAwLTB4MDAwMDAwMDA3ZmZm
ZmZmZl08YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gSW5pdG1lbSBzZXR1cCBub2RlIDAgW21lbSAw
eDAwMDAwMDAwMTAwMDAwMDAtMHgwMDAwMDAwMDdmZmZmZmZmXTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAw
MDAwXSBPbiBub2RlIDAsIHpvbmUgRE1BOiA4MTkyIHBhZ2VzIGluIHVuYXZhaWxhYmxlIHJhbmdl
czxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBPbiBub2RlIDAsIHpvbmUgRE1BOiAxODQgcGFnZXMg
aW4gdW5hdmFpbGFibGUgcmFuZ2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE9uIG5vZGUgMCwg
em9uZSBETUE6IDczNTIgcGFnZXMgaW4gdW5hdmFpbGFibGUgcmFuZ2VzPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MC4wMDAwMDBdIGNtYTogUmVzZXJ2ZWQgMjU2IE1pQiBhdCAweDAwMDAwMDAwNmUwMDAwMDA8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjAwMDAwMF0gcHNjaTogcHJvYmluZyBmb3IgY29uZHVpdCBtZXRob2QgZnJv
bSBEVC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcHNjaTogUFNDSXYxLjEgZGV0ZWN0ZWQgaW4g
ZmlybXdhcmUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHBzY2k6IFVzaW5nIHN0YW5kYXJkIFBT
Q0kgdjAuMiBmdW5jdGlvbiBJRHM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcHNjaTogVHJ1c3Rl
ZCBPUyBtaWdyYXRpb24gbm90IHJlcXVpcmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHBzY2k6
IFNNQyBDYWxsaW5nIENvbnZlbnRpb24gdjEuMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBwZXJj
cHU6IEVtYmVkZGVkIDE2IHBhZ2VzL2NwdSBzMzI3OTIgcjAgZDMyNzQ0IHU2NTUzNjxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuMDAwMDAwXSBEZXRlY3RlZCBWSVBUIEktY2FjaGUgb24gQ1BVMDxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDAuMDAwMDAwXSBDUFUgZmVhdHVyZXM6IGtlcm5lbCBwYWdlIHRhYmxlIGlzb2xhdGlvbiBm
b3JjZWQgT04gYnkgS0FTTFI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gQ1BVIGZlYXR1cmVzOiBk
ZXRlY3RlZDogS2VybmVsIHBhZ2UgdGFibGUgaXNvbGF0aW9uIChLUFRJKTxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDAuMDAwMDAwXSBCdWlsdCAxIHpvbmVsaXN0cywgbW9iaWxpdHkgZ3JvdXBpbmcgb24uwqAgVG90
YWwgcGFnZXM6IDQwMzg0NTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBLZXJuZWwgY29tbWFuZCBs
aW5lOiBjb25zb2xlPWh2YzAgZWFybHljb249eGVuIGVhcmx5cHJpbnRrPXhlbiBjbGtfaWdub3Jl
X3VudXNlZCBmaXBzPTE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqByb290PS9kZXYv
cmFtMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoG1heGNwdXM9Mjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBVbmtub3duIGtlcm5lbCBj
b21tYW5kIGxpbmUgcGFyYW1ldGVycyAmcXVvdDtlYXJseXByaW50az14ZW4gZmlwcz0xJnF1b3Q7
LCB3aWxsIGJlIHBhc3NlZCB0byB1c2VyPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
c3BhY2UuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVu
dHJpZXM6IDI2MjE0NCAob3JkZXI6IDksIDIwOTcxNTIgYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAwLjAwMDAwMF0gSW5vZGUtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiAxMzEwNzIgKG9y
ZGVyOiA4LCAxMDQ4NTc2IGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIG1l
bSBhdXRvLWluaXQ6IHN0YWNrOm9mZiwgaGVhcCBhbGxvYzpvbiwgaGVhcCBmcmVlOm9uPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC4wMDAwMDBdIG1lbSBhdXRvLWluaXQ6IGNsZWFyaW5nIHN5c3RlbSBtZW1vcnkg
bWF5IHRha2Ugc29tZSB0aW1lLi4uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE1lbW9yeTogMTEy
MTkzNksvMTY0MTAyNEsgYXZhaWxhYmxlICg5NzI4SyBrZXJuZWwgY29kZSwgODM2SyByd2RhdGEs
IDIzOTZLIHJvZGF0YSwgMTUzNks8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBpbml0
LCAyNjJLIGJzcyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAyNTY5NDRLIHJlc2VydmVkLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgMjYyMTQ0SyBjbWEtcmVzZXJ2
ZWQpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIFNMVUI6IEhXYWxpZ249NjQsIE9yZGVyPTAtMywg
TWluT2JqZWN0cz0wLCBDUFVzPTIsIE5vZGVzPTE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcmN1
OiBIaWVyYXJjaGljYWwgUkNVIGltcGxlbWVudGF0aW9uLjxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAw
XSByY3U6IFJDVSBldmVudCB0cmFjaW5nIGlzIGVuYWJsZWQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAw
MDBdIHJjdTogUkNVIHJlc3RyaWN0aW5nIENQVXMgZnJvbSBOUl9DUFVTPTggdG8gbnJfY3B1X2lk
cz0yLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSByY3U6IFJDVSBjYWxjdWxhdGVkIHZhbHVlIG9m
IHNjaGVkdWxlci1lbmxpc3RtZW50IGRlbGF5IGlzIDI1IGppZmZpZXMuPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MC4wMDAwMDBdIHJjdTogQWRqdXN0aW5nIGdlb21ldHJ5IGZvciByY3VfZmFub3V0X2xlYWY9MTYs
IG5yX2NwdV9pZHM9Mjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBOUl9JUlFTOiA2NCwgbnJfaXJx
czogNjQsIHByZWFsbG9jYXRlZCBpcnFzOiAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIFJvb3Qg
SVJRIGhhbmRsZXI6IGdpY19oYW5kbGVfaXJxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIGFyY2hf
dGltZXI6IGNwMTUgdGltZXIocykgcnVubmluZyBhdCAxMDAuMDBNSHogKHZpcnQpLjxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuMDAwMDAwXSBjbG9ja3NvdXJjZTogYXJjaF9zeXNfY291bnRlcjogbWFzazogMHhm
ZmZmZmZmZmZmZmZmZiBtYXhfY3ljbGVzOiAweDE3MTAyNGU3ZTAsPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgbWF4X2lkbGVfbnM6IDQ0MDc5NTIwNTMxNSBuczxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDAuMDAwMDAwXSBzY2hlZF9jbG9jazogNTYgYml0cyBhdCAxMDBNSHosIHJlc29sdXRpb24gMTBu
cywgd3JhcHMgZXZlcnkgNDM5ODA0NjUxMTEwMG5zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAyNThdIENv
bnNvbGU6IGNvbG91ciBkdW1teSBkZXZpY2UgODB4MjU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjMxMDIzMV0g
cHJpbnRrOiBjb25zb2xlIFtodmMwXSBlbmFibGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zMTQ0MDNdIENh
bGlicmF0aW5nIGRlbGF5IGxvb3AgKHNraXBwZWQpLCB2YWx1ZSBjYWxjdWxhdGVkIHVzaW5nIHRp
bWVyIGZyZXF1ZW5jeS4uIDIwMC4wMCBCb2dvTUlQUzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoChscGo9NDAwMDAwKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzI0ODUxXSBwaWRfbWF4OiBkZWZh
dWx0OiAzMjc2OCBtaW5pbXVtOiAzMDE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjMyOTcwNl0gTFNNOiBTZWN1
cml0eSBGcmFtZXdvcmsgaW5pdGlhbGl6aW5nPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zMzQyMDRdIFlhbWE6
IGJlY29taW5nIG1pbmRmdWwuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4zMzc4NjVdIE1vdW50LWNhY2hlIGhh
c2ggdGFibGUgZW50cmllczogNDA5NiAob3JkZXI6IDMsIDMyNzY4IGJ5dGVzLCBsaW5lYXIpPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMC4zNDUxODBdIE1vdW50cG9pbnQtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVz
OiA0MDk2IChvcmRlcjogMywgMzI3NjggYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM1
NDc0M10geGVuOmdyYW50X3RhYmxlOiBHcmFudCB0YWJsZXMgdXNpbmcgdmVyc2lvbiAxIGxheW91
dDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuMzU5MTMyXSBHcmFudCB0YWJsZSBpbml0aWFsaXplZDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuMzYyNjY0XSB4ZW46ZXZlbnRzOiBVc2luZyBGSUZPLWJhc2VkIEFCSTxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDAuMzY2OTkzXSBYZW46IGluaXRpYWxpemluZyBjcHUwPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4z
NzA1MTVdIHJjdTogSGllcmFyY2hpY2FsIFNSQ1UgaW1wbGVtZW50YXRpb24uPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMC4zNzU5MzBdIHNtcDogQnJpbmdpbmcgdXAgc2Vjb25kYXJ5IENQVXMgLi4uPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAo
WEVOKSBudWxsLmM6MzUzOiAxICZsdDstLSBkMHYxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBkMHYxOiB2R0lDRDog
dW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIwPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMC4zODI1NDldIERldGVjdGVkIFZJUFQgSS1jYWNoZSBvbiBDUFUxPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMC4zODg3MTJdIFhlbjogaW5pdGlhbGl6aW5nIGNwdTE8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjM4
ODc0M10gQ1BVMTogQm9vdGVkIHNlY29uZGFyeSBwcm9jZXNzb3IgMHgwMDAwMDAwMDAxIFsweDQx
MGZkMDM0XTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMzg4ODI5XSBzbXA6IEJyb3VnaHQgdXAgMSBub2RlLCAy
IENQVXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQwNjk0MV0gU01QOiBUb3RhbCBvZiAyIHByb2Nlc3NvcnMg
YWN0aXZhdGVkLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDExNjk4XSBDUFUgZmVhdHVyZXM6IGRldGVjdGVk
OiAzMi1iaXQgRUwwIFN1cHBvcnQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQxNjg4OF0gQ1BVIGZlYXR1cmVz
OiBkZXRlY3RlZDogQ1JDMzIgaW5zdHJ1Y3Rpb25zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40MjIxMjFdIENQ
VTogQWxsIENQVShzKSBzdGFydGVkIGF0IEVMMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDI2MjQ4XSBhbHRl
cm5hdGl2ZXM6IHBhdGNoaW5nIGtlcm5lbCBjb2RlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC40MzE0MjRdIGRl
dnRtcGZzOiBpbml0aWFsaXplZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDQxNDU0XSBLQVNMUiBlbmFibGVk
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC40NDE2MDJdIGNsb2Nrc291cmNlOiBqaWZmaWVzOiBtYXNrOiAweGZm
ZmZmZmZmIG1heF9jeWNsZXM6IDB4ZmZmZmZmZmYsIG1heF9pZGxlX25zOjxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoDc2NDUwNDE3ODUxMDAwMDAgbnM8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQ0
ODMyMV0gZnV0ZXggaGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIgKG9yZGVyOiAzLCAzMjc2OCBieXRl
cywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNDk2MTgzXSBORVQ6IFJlZ2lzdGVyZWQgUEZfTkVU
TElOSy9QRl9ST1VURSBwcm90b2NvbCBmYW1pbHk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjQ5ODI3N10gRE1B
OiBwcmVhbGxvY2F0ZWQgMjU2IEtpQiBHRlBfS0VSTkVMIHBvb2wgZm9yIGF0b21pYyBhbGxvY2F0
aW9uczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDAuNTAzNzcyXSBETUE6IHByZWFsbG9jYXRlZCAyNTYgS2lCIEdG
UF9LRVJORUx8R0ZQX0RNQSBwb29sIGZvciBhdG9taWMgYWxsb2NhdGlvbnM8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAwLjUxMTYxMF0gRE1BOiBwcmVhbGxvY2F0ZWQgMjU2IEtpQiBHRlBfS0VSTkVMfEdGUF9ETUEz
MiBwb29sIGZvciBhdG9taWMgYWxsb2NhdGlvbnM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjUxOTQ3OF0gYXVk
aXQ6IGluaXRpYWxpemluZyBuZXRsaW5rIHN1YnN5cyAoZGlzYWJsZWQpPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MC41MjQ5ODVdIGF1ZGl0OiB0eXBlPTIwMDAgYXVkaXQoMC4zMzY6MSk6IHN0YXRlPWluaXRpYWxp
emVkIGF1ZGl0X2VuYWJsZWQ9MCByZXM9MTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNTI5MTY5XSB0aGVybWFs
X3N5czogUmVnaXN0ZXJlZCB0aGVybWFsIGdvdmVybm9yICYjMzk7c3RlcF93aXNlJiMzOTs8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjUzMzAyM10gaHctYnJlYWtwb2ludDogZm91bmQgNiBicmVha3BvaW50IGFu
ZCA0IHdhdGNocG9pbnQgcmVnaXN0ZXJzLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuNTQ1NjA4XSBBU0lEIGFs
bG9jYXRvciBpbml0aWFsaXNlZCB3aXRoIDMyNzY4IGVudHJpZXM8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjU1
MTAzMF0geGVuOnN3aW90bGJfeGVuOiBXYXJuaW5nOiBvbmx5IGFibGUgdG8gYWxsb2NhdGUgNCBN
QiBmb3Igc29mdHdhcmUgSU8gVExCPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41NTkzMzJdIHNvZnR3YXJlIElP
IFRMQjogbWFwcGVkIFttZW0gMHgwMDAwMDAwMDExODAwMDAwLTB4MDAwMDAwMDAxMWMwMDAwMF0g
KDRNQik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjU4MzU2NV0gSHVnZVRMQiByZWdpc3RlcmVkIDEuMDAgR2lC
IHBhZ2Ugc2l6ZSwgcHJlLWFsbG9jYXRlZCAwIHBhZ2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC41ODQ3MjFd
IEh1Z2VUTEIgcmVnaXN0ZXJlZCAzMi4wIE1pQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBw
YWdlczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDAuNTkxNDc4XSBIdWdlVExCIHJlZ2lzdGVyZWQgMi4wMCBNaUIg
cGFnZSBzaXplLCBwcmUtYWxsb2NhdGVkIDAgcGFnZXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjU5ODIyNV0g
SHVnZVRMQiByZWdpc3RlcmVkIDY0LjAgS2lCIHBhZ2Ugc2l6ZSwgcHJlLWFsbG9jYXRlZCAwIHBh
Z2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC42MzY1MjBdIERSQkc6IENvbnRpbnVpbmcgd2l0aG91dCBKaXR0
ZXIgUk5HPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC43MzcxODddIHJhaWQ2OiBuZW9ueDggwqAgZ2VuKCkgwqAy
MTQzIE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjgwNTI5NF0gcmFpZDY6IG5lb254OCDCoCB4b3IoKSDC
oDE1ODkgTUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuODczNDA2XSByYWlkNjogbmVvbng0IMKgIGdlbigp
IMKgMjE3NyBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC45NDE0OTldIHJhaWQ2OiBuZW9ueDQgwqAgeG9y
KCkgwqAxNTU2IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjAwOTYxMl0gcmFpZDY6IG5lb254MiDCoCBn
ZW4oKSDCoDIwNzIgTUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuMDc3NzE1XSByYWlkNjogbmVvbngyIMKg
IHhvcigpIMKgMTQzMCBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS4xNDU4MzRdIHJhaWQ2OiBuZW9ueDEg
wqAgZ2VuKCkgwqAxNzY5IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjIxMzkzNV0gcmFpZDY6IG5lb254
MSDCoCB4b3IoKSDCoDEyMTQgTUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuMjgyMDQ2XSByYWlkNjogaW50
NjR4OCDCoGdlbigpIMKgMTM2NiBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS4zNTAxMzJdIHJhaWQ2OiBp
bnQ2NHg4IMKgeG9yKCkgwqAgNzczIE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjQxODI1OV0gcmFpZDY6
IGludDY0eDQgwqBnZW4oKSDCoDE2MDIgTUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuNDg2MzQ5XSByYWlk
NjogaW50NjR4NCDCoHhvcigpIMKgIDg1MSBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS41NTQ0NjRdIHJh
aWQ2OiBpbnQ2NHgyIMKgZ2VuKCkgwqAxMzk2IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjYyMjU2MV0g
cmFpZDY6IGludDY0eDIgwqB4b3IoKSDCoCA3NDQgTUIvczxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuNjkwNjg3
XSByYWlkNjogaW50NjR4MSDCoGdlbigpIMKgMTAzMyBNQi9zPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43NTg3
NzBdIHJhaWQ2OiBpbnQ2NHgxIMKgeG9yKCkgwqAgNTE3IE1CL3M8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjc1
ODgwOV0gcmFpZDY6IHVzaW5nIGFsZ29yaXRobSBuZW9ueDQgZ2VuKCkgMjE3NyBNQi9zPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMS43NjI5NDFdIHJhaWQ2OiAuLi4uIHhvcigpIDE1NTYgTUIvcywgcm13IGVuYWJs
ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAxLjc2Nzk1N10gcmFpZDY6IHVzaW5nIG5lb24gcmVjb3ZlcnkgYWxn
b3JpdGhtPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMS43NzI4MjRdIHhlbjpiYWxsb29uOiBJbml0aWFsaXNpbmcg
YmFsbG9vbiBkcml2ZXI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjc3ODAyMV0gaW9tbXU6IERlZmF1bHQgZG9t
YWluIHR5cGU6IFRyYW5zbGF0ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAxLjc4MjU4NF0gaW9tbXU6IERNQSBk
b21haW4gVExCIGludmFsaWRhdGlvbiBwb2xpY3k6IHN0cmljdCBtb2RlPGJyPg0KJmd0O8KgIMKg
> > > > > [    1.789149] SCSI subsystem initialized
> > > > > [    1.792820] usbcore: registered new interface driver usbfs
> > > > > [    1.798254] usbcore: registered new interface driver hub
> > > > > [    1.803626] usbcore: registered new device driver usb
> > > > > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> > > > > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> > > > > [    1.822903] PTP clock support registered
> > > > > [    1.826893] EDAC MC: Ver: 3.0.0
> > > > > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> > > > > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> > > > > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> > > > > [    1.855907] FPGA manager framework
> > > > > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> > > > > [    1.871712] NET: Registered PF_INET protocol family
> > > > > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> > > > > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> > > > > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> > > > > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> > > > > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
> > > > > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
> > > > > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
> > > > > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
> > > > > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> > > > > [    1.936834] RPC: Registered named UNIX socket transport module.
> > > > > [    1.942342] RPC: Registered udp transport module.
> > > > > [    1.947088] RPC: Registered tcp transport module.
> > > > > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> > > > > [    1.958334] PCI: CLS 0 bytes, default 64
> > > > > [    1.962709] Trying to unpack rootfs image as initramfs...
> > > > > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> > > > > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
> > > > > [    2.021045] NET: Registered PF_ALG protocol family
> > > > > [    2.021122] xor: measuring software checksum speed
> > > > > [    2.029347]    8regs           :  2366 MB/sec
> > > > > [    2.033081]    32regs          :  2802 MB/sec
> > > > > [    2.038223]    arm64_neon      :  2320 MB/sec
> > > > > [    2.038385] xor: using function: 32regs (2802 MB/sec)
> > > > > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
> > > > > [    2.050959] io scheduler mq-deadline registered
> > > > > [    2.055521] io scheduler kyber registered
> > > > > [    2.068227] xen:xen_evtchn: Event-channel device installed
> > > > > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> > > > > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
> > > > > [    2.085548] brd: module loaded
> > > > > [    2.089290] loop: module loaded
> > > > > [    2.089341] Invalid max_queues (4), will use default max: 2.
> > > > > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> > > > > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
> > > > > [    2.104156] usbcore: registered new interface driver rtl8150
> > > > > [    2.109813] usbcore: registered new interface driver r8152
> > > > > [    2.115367] usbcore: registered new interface driver asix
> > > > > [    2.120794] usbcore: registered new interface driver ax88179_178a
> > > > > [    2.126934] usbcore: registered new interface driver cdc_ether
> > > > > [    2.132816] usbcore: registered new interface driver cdc_eem
> > > > > [    2.138527] usbcore: registered new interface driver net1080
> > > > > [    2.144256] usbcore: registered new interface driver cdc_subset
> > > > > [    2.150205] usbcore: registered new interface driver zaurus
> > > > > [    2.155837] usbcore: registered new interface driver cdc_ncm
> > > > > [    2.161550] usbcore: registered new interface driver r8153_ecm
> > > > > [    2.168240] usbcore: registered new interface driver cdc_acm
> > > > > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
> > > > > [    2.181358] usbcore: registered new interface driver uas
> > > > > [    2.186547] usbcore: registered new interface driver usb-storage
> > > > > [    2.192643] usbcore: registered new interface driver ftdi_sio
> > > > > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
> > > > > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
> > > > > [    2.215332] i2c_dev: i2c /dev entries driver
> > > > > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> > > > > [    2.225923] device-mapper: uevent: version 1.0.3
> > > > > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
> > > > > [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
> > > > > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
> > > > > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
> > > > > [    2.267487] sdhci: Copyright(c) Pierre Ossman
> > > > > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
> > > > > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
> > > > > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
> > > > > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
> > > > > [    2.327875] securefw securefw: securefw probed
> > > > > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
> > > > > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
> > > > > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
> > > > > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
> > > > > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
> > > > > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
> > > > > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
> > > > > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
> > > > > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> > > > > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
> > > > > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
> > > > > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
> > > > > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
> > > > > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
> > > > > [    2.420856] default preset
> > > > > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
> > > > > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
> > > > > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
> > > > > [    2.441976] vmcu driver init
> > > > > [    2.444922] VMCU: : (240:0) registered
> > > > > [    2.444956] In K81 Updater init
> > > > > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
> > > > > [    2.468833] Initializing XFRM netlink socket
> > > > > [    2.468902] NET: Registered PF_PACKET protocol family
> > > > > [    2.472729] Bridge firewalling registered
> > > > > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
> > > > > [    2.481341] registered taskstats version 1
> > > > > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
> > > > > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
> > > > > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
> > > > > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
> > > > > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
> > > > > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
> > > > > [    2.952393] Creating 2 MTD partitions on "spi0.0":
> > > > > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
> > > > > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
> > > > > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
> > > > > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
> > > > > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
> > > > > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
> > > > > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
> > > > > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
> > > > > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
> > > > > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
> > > > > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
> > > > > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
> > > > > [    3.045301] viper_enet viper_enet: Viper enet registered
> > > > > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
> > > > > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
> > > > > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
> > > > > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
> > > > > [    3.097729] si70xx: probe of 2-0040 failed with error -5
> > > > > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> > > > > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> > > > > [    3.112457] viper-tamper viper-tamper: Device registered
> > > > > [    3.117593] active_bank active_bank: boot bank: 1
> > > > > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> > > > > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
> > > > > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> > > > > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
> > > > > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> > > > > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> > > > > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
> > > > > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> > > > > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > > > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> > > > > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> > > > > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> > > > > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> > > > > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> > > > > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> > > > > [    3.215694] lpc55_l2 spi1.0: rx error: -110
> > > > > [    3.284438] mmc0: new HS200 MMC card at address 0001
> > > > > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> > > > > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> > > > > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> > > > > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> > > > > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> > > > > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > > > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> > > > > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
> > > > > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
> > > > > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> > > > > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> > > > > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> > > > > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> > > > > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> > > > > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> > > > > [    3.633253] lpc55_l2 spi1.0: rx error: -110
> > > > > [    3.639104] k81_bootloader 0-0010: probe
> > > > > [    3.641628] VMCU: : (235:0) registered
> > > > > [    3.641635] k81_bootloader 0-0010: probe completed
> > > > > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> > > > > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> > > > > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> > > > > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> > > > > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> > > > > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> > > > > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> > > > > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> > > > > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> > > > > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAzLjcyMDA2N10gYXQyNCAxLTAwNTQ6IDEwMjQgYnl0ZSAyNGMwOCBFRVBST00sIHJlYWQtb25s
eTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDMuNzI0NzYxXSBjZG5zLWkyYyBmZjAzMDAwMC5pMmM6IDEwMCBrSHog
bW1pbyBmZjAzMDAwMCBpcnEgMjk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjczMTI3Ml0gc2ZwIHZpcGVyX2Vu
ZXQ6c2ZwLWV0aDE6IEhvc3QgbWF4aW11bSBwb3dlciAyLjBXPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy43Mzc1
NDldIHNmcF9yZWdpc3Rlcl9zb2NrZXQ6IGdvdCBzZnBfYnVzPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy43NDA3
MDldIHNmcF9yZWdpc3Rlcl9zb2NrZXQ6IHJlZ2lzdGVyIHNmcF9idXM8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAz
Ljc0NTQ1OV0gc2ZwX3JlZ2lzdGVyX2J1czogb3BzIG9rITxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzQ5MTc5
XSBzZnBfcmVnaXN0ZXJfYnVzOiBUcnkgdG8gYXR0YWNoPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy43NTM0MTld
IHNmcF9yZWdpc3Rlcl9idXM6IEF0dGFjaCBzdWNjZWVkZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjc1Nzkx
NF0gc2ZwX3JlZ2lzdGVyX2J1czogdXBzdHJlYW0gb3BzIGF0dGFjaDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMu
NzYyNjc3XSBzZnBfcmVnaXN0ZXJfYnVzOiBCdXMgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMu
NzY2OTk5XSBzZnBfcmVnaXN0ZXJfc29ja2V0OiByZWdpc3RlciBzZnBfYnVzIHN1Y2NlZWRlZDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDMuNzc1ODcwXSBvZl9jZnNfaW5pdDxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzc2MDAw
XSBvZl9jZnNfaW5pdDogT0s8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjc3ODIxMV0gY2xrOiBOb3QgZGlzYWJs
aW5nIHVudXNlZCBjbG9ja3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMjc4NDc3XSBGcmVlaW5nIGluaXRyZCBt
ZW1vcnk6IDIwNjA1Nks8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMjc5NDA2XSBGcmVlaW5nIHVudXNlZCBrZXJu
ZWwgbWVtb3J5OiAxNTM2Szxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4zMTQwMDZdIENoZWNrZWQgVytYIG1hcHBp
bmdzOiBwYXNzZWQsIG5vIFcrWCBwYWdlcyBmb3VuZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4zMTQxNDJdIFJ1
biAvaW5pdCBhcyBpbml0IHByb2Nlc3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IElOSVQ6IHZlcnNpb24gMy4wMSBib290aW5n
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBmc2NrIChidXN5Ym94IDEuMzUuMCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IC9kZXYvbW1jYmxrMHAxOiBjbGVh
biwgMTIvMTAyNDAwIGZpbGVzLCAyMzgxNjIvNDA5NjAwIGJsb2Nrczxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgL2Rldi9tbWNi
bGswcDI6IGNsZWFuLCAxMi8xMDI0MDAgZmlsZXMsIDE3MTk3Mi80MDk2MDAgYmxvY2tzPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyAvZGV2L21tY2JsazBwMyB3YXMgbm90IGNsZWFubHkgdW5tb3VudGVkLCBjaGVjayBmb3JjZWQu
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyAvZGV2L21tY2JsazBwMzogMjAvNDA5NiBmaWxlcyAoMC4wJSBub24tY29udGlndW91
cyksIDY2My8xNjM4NCBibG9ja3M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuNTUzMDczXSBFWFQ0LWZzIChtbWNi
bGswcDMpOiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aG91dCBqb3VybmFsLiBPcHRzOiAobnVsbCku
IFF1b3RhIG1vZGU6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZGlzYWJsZWQuPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBTdGFydGluZyByYW5kb20gbnVtYmVyIGdlbmVyYXRvciBkYWVtb24uPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IDExLjU4MDY2Ml0gcmFuZG9tOiBjcm5nIGluaXQgZG9uZTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgU3RhcnRpbmcgdWRldjxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxMS42MTMxNTldIHVkZXZkWzE0Ml06IHN0YXJ0aW5nIHZlcnNpb24gMy4yLjEw
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDExLjYyMDM4NV0gdWRldmRbMTQzXTogc3RhcnRpbmcgZXVkZXYtMy4yLjEw
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDExLjcwNDQ4MV0gbWFjYiBmZjBiMDAwMC5ldGhlcm5ldCBjb250cm9sX3Jl
ZDogcmVuYW1lZCBmcm9tIGV0aDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuNzIwMjY0XSBtYWNiIGZmMGMwMDAw
LmV0aGVybmV0IGNvbnRyb2xfYmxhY2s6IHJlbmFtZWQgZnJvbSBldGgxPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEy
LjA2MzM5Nl0gaXBfbG9jYWxfcG9ydF9yYW5nZTogcHJlZmVyIGRpZmZlcmVudCBwYXJpdHkgZm9y
IHN0YXJ0L2VuZCB2YWx1ZXMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEyLjA4NDgwMV0gcnRjLWxwYzU1IHJ0Y19s
cGM1NTogbHBjNTVfcnRjX2dldF90aW1lOiBiYWQgcmVzdWx0OiAxPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBod2Nsb2NrOiBS
VENfUkRfVElNRTogSW52YWxpZCBleGNoYW5nZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgTW9uIEZlYiAyNyAwODo0MDo1MyBV
VEMgMjAyMzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCAxMi4xMTUzMDldIHJ0Yy1scGM1NSBydGNfbHBjNTU6IGxwYzU1
X3J0Y19zZXRfdGltZTogYmFkIHJlc3VsdDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgaHdjbG9jazogUlRDX1NFVF9USU1FOiBJ
bnZhbGlkIGV4Y2hhbmdlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEyLjEzMTAyN10gcnRjLWxwYzU1IHJ0Y19scGM1
NTogbHBjNTVfcnRjX2dldF90aW1lOiBiYWQgcmVzdWx0OiAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBTdGFydGluZyBtY3Vk
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBJTklUOiBFbnRlcmluZyBydW5sZXZlbDogNTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgQ29uZmlndXJpbmcgbmV0
d29yayBpbnRlcmZhY2VzLi4uIGRvbmUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyByZXNldHRpbmcgbmV0d29yayBpbnRlcmZh
Y2U8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgMTIuNzE4Mjk1XSBtYWNiIGZmMGIwMDAwLmV0aGVybmV0IGNvbnRyb2xf
cmVkOiBQSFkgW2ZmMGIwMDAwLmV0aGVybmV0LWZmZmZmZmZmOjAyXSBkcml2ZXIgW1hpbGlueDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoFBDUy9QTUEgUEhZXSAoaXJxPVBPTEwpPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIDEyLjcyMzkxOV0gbWFjYiBmZjBiMDAwMC5ldGhlcm5ldCBjb250cm9sX3JlZDog
Y29uZmlndXJpbmcgZm9yIHBoeS9nbWlpIGxpbmsgbW9kZTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMi43MzIxNTFd
IHBwcyBwcHMwOiBuZXcgUFBTIHNvdXJjZSBwdHAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEyLjczNTU2M10gbWFj
YiBmZjBiMDAwMC5ldGhlcm5ldDogZ2VtLXB0cC10aW1lciBwdHAgY2xvY2sgcmVnaXN0ZXJlZC48
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgMTIuNzQ1NzI0XSBtYWNiIGZmMGMwMDAwLmV0aGVybmV0IGNvbnRyb2xfYmxh
Y2s6IFBIWSBbZmYwYzAwMDAuZXRoZXJuZXQtZmZmZmZmZmY6MDFdIGRyaXZlciBbWGlsaW54PGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgUENTL1BNQSBQSFldPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgKGlycT1QT0xMKTxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxMi43NTM0NjldIG1hY2IgZmYwYzAwMDAuZXRoZXJuZXQgY29udHJvbF9ibGFj
azogY29uZmlndXJpbmcgZm9yIHBoeS9nbWlpIGxpbmsgbW9kZTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMi43NjE4
MDRdIHBwcyBwcHMxOiBuZXcgUFBTIHNvdXJjZSBwdHAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEyLjc2NTM5OF0g
bWFjYiBmZjBjMDAwMC5ldGhlcm5ldDogZ2VtLXB0cC10aW1lciBwdHAgY2xvY2sgcmVnaXN0ZXJl
ZC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IEF1dG8tbmVnb3RpYXRpb246IG9mZjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgQXV0by1uZWdvdGlhdGlvbjog
b2ZmPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDE2LjgyODE1MV0gbWFjYiBmZjBiMDAwMC5ldGhlcm5ldCBjb250cm9s
X3JlZDogdW5hYmxlIHRvIGdlbmVyYXRlIHRhcmdldCBmcmVxdWVuY3k6IDEyNTAwMDAwMCBIejxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxNi44MzQ1NTNdIG1hY2IgZmYwYjAwMDAuZXRoZXJuZXQgY29udHJvbF9yZWQ6
IExpbmsgaXMgVXAgLSAxR2Jwcy9GdWxsIC0gZmxvdyBjb250cm9sIG9mZjxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
Ni44NjA1NTJdIG1hY2IgZmYwYzAwMDAuZXRoZXJuZXQgY29udHJvbF9ibGFjazogdW5hYmxlIHRv
IGdlbmVyYXRlIHRhcmdldCBmcmVxdWVuY3k6IDEyNTAwMDAwMCBIejxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNi44
NjcwNTJdIG1hY2IgZmYwYzAwMDAuZXRoZXJuZXQgY29udHJvbF9ibGFjazogTGluayBpcyBVcCAt
IDFHYnBzL0Z1bGwgLSBmbG93IGNvbnRyb2wgb2ZmPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBTdGFydGluZyBGYWlsc2FmZSBT
ZWN1cmUgU2hlbGwgc2VydmVyIGluIHBvcnQgMjIyMjogc3NoZDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgZG9uZS48YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFN0YXJ0aW5nIHJwY2JpbmQgZGFlbW9uLi4uZG9uZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE3LjA5
MzAxOV0gcnRjLWxwYzU1IHJ0Y19scGM1NTogbHBjNTVfcnRjX2dldF90aW1lOiBiYWQgcmVzdWx0
OiAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBod2Nsb2NrOiBSVENfUkRfVElNRTogSW52YWxpZCBleGNoYW5nZTxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
U3RhcnRpbmcgU3RhdGUgTWFuYWdlciBTZXJ2aWNlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBTdGFydCBzdGF0ZS1tYW5hZ2Vy
IHJlc3RhcnRlci4uLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MSBGb3J3YXJkaW5nIEFFUyBvcGVyYXRpb246
IDMyNTQ3Nzk5NTE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFN0YXJ0aW5nIC91c3Ivc2Jpbi94ZW5zdG9yZWQuLi4uWyDCoCAx
Ny4yNjUyNTZdIEJUUkZTOiBkZXZpY2UgZnNpZCA4MGVmYzIyNC1jMjAyLTRmOGUtYTk0OS00ZGFl
N2YwNGEwYWE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBkZXZpZCAxIHRyYW5zaWQg
NzQ0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgL2Rldi9kbS0wPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBzY2FubmVkIGJ5IHVkZXZkICgzODUpPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IDE3LjM0OTkzM10gQlRSRlMgaW5mbyAoZGV2aWNlIGRtLTApOiBkaXNrIHNwYWNlIGNhY2hpbmcg
aXMgZW5hYmxlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy4zNTA2NzBdIEJUUkZTIGluZm8gKGRldmljZSBkbS0w
KTogaGFzIHNraW5ueSBleHRlbnRzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE3LjM2NDM4NF0gQlRSRlMgaW5mbyAo
ZGV2aWNlIGRtLTApOiBlbmFibGluZyBzc2Qgb3B0aW1pemF0aW9uczxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy44
MzA0NjJdIEJUUkZTOiBkZXZpY2UgZnNpZCAyN2ZmNjY2Yi1mNGU1LTRmOTAtOTA1NC1jMjEwZGI1
YjJlMmUgZGV2aWQgMSB0cmFuc2lkIDY8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAv
ZGV2L21hcHBlci9jbGllbnRfcHJvdiBzY2FubmVkIGJ5PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgbWtmcy5idHJmczxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKDUy
Nik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgMTcuODcyNjk5XSBCVFJGUyBpbmZvIChkZXZpY2UgZG0tMSk6IHVzaW5n
IGZyZWUgc3BhY2UgdHJlZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy44NzI3NzFdIEJUUkZTIGluZm8gKGRldmlj
ZSBkbS0xKTogaGFzIHNraW5ueSBleHRlbnRzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE3Ljg3ODExNF0gQlRSRlMg
aW5mbyAoZGV2aWNlIGRtLTEpOiBmbGFnZ2luZyBmcyB3aXRoIGJpZyBtZXRhZGF0YSBmZWF0dXJl
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDE3Ljg5NDI4OV0gQlRSRlMgaW5mbyAoZGV2aWNlIGRtLTEpOiBlbmFibGlu
ZyBzc2Qgb3B0aW1pemF0aW9uczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxNy44OTU2OTVdIEJUUkZTIGluZm8gKGRl
dmljZSBkbS0xKTogY2hlY2tpbmcgVVVJRCB0cmVlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgU2V0dGluZyBkb21h
aW4gMCBuYW1lLCBkb21pZCBhbmQgSlNPTiBjb25maWcuLi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IERvbmUgc2V0dGluZyB1
cCBEb20wPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBTdGFydGluZyB4ZW5jb25zb2xlZC4uLjxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgU3RhcnRpbmcgUUVN
VSBhcyBkaXNrIGJhY2tlbmQgZm9yIGRvbTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFN0YXJ0aW5nIGRvbWFpbiB3YXRjaGRv
ZyBkYWVtb246IHhlbndhdGNoZG9nZCBzdGFydHVwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxOC40MDg2
NDddIEJUUkZTOiBkZXZpY2UgZnNpZCA1ZTA4ZDVlOS1iYzJhLTQ2YjktYWY2YS00NGM3MDg3Yjg5
MjEgZGV2aWQgMSB0cmFuc2lkIDY8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAvZGV2
L21hcHBlci9jbGllbnRfY29uZmlnIHNjYW5uZWQgYnk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBta2ZzLmJ0cmZzPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoNTc0
KTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgW2RvbmVdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE4LjQ2NTU1Ml0gQlRSRlMgaW5mbyAoZGV2aWNl
IGRtLTIpOiB1c2luZyBmcmVlIHNwYWNlIHRyZWU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTguNDY1NjI5XSBCVFJG
UyBpbmZvIChkZXZpY2UgZG0tMik6IGhhcyBza2lubnkgZXh0ZW50czxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxOC40
NzEwMDJdIEJUUkZTIGluZm8gKGRldmljZSBkbS0yKTogZmxhZ2dpbmcgZnMgd2l0aCBiaWcgbWV0
YWRhdGEgZmVhdHVyZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgU3RhcnRpbmcgY3JvbmQ6IFsgwqAgMTguNDgyMzcxXSBCVFJG
UyBpbmZvIChkZXZpY2UgZG0tMik6IGVuYWJsaW5nIHNzZCBvcHRpbWl6YXRpb25zPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIDE4LjQ4NjY1OV0gQlRSRlMgaW5mbyAoZGV2aWNlIGRtLTIpOiBjaGVja2luZyBVVUlEIHRy
ZWU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IE9LPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBzdGFydGluZyByc3lzbG9nZCAuLi4gTG9nIHBhcnRpdGlvbiBy
ZWFkeSBhZnRlciAwIHBvbGwgbG9vcHM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IGRvbmU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IHJzeXNsb2dkOiBjYW5u
b3QgY29ubmVjdCB0byA8YSBocmVmPSJodHRwOi8vMTcyLjE4LjAuMTo1MTQiIHJlbD0ibm9yZWZl
cnJlciIgdGFyZ2V0PSJfYmxhbmsiPjE3Mi4xOC4wLjE6NTE0PC9hPiAmbHQ7PGEgaHJlZj0iaHR0
cDovLzE3Mi4xOC4wLjE6NTE0IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRw
Oi8vMTcyLjE4LjAuMTo1MTQ8L2E+Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cDovLzE3Mi4xOC4wLjE6
NTE0IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwOi8vMTcyLjE4LjAuMTo1
MTQ8L2E+ICZsdDs8YSBocmVmPSJodHRwOi8vMTcyLjE4LjAuMTo1MTQiIHJlbD0ibm9yZWZlcnJl
ciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHA6Ly8xNzIuMTguMC4xOjUxNDwvYT4mZ3Q7Jmd0OzogTmV0
d29yayBpcyB1bnJlYWNoYWJsZSBbdjguMjIwOC4wIHRyeTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoDxhIGhyZWY9Imh0dHBzOi8vd3d3LnJzeXNsb2cuY29tL2UvMjAyNyIgcmVsPSJu
b3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly93d3cucnN5c2xvZy5jb20vZS8yMDI3
PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly93d3cucnN5c2xvZy5jb20vZS8yMDI3IiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3d3dy5yc3lzbG9nLmNvbS9lLzIwMjc8
L2E+Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly93d3cucnN5c2xvZy5jb20vZS8yMDI3IiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3d3dy5yc3lzbG9nLmNvbS9lLzIw
Mjc8L2E+ICZsdDs8YSBocmVmPSJodHRwczovL3d3dy5yc3lzbG9nLmNvbS9lLzIwMjciIHJlbD0i
bm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vd3d3LnJzeXNsb2cuY29tL2UvMjAy
NzwvYT4mZ3Q7Jmd0OyBdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDE4LjY3MDYzN10gQlRSRlM6IGRldmljZSBmc2lk
IDM5ZDdkOWUxLTk2N2QtNDc4ZS05NGFlLTY5MGRlYjcyMjA5NSBkZXZpZCAxIHRyYW5zaWQgNjA4
IC9kZXYvZG0tMzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHNjYW5uZWQgYnkgdWRl
dmQgKDUxOCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBQbGVhc2UgaW5zZXJ0IFVTQiB0b2tlbiBhbmQgZW50ZXIg
eW91ciByb2xlIGluIGxvZ2luIHByb21wdC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBsb2dpbjo8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgTy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg0L/QvSwgMjQg0LDQ
v9GALiAyMDIz4oCv0LMuINCyIDIzOjM5LCBTdGVmYW5vIFN0YWJlbGxpbmkgJmx0OzxhIGhyZWY9
Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxp
bmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlA
a2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0
OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVm
PSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZndDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEhpIE9sZWcs
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEhlcmUgaXMgdGhlIGlzc3VlIGZyb20geW91ciBsb2dz
Ojxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTRXJyb3IgSW50ZXJydXB0IG9uIENQVTAsIGNvZGUg
MHhiZTAwMDAwMCAtLSBTRXJyb3I8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU0Vycm9ycyBhcmUg
c3BlY2lhbCBzaWduYWxzIHRvIG5vdGlmeSBzb2Z0d2FyZSBvZiBzZXJpb3VzIGhhcmR3YXJlPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgZXJyb3JzLsKgIFNvbWV0aGluZyBpcyBnb2luZyB2ZXJ5IHdyb25nLiBE
ZWZlY3RpdmUgaGFyZHdhcmUgaXMgYTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHBvc3NpYmlsaXR5LsKgIEFu
b3RoZXIgcG9zc2liaWxpdHkgaWYgc29mdHdhcmUgYWNjZXNzaW5nIGFkZHJlc3MgcmFuZ2VzPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgdGhhdCBpdCBpcyBub3Qgc3VwcG9zZWQgdG8sIHNvbWV0aW1lcyBpdCBj
YXVzZXMgU0Vycm9ycy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgQ2hlZXJzLDxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqBTdGVmYW5vPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgT24gTW9uLCAyNCBBcHIgMjAyMywgT2xlZyBOaWtpdGVua28gd3JvdGU6PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgSGVsbG8sPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgVGhhbmtzIGd1eXMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBJIGZvdW5kIG91
dCB3aGVyZSB0aGUgcHJvYmxlbSB3YXMuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBOb3cgZG9tMCBi
b290ZWQgbW9yZS4gQnV0IEkgaGF2ZSBhIG5ldyBvbmUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBU
aGlzIGlzIGEga2VybmVsIHBhbmljIGR1cmluZyBEb20wIGxvYWRpbmcuPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBNYXliZSBzb21lb25lIGlzIGFibGUgdG8gc3VnZ2VzdCBzb21ldGhpbmcgPzxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBPLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAzLjc3MTM2Ml0gc2ZwX3JlZ2lzdGVyX2J1czogdXBzdHJlYW0gb3BzIGF0dGFj
aDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzc2MTE5XSBzZnBfcmVnaXN0ZXJfYnVz
OiBCdXMgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzgwNDU5XSBz
ZnBfcmVnaXN0ZXJfc29ja2V0OiByZWdpc3RlciBzZnBfYnVzIHN1Y2NlZWRlZDxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDMuNzg5Mzk5XSBvZl9jZnNfaW5pdDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDMuNzg5NDk5XSBvZl9jZnNfaW5pdDogT0s8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAzLjc5MTY4NV0gY2xrOiBOb3QgZGlzYWJsaW5nIHVudXNlZCBjbG9ja3M8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwMzU1XSBTRXJyb3IgSW50ZXJydXB0IG9uIENQ
VTAsIGNvZGUgMHhiZTAwMDAwMCAtLSBTRXJyb3I8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
MTEuMDEwMzgwXSBDUFU6IDAgUElEOiA5IENvbW06IGt3b3JrZXIvdTQ6MCBOb3QgdGFpbnRlZCA1
LjE1LjcyLXhpbGlueC12MjAyMi4xICMxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAx
MDM5M10gV29ya3F1ZXVlOiBldmVudHNfdW5ib3VuZCBhc3luY19ydW5fZW50cnlfZm48YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDE0XSBwc3RhdGU6IDYwMDAwMDA1IChuWkN2IGRh
aWYgLVBBTiAtVUFPIC1UQ08gLURJVCAtU1NCUyBCVFlQRT0tLSk8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgMTEuMDEwNDIyXSBwYyA6IHNpbXBsZV93cml0ZV9lbmQrMHhkMC8weDEzMDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA0MzFdIGxyIDogZ2VuZXJpY19wZXJmb3JtX3dy
aXRlKzB4MTE4LzB4MWUwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQzOF0gc3Ag
OiBmZmZmZmZjMDA4MDliOTEwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ0MV0g
eDI5OiBmZmZmZmZjMDA4MDliOTEwIHgyODogMDAwMDAwMDAwMDAwMDAwMCB4Mjc6IGZmZmZmZmVm
NjliYTg4YzA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDUxXSB4MjY6IDAwMDAw
MDAwMDAwMDNlZWMgeDI1OiBmZmZmZmY4MDc1MTVkYjAwIHgyNDogMDAwMDAwMDAwMDAwMDAwMDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA0NTldIHgyMzogZmZmZmZmYzAwODA5YmE5
MCB4MjI6IDAwMDAwMDAwMDJhYWMwMDAgeDIxOiBmZmZmZmY4MDczMTVhMjYwPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ3Ml0geDIwOiAwMDAwMDAwMDAwMDAxMDAwIHgxOTogZmZm
ZmZmZmUwMjAwMDAwMCB4MTg6IDAwMDAwMDAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgMTEuMDEwNDgxXSB4MTc6IDAwMDAwMDAwZmZmZmZmZmYgeDE2OiAwMDAwMDAwMDAwMDA4
MDAwIHgxNTogMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4w
MTA0OTBdIHgxNDogMDAwMDAwMDAwMDAwMDAwMCB4MTM6IDAwMDAwMDAwMDAwMDAwMDAgeDEyOiAw
MDAwMDAwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ5OF0geDEx
OiAwMDAwMDAwMDAwMDAwMDAwIHgxMDogMDAwMDAwMDAwMDAwMDAwMCB4OSA6IDAwMDAwMDAwMDAw
MDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTA3XSB4OCA6IDAwMDAwMDAw
MDAwMDAwMDAgeDcgOiBmZmZmZmZlZjY5M2JhNjgwIHg2IDogMDAwMDAwMDAyZDg5YjcwMDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1MTVdIHg1IDogZmZmZmZmZmUwMjAwMDAwMCB4
NCA6IGZmZmZmZjgwNzMxNWEzYzggeDMgOiAwMDAwMDAwMDAwMDAxMDAwPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDExLjAxMDUyNF0geDIgOiAwMDAwMDAwMDAyYWFiMDAwIHgxIDogMDAwMDAw
MDAwMDAwMDAwMSB4MCA6IDAwMDAwMDAwMDAwMDAwMDU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgMTEuMDEwNTM0XSBLZXJuZWwgcGFuaWMgLSBub3Qgc3luY2luZzogQXN5bmNocm9ub3VzIFNF
cnJvciBJbnRlcnJ1cHQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTM5XSBDUFU6
IDAgUElEOiA5IENvbW06IGt3b3JrZXIvdTQ6MCBOb3QgdGFpbnRlZCA1LjE1LjcyLXhpbGlueC12
MjAyMi4xICMxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDU0NV0gSGFyZHdhcmUg
name: D14 Viper Board - White Unit (DT)
> [   11.010548] Workqueue: events_unbound async_run_entry_fn
> [   11.010556] Call trace:
> [   11.010558]  dump_backtrace+0x0/0x1c4
> [   11.010567]  show_stack+0x18/0x2c
> [   11.010574]  dump_stack_lvl+0x7c/0xa0
> [   11.010583]  dump_stack+0x18/0x34
> [   11.010588]  panic+0x14c/0x2f8
> [   11.010597]  print_tainted+0x0/0xb0
> [   11.010606]  arm64_serror_panic+0x6c/0x7c
> [   11.010614]  do_serror+0x28/0x60
> [   11.010621]  el1h_64_error_handler+0x30/0x50
> [   11.010628]  el1h_64_error+0x78/0x7c
> [   11.010633]  simple_write_end+0xd0/0x130
> [   11.010639]  generic_perform_write+0x118/0x1e0
> [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> [   11.010650]  generic_file_write_iter+0x78/0xd0
> [   11.010656]  __kernel_write+0xfc/0x2ac
> [   11.010665]  kernel_write+0x88/0x160
> [   11.010673]  xwrite+0x44/0x94
> [   11.010680]  do_copy+0xa8/0x104
> [   11.010686]  write_buffer+0x38/0x58
> [   11.010692]  flush_buffer+0x4c/0xbc
> [   11.010698]  __gunzip+0x280/0x310
> [   11.010704]  gunzip+0x1c/0x28
> [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> [   11.010715]  do_populate_rootfs+0x80/0x164
> [   11.010722]  async_run_entry_fn+0x48/0x164
> [   11.010728]  process_one_work+0x1e4/0x3a0
> [   11.010736]  worker_thread+0x7c/0x4c0
> [   11.010743]  kthread+0x120/0x130
> [   11.010750]  ret_from_fork+0x10/0x20
> [   11.010757] SMP: stopping secondary CPUs
> [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> [   11.010788] PHYS_OFFSET: 0x0
> [   11.010790] CPU features: 0x00000401,00000842
> [   11.010795] Memory Limit: none
> [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>
> On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
> > Hi Oleg,
> >
> > On 21/04/2023 14:49, Oleg Nikitenko wrote:
> > > Hello Michal,
> > >
> > > I was not able to enable earlyprintk in the xen for now.
> > > I decided to choose another way.
> > > This is a xen's command line that I found out completely.
> > >
> > > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> > Yes, adding a printk() in Xen was also a good idea.
> > So you are absolutely right about a command line.
> >
> > > Now I am going to find out why xen did not have the correct parameters from the device tree.
> > Maybe you will find this document helpful:
> > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> >
> > ~Michal
> >
> > > Regards,
> > > Oleg
> > > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > On 21/04/2023 10:04, Oleg Nikitenko wrote:
> > > > > Hello Michal,
> > > > >
> > > > > Yes, I use yocto.
> > > > > Yesterday all day long I tried to follow your suggestions.
> > > > > I faced a problem.
> > > > > Manually in the xen config build file I pasted the strings:
> > > > In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
> > > > You shouldn't really modify .config file but if you do, you should execute "make olddefconfig" afterwards.
> > > >
> > > > > CONFIG_EARLY_PRINTK
> > > > > CONFIG_EARLY_PRINTK_ZYNQMP
> > > > > CONFIG_EARLY_UART_CHOICE_CADENCE
> > > > I hope you added =y to them.
> > > >
> > > > Anyway, you have at least the following solutions:
> > > > 1) Run bitbake xen -c menuconfig to properly set early printk
> > > > 2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y that is not enabled by default)
> > > > 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
> > > > CONFIG_EARLY_PRINTK_ZYNQMP=y
> > > >
> > > > ~Michal
> > > >
> > > > > Host hangs in build time.
> > > > > Maybe I did not set something in the config build file ?
> > > > >
> > > > > Regards,
> > > > > Oleg
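[Editor's note: Michal's option 3 above can be sketched as a small shell step. This is an illustrative sketch, not part of the thread; the /tmp/xen-demo checkout path is hypothetical, and only the xen/arch/arm/configs/arm64_defconfig location and the CONFIG_EARLY_PRINTK_ZYNQMP=y option come from the email.]

```shell
# Sketch of option 3 from the thread: append the early-printk option to
# Xen's arm64 defconfig so the build picks it up.
# /tmp/xen-demo is a hypothetical checkout path used for illustration.
XEN_SRC=/tmp/xen-demo
DEFCONFIG="$XEN_SRC/xen/arch/arm/configs/arm64_defconfig"

mkdir -p "$(dirname "$DEFCONFIG")"   # stand-in for a real checkout
touch "$DEFCONFIG"

# Append the option only if it is not already present, so the step
# stays idempotent across repeated builds.
for opt in 'CONFIG_EARLY_PRINTK_ZYNQMP=y'; do
    grep -qx "$opt" "$DEFCONFIG" || printf '%s\n' "$opt" >> "$DEFCONFIG"
done

cat "$DEFCONFIG"
```

A Yocto-based build would rebuild Xen afterwards (option 1, `bitbake xen -c menuconfig`, achieves the same interactively).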
> > > > > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > Thanks Michal,
> > > > > >
> > > > > > You gave me an idea.
> > > > > > I am going to try it today.
> > > > > >
> > > > > > Regards,
> > > > > > O.
> > > > > >
> > > > > > On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > > Thanks Stefano.
> > > > > > > I am going to do it today.
> > > > > > >
> > > > > > > Regards,
> > > > > > > O.
> > > > > > > On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > > Hi Michal,
> > > > > > > > >
> > > > > > > > > I corrected xen's command line.
> > > > > > > > > Now it is
> > > > > > > > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> > > > > > > > 4 colors is way too many for xen, just do xen_colors=0-0. There is no advantage in using more than 1 color for Xen.
> > > > > > > >
> > > > > > > > 4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
> > > > > > > > Each color is 256M. For 1600M you should give at least 7 colors. Try:
> > > > > > > >
> > > > > > > > xen_colors=0-0 dom0_colors=1-8
> > > > > > > >
> > > > > > > > > Unfortunately the result was the same.
> > > > > > > > >
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAtIERvbTAgbW9kZTogUmVsYXhlZDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCDCoCDCoCZndDsgKFhFTikgUDJNOiA0MC1iaXQgSVBBIHdpdGggNDAtYml0IFBBIGFuZCA4LWJp
dCBWTUlEPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKSBQMk06IDMgbGV2ZWxzIHdpdGggb3JkZXItMSBy
b290LCBWVENSIDB4MDAwMDAwMDA4MDAyMzU1ODxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgU2NoZWR1
bGluZyBncmFudWxhcml0eTogY3B1LCAxIENQVSBwZXIgc2NoZWQtcmVzb3VyY2U8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAg
wqAmZ3Q7IChYRU4pIENvbG9yaW5nIGdlbmVyYWwgaW5mb3JtYXRpb248YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
IChYRU4pIFdheSBzaXplOiA2NGtCPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKSBNYXguIG51bWJlciBv
ZiBjb2xvcnMgYXZhaWxhYmxlOiAxNjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgWGVuIGNvbG9yKHMp
OiBbIDAgXTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgYWx0ZXJuYXRpdmVzOiBQYXRjaGluZyB3aXRo
IGFsdCB0YWJsZSAwMDAwMDAwMDAwMmNjNjkwIC0mZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgMDAwMDAwMDAwMDJjY2MwYzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgQ29sb3IgYXJy
YXkgYWxsb2NhdGlvbiBmYWlsZWQgZm9yIGRvbTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IChYRU4pPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0OyAoWEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0OyAoWEVOKSBQYW5pYyBvbiBDUFUgMDo8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIEVycm9yIGNyZWF0aW5nIGRvbWFpbiAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKSAqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKTxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCDCoCDCoCZndDsgKFhFTikgUmVib290IGluIGZpdmUgc2Vjb25kcy4uLjxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IEkgYW0gZ29pbmcgdG8gZmluZCBvdXQgaG93IGNvbW1hbmQg
bGluZSBhcmd1bWVudHMgcGFzc2VkIGFuZCBwYXJzZWQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDsgUmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IE9sZWc8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0OyDRgdGALCAxOSDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMTE6MjUsIE9sZWcg
TmlraXRlbmtvICZsdDs8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJn
ZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29v
ZEBnbWFpbC5jb208L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxh
bmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86
b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwu
Y29tPC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2Js
YW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBn
bWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWls
LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0OyZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBn
bWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWls
LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxh
bmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86
b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwu
Y29tPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdt
YWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFu
ayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20i
IHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNo
aWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozo8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqBIaSBNaWNoYWwsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgWW91
IHB1dCBteSBub3NlIGludG8gdGhlIHByb2JsZW0uIFRoYW5rIHlvdS48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
IEkgYW0gZ29pbmcgdG8gdXNlIHlvdXIgcG9pbnQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBMZXQmIzM5O3Mg
c2VlIHdoYXQgaGFwcGVucy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBSZWdhcmRz
LDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDsgT2xlZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0OyDRgdGALCAxOSDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMTA6MzcsIE1pY2hh
bCBPcnplbCAmbHQ7PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9
Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFt
ZC5jb208L2E+Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWlj
aGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5v
cnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0
OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIg
dGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhy
ZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5v
cnplbEBhbWQuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9y
emVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2Js
YW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxA
YW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsi
Pm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86
bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNv
bTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRh
cmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1p
Y2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwu
b3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBh
bWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNv
bSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7Jmd0OyZn
dDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEhpIE9sZWcsPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoE9uIDE5LzA0LzIwMjMgMDk6MDMsIE9sZWcgTmlr
aXRlbmtvIHdyb3RlOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBIZWxsbyBTdGVmYW5vLDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFRo
YW5rcyBmb3IgdGhlIGNsYXJpZmljYXRpb24uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBNeSBjb21wYW55IHVzZXMgeW9jdG8gZm9yIGltYWdlIGdlbmVyYXRpb24uPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBXaGF0IGtpbmQgb2YgaW5mb3JtYXRpb24gZG8geW91IG5lZWQg
dG8gY29uc3VsdCBtZSBpbiB0aGlzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgY2Fz
ZSA/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgTWF5YmUgbW9kdWxlcyBzaXplcy9hZGRyZXNzZXMgd2hpY2ggd2VyZSBtZW50aW9u
ZWQgYnkgQEp1bGllbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEdyYWxsPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRh
cmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4mZ3Q7
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFu
ayI+anVsaWVuQHhlbi5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4
ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0OyZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxp
ZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4
ZW4ub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmci
IHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4m
Z3Q7Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRh
cmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4mZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4gJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5q
dWxpZW5AeGVuLm9yZzwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxp
ZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4
ZW4ub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmci
IHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4m
Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyA/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoFNvcnJ5IGZvciBqdW1waW5nIGludG8gZGlzY3Vzc2lvbiwgYnV0IEZXSUNTIHRoZSBY
ZW4gY29tbWFuZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGxpbmUgeW91IHByb3Zp
ZGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgc2VlbXMgdG8gYmU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBub3QgdGhlPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgb25lPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgWGVuIGJvb3Rl
ZCB3aXRoLiBUaGUgZXJyb3IgeW91IGFyZSBvYnNlcnZpbmcgbW9zdCBsaWtlbHkgaXMgZHVlPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdG8gZG9tMCBjb2xvcnM8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqBjb25maWd1cmF0aW9uIG5vdDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJl
aW5nPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgc3BlY2lmaWVkIChpLmUuIGxhY2sgb2YgZG9t
MF9jb2xvcnM9Jmx0OyZndDsgcGFyYW1ldGVyKS4gQWx0aG91Z2ggaW48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqB0aGUgY29tbWFuZCBsaW5lIHlvdTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHBy
b3ZpZGVkLCB0aGlzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgcGFyYW1ldGVyPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgaXMgc2V0LCBJIHN0cm9uZ2x5IGRvdWJ0IHRoYXQgdGhp
cyBpcyB0aGUgYWN0dWFsIGNvbW1hbmQgbGluZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoGluIHVzZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgWW91
IHdyb3RlOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHhlbix4ZW4tYm9vdGFyZ3MgPSAmcXVv
dDtjb25zb2xlPWR0dWFydCBkdHVhcnQ9c2VyaWFsMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoGRvbTBfbWVtPTE2MDBNIGRvbTBfbWF4X3ZjcHVzPTI8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBk
b20wX3ZjcHVzX3Bpbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJvb3RzY3J1Yj0w
IHZ3Zmk9bmF0aXZlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgc2NoZWQ9bnVsbCB0aW1lcl9z
bG9wPTAgd2F5X3N6aXplPTY1NTM2IHhlbl9jb2xvcnM9MC0zPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgZG9tMF9jb2xvcnM9NC03JnF1b3Q7Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBidXQ6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgMSkgd2F5
X3N6aXplIGhhcyBhIHR5cG88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAyKSB5b3Ugc3BlY2lm
aWVkIDQgY29sb3JzICgwLTMpIGZvciBYZW4sIGJ1dCB0aGUgYm9vdCBsb2cgc2F5czxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHRoYXQgWGVuIGhhcyBvbmx5PGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgb25lOjxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoChYRU4pIFhlbiBjb2xvcihzKTogWyAwIF08YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgVGhpcyBtYWtlcyBtZSBiZWxpZXZlIHRoYXQg
bm8gY29sb3JzIGNvbmZpZ3VyYXRpb24gYWN0dWFsbHkgZW5kPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgdXAgaW4gY29tbWFuZCBsaW5lPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdGhhdCBYZW48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqBib290ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB3aXRoLjxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoFNpbmdsZSBjb2xvciBmb3IgWGVuIGlzIGEgJnF1b3Q7ZGVmYXVsdCBp
ZiBub3Qgc3BlY2lmaWVkJnF1b3Q7IGFuZCB3YXk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqBzaXplIHdhcyBwcm9iYWJseTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGNhbGN1bGF0ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBieSBh
c2tpbmc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBIVy48YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU28gSSB3b3VsZCBzdWdnZXN0IHRvIGZpcnN0IGNy
b3NzLWNoZWNrIHRoZSBjb21tYW5kIGxpbmUgaW48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqB1c2UuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoH5NaWNo
YWw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBPbGVnPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg0LLRgiwgMTgg0LDQv9GALiAyMDIz4oCv
0LMuINCyIDIwOjQ0LCBTdGVmYW5vIFN0YWJlbGxpbmk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmbHQ7PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdl
dD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxp
bmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9h
PiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9h
PiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3Rh
YmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVs
bGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwv
YT4mZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+
c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0
YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwu
b3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsi
PnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2Vy
> On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> > Hi Julien,
> >
> > > > This feature has not been merged in Xen upstream yet
> > > would assume that upstream + the series on the ML [1] work
> >
> > Please clarify this point.
> > Because the two thoughts are controversial.
>
> Hi Oleg,
>
> As Julien wrote, there is nothing controversial. As you are aware,
> Xilinx maintains a separate Xen tree specific for Xilinx here:
>
> https://github.com/xilinx/xen
>
> and the branch you are using (xlnx_rebase_4.16) comes from there.
>
> Instead, the upstream Xen tree lives here:
>
> https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>
> The Cache Coloring feature that you are trying to configure is present
> in xlnx_rebase_4.16, but not yet present upstream (there is an
> outstanding patch series to add cache coloring to Xen upstream but it
> hasn't been merged yet.)
>
> Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much
> for you as you already have Cache Coloring as a feature there.
>
> I take you are using ImageBuilder to generate the boot configuration?
> If so, please post the ImageBuilder config file that you are using.
>
> But from the boot message, it looks like the colors configuration for
> Dom0 is incorrect.
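[Editor's note: a minimal sketch of what an ImageBuilder config enabling cache coloring on the xlnx_rebase_4.16 branch might look like. The variable names (MEMORY_START, XEN_CMD, etc.) follow ImageBuilder's uboot-script-gen conventions; the dom0_colors=/xen_colors= command-line options are specific to the Xilinx cache-coloring tree, so the exact option names and color ranges here are assumptions, not a tested configuration.]

```shell
# Hypothetical ImageBuilder config (passed to uboot-script-gen -c) for a
# cache-coloring boot on xlnx_rebase_4.16. Paths and color ranges are
# placeholders -- adjust to the actual board and LLC geometry.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

DEVICE_TREE="system.dtb"
XEN="xen"
DOM0_KERNEL="Image"
DOM0_RAMDISK="dom0-ramdisk.cpio"

# No DomUs in this minimal example; Dom0 only.
NUM_DOMUS=0

# Give Dom0 colors 0-3 and Xen itself a disjoint set (4-7), so the two
# never share last-level-cache partitions.
XEN_CMD="console=dtuart dom0_mem=1G dom0_colors=0-3 xen_colors=4-7"
```

The point Stefano raises above is exactly this kind of file: if the colors assigned to Dom0 in XEN_CMD do not match the platform's available colors, the boot messages will report an incorrect Dom0 color configuration.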
--00000000000092b17105fbb77d55--


From xen-devel-bounces@lists.xenproject.org Mon May 15 08:49:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534631.831814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTtF-0006ZV-9x; Mon, 15 May 2023 08:49:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534631.831814; Mon, 15 May 2023 08:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyTtF-0006ZO-6u; Mon, 15 May 2023 08:49:09 +0000
Received: by outflank-mailman (input) for mailman id 534631;
 Mon, 15 May 2023 08:49:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CjwX=BE=citrix.com=prvs=492a8bb35=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pyTtD-0006Z7-4x
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:49:07 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 57724042-f2fd-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 10:49:04 +0200 (CEST)
Received: from mail-mw2nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 15 May 2023 04:49:01 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB5943.namprd03.prod.outlook.com (2603:10b6:510:30::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 08:48:59 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 08:48:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Mon, 15 May 2023 10:48:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
Message-ID: <ZGHx9Mk3UGPdli1h@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230513011720.3978354-1-sstabellini@kernel.org>
MIME-Version: 1.0

On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> Xen always generates an XSDT table even if the firmware provided an RSDT
> table. Instead of copying the XSDT header from the firmware table (which
> might be missing), generate the XSDT header from a preset.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> ---
>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
>  1 file changed, 9 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> index 307edc6a8c..5fde769863 100644
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
>                                        paddr_t *addr)
>  {
>      struct acpi_table_xsdt *xsdt;
> -    struct acpi_table_header *table;
> -    struct acpi_table_rsdp *rsdp;
>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
>      unsigned long size = sizeof(*xsdt);
>      unsigned int i, j, num_tables = 0;
> -    paddr_t xsdt_paddr;
>      int rc;
> +    struct acpi_table_header header = {
> +        .signature    = "XSDT",
> +        .length       = sizeof(struct acpi_table_header),
> +        .revision     = 0x1,
> +        .oem_id       = "Xen",
> +        .oem_table_id = "HVM",

I think this is wrong, as according to the spec the OEM Table ID must
match the OEM Table ID in the FADT.

We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
possibly also the other OEM related fields.

Alternatively we might want to copy and use the RSDT on systems that
lack an XSDT, or even just copy the header from the RSDT into Xen's
crafted XSDT, since the format of the RSDT and XSDT headers is
exactly the same (the difference is in the size of the table entries
that come after).

> +        .oem_revision = 0,
> +    };

This wants to be initdata static const if we go down this route.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 15 08:56:28 2023
Message-ID: <dff8ab04-ae35-3a71-b923-abe722dcdb1c@xen.org>
Date: Mon, 15 May 2023 09:56:10 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230512143535.29679-3-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 12/05/2023 15:35, Michal Orzel wrote:
> At the moment, even when an SMMU is I/O coherent, we clean the
> updated page tables as a result of not advertising the coherency feature.
> The SMMUv3 coherency feature means that page table walks and accesses to
> memory structures and queues are I/O coherent (refer to ARM IHI 0070 E.A,
> section 3.15).
> 
> Follow the same steps that were done for SMMU v1,v2 driver by the commit:
> 080dcb781e1bc3bb22f55a9dfdecb830ccbabe88
> 
> The same restrictions apply, meaning that in order to advertise the
> coherent table walk platform feature, all the SMMU devices need to report
> the coherency feature. This is because the page tables (which we share
> with the CPU) are populated before any device assignment, and if a device
> were behind a non-coherent SMMU, we would have to scan the tables and
> clean the cache.
> 
> It is to be noted that the SBSA/BSA (refer ARM DEN0094C 1.0C, section D)
> requires that all SMMUv3 devices support I/O coherency.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> There are very few platforms out there with SMMUv3, but I have never seen
> an SMMUv3 that is not I/O coherent.
> ---
>   xen/drivers/passthrough/arm/smmu-v3.c | 24 +++++++++++++++++++++++-
>   1 file changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index bf053cdb6d5c..2adaad0fa038 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -2526,6 +2526,15 @@ static const struct dt_device_match arm_smmu_of_match[] = {
>   };
>   
>   /* Start of Xen specific code. */
> +
> +/*
> + * Platform features. It indicates the list of features supported by all
> + * SMMUs. Actually we only care about coherent table walk, which in case of
> + * SMMUv3 is implied by the overall coherency feature (refer ARM IHI 0070 E.A,
> + * section 3.15 and SMMU_IDR0.COHACC bit description).
> + */
> +static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;

AFAICT, this variable is not meant to change after boot. So please add 
the attribute __ro_after_init.

With that:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 15 08:57:43 2023
Message-ID: <320a1d9f-886e-d2b6-b483-1bb07e5899cd@xen.org>
Date: Mon, 15 May 2023 09:57:28 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH 0/2] xen/arm: smmuv3: Advertise coherent table walk
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230512143535.29679-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 12/05/2023 15:35, Michal Orzel wrote:
> Based on the work done for SMMU v1,v2 by commit:
> 080dcb781e1bc3bb22f55a9dfdecb830ccbabe88
> 
> Michal Orzel (2):
>    xen/arm: smmuv3: Constify arm_smmu_get_by_dev() parameter

I have committed this patch.

>    xen/arm: smmuv3: Advertise coherent table walk if supported

For this one, I would like one extra change to harden the code (see my 
reply to the patch).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 15 08:58:15 2023
Message-ID: <53af4bc6-97ad-d806-003b-38e70ccd2b58@amd.com>
Date: Mon, 15 May 2023 10:57:43 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: xen cache colors in ARM
Content-Language: en-US
To: Oleg Nikitenko <oleshiiwood@gmail.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>,
	<Stewart.Hildebrand@amd.com>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com>
 <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
 <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com>
 <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
 <CA+SAi2tErcaAkRT5zhTwSE=-jszwAWNtEAnm5jNGEP1NoqbQ3w@mail.gmail.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2tErcaAkRT5zhTwSE=-jszwAWNtEAnm5jNGEP1NoqbQ3w@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hi Oleg,

On 15/05/2023 10:51, Oleg Nikitenko wrote:
> 
> Hello guys,
> 
> Thanks a lot.
> After working through a long list of problems, I was able to run Xen with
> cache coloring for Dom0.
> One more question from my side.
> I want to run a guest in colored mode too.
> I inserted the line llc-colors = "9-13" into the guest config file and got an error:
> [  457.517004] loop0: detected capacity change from 0 to 385840
> Parsing config from /xen/red_config.cfg
> /xen/red_config.cfg:26: config parsing error near `-colors': lexical error
> warning: Config file looks like it contains Python code.
> warning:  Arbitrary Python is no longer supported.
> warning:  See https://wiki.xen.org/wiki/PythonInXlConfig
> Failed to parse config: Invalid argument
> So here is my question:
> Is it possible to assign colors to a DomU via the config file?
> If so, what syntax should I use?
Please, always refer to the relevant documentation. In this case, for xl.cfg:
https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/man/xl.cfg.5.pod.in#L2890

~Michal

> 
> Regards,
> Oleg
> 
> Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com>:
> 
>     Hi Michal,
> 
>     Thanks.
>     This config option was previously named CONFIG_COLORING.
>     That mixed me up.
> 
>     Regards,
>     Oleg
> 
>     Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com>:
> 
>         Hi Oleg,
> 
>         On 11/05/2023 12:02, Oleg Nikitenko wrote:
>         >       
>         >
>         >
>         > Hello,
>         >
>         > Thanks Stefano.
>         > Then the next question.
>         > I cloned the Xen repo from the Xilinx site: https://github.com/Xilinx/xen.git
>         > I managed to build a xlnx_rebase_4.17 branch in my environment.
>         > I did it without coloring first, and did not find any traces of coloring in this branch.
>         > I concluded that coloring is not in the xlnx_rebase_4.17 branch yet.
>         This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the docs:
>         https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
> 
>         It describes the feature and documents the required properties.
> 
>         ~Michal
> 
>         >
>         >
>         > Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org>:
>         >
>         >     We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
>         >     (twice a year) is tested with cache coloring enabled. The last Petalinux
>         >     release is 2023.1 and the kernel used is this:
>         >     https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
>         >
>         >
>         >     On Tue, 9 May 2023, Oleg Nikitenko wrote:
>         >     > Hello guys,
>         >     >
>         >     > I have a couple of more questions.
>         >     > Have you ever run Xen with cache coloring on a Zynq UltraScale+ MPSoC zcu102 xczu15eg?
>         >     > When did you last run Xen with cache coloring?
>         >     > What kernel version did you use for Dom0 the last time you ran Xen with cache coloring?
>         >     >
>         >     > Regards,
>         >     > Oleg
>         >     >
>         >     > Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:
>         >     >       Hi Michal,
>         >     >
>         >     > Thanks.
>         >     >
>         >     > Regards,
>         >     > Oleg
>         >     >
>         >     > Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
>         >     >       Hi Oleg,
>         >     >
>         >     >       Replying, so that you do not need to wait for Stefano.
>         >     >
>         >     >       On 05/05/2023 10:28, Oleg Nikitenko wrote:
>         >     >       >       
>         >     >       >
>         >     >       >
>         >     >       > Hello Stefano,
>         >     >       >
>         >     >       > I would like to try the Xen cache coloring property from this repo: https://xenbits.xen.org/git-http/xen.git
>         >     >       > Could you tell me which branch I should use?
>         >     >       The cache coloring feature is not part of the upstream tree; it is still under review.
>         >     >       You can only find it integrated in the Xilinx Xen tree.
>         >     >
>         >     >       ~Michal
>         >     >
>         >     >       >
>         >     >       > Regards,
>         >     >       > Oleg
>         >     >       >
>         >     >       > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org>:
>         >     >       >
>         >     >       >     I am familiar with the zcu102, but I don't know how you could possibly
>         >     >       >     generate an SError.
>         >     >       >
>         >     >       >     I suggest trying ImageBuilder [1] to generate the boot
>         >     >       >     configuration as a test, because that is known to work well on zcu102.
>         >     >       >
>         >     >       >     [1] https://gitlab.com/xen-project/imagebuilder
>         >     >       >
>         >     >       >
>         >     >       >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
>         >     >       >     > Hello Stefano,
>         >     >       >     >
>         >     >       >     > Thanks for clarification.
>         >     >       >     > We use neither ImageBuilder nor a u-boot boot script.
>         >     >       >     > The model is zcu102 compatible.
>         >     >       >     >
>         >     >       >     > Regards,
>         >     >       >     > O.
>         >     >       >     >
>         >     >       >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
>         >     >       >     >       This is interesting. Are you using Xilinx hardware by any chance? If so,
>         >     >       >     >       which board?
>         >     >       >     >
>         >     >       >     >       Are you using ImageBuilder to generate your boot.scr boot script? If so,
>         >     >       >     >       could you please post your ImageBuilder config file? If not, can you
>         >     >       >     >       post the source of your uboot boot script?
>         >     >       >     >
>         >     >       >     >       SErrors are supposed to be related to a hardware failure of some kind.
>         >     >       >     >       You are not supposed to be able to trigger an SError easily by
>         >     >       >     >       "mistake". I have not seen SErrors due to wrong cache coloring
>         >     >       >     >       configurations on any Xilinx board before.
>         >     >       >     >
>         >     >       >     >       The differences between Xen with and without cache coloring from a
>         >     >       >     >       hardware perspective are:
>         >     >       >     >
>         >     >       >     >       - With cache coloring, the SMMU is enabled and does address translations
>         >     >       >     >         even for dom0. Without cache coloring the SMMU could be disabled, and
>         >     >       >     >         if enabled, the SMMU doesn't do any address translations for Dom0. If
>         >     >       >     >         there is a hardware failure related to SMMU address translation it
>         >     >       >     >         could only trigger with cache coloring. This would be my normal
>         >     >       >     >         suggestion for you to explore, but the failure happens too early
>         >     >       >     >         before any DMA-capable device is programmed. So I don't think this can
>         >     >       >     >         be the issue.
>         >     >       >     >
>         >     >       >     >       - With cache coloring, the memory allocation is very different so you'll
>         >     >       >     >         end up using different DDR regions for Dom0. So if your DDR is
>         >     >       >     >         defective, you might only see a failure with cache coloring enabled
>         >     >       >     >         because you end up using different regions.
>         >     >       >     >
>         >     >       >     >
>         >     >       >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
>         >     >       >     >       > Hi Stefano,
>         >     >       >     >       >
>         >     >       >     >       > Thank you.
>         >     >       >     >       > If I build Xen without coloring support, this error does not occur.
>         >     >       >     >       > All the domains boot fine.
>         >     >       >     >       > Hence it cannot be a hardware issue.
>         >     >       >     >       > The panic occurred while unpacking the rootfs.
>         >     >       >     >       > I attached the Xen/Dom0 boot log without coloring here.
>         >     >       >     >       > The highlighted strings are printed exactly after the point where the panic first occurred.
>         >     >       >     >       >
>         >     >       >     >       >  Xen 4.16.1-pre
>         >     >       >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
>         >     >       >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
>         >     >       >     >       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
>         >     >       >     >       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03,rev 0x4
>         >     >       >     >       > (XEN) 64-bit Execution:
>         >     >       >     >       > (XEN)   Processor Features: 0000000000002222 0000000000000000
>         >     >       >     >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
>         >     >       >     >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
>         >     >       >     >       > (XEN)   Debug Features: 0000000010305106 0000000000000000
>         >     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
>         >     >       >     >       > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
>         >     >       >     >       > (XEN)   ISA Features:  0000000000011120 0000000000000000
>         >     >       >     >       > (XEN) 32-bit Execution:
>         >     >       >     >       > (XEN)   Processor Features: 0000000000000131:0000000000011011
>         >     >       >     >       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
>         >     >       >     >       > (XEN)     Extensions: GenericTimer Security
>         >     >       >     >       > (XEN)   Debug Features: 0000000003010066
>         >     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000
>         >     >       >     >       > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
>         >     >       >     >       > (XEN)                          0000000001260000 0000000002102211
>         >     >       >     >       > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
>         >     >       >     >       > (XEN)                 0000000001112131 0000000000011142 0000000000011121
>         >     >       >     >       > (XEN) Using SMC Calling Convention v1.2
>         >     >       >     >       > (XEN) Using PSCI v1.1
>         >     >       >     >       > (XEN) SMP: Allowing 4 CPUs
>         >     >       >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
>         >     >       >     >       > (XEN) GICv2 initialization:
>         >     >       >     >       > (XEN)         gic_dist_addr=00000000f9010000
>         >     >       >     >       > (XEN)         gic_cpu_addr=00000000f9020000
>         >     >       >     >       > (XEN)         gic_hyp_addr=00000000f9040000
>         >     >       >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
>         >     >       >     >       > (XEN)         gic_maintenance_irq=25
>         >     >       >     >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
>         >     >       >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
>         >     >       >     >       > (XEN) Using scheduler: null Scheduler (null)
>         >     >       >     >       > (XEN) Initializing null scheduler
>         >     >       >     >       > (XEN) WARNING: This is experimental software in development.
>         >     >       >     >       > (XEN) Use at your own risk.
>         >     >       >     >       > (XEN) Allocated console ring of 32 KiB.
>         >     >       >     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
>         >     >       >     >       > (XEN) Bringing up CPU1
>         >     >       >     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
>         >     >       >     >       > (XEN) CPU 1 booted.
>         >     >       >     >       > (XEN) Bringing up CPU2
>         >     >       >     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
>         >     >       >     >       > (XEN) CPU 2 booted.
>         >     >       >     >       > (XEN) Bringing up CPU3
>         >     >       >     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
>         >     >       >     >       > (XEN) Brought up 4 CPUs
>         >     >       >     >       > (XEN) CPU 3 booted.
>         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
>         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
>         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
>         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff<2>smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
>         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
>         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
>         >     >       >     >       > (XEN) I/O virtualisation enabled
>         >     >       >     >       > (XEN)  - Dom0 mode: Relaxed
>         >     >       >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>         >     >       >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>         >     >       >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>         >     >       >     >       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
>         >     >       >     >       > (XEN) *** LOADING DOMAIN 0 ***
>         >     >       >     >       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
>         >     >       >     >       > (XEN) Loading ramdisk from boot module @ 0000000002000000
>         >     >       >     >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
>         >     >       >     >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
>         >     >       >     >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
>         >     >       >     >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
>         >     >       >     >       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
>         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
>         >     >       >     >       > (XEN) Allocating PPI 16 for event channel interrupt
>         >     >       >     >       > (XEN) Extended region 0: 0x81200000->0xa0000000
>         >     >       >     >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
>         >     >       >     >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
>         >     >       >     >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
>         >     >       >     >       > (XEN) Extended region 4: 0x100000000->0x600000000
>         >     >       >     >       > (XEN) Extended region 5: 0x880000000->0x8000000000
>         >     >       >     >       > (XEN) Extended region 6: 0x8001000000->0x10000000000
>         >     >       >     >       > (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
>         >     >       >     >       > (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
>         >     >       >     >       > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
>         >     >       >     >       > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>         >     >       >     >       > (XEN) Std. Loglevel: All
>         >     >       >     >       > (XEN) Guest Loglevel: All
>         >     >       >     >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
>         >     >       >     >       > (XEN) null.c:353: 0 <-- d0v0
>         >     >       >     >       > (XEN) Freed 356kB init memory.
>         >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
>         >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
>         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
>         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
>         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
>         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
>         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
>         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>         >     >       >     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
>         >     >       >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
>         >     >       >     >       > [    0.000000] Machine model: D14 Viper Board - White Unit
>         >     >       >     >       > [    0.000000] Xen 4.16 support found
>         >     >       >     >       > [    0.000000] Zone ranges:
>         >     >       >     >       > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
>         >     >       >     >       > [    0.000000]   DMA32    empty
>         >     >       >     >       > [    0.000000]   Normal   empty
>         >     >       >     >       > [    0.000000] Movable zone start for each node
>         >     >       >     >       > [    0.000000] Early memory node ranges
>         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
>         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
>         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
>         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
>         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
>         >     >       >     >       > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
>         >     >       >     >       > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
>         >     >       >     >       > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
>         >     >       >     >       > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
>         >     >       >     >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
>         >     >       >     >       > [    0.000000] psci: probing for conduit method from DT.
>         >     >       >     >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
>         >     >       >     >       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
>         >     >       >     >       > [    0.000000] psci: Trusted OS migration not required
>         >     >       >     >       > [    0.000000] psci: SMC Calling Convention v1.1
>         >     >       >     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
>         >     >       >     >       > [    0.000000] Detected VIPT I-cache on CPU0
>         >     >       >     >       > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
>         >     >       >     >       > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
>         >     >       >     >       > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
>         >     >       >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
>         >     >       >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
>         >     >       >     >       > [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
>         >     >       >     >       > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
>         >     >       >     >       > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
>         >     >       >     >       > [    0.000000] mem auto-init: clearing system memory may take some time...
>         >     >       >     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
>         >     >       >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
>         >     >       >     >       > [    0.000000] rcu: Hierarchical RCU implementation.
>         >     >       >     >       > [    0.000000] rcu: RCU event tracing is enabled.
>         >     >       >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
>         >     >       >     >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
>         >     >       >     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
>         >     >       >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
>         >     >       >     >       > [    0.000000] Root IRQ handler: gic_handle_irq
>         >     >       >     >       > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
>         >     >       >     >       > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
>         >     >       >     >       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
>         >     >       >     >       > [    0.000258] Console: colour dummy device 80x25
>         >     >       >     >       > [    0.310231] printk: console [hvc0] enabled
>         >     >       >     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
>         >     >       >     >       > [    0.324851] pid_max: default: 32768 minimum: 301
>         >     >       >     >       > [    0.329706] LSM: Security Framework initializing
>         >     >       >     >       > [    0.334204] Yama: becoming mindful.
>         >     >       >     >       > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>         >     >       >     >       > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>         >     >       >     >       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
>         >     >       >     >       > [    0.359132] Grant table initialized
>         >     >       >     >       > [    0.362664] xen:events: Using FIFO-based ABI
>         >     >       >     >       > [    0.366993] Xen: initializing cpu0
>         >     >       >     >       > [    0.370515] rcu: Hierarchical SRCU implementation.
>         >     >       >     >       > [    0.375930] smp: Bringing up secondary CPUs ...
>         >     >       >     >       > (XEN) null.c:353: 1 <-- d0v1
>         >     >       >     >       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>         >     >       >     >       > [    0.382549] Detected VIPT I-cache on CPU1
>         >     >       >     >       > [    0.388712] Xen: initializing cpu1
>         >     >       >     >       > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
>         >     >       >     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
>         >     >       >     >       > [    0.406941] SMP: Total of 2 processors activated.
>         >     >       >     >       > [    0.411698] CPU features: detected: 32-bit EL0 Support
>         >     >       >     >       > [    0.416888] CPU features: detected: CRC32 instructions
>         >     >       >     >       > [    0.422121] CPU: All CPU(s) started at EL1
>         >     >       >     >       > [    0.426248] alternatives: patching kernel code
>         >     >       >     >       > [    0.431424] devtmpfs: initialized
>         >     >       >     >       > [    0.441454] KASLR enabled
>         >     >       >     >       > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
>         >     >       >     >       > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
>         >     >       >     >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
>         >     >       >     >       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
>         >     >       >     >       > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
>         >     >       >     >       > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
>         >     >       >     >       > [    0.519478] audit: initializing netlink subsys (disabled)
>         >     >       >     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
>         >     >       >     >       > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
>         >     >       >     >       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
>         >     >       >     >       > [    0.545608] ASID allocator initialised with 32768 entries
>         >     >       >     >       > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
>         >     >       >     >       > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
>         >     >       >     >       > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>         >     >       >     >       > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
>         >     >       >     >       > [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
>         >     >       >     >       > [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
>         >     >       >     >       > [    0.636520] DRBG: Continuing without Jitter RNG
>         >     >       >     >       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
>         >     >       >     >       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
>         >     >       >     >       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
>         >     >       >     >       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
>         >     >       >     >       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
>         >     >       >     >       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
>         >     >       >     >       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
>         >     >       >     >       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
>         >     >       >     >       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
>         >     >       >     >       > [    1.350132] raid6: int64x8  xor()   773 MB/s
>         >     >       >     >       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
>         >     >       >     >       > [    1.486349] raid6: int64x4  xor()   851 MB/s
>         >     >       >     >       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
>         >     >       >     >       > [    1.622561] raid6: int64x2  xor()   744 MB/s
>         >     >       >     >       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
>         >     >       >     >       > [    1.758770] raid6: int64x1  xor()   517 MB/s
>         >     >       >     >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
>         >     >       >     >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
>         >     >       >     >       > [    1.767957] raid6: using neon recovery algorithm
>         >     >       >     >       > [    1.772824] xen:balloon: Initialising balloon driver
>         >     >       >     >       > [    1.778021] iommu: Default domain type: Translated
>         >     >       >     >       > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
>         >     >       >     >       > [    1.789149] SCSI subsystem initialized
>         >     >       >     >       > [    1.792820] usbcore: registered new interface driver usbfs
>         >     >       >     >       > [    1.798254] usbcore: registered new interface driver hub
>         >     >       >     >       > [    1.803626] usbcore: registered new device driver usb
>         >     >       >     >       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
>         >     >       >     >       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
>         >     >       >     >       > [    1.822903] PTP clock support registered
>         >     >       >     >       > [    1.826893] EDAC MC: Ver: 3.0.0
>         >     >       >     >       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
>         >     >       >     >       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
>         >     >       >     >       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
>         >     >       >     >       > [    1.855907] FPGA manager framework
>         >     >       >     >       > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
>         >     >       >     >       > [    1.871712] NET: Registered PF_INET protocol family
>         >     >       >     >       > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
>         >     >       >     >       > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
>         >     >       >     >       > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
>         >     >       >     >       > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
>         >     >       >     >       > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
>         >     >       >     >       > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
>         >     >       >     >       > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
>         >     >       >     >       > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
>         >     >       >     >       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
>         >     >       >     >       > [    1.936834] RPC: Registered named UNIX socket transport module.
>         >     >       >     >       > [    1.942342] RPC: Registered udp transport module.
>         >     >       >     >       > [    1.947088] RPC: Registered tcp transport module.
>         >     >       >     >       > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
>         >     >       >     >       > [    1.958334] PCI: CLS 0 bytes, default 64
>         >     >       >     >       > [    1.962709] Trying to unpack rootfs image as initramfs...
>         >     >       >     >       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
>         >     >       >     >       > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
>         >     >       >     >       > [    2.021045] NET: Registered PF_ALG protocol family
>         >     >       >     >       > [    2.021122] xor: measuring software checksum speed
>         >     >       >     >       > [    2.029347]    8regs           :  2366 MB/sec
>         >     >       >     >       > [    2.033081]    32regs          :  2802 MB/sec
>         >     >       >     >       > [    2.038223]    arm64_neon      :  2320 MB/sec
>         >     >       >     >       > [    2.038385] xor: using function: 32regs (2802 MB/sec)
>         >     >       >     >       > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
>         >     >       >     >       > [    2.050959] io scheduler mq-deadline registered
>         >     >       >     >       > [    2.055521] io scheduler kyber registered
>         >     >       >     >       > [    2.068227] xen:xen_evtchn: Event-channel device installed
>         >     >       >     >       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
>         >     >       >     >       > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
>         >     >       >     >       > [    2.085548] brd: module loaded
>         >     >       >     >       > [    2.089290] loop: module loaded
>         >     >       >     >       > [    2.089341] Invalid max_queues (4), will use default max: 2.
>         >     >       >     >       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
>         >     >       >     >       > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
>         >     >       >     >       > [    2.104156] usbcore: registered new interface driver rtl8150
>         >     >       >     >       > [    2.109813] usbcore: registered new interface driver r8152
>         >     >       >     >       > [    2.115367] usbcore: registered new interface driver asix
>         >     >       >     >       > [    2.120794] usbcore: registered new interface driver ax88179_178a
>         >     >       >     >       > [    2.126934] usbcore: registered new interface driver cdc_ether
>         >     >       >     >       > [    2.132816] usbcore: registered new interface driver cdc_eem
>         >     >       >     >       > [    2.138527] usbcore: registered new interface driver net1080
>         >     >       >     >       > [    2.144256] usbcore: registered new interface driver cdc_subset
>         >     >       >     >       > [    2.150205] usbcore: registered new interface driver zaurus
>         >     >       >     >       > [    2.155837] usbcore: registered new interface driver cdc_ncm
>         >     >       >     >       > [    2.161550] usbcore: registered new interface driver r8153_ecm
>         >     >       >     >       > [    2.168240] usbcore: registered new interface driver cdc_acm
>         >     >       >     >       > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
>         >     >       >     >       > [    2.181358] usbcore: registered new interface driver uas
>         >     >       >     >       > [    2.186547] usbcore: registered new interface driver usb-storage
>         >     >       >     >       > [    2.192643] usbcore: registered new interface driver ftdi_sio
>         >     >       >     >       > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
>         >     >       >     >       > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
>         >     >       >     >       > [    2.215332] i2c_dev: i2c /dev entries driver
>         >     >       >     >       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
>         >     >       >     >       > [    2.225923] device-mapper: uevent: version 1.0.3
>         >     >       >     >       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
>         >     >       >     >       > [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
>         >     >       >     >       > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
>         >     >       >     >       > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
>         >     >       >     >       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
>         >     >       >     >       > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
>         >     >       >     >       > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
>         >     >       >     >       > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
>         >     >       >     >       > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
>         >     >       >     >       > [    2.327875] securefw securefw: securefw probed
>         >     >       >     >       > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
>         >     >       >     >       > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
>         >     >       >     >       > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
>         >     >       >     >       > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
>         >     >       >     >       > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
>         >     >       >     >       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
>         >     >       >     >       > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
>         >     >       >     >       > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
>         >     >       >     >       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>         >     >       >     >       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
>         >     >       >     >       > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
>         >     >       >     >       > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
>         >     >       >     >       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
>         >     >       >     >       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
>         >     >       >     >       > [    2.420856] default preset
>         >     >       >     >       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
>         >     >       >     >       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
>         >     >       >     >       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
>         >     >       >     >       > [    2.441976] vmcu driver init
>         >     >       >     >       > [    2.444922] VMCU: : (240:0) registered
>         >     >       >     >       > [    2.444956] In K81 Updater init
>         >     >       >     >       > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
>         >     >       >     >       > [    2.468833] Initializing XFRM netlink socket
>         >     >       >     >       > [    2.468902] NET: Registered PF_PACKET protocol family
>         >     >       >     >       > [    2.472729] Bridge firewalling registered
>         >     >       >     >       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
>         >     >       >     >       > [    2.481341] registered taskstats version 1
>         >     >       >     >       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
>         >     >       >     >       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
>         >     >       >     >       > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
>         >     >       >     >       > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
>         >     >       >     >       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
>         >     >       >     >       > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
>         >     >       >     >       > [    2.952393] Creating 2 MTD partitions on "spi0.0":
>         >     >       >     >       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
>         >     >       >     >       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
>         >     >       >     >       > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
>         >     >       >     >       > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
>         >     >       >     >       > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
>         >     >       >     >       > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
>         >     >       >     >       > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
>         >     >       >     >       > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
>         >     >       >     >       > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
>         >     >       >     >       > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
>         >     >       >     >       > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
>         >     >       >     >       > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
>         >     >       >     >       > [    3.045301] viper_enet viper_enet: Viper enet registered
>         >     >       >     >       > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
>         >     >       >     >       > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
>         >     >       >     >       > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
>         >     >       >     >       > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
>         >     >       >     >       > [    3.097729] si70xx: probe of 2-0040 failed with error -5
>         >     >       >     >       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
>         >     >       >     >       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
>         >     >       >     >       > [    3.112457] viper-tamper viper-tamper: Device registered
>         >     >       >     >       > [    3.117593] active_bank active_bank: boot bank: 1
>         >     >       >     >       > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
>         >     >       >     >       > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
>         >     >       >     >       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>         >     >       >     >       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
>         >     >       >     >       > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
>         >     >       >     >       > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
>         >     >       >     >       > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
>         >     >       >     >       > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
>         >     >       >     >       > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>         >     >       >     >       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
>         >     >       >     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
>         >     >       >     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
>         >     >       >     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
>         >     >       >     >       > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
>         >     >       >     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
>         >     >       >     >       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
>         >     >       >     >       > [    3.284438] mmc0: new HS200 MMC card at address 0001
>         >     >       >     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
>         >     >       >     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
>         >     >       >     >       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
>         >     >       >     >       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
>         >     >       >     >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
>         >     >       >     >       > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>         >     >       >     >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
>         >     >       >     >       > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
>         >     >       >     >       > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
>         >     >       >     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
>         >     >       >     >       > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
>         >     >       >     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
>         >     >       >     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
>         >     >       >     >       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
>         >     >       >     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
>         >     >       >     >       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
>         >     >       >     >       > [    3.639104] k81_bootloader 0-0010: probe
>         >     >       >     >       > [    3.641628] VMCU: : (235:0) registered
>         >     >       >     >       > [    3.641635] k81_bootloader 0-0010: probe completed
>         >     >       >     >       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
>         >     >       >     >       > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
>         >     >       >     >       > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
>         >     >       >     >       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
>         >     >       >     >       > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
>         >     >       >     >       > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
>         >     >       >     >       > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
>         >     >       >     >       > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
>         >     >       >     >       > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
>         >     >       >     >       > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
>         >     >       >     >       > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
>         >     >       >     >       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
>         >     >       >     >       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
>         >     >       >     >       > [    3.737549] sfp_register_socket: got sfp_bus
>         >     >       >     >       > [    3.740709] sfp_register_socket: register sfp_bus
>         >     >       >     >       > [    3.745459] sfp_register_bus: ops ok!
>         >     >       >     >       > [    3.749179] sfp_register_bus: Try to attach
>         >     >       >     >       > [    3.753419] sfp_register_bus: Attach succeeded
>         >     >       >     >       > [    3.757914] sfp_register_bus: upstream ops attach
>         >     >       >     >       > [    3.762677] sfp_register_bus: Bus registered
>         >     >       >     >       > [    3.766999] sfp_register_socket: register sfp_bus succeeded
>         >     >       >     >       > [    3.775870] of_cfs_init
>         >     >       >     >       > [    3.776000] of_cfs_init: OK
>         >     >       >     >       > [    3.778211] clk: Not disabling unused clocks
>         >     >       >     >       > [   11.278477] Freeing initrd memory: 206056K
>         >     >       >     >       > [   11.279406] Freeing unused kernel memory: 1536K
>         >     >       >     >       > [   11.314006] Checked W+X mappings: passed, no W+X pages found
>         >     >       >     >       > [   11.314142] Run /init as init process
>         >     >       >     >       > INIT: version 3.01 booting
>         >     >       >     >       > fsck (busybox 1.35.0)
>         >     >       >     >       > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
>         >     >       >     >       > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
>         >     >       >     >       > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
>         >     >       >     >       > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
>         >     >       >     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
>         >     >       >     >       > Starting random number generator daemon.
>         >     >       >     >       > [   11.580662] random: crng init done
>         >     >       >     >       > Starting udev
>         >     >       >     >       > [   11.613159] udevd[142]: starting version 3.2.10
>         >     >       >     >       > [   11.620385] udevd[143]: starting eudev-3.2.10
>         >     >       >     >       > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
>         >     >       >     >       > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
>         >     >       >     >       > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
>         >     >       >     >       > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>         >     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>         >     >       >     >       > Mon Feb 27 08:40:53 UTC 2023
>         >     >       >     >       > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
>         >     >       >     >       > hwclock: RTC_SET_TIME: Invalid exchange
>         >     >       >     >       > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>         >     >       >     >       > Starting mcud
>         >     >       >     >       > INIT: Entering runlevel: 5
>         >     >       >     >       > Configuring network interfaces... done.
>         >     >       >     >       > resetting network interface
>         >     >       >     >       > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>         >     >       >     >       > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
>         >     >       >     >       > [   12.732151] pps pps0: new PPS source ptp0
>         >     >       >     >       > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
>         >     >       >     >       > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>         >     >       >     >       > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
>         >     >       >     >       > [   12.761804] pps pps1: new PPS source ptp1
>         >     >       >     >       > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
>         >     >       >     >       > Auto-negotiation: off
>         >     >       >     >       > Auto-negotiation: off
>         >     >       >     >       > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
>         >     >       >     >       > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
>         >     >       >     >       > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
>         >     >       >     >       > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
>         >     >       >     >       > Starting Failsafe Secure Shell server in port 2222: sshd
>         >     >       >     >       > done.
>         >     >       >     >       > Starting rpcbind daemon...done.
>         >     >       >     >       >
>         >     >       >     >       > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>         >     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>         >     >       >     >       > Starting State Manager Service
>         >     >       >     >       > Start state-manager restarter...
>         >     >       >     >       > (XEN) d0v1 Forwarding AES operation: 3254779951
>         >     >       >     >       > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
>         >     >       >     >       > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
>         >     >       >     >       > [   17.350670] BTRFS info (device dm-0): has skinny extents
>         >     >       >     >       > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
>         >     >       >     >       > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
>         >     >       >     >       > [   17.872699] BTRFS info (device dm-1): using free space tree
>         >     >       >     >       > [   17.872771] BTRFS info (device dm-1): has skinny extents
>         >     >       >     >       > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
>         >     >       >     >       > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
>         >     >       >     >       > [   17.895695] BTRFS info (device dm-1): checking UUID tree
>         >     >       >     >       >
>         >     >       >     >       > Setting domain 0 name, domid and JSON config...
>         >     >       >     >       > Done setting up Dom0
>         >     >       >     >       > Starting xenconsoled...
>         >     >       >     >       > Starting QEMU as disk backend for dom0
>         >     >       >     >       > Starting domain watchdog daemon: xenwatchdogd startup
>         >     >       >     >       >
>         >     >       >     >       > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
>         >     >       >     >       > [done]
>         >     >       >     >       > [   18.465552] BTRFS info (device dm-2): using free space tree
>         >     >       >     >       > [   18.465629] BTRFS info (device dm-2): has skinny extents
>         >     >       >     >       > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
>         >     >       >     >       > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
>         >     >       >     >       > [   18.486659] BTRFS info (device dm-2): checking UUID tree
>         >     >       >     >       > OK
>         >     >       >     >       > starting rsyslogd ... Log partition ready after 0 poll loops
>         >     >       >     >       > done
>         >     >       >     >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
>         >     >       >     >       > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>         >     >       >     >       >
>         >     >       >     >       > Please insert USB token and enter your role in login prompt.
>         >     >       >     >       >
>         >     >       >     >       > login:
>         >     >       >     >       >
>         >     >       >     >       > Regards,
>         >     >       >     >       > O.
>         >     >       >     >       >
>         >     >       >     >       >
>         >     >       >     >       > On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
>         >     >       >     >       >       Hi Oleg,
>         >     >       >     >       >
>         >     >       >     >       >       Here is the issue from your logs:
>         >     >       >     >       >
>         >     >       >     >       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
>         >     >       >     >       >
>         >     >       >     >       >       SErrors are special signals that notify software of serious hardware
>         >     >       >     >       >       errors.  Something is going very wrong. Defective hardware is one
>         >     >       >     >       >       possibility.  Another is software accessing address ranges that it
>         >     >       >     >       >       is not supposed to; that can also cause SErrors.
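[Editor's note: the syndrome value in the quoted line (`code 0xbe000000`) is the ESR_ELx register the kernel prints with the SError. A rough sketch of how to pull its fields apart, following the ARMv8-A ESR_ELx layout (field offsets from the architecture manual; this is a partial decoder for illustration, not an exhaustive one):]

```python
def decode_serror_esr(esr: int) -> dict:
    """Decode the fields of an ESR_ELx value reported with an SError."""
    ec   = (esr >> 26) & 0x3F    # Exception Class: 0x2F means SError interrupt
    il   = (esr >> 25) & 0x1     # Instruction Length bit (RES1 for SError)
    iss  = esr & 0x1FFFFFF       # Instruction Specific Syndrome, bits [24:0]
    ids  = (iss >> 24) & 0x1     # 1 = ISS encoding is implementation defined
    aet  = (iss >> 10) & 0x7     # Asynchronous Error Type (valid when IDS == 0)
    dfsc = iss & 0x3F            # Data Fault Status Code (valid when IDS == 0)
    return {"ec": ec, "il": il, "ids": ids, "aet": aet, "dfsc": dfsc}

fields = decode_serror_esr(0xBE000000)
print(fields)
```

For 0xbe000000 the EC field comes out as 0x2F (SError interrupt) and the entire ISS is zero, i.e. the CPU reports no further categorisation of the error, which is consistent with the "defective hardware or an illegal bus access" diagnosis above.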
>         >     >       >     >       >
>         >     >       >     >       >       Cheers,
>         >     >       >     >       >
>         >     >       >     >       >       Stefano
>         >     >       >     >       >
>         >     >       >     >       >
>         >     >       >     >       >
>         >     >       >     >       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>         >     >       >     >       >
>         >     >       >     >       >       > Hello,
>         >     >       >     >       >       >
>         >     >       >     >       >       > Thanks guys.
>         >     >       >     >       >       > I found out where the problem was.
>         >     >       >     >       >       > Now dom0 boots further, but I have hit a new problem.
>         >     >       >     >       >       > This is a kernel panic during Dom0 loading.
>         >     >       >     >       >       > Could someone suggest something?
>         >     >       >     >       >       >
>         >     >       >     >       >       > Regards,
>         >     >       >     >       >       > O.
>         >     >       >     >       >       >
>         >     >       >     >       >       > [    3.771362] sfp_register_bus: upstream ops attach
>         >     >       >     >       >       > [    3.776119] sfp_register_bus: Bus registered
>         >     >       >     >       >       > [    3.780459] sfp_register_socket: register sfp_bus succeeded
>         >     >       >     >       >       > [    3.789399] of_cfs_init
>         >     >       >     >       >       > [    3.789499] of_cfs_init: OK
>         >     >       >     >       >       > [    3.791685] clk: Not disabling unused clocks
>         >     >       >     >       >       > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>         >     >       >     >       >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>         >     >       >     >       >       > [   11.010393] Workqueue: events_unbound async_run_entry_fn
>         >     >       >     >       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>         >     >       >     >       >       > [   11.010422] pc : simple_write_end+0xd0/0x130
>         >     >       >     >       >       > [   11.010431] lr : generic_perform_write+0x118/0x1e0
>         >     >       >     >       >       > [   11.010438] sp : ffffffc00809b910
>         >     >       >     >       >       > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>         >     >       >     >       >       > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
>         >     >       >     >       >       > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
>         >     >       >     >       >       > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
>         >     >       >     >       >       > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
>         >     >       >     >       >       > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
>         >     >       >     >       >       > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
>         >     >       >     >       >       > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
>         >     >       >     >       >       > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
>         >     >       >     >       >       > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
>         >     >       >     >       >       > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
>         >     >       >     >       >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>         >     >       >     >       >       > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
>         >     >       >     >       >       > [   11.010548] Workqueue: events_unbound async_run_entry_fn
>         >     >       >     >       >       > [   11.010556] Call trace:
>         >     >       >     >       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
>         >     >       >     >       >       > [   11.010567]  show_stack+0x18/0x2c
>         >     >       >     >       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
>         >     >       >     >       >       > [   11.010583]  dump_stack+0x18/0x34
>         >     >       >     >       >       > [   11.010588]  panic+0x14c/0x2f8
>         >     >       >     >       >       > [   11.010597]  print_tainted+0x0/0xb0
>         >     >       >     >       >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
>         >     >       >     >       >       > [   11.010614]  do_serror+0x28/0x60
>         >     >       >     >       >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
>         >     >       >     >       >       > [   11.010628]  el1h_64_error+0x78/0x7c
>         >     >       >     >       >       > [   11.010633]  simple_write_end+0xd0/0x130
>         >     >       >     >       >       > [   11.010639]  generic_perform_write+0x118/0x1e0
>         >     >       >     >       >       > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
>         >     >       >     >       >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
>         >     >       >     >       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
>         >     >       >     >       >       > [   11.010665]  kernel_write+0x88/0x160
>         >     >       >     >       >       > [   11.010673]  xwrite+0x44/0x94
>         >     >       >     >       >       > [   11.010680]  do_copy+0xa8/0x104
>         >     >       >     >       >       > [   11.010686]  write_buffer+0x38/0x58
>         >     >       >     >       >       > [   11.010692]  flush_buffer+0x4c/0xbc
>         >     >       >     >       >       > [   11.010698]  __gunzip+0x280/0x310
>         >     >       >     >       >       > [   11.010704]  gunzip+0x1c/0x28
>         >     >       >     >       >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>         >     >       >     >       >       > [   11.010715]  do_populate_rootfs+0x80/0x164
>         >     >       >     >       >       > [   11.010722]  async_run_entry_fn+0x48/0x164
>         >     >       >     >       >       > [   11.010728]  process_one_work+0x1e4/0x3a0
>         >     >       >     >       >       > [   11.010736]  worker_thread+0x7c/0x4c0
>         >     >       >     >       >       > [   11.010743]  kthread+0x120/0x130
>         >     >       >     >       >       > [   11.010750]  ret_from_fork+0x10/0x20
>         >     >       >     >       >       > [   11.010757] SMP: stopping secondary CPUs
>         >     >       >     >       >       > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
>         >     >       >     >       >       > [   11.010788] PHYS_OFFSET: 0x0
>         >     >       >     >       >       > [   11.010790] CPU features: 0x00000401,00000842
>         >     >       >     >       >       > [   11.010795] Memory Limit: none
>         >     >       >     >       >       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>         >     >       >     >       >       >
>         >     >       >     >       >       > Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com>:
>         >     >       >     >       >       >       Hi Oleg,
>         >     >       >     >       >       >
>         >     >       >     >       >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
>         >     >       >     >       >       >       >       
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       > Hello Michal,
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       > I have not been able to enable earlyprintk in Xen yet.
>         >     >       >     >       >       >       > I decided to take another approach.
>         >     >       >     >       >       >       > This is Xen's command line, which I recovered in full:
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>         >     >       >     >       >       >       Yes, adding a printk() in Xen was also a good idea.
>         >     >       >     >       >       >
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       > So you are absolutely right about a command line.
>         >     >       >     >       >       >       > Now I am going to find out why Xen did not get the correct parameters from the device tree.
>         >     >       >     >       >       >       Maybe you will find this document helpful:
>         >     >       >     >       >       >       https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
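For reference, the booting.txt document linked here places the Xen and dom0 command lines under the /chosen node of the host device tree. A minimal sketch follows; the argument values are illustrative only, not taken from this thread's board:

```
chosen {
        /* Arguments Xen itself boots with */
        xen,xen-bootargs = "console=dtuart dtuart=serial0 sched=null";

        /* Dom0 kernel module; xen,dom0-bootargs holds dom0's command line */
        module@0 {
                compatible = "multiboot,kernel", "multiboot,module";
                xen,dom0-bootargs = "console=hvc0 earlycon=xen";
        };
};
```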
>         >     >       >     >       >       >
>         >     >       >     >       >       >       ~Michal
>         >     >       >     >       >       >
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       > Regards,
>         >     >       >     >       >       >       > Oleg
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       > Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com>:
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
>         >     >       >     >       >       >       >     >       
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     > Hello Michal,
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     > Yes, I use yocto.
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     > Yesterday all day long I tried to follow your suggestions.
>         >     >       >     >       >       >     > I ran into a problem.
>         >     >       >     >       >       >     > I manually pasted the following strings into the Xen build config file:
>         >     >       >     >       >       >     In the .config file, or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>         >     >       >     >       >       >     You shouldn't really modify the .config file, but if you do, you should run "make olddefconfig" afterwards.
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     > CONFIG_EARLY_PRINTK
>         >     >       >     >       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
>         >     >       >     >       >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
>         >     >       >     >       >       >       >     I hope you added =y to them.
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       >     Anyway, you have at least the following solutions:
>         >     >       >     >       >       >       >     1) Run bitbake xen -c menuconfig to properly set early printk
>         >     >       >     >       >       >     2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y that is not enabled by default)
>         >     >       >     >       >       >       >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>         >     >       >     >       >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
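Option 3 can be sketched as follows. The defconfig path is relative to the Xen source tree checked out by the recipe (the exact Yocto work directory will vary); the mkdir is only there so the sketch runs stand-alone:

```shell
# Sketch of option 3: append the early-printk options to the arm64 defconfig.
mkdir -p xen/arch/arm/configs   # only so this sketch runs stand-alone
cat >> xen/arch/arm/configs/arm64_defconfig <<'EOF'
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
EOF
```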
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       >     ~Michal
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >     > The host hangs at build time.
>         >     >       >     >       >       >     > Maybe I did not set something in the build config file?
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     > Regards,
>         >     >       >     >       >       >       >     > Oleg
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >     > Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >     Thanks Michal,
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >     You gave me an idea.
>         >     >       >     >       >       >       >     >     I am going to try it today.
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >     Regards,
>         >     >       >     >       >       >       >     >     O.
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >     >     Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >         Thanks Stefano.
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >         I am going to do it today.
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >         Regards,
>         >     >       >     >       >       >       >     >         O.
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >     >         Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>         >     >       >     >       >       >       >     >             > Hi Michal,
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             > I corrected xen's command line.
>         >     >       >     >       >       >       >     >             > Now it is
>         >     >       >     >       >       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >             4 colors is way too many for xen, just do xen_colors=0-0. There is no
>         >     >       >     >       >       >       >     >             advantage in using more than 1 color for Xen.
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >     >             4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
>         >     >       >     >       >       >     >             Each color is 256M. For 1600M you should give at least 7 colors. Try:
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >             xen_colors=0-0 dom0_colors=1-8
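The arithmetic behind that suggestion can be checked with a quick sketch, taking the 256M-per-color figure stated in the message:

```shell
# Minimum number of colors needed to cover dom0_mem, assuming each
# color maps 256M of memory: ceil(1600 / 256) = 7.
dom0_mem_mb=1600
color_mb=256
min_colors=$(( (dom0_mem_mb + color_mb - 1) / color_mb ))
echo "$min_colors"   # prints 7
```

So dom0_colors=1-8 (8 colors) comfortably covers 1600M while leaving color 0 to Xen alone.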
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >     >             > Unfortunately the result was the same.
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             > (XEN)  - Dom0 mode: Relaxed
>         >     >       >     >       >       >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>         >     >       >     >       >       >       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>         >     >       >     >       >       >       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>         >     >       >     >       >       >       >     >             > (XEN) Coloring general information
>         >     >       >     >       >       >       >     >             > (XEN) Way size: 64kB
>         >     >       >     >       >       >       >     >             > (XEN) Max. number of colors available: 16
>         >     >       >     >       >       >       >     >             > (XEN) Xen color(s): [ 0 ]
>         >     >       >     >       >       >     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>         >     >       >     >       >       >       >     >             > (XEN) Color array allocation failed for dom0
>         >     >       >     >       >       >       >     >             > (XEN)
>         >     >       >     >       >       >       >     >             > (XEN) ****************************************
>         >     >       >     >       >       >       >     >             > (XEN) Panic on CPU 0:
>         >     >       >     >       >       >       >     >             > (XEN) Error creating domain 0
>         >     >       >     >       >       >       >     >             > (XEN) ****************************************
>         >     >       >     >       >       >       >     >             > (XEN)
>         >     >       >     >       >       >       >     >             > (XEN) Reboot in five seconds...
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >     >             > I am going to find out how the command line arguments are passed and parsed.
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             > Regards,
>         >     >       >     >       >       >       >     >             > Oleg
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >     >             > Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>         >     >       >     >       >       >       >     >             >       Hi Michal,
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >     >             > You pointed me right at the problem. Thank you.
>         >     >       >     >       >       >       >     >             > I am going to use your point.
>         >     >       >     >       >       >       >     >             > Let's see what happens.
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             > Regards,
>         >     >       >     >       >       >       >     >             > Oleg
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >     >             > Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
>         >     >       >     >       >       >       >     >             >       Hi Oleg,
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>         >     >       >     >       >       >       >     >             >       >       
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       > Hello Stefano,
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       > Thanks for the clarification.
>         >     >       >     >       >       >       >     >             >       > My company uses yocto for image generation.
>         >     >       >     >       >       >     >             >       > What kind of information do you need in order to advise me in this case?
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >     >             >       > Maybe the module sizes/addresses which were mentioned by @Julien Grall <julien@xen.org>?
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >     >             >       Sorry for jumping into the discussion, but FWICS the Xen command line you provided seems to be not the one
>         >     >       >     >       >       >     >             >       Xen booted with. The error you are observing is most likely due to the dom0 colors configuration not being
>         >     >       >     >       >       >     >             >       specified (i.e. lack of a dom0_colors=<> parameter). Although in the command line you provided this parameter
>         >     >       >     >       >       >     >             >       is set, I strongly doubt that this is the actual command line in use.
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             >       You wrote:
>         >     >       >     >       >       >     >             >       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             >       but:
>         >     >       >     >       >       >       >     >             >       1) way_szize has a typo
>         >     >       >     >       >       >     >             >       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>         >     >       >     >       >       >       >     >             >       (XEN) Xen color(s): [ 0 ]
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >     >             >       This makes me believe that no colors configuration actually ends up in the command line that Xen booted with.
>         >     >       >     >       >       >     >             >       A single color for Xen is the "default if not specified", and the way size was probably calculated by asking the HW.
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >     >             >       So I would suggest first cross-checking the command line in use.
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             >       ~Michal
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       > Regards,
>         >     >       >     >       >       >       >     >             >       > Oleg
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       > вт, 18 апр. 2023 г. в 20:44, Stefano Stabellini
>         >     >       <sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>
>         >     >       >     >       >       >       <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>>
>         >     >       <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>
>         >     >       <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>>>
>         >     >       >     >       >       <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>
>         >     >       >     >       >       >       <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>>
>         >     >       <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>
>         >     >       <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>>>>>:
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>         >     >       >     >       >       >       >     >             >       >     > Hi Julien,
>         >     >       >     >       >       >       >     >             >       >     >
>         >     >       >     >       >       >       >     >             >       >     > >> This feature has not been merged in Xen upstream yet
>         >     >       >     >       >       >       >     >             >       >     >
>         >     >       >     >       >       >       >     >             >       >     > > would assume that upstream + the series on the ML [1]
>         >     >       work
>         >     >       >     >       >       >       >     >             >       >     >
>         >     >       >     >       >       >       >     >             >       >     > Please clarify this point.
>         >     >       >     >       >       >       >     >             >       >     > Because the two thoughts are controversial.
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >     Hi Oleg,
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >     As Julien wrote, there is nothing controversial. As you
>         >     >       are aware,
>         >     >       >     >       >       >       >     >             >       >     Xilinx maintains a separate Xen tree specific for Xilinx
>         >     >       here:
>         >     >       >     >       >       >       >     >             >       >     https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>
>         >     >       <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>> <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>
>         >     >       >     >       >       <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>
>         >     >       >     >       >       >       <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>>
>         >     >       <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>> <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>
>         >     >       <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>
>         >     >       >     >       >       <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>
>         >     >       >     >       >       >       <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>>>
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >     and the branch you are using (xlnx_rebase_4.16) comes
>         >     >       from there.
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >     Instead, the upstream Xen tree lives here:
>         >     >       >     >       >       >       >     >             >       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>         >     >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>>
>         >     >       >     >       >       >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>         >     >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>>> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>         >     >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>>
>         >     >       >     >       >       >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>         >     >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>>>> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>         >     >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>>
>         >     >       >     >       >       >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>         >     >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>>> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>         >     >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>>
>         >     >       >     >       >       >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>
>         >     >       <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary> <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>>>>>>
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >     The Cache Coloring feature that you are trying to
>         >     >       configure is present
>         >     >       >     >       >       >       >     >             >       >     in xlnx_rebase_4.16, but not yet present upstream (there
>         >     >       is an
>         >     >       >     >       >       >       >     >             >       >     outstanding patch series to add cache coloring to Xen
>         >     >       upstream but it
>         >     >       >     >       >       >       >     >             >       >     hasn't been merged yet.)
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't
>         >     >       matter too much for
>         >     >       >     >       >       >       >     >             >       >     you as you already have Cache Coloring as a feature
>         >     >       there.
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >     I take you are using ImageBuilder to generate the boot
>         >     >       configuration? If
>         >     >       >     >       >       >       >     >             >       >     so, please post the ImageBuilder config file that you are
>         >     >       using.
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >       >     But from the boot message, it looks like the colors
>         >     >       configuration for
>         >     >       >     >       >       >       >     >             >       >     Dom0 is incorrect.
>         >     >       >     >       >       >       >     >             >       >
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >             >
>         >     >       >     >       >       >       >     >
>         >     >       >     >       >       >       >
>         >     >       >     >       >       >
>         >     >       >     >       >       >
>         >     >       >     >       >       >
>         >     >       >     >       >
>         >     >       >     >       >
>         >     >       >     >       >
>         >     >       >     >
>         >     >       >     >
>         >     >       >     >
>         >     >       >
>         >     >
>         >     >
>         >     >
>         >
> 
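[Editor's note] To make the cross-check concrete: ImageBuilder builds the U-Boot boot script from a config file, and Xen's command line comes from its `XEN_CMD` variable. The fragment below is a hypothetical sketch only; the coloring parameter names (`way_size`, `xen_colors`, `dom0_colors`) are those used by the downstream xlnx_rebase_4.16 coloring patches and may differ in other trees, and the memory layout and file names are placeholders.

```shell
# Hypothetical ImageBuilder config fragment. The coloring options must
# actually reach Xen's command line; otherwise Xen falls back to a single
# default color, which matches the "(XEN) Xen color(s): [ 0 ]" boot log.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

XEN="xen"
XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1024M way_size=65536 xen_colors=0-3 dom0_colors=4-7"

DOM0_KERNEL="Image"
DOM0_RAMDISK="rootfs.cpio.gz"

NUM_DOMUS=0
```

After regenerating and using the boot script, the Xen boot log should report the requested colors rather than the single default one.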


From xen-devel-bounces@lists.xenproject.org Mon May 15 08:59:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 08:59:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534648.831855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU3X-0001Mn-2M; Mon, 15 May 2023 08:59:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534648.831855; Mon, 15 May 2023 08:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU3W-0001Mg-VP; Mon, 15 May 2023 08:59:46 +0000
Received: by outflank-mailman (input) for mailman id 534648;
 Mon, 15 May 2023 08:59:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uOm1=BE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pyU3V-0001MY-AP
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 08:59:45 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2061f.outbound.protection.outlook.com
 [2a01:111:f400:7eab::61f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d4bc28fd-f2fe-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 10:59:43 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by BL1PR12MB5850.namprd12.prod.outlook.com (2603:10b6:208:395::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 08:59:39 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::8018:78f7:1b08:7a54]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::8018:78f7:1b08:7a54%2]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 08:59:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4bc28fd-f2fe-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <2930ee0b-d8c0-9e85-eb8e-e4440c5c4628@amd.com>
Date: Mon, 15 May 2023 09:59:33 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
 <1dadc8b9-00be-55f9-e8b7-f867eacf20b1@amd.com>
 <2cdb4e1c-5151-f820-5ceb-35f782842393@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <2cdb4e1c-5151-f820-5ceb-35f782842393@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO2P265CA0277.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a1::25) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|BL1PR12MB5850:EE_
X-MS-Office365-Filtering-Correlation-Id: d8ea24a9-f046-4624-faba-08db5522b725
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d8ea24a9-f046-4624-faba-08db5522b725
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 08:59:39.5702
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 93+TEQ9U202vNwaTJoB2QVa1t/pyvZZNZwlKm4Y3SbHp8czkKN5Uv2CWUar1gxnTMLHc+H/UghAS6971s9oPkQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR12MB5850


On 15/05/2023 07:46, Michal Orzel wrote:
> Hi Ayan,
Hi Michal,
>
> On 12/05/2023 18:59, Ayan Kumar Halder wrote:
>> Hi Michal,
>>
>> On 12/05/2023 15:35, Michal Orzel wrote:
>>> At the moment, even in the case of an SMMU being I/O coherent, we clean the
>>> updated PT as a result of not advertising the coherency feature. SMMUv3
>>> coherency feature means that page table walks, accesses to memory
>>> structures and queues are I/O coherent (refer ARM IHI 0070 E.A, 3.15).
>>>
>>> Follow the same steps that were done for SMMU v1,v2 driver by the commit:
>>> 080dcb781e1bc3bb22f55a9dfdecb830ccbabe88
>>>
>>> The same restrictions apply, meaning that in order to advertise coherent
>>> table walk platform feature, all the SMMU devices need to report coherency
>>> feature. This is because the page tables (we are sharing them with CPU)
>>> are populated before any device assignment and in case of a device being
>>> behind non-coherent SMMU, we would have to scan the tables and clean
>>> the cache.
>>>
>>> It is to be noted that the SBSA/BSA (refer ARM DEN0094C 1.0C, section D)
>>> requires that all SMMUv3 devices support I/O coherency.
>>>
>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>> ---
>>> There are very few platforms out there with SMMUv3 but I have never seen
>>> a SMMUv3 that is not I/O coherent.
>>> ---
>>>    xen/drivers/passthrough/arm/smmu-v3.c | 24 +++++++++++++++++++++++-
>>>    1 file changed, 23 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>>> index bf053cdb6d5c..2adaad0fa038 100644
>>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>>> @@ -2526,6 +2526,15 @@ static const struct dt_device_match arm_smmu_of_match[] = {
>>>    };
>>>    
>>>    /* Start of Xen specific code. */
>>> +
>>> +/*
>>> + * Platform features. It indicates the list of features supported by all
>>> + * SMMUs. Actually we only care about coherent table walk, which in case of
>>> + * SMMUv3 is implied by the overall coherency feature (refer ARM IHI 0070 E.A,
>>> + * section 3.15 and SMMU_IDR0.COHACC bit description).
>>> + */
>>> +static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;
>>> +
>>>    static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
>>>    {
>>>    	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
>>> @@ -2708,8 +2717,12 @@ static int arm_smmu_iommu_xen_domain_init(struct domain *d)
>>>    	INIT_LIST_HEAD(&xen_domain->contexts);
>>>    
>>>    	dom_iommu(d)->arch.priv = xen_domain;
>>> -	return 0;
>>>    
>>> +	/* Coherent walk can be enabled only when all SMMUs support it. */
>>> +	if (platform_features & ARM_SMMU_FEAT_COHERENCY)
>>> +		iommu_set_feature(d, IOMMU_FEAT_COHERENT_WALK);
>>> +
>>> +	return 0;
>>>    }
>>>    
>> All good till here.
>>>    static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
>>> @@ -2738,6 +2751,7 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>>>    				const void *data)
>>>    {
>>>    	int rc;
>>> +	const struct arm_smmu_device *smmu;
>>>    
>>>    	/*
>>>    	 * Even if the device can't be initialized, we don't want to
>>> @@ -2751,6 +2765,14 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev,
>>>    
>>>    	iommu_set_ops(&arm_smmu_iommu_ops);
>>>    
>>> +	/* Find the just added SMMU and retrieve its features. */
>>> +	smmu = arm_smmu_get_by_dev(dt_to_dev(dev));
>>> +
>>> +	/* It would be a bug not to find the SMMU we just added. */
>>> +	BUG_ON(!smmu);
>>> +
>>> +	platform_features &= smmu->features;
>>> +
>> Can you explain this change in the commit message?
> I think it is already explained by saying that in order to advertise the *platform* feature, all
> SMMUs need to report it. If at least one doesn't, the feature is disabled. This is exactly
> what this line is doing. It ANDs the accumulated platform features with the
> features of the just-probed SMMU (arm_smmu_dt_init() is called once per SMMU).
All good.
>
> ~Michal


From xen-devel-bounces@lists.xenproject.org Mon May 15 09:00:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:00:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534650.831865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU41-0002jK-Ay; Mon, 15 May 2023 09:00:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534650.831865; Mon, 15 May 2023 09:00:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU41-0002jD-6h; Mon, 15 May 2023 09:00:17 +0000
Received: by outflank-mailman (input) for mailman id 534650;
 Mon, 15 May 2023 09:00:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pyU3z-0002j0-Hy
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 09:00:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyU3z-0001oz-B8; Mon, 15 May 2023 09:00:15 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.27.136]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyU3z-0002Pn-4I; Mon, 15 May 2023 09:00:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <f2db75de-0da0-d6ca-5adf-a784080625a8@xen.org>
Date: Mon, 15 May 2023 10:00:13 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 1/2] xen/arm: domain_build: Propagate return code of
 map_irq_to_domain()
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>
 <20230511130218.22606-2-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230511130218.22606-2-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 11/05/2023 14:02, Michal Orzel wrote:
>  From map_dt_irq_to_domain() we assign the return code of
> map_irq_to_domain() to a variable without ever checking it for an error.
> Fix it by propagating the return code directly, since this is the last
> call in the function.
> 
> Fixes: 467e5cbb2ffc ("xen: arm: consolidate mmio and irq mapping to dom0")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> Changes in v2:
>   - split the patch so that a fix alone can be backported
> ---
>   xen/arch/arm/domain_build.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f80fdd1af206..9dee1bb8f21c 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2320,7 +2320,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>   
>       res = map_irq_to_domain(d, irq, !mr_data->skip_mapping, dt_node_name(dev));
>   
> -    return 0;
> +    return res;
>   }
>   
>   int __init map_range_to_domain(const struct dt_device_node *dev,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 15 09:04:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:04:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534655.831884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU7l-0003cR-1W; Mon, 15 May 2023 09:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534655.831884; Mon, 15 May 2023 09:04:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU7k-0003cK-V1; Mon, 15 May 2023 09:04:08 +0000
Received: by outflank-mailman (input) for mailman id 534655;
 Mon, 15 May 2023 09:04:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HQr=BE=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pyU7j-0003NQ-Ks
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 09:04:07 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20626.outbound.protection.outlook.com
 [2a01:111:f400:7e88::626])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 72012d6a-f2ff-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 11:04:06 +0200 (CEST)
Received: from MW4PR03CA0059.namprd03.prod.outlook.com (2603:10b6:303:8e::34)
 by BY5PR12MB4145.namprd12.prod.outlook.com (2603:10b6:a03:212::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 09:04:02 +0000
Received: from CO1NAM11FT039.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:8e:cafe::1) by MW4PR03CA0059.outlook.office365.com
 (2603:10b6:303:8e::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30 via Frontend
 Transport; Mon, 15 May 2023 09:04:02 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT039.mail.protection.outlook.com (10.13.174.110) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.14 via Frontend Transport; Mon, 15 May 2023 09:04:02 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 15 May
 2023 04:04:00 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 15 May
 2023 04:03:55 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 15 May 2023 04:03:54 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72012d6a-f2ff-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BrJ3ABh11Ck7Go+V5QE7ddgXTWMaOnjuPb6+4fJUee9HOqvwR2sQuRl3bpbxDB7y3R+UA2ngElscEdyWPblyZaG8VLt+tzkoCOWxVyGU8zG+Cj4bg8QOoJsSIVUnRtKbgx6cvlxDRCrOBaSA/wA1w3KnaWyxxKla/tBKkSIv12JJObBdTy0lmbpDWveDXFFEhT8lhzOYdrWX0Dd0UFcV+ygoTZoZSZXujGB2VBzAOPSPOMs4psP2bOvXUVKy/QZSaWs4SmurXugN12mnTN3EKGMSPaAgTcyxwCC7LWMrNKe3iSwX8FchSEDlHyiX/Qlydx8PT3Aqf7o8V7SpKUYPkw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wLhLZil+UL8YExw8j+P15PD3sT2ZL5XX8PAe9m63T5Q=;
 b=nkxEmD1E0xaD4NgocxH0e6hQW5/q2M2yoEbsJvyXVkRD4ZIr+vtze0zhtYT3YsE2GP37v15aYTXolKNf47Ie7oYkPlXF2W89Ddz5NKdZFrSWmwiquDcAkCR+7TkbhSJw2qm0Z0Ludhw8KM5TsucUgVdDph3MN/6Bz6d5EAXSWfdoPP4UX1EOt1QrEF7IHnsSGeVeUJNiCzpgFxIKkXnyBJB92ZFAtCXwGqxoa3aPaB4FBkQZMw8QzCIFraRZz64HdungIv6Rr8gYtBFPOcrATcyDGUkau7ydiVUJALK9dStc9KoS6okIYRtPc6H7/bRHN08yc879/GnvIyG/csRHtg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wLhLZil+UL8YExw8j+P15PD3sT2ZL5XX8PAe9m63T5Q=;
 b=VOtsnVysXalojZH6jIve1BI/FaS0me8zD6UFFv0voLvEpe8ol0o5JXoctEgBgTS0j1mC9KiMb9BffywLHr8Bnssi122eVLdmpFcQsbNIncWDhqG0gYUt9kD1IrTut1e+udd1PMiZ6mDO3PLRXbLr0LatVfPO7pvWEcb1y433vcc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <f043c234-eb51-126f-1a1f-610796c203e8@amd.com>
Date: Mon, 15 May 2023 11:03:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <bertrand.marquis@arm.com>, Rahul Singh
	<rahul.singh@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
 <dff8ab04-ae35-3a71-b923-abe722dcdb1c@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <dff8ab04-ae35-3a71-b923-abe722dcdb1c@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT039:EE_|BY5PR12MB4145:EE_
X-MS-Office365-Filtering-Correlation-Id: 5aa2ebb0-9984-4e07-5915-08db552353ef
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WFW9bbg5ZjzbTt9EBv3Uas+PQytGd3COBjWijv/3JMa8bI7B8nIf47AhgLHW95L+1skrSTysSx+v1c0mE/hQWGStoscrxRoyHr6L4iPMrZ2zYLHyQCJ4W00tCEnhrOygdQbaB9WveI9oc7NRGhyIKJtAo2G7i97O8sz/CtSw1f9xVU2NAhYGRF8dLln4D3tQT5heM7aXMeAfIHxWnOoeineDINHY54Pg3nEBFgg/5uYPHYl/a/oW1XW8hR4gr1G4bUOUmyr0HozRnSstdj0BBDwhPgXUi0LeKUwHPgxKTvy/lgnzZxgSS3mJjq9WZZJeMIzchrLbqtna8PCpuoYcrdqkbgbx3Nt7j9g+MWhi+AuuJPoIy8PcRGEBIW/PFyilFzy90P9vbEXUjedYdm46+xuvb5VHh6PTpjmPiNi6APu0jrftpGcMrh+G51997b2z+3uiRS6fu0DVXifYgD1HicsS6Q1a9irH1Y46EOGp8+/KmDrwbu1ccGxw7O63xRCuxAzto6ImfzQrwqnbFKI7IyX/cCxy+s+Q2NUO7v0n5jYNbYW6+TdLPTNtEZiqURhIFTsIA39PBTlzB0jcO3GaO7ki7YEkJqyiG7yKzuxAN5TiGVJke5WfAc4mUjA8Oak9AKXEXpZVEaSp8aKEM3X2M9vx9JnaaFV9Ufpfi4yJvsd4tlMKitc/3wese1BiCTvSW2wcsof7tZ/3b9Vif+fLTQRxoFrRdS0uuBncxVPremepj3Nlv1NLPAUKHo4sLOArmQklUSbcSp3f3vMDXOMuDacs7F4H+yE9ZVF4QA/hECU=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(39850400004)(346002)(136003)(451199021)(46966006)(40470700004)(36840700001)(36756003)(86362001)(110136005)(54906003)(16576012)(316002)(4326008)(70206006)(70586007)(478600001)(82310400005)(40480700001)(8936002)(8676002)(5660300002)(2906002)(44832011)(31696002)(81166007)(356005)(82740400003)(41300700001)(2616005)(26005)(186003)(53546011)(336012)(36860700001)(426003)(83380400001)(47076005)(31686004)(40460700003)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 09:04:02.1749
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5aa2ebb0-9984-4e07-5915-08db552353ef
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT039.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4145

Hi Julien,

On 15/05/2023 10:56, Julien Grall wrote:
> 
> 
> Hi,
> 
> On 12/05/2023 15:35, Michal Orzel wrote:
>> At the moment, even if an SMMU is I/O coherent, we clean the
>> updated PT as a result of not advertising the coherency feature. The SMMUv3
>> coherency feature means that page table walks, accesses to memory
>> structures and queues are I/O coherent (refer ARM IHI 0070 E.A, section 3.15).
>>
>> Follow the same steps that were done for the SMMU v1,v2 driver by commit:
>> 080dcb781e1bc3bb22f55a9dfdecb830ccbabe88
>>
>> The same restrictions apply, meaning that in order to advertise the coherent
>> table walk platform feature, all the SMMU devices need to report the coherency
>> feature. This is because the page tables (which we share with the CPU)
>> are populated before any device assignment, and if a device were behind a
>> non-coherent SMMU, we would have to scan the tables and clean
>> the cache.
>>
>> It is to be noted that the SBSA/BSA (refer ARM DEN0094C 1.0C, section D)
>> requires that all SMMUv3 devices support I/O coherency.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>> There are very few platforms out there with an SMMUv3, but I have never
>> seen one that is not I/O coherent.
>> ---
>>   xen/drivers/passthrough/arm/smmu-v3.c | 24 +++++++++++++++++++++++-
>>   1 file changed, 23 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>> index bf053cdb6d5c..2adaad0fa038 100644
>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>> @@ -2526,6 +2526,15 @@ static const struct dt_device_match arm_smmu_of_match[] = {
>>   };
>>
>>   /* Start of Xen specific code. */
>> +
>> +/*
>> + * Platform features. It indicates the list of features supported by all
>> + * SMMUs. Actually we only care about coherent table walk, which in case of
>> + * SMMUv3 is implied by the overall coherency feature (refer ARM IHI 0070 E.A,
>> + * section 3.15 and SMMU_IDR0.COHACC bit description).
>> + */
>> +static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;
> 
> AFAICT, this variable is not meant to change after boot. So please add
> the attribute __ro_after_init.
Yes, that makes total sense. After probing, this variable is not meant to be modified.
Is this something that can be done on commit, or would you like me to respin the patch?

~Michal


From xen-devel-bounces@lists.xenproject.org Mon May 15 09:04:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:04:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534654.831874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU7j-0003Nc-Qh; Mon, 15 May 2023 09:04:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534654.831874; Mon, 15 May 2023 09:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU7j-0003NV-Nk; Mon, 15 May 2023 09:04:07 +0000
Received: by outflank-mailman (input) for mailman id 534654;
 Mon, 15 May 2023 09:04:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pyU7i-0003NK-Lh
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 09:04:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyU7i-0001tj-3l; Mon, 15 May 2023 09:04:06 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.27.136]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyU7h-0002Y2-Tw; Mon, 15 May 2023 09:04:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=VrcPs2Jq8w9VNs7AIBdGcg2ROUBmtiXJ7MVVzP/q8dI=; b=zTkOOjxK7yB7ya9hK+EXYYSlM/
	hmsyVRJpl7nrxh1omDVbWRx3KXvjTA+vUxiTYuaJlXsnroi5pPeEnbPEojUBUNym9UjbGf/SrZGw2
	FWPqqRe6nIMD0xswi/+NfCXDnwYn4j4KG+kdbt8yExlXRnF5mdCoJFR9G8rUcJIiwvR8=;
Message-ID: <feea3570-739f-c998-16ed-afed884f7fe6@xen.org>
Date: Mon, 15 May 2023 10:04:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 2/2] xen/arm: domain_build: Fix format specifiers in
 map_{dt_}irq_to_domain()
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>
 <20230511130218.22606-3-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230511130218.22606-3-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 11/05/2023 14:02, Michal Orzel wrote:
> The IRQ is of an unsigned type, so %u should be used. When printing a
> domain id, %pd is the correct format specifier to maintain consistency.
> 
> Also, wherever possible, reduce the number of split lines for printk().

Reviewed-by: Julien Grall <jgrall@amazon.com>

I will fix the typo pointed out by Andrew whilst committing the series.

Cheers,

> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> Changes in v2:
>   - split the v1 patch so that the format specifiers are handled separately
>   - also fix map_irq_to_domain()
> ---
>   xen/arch/arm/domain_build.c | 14 +++++---------
>   1 file changed, 5 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9dee1bb8f21c..71f307a572e9 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2265,8 +2265,7 @@ int __init map_irq_to_domain(struct domain *d, unsigned int irq,
>       res = irq_permit_access(d, irq);
>       if ( res )
>       {
> -        printk(XENLOG_ERR "Unable to permit to dom%u access to IRQ %u\n",
> -               d->domain_id, irq);
> +        printk(XENLOG_ERR "Unable to permit to %pd access to IRQ %u\n", d, irq);
>           return res;
>       }
>   
> @@ -2282,8 +2281,7 @@ int __init map_irq_to_domain(struct domain *d, unsigned int irq,
>           res = route_irq_to_guest(d, irq, irq, devname);
>           if ( res < 0 )
>           {
> -            printk(XENLOG_ERR "Unable to map IRQ%"PRId32" to dom%d\n",
> -                   irq, d->domain_id);
> +            printk(XENLOG_ERR "Unable to map IRQ%u to %pd\n", irq, d);
>               return res;
>           }
>       }
> @@ -2303,8 +2301,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>   
>       if ( irq < NR_LOCAL_IRQS )
>       {
> -        printk(XENLOG_ERR "%s: IRQ%"PRId32" is not a SPI\n",
> -               dt_node_name(dev), irq);
> +        printk(XENLOG_ERR "%s: IRQ%u is not a SPI\n", dt_node_name(dev), irq);
>           return -EINVAL;
>       }
>   
> @@ -2312,9 +2309,8 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
>       res = irq_set_spi_type(irq, dt_irq->type);
>       if ( res )
>       {
> -        printk(XENLOG_ERR
> -               "%s: Unable to setup IRQ%"PRId32" to dom%d\n",
> -               dt_node_name(dev), irq, d->domain_id);
> +        printk(XENLOG_ERR "%s: Unable to setup IRQ%u to %pd\n",
> +               dt_node_name(dev), irq, d);
>           return res;
>       }
>   

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 15 09:06:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:06:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534662.831895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU9q-0004Yk-JZ; Mon, 15 May 2023 09:06:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534662.831895; Mon, 15 May 2023 09:06:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyU9q-0004Yb-FQ; Mon, 15 May 2023 09:06:18 +0000
Received: by outflank-mailman (input) for mailman id 534662;
 Mon, 15 May 2023 09:06:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pyU9p-0004YV-5Z
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 09:06:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyU9o-0001vr-NQ; Mon, 15 May 2023 09:06:16 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.27.136]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyU9o-0002fB-Hd; Mon, 15 May 2023 09:06:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=H0qhiCwgVN4xX6yrB482OoZ5Sd6rF4cYlo0mutjTYcM=; b=OMa7pPWGUmSTyVgZd4WWaGN4d3
	KiCn6NJFwVYxqfphDg1W/ti1FgUoyMXJuyF6G5GxHCMuOWxp4Gg/Ma3Y+Lo6bYPSpSlhM/hq1dMEf
	KvFPoSV4krx9M6fyDwMJCGpnTrXetz+4/4HQhNQ3VPbExom10KEIHrYm6JgqVOfEcOjE=;
Message-ID: <bfc634ce-43f9-2617-eee7-6ce8ab15b6b1@xen.org>
Date: Mon, 15 May 2023 10:06:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
 <dff8ab04-ae35-3a71-b923-abe722dcdb1c@xen.org>
 <f043c234-eb51-126f-1a1f-610796c203e8@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <f043c234-eb51-126f-1a1f-610796c203e8@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 15/05/2023 10:03, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> 
> On 15/05/2023 10:56, Julien Grall wrote:
>>
>>
>> Hi,
>>
>> On 12/05/2023 15:35, Michal Orzel wrote:
>>> At the moment, even if an SMMU is I/O coherent, we clean the
>>> updated PT as a result of not advertising the coherency feature. The SMMUv3
>>> coherency feature means that page table walks, accesses to memory
>>> structures and queues are I/O coherent (refer ARM IHI 0070 E.A, section 3.15).
>>>
>>> Follow the same steps that were done for the SMMU v1,v2 driver by commit:
>>> 080dcb781e1bc3bb22f55a9dfdecb830ccbabe88
>>>
>>> The same restrictions apply, meaning that in order to advertise the coherent
>>> table walk platform feature, all the SMMU devices need to report the coherency
>>> feature. This is because the page tables (which we share with the CPU)
>>> are populated before any device assignment, and if a device were behind a
>>> non-coherent SMMU, we would have to scan the tables and clean
>>> the cache.
>>>
>>> It is to be noted that the SBSA/BSA (refer ARM DEN0094C 1.0C, section D)
>>> requires that all SMMUv3 devices support I/O coherency.
>>>
>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>> ---
>>> There are very few platforms out there with an SMMUv3, but I have never
>>> seen one that is not I/O coherent.
>>> ---
>>>    xen/drivers/passthrough/arm/smmu-v3.c | 24 +++++++++++++++++++++++-
>>>    1 file changed, 23 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>>> index bf053cdb6d5c..2adaad0fa038 100644
>>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>>> @@ -2526,6 +2526,15 @@ static const struct dt_device_match arm_smmu_of_match[] = {
>>>    };
>>>
>>>    /* Start of Xen specific code. */
>>> +
>>> +/*
>>> + * Platform features. It indicates the list of features supported by all
>>> + * SMMUs. Actually we only care about coherent table walk, which in case of
>>> + * SMMUv3 is implied by the overall coherency feature (refer ARM IHI 0070 E.A,
>>> + * section 3.15 and SMMU_IDR0.COHACC bit description).
>>> + */
>>> +static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;
>>
>> AFAICT, this variable is not meant to change after boot. So please add
>> the attribute __ro_after_init.
> Yes, that makes total sense. After probing, this variable is not meant to be modified.
> Is this something that can be done on commit, or would you like me to respin the patch?

I can do it on commit. With that:

Reviewed-by: Julien Grall <jgrall@amazon.com>


Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 15 09:08:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:08:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534665.831905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUBr-0005Bh-Uz; Mon, 15 May 2023 09:08:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534665.831905; Mon, 15 May 2023 09:08:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUBr-0005Ba-S8; Mon, 15 May 2023 09:08:23 +0000
Received: by outflank-mailman (input) for mailman id 534665;
 Mon, 15 May 2023 09:08:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HQr=BE=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pyUBq-0005BS-11
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 09:08:22 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7e89::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 07f0a910-f300-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 11:08:19 +0200 (CEST)
Received: from MW4PR03CA0005.namprd03.prod.outlook.com (2603:10b6:303:8f::10)
 by SA3PR12MB7880.namprd12.prod.outlook.com (2603:10b6:806:305::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 09:08:15 +0000
Received: from CO1NAM11FT023.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:8f:cafe::45) by MW4PR03CA0005.outlook.office365.com
 (2603:10b6:303:8f::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30 via Frontend
 Transport; Mon, 15 May 2023 09:08:15 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT023.mail.protection.outlook.com (10.13.175.35) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.14 via Frontend Transport; Mon, 15 May 2023 09:08:14 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 15 May
 2023 04:08:12 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 15 May
 2023 02:08:04 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 15 May 2023 04:08:03 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07f0a910-f300-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lp/e548xlBT+5bSDuFztte5ggyVz0wtyY9K3jPWRABLRCxSQxd6J5cay0mynCOoovYdZ1Ysq73j1Jh+IPgNXwDP7jw0WE2utMVfyGHauJ6dL1zcHKmbZa9WoOO6dc+iaqqlv5geizKPWOeKvlLIZ3JAOfcf2NhesTPuUOW90T08ywCQtGkt4R++vI2eFTAfKvecDwBlZ59g/3erkGmO14Zkp+Yth+QdO80nGkn1m05HFljArbtBTW3R91/RquIpezF6nlYfBYzrwLqvKcXcALjjCddmaXtsyG3p+J3e9m4gmYXbLBBTZRg+5ydPgnZkRkWpctv6RDzyHjZxzq6j+Xw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=UorGeRH2bJa9iLxYbSglBx9o6FQ1tj07XmvbncA39LA=;
 b=N58enbIV0RWIIjsUu3aMPEDUPOhzz3FcSNG6wJrORzFAl3nHt9r39VV4fe4Vork8IqalUp8Pk3DRM9kC0FMaHNKOh9gdaDn5YL3xi19HR+kX7qnTiZxHQTIcHDiTTvUZX6zROLOR3U1tFcHo5HWqXTat4GKoA+WBx34pV/pWtsApJ4ESIlrRosNsKT0AjW5W0Yg375lhQ+Ba8RxEjh2gQWQqmKqfLFtkeMJ0AHuL6ZuB7GxH24cxebURikcdsUuFeGSQ3uKzd53r9fMuB/FvcppJrzttmb1v0TB/5JbbWIbskYxFTjOI4zeGNOdMV6L1XKt2bdDcSJhRR9yDccggIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UorGeRH2bJa9iLxYbSglBx9o6FQ1tj07XmvbncA39LA=;
 b=WG/HhAjHsKE4RtAIDRZOr9UkXonhe6LK/E0qwxWAMqnpygBwYvwbNCBgzQNAxLwvHsfIQzqToiueOa7lxqf/s2Qu1xhrrOinRQvvU/1bLa8MhtXfZc63uxCOazXrxY9KgZsfxY3DTMqSNoGvX1TY+Ckml+eayQRG1cyalSR6S/s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <da9415ad-95cb-21bb-28ee-a007763a1e54@amd.com>
Date: Mon, 15 May 2023 11:07:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
Content-Language: en-US
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <bertrand.marquis@arm.com>, Rahul Singh
	<rahul.singh@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
 <dff8ab04-ae35-3a71-b923-abe722dcdb1c@xen.org>
 <f043c234-eb51-126f-1a1f-610796c203e8@amd.com>
 <bfc634ce-43f9-2617-eee7-6ce8ab15b6b1@xen.org>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <bfc634ce-43f9-2617-eee7-6ce8ab15b6b1@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT023:EE_|SA3PR12MB7880:EE_
X-MS-Office365-Filtering-Correlation-Id: 68d408fb-ca2b-42ce-3bf7-08db5523ea52
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 09:08:14.4812
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 68d408fb-ca2b-42ce-3bf7-08db5523ea52
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT023.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB7880



On 15/05/2023 11:06, Julien Grall wrote:
> 
> 
> On 15/05/2023 10:03, Michal Orzel wrote:
>> Hi Julien,
> 
> Hi Michal,
> 
>>
>> On 15/05/2023 10:56, Julien Grall wrote:
>>>
>>>
>>> Hi,
>>>
>>> On 12/05/2023 15:35, Michal Orzel wrote:
>>>> At the moment, even in case of a SMMU being I/O coherent, we clean the
>>>> updated PT as a result of not advertising the coherency feature. SMMUv3
>>>> coherency feature means that page table walks, accesses to memory
>>>> structures and queues are I/O coherent (refer ARM IHI 0070 E.A, 3.15).
>>>>
>>>> Follow the same steps that were done for SMMU v1,v2 driver by the commit:
>>>> 080dcb781e1bc3bb22f55a9dfdecb830ccbabe88
>>>>
>>>> The same restrictions apply, meaning that in order to advertise coherent
>>>> table walk platform feature, all the SMMU devices need to report coherency
>>>> feature. This is because the page tables (we are sharing them with CPU)
>>>> are populated before any device assignment and in case of a device being
>>>> behind non-coherent SMMU, we would have to scan the tables and clean
>>>> the cache.
>>>>
>>>> It is to be noted that the SBSA/BSA (refer ARM DEN0094C 1.0C, section D)
>>>> requires that all SMMUv3 devices support I/O coherency.
>>>>
>>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>>> ---
>>>> There are very few platforms out there with SMMUv3 but I have never seen
>>>> a SMMUv3 that is not I/O coherent.
>>>> ---
>>>>    xen/drivers/passthrough/arm/smmu-v3.c | 24 +++++++++++++++++++++++-
>>>>    1 file changed, 23 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
>>>> index bf053cdb6d5c..2adaad0fa038 100644
>>>> --- a/xen/drivers/passthrough/arm/smmu-v3.c
>>>> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
>>>> @@ -2526,6 +2526,15 @@ static const struct dt_device_match arm_smmu_of_match[] = {
>>>>    };
>>>>
>>>>    /* Start of Xen specific code. */
>>>> +
>>>> +/*
>>>> + * Platform features. It indicates the list of features supported by all
>>>> + * SMMUs. Actually we only care about coherent table walk, which in case of
>>>> + * SMMUv3 is implied by the overall coherency feature (refer ARM IHI 0070 E.A,
>>>> + * section 3.15 and SMMU_IDR0.COHACC bit description).
>>>> + */
>>>> +static uint32_t platform_features = ARM_SMMU_FEAT_COHERENCY;
>>>
>>> AFAICT, this variable is not meant to change after boot. So please add
>>> the attribute __ro_after_init.
>> Yes, that makes total sense. After probing, this variable is not meant to be modified.
>> Is it something that can be done on commit or would you want me to respin this patch?
> 
> I can do it on commit. With that:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>
Thanks,
Bear in mind that Rahul responded in HTML, so there will be a <mailto when using b4.

~Michal


From xen-devel-bounces@lists.xenproject.org Mon May 15 09:09:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:09:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534668.831915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUD5-0005jR-Az; Mon, 15 May 2023 09:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534668.831915; Mon, 15 May 2023 09:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUD5-0005jJ-77; Mon, 15 May 2023 09:09:39 +0000
Received: by outflank-mailman (input) for mailman id 534668;
 Mon, 15 May 2023 09:09:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyUD4-0005jA-T5; Mon, 15 May 2023 09:09:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyUD4-00029t-LZ; Mon, 15 May 2023 09:09:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyUD4-0004pU-8V; Mon, 15 May 2023 09:09:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyUD4-0003MH-82; Mon, 15 May 2023 09:09:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b53xVOFxRgzrL06z9NPgcXfA+t27fC5r3Rvv6ISMAdQ=; b=5wpF2Cq8Y8+FDoQ7kXoWsstZKH
	75Rx+cZIaYfAjB5fPr/1WHGWgL4yonaZ5r8ev3XNsluVj635Cqj8ZCXgw2ZXoD+9HQBFuriX2Tz8C
	OZITIbAPhaAWBB+W2KRHYtAjL25X+WZoKnezhcyKZcmoXm+EE0E9jRcAWLN1mxDgqFMc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180666-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180666: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:guest-start/redhat.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4c507d8a6b6e8be90881a335b0a66eb28e0f7737
X-Osstest-Versions-That:
    xen=4c507d8a6b6e8be90881a335b0a66eb28e0f7737
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 May 2023 09:09:38 +0000

flight 180666 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180666/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd 14 guest-start/redhat.repeat fail in 180660 pass in 180666
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180660

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 180660 like 180646
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install       fail like 180639
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat    fail  like 180646
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180660
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180660
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180660
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180660
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180660
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180660
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180660
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180660
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180660
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180660
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180660
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  4c507d8a6b6e8be90881a335b0a66eb28e0f7737
baseline version:
 xen                  4c507d8a6b6e8be90881a335b0a66eb28e0f7737

Last test of basis   180666  2023-05-15 01:52:07 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 15 09:12:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:12:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534675.831924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUG8-0007Ew-RY; Mon, 15 May 2023 09:12:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534675.831924; Mon, 15 May 2023 09:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUG8-0007Ep-P0; Mon, 15 May 2023 09:12:48 +0000
Received: by outflank-mailman (input) for mailman id 534675;
 Mon, 15 May 2023 09:12:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pyUG8-0007Ej-6k
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 09:12:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyUG7-0002CG-VP; Mon, 15 May 2023 09:12:47 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.27.136]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyUG7-000368-Pi; Mon, 15 May 2023 09:12:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=A7mHmKFDOHFXZ1P1TmBdlm7hU5IL7ZKFwITEz2mtWpc=; b=fFWxTCCf9u1JFoO2WnKdQsLqnj
	sElu9oBj7UmctmP4PYTo10loqOZn7TyrmOlrw9EiDXrvM7EvTJ5APEzZoB1YWbw5AcVSi1a66/wqp
	hqsPalzaOOSIN0uZtYsGCEEjcSiKulmWfb+NVSv+UFPV9LeGVoMuZo5NPS6I4YDAT5ik=;
Message-ID: <c6801d1f-ae14-28bc-53b1-e9d0c58468eb@xen.org>
Date: Mon, 15 May 2023 10:12:46 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH 2/2] xen/arm: smmuv3: Advertise coherent table walk if
 supported
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230512143535.29679-1-michal.orzel@amd.com>
 <20230512143535.29679-3-michal.orzel@amd.com>
 <dff8ab04-ae35-3a71-b923-abe722dcdb1c@xen.org>
 <f043c234-eb51-126f-1a1f-610796c203e8@amd.com>
 <bfc634ce-43f9-2617-eee7-6ce8ab15b6b1@xen.org>
 <da9415ad-95cb-21bb-28ee-a007763a1e54@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <da9415ad-95cb-21bb-28ee-a007763a1e54@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 15/05/2023 10:07, Michal Orzel wrote:
>> Reviewed-by: Julien Grall <jgrall@amazon.com>
> Thanks,
> Bear in mind that Rahul responded in HTML, so there will be a <mailto when using b4.

Thanks for the reminder! The patch is now committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 15 09:13:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:13:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534677.831935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUGf-0007jt-5K; Mon, 15 May 2023 09:13:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534677.831935; Mon, 15 May 2023 09:13:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUGf-0007jm-1O; Mon, 15 May 2023 09:13:21 +0000
Received: by outflank-mailman (input) for mailman id 534677;
 Mon, 15 May 2023 09:13:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pyUGe-0007je-30
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 09:13:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyUGd-0002EJ-S5; Mon, 15 May 2023 09:13:19 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.27.136]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyUGd-000384-MB; Mon, 15 May 2023 09:13:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <5c5b3924-57f4-284e-d1b4-d7de8c143bb9@xen.org>
Date: Mon, 15 May 2023 10:13:18 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [PATCH v2 0/2] xen/arm: domain_build: map_{dt_}irq_to_domain()
 fixes
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230511130218.22606-1-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230511130218.22606-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 11/05/2023 14:02, Michal Orzel wrote:
> Propagate return code correctly + fix the format specifiers.
> 
> Michal Orzel (2):
>    xen/arm: domain_build: Propagate return code of map_irq_to_domain()
>    xen/arm: domain_build: Fix format specifiers in
>      map_{dt_}irq_to_domain()

I have committed the series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 15 09:25:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:25:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534681.831945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUSR-0000xt-7Z; Mon, 15 May 2023 09:25:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534681.831945; Mon, 15 May 2023 09:25:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUSR-0000xm-3w; Mon, 15 May 2023 09:25:31 +0000
Received: by outflank-mailman (input) for mailman id 534681;
 Mon, 15 May 2023 09:25:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pyUSP-0000xg-HC
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 09:25:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyUSP-0002Q3-0d; Mon, 15 May 2023 09:25:29 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.27.136]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyUSO-0003ZJ-Py; Mon, 15 May 2023 09:25:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <175d5e01-6258-edcc-bddd-05ff9e1eb547@xen.org>
Date: Mon, 15 May 2023 10:25:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-12-ayan.kumar.halder@amd.com>
 <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
 <4681a4d4-68d3-01cd-912c-bca2cdc83266@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <4681a4d4-68d3-01cd-912c-bca2cdc83266@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Ayan,

On 11/05/2023 12:45, Ayan Kumar Halder wrote:
> 
> On 03/05/2023 13:20, Julien Grall wrote:
>> Hi,
> 
> Hi Julien,
> 
> I need some clarification.
> 
>>
>> On 28/04/2023 18:55, Ayan Kumar Halder wrote:
>>> Restructure the code so that one can use pa_range_info[] table for both
>>> ARM_32 as well as ARM_64.
>>>
>>> Also, removed the hardcoding for P2M_ROOT_ORDER and P2M_ROOT_LEVEL as
>>> p2m_root_order can be obtained from the pa_range_info[].root_order and
>>> p2m_root_level can be obtained from pa_range_info[].sl0.
>>>
>>> Refer ARM DDI 0406C.d ID040418, B3-1345,
>>> "Use of concatenated first-level translation tables
>>>
>>> ...However, a 40-bit input address range with a translation 
>>> granularity of 4KB
>>> requires a total of 28 bits of address resolution. Therefore, a stage 2
>>> translation that supports a 40-bit input address range requires two 
>>> concatenated
>>> first-level translation tables,..."
>>>
>>> Thus, root-order is 1 for 40-bit IPA on ARM_32.
>>>
>>> Refer ARM DDI 0406C.d ID040418, B3-1348,
>>>
>>> "Determining the required first lookup level for stage 2 translations
>>>
>>> For a stage 2 translation, the output address range from the stage 1
>>> translations determines the required input address range for the stage 2
>>> translation. The permitted values of VTCR.SL0 are:
>>>
>>> 0b00 Stage 2 translation lookup must start at the second level.
>>> 0b01 Stage 2 translation lookup must start at the first level.
>>>
>>> VTCR.T0SZ must indicate the required input address range. The size of 
>>> the input
>>> address region is 2^(32-T0SZ) bytes."
>>>
>>> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of 
>>> input
>>> address region is 2^40 bytes.
>>>
>>> Thus, pa_range_info[].t0sz = 1 (VTCR.S) | 8 (VTCR.T0SZ) ie 11000b 
>>> which is 24.
>>>
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>> ---
>>> Changes from -
>>>
>>> v3 - 1. New patch introduced in v4.
>>> 2. Restructure the code such that pa_range_info[] is used both by 
>>> ARM_32 as
>>> well as ARM_64.
>>>
>>> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and 
>>> P2M_ROOT_LEVEL.
>>> The reason being root_order will not be always 1 (See the next patch).
>>> 2. Updated the commit message to explain t0sz, sl0 and root_order 
>>> values for
>>> 32-bit IPA on Arm32.
>>> 3. Some sanity fixes.
>>>
>>> v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range. ie
>>> when PARange is 0, the PA size is 32, 1 -> 36 and so on. So 
>>> pa_range_info[] has
>>> been updated accordingly.
>>> For ARM_32 pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do not 
>>> support
>>> 32-bit, 36-bit physical address range yet.
>>>
>>>   xen/arch/arm/include/asm/p2m.h |  8 +-------
>>>   xen/arch/arm/p2m.c             | 32 ++++++++++++++++++--------------
>>>   2 files changed, 19 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/include/asm/p2m.h 
>>> b/xen/arch/arm/include/asm/p2m.h
>>> index f67e9ddc72..4ddd4643d7 100644
>>> --- a/xen/arch/arm/include/asm/p2m.h
>>> +++ b/xen/arch/arm/include/asm/p2m.h
>>> @@ -14,16 +14,10 @@
>>>   /* Holds the bit size of IPAs in p2m tables.  */
>>>   extern unsigned int p2m_ipa_bits;
>>>   -#ifdef CONFIG_ARM_64
>>>   extern unsigned int p2m_root_order;
>>>   extern unsigned int p2m_root_level;
>>> -#define P2M_ROOT_ORDER    p2m_root_order
>>> +#define P2M_ROOT_ORDER p2m_root_order
>>
>> This looks like a spurious change.
>>
>>>   #define P2M_ROOT_LEVEL p2m_root_level
>>> -#else
>>> -/* First level P2M is always 2 consecutive pages */
>>> -#define P2M_ROOT_ORDER    1
>>> -#define P2M_ROOT_LEVEL 1
>>> -#endif
>>>     struct domain;
>>>   diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 418997843d..1fe3cccf46 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -19,9 +19,9 @@
>>>     #define INVALID_VMID 0 /* VMID 0 is reserved */
>>>   -#ifdef CONFIG_ARM_64
>>>   unsigned int __read_mostly p2m_root_order;
>>>   unsigned int __read_mostly p2m_root_level;
>>> +#ifdef CONFIG_ARM_64
>>>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>>   /* VMID is by default 8 bit width on AArch64 */
>>>   #define MAX_VMID       max_vmid
>>> @@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
>>>       /* Setup Stage 2 address translation */
>>>       register_t val = 
>>> VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>>>   -#ifdef CONFIG_ARM_32
>>> -    if ( p2m_ipa_bits < 40 )
>>> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
>>> -              p2m_ipa_bits);
>>> -
>>> -    printk("P2M: 40-bit IPA\n");
>>> -    p2m_ipa_bits = 40;
>>> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
>>> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
>>> -#else /* CONFIG_ARM_64 */
>>>       static const struct {
>>>           unsigned int pabits; /* Physical Address Size */
>>>           unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
>>> @@ -2265,19 +2255,26 @@ void __init setup_virt_paging(void)
>>>       } pa_range_info[] __initconst = {
>>>           /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table 
>>> D5-6 */
>>>           /*      PA size, t0sz(min), root-order, sl0(max) */
>>> +        [2] = { 40,      24/*24*/,  1,          1 },
>>
>> I don't like the fact that the index are not ordered anymore and...
>>
>>> +#ifdef CONFIG_ARM_64
>>>           [0] = { 32,      32/*32*/,  0,          1 },
>>>           [1] = { 36,      28/*28*/,  0,          1 },
>>> -        [2] = { 40,      24/*24*/,  1,          1 },
>>>           [3] = { 42,      22/*22*/,  3,          1 },
>>>           [4] = { 44,      20/*20*/,  0,          2 },
>>>           [5] = { 48,      16/*16*/,  0,          2 },
>>>           [6] = { 52,      12/*12*/,  4,          2 },
>>>           [7] = { 0 }  /* Invalid */
>>> +#else
>>> +        [0] = { 0 },  /* Invalid */
>>> +        [1] = { 0 },  /* Invalid */
>>> +        [3] = { 0 }  /* Invalid */
>>> +#endif
>>
>> ... it is not clear to me why we are adding 3 extra entries. I think 
>> it would be better if we do:
>>
>> #ifdef CONFIG_ARM_64
>>    [0] ...
>>    [1] ...
>> #endif
>>    [2] ...
>> #ifdef CONFIG_ARM_64
>>    [3] ...
>>    [4] ...
>>    ...
>> #endif
>>
>>>       };
>>>         unsigned int i;
>>>       unsigned int pa_range = 0x10; /* Larger than any possible value */
>>>   +#ifdef CONFIG_ARM_64
>>>       /*
>>>        * Restrict "p2m_ipa_bits" if needed. As P2M table is always 
>>> configured
>>>        * with IPA bits == PA bits, compare against "pabits".
>>> @@ -2291,6 +2288,9 @@ void __init setup_virt_paging(void)
>>>        */
>>>       if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
>>>           max_vmid = MAX_VMID_16_BIT;
>>> +#else
>>> +    p2m_ipa_bits = PADDR_BITS;
>>> +#endif
>> Why do we need to reset p2m_ipa_bits for Arm?
> 
> Ah, this is a mistake. I will remove this.
> 
>>
>>>         /* Choose suitable "pa_range" according to the resulted 
>>> "p2m_ipa_bits". */
>>>       for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
>>> @@ -2306,24 +2306,28 @@ void __init setup_virt_paging(void)
>>>       if ( pa_range >= ARRAY_SIZE(pa_range_info) || 
>>> !pa_range_info[pa_range].pabits )
>>>           panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", 
>>> pa_range);
>>>   +#ifdef CONFIG_ARM_64
>>>       val |= VTCR_PS(pa_range);
>>>       val |= VTCR_TG0_4K;
>>>         /* Set the VS bit only if 16 bit VMID is supported. */
>>>       if ( MAX_VMID == MAX_VMID_16_BIT )
>>>           val |= VTCR_VS;
>>> +
>>> +    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
>>> +#endif
>>> +
>>>       val |= VTCR_SL0(pa_range_info[pa_range].sl0);
>>>       val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
>>>         p2m_root_order = pa_range_info[pa_range].root_order;
>>>       p2m_root_level = 2 - pa_range_info[pa_range].sl0;
>>> -    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
>>
>> I think this line should stay for 32-bit as well because we 
>> p2m_ipa_bits should be based on the PA range we selected (see the loop 
>> 'Choose suitable "pa_range"...').
> 
> This isn't true for ARM_32.

You are correct that the line is not right for Arm32. But my point was 
more about the fact that you don't update p2m_ipa_bits based on the PA 
range selected.

> 
> Refer to ARM DDI 0406C.d ID040418, B3-1348, "Determining the required 
> first lookup level for stage 2 translations"
> 
> "...The size of this input address region is 2^(32-TxSZ) bytes, ..."
> 
> So for
> 
> #ifdef CONFIG_ARM_32
> 
> p2m_ipa_bits = 32 - pa_range_info[pa_range].t0sz;
> 
> #endif
> 
> This will be a problem for 40-bit PA as T0SZ = 24.
> 
> So p2m_ipa_bits = 32 - 24 = 8 (which is incorrect).
> 
> 
> To get around this issue, there are two possible solutions :-
> 
> 1. For ARM_32, do not modify p2m_ipa_bits. Thus p2m_ipa_bits will keep 
> its initialized value (i.e. PADDR_BITS).

AFAICT, this approach would be incorrect because we wouldn't take into 
account any restriction from the SMMU subsystem (it may support less 
than what the processor supports).

> 
> 2. T0SZ should be signed int for ARM_32 (so that it can hold -8) and 
> unsigned int for ARM_64.
> 
> ie
> 
>      static const struct {
>          unsigned int pabits; /* Physical Address Size */
> #ifdef CONFIG_ARM_64
>          unsigned int t0sz:5;   /* Desired T0SZ, minimum in comment */
> #else
>          signed int t0sz:5;   /* Desired T0SZ, minimum in comment */
> #endif
>          unsigned int root_order; /* Page order of the root of the p2m */
>          unsigned int sl0;    /* Desired SL0, maximum in comment */
>      } pa_range_info[] __initconst = {
> ....
> 
> 
> I would prefer option 1 for the sake of fewer #ifdefs.
I don't think the two options are equivalent. So my first priority is 
correctness; then, if we still have multiple possibilities, we can 
discuss which one looks the nicest to read.

To avoid the #ifdef in the struct, we could possibly use a series of 
casts. It might be slightly better because a reader will be less 
confused by the type change.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 15 09:44:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 09:44:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534684.831955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUkj-0003Pq-OA; Mon, 15 May 2023 09:44:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534684.831955; Mon, 15 May 2023 09:44:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyUkj-0003Pj-LQ; Mon, 15 May 2023 09:44:25 +0000
Received: by outflank-mailman (input) for mailman id 534684;
 Mon, 15 May 2023 09:44:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CjwX=BE=citrix.com=prvs=492a8bb35=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pyUkh-0003Pd-4K
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 09:44:23 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0eef9cf7-f305-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 11:44:18 +0200 (CEST)
Received: from mail-dm6nam11lp2173.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.173])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 15 May 2023 05:44:15 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM8PR03MB6264.namprd03.prod.outlook.com (2603:10b6:8:29::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 09:44:11 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 09:44:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0eef9cf7-f305-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 15 May 2023 11:44:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Message-ID: <ZGH+5OKqnjTjUr/F@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230513011720.3978354-2-sstabellini@kernel.org>
X-ClientProxiedBy: LO6P265CA0003.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:339::10) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM8PR03MB6264:EE_
X-MS-Office365-Filtering-Correlation-Id: 43f12eea-36a7-4c9e-a857-08db5528ef1d
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 43f12eea-36a7-4c9e-a857-08db5528ef1d
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 09:44:10.4802
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: E72tKlpd79+GIl3ZWP2mNSERmcLVEgjWtEkqorACX1PkEzoB6wth19quqMyzLy+2Rj0yVhwNEvsclYv7ae3lOQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM8PR03MB6264

On Fri, May 12, 2023 at 06:17:20PM -0700, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> the tables in the guest. Instead, copy the tables to Dom0.
> 
> This is a workaround.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> ---
> As mentioned in the cover letter, this is an RFC workaround as I don't
> know the cause of the underlying problem. I do know that this patch
> solves what would be otherwise a hang at boot when Dom0 PVH attempts to
> parse ACPI tables.

I'm unsure how safe this is for native systems, as it's possible for
firmware to modify the data in the tables, so copying them would
break that functionality.

I think we need to get to the root cause that triggers this behavior
on QEMU.  Is it the table checksum that fails, or something else?  Is
there an error from Linux you could reference?
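(For reference, the ACPI validity rule the checksum question alludes to: all bytes of a table, the Checksum field included, must sum to zero modulo 256.  A minimal standalone sketch of that check — illustrative only, not Xen's actual verification code:)

```c
#include <stdint.h>

/* An ACPI table is valid when the byte sum over its full length,
 * including the Checksum field itself, is 0 (mod 256). */
static uint8_t acpi_checksum(const uint8_t *table, unsigned int len)
{
    uint8_t sum = 0;

    while ( len-- )
        sum += *table++;

    return sum; /* 0 => table passes the checksum */
}
```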

I've got some feedback below, but I'm unsure copying is the correct
approach.

> ---
>  xen/arch/x86/hvm/dom0_build.c | 107 +++++++++-------------------------
>  1 file changed, 27 insertions(+), 80 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> index 5fde769863..a6037fc6ed 100644
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -73,32 +73,6 @@ static void __init print_order_stats(const struct domain *d)
>              printk("order %2u allocations: %u\n", i, order_stats[i]);
>  }
>  
> -static int __init modify_identity_mmio(struct domain *d, unsigned long pfn,
> -                                       unsigned long nr_pages, const bool map)
> -{
> -    int rc;
> -
> -    for ( ; ; )
> -    {
> -        rc = map ?   map_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn))
> -                 : unmap_mmio_regions(d, _gfn(pfn), nr_pages, _mfn(pfn));
> -        if ( rc == 0 )
> -            break;
> -        if ( rc < 0 )
> -        {
> -            printk(XENLOG_WARNING
> -                   "Failed to identity %smap [%#lx,%#lx) for d%d: %d\n",
> -                   map ? "" : "un", pfn, pfn + nr_pages, d->domain_id, rc);
> -            break;
> -        }
> -        nr_pages -= rc;
> -        pfn += rc;
> -        process_pending_softirqs();
> -    }
> -
> -    return rc;
> -}
> -
>  /* Populate a HVM memory range using the biggest possible order. */
>  static int __init pvh_populate_memory_range(struct domain *d,
>                                              unsigned long start,
> @@ -967,6 +941,8 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
>      unsigned long size = sizeof(*xsdt);
>      unsigned int i, j, num_tables = 0;
>      int rc;
> +    struct acpi_table_fadt fadt;
> +    unsigned long fadt_addr = 0, dsdt_addr = 0, facs_addr = 0, fadt_size = 0;

paddr_t and size_t would be better.

>      struct acpi_table_header header = {
>          .signature    = "XSDT",
>          .length       = sizeof(struct acpi_table_header),
> @@ -1013,10 +989,33 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
>      /* Copy the addresses of the rest of the allowed tables. */
>      for( i = 0, j = 1; i < acpi_gbl_root_table_list.count; i++ )
>      {
> +        void *table;

const __iomem.

> +
> +        pvh_steal_ram(d, tables[i].length, 0, GB(4), addr);
> +        table = acpi_os_map_memory(tables[i].address, tables[i].length);
> +        hvm_copy_to_guest_phys(*addr, table, tables[i].length, d->vcpu[0]);
> +        pvh_add_mem_range(d, *addr, *addr + tables[i].length, E820_ACPI);

Need to check for errors in the calls above.
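(As an illustration of the pattern — a standalone sketch with made-up stubs; fake_steal_ram() and fake_map() merely mimic the failure conventions of pvh_steal_ram() and acpi_os_map_memory(), they are not Xen code:)

```c
#include <stddef.h>

/* Hypothetical stand-ins: a negative errno-style rc for the allocator,
 * NULL for a failed mapping, mirroring the Xen helpers' conventions. */
static int fake_steal_ram(int fail) { return fail ? -12 /* ENOMEM */ : 0; }
static void *fake_map(int fail) { static char buf[16]; return fail ? NULL : buf; }

/* Check every call and propagate the first failure instead of
 * silently continuing with a bogus address or mapping. */
static int copy_one_table(int steal_fails, int map_fails)
{
    int rc = fake_steal_ram(steal_fails);

    if ( rc )
        return rc;

    if ( !fake_map(map_fails) )
        return -14; /* EFAULT */

    /* hvm_copy_to_guest_phys() and pvh_add_mem_range() would be
     * checked the same way before recording the table's address. */
    return 0;
}
```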

> +
> +        if ( !strncmp(tables[i].signature.ascii, ACPI_SIG_FADT, ACPI_NAME_SIZE) )
> +        {
> +            memcpy(&fadt, table, tables[i].length);
> +            fadt_addr = *addr;
> +            fadt_size = tables[i].length;
> +        }
> +        else if ( !strncmp(tables[i].signature.ascii, ACPI_SIG_DSDT, ACPI_NAME_SIZE) )
> +                dsdt_addr = *addr;
> +        else if ( !strncmp(tables[i].signature.ascii, ACPI_SIG_FACS, ACPI_NAME_SIZE) )
> +                facs_addr = *addr;

Wrong indentation.

> +
>          if ( pvh_acpi_xsdt_table_allowed(tables[i].signature.ascii,
> -                                         tables[i].address, tables[i].length) )
> -            xsdt->table_offset_entry[j++] = tables[i].address;
> +                    tables[i].address, tables[i].length) )

Unrelated whitespace adjustment?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 15 10:01:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 10:01:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534689.831965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyV0j-00062U-7l; Mon, 15 May 2023 10:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534689.831965; Mon, 15 May 2023 10:00:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyV0j-00062N-4E; Mon, 15 May 2023 10:00:57 +0000
Received: by outflank-mailman (input) for mailman id 534689;
 Mon, 15 May 2023 10:00:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+TBY=BE=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pyV0i-00062H-FS
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 10:00:56 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on062e.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 609923a1-f307-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 12:00:53 +0200 (CEST)
Received: from AM0PR02CA0025.eurprd02.prod.outlook.com (2603:10a6:208:3e::38)
 by GV2PR08MB8317.eurprd08.prod.outlook.com (2603:10a6:150:bf::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.29; Mon, 15 May
 2023 10:00:46 +0000
Received: from AM7EUR03FT024.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:3e:cafe::92) by AM0PR02CA0025.outlook.office365.com
 (2603:10a6:208:3e::38) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30 via Frontend
 Transport; Mon, 15 May 2023 10:00:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT024.mail.protection.outlook.com (100.127.140.238) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.14 via Frontend Transport; Mon, 15 May 2023 10:00:45 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Mon, 15 May 2023 10:00:45 +0000
Received: from 4406fc9b615e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8423407F-DEF3-456B-B1C7-5BF678BDBAE1.1; 
 Mon, 15 May 2023 10:00:34 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4406fc9b615e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 15 May 2023 10:00:34 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAWPR08MB9567.eurprd08.prod.outlook.com (2603:10a6:102:2f1::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 10:00:31 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%6]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 10:00:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 609923a1-f307-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GOCxGocHzqSl3jRwKGH2m9GJtBz/aP0wlosl588qSHg=;
 b=gbqVFc/r4q+8AhlOPMP4uvyAKZlZ6bkvJLqOu99DStwVOd4rtMtd369SxE+nQIldM4L4VD4VrPFYixMOr3oL7hV00m08EaMXN9nRYGaeEjD5EY8PXKgoOU4Ua28XeALtSBpS6iUE/Ttjj6haLSEWzink8Ut7cHPJnEqKYVLy73c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 88c9a61ed72b391c
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	"roger.pau@citrix.com" <roger.pau@citrix.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "julien@xen.org" <julien@xen.org>, Stefano
 Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH] docs/misra: adds Mandatory rules
Thread-Topic: [PATCH] docs/misra: adds Mandatory rules
Thread-Index: AQHZhF+KuQT3lWH1wk6jaT1DLTQ+Ka9bH12A
Date: Mon, 15 May 2023 10:00:31 +0000
Message-ID: <BA0A6A76-EF0C-4D2F-B520-084758461CA7@arm.com>
References: <20230511232237.3720769-1-sstabellini@kernel.org>
In-Reply-To: <20230511232237.3720769-1-sstabellini@kernel.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAWPR08MB9567:EE_|AM7EUR03FT024:EE_|GV2PR08MB8317:EE_
X-MS-Office365-Filtering-Correlation-Id: 1d883978-6d48-4fe4-d176-08db552b4083
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <E253218CF4B1EF4AAEDB0E0730BB430B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9567
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT024.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0b1241c1-c35b-493c-1e5d-08db552b37db
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 10:00:45.6767
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1d883978-6d48-4fe4-d176-08db552b4083
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT024.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8317

> On 12 May 2023, at 00:22, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> Add the Mandatory rules agreed by the MISRA C working group to
> docs/misra/rules.rst.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> ---

Hi Stefano,

I've tried this patch with our integration tool xen-analysis.py and it works, I've been
able to successfully produce cppcheck reports.

I've checked links and rule text against the misra C docs, everything looks fine.

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>


From xen-devel-bounces@lists.xenproject.org Mon May 15 10:03:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 10:03:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534692.831975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyV2X-0006ZY-KA; Mon, 15 May 2023 10:02:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534692.831975; Mon, 15 May 2023 10:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyV2X-0006ZR-G9; Mon, 15 May 2023 10:02:49 +0000
Received: by outflank-mailman (input) for mailman id 534692;
 Mon, 15 May 2023 10:02:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWdz=BE=citrix.com=prvs=492993889=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyV2W-0006ZL-Bw
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 10:02:48 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a35fef78-f307-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 12:02:46 +0200 (CEST)
Received: from mail-mw2nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 15 May 2023 06:02:29 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by CH2PR03MB5237.namprd03.prod.outlook.com (2603:10b6:610:9c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 10:02:27 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 10:02:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a35fef78-f307-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684144966;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ba3LveGlMFeP7ck9gTdobzK5ke7fvb5sP6oy9OLcNhk=;
  b=aJUDSEzpsxJOYMYI8TJ7ECqsxTtn5CGRY/FmsWRjaDjpi7amn/jbKj5i
   Fj415aub1/DdpHwBEzvPJyU62gnK4VmBa8fLOAHa0YLVE5txK+gbFMudH
   f+Cpu+aqoqz+VMcIsNHyaJpoS/LWkMVPT1F2IoqMlqEWRsMOsOFC6SNEE
   g=;
X-IronPort-RemoteIP: 104.47.55.100
X-IronPort-MID: 111500283
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,276,1677560400"; 
   d="scan'208";a="111500283"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QyxrH1OZgCGkCuGj3opBlUiDQhuFXBUTBu1Jug+qvqA=;
 b=HEge8/A1kB6/jmXbHa+wQdGcclNwDdQR5JZD9P0eVSAR/um2qthxuDTqFrTq3Pw3/pGxrCb3L1reENXMzrqtoRc/R5t6KFxUxU37nYIKcGQnPn45VY1aya+Vy3fhaR8jj5xZt6m1IfKdXdxiu0TUJ+BMTEMZh1wQW+o0Tdn0WwY=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <e2ea24f6-fe4e-d7ae-f663-71f49811e8b5@citrix.com>
Date: Mon, 15 May 2023 11:02:21 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/cpuid: Calculate FEATURESET_NR_ENTRIES more helpfully
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230512124551.443139-1-andrew.cooper3@citrix.com>
 <687ac0c3-9364-f03e-8f7e-afaf985618aa@suse.com>
In-Reply-To: <687ac0c3-9364-f03e-8f7e-afaf985618aa@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0349.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18d::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|CH2PR03MB5237:EE_
X-MS-Office365-Filtering-Correlation-Id: dc5efaca-ea36-4fab-2f45-08db552b7cfc
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dc5efaca-ea36-4fab-2f45-08db552b7cfc
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 10:02:27.4865
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: D23SuBeZHGULwGFLI0k3vfdcZ5w2OhYYhzgAEmBe8ErKc0YkEvTgQ6RUog0PtVWVDWym2Mdpy1W1BJDnJvQUdcX2FbIa6bBMgy0OuQCgeCM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR03MB5237

On 15/05/2023 8:46 am, Jan Beulich wrote:
> On 12.05.2023 14:45, Andrew Cooper wrote:
>> When adding new featureset words, it is convenient to split the work into
>> several patches.  However, GCC 12 spotted that the way we prefer to split the
>> work results in a real (transient) breakage whereby the policy <-> featureset
>> helpers perform out-of-bounds accesses on the featureset array.
>>
>> Fix this by having gen-cpuid.py calculate FEATURESET_NR_ENTRIES from the
>> comments describing the word blocks, rather than from the XEN_CPUFEATURE()
>> with the greatest value.
>>
>> For simplicity, require that the word blocks appear in order.  This can be
>> revisited if we find a good reason to have blocks out of order.
>>
>> No functional change.
>>
>> Reported-by: Jan Beulich <jbeulich@suse.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> As far as my Python goes:
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> Just one remark further down.
>
>> This supersedes the entire "x86: Fix transient build breakage with featureset
>> additions" series, but doesn't really feel as if it ought to be labelled v2
> Thank you for re-doing this altogether. I think it's safer this way,
> and it now being less intrusive is imo also beneficial.

Yeah, now I've done both I do prefer this version.

>> --- a/xen/tools/gen-cpuid.py
>> +++ b/xen/tools/gen-cpuid.py
>> @@ -94,6 +118,15 @@ def parse_definitions(state):
>>      if len(state.names) == 0:
>>          raise Fail("No features found")
>>  
>> +    if state.nr_entries == 0:
>> +        raise Fail("No featureset word info found")
>> +
>> +    max_val = max(state.names.keys())
>> +    if (max_val >> 5) + 1 > state.nr_entries:
> Maybe
>
>     if (max_val >> 5) >= state.nr_entries:

Done.

~Andrew
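[Editorial note: the equivalence Jan points out between the two bounds checks can be verified with a small standalone model. `featureset_overflows` is an illustrative name, not part of gen-cpuid.py.]

```python
# Minimal standalone model of the bounds check discussed above.
# Feature constants in gen-cpuid.py encode (word << 5) | bit, so the
# featureset word holding the highest feature is max_val >> 5.

def featureset_overflows(max_val, nr_entries):
    # The original form and Jan's suggested form are equivalent:
    # (max_val >> 5) + 1 > nr_entries  <=>  (max_val >> 5) >= nr_entries
    assert ((max_val >> 5) + 1 > nr_entries) == ((max_val >> 5) >= nr_entries)
    return (max_val >> 5) >= nr_entries
```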


From xen-devel-bounces@lists.xenproject.org Mon May 15 10:30:51 2023
Message-ID: <72fa0686-2703-6682-fe06-2fca14ff1986@amd.com>
Date: Mon, 15 May 2023 11:30:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
Subject: Re: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-12-ayan.kumar.halder@amd.com>
 <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
 <4681a4d4-68d3-01cd-912c-bca2cdc83266@amd.com>
 <175d5e01-6258-edcc-bddd-05ff9e1eb547@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <175d5e01-6258-edcc-bddd-05ff9e1eb547@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


On 15/05/2023 10:25, Julien Grall wrote:
> Hi Ayan,
Hi Julien,
>
> On 11/05/2023 12:45, Ayan Kumar Halder wrote:
>>
>> On 03/05/2023 13:20, Julien Grall wrote:
>>> Hi,
>>
>> Hi Julien,
>>
>> I have some clarification.
>>
>>>
>>> On 28/04/2023 18:55, Ayan Kumar Halder wrote:
>>>> Restructure the code so that one can use the pa_range_info[] table for
>>>> both ARM_32 and ARM_64.
>>>>
>>>> Also, removed the hardcoding for P2M_ROOT_ORDER and P2M_ROOT_LEVEL as
>>>> p2m_root_order can be obtained from the pa_range_info[].root_order and
>>>> p2m_root_level can be obtained from pa_range_info[].sl0.
>>>>
>>>> Refer ARM DDI 0406C.d ID040418, B3-1345,
>>>> "Use of concatenated first-level translation tables
>>>>
>>>> ...However, a 40-bit input address range with a translation 
>>>> granularity of 4KB
>>>> requires a total of 28 bits of address resolution. Therefore, a 
>>>> stage 2
>>>> translation that supports a 40-bit input address range requires two 
>>>> concatenated
>>>> first-level translation tables,..."
>>>>
>>>> Thus, root-order is 1 for 40-bit IPA on ARM_32.
>>>>
>>>> Refer ARM DDI 0406C.d ID040418, B3-1348,
>>>>
>>>> "Determining the required first lookup level for stage 2 translations
>>>>
>>>> For a stage 2 translation, the output address range from the stage 1
>>>> translations determines the required input address range for the 
>>>> stage 2
>>>> translation. The permitted values of VTCR.SL0 are:
>>>>
>>>> 0b00 Stage 2 translation lookup must start at the second level.
>>>> 0b01 Stage 2 translation lookup must start at the first level.
>>>>
>>>> VTCR.T0SZ must indicate the required input address range. The size 
>>>> of the input
>>>> address region is 2^(32-T0SZ) bytes."
>>>>
>>>> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size 
>>>> of input
>>>> address region is 2^40 bytes.
>>>>
>>>> Thus, pa_range_info[].t0sz = 1 (VTCR.S) | 8 (VTCR.T0SZ) ie 11000b 
>>>> which is 24.
>>>>
>>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>>> ---
>>>> Changes from -
>>>>
>>>> v3 - 1. New patch introduced in v4.
>>>> 2. Restructure the code such that pa_range_info[] is used both by 
>>>> ARM_32 as
>>>> well as ARM_64.
>>>>
>>>> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and 
>>>> P2M_ROOT_LEVEL.
>>>> The reason being root_order will not be always 1 (See the next patch).
>>>> 2. Updated the commit message to explain t0sz, sl0 and root_order 
>>>> values for
>>>> 32-bit IPA on Arm32.
>>>> 3. Some sanity fixes.
>>>>
>>>> v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range. ie
>>>> when PARange is 0, the PA size is 32, 1 -> 36 and so on. So 
>>>> pa_range_info[] has
>>>> been updated accordingly.
>>>> For ARM_32 pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do 
>>>> not support
>>>> 32-bit, 36-bit physical address range yet.
>>>>
>>>>   xen/arch/arm/include/asm/p2m.h |  8 +-------
>>>>   xen/arch/arm/p2m.c             | 32 ++++++++++++++++++--------------
>>>>   2 files changed, 19 insertions(+), 21 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/include/asm/p2m.h 
>>>> b/xen/arch/arm/include/asm/p2m.h
>>>> index f67e9ddc72..4ddd4643d7 100644
>>>> --- a/xen/arch/arm/include/asm/p2m.h
>>>> +++ b/xen/arch/arm/include/asm/p2m.h
>>>> @@ -14,16 +14,10 @@
>>>>   /* Holds the bit size of IPAs in p2m tables.  */
>>>>   extern unsigned int p2m_ipa_bits;
>>>>   -#ifdef CONFIG_ARM_64
>>>>   extern unsigned int p2m_root_order;
>>>>   extern unsigned int p2m_root_level;
>>>> -#define P2M_ROOT_ORDER    p2m_root_order
>>>> +#define P2M_ROOT_ORDER p2m_root_order
>>>
>>> This looks like a spurious change.
>>>
>>>>   #define P2M_ROOT_LEVEL p2m_root_level
>>>> -#else
>>>> -/* First level P2M is always 2 consecutive pages */
>>>> -#define P2M_ROOT_ORDER    1
>>>> -#define P2M_ROOT_LEVEL 1
>>>> -#endif
>>>>     struct domain;
>>>>   diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index 418997843d..1fe3cccf46 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -19,9 +19,9 @@
>>>>     #define INVALID_VMID 0 /* VMID 0 is reserved */
>>>>   -#ifdef CONFIG_ARM_64
>>>>   unsigned int __read_mostly p2m_root_order;
>>>>   unsigned int __read_mostly p2m_root_level;
>>>> +#ifdef CONFIG_ARM_64
>>>>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>>>   /* VMID is by default 8 bit width on AArch64 */
>>>>   #define MAX_VMID       max_vmid
>>>> @@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
>>>>       /* Setup Stage 2 address translation */
>>>>       register_t val = 
>>>> VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>>>>   -#ifdef CONFIG_ARM_32
>>>> -    if ( p2m_ipa_bits < 40 )
>>>> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
>>>> -              p2m_ipa_bits);
>>>> -
>>>> -    printk("P2M: 40-bit IPA\n");
>>>> -    p2m_ipa_bits = 40;
>>>> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
>>>> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
>>>> -#else /* CONFIG_ARM_64 */
>>>>       static const struct {
>>>>           unsigned int pabits; /* Physical Address Size */
>>>>           unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
>>>> @@ -2265,19 +2255,26 @@ void __init setup_virt_paging(void)
>>>>       } pa_range_info[] __initconst = {
>>>>           /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a 
>>>> Table D5-6 */
>>>>           /*      PA size, t0sz(min), root-order, sl0(max) */
>>>> +        [2] = { 40,      24/*24*/,  1,          1 },
>>>
>>> I don't like the fact that the index are not ordered anymore and...
>>>
>>>> +#ifdef CONFIG_ARM_64
>>>>           [0] = { 32,      32/*32*/,  0,          1 },
>>>>           [1] = { 36,      28/*28*/,  0,          1 },
>>>> -        [2] = { 40,      24/*24*/,  1,          1 },
>>>>           [3] = { 42,      22/*22*/,  3,          1 },
>>>>           [4] = { 44,      20/*20*/,  0,          2 },
>>>>           [5] = { 48,      16/*16*/,  0,          2 },
>>>>           [6] = { 52,      12/*12*/,  4,          2 },
>>>>           [7] = { 0 }  /* Invalid */
>>>> +#else
>>>> +        [0] = { 0 },  /* Invalid */
>>>> +        [1] = { 0 },  /* Invalid */
>>>> +        [3] = { 0 }  /* Invalid */
>>>> +#endif
>>>
>>> ... it is not clear to me why we are adding 3 extra entries. I think 
>>> it would be better if we do:
>>>
>>> #ifdef CONFIG_ARM_64
>>>    [0] ...
>>>    [1] ...
>>> #endif
>>>    [2] ...
>>> #ifdef CONFIG_ARM_64
>>>    [3] ...
>>>    [4] ...
>>>    ...
>>> #endif
>>>
>>>>       };
>>>>         unsigned int i;
>>>>       unsigned int pa_range = 0x10; /* Larger than any possible 
>>>> value */
>>>>   +#ifdef CONFIG_ARM_64
>>>>       /*
>>>>        * Restrict "p2m_ipa_bits" if needed. As P2M table is always 
>>>> configured
>>>>        * with IPA bits == PA bits, compare against "pabits".
>>>> @@ -2291,6 +2288,9 @@ void __init setup_virt_paging(void)
>>>>        */
>>>>       if ( system_cpuinfo.mm64.vmid_bits == 
>>>> MM64_VMID_16_BITS_SUPPORT )
>>>>           max_vmid = MAX_VMID_16_BIT;
>>>> +#else
>>>> +    p2m_ipa_bits = PADDR_BITS;
>>>> +#endif
>>> Why do we need to reset p2m_ipa_bits for Arm?
>>
>> Ah, this is a mistake. I will remove this.
>>
>>>
>>>>         /* Choose suitable "pa_range" according to the resulted 
>>>> "p2m_ipa_bits". */
>>>>       for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
>>>> @@ -2306,24 +2306,28 @@ void __init setup_virt_paging(void)
>>>>       if ( pa_range >= ARRAY_SIZE(pa_range_info) || 
>>>> !pa_range_info[pa_range].pabits )
>>>>           panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange 
>>>> %x\n", pa_range);
>>>>   +#ifdef CONFIG_ARM_64
>>>>       val |= VTCR_PS(pa_range);
>>>>       val |= VTCR_TG0_4K;
>>>>         /* Set the VS bit only if 16 bit VMID is supported. */
>>>>       if ( MAX_VMID == MAX_VMID_16_BIT )
>>>>           val |= VTCR_VS;
>>>> +
>>>> +    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
>>>> +#endif
>>>> +
>>>>       val |= VTCR_SL0(pa_range_info[pa_range].sl0);
>>>>       val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
>>>>         p2m_root_order = pa_range_info[pa_range].root_order;
>>>>       p2m_root_level = 2 - pa_range_info[pa_range].sl0;
>>>> -    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
>>>
>>> I think this line should stay for 32-bit as well because we 
>>> p2m_ipa_bits should be based on the PA range we selected (see the 
>>> loop 'Choose suitable "pa_range"...').
>>
>> This isn't true for ARM_32.
>
> You are correct that the line is not correct for Arm32. But my point 
> was more about the fact that you don't update p2m_ipa_bits based on 
> the PA range selected.
>
>>
>> Refer ARM DDI 0406C.d ID040418, B3-1348, "Determining the required 
>> first lookup level for stage 2 translations"
>>
>> "...The size of this input address region is 2^(32-TxSZ) bytes, ..."
>>
>> So for
>>
>> #ifdef CONFIG_ARM_32
>>
>> p2m_ipa_bits = 32 - pa_range_info[pa_range].t0sz;
>>
>> #endif
>>
>> This will be a problem for 40-bit PA as T0SZ = 24.
>>
>> So p2m_ipa_bits = 32 - 24 = 8 (which is incorrect).
>>
>>
>> To get around this issue, there are two possible solutions :-
>>
>> 1. For ARM_32,  do not modify p2m_ipa_bits. Thus p2m_ipa_bits will be 
>> using its initialized value (ie PADDR_BITS).
>
> AFAICT, this approach would be incorrect because we wouldn't take into 
> account any restriction from the SMMU subsystem (it may support less 
> than what the processor supports).

By the restriction from the SMMU subsystem, I think you mean 
p2m_restrict_ipa_bits().

As far as I can see, p2m_restrict_ipa_bits() gets invoked much later than 
setup_virt_paging().

So p2m_ipa_bits will take into account SMMU restrictions. Thus, this 
approach should be correct.

Am I missing something ?

- Ayan

>
>>
>> 2. T0SZ should be signed int for ARM_32 (so that it can hold -8) and 
>> unsigned int for ARM_64.
>>
>> ie
>>
>>      static const struct {
>>          unsigned int pabits; /* Physical Address Size */
>> #ifdef CONFIG_ARM_64
>>          unsigned int t0sz:5;   /* Desired T0SZ, minimum in comment */
>> #else
>>          signed int t0sz:5;   /* Desired T0SZ, minimum in comment */
>> #endif
>>          unsigned int root_order; /* Page order of the root of the 
>> p2m */
>>          unsigned int sl0;    /* Desired SL0, maximum in comment */
>>      } pa_range_info[] __initconst = {
>> ....
>>
>>
>> I would prefer option 1 for the sake of fewer #ifdefs.
> I don't think the two options are equivalent. So my first priority is 
> correctness; then, if we still have multiple possibilities, we can 
> discuss which one looks the nicest to read.
>
> To avoid the #ifdef in the struct, we could possibly use a series of 
> casts. It might be slightly better because a reader will be less 
> confused by the type change.
>
> Cheers,
>
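[Editorial note: the T0SZ arithmetic debated above can be sketched standalone. The 5-bit sign extension models the proposed signed `t0sz:5` bitfield; the function name and values are illustrative, not Xen code.]

```python
def t0sz_signed(raw5):
    """Sign-extend a 5-bit T0SZ field value, as a signed int t0sz:5
    bitfield would."""
    return raw5 - 32 if raw5 & 0x10 else raw5

# Arm32, 40-bit IPA: the encoded T0SZ of 24 (0b11000) is -8 when read
# as a signed 5-bit value, so 32 - T0SZ = 32 - (-8) = 40 bits.
arm32_ipa_bits = 32 - t0sz_signed(24)

# Arm64 reads the same encoding as unsigned: 64 - 24 = 40 bits too.
arm64_ipa_bits = 64 - 24
```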


From xen-devel-bounces@lists.xenproject.org Mon May 15 10:44:04 2023
Message-ID: <701fb2b6-d552-0e3d-d108-a73863160b25@xen.org>
Date: Mon, 15 May 2023 11:43:44 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-12-ayan.kumar.halder@amd.com>
 <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
 <4681a4d4-68d3-01cd-912c-bca2cdc83266@amd.com>
 <175d5e01-6258-edcc-bddd-05ff9e1eb547@xen.org>
 <72fa0686-2703-6682-fe06-2fca14ff1986@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <72fa0686-2703-6682-fe06-2fca14ff1986@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 15/05/2023 11:30, Ayan Kumar Halder wrote:
>> AFAICT, this approach would be incorrect because we wouldn't take into 
>> account any restriction from the SMMU subsystem (it may support less 
>> than what the processor supports).
> 
> By the restriction from SMMU subsystem, I think you mean 
> p2m_restrict_ipa_bits().

Yes.

> 
> As I can see, p2m_restrict_ipa_bits() gets invoked much later than 
> setup_virt_paging().

I am afraid this is not correct. If you look at setup.c, you will notice 
that iommu_setup() is called before setup_virt_paging(). There is a 
comment on top of the former call explaining the ordering.

Cheers,

-- 
Julien Grall
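[Editorial note: Julien's ordering point can be modelled with a toy sketch. The function names mirror Xen's, but the bodies are purely illustrative assumptions.]

```python
# Toy model: iommu_setup() may lower p2m_ipa_bits (via
# p2m_restrict_ipa_bits()) and therefore must run before
# setup_virt_paging() consumes the value.

p2m_ipa_bits = 48  # assumed hardware maximum for this sketch

def p2m_restrict_ipa_bits(limit):
    """Clamp p2m_ipa_bits, e.g. to what the SMMU can address."""
    global p2m_ipa_bits
    p2m_ipa_bits = min(p2m_ipa_bits, limit)

def setup_virt_paging():
    """Pretend to pick a pa_range from the (already restricted) value."""
    return p2m_ipa_bits

p2m_restrict_ipa_bits(40)  # e.g. an SMMU supporting only 40 bits (assumed)
assert setup_virt_paging() == 40
```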


From xen-devel-bounces@lists.xenproject.org Mon May 15 11:32:09 2023
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 2/2] x86: Add support for CpuidUserDis
Date: Mon, 15 May 2023 12:31:36 +0100
Message-Id: <20230515113136.2465-3-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230515113136.2465-1-alejandro.vallejo@cloud.com>
References: <20230515113136.2465-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Because CpuidUserDis is reported in CPUID itself, the extended leaf
containing that bit must be retrieved before calling c_early_init().

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v3:
 * Moved LCAP_* setters to the callers in patch1/v3
 * Added rationale for checking CPUID faulting before CpuidUserDis in AMD
---
 xen/arch/x86/cpu/amd.c         | 23 ++++++++++++++++++--
 xen/arch/x86/cpu/common.c      | 38 ++++++++++++++++++++++++----------
 xen/arch/x86/include/asm/amd.h |  1 +
 3 files changed, 49 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 440af59670..3072b68628 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -277,9 +277,14 @@ static void __init noinline amd_init_levelling(void)
 	 *
 	 * CPUID faulting is an Intel feature analogous to CpuidUserDis, so
 	 * that can only be present when Xen is itself virtualized (because
-	 * it can be emulated)
+	 * it can be emulated).
+	 *
+	 * Note that probing for the Intel feature _first_ isn't a mistake,
+	 * but a means to ensure MSR_INTEL_PLATFORM_INFO is read and added
+	 * to the raw CPU policy if present.
 	 */
-	if (cpu_has_hypervisor && probe_cpuid_faulting()) {
+	if ((cpu_has_hypervisor && probe_cpuid_faulting()) ||
+	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
 		expected_levelling_cap |= LCAP_faulting;
 		levelling_caps |= LCAP_faulting;
 		return;
@@ -374,6 +379,20 @@ static void __init noinline amd_init_levelling(void)
 		ctxt_switch_masking = amd_ctxt_switch_masking;
 }
 
+void amd_set_cpuid_user_dis(bool enable)
+{
+	const uint64_t bit = K8_HWCR_CPUID_USER_DIS;
+	uint64_t val;
+
+	rdmsrl(MSR_K8_HWCR, val);
+
+	if (!!(val & bit) == enable)
+		return;
+
+	val ^= bit;
+	wrmsrl(MSR_K8_HWCR, val);
+}
+
 /*
  * Check for the presence of an AMD erratum. Arguments are defined in amd.h 
  * for each known erratum. Return 1 if erratum is found.
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 52646f7dfb..9bbb385db4 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -4,6 +4,7 @@
 #include <xen/param.h>
 #include <xen/smp.h>
 
+#include <asm/amd.h>
 #include <asm/cpu-policy.h>
 #include <asm/current.h>
 #include <asm/debugreg.h>
@@ -166,8 +167,10 @@ static void set_cpuid_faulting(bool enable)
 void ctxt_switch_levelling(const struct vcpu *next)
 {
 	const struct domain *nextd = next ? next->domain : NULL;
+	bool enable_cpuid_faulting;
 
-	if (cpu_has_cpuid_faulting) {
+	if (cpu_has_cpuid_faulting ||
+	    boot_cpu_has(X86_FEATURE_CPUID_USER_DIS)) {
 		/*
 		 * No need to alter the faulting setting if we are switching
 		 * to idle; it won't affect any code running in idle context.
@@ -188,12 +191,18 @@ void ctxt_switch_levelling(const struct vcpu *next)
 		 * an interim escape hatch in the form of
 		 * `dom0=no-cpuid-faulting` to restore the older behaviour.
 		 */
-		set_cpuid_faulting(nextd && (opt_dom0_cpuid_faulting ||
-					     !is_control_domain(nextd) ||
-					     !is_pv_domain(nextd)) &&
-				   (is_pv_domain(nextd) ||
-				    next->arch.msrs->
-				    misc_features_enables.cpuid_faulting));
+		enable_cpuid_faulting = nextd && (opt_dom0_cpuid_faulting ||
+		                                  !is_control_domain(nextd) ||
+		                                  !is_pv_domain(nextd)) &&
+		                        (is_pv_domain(nextd) ||
+		                         next->arch.msrs->
+		                         misc_features_enables.cpuid_faulting);
+
+		if (cpu_has_cpuid_faulting)
+			set_cpuid_faulting(enable_cpuid_faulting);
+		else
+			amd_set_cpuid_user_dis(enable_cpuid_faulting);
+
 		return;
 	}
 
@@ -402,6 +411,17 @@ static void generic_identify(struct cpuinfo_x86 *c)
 	c->apicid = phys_pkg_id((ebx >> 24) & 0xFF, 0);
 	c->phys_proc_id = c->apicid;
 
+	eax = cpuid_eax(0x80000000);
+	if ((eax >> 16) == 0x8000)
+		c->extended_cpuid_level = eax;
+
+	/*
+	 * These AMD-defined flags are out of place, but we need
+	 * them early for the CPUID faulting probe code
+	 */
+	if (c->extended_cpuid_level >= 0x80000021)
+		c->x86_capability[FEATURESET_e21a] = cpuid_eax(0x80000021);
+
 	if (this_cpu->c_early_init)
 		this_cpu->c_early_init(c);
 
@@ -418,10 +438,6 @@ static void generic_identify(struct cpuinfo_x86 *c)
 	     (cpuid_ecx(CPUID_PM_LEAF) & CPUID6_ECX_APERFMPERF_CAPABILITY) )
 		__set_bit(X86_FEATURE_APERFMPERF, c->x86_capability);
 
-	eax = cpuid_eax(0x80000000);
-	if ((eax >> 16) == 0x8000)
-		c->extended_cpuid_level = eax;
-
 	/* AMD-defined flags: level 0x80000001 */
 	if (c->extended_cpuid_level >= 0x80000001)
 		cpuid(0x80000001, &tmp, &tmp,
diff --git a/xen/arch/x86/include/asm/amd.h b/xen/arch/x86/include/asm/amd.h
index a975d3de26..09ee52dc1c 100644
--- a/xen/arch/x86/include/asm/amd.h
+++ b/xen/arch/x86/include/asm/amd.h
@@ -155,5 +155,6 @@ extern bool amd_legacy_ssbd;
 extern bool amd_virt_spec_ctrl;
 bool amd_setup_legacy_ssbd(void);
 void amd_set_legacy_ssbd(bool enable);
+void amd_set_cpuid_user_dis(bool enable);
 
 #endif /* __AMD_H__ */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon May 15 11:32:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 11:32:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534711.832005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWQc-0000NA-Hg; Mon, 15 May 2023 11:31:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534711.832005; Mon, 15 May 2023 11:31:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWQc-0000N3-E3; Mon, 15 May 2023 11:31:46 +0000
Received: by outflank-mailman (input) for mailman id 534711;
 Mon, 15 May 2023 11:31:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ig+9=BE=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1pyWQb-0000Ms-Vt
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 11:31:46 +0000
Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com
 [2a00:1450:4864:20::336])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 10b400dc-f314-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 13:31:42 +0200 (CEST)
Received: by mail-wm1-x336.google.com with SMTP id
 5b1f17b1804b1-3f41d087bd3so63356265e9.3
 for <xen-devel@lists.xenproject.org>; Mon, 15 May 2023 04:31:42 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 2-20020a05600c22c200b003f42328b5d9sm24857485wmg.39.2023.05.15.04.31.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 15 May 2023 04:31:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10b400dc-f314-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1684150301; x=1686742301;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=GcS0sFy1j+iBirD4dRuPYJrHtHgaoJyuoMIW8QbN+lY=;
        b=EvSyOXYOhoECtJSlqGBVgwanO9Uxg8r9uzRPIB5gUBORfCsOUdF4bkq5T4R8FVj/PD
         v577vCRVd8RVg4/VR1WRb6l1jj9IW1/xG846OqHybl9AT+xntr/cygtKJqHSe79a2aqQ
         AtypoBa3MunRWY/VDbCVZ52zM7mMgvE8nHJCA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684150301; x=1686742301;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=GcS0sFy1j+iBirD4dRuPYJrHtHgaoJyuoMIW8QbN+lY=;
        b=hTnv1za3mCaIBqd8TWLWeeF5bcTaW2wVEzAqugvcaKzvX/8AjFL+9Q+9Bzbe3Krdtg
         G/IJG5NwakljxAkXOnKINALCeAGohALgeUsdk0Nm3GWASZp3/Q9NDwpY2qujOR5qKDqZ
         PyYXjU6m1qDwRIs2OkxgFO5gcYqVMVzBbs5pL2i5o7x96p2DwXxxb2XOim015aaDTT//
         2LqHDm9snc4o8VuAufOeUEpAURNgyuwmbC1qA80lwTz4/L8vUVVtNswr8DuqCXjLLmWq
         o8aVwROuviOjQD5mUwH7Y2h5oF/P4tVwTiC8YReGIWno4OoaJJlLx5kVuv2PXVpwPE/D
         Gv3w==
X-Gm-Message-State: AC+VfDxuW0FHqNRFU3NyynL3q0UZpQ7cahla/ynjs2wI92XDY7LN8cAm
	AY0PmgDE/bFuYEnbTIIldy9WTrbrVgpJJgweJYY=
X-Google-Smtp-Source: ACHHUZ6RscVihJNZLvIthvOqYC3JgRqd5SMFPqjj7Q2jWvfWgRJj8KHE2E2T0xTekQpoZiKAgQU7dw==
X-Received: by 2002:a1c:6a0d:0:b0:3f4:23d4:e48 with SMTP id f13-20020a1c6a0d000000b003f423d40e48mr18847738wmc.23.1684150301172;
        Mon, 15 May 2023 04:31:41 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 0/2] Add CpuidUserDis support
Date: Mon, 15 May 2023 12:31:34 +0100
Message-Id: <20230515113136.2465-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

v3:
 * Move LCAP setters from patch 2 to patch 1
 * Comment on rationale for checking CPUID faulting before CpuidUserDis on AMD

Nowadays AMD supports trapping the CPUID instruction from ring>0
(CpuidUserDis), akin to Intel's "CPUID faulting". The difference is that
the toggle bit lives in a different MSR, and the support bit is reported in
CPUID itself rather than in yet another MSR. This series enables AMD hosts
to use the feature, when supported, in order to provide correct CPUID
contents to PV guests.

Patch 1 moves vendor-specific code from probe_cpuid_faulting() to
amd.c/intel.c.

Patch 2 adds support for CpuidUserDis, hooking it into the probing path and
the context-switching path.

Alejandro Vallejo (2):
  x86: Refactor conditional guard in probe_cpuid_faulting()
  x86: Add support for CpuidUserDis

 xen/arch/x86/cpu/amd.c         | 32 ++++++++++++++++++++-
 xen/arch/x86/cpu/common.c      | 51 ++++++++++++++++++----------------
 xen/arch/x86/cpu/intel.c       | 12 +++++++-
 xen/arch/x86/include/asm/amd.h |  1 +
 4 files changed, 70 insertions(+), 26 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon May 15 11:32:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 11:32:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534712.832015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWQd-0000c8-QU; Mon, 15 May 2023 11:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534712.832015; Mon, 15 May 2023 11:31:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWQd-0000c1-Mc; Mon, 15 May 2023 11:31:47 +0000
Received: by outflank-mailman (input) for mailman id 534712;
 Mon, 15 May 2023 11:31:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ig+9=BE=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1pyWQc-0000Ms-7e
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 11:31:46 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 11f9e53a-f314-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 13:31:44 +0200 (CEST)
Received: by mail-wr1-x434.google.com with SMTP id
 ffacd0b85a97d-307a8386946so4859001f8f.2
 for <xen-devel@lists.xenproject.org>; Mon, 15 May 2023 04:31:44 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 2-20020a05600c22c200b003f42328b5d9sm24857485wmg.39.2023.05.15.04.31.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 15 May 2023 04:31:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11f9e53a-f314-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1684150303; x=1686742303;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TGFm+3lAlf7VMEM6bIEf2GWuO0LeQWDDCKOo0efIzZY=;
        b=AHy1rhj9SasQX0lJ1QL/RiJIXJGBHFGVG/UiFpfgd2tjoZkUHfxFX6SFoXS335ZQk4
         h1e1T7aYEDW4t3XxVbmtToVEEnUeG6iornHXOgphCQJYIYhUoIOvbmnC4pMLK7DVRVkM
         2BjAmrcDTkWVCLAVuKoVSSedzSF1EHgDYqjsU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684150303; x=1686742303;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=TGFm+3lAlf7VMEM6bIEf2GWuO0LeQWDDCKOo0efIzZY=;
        b=OwUddQVQyqgwR399qnf9TyfzOYsu2A+0/BU1foTaIicgklY8+Glzn6DVnAcn/j4LBJ
         47hIiMVdlkUdFjKLtTtJzXsnO+bGrHpQIg3gRB907mhK74pC4sYXvObk0Vnol/6C1mkp
         IwuWI+F9XjZ+4q7n29aZw9Mvj+dV/9P3/Dkj9Wk0VwO/zKKYtYZn4g8hdEVVSaD/Z/dX
         Osjadew8GkHC5/QQ0K4q2kn0g7v0wPehHDPrIo160v056kIbnZ96MiDmp/i66In86+LN
         f0Kot0c/wTKrTUBV6tQcnkvskIgX69oQ2kJ28/YD4myGIoy3MEaMojxkVlWWUXRek31Y
         fdpg==
X-Gm-Message-State: AC+VfDyonaheOEf0xQZfGjOYySmXhzo9TzL9RZW+3u6rM/cYGpTwqoCF
	mgzqZGvtEpQgjfHYTPaSJTlV9adAIHGJe4ZeXpo=
X-Google-Smtp-Source: ACHHUZ6rVekaauOji+qzfUyxp3ydiH/CIVyARgmQhoLqpMouNJpruA4b7lIfCJHSHroYfgYGX3ENww==
X-Received: by 2002:adf:f990:0:b0:307:c0c4:109a with SMTP id f16-20020adff990000000b00307c0c4109amr11096133wrr.6.1684150303524;
        Mon, 15 May 2023 04:31:43 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/2] x86: Refactor conditional guard in probe_cpuid_faulting()
Date: Mon, 15 May 2023 12:31:35 +0100
Message-Id: <20230515113136.2465-2-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230515113136.2465-1-alejandro.vallejo@cloud.com>
References: <20230515113136.2465-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move vendor-specific checks in probe_cpuid_faulting() to the
vendor-specific callers. While at it, move the synth cap setters to the
callers too, as a later patch needs that.

No functional change.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v3:
  * Moved the setting of LCAP_* bits to intel.c and amd.c
---
 xen/arch/x86/cpu/amd.c    | 13 ++++++++++++-
 xen/arch/x86/cpu/common.c | 13 -------------
 xen/arch/x86/cpu/intel.c  | 12 +++++++++++-
 3 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index caafe44740..440af59670 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -271,8 +271,19 @@ static void __init noinline amd_init_levelling(void)
 {
 	const struct cpuidmask *m = NULL;
 
-	if (probe_cpuid_faulting())
+	/*
+	 * If there's support for CpuidUserDis or CPUID faulting then
+	 * we can skip levelling because CPUID accesses are trapped anyway.
+	 *
+	 * CPUID faulting is an Intel feature analogous to CpuidUserDis, so
+	 * that can only be present when Xen is itself virtualized (because
+	 * it can be emulated)
+	 */
+	if (cpu_has_hypervisor && probe_cpuid_faulting()) {
+		expected_levelling_cap |= LCAP_faulting;
+		levelling_caps |= LCAP_faulting;
 		return;
+	}
 
 	probe_masking_msrs();
 
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index edc4db1335..52646f7dfb 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -131,17 +131,6 @@ bool __init probe_cpuid_faulting(void)
 	uint64_t val;
 	int rc;
 
-	/*
-	 * Don't bother looking for CPUID faulting if we aren't virtualised on
-	 * AMD or Hygon hardware - it won't be present.  Likewise for Fam0F
-	 * Intel hardware.
-	 */
-	if (((boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
-	     ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
-	      boot_cpu_data.x86 == 0xf)) &&
-	    !cpu_has_hypervisor)
-		return false;
-
 	if ((rc = rdmsr_safe(MSR_INTEL_PLATFORM_INFO, val)) == 0)
 		raw_cpu_policy.platform_info.cpuid_faulting =
 			val & MSR_PLATFORM_INFO_CPUID_FAULTING;
@@ -155,8 +144,6 @@ bool __init probe_cpuid_faulting(void)
 		return false;
 	}
 
-	expected_levelling_cap |= LCAP_faulting;
-	levelling_caps |=  LCAP_faulting;
 	setup_force_cpu_cap(X86_FEATURE_CPUID_FAULTING);
 
 	return true;
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 71fc1a1e18..168cd58f36 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -226,8 +226,18 @@ static void cf_check intel_ctxt_switch_masking(const struct vcpu *next)
  */
 static void __init noinline intel_init_levelling(void)
 {
-	if (probe_cpuid_faulting())
+	/*
+	 * Intel Fam0f is old enough that probing for CPUID faulting support
+	 * introduces spurious #GP(0) when the appropriate MSRs are read,
+	 * so skip it altogether. In the case where Xen is virtualized these
+	 * MSRs may be emulated though, so we allow it in that case.
+	 */
+	if ((boot_cpu_data.x86 != 0xf || cpu_has_hypervisor) &&
+	    probe_cpuid_faulting()) {
+		expected_levelling_cap |= LCAP_faulting;
+		levelling_caps |= LCAP_faulting;
 		return;
+	}
 
 	probe_masking_msrs();
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon May 15 11:42:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 11:42:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534717.832035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWaY-0002zO-1Y; Mon, 15 May 2023 11:42:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534717.832035; Mon, 15 May 2023 11:42:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWaX-0002zH-V1; Mon, 15 May 2023 11:42:01 +0000
Received: by outflank-mailman (input) for mailman id 534717;
 Mon, 15 May 2023 11:41:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyWaV-0002z6-RY; Mon, 15 May 2023 11:41:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyWaV-0005o4-OD; Mon, 15 May 2023 11:41:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyWaV-0008Ct-8I; Mon, 15 May 2023 11:41:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyWaV-00028t-7l; Mon, 15 May 2023 11:41:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3qAhgYMSQYYtwhjaRde19sWQ8KQG8SO+6ckAJxfZyss=; b=eLfh3WtEGMQ8oNrZdYZxo7xKS6
	UE545gCzbyMNDO4lQBYFMow53fKDPaT4LwvbLCWTzuj5PCKx57CWeat4mtOxcE+v0gvOJkI/1akP9
	81ZvL9vi6EO13RJOHBx4fpMlSf/iog8VzzEAhYjiIdTQeegZ628yLapJlvEM4TCV+xww=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180668-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180668: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 May 2023 11:41:59 +0000

flight 180668 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180668/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail in 180664 pass in 180668
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180664

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl         15 migrate-support-check fail in 180664 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 180664 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 180664 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 180664 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 180664 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 180664 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 180664 never pass
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 180664 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 180664 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 180664 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 180664 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 180664 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 180664 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 180664 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 180664 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 180664 never pass
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   28 days
Failing since        180281  2023-04-17 06:24:36 Z   28 days   52 attempts
Testing same since   180664  2023-05-14 20:12:01 Z    0 days    2 attempts

------------------------------------------------------------
2389 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 302499 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 15 11:49:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 11:49:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534725.832045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWi1-0003kK-U4; Mon, 15 May 2023 11:49:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534725.832045; Mon, 15 May 2023 11:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWi1-0003kD-QC; Mon, 15 May 2023 11:49:45 +0000
Received: by outflank-mailman (input) for mailman id 534725;
 Mon, 15 May 2023 11:49:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uOm1=BE=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pyWhz-0003k4-OU
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 11:49:43 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20601.outbound.protection.outlook.com
 [2a01:111:f400:7eae::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 93692205-f316-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 13:49:41 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by DS0PR12MB7928.namprd12.prod.outlook.com (2603:10b6:8:14c::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 11:49:36 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::8018:78f7:1b08:7a54]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::8018:78f7:1b08:7a54%2]) with mapi id 15.20.6387.030; Mon, 15 May 2023
 11:49:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93692205-f316-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NaAn4IX4e79lBj90V47STYE0t/9tHDHKXtfm05FbGkrRyCl41SCsM+VKuRnK5Xg64la3mZtB7cnAOGtUNUrXdgC0hUNv2R9mDFZhQcCZ4KolkNJttHOtroRGUTTTF4xs6MzfDLVQyB+3NpzFwYL9Z+fWVIu8SB4Ns+MXjukWUt6GHtKglV39UigABs4e6IbzaDfN42Yb/jZYxyT4mBf2DeGXN0RmZ/BVRkQOIgtEqnEEphLVUPmQvX0juWXtkPR8UKSurMqXzIJ3/LnufbHyhKpiNroKtODgE87TxzcgzXxbfVzR2ymD5UPi4cU4YqibthpUamb4WleNvrqaDK6tYw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RBuC+mIGtuzRqq+fDjuvUsQpxfVFb0bpcAkdk/fMXKc=;
 b=kFYtckE2N9dZFtvNriLSDIno8oBLO1BMyyWVDESB/ZULJ3ECiGARbILF+Y/TyX9PRlTF+pZeBO6nZkjHKoH8Ec+DaBNhn0UMr3N0DU2+XrNPGy9xwWOW5Svu6GvSoHYUxnKRocUFlduaOb2ZeZve0e0X5eneFgH37mxICr1F/ZIV+geS88eg1c/W2HQKbqS3VW/zEnEb/98vxm+V3SWTbEdR6o3cco+JXy3DN07lxRUMnEgCV9KuHzQfpnTjyCa6mXmyQSHdz3Em/NrQOJ/lDAkJyfa00Of3gdPrNc5Xzz5VGbYhRhTyXygpnkW1Ir1zMlTTWZjtXTyAWlJQn0ISzQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RBuC+mIGtuzRqq+fDjuvUsQpxfVFb0bpcAkdk/fMXKc=;
 b=byfYvAlNDSdyQY7yeCjVKrD3i0H6sDvPXLxzYsSGvXcBdzVsYx2svkmosHf8ed0fHh8WU7d6+I6wI6GXUfJXjkaaCp6mkRI73DXor3pNKVEsvS3DbaCdTAAdHJslqW3Cm1p5+PRc9JSEyGRIHfIz10Plpk3+NBhJrplZAyrzFZo=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <32fd56d1-80ce-d175-b13d-e17c1649b4e0@amd.com>
Date: Mon, 15 May 2023 12:49:29 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
Subject: Re: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
To: Julien Grall <julien@xen.org>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-12-ayan.kumar.halder@amd.com>
 <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
 <4681a4d4-68d3-01cd-912c-bca2cdc83266@amd.com>
 <175d5e01-6258-edcc-bddd-05ff9e1eb547@xen.org>
 <72fa0686-2703-6682-fe06-2fca14ff1986@amd.com>
 <701fb2b6-d552-0e3d-d108-a73863160b25@xen.org>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <701fb2b6-d552-0e3d-d108-a73863160b25@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0138.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::30) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|DS0PR12MB7928:EE_
X-MS-Office365-Filtering-Correlation-Id: 67b11bec-e5ca-40f4-67d9-08db553a7489
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 67b11bec-e5ca-40f4-67d9-08db553a7489
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 11:49:35.7821
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wzaSpDyHoQyEJiPgr6NLTrjHKGE5gmqxqiVwqO8m/07+4jv4eXSo79+tf9YxvpotLn7KHzMypK299YcO/kFlPg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB7928

Hi Julien,

On 15/05/2023 11:43, Julien Grall wrote:
>
>
> On 15/05/2023 11:30, Ayan Kumar Halder wrote:
>>> AFAICT, this approach would be incorrect because we wouldn't take 
>>> into account any restriction from the SMMU subsystem (it may support 
>>> less than what the processor supports).
>>
>> By the restriction from SMMU subsystem, I think you mean 
>> p2m_restrict_ipa_bits().
>
> Yes.
>
>>
>> As I can see, p2m_restrict_ipa_bits() gets invoked much later than 
>> setup_virt_paging().
>
> I am afraid this is not correct. If you look at setup.c, you will 
> notice that iommu_setup() is called before setup_virt_paging(). There 
> is a comment on top of the former call explaining the ordering.

Yes, you are correct.


WRT your comment:

 >> You are correct that the line is not correct for Arm32. But my point 
was more about the fact that you don't update p2m_ipa_bits based on the 
PA range selected.

Do you mean that I should update "p2m_ipa_bits" in setup_virt_paging() 
[similar to what is now being done for CONFIG_ARM_64]?


This will then override the "p2m_ipa_bits" value set by p2m_restrict_ipa_bits().


Even so, it does not make sense to update "p2m_ipa_bits" in 
setup_virt_paging() for ARM_32, as

     /* Choose a suitable "pa_range" according to the resulting "p2m_ipa_bits". */
     for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
     {
         if ( p2m_ipa_bits == pa_range_info[i].pabits )
         {
             pa_range = i;
             break;
         }
     }

p2m_ipa_bits will be the same as pa_range_info[i].pabits.


- Ayan

>
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Mon May 15 12:02:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 12:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534733.832054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWtu-0006Lj-AW; Mon, 15 May 2023 12:02:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534733.832054; Mon, 15 May 2023 12:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWtu-0006Lc-7s; Mon, 15 May 2023 12:02:02 +0000
Received: by outflank-mailman (input) for mailman id 534733;
 Mon, 15 May 2023 12:02:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mCBX=BE=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1pyWtr-0006LU-PE
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 12:02:00 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4adf9cea-f318-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 14:01:58 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pyWsq-00BTBM-2z; Mon, 15 May 2023 12:00:57 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id D89B330003A;
 Mon, 15 May 2023 14:00:51 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id A377520AA5AEB; Mon, 15 May 2023 14:00:51 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4adf9cea-f318-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=5mEVtTPGjX5VUAK36tVkFbSuQO22AuGeDmAzn9epzuw=; b=AmiVyh00bk/IBxeKT1sJOhNhPw
	sKnnSOF2uFizS5FDPwFE+Ot6Zv0LcoZ6HJF47h5sBYfsBjpiqyZyI5uuDK5u2MZ4g4WobyDMhkodb
	fUSujxE1pqhBhDi+eUM0ht45DR2Y3i+nr1YT42g2zawDfvMwb295A0aq8ARQ+5Z4gi/pV9YXxQYVt
	3wUGZYGDjBC3RzitLceq6j/qPs1owVcSfLpElN1Y8S30KXDX5uH0Odcz4RLnxHbc6f5jNYL4oL/bz
	nTpSFPHQoB6BKp5r4Q6WAS69amz0yrUYHbuk51pYpxo8D3N1weBBW6mFoBFyYlJ+O4Y0A9ycf248d
	MFethNqQ==;
Date: Mon, 15 May 2023 14:00:51 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Ross Philipson <ross.philipson@oracle.com>
Subject: Re: [patch V4 37/37] x86/smpboot/64: Implement
 arch_cpuhp_init_parallel_bringup() and enable it
Message-ID: <20230515120051.GH83892@hirez.programming.kicks-ass.net>
References: <20230512203426.452963764@linutronix.de>
 <20230512205257.467571745@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230512205257.467571745@linutronix.de>

On Fri, May 12, 2023 at 11:07:56PM +0200, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Implement the validation function which tells the core code whether
> parallel bringup is possible.
> 
> The only condition for now is that the kernel does not run in an encrypted
> guest as these will trap the RDMSR via #VC, which cannot be handled at that
> point in early startup.
> 
> There was an earlier variant for AMD-SEV which used the GHBC protocol for
> retrieving the APIC ID via CPUID, but there is no guarantee that the
> initial APIC ID in CPUID is the same as the real APIC ID. There is no
> enforcement from the secure firmware and the hypervisor can assign APIC IDs
> as it sees fit as long as the ACPI/MADT table is consistent with that
> assignment.
> 
> Unfortunately there is no RDMSR GHCB protocol at the moment, so enabling
> AMD-SEV guests for parallel startup needs some more thought.

One option, other than adding said protocol, would be to:

 - use the APICID from CPUID -- with the expectation that it can be
   wrong.
 - (ab)use one of the high bits in cpuid_to_apicid[] as a test-and-set
   trylock. This prevents two CPUs from using the same per-cpu base if
   CPUID is being malicious. Panic on fail.
 - validate against the MSR the moment we can, and panic if not matching

The trylock ensures the stacks/percpu state is not used by multiple
CPUs, and should guarantee a coherent state to get far enough along to
be able to do the #VC-inducing RDMSR.




From xen-devel-bounces@lists.xenproject.org Mon May 15 12:06:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 12:06:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534742.832064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWxy-00073O-Td; Mon, 15 May 2023 12:06:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534742.832064; Mon, 15 May 2023 12:06:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyWxy-00073H-Qm; Mon, 15 May 2023 12:06:14 +0000
Received: by outflank-mailman (input) for mailman id 534742;
 Mon, 15 May 2023 12:06:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ei/B=BE=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1pyWxx-000737-9H
 for xen-devel@lists.xen.org; Mon, 15 May 2023 12:06:13 +0000
Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com
 [2607:f8b0:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dda9536d-f318-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 14:06:05 +0200 (CEST)
Received: by mail-pl1-x631.google.com with SMTP id
 d9443c01a7336-1ab05018381so117034955ad.2
 for <xen-devel@lists.xen.org>; Mon, 15 May 2023 05:06:05 -0700 (PDT)
Received: from localhost ([122.172.82.60]) by smtp.gmail.com with ESMTPSA id
 t12-20020a170902e84c00b001a687c505e9sm13390424plg.237.2023.05.15.05.06.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 15 May 2023 05:06:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dda9536d-f318-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1684152363; x=1686744363;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=MaJ1hoKbk4appWt+3OAlRAJLplc8pTlP5Thw5ZxratI=;
        b=SJ7N5TTQsXEKmvIwp/ciL5SxDFvd4x5lBWZKR0KsPRxiPAYcM++Dyou82jNZIqkbp5
         xqKGelgz9K3msdarVU2DNYaBNmN9gDKne/Y/giN4siyxb1uepcTxS4tRHDivv4R0r7Wu
         QGRfFbvbQWZ8K+prDTS5nTq3UuX55UE+T77GxDy5iwkG4oLuxasedjzcUWcZ/UjHPp3+
         NEvaVf2RM3/gnzla1LX0LTm7XLnud/bByX7rNmbcSHKqE9uU5Wa+KrMCnoGq7/exJr/P
         bDE3wmKSJAPTcIjyJfS8r3GeEcZYtz6b5yCUxdjO9LPLOxhQmUs1HnJogSBIxmvZqZTQ
         hnCg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684152363; x=1686744363;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=MaJ1hoKbk4appWt+3OAlRAJLplc8pTlP5Thw5ZxratI=;
        b=DsNg2BWgA+FMgZv0movUCttehYFNL3qsxSxg6lo1HHJLbLcn1X7XaZsxxWD/OJvgQG
         WqGA3BciBQ+PxBBAfg/etavuggv0swWMv8YrFW8ietb1mhDADfvk04LlAmxVuzWA5I30
         UT2GquDQeEIzOQZc9HCswJdYZvSDN/b4UizDFObdq1OUjfcHrGahWxV5K+ohzYpTL3k3
         2zJhvNCowRTCuLvzz+39W3XvEmSpfu8S/se9zATFA4ANgpl7PJY50MN/bxOaKs8/OBwb
         m7RQ/GbOey6jJWTO1oRDUnmgZUaZjgD0MV7GtuMBpFs1FQB6icLZdPi8ziGIe1lMqmn0
         AvuA==
X-Gm-Message-State: AC+VfDz0A5B2/clrG/rFypORvBZ/NlI5TVvtNY/VT26rsaOn94KzvVHL
	nsEAL61l08xwjkXGSry9/e2Few==
X-Google-Smtp-Source: ACHHUZ5wLqdS/db7atxCIX7LJ7htiXQyAOYtfc9oDskkuyhUoAWnSSW2RlHozDpIcMcKoVVR7hLS2A==
X-Received: by 2002:a17:902:d4c9:b0:1ac:637d:5888 with SMTP id o9-20020a170902d4c900b001ac637d5888mr34930419plg.43.1684152363450;
        Mon, 15 May 2023 05:06:03 -0700 (PDT)
Date: Mon, 15 May 2023 17:36:00 +0530
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xen.org, Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	stratos-dev@op-lists.linaro.org,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>,
	Erik Schilling <erik.schilling@linaro.org>
Subject: Re: [PATCH V2 2/2] libxl: arm: Add grant_usage parameter for virtio
 devices
Message-ID: <20230515120600.bsfw6pe3usae4sl4@vireshk-i7>
References: <782a7b3f54c36a3930a031647f6778e8dd02131d.1683791298.git.viresh.kumar@linaro.org>
 <ccf5b1402fb7156be0ef33b44f7b114efbe76319.1683791298.git.viresh.kumar@linaro.org>
 <5dc217d6-ca8f-4c5f-ad7c-2ab30d6647bd@perard>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5dc217d6-ca8f-4c5f-ad7c-2ab30d6647bd@perard>

On 12-05-23, 11:43, Anthony PERARD wrote:
> On Thu, May 11, 2023 at 01:20:43PM +0530, Viresh Kumar wrote:
> > diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> > index 24ac92718288..0405f6efe62a 100644
> > --- a/docs/man/xl.cfg.5.pod.in
> > +++ b/docs/man/xl.cfg.5.pod.in
> > @@ -1619,6 +1619,18 @@ hexadecimal format, without the "0x" prefix and all in lower case, like
> >  Specifies the transport mechanism for the Virtio device, only "mmio" is
> >  supported for now.
> >  
> > +=item B<grant_usage=STRING>
> > +
> > +Specifies the grant usage details for the Virtio device. This can be set to
> > +following values:
> > +
> > +- "default": The default grant setting will be used, enable grants if
> > +  backend-domid != 0.
> 
> I don't think this "default" setting is useful. We could just describe
> what the default is when "grant_usage" setting is missing from the
> configuration.

This is what I suggested earlier [1], maybe I misunderstood what
Juergen said.

> > +- "enabled": The grants are always enabled.
> > +
> > +- "disabled": The grants are always disabled.
> 
> > +            if ((virtio->grant_usage == LIBXL_VIRTIO_GRANT_USAGE_ENABLED) ||
> > +                ((virtio->grant_usage == LIBXL_VIRTIO_GRANT_USAGE_DEFAULT) &&
> > +                 (virtio->backend_domid != LIBXL_TOOLSTACK_DOMID))) {
> 
> I think libxl can select what the default value should be replaced with
> before we start to set up the guest. There's a *_setdefault() phase where
> we set the correct value when a configuration value hasn't been set and
> thus a default value is used. I think this can be done in
>     libxl__device_virtio_setdefault().
> After that, virtio->grant_usage will be true or false, and that's the
> value that should be given to the virtio backend via xenstore.

I am not very familiar with Xen's flow here, so I would like to clarify
a few things, since I am confused.

In my case, parse_virtio_config() gets called first, followed by
libxl__prepare_dtb(), where I need to use the "grant_usage" field.
libxl__device_virtio_setdefault() gets called only later, so anything
done there isn't of much use in my case, I guess.

Setting the default value of grant_usage in
libxl__device_virtio_setdefault() doesn't work for me (since
libxl__prepare_dtb() has already been called by then), and I need to set
this in parse_virtio_config() only.

Currently, virtio->backend_domid is set via libxl__resolve_domid() in
libxl__device_virtio_setdefault(), which is too late for me, but it
works fine, accidentally I think, since the default value of the field
is 0, which is the same as the backend domain id in my case. I would
like to understand, though, how this works for the disk device for
Oleksandr, since they must face similar issues. I must be doing
something wrong here :)

Lastly, libxl__virtio_from_xenstore() is never called in my case.
Should I just remove it? I copied it from some other implementation.

This is how the code looks for me currently; I am sure I need to fix it
a bit, based on the replies to the above questions.

-------------------------8<-------------------------

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 24ac92718288..3a40ac8cb322 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1619,6 +1619,14 @@ hexadecimal format, without the "0x" prefix and all in lower case, like
 Specifies the transport mechanism for the Virtio device, only "mmio" is
 supported for now.
 
+=item B<grant_usage=BOOLEAN>
+
+If this option is B<true>, the Xen grants are always enabled.
+If this option is B<false>, the Xen grants are always disabled.
+
+If this option is missing, then the default grant setting will be used,
+i.e. enable grants if backend-domid != 0.
+
 =back
 
 =item B<tee="STRING">
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 0a203d22321f..bf846dca8ec0 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1792,6 +1792,9 @@ func (x *DeviceVirtio) fromC(xc *C.libxl_device_virtio) error {
 x.BackendDomname = C.GoString(xc.backend_domname)
 x.Type = C.GoString(xc._type)
 x.Transport = VirtioTransport(xc.transport)
+if err := x.GrantUsage.fromC(&xc.grant_usage);err != nil {
+return fmt.Errorf("converting field GrantUsage: %v", err)
+}
 x.Devid = Devid(xc.devid)
 x.Irq = uint32(xc.irq)
 x.Base = uint64(xc.base)
@@ -1809,6 +1812,9 @@ xc.backend_domname = C.CString(x.BackendDomname)}
 if x.Type != "" {
 xc._type = C.CString(x.Type)}
 xc.transport = C.libxl_virtio_transport(x.Transport)
+if err := x.GrantUsage.toC(&xc.grant_usage); err != nil {
+return fmt.Errorf("converting field GrantUsage: %v", err)
+}
 xc.devid = C.libxl_devid(x.Devid)
 xc.irq = C.uint32_t(x.Irq)
 xc.base = C.uint64_t(x.Base)
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index a7c17699f80e..e0c6e91bb0ef 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -683,6 +683,7 @@ BackendDomid Domid
 BackendDomname string
 Type string
 Transport VirtioTransport
+GrantUsage Defbool
 Devid Devid
 Irq uint32
 Base uint64
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 97c80d7ed0fa..bc2bd9649b95 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -922,7 +922,8 @@ static int make_xen_iommu_node(libxl__gc *gc, void *fdt)
 
 /* The caller is responsible to complete / close the fdt node */
 static int make_virtio_mmio_node_common(libxl__gc *gc, void *fdt, uint64_t base,
-                                        uint32_t irq, uint32_t backend_domid)
+                                        uint32_t irq, uint32_t backend_domid,
+                                        bool grant_usage)
 {
     int res;
     gic_interrupt intr;
@@ -945,7 +946,7 @@ static int make_virtio_mmio_node_common(libxl__gc *gc, void *fdt, uint64_t base,
     res = fdt_property(fdt, "dma-coherent", NULL, 0);
     if (res) return res;
 
-    if (backend_domid != LIBXL_TOOLSTACK_DOMID) {
+    if (grant_usage) {
         uint32_t iommus_prop[2];
 
         iommus_prop[0] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU);
@@ -959,11 +960,12 @@ static int make_virtio_mmio_node_common(libxl__gc *gc, void *fdt, uint64_t base,
 }
 
 static int make_virtio_mmio_node(libxl__gc *gc, void *fdt, uint64_t base,
-                                 uint32_t irq, uint32_t backend_domid)
+                                 uint32_t irq, uint32_t backend_domid,
+                                 bool grant_usage)
 {
     int res;
 
-    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid);
+    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid, grant_usage);
     if (res) return res;
 
     return fdt_end_node(fdt);
@@ -1019,11 +1021,11 @@ static int make_virtio_mmio_node_gpio(libxl__gc *gc, void *fdt)
 
 static int make_virtio_mmio_node_device(libxl__gc *gc, void *fdt, uint64_t base,
                                         uint32_t irq, const char *type,
-                                        uint32_t backend_domid)
+                                        uint32_t backend_domid, bool grant_usage)
 {
     int res;
 
-    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid);
+    res = make_virtio_mmio_node_common(gc, fdt, base, irq, backend_domid, grant_usage);
     if (res) return res;
 
     /* Add device specific nodes */
@@ -1363,7 +1365,8 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
                     iommu_needed = true;
 
                 FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq,
-                                           disk->backend_domid) );
+                                           disk->backend_domid,
+                                           disk->backend_domid != LIBXL_TOOLSTACK_DOMID) );
             }
         }
 
@@ -1373,12 +1376,13 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
             if (virtio->transport != LIBXL_VIRTIO_TRANSPORT_MMIO)
                 continue;
 
-            if (virtio->backend_domid != LIBXL_TOOLSTACK_DOMID)
+            if (libxl_defbool_val(virtio->grant_usage))
                 iommu_needed = true;
 
             FDT( make_virtio_mmio_node_device(gc, fdt, virtio->base,
                                               virtio->irq, virtio->type,
-                                              virtio->backend_domid) );
+                                              virtio->backend_domid,
+                                              libxl_defbool_val(virtio->grant_usage)) );
         }
 
         /*
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index c10292e0d7e3..c5c0d1f91793 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -740,6 +740,7 @@ libxl_device_virtio = Struct("device_virtio", [
     ("backend_domname", string),
     ("type", string),
     ("transport", libxl_virtio_transport),
+    ("grant_usage", libxl_defbool),
     ("devid", libxl_devid),
     # Note that virtio-mmio parameters (irq and base) are for internal
     # use by libxl and can't be modified.
diff --git a/tools/libs/light/libxl_virtio.c b/tools/libs/light/libxl_virtio.c
index f8a78e22d156..19d834984777 100644
--- a/tools/libs/light/libxl_virtio.c
+++ b/tools/libs/light/libxl_virtio.c
@@ -23,8 +23,16 @@ static int libxl__device_virtio_setdefault(libxl__gc *gc, uint32_t domid,
                                            libxl_device_virtio *virtio,
                                            bool hotplug)
 {
-    return libxl__resolve_domid(gc, virtio->backend_domname,
-                                &virtio->backend_domid);
+    int rc;
+
+    rc = libxl__resolve_domid(gc, virtio->backend_domname,
+                              &virtio->backend_domid);
+    if (rc < 0) return rc;
+
+    libxl_defbool_setdefault(&virtio->grant_usage,
+                             virtio->backend_domid != LIBXL_TOOLSTACK_DOMID);
+
+    return 0;
 }
 
 static int libxl__device_from_virtio(libxl__gc *gc, uint32_t domid,
@@ -48,11 +56,13 @@ static int libxl__set_xenstore_virtio(libxl__gc *gc, uint32_t domid,
                                       flexarray_t *ro_front)
 {
     const char *transport = libxl_virtio_transport_to_string(virtio->transport);
+    const char *grant_usage = libxl_defbool_to_string(virtio->grant_usage);
 
     flexarray_append_pair(back, "irq", GCSPRINTF("%u", virtio->irq));
     flexarray_append_pair(back, "base", GCSPRINTF("%#"PRIx64, virtio->base));
     flexarray_append_pair(back, "type", GCSPRINTF("%s", virtio->type));
     flexarray_append_pair(back, "transport", GCSPRINTF("%s", transport));
+    flexarray_append_pair(back, "grant_usage", GCSPRINTF("%s", grant_usage));
 
     return 0;
 }
@@ -104,6 +114,15 @@ static int libxl__virtio_from_xenstore(libxl__gc *gc, const char *libxl_path,
         }
     }
 
+    tmp = NULL;
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+                                GCSPRINTF("%s/grant_usage", be_path), &tmp);
+    if (rc) goto out;
+
+    if (tmp) {
+        libxl_defbool_set(&virtio->grant_usage, strtoul(tmp, NULL, 0));
+    }
+
     tmp = NULL;
     rc = libxl__xs_read_checked(gc, XBT_NULL,
 				GCSPRINTF("%s/type", be_path), &tmp);
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 1f6f47daf4e1..aa2bb214091e 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1208,6 +1208,9 @@ static int parse_virtio_config(libxl_device_virtio *virtio, char *token)
     char *oparg;
     int rc;
 
+    libxl_defbool_setdefault(&virtio->grant_usage,
+                             virtio->backend_domid != 0);
+
     if (MATCH_OPTION("backend", token, oparg)) {
         virtio->backend_domname = strdup(oparg);
     } else if (MATCH_OPTION("type", token, oparg)) {
@@ -1215,6 +1218,8 @@ static int parse_virtio_config(libxl_device_virtio *virtio, char *token)
     } else if (MATCH_OPTION("transport", token, oparg)) {
         rc = libxl_virtio_transport_from_string(oparg, &virtio->transport);
         if (rc) return rc;
+    } else if (MATCH_OPTION("grant_usage", token, oparg)) {
+        libxl_defbool_set(&virtio->grant_usage, strtoul(oparg, NULL, 0));
     } else {
         fprintf(stderr, "Unknown string \"%s\" in virtio spec\n", token);
         return -1;
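
(Taken together, the decision the patch above encodes — use an explicit
grant_usage when given, otherwise enable grants exactly when the backend is
not the toolstack domain — can be sketched as follows. This is a hypothetical
standalone helper for illustration, not part of libxl:)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOOLSTACK_DOMID 0u  /* LIBXL_TOOLSTACK_DOMID is 0, i.e. dom0 */

/* Effective grant usage: an explicit config value wins; otherwise grants
 * are enabled exactly when the backend runs outside the toolstack domain. */
static bool grant_usage_effective(bool has_config, bool config_val,
                                  uint32_t backend_domid)
{
    if (has_config)
        return config_val;
    return backend_domid != TOOLSTACK_DOMID;
}
```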


-- 
viresh

[1] https://lore.kernel.org/all/016edde8-e47e-a988-e5c1-f5aad0d4b60d@suse.com/


From xen-devel-bounces@lists.xenproject.org Mon May 15 12:38:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 12:38:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534747.832075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyXTH-0002Bp-CY; Mon, 15 May 2023 12:38:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534747.832075; Mon, 15 May 2023 12:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyXTH-0002Bi-9P; Mon, 15 May 2023 12:38:35 +0000
Received: by outflank-mailman (input) for mailman id 534747;
 Mon, 15 May 2023 12:38:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pyXTG-0002Bc-62
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 12:38:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyXTF-00072c-K6; Mon, 15 May 2023 12:38:33 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.27.136]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pyXTF-0003jF-Cu; Mon, 15 May 2023 12:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=L/XmlggfHVN1TpudAnX/jozrrRZSKbbgg3PxNkWtVuQ=; b=DE+aADP+jNW3byOgI2iEkhfe87
	PaL8mVpAJsFbFs66Mln83x7AgeqbLC0usqYKQIUb4sUbvNdwygUC2W1J5IjHXV8haknT32iZ0ybsL
	4FS+A3yqbRZpb+vCqeHrAEusrhTzYk8GFlmQ6m32pJTiZfiEw/FPLYov3FZ6jhGGbwkw=;
Message-ID: <aeb9eb5b-1707-7de0-cc85-af6acc9e08a1@xen.org>
Date: Mon, 15 May 2023 13:38:30 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.0
Subject: Re: [XEN v6 11/12] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230428175543.11902-1-ayan.kumar.halder@amd.com>
 <20230428175543.11902-12-ayan.kumar.halder@amd.com>
 <63fa927e-72f5-1645-97c0-6986f2fdcabe@xen.org>
 <4681a4d4-68d3-01cd-912c-bca2cdc83266@amd.com>
 <175d5e01-6258-edcc-bddd-05ff9e1eb547@xen.org>
 <72fa0686-2703-6682-fe06-2fca14ff1986@amd.com>
 <701fb2b6-d552-0e3d-d108-a73863160b25@xen.org>
 <32fd56d1-80ce-d175-b13d-e17c1649b4e0@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <32fd56d1-80ce-d175-b13d-e17c1649b4e0@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 15/05/2023 12:49, Ayan Kumar Halder wrote:
> Hi Julien

Hi Ayan,

> On 15/05/2023 11:43, Julien Grall wrote:
>>
>>
>> On 15/05/2023 11:30, Ayan Kumar Halder wrote:
>>>> AFAICT, this approach would be incorrect because we wouldn't take
>>>> into account any restriction from the SMMU subsystem (it may support
>>>> less than what the processor supports).
>>>
>>> By the restriction from SMMU subsystem, I think you mean 
>>> p2m_restrict_ipa_bits().
>>
>> Yes.
>>
>>>
>>> As far as I can see, p2m_restrict_ipa_bits() gets invoked much later
>>> than setup_virt_paging().
>>
>> I am afraid this is not correct. If you look at setup.c, you will 
>> notice that iommu_setup() is called before setup_virt_paging(). There 
>> is a comment on top of the former call explaining the ordering.
> 
> Yes, you are correct.
> 
> 
> WRT to your comment
> 
>  >> You are correct that the line is not correct for Arm32. But my point
> was more about the fact that you don't update p2m_ipa_bits based on the
> PA range selected.
> 
> Do you mean that I should update "p2m_ipa_bits" in setup_virt_paging() [ 
> similar to what is now being done for CONFIG_ARM_64 ] ?

Yes.

> 
> 
> This will then override the "p2m_ipa_bits" value set by p2m_restrict_ipa_bits().
> 
> 
> But still it does not make sense to update "p2m_ipa_bits" in
> setup_virt_paging() for ARM_32, as
> 
>      /* Choose suitable "pa_range" according to the resulted
>         "p2m_ipa_bits". */
>      for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
>      {
>          if ( p2m_ipa_bits == pa_range_info[i].pabits )
>          {
>              pa_range = i;
>              break;
>          }
>      }
> 
> p2m_ipa_bits will be the same as pa_range_info[i].pabits.

With the current logic, yes. But, in the past, we had one case where we 
used a T0SZ that would not cover the full 2^pabits (see 896ebdfa3a85). I 
can't rule out this being the case again in the future.
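
(For reference, the relation at play: on AArch64 the stage-2 input address
size is 64 - T0SZ, so a T0SZ adjusted for other constraints — as in the
commit referenced above — can cover fewer bits than the pabits of the chosen
pa_range_info[] entry. A rough standalone sketch, not the hypervisor's actual
code:)

```c
#include <assert.h>

/* Stage-2 input (IPA) address size implied by VTCR_EL2.T0SZ on AArch64. */
static unsigned int ipa_bits_from_t0sz(unsigned int t0sz)
{
    return 64u - t0sz;
}

/* True if the chosen T0SZ covers the full 2^pabits address range. */
static int t0sz_covers_pabits(unsigned int t0sz, unsigned int pabits)
{
    return ipa_bits_from_t0sz(t0sz) >= pabits;
}
```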

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 15 13:21:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 13:21:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534752.832084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyY91-0007Sf-JS; Mon, 15 May 2023 13:21:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534752.832084; Mon, 15 May 2023 13:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyY91-0007SY-Gi; Mon, 15 May 2023 13:21:43 +0000
Received: by outflank-mailman (input) for mailman id 534752;
 Mon, 15 May 2023 13:21:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyY90-0007SO-6L; Mon, 15 May 2023 13:21:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyY90-0008EZ-3w; Mon, 15 May 2023 13:21:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyY8z-0001zq-NI; Mon, 15 May 2023 13:21:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyY8z-0003Xq-Lz; Mon, 15 May 2023 13:21:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kpMTKpo2JdsHiw1lfmdNzhVzZ7J0Kg5w8iHUi4fE9wk=; b=zBaaNtl6BjNBt0FcaNQpt2qbJX
	mY2+N6+8ViCUmHSdFOsW+cJMiqJxdkuouRtscrA/+aO9/ygT6WTMpEg5LDMACep2tEWgn7VvCTHV8
	gcqRjG3MAs8Mivi+5DjQ4y/4ArLzaezCjyKgeQ6/0DM6JzSj1YSbwph09mXDHJ6HBjLw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180669-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180669: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b8be19ce432a2edd69c0673768a0beeec77f795a
X-Osstest-Versions-That:
    xen=4c507d8a6b6e8be90881a335b0a66eb28e0f7737
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 May 2023 13:21:41 +0000

flight 180669 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180669/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b8be19ce432a2edd69c0673768a0beeec77f795a
baseline version:
 xen                  4c507d8a6b6e8be90881a335b0a66eb28e0f7737

Last test of basis   180631  2023-05-12 08:00:26 Z    3 days
Testing same since   180669  2023-05-15 10:01:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4c507d8a6b..b8be19ce43  b8be19ce432a2edd69c0673768a0beeec77f795a -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 15 14:14:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 14:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534764.832094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyYyE-0004ec-Hf; Mon, 15 May 2023 14:14:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534764.832094; Mon, 15 May 2023 14:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyYyE-0004eV-Ep; Mon, 15 May 2023 14:14:38 +0000
Received: by outflank-mailman (input) for mailman id 534764;
 Mon, 15 May 2023 14:14:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xh4z=BE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyYyD-0004eP-N0
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 14:14:37 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20622.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d22f49ba-f32a-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 16:14:36 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9101.eurprd04.prod.outlook.com (2603:10a6:150:20::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 14:14:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.029; Mon, 15 May 2023
 14:14:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d22f49ba-f32a-11ed-b229-6b7b168915f2
Message-ID: <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
Date: Mon, 15 May 2023 16:14:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 Xenia.Ragiadakou@amd.com, Stefano Stabellini <stefano.stabellini@amd.com>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-1-sstabellini@kernel.org>
 <ZGHx9Mk3UGPdli1h@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZGHx9Mk3UGPdli1h@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0043.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 15.05.2023 10:48, Roger Pau Monné wrote:
> On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
>> From: Stefano Stabellini <stefano.stabellini@amd.com>
>>
>> Xen always generates an XSDT table even if the firmware provided an RSDT
>> table. Instead of copying the XSDT header from the firmware table (which
>> might be missing), generate the XSDT header from a preset.
>>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
>> ---
>>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
>>  1 file changed, 9 insertions(+), 23 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
>> index 307edc6a8c..5fde769863 100644
>> --- a/xen/arch/x86/hvm/dom0_build.c
>> +++ b/xen/arch/x86/hvm/dom0_build.c
>> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
>>                                        paddr_t *addr)
>>  {
>>      struct acpi_table_xsdt *xsdt;
>> -    struct acpi_table_header *table;
>> -    struct acpi_table_rsdp *rsdp;
>>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
>>      unsigned long size = sizeof(*xsdt);
>>      unsigned int i, j, num_tables = 0;
>> -    paddr_t xsdt_paddr;
>>      int rc;
>> +    struct acpi_table_header header = {
>> +        .signature    = "XSDT",
>> +        .length       = sizeof(struct acpi_table_header),
>> +        .revision     = 0x1,
>> +        .oem_id       = "Xen",
>> +        .oem_table_id = "HVM",
> 
> I think this is wrong, as according to the spec the OEM Table ID must
> match the OEM Table ID in the FADT.
> 
> We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
> possibly also the other OEM related fields.
> 
> Alternatively we might want to copy and use the RSDT on systems that
> lack an XSDT, or even just copy the header from the RSDT into Xen's
> crafted XSDT, since the format of the RSDT and the XSDT headers is
> exactly the same (the difference is in the size of the table entries
> that come after: 32-bit vs 64-bit addresses).
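A sketch of that last variant, using a self-contained stand-in type (the struct mirrors ACPICA's struct acpi_table_header layout; this is illustrative, not the actual Xen code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Self-contained stand-in for ACPICA's struct acpi_table_header. */
struct acpi_table_header {
    char signature[4];         /* "RSDT" or "XSDT" */
    uint32_t length;           /* Length of the whole table, in bytes */
    uint8_t revision;
    uint8_t checksum;          /* Must make the whole table sum to zero */
    char oem_id[6];
    char oem_table_id[8];
    uint32_t oem_revision;
    char asl_compiler_id[4];
    uint32_t asl_compiler_revision;
};

/*
 * Craft an XSDT header from the firmware's RSDT header.  The two header
 * layouts are identical, so copy wholesale (OEM fields come along for
 * free) and fix up only the signature and length; the checksum must be
 * recomputed over the finished table by the caller.
 */
static void craft_xsdt_header(struct acpi_table_header *xsdt,
                              const struct acpi_table_header *rsdt,
                              uint32_t xsdt_length)
{
    *xsdt = *rsdt;
    memcpy(xsdt->signature, "XSDT", sizeof(xsdt->signature));
    xsdt->length = xsdt_length;
    xsdt->checksum = 0;        /* Recomputed over the final table */
}
```

This keeps the OEM ID and OEM Table ID consistent with the firmware's tables, which is what the FADT-matching requirement above is about.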

I guess I'd prefer that last variant. ACPI specifically says "Platforms
provide the RSDT to enable compatibility with ACPI 1.0 operating
systems." IOW any halfway modern system (including qemu, that is) ought
to supply an XSDT anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 15 14:18:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 14:18:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534767.832104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZ1X-0005IH-0K; Mon, 15 May 2023 14:18:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534767.832104; Mon, 15 May 2023 14:18:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZ1W-0005IA-TX; Mon, 15 May 2023 14:18:02 +0000
Received: by outflank-mailman (input) for mailman id 534767;
 Mon, 15 May 2023 14:18:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xh4z=BE=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyZ1V-0005I2-Cr
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 14:18:01 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20603.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::603])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4bdbee2d-f32b-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 16:18:00 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9101.eurprd04.prod.outlook.com (2603:10a6:150:20::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Mon, 15 May
 2023 14:17:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.029; Mon, 15 May 2023
 14:17:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4bdbee2d-f32b-11ed-b229-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c22a8925-15e4-47b9-6f5d-f85bbe802255@suse.com>
Date: Mon, 15 May 2023 16:17:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
 Stefano Stabellini <stefano.stabellini@amd.com>, roger.pau@citrix.com,
 andrew.cooper3@citrix.com
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230513011720.3978354-2-sstabellini@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0258.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GV1PR04MB9101:EE_
X-MS-Office365-Filtering-Correlation-Id: 6dc1ae88-9d3f-486a-a01e-08db554f2f1a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6dc1ae88-9d3f-486a-a01e-08db554f2f1a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2023 14:17:58.6078
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: xuSqNN+Xn+QYtHJC0HszRdmUL1p42svvFxb1O9atbzX8thVtoKgyrRbaG0znWIgRhFLJV9nj+G+dV1osRdVAfw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR04MB9101

On 13.05.2023 03:17, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruption of
> the tables in the guest. Instead, copy the tables to Dom0.

Do you really mean "in the guest" (i.e. from Xen's perspective, i.e.
ignoring that when running on qemu it is kind of a guest itself)?

I also consider the statement too broad anyway: various people have
run PVH Dom0 without running into such an issue, so it's clearly not
just "leads to".

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 15 14:43:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 14:43:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534778.832128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZPq-0000M3-HA; Mon, 15 May 2023 14:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534778.832128; Mon, 15 May 2023 14:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZPq-0000LD-Av; Mon, 15 May 2023 14:43:10 +0000
Received: by outflank-mailman (input) for mailman id 534778;
 Mon, 15 May 2023 14:43:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWdz=BE=citrix.com=prvs=492993889=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyZPp-0000CT-3R
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 14:43:09 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce87b97a-f32e-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 16:43:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce87b97a-f32e-11ed-b229-6b7b168915f2
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max policies
Date: Mon, 15 May 2023 15:42:59 +0100
Message-ID: <20230515144259.1009245-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

We already have common and default feature adjustment helpers.  Introduce one
for max featuresets too.

Offer MSR_ARCH_CAPS unconditionally in the max policy, and stop clobbering the
data inherited from the Host policy.  This will be necessary to level a VM
safely for migration.  Note: ARCH_CAPS is still max-only for now, so will not
be inherited by the default policies.

With this done, the special case for dom0 can be shrunk to just resampling the
Host policy (as ARCH_CAPS isn't visible by default yet).
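The max-only behaviour can be modelled with a small self-contained sketch; the bit position and helper names below are illustrative, not Xen's real definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit position only, not Xen's real X86_FEATURE_ARCH_CAPS. */
#define FEATURE_ARCH_CAPS 77
#define FS_WORDS 4

/* Minimal models of __set_bit()/test_bit() on a uint32_t featureset. */
static void fs_set_bit(unsigned int nr, uint32_t *fs)
{
    fs[nr / 32] |= UINT32_C(1) << (nr % 32);
}

static int fs_test_bit(unsigned int nr, const uint32_t *fs)
{
    return (fs[nr / 32] >> (nr % 32)) & 1;
}

/* Max policy adjustments: offer ARCH_CAPS (on Intel, in the patch). */
static void guest_max_adjustments(uint32_t *fs)
{
    fs_set_bit(FEATURE_ARCH_CAPS, fs);
}

/* Default policy adjustments: max-only bits are not inherited here. */
static void guest_default_adjustments(uint32_t *fs)
{
    (void)fs;  /* ARCH_CAPS stays max-only for now, so nothing to set */
}
```

With this split, a toolstack opting in to the max policy sees ARCH_CAPS while the default policy leaves it clear.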

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c | 42 ++++++++++++++++++++++-----------------
 1 file changed, 24 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index dfd9abd8564c..74266d30b551 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -408,6 +408,25 @@ static void __init calculate_host_policy(void)
     p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
 }
 
+static void __init guest_common_max_feature_adjustments(uint32_t *fs)
+{
+    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+    {
+        /*
+         * MSR_ARCH_CAPS is just feature data, and we can offer it to guests
+         * unconditionally, although limit it to Intel systems as it is highly
+         * uarch-specific.
+         *
+         * In particular, the RSBA and RRSBA bits mean "you might migrate to a
+         * system where RSB underflow uses alternative predictors (a.k.a
+         * Retpoline not safe)", so these need to be visible to a guest in all
+         * cases, even when it's only some other server in the pool which
+         * suffers the identified behaviour.
+         */
+        __set_bit(X86_FEATURE_ARCH_CAPS, fs);
+    }
+}
+
 static void __init guest_common_default_feature_adjustments(uint32_t *fs)
 {
     /*
@@ -483,6 +502,7 @@ static void __init calculate_pv_max_policy(void)
         __clear_bit(X86_FEATURE_IBRS, fs);
     }
 
+    guest_common_max_feature_adjustments(fs);
     guest_common_feature_adjustments(fs);
 
     sanitise_featureset(fs);
@@ -490,8 +510,6 @@ static void __init calculate_pv_max_policy(void)
     recalculate_xstate(p);
 
     p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
-
-    p->arch_caps.raw = 0; /* Not supported yet. */
 }
 
 static void __init calculate_pv_def_policy(void)
@@ -598,6 +616,7 @@ static void __init calculate_hvm_max_policy(void)
     if ( !cpu_has_vmx )
         __clear_bit(X86_FEATURE_PKS, fs);
 
+    guest_common_max_feature_adjustments(fs);
     guest_common_feature_adjustments(fs);
 
     sanitise_featureset(fs);
@@ -606,8 +625,6 @@ static void __init calculate_hvm_max_policy(void)
 
     /* It's always possible to emulate CPUID faulting for HVM guests */
     p->platform_info.cpuid_faulting = true;
-
-    p->arch_caps.raw = 0; /* Not supported yet. */
 }
 
 static void __init calculate_hvm_def_policy(void)
@@ -828,7 +845,10 @@ void __init init_dom0_cpuid_policy(struct domain *d)
      * domain policy logic gains a better understanding of MSRs.
      */
     if ( is_hardware_domain(d) && cpu_has_arch_caps )
+    {
         p->feat.arch_caps = true;
+        p->arch_caps.raw = host_cpu_policy.arch_caps.raw;
+    }
 
     /* Apply dom0-cpuid= command line settings, if provided. */
     if ( dom0_cpuid_cmdline )
@@ -858,20 +878,6 @@ void __init init_dom0_cpuid_policy(struct domain *d)
         p->platform_info.cpuid_faulting = false;
 
     recalculate_cpuid_policy(d);
-
-    if ( is_hardware_domain(d) && cpu_has_arch_caps )
-    {
-        uint64_t val;
-
-        rdmsrl(MSR_ARCH_CAPABILITIES, val);
-
-        p->arch_caps.raw = val &
-            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
-             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
-             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
-             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
-             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
-    }
 }
 
 static void __init __maybe_unused build_assertions(void)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 15 14:43:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 14:43:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534777.832121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZPq-0000Fq-8J; Mon, 15 May 2023 14:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534777.832121; Mon, 15 May 2023 14:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZPq-0000ES-1q; Mon, 15 May 2023 14:43:10 +0000
Received: by outflank-mailman (input) for mailman id 534777;
 Mon, 15 May 2023 14:43:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWdz=BE=citrix.com=prvs=492993889=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyZPo-0000CZ-Q5
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 14:43:08 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cc1c406d-f32e-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 16:43:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc1c406d-f32e-11ed-8611-37d641c3527e
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 1/6] x86/boot: Rework dom0 feature configuration
Date: Mon, 15 May 2023 15:42:54 +0100
Message-ID: <20230515144259.1009245-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Right now, dom0's feature configuration is split between the common path and a
dom0-specific one.  This is mostly by accident, and causes some very subtle
bugs.

First, start by clearly defining init_dom0_cpuid_policy() to apply to the
domain that Xen builds automatically.  The late hwdom case is still constructed
in a
mostly normal way, with the control domain having full discretion over the CPU
policy.

Identifying this highlights a latent bug - the two halves of the MSR_ARCH_CAPS
bodge are asymmetric with respect to the hardware domain.  This means that
shim, or a control-only dom0, sees the MSR_ARCH_CAPS CPUID bit but none of the
MSR content.  This in turn declares the hardware to be retpoline-safe by
failing to advertise the {R,}RSBA bits appropriately.  Restrict this logic to
the hardware domain, although the special case will cease to exist shortly.

For the CPUID Faulting adjustment, the comment in ctxt_switch_levelling()
isn't actually relevant.  Provide a better explanation.

Move the recalculate_cpuid_policy() call outside of the dom0-cpuid= case.
This is no change for now, but will become necessary shortly.

Finally, place the second half of the MSR_ARCH_CAPS bodge after the
recalculate_cpuid_policy() call.  This is necessary to avoid transiently
breaking the hardware domain's view while the handling is cleaned up.  This
special case will cease to exist shortly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c | 57 +++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index ef6a2d0d180a..5e7e19fbcda8 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -687,29 +687,6 @@ int init_domain_cpu_policy(struct domain *d)
     if ( !p )
         return -ENOMEM;
 
-    /* See comment in ctxt_switch_levelling() */
-    if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
-        p->platform_info.cpuid_faulting = false;
-
-    /*
-     * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
-     * so dom0 can turn off workarounds as appropriate.  Temporary, until the
-     * domain policy logic gains a better understanding of MSRs.
-     */
-    if ( is_hardware_domain(d) && cpu_has_arch_caps )
-    {
-        uint64_t val;
-
-        rdmsrl(MSR_ARCH_CAPABILITIES, val);
-
-        p->arch_caps.raw = val &
-            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
-             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
-             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
-             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
-             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
-    }
-
     d->arch.cpu_policy = p;
 
     recalculate_cpuid_policy(d);
@@ -845,11 +822,15 @@ void recalculate_cpuid_policy(struct domain *d)
         p->extd.raw[0x19] = EMPTY_LEAF;
 }
 
+/*
+ * Adjust the CPU policy for dom0.  Really, this is "the domain Xen builds
+ * automatically on boot", and might not have the domid 0 (e.g. pvshim).
+ */
 void __init init_dom0_cpuid_policy(struct domain *d)
 {
     struct cpu_policy *p = d->arch.cpuid;
 
-    /* dom0 can't migrate.  Give it ITSC if available. */
+    /* Dom0 doesn't migrate relative to Xen.  Give it ITSC if available. */
     if ( cpu_has_itsc )
         p->extd.itsc = true;
 
@@ -858,7 +839,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
      * so dom0 can turn off workarounds as appropriate.  Temporary, until the
      * domain policy logic gains a better understanding of MSRs.
      */
-    if ( cpu_has_arch_caps )
+    if ( is_hardware_domain(d) && cpu_has_arch_caps )
         p->feat.arch_caps = true;
 
     /* Apply dom0-cpuid= command line settings, if provided. */
@@ -876,8 +857,32 @@ void __init init_dom0_cpuid_policy(struct domain *d)
         }
 
         x86_cpu_featureset_to_policy(fs, p);
+    }
+
+    /*
+     * PV Control domains used to require unfiltered CPUID.  This was fixed in
+     * Xen 4.13, but there is a cmdline knob to restore the prior behaviour.
+     *
+     * If the domain is getting unfiltered CPUID, don't let the guest kernel
+     * play with CPUID faulting either, as Xen's CPUID path won't cope.
+     */
+    if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
+        p->platform_info.cpuid_faulting = false;
 
-        recalculate_cpuid_policy(d);
+    recalculate_cpuid_policy(d);
+
+    if ( is_hardware_domain(d) && cpu_has_arch_caps )
+    {
+        uint64_t val;
+
+        rdmsrl(MSR_ARCH_CAPABILITIES, val);
+
+        p->arch_caps.raw = val &
+            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
+             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
+             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
+             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
+             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
     }
 }
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 15 14:43:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 14:43:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534776.832115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZPp-0000Cr-Ui; Mon, 15 May 2023 14:43:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534776.832115; Mon, 15 May 2023 14:43:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZPp-0000Ck-Re; Mon, 15 May 2023 14:43:09 +0000
Received: by outflank-mailman (input) for mailman id 534776;
 Mon, 15 May 2023 14:43:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWdz=BE=citrix.com=prvs=492993889=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyZPo-0000CT-7R
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 14:43:08 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cc62e9d9-f32e-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 16:43:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc62e9d9-f32e-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684161785;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=hdj4U+pjxuDMgcyNg1EIZzudfZrVmDvOhSwPysOSv2Q=;
  b=XOlEqfhB8CBcu5aalFX6QukLzEIFyx3lpnGS/STph0molDoWg0uO7LMB
   wJFr/srSlmLNwwlefcZezt24vCWQ+ldD73r6vUKpcIpxCe8DHvWHe8gdz
   x2g+ATFudrZIMIr2bNu61Y1UApte/Ky0zEjdmBkGulVT1352r/pfwwQix
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108969382
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:ABMLR6zXuSKm1d3KBtx6t+c/xirEfRIJ4+MujC+fZmUNrF6WrkVVy
 mYfXWjXaKqNYzehfN5xboqw9EtU6J6Dz9VlSQVr+CAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRjPK0T5TcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KXpV2
 r82cS8vUk6kucGa3qC7UMJMg+12eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+BgHXlfiIeg1WSvactuEDYzRBr0airO93QEjCPbZwNzhfG9
 zmWpQwVBDk6C/W95iis6UmRh8n2kT7ECNMWS52Ro6sCbFq7mTVIVUx+uUGAiem0jAuyVsxSL
 2QQ+zEytu4i+UqzVN7/Uhak5nmesXY0efBdDuk74wGl0bfP7kCSAW1sZiFFQMwrsokxXzNC6
 7OSt4q3X3o16uTTEC/DsO7O9lteJBT5M0cZfgBHY1IaweW9h78QogzdTsxIMKuc24id9S7L/
 xiGqy03hrM2hMEN1rmm8V2vvw9AtqQlXSZuuFyJAzvNAhdRIdf8Otf2sQSzAeNodt7xc7WXg
 JQTdyFyBsgqBIrFqiGCSf5l8FqBt6fca220bbKC8vAcG9WRF5yLJ9o4DNJWfh0B3iM4ldjBP
 ifuVft5vsM7AZdTRfYfj3iNI8or17P8Mt/uS+rZaNFDCrAoKl/bpHE/PxHLjzu1+KTJrU3YE
 cbzTCpRJSxCVfQPIMSeHY/xLoPHNghhnDiOFPgXPjys0KaEZW79dIrpxGCmN7hjhIvd+VW9z
 jqqH5fSo/mpeLGkM3a/HE96BQxiEEXX8ris9p0GK7DZc1Y+cIzjYteIqY4cl0Vet/w9vo/1E
 ruVBye0FHKXaaX7FDi3
IronPort-HdrOrdr: A9a23:zwjuKaoFE+xkGG4B+kZz5ggaV5pFeYIsimQD101hICG9E/b4qy
 nApp8mPHPP4gr5O0tPpTnjAsW9qBrnnPZICOIqUotKMjOKhFeV
X-Talos-CUID: =?us-ascii?q?9a23=3AGtx3GGmQ3u8TWWJ7mAkFNG3DgjbXOWbG4zT+elC?=
 =?us-ascii?q?aNThOdp65V02Q4ohEluM7zg=3D=3D?=
X-Talos-MUID: 9a23:enawMgnx4pXPe7X2UOnednpCC+5KzLaHE3swkLM/t9S7ZX10ZBe02WE=
X-IronPort-AV: E=Sophos;i="5.99,276,1677560400"; 
   d="scan'208";a="108969382"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 4/6] x86/cpu-policy: MSR_ARCH_CAPS feature names
Date: Mon, 15 May 2023 15:42:57 +0100
Message-ID: <20230515144259.1009245-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Seed the default visibility from the dom0 special case, which for the most
part just exposes the *_NO bits.  Insert a block dependency from the ARCH_CAPS
CPUID bit to the entire content of the MSR.

The overall CPUID bit is still max-only, so all of MSR_ARCH_CAPS is hidden in
the default policies.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

There is no libxl logic because libxl still uses the older xend format which
is specific to CPUID data.  That is going to need untangling at some other
point.
---
 tools/misc/xen-cpuid.c                      | 13 ++++++++++++
 xen/include/public/arch-x86/cpufeatureset.h | 23 +++++++++++++++++++++
 xen/tools/gen-cpuid.py                      |  3 +++
 3 files changed, 39 insertions(+)

diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index 258584aafb9f..5b717f3f0091 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -228,6 +228,19 @@ static const char *const str_7d2[32] =
 
 static const char *const str_10Al[32] =
 {
+    [ 0] = "rdcl-no",             [ 1] = "eibrs",
+    [ 2] = "rsba",                [ 3] = "skip-l1dfl",
+    [ 4] = "intel-ssb-no",        [ 5] = "mds-no",
+    [ 6] = "if-pschange-mc-no",   [ 7] = "tsx-ctrl",
+    [ 8] = "taa-no",              [ 9] = "mcu-ctrl",
+    [10] = "misc-pkg-ctrl",       [11] = "energy-ctrl",
+    [12] = "doitm",               [13] = "sbdr-ssdp-no",
+    [14] = "fbsdp-no",            [15] = "psdp-no",
+    /* 16 */                      [17] = "fb-clear",
+    [18] = "fb-clear-ctrl",       [19] = "rrsba",
+    [20] = "bhi-no",              [21] = "xapic-status",
+    /* 22 */                      [23] = "ovrclk-status",
+    [24] = "pbrsb-no",
 };
 
 static const char *const str_10Ah[32] =
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 032cec3ccba2..3cfdc71df92b 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -308,6 +308,29 @@ XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
 XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
 
 /* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */
+XEN_CPUFEATURE(RDCL_NO,            16*32+ 0) /*A  No Rogue Data Cache Load (Meltdown) */
+XEN_CPUFEATURE(EIBRS,              16*32+ 1) /*A  Enhanced IBRS */
+XEN_CPUFEATURE(RSBA,               16*32+ 2) /*!A RSB Alternative (Retpoline not safe) */
+XEN_CPUFEATURE(SKIP_L1DFL,         16*32+ 3) /*A  Don't need to flush L1D on VMEntry */
+XEN_CPUFEATURE(INTEL_SSB_NO,       16*32+ 4) /*A  No Speculative Store Bypass */
+XEN_CPUFEATURE(MDS_NO,             16*32+ 5) /*A  No Microarchitectural Data Sampling */
+XEN_CPUFEATURE(IF_PSCHANGE_MC_NO,  16*32+ 6) /*A  No Instruction fetch #MC */
+XEN_CPUFEATURE(TSX_CTRL,           16*32+ 7) /*   MSR_TSX_CTRL */
+XEN_CPUFEATURE(TAA_NO,             16*32+ 8) /*A  No TSX Async Abort */
+XEN_CPUFEATURE(MCU_CTRL,           16*32+ 9) /*   MSR_MCU_CTRL */
+XEN_CPUFEATURE(MISC_PKG_CTRL,      16*32+10) /*   MSR_MISC_PKG_CTRL */
+XEN_CPUFEATURE(ENERGY_FILTERING,   16*32+11) /*   MSR_MISC_PKG_CTRL.ENERGY_FILTERING */
+XEN_CPUFEATURE(DOITM,              16*32+12) /*   Data Operand Invariant Timing Mode */
+XEN_CPUFEATURE(SBDR_SSBD_NO,       16*32+13) /*A  No Shared Buffer Data Read or Sideband Stale Data Propagation */
+XEN_CPUFEATURE(FBDSP_NO,           16*32+14) /*A  No Fill Buffer Stale Data Propagation */
+XEN_CPUFEATURE(PSDP_NO,            16*32+15) /*A  No Primary Stale Data Propagation */
+XEN_CPUFEATURE(FB_CLEAR,           16*32+17) /*A  Fill Buffers cleared by VERW */
+XEN_CPUFEATURE(FB_CLEAR_CTRL,      16*32+18) /*   MSR_OPT_CPU_CTRL.FB_CLEAR_DIS */
+XEN_CPUFEATURE(RRSBA,              16*32+19) /*!A Restricted RSB Alternative */
+XEN_CPUFEATURE(BHI_NO,             16*32+20) /*A  No Branch History Injection  */
+XEN_CPUFEATURE(XAPIC_STATUS,       16*32+21) /*   MSR_XAPIC_DISABLE_STATUS */
+XEN_CPUFEATURE(OVRCLK_STATUS,      16*32+23) /*   MSR_OVERCLOCKING_STATUS */
+XEN_CPUFEATURE(PBRSB_NO,           16*32+24) /*A  No Post-Barrier RSB predictions */
 
 /* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.edx, word 17 */
 
diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index 86d00bb3c273..f28ff708a2fc 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -325,6 +325,9 @@ def crunch_numbers(state):
 
         # In principle the TSXLDTRK insns could also be considered independent.
         RTM: [TSXLDTRK],
+
+        # The ARCH_CAPS CPUID bit enumerates the availability of the whole register.
+        ARCH_CAPS: list(range(RDCL_NO, RDCL_NO + 64)),
     }
 
     deep_features = tuple(sorted(deps.keys()))
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 15 14:43:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 14:43:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534779.832145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZQ1-00016h-Px; Mon, 15 May 2023 14:43:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534779.832145; Mon, 15 May 2023 14:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZQ1-00016a-MD; Mon, 15 May 2023 14:43:21 +0000
Received: by outflank-mailman (input) for mailman id 534779;
 Mon, 15 May 2023 14:43:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWdz=BE=citrix.com=prvs=492993889=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyZQ0-0000CT-TK
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 14:43:20 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d4c22416-f32e-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 16:43:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4c22416-f32e-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684161799;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=gxW2n1DVfIuRfs7m6wcVAnKuUcSoIDDOH3lmvyYJ/Bs=;
  b=UkT0JuDlM4RDzTOpNMbkX4eJCBcCiqghNiEZTkr1oFjvy9Accnm95wiC
   u6XUmG+ZNBjucs/pPvs+5kyDVnP9m8EqW+htSvqEJOFvplkV3CoLRrtGL
   ZfND+IOtCLcUTdrkN6K1zQg/u0pCh2saFGExlnpsad9F8/idE9YZPaOMA
   o=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107840892
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:+/MxvqxazdqT65HDIKJ6t+c/xirEfRIJ4+MujC+fZmUNrF6WrkUAx
 2MeXm/TbK6DN2HyKd1yOd+28h8G75OBnII1HQI9pSAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRjPK0T5TcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KW9Q/
 /1DNGsWUk+8itOPmZG5erRxme12eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+BgHXlfiIeg1WSvactuEDYzRBr0airO93QEjCPbZwNwRbE/
 jKXpQwVBDlEDfK6zBO400mhvf2UrXj5Rt4bDJiBo6sCbFq7mTVIVUx+uUGAiea9ol6zXZRYM
 UN80jojq+0++VKmSvH5XgakuziUsxgEQd1SHuYmrgaXxcL8wSyUG2wFRT5pc8E9uYk9QjlC6
 7OSt4q3X3o16uTTEC/DsO7O9lteJBT5M0cMeyUFFhde+OW8n4wCt0vsTOY+DbGc24id9S7L/
 xiGqy03hrM2hMEN1rmm8V2vvw9AtqQlXSZuuFyJAzvNAhdRIdf8Otf2sQSzAeNodt7xc7WXg
 JQTdyFyBsgqBIrFqiGCSf5l8FqBt6fca220bbKC8vAcG9WRF5yLJ9o4DNJWfh0B3iM4ldjBP
 ifuVft5vsM7AZdTRfYfj3iNI8or17P8Mt/uS+rZaNFDCrAoKl/bpHE/PxHLjzu1+KTJrU3YE
 cbzTCpRJSxCVfQPIMSeHY/xLoPHNghhnDiOFPgXPjys0KaEZW79dIrpxGCmN7hjhIvd+VW9z
 jqqH5fSo/mpeLGkM3a/HE96BQxiEEXX8ris9p0GK7DZc1Y+cIzjYteIqY4cl0Vet/w9vo/1E
 ruVBie0FHKXaaX7FDi3
IronPort-HdrOrdr: A9a23:ofAXFKs+dUl0UkyNAXMTdMSw7skDhtV00zEX/kB9WHVpm6yj+v
 xGUs566faUskd0ZJhEo7q90ca7Lk80maQa3WBzB8bGYOCFghrKEGgK1+KLrwEIcxeUygc379
 YDT0ERMrzN5CNB/KHHCAnTKadd/DGEmprY+ts3GR1WPH9Xg6IL1XYJNu6CeHcGIjWvnfACZe
 ChDswsnUvYRV0nKv6VK1MiROb5q9jChPvdEGM7705O0nj3sduwgoSKaCSl4g==
X-Talos-CUID: =?us-ascii?q?9a23=3A/qgE+muIPnPi3f4Y3qDcczSe6IsrbHfcl2uLOHP?=
 =?us-ascii?q?nNmtYceS2S2/N5bxNxp8=3D?=
X-Talos-MUID: 9a23:EVJnxAlBBa+JybIGNocUdno8DelN5o2TKXsAqoddi5KGK3B5IxKk2WE=
X-IronPort-AV: E=Sophos;i="5.99,276,1677560400"; 
   d="scan'208";a="107840892"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 5/6] x86/boot: Record MSR_ARCH_CAPS for the Raw and Host CPU policy
Date: Mon, 15 May 2023 15:42:58 +0100
Message-ID: <20230515144259.1009245-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Extend x86_cpu_policy_fill_native() with a read of ARCH_CAPS based on the
CPUID information just read, which removes the need to handle it specially in
calculate_raw_cpu_policy().

Extend generic_identify() to read ARCH_CAPS into x86_capability[], which is
fed into the Host Policy.  This in turn means there's no need to special case
arch_caps in calculate_host_policy().

No practical change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c | 12 ------------
 xen/arch/x86/cpu/common.c |  5 +++++
 xen/lib/x86/cpuid.c       |  7 ++++++-
 3 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 49f5465ec445..dfd9abd8564c 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -354,9 +354,6 @@ void calculate_raw_cpu_policy(void)
 
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* Was already added by probe_cpuid_faulting() */
-
-    if ( cpu_has_arch_caps )
-        rdmsrl(MSR_ARCH_CAPABILITIES, p->arch_caps.raw);
 }
 
 static void __init calculate_host_policy(void)
@@ -409,15 +406,6 @@ static void __init calculate_host_policy(void)
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
     p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
-
-    /* Temporary, until we have known_features[] for feature bits in MSRs. */
-    p->arch_caps.raw = raw_cpu_policy.arch_caps.raw &
-        (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
-         ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
-         ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO |
-         ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO | ARCH_CAPS_PSDP_NO |
-         ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA | ARCH_CAPS_BHI_NO |
-         ARCH_CAPS_PBRSB_NO);
 }
 
 static void __init guest_common_default_feature_adjustments(uint32_t *fs)
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index edc4db1335eb..a3a341fd7db2 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -474,6 +474,11 @@ static void generic_identify(struct cpuinfo_x86 *c)
 		cpuid_count(0xd, 1,
 			    &c->x86_capability[FEATURESET_Da1],
 			    &tmp, &tmp, &tmp);
+
+	if (test_bit(X86_FEATURE_ARCH_CAPS, c->x86_capability))
+		rdmsr(MSR_ARCH_CAPABILITIES,
+		      c->x86_capability[FEATURESET_10Al],
+		      c->x86_capability[FEATURESET_10Ah]);
 }
 
 /*
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index a9f31858aeff..dfd377cfb7ef 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -226,7 +226,12 @@ void x86_cpu_policy_fill_native(struct cpu_policy *p)
     p->hv_limit = 0;
     p->hv2_limit = 0;
 
-    /* TODO MSRs */
+#ifdef __XEN__
+    /* TODO MSR_PLATFORM_INFO */
+
+    if ( p->feat.arch_caps )
+        rdmsrl(MSR_ARCH_CAPABILITIES, p->arch_caps.raw);
+#endif
 
     x86_cpu_policy_recalc_synth(p);
 }
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 15 14:43:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 14:43:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534780.832155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZQ9-0001hG-3b; Mon, 15 May 2023 14:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534780.832155; Mon, 15 May 2023 14:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZQ8-0001gg-WD; Mon, 15 May 2023 14:43:29 +0000
Received: by outflank-mailman (input) for mailman id 534780;
 Mon, 15 May 2023 14:43:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWdz=BE=citrix.com=prvs=492993889=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyZQ8-0000CT-8M
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 14:43:28 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d97bc3d2-f32e-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 16:43:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d97bc3d2-f32e-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684161807;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Et7lO32iFTeTcBuxF+aOHF19+4pOz9StdzKvbkzwkgA=;
  b=boiO6rGVhvHSiBWhpv3UkQ+HG1UZlrW+kFW4CgMTegIQaNZWxDmbMSZl
   d8bac/piW7cMjPNUIAqRQx4OcjQLz1QejeYoxr9VXNtU4+cvgx4VIQcaw
   V20y5bji4Zshi77S13okAVmjaHe3XxpnOiIlrwzE1KnGZVXhd183k+w+I
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 111537997
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:Atd2O64DhIOwVmg94CPbvwxRtDPHchMFZxGqfqrLsTDasY5as4F+v
 jZNDW6PPP+PYGqge94gbdmw9RgA6MSHn9dqSQdlr3g9Hi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7ZwehBtC5gZlPa0S4QeE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m8
 6QydSwTdQG6lvuR+6iGbeRpu+18I5y+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrlD5fydVtxS+oq0v7nKI5AdwzKLsIJzefdniqcB9xx/B+
 zmZrjWmav0cHPyk5zzeqWqVurDSxHPCYbkWOqWK2tc/1TV/wURMUUZLBDNXu8KRlUqWS99Zb
 UsO9UIGvaU0sUCmUNT5dxm5u2Kf+A4RXcJKFO834x3LzbDbiy67LGUZSj9KaPQ9qdQ7Azct0
 ze0c8jBXGI19ufPEDTEq+nS9GnpUcQIEYMcTSlcZ1YZ/cLymZAqqSnVc/FuFZOc0dKgTFkc3
 Au2hCQ5grwSi+sC2KO64U3LjlqQm3TZcuImzl6JBzz4t2uVcKbgPtX1sgaDsZ6sOa7DFjG8U
 G44d99yBQzkJbWEj2SzTeoEB9lFDN7VYWSH0TaD83TMnglBGkJPn6gKulmSx28zaK7onAMFh
 2eN0T69HLcJYBOXgVZfOupd8fgCw6n6DsjCXfvJdNdIaZUZXFbZrH02NR/KgDu9yxBEfUQD1
 XCzIK6R4YsyU/w7nFJauc9HuVPU+szO7TyKHs2qp/hW+bGfeGSUWd84Dbd6VchgtPnsiFyMo
 75i2z6il003vBvWPnOGrub+7DkicRAGOHwBg5YKJ7/efFA3RDlJ5j246epJRrGJVp99zo/gl
 kxRkGcHkjITWVWvxd22V01e
IronPort-HdrOrdr: A9a23:BtMzv6BgLGoQsYTlHelc55DYdb4zR+YMi2TDt3oddfWaSKylfq
 GV7ZAmPHrP4gr5N0tOpTntAse9qBDnhPtICOsqTNSftWDd0QPFEGgL1+DfKlbbak/DH4BmtJ
 uICJIOb+EZDTJB/LrHCAvTKade/DFQmprY+9s3zB1WPHBXg7kL1XYeNu4CeHcGPjWvA/ACZe
 Ohz/sCnRWMU1INYP+2A3EUNtKz2uEixPrdEGY77wdM0nj0sQ+V
X-Talos-CUID: =?us-ascii?q?9a23=3AjB92CWvAv4eK4/LHFrcnKynO6IsjKleA0GnLMna?=
 =?us-ascii?q?pEEFSUoCyRFqt0fxrxp8=3D?=
X-Talos-MUID: 9a23:wYigsQmiGrWx2xyDHZ4WdnpCD+luuP+2MXxSjMxWgvC/MQJoCRu02WE=
X-IronPort-AV: E=Sophos;i="5.99,277,1677560400"; 
   d="scan'208";a="111537997"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 0/6] x86: Introduce MSR_ARCH_CAPS into featuresets
Date: Mon, 15 May 2023 15:42:53 +0100
Message-ID: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Also clean up the various special cases we've already got.  No practical
change to a system, but this is the trimmed view of the featuresets on a
Cascade Lake CPU with the series in place.

                        KEY ... 10Al     10Ah
  Static sets:
  Known                     ... 01beffff:00000000
  Special                   ... 00080004:00000000
  PV Max                    ... 011ae17f:00000000
  PV Default                ... 011ae17f:00000000
  HVM Shadow Max            ... 011ae17f:00000000
  HVM Shadow Default        ... 011ae17f:00000000
  HVM Hap Max               ... 011ae17f:00000000
  HVM Hap Default           ... 011ae17f:00000000

  Dynamic sets:
  Raw                       ... 000aacab:00000000
  Host                      ... 000aacab:00000000
  PV Default                ... 00000000:00000000
  HVM Default               ... 00000000:00000000
  PV Max                    ... 000aa02b:00000000
  HVM Max                   ... 000aa02b:00000000

Andrew Cooper (6):
  x86/boot: Rework dom0 feature configuration
  x86/boot: Adjust MSR_ARCH_CAPS handling for the Host policy
  x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
  x86/cpu-policy: MSR_ARCH_CAPS feature names
  x86/boot: Record MSR_ARCH_CAPS for the Raw and Host CPU policy
  x86/boot: Expose MSR_ARCH_CAPS data in guest max policies

 tools/misc/xen-cpuid.c                      | 23 ++++++
 xen/arch/x86/cpu-policy.c                   | 83 ++++++++++-----------
 xen/arch/x86/cpu/common.c                   |  5 ++
 xen/include/public/arch-x86/cpufeatureset.h | 27 +++++++
 xen/include/xen/lib/x86/cpu-policy.h        | 18 ++---
 xen/lib/x86/cpuid.c                         | 11 ++-
 xen/tools/gen-cpuid.py                      |  3 +
 7 files changed, 117 insertions(+), 53 deletions(-)

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 15 14:43:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 14:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534781.832165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZQB-000252-HM; Mon, 15 May 2023 14:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534781.832165; Mon, 15 May 2023 14:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZQB-00024m-ES; Mon, 15 May 2023 14:43:31 +0000
Received: by outflank-mailman (input) for mailman id 534781;
 Mon, 15 May 2023 14:43:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWdz=BE=citrix.com=prvs=492993889=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyZQ9-0000CT-AZ
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 14:43:29 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dabed41e-f32e-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 16:43:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dabed41e-f32e-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684161808;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Fr/2qPrv6QELYt0rVeizVvUhPwwfLWM3DN3E6UnacdY=;
  b=Rg8A/N/184UXnPi5ozCmRmy7/CzDXYF1Fig7MtYhgU6o8F7lwy/OOQf6
   t1H3+bA/QnYY44yNV3cJw8o0Fdav9JI50oyEXp5JdrHhn2cPQ24vxbSMA
   OkXP0op0HCkPoWADZwTU+AU3tV21TLy063uoHd5rMAeVoFfvpUohnyXOf
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 111537999
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:Htmy667TGRBKaTyQxxGhegxRtDPHchMFZxGqfqrLsTDasY5as4F+v
 jFKXGiOaanbZmGjfNwgPIXl80NV6JeDzoU3QFc4+yxkHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7ZwehBtC5gZlPa0S4QeE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m8
 6QydSwTdQG6lvuR+6iGbeRpu+18I5y+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrlD5fydVtxS+oq0v7nKI5AdwzKLsIJzefdniqcB9xx/B+
 zmZrjWoav0cHNKA2yOU7Vuyv6jWnCXcCbsdNZeg+tc/1TV/wURMUUZLBDNXu8KRlUqWS99Zb
 UsO9UIGvaU0sUCmUNT5dxm5u2Kf+A4RXcJKFO834x3LzbDbiy67LGUZSj9KaPQ9qdQ7Azct0
 ze0c8jBXGI19ufPEDTEq+nS9GnpUcQIEYMcTSlcZ1YZ/cLymZAqqSnVc/FuFZOc0dKgTFkc3
 Au2hCQ5grwSi+sC2KO64U3LjlqQm3TZcuImzl6JBzz4t2uVcKbgPtX1sgaDsZ6sOa7DFjG8U
 G44d99yBQzkJbWEj2SzTeoEB9lFDN7VYWSH0TaD83TMnglBGkJPn6gKulmSx28zaK7onAMFh
 2eN0T69HLcJYBOXgVZfOupd8fgCw6n6DsjCXfvJdNdIaZUZXFbZrH02NR/KgDu9yxBEfUQD1
 XCzIK6R4YsyU/w7nFJauc9HuVPU+szO7TyKHs2qp/hW+bGfeGSUWd84Dbd6VchgtPnsiFyMo
 75i2z6il003vBvWPnOGrub+7DkicRAGOHwBg5YKJ7/efFA3RDlJ5j246epJRrGJVp99zo/gl
 kxRkGcGmTITWVWvxd22V01e
IronPort-HdrOrdr: A9a23:JeYlJ6DFsTaguLPlHelc55DYdb4zR+YMi2TDt3oddfWaSKylfq
 GV7ZAmPHrP4gr5N0tOpTntAse9qBDnhPtICOsqTNSftWDd0QPFEGgL1+DfKlbbak/DH4BmtJ
 uICJIOb+EZDTJB/LrHCAvTKade/DFQmprY+9s3zB1WPHBXg7kL1XYeNu4CeHcGPjWvA/ACZe
 Ohz/sCnRWMU1INYP+2A3EUNtKz2uEixPrdEGY77wdM0nj0sQ+V
X-Talos-CUID: =?us-ascii?q?9a23=3AE9T8v2nv0wE1g2FfdJkhpO3QNuHXOXuF5irMPVS?=
 =?us-ascii?q?pMHdwb4aPdHid/Z5NzMU7zg=3D=3D?=
X-Talos-MUID: 9a23:QLo55wtiVZhqF41PlM2nii5wLdZ3yrqXT1kMl7JXgsalHzR8EmLI
X-IronPort-AV: E=Sophos;i="5.99,277,1677560400"; 
   d="scan'208";a="111537999"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 3/6] x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
Date: Mon, 15 May 2023 15:42:56 +0100
Message-ID: <20230515144259.1009245-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Bits up to and including 24 are already defined, meaning that we're not far
off needing the second word.  Put both words in right away.

The bool bitfield names in the arch_caps union are unused, and somewhat out of
date.  They'll shortly be automatically generated.
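
As a rough illustration (simplified, hypothetical names rather than the real
Xen declarations, and assuming x86's little-endian layout), the widened union
lets the two new 32-bit featureset words alias the 64-bit MSR value:

```c
#include <stdint.h>

/* Simplified stand-in for the arch_caps union in struct cpu_policy. */
union arch_caps {
    uint64_t raw;
    struct {
        uint32_t lo, hi;   /* featureset words 16 (10Al) and 17 (10Ah) */
    };
};

/* Mimics what x86_cpu_policy_to_featureset() does for the two new words. */
static void to_featureset(const union arch_caps *c, uint32_t fs[2])
{
    fs[0] = c->lo;   /* FEATURESET_10Al */
    fs[1] = c->hi;   /* FEATURESET_10Ah */
}
```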

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 tools/misc/xen-cpuid.c                      | 10 ++++++++++
 xen/include/public/arch-x86/cpufeatureset.h |  4 ++++
 xen/include/xen/lib/x86/cpu-policy.h        | 18 ++++++++----------
 xen/lib/x86/cpuid.c                         |  4 ++++
 4 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index 8ec143ebc854..258584aafb9f 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -226,6 +226,14 @@ static const char *const str_7d2[32] =
     [ 4] = "bhi-ctrl",      [ 5] = "mcdt-no",
 };
 
+static const char *const str_10Al[32] =
+{
+};
+
+static const char *const str_10Ah[32] =
+{
+};
+
 static const struct {
     const char *name;
     const char *abbr;
@@ -248,6 +256,8 @@ static const struct {
     { "0x00000007:2.edx", "7d2", str_7d2 },
     { "0x00000007:1.ecx", "7c1", str_7c1 },
     { "0x00000007:1.edx", "7d1", str_7d1 },
+    { "0x0000010a.lo",   "10Al", str_10Al },
+    { "0x0000010a.hi",   "10Ah", str_10Ah },
 };
 
 #define COL_ALIGN "18"
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 8de73aebc3e0..032cec3ccba2 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -307,6 +307,10 @@ XEN_CPUFEATURE(AVX_VNNI_INT8,      15*32+ 4) /*A  AVX-VNNI-INT8 Instructions */
 XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
 XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
 
+/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */
+
+/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.edx, word 17 */
+
 #endif /* XEN_CPUFEATURE */
 
 /* Clean up from a default include.  Close the enum (for C). */
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index bfa425060464..9b51f8330f92 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -20,6 +20,8 @@
 #define FEATURESET_7d2   13 /* 0x00000007:2.edx    */
 #define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
 #define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
+#define FEATURESET_10Al  16 /* 0x0000010a.eax      */
+#define FEATURESET_10Ah  17 /* 0x0000010a.edx      */
 
 struct cpuid_leaf
 {
@@ -350,17 +352,13 @@ struct cpu_policy
      * fixed in hardware.
      */
     union {
-        uint32_t raw;
+        uint64_t raw;
+        struct {
+            uint32_t lo, hi;
+        };
         struct {
-            bool rdcl_no:1;
-            bool ibrs_all:1;
-            bool rsba:1;
-            bool skip_l1dfl:1;
-            bool ssb_no:1;
-            bool mds_no:1;
-            bool if_pschange_mc_no:1;
-            bool tsx_ctrl:1;
-            bool taa_no:1;
+            DECL_BITFIELD(10Al);
+            DECL_BITFIELD(10Ah);
         };
     } arch_caps;
 
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 68aafb404927..a9f31858aeff 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -79,6 +79,8 @@ void x86_cpu_policy_to_featureset(
     fs[FEATURESET_7d2]       = p->feat._7d2;
     fs[FEATURESET_7c1]       = p->feat._7c1;
     fs[FEATURESET_7d1]       = p->feat._7d1;
+    fs[FEATURESET_10Al]      = p->arch_caps.lo;
+    fs[FEATURESET_10Ah]      = p->arch_caps.hi;
 }
 
 void x86_cpu_featureset_to_policy(
@@ -100,6 +102,8 @@ void x86_cpu_featureset_to_policy(
     p->feat._7d2             = fs[FEATURESET_7d2];
     p->feat._7c1             = fs[FEATURESET_7c1];
     p->feat._7d1             = fs[FEATURESET_7d1];
+    p->arch_caps.lo          = fs[FEATURESET_10Al];
+    p->arch_caps.hi          = fs[FEATURESET_10Ah];
 }
 
 void x86_cpu_policy_recalc_synth(struct cpu_policy *p)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 15 14:43:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 14:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534782.832170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZQB-00027Q-Qz; Mon, 15 May 2023 14:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534782.832170; Mon, 15 May 2023 14:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZQB-00026n-LS; Mon, 15 May 2023 14:43:31 +0000
Received: by outflank-mailman (input) for mailman id 534782;
 Mon, 15 May 2023 14:43:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uWdz=BE=citrix.com=prvs=492993889=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyZQA-0000CT-AL
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 14:43:30 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id db620a99-f32e-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 16:43:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db620a99-f32e-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684161809;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=j2YW2cWonftkq/hMdpKaFZq5iF9LBLJDuv16HmF0O5k=;
  b=PVZzLs7enL9MtG4DB6ZOo7txhTIPdFr5pR/l5qkP6J3vmYjkyv3Ui0el
   iyYE+qGcd2IcxWcAs2vRBCTsTmXOnzXaLxEtuKhTvpEWT+Yk2V6UNBTdR
   cUWQqTArAmRK2TirO5N0QFiKQnOMEWNS2VuL9xkFWZsKV+3ks258Yau/3
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 111537998
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:nZ2xyKM54Km6rIjvrR2ul8FynXyQoLVcMsEvi/4bfWQNrUoghjYEy
 WYeCm/XPa2JMGumcop0a43j8xxQsZ7Wx4VrTAto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tF5wFmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0uAvKzxS6
 +cJESEUSSuZusW7xY+SFeY506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLoXmuuyi2a5WDpfsF+P/oI84nTJzRw327/oWDbQUoXTHJgOwRfJ9
 goq+UzfRTo+ZfuH9AOl71/81/X1zCPDY407QejQGvlC3wTImz175ActfUS/iem0jAi5Qd03A
 1wZ/G8ioLY/8GSvT8LhRFuorXicpBkeVtFMVeog52ml6IDZ/gKYDWgsVSNaZZots8peeNAx/
 gbXxZWzX2Up6eDLDyvHrd94sA9eJwATdUVfeTEfXDIU+v7YurwUgxfyZ4lsRfvdYsLOJRn8x
 DWDrS4bjroVjNIW26jTwW0rkw5AtbCSEFdru1y/snaNq1ogOdX7P9DABU3zt64oEWqPcrWWU
 JHoceC65ftGM5yCnTflrA4lTODwvKbt3NExbDdS83gdG9aFoSTLkWN4umsWyKJV3iEsJ1fUj
 Lf741852XOqFCLCgVVLS4ywEd826qPrCM7oUPvZBvIXPMgtLF/Wpn41NRXIt4wIrKTLufBXB
 HtmWZz0USZy5VpPllJauNvxIZd0n3tjlAs/tLjwzgi90Kr2WUN5vYwtaQPUBshgtfPsnekg2
 4oHXyd840kFAbKWj+i+2dJ7EG3m2lBhWMGn9pINJ7LbSuekcUl4Y8LsLXoaU9QNt8xoei3go
 SnVtpNwoLYnuUD6FA==
IronPort-HdrOrdr: A9a23:10NsPKtlO5DUIiL4lA5bpxNk7skDhtV00zEX/kB9WHVpm6yj+v
 xGUs566faUskd0ZJhEo7q90ca7Lk80maQa3WBzB8bGYOCFghrKEGgK1+KLrwEIcxeUygc379
 YDT0ERMrzN5CNB/KHHCAnTKadd/DGEmprY+ts3GR1WPH9Xg6IL1XYJNu6CeHcGIjWvnfACZe
 ChDswsnUvYRV0nKv6VK1MiROb5q9jChPvdEGM7705O0nj3sduwgoSKaCSl4g==
X-Talos-CUID: =?us-ascii?q?9a23=3AemMzwWgM4vLSbgSLVbvsnCMrYTJudCX3kWvOOF6?=
 =?us-ascii?q?BGH9Oc6eoblKVxYo7nJ87?=
X-Talos-MUID: =?us-ascii?q?9a23=3AsAorpA0Ii0I3kkfVYHTIgsuKzjUjxeOFLBEqiag?=
 =?us-ascii?q?/m9SpZX1qAWadtA+4a9py?=
X-IronPort-AV: E=Sophos;i="5.99,277,1677560400"; 
   d="scan'208";a="111537998"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 2/6] x86/boot: Adjust MSR_ARCH_CAPS handling for the Host policy
Date: Mon, 15 May 2023 15:42:55 +0100
Message-ID: <20230515144259.1009245-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

We are about to move MSR_ARCH_CAPS into the featureset, but the order of
operations (copy the raw policy, then copy x86_capabilities[] in) will end up
clobbering the ARCH_CAPS value currently visible in the Host policy.

To avoid this transient breakage, read from raw_cpu_policy rather than
modifying the host value in place.  This logic will be removed entirely in
due course.
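
The hazard can be sketched with a toy model (hypothetical names, not the
actual Xen code): once the featureset copy has overwritten the field, masking
it in place can only lose information, whereas rebuilding it from the raw
policy's untouched value works regardless of ordering:

```c
#include <stdint.h>

#define KNOWN_MASK 0x1ffu   /* stand-in for the ARCH_CAPS_* bit mask */

/* Before: mask the field in place; yields 0 if a prior copy zeroed it. */
static uint64_t masked_in_place(uint64_t clobbered_field)
{
    clobbered_field &= KNOWN_MASK;
    return clobbered_field;
}

/* After: rebuild the field from the raw policy's preserved value. */
static uint64_t masked_from_raw(uint64_t raw_value)
{
    return raw_value & KNOWN_MASK;
}
```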

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 5e7e19fbcda8..49f5465ec445 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -411,7 +411,7 @@ static void __init calculate_host_policy(void)
     p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
 
     /* Temporary, until we have known_features[] for feature bits in MSRs. */
-    p->arch_caps.raw &=
+    p->arch_caps.raw = raw_cpu_policy.arch_caps.raw &
         (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
          ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
          ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO |
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 15 15:04:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 15:04:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534801.832185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZkM-0006BW-GN; Mon, 15 May 2023 15:04:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534801.832185; Mon, 15 May 2023 15:04:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyZkM-0006BP-Cz; Mon, 15 May 2023 15:04:22 +0000
Received: by outflank-mailman (input) for mailman id 534801;
 Mon, 15 May 2023 15:04:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1RbS=BE=citrix.com=prvs=492829998=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pyZkK-0006BJ-Uf
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 15:04:20 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bfc00883-f331-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 17:04:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfc00883-f331-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684163053;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=bQrXqaeJcD+jm1ccN670onl+ctwTMzM6NSMGay6Q+zI=;
  b=VjsMB0ouwx/UkZ9uyrBqgBEbmoc6bOD+V28ubbSWlVhlpmsEYSZXMiEU
   bV6NWrwJYPlyMgcVcxM7YuTi+MxlBzjXyl7JzGpuRhquVJPKyPmQkMRQO
   S79pIQDubqi/64oubBE/w0f2m8QU+SpgHEyPK2R6JIEG04kXAuZxN26Nn
   k=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108418728
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:li/UNq6hV1ZED69X2GkIewxRtM7HchMFZxGqfqrLsTDasY5as4F+v
 jQYUW2Obv+JNmbxe98gbonk9xkEucPWzNc1QQs/qSszHi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7ZwehBtC5gZlPa0S4QeE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m9
 9gxCm0EShq5pueKmoqiF6pXl+V+BZy+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrnD5bz1frkPTvact6nLf5AdwzKLsIJzefdniqcB9xx7I+
 juWoD6pav0cHNWOwzeUyGuJuu2VlBL3f4Y3T4KoyuE/1TV/wURMUUZLBDNXu8KRj0ekXttFJ
 k88+ywwrLMz/kimUtn8WRKjpHeO+BUbXrJ4CuA/9USBx7TZ5y6fAW4LSCMHb8Yp3Oc/XTEw3
 0WFt8/oDzdo9raSTBq15rqS6D+/JyURBWsDfjMfCxsI5cH5p4M+hQ6JScxseIa3h9v5AyDtw
 BiFqSE/g/MYistj/76g4VnNjjaop57IZg04/APaWiSi9AwRTJaseoiA+VXdq/FaI+6kokKp5
 SZe3ZLEtaZXUM/LzXbWKAkQIF23z/ShGR+BiHplJbgky26V4iPgRpFxvRgrcS+FLf04UTPuZ
 UbSvyZY65lSIGamYMdLXm6hNyg55fO+TIq4D5g4evILO8EsL1HfoEmCcGbKhwjQfF4QfbbT0
 HtxWeKlFj4kBKtu11JarM9NgOZwlkjSKY4+LK0XLihLM5LEPhZ5qp9fajNii9zVC4vayDg5C
 /4Fa6O3J+x3CYUSmBX//48JNkwtJnMmH53woME/XrfdclY+SDB7VKSBmutJl2lZc0N9x4/1E
 oyVABcEmDITe1WdQel1VpyTQOy2BssuxZ7KFSctIUypyxAeXGpb149GL8FfVeB+pIReIQtcE
 6FtlzOoXq4eFVwqOl01MfHAkWCVXE721FPTYXD0PGBXklwJb1Whx+IItzDHrEEmZhdbf+Nny
 1F8/ms3maY+ejk=
IronPort-HdrOrdr: A9a23:v0sOLqzlO1N3/Ee+W/DGKrPw8L1zdoMgy1knxilNoH1uA7elfq
 WV98jzuiWbtN98YhwdcJO7Sc29qArnlKKduLNwAV7AZniFhILLFvAb0WKK+VSJcREWkNQtsJ
 uIGJIQNDSfNzRHZInBkW6F+nsbsb+62bHtr933i11qSRhua6lm5Qs8MACGCUd7LTM2ZqbRUK
 Dsn/Z6mw==
X-Talos-CUID: 9a23:cdAeUmAjf/2nCvr6EytD3WxJE+QvSFzY8C3oOhL7MndIFZTAHA==
X-Talos-MUID: 9a23:wQwRawsYw9+IUEQAfs2njSBoOdZS+oGSChoEoJkhv+zfCjxMJGLI
X-IronPort-AV: E=Sophos;i="5.99,277,1677560400"; 
   d="scan'208";a="108418728"
Date: Mon, 15 May 2023 16:03:55 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
	<xen-devel@lists.xenproject.org>, Marek
 =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>,
	<qemu-devel@nongnu.org>
Subject: Re: [PATCH] xen: Fix host pci for stubdom
Message-ID: <48c55d33-aa16-4867-a477-f6df45c7d9d9@perard>
References: <20230320000554.8219-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230320000554.8219-1-jandryuk@gmail.com>

On Sun, Mar 19, 2023 at 08:05:54PM -0400, Jason Andryuk wrote:
> diff --git a/hw/xen/xen-host-pci-device.c b/hw/xen/xen-host-pci-device.c
> index 8c6e9a1716..51a72b432d 100644
> --- a/hw/xen/xen-host-pci-device.c
> +++ b/hw/xen/xen-host-pci-device.c
> @@ -33,13 +34,101 @@
>  #define IORESOURCE_PREFETCH     0x00001000      /* No side effects */
>  #define IORESOURCE_MEM_64       0x00100000
>  
> +/*
> + * Non-passthrough (dom0) accesses are local PCI devices and use the given BDF
> + * Passthrough (stubdom) accesses are through the PV frontend PCI device.  Those
> + * either have a BDF identical to the backend's BDF (xen-backend.passthrough=1)
> + * or a local virtual BDF (xen-backend.passthrough=0)
> + *
> + * We are always given the backend's BDF and need to lookup the appropriate
> + * local BDF for sysfs access.
> + */
> +static void xen_host_pci_fill_local_addr(XenHostPCIDevice *d, Error **errp)
> +{
> +    unsigned int num_devs, len, i;
> +    unsigned int domain, bus, dev, func;
> +    char *be_path;
> +    char path[80];
> +    char *msg;
> +
> +    be_path = qemu_xen_xs_read(xenstore, 0, "device/pci/0/backend", &len);
> +    if (!be_path) {
> +        /*
> +         * be_path doesn't exist, so we are dealing with a local
> +         * (non-passthough) device.
> +         */
> +        d->local_domain = d->domain;
> +        d->local_bus = d->bus;
> +        d->local_dev = d->dev;
> +        d->local_func = d->func;
> +
> +        return;
> +    }
> +
> +    snprintf(path, sizeof(path), "%s/num_devs", be_path);

Is 80 bytes for `path` enough?
What if the path is truncated due to the limit?


There's xs_node_scanf(), which might be useful: it does the error
handling and calls scanf().  But I'm not sure whether it can be used here, in
this file.
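
(For illustration only, not a suggestion for this exact patch: one defensive
pattern is to check snprintf()'s return value, which reports the length the
full string would have needed, so truncation can be detected rather than
silently reading a wrong xenstore node.)

```c
#include <stdbool.h>
#include <stdio.h>

/* Returns true only if the formatted path fit into buf; snprintf()
 * returns the length it *would* have written, so n >= size means the
 * output was truncated. */
static bool build_path(char *buf, size_t size, const char *be_path)
{
    int n = snprintf(buf, size, "%s/num_devs", be_path);
    return n >= 0 && (size_t)n < size;
}
```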

> +    msg = qemu_xen_xs_read(xenstore, 0, path, &len);
> +    if (!msg) {
> +        goto err_out;
> +    }
> +
> +    if (sscanf(msg, "%u", &num_devs) != 1) {

libxl writes `num_devs` with "%d", so I think QEMU should read it with %d.


> +        error_setg(errp, "Failed to parse %s (%s)", msg, path);
> +        goto err_out;
> +    }
> +    free(msg);
> +
> +    for (i = 0; i < num_devs; i++) {
> +        snprintf(path, sizeof(path), "%s/dev-%u", be_path, i);

Same here: the path is written with %d, even if that doesn't change the
result.
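
(The mismatch is mostly cosmetic, but %d and %u do diverge on malformed
input: sscanf()'s %u converts as strtoul() would, and so silently accepts a
leading minus sign.  A standalone illustration, not QEMU code:)

```c
#include <stdio.h>

/* %d preserves the sign of negative input... */
static int parse_signed(const char *s)
{
    int v = 0;
    sscanf(s, "%d", &v);
    return v;
}

/* ...while %u wraps it around to a huge unsigned value. */
static unsigned parse_unsigned(const char *s)
{
    unsigned v = 0;
    sscanf(s, "%u", &v);
    return v;
}
```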


Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon May 15 17:09:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 17:09:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534814.832194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pybhT-0002BH-BU; Mon, 15 May 2023 17:09:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534814.832194; Mon, 15 May 2023 17:09:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pybhT-0002B9-8K; Mon, 15 May 2023 17:09:31 +0000
Received: by outflank-mailman (input) for mailman id 534814;
 Mon, 15 May 2023 17:09:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pybhS-0002Az-1z; Mon, 15 May 2023 17:09:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pybhR-0005cS-VR; Mon, 15 May 2023 17:09:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pybhR-00062x-K8; Mon, 15 May 2023 17:09:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pybhR-0002Nb-Je; Mon, 15 May 2023 17:09:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YtHims+Hr3lV8U5wow4sLch2z0h5LQTtXS8+2KN9ypE=; b=BWcFyLyvBrfX5q7+D2sqfWza8w
	t7/IwWUN0PwnA21K4PxjPunxTq83pfyee0VAmiMVI9b7+yvB9sHtE69in0j7Ermyd1fXZjloA7msx
	6d8+YHFGEoYw5PxNGRe0etdM1VFSIKrItrcdXG1+x5J8QnTKLt4hhLOZZBlmQyM9UnBI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180672-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180672: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=56e2c8e5860090a35d5f0cafe168223a2a7c0e62
X-Osstest-Versions-That:
    xen=b8be19ce432a2edd69c0673768a0beeec77f795a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 May 2023 17:09:29 +0000

flight 180672 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180672/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  56e2c8e5860090a35d5f0cafe168223a2a7c0e62
baseline version:
 xen                  b8be19ce432a2edd69c0673768a0beeec77f795a

Last test of basis   180669  2023-05-15 10:01:53 Z    0 days
Testing same since   180672  2023-05-15 14:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b8be19ce43..56e2c8e586  56e2c8e5860090a35d5f0cafe168223a2a7c0e62 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 15 19:12:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 19:12:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534825.832205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pydc1-0008CA-14; Mon, 15 May 2023 19:12:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534825.832205; Mon, 15 May 2023 19:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pydc0-0008C3-Tk; Mon, 15 May 2023 19:12:00 +0000
Received: by outflank-mailman (input) for mailman id 534825;
 Mon, 15 May 2023 19:11:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TE02=BE=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1pydbz-0008Bv-Mq
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 19:11:59 +0000
Received: from mail-yb1-xb2b.google.com (mail-yb1-xb2b.google.com
 [2607:f8b0:4864:20::b2b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5cbb3f47-f354-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 21:11:58 +0200 (CEST)
Received: by mail-yb1-xb2b.google.com with SMTP id
 3f1490d57ef6-b9daef8681fso11343711276.1
 for <xen-devel@lists.xenproject.org>; Mon, 15 May 2023 12:11:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5cbb3f47-f354-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684177917; x=1686769917;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lEdsDznoeQ0wGPMJWBs10mMvCj7VphDqPqR+uizklHY=;
        b=W/Z9qzmBUq3bCVhP8uewmPEIaZEVYfzpzeh9YTeAKrK3sWIwtOfql71JlvN0P+P5pK
         YKy9cml02fk4SnFQ2DdWQ6w4JijHunpNA4zsiud1GYDMnfuy8RTxF/bEehOFA+aPeByE
         DTSRG2bdMMqymbgvM1f+cbSlSYq+sQANl8uMoYLlmajJ3SyWJdGVyTNcYD0Ioakieviy
         BESs3L0iPHTKfiNNq75aO8FTJdn/BYay+E8sIKUXSqB49RgAR5VB2xxzW+nwEEKofO5F
         26ssX78nNr5JZdvfN8wyMBJyb6MRO06YlrKHWh/GpY8CaWKtewkcbGK5ZJkz6FL2Wiq9
         dAhA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684177917; x=1686769917;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=lEdsDznoeQ0wGPMJWBs10mMvCj7VphDqPqR+uizklHY=;
        b=kbRMhbxm4l6DDlJ0mEujGj2JvXwWm5w0PBBar109nZbOf0f4yVZOeDApCLgTZTwcdS
         BYeHAvk8aKNhCoLABge9JyPDKbgRI0iHmX88hyWajbG0RYjHVlT2i2XmapYeqkoEsftN
         6aHLXdKocuIOi5yHqgxy2Ga2CFTIPC6UatUneLU1MO6LTdFlS64HcOk9AKisyGTAa1Hd
         g1LK6y5B1igeWdSkC5fB/PXCyhtBnxYgO4TzXcZ4bgYGozivp1+MD2kyNRS3qCUXw790
         D8kQPDFtUShpYF/HeccrdJRrjFZltbwbyK2bfmGzB3D1QyueiAkSxPUL2EJwbJhZfcQC
         w3vQ==
X-Gm-Message-State: AC+VfDwufhP0+/ET6C+YJD3jpBe5O1E1Z8ba88wtCTkSmv1yZ+CaQZFX
	ASmcqT2L6JBzg7e3ZEtl2ci6Bx2/Nxk+iDj0Iow=
X-Google-Smtp-Source: ACHHUZ5oDsw20tcH12/01C4i2nT6w5iAb2H6/okJg55EYYpqTgJ8h9iiRdXz2tD9egoOuSwMxk4Rmyquuvv7u/aep3U=
X-Received: by 2002:a25:2586:0:b0:ba7:809c:50de with SMTP id
 l128-20020a252586000000b00ba7809c50demr6158133ybl.38.1684177917068; Mon, 15
 May 2023 12:11:57 -0700 (PDT)
MIME-Version: 1.0
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-31-vishal.moola@gmail.com> <c0677d21a4b6caa2e5018af000294a974121d9e8.camel@physik.fu-berlin.de>
In-Reply-To: <c0677d21a4b6caa2e5018af000294a974121d9e8.camel@physik.fu-berlin.de>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Mon, 15 May 2023 12:11:46 -0700
Message-ID: <CAOzc2pz6y=gRcdfkQVgwRuzWeWf2Nx-UBtKnZBTs2qKJ+r7R0Q@mail.gmail.com>
Subject: Re: [PATCH v2 30/34] sh: Convert pte_free_tlb() to use ptdescs
To: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org, 
	Yoshinori Sato <ysato@users.sourceforge.jp>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, May 6, 2023 at 4:35 AM John Paul Adrian Glaubitz
<glaubitz@physik.fu-berlin.de> wrote:
>
> Hi Vishal!
>
> On Mon, 2023-05-01 at 12:28 -0700, Vishal Moola (Oracle) wrote:
> > Part of the conversions to replace pgtable constructor/destructors with
> > ptdesc equivalents. Also cleans up some spacing issues.
> >
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> > ---
> >  arch/sh/include/asm/pgalloc.h | 9 +++++----
> >  1 file changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h
> > index a9e98233c4d4..ce2ba99dbd84 100644
> > --- a/arch/sh/include/asm/pgalloc.h
> > +++ b/arch/sh/include/asm/pgalloc.h
> > @@ -2,6 +2,7 @@
> >  #ifndef __ASM_SH_PGALLOC_H
> >  #define __ASM_SH_PGALLOC_H
> >
> > +#include <linux/mm.h>
> >  #include <asm/page.h>
> >
> >  #define __HAVE_ARCH_PMD_ALLOC_ONE
> > @@ -31,10 +32,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
> >       set_pmd(pmd, __pmd((unsigned long)page_address(pte)));
> >  }
> >
> > -#define __pte_free_tlb(tlb,pte,addr)                 \
> > -do {                                                 \
> > -     pgtable_pte_page_dtor(pte);                     \
> > -     tlb_remove_page((tlb), (pte));                  \
> > +#define __pte_free_tlb(tlb, pte, addr)                               \
> > +do {                                                         \
> > +     ptdesc_pte_dtor(page_ptdesc(pte));                      \
> > +     tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));      \
> >  } while (0)
> >
> >  #endif /* __ASM_SH_PGALLOC_H */
>
> Looking at the patch which introduces tlb_remove_page_ptdesc() [1], it seems that
> tlb_remove_page_ptdesc() already calls tlb_remove_page() with ptdesc_page(pt), so
> I'm not sure whether the above tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)))
> is correct.
>
> Shouldn't it just be tlb_remove_page_ptdesc((tlb), (pte))?

As of this patchset, all implementations of __pte_free_tlb() take in a
struct page. Eventually we'll want it to be tlb_remove_page_ptdesc(tlb, pte),
but for now the cast is necessary here.


From xen-devel-bounces@lists.xenproject.org Mon May 15 19:22:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 19:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534830.832216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pydmA-0001bL-4x; Mon, 15 May 2023 19:22:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534830.832216; Mon, 15 May 2023 19:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pydm9-0001bE-VY; Mon, 15 May 2023 19:22:29 +0000
Received: by outflank-mailman (input) for mailman id 534830;
 Mon, 15 May 2023 19:22:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pydm9-0001b3-5e; Mon, 15 May 2023 19:22:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pydm9-0000BF-1O; Mon, 15 May 2023 19:22:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pydm8-0004SK-Hn; Mon, 15 May 2023 19:22:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pydm8-0005nc-HI; Mon, 15 May 2023 19:22:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/A+SzHOKlteUuqMtinrem6MQhWdOVKZN5nif3o1VrFc=; b=BIPH4RX27OAICPLQl28uMcARyQ
	g7A/oeUlk77ULcMlDRERB7+XK64Y/QEh8a+GzZILyyiW9hCtbfD3eNzWcjLOCKgV3s/1zx2w+0kPA
	Zr8uF+163sLw2DBWDESxnNZNQKdaZIa2G46b30YAfKqs/E4ub3lBrYz66vnayRI5BWqQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180670-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180670: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 May 2023 19:22:28 +0000

flight 180670 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180670/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build   fail in 180668 REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 180668 pass in 180670
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 18 guest-localmigrate/x10 fail pass in 180668
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180668

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)           blocked in 180668 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 180668 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)           blocked in 180668 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 180668 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 180668 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 180668 n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)           blocked in 180668 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 180668 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 180668 n/a
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   28 days
Failing since        180281  2023-04-17 06:24:36 Z   28 days   53 attempts
Testing same since   180664  2023-05-14 20:12:01 Z    0 days    3 attempts

------------------------------------------------------------
2389 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 302499 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 15 20:49:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 20:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534840.832228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyf8L-0002qy-9K; Mon, 15 May 2023 20:49:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534840.832228; Mon, 15 May 2023 20:49:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyf8L-0002qr-6N; Mon, 15 May 2023 20:49:29 +0000
Received: by outflank-mailman (input) for mailman id 534840;
 Mon, 15 May 2023 20:49:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l5iH=BE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyf8K-0002ql-6c
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 20:49:28 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f928abda-f361-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 22:49:24 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 30973622B4;
 Mon, 15 May 2023 20:49:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DBFA6C433D2;
 Mon, 15 May 2023 20:49:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f928abda-f361-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684183762;
	bh=sQ/P2ULzjfN6VJdKvCGOHahGexxLVnAbbczGd7tIb4g=;
	h=From:To:Cc:Subject:Date:From;
	b=PHoTsjyVqEJ6xkCFgPhFCpW528/f6IZIKRYX1gquqi9O4evcYIbchSABLRDBsCBES
	 LSmMPBGJNgaM9G4w1aD9gxks7aSOdzbk1auCKHCLPpxyrjxXoL6vrhi1Hzn+MIwV73
	 tjJB/K5+ve8Tvy0OEvC6yiQx4lNup3Xry75+iMyLSb/AbBIQwPaJWDRsGzkj1VtmbR
	 QB+LnKvWfYeQYe2vwq182tJWtxsov2GB3no0jR6uBxV1SjsZunUHPj6dXBzqUDmwFh
	 UeYOmQS6hXtSSU5Yp1ANGZ18SphzLNIajaPQVMTohZyvh1PyVb4vml2B1e6p+QKiKC
	 0Jx0lJwrKXNOA==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: marmarek@invisiblethingslab.com,
	cardoe@cardoe.com,
	sstabellini@kernel.org,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: [PATCH v2] automation: add a Dom0 PVH test based on Qubes' runner
Date: Mon, 15 May 2023 13:49:19 -0700
Message-Id: <20230515204919.4174845-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@amd.com>

Straightforward Dom0 PVH test based on the existing basic Smoke test for
the Qubes runner.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
Changes in v2:
- rename dom0pvh to extra_xen_opts
---
 automation/gitlab-ci/test.yaml     |  8 ++++++++
 automation/scripts/qubes-x86-64.sh | 14 +++++++++-----
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 55ca0c27dc..9c0e08d456 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -149,6 +149,14 @@ adl-smoke-x86-64-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.12-gcc-debug
 
+adl-smoke-x86-64-dom0pvh-gcc-debug:
+  extends: .adl-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh dom0pvh 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
 adl-suspend-x86-64-gcc-debug:
   extends: .adl-x86-64
   script:
diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 056faf9e6d..4f17f1dd0b 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -5,6 +5,7 @@ set -ex
 test_variant=$1
 
 ### defaults
+extra_xen_opts=
 wait_and_wakeup=
 timeout=120
 domU_config='
@@ -18,8 +19,8 @@ vif = [ "bridge=xenbr0", ]
 disk = [ ]
 '
 
-### test: smoke test
-if [ -z "${test_variant}" ]; then
+### test: smoke test & smoke test PVH
+if [ -z "${test_variant}" ] || [ "${test_variant}" = "dom0pvh" ]; then
     passed="ping test passed"
     domU_check="
 ifconfig eth0 192.168.0.2
@@ -36,6 +37,9 @@ done
 tail -n 100 /var/log/xen/console/guest-domU.log
 echo \"${passed}\"
 "
+if [ "${test_variant}" = "dom0pvh" ]; then
+    extra_xen_opts="dom0=pvh"
+fi
 
 ### test: S3
 elif [ "${test_variant}" = "s3" ]; then
@@ -184,11 +188,11 @@ cd ..
 TFTP=/scratch/gitlab-runner/tftp
 CONTROLLER=control@thor.testnet
 
-echo '
-multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all dom0_mem=4G
+echo "
+multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all dom0_mem=4G $extra_xen_opts
 module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
 module2 (http)/gitlab-ci/initrd-dom0
-' > $TFTP/grub.cfg
+" > $TFTP/grub.cfg
 
 cp -f binaries/xen $TFTP/xen
 cp -f binaries/bzImage $TFTP/vmlinuz
-- 
2.25.1
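The last hunk of the patch above switches the grub.cfg echo from single to double quotes; this matters because single quotes suppress variable expansion, so $extra_xen_opts would otherwise be written into grub.cfg literally. A minimal sketch of the difference (the option value below is just an illustrative assumption):

```shell
#!/bin/sh
# Single vs. double quotes when generating a config line with echo.
extra_xen_opts="dom0=pvh"

single=$(echo 'multiboot2 xen $extra_xen_opts')   # no expansion inside ' '
double=$(echo "multiboot2 xen $extra_xen_opts")   # $extra_xen_opts expands

echo "$single"   # -> multiboot2 xen $extra_xen_opts
echo "$double"   # -> multiboot2 xen dom0=pvh
```

This is why the patch changes the quoting at the same time as it appends $extra_xen_opts to the Xen command line.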



From xen-devel-bounces@lists.xenproject.org Mon May 15 20:52:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 20:52:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534843.832237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyfBZ-0004Ft-NJ; Mon, 15 May 2023 20:52:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534843.832237; Mon, 15 May 2023 20:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyfBZ-0004Fm-KY; Mon, 15 May 2023 20:52:49 +0000
Received: by outflank-mailman (input) for mailman id 534843;
 Mon, 15 May 2023 20:52:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l5iH=BE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyfBY-0004Fe-J5
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 20:52:48 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 716a1ede-f362-11ed-8611-37d641c3527e;
 Mon, 15 May 2023 22:52:46 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 154D562581;
 Mon, 15 May 2023 20:52:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 847BCC433EF;
 Mon, 15 May 2023 20:52:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 716a1ede-f362-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684183964;
	bh=vFlYQqO3R9fOaZp2LVY9Z+RN0/K9dfAVdbzmSfTgbw4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nvcNWasF6UzLB0DOBemuVmZcHn90y1zST/K20ymRDNVcPV5Emznx3lE8kjUEa1XY+
	 7SBWvkNJVnq+lpegKAG/wfU29l25mLm385iO1Ty/pJGKXTOwy02QXtkl+OZnC6iCwT
	 DpVMxJinh7MnOOSlFSg3J2YBMpR9p3wInxQbPRB0ZqLkpJ3ZsVvjwnDNV5YQgmZb6d
	 wAukcTbFLNw0NN12BOxkdG9mutU3H+0GtraPDgj50KeaVHX8c4aO48+ZdPGGye91V4
	 KV8RfJ/6EJj/DvB2E8raHNCYe8gDs+VulVgfr4kKGWAzDd8RZTEZGbyHsLYJNab09g
	 INdgxJie+4+Tw==
Date: Mon, 15 May 2023 13:52:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Bernhard Beschow <shentey@gmail.com>, mst@redhat.com
cc: qemu-devel@nongnu.org, Richard Henderson <richard.henderson@linaro.org>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    David Woodhouse <dwmw@amazon.co.uk>, Eduardo Habkost <eduardo@habkost.net>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Chuck Zmudzinski <brchuckz@aol.com>, Aurelien Jarno <aurelien@aurel32.net>, 
    Hervé Poussineau <hpoussin@reactos.org>, 
    Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>, 
    Philippe Mathieu-Daudé <philmd@linaro.org>, 
    Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 0/7] Resolve TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <9EB9A984-61E5-4226-8352-B5DDC6E2C62E@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2305151350180.4125828@ubuntu-linux-20-04-desktop>
References: <20230403074124.3925-1-shentey@gmail.com> <20230421033757-mutt-send-email-mst@kernel.org> <9EB9A984-61E5-4226-8352-B5DDC6E2C62E@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 13 May 2023, Bernhard Beschow wrote:
> On 21 April 2023 07:38:10 UTC, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> >On Mon, Apr 03, 2023 at 09:41:17AM +0200, Bernhard Beschow wrote:
> >> There is currently a dedicated PIIX3 device model for use under Xen. By reusing
> >> existing PCI API during initialization this device model can be eliminated and
> >> the plain PIIX3 device model can be used instead.
> >> 
> >> Resolving TYPE_PIIX3_XEN_DEVICE results in less code while also making Xen
> >> agnostic towards the precise south bridge being used in the PC machine. The
> >> latter might become particularly interesting once PIIX4 becomes usable in the
> >> PC machine, avoiding the "Frankenstein" use of PIIX4_ACPI in PIIX3.
> >
> >Xen stuff, so I assume it goes through that tree?
> 
> Ping

I am OK either way. Michael, what do you prefer?

Normally I would suggest that you pick up the patches. But as it
happens, I will likely have to send another pull request in a week or
two, and I can add these patches to it.

Let me know your preference and I am happy to follow it.


> >
> >> Testing done:
> >> - `make check`
> >> - Run `xl create` with the following config:
> >>     name = "Manjaro"
> >>     type = 'hvm'
> >>     memory = 1536
> >>     apic = 1
> >>     usb = 1
> >>     disk = [ "file:manjaro-kde-21.2.6-220416-linux515.iso,hdc:cdrom,r" ]
> >>     device_model_override = "/usr/bin/qemu-system-x86_64"
> >>     vga = "stdvga"
> >>     sdl = 1
> >> - `qemu-system-x86_64 -M pc -m 2G -cpu host -accel kvm \
> >>     -cdrom manjaro-kde-21.2.6-220416-linux515.iso`
> >> 
> >> v4:
> >> - Add patch fixing latent memory leak in pci_bus_irqs() (Anthony)
> >> 
> >> v3:
> >> - Rebase onto master
> >> 
> >> v2:
> >> - xen_piix3_set_irq() is already generic. Just rename it. (Chuck)
> >> 
> >> Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
> >> 
> >> Bernhard Beschow (7):
> >>   include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
> >>   hw/pci/pci.c: Don't leak PCIBus::irq_count[] in pci_bus_irqs()
> >>   hw/isa/piix3: Reuse piix3_realize() in piix3_xen_realize()
> >>   hw/isa/piix3: Wire up Xen PCI IRQ handling outside of PIIX3
> >>   hw/isa/piix3: Avoid Xen-specific variant of piix3_write_config()
> >>   hw/isa/piix3: Resolve redundant k->config_write assignments
> >>   hw/isa/piix3: Resolve redundant TYPE_PIIX3_XEN_DEVICE
> >> 
> >>  include/hw/southbridge/piix.h |  1 -
> >>  include/hw/xen/xen.h          |  2 +-
> >>  hw/i386/pc_piix.c             | 36 +++++++++++++++++++--
> >>  hw/i386/xen/xen-hvm.c         |  2 +-
> >>  hw/isa/piix3.c                | 60 +----------------------------------
> >>  hw/pci/pci.c                  |  2 ++
> >>  stubs/xen-hw-stub.c           |  2 +-
> >>  7 files changed, 39 insertions(+), 66 deletions(-)
> >> 
> >> -- 
> >> 2.40.0
> >> 
> >
> 
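[Editorial note: the "Testing done" steps quoted above might be reproduced roughly as below. The file name `manjaro.cfg` and the final `xl create` invocation are illustrative additions, not part of the original mail; `xl` itself needs a Xen dom0 to actually run, so it is left commented out.]

```shell
# Write the quoted xl guest config to a file, then (on a Xen host) boot it.
cat > manjaro.cfg <<'EOF'
name = "Manjaro"
type = 'hvm'
memory = 1536
apic = 1
usb = 1
disk = [ "file:manjaro-kde-21.2.6-220416-linux515.iso,hdc:cdrom,r" ]
device_model_override = "/usr/bin/qemu-system-x86_64"
vga = "stdvga"
sdl = 1
EOF

# Sanity check: all nine key = value assignments made it into the file.
grep -c '=' manjaro.cfg   # prints 9

# xl create manjaro.cfg   # requires a Xen dom0; commented out here
```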


From xen-devel-bounces@lists.xenproject.org Mon May 15 21:19:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 21:19:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534848.832248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyfbF-0006qS-Pz; Mon, 15 May 2023 21:19:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534848.832248; Mon, 15 May 2023 21:19:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyfbF-0006qL-MF; Mon, 15 May 2023 21:19:21 +0000
Received: by outflank-mailman (input) for mailman id 534848;
 Mon, 15 May 2023 21:19:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u/nm=BE=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pyfbD-0006qD-VY
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 21:19:20 +0000
Received: from wout1-smtp.messagingengine.com (wout1-smtp.messagingengine.com
 [64.147.123.24]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2582250d-f366-11ed-b229-6b7b168915f2;
 Mon, 15 May 2023 23:19:18 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id 247503200A27;
 Mon, 15 May 2023 17:19:14 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 15 May 2023 17:19:14 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 15 May 2023 17:19:12 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2582250d-f366-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1684185553; x=1684271953; bh=OgtNy6SeWKcYOssXQEa/LBl3yzyAva65px6
	mmQPjocE=; b=Sc/MIHBOnZk5wjA3qVp5qucV3/yh7pXMevbDZSdn2On2j1JO6oO
	+t7IDBxe/FJJzOqFN9zWsgLqWcC+cu1MLWTQp0J8XzeLQLOrU3/siqG2BFYKjf/t
	lSCL6JWhT/PhbQzngBfBT1P0VmhypH9ZeqYHYgyyuYUYst3wAQFN476ns6WFCWO4
	wQJK+pe4/svWfVJORzkrlG+Ns9LlHeibNzyjvSZsZ03A9d6CmpjjgR4dUmw38A1j
	GtROIhV5D7UcEDhWT81sHYlVjrkvXz07jFK6OYSCdxFDI840G7wQIDlwTNj/PQmB
	oIXa84YMAFUPbXUQbqNYVsitdhjh2X2i7DA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1684185553; x=1684271953; bh=OgtNy6SeWKcYO
	ssXQEa/LBl3yzyAva65px6mmQPjocE=; b=kqriT+enfIi3UVirdTqw6mUuWYqpO
	SqI5PS4Ve1v+M8G9MaSJQGyf+MABxod9wjvWh2qtvDYlhZyaXgoKThkW0eFbOSU8
	0uyb9Fm3Kadfef4fXFhdl8pjxFZYNmYAIZ6DXUcTceVfReTKfA234e8NBAYFeGCS
	rpOQybGTOG77x8d6D9m8+cO/FY5H5CNvz3tj58SvsMf/kfarEX2MvxqPgzOs490r
	1QW/vEZoSZ+4QfUo0UlkS2SLOjKgHgIcLtUJ+tHx+f7AaOXhZ5HVCbF1nd9BYR7a
	CTiJQCWZGsLdLYHmG9qLSI89BwGpbvvJEQmcvhbBmHf1spPVix6pAqvbA==
X-ME-Sender: <xms:0aFiZHgYo7ubm-52Uf27Ue1HmToS2AFAi524QkHnEP-gVHA5kyk-fA>
    <xme:0aFiZEBXA1PAKgIFi5sFAN-HJUrqONyfqnvThqc-bjZDRwl2IwIjpwNAoji53hqv0
    TnFnUnE7kBJzg>
X-ME-Received: <xmr:0aFiZHH-Lz-VQA228-IqtN6pWRPkF-6nwUwo3SEy46ScgY8w_D9bK3EFgSbDI0KoTQHvljHOGcFU3YH53dWlBI1z3ShBbaxPo1c>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeehjedgudeivdcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    udelteefvefhfeehieetleeihfejhfeludevteetkeevtedtvdegueetfeejudenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:0aFiZERIGNlv4pRDLEoibSNn2YmcjuS-XLs8MFvxbQ6nuaHZLEGbyw>
    <xmx:0aFiZEyhNxP5Shc3c7tRIP9-yx39tVSV1y3ZUWRCP8mlHeB6Xh37-A>
    <xmx:0aFiZK6RkEQA_syw3GD5u9MXy4xhxwlkEQlk7B_qNlOoofv7aO87Mg>
    <xmx:0aFiZBqYLtqnXOl2Ba_prG06wSurB2FyyJjOLvtaYukwMUftZzD3fw>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 15 May 2023 23:19:09 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, cardoe@cardoe.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH v2] automation: add a Dom0 PVH test based on Qubes' runner
Message-ID: <ZGKhzYPupmzjH/h+@mail-itl>
References: <20230515204919.4174845-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="GK4DG6VUkYPb4lb9"
Content-Disposition: inline
In-Reply-To: <20230515204919.4174845-1-sstabellini@kernel.org>


--GK4DG6VUkYPb4lb9
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 15 May 2023 23:19:09 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, cardoe@cardoe.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH v2] automation: add a Dom0 PVH test based on Qubes' runner

On Mon, May 15, 2023 at 01:49:19PM -0700, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
>=20
> Straightforward Dom0 PVH test based on the existing basic Smoke test for
> the Qubes runner.
>=20
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>

Acked-by: Marek Marczykowski-G=C3=B3recki <marmarek@invisiblethingslab.com>

> ---
> Changes in v2:
> - rename dom0pvh to extra_xen_opts
> ---
>  automation/gitlab-ci/test.yaml     |  8 ++++++++
>  automation/scripts/qubes-x86-64.sh | 14 +++++++++-----
>  2 files changed, 17 insertions(+), 5 deletions(-)
>=20
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.y=
aml
> index 55ca0c27dc..9c0e08d456 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -149,6 +149,14 @@ adl-smoke-x86-64-gcc-debug:
>      - *x86-64-test-needs
>      - alpine-3.12-gcc-debug
> =20
> +adl-smoke-x86-64-dom0pvh-gcc-debug:
> +  extends: .adl-x86-64
> +  script:
> +    - ./automation/scripts/qubes-x86-64.sh dom0pvh 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *x86-64-test-needs
> +    - alpine-3.12-gcc-debug
> +
>  adl-suspend-x86-64-gcc-debug:
>    extends: .adl-x86-64
>    script:
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qube=
s-x86-64.sh
> index 056faf9e6d..4f17f1dd0b 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -5,6 +5,7 @@ set -ex
>  test_variant=3D$1
> =20
>  ### defaults
> +extra_xen_opts=3D
>  wait_and_wakeup=3D
>  timeout=3D120
>  domU_config=3D'
> @@ -18,8 +19,8 @@ vif =3D [ "bridge=3Dxenbr0", ]
>  disk =3D [ ]
>  '
> =20
> -### test: smoke test
> -if [ -z "${test_variant}" ]; then
> +### test: smoke test & smoke test PVH
> +if [ -z "${test_variant}" ] || [ "${test_variant}" =3D "dom0pvh" ]; then
>      passed=3D"ping test passed"
>      domU_check=3D"
>  ifconfig eth0 192.168.0.2
> @@ -36,6 +37,9 @@ done
>  tail -n 100 /var/log/xen/console/guest-domU.log
>  echo \"${passed}\"
>  "
> +if [ "${test_variant}" =3D "dom0pvh" ]; then
> +    extra_xen_opts=3D"dom0=3Dpvh"
> +fi
> =20
>  ### test: S3
>  elif [ "${test_variant}" =3D "s3" ]; then
> @@ -184,11 +188,11 @@ cd ..
>  TFTP=3D/scratch/gitlab-runner/tftp
>  CONTROLLER=3Dcontrol@thor.testnet
> =20
> -echo '
> -multiboot2 (http)/gitlab-ci/xen console=3Dcom1 com1=3D115200,8n1 loglvl=
=3Dall guest_loglvl=3Dall dom0_mem=3D4G
> +echo "
> +multiboot2 (http)/gitlab-ci/xen console=3Dcom1 com1=3D115200,8n1 loglvl=
=3Dall guest_loglvl=3Dall dom0_mem=3D4G $extra_xen_opts
>  module2 (http)/gitlab-ci/vmlinuz console=3Dhvc0 root=3D/dev/ram0
>  module2 (http)/gitlab-ci/initrd-dom0
> -' > $TFTP/grub.cfg
> +" > $TFTP/grub.cfg
> =20
>  cp -f binaries/xen $TFTP/xen
>  cp -f binaries/bzImage $TFTP/vmlinuz
> --=20
> 2.25.1
>=20

--=20
Best Regards,
Marek Marczykowski-G=C3=B3recki
Invisible Things Lab

--GK4DG6VUkYPb4lb9
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRioc4ACgkQ24/THMrX
1yzSagf+MiMMMa7ap6mPghOsDZHxNpyYTiSQ4rMRjBodAdWPmXt3WtvNeSWbJDQP
BDpQLD+wv5Wzzr6KRcCbm3uV9hx43tT0mGsSHXO/l/ILR3yPVPSbPEFUyBGFZjJJ
55zOIsk2O3e3BIGnyIRCn55uNZ/XWT/kz47lirE7gYboLToQ3wyNsej9GLu9RpxT
Csn0O8jPi1QfmGL7U5Bw5WlkgQftJoxGT86PTFxF4unXr+czrGYchKg9UevIzs7t
6smuoWVZDR1SJmgOhZB10KEhSWwCwQDC65OqoudouVezmptiHFCyApwB8/jfgDtT
97lQL1oBI7aGe6yUbtjN8UDfUJmkXQ==
=WG9b
-----END PGP SIGNATURE-----

--GK4DG6VUkYPb4lb9--


From xen-devel-bounces@lists.xenproject.org Mon May 15 22:21:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 22:21:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534857.832257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pygZP-0005gY-Fv; Mon, 15 May 2023 22:21:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534857.832257; Mon, 15 May 2023 22:21:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pygZP-0005gR-Cc; Mon, 15 May 2023 22:21:31 +0000
Received: by outflank-mailman (input) for mailman id 534857;
 Mon, 15 May 2023 22:21:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l5iH=BE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pygZO-0005gJ-46
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 22:21:30 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d3cbc712-f36e-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 00:21:25 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E6D9B623F6;
 Mon, 15 May 2023 22:21:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BB68EC433EF;
 Mon, 15 May 2023 22:21:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3cbc712-f36e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684189283;
	bh=l2rGpzaxcY3qDYucSpAYy4CV3h6TeMhP0McvT7p5qj8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=jHiCXCx3GSBItjIPKcAjV2WHKL0ffGR5TY6SNu1lJWYXTmUAxJJD4I+CIIwBq+Ey7
	 nWrmQRAQrFljWHuUsV7ymFwDfFGKt2pUDoQ90APMVCV4yK0w6I1NgIp5OrZnJYE9VR
	 lTaf5OSkBHJ1g26s5b5oXOkaKKKwfPhBDeAY0DOSwBbICaufinEc9L4zXSQ0zr8czD
	 F1vfjgQaQ6szthyuoEJe0ggMQpdFSvyktRJBFnjYjlECPn97Xijc1Hnr3FRcvSOkTI
	 Jk+GhzAb8wT6C31Nu2IkoByG791eLG9Cqg0MwccYZ9dVI4psa/ANiu4SGv7b0MF/0m
	 hegE738pf2pFg==
Date: Mon, 15 May 2023 15:21:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 1/4] automation: make console options configurable via
 variables
In-Reply-To: <e0504797d1b3758c035cd82b2dc3b00d747ddcc8.1683943670.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2305151441550.4125828@ubuntu-linux-20-04-desktop>
References: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com> <e0504797d1b3758c035cd82b2dc3b00d747ddcc8.1683943670.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-7102064-1684186921=:4125828"
Content-ID: <alpine.DEB.2.22.394.2305151521180.4125828@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-7102064-1684186921=:4125828
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305151521181.4125828@ubuntu-linux-20-04-desktop>

On Sat, 13 May 2023, Marek Marczykowski-Górecki wrote:
> This makes the test script easier to reuse for different runners, where
> the console may be connected differently. Include both the console= option
> and the configuration for the specific chosen console (like com1= here) in
> the 'CONSOLE_OPTS' variable.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> This will conflict with Stefano's patch, as both modify the multiboot2
> line, but it shouldn't be too hard to resolve the conflict manually (one
> replaces the console opts with a variable, the other adds extra opts at
> the end).
> ---
>  automation/gitlab-ci/test.yaml     | 1 +
>  automation/scripts/qubes-x86-64.sh | 6 +++---
>  2 files changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index 55ca0c27dc49..cb7fd5c272e9 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -96,6 +96,7 @@
>      LOGFILE: smoke-test.log
>      PCIDEV: "03:00.0"
>      PCIDEV_INTR: "MSI-X"
> +    CONSOLE_OPTS: "console=com1 com1=115200,8n1"
>    artifacts:
>      paths:
>        - smoke.serial
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> index 056faf9e6de8..ae766395d184 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -184,11 +184,11 @@ cd ..
>  TFTP=/scratch/gitlab-runner/tftp
>  CONTROLLER=control@thor.testnet
>  
> -echo '
> -multiboot2 (http)/gitlab-ci/xen console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all dom0_mem=4G
> +echo "
> +multiboot2 (http)/gitlab-ci/xen $CONSOLE_OPTS loglvl=all guest_loglvl=all dom0_mem=4G
>  module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
>  module2 (http)/gitlab-ci/initrd-dom0
> -' > $TFTP/grub.cfg
> +" > $TFTP/grub.cfg
>  
>  cp -f binaries/xen $TFTP/xen
>  cp -f binaries/bzImage $TFTP/vmlinuz
> -- 
> git-series 0.9.1
> 
--8323329-7102064-1684186921=:4125828--
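[Editorial note: the conflict-resolution remark in the mail above suggests the merged multiboot2 line would simply carry both changes: `$CONSOLE_OPTS` from this series and `$extra_xen_opts` from Stefano's dom0 PVH patch. A minimal sketch of that plausible manual resolution, with the variable values taken from the two quoted patches; the resolution itself is an assumption, not something either patch contains verbatim:]

```shell
# Values as set in the two quoted patches (hw2 console opts, dom0pvh variant).
CONSOLE_OPTS="console=com1 com1=115200,8n1"
extra_xen_opts="dom0=pvh"

# Merged grub.cfg: console opts replaced by a variable AND extra opts appended.
cat > grub.cfg <<EOF
multiboot2 (http)/gitlab-ci/xen $CONSOLE_OPTS loglvl=all guest_loglvl=all dom0_mem=4G $extra_xen_opts
module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
module2 (http)/gitlab-ci/initrd-dom0
EOF

grep multiboot2 grub.cfg   # shows the merged Xen command line
```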


From xen-devel-bounces@lists.xenproject.org Mon May 15 22:21:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 22:21:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534858.832268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pygZU-0005vz-MB; Mon, 15 May 2023 22:21:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534858.832268; Mon, 15 May 2023 22:21:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pygZU-0005vr-JO; Mon, 15 May 2023 22:21:36 +0000
Received: by outflank-mailman (input) for mailman id 534858;
 Mon, 15 May 2023 22:21:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l5iH=BE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pygZT-0005gJ-O0
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 22:21:35 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d91e955e-f36e-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 00:21:34 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 086F060EA2;
 Mon, 15 May 2023 22:21:33 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CB4B9C433D2;
 Mon, 15 May 2023 22:21:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d91e955e-f36e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684189292;
	bh=a4xUeaSWwpxYCw0nSoScWM3lYFDEh6TK0oocKUuFlCc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=l9KB/gTLnaLw0kD5zDf5t3LH8iS+jFEmgbt3zgDEX0HcadgD8naFbjCkv6mQk+6pD
	 Z9WfCbt29c8ciKjMPWTPihhBNOk/UqIqZRKFqsf9HCS2Z+Tmk5ZuIpyyFdpWNGd/AZ
	 qmUuq8fhoz9i5Shf5RiGZTCAO0Zx1pFPfrr9cjynrztwHxKrxeZFMBGiy4o+bw9U1V
	 xX3RfJYNZ/M5zixpivo6/P8faArGAmzxWxLf019nlNTUyrqTcJexQOeZqXCZlxmlTR
	 YHbjClAc+9TU7atKlBpCCx1lFQCEfBPPOIQBfnngk/NEeHsMl3gjqW1QzhJrMMtoLB
	 oummFdvOKbNIw==
Date: Mon, 15 May 2023 15:21:30 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 2/4] automation: enable earlyprintk=xen for both dom0
 and domU in hw tests
In-Reply-To: <7247aca99f5faf35ff1c6efd048a10c08883bc41.1683943670.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2305151443230.4125828@ubuntu-linux-20-04-desktop>
References: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com> <7247aca99f5faf35ff1c6efd048a10c08883bc41.1683943670.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1248633201-1684187008=:4125828"
Content-ID: <alpine.DEB.2.22.394.2305151521260.4125828@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1248633201-1684187008=:4125828
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305151521261.4125828@ubuntu-linux-20-04-desktop>

On Sat, 13 May 2023, Marek Marczykowski-Górecki wrote:
> Make debugging early boot failures easier based on the CI logs alone.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/scripts/qubes-x86-64.sh | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
> index ae766395d184..bd09451d7d28 100755
> --- a/automation/scripts/qubes-x86-64.sh
> +++ b/automation/scripts/qubes-x86-64.sh
> @@ -80,7 +80,7 @@ type = "'${test_variant#pci-}'"
>  name = "domU"
>  kernel = "/boot/vmlinuz"
>  ramdisk = "/boot/initrd-domU"
> -extra = "root=/dev/ram0 console=hvc0"
> +extra = "root=/dev/ram0 console=hvc0 earlyprintk=xen"
>  memory = 512
>  vif = [ ]
>  disk = [ ]
> @@ -186,7 +186,7 @@ CONTROLLER=control@thor.testnet
>  
>  echo "
>  multiboot2 (http)/gitlab-ci/xen $CONSOLE_OPTS loglvl=all guest_loglvl=all dom0_mem=4G
> -module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0
> +module2 (http)/gitlab-ci/vmlinuz console=hvc0 root=/dev/ram0 earlyprintk=xen
>  module2 (http)/gitlab-ci/initrd-dom0
>  " > $TFTP/grub.cfg
>  
> -- 
> git-series 0.9.1
> 
--8323329-1248633201-1684187008=:4125828--


From xen-devel-bounces@lists.xenproject.org Mon May 15 22:21:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 22:21:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534859.832277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pygZd-0006G2-Tl; Mon, 15 May 2023 22:21:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534859.832277; Mon, 15 May 2023 22:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pygZd-0006Fp-QW; Mon, 15 May 2023 22:21:45 +0000
Received: by outflank-mailman (input) for mailman id 534859;
 Mon, 15 May 2023 22:21:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l5iH=BE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pygZc-0005gJ-Jm
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 22:21:44 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id de45e670-f36e-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 00:21:42 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A617A60EA2;
 Mon, 15 May 2023 22:21:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7593DC433D2;
 Mon, 15 May 2023 22:21:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de45e670-f36e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684189301;
	bh=YQ37CjVnde7PRWWVSFrs4uX2zVngO5Ysg5rVND4UD7g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=BaiGGzHhDq5LBXFv8qFs1nqKCsPSwvt7nEWk/+eYdQcp2JLVmhRr1x43YLR6yBoLJ
	 WwXho5BntOB8P65V0cqYMPvsMnrhWUfmVEOzvj/Sg3LFvds9LvGO5V+uXSAKuqQrEe
	 76XD5+L2RdUbBadG8vQHpbxR7hkjtMh1025VJLsZJboPCujQdO76tnBSrVF2wiMZ8J
	 G1zmqhe/wFyF3kIiS/FPZzNmronX358lVdsMfvVTVIl9d3GhkguUB5EZyg892zX+c2
	 j99csCzkW9wXCagDEHj5gaMHd5r+ht33ioU+PPuMpCjO/qNkSYvtDWHG77zQ2NhAp+
	 ocCPfj9T/vzEQ==
Date: Mon, 15 May 2023 15:21:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 3/4] automation: add x86_64 tests on a AMD Zen3+ runner
In-Reply-To: <741648760682e3097a0d984342e5cad9387172cf.1683943670.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2305151444310.4125828@ubuntu-linux-20-04-desktop>
References: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com> <741648760682e3097a0d984342e5cad9387172cf.1683943670.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-679932068-1684187077=:4125828"
Content-ID: <alpine.DEB.2.22.394.2305151506450.4125828@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-679932068-1684187077=:4125828
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305151506451.4125828@ubuntu-linux-20-04-desktop>

On Sat, 13 May 2023, Marek Marczykowski-Górecki wrote:
> This adds another physical runner to Gitlab-CI, running a similar set
> of jobs to the Alder Lake one.
> 
> The machine specifically is a
> MinisForum UM773 Lite with an AMD Ryzen 7 7735HS.
> 
> The PV passthrough test is skipped as currently it fails on this system
> with:
> (d1) Can't find new memory area for initrd needed due to E820 map conflict
> 
> The S3 test is skipped as it currently fails - the system seems to
> suspend properly (power LED blinks), but when woken up the power LED
> goes back to solid on and the fan spins at top speed, yet otherwise
> there are no signs of life from the system (no output on the console,
> HDMI or anything else).
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/gitlab-ci/test.yaml | 26 ++++++++++++++++++++++++++
>  1 file changed, 26 insertions(+)
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index cb7fd5c272e9..81d027532cca 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -108,6 +108,16 @@
>    tags:
>      - qubes-hw2
>  
> +.zen3p-x86-64:
> +  # it's really similar to the above
> +  extends: .adl-x86-64
> +  variables:
> +    PCIDEV: "01:00.0"
> +    PCIDEV_INTR: "MSI-X"
> +    CONSOLE_OPTS: "console=com1 com1=115200,8n1,pci,msi"
> +  tags:
> +    - qubes-hw11
> +
>  # Test jobs
>  build-each-commit-gcc:
>    extends: .test-jobs-common
> @@ -176,6 +186,22 @@ adl-pci-hvm-x86-64-gcc-debug:
>      - *x86-64-test-needs
>      - alpine-3.12-gcc-debug
>  
> +zen3p-smoke-x86-64-gcc-debug:
> +  extends: .zen3p-x86-64
> +  script:
> +    - ./automation/scripts/qubes-x86-64.sh 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *x86-64-test-needs
> +    - alpine-3.12-gcc-debug
> +
> +zen3p-pci-hvm-x86-64-gcc-debug:
> +  extends: .zen3p-x86-64
> +  script:
> +    - ./automation/scripts/qubes-x86-64.sh pci-hvm 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *x86-64-test-needs
> +    - alpine-3.12-gcc-debug
> +
>  qemu-smoke-dom0-arm64-gcc:
>    extends: .qemu-arm64
>    script:
> -- 
> git-series 0.9.1
> 
--8323329-679932068-1684187077=:4125828--


From xen-devel-bounces@lists.xenproject.org Mon May 15 22:23:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 22:23:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534866.832287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pygbk-0007Hu-8I; Mon, 15 May 2023 22:23:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534866.832287; Mon, 15 May 2023 22:23:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pygbk-0007Hn-5f; Mon, 15 May 2023 22:23:56 +0000
Received: by outflank-mailman (input) for mailman id 534866;
 Mon, 15 May 2023 22:23:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l5iH=BE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pygbj-0007Hh-7o
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 22:23:55 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2c9d4170-f36f-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 00:23:54 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 1731D63349;
 Mon, 15 May 2023 22:23:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E1F67C433EF;
 Mon, 15 May 2023 22:23:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c9d4170-f36f-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684189432;
	bh=7J+TjIjrqNFbQasiJrkzd+Aguwh6/1UXZj14kHSqUOg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=LsnnI8r25QRFogAkE7TcZruWhqXqkI8ARK+uv4SlrlChZ9QiM+BqlkN8ZM8S3Q9i0
	 C+D7jrDJVFcz7XZ7Nbn76OTVQeb/5RaXyo2QsxjVbT7uwJJxhl9BW4nsLJw9rEY9io
	 3TCBthC72dc1ii/5FFogqKsSGZ3obLgj1XW+zzUW7dTumRPk0uGjpeAurZqaIs79xu
	 4vkJ2Sin3qXTApuvYYiEj6dj+8v1xuEBqpI2of2TzhrMpV15FxaDySPn48P/RKl47B
	 XyKLk+9ncPefzK7la7+nHfZuBlgtQTFWW4cSVtdl0I+B75rUTlTp+wWW0912TBwvye
	 NVBimlxRVEIkw==
Date: Mon, 15 May 2023 15:23:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 4/4] automation: add PV passthrough tests on a AMD Zen3+
 runner
In-Reply-To: <de2a2841e44f44eb7dd56c0b9a2c27fe041051e9.1683943670.git-series.marmarek@invisiblethingslab.com>
Message-ID: <alpine.DEB.2.22.394.2305151444590.4125828@ubuntu-linux-20-04-desktop>
References: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com> <de2a2841e44f44eb7dd56c0b9a2c27fe041051e9.1683943670.git-series.marmarek@invisiblethingslab.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-168031179-1684187109=:4125828"
Content-ID: <alpine.DEB.2.22.394.2305151507050.4125828@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-168031179-1684187109=:4125828
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305151507051.4125828@ubuntu-linux-20-04-desktop>

On Sat, 13 May 2023, Marek Marczykowski-Górecki wrote:
> The PV passthrough test currently fails on this system
> with:
> (d1) Can't find new memory area for initrd needed due to E820 map conflict
> 
> Setting e820_host=1 does not help. So, add this test with
> "allow_failure: true" until the problem is fixed.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> I'm unsure if this should be included. On one hand, the test case will
> help verify a potential fix. But on the other hand, until the problem is
> fixed it will just waste time.

I am not sure about this one either. I committed the other patches. I'll
give it a few days for others to comment on this one.


> ---
>  automation/gitlab-ci/test.yaml |  9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
> index 81d027532cca..7becb7a6b782 100644
> --- a/automation/gitlab-ci/test.yaml
> +++ b/automation/gitlab-ci/test.yaml
> @@ -194,6 +194,15 @@ zen3p-smoke-x86-64-gcc-debug:
>      - *x86-64-test-needs
>      - alpine-3.12-gcc-debug
>  
> +zen3p-pci-pv-x86-64-gcc-debug:
> +  extends: .zen3p-x86-64
> +  allow_failure: true
> +  script:
> +    - ./automation/scripts/qubes-x86-64.sh pci-pv 2>&1 | tee ${LOGFILE}
> +  needs:
> +    - *x86-64-test-needs
> +    - alpine-3.12-gcc-debug
> +
>  zen3p-pci-hvm-x86-64-gcc-debug:
>    extends: .zen3p-x86-64
>    script:
> -- 
> git-series 0.9.1
> 
--8323329-168031179-1684187109=:4125828--


From xen-devel-bounces@lists.xenproject.org Mon May 15 22:54:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 22:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534869.832298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyh5A-0002LO-Is; Mon, 15 May 2023 22:54:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534869.832298; Mon, 15 May 2023 22:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyh5A-0002LH-FL; Mon, 15 May 2023 22:54:20 +0000
Received: by outflank-mailman (input) for mailman id 534869;
 Mon, 15 May 2023 22:54:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyh58-0002L7-Kk; Mon, 15 May 2023 22:54:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyh58-00051P-IX; Mon, 15 May 2023 22:54:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyh58-00086e-2Y; Mon, 15 May 2023 22:54:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyh58-0007zy-1z; Mon, 15 May 2023 22:54:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BQzLrKe+5wgybJs2tLA3/wUhObgfZdlyfDNI0tu8h58=; b=HfM7dbYKOQFOxuUtZvurgrbfFh
	MUkZFfKwtx80lxkmjvRGI15u8Dh+6OGJDAx+AEcHqI4KxY2S20A50VjVzvB8X6fm0kroSGXoSiZCu
	orEAUAyyIo40LX0cABJAiBn/wz9for0u8Mjn4sulrpJk+gntqzWdJzBzy69rV4Z2b+m4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180671-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180671: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b8be19ce432a2edd69c0673768a0beeec77f795a
X-Osstest-Versions-That:
    xen=4c507d8a6b6e8be90881a335b0a66eb28e0f7737
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 May 2023 22:54:18 +0000

flight 180671 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180671/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180660
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180666
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180666
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180666
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180666
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180666
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180666
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180666
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180666
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180666
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180666
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180666
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  b8be19ce432a2edd69c0673768a0beeec77f795a
baseline version:
 xen                  4c507d8a6b6e8be90881a335b0a66eb28e0f7737

Last test of basis   180666  2023-05-15 01:52:07 Z    0 days
Testing same since   180671  2023-05-15 13:38:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4c507d8a6b..b8be19ce43  b8be19ce432a2edd69c0673768a0beeec77f795a -> master


From xen-devel-bounces@lists.xenproject.org Mon May 15 23:02:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 23:02:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534880.832324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyhCY-00046k-LJ; Mon, 15 May 2023 23:01:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534880.832324; Mon, 15 May 2023 23:01:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyhCY-00046d-IF; Mon, 15 May 2023 23:01:58 +0000
Received: by outflank-mailman (input) for mailman id 534880;
 Mon, 15 May 2023 23:01:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l5iH=BE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyhCX-00046V-Dp
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 23:01:57 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c11c070-f374-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 01:01:54 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id BA01F62379;
 Mon, 15 May 2023 23:01:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8F811C433EF;
 Mon, 15 May 2023 23:01:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c11c070-f374-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684191713;
	bh=7TItLIrWdztTahWmoph0tQ34Qa9LgDLdmD00FD2ViGs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XQCiWQQqw8De6Ed3H10al8UcfysybMEzABBJpGRmwaA3OKGedn3hVdDfIKUFd/e9t
	 3mJ2dbS2nETzWzShfqj6wih7OJPwm0lTB2dyKO9ob3+8TvsVr4k0o634QRI/83XUh9
	 ZQX4Kf27lVUI5pfIyUjEu7iBUwYHH6vaztBV4h4kXFjqXKZbXY9CmYS+vNQLYELBt2
	 YT4ifNxioIhbhhwr/1YqrghrbCRGVoDeWsAMGhOS1XPlCq8cv10V9CDZBg/bVAGY3f
	 bPhTj4g/0FkDn3JG885uShFip+q5sOu69EJjsFVKF+MjyTeKE0FcAJxuxE43T/OPj9
	 LiEqp/X/id+ug==
Date: Mon, 15 May 2023 16:01:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Olaf Hering <olaf@aepfle.de>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v1] automation: provide diffutils and ghostscript in
 opensuse images
In-Reply-To: <20230502054218.15303-1-olaf@aepfle.de>
Message-ID: <alpine.DEB.2.22.394.2305151533140.4125828@ubuntu-linux-20-04-desktop>
References: <20230502054218.15303-1-olaf@aepfle.de>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 2 May 2023, Olaf Hering wrote:
> The diffutils package is a hard requirement for building xen.
> It was dropped from the Tumbleweed base image in the past 12 months.
> 
> Building with --enable-docs now requires the gs tool.
> 
> Add both packages to the suse dockerfiles.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/build/suse/opensuse-leap.dockerfile       | 2 ++
>  automation/build/suse/opensuse-tumbleweed.dockerfile | 2 ++
>  2 files changed, 4 insertions(+)
> 
> diff --git a/automation/build/suse/opensuse-leap.dockerfile b/automation/build/suse/opensuse-leap.dockerfile
> index bac9385412..c7973dd6ab 100644
> --- a/automation/build/suse/opensuse-leap.dockerfile
> +++ b/automation/build/suse/opensuse-leap.dockerfile
> @@ -18,11 +18,13 @@ RUN zypper install -y --no-recommends \
>          clang \
>          cmake \
>          dev86 \
> +        diffutils \
>          discount \
>          flex \
>          gcc \
>          gcc-c++ \
>          git \
> +        ghostscript \
>          glib2-devel \
>          glibc-devel \
>          # glibc-devel-32bit for Xen < 4.15
> diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile b/automation/build/suse/opensuse-tumbleweed.dockerfile
> index 3e5771fccd..7e5f22acef 100644
> --- a/automation/build/suse/opensuse-tumbleweed.dockerfile
> +++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
> @@ -18,11 +18,13 @@ RUN zypper install -y --no-recommends \
>          clang \
>          cmake \
>          dev86 \
> +        diffutils \
>          discount \
>          flex \
>          gcc \
>          gcc-c++ \
>          git \
> +        ghostscript \
>          glib2-devel \
>          glibc-devel \
>          # glibc-devel-32bit for Xen < 4.15
> 


From xen-devel-bounces@lists.xenproject.org Mon May 15 23:02:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 23:02:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534881.832334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyhCd-0004Mb-SZ; Mon, 15 May 2023 23:02:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534881.832334; Mon, 15 May 2023 23:02:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyhCd-0004MS-Pg; Mon, 15 May 2023 23:02:03 +0000
Received: by outflank-mailman (input) for mailman id 534881;
 Mon, 15 May 2023 23:02:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l5iH=BE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyhCc-00046V-U4
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 23:02:02 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7fe5b8f8-f374-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 01:02:01 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4B83263368;
 Mon, 15 May 2023 23:02:00 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 167E6C433EF;
 Mon, 15 May 2023 23:01:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fe5b8f8-f374-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684191719;
	bh=T5QCOnaGDS0SaaGM4Z8iGOwpIhTOi0ryqfRQZhqoHR0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=iI8NUGyr/Rax+yKdjGstU/eznhYdPQv9tQTjQwD/V7BUz6F9nEdBKJEBCkByVP517
	 tYz+EubAegPngk0AHDQiph8iXYcUkGjRONE2sqaWEJsig9mkEn+njdBmTrmib4SHlc
	 kbbiRqE2WSInBtacfZ7ij1kM2bbQbsXtEGCrcER7lIFjY2jBCUvnHaOHxUI3jM+jZl
	 dBDCBLvHQABA5SIwbZUn2RKt+jZw1vu3ZQdBgmNRqu8q2StXgJ8LKYVs5IgOGi0fVu
	 drUMT+0p44cwR49TWqZxdJosLmJIZmrq6mMGIdxf4GAw+bygr6kbDHPF4SkwO7vJVw
	 hUY3fKrjH2btw==
Date: Mon, 15 May 2023 16:01:57 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Olaf Hering <olaf@aepfle.de>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v1] automation: remove python2 from opensuse images
In-Reply-To: <20230502200527.5365-1-olaf@aepfle.de>
Message-ID: <alpine.DEB.2.22.394.2305151533230.4125828@ubuntu-linux-20-04-desktop>
References: <20230502200527.5365-1-olaf@aepfle.de>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 2 May 2023, Olaf Hering wrote:
> The upcoming Leap 15.5 will come without a binary named 'python'.
> Prepare the suse images for that change.
> 
> Starting with Xen 4.14, python3 can be used for the build.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/build/suse/opensuse-leap.dockerfile       | 2 --
>  automation/build/suse/opensuse-tumbleweed.dockerfile | 1 -
>  2 files changed, 3 deletions(-)
> 
> diff --git a/automation/build/suse/opensuse-leap.dockerfile b/automation/build/suse/opensuse-leap.dockerfile
> index c7973dd6ab..79de83ac20 100644
> --- a/automation/build/suse/opensuse-leap.dockerfile
> +++ b/automation/build/suse/opensuse-leap.dockerfile
> @@ -58,8 +58,6 @@ RUN zypper install -y --no-recommends \
>          'pkgconfig(libpci)' \
>          'pkgconfig(sdl)' \
>          'pkgconfig(sdl2)' \
> -        python \
> -        python-devel \
>          python3-devel \
>          systemd-devel \
>          tar \
> diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile b/automation/build/suse/opensuse-tumbleweed.dockerfile
> index 7e5f22acef..abb25c8c84 100644
> --- a/automation/build/suse/opensuse-tumbleweed.dockerfile
> +++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
> @@ -61,7 +61,6 @@ RUN zypper install -y --no-recommends \
>          'pkgconfig(libpci)' \
>          'pkgconfig(sdl)' \
>          'pkgconfig(sdl2)' \
> -        python-devel \
>          python3-devel \
>          systemd-devel \
>          tar \
> 


From xen-devel-bounces@lists.xenproject.org Mon May 15 23:03:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 23:03:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534887.832343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyhDh-00059L-75; Mon, 15 May 2023 23:03:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534887.832343; Mon, 15 May 2023 23:03:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyhDh-00059E-4Q; Mon, 15 May 2023 23:03:09 +0000
Received: by outflank-mailman (input) for mailman id 534887;
 Mon, 15 May 2023 23:03:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l5iH=BE=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyhDf-000598-GL
 for xen-devel@lists.xenproject.org; Mon, 15 May 2023 23:03:07 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5c39d9b-f374-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 01:03:05 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id CF75762379;
 Mon, 15 May 2023 23:03:03 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B2032C433D2;
 Mon, 15 May 2023 23:03:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5c39d9b-f374-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684191783;
	bh=FVn1KlANsBOdkLG6bSWMVJ/BL+zPiYygRD/WPFRChJ4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=EiWnajuJi2YByKn/dtI8VMzgGHM4sdd8akBbjr99F2JHIlO/DhhWKZZkEH3c1AzRv
	 llA606dTUM9Kuh5boX9CPwTDfeH4ZjMyVHxD7AE+52Urxm96CaTxlwjBogAmLFvuSo
	 3BZ4MDd5fot6aRr0LHls+9EbZ6t/dPZaXdezocGj5Bzce/BznanYxxTZ5tZ286mx1N
	 JRn4t4ZASD1GlVUiOwy+7VA2y6xziJduU+9QApIPhkzp6kkko6wgiJnoT1kXw4aX3g
	 fyRgkA1Dm3yvu6LpX/Kk/8cybWc2oc+1pCpEAMTQXh+IMZpy5sACay5WeGDjKJuxlJ
	 FIPbO8Vp8AZtg==
Date: Mon, 15 May 2023 16:03:01 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Olaf Hering <olaf@aepfle.de>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v1] automation: provide example for downloading an existing
 container
In-Reply-To: <20230502201444.6532-1-olaf@aepfle.de>
Message-ID: <alpine.DEB.2.22.394.2305151533320.4125828@ubuntu-linux-20-04-desktop>
References: <20230502201444.6532-1-olaf@aepfle.de>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 2 May 2023, Olaf Hering wrote:
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Given that opensuse-tumbleweed is still broken (it doesn't complete the Xen
build successfully) even after these patches, may I suggest we use a
different example?


> ---
>  automation/build/README.md | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/automation/build/README.md b/automation/build/README.md
> index 2d07cafe0e..8ad89a259a 100644
> --- a/automation/build/README.md
> +++ b/automation/build/README.md
> @@ -12,6 +12,12 @@ can be pulled with Docker from the following path:
>  docker pull registry.gitlab.com/xen-project/xen/DISTRO:VERSION
>  ```
>  
> +This example shows how to pull the existing container for Tumbleweed:
> +
> +```
> +docker pull registry.gitlab.com/xen-project/xen/suse:opensuse-tumbleweed
> +```
> +
>  To see the list of available containers run `make` in this
>  directory. You will have to replace the `/` with a `:` to use
>  them.
> 
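The quoted README instructions (list the containers with `make`, then replace
the `/` with a `:`) can be sketched as a small shell snippet; the container
name below is just the Tumbleweed example from the patch:

```shell
# A container is listed by 'make' as DISTRO/VERSION; turn that into a
# pullable image tag by swapping the '/' for a ':' (bash substitution).
name="suse/opensuse-tumbleweed"   # example name from the patch above
tag="registry.gitlab.com/xen-project/xen/${name/\//:}"
echo "$tag"
```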


From xen-devel-bounces@lists.xenproject.org Mon May 15 23:54:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 15 May 2023 23:54:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534892.832354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyi0m-0002Gu-Sl; Mon, 15 May 2023 23:53:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534892.832354; Mon, 15 May 2023 23:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyi0m-0002Gn-Pg; Mon, 15 May 2023 23:53:52 +0000
Received: by outflank-mailman (input) for mailman id 534892;
 Mon, 15 May 2023 23:53:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyi0m-0002Gd-0s; Mon, 15 May 2023 23:53:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyi0l-0006K3-Js; Mon, 15 May 2023 23:53:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyi0l-0002DJ-3Z; Mon, 15 May 2023 23:53:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyi0l-0004r7-3D; Mon, 15 May 2023 23:53:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QvimEqs5pOiTdcKBzwZXomYGXKkV7maVksLGqAVzdl8=; b=LEpaAbaqTliL463DqczWsUHU91
	m4iK4dPgC4EF3gKkZAfju94dlAwSmdVWclWXKb/Vz5VPwysnZzBno3bnPAhpT4gMZT5qe4nodPeol
	hZwIAILZ/ymJgUjvWEiKm7NLlIyCOzEOcMo3EJHrMncFqGet2wtHcm1bOjiNZUI534cA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180675-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180675: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e
X-Osstest-Versions-That:
    ovmf=80bc13db83ddbd5bbe757a20abcdd34daf4871f8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 15 May 2023 23:53:51 +0000

flight 180675 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180675/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e
baseline version:
 ovmf                 80bc13db83ddbd5bbe757a20abcdd34daf4871f8

Last test of basis   180665  2023-05-15 01:12:11 Z    0 days
Testing same since   180675  2023-05-15 21:40:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guo Dong <guo.dong@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   80bc13db83..cafb4f3f36  cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 16 00:11:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 00:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534897.832364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyiHu-0005Kv-0C; Tue, 16 May 2023 00:11:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534897.832364; Tue, 16 May 2023 00:11:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyiHt-0005Ko-Tk; Tue, 16 May 2023 00:11:33 +0000
Received: by outflank-mailman (input) for mailman id 534897;
 Tue, 16 May 2023 00:11:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PFhh=BF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyiHs-0005Kg-JF
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 00:11:32 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 349bdf62-f37e-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 02:11:30 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A5B1F620F6;
 Tue, 16 May 2023 00:11:28 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 240B6C433D2;
 Tue, 16 May 2023 00:11:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 349bdf62-f37e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684195888;
	bh=vjBGWia8/pKhY0b/aW+hp1dFzcOtC0TuSqQL5lJkFP0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=sK4EM5u7kAGUw798nQB1nLAFSxe1O+RTgfE/YRNgAvmOaAFrxaiFNthqxF4nl3vZU
	 KQFPG4W+ygCZz/pMrix6emivkYhahr+C6331z6Fkzqc+9UuVTKUrwzYtwePMway63w
	 jhk15ktWTa+HYqDJkjSZ43pyTFJuLTx/BhBpFzYMRy/lNprNZR838aYqPp0SYMsWQR
	 V5/djpqUfpChpvAH3O00DWgRPTRljYl6twxSB3hrk+mhj8HOVhHDLgVJyR3shSxkAM
	 wE86xB4AgIwPIzfsOibpVfxFvT7dJGjRIR2jY0GVrUeAYOK6D49s8ywftmNxyd1cBr
	 6/8FRQb5w3rBg==
Date: Mon, 15 May 2023 17:11:25 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    jbeulich@suse.com, xen-devel@lists.xenproject.org, 
    Xenia.Ragiadakou@amd.com, Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
In-Reply-To: <ZGH+5OKqnjTjUr/F@Air-de-Roger>
Message-ID: <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop> <20230513011720.3978354-2-sstabellini@kernel.org> <ZGH+5OKqnjTjUr/F@Air-de-Roger>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1204352746-1684195264=:4125828"
Content-ID: <alpine.DEB.2.22.394.2305151701170.4125828@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1204352746-1684195264=:4125828
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305151701171.4125828@ubuntu-linux-20-04-desktop>

On Mon, 15 May 2023, Roger Pau Monné wrote:
> On Fri, May 12, 2023 at 06:17:20PM -0700, Stefano Stabellini wrote:
> > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > 
> > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> > the tables in the guest. Instead, copy the tables to Dom0.
> > 
> > This is a workaround.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > ---
> > As mentioned in the cover letter, this is an RFC workaround as I don't
> > know the cause of the underlying problem. I do know that this patch
> > solves what would be otherwise a hang at boot when Dom0 PVH attempts to
> > parse ACPI tables.
> 
> I'm unsure how safe this is for native systems, as it's possible for
> firmware to modify the data in the tables, so copying them would
> break that functionality.
> 
> I think we need to get to the root cause that triggers this behavior
> on QEMU.  Is it the table checksum that fails, or something else?  Is
> there an error from Linux you could reference?

I agree with you, but so far I haven't managed to get to the root
of the issue. Here is what I know. These are the logs of a successful
boot using this patch:

[   10.437488] ACPI: Early table checksum verification disabled
[   10.439345] ACPI: RSDP 0x000000004005F955 000024 (v02 BOCHS )
[   10.441033] ACPI: RSDT 0x000000004005F979 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
[   10.444045] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
[   10.445984] ACPI: FACP 0x000000004005FA65 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
[   10.447170] ACPI BIOS Warning (bug): Incorrect checksum in table [FACP] - 0x67, should be 0x30 (20220331/tbprint-174)
[   10.449522] ACPI: DSDT 0x000000004005FB19 00145D (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
[   10.451258] ACPI: FACS 0x000000004005FAD9 000040
[   10.452245] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
[   10.452389] ACPI: Reserving FACP table memory at [mem 0x4005fa65-0x4005fad8]
[   10.452497] ACPI: Reserving DSDT table memory at [mem 0x4005fb19-0x40060f75]
[   10.452602] ACPI: Reserving FACS table memory at [mem 0x4005fad9-0x4005fb18]


And these are the logs of the same boot (unsuccessful) without this
patch:

[   10.516015] ACPI: Early table checksum verification disabled
[   10.517732] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
[   10.519535] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
[   10.522523] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
[   10.527453] ACPI: ���� 0x000000007FFE149D FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
[   10.528362] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
[   10.528491] ACPI: Reserving ���� table memory at [mem 0x7ffe149d-0x17ffe149b]

It is clearly memory corruption around FACS, but I couldn't find the
reason for it. The mapping code looks correct. I hope you can suggest a
way to narrow down the problem. If I could, I would suggest applying
this patch just for the QEMU PVH tests, but we don't have the
infrastructure for that in gitlab-ci, as there is a single Xen build for
all tests.
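As an aside on the checksum question: an ACPI table is valid when all of its
bytes, including the checksum field itself, sum to zero modulo 256 (which is
why Linux says FACP's 0x67 "should be 0x30"). A toy demonstration with a
fabricated 8-byte "table", not a real FACP:

```shell
# Toy 8-byte "table": 'F','A','C','P',1,2,3 sum to 288; the final byte
# 0xe0 (224) brings the total to 512, i.e. 0 mod 256 -> valid checksum.
sum=$(printf 'FACP\x01\x02\x03\xe0' | od -An -tu1 -v |
      tr -s ' ' '\n' | awk 'NF{s+=$1} END{print s%256}')
echo "$sum"   # 0 means every byte summed to a multiple of 256
```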

If it helps to repro on your side, you can just do the following,
assuming your Xen repo is in /local/repos/xen:


cd /local/repos/xen
mkdir binaries
cd binaries
mkdir -p dist/install/

docker run -it -v `pwd`:`pwd` registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12
cp /initrd* /local/repos/xen/binaries
exit

docker run -it -v `pwd`:`pwd` registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:6.1.19
cp /bzImage /local/repos/xen/binaries
exit

That's it. Now you have enough pre-built binaries to repro the issue.
Next you can edit automation/scripts/qemu-alpine-x86_64.sh to add

  dom0=pvh dom0_mem=1G dom0-iommu=none

on the Xen command line. I also removed the "timeout" and the "tee" pipe at
the end for my own convenience:

 # Run the test
-rm -f smoke.serial
-set +e
-timeout -k 1 720 \
 qemu-system-x86_64 \
     -cpu qemu64,+svm \
     -m 2G -smp 2 \
     -monitor none -serial stdio \
     -nographic \
     -device virtio-net-pci,netdev=n0 \
-    -netdev user,id=n0,tftp=binaries,bootfile=/pxelinux.0 |& tee smoke.serial
+    -netdev user,id=n0,tftp=binaries,bootfile=/pxelinux.0
 

Make sure to build the Xen hypervisor and place the resulting binary under
/local/repos/xen/binaries/.

Finally, you can run the test as follows:

cd ..
docker run -it -v /local/repos/xen:/local/repos/xen registry.gitlab.com/xen-project/xen/debian:unstable
cd /local/repos/xen
bash automation/scripts/qemu-alpine-x86_64.sh

It usually gets stuck halfway through the boot without this patch.
--8323329-1204352746-1684195264=:4125828--


From xen-devel-bounces@lists.xenproject.org Tue May 16 00:12:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 00:12:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534900.832373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyiIn-0005qV-90; Tue, 16 May 2023 00:12:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534900.832373; Tue, 16 May 2023 00:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyiIn-0005qO-6N; Tue, 16 May 2023 00:12:29 +0000
Received: by outflank-mailman (input) for mailman id 534900;
 Tue, 16 May 2023 00:12:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PFhh=BF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyiIm-0005nK-JE
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 00:12:28 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 56ece042-f37e-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 02:12:27 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 76E0C611EE;
 Tue, 16 May 2023 00:12:26 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EBFC5C433D2;
 Tue, 16 May 2023 00:12:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56ece042-f37e-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684195945;
	bh=VYP4YsMzJK7J+FTeSVMVRGmOiXTfHuodRvufuPmrgPo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=F+0KCvGawRQjrxFPd5abQz3pQyUbH/WIVTb+0PF4zgIxL28eiIWceBqVIALCI0irF
	 W2Wd6zujC5WQZw2KZPUxV580T2TQD/hHR5ITL+xVrh9IK1KSW5/GOShL0G/zs3MLo2
	 YKZlEB1iXZmYIPeZC2OcJnjB4GwBtaMkkHERILhn70VGd4R3BVMf7wvyfdT1harth3
	 14VyF9tAZiZiQzMCUcSOrSBwlnzXeYeKnwTd95VrcmNTQrm/sEFBZdNK4k7Ft4AXRf
	 lU+3gW+Dcqstw/s84qAg7kNroUWxYU/UPptl3ynoYo+fnYhVX6vDEjA8jYhrg5W73Z
	 8oGl07VTP2apg==
Date: Mon, 15 May 2023 17:12:23 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com, 
    Stefano Stabellini <stefano.stabellini@amd.com>, roger.pau@citrix.com, 
    andrew.cooper3@citrix.com
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
In-Reply-To: <c22a8925-15e4-47b9-6f5d-f85bbe802255@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305151711480.4125828@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop> <20230513011720.3978354-2-sstabellini@kernel.org> <c22a8925-15e4-47b9-6f5d-f85bbe802255@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 15 May 2023, Jan Beulich wrote:
> On 13.05.2023 03:17, Stefano Stabellini wrote:
> > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > 
> > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> > the tables in the guest. Instead, copy the tables to Dom0.
> 
> Do you really mean "in the guest" (i.e. from Xen's perspective, i.e.
> ignoring that when running on qemu it is kind of a guest itself)?

Yes, I posted the memory corruption info I have in a separate email.


> I also consider the statement too broad anyway: Various people have
> run PVH Dom0 without running into such an issue, so it's clearly not
> just "leads to".

Fair enough


From xen-devel-bounces@lists.xenproject.org Tue May 16 00:16:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 00:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534903.832384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyiMl-0006WE-QY; Tue, 16 May 2023 00:16:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534903.832384; Tue, 16 May 2023 00:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyiMl-0006W7-MW; Tue, 16 May 2023 00:16:35 +0000
Received: by outflank-mailman (input) for mailman id 534903;
 Tue, 16 May 2023 00:16:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PFhh=BF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyiMk-0006Vz-Ju
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 00:16:34 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e89fb8f0-f37e-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 02:16:32 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D2C27611EE;
 Tue, 16 May 2023 00:16:30 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5767AC433EF;
 Tue, 16 May 2023 00:16:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e89fb8f0-f37e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684196190;
	bh=vHzC7PTPgkpZdzLyFNybngmOcYC3HM7pRTDTAG1k4/U=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QAdZ7EZOi+BA6c7OCxtP7sXksNg9xvcFGyRb5X1DcgNNkSwDmUJ1zlWNiQLs1S+fR
	 wAgRMnNteFEKEHxVJYJGx9w+rCX5YwtxVdAofTUdfcED8kAWKwIUYqGt4K4KZGhCQ1
	 PpKJF/doa1K8g3eGWnND1eoQTDUrX5bD9Ot2RrBVrGGc6Y+QUEo5LD/YyG6lVABXso
	 3Udk2iSOk4I/+KBctfaUVgouUHJ2aNqLcLSvnsWIOaXxMhY2heP5SZwAbTSGr4FmJP
	 T1cW/J8bmFzImGv46cyvQlk83sp8wxA92LZkkqV3wNMOoXV7Ve7WgAKTCKf0KGLIZq
	 sJWNlNnvsi0Jw==
Date: Mon, 15 May 2023 17:16:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com, 
    Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
In-Reply-To: <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop> <20230513011720.3978354-1-sstabellini@kernel.org> <ZGHx9Mk3UGPdli1h@Air-de-Roger> <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1714508303-1684196189=:4125828"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1714508303-1684196189=:4125828
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 15 May 2023, Jan Beulich wrote:
> On 15.05.2023 10:48, Roger Pau Monné wrote:
> > On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
> >> From: Stefano Stabellini <stefano.stabellini@amd.com>
> >>
> >> Xen always generates an XSDT table even if the firmware provided an RSDT
> >> table. Instead of copying the XSDT header from the firmware table (which
> >> might be missing), generate the XSDT header from a preset.
> >>
> >> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> >> ---
> >>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
> >>  1 file changed, 9 insertions(+), 23 deletions(-)
> >>
> >> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> >> index 307edc6a8c..5fde769863 100644
> >> --- a/xen/arch/x86/hvm/dom0_build.c
> >> +++ b/xen/arch/x86/hvm/dom0_build.c
> >> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> >>                                        paddr_t *addr)
> >>  {
> >>      struct acpi_table_xsdt *xsdt;
> >> -    struct acpi_table_header *table;
> >> -    struct acpi_table_rsdp *rsdp;
> >>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
> >>      unsigned long size = sizeof(*xsdt);
> >>      unsigned int i, j, num_tables = 0;
> >> -    paddr_t xsdt_paddr;
> >>      int rc;
> >> +    struct acpi_table_header header = {
> >> +        .signature    = "XSDT",
> >> +        .length       = sizeof(struct acpi_table_header),
> >> +        .revision     = 0x1,
> >> +        .oem_id       = "Xen",
> >> +        .oem_table_id = "HVM",
> > 
> > I think this is wrong, as according to the spec the OEM Table ID must
> > match the OEM Table ID in the FADT.
> > 
> > We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
> > possibly also the other OEM related fields.
> > 
> > Alternatively we might want to copy and use the RSDT on systems that
> > lack an XSDT, or even just copy the header from the RSDT into Xen's
> > crafted XSDT, since the format of the RSDT and the XSDT headers is
> > exactly the same (the difference is in the size of the pointers to the
> > description headers that come after).
> 
> I guess I'd prefer that last variant.

I tried this approach (together with the second patch, which is required)
and it worked.

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index fd2cbf68bc..11d6d1bc23 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -967,6 +967,8 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
         goto out;
     }
     xsdt_paddr = rsdp->xsdt_physical_address;
+    if ( !xsdt_paddr )
+        xsdt_paddr = rsdp->rsdt_physical_address;
     acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
     table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
     if ( !table )
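For illustration only (this is a hedged sketch, not the patch itself: the struct below is a hypothetical, trimmed-down stand-in and is not ACPICA's real `struct acpi_table_rsdp` layout), the fallback logic the diff above adds amounts to:

```c
#include <stdint.h>

/* Hypothetical, trimmed-down RSDP layout: field names mirror ACPICA's
 * struct acpi_table_rsdp, but packing/offsets are not spec-accurate. */
struct rsdp_lite {
    uint32_t rsdt_physical_address;
    uint64_t xsdt_physical_address;
};

/* Prefer the XSDT physical address; fall back to the RSDT address when
 * the firmware provided no XSDT (the address field is zero). */
static uint64_t root_table_paddr(const struct rsdp_lite *rsdp)
{
    if ( !rsdp->xsdt_physical_address )
        return rsdp->rsdt_physical_address;
    return rsdp->xsdt_physical_address;
}
```

This works because the RSDT and XSDT share the same `struct acpi_table_header` layout; only the width of the table pointers that follow differs (32-bit vs 64-bit).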
--8323329-1714508303-1684196189=:4125828--


From xen-devel-bounces@lists.xenproject.org Tue May 16 00:22:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 00:22:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534908.832394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyiSB-00081l-Hy; Tue, 16 May 2023 00:22:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534908.832394; Tue, 16 May 2023 00:22:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyiSB-00081e-F6; Tue, 16 May 2023 00:22:11 +0000
Received: by outflank-mailman (input) for mailman id 534908;
 Tue, 16 May 2023 00:22:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyiSA-00081U-4t; Tue, 16 May 2023 00:22:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyiS9-0007Yr-TJ; Tue, 16 May 2023 00:22:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyiS9-00035v-GQ; Tue, 16 May 2023 00:22:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyiS9-0003Zk-G0; Tue, 16 May 2023 00:22:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=M14eZ1sfFzJsvip2DOntWtB+MPd2HJq5hUe5Xdej8yg=; b=AGjCO09enkFXGp2Qgz8mHwJtaL
	Di6mkJ9qSgZSnSsDb8m3wmjfJ+ygcUS/BWOhhEXIxJog+rnX2+6hjeH4hS/DOhD44WZ929xljg7+m
	mIcayr6l2pWSMOcx+EXPazRtUW/Q+1SsWGgg4b7JERgDFwIXoY+1Ln5h1nA80k5sqmUg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180676-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180676: regressions - trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-build-prep:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=fc1b51268025233a81e5fd9c5eabe170bc830720
X-Osstest-Versions-That:
    xen=56e2c8e5860090a35d5f0cafe168223a2a7c0e62
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 00:22:09 +0000

flight 180676 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180676/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   5 host-build-prep          fail REGR. vs. 180672

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fc1b51268025233a81e5fd9c5eabe170bc830720
baseline version:
 xen                  56e2c8e5860090a35d5f0cafe168223a2a7c0e62

Last test of basis   180672  2023-05-15 14:00:25 Z    0 days
Testing same since   180676  2023-05-15 23:03:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit fc1b51268025233a81e5fd9c5eabe170bc830720
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Sat May 13 04:12:46 2023 +0200

    automation: add x86_64 tests on a AMD Zen3+ runner
    
    This adds another physical runner to Gitlab-CI, running a similar set
    of jobs to the Alder Lake one.
    
    The machine specifically is
    MinisForum UM773 Lite with AMD Ryzen 7 7735HS
    
    The PV passthrough test is skipped as currently it fails on this system
    with:
    (d1) Can't find new memory area for initrd needed due to E820 map conflict
    
    The S3 test is skipped as it currently fails - the system seems to
    suspend properly (power LED blinks), but when woken up the power LED
    goes back to solid on and the fan spins at top speed, but otherwise
    there are no signs of life from the system (no output on the console,
    HDMI or anything else).
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit d6f0c82539a8dad043186cf9f9e44acdd440f0ae
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 15 14:43:05 2023 -0700

    automation: enable earlyprintk=xen for both dom0 and domU in hw tests
    
    Make debugging early boot failures easier based on just CI logs.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 3e02611fac8b238f99415b5b90dd31373ded2fac
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 15 14:41:18 2023 -0700

    automation: make console options configurable via variables
    
    This makes the test script more easily reusable for different runners,
    where the console may be connected differently. Include both the
    console= option and the configuration for the specific chosen console
    (like com1= here) in the 'CONSOLE_OPTS' variable.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
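    As a hedged illustration of the commit above (the option values are
    made up; only the variable name comes from the commit message):

    ```shell
    # CONSOLE_OPTS bundles both the generic console= selection and the
    # configuration of that chosen console (com1= here). Values are
    # hypothetical examples, not taken from the actual test script.
    CONSOLE_OPTS="console=com1 com1=115200,8n1"

    # A runner-specific script could then splice it into the Xen command line:
    xen_cmdline="loglvl=all ${CONSOLE_OPTS}"
    echo "${xen_cmdline}"
    ```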

commit be6aa637db95b1d7c50671fb30b79b3cd7e6dabf
Author: Stefano Stabellini <stefano.stabellini@amd.com>
Date:   Fri May 12 18:24:44 2023 -0700

    automation: add a Dom0 PVH test based on Qubes' runner
    
    Straightforward Dom0 PVH test based on the existing basic Smoke test for
    the Qubes runner.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 16 00:38:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 00:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534913.832404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyihp-0001Fq-U2; Tue, 16 May 2023 00:38:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534913.832404; Tue, 16 May 2023 00:38:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyihp-0001Fj-Pz; Tue, 16 May 2023 00:38:21 +0000
Received: by outflank-mailman (input) for mailman id 534913;
 Tue, 16 May 2023 00:38:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PFhh=BF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyiho-0001Fd-1n
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 00:38:20 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2c7e491-f381-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 02:38:17 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5A6EB622CE;
 Tue, 16 May 2023 00:38:16 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CE5E4C433EF;
 Tue, 16 May 2023 00:38:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2c7e491-f381-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684197495;
	bh=lNemjbpX0e9sENvkLQ4ss+2tvwGTi7UCJWS2TOL3VkU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=EoYAnpnt0uVlvXQhrGSTdsE19aqmeyTBBd6qQieddPmcPoA8XZ2SLKMIs4lUnycxH
	 0ojoP7/slzM3VV1qwsK5WOzehwPKMw1DpylMKg00bQ+Mwwvtm3ny54R0bPpBnigfZe
	 vSjPa8dztaeAiIRtaOve8EfoNU+VY4nuB1OX5DsbUUQCL/TKoyTzSsZRyD7UUcjxIh
	 M03HpZ/OH8qGdy0AMaB1379LRH6DO4ncYGGDJxGlpBvpSkV3/my7JQeXP8LCp9ckj+
	 zNZRNYImZmu/sQMA+k9Vnl/E7Z3Ey5484Ee5tFlQgB96sSuRgCPwWIbWHl1tfX0aea
	 HTAJTsKKIMiwg==
Date: Mon, 15 May 2023 17:38:13 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    andrew.cooper3@citrix.com, jbeulich@suse.com, 
    xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com, 
    Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
In-Reply-To: <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2305151737310.4125828@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop> <20230513011720.3978354-2-sstabellini@kernel.org> <ZGH+5OKqnjTjUr/F@Air-de-Roger> <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1784212664-1684197495=:4125828"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1784212664-1684197495=:4125828
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 15 May 2023, Stefano Stabellini wrote:
> On Mon, 15 May 2023, Roger Pau Monné wrote:
> > On Fri, May 12, 2023 at 06:17:20PM -0700, Stefano Stabellini wrote:
> > > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > 
> > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruption of
> > > the tables in the guest. Instead, copy the tables to Dom0.
> > > 
> > > This is a workaround.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > ---
> > > As mentioned in the cover letter, this is an RFC workaround as I don't
> > > know the cause of the underlying problem. I do know that this patch
> > > solves what would otherwise be a hang at boot when Dom0 PVH attempts to
> > > parse ACPI tables.
> > 
> > I'm unsure how safe this is for native systems, as it's possible for
> > firmware to modify the data in the tables, so copying them would
> > break that functionality.
> > 
> > I think we need to get to the root cause that triggers this behavior
> > on QEMU.  Is it the table checksum that fails, or something else?  Is
> > there an error from Linux you could reference?
> 
> I agree with you, but so far I haven't managed to get to the root
> of the issue. Here is what I know. These are the logs of a successful
> boot using this patch:
> 
> [   10.437488] ACPI: Early table checksum verification disabled
> [   10.439345] ACPI: RSDP 0x000000004005F955 000024 (v02 BOCHS )
> [   10.441033] ACPI: RSDT 0x000000004005F979 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> [   10.444045] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> [   10.445984] ACPI: FACP 0x000000004005FA65 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
> [   10.447170] ACPI BIOS Warning (bug): Incorrect checksum in table [FACP] - 0x67, should be 0x30 (20220331/tbprint-174)
> [   10.449522] ACPI: DSDT 0x000000004005FB19 00145D (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
> [   10.451258] ACPI: FACS 0x000000004005FAD9 000040
> [   10.452245] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> [   10.452389] ACPI: Reserving FACP table memory at [mem 0x4005fa65-0x4005fad8]
> [   10.452497] ACPI: Reserving DSDT table memory at [mem 0x4005fb19-0x40060f75]
> [   10.452602] ACPI: Reserving FACS table memory at [mem 0x4005fad9-0x4005fb18]
> 
> 
> And these are the logs of the same boot (unsuccessful) without this
> patch:
> 
> [   10.516015] ACPI: Early table checksum verification disabled
> [   10.517732] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> [   10.519535] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> [   10.522523] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> [   10.527453] ACPI: ���� 0x000000007FFE149D FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> [   10.528362] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> [   10.528491] ACPI: Reserving ���� table memory at [mem 0x7ffe149d-0x17ffe149b]
> 
> It is clearly a memory corruption around FACS 

Sorry I meant FACP/FADT
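
The FACP warning in the first log ("Incorrect checksum ... 0x67, should be
0x30") refers to ACPI's simple byte-sum rule: every byte of a table,
including the checksum byte itself, must sum to zero modulo 256. A minimal
sketch of that rule (not the Xen or Linux implementation; function names
here are made up):

```c
#include <stdint.h>
#include <stddef.h>

/* ACPI table checksum rule: the sum of all bytes in the table,
 * including the checksum byte, must be 0 modulo 256. */
static uint8_t acpi_byte_sum(const uint8_t *table, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += table[i];
    return sum;   /* 0 means the table verifies */
}

/* Value to store in the checksum field so the whole table sums to 0,
 * computed with that field temporarily treated as 0. */
static uint8_t acpi_fixup_checksum(const uint8_t *table, size_t len,
                                   size_t csum_off)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += (i == csum_off) ? 0 : table[i];
    return (uint8_t)(0x100 - sum);
}
```

In the corrupted boot, the garbage signature and 0xFFFFFFFF length suggest
the header bytes themselves were overwritten, so the checksum failure is a
symptom rather than the cause.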
--8323329-1784212664-1684197495=:4125828--


From xen-devel-bounces@lists.xenproject.org Tue May 16 01:52:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 01:52:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534918.832414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyjrC-0007wv-7e; Tue, 16 May 2023 01:52:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534918.832414; Tue, 16 May 2023 01:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyjrC-0007wo-3R; Tue, 16 May 2023 01:52:06 +0000
Received: by outflank-mailman (input) for mailman id 534918;
 Tue, 16 May 2023 01:52:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TnGy=BF=arm.com=Jiamei.Xie@srs-se1.protection.inumbo.net>)
 id 1pyjrA-0007wi-1T
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 01:52:04 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20601.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3e7ac675-f38c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 03:52:00 +0200 (CEST)
Received: from AM6P194CA0045.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::22)
 by DB4PR08MB9237.eurprd08.prod.outlook.com (2603:10a6:10:3fb::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 01:51:56 +0000
Received: from AM7EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:84:cafe::d6) by AM6P194CA0045.outlook.office365.com
 (2603:10a6:209:84::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33 via Frontend
 Transport; Tue, 16 May 2023 01:51:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT004.mail.protection.outlook.com (100.127.140.210) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.15 via Frontend Transport; Tue, 16 May 2023 01:51:55 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Tue, 16 May 2023 01:51:55 +0000
Received: from 12ab0d554c6f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 763CED03-F944-49C6-9E13-4BFFFC34A9F0.1; 
 Tue, 16 May 2023 01:51:48 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 12ab0d554c6f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 16 May 2023 01:51:48 +0000
Received: from DB9PR08MB7674.eurprd08.prod.outlook.com (2603:10a6:10:37d::21)
 by AS8PR08MB8827.eurprd08.prod.outlook.com (2603:10a6:20b:5ba::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 01:51:46 +0000
Received: from DB9PR08MB7674.eurprd08.prod.outlook.com
 ([fe80::17bf:7315:44a:3014]) by DB9PR08MB7674.eurprd08.prod.outlook.com
 ([fe80::17bf:7315:44a:3014%7]) with mapi id 15.20.6387.030; Tue, 16 May 2023
 01:51:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e7ac675-f38c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Cn+qLn2HDGBuIuBfG2AsGcWSyQo5Ucg8P8Tm+y6XxQ=;
 b=yXceWx8PsYCgLmBp4i9ZiZ6Kj2lNWtfO6KazouneIQXkQn9F5jwNkTw7u5mIaeuD8/Uuo+g5ffPi8TGaa+LROuJ/KKFNO5L3vbCfsHYoVcjzpdqxsaoYszliCgVlUVTgtCrrz2zAdZt1hEoYWCAe+UEImORly7pzniT7kHwgewk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: eedf60301999df81
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Mp0o9I8GDF9dyaWXrYWimZ7ByQG+kcY768rT7snYoSmTRRzCPubsIRnGDvTX1W3x4fXdgo/BMiGLUl57C2KDcZ5drhx9cQkeuSDuDV4uzu9DIDWW6wFOc7fS+IS/Hyw7gfTf8WsLkTradAB5Lu4ZDM+i6Hf2ubJ+S/3e+QZo3FS9NdkWOtNZLqMb/IH5Hx/CiQROCtr3r2kYPIBpKsPzKFzwPSzl/zdLIAsawsnP9i2lodspVutpq8c1nT3Xxz2w7ym1Te8rcPm0El+WoeXLNP5P4vJj92lZHb5SlWoRZQHui7s1yFWNHyccumw/fd2u+n5fTa5yJseJnWc2//vfIQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5Cn+qLn2HDGBuIuBfG2AsGcWSyQo5Ucg8P8Tm+y6XxQ=;
 b=BMVMZh4PQ1okYeLHKlCRLTh9lhXdp/N1l2XFyYuux0g8BPAMca20RrZSMlwC6hpOWw+uIWGe3KnXP1Dkk7xoLIRHxB9JI/TiOATUWcArIqC+iNaXXg11SHMYr+dUgDxFsP+CFX5KMThQNuqvsdOH71f4pVkUpFMEQg7w2c0u7knT9KMpi27KQoB/PDY/qbUXlagphgGzQGiP5fWzC0g+5R9PMq5SINaeS4pTUXuI65F75PzTtQhAUtePXyDO+qOWeP9yPcfQ5KSq6TaKiDnK2bRE+o71CQ3VzaHXwIRKd8TGmzRKaGfcJmLyowHbmfzIRLuaIltxW52EK2Wcxe5Htg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Cn+qLn2HDGBuIuBfG2AsGcWSyQo5Ucg8P8Tm+y6XxQ=;
 b=yXceWx8PsYCgLmBp4i9ZiZ6Kj2lNWtfO6KazouneIQXkQn9F5jwNkTw7u5mIaeuD8/Uuo+g5ffPi8TGaa+LROuJ/KKFNO5L3vbCfsHYoVcjzpdqxsaoYszliCgVlUVTgtCrrz2zAdZt1hEoYWCAe+UEImORly7pzniT7kHwgewk=
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
Content-Type: multipart/alternative;
 boundary="------------HciZi8HHxWRoK3JEVWJH9fXB"
Message-ID: <3e596d57-e4dd-c105-af3d-0b031767294f@arm.com>
Date: Tue, 16 May 2023 09:51:37 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH 1/4] automation: make console options configurable via
 variables
Content-Language: en-US
To: xen-devel@lists.xenproject.org
References: <cover.83768735557180cd65fb5f932c285271063e04ac.1683943670.git-series.marmarek@invisiblethingslab.com>
 <e0504797d1b3758c035cd82b2dc3b00d747ddcc8.1683943670.git-series.marmarek@invisiblethingslab.com>
From: Jiamei Xie <jiamei.xie@arm.com>
In-Reply-To: <e0504797d1b3758c035cd82b2dc3b00d747ddcc8.1683943670.git-series.marmarek@invisiblethingslab.com>
X-ClientProxiedBy: SI2P153CA0010.APCP153.PROD.OUTLOOK.COM
 (2603:1096:4:140::13) To DB9PR08MB7674.eurprd08.prod.outlook.com
 (2603:10a6:10:37d::21)
MIME-Version: 1.0
X-MS-TrafficTypeDiagnostic:
	DB9PR08MB7674:EE_|AS8PR08MB8827:EE_|AM7EUR03FT004:EE_|DB4PR08MB9237:EE_
X-MS-Office365-Filtering-Correlation-Id: b89aa031-1a69-4bcf-3dd1-08db55b020c3
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 a43KBJZyPDyOuA+hwbu2CjfR7azZExiGLLZ9C49dwIkbeF1ogJxEYgeKvx9Lpjj78V8CLhYD42ObxQfVGEpgzbkoLpkndmBrQDs6jDg0e0kQH/ZVryTspIYTGnkCz4F0KfmHwSbvppQF3PmHZ3nusW9VjwuP9x/O2daRnxGIw2hsB6dnm2JNvUYxlwN9DWWRz4rwyaHgAVVhYKC8qGz1Sni/OW01CWVAuTXfoflpvZQyo4H6Ax1vT7jLqDGSB6ZMiR4plb+Jgv/cOUv4J8xj9tcHzRHV+elW6lTKRih9nRuARx5COeXbup/Hl8bTHhwboLDA0gNvzt8MNu1O0ATCIWH1IOU1FIPBYXoxB8IgDg0dZQdzO5P9TigkXicbxhYy9N5HbvrwjCDeW6rGzc26IvOUG5FOEXzaUjx6Q2nYdnmTBDX/dlvW92HxnovSxr3AQdOUPdvRcXioJ1cTLnuqlC3Sghqpm2fIljbMoZ2iKun2GK+JuuTqiD9BIS6WFAtYhKuBdM+MK95WwwMjUFriPoDUXhmj4B4WdVcE7o1iOCoOCxsrRsDJ96QPuKI7f+p2MNB5bwfS2sFLbeQ7oYI2ig7nztRb8Rp2gzFx2EoUbmCuHZKiMYIMQeJ6+GyzzElD9k/PmBz1ul7u/fhHrBp4RQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB9PR08MB7674.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(366004)(39860400002)(136003)(396003)(451199021)(31686004)(66556008)(66476007)(478600001)(6916009)(66946007)(6506007)(186003)(53546011)(6512007)(38100700002)(2616005)(36756003)(8936002)(8676002)(2906002)(44832011)(41300700001)(6666004)(33964004)(6486002)(316002)(31696002)(86362001)(5660300002)(26005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8827
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1d023d8d-abdb-4834-d7bb-08db55b01ab6
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EbwRq/SOEF0z0xBxwcWfCLpXC/FRJ45OtuCZFAN2O6G6LmDNRKNAsdwDwgzm2ExKZCBm1KgctNIIdC9+ETG0vz/AQXed/mRZH0EFPriAyYIPOuZvbz2hkV2F1Z6P/hRcLLQxKIDhMlC4JPtqMnN6omHzG/jAHiCmv7Sz09g/f/jclCngvkliS0S+T9tED6Zm78gESKVKgXdvp5ERQwg2IDg3B/tiFTuOoRCPCfs4GsTGVztPFOYRTP2zMeIcA5IcnkPdyKaLfgpEnPzmi/ApgFWtNo2OytHa+aU0+BgXkuAhvaQGn/qygNyjj3doZtYjlrPYxMNe8f3dtb4k8GJGkwjTTEijrdZoIypak4t//d6JqxabK1D/7LxS0wHjLNYNoyN1/MpyKENmmxNOpyWRvECeDkiyGUFNi9wqMqKKQfymOnd/WUAEJzgbJyEhSQr4NnB7He1GrEnuDp+zW4yVocCjhtA3I5UPG8X/RfhO75pR1bqgrBbJ6OpM6+iDXwKFsfyvRwOOKg4bc9mbnFeZXbtLUEjpK1pJi+epmDBITrOTV7I5rZpNT3z5Tu0SSVx4LEiVjSbLYScEBv7RJ7Uee19tPpdMt+UOVBnb0JaeOKv+3IExtXl5eKDBHXJ1y+y/64ujrcBHKhm9wnPiq/8hMipHQLFLRlV3Q85TGahxc62rxpnhPMXo57e8Le7Px+gAokjl9jm+PzpDkMy/6xXRpccZrGA2fW+2WEmLYpvt+qrupkj05qL/VN7VHHUBnltL1Gw2Y9JMzxSH5TghuiAg6A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(39860400002)(136003)(376002)(451199021)(46966006)(36840700001)(40470700004)(31686004)(36860700001)(36756003)(6916009)(44832011)(40460700003)(2906002)(31696002)(8936002)(8676002)(82310400005)(40480700001)(81166007)(316002)(86362001)(5660300002)(82740400003)(41300700001)(356005)(70206006)(70586007)(336012)(47076005)(53546011)(186003)(6506007)(6666004)(33964004)(26005)(478600001)(2616005)(6486002)(6512007)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 01:51:55.4547
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b89aa031-1a69-4bcf-3dd1-08db55b020c3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB9237

--------------HciZi8HHxWRoK3JEVWJH9fXB
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


On 2023/5/13 10:12, Marek Marczykowski-Górecki wrote:
> This makes the test script easier to reuse with different runners, where
> the console may be connected differently. Include both the console= option
> and the configuration for the specific chosen console (like com1= here) in
> the 'CONSOLE_OPTS' variable.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Jiamei Xie <jiamei.xie@arm.com>
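As a sketch of the approach described in the quoted commit message: collect every console-related hypervisor argument in one variable so a different runner only has to override that variable. The concrete option values (com1 at 115200, the dtuart alternative) are illustrative assumptions, not taken from the patch itself.

```shell
# One variable holds both the console= option and the settings for the
# chosen console. Values below are illustrative assumptions, not the
# patch's actual defaults.
CONSOLE_OPTS="console=com1 com1=115200,8n1"

# A runner whose console is wired differently would only override this, e.g.:
#   CONSOLE_OPTS="console=dtuart dtuart=serial0"

# The test script then builds its Xen command line from the variable:
XEN_CMDLINE="loglvl=all ${CONSOLE_OPTS}"
echo "${XEN_CMDLINE}"
# prints: loglvl=all console=com1 com1=115200,8n1
```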

--------------HciZi8HHxWRoK3JEVWJH9fXB--


From xen-devel-bounces@lists.xenproject.org Tue May 16 04:22:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 04:22:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534924.832424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pymCV-0006lX-0g; Tue, 16 May 2023 04:22:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534924.832424; Tue, 16 May 2023 04:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pymCU-0006lQ-Tj; Tue, 16 May 2023 04:22:14 +0000
Received: by outflank-mailman (input) for mailman id 534924;
 Tue, 16 May 2023 04:22:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pymCT-0006lG-H2; Tue, 16 May 2023 04:22:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pymCT-0003Ju-Bl; Tue, 16 May 2023 04:22:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pymCS-0005vG-Qt; Tue, 16 May 2023 04:22:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pymCS-0005Ep-QI; Tue, 16 May 2023 04:22:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4lhzNV39QYQTHP5exTskWrMeOIJ6EafMeijYq3KKQb4=; b=tZpMJHYVZASvG4w6bB4WH85iVG
	X4uepBSl8YAqJtg9JSjF6vWh8sZKfvAeEUAI9eRKxhKWO9GMcuLO+s48bd/7zK+Z48PLhB15QLvgH
	NkAyQD8MqZ3kDsqYxaJx/07009R4gWOMxLd32wKfBij1mQvIRh6eEpibzYSe0k9OUoL8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180678-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180678: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8f9c8274a4e3e860bd777269cb2c91971e9fa69e
X-Osstest-Versions-That:
    xen=56e2c8e5860090a35d5f0cafe168223a2a7c0e62
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 04:22:12 +0000

flight 180678 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180678/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8f9c8274a4e3e860bd777269cb2c91971e9fa69e
baseline version:
 xen                  56e2c8e5860090a35d5f0cafe168223a2a7c0e62

Last test of basis   180672  2023-05-15 14:00:25 Z    0 days
Failing since        180676  2023-05-15 23:03:30 Z    0 days    2 attempts
Testing same since   180678  2023-05-16 01:02:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   56e2c8e586..8f9c8274a4  8f9c8274a4e3e860bd777269cb2c91971e9fa69e -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 16 04:58:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 04:58:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534931.832433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pymlv-0001t4-RJ; Tue, 16 May 2023 04:58:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534931.832433; Tue, 16 May 2023 04:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pymlv-0001sx-OS; Tue, 16 May 2023 04:58:51 +0000
Received: by outflank-mailman (input) for mailman id 534931;
 Tue, 16 May 2023 04:58:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tkax=BF=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pymlu-0001sq-Rq
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 04:58:50 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.218]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 57e315cf-f3a6-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 06:58:48 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4G4wiUdO
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 16 May 2023 06:58:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57e315cf-f3a6-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1684213124; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=Zfi/ViYUh7QEX+xwRaAhpimpO6XEG5mN/ldVXuBH1bjHChXfPEiR0NATBRjCGVMhsQ
    zQPqUoTK+83rKPku96RM7dzNyBxNUCNYKmOPU/25h7dwgCCF58awf5hJRD9cXKBljdTZ
    vhxNQC5F6un3E9saNODRNTxFclOLyZLCifl66qWNEAukIegeWaQ71YBU4SptxFjKolXQ
    Tb4tpAe9kc/fRLqJNci4KIeVtJV/xusmgSwTN+DAe91OOaTNBb5rNT/wxkVgjhy8fTcv
    IpMq9PbLhF90gJiBEkdlcuJ2QzMgiTQ38m9qNpgIplrHndKkZAH+jJjFdZDe7zRy2pog
    DXoA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1684213124;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=URSaZjGgI9AxRR8qLgzMYVr4Gnu0X7An01qLZ5Mtnjc=;
    b=nWKjvMJMV3Kv2pIbmN/MD/Z0zYE1XES7HZUM3VFb6bDk/DHRqmMurzlqSK9jRsyXeA
    RW+UpBtQVFpRCJN8Yh9VvRaMGU2Wy5lUzZWYv4X/4Qmr8w7Oo9JQfszOka08WsrZNpYb
    u8U5v2UczsRwfwjhQAJY5H3DXgbVi/6bO7IuSLqkYH7HhMFQwrtCViN33Msen/u4FGzI
    atqwf9ZBovKZ+d64iWHoYD90+VtgsBEUuiBVgcEJTSPyenbTPygnJ5jhOtnoTHlSp9af
    Nb7XHV3T74NALePMs+uodRNN37dtAiNYybWevvBOzfW+erBtpFRobKeDGGn6iRhqXw+O
    jZcg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1684213124;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=URSaZjGgI9AxRR8qLgzMYVr4Gnu0X7An01qLZ5Mtnjc=;
    b=EtNP7YUkF2xBECy/+6VY+slzsAzB/r0LtbsLhvkyXN2Ur+i05KGXowwqmnC3/UwkOL
    hKDM2miTiTg/UlG43sKpHdc43CyD9nP3tCFKgHDRC0EWjyNw/312rGQ7ykD5+HmuEiqa
    rKpOLEK3n7ZGjnwzLPFZVTriIqurRPtIwFeHxmlLNahbpkeG9TH7uxHRwP4ccov96K5Y
    FplAHJRB40p+sycLJn2fTQxNOfwH+RoOOAmW2dK+kCpIIjE8LBm64TPOsay+xttvWmTc
    TJW6QJBsZ0+ROBNZVV5TwehvjFRiyjz43haJN44DurlHoKBeDQ0mkA+b13IDu1p+wXp9
    QWYQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1684213124;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=URSaZjGgI9AxRR8qLgzMYVr4Gnu0X7An01qLZ5Mtnjc=;
    b=wmKryQiQ1OdQMWA8MHlrpFs9n0Wdi/VKh7D2rBTbKWOWYq7pc4pcRjtK3bJIcI5hOh
    cJul8aStJHQth2TYx/AA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QED/SSGq+wjGiUC4kV1cX/0jCNVp4ivfSTHw=="
Date: Tue, 16 May 2023 04:58:35 +0000
From: Olaf Hering <olaf@aepfle.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v1] automation: provide example for downloading an
 existing container
Message-ID: <20230516045835.2d210b75@sender>
In-Reply-To: <alpine.DEB.2.22.394.2305151533320.4125828@ubuntu-linux-20-04-desktop>
References: <20230502201444.6532-1-olaf@aepfle.de>
	<alpine.DEB.2.22.394.2305151533320.4125828@ubuntu-linux-20-04-desktop>
X-Mailer: Claws Mail 2023.04.19 (GTK 3.24.34; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/CjG_5.zCCO7l6_CoMBEX3Ke";
 protocol="application/pgp-signature"; micalg=pgp-sha1
Content-Transfer-Encoding: 7bit

--Sig_/CjG_5.zCCO7l6_CoMBEX3Ke
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Mon, 15 May 2023 16:03:01 -0700 (PDT),
Stefano Stabellini <sstabellini@kernel.org> wrote:

> Given that opensuse-tumbleweed is still broken (it doesn't complete the
> Xen build successfully) even after these patches, I suggest we use a
> different example.

For some reason it succeeded for me locally.
Does it also fail for you when you run it locally?

Olaf
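Since this thread is about providing an example for downloading an existing container, a minimal sketch of what such an example could look like follows; the registry path and image name are assumptions for illustration only, not the values from the patch under discussion.

```shell
# Illustrative only: registry path and image name are assumptions,
# not taken from the patch under discussion.
REGISTRY="registry.gitlab.com/xen-project/xen"
IMAGE="opensuse-tumbleweed"

# Compose the pull command a user would run to fetch the prebuilt CI
# container instead of building it locally.
PULL_CMD="docker pull ${REGISTRY}/${IMAGE}"
echo "${PULL_CMD}"
```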

--Sig_/CjG_5.zCCO7l6_CoMBEX3Ke
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iF0EARECAB0WIQSkRyP6Rn//f03pRUBdQqD6ppg2fgUCZGMNewAKCRBdQqD6ppg2
fncLAJ0eL/AEakEKXBmosjXalNMInOtprwCeKogMNdk9BsU0BhkBCzs9lp9ob3Q=
=NvNJ
-----END PGP SIGNATURE-----

--Sig_/CjG_5.zCCO7l6_CoMBEX3Ke--


From xen-devel-bounces@lists.xenproject.org Tue May 16 05:43:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 05:43:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534937.832443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pynSq-0007Rk-4i; Tue, 16 May 2023 05:43:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534937.832443; Tue, 16 May 2023 05:43:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pynSq-0007Rd-29; Tue, 16 May 2023 05:43:12 +0000
Received: by outflank-mailman (input) for mailman id 534937;
 Tue, 16 May 2023 05:43:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pynSp-0007RT-Jb; Tue, 16 May 2023 05:43:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pynSp-0005gO-8x; Tue, 16 May 2023 05:43:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pynSo-0000x2-Lv; Tue, 16 May 2023 05:43:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pynSo-0002hW-LU; Tue, 16 May 2023 05:43:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3IwFd26rc9/izMOYo9B8A7QjLavqUpE5CC2xJf5Zcbo=; b=kj/z2Iy1/N2NQOTIObyuXLL2Ds
	aLWqMwAJqozrPy479b+E8BOqzb9mpmqlU8btmDIYY3r3/eyON9DhR4scw1vPInY3K5AiuwDC4LTbV
	jdaK5S+HZtT0rDp6RTZAHFDCCTIEtgYvy5knWGEptCab7EIHHfKH+EQProKezUfhqWN8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180673-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180673: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=18b6727083acceac5d76ea0b8cb6f5cdef6858a7
X-Osstest-Versions-That:
    qemuu=8844bb8d896595ee1d25d21c770e6e6f29803097
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 05:43:10 +0000

flight 180673 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180673/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180659
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180659
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180659
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180659
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180659
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180659
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180659
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180659
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                18b6727083acceac5d76ea0b8cb6f5cdef6858a7
baseline version:
 qemuu                8844bb8d896595ee1d25d21c770e6e6f29803097

Last test of basis   180659  2023-05-14 01:40:01 Z    2 days
Testing same since   180673  2023-05-15 17:38:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Bulekov <alxndr@bu.edu>
  Richard Henderson <richard.henderson@linaro.org>
  Song Gao <gaosong@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   8844bb8d89..18b6727083  18b6727083acceac5d76ea0b8cb6f5cdef6858a7 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue May 16 06:08:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 06:08:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534945.832453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pynr4-0001di-5M; Tue, 16 May 2023 06:08:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534945.832453; Tue, 16 May 2023 06:08:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pynr4-0001db-2f; Tue, 16 May 2023 06:08:14 +0000
Received: by outflank-mailman (input) for mailman id 534945;
 Tue, 16 May 2023 06:08:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y0Pc=BF=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1pynr2-0001dS-RU
 for xen-devel@lists.xen.org; Tue, 16 May 2023 06:08:12 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0725c82b-f3b0-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 08:08:08 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6AFB61F88F;
 Tue, 16 May 2023 06:08:07 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 2C631138F9;
 Tue, 16 May 2023 06:08:07 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id /+jACccdY2QAagAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 16 May 2023 06:08:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0725c82b-f3b0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1684217287; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=GD/+EBI9SoZaAK8VDCEYu2eYK8G0RD7/2UU0BMpBuhY=;
	b=njqRSHzyf2dwL8Pzx4mrH3Mq7TW9NNwmubwHSfhwcx8luZM3mkP5yi/soWDkktE2iPPEgm
	6gdaiDJIMv/MgyBtFwB8iSXVZVXFBVSL49lOFN4YxBZ9RJmGOEb7HpD3NylGnYRBF5ssNW
	CgO2xwQT9UPuvDwWnw3ddohmCdvHn/4=
Message-ID: <c49127d5-be23-8aaf-0c6a-f8d9dfbd43cf@suse.com>
Date: Tue, 16 May 2023 08:08:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH V2 2/2] libxl: arm: Add grant_usage parameter for virtio
 devices
To: Viresh Kumar <viresh.kumar@linaro.org>,
 Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xen.org, Julien Grall <julien@xen.org>,
 Vincent Guittot <vincent.guittot@linaro.org>,
 stratos-dev@op-lists.linaro.org, Alex Bennée <alex.bennee@linaro.org>,
 Mathieu Poirier <mathieu.poirier@linaro.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>,
 Erik Schilling <erik.schilling@linaro.org>
References: <782a7b3f54c36a3930a031647f6778e8dd02131d.1683791298.git.viresh.kumar@linaro.org>
 <ccf5b1402fb7156be0ef33b44f7b114efbe76319.1683791298.git.viresh.kumar@linaro.org>
 <5dc217d6-ca8f-4c5f-ad7c-2ab30d6647bd@perard>
 <20230515120600.bsfw6pe3usae4sl4@vireshk-i7>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230515120600.bsfw6pe3usae4sl4@vireshk-i7>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------fDmRZFZcHqwJ1E3tu0wmUO86"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------fDmRZFZcHqwJ1E3tu0wmUO86
Content-Type: multipart/mixed; boundary="------------ZuyNh3NhdITfLJGveM6Brdo8"

--------------ZuyNh3NhdITfLJGveM6Brdo8
Content-Type: multipart/mixed; boundary="------------4xh0HsbVJAjb6gfQF1aE1YZ8"

--------------4xh0HsbVJAjb6gfQF1aE1YZ8
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 15.05.23 14:06, Viresh Kumar wrote:
> On 12-05-23, 11:43, Anthony PERARD wrote:
>> On Thu, May 11, 2023 at 01:20:43PM +0530, Viresh Kumar wrote:
>>> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
>>> index 24ac92718288..0405f6efe62a 100644
>>> --- a/docs/man/xl.cfg.5.pod.in
>>> +++ b/docs/man/xl.cfg.5.pod.in
>>> @@ -1619,6 +1619,18 @@ hexadecimal format, without the "0x" prefix and all in lower case, like
>>>   Specifies the transport mechanism for the Virtio device, only "mmio" is
>>>   supported for now.
>>>   
>>> +=item B<grant_usage=STRING>
>>> +
>>> +Specifies the grant usage details for the Virtio device. This can be set to
>>> +following values:
>>> +
>>> +- "default": The default grant setting will be used, enable grants if
>>> +  backend-domid != 0.
>>
>> I don't think this "default" setting is useful. We could just describe
>> what the default is when "grant_usage" setting is missing from the
>> configuration.
> 
> This is what I suggested earlier [1], maybe I misunderstood what
> Juergen said.

I think I just had another opinion. But the one of the maintainer is the one
counting in this case IMO. :-)


Juergen
--------------4xh0HsbVJAjb6gfQF1aE1YZ8--

--------------ZuyNh3NhdITfLJGveM6Brdo8--

--------------fDmRZFZcHqwJ1E3tu0wmUO86
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRjHcYFAwAAAAAACgkQsN6d1ii/Ey9D
Uwf/QoK/ljTbasCTTz6fEEdYMzVwBiz+ZMQG+61Uul3x15lTUgqq2hyt92Eysi8gp7As4u9QitZg
RC/O9sIBFVNKEePUAxAsEU3igOnE2rw63YRAf8BdPU/ejlFqazjNG3sFlR1V1bWhsvCfD+72wqQP
x5ypW5g+5XQ9iHu1s7LI377NkQflhDUvO9uVU7pZbrtFP+4icmQOwzneYsQJoKhfUKqWX232Z2+U
qYY56OyCeA8GUjeicJzddTbyoyjgV+aim+3GQQJsvuXkMOJxv6qAuEXKRfxw6J669S7CQqHaZxp4
mDahkrtDzVn5sNgC5XzNOtc1dIZ4guRQELOGF4qENA==
=1Dbr
-----END PGP SIGNATURE-----

--------------fDmRZFZcHqwJ1E3tu0wmUO86--


From xen-devel-bounces@lists.xenproject.org Tue May 16 06:13:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 06:13:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534951.832463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pynw6-00039L-SZ; Tue, 16 May 2023 06:13:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534951.832463; Tue, 16 May 2023 06:13:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pynw6-00039E-Pw; Tue, 16 May 2023 06:13:26 +0000
Received: by outflank-mailman (input) for mailman id 534951;
 Tue, 16 May 2023 06:13:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pynw5-000397-H9
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 06:13:25 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2061b.outbound.protection.outlook.com
 [2a01:111:f400:fe12::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3154a32-f3b0-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 08:13:23 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8589.eurprd04.prod.outlook.com (2603:10a6:102:218::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 06:13:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 06:13:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3154a32-f3b0-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=g+jz019SKMC3nDMK6yMsy/FwP2FKSdaWDoTQ8pcR/uAZBoBoL8yhrZjCajomBtMQo8fcOJvAzAsHadIFzMkhzblc2KtBAcALmQOV3nkOvOJUkLgJJLL1ihHZKwxdvb20tCCM39gWKvos+9KWTa3QTSwp+wS1WyllcJEYzfEjQPLuUqv0YUxDIrdMI6BQxpYRfV/pRV49NUe7XRnBKLn8jYJEqywDRMTynrD3t7yHTnCijFYNvwKIRAJwOvmyA6uTwpFy1gPwxCdPU1OgHLAa4KPqJkNwPTeF3iRFPSwx2W/BXqiwgMkDf6qk9CldKdqkJpKqgzuFoHGYKOO1eIqF8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wCiXzKIv7wGzCx0igUFvb4QA102zBnffJyHsyygvpXk=;
 b=XfALmFiuqUgzH8d4p+Vdph+iXCr3upqBUxRNxMndg65b7ziyW/QE5DcriumwWxtY1R8mLEW5noCT4+KKmkHQ8XEvGOIK17lN+gqAEN9kObTpnXfWknjikIMqqMnMgljAPDItUtg3sj/Ayqd+9B7Y8CsqsUL++KJw8Te1Os/TVyQRUa6jkupxUnRLlaW4HQIuHFta3ofCYfTGFXmPiKzZEjGir3q42GnqtAale0N3lV8oZqfLXNofHrUDT/OLyBCulk2htyAWc7/raaopk20rdOolfvv+cfEIhwQa8vjdnSVQxcr2Vfg6+SaYHH+r5zC+2ATpZh3OF+9Abo5nSxP9vg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wCiXzKIv7wGzCx0igUFvb4QA102zBnffJyHsyygvpXk=;
 b=N53v6tBDHJ87uUWRLgBflZATolxQ2oEP1rTwN7UqMxL/YLRVaTHBmTShIpJHOCfkrEJDnHQREb4byc44/3rjddv/JSzA+EnSDxvRiHhV113b92B7OVfWoNCnvu5aLd9+33WSV0ZKyfMDVC1tHNLwbtOC8MTPhYvSp6THf5mugNM8hH6FOZpX/05Hkg6Oo//lk9IYckPsNwrlKuxXHDVtknwPiz5XuobR465z+L+H7fMlpQcwiYmHYX0RQCrkM/z/rbJ9/TEBJWqrngcfxpthcXyv5ET6SWABLQfDit93ipC/buLHFnzHxetcmgPHnvjXUp6iO4p75RVHKQCKgrobmw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <220f12bf-d2aa-b83b-8cf2-94ba0b49b006@suse.com>
Date: Tue, 16 May 2023 08:13:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 Xenia.Ragiadakou@amd.com, Stefano Stabellini <stefano.stabellini@amd.com>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-1-sstabellini@kernel.org>
 <ZGHx9Mk3UGPdli1h@Air-de-Roger>
 <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
 <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0254.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:af::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8589:EE_
X-MS-Office365-Filtering-Correlation-Id: a2cfbfba-180a-4b18-9a14-08db55d4a603
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9x2QmDu7Dv5siPT48aQdJ3sAEff+REGNEttbYWWswygA4miZG5DcnYdyqjvsGaXuZwQ7wC5mAH2SPNbnQ25Ca0xVoBriZ6Y1UqL5t60koB2Wxe35zpZOsSjVdt0VeNdRHtUDMzdH+MiEr37/ypp1UKlZGRh237ozWV8U+/Qge2wKlriYEJXi9S5kjJIjll56Nba9xalbdiElT3i4jBUG99HfB8JQMYQ6Bxt1vXRbA3GXUuQFMms/pqO7j1bjE94o63t+mOC9n0LKfTYGr4nNN+7GIL5NcQz88ohK6CH1aQQxWc9HtF0CEDlv/t0d7U60ut7v9TJyjphYtQuQPHZ+5hXIciqo4SY5vUlbdiy7OO/aKJWiRpEvTYWMk0OQnu7pw8+SH9kQmNk+5EL2smuvzDpDqFK6IuyTMq2kFDJdyRTgni4YIC29fyahKbMq4dyO8XhX5clmEGMd3WC1AGFZ6SdzZV5SluydUFlSnHDzGsKAYK8YbReqNRFtF8enERr9YZBWcaFkfVeBCOa6IZjGKK8MR3qNWhKdF9Ryj73mnRkb048iPCStgdMLf5mLTYQIGhIZha0aX2q5uXB4Ko6TskqpFr1ArZm3UJxLTBW5BOrRHzX4Vk7gPLQnhHdnpqrpVnOaQ9dalPClPP4kWYdcSg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(346002)(366004)(376002)(39860400002)(396003)(451199021)(38100700002)(2616005)(26005)(186003)(83380400001)(53546011)(6506007)(2906002)(8676002)(41300700001)(5660300002)(6512007)(8936002)(6486002)(36756003)(31696002)(478600001)(66476007)(66556008)(66946007)(6916009)(316002)(54906003)(4326008)(86362001)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZGs5UmxFVTd4WXpKUTRJRmdHZTlQdlVCYzJwV2pySHpZNTE0KzN0NzdwdEZC?=
 =?utf-8?B?bTY4MEdhcjNLOU93TW56YU5KNFRxbnp3UC9Vb0RPWnh1dm9vWTNsMjlEUWJX?=
 =?utf-8?B?UEhQQ091eGhnSWxHNGVCcnZIRnoraGhsRkt5cUE1eE9HNVVZRnY2TG95K3Ns?=
 =?utf-8?B?aXNxMndqNFJhbC84aEpUUDVCeVVGaHY1TGsva0t3eWRncWg5cklaMFZFVlRO?=
 =?utf-8?B?R1VBcGJJdGE2RldNNDNGeWxyN3IyOVJjOFVLZXIwdElkQ0R4S2t3RUNZVnAv?=
 =?utf-8?B?NUJsVGVvMDdGbEh4c01SMWdCYmZPUWR3MDE0aEx5eXloOTB5OXRnOXdxQXds?=
 =?utf-8?B?U1U3QmRWeDUzZXVKK2NrTGRwZTV2eEt4VEhZcXk4UG0xODZZYUdTWnJ2YWRu?=
 =?utf-8?B?RUFaNlM0Y3F0VExEaWRUbmtWN0UvL1dPaGZIMEVUdndZZXA1SWQ1VlBVOWNL?=
 =?utf-8?B?Ly9BamtNSjdKaUI2RDBzYUwxSVljdnVmQWdnM1V1RkRZRGVTNDlUVGw2NFJS?=
 =?utf-8?B?clpBako5VVZNMmcvZE1Ua00ybnpJZDcyM25QdHErU0tiTEMyQnhhVmxjT0Iy?=
 =?utf-8?B?YzFXTDRkeW1NcHNtZ3ZUOWw3NTVmbnpzNHJtMW5velVOc202dTBNTlJ4N282?=
 =?utf-8?B?a1dLV0JWblpoajg0UVh2WXZHT1BNdDdUSXZFeVRZSDRHZCt2WklQRmJuazRs?=
 =?utf-8?B?VlZFYk1XN1VJa004U2NlR0FHaHhTcC80Qmc5N25LL2NFWjFQREhGQUZKR1dO?=
 =?utf-8?B?bVNoOXRFWUs1ckJPTFJTUm44UzY0ck9McWZIdENUb09INmYrTFNJYXRjM3I2?=
 =?utf-8?B?Zk5sOWIzSFhVMmg2a0EyNjFoN1h3RS9lMEx2Njd4cGtxdm9DZ3NUNWo4TUlP?=
 =?utf-8?B?Z1hLKzIzMnhUbWZsVXZaKys5MXZCYkl6LzBSUFZzajh4a2gwbmdKNTBBem5n?=
 =?utf-8?B?SlZtZnZsSGJ0UkoyNHd1V0d6QVNtZkVhclNpNXFQRzVwMFF4cnpPLzlIQ28r?=
 =?utf-8?B?ejdyVkdMSksrMkV3ZVVkSG05cWcySFZOcnhPSVNlcFZGaGwxNUcxY2NQeFhQ?=
 =?utf-8?B?SWJidXFZYkZKMlU1NnRTbjArMVdJbVdNVjh5MXBZQmM3K3pCbytxQlZsRVJM?=
 =?utf-8?B?SWJTUXN2aDdWT0NkL0dLVEVPV2FCck5iWjFzQXk1djE3c2ZEK2F5bWNuUzRT?=
 =?utf-8?B?Q0JPcjNjbStlVTRkam1rWWxnQ1RyVWtGdC9qN04rTDNJRDhnYjQ0SXM0aHlQ?=
 =?utf-8?B?dlVmYmFBbEV5Z1JpMTltR1VxNmgrYm91cG5SdUlicnc1MGZpWm4vbWYzeWx4?=
 =?utf-8?B?VnlDT09iUFJ2b3ZEaTZMcjlhNWpKRlg5QWt5T0UrMG0vVVNQMkRFRkpLL0c1?=
 =?utf-8?B?NitwRTNlWkxNb2RGelB6ZmFPQ3N6VzFKd3FyVStlaG1xZzRHbmxlZklvUXp1?=
 =?utf-8?B?TVh0QWRqcE5yY05uMUlpazNlMjhKWUkwYU9JSXJFcFdoZ1BTY0pPQnlSemg0?=
 =?utf-8?B?K2ZpL1pLYVR6ZSt6YW5jTkFCc3pwR0NGcFR6U2U3UTFIQVc0aEpVVnI0dmI1?=
 =?utf-8?B?RUt1TGJXTE4xdCtQd0JZUmtHQVVzaHVXQ2Z2TVJwcnREcHh4VG5BbG1LSHRP?=
 =?utf-8?B?UUhPdFJEeHJmOUdCZDZWUjFxMTFTdzR0aTVVUzFGUVVWKzJmUnNhOUFUN1lY?=
 =?utf-8?B?UFFFRWpwTDZGa0svNk80VU1mRkV1TVk4K2s0SXVPb0dzZG4vNGFVWnVIM0V3?=
 =?utf-8?B?M2ZsbzNoMnZnUG1RVzRha1pYYzhQb1lhSVlXbWxYQUxHTDFxZWxPbVJ2ejVY?=
 =?utf-8?B?am9UYUpVaFBWNzJ4YnBwVnUyVmZFSU03Umk3S3loMlRFaEJNTDRqeWxuT3Zu?=
 =?utf-8?B?eFV4aVM4bndRQnJmSFB0akQxTmhNK1FvS1ZuU0QxNC9wUElOdzJGZHVpQ3pN?=
 =?utf-8?B?TDBma1pEY0NhcEFjRzNoRUZ0N2RWejgyRGZ5emptMlBNNnZUcEJWNEtRM3pp?=
 =?utf-8?B?b1Zjb08wN3pwaWtTVDNvS2UvZmE0QWhHTzY2enJBZmE4TzJ2NmRuVmRqNXlO?=
 =?utf-8?B?WW1IUFQvUG5kU2h4d1l6aEo3VnRoamNxaEtLYm1pbldpZ0NmTjVQOHZiNjkv?=
 =?utf-8?Q?cecjus0+ensD06KEmw64qjbiC?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a2cfbfba-180a-4b18-9a14-08db55d4a603
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 06:13:21.2118
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mb41NwbMOFhqWp0Ai/3KGFQyRpfRXujfqoTG3ONqPbXBYXktFc6sv5eYQFKG7G8fZOyjlgDCbhzIFiTtaRKKLw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8589

On 16.05.2023 02:16, Stefano Stabellini wrote:
> On Mon, 15 May 2023, Jan Beulich wrote:
>> On 15.05.2023 10:48, Roger Pau Monné wrote:
>>> Alternatively we might want to copy and use the RSDT on systems that
>>> lack an XSDT, or even just copy the header from the RSDT into Xen's
>>> crafted XSDT, since the format of the RSDT and the XSDT headers is
>>> exactly the same (the difference is in the width of the table
>>> pointers that come after).
>>
>> I guess I'd prefer that last variant.
> 
> I tried this approach (together with the second patch for necessity) and
> it worked.

Which I find slightly surprising: a fully conforming consumer of the
tables may expect an XSDT when xsdt_physical_address is set, and
reject an RSDT. We may still want to limit ourselves to this less involved
approach, but then with a code comment clearly stating the limitation.

Jan

> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -967,6 +967,10 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
>          goto out;
>      }
>      xsdt_paddr = rsdp->xsdt_physical_address;
> +    if ( !xsdt_paddr )
> +    {
> +        xsdt_paddr = rsdp->rsdt_physical_address;
> +    }
>      acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
>      table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
>      if ( !table )



From xen-devel-bounces@lists.xenproject.org Tue May 16 06:27:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 06:27:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534955.832474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyo9U-0004iE-3c; Tue, 16 May 2023 06:27:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534955.832474; Tue, 16 May 2023 06:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyo9T-0004i6-Vw; Tue, 16 May 2023 06:27:15 +0000
Received: by outflank-mailman (input) for mailman id 534955;
 Tue, 16 May 2023 06:27:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyo9S-0004i0-DC
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 06:27:14 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20629.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b1737ce8-f3b2-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 08:27:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8242.eurprd04.prod.outlook.com (2603:10a6:20b:3ee::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.32; Tue, 16 May
 2023 06:27:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 06:27:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1737ce8-f3b2-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=keyXqCQUp9/f6ky5njaEl9zt9E1lJdvUGVhXpYRAcTdaM7D5pU9vZZZz6jw71I4MEl7vyQgxSPQquOUd2n7GqR4HCI9mSmHgrREACe/n5M/wvN3gmbIoGvIfQU3SLkzS9nNBqNgtwjjskv0AqcK8Ah3I54aveIyFijXt6WmWDbdJVXytIT4EXQXA8LT0jY3Zzvo46l450YpH/wnPvYuNLbOHfMmZo84lpF6AH9l3rDXKCgWxEb+sO3rcMVk6TDcFYXyC64Jt+ni5LA5DL4fjH7sHCl43TBqOFZRnrq+POt5tf7UdxzgbLs6hAWw58V6J6byDaJW2PQUZBN9sKviIcg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qAVow9u+Xh4x/o8AdSzCfnId7jaatw6Kd5EYpOSo6I0=;
 b=htkky16RIMz7Qb2Umt+kFfUdPklzWz0t9QdcXgq+dpYmqjiTl5nM9ppM6xZHqBZYJZFe0hnHvtI0DE36XxWrc2N6Qw5BKjl5y3JcibsWvvsceyQ+HBX5xBZkqNS9AY4176shqpqp9lQ96eiph85bVIMH2t+fUfwVjEaQsPKd6vIDmsbje8Y1sHLDnirtsfXHmZgqGiHNJfUUSQrails6MGeGwTeVfJH7CzRvhJ9S0YsHqofPMb6RZBNY2KFOoJNK3SDGe1P2PbrI91iDRmntvylJ1LbZKEQcxtklWQ6njhbE6ynGK+kHfvCgf6ND7g+ZnANyBsmJqk0XY8alAnrk8g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qAVow9u+Xh4x/o8AdSzCfnId7jaatw6Kd5EYpOSo6I0=;
 b=kCbMHCeCFqTzT6eO6iXkuGjK5GHOehLp0LziVbOhj9LYV3upbwmHXxr3baQ7JJVab0gli0ralm4LvaERpMrjvqkSa7dR0JWeI6CQ+fcqraBsKtk0ibllXxM2WPVSxKFyf3iTmYZrC/JIOe0KKUM+EWjJc+bum4VV7gXXtpdIEdJG8nl1eQ5fweagSRAS6SoN5X2I4mA3iEqci+VAPcc5dK7XpddaoTGUVSjSBdjcVm9wZ2AT7FczfVaTxXo9yGivQuj72J2L0j7JfyVpTVY+NRZ83ZUaovmWP9tEhAMOgp3ZDj/Kxkwt99lE1RJR186ZrZZ8JzwzTXz+WvMSkzuNfA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ad78afc7-8733-dca2-e9e5-26435661d560@suse.com>
Date: Tue, 16 May 2023 08:27:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 Xenia.Ragiadakou@amd.com, Stefano Stabellini <stefano.stabellini@amd.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
 <ZGH+5OKqnjTjUr/F@Air-de-Roger>
 <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0158.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8242:EE_
X-MS-Office365-Filtering-Correlation-Id: ad499888-73ed-4f92-259f-08db55d694ef
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad499888-73ed-4f92-259f-08db55d694ef
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 06:27:11.4514
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qIvB/znV6YJuoR86nOriTHyQReo4mTxUnayK2cciWLRtukmsFnF4e+l+vt8DoOmtiiYjTYIDs12le+ySN3b1MA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8242

On 16.05.2023 02:11, Stefano Stabellini wrote:
> On Mon, 15 May 2023, Roger Pau Monné wrote:
>> On Fri, May 12, 2023 at 06:17:20PM -0700, Stefano Stabellini wrote:
>>> From: Stefano Stabellini <stefano.stabellini@amd.com>
>>>
>>> Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruption of
>>> the tables in the guest. Instead, copy the tables to Dom0.
>>>
>>> This is a workaround.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
>>> ---
>>> As mentioned in the cover letter, this is a RFC workaround as I don't
>>> know the cause of the underlying problem. I do know that this patch
>>> solves what would be otherwise a hang at boot when Dom0 PVH attempts to
>>> parse ACPI tables.
>>
>> I'm unsure how safe this is for native systems, as it's possible for
>> firmware to modify the data in the tables, so copying them would
>> break that functionality.
>>
>> I think we need to get to the root cause that triggers this behavior
>> on QEMU.  Is it the table checksum that fails, or something else?  Is
>> there an error from Linux you could reference?
> 
> I agree with you, but so far I haven't managed to get to the root of
> the issue. Here is what I know. These are the logs of a successful
> boot using this patch:
> 
> [   10.437488] ACPI: Early table checksum verification disabled
> [   10.439345] ACPI: RSDP 0x000000004005F955 000024 (v02 BOCHS )
> [   10.441033] ACPI: RSDT 0x000000004005F979 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> [   10.444045] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> [   10.445984] ACPI: FACP 0x000000004005FA65 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
> [   10.447170] ACPI BIOS Warning (bug): Incorrect checksum in table [FACP] - 0x67, should be 0x30 (20220331/tbprint-174)

With this line I wouldn't really call it a "successful boot".
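For reference, the checksum rule this warning is about is simple: per the ACPI spec, all bytes of a table, including the checksum field itself, must sum to zero mod 256. A minimal, self-contained sketch (illustrative only, not the actual Xen or Linux code):

```c
#include <stddef.h>
#include <stdint.h>

/* ACPI table checksum: the byte sum over the table's full length,
 * including the checksum byte itself, must be 0 (mod 256). A non-zero
 * result is what triggers warnings like the FACP one above. */
static uint8_t acpi_checksum(const void *table, size_t len)
{
    const uint8_t *p = table;
    uint8_t sum = 0;

    while ( len-- )
        sum += *p++;

    return sum; /* 0 means the table is intact */
}
```

Anything that rewrites even a single byte of a mapped table after the checksum byte was fixed up will make this sum non-zero, which matches the 0x67-vs-0x30 mismatch reported for FACP.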

> [   10.449522] ACPI: DSDT 0x000000004005FB19 00145D (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
> [   10.451258] ACPI: FACS 0x000000004005FAD9 000040
> [   10.452245] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> [   10.452389] ACPI: Reserving FACP table memory at [mem 0x4005fa65-0x4005fad8]
> [   10.452497] ACPI: Reserving DSDT table memory at [mem 0x4005fb19-0x40060f75]
> [   10.452602] ACPI: Reserving FACS table memory at [mem 0x4005fad9-0x4005fb18]
> 
> 
> And these are the logs of the same boot (unsuccessful) without this
> patch:
> 
> [   10.516015] ACPI: Early table checksum verification disabled
> [   10.517732] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> [   10.519535] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> [   10.522523] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> [   10.527453] ACPI: ���� 0x000000007FFE149D FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)

With a little kernel instrumentation, it shouldn't be difficult to
spot where this bogus address is coming from. It should also be
possible to inspect the relevant tables from Xen right before Dom0
actually starts, and then again slightly later (perhaps triggered by
Dom0 itself, again via slight instrumentation). I rather expect
they're going to be okay right after construction, and become
corrupted subsequently.
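As an illustration of what such instrumentation might test for (a purely hypothetical helper, not the actual Linux/ACPICA code), a dumped entry like the one with signature "????" and length FFFFFFFF above can be flagged mechanically: a sane ACPI table header has a printable 4-character signature and a plausible length.

```c
#include <stdbool.h>
#include <stdint.h>

/* Layout of the common ACPI table header (36 bytes, per the ACPI spec). */
struct acpi_table_header {
    char     signature[4];
    uint32_t length;
    uint8_t  revision;
    uint8_t  checksum;
    char     oem_id[6];
    char     oem_table_id[8];
    uint32_t oem_revision;
    char     asl_compiler_id[4];
    uint32_t asl_compiler_revision;
};

/* Hypothetical debug check: a header read through a corrupted mapping
 * (non-printable signature bytes, absurd length) fails both tests, so a
 * printk() of the failing entry's index would pinpoint where the bogus
 * pointer enters. */
static bool acpi_header_sane(const struct acpi_table_header *h)
{
    for ( int i = 0; i < 4; i++ )
        if ( h->signature[i] < 0x20 || h->signature[i] > 0x7e )
            return false;

    /* A table shorter than its header, or larger than 16MiB, is bogus. */
    return h->length >= sizeof(*h) && h->length < (16u << 20);
}
```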

Did you check that the E820 table properly covers the ACPI table
range(s)?
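The kind of coverage check meant here can be sketched as follows (the structure names are modelled loosely on e820 handling and are illustrative, not the real Xen definitions):

```c
#include <stdbool.h>
#include <stdint.h>

#define E820_ACPI 3 /* standard e820 type for ACPI reclaimable memory */

/* Simplified stand-ins for Xen's e820 structures, for illustration. */
struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

struct e820map {
    unsigned int nr_map;
    struct e820entry map[128];
};

/* Return true if [start, start+len) lies entirely within a single e820
 * entry of the given type, i.e. the ACPI table range is properly covered. */
static bool e820_covers(const struct e820map *e820,
                        uint64_t start, uint64_t len, uint32_t type)
{
    for ( unsigned int i = 0; i < e820->nr_map; i++ )
    {
        const struct e820entry *e = &e820->map[i];

        if ( e->type == type &&
             e->addr <= start &&
             start + len <= e->addr + e->size )
            return true;
    }
    return false;
}
```

Running every table range from the "Reserving ... table memory" lines above through such a check would show quickly whether one of them falls outside the ACPI regions handed to Dom0.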

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 07:37:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:37:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534966.832484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypFK-0003zX-6k; Tue, 16 May 2023 07:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534966.832484; Tue, 16 May 2023 07:37:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypFK-0003zQ-3l; Tue, 16 May 2023 07:37:22 +0000
Received: by outflank-mailman (input) for mailman id 534966;
 Tue, 16 May 2023 07:37:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypFI-0003zK-Oh
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:37:21 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20608.outbound.protection.outlook.com
 [2a01:111:f400:7d00::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c0b2d2a-f3bc-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 09:37:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6909.eurprd04.prod.outlook.com (2603:10a6:803:13d::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:37:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:37:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c0b2d2a-f3bc-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hUyYfIa2f2IgT3eB2ai8vhsGEltZEFG1/Oo6ohSLdgb84Ag9aTIH+7uHn/UZ72u8nURvc9TrjIIiEWUl98AEyNbtuZjivHuARI6CXlU+NqKsjfgLl7Y4O7lF64LrewVXdrSj7YqTBhVMzetaicw1o4kR0qtloErMAESx5Ol2Y224qgzAEQCI7UEC5dhSRTU36jDrp6+XnsmYceMq6sgsk/srli9SKhFGDPor+CklQF7suwVSmWom9O/4ewqCJZCJFqpVHisWFCp3wQijW6Ir1tGbIf6/rzJTrCGlZ/2lDpKD13Hw7l8bJ2DWQYBstGjqheaQjG8RGgNl4MVrBeGDMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=v1ju+VcVywBnn0CpgBCckTNP5pe/u7oCdDxvqK7nfKw=;
 b=WI+KZ9UfZP+7T7GuK9hX2uUaF4k3eTJE7PDMwrI6k0cMirYThynBAxa61lHdE0GfHk/ZpvSqKDOaMJeQoj/3/4U0fFLq44LXnF8yjT6AF3j0noS6esupXzy1dLs2xxOJoaR+N5lWJyeHmcurivMi2uU5VuC/uZbfhIzAS0Zz7o7QY5e2kDDluLSktJ2HyQtwTMGiFT7LFvA+UfLHJ2JOTFlSm7eF5olps+OlQu5IP7R3+8IfSEwDAYBlRAxXBIQeWZd6kSZb5cUKIJZV+T/37x6475S5hq+eLZUvJvjXWXs5ia1SLxitt3IHlZpw+pvGFnBewWKsySovwF3xUg2oXw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v1ju+VcVywBnn0CpgBCckTNP5pe/u7oCdDxvqK7nfKw=;
 b=jw3lc1xsKEpJEmde03etNNYqR87/wLYp6HdWIMB69POSCfF4creWLKHU/OHnpwylFgBzfb8ZF4x2cByPUgVz3yUC5WxggUSOO9ShsSHXbk4+8r8DnSpKhSnNf8joCbvMYTzvKqwMAq0vKlCXAjwp4UWn3sVDDk7nDhyn2YahLzs3noGD7Rv19XVunahXqbhd6Zp0LW/msWD5I3v+suuD6xdH/ng/D64ekrKsJ6xqZD5/DiWHt7du+bEVCyRXm986wlQqMWx6mKjMEcGGKdha0DsuipJ2+kjrwf/cFeZtQKhIbs6F2I3gAT72y5SoNAXVnaeRglMHH2PVYGN6gMOO+A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Date: Tue, 16 May 2023 09:37:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 00/12] x86: assorted shadow mode adjustments
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0098.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB6909:EE_
X-MS-Office365-Filtering-Correlation-Id: 87d240e6-16a6-4d4d-4493-08db55e05f05
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 87d240e6-16a6-4d4d-4493-08db55e05f05
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:37:15.9986
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: m5QSVseeJgbz9omusT+/NSI1JkCL+WqMOHKHvfGfbPJYeWlhP8r8zDV06RC5d8BTDAr7wB5GlpMP2h8tiqtiCg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6909

This is fallout from the XSA-427 investigations, partly related to
the first approach there having been more intrusive.

Most patches aren't really dependent upon one another, so they can
probably go in independently (as they get acked).

A few patches from v2 went in, but there are also two new OOS patches in
v3. See individual patches for what has changed (in response to review
comments).

01: reduce explicit log-dirty recording for HVM
02: call sh_update_cr3() directly from sh_page_fault()
03: don't generate bogus "domain dying" trace entry from sh_page_fault()
04: use lighter weight mode checks
05: move OOS functions to their own file
06: restrict OOS allocation to when it's really needed
07: OOS doesn't track VAs anymore
08: sh_rm_write_access_from_sl1p() is HVM-only
09: drop is_hvm_...() where easily possible
10: make monitor table create/destroy more consistent
11: vCPU-s never have "no mode"
12: adjust monitor table prealloc amount

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 07:38:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:38:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534970.832494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypG1-0004WX-Iu; Tue, 16 May 2023 07:38:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534970.832494; Tue, 16 May 2023 07:38:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypG1-0004WQ-G0; Tue, 16 May 2023 07:38:05 +0000
Received: by outflank-mailman (input) for mailman id 534970;
 Tue, 16 May 2023 07:38:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypG0-0004T5-8V
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:38:04 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0608.outbound.protection.outlook.com
 [2a01:111:f400:fe02::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 964a0f5c-f3bc-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 09:38:02 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6909.eurprd04.prod.outlook.com (2603:10a6:803:13d::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:38:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:38:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 964a0f5c-f3bc-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Yber5EyOc9aXzPiJPMYSKpM3Vm9k4NS/Piv9Je2csp1OxZSGVw+iwEXOzpEdZOIHrB9aonpOmQkyp8uwsCDW4oNAygkDDwKAoty+Z0UW3WUrVqeqy0y/qyQm6YNkaEV4noMwAq2otc7L3L/1iZemgLvIqyS5fZasV9u+uJaSZ9ztjZHTEWhLfS6CzBXmH9XZu7CxNET5RUzL+FwS7yafjiZbcE2oz4sDE5+tyWf9eSd0o8nDf/eJv3hzIJynVpSTs2M3Wcy4qln+QvcOQAeZ58dx2CE4txouvsRz/qiDvWdKEY2+HVv0pCvoa8ZsxfepXtp82y3gad81zVtxWi9eEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pGINYhnPXGwClXBqXIKbDyrF0P+Sfo8bCBfLUCWu++0=;
 b=cLq+2yA0Y/wqyUcmk6kq3WKE520qYaO02tZRLok+o3vhzx6EIhrozIzyZULeJSR6BKsXraS79X6E5fvPJ7t6YJZYiGe9GGX++eN/rofN6YcYs/46ETYlMa9+UgYlpv8YTy3ELcnhgTXKnHODfSg0OhtS7DMm5vOiaI+Ujsj4IIrlaABahZyb6OoN+opbVApdQDV3DbmNSg3YARCFFtNxxdc/tGGvVw+g04tIG+SHkBMidW5w9s22o6NToZih3oeit0aMV8p+ogZ2yUn7e3BOvusDe5xg8qXTXqWgRNmdTV+gdoC3Hb1GzqjBrwtbOGPL0BLLiFiKiZNczO/qHMfT7Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pGINYhnPXGwClXBqXIKbDyrF0P+Sfo8bCBfLUCWu++0=;
 b=V3d5QbNyX5RiqLW0/uh2DarxjiliGqhLANGRPs83Uf6Nn4+cEgzKj/pgnAow4Eyj0EFkMDleF0AkhWIZ2beo3IyRPLS4deyUxg+FfTlmlt+Ky4Rqy7gkscTaniyWQJZyCCgwa5HDUgzfbJSkpuYaG/Cy1x/2695A2N9Llbw+bJz2P+7RzJBBkT3rv2TKDckb/LQ54gMd9HWWlFJltYKSCLHDxhT4ZURbepDOaDqRXDuDOmcovfQVdiUNRnidor2x777hkSy0v1kM7VDf533CpysqVbQSk4XgUXy7cqW9wk9HD+bUqkDiqsvuEwSrjkfZqU79oPrmnUNygOlmyhPNyw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <55287753-1973-ef36-3802-82250b8bbad8@suse.com>
Date: Tue, 16 May 2023 09:37:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 01/12] x86/shadow: reduce explicit log-dirty recording for
 HVM
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0099.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB6909:EE_
X-MS-Office365-Filtering-Correlation-Id: b69b35a3-6210-4c9a-fc06-08db55e079c8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b69b35a3-6210-4c9a-fc06-08db55e079c8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:38:00.8764
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DO95Hf4QjijuPlhz9Oocic0LejgKKbDuG9avgn8jpMrk/2Lx/EffCbpwwyXduYROPg47BLJFLw7NXHn/EZJfvw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6909

validate_guest_pt_write(), by calling sh_validate_guest_entry(), already
guarantees the needed update of log-dirty information. Move the explicit
paging_mark_dirty() into the only code path that actually needs it (the
one taken when SHOPT_SKIP_VERIFY is enabled), making clear that a single
such call suffices.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -644,6 +644,7 @@ static void sh_emulate_unmap_dest(struct
     {
         /* Writes with this alignment constraint can't possibly cross pages. */
         ASSERT(!mfn_valid(sh_ctxt->mfn[1]));
+        paging_mark_dirty(v->domain, sh_ctxt->mfn[0]);
     }
     else
 #endif /* SHADOW_OPTIMIZATIONS & SHOPT_SKIP_VERIFY */
@@ -661,12 +662,10 @@ static void sh_emulate_unmap_dest(struct
             validate_guest_pt_write(v, sh_ctxt->mfn[1], addr + b1, b2);
     }
 
-    paging_mark_dirty(v->domain, sh_ctxt->mfn[0]);
     put_page(mfn_to_page(sh_ctxt->mfn[0]));
 
     if ( unlikely(mfn_valid(sh_ctxt->mfn[1])) )
     {
-        paging_mark_dirty(v->domain, sh_ctxt->mfn[1]);
         put_page(mfn_to_page(sh_ctxt->mfn[1]));
         vunmap((void *)((unsigned long)addr & PAGE_MASK));
     }



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:38:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:38:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534973.832504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypGR-00051T-T0; Tue, 16 May 2023 07:38:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534973.832504; Tue, 16 May 2023 07:38:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypGR-00051K-Ok; Tue, 16 May 2023 07:38:31 +0000
Received: by outflank-mailman (input) for mailman id 534973;
 Tue, 16 May 2023 07:38:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypGQ-0004Mf-V8
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:38:30 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20628.outbound.protection.outlook.com
 [2a01:111:f400:7d00::628])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a6fecc2d-f3bc-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 09:38:30 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8988.eurprd04.prod.outlook.com (2603:10a6:20b:40b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:38:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:38:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6fecc2d-f3bc-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <dbdc07e0-4700-6cb0-4ba0-927417482604@suse.com>
Date: Tue, 16 May 2023 09:38:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 02/12] x86/shadow: call sh_update_cr3() directly from
 sh_page_fault()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0009.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8988:EE_
X-MS-Office365-Filtering-Correlation-Id: 18bddaa9-4bbd-4262-4eda-08db55e08994
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 18bddaa9-4bbd-4262-4eda-08db55e08994
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:38:27.3444
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: E4J3lcM4X9wrhFzzmjcGN7R6BKIC5LSdXNtvQvX1z4Qe3bpZoGl+VxuSEMDMka4SDh0iu2vrtSJIPx9kW6Ov2g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8988

There's no need for an indirect call here, as the mode is invariant
throughout the entire paging-locked region. All it takes to avoid it is
to have a forward declaration of sh_update_cr3() in place.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I find this and the respective Win7-related comment suspicious: If we
really need to "fix up" L3 entries "on demand", wouldn't it be better to
retry shadow_get_and_create_l1e() rather than exit? The spurious page
fault that the guest observes cannot, after all, be known to be non-
fatal inside the guest. That's purely an OS policy.

Furthermore, sh_update_cr3() will also invalidate L3 entries which were
loaded successfully before, but were invalidated by the guest
afterwards. I strongly suspect that the described hardware behavior is
_only_ to load previously not-present entries from the PDPT, not to
purge ones already marked present. IOW I think sh_update_cr3() would
need to be called in an "incremental" mode here. (The alternative of
doing this in shadow_get_and_create_l3e() instead would likely be more
cumbersome.)

Beyond the "on demand" L3 entry creation I also can't see what guest
actions could lead to the ASSERT() being inapplicable in the PAE case.
The 3-level code in shadow_get_and_create_l2e() doesn't consult guest
PDPTEs, and all other logic is similar to that for other modes.

(See 89329d832aed ["x86 shadow: Update cr3 in PAE mode when guest walk
succeed but shadow walk fails"].)

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -79,6 +79,8 @@ const char *const fetch_type_names[] = {
 # define for_each_shadow_table(v, i) for ( (i) = 0; (i) < 1; ++(i) )
 #endif
 
+static void cf_check sh_update_cr3(struct vcpu *v, int do_locking, bool noflush);
+
 /* Helper to perform a local TLB flush. */
 static void sh_flush_local(const struct domain *d)
 {
@@ -2475,7 +2477,7 @@ static int cf_check sh_page_fault(
          * In any case, in the PAE case, the ASSERT is not true; it can
          * happen because of actions the guest is taking. */
 #if GUEST_PAGING_LEVELS == 3
-        v->arch.paging.mode->update_cr3(v, 0, false);
+        sh_update_cr3(v, 0, false);
 #else
         ASSERT(d->is_shutting_down);
 #endif



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:39:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:39:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534977.832514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypGm-0005XQ-6N; Tue, 16 May 2023 07:38:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534977.832514; Tue, 16 May 2023 07:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypGm-0005XJ-1O; Tue, 16 May 2023 07:38:52 +0000
Received: by outflank-mailman (input) for mailman id 534977;
 Tue, 16 May 2023 07:38:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypGk-0004Mf-KA
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:38:50 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0621.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b2c5e3d1-f3bc-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 09:38:50 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8988.eurprd04.prod.outlook.com (2603:10a6:20b:40b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:38:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:38:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2c5e3d1-f3bc-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e3aead11-f7f5-2ccd-d598-3e6ea19a0ce6@suse.com>
Date: Tue, 16 May 2023 09:38:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 03/12] x86/shadow: don't generate bogus "domain dying"
 trace entry from sh_page_fault()
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0075.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8988:EE_
X-MS-Office365-Filtering-Correlation-Id: db10e3db-deb2-4e41-8405-08db55e0964a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: db10e3db-deb2-4e41-8405-08db55e0964a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:38:48.6813
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bF2Nbgncv27JcrxTSu9ShI3vhc/8/rLz00qzh7Rujxt0g1yYYPqW8waSrWNz1LMO5lM+QrzehQqCmF9HvE4ojw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8988

When, in 3-level guest mode, we help a guest stay alive, we shouldn't
emit a trace entry claiming the contrary. Move the invocation up into
the respective #ifdef branch. While this moves it into the locked
region, emitting trace records with the paging lock held is okay (it is
done elsewhere as well); it merely increases lock-holding time a
little.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2480,10 +2480,10 @@ static int cf_check sh_page_fault(
         sh_update_cr3(v, 0, false);
 #else
         ASSERT(d->is_shutting_down);
+        trace_shadow_gen(TRC_SHADOW_DOMF_DYING, va);
 #endif
         paging_unlock(d);
         put_gfn(d, gfn_x(gfn));
-        trace_shadow_gen(TRC_SHADOW_DOMF_DYING, va);
         return 0;
     }
 



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:39:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:39:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534987.832524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypHS-0006Eu-Gw; Tue, 16 May 2023 07:39:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534987.832524; Tue, 16 May 2023 07:39:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypHS-0006En-Dn; Tue, 16 May 2023 07:39:34 +0000
Received: by outflank-mailman (input) for mailman id 534987;
 Tue, 16 May 2023 07:39:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypHR-0004Mf-7H
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:39:33 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061b.outbound.protection.outlook.com
 [2a01:111:f400:7d00::61b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb7542dc-f3bc-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 09:39:32 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8988.eurprd04.prod.outlook.com (2603:10a6:20b:40b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:39:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:39:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb7542dc-f3bc-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7684392b-82b4-a7cb-35dc-a5de8142eea2@suse.com>
Date: Tue, 16 May 2023 09:39:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 04/12] x86/shadow: use lighter weight mode checks
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0072.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8988:EE_
X-MS-Office365-Filtering-Correlation-Id: 13e2b02d-7925-46a0-142b-08db55e0ae94
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 13e2b02d-7925-46a0-142b-08db55e0ae94
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:39:29.4364
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8988

The shadow_mode_...() checks, with the exception of shadow_mode_enabled(),
are shorthands for shadow_mode_enabled() && paging_mode_...(). While that
extra qualification is potentially useful outside of shadow-internal
functions, inside them we already know that we're dealing with a domain in
shadow mode, so the plain "paging" checks are sufficient and cheaper: the
"shadow" forms commonly translate to a MOV/AND/CMP/Jcc sequence, whereas
the "paging" ones typically resolve to just TEST+Jcc.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base over new earlier patch.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1843,7 +1843,7 @@ int sh_remove_write_access(struct domain
      * In guest refcounting, we trust Xen to already be restricting
      * all the writes to the guest page tables, so we do not need to
      * do more. */
-    if ( !shadow_mode_refcounts(d) )
+    if ( !paging_mode_refcounts(d) )
         return 0;
 
     /* Early exit if it's already a pagetable, or otherwise not writeable */
@@ -2075,7 +2075,7 @@ int sh_remove_all_mappings(struct domain
          *   guest pages with an extra reference taken by
          *   prepare_ring_for_helper().
          */
-        if ( !(shadow_mode_external(d)
+        if ( !(paging_mode_external(d)
                && (page->count_info & PGC_count_mask) <= 3
                && ((page->u.inuse.type_info & PGT_count_mask)
                    == (is_special_page(page) ||
@@ -2372,8 +2372,8 @@ static void sh_update_paging_modes(struc
     {
         const struct paging_mode *old_mode = v->arch.paging.mode;
 
-        ASSERT(shadow_mode_translate(d));
-        ASSERT(shadow_mode_external(d));
+        ASSERT(paging_mode_translate(d));
+        ASSERT(paging_mode_external(d));
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
         /* Need to resync all our pages now, because if a page goes out
@@ -2760,7 +2760,7 @@ void shadow_vcpu_teardown(struct vcpu *v
 
     sh_detach_old_tables(v);
 #ifdef CONFIG_HVM
-    if ( shadow_mode_external(d) )
+    if ( paging_mode_external(d) )
     {
         mfn_t mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
 
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -514,7 +514,7 @@ _sh_propagate(struct vcpu *v,
                || (level == 1
                    && page_get_owner(mfn_to_page(target_mfn)) == dom_io);
     if ( mmio_mfn
-         && !(level == 1 && (!shadow_mode_refcounts(d)
+         && !(level == 1 && (!paging_mode_refcounts(d)
                              || p2mt == p2m_mmio_direct)) )
     {
         ASSERT((ft == ft_prefetch));
@@ -531,7 +531,7 @@ _sh_propagate(struct vcpu *v,
                        _PAGE_RW | _PAGE_PRESENT);
     if ( guest_nx_enabled(v) )
         pass_thru_flags |= _PAGE_NX_BIT;
-    if ( level == 1 && !shadow_mode_refcounts(d) && mmio_mfn )
+    if ( level == 1 && !paging_mode_refcounts(d) && mmio_mfn )
         pass_thru_flags |= PAGE_CACHE_ATTRS;
     sflags = gflags & pass_thru_flags;
 
@@ -651,7 +651,7 @@ _sh_propagate(struct vcpu *v,
      * (We handle log-dirty entirely inside the shadow code, without using the
      * p2m_ram_logdirty p2m type: only HAP uses that.)
      */
-    if ( level == 1 && unlikely(shadow_mode_log_dirty(d)) && !mmio_mfn )
+    if ( level == 1 && unlikely(paging_mode_log_dirty(d)) && !mmio_mfn )
     {
         if ( ft & FETCH_TYPE_WRITE )
             paging_mark_dirty(d, target_mfn);
@@ -807,7 +807,7 @@ do {
 #define FOREACH_PRESENT_L2E(_sl2mfn, _sl2e, _gl2p, _done, _dom, _code)    \
 do {                                                                      \
     int _i, _j;                                                           \
-    ASSERT(shadow_mode_external(_dom));                                   \
+    ASSERT(paging_mode_external(_dom));                                   \
     ASSERT(mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_32_shadow);      \
     for ( _j = 0; _j < 4; _j++ )                                          \
     {                                                                     \
@@ -833,7 +833,7 @@ do {
 do {                                                                       \
     int _i;                                                                \
     shadow_l2e_t *_sp = map_domain_page((_sl2mfn));                        \
-    ASSERT(shadow_mode_external(_dom));                                    \
+    ASSERT(paging_mode_external(_dom));                                    \
     ASSERT(mfn_to_page(_sl2mfn)->u.sh.type == SH_type_l2_pae_shadow);      \
     for ( _i = 0; _i < SHADOW_L2_PAGETABLE_ENTRIES; _i++ )                 \
     {                                                                      \
@@ -854,7 +854,7 @@ do {
     unsigned int _i, _end = SHADOW_L2_PAGETABLE_ENTRIES;                    \
     shadow_l2e_t *_sp = map_domain_page((_sl2mfn));                         \
     ASSERT_VALID_L2(mfn_to_page(_sl2mfn)->u.sh.type);                       \
-    if ( is_pv_32bit_domain(_dom) /* implies !shadow_mode_external */ &&    \
+    if ( is_pv_32bit_domain(_dom) /* implies !paging_mode_external */ &&    \
          mfn_to_page(_sl2mfn)->u.sh.type != SH_type_l2_64_shadow )          \
         _end = COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(_dom);                    \
     for ( _i = 0; _i < _end; ++_i )                                         \
@@ -896,7 +896,7 @@ do {
 #define FOREACH_PRESENT_L4E(_sl4mfn, _sl4e, _gl4p, _done, _dom, _code)  \
 do {                                                                    \
     shadow_l4e_t *_sp = map_domain_page((_sl4mfn));                     \
-    int _xen = !shadow_mode_external(_dom);                             \
+    int _xen = !paging_mode_external(_dom);                             \
     int _i;                                                             \
     ASSERT(mfn_to_page(_sl4mfn)->u.sh.type == SH_type_l4_64_shadow);\
     for ( _i = 0; _i < SHADOW_L4_PAGETABLE_ENTRIES; _i++ )              \
@@ -965,7 +965,7 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
 #endif
 
     // Create the Xen mappings...
-    if ( !shadow_mode_external(d) )
+    if ( !paging_mode_external(d) )
     {
         switch (shadow_type)
         {
@@ -1367,7 +1367,7 @@ void sh_destroy_l1_shadow(struct domain
         shadow_demote(d, gmfn, t);
     }
 
-    if ( shadow_mode_refcounts(d) )
+    if ( paging_mode_refcounts(d) )
     {
         /* Decrement refcounts of all the old entries */
         mfn_t sl1mfn = smfn;
@@ -1464,7 +1464,7 @@ static int cf_check validate_gl4e(
     l4e_propagate_from_guest(v, new_gl4e, sl3mfn, &new_sl4e, ft_prefetch);
 
     // check for updates to xen reserved slots
-    if ( !shadow_mode_external(d) )
+    if ( !paging_mode_external(d) )
     {
         int shadow_index = (((unsigned long)sl4p & ~PAGE_MASK) /
                             sizeof(shadow_l4e_t));
@@ -2387,7 +2387,7 @@ static int cf_check sh_page_fault(
     gfn = guest_walk_to_gfn(&gw);
     gmfn = get_gfn(d, gfn, &p2mt);
 
-    if ( shadow_mode_refcounts(d) &&
+    if ( paging_mode_refcounts(d) &&
          ((!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt)) ||
           (!p2m_is_mmio(p2mt) && !mfn_valid(gmfn))) )
     {
@@ -2611,7 +2611,7 @@ static int cf_check sh_page_fault(
     return EXCRET_fault_fixed;
 
  emulate:
-    if ( !shadow_mode_refcounts(d) )
+    if ( !paging_mode_refcounts(d) )
         goto not_a_shadow_fault;
 
 #ifdef CONFIG_HVM
@@ -3055,7 +3055,7 @@ sh_update_linear_entries(struct vcpu *v)
      */
 
     /* Don't try to update the monitor table if it doesn't exist */
-    if ( !shadow_mode_external(d) ||
+    if ( !paging_mode_external(d) ||
          pagetable_get_pfn(v->arch.hvm.monitor_table) == 0 )
         return;
 
@@ -3204,7 +3204,7 @@ static void cf_check sh_update_cr3(struc
     /* Double-check that the HVM code has sent us a sane guest_table */
     if ( is_hvm_domain(d) )
     {
-        ASSERT(shadow_mode_external(d));
+        ASSERT(paging_mode_external(d));
         if ( hvm_paging_enabled(v) )
             ASSERT(pagetable_get_pfn(v->arch.guest_table));
         else
@@ -3229,7 +3229,7 @@ static void cf_check sh_update_cr3(struc
      * table.  We cache the current state of that table and shadow that,
      * until the next CR3 write makes us refresh our cache.
      */
-    ASSERT(shadow_mode_external(d));
+    ASSERT(paging_mode_external(d));
 
     /*
      * Find where in the page the l3 table is, but ignore the low 2 bits of
@@ -3260,7 +3260,7 @@ static void cf_check sh_update_cr3(struc
         ASSERT(d->is_dying || d->is_shutting_down);
         return;
     }
-    if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
+    if ( !paging_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
 
@@ -3354,7 +3354,7 @@ static void cf_check sh_update_cr3(struc
     ///
     /// v->arch.cr3
     ///
-    if ( shadow_mode_external(d) )
+    if ( paging_mode_external(d) )
     {
         make_cr3(v, pagetable_get_mfn(v->arch.hvm.monitor_table));
     }
@@ -3371,7 +3371,7 @@ static void cf_check sh_update_cr3(struc
     ///
     /// v->arch.hvm.hw_cr[3]
     ///
-    if ( shadow_mode_external(d) )
+    if ( paging_mode_external(d) )
     {
         ASSERT(is_hvm_domain(d));
 #if SHADOW_PAGING_LEVELS == 3
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -407,7 +407,7 @@ static inline int sh_remove_write_access
                                          unsigned int level,
                                          unsigned long fault_addr)
 {
-    ASSERT(!shadow_mode_refcounts(d));
+    ASSERT(!paging_mode_refcounts(d));
     return 0;
 }
 #endif
@@ -520,8 +520,8 @@ sh_mfn_is_a_page_table(mfn_t gmfn)
         return 0;
 
     owner = page_get_owner(page);
-    if ( owner && shadow_mode_refcounts(owner)
-         && (page->count_info & PGC_shadowed_pt) )
+    if ( owner && paging_mode_refcounts(owner) &&
+         (page->count_info & PGC_shadowed_pt) )
         return 1;
 
     type_info = page->u.inuse.type_info & PGT_type_mask;
--- a/xen/arch/x86/mm/shadow/set.c
+++ b/xen/arch/x86/mm/shadow/set.c
@@ -81,7 +81,7 @@ shadow_get_page_from_l1e(shadow_l1e_t sl
     struct domain *owner = NULL;
 
     ASSERT(!sh_l1e_is_magic(sl1e));
-    ASSERT(shadow_mode_refcounts(d));
+    ASSERT(paging_mode_refcounts(d));
 
     if ( mfn_valid(mfn) )
     {
@@ -342,7 +342,7 @@ int shadow_set_l1e(struct domain *d, sha
          !sh_l1e_is_magic(new_sl1e) )
     {
         /* About to install a new reference */
-        if ( shadow_mode_refcounts(d) )
+        if ( paging_mode_refcounts(d) )
         {
 #define PAGE_FLIPPABLE (_PAGE_RW | _PAGE_PWT | _PAGE_PCD | _PAGE_PAT)
             int rc;
@@ -375,7 +375,7 @@ int shadow_set_l1e(struct domain *d, sha
 
     old_sl1f = shadow_l1e_get_flags(old_sl1e);
     if ( (old_sl1f & _PAGE_PRESENT) && !sh_l1e_is_magic(old_sl1e) &&
-         shadow_mode_refcounts(d) )
+         paging_mode_refcounts(d) )
     {
         /*
          * We lost a reference to an old mfn.
--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -262,7 +262,7 @@ int shadow_set_l4e(struct domain *d, sha
 static void inline
 shadow_put_page_from_l1e(shadow_l1e_t sl1e, struct domain *d)
 {
-    if ( !shadow_mode_refcounts(d) )
+    if ( !paging_mode_refcounts(d) )
         return;
 
     put_page_from_l1e(sl1e, d);



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:40:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:40:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534990.832534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypHx-0007Xf-Qa; Tue, 16 May 2023 07:40:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534990.832534; Tue, 16 May 2023 07:40:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypHx-0007XY-M8; Tue, 16 May 2023 07:40:05 +0000
Received: by outflank-mailman (input) for mailman id 534990;
 Tue, 16 May 2023 07:40:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypHw-0006iO-FV
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:40:04 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20612.outbound.protection.outlook.com
 [2a01:111:f400:7d00::612])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dd8e2932-f3bc-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 09:40:01 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8988.eurprd04.prod.outlook.com (2603:10a6:20b:40b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:39:59 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:39:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd8e2932-f3bc-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c1145a07-1dfd-eafc-fc18-4b039e657bca@suse.com>
Date: Tue, 16 May 2023 09:39:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 05/12] x86/shadow: move OOS functions to their own file
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0190.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8988:EE_
X-MS-Office365-Filtering-Correlation-Id: fb4a8b18-8e91-4616-9dae-08db55e0bfe1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fb4a8b18-8e91-4616-9dae-08db55e0bfe1
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:39:58.9369
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8988

The code has been identified as HVM-only, and its main functions are
pretty well isolated. Move them to their own file. While moving, besides
making two functions non-static, do a few style adjustments, mainly
comment formatting, but leave the code otherwise untouched.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v2: Adjust SPDX to GPL-2.0-only. A few more style adjustments.

--- a/xen/arch/x86/mm/shadow/Makefile
+++ b/xen/arch/x86/mm/shadow/Makefile
@@ -1,6 +1,6 @@
 ifeq ($(CONFIG_SHADOW_PAGING),y)
 obj-y += common.o set.o
-obj-$(CONFIG_HVM) += hvm.o guest_2.o guest_3.o guest_4.o
+obj-$(CONFIG_HVM) += hvm.o guest_2.o guest_3.o guest_4.o oos.o
 obj-$(CONFIG_PV) += pv.o guest_4.o
 else
 obj-y += none.o
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -140,576 +140,6 @@ static int __init cf_check shadow_audit_
 __initcall(shadow_audit_key_init);
 #endif /* SHADOW_AUDIT */
 
-#if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
-/**************************************************************************/
-/* Out-of-sync shadows. */
-
-/* From time to time, we let a shadowed pagetable page go out of sync
- * with its shadow: the guest is allowed to write directly to the page,
- * and those writes are not synchronously reflected in the shadow.
- * This lets us avoid many emulations if the guest is writing a lot to a
- * pagetable, but it relaxes a pretty important invariant in the shadow
- * pagetable design.  Therefore, some rules:
- *
- * 1. Only L1 pagetables may go out of sync: any page that is shadowed
- *    at at higher level must be synchronously updated.  This makes
- *    using linear shadow pagetables much less dangerous.
- *    That means that: (a) unsyncing code needs to check for higher-level
- *    shadows, and (b) promotion code needs to resync.
- *
- * 2. All shadow operations on a guest page require the page to be brought
- *    back into sync before proceeding.  This must be done under the
- *    paging lock so that the page is guaranteed to remain synced until
- *    the operation completes.
- *
- *    Exceptions to this rule: the pagefault and invlpg handlers may
- *    update only one entry on an out-of-sync page without resyncing it.
- *
- * 3. Operations on shadows that do not start from a guest page need to
- *    be aware that they may be handling an out-of-sync shadow.
- *
- * 4. Operations that do not normally take the paging lock (fast-path
- *    #PF handler, INVLPG) must fall back to a locking, syncing version
- *    if they see an out-of-sync table.
- *
- * 5. Operations corresponding to guest TLB flushes (MOV CR3, INVLPG)
- *    must explicitly resync all relevant pages or update their
- *    shadows.
- *
- * Currently out-of-sync pages are listed in a simple open-addressed
- * hash table with a second chance (must resist temptation to radically
- * over-engineer hash tables...)  The virtual address of the access
- * which caused us to unsync the page is also kept in the hash table, as
- * a hint for finding the writable mappings later.
- *
- * We keep a hash per vcpu, because we want as much as possible to do
- * the re-sync on the save vcpu we did the unsync on, so the VA hint
- * will be valid.
- */
-
-static void sh_oos_audit(struct domain *d)
-{
-    unsigned int idx, expected_idx, expected_idx_alt;
-    struct page_info *pg;
-    struct vcpu *v;
-
-    for_each_vcpu(d, v)
-    {
-        for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
-        {
-            mfn_t *oos = v->arch.paging.shadow.oos;
-            if ( mfn_eq(oos[idx], INVALID_MFN) )
-                continue;
-
-            expected_idx = mfn_x(oos[idx]) % SHADOW_OOS_PAGES;
-            expected_idx_alt = ((expected_idx + 1) % SHADOW_OOS_PAGES);
-            if ( idx != expected_idx && idx != expected_idx_alt )
-            {
-                printk("%s: idx %x contains gmfn %lx, expected at %x or %x.\n",
-                       __func__, idx, mfn_x(oos[idx]),
-                       expected_idx, expected_idx_alt);
-                BUG();
-            }
-            pg = mfn_to_page(oos[idx]);
-            if ( !(pg->count_info & PGC_shadowed_pt) )
-            {
-                printk("%s: idx %x gmfn %lx not a pt (count %lx)\n",
-                       __func__, idx, mfn_x(oos[idx]), pg->count_info);
-                BUG();
-            }
-            if ( !(pg->shadow_flags & SHF_out_of_sync) )
-            {
-                printk("%s: idx %x gmfn %lx not marked oos (flags %x)\n",
-                       __func__, idx, mfn_x(oos[idx]), pg->shadow_flags);
-                BUG();
-            }
-            if ( (pg->shadow_flags & SHF_page_type_mask & ~SHF_L1_ANY) )
-            {
-                printk("%s: idx %x gmfn %lx shadowed as non-l1 (flags %x)\n",
-                       __func__, idx, mfn_x(oos[idx]), pg->shadow_flags);
-                BUG();
-            }
-        }
-    }
-}
-
-#if SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES
-void oos_audit_hash_is_present(struct domain *d, mfn_t gmfn)
-{
-    int idx;
-    struct vcpu *v;
-    mfn_t *oos;
-
-    ASSERT(mfn_is_out_of_sync(gmfn));
-
-    for_each_vcpu(d, v)
-    {
-        oos = v->arch.paging.shadow.oos;
-        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
-        if ( !mfn_eq(oos[idx], gmfn) )
-            idx = (idx + 1) % SHADOW_OOS_PAGES;
-
-        if ( mfn_eq(oos[idx], gmfn) )
-            return;
-    }
-
-    printk(XENLOG_ERR "gmfn %"PRI_mfn" marked OOS but not in hash table\n",
-           mfn_x(gmfn));
-    BUG();
-}
-#endif
-
-/* Update the shadow, but keep the page out of sync. */
-static inline void _sh_resync_l1(struct vcpu *v, mfn_t gmfn, mfn_t snpmfn)
-{
-    struct page_info *pg = mfn_to_page(gmfn);
-
-    ASSERT(mfn_valid(gmfn));
-    ASSERT(page_is_out_of_sync(pg));
-
-    /* Call out to the appropriate per-mode resyncing function */
-    if ( pg->shadow_flags & SHF_L1_32 )
-        SHADOW_INTERNAL_NAME(sh_resync_l1, 2)(v, gmfn, snpmfn);
-    else if ( pg->shadow_flags & SHF_L1_PAE )
-        SHADOW_INTERNAL_NAME(sh_resync_l1, 3)(v, gmfn, snpmfn);
-    else if ( pg->shadow_flags & SHF_L1_64 )
-        SHADOW_INTERNAL_NAME(sh_resync_l1, 4)(v, gmfn, snpmfn);
-}
-
-static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
-                                            mfn_t smfn, unsigned long off)
-{
-    ASSERT(mfn_valid(smfn));
-    ASSERT(mfn_valid(gmfn));
-
-    switch ( mfn_to_page(smfn)->u.sh.type )
-    {
-    case SH_type_l1_32_shadow:
-    case SH_type_fl1_32_shadow:
-        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 2)
-            (d, gmfn, smfn, off);
-
-    case SH_type_l1_pae_shadow:
-    case SH_type_fl1_pae_shadow:
-        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 3)
-            (d, gmfn, smfn, off);
-
-    case SH_type_l1_64_shadow:
-    case SH_type_fl1_64_shadow:
-        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 4)
-            (d, gmfn, smfn, off);
-
-    default:
-        return 0;
-    }
-}
-
-/*
- * Fixup arrays: We limit the maximum number of writable mappings to
- * SHADOW_OOS_FIXUPS and store enough information to remove them
- * quickly on resync.
- */
-
-static inline int oos_fixup_flush_gmfn(struct vcpu *v, mfn_t gmfn,
-                                       struct oos_fixup *fixup)
-{
-    struct domain *d = v->domain;
-    int i;
-    for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
-    {
-        if ( !mfn_eq(fixup->smfn[i], INVALID_MFN) )
-        {
-            sh_remove_write_access_from_sl1p(d, gmfn,
-                                             fixup->smfn[i],
-                                             fixup->off[i]);
-            fixup->smfn[i] = INVALID_MFN;
-        }
-    }
-
-    /* Always flush the TLBs. See comment on oos_fixup_add(). */
-    return 1;
-}
-
-void oos_fixup_add(struct domain *d, mfn_t gmfn,
-                   mfn_t smfn,  unsigned long off)
-{
-    int idx, next;
-    mfn_t *oos;
-    struct oos_fixup *oos_fixup;
-    struct vcpu *v;
-
-    perfc_incr(shadow_oos_fixup_add);
-
-    for_each_vcpu(d, v)
-    {
-        oos = v->arch.paging.shadow.oos;
-        oos_fixup = v->arch.paging.shadow.oos_fixup;
-        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
-        if ( !mfn_eq(oos[idx], gmfn) )
-            idx = (idx + 1) % SHADOW_OOS_PAGES;
-        if ( mfn_eq(oos[idx], gmfn) )
-        {
-            int i;
-            for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
-            {
-                if ( mfn_eq(oos_fixup[idx].smfn[i], smfn)
-                     && (oos_fixup[idx].off[i] == off) )
-                    return;
-            }
-
-            next = oos_fixup[idx].next;
-
-            if ( !mfn_eq(oos_fixup[idx].smfn[next], INVALID_MFN) )
-            {
-                TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_OOS_FIXUP_EVICT);
-
-                /* Reuse this slot and remove current writable mapping. */
-                sh_remove_write_access_from_sl1p(d, gmfn,
-                                                 oos_fixup[idx].smfn[next],
-                                                 oos_fixup[idx].off[next]);
-                perfc_incr(shadow_oos_fixup_evict);
-                /* We should flush the TLBs now, because we removed a
-                   writable mapping, but since the shadow is already
-                   OOS we have no problem if another vcpu write to
-                   this page table. We just have to be very careful to
-                   *always* flush the tlbs on resync. */
-            }
-
-            oos_fixup[idx].smfn[next] = smfn;
-            oos_fixup[idx].off[next] = off;
-            oos_fixup[idx].next = (next + 1) % SHADOW_OOS_FIXUPS;
-
-            TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_OOS_FIXUP_ADD);
-            return;
-        }
-    }
-
-    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not in hash table\n",
-           mfn_x(gmfn));
-    BUG();
-}
-
-static int oos_remove_write_access(struct vcpu *v, mfn_t gmfn,
-                                   struct oos_fixup *fixup)
-{
-    struct domain *d = v->domain;
-    int ftlb = 0;
-
-    ftlb |= oos_fixup_flush_gmfn(v, gmfn, fixup);
-
-    switch ( sh_remove_write_access(d, gmfn, 0, 0) )
-    {
-    default:
-    case 0:
-        break;
-
-    case 1:
-        ftlb |= 1;
-        break;
-
-    case -1:
-        /* An unfindable writeable typecount has appeared, probably via a
-         * grant table entry: can't shoot the mapping, so try to unshadow
-         * the page.  If that doesn't work either, the guest is granting
-         * his pagetables and must be killed after all.
-         * This will flush the tlb, so we can return with no worries. */
-        shadow_remove_all_shadows(d, gmfn);
-        return 1;
-    }
-
-    if ( ftlb )
-        guest_flush_tlb_mask(d, d->dirty_cpumask);
-
-    return 0;
-}
-
-
-static inline void trace_resync(int event, mfn_t gmfn)
-{
-    if ( tb_init_done )
-    {
-        /* Convert gmfn to gfn */
-        gfn_t gfn = mfn_to_gfn(current->domain, gmfn);
-
-        __trace_var(event, 0/*!tsc*/, sizeof(gfn), &gfn);
-    }
-}
-
-/* Pull all the entries on an out-of-sync page back into sync. */
-static void _sh_resync(struct vcpu *v, mfn_t gmfn,
-                       struct oos_fixup *fixup, mfn_t snp)
-{
-    struct page_info *pg = mfn_to_page(gmfn);
-
-    ASSERT(paging_locked_by_me(v->domain));
-    ASSERT(mfn_is_out_of_sync(gmfn));
-    /* Guest page must be shadowed *only* as L1 when out of sync. */
-    ASSERT(!(mfn_to_page(gmfn)->shadow_flags & SHF_page_type_mask
-             & ~SHF_L1_ANY));
-    ASSERT(!sh_page_has_multiple_shadows(mfn_to_page(gmfn)));
-
-    SHADOW_PRINTK("%pv gmfn=%"PRI_mfn"\n", v, mfn_x(gmfn));
-
-    /* Need to pull write access so the page *stays* in sync. */
-    if ( oos_remove_write_access(v, gmfn, fixup) )
-    {
-        /* Page has been unshadowed. */
-        return;
-    }
-
-    /* No more writable mappings of this page, please */
-    pg->shadow_flags &= ~SHF_oos_may_write;
-
-    /* Update the shadows with current guest entries. */
-    _sh_resync_l1(v, gmfn, snp);
-
-    /* Now we know all the entries are synced, and will stay that way */
-    pg->shadow_flags &= ~SHF_out_of_sync;
-    perfc_incr(shadow_resync);
-    trace_resync(TRC_SHADOW_RESYNC_FULL, gmfn);
-}
-
-
-/* Add an MFN to the list of out-of-sync guest pagetables */
-static void oos_hash_add(struct vcpu *v, mfn_t gmfn)
-{
-    int i, idx, oidx, swap = 0;
-    mfn_t *oos = v->arch.paging.shadow.oos;
-    mfn_t *oos_snapshot = v->arch.paging.shadow.oos_snapshot;
-    struct oos_fixup *oos_fixup = v->arch.paging.shadow.oos_fixup;
-    struct oos_fixup fixup = { .next = 0 };
-
-    for (i = 0; i < SHADOW_OOS_FIXUPS; i++ )
-        fixup.smfn[i] = INVALID_MFN;
-
-    idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
-    oidx = idx;
-
-    if ( !mfn_eq(oos[idx], INVALID_MFN)
-         && (mfn_x(oos[idx]) % SHADOW_OOS_PAGES) == idx )
-    {
-        /* Punt the current occupant into the next slot */
-        SWAP(oos[idx], gmfn);
-        SWAP(oos_fixup[idx], fixup);
-        swap = 1;
-        idx = (idx + 1) % SHADOW_OOS_PAGES;
-    }
-    if ( !mfn_eq(oos[idx], INVALID_MFN) )
-    {
-        /* Crush the current occupant. */
-        _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
-        perfc_incr(shadow_unsync_evict);
-    }
-    oos[idx] = gmfn;
-    oos_fixup[idx] = fixup;
-
-    if ( swap )
-        SWAP(oos_snapshot[idx], oos_snapshot[oidx]);
-
-    copy_domain_page(oos_snapshot[oidx], oos[oidx]);
-}
-
-/* Remove an MFN from the list of out-of-sync guest pagetables */
-static void oos_hash_remove(struct domain *d, mfn_t gmfn)
-{
-    int idx;
-    mfn_t *oos;
-    struct vcpu *v;
-
-    SHADOW_PRINTK("d%d gmfn %lx\n", d->domain_id, mfn_x(gmfn));
-
-    for_each_vcpu(d, v)
-    {
-        oos = v->arch.paging.shadow.oos;
-        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
-        if ( !mfn_eq(oos[idx], gmfn) )
-            idx = (idx + 1) % SHADOW_OOS_PAGES;
-        if ( mfn_eq(oos[idx], gmfn) )
-        {
-            oos[idx] = INVALID_MFN;
-            return;
-        }
-    }
-
-    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not in hash table\n",
-           mfn_x(gmfn));
-    BUG();
-}
-
-mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
-{
-    int idx;
-    mfn_t *oos;
-    mfn_t *oos_snapshot;
-    struct vcpu *v;
-
-    for_each_vcpu(d, v)
-    {
-        oos = v->arch.paging.shadow.oos;
-        oos_snapshot = v->arch.paging.shadow.oos_snapshot;
-        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
-        if ( !mfn_eq(oos[idx], gmfn) )
-            idx = (idx + 1) % SHADOW_OOS_PAGES;
-        if ( mfn_eq(oos[idx], gmfn) )
-        {
-            return oos_snapshot[idx];
-        }
-    }
-
-    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not in hash table\n",
-           mfn_x(gmfn));
-    BUG();
-}
-
-/* Pull a single guest page back into sync */
-void sh_resync(struct domain *d, mfn_t gmfn)
-{
-    int idx;
-    mfn_t *oos;
-    mfn_t *oos_snapshot;
-    struct oos_fixup *oos_fixup;
-    struct vcpu *v;
-
-    for_each_vcpu(d, v)
-    {
-        oos = v->arch.paging.shadow.oos;
-        oos_fixup = v->arch.paging.shadow.oos_fixup;
-        oos_snapshot = v->arch.paging.shadow.oos_snapshot;
-        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
-        if ( !mfn_eq(oos[idx], gmfn) )
-            idx = (idx + 1) % SHADOW_OOS_PAGES;
-
-        if ( mfn_eq(oos[idx], gmfn) )
-        {
-            _sh_resync(v, gmfn, &oos_fixup[idx], oos_snapshot[idx]);
-            oos[idx] = INVALID_MFN;
-            return;
-        }
-    }
-
-    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not in hash table\n",
-           mfn_x(gmfn));
-    BUG();
-}
-
-/* Figure out whether it's definitely safe not to sync this l1 table,
- * by making a call out to the mode in which that shadow was made. */
-static int sh_skip_sync(struct vcpu *v, mfn_t gl1mfn)
-{
-    struct page_info *pg = mfn_to_page(gl1mfn);
-    if ( pg->shadow_flags & SHF_L1_32 )
-        return SHADOW_INTERNAL_NAME(sh_safe_not_to_sync, 2)(v, gl1mfn);
-    else if ( pg->shadow_flags & SHF_L1_PAE )
-        return SHADOW_INTERNAL_NAME(sh_safe_not_to_sync, 3)(v, gl1mfn);
-    else if ( pg->shadow_flags & SHF_L1_64 )
-        return SHADOW_INTERNAL_NAME(sh_safe_not_to_sync, 4)(v, gl1mfn);
-    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not shadowed as an l1\n",
-           mfn_x(gl1mfn));
-    BUG();
-}
-
-
-/* Pull all out-of-sync pages back into sync.  Pages brought out of sync
- * on other vcpus are allowed to remain out of sync, but their contents
- * will be made safe (TLB flush semantics); pages unsynced by this vcpu
- * are brought back into sync and write-protected.  If skip != 0, we try
- * to avoid resyncing at all if we think we can get away with it. */
-void sh_resync_all(struct vcpu *v, int skip, int this, int others)
-{
-    int idx;
-    struct vcpu *other;
-    mfn_t *oos = v->arch.paging.shadow.oos;
-    mfn_t *oos_snapshot = v->arch.paging.shadow.oos_snapshot;
-    struct oos_fixup *oos_fixup = v->arch.paging.shadow.oos_fixup;
-
-    SHADOW_PRINTK("%pv\n", v);
-
-    ASSERT(paging_locked_by_me(v->domain));
-
-    if ( !this )
-        goto resync_others;
-
-    /* First: resync all of this vcpu's oos pages */
-    for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
-        if ( !mfn_eq(oos[idx], INVALID_MFN) )
-        {
-            /* Write-protect and sync contents */
-            _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
-            oos[idx] = INVALID_MFN;
-        }
-
- resync_others:
-    if ( !others )
-        return;
-
-    /* Second: make all *other* vcpus' oos pages safe. */
-    for_each_vcpu(v->domain, other)
-    {
-        if ( v == other )
-            continue;
-
-        oos = other->arch.paging.shadow.oos;
-        oos_fixup = other->arch.paging.shadow.oos_fixup;
-        oos_snapshot = other->arch.paging.shadow.oos_snapshot;
-
-        for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
-        {
-            if ( mfn_eq(oos[idx], INVALID_MFN) )
-                continue;
-
-            if ( skip )
-            {
-                /* Update the shadows and leave the page OOS. */
-                if ( sh_skip_sync(v, oos[idx]) )
-                    continue;
-                trace_resync(TRC_SHADOW_RESYNC_ONLY, oos[idx]);
-                _sh_resync_l1(other, oos[idx], oos_snapshot[idx]);
-            }
-            else
-            {
-                /* Write-protect and sync contents */
-                _sh_resync(other, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
-                oos[idx] = INVALID_MFN;
-            }
-        }
-    }
-}
-
-/* Allow a shadowed page to go out of sync. Unsyncs are traced in
- * multi.c:sh_page_fault() */
-int sh_unsync(struct vcpu *v, mfn_t gmfn)
-{
-    struct page_info *pg;
-
-    ASSERT(paging_locked_by_me(v->domain));
-
-    SHADOW_PRINTK("%pv gmfn=%"PRI_mfn"\n", v, mfn_x(gmfn));
-
-    pg = mfn_to_page(gmfn);
-
-    /* Guest page must be shadowed *only* as L1 and *only* once when out
-     * of sync.  Also, get out now if it's already out of sync.
-     * Also, can't safely unsync if some vcpus have paging disabled.*/
-    if ( pg->shadow_flags &
-         ((SHF_page_type_mask & ~SHF_L1_ANY) | SHF_out_of_sync)
-         || sh_page_has_multiple_shadows(pg)
-         || !is_hvm_vcpu(v)
-         || !v->domain->arch.paging.shadow.oos_active )
-        return 0;
-
-    BUILD_BUG_ON(!(typeof(pg->shadow_flags))SHF_out_of_sync);
-    BUILD_BUG_ON(!(typeof(pg->shadow_flags))SHF_oos_may_write);
-
-    pg->shadow_flags |= SHF_out_of_sync|SHF_oos_may_write;
-    oos_hash_add(v, gmfn);
-    perfc_incr(shadow_unsync);
-    TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_UNSYNC);
-    return 1;
-}
-
-#endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
-
-
 /**************************************************************************/
 /* Code for "promoting" a guest page to the point where the shadow code is
  * willing to let it be treated as a guest page table.  This generally
--- /dev/null
+++ b/xen/arch/x86/mm/shadow/oos.c
@@ -0,0 +1,606 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/******************************************************************************
+ * arch/x86/mm/shadow/oos.c
+ *
+ * Shadow code dealing with out-of-sync shadows.
+ * Parts of this code are Copyright (c) 2006 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ */
+
+#include "private.h"
+
+#if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
+
+#include <xen/trace.h>
+
+#include <asm/shadow.h>
+
+/*
+ * From time to time, we let a shadowed pagetable page go out of sync
+ * with its shadow: the guest is allowed to write directly to the page,
+ * and those writes are not synchronously reflected in the shadow.
+ * This lets us avoid many emulations if the guest is writing a lot to a
+ * pagetable, but it relaxes a pretty important invariant in the shadow
+ * pagetable design.  Therefore, some rules:
+ *
+ * 1. Only L1 pagetables may go out of sync: any page that is shadowed
+ *    at a higher level must be synchronously updated.  This makes
+ *    using linear shadow pagetables much less dangerous.
+ *    That means that: (a) unsyncing code needs to check for higher-level
+ *    shadows, and (b) promotion code needs to resync.
+ *
+ * 2. All shadow operations on a guest page require the page to be brought
+ *    back into sync before proceeding.  This must be done under the
+ *    paging lock so that the page is guaranteed to remain synced until
+ *    the operation completes.
+ *
+ *    Exceptions to this rule: the pagefault and invlpg handlers may
+ *    update only one entry on an out-of-sync page without resyncing it.
+ *
+ * 3. Operations on shadows that do not start from a guest page need to
+ *    be aware that they may be handling an out-of-sync shadow.
+ *
+ * 4. Operations that do not normally take the paging lock (fast-path
+ *    #PF handler, INVLPG) must fall back to a locking, syncing version
+ *    if they see an out-of-sync table.
+ *
+ * 5. Operations corresponding to guest TLB flushes (MOV CR3, INVLPG)
+ *    must explicitly resync all relevant pages or update their
+ *    shadows.
+ *
+ * Currently out-of-sync pages are listed in a simple open-addressed
+ * hash table with a second chance (must resist temptation to radically
+ * over-engineer hash tables...)  The virtual address of the access
+ * which caused us to unsync the page is also kept in the hash table, as
+ * a hint for finding the writable mappings later.
+ *
+ * We keep a hash per vcpu, because we want as much as possible to do
+ * the re-sync on the same vcpu we did the unsync on, so the VA hint
+ * will be valid.
+ */
+
+#if SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES_FULL
+void sh_oos_audit(struct domain *d)
+{
+    unsigned int idx, expected_idx, expected_idx_alt;
+    struct page_info *pg;
+    struct vcpu *v;
+
+    for_each_vcpu(d, v)
+    {
+        for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
+        {
+            mfn_t *oos = v->arch.paging.shadow.oos;
+
+            if ( mfn_eq(oos[idx], INVALID_MFN) )
+                continue;
+
+            expected_idx = mfn_x(oos[idx]) % SHADOW_OOS_PAGES;
+            expected_idx_alt = ((expected_idx + 1) % SHADOW_OOS_PAGES);
+            if ( idx != expected_idx && idx != expected_idx_alt )
+            {
+                printk("%s: idx %x contains gmfn %lx, expected at %x or %x.\n",
+                       __func__, idx, mfn_x(oos[idx]),
+                       expected_idx, expected_idx_alt);
+                BUG();
+            }
+            pg = mfn_to_page(oos[idx]);
+            if ( !(pg->count_info & PGC_shadowed_pt) )
+            {
+                printk("%s: idx %x gmfn %lx not a pt (count %lx)\n",
+                       __func__, idx, mfn_x(oos[idx]), pg->count_info);
+                BUG();
+            }
+            if ( !(pg->shadow_flags & SHF_out_of_sync) )
+            {
+                printk("%s: idx %x gmfn %lx not marked oos (flags %x)\n",
+                       __func__, idx, mfn_x(oos[idx]), pg->shadow_flags);
+                BUG();
+            }
+            if ( (pg->shadow_flags & SHF_page_type_mask & ~SHF_L1_ANY) )
+            {
+                printk("%s: idx %x gmfn %lx shadowed as non-l1 (flags %x)\n",
+                       __func__, idx, mfn_x(oos[idx]), pg->shadow_flags);
+                BUG();
+            }
+        }
+    }
+}
+#endif
+
+#if SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES
+void oos_audit_hash_is_present(struct domain *d, mfn_t gmfn)
+{
+    int idx;
+    struct vcpu *v;
+    mfn_t *oos;
+
+    ASSERT(mfn_is_out_of_sync(gmfn));
+
+    for_each_vcpu(d, v)
+    {
+        oos = v->arch.paging.shadow.oos;
+        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
+        if ( !mfn_eq(oos[idx], gmfn) )
+            idx = (idx + 1) % SHADOW_OOS_PAGES;
+
+        if ( mfn_eq(oos[idx], gmfn) )
+            return;
+    }
+
+    printk(XENLOG_ERR "gmfn %"PRI_mfn" marked OOS but not in hash table\n",
+           mfn_x(gmfn));
+    BUG();
+}
+#endif
+
+/* Update the shadow, but keep the page out of sync. */
+static inline void _sh_resync_l1(struct vcpu *v, mfn_t gmfn, mfn_t snpmfn)
+{
+    struct page_info *pg = mfn_to_page(gmfn);
+
+    ASSERT(mfn_valid(gmfn));
+    ASSERT(page_is_out_of_sync(pg));
+
+    /* Call out to the appropriate per-mode resyncing function */
+    if ( pg->shadow_flags & SHF_L1_32 )
+        SHADOW_INTERNAL_NAME(sh_resync_l1, 2)(v, gmfn, snpmfn);
+    else if ( pg->shadow_flags & SHF_L1_PAE )
+        SHADOW_INTERNAL_NAME(sh_resync_l1, 3)(v, gmfn, snpmfn);
+    else if ( pg->shadow_flags & SHF_L1_64 )
+        SHADOW_INTERNAL_NAME(sh_resync_l1, 4)(v, gmfn, snpmfn);
+}
+
+static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
+                                            mfn_t smfn, unsigned long off)
+{
+    ASSERT(mfn_valid(smfn));
+    ASSERT(mfn_valid(gmfn));
+
+    switch ( mfn_to_page(smfn)->u.sh.type )
+    {
+    case SH_type_l1_32_shadow:
+    case SH_type_fl1_32_shadow:
+        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 2)
+            (d, gmfn, smfn, off);
+
+    case SH_type_l1_pae_shadow:
+    case SH_type_fl1_pae_shadow:
+        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 3)
+            (d, gmfn, smfn, off);
+
+    case SH_type_l1_64_shadow:
+    case SH_type_fl1_64_shadow:
+        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 4)
+            (d, gmfn, smfn, off);
+
+    default:
+        return 0;
+    }
+}
+
+/*
+ * Fixup arrays: We limit the maximum number of writable mappings to
+ * SHADOW_OOS_FIXUPS and store enough information to remove them
+ * quickly on resync.
+ */
+
+static inline int oos_fixup_flush_gmfn(struct vcpu *v, mfn_t gmfn,
+                                       struct oos_fixup *fixup)
+{
+    struct domain *d = v->domain;
+    int i;
+    for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
+    {
+        if ( !mfn_eq(fixup->smfn[i], INVALID_MFN) )
+        {
+            sh_remove_write_access_from_sl1p(d, gmfn,
+                                             fixup->smfn[i],
+                                             fixup->off[i]);
+            fixup->smfn[i] = INVALID_MFN;
+        }
+    }
+
+    /* Always flush the TLBs. See comment on oos_fixup_add(). */
+    return 1;
+}
+
+void oos_fixup_add(struct domain *d, mfn_t gmfn,
+                   mfn_t smfn,  unsigned long off)
+{
+    int idx, next;
+    mfn_t *oos;
+    struct oos_fixup *oos_fixup;
+    struct vcpu *v;
+
+    perfc_incr(shadow_oos_fixup_add);
+
+    for_each_vcpu(d, v)
+    {
+        oos = v->arch.paging.shadow.oos;
+        oos_fixup = v->arch.paging.shadow.oos_fixup;
+        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
+        if ( !mfn_eq(oos[idx], gmfn) )
+            idx = (idx + 1) % SHADOW_OOS_PAGES;
+        if ( mfn_eq(oos[idx], gmfn) )
+        {
+            int i;
+            for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
+            {
+                if ( mfn_eq(oos_fixup[idx].smfn[i], smfn) &&
+                     (oos_fixup[idx].off[i] == off) )
+                    return;
+            }
+
+            next = oos_fixup[idx].next;
+
+            if ( !mfn_eq(oos_fixup[idx].smfn[next], INVALID_MFN) )
+            {
+                TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_OOS_FIXUP_EVICT);
+
+                /* Reuse this slot and remove current writable mapping. */
+                sh_remove_write_access_from_sl1p(d, gmfn,
+                                                 oos_fixup[idx].smfn[next],
+                                                 oos_fixup[idx].off[next]);
+                perfc_incr(shadow_oos_fixup_evict);
+                /*
+                 * We should flush the TLBs now, because we removed a
+                 * writable mapping, but since the shadow is already
+                 * OOS we have no problem if another vcpu writes to
+                 * this page table. We just have to be very careful to
+                 * *always* flush the TLBs on resync.
+                 */
+            }
+
+            oos_fixup[idx].smfn[next] = smfn;
+            oos_fixup[idx].off[next] = off;
+            oos_fixup[idx].next = (next + 1) % SHADOW_OOS_FIXUPS;
+
+            TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_OOS_FIXUP_ADD);
+            return;
+        }
+    }
+
+    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not in hash table\n",
+           mfn_x(gmfn));
+    BUG();
+}
+
+static int oos_remove_write_access(struct vcpu *v, mfn_t gmfn,
+                                   struct oos_fixup *fixup)
+{
+    struct domain *d = v->domain;
+    int ftlb = 0;
+
+    ftlb |= oos_fixup_flush_gmfn(v, gmfn, fixup);
+
+    switch ( sh_remove_write_access(d, gmfn, 0, 0) )
+    {
+    default:
+    case 0:
+        break;
+
+    case 1:
+        ftlb |= 1;
+        break;
+
+    case -1:
+        /*
+         * An unfindable writeable typecount has appeared, probably via a
+         * grant table entry: can't shoot the mapping, so try to unshadow
+         * the page.  If that doesn't work either, the guest is granting
+         * access to its pagetables and must be killed after all.
+         * This will flush the TLB, so we can return with no worries.
+         */
+        shadow_remove_all_shadows(d, gmfn);
+        return 1;
+    }
+
+    if ( ftlb )
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
+
+    return 0;
+}
+
+static inline void trace_resync(int event, mfn_t gmfn)
+{
+    if ( tb_init_done )
+    {
+        /* Convert gmfn to gfn */
+        gfn_t gfn = mfn_to_gfn(current->domain, gmfn);
+
+        __trace_var(event, 0/*!tsc*/, sizeof(gfn), &gfn);
+    }
+}
+
+/* Pull all the entries on an out-of-sync page back into sync. */
+static void _sh_resync(struct vcpu *v, mfn_t gmfn,
+                       struct oos_fixup *fixup, mfn_t snp)
+{
+    struct page_info *pg = mfn_to_page(gmfn);
+
+    ASSERT(paging_locked_by_me(v->domain));
+    ASSERT(mfn_is_out_of_sync(gmfn));
+    /* Guest page must be shadowed *only* as L1 when out of sync. */
+    ASSERT(!(mfn_to_page(gmfn)->shadow_flags & SHF_page_type_mask
+             & ~SHF_L1_ANY));
+    ASSERT(!sh_page_has_multiple_shadows(mfn_to_page(gmfn)));
+
+    SHADOW_PRINTK("%pv gmfn=%"PRI_mfn"\n", v, mfn_x(gmfn));
+
+    /* Need to pull write access so the page *stays* in sync. */
+    if ( oos_remove_write_access(v, gmfn, fixup) )
+    {
+        /* Page has been unshadowed. */
+        return;
+    }
+
+    /* No more writable mappings of this page, please */
+    pg->shadow_flags &= ~SHF_oos_may_write;
+
+    /* Update the shadows with current guest entries. */
+    _sh_resync_l1(v, gmfn, snp);
+
+    /* Now we know all the entries are synced, and will stay that way */
+    pg->shadow_flags &= ~SHF_out_of_sync;
+    perfc_incr(shadow_resync);
+    trace_resync(TRC_SHADOW_RESYNC_FULL, gmfn);
+}
+
+/* Add an MFN to the list of out-of-sync guest pagetables */
+static void oos_hash_add(struct vcpu *v, mfn_t gmfn)
+{
+    int i, idx, oidx, swap = 0;
+    mfn_t *oos = v->arch.paging.shadow.oos;
+    mfn_t *oos_snapshot = v->arch.paging.shadow.oos_snapshot;
+    struct oos_fixup *oos_fixup = v->arch.paging.shadow.oos_fixup;
+    struct oos_fixup fixup = { .next = 0 };
+
+    for ( i = 0; i < SHADOW_OOS_FIXUPS; i++ )
+        fixup.smfn[i] = INVALID_MFN;
+
+    idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
+    oidx = idx;
+
+    if ( !mfn_eq(oos[idx], INVALID_MFN) &&
+         (mfn_x(oos[idx]) % SHADOW_OOS_PAGES) == idx )
+    {
+        /* Punt the current occupant into the next slot */
+        SWAP(oos[idx], gmfn);
+        SWAP(oos_fixup[idx], fixup);
+        swap = 1;
+        idx = (idx + 1) % SHADOW_OOS_PAGES;
+    }
+    if ( !mfn_eq(oos[idx], INVALID_MFN) )
+    {
+        /* Crush the current occupant. */
+        _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
+        perfc_incr(shadow_unsync_evict);
+    }
+    oos[idx] = gmfn;
+    oos_fixup[idx] = fixup;
+
+    if ( swap )
+        SWAP(oos_snapshot[idx], oos_snapshot[oidx]);
+
+    copy_domain_page(oos_snapshot[oidx], oos[oidx]);
+}
+
+/* Remove an MFN from the list of out-of-sync guest pagetables */
+void oos_hash_remove(struct domain *d, mfn_t gmfn)
+{
+    int idx;
+    mfn_t *oos;
+    struct vcpu *v;
+
+    SHADOW_PRINTK("d%d gmfn %lx\n", d->domain_id, mfn_x(gmfn));
+
+    for_each_vcpu(d, v)
+    {
+        oos = v->arch.paging.shadow.oos;
+        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
+        if ( !mfn_eq(oos[idx], gmfn) )
+            idx = (idx + 1) % SHADOW_OOS_PAGES;
+        if ( mfn_eq(oos[idx], gmfn) )
+        {
+            oos[idx] = INVALID_MFN;
+            return;
+        }
+    }
+
+    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not in hash table\n",
+           mfn_x(gmfn));
+    BUG();
+}
+
+mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn)
+{
+    int idx;
+    mfn_t *oos;
+    mfn_t *oos_snapshot;
+    struct vcpu *v;
+
+    for_each_vcpu(d, v)
+    {
+        oos = v->arch.paging.shadow.oos;
+        oos_snapshot = v->arch.paging.shadow.oos_snapshot;
+        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
+        if ( !mfn_eq(oos[idx], gmfn) )
+            idx = (idx + 1) % SHADOW_OOS_PAGES;
+        if ( mfn_eq(oos[idx], gmfn) )
+        {
+            return oos_snapshot[idx];
+        }
+    }
+
+    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not in hash table\n",
+           mfn_x(gmfn));
+    BUG();
+}
+
+/* Pull a single guest page back into sync */
+void sh_resync(struct domain *d, mfn_t gmfn)
+{
+    int idx;
+    mfn_t *oos;
+    mfn_t *oos_snapshot;
+    struct oos_fixup *oos_fixup;
+    struct vcpu *v;
+
+    for_each_vcpu(d, v)
+    {
+        oos = v->arch.paging.shadow.oos;
+        oos_fixup = v->arch.paging.shadow.oos_fixup;
+        oos_snapshot = v->arch.paging.shadow.oos_snapshot;
+        idx = mfn_x(gmfn) % SHADOW_OOS_PAGES;
+        if ( !mfn_eq(oos[idx], gmfn) )
+            idx = (idx + 1) % SHADOW_OOS_PAGES;
+
+        if ( mfn_eq(oos[idx], gmfn) )
+        {
+            _sh_resync(v, gmfn, &oos_fixup[idx], oos_snapshot[idx]);
+            oos[idx] = INVALID_MFN;
+            return;
+        }
+    }
+
+    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not in hash table\n",
+           mfn_x(gmfn));
+    BUG();
+}
+
+/*
+ * Figure out whether it's definitely safe not to sync this l1 table,
+ * by making a call out to the mode in which that shadow was made.
+ */
+static int sh_skip_sync(struct vcpu *v, mfn_t gl1mfn)
+{
+    struct page_info *pg = mfn_to_page(gl1mfn);
+
+    if ( pg->shadow_flags & SHF_L1_32 )
+        return SHADOW_INTERNAL_NAME(sh_safe_not_to_sync, 2)(v, gl1mfn);
+    else if ( pg->shadow_flags & SHF_L1_PAE )
+        return SHADOW_INTERNAL_NAME(sh_safe_not_to_sync, 3)(v, gl1mfn);
+    else if ( pg->shadow_flags & SHF_L1_64 )
+        return SHADOW_INTERNAL_NAME(sh_safe_not_to_sync, 4)(v, gl1mfn);
+
+    printk(XENLOG_ERR "gmfn %"PRI_mfn" was OOS but not shadowed as an l1\n",
+           mfn_x(gl1mfn));
+    BUG();
+}
+
+/*
+ * Pull all out-of-sync pages back into sync.  Pages brought out of sync
+ * on other vcpus are allowed to remain out of sync, but their contents
+ * will be made safe (TLB flush semantics); pages unsynced by this vcpu
+ * are brought back into sync and write-protected.  If skip != 0, we try
+ * to avoid resyncing at all if we think we can get away with it.
+ */
+void sh_resync_all(struct vcpu *v, int skip, int this, int others)
+{
+    int idx;
+    struct vcpu *other;
+    mfn_t *oos = v->arch.paging.shadow.oos;
+    mfn_t *oos_snapshot = v->arch.paging.shadow.oos_snapshot;
+    struct oos_fixup *oos_fixup = v->arch.paging.shadow.oos_fixup;
+
+    SHADOW_PRINTK("%pv\n", v);
+
+    ASSERT(paging_locked_by_me(v->domain));
+
+    if ( !this )
+        goto resync_others;
+
+    /* First: resync all of this vcpu's oos pages */
+    for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
+        if ( !mfn_eq(oos[idx], INVALID_MFN) )
+        {
+            /* Write-protect and sync contents */
+            _sh_resync(v, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
+            oos[idx] = INVALID_MFN;
+        }
+
+ resync_others:
+    if ( !others )
+        return;
+
+    /* Second: make all *other* vcpus' oos pages safe. */
+    for_each_vcpu(v->domain, other)
+    {
+        if ( v == other )
+            continue;
+
+        oos = other->arch.paging.shadow.oos;
+        oos_fixup = other->arch.paging.shadow.oos_fixup;
+        oos_snapshot = other->arch.paging.shadow.oos_snapshot;
+
+        for ( idx = 0; idx < SHADOW_OOS_PAGES; idx++ )
+        {
+            if ( mfn_eq(oos[idx], INVALID_MFN) )
+                continue;
+
+            if ( skip )
+            {
+                /* Update the shadows and leave the page OOS. */
+                if ( sh_skip_sync(v, oos[idx]) )
+                    continue;
+                trace_resync(TRC_SHADOW_RESYNC_ONLY, oos[idx]);
+                _sh_resync_l1(other, oos[idx], oos_snapshot[idx]);
+            }
+            else
+            {
+                /* Write-protect and sync contents */
+                _sh_resync(other, oos[idx], &oos_fixup[idx], oos_snapshot[idx]);
+                oos[idx] = INVALID_MFN;
+            }
+        }
+    }
+}
+
+/*
+ * Allow a shadowed page to go out of sync. Unsyncs are traced in
+ * multi.c:sh_page_fault()
+ */
+int sh_unsync(struct vcpu *v, mfn_t gmfn)
+{
+    struct page_info *pg;
+
+    ASSERT(paging_locked_by_me(v->domain));
+
+    SHADOW_PRINTK("%pv gmfn=%"PRI_mfn"\n", v, mfn_x(gmfn));
+
+    pg = mfn_to_page(gmfn);
+
+    /*
+     * Guest page must be shadowed *only* as L1 and *only* once when out
+     * of sync.  Also, get out now if it's already out of sync.
+     * Also, can't safely unsync if some vcpus have paging disabled.
+     */
+    if ( (pg->shadow_flags &
+          ((SHF_page_type_mask & ~SHF_L1_ANY) | SHF_out_of_sync)) ||
+         sh_page_has_multiple_shadows(pg) ||
+         !is_hvm_vcpu(v) ||
+         !v->domain->arch.paging.shadow.oos_active )
+        return 0;
+
+    BUILD_BUG_ON(!(typeof(pg->shadow_flags))SHF_out_of_sync);
+    BUILD_BUG_ON(!(typeof(pg->shadow_flags))SHF_oos_may_write);
+
+    pg->shadow_flags |= SHF_out_of_sync|SHF_oos_may_write;
+    oos_hash_add(v, gmfn);
+    perfc_incr(shadow_unsync);
+    TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_UNSYNC);
+    return 1;
+}
+
+#endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -439,6 +439,7 @@ int sh_unsync(struct vcpu *v, mfn_t gmfn
 /* Pull an out-of-sync page back into sync. */
 void sh_resync(struct domain *d, mfn_t gmfn);
 
+void oos_hash_remove(struct domain *d, mfn_t gmfn);
 void oos_fixup_add(struct domain *d, mfn_t gmfn, mfn_t smfn, unsigned long off);
 
 /* Pull all out-of-sync shadows back into sync.  If skip != 0, we try
@@ -464,6 +465,7 @@ shadow_sync_other_vcpus(struct vcpu *v)
     sh_resync_all(v, 1 /* skip */, 0 /* this */, 1 /* others */);
 }
 
+void sh_oos_audit(struct domain *d);
 void oos_audit_hash_is_present(struct domain *d, mfn_t gmfn);
 mfn_t oos_snapshot_lookup(struct domain *d, mfn_t gmfn);
 



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:40:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:40:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.534992.832544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypIK-00084P-7G; Tue, 16 May 2023 07:40:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 534992.832544; Tue, 16 May 2023 07:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypIK-00084I-47; Tue, 16 May 2023 07:40:28 +0000
Received: by outflank-mailman (input) for mailman id 534992;
 Tue, 16 May 2023 07:40:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypIJ-0006iO-Eu
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:40:27 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20611.outbound.protection.outlook.com
 [2a01:111:f400:7d00::611])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ebc29790-f3bc-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 09:40:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8988.eurprd04.prod.outlook.com (2603:10a6:20b:40b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:40:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:40:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebc29790-f3bc-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3c05fb6c-f71e-1b86-6146-96f2b3f3c9ae@suse.com>
Date: Tue, 16 May 2023 09:40:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 06/12] x86/shadow: restrict OOS allocation to when it's
 really needed
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0051.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8988:EE_
X-MS-Office365-Filtering-Correlation-Id: 79688538-1ef5-4160-2166-08db55e0cf19
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 79688538-1ef5-4160-2166-08db55e0cf19
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:40:23.9683
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EWFBATjjxIMaX0RjuwXKakxAVcoHyhwHDSQmItUAl6PEISR8JkNwMl+vlZyCj7MeLKmNGn9dhLwdV+sajJ1Qzw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8988

PV domains won't use it, and even HVM ones won't when OOS is turned off
for them. There's therefore no point in putting extra pressure on the
(limited) pool of memory.

While there also zap the sh_type_to_size[] entry when OOS is disabled
altogether.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -61,7 +61,9 @@ const uint8_t sh_type_to_size[] = {
     [SH_type_l4_64_shadow]   = 1,
     [SH_type_p2m_table]      = 1,
     [SH_type_monitor_table]  = 1,
+#if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
     [SH_type_oos_snapshot]   = 1,
+#endif
 };
 #endif /* CONFIG_HVM */
 
@@ -1771,7 +1773,8 @@ static void sh_update_paging_modes(struc
 #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_VIRTUAL_TLB) */
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
-    if ( mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
+    if ( !(d->options & XEN_DOMCTL_CDF_oos_off) &&
+         mfn_eq(v->arch.paging.shadow.oos_snapshot[0], INVALID_MFN) )
     {
         int i;
 



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:40:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:40:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535000.832554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypIm-0000KU-GH; Tue, 16 May 2023 07:40:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535000.832554; Tue, 16 May 2023 07:40:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypIm-0000KN-DX; Tue, 16 May 2023 07:40:56 +0000
Received: by outflank-mailman (input) for mailman id 535000;
 Tue, 16 May 2023 07:40:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypIl-0006iO-GK
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:40:55 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2060c.outbound.protection.outlook.com
 [2a01:111:f400:7d00::60c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fca8eebd-f3bc-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 09:40:54 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8988.eurprd04.prod.outlook.com (2603:10a6:20b:40b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:40:52 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:40:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fca8eebd-f3bc-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <834a38d1-6917-7aa8-c560-0c943abb44c5@suse.com>
Date: Tue, 16 May 2023 09:40:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 07/12] x86/shadow: OOS doesn't track VAs anymore
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0192.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8988:EE_
X-MS-Office365-Filtering-Correlation-Id: e3692e86-6cdd-47fb-13c5-08db55e0e00c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e3692e86-6cdd-47fb-13c5-08db55e0e00c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:40:52.4220
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 52kfRysJJE1vGN75oFRRPRh8C/BKcH4z095LQAhTmzEP5xSacf89pqj09bj87rmm5XMgt88atD9lIbYDwH8LeQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8988

The tracking lasted only for about two weeks, but the related comment
parts were never purged.

Fixes: 50b74f55e0c0 ("OOS cleanup: Fixup arrays instead of fixup tables")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I'm heavily inclined to fold this into "x86/shadow: move OOS functions
to their own file".

This largely removes the justification for the per-vCPU hash. Is there
any other reason to keep the hashes per-vCPU?
---
v3: New.

--- a/xen/arch/x86/mm/shadow/oos.c
+++ b/xen/arch/x86/mm/shadow/oos.c
@@ -51,13 +51,10 @@
  *
  * Currently out-of-sync pages are listed in a simple open-addressed
  * hash table with a second chance (must resist temptation to radically
- * over-engineer hash tables...)  The virtual address of the access
- * which caused us to unsync the page is also kept in the hash table, as
- * a hint for finding the writable mappings later.
+ * over-engineer hash tables...).
  *
  * We keep a hash per vcpu, because we want as much as possible to do
- * the re-sync on the save vcpu we did the unsync on, so the VA hint
- * will be valid.
+ * the re-sync on the same vcpu we did the unsync on.
  */
 
 #if SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES_FULL



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:41:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:41:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535001.832563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypJ5-0000o6-No; Tue, 16 May 2023 07:41:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535001.832563; Tue, 16 May 2023 07:41:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypJ5-0000nz-L4; Tue, 16 May 2023 07:41:15 +0000
Received: by outflank-mailman (input) for mailman id 535001;
 Tue, 16 May 2023 07:41:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypJ3-0000ks-S2
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:41:13 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20607.outbound.protection.outlook.com
 [2a01:111:f400:7d00::607])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 07e1bfd7-f3bd-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 09:41:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8988.eurprd04.prod.outlook.com (2603:10a6:20b:40b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:41:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:41:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07e1bfd7-f3bd-11ed-b229-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ce0dc192-7d42-fc71-2c68-2b67933c5912@suse.com>
Date: Tue, 16 May 2023 09:41:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 08/12] x86/shadow: sh_rm_write_access_from_sl1p() is
 HVM-only
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

The function is used from (HVM-only) OOS code only - remove the
respective #ifdef-s inside the function to make this more obvious. (Note
that SHOPT_OUT_OF_SYNC won't be set when !HVM, so the #ifdef surrounding
the function is already sufficient.)

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3417,9 +3417,7 @@ static void cf_check sh_update_cr3(struc
 int sh_rm_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
                                  mfn_t smfn, unsigned long off)
 {
-#ifdef CONFIG_HVM
     struct vcpu *curr = current;
-#endif
     int r;
     shadow_l1e_t *sl1p, sl1e;
     struct page_info *sp;
@@ -3427,12 +3425,10 @@ int sh_rm_write_access_from_sl1p(struct
     ASSERT(mfn_valid(gmfn));
     ASSERT(mfn_valid(smfn));
 
-#ifdef CONFIG_HVM
     /* Remember if we've been told that this process is being torn down */
     if ( curr->domain == d && is_hvm_domain(d) )
         curr->arch.paging.shadow.pagetable_dying
             = mfn_to_page(gmfn)->pagetable_dying;
-#endif
 
     sp = mfn_to_page(smfn);
 



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:42:01 2023
Message-ID: <dee5be5d-a997-cd80-aa67-ca2f5c68bada@suse.com>
Date: Tue, 16 May 2023 09:41:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 10/12] x86/shadow: make monitor table create/destroy more
 consistent
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

While benign at present, it is still a little fragile to operate on a
wrong "old_mode" value in sh_update_paging_modes(). This can happen when
no monitor table was present initially - we'd create one for the new
mode without updating old_mode. Correct this in two ways, each of which
would be sufficient on its own: once by adding "else" to the second of
the involved if()s in the function, and again by setting the correct
initial mode for HVM domains in shadow_vcpu_init().

Further use the same predicate (paging_mode_external()) consistently
when dealing with shadow mode init/update/cleanup, rather than a mix of
is_hvm_vcpu() (init), is_hvm_domain() (update), and
paging_mode_external() (cleanup).

Finally drop a redundant is_hvm_domain() from inside the bigger if()
(which is being converted to paging_mode_external()) in
sh_update_paging_modes().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Style adjustment.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -119,9 +119,9 @@ void shadow_vcpu_init(struct vcpu *v)
     }
 #endif
 
-    v->arch.paging.mode = is_hvm_vcpu(v) ?
-                          &SHADOW_INTERNAL_NAME(sh_paging_mode, 3) :
-                          &SHADOW_INTERNAL_NAME(sh_paging_mode, 4);
+    v->arch.paging.mode = paging_mode_external(v->domain)
+                          ? &SHADOW_INTERNAL_NAME(sh_paging_mode, 2)
+                          : &SHADOW_INTERNAL_NAME(sh_paging_mode, 4);
 }
 
 #if SHADOW_AUDIT
@@ -1801,7 +1801,7 @@ static void sh_update_paging_modes(struc
         sh_detach_old_tables(v);
 
 #ifdef CONFIG_HVM
-    if ( is_hvm_domain(d) )
+    if ( paging_mode_external(d) )
     {
         const struct paging_mode *old_mode = v->arch.paging.mode;
 
@@ -1854,13 +1854,12 @@ static void sh_update_paging_modes(struc
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
         }
-
-        if ( v->arch.paging.mode != old_mode )
+        else if ( v->arch.paging.mode != old_mode )
         {
             SHADOW_PRINTK("new paging mode: %pv pe=%d gl=%u "
                           "sl=%u (was g=%u s=%u)\n",
                           v,
-                          is_hvm_domain(d) ? hvm_paging_enabled(v) : 1,
+                          hvm_paging_enabled(v),
                           v->arch.paging.mode->guest_levels,
                           v->arch.paging.mode->shadow.shadow_levels,
                           old_mode ? old_mode->guest_levels : 0,



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:42:53 2023
Message-ID: <ca8abb5a-8a47-247d-cf56-e730dc76ab20@suse.com>
Date: Tue, 16 May 2023 09:42:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 12/12] x86/shadow: adjust monitor table prealloc amount
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

While 670d6b908ff2 ('x86 shadow: Move the shadow linear mapping for
n-on-3-on-4 shadows so') bumped the amount by one too little for the
32-on-64 case (which luckily was dead code, and hence a bump wasn't
necessary in the first place), 0b841314dace ('x86/shadow:
sh_{make,destroy}_monitor_table() are "even more" HVM-only'), dropping
the dead code, then didn't adjust the amount back. Yet even the original
amount was too high in certain cases. Switch to pre-allocating just as
much as is going to be needed.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -726,7 +726,7 @@ mfn_t sh_make_monitor_table(const struct
     ASSERT(!pagetable_get_pfn(v->arch.hvm.monitor_table));
 
     /* Guarantee we can get the memory we need */
-    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+    if ( !shadow_prealloc(d, SH_type_monitor_table, shadow_levels < 4 ? 3 : 1) )
         return INVALID_MFN;
 
     m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:47:37 2023
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f34c8bd5-ab3b-faf2-e62d-70b5efbedb40@suse.com>
Date: Tue, 16 May 2023 09:42:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 11/12] x86/shadow: vCPU-s never have "no mode"
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0169.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b4::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB9810:EE_
X-MS-Office365-Filtering-Correlation-Id: 7f0340b1-cae1-4ddf-dc95-08db55e1114a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0yIGKYfk/tiFqWsKOXXoUYwRe5snVo7ETbsbRajTsKHM2G3KXrqRFD55NrT8eBPqAUEdkhjzn1bNmLb50u5GW+CQi31yqukXu0yeVKuTZcina60xdp3bgg81mm24EU6M/brVyI6ZOu1mAwIVKypJfHtTTyG4grCSigf3x/kGJjVKhXMzB7D7mIilRLeQbL5Q1YOvCBVO4NH7Zs2mjC7bBXbn9riWTfpoI5QkFhTyF80gwPonw9vdyGlcMFQ0JLIUf9bEas4yU05XjKDCt48IlFwWIEtsVS1zZtifkGyhXwb7oXe1rnL7KYMXFMZfK5wEwXCYdALZXMUb3bMpPPX9LufdluIYdUOPsBi2dlm8HJ9WlGx67C2/lEXcv7qAKa+ZX1hamaj5tKdArwhVqDYDm45K09qQzkPvPIm88MJckczqCRi3jah0pyEhQ2ABuXhAkvvvsh9inRYXLQol4HvUTt6GysVmGkBsOJNH2GrZAAWSZOqNaUWdl8pWlBrtz5Hn0ZZEO1QJnNH44Vg20jtAaBslof8RPpWUWeG94Fkq0boOslzfrotGsQ1nk+CeKmNyYDCAuUUcka4u3Cz6I7ocuVrWMOzt68Dzk50w2gGXBwxt0oixCTDdqUQqUPqDHaopt0caw5WfzhuRLrCUXWnV+w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(136003)(346002)(376002)(366004)(39860400002)(451199021)(26005)(83380400001)(5660300002)(4326008)(6916009)(6512007)(316002)(66556008)(66476007)(66946007)(6506007)(6486002)(31686004)(2616005)(186003)(36756003)(31696002)(478600001)(66899021)(54906003)(2906002)(38100700002)(86362001)(8676002)(41300700001)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?blQrcEJidktyTXU0U0tzNkFseU1PanZyU1NaOUpRZVJVWGIwKzRvSnpHSmNZ?=
 =?utf-8?B?bURMODRiT2x3VW5VZjZWd1FjT2NnaUJHVUk1Y3NQeEZYTFl2WWxpVG0rY2l0?=
 =?utf-8?B?aS9xYzI0d1RoY3pCV0RVTTZZbEVmaUZ2dmNiUXpCb21NTWRYbi9WekwyaTFK?=
 =?utf-8?B?REhZWnlJNHhkME4weFVwQU9XOXd3cDNRQitLYjJ5T0VONG5FYnNDRGkzcWpk?=
 =?utf-8?B?VXlyamY4TmRGMy9XYVRrRURMVUJuT0UvVmVtRTdXMlBGTUpwcWtqUVpBRW5J?=
 =?utf-8?B?ZlB0N01OMjFTU2t5V0ZTR2RjZ0tiVzNDTDhQZXQrWFBtQmFNY0s3MXNKQUVH?=
 =?utf-8?B?ZEtCUVlSeE5NdDR2K21va2l4MXpldkpuSlZjTnZOS3FOMEJMOUhIOEdzVFc4?=
 =?utf-8?B?WXBNUkR1TmpXMG1EMVpxVS9OVUwxYmZCZmhXNG5VYjIvWitCNmpPenVvcG9i?=
 =?utf-8?B?bEp0OHNDNjhaWXRsdUk0aDZYT05JdmUzNVdxMTA4aDB5eEFjcGtHUWd2bmgz?=
 =?utf-8?B?QXZMRm5RZmloU2YxNTBYMXpJQWZDY3RIVkJhMVlBVCtkY1ROUDhlL3N0S3hH?=
 =?utf-8?B?ZzFDeWZhbGFaak9WSFhVVkZMeHA3L1c0M3daTWpmd2daVndCWitVa01zVTFl?=
 =?utf-8?B?TXhlZlYyWFQ0SHRBbUdKS1NWTjkxWjFsWU9Sa3lUbko3VWlQY0pjTk01YWRK?=
 =?utf-8?B?cXpZR3ZuOGtrMnVSR3VxdU5FdHNuTzI4UjZhL1pjREgzUjhiL0R1V1dyTldw?=
 =?utf-8?B?azFBcHg3ekZoRytjbEVjRDZKSElxNGNiaGJQK2l3aTRSUGZrdkRIejh6N0py?=
 =?utf-8?B?SmlkbGh2blJzcHpMMkpJbHBJN2lVUmF0US9MSERUNVhsU2d5eWU2ME5wOGhl?=
 =?utf-8?B?aG1aOWZHWStFYVdiV0xMNDZVSmVNTDU0MXA3Y0VFZmhwODNKUFlKRE1EaDU3?=
 =?utf-8?B?enY2R3lTRUFDZWUxWVFzblBVYTlTRXladjVmWXEySDV0SEIvVEdzSnBxOE8v?=
 =?utf-8?B?Q1hyam5XeHZoTjZOenlLeFpKckM3LzFvSWZVZ0JkYktjV0I1SktZM0U4YThy?=
 =?utf-8?B?aElpOEYwUGdLbi9vQyszMjlONEhYNFJ0UFRyN0RxR3JRVEhiRVgxd3EzK2lN?=
 =?utf-8?B?YWdJK016NER4MnZlTVllc2xPaWNYSVpZU3lIRCs2Q1RCN1E5TGRlWTEvWXJB?=
 =?utf-8?B?UHA3Vjg1dzJRMW44TjdTekNpc0k5S01NUDlodjE2UUg4ME5hb0NVd3RBUXNs?=
 =?utf-8?B?RmlDMlFuNFZ3Um9oelJxQVpkcmo3Syt6WDR3TUU5UmhTcllodVYzYXk0SXU4?=
 =?utf-8?B?MHR0SnAxRFp0T295UkJRK2FMdWxXaWpGRWZIZTJ3RXhYY0FxV25HRXA5cENE?=
 =?utf-8?B?NWhTek1YYmV2TE5WcXViSHVHNXVka1JHaDkwRVM5R29meU5BTEFzQ0lOSE1V?=
 =?utf-8?B?cEhjTTQ0Mm1leUNYK0U5ZmdYRTB4SFRUdStLeXdlejM5OWhrWjdjUkt1L3Vm?=
 =?utf-8?B?amFma2tFY3F6dGpMTjYybXUyWnFDdUU0QlJLMmlHRGNJQm1NSVJwdnNyeG1M?=
 =?utf-8?B?WE0relpPQWRCQ1ZFd0x4dDZHWS96UEkwT1NVZnB3aFdZRjdPbndnQXJrMWw1?=
 =?utf-8?B?b0g3QVpFWUsvU3lQTUxJaWthWVFZZm1veTMvRkdudFM1ZEEzc1kwdTA5QjVh?=
 =?utf-8?B?RDBxTjk0N1BwK01tOStRMERmZlYxNkwwa3VjelZLeHRTQmJYLzFQdVRFNGJt?=
 =?utf-8?B?VDJqOUQ0d1gvck82QUdTZThFSGJvL1NXMWVxRkRLeGVqTkVBNW1UYzdZQTlI?=
 =?utf-8?B?UEJBaGY0OXU2VG02Y0RaazNDdHlnRlAzOVZsSk5EZjBwcXFyZ05oWkt0Y0g1?=
 =?utf-8?B?V2FlUU1vY3dpMUJLY1Q0M1lUZlZveGpTM01TSXBrVzdEN3RFaVJUa3R1UGQ1?=
 =?utf-8?B?V3pnS21KWFhlb0k5OTNHd1pBS3gyQktYb2VPNy9nL1JESXJnSlI2S08zaS9q?=
 =?utf-8?B?VDR4eG9pZDQ1LzNRaitCeEM2OUl2WlYybzFGcTNmbFljeHExWFFTanlpSEtY?=
 =?utf-8?B?NmJkeDA4cHp2UkgzTkhML0ExOEg1c0VtYkJhUHpNNjV4MUhKSGVvTXhYQ3Mx?=
 =?utf-8?Q?sEJAWNv5bURs13cq/Xm4hEg7y?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7f0340b1-cae1-4ddf-dc95-08db55e1114a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:42:15.0562
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KMeJXUNoJWeW/m+jn6x1YQ+ANK0XN2giIA+u2Ms7HMnpSQeYielmTpeTHI9xgNtKZaj69jQ0V5Xwg7H6unP8fQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB9810

With an initial mode now installed by shadow_vcpu_init(), there's no need
for sh_update_paging_modes() to deal with the "mode is still unset"
case. Leave an assertion in its place, though.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1854,6 +1854,8 @@ static void sh_update_paging_modes(struc
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
         }
+        else if ( !old_mode )
+            ASSERT_UNREACHABLE();
         else if ( v->arch.paging.mode != old_mode )
         {
             SHADOW_PRINTK("new paging mode: %pv pe=%d gl=%u "
@@ -1862,11 +1864,10 @@ static void sh_update_paging_modes(struc
                           hvm_paging_enabled(v),
                           v->arch.paging.mode->guest_levels,
                           v->arch.paging.mode->shadow.shadow_levels,
-                          old_mode ? old_mode->guest_levels : 0,
-                          old_mode ? old_mode->shadow.shadow_levels : 0);
-            if ( old_mode &&
-                 (v->arch.paging.mode->shadow.shadow_levels !=
-                  old_mode->shadow.shadow_levels) )
+                          old_mode->guest_levels,
+                          old_mode->shadow.shadow_levels);
+            if ( v->arch.paging.mode->shadow.shadow_levels !=
+                 old_mode->shadow.shadow_levels )
             {
                 /* Need to make a new monitor table for the new mode */
                 mfn_t new_mfn, old_mfn;



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:47:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:47:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535016.832594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypPE-0002qH-0e; Tue, 16 May 2023 07:47:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535016.832594; Tue, 16 May 2023 07:47:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypPD-0002q5-Sj; Tue, 16 May 2023 07:47:35 +0000
Received: by outflank-mailman (input) for mailman id 535016;
 Tue, 16 May 2023 07:47:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypJS-0006iO-0N
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:41:38 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 15c9742b-f3bd-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 09:41:36 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8988.eurprd04.prod.outlook.com (2603:10a6:20b:40b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 07:41:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:41:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15c9742b-f3bd-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PwDP8lrmopysT0jkBOYL0tQU94M6nwMe8EquTAPdA1iqQfaU6m6YcLwbqCGph5G/apICtXr9I7Tn69iSa3MZrXIA7TAaq+7w6ZhKyloeeN40qX5cwC0V9c+f30VL56UCYuI/vjp6B8aUWPtDGN5iCP47a6zIk+vrIl/pUOXxf0l5DKnm4APnYKkG2tWQ71X37mL17I4Oo2xOicpPWo5A/MqIVjlk/rMmAhWhrTP9pIcIjRa32N7G6AvG8KrME05dlsi6xnG92+FOjweqLas8usYupryxqMZ73TiJH+WXQ6qy7KbPffpOE9FnywZwo9HqlLP4sFTe2Vwapz8Kj7MVbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=na5tx/nGh+nhBImZ7UOWh0/j40yoMt2VPnfqJBakobc=;
 b=XcBJ7ClmZcu1ydINascp1KnXejJrfp/lVqwDLUUJOhWY217WAQeMAOvxnYAuKA2jbL9J7iB/zYq5THWxosZbVLcZ0O7PMuh6hV7tA/RtMQdYstaNr3OmxOsLI4XRGeagBUnwjneLWDh7YExVh2JpLQxPCoHRlYr2UNwNXKRFs9It7sDjDlJE7REhB12aKcFevdo9t84Euj7W8lGcDsi2HIQ41CV46+M9PpaTyneipb8qUsCw5LrIg2fNRYXlj+7hOyZSPi5ga57P5V2AxugAP+H4oy3etocLdIVnPKgAh8uoN+M/BDZmF97KjxaMw9sBZRob8K/rQVViUb94qQjCHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=na5tx/nGh+nhBImZ7UOWh0/j40yoMt2VPnfqJBakobc=;
 b=wM1wxf5OxHc2RXF8KC9nHn+JU/BG1mahSDHGEAwof2o84juFhwgemRaqdGPJey7FgmGr+eulSQCmziYRdV9XSO1mmQrK0cyOi+rmhbpQiq1hwc8yKLHYVLgiZkFF3vk1DWnSMOLpr0NkuRN78ySNM/fhtyndSq9pleKjNTCH3XmBp+niuMAy9nWKrx09X+dVujSoSGiIgeSN83ngNLRsz1qNzVr5KhKiKtFd4H1tBtQR8AZn2112xZZHnflnFLRyVFc9iT27dLNP2DC/FkdhFowRZFSMaM0d61UnYYnoc19ByjVhKZvhQ+qZnKyS3TbZFZseOqv2IYd7MXEAd7Egwg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ebf7a96c-8529-b238-f9bf-750cf42312f9@suse.com>
Date: Tue, 16 May 2023 09:41:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: [PATCH v3 09/12] x86/shadow: drop is_hvm_...() where easily possible
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
In-Reply-To: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0025.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8988:EE_
X-MS-Office365-Filtering-Correlation-Id: 38651a2e-22ff-4734-1efa-08db55e0f957
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ofPcZp4K2huLoi7lv7vcziDK1iQXGEATSCK6xSN3p8B1xhq2U5Lh11QXilyG7gtH+c3jOB35/YXnY2J8m6UM5o8/5Vo1DB1GuUT1r2PPP+JCV5TNRV9MynCPiz6EvjAitCTHe6V7qOnxYE939AX4JPgOj2MBvgu070/JJkH1+DeNEc1Gb/U5sCMyhELRocjtByaRDY7/VI11Ncb85MPEMdUSAtNHZEhBwiGGuNtJ16JckSshbYjTXmhyUCaUv+W+LScteWZz02Km1JUjslh4KfjXfxMp2MfmLf4NPNhGDZqbyDQCX7S5+b6Dnxm73kCz9IDGwZSEnUvw4VOdizJEdYBheI4Ht1YptAsGI8Dx9v4FfGVrlR8KOskPPad0fi50MA1nl2Unt3zmHd+lu2DAYHSAGsgHBbD1AZPmgQHisKIPcyvAxpHvqNGtsCEAwU7zXg11VLcITdcIzfT9hxVfh1x0+GhdXNXiEiCORhJDD426QcqoJ619L93q35YpEprgLlTuA6axz1FqxbFmZwDzq+gYbjzyzXu2uzHyCb1ZPREqp4twPL2hH/dnRjjZaT9lyDKuYiuY+zozTe5cInhV20KOodgIsWT8L1UOtIGj5eEcVMsxZorgR4pBV5W6WvVt2xB/W6Hcui5DNy2fL16CpA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(346002)(366004)(136003)(39860400002)(396003)(451199021)(2906002)(316002)(6916009)(31686004)(66946007)(4326008)(8676002)(5660300002)(8936002)(66556008)(66476007)(41300700001)(86362001)(2616005)(478600001)(54906003)(6486002)(36756003)(6506007)(186003)(31696002)(83380400001)(6512007)(26005)(38100700002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ejZzTTJVYVNGdE13cjZqL0Y5UjdxSVBZenpSdEVFY3pwWk54QzFidFZ3Q1dt?=
 =?utf-8?B?NHZXYTc2SXZ4OEVWUTUyTURwdVFwM25RU0I2R2VLeklRRlVGeHkrTGNpNm5t?=
 =?utf-8?B?ckpFaEpwU0kxOVVPUm9waGtqNkxaL1VSWUlCVUFqVUlwaWRnZkFGZVFVQ2NM?=
 =?utf-8?B?WkRwZTV1YzlYcmZNeFRlOVNBRzhWREp6TTN4Qy9JbFU1cklaZzZONmNsZnd5?=
 =?utf-8?B?Y2dsSE9Rc3Zubng5R3Z6T1U4MkxYTTFCemlrc2JMMHF2VU1XTkxuYWpOaWlj?=
 =?utf-8?B?OEkralVaU3RFYWgwbUJ2N25rU3kvbENTRFlPd2JjM1piTkQ0eVpRZkhzVmNk?=
 =?utf-8?B?OFgxZmNiemlsamtPRUpsdzRlWXYvcitCYlRhdi9kUzdLN2t5eHlxOE8zT2Nx?=
 =?utf-8?B?R3Fmd0NaMDFuWVUrSjAzOXVPdXRwU2Q1aCtJSE94eDkwMGlua0RKSDl6eFgz?=
 =?utf-8?B?YXdQVm4xWWZyOTVJMXg2VGJ0bkNmRHFLaGZjMDhhaGZCQWJXQy81OFd6TWd4?=
 =?utf-8?B?ZDdRT1NZd083SzR6MlFOYStkczdZeVhNbUtCRVRBc2hKckZRb21xRkVHcTBJ?=
 =?utf-8?B?bEdzdThMMXMxaEo0Q3QxUitMRmdjSnBwMU1hYkhYWXlwU0tqRHZpVWplT0ho?=
 =?utf-8?B?c2J4SDhhNlZNYXN4RjlUVG5jU1dpY2x1SzROakJCWjVaQWtWUXEwWlM3QWlx?=
 =?utf-8?B?Qy91blV3NDJUbjRGcitCQTkrQm1xT2k2bnFid1FWYk0vNTE4WW44Q3dQRnhV?=
 =?utf-8?B?QWxmM2pKYTNsc0FpajhFV1JNb2VOejVOOFluc2sxTHRHRTFlSUtiK1pudDVn?=
 =?utf-8?B?bEI3NjErTEJ5ckZsWVJxZlN0ZUdKRzRlQWFYcmJ2NDExRVMzR2c2dC9XL1Qz?=
 =?utf-8?B?enZJVzNKK1o4UTlaWDdId1NLZDU4djU3WGRJWWVzdUozV2gxWTBkNzBZMCtC?=
 =?utf-8?B?TkxxS0ZJZXdTZkNVQS96aytrZHFOTUZjUStDL3NLWmxibDBRY1Y4UXMybm50?=
 =?utf-8?B?OFVzSk55eUxFaDYyQVowU3I5bys4eXJvbGlCN0JOR3BVQWFVb3hBcm9KdFVs?=
 =?utf-8?B?MjFORndDa2EwcHNtMWxob3RYZWFWS01PRTgwOFBGc0lGcUkrbDZuN1Iva0Nl?=
 =?utf-8?B?aUFBQ3dwQW5UcGhNM2N1TExNOUNHQ3VuYUg5YlNnR3BJczVuSkRYZXZPYTEw?=
 =?utf-8?B?VkdqZm1KSnp5YmgwNHNVY2x3dFVINjlmbjBES3pQa1BLOUxRMG5VdEJnNEp4?=
 =?utf-8?B?K2hjVk8wMkJQR1lrZmJ5NVcwd2F4NDd1c3R4b3NxYXVHRFRURHRjWlJqTlht?=
 =?utf-8?B?cUdjTWZmYzNYTjEyRVk1Y3RWWVgvcnRmeFV5RWVNKzF5TXhYdGduY0h2Qldj?=
 =?utf-8?B?UzBScGhrem9LTGYwQlNMaFo4WE04bEI0WVd0QlFZb05GVThxUzdrV3lHdDRk?=
 =?utf-8?B?dFhKT3lRdndqdDkyeWtZaVloalg1bE95cEYvRjFFTXNsazlodEFnaVRhSGI0?=
 =?utf-8?B?LzJPRTFVMVM1Zll0SXkyMklDQmFWcWxkWHJXQkFkalNuQjJTRG10aC9xU1RM?=
 =?utf-8?B?aDFoSHlDQVdlT3ZjRCt5OHhJVkg3ODRpaGwraXlrRlBMWm5ESzN5UFFiQXY5?=
 =?utf-8?B?S0FuU05CQnM1MGNRei9UUjFDT1pLWURVbzkvaG53RDQrWnU2NXIxOEdVaUJQ?=
 =?utf-8?B?Nm9IUitYSkdaUFJKMnUzUCswUGJOajdYc0h4WVVtQTl6RytqV01zcDlEekV5?=
 =?utf-8?B?a3JkSXpCbkVpOTVNLzdrVWI0Mnh2WmZhQTNHdHpiUFVvWmlpbDBVTkhjMjJR?=
 =?utf-8?B?VnIwbXJXMWcxc0wraEl6dmVFVWxXbXdaN1Mxa0VlT3VJd25pb1JjeFp4cFho?=
 =?utf-8?B?NjQ3ZGVkWDI1NGY0R3UwV1U3eUs3N0FmOTV0ZFNYSllTV003KzZ2dk5oU1Fa?=
 =?utf-8?B?RTk0MCtDUGRLNHJVeHViTUx5QUFWekxkOUFUWWJmMUdxcWNrUi9aVTVScFNI?=
 =?utf-8?B?bzd1dXJqYU9lZjl3Q2t4KzlXY0YvaWtxaVZQQTc2K09Oc3VIYlVKTE51SXV6?=
 =?utf-8?B?NDZSaFBtclVKT2hpQklyUkcrQWR1QW5VL1A0RktsQ0hSOW5zWmJBYzFlL0Ni?=
 =?utf-8?Q?kuZjesH/7s+N9SrgaqI+yWj08?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 38651a2e-22ff-4734-1efa-08db55e0f957
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:41:34.8465
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iyu7/ceSWDdvQ2hEl1prD0BjH15Yjtj40PB5h+bINJPhC2bXnkNUByte1RLAghvjRESPoDQWEjqh0ew5JRrojw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8988

Emulation-related functions are involved in HVM handling only, and in
some cases they even invoke such checks only after having already done
things which are valid for HVM domains alone. OOS (out-of-sync) being
active also implies HVM. In sh_remove_all_mappings() one of the two
checks is redundant with an earlier paging_mode_external() one (the
other, however, needs to stay).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base over changes/additions earlier in the series.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -1511,7 +1511,7 @@ int sh_remove_all_mappings(struct domain
                && (page->count_info & PGC_count_mask) <= 3
                && ((page->u.inuse.type_info & PGT_count_mask)
                    == (is_special_page(page) ||
-                       (is_hvm_domain(d) && is_ioreq_server_page(d, page))))) )
+                       is_ioreq_server_page(d, page)))) )
             printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
                    " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
                    mfn_x(gmfn), gfn_x(gfn),
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -192,10 +192,6 @@ hvm_emulate_write(enum x86_segment seg,
     if ( rc || !bytes )
         return rc;
 
-    /* Unaligned writes are only acceptable on HVM */
-    if ( (addr & (bytes - 1)) && !is_hvm_vcpu(v)  )
-        return X86EMUL_UNHANDLEABLE;
-
     ptr = sh_emulate_map_dest(v, addr, bytes, sh_ctxt);
     if ( IS_ERR(ptr) )
         return ~PTR_ERR(ptr);
@@ -246,10 +242,6 @@ hvm_emulate_cmpxchg(enum x86_segment seg
     if ( rc )
         return rc;
 
-    /* Unaligned writes are only acceptable on HVM */
-    if ( (addr & (bytes - 1)) && !is_hvm_vcpu(v)  )
-        return X86EMUL_UNHANDLEABLE;
-
     ptr = sh_emulate_map_dest(v, addr, bytes, sh_ctxt);
     if ( IS_ERR(ptr) )
         return ~PTR_ERR(ptr);
@@ -445,8 +437,7 @@ static void *sh_emulate_map_dest(struct
 
 #ifndef NDEBUG
     /* We don't emulate user-mode writes to page tables. */
-    if ( is_hvm_domain(d) ? hvm_get_cpl(v) == 3
-                          : !guest_kernel_mode(v, guest_cpu_user_regs()) )
+    if ( hvm_get_cpl(v) == 3 )
     {
         gdprintk(XENLOG_DEBUG, "User-mode write to pagetable reached "
                  "emulate_map_dest(). This should never happen!\n");
@@ -475,15 +466,6 @@ static void *sh_emulate_map_dest(struct
         sh_ctxt->mfn[1] = INVALID_MFN;
         map = map_domain_page(sh_ctxt->mfn[0]) + (vaddr & ~PAGE_MASK);
     }
-    else if ( !is_hvm_domain(d) )
-    {
-        /*
-         * Cross-page emulated writes are only supported for HVM guests;
-         * PV guests ought to know better.
-         */
-        put_page(mfn_to_page(sh_ctxt->mfn[0]));
-        return MAPPING_UNHANDLEABLE;
-    }
     else
     {
         /* This write crosses a page boundary. Translate the second page. */
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3426,7 +3426,7 @@ int sh_rm_write_access_from_sl1p(struct
     ASSERT(mfn_valid(smfn));
 
     /* Remember if we've been told that this process is being torn down */
-    if ( curr->domain == d && is_hvm_domain(d) )
+    if ( curr->domain == d )
         curr->arch.paging.shadow.pagetable_dying
             = mfn_to_page(gmfn)->pagetable_dying;
 
--- a/xen/arch/x86/mm/shadow/oos.c
+++ b/xen/arch/x86/mm/shadow/oos.c
@@ -577,7 +577,6 @@ int sh_unsync(struct vcpu *v, mfn_t gmfn
     if ( (pg->shadow_flags &
           ((SHF_page_type_mask & ~SHF_L1_ANY) | SHF_out_of_sync)) ||
          sh_page_has_multiple_shadows(pg) ||
-         !is_hvm_vcpu(v) ||
          !v->domain->arch.paging.shadow.oos_active )
         return 0;
 



From xen-devel-bounces@lists.xenproject.org Tue May 16 07:48:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:48:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535025.832613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypPd-0003sX-Io; Tue, 16 May 2023 07:48:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535025.832613; Tue, 16 May 2023 07:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypPd-0003sQ-G4; Tue, 16 May 2023 07:48:01 +0000
Received: by outflank-mailman (input) for mailman id 535025;
 Tue, 16 May 2023 07:47:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypPb-0003im-IO
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:47:59 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2042.outbound.protection.outlook.com [40.107.7.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f96cd99d-f3bd-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 09:47:58 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7132.eurprd04.prod.outlook.com (2603:10a6:10:12e::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 07:47:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:47:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f96cd99d-f3bd-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mfMDtuliroXiVFxMP4m/MdVDHlDQTw+mNQvT1D8H7hT4rgT1HubQyjd3qJQ16tpp1pmUAhobSelKDe1SRsxLx7++uSvxn5DHdBw4niqowDVDyK46pgODlVxbPpkT207QQtM8aRng7jqn68FmqTqfs6ZvT4LGq546v6pFk/b8JExRuZ75FX80r5LFFkkuUrjA+F2UYCQhat75z8N/zWcHapXBuhiUXgIo/3SMrxTh/FxQUS0q1pQyniFVL8MGGYAjpZm4E1BYU8U6Iq6bnJaKea3uwl9xyBWMS6AcMC+iMz6NGOB58if2cNudZJDZT5yEAFiR8LUFNEOi0SEtXPql3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=W+mOXDtYrs539yzrfbsePpJM9wrJrMjthB/l2RTNe0M=;
 b=NYir+z60iozFVuDZ/v63rVn+xj9K2/NcbG5GrEzKwffYkbWhahiNwdWawE8t4usW/Kg/KymSOu20kIHLJrq6rjkw9JYQly5zihl6KXU6+7brM20PrLQChpN/hj4o39Fs8+HS+GiB4bJqqBJJCdG2VcdhAwVIsOdey3fP+jNSXRKYlfj+hrsO6srQRuF6IBb0yCw6IkjX/gGP9mnnTsK66TNuXmaP4uMaZDk2tNCYD9uIppOLgUchmY22bTnFo/PM5pzDvLsTGogcSqcW9DedIYHzyN+N6qr3IIqxEeaf7y9+4F5IUa5vidLDyRQQvGd8UF03VkMb7ouiQkVaq5Y9mg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W+mOXDtYrs539yzrfbsePpJM9wrJrMjthB/l2RTNe0M=;
 b=PDKiS/4hHB55a8GuTaYgUw3R1KP2Dlb/2DJo9S3a0vbP+IT3G3BVnUqF0GGU5+2MARnvPnHzSq8hp3JQaI9KYcjCzvD+L8RW3fKZz+8q/wtFq9edgzfWmQLcC8sMbHMymf8rYPc7seCcqO/l8M6aN6SIdRyEEiXSUwWTDNPVwKsuKJNg+dlQcw+vXXDVH5ia68N22ZLyvqpd6cI6NRVycSIOGg76y6WAWIGy7W9badmaw9yWsQUcfOvvGL3MP+xS1f0RxIvN7PHeBoRQLBsf7rzn4TZVODZsKfvrIkVzfNoCFPN0XbAmY+2BWDLLAjJs5NVIDQ1HZBufucSGCuTumw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f68cafb7-1b7c-b6c9-c1b4-cf3d7395bd38@suse.com>
Date: Tue, 16 May 2023 09:47:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 2/2] x86: Add support for CpuidUserDis
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515113136.2465-1-alejandro.vallejo@cloud.com>
 <20230515113136.2465-3-alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230515113136.2465-3-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AS4P251CA0008.EURP251.PROD.OUTLOOK.COM
 (2603:10a6:20b:5d2::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB7132:EE_
X-MS-Office365-Filtering-Correlation-Id: bec287a5-920d-4d67-49a5-08db55e1cc69
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bec287a5-920d-4d67-49a5-08db55e1cc69
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:47:29.0037
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB7132

On 15.05.2023 13:31, Alejandro Vallejo wrote:
> Because CpuIdUserDis is reported in CPUID itself, the extended leaf
> containing that bit must be retrieved before calling c_early_init().
> 
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue May 16 07:58:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 07:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535035.832624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypZx-0005c7-In; Tue, 16 May 2023 07:58:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535035.832624; Tue, 16 May 2023 07:58:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pypZx-0005c0-Ey; Tue, 16 May 2023 07:58:41 +0000
Received: by outflank-mailman (input) for mailman id 535035;
 Tue, 16 May 2023 07:58:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pypZw-0005bu-Ki
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 07:58:40 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20607.outbound.protection.outlook.com
 [2a01:111:f400:fe12::607])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 76e5209d-f3bf-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 09:58:38 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GV1PR04MB9055.eurprd04.prod.outlook.com (2603:10a6:150:1e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 07:58:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 07:58:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76e5209d-f3bf-11ed-b229-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <97825c89-87c5-1156-5621-9d03286fd865@suse.com>
Date: Tue, 16 May 2023 09:58:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/6] x86/boot: Rework dom0 feature configuration
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230515144259.1009245-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0131.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GV1PR04MB9055:EE_
X-MS-Office365-Filtering-Correlation-Id: 3f4f0439-71f7-462d-993f-08db55e35a0f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f4f0439-71f7-462d-993f-08db55e35a0f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 07:58:36.1500
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR04MB9055

On 15.05.2023 16:42, Andrew Cooper wrote:
> Right now, dom0's feature configuration is split between the common
> path and a dom0-specific one.  This is mostly by accident, and causes some
> very subtle bugs.
> 
> First, start by clearly defining init_dom0_cpuid_policy() to apply to the
> domain that Xen builds automatically.  The late hwdom case is still
> constructed in a mostly normal way, with the control domain having full
> discretion over the CPU policy.
> 
> Identifying this highlights a latent bug - the two halves of the MSR_ARCH_CAPS
> bodge are asymmetric with respect to the hardware domain.  This means that
> shim, or a control-only dom0 sees the MSR_ARCH_CAPS CPUID bit but none of the
> MSR content.  This in turn declares the hardware to be retpoline-safe by
> failing to advertise the {R,}RSBA bits appropriately.  Restrict this logic to
> the hardware domain, although the special case will cease to exist shortly.
> 
> For the CPUID Faulting adjustment, the comment in ctxt_switch_levelling()
> isn't actually relevant.  Provide a better explanation.
> 
> Move the recalculate_cpuid_policy() call outside of the dom0-cpuid= case.
> This is no change for now, but will become necessary shortly.
> 
> Finally, place the second half of the MSR_ARCH_CAPS bodge after the
> recalculate_cpuid_policy() call.  This is necessary to avoid transiently
> breaking the hardware domain's view while the handling is cleaned up.  This
> special case will cease to exist shortly.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one question / suggestion:

> @@ -858,7 +839,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>       * so dom0 can turn off workarounds as appropriate.  Temporary, until the
>       * domain policy logic gains a better understanding of MSRs.
>       */
> -    if ( cpu_has_arch_caps )
> +    if ( is_hardware_domain(d) && cpu_has_arch_caps )
>          p->feat.arch_caps = true;

As a result of this, ...

> @@ -876,8 +857,32 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>          }
>  
>          x86_cpu_featureset_to_policy(fs, p);
> +    }
> +
> +    /*
> +     * PV Control domains used to require unfiltered CPUID.  This was fixed in
> +     * Xen 4.13, but there is a cmdline knob to restore the prior behaviour.
> +     *
> +     * If the domain is getting unfiltered CPUID, don't let the guest kernel
> +     * play with CPUID faulting either, as Xen's CPUID path won't cope.
> +     */
> +    if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
> +        p->platform_info.cpuid_faulting = false;
>  
> -        recalculate_cpuid_policy(d);
> +    recalculate_cpuid_policy(d);
> +
> +    if ( is_hardware_domain(d) && cpu_has_arch_caps )

... it would feel slightly more logical if p->feat.arch_caps was used here.
Whether that's to replace the entire condition or merely the right side of
the && depends on what the subsequent changes require (which I haven't
looked at yet).

Jan

> +    {
> +        uint64_t val;
> +
> +        rdmsrl(MSR_ARCH_CAPABILITIES, val);
> +
> +        p->arch_caps.raw = val &
> +            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
> +             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
> +             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
> +             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
> +             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
>      }
>  }
>  



From xen-devel-bounces@lists.xenproject.org Tue May 16 08:10:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 08:10:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535049.832634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyplM-0000CF-5N; Tue, 16 May 2023 08:10:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535049.832634; Tue, 16 May 2023 08:10:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyplM-0000C8-2S; Tue, 16 May 2023 08:10:28 +0000
Received: by outflank-mailman (input) for mailman id 535049;
 Tue, 16 May 2023 08:10:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S66/=BF=citrix.com=prvs=49309c509=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pyplJ-0000C2-M6
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 08:10:26 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1965a915-f3c1-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 10:10:21 +0200 (CEST)
Received: from mail-dm6nam10lp2105.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.105])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 May 2023 04:10:18 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BY1PR03MB7287.namprd03.prod.outlook.com (2603:10b6:a03:527::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.32; Tue, 16 May
 2023 08:10:13 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6387.030; Tue, 16 May 2023
 08:10:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1965a915-f3c1-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 16 May 2023 10:10:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
Message-ID: <ZGM6X19p50oSqbNB@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-1-sstabellini@kernel.org>
 <ZGHx9Mk3UGPdli1h@Air-de-Roger>
 <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
 <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
X-ClientProxiedBy: LO4P265CA0089.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2bc::6) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|BY1PR03MB7287:EE_
X-MS-Office365-Filtering-Correlation-Id: af3bd137-ec70-4083-3f60-08db55e4f978
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: af3bd137-ec70-4083-3f60-08db55e4f978
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 08:10:13.2620
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY1PR03MB7287

On Mon, May 15, 2023 at 05:16:27PM -0700, Stefano Stabellini wrote:
> On Mon, 15 May 2023, Jan Beulich wrote:
> > On 15.05.2023 10:48, Roger Pau Monné wrote:
> > > On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
> > >> From: Stefano Stabellini <stefano.stabellini@amd.com>
> > >>
> > >> Xen always generates an XSDT table even if the firmware provided an RSDT
> > >> table. Instead of copying the XSDT header from the firmware table (that
> > >> might be missing), generate the XSDT header from a preset.
> > >>
> > >> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > >> ---
> > >>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
> > >>  1 file changed, 9 insertions(+), 23 deletions(-)
> > >>
> > >> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > >> index 307edc6a8c..5fde769863 100644
> > >> --- a/xen/arch/x86/hvm/dom0_build.c
> > >> +++ b/xen/arch/x86/hvm/dom0_build.c
> > >> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> > >>                                        paddr_t *addr)
> > >>  {
> > >>      struct acpi_table_xsdt *xsdt;
> > >> -    struct acpi_table_header *table;
> > >> -    struct acpi_table_rsdp *rsdp;
> > >>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
> > >>      unsigned long size = sizeof(*xsdt);
> > >>      unsigned int i, j, num_tables = 0;
> > >> -    paddr_t xsdt_paddr;
> > >>      int rc;
> > >> +    struct acpi_table_header header = {
> > >> +        .signature    = "XSDT",
> > >> +        .length       = sizeof(struct acpi_table_header),
> > >> +        .revision     = 0x1,
> > >> +        .oem_id       = "Xen",
> > >> +        .oem_table_id = "HVM",
> > > 
> > > I think this is wrong, as according to the spec the OEM Table ID must
> > > match the OEM Table ID in the FADT.
> > > 
> > > We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
> > > possibly also the other OEM related fields.
> > > 
> > > Alternatively we might want to copy and use the RSDT on systems that
> > > lack an XSDT, or even just copy the header from the RSDT into Xen's
> > > crafted XSDT, since the format of the RSDT and the XSDT headers is
> > > exactly the same (the difference is in the size of the table
> > > entries that come after).
> > 
> > I guess I'd prefer that last variant.
> 
> I tried this approach (together with the second patch for necessity) and
> it worked.
> 
> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> index fd2cbf68bc..11d6d1bc23 100644
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -967,6 +967,10 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
>          goto out;
>      }
>      xsdt_paddr = rsdp->xsdt_physical_address;
> +    if ( !xsdt_paddr )
> +    {
> +        xsdt_paddr = rsdp->rsdt_physical_address;
> +    }
>      acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
>      table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
>      if ( !table )

To be slightly more consistent, could you use:

/*
 * Note the header is the same for both RSDT and XSDT, so it's fine to
 * copy the native RSDT header to the Xen crafted XSDT if no native
 * XSDT is available.
 */
if (rsdp->revision > 1 && rsdp->xsdt_physical_address)
    sdt_paddr = rsdp->xsdt_physical_address;
else
    sdt_paddr = rsdp->rsdt_physical_address;

It was an oversight of mine not to check the RSDP revision, as an RSDP
with revision < 2 will never have an XSDT.  Also add:

Fixes: 1d74282c455f ('x86: setup PVHv2 Dom0 ACPI tables')

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue May 16 08:14:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 08:14:25 +0000
Date: Tue, 16 May 2023 10:13:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
Message-ID: <ZGM7Q+uOm1HRvaTC@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-1-sstabellini@kernel.org>
 <ZGHx9Mk3UGPdli1h@Air-de-Roger>
 <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
 <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
 <ZGM6X19p50oSqbNB@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZGM6X19p50oSqbNB@Air-de-Roger>
MIME-Version: 1.0

On Tue, May 16, 2023 at 10:10:07AM +0200, Roger Pau Monné wrote:
> On Mon, May 15, 2023 at 05:16:27PM -0700, Stefano Stabellini wrote:
> > On Mon, 15 May 2023, Jan Beulich wrote:
> > > On 15.05.2023 10:48, Roger Pau Monné wrote:
> > > > On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
> > > >> From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > >>
> > > >> Xen always generates an XSDT table even if the firmware provided an RSDT
> > > >> table. Instead of copying the XSDT header from the firmware table (which
> > > >> might be missing), generate the XSDT header from a preset.
> > > >>
> > > >> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > >> ---
> > > >>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
> > > >>  1 file changed, 9 insertions(+), 23 deletions(-)
> > > >>
> > > >> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > > >> index 307edc6a8c..5fde769863 100644
> > > >> --- a/xen/arch/x86/hvm/dom0_build.c
> > > >> +++ b/xen/arch/x86/hvm/dom0_build.c
> > > >> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> > > >>                                        paddr_t *addr)
> > > >>  {
> > > >>      struct acpi_table_xsdt *xsdt;
> > > >> -    struct acpi_table_header *table;
> > > >> -    struct acpi_table_rsdp *rsdp;
> > > >>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
> > > >>      unsigned long size = sizeof(*xsdt);
> > > >>      unsigned int i, j, num_tables = 0;
> > > >> -    paddr_t xsdt_paddr;
> > > >>      int rc;
> > > >> +    struct acpi_table_header header = {
> > > >> +        .signature    = "XSDT",
> > > >> +        .length       = sizeof(struct acpi_table_header),
> > > >> +        .revision     = 0x1,
> > > >> +        .oem_id       = "Xen",
> > > >> +        .oem_table_id = "HVM",
> > > > 
> > > > I think this is wrong, as according to the spec the OEM Table ID must
> > > > match the OEM Table ID in the FADT.
> > > > 
> > > > We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
> > > > possibly also the other OEM related fields.
> > > > 
> > > > Alternatively we might want to copy and use the RSDT on systems that
> > > > lack an XSDT, or even just copy the header from the RSDT into Xen's
> > > > crafted XSDT, since the format of the RSDT and the XSDT headers is
> > > > exactly the same (the difference is in the size of the table
> > > > entries that come after).
> > > 
> > > I guess I'd prefer that last variant.
> > 
> > I tried this approach (together with the second patch for necessity) and
> > it worked.
> > 
> > diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > index fd2cbf68bc..11d6d1bc23 100644
> > --- a/xen/arch/x86/hvm/dom0_build.c
> > +++ b/xen/arch/x86/hvm/dom0_build.c
> > @@ -967,6 +967,10 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> >          goto out;
> >      }
> >      xsdt_paddr = rsdp->xsdt_physical_address;
> > +    if ( !xsdt_paddr )
> > +    {
> > +        xsdt_paddr = rsdp->rsdt_physical_address;
> > +    }
> >      acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
> >      table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
> >      if ( !table )
> 
> To be slightly more consistent, could you use:
> 
> /*
>  * Note the header is the same for both RSDT and XSDT, so it's fine to
>  * copy the native RSDT header to the Xen crafted XSDT if no native
>  * XSDT is available.
>  */
> if (rsdp->revision > 1 && rsdp->xsdt_physical_address)
>     sdt_paddr = rsdp->xsdt_physical_address;
> else
>     sdt_paddr = rsdp->rsdt_physical_address;

(please add the missing spaces in the chunk above)


From xen-devel-bounces@lists.xenproject.org Tue May 16 08:25:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 08:25:01 +0000
Date: Tue, 16 May 2023 10:24:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
Message-ID: <ZGM9vzwGm7Jv6i7M@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-1-sstabellini@kernel.org>
 <ZGHx9Mk3UGPdli1h@Air-de-Roger>
 <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
 <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
 <ZGM6X19p50oSqbNB@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZGM6X19p50oSqbNB@Air-de-Roger>
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH0PR03MB6251:EE_
X-MS-Office365-Filtering-Correlation-Id: a8c3e553-e827-4e5d-36ee-08db55e6fc5c
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qAzH8LZUfxgS0n3Ta8aVMW+F1Su5b5kZZXFagdi2Y3na02yA7tk6Cpdpib8nnkZAIapfDr7btJtU1ytGCQcpUeSg368GhZkQy1spxiu4fU2m3zW3Z1yH4L1Esbc0PXZiOgfcJVL02cF5s1xlpUVD7gC3Kj/fvshYIasM9FT3ZywnbUjroXqa1mfAZHY1wchQT8lnHKK5zjEEI8v1EbvcSI/FaEizBI4c8BFoc+/HUHANAN2tTk9PP0RVn/kdyc0tsgWRdtyZqXtQ3U1xOnYs8lQPR4fbgeY2H2zn4oaqEjaCQyrvmH8gpzN4HtXI22MSQIUzFnV/ZGA2GBZzHi9wSg0BAC3jffO4tkUbB95Qq6W/vjSJUT3hvoUmiUQYLb8Fx+PYLfQngGy8W9bYXCabk1m433DWI8Srj0vz4/nVr+tqe7JcfT/nvhxWCmeXPGD2tPOptW5sW6NYb2cTJ4KIdHVrHcevhQABnUjoj03Cd3AbxEDIMIJZNfLSwpv0AFS6xJdBV9GtFxsGb1swVBavUNcpeqXhDP3wCNsu1RA8Eze+BPXkwQi86k7UMCYDYp3g
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(39860400002)(376002)(136003)(366004)(396003)(346002)(451199021)(82960400001)(38100700002)(6506007)(186003)(26005)(9686003)(53546011)(83380400001)(2906002)(5660300002)(6512007)(8676002)(8936002)(41300700001)(85182001)(478600001)(54906003)(6486002)(6666004)(66556008)(66946007)(66476007)(316002)(4326008)(6916009)(86362001)(33716001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RmRhaURvbDlTY0pwOS80bmM3SWNrZ1Z0Y05kc09aeFQ1ZW00aFVhSWg3TDJi?=
 =?utf-8?B?Vk9NV0pVUXh2ZHFQN2RRSUJicEhwYjdsUGx4WlJhOEV1ZTZBREt3N2RxT3Mz?=
 =?utf-8?B?bFM0NGlpM2pKOE02RXBMTHBQWGQrVGE4a1FPSDVJSjRlZnBoTU9FUHlScXNl?=
 =?utf-8?B?V3crVmsySVZwWTBOMEdtNXYydE5YMWl4U1lua3VGbmhjTGY1Z0gvSGpFVEdj?=
 =?utf-8?B?bDZ0QmQ0bFIrdDQyS2pCWjN5VDJBZE9yRzBVeW9LaWkvUHZZKzhubk5nK1lQ?=
 =?utf-8?B?Tk5SRnR6NHd4N3E4WGZiVmhCZjJEaE5paTU0TVArS2JZZDVVcXFsVkFlVGc4?=
 =?utf-8?B?K2ZOdHk1SlJpUFpMOGVwQmQ3SnBLNFZKcWxRbnJXaVJlQk1UU25nZWdOVzli?=
 =?utf-8?B?Zkw4RWs0UWkzUFRseDVRZVNkZmdWc3EzNHhWeTM3Sk80TVIrYklWRjNNSGdL?=
 =?utf-8?B?dmFpVi9xcWpnVTZ4UmVoMmR4REMzWnVsQ0ZYWWJWbDZBZHNYdUJqSklRT3ln?=
 =?utf-8?B?OC9aNXFzejBHTVQ2cnA4OS8rdkdoamlMbUlweVQ1bkVtZHh3Mk5xVTF4SXVF?=
 =?utf-8?B?SFZNeDlPTzZhclZyS25qc1JTeS9jcWorakxrVmFSZlM5Z20xRUtpSFF4Um5s?=
 =?utf-8?B?QTBTei9FZmlVQktGWmRUYXRLM3E4K09QOVpDSlhlVFZ4ZWZFTFBRTWlxdENw?=
 =?utf-8?B?QVI0RVRFUjd0cGw2dEVxNFVIRHNvemt1SUJwQ24yYmFwV0JndFNoR0R0Z0lT?=
 =?utf-8?B?ZW50RGJoam9nOHJzcjRPc0lKWkErSWY1cWcvbTYyYVhvL21EM092dWJqZ0Zp?=
 =?utf-8?B?WXhPeXhHNWZ6WEN1Y0RyWHRLT1FXL2IybTE4ZER1NnZFWTcwNHljSHlmT3Ro?=
 =?utf-8?B?MXhNUkUrYjZIVkF1UnR1WEJxQ2M5ODNxeGlwanRqWmpDWkV1WkRabHZhWFMv?=
 =?utf-8?B?VjV0N2VpeWEzY25FVkV4TktaK3RudFZFM25ZNEN3VzZvVUUwOVA4TFF1Vllz?=
 =?utf-8?B?bmw0d2lzc0tGU252NjJIeWJXQWxQOVRtUEhCa1RrL2s5dXBleG05MDEyRnda?=
 =?utf-8?B?RjlYSk9sRFRKN01nSGVVa0F5K3ZJYTNJSGF1bzZGYy8wdmlySUlud08zK2dH?=
 =?utf-8?B?Q3hjRzhXTlRZMFhCaktCNG5rZi9aZ0VTdThNM1dON09Ya3A0bGlFWWJCOWJP?=
 =?utf-8?B?Q0VEMEtNaVJWbWY1M1F5RVZpZE5ORHE0V3pKbU4xQlhtTjErakV3TmU5R2Nx?=
 =?utf-8?B?U2pkYkplNXVqd2pVeUxhNGFZSDRDWk91dVh4ZEV1bEJUY0YwQW9MMzh3eGly?=
 =?utf-8?B?ZUNBTXZoUm1UQUlHMCs5WWZVaUt0N09Udk0zQnVwVmpEL3lSSzd1aHAyRzhl?=
 =?utf-8?B?a1U0WGlISVh2QmZHM2JUeUk3UmJBZWRtTWptdlJzQmd5VVJJS1VVTVMwazJT?=
 =?utf-8?B?ZlFENmp5M0xPY24rdHVYbVB1ZTByc0pHc0NQT1VwTFlzVlo0TjdDMjJ3dklm?=
 =?utf-8?B?ZGlSMWtaaFNTYTNyeXdFUHlBcmN4MFNLSHh1MFZLcXR4QU1WVC9hVXQxZWRa?=
 =?utf-8?B?VDlaaXNwL0NYWllMaytua0orb1FRRVowS1RDUzVmNENqOFB5NnlMckdocFZm?=
 =?utf-8?B?V0p0NlRSRkgrT2RMYlU0Q0QzWXBwMkErZTFQa3ZPRitEaTNNRGtFWWE4SEJO?=
 =?utf-8?B?S0djUFBuWWdodzJiVGhPSkpWS1VvWnNIU0gvTWV4a0w4ekFPemxtWlYxQmJw?=
 =?utf-8?B?UEdtR093Zlp6QklteS9KdE0xQ1dIYWx5Zy9DNUlLSENYNXlqUEtDeDJxcXFG?=
 =?utf-8?B?azFidkc2b0M1YWk3ZEdGNGpIYXlJYlFLVzRlc3VtTGRnQlZmZVViQWRvZXlz?=
 =?utf-8?B?WC9WaWgzL1J1d1Q3eC9PRTJjTHFFMlpQRHBUMVFCTXVqekZzWXJyV1B5Wllq?=
 =?utf-8?B?RkZmSE8vYnZIcEgzSmRrRklXTGJhb2s0Yy9NeS9iUElQM2lHUTVuUVJLTTA1?=
 =?utf-8?B?Ty8vbWxsWjhhdEkrL0dWSmVqS3VtZUxoSS9HQnFOdHlURk9aWXozbHduUVRQ?=
 =?utf-8?B?TDhXNmJNdnEyMU9nVXZEWnIzWkROWE1pU2tnYjdlOHJuM01HK1FLZkZPYkpz?=
 =?utf-8?Q?Ouv0FFAGwoe/UVgcqDPe6LWqG?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	ruHxbP/oO5sl4xHW9jMTkKY632Ixxa6uSO3mQho7hh+8lt46BZbR/TJgBRjHPIyVvX7T0vYrBAY+nKDzxZ+0LNWq9HG/3+O5ppcWpXaCw3G8VSN2qt067riRJiRZgrBI6WwrI4mfO5pByLB9wY/1v0AGEFaR+c7A1Cf2LPoC31EsFgEcjpmNWHn06LqgQIdPwGpcR33zdFiTX9OKYMJ1+5StJHLs/NN4RusC4vcJ/odM/9+cxuYAKV724P61vU9XrpQ5mFKrWye00HLnndHgEJzWFs2ciJV37Wha+xEyySGKdF3u6Jyez0eV7L7c0xb/RxUlxTpywS6E6JLzW3dgtDTVpD+CaOSoKCLT2ESaseFYJaC4CyqVxIfaHF6sF6xgd1hKXojZhiIE/MkulMopXWfzDoWJ3klCVsmlRe185X9oA4ja+ADnjYSg89aKFc8mPiet8/WCReaien2fIoR4EzsFNy4tApHboxPwtcwB81GfIp61XxT0ZBP1OtxDo0Djy001ELQaMH+id0zKuCCT5AaQjcmydjeVcmc+GnCxcYsJXMF7N0b3ZFXDRozEXI1yusOQsaGViZ9M++gMjNXvqVIyhhZi5nURFW88UXP82Oyb6bibO7DDNRbPN9AqPI3g+yPVS8XHj0BC1sjTwU9MR8Ne/sx0K2uVvHEuQpBH3kgJhJlGeG3yQLXh54iUocTbCmL3EX4/rjG+dhqytkrrnMhosJRu8VDJiUXstjt8c4imttQn09uPAOxYz5jtBOFDqahQmjhAt0vJuqcnF8T5DixqJA5ai5kH5Q9w8KkmiwcUW19Ro0jY4l77zSyAlX2Gi49Ol3DVXQJqZJv7olSj2A3GgH/VIZLPMumxCa9nz8Y=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a8c3e553-e827-4e5d-36ee-08db55e6fc5c
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 08:24:37.0479
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6251

On Tue, May 16, 2023 at 10:10:07AM +0200, Roger Pau Monné wrote:
> On Mon, May 15, 2023 at 05:16:27PM -0700, Stefano Stabellini wrote:
> > On Mon, 15 May 2023, Jan Beulich wrote:
> > > On 15.05.2023 10:48, Roger Pau Monné wrote:
> > > > On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
> > > >> From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > >>
> > > >> Xen always generates an XSDT table even if the firmware provided an RSDT
> > > >> table. Instead of copying the XSDT header from the firmware table (which
> > > >> might be missing), generate the XSDT header from a preset.
> > > >>
> > > >> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > >> ---
> > > >>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
> > > >>  1 file changed, 9 insertions(+), 23 deletions(-)
> > > >>
> > > >> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > > >> index 307edc6a8c..5fde769863 100644
> > > >> --- a/xen/arch/x86/hvm/dom0_build.c
> > > >> +++ b/xen/arch/x86/hvm/dom0_build.c
> > > >> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> > > >>                                        paddr_t *addr)
> > > >>  {
> > > >>      struct acpi_table_xsdt *xsdt;
> > > >> -    struct acpi_table_header *table;
> > > >> -    struct acpi_table_rsdp *rsdp;
> > > >>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
> > > >>      unsigned long size = sizeof(*xsdt);
> > > >>      unsigned int i, j, num_tables = 0;
> > > >> -    paddr_t xsdt_paddr;
> > > >>      int rc;
> > > >> +    struct acpi_table_header header = {
> > > >> +        .signature    = "XSDT",
> > > >> +        .length       = sizeof(struct acpi_table_header),
> > > >> +        .revision     = 0x1,
> > > >> +        .oem_id       = "Xen",
> > > >> +        .oem_table_id = "HVM",
> > > > 
> > > > I think this is wrong, as according to the spec the OEM Table ID must
> > > > match the OEM Table ID in the FADT.
> > > > 
> > > > We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
> > > > possibly also the other OEM-related fields.
> > > >
> > > > Alternatively we might want to copy and use the RSDT on systems that
> > > > lack an XSDT, or even just copy the header from the RSDT into Xen's
> > > > crafted XSDT, since the format of the RSDT and XSDT headers is
> > > > exactly the same (the difference is in the size of the entries
> > > > that come after).
> > > 
> > > I guess I'd prefer that last variant.
> > 
> > I tried this approach (together with the second patch, which is also
> > needed) and it worked.
> > 
> > diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > index fd2cbf68bc..11d6d1bc23 100644
> > --- a/xen/arch/x86/hvm/dom0_build.c
> > +++ b/xen/arch/x86/hvm/dom0_build.c
> > @@ -967,6 +967,10 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> >          goto out;
> >      }
> >      xsdt_paddr = rsdp->xsdt_physical_address;
> > +    if ( !xsdt_paddr )
> > +    {
> > +        xsdt_paddr = rsdp->rsdt_physical_address;
> > +    }
> >      acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
> >      table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
> >      if ( !table )
> 
> To be slightly more consistent, could you use:
> 
> /*
>  * Note the header is the same for both RSDT and XSDT, so it's fine to
>  * copy the native RSDT header to the Xen crafted XSDT if no native
>  * XSDT is available.
>  */
> if ( rsdp->revision > 1 && rsdp->xsdt_physical_address )
>     sdt_paddr = rsdp->xsdt_physical_address;
> else
>     sdt_paddr = rsdp->rsdt_physical_address;
> 
> It was an oversight of mine to not check for the RSDP revision, as
> RSDP < 2 will never have an XSDT.  Also add:
> 
> Fixes: 1d74282c455f ('x86: setup PVHv2 Dom0 ACPI tables')

Just realized this will require some more work so that the RSDP provided
to the guest (dom0) is at least revision 2.  You will need to adjust the
revision field and recalculate the checksum if needed.
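For illustration, the revision bump plus checksum fixup could look like the
standalone sketch below. This is not the actual Xen code: the struct is a
simplified version of ACPICA's struct acpi_table_rsdp, covering only the
ACPI 1.0 (20-byte) portion that the first checksum field protects.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* ACPI checksums are simple byte sums: all covered bytes must add up to
 * 0 (mod 256). */
static uint8_t acpi_byte_sum(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint8_t sum = 0;

    while ( len-- )
        sum += *p++;

    return sum;
}

/* Simplified RSDP: only the ACPI 1.0 (20-byte) part, which is what the
 * first checksum field covers.  A real revision >= 2 RSDP additionally
 * carries length, xsdt_physical_address, and an extended_checksum over
 * the full 36 bytes, which would need the same treatment. */
struct rsdp_v1 {
    char signature[8];
    uint8_t checksum;
    char oem_id[6];
    uint8_t revision;
    uint32_t rsdt_physical_address;
};

/* Bump the revision, then refit the checksum so all bytes sum to 0. */
static void rsdp_set_revision(struct rsdp_v1 *rsdp, uint8_t rev)
{
    rsdp->revision = rev;
    rsdp->checksum = 0;
    rsdp->checksum = -acpi_byte_sum(rsdp, sizeof(*rsdp));
}
```

The point is only that any field change must be followed by zeroing and
recomputing the relevant checksum byte(s); in Xen the real layout comes
from ACPICA's struct acpi_table_rsdp.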

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 16 09:11:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 09:11:33 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180674-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180674: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 09:11:17 +0000

flight 180674 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180674/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 18 guest-localmigrate/x10 fail in 180670 pass in 180674
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail in 180670 pass in 180674
 test-arm64-arm64-xl-credit2   7 xen-install                fail pass in 180670
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180670
 test-arm64-arm64-libvirt-raw 17 guest-start/debian.repeat  fail pass in 180670

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 180670 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 180670 never pass
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   29 days
Failing since        180281  2023-04-17 06:24:36 Z   29 days   54 attempts
Testing same since   180664  2023-05-14 20:12:01 Z    1 days    4 attempts

------------------------------------------------------------
2389 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 302499 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 16 09:13:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 09:13:26 +0000
Message-ID: <cb550850-4cf7-5bb6-d84f-633be1ce3bac@suse.com>
Date: Tue, 16 May 2023 11:13:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
Content-Language: en-US
To: Roger Pau Monné <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 Xenia.Ragiadakou@amd.com, Stefano Stabellini <stefano.stabellini@amd.com>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-1-sstabellini@kernel.org>
 <ZGHx9Mk3UGPdli1h@Air-de-Roger>
 <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
 <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
 <ZGM6X19p50oSqbNB@Air-de-Roger> <ZGM9vzwGm7Jv6i7M@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZGM9vzwGm7Jv6i7M@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e555bcd3-e7b8-4c77-b54f-08db55edca5c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 09:13:19.6599
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tf3VlUVkZI+3EE4HCTFEsTnM+wLBqqUyAagETvognbbFR2IDffuOH+tfjX0Uw8TgUfe+5v4tFOEiBXIeVqNZiw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8925

On 16.05.2023 10:24, Roger Pau Monné wrote:
> On Tue, May 16, 2023 at 10:10:07AM +0200, Roger Pau Monné wrote:
>> On Mon, May 15, 2023 at 05:16:27PM -0700, Stefano Stabellini wrote:
>>> On Mon, 15 May 2023, Jan Beulich wrote:
>>>> On 15.05.2023 10:48, Roger Pau Monné wrote:
>>>>> On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
>>>>>> From: Stefano Stabellini <stefano.stabellini@amd.com>
>>>>>>
>>>>>> Xen always generates an XSDT table even if the firmware provided an
>>>>>> RSDT table. Instead of copying the XSDT header from the firmware table
>>>>>> (which might be missing), generate the XSDT header from a preset.
>>>>>>
>>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
>>>>>> ---
>>>>>>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
>>>>>>  1 file changed, 9 insertions(+), 23 deletions(-)
>>>>>>
>>>>>> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
>>>>>> index 307edc6a8c..5fde769863 100644
>>>>>> --- a/xen/arch/x86/hvm/dom0_build.c
>>>>>> +++ b/xen/arch/x86/hvm/dom0_build.c
>>>>>> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
>>>>>>                                        paddr_t *addr)
>>>>>>  {
>>>>>>      struct acpi_table_xsdt *xsdt;
>>>>>> -    struct acpi_table_header *table;
>>>>>> -    struct acpi_table_rsdp *rsdp;
>>>>>>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
>>>>>>      unsigned long size = sizeof(*xsdt);
>>>>>>      unsigned int i, j, num_tables = 0;
>>>>>> -    paddr_t xsdt_paddr;
>>>>>>      int rc;
>>>>>> +    struct acpi_table_header header = {
>>>>>> +        .signature    = "XSDT",
>>>>>> +        .length       = sizeof(struct acpi_table_header),
>>>>>> +        .revision     = 0x1,
>>>>>> +        .oem_id       = "Xen",
>>>>>> +        .oem_table_id = "HVM",
>>>>>
>>>>> I think this is wrong, as according to the spec the OEM Table ID must
>>>>> match the OEM Table ID in the FADT.
>>>>>
>>>>> We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
>>>>> possibly also the other OEM related fields.
>>>>>
>>>>> Alternatively we might want to copy and use the RSDT on systems that
>>>>> lack an XSDT, or even just copy the header from the RSDT into Xen's
>>>>> crafted XSDT, since the format of the RSDT and XSDT headers is
>>>>> exactly the same (the difference is in the size of the table-address
>>>>> entries that come after).
>>>>
>>>> I guess I'd prefer that last variant.
>>>
>>> I tried this approach (together with the second patch for necessity) and
>>> it worked.
>>>
>>> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
>>> index fd2cbf68bc..11d6d1bc23 100644
>>> --- a/xen/arch/x86/hvm/dom0_build.c
>>> +++ b/xen/arch/x86/hvm/dom0_build.c
>>> @@ -967,6 +967,10 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
>>>          goto out;
>>>      }
>>>      xsdt_paddr = rsdp->xsdt_physical_address;
>>> +    if ( !xsdt_paddr )
>>> +    {
>>> +        xsdt_paddr = rsdp->rsdt_physical_address;
>>> +    }
>>>      acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
>>>      table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
>>>      if ( !table )
>>
>> To be slightly more consistent, could you use:
>>
>> /*
>>  * Note the header is the same for both RSDT and XSDT, so it's fine to
>>  * copy the native RSDT header to the Xen crafted XSDT if no native
>>  * XSDT is available.
>>  */
>> if (rsdp->revision > 1 && rsdp->xsdt_physical_address)
>>     sdt_paddr = rsdp->xsdt_physical_address;
>> else
>>     sdt_paddr = rsdp->rsdt_physical_address;
>>
>> It was an oversight of mine to not check for the RSDP revision, as
>> RSDP < 2 will never have an XSDT.  Also add:
>>
>> Fixes: 1d74282c455f ('x86: setup PVHv2 Dom0 ACPI tables')
> 
> Just realized this will require some more work so that the RSDP
> provided to the guest (dom0) is at least revision 2.  You will need to
> adjust the field and recalculate the checksum if needed.

We could also mandate ACPI version >= 2 for PVH Dom0.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 09:21:47 2023
Date: Tue, 16 May 2023 11:21:06 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Message-ID: <ZGNLArlA0Yei4Fr0@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
 <ZGH+5OKqnjTjUr/F@Air-de-Roger>
 <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>

On Mon, May 15, 2023 at 05:11:25PM -0700, Stefano Stabellini wrote:
> On Mon, 15 May 2023, Roger Pau Monné wrote:
> > On Fri, May 12, 2023 at 06:17:20PM -0700, Stefano Stabellini wrote:
> > > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > 
> > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruption of
> > > the tables in the guest. Instead, copy the tables to Dom0.
> > > 
> > > This is a workaround.
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > ---
> > > As mentioned in the cover letter, this is an RFC workaround as I don't
> > > know the cause of the underlying problem. I do know that this patch
> > > solves what would be otherwise a hang at boot when Dom0 PVH attempts to
> > > parse ACPI tables.
> > 
> > I'm unsure how safe this is for native systems, as it's possible for
> > firmware to modify the data in the tables, so copying them would
> > break that functionality.
> > 
> > I think we need to get to the root cause that triggers this behavior
> > on QEMU.  Is it the table checksum that fails, or something else?  Is
> > there an error from Linux you could reference?
> 
> I agree with you but so far I haven't managed to get to the root
> of the issue. Here is what I know. These are the logs of a successful
> boot using this patch:
> 
> [   10.437488] ACPI: Early table checksum verification disabled
> [   10.439345] ACPI: RSDP 0x000000004005F955 000024 (v02 BOCHS )
> [   10.441033] ACPI: RSDT 0x000000004005F979 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> [   10.444045] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> [   10.445984] ACPI: FACP 0x000000004005FA65 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
> [   10.447170] ACPI BIOS Warning (bug): Incorrect checksum in table [FACP] - 0x67, should be 0x30 (20220331/tbprint-174)
> [   10.449522] ACPI: DSDT 0x000000004005FB19 00145D (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
> [   10.451258] ACPI: FACS 0x000000004005FAD9 000040
> [   10.452245] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> [   10.452389] ACPI: Reserving FACP table memory at [mem 0x4005fa65-0x4005fad8]
> [   10.452497] ACPI: Reserving DSDT table memory at [mem 0x4005fb19-0x40060f75]
> [   10.452602] ACPI: Reserving FACS table memory at [mem 0x4005fad9-0x4005fb18]
> 
> 
> And these are the logs of the same boot (unsuccessful) without this
> patch:
> 
> [   10.516015] ACPI: Early table checksum verification disabled
> [   10.517732] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> [   10.519535] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> [   10.522523] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> [   10.527453] ACPI: ���� 0x000000007FFE149D FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> [   10.528362] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> [   10.528491] ACPI: Reserving ���� table memory at [mem 0x7ffe149d-0x17ffe149b]
> 
> It is clearly a memory corruption around FACS but I couldn't find the
> reason for it. The mapping code looks correct. I hope you can suggest a
> way to narrow down the problem. If I could, I would suggest applying
> this patch just for the QEMU PVH tests, but we don't have the
> infrastructure for that in gitlab-ci as there is a single Xen build for
> all tests.

It would be helpful to see the memory map provided to Linux, just in case
we messed up and there's some overlap.

It seems like one of the XSDT entries (the FADT one) is corrupt?

Could you maybe add some debug to the Xen-crafted XSDT placement?

> 
> If it helps to repro on your side, you can just do the following,
> assuming your Xen repo is in /local/repos/xen:
> 
> 
> cd /local/repos/xen
> mkdir binaries
> cd binaries
> mkdir -p dist/install/
> 
> docker run -it -v `pwd`:`pwd` registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12
> cp /initrd* /local/repos/xen/binaries
> exit
> 
> docker run -it -v `pwd`:`pwd` registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:6.1.19
> cp /bzImage /local/repos/xen/binaries
> exit
> 
> That's it. Now you have enough pre-built binaries to repro the issue.
> Next you can edit automation/scripts/qemu-alpine-x86_64.sh to add
> 
>   dom0=pvh dom0_mem=1G dom0-iommu=none

Do you get to boot with dom0-iommu=none?  Is there also some trick
here to identity-map dom0?  I would expect things not to work,
because the addresses used for I/O with QEMU-emulated devices won't
be correct.

> 
> on the Xen command line. I also removed "timeout" and pipe "tee" at the
> end for my own convenience:
> 
>  # Run the test
> -rm -f smoke.serial
> -set +e
> -timeout -k 1 720 \
>  qemu-system-x86_64 \
>      -cpu qemu64,+svm \
>      -m 2G -smp 2 \
>      -monitor none -serial stdio \
>      -nographic \
>      -device virtio-net-pci,netdev=n0 \
> -    -netdev user,id=n0,tftp=binaries,bootfile=/pxelinux.0 |& tee smoke.serial
> +    -netdev user,id=n0,tftp=binaries,bootfile=/pxelinux.0
>  
> 
> Make sure to build the Xen hypervisor binary and place it under
> /local/repos/xen/binaries/
> 
> Finally, you can run the test with the commands below:
> 
> cd ..
> docker run -it -v /local/repos/xen:/local/repos/xen registry.gitlab.com/xen-project/xen/debian:unstable
> cd /local/repos/xen
> bash automation/scripts/qemu-alpine-x86_64.sh
> 
> It usually gets stuck halfway through the boot without this patch.

Thanks for the instructions, will give it a try if I can find some
time.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 16 09:23:43 2023
Date: Tue, 16 May 2023 11:23:25 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
Message-ID: <ZGNLjYWLaOzgfH+z@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-1-sstabellini@kernel.org>
 <ZGHx9Mk3UGPdli1h@Air-de-Roger>
 <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
 <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
 <ZGM6X19p50oSqbNB@Air-de-Roger>
 <ZGM9vzwGm7Jv6i7M@Air-de-Roger>
 <cb550850-4cf7-5bb6-d84f-633be1ce3bac@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cb550850-4cf7-5bb6-d84f-633be1ce3bac@suse.com>
X-ClientProxiedBy: LO2P265CA0515.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13b::22) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0

On Tue, May 16, 2023 at 11:13:17AM +0200, Jan Beulich wrote:
> On 16.05.2023 10:24, Roger Pau Monné wrote:
> > On Tue, May 16, 2023 at 10:10:07AM +0200, Roger Pau Monné wrote:
> >> On Mon, May 15, 2023 at 05:16:27PM -0700, Stefano Stabellini wrote:
> >>> On Mon, 15 May 2023, Jan Beulich wrote:
> >>>> On 15.05.2023 10:48, Roger Pau Monné wrote:
> >>>>> On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
> >>>>>> From: Stefano Stabellini <stefano.stabellini@amd.com>
> >>>>>>
> >>>>>> Xen always generates an XSDT table even if the firmware provided an
> >>>>>> RSDT table. Instead of copying the XSDT header from the firmware table
> >>>>>> (which might be missing), generate the XSDT header from a preset.
> >>>>>>
> >>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> >>>>>> ---
> >>>>>>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
> >>>>>>  1 file changed, 9 insertions(+), 23 deletions(-)
> >>>>>>
> >>>>>> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> >>>>>> index 307edc6a8c..5fde769863 100644
> >>>>>> --- a/xen/arch/x86/hvm/dom0_build.c
> >>>>>> +++ b/xen/arch/x86/hvm/dom0_build.c
> >>>>>> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> >>>>>>                                        paddr_t *addr)
> >>>>>>  {
> >>>>>>      struct acpi_table_xsdt *xsdt;
> >>>>>> -    struct acpi_table_header *table;
> >>>>>> -    struct acpi_table_rsdp *rsdp;
> >>>>>>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
> >>>>>>      unsigned long size = sizeof(*xsdt);
> >>>>>>      unsigned int i, j, num_tables = 0;
> >>>>>> -    paddr_t xsdt_paddr;
> >>>>>>      int rc;
> >>>>>> +    struct acpi_table_header header = {
> >>>>>> +        .signature    = "XSDT",
> >>>>>> +        .length       = sizeof(struct acpi_table_header),
> >>>>>> +        .revision     = 0x1,
> >>>>>> +        .oem_id       = "Xen",
> >>>>>> +        .oem_table_id = "HVM",
> >>>>>
> >>>>> I think this is wrong, as according to the spec the OEM Table ID must
> >>>>> match the OEM Table ID in the FADT.
> >>>>>
> >>>>> We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
> >>>>> possibly also the other OEM related fields.
> >>>>>
> >>>>> Alternatively we might want to copy and use the RSDT on systems that
> >>>>> lack an XSDT, or even just copy the header from the RSDT into Xen's
> >>>>> crafted XSDT, since the format of the RSDT and the XSDT headers is
> >>>>> exactly the same (the difference is in the size of the table entries
> >>>>> that come after).
> >>>>
> >>>> I guess I'd prefer that last variant.
> >>>
> >>> I tried this approach (together with the second patch for necessity) and
> >>> it worked.
> >>>
> >>> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> >>> index fd2cbf68bc..11d6d1bc23 100644
> >>> --- a/xen/arch/x86/hvm/dom0_build.c
> >>> +++ b/xen/arch/x86/hvm/dom0_build.c
> >>> @@ -967,6 +967,10 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> >>>          goto out;
> >>>      }
> >>>      xsdt_paddr = rsdp->xsdt_physical_address;
> >>> +    if ( !xsdt_paddr )
> >>> +    {
> >>> +        xsdt_paddr = rsdp->rsdt_physical_address;
> >>> +    }
> >>>      acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
> >>>      table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
> >>>      if ( !table )
> >>
> >> To be slightly more consistent, could you use:
> >>
> >> /*
> >>  * Note the header is the same for both RSDT and XSDT, so it's fine to
> >>  * copy the native RSDT header to the Xen crafted XSDT if no native
> >>  * XSDT is available.
> >>  */
> >> if (rsdp->revision > 1 && rsdp->xsdt_physical_address)
> >>     sdt_paddr = rsdp->xsdt_physical_address;
> >> else
> >>     sdt_paddr = rsdp->rsdt_physical_address;
> >>
> >> It was an oversight of mine to not check for the RSDP revision, as
> >> RSDP < 2 will never have an XSDT.  Also add:
> >>
> >> Fixes: 1d74282c455f ('x86: setup PVHv2 Dom0 ACPI tables')
> > 
> > Just realized this will require some more work so that the guest
> > (dom0) provided RSDP is at least revision 2.  You will need to adjust
> > the field and recalculate the checksum if needed.
> 
> We could also mandate ACPI version >= 2 for PVH Dom0.

Sorry, as mentioned on IRC, the above is not required because the RSDP
provided to dom0 is already crafted by Xen and unconditionally set to
version == 2.  There's no need to adjust the RSDP at all.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 16 09:45:40 2023
Message-ID: <c819848e-d20a-a83e-9387-dd4fd95a6daa@citrix.com>
Date: Tue, 16 May 2023 10:45:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/6] x86/boot: Rework dom0 feature configuration
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-2-andrew.cooper3@citrix.com>
 <97825c89-87c5-1156-5621-9d03286fd865@suse.com>
In-Reply-To: <97825c89-87c5-1156-5621-9d03286fd865@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 16/05/2023 8:58 am, Jan Beulich wrote:
> On 15.05.2023 16:42, Andrew Cooper wrote:
>> Right now, dom0's feature configuration is split between the common
>> path and a dom0-specific one.  This is mostly by accident, and causes some
>> very subtle bugs.
>>
>> First, start by clearly defining init_dom0_cpuid_policy() to be for the
>> domain that Xen builds automatically.  The late hwdom case is still
>> constructed in a mostly normal way, with the control domain having full
>> discretion over the CPU policy.
>>
>> Identifying this highlights a latent bug - the two halves of the MSR_ARCH_CAPS
>> bodge are asymmetric with respect to the hardware domain.  This means that
>> shim, or a control-only dom0 sees the MSR_ARCH_CAPS CPUID bit but none of the
>> MSR content.  This in turn declares the hardware to be retpoline-safe by
>> failing to advertise the {R,}RSBA bits appropriately.  Restrict this logic to
>> the hardware domain, although the special case will cease to exist shortly.
>>
>> For the CPUID Faulting adjustment, the comment in ctxt_switch_levelling()
>> isn't actually relevant.  Provide a better explanation.
>>
>> Move the recalculate_cpuid_policy() call outside of the dom0-cpuid= case.
>> This is no change for now, but will become necessary shortly.
>>
>> Finally, place the second half of the MSR_ARCH_CAPS bodge after the
>> recalculate_cpuid_policy() call.  This is necessary to avoid transiently
>> breaking the hardware domain's view while the handling is cleaned up.  This
>> special case will cease to exist shortly.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> with one question / suggestion:
>
>> @@ -858,7 +839,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>>       * so dom0 can turn off workarounds as appropriate.  Temporary, until the
>>       * domain policy logic gains a better understanding of MSRs.
>>       */
>> -    if ( cpu_has_arch_caps )
>> +    if ( is_hardware_domain(d) && cpu_has_arch_caps )
>>          p->feat.arch_caps = true;
> As a result of this, ...
>
>> @@ -876,8 +857,32 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>>          }
>>  
>>          x86_cpu_featureset_to_policy(fs, p);
>> +    }
>> +
>> +    /*
>> +     * PV Control domains used to require unfiltered CPUID.  This was fixed in
>> +     * Xen 4.13, but there is a cmdline knob to restore the prior behaviour.
>> +     *
>> +     * If the domain is getting unfiltered CPUID, don't let the guest kernel
>> +     * play with CPUID faulting either, as Xen's CPUID path won't cope.
>> +     */
>> +    if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
>> +        p->platform_info.cpuid_faulting = false;
>>  
>> -        recalculate_cpuid_policy(d);
>> +    recalculate_cpuid_policy(d);
>> +
>> +    if ( is_hardware_domain(d) && cpu_has_arch_caps )
> ... it would feel slightly more logical if p->feat.arch_caps was used here.
> Whether that's to replace the entire condition or merely the right side of
> the && depends on what the subsequent changes require (which I haven't
> looked at yet).

I'd really prefer to leave it as-is.

You're likely right, but this entire block is deleted in patch 6 and it
has been a giant pain already making this series bisectable WRT all our
special cases.

For the sake of a few patches, it's safer to go with it in the
pre-existing form.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 16 10:32:20 2023
From: Florian Schmaus <flo@geekplace.eu>
To: xen-devel@lists.xenproject.org
Cc: Florian Schmaus <flo@geekplace.eu>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] m4/ptyfuncs.m4 tools/configure: add linux headers for pty functions
Date: Tue, 16 May 2023 11:13:55 +0200
Message-Id: <20230516091355.721398-1-flo@geekplace.eu>
X-Mailer: git-send-email 2.39.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Include the Linux pty headers to avoid implicit function declarations,
which cause an error with modern compilers. See
https://wiki.gentoo.org/wiki/Modern_C_porting

Downstream Gentoo bug: https://bugs.gentoo.org/904449

Signed-off-by: Florian Schmaus <flo@geekplace.eu>
---
 m4/ptyfuncs.m4  | 3 +++
 tools/configure | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/m4/ptyfuncs.m4 b/m4/ptyfuncs.m4
index 3e37b5a23c8b..d1a2208398e3 100644
--- a/m4/ptyfuncs.m4
+++ b/m4/ptyfuncs.m4
@@ -19,6 +19,9 @@ AC_DEFUN([AX_CHECK_PTYFUNCS], [
             AC_LINK_IFELSE([AC_LANG_SOURCE([
 #ifdef INCLUDE_LIBUTIL_H
 #include INCLUDE_LIBUTIL_H
+#else
+#include <pty.h>
+#include <utmp.h>
 #endif
 int main(void) {
   openpty(0,0,0,0,0);
diff --git a/tools/configure b/tools/configure
index 5df30df9b35c..01f57b20c318 100755
--- a/tools/configure
+++ b/tools/configure
@@ -9002,6 +9002,9 @@ See \`config.log' for more details" "$LINENO" 5; }
 
 #ifdef INCLUDE_LIBUTIL_H
 #include INCLUDE_LIBUTIL_H
+#else
+#include <pty.h>
+#include <utmp.h>
 #endif
 int main(void) {
   openpty(0,0,0,0,0);
-- 
2.39.3



From xen-devel-bounces@lists.xenproject.org Tue May 16 10:35:11 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f80f530-f3d5-11ed-b229-6b7b168915f2
Message-ID: <6e9af5ec-4484-38ed-2b40-6231baa9ec93@citrix.com>
Date: Tue, 16 May 2023 11:34:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] m4/ptyfuncs.m4 tools/configure: add linux headers for pty
 functions
Content-Language: en-GB
To: Florian Schmaus <flo@geekplace.eu>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230516091355.721398-1-flo@geekplace.eu>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230516091355.721398-1-flo@geekplace.eu>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 16/05/2023 10:13 am, Florian Schmaus wrote:
> To avoid implicit function declarations, which will cause an error on
> modern compilers. See https://wiki.gentoo.org/wiki/Modern_C_porting
>
> Downstream Gentoo bug: https://bugs.gentoo.org/904449
>
> Signed-off-by: Florian Schmaus <flo@geekplace.eu>

Thanks for the patch, but there's already a different fix in flight.

Does
https://lore.kernel.org/xen-devel/20230512122614.3724-1-olaf@aepfle.de/
work for you?  If so, we'd definitely prefer to take the deletion of
obsolete functionality.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 16 10:52:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 10:52:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535113.832733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pysHs-0004t8-7r; Tue, 16 May 2023 10:52:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535113.832733; Tue, 16 May 2023 10:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pysHs-0004t1-51; Tue, 16 May 2023 10:52:12 +0000
Received: by outflank-mailman (input) for mailman id 535113;
 Tue, 16 May 2023 10:52:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tkax=BF=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pysHr-0004sv-AW
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 10:52:11 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.24]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b4741303-f3d7-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 12:52:09 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4GAq4WXG
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 16 May 2023 12:52:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4741303-f3d7-11ed-b229-6b7b168915f2
Date: Tue, 16 May 2023 10:51:55 +0000
From: Olaf Hering <olaf@aepfle.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v1] automation: provide example for downloading an
 existing container
Message-ID: <20230516105155.0c59143a@sender>
In-Reply-To: <alpine.DEB.2.22.394.2305151533320.4125828@ubuntu-linux-20-04-desktop>
References: <20230502201444.6532-1-olaf@aepfle.de>
	<alpine.DEB.2.22.394.2305151533320.4125828@ubuntu-linux-20-04-desktop>
X-Mailer: Claws Mail 2023.04.19 (GTK 3.24.34; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/3N4AkxyLAJ0de+qo.mFApG0";
 protocol="application/pgp-signature"; micalg=pgp-sha1
Content-Transfer-Encoding: 7bit

--Sig_/3N4AkxyLAJ0de+qo.mFApG0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Mon, 15 May 2023 16:03:01 -0700 (PDT),
Stefano Stabellini <sstabellini@kernel.org> wrote:

> Given that opensuse-tumbleweed is still broken (doesn't complete the
> Xen build successfully) even after these patches, I suggest we use a
> different example?

I think the example in automation/build/README.md needs to be fixed.
Right now it does something different from what the GitLab CI does.

The CI runs automation/scripts/build with some environment variables
set, while the example runs automation/scripts/containerize.


For me qemu-xen builds. I assume it is supposed to be 'master ==
"8c51cd9705 (HEAD -> dummy, origin/staging, origin/master, origin/HEAD,
master) hw/xen/xen_pt: fix uninitialized variable"', but we do not know
what the CI tests, because scripts/git-checkout.sh does not show what
HEAD actually is. I think it needs to run "$GIT --no-pager log --oneline
-n1" at the end, so everyone knows what 'dummy' actually is.
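
The suggestion above can be sketched as a one-line addition at the end of the checkout script. The throwaway repository below only exists to make the demonstration self-contained; the $GIT variable and script name come from the mail, while the dummy commit is an assumption:

```shell
# Demonstrate the proposed final step of scripts/git-checkout.sh:
# print the commit HEAD actually resolves to, so CI logs show what
# was built.  A scratch repository stands in for the real checkout.
GIT=${GIT:-git}
tmp=$(mktemp -d)
cd "$tmp"
"$GIT" init -q .
"$GIT" -c user.email=ci@example.invalid -c user.name=ci \
    commit -q --allow-empty -m "dummy commit"
# The proposed line: one-line summary of whatever HEAD is now.
"$GIT" --no-pager log --oneline -n1
```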


I think it is perfectly fine that both examples refer to Tumbleweed,
because one may need to fix future build errors, rather than test on
something that we already know works.


Olaf

--Sig_/3N4AkxyLAJ0de+qo.mFApG0
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iF0EARECAB0WIQSkRyP6Rn//f03pRUBdQqD6ppg2fgUCZGNgSwAKCRBdQqD6ppg2
fmrEAJ94nlngjTejzdC9xC8fKwFh7B+JsQCfb+K7d26Ckh+5PjMBnEt/qoy/P7Q=
=1j8c
-----END PGP SIGNATURE-----

--Sig_/3N4AkxyLAJ0de+qo.mFApG0--


From xen-devel-bounces@lists.xenproject.org Tue May 16 11:44:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 11:44:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535129.832744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyt65-0001tI-24; Tue, 16 May 2023 11:44:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535129.832744; Tue, 16 May 2023 11:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyt64-0001tB-V9; Tue, 16 May 2023 11:44:04 +0000
Received: by outflank-mailman (input) for mailman id 535129;
 Tue, 16 May 2023 11:44:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyt63-0001sz-HE
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 11:44:03 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on062b.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f338f911-f3de-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 13:44:01 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7819.eurprd04.prod.outlook.com (2603:10a6:10:1e9::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 11:43:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 11:43:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f338f911-f3de-11ed-b229-6b7b168915f2
Message-ID: <8e3759e3-f361-3838-5a71-9252ce83f293@suse.com>
Date: Tue, 16 May 2023 13:43:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/6] x86/boot: Rework dom0 feature configuration
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-2-andrew.cooper3@citrix.com>
 <97825c89-87c5-1156-5621-9d03286fd865@suse.com>
 <c819848e-d20a-a83e-9387-dd4fd95a6daa@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <c819848e-d20a-a83e-9387-dd4fd95a6daa@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 16.05.2023 11:45, Andrew Cooper wrote:
> On 16/05/2023 8:58 am, Jan Beulich wrote:
>> On 15.05.2023 16:42, Andrew Cooper wrote:
>>> @@ -858,7 +839,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>>>       * so dom0 can turn off workarounds as appropriate.  Temporary, until the
>>>       * domain policy logic gains a better understanding of MSRs.
>>>       */
>>> -    if ( cpu_has_arch_caps )
>>> +    if ( is_hardware_domain(d) && cpu_has_arch_caps )
>>>          p->feat.arch_caps = true;
>> As a result of this, ...
>>
>>> @@ -876,8 +857,32 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>>>          }
>>>  
>>>          x86_cpu_featureset_to_policy(fs, p);
>>> +    }
>>> +
>>> +    /*
>>> +     * PV Control domains used to require unfiltered CPUID.  This was fixed in
>>> +     * Xen 4.13, but there is a cmdline knob to restore the prior behaviour.
>>> +     *
>>> +     * If the domain is getting unfiltered CPUID, don't let the guest kernel
>>> +     * play with CPUID faulting either, as Xen's CPUID path won't cope.
>>> +     */
>>> +    if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
>>> +        p->platform_info.cpuid_faulting = false;
>>>  
>>> -        recalculate_cpuid_policy(d);
>>> +    recalculate_cpuid_policy(d);
>>> +
>>> +    if ( is_hardware_domain(d) && cpu_has_arch_caps )
>> ... it would feel slightly more logical if p->feat.arch_caps was used here.
>> Whether that's to replace the entire condition or merely the right side of
>> the && depends on what the subsequent changes require (which I haven't
>> looked at yet).
> 
> I'd really prefer to leave it as-is.
> 
> You're likely right, but this entire block is deleted in patch 6 and it
> has been a giant pain already making this series bisectable WRT all our
> special cases.
> 
> For the sake of a few patches, it's safer to go with it in the
> pre-existing form.

Oh, sure, if this goes away anyway, then there's not that much point in
making such a change.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 11:47:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 11:47:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535135.832754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyt9K-0002XK-Kp; Tue, 16 May 2023 11:47:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535135.832754; Tue, 16 May 2023 11:47:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyt9K-0002XD-HC; Tue, 16 May 2023 11:47:26 +0000
Received: by outflank-mailman (input) for mailman id 535135;
 Tue, 16 May 2023 11:47:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyt9I-0002X4-Pn
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 11:47:24 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0614.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::614])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6c1bfb8a-f3df-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 13:47:24 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7077.eurprd04.prod.outlook.com (2603:10a6:20b:11c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.32; Tue, 16 May
 2023 11:47:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 11:47:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c1bfb8a-f3df-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c5DW2ykdH9B1FKtuEWwxfaKiUIVVA54QfEnAbwsDMwyVR9cEQp95QFgQDi/sLn+NjrsdGpe1EuYXvtCOK/ZyhJuHMJzOUKdE66FttGovE4fchR0CawbdmRTAwVTOWql2YKA6tEWOzrna++iarXylvTB82RZ6QOsNfwIHFdkEIbyokNMjm6elFwoBouJBpUQsoJQtgVMmJ+MlNnwKcQUTRuODbDUbLI8LdMo59ujYdURrWuPFbcC+VZuokv7CfV5VYSuinMqtO9dPk4vKi8BT8wK+UCsj0/T3BtnspdNjoBTr8t9ZV8iJGD5UMBUnu7L/osF2W9pI/ruS+3mqjURDOw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ntFgvsi7fESsY3d6UJGHVwwDUUmBEPxVNBJ8W6AJWB8=;
 b=mZJDQ6tsrWWCfPBald12ViR8OMJpq2IZuWssZ3OiBEs/TgbfDVraTFYEkBsk4Hk7qfEq2W87dfXvrhUDSbZiIaWqzSNgqqn05l0ZtdnNbducse3h9a0WmvHOLezR9Co70WcacQ45sezhJU+7cTHDZJCO6oi/angl87bm9U3eMU9fSdrFS75yjilSutWxRTiQcishQNP3tSAPaeibXsTy0My6fkr5yH69++KhXg5ktN+J7U6z92P8KxUYvM/0GL4iNYd/AJrulCaXCqxDMUVgdapTePtvh9ocOuGdw5gaEonoYJijUoeG1z6EmVqRWIyKwSeiOWIKbLAXr413nEaMkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ntFgvsi7fESsY3d6UJGHVwwDUUmBEPxVNBJ8W6AJWB8=;
 b=YBI9Y1s5LUTuIKquqqgXmK4vUQjcCQ9r1nj/MMmunBahpgBDzlo1VJiVlN9YGtkvwjjCNQb0w49xkdgHZs/mn6FVbgAPK+pDVo6bGFWpD8g7ji8ywelyl6ZdvQWMFo9WaW2mAqmOuv4VUZAjAku4a19Dxm4DRreo725DYqJKhVDuG5vgE9NwRZwuYj/8RwnGvJyd+1tNMyy455QuqxzmomWZUGmrPBobkJMtFNa/TyxOcXGYEVwPVFgwwfJqPYqewn8A1rGRY3TBNeH3L5TqxBSyDdK357D1I+V1ayL/k8hT5ziXGRyzSYlrF4Al2DdurtR8eoIUGHUkeLzUUh+D1A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b969f26f-9ae9-ad1d-0daf-9248be8e929e@suse.com>
Date: Tue, 16 May 2023 13:47:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/6] x86/boot: Adjust MSR_ARCH_CAPS handling for the Host
 policy
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230515144259.1009245-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0169.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB7077:EE_
X-MS-Office365-Filtering-Correlation-Id: 32635353-9676-4c10-d56b-08db56034f09
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 32635353-9676-4c10-d56b-08db56034f09
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 11:47:21.6329
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: I8CbA5VxCme+DisFkvUscoQOwYePKK/MsmxKHHmPrzD6Uw2WxoAf7dlta3TCw/A2eSVvZrMNmcTQ/xSF7zXnrQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB7077

On 15.05.2023 16:42, Andrew Cooper wrote:
> We are about to move MSR_ARCH_CAPS into featureset, but the order of
> operations (copy raw policy, then copy x86_capabilities[] in) will end up
> clobbering the ARCH_CAPS value currently visible in the Host policy.
> 
> To avoid this transient breakage, read from raw_cpu_policy rather than
> modifying it in place.  This logic will be removed entirely in due course.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue May 16 12:02:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 12:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535141.832763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pytNe-0005A5-1w; Tue, 16 May 2023 12:02:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535141.832763; Tue, 16 May 2023 12:02:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pytNd-00059y-VK; Tue, 16 May 2023 12:02:13 +0000
Received: by outflank-mailman (input) for mailman id 535141;
 Tue, 16 May 2023 12:02:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pytNc-00059s-Qh
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 12:02:12 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2060c.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::60c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7d1fc134-f3e1-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 14:02:11 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9623.eurprd04.prod.outlook.com (2603:10a6:20b:4cd::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 12:02:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 12:02:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d1fc134-f3e1-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E8walw5CQquGVOWeyXPsaEr+Xx6HIVQ6ZT2jEmXYxJKmbQ4eGiO2tyMHgnG+DyajBrik/64wTe50rBL0qRtwTyO84UcT38BI6aZTsGe91vduW4nxfd7mm6Ev/mIYw6e9bRu2Lj9yywKizvNd4rThVdf4WdPhHaKL4sGAEmh7NK9/2AWmdrSCP4vkZyPrvCcKkThY5jfzVUb9KD+RFWFQge2uqlzbj+tVwYYDTGGYJF2Q4R0lWt889xSAZnIJtuQkJVeyjndCB5Gw0F5RdnzcXyP6kPm3f4ZTlhYNJiKuDxKpJc5k2WYSUdwdZSY7jye0Enj0haI26aKcMZcRs4GdxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FI+GX9LyzdYreaRhLZ0JN05S1nJBy9Ec7jzMujIlx60=;
 b=mLN+iPW0OR0NxXS1tNcKqI5nf9WJhJk58jsdToh4oJI34gusRJoZXKPhQ7TdR9TpKDgn54df5qjHSHU5b7STqF5QJ01jWSc+grnITUc4rTmH0E4J2j/j6UDpTncBj3eCKw01tC5ZeQ6NW8BLbkHKzspw7Bir1n2ffOI/KTecitMaR0wV6ig4lc5NmGt63yFeLf369AdITk6H/ybtAmLwYfVokTULy6X9C3pzSgcDEDZf8msH0hDway0Av2+fr/0/w5zIJcs3HQeKDWOw/IgSY+kdAu8GduuuhE7wRYUC+D6+SrirGlIH3HbWm2UVSbm2cJSrvZervopTqwzPtcYpbw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FI+GX9LyzdYreaRhLZ0JN05S1nJBy9Ec7jzMujIlx60=;
 b=zBUkF4z1HX2U34REquTg6839nUvIAmuXrWehoz+4lcBwEEMH7emN4+XYwehgwfbrEP8Qpea8PIDxN78AxUPYopb10r+32jPM4U807Asl6xSbQMEMhL9Wt8RLA0dFLBU/h6MULUAV8k/LBh2n9htnTxeZtqpszHjGSRYkRMgTotKLdG+4mjb4Z7BCaB/NogLfTUtmdoOs6NL6SjQJ7ncuR1An5y0e8HLW7dDDCdzC21xkjSNvdNIClFzpVmoQQy0XxoYCj0rRhNgCny706hNwwZYH9OztrIDf/AVNAypf3vjASlxeOnOqXM4c7yRwjZGQllEWLl2GYkZg+X8f/L7eBg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <347219e4-6c3a-a0ad-b010-4dbd7282c7ad@suse.com>
Date: Tue, 16 May 2023 14:02:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 3/6] x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230515144259.1009245-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0110.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9623:EE_
X-MS-Office365-Filtering-Correlation-Id: 8aeb2b41-4abb-4424-3b7a-08db56055ffa
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8aeb2b41-4abb-4424-3b7a-08db56055ffa
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 12:02:09.0179
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4ustMed+ZUX4jByj29cJ0KSgVKiibJDnRrJULVhGo6SphWqXZDP6j2Rgjx+KHGggaz7oCTzVd/9fR7VPPa+Alg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9623

On 15.05.2023 16:42, Andrew Cooper wrote:
> Bits through 24 are already defined, meaning that we're not far off needing
> the second word.  Put both in right away.
> 
> The bool bitfield names in the arch_caps union are unused, and somewhat out of
> date.  They'll shortly be automatically generated.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

I'm largely okay, but I'd like to raise a couple of naming / presentation
questions:

> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -226,6 +226,14 @@ static const char *const str_7d2[32] =
>      [ 4] = "bhi-ctrl",      [ 5] = "mcdt-no",
>  };
>  
> +static const char *const str_10Al[32] =
> +{
> +};
> +
> +static const char *const str_10Ah[32] =
> +{
> +};
> +
>  static const struct {
>      const char *name;
>      const char *abbr;
> @@ -248,6 +256,8 @@ static const struct {
>      { "0x00000007:2.edx", "7d2", str_7d2 },
>      { "0x00000007:1.ecx", "7c1", str_7c1 },
>      { "0x00000007:1.edx", "7d1", str_7d1 },
> +    { "0x0000010a.lo",   "10Al", str_10Al },
> +    { "0x0000010a.hi",   "10Ah", str_10Ah },

The MSR-ness can certainly be inferred from the .lo / .hi and l/h
suffixes of the strings, but I wonder whether having it e.g. like

    { "MSR0000010a.lo",   "m10Al", str_10Al },
    { "MSR0000010a.hi",   "m10Ah", str_10Ah },

or

    { "MSR[010a].lo",   "m10Al", str_10Al },
    { "MSR[010a].hi",   "m10Ah", str_10Ah },

or even

    { "ARCH_CAPS.lo",   "m10Al", str_10Al },
    { "ARCH_CAPS.hi",   "m10Ah", str_10Ah },

wouldn't make it more obvious. For the two str_*, see below.

> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -307,6 +307,10 @@ XEN_CPUFEATURE(AVX_VNNI_INT8,      15*32+ 4) /*A  AVX-VNNI-INT8 Instructions */
>  XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
>  XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
>  
> +/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */
> +
> +/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.edx, word 17 */

Right here I'd be inclined to omit the MSR index; the name ought to
be sufficient.

> --- a/xen/include/xen/lib/x86/cpu-policy.h
> +++ b/xen/include/xen/lib/x86/cpu-policy.h
> @@ -20,6 +20,8 @@
>  #define FEATURESET_7d2   13 /* 0x00000007:2.edx    */
>  #define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
>  #define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
> +#define FEATURESET_10Al  16 /* 0x0000010a.eax      */
> +#define FEATURESET_10Ah  17 /* 0x0000010a.edx      */

Just like we use an "e" prefix for extended CPUID leaves, perhaps
use an "m" prefix for MSRs (then also affecting e.g. the str_*
above)?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 12:11:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 12:11:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535149.832774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pytVz-0006f6-Uo; Tue, 16 May 2023 12:10:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535149.832774; Tue, 16 May 2023 12:10:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pytVz-0006ez-RG; Tue, 16 May 2023 12:10:51 +0000
Received: by outflank-mailman (input) for mailman id 535149;
 Tue, 16 May 2023 12:10:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2FW8=BF=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pytVx-0006et-JJ
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 12:10:50 +0000
Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com
 [2607:f8b0:4864:20::1036])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae1a5563-f3e2-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 14:10:43 +0200 (CEST)
Received: by mail-pj1-x1036.google.com with SMTP id
 98e67ed59e1d1-24e14a24c9dso10087273a91.0
 for <xen-devel@lists.xenproject.org>; Tue, 16 May 2023 05:10:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae1a5563-f3e2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684239042; x=1686831042;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=2CgvpepAJcAAxNmSz3MNB3zR+smHF/uXLgAKpg7fSO8=;
        b=kcuKIKbgBn9vuLKuxCJyvOpsvi7TrlWP+tbC/zhmUyW9tVo/ZPwsnLADXyeakmw/J/
         nL+jYrlblvh72IACQ6PhP8v3Ojp2dcNK1CHeOb3yWfVcEp3BmdbDfEPNo1BY07ySdmp9
         JOGUsvDY7IGh8F5LcsB3hmOg24bKn96o93YFSYYooh7urDpJXTVbqE8h0nR2fDxguTtk
         /wo815BmRhB6fYQs9ufospfrDFy/eHEyag9AB+sdHy/l+rSMCfRtRAZz0gRlBVm3EKRe
         9enaYW32R/nzCdVXkLSQOFeRCqoktI5D1sjSZfnZAS8i2hw/6xUm3G+rN1KDC6Iu48K/
         m1Qw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684239042; x=1686831042;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=2CgvpepAJcAAxNmSz3MNB3zR+smHF/uXLgAKpg7fSO8=;
        b=IyKNPxmo8BjBJhsG32/1HjC2NQFppg+zfaCan3k+Waq2S5TQhgUwtLzP7qNyl68m7t
         aExUB3hTmj7ajQSg72pXj1oe6QDPgNcj21e0ZxNbGgIr8QDjwrj+gZi9iqeIASeruVDj
         W6xzTthQgOxD5INRFH9C+E2Siz+i3e2jDpOG4CybanB6AZXjgB56ahMTsZ8g6yxfJWKT
         J5jkp4/NSLAtLQ+QbpxnTBHHRsMjMikLOWlbopiqn9kISPyL6KdiMz6W8TwRX/4U1Pdx
         sSQ82c6uRUd+puB8i0gw/ZIzL9zY7uUWSaBb9GzkTnOfxuCe2+k8nrkKD6f+Obg8G+/L
         4yWw==
X-Gm-Message-State: AC+VfDymVH9TtD7mj/BM4BiWikCL+N5wtcq6ny+IrXPR3dmRuHul9oyi
	S6r4OdtAiSiYorp1WYTurKsL1g/utLQjCP4Gmd9yP9zrq1g=
X-Google-Smtp-Source: ACHHUZ4R9INxS6Rbpo5V5Ht3jIixqktDGxo+mnANGrtBKOS/hcH8845ZR0CBczaJ2v5A/ZGDEHG0P6wau4cAQ6ZBwBE=
X-Received: by 2002:a17:902:f552:b0:1ac:a058:555e with SMTP id
 h18-20020a170902f55200b001aca058555emr34781347plf.8.1684239041383; Tue, 16
 May 2023 05:10:41 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <CA+SAi2tPrvUYhkF2cmch5zowRqmpvJ6Cq0scxGHEaczhiDaJnw@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com> <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
 <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com> <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
 <CA+SAi2tErcaAkRT5zhTwSE=-jszwAWNtEAnm5jNGEP1NoqbQ3w@mail.gmail.com> <53af4bc6-97ad-d806-003b-38e70ccd2b58@amd.com>
In-Reply-To: <53af4bc6-97ad-d806-003b-38e70ccd2b58@amd.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Tue, 16 May 2023 15:15:41 +0300
Message-ID: <CA+SAi2vrB714Tc9dn4SbtDo3VrT3Q8OpSiFXRLRaO5=0BJo_rQ@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com
Content-Type: multipart/alternative; boundary="000000000000c752cd05fbce75cf"

--000000000000c752cd05fbce75cf
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello,

Thanks a lot Michal.

Then the next question.
When I first started my experiments with Xen, Stefano mentioned that each
cache color covers 256M.
Is it possible to extend this figure?

Regards,
Oleg
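
For what it's worth, the 256M figure is usually not a directly tunable knob: with page-based coloring the number of colors is fixed by the LLC geometry (way size divided by page size), and each color then owns an equal slice of DRAM. A minimal sketch of that arithmetic, assuming zcu102-class numbers (1 MiB 16-way L2, 4 KiB pages, 4 GiB DDR) -- these figures are my assumptions for illustration, not values confirmed in this thread:

```python
# Hypothetical illustration of where a per-color memory figure comes from.
# Assumed zcu102-class geometry: 1 MiB 16-way LLC, 4 KiB pages, 4 GiB DRAM.
PAGE_SIZE = 4 * 1024
LLC_SIZE = 1 * 1024 * 1024
LLC_WAYS = 16
DRAM_SIZE = 4 * 1024 * 1024 * 1024

way_size = LLC_SIZE // LLC_WAYS          # bytes covered by one cache way
num_colors = way_size // PAGE_SIZE       # distinct page colors the LLC supports
mem_per_color = DRAM_SIZE // num_colors  # DRAM that maps to a single color

print(num_colors)                        # 16 colors
print(mem_per_color // (1024 * 1024))    # 256 MiB per color
```

Under these assumptions, extending the per-color figure would mean either more DRAM or fewer colors (a larger page size or a different cache geometry), not a standalone configuration option.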

On Mon, 15 May 2023 at 11:57, Michal Orzel <michal.orzel@amd.com> wrote:

> Hi Oleg,
>
> On 15/05/2023 10:51, Oleg Nikitenko wrote:
> >
> >
> >
> > Hello guys,
> >
> > Thanks a lot.
> > After working through a long list of problems, I was able to run Xen with a cache-colored Dom0.
> > One more question from my side.
> > I want to run a guest in color mode too.
> > I inserted the line llc-colors = "9-13" into the guest config file.
> > I got an error:
> > [  457.517004] loop0: detected capacity change from 0 to 385840
> > Parsing config from /xen/red_config.cfg
> > /xen/red_config.cfg:26: config parsing error near `-colors': lexical error
> > warning: Config file looks like it contains Python code.
> > warning:  Arbitrary Python is no longer supported.
> > warning:  See https://wiki.xen.org/wiki/PythonInXlConfig
> > Failed to parse config: Invalid argument
> > So here is my question: is it possible to assign colors to a DomU via its config file?
> > If so, what string should I use?
> Please, always refer to the relevant documentation. In this case, for
> xl.cfg:
>
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/man/xl.cfg.5.pod.in#L2890
>
> ~Michal
>
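
For readers following along: the option under discussion is the `llc-colors` key that the xl.cfg man page above documents. A minimal DomU config sketch, where every value other than the `llc-colors` key itself is a placeholder of mine (paths, name, sizing are not taken from this thread), might look like:

```
# Hypothetical DomU config sketch -- names and paths are placeholders.
name = "domu-colored"
kernel = "/xen/Image"
ramdisk = "/xen/rootfs.cpio.gz"
memory = 512
vcpus = 1
# LLC color assignment, per the xlnx_rebase_4.17 xl.cfg documentation.
# Requires an xl built with cache coloring support; otherwise the parser
# rejects the key with a lexical error like the one quoted in this thread.
llc-colors = "9-13"
```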
> >
> > Regards,
> > Oleg
> >
> > On Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >
> >     Hi Michal,
> >
> >     Thanks.
> >     This config option was previously named CONFIG_COLORING.
> >     That confused me.
> >
> >     Regards,
> >     Oleg
> >
> >     On Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com> wrote:
> >
> >         Hi Oleg,
> >
> >         On 11/05/2023 12:02, Oleg Nikitenko wrote:
> >         >
> >         >
> >         >
> >         > Hello,
> >         >
> >         > Thanks Stefano.
> >         > Then the next question.
> >         > I cloned the Xen repo from the Xilinx site: https://github.com/Xilinx/xen.git
> >         > I managed to build the xlnx_rebase_4.17 branch in my environment.
> >         > I did it without coloring first. I did not find any color footprints in that branch.
> >         > I realized coloring is not in the xlnx_rebase_4.17 branch yet.
> >         This is not true. Cache coloring is in xlnx_rebase_4.17. Please
> see the docs:
> >
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
> >
> >         It describes the feature and documents the required properties.
> >
> >         ~Michal
> >
> >         >
> >         >
> >         > On Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >         >
> >         >     We test Xen Cache Coloring regularly on zcu102. Every
> Petalinux release
> >         >     (twice a year) is tested with cache coloring enabled. The
> last Petalinux
> >         >     release is 2023.1 and the kernel used is this:
> >         >
> https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
> >         >
> >         >
> >         >     On Tue, 9 May 2023, Oleg Nikitenko wrote:
> >         >     > Hello guys,
> >         >     >
> >         >     > I have a couple of more questions.
> >         >     > Have you ever run xen with the cache coloring at Zynq
> UltraScale+ MPSoC zcu102 xczu15eg ?
> >         >     > When did you run xen with the cache coloring the last time?
> >         >     > What kernel version did you use for Dom0 when you ran
> xen with the cache coloring last time ?
> >         >     >
> >         >     > Regards,
> >         >     > Oleg
> >         >     >
> >         >     > On Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >         >     >       Hi Michal,
> >         >     >
> >         >     > Thanks.
> >         >     >
> >         >     > Regards,
> >         >     > Oleg
> >         >     >
> >         >     > On Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com> wrote:
> >         >     >       Hi Oleg,
> >         >     >
> >         >     >       Replying, so that you do not need to wait for
> Stefano.
> >         >     >
> >         >     >       On 05/05/2023 10:28, Oleg Nikitenko wrote:
> >         >     >       >
> >         >     >       >
> >         >     >       >
> >         >     >       > Hello Stefano,
> >         >     >       >
> >         >     >       > I would like to try the Xen cache color property from this repo: https://xenbits.xen.org/git-http/xen.git
> >         >     >       > Could you tell what branch I should use?
> >         >     >       The cache coloring feature is not part of the upstream tree and is still under review.
> >         >     >       You can only find it integrated in the Xilinx Xen
> tree.
> >         >     >
> >         >     >       ~Michal
> >         >     >
> >         >     >       >
> >         >     >       > Regards,
> >         >     >       > Oleg
> >         >     >       >
> >         >     >       > On Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >         >     >       >
> >         >     >       >     I am familiar with the zcu102 but I don't
> know how you could possibly
> >         >     >       >     generate a SError.
> >         >     >       >
> >         >     >       >     I suggest to try to use ImageBuilder [1] to
> generate the boot
> >         >     >       >     configuration as a test because that is
> known to work well for zcu102.
> >         >     >       >
> >         >     >       >     [1] https://gitlab.com/xen-project/imagebuilder
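
As context on Stefano's suggestion: ImageBuilder's `scripts/uboot-script-gen` consumes a small shell-style config file and emits a `boot.scr` for the board. A minimal sketch for a Dom0-only boot, where every filename and value is an illustrative placeholder of mine rather than a configuration taken from this thread:

```
# Hypothetical ImageBuilder config -- all values are placeholders.
MEMORY_START="0x0"
MEMORY_END="0x80000000"
DEVICE_TREE="system.dtb"
XEN="xen"
DOM0_KERNEL="Image"
DOM0_RAMDISK="rootfs.cpio.gz"
NUM_DOMUS=0
UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

The point of using it here is diagnostic: a known-good generated boot.scr for zcu102 separates boot-configuration mistakes from genuine coloring or hardware problems.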
> >         >     >       >
> >         >     >       >
> >         >     >       >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> >         >     >       >     > Hello Stefano,
> >         >     >       >     >
> >         >     >       >     > Thanks for clarification.
> >         >     >       >     > We use neither ImageBuilder nor a u-boot boot script.
> >         >     >       >     > A model is zcu102 compatible.
> >         >     >       >     >
> >         >     >       >     > Regards,
> >         >     >       >     > O.
> >         >     >       >     >
> >         >     >       >     > On Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >         >     >       >     >       This is interesting. Are you using
> Xilinx hardware by any chance? If so,
> >         >     >       >     >       which board?
> >         >     >       >     >
> >         >     >       >     >       Are you using ImageBuilder to
> generate your boot.scr boot script? If so,
> >         >     >       >     >       could you please post your
> ImageBuilder config file? If not, can you
> >         >     >       >     >       post the source of your uboot boot
> script?
> >         >     >       >     >
> >         >     >       >     >       SErrors are supposed to be related
> to a hardware failure of some kind.
> >         >     >       >     >       You are not supposed to be able to
> trigger an SError easily by
> >         >     >       >     >       "mistake". I have not seen SErrors
> due to wrong cache coloring
> >         >     >       >     >       configurations on any Xilinx board
> before.
> >         >     >       >     >
> >         >     >       >     >       The differences between Xen with and without cache coloring from a
> >         >     >       >     >       hardware perspective are:
> >         >     >       >     >
> >         >     >       >     >       - With cache coloring, the SMMU is
> enabled and does address translations
> >         >     >       >     >         even for dom0. Without cache
> coloring the SMMU could be disabled, and
> >         >     >       >     >         if enabled, the SMMU doesn't do
> any address translations for Dom0. If
> >         >     >       >     >         there is a hardware failure
> related to SMMU address translation it
> >         >     >       >     >         could only trigger with cache
> coloring. This would be my normal
> >         >     >       >     >         suggestion for you to explore, but the failure happens too early
> >         >     >       >     >         before any DMA-capable device is
> programmed. So I don't think this can
> >         >     >       >     >         be the issue.
> >         >     >       >     >
> >         >     >       >     >       - With cache coloring, the memory
> allocation is very different so you'll
> >         >     >       >     >         end up using different DDR regions for Dom0. So if your DDR is
> >         >     >       >     >         defective, you might only see a
> failure with cache coloring enabled
> >         >     >       >     >         because you end up using different regions.
> >         >     >       >     >
> >         >     >       >     >
> >         >     >       >     >       On Tue, 25 Apr 2023, Oleg Nikitenko
> wrote:
> >         >     >       >     >       > Hi Stefano,
> >         >     >       >     >       >
> >         >     >       >     >       > Thank you.
> >         >     >       >     >       > If I build Xen without color support, this error does not occur.
> >         >     >       >     >       > All the domains are booted well.
> >         >     >       >     >       > Hence it cannot be a hardware issue.
> >         >     >       >     >       > This panic occurred while unpacking the rootfs.
> >         >     >       >     >       > Here I attached the boot log of xen/Dom0 without color.
> >         >     >       >     >       > The highlighted strings are printed exactly after the place where the panic first occurred.
> >         >     >       >     >       >
> >         >     >       >     >       >  Xen 4.16.1-pre
> >         >     >       >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> >         >     >       >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
> >         >     >       >     >       > (XEN) build-id:
> c1847258fdb1b79562fc710dda40008f96c0fde5
> >         >     >       >     >       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
> >         >     >       >     >       > (XEN) 64-bit Execution:
> >         >     >       >     >       > (XEN)   Processor Features:
> 0000000000002222 0000000000000000
> >         >     >       >     >       > (XEN)     Exception Levels:
> EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> >         >     >       >     >       > (XEN)     Extensions:
> FloatingPoint AdvancedSIMD
> >         >     >       >     >       > (XEN)   Debug Features:
> 0000000010305106 0000000000000000
> >         >     >       >     >       > (XEN)   Auxiliary Features:
> 0000000000000000 0000000000000000
> >         >     >       >     >       > (XEN)   Memory Model Features:
> 0000000000001122 0000000000000000
> >         >     >       >     >       > (XEN)   ISA Features:
>  0000000000011120 0000000000000000
> >         >     >       >     >       > (XEN) 32-bit Execution:
> >         >     >       >     >       > (XEN)   Processor Features:
> 0000000000000131:0000000000011011
> >         >     >       >     >       > (XEN)     Instruction Sets:
> AArch32 A32 Thumb Thumb-2 Jazelle
> >         >     >       >     >       > (XEN)     Extensions: GenericTimer Security
> >         >     >       >     >       > (XEN)   Debug Features:
> 0000000003010066
> >         >     >       >     >       > (XEN)   Auxiliary Features:
> 0000000000000000
> >         >     >       >     >       > (XEN)   Memory Model Features:
> 0000000010201105 0000000040000000
> >         >     >       >     >       > (XEN)
>  0000000001260000 0000000002102211
> >         >     >       >     >       > (XEN)   ISA Features:
> 0000000002101110 0000000013112111 0000000021232042
> >         >     >       >     >       > (XEN)
> 0000000001112131 0000000000011142 0000000000011121
> >         >     >       >     >       > (XEN) Using SMC Calling Convention v1.2
> >         >     >       >     >       > (XEN) Using PSCI v1.1
> >         >     >       >     >       > (XEN) SMP: Allowing 4 CPUs
> >         >     >       >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> >         >     >       >     >       > (XEN) GICv2 initialization:
> >         >     >       >     >       > (XEN)   gic_dist_addr=00000000f9010000
> >         >     >       >     >       > (XEN)   gic_cpu_addr=00000000f9020000
> >         >     >       >     >       > (XEN)   gic_hyp_addr=00000000f9040000
> >         >     >       >     >       > (XEN)   gic_vcpu_addr=00000000f9060000
> >         >     >       >     >       > (XEN)   gic_maintenance_irq=25
> >         >     >       >     >       > (XEN) GICv2: Adjusting CPU
> interface base to 0xf902f000
> >         >     >       >     >       > (XEN) GICv2: 192 lines, 4 cpus,
> secure (IID 0200143b).
> >         >     >       >     >       > (XEN) Using scheduler: null
> Scheduler (null)
> >         >     >       >     >       > (XEN) Initializing null scheduler
> >         >     >       >     >       > (XEN) WARNING: This is
> experimental software in development.
> >         >     >       >     >       > (XEN) Use at your own risk.
> >         >     >       >     >       > (XEN) Allocated console ring of 32 KiB.
> >         >     >       >     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
> >         >     >       >     >       > (XEN) Bringing up CPU1
> >         >     >       >     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
> >         >     >       >     >       > (XEN) CPU 1 booted.
> >         >     >       >     >       > (XEN) Bringing up CPU2
> >         >     >       >     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
> >         >     >       >     >       > (XEN) CPU 2 booted.
> >         >     >       >     >       > (XEN) Bringing up CPU3
> >         >     >       >     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
> >         >     >       >     >       > (XEN) Brought up 4 CPUs
> >         >     >       >     >       > (XEN) CPU 3 booted.
> >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000:
> probing hardware configuration...
> >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000:
> SMMUv2 with:
> >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000:
> stage 2 translation
> >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
> >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
> >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000:
> Stage-2: 48-bit IPA -> 48-bit PA
> >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000:
> registered 29 master devices
> >         >     >       >     >       > (XEN) I/O virtualisation enabled
> >         >     >       >     >       > (XEN)  - Dom0 mode: Relaxed
> >         >     >       >     >       > (XEN) P2M: 40-bit IPA with 40-bit
> PA and 8-bit VMID
> >         >     >       >     >       > (XEN) P2M: 3 levels with order-1
> root, VTCR 0x0000000080023558
> >         >     >       >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> >         >     >       >     >       > (XEN) alternatives: Patching with
> alt table 00000000002cc5c8 -> 00000000002ccb2c
> >         >     >       >     >       > (XEN) *** LOADING DOMAIN 0 ***
> >         >     >       >     >       > (XEN) Loading d0 kernel from boot
> module @ 0000000001000000
> >         >     >       >     >       > (XEN) Loading ramdisk from boot
> module @ 0000000002000000
> >         >     >       >     >       > (XEN) Allocating 1:1 mappings
> totalling 1600MB for dom0:
> >         >     >       >     >       > (XEN) BANK[0]
> 0x00000010000000-0x00000020000000 (256MB)
> >         >     >       >     >       > (XEN) BANK[1]
> 0x00000024000000-0x00000028000000 (64MB)
> >         >     >       >     >       > (XEN) BANK[2]
> 0x00000030000000-0x00000080000000 (1280MB)
> >         >     >       >     >       > (XEN) Grant table range:
> 0x00000000e00000-0x00000000e40000
> >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000:
> d0: p2maddr 0x000000087bf94000
> >         >     >       >     >       > (XEN) Allocating PPI 16 for event
> channel interrupt
> >         >     >       >     >       > (XEN) Extended region 0:
> 0x81200000->0xa0000000
> >         >     >       >     >       > (XEN) Extended region 1:
> 0xb1200000->0xc0000000
> >         >     >       >     >       > (XEN) Extended region 2:
> 0xc8000000->0xe0000000
> >         >     >       >     >       > (XEN) Extended region 3:
> 0xf0000000->0xf9000000
> >         >     >       >     >       > (XEN) Extended region 4:
> 0x100000000->0x600000000
> >         >     >       >     >       > (XEN) Extended region 5:
> 0x880000000->0x8000000000
> >         >     >       >     >       > (XEN) Extended region 6:
> 0x8001000000->0x10000000000
> >         >     >       >     >       > (XEN) Loading zImage from
> 0000000001000000 to 0000000010000000-0000000010e41008
> >         >     >       >     >       > (XEN) Loading d0 initrd from
> 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
> >         >     >       >     >       > (XEN) Loading d0 DTB to
> 0x0000000013400000-0x000000001340cbdc
> >         >     >       >     >       > (XEN) Initial low memory virq
> threshold set at 0x4000 pages.
> >         >     >       >     >       > (XEN) Std. Loglevel: All
> >         >     >       >     >       > (XEN) Guest Loglevel: All
> >         >     >       >     >       > (XEN) *** Serial input to DOM0
> (type 'CTRL-a' three times to switch input)
> >         >     >       >     >       > (XEN) null.c:353: 0 <-- d0v0
> >         >     >       >     >       > (XEN) Freed 356kB init memory.
> >         >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC:
> 0x84000050
> >         >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC:
> 0x8600ff01
> >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word
> write 0x000000ffffffff to ICACTIVER4
> >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word
> write 0x000000ffffffff to ICACTIVER8
> >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word
> write 0x000000ffffffff to ICACTIVER12
> >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word
> write 0x000000ffffffff to ICACTIVER16
> >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word
> write 0x000000ffffffff to ICACTIVER20
> >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word
> write 0x000000ffffffff to ICACTIVER0
> >         >     >       >     >       > [    0.000000] Booting Linux on
> physical CPU 0x0000000000 [0x410fd034]
> >         >     >       >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> >         >     >       >     >       > [    0.000000] Machine model: D14
> Viper Board - White Unit
> >         >     >       >     >       > [    0.000000] Xen 4.16 support
> found
> >         >     >       >     >       > [    0.000000] Zone ranges:
> >         >     >       >     >       > [    0.000000]   DMA      [mem
> 0x0000000010000000-0x000000007fffffff]
> >         >     >       >     >       > [    0.000000]   DMA32    empty
> >         >     >       >     >       > [    0.000000]   Normal   empty
> >         >     >       >     >       > [    0.000000] Movable zone start
> for each node
> >         >     >       >     >       > [    0.000000] Early memory node
> ranges
> >         >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000010000000-0x000000001fffffff]
> >         >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000022000000-0x0000000022147fff]
> >         >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000022200000-0x0000000022347fff]
> >         >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000024000000-0x0000000027ffffff]
> >         >     >       >     >       > [    0.000000]   node   0: [mem
> 0x0000000030000000-0x000000007fffffff]
> >         >     >       >     >       > [    0.000000] Initmem setup node
> 0 [mem 0x0000000010000000-0x000000007fffffff]
> >         >     >       >     >       > [    0.000000] On node 0, zone
> DMA: 8192 pages in unavailable ranges
> >         >     >       >     >       > [    0.000000] On node 0, zone
> DMA: 184 pages in unavailable ranges
> >         >     >       >     >       > [    0.000000] On node 0, zone
> DMA: 7352 pages in unavailable ranges
> >         >     >       >     >       > [    0.000000] cma: Reserved 256
> MiB at 0x000000006e000000
> >         >     >       >     >       > [    0.000000] psci: probing for
> conduit method from DT.
> >         >     >       >     >       > [    0.000000] psci: PSCIv1.1
> detected in firmware.
> >         >     >       >     >       > [    0.000000] psci: Using
> standard PSCI v0.2 function IDs
> >         >     >       >     >       > [    0.000000] psci: Trusted OS
> migration not required
> >         >     >       >     >       > [    0.000000] psci: SMC Calling
> Convention v1.1
> >         >     >       >     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
> >         >     >       >     >       > [    0.000000] Detected VIPT
> I-cache on CPU0
> >         >     >       >     >       > [    0.000000] CPU features:
> kernel page table isolation forced ON by KASLR
> >         >     >       >     >       > [    0.000000] CPU features:
> detected: Kernel page table isolation (KPTI)
> >         >     >       >     >       > [    0.000000] Built 1 zonelists,
> mobility grouping on.  Total pages: 403845
> >         >     >       >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> >         >     >       >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
> >         >     >       >     >       > [    0.000000] Dentry cache hash
> table entries: 262144 (order: 9, 2097152 bytes, linear)
> >         >     >       >     >       > [    0.000000] Inode-cache hash
> table entries: 131072 (order: 8, 1048576 bytes, linear)
> >         >     >       >     >       > [    0.000000] mem auto-init:
> stack:off, heap alloc:on, heap free:on
> >         >     >       >     >       > [    0.000000] mem auto-init:
> clearing system memory may take some time...
> >         >     >       >     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
> >         >     >       >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> >         >     >       >     >       > [    0.000000] rcu: Hierarchical
> RCU implementation.
> >         >     >       >     >       > [    0.000000] rcu: RCU event
> tracing is enabled.
> >         >     >       >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> >         >     >       >     >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
> >         >     >       >     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
> >         >     >       >     >       > [    0.000000] NR_IRQS: 64,
> nr_irqs: 64, preallocated irqs: 0
> >         >     >       >     >       > [    0.000000] Root IRQ handler:
> gic_handle_irq
> >         >     >       >     >       > [    0.000000] arch_timer: cp15
> timer(s) running at 100.00MHz (virt).
> >         >     >       >     >       > [    0.000000] clocksource:
> arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0,
> >         >     >       max_idle_ns: 440795205315 ns
> >         >     >       >     >       > [    0.000000] sched_clock: 56
> bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
> >         >     >       >     >       > [    0.000258] Console: colour
> dummy device 80x25
> >         >     >       >     >       > [    0.310231] printk: console
> [hvc0] enabled
> >         >     >       >     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> >         >     >       >     >       > [    0.324851] pid_max: default:
> 32768 minimum: 301
> >         >     >       >     >       > [    0.329706] LSM: Security
> Framework initializing
> >         >     >       >     >       > [    0.334204] Yama: becoming
> mindful.
> >         >     >       >     >       > [    0.337865] Mount-cache hash
> table entries: 4096 (order: 3, 32768 bytes, linear)
> >         >     >       >     >       > [    0.345180] Mountpoint-cache
> hash table entries: 4096 (order: 3, 32768 bytes, linear)
> >         >     >       >     >       > [    0.354743] xen:grant_table:
> Grant tables using version 1 layout
> >         >     >       >     >       > [    0.359132] Grant table
> initialized
> >         >     >       >     >       > [    0.362664] xen:events: Using
> FIFO-based ABI
> >         >     >       >     >       > [    0.366993] Xen: initializing
> cpu0
> >         >     >       >     >       > [    0.370515] rcu: Hierarchical
> SRCU implementation.
> >         >     >       >     >       > [    0.375930] smp: Bringing up
> secondary CPUs ...
> >         >     >       >     >       > (XEN) null.c:353: 1 <-- d0v1
> >         >     >       >     >       > (XEN) d0v1: vGICD: unhandled word
> write 0x000000ffffffff to ICACTIVER0
> >         >     >       >     >       > [    0.382549] Detected VIPT
> I-cache on CPU1
> >         >     >       >     >       > [    0.388712] Xen: initializing
> cpu1
> >         >     >       >     >       > [    0.388743] CPU1: Booted
> secondary processor 0x0000000001 [0x410fd034]
> >         >     >       >     >       > [    0.388829] smp: Brought up 1
> node, 2 CPUs
> >         >     >       >     >       > [    0.406941] SMP: Total of 2
> processors activated.
> >         >     >       >     >       > [    0.411698] CPU features:
> detected: 32-bit EL0 Support
> >         >     >       >     >       > [    0.416888] CPU features:
> detected: CRC32 instructions
> >         >     >       >     >       > [    0.422121] CPU: All CPU(s)
> started at EL1
> >         >     >       >     >       > [    0.426248] alternatives:
> patching kernel code
> >         >     >       >     >       > [    0.431424] devtmpfs:
> initialized
> >         >     >       >     >       > [    0.441454] KASLR enabled
> >         >     >       >     >       > [    0.441602] clocksource:
> jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns:
> >         >     >       7645041785100000 ns
> >         >     >       >     >       > [    0.448321] futex hash table
> entries: 512 (order: 3, 32768 bytes, linear)
> >         >     >       >     >       > [    0.496183] NET: Registered
> PF_NETLINK/PF_ROUTE protocol family
> >         >     >       >     >       > [    0.498277] DMA: preallocated
> 256 KiB GFP_KERNEL pool for atomic allocations
> >         >     >       >     >       > [    0.503772] DMA: preallocated
> 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
> >         >     >       >     >       > [    0.511610] DMA: preallocated
> 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> >         >     >       >     >       > [    0.519478] audit: initializing netlink subsys (disabled)
> >         >     >       >     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> >         >     >       >     >       > [    0.529169] thermal_sys:
> Registered thermal governor 'step_wise'
> >         >     >       >     >       > [    0.533023] hw-breakpoint:
> found 6 breakpoint and 4 watchpoint registers.
> >         >     >       >     >       > [    0.545608] ASID allocator
> initialised with 32768 entries
> >         >     >       >     >       > [    0.551030] xen:swiotlb_xen:
> Warning: only able to allocate 4 MB for software IO TLB
> >         >     >       >     >       > [    0.559332] software IO TLB:
> mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
> >         >     >       >     >       > [    0.583565] HugeTLB registered
> 1.00 GiB page size, pre-allocated 0 pages
> >         >     >       >     >       > [    0.584721] HugeTLB registered
> 32.0 MiB page size, pre-allocated 0 pages
> >         >     >       >     >       > [    0.591478] HugeTLB registered
> 2.00 MiB page size, pre-allocated 0 pages
> >         >     >       >     >       > [    0.598225] HugeTLB registered
> 64.0 KiB page size, pre-allocated 0 pages
> >         >     >       >     >       > [    0.636520] DRBG: Continuing
> without Jitter RNG
> >         >     >       >     >       > [    0.737187] raid6: neonx8
> gen()  2143 MB/s
> >         >     >       >     >       > [    0.805294] raid6: neonx8
> xor()  1589 MB/s
> >         >     >       >     >       > [    0.873406] raid6: neonx4
> gen()  2177 MB/s
> >         >     >       >     >       > [    0.941499] raid6: neonx4
> xor()  1556 MB/s
> >         >     >       >     >       > [    1.009612] raid6: neonx2
> gen()  2072 MB/s
> >         >     >       >     >       > [    1.077715] raid6: neonx2
> xor()  1430 MB/s
> >         >     >       >     >       > [    1.145834] raid6: neonx1
> gen()  1769 MB/s
> >         >     >       >     >       > [    1.213935] raid6: neonx1
> xor()  1214 MB/s
> >         >     >       >     >       > [    1.282046] raid6: int64x8
>  gen()  1366 MB/s
> >         >     >       >     >       > [    1.350132] raid6: int64x8
>  xor()   773 MB/s
> >         >     >       >     >       > [    1.418259] raid6: int64x4
>  gen()  1602 MB/s
> >         >     >       >     >       > [    1.486349] raid6: int64x4
>  xor()   851 MB/s
> >         >     >       >     >       > [    1.554464] raid6: int64x2
>  gen()  1396 MB/s
> >         >     >       >     >       > [    1.622561] raid6: int64x2
>  xor()   744 MB/s
> >         >     >       >     >       > [    1.690687] raid6: int64x1
>  gen()  1033 MB/s
> >         >     >       >     >       > [    1.758770] raid6: int64x1
>  xor()   517 MB/s
> >         >     >       >     >       > [    1.758809] raid6: using
> algorithm neonx4 gen() 2177 MB/s
> >         >     >       >     >       > [    1.762941] raid6: .... xor()
> 1556 MB/s, rmw enabled
> >         >     >       >     >       > [    1.767957] raid6: using neon
> recovery algorithm
> >         >     >       >     >       > [    1.772824] xen:balloon:
> Initialising balloon driver
> >         >     >       >     >       > [    1.778021] iommu: Default
> domain type: Translated
> >         >     >       >     >       > [    1.782584] iommu: DMA domain
> TLB invalidation policy: strict mode
> >         >     >       >     >       > [    1.789149] SCSI subsystem
> initialized
> >         >     >       >     >       > [    1.792820] usbcore: registere=
d
> new interface driver usbfs
> >         >     >       >     >       > [    1.798254] usbcore: registere=
d
> new interface driver hub
> >         >     >       >     >       > [    1.803626] usbcore: registere=
d
> new device driver usb
> >         >     >       >     >       > [    1.808761] pps_core: LinuxPPS
> API ver. 1 registered
> >         >     >       >     >       > [    1.813716] pps_core: Software
> ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it
> <mailto:giometti@linux.it> <mailto:giometti@linux.it <mailto:
> giometti@linux.it>>
> >         >     >       <mailto:giometti@linux.it <mailto:
> giometti@linux.it> <mailto:giometti@linux.it <mailto:giometti@linux.it>>>=
>
> >         >     >       >     >       > [    1.822903] PTP clock support
> registered
> >         >     >       >     >       > [    1.826893] EDAC MC: Ver: 3.0.=
0
> >         >     >       >     >       > [    1.830375] zynqmp-ipi-mbox
> mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> >         >     >       >     >       > [    1.838863] zynqmp-ipi-mbox
> mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> >         >     >       >     >       > [    1.847356] zynqmp-ipi-mbox
> mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> >         >     >       >     >       > [    1.855907] FPGA manager
> framework
> >         >     >       >     >       > [    1.859952] clocksource:
> Switched to clocksource arch_sys_counter
> >         >     >       >     >       > [    1.871712] NET: Registered
> PF_INET protocol family
> >         >     >       >     >       > [    1.871838] IP idents hash
> table entries: 32768 (order: 6, 262144 bytes, linear)
> >         >     >       >     >       > [    1.879392]
> tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes,
> linear)
> >         >     >       >     >       > [    1.887078] Table-perturb hash
> table entries: 65536 (order: 6, 262144 bytes, linear)
> >         >     >       >     >       > [    1.894846] TCP established
> hash table entries: 16384 (order: 5, 131072 bytes, linear)
> >         >     >       >     >       > [    1.902900] TCP bind hash tabl=
e
> entries: 16384 (order: 6, 262144 bytes, linear)
> >         >     >       >     >       > [    1.910350] TCP: Hash tables
> configured (established 16384 bind 16384)
> >         >     >       >     >       > [    1.916778] UDP hash table
> entries: 1024 (order: 3, 32768 bytes, linear)
> >         >     >       >     >       > [    1.923509] UDP-Lite hash tabl=
e
> entries: 1024 (order: 3, 32768 bytes, linear)
> >         >     >       >     >       > [    1.930759] NET: Registered
> PF_UNIX/PF_LOCAL protocol family
> >         >     >       >     >       > [    1.936834] RPC: Registered
> named UNIX socket transport module.
> >         >     >       >     >       > [    1.942342] RPC: Registered ud=
p
> transport module.
> >         >     >       >     >       > [    1.947088] RPC: Registered tc=
p
> transport module.
> >         >     >       >     >       > [    1.951843] RPC: Registered tc=
p
> NFSv4.1 backchannel transport module.
> >         >     >       >     >       > [    1.958334] PCI: CLS 0 bytes,
> default 64
> >         >     >       >     >       > [    1.962709] Trying to unpack
> rootfs image as initramfs...
> >         >     >       >     >       > [    1.977090] workingset:
> timestamp_bits=3D62 max_order=3D19 bucket_order=3D0
> >         >     >       >     >       > [    1.982863] Installing knfsd
> (copyright (C) 1996 okir@monad.swb.de <mailto:okir@monad.swb.de> <mailto:
> okir@monad.swb.de <mailto:okir@monad.swb.de>> <mailto:okir@monad.swb.de
> <mailto:okir@monad.swb.de> <mailto:okir@monad.swb.de <mailto:
> okir@monad.swb.de>>>).
> >         >     >       >     >       > [    2.021045] NET: Registered
> PF_ALG protocol family
> >         >     >       >     >       > [    2.021122] xor: measuring
> software checksum speed
> >         >     >       >     >       > [    2.029347]    8regs
> :  2366 MB/sec
> >         >     >       >     >       > [    2.033081]    32regs
>  :  2802 MB/sec
> >         >     >       >     >       > [    2.038223]    arm64_neon
>  :  2320 MB/sec
> >         >     >       >     >       > [    2.038385] xor: using
> function: 32regs (2802 MB/sec)
> >         >     >       >     >       > [    2.043614] Block layer SCSI
> generic (bsg) driver version 0.4 loaded (major 247)
> >         >     >       >     >       > [    2.050959] io scheduler
> mq-deadline registered
> >         >     >       >     >       > [    2.055521] io scheduler kyber
> registered
> >         >     >       >     >       > [    2.068227] xen:xen_evtchn:
> Event-channel device installed
> >         >     >       >     >       > [    2.069281] Serial: 8250/16550
> driver, 4 ports, IRQ sharing disabled
> >         >     >       >     >       > [    2.076190] cacheinfo: Unable
> to detect cache hierarchy for CPU 0
> >         >     >       >     >       > [    2.085548] brd: module loaded
> >         >     >       >     >       > [    2.089290] loop: module loade=
d
> >         >     >       >     >       > [    2.089341] Invalid max_queues
> (4), will use default max: 2.
> >         >     >       >     >       > [    2.094565] tun: Universal
> TUN/TAP device driver, 1.6
> >         >     >       >     >       > [    2.098655] xen_netfront:
> Initialising Xen virtual ethernet driver
> >         >     >       >     >       > [    2.104156] usbcore: registere=
d
> new interface driver rtl8150
> >         >     >       >     >       > [    2.109813] usbcore: registere=
d
> new interface driver r8152
> >         >     >       >     >       > [    2.115367] usbcore: registere=
d
> new interface driver asix
> >         >     >       >     >       > [    2.120794] usbcore: registere=
d
> new interface driver ax88179_178a
> >         >     >       >     >       > [    2.126934] usbcore: registere=
d
> new interface driver cdc_ether
> >         >     >       >     >       > [    2.132816] usbcore: registere=
d
> new interface driver cdc_eem
> >         >     >       >     >       > [    2.138527] usbcore: registere=
d
> new interface driver net1080
> >         >     >       >     >       > [    2.144256] usbcore: registere=
d
> new interface driver cdc_subset
> >         >     >       >     >       > [    2.150205] usbcore: registere=
d
> new interface driver zaurus
> >         >     >       >     >       > [    2.155837] usbcore: registere=
d
> new interface driver cdc_ncm
> >         >     >       >     >       > [    2.161550] usbcore: registere=
d
> new interface driver r8153_ecm
> >         >     >       >     >       > [    2.168240] usbcore: registere=
d
> new interface driver cdc_acm
> >         >     >       >     >       > [    2.173109] cdc_acm: USB
> Abstract Control Model driver for USB modems and ISDN adapters
> >         >     >       >     >       > [    2.181358] usbcore: registere=
d
> new interface driver uas
> >         >     >       >     >       > [    2.186547] usbcore: registere=
d
> new interface driver usb-storage
> >         >     >       >     >       > [    2.192643] usbcore: registere=
d
> new interface driver ftdi_sio
> >         >     >       >     >       > [    2.198384] usbserial: USB
> Serial support registered for FTDI USB Serial Device
> >         >     >       >     >       > [    2.206118] udc-core: couldn't
> find an available UDC - added [g_mass_storage] to list of pending
> >         >     >       drivers
> >         >     >       >     >       > [    2.215332] i2c_dev: i2c /dev
> entries driver
> >         >     >       >     >       > [    2.220467] xen_wdt xen_wdt:
> initialized (timeout=3D60s, nowayout=3D0)
> >         >     >       >     >       > [    2.225923] device-mapper:
> uevent: version 1.0.3
> >         >     >       >     >       > [    2.230668] device-mapper:
> ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com <mailto=
:
> dm-devel@redhat.com> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com>>
> >         >     >       <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com>>>
> >         >     >       >     >       > [    2.239315] EDAC MC0: Giving
> out device to module 1 controller synps_ddr_controller: DEV synps_edac
> >         >     >       (INTERRUPT)
> >         >     >       >     >       > [    2.249405] EDAC DEVICE0:
> Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV
> >         >     >       >     >       ff960000.memory-controller
> (INTERRUPT)
> >         >     >       >     >       > [    2.261719] sdhci: Secure
> Digital Host Controller Interface driver
> >         >     >       >     >       > [    2.267487] sdhci: Copyright(c=
)
> Pierre Ossman
> >         >     >       >     >       > [    2.271890] sdhci-pltfm: SDHCI
> platform and OF driver helper
> >         >     >       >     >       > [    2.278157] ledtrig-cpu:
> registered to indicate activity on CPUs
> >         >     >       >     >       > [    2.283816]
> zynqmp_firmware_probe Platform Management API v1.1
> >         >     >       >     >       > [    2.289554]
> zynqmp_firmware_probe Trustzone version v1.0
> >         >     >       >     >       > [    2.327875] securefw securefw:
> securefw probed
> >         >     >       >     >       > [    2.328324] alg: No test for
> xilinx-zynqmp-aes (zynqmp-aes)
> >         >     >       >     >       > [    2.332563] zynqmp_aes
> firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
> >         >     >       >     >       > [    2.341183] alg: No test for
> xilinx-zynqmp-rsa (zynqmp-rsa)
> >         >     >       >     >       > [    2.347667] remoteproc
> remoteproc0: ff9a0000.rf5ss:r5f_0 is available
> >         >     >       >     >       > [    2.353003] remoteproc
> remoteproc1: ff9a0000.rf5ss:r5f_1 is available
> >         >     >       >     >       > [    2.362605] fpga_manager fpga0=
:
> Xilinx ZynqMP FPGA Manager registered
> >         >     >       >     >       > [    2.366540] viper-xen-proxy
> viper-xen-proxy: Viper Xen Proxy registered
> >         >     >       >     >       > [    2.372525] viper-vdpp
> a4000000.vdpp: Device Tree Probing
> >         >     >       >     >       > [    2.377778] viper-vdpp
> a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >         >     >       >     >       > [    2.386432] viper-vdpp
> a4000000.vdpp: Unable to register tamper handler. Retrying...
> >         >     >       >     >       > [    2.394094] viper-vdpp-net
> a5000000.vdpp_net: Device Tree Probing
> >         >     >       >     >       > [    2.399854] viper-vdpp-net
> a5000000.vdpp_net: Device registered
> >         >     >       >     >       > [    2.405931] viper-vdpp-stat
> a8000000.vdpp_stat: Device Tree Probing
> >         >     >       >     >       > [    2.412037] viper-vdpp-stat
> a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
> >         >     >       >     >       > [    2.420856] default preset
> >         >     >       >     >       > [    2.423797] viper-vdpp-stat
> a8000000.vdpp_stat: Device registered
> >         >     >       >     >       > [    2.430054] viper-vdpp-rng
> ac000000.vdpp_rng: Device Tree Probing
> >         >     >       >     >       > [    2.435948] viper-vdpp-rng
> ac000000.vdpp_rng: Device registered
> >         >     >       >     >       > [    2.441976] vmcu driver init
> >         >     >       >     >       > [    2.444922] VMCU: : (240:0)
> registered
> >         >     >       >     >       > [    2.444956] In K81 Updater ini=
t
> >         >     >       >     >       > [    2.449003] pktgen: Packet
> Generator for packet performance testing. Version: 2.75
> >         >     >       >     >       > [    2.468833] Initializing XFRM
> netlink socket
> >         >     >       >     >       > [    2.468902] NET: Registered
> PF_PACKET protocol family
> >         >     >       >     >       > [    2.472729] Bridge firewalling
> registered
> >         >     >       >     >       > [    2.476785] 8021q: 802.1Q VLAN
> Support v1.8
> >         >     >       >     >       > [    2.481341] registered
> taskstats version 1
> >         >     >       >     >       > [    2.486394] Btrfs loaded,
> crc32c=3Dcrc32c-generic, zoned=3Dno, fsverity=3Dno
> >         >     >       >     >       > [    2.503145] ff010000.serial:
> ttyPS1 at MMIO 0xff010000 (irq =3D 36, base_baud =3D 6250000) is a xuartp=
s
> >         >     >       >     >       > [    2.507103] of-fpga-region
> fpga-full: FPGA Region probed
> >         >     >       >     >       > [    2.512986] xilinx-zynqmp-dma
> fd500000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.520267] xilinx-zynqmp-dma
> fd510000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.528239] xilinx-zynqmp-dma
> fd520000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.536152] xilinx-zynqmp-dma
> fd530000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.544153] xilinx-zynqmp-dma
> fd540000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.552127] xilinx-zynqmp-dma
> fd550000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.560178] xilinx-zynqmp-dma
> ffa80000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.567987] xilinx-zynqmp-dma
> ffa90000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.576018] xilinx-zynqmp-dma
> ffaa0000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.583889] xilinx-zynqmp-dma
> ffab0000.dma-controller: ZynqMP DMA driver Probe success
> >         >     >       >     >       > [    2.946379] spi-nor spi0.0:
> mt25qu512a (131072 Kbytes)
> >         >     >       >     >       > [    2.946467] 2 fixed-partitions
> partitions found on MTD device spi0.0
> >         >     >       >     >       > [    2.952393] Creating 2 MTD
> partitions on "spi0.0":
> >         >     >       >     >       > [    2.957231]
> 0x000004000000-0x000008000000 : "bank A"
> >         >     >       >     >       > [    2.963332]
> 0x000000000000-0x000004000000 : "bank B"
> >         >     >       >     >       > [    2.968694] macb
> ff0b0000.ethernet: Not enabling partial store and forward
> >         >     >       >     >       > [    2.975333] macb
> ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25
> >         >     >       (18:41:fe:0f:ff:02)
> >         >     >       >     >       > [    2.984472] macb
> ff0c0000.ethernet: Not enabling partial store and forward
> >         >     >       >     >       > [    2.992144] macb
> ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26
> >         >     >       (18:41:fe:0f:ff:03)
> >         >     >       >     >       > [    3.001043] viper_enet
> viper_enet: Viper power GPIOs initialised
> >         >     >       >     >       > [    3.007313] viper_enet
> viper_enet vnet0 (uninitialized): Validate interface QSGMII
> >         >     >       >     >       > [    3.014914] viper_enet
> viper_enet vnet1 (uninitialized): Validate interface QSGMII
> >         >     >       >     >       > [    3.022138] viper_enet
> viper_enet vnet1 (uninitialized): Validate interface type 18
> >         >     >       >     >       > [    3.030274] viper_enet
> viper_enet vnet2 (uninitialized): Validate interface QSGMII
> >         >     >       >     >       > [    3.037785] viper_enet
> viper_enet vnet3 (uninitialized): Validate interface QSGMII
> >         >     >       >     >       > [    3.045301] viper_enet
> viper_enet: Viper enet registered
> >         >     >       >     >       > [    3.050958] xilinx-axipmon
> ffa00000.perf-monitor: Probed Xilinx APM
> >         >     >       >     >       > [    3.057135] xilinx-axipmon
> fd0b0000.perf-monitor: Probed Xilinx APM
> >         >     >       >     >       > [    3.063538] xilinx-axipmon
> fd490000.perf-monitor: Probed Xilinx APM
> >         >     >       >     >       > [    3.069920] xilinx-axipmon
> ffa10000.perf-monitor: Probed Xilinx APM
> >         >     >       >     >       > [    3.097729] si70xx: probe of
> 2-0040 failed with error -5
> >         >     >       >     >       > [    3.098042] cdns-wdt
> fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> >         >     >       >     >       > [    3.105111] cdns-wdt
> ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> >         >     >       >     >       > [    3.112457] viper-tamper
> viper-tamper: Device registered
> >         >     >       >     >       > [    3.117593] active_bank
> active_bank: boot bank: 1
> >         >     >       >     >       > [    3.122184] active_bank
> active_bank: boot mode: (0x02) qspi32
> >         >     >       >     >       > [    3.128247] viper-vdpp
> a4000000.vdpp: Device Tree Probing
> >         >     >       >     >       > [    3.133439] viper-vdpp
> a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >         >     >       >     >       > [    3.142151] viper-vdpp
> a4000000.vdpp: Tamper handler registered
> >         >     >       >     >       > [    3.147438] viper-vdpp
> a4000000.vdpp: Device registered
> >         >     >       >     >       > [    3.153007] lpc55_l2 spi1.0:
> registered handler for protocol 0
> >         >     >       >     >       > [    3.158582] lpc55_user
> lpc55_user: The major number for your device is 236
> >         >     >       >     >       > [    3.165976] lpc55_l2 spi1.0:
> registered handler for protocol 1
> >         >     >       >     >       > [    3.181999] rtc-lpc55
> rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> >         >     >       >     >       > [    3.182856] rtc-lpc55
> rtc_lpc55: registered as rtc0
> >         >     >       >     >       > [    3.188656] lpc55_l2 spi1.0:
> (2) mcu still not ready?
> >         >     >       >     >       > [    3.193744] lpc55_l2 spi1.0:
> (3) mcu still not ready?
> >         >     >       >     >       > [    3.198848] lpc55_l2 spi1.0:
> (4) mcu still not ready?
> >         >     >       >     >       > [    3.202932] mmc0: SDHCI
> controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> >         >     >       >     >       > [    3.210689] lpc55_l2 spi1.0:
> (5) mcu still not ready?
> >         >     >       >     >       > [    3.215694] lpc55_l2 spi1.0: r=
x
> error: -110
> >         >     >       >     >       > [    3.284438] mmc0: new HS200 MM=
C
> card at address 0001
> >         >     >       >     >       > [    3.285179] mmcblk0: mmc0:0001
> SEM16G 14.6 GiB
> >         >     >       >     >       > [    3.291784]  mmcblk0: p1 p2 p3
> p4 p5 p6 p7 p8
> >         >     >       >     >       > [    3.293915] mmcblk0boot0:
> mmc0:0001 SEM16G 4.00 MiB
> >         >     >       >     >       > [    3.299054] mmcblk0boot1:
> mmc0:0001 SEM16G 4.00 MiB
> >         >     >       >     >       > [    3.303905] mmcblk0rpmb:
> mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> >         >     >       >     >       > [    3.582676] rtc-lpc55
> rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> >         >     >       >     >       > [    3.583332] rtc-lpc55
> rtc_lpc55: hctosys: unable to read the hardware clock
> >         >     >       >     >       > [    3.591252] cdns-i2c
> ff020000.i2c: recovery information complete
> >         >     >       >     >       > [    3.597085] at24 0-0050: suppl=
y
> vcc not found, using dummy regulator
> >         >     >       >     >       > [    3.603011] lpc55_l2 spi1.0:
> (2) mcu still not ready?
> >         >     >       >     >       > [    3.608093] at24 0-0050: 256
> byte spd EEPROM, read-only
> >         >     >       >     >       > [    3.613620] lpc55_l2 spi1.0:
> (3) mcu still not ready?
> >         >     >       >     >       > [    3.619362] lpc55_l2 spi1.0:
> (4) mcu still not ready?
> >         >     >       >     >       > [    3.624224] rtc-rv3028 0-0052:
> registered as rtc1
> >         >     >       >     >       > [    3.628343] lpc55_l2 spi1.0:
> (5) mcu still not ready?
> >         >     >       >     >       > [    3.633253] lpc55_l2 spi1.0: r=
x
> error: -110
> >         >     >       >     >       > [    3.639104] k81_bootloader
> 0-0010: probe
> >         >     >       >     >       > [    3.641628] VMCU: : (235:0)
> registered
> >         >     >       >     >       > [    3.641635] k81_bootloader
> 0-0010: probe completed
> >         >     >       >     >       > [    3.668346] cdns-i2c
> ff020000.i2c: 400 kHz mmio ff020000 irq 28
> >         >     >       >     >       > [    3.669154] cdns-i2c
> ff030000.i2c: recovery information complete
> >         >     >       >     >       > [    3.675412] lm75 1-0048: suppl=
y
> vs not found, using dummy regulator
> >         >     >       >     >       > [    3.682920] lm75 1-0048:
> hwmon1: sensor 'tmp112'
> >         >     >       >     >       > [    3.686548] i2c i2c-1: Added
> multiplexed i2c bus 3
> >         >     >       >     >       > [    3.690795] i2c i2c-1: Added
> multiplexed i2c bus 4
> >         >     >       >     >       > [    3.695629] i2c i2c-1: Added
> multiplexed i2c bus 5
> >         >     >       >     >       > [    3.700492] i2c i2c-1: Added
> multiplexed i2c bus 6
> >         >     >       >     >       > [    3.705157] pca954x 1-0070:
> registered 4 multiplexed busses for I2C switch pca9546
> >         >     >       >     >       > [    3.713049] at24 1-0054: suppl=
y
> vcc not found, using dummy regulator
> >         >     >       >     >       > [    3.720067] at24 1-0054: 1024
> byte 24c08 EEPROM, read-only
> >         >     >       >     >       > [    3.724761] cdns-i2c
> ff030000.i2c: 100 kHz mmio ff030000 irq 29
> >         >     >       >     >       > [    3.731272] sfp
> viper_enet:sfp-eth1: Host maximum power 2.0W
> >         >     >       >     >       > [    3.737549]
> sfp_register_socket: got sfp_bus
> >         >     >       >     >       > [    3.740709]
> sfp_register_socket: register sfp_bus
> >         >     >       >     >       > [    3.745459] sfp_register_bus:
> ops ok!
> >         >     >       >     >       > [    3.749179] sfp_register_bus:
> Try to attach
> >         >     >       >     >       > [    3.753419] sfp_register_bus:
> Attach succeeded
> >         >     >       >     >       > [    3.757914] sfp_register_bus:
> upstream ops attach
> >         >     >       >     >       > [    3.762677] sfp_register_bus:
> Bus registered
> >         >     >       >     >       > [    3.766999]
> sfp_register_socket: register sfp_bus succeeded
> >         >     >       >     >       > [    3.775870] of_cfs_init
> >         >     >       >     >       > [    3.776000] of_cfs_init: OK
> >         >     >       >     >       > [    3.778211] clk: Not disabling
> unused clocks
> >         >     >       >     >       > [   11.278477] Freeing initrd
> memory: 206056K
> >         >     >       >     >       > [   11.279406] Freeing unused
> kernel memory: 1536K
> >         >     >       >     >       > [   11.314006] Checked W+X
> mappings: passed, no W+X pages found
> >         >     >       >     >       > [   11.314142] Run /init as init
> process
> >         >     >       >     >       > INIT: version 3.01 booting
> >         >     >       >     >       > fsck (busybox 1.35.0)
> >         >     >       >     >       > /dev/mmcblk0p1: clean, 12/102400
> files, 238162/409600 blocks
> >         >     >       >     >       > /dev/mmcblk0p2: clean, 12/102400
> files, 171972/409600 blocks
> >         >     >       >     >       > /dev/mmcblk0p3 was not cleanly
> unmounted, check forced.
> >         >     >       >     >       > /dev/mmcblk0p3: 20/4096 files
> (0.0% non-contiguous), 663/16384 blocks
> >         >     >       >     >       > [   11.553073] EXT4-fs
> (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode=
:
> >         >     >       disabled.
> >         >     >       >     >       > Starting random number generator
> daemon.
> >         >     >       >     >       > [   11.580662] random: crng init
> done
> >         >     >       >     >       > Starting udev
> >         >     >       >     >       > [   11.613159] udevd[142]:
> starting version 3.2.10
> >         >     >       >     >       > [   11.620385] udevd[143]:
> starting eudev-3.2.10
> >         >     >       >     >       > [   11.704481] macb
> ff0b0000.ethernet control_red: renamed from eth0
> >         >     >       >     >       > [   11.720264] macb
> ff0c0000.ethernet control_black: renamed from eth1
> >         >     >       >     >       > [   12.063396]
> ip_local_port_range: prefer different parity for start/end values.
> >         >     >       >     >       > [   12.084801] rtc-lpc55
> rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> >         >     >       >     >       > hwclock: RTC_RD_TIME: Invalid
> exchange
> >         >     >       >     >       > Mon Feb 27 08:40:53 UTC 2023
> >         >     >       >     >       > [   12.115309] rtc-lpc55
> rtc_lpc55: lpc55_rtc_set_time: bad result
> >         >     >       >     >       > hwclock: RTC_SET_TIME: Invalid
> exchange
> >         >     >       >     >       > [   12.131027] rtc-lpc55
> rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> >         >     >       >     >       > Starting mcud
> >         >     >       >     >       > INIT: Entering runlevel: 5
> >         >     >       >     >       > Configuring network interfaces...
> done.
> >         >     >       >     >       > resetting network interface
> >         >     >       >     >       > [   12.718295] macb
> ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver
> [Xilinx
> >         >     >       PCS/PMA PHY] (irq=3DPOLL)
> >         >     >       >     >       > [   12.723919] macb
> ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> >         >     >       >     >       > [   12.732151] pps pps0: new PPS
> source ptp0
> >         >     >       >     >       > [   12.735563] macb
> ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> >         >     >       >     >       > [   12.745724] macb
> ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driv=
er
> [Xilinx
> >         >     >       PCS/PMA PHY]
> >         >     >       >     >       (irq=3DPOLL)
> >         >     >       >     >       > [   12.753469] macb
> ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> >         >     >       >     >       > [   12.761804] pps pps1: new PPS
> source ptp1
> >         >     >       >     >       > [   12.765398] macb
> ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> >         >     >       >     >       > Auto-negotiation: off
> >         >     >       >     >       > Auto-negotiation: off
> >         >     >       >     >       > [   16.828151] macb
> ff0b0000.ethernet control_red: unable to generate target frequency:
> 125000000 Hz
> >         >     >       >     >       > [   16.834553] macb
> ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> >         >     >       >     >       > [   16.860552] macb
> ff0c0000.ethernet control_black: unable to generate target frequency:
> 125000000 Hz
> [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> Starting Failsafe Secure Shell server in port 2222: sshd
> done.
> Starting rpcbind daemon...done.
>
> [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Starting State Manager Service
> Start state-manager restarter...
> (XEN) d0v1 Forwarding AES operation: 3254779951
> Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> [   17.350670] BTRFS info (device dm-0): has skinny extents
> [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> [   17.872699] BTRFS info (device dm-1): using free space tree
> [   17.872771] BTRFS info (device dm-1): has skinny extents
> [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> [   17.895695] BTRFS info (device dm-1): checking UUID tree
>
> Setting domain 0 name, domid and JSON config...
> Done setting up Dom0
> Starting xenconsoled...
> Starting QEMU as disk backend for dom0
> Starting domain watchdog daemon: xenwatchdogd startup
>
> [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> [done]
> [   18.465552] BTRFS info (device dm-2): using free space tree
> [   18.465629] BTRFS info (device dm-2): has skinny extents
> [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> [   18.486659] BTRFS info (device dm-2): checking UUID tree
> OK
> starting rsyslogd ... Log partition ready after 0 poll loops
> done
> rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>
> Please insert USB token and enter your role in login prompt.
>
> login:
>
> Regards,
> O.
>
>
> On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > Hi Oleg,
> >
> > Here is the issue from your logs:
> >
> > SError Interrupt on CPU0, code 0xbe000000 -- SError
> >
> > SErrors are special signals to notify software of serious hardware
> > errors. Something is going very wrong. Defective hardware is a
> > possibility. Another possibility is software accessing address ranges
> > that it is not supposed to; that sometimes causes SErrors.
> >
> > Cheers,
> >
> > Stefano
> >
> >
> > On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> > > Hello,
> > >
> > > Thanks guys.
> > > I found out where the problem was.
> > > Now dom0 boots further, but I have a new problem.
> > > This is a kernel panic during Dom0 loading.
> > > Maybe someone is able to suggest something?
> > >
> > > Regards,
> > > O.
> > >
> > > [    3.771362] sfp_register_bus: upstream ops attach
> > > [    3.776119] sfp_register_bus: Bus registered
> > > [    3.780459] sfp_register_socket: register sfp_bus succeeded
> > > [    3.789399] of_cfs_init
> > > [    3.789499] of_cfs_init: OK
> > > [    3.791685] clk: Not disabling unused clocks
> > > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> > > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010393] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > > [   11.010422] pc : simple_write_end+0xd0/0x130
> > > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> > > [   11.010438] sp : ffffffc00809b910
> > > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> > > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> > > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> > > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> > > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> > > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> > > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> > > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> > > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> > > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> > > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> > > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> > > [   11.010548] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010556] Call trace:
> > > [   11.010558]  dump_backtrace+0x0/0x1c4
> > > [   11.010567]  show_stack+0x18/0x2c
> > > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> > > [   11.010583]  dump_stack+0x18/0x34
> > > [   11.010588]  panic+0x14c/0x2f8
> > > [   11.010597]  print_tainted+0x0/0xb0
> > > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> > > [   11.010614]  do_serror+0x28/0x60
> > > [   11.010621]  el1h_64_error_handler+0x30/0x50
> > > [   11.010628]  el1h_64_error+0x78/0x7c
> > > [   11.010633]  simple_write_end+0xd0/0x130
> > > [   11.010639]  generic_perform_write+0x118/0x1e0
> > > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> > > [   11.010650]  generic_file_write_iter+0x78/0xd0
> > > [   11.010656]  __kernel_write+0xfc/0x2ac
> > > [   11.010665]  kernel_write+0x88/0x160
> > > [   11.010673]  xwrite+0x44/0x94
> > > [   11.010680]  do_copy+0xa8/0x104
> > > [   11.010686]  write_buffer+0x38/0x58
> > > [   11.010692]  flush_buffer+0x4c/0xbc
> > > [   11.010698]  __gunzip+0x280/0x310
> > > [   11.010704]  gunzip+0x1c/0x28
> > > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> > > [   11.010715]  do_populate_rootfs+0x80/0x164
> > > [   11.010722]  async_run_entry_fn+0x48/0x164
> > > [   11.010728]  process_one_work+0x1e4/0x3a0
> > > [   11.010736]  worker_thread+0x7c/0x4c0
> > > [   11.010743]  kthread+0x120/0x130
> > > [   11.010750]  ret_from_fork+0x10/0x20
> > > [   11.010757] SMP: stopping secondary CPUs
> > > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> > > [   11.010788] PHYS_OFFSET: 0x0
> > > [   11.010790] CPU features: 0x00000401,00000842
> > > [   11.010795] Memory Limit: none
> > > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> > >
> > > On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > Hi Oleg,
> > > >
> > > > On 21/04/2023 14:49, Oleg Nikitenko wrote:
> > > > > Hello Michal,
> > > > >
> > > > > I was not able to enable earlyprintk in the xen for now.
> > > > > I decided to choose another way.
> > > > > This is xen's command line, which I have now dumped in full:
> > > > >
> > > > > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> > > > Yes, adding a printk() in Xen was also a good idea.
> > > >
> > > > > So you are absolutely right about the command line.
> > > > > Now I am going to find out why xen did not get the correct parameters from the device tree.
> > > > Maybe you will find this document helpful:
> > > > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> > > >
> > > > ~Michal
> > > >
> > > > > Regards,
> > > > > Oleg
> > > > >
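[Archive note: the booting.txt document referenced above specifies how Xen reads its command line from the device tree's /chosen node. A minimal sketch of the relevant fragment is below; the property names follow that document, while the addresses and argument values are purely illustrative, not taken from this thread.]

```dts
/ {
    chosen {
        /* Command line parsed by Xen itself */
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M";

        /* Dom0 kernel module; "bootargs" here is the dom0 command line */
        module@0 {
            compatible = "multiboot,kernel", "multiboot,module";
            reg = <0x0 0x80000 0x1000000>; /* illustrative load address/size */
            bootargs = "console=hvc0 earlycon=xenboot";
        };
    };
};
```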
> > > > > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > > > On 21/04/2023 10:04, Oleg Nikitenko wrote:
> > > > > > > Hello Michal,
> > > > > > >
> > > > > > > Yes, I use yocto.
> > > > > > >
> > > > > > > Yesterday all day long I tried to follow your suggestions.
> > > > > > > I faced a problem.
> > > > > > > I manually pasted these strings into the xen build config file:
> > > > > > In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
> > > > > > You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
> > > > > >
> > > > > > > CONFIG_EARLY_PRINTK
> > > > > > > CONFIG_EARLY_PRINTK_ZYNQMP
> > > > > > > CONFIG_EARLY_UART_CHOICE_CADENCE
> > > > > > I hope you added =y to them.
> > > > > >
> > > > > > Anyway, you have at least the following solutions:
> > > > > > 1) Run bitbake xen -c menuconfig to properly set early printk
> > > > > > 2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y that is not enabled by default)
> > > > > > 3) Append the following to "xen/arch/arm/configs/arm64_defconfig": CONFIG_EARLY_PRINTK_ZYNQMP=y
> > > > > >
> > > > > > ~Michal
> > > > > >
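[Archive note: a sketch of what option 2 can look like in a Yocto layer. The file names are hypothetical, and it is an assumption, not confirmed in this thread, that the xen recipe in use merges .cfg fragments the way kernel recipes do; if it does not, option 3 (editing arm64_defconfig directly) is the reliable fallback.]

```conf
# xen_%.bbappend -- hypothetical append in your own layer
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://earlyprintk.cfg"

# files/earlyprintk.cfg -- Kconfig fragment; note the =y suffixes
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
```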
> > > > > > >
> > > > > > > The host hangs at build time.
> > > > > > > Maybe I did not set something in the build config file?
> > > > > > >
> > > > > > > Regards,
> > > > > > > Oleg
> > > > > > >
> > > > > > > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > > > Thanks Michal,
> > > > > > > >
> > > > > > > > You gave me an idea.
> > > > > > > > I am going to try it today.
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > O.
> > > > > > > >
> > > > > > > > On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > > > > Thanks Stefano.
> > > > > > > > >
> > > > > > > > > I am going to do it today.
> > > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > > O.
> > > > > > > > >
> > > > > > > > > On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > > > On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > > > > Hi Michal,
> > > > > > > > > > >
> > > > > > > > > > > I corrected xen's command line.
> > > > > > > > > > > Now it is
> > > > > > > > > > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> > > > > > > > > >
> > > > > > > > > > 4 colors is way too many for xen, just do xen_colors=0-0. There is no
> > > > > > > > > > advantage in using more than 1 color for Xen.
> > > > > > > > > >
> > > > > > > > > > 4 colors is too few for dom0 if you are giving 1600M of memory to Dom0.
> > > > > > > > > > Each color is 256M. For 1600M you should give at least 7 colors. Try:
> > > > > > > > > >
> > > > > > > > > > xen_colors=0-0 dom0_colors=1-8
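[Archive note: the color arithmetic above can be sketched in a few lines. The 256M-per-color figure is taken from the message; everything else is illustrative.]

```python
def colors_needed(dom0_mem_mib: int, mib_per_color: int = 256) -> int:
    """Minimum number of cache colors needed to cover dom0's memory.

    Each color contributes a fixed slice of allocatable memory
    (256M here, per the message above). Ceiling division: a partial
    color does not exist, so round up.
    """
    return -(-dom0_mem_mib // mib_per_color)

# dom0_mem=1600M needs at least 7 colors; dom0_colors=1-8 grants 8,
# leaving headroom after reserving color 0 for Xen itself.
print(colors_needed(1600))  # -> 7
```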
> > > > > > > > > >
> > > > > > > > > > > Unfortunately the result was the same.
> > > > > > > > > > >
> > > > > > > > > > > (XEN)  - Dom0 mode: Relaxed
> > > > > > > > > > > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> > > > > > > > > > > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> > > > > > > > > > > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> > > > > > > > > > > (XEN) Coloring general information
> > > > > > > > > > > (XEN) Way size: 64kB
> > > > > > > > > > > (XEN) Max. number of colors available: 16
> > > > > > > > > > > (XEN) Xen color(s): [ 0 ]
> > > > > > > > > > > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> > > > > > > > > > > (XEN) Color array allocation failed for dom0
> > > > > > > > > > > (XEN)
> > > > > > > > > > > (XEN) ****************************************
> > > > > > > > > > > (XEN) Panic on CPU 0:
> > > > > > > > > > > (XEN) Error creating domain 0
> > > > > > > > > > > (XEN) ****************************************
> > > > > > > > > > > (XEN)
> > > > > > > > > > > (XEN) Reboot in five seconds...
> > > > > > > > > > >
> > > > > > > > > > > I am going to find out how the command line arguments are passed and parsed.
> > > > > > > > > > >
> > > > > > > > > > > Regards,
> > > > > > > > > > > Oleg
> > > > > > > > > > >
> > > > > > > > > > > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > > > > > > > Hi Michal,
> > > > > > > > > > > >
> > > > > > > > > > > > You pointed me right at the problem. Thank you.
> > > > > > > > > > > > I am going to use your point.
> > > > > > > > > > > > Let's see what happens.
> > > > > > > > > > > >
> > > > > > > > > > > > Regards,
> > > > > > > > > > > > Oleg
> > > > > > > > > > > >
> >         >     >       >     >       >       >       >     >
>  > =D1=81=D1=80, 19 =D0=B0=D0=BF=D1=80. 2023=E2=80=AF=D0=B3. =D0=B2 10:37=
, Michal Orzel <michal.orzel@amd.com
> <mailto:michal.orzel@amd.com> <mailto:michal.orzel@amd.com <mailto:
> michal.orzel@amd.com>>
> >         >     >       <mailto:michal.orzel@amd.com <mailto:
> michal.orzel@amd.com> <mailto:michal.orzel@amd.com <mailto:
> michal.orzel@amd.com>>>
> >         >     >       >     >       <mailto:michal.orzel@amd.com
> <mailto:michal.orzel@amd.com> <mailto:michal.orzel@amd.com <mailto:
> michal.orzel@amd.com>> <mailto:michal.orzel@amd.com <mailto:
> michal.orzel@amd.com> <mailto:michal.orzel@amd.com <mailto:
> michal.orzel@amd.com>>>>
> >         >     >       >     >       >       >       <mailto:
> michal.orzel@amd.com <mailto:michal.orzel@amd.com> <mailto:
> michal.orzel@amd.com <mailto:michal.orzel@amd.com>> <mailto:
> michal.orzel@amd.com <mailto:michal.orzel@amd.com> <mailto:
> michal.orzel@amd.com <mailto:michal.orzel@amd.com>>> <mailto:
> michal.orzel@amd.com <mailto:michal.orzel@amd.com> <mailto:
> michal.orzel@amd.com <mailto:michal.orzel@amd.com>>
> >         >     >       <mailto:michal.orzel@amd.com <mailto:
> michal.orzel@amd.com> <mailto:michal.orzel@amd.com <mailto:
> michal.orzel@amd.com>>>>>>:
> >         >     >       >     >       >       >       >     >
>  >       Hi Oleg,
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >
>  >       On 19/04/2023 09:03, Oleg Nikitenko wrote:
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       > Hello Stefano,
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       > Thanks for the clarification.
> >         >     >       >     >       >       >       >     >
>  >       > My company uses yocto for image generation.
> >         >     >       >     >       >       >       >     >
>  >       > What kind of information do you need to consult me in this
> >         >     >       case ?
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       > Maybe modules sizes/addresses which were mentioned by @Julien
> >         >     >       Grall
> >         >     >       >     >       >       <mailto:julien@xen.org
> <mailto:julien@xen.org> <mailto:julien@xen.org <mailto:julien@xen.org>>
> <mailto:julien@xen.org <mailto:julien@xen.org> <mailto:julien@xen.org
> <mailto:julien@xen.org>>>
> >         >     >       >     >       >       >       <mailto:
> julien@xen.org <mailto:julien@xen.org> <mailto:julien@xen.org <mailto:
> julien@xen.org>> <mailto:julien@xen.org <mailto:julien@xen.org> <mailto:
> julien@xen.org <mailto:julien@xen.org>>>> <mailto:julien@xen.org <mailto:
> julien@xen.org> <mailto:julien@xen.org <mailto:julien@xen.org>>
> >         >     >       <mailto:julien@xen.org <mailto:julien@xen.org>
> <mailto:julien@xen.org <mailto:julien@xen.org>>> <mailto:julien@xen.org
> <mailto:julien@xen.org> <mailto:julien@xen.org <mailto:julien@xen.org>>
> <mailto:julien@xen.org <mailto:julien@xen.org> <mailto:julien@xen.org
> <mailto:julien@xen.org>>>>>> ?
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >
>  >       Sorry for jumping into discussion, but FWICS the Xen command
> >         >     >       line you provided
> >         >     >       >     >       seems to be
> >         >     >       >     >       >       not the
> >         >     >       >     >       >       >       one
> >         >     >       >     >       >       >       >     >
>  >       Xen booted with. The error you are observing most likely is due
> >         >     >       to dom0 colors
> >         >     >       >     >       >       configuration not
> >         >     >       >     >       >       >       being
> >         >     >       >     >       >       >       >     >
>  >       specified (i.e. lack of dom0_colors=3D<> parameter). Although in
> >         >     >       the command line you
> >         >     >       >     >       >       provided, this
> >         >     >       >     >       >       >       parameter
> >         >     >       >     >       >       >       >     >
>  >       is set, I strongly doubt that this is the actual command line
> >         >     >       in use.
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >
>  >       You wrote:
> >         >     >       >     >       >       >       >     >
>  >       xen,xen-bootargs =3D "console=3Ddtuart dtuart=3Dserial0
> >         >     >       dom0_mem=3D1600M dom0_max_vcpus=3D2
> >         >     >       >     >       >       dom0_vcpus_pin
> >         >     >       >     >       >       >       bootscrub=3D0
> vwfi=3Dnative
> >         >     >       >     >       >       >       >     >
>  >       sched=3Dnull timer_slop=3D0 way_szize=3D65536 xen_colors=3D0-3
> >         >     >       dom0_colors=3D4-7";
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >
>  >       but:
> >         >     >       >     >       >       >       >     >
>  >       1) way_szize has a typo
> >         >     >       >     >       >       >       >     >
>  >       2) you specified 4 colors (0-3) for Xen, but the boot log says
> >         >     >       that Xen has only
> >         >     >       >     >       one:
> >         >     >       >     >       >       >       >     >
>  >       (XEN) Xen color(s): [ 0 ]
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >
>  >       This makes me believe that no colors configuration actually end
> >         >     >       up in command line
> >         >     >       >     >       that Xen
> >         >     >       >     >       >       booted
> >         >     >       >     >       >       >       with.
> >         >     >       >     >       >       >       >     >
>  >       Single color for Xen is a "default if not specified" and way
> >         >     >       size was probably
> >         >     >       >     >       calculated
> >         >     >       >     >       >       by asking
> >         >     >       >     >       >       >       HW.
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >
>  >       So I would suggest to first cross-check the command line in
> >         >     >       use.
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >
>  >       ~Michal
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       > Regards,
> >         >     >       >     >       >       >       >     >
>  >       > Oleg
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       > =D0=B2=D1=82, 18 =D0=B0=D0=BF=D1=80. 2023=E2=80=AF=D0=B3. =D0=
=B2 20:44, Stefano Stabellini
> >         >     >       <sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>>>
> >         >     >       >     >       >       >       <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org>> <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>>
> >         >     >       <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>>> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>>
> >         >     >       <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>>>>>
> >         >     >       >     >       >       <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org>> <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>
> >         >     >       >     >       >       >       <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org>> <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org> <mailto:
> sstabellini@kernel.org <mailto:sstabellini@kernel.org>>>>
> >         >     >       <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>>> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>>
> >         >     >       <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org> <mailto:sstabellini@kernel.org <mailto:
> sstabellini@kernel.org>>>>>>>:
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> >         >     >       >     >       >       >       >     >
>  >       >     > Hi Julien,
> >         >     >       >     >       >       >       >     >
>  >       >     >
> >         >     >       >     >       >       >       >     >
>  >       >     > >> This feature has not been merged in Xen upstream yet
> >         >     >       >     >       >       >       >     >
>  >       >     >
> >         >     >       >     >       >       >       >     >
>  >       >     > > would assume that upstream + the series on the ML [1]
> >         >     >       work
> >         >     >       >     >       >       >       >     >
>  >       >     >
> >         >     >       >     >       >       >       >     >
>  >       >     > Please clarify this point.
> >         >     >       >     >       >       >       >     >
>  >       >     > Because the two thoughts are controversial.
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >     Hi Oleg,
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >     As Julien wrote, there is nothing controversial. As you
> >         >     >       are aware,
> >         >     >       >     >       >       >       >     >
>  >       >     Xilinx maintains a separate Xen tree specific for Xilinx
> >         >     >       here:
> >         >     >       >     >       >       >       >     >
>  >       >     https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>
> >         >     >       <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>>>
> >         >     >       >     >       >       <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>
> >         >     >       >     >       >       >       <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>>
> >         >     >       <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>
> >         >     >       <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>>>
> >         >     >       >     >       >       <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>
> >         >     >       >     >       >       >       <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>>>
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >     and the branch you are using (xlnx_rebase_4.16) comes
> >         >     >       from there.
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >     Instead, the upstream Xen tree lives here:
> >         >     >       >     >       >       >       >     >
>  >       >     https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
> >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >         >     >       >     >       >       >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
> >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
> >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >         >     >       >     >       >       >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
> >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
> >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >         >     >       >     >       >       >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
> >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
> >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >         >     >       >     >       >       >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>
> >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>>>
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >     The Cache Coloring feature that you are trying to
> >         >     >       configure is present
> >         >     >       >     >       >       >       >     >
>  >       >     in xlnx_rebase_4.16, but not yet present upstream (there
> >         >     >       is an
> >         >     >       >     >       >       >       >     >
>  >       >     outstanding patch series to add cache coloring to Xen
> >         >     >       upstream but it
> >         >     >       >     >       >       >       >     >
>  >       >     hasn't been merged yet.)
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't
> >         >     >       matter too much for
> >         >     >       >     >       >       >       >     >
>  >       >     you as you already have Cache Coloring as a feature
> >         >     >       there.
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >     I take you are using ImageBuilder to generate the boot
> >         >     >       configuration? If
> >         >     >       >     >       >       >       >     >
>  >       >     so, please post the ImageBuilder config file that you are
> >         >     >       using.
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >
>  >       >     But from the boot message, it looks like the colors
> >         >     >       configuration for
> >         >     >       >     >       >       >       >     >
>  >       >     Dom0 is incorrect.
> >         >     >       >     >       >       >       >     >
>  >       >
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >            =
 >
> >         >     >       >     >       >       >       >     >
> >         >     >       >     >       >       >       >
> >         >     >       >     >       >       >
> >         >     >       >     >       >       >
> >         >     >       >     >       >       >
> >         >     >       >     >       >
> >         >     >       >     >       >
> >         >     >       >     >       >
> >         >     >       >     >
> >         >     >       >     >
> >         >     >       >     >
> >         >     >       >
> >         >     >
> >         >     >
> >         >     >
> >         >
> >
>
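Michal's two findings (the way_szize typo and the colors configuration never reaching Xen) would be addressed by a /chosen fragment along these lines. This is only a sketch reusing the values from the quoted command line; the node layout follows the usual ImageBuilder-style output and is not taken from Oleg's actual device tree:

```dts
/* Sketch of a corrected /chosen node: identical to the quoted command
 * line except that "way_szize" is spelled "way_size". */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
};
```

If this command line really reaches Xen, the boot log should report four Xen colors instead of `(XEN) Xen color(s): [ 0 ]`. As a sanity check on the numbers: with a way size of 65536 bytes and 4 KiB pages there are 65536 / 4096 = 16 colors (indexes 0-15), so xen_colors=0-3 and dom0_colors=4-7 are both in range, and on a board with 4 GiB of DRAM each color maps 4 GiB / 16 = 256 MiB of physical memory.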

Hello,

Thanks a lot Michal.

Then the next question.
When I just started my experiments with xen, Stefano mentioned that each cache's color size is 256M.
Is it possible to extend this figure ?

Regards,
Oleg

On Mon, 15 May 2023 at 11:57, Michal Orzel <michal.orzel@amd.com> wrote:
> Hi Oleg,
>
> On 15/05/2023 10:51, Oleg Nikitenko wrote:
>> Hello guys,
>>
>> Thanks a lot.
>> After a long problem list I was able to run xen with Dom0 with a cache color.
>> One more question from my side.
>> I want to run a guest with color mode too.
>> I inserted a string into the guest config file: llc-colors = "9-13"
>> I got an error:
>> [  457.517004] loop0: detected capacity change from 0 to 385840
>> Parsing config from /xen/red_config.cfg
>> /xen/red_config.cfg:26: config parsing error near `-colors': lexical error
>> warning: Config file looks like it contains Python code.
>> warning: Arbitrary Python is no longer supported.
>> warning: See https://wiki.xen.org/wiki/PythonInXlConfig
>> Failed to parse config: Invalid argument
>> So this is a question.
>> Is it possible to assign a color mode for the DomU by config file ?
>> If so, what string should I use?
>
> Please, always refer to the relevant documentation. In this case, for xl.cfg:
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/man/xl.cfg.5.pod.in#L2890
>
> ~Michal
>
>> Regards,
>> Oleg
>>
>> On Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>>> Hi Michal,
>>>
>>> Thanks.
>>> This compilation previously had a name CONFIG_COLORING.
>>> It mixed me up.
>>>
>>> Regards,
>>> Oleg
>>>
>>> On Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com> wrote:
>>>> Hi Oleg,
>>>>
>>>> On 11/05/2023 12:02, Oleg Nikitenko wrote:
>>>>> Hello,
>>>>>
>>>>> Thanks Stefano.
>>>>> Then the next question.
>>>>> I cloned the xen repo from the xilinx site https://github.com/Xilinx/xen.git
>>>>> I managed to build the xlnx_rebase_4.17 branch in my environment.
>>>>> I did it without coloring first. I did not find any color footprints in this branch.
>>>>> I realized coloring is not in the xlnx_rebase_4.17 branch yet.
>>>> This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the docs:
>>>> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
>>>>
>>>> It describes the feature and documents the required properties.
>>>>
>>>> ~Michal
>>>>
>>>>> On Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>> We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
>>>>>> (twice a year) is tested with cache coloring enabled. The last Petalinux
>>>>>> release is 2023.1 and the kernel used is this:
>>>>>> https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
>>>>>>
>>>>>> On Tue, 9 May 2023, Oleg Nikitenko wrote:
>>>>>>> Hello guys,
>>>>>>>
>>>>>>> I have a couple more questions.
>>>>>>> Have you ever run xen with the cache coloring on a Zynq UltraScale+ MPSoC zcu102 xczu15eg ?
>>>>>>> When did you run xen with the cache coloring last time ?
>>>>>>> What kernel version did you use for Dom0 when you ran xen with the cache coloring last time ?
>>>>>>>
>>>>>>> Regards,
>>>>>>> Oleg
>>>>>>>
>>>>>>> On Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>>>>>>>> Hi Michal,
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Oleg
>>>>>>>>
>>>>>>>> On Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel
QGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5r
Ij5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5j
b208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0
YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsmZ3Q7Ojxicj4N
CiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoEhpIE9sZWcsPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoFJlcGx5aW5nLCBzbyB0aGF0IHlvdSBkbyBu
b3QgbmVlZCB0byB3YWl0IGZvciBTdGVmYW5vLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqBPbiAwNS8wNS8yMDIzIDEwOjI4LCBPbGVnIE5pa2l0ZW5rbyB3cm90ZTo8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqA8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSGVsbG8g
U3RlZmFubyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBJIHdvdWxkIGxpa2UgdG8gdHJ5IGEgeGVuIGNhY2hlIGNvbG9yIHByb3BlcnR5IGZyb20g
dGhpcyByZXBvwqAgPGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0LWh0dHAveGVu
LmdpdCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhl
bi5vcmcvZ2l0LWh0dHAveGVuLmdpdDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54
ZW4ub3JnL2dpdC1odHRwL3hlbi5naXQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsi
Pmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdC1odHRwL3hlbi5naXQ8L2E+Jmd0OyAmbHQ7PGEg
aHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0LWh0dHAveGVuLmdpdCIgcmVsPSJub3Jl
ZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0LWh0dHAv
eGVuLmdpdDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdC1odHRw
L3hlbi5naXQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0
cy54ZW4ub3JnL2dpdC1odHRwL3hlbi5naXQ8L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5i
aXRzLnhlbi5vcmcvZ2l0LWh0dHAveGVuLmdpdCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9i
bGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0LWh0dHAveGVuLmdpdDwvYT4gJmx0Ozxh
IGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdC1odHRwL3hlbi5naXQiIHJlbD0ibm9y
ZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdC1odHRw
L3hlbi5naXQ8L2E+Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0
LWh0dHAveGVuLmdpdCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94
ZW5iaXRzLnhlbi5vcmcvZ2l0LWh0dHAveGVuLmdpdDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8v
eGVuYml0cy54ZW4ub3JnL2dpdC1odHRwL3hlbi5naXQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0
PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdC1odHRwL3hlbi5naXQ8L2E+Jmd0
OyZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBDb3VsZCB5b3UgdGVsbCB3aG90IGJyYW5jaCBJIHNob3VsZCB1c2UgPzxicj4NCiZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoENhY2hlIGNvbG9yaW5n
IGZlYXR1cmUgaXMgbm90IHBhcnQgb2YgdGhlIHVwc3RyZWFtIHRyZWUgYW5kIGl0IGlzIHN0aWxs
IHVuZGVyIHJldmlldy48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqBZb3UgY2FuIG9ubHkgZmluZCBpdCBpbnRlZ3JhdGVkIGluIHRoZSBYaWxpbnggWGVu
IHRyZWUuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoH5NaWNoYWw8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgUmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE9sZWc8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDQv9GCLCAyOCDQsNC/0YAuIDIwMjPi
gK/Qsy4g0LIgMDA6NTEsIFN0ZWZhbm8gU3RhYmVsbGluaSAmbHQ7PGEgaHJlZj0ibWFpbHRvOnNz
dGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwu
b3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3Jn
IiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5r
Ij5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpz
c3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZzwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBr
ZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJf
YmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxp
bmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlA
a2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0
OyZndDsmZ3Q7Jmd0Ozo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgSSBhbSBmYW1pbGlhciB3aXRoIHRoZSB6Y3UxMDIgYnV0IEkgZG9u
JiMzOTt0IGtub3cgaG93IHlvdSBjb3VsZCBwb3NzaWJseTxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoGdlbmVyYXRlIGEgU0Vycm9y
Ljxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqBJIHN1Z2dlc3QgdG8gdHJ5IHRvIHVzZSBJbWFnZUJ1aWxkZXIgWzFdIHRvIGdlbmVyYXRl
IHRoZSBib290PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgY29uZmlndXJhdGlvbiBhcyBhIHRlc3QgYmVjYXVzZSB0aGF0IGlzIGtu
b3duIHRvIHdvcmsgd2VsbCBmb3IgemN1MTAyLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBbMV0gPGEgaHJlZj0iaHR0cHM6Ly9naXRs
YWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlciIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9
Il9ibGFuayI+aHR0cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlcjwvYT4g
Jmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXIi
IHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4t
cHJvamVjdC9pbWFnZWJ1aWxkZXI8L2E+Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRsYWIu
Y29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlciIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9i
bGFuayI+aHR0cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlcjwvYT4gJmx0
OzxhIGhyZWY9Imh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXIiIHJl
bD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJv
amVjdC9pbWFnZWJ1aWxkZXI8L2E+Jmd0OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0bGFi
LmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXIiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJf
YmxhbmsiPmh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXI8L2E+ICZs
dDs8YSBocmVmPSJodHRwczovL2dpdGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVyIiBy
ZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGxhYi5jb20veGVuLXBy
b2plY3QvaW1hZ2VidWlsZGVyPC9hPiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0bGFiLmNv
bS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXIiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxh
bmsiPmh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXI8L2E+ICZsdDs8
YSBocmVmPSJodHRwczovL2dpdGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVyIiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGxhYi5jb20veGVuLXByb2pl
Y3QvaW1hZ2VidWlsZGVyPC9hPiZndDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgT24gVGh1LCAyNyBBcHIgMjAyMywgT2xl
ZyBOaWtpdGVua28gd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBIZWxsbyBTdGVmYW5vLDxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7IFRoYW5rcyBmb3IgY2xhcmlmaWNhdGlvbi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IFdlIG5pZ2h0ZXIgdXNl
IEltYWdlQnVpbGRlciBub3IgdWJvb3QgYm9vdCBzY3JpcHQuPGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBBIG1vZGVsIGlz
IHpjdTEwMiBjb21wYXRpYmxlLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
OyBPLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7INCy0YIsIDI1INCw0L/RgC4gMjAyM+KAr9CzLiDQsiAy
MToyMSwgU3RlZmFubyBTdGFiZWxsaW5pICZsdDs8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlA
a2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0i
X2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVm
PSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZn
dDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmci
IHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxh
IGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0
YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNz
dGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwu
b3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3Jn
IiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyZndDsm
Z3Q7Ojxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoFRoaXMgaXMgaW50ZXJlc3RpbmcuIEFyZSB5b3UgdXNp
bmcgWGlsaW54IGhhcmR3YXJlIGJ5IGFueSBjaGFuY2U/IElmIHNvLDxicj4NCiZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoHdoaWNoIGJvYXJkPzxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBBcmUgeW91IHVz
aW5nIEltYWdlQnVpbGRlciB0byBnZW5lcmF0ZSB5b3VyIGJvb3Quc2NyIGJvb3Qgc2NyaXB0PyBJ
ZiBzbyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBjb3VsZCB5b3UgcGxlYXNlIHBvc3QgeW91ciBJbWFn
ZUJ1aWxkZXIgY29uZmlnIGZpbGU/IElmIG5vdCwgY2FuIHlvdTxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oHBvc3QgdGhlIHNvdXJjZSBvZiB5b3VyIHVib290IGJvb3Qgc2NyaXB0Pzxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqBTRXJyb3JzIGFyZSBzdXBwb3NlZCB0byBiZSByZWxhdGVkIHRvIGEg
aGFyZHdhcmUgZmFpbHVyZSBvZiBzb21lIGtpbmQuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgWW91IGFy
ZSBub3Qgc3VwcG9zZWQgdG8gYmUgYWJsZSB0byB0cmlnZ2VyIGFuIFNFcnJvciBlYXNpbHkgYnk8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmcXVvdDttaXN0YWtlJnF1b3Q7LiBJIGhhdmUgbm90IHNlZW4g
U0Vycm9ycyBkdWUgdG8gd3JvbmcgY2FjaGUgY29sb3Jpbmc8YnI+DQomZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBj
b25maWd1cmF0aW9ucyBvbiBhbnkgWGlsaW54IGJvYXJkIGJlZm9yZS48YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgVGhlIGRpZmZlcmVuY2VzIGJldHdlZW4gWGVuIHdpdGggYW5kIHdpdGhv
dXQgY2FjaGUgY29sb3JpbmcgZnJvbSBhPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgaGFyZHdhcmUgcGVy
c3BlY3RpdmUgYXJlOjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAtIFdpdGggY2FjaGUg
Y29sb3JpbmcsIHRoZSBTTU1VIGlzIGVuYWJsZWQgYW5kIGRvZXMgYWRkcmVzcyB0cmFuc2xhdGlv
bnM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqDCoCBldmVuIGZvciBkb20wLiBXaXRob3V0IGNhY2hlIGNv
bG9yaW5nIHRoZSBTTU1VIGNvdWxkIGJlIGRpc2FibGVkLCBhbmQ8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqDCoCBpZiBlbmFibGVkLCB0aGUgU01NVSBkb2VzbiYjMzk7dCBkbyBhbnkgYWRkcmVzcyB0cmFu
c2xhdGlvbnMgZm9yIERvbTAuIElmPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgwqAgdGhlcmUgaXMgYSBo
YXJkd2FyZSBmYWlsdXJlIHJlbGF0ZWQgdG8gU01NVSBhZGRyZXNzIHRyYW5zbGF0aW9uIGl0PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgwqAgY291bGQgb25seSB0cmlnZ2VyIHdpdGggY2FjaGUgY29sb3Jp
bmcuIFRoaXMgd291bGQgYmUgbXkgbm9ybWFsPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgwqAgc3VnZ2Vz
dGlvbiBmb3IgeW91IHRvIGV4cGxvcmUsIGJ1dCB0aGUgZmFpbHVyZSBoYXBwZW5zIHRvbyBlYXJs
eTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoMKgIGJlZm9yZSBhbnkgRE1BLWNhcGFibGUgZGV2aWNlIGlz
IHByb2dyYW1tZWQuIFNvIEkgZG9uJiMzOTt0IHRoaW5rIHRoaXMgY2FuPGJyPg0KJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgwqAgYmUgdGhlIGlzc3VlLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAtIFdp
dGggY2FjaGUgY29sb3JpbmcsIHRoZSBtZW1vcnkgYWxsb2NhdGlvbiBpcyB2ZXJ5IGRpZmZlcmVu
dCBzbyB5b3UmIzM5O2xsPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgwqAgZW5kIHVwIHVzaW5nIGRpZmZl
cmVudCBERFIgcmVnaW9ucyBmb3IgRG9tMC4gU28gaWYgeW91ciBERFIgaXM8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqDCoCBkZWZlY3RpdmUsIHlvdSBtaWdodCBvbmx5IHNlZSBhIGZhaWx1cmUgd2l0aCBj
YWNoZSBjb2xvcmluZyBlbmFibGVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgwqAgYmVjYXVzZSB5b3Ug
ZW5kIHVwIHVzaW5nIGRpZmZlcmVudCByZWdpb25zLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgT24gVHVlLCAyNSBBcHIgMjAyMywgT2xlZyBOaWtpdGVua28gd3JvdGU6
PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBIaSBTdGVmYW5vLDxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFRoYW5rIHlvdS48YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IElmIEkgYnVpbGQgeGVuIHdpdGhvdXQgY29sb3JzIHN1cHBvcnQgdGhlcmUgaXMg
bm90IHRoaXMgZXJyb3IuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBBbGwgdGhlIGRvbWFpbnMg
YXJlIGJvb3RlZCB3ZWxsLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSGVuc2UgaXQgY2FuIG5v
dCBiZSBhIGhhcmR3YXJlIGlzc3VlLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgVGhpcyBwYW5p
YyBhcnJpdmVkIGR1cmluZyB1bnBhY2tpbmcgdGhlIHJvb3Rmcy48YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IEhlcmUgSSBhdHRhY2hlZCB0aGUgYm9vdCBsb2cgeGVuL0RvbTAgd2l0aG91dCBjb2xv
ci48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IEEgaGlnaGxpZ2h0ZWQgc3RyaW5ncyBwcmludGVk
IGV4YWN0bHkgYWZ0ZXIgdGhlIHBsYWNlIHdoZXJlIDEtc3QgdGltZSBwYW5pYyBhcnJpdmVkLjxi
cj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IMKgWGVuIDQu
MTYuMS1wcmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIFhlbiB2ZXJzaW9uIDQuMTYu
MS1wcmUgKG5vbGUyMzkwQChub25lKSkgKGFhcmNoNjQtcG9ydGFibGUtbGludXgtZ2NjIChHQ0Mp
IDExLjMuMCkgZGVidWc9eTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoDIwMjMtMDQtMjE8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIExhdGVz
dCBDaGFuZ2VTZXQ6IFdlZCBBcHIgMTkgMTI6NTY6MTQgMjAyMyArMDMwMCBnaXQ6MzIxNjg3YjIz
MS1kaXJ0eTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgYnVpbGQtaWQ6IGMxODQ3MjU4
ZmRiMWI3OTU2MmZjNzEwZGRhNDAwMDhmOTZjMGZkZTU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IChYRU4pIFByb2Nlc3NvcjogMDAwMDAwMDA0MTBmZDAzNDogJnF1b3Q7QVJNIExpbWl0ZWQmcXVv
dDssIHZhcmlhbnQ6IDB4MCwgcGFydCAweGQwMyxyZXYgMHg0PGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSA2NC1iaXQgRXhlY3V0aW9uOjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhF
TikgwqAgUHJvY2Vzc29yIEZlYXR1cmVzOiAwMDAwMDAwMDAwMDAyMjIyIDAwMDAwMDAwMDAwMDAw
MDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIEV4Y2VwdGlvbiBMZXZlbHM6
IEVMMzo2NCszMiBFTDI6NjQrMzIgRUwxOjY0KzMyIEVMMDo2NCszMjxicj4NCiZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgKFhFTikgwqAgwqAgRXh0ZW5zaW9uczogRmxvYXRpbmdQb2ludCBBZHZhbmNlZFNJ
TUQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIERlYnVnIEZlYXR1cmVzOiAwMDAw
MDAwMDEwMzA1MTA2IDAwMDAwMDAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIMKgIEF1eGlsaWFyeSBGZWF0dXJlczogMDAwMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAw
MDAwPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCBNZW1vcnkgTW9kZWwgRmVhdHVy
ZXM6IDAwMDAwMDAwMDAwMDExMjIgMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgwqAgSVNBIEZlYXR1cmVzOiDCoDAwMDAwMDAwMDAwMTExMjAgMDAwMDAwMDAw
MDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgMzItYml0IEV4ZWN1dGlvbjo8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIFByb2Nlc3NvciBGZWF0dXJlczogMDAw
MDAwMDAwMDAwMDEzMTowMDAwMDAwMDAwMDExMDExPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAo
WEVOKSDCoCDCoCBJbnN0cnVjdGlvbiBTZXRzOiBBQXJjaDMyIEEzMiBUaHVtYiBUaHVtYi0yIEph
emVsbGU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIEV4dGVuc2lvbnM6IEdl
bmVyaWNUaW1lciBTZWN1cml0eTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgRGVi
dWcgRmVhdHVyZXM6IDAwMDAwMDAwMDMwMTAwNjY8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIMKgIEF1eGlsaWFyeSBGZWF0dXJlczogMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgKFhFTikgwqAgTWVtb3J5IE1vZGVsIEZlYXR1cmVzOiAwMDAwMDAwMDEwMjAx
MTA1IDAwMDAwMDAwNDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKg
IMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgMDAwMDAwMDAwMTI2MDAwMCAwMDAwMDAw
MDAyMTAyMjExPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCBJU0EgRmVhdHVyZXM6
IDAwMDAwMDAwMDIxMDExMTAgMDAwMDAwMDAxMzExMjExMSAwMDAwMDAwMDIxMjMyMDQyPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCDCoCDCoCDCoCDCoCDCoCDCoCDCoCAwMDAwMDAw
MDAxMTEyMTMxIDAwMDAwMDAwMDAwMTExNDIgMDAwMDAwMDAwMDAxMTEyMTxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgKFhFTikgVXNpbmcgU01DIENhbGxpbmcgQ29udmVudGlvbiB2MS4yPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBVc2luZyBQU0NJIHYxLjE8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IChYRU4pIFNNUDogQWxsb3dpbmcgNCBDUFVzPGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBHZW5lcmljIFRpbWVyIElSUTogcGh5cz0zMCBoeXA9MjYgdmlydD0yNyBGcmVx
OiAxMDAwMDAgS0h6PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBHSUN2MiBpbml0aWFs
aXphdGlvbjo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKgIMKgIGdpY19k
aXN0X2FkZHI9MDAwMDAwMDBmOTAxMDAwMDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
wqAgwqAgwqAgwqAgZ2ljX2NwdV9hZGRyPTAwMDAwMDAwZjkwMjAwMDA8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKgIMKgIGdpY19oeXBfYWRkcj0wMDAwMDAwMGY5MDQwMDAw
PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCDCoCDCoCDCoCBnaWNfdmNwdV9hZGRy
PTAwMDAwMDAwZjkwNjAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKg
IMKgIGdpY19tYWludGVuYW5jZV9pcnE9MjU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4p
IEdJQ3YyOiBBZGp1c3RpbmcgQ1BVIGludGVyZmFjZSBiYXNlIHRvIDB4ZjkwMmYwMDA8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEdJQ3YyOiAxOTIgbGluZXMsIDQgY3B1cywgc2VjdXJl
IChJSUQgMDIwMDE0M2IpLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgVXNpbmcgc2No
ZWR1bGVyOiBudWxsIFNjaGVkdWxlciAobnVsbCk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIEluaXRpYWxpemluZyBudWxsIHNjaGVkdWxlcjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgV0FSTklORzogVGhpcyBpcyBleHBlcmltZW50YWwgc29mdHdhcmUgaW4gZGV2ZWxvcG1l
bnQuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBVc2UgYXQgeW91ciBvd24gcmlzay48
YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEFsbG9jYXRlZCBjb25zb2xlIHJpbmcgb2Yg
MzIgS2lCLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQ1BVMDogR3Vlc3QgYXRvbWlj
cyB3aWxsIHRyeSAxMiB0aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9tYWluPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyAoWEVOKSBCcmluZ2luZyB1cCBDUFUxPGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBDUFUxOiBHdWVzdCBhdG9taWNzIHdpbGwgdHJ5IDEzIHRpbWVzIGJlZm9yZSBw
YXVzaW5nIHRoZSBkb21haW48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIENQVSAxIGJv
b3RlZC48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEJyaW5naW5nIHVwIENQVTI8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIENQVTI6IEd1ZXN0IGF0b21pY3Mgd2lsbCB0cnkg
MTMgdGltZXMgYmVmb3JlIHBhdXNpbmcgdGhlIGRvbWFpbjxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgKFhFTikgQ1BVIDIgYm9vdGVkLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQnJp
bmdpbmcgdXAgQ1BVMzxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQ1BVMzogR3Vlc3Qg
YXRvbWljcyB3aWxsIHRyeSAxMyB0aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9tYWluPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBCcm91Z2h0IHVwIDQgQ1BVczxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgKFhFTikgQ1BVIDMgYm9vdGVkLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgc21tdTogL2F4aS9zbW11QGZkODAwMDAwOiBwcm9iaW5nIGhhcmR3YXJlIGNvbmZpZ3Vy
YXRpb24uLi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBm
ZDgwMDAwMDogU01NVXYyIHdpdGg6PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBzbW11
OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IHN0YWdlIDIgdHJhbnNsYXRpb248YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogc3RyZWFtIG1hdGNoaW5n
IHdpdGggNDggcmVnaXN0ZXIgZ3JvdXBzLCBtYXNrIDB4N2ZmZiZsdDsyJmd0O3NtbXU6PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgL2F4aS9zbW11QGZk
ODAwMDAwOiAxNiBjb250ZXh0PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgYmFua3MgKDA8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IHN0YWdlLTIgb25seSk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIHNtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogU3RhZ2UtMjogNDgtYml0IElQQSAtJmd0OyA0
OC1iaXQgUEE8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBm
ZDgwMDAwMDogcmVnaXN0ZXJlZCAyOSBtYXN0ZXIgZGV2aWNlczxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQ8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IChYRU4pIMKgLSBEb20wIG1vZGU6IFJlbGF4ZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IChYRU4pIFAyTTogNDAtYml0IElQQSB3aXRoIDQwLWJpdCBQQSBhbmQgOC1iaXQgVk1JRDxi
> > > > > > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> > > > > > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> > > > > > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
> > > > > > (XEN) *** LOADING DOMAIN 0 ***
> > > > > > (XEN) Loading d0 kernel from boot module @ 0000000001000000
> > > > > > (XEN) Loading ramdisk from boot module @ 0000000002000000
> > > > > > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
> > > > > > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
> > > > > > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
> > > > > > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
> > > > > > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
> > > > > > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
> > > > > > (XEN) Allocating PPI 16 for event channel interrupt
> > > > > > (XEN) Extended region 0: 0x81200000->0xa0000000
> > > > > > (XEN) Extended region 1: 0xb1200000->0xc0000000
> > > > > > (XEN) Extended region 2: 0xc8000000->0xe0000000
> > > > > > (XEN) Extended region 3: 0xf0000000->0xf9000000
> > > > > > (XEN) Extended region 4: 0x100000000->0x600000000
> > > > > > (XEN) Extended region 5: 0x880000000->0x8000000000
> > > > > > (XEN) Extended region 6: 0x8001000000->0x10000000000
> > > > > > (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
> > > > > > (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
> > > > > > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
> > > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > > > (XEN) Std. Loglevel: All
> > > > > > (XEN) Guest Loglevel: All
> > > > > > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> > > > > > (XEN) null.c:353: 0 <-- d0v0
> > > > > > (XEN) Freed 356kB init memory.
> > > > > > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
> > > > > > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
> > > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> > > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> > > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> > > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> > > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> > > > > > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > > > > > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> > > > > > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> > > > > > [    0.000000] Machine model: D14 Viper Board - White Unit
> > > > > > [    0.000000] Xen 4.16 support found
> > > > > > [    0.000000] Zone ranges:
> > > > > > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
> > > > > > [    0.000000]   DMA32    empty
> > > > > > [    0.000000]   Normal   empty
> > > > > > [    0.000000] Movable zone start for each node
> > > > > > [    0.000000] Early memory node ranges
> > > > > > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
> > > > > > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
> > > > > > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
> > > > > > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
> > > > > > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
> > > > > > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
> > > > > > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
> > > > > > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
> > > > > > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
> > > > > > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
> > > > > > [    0.000000] psci: probing for conduit method from DT.
> > > > > > [    0.000000] psci: PSCIv1.1 detected in firmware.
> > > > > > [    0.000000] psci: Using standard PSCI v0.2 function IDs
> > > > > > [    0.000000] psci: Trusted OS migration not required
> > > > > > [    0.000000] psci: SMC Calling Convention v1.1
> > > > > > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
> > > > > > [    0.000000] Detected VIPT I-cache on CPU0
> > > > > > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
> > > > > > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
> > > > > > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
> > > > > > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> > > > > > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
> > > > > > [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
> > > > > > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> > > > > > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
> > > > > > [    0.000000] mem auto-init: clearing system memory may take some time...
> > > > > > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
> > > > > > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> > > > > > [    0.000000] rcu: Hierarchical RCU implementation.
> > > > > > [    0.000000] rcu: RCU event tracing is enabled.
> > > > > > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> > > > > > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
> > > > > > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
> > > > > > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> > > > > > [    0.000000] Root IRQ handler: gic_handle_irq
> > > > > > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
> > > > > > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
> > > > > > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
> > > > > > [    0.000258] Console: colour dummy device 80x25
> > > > > > [    0.310231] printk: console [hvc0] enabled
> > > > > > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> > > > > > [    0.324851] pid_max: default: 32768 minimum: 301
> > > > > > [    0.329706] LSM: Security Framework initializing
> > > > > > [    0.334204] Yama: becoming mindful.
> > > > > > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> > > > > > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> > > > > > [    0.354743] xen:grant_table: Grant tables using version 1 layout
> > > > > > [    0.359132] Grant table initialized
> > > > > > [    0.362664] xen:events: Using FIFO-based ABI
> > > > > > [    0.366993] Xen: initializing cpu0
> > > > > > [    0.370515] rcu: Hierarchical SRCU implementation.
> > > > > > [    0.375930] smp: Bringing up secondary CPUs ...
> > > > > > (XEN) null.c:353: 1 <-- d0v1
> > > > > > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> > > > > > [    0.382549] Detected VIPT I-cache on CPU1
> > > > > > [    0.388712] Xen: initializing cpu1
> > > > > > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
> > > > > > [    0.388829] smp: Brought up 1 node, 2 CPUs
> > > > > > [    0.406941] SMP: Total of 2 processors activated.
> > > > > > [    0.411698] CPU features: detected: 32-bit EL0 Support
> > > > > > [    0.416888] CPU features: detected: CRC32 instructions
> > > > > > [    0.422121] CPU: All CPU(s) started at EL1
> > > > > > [    0.426248] alternatives: patching kernel code
> > > > > > [    0.431424] devtmpfs: initialized
> > > > > > [    0.441454] KASLR enabled
> > > > > > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
> > > > > > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
> > > > > > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
> > > > > > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
> > > > > > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
> > > > > > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> > > > > > [    0.519478] audit: initializing netlink subsys (disabled)
> > > > > > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> > > > > > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
> > > > > > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> > > > > > [    0.545608] ASID allocator initialised with 32768 entries
> > > > > > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> > > > > > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
> > > > > > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
> > > > > > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
> > > > > > [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
> > > > > > [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
> > > > > > [    0.636520] DRBG: Continuing without Jitter RNG
> > > > > > [    0.737187] raid6: neonx8   gen()  2143 MB/s
> > > > > > [    0.805294] raid6: neonx8   xor()  1589 MB/s
> > > > > > [    0.873406] raid6: neonx4   gen()  2177 MB/s
> > > > > > [    0.941499] raid6: neonx4   xor()  1556 MB/s
> > > > > > [    1.009612] raid6: neonx2   gen()  2072 MB/s
> > > > > > [    1.077715] raid6: neonx2   xor()  1430 MB/s
> > > > > > [    1.145834] raid6: neonx1   gen()  1769 MB/s
> > > > > > [    1.213935] raid6: neonx1   xor()  1214 MB/s
> > > > > > [    1.282046] raid6: int64x8  gen()  1366 MB/s
> > > > > > [    1.350132] raid6: int64x8  xor()   773 MB/s
> > > > > > [    1.418259] raid6: int64x4  gen()  1602 MB/s
> > > > > > [    1.486349] raid6: int64x4  xor()   851 MB/s
> > > > > > [    1.554464] raid6: int64x2  gen()  1396 MB/s
> > > > > > [    1.622561] raid6: int64x2  xor()   744 MB/s
> > > > > > [    1.690687] raid6: int64x1  gen()  1033 MB/s
> > > > > > [    1.758770] raid6: int64x1  xor()   517 MB/s
> > > > > > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> > > > > > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> > > > > > [    1.767957] raid6: using neon recovery algorithm
> > > > > > [    1.772824] xen:balloon: Initialising balloon driver
> > > > > > [    1.778021] iommu: Default domain type: Translated
> > > > > > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
> > > > > > [    1.789149] SCSI subsystem initialized
> > > > > > [    1.792820] usbcore: registered new interface driver usbfs
> > > > > > [    1.798254] usbcore: registered new interface driver hub
> > > > > > [    1.803626] usbcore: registered new device driver usb
> > > > > > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> > > > > > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> > > > > > [    1.822903] PTP clock support registered
> > > > > > [    1.826893] EDAC MC: Ver: 3.0.0
> > > > > > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> > > > > > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> > > > > > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> > > > > > [    1.855907] FPGA manager framework
> > > > > > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> > > > > > [    1.871712] NET: Registered PF_INET protocol family
> > > > > > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> > > > > > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> > > > > > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> > > > > > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> > > > > > [    1.902900] TCP bind hash ta
YmxlIGVudHJpZXM6IDE2Mzg0IChvcmRlcjogNiwgMjYyMTQ0IGJ5dGVzLCBsaW5lYXIpPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS45MTAzNTBdIFRDUDogSGFzaCB0YWJsZXMgY29u
ZmlndXJlZCAoZXN0YWJsaXNoZWQgMTYzODQgYmluZCAxNjM4NCk8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAxLjkxNjc3OF0gVURQIGhhc2ggdGFibGUgZW50cmllczogMTAyNCAob3Jk
ZXI6IDMsIDMyNzY4IGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMS45MjM1MDldIFVEUC1MaXRlIGhhc2ggdGFibGUgZW50cmllczogMTAyNCAob3JkZXI6IDMs
IDMyNzY4IGJ5dGVzLCBsaW5lYXIpPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMS45
MzA3NTldIE5FVDogUmVnaXN0ZXJlZCBQRl9VTklYL1BGX0xPQ0FMIHByb3RvY29sIGZhbWlseTxi
cj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTM2ODM0XSBSUEM6IFJlZ2lzdGVyZWQg
bmFtZWQgVU5JWCBzb2NrZXQgdHJhbnNwb3J0IG1vZHVsZS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAxLjk0MjM0Ml0gUlBDOiBSZWdpc3RlcmVkIHVkcCB0cmFuc3BvcnQgbW9kdWxl
Ljxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTQ3MDg4XSBSUEM6IFJlZ2lzdGVy
ZWQgdGNwIHRyYW5zcG9ydCBtb2R1bGUuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MS45NTE4NDNdIFJQQzogUmVnaXN0ZXJlZCB0Y3AgTkZTdjQuMSBiYWNrY2hhbm5lbCB0cmFuc3Bv
cnQgbW9kdWxlLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTU4MzM0XSBQQ0k6
IENMUyAwIGJ5dGVzLCBkZWZhdWx0IDY0PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
MS45NjI3MDldIFRyeWluZyB0byB1bnBhY2sgcm9vdGZzIGltYWdlIGFzIGluaXRyYW1mcy4uLjxi
cj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDEuOTc3MDkwXSB3b3JraW5nc2V0OiB0aW1l
c3RhbXBfYml0cz02MiBtYXhfb3JkZXI9MTkgYnVja2V0X29yZGVyPTA8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAxLjk4Mjg2M10gSW5zdGFsbGluZyBrbmZzZCAoY29weXJpZ2h0IChD
KSAxOTk2IDxhIGhyZWY9Im1haWx0bzpva2lyQG1vbmFkLnN3Yi5kZSIgdGFyZ2V0PSJfYmxhbmsi
Pm9raXJAbW9uYWQuc3diLmRlPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpva2lyQG1v
bmFkLnN3Yi5kZSIgdGFyZ2V0PSJfYmxhbmsiPm9raXJAbW9uYWQuc3diLmRlPC9hPiZndDsgJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2tpckBtb25hZC5zd2IuZGUiIHRhcmdldD0iX2JsYW5r
Ij5va2lyQG1vbmFkLnN3Yi5kZTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2tpckBt
b25hZC5zd2IuZGUiIHRhcmdldD0iX2JsYW5rIj5va2lyQG1vbmFkLnN3Yi5kZTwvYT4mZ3Q7Jmd0
OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpva2lyQG1vbmFkLnN3Yi5kZSIgdGFyZ2V0PSJf
YmxhbmsiPm9raXJAbW9uYWQuc3diLmRlPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpv
a2lyQG1vbmFkLnN3Yi5kZSIgdGFyZ2V0PSJfYmxhbmsiPm9raXJAbW9uYWQuc3diLmRlPC9hPiZn
dDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2tpckBtb25hZC5zd2IuZGUiIHRhcmdldD0i
X2JsYW5rIj5va2lyQG1vbmFkLnN3Yi5kZTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86
b2tpckBtb25hZC5zd2IuZGUiIHRhcmdldD0iX2JsYW5rIj5va2lyQG1vbmFkLnN3Yi5kZTwvYT4m
Z3Q7Jmd0OyZndDspLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDIxMDQ1XSBO
RVQ6IFJlZ2lzdGVyZWQgUEZfQUxHIHByb3RvY29sIGZhbWlseTxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDIuMDIxMTIyXSB4b3I6IG1lYXN1cmluZyBzb2Z0d2FyZSBjaGVja3N1bSBz
cGVlZDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDI5MzQ3XSDCoCDCoDhyZWdz
IMKgIMKgIMKgIMKgIMKgIDogwqAyMzY2IE1CL3NlYzxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDIuMDMzMDgxXSDCoCDCoDMycmVncyDCoCDCoCDCoCDCoCDCoDogwqAyODAyIE1CL3Nl
Yzxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDM4MjIzXSDCoCDCoGFybTY0X25l
b24gwqAgwqAgwqA6IMKgMjMyMCBNQi9zZWM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAyLjAzODM4NV0geG9yOiB1c2luZyBmdW5jdGlvbjogMzJyZWdzICgyODAyIE1CL3NlYyk8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA0MzYxNF0gQmxvY2sgbGF5ZXIgU0NTSSBn
ZW5lcmljIChic2cpIGRyaXZlciB2ZXJzaW9uIDAuNCBsb2FkZWQgKG1ham9yIDI0Nyk8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA1MDk1OV0gaW8gc2NoZWR1bGVyIG1xLWRlYWRs
aW5lIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA1NTUyMV0g
aW8gc2NoZWR1bGVyIGt5YmVyIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAyLjA2ODIyN10geGVuOnhlbl9ldnRjaG46IEV2ZW50LWNoYW5uZWwgZGV2aWNlIGluc3Rh
bGxlZDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDY5MjgxXSBTZXJpYWw6IDgy
NTAvMTY1NTAgZHJpdmVyLCA0IHBvcnRzLCBJUlEgc2hhcmluZyBkaXNhYmxlZDxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMDc2MTkwXSBjYWNoZWluZm86IFVuYWJsZSB0byBkZXRl
Y3QgY2FjaGUgaGllcmFyY2h5IGZvciBDUFUgMDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDIuMDg1NTQ4XSBicmQ6IG1vZHVsZSBsb2FkZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAyLjA4OTI5MF0gbG9vcDogbW9kdWxlIGxvYWRlZDxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDIuMDg5MzQxXSBJbnZhbGlkIG1heF9xdWV1ZXMgKDQpLCB3aWxsIHVzZSBk
ZWZhdWx0IG1heDogMi48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjA5NDU2NV0g
dHVuOiBVbml2ZXJzYWwgVFVOL1RBUCBkZXZpY2UgZHJpdmVyLCAxLjY8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAyLjA5ODY1NV0geGVuX25ldGZyb250OiBJbml0aWFsaXNpbmcgWGVu
IHZpcnR1YWwgZXRoZXJuZXQgZHJpdmVyPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
Mi4xMDQxNTZdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgcnRsODE1
MDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTA5ODEzXSB1c2Jjb3JlOiByZWdp
c3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHI4MTUyPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMi4xMTUzNjddIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2
ZXIgYXNpeDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTIwNzk0XSB1c2Jjb3Jl
OiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIGF4ODgxNzlfMTc4YTxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTI2OTM0XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBp
bnRlcmZhY2UgZHJpdmVyIGNkY19ldGhlcjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDIuMTMyODE2XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIGNkY19l
ZW08YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjEzODUyN10gdXNiY29yZTogcmVn
aXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBuZXQxMDgwPGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi4xNDQyNTZdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBk
cml2ZXIgY2RjX3N1YnNldDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTUwMjA1
XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHphdXJ1czxicj4NCiZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTU1ODM3XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5l
dyBpbnRlcmZhY2UgZHJpdmVyIGNkY19uY208YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAyLjE2MTU1MF0gdXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciByODE1
M19lY208YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjE2ODI0MF0gdXNiY29yZTog
cmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBjZGNfYWNtPGJyPg0KJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMi4xNzMxMDldIGNkY19hY206IFVTQiBBYnN0cmFjdCBDb250cm9sIE1v
ZGVsIGRyaXZlciBmb3IgVVNCIG1vZGVtcyBhbmQgSVNETiBhZGFwdGVyczxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDIuMTgxMzU4XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRl
cmZhY2UgZHJpdmVyIHVhczxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMTg2NTQ3
XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVzYi1zdG9yYWdlPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4xOTI2NDNdIHVzYmNvcmU6IHJlZ2lzdGVy
ZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgZnRkaV9zaW88YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAyLjE5ODM4NF0gdXNic2VyaWFsOiBVU0IgU2VyaWFsIHN1cHBvcnQgcmVnaXN0ZXJl
ZCBmb3IgRlRESSBVU0IgU2VyaWFsIERldmljZTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDIuMjA2MTE4XSB1ZGMtY29yZTogY291bGRuJiMzOTt0IGZpbmQgYW4gYXZhaWxhYmxlIFVE
QyAtIGFkZGVkIFtnX21hc3Nfc3RvcmFnZV0gdG8gbGlzdCBvZiBwZW5kaW5nPGJyPg0KJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZHJpdmVyczxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjE1MzMyXSBpMmNfZGV2OiBpMmMgL2RldiBlbnRyaWVz
IGRyaXZlcjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjIwNDY3XSB4ZW5fd2R0
IHhlbl93ZHQ6IGluaXRpYWxpemVkICh0aW1lb3V0PTYwcywgbm93YXlvdXQ9MCk8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjIyNTkyM10gZGV2aWNlLW1hcHBlcjogdWV2ZW50OiB2
ZXJzaW9uIDEuMC4zPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4yMzA2NjhdIGRl
dmljZS1tYXBwZXI6IGlvY3RsOiA0LjQ1LjAtaW9jdGwgKDIwMjEtMDMtMjIpIGluaXRpYWxpc2Vk
OiA8YSBocmVmPSJtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPmRt
LWRldmVsQHJlZGhhdC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmRtLWRldmVs
QHJlZGhhdC5jb20iIHRhcmdldD0iX2JsYW5rIj5kbS1kZXZlbEByZWRoYXQuY29tPC9hPiZndDsg
Jmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNvbSIgdGFyZ2V0PSJf
YmxhbmsiPmRtLWRldmVsQHJlZGhhdC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
OmRtLWRldmVsQHJlZGhhdC5jb20iIHRhcmdldD0iX2JsYW5rIj5kbS1kZXZlbEByZWRoYXQuY29t
PC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNvbSIgdGFy
Z2V0PSJfYmxhbmsiPmRtLWRldmVsQHJlZGhhdC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0i
bWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20iIHRhcmdldD0iX2JsYW5rIj5kbS1kZXZlbEByZWRo
YXQuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0
LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPmRtLWRldmVsQHJlZGhhdC5jb208L2E+ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20iIHRhcmdldD0iX2JsYW5rIj5kbS1k
ZXZlbEByZWRoYXQuY29tPC9hPiZndDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDIuMjM5MzE1XSBFREFDIE1DMDogR2l2aW5nIG91dCBkZXZpY2UgdG8gbW9kdWxlIDEg
Y29udHJvbGxlciBzeW5wc19kZHJfY29udHJvbGxlcjogREVWIHN5bnBzX2VkYWM8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAoSU5URVJSVVBUKTxicj4N
CiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjQ5NDA1XSBFREFDIERFVklDRTA6IEdpdmlu
ZyBvdXQgZGV2aWNlIHRvIG1vZHVsZSB6eW5xbXAtb2NtLWVkYWMgY29udHJvbGxlciB6eW5xbXBf
b2NtOiBERVY8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBmZjk2MDAwMC5tZW1vcnktY29udHJvbGxlciAo
SU5URVJSVVBUKTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjYxNzE5XSBzZGhj
aTogU2VjdXJlIERpZ2l0YWwgSG9zdCBDb250cm9sbGVyIEludGVyZmFjZSBkcml2ZXI8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI2NzQ4N10gc2RoY2k6IENvcHlyaWdodChjKSBQ
aWVycmUgT3NzbWFuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4yNzE4OTBdIHNk
aGNpLXBsdGZtOiBTREhDSSBwbGF0Zm9ybSBhbmQgT0YgZHJpdmVyIGhlbHBlcjxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjc4MTU3XSBsZWR0cmlnLWNwdTogcmVnaXN0ZXJlZCB0
byBpbmRpY2F0ZSBhY3Rpdml0eSBvbiBDUFVzPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMi4yODM4MTZdIHp5bnFtcF9maXJtd2FyZV9wcm9iZSBQbGF0Zm9ybSBNYW5hZ2VtZW50IEFQ
SSB2MS4xPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4yODk1NTRdIHp5bnFtcF9m
aXJtd2FyZV9wcm9iZSBUcnVzdHpvbmUgdmVyc2lvbiB2MS4wPGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi4zMjc4NzVdIHNlY3VyZWZ3IHNlY3VyZWZ3OiBzZWN1cmVmdyBwcm9iZWQ8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjMyODMyNF0gYWxnOiBObyB0ZXN0IGZv
ciB4aWxpbngtenlucW1wLWFlcyAoenlucW1wLWFlcyk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAyLjMzMjU2M10genlucW1wX2FlcyBmaXJtd2FyZTp6eW5xbXAtZmlybXdhcmU6enlu
cW1wLWFlczogQUVTIFN1Y2Nlc3NmdWxseSBSZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi4zNDExODNdIGFsZzogTm8gdGVzdCBmb3IgeGlsaW54LXp5bnFtcC1yc2Eg
KHp5bnFtcC1yc2EpPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zNDc2NjddIHJl
bW90ZXByb2MgcmVtb3RlcHJvYzA6IGZmOWEwMDAwLnJmNXNzOnI1Zl8wIGlzIGF2YWlsYWJsZTxi
cj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzUzMDAzXSByZW1vdGVwcm9jIHJlbW90
ZXByb2MxOiBmZjlhMDAwMC5yZjVzczpyNWZfMSBpcyBhdmFpbGFibGU8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAyLjM2MjYwNV0gZnBnYV9tYW5hZ2VyIGZwZ2EwOiBYaWxpbnggWnlu
cU1QIEZQR0EgTWFuYWdlciByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMi4zNjY1NDBdIHZpcGVyLXhlbi1wcm94eSB2aXBlci14ZW4tcHJveHk6IFZpcGVyIFhlbiBQ
cm94eSByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zNzI1MjVd
IHZpcGVyLXZkcHAgYTQwMDAwMDAudmRwcDogRGV2aWNlIFRyZWUgUHJvYmluZzxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzc3Nzc4XSB2aXBlci12ZHBwIGE0MDAwMDAwLnZkcHA6
IFZEUFAgVmVyc2lvbjogMS4zLjkuMCBJbmZvOiAxLjUxMi4xNS4wIEtleUxlbjogMzI8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjM4NjQzMl0gdmlwZXItdmRwcCBhNDAwMDAwMC52
ZHBwOiBVbmFibGUgdG8gcmVnaXN0ZXIgdGFtcGVyIGhhbmRsZXIuIFJldHJ5aW5nLi4uPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4zOTQwOTRdIHZpcGVyLXZkcHAtbmV0IGE1MDAw
MDAwLnZkcHBfbmV0OiBEZXZpY2UgVHJlZSBQcm9iaW5nPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIMKgMi4zOTk4NTRdIHZpcGVyLXZkcHAtbmV0IGE1MDAwMDAwLnZkcHBfbmV0OiBEZXZp
Y2UgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDA1OTMxXSB2
aXBlci12ZHBwLXN0YXQgYTgwMDAwMDAudmRwcF9zdGF0OiBEZXZpY2UgVHJlZSBQcm9iaW5nPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi40MTIwMzddIHZpcGVyLXZkcHAtc3RhdCBh
ODAwMDAwMC52ZHBwX3N0YXQ6IEJ1aWxkIHBhcmFtZXRlcnM6IFZUSSBDb3VudDogNTEyIEV2ZW50
IENvdW50OiAzMjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDIwODU2XSBkZWZh
dWx0IHByZXNldDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDIzNzk3XSB2aXBl
ci12ZHBwLXN0YXQgYTgwMDAwMDAudmRwcF9zdGF0OiBEZXZpY2UgcmVnaXN0ZXJlZDxicj4NCiZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDMwMDU0XSB2aXBlci12ZHBwLXJuZyBhYzAwMDAw
MC52ZHBwX3JuZzogRGV2aWNlIFRyZWUgUHJvYmluZzxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDIuNDM1OTQ4XSB2aXBlci12ZHBwLXJuZyBhYzAwMDAwMC52ZHBwX3JuZzogRGV2aWNl
IHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ0MTk3Nl0gdm1j
dSBkcml2ZXIgaW5pdDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDQ0OTIyXSBW
TUNVOiA6ICgyNDA6MCkgcmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDIuNDQ0OTU2XSBJbiBLODEgVXBkYXRlciBpbml0PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMi40NDkwMDNdIHBrdGdlbjogUGFja2V0IEdlbmVyYXRvciBmb3IgcGFja2V0IHBlcmZv
cm1hbmNlIHRlc3RpbmcuIFZlcnNpb246IDIuNzU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAyLjQ2ODgzM10gSW5pdGlhbGl6aW5nIFhGUk0gbmV0bGluayBzb2NrZXQ8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ2ODkwMl0gTkVUOiBSZWdpc3RlcmVkIFBGX1BBQ0tF
VCBwcm90b2NvbCBmYW1pbHk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ3Mjcy
OV0gQnJpZGdlIGZpcmV3YWxsaW5nIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAyLjQ3Njc4NV0gODAyMXE6IDgwMi4xUSBWTEFOIFN1cHBvcnQgdjEuODxicj4NCiZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDgxMzQxXSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2
ZXJzaW9uIDE8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ4NjM5NF0gQnRyZnMg
bG9hZGVkLCBjcmMzMmM9Y3JjMzJjLWdlbmVyaWMsIHpvbmVkPW5vLCBmc3Zlcml0eT1ubzxicj4N
CiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNTAzMTQ1XSBmZjAxMDAwMC5zZXJpYWw6IHR0
eVBTMSBhdCBNTUlPIDB4ZmYwMTAwMDAgKGlycSA9IDM2LCBiYXNlX2JhdWQgPSA2MjUwMDAwKSBp
cyBhIHh1YXJ0cHM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjUwNzEwM10gb2Yt
ZnBnYS1yZWdpb24gZnBnYS1mdWxsOiBGUEdBIFJlZ2lvbiBwcm9iZWQ8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAyLjUxMjk4Nl0geGlsaW54LXp5bnFtcC1kbWEgZmQ1MDAwMDAuZG1h
LWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2JlIHN1Y2Nlc3M8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjUyMDI2N10geGlsaW54LXp5bnFtcC1kbWEgZmQ1MTAwMDAu
ZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2JlIHN1Y2Nlc3M8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjUyODIzOV0geGlsaW54LXp5bnFtcC1kbWEgZmQ1MjAw
MDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2JlIHN1Y2Nlc3M8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjUzNjE1Ml0geGlsaW54LXp5bnFtcC1kbWEgZmQ1
MzAwMDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2JlIHN1Y2Nlc3M8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjU0NDE1M10geGlsaW54LXp5bnFtcC1kbWEg
ZmQ1NDAwMDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2JlIHN1Y2Nlc3M8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjU1MjEyN10geGlsaW54LXp5bnFtcC1k
bWEgZmQ1NTAwMDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2JlIHN1Y2Nl
c3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjU2MDE3OF0geGlsaW54LXp5bnFt
cC1kbWEgZmZhODAwMDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2JlIHN1
Y2Nlc3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjU2Nzk4N10geGlsaW54LXp5
bnFtcC1kbWEgZmZhOTAwMDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2Jl
IHN1Y2Nlc3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjU3NjAxOF0geGlsaW54
LXp5bnFtcC1kbWEgZmZhYTAwMDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFBy
b2JlIHN1Y2Nlc3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjU4Mzg4OV0geGls
aW54LXp5bnFtcC1kbWEgZmZhYjAwMDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVy
IFByb2JlIHN1Y2Nlc3M8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjk0NjM3OV0g
c3BpLW5vciBzcGkwLjA6IG10MjVxdTUxMmEgKDEzMTA3MiBLYnl0ZXMpPGJyPg0KJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMi45NDY0NjddIDIgZml4ZWQtcGFydGl0aW9ucyBwYXJ0aXRpb25z
IGZvdW5kIG9uIE1URCBkZXZpY2Ugc3BpMC4wPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMi45NTIzOTNdIENyZWF0aW5nIDIgTVREIHBhcnRpdGlvbnMgb24gJnF1b3Q7c3BpMC4wJnF1
b3Q7Ojxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuOTU3MjMxXSAweDAwMDAwNDAw
MDAwMC0weDAwMDAwODAwMDAwMCA6ICZxdW90O2JhbmsgQSZxdW90Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDIuOTYzMzMyXSAweDAwMDAwMDAwMDAwMC0weDAwMDAwNDAwMDAwMCA6
ICZxdW90O2JhbmsgQiZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuOTY4
Njk0XSBtYWNiIGZmMGIwMDAwLmV0aGVybmV0OiBOb3QgZW5hYmxpbmcgcGFydGlhbCBzdG9yZSBh
bmQgZm9yd2FyZDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuOTc1MzMzXSBtYWNi
IGZmMGIwMDAwLmV0aGVybmV0IGV0aDA6IENhZGVuY2UgR0VNIHJldiAweDUwMDcwMTA2IGF0IDB4
ZmYwYjAwMDAgaXJxIDI1PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgKDE4OjQxOmZlOjBmOmZmOjAyKTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDIuOTg0NDcyXSBtYWNiIGZmMGMwMDAwLmV0aGVybmV0OiBOb3QgZW5hYmxpbmcgcGFydGlh
bCBzdG9yZSBhbmQgZm9yd2FyZDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuOTky
MTQ0XSBtYWNiIGZmMGMwMDAwLmV0aGVybmV0IGV0aDE6IENhZGVuY2UgR0VNIHJldiAweDUwMDcw
MTA2IGF0IDB4ZmYwYzAwMDAgaXJxIDI2PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgKDE4OjQxOmZlOjBmOmZmOjAzKTxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDMuMDAxMDQzXSB2aXBlcl9lbmV0IHZpcGVyX2VuZXQ6IFZpcGVyIHBvd2Vy
IEdQSU9zIGluaXRpYWxpc2VkPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4wMDcz
MTNdIHZpcGVyX2VuZXQgdmlwZXJfZW5ldCB2bmV0MCAodW5pbml0aWFsaXplZCk6IFZhbGlkYXRl
IGludGVyZmFjZSBRU0dNSUk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjAxNDkx
NF0gdmlwZXJfZW5ldCB2aXBlcl9lbmV0IHZuZXQxICh1bmluaXRpYWxpemVkKTogVmFsaWRhdGUg
aW50ZXJmYWNlIFFTR01JSTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMDIyMTM4
XSB2aXBlcl9lbmV0IHZpcGVyX2VuZXQgdm5ldDEgKHVuaW5pdGlhbGl6ZWQpOiBWYWxpZGF0ZSBp
bnRlcmZhY2UgdHlwZSAxODxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMDMwMjc0
XSB2aXBlcl9lbmV0IHZpcGVyX2VuZXQgdm5ldDIgKHVuaW5pdGlhbGl6ZWQpOiBWYWxpZGF0ZSBp
bnRlcmZhY2UgUVNHTUlJPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4wMzc3ODVd
IHZpcGVyX2VuZXQgdmlwZXJfZW5ldCB2bmV0MyAodW5pbml0aWFsaXplZCk6IFZhbGlkYXRlIGlu
dGVyZmFjZSBRU0dNSUk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjA0NTMwMV0g
dmlwZXJfZW5ldCB2aXBlcl9lbmV0OiBWaXBlciBlbmV0IHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjA1MDk1OF0geGlsaW54LWF4aXBtb24gZmZhMDAwMDAucGVy
Zi1tb25pdG9yOiBQcm9iZWQgWGlsaW54IEFQTTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDMuMDU3MTM1XSB4aWxpbngtYXhpcG1vbiBmZDBiMDAwMC5wZXJmLW1vbml0b3I6IFByb2Jl
ZCBYaWxpbnggQVBNPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4wNjM1MzhdIHhp
bGlueC1heGlwbW9uIGZkNDkwMDAwLnBlcmYtbW9uaXRvcjogUHJvYmVkIFhpbGlueCBBUE08YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjA2OTkyMF0geGlsaW54LWF4aXBtb24gZmZh
MTAwMDAucGVyZi1tb25pdG9yOiBQcm9iZWQgWGlsaW54IEFQTTxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDMuMDk3NzI5XSBzaTcweHg6IHByb2JlIG9mIDItMDA0MCBmYWlsZWQgd2l0
aCBlcnJvciAtNTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMDk4MDQyXSBjZG5z
LXdkdCBmZDRkMDAwMC53YXRjaGRvZzogWGlsaW54IFdhdGNoZG9nIFRpbWVyIHdpdGggdGltZW91
dCA2MHM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjEwNTExMV0gY2Rucy13ZHQg
ZmYxNTAwMDAud2F0Y2hkb2c6IFhpbGlueCBXYXRjaGRvZyBUaW1lciB3aXRoIHRpbWVvdXQgMTBz
PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
> [    3.112457] viper-tamper viper-tamper: Device registered
> [    3.117593] active_bank active_bank: boot bank: 1
> [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
> [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
> [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
> [    3.147438] viper-vdpp a4000000.vdpp: Device registered
> [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
> [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
> [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
> [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
> [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
> [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
> [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
> [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
> [    3.215694] lpc55_l2 spi1.0: rx error: -110
> [    3.284438] mmc0: new HS200 MMC card at address 0001
> [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
> [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
> [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> [    3.633253] lpc55_l2 spi1.0: rx error: -110
> [    3.639104] k81_bootloader 0-0010: probe
> [    3.641628] VMCU: : (235:0) registered
> [    3.641635] k81_bootloader 0-0010: probe completed
> [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> [    3.737549] sfp_register_socket: got sfp_bus
> [    3.740709] sfp_register_socket: register sfp_bus
> [    3.745459] sfp_register_bus: ops ok!
> [    3.749179] sfp_register_bus: Try to attach
> [    3.753419] sfp_register_bus: Attach succeeded
> [    3.757914] sfp_register_bus: upstream ops attach
> [    3.762677] sfp_register_bus: Bus registered
> [    3.766999] sfp_register_socket: register sfp_bus succeeded
> [    3.775870] of_cfs_init
> [    3.776000] of_cfs_init: OK
> [    3.778211] clk: Not disabling unused clocks
> [   11.278477] Freeing initrd memory: 206056K
> [   11.279406] Freeing unused kernel memory: 1536K
> [   11.314006] Checked W+X mappings: passed, no W+X pages found
> [   11.314142] Run /init as init process
> INIT: version 3.01 booting
> fsck (busybox 1.35.0)
> /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> Starting random number generator daemon.
> [   11.580662] random: crng init done
> Starting udev
> [   11.613159] udevd[142]: starting version 3.2.10
> [   11.620385] udevd[143]: starting eudev-3.2.10
> [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Mon Feb 27 08:40:53 UTC 2023
> [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> hwclock: RTC_SET_TIME: Invalid exchange
> [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> Starting mcud
> INIT: Entering runlevel: 5
> Configuring network interfaces... done.
> resetting network interface
> [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> [   12.732151] pps pps0: new PPS source ptp0
> [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> [   12.761804] pps pps1: new PPS source ptp1
> [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> Auto-negotiation: off
> Auto-negotiation: off
> [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> Starting Failsafe Secure Shell server in port 2222: sshd
> done.
> Starting rpcbind daemon...done.
>
> [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Starting State Manager Service
> Start state-manager restarter...
> (XEN) d0v1 Forwarding AES operation: 3254779951
> Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> [   17.350670] BTRFS info (device dm-0): has skinny extents
> [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> [   17.872699] BTRFS info (device dm-1): using free space tree
> [   17.872771] BTRFS info (device dm-1): has skinny extents
> [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> [   17.895695] BTRFS info (device dm-1): checking UUID tree
>
> Setting domain 0 name, domid and JSON config...
> Done setting up Dom0
> Starting xenconsoled...
> Starting QEMU as disk backend for dom0
> Starting domain watchdog daemon: xenwatchdogd startup
>
> [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> [done]
> [   18.465552] BTRFS info (device dm-2): using free space tree
> [   18.465629] BTRFS info (device dm-2): has skinny extents
> [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> [   18.486659] BTRFS info (device dm-2): checking UUID tree
> OK
> starting rsyslogd ... Log partition ready after 0 poll loops
> done
> rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>
> Please insert USB token and enter your role in login prompt.
>
> login:
>
> Regards,
> O.
>
> On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
>     Hi Oleg,
>
>     Here is the issue from your logs:
>
>         SError Interrupt on CPU0, code 0xbe000000 -- SError
>
>     SErrors are special signals to notify software of serious hardware
>     errors. Something is going very wrong. Defective hardware is one
>     possibility. Another possibility is software accessing address ranges
>     that it is not supposed to; sometimes that causes SErrors as well.
>
>     Cheers,
>
>     Stefano
>
>     On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
oCDCoCDCoCZndDsgSGVsbG8sPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgVGhhbmtzIGd1eXMuPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBJIGZvdW5kIG91dCB3aGVyZSB0
aGUgcHJvYmxlbSB3YXMuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBO
b3cgZG9tMCBib290ZWQgbW9yZS4gQnV0IEkgaGF2ZSBhIG5ldyBvbmUuPGJyPg0KJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBUaGlzIGlzIGEga2VybmVsIHBhbmljIGR1cmluZyBE
b20wIGxvYWRpbmcuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBNYXli
ZSBzb21lb25lIGlzIGFibGUgdG8gc3VnZ2VzdCBzb21ldGhpbmcgPzxicj4NCiZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBP
Ljxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjc3MTM2Ml0gc2ZwX3JlZ2lzdGVyX2J1
czogdXBzdHJlYW0gb3BzIGF0dGFjaDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDMuNzc2MTE5XSBzZnBfcmVnaXN0ZXJfYnVzOiBCdXMgcmVnaXN0ZXJlZDxi
cj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzgwNDU5XSBz
ZnBfcmVnaXN0ZXJfc29ja2V0OiByZWdpc3RlciBzZnBfYnVzIHN1Y2NlZWRlZDxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzg5Mzk5XSBvZl9jZnNfaW5p
dDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzg5NDk5
XSBvZl9jZnNfaW5pdDogT0s8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAzLjc5MTY4NV0gY2xrOiBOb3QgZGlzYWJsaW5nIHVudXNlZCBjbG9ja3M8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwMzU1XSBTRXJyb3Ig
SW50ZXJydXB0IG9uIENQVTAsIGNvZGUgMHhiZTAwMDAwMCAtLSBTRXJyb3I8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwMzgwXSBDUFU6IDAgUElEOiA5
IENvbW06IGt3b3JrZXIvdTQ6MCBOb3QgdGFpbnRlZCA1LjE1LjcyLXhpbGlueC12MjAyMi4xICMx
PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDM5M10g
V29ya3F1ZXVlOiBldmVudHNfdW5ib3VuZCBhc3luY19ydW5fZW50cnlfZm48YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDE0XSBwc3RhdGU6IDYwMDAw
MDA1IChuWkN2IGRhaWYgLVBBTiAtVUFPIC1UQ08gLURJVCAtU1NCUyBCVFlQRT0tLSk8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDIyXSBwYyA6IHNp
bXBsZV93cml0ZV9lbmQrMHhkMC8weDEzMDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCAxMS4wMTA0MzFdIGxyIDogZ2VuZXJpY19wZXJmb3JtX3dyaXRlKzB4MTE4
LzB4MWUwPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAx
MDQzOF0gc3AgOiBmZmZmZmZjMDA4MDliOTEwPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ0MV0geDI5OiBmZmZmZmZjMDA4MDliOTEwIHgyODogMDAw
MDAwMDAwMDAwMDAwMCB4Mjc6IGZmZmZmZmVmNjliYTg4YzA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDUxXSB4MjY6IDAwMDAwMDAwMDAwMDNlZWMg
eDI1OiBmZmZmZmY4MDc1MTVkYjAwIHgyNDogMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA0NTldIHgyMzogZmZmZmZmYzAw
ODA5YmE5MCB4MjI6IDAwMDAwMDAwMDJhYWMwMDAgeDIxOiBmZmZmZmY4MDczMTVhMjYwPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ3Ml0geDIwOiAw
MDAwMDAwMDAwMDAxMDAwIHgxOTogZmZmZmZmZmUwMjAwMDAwMCB4MTg6IDAwMDAwMDAwMDAwMDAw
MDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNDgx
XSB4MTc6IDAwMDAwMDAwZmZmZmZmZmYgeDE2OiAwMDAwMDAwMDAwMDA4MDAwIHgxNTogMDAwMDAw
MDAwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAx
MS4wMTA0OTBdIHgxNDogMDAwMDAwMDAwMDAwMDAwMCB4MTM6IDAwMDAwMDAwMDAwMDAwMDAgeDEy
OiAwMDAwMDAwMDAwMDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyBbIMKgIDExLjAxMDQ5OF0geDExOiAwMDAwMDAwMDAwMDAwMDAwIHgxMDogMDAwMDAwMDAwMDAw
MDAwMCB4OSA6IDAwMDAwMDAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTA3XSB4OCA6IDAwMDAwMDAwMDAwMDAwMDAgeDcgOiBmZmZm
ZmZlZjY5M2JhNjgwIHg2IDogMDAwMDAwMDAyZDg5YjcwMDxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1MTVdIHg1IDogZmZmZmZmZmUwMjAwMDAwMCB4
NCA6IGZmZmZmZjgwNzMxNWEzYzggeDMgOiAwMDAwMDAwMDAwMDAxMDAwPGJyPg0KJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDUyNF0geDIgOiAwMDAwMDAwMDAy
YWFiMDAwIHgxIDogMDAwMDAwMDAwMDAwMDAwMSB4MCA6IDAwMDAwMDAwMDAwMDAwMDU8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTM0XSBLZXJuZWwg
cGFuaWMgLSBub3Qgc3luY2luZzogQXN5bmNocm9ub3VzIFNFcnJvciBJbnRlcnJ1cHQ8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTM5XSBDUFU6IDAg
UElEOiA5IENvbW06IGt3b3JrZXIvdTQ6MCBOb3QgdGFpbnRlZCA1LjE1LjcyLXhpbGlueC12MjAy
Mi4xICMxPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAx
MDU0NV0gSGFyZHdhcmUgbmFtZTogRDE0IFZpcGVyIEJvYXJkIC0gV2hpdGUgVW5pdCAoRFQpPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDU0OF0gV29y
a3F1ZXVlOiBldmVudHNfdW5ib3VuZCBhc3luY19ydW5fZW50cnlfZm48YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTU2XSBDYWxsIHRyYWNlOjxicj4N
CiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1NThdIMKgZHVt
cF9iYWNrdHJhY2UrMHgwLzB4MWM0PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIDExLjAxMDU2N10gwqBzaG93X3N0YWNrKzB4MTgvMHgyYzxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1NzRdIMKgZHVtcF9zdGFja19s
dmwrMHg3Yy8weGEwPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IDExLjAxMDU4M10gwqBkdW1wX3N0YWNrKzB4MTgvMHgzNDxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1ODhdIMKgcGFuaWMrMHgxNGMvMHgyZjg8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTk3XSDCoHBy
aW50X3RhaW50ZWQrMHgwLzB4YjA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgMTEuMDEwNjA2XSDCoGFybTY0X3NlcnJvcl9wYW5pYysweDZjLzB4N2M8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjE0XSDCoGRvX3Nl
cnJvcisweDI4LzB4NjA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgMTEuMDEwNjIxXSDCoGVsMWhfNjRfZXJyb3JfaGFuZGxlcisweDMwLzB4NTA8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjI4XSDCoGVsMWhfNjRf
ZXJyb3IrMHg3OC8weDdjPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIDExLjAxMDYzM10gwqBzaW1wbGVfd3JpdGVfZW5kKzB4ZDAvMHgxMzA8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjM5XSDCoGdlbmVyaWNfcGVy
Zm9ybV93cml0ZSsweDExOC8weDFlMDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxMS4wMTA2NDRdIMKgX19nZW5lcmljX2ZpbGVfd3JpdGVfaXRlcisweDEzOC8w
eDFjNDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2
NTBdIMKgZ2VuZXJpY19maWxlX3dyaXRlX2l0ZXIrMHg3OC8weGQwPGJyPg0KJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDY1Nl0gwqBfX2tlcm5lbF93cml0ZSsw
eGZjLzB4MmFjPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEx
LjAxMDY2NV0gwqBrZXJuZWxfd3JpdGUrMHg4OC8weDE2MDxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2NzNdIMKgeHdyaXRlKzB4NDQvMHg5NDxicj4N
CiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2ODBdIMKgZG9f
Y29weSsweGE4LzB4MTA0PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIDExLjAxMDY4Nl0gwqB3cml0ZV9idWZmZXIrMHgzOC8weDU4PGJyPg0KJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDY5Ml0gwqBmbHVzaF9idWZmZXIrMHg0
Yy8weGJjPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAx
MDY5OF0gwqBfX2d1bnppcCsweDI4MC8weDMxMDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3MDRdIMKgZ3VuemlwKzB4MWMvMHgyODxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3MDldIMKgdW5wYWNrX3Rv
X3Jvb3RmcysweDE3MC8weDJiMDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCAxMS4wMTA3MTVdIMKgZG9fcG9wdWxhdGVfcm9vdGZzKzB4ODAvMHgxNjQ8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzIyXSDCoGFzeW5j
X3J1bl9lbnRyeV9mbisweDQ4LzB4MTY0PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDExLjAxMDcyOF0gwqBwcm9jZXNzX29uZV93b3JrKzB4MWU0LzB4M2EwPGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDczNl0gwqB3
b3JrZXJfdGhyZWFkKzB4N2MvMHg0YzA8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgMTEuMDEwNzQzXSDCoGt0aHJlYWQrMHgxMjAvMHgxMzA8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzUwXSDCoHJldF9mcm9tX2Zv
cmsrMHgxMC8weDIwPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IDExLjAxMDc1N10gU01QOiBzdG9wcGluZyBzZWNvbmRhcnkgQ1BVczxicj4NCiZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3ODRdIEtlcm5lbCBPZmZzZXQ6IDB4
MmY2MTIwMDAwMCBmcm9tIDB4ZmZmZmZmYzAwODAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3ODhdIFBIWVNfT0ZGU0VUOiAweDA8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzkwXSBDUFUgZmVh
dHVyZXM6IDB4MDAwMDA0MDEsMDAwMDA4NDI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzk1XSBNZW1vcnkgTGltaXQ6IG5vbmU8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMjc3NTA5XSAtLS1bIGVuZCBLZXJu
ZWwgcGFuaWMgLSBub3Qgc3luY2luZzogQXN5bmNocm9ub3VzIFNFcnJvciBJbnRlcnJ1cHQgXS0t
LTxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7INC/0YIsIDIxINCw0L/RgC4gMjAyM+KAr9CzLiDQsiAx
NTo1MiwgTWljaGFsIE9yemVsICZsdDs8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5j
b20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNo
YWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hh
bC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9
Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBo
cmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwu
b3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVs
QGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9i
bGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5j
b208L2E+Jmd0OyZndDsmZ3Q7Jmd0Ozo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqBIaSBPbGVnLDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqBPbiAyMS8wNC8yMDIzIDE0OjQ5LCBPbGVnIE5pa2l0ZW5rbyB3cm90ZTo8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqA8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSGVsbG8gTWlj
aGFsLDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IEkgd2FzIG5vdCBhYmxlIHRvIGVuYWJsZSBlYXJseXByaW50ayBpbiB0aGUgeGVuIGZvciBub3cu
PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBJ
IGRlY2lkZWQgdG8gY2hvb3NlIGFub3RoZXIgd2F5Ljxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgVGhpcyBpcyBhIHhlbiYjMzk7cyBjb21tYW5k
IGxpbmUgdGhhdCBJIGZvdW5kIG91dCBjb21wbGV0ZWx5Ljxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pICQkJCQgY29uc29sZT1kdHVhcnQg
ZHR1YXJ0PXNlcmlhbDAgZG9tMF9tZW09MTYwME0gZG9tMF9tYXhfdmNwdXM9MiBkb20wX3ZjcHVz
X3Bpbjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJv
b3RzY3J1Yj0wPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdndmaT1uYXRpdmU8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBzY2hlZD1udWxsPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdGltZXJfc2xvcD0wPGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgWWVzLCBhZGRpbmcgYSBwcmludGsoKSBpbiBY
ZW4gd2FzIGFsc28gYSBnb29kIGlkZWEuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFNvIHlvdSBhcmUgYWJzb2x1dGVseSByaWdodCBhYm91dCBhIGNvbW1hbmQgbGluZS48YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE5vdyBJ
IGFtIGdvaW5nIHRvIGZpbmQgb3V0IHdoeSB4ZW4gZGlkIG5vdCBoYXZlIHRoZSBjb3JyZWN0IHBh
cmFtZXRlcnMgZnJvbSB0aGUgZGV2aWNlPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgdHJlZS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqBNYXliZSB5b3Ugd2lsbCBmaW5kIHRoaXMgZG9jdW1lbnQgaGVscGZ1
bDo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqA8YSBo
cmVmPSJodHRwczovL2dpdGh1Yi5jb20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYv
ZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290aW5nLnR4dCIgcmVsPSJub3JlZmVycmVyIiB0
YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3Jl
YmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQ8L2E+ICZsdDs8
YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQu
MTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290aW5nLnR4dCIgcmVsPSJub3JlZmVycmVy
IiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54
X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQ8L2E+Jmd0
OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3Jl
YmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQiIHJlbD0ibm9y
ZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Js
b2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0
PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54
X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQiIHJlbD0i
bm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVu
L2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3Rpbmcu
dHh0PC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2Iv
eGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0IiBy
ZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20vWGlsaW54
L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290
aW5nLnR4dDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Js
b2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0
IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20vWGls
aW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9i
b290aW5nLnR4dDwvYT4mZ3Q7ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20vWGlsaW54
L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290
aW5nLnR4dCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIu
Y29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNl
LXRyZWUvYm9vdGluZy50eHQ8L2E+ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20vWGls
aW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9i
b290aW5nLnR4dCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRo
dWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2
aWNlLXRyZWUvYm9vdGluZy50eHQ8L2E+Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoH5NaWNoYWw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
UmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IE9sZWc8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyDQv9GCLCAyMSDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMTE6MTYsIE1pY2hhbCBPcnpl
bCAmbHQ7PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFu
ayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1p
Y2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208
L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIg
dGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhy
ZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5v
cnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVs
QGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5r
Ij5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5j
b208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0
YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsmZ3Q7ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFu
ayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1p
Y2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208
L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIg
dGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhy
ZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5v
cnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hh
bC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9
Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnpl
bEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1k
LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7Jmd0
OyZndDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgT24gMjEvMDQvMjAyMyAxMDowNCwgT2xlZyBOaWtpdGVua28gd3JvdGU6PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IEhlbGxvIE1p
Y2hhbCw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBZZXMsIEkgdXNlIHlvY3RvLjxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7IFllc3RlcmRheSBhbGwgZGF5IGxvbmcgSSB0cmllZCB0byBmb2xsb3cgeW91ciBzdWdn
…estions.
> > I faced a problem.
> > Manually in the xen config build file I pasted the strings:
> In the .config file, or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
> You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>
> >   CONFIG_EARLY_PRINTK
> >   CONFIG_EARLY_PRINTK_ZYNQMP
> >   CONFIG_EARLY_UART_CHOICE_CADENCE
> I hope you added =y to them.
>
> Anyway, you have at least the following solutions:
> 1) Run "bitbake xen -c menuconfig" to properly set early printk
> 2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
> 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>    CONFIG_EARLY_PRINTK_ZYNQMP=y
>
> ~Michal
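Michal's options boil down to ordinary Kconfig handling. As a sketch (the defconfig path is the one quoted above; the loop and variable names are illustrative, and whether the build actually picks this file up depends on your Yocto setup):

```shell
# Option 1 (interactive, from the Yocto build directory):
#   bitbake xen -c menuconfig
#
# Option 3: append the wanted options to the defconfig Xen builds from.
defconfig="xen/arch/arm/configs/arm64_defconfig"
for opt in CONFIG_EARLY_PRINTK CONFIG_EARLY_PRINTK_ZYNQMP \
           CONFIG_EARLY_UART_CHOICE_CADENCE CONFIG_COLORING; do
    # append "<option>=y" only if it is not already set
    grep -qx "${opt}=y" "$defconfig" || echo "${opt}=y" >> "$defconfig"
done
# If you edited .config by hand instead, regenerate it afterwards:
#   make olddefconfig
```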
> > Host hangs in build time.
> > Maybe I did not set something in the config build file?
> >
> > Regards,
> > Oleg

On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> Thanks Michal,
>
> You gave me an idea.
> I am going to try it today.
>
> Regards,
> O.
>
> On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > Thanks Stefano.
> > I am going to do it today.
> >
> > Regards,
> > O.
> > On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> > > > Hi Michal,
> > > >
> > > > I corrected xen's command line.
> > > > Now it is
> > > > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
> > > 4 colors is way too many for xen, just do xen_colors=0-0. There is no advantage in using more than 1 color for Xen.
> > >
> > > 4 colors is too few for dom0, if you are giving 1600M of memory to Dom0. Each color is 256M. For 1600M you should give at least 7 colors. Try:
> > >
> > >   xen_colors=0-0 dom0_colors=1-8
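Stefano's numbers can be cross-checked against the boot log quoted later in the thread: a 64kB way with 4kB pages gives 64/4 = 16 colors (matching "Max. number of colors available: 16"), and 256M per color across 16 colors implies 4GB of DRAM (inferred from those two figures, not stated in the thread). The "at least 7 colors" figure is plain ceiling division:

```shell
way_size_kb=64     # "(XEN) Way size: 64kB"
page_kb=4          # 4kB granules
colors=$(( way_size_kb / page_kb ))              # 16 colors
per_color_mb=256   # "Each color is 256M"
dom0_mb=1600       # dom0_mem=1600M
# colors needed to cover dom0_mem, rounded up
needed=$(( (dom0_mb + per_color_mb - 1) / per_color_mb ))
echo "colors=${colors}, dom0 needs at least ${needed} colors"
# prints: colors=16, dom0 needs at least 7 colors
```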
> > > > Unfortunately the result was the same.
> > > >
> > > > (XEN)  - Dom0 mode: Relaxed
> > > > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> > > > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> > > > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> > > > (XEN) Coloring general information
> > > > (XEN) Way size: 64kB
> > > > (XEN) Max. number of colors available: 16
> > > > (XEN) Xen color(s): [ 0 ]
> > > > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> > > > (XEN) Color array allocation failed for dom0
> > > > (XEN)
> > > > (XEN) ****************************************
> > > > (XEN) Panic on CPU 0:
> > > > (XEN) Error creating domain 0
> > > > (XEN) ****************************************
> > > > (XEN)
> > > > (XEN) Reboot in five seconds...
> > > >
> > > > I am going to find out how command line arguments are passed and parsed.
> > > >
> > > > Regards,
> > > > Oleg
IMKgIMKgIMKgIMKgIMKgIMKgJmd0OyDRgdGALCAxOSDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMTE6
MjUsIE9sZWcgTmlraXRlbmtvICZsdDs8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwu
Y29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5v
bGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpv
bGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5j
b208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIg
dGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNo
aWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdv
b2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZn
dDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJn
ZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29v
ZEBnbWFpbC5jb208L2E+Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+
b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVz
aGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208
L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20i
IHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNo
aWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpv
bGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5j
b208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIg
dGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9s
ZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hp
aXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9h
PiZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9s
ZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9s
ZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNv
bTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0
YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsmZ3Q7ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsi
Pm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29t
PC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29t
IiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxh
IGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVz
aGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdt
YWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwu
Y29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsgJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFu
ayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpv
bGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5j
b208L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20i
IHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNo
aWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNo
aWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwv
YT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJn
ZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Jmd0Ozo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBIaSBNaWNo
YWwsPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCDCoCDCoCZndDsgWW91IHB1dCBteSBub3NlIGludG8gdGhlIHByb2JsZW0uIFRoYW5rIHlv
dS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IEkgYW0gZ29pbmcgdG8gdXNlIHlv
dXIgcG9pbnQuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBMZXQmIzM5O3Mgc2Vl
IHdoYXQgaGFwcGVucy48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCDCoCDCoCZndDsgT2xlZzxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0OyDRgdGALCAxOSDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMTA6MzcsIE1pY2hhbCBPcnplbCAm
bHQ7PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+
bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hh
bC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+
Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFy
Z2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnpl
bEBhbWQuY29tPC9hPiZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFt
ZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5t
aWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1p
Y2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208
L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJn
ZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5j
b20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNo
YWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hh
bC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9
Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBo
cmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwu
b3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVs
QGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9i
bGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5j
b208L2E+Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBh
bWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+
bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpt
aWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29t
PC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFy
Z2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWlj
aGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5v
cnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0
OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0
PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBh
bWQuY29tPC9hPiZndDsmZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwu
b3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJf
YmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxA
YW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5j
b20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7Jmd0Ozxicj4N
CiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWlj
aGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5v
cnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0
OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0
PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBh
bWQuY29tPC9hPiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozo8YnI+DQomZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBIaSBPbGVnLDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBPbiAx
OS8wNC8yMDIzIDA5OjAzLCBPbGVnIE5pa2l0ZW5rbyB3cm90ZTo8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqA8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgSGVsbG8gU3RlZmFubyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBUaGFua3MgZm9yIHRoZSBjbGFyaWZpY2F0aW9uLjxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgTXkgY29tcGFueSB1c2VzIHlv
Y3RvIGZvciBpbWFnZSBnZW5lcmF0aW9uLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgV2hhdCBraW5kIG9mIGluZm9ybWF0aW9uIGRvIHlvdSBuZWVkIHRv
IGNvbnN1bHQgbWUgaW4gdGhpczxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoGNhc2UgPzxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IE1heWJlIG1vZHVsZXMgc2l6ZXMvYWRkcmVzc2VzIHdoaWNoIHdlcmUgbWVudGlvbmVkIGJ5
IEBKdWxpZW48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqBHcmFsbDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5v
cmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9
Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1
bGllbkB4ZW4ub3JnPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGll
bkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhl
bi5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIg
dGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiZn
dDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9i
bGFuayI+anVsaWVuQHhlbi5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGll
bkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0OyAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGll
bkB4ZW4ub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIg
dGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5v
cmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9
Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1
bGllbkB4ZW4ub3JnPC9hPiZndDsmZ3Q7Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4gJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5q
dWxpZW5AeGVuLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4
ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5v
cmc8L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpqdWxpZW5AeGVuLm9yZyIgdGFyZ2V0
PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpq
dWxpZW5AeGVuLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPmp1bGllbkB4ZW4ub3JnPC9hPiZndDsgJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5q
dWxpZW5AeGVuLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5v
cmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4mZ3Q7Jmd0OyZndDsgJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxp
ZW5AeGVuLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmci
IHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJl
Zj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8
L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9i
bGFuayI+anVsaWVuQHhlbi5vcmc8L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5qdWxpZW5AeGVuLm9yZzwvYT4gJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86anVsaWVuQHhlbi5vcmciIHRhcmdldD0iX2JsYW5rIj5q
dWxpZW5AeGVuLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOmp1bGllbkB4
ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5vcmc8L2E+ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOmp1bGllbkB4ZW4ub3JnIiB0YXJnZXQ9Il9ibGFuayI+anVsaWVuQHhlbi5v
cmc8L2E+Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7ID88YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU29y
cnkgZm9yIGp1bXBpbmcgaW50byBkaXNjdXNzaW9uLCBidXQgRldJQ1MgdGhlIFhlbiBjb21tYW5k
PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgbGluZSB5
b3UgcHJvdmlkZWQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBzZWVtcyB0byBiZTxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoG5vdCB0aGU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBvbmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqBYZW4gYm9vdGVkIHdpdGguIFRoZSBlcnJvciB5b3UgYXJlIG9ic2Vydmlu
ZyBtb3N0IGxpa2VseSBpcyBkdWU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqB0byBkb20wIGNvbG9yczxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoGNvbmZpZ3VyYXRpb24gbm90PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgYmVpbmc8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqBzcGVjaWZpZWQgKGkuZS4gbGFjayBvZiBkb20wX2NvbG9ycz0mbHQ7Jmd0OyBw
YXJhbWV0ZXIpLiBBbHRob3VnaCBpbjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoHRoZSBjb21tYW5kIGxpbmUgeW91PGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgcHJvdmlkZWQsIHRoaXM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBwYXJhbWV0ZXI8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqBpcyBzZXQsIEkgc3Ryb25nbHkgZG91YnQgdGhhdCB0aGlzIGlz
IHRoZSBhY3R1YWwgY29tbWFuZCBsaW5lPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgaW4gdXNlLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBZb3Ugd3JvdGU6
PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgeGVuLHhlbi1ib290
YXJncyA9ICZxdW90O2NvbnNvbGU9ZHR1YXJ0IGR0dWFydD1zZXJpYWwwPGJyPg0KJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZG9tMF9tZW09MTYwME0gZG9tMF9t
YXhfdmNwdXM9Mjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGRvbTBfdmNwdXNf
cGluPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgYm9v
dHNjcnViPTAgdndmaT1uYXRpdmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqBzY2hlZD1udWxsIHRpbWVyX3Nsb3A9MCB3YXlfc3ppemU9NjU1MzYgeGVuX2NvbG9y
cz0wLTM8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBk
b20wX2NvbG9ycz00LTcmcXVvdDs7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxi
cj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJ1dDo8YnI+DQomZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAxKSB3YXlfc3ppemUgaGFzIGEgdHlw
bzxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoDIpIHlvdSBzcGVj
aWZpZWQgNCBjb2xvcnMgKDAtMykgZm9yIFhlbiwgYnV0IHRoZSBib290IGxvZyBzYXlzPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdGhhdCBYZW4gaGFz
IG9ubHk8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBvbmU6PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgKFhFTikgWGVuIGNvbG9yKHMpOiBbIDAgXTxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqBUaGlzIG1ha2VzIG1lIGJlbGlldmUgdGhhdCBubyBjb2xvcnMgY29uZmlndXJhdGlv
biBhY3R1YWxseSBlbmQ8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqB1cCBpbiBjb21tYW5kIGxpbmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB0aGF0IFhlbjxi
cj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGJvb3RlZDxicj4NCiZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHdpdGguPGJyPg0KJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgU2luZ2xlIGNvbG9yIGZvciBYZW4gaXMgYSAmcXVv
dDtkZWZhdWx0IGlmIG5vdCBzcGVjaWZpZWQmcXVvdDsgYW5kIHdheTxicj4NCiZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHNpemUgd2FzIHByb2JhYmx5PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgY2FsY3VsYXRlZDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoGJ5IGFza2luZzxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoEhXLjxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBTbyBJIHdvdWxkIHN1Z2dlc3QgdG8gZmly
c3QgY3Jvc3MtY2hlY2sgdGhlIGNvbW1hbmQgbGluZSBpbjxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHVzZS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgfk1p
Y2hhbDxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
UmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IE9sZWc8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyDQstGCLCAxOCDQ
sNC/0YAuIDIwMjPigK/Qsy4g0LIgMjA6NDQsIFN0ZWZhbm8gU3RhYmVsbGluaTxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJtYWls
dG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJf
YmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBr
ZXJuZWwub3JnPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9h
PiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3Rh
On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> Hi Julien,
>
> >> This feature has not been merged in Xen upstream yet
> >> would assume that upstream + the series on the ML [1]
> >> work
>
> Please clarify this point.
> Because the two thoughts are controversial.

Hi Oleg,

As Julien wrote, there is nothing controversial. As you are aware,
Xilinx maintains a separate Xen tree specific for Xilinx here:

https://github.com/xilinx/xen

and the branch you are using (xlnx_rebase_4.16) comes from there.

Instead, the upstream Xen tree lives here:

https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
YT1zdW1tYXJ5PC9hPiZndDsmZ3Q7Jmd0OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0
cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0
YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7
YT1zdW1tYXJ5PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2Vi
Lz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5o
dHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0
OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7
YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJp
dHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8YSBocmVmPSJo
dHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0i
bm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdl
Yi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMu
eGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeTwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0
cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZndDsg
Jmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRz
Lnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiAmbHQ7PGEgaHJlZj0iaHR0
cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0
cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0
YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7
YT1zdW1tYXJ5PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2Vi
Lz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5o
dHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0
OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7
YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJp
dHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8YSBocmVmPSJo
dHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0i
bm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdl
Yi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4mZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMu
eGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeTwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0
cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZndDsg
Jmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRz
Lnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiAmbHQ7PGEgaHJlZj0iaHR0
cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBUaGUgQ2FjaGUgQ29sb3JpbmcgZmVhdHVy
ZSB0aGF0IHlvdSBhcmUgdHJ5aW5nIHRvPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgY29uZmlndXJlIGlzIHByZXNlbnQ8YnI+DQomZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBpbiB4bG54X3JlYmFzZV80LjE2
LCBidXQgbm90IHlldCBwcmVzZW50IHVwc3RyZWFtICh0aGVyZTxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGlzIGFuPGJyPg0KJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgb3V0c3RhbmRpbmcgcGF0Y2ggc2Vy
aWVzIHRvIGFkZCBjYWNoZSBjb2xvcmluZyB0byBYZW48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB1cHN0cmVhbSBidXQgaXQ8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBoYXNuJiMzOTt0IGJlZW4g
bWVyZ2VkIHlldC4pPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBBbnl3
YXksIGlmIHlvdSBhcmUgdXNpbmcgeGxueF9yZWJhc2VfNC4xNiBpdCBkb2VzbiYjMzk7dDxicj4N
CiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoG1hdHRlciB0b28g
bXVjaCBmb3I8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqB5b3UgYXMgeW91IGFscmVhZHkgaGF2ZSBDYWNoZSBDb2xvcmluZyBhcyBhIGZlYXR1
cmU8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB0aGVy
ZS48YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEkgdGFrZSB5b3UgYXJl
IHVzaW5nIEltYWdlQnVpbGRlciB0byBnZW5lcmF0ZSB0aGUgYm9vdDxicj4NCiZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGNvbmZpZ3VyYXRpb24/IElmPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgc28sIHBs
ZWFzZSBwb3N0IHRoZSBJbWFnZUJ1aWxkZXIgY29uZmlnIGZpbGUgdGhhdCB5b3UgYXJlPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgdXNpbmcuPGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEJ1dCBmcm9tIHRoZSBi
b290IG1lc3NhZ2UsIGl0IGxvb2tzIGxpa2UgdGhlIGNvbG9yczxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGNvbmZpZ3VyYXRpb24gZm9yPGJyPg0KJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgRG9tMCBpcyBp
bmNvcnJlY3QuPGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxi
cj4NCiZndDvCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7IDxicj4NCjwvYmxvY2txdW90ZT48
L2Rpdj4NCg==
--000000000000c752cd05fbce75cf--


From xen-devel-bounces@lists.xenproject.org Tue May 16 12:27:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 12:27:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535163.832784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pytmG-0008M8-JU; Tue, 16 May 2023 12:27:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535163.832784; Tue, 16 May 2023 12:27:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pytmG-0008M1-FD; Tue, 16 May 2023 12:27:40 +0000
Received: by outflank-mailman (input) for mailman id 535163;
 Tue, 16 May 2023 12:27:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pytmE-0008Lv-Uo
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 12:27:38 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0622.outbound.protection.outlook.com
 [2a01:111:f400:fe02::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0ad1cac0-f3e5-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 14:27:37 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7431.eurprd04.prod.outlook.com (2603:10a6:10:1a1::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.32; Tue, 16 May
 2023 12:27:35 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 12:27:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ad1cac0-f3e5-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EqKn/DKq2Dz0n8WX/xTVY9eJe5bA/lgqyOdgHADHAxIep0K4rVQvueLgUFO8pT0ipo+kdnvjbixLlmey9I7vKKrVByHoiCC2ZYDiCLioVnq/hraUjvHwoFy0CWrUUKpkc+L55+c8ZmH5hWpyFiWOrSFoU2QIkXSmgEhrbeo28dz3Cd5RD+zcN//pGsng04y2VEBFkKBOfhZ2F8DpwRsOiVkGG0asXRE7AImbow2LLlC+qQAsETNfl91Gaig9xFpicu5wKTKIkpqdDuEf49Jcu6xlYWBEdQdHtkr/oNe4d9IisYD7AXHdahCYVQ/sQqiK0H1v1qb+y1EjPfOXDN8BDA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/klQvXHsLzX7lzggijdvUmRRhCoUDlBDMqxHfTJ3OzU=;
 b=bGfa9/+qTox1x1BXMOX9Tpd1P4uHW2p+e7yJdJ4aVZzWeSLq6/0XsCitIAEOvthPcbPF6hX22Prqo4ITjZNtHnZBw50BRt6B0u94RREp36uKUOFO6UVfCJCcx56Xxgnzj5FsM6RYXjvlfXN8EPpKrxJV04U1+QWH9Y2uGjux+sIsVbVY8L1ewAsWEDTzVHPGJLSZxI75pp6ePZlCUoOr/TjOgu2BYr+5Ke62Ocfba5QVsc8CQ4BMlZjkpeFYJTwUw42AXM49LBNiWXg4Urho06RTSf9FNrtwm3ohjMEJgvHxx3P6Q4HuPyW1DRtjDHttGiEMNsZ7C3kikds4KYFSlA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/klQvXHsLzX7lzggijdvUmRRhCoUDlBDMqxHfTJ3OzU=;
 b=J+C+N3RIFtTzMl+oadCUSChWhdQGmg6gURgTTqB8y5fxkL1koE1TDvC7atx9weUOHEAsiJYYvkLqx+MR2fs4VATuQT8QiMjg93IN+SMQcsgLDrNYWa1pPn/+AMxE1OE2R6W0IdPuNO4K8wuzJ64GW+4+ryqnR/MPDmJ/q3fVM3YD0quPAgoo0HLYkgQ3x1CnMjFXl+4VomQDCX4pKAP5RzeHAG6Og3jrTxlWO5kTgtgsMHTd4D1JOjcDcQafeOCCAtHgW3Z8VSJOX8TtEZOk/kDX3R8W9nx6xfftCgaVKWsVA4kIRFK09FcvXh1nHk7k5kuj/Hi7ekaPlK4iQLnDXg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <20a1b108-68a3-a200-1d0e-390cd20b5500@suse.com>
Date: Tue, 16 May 2023 14:27:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 4/6] x86/cpu-policy: MSR_ARCH_CAPS feature names
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230515144259.1009245-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0079.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7431:EE_
X-MS-Office365-Filtering-Correlation-Id: 77842db6-7172-43ba-baa0-08db5608ed70
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fkPgn0DaIgBYcU8MK0IzszUVeXJHR4GRaXVlc5U7DIiElycNhD3wi7gQNCpmqegoCMmQvzjDLcojn+B3VPYTkuoWSEw5FpNokD6fBsHis3HUpTI3aDrLfZnQ+0IjZ2LVWlGONdvdthLNMkOyYsQ3uzdD7rJZlTLjTyvzRVgW7J2EwI6QQUU1wQosbYOGgbgInWNstmPV8wvRKAaPBDXISuTaAVBPAGQ6OGEtAOFyS6uYvb5zl49BFFS2jJ3XTZrNvuI7J/dQ+HQ50tV/5ssHaf04GRLyn3wpxcChoA6ELDLsgk3MOQoQwIaAsXQPcgqm/jfe0DxQ3OdjpA6oxDssEihmTTKGZWYHQq+zc5ml+PT1H5bq5m7why/LBsrmfTgrIe/iOLI2MKbxJJa7eVrvoEoXNFoILS63JwvfjW1KrPKHWZMM9DLkQAf6wOVA7rehInMOcWqQ0PNWtwUUxpJHehw7ha1XcBPHYiWa/gIlSm99xdDFMt8Mm0/fw+iXrrxLClNr4PIii1lcHcnZpm1hZdTx687tyCdijfH9k9JWTUCIIusIaE12FE8hvQfZZ2MumTDFa92JBj3D87kX75O5xgde7akPUI7rL7cypJyneuAr/D73yVfOE5otloQPgcpEtm11peKf8AgJOUrqUUwUYw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(396003)(376002)(136003)(39860400002)(346002)(451199021)(4326008)(6916009)(2906002)(31686004)(36756003)(5660300002)(8936002)(8676002)(86362001)(41300700001)(316002)(66556008)(66476007)(66946007)(54906003)(6486002)(478600001)(186003)(6506007)(6512007)(53546011)(83380400001)(26005)(31696002)(2616005)(38100700002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NXdWcTY1L3BLZitqdWFJRnJiTHdSSUxBQ1llNnZoWDZHVXJIUy9JTE91b3Zt?=
 =?utf-8?B?bWdJL282Q2NRRFFnNk8wZTRvQnl6c29KQUYza1pSMFRxc1ZpOXMzd0ZxM21K?=
 =?utf-8?B?WWx5Sy96SVdUMmIvZnRxUWg1bmlWa0ovUktGU2JGa1Q4ZzJQNjZzK2M4Tko1?=
 =?utf-8?B?RCtHRGdSYU8welRYaWFpUUhWSGR2a2hEUFl4VHgzSmd6ek1XRzMxR1JGNHVQ?=
 =?utf-8?B?RGdUSTVFS0ExazFFY3lxVytzYVZaK081U1FHQzhzdi90VjgyZ1NaZ0RsajBt?=
 =?utf-8?B?amRZVVhTTHV0dklIUFZ0TnRuSktHdDNZa1dtQWU1Y1l6SW1WRGJRb29iaSt6?=
 =?utf-8?B?cWVwditNR3pqdVY4aFhqNnhzRVRUK25ONEEvZTk4N0ZNbTBHQUVxQWJhZzAw?=
 =?utf-8?B?WTY3eW1KVHJOVStXcFVZVTVIbk9jaG9QaTlUVk9KOFBOYkd4eGFLKzE1WnJ0?=
 =?utf-8?B?c2NMSU9WTWRWL2l5WXg4RXJMWS9vaUIvbUtnNEd2dUlCR0ZuZXkybGtuNFln?=
 =?utf-8?B?M0YyWDNGMHFaM3oyREs0bFNhT2JySHVObkNZQThJMFBSQzREdEVnUXErRFN3?=
 =?utf-8?B?Z3ZxbWdoM2NWTTY0Z0Q0b3F1OVZKYmVUYkFuSzI3czMyUkdSV01VSHBkWVhR?=
 =?utf-8?B?VkRJRkR3ekJmM0poN2M0Q2lnMXRNYjQ0bFRUTWw5SG82blRiRG5rcjdCN3FD?=
 =?utf-8?B?bVA2eHl2QitDUTlMZFNNSzd1RloxRGpBT3NzY2pzQW1VRVpMcmpPb2pSNm1K?=
 =?utf-8?B?YlRpcWxDZU5saVlaTG9UYWh0ZUdOK2FrTVNhOHVmR1JRMVYyblQrNmdOVDhQ?=
 =?utf-8?B?TlRQTFBkcmhCQzlWcVNKWlpWQ252ckx2MTF0OGxLLzBlS0xIRzJYa2pOVFBS?=
 =?utf-8?B?N1gzUXdoSHVuV1NjQ0x6ZDkwRWpoRC9kV0FJN21PZ1hEUkM5K1N6cnA3c200?=
 =?utf-8?B?QStWQTBUaWI1ZGs0ZHppY0xQRkRvNHlBeVBmbEs2QVU4QXJPL2RVLzdQUEFr?=
 =?utf-8?B?LzdGejJwcjBIT1ljRWtPSVczSjRYUnE0MmtHR2dzdVBMb1N4WmVnNk80ZTNl?=
 =?utf-8?B?aVF3TGEwajJPeThzV2NJcnRVd241ZnpPYWZGeEYzRW9OU3FwdkF5VVhEL3pq?=
 =?utf-8?B?NGJZTlhZTk9DeUR0K1FicGpVQWh5ZThtNDlOWTV4S2VQdnkxdCtSRDVaclor?=
 =?utf-8?B?MlZjMkVzdFpmaU5jMGtqZ3lGZFhNcXhHYlVXQzdqbDlRVWJYbUJRM3hpZThh?=
 =?utf-8?B?Q2J3TGcrSlVpYmFIU096YWNNSjFGbVZUV0lwUTVydHFMZGpHeWlRZWxVUTFy?=
 =?utf-8?B?TTBxZmNTQ0t2U1FNQk8za25rZ1dEaldCN2NCeUxwWlZ3dGd5QStaLy9aN0NV?=
 =?utf-8?B?akQ1ZGF6UTAweDJYeUNrNTd0ZXJiS0JQL1N5OHFPMFlJeks0M3JjQXExQm5S?=
 =?utf-8?B?SFpMaXh0S1YrclVGTzNSUU5TVW5OZndJZTg4REpZVnlJdTdBRkNrcENjZ1lu?=
 =?utf-8?B?VENZSmYya0dvcWFFYnlqU3hGdEx3SGNFVXFuZ1VEWFU5T3NOdlg3aVo1Nk96?=
 =?utf-8?B?UHh3UU5yMjhpajRGejkwMllQUE5KL1gzcm5tWlR6aHArSXRlanJIOXV4UFFB?=
 =?utf-8?B?SjEyTjNwbkVHVmhZYlluZDhOekFDVVI1R1plVzdGSlFrL0kwVm1YQm0yV0Jh?=
 =?utf-8?B?STAyV3JiU2oweVAxUTY0M2Z5RDFGVTZQN2hOUEw3cWhuSGpqVUNqRkQwcHAv?=
 =?utf-8?B?WTMzZ0pvdlJ4SU45eXpxZGNlUG4xZC9aTGlEUC9UVktWdzRmSXd0MGNqaTRC?=
 =?utf-8?B?Q3NSVURzMjdBYXU5a2FFR2ZRbHBXcFltV0tldEUxa0pIM1JxdjdyM0ZOWS91?=
 =?utf-8?B?TGFoaHdOQm5jWGczUGpEWjJ2NUJ6T0xLbmJXYmE0ZFJ4UnJqM20zR0grQSt2?=
 =?utf-8?B?ZFlzUUFhMG0zNHF1YWIwQTRka2V2OURBU0xUZjR1aEJPYnFkbFJpcUc0bnVX?=
 =?utf-8?B?eTN0bUQrV1c2bGZFUWx0VG43RE9uSzlXa0JPRXZGSlJybk04RlczZC9vdzho?=
 =?utf-8?B?WGVaWm51UkVLRzFyUXhiSEh5dWtXNzZnblc1T2FSS2I5czNkMFRGUjdvMTl6?=
 =?utf-8?Q?HJyWbTzhoebXqcTDmtGHC5odL?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 77842db6-7172-43ba-baa0-08db5608ed70
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 12:27:34.8356
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: k9V4zxUQdYmPyOT68H7SEpt3xTZmHcv+CvOGgmBX0tTc0c7RlxPJEuOMX7Euu3uKb/X2w3mHTylNs4scmpsu3Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7431

On 15.05.2023 16:42, Andrew Cooper wrote:
> Seed the default visibility from the dom0 special case, which for the most
> part just exposes the *_NO bits.

EIBRS and SKIP_L1DFL are outliers here, in not presently being exposed
to Dom0. If exposing them (latently) wasn't an oversight, then imo this
would want justifying here. They'll get exposed, after all, ...

>  Insert a block dependency from the ARCH_CAPS
> CPUID bit to the entire content of the MSR.
> 
> The overall CPUID bit is still max-only, so all of MSR_ARCH_CAPS is hidden in
> the default policies.

... once this changes, as they're also not just 'a', but 'A'.

> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -228,6 +228,19 @@ static const char *const str_7d2[32] =
>  
>  static const char *const str_10Al[32] =
>  {
> +    [ 0] = "rdcl-no",             [ 1] = "eibrs",
> +    [ 2] = "rsba",                [ 3] = "skip-l1dfl",
> +    [ 4] = "intel-ssb-no",        [ 5] = "mds-no",
> +    [ 6] = "if-pschange-mc-no",   [ 7] = "tsx-ctrl",
> +    [ 8] = "taa-no",              [ 9] = "mcu-ctrl",
> +    [10] = "misc-pkg-ctrl",       [11] = "energy-ctrl",

Not "energy-filtering" or "energy-filtering-ctl" for the right one here?

> +    [12] = "doitm",               [13] = "sbdr-ssdp-no",
> +    [14] = "fbsdp-no",            [15] = "psdp-no",
> +    /* 16 */                      [17] = "fb-clear",
> +    [18] = "fb-clear-ctrl",       [19] = "rrsba",
> +    [20] = "bhi-no",              [21] = "xapic-status",
> +    /* 22 */                      [23] = "ovrclk-status",
> +    [24] = "pbrsb-no",
>  };
>  
>  static const char *const str_10Ah[32] =
> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -308,6 +308,29 @@ XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
>  XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
>  
>  /* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */
> +XEN_CPUFEATURE(RDCL_NO,            16*32+ 0) /*A  No Rogue Data Cache Load (Meltdown) */
> +XEN_CPUFEATURE(EIBRS,              16*32+ 1) /*A  Enhanced IBRS */
> +XEN_CPUFEATURE(RSBA,               16*32+ 2) /*!A RSB Alternative (Retpoline not safe) */
> +XEN_CPUFEATURE(SKIP_L1DFL,         16*32+ 3) /*A  Don't need to flush L1D on VMEntry */
> +XEN_CPUFEATURE(INTEL_SSB_NO,       16*32+ 4) /*A  No Speculative Store Bypass */
> +XEN_CPUFEATURE(MDS_NO,             16*32+ 5) /*A  No Microarchitectural Data Sampling */
> +XEN_CPUFEATURE(IF_PSCHANGE_MC_NO,  16*32+ 6) /*A  No Instruction fetch #MC */
> +XEN_CPUFEATURE(TSX_CTRL,           16*32+ 7) /*   MSR_TSX_CTRL */
> +XEN_CPUFEATURE(TAA_NO,             16*32+ 8) /*A  No TSX Async Abort */
> +XEN_CPUFEATURE(MCU_CTRL,           16*32+ 9) /*   MSR_MCU_CTRL */
> +XEN_CPUFEATURE(MISC_PKG_CTRL,      16*32+10) /*   MSR_MISC_PKG_CTRL */
> +XEN_CPUFEATURE(ENERGY_FILTERING,   16*32+11) /*   MSR_MISC_PKG_CTRL.ENERGY_FILTERING */

These last two aren't exactly in sync with the SDM; I assume that's
intended?

> +XEN_CPUFEATURE(DOITM,              16*32+12) /*   Data Operand Invariant Timing Mode */
> +XEN_CPUFEATURE(SBDR_SSBD_NO,       16*32+13) /*A  No Shared Buffer Data Read or Sideband Stale Data Propagation */

SBDR_SSDP_NO?

> +XEN_CPUFEATURE(FBDSP_NO,           16*32+14) /*A  No Fill Buffer Stale Data Propagation */

FBSDP_NO?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 12:29:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 12:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535168.832794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyto9-0000Tb-0j; Tue, 16 May 2023 12:29:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535168.832794; Tue, 16 May 2023 12:29:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyto8-0000TU-Sh; Tue, 16 May 2023 12:29:36 +0000
Received: by outflank-mailman (input) for mailman id 535168;
 Tue, 16 May 2023 12:29:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r255=BF=gmail.com=fschmaus@srs-se1.protection.inumbo.net>)
 id 1pyto7-0000St-Iu
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 12:29:35 +0000
Received: from mail-ed1-f47.google.com (mail-ed1-f47.google.com
 [209.85.208.47]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4ff738ef-f3e5-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 14:29:33 +0200 (CEST)
Received: by mail-ed1-f47.google.com with SMTP id
 4fb4d7f45d1cf-510b154559fso151382a12.3
 for <xen-devel@lists.xenproject.org>; Tue, 16 May 2023 05:29:33 -0700 (PDT)
Received: from [192.168.188.10] ([62.27.195.158])
 by smtp.gmail.com with ESMTPSA id
 j5-20020aa7de85000000b0050bc4600d38sm8155527edv.79.2023.05.16.05.29.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 16 May 2023 05:29:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ff738ef-f3e5-11ed-8611-37d641c3527e
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684240173; x=1686832173;
        h=content-transfer-encoding:in-reply-to:from:references:cc:to
         :content-language:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=UF5NoWStWTbq5Siz8QiSqYAnylhbCK2MRLmmjITL1bI=;
        b=eWwDUAoxpDyTeGYgjIY134fuC4mO6F7lPfb4A0np4wpsnWO7OaGf/ckwddNodQXt8U
         7vb7w6fvbn5EgPUNNbb1cR55k/6bvenTYlUtaKoTatIGrnB/zA7TQks94Xibi6Ru4E3L
         aJPg7kUmYRwmPB0hyss9wF2O2Be1gfO7kzS09Vz/xSl8/X4excS+7WsBANvWqQvXWgiE
         OsXOttNhe4fpuB3YjwqMAOta08IF88km+u/7SJLtqmFE/0wrusbe9MemTvoq/sD5k1Ix
         2RrTXaGTMMShxc7KGLWFyxJ1sJTpO83sjaHdTp0wo4ENQfv0MwFSa7hfhabttAQ88i2I
         aSPg==
X-Gm-Message-State: AC+VfDxTXp00NFcSfkV6hxFnVPEHVNGiuU41GzT4acDdDho+GIQh8sh3
	jk5mk4kEDe5wbL8o76dwBR1xaqcGhMaS6Q==
X-Google-Smtp-Source: ACHHUZ4JoLeFY7w99FQg7BKppaQ5imA8yq2767XrRyJJ5B0EtqtNb7oEoYcQXXCwDgWji0zAGJVViQ==
X-Received: by 2002:aa7:c04a:0:b0:50d:be83:9ff2 with SMTP id k10-20020aa7c04a000000b0050dbe839ff2mr19037594edo.23.1684240172521;
        Tue, 16 May 2023 05:29:32 -0700 (PDT)
Message-ID: <42cd7c2e-4bcc-5743-59b5-eed4e5289df7@geekplace.eu>
Date: Tue, 16 May 2023 14:29:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] m4/ptyfuncs.m4 tools/configure: add linux headers for pty
 functions
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230516091355.721398-1-flo@geekplace.eu>
 <6e9af5ec-4484-38ed-2b40-6231baa9ec93@citrix.com>
From: Florian Schmaus <flo@geekplace.eu>
In-Reply-To: <6e9af5ec-4484-38ed-2b40-6231baa9ec93@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 16/05/2023 12.34, Andrew Cooper wrote:
> On 16/05/2023 10:13 am, Florian Schmaus wrote:
>> To avoid implicit function declarations, which will cause an error on
>> modern compilers. See https://wiki.gentoo.org/wiki/Modern_C_porting
>>
>> Downstream Gentoo bug: https://bugs.gentoo.org/904449
>>
>> Signed-off-by: Florian Schmaus <flo@geekplace.eu>
> 
> Thanks for the patch, but there's already a different fix in flight.

Thanks for the fast response pointing out the other patch.

> Does
> https://lore.kernel.org/xen-devel/20230512122614.3724-1-olaf@aepfle.de/
> work for you?  If so, we'd definitely prefer to take the deletion of
> obsolete functionality.

After a quick glance at the patch, I believe it achieves the same goal, and
it is a better approach than mine.

- Florian


From xen-devel-bounces@lists.xenproject.org Tue May 16 12:54:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 12:54:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535176.832803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuBe-0003uc-Rl; Tue, 16 May 2023 12:53:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535176.832803; Tue, 16 May 2023 12:53:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuBe-0003uV-P1; Tue, 16 May 2023 12:53:54 +0000
Received: by outflank-mailman (input) for mailman id 535176;
 Tue, 16 May 2023 12:53:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyuBd-0003uP-Hg
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 12:53:53 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0603.outbound.protection.outlook.com
 [2a01:111:f400:fe02::603])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b4ae1511-f3e8-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 14:53:51 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9389.eurprd04.prod.outlook.com (2603:10a6:102:2a8::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 12:53:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 12:53:48 +0000
Message-ID: <f5f56c2b-f382-b007-3949-2bf4bd37b392@suse.com>
Date: Tue, 16 May 2023 14:53:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 5/6] x86/boot: Record MSR_ARCH_CAPS for the Raw and Host
 CPU policy
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-6-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230515144259.1009245-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0069.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::22) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 15.05.2023 16:42, Andrew Cooper wrote:
> Extend x86_cpu_policy_fill_native() with a read of ARCH_CAPS based on the
> CPUID information just read, which removes the need to handle it specially in
> calculate_raw_cpu_policy().
> 
> Extend generic_identify() to read ARCH_CAPS into x86_capability[], which is
> fed into the Host Policy.  This in turn means there's no need to special-case
> arch_caps in calculate_host_policy().
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I have a question though, which it would have been nice for the description
to cover:

> --- a/xen/lib/x86/cpuid.c
> +++ b/xen/lib/x86/cpuid.c
> @@ -226,7 +226,12 @@ void x86_cpu_policy_fill_native(struct cpu_policy *p)
>      p->hv_limit = 0;
>      p->hv2_limit = 0;
>  
> -    /* TODO MSRs */
> +#ifdef __XEN__
> +    /* TODO MSR_PLATFORM_INFO */
> +
> +    if ( p->feat.arch_caps )
> +        rdmsrl(MSR_ARCH_CAPABILITIES, p->arch_caps.raw);
> +#endif

What about non-Xen environments re-using this code? In particular, it would
be nice if the test harnesses didn't always run with the two fields left
blank.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 12:56:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 12:56:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535180.832813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuEV-0004Uw-Ab; Tue, 16 May 2023 12:56:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535180.832813; Tue, 16 May 2023 12:56:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuEV-0004Up-7p; Tue, 16 May 2023 12:56:51 +0000
Received: by outflank-mailman (input) for mailman id 535180;
 Tue, 16 May 2023 12:56:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyuET-0004Uh-AO
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 12:56:49 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1cf0dd92-f3e9-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 14:56:47 +0200 (CEST)
Received: from mail-bn8nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 May 2023 08:56:38 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6053.namprd03.prod.outlook.com (2603:10b6:208:309::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 12:56:36 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6387.030; Tue, 16 May 2023
 12:56:36 +0000
Message-ID: <85a04714-c8e5-de21-7722-cd1ff715448f@citrix.com>
Date: Tue, 16 May 2023 13:56:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/6] x86/cpu-policy: MSR_ARCH_CAPS feature names
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-5-andrew.cooper3@citrix.com>
 <20a1b108-68a3-a200-1d0e-390cd20b5500@suse.com>
In-Reply-To: <20a1b108-68a3-a200-1d0e-390cd20b5500@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0488.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1ab::7) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 16/05/2023 1:27 pm, Jan Beulich wrote:
> On 15.05.2023 16:42, Andrew Cooper wrote:
>> Seed the default visibility from the dom0 special case, which for the most
>> part just exposes the *_NO bits.
> EIBRS and SKIP_L1DFL are outliers here, in not presently being exposed
> to Dom0. If the (latent) exposure of them wasn't an oversight, then this
> would imo want justifying here. They'll get exposed, after all, ...

EIBRS is exposed to dom0.  I've intentionally renamed it from
ARCH_CAPS_IBRS_ALL because EIBRS is by far the more recognisable name now.


SKIP_L1DFL is more complicated, but on yet more consideration I think
it's probably wrong here.

The confusion is regarding L1 Terminal Fault, where RDCL_NO was
retroactively declared to mean "also fixes L1TF".  SKIP_L1DFL means "you
don't need to flush on vmentry", which is advertised by real hardware
that also advertises RDCL_NO, but should also be advertised on
vulnerable hardware by the L0 hypervisor to an L1 to say "don't worry,
I'm already taking care of that for you".

So on consideration, I think SKIP_L1DFL should not be visible by
default, and when we start doing nested virt, should be reintroduced
with a dependency on the VMX feature.

>> +    [12] = "doitm",               [13] = "sbdr-ssdp-no",
>> +    [14] = "fbsdp-no",            [15] = "psdp-no",
>> +    /* 16 */                      [17] = "fb-clear",
>> +    [18] = "fb-clear-ctrl",       [19] = "rrsba",
>> +    [20] = "bhi-no",              [21] = "xapic-status",
>> +    /* 22 */                      [23] = "ovrclk-status",
>> +    [24] = "pbrsb-no",
>>  };
>>  
>>  static const char *const str_10Ah[32] =
>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>> @@ -308,6 +308,29 @@ XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
>>  XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
>>  
>>  /* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */
>> +XEN_CPUFEATURE(RDCL_NO,            16*32+ 0) /*A  No Rogue Data Cache Load (Meltdown) */
>> +XEN_CPUFEATURE(EIBRS,              16*32+ 1) /*A  Enhanced IBRS */
>> +XEN_CPUFEATURE(RSBA,               16*32+ 2) /*!A RSB Alternative (Retpoline not safe) */
>> +XEN_CPUFEATURE(SKIP_L1DFL,         16*32+ 3) /*A  Don't need to flush L1D on VMEntry */
>> +XEN_CPUFEATURE(INTEL_SSB_NO,       16*32+ 4) /*A  No Speculative Store Bypass */
>> +XEN_CPUFEATURE(MDS_NO,             16*32+ 5) /*A  No Microarchitectural Data Sampling */
>> +XEN_CPUFEATURE(IF_PSCHANGE_MC_NO,  16*32+ 6) /*A  No Instruction fetch #MC */
>> +XEN_CPUFEATURE(TSX_CTRL,           16*32+ 7) /*   MSR_TSX_CTRL */
>> +XEN_CPUFEATURE(TAA_NO,             16*32+ 8) /*A  No TSX Async Abort */
>> +XEN_CPUFEATURE(MCU_CTRL,           16*32+ 9) /*   MSR_MCU_CTRL */
>> +XEN_CPUFEATURE(MISC_PKG_CTRL,      16*32+10) /*   MSR_MISC_PKG_CTRL */
>> +XEN_CPUFEATURE(ENERGY_FILTERING,   16*32+11) /*   MSR_MISC_PKG_CTRL.ENERGY_FILTERING */
> These last two aren't exactly in sync with the SDM; I assume that's
> intended?

Yeah.  I'm bored of Intel failing at naming consistency.  For this one, I
was assured that the draft was going to be fixed before publication, and
yet...

>
>> +XEN_CPUFEATURE(DOITM,              16*32+12) /*   Data Operand Invariant Timing Mode */
>> +XEN_CPUFEATURE(SBDR_SSBD_NO,       16*32+13) /*A  No Shared Buffer Data Read or Sideband Stale Data Propagation */
> SBDR_SSDP_NO?
>
>> +XEN_CPUFEATURE(FBDSP_NO,           16*32+14) /*A  No Fill Buffer Stale Data Propagation */
> FBSDP_NO?

Oops.  I hate the MMIO vulns especially because they're too similar to
other vulns.  Will fix.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 16 12:59:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 12:59:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535186.832823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuGw-00058R-SF; Tue, 16 May 2023 12:59:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535186.832823; Tue, 16 May 2023 12:59:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuGw-00058K-Om; Tue, 16 May 2023 12:59:22 +0000
Received: by outflank-mailman (input) for mailman id 535186;
 Tue, 16 May 2023 12:59:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyuGu-00058A-MD
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 12:59:20 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 768e2576-f3e9-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 14:59:17 +0200 (CEST)
Received: from mail-bn8nam12lp2168.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 May 2023 08:59:15 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6053.namprd03.prod.outlook.com (2603:10b6:208:309::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 12:59:13 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6387.030; Tue, 16 May 2023
 12:59:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 768e2576-f3e9-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684241957;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=LulrSaczLJNPUqHUzIPQWjuKyFXbTUuOGlusYHUZ4UA=;
  b=XRTEpw8khjLb6uoV0B6r1JLDX/t7rv4FZQ67TTExMesvvGzr27GJAhLF
   ec+RVCuinI3O0Nk/2WxuQyrlNMnSZCktWwFz2x7YL6GZicHkR+YBzHcgi
   2mfIrtvKiKex8+bezc5b0zbHbEH5+FRyUdGUGnTgWbeoxJLNi2WT2GCMx
   Y=;
X-IronPort-RemoteIP: 104.47.55.168
X-IronPort-MID: 111670360
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,278,1677560400"; 
   d="scan'208";a="111670360"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Dt7E1E4haJOBKvUBVGURg0pn0Jn1d381FvDd/xkLbLiy7AxM2o6M3LKgKuR4xA5T9QSPeCyyxbb3rqFJvNN1XiOAC6oblRbuLQheq2HQo/mpWjMmigLBumDyDWd7CEeFHVe1PaXkWyr2yDjVGWy3E2YIBkjN7GdIrvU51e4NXKAn7nUJdPFQM23F2g2jkOeDEm1b3lp721lgXz8BJFmBYZx8Mvrk1cuLKSX0hVmILNVbSEkF9mFtVfWaPwgIw/l1hX21ggYXH0AdUQF8j11RSvsiHot46jJnJNCC3wn16ruj74Ybi4Xe2J5013W6g6o9gl36exEztQEhmIEL5mUpQg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Yymk88xLfET+N1rPKyG5v1fh9t/6tHL6bBLVffnZmGw=;
 b=Yn1j1BVYTxwtOz9KWQZ5/2sCLe3CTB1rU3a6wZGDPu36C83PTmMrcWu4vNjci/n3TxzPztI2Phw4bVtjNxX1nuQ26e7vTTIZIZD3c+fmYfVppBDho0zqyxRsZh7fds2nwqz78LzY3fdHJvUxy11SE2LmK0DKBiYA7SQX8OM1sKVwqtBoC6IDkIjdcETmiqablVHXVy6ErfORttoJtU/Pk5gIaDhWTYHC1GCElEZLBCS+UdxnrrkscaQ3o0NeRD/4bdWX5MV4UNUGk+OLSpg2rY7OORV4i9iRfAoU6HyOuiYGWhP75eSAKzGchqMS0D6wdcBhk7Xeqta2O5lIsGRB5g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Yymk88xLfET+N1rPKyG5v1fh9t/6tHL6bBLVffnZmGw=;
 b=o/OW7EimGVrfb53F5JSwbxyFlxe+Cd4r3MeJEZUqfV2bYR0WRSKwYrMM1M7ISFJrkUro4xpmL3TdGiV2Y0zZQM/XcnRBXe8UAHC1A5BLHVzx8wkMkEPTXS6Y142De9NL1/IxLVDVNSb0bfdJn6+AdW5DUfyV43l4O0dHkPvksT8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <d3b7e5ce-3a6d-12b3-2ffe-c7b904274070@citrix.com>
Date: Tue, 16 May 2023 13:59:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 5/6] x86/boot: Record MSR_ARCH_CAPS for the Raw and Host
 CPU policy
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-6-andrew.cooper3@citrix.com>
 <f5f56c2b-f382-b007-3949-2bf4bd37b392@suse.com>
In-Reply-To: <f5f56c2b-f382-b007-3949-2bf4bd37b392@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P265CA0007.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ad::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BL1PR03MB6053:EE_
X-MS-Office365-Filtering-Correlation-Id: 80a48bd0-e481-4ce3-d79c-08db560d58e2
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 80a48bd0-e481-4ce3-d79c-08db560d58e2
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 12:59:13.1761
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9KRlkOd9nA1y4zaxDvvVecNq6qE2s5YtipQsM5U6oM1iWckK+LVs7LsuPKzraEyIU7XNeTilVtDC+JRHLegtk1Vkgzffr3zzNQjU5vkgre8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6053

On 16/05/2023 1:53 pm, Jan Beulich wrote:
> On 15.05.2023 16:42, Andrew Cooper wrote:
>> Extend x86_cpu_policy_fill_native() with a read of ARCH_CAPS based on the
>> CPUID information just read, which removes the need to handle it specially in
>> calculate_raw_cpu_policy().
>>
>> Extend generic_identify() to read ARCH_CAPS into x86_capability[], which is
>> fed into the Host Policy.  This in turn means there's no need to special case
>> arch_caps in calculate_host_policy().
>>
>> No practical change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
> I have a question though, which I think would be nice if the description
> had covered:
>
>> --- a/xen/lib/x86/cpuid.c
>> +++ b/xen/lib/x86/cpuid.c
>> @@ -226,7 +226,12 @@ void x86_cpu_policy_fill_native(struct cpu_policy *p)
>>      p->hv_limit = 0;
>>      p->hv2_limit = 0;
>>  
>> -    /* TODO MSRs */
>> +#ifdef __XEN__
>> +    /* TODO MSR_PLATFORM_INFO */
>> +
>> +    if ( p->feat.arch_caps )
>> +        rdmsrl(MSR_ARCH_CAPABILITIES, p->arch_caps.raw);
>> +#endif
> What about non-Xen environments re-using this code? In particular the
> test harnesses would be nice if they didn't run with the two fields
> all blank at all times.

Right now, I don't have an answer.

In Linux in lockdown mode, there isn't even a way to access this info from
userspace, because /proc/cpu/$/msr goes away.

It's only a unit test, and this doesn't break it.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 16 13:07:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 13:07:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535192.832834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuOe-0006fJ-L2; Tue, 16 May 2023 13:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535192.832834; Tue, 16 May 2023 13:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuOe-0006fC-Hn; Tue, 16 May 2023 13:07:20 +0000
Received: by outflank-mailman (input) for mailman id 535192;
 Tue, 16 May 2023 13:07:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyuOd-0006f6-S9
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 13:07:19 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2047.outbound.protection.outlook.com [40.107.7.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 95caa63a-f3ea-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 15:07:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6780.eurprd04.prod.outlook.com (2603:10a6:10:f9::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 13:06:49 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 13:06:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95caa63a-f3ea-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QMHF1LrVUr4wlC6XcxpQeH9S8wl8QL1Vvs1toMUFeIAIHirkopZDEA+/IVPCNOXKFP+JXJQmDJb0g+/rMlPu1KrtXp/GEEgnTmHSp2iGxY5M06kGIiWmcO/qbAu6T3i1Yh2NvnVpCZPztb3fZFVaw0cQtuZVT3ajbB58XzZZNcafAPWl+b2TkrFi3y7ua1ZvYmoRT8uV4Y4EH3TfNEgxWeMVGflxAYCAgl5jjOXXE9m+jx781GFk51RsRMr0pthwg+MDrJrdQu6oOW3ezC482SQxjNh+wMk0BcrD3C6ZUkAcA+JeifFEnTbe5o4obir40nCAtcPkFPgpKQL5l8rfIQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=moJmLOgtDRlXESsAoDIoMyj790eDY0yQoiZ2exwgRXU=;
 b=GCUdGsCtrVaRTTNhZF0JzYocNSkKA74jyAMH2yxXlbZ1PwXaw/MrL2TJ3ba6pLUwOJBG2Nf6s4jMcG0PJbTV+1ka546w6k3zXgRjL7NoEakxSlsWzdBApbsLZD3ZbIVU38Ww1pRlAPxuF9q3WlREQe+0JUF8WLyGZz4v+4iPW2xkW/WqH8ze/FyKgZQgW654tBrO0OjxOLLr/Ldh3iVdRCWYwIUYgEkYXQHfwaLa43qY5W6DdrZ5hkleIjrvPcBOF/OZQ2i+R8t+2w39DQdrX72q3tWY8O+lYWFcJ0nTooXcxzh/ab1ItqbODtdmrYfIjhIYsewUYRLw24/moMUzhA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=moJmLOgtDRlXESsAoDIoMyj790eDY0yQoiZ2exwgRXU=;
 b=jPFtQjeuqqTZhYHc7q4ZJ3THDjOyTtYaoroIikbrR2lFXKgt7dtd4qcQdv39j0aKVrpPWBRgY7ePH83mtb2cpHc2MvcXx1EyYJTFYcO5LDzPkHEr+VdMv+W6WOwX0JhOh6lVwE+6DUiEPr3ynytkEtpj4mlkxlAnZHyMvUDGob+WkrxJYWQnVlE7PXpH0UCZAkTSBysRE8EmqByEvDB7xxzvtqx5XGmKBrC59vdctizbHsOvGT/NLQNvhG1VmKS1J0sw8eiihyYjfb7LTqhnwH1ldDjAeZbGQkCZzSbGIErNvlnb6lynvvxyROk+OvIp9PjeWJ1EMeYFvgR/vQuilQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
Date: Tue, 16 May 2023 15:06:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230515144259.1009245-7-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0095.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6780:EE_
X-MS-Office365-Filtering-Correlation-Id: 12468101-0162-41f9-699f-08db560e68d8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 12468101-0162-41f9-699f-08db560e68d8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 13:06:49.3360
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: k5Uwm4pauqCwukdf/xkA1FgNgNMNKq7KYptqFnKN/6dBj7l4t/e3OFpGwZ+u5poJtSEwS1EHyF/BF4Zjx67r2g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6780

On 15.05.2023 16:42, Andrew Cooper wrote:
> --- a/xen/arch/x86/cpu-policy.c
> +++ b/xen/arch/x86/cpu-policy.c
> @@ -408,6 +408,25 @@ static void __init calculate_host_policy(void)
>      p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
>  }
>  
> +static void __init guest_common_max_feature_adjustments(uint32_t *fs)
> +{
> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
> +    {
> +        /*
> +         * MSR_ARCH_CAPS is just feature data, and we can offer it to guests
> +         * unconditionally, although limit it to Intel systems as it is highly
> +         * uarch-specific.
> +         *
> +         * In particular, the RSBA and RRSBA bits mean "you might migrate to a
> +         * system where RSB underflow uses alternative predictors (a.k.a
> +         * Retpoline not safe)", so these need to be visible to a guest in all
> +         * cases, even when it's only some other server in the pool which
> +         * suffers the identified behaviour.
> +         */
> +        __set_bit(X86_FEATURE_ARCH_CAPS, fs);
> +    }
> +}

The comment reads as if it wasn't applying to "max" only, but rather to
"default". Reading this I'm therefore now (and perhaps even more so in
the future, when coming across it) wondering whether it's misplaced, and
hence whether the commented code is also misplaced and/or wrong.

Further, is even just non-default exposure of all the various bits okay
for domains other than Dom0? IOW is there indeed no further adjustment necessary
to guest_rdmsr()?

> @@ -828,7 +845,10 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>       * domain policy logic gains a better understanding of MSRs.
>       */
>      if ( is_hardware_domain(d) && cpu_has_arch_caps )
> +    {
>          p->feat.arch_caps = true;
> +        p->arch_caps.raw = host_cpu_policy.arch_caps.raw;
> +    }

Doesn't this expose all the bits, irrespective of their exposure
annotations in the public header? I.e. even more than just the two
bits that become 'A' in patch 4, but weren't ...

> @@ -858,20 +878,6 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>          p->platform_info.cpuid_faulting = false;
>  
>      recalculate_cpuid_policy(d);
> -
> -    if ( is_hardware_domain(d) && cpu_has_arch_caps )
> -    {
> -        uint64_t val;
> -
> -        rdmsrl(MSR_ARCH_CAPABILITIES, val);
> -
> -        p->arch_caps.raw = val &
> -            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
> -             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
> -             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
> -             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
> -             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
> -    }

... included here?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 13:11:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 13:11:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535198.832844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuSY-00086p-4d; Tue, 16 May 2023 13:11:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535198.832844; Tue, 16 May 2023 13:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyuSY-00086i-1s; Tue, 16 May 2023 13:11:22 +0000
Received: by outflank-mailman (input) for mailman id 535198;
 Tue, 16 May 2023 13:11:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyuSX-00086c-D5
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 13:11:21 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on062b.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2622397e-f3eb-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 15:11:20 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9322.eurprd04.prod.outlook.com (2603:10a6:10:355::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 13:11:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 13:11:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2622397e-f3eb-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iXoqJhqbVA6DUnI+BXLib8gsYcwKU45x0wuB8wnUByKpf7LlPCGwNgIl/XO09lxfUk7pZqO7frHEfAuIdSS2uqVCswx/4p1exFYPMhYWv0aCud7XaBPLpFgm65uXhyxJNTGTatetcpG+3Xz+0D6AJOvrGHhg39A6ChrFOR7o5AOAY0yrNsvzDZcEopyx5docWgLqwfcN0j0Rt572fztUk3S96bipk+qTjhw6GgKB2p4BQmlFtRU6WNOa0WrTt0VT+DzcjwerjuSG3oYeVMyZzAZb5JuD43jc4bjIL3NQtKjYvqSHTbuJZ64P6yrjas3F43mkOjkKs/06yWi7yoHNMQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tjDZJNXkCgyD5Ymk8/LhGwAm0IRVp8gLqlRdhPiW9Z0=;
 b=AvmZSf3ZodxEIi2dX4UoQvL7HdxY/96IzENZikdOhtf25I0eY82gDC5750H15Hm7D9PjbPM2O2mV0COEF87FDvsUgUW5HrEUKL9n/g+C5XAeakQ7iyENfCSAFjJGdXDiPf7uWX37ePLkmIYtJdMc+YP9r0MLVftBrChujadaNACii42cDaiLj9jBr9XXZvHIZXj9jOcPjntGdSCD2p7boUE4YA7hUR7h0KX+IS2bwoza1oqcyFXNT0o1rO9pwY7MqAVT1CUVhGJ6XeZ1i6+qrQvbAehtLIe6LX14ARyWg4H5B5vyXrzUF+T/uKNfoXy0S7CAUXxVNZIBXVnnpvE/ZA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tjDZJNXkCgyD5Ymk8/LhGwAm0IRVp8gLqlRdhPiW9Z0=;
 b=KjN35QwlgS9DOM01Fa4PrMXZuXuOk7XWmGjmUosI1DtzihQnER+ake0OnAbYcfPsn01LHc/oeYSZq38XkbeW12V0/pkC26h87QY7SXzz5wcG1iKur4RTyOIMa0Z51cHLW6Im2KmA1JL/mpYSrf2n7yHkQJZcFOcdD4scluBJ4m8Md7gblMJaLjn1DZ0GZc1nsfY28I6O9GygaE2hQ3+mxstTIuI5dw9/+fi28QwjFsta5xLaaDFTpRCm/Xw35+piHI51s5lzOYumJUxIuSm0wfOSKzdoyuCd1PotQaHMeSWgmu4CqmTILkgdLvzOpHQZhflUv2shy6IIZC1OBVLm3A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <49f8909c-6b65-4225-40f2-6ee347399a24@suse.com>
Date: Tue, 16 May 2023 15:11:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 4/6] x86/cpu-policy: MSR_ARCH_CAPS feature names
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-5-andrew.cooper3@citrix.com>
 <20a1b108-68a3-a200-1d0e-390cd20b5500@suse.com>
 <85a04714-c8e5-de21-7722-cd1ff715448f@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <85a04714-c8e5-de21-7722-cd1ff715448f@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0150.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:98::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9322:EE_
X-MS-Office365-Filtering-Correlation-Id: f0fcb9c5-45e4-48a6-7482-08db560f0903
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f0fcb9c5-45e4-48a6-7482-08db560f0903
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 13:11:18.0807
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aiQ/ivSFQcrNv97ADKF6x8qc14Ex1Lg+ZRlaMRZ+jeKZnq+QpIAGewwc/42CrcqqQaFViGStG6NhORNpz3tkkg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9322

On 16.05.2023 14:56, Andrew Cooper wrote:
> On 16/05/2023 1:27 pm, Jan Beulich wrote:
>> On 15.05.2023 16:42, Andrew Cooper wrote:
>>> Seed the default visibility from the dom0 special case, which for the most
>>> part just exposes the *_NO bits.
>> EIBRS and SKIP_L1DFL are outliers here, in not presently being exposed
>> to Dom0. If (latent) exposing of them wasn't an oversight, then this would
>> imo want justifying here. They'll get exposed, after all, ...
> 
> EIBRS is exposed to dom0.  I've intentionally renamed it from
> ARCH_CAPS_IBRS_ALL because EIBRS is by far the more recognisable name now.

Oh, of course - I should have looked at more than just the names themselves.

Hence the comment in a later patch is then also incorrect in saying "two
bits" when referring back here.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 13:51:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 13:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535208.832854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyv5K-00048Q-AJ; Tue, 16 May 2023 13:51:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535208.832854; Tue, 16 May 2023 13:51:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyv5K-00048J-7G; Tue, 16 May 2023 13:51:26 +0000
Received: by outflank-mailman (input) for mailman id 535208;
 Tue, 16 May 2023 13:51:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyv5I-00048D-Dk
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 13:51:24 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bc9d9d2e-f3f0-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 15:51:21 +0200 (CEST)
Received: from mail-dm6nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 May 2023 09:51:14 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5376.namprd03.prod.outlook.com (2603:10b6:208:1e7::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 13:51:08 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6387.030; Tue, 16 May 2023
 13:51:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc9d9d2e-f3f0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684245081;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=s621DDRdKuG9wyHYyqDeBihFyTv4Mgo0Qd7mVLq4MpU=;
  b=KMD5+ysUgnj4c3suJqALWQEHH+GVuW5HujiceJyfa4xrgiWbfFrL+GYb
   A1rgvlJjdGm7ALDpLd+sSVaH/LG1822KDLoMxihvEb5uFvZNUrFxnK8IF
   ftMyVZZPaBTnPiHd6xA1hAnQnA5BidQWM+WLKxZQA27vBifNusrXZI+pv
   A=;
X-IronPort-RemoteIP: 104.47.58.100
X-IronPort-MID: 111678433
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,278,1677560400"; 
   d="scan'208";a="111678433"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XIMsdsxsrl0llNVIdVBu0p3dNV7VCyXvYbNnTS/pD1BX+B/ZryKZt6CmLoGeYOZaFwe8mGWi1eJbUjmgwScEp2IkRRRDHLzfKumd+MtHilMkao+mmHarBLXqMrynoQxjDwSc8sXnDxl6/Vp9EXMjNY0DsEntiS6gp+DbEF6+Hf0BEHQ7t8LYWfJdgzl0/OA3IYn9aDHHlSFVPo9mwxczbxFbEDjIXLHSQIp81sPdjXWYgDpdcZRaT88ACA6U3vmCvLHLx+OoRcT9rMBPr+1pC0HiOWByfuYNdJ52gveiLgGNcKhWo1kfvDByKTOEUGqZ2B+VHvdl4A2lE4Kp+wa6lg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=10NddWnpMNRWtTxlaOTJyEk/AzGlbQuZ0DkZ/9+OPQU=;
 b=h3Ghvj9dTqWKufF955a/8MkacTr3hBKpnBg+pjRmc15EoTj8cYUq/ti/RFAgDw3hsQri0AbX1ljGL+Z9qJOIyqO3UgSWG+J2+/2sjLoOjWyzWfjP8lwLrC20Cu26h73R2e9WnLGyDy9Elm6ftFm5E0qQaMXqr9FOr4DOjmEby9KAKQkvsCs0Lg3Q/S+QOYafdD2iiGErB4XBZAVFriozESDzChHOOQdtbaBL407V0LMSUbC/vrAC06wfKnxIZJ8Sg25TLfAU+id0Ey0wWRrSVjNPOKw38b6xBJEbpYF5uXUdKELuSmuCk+pZ1YcqiltXNNff73k+HUxpae9DJi7e2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=10NddWnpMNRWtTxlaOTJyEk/AzGlbQuZ0DkZ/9+OPQU=;
 b=MoRFgKI6XrQbRsYjRInpYWziVkrcOw4vVD+G6kNQzpqBS7cyrKUHiLlqEoz9OC4gOUzAYwjsZzDKbWfEZSvXYoYgxzrtXGu89ElIfSz96UDpz7JB6DrOBbPEaYJnKMwqHSqJEKRylX7gASfaSq0taQzF38vg17ySXYjXFgTEyLk=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
Date: Tue, 16 May 2023 14:51:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
In-Reply-To: <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0592.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:295::9) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB5376:EE_
X-MS-Office365-Filtering-Correlation-Id: 6a7b7455-0fa1-4546-63c6-08db5614998c
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6a7b7455-0fa1-4546-63c6-08db5614998c
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 13:51:08.2124
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: B3pQAyLhZoq6veZqDsBbBx+mly3vevf1HBdihXLWWOFlNGnioqJ4L3WK2pqCgadeSJwXPtvQEmWlRF0qqOEmZhVsXaOLA90X2CE0D/D/dxY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5376

On 16/05/2023 2:06 pm, Jan Beulich wrote:
> On 15.05.2023 16:42, Andrew Cooper wrote:
>> --- a/xen/arch/x86/cpu-policy.c
>> +++ b/xen/arch/x86/cpu-policy.c
>> @@ -408,6 +408,25 @@ static void __init calculate_host_policy(void)
>>      p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
>>  }
>>  
>> +static void __init guest_common_max_feature_adjustments(uint32_t *fs)
>> +{
>> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
>> +    {
>> +        /*
>> +         * MSR_ARCH_CAPS is just feature data, and we can offer it to guests
>> +         * unconditionally, although limit it to Intel systems as it is highly
>> +         * uarch-specific.
>> +         *
>> +         * In particular, the RSBA and RRSBA bits mean "you might migrate to a
>> +         * system where RSB underflow uses alternative predictors (a.k.a
>> +         * Retpoline not safe)", so these need to be visible to a guest in all
>> +         * cases, even when it's only some other server in the pool which
>> +         * suffers the identified behaviour.
>> +         */
>> +        __set_bit(X86_FEATURE_ARCH_CAPS, fs);
>> +    }
>> +}
> The comment reads as if it wasn't applying to "max" only, but rather to
> "default". Reading this I'm therefore now (and perhaps even more so in
> the future, when coming across it) wondering whether it's misplaced,
> and hence whether the commented code is also misplaced and/or wrong.

On migrate-in, we (well - toolstacks that understand multiple hosts)
check the cpu policy the VM saw against the appropriate PV/HVM max
policy to determine whether it can safely run.

So this is very intentionally for the max policy.  We need (I think -
still pending a clarification from Intel, because there's work still
not published) to set RSBA unconditionally, and RRSBA conditional on
EIBRS being available, in max even on pre-Skylake hardware, such that
we can migrate-in a VM which previously ran on Skylake or later hardware.

Activating this by default for VMs is just a case of swapping the CPUID
ARCH_CAPS bit from 'a' to 'A', without any adjustment to this logic.

> Further is even just non-default exposure of all the various bits okay
> to other than Dom0? IOW is there indeed no further adjustment necessary
> to guest_rdmsr()?
>
>> @@ -828,7 +845,10 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>>       * domain policy logic gains a better understanding of MSRs.
>>       */
>>      if ( is_hardware_domain(d) && cpu_has_arch_caps )
>> +    {
>>          p->feat.arch_caps = true;
>> +        p->arch_caps.raw = host_cpu_policy.arch_caps.raw;
>> +    }
> Doesn't this expose all the bits, irrespective of their exposure
> annotations in the public header?

No, because of ...

>  I.e. even more than just the two
> bits that become 'A' in patch 4, but weren't ...
>
>> @@ -858,20 +878,6 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>>          p->platform_info.cpuid_faulting = false;
>>  
>>      recalculate_cpuid_policy(d);

... this recalculate_cpuid_policy() (which was moved in patch 1), which
applies the appropriate pv/hvm max mask over the inherited bits.


More generally, this is how opting in to *all* non-default features needs
to work when it's more than just turning on a single feature bit.  It's
also why doing full-policy levelling in the toolstack is much harder
than it appears on paper.

All domains get the default policy, which zeroes out all non-default
information.  That information has to be recovered from somewhere.
Generally that would be the appropriate max policy, but the host policy is
fine here because there's nothing to do other than apply the appropriate
max mask.

When arch-caps becomes default, the full block feeding arch caps back
into dom0 will be dropped, but there's still a lot of work to do first.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 16 14:00:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:00:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535212.832864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvDg-0005Lu-7U; Tue, 16 May 2023 14:00:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535212.832864; Tue, 16 May 2023 14:00:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvDg-0005LP-1o; Tue, 16 May 2023 14:00:04 +0000
Received: by outflank-mailman (input) for mailman id 535212;
 Tue, 16 May 2023 14:00:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QXU0=BF=kernel.org=jpoimboe@srs-se1.protection.inumbo.net>)
 id 1pyvDe-0004p0-Fl
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:00:02 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f27ee586-f3f1-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 16:00:00 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8D1966389A;
 Tue, 16 May 2023 13:59:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7210AC433D2;
 Tue, 16 May 2023 13:59:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f27ee586-f3f1-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684245599;
	bh=RgXUB8U9LXn46MjhwEIA4Tcw8LGTx3bvjUBiOiBdWtI=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=gU9lgSFE0CgUeEoV2RiiLXuj//7fUOLVHsZIboBuaF10dHQ8x3sHDq0+45y3M8YfA
	 EPHQGYyALRn+EfVIoOfEPiP0It1LWQORSRWEEQiQ5SB2/glfuF3BLUvwINCPa2VsNo
	 NmDodr14LZhbaLcyU2EHBc/QE09ZKNkWzwahw8gvH4QJ3lxyCciFgRnO5gOMK8osJm
	 nq6xT+pZVajU2Acheh9PE9j8Nhf1qciFsEEANVeylt2jBFzr5DaQV9XcTijr/EjFbG
	 8SNQd+LhxpUSI0b0mdeeMRMayImFcbnCSu+nYOMlwRP3Wzy+4HRFv8MuVff9WygS36
	 2Y9S+SCuhxJJg==
Date: Tue, 16 May 2023 06:59:56 -0700
From: Josh Poimboeuf <jpoimboe@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org, Joan Bruguera <joanbrugueram@gmail.com>,
	linux-kernel@vger.kernel.org, Juergen Gross <jgross@suse.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Kees Cook <keescook@chromium.org>, mark.rutland@arm.com,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	=?utf-8?B?SsO2cmcgUsO2ZGVs?= <joro@8bytes.org>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 0/7] x86: retbleed=stuff fixes
Message-ID: <20230516135956.35bnxekprirnv2fc@treble>
References: <20230116142533.905102512@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20230116142533.905102512@infradead.org>

On Mon, Jan 16, 2023 at 03:25:33PM +0100, Peter Zijlstra wrote:
> Hi all,
> 
> Patches to address the various callthunk fails reported by Joan.
> 
> The first two patches are new (and I've temporarily dropped the
> restore_processor_state sealing).
> 
> It is my understanding that AP bringup will always use the 16bit trampoline
> path, if this is not the case, please holler.

Ping?

-- 
Josh


From xen-devel-bounces@lists.xenproject.org Tue May 16 14:06:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:06:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535217.832874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvJl-0006LJ-RN; Tue, 16 May 2023 14:06:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535217.832874; Tue, 16 May 2023 14:06:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvJl-0006LC-NZ; Tue, 16 May 2023 14:06:21 +0000
Received: by outflank-mailman (input) for mailman id 535217;
 Tue, 16 May 2023 14:06:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyvJk-0006L6-Hf
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:06:20 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061c.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d3e4237c-f3f2-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 16:06:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7918.eurprd04.prod.outlook.com (2603:10a6:102:c7::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 14:06:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 14:06:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3e4237c-f3f2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gTmgq401PzgnvLSjHBo7HqQWs4AEALdvu4kGSNEGEntzXwt5o8v8Ye36c+nn8MfPwBhyjcDowL6UFfLEon9qtAjbyyDw1S5ySmgNPgz7VMh6kRPdTRaODfbcSmbCm7iaOPEQEguemHoh7FNnxxJGTx718E55aexNOwEhExV/s1upgzyN7u0zy7om93cQUfzRzU1itrnlqrBYf4v0K3Mto6p2nQM4Wk/M/hC6/p+iPMVPpN4K+USvV7H8EgTZcbw64U+8YcJjUhZeNjQaG79vxLTDSkTUqiDEEnlrLnKKknSCjWleh/SOsUhWGtoZD/s052g5Kj9/G4AT9XO3a0Lyxw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gL+cwAeZ+SJaQComw1l1OSjLzrSEIlC/Xi0oA8oOze8=;
 b=aabXWZnmUWwnFbdRHjJYztCQKsIYWqMyMvggCSf3+ejpGnQUIJ0BF1VGSV+Rzp4wF7oSKz/25+ZCkvfFFOAMdbN5vhntByAKxIqtIGiST8ltlmCYgK3dRIGI9auQ1zHOldzt0UUN+es1gHGrNSS+37w0kqe1XCDo1HIXBGM7Cj6YFVev26O3XardF0RejaRxB3GBzwcU+ZLpRNoMpP6tjkUrcaXJNQhPK4pCTxW5yxE5gxzUNESNETRPYxFw5A78Xn3yH0K2zGM2UwjijbTrOrA63FpPbeIxLMtyyyFNmlaBAHmaOFnXjzP1FnzqxNHxIaBgcYqRwKYlAbV9NQAteA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gL+cwAeZ+SJaQComw1l1OSjLzrSEIlC/Xi0oA8oOze8=;
 b=zw4ZGuWo0D+jtj8gyTD+uvQPS7TU26MOV8QFPgdpSWjqhQ8tNzEeKtzc9PuzAIurQBd/fHDFzcBaqnxEhMlJbrztx9yzABpK4qAnyDvMU7HckfLkaFT2DpjCvZcrjs3rbytsfl0nNnr38pHA/++0frbVNTgb45DeQFAIrtWlESoiKQnFfPOstOBOp56W9tY/O/b4z/BFS+BeoB64v1736mXsc0w8M1t4o0JVQm+GYAOxK5VvjBYFAjOuu8WJ9ONPxsJb/my4BIxX6GLIU0eBHU4vOY0HKFjKV+ZXJv2Yw0JkGdTxhCL6DzmpeQO4BCFLW7Anp3SpFoA4Y3qydA9G6g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1bef2b0e-04e8-2ca0-cf03-f61cdae484a9@suse.com>
Date: Tue, 16 May 2023 16:06:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
 <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0111.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9c::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB7918:EE_
X-MS-Office365-Filtering-Correlation-Id: e2c53c36-6352-4a9b-5d5c-08db5616b6be
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e2c53c36-6352-4a9b-5d5c-08db5616b6be
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 14:06:16.0714
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: n0pEduz3pjQ+ovBh6emLT23HbhKg1k4LpRM8NHNovIfSQMMDxhmh1nwk0pbZsmrpcSrvg6DPbmPZ70tHE+2hFA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7918

On 16.05.2023 15:51, Andrew Cooper wrote:
> On 16/05/2023 2:06 pm, Jan Beulich wrote:
>> On 15.05.2023 16:42, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/cpu-policy.c
>>> +++ b/xen/arch/x86/cpu-policy.c
>>> @@ -408,6 +408,25 @@ static void __init calculate_host_policy(void)
>>>      p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
>>>  }
>>>  
>>> +static void __init guest_common_max_feature_adjustments(uint32_t *fs)
>>> +{
>>> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
>>> +    {
>>> +        /*
>>> +         * MSR_ARCH_CAPS is just feature data, and we can offer it to guests
>>> +         * unconditionally, although limit it to Intel systems as it is highly
>>> +         * uarch-specific.
>>> +         *
>>> +         * In particular, the RSBA and RRSBA bits mean "you might migrate to a
>>> +         * system where RSB underflow uses alternative predictors (a.k.a
>>> +         * Retpoline not safe)", so these need to be visible to a guest in all
>>> +         * cases, even when it's only some other server in the pool which
>>> +         * suffers the identified behaviour.
>>> +         */
>>> +        __set_bit(X86_FEATURE_ARCH_CAPS, fs);
>>> +    }
>>> +}
>> The comment reads as if it wasn't applying to "max" only, but rather to
>> "default". Reading this I'm therefore now (and perhaps even more so in
>> the future, when coming across it) wondering whether it's misplaced,
>> and hence whether the commented code is also misplaced and/or wrong.
> 
> On migrate-in, we (well - toolstacks that understand multiple hosts)
> check the cpu policy the VM saw against the appropriate PV/HVM max
> policy to determine whether it can safely run.
> 
> So this is very intentionally for the max policy.  We need (I think -
> still pending a clarification from Intel because there's pending work
> still not published) to set RSBA unconditionally, and RRSBA conditional
> on EIBRS being available, in max even on pre-Skylake hardware such that
> we can migrate-in a VM which previously ran on Skylake or later hardware.
> 
> Activating this by default for VMs is just a case of swapping the CPUID
> ARCH_CAPS bit from 'a' to 'A', without any adjustment to this logic.

Hmm, I see. Not very intuitive, but I think I follow.

>> Further is even just non-default exposure of all the various bits okay
>> to other than Dom0? IOW is there indeed no further adjustment necessary
>> to guest_rdmsr()?

With your reply further down also sufficiently clarifying things for
me (in particular pointing out the one oversight of mine), the question
above is the sole part remaining before I'd be okay giving my R-b here.

Jan

>>> @@ -828,7 +845,10 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>>>       * domain policy logic gains a better understanding of MSRs.
>>>       */
>>>      if ( is_hardware_domain(d) && cpu_has_arch_caps )
>>> +    {
>>>          p->feat.arch_caps = true;
>>> +        p->arch_caps.raw = host_cpu_policy.arch_caps.raw;
>>> +    }
>> Doesn't this expose all the bits, irrespective of their exposure
>> annotations in the public header?
> 
> No, because of ...
> 
>>  I.e. even more than just the two
>> bits that become 'A' in patch 4, but weren't ...
>>
>>> @@ -858,20 +878,6 @@ void __init init_dom0_cpuid_policy(struct domain *d)
>>>          p->platform_info.cpuid_faulting = false;
>>>  
>>>      recalculate_cpuid_policy(d);
> 
> ... this recalculate_cpuid_policy() (which was moved in patch 1), which
> applies the appropriate pv/hvm max mask over the inherited bits.
> 
> 
> More generally, this is how *all* opting-into-non-default features needs
> to work when it's more than just turning on a single feature bit.  It's
> also why doing full-policy levelling in the toolstack is much harder
> than it appears on paper.
> 
> All domains get the default policy, so zero out all non-default
> information.  It has to be recovered from somewhere.  Generally that
> would be the appropriate max policy, but the host policy here is fine
> because there's nothing to do other than applying the appropriate max mask.
> 
> When arch-caps becomes default, the full block feeding arch caps back
> into dom0 will be dropped, but there's still a lot of work to do first.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Tue May 16 14:16:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:16:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535226.832883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvTZ-0007ub-SP; Tue, 16 May 2023 14:16:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535226.832883; Tue, 16 May 2023 14:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvTZ-0007uU-PR; Tue, 16 May 2023 14:16:29 +0000
Received: by outflank-mailman (input) for mailman id 535226;
 Tue, 16 May 2023 14:16:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyvTY-0007uO-N2
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:16:28 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3cd656c9-f3f4-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 16:16:25 +0200 (CEST)
Received: from mail-dm6nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 May 2023 10:16:22 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH0PR03MB6397.namprd03.prod.outlook.com (2603:10b6:510:aa::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May
 2023 14:16:19 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6387.030; Tue, 16 May 2023
 14:16:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cd656c9-f3f4-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684246585;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=lelNDFBLsXA4NmuwpGtTfsFxaCGCvygrwm+5b2qA2NU=;
  b=EVh18G6zc+h7UMBIwGRIAlbKeKEZUazS9+ykbAMn/kdPzdylWPINfhx/
   5HKwpbr5sb+mRfT/pyzLtQzdsLHw5PuxxG/lUDY47ComaPoKQoLT01xl3
   hCMN6vWTaXgRJz14IkFPryWbJIy0IsDwqo3Oy4jFhh4ko4+7NSbvzUuPq
   0=;
X-IronPort-RemoteIP: 104.47.57.177
X-IronPort-MID: 109238001
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E6BBNaYp2FNbb/zCoRgWvs4QBA13VuoxC7oVrmscDUQB2baINzimrM57aH5q1wWhQ4VPkjc/YNl4CGnUHoQTQmU/3AJzz747KN7dcM9IOwU8tyK+y0q04hNXdYH7fIPx3i66Y5lCw4ILgsLJqHTFlidhjMiBWUtwZ3HuyfjjmWyw7chGKd1zbW8gqsjW6K5y95pT2z+WgE7G6D9NBOapizJ+pvjVCYV2pWDQx9bVrIDdHfkI4adsZ+iQnPdlLHg7p1YUVZ1eF0t9kDBuWP44oWSl2qceLFCSDSx2D/Q/cepbz4XVsHVjwbFzn+9i7/cmhKWWbsjD17jyJKiNVfOUVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=VAwGYWF5pFRzkGozLRQbwCbMCrKb58pgpqIbImSxPGQ=;
 b=KWzC1uaDhzcAmUAL/xXFeP+J6Wvzxv9RJQ6PQVUAhcIweMrFp9m2S1kTJ41REgutZDAYK2VBamqoS93LwOEDkw7X/GZCwyjlTf7AZ4V4wKzMDeTN6AtYFMJ2nganGUs0rODadlYWh/Izv7N8N9ymHHXhONPfrCt7rsZLpddEcy3ILCpCqWnLhaBrLHOCoylE7QjXE4LIv235gIXTBU/ITC/RFkVYNlH5HHX9e+M8cJcCnK19aXNPTKn3dE2DCIuPHubASqdgchKKElbPj6mTHPMeUWhMlzu0vw8PDaQ2Jhkfe2UZFMATbo11pbXu4nV4Se4KvEU+aXgBT+50pd2bFw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VAwGYWF5pFRzkGozLRQbwCbMCrKb58pgpqIbImSxPGQ=;
 b=oh2KTIP+ol3dDiv8B4ps7yxyivORKgVZHA3m9xq1lrQ08LJIJUGqz3g+B+rqBo54sJ25gJc7llAtbnm7BUjti5FVpHhs2DU1OmMPej7uVjQw1mIHymMCtpyZSK8kYT5g9MHB1OPaYGV6F49XlR6a3U9A73lWJ9YXPnLrHJOcmgY=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <b1c56e56-90cc-0f37-4c8a-df1217339e59@citrix.com>
Date: Tue, 16 May 2023 15:16:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
 <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
 <1bef2b0e-04e8-2ca0-cf03-f61cdae484a9@suse.com>
In-Reply-To: <1bef2b0e-04e8-2ca0-cf03-f61cdae484a9@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0051.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:152::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH0PR03MB6397:EE_
X-MS-Office365-Filtering-Correlation-Id: 7450d755-6d17-401b-162d-08db56181e3b
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7450d755-6d17-401b-162d-08db56181e3b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 14:16:19.1384
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: AYdQ4lIbeHrF3i5zP4+6trfr81feMJ+z1VrKskQeziLd+AdXBbsoh0agscx0EvT/ONjqPnQqCIWSRHHr1tAjnvyO4mkKRDwOxXQS9XM1uTI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6397

On 16/05/2023 3:06 pm, Jan Beulich wrote:
> On 16.05.2023 15:51, Andrew Cooper wrote:
>> On 16/05/2023 2:06 pm, Jan Beulich wrote:
>>> On 15.05.2023 16:42, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/cpu-policy.c
>>>> +++ b/xen/arch/x86/cpu-policy.c
>>>> @@ -408,6 +408,25 @@ static void __init calculate_host_policy(void)
>>>>      p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
>>>>  }
>>>>  
>>>> +static void __init guest_common_max_feature_adjustments(uint32_t *fs)
>>>> +{
>>>> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
>>>> +    {
>>>> +        /*
>>>> +         * MSR_ARCH_CAPS is just feature data, and we can offer it to guests
>>>> +         * unconditionally, although limit it to Intel systems as it is highly
>>>> +         * uarch-specific.
>>>> +         *
>>>> +         * In particular, the RSBA and RRSBA bits mean "you might migrate to a
>>>> +         * system where RSB underflow uses alternative predictors (a.k.a
>>>> +         * Retpoline not safe)", so these need to be visible to a guest in all
>>>> +         * cases, even when it's only some other server in the pool which
>>>> +         * suffers the identified behaviour.
>>>> +         */
>>>> +        __set_bit(X86_FEATURE_ARCH_CAPS, fs);
>>>> +    }
>>>> +}
>>> The comment reads as if it wasn't applying to "max" only, but rather to
>>> "default". Reading this I'm therefore now (and perhaps even more so in
>>> the future, when coming across it) wondering whether it's misplaced,
>>> and hence whether the commented code is also misplaced and/or wrong.
>> On migrate-in, we (well - toolstacks that understand multiple hosts)
>> check the cpu policy the VM saw against the appropriate PV/HVM max
>> policy to determine whether it can safely run.
>>
>> So this is very intentionally for the max policy.  We need (I think -
>> still pending a clarification from Intel because there's pending work
>> still not published) to set RSBA unconditionally, and RRSBA conditional
>> on EIBRS being available, in max even on pre-Skylake hardware such that
>> we can migrate-in a VM which previously ran on Skylake or later hardware.
>>
>> Activating this by default for VMs is just a case of swapping the CPUID
>> ARCH_CAPS bit from 'a' to 'A', without any adjustment to this logic.
> Hmm, I see. Not very intuitive, but I think I follow.
>
>>> Further is even just non-default exposure of all the various bits okay
>>> to other than Dom0? IOW is there indeed no further adjustment necessary
>>> to guest_rdmsr()?
> With your reply further down also sufficiently clarifying things for
> me (in particular pointing out the one oversight of mine), the question
> above is the sole part remaining before I'd be okay giving my R-b here.

Oh sorry.  Yes, it is sufficient.  Because VMs (other than dom0) don't
get the ARCH_CAPS CPUID bit, reads of MSR_ARCH_CAPS will #GP.

Right now, you can set cpuid = "host:arch-caps" in an xl.cfg file.  If
you do this, you get to keep both pieces, as you'll end up advertising
the MSR but with a value of 0 because of the note in patch 4.  libxl
still only understands the xend CPUID format and can't express any MSR
data at all.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 16 14:24:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:24:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535232.832894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvbB-0000x2-Kv; Tue, 16 May 2023 14:24:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535232.832894; Tue, 16 May 2023 14:24:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvbB-0000wv-Hu; Tue, 16 May 2023 14:24:21 +0000
Received: by outflank-mailman (input) for mailman id 535232;
 Tue, 16 May 2023 14:24:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a0wm=BF=citrix.com=prvs=4936e02c6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pyvbA-0000wp-Hg
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:24:20 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 564ea919-f3f5-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 16:24:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 564ea919-f3f5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684247057;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=asvGL2SZQNz6yR0hbwCPfuEFQwUqbyPvuKCqorCCv1M=;
  b=GXp1aubu76yPhlXykh0KJW7FGixSCPOZubevudU3O///RgZAQ88rfdBn
   B9nqDehha5qhnJ2UGD9T44cOFRNiQQEkgEJEIuU+rnF9qTtk70PlYfE/Q
   77LK4l9plTnQFPPpyZlWConJFwbOKszi13GN8GQjlArS8U2ThVlgYcY3z
   Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109113447
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
Date: Tue, 16 May 2023 15:24:07 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
CC: <qemu-devel@nongnu.org>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, <qemu-block@nongnu.org>, Paul Durrant
	<paul@xen.org>, Peter Lieven <pl@kamp.de>, Stefan Weil <sw@weilnetz.de>, Xie
 Yongji <xieyongji@bytedance.com>, Kevin Wolf <kwolf@redhat.com>, Paolo
 Bonzini <pbonzini@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>, Hanna Reitz
	<hreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, Richard
 Henderson <richard.henderson@linaro.org>, David Woodhouse
	<dwmw2@infradead.org>, Coiby Xu <Coiby.Xu@gmail.com>, Eduardo Habkost
	<eduardo@habkost.net>, Stefano Garzarella <sgarzare@redhat.com>, Philippe
 =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>, Daniel
 =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>, Julia Suvorova
	<jusual@redhat.com>, <xen-devel@lists.xenproject.org>, <eesposit@redhat.com>,
	Juan Quintela <quintela@redhat.com>, "Richard W.M. Jones"
	<rjones@redhat.com>, Fam Zheng <fam@euphon.net>, Marcel Apfelbaum
	<marcel.apfelbaum@gmail.com>
Subject: Re: [PATCH v5 12/21] xen-block: implement
 BlockDevOps->drained_begin()
Message-ID: <c760c8bd-49f3-4be0-b01c-9afd1efa619c@perard>
References: <20230504195327.695107-1-stefanha@redhat.com>
 <20230504195327.695107-13-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230504195327.695107-13-stefanha@redhat.com>

On Thu, May 04, 2023 at 03:53:18PM -0400, Stefan Hajnoczi wrote:
> Detach event channels during drained sections to stop I/O submission
> from the ring. xen-block is no longer reliant on aio_disable_external()
> after this patch. This will allow us to remove the
> aio_disable_external() API once all other code that relies on it is
> converted.
> 
> Extend xen_device_set_event_channel_context() to allow ctx=NULL. The
> event channel still exists but the event loop does not monitor the file
> descriptor. Event channel processing can resume by calling
> xen_device_set_event_channel_context() with a non-NULL ctx.
> 
> Factor out xen_device_set_event_channel_context() calls in
> hw/block/dataplane/xen-block.c into attach/detach helper functions.
> Incidentally, these don't require the AioContext lock because
> aio_set_fd_handler() is thread-safe.
> 
> It's safer to register BlockDevOps after the dataplane instance has been
> created. The BlockDevOps .drained_begin/end() callbacks depend on the
> dataplane instance, so move the blk_set_dev_ops() call after
> xen_block_dataplane_create().
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
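[Illustration: the detach/attach pattern described in the commit message can be modeled with a small sketch. This is hypothetical Python, not QEMU code; `XenEventChannel`, `XenBlockDataplane`, and `set_event_channel_context` here are simplified stand-ins for the C types and helpers the patch touches.]

```python
# Toy model of the xen-block drained_begin/end pattern: a None context
# means the event channel still exists, but its file descriptor is no
# longer monitored, so no new I/O is submitted from the ring.

class XenEventChannel:
    def __init__(self):
        self.ctx = None  # event loop context the fd is registered with (or None)

def set_event_channel_context(channel, ctx):
    # Stand-in for xen_device_set_event_channel_context(): ctx=None
    # unregisters the fd handler, a non-None ctx re-registers it.
    channel.ctx = ctx

class XenBlockDataplane:
    def __init__(self, ctx, channel):
        self.ctx = ctx
        self.channel = channel

    def attach(self):
        # helper called from the .drained_end() callback
        set_event_channel_context(self.channel, self.ctx)

    def detach(self):
        # helper called from the .drained_begin() callback
        set_event_channel_context(self.channel, None)

dp = XenBlockDataplane(ctx="iothread-ctx", channel=XenEventChannel())
dp.attach()
assert dp.channel.ctx == "iothread-ctx"   # fd monitored: I/O can flow
dp.detach()                               # drained section begins
assert dp.channel.ctx is None             # fd not monitored: no new I/O
dp.attach()                               # drained section ends
assert dp.channel.ctx == "iothread-ctx"
```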

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 16 14:25:42 2023
Date: Tue, 16 May 2023 15:25:09 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
CC: <qemu-devel@nongnu.org>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, <qemu-block@nongnu.org>, Paul Durrant
	<paul@xen.org>, Peter Lieven <pl@kamp.de>, Stefan Weil <sw@weilnetz.de>, Xie
 Yongji <xieyongji@bytedance.com>, Kevin Wolf <kwolf@redhat.com>, Paolo
 Bonzini <pbonzini@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>, Hanna Reitz
	<hreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, Richard
 Henderson <richard.henderson@linaro.org>, David Woodhouse
	<dwmw2@infradead.org>, Coiby Xu <Coiby.Xu@gmail.com>, Eduardo Habkost
	<eduardo@habkost.net>, Stefano Garzarella <sgarzare@redhat.com>, Philippe
 =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>, Daniel
 =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>, Julia Suvorova
	<jusual@redhat.com>, <xen-devel@lists.xenproject.org>, <eesposit@redhat.com>,
	Juan Quintela <quintela@redhat.com>, "Richard W.M. Jones"
	<rjones@redhat.com>, Fam Zheng <fam@euphon.net>, Marcel Apfelbaum
	<marcel.apfelbaum@gmail.com>
Subject: Re: [PATCH v5 13/21] hw/xen: do not set is_external=true on evtchn
 fds
Message-ID: <dec567c6-5850-48b5-89f9-676c0160389b@perard>
References: <20230504195327.695107-1-stefanha@redhat.com>
 <20230504195327.695107-14-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230504195327.695107-14-stefanha@redhat.com>

On Thu, May 04, 2023 at 03:53:19PM -0400, Stefan Hajnoczi wrote:
> is_external=true suspends fd handlers between aio_disable_external() and
> aio_enable_external(). The block layer's drain operation uses this
> mechanism to prevent new I/O from sneaking in between
> bdrv_drained_begin() and bdrv_drained_end().
> 
> The previous commit converted the xen-block device to use BlockDevOps
> .drained_begin/end() callbacks. It no longer relies on is_external=true
> so it is safe to pass is_external=false.
> 
> This is part of ongoing work to remove the aio_disable_external() API.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
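[Illustration: the `is_external` mechanism being retired here can be modeled as follows. This is a simplified Python sketch of the semantics, not QEMU's actual event-loop code; the class and method names are stand-ins.]

```python
# Minimal model of aio_disable_external(): handlers registered with
# is_external=True are skipped while the external-disable count is > 0,
# which is how drain used to keep new I/O out of a drained section.

class AioContext:
    def __init__(self):
        self.handlers = []           # (callback, is_external) pairs
        self.external_disable_cnt = 0

    def set_fd_handler(self, cb, is_external):
        self.handlers.append((cb, is_external))

    def disable_external(self):
        self.external_disable_cnt += 1

    def enable_external(self):
        self.external_disable_cnt -= 1

    def dispatch(self):
        fired = []
        for cb, is_external in self.handlers:
            if is_external and self.external_disable_cnt > 0:
                continue             # suspended during a drained section
            fired.append(cb())
        return fired

ctx = AioContext()
ctx.set_fd_handler(lambda: "evtchn", is_external=False)  # this patch: False
ctx.set_fd_handler(lambda: "other", is_external=True)
ctx.disable_external()
assert ctx.dispatch() == ["evtchn"]  # only the external handler is suspended
ctx.enable_external()
assert ctx.dispatch() == ["evtchn", "other"]
```

Once no handler passes `is_external=True`, the disable/enable calls have nothing left to suspend, which is what makes removing the API possible.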

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 16 14:41:19 2023
Message-ID: <f0e6ca10-2142-7c39-0c7b-042c454e7e08@amd.com>
Date: Tue, 16 May 2023 16:40:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: xen cache colors in ARM
To: Oleg Nikitenko <oleshiiwood@gmail.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>,
	<Stewart.Hildebrand@amd.com>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com>
 <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
 <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com>
 <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
 <CA+SAi2tErcaAkRT5zhTwSE=-jszwAWNtEAnm5jNGEP1NoqbQ3w@mail.gmail.com>
 <53af4bc6-97ad-d806-003b-38e70ccd2b58@amd.com>
 <CA+SAi2vrB714Tc9dn4SbtDo3VrT3Q8OpSiFXRLRaO5=0BJo_rQ@mail.gmail.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2vrB714Tc9dn4SbtDo3VrT3Q8OpSiFXRLRaO5=0BJo_rQ@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hi Oleg,

On 16/05/2023 14:15, Oleg Nikitenko wrote:
>
> Hello,
> 
> Thanks a lot Michal.
> 
> Then the next question.
> When I just started my experiments with Xen, Stefano mentioned that each cache color's size is 256M.
> Is it possible to extend this figure ?
With 16 colors (e.g. on Cortex-A53) and 4GB of memory, each color covers roughly 256M (i.e. 4GB/16 = 256M).
So, as you can see, this figure depends on the number of colors and on the memory size.

~Michal
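[Illustration: the arithmetic above is simple enough to check. This is illustrative Python, not Xen code; `color_size` is a hypothetical helper.]

```python
# Per-color share of DRAM under cache coloring: memory is partitioned
# evenly across the available colors, so each color's share is mem/colors.

def color_size(mem_bytes: int, num_colors: int) -> int:
    return mem_bytes // num_colors

GiB = 1 << 30
MiB = 1 << 20
assert color_size(4 * GiB, 16) == 256 * MiB   # the 256M figure quoted above
assert color_size(2 * GiB, 16) == 128 * MiB   # less memory -> smaller share
```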

> 
> Regards,
> Oleg
> 
> Mon, 15 May 2023 at 11:57, Michal Orzel <michal.orzel@amd.com>:
> 
>     Hi Oleg,
> 
>     On 15/05/2023 10:51, Oleg Nikitenko wrote:
>     >
>     > Hello guys,
>     >
>     > Thanks a lot.
>     > After a long problem list I was able to run xen with Dom0 with a cache color.
>     > One more question from my side.
>     > I want to run a guest with color mode too.
>     > I inserted a string into guest config file llc-colors = "9-13"
>     > I got an error
>     > [  457.517004] loop0: detected capacity change from 0 to 385840
>     > Parsing config from /xen/red_config.cfg
>     > /xen/red_config.cfg:26: config parsing error near `-colors': lexical error
>     > warning: Config file looks like it contains Python code.
>     > warning:  Arbitrary Python is no longer supported.
>     > warning:  See https://wiki.xen.org/wiki/PythonInXlConfig
>     > Failed to parse config: Invalid argument
>     > So this is a question.
>     > Is it possible to assign a color mode for the DomU by config file ?
>     > If so, what string should I use?
>     Please, always refer to the relevant documentation. In this case, for xl.cfg:
>     https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/man/xl.cfg.5.pod.in#L2890
> 
>     ~Michal
> 
>     >
>     > Regards,
>     > Oleg
>     >
>     > Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com>:
>     >
>     >     Hi Michal,
>     >
>     >     Thanks.
>     >     This compilation previously had a name CONFIG_COLORING.
>     >     It mixed me up.
>     >
>     >     Regards,
>     >     Oleg
>     >
>     > Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com>:
>     >
>     >         Hi Oleg,
>     >
>     >         On 11/05/2023 12:02, Oleg Nikitenko wrote:
>     >         >
>     >         > Hello,
>     >         >
>     >         > Thanks Stefano.
>     >         > Then the next question.
>     >         > I cloned the xen repo from the Xilinx site: https://github.com/Xilinx/xen.git
>     >         > I managed to build a xlnx_rebase_4.17 branch in my environment.
>     >         > I did it without coloring first. I did not find any color footprints at this branch.
>     >         > I realized coloring is not in the xlnx_rebase_4.17 branch yet.
>     >         This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the docs:
>     >         https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
>     >
>     >         It describes the feature and documents the required properties.
>     >
>     >         ~Michal
>     >
>     >         >
>     >         >
>     >         > Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org>:
>     >         >
>     >         >     We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
>     >         >     (twice a year) is tested with cache coloring enabled. The last Petalinux
>     >         >     release is 2023.1 and the kernel used is this:
>     >         >     https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
>     >         >
>     >         >
>     >         >     On Tue, 9 May 2023, Oleg Nikitenko wrote:
>     >         >     > Hello guys,
>     >         >     >
>     >         >     > I have a couple of more questions.
>     >         >     > Have you ever run xen with the cache coloring at Zynq UltraScale+ MPSoC zcu102 xczu15eg ?
>     >         >     > When did you run xen with the cache coloring last time ?
>     >         >     > What kernel version did you use for Dom0 when you ran xen with the cache coloring last time ?
>     >         >     >
>     >         >     > Regards,
>     >         >     > Oleg
>     >         >     >
>     >         >     > Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:
>     >         >     >       Hi Michal,
>     >         >     >
>     >         >     > Thanks.
>     >         >     >
>     >         >     > Regards,
>     >         >     > Oleg
>     >         >     >
>     >         >     > Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
>     >         >     >       Hi Oleg,
>     >         >     >
>     >         >     >       Replying, so that you do not need to wait for Stefano.
>     >         >     >
>     >         >     >       On 05/05/2023 10:28, Oleg Nikitenko wrote:
>     >         >     >       >
>     >         >     >       > Hello Stefano,
>     >         >     >       >
>     >         >     >       > I would like to try the xen cache coloring feature from this repo: https://xenbits.xen.org/git-http/xen.git
>     >         >     >       > Could you tell me what branch I should use?
>     >         >     >       Cache coloring feature is not part of the upstream tree and it is still under review.
>     >         >     >       You can only find it integrated in the Xilinx Xen tree.
>     >         >     >
>     >         >     >       ~Michal
>     >         >     >
>     >         >     >       >
>     >         >     >       > Regards,
>     >         >     >       > Oleg
>     >         >     >       >
>     >         >     >       > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org>:
>     >         >     >       >
>     >         >     >       >     I am familiar with the zcu102 but I don't know how you could possibly
>     >         >     >       >     generate a SError.
>     >         >     >       >
>     >         >     >       >     I suggest to try to use ImageBuilder [1] to generate the boot
>     >         >     >       >     configuration as a test because that is known to work well for zcu102.
>     >         >     >       >
>     >         >     >       >     [1] https://gitlab.com/xen-project/imagebuilder
>     >         >     >       >
>     >         >     >       >
>     >         >     >       >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
>     >         >     >       >     > Hello Stefano,
>     >         >     >       >     >
>     >         >     >       >     > Thanks for clarification.
>     >         >     >       >     > We use neither ImageBuilder nor a u-boot boot script.
>     >         >     >       >     > A model is zcu102 compatible.
>     >         >     >       >     >
>     >         >     >       >     > Regards,
>     >         >     >       >     > O.
>     >         >     >       >     >
>     >         >     >       >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
>     >         >     >       >     >       This is interesting. Are you using Xilinx hardware by any chance? If so,
>     >         >     >       >     >       which board?
>     >         >     >       >     >
>     >         >     >       >     >       Are you using ImageBuilder to generate your boot.scr boot script? If so,
>     >         >     >       >     >       could you please post your ImageBuilder config file? If not, can you
>     >         >     >       >     >       post the source of your uboot boot script?
>     >         >     >       >     >
>     >         >     >       >     >       SErrors are supposed to be related to a hardware failure of some kind.
>     >         >     >       >     >       You are not supposed to be able to trigger an SError easily by
>     >         >     >       >     >       "mistake". I have not seen SErrors due to wrong cache coloring
>     >         >     >       >     >       configurations on any Xilinx board before.
>     >         >     >       >     >
>     >         >     >       >     >       The differences between Xen with and without cache coloring from a
>     >         >     >       >     >       hardware perspective are:
>     >         >     >       >     >
>     >         >     >       >     >       - With cache coloring, the SMMU is enabled and does address translations
>     >         >     >       >     >         even for dom0. Without cache coloring the SMMU could be disabled, and
>     >         >     >       >     >         if enabled, the SMMU doesn't do any address translations for Dom0. If
>     >         >     >       >     >         there is a hardware failure related to SMMU address translation it
>     >         >     >       >     >         could only trigger with cache coloring. This would be my normal
>     >         >     >       >     >         suggestion for you to explore, but the failure happens too early
>     >         >     >       >     >         before any DMA-capable device is programmed. So I don't think this can
>     >         >     >       >     >         be the issue.
>     >         >     >       >     >
>     >         >     >       >     >       - With cache coloring, the memory allocation is very different so you'll
>     >         >     >       >     >         end up using different DDR regions for Dom0. So if your DDR is
>     >         >     >       >     >         defective, you might only see a failure with cache coloring enabled
>     >         >     >       >     >         because you end up using different regions.
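As a purely illustrative sketch of how the two configurations differ at boot: the coloring setup is typically driven by extra Xen command-line options. The option names below (`way_size`, `dom0_colors`) are assumptions taken from the out-of-tree cache-coloring series and may differ in your tree, so verify them against the documentation of the series you applied.

```shell
# Hypothetical Xen boot arguments for a cache-colored setup.
# Option names are assumptions from the coloring patch series --
# check docs/misc/ in your patched tree before use.
xen_colored_args="console=dtuart dom0_mem=1600M way_size=65536 dom0_colors=0-7"

# Baseline for comparison: same arguments without coloring.
xen_plain_args="console=dtuart dom0_mem=1600M"
```

Booting alternately with and without the coloring options (and varying the color selection) helps narrow down whether the SError correlates with the different DDR regions chosen by the colored allocator.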
>     >         >     >       >     >
>     >         >     >       >     >
>     >         >     >       >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
>     >         >     >       >     >       > Hi Stefano,
>     >         >     >       >     >       >
>     >         >     >       >     >       > Thank you.
>     >         >     >       >     >       > If I build Xen without color support, this error does not occur.
>     >         >     >       >     >       > All the domains boot well.
>     >         >     >       >     >       > Hence it cannot be a hardware issue.
>     >         >     >       >     >       > The panic occurred while unpacking the rootfs.
>     >         >     >       >     >       > I have attached the boot log of Xen/Dom0 without coloring.
>     >         >     >       >     >       > The highlighted strings are printed exactly after the point where the panic first occurred.
>     >         >     >       >     >       >
>     >         >     >       >     >       >  Xen 4.16.1-pre
>     >         >     >       >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
>     >         >     >       >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
>     >         >     >       >     >       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
>     >         >     >       >     >       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03,rev 0x4
>     >         >     >       >     >       > (XEN) 64-bit Execution:
>     >         >     >       >     >       > (XEN)   Processor Features: 0000000000002222 0000000000000000
>     >         >     >       >     >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
>     >         >     >       >     >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
>     >         >     >       >     >       > (XEN)   Debug Features: 0000000010305106 0000000000000000
>     >         >     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
>     >         >     >       >     >       > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
>     >         >     >       >     >       > (XEN)   ISA Features:  0000000000011120 0000000000000000
>     >         >     >       >     >       > (XEN) 32-bit Execution:
>     >         >     >       >     >       > (XEN)   Processor Features: 0000000000000131:0000000000011011
>     >         >     >       >     >       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
>     >         >     >       >     >       > (XEN)     Extensions: GenericTimer Security
>     >         >     >       >     >       > (XEN)   Debug Features: 0000000003010066
>     >         >     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000
>     >         >     >       >     >       > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
>     >         >     >       >     >       > (XEN)                          0000000001260000 0000000002102211
>     >         >     >       >     >       > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
>     >         >     >       >     >       > (XEN)                 0000000001112131 0000000000011142 0000000000011121
>     >         >     >       >     >       > (XEN) Using SMC Calling Convention v1.2
>     >         >     >       >     >       > (XEN) Using PSCI v1.1
>     >         >     >       >     >       > (XEN) SMP: Allowing 4 CPUs
>     >         >     >       >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
>     >         >     >       >     >       > (XEN) GICv2 initialization:
>     >         >     >       >     >       > (XEN)         gic_dist_addr=00000000f9010000
>     >         >     >       >     >       > (XEN)         gic_cpu_addr=00000000f9020000
>     >         >     >       >     >       > (XEN)         gic_hyp_addr=00000000f9040000
>     >         >     >       >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
>     >         >     >       >     >       > (XEN)         gic_maintenance_irq=25
>     >         >     >       >     >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
>     >         >     >       >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
>     >         >     >       >     >       > (XEN) Using scheduler: null Scheduler (null)
>     >         >     >       >     >       > (XEN) Initializing null scheduler
>     >         >     >       >     >       > (XEN) WARNING: This is experimental software in development.
>     >         >     >       >     >       > (XEN) Use at your own risk.
>     >         >     >       >     >       > (XEN) Allocated console ring of 32 KiB.
>     >         >     >       >     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
>     >         >     >       >     >       > (XEN) Bringing up CPU1
>     >         >     >       >     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
>     >         >     >       >     >       > (XEN) CPU 1 booted.
>     >         >     >       >     >       > (XEN) Bringing up CPU2
>     >         >     >       >     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
>     >         >     >       >     >       > (XEN) CPU 2 booted.
>     >         >     >       >     >       > (XEN) Bringing up CPU3
>     >         >     >       >     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
>     >         >     >       >     >       > (XEN) Brought up 4 CPUs
>     >         >     >       >     >       > (XEN) CPU 3 booted.
>     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
>     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
>     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
>     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
>     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
>     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
>     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
>     >         >     >       >     >       > (XEN) I/O virtualisation enabled
>     >         >     >       >     >       > (XEN)  - Dom0 mode: Relaxed
>     >         >     >       >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>     >         >     >       >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>     >         >     >       >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>     >         >     >       >     >       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
>     >         >     >       >     >       > (XEN) *** LOADING DOMAIN 0 ***
>     >         >     >       >     >       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
>     >         >     >       >     >       > (XEN) Loading ramdisk from boot module @ 0000000002000000
>     >         >     >       >     >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
>     >         >     >       >     >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
>     >         >     >       >     >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
>     >         >     >       >     >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
>     >         >     >       >     >       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
>     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
>     >         >     >       >     >       > (XEN) Allocating PPI 16 for event channel interrupt
>     >         >     >       >     >       > (XEN) Extended region 0: 0x81200000->0xa0000000
>     >         >     >       >     >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
>     >         >     >       >     >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
>     >         >     >       >     >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
>     >         >     >       >     >       > (XEN) Extended region 4: 0x100000000->0x600000000
>     >         >     >       >     >       > (XEN) Extended region 5: 0x880000000->0x8000000000
>     >         >     >       >     >       > (XEN) Extended region 6: 0x8001000000->0x10000000000
>     >         >     >       >     >       > (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
>     >         >     >       >     >       > (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
>     >         >     >       >     >       > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
>     >         >     >       >     >       > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>     >         >     >       >     >       > (XEN) Std. Loglevel: All
>     >         >     >       >     >       > (XEN) Guest Loglevel: All
>     >         >     >       >     >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
>     >         >     >       >     >       > (XEN) null.c:353: 0 <-- d0v0
>     >         >     >       >     >       > (XEN) Freed 356kB init memory.
>     >         >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
>     >         >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
>     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
>     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
>     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
>     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
>     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
>     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>     >         >     >       >     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
>     >         >     >       >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
>     >         >     >       >     >       > [    0.000000] Machine model: D14 Viper Board - White Unit
>     >         >     >       >     >       > [    0.000000] Xen 4.16 support found
>     >         >     >       >     >       > [    0.000000] Zone ranges:
>     >         >     >       >     >       > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
>     >         >     >       >     >       > [    0.000000]   DMA32    empty
>     >         >     >       >     >       > [    0.000000]   Normal   empty
>     >         >     >       >     >       > [    0.000000] Movable zone start for each node
>     >         >     >       >     >       > [    0.000000] Early memory node ranges
>     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
>     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
>     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
>     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
>     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
>     >         >     >       >     >       > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
>     >         >     >       >     >       > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
>     >         >     >       >     >       > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
>     >         >     >       >     >       > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
>     >         >     >       >     >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
>     >         >     >       >     >       > [    0.000000] psci: probing for conduit method from DT.
>     >         >     >       >     >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
>     >         >     >       >     >       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
>     >         >     >       >     >       > [    0.000000] psci: Trusted OS migration not required
>     >         >     >       >     >       > [    0.000000] psci: SMC Calling Convention v1.1
>     >         >     >       >     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
>     >         >     >       >     >       > [    0.000000] Detected VIPT I-cache on CPU0
>     >         >     >       >     >       > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
>     >         >     >       >     >       > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
>     >         >     >       >     >       > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
>     >         >     >       >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
>     >         >     >       >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
>     >         >     >       >     >       > [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
>     >         >     >       >     >       > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
>     >         >     >       >     >       > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
>     >         >     >       >     >       > [    0.000000] mem auto-init: clearing system memory may take some time...
>     >         >     >       >     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
>     >         >     >       >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
>     >         >     >       >     >       > [    0.000000] rcu: Hierarchical RCU implementation.
>     >         >     >       >     >       > [    0.000000] rcu: RCU event tracing is enabled.
>     >         >     >       >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
>     >         >     >       >     >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
>     >         >     >       >     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
>     >         >     >       >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
>     >         >     >       >     >       > [    0.000000] Root IRQ handler: gic_handle_irq
>     >         >     >       >     >       > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
>     >         >     >       >     >       > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
>     >         >     >       >     >       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
>     >         >     >       >     >       > [    0.000258] Console: colour dummy device 80x25
>     >         >     >       >     >       > [    0.310231] printk: console [hvc0] enabled
>     >         >     >       >     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
>     >         >     >       >     >       > [    0.324851] pid_max: default: 32768 minimum: 301
>     >         >     >       >     >       > [    0.329706] LSM: Security Framework initializing
>     >         >     >       >     >       > [    0.334204] Yama: becoming mindful.
>     >         >     >       >     >       > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>     >         >     >       >     >       > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
>     >         >     >       >     >       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
>     >         >     >       >     >       > [    0.359132] Grant table initialized
>     >         >     >       >     >       > [    0.362664] xen:events: Using FIFO-based ABI
>     >         >     >       >     >       > [    0.366993] Xen: initializing cpu0
>     >         >     >       >     >       > [    0.370515] rcu: Hierarchical SRCU implementation.
>     >         >     >       >     >       > [    0.375930] smp: Bringing up secondary CPUs ...
>     >         >     >       >     >       > (XEN) null.c:353: 1 <-- d0v1
>     >         >     >       >     >       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
>     >         >     >       >     >       > [    0.382549] Detected VIPT I-cache on CPU1
>     >         >     >       >     >       > [    0.388712] Xen: initializing cpu1
>     >         >     >       >     >       > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
>     >         >     >       >     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
>     >         >     >       >     >       > [    0.406941] SMP: Total of 2 processors activated.
>     >         >     >       >     >       > [    0.411698] CPU features: detected: 32-bit EL0 Support
>     >         >     >       >     >       > [    0.416888] CPU features: detected: CRC32 instructions
>     >         >     >       >     >       > [    0.422121] CPU: All CPU(s) started at EL1
>     >         >     >       >     >       > [    0.426248] alternatives: patching kernel code
>     >         >     >       >     >       > [    0.431424] devtmpfs: initialized
>     >         >     >       >     >       > [    0.441454] KASLR enabled
>     >         >     >       >     >       > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
>     >         >     >       >     >       > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
>     >         >     >       >     >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
>     >         >     >       >     >       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
>     >         >     >       >     >       > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
>     >         >     >       >     >       > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
>     >         >     >       >     >       > [    0.519478] audit: initializing netlink subsys (disabled)
>     >         >     >       >     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
>     >         >     >       >     >       > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
>     >         >     >       >     >       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
>     >         >     >       >     >       > [    0.545608] ASID allocator initialised with 32768 entries
>     >         >     >       >     >       > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
>     >         >     >       >     >       > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
>     >         >     >       >     >       > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>     >         >     >       >     >       > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
>     >         >     >       >     >       > [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
>     >         >     >       >     >       > [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
>     >         >     >       >     >       > [    0.636520] DRBG: Continuing without Jitter RNG
>     >         >     >       >     >       > [    0.737187] raid6: neonx8   gen()  2143 MB/s
>     >         >     >       >     >       > [    0.805294] raid6: neonx8   xor()  1589 MB/s
>     >         >     >       >     >       > [    0.873406] raid6: neonx4   gen()  2177 MB/s
>     >         >     >       >     >       > [    0.941499] raid6: neonx4   xor()  1556 MB/s
>     >         >     >       >     >       > [    1.009612] raid6: neonx2   gen()  2072 MB/s
>     >         >     >       >     >       > [    1.077715] raid6: neonx2   xor()  1430 MB/s
>     >         >     >       >     >       > [    1.145834] raid6: neonx1   gen()  1769 MB/s
>     >         >     >       >     >       > [    1.213935] raid6: neonx1   xor()  1214 MB/s
>     >         >     >       >     >       > [    1.282046] raid6: int64x8  gen()  1366 MB/s
>     >         >     >       >     >       > [    1.350132] raid6: int64x8  xor()   773 MB/s
>     >         >     >       >     >       > [    1.418259] raid6: int64x4  gen()  1602 MB/s
>     >         >     >       >     >       > [    1.486349] raid6: int64x4  xor()   851 MB/s
>     >         >     >       >     >       > [    1.554464] raid6: int64x2  gen()  1396 MB/s
>     >         >     >       >     >       > [    1.622561] raid6: int64x2  xor()   744 MB/s
>     >         >     >       >     >       > [    1.690687] raid6: int64x1  gen()  1033 MB/s
>     >         >     >       >     >       > [    1.758770] raid6: int64x1  xor()   517 MB/s
>     >         >     >       >     >       > [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
>     >         >     >       >     >       > [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
>     >         >     >       >     >       > [    1.767957] raid6: using neon recovery algorithm
>     >         >     >       >     >       > [    1.772824] xen:balloon: Initialising balloon driver
>     >         >     >       >     >       > [    1.778021] iommu: Default domain type: Translated
>     >         >     >       >     >       > [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
>     >         >     >       >     >       > [    1.789149] SCSI subsystem initialized
>     >         >     >       >     >       > [    1.792820] usbcore: registered new interface driver usbfs
>     >         >     >       >     >       > [    1.798254] usbcore: registered new interface driver hub
>     >         >     >       >     >       > [    1.803626] usbcore: registered new device driver usb
>     >         >     >       >     >       > [    1.808761] pps_core: LinuxPPS API ver. 1 registered
>     >         >     >       >     >       > [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
>     >         >     >       >     >       > [    1.822903] PTP clock support registered
>     >         >     >       >     >       > [    1.826893] EDAC MC: Ver: 3.0.0
>     >         >     >       >     >       > [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
>     >         >     >       >     >       > [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
>     >         >     >       >     >       > [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
>     >         >     >       >     >       > [    1.855907] FPGA manager framework
>     >         >     >       >     >       > [    1.859952] clocksource: Switched to clocksource arch_sys_counter
>     >         >     >       >     >       > [    1.871712] NET: Registered PF_INET protocol family
>     >         >     >       >     >       > [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
>     >         >     >       >     >       > [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
>     >         >     >       >     >       > [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
>     >         >     >       >     >       > [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
>     >         >     >       >     >       > [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
>     >         >     >       >     >       > [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
>     >         >     >       >     >       > [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
>     >         >     >       >     >       > [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
>     >         >     >       >     >       > [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
>     >         >     >       >     >       > [    1.936834] RPC: Registered named UNIX socket transport module.
>     >         >     >       >     >       > [    1.942342] RPC: Registered udp transport module.
>     >         >     >       >     >       > [    1.947088] RPC: Registered tcp transport module.
>     >         >     >       >     >       > [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
>     >         >     >       >     >       > [    1.958334] PCI: CLS 0 bytes, default 64
>     >         >     >       >     >       > [    1.962709] Trying to unpack rootfs image as initramfs...
>     >         >     >       >     >       > [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
>     >         >     >       >     >       > [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
>     >         >     >       >     >       > [    2.021045] NET: Registered PF_ALG protocol family
>     >         >     >       >     >       > [    2.021122] xor: measuring software checksum speed
>     >         >     >       >     >       > [    2.029347]    8regs           :  2366 MB/sec
>     >         >     >       >     >       > [    2.033081]    32regs          :  2802 MB/sec
>     >         >     >       >     >       > [    2.038223]    arm64_neon      :  2320 MB/sec
>     >         >     >       >     >       > [    2.038385] xor: using function: 32regs (2802 MB/sec)
>     >         >     >       >     >       > [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
>     >         >     >       >     >       > [    2.050959] io scheduler mq-deadline registered
>     >         >     >       >     >       > [    2.055521] io scheduler kyber registered
>     >         >     >       >     >       > [    2.068227] xen:xen_evtchn: Event-channel device installed
>     >         >     >       >     >       > [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
>     >         >     >       >     >       > [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
>     >         >     >       >     >       > [    2.085548] brd: module loaded
>     >         >     >       >     >       > [    2.089290] loop: module loaded
>     >         >     >       >     >       > [    2.089341] Invalid max_queues (4), will use default max: 2.
>     >         >     >       >     >       > [    2.094565] tun: Universal TUN/TAP device driver, 1.6
>     >         >     >       >     >       > [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
>     >         >     >       >     >       > [    2.104156] usbcore: registered new interface driver rtl8150
>     >         >     >       >     >       > [    2.109813] usbcore: registered new interface driver r8152
>     >         >     >       >     >       > [    2.115367] usbcore: registered new interface driver asix
>     >         >     >       >     >       > [    2.120794] usbcore: registered new interface driver ax88179_178a
>     >         >     >       >     >       > [    2.126934] usbcore: registered new interface driver cdc_ether
>     >         >     >       >     >       > [    2.132816] usbcore: registered new interface driver cdc_eem
>     >         >     >       >     >       > [    2.138527] usbcore: registered new interface driver net1080
>     >         >     >       >     >       > [    2.144256] usbcore: registered new interface driver cdc_subset
>     >         >     >       >     >       > [    2.150205] usbcore: registered new interface driver zaurus
>     >         >     >       >     >       > [    2.155837] usbcore: registered new interface driver cdc_ncm
>     >         >     >       >     >       > [    2.161550] usbcore: registered new interface driver r8153_ecm
>     >         >     >       >     >       > [    2.168240] usbcore: registered new interface driver cdc_acm
>     >         >     >       >     >       > [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
>     >         >     >       >     >       > [    2.181358] usbcore: registered new interface driver uas
>     >         >     >       >     >       > [    2.186547] usbcore: registered new interface driver usb-storage
>     >         >     >       >     >       > [    2.192643] usbcore: registered new interface driver ftdi_sio
>     >         >     >       >     >       > [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
>     >         >     >       >     >       > [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
>     >         >     >       >     >       > [    2.215332] i2c_dev: i2c /dev entries driver
>     >         >     >       >     >       > [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
>     >         >     >       >     >       > [    2.225923] device-mapper: uevent: version 1.0.3
>     >         >     >       >     >       > [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
>     >         >     >       >     >       > [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
>     >         >     >       >     >       > [    2.249405] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV ff960000.memory-controller (INTERRUPT)
>     >         >     >       >     >       > [    2.261719] sdhci: Secure Digital Host Controller Interface driver
>     >         >     >       >     >       > [    2.267487] sdhci: Copyright(c) Pierre Ossman
>     >         >     >       >     >       > [    2.271890] sdhci-pltfm: SDHCI platform and OF driver helper
>     >         >     >       >     >       > [    2.278157] ledtrig-cpu: registered to indicate activity on CPUs
>     >         >     >       >     >       > [    2.283816] zynqmp_firmware_probe Platform Management API v1.1
>     >         >     >       >     >       > [    2.289554] zynqmp_firmware_probe Trustzone version v1.0
>     >         >     >       >     >       > [    2.327875] securefw securefw: securefw probed
>     >         >     >       >     >       > [    2.328324] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)
>     >         >     >       >     >       > [    2.332563] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
>     >         >     >       >     >       > [    2.341183] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)
>     >         >     >       >     >       > [    2.347667] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
>     >         >     >       >     >       > [    2.353003] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
>     >         >     >       >     >       > [    2.362605] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
>     >         >     >       >     >       > [    2.366540] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
>     >         >     >       >     >       > [    2.372525] viper-vdpp a4000000.vdpp: Device Tree Probing
>     >         >     >       >     >       > [    2.377778] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>     >         >     >       >     >       > [    2.386432] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
>     >         >     >       >     >       > [    2.394094] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
>     >         >     >       >     >       > [    2.399854] viper-vdpp-net a5000000.vdpp_net: Device registered
>     >         >     >       >     >       > [    2.405931] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
>     >         >     >       >     >       > [    2.412037] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Event Count: 32
>     >         >     >       >     >       > [    2.420856] default preset
>     >         >     >       >     >       > [    2.423797] viper-vdpp-stat a8000000.vdpp_stat: Device registered
>     >         >     >       >     >       > [    2.430054] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
>     >         >     >       >     >       > [    2.435948] viper-vdpp-rng ac000000.vdpp_rng: Device registered
>     >         >     >       >     >       > [    2.441976] vmcu driver init
>     >         >     >       >     >       > [    2.444922] VMCU: : (240:0) registered
>     >         >     >       >     >       > [    2.444956] In K81 Updater init
>     >         >     >       >     >       > [    2.449003] pktgen: Packet Generator for packet performance testing. Version: 2.75
>     >         >     >       >     >       > [    2.468833] Initializing XFRM netlink socket
>     >         >     >       >     >       > [    2.468902] NET: Registered PF_PACKET protocol family
>     >         >     >       >     >       > [    2.472729] Bridge firewalling registered
>     >         >     >       >     >       > [    2.476785] 8021q: 802.1Q VLAN Support v1.8
>     >         >     >       >     >       > [    2.481341] registered taskstats version 1
>     >         >     >       >     >       > [    2.486394] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
>     >         >     >       >     >       > [    2.503145] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq = 36, base_baud = 6250000) is a xuartps
>     >         >     >       >     >       > [    2.507103] of-fpga-region fpga-full: FPGA Region probed
>     >         >     >       >     >       > [    2.512986] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.520267] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.528239] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.536152] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.544153] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.552127] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.560178] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.567987] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.576018] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.583889] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe success
>     >         >     >       >     >       > [    2.946379] spi-nor spi0.0: mt25qu512a (131072 Kbytes)
>     >         >     >       >     >       > [    2.946467] 2 fixed-partitions partitions found on MTD device spi0.0
>     >         >     >       >     >       > [    2.952393] Creating 2 MTD partitions on "spi0.0":
>     >         >     >       >     >       > [    2.957231] 0x000004000000-0x000008000000 : "bank A"
>     >         >     >       >     >       > [    2.963332] 0x000000000000-0x000004000000 : "bank B"
>     >         >     >       >     >       > [    2.968694] macb ff0b0000.ethernet: Not enabling partial store and forward
>     >         >     >       >     >       > [    2.975333] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25 (18:41:fe:0f:ff:02)
>     >         >     >       >     >       > [    2.984472] macb ff0c0000.ethernet: Not enabling partial store and forward
>     >         >     >       >     >       > [    2.992144] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26 (18:41:fe:0f:ff:03)
>     >         >     >       >     >       > [    3.001043] viper_enet viper_enet: Viper power GPIOs initialised
>     >         >     >       >     >       > [    3.007313] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
>     >         >     >       >     >       > [    3.014914] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
>     >         >     >       >     >       > [    3.022138] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
>     >         >     >       >     >       > [    3.030274] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
>     >         >     >       >     >       > [    3.037785] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
>     >         >     >       >     >       > [    3.045301] viper_enet viper_enet: Viper enet registered
>     >         >     >       >     >       > [    3.050958] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
>     >         >     >       >     >       > [    3.057135] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
>     >         >     >       >     >       > [    3.063538] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
>     >         >     >       >     >       > [    3.069920] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
>     >         >     >       >     >       > [    3.097729] si70xx: probe of 2-0040 failed with error -5
>     >         >     >       >     >       > [    3.098042] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
>     >         >     >       >     >       > [    3.105111] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
>     >         >     >       >     >       > [    3.112457] viper-tamper viper-tamper: Device registered
>     >         >     >       >     >       > [    3.117593] active_bank active_bank: boot bank: 1
>     >         >     >       >     >       > [    3.122184] active_bank active_bank: boot mode: (0x02) qspi32
>     >         >     >       >     >       > [    3.128247] viper-vdpp a4000000.vdpp: Device Tree Probing
>     >         >     >       >     >       > [    3.133439] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
>     >         >     >       >     >       > [    3.142151] viper-vdpp a4000000.vdpp: Tamper handler registered
>     >         >     >       >     >       > [    3.147438] viper-vdpp a4000000.vdpp: Device registered
>     >         >     >       >     >       > [    3.153007] lpc55_l2 spi1.0: registered handler for protocol 0
>     >         >     >       >     >       > [    3.158582] lpc55_user lpc55_user: The major number for your device is 236
>     >         >     >       >     >       > [    3.165976] lpc55_l2 spi1.0: registered handler for protocol 1
>     >         >     >       >     >       > [    3.181999] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >         >     >       >     >       > [    3.182856] rtc-lpc55 rtc_lpc55: registered as rtc0
>     >         >     >       >     >       > [    3.188656] lpc55_l2 spi1.0: (2) mcu still not ready?
>     >         >     >       >     >       > [    3.193744] lpc55_l2 spi1.0: (3) mcu still not ready?
>     >         >     >       >     >       > [    3.198848] lpc55_l2 spi1.0: (4) mcu still not ready?
>     >         >     >       >     >       > [    3.202932] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
>     >         >     >       >     >       > [    3.210689] lpc55_l2 spi1.0: (5) mcu still not ready?
>     >         >     >       >     >       > [    3.215694] lpc55_l2 spi1.0: rx error: -110
>     >         >     >       >     >       > [    3.284438] mmc0: new HS200 MMC card at address 0001
>     >         >     >       >     >       > [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
>     >         >     >       >     >       > [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
>     >         >     >       >     >       > [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
>     >         >     >       >     >       > [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
>     >         >     >       >     >       > [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
>     >         >     >       >     >       > [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >         >     >       >     >       > [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
>     >         >     >       >     >       > [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
>     >         >     >       >     >       > [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
>     >         >     >       >     >       > [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
>     >         >     >       >     >       > [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
>     >         >     >       >     >       > [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
>     >         >     >       >     >       > [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
>     >         >     >       >     >       > [    3.624224] rtc-rv3028 0-0052: registered as rtc1
>     >         >     >       >     >       > [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
>     >         >     >       >     >       > [    3.633253] lpc55_l2 spi1.0: rx error: -110
>     >         >     >       >     >       > [    3.639104] k81_bootloader 0-0010: probe
>     >         >     >       >     >       > [    3.641628] VMCU: : (235:0) registered
>     >         >     >       >     >       > [    3.641635] k81_bootloader 0-0010: probe completed
>     >         >     >       >     >       > [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
>     >         >     >       >     >       > [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
>     >         >     >       >     >       > [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
>     >         >     >       >     >       > [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
>     >         >     >       >     >       > [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
>     >         >     >       >     >       > [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
>     >         >     >       >     >       > [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
>     >         >     >       >     >       > [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
>     >         >     >       >     >       > [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
>     >         >     >       >     >       > [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
>     >         >     >       >     >       > [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
>     >         >     >       >     >       > [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
>     >         >     >       >     >       > [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
>     >         >     >       >     >       > [    3.737549] sfp_register_socket: got sfp_bus
>     >         >     >       >     >       > [    3.740709] sfp_register_socket: register sfp_bus
>     >         >     >       >     >       > [    3.745459] sfp_register_bus: ops ok!
>     >         >     >       >     >       > [    3.749179] sfp_register_bus: Try to attach
>     >         >     >       >     >       > [    3.753419] sfp_register_bus: Attach succeeded
>     >         >     >       >     >       > [    3.757914] sfp_register_bus: upstream ops attach
>     >         >     >       >     >       > [    3.762677] sfp_register_bus: Bus registered
>     >         >     >       >     >       > [    3.766999] sfp_register_socket: register sfp_bus succeeded
>     >         >     >       >     >       > [    3.775870] of_cfs_init
>     >         >     >       >     >       > [    3.776000] of_cfs_init: OK
>     >         >     >       >     >       > [    3.778211] clk: Not disabling unused clocks
>     >         >     >       >     >       > [   11.278477] Freeing initrd memory: 206056K
>     >         >     >       >     >       > [   11.279406] Freeing unused kernel memory: 1536K
>     >         >     >       >     >       > [   11.314006] Checked W+X mappings: passed, no W+X pages found
>     >         >     >       >     >       > [   11.314142] Run /init as init process
>     >         >     >       >     >       > INIT: version 3.01 booting
>     >         >     >       >     >       > fsck (busybox 1.35.0)
>     >         >     >       >     >       > /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
>     >         >     >       >     >       > /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
>     >         >     >       >     >       > /dev/mmcblk0p3 was not cleanly unmounted, check forced.
>     >         >     >       >     >       > /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
>     >         >     >       >     >       > [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
>     >         >     >       >     >       > Starting random number generator daemon.
>     >         >     >       >     >       > [   11.580662] random: crng init done
>     >         >     >       >     >       > Starting udev
>     >         >     >       >     >       > [   11.613159] udevd[142]: starting version 3.2.10
>     >         >     >       >     >       > [   11.620385] udevd[143]: starting eudev-3.2.10
>     >         >     >       >     >       > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
>     >         >     >       >     >       > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
>     >         >     >       >     >       > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
>     >         >     >       >     >       > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >         >     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>     >         >     >       >     >       > Mon Feb 27 08:40:53 UTC 2023
>     >         >     >       >     >       > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
>     >         >     >       >     >       > hwclock: RTC_SET_TIME: Invalid exchange
>     >         >     >       >     >       > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >         >     >       >     >       > Starting mcud
>     >         >     >       >     >       > INIT: Entering runlevel: 5
>     >         >     >       >     >       > Configuring network interfaces... done.
>     >         >     >       >     >       > resetting network interface
>     >         >     >       >     >       > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>     >         >     >       >     >       > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
>     >         >     >       >     >       > [   12.732151] pps pps0: new PPS source ptp0
>     >         >     >       >     >       > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
>     >         >     >       >     >       > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
>     >         >     >       >     >       > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
>     >         >     >       >     >       > [   12.761804] pps pps1: new PPS source ptp1
>     >         >     >       >     >       > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
>     >         >     >       >     >       > Auto-negotiation: off
>     >         >     >       >     >       > Auto-negotiation: off
>     >         >     >       >     >       > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
>     >         >     >       >     >       > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
>     >         >     >       >     >       > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
>     >         >     >       >     >       > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
>     >         >     >       >     >       > Starting Failsafe Secure Shell server in port 2222: sshd
>     >         >     >       >     >       > done.
>     >         >     >       >     >       > Starting rpcbind daemon...done.
>     >         >     >       >     >       >
>     >         >     >       >     >       > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
>     >         >     >       >     >       > hwclock: RTC_RD_TIME: Invalid exchange
>     >         >     >       >     >       > Starting State Manager Service
>     >         >     >       >     >       > Start state-manager restarter...
>     >         >     >       >     >       > (XEN) d0v1 Forwarding AES operation: 3254779951
>     >         >     >       >     >       > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
>     >         >     >       >     >       > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
>     >         >     >       >     >       > [   17.350670] BTRFS info (device dm-0): has skinny extents
>     >         >     >       >     >       > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
>     >         >     >       >     >       > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
>     >         >     >       >     >       > [   17.872699] BTRFS info (device dm-1): using free space tree
>     >         >     >       >     >       > [   17.872771] BTRFS info (device dm-1): has skinny extents
>     >         >     >       >     >       > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
>     >         >     >       >     >       > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
>     >         >     >       >     >       > [   17.895695] BTRFS info (device dm-1): checking UUID tree
>     >         >     >       >     >       >
>     >         >     >       >     >       > Setting domain 0 name, domid and JSON config...
>     >         >     >       >     >       > Done setting up Dom0
>     >         >     >       >     >       > Starting xenconsoled...
>     >         >     >       >     >       > Starting QEMU as disk backend for dom0
>     >         >     >       >     >       > Starting domain watchdog daemon: xenwatchdogd startup
>     >         >     >       >     >       >
>     >         >     >       >     >       > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
>     >         >     >       >     >       > [done]
>     >         >     >       >     >       > [   18.465552] BTRFS info (device dm-2): using free space tree
>     >         >     >       >     >       > [   18.465629] BTRFS info (device dm-2): has skinny extents
>     >         >     >       >     >       > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
>     >         >     >       >     >       > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
>     >         >     >       >     >       > [   18.486659] BTRFS info (device dm-2): checking UUID tree
>     >         >     >       >     >       > OK
>     >         >     >       >     >       > starting rsyslogd ... Log partition ready after 0 poll loops
>     >         >     >       >     >       > done
>     >         >     >       >     >       > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
>     >         >     >       >     >       > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>     >         >     >       >     >       >
>     >         >     >       >     >       > Please insert USB token and enter your role in login prompt.
>     >         >     >       >     >       >
>     >         >     >       >     >       > login:
>     >         >     >       >     >       >
>     >         >     >       >     >       > Regards,
>     >         >     >       >     >       > O.
>     >         >     >       >     >       >
>     >         >     >       >     >       >
>     >         >     >       >     >       > Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org>:
>     >         >     >       >     >       >       Hi Oleg,
>     >         >     >       >     >       >
>     >         >     >       >     >       >       Here is the issue from your logs:
>     >         >     >       >     >       >
>     >         >     >       >     >       >       SError Interrupt on CPU0, code 0xbe000000 -- SError
>     >         >     >       >     >       >
>     >         >     >       >     >       >       SErrors are special signals that notify software of serious hardware
>     >         >     >       >     >       >       errors.  Something is going very wrong, and defective hardware is one
>     >         >     >       >     >       >       possibility.  Another possibility is software accessing address ranges
>     >         >     >       >     >       >       that it is not supposed to; that can sometimes cause SErrors.
>     >         >     >       >     >       >
>     >         >     >       >     >       >       Cheers,
>     >         >     >       >     >       >
>     >         >     >       >     >       >       Stefano
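[Editor's note: the `code 0xbe000000` Stefano quotes is the ESR_ELx syndrome value the kernel prints with the SError. As a minimal sketch (field offsets assumed to follow the public Arm ESR_ELx layout: EC in bits [31:26], IL in bit [25], ISS in bits [24:0], with EC 0x2F meaning "SError interrupt"), it can be unpacked like this:]

```python
# Minimal sketch: unpack an AArch64 ESR_ELx syndrome value such as the
# 0xbe000000 printed in the SError log above.
# Assumed layout (check against the Arm ARM): EC = bits [31:26],
# IL = bit [25], ISS = bits [24:0]; EC 0x2F indicates an SError interrupt.

def decode_esr(esr: int) -> dict:
    ec = (esr >> 26) & 0x3F      # Exception Class
    il = (esr >> 25) & 0x1       # Instruction Length bit (RES1 here)
    iss = esr & 0x1FFFFFF        # Instruction Specific Syndrome
    return {
        "ec": ec,
        "is_serror": ec == 0x2F,
        "il": il,
        "iss": iss,
        # An all-zero ISS means the hardware reported no further detail
        # about the SError ("uncategorized"), which is common on SoCs
        # that do not implement a detailed RAS syndrome.
        "uncategorized": ec == 0x2F and iss == 0,
    }

print(decode_esr(0xBE000000))
```

For 0xbe000000 this yields EC = 0x2F (SError) with an all-zero ISS, i.e. the CPU gives no further detail, which is consistent with Stefano's advice that the cause has to be found by elimination (hardware fault vs. a stray access to a forbidden address range).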
>     >         >     >       >     >       >
>     >         >     >       >     >       >
>     >         >     >       >     >       >
>     >         >     >       >     >       >       On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>     >         >     >       >     >       >
>     >         >     >       >     >       >       > Hello,
>     >         >     >       >     >       >       >
>     >         >     >       >     >       >       > Thanks guys.
>     >         >     >       >     >       >       > I found out where the problem was.
>     >         >     >       >     >       >       > Now dom0 boots further, but I have hit a new problem:
>     >         >     >       >     >       >       > a kernel panic during Dom0 loading.
>     >         >     >       >     >       >       > Maybe someone is able to suggest something?
>     >         >     >       >     >       >       >
>     >         >     >       >     >       >       > Regards,
>     >         >     >       >     >       >       > O.
>     >         >     >       >     >       >       >
>     >         >     >       >     >       >       > [    3.771362] sfp_register_bus: upstream ops attach
>     >         >     >       >     >       >       > [    3.776119] sfp_register_bus: Bus registered
>     >         >     >       >     >       >       > [    3.780459] sfp_register_socket: register sfp_bus succeeded
>     >         >     >       >     >       >       > [    3.789399] of_cfs_init
>     >         >     >       >     >       >       > [    3.789499] of_cfs_init: OK
>     >         >     >       >     >       >       > [    3.791685] clk: Not disabling unused clocks
>     >         >     >       >     >       >       > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>     >         >     >       >     >       >       > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>     >         >     >       >     >       >       > [   11.010393] Workqueue: events_unbound async_run_entry_fn
>     >         >     >       >     >       >       > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>     >         >     >       >     >       >       > [   11.010422] pc : simple_write_end+0xd0/0x130
>     >         >     >       >     >       >       > [   11.010431] lr : generic_perform_write+0x118/0x1e0
>     >         >     >       >     >       >       > [   11.010438] sp : ffffffc00809b910
>     >         >     >       >     >       >       > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>     >         >     >       >     >       >       > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
>     >         >     >       >     >       >       > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
>     >         >     >       >     >       >       > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
>     >         >     >       >     >       >       > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
>     >         >     >       >     >       >       > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
>     >         >     >       >     >       >       > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
>     >         >     >       >     >       >       > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
>     >         >     >       >     >       >       > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
>     >         >     >       >     >       >       > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
>     >         >     >       >     >       >       > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
>     >         >     >       >     >       >       > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>     >         >     >       >     >       >       > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
>     >         >     >       >     >       >       > [   11.010548] Workqueue: events_unbound async_run_entry_fn
>     >         >     >       >     >       >       > [   11.010556] Call trace:
>     >         >     >       >     >       >       > [   11.010558]  dump_backtrace+0x0/0x1c4
>     >         >     >       >     >       >       > [   11.010567]  show_stack+0x18/0x2c
>     >         >     >       >     >       >       > [   11.010574]  dump_stack_lvl+0x7c/0xa0
>     >         >     >       >     >       >       > [   11.010583]  dump_stack+0x18/0x34
>     >         >     >       >     >       >       > [   11.010588]  panic+0x14c/0x2f8
>     >         >     >       >     >       >       > [   11.010597]  print_tainted+0x0/0xb0
>     >         >     >       >     >       >       > [   11.010606]  arm64_serror_panic+0x6c/0x7c
>     >         >     >       >     >       >       > [   11.010614]  do_serror+0x28/0x60
>     >         >     >       >     >       >       > [   11.010621]  el1h_64_error_handler+0x30/0x50
>     >         >     >       >     >       >       > [   11.010628]  el1h_64_error+0x78/0x7c
>     >         >     >       >     >       >       > [   11.010633]  simple_write_end+0xd0/0x130
>     >         >     >       >     >       >       > [   11.010639]  generic_perform_write+0x118/0x1e0
>     >         >     >       >     >       >       > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
>     >         >     >       >     >       >       > [   11.010650]  generic_file_write_iter+0x78/0xd0
>     >         >     >       >     >       >       > [   11.010656]  __kernel_write+0xfc/0x2ac
>     >         >     >       >     >       >       > [   11.010665]  kernel_write+0x88/0x160
>     >         >     >       >     >       >       > [   11.010673]  xwrite+0x44/0x94
>     >         >     >       >     >       >       > [   11.010680]  do_copy+0xa8/0x104
>     >         >     >       >     >       >       > [   11.010686]  write_buffer+0x38/0x58
>     >         >     >       >     >       >       > [   11.010692]  flush_buffer+0x4c/0xbc
>     >         >     >       >     >       >       > [   11.010698]  __gunzip+0x280/0x310
>     >         >     >       >     >       >       > [   11.010704]  gunzip+0x1c/0x28
>     >         >     >       >     >       >       > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
>     >         >     >       >     >       >       > [   11.010715]  do_populate_rootfs+0x80/0x164
>     >         >     >       >     >       >       > [   11.010722]  async_run_entry_fn+0x48/0x164
>     >         >     >       >     >       >       > [   11.010728]  process_one_work+0x1e4/0x3a0
>     >         >     >       >     >       >       > [   11.010736]  worker_thread+0x7c/0x4c0
>     >         >     >       >     >       >       > [   11.010743]  kthread+0x120/0x130
>     >         >     >       >     >       >       > [   11.010750]  ret_from_fork+0x10/0x20
>     >         >     >       >     >       >       > [   11.010757] SMP: stopping secondary CPUs
>     >         >     >       >     >       >       > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
>     >         >     >       >     >       >       > [   11.010788] PHYS_OFFSET: 0x0
>     >         >     >       >     >       >       > [   11.010790] CPU features: 0x00000401,00000842
>     >         >     >       >     >       >       > [   11.010795] Memory Limit: none
>     >         >     >       >     >       >       > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
>     >         >     >       >     >       >       >
>     >         >     >       >     >       >       > On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
>     >         >     >       >     >       >       >       Hi Oleg,
>     >         >     >       >     >       >       >
>     >         >     >       >     >       >       >       On 21/04/2023 14:49, Oleg Nikitenko wrote:
>     >         >     >       >     >       >       >       >       
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       > Hello Michal,
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       > I was not able to enable earlyprintk in Xen for now.
>     >         >     >       >     >       >       >       > I decided to take another approach.
>     >         >     >       >     >       >       >       > This is the Xen command line, which I dumped in full:
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
>     >         >     >       >     >       >       >       Yes, adding a printk() in Xen was also a good idea.
>     >         >     >       >     >       >       >
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       > So you are absolutely right about the command line.
>     >         >     >       >     >       >       >       > Now I am going to find out why Xen did not get the correct parameters from the device tree.
>     >         >     >       >     >       >       >       Maybe you will find this document helpful:
>     >         >     >       >     >       >       >       https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>     >         >     >       >     >       >       >
>     >         >     >       >     >       >       >       ~Michal
>     >         >     >       >     >       >       >
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       > Regards,
>     >         >     >       >     >       >       >       > Oleg
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       >     On 21/04/2023 10:04, Oleg Nikitenko wrote:
>     >         >     >       >     >       >       >       >     >       
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     > Hello Michal,
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     > Yes, I use yocto.
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     > Yesterday all day long I tried to follow your suggestions.
>     >         >     >       >     >       >       >       >     > I faced a problem.
>     >         >     >       >     >       >       >       >     > I manually pasted these strings into the Xen build config file:
>     >         >     >       >     >       >       >       >     In the .config file or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>     >         >     >       >     >       >       >       >     You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     > CONFIG_EARLY_PRINTK
>     >         >     >       >     >       >       >       >     > CONFIG_EARLY_PRINTK_ZYNQMP
>     >         >     >       >     >       >       >       >     > CONFIG_EARLY_UART_CHOICE_CADENCE
>     >         >     >       >     >       >       >       >     I hope you added =y to them.
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       >     Anyway, you have at least the following solutions:
>     >         >     >       >     >       >       >       >     1) Run bitbake xen -c menuconfig to properly set early printk
>     >         >     >       >     >       >       >       >     2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y that is not enabled by default)
>     >         >     >       >     >       >       >       >     3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>     >         >     >       >     >       >       >       >     CONFIG_EARLY_PRINTK_ZYNQMP=y
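[Editor's note: option 3 above can be sketched as a one-liner. This is only an illustration, assuming a local Xen source checkout whose layout matches the upstream tree; the `XEN_SRC` variable is hypothetical, and in a Yocto build the edit would go into the fetched source (or a patch) instead.]

```shell
# Sketch of option 3: append the early-printk option to the arm64 defconfig.
# XEN_SRC is an assumed path to the Xen sources; the defconfig path follows
# the upstream Xen 4.16 tree layout referenced in this thread.
XEN_SRC=${XEN_SRC:-./xen}
mkdir -p "$XEN_SRC/arch/arm/configs"   # no-op on a real checkout
echo 'CONFIG_EARLY_PRINTK_ZYNQMP=y' >> "$XEN_SRC/arch/arm/configs/arm64_defconfig"
# Show the appended option for a quick sanity check
grep 'EARLY_PRINTK' "$XEN_SRC/arch/arm/configs/arm64_defconfig"
```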
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       >     ~Michal
>     >         >     >       >     >       >       >       >
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     > The host hangs at build time.
>     >         >     >       >     >       >       >       >     > Maybe I did not set something in the build config file?
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     > Regards,
>     >         >     >       >     >       >       >       >     > Oleg
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >     Thanks Michal,
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >     You gave me an idea.
>     >         >     >       >     >       >       >       >     >     I am going to try it today.
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >     Regards,
>     >         >     >       >     >       >       >       >     >     O.
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >     On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >         Thanks Stefano.
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >         I am going to do it today.
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >         Regards,
>     >         >     >       >     >       >       >       >     >         O.
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >         On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >             On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>     >         >     >       >     >       >       >       >     >             > Hi Michal,
>     >         >     >       >     >       >       >       >     >             >
>     >         >     >       >     >       >       >       >     >             > I corrected xen's command line.
>     >         >     >       >     >       >       >       >     >             > Now it is
>     >         >     >       >     >       >       >       >     >             > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >             4 colors is way too many for xen, just do xen_colors=0-0. There is no
>     >         >     >       >     >       >       >       >     >             advantage in using more than 1 color for Xen.
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >             4 colors is too few for dom0 if you are giving 1600M of memory to Dom0.
>     >         >     >       >     >       >       >       >     >             Each color is 256M. For 1600M you should give at least 7 colors. Try:
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >             xen_colors=0-0 dom0_colors=1-8
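[Editor's note: the sizing advice above (256M per color, at least 7 colors for 1600M) can be checked with quick arithmetic. This is only a back-of-envelope sketch using the figures quoted in this thread, not a statement about the coloring implementation itself.]

```shell
# With each color covering 256 MiB, dom0_mem=1600M needs
# ceil(1600/256) colors, so dom0_colors=1-8 (8 colors) is sufficient.
color_mib=256
dom0_mib=1600
needed=$(( (dom0_mib + color_mib - 1) / color_mib ))   # ceiling division
echo "colors needed for ${dom0_mib}M: ${needed}"        # prints 7
```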
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >
>     >         >     >       >     >       >       >       >     >             > Unfortunately the result was the same.
>     >         >     >       >     >       >       >       >     >             >
>     >         >     >       >     >       >       >       >     >             > (XEN)  - Dom0 mode: Relaxed
>     >         >     >       >     >       >       >       >     >             > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>     >         >     >       >     >       >       >       >     >             > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>     >         >     >       >     >       >       >       >     >             > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>     >         >     >       >     >       >       >       >     >             > (XEN) Coloring general information
>     >         >     >       >     >       >       >       >     >             > (XEN) Way size: 64kB
>     >         >     >       >     >       >       >       >     >             > (XEN) Max. number of colors available: 16
>     >         >     >       >     >       >       >       >     >             > (XEN) Xen color(s): [ 0 ]
>     >         >     >       >     >       >       >       >     >             > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>     >         >     >       >     >       >       >       >     >             > (XEN) Color array allocation failed for dom0
>     >         >     >       >     >       >       >       >     >             > (XEN)
>     >         >     >       >     >       >       >       >     >             > (XEN) ****************************************
>     >         >     >       >     >       >       >       >     >             > (XEN) Panic on CPU 0:
>     >         >     >       >     >       >       >       >     >             > (XEN) Error creating domain 0
>     >         >     >       >     >       >       >       >     >             > (XEN) ****************************************
>     >         >     >       >     >       >       >       >     >             > (XEN)
>     >         >     >       >     >       >       >       >     >             > (XEN) Reboot in five seconds...
>     >         >     >       >     >       >       >       >     >             >
>     >         >     >       >     >       >       >       >     >             > I am going to find out how the command line arguments are passed and parsed.
>     >         >     >       >     >       >       >       >     >             >
>     >         >     >       >     >       >       >       >     >             > Regards,
>     >         >     >       >     >       >       >       >     >             > Oleg
>     >         >     >       >     >       >       >       >     >             >
>     >         >     >       >     >       >       >       >     >             > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> Hi Michal,
>
> You pointed me right at the problem. Thank you.
> I am going to follow your suggestion.
> Let's see what happens.
>
> Regards,
> Oleg
>
>
> Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
> > Hi Oleg,
> >
> > On 19/04/2023 09:03, Oleg Nikitenko wrote:
> > >
> > > Hello Stefano,
> > >
> > > Thanks for the clarification.
> > > My company uses Yocto for image generation.
> > > What kind of information do you need to consult me in this case?
> > >
> > > Maybe the module sizes/addresses which were mentioned by
> > > @Julien Grall <julien@xen.org>?
> >
> > Sorry for jumping into the discussion, but FWICS the Xen command line
> > you provided seems not to be the one Xen booted with. The error you
> > are observing is most likely due to the dom0 colors configuration not
> > being specified (i.e. a missing dom0_colors=<> parameter). Although
> > this parameter is set in the command line you provided, I strongly
> > doubt that it is the actual command line in use.
> >
> > You wrote:
> > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> >
> > but:
> > 1) way_szize has a typo
> > 2) you specified 4 colors (0-3) for Xen, but the boot log says that
> >    Xen has only one:
> > (XEN) Xen color(s): [ 0 ]
> >
> > This makes me believe that no colors configuration actually ended up
> > in the command line that Xen booted with. A single color for Xen is
> > the "default if not specified", and the way size was probably
> > calculated by querying the HW.
> >
> > So I would suggest first cross-checking the command line in use.
> >
> > ~Michal
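[Editor's note: for reference, a cleaned-up chosen-node fragment with the way_szize typo corrected to way_size might look like the sketch below. The values are copied from the thread and the corrected spelling follows Michal's remark, so treat this as an illustration rather than a verified configuration for the poster's board.]

```dts
/* Hypothetical device-tree fragment; values taken from the thread,
 * with the "way_szize" typo corrected to "way_size". */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
};
```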
> > >
> > > Regards,
> > > Oleg
> > >
> > > Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
> > > >
> > > > On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> > > > > Hi Julien,
> > > > >
> > > > > >> This feature has not been merged in Xen upstream yet
> > > > >
> > > > > > I would assume that upstream + the series on the ML [1] work
> > > > >
> > > > > Please clarify this point.
> > > > > Because the two thoughts seem contradictory.
> > > >
> > > > Hi Oleg,
> > > >
> > > > As Julien wrote, there is nothing contradictory. As you are aware,
> > > > Xilinx maintains a separate Xen tree specific to Xilinx here:
> > > > https://github.com/xilinx/xen
> > > >
> > > > and the branch you are using (xlnx_rebase_4.16) comes from there.
> > > >
> > > > Instead, the upstream Xen tree lives here:
> > > > https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
> > > >
> > > > The cache coloring feature that you are trying to configure is
> > > > present in xlnx_rebase_4.16, but not yet present upstream (there
> > > > is an outstanding patch series to add cache coloring to Xen
> > > > upstream, but it hasn't been merged yet).
> > > >
> > > > Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too
> > > > much for you, as you already have cache coloring as a feature
> > > > there.
> > > >
> > > > I take it you are using ImageBuilder to generate the boot
> > > > configuration? If so, please post the ImageBuilder config file
> > > > that you are using.
> > > >
> > > > But from the boot message, it looks like the colors configuration
> > > > for Dom0 is incorrect.


From xen-devel-bounces@lists.xenproject.org Tue May 16 14:45:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:45:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535252.832924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvvf-0004dl-0G; Tue, 16 May 2023 14:45:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535252.832924; Tue, 16 May 2023 14:45:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyvve-0004de-T7; Tue, 16 May 2023 14:45:30 +0000
Received: by outflank-mailman (input) for mailman id 535252;
 Tue, 16 May 2023 14:45:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a0wm=BF=citrix.com=prvs=4936e02c6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pyvvd-0004dY-9O
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:45:29 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4b8e6d0c-f3f8-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 16:45:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b8e6d0c-f3f8-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684248327;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=XPbviYIhCynnrTIGSWjvBLeKf07MWzJPqOrTjUu9r5I=;
  b=Rn9iuD+eL/B+VGyEQW0mPQR9cPqW6gfC5lWllG9JqA6pIsy+/M1C35z4
   qmXTS4cAjpaOYe3HYT+7w6+xZos4aRNgVsGFKDreA5JM8vcjZkIBUhNf5
   jKMghPB4ixZSI8Gxlkilnv4WBvOa3CzNAnjIBhgKWkNazu6BSawk0SRI0
   g=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 107987435
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,278,1677560400"; 
   d="scan'208";a="107987435"
Date: Tue, 16 May 2023 15:45:14 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
CC: <qemu-devel@nongnu.org>, Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, <qemu-block@nongnu.org>, Paul Durrant
	<paul@xen.org>, Peter Lieven <pl@kamp.de>, Stefan Weil <sw@weilnetz.de>, Xie
 Yongji <xieyongji@bytedance.com>, Kevin Wolf <kwolf@redhat.com>, Paolo
 Bonzini <pbonzini@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>, Hanna Reitz
	<hreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, Richard
 Henderson <richard.henderson@linaro.org>, David Woodhouse
	<dwmw2@infradead.org>, Coiby Xu <Coiby.Xu@gmail.com>, Eduardo Habkost
	<eduardo@habkost.net>, Stefano Garzarella <sgarzare@redhat.com>, Philippe
 =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>, Daniel
 =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>, Julia Suvorova
	<jusual@redhat.com>, <xen-devel@lists.xenproject.org>, <eesposit@redhat.com>,
	Juan Quintela <quintela@redhat.com>, "Richard W.M. Jones"
	<rjones@redhat.com>, Fam Zheng <fam@euphon.net>, Marcel Apfelbaum
	<marcel.apfelbaum@gmail.com>
Subject: Re: [PATCH v5 12/21] xen-block: implement
 BlockDevOps->drained_begin()
Message-ID: <481d1f56-298c-49a3-9a55-5a205d4b20fd@perard>
References: <20230504195327.695107-1-stefanha@redhat.com>
 <20230504195327.695107-13-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230504195327.695107-13-stefanha@redhat.com>

On Thu, May 04, 2023 at 03:53:18PM -0400, Stefan Hajnoczi wrote:
> @@ -819,11 +841,9 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
>      blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL);
>      aio_context_release(old_context);
>  
> -    /* Only reason for failure is a NULL channel */
> -    aio_context_acquire(dataplane->ctx);
> -    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
> -                                         dataplane->ctx, &error_abort);
> -    aio_context_release(dataplane->ctx);
> +    if (!blk_in_drain(dataplane->blk)) {

There may be something missing in the patch.
xen_block_dataplane_start() calls xen_device_bind_event_channel() just
before xen_block_dataplane_attach().

And xen_device_bind_event_channel() sets the event context to
qemu_get_aio_context() instead of NULL.

So even if we don't call xen_block_dataplane_attach() while in drain,
there's already an fd handler attached to the fd. Should
xen_device_bind_event_channel() be changed as well? Or would a call to
xen_block_dataplane_detach() be enough?

(There's only one user of xen_device_bind_event_channel() at the moment,
so I don't know whether other implementations making use of this API will
want to call set_event_channel_context or not.)

> +        xen_block_dataplane_attach(dataplane);
> +    }
>  
>      return;
>  

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 16 14:53:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535262.832948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3h-0006Sa-D2; Tue, 16 May 2023 14:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535262.832948; Tue, 16 May 2023 14:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3h-0006S2-83; Tue, 16 May 2023 14:53:49 +0000
Received: by outflank-mailman (input) for mailman id 535262;
 Tue, 16 May 2023 14:53:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyw3g-0006OC-DR
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:53:48 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 745f770e-f3f9-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 16:53:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 745f770e-f3f9-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684248825;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=nyQur+30ZM6XscWFa3iBiYiEbv5nHeYpGVnr0FExeKQ=;
  b=VSFv6Z+6ku77nlo+gSIF+uvMIjQD5fKGuo89Oz6Z/p/fsGeEFtMU8fVm
   VmxTHFSIpTwyZ03W5i1bx4Z+m4is51H6MQHH5ywC+UgqnraOTf/ejMDpm
   b5+vNtjRvkp/bsb4kriQRPoM0nsXpy77GZPDCHhB4mGedk4PjfPCY9BY9
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109117764
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,278,1677560400"; 
   d="scan'208";a="109117764"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 1/4] x86/cpufeature: Rework {boot_,}cpu_has()
Date: Tue, 16 May 2023 15:53:31 +0100
Message-ID: <20230516145334.1271347-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

One area where Xen deviates from Linux is that test_bit() forces a volatile
read.  This leads to poor code generation, because the optimiser cannot merge
bit operations on the same word.

Drop the use of test_bit(), and write the expressions in regular C.  This
removes the include of bitops.h (which is a frequent source of header
tangles), and it offers the optimiser far more flexibility.

Bloat-o-meter reports a net change of:

  add/remove: 0/0 grow/shrink: 21/87 up/down: 641/-2751 (-2110)

with half of that in x86_emulate() alone.  vmx_ctxt_switch_to() seems to be
the fastpath with the greatest delta at -24, where the optimiser has
successfully removed the branch hidden in cpu_has_msr_tsc_aux.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/include/asm/cpufeature.h | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 4140ec0938b2..4f827cc6ff91 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -7,6 +7,7 @@
 #define __ASM_I386_CPUFEATURE_H
 
 #include <xen/const.h>
+#include <xen/stdbool.h>
 #include <asm/cpuid.h>
 
 #define cpufeat_word(idx)	((idx) / 32)
@@ -17,7 +18,6 @@
 #define X86_FEATURE_ALWAYS      X86_FEATURE_LM
 
 #ifndef __ASSEMBLY__
-#include <xen/bitops.h>
 
 struct cpuinfo_x86 {
     unsigned char x86;                 /* CPU family */
@@ -43,8 +43,15 @@ struct cpuinfo_x86 {
 
 extern struct cpuinfo_x86 boot_cpu_data;
 
-#define cpu_has(c, bit)		test_bit(bit, (c)->x86_capability)
-#define boot_cpu_has(bit)	test_bit(bit, boot_cpu_data.x86_capability)
+static inline bool cpu_has(const struct cpuinfo_x86 *info, unsigned int feat)
+{
+    return info->x86_capability[cpufeat_word(feat)] & cpufeat_mask(feat);
+}
+
+static inline bool boot_cpu_has(unsigned int feat)
+{
+    return cpu_has(&boot_cpu_data, feat);
+}
 
 #define CPUID_PM_LEAF                    6
 #define CPUID6_ECX_APERFMPERF_CAPABILITY 0x1

base-commit: 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
prerequisite-patch-id: ceeba7d5ab9498cb188e5012953c7e8c9a86347d
prerequisite-patch-id: c0957b9e1157ae6eb8de973c96716fd02587c486
prerequisite-patch-id: d2574bba15748cd021e5b33fa50e6cadc38863b6
prerequisite-patch-id: 0f66cd4287ffdc06f24dc01c7d26fb428f3e8c09
prerequisite-patch-id: a585f61b546ff96be3624ff253f8100b2f465de6
prerequisite-patch-id: 54551cdefaca083b4a4b97528d27d0f3dc9753ee
prerequisite-patch-id: 051423463e4a34728ab524f03e801e7103777684
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 14:53:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535260.832933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3a-00068V-Qf; Tue, 16 May 2023 14:53:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535260.832933; Tue, 16 May 2023 14:53:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3a-00068O-Nq; Tue, 16 May 2023 14:53:42 +0000
Received: by outflank-mailman (input) for mailman id 535260;
 Tue, 16 May 2023 14:53:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyw3Z-00067V-2j
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:53:41 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7145a7a1-f3f9-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 16:53:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8445.eurprd04.prod.outlook.com (2603:10a6:10:2cf::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.23; Tue, 16 May
 2023 14:53:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 14:53:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7145a7a1-f3f9-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=S+u7KO4cXrHRU5kX4NKHs+4rdAaIeYwJvAbEti/vi+gTcJrmrYq2Kb1FwHETnKEnWJOopSVnfM9t1yUimuTkKBCgH9+UP35oSO+6ia1D+DdUpezxWK+tcWFckZluEeb3zRamaQB1kWhhlXm0eXIWpwRaVSHa+Ws88zPKUj8LRBvchZAW9F4Jmp7rlE2SuP5fqcCKpMUpDFcCpDDEldaD7HJ1TEvNCQZa7/hUvbBMMC+Wbgh/ifGR/BRZYlK6CYsQDp0g7nxbvoAnG0sr3fosQ4Udy4n/6PU3nLGD7XMiKFOPXvIjArWj+2kORgxXvLrOyzDIPcyK5EsnlLZwgtXETg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=daInJN88i+p1YQGyc6eg9rrply3P7vvIgfvu63l1Sos=;
 b=dZZYwrVXfmVUpUFnhALZeOztQr5Fdo80ebdIbZweHW4xFBuwBXMXE0Bn/r4kwh+UtUw04Q1TS3hZEQ6RlRqH7QjIDSxegBZCB6zBYSXXlMpFTGNM5kf9RfOeBcp9jTNrASX6DzM79qK8otzB/Y0/vdxNXFJYN2OthcDOzNZ216KjM6UHq0Qk/jFP4sOCRHoK4vfyGyzaquc22IPo64IA5ghbyDMkiT1tOJ9P/GaIjISjywaZsdA5i6Ywk8+xvhoTUev+tvqKPsqsd+TFGjBZw3QK265MOqSALEI6H6WLSaBLZEps3k9cuvqpeoZmETn7IGPvKa40uk1NBM3w7UT2Pw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=daInJN88i+p1YQGyc6eg9rrply3P7vvIgfvu63l1Sos=;
 b=f9pT7SH8loHYXkUZAwkZx8Qdh6IhRcd26QKwRfsnV64sci+k8DEeuR9x3BzkYLoZYifezy80uo7sZFgO9lt07U+bFEZqB05oxGRkk7Dc4yBO9KOXSZD1aQkecIFSYaLbTRqGbPHLw8MmZmdVkDmoOpYcbe2hGMQBKILkmKgFxnq/dN6u0ow5rZvAOrhfbtWMaL+UHy4R1Fy7oYZw/y9808UrUk6VX/DTBBTeKDw4eYgcwHOCCFH7ebGrTYQPl+BYRpaaHXWheGIzU56AaCFEeHbUQvr+to8OjByL82b0YWLIl5T9pO1clNL2XZkyNqDm8DqeKABSjwIAF99NRW6XuQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <22a6bd70-887e-da72-ccff-16b3627463c3@suse.com>
Date: Tue, 16 May 2023 16:53:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
 <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
 <1bef2b0e-04e8-2ca0-cf03-f61cdae484a9@suse.com>
 <b1c56e56-90cc-0f37-4c8a-df1217339e59@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b1c56e56-90cc-0f37-4c8a-df1217339e59@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0178.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b4::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8445:EE_
X-MS-Office365-Filtering-Correlation-Id: 491aff20-e2e2-42d1-317e-08db561d5420
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 491aff20-e2e2-42d1-317e-08db561d5420
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 14:53:37.0098
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wFJuGr6m/3/OzKC+85IY2IR0NRI1AmF7aAsqUZ+Zbr8M/cyiz+9sD2dIRZoHuC/hXMUgkesS7bTnKSPN3PsKmQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8445

On 16.05.2023 16:16, Andrew Cooper wrote:
> On 16/05/2023 3:06 pm, Jan Beulich wrote:
>> On 16.05.2023 15:51, Andrew Cooper wrote:
>>> On 16/05/2023 2:06 pm, Jan Beulich wrote:
>>>> On 15.05.2023 16:42, Andrew Cooper wrote:
>>>> Further is even just non-default exposure of all the various bits okay
>>>> to other than Dom0? IOW is there indeed no further adjustment necessary
>>>> to guest_rdmsr()?
>> With your reply further down also sufficiently clarifying things for
>> me (in particular pointing the one oversight of mine), the question
>> above is the sole part remaining before I'd be okay giving my R-b here.
> 
> Oh sorry.  Yes, it is sufficient.  Because VMs (other than dom0) don't
> get the ARCH_CAPS CPUID bit, reads of MSR_ARCH_CAPS will #GP.
> 
> Right now, you can set cpuid = "host:arch-caps" in an xl.cfg file.  If
> you do this, you get to keep both pieces, as you'll end up advertising
> the MSR but with a value of 0 because of the note in patch 4.  libxl
> still only understands the xend CPUID format and can't express any MSR
> data at all.

Hmm, so the CPUID bit being max-only results in all the ARCH_CAPS bits
getting turned off in the default policy. That is, to enable anything
you need to not only enable the CPUID bit, but also the ARCH_CAPS bits
you want enabled (with no present means to do so). I guess that's no
different from other max-only features with dependents, but I wonder
whether that's good behavior. Wouldn't it make more sense for the
individual bits' exposure qualifiers to become meaningful once the
qualifying feature is enabled? I.e. here this would then mean that
some ARCH_CAPS bits may become available, while others may require
explicit turning on (assuming they weren't all 'A').

But irrespective of that (which is kind of orthogonal), my question was
rather about already considering the point in time when the CPUID bit
would become 'A'. IOW I was wondering whether, at that point, having all
the individual bits be 'A' is actually going to be correct.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 14:53:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535261.832944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3h-0006P7-3i; Tue, 16 May 2023 14:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535261.832944; Tue, 16 May 2023 14:53:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3g-0006P0-Vy; Tue, 16 May 2023 14:53:48 +0000
Received: by outflank-mailman (input) for mailman id 535261;
 Tue, 16 May 2023 14:53:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyw3f-0006OC-Nj
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:53:47 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74c30a6d-f3f9-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 16:53:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74c30a6d-f3f9-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684248826;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=LZA9DiPzTWBX0eSgS7FnHPS/B/27DbsY3IexopoDr6Y=;
  b=AvgpE37ZbnVokSWYKFJ2qe26tIkMQHvnOeYMHZoff/W+J9akpAm8iOO3
   0uQAFz8o1uX1fdLKhW1FjS1hOZTtgV+IpDxAWKuUPp+a84H66puOa7zzP
   6uOPBH3pU8womLd4iRYUpvkuZ9gdPUYsz+TAwqiKGIkgRKTiVCz5Gr6Dm
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109243519
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH 2/4] x86/vtx: Remove opencoded MSR_ARCH_CAPS check
Date: Tue, 16 May 2023 15:53:32 +0100
Message-ID: <20230516145334.1271347-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

MSR_ARCH_CAPS data is now included in featureset information.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c            | 8 ++------
 xen/arch/x86/include/asm/cpufeature.h | 3 +++
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 096c69251d58..9dc16d0cc6b9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2849,8 +2849,6 @@ static void __init ler_to_fixup_check(void);
  */
 static bool __init has_if_pschange_mc(void)
 {
-    uint64_t caps = 0;
-
     /*
      * If we are virtualised, there is nothing we can do.  Our EPT tables are
      * shadowed by our hypervisor, and not walked by hardware.
@@ -2858,10 +2856,8 @@ static bool __init has_if_pschange_mc(void)
     if ( cpu_has_hypervisor )
         return false;
 
-    if ( cpu_has_arch_caps )
-        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
-
-    if ( caps & ARCH_CAPS_IF_PSCHANGE_MC_NO )
+    /* Hardware reports itself as fixed. */
+    if ( cpu_has_if_pschange_mc_no )
         return false;
 
     /*
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 4f827cc6ff91..8446f98625f7 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -183,6 +183,9 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_avx_vnni_int8   boot_cpu_has(X86_FEATURE_AVX_VNNI_INT8)
 #define cpu_has_avx_ne_convert  boot_cpu_has(X86_FEATURE_AVX_NE_CONVERT)
 
+/* MSR_ARCH_CAPS 10A */
+#define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)
+
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
 #define cpu_has_cpuid_faulting  boot_cpu_has(X86_FEATURE_CPUID_FAULTING)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 14:53:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:53:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535264.832964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3n-00070G-PK; Tue, 16 May 2023 14:53:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535264.832964; Tue, 16 May 2023 14:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3n-000709-LT; Tue, 16 May 2023 14:53:55 +0000
Received: by outflank-mailman (input) for mailman id 535264;
 Tue, 16 May 2023 14:53:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyw3m-00067V-3y
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:53:54 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77c9e0d8-f3f9-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 16:53:51 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77c9e0d8-f3f9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684248831;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=GMlRcLDKg4s3arvBRJ23Or4PyYQ1A2vHuJYymPBgPdQ=;
  b=g5d17d/MDvxEdSn4VBia7e5xijAqo6pfM8Qztt3gQzSUG5ejSOie1jHV
   z2m33jhDv4f30j0JWu98Y7iwkFb+jTH7nH2I8yLQyLCsMxE27GVexr273
   JE121N8rHXAWQpGxsq+Si64I9MVLalZ59RVhJSdlmHUQ+1zdTKXblS3/R
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 111689025
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 4/4] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
Date: Tue, 16 May 2023 15:53:34 +0100
Message-ID: <20230516145334.1271347-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

MSR_ARCH_CAPS data is now included in featureset information.  Replace
opencoded checks with regular feature ones.

No functional change.
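
As a minimal stand-in for the pattern this series moves to (feature
numbers and the bitmap layout below are illustrative assumptions, not
Xen's real definitions): instead of each caller doing an opencoded
rdmsr of MSR_ARCH_CAPABILITIES and masking bits, callers test a bit in
the boot CPU's featureset bitmap via boot_cpu_has()-style accessors.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative feature numbers (assumptions, not Xen's real values). */
#define FEAT_RDCL_NO   0
#define FEAT_EIBRS     1
#define FEAT_MDS_NO    2

/* Simplified boot-CPU featureset: a flat bitmap of feature words. */
static uint32_t boot_cpu_featureset[4];

/* Test one feature bit in the bitmap, boot_cpu_has()-style. */
static bool boot_cpu_has(unsigned int feat)
{
    return boot_cpu_featureset[feat / 32] & (1u << (feat % 32));
}

/* Record a feature as present (stands in for featureset collection). */
static void set_feature(unsigned int feat)
{
    boot_cpu_featureset[feat / 32] |= 1u << (feat % 32);
}

/* Convenience wrappers, mirroring the cpu_has_* style used above. */
#define cpu_has_rdcl_no boot_cpu_has(FEAT_RDCL_NO)
#define cpu_has_eibrs   boot_cpu_has(FEAT_EIBRS)
#define cpu_has_mds_no  boot_cpu_has(FEAT_MDS_NO)
```

The benefit is that every caller tests the same collected state rather
than re-reading and re-masking the MSR, which is exactly why the
`uint64_t caps` plumbing can be dropped from the call chains below.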

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/include/asm/cpufeature.h |  7 ++++
 xen/arch/x86/spec_ctrl.c              | 56 +++++++++++++--------------
 2 files changed, 33 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index deca5bfc2629..00a43123ac82 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -184,8 +184,15 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_avx_ne_convert  boot_cpu_has(X86_FEATURE_AVX_NE_CONVERT)
 
 /* MSR_ARCH_CAPS 10A */
+#define cpu_has_rdcl_no         boot_cpu_has(X86_FEATURE_RDCL_NO)
+#define cpu_has_eibrs           boot_cpu_has(X86_FEATURE_EIBRS)
+#define cpu_has_rsba            boot_cpu_has(X86_FEATURE_RSBA)
+#define cpu_has_skip_l1dfl      boot_cpu_has(X86_FEATURE_SKIP_L1DFL)
+#define cpu_has_mds_no          boot_cpu_has(X86_FEATURE_MDS_NO)
 #define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)
 #define cpu_has_tsx_ctrl        boot_cpu_has(X86_FEATURE_TSX_CTRL)
+#define cpu_has_taa_no          boot_cpu_has(X86_FEATURE_TAA_NO)
+#define cpu_has_fb_clear        boot_cpu_has(X86_FEATURE_FB_CLEAR)
 
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index f81db2143328..50d467f74cf8 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -282,12 +282,10 @@ custom_param("spec-ctrl", parse_spec_ctrl);
 int8_t __read_mostly opt_xpti_hwdom = -1;
 int8_t __read_mostly opt_xpti_domu = -1;
 
-static __init void xpti_init_default(uint64_t caps)
+static __init void xpti_init_default(void)
 {
-    if ( boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
-        caps = ARCH_CAPS_RDCL_NO;
-
-    if ( caps & ARCH_CAPS_RDCL_NO )
+    if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
+         cpu_has_rdcl_no )
     {
         if ( opt_xpti_hwdom < 0 )
             opt_xpti_hwdom = 0;
@@ -390,9 +388,10 @@ static int __init cf_check parse_pv_l1tf(const char *s)
 }
 custom_param("pv-l1tf", parse_pv_l1tf);
 
-static void __init print_details(enum ind_thunk thunk, uint64_t caps)
+static void __init print_details(enum ind_thunk thunk)
 {
     unsigned int _7d0 = 0, _7d2 = 0, e8b = 0, max = 0, tmp;
+    uint64_t caps = 0;
 
     /* Collect diagnostics about available mitigations. */
     if ( boot_cpu_data.cpuid_level >= 7 )
@@ -401,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
         cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
     if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
         cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
+    if ( cpu_has_arch_caps )
+        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
     printk("Speculative mitigation facilities:\n");
 
@@ -578,7 +579,7 @@ static bool __init check_smt_enabled(void)
 }
 
 /* Calculate whether Retpoline is known-safe on this CPU. */
-static bool __init retpoline_safe(uint64_t caps)
+static bool __init retpoline_safe(void)
 {
     unsigned int ucode_rev = this_cpu(cpu_sig).rev;
 
@@ -596,7 +597,7 @@ static bool __init retpoline_safe(uint64_t caps)
      * Processors offering Enhanced IBRS are not guarenteed to be
      * repoline-safe.
      */
-    if ( caps & (ARCH_CAPS_RSBA | ARCH_CAPS_IBRS_ALL) )
+    if ( cpu_has_rsba || cpu_has_eibrs )
         return false;
 
     switch ( boot_cpu_data.x86_model )
@@ -845,7 +846,7 @@ static void __init ibpb_calculations(void)
 }
 
 /* Calculate whether this CPU is vulnerable to L1TF. */
-static __init void l1tf_calculations(uint64_t caps)
+static __init void l1tf_calculations(void)
 {
     bool hit_default = false;
 
@@ -933,7 +934,7 @@ static __init void l1tf_calculations(uint64_t caps)
     }
 
     /* Any processor advertising RDCL_NO should be not vulnerable to L1TF. */
-    if ( caps & ARCH_CAPS_RDCL_NO )
+    if ( cpu_has_rdcl_no )
         cpu_has_bug_l1tf = false;
 
     if ( cpu_has_bug_l1tf && hit_default )
@@ -992,7 +993,7 @@ static __init void l1tf_calculations(uint64_t caps)
 }
 
 /* Calculate whether this CPU is vulnerable to MDS. */
-static __init void mds_calculations(uint64_t caps)
+static __init void mds_calculations(void)
 {
     /* MDS is only known to affect Intel Family 6 processors at this time. */
     if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
@@ -1000,7 +1001,7 @@ static __init void mds_calculations(uint64_t caps)
         return;
 
     /* Any processor advertising MDS_NO should be not vulnerable to MDS. */
-    if ( caps & ARCH_CAPS_MDS_NO )
+    if ( cpu_has_mds_no )
         return;
 
     switch ( boot_cpu_data.x86_model )
@@ -1113,10 +1114,6 @@ void __init init_speculation_mitigations(void)
     enum ind_thunk thunk = THUNK_DEFAULT;
     bool has_spec_ctrl, ibrs = false, hw_smt_enabled;
     bool cpu_has_bug_taa;
-    uint64_t caps = 0;
-
-    if ( cpu_has_arch_caps )
-        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
     hw_smt_enabled = check_smt_enabled();
 
@@ -1163,7 +1160,7 @@ void __init init_speculation_mitigations(void)
              * On all hardware, we'd like to use retpoline in preference to
              * IBRS, but only if it is safe on this hardware.
              */
-            if ( retpoline_safe(caps) )
+            if ( retpoline_safe() )
                 thunk = THUNK_RETPOLINE;
             else if ( has_spec_ctrl )
                 ibrs = true;
@@ -1392,13 +1389,13 @@ void __init init_speculation_mitigations(void)
      * threads.  Activate this if SMT is enabled, and Xen is using a non-zero
      * MSR_SPEC_CTRL setting.
      */
-    if ( boot_cpu_has(X86_FEATURE_IBRSB) && !(caps & ARCH_CAPS_IBRS_ALL) &&
+    if ( boot_cpu_has(X86_FEATURE_IBRSB) && !cpu_has_eibrs &&
          hw_smt_enabled && default_xen_spec_ctrl )
         setup_force_cpu_cap(X86_FEATURE_SC_MSR_IDLE);
 
-    xpti_init_default(caps);
+    xpti_init_default();
 
-    l1tf_calculations(caps);
+    l1tf_calculations();
 
     /*
      * By default, enable PV domU L1TF mitigations on all L1TF-vulnerable
@@ -1419,7 +1416,7 @@ void __init init_speculation_mitigations(void)
     if ( !boot_cpu_has(X86_FEATURE_L1D_FLUSH) )
         opt_l1d_flush = 0;
     else if ( opt_l1d_flush == -1 )
-        opt_l1d_flush = cpu_has_bug_l1tf && !(caps & ARCH_CAPS_SKIP_L1DFL);
+        opt_l1d_flush = cpu_has_bug_l1tf && !cpu_has_skip_l1dfl;
 
     /* We compile lfence's in by default, and nop them out if requested. */
     if ( !opt_branch_harden )
@@ -1442,7 +1439,7 @@ void __init init_speculation_mitigations(void)
             "enabled.  Please assess your configuration and choose an\n"
             "explicit 'smt=<bool>' setting.  See XSA-273.\n");
 
-    mds_calculations(caps);
+    mds_calculations();
 
     /*
      * Parts which enumerate FB_CLEAR are those which are post-MDS_NO and have
@@ -1454,7 +1451,7 @@ void __init init_speculation_mitigations(void)
      * the return-to-guest path.
      */
     if ( opt_unpriv_mmio )
-        opt_fb_clear_mmio = caps & ARCH_CAPS_FB_CLEAR;
+        opt_fb_clear_mmio = cpu_has_fb_clear;
 
     /*
      * By default, enable PV and HVM mitigations on MDS-vulnerable hardware.
@@ -1484,7 +1481,7 @@ void __init init_speculation_mitigations(void)
      */
     if ( opt_md_clear_pv || opt_md_clear_hvm || opt_fb_clear_mmio )
         setup_force_cpu_cap(X86_FEATURE_SC_VERW_IDLE);
-    opt_md_clear_hvm &= !(caps & ARCH_CAPS_SKIP_L1DFL) && !opt_l1d_flush;
+    opt_md_clear_hvm &= !cpu_has_skip_l1dfl && !opt_l1d_flush;
 
     /*
      * Warn the user if they are on MLPDS/MFBDS-vulnerable hardware with HT
@@ -1515,8 +1512,7 @@ void __init init_speculation_mitigations(void)
      *       we check both to spot TSX in a microcode/cmdline independent way.
      */
     cpu_has_bug_taa =
-        (cpu_has_rtm || (caps & ARCH_CAPS_TSX_CTRL)) &&
-        (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
+        (cpu_has_rtm || cpu_has_tsx_ctrl) && cpu_has_mds_no && !cpu_has_taa_no;
 
     /*
      * On TAA-affected hardware, disabling TSX is the preferred mitigation, vs
@@ -1535,7 +1531,7 @@ void __init init_speculation_mitigations(void)
      * plausibly value TSX higher than Hyperthreading...), disable TSX to
      * mitigate TAA.
      */
-    if ( opt_tsx == -1 && cpu_has_bug_taa && (caps & ARCH_CAPS_TSX_CTRL) &&
+    if ( opt_tsx == -1 && cpu_has_bug_taa && cpu_has_tsx_ctrl &&
          ((hw_smt_enabled && opt_smt) ||
           !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
     {
@@ -1560,15 +1556,15 @@ void __init init_speculation_mitigations(void)
     if ( cpu_has_srbds_ctrl )
     {
         if ( opt_srb_lock == -1 && !opt_unpriv_mmio &&
-             (caps & (ARCH_CAPS_MDS_NO|ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO &&
-             (!cpu_has_hle || ((caps & ARCH_CAPS_TSX_CTRL) && rtm_disabled)) )
+             cpu_has_mds_no && !cpu_has_taa_no &&
+             (!cpu_has_hle || (cpu_has_tsx_ctrl && rtm_disabled)) )
             opt_srb_lock = 0;
 
         set_in_mcu_opt_ctrl(MCU_OPT_CTRL_RNGDS_MITG_DIS,
                             opt_srb_lock ? 0 : MCU_OPT_CTRL_RNGDS_MITG_DIS);
     }
 
-    print_details(thunk, caps);
+    print_details(thunk);
 
     /*
      * If MSR_SPEC_CTRL is available, apply Xen's default setting and discard
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 14:53:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535265.832974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3q-0007KT-0V; Tue, 16 May 2023 14:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535265.832974; Tue, 16 May 2023 14:53:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3p-0007KM-Tn; Tue, 16 May 2023 14:53:57 +0000
Received: by outflank-mailman (input) for mailman id 535265;
 Tue, 16 May 2023 14:53:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyw3o-00067V-6o
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:53:56 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a37d56f-f3f9-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 16:53:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a37d56f-f3f9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684248834;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=NrkVJ16nwirBobiHdjNkj0CWXhNZMNORuoJHCIkmXkg=;
  b=EwUOwEBmBRDji5a/S1nG3XAqsTaw8ZThUkg1jqG7lJwgVBxGtqg5umoQ
   l/e9Gyi7cAXiuK2BNWgTp7L4hWol9qbLyT35WromwYVrIzeyIG821a3H7
   hC94Ljy6ywIPtr0R1jwg6Scg5k07qCp6SYIKmM7KU6xJL6i1zpSJjtmUt
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 111689031
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH 0/4] x86: Feature check cleanup
Date: Tue, 16 May 2023 15:53:30 +0100
Message-ID: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This builds on the work from "[PATCH 0/6] x86: Introduce MSR_ARCH_CAPS into
featuresets" and is just cleanup of the feature handling.

No functional change.

Andrew Cooper (4):
  x86/cpufeature: Rework {boot_,}cpu_has()
  x86/vtx: Remove opencoded MSR_ARCH_CAPS check
  x86/tsx: Remove opencoded MSR_ARCH_CAPS check
  x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check

 xen/arch/x86/hvm/vmx/vmx.c            |  8 +---
 xen/arch/x86/include/asm/cpufeature.h | 24 ++++++++++--
 xen/arch/x86/include/asm/processor.h  |  2 +-
 xen/arch/x86/spec_ctrl.c              | 56 +++++++++++++--------------
 xen/arch/x86/tsx.c                    | 13 ++++---
 5 files changed, 58 insertions(+), 45 deletions(-)


base-commit: 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
prerequisite-patch-id: ceeba7d5ab9498cb188e5012953c7e8c9a86347d
prerequisite-patch-id: c0957b9e1157ae6eb8de973c96716fd02587c486
prerequisite-patch-id: d2574bba15748cd021e5b33fa50e6cadc38863b6
prerequisite-patch-id: 0f66cd4287ffdc06f24dc01c7d26fb428f3e8c09
prerequisite-patch-id: a585f61b546ff96be3624ff253f8100b2f465de6
prerequisite-patch-id: 54551cdefaca083b4a4b97528d27d0f3dc9753ee
prerequisite-patch-id: 051423463e4a34728ab524f03e801e7103777684
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 14:53:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535266.832980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3q-0007Oi-G4; Tue, 16 May 2023 14:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535266.832980; Tue, 16 May 2023 14:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw3q-0007Ny-7K; Tue, 16 May 2023 14:53:58 +0000
Received: by outflank-mailman (input) for mailman id 535266;
 Tue, 16 May 2023 14:53:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pyw3o-00067V-Dw
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:53:56 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 799ec1ce-f3f9-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 16:53:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 799ec1ce-f3f9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684248834;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=lMkiMk+JiUBw0hNgilS4qSHGS05jFL3O36UzQf8lnjo=;
  b=Z151AEIeZh+lgWZLiUIM5oxx1Z1Y6Jiq2q3620FydbOlm5KvHcC/v/cv
   zq3v6rj3K1jhuSKQXI1Mzuq0sMQUcehBqNbB/ZrKRyzi5hko1+aYvtNaF
   DoRsJFU5dszRLjYQ3PdXsPHQuGj0upCrN3dAO6LOID+hy7r9gwew+vbFV
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 111689032
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,278,1677560400"; 
   d="scan'208";a="111689032"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 3/4] x86/tsx: Remove opencoded MSR_ARCH_CAPS check
Date: Tue, 16 May 2023 15:53:33 +0100
Message-ID: <20230516145334.1271347-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The current cpu_has_tsx_ctrl tristate serves a double purpose: signalling the
first pass through tsx_init(), and the availability of MSR_TSX_CTRL.

Drop the variable, replacing it with a 'once' boolean, and derive
cpu_has_tsx_ctrl from the feature information instead.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/include/asm/cpufeature.h |  1 +
 xen/arch/x86/include/asm/processor.h  |  2 +-
 xen/arch/x86/tsx.c                    | 13 ++++++++-----
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 8446f98625f7..deca5bfc2629 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -185,6 +185,7 @@ static inline bool boot_cpu_has(unsigned int feat)
 
 /* MSR_ARCH_CAPS 10A */
 #define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)
+#define cpu_has_tsx_ctrl        boot_cpu_has(X86_FEATURE_TSX_CTRL)
 
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
index 0eaa2c3094d0..f983ff501d95 100644
--- a/xen/arch/x86/include/asm/processor.h
+++ b/xen/arch/x86/include/asm/processor.h
@@ -535,7 +535,7 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
     return fam;
 }
 
-extern int8_t opt_tsx, cpu_has_tsx_ctrl;
+extern int8_t opt_tsx;
 extern bool rtm_disabled;
 void tsx_init(void);
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index 41b6092cfe16..fc199815994d 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -19,7 +19,6 @@
  * controlling TSX behaviour, and where TSX isn't force-disabled by firmware.
  */
 int8_t __read_mostly opt_tsx = -1;
-int8_t __read_mostly cpu_has_tsx_ctrl = -1;
 bool __read_mostly rtm_disabled;
 
 static int __init cf_check parse_tsx(const char *s)
@@ -37,24 +36,28 @@ custom_param("tsx", parse_tsx);
 
 void tsx_init(void)
 {
+    static bool __read_mostly once;
+
     /*
      * This function is first called between microcode being loaded, and CPUID
      * being scanned generally.  Read into boot_cpu_data.x86_capability[] for
      * the cpu_has_* bits we care about using here.
      */
-    if ( unlikely(cpu_has_tsx_ctrl < 0) )
+    if ( unlikely(!once) )
     {
-        uint64_t caps = 0;
         bool has_rtm_always_abort;
 
+        once = true;
+
         if ( boot_cpu_data.cpuid_level >= 7 )
             boot_cpu_data.x86_capability[FEATURESET_7d0]
                 = cpuid_count_edx(7, 0);
 
         if ( cpu_has_arch_caps )
-            rdmsrl(MSR_ARCH_CAPABILITIES, caps);
+            rdmsr(MSR_ARCH_CAPABILITIES,
+                  boot_cpu_data.x86_capability[FEATURESET_10Al],
+                  boot_cpu_data.x86_capability[FEATURESET_10Ah]);
 
-        cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
         has_rtm_always_abort = cpu_has_rtm_always_abort;
 
         if ( cpu_has_tsx_ctrl && cpu_has_srbds_ctrl )
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 14:58:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 14:58:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535290.832993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw7s-00019N-U9; Tue, 16 May 2023 14:58:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535290.832993; Tue, 16 May 2023 14:58:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyw7s-00019G-Qw; Tue, 16 May 2023 14:58:08 +0000
Received: by outflank-mailman (input) for mailman id 535290;
 Tue, 16 May 2023 14:58:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyw7r-000198-Io
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 14:58:07 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0616.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1057dd41-f3fa-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 16:58:06 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8445.eurprd04.prod.outlook.com (2603:10a6:10:2cf::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.23; Tue, 16 May
 2023 14:58:05 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 14:58:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1057dd41-f3fa-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IPtBLE9yNVGUPbyDwZU0hHOv0ZQ5N5MczuoEMNlvF1uiX16JNFlQjbLQGfAcJMJuarba6HBknzsLIySPm/0FinOMqfwL/hdMcXEAlvcTc5ey0elqFzHBp91oEU3+kjVE2pnTYjYA2OVBESuHeQ6HNYs5cvMm0ghICLvOh97tXv3/HCncX6eMsaisnYbobL1j96RRTH1wKOxpJrq8Rippdj+X+L+6ObN5lQAVQY74P8zh0NbX+Z87LYbCwIhO/dW4UWfggtzThRz+8o04cmmSfro6r4cez1kqWV5uH5vYFco5IgH6OPcZrTMg3DWQnxFwTMk8d1XULTXH5zLin9ZFLA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qOWyj1RNtyffmMO8fnLkKiE6NI7uYuHL2YVFcqmqZg4=;
 b=KItRshzB5Ltrn8BdBhABsQdqUVgb9o+OnlwWKEBCfle4UEq9Bt3EUwnWjimAjuuxA+XKe7Zfpq3iMLNCLuNIrWtX/BklKMUE4MWGm1aKvhUeKJ9kjZ3dd8smSdLGUOhDzeb4r5XB5Afo7hxvu85itIzJ7Cr9kNKBLWOhRACI7hNzSLUtbxVBnfgRt2QZR98s445QaWx4ToTbX/goxGssRSHF/Xd+rmMUZwTylo/2MtgGtTotCk6MKiOtsFfoLIQOkQLR9mhd8hxL8lpC6UioONTl6zFLYIR9nXnpZdOatQtjgoWFpDmo236jrJ441ZLiPflPETfWM8tkTBvSpyLkwA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qOWyj1RNtyffmMO8fnLkKiE6NI7uYuHL2YVFcqmqZg4=;
 b=QTOnae0ardATfmda7Sp6J27V6XaIgYMTrh00kKFcIM3RXdlzwobBEkbkBEPv8sFuZo4mkJZnNWnWe5vxxbXYpWz3PEJ3M3AqZkjHCe5Q2/2WPIdLQnbEb+Tlg/FGLjlR9Q95Wvul1OvPJlVnTGveUzHOOCKCQJH36afZb6SSw57WqQB/1uziLACglseoQqFq7wc+HqVR6rQSk2GtiieOWsX/bcngBvqMB3su/qTkrflrFa9SGhKwt+XdS1DWgRk4txpm6k2fXBR5UzKf7mWz/WCuai6AT/9ATdpYT1TrZOz7MOrvNnldRn8+zMblI3QHhLwPrSba0ZUSudDhD48bzg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <327eb858-f5f5-bf09-edfb-53c5c23a6c17@suse.com>
Date: Tue, 16 May 2023 16:58:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230515144259.1009245-7-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0152.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b3::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB8445:EE_
X-MS-Office365-Filtering-Correlation-Id: 2d693d98-51d0-4c9c-552e-08db561df3a4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d693d98-51d0-4c9c-552e-08db561df3a4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 14:58:04.6831
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: QG8aJ2VRx1DtyW1IYHNh8TE9Dm5qLJye0NiV1WP1dnhTwsnFTkOi3kIhAm0SlCFtV/GAQ+mtS3oKy4M0DknkzQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB8445

On 15.05.2023 16:42, Andrew Cooper wrote:
> --- a/xen/arch/x86/cpu-policy.c
> +++ b/xen/arch/x86/cpu-policy.c
> @@ -408,6 +408,25 @@ static void __init calculate_host_policy(void)
>      p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
>  }
>  
> +static void __init guest_common_max_feature_adjustments(uint32_t *fs)
> +{
> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
> +    {
> +        /*
> +         * MSR_ARCH_CAPS is just feature data, and we can offer it to guests
> +         * unconditionally, although limit it to Intel systems as it is highly
> +         * uarch-specific.
> +         *
> +         * In particular, the RSBA and RRSBA bits mean "you might migrate to a
> +         * system where RSB underflow uses alternative predictors (a.k.a
> +         * Retpoline not safe)", so these need to be visible to a guest in all
> +         * cases, even when it's only some other server in the pool which
> +         * suffers the identified behaviour.
> +         */
> +        __set_bit(X86_FEATURE_ARCH_CAPS, fs);
> +    }
> +}

Wouldn't this be better accompanied by marking the bit !a in the public header?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 15:06:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 15:06:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535296.833003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywFm-0002kI-Sr; Tue, 16 May 2023 15:06:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535296.833003; Tue, 16 May 2023 15:06:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywFm-0002kB-PO; Tue, 16 May 2023 15:06:18 +0000
Received: by outflank-mailman (input) for mailman id 535296;
 Tue, 16 May 2023 15:06:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pywFl-0002k1-FI; Tue, 16 May 2023 15:06:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pywFl-00021t-8G; Tue, 16 May 2023 15:06:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pywFk-00083c-QB; Tue, 16 May 2023 15:06:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pywFk-00067c-PV; Tue, 16 May 2023 15:06:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lC5jIIG6BldvXpAc07krTDrqSxnpIvuAx5hC/NyH39Q=; b=x6kH+qw6/tfYOP5KmXvl9WH7fW
	9kLVur1vO8YB2nvBfOYALfU3z0vkGs1hgSVErZR1B21qS1YQkFdv3mmIxKw2Xh0Lga+OcM4jaakUP
	cEdxNZgdY8w4RPoCS5EfyNq4o3pkvzu2ZSSrknpK1F4wBY5/c625U42PwAsOCeY30iDo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180677-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180677: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-armhf:<job status>:broken:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    xen-unstable:build-armhf:host-build-prep:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    xen-unstable:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=56e2c8e5860090a35d5f0cafe168223a2a7c0e62
X-Osstest-Versions-That:
    xen=b8be19ce432a2edd69c0673768a0beeec77f795a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 15:06:16 +0000

flight 180677 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180677/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 180671
 build-armhf                   5 host-build-prep          fail REGR. vs. 180671
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180671

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180671
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180671
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180671
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180671
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180671
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180671
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180671
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180671
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180671
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 xen                  56e2c8e5860090a35d5f0cafe168223a2a7c0e62
baseline version:
 xen                  b8be19ce432a2edd69c0673768a0beeec77f795a

Last test of basis   180671  2023-05-15 13:38:21 Z    1 days
Testing same since   180677  2023-05-15 23:07:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit 56e2c8e5860090a35d5f0cafe168223a2a7c0e62
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 10 19:58:43 2023 +0100

    x86/cpuid: Calculate FEATURESET_NR_ENTRIES more helpfully
    
    When adding new featureset words, it is convenient to split the work into
    several patches.  However, GCC 12 spotted that the way we prefer to split the
    work results in a real (transient) breakage whereby the policy <-> featureset
    helpers perform out-of-bounds accesses on the featureset array.
    
    Fix this by having gen-cpuid.py calculate FEATURESET_NR_ENTRIES from the
    comments describing the word blocks, rather than from the XEN_CPUFEATURE()
    with the greatest value.
    
    For simplicity, require that the word blocks appear in order.  This can be
    revisited if we find a good reason to have blocks out of order.
    
    No functional change.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 16 15:12:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 15:12:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535303.833013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywLq-0004Fd-Hp; Tue, 16 May 2023 15:12:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535303.833013; Tue, 16 May 2023 15:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywLq-0004FW-FD; Tue, 16 May 2023 15:12:34 +0000
Received: by outflank-mailman (input) for mailman id 535303;
 Tue, 16 May 2023 15:12:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pywLp-0004FM-3V; Tue, 16 May 2023 15:12:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pywLo-0002Hi-Pv; Tue, 16 May 2023 15:12:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pywLo-0008BU-CX; Tue, 16 May 2023 15:12:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pywLo-0006cY-C7; Tue, 16 May 2023 15:12:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L62yjT5XGxixfeeuX+vzBTaNJt+i19zNIH86oZFL7bQ=; b=UWH9TQ9G3fhOlM9aX6iZ+rdgbx
	JwXVAoiFJ1/c/YnCtTZGzaWiscQP6+bom69MczAgjEHS1S5kNpc6+weFNUYQtW9n+kvnJmJcURyAP
	QBZmoz/8v9srgGElJsvES8+okiTuiaTc0FSl95auyogFBf5MwIvX/jW5V9eu+vDOhU1Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180681-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 180681: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    seabios=be7e899350caa7b74d8271a34264c3b4aef25ab0
X-Osstest-Versions-That:
    seabios=ea1b7a0733906b8425d948ae94fba63c32b1d425
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 15:12:32 +0000

flight 180681 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180681/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 176322
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 176322
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 176322
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 176322
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 176322
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 seabios              be7e899350caa7b74d8271a34264c3b4aef25ab0
baseline version:
 seabios              ea1b7a0733906b8425d948ae94fba63c32b1d425

Last test of basis   176322  2023-02-02 01:57:05 Z  103 days
Testing same since   180681  2023-05-16 11:10:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   ea1b7a0..be7e899  be7e899350caa7b74d8271a34264c3b4aef25ab0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 16 15:15:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 15:15:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535314.833058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywOE-0005an-Kt; Tue, 16 May 2023 15:15:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535314.833058; Tue, 16 May 2023 15:15:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywOE-0005Zc-HF; Tue, 16 May 2023 15:15:02 +0000
Received: by outflank-mailman (input) for mailman id 535314;
 Tue, 16 May 2023 15:15:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IEz5=BF=xenbits.xen.org=jbeulich@srs-se1.protection.inumbo.net>)
 id 1pywOC-0004qr-GP
 for xen-devel@lists.xen.org; Tue, 16 May 2023 15:15:00 +0000
Received: from mail.xenproject.org (mail.xenproject.org [104.130.215.37])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 68adb48a-f3fc-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 17:14:54 +0200 (CEST)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <jbeulich@xenbits.xen.org>)
 id 1pywNt-0002Kn-R5; Tue, 16 May 2023 15:14:41 +0000
Received: from jbeulich by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <jbeulich@xenbits.xen.org>)
 id 1pywNt-00034k-MT; Tue, 16 May 2023 15:14:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68adb48a-f3fc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=FSgKutlkty7XrjsLRVvpNZO6nUhKYovxPjAqthRaHbQ=; b=ovfB4G/yeLEMgz9vOE3aL5+5wx
	h7gfJpfRfc1U/gf3oQC+VFUEQlfUIXXYlJJfrcv4BfCfOpIoAEnVs3HAgIWWIV+/n5sgvXmQBZ8tG
	D7IklvZiR1JeDJzTmHTfQJgsCVwS0Xa4csgHisszbvBer3dO/95cJNb+4c6Teo+R20X4=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 431 v1 (CVE-2022-42336) - Mishandling of
 guest SSBD selection on AMD hardware
Message-Id: <E1pywNt-00034k-MT@xenbits.xenproject.org>
Date: Tue, 16 May 2023 15:14:41 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2022-42336 / XSA-431

          Mishandling of guest SSBD selection on AMD hardware

ISSUE DESCRIPTION
=================

The current logic to set SSBD on AMD Family 17h and Hygon Family 18h
processors requires that the setting of SSBD be coordinated at a core
level, as the setting is shared between threads.  Logic was introduced
to keep track of how many threads require SSBD active in order to
coordinate it; this logic relies on a per-core counter of threads that
have SSBD active.

When running on the mentioned hardware, it's possible for a guest to
underflow or overflow the thread counter, because each write to
VIRT_SPEC_CTRL.SSBD by the guest gets propagated to the helper that
does the per-core active accounting.  Underflowing the counter causes
the value to become saturated, so attempts by guests running on the
same core to set SSBD have no effect, because the hypervisor assumes
it is already active.
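The failure mode can be illustrated with a toy model of the per-core
reference counter described above.  This is not Xen code: the class,
method, and return strings are invented for illustration, and 32-bit
unsigned wraparound stands in for the saturation behaviour.

```python
# Toy model of a per-core SSBD reference counter: the first thread to
# request SSBD writes the MSR, the last to release it writes it back,
# and intermediate requests are assumed already handled.

UINT_MAX = 2**32 - 1  # saturated value after an unsigned underflow

class CoreSSBD:
    def __init__(self):
        self.count = 0  # threads on this core wanting SSBD active

    def set_ssbd(self, enable):
        if enable:
            self.count = (self.count + 1) & UINT_MAX
            if self.count == 1:
                return "write MSR: SSBD on"   # first requester flips hardware
        else:
            self.count = (self.count - 1) & UINT_MAX
            if self.count == 0:
                return "write MSR: SSBD off"  # last requester flips hardware
        return "no MSR write"  # hypervisor assumes the state is already right

core = CoreSSBD()
core.set_ssbd(False)           # guest disables without ever enabling...
assert core.count == UINT_MAX  # ...so the counter underflows and saturates
# From this point an enable never reaches the hardware:
assert core.set_ssbd(True) == "no MSR write"
```

The fix in xsa431.patch takes the corresponding approach of tracking,
per CPU, whether legacy SSBD is currently active, and only propagating
a write when the selected value actually changes.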

IMPACT
======

An attacker with control over a guest can mislead other guests into
observing SSBD active when it is not.

VULNERABLE SYSTEMS
==================

Only Xen version 4.17 is vulnerable.

Only x86 AMD systems are vulnerable.  The vulnerability can be leveraged
by and affects only HVM guests.

MITIGATION
==========

Running PV guests only will prevent the vulnerability.

Setting `spec-ctrl=ssbd` on the hypervisor command line will force SSBD
to be unconditionally active.

NOTE REGARDING LACK OF EMBARGO
==============================

This issue was discussed in public already.

RESOLUTION
==========

Applying the attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa431.patch           xen-unstable - Xen 4.17.x

$ sha256sum xsa431*
e71a8b7e251adf4832a4de9e452c2fd895a56314729c54698d10e344f1996a99  xsa431.patch
$
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmRjkhsMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZDb8H/0vKLOgBhwKCVc8VYm59FIALd69k4qCLcwwfDuro
jFum5ATC3Cbx+iEXD2URFY6O+eE71mMBqw3/GT/BiKvsBHQhX5lsJUpxZFscqW9J
diM69a9BYuNNy+qW3TsslRsW9WGHH5bZoAhxpNKgciE17svJ76IRUsgNf806VRX+
VBI61wK2s9oqzfTazhQVR9zxFLANTyw7M4EtUXs0y49IUFjnSeVpW7/PdoloPC1C
m0SG6HSIJ4bH+yAWMqY5GYYVgJOkaStxEM6YLGjT/V078xcDyW2cie3BOtQ8/BI0
FJ7iwEh932k7VLtd+htBF3vo7CD+teGneeaktqKK2h55ps0=
=dmhW
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa431.patch"
Content-Disposition: attachment; filename="xsa431.patch"
Content-Transfer-Encoding: base64

RnJvbSA5YzAzMzgwZmM5ZTMyOGYwY2NiYTg2MGNiZTA5ZWY1OGVhMzY2Zjcx
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBSb2dlciBQYXUgTW9u
bmUgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpEYXRlOiBXZWQsIDIyIE1hciAy
MDIzIDExOjUyOjA3ICswMTAwClN1YmplY3Q6IFtQQVRDSF0geDg2L2FtZDog
Zml4IGxlZ2FjeSBzZXR0aW5nIG9mIFNTQkQgb24gQU1EIEZhbWlseSAxN2gK
TUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBj
aGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQK
ClRoZSBjdXJyZW50IGxvZ2ljIHRvIHNldCBTU0JEIG9uIEFNRCBGYW1pbHkg
MTdoIGFuZCBIeWdvbiBGYW1pbHkgMThoCnByb2Nlc3NvcnMgcmVxdWlyZXMg
dGhhdCB0aGUgc2V0dGluZyBvZiBTU0JEIGlzIGNvb3JkaW5hdGVkIGF0IGEg
Y29yZQpsZXZlbCwgYXMgdGhlIHNldHRpbmcgaXMgc2hhcmVkIGJldHdlZW4g
dGhyZWFkcy4gIExvZ2ljIHdhcyBpbnRyb2R1Y2VkCnRvIGtlZXAgdHJhY2sg
b2YgaG93IG1hbnkgdGhyZWFkcyByZXF1aXJlIFNTQkQgYWN0aXZlIGluIG9y
ZGVyIHRvCmNvb3JkaW5hdGUgaXQsIHN1Y2ggbG9naWMgcmVsaWVzIG9uIHVz
aW5nIGEgcGVyLWNvcmUgY291bnRlciBvZgp0aHJlYWRzIHRoYXQgaGF2ZSBT
U0JEIGFjdGl2ZS4KCkdpdmVuIHRoZSBjdXJyZW50IGxvZ2ljLCBpdCdzIHBv
c3NpYmxlIGZvciBhIGd1ZXN0IHRvIHVuZGVyIG9yCm92ZXJmbG93IHRoZSB0
aHJlYWQgY291bnRlciwgYmVjYXVzZSBlYWNoIHdyaXRlIHRvIFZJUlRfU1BF
Q19DVFJMLlNTQkQKYnkgdGhlIGd1ZXN0IGdldHMgcHJvcGFnYXRlZCB0byB0
aGUgaGVscGVyIHRoYXQgZG9lcyB0aGUgcGVyLWNvcmUKYWN0aXZlIGFjY291
bnRpbmcuICBPdmVyZmxvd2luZyB0aGUgY291bnRlciBpcyBub3Qgc28gbXVj
aCBvZiBhbgppc3N1ZSwgYXMgdGhpcyB3b3VsZCBqdXN0IG1ha2UgU1NCRCBz
dGlja3kuCgpVbmRlcmZsb3dpbmcgaG93ZXZlciBpcyBtb3JlIHByb2JsZW1h
dGljOiBvbiBub24tZGVidWcgWGVuIGJ1aWxkcyBhCmd1ZXN0IGNhbiBwZXJm
b3JtIGVtcHR5IHdyaXRlcyB0byBWSVJUX1NQRUNfQ1RSTCB0aGF0IHdvdWxk
IGNhdXNlIHRoZQpjb3VudGVyIHRvIHVuZGVyZmxvdyBhbmQgdGh1cyB0aGUg
dmFsdWUgZ2V0cyBzYXR1cmF0ZWQgdG8gdGhlIG1heAp2YWx1ZSBvZiB1bnNp
Z25lZCBpbnQuICBBdCB3aGljaCBwb2ludHMgYXR0ZW1wdHMgZnJvbSBhbnkg
dGhyZWFkIHRvCnNldCBWSVJUX1NQRUNfQ1RSTC5TU0JEIHdvbid0IGdldCBw
cm9wYWdhdGVkIHRvIHRoZSBoYXJkd2FyZSBhbnltb3JlLApiZWNhdXNlIHRo
ZSBsb2dpYyB3aWxsIHNlZSB0aGF0IHRoZSBjb3VudGVyIGlzIGdyZWF0ZXIg
dGhhbiAxIGFuZAphc3N1bWUgdGhhdCBTU0JEIGlzIGFscmVhZHkgYWN0aXZl
LCBlZmZlY3RpdmVseSBsb29zaW5nIHRoZSBzZXR0aW5nCm9mIFNTQkQgYW5k
IHRoZSBwcm90ZWN0aW9uIGl0IHByb3ZpZGVzLgoKRml4IHRoaXMgYnkgaW50
cm9kdWNpbmcgYSBwZXItQ1BVIHZhcmlhYmxlIHRoYXQga2VlcHMgdHJhY2sg
b2Ygd2hldGhlcgp0aGUgY3VycmVudCB0aHJlYWQgaGFzIGxlZ2FjeSBTU0JE
IGFjdGl2ZSBvciBub3QsIGFuZCB0aHVzIG9ubHkKYXR0ZW1wdCB0byBwcm9w
YWdhdGUgdGhlIHZhbHVlIHRvIHRoZSBoYXJkd2FyZSBvbmNlIHRoZSB0aHJl
YWQKc2VsZWN0ZWQgdmFsdWUgY2hhbmdlcy4KClRoaXMgaXMgWFNBLTQzMSAv
IENWRS0yMDIyLTQyMzM2CgpGaXhlczogYjIwMzBlNjczMGEyICgnYW1kL3Zp
cnRfc3NiZDogc2V0IFNTQkQgYXQgdkNQVSBjb250ZXh0IHN3aXRjaCcpClJl
cG9ydGVkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRy
aXguY29tPgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dl
ci5wYXVAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KLS0tCiB4ZW4vYXJjaC94ODYvY3B1L2FtZC5j
IHwgMTYgKysrKysrKysrKysrKysrKwogMSBmaWxlIGNoYW5nZWQsIDE2IGlu
c2VydGlvbnMoKykKCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvY3B1L2Ft
ZC5jIGIveGVuL2FyY2gveDg2L2NwdS9hbWQuYwppbmRleCBjYWFmZTQ0NzQw
MjEuLjlhMWEzODU4ZWRkNCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2Nw
dS9hbWQuYworKysgYi94ZW4vYXJjaC94ODYvY3B1L2FtZC5jCkBAIC03ODMs
MTIgKzc4MywyMyBAQCBib29sIF9faW5pdCBhbWRfc2V0dXBfbGVnYWN5X3Nz
YmQodm9pZCkKIAlyZXR1cm4gdHJ1ZTsKIH0KIAorLyoKKyAqIGxlZ2FjeV9z
c2JkIGlzIGFsd2F5cyBpbml0aWFsaXplZCB0byBmYWxzZSBiZWNhdXNlIHdo
ZW4gU1NCRCBpcyBzZXQKKyAqIGZyb20gdGhlIGNvbW1hbmQgbGluZSBndWVz
dCBhdHRlbXB0cyB0byBjaGFuZ2UgaXQgYXJlIGEgbm8tb3AgKHNlZQorICog
YW1kX3NldF9sZWdhY3lfc3NiZCgpKSwgd2hlcmVhcyB3aGVuIFNTQkQgaXMg
aW5hY3RpdmUgaGFyZHdhcmUgd2lsbAorICogYmUgZm9yY2VkIGludG8gdGhh
dCBtb2RlIChzZWUgYW1kX2luaXRfc3NiZCgpKS4KKyAqLworc3RhdGljIERF
RklORV9QRVJfQ1BVKGJvb2wsIGxlZ2FjeV9zc2JkKTsKKworLyogTXVzdCBi
ZSBjYWxsZWQgb25seSB3aGVuIHRoZSBTU0JEIHNldHRpbmcgbmVlZHMgdG9n
Z2xpbmcuICovCiBzdGF0aWMgdm9pZCBjb3JlX3NldF9sZWdhY3lfc3NiZChi
b29sIGVuYWJsZSkKIHsKIAljb25zdCBzdHJ1Y3QgY3B1aW5mb194ODYgKmMg
PSAmY3VycmVudF9jcHVfZGF0YTsKIAlzdHJ1Y3Qgc3NiZF9sc19jZmcgKnN0
YXR1czsKIAl1bnNpZ25lZCBsb25nIGZsYWdzOwogCisJQlVHX09OKHRoaXNf
Y3B1KGxlZ2FjeV9zc2JkKSA9PSBlbmFibGUpOworCiAJaWYgKChjLT54ODYg
IT0gMHgxNyAmJiBjLT54ODYgIT0gMHgxOCkgfHwgYy0+eDg2X251bV9zaWJs
aW5ncyA8PSAxKSB7CiAJCUJVR19PTighc2V0X2xlZ2FjeV9zc2JkKGMsIGVu
YWJsZSkpOwogCQlyZXR1cm47CkBAIC04MTYsMTIgKzgyNywxNyBAQCB2b2lk
IGFtZF9zZXRfbGVnYWN5X3NzYmQoYm9vbCBlbmFibGUpCiAJCSAqLwogCQly
ZXR1cm47CiAKKwlpZiAodGhpc19jcHUobGVnYWN5X3NzYmQpID09IGVuYWJs
ZSkKKwkJcmV0dXJuOworCiAJaWYgKGNwdV9oYXNfdmlydF9zc2JkKQogCQl3
cm1zcihNU1JfVklSVF9TUEVDX0NUUkwsIGVuYWJsZSA/IFNQRUNfQ1RSTF9T
U0JEIDogMCwgMCk7CiAJZWxzZSBpZiAoYW1kX2xlZ2FjeV9zc2JkKQogCQlj
b3JlX3NldF9sZWdhY3lfc3NiZChlbmFibGUpOwogCWVsc2UKIAkJQVNTRVJU
X1VOUkVBQ0hBQkxFKCk7CisKKwl0aGlzX2NwdShsZWdhY3lfc3NiZCkgPSBl
bmFibGU7CiB9CiAKIC8qCi0tIAoyLjQwLjAKCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue May 16 15:29:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 15:29:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535387.833068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywbl-0008Rz-9b; Tue, 16 May 2023 15:29:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535387.833068; Tue, 16 May 2023 15:29:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywbl-0008Rs-5o; Tue, 16 May 2023 15:29:01 +0000
Received: by outflank-mailman (input) for mailman id 535387;
 Tue, 16 May 2023 15:29:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pywbk-0008Rm-1G
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 15:29:00 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20622.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::622])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5fc8d2d7-f3fe-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 17:28:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7364.eurprd04.prod.outlook.com (2603:10a6:20b:1db::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.32; Tue, 16 May
 2023 15:28:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 15:28:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5fc8d2d7-f3fe-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SV2lATIUiHijZOUMnALgI20oFD2NvxMFCU1PVhjoqu0CvkAc8YC8h88sY81IFT+hKCdxc200yOzR6z3gl02qqentiSlrTmzOVABIJBRmQ51sp1bQAM7pRNFkaGxCcUUcXT8o5zjuB9dqBvpY7qf1m3zRJLVb7zuY0PjjS54CVth+GAuTS2SltiWHG86qxr/lH0HB+oFM4trTe/aGmv/jlpXFP/HtTEjK6z0e8MbQu/gO1ZIb4HoDjejYA3qqdGkoef+ljCSC6qyVBm6Ri1Vi5rDf0Jr9sTVh33CYm9exu0jxUSpDqfHB/Ewc4qgnAKbx/rqKgOo/vfjZUF3WrpZkyg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=t4RnTQGIwyBi7nihRbezGj3BETawT/XGOe3PTDLectU=;
 b=LsCXpLgSt8WJMGfo7g6oOc8DovG4txEdQceq5Jg1bt0EEBjFbyGLIFIO2zj+j8BTNr413IvB1DHw+19W6j59mstucm6H5Rh7J8s384KUfLy09XUUai2530sjo2/PtVmvbdeZKqMs7aRyQ0RdEunXBFx3rc2KL5KnQ0Gh/RQsJlw7+bjaAeYm3VeIaojsx2WZ68FMz2fZS8W7H+COw6bK142pLQyqZ80f/8G9GS8RELOuDvNiEBbFTSIdx39Ctl2R+X2yqajZosE51xTJet+PBAzsVcpr86Hoke8t+FbO1QvIep7XQYu28SKfRpTFwiZI3VzHMiWG4F3ocxn49nzpfg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=t4RnTQGIwyBi7nihRbezGj3BETawT/XGOe3PTDLectU=;
 b=35PAU6e4KDZk4zlHtBKgXAFvPa6mN2u/i0uTaBTbdKkK7qkqXzJoRwgnoE7YpZgt9Mugplq8StYFIXILRh6QJHAhp2gw7FcF0YzTJxf83lEs8yyJZ2emBt90OK7fQqCU7knzfVJ98WWFMjzh7+IahuWtUqqgjayNse1rw9zEJsg86K/DfmobPeBRjscmn46prrKUUwly5WLCywlPPRH7QpTg4he78DBjabb7Zyu8xFqhXKMh1dsG/Ntg/SJJEn6gbvY/NgrJkamdl9png/PCTIAZpZfDIQNDs4coWHFRBGgejA8uY1MnZd4ew+2skwefGBeDyVB6PHDJwGQA+VCkAQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <172e9ee5-4041-2646-d793-2e06d3212b57@suse.com>
Date: Tue, 16 May 2023 17:28:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] docs/misra: adds Mandatory rules
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, roger.pau@citrix.com,
 Bertrand.Marquis@arm.com, julien@xen.org,
 Stefano Stabellini <stefano.stabellini@amd.com>,
 xen-devel@lists.xenproject.org
References: <20230511232237.3720769-1-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230511232237.3720769-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0046.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7364:EE_
X-MS-Office365-Filtering-Correlation-Id: 95088de4-1c60-4080-6067-08db562241e3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	jnS4aemAkZUy2S8mRJI9eiJv4OsCgqnRVFhmGS2r3jxUryETlkbqDw23bWPgd0Rm9rXiLHRgwEKXNzoTOEqiLx5IBlLL81Hu9e6xVWx7EqsVlmFX1w40RgXvuOARHYU23rPFudyfD/RhCsIxV/jBuOeiypd2tlZGoh9WzNWlhXNo2KLLn+YZ1+IfkW+/JSL+Cx366Yeq+8SDQytMFPfXT7y1+HjH6M3nfQDWYyLMZ29OuVB/O8xsDWX1YE5ikoOA4xDIrtokb0NN0rE325RNlnCCudPg+y9dtnsX2WOB1R9suHZ7zRCtIEVMeEUC+DjnryPKaRwSkG2ooq+LH+uz61QW4poCFeLFp0BxSNLWq8f2pzkeFpbVWfnlm/M+hthdDe+77R1BXEzVQeY1kno6QGGuANnlJZ8UujXYBFghVyTTxL0s3hL4jO/LcU5JyFssQXchDj4aRKPg5XHEUxrRx8ZuB66/h3fAt9Cdoft3YWwN1oNPdGMFLx47km4C7COrmwKRe3yJZnH9m+BlRMLY2tWFs+4wVAUHR+s3DPY+2I3yWtGQ5op+c4AV4HPjvBrzEXhdNV5OrFIKkRvjvsLMuNbo939ryYICqj3I07K80CKqrEVNrgDG1yy1WAI7JyAWvVMGBgbbKpsM7e0KSE1G1w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(39860400002)(376002)(396003)(346002)(136003)(451199021)(5660300002)(8936002)(41300700001)(4326008)(66476007)(66556008)(8676002)(478600001)(6916009)(2906002)(66946007)(31686004)(4744005)(316002)(6486002)(6512007)(6506007)(26005)(53546011)(186003)(2616005)(36756003)(86362001)(38100700002)(31696002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YkRnZExuTUVoSUk3bUkrNjV6L294b1hzZE1hSEg3d081cVorUzdHdGdsOXVV?=
 =?utf-8?B?WHlsY2krcEdVbHpjRml2Q2R0VnhBclRrVk9VdCtJTnNaalQ2N2ZvNmlsbjRU?=
 =?utf-8?B?QStJSGtEMVFzLy8vSWY1bnljREhvYWpLVVJsVk9YcWs1NE5FOHJkb0tUK0Yx?=
 =?utf-8?B?K3J1OXRiUkpKdmdRUWxKV0FlZFo4L0o1YnBQVE9pTWEzUE1RKzdWYVlxRVNr?=
 =?utf-8?B?VGJtZmFFRFhIcitucGovVGF6b1JoSzIxNDNwTXVwaU1Sb1BlNkpkU3BaZTZ0?=
 =?utf-8?B?ejBRUUpsczBVOTc0YkZkRkFFaGoxbkJnVGJSc21vWXp0aEZVdFBlUmtZeDBY?=
 =?utf-8?B?MkZNaDBTL0txWnM2UGR4eWFYbGNNbmtsM0JqeXR1Wkk5YXdNTmZEZm9BNzlI?=
 =?utf-8?B?cjloQ1B1TVRUeFpBNUY0NFdDSkFiMnFyZGhzTm4yY1hMUXBHUWdqQ3NHVHJ4?=
 =?utf-8?B?V0wxRGlLWWhQTHFkZWx4VFFTcWZsOGFVeDJhWHpMUlErNk1DdzVQU2dNS2t2?=
 =?utf-8?B?elhKK1Eza1Qxc0N1c0ZMYzZ5b2hiNXhvQ3dXSGxMT1VkZHB5K1UxYXR2NXZQ?=
 =?utf-8?B?ck05YkV3NnRuYmREVmpJblI5ZHhiM0JmdlFPRDhCRDI3SUU4YlJnaGJWZ3Zk?=
 =?utf-8?B?SkFabmNEZDJmR1poa0xYazRVeGt3K0hBL282dE9OSzVHMEpIS0ZpekQ1Qk5a?=
 =?utf-8?B?YVdjakFUcFVVWjdXRS9QZGpuMXNkSFpYRlZlRExsOG83OU9raG1MbnR3Nnp6?=
 =?utf-8?B?OXZLcHpYMmJ2ZHhYcTN5WmF5eWVQd2FsOEZQd1oxK3N1ODJuek5sUjRXWE1T?=
 =?utf-8?B?Rk9GV2pLdlQ3cUEyeXdCY2hrUzBrNGdhQ2czb1dBbGJEUHd2bHl0MXBOaUJ0?=
 =?utf-8?B?Yk9ZekwwUnNHOG9ZVVVvUThwcHlFUlI4ckV4U2pwbStaUGRBS3VzVGFNY2JC?=
 =?utf-8?B?NUM0a1M1TmZGWDZ0K1B5SWxqbkZCZXdUZ1Y3eFBHbEVhU2RPQzlVODNQdSt2?=
 =?utf-8?B?dmZZcXdLRzFmT2RSK0dJY0RvQUt1NVhTYk1POTRWN1RQL0JvVXJhKzRFQzRP?=
 =?utf-8?B?cWcwSmdsMkh1QTVrdjdEQ1VwOERXbzNWL25CdmVHcGkzUmdDZndLTVBmZWNt?=
 =?utf-8?B?ZmozZnJSa25uSTRpbmR0WnJtYTVxQW8yeEVCNnBKb0tkTFlPN2FXSE1oeVhM?=
 =?utf-8?B?VHpVbGlGV1VBUUdqTE1Kb0hXaWRpZlJZUCtUVXVLTjJ6TC9WNnhaZi9ndU1x?=
 =?utf-8?B?dXVvY3AyZXhNLzZlb3pObmJQTVhFV0lmMzhsZXAzY0pCclFpN2NrM25HTFZh?=
 =?utf-8?B?aWxVZzAzWnM1bEp5a0E4TGd5YkZSREorSTFFOEt5TjdvMjIyVHRsWHdhQ1dY?=
 =?utf-8?B?RUIzRExSZ1UvbFdGeGx0a0NFKzVTUEEvQm05My9uRWg1dnM1bWxyd3dwQWE5?=
 =?utf-8?B?Vk80NzVpTXIrd2RQSTVieVVwK2JacklzanZ0UEdZZlZiUFFnT0ZWY0VqdU9E?=
 =?utf-8?B?T0Z6allieEo0OHdlclk4aXJHd3F1SGt3RWRTc3dhQVhVTDZoQ08wS3lHdmpW?=
 =?utf-8?B?QkVRWlM2eFBqNGhWUklKMXVQMWVqbnM0YlN0OUhWYUJYRnkzWW10dFZHNVlX?=
 =?utf-8?B?YnBnbGFyK0FzNUJ0dWhtZkdhVmYzc2ExejNlWmRLR01TakR2K3RhbkJHaDM5?=
 =?utf-8?B?SGsrVWh1cmNmeEUzcFhRTUROQ01YckdRSVQyOWV4M2dVNlEzc1lCZ0xOMmtP?=
 =?utf-8?B?cUFIS2s0WEJrNXM2ZE1oRDhoaHNBL0lKczhwenhYYk1ydVhQOFZHV1Rub1A0?=
 =?utf-8?B?TlF1OVdwZzM1RVFpWjZ3cjJ4LzlMc1BZejhCQTFSWDB6c0JOcHM5SlUwbnoz?=
 =?utf-8?B?cllqcFF3WkdweE5CeHRtRG1Ia2Z2b3U1WERDdGdoa05mOTlsYVV0emh3QTgr?=
 =?utf-8?B?QnVZOTUzVFV3NUQzMWNmQnN4QlQ0ZnNqUWVlNjJoV3hVdDltNGJwK3ptZU9n?=
 =?utf-8?B?c0NVOFoxUWU1RFc4MDh6QnJ2M2hMdGo2WE1jeWNPbVlaZU5KLytBZTN1M0dG?=
 =?utf-8?B?ZzBqSmNLMTQzekROZ1dqbUV4SUVyK092cjZVenlBdVVnVk5HcGNVZEprSkY5?=
 =?utf-8?Q?xVwjWWe8jWuLh/m9/l2ZPrEuh?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 95088de4-1c60-4080-6067-08db562241e3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 15:28:54.1615
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 64hcUgUqol8vNO7SawwWE1qVDHW2my4GjK0BsN87BkX58/O+41OvIDor4QIWggSmkfcf69C2iC4oRYyndkQuEw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7364

On 12.05.2023 01:22, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> Add the Mandatory rules agreed by the MISRA C working group to
> docs/misra/rules.rst.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>

Slightly hesitantly
Acked-by: Jan Beulich <jbeulich@suse.com>
primarily because I don't see the point in enumerating / discussing
rules we're not affected by (as long as this list is for the
hypervisor only).

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 15:41:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 15:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535393.833077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywo8-0002Rk-CT; Tue, 16 May 2023 15:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535393.833077; Tue, 16 May 2023 15:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywo8-0002Rd-9n; Tue, 16 May 2023 15:41:48 +0000
Received: by outflank-mailman (input) for mailman id 535393;
 Tue, 16 May 2023 15:41:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tkax=BF=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pywo6-0002RX-J6
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 15:41:46 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28b18ef3-f400-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 17:41:44 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4GFfZY7Z
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 16 May 2023 17:41:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28b18ef3-f400-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1684251695; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=ESkjUTCfm2BXPpE2ST2+T4JIo9QE6syo75Ajhv6NnU0fYbMl/SYsErXI/7r6OVkb1u
    qZpb9/LesThBR4KrYKoLjkr5ArkTZ2fosP9na7fkBvNE0IJ4Y4i4rHVWaneLQOCuFITC
    L1I3LmL8axkHErGhe3yWMGnmXeK0c9dqedklRfhVrOgv8hORY4TeiruoiLdH70+DTYDh
    MIWoYZSQFqPVnm5a+og8mkPWpeCNchh4Zaft+QLfl3H9D1Er3MUySkVzF4MGA2aRRorJ
    1DC9ipf5G0G+uYz35E0Js/33GKA8VT9KQAdxrb4rxEQLZ13KddiWYcmF8Tq9paS3SAWX
    6/xQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1684251695;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=S4Dal8qF6Ltkztx7Q4K2kzVp9CzmK+IBPCjFoEOJkgQ=;
    b=LV0LRMi3qfDSFGDQyMcoGbj+BCCSNQpqzbxuUcMyj+S4Th+rb+kuXcnrSc+Xgs5ppy
    +FDAQEFXNBiR3sLM4Xj2HCrgbw70aXdfsoDm4Nt60AfgRvPTdripDaaJ4XpmU2dPLudG
    QxeHrK/irB6Fvtg7yTsEioa49SlgZBpvlqWCHSma27+qNmvljvccRca6OSxlyQbv3k6e
    it56BYn3XHmbz6m7okiuC7PvwoxKoz4RpGRHHuRPWgAizcIdGCMoYLfvXwjLbrL+QO8E
    ONFbwOjVkthrPLQRtAorDECC7qC8BnWyoksQ14Ngn/PT7Owwx5NMQOavM31cho2hUbir
    BOfw==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1684251695;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=S4Dal8qF6Ltkztx7Q4K2kzVp9CzmK+IBPCjFoEOJkgQ=;
    b=G8agVF/GMIVsVcgOXVX9cPIqvk43n8jHda50Kd/QqeYO43VHLFNBOmrg9UYwq0H7A3
    YXs1HTjfxkaHJB37iLMADNgQXZN30jwTsO9evWi4TBUEc2XUZ+RKot65uM1w+TXSEOQd
    4VxaLFr1g7U+pHxrl57bDoI0fDi2Hixf8rYvrRuA2g2+f6mlAl/Ghn0U+RZe2yvRW1j9
    jrnZEsQ2AqBpmriXw5W//4rlB50UFFA2uIzCvbQ/FpF2xIpS/PDTuycn1fy+QfNmfpKi
    4kWSwGR//9/sHQfvmS6X0LkaWlxO0f1+cemcExZnLo1vUM2lMjQLxZyUJCSLd25LFlaD
    eRGA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1684251695;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=S4Dal8qF6Ltkztx7Q4K2kzVp9CzmK+IBPCjFoEOJkgQ=;
    b=HYm53ipprvzopY8/Zxr9mYa4TIFFFrH8lc3ocyOfZfVAsuSYt9Rwe9rjgjQuQ0fHZW
    aWbJMpQysGNjShyWLACw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuznLRsvx4Sq0NeWsWjIFVg=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v1] automation: update documentation about how to build a container
Date: Tue, 16 May 2023 15:41:27 +0000
Message-Id: <20230516154127.11622-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

The command used in the example is different from the command used in
the GitLab CI pipelines. Adjust it to simulate what will be used by CI.
This is essentially the build script, which is invoked with a number of
expected environment variables such as CC, CXX and debug.

In addition, the input should not be a tty: a non-tty input disables
colors from meson and interactive questions from kconfig.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 automation/build/README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/automation/build/README.md b/automation/build/README.md
index 2d07cafe0e..1c040533fd 100644
--- a/automation/build/README.md
+++ b/automation/build/README.md
@@ -96,7 +96,8 @@ docker login registry.gitlab.com/xen-project/xen
 make -C automation/build suse/opensuse-tumbleweed
 env CONTAINER_NO_PULL=1 \
   CONTAINER=tumbleweed \
-  automation/scripts/containerize bash -exc './configure && make'
+  CONTAINER_ARGS='-e CC=gcc -e CXX=g++ -e debug=y' \
+  automation/scripts/containerize automation/scripts/build < /dev/null
 make -C automation/build suse/opensuse-tumbleweed PUSH=1
 ```
 


From xen-devel-bounces@lists.xenproject.org Tue May 16 15:42:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 15:42:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535398.833088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywp8-0002y4-Mm; Tue, 16 May 2023 15:42:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535398.833088; Tue, 16 May 2023 15:42:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pywp8-0002xx-JU; Tue, 16 May 2023 15:42:50 +0000
Received: by outflank-mailman (input) for mailman id 535398;
 Tue, 16 May 2023 15:42:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pywp7-0002xp-Ek
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 15:42:49 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2062d.outbound.protection.outlook.com
 [2a01:111:f400:fe16::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4e6cdb5e-f400-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 17:42:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7113.eurprd04.prod.outlook.com (2603:10a6:10:12a::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 15:42:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 15:42:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e6cdb5e-f400-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WRPOkUqqghNs+T1+f7QJXQIazpEDdeWdaghB51voz76DKX5xFAINZFqL2xoep87rYXNkhw+hFcnLfQJ0EVq/ZufcRZwH/xCwK71DuUd7Bw+4hoGwUuPdCbFErSJ7Q1SUvv2gdfaDtsUuSsp7OKpEN/+LRdv639Lyhs19HuYZOC16mMXQ+kswsvrmAYpuKG646gnVDIi0Ahic3rCkKHSaKk4EPD9mkkPQHVxqGk/e2/BIfHCzgiXIZT/SaNogvC8620H8tHWvvyPfcprEI1iW14hfwRTCEbB3uXen7wBsDcRDNKDRO26Q6ImRDZEemkM5vo3Y+hGU6sD30J1B2ubMwA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5NMD35Yuc/kijxQlK4gShHDcs8NAlHM861Ruu5CJ210=;
 b=DbreSjUzlGHpRdYbcC3yMC7PSpP9M9A2tK81xeScca2OmLOU1KTsrZtYMooBHs8NCLatZ/4BneBwGq0G80FHgXEMeq2hDovp/5lNxRyz5lzoEz7Yyti+dCMxHq6HHPSFWdvfTokJ1jrKp79P955bxEItu7taanzdY4xWYSnYN0XPeKVF3bColRcYoQxm1SKRJ1tG6SFN64vbd6oGnfNtouvpsthDjA9DVxtP3RCfw1VRNSh8tzTCPw+66tRz4QzAuwo5fe7G50wb6Lh7fUT+ecNtOaN3pBCmuR+6C9skK2FzlKou7yHNm3Kg0D1gAwjqj9j7Ars6khnXUwyRi+ffRw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5NMD35Yuc/kijxQlK4gShHDcs8NAlHM861Ruu5CJ210=;
 b=logwn2XKub35bwvxEnFHmjhhJCSinSeMYn9mmK2hR25ihtblIUWKjTiUo20dXjNAEAHaBga2JlPf9O2jQpgoTsopb4r6V8rsvqTnGQZFTUDdGX74Z7gdQ82eYjmmosL2wchxCwe57oiljBZJYGL4mzr/BmCzrwUb/5ne2DADUuvWsTduD1pWqXpd5w+taZ0Mk6eJzwdULkVUlnrsEgpWSLzZYKKUzg9F3UErnBFDe7Qe6cjWz1Reyux61f0Vtczt8jJ5SmQXPgEYV07Y42MuV4JsCzrG6P3eLJAwtPKH8xisQ5ICxM60vG/VcxR2HBO7VP/7P7xftVCUjMreKXi/Vw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d1529686-ce06-a707-de9e-a4b28c9f2e02@suse.com>
Date: Tue, 16 May 2023 17:42:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v7 1/5] xen/riscv: add VM space layout
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
 <7b03dbf21718ed9c05859a629f4442167d74553c.1683824347.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <7b03dbf21718ed9c05859a629f4442167d74553c.1683824347.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0040.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB7113:EE_
X-MS-Office365-Filtering-Correlation-Id: 70c077a1-dcbc-4482-ab59-08db562430eb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JUuFwQwV4TOWbMSwsZllct1zQ2dG56DZd62fvegWwWcL0xp+2sbSqyd1pvwi2oPRitMD/99eaqFlka7RPPk1d6JJuojHTSBzVte6XFdqY5m9L+afH8pGe9I7MtL7QuubCepYRDMm/XHZQRDDGBHkAGpzF5Lw9otAG1owS71EV7fH5elgZFWAECT4MxDah6YVu8pHpN61lGer4Gqb2nMe9ng0q30CcDjTAC4zQ/TtEwhXMPV36/ClXWIGnbMMnvSlXIIHTXeFHIWLjwWy3p2VsOOVt1bGg/0fzAh4G+DXT7ltgT1tbFiDUoSHkDe3fG6C3TkaPNeE8fgwdi0qlWD3eKdBovIt28La5nSK77FlePPWn7dgmWDKCnx9R5hCl0glaiGmhUJuim5223NW+4sswd8xHqOBgJhPyTRurQ18jS+fXy7Gbg4avL/8P0nCjblHe9XgIUy0b23pdxXPi8dOHZWJ2c7M/Q0YYmzEB+LaczC6/W1BvH7Ov9H6bnrPvfDOA0zfMa0MFQ1b9wGvdM+VXfU51yNFpXG3l50g5JUMQb6/TLrB7SLDKedRkRrqWWu2T60DCL5KY32UBbEGeuntQzc55Aq92uXgRShGI3wUwGehpUt2bz6xH2LQKanc4mwqKfyTkiM5LTZXSZvxgwdDDA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(396003)(366004)(346002)(136003)(376002)(451199021)(53546011)(2906002)(31686004)(6506007)(6512007)(38100700002)(8936002)(54906003)(186003)(8676002)(26005)(478600001)(2616005)(5660300002)(86362001)(41300700001)(66946007)(31696002)(36756003)(316002)(66556008)(66476007)(6486002)(6916009)(4326008)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TEp2d2FlVlhwZ0NDTVMrYXJ5SExMSENvWnBCenlkdDhBdU5lbElJQ21mRk5x?=
 =?utf-8?B?MVZyajk5V0o0cXFDZ1FwYTJZZVZBWFVPL0F6ZFkrUzlZelMzbDRxWllQK3M2?=
 =?utf-8?B?Q2pZeWxsbHR0M0ZFM1VXQmJnMDcwd1ExYTNTY2FJbEEra1dYd3gwL2F1cUFR?=
 =?utf-8?B?L21BZzkzd1l5a2M3eGJzQ3JZUDZNd2IzektvTzRsUk1kZFhNS05Xczh4TGNh?=
 =?utf-8?B?TUowTU9QdXVSdEM2QWhrK2dlNlpvR0doTC9acURGTUwzNUJKSjlvUnJrMkNQ?=
 =?utf-8?B?dC9icWRvaEd1U0s3Z25OK1pOWUQvNnUvdHZlRE9HNDVzV2VxK0RZUWowRXBF?=
 =?utf-8?B?aXBqMEVWN2pKa09vcUVqTUlGRXZWdUVWSTc0SE9rMHE4VzJtcFBaeTZ5TEJj?=
 =?utf-8?B?Y0lzRVhjTlZYNlVTQzcyak4rTmY3cldJWW5mbW8ySWU5RXpoNTg5ZUZLWEpX?=
 =?utf-8?B?QUpPZmU0ckl5QVpzM2hINnczQ0E0dUUvRldmUk1idWtmckVKY2REaC9GeWVy?=
 =?utf-8?B?dEo1WUY2WHJkdEIxWDF5NGZ3TnZDdjRkUEhYSEJQaUsyUEQ2UllaVnMzU3ov?=
 =?utf-8?B?SnlyVlNBVEd1QzlzVTA2QlU1RkdIeDNiSU8xbXR5UGlIemk4UHFlRVRhVitz?=
 =?utf-8?B?Szd1V2lRd3BwR056NVRFRFpIaURkRnRMMU9nbnY4aWJhWVozSXNJT1htdlJ6?=
 =?utf-8?B?aVVzNGxzSS9pMTBXaU8xSlBiQkJYK0JibXY5TVA2TUg0eDlCQUkwa3V4N3Qy?=
 =?utf-8?B?cjdTL0srUlNZVmVucXBxQi9oUDF6Nk5hSUlRMnFualhja2J4M0FHUU9lQ3Fj?=
 =?utf-8?B?N2VwUXdmYzdwMEZ5WElpdGZaMHNTK3BPMG5wTmdGZmNNejcxUkpOSEdjbGpr?=
 =?utf-8?B?QjJCN3FMWG5EQnlqUFYyY2ZtdWdUd2IrZE93WVQ5ZExsZ2k0SnlQZ0g5NDBE?=
 =?utf-8?B?WFdTdmVRVllhZGVrNTF5WkRoZE4zbCtUOXZnQXp2S1A2aXhlcmdybGZTSlE4?=
 =?utf-8?B?UmdDUFAxOHZWZDFhc2JSekVleEJSd2Y3NGtUNEFLUk1RQ2poZXNQNzFId1pW?=
 =?utf-8?B?SGRnazhWRnBRZ1Y4Z2pJaXlwTWt1RFNiVUpSMlRuSHkzenlYN25nTmR3dWtU?=
 =?utf-8?B?N3lJYmsvczRsZEpXdlFtS3R5UTYzYkppeExUancweEdQaEhCcXIzR0N2ZHBa?=
 =?utf-8?B?TmpKS1RFbUIrS01VSkdQUWx6RGRWN3pra3IrQldacGFmSmEwMWNEOUlEbm96?=
 =?utf-8?B?YlltTXZNK2JuVXFUdjVLNFFTZUJoOUlNdksrMkZJUTlEOGhzTjJxVGhndmxh?=
 =?utf-8?B?eU9TQ2szWkNVYTlpZ0hDYTloaUQyTWlubnhEY2lIYytidFIwOGlybUxSd0xE?=
 =?utf-8?B?MEZiTUlKWFZWSVFSb3dlbWE2VTZlMUtDRTlZZU5ETDZXczZSbnVRbmlWL01p?=
 =?utf-8?B?VzB5ZFp2NUVuNDdpdURZOVBHTHJRbEdFM1F1R3BVTGljVHNTT21LODdDdGdU?=
 =?utf-8?B?alcyb0YvNnZzV0Y5bzZpK0UydWhJVUpaWDJvTFlJUmdHQTJuZHdhVWVmekxr?=
 =?utf-8?B?OXVCbGt2NE0wUGNwVVA0RmREL3ZjRmxGSzNnUDA2Z1NjUmxtZlN1dnlpWEFr?=
 =?utf-8?B?OXJPdXduaFZZcjZES2RRNHZwUUtNcC9Eb0R0ajNIMlVxV1dXaUtIZXhmOHVj?=
 =?utf-8?B?c25GdmNOSDRkMGNRLzEzVzIxMWl2MjNUNzJ1M3A0REdVTDdkakFHelhVWldt?=
 =?utf-8?B?Nk5nV3VJTGR6T0E3cE55aHpseG1WbmhlZ2lMaDR6aHpPYTlaSHZxVVJFOUtT?=
 =?utf-8?B?TUtReUlVSmhadlc5clJmeUh2bVNLUHA4Qk1CZHVrWi9yY2VreU9oVVhVT09a?=
 =?utf-8?B?c2ZpZjE0Qks2ZDJkbkpveURlUG5YVjNKL2c1MGlrdGNHL0lhSmR0dTRDWGl5?=
 =?utf-8?B?ZlR6aDdtdlo2Nm96OFdCbmY2S2tuMDNnZTczZGdIZEVwS1NsYUJKU1hGd3Fy?=
 =?utf-8?B?TFFVTXQzZitFeDhzYk15aVVLSWtBM2VlamU0T3MyOVMzQnRYc0xtVUREcDR1?=
 =?utf-8?B?M3lBQ2J0WC9MaHhHUmVQaS9NbGZoQUJMdkQ2dTA1MW1zcThQZHd4WnIwWU9R?=
 =?utf-8?Q?v51lSTIKlKJwUD/ryb+MGrEIy?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 70c077a1-dcbc-4482-ab59-08db562430eb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 15:42:44.5462
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZN6gXscuoZO3Z/hWTltanRbGTzZPoS50hjoThDWGffv1pZex+IWOoMC4JFHS64oeLF3snv7p9MA8lbg1zVjeSA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB7113

On 11.05.2023 19:09, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -4,6 +4,42 @@
>  #include <xen/const.h>
>  #include <xen/page-size.h>
>  
> +/*
> + * RISC-V64 Layout:
> + *
> + * #ifdef SV39

I did point you at x86'es similar #ifdef. Unlike here, there we use a
symbol which actually has a meaning, allowing one to spot this comment in
e.g. grep output when looking for uses of that symbol. Hence here e.g.

#if RV_STAGE1_MODE == SATP_MODE_SV39

? (Note: #if rather than #ifdef, since a comparison is involved. I would
also recommend using the same style as x86 does, such that the #if and
#endif look like normal directives [e.g. again in grep output], leaving
aside that they're inside a comment.)

> + * From the riscv-privileged doc:
> + *   When mapping between narrower and wider addresses,
> + *   RISC-V zero-extends a narrower physical address to a wider size.
> + *   The mapping between 64-bit virtual addresses and the 39-bit usable
> + *   address space of Sv39 is not based on zero-extension but instead
> + *   follows an entrenched convention that allows an OS to use one or
> + *   a few of the most-significant bits of a full-size (64-bit) virtual
> + *   address to quickly distinguish user and supervisor address regions.
> + *
> + * It means that:
> + *   top VA bits are simply ignored for the purpose of translating to PA.
> + *
> + * ============================================================================
> + *    Start addr    |   End addr        |  Size  | Slot       |area description
> + * ============================================================================
> + * FFFFFFFFC0800000 |  FFFFFFFFFFFFFFFF |1016 MB | L2 511     | Unused
> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
> + *                 ...                  |  1 GB  | L2 510     | Unused
> + * 0000003200000000 |  0000007f40000000 | 309 GB | L2 200-509 | Direct map

The upper bound here is 0000007f80000000 afaict, which then also makes
the earlier gap 1GB in size.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 15:55:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 15:55:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535408.833097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyx0h-0004ZQ-OX; Tue, 16 May 2023 15:54:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535408.833097; Tue, 16 May 2023 15:54:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyx0h-0004ZJ-Lt; Tue, 16 May 2023 15:54:47 +0000
Received: by outflank-mailman (input) for mailman id 535408;
 Tue, 16 May 2023 15:54:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tkax=BF=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pyx0h-0004ZB-2f
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 15:54:47 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [85.215.255.52]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f9f467b2-f401-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 17:54:44 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4GFsZY9W
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 16 May 2023 17:54:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9f467b2-f401-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1684252476; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=lI2xPAg9Ynv20VCu4X3byhGh7Ihu8goWUL2BFNRtu+3THRVi4dM4TwY+9hfLl+1+Fj
    Z+WlSy2xa9Uj6VKHpH3Xm2doKXYuyOese9hbJsK92vRKlKHrXdsgxe67TOIme0GvN2r4
    Yz30d/vOy46QUxB6EGAEnafaTbeysDxsWzEIiVoGIQr6rZyKzgYA9YCwn/6VN8HK9wFV
    nMU8z7bOuTWOnZnYfUWo+UEfGVh5ESrAOigl3Q9CWwzo2/RL2R3mSIPflPGeYPuKzHHc
    ym+Z4yL6eUiim/AwOUSXAq+R3WE0jHFsBHzDGiE0aItsFbqxneCObwjqkW7klb+ApnL+
    Szig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1684252476;
    s=strato-dkim-0002; d=strato.com;
    h=Message-ID:Subject:Cc:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=JxzkjrfeFJlPVFGk3GTJIOA37rhi6lspjNY6Lrptp6A=;
    b=eqeJfH5n1V0Algy0BqA82KEMWMb470omoSziscPRF0y3WH1BZ2mpqB85hZEKcREr+8
    T685zqDO1DWkafKA5eY0eOnNtiMUZW3Y/58eS/U7BOBeR7FyT33LO3i3n+31MQWIFdhQ
    u1yL7U1rjbOkuEGPEG/v1khdSSHVg6YGID8lfFpnZAC643GOmGiIHx/w68GkNtDlXx+v
    RSaN1Sw7WIX0m+0uQ5EmMgNbkeBlEGXi7MEUp/+rwkFAa0FEBsEbI/QTgwfe2N0IIoEK
    PLfbNAGIaC7tAVl7R/TqUA3zf5lnzNpyqvpK9EGMQq4MHib3B6BrZ6z47YRfLGBqSG4E
    ZHIw==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1684252476;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-ID:Subject:Cc:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=JxzkjrfeFJlPVFGk3GTJIOA37rhi6lspjNY6Lrptp6A=;
    b=iWv/0+Pf/+xFpknX6aQD3AnVbRmk81zzG84aqhf+ZaL5d7Iiwei/VMZKLrtwQKqzwd
    HI3jnzpeAdL2hKWV2LWNQwJUpe9S9VJcHpNqDBmdah2MHiBoja+bVUi3T4ZsjPgEyeBO
    8LfnBxP6s5HNAkuzu+23m8O58yGaErmQzdCN8eQOiAK8O7e2HNZofzBuLTwgYcAl4Srs
    segVrEOnzc1CeRqfcXo0Tv9b2GCOZGecyP7rwV88s6u2c2yZk/KByBLMwxjjGdAx34lE
    s194rGsKgP8AkLjxdxSpLB85McsYFtt53EpmHjKbI5Z4WYFxpKKScsp6IGf3PptYJSGo
    NkLQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1684252476;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-ID:Subject:Cc:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=JxzkjrfeFJlPVFGk3GTJIOA37rhi6lspjNY6Lrptp6A=;
    b=0AW3xux8mAeEvpvQf0Icybr0n9DzPhYVlt1Im94hq3EtSG2mMnIv1x3wZhFplTeHdv
    esPlrdulBQtKDwR+hACw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QED/SSGq+wjGiUC4kV1cX/0jCNVp4ivfSTHw=="
Date: Tue, 16 May 2023 15:54:26 +0000
From: Olaf Hering <olaf@aepfle.de>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>
Cc: xen-devel@lists.xenproject.org
Subject: Logic error in rsa_private
Message-ID: <20230516155426.4acc9313@sender>
X-Mailer: Claws Mail 2023.04.19 (GTK 3.24.34; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/yhEuMHigZMic5Uk887l87UM";
 protocol="application/pgp-signature"; micalg=pgp-sha1
Content-Transfer-Encoding: 7bit

--Sig_/yhEuMHigZMic5Uk887l87UM
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Hello,

as shown in 'build.log' at
https://gitlab.com/xen-project/xen/-/jobs/4284741850/artifacts/browse
there is a logic error spotted by gcc 13.

crypto/rsa.c: In function 'rsa_private':
crypto/rsa.c:56:7: error: the comparison will always evaluate as 'true' for
 the address of 'p' will never be NULL
   56 |   if (!key->p || !key->q || !key->u) {

None of p/q/u are pointers. Please have a look at some point.

Thanks,
Olaf

--Sig_/yhEuMHigZMic5Uk887l87UM
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iF0EARECAB0WIQSkRyP6Rn//f03pRUBdQqD6ppg2fgUCZGOnMgAKCRBdQqD6ppg2
fpLIAJ4pv/7z/8oRiwJw7InuWeHjjfrihgCcDs73yngORnSyxTOsHT6m3n+qzdk=
=O7X5
-----END PGP SIGNATURE-----

--Sig_/yhEuMHigZMic5Uk887l87UM--


From xen-devel-bounces@lists.xenproject.org Tue May 16 16:02:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 16:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535416.833108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyx80-0006f7-Iw; Tue, 16 May 2023 16:02:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535416.833108; Tue, 16 May 2023 16:02:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyx80-0006f0-FL; Tue, 16 May 2023 16:02:20 +0000
Received: by outflank-mailman (input) for mailman id 535416;
 Tue, 16 May 2023 16:02:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pyx7z-0006et-5n
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 16:02:19 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on061e.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 080a7d1e-f403-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 18:02:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9098.eurprd04.prod.outlook.com (2603:10a6:10:2f1::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.32; Tue, 16 May
 2023 16:02:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Tue, 16 May 2023
 16:02:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 080a7d1e-f403-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OIoKFQYZrvI7hgoht9r53loVfkGRg9oToPr9mwklKpRl7gAwUZGtEn2/EKFU7NxZdT9mFtU8KDOCVW1VR01BJifK7ptmKFcHTKixmasV9c0Zy7VcgkqMbyQ/t5DArVaxrOtH41W5Lapcqr0TNQ8BACw0hfbMvab8wNE/24nl47Qb1TdCp7y2WAmWy7RHfHvS+eBAn0SETunWlMtkAi0YJXwBmwXqMTq8Tyf/i8zhmnWWLbPmMGNIZnm2F8xtjCXt8W2sfFFh4tn17ggV7Vb3Q+sx1imOSk9bAd/Y32B/h2wfEte7W0kZxpNxc1YZ4T22GKhA0lXt8IcwHx4U3kXxaw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KZ9sYQkqVq9+yjzu5bXHUNI8WFxSTxWSsFzg6hTw4oU=;
 b=oOkrdgTCKz0bJ1LNJ5FBAcaS5eWoqoNnQefEAX85f4MYlsSv75U1MNzjntKB8HblLsEkHEzOwg6S4J1QKiUh1zPHUfZPsrVhJg+K/Uwb3CZwoRFlwfNfwfiUuPcoAFTjGHFRpmwlR1WwE2CBreSv1i1XNMZyZid4DymYtNj009OYz78jg21mBI1a/TncfVQdJay9cU6hP30pEh0bKkIgg0bQ0Z6NqOIkwp4zT4qjXSW3dXd7sopa2+LznWYo214r0p2yAq9dbVKXX1/EVYAPj7bCj/hpv9kv1NYdq4c4klIacIswMsFrcchA/YgxOmXFs/4khbFs4diEZKxLta1T6Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KZ9sYQkqVq9+yjzu5bXHUNI8WFxSTxWSsFzg6hTw4oU=;
 b=SfLaAdOWWWPvhJMbyce7CwwGmj1N/yW5qgy6rSviluIMyn1zoejF/1sNp7/enD44YpWZjRua0Kw+gtpzf7C+R6O6obZwL3ymOzCI0XR0lq8Rooky8NqaFz0sfYDQiIvXCYJCHdus0Z/9SzP+VysNyUc7LljLzL7nUNxskfQ2NfKIc6kgWBT0rikTi9sftHyA39yMmFtWAf8HDBGhKHH6cBhZ5S0VaSSFU+ZtTBVxyn8/NfZH4BXO1QIbo8e8jcePTeB7zZEO20Ew90Ta8yKrr/s/jiuPitBvERYdH+BrBXNVfh6Ju8yOyI9LNJT7mAOfnc534q3GsXYegbkQvXXAag==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6954c105-f081-5d9a-5a77-1865fdc07133@suse.com>
Date: Tue, 16 May 2023 18:02:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v7 2/5] xen/riscv: introduce setup_initial_pages
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
 <632384e200b7de0fb4e2dae500a058c2a27628be.1683824347.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <632384e200b7de0fb4e2dae500a058c2a27628be.1683824347.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0184.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB9098:EE_
X-MS-Office365-Filtering-Correlation-Id: a4a6a510-728a-4ffd-ff9d-08db5626ea90
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a4a6a510-728a-4ffd-ff9d-08db5626ea90
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 16:02:14.9889
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BKjqYnN0f/3Rv/ONYFXiHx0+oqklgyP32F+K9nE7Mnw7nAmQM4allWG82Xuxq9/bbxZWrWhbl75cQCpLN8uMEg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB9098

On 11.05.2023 19:09, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/page.h
> @@ -0,0 +1,58 @@
> +#ifndef _ASM_RISCV_PAGE_H
> +#define _ASM_RISCV_PAGE_H
> +
> +#include <xen/const.h>
> +#include <xen/types.h>
> +
> +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
> +
> +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define PTE_VALID                   BIT(0, UL)
> +#define PTE_READABLE                BIT(1, UL)
> +#define PTE_WRITABLE                BIT(2, UL)
> +#define PTE_EXECUTABLE              BIT(3, UL)
> +#define PTE_USER                    BIT(4, UL)
> +#define PTE_GLOBAL                  BIT(5, UL)
> +#define PTE_ACCESSED                BIT(6, UL)
> +#define PTE_DIRTY                   BIT(7, UL)
> +#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
> +
> +#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
> +#define PTE_TABLE                   (PTE_VALID)
> +
> +/* Calculate the offsets into the pagetables for a given VA */
> +#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define pt_index(lvl, va) (pt_linear_offset(lvl, (va)) & VPN_MASK)

Nit: Please be consistent with parentheses. Here va doesn't need any,
but if you added / kept them, then lvl should also gain them.

> +/* Page Table entry */
> +typedef struct {
> +#ifdef CONFIG_RISCV_64
> +    uint64_t pte;
> +#else
> +    uint32_t pte;
> +#endif
> +} pte_t;
> +
> +static inline pte_t paddr_to_pte(paddr_t paddr,
> +                                 unsigned int permissions)
> +{
> +    return (pte_t) { .pte = (paddr >> PAGE_SHIFT) << PTE_PPN_SHIFT | permissions };

Please parenthesize the << against the |. I have also previously
recommended avoiding open-coding of things like PFN_DOWN() (or
paddr_to_pfn(), if you like that better) or ...

> +}
> +
> +static inline paddr_t pte_to_paddr(pte_t pte)
> +{
> +    return ((paddr_t)pte.pte >> PTE_PPN_SHIFT) << PAGE_SHIFT;

... or pfn_to_paddr() (which here would avoid the misplaced cast).
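As a sketch of what the suggested cleanup might look like (the helper definitions below are assumptions modeled on Xen's common macros, not part of the patch under review):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

#define PAGE_SHIFT     12
#define PTE_PPN_SHIFT  10

/* Assumed helpers, modeled on Xen's usual macros. */
#define PFN_DOWN(pa)        ((pa) >> PAGE_SHIFT)
#define pfn_to_paddr(pfn)   ((paddr_t)(pfn) << PAGE_SHIFT)

typedef struct { uint64_t pte; } pte_t;

static inline pte_t paddr_to_pte(paddr_t paddr, unsigned int permissions)
{
    /* << parenthesized against |, and no open-coded shift pair. */
    return (pte_t) { .pte = (PFN_DOWN(paddr) << PTE_PPN_SHIFT) | permissions };
}

static inline paddr_t pte_to_paddr(pte_t pte)
{
    /* pfn_to_paddr() widens internally, avoiding the misplaced cast. */
    return pfn_to_paddr(pte.pte >> PTE_PPN_SHIFT);
}
```

The round trip drops the permission bits, as intended: only the PPN survives the conversion back to a physical address.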

> --- a/xen/arch/riscv/include/asm/processor.h
> +++ b/xen/arch/riscv/include/asm/processor.h
> @@ -69,6 +69,11 @@ static inline void die(void)
>          wfi();
>  }
>  
> +static inline void sfence_vma(void)
> +{
> +    __asm__ __volatile__ ("sfence.vma" ::: "memory");
> +}

Hmm, in switch_stack_and_jump() you use "asm volatile()" (no
underscores). This is another thing which it would be nice to have
consistent (possibly among headers as one group, and .c files as
another - there may be reasons why one wants the underscore
variants in headers, but the "easier" ones in .c files).

Also nit: Style (missing blanks inside the parentheses).

> +static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
> +                                         unsigned long map_start,
> +                                         unsigned long map_end,
> +                                         unsigned long pa_start)
> +{
> +    unsigned int index;
> +    pte_t *pgtbl;
> +    unsigned long page_addr;
> +
> +    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
> +    {
> +        early_printk("(XEN) Xen should be loaded at 4k boundary\n");
> +        die();
> +    }
> +
> +    if ( map_start & ~XEN_PT_LEVEL_MAP_MASK(0) ||
> +         pa_start & ~XEN_PT_LEVEL_MAP_MASK(0) )

Nit: Please parenthesize the two & against the ||.

> +    {
> +        early_printk("(XEN) map and pa start addresses should be aligned\n");
> +        /* panic(), BUG() or ASSERT() aren't ready now. */
> +        die();
> +    }
> +
> +    for ( page_addr = map_start;
> +          page_addr < map_end;
> +          page_addr += XEN_PT_LEVEL_SIZE(0) )
> +    {
> +        pgtbl = mmu_desc->pgtbl_base;
> +
> +        switch ( mmu_desc->num_levels )
> +        {
> +        case 4: /* Level 3 */
> +            HANDLE_PGTBL(3);
> +        case 3: /* Level 2 */
> +            HANDLE_PGTBL(2);
> +        case 2: /* Level 1 */
> +            HANDLE_PGTBL(1);
> +        case 1: /* Level 0 */
> +            {
> +                unsigned long paddr = (page_addr - map_start) + pa_start;
> +                unsigned int permissions = PTE_LEAF_DEFAULT;
> +                pte_t pte_to_be_written;
> +
> +                index = pt_index(0, page_addr);
> +
> +                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
> +                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
> +                    permissions =
> +                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
> +
> +                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
> +                    permissions = PTE_READABLE | PTE_VALID;
> +
> +                pte_to_be_written = paddr_to_pte(paddr, permissions);
> +
> +                if ( !pte_is_valid(pgtbl[index]) )
> +                    pgtbl[index] = pte_to_be_written;
> +                else
> +                {
> +                    if ( (pgtbl[index].pte ^ pte_to_be_written.pte) &
> +                        ~(PTE_DIRTY | PTE_ACCESSED) )

Nit: Style (indentation).
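The fall-through structure of the switch above can be sketched in isolation like this (constants assume 9-bit VPN fields per level, matching the macros quoted earlier; HANDLE_PGTBL is replaced by a simple index capture for the example):

```c
#include <assert.h>

#define PAGE_SHIFT          12
#define PAGETABLE_ORDER     9
#define PAGETABLE_ENTRIES   (1UL << PAGETABLE_ORDER)
#define VPN_MASK            ((unsigned long)(PAGETABLE_ENTRIES - 1))

#define XEN_PT_LEVEL_ORDER(lvl)    ((lvl) * PAGETABLE_ORDER)
#define XEN_PT_LEVEL_SHIFT(lvl)    (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
#define pt_linear_offset(lvl, va)  ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
#define pt_index(lvl, va)          (pt_linear_offset(lvl, va) & VPN_MASK)

/* Record the VPN index consumed at each level, top level first,
 * mirroring the deliberate case fall-through in
 * setup_initial_mapping() (num_levels == 3 corresponds to Sv39). */
static unsigned int walk_indices(unsigned long va, unsigned int num_levels,
                                 unsigned int idx[4])
{
    unsigned int n = 0;

    switch ( num_levels )
    {
    case 4: idx[n++] = pt_index(3, va); /* fall through */
    case 3: idx[n++] = pt_index(2, va); /* fall through */
    case 2: idx[n++] = pt_index(1, va); /* fall through */
    case 1: idx[n++] = pt_index(0, va);
    }

    return n;
}
```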

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 16 16:15:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 16:15:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535420.833118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyxKF-0008Di-NC; Tue, 16 May 2023 16:14:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535420.833118; Tue, 16 May 2023 16:14:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyxKF-0008Db-KN; Tue, 16 May 2023 16:14:59 +0000
Received: by outflank-mailman (input) for mailman id 535420;
 Tue, 16 May 2023 16:14:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a0wm=BF=citrix.com=prvs=4936e02c6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pyxKD-0008DV-A0
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 16:14:57 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9d2d2c9-f404-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 18:14:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9d2d2c9-f404-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684253694;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=VxODdMrCBjVxUSwtvqjRIRyj6mcOXr/iZtrFrAOhNPs=;
  b=fxBGw0If5FPlDcXR+HCkqaBMSWFm41s4sajmqolQpDWwxGRXjj0mlCKZ
   tmUxg4kjcEijwXXw6H2uWr1BJLX6xXYMFFyb/V9nYp6zDlkgp3fNTv2UH
   Ctx8JWMromTOVclLTdh1mm+Xj+8GIHGbCNHlfjd14p0wtT2XrHVXTSx9z
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109132810
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
Date: Tue, 16 May 2023 17:14:42 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>, Jason Andryuk <jandryuk@gmail.com>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v3] Fix install.sh for systemd
Message-ID: <cb6134cc-d66d-4fa3-b2c6-0049424aac10@perard>
References: <20230512113643.3549-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230512113643.3549-1-olaf@aepfle.de>

On Fri, May 12, 2023 at 11:36:44AM +0000, Olaf Hering wrote:
> On a fedora system, if you run `sudo sh install.sh` you break your
> system. The installation clobbers /var/run, a symlink to /run.
> A subsequent boot fails when /var/run and /run are different since
> accesses through /var/run can't find items that now only exist in /run
> and vice-versa.
> 
> Skip populating /var/run/xen during make install.
> The directory is already created by some scripts. Adjust all remaining
> scripts to create XEN_RUN_DIR at runtime.
> 
> Use the shell variable XEN_RUN_DIR instead of hardcoded paths.
> 
> XEN_RUN_STORED is covered by XEN_RUN_DIR because xenstored is usually
> started afterwards.
> 
> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> Tested-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 16 16:28:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 16:28:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535424.833128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyxXY-0001LY-Vj; Tue, 16 May 2023 16:28:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535424.833128; Tue, 16 May 2023 16:28:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyxXY-0001LR-Rk; Tue, 16 May 2023 16:28:44 +0000
Received: by outflank-mailman (input) for mailman id 535424;
 Tue, 16 May 2023 16:28:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a0wm=BF=citrix.com=prvs=4936e02c6=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pyxXX-0001LJ-2H
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 16:28:43 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b6cf12c8-f406-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 18:28:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6cf12c8-f406-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684254520;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=ekbf1hsSNWvboaNh50ECw+KQwFQtwXZLH+LZ8zLNlXg=;
  b=BGa3jdSShWnoSc47cRK3EfwI9S2svD1bPiZdO3PPxV1BytRLSekkt9/l
   TNPXOG6UN+egPhoqOVGz7zykhEjQN6OehBP5wfr89vD/b7JZq72yop76l
   3BIji+q/E1ejf8REAWZ6br3hUS+DzIlkMdtsdVxMUumiSBf1xIoeAv30o
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109647125
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
Date: Tue, 16 May 2023 17:28:34 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>
Subject: Re: [PATCH v2] tools: drop bogus and obsolete ptyfuncs.m4
Message-ID: <2ef56f76-f75b-47f7-aef9-bef0c5b52883@perard>
References: <20230512122614.3724-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230512122614.3724-1-olaf@aepfle.de>

On Fri, May 12, 2023 at 12:26:14PM +0000, Olaf Hering wrote:
> According to openpty(3) it is required to include <pty.h> to get the
> prototypes for openpty() and login_tty(). But this is not what the
> function AX_CHECK_PTYFUNCS actually does. It makes no attempt to include
> the required header.
> 
> The two source files which call openpty() and login_tty() already contain
> the conditionals to include the required header.
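A minimal sketch of what those conditionals boil down to (on Linux/glibc the prototypes are in <pty.h>; BSD variants use <util.h> or <libutil.h>, and older glibc needs -lutil at link time — details assumed here, not taken from the patch):

```c
#include <assert.h>
#include <pty.h>      /* Linux/glibc; BSDs: <util.h> or <libutil.h> */
#include <unistd.h>

/* Allocate a master/slave pty pair; returns 0 on success, -1 on
 * error.  With the proper header included, clang no longer sees an
 * implicit declaration of openpty(). */
static int open_console_pty(int *master, int *slave)
{
    return openpty(master, slave, NULL, NULL, NULL);
}
```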
> 
> Remove the bogus m4 file to fix build with clang, which complains about
> calls to undeclared functions.
> 
> Remove usage of INCLUDE_LIBUTIL_H in libxl_bootloader.c, it is already
> covered by inclusion of libxl_osdep.h.
> 
> Remove usage of PTYFUNCS_LIBS in libxl/Makefile, it is already covered
> by UTIL_LIBS from config/StdGNU.mk.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Obviously, the committer will have to run ./autogen.sh before
committing.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 16 17:39:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 17:39:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535436.833138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyydY-0000P9-VI; Tue, 16 May 2023 17:39:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535436.833138; Tue, 16 May 2023 17:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyydY-0000P2-SS; Tue, 16 May 2023 17:39:00 +0000
Received: by outflank-mailman (input) for mailman id 535436;
 Tue, 16 May 2023 17:38:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nKnf=BF=redhat.com=jsnow@srs-se1.protection.inumbo.net>)
 id 1pyydX-0000Ow-HA
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 17:38:59 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 885e26f8-f410-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 19:38:57 +0200 (CEST)
Received: from mail-pf1-f197.google.com (mail-pf1-f197.google.com
 [209.85.210.197]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-267-ttrvkttINniNw_plmSPcBw-1; Tue, 16 May 2023 13:38:54 -0400
Received: by mail-pf1-f197.google.com with SMTP id
 d2e1a72fcca58-6434336147cso8246265b3a.3
 for <xen-devel@lists.xenproject.org>; Tue, 16 May 2023 10:38:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 885e26f8-f410-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684258736;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=G7JGMjrkbABMXmudfgjOwEmUZFJhzeLKxKLcXPj6Fso=;
	b=KRA8bo/3JK9izibu72tDcbgNmcoTaS2QDNlvi6idksoR7blblWRtaoUpAEJm+4FaWcibbx
	GAjhBnkG36F4Dbz7ueZmX4wbZpwL+oJ+dhmFd9EpfGJq1JqniHPFxFNzXKms1RX5aAjlDg
	4antL9sef/Pq8V1zHooPyieghQB9Tlk=
X-MC-Unique: ttrvkttINniNw_plmSPcBw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684258733; x=1686850733;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=G7JGMjrkbABMXmudfgjOwEmUZFJhzeLKxKLcXPj6Fso=;
        b=d4OUU9u0zMpXTo+cSEmi+dOV/GUi179RBUKrfzwHzI0/XQdKzCKjTDhK2PYkvscDr0
         F2M0n3iQILXzQ5qPtX2Iih+/dLbgNmeurlBwNyHbzKZfWBYsn7EPToGh8BL3fcJteRYU
         4moTzSHQ2MtQLG4ZneRWJoLxEML3lJvzom/xU9qqfE6G9pmcABRyyXnyqy3RD497Wnv2
         VnKuiUSebmQxIioipq8aJR6QKnP+XZRVDTAMaAX8nqZeCCF4J6hoaKMCdz0JQddebwEb
         XVP8CF/Qw+Q3o/ghTDdDUlH817jJS2UDTu9s5F8jIhwu76b1fcS87kQY1pbqAwLoVXYw
         M/rA==
X-Gm-Message-State: AC+VfDxYNOcwQLPvnkoUNCIvocHAwS5Ae9S7j9ieJwPu08mw0VUHgp+h
	Mi2LZN/IHomL1g8NUsQX+8NS17cPgBRwXiMbS5pEId3WDP+gqoUlya2BeijEa9H8//c02uSWasU
	jX/qvevAITFudlq4Ywq0PpMSnUuJgNqJsHzt/Omrdh4s=
X-Received: by 2002:a05:6a00:189a:b0:646:7234:cbfc with SMTP id x26-20020a056a00189a00b006467234cbfcmr42825192pfh.27.1684258733606;
        Tue, 16 May 2023 10:38:53 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ5DqjD5Mv/y/nBF5tNjAvtQEFBbZtdinKk9xVA0Nn3QAhyxhf+TjPJwds4k7CJPEPxrICXCVe7WEr6J8AcMekw=
X-Received: by 2002:a05:6a00:189a:b0:646:7234:cbfc with SMTP id
 x26-20020a056a00189a00b006467234cbfcmr42825176pfh.27.1684258733316; Tue, 16
 May 2023 10:38:53 -0700 (PDT)
MIME-Version: 1.0
References: <20210317070046.17860-1-olaf@aepfle.de> <4441d32f-bd52-9408-cabc-146b59f0e4dc@redhat.com>
 <20210325121219.7b5daf76.olaf@aepfle.de> <dae251e1-f808-708e-902c-05cfcbbea9cf@redhat.com>
 <20230509225818.GA16290@aepfle.de> <20230510094719.26fb79e5.olaf@aepfle.de> <alpine.DEB.2.22.394.2305121411310.3748626@ubuntu-linux-20-04-desktop>
In-Reply-To: <alpine.DEB.2.22.394.2305121411310.3748626@ubuntu-linux-20-04-desktop>
From: John Snow <jsnow@redhat.com>
Date: Tue, 16 May 2023 13:38:42 -0400
Message-ID: <CAFn=p-aFa_jFYuaYLMumkX=5zpn228ctBcV=Gch=BhmQs6i2dA@mail.gmail.com>
Subject: Re: [PATCH v2] piix: fix regression during unplug in Xen HVM domUs
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Olaf Hering <olaf@aepfle.de>, Paolo Bonzini <pbonzini@redhat.com>, xen-devel@lists.xenproject.org, 
	qemu-devel@nongnu.org, qemu-block@nongnu.org, 
	Philippe Mathieu-Daudé <f4bug@amsat.org>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 12, 2023 at 5:14 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Wed, 10 May 2023, Olaf Hering wrote:
> > Wed, 10 May 2023 00:58:27 +0200 Olaf Hering <olaf@aepfle.de>:
> >
> > > In my debugging (with v8.0.0) it turned out the three pci_set_word
> > > calls cause the domU to hang. In fact, it is just the last one:
> > >
> > >    pci_set_byte(pci_conf + 0x20, 0x01);  /* BMIBA: 20-23h */
> > >
> > > It changes the value from 0xc121 to 0x1.
> >
> > If I disable just "pci_set_word(pci_conf + PCI_COMMAND, 0x0000);" it works as well.
> > It changes the value from 0x5 to 0.
> >
> > In general I feel it is wrong to fiddle with PCI from the host side.
> > This is most likely not the intention of the Xen unplug protocol.
> > I'm sure the guest does not expect such changes under the hood.
> > It happens to work by luck with pvops kernels because their PCI discovery
> > is done after the unplug.
> >
> > So, what do we do here to get this off the table?
>
> I don't have a concrete suggestion because I don't understand the root
> cause of the issue. Looking back at Paolo's reply from 2021
>
> https://marc.info/?l=xen-devel&m=161669099305992&w=2
>
> I think he was right. We can either fix the root cause of the issue or
> avoid calling qdev_reset_all on unplug. I am OK with either one.

I haven't touched IDE or block code in quite a long while now -- I
don't think I can help land this fix, but I won't get in anyone's way,
either. Maybe just re-submit the patches with an improved commit
message / cover letter that helps collect the info from the previous
thread, the core issue, etc.

--js



From xen-devel-bounces@lists.xenproject.org Tue May 16 18:01:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 18:01:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535442.833148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyyyh-0003vh-T5; Tue, 16 May 2023 18:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535442.833148; Tue, 16 May 2023 18:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyyyh-0003va-OV; Tue, 16 May 2023 18:00:51 +0000
Received: by outflank-mailman (input) for mailman id 535442;
 Tue, 16 May 2023 18:00:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LW/A=BF=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pyyyf-0003vU-E0
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 18:00:50 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20609.outbound.protection.outlook.com
 [2a01:111:f400:7e88::609])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 932c628d-f413-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 20:00:44 +0200 (CEST)
Received: from DM6PR06CA0041.namprd06.prod.outlook.com (2603:10b6:5:54::18) by
 BL1PR12MB5777.namprd12.prod.outlook.com (2603:10b6:208:390::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 18:00:33 +0000
Received: from DM6NAM11FT042.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:54:cafe::a5) by DM6PR06CA0041.outlook.office365.com
 (2603:10b6:5:54::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33 via Frontend
 Transport; Tue, 16 May 2023 18:00:33 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT042.mail.protection.outlook.com (10.13.173.165) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.17 via Frontend Transport; Tue, 16 May 2023 18:00:33 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 16 May
 2023 13:00:32 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Tue, 16 May
 2023 11:00:31 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Tue, 16 May 2023 13:00:29 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 932c628d-f413-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wh7SJvBZajTjhu1rKcVuFz6i5TmQAPddSgO40gqGqeO7fLJaX/4xIVd+rwbe1yJNxDoFQ6xzltMZp8+KZJv2myoGimF3R+SW/qGY/rz66pYv1RG0YJ+fs2OZz2idUmuvDdl1g8w1R5qijLAhwuXUP+3j34zuRjQ2Girh48HBuv/AEw0rTBkcIuk5RZVwj2MCznlDuW46gEim/3TA1DJSwi1Qt9sx7CliNfo1u3Cv+4kRGGygi/GAwmitsKbsCQOlgm+IiB+91Je1Qtd5k2OcKF41fhgDAYS2qtw21QCyTo4VNBd1zPQZURvnCswIgVwkfMDyKQuybBPYVhyV4jCUHg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SR9ZdWU/Z++bfyti/+ygidf0Ewhg8btyk4D8zXKW+zs=;
 b=R/tnZZelziCZ4JdAj5mEFC5XVQldb+9YCauRiAsUxuLpM2nl85OfrQGqC76e4r2WBgZuoSpXgpoimJDOBW0095zviXbuCGMhe25F2saaSwFg24/DwIzb5fk1r8BcoLR0CZe1NMV6+Rn0Qxwg2qhUdVLJ0bVeM8uUAv2oKR9ffIx1AeDt+gtqOYqU3PFjSZdpweO4llV/FVq5/VPAjqWP6QZhbKTOu1Xa52nkzNqRUXNRlKa+5roFjcKhPN8heEFSoofYH90iejmis+DmW/i6touBBBmYu0QARFMo7lxiuae6Qfp6DyJNd3k8sFDXUIiVppY2jrkBkLlcp/vsYXohEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=gmail.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SR9ZdWU/Z++bfyti/+ygidf0Ewhg8btyk4D8zXKW+zs=;
 b=Sl+xcSWscnpYEUAwICT9VheMUOOCvEgqBD+rDRii2tm9SODAvnhlZa7kVgUDP4n96i+3eD0a5ChvVdH9LhxBak6N+p5KVXQCuL5dOuH8WRt6dC5Csi0fMk6bvNmoXaUzMZGMRkM1dLDMl/Kec4XGhBZZeF0dvaE4qlTEdKhnceE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <fa9ebe0a-6ba0-6f1a-0df1-ad65ec1e93b3@amd.com>
Date: Tue, 16 May 2023 20:00:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: xen cache colors in ARM
Content-Language: en-US
To: Oleg Nikitenko <oleshiiwood@gmail.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>,
	<Stewart.Hildebrand@amd.com>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com>
 <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
 <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com>
 <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
 <CA+SAi2tErcaAkRT5zhTwSE=-jszwAWNtEAnm5jNGEP1NoqbQ3w@mail.gmail.com>
 <53af4bc6-97ad-d806-003b-38e70ccd2b58@amd.com>
 <CA+SAi2vrB714Tc9dn4SbtDo3VrT3Q8OpSiFXRLRaO5=0BJo_rQ@mail.gmail.com>
 <f0e6ca10-2142-7c39-0c7b-042c454e7e08@amd.com>
 <CA+SAi2tCVDiQ1BLdvuH2XnvTDGDCnPBDCq70AVbsO+TZKMERSw@mail.gmail.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2tCVDiQ1BLdvuH2XnvTDGDCnPBDCq70AVbsO+TZKMERSw@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT042:EE_|BL1PR12MB5777:EE_
X-MS-Office365-Filtering-Correlation-Id: 34af8709-0442-4cf5-5880-08db563771aa
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5C0PlFZYnBenHI70Jr4ZqnlxWq7nLnj/BRGRUWiQIXAjy9bD2rhxd61pMgG1ovuj9HT1VhH6q7IXYEKkOr7VHYVdspEcZgf8Iy6R6UikQFZLT6rB8wDRVsx+lVzdI/dZx539HQOkJmIKaU7WJMc/yEMK6x3oumBr3jgmrr6oFC+MiyLiQAaZ16L2cNLaDiIoanLBAIeCCH70RQsG8eB+XRe4dmbIIsfBhRIx9EipyAuZ+SE+OgYCDK/xxIm1vuqrM7HZmYRg5KU3BhOLm7qY4jjxE86u1bJXTbym29dA/gqK2Ar7IJuOFXDXj0YlMtJkZRKhhY850qZnf9qrk1N4ujmu6Yv8eeqEUT4oi+tX9GwjHWQ58TXC4PLgOcH7plhFjrQmfr4JrIcGsLiaj1xJZGOEsImbq7GRwFQ7BfX1BdDQ/upJTXp7gQorl6wqorhfQX+2uK5PRQNWIWa+L7Wfq5/7DEJes+pClbWLvE6Zdeu9RPQ9AVeG2XmHDvBApNxvOfb8Uk62mqgtolY9BsvScXDwfKIi00x2JtW6yq2UdNbIQGbJ+dlZSaTg/gXdtiv0htRGmqxfAkYj8rjPa+tDif+BzfZlXRKr8/3cN2c4b5Dr75ds9zaB9DV8qnjveQ+QRd7h3BLXfdsb4xzfirsZfVhn7RhsVjQnU2QLUO/GB4A5pM19uxYcxZX9nP1qPWWhJ5y0mx3LzsdQKls+fi+pDrPMEt3OVxkenNMH6Og1gPaPZvad3XdLkecIk8RCRcTccv/WuKKoyWKvTniMoInnjl9mxl6YAcF6rJwsS9FPwdqBL8TAje/nO8JbHLqPnEPAvteF3QSZHurWmored8xuK1aDExqN/lipWyWMV2w38aI=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(6019001)(4636009)(39860400002)(376002)(136003)(396003)(346002)(279900001)(451199021)(46966006)(36840700001)(40470700004)(36756003)(86362001)(31696002)(316002)(54906003)(16576012)(966005)(6916009)(4326008)(70206006)(70586007)(478600001)(40480700001)(82310400005)(8936002)(5660300002)(44832011)(41300700001)(2906002)(8676002)(30864003)(81166007)(356005)(82740400003)(336012)(426003)(53546011)(186003)(36860700001)(26005)(83380400001)(47076005)(2616005)(31686004)(40460700003)(43740500002)(36900700001)(559001)(579004)(139555002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 18:00:33.2455
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 34af8709-0442-4cf5-5880-08db563771aa
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT042.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR12MB5777

On 16/05/2023 17:14, Oleg Nikitenko wrote:
>
> Hi guys,
>
> Thanks Michal.
>
> So if I have more RAM it is possible to increase the color density.
>
> For example 8Gb/16 it is 512 Mb approximately.
> Is this correct ?
Yes.
To my previous reply I should also add that the number of colors depends on the page size,
but in Xen, we use 4kB pages so a 64kB way size results in 16 colors.

~Michal

> Regards,
> Oleg
>
> On Tue, 16 May 2023 at 17:40, Michal Orzel <michal.orzel@amd.com> wrote:
>
>> Hi Oleg,
>>
>> On 16/05/2023 14:15, Oleg Nikitenko wrote:
>>>
>>> Hello,
>>>
>>> Thanks a lot Michal.
>>>
>>> Then the next question.
>>> When I just started my experiments with xen, Stefano mentioned that each cache's color size is 256M.
>>> Is it possible to extend this figure ?
>> With 16 colors (e.g. on Cortex-A53) and 4GB of memory, roughly each color is 256M (i.e. 4GB/16 = 256M).
>> So as you can see this figure depends on the number of colors and memory size.
>>
>> ~Michal
>>
>>> Regards,
>>> Oleg
>>>
>>> On Mon, 15 May 2023 at 11:57, Michal Orzel <michal.orzel@amd.com> wrote:
>>>
>>>> Hi Oleg,
>>>>
>>>> On 15/05/2023 10:51, Oleg Nikitenko wrote:
>>>>>
>>>>> Hello guys,
>>>>>
>>>>> Thanks a lot.
>>>>> After a long problem list I was able to run xen with Dom0 with a cache color.
>>>>> One more question from my side.
>>>>> I want to run a guest with color mode too.
>>>>> I inserted a string into the guest config file: llc-colors = "9-13"
>>>>> I got an error
>>>>> [  457.517004] loop0: detected capacity change from 0 to 385840
>>>>> Parsing config from /xen/red_config.cfg
>>>>> /xen/red_config.cfg:26: config parsing error near `-colors': lexical error
>>>>> warning: Config file looks like it contains Python code.
>>>>> warning: Arbitrary Python is no longer supported.
>>>>> warning: See https://wiki.xen.org/wiki/PythonInXlConfig
>>>>> Failed to parse config: Invalid argument
>>>>> So this is a question.
>>>>> Is it possible to assign a color mode for the DomU by config file ?
>>>>> If so, what string should I use?
>>>> Please, always refer to the relevant documentation. In this case, for xl.cfg:
>>>> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/man/xl.cfg.5.pod.in#L2890
>>>>
>>>> ~Michal
>>>>
>>>>> Regards,
>>>>> Oleg
>>>>>
>>>>> On Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>>>>>
>>>>>> Hi Michal,
>>>>>>
>>>>>> Thanks.
>>>>>> This compilation previously had the name CONFIG_COLORING.
>>>>>> It mixed me up.
>>>>>>
>>>>>> Regards,
>>>>>> Oleg
>>>>>>
>>>>>> On Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com> wrote:
>>>>>>
>>>>>>> Hi Oleg,
>>>>>>>
>>>>>>> On 11/05/2023 12:02, Oleg Nikitenko wrote:
>>>>>>>>
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>> Thanks Stefano.
>>>>>>>> Then the next question.
>>>>>>>> I cloned the xen repo from the xilinx site https://github.com/Xilinx/xen.git
>>>>>>>> I managed to build the xlnx_rebase_4.17 branch in my environment.
>>>>>>>> I did it without coloring first. I did not find any color footprints in this branch.
>>>>>>>> I realized coloring is not in the xlnx_rebase_4.17 branch yet.
>>>>>>> This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the docs:
>>>>>>> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
>>>>>>>
>>>>>>> It describes the feature and documents the required properties.
>>>>>>>
>>>>>>> ~Michal
>>>>>>>
>>>>>>>> On Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>
>>>>>>>>> We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
>>>>>>>>> (twice a year) is tested with cache coloring enabled. The last Petalinux
>>>>>>>>> release is 2023.1 and the kernel used is this:
>>>>>>>>> https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
>>>>>>>>>
>>>>>>>>> On Tue, 9 May 2023, Oleg Nikitenko wrote:
>>>>>>>>>> Hello guys,
>>>>>>>>>>
>>>>>>>>>> I have a couple more questions.
>>>>>>>>>> Have you ever run xen with the cache coloring at Zynq UltraScale+ MPSoC zcu102 xczu15eg ?
>>>>>>>>>> When did you run xen with the cache coloring last time ?
>>>>>>>>>> What kernel version did you use for Dom0 when you ran xen with the cache coloring last time ?
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Oleg
>>>>>>>>>>
>>>>>>>>>> On Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>>>>>>>>>>> Hi Michal,
>>>>>>>>>>>
>>>>>>>>>>> Thanks.
>>>>>>>>>>>
>>>>>>>>>>> Regards,
>>>>>>>>>>> Oleg
>>>>>>>>>>>
>>>>>>>>>>> On Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com> wrote:
>>>>>>>>>>>> Hi Oleg,
>>>>>>>>>>>>
>>>>>>>>>>>> Replying, so that you do not need to wait for Stefano.
>>>>>>>>>>>>
>>>>>>>>>>>> On 05/05/2023 10:28, Oleg Nikitenko wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hello Stefano,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I would like to try a xen cache color property from this repo https://xenbits.xen.org/git-http/xen.git
>>>>>>>>>>>>> Could you tell what branch I should use ?
>>>>>>>>>>>> Cache coloring feature is not part of the upstream tree and it is still under review.
>>>>>>>>>>>> You can only find it integrated in the Xilinx Xen tree.
>>>>>>>>>>>>
>>>>>>>>>>>> ~Michal
>>>>>>>>>>>>
>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>> Oleg
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>>>>>>> I am familiar with the zcu102 but I don't know how you could possibly
>>>>>>>>>>>>>> generate a SError.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I suggest to try to use ImageBuilder [1] to generate the boot
>>>>>>>>>>>>>> configuration as a test because that is known to work well for zcu102.
wqAgwqA+wqAgwqAgwqBbMV0gaHR0cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0L2ltYWdl
YnVpbGRlciA8aHR0cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlcj4g
PGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXIgPGh0dHBzOi8v
Z2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXI+PiA8aHR0cHM6Ly9naXRsYWIu
Y29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlciA8aHR0cHM6Ly9naXRsYWIuY29tL3hlbi1w
cm9qZWN0L2ltYWdlYnVpbGRlcj4gPGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9p
bWFnZWJ1aWxkZXIgPGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxk
ZXI+Pj4gPGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXIgPGh0
dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXI+IDxodHRwczovL2dp
dGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVyIDxodHRwczovL2dpdGxhYi5jb20v
eGVuLXByb2plY3QvaW1hZ2VidWlsZGVyPj4gPGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJv
amVjdC9pbWFnZWJ1aWxkZXIgPGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFn
ZWJ1aWxkZXI+IDxodHRwczovL2dpdGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVy
IDxodHRwczovL2dpdGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVyPj4+PiA8aHR0
cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlciA8aHR0cHM6Ly9naXRs
YWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlcj4gPGh0dHBzOi8vZ2l0bGFiLmNvbS94
ZW4tcHJvamVjdC9pbWFnZWJ1aWxkZXIgPGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVj
dC9pbWFnZWJ1aWxkZXI+Pg0KPiAgICAgPGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVj
dC9pbWFnZWJ1aWxkZXIgPGh0dHBzOi8vZ2l0bGFiLmNvbS94ZW4tcHJvamVjdC9pbWFnZWJ1
aWxkZXI+IDxodHRwczovL2dpdGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVyIDxo
dHRwczovL2dpdGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVyPj4+IDxodHRwczov
L2dpdGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVyIDxodHRwczovL2dpdGxhYi5j
b20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVyPiA8aHR0cHM6Ly9naXRsYWIuY29tL3hlbi1w
cm9qZWN0L2ltYWdlYnVpbGRlciA8aHR0cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0L2lt
YWdlYnVpbGRlcj4+IDxodHRwczovL2dpdGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWls
ZGVyIDxodHRwczovL2dpdGxhYi5jb20veGVuLXByb2plY3QvaW1hZ2VidWlsZGVyPiA8aHR0
cHM6Ly9naXRsYWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlciA8aHR0cHM6Ly9naXRs
YWIuY29tL3hlbi1wcm9qZWN0L2ltYWdlYnVpbGRlcj4+Pj4+DQo+ICAgICA+wqAgwqAgwqA+
wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+DQo+ICAgICA+wqAgwqAgwqA+
wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+DQo+ICAgICA+wqAgwqAgwqA+
wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqBPbiBUaHUsIDI3
IEFwciAyMDIzLCBPbGVnIE5pa2l0ZW5rbyB3cm90ZToNCj4gICAgID7CoCDCoCDCoD7CoCDC
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD4gSGVsbG8gU3RlZmFu
bywNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD7CoCDCoCDCoD4gVGhhbmtzIGZvciBjbGFyaWZpY2F0aW9uLg0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
PiBXZSBuaWdodGVyIHVzZSBJbWFnZUJ1aWxkZXIgbm9yIHVib290IGJvb3Qgc2NyaXB0Lg0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPiBBIG1vZGVsIGlzIHpjdTEwMiBjb21wYXRpYmxlLg0KPiAgICAgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPg0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
PiBSZWdhcmRzLA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPiBPLg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPiDQstGCLCAyNSDQsNC/
0YAuIDIwMjPigK/Qsy4g0LIgMjE6MjEsIFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxp
bmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86
c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+
PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBr
ZXJuZWwub3JnPiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3Rh
YmVsbGluaUBrZXJuZWwub3JnPj4+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8
bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+PiA8bWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPiA8bWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwu
b3JnPj4+PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVs
bGluaUBrZXJuZWwub3JnPiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0
bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPj4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwu
b3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1haWx0bzpzc3RhYmVsbGlu
aUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4+PiA8bWFpbHRv
OnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3Jn
PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBr
ZXJuZWwub3JnPj4NCj4gICAgIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+Pj4+Pj46DQo+ICAgICA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqBUaGlzIGlzIGludGVyZXN0aW5nLiBBcmUgeW91IHVzaW5nIFhpbGlueCBoYXJk
d2FyZSBieSBhbnkgY2hhbmNlPyBJZiBzbywNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoHdoaWNoIGJv
YXJkPw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgQXJlIHlvdSB1c2luZyBJbWFnZUJ1
aWxkZXIgdG8gZ2VuZXJhdGUgeW91ciBib290LnNjciBib290IHNjcmlwdD8gSWYgc28sDQo+
ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqBjb3VsZCB5b3UgcGxlYXNlIHBvc3QgeW91ciBJbWFnZUJ1aWxk
ZXIgY29uZmlnIGZpbGU/IElmIG5vdCwgY2FuIHlvdQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgcG9z
dCB0aGUgc291cmNlIG9mIHlvdXIgdWJvb3QgYm9vdCBzY3JpcHQ/DQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+DQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqBTRXJyb3JzIGFyZSBzdXBwb3NlZCB0byBiZSByZWxhdGVkIHRvIGEg
aGFyZHdhcmUgZmFpbHVyZSBvZiBzb21lIGtpbmQuDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqBZb3Ug
YXJlIG5vdCBzdXBwb3NlZCB0byBiZSBhYmxlIHRvIHRyaWdnZXIgYW4gU0Vycm9yIGVhc2ls
eSBieQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIm1pc3Rha2UiLiBJIGhhdmUgbm90IHNlZW4gU0Vy
cm9ycyBkdWUgdG8gd3JvbmcgY2FjaGUgY29sb3JpbmcNCj4gICAgID7CoCDCoCDCoD7CoCDC
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoGNv
bmZpZ3VyYXRpb25zIG9uIGFueSBYaWxpbnggYm9hcmQgYmVmb3JlLg0KPiAgICAgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPg0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgVGhlIGRpZmZlcmVuY2VzIGJldHdlZW4gWGVuIHdpdGggYW5kIHdp
dGhvdXQgY2FjaGUgY29sb3JpbmcgZnJvbSBhDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqBoYXJkd2Fy
ZSBwZXJzcGVjdGl2ZSBhcmU6DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAtIFdpdGgg
Y2FjaGUgY29sb3JpbmcsIHRoZSBTTU1VIGlzIGVuYWJsZWQgYW5kIGRvZXMgYWRkcmVzcyB0
cmFuc2xhdGlvbnMNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoMKgIGV2ZW4gZm9yIGRvbTAuIFdpdGhv
dXQgY2FjaGUgY29sb3JpbmcgdGhlIFNNTVUgY291bGQgYmUgZGlzYWJsZWQsIGFuZA0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgwqAgaWYgZW5hYmxlZCwgdGhlIFNNTVUgZG9lc24ndCBkbyBhbnkg
YWRkcmVzcyB0cmFuc2xhdGlvbnMgZm9yIERvbTAuIElmDQo+ICAgICA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqDC
oCB0aGVyZSBpcyBhIGhhcmR3YXJlIGZhaWx1cmUgcmVsYXRlZCB0byBTTU1VIGFkZHJlc3Mg
dHJhbnNsYXRpb24gaXQNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoMKgIGNvdWxkIG9ubHkgdHJpZ2dl
ciB3aXRoIGNhY2hlIGNvbG9yaW5nLiBUaGlzIHdvdWxkIGJlIG15IG5vcm1hbA0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgwqAgc3VnZ2VzdGlvbiBmb3IgeW91IHRvIGV4cGxvcmUsIGJ1dCB0aGUg
ZmFpbHVyZSBoYXBwZW5zIHRvbyBlYXJseQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgwqAgYmVmb3Jl
IGFueSBETUEtY2FwYWJsZSBkZXZpY2UgaXMgcHJvZ3JhbW1lZC4gU28gSSBkb24ndCB0aGlu
ayB0aGlzIGNhbg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgwqAgYmUgdGhlIGlzc3VlLg0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgLSBXaXRoIGNhY2hlIGNvbG9yaW5nLCB0aGUgbWVtb3J5
IGFsbG9jYXRpb24gaXMgdmVyeSBkaWZmZXJlbnQgc28geW91J2xsDQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqDCoCBlbmQgdXAgdXNpbmcgZGlmZmVyZW50IEREUiByZWdpb25zIGZvciBEb20wLiBT
byBpZiB5b3VyIEREUiBpcw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgwqAgZGVmZWN0aXZlLCB5b3Ug
bWlnaHQgb25seSBzZWUgYSBmYWlsdXJlIHdpdGggY2FjaGUgY29sb3JpbmcgZW5hYmxlZA0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgwqAgYmVjYXVzZSB5b3UgZW5kIHVwIHVzaW5nIGRpZmZlcmVu
dCByZWdpb25zLg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgT24gVHVl
LCAyNSBBcHIgMjAyMywgT2xlZyBOaWtpdGVua28gd3JvdGU6DQo+ICAgICA+wqAgwqAgwqA+
wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+IEhpIFN0ZWZhbm8sDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+DQo+ICAgICA+wqAgwqAgwqA+
wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+IFRoYW5rIHlvdS4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gSWYgSSBidWlsZCB4ZW4gd2l0
aG91dCBjb2xvcnMgc3VwcG9ydCB0aGVyZSBpcyBub3QgdGhpcyBlcnJvci4NCj4gICAgID7C
oCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD4gQWxsIHRoZSBkb21haW5zIGFyZSBib290ZWQgd2VsbC4NCj4gICAgID7C
oCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD4gSGVuc2UgaXQgY2FuIG5vdCBiZSBhIGhhcmR3YXJlIGlzc3VlLg0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPiBUaGlzIHBhbmljIGFycml2ZWQgZHVyaW5nIHVucGFja2luZyB0
aGUgcm9vdGZzLg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBIZXJlIEkgYXR0YWNoZWQgdGhlIGJv
b3QgbG9nIHhlbi9Eb20wIHdpdGhvdXQgY29sb3IuDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IEEg
aGlnaGxpZ2h0ZWQgc3RyaW5ncyBwcmludGVkIGV4YWN0bHkgYWZ0ZXIgdGhlIHBsYWNlIHdo
ZXJlIDEtc3QgdGltZSBwYW5pYyBhcnJpdmVkLg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPg0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPiDCoFhlbiA0LjE2LjEtcHJlDQo+ICAgICA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
IChYRU4pIFhlbiB2ZXJzaW9uIDQuMTYuMS1wcmUgKG5vbGUyMzkwQChub25lKSkgKGFhcmNo
NjQtcG9ydGFibGUtbGludXgtZ2NjIChHQ0MpIDExLjMuMCkgZGVidWc9eQ0KPiAgICAgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgMjAyMy0wNC0yMQ0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiBXZWQgQXByIDE5
IDEyOjU2OjE0IDIwMjMgKzAzMDAgZ2l0OjMyMTY4N2IyMzEtZGlydHkNCj4gICAgID7CoCDC
oCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD4gKFhFTikgYnVpbGQtaWQ6IGMxODQ3MjU4ZmRiMWI3OTU2MmZjNzEwZGRhNDAw
MDhmOTZjMGZkZTUNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgUHJvY2Vzc29yOiAwMDAw
MDAwMDQxMGZkMDM0OiAiQVJNIExpbWl0ZWQiLCB2YXJpYW50OiAweDAsIHBhcnQgMHhkMDMs
cmV2IDB4NA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSA2NC1iaXQgRXhlY3V0aW9uOg0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSDCoCBQcm9jZXNzb3IgRmVhdHVyZXM6IDAwMDAw
MDAwMDAwMDIyMjIgMDAwMDAwMDAwMDAwMDAwMA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVO
KSDCoCDCoCBFeGNlcHRpb24gTGV2ZWxzOiBFTDM6NjQrMzIgRUwyOjY0KzMyIEVMMTo2NCsz
MiBFTDA6NjQrMzINCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgwqAgwqAgRXh0ZW5zaW9u
czogRmxvYXRpbmdQb2ludCBBZHZhbmNlZFNJTUQNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhF
TikgwqAgRGVidWcgRmVhdHVyZXM6IDAwMDAwMDAwMTAzMDUxMDYgMDAwMDAwMDAwMDAwMDAw
MA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSDCoCBBdXhpbGlhcnkgRmVhdHVyZXM6IDAw
MDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KPiAgICAgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAo
WEVOKSDCoCBNZW1vcnkgTW9kZWwgRmVhdHVyZXM6IDAwMDAwMDAwMDAwMDExMjIgMDAwMDAw
MDAwMDAwMDAwMA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSDCoCBJU0EgRmVhdHVyZXM6
IMKgMDAwMDAwMDAwMDAxMTEyMCAwMDAwMDAwMDAwMDAwMDAwDQo+ICAgICA+wqAgwqAgwqA+
wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+IChYRU4pIDMyLWJpdCBFeGVjdXRpb246DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4p
IMKgIFByb2Nlc3NvciBGZWF0dXJlczogMDAwMDAwMDAwMDAwMDEzMTowMDAwMDAwMDAwMDEx
MDExDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIMKgIMKgIEluc3RydWN0aW9uIFNldHM6
IEFBcmNoMzIgQTMyIFRodW1iIFRodW1iLTIgSmF6ZWxsZQ0KPiAgICAgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PiAoWEVOKSDCoCDCoCBFeHRlbnNpb25zOiBHZW5lcmljVGltZXIgU2VjdXJpdHkNCj4gICAg
ID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD4gKFhFTikgwqAgRGVidWcgRmVhdHVyZXM6IDAwMDAwMDAwMDMwMTAw
NjYNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgwqAgQXV4aWxpYXJ5IEZlYXR1cmVzOiAw
MDAwMDAwMDAwMDAwMDAwDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIMKgIE1lbW9yeSBN
b2RlbCBGZWF0dXJlczogMDAwMDAwMDAxMDIwMTEwNSAwMDAwMDAwMDQwMDAwMDAwDQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+IChYRU4pIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIMKgMDAwMDAwMDAwMTI2MDAwMCAwMDAwMDAwMDAyMTAyMjExDQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+IChYRU4pIMKgIElTQSBGZWF0dXJlczogMDAwMDAwMDAwMjEwMTExMCAwMDAwMDAw
MDEzMTEyMTExIDAwMDAwMDAwMjEyMzIwNDINCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikg
wqAgwqAgwqAgwqAgwqAgwqAgwqAgwqAgMDAwMDAwMDAwMTExMjEzMSAwMDAwMDAwMDAwMDEx
MTQyIDAwMDAwMDAwMDAwMTExMjENCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgVXNpbmcg
U01DIENhbGxpbmcgQ29udmVudGlvbiB2MS4yDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4p
IFVzaW5nIFBTQ0kgdjEuMQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBTTVA6IEFsbG93
aW5nIDQgQ1BVcw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBHZW5lcmljIFRpbWVyIElS
UTogcGh5cz0zMCBoeXA9MjYgdmlydD0yNyBGcmVxOiAxMDAwMDAgS0h6DQo+ICAgICA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+IChYRU4pIEdJQ3YyIGluaXRpYWxpemF0aW9uOg0KPiAgICAgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPiAoWEVOKSDCoCDCoCDCoCDCoCBnaWNfZGlzdF9hZGRyPTAwMDAwMDAwZjkwMTAwMDAN
Cj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgwqAgwqAgwqAgwqAgZ2ljX2NwdV9hZGRyPTAw
MDAwMDAwZjkwMjAwMDANCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgwqAgwqAgwqAgwqAg
Z2ljX2h5cF9hZGRyPTAwMDAwMDAwZjkwNDAwMDANCj4gICAgID7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhF
TikgwqAgwqAgwqAgwqAgZ2ljX3ZjcHVfYWRkcj0wMDAwMDAwMGY5MDYwMDAwDQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+IChYRU4pIMKgIMKgIMKgIMKgIGdpY19tYWludGVuYW5jZV9pcnE9MjUN
Cj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgR0lDdjI6IEFkanVzdGluZyBDUFUgaW50ZXJm
YWNlIGJhc2UgdG8gMHhmOTAyZjAwMA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBHSUN2
MjogMTkyIGxpbmVzLCA0IGNwdXMsIHNlY3VyZSAoSUlEIDAyMDAxNDNiKS4NCj4gICAgID7C
oCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD4gKFhFTikgVXNpbmcgc2NoZWR1bGVyOiBudWxsIFNjaGVkdWxlciAobnVs
bCkNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgSW5pdGlhbGl6aW5nIG51bGwgc2NoZWR1
bGVyDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIFdBUk5JTkc6IFRoaXMgaXMgZXhwZXJp
bWVudGFsIHNvZnR3YXJlIGluIGRldmVsb3BtZW50Lg0KPiAgICAgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAo
WEVOKSBVc2UgYXQgeW91ciBvd24gcmlzay4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikg
QWxsb2NhdGVkIGNvbnNvbGUgcmluZyBvZiAzMiBLaUIuDQo+ICAgICA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
IChYRU4pIENQVTA6IEd1ZXN0IGF0b21pY3Mgd2lsbCB0cnkgMTIgdGltZXMgYmVmb3JlIHBh
dXNpbmcgdGhlIGRvbWFpbg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBCcmluZ2luZyB1
cCBDUFUxDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIENQVTE6IEd1ZXN0IGF0b21pY3Mg
d2lsbCB0cnkgMTMgdGltZXMgYmVmb3JlIHBhdXNpbmcgdGhlIGRvbWFpbg0KPiAgICAgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPiAoWEVOKSBDUFUgMSBib290ZWQuDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChY
RU4pIEJyaW5naW5nIHVwIENQVTINCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgQ1BVMjog
R3Vlc3QgYXRvbWljcyB3aWxsIHRyeSAxMyB0aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9t
YWluDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIENQVSAyIGJvb3RlZC4NCj4gICAgID7C
oCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD4gKFhFTikgQnJpbmdpbmcgdXAgQ1BVMw0KPiAgICAgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PiAoWEVOKSBDUFUzOiBHdWVzdCBhdG9taWNzIHdpbGwgdHJ5IDEzIHRpbWVzIGJlZm9yZSBw
YXVzaW5nIHRoZSBkb21haW4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgQnJvdWdodCB1
cCA0IENQVXMNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgQ1BVIDMgYm9vdGVkLg0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IHByb2Jp
bmcgaGFyZHdhcmUgY29uZmlndXJhdGlvbi4uLg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVO
KSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IFNNTVV2MiB3aXRoOg0KPiAgICAgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgPiAoWEVOKSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IHN0YWdlIDIgdHJhbnNs
YXRpb24NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgc21tdTogL2F4aS9zbW11QGZkODAw
MDAwOiBzdHJlYW0gbWF0Y2hpbmcgd2l0aCA0OCByZWdpc3RlciBncm91cHMsIG1hc2sgMHg3
ZmZmPDI+c21tdToNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoC9heGkvc21tdUBmZDgwMDAwMDogMTYgY29udGV4dA0KPiAgICAgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgYmFua3MgKDANCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gc3RhZ2UtMiBvbmx5KQ0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IFN0YWdl
LTI6IDQ4LWJpdCBJUEEgLT4gNDgtYml0IFBBDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4p
IHNtbXU6IC9heGkvc21tdUBmZDgwMDAwMDogcmVnaXN0ZXJlZCAyOSBtYXN0ZXIgZGV2aWNl
cw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5hYmxl
ZA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSDCoC0gRG9tMCBtb2RlOiBSZWxheGVkDQo+
ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIFAyTTogNDAtYml0IElQQSB3aXRoIDQwLWJpdCBQ
QSBhbmQgOC1iaXQgVk1JRA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBQMk06IDMgbGV2
ZWxzIHdpdGggb3JkZXItMSByb290LCBWVENSIDB4MDAwMDAwMDA4MDAyMzU1OA0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPiAoWEVOKSBTY2hlZHVsaW5nIGdyYW51bGFyaXR5OiBjcHUsIDEgQ1BV
IHBlciBzY2hlZC1yZXNvdXJjZQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBhbHRlcm5h
dGl2ZXM6IFBhdGNoaW5nIHdpdGggYWx0IHRhYmxlIDAwMDAwMDAwMDAyY2M1YzggLT4gMDAw
MDAwMDAwMDJjY2IyYw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSAqKiogTE9BRElORyBE
T01BSU4gMCAqKioNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgTG9hZGluZyBkMCBrZXJu
ZWwgZnJvbSBib290IG1vZHVsZSBAIDAwMDAwMDAwMDEwMDAwMDANCj4gICAgID7CoCDCoCDC
oD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoD4gKFhFTikgTG9hZGluZyByYW1kaXNrIGZyb20gYm9vdCBtb2R1bGUgQCAwMDAwMDAw
MDAyMDAwMDAwDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIEFsbG9jYXRpbmcgMToxIG1h
cHBpbmdzIHRvdGFsbGluZyAxNjAwTUIgZm9yIGRvbTA6DQo+ICAgICA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
IChYRU4pIEJBTktbMF0gMHgwMDAwMDAxMDAwMDAwMC0weDAwMDAwMDIwMDAwMDAwICgyNTZN
QikNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgQkFOS1sxXSAweDAwMDAwMDI0MDAwMDAw
LTB4MDAwMDAwMjgwMDAwMDAgKDY0TUIpDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIEJB
TktbMl0gMHgwMDAwMDAzMDAwMDAwMC0weDAwMDAwMDgwMDAwMDAwICgxMjgwTUIpDQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+IChYRU4pIEdyYW50IHRhYmxlIHJhbmdlOiAweDAwMDAwMDAwZTAw
MDAwLTB4MDAwMDAwMDBlNDAwMDANCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgc21tdTog
L2F4aS9zbW11QGZkODAwMDAwOiBkMDogcDJtYWRkciAweDAwMDAwMDA4N2JmOTQwMDANCj4g
ICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgQWxsb2NhdGluZyBQUEkgMTYgZm9yIGV2ZW50IGNo
YW5uZWwgaW50ZXJydXB0DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIEV4dGVuZGVkIHJl
Z2lvbiAwOiAweDgxMjAwMDAwLT4weGEwMDAwMDAwDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChY
RU4pIEV4dGVuZGVkIHJlZ2lvbiAxOiAweGIxMjAwMDAwLT4weGMwMDAwMDAwDQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiAyOiAweGM4MDAwMDAwLT4weGUw
MDAwMDAwDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiAzOiAw
eGYwMDAwMDAwLT4weGY5MDAwMDAwDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIEV4dGVu
ZGVkIHJlZ2lvbiA0OiAweDEwMDAwMDAwMC0+MHg2MDAwMDAwMDANCj4gICAgID7CoCDCoCDC
oD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoD4gKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDU6IDB4ODgwMDAwMDAwLT4weDgwMDAwMDAw
MDANCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDY6IDB4ODAw
MTAwMDAwMC0+MHgxMDAwMDAwMDAwMA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBMb2Fk
aW5nIHpJbWFnZSBmcm9tIDAwMDAwMDAwMDEwMDAwMDAgdG8gMDAwMDAwMDAxMDAwMDAwMC0w
MDAwMDAwMDEwZTQxMDA4DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIExvYWRpbmcgZDAg
aW5pdHJkIGZyb20gMDAwMDAwMDAwMjAwMDAwMCB0byAweDAwMDAwMDAwMTM2MDAwMDAtMHgw
MDAwMDAwMDFmZjNhNjE3DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIExvYWRpbmcgZDAg
RFRCIHRvIDB4MDAwMDAwMDAxMzQwMDAwMC0weDAwMDAwMDAwMTM0MGNiZGMNCj4gICAgID7C
oCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD4gKFhFTikgSW5pdGlhbCBsb3cgbWVtb3J5IHZpcnEgdGhyZXNob2xkIHNl
dCBhdCAweDQwMDAgcGFnZXMuDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pIFN0ZC4gTG9n
bGV2ZWw6IEFsbA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBHdWVzdCBMb2dsZXZlbDog
QWxsDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IChYRU4pICoqKiBTZXJpYWwgaW5wdXQgdG8gRE9N
MCAodHlwZSAnQ1RSTC1hJyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQpDQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+IChYRU4pIG51bGwuYzozNTM6IDAgPC0tIGQwdjANCj4gICAgID7CoCDC
oCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD4gKFhFTikgRnJlZWQgMzU2a0IgaW5pdCBtZW1vcnkuDQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+IChYRU4pIGQwdjAgVW5oYW5kbGVkIFNNQy9IVkM6IDB4ODQwMDAwNTANCj4gICAg
ID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD4gKFhFTikgZDB2MCBVbmhhbmRsZWQgU01DL0hWQzogMHg4NjAwZmYw
MQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPiAoWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdv
cmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVI0DQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+IChYRU4pIGQwdjA6IHZHSUNEOiB1bmhhbmRsZWQgd29yZCB3cml0ZSAweDAwMDAw
MGZmZmZmZmZmIHRvIElDQUNUSVZFUjgNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgZDB2
MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNB
Q1RJVkVSMTINCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFu
ZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMTYNCj4gICAg
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> [    0.000000] Machine model: D14 Viper Board - White Unit
> [    0.000000] Xen 4.16 support found
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
> [    0.000000]   DMA32    empty
> [    0.000000]   Normal   empty
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
> [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
> [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
> [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
> [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
> [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
> [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
> [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
> [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
> [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
> [    0.000000] psci: probing for conduit method from DT.
> [    0.000000] psci: PSCIv1.1 detected in firmware.
> [    0.000000] psci: Using standard PSCI v0.2 function IDs
> [    0.000000] psci: Trusted OS migration not required
> [    0.000000] psci: SMC Calling Convention v1.1
> [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
> [    0.000000] Detected VIPT I-cache on CPU0
> [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
> [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
> [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
> [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
> [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
> [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
> [    0.000000] mem auto-init: clearing system memory may take some time...
> [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
> [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> [    0.000000] rcu: Hierarchical RCU implementation.
> [    0.000000] rcu: RCU event tracing is enabled.
> [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
> [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
> [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> [    0.000000] Root IRQ handler: gic_handle_irq
> [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
> [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
> [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
> [    0.000258] Console: colour dummy device 80x25
> [    0.310231] printk: console [hvc0] enabled
> [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> [    0.324851] pid_max: default: 32768 minimum: 301
> [    0.329706] LSM: Security Framework initializing
> [    0.334204] Yama: becoming mindful.
> [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> [    0.354743] xen:grant_table: Grant tables using version 1 layout
> [    0.359132] Grant table initialized
> [    0.362664] xen:events: Using FIFO-based ABI
> [    0.366993] Xen: initializing cpu0
> [    0.370515] rcu: Hierarchical SRCU implementation.
> [    0.375930] smp: Bringing up secondary CPUs ...
> (XEN) null.c:353: 1 <-- d0v1
> (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> [    0.382549] Detected VIPT I-cache on CPU1
> [    0.388712] Xen: initializing cpu1
> [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
> [    0.388829] smp: Brought up 1 node, 2 CPUs
> [    0.406941] SMP: Total of 2 processors activated.
> [    0.411698] CPU features: detected: 32-bit EL0 Support
> [    0.416888] CPU features: detected: CRC32 instructions
> [    0.422121] CPU: All CPU(s) started at EL1
> [    0.426248] alternatives: patching kernel code
> [    0.431424] devtmpfs: initialized
> [    0.441454] KASLR enabled
> [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
> [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
> [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
> [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
> [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
> [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> [    0.519478] audit: initializing netlink subsys (disabled)
> [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
> [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> [    0.545608] ASID allocator initialised with 32768 entries
> [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
> [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
> [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
> [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
> [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
> [    0.636520] DRBG: Continuing without Jitter RNG
> [    0.737187] raid6: neonx8   gen()  2143 MB/s
> [    0.805294] raid6: neonx8   xor()  1589 MB/s
> [    0.873406] raid6: neonx4   gen()  2177 MB/s
> [    0.941499] raid6: neonx4   xor()  1556 MB/s
> [    1.009612] raid6: neonx2   gen()  2072 MB/s
> [    1.077715] raid6: neonx2   xor()  1430 MB/s
> [    1.145834] raid6: neonx1   gen()  1769 MB/s
> [    1.213935] raid6: neonx1   xor()  1214 MB/s
> [    1.282046] raid6: int64x8  gen()  1366 MB/s
> [    1.350132] raid6: int64x8  xor()   773 MB/s
> [    1.418259] raid6: int64x4  gen()  1602 MB/s
> [    1.486349] raid6: int64x4  xor()   851 MB/s
> [    1.554464] raid6: int64x2  gen()  1396 MB/s
> [    1.622561] raid6: int64x2  xor()   744 MB/s
> [    1.690687] raid6: int64x1  gen()  1033 MB/s
> [    1.758770] raid6: int64x1  xor()   517 MB/s
> [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> [    1.767957] raid6: using neon recovery algorithm
> [    1.772824] xen:balloon: Initialising balloon driver
> [    1.778021] iommu: Default domain type: Translated
> [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
> [    1.789149] SCSI subsystem initialized
> [    1.792820] usbcore: registered new interface driver usbfs
> [    1.798254] usbcore: registered new interface driver hub
> [    1.803626] usbcore: registered new device driver usb
> [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> [    1.822903] PTP clock support registered
> [    1.826893] EDAC MC: Ver: 3.0.0
> [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.855907] FPGA manager framework
> [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> [    1.871712] NET: Registered PF_INET protocol family
> [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
> [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
> [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
> [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
> [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> [    1.936834] RPC: Registered named UNIX socket transport module.
> [    1.942342] RPC: Registered udp transport module.
> [    1.947088] RPC: Registered tcp transport module.
> [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> [    1.958334] PCI: CLS 0 bytes, default 64
> [    1.962709] Trying to unpack rootfs image as initramfs...
> [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
> [    2.021045] NET: Registered PF_ALG protocol family
> [    2.021122] xor: measuring software checksum speed
> [    2.029347]    8regs           :  2366 MB/sec
> [    2.033081]    32regs          :  2802 MB/sec
> [    2.038223]    arm64_neon      :  2320 MB/sec
> [    2.038385] xor: using function: 32regs (2802 MB/sec)
> [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
> [    2.050959] io scheduler mq-deadline registered
> [    2.055521] io scheduler kyber registered
> [    2.068227] xen:xen_evtchn: Event-channel device installed
> [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
> [    2.085548] brd: module loaded
> [    2.089290] loop: module loaded
> [    2.089341] Invalid max_queues (4), will use default max: 2.
> [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
> [    2.104156] usbcore: registered new interface driver rtl8150
> [    2.109813] usbcore: registered new interface driver r8152
> [    2.115367] usbcore: registered new interface driver asix
> [    2.120794] usbcore: registered new interface driver ax88179_178a
> [    2.126934] usbcore: registered new interface driver cdc_ether
> [    2.132816] usbcore: registered new interface driver
IGNkY19lZW0NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMTM4NTI3XSB1c2Jjb3Jl
OiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIG5ldDEwODANCj4gICAgID7CoCDC
oCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD4gWyDCoCDCoDIuMTQ0MjU2XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRl
cmZhY2UgZHJpdmVyIGNkY19zdWJzZXQNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIu
MTUwMjA1XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHphdXJ1
cw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi4xNTU4MzddIHVzYmNvcmU6IHJlZ2lz
dGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgY2RjX25jbQ0KPiAgICAgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PiBbIMKgIMKgMi4xNjE1NTBdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBk
cml2ZXIgcjgxNTNfZWNtDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjE2ODI0MF0g
dXNiY29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBjZGNfYWNtDQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjE3MzEwOV0gY2RjX2FjbTogVVNCIEFic3RyYWN0
IENvbnRyb2wgTW9kZWwgZHJpdmVyIGZvciBVU0IgbW9kZW1zIGFuZCBJU0ROIGFkYXB0ZXJz
DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjE4MTM1OF0gdXNiY29yZTogcmVnaXN0
ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciB1YXMNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDC
oCDCoDIuMTg2NTQ3XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVy
IHVzYi1zdG9yYWdlDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjE5MjY0M10gdXNi
Y29yZTogcmVnaXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBmdGRpX3Npbw0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPiBbIMKgIMKgMi4xOTgzODRdIHVzYnNlcmlhbDogVVNCIFNlcmlhbCBz
dXBwb3J0IHJlZ2lzdGVyZWQgZm9yIEZUREkgVVNCIFNlcmlhbCBEZXZpY2UNCj4gICAgID7C
oCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD4gWyDCoCDCoDIuMjA2MTE4XSB1ZGMtY29yZTogY291bGRuJ3QgZmluZCBh
biBhdmFpbGFibGUgVURDIC0gYWRkZWQgW2dfbWFzc19zdG9yYWdlXSB0byBsaXN0IG9mIHBl
bmRpbmcNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoGRyaXZlcnMNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMjE1MzMyXSBpMmNf
ZGV2OiBpMmMgL2RldiBlbnRyaWVzIGRyaXZlcg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKg
IMKgMi4yMjA0NjddIHhlbl93ZHQgeGVuX3dkdDogaW5pdGlhbGl6ZWQgKHRpbWVvdXQ9NjBz
LCBub3dheW91dD0wKQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi4yMjU5MjNdIGRl
dmljZS1tYXBwZXI6IHVldmVudDogdmVyc2lvbiAxLjAuMw0KPiAgICAgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PiBbIMKgIMKgMi4yMzA2NjhdIGRldmljZS1tYXBwZXI6IGlvY3RsOiA0LjQ1LjAtaW9jdGwg
KDIwMjEtMDMtMjIpIGluaXRpYWxpc2VkOiBkbS1kZXZlbEByZWRoYXQuY29tIDxtYWlsdG86
ZG0tZGV2ZWxAcmVkaGF0LmNvbT4gPG1haWx0bzpkbS1kZXZlbEByZWRoYXQuY29tIDxtYWls
dG86ZG0tZGV2ZWxAcmVkaGF0LmNvbT4+IDxtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNvbSA8
bWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20+IDxtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNv
bSA8bWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20+Pj4gPG1haWx0bzpkbS1kZXZlbEByZWRo
YXQuY29tIDxtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNvbT4gPG1haWx0bzpkbS1kZXZlbEBy
ZWRoYXQuY29tIDxtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNvbT4+IDxtYWlsdG86ZG0tZGV2
ZWxAcmVkaGF0LmNvbSA8bWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20+IDxtYWlsdG86ZG0t
ZGV2ZWxAcmVkaGF0LmNvbSA8bWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20+Pj4+DQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA8bWFpbHRv
OmRtLWRldmVsQHJlZGhhdC5jb20gPG1haWx0bzpkbS1kZXZlbEByZWRoYXQuY29tPiA8bWFp
bHRvOmRtLWRldmVsQHJlZGhhdC5jb20gPG1haWx0bzpkbS1kZXZlbEByZWRoYXQuY29tPj4g
PG1haWx0bzpkbS1kZXZlbEByZWRoYXQuY29tIDxtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0LmNv
bT4gPG1haWx0bzpkbS1kZXZlbEByZWRoYXQuY29tIDxtYWlsdG86ZG0tZGV2ZWxAcmVkaGF0
LmNvbT4+PiA8bWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20gPG1haWx0bzpkbS1kZXZlbEBy
ZWRoYXQuY29tPiA8bWFpbHRvOmRtLWRldmVsQHJlZGhhdC5jb20gPG1haWx0bzpkbS1kZXZl
bEByZWRoYXQuY29tPj4gPG1haWx0bzpkbS1kZXZlbEByZWRoYXQuY29tIDxtYWlsdG86ZG0t
ZGV2ZWxAcmVkaGF0LmNvbT4gPG1haWx0bzpkbS1kZXZlbEByZWRoYXQuY29tIDxtYWlsdG86
ZG0tZGV2ZWxAcmVkaGF0LmNvbT4+Pj4+DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAy
LjIzOTMxNV0gRURBQyBNQzA6IEdpdmluZyBvdXQgZGV2aWNlIHRvIG1vZHVsZSAxIGNvbnRy
b2xsZXIgc3lucHNfZGRyX2NvbnRyb2xsZXI6IERFViBzeW5wc19lZGFjDQo+ICAgICA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAoSU5URVJSVVBUKQ0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi4yNDk0MDVdIEVEQUMgREVWSUNFMDogR2l2
aW5nIG91dCBkZXZpY2UgdG8gbW9kdWxlIHp5bnFtcC1vY20tZWRhYyBjb250cm9sbGVyIHp5
bnFtcF9vY206IERFVg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgZmY5NjAwMDAubWVtb3J5LWNvbnRy
b2xsZXIgKElOVEVSUlVQVCkNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMjYxNzE5
XSBzZGhjaTogU2VjdXJlIERpZ2l0YWwgSG9zdCBDb250cm9sbGVyIEludGVyZmFjZSBkcml2
ZXINCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMjY3NDg3XSBzZGhjaTogQ29weXJp
Z2h0KGMpIFBpZXJyZSBPc3NtYW4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMjcx
ODkwXSBzZGhjaS1wbHRmbTogU0RIQ0kgcGxhdGZvcm0gYW5kIE9GIGRyaXZlciBoZWxwZXIN
Cj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMjc4MTU3XSBsZWR0cmlnLWNwdTogcmVn
aXN0ZXJlZCB0byBpbmRpY2F0ZSBhY3Rpdml0eSBvbiBDUFVzDQo+ICAgICA+wqAgwqAgwqA+
wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+IFsgwqAgwqAyLjI4MzgxNl0genlucW1wX2Zpcm13YXJlX3Byb2JlIFBsYXRmb3JtIE1h
bmFnZW1lbnQgQVBJIHYxLjENCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMjg5NTU0
XSB6eW5xbXBfZmlybXdhcmVfcHJvYmUgVHJ1c3R6b25lIHZlcnNpb24gdjEuMA0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPiBbIMKgIMKgMi4zMjc4NzVdIHNlY3VyZWZ3IHNlY3VyZWZ3OiBzZWN1
cmVmdyBwcm9iZWQNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMzI4MzI0XSBhbGc6
IE5vIHRlc3QgZm9yIHhpbGlueC16eW5xbXAtYWVzICh6eW5xbXAtYWVzKQ0KPiAgICAgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPiBbIMKgIMKgMi4zMzI1NjNdIHp5bnFtcF9hZXMgZmlybXdhcmU6enlucW1w
LWZpcm13YXJlOnp5bnFtcC1hZXM6IEFFUyBTdWNjZXNzZnVsbHkgUmVnaXN0ZXJlZA0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi4zNDExODNdIGFsZzogTm8gdGVzdCBmb3IgeGls
aW54LXp5bnFtcC1yc2EgKHp5bnFtcC1yc2EpDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAg
wqAyLjM0NzY2N10gcmVtb3RlcHJvYyByZW1vdGVwcm9jMDogZmY5YTAwMDAucmY1c3M6cjVm
XzAgaXMgYXZhaWxhYmxlDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjM1MzAwM10g
cmVtb3RlcHJvYyByZW1vdGVwcm9jMTogZmY5YTAwMDAucmY1c3M6cjVmXzEgaXMgYXZhaWxh
YmxlDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjM2MjYwNV0gZnBnYV9tYW5hZ2Vy
IGZwZ2EwOiBYaWxpbnggWnlucU1QIEZQR0EgTWFuYWdlciByZWdpc3RlcmVkDQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjM2NjU0MF0gdmlwZXIteGVuLXByb3h5IHZpcGVyLXhl
bi1wcm94eTogVmlwZXIgWGVuIFByb3h5IHJlZ2lzdGVyZWQNCj4gICAgID7CoCDCoCDCoD7C
oCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD4gWyDCoCDCoDIuMzcyNTI1XSB2aXBlci12ZHBwIGE0MDAwMDAwLnZkcHA6IERldmljZSBU
cmVlIFByb2JpbmcNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMzc3Nzc4XSB2aXBl
ci12ZHBwIGE0MDAwMDAwLnZkcHA6IFZEUFAgVmVyc2lvbjogMS4zLjkuMCBJbmZvOiAxLjUx
Mi4xNS4wIEtleUxlbjogMzINCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMzg2NDMy
XSB2aXBlci12ZHBwIGE0MDAwMDAwLnZkcHA6IFVuYWJsZSB0byByZWdpc3RlciB0YW1wZXIg
aGFuZGxlci4gUmV0cnlpbmcuLi4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuMzk0
MDk0XSB2aXBlci12ZHBwLW5ldCBhNTAwMDAwMC52ZHBwX25ldDogRGV2aWNlIFRyZWUgUHJv
YmluZw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi4zOTk4NTRdIHZpcGVyLXZkcHAt
bmV0IGE1MDAwMDAwLnZkcHBfbmV0OiBEZXZpY2UgcmVnaXN0ZXJlZA0KPiAgICAgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgPiBbIMKgIMKgMi40MDU5MzFdIHZpcGVyLXZkcHAtc3RhdCBhODAwMDAwMC52ZHBw
X3N0YXQ6IERldmljZSBUcmVlIFByb2JpbmcNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDC
oDIuNDEyMDM3XSB2aXBlci12ZHBwLXN0YXQgYTgwMDAwMDAudmRwcF9zdGF0OiBCdWlsZCBw
YXJhbWV0ZXJzOiBWVEkgQ291bnQ6IDUxMiBFdmVudCBDb3VudDogMzINCj4gICAgID7CoCDC
oCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD4gWyDCoCDCoDIuNDIwODU2XSBkZWZhdWx0IHByZXNldA0KPiAgICAgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgPiBbIMKgIMKgMi40MjM3OTddIHZpcGVyLXZkcHAtc3RhdCBhODAwMDAwMC52ZHBw
X3N0YXQ6IERldmljZSByZWdpc3RlcmVkDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAy
LjQzMDA1NF0gdmlwZXItdmRwcC1ybmcgYWMwMDAwMDAudmRwcF9ybmc6IERldmljZSBUcmVl
IFByb2JpbmcNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuNDM1OTQ4XSB2aXBlci12
ZHBwLXJuZyBhYzAwMDAwMC52ZHBwX3JuZzogRGV2aWNlIHJlZ2lzdGVyZWQNCj4gICAgID7C
oCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD4gWyDCoCDCoDIuNDQxOTc2XSB2bWN1IGRyaXZlciBpbml0DQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjQ0NDkyMl0gVk1DVTogOiAoMjQwOjApIHJlZ2lzdGVy
ZWQNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuNDQ0OTU2XSBJbiBLODEgVXBkYXRl
ciBpbml0DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjQ0OTAwM10gcGt0Z2VuOiBQ
YWNrZXQgR2VuZXJhdG9yIGZvciBwYWNrZXQgcGVyZm9ybWFuY2UgdGVzdGluZy4gVmVyc2lv
bjogMi43NQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi40Njg4MzNdIEluaXRpYWxp
emluZyBYRlJNIG5ldGxpbmsgc29ja2V0DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAy
LjQ2ODkwMl0gTkVUOiBSZWdpc3RlcmVkIFBGX1BBQ0tFVCBwcm90b2NvbCBmYW1pbHkNCj4g
ICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuNDcyNzI5XSBCcmlkZ2UgZmlyZXdhbGxpbmcg
cmVnaXN0ZXJlZA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi40NzY3ODVdIDgwMjFx
OiA4MDIuMVEgVkxBTiBTdXBwb3J0IHYxLjgNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDC
oDIuNDgxMzQxXSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2ZXJzaW9uIDENCj4gICAgID7CoCDC
oCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD4gWyDCoCDCoDIuNDg2Mzk0XSBCdHJmcyBsb2FkZWQsIGNyYzMyYz1jcmMzMmMt
Z2VuZXJpYywgem9uZWQ9bm8sIGZzdmVyaXR5PW5vDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsg
wqAgwqAyLjUwMzE0NV0gZmYwMTAwMDAuc2VyaWFsOiB0dHlQUzEgYXQgTU1JTyAweGZmMDEw
MDAwIChpcnEgPSAzNiwgYmFzZV9iYXVkID0gNjI1MDAwMCkgaXMgYSB4dWFydHBzDQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjUwNzEwM10gb2YtZnBnYS1yZWdpb24gZnBnYS1m
dWxsOiBGUEdBIFJlZ2lvbiBwcm9iZWQNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIu
NTEyOTg2XSB4aWxpbngtenlucW1wLWRtYSBmZDUwMDAwMC5kbWEtY29udHJvbGxlcjogWnlu
cU1QIERNQSBkcml2ZXIgUHJvYmUgc3VjY2Vzcw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKg
IMKgMi41MjAyNjddIHhpbGlueC16eW5xbXAtZG1hIGZkNTEwMDAwLmRtYS1jb250cm9sbGVy
OiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzDQo+ICAgICA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
IFsgwqAgwqAyLjUyODIzOV0geGlsaW54LXp5bnFtcC1kbWEgZmQ1MjAwMDAuZG1hLWNvbnRy
b2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2JlIHN1Y2Nlc3MNCj4gICAgID7CoCDCoCDC
oD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoD4gWyDCoCDCoDIuNTM2MTUyXSB4aWxpbngtenlucW1wLWRtYSBmZDUzMDAwMC5kbWEt
Y29udHJvbGxlcjogWnlucU1QIERNQSBkcml2ZXIgUHJvYmUgc3VjY2Vzcw0KPiAgICAgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPiBbIMKgIMKgMi41NDQxNTNdIHhpbGlueC16eW5xbXAtZG1hIGZkNTQwMDAw
LmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzDQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjU1MjEyN10geGlsaW54LXp5bnFtcC1kbWEgZmQ1
NTAwMDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVyIFByb2JlIHN1Y2Nlc3MN
Cj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuNTYwMTc4XSB4aWxpbngtenlucW1wLWRt
YSBmZmE4MDAwMC5kbWEtY29udHJvbGxlcjogWnlucU1QIERNQSBkcml2ZXIgUHJvYmUgc3Vj
Y2Vzcw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi41Njc5ODddIHhpbGlueC16eW5x
bXAtZG1hIGZmYTkwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9i
ZSBzdWNjZXNzDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjU3NjAxOF0geGlsaW54
LXp5bnFtcC1kbWEgZmZhYTAwMDAuZG1hLWNvbnRyb2xsZXI6IFp5bnFNUCBETUEgZHJpdmVy
IFByb2JlIHN1Y2Nlc3MNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuNTgzODg5XSB4
aWxpbngtenlucW1wLWRtYSBmZmFiMDAwMC5kbWEtY29udHJvbGxlcjogWnlucU1QIERNQSBk
cml2ZXIgUHJvYmUgc3VjY2Vzcw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi45NDYz
NzldIHNwaS1ub3Igc3BpMC4wOiBtdDI1cXU1MTJhICgxMzEwNzIgS2J5dGVzKQ0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPiBbIMKgIMKgMi45NDY0NjddIDIgZml4ZWQtcGFydGl0aW9ucyBwYXJ0
aXRpb25zIGZvdW5kIG9uIE1URCBkZXZpY2Ugc3BpMC4wDQo+ICAgICA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
IFsgwqAgwqAyLjk1MjM5M10gQ3JlYXRpbmcgMiBNVEQgcGFydGl0aW9ucyBvbiAic3BpMC4w
IjoNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDIuOTU3MjMxXSAweDAwMDAwNDAwMDAw
MC0weDAwMDAwODAwMDAwMCA6ICJiYW5rIEEiDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAg
wqAyLjk2MzMzMl0gMHgwMDAwMDAwMDAwMDAtMHgwMDAwMDQwMDAwMDAgOiAiYmFuayBCIg0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi45Njg2OTRdIG1hY2IgZmYwYjAwMDAuZXRo
ZXJuZXQ6IE5vdCBlbmFibGluZyBwYXJ0aWFsIHN0b3JlIGFuZCBmb3J3YXJkDQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjk3NTMzM10gbWFjYiBmZjBiMDAwMC5ldGhlcm5ldCBl
dGgwOiBDYWRlbmNlIEdFTSByZXYgMHg1MDA3MDEwNiBhdCAweGZmMGIwMDAwIGlycSAyNQ0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgKDE4
OjQxOmZlOjBmOmZmOjAyKQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMi45ODQ0NzJd
IG1hY2IgZmYwYzAwMDAuZXRoZXJuZXQ6IE5vdCBlbmFibGluZyBwYXJ0aWFsIHN0b3JlIGFu
ZCBmb3J3YXJkDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAyLjk5MjE0NF0gbWFjYiBm
ZjBjMDAwMC5ldGhlcm5ldCBldGgxOiBDYWRlbmNlIEdFTSByZXYgMHg1MDA3MDEwNiBhdCAw
eGZmMGMwMDAwIGlycSAyNg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgKDE4OjQxOmZlOjBmOmZmOjAzKQ0KPiAgICAgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PiBbIMKgIMKgMy4wMDEwNDNdIHZpcGVyX2VuZXQgdmlwZXJfZW5ldDogVmlwZXIgcG93ZXIg
R1BJT3MgaW5pdGlhbGlzZWQNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMuMDA3MzEz
XSB2aXBlcl9lbmV0IHZpcGVyX2VuZXQgdm5ldDAgKHVuaW5pdGlhbGl6ZWQpOiBWYWxpZGF0
ZSBpbnRlcmZhY2UgUVNHTUlJDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjAxNDkx
NF0gdmlwZXJfZW5ldCB2aXBlcl9lbmV0IHZuZXQxICh1bmluaXRpYWxpemVkKTogVmFsaWRh
dGUgaW50ZXJmYWNlIFFTR01JSQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4wMjIx
MzhdIHZpcGVyX2VuZXQgdmlwZXJfZW5ldCB2bmV0MSAodW5pbml0aWFsaXplZCk6IFZhbGlk
YXRlIGludGVyZmFjZSB0eXBlIDE4DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjAz
MDI3NF0gdmlwZXJfZW5ldCB2aXBlcl9lbmV0IHZuZXQyICh1bmluaXRpYWxpemVkKTogVmFs
aWRhdGUgaW50ZXJmYWNlIFFTR01JSQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4w
Mzc3ODVdIHZpcGVyX2VuZXQgdmlwZXJfZW5ldCB2bmV0MyAodW5pbml0aWFsaXplZCk6IFZh
bGlkYXRlIGludGVyZmFjZSBRU0dNSUkNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMu
MDQ1MzAxXSB2aXBlcl9lbmV0IHZpcGVyX2VuZXQ6IFZpcGVyIGVuZXQgcmVnaXN0ZXJlZA0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4wNTA5NThdIHhpbGlueC1heGlwbW9uIGZm
YTAwMDAwLnBlcmYtbW9uaXRvcjogUHJvYmVkIFhpbGlueCBBUE0NCj4gICAgID7CoCDCoCDC
oD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoD4gWyDCoCDCoDMuMDU3MTM1XSB4aWxpbngtYXhpcG1vbiBmZDBiMDAwMC5wZXJmLW1v
bml0b3I6IFByb2JlZCBYaWxpbnggQVBNDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAz
LjA2MzUzOF0geGlsaW54LWF4aXBtb24gZmQ0OTAwMDAucGVyZi1tb25pdG9yOiBQcm9iZWQg
WGlsaW54IEFQTQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4wNjk5MjBdIHhpbGlu
eC1heGlwbW9uIGZmYTEwMDAwLnBlcmYtbW9uaXRvcjogUHJvYmVkIFhpbGlueCBBUE0NCj4g
ICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMuMDk3NzI5XSBzaTcweHg6IHByb2JlIG9mIDIt
MDA0MCBmYWlsZWQgd2l0aCBlcnJvciAtNQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKg
My4wOTgwNDJdIGNkbnMtd2R0IGZkNGQwMDAwLndhdGNoZG9nOiBYaWxpbnggV2F0Y2hkb2cg
VGltZXIgd2l0aCB0aW1lb3V0IDYwcw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4x
MDUxMTFdIGNkbnMtd2R0IGZmMTUwMDAwLndhdGNoZG9nOiBYaWxpbnggV2F0Y2hkb2cgVGlt
ZXIgd2l0aCB0aW1lb3V0IDEwcw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4xMTI0
NTddIHZpcGVyLXRhbXBlciB2aXBlci10YW1wZXI6IERldmljZSByZWdpc3RlcmVkDQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjExNzU5M10gYWN0aXZlX2JhbmsgYWN0aXZlX2Jh
bms6IGJvb3QgYmFuazogMQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4xMjIxODRd
IGFjdGl2ZV9iYW5rIGFjdGl2ZV9iYW5rOiBib290IG1vZGU6ICgweDAyKSBxc3BpMzINCj4g
ICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMuMTI4MjQ3XSB2aXBlci12ZHBwIGE0MDAwMDAw
LnZkcHA6IERldmljZSBUcmVlIFByb2JpbmcNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDC
oDMuMTMzNDM5XSB2aXBlci12ZHBwIGE0MDAwMDAwLnZkcHA6IFZEUFAgVmVyc2lvbjogMS4z
LjkuMCBJbmZvOiAxLjUxMi4xNS4wIEtleUxlbjogMzINCj4gICAgID7CoCDCoCDCoD7CoCDC
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4g
WyDCoCDCoDMuMTQyMTUxXSB2aXBlci12ZHBwIGE0MDAwMDAwLnZkcHA6IFRhbXBlciBoYW5k
bGVyIHJlZ2lzdGVyZWQNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMuMTQ3NDM4XSB2
aXBlci12ZHBwIGE0MDAwMDAwLnZkcHA6IERldmljZSByZWdpc3RlcmVkDQo+ICAgICA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+IFsgwqAgwqAzLjE1MzAwN10gbHBjNTVfbDIgc3BpMS4wOiByZWdpc3RlcmVk
IGhhbmRsZXIgZm9yIHByb3RvY29sIDANCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMu
MTU4NTgyXSBscGM1NV91c2VyIGxwYzU1X3VzZXI6IFRoZSBtYWpvciBudW1iZXIgZm9yIHlv
dXIgZGV2aWNlIGlzIDIzNg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4xNjU5NzZd
IGxwYzU1X2wyIHNwaTEuMDogcmVnaXN0ZXJlZCBoYW5kbGVyIGZvciBwcm90b2NvbCAxDQo+
ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjE4MTk5OV0gcnRjLWxwYzU1IHJ0Y19scGM1
NTogbHBjNTVfcnRjX2dldF90aW1lOiBiYWQgcmVzdWx0OiAxDQo+ICAgICA+wqAgwqAgwqA+
wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+IFsgwqAgwqAzLjE4Mjg1Nl0gcnRjLWxwYzU1IHJ0Y19scGM1NTogcmVnaXN0ZXJlZCBh
cyBydGMwDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjE4ODY1Nl0gbHBjNTVfbDIg
c3BpMS4wOiAoMikgbWN1IHN0aWxsIG5vdCByZWFkeT8NCj4gICAgID7CoCDCoCDCoD7CoCDC
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4g
WyDCoCDCoDMuMTkzNzQ0XSBscGM1NV9sMiBzcGkxLjA6ICgzKSBtY3Ugc3RpbGwgbm90IHJl
YWR5Pw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4xOTg4NDhdIGxwYzU1X2wyIHNw
aTEuMDogKDQpIG1jdSBzdGlsbCBub3QgcmVhZHk/DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsg
wqAgwqAzLjIwMjkzMl0gbW1jMDogU0RIQ0kgY29udHJvbGxlciBvbiBmZjE2MDAwMC5tbWMg
W2ZmMTYwMDAwLm1tY10gdXNpbmcgQURNQSA2NC1iaXQNCj4gICAgID7CoCDCoCDCoD7CoCDC
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4g
WyDCoCDCoDMuMjEwNjg5XSBscGM1NV9sMiBzcGkxLjA6ICg1KSBtY3Ugc3RpbGwgbm90IHJl
YWR5Pw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4yMTU2OTRdIGxwYzU1X2wyIHNw
aTEuMDogcnggZXJyb3I6IC0xMTANCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMuMjg0
NDM4XSBtbWMwOiBuZXcgSFMyMDAgTU1DIGNhcmQgYXQgYWRkcmVzcyAwMDAxDQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjI4NTE3OV0gbW1jYmxrMDogbW1jMDowMDAxIFNFTTE2
RyAxNC42IEdpQg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy4yOTE3ODRdIMKgbW1j
YmxrMDogcDEgcDIgcDMgcDQgcDUgcDYgcDcgcDgNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDC
oCDCoDMuMjkzOTE1XSBtbWNibGswYm9vdDA6IG1tYzA6MDAwMSBTRU0xNkcgNC4wMCBNaUIN
Cj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMuMjk5MDU0XSBtbWNibGswYm9vdDE6IG1t
YzA6MDAwMSBTRU0xNkcgNC4wMCBNaUINCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMu
MzAzOTA1XSBtbWNibGswcnBtYjogbW1jMDowMDAxIFNFTTE2RyA0LjAwIE1pQiwgY2hhcmRl
diAoMjQ0OjApDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjU4MjY3Nl0gcnRjLWxw
YzU1IHJ0Y19scGM1NTogbHBjNTVfcnRjX2dldF90aW1lOiBiYWQgcmVzdWx0OiAxDQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjU4MzMzMl0gcnRjLWxwYzU1IHJ0Y19scGM1NTog
aGN0b3N5czogdW5hYmxlIHRvIHJlYWQgdGhlIGhhcmR3YXJlIGNsb2NrDQo+ICAgICA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+IFsgwqAgwqAzLjU5MTI1Ml0gY2Rucy1pMmMgZmYwMjAwMDAuaTJjOiByZWNv
dmVyeSBpbmZvcm1hdGlvbiBjb21wbGV0ZQ0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKg
My41OTcwODVdIGF0MjQgMC0wMDUwOiBzdXBwbHkgdmNjIG5vdCBmb3VuZCwgdXNpbmcgZHVt
bXkgcmVndWxhdG9yDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjYwMzAxMV0gbHBj
NTVfbDIgc3BpMS4wOiAoMikgbWN1IHN0aWxsIG5vdCByZWFkeT8NCj4gICAgID7CoCDCoCDC
oD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoD4gWyDCoCDCoDMuNjA4MDkzXSBhdDI0IDAtMDA1MDogMjU2IGJ5dGUgc3BkIEVFUFJP
TSwgcmVhZC1vbmx5DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjYxMzYyMF0gbHBj
NTVfbDIgc3BpMS4wOiAoMykgbWN1IHN0aWxsIG5vdCByZWFkeT8NCj4gICAgID7CoCDCoCDC
oD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoD4gWyDCoCDCoDMuNjE5MzYyXSBscGM1NV9sMiBzcGkxLjA6ICg0KSBtY3Ugc3RpbGwg
bm90IHJlYWR5Pw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy42MjQyMjRdIHJ0Yy1y
djMwMjggMC0wMDUyOiByZWdpc3RlcmVkIGFzIHJ0YzENCj4gICAgID7CoCDCoCDCoD7CoCDC
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4g
WyDCoCDCoDMuNjI4MzQzXSBscGM1NV9sMiBzcGkxLjA6ICg1KSBtY3Ugc3RpbGwgbm90IHJl
YWR5Pw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy42MzMyNTNdIGxwYzU1X2wyIHNw
aTEuMDogcnggZXJyb3I6IC0xMTANCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMuNjM5
MTA0XSBrODFfYm9vdGxvYWRlciAwLTAwMTA6IHByb2JlDQo+ICAgICA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
IFsgwqAgwqAzLjY0MTYyOF0gVk1DVTogOiAoMjM1OjApIHJlZ2lzdGVyZWQNCj4gICAgID7C
oCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD4gWyDCoCDCoDMuNjQxNjM1XSBrODFfYm9vdGxvYWRlciAwLTAwMTA6IHBy
b2JlIGNvbXBsZXRlZA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy42NjgzNDZdIGNk
bnMtaTJjIGZmMDIwMDAwLmkyYzogNDAwIGtIeiBtbWlvIGZmMDIwMDAwIGlycSAyOA0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPiBbIMKgIMKgMy42NjkxNTRdIGNkbnMtaTJjIGZmMDMwMDAwLmky
YzogcmVjb3ZlcnkgaW5mb3JtYXRpb24gY29tcGxldGUNCj4gICAgID7CoCDCoCDCoD7CoCDC
oCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4g
WyDCoCDCoDMuNjc1NDEyXSBsbTc1IDEtMDA0ODogc3VwcGx5IHZzIG5vdCBmb3VuZCwgdXNp
bmcgZHVtbXkgcmVndWxhdG9yDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgwqAzLjY4Mjky
MF0gbG03NSAxLTAwNDg6IGh3bW9uMTogc2Vuc29yICd0bXAxMTInDQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+IFsgwqAgwqAzLjY4NjU0OF0gaTJjIGkyYy0xOiBBZGRlZCBtdWx0aXBsZXhlZCBp
MmMgYnVzIDMNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCDCoDMuNjkwNzk1XSBpMmMgaTJj
LTE6IEFkZGVkIG11bHRpcGxleGVkIGkyYyBidXMgNA0KPiAgICAgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPiBb
IMKgIMKgMy42OTU2MjldIGkyYyBpMmMtMTogQWRkZWQgbXVsdGlwbGV4ZWQgaTJjIGJ1cyA1
DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
> [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> [    3.737549] sfp_register_socket: got sfp_bus
> [    3.740709] sfp_register_socket: register sfp_bus
> [    3.745459] sfp_register_bus: ops ok!
> [    3.749179] sfp_register_bus: Try to attach
> [    3.753419] sfp_register_bus: Attach succeeded
> [    3.757914] sfp_register_bus: upstream ops attach
> [    3.762677] sfp_register_bus: Bus registered
> [    3.766999] sfp_register_socket: register sfp_bus succeeded
> [    3.775870] of_cfs_init
> [    3.776000] of_cfs_init: OK
> [    3.778211] clk: Not disabling unused clocks
> [   11.278477] Freeing initrd memory: 206056K
> [   11.279406] Freeing unused kernel memory: 1536K
> [   11.314006] Checked W+X mappings: passed, no W+X pages found
> [   11.314142] Run /init as init process
> INIT: version 3.01 booting
> fsck (busybox 1.35.0)
> /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> Starting random number generator daemon.
> [   11.580662] random: crng init done
> Starting udev
> [   11.613159] udevd[142]: starting version 3.2.10
> [   11.620385] udevd[143]: starting eudev-3.2.10
> [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Mon Feb 27 08:40:53 UTC 2023
> [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> hwclock: RTC_SET_TIME: Invalid exchange
> [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> Starting mcud
> INIT: Entering runlevel: 5
> Configuring network interfaces... done.
> resetting network interface
> [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> [   12.732151] pps pps0: new PPS source ptp0
> [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> [   12.761804] pps pps1: new PPS source ptp1
> [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> Auto-negotiation: off
> Auto-negotiation: off
> [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> Starting Failsafe Secure Shell server in port 2222: sshd
> done.
> Starting rpcbind daemon...done.
>
> [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Starting State Manager Service
> Start state-manager restarter...
> (XEN) d0v1 Forwarding AES operation: 3254779951
> Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> [   17.350670] BTRFS info (device dm-0): has skinny extents
> [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> [   17.872699] BTRFS info (device dm-1): using free space tree
> [   17.872771] BTRFS info (device dm-1): has skinny extents
> [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> [   17.895695] BTRFS info (device dm-1): checking UUID tree
>
> Setting domain 0 name, domid and JSON config...
> Done setting up Dom0
> Starting xenconsoled...
> Starting QEMU as disk backend for dom0
> Starting domain watchdog daemon: xenwatchdogd startup
>
> [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> [done]
> [   18.465552] BTRFS info (device dm-2): using free space tree
> [   18.465629] BTRFS info (device dm-2): has skinny extents
> [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> [   18.486659] BTRFS info (device dm-2): checking UUID tree
> OK
> starting rsyslogd ... Log partition ready after 0 poll loops
> done
> rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>
> Please insert USB token and enter your role in login prompt.
>
> login:
>
> Regards,
> O.
>
> On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > Hi Oleg,
> >
> > Here is the issue from your logs:
> >
> > SError Interrupt on CPU0, code 0xbe000000 -- SError
> >
> > SErrors are special signals to notify software of serious hardware
> > errors. Something is going very wrong. Defective hardware is a
> > possibility. Another possibility is software accessing address ranges
> > that it is not supposed to; sometimes that causes SErrors.
> >
> > Cheers,
> >
> > Stefano
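[As an aside on the SError syndrome quoted above: the fields of that code can be unpacked mechanically. The sketch below is illustrative only; it assumes the logged `code 0xbe000000` is the raw ESR_ELx syndrome value as defined by the Arm architecture, using the standard (non-implementation-defined) ISS layout for SError.]

```python
# Minimal sketch: decode the SError syndrome from the log above, assuming
# "code 0xbe000000" is the raw ESR_ELx value (field layout per the Arm ARM).
def decode_serror_esr(esr: int) -> dict:
    ec = (esr >> 26) & 0x3F   # Exception Class; 0x2F = SError interrupt
    il = (esr >> 25) & 0x1    # Instruction Length bit (RES1 for SError)
    ids = (esr >> 24) & 0x1   # 1 = implementation-defined ISS encoding
    aet = (esr >> 10) & 0x7   # Asynchronous Error Type (valid when IDS == 0)
    dfsc = esr & 0x3F         # Data Fault Status Code (valid when IDS == 0)
    return {"EC": hex(ec), "IL": il, "IDS": ids, "AET": aet, "DFSC": hex(dfsc)}

print(decode_serror_esr(0xBE000000))
# EC == 0x2f confirms an SError; IDS/AET/DFSC all zero carry no further
# detail, i.e. the CPU reports an uncategorized asynchronous external abort,
# consistent with Stefano's "bad hardware or a stray access" reading.
```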
> > On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> > > Hello,
> > >
> > > Thanks guys.
> > > I found out where the problem was.
> > > Now dom0 booted more. But I have a new one.
> > > This is a kernel panic during Dom0 loading.
> > > Maybe someone is able to suggest something?
> > >
> > > Regards,
> > > O.
> > >
> > > [    3.771362] sfp_register_bus: upstream ops attach
> > > [    3.776119] sfp_register_bus: Bus registered
> > > [    3.780459] sfp_register_socket: register sfp_bus succeeded
> > > [    3.789399] of_cfs_init
> > > [    3.789499] of_cfs_init: OK
> > > [    3.791685] clk: Not disabling unused clocks
> > > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> > > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010393] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > > [   11.010422] pc : simple_write_end+0xd0/0x130
> > > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> > > [   11.010438] sp : ffffffc00809b910
> > > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> > > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> > > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> > > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> > > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> > > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> > > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> > > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> > > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> > > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> > > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> > > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> > > [   11.010548] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010556] Call trace:
> > > [   11.010558]  dump_backtrace+0x0/0x1c4
> > > [   11.010567]  show_stack+0x18/0x2c
> > > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> > > [   11.010583]  dump_stack+0x18/0x34
> > > [   11.010588]  panic+0x14c/0x2f8
> > > [   11.010597]  print_tainted+0x0/0xb0
> > > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> > > [   11.010614]  do_serror+0x28/0x60
> > > [   11.010621]  el1h_64_error_handler+0x30/0x50
> > > [   11.010628]  el1h_64_error+0x78/0x7c
> > > [   11.010633]  simple_write_end+0xd0/0x130
> > > [   11.010639]  generic_perform_write+0x118/0x1e0
> > > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgMTEuMDEwNjUw
XSDCoGdlbmVyaWNfZmlsZV93cml0ZV9pdGVyKzB4NzgvMHhkMA0KPiAgICAgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgIMKgPiBbIMKgIDExLjAxMDY1Nl0gwqBfX2tlcm5lbF93cml0ZSsweGZj
LzB4MmFjDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgMTEuMDEwNjY1
XSDCoGtlcm5lbF93cml0ZSsweDg4LzB4MTYwDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqAgwqA+IFsgwqAgMTEuMDEwNjczXSDCoHh3cml0ZSsweDQ0LzB4OTQNCj4gICAgID7CoCDC
oCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCAxMS4wMTA2ODBdIMKgZG9fY29weSsweGE4LzB4
MTA0DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgMTEuMDEwNjg2XSDC
oHdyaXRlX2J1ZmZlcisweDM4LzB4NTgNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDC
oD4gWyDCoCAxMS4wMTA2OTJdIMKgZmx1c2hfYnVmZmVyKzB4NGMvMHhiYw0KPiAgICAgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIDExLjAxMDY5OF0gwqBfX2d1bnppcCsweDI4
MC8weDMxMA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIDExLjAxMDcw
NF0gwqBndW56aXArMHgxYy8weDI4DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+
IFsgwqAgMTEuMDEwNzA5XSDCoHVucGFja190b19yb290ZnMrMHgxNzAvMHgyYjANCj4gICAg
ID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCAxMS4wMTA3MTVdIMKgZG9fcG9wdWxh
dGVfcm9vdGZzKzB4ODAvMHgxNjQNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD4g
WyDCoCAxMS4wMTA3MjJdIMKgYXN5bmNfcnVuX2VudHJ5X2ZuKzB4NDgvMHgxNjQNCj4gICAg
ID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCAxMS4wMTA3MjhdIMKgcHJvY2Vzc19v
bmVfd29yaysweDFlNC8weDNhMA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPiBb
IMKgIDExLjAxMDczNl0gwqB3b3JrZXJfdGhyZWFkKzB4N2MvMHg0YzANCj4gICAgID7CoCDC
oCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD7CoCDCoCDCoCDCoD4gWyDCoCAxMS4wMTA3NDNdIMKga3RocmVhZCsweDEyMC8w
eDEzMA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIDExLjAxMDc1MF0g
wqByZXRfZnJvbV9mb3JrKzB4MTAvMHgyMA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
IMKgPiBbIMKgIDExLjAxMDc1N10gU01QOiBzdG9wcGluZyBzZWNvbmRhcnkgQ1BVcw0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIDExLjAxMDc4NF0gS2VybmVsIE9m
ZnNldDogMHgyZjYxMjAwMDAwIGZyb20gMHhmZmZmZmZjMDA4MDAwMDAwDQo+ICAgICA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgMTEuMDEwNzg4XSBQSFlTX09GRlNFVDogMHgw
DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+IFsgwqAgMTEuMDEwNzkwXSBDUFUg
ZmVhdHVyZXM6IDB4MDAwMDA0MDEsMDAwMDA4NDINCj4gICAgID7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDC
oCDCoCDCoD4gWyDCoCAxMS4wMTA3OTVdIE1lbW9yeSBMaW1pdDogbm9uZQ0KPiAgICAgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgIMKgPiBbIMKgIDExLjI3NzUwOV0gLS0tWyBlbmQgS2VybmVs
IHBhbmljIC0gbm90IHN5bmNpbmc6IEFzeW5jaHJvbm91cyBTRXJyb3IgSW50ZXJydXB0IF0t
LS0NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
> On Fri, Apr 21, 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
>> Hi Oleg,
>>
>> On 21/04/2023 14:49, Oleg Nikitenko wrote:
>>>
>>> Hello Michal,
>>>
>>> I was not able to enable earlyprintk in Xen for now.
>>> I decided to choose another way.
>>> This is Xen's command line that I found out completely:
>>>
>>> (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin
>>> bootscrub=0
>>> vwfi=native
>>> sched=null
>>> timer_slop=0
>> Yes, adding a printk() in Xen was also a good idea.
>>
>>> So you are absolutely right about a command line.
>>> Now I am going to find out why Xen did not get the correct parameters from the device
>>> tree.
>> Maybe you will find this document helpful:
>> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
>>
>> ~Michal
>>
>>> Regards,
>>> Oleg
>>>
>>> On Fri, Apr 21, 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
>>>>
>>>> On 21/04/2023 10:04, Oleg Nikitenko wrote:
>>>>>
>>>>> Hello Michal,
>>>>>
>>>>> Yes, I use Yocto.
>>>>> Yesterday all day long I tried to follow your suggestions.
>>>>> I faced a problem.
>>>>> Manually in the Xen config build file I pasted the strings:
>>>> In the .config file, or in some Yocto file (listing additional Kconfig options) added
>>>> to SRC_URI?
>>>> You shouldn't really modify the .config file, but if you do, you should execute "make
>>>> olddefconfig" afterwards.
>>>>
>>>>> CONFIG_EARLY_PRINTK
>>>>> CONFIG_EARLY_PRINTK_ZYNQMP
>>>>> CONFIG_EARLY_UART_CHOICE_CADENCE
>>>> I hope you added =y to them.
>>>>
>>>> Anyway, you have at least the following solutions:
>>>> 1) Run bitbake xen -c menuconfig to properly set early printk
>>>> 2) Find out how you enable other Kconfig options in your project (e.g.
>>>> CONFIG_COLORING=y that is not enabled by default)
>>>> 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>>>> CONFIG_EARLY_PRINTK_ZYNQMP=y
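For option 2, the usual Yocto mechanism is a Kconfig fragment shipped from your own layer via a .bbappend. The sketch below only writes such a fragment locally; the append-file and fragment names (`xen_%.bbappend`, `earlyprintk.cfg`) are hypothetical, not taken from this thread:

```shell
# A hypothetical xen_%.bbappend in your own layer would carry:
#   FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
#   SRC_URI += "file://earlyprintk.cfg"
# Generate the Kconfig fragment that the append would ship:
cat > earlyprintk.cfg <<'EOF'
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
EOF
# Note each option carries an explicit =y, as Michal points out above.
grep -c '=y$' earlyprintk.cfg   # → 3
```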
>>>>
>>>> ~Michal
>>>>
>>>>> Host hangs in build time.
>>>>> Maybe I did not set something in the config build file?
>>>>>
>>>>> Regards,
>>>>> Oleg
>>>>>
>>>>> On Thu, Apr 20, 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
>>>>>>
>>>>>> Thanks Michal,
>>>>>>
>>>>>> You gave me an idea.
>>>>>> I am going to try it today.
>>>>>>
>>>>>> Regards,
>>>>>> O.
>>>>>>
>>>>>> On Thu, Apr 20, 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com
dG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29t
Pj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBn
bWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hp
aXdvb2RAZ21haWwuY29tPj4+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWls
dG86b2xlc2hpaXdvb2RAZ21haWwuY29tPiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNv
bSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4+IDxtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPiA8bWFpbHRvOm9sZXNo
aWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4+Pj4NCj4g
ICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoDxtYWls
dG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29t
PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdt
YWlsLmNvbT4+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hp
aXdvb2RAZ21haWwuY29tPiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRv
Om9sZXNoaWl3b29kQGdtYWlsLmNvbT4+PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNv
bSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4gPG1haWx0bzpvbGVzaGlpd29vZEBn
bWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+PiA8bWFpbHRvOm9sZXNo
aWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4gPG1haWx0
bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+
Pj4+PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29k
QGdtYWlsLmNvbT4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVz
aGlpd29vZEBnbWFpbC5jb20+PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+Pj4gPG1haWx0bzpvbGVzaGlpd29v
ZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tDQo+ICAgICA8bWFpbHRvOm9sZXNo
aWl3b29kQGdtYWlsLmNvbT4+Pj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwu
Y29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVzaGlpd29v
ZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4+IDxt
YWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwu
Y29tPiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29k
QGdtYWlsLmNvbT4+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tPiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4+Pj4+Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgIMKgPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwu
Y29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVzaGlpd29v
ZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4+IDxt
YWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwu
Y29tPiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29k
QGdtYWlsLmNvbT4+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tPiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4+Pj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFp
bC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdv
b2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpv
bGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxt
YWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwu
Y29tPj4+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdv
b2RAZ21haWwuY29tPiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9s
ZXNoaWl3b29kQGdtYWlsLmNvbT4+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxt
YWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPg0KPiAgICAgPG1haWx0bzpvbGVzaGlpd29v
ZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+Pj4+PiA8bWFpbHRv
Om9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4g
PG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFp
bC5jb20+PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3
b29kQGdtYWlsLmNvbT4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpv
bGVzaGlpd29vZEBnbWFpbC5jb20+Pj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20g
PG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21h
aWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVzaGlp
d29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86
b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4+
Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFp
bC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdv
b2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpv
bGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxt
YWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21h
aWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPiA8bWFpbHRvOm9sZXNoaWl3
b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4+IDxtYWlsdG86
b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPiA8
bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWls
LmNvbT4+Pj4+Pj4+Og0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgVGhhbmtzIFN0ZWZhbm8uDQo+ICAgICA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+DQo+ICAgICA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAg
wqBJIGFtIGdvaW5nIHRvIGRvIGl0IHRvZGF5Lg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgUmVnYXJkcywNCj4g
ICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoCDCoE8uDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqDRgdGALCAxOSDQsNC/0YAuIDIwMjPigK/Qsy4g
0LIgMjM6MDUsIFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8
bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+PiA8bWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPiA8bWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwu
b3JnPj4+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRv
OnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPiA8bWFpbHRvOnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPj4+Pg0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPG1haWx0bzpz
c3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4g
PG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZz4+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+Pj4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJu
ZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1haWx0bzpzc3RhYmVs
bGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4+IDxtYWls
dG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmc+Pj4+Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPG1haWx0bzpzc3RhYmVsbGlu
aUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1haWx0bzpz
c3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4+
IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmc+Pj4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxt
YWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJu
ZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4+IDxtYWlsdG86c3N0YWJl
bGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWls
dG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmc+Pj4+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRv
OnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPiA8bWFpbHRvOnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPj4+IDxtYWlsdG86
c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+
IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmc+PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcNCj4gICAgIDxtYWls
dG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwu
b3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4+Pj4+Pg0KPiAgICAgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJu
ZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1haWx0bzpzc3RhYmVs
bGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4+IDxtYWls
dG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmc+Pj4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86
c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3Jn
IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4+IDxtYWlsdG86c3N0YWJlbGxpbmlA
a2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86c3N0
YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+Pj4+
IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmc+PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1h
aWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPj4+IDxtYWlsdG86c3N0YWJl
bGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWls
dG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmc+PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcNCj4gICAgIDxtYWlsdG86c3N0
YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxt
YWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4+Pj4+DQo+ICAgICA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA8bWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPiA8bWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPj4gPG1h
aWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZz4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxp
bmlAa2VybmVsLm9yZz4+PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0
bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPj4gPG1haWx0bzpzc3RhYmVsbGlu
aUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1haWx0bzpz
c3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4+
Pj4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlA
a2VybmVsLm9yZz4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0
YWJlbGxpbmlAa2VybmVsLm9yZz4+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8
bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+Pj4gPG1haWx0bzpzc3Rh
YmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4gPG1h
aWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZz4+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmc+DQo+ICAgICA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcg
PG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPj4+Pj4+Pj46DQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+DQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAg
wqAgwqBPbiBXZWQsIDE5IEFwciAyMDIzLCBPbGVnIE5pa2l0ZW5rbyB3cm90ZToNCj4gICAg
ID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoCDCoCDCoD4gSGkgTWljaGFsLA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPg0KPiAgICAg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgIMKgIMKgPiBJIGNvcnJlY3RlZCB4ZW4ncyBjb21tYW5kIGxpbmUuDQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqAgwqAgwqA+IE5vdyBpdCBpcw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPiB4ZW4seGVuLWJv
b3RhcmdzID0gImNvbnNvbGU9ZHR1YXJ0IGR0dWFydD1zZXJpYWwwIGRvbTBfbWVtPTE2MDBN
DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqBk
b20wX21heF92Y3B1cz0yDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqBkb20wX3ZjcHVzX3Bpbg0KPiAg
ICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgYm9vdHNjcnViPTAgdndm
aT1uYXRpdmUgc2NoZWQ9bnVsbA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPiB0aW1lcl9zbG9wPTAg
d2F5X3NpemU9NjU1MzYgeGVuX2NvbG9ycz0wLTMgZG9tMF9jb2xvcnM9NC03IjsNCj4gICAg
ID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD4NCj4gICAg
ID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoCDCoCDCoDQgY29sb3JzIGlzIHdheSB0b28gbWFueSBmb3IgeGVuLCBqdXN0IGRv
IHhlbl9jb2xvcnM9MC0wLiBUaGVyZSBpcyBubw0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgYWR2YW50
YWdlIGluIHVzaW5nIG1vcmUgdGhhbiAxIGNvbG9yIGZvciBYZW4uDQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+DQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAg
wqAgwqA0IGNvbG9ycyBpcyB0b28gZmV3IGZvciBkb20wLCBpZiB5b3UgYXJlIGdpdmluZyAx
NjAwTSBvZiBtZW1vcnkgdG8NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoERvbTAuDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqBFYWNoIGNvbG9yIGlz
IDI1Nk0uIEZvciAxNjAwTSB5b3Ugc2hvdWxkIGdpdmUgYXQgbGVhc3QgNyBjb2xvcnMuIFRy
eToNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoCDCoCDCoCDCoHhlbl9jb2xvcnM9MC0wIGRvbTBfY29sb3JzPTEtOA0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPg0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPg0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPg0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgIMKgIMKgIMKgPiBVbmZvcnR1bmF0ZWx5IHRoZSByZXN1bHQgd2FzIHRoZSBz
YW1lLg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
IMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPiAoWEVOKSDC
oC0gRG9tMCBtb2RlOiBSZWxheGVkDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+IChYRU4pIFAyTTog
NDAtYml0IElQQSB3aXRoIDQwLWJpdCBQQSBhbmQgOC1iaXQgVk1JRA0KPiAgICAgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
IMKgIMKgPiAoWEVOKSBQMk06IDMgbGV2ZWxzIHdpdGggb3JkZXItMSByb290LCBWVENSIDB4
MDAwMDAwMDA4MDAyMzU1OA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPiAoWEVOKSBTY2hlZHVsaW5n
IGdyYW51bGFyaXR5OiBjcHUsIDEgQ1BVIHBlciBzY2hlZC1yZXNvdXJjZQ0KPiAgICAgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
IMKgIMKgIMKgPiAoWEVOKSBDb2xvcmluZyBnZW5lcmFsIGluZm9ybWF0aW9uDQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqAgwqAgwqA+IChYRU4pIFdheSBzaXplOiA2NGtCDQo+ICAgICA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+
IChYRU4pIE1heC4gbnVtYmVyIG9mIGNvbG9ycyBhdmFpbGFibGU6IDE2DQo+ICAgICA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAg
wqAgwqAgwqA+IChYRU4pIFhlbiBjb2xvcihzKTogWyAwIF0NCj4gICAgID7CoCDCoCDCoD7C
oCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoCDCoCDC
oD4gKFhFTikgYWx0ZXJuYXRpdmVzOiBQYXRjaGluZyB3aXRoIGFsdCB0YWJsZSAwMDAwMDAw
MDAwMmNjNjkwIC0+DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqAwMDAwMDAwMDAwMmNjYzBjDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+IChYRU4p
IENvbG9yIGFycmF5IGFsbG9jYXRpb24gZmFpbGVkIGZvciBkb20wDQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAg
wqAgwqA+IChYRU4pDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+IChYRU4pICoqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoCDCoCDCoD4gKFhFTikg
UGFuaWMgb24gQ1BVIDA6DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+IChYRU4pIEVycm9yIGNyZWF0
aW5nIGRvbWFpbiAwDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+IChYRU4pICoqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioNCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoCDCoCDCoD4gKFhFTikN
Cj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoCDCoCDCoCDCoD4gKFhFTikgUmVib290IGluIGZpdmUgc2Vjb25kcy4uLg0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgIMKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPiBJIGFtIGdvaW5nIHRv
IGZpbmQgb3V0IGhvdyBjb21tYW5kIGxpbmUgYXJndW1lbnRzIHBhc3NlZCBhbmQgcGFyc2Vk
Lg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgIMKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPiBSZWdhcmRzLA0K
PiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgIMKgIMKgIMKgPiBPbGVnDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+DQo+ICAgICA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAg
wqAgwqAgwqAgwqA+INGB0YAsIDE5INCw0L/RgC4gMjAyM+KAr9CzLiDQsiAxMToyNSwgT2xl
ZyBOaWtpdGVua28gPG9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29k
QGdtYWlsLmNvbT4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVz
aGlpd29vZEBnbWFpbC5jb20+PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+Pj4gPG1haWx0bzpvbGVzaGlpd29v
ZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tPj4+Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVz
aGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWls
dG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4+IDxtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPiA8bWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNv
bT4+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tPiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNo
aWl3b29kQGdtYWlsLmNvbT4+Pj4+DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA8bWFpbHRvOm9sZXNo
aWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4gPG1haWx0
bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+
PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdt
YWlsLmNvbT4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlp
d29vZEBnbWFpbC5jb20+Pj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0
bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29t
IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVzaGlpd29vZEBn
bWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hp
aXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4+PiA8bWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNv
bT4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBn
bWFpbC5jb20+PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNo
aWl3b29kQGdtYWlsLmNvbT4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0
bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+Pj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVz
aGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+DQo+ICAg
ICA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdt
YWlsLmNvbT4+Pj4+Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKg
IMKgPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBn
bWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hp
aXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0
bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29t
IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4+IDxtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPiA8bWFpbHRvOm9sZXNo
aWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4+IDxtYWls
dG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29t
PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdt
YWlsLmNvbT4+Pj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVz
aGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWls
dG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPj4+IDxtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tPiA8bWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNv
bT4+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tPg0KPiAgICAgPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0
bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+Pj4+PiA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWls
LmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4gPG1haWx0bzpvbGVzaGlpd29v
ZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+PiA8bWFpbHRvOm9s
ZXNoaWl3b29kQGdtYWlsLmNvbSA8bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbT4gPG1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20+Pj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1haWx0bzpvbGVzaGlpd29v
ZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIDxtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tPj4gPG1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20gPG1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20+IDxtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwu
> Hi Michal,
>
> You put my nose into the problem. Thank you.
> I am going to use your point.
> Let's see what happens.
>
> Regards,
> Oleg
>
> On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
>> Hi Oleg,
>>
>> On 19/04/2023 09:03, Oleg Nikitenko wrote:
>>> Hello Stefano,
>>>
>>> Thanks for the clarification.
>>> My company uses Yocto for image generation.
>>> What kind of information do you need to consult me in this case?
>>> Maybe the module sizes/addresses which were mentioned by @Julien Grall
>>> <julien@xen.org>?
>>
>> Sorry for jumping into the discussion, but FWICS the Xen command line you
>> provided seems not to be the one Xen booted with. The error you are
>> observing is most likely due to the dom0 colors configuration not being
>> specified (i.e. the lack of a dom0_colors=<> parameter). Although this
>> parameter is set in the command line you provided, I strongly doubt that
>> it is the actual command line in use.
>>
>> You wrote:
>> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>>
>> but:
>> 1) way_szize has a typo
>> 2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen
>>    has only one:
>>    (XEN) Xen color(s): [ 0 ]
>>
>> This makes me believe that no colors configuration actually ends up in
>> the command line that Xen booted with. A single color for Xen is the
>> "default if not specified", and the way size was probably calculated by
>> asking the HW.
>>
>> So I would suggest first cross-checking the command line in use.
>>
>> ~Michal
>>
>>> Regards,
>>> Oleg
>>>
>>> On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini
>>> <sstabellini@kernel.org> wrote:
bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9y
Zz4gPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIDxtYWlsdG86c3N0YWJlbGxpbmlA
a2VybmVsLm9yZz4+Pj4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNz
dGFiZWxsaW5pQGtlcm5lbC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8
bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPiA8bWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPj4+IDxt
YWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmc+IDxtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyA8bWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmc+PiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmcgPG1haWx0
bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPiA8bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmcgPG1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnPj4+Pj4+Pj4+Og0KPiAgICAgPsKg
IMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKg
IMKgIMKgIMKgPsKgIMKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgT24gVHVlLCAxOCBBcHIgMjAyMywgT2xlZyBOaWtpdGVua28gd3JvdGU6DQo+
ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+IEhpIEp1bGllbiwNCj4g
ICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD4NCj4gICAgID7CoCDCoCDC
oD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD4gPj4gVGhpcyBmZWF0dXJlIGhhcyBub3QgYmVl
biBtZXJnZWQgaW4gWGVuIHVwc3RyZWFtIHlldA0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKg
IMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKg
IMKgPiA+IHdvdWxkIGFzc3VtZSB0aGF0IHVwc3RyZWFtICsgdGhlIHNlcmllcyBvbiB0aGUg
TUwgWzFdDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqB3b3JrDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+DQo+
ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+IFBsZWFzZSBjbGFyaWZ5
IHRoaXMgcG9pbnQuDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+
wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAg
wqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+
IEJlY2F1c2UgdGhlIHR3byB0aG91Z2h0cyBhcmUgY29udHJvdmVyc2lhbC4NCj4gICAgID7C
oCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7C
oCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoCDCoCDCoD7CoCDCoCDCoCDCoD4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDC
oD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoEhpIE9sZWcsDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+DQo+
ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAg
wqAgwqAgwqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqBBcyBKdWxpZW4gd3JvdGUs
IHRoZXJlIGlzIG5vdGhpbmcgY29udHJvdmVyc2lhbC4gQXMgeW91DQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqBhcmUgYXdhcmUsDQo+ICAg
ICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAg
wqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqBYaWxpbnggbWFpbnRhaW5zIGEg
c2VwYXJhdGUgWGVuIHRyZWUgc3BlY2lmaWMgZm9yIFhpbGlueA0KPiAgICAgPsKgIMKgIMKg
PsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgaGVyZToNCj4gICAgID7CoCDC
oCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoCDC
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngv
eGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+IDxodHRwczov
L2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+
IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hp
bGlueC94ZW4+Pj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dp
dGh1Yi5jb20veGlsaW54L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxo
dHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+IDxodHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1
Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+Pj4+DQo+
ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA8aHR0
cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngv
eGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuPj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczov
L2dpdGh1Yi5jb20veGlsaW54L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVu
IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+PiA8aHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9n
aXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4g
PGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1
Yi5jb20veGlsaW54L3hlbj4+Pj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8
aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0
cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngv
eGVuPj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6
Ly9naXRodWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPg0KPiAgICAgPGh0dHBzOi8vZ2l0
aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+Pj4g
PGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1
Yi5jb20veGlsaW54L3hlbj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0
cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54
L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+Pj4gPGh0dHBzOi8vZ2l0aHVi
LmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4gPGh0dHBz
Oi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hl
bj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9n
aXRodWIuY29tL3hpbGlueC94ZW4+Pj4+Pj4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDC
oCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oCDCoDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9n
aXRodWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4g
PGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hp
bGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4+IDxodHRwczovL2dp
dGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxo
dHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlu
eC94ZW4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHVi
LmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBz
Oi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlu
eC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4gPGh0dHBz
Oi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hl
bj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20v
eGlsaW54L3hlbj4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4g
PGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94
aWxpbngveGVuDQo+ICAgICA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRw
czovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW4+Pj4+Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPGh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54
L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5j
b20veGlsaW54L3hlbj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6
Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hl
biA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+Pj4gPGh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4gPGh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+
IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hp
bGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW4+Pj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8
aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0
cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngv
eGVuPj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6
Ly9naXRodWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW4NCj4gICAgIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4gPGh0dHBzOi8vZ2l0
aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+Pj4+
Pj4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hp
bGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlu
eC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4+IDxodHRwczovL2dpdGh1
Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRw
czovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuPj4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4gPGh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4g
PGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbj4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0
aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxp
bngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4NCj4gICAgIDxodHRwczov
L2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+
Pj4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuPj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVu
IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94
aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+PiA8aHR0cHM6Ly9n
aXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8
aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxp
bngveGVuPj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1
Yi5jb20veGlsaW54L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRw
czovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+Pj4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54
L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5j
b20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6
Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVu
PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94
aWxpbngveGVuPj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9n
aXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8
aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hp
bGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4+Pj4+
DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+
wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA8aHR0cHM6Ly9naXRodWIuY29tL3hp
bGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4gPGh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54
L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5j
b20veGlsaW54L3hlbj4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBz
Oi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4gPGh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4gPGh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+
Pj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20v
eGlsaW54L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dp
dGh1Yi5jb20veGlsaW54L3hlbj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8
aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+Pj4gPGh0dHBzOi8vZ2l0
aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4gPGh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54
L3hlbj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbg0KPiAgICAgPGh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4g
PGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4+Pj4NCj4gICAgID7CoCDCoCDCoD7C
oCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hl
biA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20v
eGlsaW54L3hlbiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6Ly9n
aXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8
aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxp
bngveGVuPj4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0
cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4+PiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlu
eC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4+PiA8aHR0
cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngv
eGVuPiA8aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuPj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIDxodHRwczov
L2dpdGh1Yi5jb20veGlsaW54L3hlbj4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVu
IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbj4+PiA8aHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPiA8aHR0cHM6Ly9n
aXRodWIuY29tL3hpbGlueC94ZW4gPGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPj4g
PGh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuDQo+ICAgICA8aHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW4+IDxodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiA8aHR0cHM6
Ly9naXRodWIuY29tL3hpbGlueC94ZW4+Pj4+Pj4+Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPsKg
IMKgIMKgIMKgPg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgPsKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKgIMKgIMKgPsKgIMKgIMKgIMKgPsKgIMKgIMKgYW5k
IHRoZSBicmFuY2ggeW91IGFyZSB1c2luZyAoeGxueF9yZWJhc2VfNC4xNikgY29tZXMNCj4g
ICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoGZyb20g
dGhlcmUuDQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+DQo+ICAgICA+wqAgwqAg
wqA+wqAgwqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAg
wqAgwqA+wqAgwqAgwqAgwqA+DQo+ICAgICA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqA+wqAg
wqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqAgwqA+wqAgwqAgwqA+wqAgwqAgwqAgwqAgwqAgwqAgwqA+wqAgwqAgwqAgwqA+wqAg
wqAgwqBJbnN0ZWFkLCB0aGUgdXBzdHJlYW0gWGVuIHRyZWUgbGl2ZXMgaGVyZToNCj4gICAg
ID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDC
oD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoGh0dHBzOi8veGVuYml0cy54ZW4u
b3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5v
cmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5v
cmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9y
Zy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5v
cmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9y
Zy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9y
Zy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3Jn
L2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeT4+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5v
cmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9y
Zy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9y
Zy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3Jn
L2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeT4+IDxodHRwczovL3hlbmJpdHMueGVuLm9y
Zy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3Jn
L2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeT4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3Jn
L2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcv
Z2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5Pj4+Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKg
IMKgIMKgIMKgPsKgIMKgIMKgPsKgIMKgIMKgIMKgPGh0dHBzOi8veGVuYml0cy54ZW4ub3Jn
L2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcv
Z2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcv
Z2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcv
Z2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dp
dHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeT4+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcv
Z2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dp
dHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeT4+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dp
dHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeT4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dp
dHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0
d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5Pj4+Pj4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDC
oCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDC
oCDCoCDCoD7CoCDCoCDCoCDCoDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9
eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14
ZW4uZ2l0O2E9c3VtbWFyeT4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14
ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5Pj4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14
ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVu
LmdpdDthPXN1bW1hcnk+Pj4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14
ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVu
LmdpdDthPXN1bW1hcnk+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVu
LmdpdDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVu
LmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4u
Z2l0O2E9c3VtbWFyeT4+Pj4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDC
oCDCoD7CoCDCoCDCoCDCoDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVu
LmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4u
Z2l0O2E9c3VtbWFyeT4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4u
Z2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5n
aXQ7YT1zdW1tYXJ5Pj4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4u
Z2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5n
aXQ7YT1zdW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5n
aXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdp
dDthPXN1bW1hcnk+Pj4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4u
Z2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5n
aXQ7YT1zdW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5n
aXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdp
dDthPXN1bW1hcnk+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5n
aXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdp
dDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdp
dDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0
O2E9c3VtbWFyeT4+Pj4+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5DQo+ICAgICA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2Vi
Lz9wPXhlbi5naXQ7YT1zdW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2Vi
Lz9wPXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnk+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2Vi
Lz9wPXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeT4+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2Vi
Lz9wPXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeT4+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeT4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9w
PXhlbi5naXQ7YT1zdW1tYXJ5Pj4+Pg0KPiAgICAgPsKgIMKgIMKgPsKgIMKgIMKgIMKgIMKg
PsKgIMKgIMKgPsKgIMKgIMKgIMKgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9w
PXhlbi5naXQ7YT1zdW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9w
PXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9
eGVuLmdpdDthPXN1bW1hcnk+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9w
PXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9
eGVuLmdpdDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9
eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14
ZW4uZ2l0O2E9c3VtbWFyeT4+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9w
PXhlbi5naXQ7YT1zdW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9
eGVuLmdpdDthPXN1bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9
eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14
ZW4uZ2l0O2E9c3VtbWFyeT4+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9
eGVuLmdpdDthPXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14
ZW4uZ2l0O2E9c3VtbWFyeT4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14
ZW4uZ2l0O2E9c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhl
bi5naXQ7YT1zdW1tYXJ5Pj4+Pj4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7C
oCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDCoCDCoCDCoD7CoCDCoCDCoCDCoD7C
oCDCoCDCoCDCoDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDth
PXN1bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeT4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5Pj4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1
bW1hcnk+Pj4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1
bW1hcnk+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1
bW1hcnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1
bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3Vt
bWFyeT4+Pj4NCj4gICAgID7CoCDCoCDCoD7CoCDCoCDCoCDCoCDCoD7CoCDCoCDCoD7CoCDC
oCDCoCDCoDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1
bW1hcnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3Vt
bWFyeT4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3Vt
bWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1t
YXJ5Pj4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3Vt
bWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1t
YXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1t
YXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1h
cnk+Pj4gPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3Vt
bWFyeSA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1t
YXJ5PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1t
YXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1h
cnk+PiA8aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1t
YXJ5IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1h
cnk+IDxodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1h
cnkgPGh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFy
>     <https://xenbits.xen.org/gitweb/?p=xen.git;a=summary>
>
>     The Cache Coloring feature that you are trying to configure is
>     present in xlnx_rebase_4.16, but not yet present upstream (there
>     is an outstanding patch series to add cache coloring to Xen
>     upstream but it hasn't been merged yet.)
>
>     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too
>     much for you as you already have Cache Coloring as a feature
>     there.
>
>     I take you are using ImageBuilder to generate the boot
>     configuration? If so, please post the ImageBuilder config file
>     that you are using.
>
>     But from the boot message, it looks like the colors configuration
>     for Dom0 is incorrect.


From xen-devel-bounces@lists.xenproject.org Tue May 16 18:46:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 18:46:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535450.833158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzgd-0008To-Gi; Tue, 16 May 2023 18:46:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535450.833158; Tue, 16 May 2023 18:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzgd-0008Th-Cn; Tue, 16 May 2023 18:46:15 +0000
Received: by outflank-mailman (input) for mailman id 535450;
 Tue, 16 May 2023 18:46:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PFhh=BF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pyzgb-0008Tb-Ac
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 18:46:13 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eb40b857-f419-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 20:46:08 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 3E74B63DDE;
 Tue, 16 May 2023 18:46:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id F03A2C433EF;
 Tue, 16 May 2023 18:46:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb40b857-f419-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684262766;
	bh=1Y2StiZOojJk/Xg/VTyj/LVT3Er3HprFlZgifKkAtW8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ItI/DVGnVZCAb9XdzTBz+uWME/BrzEyL9EyDhdDpdOYrRQ7zvLjCt2z9JZiiNsAHb
	 46X6Lwpd6cnW53kBEQzbdX/CEZp2r2OJz4ozZpL0X+rp1ue5QEhBl3ACk0hTGjBvGL
	 d+UO6f8nMS3tOI5L+lzVXBwKmofix/9P4jK3Wn3ftgXMR9eNW04VHfP6I+DPSMph/J
	 k33g3keARRuLXdqp5gv557y0Yf5c5TWtWdqdGfM8yHuWFKAnzdFiEIgSnkE985jwxT
	 58P2Tq6rr2PPhjYvbAP+fkGAlP8WhDkjGaDg2i3df4dXqdLNCEeHUrM0z6YTu/df/K
	 s3qbALaeD4yQg==
Date: Tue, 16 May 2023 11:46:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Olaf Hering <olaf@aepfle.de>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v1] automation: provide example for downloading an existing
 container
In-Reply-To: <20230516105155.0c59143a@sender>
Message-ID: <alpine.DEB.2.22.394.2305161145540.62578@ubuntu-linux-20-04-desktop>
References: <20230502201444.6532-1-olaf@aepfle.de> <alpine.DEB.2.22.394.2305151533320.4125828@ubuntu-linux-20-04-desktop> <20230516105155.0c59143a@sender>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 16 May 2023, Olaf Hering wrote:
> Am Mon, 15 May 2023 16:03:01 -0700 (PDT)
> schrieb Stefano Stabellini <sstabellini@kernel.org>:
> 
> > Given that opensuse-tumbleweed is still broken (doesn't complete the
> > Xen build successfully) even after these patches, I suggest we use a
> > different example?
> 
> I think the example in automation/build/README.md needs to be fixed.
> Right now it does something different than the Gitlab CI.
> 
> The CI runs automation/scripts/build with some environment variables
> set, the example runs automation/scripts/containerize.
 
I think you have a point that automation/build/README.md should also
describe how to do what gitlab-ci does but locally (i.e. call
automation/scripts/build). It should not only describe
automation/scripts/containerize.

 
> For me qemu-xen builds. I assume it is supposed to be master ==
> "8c51cd9705 (HEAD -> dummy, origin/staging, origin/master, origin/HEAD,
> master) hw/xen/xen_pt: fix uninitialized variable", but we do not know
> what the CI tests, because scripts/git-checkout.sh does not show what
> HEAD actually is. I think it needs to run "$GIT --no-pager log
> --oneline -n1" at the end, so everyone knows what 'dummy' actually is.

Gitlab-ci only runs automation/scripts/build which builds QEMU as part
of a regular Xen build.

If you want to see details of a failure:
https://gitlab.com/xen-project/xen/-/jobs/4284741849

---
In file included from /builds/xen-project/xen/tools/qemu-xen-dir-remote/include/qemu/coroutine.h:18,
                 from /builds/xen-project/xen/tools/qemu-xen-dir-remote/include/block/aio.h:20,
                 from ../qemu-xen-dir-remote/util/async.c:28:
../qemu-xen-dir-remote/util/async.c: In function 'aio_bh_poll':
/builds/xen-project/xen/tools/qemu-xen-dir-remote/include/qemu/queue.h:303:22: error: storing the address of local variable 'slice' in '*ctx.bh_slice_list.sqh_last' [-Werror=dangling-pointer=]
  303 |     (head)->sqh_last = &(elm)->field.sqe_next;                          \
      |     ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~
../qemu-xen-dir-remote/util/async.c:161:5: note: in expansion of macro 'QSIMPLEQ_INSERT_TAIL'
  161 |     QSIMPLEQ_INSERT_TAIL(&ctx->bh_slice_list, &slice, next);
      |     ^~~~~~~~~~~~~~~~~~~~
../qemu-xen-dir-remote/util/async.c:156:17: note: 'slice' declared here
  156 |     BHListSlice slice;
      |                 ^~~~~
../qemu-xen-dir-remote/util/async.c:154:29: note: 'ctx' declared here
  154 | int aio_bh_poll(AioContext *ctx)
      |                 ~~~~~~~~~~~~^~~
cc1: all warnings being treated as errors
---

> I think it is perfectly fine that both examples refer to Tumbleweed,
> because one may need to fix future build errors, not test on something
> from which we already know that it works.

Sure, I see your point. On the other hand, the Tumbleweed jobs are the
only ones with "allow_failure". So, among all the possible choices of
example, do we really need to pick the only one that has been failing
for months? :-)


From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535455.833177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzwq-0002jr-25; Tue, 16 May 2023 19:03:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535455.833177; Tue, 16 May 2023 19:03:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzwp-0002jk-Vh; Tue, 16 May 2023 19:02:59 +0000
Received: by outflank-mailman (input) for mailman id 535455;
 Tue, 16 May 2023 19:02:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzwo-0002eu-Nd
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:02:58 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 43463480-f41c-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:02:56 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-96-VQz7vN_JPiauZfBJGMjQqA-1; Tue, 16 May 2023 15:02:50 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id AD23E857DF7;
 Tue, 16 May 2023 19:02:48 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 02BBC2026D16;
 Tue, 16 May 2023 19:02:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43463480-f41c-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263774;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=aGfiJ3QdLqEJFIVBe0dFRaZLYBYa5r932VA0IFYrkjY=;
	b=WwbE7jbSPTZF+KABzIdyjc0RaOdpMsX+NvPTPXPubVcvr8iIwP/XltbZkqda6a77jIF7Jm
	Kw4hOxuEzNLP77AeGv3CicrKwULoLYG03apyKAyixP0dN87HU7nNye90y1RjUMQlgnRF1J
	cqlje3UhNe/nIP3qAZXq4Qn+/03bIQk=
X-MC-Unique: VQz7vN_JPiauZfBJGMjQqA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 01/20] block-backend: split blk_do_set_aio_context()
Date: Tue, 16 May 2023 15:02:19 -0400
Message-Id: <20230516190238.8401-2-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

blk_set_aio_context() is not fully transactional because
blk_do_set_aio_context() updates blk->ctx outside the transaction. Most
of the time this goes unnoticed but a BlockDevOps.drained_end() callback
that invokes blk_get_aio_context() fails assert(ctx == blk->ctx). This
happens because blk->ctx is only assigned after
BlockDevOps.drained_end() is called and we're in an intermediate state
where BlockDriverState nodes already have the new context and the
BlockBackend still has the old context.

Making blk_set_aio_context() fully transactional solves this assertion
failure because the BlockBackend's context is updated as part of the
transaction (before BlockDevOps.drained_end() is called).

Split blk_do_set_aio_context() in order to solve this assertion failure.
This helper function actually serves two different purposes:
1. It drives blk_set_aio_context().
2. It responds to BdrvChildClass->change_aio_ctx().

Get rid of the helper function. Do #1 inside blk_set_aio_context() and
do #2 inside blk_root_set_aio_ctx_commit(). This simplifies the code.

The only drawback of the fully transactional approach is that
blk_set_aio_context() must contend with blk_root_set_aio_ctx_commit()
being invoked as part of the AioContext change propagation. This can be
solved by temporarily setting blk->allow_aio_context_change to true.

Future patches call blk_get_aio_context() from
BlockDevOps->drained_end(), so this patch will become necessary.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 block/block-backend.c | 71 +++++++++++++++++--------------------------
 1 file changed, 28 insertions(+), 43 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index ca537cd0ad..68087437ac 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -2411,52 +2411,31 @@ static AioContext *blk_aiocb_get_aio_context(BlockAIOCB *acb)
     return blk_get_aio_context(blk_acb->blk);
 }
 
-static int blk_do_set_aio_context(BlockBackend *blk, AioContext *new_context,
-                                  bool update_root_node, Error **errp)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
-    int ret;
-
-    if (bs) {
-        bdrv_ref(bs);
-
-        if (update_root_node) {
-            /*
-             * update_root_node MUST be false for blk_root_set_aio_ctx_commit(),
-             * as we are already in the commit function of a transaction.
-             */
-            ret = bdrv_try_change_aio_context(bs, new_context, blk->root, errp);
-            if (ret < 0) {
-                bdrv_unref(bs);
-                return ret;
-            }
-        }
-        /*
-         * Make blk->ctx consistent with the root node before we invoke any
-         * other operations like drain that might inquire blk->ctx
-         */
-        blk->ctx = new_context;
-        if (tgm->throttle_state) {
-            bdrv_drained_begin(bs);
-            throttle_group_detach_aio_context(tgm);
-            throttle_group_attach_aio_context(tgm, new_context);
-            bdrv_drained_end(bs);
-        }
-
-        bdrv_unref(bs);
-    } else {
-        blk->ctx = new_context;
-    }
-
-    return 0;
-}
-
 int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
                         Error **errp)
 {
+    bool old_allow_change;
+    BlockDriverState *bs = blk_bs(blk);
+    int ret;
+
     GLOBAL_STATE_CODE();
-    return blk_do_set_aio_context(blk, new_context, true, errp);
+
+    if (!bs) {
+        blk->ctx = new_context;
+        return 0;
+    }
+
+    bdrv_ref(bs);
+
+    old_allow_change = blk->allow_aio_context_change;
+    blk->allow_aio_context_change = true;
+
+    ret = bdrv_try_change_aio_context(bs, new_context, NULL, errp);
+
+    blk->allow_aio_context_change = old_allow_change;
+
+    bdrv_unref(bs);
+    return ret;
 }
 
 typedef struct BdrvStateBlkRootContext {
@@ -2468,8 +2447,14 @@ static void blk_root_set_aio_ctx_commit(void *opaque)
 {
     BdrvStateBlkRootContext *s = opaque;
     BlockBackend *blk = s->blk;
+    AioContext *new_context = s->new_ctx;
+    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
 
-    blk_do_set_aio_context(blk, s->new_ctx, false, &error_abort);
+    blk->ctx = new_context;
+    if (tgm->throttle_state) {
+        throttle_group_detach_aio_context(tgm);
+        throttle_group_attach_aio_context(tgm, new_context);
+    }
 }
 
 static TransactionActionDrv set_blk_root_context = {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535456.833188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzwt-00031b-B5; Tue, 16 May 2023 19:03:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535456.833188; Tue, 16 May 2023 19:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzwt-00031R-7n; Tue, 16 May 2023 19:03:03 +0000
Received: by outflank-mailman (input) for mailman id 535456;
 Tue, 16 May 2023 19:03:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzws-0002US-CU
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:02 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4699fd0f-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:00 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-392-bwAvUUZhPMqJaK6CQaIk6A-1; Tue, 16 May 2023 15:02:54 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id BEB543C0BE32;
 Tue, 16 May 2023 19:02:51 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 246F840C6EC4;
 Tue, 16 May 2023 19:02:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4699fd0f-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263779;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eWBF29RQlo3PxqGi5ARBdz9mjgsC7oCE+g384OxkQik=;
	b=VOvraVyB+bdgJf0piH3fnCVkPy5nfh7XTuZrZD2kLQ2hVAoYmRLJKV1Vr94t7sHfXZKGMC
	EwJwLp2PRPLSqmMNfoE9u1CctwpKfcp4MKnNtTWAEjcZV9uP9uUtuUae89fNoP1caH08e5
	1aEm6XAqr306+t/cvN3VqgQ8QJTtIG4=
X-MC-Unique: bwAvUUZhPMqJaK6CQaIk6A-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 02/20] hw/qdev: introduce qdev_is_realized() helper
Date: Tue, 16 May 2023 15:02:20 -0400
Message-Id: <20230516190238.8401-3-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

Add a helper function to check whether the device is realized without
requiring the Big QEMU Lock. The next patch adds a second caller. The
goal is to avoid spreading DeviceState field accesses throughout the
code.
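The acquire/release pairing behind the helper can be shown with a minimal C11 sketch. This is a toy model, not QEMU code: `ToyDeviceState` is a hypothetical stand-in for `DeviceState`, and plain C11 atomics stand in for QEMU's `qatomic_load_acquire()` wrapper.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-in for DeviceState: only the field we care about. */
typedef struct {
    atomic_bool realized;
} ToyDeviceState;

/* Reader side (may run outside the Big QEMU Lock): an acquire load
 * guarantees that once true is observed, every store the realizing
 * thread made before its release store is visible as well. */
static bool toy_qdev_is_realized(ToyDeviceState *dev)
{
    return atomic_load_explicit(&dev->realized, memory_order_acquire);
}

/* Writer side (realize path): publish the fully constructed device
 * with a release store. */
static void toy_qdev_set_realized(ToyDeviceState *dev, bool value)
{
    atomic_store_explicit(&dev->realized, value, memory_order_release);
}
```

Wrapping the load in a named helper, as the patch does, keeps the memory-ordering detail in one place instead of at every call site.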

Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 include/hw/qdev-core.h | 17 ++++++++++++++---
 hw/scsi/scsi-bus.c     |  3 +--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
index 7623703943..f1070d6dc7 100644
--- a/include/hw/qdev-core.h
+++ b/include/hw/qdev-core.h
@@ -1,6 +1,7 @@
 #ifndef QDEV_CORE_H
 #define QDEV_CORE_H
 
+#include "qemu/atomic.h"
 #include "qemu/queue.h"
 #include "qemu/bitmap.h"
 #include "qemu/rcu.h"
@@ -168,9 +169,6 @@ typedef struct {
 
 /**
  * DeviceState:
- * @realized: Indicates whether the device has been fully constructed.
- *            When accessed outside big qemu lock, must be accessed with
- *            qatomic_load_acquire()
  * @reset: ResettableState for the device; handled by Resettable interface.
  *
  * This structure should not be accessed directly.  We declare it here
@@ -339,6 +337,19 @@ DeviceState *qdev_new(const char *name);
  */
 DeviceState *qdev_try_new(const char *name);
 
+/**
+ * qdev_is_realized:
+ * @dev: The device to check.
+ *
+ * May be called outside big qemu lock.
+ *
+ * Returns: %true if the device has been fully constructed, %false otherwise.
+ */
+static inline bool qdev_is_realized(DeviceState *dev)
+{
+    return qatomic_load_acquire(&dev->realized);
+}
+
 /**
  * qdev_realize: Realize @dev.
  * @dev: device to realize
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 3c20b47ad0..8857ff41f6 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -60,8 +60,7 @@ static SCSIDevice *do_scsi_device_find(SCSIBus *bus,
      * the user access the device.
      */
 
-    if (retval && !include_unrealized &&
-        !qatomic_load_acquire(&retval->qdev.realized)) {
+    if (retval && !include_unrealized && !qdev_is_realized(&retval->qdev)) {
         retval = NULL;
     }
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535454.833168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzwm-0002Ug-SK; Tue, 16 May 2023 19:02:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535454.833168; Tue, 16 May 2023 19:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzwm-0002UZ-Od; Tue, 16 May 2023 19:02:56 +0000
Received: by outflank-mailman (input) for mailman id 535454;
 Tue, 16 May 2023 19:02:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzwl-0002US-Hz
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:02:55 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 411fffb8-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:02:52 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-170-nJeA-3YaMay_1Lj1RYmoHw-1; Tue, 16 May 2023 15:02:47 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 3E45586C60F;
 Tue, 16 May 2023 19:02:46 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B453C4021D9;
 Tue, 16 May 2023 19:02:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 411fffb8-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263770;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=XSavnLJy34agtHvxXyiVShndCbcr4jJazu6AYV1BQac=;
	b=TyQL9RBAGHXzgu7zC7j7phD1K0+mrRdy+VNiYtVy9so8RdpAMVbMj9n38qnjd2+h0YqfBK
	0jamgNj3Cu/CuwKb2erCf/iiabthqm9draWkDEB8NT1z3FDr9Z+yOnf4Or7Tfny7KknTys
	1crt6SZfDmB4r5sKmIXhupk2JcLJYuc=
X-MC-Unique: nJeA-3YaMay_1Lj1RYmoHw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 00/20] block: remove aio_disable_external() API
Date: Tue, 16 May 2023 15:02:18 -0400
Message-Id: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

v6:
- Fix scsi_device_unrealize() -> scsi_qdev_unrealize() mistake in Patch 4
  commit description [Kevin]
- Explain why we don't schedule a BH in .drained_begin() in Patch 16 [Kevin]
- Copy the comment explaining why the event notifier is tested and cleared in
  Patch 16 [Kevin]
- Fix EPOLL_ENABLE_THRESHOLD mismerge in util/fdmon-epoll.c [Kevin]

v5:
- Use atomic accesses for in_flight counter in vhost-user-server.c [Kevin]
- Stash SCSIDevice id/lun values for VIRTIO_SCSI_T_TRANSPORT_RESET event
  before unrealizing the SCSIDevice [Kevin]
- Keep vhost-user-blk export .detach() callback so ctx is set to NULL [Kevin]
- Narrow BdrvChildClass and BlockDriver drained_{begin/end/poll} callbacks from
  IO_OR_GS_CODE() to GLOBAL_STATE_CODE() [Kevin]
- Include Kevin's "block: Fix use after free in blockdev_mark_auto_del()" to
  fix a latent bug that was exposed by this series

v4:
- Remove external_disable_cnt variable [Philippe]
- Add Patch 1 to fix assertion failure in .drained_end() -> blk_get_aio_context()

v3:
- Resend full patch series. v2 was sent in the middle of a git rebase and was
  missing patches. [Eric]
- Apply Reviewed-by tags.
v2:
- Do not rely on BlockBackend request queuing, implement .drained_begin/end()
  instead in xen-block, virtio-blk, and virtio-scsi [Paolo]
- Add qdev_is_realized() API [Philippe]
- Add patch to avoid AioContext lock around blk_exp_ref/unref() [Paolo]
- Add patch to call .drained_begin/end() from main loop thread to simplify
  callback implementations

The aio_disable_external() API temporarily suspends file descriptor monitoring
in the event loop. The block layer uses this to prevent new I/O requests being
submitted from the guest and elsewhere between bdrv_drained_begin() and
bdrv_drained_end().

While the block layer still needs to prevent new I/O requests in drained
sections, the aio_disable_external() API can be replaced with
.drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
BlockDevOps.

This newer .drained_begin/end/poll() approach is attractive because it works
without specifying a specific AioContext. The block layer is moving towards
multi-queue and that means multiple AioContexts may be processing I/O
simultaneously.

The aio_disable_external() API was always somewhat hacky. It suspends all file
descriptors that were registered with is_external=true, even if they have
nothing to do with the BlockDriverState graph nodes that are being drained.
It's better to solve a block layer problem in the block layer than to have an
odd event loop API solution.

The approach in this patch series is to implement BlockDevOps
.drained_begin/end() callbacks that temporarily stop file descriptor handlers.
This ensures that new I/O requests are not submitted in drained sections.
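As a rough illustration of the drained-section idea (a toy model, not QEMU code: the struct and function names here are invented), the pattern the .drained_begin/end() callbacks implement is a quiesce counter that request handlers consult before accepting new work:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a drained section: just the counting pattern that
 * BlockDevOps .drained_begin/.drained_end implement in the series. */
typedef struct {
    int quiesce_counter;   /* >0 means inside a drained section */
    int requests_accepted; /* requests that got through */
} ToyDevice;

static void toy_drained_begin(ToyDevice *dev)
{
    dev->quiesce_counter++; /* nested drained sections stack */
}

static void toy_drained_end(ToyDevice *dev)
{
    assert(dev->quiesce_counter > 0);
    dev->quiesce_counter--;
}

/* A handler refuses new work while drained, mimicking how the series
 * stops fd handlers instead of calling aio_disable_external(). */
static bool toy_submit_request(ToyDevice *dev)
{
    if (dev->quiesce_counter > 0) {
        return false; /* no new I/O inside a drained section */
    }
    dev->requests_accepted++;
    return true;
}
```

Because the counter lives in the device rather than the event loop, only the drained device stops accepting requests; unrelated file descriptors in the same AioContext keep running, which is exactly what aio_disable_external() could not provide.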

Stefan Hajnoczi (20):
  block-backend: split blk_do_set_aio_context()
  hw/qdev: introduce qdev_is_realized() helper
  virtio-scsi: avoid race between unplug and transport event
  virtio-scsi: stop using aio_disable_external() during unplug
  util/vhost-user-server: rename refcount to in_flight counter
  block/export: wait for vhost-user-blk requests when draining
  block/export: stop using is_external in vhost-user-blk server
  hw/xen: do not use aio_set_fd_handler(is_external=true) in
    xen_xenstore
  block: add blk_in_drain() API
  block: drain from main loop thread in bdrv_co_yield_to_drain()
  xen-block: implement BlockDevOps->drained_begin()
  hw/xen: do not set is_external=true on evtchn fds
  block/export: rewrite vduse-blk drain code
  block/export: don't require AioContext lock around blk_exp_ref/unref()
  block/fuse: do not set is_external=true on FUSE fd
  virtio: make it possible to detach host notifier from any thread
  virtio-blk: implement BlockDevOps->drained_begin()
  virtio-scsi: implement BlockDevOps->drained_begin()
  virtio: do not set is_external=true on host notifiers
  aio: remove aio_disable_external() API

 hw/block/dataplane/xen-block.h              |   2 +
 include/block/aio.h                         |  57 ---------
 include/block/block_int-common.h            |  90 +++++++-------
 include/block/export.h                      |   2 +
 include/hw/qdev-core.h                      |  17 ++-
 include/hw/scsi/scsi.h                      |  14 +++
 include/qemu/vhost-user-server.h            |   8 +-
 include/sysemu/block-backend-common.h       |  25 ++--
 include/sysemu/block-backend-global-state.h |   1 +
 util/aio-posix.h                            |   1 -
 block.c                                     |   7 --
 block/blkio.c                               |  15 +--
 block/block-backend.c                       |  78 ++++++------
 block/curl.c                                |  10 +-
 block/export/export.c                       |  13 +-
 block/export/fuse.c                         |  56 ++++++++-
 block/export/vduse-blk.c                    | 128 ++++++++++++++------
 block/export/vhost-user-blk-server.c        |  52 +++++++-
 block/io.c                                  |  16 ++-
 block/io_uring.c                            |   4 +-
 block/iscsi.c                               |   3 +-
 block/linux-aio.c                           |   4 +-
 block/nfs.c                                 |   5 +-
 block/nvme.c                                |   8 +-
 block/ssh.c                                 |   4 +-
 block/win32-aio.c                           |   6 +-
 hw/block/dataplane/virtio-blk.c             |  23 +++-
 hw/block/dataplane/xen-block.c              |  42 +++++--
 hw/block/virtio-blk.c                       |  38 +++++-
 hw/block/xen-block.c                        |  24 +++-
 hw/i386/kvm/xen_xenstore.c                  |   2 +-
 hw/scsi/scsi-bus.c                          |  46 ++++++-
 hw/scsi/scsi-disk.c                         |  27 ++++-
 hw/scsi/virtio-scsi-dataplane.c             |  32 +++--
 hw/scsi/virtio-scsi.c                       | 127 ++++++++++++++-----
 hw/virtio/virtio.c                          |   9 +-
 hw/xen/xen-bus.c                            |  11 +-
 io/channel-command.c                        |   6 +-
 io/channel-file.c                           |   3 +-
 io/channel-socket.c                         |   3 +-
 migration/rdma.c                            |  16 +--
 tests/unit/test-aio.c                       |  27 +----
 tests/unit/test-bdrv-drain.c                |  15 +--
 tests/unit/test-fdmon-epoll.c               |  73 -----------
 util/aio-posix.c                            |  20 +--
 util/aio-win32.c                            |   8 +-
 util/async.c                                |   3 +-
 util/fdmon-epoll.c                          |  10 --
 util/fdmon-io_uring.c                       |   8 +-
 util/fdmon-poll.c                           |   3 +-
 util/main-loop.c                            |   7 +-
 util/qemu-coroutine-io.c                    |   7 +-
 util/vhost-user-server.c                    |  33 ++---
 hw/scsi/trace-events                        |   2 +
 tests/unit/meson.build                      |   3 -
 55 files changed, 725 insertions(+), 529 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535457.833198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzww-0003Kc-NK; Tue, 16 May 2023 19:03:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535457.833198; Tue, 16 May 2023 19:03:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzww-0003KT-Jo; Tue, 16 May 2023 19:03:06 +0000
Received: by outflank-mailman (input) for mailman id 535457;
 Tue, 16 May 2023 19:03:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzwv-0002US-1z
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:05 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 47f91588-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:03 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-584-ZScnqIcZMPy3U2EgdpHnTw-1; Tue, 16 May 2023 15:02:56 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 725C286C614;
 Tue, 16 May 2023 19:02:55 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C6A3A492B00;
 Tue, 16 May 2023 19:02:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47f91588-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263782;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=U+xP3+VvbKptobyZr1DGqYiArlGHMgmGr4twrwXIfR0=;
	b=cCtxVferhgqNV03zDDb6ITSy08KyDlcBF96EQWwebrcMuFZ3PQnmkk4bKNHtBcebBPcTJi
	cMZFwgpzDzF0Qnj1Osyt8F+8f9dc7QJ4RMKvKSoPj0GC5LYQth1i/K9uuyVjVwcYl8UIaJ
	fPIVMpcVV30H0JoooekDHiHDUg0ivhE=
X-MC-Unique: ZScnqIcZMPy3U2EgdpHnTw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v6 03/20] virtio-scsi: avoid race between unplug and transport event
Date: Tue, 16 May 2023 15:02:21 -0400
Message-Id: <20230516190238.8401-4-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

Only report a transport reset event to the guest after the SCSIDevice
has been unrealized by qdev_simple_device_unplug_cb().

qdev_simple_device_unplug_cb() sets the SCSIDevice's qdev.realized field
to false so that scsi_device_find/get() no longer see it.

scsi_target_emulate_report_luns() also needs to be updated to filter out
SCSIDevices that are unrealized.

Change virtio_scsi_push_event() to take event information as an argument
instead of the SCSIDevice. This allows virtio_scsi_hotunplug() to emit a
VIRTIO_SCSI_T_TRANSPORT_RESET event after the SCSIDevice has already
been unrealized.

These changes ensure that the guest driver does not see the SCSIDevice
that's being unplugged if it responds very quickly to the transport
reset event.
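The ordering fix boils down to a capture-before-teardown pattern: copy the id/lun into an event-info struct while the device is still valid, unrealize the device, then emit the event from the copy. A minimal C sketch (hypothetical `Toy*` types, not the real SCSIDevice/VirtIOSCSIEventInfo):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-ins: only the ordering matters here. */
typedef struct {
    uint32_t id;
    uint32_t lun;
} ToyDevice;

typedef struct {
    uint32_t event; /* e.g. a TRANSPORT_RESET code */
    uint32_t id;
    uint32_t lun;
} ToyEventInfo;

/* Stash the address fields by value while dev is still alive, so the
 * event can be pushed after the device has been unrealized/freed. */
static ToyEventInfo toy_stash_reset_event(const ToyDevice *dev)
{
    return (ToyEventInfo){ .event = 1, .id = dev->id, .lun = dev->lun };
}
```

A caller would build the `ToyEventInfo` first, then free the device, and finally report the event from the stashed copy; dereferencing the device after unplug is never needed.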

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
v5:
- Stash SCSIDevice id/lun values for VIRTIO_SCSI_T_TRANSPORT_RESET event
  before unrealizing the SCSIDevice [Kevin]
---
 hw/scsi/scsi-bus.c    |  3 +-
 hw/scsi/virtio-scsi.c | 86 ++++++++++++++++++++++++++++++-------------
 2 files changed, 63 insertions(+), 26 deletions(-)

diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 8857ff41f6..64013c8a24 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -487,7 +487,8 @@ static bool scsi_target_emulate_report_luns(SCSITargetReq *r)
             DeviceState *qdev = kid->child;
             SCSIDevice *dev = SCSI_DEVICE(qdev);
 
-            if (dev->channel == channel && dev->id == id && dev->lun != 0) {
+            if (dev->channel == channel && dev->id == id && dev->lun != 0 &&
+                qdev_is_realized(&dev->qdev)) {
                 store_lun(tmp, dev->lun);
                 g_byte_array_append(buf, tmp, 8);
                 len += 8;
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..ae314af3de 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -933,13 +933,27 @@ static void virtio_scsi_reset(VirtIODevice *vdev)
     s->events_dropped = false;
 }
 
-static void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
-                                   uint32_t event, uint32_t reason)
+typedef struct {
+    uint32_t event;
+    uint32_t reason;
+    union {
+        /* Used by messages specific to a device */
+        struct {
+            uint32_t id;
+            uint32_t lun;
+        } address;
+    };
+} VirtIOSCSIEventInfo;
+
+static void virtio_scsi_push_event(VirtIOSCSI *s,
+                                   const VirtIOSCSIEventInfo *info)
 {
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
     VirtIOSCSIReq *req;
     VirtIOSCSIEvent *evt;
     VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t event = info->event;
+    uint32_t reason = info->reason;
 
     if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
         return;
@@ -965,27 +979,28 @@ static void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
     memset(evt, 0, sizeof(VirtIOSCSIEvent));
     evt->event = virtio_tswap32(vdev, event);
     evt->reason = virtio_tswap32(vdev, reason);
-    if (!dev) {
-        assert(event == VIRTIO_SCSI_T_EVENTS_MISSED);
-    } else {
+    if (event != VIRTIO_SCSI_T_EVENTS_MISSED) {
         evt->lun[0] = 1;
-        evt->lun[1] = dev->id;
+        evt->lun[1] = info->address.id;
 
         /* Linux wants us to keep the same encoding we use for REPORT LUNS.  */
-        if (dev->lun >= 256) {
-            evt->lun[2] = (dev->lun >> 8) | 0x40;
+        if (info->address.lun >= 256) {
+            evt->lun[2] = (info->address.lun >> 8) | 0x40;
         }
-        evt->lun[3] = dev->lun & 0xFF;
+        evt->lun[3] = info->address.lun & 0xFF;
     }
     trace_virtio_scsi_event(virtio_scsi_get_lun(evt->lun), event, reason);
-     
+
     virtio_scsi_complete_req(req);
 }
 
 static void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
 {
     if (s->events_dropped) {
-        virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
+        VirtIOSCSIEventInfo info = {
+            .event = VIRTIO_SCSI_T_NO_EVENT,
+        };
+        virtio_scsi_push_event(s, &info);
     }
 }
 
@@ -1009,9 +1024,17 @@ static void virtio_scsi_change(SCSIBus *bus, SCSIDevice *dev, SCSISense sense)
 
     if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_CHANGE) &&
         dev->type != TYPE_ROM) {
+        VirtIOSCSIEventInfo info = {
+            .event   = VIRTIO_SCSI_T_PARAM_CHANGE,
+            .reason  = sense.asc | (sense.ascq << 8),
+            .address = {
+                .id  = dev->id,
+                .lun = dev->lun,
+            },
+        };
+
         virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, dev, VIRTIO_SCSI_T_PARAM_CHANGE,
-                               sense.asc | (sense.ascq << 8));
+        virtio_scsi_push_event(s, &info);
         virtio_scsi_release(s);
     }
 }
@@ -1046,10 +1069,17 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     }
 
     if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        VirtIOSCSIEventInfo info = {
+            .event   = VIRTIO_SCSI_T_TRANSPORT_RESET,
+            .reason  = VIRTIO_SCSI_EVT_RESET_RESCAN,
+            .address = {
+                .id  = sd->id,
+                .lun = sd->lun,
+            },
+        };
+
         virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_RESCAN);
+        virtio_scsi_push_event(s, &info);
         scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
         virtio_scsi_release(s);
     }
@@ -1062,15 +1092,14 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
     AioContext *ctx = s->ctx ?: qemu_get_aio_context();
-
-    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
-        virtio_scsi_acquire(s);
-        virtio_scsi_push_event(s, sd,
-                               VIRTIO_SCSI_T_TRANSPORT_RESET,
-                               VIRTIO_SCSI_EVT_RESET_REMOVED);
-        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
-        virtio_scsi_release(s);
-    }
+    VirtIOSCSIEventInfo info = {
+        .event   = VIRTIO_SCSI_T_TRANSPORT_RESET,
+        .reason  = VIRTIO_SCSI_EVT_RESET_REMOVED,
+        .address = {
+            .id  = sd->id,
+            .lun = sd->lun,
+        },
+    };
 
     aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
@@ -1082,6 +1111,13 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
         virtio_scsi_release(s);
     }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
+        virtio_scsi_acquire(s);
+        virtio_scsi_push_event(s, &info);
+        scsi_bus_set_ua(&s->bus, SENSE_CODE(REPORTED_LUNS_CHANGED));
+        virtio_scsi_release(s);
+    }
 }
 
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535458.833204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzwx-0003OY-39; Tue, 16 May 2023 19:03:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535458.833204; Tue, 16 May 2023 19:03:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzww-0003OD-Sa; Tue, 16 May 2023 19:03:06 +0000
Received: by outflank-mailman (input) for mailman id 535458;
 Tue, 16 May 2023 19:03:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzwv-0002US-HM
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:05 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 48627b70-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:03 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-330-3lS_afUsO8WRHCJ1_-Aiuw-1; Tue, 16 May 2023 15:03:00 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2785E10146FF;
 Tue, 16 May 2023 19:02:58 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 2F4E82026D16;
 Tue, 16 May 2023 19:02:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48627b70-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263782;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8RC9Rdzxs6IYHgWBLR9iRionl9qUv7t2Lx7UcUXBQjY=;
	b=eW4iOV98KF4G+SCKSNiuwY+nj8+B0wJFDFjj6f6bu/OuRMfAkMcMu96hOjX5iiRWxE1zPF
	oIshOudze1oHNCxfw6Gy3E/XxCW20g8b36URHC0M9fC25YC9c1uIu3x6szgdsjPe2YiiuK
	LbFtOpQB92ukNYOEV7gUYJFGEmB0fDw=
X-MC-Unique: 3lS_afUsO8WRHCJ1_-Aiuw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>,
	Zhengui Li <lizhengui@huawei.com>,
	Daniil Tatianin <d-tatianin@yandex-team.ru>
Subject: [PATCH v6 04/20] virtio-scsi: stop using aio_disable_external() during unplug
Date: Tue, 16 May 2023 15:02:22 -0400
Message-Id: <20230516190238.8401-5-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

This patch is part of an effort to remove the aio_disable_external()
API because it does not fit in a multi-queue block layer world where
many AioContexts may be submitting requests to the same disk.

The SCSI emulation code is already in good shape to stop using
aio_disable_external(). It was only used by commit 9c5aad84da1c
("virtio-scsi: fixed virtio_scsi_ctx_check failed when detaching scsi
disk") to ensure that virtio_scsi_hotunplug() works while the guest
driver is submitting I/O.

Ensure virtio_scsi_hotunplug() is safe as follows:

1. qdev_simple_device_unplug_cb() -> qdev_unrealize() ->
   device_set_realized() calls qatomic_set(&dev->realized, false) so
   that future scsi_device_get() calls return NULL because they exclude
   SCSIDevices with realized=false.

   That means virtio-scsi will reject new I/O requests to this
   SCSIDevice with VIRTIO_SCSI_S_BAD_TARGET even while
   virtio_scsi_hotunplug() is still executing. We are protected against
   new requests!

2. scsi_qdev_unrealize() already contains a call to
   scsi_device_purge_requests() so that in-flight requests are cancelled
   synchronously. This ensures that no in-flight requests remain once
   qdev_simple_device_unplug_cb() returns.

Thanks to these two conditions we don't need aio_disable_external()
anymore.
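The two safety conditions above can be modeled as a small self-contained sketch. The names (ModelSCSIDevice, model_device_get, model_unrealize) are illustrative stand-ins, not QEMU's actual types: the point is only that clearing the realized flag first means lookups fail before the purge even finishes.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

/*
 * Hypothetical model of the pattern described above. In QEMU,
 * scsi_device_get() excludes devices with realized == false, so new
 * requests are rejected even while the unplug path is still running.
 */
typedef struct {
    atomic_bool realized;
    int in_flight;
} ModelSCSIDevice;

/* Lookup excludes devices whose realized flag has been cleared. */
static ModelSCSIDevice *model_device_get(ModelSCSIDevice *dev)
{
    return atomic_load(&dev->realized) ? dev : NULL;
}

static void model_unrealize(ModelSCSIDevice *dev)
{
    /* Condition 1: clear realized first, blocking new lookups. */
    atomic_store(&dev->realized, false);
    /* Condition 2: cancel in-flight requests synchronously. */
    dev->in_flight = 0;
}
```

With both conditions holding, nothing between them needs aio_disable_external() to stay quiescent.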

Cc: Zhengui Li <lizhengui@huawei.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 hw/scsi/virtio-scsi.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index ae314af3de..c1a7ea9ae2 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1091,7 +1091,6 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
     VirtIOSCSI *s = VIRTIO_SCSI(vdev);
     SCSIDevice *sd = SCSI_DEVICE(dev);
-    AioContext *ctx = s->ctx ?: qemu_get_aio_context();
     VirtIOSCSIEventInfo info = {
         .event   = VIRTIO_SCSI_T_TRANSPORT_RESET,
         .reason  = VIRTIO_SCSI_EVT_RESET_REMOVED,
@@ -1101,9 +1100,7 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
         },
     };
 
-    aio_disable_external(ctx);
     qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
-    aio_enable_external(ctx);
 
     if (s->ctx) {
         virtio_scsi_acquire(s);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535459.833218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzx2-000408-C9; Tue, 16 May 2023 19:03:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535459.833218; Tue, 16 May 2023 19:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzx2-0003zo-8B; Tue, 16 May 2023 19:03:12 +0000
Received: by outflank-mailman (input) for mailman id 535459;
 Tue, 16 May 2023 19:03:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzx1-0002US-Jg
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:11 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4bff89f6-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:09 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-631-OBDdMVeVNAuZ1qrl1VHmbQ-1; Tue, 16 May 2023 15:03:05 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 58E9329A9D28;
 Tue, 16 May 2023 19:03:04 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 647022166B32;
 Tue, 16 May 2023 19:03:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4bff89f6-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263788;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QN5dUSHOXHxwuqPaPoFoBIoQO1wy9z42Wgl8ufTpgyA=;
	b=WzaVG5MaQyG9P6T/wzFKvz58omq6ynPyCZabP5mL0PdruVaHgdJSKbhcsadkNRcvmfau9n
	YgzGLkV+VQBVg+0CpBtlxT576dbYGjPfmsEw+OytEzlWcU5+5XkxX5G4uzyit2yivvztb7
	uwdtPGI0do/si2SohMIH3ljUPn05cLg=
X-MC-Unique: OBDdMVeVNAuZ1qrl1VHmbQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 05/20] util/vhost-user-server: rename refcount to in_flight counter
Date: Tue, 16 May 2023 15:02:23 -0400
Message-Id: <20230516190238.8401-6-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

The VuServer object has a refcount field and ref/unref APIs. The name is
confusing because it's actually an in-flight request counter instead of
a refcount.

Normally a refcount destroys the object when it reaches zero. The
VuServer counter instead wakes up the vhost-user coroutine when there
are no more in-flight requests.

Avoid confusion by renaming refcount and ref/unref to in_flight and
inc/dec.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 include/qemu/vhost-user-server.h     |  6 +++---
 block/export/vhost-user-blk-server.c | 11 +++++++----
 util/vhost-user-server.c             | 14 +++++++-------
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 25c72433ca..bc0ac9ddb6 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -41,7 +41,7 @@ typedef struct {
     const VuDevIface *vu_iface;
 
     /* Protected by ctx lock */
-    unsigned int refcount;
+    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -60,8 +60,8 @@ bool vhost_user_server_start(VuServer *server,
 
 void vhost_user_server_stop(VuServer *server);
 
-void vhost_user_server_ref(VuServer *server);
-void vhost_user_server_unref(VuServer *server);
+void vhost_user_server_inc_in_flight(VuServer *server);
+void vhost_user_server_dec_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index e56b92f2e2..841acb36e3 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -50,7 +50,10 @@ static void vu_blk_req_complete(VuBlkReq *req, size_t in_len)
     free(req);
 }
 
-/* Called with server refcount increased, must decrease before returning */
+/*
+ * Called with server in_flight counter increased, must decrease before
+ * returning.
+ */
 static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
 {
     VuBlkReq *req = opaque;
@@ -68,12 +71,12 @@ static void coroutine_fn vu_blk_virtio_process_req(void *opaque)
                                     in_num, out_num);
     if (in_len < 0) {
         free(req);
-        vhost_user_server_unref(server);
+        vhost_user_server_dec_in_flight(server);
         return;
     }
 
     vu_blk_req_complete(req, in_len);
-    vhost_user_server_unref(server);
+    vhost_user_server_dec_in_flight(server);
 }
 
 static void vu_blk_process_vq(VuDev *vu_dev, int idx)
@@ -95,7 +98,7 @@ static void vu_blk_process_vq(VuDev *vu_dev, int idx)
         Coroutine *co =
             qemu_coroutine_create(vu_blk_virtio_process_req, req);
 
-        vhost_user_server_ref(server);
+        vhost_user_server_inc_in_flight(server);
         qemu_coroutine_enter(co);
     }
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5b6216069c..1622f8cfb3 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -75,16 +75,16 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
     error_report("vu_panic: %s", buf);
 }
 
-void vhost_user_server_ref(VuServer *server)
+void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->refcount++;
+    server->in_flight++;
 }
 
-void vhost_user_server_unref(VuServer *server)
+void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->refcount--;
-    if (server->wait_idle && !server->refcount) {
+    server->in_flight--;
+    if (server->wait_idle && !server->in_flight) {
         aio_co_wake(server->co_trip);
     }
 }
@@ -192,13 +192,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->refcount) {
+    if (server->in_flight) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->refcount == 0);
+    assert(server->in_flight == 0);
 
     vu_deinit(vu_dev);
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535460.833228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzx8-0004af-NS; Tue, 16 May 2023 19:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535460.833228; Tue, 16 May 2023 19:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzx8-0004aU-Jg; Tue, 16 May 2023 19:03:18 +0000
Received: by outflank-mailman (input) for mailman id 535460;
 Tue, 16 May 2023 19:03:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzx7-0002eu-Dk
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:17 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4fe8fed2-f41c-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:03:16 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-499-q0R2kR_yPT2D42s4iBe7mw-1; Tue, 16 May 2023 15:03:11 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D0F5B38107A8;
 Tue, 16 May 2023 19:03:08 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 0FD864750C0;
 Tue, 16 May 2023 19:03:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fe8fed2-f41c-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263795;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fL5t1UEl9cjNmPiwPkEsDsq1Nk17L3JAHc6T/MEfbt0=;
	b=CTYAMCl7ZLxSRTBKvKxSOmVY+KENs8RbettGXDQMmNXVtcGK2Z5j2afb0/v9rCFmLfuCI2
	R2WJo3B3REtRehV7vXOHuq876+DNZsJKezw8YxUyI+L4AbavRy+YEuvJDi1ecySx1WDCFf
	BftuE6cj/D9svfW+ceC4/VJYFPg/xf0=
X-MC-Unique: q0R2kR_yPT2D42s4iBe7mw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 06/20] block/export: wait for vhost-user-blk requests when draining
Date: Tue, 16 May 2023 15:02:24 -0400
Message-Id: <20230516190238.8401-7-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv/pwritev()/etc returns, it wakes the
bdrv_drained_begin() thread, but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.

(This example is theoretical, I came across this while reading the
code and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter needs to be atomic.
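A minimal sketch of this counter pattern, with illustrative names (SketchServer, a plain bool standing in for aio_co_wake() on the waiting coroutine) rather than the real VuServer API:

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_uint in_flight;
    bool wait_idle;
    bool woke; /* stands in for aio_co_wake() on the waiter */
} SketchServer;

static void sketch_inc_in_flight(SketchServer *s)
{
    atomic_fetch_add(&s->in_flight, 1);
}

static void sketch_dec_in_flight(SketchServer *s)
{
    /* Wake the waiter only when the last request completes. */
    if (atomic_fetch_sub(&s->in_flight, 1) == 1 && s->wait_idle) {
        s->woke = true;
    }
}

/*
 * What a .drained_poll()-style callback would return: true while
 * requests are still in flight, so the drain loop keeps polling.
 */
static bool sketch_has_in_flight(SketchServer *s)
{
    return atomic_load(&s->in_flight) > 0;
}
```

Because the counter is atomic, the predicate can be read from the main loop thread while request coroutines in other AioContexts increment and decrement it.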

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
v5:
- Use atomic accesses for in_flight counter in vhost-user-server.c [Kevin]
---
 include/qemu/vhost-user-server.h     |  4 +++-
 block/export/vhost-user-blk-server.c | 13 +++++++++++++
 util/vhost-user-server.c             | 18 ++++++++++++------
 3 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index bc0ac9ddb6..b1c1cda886 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -40,8 +40,9 @@ typedef struct {
     int max_queues;
     const VuDevIface *vu_iface;
 
+    unsigned int in_flight; /* atomic */
+
     /* Protected by ctx lock */
-    unsigned int in_flight;
     bool wait_idle;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
@@ -62,6 +63,7 @@ void vhost_user_server_stop(VuServer *server);
 
 void vhost_user_server_inc_in_flight(VuServer *server);
 void vhost_user_server_dec_in_flight(VuServer *server);
+bool vhost_user_server_has_in_flight(VuServer *server);
 
 void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx);
 void vhost_user_server_detach_aio_context(VuServer *server);
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 841acb36e3..f51a36a14f 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -272,7 +272,20 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
+/*
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ *
+ * Called with vexp->export.ctx acquired.
+ */
+static bool vu_blk_drained_poll(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    return vhost_user_server_has_in_flight(&vexp->vu_server);
+}
+
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_poll  = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
 
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 1622f8cfb3..68c3bf162f 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }
 
 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }
 
+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn
 vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
 {
@@ -192,13 +198,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->in_flight) {
+    if (vhost_user_server_has_in_flight(server)) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
-    assert(server->in_flight == 0);
+    assert(!vhost_user_server_has_in_flight(server));
 
     vu_deinit(vu_dev);
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535461.833233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzx9-0004eD-3B; Tue, 16 May 2023 19:03:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535461.833233; Tue, 16 May 2023 19:03:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzx8-0004cv-SR; Tue, 16 May 2023 19:03:18 +0000
Received: by outflank-mailman (input) for mailman id 535461;
 Tue, 16 May 2023 19:03:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzx7-0002US-Pj
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:17 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4f9e5af6-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:16 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-411-tngGddgoOKCHUPg4actehQ-1; Tue, 16 May 2023 15:03:12 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 01BAE86C60F;
 Tue, 16 May 2023 19:03:11 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6275740C2063;
 Tue, 16 May 2023 19:03:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f9e5af6-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263794;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wsN4h7xYRcCa/FyLGYJ9wKwMqFBp/AODhXKHex8caak=;
	b=jE0QFUJoR10bqbl3HtXGqotegD4PN+oTlu9z4tQ1AdY2aadHveQnISPoPPLIZFGF0H5fpg
	pUdhNq93o1me+knx46j2v/f8qKgMNO2oxudJ/OaZs3P/ceZYbxBnKsy4NIe1mPftKkyi9m
	yoEI4JRbs9x8CnqaDTOYGBs7JpdGBR0=
X-MC-Unique: tngGddgoOKCHUPg4actehQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 07/20] block/export: stop using is_external in vhost-user-blk server
Date: Tue, 16 May 2023 15:02:25 -0400
Message-Id: <20230516190238.8401-8-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContexts may be processing I/O, not
just one.

Switch to BlockDevOps->drained_begin/end() callbacks.
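The callback-pair shape can be sketched as follows. The types and the drain loop (SketchDevOps, sketch_drain) are assumptions for illustration, not QEMU's BlockDevOps or bdrv_drained_begin() themselves:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative device-ops table: begin/poll/end, as in the patch. */
typedef struct {
    void (*drained_begin)(void *opaque);
    bool (*drained_poll)(void *opaque);
    void (*drained_end)(void *opaque);
} SketchDevOps;

static void sketch_drain(const SketchDevOps *ops, void *opaque)
{
    if (ops->drained_begin) {
        ops->drained_begin(opaque); /* suspend new activity */
    }
    /* Wait until the device reports quiescence. */
    while (ops->drained_poll && ops->drained_poll(opaque)) {
        /* a real loop would run pending AioContext handlers here */
    }
    if (ops->drained_end) {
        ops->drained_end(opaque); /* resume activity */
    }
}

/* Demo callbacks counting invocations for the sketch. */
typedef struct { int begun, polls_left, ended; } SketchState;
static void demo_begin(void *o) { ((SketchState *)o)->begun++; }
static bool demo_poll(void *o)  { return ((SketchState *)o)->polls_left-- > 0; }
static void demo_end(void *o)   { ((SketchState *)o)->ended++; }
static const SketchDevOps demo_ops = { demo_begin, demo_poll, demo_end };
```

Unlike the is_external mechanism, these callbacks work regardless of which AioContext the export's file descriptors are registered in.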

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 block/export/vhost-user-blk-server.c | 28 ++++++++++++++++++++++++++--
 util/vhost-user-server.c             | 10 +++++-----
 2 files changed, 31 insertions(+), 7 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index f51a36a14f..81b59761e3 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -212,15 +212,21 @@ static void blk_aio_attached(AioContext *ctx, void *opaque)
 {
     VuBlkExport *vexp = opaque;
 
+    /*
+     * The actual attach will happen in vu_blk_drained_end() and we just
+     * restore ctx here.
+     */
     vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
 }
 
 static void blk_aio_detach(void *opaque)
 {
     VuBlkExport *vexp = opaque;
 
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
+    /*
+     * The actual detach already happened in vu_blk_drained_begin() but from
+     * this point on we must not access ctx anymore.
+     */
     vexp->export.ctx = NULL;
 }
 
@@ -272,6 +278,22 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -285,6 +307,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end   = vu_blk_drained_end,
     .drained_poll  = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 68c3bf162f..a12b2d1bba 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535462.833248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzxD-0005Oy-Fz; Tue, 16 May 2023 19:03:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535462.833248; Tue, 16 May 2023 19:03:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzxD-0005Op-Bh; Tue, 16 May 2023 19:03:23 +0000
Received: by outflank-mailman (input) for mailman id 535462;
 Tue, 16 May 2023 19:03:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxC-0002eu-0X
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:22 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 52ac32ee-f41c-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:03:21 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-473-Sy3hGnDlOKS3ECndEx5x9g-1; Tue, 16 May 2023 15:03:15 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4D39886C60B;
 Tue, 16 May 2023 19:03:14 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E99052166B31;
 Tue, 16 May 2023 19:03:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52ac32ee-f41c-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263799;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gwf7LmGZHdm26Ds5BqmQDT3iRbnDekJi1MBsLHkdcpI=;
	b=MzJ5f00dlZR8nU4qOE6WRLjCN1Q12DAyie0IVUHnJYX6EdOu1HQSdPLNy1vPPvinxTEu0V
	PxiqKTuEpTRZ8yafGGPHLO8IFi4D+RVqy+8oFD4KnmkRR0wO2V2B8CzNzcvZsTI596CB/b
	t0kbVTe+IsVj6M3c5ILAJSO/8PcdD9I=
X-MC-Unique: Sy3hGnDlOKS3ECndEx5x9g-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Daniel P. Berrangé <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: [PATCH v6 08/20] hw/xen: do not use aio_set_fd_handler(is_external=true) in xen_xenstore
Date: Tue, 16 May 2023 15:02:26 -0400
Message-Id: <20230516190238.8401-9-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

There is no need to suspend activity between aio_disable_external() and
aio_enable_external(), which are mainly used for the block layer's drain
operation.

This is part of ongoing work to remove the aio_disable_external() API.

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 hw/i386/kvm/xen_xenstore.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 900679af8a..6e81bc8791 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), true,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535465.833258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzxG-0005t7-Ps; Tue, 16 May 2023 19:03:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535465.833258; Tue, 16 May 2023 19:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzxG-0005sw-Ms; Tue, 16 May 2023 19:03:26 +0000
Received: by outflank-mailman (input) for mailman id 535465;
 Tue, 16 May 2023 19:03:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxF-0002US-Dn
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:25 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 53d8514a-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:23 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-584-l2Cce6lmMQ6XCObNFfTjyg-1; Tue, 16 May 2023 15:03:18 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5E1951854CA7;
 Tue, 16 May 2023 19:03:17 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id BAD3A4021D9;
 Tue, 16 May 2023 19:03:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53d8514a-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263801;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=biE+IrOQwzQ1Xq0Ccvz0hh2N38Cghy4v9sBjAOwHiFA=;
	b=NtotL4IQTIRnbbFkYpZTa952M/4yEz+Jwo+MWspcne/aq81YPVOQ2IXBdZd/7pEsBV1m7w
	9jL+6bJWh2n/CoEVQwGLswVEGM0OazGVFe7dkG149YmpqbSLPWrXudNMyAPpqFVMTN+aeU
	01y6JC4m7/RWdyqwPmXORWcE39Q18e0=
X-MC-Unique: l2Cce6lmMQ6XCObNFfTjyg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Daniel P. Berrangé <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 09/20] block: add blk_in_drain() API
Date: Tue, 16 May 2023 15:02:27 -0400
Message-Id: <20230516190238.8401-10-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

The BlockBackend quiesce_counter is greater than zero during drained
sections. Add an API to check whether the BlockBackend is in a drained
section.

The next patch will use this API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 include/sysemu/block-backend-global-state.h | 1 +
 block/block-backend.c                       | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/sysemu/block-backend-global-state.h b/include/sysemu/block-backend-global-state.h
index fa83f9389c..184e667ebd 100644
--- a/include/sysemu/block-backend-global-state.h
+++ b/include/sysemu/block-backend-global-state.h
@@ -81,6 +81,7 @@ void blk_activate(BlockBackend *blk, Error **errp);
 int blk_make_zero(BlockBackend *blk, BdrvRequestFlags flags);
 void blk_aio_cancel(BlockAIOCB *acb);
 int blk_commit_all(void);
+bool blk_in_drain(BlockBackend *blk);
 void blk_drain(BlockBackend *blk);
 void blk_drain_all(void);
 void blk_set_on_error(BlockBackend *blk, BlockdevOnError on_read_error,
diff --git a/block/block-backend.c b/block/block-backend.c
index 68087437ac..3a5949ecce 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -1270,6 +1270,13 @@ blk_check_byte_request(BlockBackend *blk, int64_t offset, int64_t bytes)
     return 0;
 }
 
+/* Are we currently in a drained section? */
+bool blk_in_drain(BlockBackend *blk)
+{
+    GLOBAL_STATE_CODE(); /* change to IO_OR_GS_CODE(), if necessary */
+    return qatomic_read(&blk->quiesce_counter);
+}
+
 /* To be called between exactly one pair of blk_inc/dec_in_flight() */
 static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535468.833268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzxK-0006Jx-3A; Tue, 16 May 2023 19:03:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535468.833268; Tue, 16 May 2023 19:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzxJ-0006Jo-WF; Tue, 16 May 2023 19:03:30 +0000
Received: by outflank-mailman (input) for mailman id 535468;
 Tue, 16 May 2023 19:03:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxI-0002eu-Se
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:28 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 56adcbb7-f41c-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:03:27 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-656-Sni2LIjuO2SRrSF38SjuFg-1; Tue, 16 May 2023 15:03:22 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5DB2410146F0;
 Tue, 16 May 2023 19:03:21 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B2816492B00;
 Tue, 16 May 2023 19:03:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56adcbb7-f41c-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263806;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0yOXWNGqaDwHdnws5HVeAQl7pUG1Jibfze9Ap1bxQAo=;
	b=dDUrwQNIqe5ZfRUSQOwocqKwfEEcUxevkvx1VQGFmz8ti+cw5/HSnAHfbz8JTJzl0I1mhD
	zt1ThcmJ6Drv5xmWwQ5N75vwuXnGBfBf/zUNzY3EkOTkN08C3mBUz72DUglS1s9PVugjwS
	1F8WZTvhhxNX0eLLzK9TvI/NvYvYQJg=
X-MC-Unique: Sni2LIjuO2SRrSF38SjuFg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Daniel P. Berrangé <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 10/20] block: drain from main loop thread in bdrv_co_yield_to_drain()
Date: Tue, 16 May 2023 15:02:28 -0400
Message-Id: <20230516190238.8401-11-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

For simplicity, always run BlockDevOps .drained_begin/end/poll()
callbacks in the main loop thread. This makes it easier to implement the
callbacks and avoids extra locks.

Move the function pointer declarations from the I/O Code section to the
Global State section for BlockDevOps, BdrvChildClass, and BlockDriver.

Narrow IO_OR_GS_CODE() to GLOBAL_STATE_CODE() where appropriate.

The test-bdrv-drain test case calls bdrv_drain() from an IOThread. This
is now only allowed from coroutine context, so update the test case to
run in a coroutine.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/block_int-common.h      | 90 +++++++++++++--------------
 include/sysemu/block-backend-common.h | 25 ++++----
 block/io.c                            | 14 +++--
 tests/unit/test-bdrv-drain.c          | 14 +++--
 4 files changed, 76 insertions(+), 67 deletions(-)

diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
index dbec0e3bb4..17eb54edcb 100644
--- a/include/block/block_int-common.h
+++ b/include/block/block_int-common.h
@@ -363,6 +363,21 @@ struct BlockDriver {
     void (*bdrv_attach_aio_context)(BlockDriverState *bs,
                                     AioContext *new_context);
 
+    /**
+     * bdrv_drain_begin is called if implemented in the beginning of a
+     * drain operation to drain and stop any internal sources of requests in
+     * the driver.
+     * bdrv_drain_end is called if implemented at the end of the drain.
+     *
+     * They should be used by the driver to e.g. manage scheduled I/O
+     * requests, or toggle an internal state. After the end of the drain new
+     * requests will continue normally.
+     *
+     * Implementations of both functions must not call aio_poll().
+     */
+    void (*bdrv_drain_begin)(BlockDriverState *bs);
+    void (*bdrv_drain_end)(BlockDriverState *bs);
+
     /**
      * Try to get @bs's logical and physical block size.
      * On success, store them in @bsz and return zero.
@@ -758,21 +773,6 @@ struct BlockDriver {
     void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_io_unplug)(
         BlockDriverState *bs);
 
-    /**
-     * bdrv_drain_begin is called if implemented in the beginning of a
-     * drain operation to drain and stop any internal sources of requests in
-     * the driver.
-     * bdrv_drain_end is called if implemented at the end of the drain.
-     *
-     * They should be used by the driver to e.g. manage scheduled I/O
-     * requests, or toggle an internal state. After the end of the drain new
-     * requests will continue normally.
-     *
-     * Implementations of both functions must not call aio_poll().
-     */
-    void (*bdrv_drain_begin)(BlockDriverState *bs);
-    void (*bdrv_drain_end)(BlockDriverState *bs);
-
     bool (*bdrv_supports_persistent_dirty_bitmap)(BlockDriverState *bs);
 
     bool coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_can_store_new_dirty_bitmap)(
@@ -955,36 +955,6 @@ struct BdrvChildClass {
     void GRAPH_WRLOCK_PTR (*attach)(BdrvChild *child);
     void GRAPH_WRLOCK_PTR (*detach)(BdrvChild *child);
 
-    /*
-     * Notifies the parent that the filename of its child has changed (e.g.
-     * because the direct child was removed from the backing chain), so that it
-     * can update its reference.
-     */
-    int (*update_filename)(BdrvChild *child, BlockDriverState *new_base,
-                           const char *filename, Error **errp);
-
-    bool (*change_aio_ctx)(BdrvChild *child, AioContext *ctx,
-                           GHashTable *visited, Transaction *tran,
-                           Error **errp);
-
-    /*
-     * I/O API functions. These functions are thread-safe.
-     *
-     * See include/block/block-io.h for more information about
-     * the I/O API.
-     */
-
-    void (*resize)(BdrvChild *child);
-
-    /*
-     * Returns a name that is supposedly more useful for human users than the
-     * node name for identifying the node in question (in particular, a BB
-     * name), or NULL if the parent can't provide a better name.
-     */
-    const char *(*get_name)(BdrvChild *child);
-
-    AioContext *(*get_parent_aio_context)(BdrvChild *child);
-
     /*
      * If this pair of functions is implemented, the parent doesn't issue new
      * requests after returning from .drained_begin() until .drained_end() is
@@ -1005,6 +975,36 @@ struct BdrvChildClass {
      * activity on the child has stopped.
      */
     bool (*drained_poll)(BdrvChild *child);
+
+    /*
+     * Notifies the parent that the filename of its child has changed (e.g.
+     * because the direct child was removed from the backing chain), so that it
+     * can update its reference.
+     */
+    int (*update_filename)(BdrvChild *child, BlockDriverState *new_base,
+                           const char *filename, Error **errp);
+
+    bool (*change_aio_ctx)(BdrvChild *child, AioContext *ctx,
+                           GHashTable *visited, Transaction *tran,
+                           Error **errp);
+
+    /*
+     * I/O API functions. These functions are thread-safe.
+     *
+     * See include/block/block-io.h for more information about
+     * the I/O API.
+     */
+
+    void (*resize)(BdrvChild *child);
+
+    /*
+     * Returns a name that is supposedly more useful for human users than the
+     * node name for identifying the node in question (in particular, a BB
+     * name), or NULL if the parent can't provide a better name.
+     */
+    const char *(*get_name)(BdrvChild *child);
+
+    AioContext *(*get_parent_aio_context)(BdrvChild *child);
 };
 
 extern const BdrvChildClass child_of_bds;
diff --git a/include/sysemu/block-backend-common.h b/include/sysemu/block-backend-common.h
index 2391679c56..780cea7305 100644
--- a/include/sysemu/block-backend-common.h
+++ b/include/sysemu/block-backend-common.h
@@ -59,6 +59,19 @@ typedef struct BlockDevOps {
      */
     bool (*is_medium_locked)(void *opaque);
 
+    /*
+     * Runs when the backend receives a drain request.
+     */
+    void (*drained_begin)(void *opaque);
+    /*
+     * Runs when the backend's last drain request ends.
+     */
+    void (*drained_end)(void *opaque);
+    /*
+     * Is the device still busy?
+     */
+    bool (*drained_poll)(void *opaque);
+
     /*
      * I/O API functions. These functions are thread-safe.
      *
@@ -76,18 +89,6 @@ typedef struct BlockDevOps {
      * Runs when the size changed (e.g. monitor command block_resize)
      */
     void (*resize_cb)(void *opaque);
-    /*
-     * Runs when the backend receives a drain request.
-     */
-    void (*drained_begin)(void *opaque);
-    /*
-     * Runs when the backend's last drain request ends.
-     */
-    void (*drained_end)(void *opaque);
-    /*
-     * Is the device still busy?
-     */
-    bool (*drained_poll)(void *opaque);
 } BlockDevOps;
 
 /*
diff --git a/block/io.c b/block/io.c
index 4d54fda593..fece938fd0 100644
--- a/block/io.c
+++ b/block/io.c
@@ -60,7 +60,7 @@ static void bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore)
 
 void bdrv_parent_drained_end_single(BdrvChild *c)
 {
-    IO_OR_GS_CODE();
+    GLOBAL_STATE_CODE();
 
     assert(c->quiesced_parent);
     c->quiesced_parent = false;
@@ -108,7 +108,7 @@ static bool bdrv_parent_drained_poll(BlockDriverState *bs, BdrvChild *ignore,
 
 void bdrv_parent_drained_begin_single(BdrvChild *c)
 {
-    IO_OR_GS_CODE();
+    GLOBAL_STATE_CODE();
 
     assert(!c->quiesced_parent);
     c->quiesced_parent = true;
@@ -247,7 +247,7 @@ typedef struct {
 bool bdrv_drain_poll(BlockDriverState *bs, BdrvChild *ignore_parent,
                      bool ignore_bds_parents)
 {
-    IO_OR_GS_CODE();
+    GLOBAL_STATE_CODE();
 
     if (bdrv_parent_drained_poll(bs, ignore_parent, ignore_bds_parents)) {
         return true;
@@ -334,7 +334,8 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     if (ctx != co_ctx) {
         aio_context_release(ctx);
     }
-    replay_bh_schedule_oneshot_event(ctx, bdrv_co_drain_bh_cb, &data);
+    replay_bh_schedule_oneshot_event(qemu_get_aio_context(),
+                                     bdrv_co_drain_bh_cb, &data);
 
     qemu_coroutine_yield();
     /* If we are resumed from some other event (such as an aio completion or a
@@ -357,6 +358,8 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
         return;
     }
 
+    GLOBAL_STATE_CODE();
+
     /* Stop things in parent-to-child order */
     if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
         aio_disable_external(bdrv_get_aio_context(bs));
@@ -399,11 +402,14 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
 {
     int old_quiesce_counter;
 
+    IO_OR_GS_CODE();
+
     if (qemu_in_coroutine()) {
         bdrv_co_yield_to_drain(bs, false, parent, false);
         return;
     }
     assert(bs->quiesce_counter > 0);
+    GLOBAL_STATE_CODE();
 
     /* Re-enable things in child-to-parent order */
     old_quiesce_counter = qatomic_fetch_dec(&bs->quiesce_counter);
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 9a4c5e59d6..6ad9964f03 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -483,19 +483,19 @@ struct test_iothread_data {
     BlockDriverState *bs;
     enum drain_type drain_type;
     int *aio_ret;
+    bool co_done;
 };
 
-static void test_iothread_drain_entry(void *opaque)
+static void coroutine_fn test_iothread_drain_co_entry(void *opaque)
 {
     struct test_iothread_data *data = opaque;
 
-    aio_context_acquire(bdrv_get_aio_context(data->bs));
     do_drain_begin(data->drain_type, data->bs);
     g_assert_cmpint(*data->aio_ret, ==, 0);
     do_drain_end(data->drain_type, data->bs);
-    aio_context_release(bdrv_get_aio_context(data->bs));
 
-    qemu_event_set(&done_event);
+    data->co_done = true;
+    aio_wait_kick();
 }
 
 static void test_iothread_aio_cb(void *opaque, int ret)
@@ -531,6 +531,7 @@ static void test_iothread_common(enum drain_type drain_type, int drain_thread)
     BlockDriverState *bs;
     BDRVTestState *s;
     BlockAIOCB *acb;
+    Coroutine *co;
     int aio_ret;
     struct test_iothread_data data;
 
@@ -609,8 +610,9 @@ static void test_iothread_common(enum drain_type drain_type, int drain_thread)
         }
         break;
     case 1:
-        aio_bh_schedule_oneshot(ctx_a, test_iothread_drain_entry, &data);
-        qemu_event_wait(&done_event);
+        co = qemu_coroutine_create(test_iothread_drain_co_entry, &data);
+        aio_co_enter(ctx_a, co);
+        AIO_WAIT_WHILE_UNLOCKED(NULL, !data.co_done);
         break;
     default:
         g_assert_not_reached();
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:03:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:03:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535472.833278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzxV-0007TH-J6; Tue, 16 May 2023 19:03:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535472.833278; Tue, 16 May 2023 19:03:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzxV-0007T6-FZ; Tue, 16 May 2023 19:03:41 +0000
Received: by outflank-mailman (input) for mailman id 535472;
 Tue, 16 May 2023 19:03:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxU-0002eu-Uf
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:40 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5df11914-f41c-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:03:40 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-107-qbs1UNnyNMWMb1G0IdJnag-1; Tue, 16 May 2023 15:03:35 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 612FC29A9CBD;
 Tue, 16 May 2023 19:03:34 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B9719492B00;
 Tue, 16 May 2023 19:03:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5df11914-f41c-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263818;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IFl7FI73+FiKjtdEe70JC8EnCu2IGMWapigmR378qow=;
	b=BvMfhU/LbFfGbN58I1Xq2b0BeCGztN3HKZ58N9LGRhpGWkY/OG+/hrJBMAi+9z+ljkN/Pz
	dfMJHbxMhts+eYH324AROaZfQShrVECsXRR603Bk8ao6oz49Z5AgTuY6107tzIwCrfRboF
	jLv0how9LpmIUcpzd27Baxp+kNk7f20=
X-MC-Unique: qbs1UNnyNMWMb1G0IdJnag-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 14/20] block/export: don't require AioContext lock around blk_exp_ref/unref()
Date: Tue, 16 May 2023 15:02:32 -0400
Message-Id: <20230516190238.8401-15-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

The FUSE export calls blk_exp_ref/unref() without the AioContext lock.
Instead of fixing the FUSE export, adjust blk_exp_ref/unref() so they
work without the AioContext lock. This makes the API less error-prone.
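
The pattern the patch switches to can be sketched outside QEMU with C11
atomics. This is a minimal illustration only: the names (`Export`,
`exp_ref`, `exp_unref`, `deleted`) are hypothetical, `stdatomic.h` stands in
for QEMU's `qatomic_*()` helpers, and setting a flag stands in for
scheduling the deletion bottom half in the main thread.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical export object; QEMU's BlockExport has many more fields. */
typedef struct Export {
    atomic_int refcount;  /* accessed with atomics, no lock required */
    bool deleted;         /* set when the last reference is dropped */
} Export;

static void exp_ref(Export *exp)
{
    /* Taking a new reference requires already holding one. */
    assert(atomic_load(&exp->refcount) > 0);
    atomic_fetch_add(&exp->refcount, 1);
}

static void exp_unref(Export *exp)
{
    assert(atomic_load(&exp->refcount) > 0);
    /* fetch_sub returns the old value: 1 means we dropped the last ref. */
    if (atomic_fetch_sub(&exp->refcount, 1) == 1) {
        /* QEMU instead schedules blk_exp_delete_bh() in the main thread. */
        exp->deleted = true;
    }
}
```

Because increment and decrement are atomic read-modify-write operations,
concurrent callers cannot race on the counter, which is why the AioContext
lock is no longer needed around ref/unref.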

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/export.h   |  2 ++
 block/export/export.c    | 13 ++++++-------
 block/export/vduse-blk.c |  4 ----
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/block/export.h b/include/block/export.h
index 7feb02e10d..f2fe0f8078 100644
--- a/include/block/export.h
+++ b/include/block/export.h
@@ -57,6 +57,8 @@ struct BlockExport {
      * Reference count for this block export. This includes strong references
      * both from the owner (qemu-nbd or the monitor) and clients connected to
      * the export.
+     *
+     * Use atomics to access this field.
      */
     int refcount;
 
diff --git a/block/export/export.c b/block/export/export.c
index 62c7c22d45..ab007e9d31 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -202,11 +202,10 @@ fail:
     return NULL;
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_ref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    exp->refcount++;
+    assert(qatomic_read(&exp->refcount) > 0);
+    qatomic_inc(&exp->refcount);
 }
 
 /* Runs in the main thread */
@@ -229,11 +228,10 @@ static void blk_exp_delete_bh(void *opaque)
     aio_context_release(aio_context);
 }
 
-/* Callers must hold exp->ctx lock */
 void blk_exp_unref(BlockExport *exp)
 {
-    assert(exp->refcount > 0);
-    if (--exp->refcount == 0) {
+    assert(qatomic_read(&exp->refcount) > 0);
+    if (qatomic_fetch_dec(&exp->refcount) == 1) {
         /* Touch the block_exports list only in the main thread */
         aio_bh_schedule_oneshot(qemu_get_aio_context(), blk_exp_delete_bh,
                                 exp);
@@ -341,7 +339,8 @@ void qmp_block_export_del(const char *id,
     if (!has_mode) {
         mode = BLOCK_EXPORT_REMOVE_MODE_SAFE;
     }
-    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE && exp->refcount > 1) {
+    if (mode == BLOCK_EXPORT_REMOVE_MODE_SAFE &&
+        qatomic_read(&exp->refcount) > 1) {
         error_setg(errp, "export '%s' still in use", exp->id);
         error_append_hint(errp, "Use mode='hard' to force client "
                           "disconnect\n");
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index a25556fe04..e0455551f9 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -44,9 +44,7 @@ static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
     if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
         /* Prevent export from being deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_ref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -57,9 +55,7 @@ static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
         aio_wait_kick();
 
         /* Now the export can be deleted */
-        aio_context_acquire(vblk_exp->export.ctx);
         blk_exp_unref(&vblk_exp->export);
-        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:05:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:05:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535486.833287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzz1-0000kk-Uj; Tue, 16 May 2023 19:05:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535486.833287; Tue, 16 May 2023 19:05:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pyzz1-0000kd-Rz; Tue, 16 May 2023 19:05:15 +0000
Received: by outflank-mailman (input) for mailman id 535486;
 Tue, 16 May 2023 19:05:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyzz1-0000kH-05; Tue, 16 May 2023 19:05:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyzz0-0000J1-Us; Tue, 16 May 2023 19:05:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pyzz0-0003AU-J8; Tue, 16 May 2023 19:05:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pyzz0-0006ez-Ih; Tue, 16 May 2023 19:05:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Hz+QjgjFjEb2H1iZa+bvUDvtQBqeNyGrsR8d7TifxBk=; b=Urn7/9Pe9GhU+4Eau3/BjQo2qi
	QjHosQddk0Tb+26BgmUqnQGbux1XbDVVE6i62eTvowArIba2yawp351uLs0bUgfYIglEGRSBZOOqo
	g0lphX6ctXAGIwDTZnbZ8eFjLizKPklciYAQsAjWxK9FcAITmO77rWMfxo0M3/pLuKWA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180684-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180684: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c8e4bbb5b8ee22fd1591ba6a5a3cef4466dda323
X-Osstest-Versions-That:
    xen=8f9c8274a4e3e860bd777269cb2c91971e9fa69e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 19:05:14 +0000

flight 180684 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180684/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c8e4bbb5b8ee22fd1591ba6a5a3cef4466dda323
baseline version:
 xen                  8f9c8274a4e3e860bd777269cb2c91971e9fa69e

Last test of basis   180678  2023-05-16 01:02:00 Z    0 days
Testing same since   180684  2023-05-16 16:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8f9c8274a4..c8e4bbb5b8  c8e4bbb5b8ee22fd1591ba6a5a3cef4466dda323 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 16 19:07:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:07:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535495.833298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01I-0001RT-B9; Tue, 16 May 2023 19:07:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535495.833298; Tue, 16 May 2023 19:07:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01I-0001RM-7x; Tue, 16 May 2023 19:07:36 +0000
Received: by outflank-mailman (input) for mailman id 535495;
 Tue, 16 May 2023 19:07:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxa-0002US-LX
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:46 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 60ebfc81-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:45 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-630-s834bll1PqWUszHXDikoHA-1; Tue, 16 May 2023 15:03:37 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 9C81D3C025D8;
 Tue, 16 May 2023 19:03:36 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 03DCC492B02;
 Tue, 16 May 2023 19:03:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60ebfc81-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263823;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XPd169DcGag/GSLBXKJkqR/lFNY63tNqxrQt5+AlYZw=;
	b=Hat8UP3n6MF33JjUJyzPgWU3xDFoh1J7sKe6wvtKH1TaEKvdlNZ2ek9kqiG+bCIGN673mn
	uhDnkUmuXOWxwREHEzPUrZfZMVLkiicrruDewltxWwFbFUwpV6h2Otig0dos2hBjSsPrOZ
	d5JjYISaOE8lgxjhcaBLlbIy1IvsM5c=
X-MC-Unique: s834bll1PqWUszHXDikoHA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 15/20] block/fuse: do not set is_external=true on FUSE fd
Date: Tue, 16 May 2023 15:02:33 -0400
Message-Id: <20230516190238.8401-16-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

This is part of ongoing work to remove the aio_disable_external() API.

Use BlockDevOps .drained_begin/end/poll() instead of
aio_set_fd_handler(is_external=true).

As a side effect, the FUSE export now follows AioContext changes like the
other export types.
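
The in-flight counter plus `.drained_poll()` mechanism the patch relies on
can be sketched as follows. This is an illustrative reduction, not QEMU
code: the `Counter` type is hypothetical, `kicks` stands in for
`aio_wait_kick()` waking `AIO_WAIT_WHILE()`, and C11 atomics stand in for
`qatomic_*()`.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical in-flight request counter for an export. */
typedef struct {
    atomic_uint in_flight;
    int kicks;  /* stands in for aio_wait_kick() side effects */
} Counter;

static void request_start(Counter *c)
{
    atomic_fetch_add(&c->in_flight, 1);
}

static void request_end(Counter *c)
{
    /* Only the request that drops the counter to zero wakes the waiter,
     * mirroring the qatomic_fetch_dec() == 1 test in the patch. */
    if (atomic_fetch_sub(&c->in_flight, 1) == 1) {
        c->kicks++;  /* QEMU: aio_wait_kick() */
    }
}

/* The .drained_poll() callback: drain keeps polling while this is true. */
static bool drained_poll(Counter *c)
{
    return atomic_load(&c->in_flight) > 0;
}
```

Drain then works without `is_external`: `.drained_begin()` unregisters the
fd handler so no new requests arrive, and the drain loop polls
`drained_poll()` until every in-flight request has called `request_end()`.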

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 block/export/fuse.c | 56 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 54 insertions(+), 2 deletions(-)

diff --git a/block/export/fuse.c b/block/export/fuse.c
index 06fa41079e..adf3236b5a 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -50,6 +50,7 @@ typedef struct FuseExport {
 
     struct fuse_session *fuse_session;
     struct fuse_buf fuse_buf;
+    unsigned int in_flight; /* atomic */
     bool mounted, fd_handler_set_up;
 
     char *mountpoint;
@@ -78,6 +79,42 @@ static void read_from_fuse_export(void *opaque);
 static bool is_regular_file(const char *path, Error **errp);
 
 
+static void fuse_export_drained_begin(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       NULL, NULL, NULL, NULL, NULL);
+    exp->fd_handler_set_up = false;
+}
+
+static void fuse_export_drained_end(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    exp->common.ctx = blk_get_aio_context(exp->common.blk);
+
+    aio_set_fd_handler(exp->common.ctx,
+                       fuse_session_fd(exp->fuse_session), false,
+                       read_from_fuse_export, NULL, NULL, NULL, exp);
+    exp->fd_handler_set_up = true;
+}
+
+static bool fuse_export_drained_poll(void *opaque)
+{
+    FuseExport *exp = opaque;
+
+    return qatomic_read(&exp->in_flight) > 0;
+}
+
+static const BlockDevOps fuse_export_blk_dev_ops = {
+    .drained_begin = fuse_export_drained_begin,
+    .drained_end   = fuse_export_drained_end,
+    .drained_poll  = fuse_export_drained_poll,
+};
+
 static int fuse_export_create(BlockExport *blk_exp,
                               BlockExportOptions *blk_exp_args,
                               Error **errp)
@@ -101,6 +138,15 @@ static int fuse_export_create(BlockExport *blk_exp,
         }
     }
 
+    blk_set_dev_ops(exp->common.blk, &fuse_export_blk_dev_ops, exp);
+
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * the FUSE fd handler. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->common.blk, true);
+
     init_exports_table();
 
     /*
@@ -224,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), true,
+                       fuse_session_fd(exp->fuse_session), false,
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -246,6 +292,8 @@ static void read_from_fuse_export(void *opaque)
 
     blk_exp_ref(&exp->common);
 
+    qatomic_inc(&exp->in_flight);
+
     do {
         ret = fuse_session_receive_buf(exp->fuse_session, &exp->fuse_buf);
     } while (ret == -EINTR);
@@ -256,6 +304,10 @@ static void read_from_fuse_export(void *opaque)
     fuse_session_process_buf(exp->fuse_session, &exp->fuse_buf);
 
 out:
+    if (qatomic_fetch_dec(&exp->in_flight) == 1) {
+        aio_wait_kick(); /* wake AIO_WAIT_WHILE() */
+    }
+
     blk_exp_unref(&exp->common);
 }
 
@@ -268,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), true,
+                               fuse_session_fd(exp->fuse_session), false,
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:07:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:07:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535498.833308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01L-0001hq-JU; Tue, 16 May 2023 19:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535498.833308; Tue, 16 May 2023 19:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01L-0001hf-GN; Tue, 16 May 2023 19:07:39 +0000
Received: by outflank-mailman (input) for mailman id 535498;
 Tue, 16 May 2023 19:07:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxS-0002US-Ps
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:38 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5c007aab-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:36 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-319-5SgmAg8zMuCHjVmCq-BmUA-1; Tue, 16 May 2023 15:03:32 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7590B38107A8;
 Tue, 16 May 2023 19:03:31 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D9BB414171C1;
 Tue, 16 May 2023 19:03:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c007aab-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263815;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6NKS2h08c7ICfd8e4ndbZUq0n9RkVrcwRBS+UKLWGJE=;
	b=Nt/2QcSwA1S41n94usC3c/n1p8RIkwKiT7ubHWFmHyAA58FZMMSYvCEjMsGinJKU/h728g
	VnfQTUuZY/cOvU6XokphJ/ZFJzRIG53ETUl/kww3iEs1NnqgaL25yK3ZjELRwxOupoQvZO
	KXViQ3m2nk+GXkbfcO818g8v+BI5mu8=
X-MC-Unique: 5SgmAg8zMuCHjVmCq-BmUA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 13/20] block/export: rewrite vduse-blk drain code
Date: Tue, 16 May 2023 15:02:31 -0400
Message-Id: <20230516190238.8401-14-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

vduse_blk_detach_ctx() waits for in-flight requests using
AIO_WAIT_WHILE(). This is not allowed according to a comment in
bdrv_set_aio_context_commit():

  /*
   * Take the old AioContext when detaching it from bs.
   * At this point, new_context lock is already acquired, and we are now
   * also taking old_context. This is safe as long as bdrv_detach_aio_context
   * does not call AIO_POLL_WHILE().
   */

Use this opportunity to rewrite the drain code in vduse-blk:

- Use the BlockExport refcount so that vduse_blk_exp_delete() is only
  called when there are no more requests in flight.

- Implement .drained_poll() so in-flight request coroutines are stopped
  by the time .bdrv_detach_aio_context() is called.

- Remove AIO_WAIT_WHILE() from vduse_blk_detach_ctx() to solve the
  .bdrv_detach_aio_context() constraint violation. It's no longer
  needed due to the previous changes.

- Always handle the VDUSE file descriptor, even in drained sections. The
  VDUSE file descriptor doesn't submit I/O, so it's safe to handle it in
  drained sections. This ensures that the VDUSE kernel code gets a fast
  response.

- Suspend virtqueue fd handlers in .drained_begin() and resume them in
  .drained_end(). This eliminates the need for the
  aio_set_fd_handler(is_external=true) flag, which is being removed from
  QEMU.

This is a long list but splitting it into individual commits would
probably lead to git bisect failures - the changes are all related.
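
The virtqueue-gating part of the list above can be sketched in isolation.
All names here are illustrative, not QEMU's: `handlers_armed` stands in for
`aio_set_fd_handler()` registrations, and the flag mirrors the `vqs_started`
field the patch adds so that queue kick handlers are only armed outside
drained sections.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-export virtqueue state. */
typedef struct {
    bool vqs_started;
    int handlers_armed;  /* stands in for armed fd handlers */
} VqState;

static void enable_queue(VqState *s)
{
    if (!s->vqs_started) {
        return;  /* drained_end() will start the queues later */
    }
    s->handlers_armed++;
}

static void drained_begin(VqState *s)
{
    s->vqs_started = false;
    s->handlers_armed = 0;  /* suspend all virtqueue fd handlers */
}

static void drained_end(VqState *s, int num_queues)
{
    s->vqs_started = true;
    for (int i = 0; i < num_queues; i++) {
        enable_queue(s);  /* re-arm the fd handler for each queue */
    }
}
```

The flag makes enable requests that arrive mid-drain harmless no-ops, so a
drained section can never end up with a live virtqueue handler submitting
I/O behind the drain's back.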

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 block/export/vduse-blk.c | 132 +++++++++++++++++++++++++++------------
 1 file changed, 93 insertions(+), 39 deletions(-)

diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index b53ef39da0..a25556fe04 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -31,7 +31,8 @@ typedef struct VduseBlkExport {
     VduseDev *dev;
     uint16_t num_queues;
     char *recon_file;
-    unsigned int inflight;
+    unsigned int inflight; /* atomic */
+    bool vqs_started;
 } VduseBlkExport;
 
 typedef struct VduseBlkReq {
@@ -41,13 +42,24 @@ typedef struct VduseBlkReq {
 
 static void vduse_blk_inflight_inc(VduseBlkExport *vblk_exp)
 {
-    vblk_exp->inflight++;
+    if (qatomic_fetch_inc(&vblk_exp->inflight) == 0) {
+        /* Prevent export from being deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_ref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
+    }
 }
 
 static void vduse_blk_inflight_dec(VduseBlkExport *vblk_exp)
 {
-    if (--vblk_exp->inflight == 0) {
+    if (qatomic_fetch_dec(&vblk_exp->inflight) == 1) {
+        /* Wake AIO_WAIT_WHILE() */
         aio_wait_kick();
+
+        /* Now the export can be deleted */
+        aio_context_acquire(vblk_exp->export.ctx);
+        blk_exp_unref(&vblk_exp->export);
+        aio_context_release(vblk_exp->export.ctx);
     }
 }
 
@@ -124,8 +136,12 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
 
+    if (!vblk_exp->vqs_started) {
+        return; /* vduse_blk_drained_end() will start vqs later */
+    }
+
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -133,9 +149,14 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
 static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
 {
     VduseBlkExport *vblk_exp = vduse_dev_get_priv(dev);
+    int fd = vduse_queue_get_fd(vq);
 
-    aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       true, NULL, NULL, NULL, NULL, NULL);
+    if (fd < 0) {
+        return;
+    }
+
+    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+                       NULL, NULL, NULL, NULL, NULL);
 }
 
 static const VduseOps vduse_blk_ops = {
@@ -152,42 +173,19 @@ static void on_vduse_dev_kick(void *opaque)
 
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
-    int i;
-
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, on_vduse_dev_kick, NULL, NULL, NULL,
+                       false, on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd, true,
-                           on_vduse_vq_kick, NULL, NULL, NULL, vq);
-    }
+    /* Virtqueues are handled by vduse_blk_drained_end() */
 }
 
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
-    int i;
-
-    for (i = 0; i < vblk_exp->num_queues; i++) {
-        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
-        int fd = vduse_queue_get_fd(vq);
-
-        if (fd < 0) {
-            continue;
-        }
-        aio_set_fd_handler(vblk_exp->export.ctx, fd,
-                           true, NULL, NULL, NULL, NULL, NULL);
-    }
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       true, NULL, NULL, NULL, NULL, NULL);
+                       false, NULL, NULL, NULL, NULL, NULL);
 
-    AIO_WAIT_WHILE(vblk_exp->export.ctx, vblk_exp->inflight > 0);
+    /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
 
 
@@ -220,8 +218,55 @@ static void vduse_blk_resize(void *opaque)
                             (char *)&config.capacity);
 }
 
+static void vduse_blk_stop_virtqueues(VduseBlkExport *vblk_exp)
+{
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_disable_queue(vblk_exp->dev, vq);
+    }
+
+    vblk_exp->vqs_started = false;
+}
+
+static void vduse_blk_start_virtqueues(VduseBlkExport *vblk_exp)
+{
+    vblk_exp->vqs_started = true;
+
+    for (uint16_t i = 0; i < vblk_exp->num_queues; i++) {
+        VduseVirtq *vq = vduse_dev_get_queue(vblk_exp->dev, i);
+        vduse_blk_enable_queue(vblk_exp->dev, vq);
+    }
+}
+
+static void vduse_blk_drained_begin(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_stop_virtqueues(vblk_exp);
+}
+
+static void vduse_blk_drained_end(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    vduse_blk_start_virtqueues(vblk_exp);
+}
+
+static bool vduse_blk_drained_poll(void *opaque)
+{
+    BlockExport *exp = opaque;
+    VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
+
+    return qatomic_read(&vblk_exp->inflight) > 0;
+}
+
 static const BlockDevOps vduse_block_ops = {
-    .resize_cb = vduse_blk_resize,
+    .resize_cb     = vduse_blk_resize,
+    .drained_begin = vduse_blk_drained_begin,
+    .drained_end   = vduse_blk_drained_end,
+    .drained_poll  = vduse_blk_drained_poll,
 };
 
 static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
@@ -268,6 +313,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
     vblk_exp->handler.serial = g_strdup(vblk_opts->serial ?: "");
     vblk_exp->handler.logical_block_size = logical_block_size;
     vblk_exp->handler.writable = opts->writable;
+    vblk_exp->vqs_started = true;
 
     config.capacity =
             cpu_to_le64(blk_getlength(exp->blk) >> VIRTIO_BLK_SECTOR_BITS);
@@ -322,14 +368,20 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), true,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                  vblk_exp);
-
     blk_set_dev_ops(exp->blk, &vduse_block_ops, exp);
 
+    /*
+     * We handle draining ourselves using an in-flight counter and by disabling
+     * virtqueue fd handlers. Do not queue BlockBackend requests, they need to
+     * complete so the in-flight counter reaches zero.
+     */
+    blk_set_disable_request_queuing(exp->blk, true);
+
     return 0;
 err:
     vduse_dev_destroy(vblk_exp->dev);
@@ -344,6 +396,9 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
     int ret;
 
+    assert(qatomic_read(&vblk_exp->inflight) == 0);
+
+    vduse_blk_detach_ctx(vblk_exp);
     blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
                                     vblk_exp);
     ret = vduse_dev_destroy(vblk_exp->dev);
@@ -354,13 +409,12 @@ static void vduse_blk_exp_delete(BlockExport *exp)
     g_free(vblk_exp->handler.serial);
 }
 
+/* Called with exp->ctx acquired */
 static void vduse_blk_exp_request_shutdown(BlockExport *exp)
 {
     VduseBlkExport *vblk_exp = container_of(exp, VduseBlkExport, export);
 
-    aio_context_acquire(vblk_exp->export.ctx);
-    vduse_blk_detach_ctx(vblk_exp);
-    aio_context_acquire(vblk_exp->export.ctx);
+    vduse_blk_stop_virtqueues(vblk_exp);
 }
 
 const BlockExportDriver blk_exp_vduse_blk = {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:07:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:07:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535500.833312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01L-0001l6-Tl; Tue, 16 May 2023 19:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535500.833312; Tue, 16 May 2023 19:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01L-0001kE-Oy; Tue, 16 May 2023 19:07:39 +0000
Received: by outflank-mailman (input) for mailman id 535500;
 Tue, 16 May 2023 19:07:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxk-0002US-Dh
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:56 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66ac7e65-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:54 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-435-W01fqy6HNQuowtE4IgRM3Q-1; Tue, 16 May 2023 15:03:50 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 202F1857E2F;
 Tue, 16 May 2023 19:03:49 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 7940914171C0;
 Tue, 16 May 2023 19:03:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66ac7e65-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263833;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iUB4CtVWakrcdVymTz7VqlzxL/U+8qst0VTlhhAvnQA=;
	b=cgLwvK4AUl5PDtN0RqtDgrJfVPX2ldaH1Qtq2a8Nml+ZCZOxvezlHo7+oEQ+lTqEyc9dU6
	EknW0mbEZKaHlRCTMO2h1KlmY+aM1oofsjcGb7WgVGy0QgUm050CmEao9+88he2XFpwlCR
	TF/u1YTYS/VHaSUl68z95S+h1+Xe84k=
X-MC-Unique: W01fqy6HNQuowtE4IgRM3Q-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 19/20] virtio: do not set is_external=true on host notifiers
Date: Tue, 16 May 2023 15:02:37 -0400
Message-Id: <20230516190238.8401-20-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

Host notifiers can now use is_external=false since virtio-blk and
virtio-scsi no longer rely on is_external=true for drained sections.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/virtio/virtio.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index cb09cb6464..08011be8dc 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true,
+    aio_set_event_notifier(ctx, &vq->host_notifier, false,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
 }
 
 void virtio_queue_host_notifier_read(EventNotifier *n)
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:07:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:07:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535502.833319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01M-0001qV-8p; Tue, 16 May 2023 19:07:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535502.833319; Tue, 16 May 2023 19:07:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01M-0001om-2q; Tue, 16 May 2023 19:07:40 +0000
Received: by outflank-mailman (input) for mailman id 535502;
 Tue, 16 May 2023 19:07:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxz-0002eu-T4
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:04:12 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6f7a7686-f41c-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:04:09 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-173-mkX37JlNPUiK7-1Pr1tiNg-1; Tue, 16 May 2023 15:04:05 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2C4F31854CB0;
 Tue, 16 May 2023 19:04:04 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 273B940C206F;
 Tue, 16 May 2023 19:03:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f7a7686-f41c-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263848;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YuCVYB1Gj9muOfvkax6ajVRgf7QG1FNZ1FoH2bz7VeY=;
	b=OQOx5scNjZuip71pFCl2MS2n5vSZUmKS1jfx84ybk7vKeKGPkaLoE85R9YA9roERrSgE2s
	CY95dT/pffGo4FQPS/7tWgw4Lv9KKLlBE8M/xXzgt9D6GU4V+3KNATTow8jsh2WuBvczfz
	FVyDHejMty/Ho8EranzICYrsHli9GzI=
X-MC-Unique: mkX37JlNPUiK7-1Pr1tiNg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 20/20] aio: remove aio_disable_external() API
Date: Tue, 16 May 2023 15:02:38 -0400
Message-Id: <20230516190238.8401-21-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

All callers now pass is_external=false to aio_set_fd_handler() and
aio_set_event_notifier(). The aio_disable_external() API, which
temporarily disables fd handlers that were registered with
is_external=true, is therefore dead code.

Remove aio_disable_external(), aio_enable_external(), and the
is_external arguments to aio_set_fd_handler() and
aio_set_event_notifier().

The entire test-fdmon-epoll test is removed because its sole purpose was
testing aio_disable_external().

Parts of this patch were generated using the following coccinelle
(https://coccinelle.lip6.fr/) semantic patch:

  @@
  expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
  @@
  - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
  + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)

  @@
  expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
  @@
  - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
  + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
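For reference, a semantic patch like this can be applied tree-wide with the
spatch tool from coccinelle. The file name below is illustrative, and the
exact spatch options may vary between coccinelle versions; this is a sketch
of the workflow, not a record of the command actually used:

```shell
# Write the first rule to a .cocci file (name is arbitrary).
cat > remove-is-external.cocci <<'EOF'
@@
expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
@@
- aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
+ aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
EOF

# Apply it in place across the source tree (requires coccinelle installed):
# spatch --sp-file remove-is-external.cocci --dir . --in-place
```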

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/aio.h           | 57 ---------------------------
 util/aio-posix.h              |  1 -
 block.c                       |  7 ----
 block/blkio.c                 | 15 +++----
 block/curl.c                  | 10 ++---
 block/export/fuse.c           |  8 ++--
 block/export/vduse-blk.c      | 10 ++---
 block/io.c                    |  2 -
 block/io_uring.c              |  4 +-
 block/iscsi.c                 |  3 +-
 block/linux-aio.c             |  4 +-
 block/nfs.c                   |  5 +--
 block/nvme.c                  |  8 ++--
 block/ssh.c                   |  4 +-
 block/win32-aio.c             |  6 +--
 hw/i386/kvm/xen_xenstore.c    |  2 +-
 hw/virtio/virtio.c            |  6 +--
 hw/xen/xen-bus.c              |  8 ++--
 io/channel-command.c          |  6 +--
 io/channel-file.c             |  3 +-
 io/channel-socket.c           |  3 +-
 migration/rdma.c              | 16 ++++----
 tests/unit/test-aio.c         | 27 +------------
 tests/unit/test-bdrv-drain.c  |  1 -
 tests/unit/test-fdmon-epoll.c | 73 -----------------------------------
 util/aio-posix.c              | 20 +++-------
 util/aio-win32.c              |  8 +---
 util/async.c                  |  3 +-
 util/fdmon-epoll.c            | 10 -----
 util/fdmon-io_uring.c         |  8 +---
 util/fdmon-poll.c             |  3 +-
 util/main-loop.c              |  7 ++--
 util/qemu-coroutine-io.c      |  7 ++--
 util/vhost-user-server.c      | 11 +++---
 tests/unit/meson.build        |  3 --
 35 files changed, 76 insertions(+), 293 deletions(-)
 delete mode 100644 tests/unit/test-fdmon-epoll.c

diff --git a/include/block/aio.h b/include/block/aio.h
index 89bbc536f9..32042e8905 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -225,8 +225,6 @@ struct AioContext {
      */
     QEMUTimerListGroup tlg;
 
-    int external_disable_cnt;
-
     /* Number of AioHandlers without .io_poll() */
     int poll_disable_cnt;
 
@@ -481,7 +479,6 @@ bool aio_poll(AioContext *ctx, bool blocking);
  */
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -497,7 +494,6 @@ void aio_set_fd_handler(AioContext *ctx,
  */
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready);
@@ -626,59 +622,6 @@ static inline void aio_timer_init(AioContext *ctx,
  */
 int64_t aio_compute_timeout(AioContext *ctx);
 
-/**
- * aio_disable_external:
- * @ctx: the aio context
- *
- * Disable the further processing of external clients.
- */
-static inline void aio_disable_external(AioContext *ctx)
-{
-    qatomic_inc(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_enable_external:
- * @ctx: the aio context
- *
- * Enable the processing of external clients.
- */
-static inline void aio_enable_external(AioContext *ctx)
-{
-    int old;
-
-    old = qatomic_fetch_dec(&ctx->external_disable_cnt);
-    assert(old > 0);
-    if (old == 1) {
-        /* Kick event loop so it re-arms file descriptors */
-        aio_notify(ctx);
-    }
-}
-
-/**
- * aio_external_disabled:
- * @ctx: the aio context
- *
- * Return true if the external clients are disabled.
- */
-static inline bool aio_external_disabled(AioContext *ctx)
-{
-    return qatomic_read(&ctx->external_disable_cnt);
-}
-
-/**
- * aio_node_check:
- * @ctx: the aio context
- * @is_external: Whether or not the checked node is an external event source.
- *
- * Check if the node's is_external flag is okay to be polled by the ctx at this
- * moment. True means green light.
- */
-static inline bool aio_node_check(AioContext *ctx, bool is_external)
-{
-    return !is_external || !qatomic_read(&ctx->external_disable_cnt);
-}
-
 /**
  * aio_co_schedule:
  * @ctx: the aio context
diff --git a/util/aio-posix.h b/util/aio-posix.h
index 80b927c7f4..4264c518be 100644
--- a/util/aio-posix.h
+++ b/util/aio-posix.h
@@ -38,7 +38,6 @@ struct AioHandler {
 #endif
     int64_t poll_idle_timeout; /* when to stop userspace polling */
     bool poll_ready; /* has polling detected an event? */
-    bool is_external;
 };
 
 /* Add a handler to a ready list */
diff --git a/block.c b/block.c
index f04a6ad4e8..6ef2063af0 100644
--- a/block.c
+++ b/block.c
@@ -7283,9 +7283,6 @@ static void bdrv_detach_aio_context(BlockDriverState *bs)
         bs->drv->bdrv_detach_aio_context(bs);
     }
 
-    if (bs->quiesce_counter) {
-        aio_enable_external(bs->aio_context);
-    }
     bs->aio_context = NULL;
 }
 
@@ -7295,10 +7292,6 @@ static void bdrv_attach_aio_context(BlockDriverState *bs,
     BdrvAioNotifier *ban, *ban_tmp;
     GLOBAL_STATE_CODE();
 
-    if (bs->quiesce_counter) {
-        aio_disable_external(new_context);
-    }
-
     bs->aio_context = new_context;
 
     if (bs->drv && bs->drv->bdrv_attach_aio_context) {
diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..72117fa005 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -306,23 +306,18 @@ static void blkio_attach_aio_context(BlockDriverState *bs,
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(new_context,
-                       s->completion_fd,
-                       false,
-                       blkio_completion_fd_read,
-                       NULL,
+    aio_set_fd_handler(new_context, s->completion_fd,
+                       blkio_completion_fd_read, NULL,
                        blkio_completion_fd_poll,
-                       blkio_completion_fd_poll_ready,
-                       bs);
+                       blkio_completion_fd_poll_ready, bs);
 }
 
 static void blkio_detach_aio_context(BlockDriverState *bs)
 {
     BDRVBlkioState *s = bs->opaque;
 
-    aio_set_fd_handler(bdrv_get_aio_context(bs),
-                       s->completion_fd,
-                       false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(bdrv_get_aio_context(bs), s->completion_fd, NULL, NULL,
+                       NULL, NULL, NULL);
 }
 
 /* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
diff --git a/block/curl.c b/block/curl.c
index 8bb39a134e..0fc42d03d7 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -132,7 +132,7 @@ static gboolean curl_drop_socket(void *key, void *value, void *opaque)
     CURLSocket *socket = value;
     BDRVCURLState *s = socket->s;
 
-    aio_set_fd_handler(s->aio_context, socket->fd, false,
+    aio_set_fd_handler(s->aio_context, socket->fd,
                        NULL, NULL, NULL, NULL, NULL);
     return true;
 }
@@ -180,20 +180,20 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     trace_curl_sock_cb(action, (int)fd);
     switch (action) {
         case CURL_POLL_IN:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, NULL, NULL, NULL, socket);
             break;
         case CURL_POLL_OUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, curl_multi_do, NULL, NULL, socket);
             break;
         case CURL_POLL_INOUT:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                curl_multi_do, curl_multi_do,
                                NULL, NULL, socket);
             break;
         case CURL_POLL_REMOVE:
-            aio_set_fd_handler(s->aio_context, fd, false,
+            aio_set_fd_handler(s->aio_context, fd,
                                NULL, NULL, NULL, NULL, NULL);
             break;
     }
diff --git a/block/export/fuse.c b/block/export/fuse.c
index adf3236b5a..3307b64089 100644
--- a/block/export/fuse.c
+++ b/block/export/fuse.c
@@ -84,7 +84,7 @@ static void fuse_export_drained_begin(void *opaque)
     FuseExport *exp = opaque;
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        NULL, NULL, NULL, NULL, NULL);
     exp->fd_handler_set_up = false;
 }
@@ -97,7 +97,7 @@ static void fuse_export_drained_end(void *opaque)
     exp->common.ctx = blk_get_aio_context(exp->common.blk);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 }
@@ -270,7 +270,7 @@ static int setup_fuse_export(FuseExport *exp, const char *mountpoint,
     g_hash_table_insert(exports, g_strdup(mountpoint), NULL);
 
     aio_set_fd_handler(exp->common.ctx,
-                       fuse_session_fd(exp->fuse_session), false,
+                       fuse_session_fd(exp->fuse_session),
                        read_from_fuse_export, NULL, NULL, NULL, exp);
     exp->fd_handler_set_up = true;
 
@@ -320,7 +320,7 @@ static void fuse_export_shutdown(BlockExport *blk_exp)
 
         if (exp->fd_handler_set_up) {
             aio_set_fd_handler(exp->common.ctx,
-                               fuse_session_fd(exp->fuse_session), false,
+                               fuse_session_fd(exp->fuse_session),
                                NULL, NULL, NULL, NULL, NULL);
             exp->fd_handler_set_up = false;
         }
diff --git a/block/export/vduse-blk.c b/block/export/vduse-blk.c
index e0455551f9..83b05548e7 100644
--- a/block/export/vduse-blk.c
+++ b/block/export/vduse-blk.c
@@ -137,7 +137,7 @@ static void vduse_blk_enable_queue(VduseDev *dev, VduseVirtq *vq)
     }
 
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_queue_get_fd(vq),
-                       false, on_vduse_vq_kick, NULL, NULL, NULL, vq);
+                       on_vduse_vq_kick, NULL, NULL, NULL, vq);
     /* Make sure we don't miss any kick afer reconnecting */
     eventfd_write(vduse_queue_get_fd(vq), 1);
 }
@@ -151,7 +151,7 @@ static void vduse_blk_disable_queue(VduseDev *dev, VduseVirtq *vq)
         return;
     }
 
-    aio_set_fd_handler(vblk_exp->export.ctx, fd, false,
+    aio_set_fd_handler(vblk_exp->export.ctx, fd,
                        NULL, NULL, NULL, NULL, NULL);
 }
 
@@ -170,7 +170,7 @@ static void on_vduse_dev_kick(void *opaque)
 static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, on_vduse_dev_kick, NULL, NULL, NULL,
+                       on_vduse_dev_kick, NULL, NULL, NULL,
                        vblk_exp->dev);
 
     /* Virtqueues are handled by vduse_blk_drained_end() */
@@ -179,7 +179,7 @@ static void vduse_blk_attach_ctx(VduseBlkExport *vblk_exp, AioContext *ctx)
 static void vduse_blk_detach_ctx(VduseBlkExport *vblk_exp)
 {
     aio_set_fd_handler(vblk_exp->export.ctx, vduse_dev_get_fd(vblk_exp->dev),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
 
     /* Virtqueues are handled by vduse_blk_drained_begin() */
 }
@@ -364,7 +364,7 @@ static int vduse_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
         vduse_dev_setup_queue(vblk_exp->dev, i, queue_size);
     }
 
-    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev), false,
+    aio_set_fd_handler(exp->ctx, vduse_dev_get_fd(vblk_exp->dev),
                        on_vduse_dev_kick, NULL, NULL, NULL, vblk_exp->dev);
 
     blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
diff --git a/block/io.c b/block/io.c
index fece938fd0..540bf8d26d 100644
--- a/block/io.c
+++ b/block/io.c
@@ -362,7 +362,6 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
 
     /* Stop things in parent-to-child order */
     if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
-        aio_disable_external(bdrv_get_aio_context(bs));
         bdrv_parent_drained_begin(bs, parent);
         if (bs->drv && bs->drv->bdrv_drain_begin) {
             bs->drv->bdrv_drain_begin(bs);
@@ -418,7 +417,6 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
             bs->drv->bdrv_drain_end(bs);
         }
         bdrv_parent_drained_end(bs, parent);
-        aio_enable_external(bdrv_get_aio_context(bs));
     }
 }
 
diff --git a/block/io_uring.c b/block/io_uring.c
index 82cab6a5bd..3a77480e16 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -410,7 +410,7 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
 
 void luring_detach_aio_context(LuringState *s, AioContext *old_context)
 {
-    aio_set_fd_handler(old_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(old_context, s->ring.ring_fd,
                        NULL, NULL, NULL, NULL, s);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
@@ -420,7 +420,7 @@ void luring_attach_aio_context(LuringState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_luring_completion_bh, s);
-    aio_set_fd_handler(s->aio_context, s->ring.ring_fd, false,
+    aio_set_fd_handler(s->aio_context, s->ring.ring_fd,
                        qemu_luring_completion_cb, NULL,
                        qemu_luring_poll_cb, qemu_luring_poll_ready, s);
 }
diff --git a/block/iscsi.c b/block/iscsi.c
index 9fc0bed90b..34f97ab646 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -363,7 +363,6 @@ iscsi_set_events(IscsiLun *iscsilun)
 
     if (ev != iscsilun->events) {
         aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsi),
-                           false,
                            (ev & POLLIN) ? iscsi_process_read : NULL,
                            (ev & POLLOUT) ? iscsi_process_write : NULL,
                            NULL, NULL,
@@ -1540,7 +1539,7 @@ static void iscsi_detach_aio_context(BlockDriverState *bs)
     IscsiLun *iscsilun = bs->opaque;
 
     aio_set_fd_handler(iscsilun->aio_context, iscsi_get_fd(iscsilun->iscsi),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     iscsilun->events = 0;
 
     if (iscsilun->nop_timer) {
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 442c86209b..916f001e32 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -446,7 +446,7 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
 
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &s->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &s->e, NULL, NULL, NULL);
     qemu_bh_delete(s->completion_bh);
     s->aio_context = NULL;
 }
@@ -455,7 +455,7 @@ void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context)
 {
     s->aio_context = new_context;
     s->completion_bh = aio_bh_new(new_context, qemu_laio_completion_bh, s);
-    aio_set_event_notifier(new_context, &s->e, false,
+    aio_set_event_notifier(new_context, &s->e,
                            qemu_laio_completion_cb,
                            qemu_laio_poll_cb,
                            qemu_laio_poll_ready);
diff --git a/block/nfs.c b/block/nfs.c
index 006045d71a..8f89ece69f 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -195,7 +195,6 @@ static void nfs_set_events(NFSClient *client)
     int ev = nfs_which_events(client->context);
     if (ev != client->events) {
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false,
                            (ev & POLLIN) ? nfs_process_read : NULL,
                            (ev & POLLOUT) ? nfs_process_write : NULL,
                            NULL, NULL, client);
@@ -373,7 +372,7 @@ static void nfs_detach_aio_context(BlockDriverState *bs)
     NFSClient *client = bs->opaque;
 
     aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                       false, NULL, NULL, NULL, NULL, NULL);
+                       NULL, NULL, NULL, NULL, NULL);
     client->events = 0;
 }
 
@@ -391,7 +390,7 @@ static void nfs_client_close(NFSClient *client)
     if (client->context) {
         qemu_mutex_lock(&client->mutex);
         aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
-                           false, NULL, NULL, NULL, NULL, NULL);
+                           NULL, NULL, NULL, NULL, NULL);
         qemu_mutex_unlock(&client->mutex);
         if (client->fh) {
             nfs_close(client->context, client->fh);
diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..17937d398d 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -862,7 +862,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     }
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     if (!nvme_identify(bs, namespace, errp)) {
@@ -948,7 +948,7 @@ static void nvme_close(BlockDriverState *bs)
     g_free(s->queues);
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
     event_notifier_cleanup(&s->irq_notifier[MSIX_SHARED_IRQ_IDX]);
     qemu_vfio_pci_unmap_bar(s->vfio, 0, s->bar0_wo_map,
                             0, sizeof(NvmeBar) + NVME_DOORBELL_SIZE);
@@ -1546,7 +1546,7 @@ static void nvme_detach_aio_context(BlockDriverState *bs)
 
     aio_set_event_notifier(bdrv_get_aio_context(bs),
                            &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, NULL, NULL, NULL);
+                           NULL, NULL, NULL);
 }
 
 static void nvme_attach_aio_context(BlockDriverState *bs,
@@ -1556,7 +1556,7 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
 
     s->aio_context = new_context;
     aio_set_event_notifier(new_context, &s->irq_notifier[MSIX_SHARED_IRQ_IDX],
-                           false, nvme_handle_event, nvme_poll_cb,
+                           nvme_handle_event, nvme_poll_cb,
                            nvme_poll_ready);
 
     for (unsigned i = 0; i < s->queue_count; i++) {
diff --git a/block/ssh.c b/block/ssh.c
index b3b3352075..2748253d4a 100644
--- a/block/ssh.c
+++ b/block/ssh.c
@@ -1019,7 +1019,7 @@ static void restart_coroutine(void *opaque)
     AioContext *ctx = bdrv_get_aio_context(bs);
 
     trace_ssh_restart_coroutine(restart->co);
-    aio_set_fd_handler(ctx, s->sock, false, NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(ctx, s->sock, NULL, NULL, NULL, NULL, NULL);
 
     aio_co_wake(restart->co);
 }
@@ -1049,7 +1049,7 @@ static coroutine_fn void co_yield(BDRVSSHState *s, BlockDriverState *bs)
     trace_ssh_co_yield(s->sock, rd_handler, wr_handler);
 
     aio_set_fd_handler(bdrv_get_aio_context(bs), s->sock,
-                       false, rd_handler, wr_handler, NULL, NULL, &restart);
+                       rd_handler, wr_handler, NULL, NULL, &restart);
     qemu_coroutine_yield();
     trace_ssh_co_yield_back(s->sock);
 }
diff --git a/block/win32-aio.c b/block/win32-aio.c
index ee87d6048f..6327861e1d 100644
--- a/block/win32-aio.c
+++ b/block/win32-aio.c
@@ -174,7 +174,7 @@ int win32_aio_attach(QEMUWin32AIOState *aio, HANDLE hfile)
 void win32_aio_detach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *old_context)
 {
-    aio_set_event_notifier(old_context, &aio->e, false, NULL, NULL, NULL);
+    aio_set_event_notifier(old_context, &aio->e, NULL, NULL, NULL);
     aio->aio_ctx = NULL;
 }
 
@@ -182,8 +182,8 @@ void win32_aio_attach_aio_context(QEMUWin32AIOState *aio,
                                   AioContext *new_context)
 {
     aio->aio_ctx = new_context;
-    aio_set_event_notifier(new_context, &aio->e, false,
-                           win32_aio_completion_cb, NULL, NULL);
+    aio_set_event_notifier(new_context, &aio->e, win32_aio_completion_cb,
+                           NULL, NULL);
 }
 
 QEMUWin32AIOState *win32_aio_init(void)
diff --git a/hw/i386/kvm/xen_xenstore.c b/hw/i386/kvm/xen_xenstore.c
index 6e81bc8791..0b189c6ab8 100644
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -133,7 +133,7 @@ static void xen_xenstore_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "Xenstore evtchn port init failed");
         return;
     }
-    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh), false,
+    aio_set_fd_handler(qemu_get_aio_context(), xen_be_evtchn_fd(s->eh),
                        xen_xenstore_event, NULL, NULL, NULL, s);
 
     s->impl = xs_impl_create(xen_domid);
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 08011be8dc..295a603e58 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3491,7 +3491,7 @@ static void virtio_queue_host_notifier_aio_poll_end(EventNotifier *n)
 
 void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            virtio_queue_host_notifier_aio_poll,
                            virtio_queue_host_notifier_aio_poll_ready);
@@ -3508,14 +3508,14 @@ void virtio_queue_aio_attach_host_notifier(VirtQueue *vq, AioContext *ctx)
  */
 void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false,
+    aio_set_event_notifier(ctx, &vq->host_notifier,
                            virtio_queue_host_notifier_read,
                            NULL, NULL);
 }
 
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
-    aio_set_event_notifier(ctx, &vq->host_notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &vq->host_notifier, NULL, NULL, NULL);
 }
 
 void virtio_queue_host_notifier_read(EventNotifier *n)
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index bf256d4da2..1e08cf027a 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           false, xen_device_event, NULL, xen_device_poll,
-                           NULL, channel);
+                           xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
diff --git a/io/channel-command.c b/io/channel-command.c
index e7edd091af..7ed726c802 100644
--- a/io/channel-command.c
+++ b/io/channel-command.c
@@ -337,10 +337,8 @@ static void qio_channel_command_set_aio_fd_handler(QIOChannel *ioc,
                                                    void *opaque)
 {
     QIOChannelCommand *cioc = QIO_CHANNEL_COMMAND(ioc);
-    aio_set_fd_handler(ctx, cioc->readfd, false,
-                       io_read, NULL, NULL, NULL, opaque);
-    aio_set_fd_handler(ctx, cioc->writefd, false,
-                       NULL, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->readfd, io_read, NULL, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, cioc->writefd, NULL, io_write, NULL, NULL, opaque);
 }
 
 
diff --git a/io/channel-file.c b/io/channel-file.c
index d76663e6ae..8b5821f452 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -198,8 +198,7 @@ static void qio_channel_file_set_aio_fd_handler(QIOChannel *ioc,
                                                 void *opaque)
 {
     QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
-    aio_set_fd_handler(ctx, fioc->fd, false, io_read, io_write,
-                       NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, fioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_file_create_watch(QIOChannel *ioc,
diff --git a/io/channel-socket.c b/io/channel-socket.c
index b0ea7d48b3..d99945ebec 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -899,8 +899,7 @@ static void qio_channel_socket_set_aio_fd_handler(QIOChannel *ioc,
                                                   void *opaque)
 {
     QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
-    aio_set_fd_handler(ctx, sioc->fd, false,
-                       io_read, io_write, NULL, NULL, opaque);
+    aio_set_fd_handler(ctx, sioc->fd, io_read, io_write, NULL, NULL, opaque);
 }
 
 static GSource *qio_channel_socket_create_watch(QIOChannel *ioc,
diff --git a/migration/rdma.c b/migration/rdma.c
index 2cd8f1cc66..d78a94f1ec 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3110,15 +3110,15 @@ static void qio_channel_rdma_set_aio_fd_handler(QIOChannel *ioc,
 {
     QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
     if (io_read) {
-        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     } else {
-        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd,
-                           false, io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_read,
+                           io_write, NULL, NULL, opaque);
     }
 }
 
diff --git a/tests/unit/test-aio.c b/tests/unit/test-aio.c
index 321d7ab01a..519440eed3 100644
--- a/tests/unit/test-aio.c
+++ b/tests/unit/test-aio.c
@@ -130,7 +130,7 @@ static void *test_acquire_thread(void *opaque)
 static void set_event_notifier(AioContext *ctx, EventNotifier *notifier,
                                EventNotifierHandler *handler)
 {
-    aio_set_event_notifier(ctx, notifier, false, handler, NULL, NULL);
+    aio_set_event_notifier(ctx, notifier, handler, NULL, NULL);
 }
 
 static void dummy_notifier_read(EventNotifier *n)
@@ -383,30 +383,6 @@ static void test_flush_event_notifier(void)
     event_notifier_cleanup(&data.e);
 }
 
-static void test_aio_external_client(void)
-{
-    int i, j;
-
-    for (i = 1; i < 3; i++) {
-        EventNotifierTestData data = { .n = 0, .active = 10, .auto_set = true };
-        event_notifier_init(&data.e, false);
-        aio_set_event_notifier(ctx, &data.e, true, event_ready_cb, NULL, NULL);
-        event_notifier_set(&data.e);
-        for (j = 0; j < i; j++) {
-            aio_disable_external(ctx);
-        }
-        for (j = 0; j < i; j++) {
-            assert(!aio_poll(ctx, false));
-            assert(event_notifier_test_and_clear(&data.e));
-            event_notifier_set(&data.e);
-            aio_enable_external(ctx);
-        }
-        assert(aio_poll(ctx, false));
-        set_event_notifier(ctx, &data.e, NULL);
-        event_notifier_cleanup(&data.e);
-    }
-}
-
 static void test_wait_event_notifier_noflush(void)
 {
     EventNotifierTestData data = { .n = 0 };
@@ -935,7 +911,6 @@ int main(int argc, char **argv)
     g_test_add_func("/aio/event/wait",              test_wait_event_notifier);
     g_test_add_func("/aio/event/wait/no-flush-cb",  test_wait_event_notifier_noflush);
     g_test_add_func("/aio/event/flush",             test_flush_event_notifier);
-    g_test_add_func("/aio/external-client",         test_aio_external_client);
     g_test_add_func("/aio/timer/schedule",          test_timer_schedule);
 
     g_test_add_func("/aio/coroutine/queue-chaining", test_queue_chaining);
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 6ad9964f03..60622f54a5 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -473,7 +473,6 @@ static void test_graph_change_drain_all(void)
 
     g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
     g_assert_cmpint(b_s->drain_count, ==, 0);
-    g_assert_cmpint(qemu_get_aio_context()->external_disable_cnt, ==, 0);
 
     bdrv_unref(bs_b);
     blk_unref(blk_b);
diff --git a/tests/unit/test-fdmon-epoll.c b/tests/unit/test-fdmon-epoll.c
deleted file mode 100644
index ef5a856d09..0000000000
--- a/tests/unit/test-fdmon-epoll.c
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * fdmon-epoll tests
- *
- * Copyright (c) 2020 Red Hat, Inc.
- */
-
-#include "qemu/osdep.h"
-#include "block/aio.h"
-#include "qapi/error.h"
-#include "qemu/main-loop.h"
-
-static AioContext *ctx;
-
-static void dummy_fd_handler(EventNotifier *notifier)
-{
-    event_notifier_test_and_clear(notifier);
-}
-
-static void add_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        event_notifier_init(&notifiers[i], false);
-        aio_set_event_notifier(ctx, &notifiers[i], false,
-                               dummy_fd_handler, NULL, NULL);
-    }
-}
-
-static void remove_event_notifiers(EventNotifier *notifiers, size_t n)
-{
-    for (size_t i = 0; i < n; i++) {
-        aio_set_event_notifier(ctx, &notifiers[i], false, NULL, NULL, NULL);
-        event_notifier_cleanup(&notifiers[i]);
-    }
-}
-
-/* Check that fd handlers work when external clients are disabled */
-static void test_external_disabled(void)
-{
-    EventNotifier notifiers[100];
-
-    /* fdmon-epoll is only enabled when many fd handlers are registered */
-    add_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-
-    aio_disable_external(ctx);
-    event_notifier_set(&notifiers[0]);
-    assert(aio_poll(ctx, true));
-    aio_enable_external(ctx);
-
-    remove_event_notifiers(notifiers, G_N_ELEMENTS(notifiers));
-}
-
-int main(int argc, char **argv)
-{
-    /*
-     * This code relies on the fact that fdmon-io_uring disables itself when
-     * the glib main loop is in use. The main loop uses fdmon-poll and upgrades
-     * to fdmon-epoll when the number of fds exceeds a threshold.
-     */
-    qemu_init_main_loop(&error_fatal);
-    ctx = qemu_get_aio_context();
-
-    while (g_main_context_iteration(NULL, false)) {
-        /* Do nothing */
-    }
-
-    g_test_init(&argc, &argv, NULL);
-    g_test_add_func("/fdmon-epoll/external-disabled", test_external_disabled);
-    return g_test_run();
-}
diff --git a/util/aio-posix.c b/util/aio-posix.c
index a8be940f76..934b1bbb85 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -99,7 +99,6 @@ static bool aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -144,7 +143,6 @@ void aio_set_fd_handler(AioContext *ctx,
         new_node->io_poll = io_poll;
         new_node->io_poll_ready = io_poll_ready;
         new_node->opaque = opaque;
-        new_node->is_external = is_external;
 
         if (is_new) {
             new_node->pfd.fd = fd;
@@ -196,12 +194,11 @@ static void aio_set_fd_poll(AioContext *ctx, int fd,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            bool is_external,
                             EventNotifierHandler *io_read,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
 {
-    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier), is_external,
+    aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
                        (IOHandler *)io_read, NULL, io_poll,
                        (IOHandler *)io_poll_ready, notifier);
 }
@@ -285,13 +282,11 @@ bool aio_pending(AioContext *ctx)
 
         /* TODO should this check poll ready? */
         revents = node->pfd.revents & node->pfd.events;
-        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
             result = true;
             break;
         }
-        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
-            aio_node_check(ctx, node->is_external)) {
+        if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
             result = true;
             break;
         }
@@ -350,9 +345,7 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
         QLIST_INSERT_HEAD(&ctx->poll_aio_handlers, node, node_poll);
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
-        poll_ready && revents == 0 &&
-        aio_node_check(ctx, node->is_external) &&
-        node->io_poll_ready) {
+        poll_ready && revents == 0 && node->io_poll_ready) {
         node->io_poll_ready(node->opaque);
 
         /*
@@ -364,7 +357,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
 
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_read) {
         node->io_read(node->opaque);
 
@@ -375,7 +367,6 @@ static bool aio_dispatch_handler(AioContext *ctx, AioHandler *node)
     }
     if (!QLIST_IS_INSERTED(node, node_deleted) &&
         (revents & (G_IO_OUT | G_IO_ERR)) &&
-        aio_node_check(ctx, node->is_external) &&
         node->io_write) {
         node->io_write(node->opaque);
         progress = true;
@@ -436,8 +427,7 @@ static bool run_poll_handlers_once(AioContext *ctx,
     AioHandler *tmp;
 
     QLIST_FOREACH_SAFE(node, &ctx->poll_aio_handlers, node_poll, tmp) {
-        if (aio_node_check(ctx, node->is_external) &&
-            node->io_poll(node->opaque)) {
+        if (node->io_poll(node->opaque)) {
             aio_add_poll_ready_handler(ready_list, node);
 
             node->poll_idle_timeout = now + POLL_IDLE_INTERVAL_NS;
diff --git a/util/aio-win32.c b/util/aio-win32.c
index 6bded009a4..948ef47a4d 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -32,7 +32,6 @@ struct AioHandler {
     GPollFD pfd;
     int deleted;
     void *opaque;
-    bool is_external;
     QLIST_ENTRY(AioHandler) node;
 };
 
@@ -64,7 +63,6 @@ static void aio_remove_fd_handler(AioContext *ctx, AioHandler *node)
 
 void aio_set_fd_handler(AioContext *ctx,
                         int fd,
-                        bool is_external,
                         IOHandler *io_read,
                         IOHandler *io_write,
                         AioPollFn *io_poll,
@@ -111,7 +109,6 @@ void aio_set_fd_handler(AioContext *ctx,
         node->opaque = opaque;
         node->io_read = io_read;
         node->io_write = io_write;
-        node->is_external = is_external;
 
         if (io_read) {
             bitmask |= FD_READ | FD_ACCEPT | FD_CLOSE;
@@ -135,7 +132,6 @@ void aio_set_fd_handler(AioContext *ctx,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *e,
-                            bool is_external,
                             EventNotifierHandler *io_notify,
                             AioPollFn *io_poll,
                             EventNotifierHandler *io_poll_ready)
@@ -161,7 +157,6 @@ void aio_set_event_notifier(AioContext *ctx,
             node->e = e;
             node->pfd.fd = (uintptr_t)event_notifier_get_handle(e);
             node->pfd.events = G_IO_IN;
-            node->is_external = is_external;
             QLIST_INSERT_HEAD_RCU(&ctx->aio_handlers, node, node);
 
             g_source_add_poll(&ctx->source, &node->pfd);
@@ -368,8 +363,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /* fill fd sets */
     count = 0;
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!node->deleted && node->io_notify
-            && aio_node_check(ctx, node->is_external)) {
+        if (!node->deleted && node->io_notify) {
             assert(count < MAXIMUM_WAIT_OBJECTS);
             events[count++] = event_notifier_get_handle(node->e);
         }
diff --git a/util/async.c b/util/async.c
index 055070ffbd..8f90ddc304 100644
--- a/util/async.c
+++ b/util/async.c
@@ -409,7 +409,7 @@ aio_ctx_finalize(GSource     *source)
         g_free(bh);
     }
 
-    aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL, NULL);
+    aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL, NULL);
     event_notifier_cleanup(&ctx->notifier);
     qemu_rec_mutex_destroy(&ctx->lock);
     qemu_lockcnt_destroy(&ctx->list_lock);
@@ -593,7 +593,6 @@ AioContext *aio_context_new(Error **errp)
     QSLIST_INIT(&ctx->scheduled_coroutines);
 
     aio_set_event_notifier(ctx, &ctx->notifier,
-                           false,
                            aio_context_notifier_cb,
                            aio_context_notifier_poll,
                            aio_context_notifier_poll_ready);
diff --git a/util/fdmon-epoll.c b/util/fdmon-epoll.c
index 1683aa1105..c6413cb18f 100644
--- a/util/fdmon-epoll.c
+++ b/util/fdmon-epoll.c
@@ -64,11 +64,6 @@ static int fdmon_epoll_wait(AioContext *ctx, AioHandlerList *ready_list,
     int i, ret = 0;
     struct epoll_event events[128];
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout > 0) {
         ret = qemu_poll_ns(&pfd, 1, timeout);
         if (ret > 0) {
@@ -133,11 +128,6 @@ bool fdmon_epoll_try_upgrade(AioContext *ctx, unsigned npfd)
         return false;
     }
 
-    /* Do not upgrade while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return false;
-    }
-
     if (npfd < EPOLL_ENABLE_THRESHOLD) {
         return false;
     }
diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index ab43052dd7..17ec18b7bd 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -276,11 +276,6 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
     unsigned wait_nr = 1; /* block until at least one cqe is ready */
     int ret;
 
-    /* Fall back while external clients are disabled */
-    if (qatomic_read(&ctx->external_disable_cnt)) {
-        return fdmon_poll_ops.wait(ctx, ready_list, timeout);
-    }
-
     if (timeout == 0) {
         wait_nr = 0; /* non-blocking */
     } else if (timeout > 0) {
@@ -315,8 +310,7 @@ static bool fdmon_io_uring_need_wait(AioContext *ctx)
         return true;
     }
 
-    /* Are we falling back to fdmon-poll? */
-    return qatomic_read(&ctx->external_disable_cnt);
+    return false;
 }
 
 static const FDMonOps fdmon_io_uring_ops = {
diff --git a/util/fdmon-poll.c b/util/fdmon-poll.c
index 5fe3b47865..17df917cf9 100644
--- a/util/fdmon-poll.c
+++ b/util/fdmon-poll.c
@@ -65,8 +65,7 @@ static int fdmon_poll_wait(AioContext *ctx, AioHandlerList *ready_list,
     assert(npfd == 0);
 
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
-        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events
-                && aio_node_check(ctx, node->is_external)) {
+        if (!QLIST_IS_INSERTED(node, node_deleted) && node->pfd.events) {
             add_pollfd(node);
         }
     }
diff --git a/util/main-loop.c b/util/main-loop.c
index 7022f02ef8..014c795916 100644
--- a/util/main-loop.c
+++ b/util/main-loop.c
@@ -644,14 +644,13 @@ void qemu_set_fd_handler(int fd,
                          void *opaque)
 {
     iohandler_init();
-    aio_set_fd_handler(iohandler_ctx, fd, false,
-                       fd_read, fd_write, NULL, NULL, opaque);
+    aio_set_fd_handler(iohandler_ctx, fd, fd_read, fd_write, NULL, NULL,
+                       opaque);
 }
 
 void event_notifier_set_handler(EventNotifier *e,
                                 EventNotifierHandler *handler)
 {
     iohandler_init();
-    aio_set_event_notifier(iohandler_ctx, e, false,
-                           handler, NULL, NULL);
+    aio_set_event_notifier(iohandler_ctx, e, handler, NULL, NULL);
 }
diff --git a/util/qemu-coroutine-io.c b/util/qemu-coroutine-io.c
index d791932d63..364f4d5abf 100644
--- a/util/qemu-coroutine-io.c
+++ b/util/qemu-coroutine-io.c
@@ -74,8 +74,7 @@ typedef struct {
 static void fd_coroutine_enter(void *opaque)
 {
     FDYieldUntilData *data = opaque;
-    aio_set_fd_handler(data->ctx, data->fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(data->ctx, data->fd, NULL, NULL, NULL, NULL, NULL);
     qemu_coroutine_enter(data->co);
 }
 
@@ -87,7 +86,7 @@ void coroutine_fn yield_until_fd_readable(int fd)
     data.ctx = qemu_get_current_aio_context();
     data.co = qemu_coroutine_self();
     data.fd = fd;
-    aio_set_fd_handler(
-        data.ctx, fd, false, fd_coroutine_enter, NULL, NULL, NULL, &data);
+    aio_set_fd_handler(data.ctx, fd, fd_coroutine_enter, NULL, NULL, NULL,
+                       &data);
     qemu_coroutine_yield();
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index a12b2d1bba..cd17fb5326 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,8 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, false,
-                       NULL, NULL, NULL, NULL, NULL);
+    aio_set_fd_handler(server->ioc->ctx, fd, NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
     g_free(vu_fd_watch);
@@ -362,7 +361,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +402,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +416,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index 3bc78d8660..b33298a444 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -122,9 +122,6 @@ if have_block
   if nettle.found() or gcrypt.found()
     tests += {'test-crypto-pbkdf': [io]}
   endif
-  if config_host_data.get('CONFIG_EPOLL_CREATE1')
-    tests += {'test-fdmon-epoll': [testblock]}
-  endif
 endif
 
 if have_system
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:08:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:08:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535507.833338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01Y-0002ur-TJ; Tue, 16 May 2023 19:07:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535507.833338; Tue, 16 May 2023 19:07:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01Y-0002ui-QS; Tue, 16 May 2023 19:07:52 +0000
Received: by outflank-mailman (input) for mailman id 535507;
 Tue, 16 May 2023 19:07:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxM-0002US-Ox
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:32 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58928e3d-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-266-PZ2AukVqP-GjPOevjG35Ag-1; Tue, 16 May 2023 15:03:27 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5A60129A9CA6;
 Tue, 16 May 2023 19:03:26 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4B053483EC2;
 Tue, 16 May 2023 19:03:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58928e3d-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263809;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1p0C/cPZpsJrZPLVm2kuhKZSaYrXmdL837gZyAhX640=;
	b=fNux0csvR1wLK2W8z0OMG96Ovkb2mhdeExWyjsydEFdyJO+gORDOcyObK3XszWzqrd1yNl
	yQcLZ1x+NP/P5VQf0kUJ5MH9tz5uQzpwSIDO63T54uHTSAvw0bulQ41wEnphKZzGyy1KAS
	hK0vte/1WXv+GfHu9EmBI9hltDPaaK4=
X-MC-Unique: PZ2AukVqP-GjPOevjG35Ag-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 11/20] xen-block: implement BlockDevOps->drained_begin()
Date: Tue, 16 May 2023 15:02:29 -0400
Message-Id: <20230516190238.8401-12-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

Detach event channels during drained sections to stop I/O submission
from the ring. xen-block is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Extend xen_device_set_event_channel_context() to allow ctx=NULL. The
event channel still exists but the event loop does not monitor the file
descriptor. Event channel processing can resume by calling
xen_device_set_event_channel_context() with a non-NULL ctx.

Factor out xen_device_set_event_channel_context() calls in
hw/block/dataplane/xen-block.c into attach/detach helper functions.
Incidentally, these don't require the AioContext lock because
aio_set_fd_handler() is thread-safe.

It's safer to register BlockDevOps after the dataplane instance has been
created. The BlockDevOps .drained_begin/end() callbacks depend on the
dataplane instance, so move the blk_set_dev_ops() call after
xen_block_dataplane_create().
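
The attach/detach split described above can be sketched with a toy model. All types below are hypothetical stand-ins for the real XenDevice, XenEventChannel, and AioContext structures in hw/xen/ and include/block/aio.h; only the ctx=NULL convention ("channel exists but is not monitored") is taken from the patch:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins, not QEMU's real types. */
typedef struct AioContext {
    int id;
} AioContext;

typedef struct EventChannel {
    int fd;
    AioContext *ctx;    /* NULL: channel exists but fd is not monitored */
} EventChannel;

typedef struct DataPlane {
    EventChannel *event_channel;
    AioContext *ctx;    /* AioContext to monitor in while attached */
} DataPlane;

/*
 * Sketch of xen_device_set_event_channel_context() after this patch:
 * passing ctx == NULL unregisters the fd handler without destroying
 * the event channel itself.
 */
static void set_event_channel_context(EventChannel *channel, AioContext *ctx)
{
    channel->ctx = ctx;
}

/* Mirrors xen_block_dataplane_detach(): tolerate a missing dataplane or
 * channel so the BlockDevOps .drained_begin() callback can fire at any
 * point in the device's lifecycle. */
static void dataplane_detach(DataPlane *d)
{
    if (!d || !d->event_channel) {
        return;
    }
    set_event_channel_context(d->event_channel, NULL);
}

/* Mirrors xen_block_dataplane_attach(): resume monitoring in d->ctx. */
static void dataplane_attach(DataPlane *d)
{
    if (!d || !d->event_channel) {
        return;
    }
    set_event_channel_context(d->event_channel, d->ctx);
}
```

A drained section then reduces to dataplane_detach() in .drained_begin() and dataplane_attach() in .drained_end(), with no AioContext lock taken in either helper.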

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 hw/block/dataplane/xen-block.h |  2 ++
 hw/block/dataplane/xen-block.c | 42 +++++++++++++++++++++++++---------
 hw/block/xen-block.c           | 24 ++++++++++++++++---
 hw/xen/xen-bus.c               |  7 ++++--
 4 files changed, 59 insertions(+), 16 deletions(-)

diff --git a/hw/block/dataplane/xen-block.h b/hw/block/dataplane/xen-block.h
index 76dcd51c3d..7b8e9df09f 100644
--- a/hw/block/dataplane/xen-block.h
+++ b/hw/block/dataplane/xen-block.h
@@ -26,5 +26,7 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                unsigned int protocol,
                                Error **errp);
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane);
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane);
 
 #endif /* HW_BLOCK_DATAPLANE_XEN_BLOCK_H */
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index d8bc39d359..2597f38805 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -664,6 +664,30 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
     g_free(dataplane);
 }
 
+void xen_block_dataplane_detach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         NULL, &error_abort);
+}
+
+void xen_block_dataplane_attach(XenBlockDataPlane *dataplane)
+{
+    if (!dataplane || !dataplane->event_channel) {
+        return;
+    }
+
+    /* Only reason for failure is a NULL channel */
+    xen_device_set_event_channel_context(dataplane->xendev,
+                                         dataplane->event_channel,
+                                         dataplane->ctx, &error_abort);
+}
+
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
@@ -674,13 +698,11 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 
     xendev = dataplane->xendev;
 
-    aio_context_acquire(dataplane->ctx);
-    if (dataplane->event_channel) {
-        /* Only reason for failure is a NULL channel */
-        xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                             qemu_get_aio_context(),
-                                             &error_abort);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_detach(dataplane);
     }
+
+    aio_context_acquire(dataplane->ctx);
     /* Xen doesn't have multiple users for nodes, so this can't fail */
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
@@ -819,11 +841,9 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     blk_set_aio_context(dataplane->blk, dataplane->ctx, NULL);
     aio_context_release(old_context);
 
-    /* Only reason for failure is a NULL channel */
-    aio_context_acquire(dataplane->ctx);
-    xen_device_set_event_channel_context(xendev, dataplane->event_channel,
-                                         dataplane->ctx, &error_abort);
-    aio_context_release(dataplane->ctx);
+    if (!blk_in_drain(dataplane->blk)) {
+        xen_block_dataplane_attach(dataplane);
+    }
 
     return;
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index f5a744589d..f099914831 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -189,8 +189,26 @@ static void xen_block_resize_cb(void *opaque)
     xen_device_backend_printf(xendev, "state", "%u", state);
 }
 
+/* Suspend request handling */
+static void xen_block_drained_begin(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_detach(blockdev->dataplane);
+}
+
+/* Resume request handling */
+static void xen_block_drained_end(void *opaque)
+{
+    XenBlockDevice *blockdev = opaque;
+
+    xen_block_dataplane_attach(blockdev->dataplane);
+}
+
 static const BlockDevOps xen_block_dev_ops = {
-    .resize_cb = xen_block_resize_cb,
+    .resize_cb     = xen_block_resize_cb,
+    .drained_begin = xen_block_drained_begin,
+    .drained_end   = xen_block_drained_end,
 };
 
 static void xen_block_realize(XenDevice *xendev, Error **errp)
@@ -242,8 +260,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
-
     if (conf->discard_granularity == -1) {
         conf->discard_granularity = conf->physical_block_size;
     }
@@ -277,6 +293,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     blockdev->dataplane =
         xen_block_dataplane_create(xendev, blk, conf->logical_block_size,
                                    blockdev->props.iothread);
+
+    blk_set_dev_ops(blk, &xen_block_dev_ops, blockdev);
 }
 
 static void xen_block_frontend_changed(XenDevice *xendev,
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c59850b1de..b8f408c9ed 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -846,8 +846,11 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
-                       xen_device_event, NULL, xen_device_poll, NULL, channel);
+    if (ctx) {
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
+                           true, xen_device_event, NULL, xen_device_poll, NULL,
+                           channel);
+    }
 }
 
 XenEventChannel *xen_device_bind_event_channel(XenDevice *xendev,
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:08:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:08:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535515.833348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01e-0003Or-7e; Tue, 16 May 2023 19:07:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535515.833348; Tue, 16 May 2023 19:07:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01e-0003Oe-3Z; Tue, 16 May 2023 19:07:58 +0000
Received: by outflank-mailman (input) for mailman id 535515;
 Tue, 16 May 2023 19:07:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxR-0002US-GR
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:37 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5b59799a-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:35 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-455-MkRP-mB_O2aVof1osuZVZw-1; Tue, 16 May 2023 15:03:30 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0D9B386C60E;
 Tue, 16 May 2023 19:03:29 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 37E3C492B00;
 Tue, 16 May 2023 19:03:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b59799a-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263814;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fQCTZSkypWn0YHx4KoG3dn4Qz091Y17YNDQRTejUUgA=;
	b=Ql+BBrR1ePZ9hq88p2M/RAGgfEfuEUj+o27/KmYIv3XrBV9YxQ+Xre9SNyroDGwmS+EDdL
	3m/Izh+R1W2qLtkbN7DYTDqdk2sQqpopxn/NF1GIBBvqSE8KT2qZmCZ6JJegN8WdANHjLp
	766wGYT22rV3RmiAY2moJy4B8uzRbfI=
X-MC-Unique: MkRP-mB_O2aVof1osuZVZw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 12/20] hw/xen: do not set is_external=true on evtchn fds
Date: Tue, 16 May 2023 15:02:30 -0400
Message-Id: <20230516190238.8401-13-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

is_external=true suspends fd handlers between aio_disable_external() and
aio_enable_external(). The block layer's drain operation uses this
mechanism to prevent new I/O from sneaking in between
bdrv_drained_begin() and bdrv_drained_end().

The previous commit converted the xen-block device to use BlockDevOps
.drained_begin/end() callbacks. It no longer relies on is_external=true
so it is safe to pass is_external=false.

This is part of ongoing work to remove the aio_disable_external() API.
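
As a rough illustration of the mechanism in the first paragraph, here is a toy model of is_external gating. The names and the one-handler-at-a-time dispatch are simplifications for illustration, not QEMU's actual fd-monitoring code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy context: only the disable counter matters for this sketch. */
typedef struct Ctx {
    int external_disable_cnt;
} Ctx;

/* Toy fd handler registration record. */
typedef struct Handler {
    bool is_external;   /* was it registered with is_external=true? */
    int invoked;        /* how many times the callback has run */
} Handler;

static void disable_external(Ctx *c) { c->external_disable_cnt++; }
static void enable_external(Ctx *c)  { c->external_disable_cnt--; }

/* Dispatch skips external handlers while a drained section is active,
 * which is how drain keeps new I/O from sneaking in between
 * bdrv_drained_begin() and bdrv_drained_end(). */
static void dispatch(Ctx *c, Handler *h)
{
    if (h->is_external && c->external_disable_cnt > 0) {
        return;     /* suspended for the duration of the drain */
    }
    h->invoked++;
}
```

With is_external=false, as this patch now passes for evtchn fds, the handler is no longer suspended by drain; the xen-block .drained_begin()/.drained_end() callbacks take over that responsibility.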

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 hw/xen/xen-bus.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index b8f408c9ed..bf256d4da2 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -842,14 +842,14 @@ void xen_device_set_event_channel_context(XenDevice *xendev,
     }
 
     if (channel->ctx)
-        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+        aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                            NULL, NULL, NULL, NULL, NULL);
 
     channel->ctx = ctx;
     if (ctx) {
         aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh),
-                           true, xen_device_event, NULL, xen_device_poll, NULL,
-                           channel);
+                           false, xen_device_event, NULL, xen_device_poll,
+                           NULL, channel);
     }
 }
 
@@ -923,7 +923,7 @@ void xen_device_unbind_event_channel(XenDevice *xendev,
 
     QLIST_REMOVE(channel, list);
 
-    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), true,
+    aio_set_fd_handler(channel->ctx, qemu_xen_evtchn_fd(channel->xeh), false,
                        NULL, NULL, NULL, NULL, NULL);
 
     if (qemu_xen_evtchn_unbind(channel->xeh, channel->local_port) < 0) {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:08:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:08:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535522.833358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01i-0003s1-Dh; Tue, 16 May 2023 19:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535522.833358; Tue, 16 May 2023 19:08:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01i-0003ru-AP; Tue, 16 May 2023 19:08:02 +0000
Received: by outflank-mailman (input) for mailman id 535522;
 Tue, 16 May 2023 19:08:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxd-0002eu-CZ
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:49 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 630824f1-f41c-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:03:48 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-544-yjE8Q2lfPmWXjJNlZq6Baw-1; Tue, 16 May 2023 15:03:41 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 032B81C060D4;
 Tue, 16 May 2023 19:03:39 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 68FC1492B00;
 Tue, 16 May 2023 19:03:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 630824f1-f41c-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263827;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wRBk3QHXWdSEXDTSHgVJYXRAQiXNN4MdRJe8NcVGhXU=;
	b=JC2VUzA4tD0BklSmPLGb7DcTjfN5ZkslyiFFibGCC928Nsi0hllu1vKPGsM9ZJVfX6gt0X
	GV4YIR+NW5X3LnYb1TpSTIKCB3EIyvm/oC72id0I5VJb4osxqEGAnx0UO49hOgz8Gpa2+W
	l/rW9UwGut4zF3HkqVAII3s/W5UxRcs=
X-MC-Unique: yjE8Q2lfPmWXjJNlZq6Baw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 16/20] virtio: make it possible to detach host notifier from any thread
Date: Tue, 16 May 2023 15:02:34 -0400
Message-Id: <20230516190238.8401-17-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9

virtio_queue_aio_detach_host_notifier() does two things:
1. It removes the fd handler from the event loop.
2. It processes the virtqueue one last time.

The first step can be performed by any thread and without taking the
AioContext lock.

The second step may need the AioContext lock (depending on the device
implementation) and runs in the thread where request processing takes
place. virtio-blk and virtio-scsi therefore call
virtio_queue_aio_detach_host_notifier() from a BH that is scheduled in the
AioContext.

The next patch will introduce a .drained_begin() function that needs to
call virtio_queue_aio_detach_host_notifier(). .drained_begin() functions
cannot call aio_poll() to wait synchronously for the BH. It is possible
for a .drained_poll() callback to asynchronously wait for the BH, but
that is more complex than necessary here.

Move the virtqueue processing out to the callers of
virtio_queue_aio_detach_host_notifier() so that the function can be
called from any thread. This is in preparation for the next patch.
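
The split this patch makes can be sketched with a toy model; the types and field names below are hypothetical simplifications of VirtQueue and EventNotifier, kept only to show which half of the old function moves where:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy notifier: an event arrived but has not yet been serviced. */
typedef struct EventNotifier {
    bool pending;
} EventNotifier;

typedef struct VirtQueue {
    EventNotifier host_notifier;
    bool handler_registered;
    int processed;      /* how many times the virtqueue was serviced */
} VirtQueue;

/* After this patch, detaching only unregisters the handler. Nothing here
 * touches request processing, so any thread may call it without the
 * AioContext lock. */
static void detach_host_notifier(VirtQueue *vq)
{
    vq->handler_registered = false;
}

/* Toy equivalent of virtio_queue_host_notifier_read(): service the
 * virtqueue one last time if an event was still pending. */
static void host_notifier_read(VirtQueue *vq)
{
    if (vq->host_notifier.pending) {
        vq->host_notifier.pending = false;
        vq->processed++;
    }
}

/* Caller-side pattern, as in virtio_blk_data_plane_stop_bh(): this part
 * still runs in the AioContext where request processing takes place. */
static void stop_bh(VirtQueue *vq)
{
    detach_host_notifier(vq);
    /* Test and clear the notifier after disabling the event, in case the
     * poll callback did not have time to run. */
    host_notifier_read(vq);
}
```

The key property is that detach_host_notifier() alone never processes requests; only callers that run in the right thread pair it with the final read.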

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c |  7 +++++++
 hw/scsi/virtio-scsi-dataplane.c | 14 ++++++++++++++
 hw/virtio/virtio.c              |  3 ---
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index af1c24c40c..4f5c7cd55f 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -287,8 +287,15 @@ static void virtio_blk_data_plane_stop_bh(void *opaque)
 
     for (i = 0; i < s->conf->num_queues; i++) {
         VirtQueue *vq = virtio_get_queue(s->vdev, i);
+        EventNotifier *host_notifier = virtio_queue_get_host_notifier(vq);
 
         virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+
+        /*
+         * Test and clear notifier after disabling event, in case poll callback
+         * didn't have time to run.
+         */
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index f3214e1c57..b3a1ed21f7 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -71,12 +71,26 @@ static void virtio_scsi_dataplane_stop_bh(void *opaque)
 {
     VirtIOSCSI *s = opaque;
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
+    EventNotifier *host_notifier;
     int i;
 
     virtio_queue_aio_detach_host_notifier(vs->ctrl_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->ctrl_vq);
+
+    /*
+     * Test and clear notifier after disabling event, in case poll callback
+     * didn't have time to run.
+     */
+    virtio_queue_host_notifier_read(host_notifier);
+
     virtio_queue_aio_detach_host_notifier(vs->event_vq, s->ctx);
+    host_notifier = virtio_queue_get_host_notifier(vs->event_vq);
+    virtio_queue_host_notifier_read(host_notifier);
+
     for (i = 0; i < vs->conf.num_queues; i++) {
         virtio_queue_aio_detach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        host_notifier = virtio_queue_get_host_notifier(vs->cmd_vqs[i]);
+        virtio_queue_host_notifier_read(host_notifier);
     }
 }
 
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 272d930721..cb09cb6464 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3516,9 +3516,6 @@ void virtio_queue_aio_attach_host_notifier_no_poll(VirtQueue *vq, AioContext *ct
 void virtio_queue_aio_detach_host_notifier(VirtQueue *vq, AioContext *ctx)
 {
     aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL, NULL, NULL);
-    /* Test and clear notifier before after disabling event,
-     * in case poll callback didn't have time to run. */
-    virtio_queue_host_notifier_read(&vq->host_notifier);
 }
 
 void virtio_queue_host_notifier_read(EventNotifier *n)
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:08:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:08:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535532.833368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01o-0004Vt-No; Tue, 16 May 2023 19:08:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535532.833368; Tue, 16 May 2023 19:08:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01o-0004Vi-Kn; Tue, 16 May 2023 19:08:08 +0000
Received: by outflank-mailman (input) for mailman id 535532;
 Tue, 16 May 2023 19:08:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxd-0002US-53
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:49 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 626a86a9-f41c-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:03:47 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-589-el3UTLoyNk26m0tFq6WBLw-1; Tue, 16 May 2023 15:03:42 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0D84986C60F;
 Tue, 16 May 2023 19:03:42 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4061B35453;
 Tue, 16 May 2023 19:03:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 626a86a9-f41c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263826;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kaoTbBrPAhmWwCy9LBWyfcSDbVy7cENmA5Vg0RF9/R0=;
	b=Qyf1njciRLJVoYkx/A9bY0mmdwJeoE5WuOqEvbm+9pW+Uf7GnVqbyOuzcgPQ20eIoLvCOj
	E1xfjlVaHe2jbLcWt3BUU1s6nCb9vYzhYHKZjjptGtcf+1AzxGWlYU8/PFf+fTF0fugVCL
	bAZ9LAFW0IHyQDzRmTz7eYHS1pw8q5o=
X-MC-Unique: el3UTLoyNk26m0tFq6WBLw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 17/20] virtio-blk: implement BlockDevOps->drained_begin()
Date: Tue, 16 May 2023 15:02:35 -0400
Message-Id: <20230516190238.8401-18-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5

Detach ioeventfds during drained sections to stop I/O submission from
the guest. virtio-blk is no longer reliant on aio_disable_external()
after this patch. This will allow us to remove the
aio_disable_external() API once all other code that relies on it is
converted.

Take extra care to avoid attaching/detaching ioeventfds if the data
plane is started/stopped during a drained section. This should be rare,
but the mirror block job may be able to trigger it.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 16 ++++++++------
 hw/block/virtio-blk.c           | 38 ++++++++++++++++++++++++++++++++-
 2 files changed, 47 insertions(+), 7 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 4f5c7cd55f..b90456c08c 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -246,13 +246,15 @@ int virtio_blk_data_plane_start(VirtIODevice *vdev)
     }
 
     /* Get this show started by hooking up our callbacks */
-    aio_context_acquire(s->ctx);
-    for (i = 0; i < nvqs; i++) {
-        VirtQueue *vq = virtio_get_queue(s->vdev, i);
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_context_acquire(s->ctx);
+        for (i = 0; i < nvqs; i++) {
+            VirtQueue *vq = virtio_get_queue(s->vdev, i);
 
-        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
   fail_aio_context:
@@ -322,7 +324,9 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
     s->stopping = true;
     trace_virtio_blk_data_plane_stop(s);
 
-    aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+    if (!blk_in_drain(s->conf->conf.blk)) {
+        aio_wait_bh_oneshot(s->ctx, virtio_blk_data_plane_stop_bh, s);
+    }
 
     aio_context_acquire(s->ctx);
 
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 8f65ea4659..4ca66b5860 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1506,8 +1506,44 @@ static void virtio_blk_resize(void *opaque)
     aio_bh_schedule_oneshot(qemu_get_aio_context(), virtio_resize_cb, vdev);
 }
 
+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_blk_drained_begin(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_blk_drained_end(void *opaque)
+{
+    VirtIOBlock *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
+    AioContext *ctx = blk_get_aio_context(s->conf.conf.blk);
+
+    if (!s->dataplane || !s->dataplane_started) {
+        return;
+    }
+
+    for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, ctx);
+    }
+}
+
 static const BlockDevOps virtio_block_ops = {
-    .resize_cb = virtio_blk_resize,
+    .resize_cb     = virtio_blk_resize,
+    .drained_begin = virtio_blk_drained_begin,
+    .drained_end   = virtio_blk_drained_end,
 };
 
 static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:08:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:08:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535533.833374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01p-0004Zn-4l; Tue, 16 May 2023 19:08:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535533.833374; Tue, 16 May 2023 19:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz01o-0004ZB-U0; Tue, 16 May 2023 19:08:08 +0000
Received: by outflank-mailman (input) for mailman id 535533;
 Tue, 16 May 2023 19:08:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzm0=BF=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pyzxg-0002eu-K4
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:03:52 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64c3751a-f41c-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:03:51 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-151-Eqyk3gCQO6C1PHNyU1HRkQ-1; Tue, 16 May 2023 15:03:47 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 91CBC29A9CA1;
 Tue, 16 May 2023 19:03:45 +0000 (UTC)
Received: from localhost (unknown [10.39.192.44])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E0C1F2166B31;
 Tue, 16 May 2023 19:03:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64c3751a-f41c-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684263830;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b5WnUQm2S0YXwNos6EbIKNeS3TxM6yVDdnX/yh34lvc=;
	b=AGkKKOk6ngYYRvy4uKZpIF0Tw1OKosQrqjP2ionL8pPRc0+SoNaHy4WJotvFMyb5ZWClzR
	/1QkO90VD81kqNVcHYml6r8fC6On/ouR7uzFammgsmrH16FGZURKlYmlkZxsIUFBMm+GS6
	H+VCabzMzVNtlbypHYN34BfN+2zo/ws=
X-MC-Unique: Eqyk3gCQO6C1PHNyU1HRkQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>,
	xen-devel@lists.xenproject.org,
	Kevin Wolf <kwolf@redhat.com>,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>,
	eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>,
	Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	=?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: [PATCH v6 18/20] virtio-scsi: implement BlockDevOps->drained_begin()
Date: Tue, 16 May 2023 15:02:36 -0400
Message-Id: <20230516190238.8401-19-stefanha@redhat.com>
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

The virtio-scsi Host Bus Adapter provides access to devices on a SCSI
bus. Those SCSI devices typically have a BlockBackend. When the
BlockBackend enters a drained section, the SCSI device must temporarily
stop submitting new I/O requests.

Implement this behavior by temporarily stopping virtio-scsi virtqueue
processing when one of the SCSI devices enters a drained section. The
new scsi_device_drained_begin() API allows scsi-disk to notify the
virtio-scsi HBA.

scsi_device_drained_begin() uses a drain counter so that multiple SCSI
devices can have overlapping drained sections. The HBA only sees one
pair of .drained_begin/end() calls.

After this commit, virtio-scsi no longer depends on hw/virtio's
ioeventfd aio_set_event_notifier(is_external=true). This commit is a
step towards removing the aio_disable_external() API.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/hw/scsi/scsi.h          | 14 ++++++++++++
 hw/scsi/scsi-bus.c              | 40 +++++++++++++++++++++++++++++++++
 hw/scsi/scsi-disk.c             | 27 +++++++++++++++++-----
 hw/scsi/virtio-scsi-dataplane.c | 18 +++++++++------
 hw/scsi/virtio-scsi.c           | 38 +++++++++++++++++++++++++++++++
 hw/scsi/trace-events            |  2 ++
 6 files changed, 127 insertions(+), 12 deletions(-)

diff --git a/include/hw/scsi/scsi.h b/include/hw/scsi/scsi.h
index 6f23a7a73e..e2bb1a2fbf 100644
--- a/include/hw/scsi/scsi.h
+++ b/include/hw/scsi/scsi.h
@@ -133,6 +133,16 @@ struct SCSIBusInfo {
     void (*save_request)(QEMUFile *f, SCSIRequest *req);
     void *(*load_request)(QEMUFile *f, SCSIRequest *req);
     void (*free_request)(SCSIBus *bus, void *priv);
+
+    /*
+     * Temporarily stop submitting new requests between drained_begin() and
+     * drained_end(). Called from the main loop thread with the BQL held.
+     *
+     * Implement these callbacks if request processing is triggered by a file
+     * descriptor like an EventNotifier. Otherwise set them to NULL.
+     */
+    void (*drained_begin)(SCSIBus *bus);
+    void (*drained_end)(SCSIBus *bus);
 };
 
 #define TYPE_SCSI_BUS "SCSI"
@@ -144,6 +154,8 @@ struct SCSIBus {
 
     SCSISense unit_attention;
     const SCSIBusInfo *info;
+
+    int drain_count; /* protected by BQL */
 };
 
 /**
@@ -213,6 +225,8 @@ void scsi_req_cancel_complete(SCSIRequest *req);
 void scsi_req_cancel(SCSIRequest *req);
 void scsi_req_cancel_async(SCSIRequest *req, Notifier *notifier);
 void scsi_req_retry(SCSIRequest *req);
+void scsi_device_drained_begin(SCSIDevice *sdev);
+void scsi_device_drained_end(SCSIDevice *sdev);
 void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense);
 void scsi_device_set_ua(SCSIDevice *sdev, SCSISense sense);
 void scsi_device_report_change(SCSIDevice *dev, SCSISense sense);
diff --git a/hw/scsi/scsi-bus.c b/hw/scsi/scsi-bus.c
index 64013c8a24..f80f4cb4fc 100644
--- a/hw/scsi/scsi-bus.c
+++ b/hw/scsi/scsi-bus.c
@@ -1669,6 +1669,46 @@ void scsi_device_purge_requests(SCSIDevice *sdev, SCSISense sense)
     scsi_device_set_ua(sdev, sense);
 }
 
+void scsi_device_drained_begin(SCSIDevice *sdev)
+{
+    SCSIBus *bus = DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus);
+    if (!bus) {
+        return;
+    }
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bus->drain_count < INT_MAX);
+
+    /*
+     * Multiple BlockBackends can be on a SCSIBus and each may begin/end
+     * draining at any time. Keep a counter so HBAs only see begin/end once.
+     */
+    if (bus->drain_count++ == 0) {
+        trace_scsi_bus_drained_begin(bus, sdev);
+        if (bus->info->drained_begin) {
+            bus->info->drained_begin(bus);
+        }
+    }
+}
+
+void scsi_device_drained_end(SCSIDevice *sdev)
+{
+    SCSIBus *bus = DO_UPCAST(SCSIBus, qbus, sdev->qdev.parent_bus);
+    if (!bus) {
+        return;
+    }
+
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    assert(bus->drain_count > 0);
+
+    if (bus->drain_count-- == 1) {
+        trace_scsi_bus_drained_end(bus, sdev);
+        if (bus->info->drained_end) {
+            bus->info->drained_end(bus);
+        }
+    }
+}
+
 static char *scsibus_get_dev_path(DeviceState *dev)
 {
     SCSIDevice *d = SCSI_DEVICE(dev);
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 97c9b1c8cd..e0d79c7966 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2360,6 +2360,20 @@ static void scsi_disk_reset(DeviceState *dev)
     s->qdev.scsi_version = s->qdev.default_scsi_version;
 }
 
+static void scsi_disk_drained_begin(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_begin(&s->qdev);
+}
+
+static void scsi_disk_drained_end(void *opaque)
+{
+    SCSIDiskState *s = opaque;
+
+    scsi_device_drained_end(&s->qdev);
+}
+
 static void scsi_disk_resize_cb(void *opaque)
 {
     SCSIDiskState *s = opaque;
@@ -2414,16 +2428,19 @@ static bool scsi_cd_is_medium_locked(void *opaque)
 }
 
 static const BlockDevOps scsi_disk_removable_block_ops = {
-    .change_media_cb = scsi_cd_change_media_cb,
+    .change_media_cb  = scsi_cd_change_media_cb,
+    .drained_begin    = scsi_disk_drained_begin,
+    .drained_end      = scsi_disk_drained_end,
     .eject_request_cb = scsi_cd_eject_request_cb,
-    .is_tray_open = scsi_cd_is_tray_open,
     .is_medium_locked = scsi_cd_is_medium_locked,
-
-    .resize_cb = scsi_disk_resize_cb,
+    .is_tray_open     = scsi_cd_is_tray_open,
+    .resize_cb        = scsi_disk_resize_cb,
 };
 
 static const BlockDevOps scsi_disk_block_ops = {
-    .resize_cb = scsi_disk_resize_cb,
+    .drained_begin = scsi_disk_drained_begin,
+    .drained_end   = scsi_disk_drained_end,
+    .resize_cb     = scsi_disk_resize_cb,
 };
 
 static void scsi_disk_unit_attention_reported(SCSIDevice *dev)
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index b3a1ed21f7..d55de4c8ca 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -158,14 +158,16 @@ int virtio_scsi_dataplane_start(VirtIODevice *vdev)
     s->dataplane_starting = false;
     s->dataplane_started = true;
 
-    aio_context_acquire(s->ctx);
-    virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
-    virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
+    if (s->bus.drain_count == 0) {
+        aio_context_acquire(s->ctx);
+        virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
+        virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
 
-    for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        for (i = 0; i < vs->conf.num_queues; i++) {
+            virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
+        }
+        aio_context_release(s->ctx);
     }
-    aio_context_release(s->ctx);
     return 0;
 
 fail_host_notifiers:
@@ -211,7 +213,9 @@ void virtio_scsi_dataplane_stop(VirtIODevice *vdev)
     }
     s->dataplane_stopping = true;
 
-    aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
+    if (s->bus.drain_count == 0) {
+        aio_wait_bh_oneshot(s->ctx, virtio_scsi_dataplane_stop_bh, s);
+    }
 
     blk_drain_all(); /* ensure there are no in-flight requests */
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index c1a7ea9ae2..4a8849cc7e 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -1117,6 +1117,42 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
     }
 }
 
+/* Suspend virtqueue ioeventfd processing during drain */
+static void virtio_scsi_drained_begin(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_detach_host_notifier(vq, s->ctx);
+    }
+}
+
+/* Resume virtqueue ioeventfd processing after drain */
+static void virtio_scsi_drained_end(SCSIBus *bus)
+{
+    VirtIOSCSI *s = container_of(bus, VirtIOSCSI, bus);
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    uint32_t total_queues = VIRTIO_SCSI_VQ_NUM_FIXED +
+                            s->parent_obj.conf.num_queues;
+
+    if (!s->dataplane_started) {
+        return;
+    }
+
+    for (uint32_t i = 0; i < total_queues; i++) {
+        VirtQueue *vq = virtio_get_queue(vdev, i);
+        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
+    }
+}
+
 static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .tcq = true,
     .max_channel = VIRTIO_SCSI_MAX_CHANNEL,
@@ -1131,6 +1167,8 @@ static struct SCSIBusInfo virtio_scsi_scsi_info = {
     .get_sg_list = virtio_scsi_get_sg_list,
     .save_request = virtio_scsi_save_request,
     .load_request = virtio_scsi_load_request,
+    .drained_begin = virtio_scsi_drained_begin,
+    .drained_end = virtio_scsi_drained_end,
 };
 
 void virtio_scsi_common_realize(DeviceState *dev,
diff --git a/hw/scsi/trace-events b/hw/scsi/trace-events
index ab238293f0..bdd4e2c7c7 100644
--- a/hw/scsi/trace-events
+++ b/hw/scsi/trace-events
@@ -6,6 +6,8 @@ scsi_req_cancel(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_data(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_data_canceled(int target, int lun, int tag, int len) "target %d lun %d tag %d len %d"
 scsi_req_dequeue(int target, int lun, int tag) "target %d lun %d tag %d"
+scsi_bus_drained_begin(void *bus, void *sdev) "bus %p sdev %p"
+scsi_bus_drained_end(void *bus, void *sdev) "bus %p sdev %p"
 scsi_req_continue(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_continue_canceled(int target, int lun, int tag) "target %d lun %d tag %d"
 scsi_req_parsed(int target, int lun, int tag, int cmd, int mode, int xfer) "target %d lun %d tag %d command %d dir %d length %d"
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:32:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:32:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535556.833388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0PR-00015C-99; Tue, 16 May 2023 19:32:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535556.833388; Tue, 16 May 2023 19:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0PR-000155-5f; Tue, 16 May 2023 19:32:33 +0000
Received: by outflank-mailman (input) for mailman id 535556;
 Tue, 16 May 2023 19:32:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pz0PP-00014z-Ni
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:32:32 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 63691252-f420-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:32:28 +0200 (CEST)
Received: from mail-bn8nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 16 May 2023 15:31:53 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB4921.namprd03.prod.outlook.com (2603:10b6:5:1ea::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May
 2023 19:31:51 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6387.030; Tue, 16 May 2023
 19:31:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63691252-f420-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684265548;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=UEOWwoXYmogg+FZr/nvtAjUiGOViDPxhO8690y9mbV4=;
  b=YgYPzcIJOq+R1qSWephDgt8ucMshLTlNxYL4x8XdGLe07Ji5qbY+rG2Q
   VeG3E+Wa2kgUsSIrbNiWf/WxXjZiamRqE7V330JxBtpVq04hGmfHj1F5U
   uC51cHjkKh6SB6iEsj4qwetRI+8tDmcPZ5IJCcGwAtstb7oiiUFAFos2q
   8=;
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <54b35fa0-160e-3035-6c22-65a791ed2f84@citrix.com>
Date: Tue, 16 May 2023 20:31:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
 <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
 <1bef2b0e-04e8-2ca0-cf03-f61cdae484a9@suse.com>
 <b1c56e56-90cc-0f37-4c8a-df1217339e59@citrix.com>
 <22a6bd70-887e-da72-ccff-16b3627463c3@suse.com>
In-Reply-To: <22a6bd70-887e-da72-ccff-16b3627463c3@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0277.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:195::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM6PR03MB4921:EE_
X-MS-Office365-Filtering-Correlation-Id: efd76ccc-54a8-42e4-18aa-08db56443228
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: efd76ccc-54a8-42e4-18aa-08db56443228
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 19:31:50.6613
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4921

On 16/05/2023 3:53 pm, Jan Beulich wrote:
> On 16.05.2023 16:16, Andrew Cooper wrote:
>> On 16/05/2023 3:06 pm, Jan Beulich wrote:
>>> On 16.05.2023 15:51, Andrew Cooper wrote:
>>>> On 16/05/2023 2:06 pm, Jan Beulich wrote:
>>>>> On 15.05.2023 16:42, Andrew Cooper wrote:
>>>>> Further, is even just non-default exposure of all the various bits okay
>>>>> for other than Dom0? IOW is there indeed no further adjustment necessary
>>>>> to guest_rdmsr()?
>>> With your reply further down also sufficiently clarifying things for
>>> me (in particular pointing out the one oversight of mine), the question
>>> above is the sole part remaining before I'd be okay giving my R-b here.
>> Oh sorry.  Yes, it is sufficient.  Because VMs (other than dom0) don't
>> get the ARCH_CAPS CPUID bit, reads of MSR_ARCH_CAPS will #GP.
>>
>> Right now, you can set cpuid = "host:arch-caps" in an xl.cfg file.  If
>> you do this, you get to keep both pieces, as you'll end up advertising
>> the MSR but with a value of 0 because of the note in patch 4.  libxl
>> still only understands the xend CPUID format and can't express any MSR
>> data at all.
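As a concrete sketch of the opt-in described above (a hypothetical minimal domain config fragment; only the cpuid line comes from this discussion, the rest is invented for illustration):

```
# hypothetical xl.cfg fragment: advertises the ARCH_CAPS CPUID bit,
# but the MSR will read as 0 since libxl cannot express MSR data
name = "guest0"
cpuid = "host:arch-caps"
```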
> Hmm, so the CPUID bit being max only results in all the ARCH_CAPS bits
> getting turned off in the default policy. That is, to enable anything
> you need to not only enable the CPUID bit, but also the ARCH_CAPS bits
> you want enabled (with no present means to do so).

Correct.

> I guess that's no
> different from other max-only features with dependents, but I wonder
> whether that's good behavior.

It's not really something you get a choice over.

Default is always less than max, so however you choose to express these
concepts, when you're opting in you're always having to put information
back in which was previously stripped out.

> Wouldn't it make more sense for the
> individual bits' exposure qualifiers to become meaningful once the
> qualifying feature is enabled? I.e. here this would then mean that
> some ARCH_CAPS bits may become available, while others may require
> explicit turning on (assuming they weren't all 'A').

I'm afraid I don't follow.  You could make some bits of MSR_ARCH_CAPS
itself 'a' vs 'A', and that would have the expected effect (for any VM
where arch_caps was visible).

The thing which is 99% of the complexity with MSR_ARCH_CAPS is getting
RSBA/RRSBA right.  The moment we advertise MSR_ARCH_CAPS to guests,
RSBA/RRSBA must be set appropriately for migrate or we're creating a
security vulnerability in the guest.

If you're wondering about the block disable, that's because MSRs and
CPUID are different.  With CPUID, we have
x86_cpu_policy_clear_out_of_range_leaves() which uses the various
max_leaf.  e.g. a feat.max_leaf=0 is what causes all of subleaf 1 and 2
to be zeroed in a policy.


> But irrespective of that (which is kind of orthogonal) my question was
> rather with already considering the point in time when the CPUID bit
> would become 'A'. IOW I was wondering whether at that point having all
> the individual bits be 'A' is actually going to be correct.

I've chosen all 'A' for these bits because that is what I expect to be
correct in due course.  They're all the simple "you're not vulnerable to
$X" bits, plus eIBRS which in practice is just a qualifying statement on
IBRS (already fully supported in guests).

The rest of MSR_ARCH_CAPS is pretty much a dumping ground for all of the
controls we can't give to guests under any circumstance.  (FB_CLEAR_CTRL
might be an exception - allegedly we might want to give it to guests
which have passthrough and trust their userspace, but I'm unconvinced of
this argument and am going to insist on concrete numbers from anyone
wanting to try and implement this usecase.)

But there certainly could be a feature in there in the future where we
leave it at 'a' for a while...  It's just feature bitmap data in a
non-CPUID form factor.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 16 19:36:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:36:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535564.833408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0T5-00021a-3E; Tue, 16 May 2023 19:36:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535564.833408; Tue, 16 May 2023 19:36:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0T5-00021R-0M; Tue, 16 May 2023 19:36:19 +0000
Received: by outflank-mailman (input) for mailman id 535564;
 Tue, 16 May 2023 19:36:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0T3-00020V-Dc
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:36:17 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eb689075-f420-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:36:15 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 1FE2E63E71;
 Tue, 16 May 2023 19:36:14 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 06C9EC4339B;
 Tue, 16 May 2023 19:36:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb689075-f420-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265773;
	bh=+r42W67OGDQQqAy2PIbpqRJtm86RNedMVE1BNsE+HVE=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=AOBuVWv84dy+YEirfBc0rOgB6n5+cRsHG61qC/Lnfi2dGn7Jv+JKh/H02eWWfMUF3
	 Ex2ZkwkMb7pU24/iLpb6Y8hKTZK4ttU1eJ2NobvkypPk2O2EnpVw+0yr+dF/PpkZQx
	 3eL/opgEW1QD86klpYN2w8CV+JEExP433BmmgiImAub0k+8is2vhhwVU/VPp6XDlfH
	 3XztpmqkQ5qaQ29gLQgBc9KgSs6vC4RdNI63NsfmLz+6t5LxvmdPqe41UvKb/ArBhl
	 S7oIwLXWs1/hwyZUUfG2KjSn5ZNqk85FsE1YOlhxhVLrZXk744pPU+X1xr/Njx6aQp
	 zQ2/DkqcjEvOQ==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 01/20] x86: move prepare_ftrace_return prototype to header
Date: Tue, 16 May 2023 21:35:30 +0200
Message-Id: <20230516193549.544673-2-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

On 32-bit builds, the prepare_ftrace_return() function only has a global
definition, but no prototype before it, which causes a warning:

arch/x86/kernel/ftrace.c:625:6: warning: no previous prototype for ‘prepare_ftrace_return’ [-Wmissing-prototypes]
  625 | void prepare_ftrace_return(unsigned long ip, unsigned long *parent,

Move the prototype that is already needed for some configurations into
a header file where it can be seen unconditionally.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/include/asm/ftrace.h | 3 +++
 arch/x86/kernel/ftrace.c      | 3 ---
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index 5061ac98ffa1..b8d4a07f9595 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -106,6 +106,9 @@ struct dyn_arch_ftrace {
 
 #ifndef __ASSEMBLY__
 
+void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
+			   unsigned long frame_pointer);
+
 #if defined(CONFIG_FUNCTION_TRACER) && defined(CONFIG_DYNAMIC_FTRACE)
 extern void set_ftrace_ops_ro(void);
 #else
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 5e7ead52cfdb..01e8f34daf22 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -525,9 +525,6 @@ static void *addr_from_call(void *ptr)
 	return ptr + CALL_INSN_SIZE + call.disp;
 }
 
-void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
-			   unsigned long frame_pointer);
-
 /*
  * If the ops->trampoline was not allocated, then it probably
  * has a static trampoline func, or is the ftrace caller itself.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:36:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:36:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535563.833398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0Sy-0001kR-T4; Tue, 16 May 2023 19:36:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535563.833398; Tue, 16 May 2023 19:36:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0Sy-0001kK-PA; Tue, 16 May 2023 19:36:12 +0000
Received: by outflank-mailman (input) for mailman id 535563;
 Tue, 16 May 2023 19:36:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0Sx-0001kC-Ps
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:36:11 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7e5406d-f420-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:36:09 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 21AD563CFE;
 Tue, 16 May 2023 19:36:08 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E5E20C433EF;
 Tue, 16 May 2023 19:36:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7e5406d-f420-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265767;
	bh=oAoSmF3XxuLgHtI7FvZzT7auqFocUXgu+0MePwNsyrE=;
	h=From:To:Cc:Subject:Date:From;
	b=KwDbO61f3SBWhJix8Gr20ak/Qqboqqh3ldX/gA4euZLkNvRlBoYXYxgW34oKD5+bq
	 lgA3GGHMsATzDaZfCh2jM0WzWNvEFjiOGPlsVGcQu/sXjiIoK3zNv5yWDm1DhznX7Y
	 9C89B6xO5lEbyaGUNpJ+Oy07z9iVjaOP927NmsglHmbwU2P2dm4OwFSIPr5PJ2r5Gz
	 1qqAqLg3NzZPU+tuOvCbIzt9KUilvGUbJ8Y+hlsLmDHIjedHIx/H1y1Bj+xZR8Goe8
	 03nF3w/T1w6FhjESnpTOJw33XxK0QUG9UAP0KyuSJ0VYZg4J2UxgOMGzi+kT2/XCMT
	 GZd4OSYv/yqOQ==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 00/20] x86: address -Wmissing-prototype warnings
Date: Tue, 16 May 2023 21:35:29 +0200
Message-Id: <20230516193549.544673-1-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

This addresses all x86 specific prototype warnings. The majority of the
patches should be straightforward, either adding an #include statement
to get the right header, or ensuring that an unused global function is
left out of the build when the prototype is hidden.

The ones that are a bit awkward are those that just add a prototype to
shut up the warning, but the prototypes are never used for calling the
function because the only caller is in assembler code. I tried to come up
with other ways to shut up the compiler using the asmlinkage annotation,
but with no success.

All of the warnings have to be addressed in some form before the warning
can be enabled by default.

    Arnd

Link: https://people.kernel.org/arnd/missing-prototype-warnings-in-the-kernel


Arnd Bergmann (20):
  x86: move prepare_ftrace_return prototype to header
  x86: ce4100: Mark local functions as 'static'
  x86: apic: hide unused safe_smp_processor_id on UP
  x86: avoid unneeded __div64_32 function definition
  x86: head: add dummy prototype for mk_early_pgtbl_32
  x86: math-emu: include asm/fpu/regset.h
  x86: doublefault: avoid missing-prototype warnings
  x86: highmem: include asm/numa.h for set_highmem_pages_init
  x86: platform_quirks: include linux/pnp.h for arch_pnpbios_disabled
  x86: xen: add missing prototypes
  x86: entry: add  do_SYSENTER_32() prototype
  x86: qspinlock-paravirt: fix missing-prototype warnings
  x86: hibernate: declare global functions in suspend.h
  x86: fbdev: include asm/fb.h as needed
  x86: mce: add copy_mc_fragile_handle_tail prototype
  x86: vdso: include vdso/processor.h
  x86: usercopy: include arch_wb_cache_pmem declaration
  x86: ioremap: add early_memremap_pgprot_adjust prototype
  x86: purgatory: include header for warn() declaration
  x86: olpc: avoid missing-prototype warnings

 arch/x86/boot/compressed/error.c          |  2 +-
 arch/x86/boot/compressed/error.h          |  2 +-
 arch/x86/entry/vdso/vgetcpu.c             |  1 +
 arch/x86/include/asm/div64.h              |  2 ++
 arch/x86/include/asm/doublefault.h        |  4 ++++
 arch/x86/include/asm/ftrace.h             |  3 +++
 arch/x86/include/asm/mce.h                |  3 +++
 arch/x86/include/asm/qspinlock_paravirt.h |  2 ++
 arch/x86/include/asm/syscall.h            |  6 ++++--
 arch/x86/kernel/apic/ipi.c                |  2 ++
 arch/x86/kernel/doublefault_32.c          |  1 +
 arch/x86/kernel/ftrace.c                  |  3 ---
 arch/x86/kernel/head32.c                  |  1 +
 arch/x86/kernel/paravirt.c                |  2 ++
 arch/x86/kernel/platform-quirks.c         |  1 +
 arch/x86/lib/usercopy_64.c                |  1 +
 arch/x86/math-emu/fpu_entry.c             |  1 +
 arch/x86/mm/highmem_32.c                  |  1 +
 arch/x86/pci/ce4100.c                     |  4 ++--
 arch/x86/platform/olpc/olpc_dt.c          |  2 +-
 arch/x86/purgatory/purgatory.c            |  1 +
 arch/x86/video/fbdev.c                    |  1 +
 arch/x86/xen/efi.c                        |  2 ++
 arch/x86/xen/smp.h                        |  3 +++
 arch/x86/xen/xen-ops.h                    | 14 ++++++++++++++
 include/linux/io.h                        |  5 +++++
 include/linux/olpc-ec.h                   |  2 ++
 include/linux/suspend.h                   |  4 ++++
 include/xen/xen.h                         |  3 +++
 kernel/locking/qspinlock_paravirt.h       | 20 ++++++++++----------
 kernel/power/power.h                      |  5 -----
 mm/internal.h                             |  6 ------
 32 files changed, 79 insertions(+), 31 deletions(-)

-- 
2.39.2

Cc: Thomas Gleixner <tglx@linutronix.de> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),commit_signer:1/5=20%,authored:1/5=20%)
Cc: Ingo Molnar <mingo@redhat.com> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Cc: Borislav Petkov <bp@alien8.de> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),commit_signer:1/3=33%,commit_signer:1/5=20%,authored:1/5=20%,removed_lines:40/51=78%)
Cc: Dave Hansen <dave.hansen@linux.intel.com> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),commit_signer:1/5=20%)
Cc: x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Cc: "H. Peter Anvin" <hpa@zytor.com> (reviewer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Cc: Andy Lutomirski <luto@kernel.org> (maintainer:X86 VDSO)
Cc: Steven Rostedt <rostedt@goodmis.org> (maintainer:FUNCTION HOOKS (FTRACE))
Cc: Masami Hiramatsu <mhiramat@kernel.org> (maintainer:FUNCTION HOOKS (FTRACE))
Cc: Mark Rutland <mark.rutland@arm.com> (reviewer:FUNCTION HOOKS (FTRACE))
Cc: Juergen Gross <jgross@suse.com> (supporter:PARAVIRT_OPS INTERFACE,commit_signer:2/5=40%,authored:1/5=20%,added_lines:20/31=65%,removed_lines:27/35=77%)
Cc: "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu> (supporter:PARAVIRT_OPS INTERFACE)
Cc: Alexey Makhalov <amakhalov@vmware.com> (reviewer:PARAVIRT_OPS INTERFACE)
Cc: VMware PV-Drivers Reviewers <pv-drivers@vmware.com> (reviewer:PARAVIRT_OPS INTERFACE)
Cc: Peter Zijlstra <peterz@infradead.org> (maintainer:X86 MM,commit_signer:4/5=80%,commit_signer:1/2=50%)
Cc: Darren Hart <dvhart@infradead.org> (reviewer:X86 PLATFORM DRIVERS - ARCH)
Cc: Andy Shevchenko <andy@infradead.org> (reviewer:X86 PLATFORM DRIVERS - ARCH)
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> (reviewer:XEN HYPERVISOR X86)
Cc: "Rafael J. Wysocki" <rafael@kernel.org> (supporter:HIBERNATION (aka Software Suspend, aka swsusp))
Cc: linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Cc: linux-trace-kernel@vger.kernel.org (open list:FUNCTION HOOKS (FTRACE))
Cc: virtualization@lists.linux-foundation.org (open list:PARAVIRT_OPS INTERFACE)
Cc: linux-pci@vger.kernel.org (open list:PCI SUBSYSTEM)
Cc: platform-driver-x86@vger.kernel.org (open list:X86 PLATFORM DRIVERS - ARCH)
Cc: xen-devel@lists.xenproject.org (moderated list:XEN HYPERVISOR X86)
Cc: linux-pm@vger.kernel.org (open list:HIBERNATION (aka Software Suspend, aka swsusp))
Cc: linux-mm@kvack.org (open list:MEMORY MANAGEMENT)



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:36:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:36:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535565.833418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TA-0002Lc-EL; Tue, 16 May 2023 19:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535565.833418; Tue, 16 May 2023 19:36:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TA-0002LM-8x; Tue, 16 May 2023 19:36:24 +0000
Received: by outflank-mailman (input) for mailman id 535565;
 Tue, 16 May 2023 19:36:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0T9-00020V-0c
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:36:23 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ef2613f6-f420-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:36:21 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 6283163DB0;
 Tue, 16 May 2023 19:36:20 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3326DC4339C;
 Tue, 16 May 2023 19:36:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef2613f6-f420-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265779;
	bh=1y8qB7kv+xDWqxOYw4K9L6tE3JzzzyJ6tx4iNSAnwRU=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=N1TfAHj/hol7TROTcUR6gshmlOixx+65a3Ny2hHWBSJC5v+DkbnglNdZhuHcyHDF1
	 TGrpdPBwNN+NswjZNBi8Y8axdkLspbxGTmhWJm3FISvx8uYKpFvWNvMYUCeI0BwiSq
	 o2a0BV8veTqGfq/kZefozxrUC+JvKPGZDvIk8eukSQlrMDWDfPGou7ctUpolUaZ2wS
	 VqKfOGtBPnd7AVKTmQ7p3HUnf7CiL3BOgYkssO0EHfiTrF/sy6ARFEYqTGdMUd/NC6
	 B708eZZdFX4plzFfEgIC/q+8LA6b9xp6feRVGoJ6xTNav8yRpOnn1MkAw3aOWScHlE
	 /IvgTFvadxfTQ==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 02/20] x86: ce4100: Mark local functions as 'static'
Date: Tue, 16 May 2023 21:35:31 +0200
Message-Id: <20230516193549.544673-3-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

Two functions in this file are global but have no prototype in
a header and are not called from elsewhere, so they should
be static:

arch/x86/pci/ce4100.c:86:6: error: no previous prototype for 'sata_revid_init' [-Werror=missing-prototypes]
arch/x86/pci/ce4100.c:175:5: error: no previous prototype for 'bridge_read' [-Werror=missing-prototypes]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/pci/ce4100.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/pci/ce4100.c b/arch/x86/pci/ce4100.c
index 584c25b588b4..87313701f069 100644
--- a/arch/x86/pci/ce4100.c
+++ b/arch/x86/pci/ce4100.c
@@ -83,7 +83,7 @@ static void ehci_reg_read(struct sim_dev_reg *reg, u32 *value)
 		*value |= 0x100;
 }
 
-void sata_revid_init(struct sim_dev_reg *reg)
+static void sata_revid_init(struct sim_dev_reg *reg)
 {
 	reg->sim_reg.value = 0x01060100;
 	reg->sim_reg.mask = 0;
@@ -172,7 +172,7 @@ static inline void extract_bytes(u32 *value, int reg, int len)
 	*value &= mask;
 }
 
-int bridge_read(unsigned int devfn, int reg, int len, u32 *value)
+static int bridge_read(unsigned int devfn, int reg, int len, u32 *value)
 {
 	u32 av_bridge_base, av_bridge_limit;
 	int retval = 0;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:36:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535566.833428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TF-0002p3-Lp; Tue, 16 May 2023 19:36:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535566.833428; Tue, 16 May 2023 19:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TF-0002oV-Ib; Tue, 16 May 2023 19:36:29 +0000
Received: by outflank-mailman (input) for mailman id 535566;
 Tue, 16 May 2023 19:36:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0TE-00020V-Pg
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:36:28 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2925092-f420-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:36:27 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 26E8E63E90;
 Tue, 16 May 2023 19:36:26 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 50CFDC433D2;
 Tue, 16 May 2023 19:36:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2925092-f420-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265786;
	bh=L/Qp31ZARnIa1bxC1DyJGeKMdFS5Wgtrq4kpQkQu2zw=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=bB9/JNuGXy+CuuSmBA1mIJQRdm3PDhpA4MNWL9vDjPPUv0Axx1Q2VLtiOmu5h4wlO
	 ZIiveTA4vEzmvtg6jqHfnB2IA/KyFoLv58fU1C/G4RLxDtlLb+qWaDnPLIHqxp+uJN
	 YuIEMEzwPeJyaUbpe/498ig/Aj8SroIALwN5a9JdhgsmlVncnEE6j0ApQr0CD6wwpG
	 uayon8hFFnce7FZ+xk3f501jDfoYnSDBCEasqeaP9EuAKAaVBUhLY0N+aATStZTdOq
	 FxWM3nrGI7ppoWHfghhJ25EqNePfkhNvOQNgYm+lOBRhMsfTshiuFBM/UygrVqzdJr
	 FaoOyV2QyNahg==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 03/20] x86: apic: hide unused safe_smp_processor_id on UP
Date: Tue, 16 May 2023 21:35:32 +0200
Message-Id: <20230516193549.544673-4-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

When CONFIG_SMP is disabled, the prototype for safe_smp_processor_id()
is hidden, which causes a W=1 warning:

/home/arnd/arm-soc/arch/x86/kernel/apic/ipi.c:316:5: error: no previous prototype for 'safe_smp_processor_id' [-Werror=missing-prototypes]

Since there are no callers in this configuration, just hide the definition
as well.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/kernel/apic/ipi.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/apic/ipi.c b/arch/x86/kernel/apic/ipi.c
index 2a6509e8c840..9bfd6e397384 100644
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -301,6 +301,7 @@ void default_send_IPI_mask_logical(const struct cpumask *cpumask, int vector)
 	local_irq_restore(flags);
 }
 
+#ifdef CONFIG_SMP
 /* must come after the send_IPI functions above for inlining */
 static int convert_apicid_to_cpu(int apic_id)
 {
@@ -329,3 +330,4 @@ int safe_smp_processor_id(void)
 	return cpuid >= 0 ? cpuid : 0;
 }
 #endif
+#endif
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:36:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:36:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535569.833438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TL-0003Kk-T0; Tue, 16 May 2023 19:36:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535569.833438; Tue, 16 May 2023 19:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TL-0003Kd-Pw; Tue, 16 May 2023 19:36:35 +0000
Received: by outflank-mailman (input) for mailman id 535569;
 Tue, 16 May 2023 19:36:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0TK-0001kC-IQ
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:36:34 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f671ba2c-f420-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:36:33 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9E97263E84;
 Tue, 16 May 2023 19:36:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6DA1EC4339B;
 Tue, 16 May 2023 19:36:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f671ba2c-f420-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265792;
	bh=NCehg1lkd+BZGGfLe6zB+YpAMyX3wRQS6FlZqTExKsY=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=pMmYMmj61h1ryVXMWXgEPNi4c4GgRS8wGlxnCd9UxrJ05iwjecyfTOduY+WoqWZpO
	 Cvm/AJCD+MXi+4Shut/6HQufiGCdANXUBd424+CiNufUW6Mt2CPMf2SX0Erut/1Yjz
	 CZmpdbkHOQ1xlqE6pSRjeDiYME9avcj4NHsxu1xrpZ1j/RY+4x7QvymeScI7FPaZ6A
	 7vr/gZpCa3X1Nt6lxKZnO93JGkaChZ3M73Qb3fsEPkq2r9hKpfzvuhPaebh6V7v7p/
	 EnZrHT3z5U/r3Es9dGNnRROzYJFVNwRgJqycAOQQGvYgjBv9ea9EtDpGHUpfp80+kM
	 +ZmQV+6dglcAw==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 04/20] x86: avoid unneeded __div64_32 function definition
Date: Tue, 16 May 2023 21:35:33 +0200
Message-Id: <20230516193549.544673-5-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

The __div64_32() function is provided for 32-bit architectures that
don't have a custom do_div() implementation. x86_32 has one, and
does not use the header file that declares the function prototype,
so the definition causes a W=1 warning:

lib/math/div64.c:31:32: error: no previous prototype for '__div64_32' [-Werror=missing-prototypes]

Define an empty macro to prevent the function definition from getting
built, which avoids the warning and saves a little .text space.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/include/asm/div64.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/div64.h b/arch/x86/include/asm/div64.h
index b8f1dc0761e4..9826d5fc12e3 100644
--- a/arch/x86/include/asm/div64.h
+++ b/arch/x86/include/asm/div64.h
@@ -71,6 +71,8 @@ static inline u64 mul_u32_u32(u32 a, u32 b)
 }
 #define mul_u32_u32 mul_u32_u32
 
+#define __div64_32 /* not needed */
+
 #else
 # include <asm-generic/div64.h>
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:36:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:36:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535570.833448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TS-0003lo-4x; Tue, 16 May 2023 19:36:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535570.833448; Tue, 16 May 2023 19:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TS-0003lc-1e; Tue, 16 May 2023 19:36:42 +0000
Received: by outflank-mailman (input) for mailman id 535570;
 Tue, 16 May 2023 19:36:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0TQ-00020V-W8
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:36:40 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f9db99f2-f420-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:36:39 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5DC6463E9F;
 Tue, 16 May 2023 19:36:38 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 83CFFC433A1;
 Tue, 16 May 2023 19:36:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9db99f2-f420-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265798;
	bh=9077UjXDS1JwWtjrb4zvyxiWevrXkl5r2sPVXNElvI0=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=nU4bcoQ15stgmO0CsrS+eCvsm+gCXhBNBFSdZUZJDZs6Fl6sPo6aQAEbL7HvpgQPw
	 34YU/6UqjzMRVqANQ++OmNHESa6zNkcaFm2LUOgnGFvueZRpIj33bI3tKq9iIVMOE8
	 GqW/XJRm7a1E2WWrjpdlejWdGAQtbuecvIZU5NAHZBM5oQh2e9un6it3oTQsqi/lTG
	 5kxTBEILwkSTC+lRDq4IqynAvKjTaueAozerHdsQ+TGjB+//lEcb2OOHgN+9Q9axD4
	 hkpT/62nrOrSL8vWP+nEXayygL8S/uqGna9VmOHK6w0B/JcOokBXElfO8XCeFSQKdo
	 bT2kB6PN6dMAg==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 05/20] x86: head: add dummy prototype for mk_early_pgtbl_32
Date: Tue, 16 May 2023 21:35:34 +0200
Message-Id: <20230516193549.544673-6-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

'make W=1' warns about a function without a prototype in the x86-32 head code:

arch/x86/kernel/head32.c:72:13: error: no previous prototype for 'mk_early_pgtbl_32' [-Werror=missing-prototypes]

This is called from assembler code, so it does not actually need a prototype.
I could not find an appropriate header for it, so just declare it in front
of the definition to shut up the warning.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/kernel/head32.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/head32.c b/arch/x86/kernel/head32.c
index 10c27b4261eb..246a609f889b 100644
--- a/arch/x86/kernel/head32.c
+++ b/arch/x86/kernel/head32.c
@@ -69,6 +69,7 @@ asmlinkage __visible void __init __noreturn i386_start_kernel(void)
  * to the first kernel PMD. Note the upper half of each PMD or PTE are
  * always zero at this stage.
  */
+void __init mk_early_pgtbl_32(void);
 void __init mk_early_pgtbl_32(void)
 {
 #ifdef __pa
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:36:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:36:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535577.833458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TY-0004LS-H2; Tue, 16 May 2023 19:36:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535577.833458; Tue, 16 May 2023 19:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0TY-0004LK-CJ; Tue, 16 May 2023 19:36:48 +0000
Received: by outflank-mailman (input) for mailman id 535577;
 Tue, 16 May 2023 19:36:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0TX-00020V-72
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:36:47 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fd9167d8-f420-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:36:45 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 640B963E9B;
 Tue, 16 May 2023 19:36:44 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 98313C433A4;
 Tue, 16 May 2023 19:36:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fd9167d8-f420-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265804;
	bh=LoIviGdfSLH1ko7JOljz+WAWBtvFrxU7baITA30Kitg=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=f0SoTDZCBXANhO1JcRHoynXaLVhlO4XJ4vTd5g+5q0mXl9+dwihXQSjEJ2aF5PXD/
	 YJxxo2zEzGeYvPkjnpK4ydF8PNotm4sCoP7sIPFgPh0+jYX3ClFJbP9hWa/pr/81FH
	 rmiMLwAicNzJUMwErqx6DSJsta/whdlp/sC/tZT1cvxj5paHYr0r+8Nyyo5u5xkv5T
	 3r84XCOMO59rOsDFBM8wNIcW0zhbFHV1ety++q1mSUr8ScvUEZcAnGwKVpfu1h2uwR
	 waDu2tWolCE9YEYKKmG7d67vPlyLJ4ZYk6cOavjtt4C56mPdY+e8uvRCKegTvnFXKC
	 xnJG4ywBEjUvw==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 06/20] x86: math-emu: include asm/fpu/regset.h
Date: Tue, 16 May 2023 21:35:35 +0200
Message-Id: <20230516193549.544673-7-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

The fpregs_soft_set/fpregs_soft_get functions are declared in a
header that is not included in the file that defines them, causing
a W=1 warning:

/home/arnd/arm-soc/arch/x86/math-emu/fpu_entry.c:638:5: error: no previous prototype for 'fpregs_soft_set' [-Werror=missing-prototypes]
  638 | int fpregs_soft_set(struct task_struct *target,
      |     ^~~~~~~~~~~~~~~
/home/arnd/arm-soc/arch/x86/math-emu/fpu_entry.c:690:5: error: no previous prototype for 'fpregs_soft_get' [-Werror=missing-prototypes]
  690 | int fpregs_soft_get(struct task_struct *target,

Include the file here to avoid the warning.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/math-emu/fpu_entry.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/math-emu/fpu_entry.c b/arch/x86/math-emu/fpu_entry.c
index 7fe56c594aa6..91c52ead1226 100644
--- a/arch/x86/math-emu/fpu_entry.c
+++ b/arch/x86/math-emu/fpu_entry.c
@@ -32,6 +32,7 @@
 #include <asm/traps.h>
 #include <asm/user.h>
 #include <asm/fpu/api.h>
+#include <asm/fpu/regset.h>
 
 #include "fpu_system.h"
 #include "fpu_emu.h"
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:36:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535579.833468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0Te-0004rI-PB; Tue, 16 May 2023 19:36:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535579.833468; Tue, 16 May 2023 19:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0Te-0004r7-Kw; Tue, 16 May 2023 19:36:54 +0000
Received: by outflank-mailman (input) for mailman id 535579;
 Tue, 16 May 2023 19:36:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0Td-00020V-A3
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:36:53 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 011342a9-f421-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:36:51 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 7895663DB9;
 Tue, 16 May 2023 19:36:50 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AB288C433D2;
 Tue, 16 May 2023 19:36:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 011342a9-f421-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265810;
	bh=UIw2cpO0M7mgQJomy8Z1KQIbzqHPSHzZkNPw0ZSZdik=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=JMo/H+dcAwoiU2BvKT+/qzV35hAeKJ89YTDQRfZVp5VuxZ7AAwO6XIq6mGKGPLHPM
	 n9Kgl0NxgfysUuyDXpoIpuKvgQDWBsj4qr+esrqhkg9QY7EaCjoIEcFFWSEs2pEGzA
	 FnkLwXZ2J/6v8Go5SZJp9S7qOEGXEQwUg5o1OEKHvC0coA5trawZlHiYQ1s91VRsLo
	 WQUb4Fjw53QsgtZj9Y0iaE8RIw4x3ZsPuMC/axFjq2EAqnJ7Ryi+ghY1T0ksu98x57
	 czRJOOwVX2pWrVR691opbv/O3yATTIWa99EE41FEQBTMfS2DReIvbrilOoBUrVW/Kl
	 t+ouxwrQLLJeA==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 07/20] x86: doublefault: avoid missing-prototype warnings
Date: Tue, 16 May 2023 21:35:36 +0200
Message-Id: <20230516193549.544673-8-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

Two functions in the 32-bit doublefault code are lacking a prototype:

arch/x86/kernel/doublefault_32.c:23:36: error: no previous prototype for 'doublefault_shim' [-Werror=missing-prototypes]
   23 | asmlinkage noinstr void __noreturn doublefault_shim(void)
      |                                    ^~~~~~~~~~~~~~~~
arch/x86/kernel/doublefault_32.c:114:6: error: no previous prototype for 'doublefault_init_cpu_tss' [-Werror=missing-prototypes]
  114 | void doublefault_init_cpu_tss(void)

The first one is only called from assembler, while the second one is
declared in doublefault.h, but that header is not included here.

Include the header file and add the other declaration there as well.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/include/asm/doublefault.h | 4 ++++
 arch/x86/kernel/doublefault_32.c   | 1 +
 2 files changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/doublefault.h b/arch/x86/include/asm/doublefault.h
index 54a6e4a2e132..de0e88b32207 100644
--- a/arch/x86/include/asm/doublefault.h
+++ b/arch/x86/include/asm/doublefault.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_X86_DOUBLEFAULT_H
 #define _ASM_X86_DOUBLEFAULT_H
 
+#include <linux/linkage.h>
+
 #ifdef CONFIG_X86_32
 extern void doublefault_init_cpu_tss(void);
 #else
@@ -10,4 +12,6 @@ static inline void doublefault_init_cpu_tss(void)
 }
 #endif
 
+asmlinkage void __noreturn doublefault_shim(void);
+
 #endif /* _ASM_X86_DOUBLEFAULT_H */
diff --git a/arch/x86/kernel/doublefault_32.c b/arch/x86/kernel/doublefault_32.c
index 3b58d8703094..6eaf9a6bc02f 100644
--- a/arch/x86/kernel/doublefault_32.c
+++ b/arch/x86/kernel/doublefault_32.c
@@ -9,6 +9,7 @@
 #include <asm/processor.h>
 #include <asm/desc.h>
 #include <asm/traps.h>
+#include <asm/doublefault.h>
 
 #define ptr_ok(x) ((x) > PAGE_OFFSET && (x) < PAGE_OFFSET + MAXMEM)
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:37:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:37:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535585.833478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0Tk-0005Lo-0o; Tue, 16 May 2023 19:37:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535585.833478; Tue, 16 May 2023 19:36:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0Tj-0005LZ-Ti; Tue, 16 May 2023 19:36:59 +0000
Received: by outflank-mailman (input) for mailman id 535585;
 Tue, 16 May 2023 19:36:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0Tj-0001kC-BU
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:36:59 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04f4fb5e-f421-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:36:58 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E42CB63E92;
 Tue, 16 May 2023 19:36:56 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BF1AFC4339C;
 Tue, 16 May 2023 19:36:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04f4fb5e-f421-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265816;
	bh=QhfcjWc94ABit5CqNae5vJwGdRyTTfKtDRBYc+KBnK8=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=MvDSmMsr+ldstYZFWxt0d/Qx2DvIFE1q3iE1Bqb+M+VmlMkwcfXpUcl5LvraEDI6S
	 HnWIuY+SqiBpKZk8dhmFw5ETZeXvKwhiOJcQGWdhuNVfBEaKkHmXH0FVPCeGls5BT5
	 vQMZLkokAMdjoie4w/QtXptZ4gsEiXXyWgx+GJHuazf/GDKjvdN6Gt3EWJ4oMBeTkr
	 FvGz1X+Q4Zbr9Q6Y3k/+TvHBSHypXy399/8SZG8zFY/uPkbqkgKPF1SzwLyfarV+rm
	 t6q653S9gdfUGKRgliLv3S1pFZQjCqHtNzTcpq1utdvmK5duCxYK7yPT5e0S7iC690
	 mE1gVTCKwKBiQ==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 08/20] x86: highmem: include asm/numa.h for set_highmem_pages_init
Date: Tue, 16 May 2023 21:35:37 +0200
Message-Id: <20230516193549.544673-9-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

The set_highmem_pages_init() function is declared in asm/numa.h, which
must be included in the file that defines it to avoid a W=1 warning:

arch/x86/mm/highmem_32.c:7:13: error: no previous prototype for 'set_highmem_pages_init' [-Werror=missing-prototypes]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/mm/highmem_32.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index 2c54b76d8f84..d9efa35711ee 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -3,6 +3,7 @@
 #include <linux/export.h>
 #include <linux/swap.h> /* for totalram_pages */
 #include <linux/memblock.h>
+#include <asm/numa.h>
 
 void __init set_highmem_pages_init(void)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:39:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:39:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535599.833499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0WK-00073f-OV; Tue, 16 May 2023 19:39:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535599.833499; Tue, 16 May 2023 19:39:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0WK-00073W-JR; Tue, 16 May 2023 19:39:40 +0000
Received: by outflank-mailman (input) for mailman id 535599;
 Tue, 16 May 2023 19:39:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0U8-00020V-8Q
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:24 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1386c707-f421-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:37:22 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 6D7D063E9D;
 Tue, 16 May 2023 19:37:21 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 31EEEC433D2;
 Tue, 16 May 2023 19:37:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1386c707-f421-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265840;
	bh=HzuyPJn9FfyStpC9b/gY3UOkJ1RIpy94E010iL3VMAA=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=lyMTmn90wbr/qFgLkHQFxSX2cjoGcxvnL1nVSavn0cO0jwQAdZ0abIUvOOmHaJE4h
	 SRKGKH4YuALaqYyeDXwC2rmXjbD8x3uOm+2hb9k6zSm0RE9k2ye4pRA7pKDh/KFcUg
	 74ThzTAjh7UyyblnQImUp9i/blUb9mBsmos5h+x7AgL6jhJm2i+7HlTCuZjmpcKHzQ
	 a92UzT+0fGfuHuE9dTXbB60MlSLud/EeFS1nvtAW1I2R9HUD4pSyK9Lo7mDNBfABSe
	 uLCctFq8BxZkKc2zL5wl5wy4XGrTMum5M9PugMRDnqjPC1nckk1AZSNKzdl5PpJ2N5
	 Yjoqi6uLxWodA==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 12/20] x86: qspinlock-paravirt: fix missing-prototype warnings
Date: Tue, 16 May 2023 21:35:41 +0200
Message-Id: <20230516193549.544673-13-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

__pv_queued_spin_unlock_slowpath() is defined in a header file as a global
function and is designed to be called from inline asm, but no prototype
is visible at its definition site:

kernel/locking/qspinlock_paravirt.h:493:1: error: no previous prototype for '__pv_queued_spin_unlock_slowpath' [-Werror=missing-prototypes]

Add the prototype to the x86 header that contains the inline asm calling
it, and ensure that header gets included before the definition, rather
than after it.

The native_pv_lock_init() function in turn is only declared in SMP
builds, so its definition can be left out of UP builds to avoid another warning:

arch/x86/kernel/paravirt.c:76:13: error: no previous prototype for 'native_pv_lock_init' [-Werror=missing-prototypes]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/include/asm/qspinlock_paravirt.h |  2 ++
 arch/x86/kernel/paravirt.c                |  2 ++
 kernel/locking/qspinlock_paravirt.h       | 20 ++++++++++----------
 3 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/qspinlock_paravirt.h b/arch/x86/include/asm/qspinlock_paravirt.h
index 42b17cf10b10..85b6e3609cb9 100644
--- a/arch/x86/include/asm/qspinlock_paravirt.h
+++ b/arch/x86/include/asm/qspinlock_paravirt.h
@@ -4,6 +4,8 @@
 
 #include <asm/ibt.h>
 
+void __lockfunc __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked);
+
 /*
  * For x86-64, PV_CALLEE_SAVE_REGS_THUNK() saves and restores 8 64-bit
  * registers. For i386, however, only 1 32-bit register needs to be saved
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index ac10b46c5832..eb67aa4cc5ef 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -73,11 +73,13 @@ DEFINE_PARAVIRT_ASM(pv_native_read_cr2, "mov %cr2, %rax", .noinstr.text);
 
 DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
 
+#ifdef CONFIG_SMP
 void __init native_pv_lock_init(void)
 {
 	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
 		static_branch_disable(&virt_spin_lock_key);
 }
+#endif
 
 unsigned int paravirt_patch(u8 type, void *insn_buff, unsigned long addr,
 			    unsigned int len)
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 6afc249ce697..6a0184e9c234 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -485,6 +485,16 @@ pv_wait_head_or_lock(struct qspinlock *lock, struct mcs_spinlock *node)
 	return (u32)(atomic_read(&lock->val) | _Q_LOCKED_VAL);
 }
 
+/*
+ * Include the architecture specific callee-save thunk of the
+ * __pv_queued_spin_unlock(). This thunk is put together with
+ * __pv_queued_spin_unlock() to make the callee-save thunk and the real unlock
+ * function close to each other sharing consecutive instruction cachelines.
+ * Alternatively, architecture specific version of __pv_queued_spin_unlock()
+ * can be defined.
+ */
+#include <asm/qspinlock_paravirt.h>
+
 /*
  * PV versions of the unlock fastpath and slowpath functions to be used
  * instead of queued_spin_unlock().
@@ -533,16 +543,6 @@ __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked)
 	pv_kick(node->cpu);
 }
 
-/*
- * Include the architecture specific callee-save thunk of the
- * __pv_queued_spin_unlock(). This thunk is put together with
- * __pv_queued_spin_unlock() to make the callee-save thunk and the real unlock
- * function close to each other sharing consecutive instruction cachelines.
- * Alternatively, architecture specific version of __pv_queued_spin_unlock()
- * can be defined.
- */
-#include <asm/qspinlock_paravirt.h>
-
 #ifndef __pv_queued_spin_unlock
 __visible __lockfunc void __pv_queued_spin_unlock(struct qspinlock *lock)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:39:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:39:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535598.833488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0WJ-0006oq-GW; Tue, 16 May 2023 19:39:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535598.833488; Tue, 16 May 2023 19:39:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0WJ-0006oj-CF; Tue, 16 May 2023 19:39:39 +0000
Received: by outflank-mailman (input) for mailman id 535598;
 Tue, 16 May 2023 19:39:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0Tp-00020V-VX
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:05 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 08b71e5d-f421-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:37:04 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4C09F63EB5;
 Tue, 16 May 2023 19:37:03 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CD2D1C433AC;
 Tue, 16 May 2023 19:36:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08b71e5d-f421-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265822;
	bh=DMCbu5Ck8nL16laCe8gzQ409+Zy7oFJ6XN+76haQQGQ=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=Q3CPblb9vbnnqFuY1tw61aFfK9l0Nhj6hfccpdSnjngrZvApvgdIxViPf4vjP57Qt
	 qFvGozg2KkPlPBN+rhtss57IV7tT1g8FtyasUBtWiN829+Muh+cn2shwW83ND+k0Cc
	 FplnF09GMWMj2fcC0b4/jJX2TTCfua7UEdLt0MdxFgk7PQtYpG9Ml8CxcQcizGiZaB
	 ZR03FhPL6TM5aGirSEx1n5ym6Z3bavzahWDpVyLa0GqRqs2Kjp4+nNz+wxsThyDUFs
	 rdCC/oV4Ld02UzX5Lf0ab8U3AP0KQT6Q7EydXL76RrAs/+USlWya2sP3b0UpG2emEd
	 pAwthPxEXYTPA==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 09/20] x86: platform_quirks: include linux/pnp.h for arch_pnpbios_disabled
Date: Tue, 16 May 2023 21:35:38 +0200
Message-Id: <20230516193549.544673-10-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

arch_pnpbios_disabled() is defined in x86 architecture code, but the
file defining it does not include the header that declares it, causing a warning:

arch/x86/kernel/platform-quirks.c:42:13: error: no previous prototype for 'arch_pnpbios_disabled' [-Werror=missing-prototypes]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/kernel/platform-quirks.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/platform-quirks.c b/arch/x86/kernel/platform-quirks.c
index b348a672f71d..b525fe6d6657 100644
--- a/arch/x86/kernel/platform-quirks.c
+++ b/arch/x86/kernel/platform-quirks.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/kernel.h>
 #include <linux/init.h>
+#include <linux/pnp.h>
 
 #include <asm/setup.h>
 #include <asm/bios_ebda.h>
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:39:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:39:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535600.833508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0WO-0007MW-WB; Tue, 16 May 2023 19:39:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535600.833508; Tue, 16 May 2023 19:39:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0WO-0007MN-RO; Tue, 16 May 2023 19:39:44 +0000
Received: by outflank-mailman (input) for mailman id 535600;
 Tue, 16 May 2023 19:39:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0Tv-00020V-QE
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:11 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0bf7e648-f421-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:37:09 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id BB27A63EC4;
 Tue, 16 May 2023 19:37:08 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EAB46C433D2;
 Tue, 16 May 2023 19:37:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bf7e648-f421-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265828;
	bh=9mGjBlJh/v4od95uyV8nA9/tvAI7Fp8HXXcZE+gBPXw=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=dchZZo9WqmTT5pXqQ91IEEzoZcfaltTNsqPX8uCQB2EtS7z6X18KvPkxGsTqXf4Im
	 FZs0O9Gu21RgjwUSracQfDCkupgQf81CrS65a+HuQhXicxa375U5tO5k2auE3ZkfIX
	 SKZBxS0YIyNz60tANL/6BP3KH5Ub5veXwRzbRJxH/VCBU7GKgNIbVhmkVUIQ8Vs5kv
	 5GszcMPQCxAVwJPTq5LqISilKOY4ClP6e9zRYaqOL34Yi0tc5tzpXafXUnVrKXNOO3
	 TsvuruEr2W76ZTF/RJBhwZwLfH80cVdeikTgUUcvyM1yAbkQhHNiX6snxmVt/CTQZJ
	 lgJDvr4/QBZSw==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 10/20] x86: xen: add missing prototypes
Date: Tue, 16 May 2023 21:35:39 +0200
Message-Id: <20230516193549.544673-11-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

arch/x86/xen/enlighten_pv.c:1233:34: error: no previous prototype for 'xen_start_kernel' [-Werror=missing-prototypes]
arch/x86/xen/irq.c:22:14: error: no previous prototype for 'xen_force_evtchn_callback' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:358:20: error: no previous prototype for 'xen_pte_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:366:20: error: no previous prototype for 'xen_pgd_val' [-Werror=missing-prototypes]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/xen/efi.c     |  2 ++
 arch/x86/xen/smp.h     |  3 +++
 arch/x86/xen/xen-ops.h | 14 ++++++++++++++
 include/xen/xen.h      |  3 +++
 4 files changed, 22 insertions(+)

diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
index 7d7ffb9c826a..863d0d6b3edc 100644
--- a/arch/x86/xen/efi.c
+++ b/arch/x86/xen/efi.c
@@ -16,6 +16,8 @@
 #include <asm/setup.h>
 #include <asm/xen/hypercall.h>
 
+#include "xen-ops.h"
+
 static efi_char16_t vendor[100] __initdata;
 
 static efi_system_table_t efi_systab_xen __initdata = {
diff --git a/arch/x86/xen/smp.h b/arch/x86/xen/smp.h
index 22fb982ff971..cbc45e2462f5 100644
--- a/arch/x86/xen/smp.h
+++ b/arch/x86/xen/smp.h
@@ -2,6 +2,9 @@
 #ifndef _XEN_SMP_H
 
 #ifdef CONFIG_SMP
+
+asmlinkage void cpu_bringup_and_idle(void);
+
 extern void xen_send_IPI_mask(const struct cpumask *mask,
 			      int vector);
 extern void xen_send_IPI_mask_allbutself(const struct cpumask *mask,
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 6d7f6318fc07..0f71ee3fe86b 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -160,4 +160,18 @@ void xen_hvm_post_suspend(int suspend_cancelled);
 static inline void xen_hvm_post_suspend(int suspend_cancelled) {}
 #endif
 
+void xen_force_evtchn_callback(void);
+pteval_t xen_pte_val(pte_t pte);
+pgdval_t xen_pgd_val(pgd_t pgd);
+pte_t xen_make_pte(pteval_t pte);
+pgd_t xen_make_pgd(pgdval_t pgd);
+pmdval_t xen_pmd_val(pmd_t pmd);
+pmd_t xen_make_pmd(pmdval_t pmd);
+pudval_t xen_pud_val(pud_t pud);
+pud_t xen_make_pud(pudval_t pud);
+p4dval_t xen_p4d_val(p4d_t p4d);
+p4d_t xen_make_p4d(p4dval_t p4d);
+pte_t xen_make_pte_init(pteval_t pte);
+void xen_start_kernel(struct start_info *si);
+
 #endif /* XEN_OPS_H */
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 0efeb652f9b8..f989162983c3 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -31,6 +31,9 @@ extern uint32_t xen_start_flags;
 
 #include <xen/interface/hvm/start_info.h>
 extern struct hvm_start_info pvh_start_info;
+void xen_prepare_pvh(void);
+struct pt_regs;
+void xen_pv_evtchn_do_upcall(struct pt_regs *regs);
 
 #ifdef CONFIG_XEN_DOM0
 #include <xen/interface/xen.h>
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:39:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:39:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535602.833518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0WV-0007kO-Cx; Tue, 16 May 2023 19:39:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535602.833518; Tue, 16 May 2023 19:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0WV-0007kF-9f; Tue, 16 May 2023 19:39:51 +0000
Received: by outflank-mailman (input) for mailman id 535602;
 Tue, 16 May 2023 19:39:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0UD-0001kC-E9
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 17327862-f421-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:37:28 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 92C7963E89;
 Tue, 16 May 2023 19:37:27 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 57333C4339B;
 Tue, 16 May 2023 19:37:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17327862-f421-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265847;
	bh=J5EKXsQ8JpkrI9Gz8p7tvN4WBF349kQdKYOIvFSrsNU=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=XIN+eFgSG5axh0khDn+f/DfiFa2fy/Gl7HlqCu4DFGR/Gh/jJBZE6kogRn1pGLSf0
	 H7p3sJKdHoW15hlgh4PqRp1aSRxoJGu8xkUBD8UOV/7Wc3R0dnNCmKZ0BDGWJZb8D1
	 oH2yP6PbTSse4jjfW7V1Oui8ScgC+ip8wsTfipPJG3h+KlhBMS7cqSGZFq5vDlKiy0
	 RswG11Urln0rfwHreMBoCRqO+yKcqFn55K7Ppwg4ukMOh1RtnbSr9DTsibIdx1NEsh
	 DnzXMlfpiLzkeqt66J/wTfnFD+0D1bBh1mk3F7Als3cmaZdCeWg7QJDwUyVhPN3vUW
	 T784GEZx+Ea6A==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 13/20] x86: hibernate: declare global functions in suspend.h
Date: Tue, 16 May 2023 21:35:42 +0200
Message-Id: <20230516193549.544673-14-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

Three functions that are defined in x86-specific code to override
generic __weak implementations cause warnings because of missing
prototypes:

arch/x86/power/cpu.c:298:5: error: no previous prototype for 'hibernate_resume_nonboot_cpu_disable' [-Werror=missing-prototypes]
arch/x86/power/hibernate.c:129:5: error: no previous prototype for 'arch_hibernation_header_restore' [-Werror=missing-prototypes]
arch/x86/power/hibernate.c:91:5: error: no previous prototype for 'arch_hibernation_header_save' [-Werror=missing-prototypes]

Move the declarations into a global header that can be included
by any file defining one of these functions.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 include/linux/suspend.h | 4 ++++
 kernel/power/power.h    | 5 -----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index 7ec73e77e652..bc911fecb8e8 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -452,6 +452,10 @@ extern struct pbe *restore_pblist;
 int pfn_is_nosave(unsigned long pfn);
 
 int hibernate_quiet_exec(int (*func)(void *data), void *data);
+int hibernate_resume_nonboot_cpu_disable(void);
+int arch_hibernation_header_save(void *addr, unsigned int max_size);
+int arch_hibernation_header_restore(void *addr);
+
 #else /* CONFIG_HIBERNATION */
 static inline void register_nosave_region(unsigned long b, unsigned long e) {}
 static inline int swsusp_page_is_forbidden(struct page *p) { return 0; }
diff --git a/kernel/power/power.h b/kernel/power/power.h
index b83c8d5e188d..a6a16faf0ead 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -26,9 +26,6 @@ extern void __init hibernate_image_size_init(void);
 /* Maximum size of architecture specific data in a hibernation header */
 #define MAX_ARCH_HEADER_SIZE	(sizeof(struct new_utsname) + 4)
 
-extern int arch_hibernation_header_save(void *addr, unsigned int max_size);
-extern int arch_hibernation_header_restore(void *addr);
-
 static inline int init_header_complete(struct swsusp_info *info)
 {
 	return arch_hibernation_header_save(info, MAX_ARCH_HEADER_SIZE);
@@ -41,8 +38,6 @@ static inline const char *check_image_kernel(struct swsusp_info *info)
 }
 #endif /* CONFIG_ARCH_HIBERNATION_HEADER */
 
-extern int hibernate_resume_nonboot_cpu_disable(void);
-
 /*
  * Keep some memory free so that I/O operations can succeed without paging
  * [Might this be more than 4 MB?]
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:40:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:40:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535606.833528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0We-0008Su-LF; Tue, 16 May 2023 19:40:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535606.833528; Tue, 16 May 2023 19:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0We-0008Se-I3; Tue, 16 May 2023 19:40:00 +0000
Received: by outflank-mailman (input) for mailman id 535606;
 Tue, 16 May 2023 19:39:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0U1-0001kC-Tz
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:17 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0fdbeda6-f421-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:37:16 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4538563EA1;
 Tue, 16 May 2023 19:37:15 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0D45FC433EF;
 Tue, 16 May 2023 19:37:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fdbeda6-f421-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265834;
	bh=+aTnWzkkpkhZD/Q7mAQ3dRAZxC9ekU9RPhso/2vdjUI=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=iMBAvFb2VaJhJEHCkXY8xN9dopzfCYxzGAVtlxmPc4ecgoxsfcVqAYWppizbCyYp/
	 VTvljPsItuhT+KslruGdf17RZHnkXZTeI5SNB0HcVA3Q2is1QzaTJ7NIWpw6KBUWf/
	 zV4XavdE98t65jnuE5hyhNvdFmgDjcz0+x8JQ3Pa2M+Ke5oMTbpixoiurollUeJB73
	 blh6OUjzwzOwaiTkJeA6SFn1+VvUJj1MKmXFKzsle/9hQMjVA4gOvU8RBqzqzy+ig5
	 svmuwCrWyPhEepXDnslmThYsgGibYlGdcBhMAypATUJEGwuGF8kbOigthqqwhCFfB8
	 Ic0+xYNIrQ58Q==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 11/20] x86: entry: add do_SYSENTER_32() prototype
Date: Tue, 16 May 2023 21:35:40 +0200
Message-Id: <20230516193549.544673-12-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

The 32-bit system call entry points can be called on both 32-bit
and 64-bit kernels, but on the former the declarations are hidden:

arch/x86/entry/common.c:238:24: error: no previous prototype for 'do_SYSENTER_32' [-Werror=missing-prototypes]

Move them all out of the #ifdef block to avoid the warnings.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/include/asm/syscall.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/syscall.h b/arch/x86/include/asm/syscall.h
index 5b85987a5e97..4fb36fba4b5a 100644
--- a/arch/x86/include/asm/syscall.h
+++ b/arch/x86/include/asm/syscall.h
@@ -127,9 +127,11 @@ static inline int syscall_get_arch(struct task_struct *task)
 }
 
 void do_syscall_64(struct pt_regs *regs, int nr);
-void do_int80_syscall_32(struct pt_regs *regs);
-long do_fast_syscall_32(struct pt_regs *regs);
 
 #endif	/* CONFIG_X86_32 */
 
+void do_int80_syscall_32(struct pt_regs *regs);
+long do_fast_syscall_32(struct pt_regs *regs);
+long do_SYSENTER_32(struct pt_regs *regs);
+
 #endif	/* _ASM_X86_SYSCALL_H */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:40:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:40:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535623.833538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0XO-0001wp-VA; Tue, 16 May 2023 19:40:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535623.833538; Tue, 16 May 2023 19:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0XO-0001wi-SU; Tue, 16 May 2023 19:40:46 +0000
Received: by outflank-mailman (input) for mailman id 535623;
 Tue, 16 May 2023 19:40:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pz0XN-0001vw-Hs; Tue, 16 May 2023 19:40:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pz0XN-0001LW-EE; Tue, 16 May 2023 19:40:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pz0XN-0005FF-1q; Tue, 16 May 2023 19:40:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pz0XN-0001TZ-1C; Tue, 16 May 2023 19:40:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jP3S07rrqRd1VMhkUwVx4TAJ50QQZyoaEsx1dIRDyA8=; b=epZQxEwABz2fLb19l2Uz9pCQJ5
	lp7iFduZ7KfsC8ryM94pfLDWEkoai4FuYw1J5aZ2CSxw2ggZM2ukDOodIY5oqsJ9076sXaT/IhBNa
	aYcBQyLpLtSuobo5LDeuDbKxswgOwuhAucKz83OV645ERAJKaz8KztNbRvVuop9Cw75o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180679-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180679: regressions - trouble: broken/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    qemu-mainline:test-armhf-armhf-xl-credit2:host-install(5):broken:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=ab4c44d657aeca7e1da6d6dcb1741c8e7d357b8b
X-Osstest-Versions-That:
    qemuu=18b6727083acceac5d76ea0b8cb6f5cdef6858a7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 19:40:45 +0000

flight 180679 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180679/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2     <job status>                 broken
 test-armhf-armhf-xl-credit2   5 host-install(5)        broken REGR. vs. 180673
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 180673

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180673
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180673
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180673
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180673
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180673
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180673
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180673
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180673
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                ab4c44d657aeca7e1da6d6dcb1741c8e7d357b8b
baseline version:
 qemuu                18b6727083acceac5d76ea0b8cb6f5cdef6858a7

Last test of basis   180673  2023-05-15 17:38:45 Z    1 days
Testing same since   180679  2023-05-16 05:45:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrei Gudkov <gudkov.andrei@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Sam Li <faithilikerun@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit2 broken
broken-step test-armhf-armhf-xl-credit2 host-install(5)

Not pushing.

(No revision log; it would be 574 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 16 19:48:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:48:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535633.833548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eA-0002iW-OA; Tue, 16 May 2023 19:47:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535633.833548; Tue, 16 May 2023 19:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eA-0002iP-KV; Tue, 16 May 2023 19:47:46 +0000
Received: by outflank-mailman (input) for mailman id 535633;
 Tue, 16 May 2023 19:47:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0UK-00020V-Ap
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:36 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ad9d06d-f421-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:37:34 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B40FD63E94;
 Tue, 16 May 2023 19:37:33 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7B617C4339C;
 Tue, 16 May 2023 19:37:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ad9d06d-f421-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265853;
	bh=VUIVbi0JrRPfajUkNZclAh0STTeBs9rnHGdAvoTKkf0=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=mC+M7eKABjyFWUz3ahztx+5ej0UDKRHny6IDcOOkOBb7T8TKyoOo1fQHoeIhY+UJr
	 sMwYJWriml+CN5IT55dh+Y3N4Qkk1jy6sqNrI9NQRSoJY9QmUi1UgvCpWOpG8IrCEn
	 JOOcmypp8HYKXesh78Zw9Mn2B1hMfzJ5E0vHwnVhf9EWjHBjng/8AutDQDYVKbiQni
	 nhbYm4YAKuzLx1TphcnEliNa7XAqoVH4gMMnVTi3+fU11+7fSp55kbWmoZSJxYHgWE
	 bkp6SUDmTbzTpuvHIao5LyGY8w2RZRYUNYlNHTF5PTUou4JACCZ0eU58pOVKhjr1zj
	 +aqQ1XVbdlOGQ==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 14/20] x86: fbdev: include asm/fb.h as needed
Date: Tue, 16 May 2023 21:35:43 +0200
Message-Id: <20230516193549.544673-15-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

fb_is_primary_device() is defined as a global function on x86, unlike
the others that have an inline version. The file that defines it,
however, needs to include the declaration to avoid a warning:

arch/x86/video/fbdev.c:14:5: error: no previous prototype for 'fb_is_primary_device' [-Werror=missing-prototypes]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/video/fbdev.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/video/fbdev.c b/arch/x86/video/fbdev.c
index 9fd24846d094..9e9143085d19 100644
--- a/arch/x86/video/fbdev.c
+++ b/arch/x86/video/fbdev.c
@@ -10,6 +10,7 @@
 #include <linux/pci.h>
 #include <linux/module.h>
 #include <linux/vgaarb.h>
+#include <asm/fb.h>
 
 int fb_is_primary_device(struct fb_info *info)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:48:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:48:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535637.833568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eL-0003Jb-Dp; Tue, 16 May 2023 19:47:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535637.833568; Tue, 16 May 2023 19:47:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eL-0003JS-8X; Tue, 16 May 2023 19:47:57 +0000
Received: by outflank-mailman (input) for mailman id 535637;
 Tue, 16 May 2023 19:47:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0Uc-00020V-A1
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:54 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 258fa881-f421-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:37:52 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A17B563EC0;
 Tue, 16 May 2023 19:37:51 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CC9F0C4339C;
 Tue, 16 May 2023 19:37:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 258fa881-f421-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265871;
	bh=gSQe4II2iJtS+QQtWHfwrqCvLdxK7Wzo7e0HtFoFV5g=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=j20kxPBBFRIlxqq0mFZaJS7pQ1cCrLadjW3BuMisWulwenHNraGn3QWbXEUvelZB9
	 k8AScRxshCwAwrVHCxXAtcvrlQ6DnbnlQVpLaumX7l4EAkHg/GHsGYTQGiOvmWdeHV
	 nnvqCAqbFDpb6jdopaiVGQxZ8tQP14F6vLmY6rEGNbIsMSxZRZfWCAbLoJmiLJul9v
	 mHgb9qRmHAMRdPNwGH1ZJuxcpbeasYeDWHN2uDuIqWD3/Br6dub6irvZhT2xBqimyR
	 5zR/L6TK+jHMgzzDqKoPOtdy3hz/UK9lh7gf7JJ+DhFN6pkHpaPxXMMg+7/fxjhdum
	 PLw3akFNr0smA==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 17/20] x86: usercopy: include arch_wb_cache_pmem declaration
Date: Tue, 16 May 2023 21:35:46 +0200
Message-Id: <20230516193549.544673-18-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

arch_wb_cache_pmem() is declared in a global header but defined by
the architecture. On x86, the implementation needs to include the header
to avoid this warning:

arch/x86/lib/usercopy_64.c:39:6: error: no previous prototype for 'arch_wb_cache_pmem' [-Werror=missing-prototypes]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/lib/usercopy_64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
index 003d90138e20..e9251b89a9e9 100644
--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -9,6 +9,7 @@
 #include <linux/export.h>
 #include <linux/uaccess.h>
 #include <linux/highmem.h>
+#include <linux/libnvdimm.h>
 
 /*
  * Zero Userspace
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:48:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:48:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535635.833558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eD-0002yI-3a; Tue, 16 May 2023 19:47:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535635.833558; Tue, 16 May 2023 19:47:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eD-0002yB-10; Tue, 16 May 2023 19:47:49 +0000
Received: by outflank-mailman (input) for mailman id 535635;
 Tue, 16 May 2023 19:47:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0Uo-0001kC-2Y
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:38:06 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2d0c97a7-f421-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:38:05 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 3CF5F63EBA;
 Tue, 16 May 2023 19:38:04 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 05AE9C433A0;
 Tue, 16 May 2023 19:37:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d0c97a7-f421-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265883;
	bh=j24rSNRgDIgmGiT2yXwRSRo4ZogPVSrZkif28sPhRnU=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=AeITFFPyf4T1O9Bk8T43ZhPjTAAH6Vi8ipARvhqNOYnVJGtz4HnVAYe+g/BzZ7026
	 fDFAQ+Kwic0E1tpyAEjk9jRaCFQp+kn6j5QUapAXkZVr+cEC/j8Xd57pjknKmNJ40L
	 YYnEhKGr6uzcXrPYT/H7qqH1VyhkhaztOT4kZMMxyyb3x1FoZRxS3qYbRHji3EIhxJ
	 GIf04p+6OuzTuprxdmvEFq7KckfwLT/GHmqgM9hv9SmOoyBUuMT0F5NhocTm6xphV4
	 BngR4+JYE7FOd9LAO6LyB06VqLeGQ76wR+h5GIBrwd5zRDNTjAQuZi2Pu27xvTgXI9
	 pzCAFRefSXgnA==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 19/20] x86: purgatory: include header for warn() declaration
Date: Tue, 16 May 2023 21:35:48 +0200
Message-Id: <20230516193549.544673-20-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

The purgatory code uses parts of the decompressor and provides
its own warn() function, but has to include the corresponding
header file to avoid a -Wmissing-prototypes warning.

It turns out that the function definition actually differs
from the declaration, so change both the declaration and the
other definition to take a constant pointer.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/boot/compressed/error.c | 2 +-
 arch/x86/boot/compressed/error.h | 2 +-
 arch/x86/purgatory/purgatory.c   | 1 +
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/error.c b/arch/x86/boot/compressed/error.c
index c881878e56d3..ce5ed7d8265e 100644
--- a/arch/x86/boot/compressed/error.c
+++ b/arch/x86/boot/compressed/error.c
@@ -7,7 +7,7 @@
 #include "misc.h"
 #include "error.h"
 
-void warn(char *m)
+void warn(const char *m)
 {
 	error_putstr("\n\n");
 	error_putstr(m);
diff --git a/arch/x86/boot/compressed/error.h b/arch/x86/boot/compressed/error.h
index 1de5821184f1..87062dea9a20 100644
--- a/arch/x86/boot/compressed/error.h
+++ b/arch/x86/boot/compressed/error.h
@@ -4,7 +4,7 @@
 
 #include <linux/compiler.h>
 
-void warn(char *m);
+void warn(const char *m);
 void error(char *m) __noreturn;
 
 #endif /* BOOT_COMPRESSED_ERROR_H */
diff --git a/arch/x86/purgatory/purgatory.c b/arch/x86/purgatory/purgatory.c
index 7558139920f8..aea47e793963 100644
--- a/arch/x86/purgatory/purgatory.c
+++ b/arch/x86/purgatory/purgatory.c
@@ -14,6 +14,7 @@
 #include <crypto/sha2.h>
 #include <asm/purgatory.h>
 
+#include "../boot/compressed/error.h"
 #include "../boot/string.h"
 
 u8 purgatory_sha256_digest[SHA256_DIGEST_SIZE] __section(".kexec-purgatory");
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:48:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535638.833578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eP-0003hc-KU; Tue, 16 May 2023 19:48:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535638.833578; Tue, 16 May 2023 19:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eP-0003ga-H4; Tue, 16 May 2023 19:48:01 +0000
Received: by outflank-mailman (input) for mailman id 535638;
 Tue, 16 May 2023 19:48:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0UP-0001kC-Hb
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:41 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1e7f1bec-f421-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:37:40 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id CEFFF63E87;
 Tue, 16 May 2023 19:37:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9B6AFC433EF;
 Tue, 16 May 2023 19:37:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e7f1bec-f421-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265859;
	bh=wLf8lxCTTft8AHgx+lACsU6Y90fX557lCughigKxTwU=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=lzNvSRgB1AKUSR+3LeaTPh3l60rOTsZilQhWDzOcQ+x5oMBRGFpj+Je09FDcA2oKs
	 bQKEk9CQaOZxW+LjeDZNGHoi1VyqvarfbYPZrFtOwIBWfmcUKXnPsDxk3K2zk5rI3y
	 gwROMzkPtIsOFqQj7b/viChMDejdNvfGdFXbZ5vHvg75Pj4BpgPv1EfIB110tOzdqm
	 cbrsbReFRHuvu0B87vAMax0fJ9PYXNdrYINAB0Gk032nABAs1PIVnq46ho2ehxCbAo
	 4W56TKdQY4nVBka2U6Mo+efkQ7TvUoLDyo62gyH/6bIRgyrLCuMeysKv+AAj4wS8d9
	 cMd2D7IKCRhyg==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 15/20] x86: mce: add copy_mc_fragile_handle_tail prototype
Date: Tue, 16 May 2023 21:35:44 +0200
Message-Id: <20230516193549.544673-16-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

copy_mc_fragile_handle_tail() is only called from assembler,
but 'make W=1' complains about a missing prototype:

arch/x86/lib/copy_mc.c:26:1: warning: no previous prototype for ‘copy_mc_fragile_handle_tail’ [-Wmissing-prototypes]
   26 | copy_mc_fragile_handle_tail(char *to, char *from, unsigned len)

Add the prototype to avoid the warning.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/include/asm/mce.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
index 9646ed6e8c0b..180b1cbfcc4e 100644
--- a/arch/x86/include/asm/mce.h
+++ b/arch/x86/include/asm/mce.h
@@ -350,4 +350,7 @@ static inline void mce_amd_feature_init(struct cpuinfo_x86 *c)		{ }
 #endif
 
 static inline void mce_hygon_feature_init(struct cpuinfo_x86 *c)	{ return mce_amd_feature_init(c); }
+
+unsigned long copy_mc_fragile_handle_tail(char *to, char *from, unsigned len);
+
 #endif /* _ASM_X86_MCE_H */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:48:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:48:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535643.833588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eZ-0004LT-U7; Tue, 16 May 2023 19:48:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535643.833588; Tue, 16 May 2023 19:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eZ-0004LC-QX; Tue, 16 May 2023 19:48:11 +0000
Received: by outflank-mailman (input) for mailman id 535643;
 Tue, 16 May 2023 19:48:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0Uw-00020V-PD
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:38:14 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 30cbe691-f421-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:38:11 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 753B66183C;
 Tue, 16 May 2023 19:38:10 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 28787C433EF;
 Tue, 16 May 2023 19:38:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30cbe691-f421-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265889;
	bh=xg0nY/5XuAoC5Y+aT1n09akQv/zFSJ5XVH0JIB0YwTQ=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=lJHg8XNuGiy8jTFn+IFpgUXTvuRIAwq7VY4K/2aG+9oDXR9V6HWZw9BoV/Iby3Y3D
	 ciQJCcSCN88ZGK79OW2jW+p1Mcr53FymRzsq3aD57yhmrL6X5s0JHYpe2CHPA22HlU
	 vtyjpMlDJ94LuFullCZvqENjJQpN0VpNDcwlo9gDKsPudVss3sLnvUVGlKZDHrWJKC
	 9hYO4n8OiF35vwW0mCeLD6vxQ51+sc9aB2SOI5vmH0t/cuFzcMA5Ms2ki0dFUhjZ6z
	 F+j0NZXOgRpsDihxxL0McykAWvZ3icJuNiaB3wLMzOKImpSeGeJaJv64l87FCV2Wpu
	 UeiC9WTSfPICw==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 20/20] x86: olpc: avoid missing-prototype warnings
Date: Tue, 16 May 2023 21:35:49 +0200
Message-Id: <20230516193549.544673-21-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

There are two functions in the olpc platform that have no prototype:

arch/x86/platform/olpc/olpc_dt.c:237:13: error: no previous prototype for 'olpc_dt_fixup' [-Werror=missing-prototypes]
arch/x86/platform/olpc/olpc-xo1-pm.c:73:26: error: no previous prototype for 'xo1_do_sleep' [-Werror=missing-prototypes]

The first one should just be marked 'static' as there are no other
callers, while the second one is only called from assembler, so the
warning is a false positive that can be silenced by adding a
prototype.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/platform/olpc/olpc_dt.c | 2 +-
 include/linux/olpc-ec.h          | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/platform/olpc/olpc_dt.c b/arch/x86/platform/olpc/olpc_dt.c
index 75e3319e8bee..74ebd6882690 100644
--- a/arch/x86/platform/olpc/olpc_dt.c
+++ b/arch/x86/platform/olpc/olpc_dt.c
@@ -234,7 +234,7 @@ static int __init olpc_dt_compatible_match(phandle node, const char *compat)
 	return 0;
 }
 
-void __init olpc_dt_fixup(void)
+static void __init olpc_dt_fixup(void)
 {
 	phandle node;
 	u32 board_rev;
diff --git a/include/linux/olpc-ec.h b/include/linux/olpc-ec.h
index c4602364e909..3c2891d85c41 100644
--- a/include/linux/olpc-ec.h
+++ b/include/linux/olpc-ec.h
@@ -56,6 +56,8 @@ extern int olpc_ec_sci_query(u16 *sci_value);
 
 extern bool olpc_ec_wakeup_available(void);
 
+asmlinkage int xo1_do_sleep(u8 sleep_state);
+
 #else
 
 static inline int olpc_ec_cmd(u8 cmd, u8 *inbuf, size_t inlen, u8 *outbuf,
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:48:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535650.833597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eg-0004r0-6F; Tue, 16 May 2023 19:48:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535650.833597; Tue, 16 May 2023 19:48:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0eg-0004qr-30; Tue, 16 May 2023 19:48:18 +0000
Received: by outflank-mailman (input) for mailman id 535650;
 Tue, 16 May 2023 19:48:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0UW-0001kC-4H
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:48 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 227d6cb6-f421-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:37:47 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 85B5863EA1;
 Tue, 16 May 2023 19:37:46 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B5BB4C433D2;
 Tue, 16 May 2023 19:37:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 227d6cb6-f421-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265865;
	bh=ePOLp5lrYksDFnI5SruvVtfGvFIosnERaoYqbpISZdc=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=QzHHLwTINJvvcADDWDAFD53gb0rzvbfM+9frChQz9nOeJdPvDbN96AuXaWQc9Zvxr
	 aIMvdYvF3T+QNbI/k5wpwc6MSujK2ulBi7AB+gIW4MAFtORYIB6dloPpaU3136n4im
	 28e0oQ/gFuMmWEVqE32GvMwA8QC1R8ZOSDj5aB75yht+WU7UenY2JSg291k/G5ntiU
	 9aBuw8l68CYZc+TZWL+A0oDAj4k2nTmi3WAfSETq8X/lpMsTNhdlRU1e3qXY/DCjc8
	 kbTXiAG5OAPFlElt+nbApNzKv5CT7DQpkoHI/Mebhyj1HGkKmozLCY+jMgkk1Bebd7
	 De/Gp233G0FBg==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 16/20] x86: vdso: include vdso/processor.h
Date: Tue, 16 May 2023 21:35:45 +0200
Message-Id: <20230516193549.544673-17-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

__vdso_getcpu is declared in a header, but that header is not
included before the definition, causing a W=1 warning:

arch/x86/entry/vdso/vgetcpu.c:13:1: error: no previous prototype for '__vdso_getcpu' [-Werror=missing-prototypes]
arch/x86/entry/vdso/vdso32/../vgetcpu.c:13:1: error: no previous prototype for '__vdso_getcpu' [-Werror=missing-prototypes]

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/entry/vdso/vgetcpu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/entry/vdso/vgetcpu.c b/arch/x86/entry/vdso/vgetcpu.c
index 0a9007c24056..e4640306b2e3 100644
--- a/arch/x86/entry/vdso/vgetcpu.c
+++ b/arch/x86/entry/vdso/vgetcpu.c
@@ -8,6 +8,7 @@
 #include <linux/kernel.h>
 #include <linux/getcpu.h>
 #include <asm/segment.h>
+#include <vdso/processor.h>
 
 notrace long
 __vdso_getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *unused)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:48:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535651.833608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0ei-0005ED-EJ; Tue, 16 May 2023 19:48:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535651.833608; Tue, 16 May 2023 19:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0ei-0005E1-AD; Tue, 16 May 2023 19:48:20 +0000
Received: by outflank-mailman (input) for mailman id 535651;
 Tue, 16 May 2023 19:48:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oY92=BF=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pz0Uh-0001kC-OM
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:37:59 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2963959c-f421-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 21:37:59 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 1C45163EA7;
 Tue, 16 May 2023 19:37:58 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E563AC433EF;
 Tue, 16 May 2023 19:37:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2963959c-f421-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684265877;
	bh=BMgSzYPnqUEEoyY2VRZWSxLxvLKRRE/JigFmOP6CVm8=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=NpGBW3oQfGokP+EqO9HPXqgBNDkoc36H1oosSB5XjEzBOg7yBtWPUSvXBHiX4Rn4Q
	 nPdmLvbLNil6YudLmIUT16A//m5E1l/d5GoNjyYA+FGYUUcB0t2DXV/Y8FzJPsCm/g
	 WSjrQhVBlok700vqB4usp49fWyRCAVJCMtG/hOkniDZfEqspZMur4GNXocMx9PVPyv
	 JJe6IYAyEollD++tusMr4MFE1ysVhpCHJNBmX4uqv8rrV1pvWM156/dMfMSZOF9/ow
	 oYS4tnWysUdqiVHzAJ2DPP9MsHPpjO23QeTGlMaWQS4Y+XN+rqnjj3iE0utAy7m/f2
	 S+exm0cd8OABA==
From: Arnd Bergmann <arnd@kernel.org>
To: x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Juergen Gross <jgross@suse.com>,
	"Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
	Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Darren Hart <dvhart@infradead.org>,
	Andy Shevchenko <andy@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	linux-kernel@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-pci@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pm@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 18/20] x86: ioremap: add early_memremap_pgprot_adjust prototype
Date: Tue, 16 May 2023 21:35:47 +0200
Message-Id: <20230516193549.544673-19-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

early_memremap_pgprot_adjust() is a __weak function with a local
prototype, but x86 has a custom implementation that does not
see the prototype, causing a W=1 warning:

arch/x86/mm/ioremap.c:785:17: error: no previous prototype for 'early_memremap_pgprot_adjust' [-Werror=missing-prototypes]

Move the declaration into the global linux/io.h header to avoid this.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 include/linux/io.h | 5 +++++
 mm/internal.h      | 6 ------
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/include/linux/io.h b/include/linux/io.h
index 308f4f0cfb93..7304f2a69960 100644
--- a/include/linux/io.h
+++ b/include/linux/io.h
@@ -68,6 +68,11 @@ void *devm_memremap(struct device *dev, resource_size_t offset,
 		size_t size, unsigned long flags);
 void devm_memunmap(struct device *dev, void *addr);
 
+/* architectures can override this */
+pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
+					unsigned long size, pgprot_t prot);
+
+
 #ifdef CONFIG_PCI
 /*
  * The PCI specifications (Rev 3.0, 3.2.5 "Transaction Ordering and
diff --git a/mm/internal.h b/mm/internal.h
index 68410c6d97ac..e6029d94bdb2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -178,12 +178,6 @@ extern unsigned long highest_memmap_pfn;
  */
 #define MAX_RECLAIM_RETRIES 16
 
-/*
- * in mm/early_ioremap.c
- */
-pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
-					unsigned long size, pgprot_t prot);
-
 /*
  * in mm/vmscan.c:
  */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 19:52:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 19:52:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535677.833617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0ip-0007Yv-V2; Tue, 16 May 2023 19:52:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535677.833617; Tue, 16 May 2023 19:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0ip-0007Yo-SO; Tue, 16 May 2023 19:52:35 +0000
Received: by outflank-mailman (input) for mailman id 535677;
 Tue, 16 May 2023 19:52:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tkax=BF=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pz0ip-0007Yi-1r
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 19:52:35 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.217]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 322b42ad-f423-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 21:52:32 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4GJqRYhk
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 16 May 2023 21:52:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 322b42ad-f423-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1684266747; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=pn33qygtTjnTEZquBGtiwjIp123CnWh5/RtXZ8i11EgFae4xnIoVmNPm5o/Dv0SYR6
    MWQcS66dx4zhA0cOay3LCiVlMYPgeA5cdYj5slw0qpCFnwe5d+WJhIzujGLb3nn6me1x
    poPPYSsmtHTGuVefW7j2I0Evj/SqKM2gGg9XD0DhpteVTz+DePpI42suPmvLF9r8YxMD
    JxsbjhPKtGimAgQtL0KFskvCRMcyBv9BnHlMUWrxuHgdwYGg28bugntC5cwgS3PKigvQ
    q2vVGr/6Ydw1qwyRu7kDmsB3kMvyQ+KCJxWTQOr6KC1bXdECWcRJ897I53ZgbCGFitUY
    IXiw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1684266747;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=BDMMOZsldew5E9T5aGGViOq4aXtP5rfj0eDk121sUHo=;
    b=V26oMDKTx+Ey+q4epnIkaz5JpP0V3iFNciVaMZ/XuGpTEpF4EjcXFAb8ZVaVoXHzo3
    1LuMoYfl4WPUN3n+TTx54xOlI0zNzF99STP/HcxlIHsaDKBkQwgN8armZRcymVRSjy+h
    5X27jlEgWB2+ZUL+/qnPICdOaWiRrwjoDc66iAryzoOqylSnDw4PlgT9UBHWwuQ0k9td
    GEPYbyGIgPr7ywOag2014WdU7kjeK0yKPMkmArDXofttC54SdXTycuQurBniU07QFnpV
    YXjuhrG+bhHpJPplvzyc69oSKOmp2pNOr1p52RWk1kLXswcPILuyrExAZB5QrrRVjsOv
    +uVg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1684266747;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=BDMMOZsldew5E9T5aGGViOq4aXtP5rfj0eDk121sUHo=;
    b=SCGOvYfLyiElJ3IuuJoFxBTasZ6RGcEOTQ10mAmCQUXz2kn617BK/V6W5l7Cf5EWMf
    vcumd0gw4iyxqwAMpiD3rmbcij7I0aqsgmxZpVjfWCioMPi6MN9XVjw4Tzz6PSSzOAq4
    3z8vmF+6VDEqgIyP2qQrhv45xgGdvzEnb6zZgdaV0khYzknWM1sKYsC0nQ29ej31udCv
    sR9n2L3TM1yvD0+EUqBQNzP8/Bk6MSOvLs/deY3hoZIPHp47P0BuLlAdu2BjzgFWwRTz
    snaCsXYGvoOVc+ud7lFvUH5FiFfJtkLU1BjJJ8farUhkcb9A0MwBOfjXR1V5Pa1B0b98
    k+eQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1684266747;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=BDMMOZsldew5E9T5aGGViOq4aXtP5rfj0eDk121sUHo=;
    b=KDK8l/H/GUYo83TnygN3pqIqRwZLqEOfv8ak8d0nk6bkLSkHg/ljmvbhvOYJP5Gl1d
    g/0fhVNJr4IJmVnR7VCw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QED/SSGq+wjGiUC4kV1cX/0jCNVp4ivfSTHw=="
Date: Tue, 16 May 2023 19:52:04 +0000
From: Olaf Hering <olaf@aepfle.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v1] automation: provide example for downloading an
 existing container
Message-ID: <20230516195204.66590536@sender>
In-Reply-To: <alpine.DEB.2.22.394.2305161145540.62578@ubuntu-linux-20-04-desktop>
References: <20230502201444.6532-1-olaf@aepfle.de>
	<alpine.DEB.2.22.394.2305151533320.4125828@ubuntu-linux-20-04-desktop>
	<20230516105155.0c59143a@sender>
	<alpine.DEB.2.22.394.2305161145540.62578@ubuntu-linux-20-04-desktop>
X-Mailer: Claws Mail 2023.04.19 (GTK 3.24.34; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/CAWtVK+TT+/BwI/I/9/MjiR";
 protocol="application/pgp-signature"; micalg=pgp-sha1
Content-Transfer-Encoding: 7bit

--Sig_/CAWtVK+TT+/BwI/I/9/MjiR
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue, 16 May 2023 11:46:00 -0700 (PDT),
Stefano Stabellini <sstabellini@kernel.org> wrote:

> I think you have a point that automation/build/README.md should also
> describe how to do what gitlab-ci does but locally (i.e. call
> automation/scripts/build). It should not only describe
> automation/scripts/containerize.

Meanwhile I have figured this out: additional variables must be set. I
already sent a patch for the example. That way I was able to
understand and reproduce the error seen in the CI build.

> https://gitlab.com/xen-project/xen/-/jobs/4284741849

It turned out this bug in qemu is triggered by debug=y vs. debug=n in
the build environment. I have not checked which commit exactly fixed it
in upstream qemu.git; it should probably be backported, or qemu should
be moved forward to v8.x at some point. I think I have not seen this
specific failure in my own qemu.git builds.

The reason is that --enable-debug disables _FORTIFY_SOURCE, so the build
succeeds. Without that flag, configure enables _FORTIFY_SOURCE.
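
To illustrate the class of check involved (a sketch only, not qemu code;
fill_name() is a hypothetical helper): with glibc, building at -O2 with
-D_FORTIFY_SOURCE=2 turns string/memory calls whose destination size the
compiler knows into checked variants, while a -O0 debug build compiles
them unchecked -- so an out-of-bounds write aborts only the optimized
configuration.

```c
#include <string.h>

/* With -O2 -D_FORTIFY_SOURCE=2 (glibc) the strcpy below is compiled
 * as a checked __strcpy_chk call, because the compiler can see that
 * the destination holds 8 bytes; a debug build (fortification off)
 * performs no such check.  This copy is in bounds, so it behaves the
 * same either way -- an overflowing copy would abort only the
 * fortified build. */
void fill_name(char buf[8])
{
    strcpy(buf, "xen");
}
```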

> Sure I see your point. On the other hand Tumbleweed jobs are the ones
> and only with "allow_failure". So among all the possible choices as
> example, do we really need to pick the one and only that has been
> failing for months? :-)

Yeah, this is exactly the point, to give copy&paste commands so that
contributors can investigate such failures locally.

I did not follow the state of the openSUSE builds in the past months. I
think Tumbleweed succeeded a few weeks ago because the previous snapshot
was by then a year old, and all the gcc12 bugs had been fixed in the
meantime.


Olaf

--Sig_/CAWtVK+TT+/BwI/I/9/MjiR
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iF0EARECAB0WIQSkRyP6Rn//f03pRUBdQqD6ppg2fgUCZGPe5AAKCRBdQqD6ppg2
fqfKAKCS/24du88I45VdwyhioKbsC+Y9JwCgwlmR6dtkco+OMVCIjhVRPxwzjvk=
=u4bK
-----END PGP SIGNATURE-----

--Sig_/CAWtVK+TT+/BwI/I/9/MjiR--


From xen-devel-bounces@lists.xenproject.org Tue May 16 20:00:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 20:00:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535685.833627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0qT-0000n4-RS; Tue, 16 May 2023 20:00:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535685.833627; Tue, 16 May 2023 20:00:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz0qT-0000mx-OQ; Tue, 16 May 2023 20:00:29 +0000
Received: by outflank-mailman (input) for mailman id 535685;
 Tue, 16 May 2023 20:00:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tkax=BF=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pz0qS-0000mr-Lw
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 20:00:28 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [81.169.146.165]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4cc9d0c5-f424-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 22:00:26 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4GK0DYiy
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 16 May 2023 22:00:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cc9d0c5-f424-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1684267214; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=oYjAOuo5XrCuEQR1W6/l+gMe8iW0/pRObAz3AXMdbMBAntXrf7eR3xWdGCEDgd5ejv
    8luHseiXq2JYMjo6BbNsglvWUioUorZ/XaMfsQWvp5ICJUPL2RO1xodEJyHR129K1yD1
    zoGe9NRN391Q+qXpMprXqEmK4n2iqaG8BwlX9sWTi8uznAD0DPksWP8RTa2oq0s2v9gn
    696rv2N8tmh/nGqGWobPJv1XoS+8nkf8oVPii2wp7FUwgZxBF2BwgIoFAgBPMJMzzE1S
    Tz5YwRhIqw4rOjEhAkABqrzraxm2i6xPuM5addxLuDimkwMqwEq6TidN4X+2vzld2qQs
    JbUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1684267214;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=pS0Z4id9fKm2g4HFIaP4cyA8REoB+fljOJamATQiIKg=;
    b=NkTEOOvUULIQHeKXD/qBlLdSUlymDYKgU2w3ld2/D95sks4e1qYP8VPVq4UBrIIr63
    vE94G3LG07MnxIJ/1EZa8W1bMsfltVELjyAsi6wU/bLdppBbaBoehgvJr0TQhINESyg4
    weWFPYUkel/Cw8E2IbUZrZKxiTr8wVBL2fpTIjg2Hku6Wf3OOygon1Ih0DmmghrE3eWJ
    ejW8rajtT4YH1pCyqELfXSDu0NhnoGhSMnwZE/ubU4BAf1lgHE6QuUWLfqjdFTaF6In7
    BJdIRm7xJLdx3lxDudGbo/VoCHVNNdY+Bw0wSVaqcqETpiR6xDKU/WXOlJ67haX4HZWP
    rI1w==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1684267214;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=pS0Z4id9fKm2g4HFIaP4cyA8REoB+fljOJamATQiIKg=;
    b=LyXWqaWgSuG1JKysX34wy0K6ONnQlBaVJHC97d9voPDy8UJpk3QvOx27tV+Mqm1Yyj
    pbW8qfbpUrVRwBNM5lkL6gTPTPBOY9Y0tOlPE6ShOOCbJnDL8y5CmRqZ2bqHlbCMTpRX
    uqVwj9HHcBiFNxkKAEAkxI/iZke1GHRgkyH9qkj32ikXmLyNG9ARFIIFy07rAvwWN0bC
    axXKdHjZXxHS9nRh8WWJHeogvVJmkH619asQFMMFNnP5XG3WzXAOfMbSA1fltMCfzGMh
    MfMzrDRrLtGJx5nSTGpBr3dQ9IYFlllkx1EKYfJopQozQXA5dUoEQnE2QJXiY/atxVRI
    eqfw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1684267214;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=pS0Z4id9fKm2g4HFIaP4cyA8REoB+fljOJamATQiIKg=;
    b=iVQJE14B1+56+wIw3G/ijmiRqUwL5q4Qd8Ue4aef6lOkHfwWhc2Z3y3afylkQkcFT0
    MtU+ri5Yn6x8XxH0KZBg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QED/SSGq+wjGiUC4kV1cX/0jCNVp4ivfSTHw=="
Date: Tue, 16 May 2023 20:00:07 +0000
From: Olaf Hering <olaf@aepfle.de>
To: John Snow <jsnow@redhat.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Paolo Bonzini
 <pbonzini@redhat.com>, xen-devel@lists.xenproject.org,
 qemu-devel@nongnu.org, qemu-block@nongnu.org, Philippe =?UTF-8?B?TWF0aGll?=
 =?UTF-8?B?dS1EYXVkw6k=?= <f4bug@amsat.org>
Subject: Re: [PATCH v2] piix: fix regression during unplug in Xen HVM domUs
Message-ID: <20230516200007.4fa87c6a@sender>
In-Reply-To: <CAFn=p-aFa_jFYuaYLMumkX=5zpn228ctBcV=Gch=BhmQs6i2dA@mail.gmail.com>
References: <20210317070046.17860-1-olaf@aepfle.de>
	<4441d32f-bd52-9408-cabc-146b59f0e4dc@redhat.com>
	<20210325121219.7b5daf76.olaf@aepfle.de>
	<dae251e1-f808-708e-902c-05cfcbbea9cf@redhat.com>
	<20230509225818.GA16290@aepfle.de>
	<20230510094719.26fb79e5.olaf@aepfle.de>
	<alpine.DEB.2.22.394.2305121411310.3748626@ubuntu-linux-20-04-desktop>
	<CAFn=p-aFa_jFYuaYLMumkX=5zpn228ctBcV=Gch=BhmQs6i2dA@mail.gmail.com>
X-Mailer: Claws Mail 2023.04.19 (GTK 3.24.34; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/11aB=ufI14REGukuAgcfB1H";
 protocol="application/pgp-signature"; micalg=pgp-sha1
Content-Transfer-Encoding: 7bit

--Sig_/11aB=ufI14REGukuAgcfB1H
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue, 16 May 2023 13:38:42 -0400,
John Snow <jsnow@redhat.com> wrote:

> I haven't touched IDE or block code in quite a long while now -- I
> don't think I can help land this fix, but I won't get in anyone's way,
> either. Maybe just re-submit the patches with an improved commit
> message / cover letter that helps collect the info from the previous
> thread, the core issue, etc.

I poked at it some more in the past days. Paolo was right in 2019: this
issue needs to be debugged further to really understand why fiddling
with one PCI device breaks another, apparently unrelated PCI device.

Once I know more, I will propose a new change. The old one is
stale and needs to be rebased anyway.


Olaf

--Sig_/11aB=ufI14REGukuAgcfB1H
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iF0EARECAB0WIQSkRyP6Rn//f03pRUBdQqD6ppg2fgUCZGPgxwAKCRBdQqD6ppg2
fqLiAJ46FKHEwF2tggdoStt1IxPQ76JRyQCfZMuVO4l+1/a61bf/j2kZyfqYQlk=
=aXMf
-----END PGP SIGNATURE-----

--Sig_/11aB=ufI14REGukuAgcfB1H--


From xen-devel-bounces@lists.xenproject.org Tue May 16 20:35:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 20:35:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535692.833638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz1Nn-0004LL-EJ; Tue, 16 May 2023 20:34:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535692.833638; Tue, 16 May 2023 20:34:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz1Nn-0004LE-B7; Tue, 16 May 2023 20:34:55 +0000
Received: by outflank-mailman (input) for mailman id 535692;
 Tue, 16 May 2023 20:34:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WEk=BF=citrix.com=prvs=493b78b38=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pz1Nm-0004Ku-3A
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 20:34:54 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 18f26320-f429-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 22:34:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18f26320-f429-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684269289;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=rU41Bn/uaEUy8tDCBSZtHKtf7MazUTzw7ubV5C2SHWU=;
  b=eIe0dyPTE+rE9q4UNgjmsN3pddYVpTgacO41Y0IjhpL08Mbdrj7TJben
   E1/YtH5h8sQjPTTL76VLwUxiJ1SShEdb+RNzzDez+/YwISRcatl43OYSk
   ifmL+ML8NlwKXOutefrj2Uc+DoD1gPOSlfONyp1DvC/g7v5qNIiAYEcEJ
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108033431
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:opRnCKJwaMltpXjuFE+RJZUlxSXFcZb7ZxGr2PjKsXjdYENSg2ZVy
 mMcXjiHOq2JMzH1et4lb9/goBkAu5eGndUxSANlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4wVgPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5MHCJtx
 awidAxRQRuPmuWJzJGAUM9z05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWteGknHTgNRZfr0qYv/Ef6GnP1g1hlrPqNbI5f/TTHZUNxR3E/
 TuuE2LRHyoLC97O8ha8rFHrpbDuxRj7Wd4/G+jtnhJtqALKnTFCYPEMbnOrrP/8hkOgVtZ3L
 00P5jFovaU07FasTNT2Q1u/unHsljw2VsdUEuY6wBqQ0aeS6AGcbkAfVSJIYtEisM4wRBQp2
 0WPktevAiZg2JWKTVqN+7HSqim9UQAXMGsDaCksXQYDpd75r+kbsBXLSdpyFb+vuff8Ezrw3
 jOioTA3gvMYistj/6+250zdijSg4J3AVBco5x7/V3igqAh+YeaYi5eAsAaBq6wadcDAEwfH5
 SJf8ySD0AwQJaqQ1w+9EN9RIO2G7PqYNwLQomV1OYZ0olxB5EWfVYxX5Th/ImJgPcAFZSLlb
 SfvhO9B2HNAFCD0NPEqOupdH+xvlPG9Toq9Cpg4e/IUOvBMmBm7EDaCjKJ690TkiwASnK42I
 v93mu78XC9BWcyLINdbLtrxMIPHJAhknQs/prihlXxLNIZyg1bLIYrpyHPUMogEAFqs+W05C
 ep3OcqQ0Al4W+bjeCTR+oN7BQlUfSRhW8mm9pUMLbXrzu9a9IcJUaS5LVQJKuRYc1l9zL+Ur
 hlRpGcCoLYAuZE3AVrTMS0yAF8edZ1+sWg6LUQRALpc4FB6OdzHxP5GJ/MKkUwPqLQLIQhcE
 6NUJK1tw51nFlz6xtjqRcOm9dIyK0vx3lrm0ujMSGFXQqOMjjfhorfMFjYDPgFXZsZrnaPSe
 4Gd6z4=
IronPort-HdrOrdr: A9a23:+hchpqq6xt1dW6RL6YQECt0aV5oYeYIsimQD101hICG8Sqej5q
 STdZUgpH3JYVMqMxsdcL+7VZVoPkmsjKKdjbN8AV7AZniEhILLFuBfBOLZqlXd8kvFmdK1vp
 0BT0ERMrPN5X8Qt7ee3OGCeOxQp+VuycuT9IHjJn5WPHlXV50=
X-Talos-CUID: =?us-ascii?q?9a23=3Axoa9+GikCCxW4/pkltWSaXrSljJuSWCe70eAGRO?=
 =?us-ascii?q?EVUliRKCSdlOzw7lCjJ87?=
X-Talos-MUID: 9a23:Yye6+QraAMkBCQFkTogezyxZJfhqsp+CMR4u1rZZmpG/NS0sFjjI2Q==
X-IronPort-AV: E=Sophos;i="5.99,278,1677560400"; 
   d="scan'208";a="108033431"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH RFC] xen: Enable -Wwrite-strings
Date: Tue, 16 May 2023 21:34:28 +0100
Message-ID: <20230516203428.1441365-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Following on from the MISRA discussions.

On x86, most fixes are trivial.  The two slightly suspect cases are
__hvm_copy(), where constness depends on flags, and kextra in __start_xen(),
which only compiles because the pointer is laundered through strstr().
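
The strstr() laundering works because of how ISO C declares the function
(find_extra() below is just an illustrative wrapper, not code from the
patch):

```c
#include <string.h>

/* ISO C declares char *strstr(const char *haystack, const char *needle):
 * it accepts a const string but returns a non-const pointer into that
 * same string, silently dropping the qualifier.  Passing a const
 * command line through it therefore yields a mutable char *, which is
 * why kextra still compiles once the command line becomes const. */
char *find_extra(const char *cmdline)
{
    return strstr(cmdline, " -- ");
}
```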

The one case which I can't figure out how to fix is EFI:

  In file included from arch/x86/efi/boot.c:700:
  arch/x86/efi/efi-boot.h: In function ‘efi_arch_handle_cmdline’:
  arch/x86/efi/efi-boot.h:327:16: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
    327 |         name.s = "xen";
        |                ^
  cc1: all warnings being treated as errors

Why do we have something that looks like this?

  union string {
      CHAR16 *w;
      char *s;
      const char *cs;
  };
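
What -Wwrite-strings changes can be seen in a few lines (a sketch; the
union here mirrors the one quoted above, minus the CHAR16 member, rather
than the real EFI code):

```c
/* Under -Wwrite-strings, GCC gives string literals the type
 * const char[N], so assigning one to the plain char * member (as
 * efi-boot.h does with name.s = "xen") discards the qualifier and
 * warns, while assigning through the const char * member does not. */
union string {
    char *s;
    const char *cs;
};

const char *default_name(union string *name)
{
    name->cs = "xen";   /* const-correct: no warning */
    return name->cs;
}
```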

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/Makefile                           |  2 ++
 xen/arch/x86/acpi/cpu_idle.c           |  2 +-
 xen/arch/x86/cpu/mcheck/mce.c          |  2 +-
 xen/arch/x86/cpu/mcheck/mce.h          |  2 +-
 xen/arch/x86/dom0_build.c              |  2 +-
 xen/arch/x86/e820.c                    |  2 +-
 xen/arch/x86/hvm/dom0_build.c          |  4 ++--
 xen/arch/x86/hvm/hvm.c                 |  8 ++++----
 xen/arch/x86/hvm/vmx/vmcs.c            |  4 ++--
 xen/arch/x86/include/asm/dom0_build.h  |  4 ++--
 xen/arch/x86/include/asm/hvm/hvm.h     |  2 +-
 xen/arch/x86/include/asm/hvm/support.h |  4 ++--
 xen/arch/x86/include/asm/setup.h       |  2 +-
 xen/arch/x86/oprofile/nmi_int.c        |  8 ++++----
 xen/arch/x86/pv/dom0_build.c           |  2 +-
 xen/arch/x86/setup.c                   |  9 +++++----
 xen/arch/x86/time.c                    |  4 ++--
 xen/common/gunzip.c                    |  2 +-
 xen/common/ioreq.c                     |  3 ++-
 xen/common/libelf/libelf-dominfo.c     |  2 +-
 xen/drivers/acpi/tables.c              |  6 +++---
 xen/drivers/acpi/tables/tbfadt.c       |  2 +-
 xen/drivers/acpi/tables/tbutils.c      |  2 +-
 xen/drivers/acpi/tables/tbxface.c      |  2 +-
 xen/drivers/acpi/utilities/utmisc.c    |  6 +++---
 xen/include/acpi/actables.h            |  2 +-
 xen/include/acpi/actypes.h             |  2 +-
 xen/include/acpi/acutils.h             | 12 ++++++------
 xen/include/xen/acpi.h                 |  6 +++---
 xen/include/xen/dmi.h                  |  4 ++--
 30 files changed, 59 insertions(+), 55 deletions(-)

diff --git a/xen/Makefile b/xen/Makefile
index e89fc461fc4b..f5593f992147 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -384,6 +384,8 @@ else
 CFLAGS += -fomit-frame-pointer
 endif
 
+CFLAGS += -Wwrite-strings
+
 CFLAGS-$(CONFIG_CC_SPLIT_SECTIONS) += -ffunction-sections -fdata-sections
 
 CFLAGS += -nostdinc -fno-builtin -fno-common
diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 427c8c89c5c4..cfce4cc0408f 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -302,7 +302,7 @@ static void print_hw_residencies(uint32_t cpu)
            hw_res.cc6, hw_res.cc7);
 }
 
-static char* acpi_cstate_method_name[] =
+static const char *const acpi_cstate_method_name[] =
 {
     "NONE",
     "SYSIO",
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 3e93bdd8dab4..1144a91aa444 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1706,7 +1706,7 @@ static void mc_panic_dump(void)
     dprintk(XENLOG_ERR, "End dump mc_info, %x mcinfo dumped\n", mcinfo_dumpped);
 }
 
-void mc_panic(char *s)
+void mc_panic(const char *s)
 {
     is_mc_panic = true;
     console_force_unlock();
diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index bea08bdc7464..4046e5123268 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -58,7 +58,7 @@ struct mcinfo_extended *intel_get_extended_msrs(
 bool mce_available(const struct cpuinfo_x86 *c);
 unsigned int mce_firstbank(struct cpuinfo_x86 *c);
 /* Helper functions used for collecting error telemetry */
-void noreturn mc_panic(char *s);
+void noreturn mc_panic(const char *s);
 void x86_mc_get_cpu_info(unsigned, uint32_t *, uint16_t *, uint16_t *,
                          uint32_t *, uint32_t *, uint32_t *, uint32_t *);
 
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 79234f18ff01..ac252adac706 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -576,7 +576,7 @@ int __init dom0_setup_permissions(struct domain *d)
 
 int __init construct_dom0(struct domain *d, const module_t *image,
                           unsigned long image_headroom, module_t *initrd,
-                          char *cmdline)
+                          const char *cmdline)
 {
     int rc;
 
diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
index c5911cf48dc4..0b89935510ae 100644
--- a/xen/arch/x86/e820.c
+++ b/xen/arch/x86/e820.c
@@ -363,7 +363,7 @@ static unsigned long __init find_max_pfn(void)
     return max_pfn;
 }
 
-static void __init clip_to_limit(uint64_t limit, char *warnmsg)
+static void __init clip_to_limit(uint64_t limit, const char *warnmsg)
 {
     unsigned int i;
     char _warnmsg[160];
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index fd2cbf68bc62..a7ae9c3b046e 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -532,7 +532,7 @@ static paddr_t __init find_memory(
 static int __init pvh_load_kernel(struct domain *d, const module_t *image,
                                   unsigned long image_headroom,
                                   module_t *initrd, void *image_base,
-                                  char *cmdline, paddr_t *entry,
+                                  const char *cmdline, paddr_t *entry,
                                   paddr_t *start_info_addr)
 {
     void *image_start = image_base + image_headroom;
@@ -1177,7 +1177,7 @@ static void __hwdom_init pvh_setup_mmcfg(struct domain *d)
 int __init dom0_construct_pvh(struct domain *d, const module_t *image,
                               unsigned long image_headroom,
                               module_t *initrd,
-                              char *cmdline)
+                              const char *cmdline)
 {
     paddr_t entry, start_info;
     int rc;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d7d31b53937a..709d08768f71 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3398,9 +3398,9 @@ static enum hvm_translation_result __hvm_copy(
 }
 
 enum hvm_translation_result hvm_copy_to_guest_phys(
-    paddr_t paddr, void *buf, unsigned int size, struct vcpu *v)
+    paddr_t paddr, const void *buf, unsigned int size, struct vcpu *v)
 {
-    return __hvm_copy(buf, paddr, size, v,
+    return __hvm_copy((void *)buf /* to_guest doesn't modify */, paddr, size, v,
                       HVMCOPY_to_guest | HVMCOPY_phys, 0, NULL);
 }
 
@@ -3412,10 +3412,10 @@ enum hvm_translation_result hvm_copy_from_guest_phys(
 }
 
 enum hvm_translation_result hvm_copy_to_guest_linear(
-    unsigned long addr, void *buf, unsigned int size, uint32_t pfec,
+    unsigned long addr, const void *buf, unsigned int size, uint32_t pfec,
     pagefault_info_t *pfinfo)
 {
-    return __hvm_copy(buf, addr, size, current,
+    return __hvm_copy((void *)buf, addr, size, current,
                       HVMCOPY_to_guest | HVMCOPY_linear,
                       PFEC_page_present | PFEC_write_access | pfec, pfinfo);
 }
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index b2095636250c..13719cc923d9 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1949,7 +1949,7 @@ static inline unsigned long vmr(unsigned long field)
     (uint32_t)vmr(fld);                       \
 })
 
-static void vmx_dump_sel(char *name, uint32_t selector)
+static void vmx_dump_sel(const char *name, uint32_t selector)
 {
     uint32_t sel, attr, limit;
     uint64_t base;
@@ -1960,7 +1960,7 @@ static void vmx_dump_sel(char *name, uint32_t selector)
     printk("%s: %04x %05x %08x %016"PRIx64"\n", name, sel, attr, limit, base);
 }
 
-static void vmx_dump_sel2(char *name, uint32_t lim)
+static void vmx_dump_sel2(const char *name, uint32_t lim)
 {
     uint32_t limit;
     uint64_t base;
diff --git a/xen/arch/x86/include/asm/dom0_build.h b/xen/arch/x86/include/asm/dom0_build.h
index a5f8c9e67f68..107c1ff98367 100644
--- a/xen/arch/x86/include/asm/dom0_build.h
+++ b/xen/arch/x86/include/asm/dom0_build.h
@@ -16,12 +16,12 @@ int dom0_setup_permissions(struct domain *d);
 int dom0_construct_pv(struct domain *d, const module_t *image,
                       unsigned long image_headroom,
                       module_t *initrd,
-                      char *cmdline);
+                      const char *cmdline);
 
 int dom0_construct_pvh(struct domain *d, const module_t *image,
                        unsigned long image_headroom,
                        module_t *initrd,
-                       char *cmdline);
+                       const char *cmdline);
 
 unsigned long dom0_paging_pages(const struct domain *d,
                                 unsigned long nr_pages);
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 04cbd4ff24bd..169af541b720 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -92,7 +92,7 @@ struct hvm_vcpu_nonreg_state {
  * supports Intel's VT-x and AMD's SVM extensions.
  */
 struct hvm_function_table {
-    char *name;
+    const char *name;
 
     /* Support Hardware-Assisted Paging? */
     bool_t hap_supported;
diff --git a/xen/arch/x86/include/asm/hvm/support.h b/xen/arch/x86/include/asm/hvm/support.h
index 8d4707e58c9c..142e5985642d 100644
--- a/xen/arch/x86/include/asm/hvm/support.h
+++ b/xen/arch/x86/include/asm/hvm/support.h
@@ -59,7 +59,7 @@ enum hvm_translation_result {
  * address range does not map entirely onto ordinary machine memory.
  */
 enum hvm_translation_result hvm_copy_to_guest_phys(
-    paddr_t paddr, void *buf, unsigned int size, struct vcpu *v);
+    paddr_t paddr, const void *buf, unsigned int size, struct vcpu *v);
 enum hvm_translation_result hvm_copy_from_guest_phys(
     void *buf, paddr_t paddr, unsigned int size);
 
@@ -85,7 +85,7 @@ typedef struct pagefault_info
 } pagefault_info_t;
 
 enum hvm_translation_result hvm_copy_to_guest_linear(
-    unsigned long addr, void *buf, unsigned int size, uint32_t pfec,
+    unsigned long addr, const void *buf, unsigned int size, uint32_t pfec,
     pagefault_info_t *pfinfo);
 enum hvm_translation_result hvm_copy_from_guest_linear(
     void *buf, unsigned long addr, unsigned int size, uint32_t pfec,
diff --git a/xen/arch/x86/include/asm/setup.h b/xen/arch/x86/include/asm/setup.h
index ae0dd3915a61..51fce66607dc 100644
--- a/xen/arch/x86/include/asm/setup.h
+++ b/xen/arch/x86/include/asm/setup.h
@@ -35,7 +35,7 @@ int construct_dom0(
     struct domain *d,
     const module_t *kernel, unsigned long kernel_headroom,
     module_t *initrd,
-    char *cmdline);
+    const char *cmdline);
 void setup_io_bitmap(struct domain *d);
 
 unsigned long initial_images_nrpages(nodeid_t node);
diff --git a/xen/arch/x86/oprofile/nmi_int.c b/xen/arch/x86/oprofile/nmi_int.c
index 17bf3135f86f..faf75106f747 100644
--- a/xen/arch/x86/oprofile/nmi_int.c
+++ b/xen/arch/x86/oprofile/nmi_int.c
@@ -36,7 +36,7 @@ struct op_x86_model_spec const *__read_mostly model;
 static struct op_msrs cpu_msrs[NR_CPUS];
 static unsigned long saved_lvtpc[NR_CPUS];
 
-static char *cpu_type;
+static const char *cpu_type;
 
 static DEFINE_PER_CPU(struct vcpu *, nmi_cont_vcpu);
 
@@ -309,7 +309,7 @@ void nmi_stop(void)
 }
 
 
-static int __init p4_init(char ** cpu_type)
+static int __init p4_init(const char ** cpu_type)
 {
 	unsigned int cpu_model = current_cpu_data.x86_model;
 
@@ -353,7 +353,7 @@ static int __init cf_check force_cpu_type(const char *str)
 }
 custom_param("cpu_type", force_cpu_type);
 
-static int __init ppro_init(char ** cpu_type)
+static int __init ppro_init(const char ** cpu_type)
 {
 	if (force_arch_perfmon && cpu_has_arch_perfmon)
 		return 0;
@@ -375,7 +375,7 @@ static int __init ppro_init(char ** cpu_type)
 	return 1;
 }
 
-static int __init arch_perfmon_init(char **cpu_type)
+static int __init arch_perfmon_init(const char **cpu_type)
 {
 	if (!cpu_has_arch_perfmon)
 		return 0;
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index c99135a5522f..909ee9a899a4 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -358,7 +358,7 @@ int __init dom0_construct_pv(struct domain *d,
                              const module_t *image,
                              unsigned long image_headroom,
                              module_t *initrd,
-                             char *cmdline)
+                             const char *cmdline)
 {
     int i, rc, order, machine;
     bool compatible, compat;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 74e3915a4dce..b77f86e75b3d 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -835,7 +835,7 @@ static bool __init loader_is_grub2(const char *loader_name)
     return (p != NULL) && (p[5] != '0');
 }
 
-static char * __init cmdline_cook(char *p, const char *loader_name)
+static const char * __init cmdline_cook(const char *p, const char *loader_name)
 {
     p = p ? : "";
 
@@ -883,7 +883,7 @@ static struct domain *__init create_dom0(const module_t *image,
         },
     };
     struct domain *d;
-    char *cmdline;
+    const char *cmdline;
     domid_t domid;
 
     if ( opt_dom0_pvh )
@@ -968,8 +968,9 @@ static struct domain *__init create_dom0(const module_t *image,
 
 void __init noreturn __start_xen(unsigned long mbi_p)
 {
-    char *memmap_type = NULL;
-    char *cmdline, *kextra, *loader;
+    const char *memmap_type = NULL;
+    const char *cmdline, *loader;
+    char *kextra;
     void *bsp_stack;
     struct cpu_info *info = get_cpu_info(), *bsp_info;
     unsigned int initrdidx, num_parked = 0;
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index bc75e1ae7d42..290ddb7e6f81 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -64,8 +64,8 @@ struct cpu_time {
 };
 
 struct platform_timesource {
-    char *id;
-    char *name;
+    const char *id;
+    const char *name;
     u64 frequency;
     /* Post-init this hook may only be invoked via the read_counter() wrapper! */
     u64 (*read_counter)(void);
diff --git a/xen/common/gunzip.c b/xen/common/gunzip.c
index 71ec5f26bea0..f3d0250ff2fd 100644
--- a/xen/common/gunzip.c
+++ b/xen/common/gunzip.c
@@ -52,7 +52,7 @@ typedef unsigned long   ulg;
 static long __initdata bytes_out;
 static void flush_window(void);
 
-static __init void error(char *x)
+static __init void error(const char *x)
 {
     panic("%s\n", x);
 }
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index ecb8f545e1c4..7cb717f7a2a4 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -501,7 +501,8 @@ static int ioreq_server_alloc_rangesets(struct ioreq_server *s,
 
     for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
     {
-        char *name, *type;
+        const char *type;
+        char *name;
 
         switch ( i )
         {
diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index e5644f6c7fa6..3ca1c3530ef1 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -100,7 +100,7 @@ elf_errorstatus elf_xen_parse_note(struct elf_binary *elf,
 {
 /* *INDENT-OFF* */
     static const struct {
-        char *name;
+        const char *name;
         bool str;
     } note_desc[] = {
         [XEN_ELFNOTE_ENTRY] = { "ENTRY", 0},
diff --git a/xen/drivers/acpi/tables.c b/xen/drivers/acpi/tables.c
index 96ff96b84c66..20aed8929b86 100644
--- a/xen/drivers/acpi/tables.c
+++ b/xen/drivers/acpi/tables.c
@@ -300,7 +300,7 @@ acpi_table_get_entry_madt(enum acpi_madt_type entry_id,
 }
 
 int __init
-acpi_parse_entries(char *id, unsigned long table_size,
+acpi_parse_entries(const char *id, unsigned long table_size,
 		   acpi_table_entry_handler handler,
 		   struct acpi_table_header *table_header,
 		   int entry_id, unsigned int max_entries)
@@ -359,7 +359,7 @@ acpi_parse_entries(char *id, unsigned long table_size,
 }
 
 int __init
-acpi_table_parse_entries(char *id,
+acpi_table_parse_entries(const char *id,
 			 unsigned long table_size,
 			 int entry_id,
 			 acpi_table_entry_handler handler,
@@ -405,7 +405,7 @@ acpi_table_parse_madt(enum acpi_madt_type id,
  * Scan the ACPI System Descriptor Table (STD) for a table matching @id,
  * run @handler on it.
  */
-int __init acpi_table_parse(char *id, acpi_table_handler handler)
+int __init acpi_table_parse(const char *id, acpi_table_handler handler)
 {
 	struct acpi_table_header *table = NULL;
 
diff --git a/xen/drivers/acpi/tables/tbfadt.c b/xen/drivers/acpi/tables/tbfadt.c
index d8fcc50deca5..a03836e0dc8a 100644
--- a/xen/drivers/acpi/tables/tbfadt.c
+++ b/xen/drivers/acpi/tables/tbfadt.c
@@ -60,7 +60,7 @@ static void acpi_tb_validate_fadt(void);
 /* Table for conversion of FADT to common internal format and FADT validation */
 
 typedef struct acpi_fadt_info {
-	char *name;
+	const char *name;
 	u16 target;
 	u16 source;
 	u16 length;
diff --git a/xen/drivers/acpi/tables/tbutils.c b/xen/drivers/acpi/tables/tbutils.c
index 11412c47deb4..458989abea99 100644
--- a/xen/drivers/acpi/tables/tbutils.c
+++ b/xen/drivers/acpi/tables/tbutils.c
@@ -243,7 +243,7 @@ u8 acpi_tb_checksum(u8 * buffer, acpi_native_uint length)
 
 void __init
 acpi_tb_install_table(acpi_physical_address address,
-		      u8 flags, char *signature, acpi_native_uint table_index)
+		      u8 flags, const char *signature, acpi_native_uint table_index)
 {
 	struct acpi_table_header *table;
 
diff --git a/xen/drivers/acpi/tables/tbxface.c b/xen/drivers/acpi/tables/tbxface.c
index 21b2e5eae1c7..ae66ce2db0d5 100644
--- a/xen/drivers/acpi/tables/tbxface.c
+++ b/xen/drivers/acpi/tables/tbxface.c
@@ -164,7 +164,7 @@ acpi_initialize_tables(struct acpi_table_desc * initial_table_array,
  *
  *****************************************************************************/
 acpi_status __init
-acpi_get_table(char *signature,
+acpi_get_table(const char *signature,
 	       acpi_native_uint instance, struct acpi_table_header **out_table)
 {
 	acpi_native_uint i;
diff --git a/xen/drivers/acpi/utilities/utmisc.c b/xen/drivers/acpi/utilities/utmisc.c
index 4e1497ad0fae..ee22c83e3842 100644
--- a/xen/drivers/acpi/utilities/utmisc.c
+++ b/xen/drivers/acpi/utilities/utmisc.c
@@ -134,7 +134,7 @@ const char *__init acpi_ut_validate_exception(acpi_status status)
  ******************************************************************************/
 
 void ACPI_INTERNAL_VAR_XFACE __init
-acpi_ut_error(const char *module_name, u32 line_number, char *format, ...)
+acpi_ut_error(const char *module_name, u32 line_number, const char *format, ...)
 {
 	va_list args;
 
@@ -147,7 +147,7 @@ acpi_ut_error(const char *module_name, u32 line_number, char *format, ...)
 }
 
 void ACPI_INTERNAL_VAR_XFACE __init
-acpi_ut_warning(const char *module_name, u32 line_number, char *format, ...)
+acpi_ut_warning(const char *module_name, u32 line_number, const char *format, ...)
 {
 	va_list args;
 
@@ -161,7 +161,7 @@ acpi_ut_warning(const char *module_name, u32 line_number, char *format, ...)
 }
 
 void ACPI_INTERNAL_VAR_XFACE __init
-acpi_ut_info(const char *module_name, u32 line_number, char *format, ...)
+acpi_ut_info(const char *module_name, u32 line_number, const char *format, ...)
 {
 	va_list args;
 
diff --git a/xen/include/acpi/actables.h b/xen/include/acpi/actables.h
index d4cad35f41c0..527e1c9f9b9d 100644
--- a/xen/include/acpi/actables.h
+++ b/xen/include/acpi/actables.h
@@ -99,7 +99,7 @@ acpi_tb_verify_checksum(struct acpi_table_header *table, u32 length);
 
 void
 acpi_tb_install_table(acpi_physical_address address,
-		      u8 flags, char *signature, acpi_native_uint table_index);
+		      u8 flags, const char *signature, acpi_native_uint table_index);
 
 acpi_status
 acpi_tb_parse_root_table(acpi_physical_address rsdp_address, u8 flags);
diff --git a/xen/include/acpi/actypes.h b/xen/include/acpi/actypes.h
index f3e95abc3ab3..a237216903fd 100644
--- a/xen/include/acpi/actypes.h
+++ b/xen/include/acpi/actypes.h
@@ -281,7 +281,7 @@ typedef acpi_native_uint acpi_size;
  */
 typedef u32 acpi_status;	/* All ACPI Exceptions */
 typedef u32 acpi_name;		/* 4-byte ACPI name */
-typedef char *acpi_string;	/* Null terminated ASCII string */
+typedef const char *acpi_string;	/* Null terminated ASCII string */
 typedef void *acpi_handle;	/* Actually a ptr to a NS Node */
 
 struct uint64_struct {
diff --git a/xen/include/acpi/acutils.h b/xen/include/acpi/acutils.h
index b1b0df758bd6..ac54adaa8c23 100644
--- a/xen/include/acpi/acutils.h
+++ b/xen/include/acpi/acutils.h
@@ -164,7 +164,7 @@ acpi_ut_debug_print(u32 requested_debug_level,
 		    u32 line_number,
 		    const char *function_name,
 		    const char *module_name,
-		    u32 component_id, char *format, ...) ACPI_PRINTF_LIKE(6);
+		    u32 component_id, const char *format, ...) ACPI_PRINTF_LIKE(6);
 
 void ACPI_INTERNAL_VAR_XFACE
 acpi_ut_debug_print_raw(u32 requested_debug_level,
@@ -172,24 +172,24 @@ acpi_ut_debug_print_raw(u32 requested_debug_level,
 			const char *function_name,
 			const char *module_name,
 			u32 component_id,
-			char *format, ...) ACPI_PRINTF_LIKE(6);
+			const char *format, ...) ACPI_PRINTF_LIKE(6);
 
 void ACPI_INTERNAL_VAR_XFACE
 acpi_ut_error(const char *module_name,
-	      u32 line_number, char *format, ...) ACPI_PRINTF_LIKE(3);
+	      u32 line_number, const char *format, ...) ACPI_PRINTF_LIKE(3);
 
 void ACPI_INTERNAL_VAR_XFACE
 acpi_ut_exception(const char *module_name,
 		  u32 line_number,
-		  acpi_status status, char *format, ...) ACPI_PRINTF_LIKE(4);
+		  acpi_status status, const char *format, ...) ACPI_PRINTF_LIKE(4);
 
 void ACPI_INTERNAL_VAR_XFACE
 acpi_ut_warning(const char *module_name,
-		u32 line_number, char *format, ...) ACPI_PRINTF_LIKE(3);
+		u32 line_number, const char *format, ...) ACPI_PRINTF_LIKE(3);
 
 void ACPI_INTERNAL_VAR_XFACE
 acpi_ut_info(const char *module_name,
-	     u32 line_number, char *format, ...) ACPI_PRINTF_LIKE(3);
+	     u32 line_number, const char *format, ...) ACPI_PRINTF_LIKE(3);
 
 /*
  * utmisc
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index 352f27f6a723..8ec95791726e 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -81,12 +81,12 @@ int erst_init(void);
 void acpi_hest_init(void);
 
 int acpi_table_init (void);
-int acpi_table_parse(char *id, acpi_table_handler handler);
-int acpi_parse_entries(char *id, unsigned long table_size,
+int acpi_table_parse(const char *id, acpi_table_handler handler);
+int acpi_parse_entries(const char *id, unsigned long table_size,
 		       acpi_table_entry_handler handler,
 		       struct acpi_table_header *table_header,
 		       int entry_id, unsigned int max_entries);
-int acpi_table_parse_entries(char *id, unsigned long table_size,
+int acpi_table_parse_entries(const char *id, unsigned long table_size,
 	int entry_id, acpi_table_entry_handler handler, unsigned int max_entries);
 struct acpi_subtable_header *acpi_table_get_entry_madt(enum acpi_madt_type id,
 						      unsigned int entry_index);
diff --git a/xen/include/xen/dmi.h b/xen/include/xen/dmi.h
index fa25f6cd3816..71a5c46dc6ea 100644
--- a/xen/include/xen/dmi.h
+++ b/xen/include/xen/dmi.h
@@ -20,12 +20,12 @@ enum dmi_field {
  */
 struct dmi_strmatch {
 	u8 slot;
-	char *substr;
+	const char *substr;
 };
 
 struct dmi_system_id {
 	int (*callback)(const struct dmi_system_id *);
-	char *ident;
+	const char *ident;
 	struct dmi_strmatch matches[4];
 	void *driver_data;
 };
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 16 21:44:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 21:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535702.833648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz2TB-0003HX-H3; Tue, 16 May 2023 21:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535702.833648; Tue, 16 May 2023 21:44:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz2TB-0003HQ-Dt; Tue, 16 May 2023 21:44:33 +0000
Received: by outflank-mailman (input) for mailman id 535702;
 Tue, 16 May 2023 21:44:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XeCQ=BF=goodmis.org=rostedt@kernel.org>)
 id 1pz2T9-0003HK-OX
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 21:44:31 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d5f8ed1f-f432-11ed-b229-6b7b168915f2;
 Tue, 16 May 2023 23:44:30 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 1B42C632DB;
 Tue, 16 May 2023 21:44:29 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AF912C433EF;
 Tue, 16 May 2023 21:44:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5f8ed1f-f432-11ed-b229-6b7b168915f2
Date: Tue, 16 May 2023 17:44:22 -0400
From: Steven Rostedt <rostedt@goodmis.org>
To: Arnd Bergmann <arnd@kernel.org>
Cc: x86@kernel.org, Arnd Bergmann <arnd@arndb.de>, Thomas Gleixner
 <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov
 <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin"
 <hpa@zytor.com>, Andy Lutomirski <luto@kernel.org>, Masami Hiramatsu
 <mhiramat@kernel.org>, Mark Rutland <mark.rutland@arm.com>, Juergen Gross
 <jgross@suse.com>, "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
 Alexey Makhalov <amakhalov@vmware.com>, VMware PV-Drivers Reviewers
 <pv-drivers@vmware.com>, Peter Zijlstra <peterz@infradead.org>, Darren Hart
 <dvhart@infradead.org>, Andy Shevchenko <andy@infradead.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, "Rafael J. Wysocki"
 <rafael@kernel.org>, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, linux-pci@vger.kernel.org,
 platform-driver-x86@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-pm@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 01/20] x86: move prepare_ftrace_return prototype to
 header
Message-ID: <20230516174422.63e1e942@gandalf.local.home>
In-Reply-To: <20230516193549.544673-2-arnd@kernel.org>
References: <20230516193549.544673-1-arnd@kernel.org>
	<20230516193549.544673-2-arnd@kernel.org>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.33; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Tue, 16 May 2023 21:35:30 +0200
Arnd Bergmann <arnd@kernel.org> wrote:

> From: Arnd Bergmann <arnd@arndb.de>
>
> On 32-bit builds, the prepare_ftrace_return() function only has a global
> definition, but no prototype before it, which causes a warning:
>
> arch/x86/kernel/ftrace.c:625:6: warning: no previous prototype for ‘prepare_ftrace_return’ [-Wmissing-prototypes]
>   625 | void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
>
> Move the prototype that is already needed for some configurations into
> a header file where it can be seen unconditionally.
>
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
>  arch/x86/include/asm/ftrace.h | 3 +++
>  arch/x86/kernel/ftrace.c      | 3 ---
>  2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
> index 5061ac98ffa1..b8d4a07f9595 100644
> --- a/arch/x86/include/asm/ftrace.h
> +++ b/arch/x86/include/asm/ftrace.h

Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>

-- Steve


From xen-devel-bounces@lists.xenproject.org Tue May 16 21:46:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 21:46:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535707.833658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz2VJ-0003qh-Tc; Tue, 16 May 2023 21:46:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535707.833658; Tue, 16 May 2023 21:46:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz2VJ-0003qa-Q2; Tue, 16 May 2023 21:46:45 +0000
Received: by outflank-mailman (input) for mailman id 535707;
 Tue, 16 May 2023 21:46:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PFhh=BF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pz2VI-0003qU-UK
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 21:46:44 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 25028737-f433-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 23:46:42 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A090B632DB;
 Tue, 16 May 2023 21:46:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 74D25C433D2;
 Tue, 16 May 2023 21:46:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25028737-f433-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684273601;
	bh=Mey66pC3kiYMdWauehQPzA8cXR4e/XLFRAfhNKI+fj0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=UGPyWeKlieXVf+Y50IF1pjts3FEI3bFVbNXggTZ4p2dKCPP2u0G1oa9THQowQ+Jgd
	 vaE6Ql2rktdgIvsHbrmNUD2OqmPNy1WE9rfiDRuTaY03Lz46NU8cF6OPDjqGJBgHRD
	 tojRbMgYhBzZr5SOZOzRqo6AW7wkyNDMT5o80sdOcIO/Vv6WixJC8xT1am68XI9WVK
	 dBn1KQ6dGhyszbljFl5YJgXHOn/gQFUde52HJe2EdbP1yX9KJgNdPfwQAsuA2XDCoV
	 fmfYSVtSnMKbH1QIJgiFJekeTrXo3IiJLAWvEpqE/ltK+/6HPyb9s0TTtvCp1jvyyI
	 bqzDYhisgv5TA==
Date: Tue, 16 May 2023 14:46:35 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Olaf Hering <olaf@aepfle.de>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v1] automation: update documentation about how to build
 a container
In-Reply-To: <20230516154127.11622-1-olaf@aepfle.de>
Message-ID: <alpine.DEB.2.22.394.2305161445570.128889@ubuntu-linux-20-04-desktop>
References: <20230516154127.11622-1-olaf@aepfle.de>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 16 May 2023, Olaf Hering wrote:
> The command used in the example is different from the command used in
> the Gitlab CI pipelines. Adjust it to simulate what will be used by CI.
> This is essentially the build script, which is invoked with a number of
> expected environment variables such as CC, CXX and debug.
> 
> In addition, the input should not be a tty, as that disables colors from
> meson and interactive questions from kconfig.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/build/README.md | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/automation/build/README.md b/automation/build/README.md
> index 2d07cafe0e..1c040533fd 100644
> --- a/automation/build/README.md
> +++ b/automation/build/README.md
> @@ -96,7 +96,8 @@ docker login registry.gitlab.com/xen-project/xen
>  make -C automation/build suse/opensuse-tumbleweed
>  env CONTAINER_NO_PULL=1 \
>    CONTAINER=tumbleweed \
> -  automation/scripts/containerize bash -exc './configure && make'
> +  CONTAINER_ARGS='-e CC=gcc -e CXX=g++ -e debug=y' \
> +  automation/scripts/containerize automation/scripts/build < /dev/null
>  make -C automation/build suse/opensuse-tumbleweed PUSH=1
>  ```
>  
> 


From xen-devel-bounces@lists.xenproject.org Tue May 16 21:53:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 21:53:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535711.833667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz2bq-0005LC-It; Tue, 16 May 2023 21:53:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535711.833667; Tue, 16 May 2023 21:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz2bq-0005L5-Fo; Tue, 16 May 2023 21:53:30 +0000
Received: by outflank-mailman (input) for mailman id 535711;
 Tue, 16 May 2023 21:53:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PFhh=BF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pz2bp-0005Kz-Nx
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 21:53:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 162e25a1-f434-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 23:53:27 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id BFD6F63FC6;
 Tue, 16 May 2023 21:53:25 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 954DBC4339B;
 Tue, 16 May 2023 21:53:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 162e25a1-f434-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684274005;
	bh=SDVOhqlrpiKenrawhVplIOdyEUYgKvyJLav+NDE2hGg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=grV2teyudrdipUb9uyAfdd5mjRcj9RE63vQ1w52fgY+64B7ZDp8RSC3tpb0w00x15
	 zRyyEsutC6HnIDaiWgR/dlNvGNKa4QLfP4M2IbyZndiLcnZT1elrwbcuM3UF6OqoW7
	 uj2hCpH6Aq+pqKsyt8UpKO2ZrffgYjLLuRV8lsqR+1ats+PD4zKuAat1qapXj5sVdR
	 BbCerADv42qDjFWC/uXwv1vLJgv1DudErt4jO3SWwIt5GtlpopYqtcIbs1eGIsaHEW
	 gC5W/qWpW5pXGaVioR2cHd/QEJ4yf0PWvxsPFlgmDDqtU6YTPZdRH2dz/JuwyIzJIE
	 yjLdobMY7Xn2Q==
Date: Tue, 16 May 2023 14:53:22 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Olaf Hering <olaf@aepfle.de>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH v1] automation: provide example for downloading an existing
 container
In-Reply-To: <20230516195204.66590536@sender>
Message-ID: <alpine.DEB.2.22.394.2305161447580.128889@ubuntu-linux-20-04-desktop>
References: <20230502201444.6532-1-olaf@aepfle.de> <alpine.DEB.2.22.394.2305151533320.4125828@ubuntu-linux-20-04-desktop> <20230516105155.0c59143a@sender> <alpine.DEB.2.22.394.2305161145540.62578@ubuntu-linux-20-04-desktop> <20230516195204.66590536@sender>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 16 May 2023, Olaf Hering wrote:
> Am Tue, 16 May 2023 11:46:00 -0700 (PDT)
> schrieb Stefano Stabellini <sstabellini@kernel.org>:
> 
> > I think you have a point that automation/build/README.md should also
> > describe how to do what gitlab-ci does but locally (i.e. call
> > automation/scripts/build). It should not only describe
> > automation/scripts/containerize.
> 
> Meanwhile I have figured this out: additional variables must be set. I
> already sent a patch for the example. That way I was able to
> understand and reproduce the error seen in the CI build.

Thanks!


> > https://gitlab.com/xen-project/xen/-/jobs/4284741849
> 
> It turned out this bug in qemu is triggered by debug=y vs. debug=n in
> the build environment. I have not checked which commit exactly fixed it
> in upstream qemu.git; it should probably be backported. Or qemu should
> be moved forward to v8.x at some point. I think I have not seen this
> specific failure in my own qemu.git builds.
> 
> The reason is: --enable-debug will disable _FORTIFY_SOURCE, so the build
> succeeds. Without that flag, configure will enable _FORTIFY_SOURCE.

This is very interesting. Thank you for investigating the problem.

I would prefer a proper backported fix to QEMU, or a wholesale QEMU
upgrade. But if neither is possible, we could also add a way to pass
--enable-debug, or to disable _FORTIFY_SOURCE some other way, for
Tumbleweed. See for instance EXTRA_XEN_CONFIG as a way to pass special
parameters from automation/gitlab-ci/build.yaml to automation/scripts/build.


From xen-devel-bounces@lists.xenproject.org Tue May 16 22:12:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 22:12:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535716.833678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz2th-0007pj-54; Tue, 16 May 2023 22:11:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535716.833678; Tue, 16 May 2023 22:11:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz2th-0007pc-0p; Tue, 16 May 2023 22:11:57 +0000
Received: by outflank-mailman (input) for mailman id 535716;
 Tue, 16 May 2023 22:11:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PFhh=BF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pz2tg-0007pW-72
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 22:11:56 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a9ca3fee-f436-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 00:11:54 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A7D8E615BC;
 Tue, 16 May 2023 22:11:52 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1A4E9C433D2;
 Tue, 16 May 2023 22:11:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9ca3fee-f436-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684275112;
	bh=c6QxmY1/bQe/rmr+gHzlOo5C4lckEJsWri4UVuuqSt0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=EM3xZwAXIOVnk9qBQ7KtiC1jE09hdaxfHH5DqxLxYgBW52E6611yXM+C7UIkYjeTj
	 AXq7tHhzlaK76+oU6cLt9YnNplSLEPTse1rOLW1Ai/XrGvsbhEJ/EbDqBxwgkqAfTT
	 QrvpYa6eyJxMaO3aGYiePQWpqJc5M7p3wj3vt12TA8e6Ou0BywEtHLKl0beKGz3oSv
	 CHYkdQE9hnmkzPBnzlsAi9xlQdy4MXrHzoLdgwuYdOSXHKNytKutY5NKqPBI19MhuY
	 5MA1Lh+w0mEXO0jAIWy1IBUJ4XlEFvVE7AJuFYnI17FCLx88civixCqJNGKAdcwpHc
	 v0ce7L6diE85g==
Date: Tue, 16 May 2023 15:11:49 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Jan Beulich <jbeulich@suse.com>, andrew.cooper3@citrix.com, 
    xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com, 
    Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
In-Reply-To: <ZGM9vzwGm7Jv6i7M@Air-de-Roger>
Message-ID: <alpine.DEB.2.22.394.2305161509040.128889@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop> <20230513011720.3978354-1-sstabellini@kernel.org> <ZGHx9Mk3UGPdli1h@Air-de-Roger> <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com> <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
 <ZGM6X19p50oSqbNB@Air-de-Roger> <ZGM9vzwGm7Jv6i7M@Air-de-Roger>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1454606356-1684275111=:128889"


--8323329-1454606356-1684275111=:128889
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 16 May 2023, Roger Pau Monné wrote:
> On Tue, May 16, 2023 at 10:10:07AM +0200, Roger Pau Monné wrote:
> > On Mon, May 15, 2023 at 05:16:27PM -0700, Stefano Stabellini wrote:
> > > On Mon, 15 May 2023, Jan Beulich wrote:
> > > > On 15.05.2023 10:48, Roger Pau Monné wrote:
> > > > > On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
> > > > >> From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > >>
> > > > >> Xen always generates a XSDT table even if the firmware provided a RSDT
> > > > >> table. Instead of copying the XSDT header from the firmware table (that
> > > > >> might be missing), generate the XSDT header from a preset.
> > > > >>
> > > > >> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > >> ---
> > > > >>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
> > > > >>  1 file changed, 9 insertions(+), 23 deletions(-)
> > > > >>
> > > > >> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > > > >> index 307edc6a8c..5fde769863 100644
> > > > >> --- a/xen/arch/x86/hvm/dom0_build.c
> > > > >> +++ b/xen/arch/x86/hvm/dom0_build.c
> > > > >> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> > > > >>                                        paddr_t *addr)
> > > > >>  {
> > > > >>      struct acpi_table_xsdt *xsdt;
> > > > >> -    struct acpi_table_header *table;
> > > > >> -    struct acpi_table_rsdp *rsdp;
> > > > >>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
> > > > >>      unsigned long size = sizeof(*xsdt);
> > > > >>      unsigned int i, j, num_tables = 0;
> > > > >> -    paddr_t xsdt_paddr;
> > > > >>      int rc;
> > > > >> +    struct acpi_table_header header = {
> > > > >> +        .signature    = "XSDT",
> > > > >> +        .length       = sizeof(struct acpi_table_header),
> > > > >> +        .revision     = 0x1,
> > > > >> +        .oem_id       = "Xen",
> > > > >> +        .oem_table_id = "HVM",
> > > > > 
> > > > > I think this is wrong, as according to the spec the OEM Table ID must
> > > > > match the OEM Table ID in the FADT.
> > > > > 
> > > > > We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
> > > > > possibly also the other OEM related fields.
> > > > > 
> > > > > Alternatively we might want to copy and use the RSDT on systems that
> > > > > lack an XSDT, or even just copy the header from the RSDT into Xen's
> > > > > crafted XSDT, since the format of the RSDP and the XSDT headers are
> > > > > exactly the same (the difference is in the size of the description
> > > > > headers that come after).
> > > > 
> > > > I guess I'd prefer that last variant.
> > > 
> > > I tried this approach (together with the second patch for necessity) and
> > > it worked.
> > > 
> > > diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > > index fd2cbf68bc..11d6d1bc23 100644
> > > --- a/xen/arch/x86/hvm/dom0_build.c
> > > +++ b/xen/arch/x86/hvm/dom0_build.c
> > > @@ -967,6 +967,10 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> > >          goto out;
> > >      }
> > >      xsdt_paddr = rsdp->xsdt_physical_address;
> > > +    if ( !xsdt_paddr )
> > > +    {
> > > +        xsdt_paddr = rsdp->rsdt_physical_address;
> > > +    }
> > >      acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
> > >      table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
> > >      if ( !table )
> > 
> > To be slightly more consistent, could you use:
> > 
> > /*
> >  * Note the header is the same for both RSDT and XSDT, so it's fine to
> >  * copy the native RSDT header to the Xen crafted XSDT if no native
> >  * XSDT is available.
> >  */
> > if (rsdp->revision > 1 && rsdp->xsdt_physical_address)
> >     sdt_paddr = rsdp->xsdt_physical_address;
> > else
> >     sdt_paddr = rsdp->rsdt_physical_address;
> > 
> > It was an oversight of mine to not check for the RSDP revision, as
> > RSDP < 2 will never have an XSDT.  Also add:
> > 
> > Fixes: 1d74282c455f ('x86: setup PVHv2 Dom0 ACPI tables')
> 
> Just realized this will require some more work so that the guest
> (dom0) provided RSDP is at least revision 2.  You will need to adjust
> the field and recalculate the checksum if needed.
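[Editor's note: the checksum fixup mentioned above can be sketched as
below. This is an illustrative stand-alone helper under the ACPI rule
that a structure's bytes must sum to zero modulo 256, not the actual
Xen/ACPICA code.]

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Illustrative sketch, not the actual Xen/ACPICA helpers: ACPI structures
 * are checksummed so that all their bytes sum to zero modulo 256.  After
 * changing a field such as the RSDP revision, the checksum byte has to be
 * recomputed so the structure sums to zero again.
 */
static uint8_t acpi_checksum(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    size_t i;

    for ( i = 0; i < len; i++ )
        sum += buf[i];

    return sum;
}

/* Recompute the checksum byte at 'csum_off' so the region sums to zero. */
static void acpi_fix_checksum(uint8_t *buf, size_t len, size_t csum_off)
{
    buf[csum_off] = 0;
    buf[csum_off] -= acpi_checksum(buf, len);
}
```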

But we are always providing RSDP version 2 in pvh_setup_acpi, right?


From xen-devel-bounces@lists.xenproject.org Tue May 16 22:44:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 22:44:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535722.833688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz3PK-0002s0-JC; Tue, 16 May 2023 22:44:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535722.833688; Tue, 16 May 2023 22:44:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz3PK-0002rt-Fp; Tue, 16 May 2023 22:44:38 +0000
Received: by outflank-mailman (input) for mailman id 535722;
 Tue, 16 May 2023 22:44:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pz3PJ-0002rj-2B; Tue, 16 May 2023 22:44:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pz3PI-0005TD-VN; Tue, 16 May 2023 22:44:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pz3PI-0000C8-Kj; Tue, 16 May 2023 22:44:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pz3PI-0006cD-KG; Tue, 16 May 2023 22:44:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d7mbIj6nRIyEXDCQPMkVyWmNwullUcxlLDBcEbqm95E=; b=YDtqLbX+oq05tkMWkQo+p8f2tK
	MTI85QObYYFCGhZWZkY44O6w5J+oKijLBBcQEogEYJBfBF6n4xqnV3ubFSxeHeF+hm1aJeAWRBMUm
	KXtBCwQmKStxAhpdRe7jcfWCSEMWJELQ2hdOZCdGgrDBhMmmZKCY7S3DufWwJTS2b/Ls=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180685-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180685: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=42abf5b9c53eb1b1a902002fcda68708234152c3
X-Osstest-Versions-That:
    xen=c8e4bbb5b8ee22fd1591ba6a5a3cef4466dda323
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 22:44:36 +0000

flight 180685 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180685/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  42abf5b9c53eb1b1a902002fcda68708234152c3
baseline version:
 xen                  c8e4bbb5b8ee22fd1591ba6a5a3cef4466dda323

Last test of basis   180684  2023-05-16 16:00:25 Z    0 days
Testing same since   180685  2023-05-16 20:03:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Jason Andryuk <jandryuk@gmail.com>
  Olaf Hering <olaf@aepfle.de>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c8e4bbb5b8..42abf5b9c5  42abf5b9c53eb1b1a902002fcda68708234152c3 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 16 23:34:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 23:34:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535728.833697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz4BP-0008Ex-8E; Tue, 16 May 2023 23:34:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535728.833697; Tue, 16 May 2023 23:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz4BP-0008Eq-5U; Tue, 16 May 2023 23:34:19 +0000
Received: by outflank-mailman (input) for mailman id 535728;
 Tue, 16 May 2023 23:34:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PFhh=BF=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pz4BN-0008Ek-DJ
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 23:34:17 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a3d0512-f442-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 01:34:14 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D322963538;
 Tue, 16 May 2023 23:34:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 42A2AC433D2;
 Tue, 16 May 2023 23:34:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a3d0512-f442-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684280052;
	bh=imHz7Wq4iyqL2m4KAPsf15f1uakWwUFpAlHqEGzneZ8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=kqCDpTLPa+osNWFTQydygIR/XyFi7HnkYYEa+V8EuowZ92NjMXeZrORlCP3E309tG
	 HB8jpYYZRPuXtqhJODzYVZOguCIu9ZrBzk0CdR8HuAtf+q3Z7oDVFQUnXMqD4p/ehF
	 H5ZvOuQFJZbp4HYJJucNzhqtF2llGXqf+flvrcePK6+y4Z13kbcZIA3ybJzAuhdAnE
	 XUER8WXqEX6mlapXiciQzE265Kn8hLzrQxpyJsVEyImyf3U7YQ5149nznmO/WlSUkV
	 6OkwFY6qZm4rGJuyaJnlcG12+sQKJ2Hv3kpETfI336V/1Z2wRbD7f3a6IBbzeOIXAX
	 QII1tjmyS+g6g==
Date: Tue, 16 May 2023 16:34:09 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    jbeulich@suse.com, xen-devel@lists.xenproject.org, 
    Xenia.Ragiadakou@amd.com, Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
In-Reply-To: <ZGNLArlA0Yei4Fr0@Air-de-Roger>
Message-ID: <alpine.DEB.2.22.394.2305161522480.128889@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop> <20230513011720.3978354-2-sstabellini@kernel.org> <ZGH+5OKqnjTjUr/F@Air-de-Roger> <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>
 <ZGNLArlA0Yei4Fr0@Air-de-Roger>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-945506403-1684276658=:128889"
Content-ID: <alpine.DEB.2.22.394.2305161543420.128889@ubuntu-linux-20-04-desktop>


On Tue, 16 May 2023, Roger Pau Monné wrote:
> On Mon, May 15, 2023 at 05:11:25PM -0700, Stefano Stabellini wrote:
> > On Mon, 15 May 2023, Roger Pau Monné wrote:
> > > On Fri, May 12, 2023 at 06:17:20PM -0700, Stefano Stabellini wrote:
> > > > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > 
> > > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> > > > the tables in the guest. Instead, copy the tables to Dom0.
> > > > 
> > > > This is a workaround.
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > ---
> > > > As mentioned in the cover letter, this is an RFC workaround, as I don't
> > > > know the cause of the underlying problem. I do know that this patch
> > > > solves what would otherwise be a hang at boot when Dom0 PVH attempts to
> > > > parse ACPI tables.
> > > 
> > > I'm unsure how safe this is for native systems, as it's possible for
> > > firmware to modify the data in the tables, so copying them would
> > > break that functionality.
> > > 
> > > I think we need to get to the root cause that triggers this behavior
> > > on QEMU.  Is it the table checksum that fails, or something else?  Is
> > > there an error from Linux you could reference?
> > 
> > I agree with you, but so far I haven't managed to get to the root of
> > the issue. Here is what I know. These are the logs of a successful
> > boot using this patch:
> > 
> > [   10.437488] ACPI: Early table checksum verification disabled
> > [   10.439345] ACPI: RSDP 0x000000004005F955 000024 (v02 BOCHS )
> > [   10.441033] ACPI: RSDT 0x000000004005F979 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> > [   10.444045] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> > [   10.445984] ACPI: FACP 0x000000004005FA65 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
> > [   10.447170] ACPI BIOS Warning (bug): Incorrect checksum in table [FACP] - 0x67, should be 0x30 (20220331/tbprint-174)
> > [   10.449522] ACPI: DSDT 0x000000004005FB19 00145D (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
> > [   10.451258] ACPI: FACS 0x000000004005FAD9 000040
> > [   10.452245] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > [   10.452389] ACPI: Reserving FACP table memory at [mem 0x4005fa65-0x4005fad8]
> > [   10.452497] ACPI: Reserving DSDT table memory at [mem 0x4005fb19-0x40060f75]
> > [   10.452602] ACPI: Reserving FACS table memory at [mem 0x4005fad9-0x4005fb18]
> > 
> > 
> > And these are the logs of the same boot (unsuccessful) without this
> > patch:
> > 
> > [   10.516015] ACPI: Early table checksum verification disabled
> > [   10.517732] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> > [   10.519535] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> > [   10.522523] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> > [   10.527453] ACPI: ���� 0x000000007FFE149D FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > [   10.528362] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > [   10.528491] ACPI: Reserving ���� table memory at [mem 0x7ffe149d-0x17ffe149b]
> > 
> > It is clearly memory corruption around the FACS, but I couldn't find
> > the reason for it. The mapping code looks correct. I hope you can
> > suggest a way to narrow down the problem. If I could, I would suggest
> > applying this patch just for the QEMU PVH tests, but we don't have the
> > infrastructure for that in gitlab-ci, as there is a single Xen build
> > for all tests.
> 
> It would be helpful to see the memory map provided to Linux, just in
> case we messed up and there's some overlap.

Everything looks correct. Here are some more logs:

(XEN) Xen-e820 RAM map:
(XEN)  [0000000000000000, 000000000009fbff] (usable)
(XEN)  [000000000009fc00, 000000000009ffff] (reserved)
(XEN)  [00000000000f0000, 00000000000fffff] (reserved)
(XEN)  [0000000000100000, 000000007ffdffff] (usable)
(XEN)  [000000007ffe0000, 000000007fffffff] (reserved)
(XEN)  [00000000fffc0000, 00000000ffffffff] (reserved)
(XEN)  [000000fd00000000, 000000ffffffffff] (reserved)
(XEN) Microcode loading not available
(XEN) New Xen image base address: 0x7f600000
(XEN) System RAM: 2047MB (2096636kB)
(XEN) ACPI: RSDP 000F58D0, 0014 (r0 BOCHS )
(XEN) ACPI: RSDT 7FFE1B21, 0034 (r1 BOCHS  BXPC            1 BXPC        1)
(XEN) ACPI: FACP 7FFE19CD, 0074 (r1 BOCHS  BXPC            1 BXPC        1)
(XEN) ACPI: DSDT 7FFE0040, 198D (r1 BOCHS  BXPC            1 BXPC        1)
(XEN) ACPI: FACS 7FFE0000, 0040
(XEN) ACPI: APIC 7FFE1A41, 0080 (r1 BOCHS  BXPC            1 BXPC        1)
(XEN) ACPI: HPET 7FFE1AC1, 0038 (r1 BOCHS  BXPC            1 BXPC        1)
(XEN) ACPI: WAET 7FFE1AF9, 0028 (r1 BOCHS  BXPC            1 BXPC        1)
[...]
(XEN) Dom0 memory map:
(XEN)  [0000000000000000, 000000000009efff] (usable)
(XEN)  [000000000009fc00, 000000000009ffff] (reserved)
(XEN)  [00000000000f0000, 00000000000fffff] (reserved)
(XEN)  [0000000000100000, 0000000040060f1d] (usable)
(XEN)  [0000000040060f1e, 0000000040060fa7] (ACPI data)
(XEN)  [0000000040061000, 000000007ffdffff] (unusable)
(XEN)  [000000007ffe0000, 000000007fffffff] (reserved)
(XEN)  [00000000fffc0000, 00000000ffffffff] (reserved)
(XEN)  [000000fd00000000, 000000ffffffffff] (reserved)
[...]
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009efff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000040060f1d] usable
[    0.000000] BIOS-e820: [mem 0x0000000040060f1e-0x0000000040060fa7] ACPI data
[    0.000000] BIOS-e820: [mem 0x0000000040061000-0x000000007ffdffff] unusable
[    0.000000] BIOS-e820: [mem 0x000000007ffe0000-0x000000007fffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
[...]
[   10.102427] ACPI: Early table checksum verification disabled
[   10.104455] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
[   10.106250] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[   10.109549] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[   10.115173] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
[   10.116054] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
[   10.116182] ACPI: Reserving ���� table memory at [mem 0x7ffe19cd-0x17ffe19cb]



> It seems like one of the XSDT entries (the FADT one) is corrupt?
> 
> Could you maybe add some debugging to the Xen-crafted XSDT placement?

I added a printk just after:

  xsdt->table_offset_entry[j++] = tables[i].address;

And it printed only once:

  (XEN) DEBUG pvh_setup_acpi_xsdt 1000 name=FACP address=7ffe19cd

That actually matches the address read by Linux:

  [   10.175448] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)

So the address seems correct. It is the content of the FADT/FACP table
that is corrupted.

I wrote the following function in Xen:

static void check(void)
{
    unsigned long addr = 0x7ffe19cd;
    struct acpi_table_fadt *fadt;

    fadt = acpi_os_map_memory(addr, sizeof(*fadt));
    /*
     * header.signature is a fixed 4-byte field with no NUL terminator,
     * so %s also prints whatever bytes follow it (hence the stray 't'
     * in the "s=FACPt" output below); %.4s would print just the signature.
     */
    printk("DEBUG %s %d s=%s\n", __func__, __LINE__, fadt->header.signature);
    acpi_os_unmap_memory(fadt, sizeof(*fadt));
}

It prints the right table signature at the end of pvh_setup_acpi.
I also added a call at the top of xenmem_add_to_physmap_one, and the
signature is still correct. Then I added a call at the beginning of
__update_vcpu_system_time. Here is the surprise: from Xen's point of view
the table never gets corrupted. Here are the logs:

[...]
(XEN) DEBUG fadt_check 1551 s=FACPt
(XEN) DEBUG fadt_check 1551 s=FACPt
(XEN) DEBUG fadt_check 1551 s=FACPt
(XEN) DEBUG fadt_check 1551 s=FACPt
(XEN) d0v0: upcall vector f3
[    0.000000] Linux version 6.1.19 (root@124de7fbba7f) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_3
[    0.000000] Command line: console=hvc0
[...]
[   10.371610] ACPI: Early table checksum verification disabled
[   10.373633] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
[   10.375548] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[   10.378732] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
[   10.384188] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
[   10.385374] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
[   10.385519] ACPI: Reserving ���� table memory at [mem 0x7ffe19cd-0x17ffe19cb]
[...]
(XEN) DEBUG fadt_check 1551 s=FACPt
(XEN) DEBUG fadt_check 1551 s=FACPt
(XEN) DEBUG fadt_check 1551 s=FACPt
(XEN) DEBUG fadt_check 1551 s=FACPt


So it looks like it is a problem with the mapping itself? Xen sees the
data correctly and Linux sees it corrupted?



> > If it helps to repro on your side, you can just do the following,
> > assuming your Xen repo is in /local/repos/xen:
> > 
> > 
> > cd /local/repos/xen
> > mkdir binaries
> > cd binaries
> > mkdir -p dist/install/
> > 
> > docker run -it -v `pwd`:`pwd` registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12
> > cp /initrd* /local/repos/xen/binaries
> > exit
> > 
> > docker run -it -v `pwd`:`pwd` registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:6.1.19
> > cp /bzImage /local/repos/xen/binaries
> > exit
> > 
> > That's it. Now you have enough pre-built binaries to repro the issue.
> > Next you can edit automation/scripts/qemu-alpine-x86_64.sh to add
> > 
> >   dom0=pvh dom0_mem=1G dom0-iommu=none
> 
> Do you get to boot with dom0-iommu=none?  Is there also some trick
> here in order to identity-map dom0? I would expect things not to work,
> because addresses used for I/O with QEMU-emulated devices won't be
> correct.

That's easy: just don't use any devices to boot. Put everything needed
in the dom0 ramdisk. That's the configuration provided in the gitlab-ci
script I pointed you to in the previous email, which uses an Alpine
Linux ramdisk.


From xen-devel-bounces@lists.xenproject.org Tue May 16 23:52:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 16 May 2023 23:52:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535735.833707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz4T3-0002KM-S2; Tue, 16 May 2023 23:52:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535735.833707; Tue, 16 May 2023 23:52:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz4T3-0002KF-P3; Tue, 16 May 2023 23:52:33 +0000
Received: by outflank-mailman (input) for mailman id 535735;
 Tue, 16 May 2023 23:52:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pz4T3-0002K5-4v; Tue, 16 May 2023 23:52:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pz4T3-0006tF-1y; Tue, 16 May 2023 23:52:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pz4T2-0004NS-Ik; Tue, 16 May 2023 23:52:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pz4T2-0002l3-Hx; Tue, 16 May 2023 23:52:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FpUS8ZjZ7gfC2ePlHzNblHsZkEGEHT0mknW4fk7re6w=; b=KCN6Fhy6Oa9G78pkrdIp55KxWe
	2XLyPZt3VDYQCaLVArmhrfExYs1wgy+fkXhcVd77vkk1xlLf81O0eAU1tAVxWBQMXbZVK1VjoC7ia
	M2nIQpWwihr3TNxXIzJmbtrLUEXCRbahPwqUQK8arXlTguVNeXzJi55eQLhqLs1+IucY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180680-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180680: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-install:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 16 May 2023 23:52:32 +0000

flight 180680 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180680/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 18 guest-localmigrate/x10 fail in 180670 pass in 180680
 test-arm64-arm64-xl-credit2   7 xen-install      fail in 180674 pass in 180680
 test-arm64-arm64-libvirt-raw 17 guest-start/debian.repeat fail in 180674 pass in 180680
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180670
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180674
 test-amd64-amd64-xl-vhd      21 guest-start/debian.repeat  fail pass in 180674

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   30 days
Failing since        180281  2023-04-17 06:24:36 Z   29 days   55 attempts
Testing same since   180664  2023-05-14 20:12:01 Z    2 days    5 attempts

------------------------------------------------------------
2389 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 302499 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 17 00:35:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 00:35:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535741.833718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz57y-0007MJ-9u; Wed, 17 May 2023 00:34:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535741.833718; Wed, 17 May 2023 00:34:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz57y-0007MC-6P; Wed, 17 May 2023 00:34:50 +0000
Received: by outflank-mailman (input) for mailman id 535741;
 Wed, 17 May 2023 00:34:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO9T=BG=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pz57w-0007M3-7K
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 00:34:48 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9f6f58cc-f44a-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 02:34:46 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 61B0E60AB0;
 Wed, 17 May 2023 00:34:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 62AE0C433EF;
 Wed, 17 May 2023 00:34:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f6f58cc-f44a-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684283684;
	bh=jTidJcjLSbTguKlPt0yffWmq1ys/89ao202AwH6GW8A=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=CmDNQL+Zvg/vVRAYrLbzSltX49xBbyBn2txukws9k9Ie3ZuU6c3O5oy+KY9sGaCO4
	 BOVKG/fmK7sL7haa4t1AoBxv85cWanVgH5R3bn0xSAOHrCD2R8FdwNDOCiS7OrUC01
	 usq2rLlLFNjcSQdjnCjUfuW7KPYZq8gAnS+86I+B3tBhxYfwRHAX9tPKg3ayd6WgVL
	 E25A5JPSy74klyJ8qoHZrHh2k/v930q15kqHlEHh8x5p7LvLtW5OJkhMfXMiZVbT1P
	 BdS8+Wymi2kI/Boze+v2Q7Q6pzQIgMnl8fOgdw5eFeyc3iefaWYvzJnyVWY/uzJWTM
	 T2hXV+fWaauQw==
Date: Tue, 16 May 2023 17:34:41 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, Michal Orzel <michal.orzel@amd.com>
Subject: Re: [PATCH 1/3] xen/misra: xen-analysis.py: fix parallel analysis
 Cppcheck errors
In-Reply-To: <20230504131245.2985400-2-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305161733010.128889@ubuntu-linux-20-04-desktop>
References: <20230504131245.2985400-1-luca.fancellu@arm.com> <20230504131245.2985400-2-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 May 2023, Luca Fancellu wrote:
> Currently Cppcheck has a limitation that prevents using make with a
> parallel build while running a parallel Cppcheck invocation on each
> translation unit (the .c files), because of spurious internal errors.
> 
> The issue comes from the fact that, when using the build directory,
> Cppcheck saves temporary files as <filename>.c.<many-extensions>, which
> doesn't work well when files with the same name are being analysed at
> the same time, leading to race conditions.
> 
> Fix the issue by recreating, under the build directory, the directory
> structure of the file being analysed, so that temporary files cannot
> clash.
> 
> Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
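[For readers following along, the per-file build directory scheme the patch
describes can be sketched as below. This is an illustrative Python sketch,
not code from the patch; the function name and layout are assumptions.]

```python
import os
import tempfile

def per_file_build_dir(build_dir, rel_src_path):
    """Create (and return) a per-translation-unit Cppcheck build directory.

    Mirroring the source tree under the build directory means two .c files
    that share a basename (e.g. arch/arm/smp.c and arch/x86/smp.c) get
    distinct directories for Cppcheck's temporary files, so parallel
    invocations no longer race on <filename>.c.<many-extensions>.
    """
    subdir = os.path.join(build_dir, os.path.dirname(rel_src_path))
    os.makedirs(subdir, exist_ok=True)  # same effect as the patch's mkdir -p
    return subdir

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as build:
        a = per_file_build_dir(build, "arch/arm/smp.c")
        b = per_file_build_dir(build, "arch/x86/smp.c")
        print(a != b)  # distinct dirs despite identical basenames
```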


> ---
>  xen/scripts/xen_analysis/cppcheck_analysis.py |  8 +++-----
>  xen/tools/cppcheck-cc.sh                      | 19 ++++++++++++++++++-
>  2 files changed, 21 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
> index ab52ce38d502..658795bb9f5b 100644
> --- a/xen/scripts/xen_analysis/cppcheck_analysis.py
> +++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
> @@ -139,7 +139,6 @@ def generate_cppcheck_deps():
>      # Compiler defines are in compiler-def.h which is included in config.h
>      #
>      cppcheck_flags="""
> ---cppcheck-build-dir={}/{}
>   --max-ctu-depth=10
>   --enable=style,information,missingInclude
>   --template=\'{{file}}({{line}},{{column}}):{{id}}:{{severity}}:{{message}}\'
> @@ -150,8 +149,7 @@ def generate_cppcheck_deps():
>   --suppress='unusedStructMember:*'
>   --include={}/include/xen/config.h
>   -DCPPCHECK
> -""".format(settings.outdir, CPPCHECK_BUILD_DIR, settings.xen_dir,
> -           settings.outdir, settings.xen_dir)
> +""".format(settings.xen_dir, settings.outdir, settings.xen_dir)
>  
>      invoke_cppcheck = utils.invoke_command(
>              "{} --version".format(settings.cppcheck_binpath),
> @@ -204,9 +202,9 @@ def generate_cppcheck_deps():
>  
>      cppcheck_cc_flags = """--compiler={} --cppcheck-cmd={} {}
>   --cppcheck-plat={}/cppcheck-plat --ignore-path=tools/
> - --ignore-path=arch/x86/efi/check.c
> + --ignore-path=arch/x86/efi/check.c --build-dir={}/{}
>  """.format(xen_cc, settings.cppcheck_binpath, cppcheck_flags,
> -           settings.tools_dir)
> +           settings.tools_dir, settings.outdir, CPPCHECK_BUILD_DIR)
>  
>      if settings.cppcheck_html:
>          cppcheck_cc_flags = cppcheck_cc_flags + " --cppcheck-html"
> diff --git a/xen/tools/cppcheck-cc.sh b/xen/tools/cppcheck-cc.sh
> index f6728e4c1084..16a965edb7ec 100755
> --- a/xen/tools/cppcheck-cc.sh
> +++ b/xen/tools/cppcheck-cc.sh
> @@ -24,6 +24,7 @@ Options:
>  EOF
>  }
>  
> +BUILD_DIR=""
>  CC_FILE=""
>  COMPILER=""
>  CPPCHECK_HTML="n"
> @@ -66,6 +67,10 @@ do
>              help
>              exit 0
>              ;;
> +        --build-dir=*)
> +            BUILD_DIR="${OPTION#*=}"
> +            sm_tool_args="n"
> +            ;;
>          --compiler=*)
>              COMPILER="${OPTION#*=}"
>              sm_tool_args="n"
> @@ -107,6 +112,12 @@ then
>      exit 1
>  fi
>  
> +if [ "${BUILD_DIR}" = "" ]
> +then
> +    echo "--build-dir arg is mandatory."
> +    exit 1
> +fi
> +
>  function create_jcd() {
>      local line="${1}"
>      local arg_num=0
> @@ -199,13 +210,18 @@ then
>              exit 1
>          fi
>  
> +        # Generate build directory for the analysed file
> +        cppcheck_build_dir="${BUILD_DIR}/${OBJTREE_PATH}"
> +        mkdir -p "${cppcheck_build_dir}"
> +
>          # Shellcheck complains about missing quotes on CPPCHECK_TOOL_ARGS, but
>          # they can't be used here
>          # shellcheck disable=SC2086
>          ${CPPCHECK_TOOL} ${CPPCHECK_TOOL_ARGS} \
>              --project="${JDB_FILE}" \
>              --output-file="${out_file}" \
> -            --platform="${platform}"
> +            --platform="${platform}" \
> +            --cppcheck-build-dir=${cppcheck_build_dir}
>  
>          if [ "${CPPCHECK_HTML}" = "y" ]
>          then
> @@ -216,6 +232,7 @@ then
>                  --project="${JDB_FILE}" \
>                  --output-file="${out_file%.txt}.xml" \
>                  --platform="${platform}" \
> +                --cppcheck-build-dir=${cppcheck_build_dir} \
>                  -q \
>                  --xml
>          fi
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Wed May 17 00:39:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 00:39:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535745.833728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz5C7-0007zM-Qy; Wed, 17 May 2023 00:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535745.833728; Wed, 17 May 2023 00:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz5C7-0007zF-NG; Wed, 17 May 2023 00:39:07 +0000
Received: by outflank-mailman (input) for mailman id 535745;
 Wed, 17 May 2023 00:39:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO9T=BG=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pz5C6-0007z9-Uk
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 00:39:06 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 39f7e306-f44b-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 02:39:05 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id C1EA86303F;
 Wed, 17 May 2023 00:39:04 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CF43BC433D2;
 Wed, 17 May 2023 00:39:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39f7e306-f44b-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684283944;
	bh=yBGgIce3LbkMp7weWqw3zjRUv/dBOkI/gB3bWEMe4kI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=r323E9w7DjzGjK2zeg7MVY0cD47j9L0FIYn4YJ6ZFUChRJB5jaLAKtd3UjWLCNT6r
	 hivD9p56joHsNh3gNlyzHGIQZX19GbKXCNGAbl+r5e8Kh6JNGe0u0Rtz69lRBvzJtg
	 3muClq9m/kdIXODEfrTNotULrrOGLjj+pQM5DUcPYY9/VjmgHRtd7Hg3EgY+DGc0tg
	 lwTeHXMmVWyoSjtNfqKR7D1DPQaiO5tYuoCtt9PUqd+Wi5T1QM+LA07NLGBylMo/mo
	 tSUJGminIaTITxwzZFNvQmH9U3dLldNWfil9rdWaxWTm2OxqasSNxHmKgeRSsLWORJ
	 orAPaFFHuPdUA==
Date: Tue, 16 May 2023 17:39:01 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] xen/misra: xen-analysis.py: allow cppcheck version
 above 2.7
In-Reply-To: <20230504131245.2985400-3-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305161738460.128889@ubuntu-linux-20-04-desktop>
References: <20230504131245.2985400-1-luca.fancellu@arm.com> <20230504131245.2985400-3-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 May 2023, Luca Fancellu wrote:
> Allow the use of Cppcheck versions above 2.7, with the exception of
> 2.8, which is known and documented to be broken.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
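[The version gate in the patch above can be restated as a standalone sketch.
This is illustrative, not the patch's code; note the escaped dots in the
pattern, whereas the patch's regex uses bare '.', which matches any
character (harmless here, but imprecise).]

```python
import re

def check_cppcheck_version(version_output):
    """Parse 'Cppcheck X.Y[.Z]' and enforce the supported-version policy:
    reject anything below 2.7, and reject 2.8 (known broken)."""
    m = re.search(r'^Cppcheck (\d+)\.(\d+)(?:\.\d+)?', version_output,
                  flags=re.M)
    if not m:
        raise RuntimeError("Can't find cppcheck version or version "
                           "not identified: {}".format(version_output))
    major, minor = int(m.group(1)), int(m.group(2))
    if (major, minor) < (2, 7):
        raise RuntimeError("Cppcheck version < 2.7 is not supported")
    if (major, minor) == (2, 8):
        raise RuntimeError("Cppcheck version 2.8 is known to be broken")
    return major, minor
```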


> ---
>  xen/scripts/xen_analysis/cppcheck_analysis.py | 20 +++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
> index 658795bb9f5b..c3783e8df343 100644
> --- a/xen/scripts/xen_analysis/cppcheck_analysis.py
> +++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
> @@ -157,13 +157,25 @@ def generate_cppcheck_deps():
>              "Error occured retrieving cppcheck version:\n{}\n\n{}"
>          )
>  
> -    version_regex = re.search('^Cppcheck (.*)$', invoke_cppcheck, flags=re.M)
> +    version_regex = re.search('^Cppcheck (\d+).(\d+)(?:.\d+)?$',
> +                              invoke_cppcheck, flags=re.M)
>      # Currently, only cppcheck version >= 2.7 is supported, but version 2.8 is
>      # known to be broken, please refer to docs/misra/cppcheck.txt
> -    if (not version_regex) or (not version_regex.group(1).startswith("2.7")):
> +    if (not version_regex) or len(version_regex.groups()) < 2:
>          raise CppcheckDepsPhaseError(
> -                "Can't find cppcheck version or version is not 2.7"
> -              )
> +            "Can't find cppcheck version or version not identified: "
> +            "{}".format(invoke_cppcheck)
> +        )
> +    major = int(version_regex.group(1))
> +    minor = int(version_regex.group(2))
> +    if major < 2 or (major == 2 and minor < 7):
> +        raise CppcheckDepsPhaseError(
> +            "Cppcheck version < 2.7 is not supported"
> +        )
> +    if major == 2 and minor == 8:
> +        raise CppcheckDepsPhaseError(
> +            "Cppcheck version 2.8 is known to be broken, see the documentation"
> +        )
>  
>      # If misra option is selected, append misra addon and generate cppcheck
>      # files for misra analysis
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Wed May 17 00:44:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 00:44:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535749.833738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz5H4-0000zA-Cp; Wed, 17 May 2023 00:44:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535749.833738; Wed, 17 May 2023 00:44:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz5H4-0000z3-9u; Wed, 17 May 2023 00:44:14 +0000
Received: by outflank-mailman (input) for mailman id 535749;
 Wed, 17 May 2023 00:44:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO9T=BG=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pz5H2-0000yx-Ql
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 00:44:12 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ef974b00-f44b-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 02:44:10 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5A88563553;
 Wed, 17 May 2023 00:44:09 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5E578C433EF;
 Wed, 17 May 2023 00:44:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef974b00-f44b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684284248;
	bh=ehs/r7AZWycXiEY8yCcbKJTzpi6n+5JgucP7q5kCOH0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=JjKn8qkg0cItcEhKBuO/EYph8aqSXa950FXChxAzZ1ci89OuiXfuVJa1kUR31xH0G
	 F6zRDrKBOJUC/b1Wvm6ukNaWShCTWnBsbioXefTuqWk7mPig95EBKQtmGb16NhNTse
	 XPa3QnH/wG0P9z/E4WtbjUjqYuEoW38LCu0vabe+34+XI/2V781PV4+pnlN672UvZd
	 HFKBksepjsPx8+BtN1BI4jLVQOw3s3dMULClELXWKxylTZiMk6NLa4oLZgcGdT7ul/
	 uCAdqQIGD97aKGYYdqEKyqYXAkPaQ0nX8oHU/m42nqFR/bXa+ZnCuTwMlTqvqxiyEZ
	 cDCT8KP7th7sQ==
Date: Tue, 16 May 2023 17:44:05 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 3/3] xen/misra: xen-analysis.py: use the relative path
 from the ...
In-Reply-To: <20230504131245.2985400-4-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305161743520.128889@ubuntu-linux-20-04-desktop>
References: <20230504131245.2985400-1-luca.fancellu@arm.com> <20230504131245.2985400-4-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 May 2023, Luca Fancellu wrote:
> repository in the reports
> 
> Currently the cppcheck report entries show the relative file path
> from the /xen folder of the repository instead of the base folder.
> To ease checks, for example when comparing a git diff output against
> the report, use the repository folder as the base.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
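[The effect of switching the strip prefix from settings.xen_dir to
settings.repo_dir can be seen in a small sketch. The helper below is
hypothetical (not from the patch): with the repository root as the base,
report paths keep the leading 'xen/' component and line up with git diff
paths.]

```python
import os

def repo_relative(finding_path, repo_dir):
    """Rebase an absolute report path onto the repository root, so that
    entries match the a/xen/... b/xen/... paths seen in git diff output."""
    return os.path.relpath(finding_path, repo_dir)
```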


> ---
>  xen/scripts/xen_analysis/cppcheck_analysis.py | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
> index c3783e8df343..c8abbe0fca79 100644
> --- a/xen/scripts/xen_analysis/cppcheck_analysis.py
> +++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
> @@ -149,7 +149,7 @@ def generate_cppcheck_deps():
>   --suppress='unusedStructMember:*'
>   --include={}/include/xen/config.h
>   -DCPPCHECK
> -""".format(settings.xen_dir, settings.outdir, settings.xen_dir)
> +""".format(settings.repo_dir, settings.outdir, settings.xen_dir)
>  
>      invoke_cppcheck = utils.invoke_command(
>              "{} --version".format(settings.cppcheck_binpath),
> @@ -240,7 +240,7 @@ def generate_cppcheck_report():
>      try:
>          cppcheck_report_utils.cppcheck_merge_txt_fragments(fragments,
>                                                             report_filename,
> -                                                           [settings.xen_dir])
> +                                                           [settings.repo_dir])
>      except cppcheck_report_utils.CppcheckTXTReportError as e:
>          raise CppcheckReportPhaseError(e)
>  
> @@ -257,7 +257,7 @@ def generate_cppcheck_report():
>          try:
>              cppcheck_report_utils.cppcheck_merge_xml_fragments(fragments,
>                                                                 xml_filename,
> -                                                               settings.xen_dir,
> +                                                               settings.repo_dir,
>                                                                 settings.outdir)
>          except cppcheck_report_utils.CppcheckHTMLReportError as e:
>              raise CppcheckReportPhaseError(e)
> @@ -265,7 +265,7 @@ def generate_cppcheck_report():
>          utils.invoke_command(
>              "{} --file={} --source-dir={} --report-dir={}/html --title=Xen"
>                  .format(settings.cppcheck_htmlreport_binpath, xml_filename,
> -                        settings.xen_dir, html_report_dir),
> +                        settings.repo_dir, html_report_dir),
>              False, CppcheckReportPhaseError,
>              "Error occured generating Cppcheck HTML report:\n{}"
>          )
> @@ -273,7 +273,7 @@ def generate_cppcheck_report():
>          html_files = utils.recursive_find_file(html_report_dir, r'.*\.html$')
>          try:
>              cppcheck_report_utils.cppcheck_strip_path_html(html_files,
> -                                                           (settings.xen_dir,
> +                                                           (settings.repo_dir,
>                                                              settings.outdir))
>          except cppcheck_report_utils.CppcheckHTMLReportError as e:
>              raise CppcheckReportPhaseError(e)
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Wed May 17 01:27:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 01:27:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535756.833748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz5wX-0003me-OF; Wed, 17 May 2023 01:27:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535756.833748; Wed, 17 May 2023 01:27:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz5wX-0003mX-KL; Wed, 17 May 2023 01:27:05 +0000
Received: by outflank-mailman (input) for mailman id 535756;
 Wed, 17 May 2023 01:27:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO9T=BG=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pz5wW-0003mR-7c
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 01:27:04 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ec23c425-f451-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 03:27:01 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9B74F61444;
 Wed, 17 May 2023 01:27:00 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A497CC433D2;
 Wed, 17 May 2023 01:26:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec23c425-f451-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684286820;
	bh=OQKCDsSCaSoZDhinEV6zLuTDz68GaOYAu2TFP6gpAEI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=cWiec/SQEUDbvozvAJzc0OIcBgFM9Pv++nGdv0IXFFQifKDjZ/mkZSO8O4+11zFxl
	 FcvkkeotF7/iHCMg0z1Sf5AcUL6+vdzppQk9C9lOy6dqpfnEg7WaeFiWZk9Ym9mfVY
	 vdbdBvPkf0SwMO1XXw02XJVOSgIwsCi0RPhTLlQOt8Wa4HvgBeQ4J4ZsCEOIa+qdHw
	 0Mb8uY3ZzlQtmS96JeJ+aqmHAtmvQua8Bb+NaPSHK/FiCbCsxVOp2NrWGtzo+k2kKp
	 DwHRya/OvXf+obfw4GQLxHTBeeBsJlXIxM31n9x2SLG+SwfUjlDuhDHZzac1I0EtZM
	 RLpRGqNDuwtkQ==
Date: Tue, 16 May 2023 18:26:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] xen/misra: add diff-report.py tool
In-Reply-To: <20230504142523.2989306-2-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305161826460.128889@ubuntu-linux-20-04-desktop>
References: <20230504142523.2989306-1-luca.fancellu@arm.com> <20230504142523.2989306-2-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 May 2023, Luca Fancellu wrote:
> Add a new tool, diff-report.py, that can be used to compute the
> difference between reports generated by the xen-analysis.py tool.
> Currently the tool supports the Xen cppcheck text report format.
> 
> The tool prints every finding that is in the report passed with -r
> (check report) but not in the report passed with -b (baseline).
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
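
For readers skimming the series, the diff semantics the tool implements (compare with Report.__sub__ in the patch below) can be sketched in a few lines of plain Python; the names here are illustrative, not the patch's API:

```python
# Minimal sketch of the report-diff semantics: keep a finding from the
# check report only when the baseline has no finding for the same file
# at the same line number.

def diff_reports(check, baseline):
    """check/baseline: dict mapping file path -> list of (line, text)."""
    out = {}
    for path, entries in check.items():
        base_lines = {line for line, _ in baseline.get(path, [])}
        kept = [(line, text) for line, text in entries
                if line not in base_lines]
        if kept:
            out[path] = kept
    return out
```

A finding present in both reports at the same file and line is dropped; everything else in the check report is considered new.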


> ---
>  xen/scripts/diff-report.py                    |  76 ++++++++++++
>  .../xen_analysis/diff_tool/__init__.py        |   0
>  .../xen_analysis/diff_tool/cppcheck_report.py |  41 +++++++
>  xen/scripts/xen_analysis/diff_tool/debug.py   |  36 ++++++
>  xen/scripts/xen_analysis/diff_tool/report.py  | 114 ++++++++++++++++++
>  5 files changed, 267 insertions(+)
>  create mode 100755 xen/scripts/diff-report.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py
> 
> diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
> new file mode 100755
> index 000000000000..4913fb43a8f9
> --- /dev/null
> +++ b/xen/scripts/diff-report.py
> @@ -0,0 +1,76 @@
> +#!/usr/bin/env python3
> +
> +import os, sys
> +from argparse import ArgumentParser
> +from xen_analysis.diff_tool.debug import Debug
> +from xen_analysis.diff_tool.report import ReportError
> +from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
> +
> +
> +def log_info(text, end='\n'):
> +    global args
> +    global file_out
> +
> +    if args.verbose:
> +        print(text, end=end, file=file_out)
> +
> +
> +def main(argv):
> +    global args
> +    global file_out
> +
> +    parser = ArgumentParser(prog="diff-report.py")
> +    parser.add_argument("-b", "--baseline", required=True, type=str,
> +                        help="Path to the baseline report.")
> +    parser.add_argument("--debug", action='store_true',
> +                        help="Produce intermediate reports during operations.")
> +    parser.add_argument("-o", "--out", default="stdout", type=str,
> +                        help="Where to print the tool output. Default is "
> +                             "stdout")
> +    parser.add_argument("-r", "--report", required=True, type=str,
> +                        help="Path to the 'check report', the one checked "
> +                             "against the baseline.")
> +    parser.add_argument("-v", "--verbose", action='store_true',
> +                        help="Print more informations during the run.")
> +
> +    args = parser.parse_args()
> +
> +    if args.out == "stdout":
> +        file_out = sys.stdout
> +    else:
> +        try:
> +            file_out = open(args.out, "wt")
> +        except OSError as e:
> +            print("ERROR: Issue opening file {}: {}".format(args.out, e))
> +            sys.exit(1)
> +
> +    debug = Debug(args)
> +
> +    try:
> +        baseline_path = os.path.realpath(args.baseline)
> +        log_info("Loading baseline report {}".format(baseline_path), "")
> +        baseline = CppcheckReport(baseline_path)
> +        baseline.parse()
> +        debug.debug_print_parsed_report(baseline)
> +        log_info(" [OK]")
> +        new_rep_path = os.path.realpath(args.report)
> +        log_info("Loading check report {}".format(new_rep_path), "")
> +        new_rep = CppcheckReport(new_rep_path)
> +        new_rep.parse()
> +        debug.debug_print_parsed_report(new_rep)
> +        log_info(" [OK]")
> +    except ReportError as e:
> +        print("ERROR: {}".format(e))
> +        sys.exit(1)
> +
> +    output = new_rep - baseline
> +    print(output, end="", file=file_out)
> +
> +    if len(output) > 0:
> +        sys.exit(1)
> +
> +    sys.exit(0)
> +
> +
> +if __name__ == "__main__":
> +    main(sys.argv[1:])
> diff --git a/xen/scripts/xen_analysis/diff_tool/__init__.py b/xen/scripts/xen_analysis/diff_tool/__init__.py
> new file mode 100644
> index 000000000000..e69de29bb2d1
> diff --git a/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
> new file mode 100644
> index 000000000000..787a51aca583
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
> @@ -0,0 +1,41 @@
> +#!/usr/bin/env python3
> +
> +import re
> +from .report import Report, ReportError
> +
> +
> +class CppcheckReport(Report):
> +    def __init__(self, report_path: str) -> None:
> +        super().__init__(report_path)
> +        # This matches a string like:
> +        # path/to/file.c(<line number>,<digits>):<whatever>
> +        # and captures file name path and line number
> +        # the last capture group is used for text substitution in __str__
> +        self.__report_entry_regex = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')
> +
> +    def parse(self) -> None:
> +        report_path = self.get_report_path()
> +        try:
> +            with open(report_path, "rt") as infile:
> +                report_lines = infile.readlines()
> +        except OSError as e:
> +            raise ReportError("Issue with reading file {}: {}"
> +                              .format(report_path, e))
> +        for line in report_lines:
> +            entry = self.__report_entry_regex.match(line)
> +            if entry and entry.group(1) and entry.group(2):
> +                file_path = entry.group(1)
> +                line_number = int(entry.group(2))
> +                self.add_entry(file_path, line_number, line)
> +            else:
> +                raise ReportError("Malformed report entry in file {}:\n{}"
> +                                  .format(report_path, line))
> +
> +    def __str__(self) -> str:
> +        ret = ""
> +        for entry in self.to_list():
> +            ret += re.sub(self.__report_entry_regex,
> +                          r'{}({}\3'.format(entry.file_path,
> +                                            entry.line_number),
> +                          entry.text)
> +        return ret
> diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
> new file mode 100644
> index 000000000000..d46df3300d21
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/debug.py
> @@ -0,0 +1,36 @@
> +#!/usr/bin/env python3
> +
> +import os
> +from .report import Report
> +
> +
> +class Debug:
> +    def __init__(self, args):
> +        self.args = args
> +
> +    def __get_debug_out_filename(self, path: str, type: str) -> str:
> +        # Take basename
> +        file_name = os.path.basename(path)
> +        # Split in name and extension
> +        file_name = os.path.splitext(file_name)
> +        if self.args.out != "stdout":
> +            out_folder = os.path.dirname(self.args.out)
> +        else:
> +            out_folder = "./"
> +        dbg_report_path = os.path.join(out_folder, file_name[0] + type + file_name[1])
> +
> +        return dbg_report_path
> +
> +    def __debug_print_report(self, report: Report, type: str) -> None:
> +        report_name = self.__get_debug_out_filename(report.get_report_path(),
> +                                                    type)
> +        try:
> +            with open(report_name, "wt") as outfile:
> +                print(report, end="", file=outfile)
> +        except OSError as e:
> +            print("ERROR: Issue opening file {}: {}".format(report_name, e))
> +
> +    def debug_print_parsed_report(self, report: Report) -> None:
> +        if not self.args.debug:
> +            return
> +        self.__debug_print_report(report, ".parsed")
> diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
> new file mode 100644
> index 000000000000..d958d1816eb4
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/report.py
> @@ -0,0 +1,114 @@
> +#!/usr/bin/env python3
> +
> +import os
> +
> +
> +class ReportError(Exception):
> +    pass
> +
> +
> +class Report:
> +    class ReportEntry:
> +        def __init__(self, file_path: str, line_number: int,
> +                     entry_text: list, line_id: int) -> None:
> +            if not isinstance(line_number, int) or \
> +               not isinstance(line_id, int):
> +                raise ReportError("ReportEntry constructor wrong type args")
> +            self.file_path = file_path
> +            self.line_number = line_number
> +            self.text = entry_text
> +            self.line_id = line_id
> +
> +        def __str__(self) -> str:
> +            ret = ''
> +            header = 'File path:Count\n'
> +
> +            for path in self.stats:
> +                ret += f'{path}: {len(self.stats[path])}\n'
> +
> +            if ret == '':
> +                ret += 'No new issues introduced\n'
> +
> +            ret = header + ret
> +
> +            return ret
> +
> +        def __len__(self) -> int:
> +            ret = 0
> +
> +            for ln_list in self.stats.values():
> +                ret += len(ln_list)
> +
> +            return ret
> +
> +    def __init__(self, report_path: str) -> None:
> +        self.__entries = {}
> +        self.__path = report_path
> +        self.__last_line_order = 0
> +
> +    def parse(self) -> None:
> +        raise ReportError("Please create a specialised class from 'Report'.")
> +
> +    def get_report_path(self) -> str:
> +        return self.__path
> +
> +    def get_report_entries(self) -> dict:
> +        return self.__entries
> +
> +    def add_entry(self, entry_path: str, entry_line_number: int,
> +                  entry_text: list) -> None:
> +        entry = Report.ReportEntry(entry_path, entry_line_number, entry_text,
> +                                   self.__last_line_order)
> +        if entry_path in self.__entries.keys():
> +            self.__entries[entry_path].append(entry)
> +        else:
> +            self.__entries[entry_path] = [entry]
> +        self.__last_line_order += 1
> +
> +    def to_list(self) -> list:
> +        report_list = []
> +        for _, entries in self.__entries.items():
> +            for entry in entries:
> +                report_list.append(entry)
> +
> +        report_list.sort(key=lambda x: x.line_id)
> +        return report_list
> +
> +    def __str__(self) -> str:
> +        ret = ""
> +        for entry in self.to_list():
> +            ret += entry.file_path + ":" + str(entry.line_number) + ":" + entry.text
> +
> +        return ret
> +
> +    def __len__(self) -> int:
> +        return len(self.to_list())
> +
> +    def __sub__(self, report_b: 'Report') -> 'Report':
> +        if self.__class__ != report_b.__class__:
> +            raise ReportError("Diff of different type of report!")
> +
> +        filename, file_extension = os.path.splitext(self.__path)
> +        diff_report = self.__class__(filename + ".diff" + file_extension)
> +        # Put in the diff report only records of this report that are not
> +        # present in report_b.
> +        for file_path, entries in self.__entries.items():
> +            rep_b_entries = report_b.get_report_entries()
> +            if file_path in rep_b_entries.keys():
> +                # File path exists in report_b, so check which entries of that
> +                # file path don't exist in report_b and add them to the diff
> +                rep_b_entries_num = [
> +                    x.line_number for x in rep_b_entries[file_path]
> +                ]
> +                for entry in entries:
> +                    if entry.line_number not in rep_b_entries_num:
> +                        diff_report.add_entry(file_path, entry.line_number,
> +                                              entry.text)
> +            else:
> +                # File path doesn't exist in report_b, so add every entry
> +                # of that file path to the diff
> +                for entry in entries:
> +                    diff_report.add_entry(file_path, entry.line_number,
> +                                          entry.text)
> +
> +        return diff_report
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Wed May 17 01:34:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 01:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535761.833758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz63F-0005Gn-Gs; Wed, 17 May 2023 01:34:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535761.833758; Wed, 17 May 2023 01:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pz63F-0005Gf-BE; Wed, 17 May 2023 01:34:01 +0000
Received: by outflank-mailman (input) for mailman id 535761;
 Wed, 17 May 2023 01:34:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO9T=BG=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pz63E-0005GZ-Lt
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 01:34:00 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e40aee35-f452-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 03:33:57 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 72A1F60F13;
 Wed, 17 May 2023 01:33:56 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8211EC433D2;
 Wed, 17 May 2023 01:33:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e40aee35-f452-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684287235;
	bh=bzKtqBDw708Z1cndM5SJhkjtJJWG5wJtRW6r9ZtwuXw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nPDlc2YyYAXriJwU4p2HJKXJFIfyQ490D0ypmDAkeS/TZYWdrMdCLsCET0e3w6uvF
	 ndCkEz+65ehTr9ReVmACln4BUi5n16lX0onl1RizsrF1vQQa58QjDpatem39kLoFcs
	 +HU/XKGBOmesb/b0d2ZEDP2PFsWdFMby6agdw44vxEYwEj+eQ9nrSno1bxbO5Fcsk/
	 Zlswgs8m/72MEw7bxfgAZ323fS5WQo9YK+JNQMlBfEFCVRTjm0K8bhva1mf0KqqQHX
	 8ywgPWT058+qW7yiEAojF8lpZWYjs7jGj8iG8euzN54u616TqtlmcZyItXUa7jJiDA
	 bwBCR+e+RnV1g==
Date: Tue, 16 May 2023 18:33:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] xen/misra: diff-report.py: add report patching
 feature
In-Reply-To: <20230504142523.2989306-3-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305161827050.128889@ubuntu-linux-20-04-desktop>
References: <20230504142523.2989306-1-luca.fancellu@arm.com> <20230504142523.2989306-3-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 4 May 2023, Luca Fancellu wrote:
> Add a feature to the diff-report.py script that improves the comparison
> between two analysis reports, one from a baseline codebase and the
> other from the codebase with the changes applied.
> 
> Comparing reports from different codebases is an issue because entries
> in the baseline could have shifted in position due to the addition or
> deletion of unrelated lines, or could have disappeared because the line
> of interest was deleted, making the comparison between two revisions of
> the code harder.
> 
> Given a baseline report, a report of the codebase with the changes
> applied (called the "new report") and a git diff format file describing
> the changes made to the code since the baseline, this feature can work
> out which entries from the baseline report are deleted or shifted in
> position due to changes to unrelated lines, and can rewrite them as
> they will appear in the "new report".
> 
> Having the "patched baseline" and the "new report", it is now simple to
> diff them and print only the entries that are new.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

This is an amazing work!! Thanks Luca!

I am having issues trying the new patch feature. After applying this
patch I get:

sstabellini@ubuntu-linux-20-04-desktop:/local/repos/xen-upstream/xen$ ./scripts/diff-report.py
Traceback (most recent call last):
  File "./scripts/diff-report.py", line 5, in <module>
    from xen_analysis.diff_tool.debug import Debug
  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/debug.py", line 4, in <module>
    from .report import Report
  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/report.py", line 4, in <module>
    from .unified_format_parser import UnifiedFormatParser, ChangeSet
  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 56, in <module>
    class UnifiedFormatParser:
  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 57, in UnifiedFormatParser
    def __init__(self, args: str | list) -> None:
TypeError: unsupported operand type(s) for |: 'type' and 'type'

Also got a similar error elsewhere:

sstabellini@ubuntu-linux-20-04-desktop:/local/repos/xen-upstream/xen$ ./scripts/diff-report.py --patch ~/p/1 -b /tmp/1 -r /tmp/1
Traceback (most recent call last):
  File "./scripts/diff-report.py", line 127, in <module>
    main(sys.argv[1:])
  File "./scripts/diff-report.py", line 102, in main
    diffs = UnifiedFormatParser(diff_source)
  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 79, in __init__
    self.__parse()
  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 94, in __parse
    def parse_diff_header(line: str) -> ChangeSet | None:
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'

My Python is 2.7.18
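
For what it's worth, the `X | Y` annotation syntax (PEP 604) is only valid at runtime on Python 3.10 and later; on older interpreters the same signatures can be spelled with `typing.Union` and `typing.Optional`. A sketch of the equivalent spellings (illustrative stand-ins, not the patch itself):

```python
from typing import List, Optional, Union

# PEP 604's "str | list" needs Python >= 3.10 at runtime;
# typing.Union works on any Python 3 that supports annotations.
class UnifiedFormatParser:
    def __init__(self, args: Union[str, List[str]]) -> None:
        # Accept either a path to a diff file or the diff's lines
        if isinstance(args, str):
            self.source = "file"
        else:
            self.source = "lines"

def parse_diff_header(line: str) -> Optional[str]:
    # "ChangeSet | None" becomes Optional[ChangeSet] on older Pythons
    return line if line.startswith("diff --git") else None
```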


Am I understanding correctly that one should run the scan for the
baseline (saving the result somewhere), then apply the patch and run
the scan again? Finally, one should call diff-report.py passing -b
baseline-report -r new-report --patch the-patch-applied?
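
The line-shifting described in the commit message can be sketched as follows; this is only an illustration of the idea (an added line pushes later findings down, a removed line pulls them up, a finding on a removed line disappears), with hypothetical names rather than the patch's classes:

```python
# Illustrative sketch of how baseline findings shift when the diff
# touches unrelated lines in the same file.

def shift_findings(findings, changes):
    """findings: list of line numbers of findings in one file.
    changes: list of (line, kind) pairs, kind in {"add", "remove"}."""
    result = list(findings)
    for line, kind in changes:
        updated = []
        for n in result:
            if kind == "remove":
                if n == line:
                    continue          # the finding's line was deleted
                updated.append(n - 1 if n > line else n)
            else:  # "add"
                updated.append(n + 1 if n >= line else n)
        result = updated
    return result
```

After shifting, the patched baseline lines up with the new report, so a plain per-line comparison is enough.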



> ---
>  xen/scripts/diff-report.py                    |  55 ++++-
>  xen/scripts/xen_analysis/diff_tool/debug.py   |  19 ++
>  xen/scripts/xen_analysis/diff_tool/report.py  |  84 ++++++++
>  .../diff_tool/unified_format_parser.py        | 202 ++++++++++++++++++
>  4 files changed, 358 insertions(+), 2 deletions(-)
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
> 
> diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
> index 4913fb43a8f9..17f707f5d34e 100755
> --- a/xen/scripts/diff-report.py
> +++ b/xen/scripts/diff-report.py
> @@ -5,6 +5,10 @@ from argparse import ArgumentParser
>  from xen_analysis.diff_tool.debug import Debug
>  from xen_analysis.diff_tool.report import ReportError
>  from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
> +from xen_analysis.diff_tool.unified_format_parser import \
> +    (UnifiedFormatParser, UnifiedFormatParseError)
> +from xen_analysis.utils import invoke_command
> +from xen_analysis.settings import repo_dir
>  
>  
>  def log_info(text, end='\n'):
> @@ -32,9 +36,32 @@ def main(argv):
>                               "against the baseline.")
>      parser.add_argument("-v", "--verbose", action='store_true',
>                          help="Print more information during the run.")
> +    parser.add_argument("--patch", type=str,
> +                        help="The patch file containing the changes to the "
> +                             "code, from the baseline analysis result to the "
> +                             "'check report' analysis result.\n"
> +                             "Do not use with --baseline-rev/--report-rev")
> +    parser.add_argument("--baseline-rev", type=str,
> +                        help="Revision or SHA of the codebase analysed to "
> +                             "create the baseline report.\n"
> +                             "Use together with --report-rev")
> +    parser.add_argument("--report-rev", type=str,
> +                        help="Revision or SHA of the codebase analysed to "
> +                             "create the 'check report'.\n"
> +                             "Use together with --baseline-rev")
>  
>      args = parser.parse_args()
>  
> +    if args.patch and (args.baseline_rev or args.report_rev):
> +        print("ERROR: '--patch' argument can't be used with '--baseline-rev'"
> +              " or '--report-rev'.")
> +        sys.exit(1)
> +
> +    if bool(args.baseline_rev) != bool(args.report_rev):
> +        print("ERROR: '--baseline-rev' must be used together with "
> +              "'--report-rev'.")
> +        sys.exit(1)
> +
>      if args.out == "stdout":
>          file_out = sys.stdout
>      else:
> @@ -59,11 +86,35 @@ def main(argv):
>          new_rep.parse()
>          debug.debug_print_parsed_report(new_rep)
>          log_info(" [OK]")
> -    except ReportError as e:
> +        diff_source = None
> +        if args.patch:
> +            diff_source = os.path.realpath(args.patch)
> +        elif args.baseline_rev:
> +            git_diff = invoke_command(
> +                "git diff --git-dir={} -C -C {}..{}".format(repo_dir,
> +                                                            args.baseline_rev,
> +                                                            args.report_rev),
> +                True, "Error occured invoking:\n{}\n\n{}"
> +            )
> +            diff_source = git_diff.splitlines(keepends=True)
> +        if diff_source:
> +            log_info("Parsing changes...", "")
> +            diffs = UnifiedFormatParser(diff_source)
> +            debug.debug_print_parsed_diff(diffs)
> +            log_info(" [OK]")
> +    except (ReportError, UnifiedFormatParseError) as e:
>          print("ERROR: {}".format(e))
>          sys.exit(1)
>  
> -    output = new_rep - baseline
> +    if args.patch or args.baseline_rev:
> +        log_info("Patching baseline...", "")
> +        baseline_patched = baseline.patch(diffs)
> +        debug.debug_print_patched_report(baseline_patched)
> +        log_info(" [OK]")
> +        output = new_rep - baseline_patched
> +    else:
> +        output = new_rep - baseline
> +
>      print(output, end="", file=file_out)
>  
>      if len(output) > 0:
> diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
> index d46df3300d21..c314edbc8e38 100644
> --- a/xen/scripts/xen_analysis/diff_tool/debug.py
> +++ b/xen/scripts/xen_analysis/diff_tool/debug.py
> @@ -2,6 +2,7 @@
>  
>  import os
>  from .report import Report
> +from .unified_format_parser import UnifiedFormatParser
>  
>  
>  class Debug:
> @@ -34,3 +35,21 @@ class Debug:
>          if not self.args.debug:
>              return
>          self.__debug_print_report(report, ".parsed")
> +
> +    def debug_print_patched_report(self, report: Report) -> None:
> +        if not self.args.debug:
> +            return
> +        # The patched report already contains .patched in its name
> +        self.__debug_print_report(report, "")
> +
> +    def debug_print_parsed_diff(self, diff: UnifiedFormatParser) -> None:
> +        if not self.args.debug:
> +            return
> +        diff_filename = diff.get_diff_path()
> +        out_pathname = self.__get_debug_out_filename(diff_filename, ".parsed")
> +        try:
> +            with open(out_pathname, "wt") as outfile:
> +                for change_obj in diff.get_change_sets().values():
> +                    print(change_obj, end="", file=outfile)
> +        except OSError as e:
> +            print("ERROR: Issue opening file {}: {}".format(out_pathname, e))
> diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
> index d958d1816eb4..312d59682329 100644
> --- a/xen/scripts/xen_analysis/diff_tool/report.py
> +++ b/xen/scripts/xen_analysis/diff_tool/report.py
> @@ -1,6 +1,7 @@
>  #!/usr/bin/env python3
>  
>  import os
> +from .unified_format_parser import UnifiedFormatParser, ChangeSet
>  
>  
>  class ReportError(Exception):
> @@ -65,6 +66,89 @@ class Report:
>              self.__entries[entry_path] = [entry]
>          self.__last_line_order += 1
>  
> +    def remove_entries(self, entry_file_path: str) -> None:
> +        del self.__entries[entry_file_path]
> +
> +    def remove_entry(self, entry_path: str, line_id: int) -> None:
> +        if entry_path in self.__entries.keys():
> +            # Entries are ReportEntry objects, so match on their line_id
> +            self.__entries[entry_path] = [
> +                e for e in self.__entries[entry_path] if e.line_id != line_id
> +            ]
> +            if len(self.__entries[entry_path]) == 0:
> +                del self.__entries[entry_path]
> +
> +    def patch(self, diff_obj: UnifiedFormatParser) -> 'Report':
> +        filename, file_extension = os.path.splitext(self.__path)
> +        patched_report = self.__class__(filename + ".patched" + file_extension)
> +        remove_files = []
> +        rename_files = []
> +        remove_entry = []
> +        ChangeMode = ChangeSet.ChangeMode
> +
> +        # Copy entries from this report to the report we are going to patch
> +        for entries in self.__entries.values():
> +            for entry in entries:
> +                patched_report.add_entry(entry.file_path, entry.line_number,
> +                                         entry.text)
> +
> +        # Patch the output report
> +        patched_rep_entries = patched_report.get_report_entries()
> +        for file_diff, change_obj in diff_obj.get_change_sets().items():
> +            if change_obj.is_change_mode(ChangeMode.COPY):
> +                # Copy the original entries pointed to by change_obj.orig_file
> +                # into a new key in the patched report named
> +                # change_obj.dst_file (here the content of the file_diff
> +                # variable), because this change_obj is pushed into the
> +                # change_sets with the change_obj.dst_file key
> +                if change_obj.orig_file in self.__entries.keys():
> +                    for entry in self.__entries[change_obj.orig_file]:
> +                        patched_report.add_entry(file_diff,
> +                                                 entry.line_number,
> +                                                 entry.text)
> +
> +            if file_diff in patched_rep_entries.keys():
> +                if change_obj.is_change_mode(ChangeMode.DELETE):
> +                    # No need to check changes here, just remember to delete
> +                    # the file from the report
> +                    remove_files.append(file_diff)
> +                    continue
> +                elif change_obj.is_change_mode(ChangeMode.RENAME):
> +                    # Remember to rename the file entry on this report
> +                    rename_files.append(change_obj)
> +
> +                for line_num, change_type in change_obj.get_change_set():
> +                    len_rep = len(patched_rep_entries[file_diff])
> +                    for i in range(len_rep):
> +                        rep_item = patched_rep_entries[file_diff][i]
> +                        if change_type == ChangeSet.ChangeType.REMOVE:
> +                            if rep_item.line_number == line_num:
> +                            # This line is removed by these changes,
> +                                # append to the list of entries to be removed
> +                                remove_entry.append(rep_item)
> +                            elif rep_item.line_number > line_num:
> +                                rep_item.line_number -= 1
> +                        elif change_type == ChangeSet.ChangeType.ADD:
> +                            if rep_item.line_number >= line_num:
> +                                rep_item.line_number += 1
> +                    # Remove deleted entries from the list
> +                    if len(remove_entry) > 0:
> +                        for entry in remove_entry:
> +                            patched_report.remove_entry(entry.file_path,
> +                                                        entry.line_id)
> +                        remove_entry.clear()
> +
> +        if len(remove_files) > 0:
> +            for file_name in remove_files:
> +                patched_report.remove_entries(file_name)
> +
> +        if len(rename_files) > 0:
> +            for change_obj in rename_files:
> +                patched_rep_entries[change_obj.dst_file] = \
> +                    patched_rep_entries.pop(change_obj.orig_file)
> +
> +        return patched_report
> +
>      def to_list(self) -> list:
>          report_list = []
>          for _, entries in self.__entries.items():
> diff --git a/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py b/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
> new file mode 100644
> index 000000000000..e34cc8ac063f
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
> @@ -0,0 +1,202 @@
> +#!/usr/bin/env python3
> +
> +import re
> +from enum import Enum
> +from typing import Tuple
> +
> +
> +class UnifiedFormatParseError(Exception):
> +    pass
> +
> +
> +class ParserState(Enum):
> +    FIND_DIFF_HEADER = 0
> +    REGISTER_CHANGES = 1
> +    FIND_HUNK_OR_DIFF_HEADER = 2
> +
> +
> +class ChangeSet:
> +    class ChangeType(Enum):
> +        REMOVE = 0
> +        ADD = 1
> +
> +    class ChangeMode(Enum):
> +        NONE = 0
> +        CHANGE = 1
> +        RENAME = 2
> +        DELETE = 3
> +        COPY = 4
> +
> +    def __init__(self, a_file: str, b_file: str) -> None:
> +        self.orig_file = a_file
> +        self.dst_file = b_file
> +        self.change_mode = ChangeSet.ChangeMode.NONE
> +        self.__changes = []
> +
> +    def __str__(self) -> str:
> +        str_out = "{}: {} -> {}:\n{}\n".format(
> +            str(self.change_mode), self.orig_file, self.dst_file,
> +            str(self.__changes)
> +        )
> +        return str_out
> +
> +    def set_change_mode(self, change_mode: ChangeMode) -> None:
> +        self.change_mode = change_mode
> +
> +    def is_change_mode(self, change_mode: ChangeMode) -> bool:
> +        return self.change_mode == change_mode
> +
> +    def add_change(self, line_number: int, change_type: ChangeType) -> None:
> +        self.__changes.append((line_number, change_type))
> +
> +    def get_change_set(self) -> list:
> +        return self.__changes
> +
> +
> +class UnifiedFormatParser:
> +    def __init__(self, args: str | list) -> None:
> +        if isinstance(args, str):
> +            self.__diff_file = args
> +            try:
> +                with open(self.__diff_file, "rt") as infile:
> +                    self.__diff_lines = infile.readlines()
> +            except OSError as e:
> +                raise UnifiedFormatParseError(
> +                    "Issue with reading file {}: {}"
> +                    .format(self.__diff_file, e)
> +                )
> +        elif isinstance(args, list):
> +            self.__diff_file = "git-diff-local.txt"
> +            self.__diff_lines = args
> +        else:
> +            raise UnifiedFormatParseError(
> +                "UnifiedFormatParser constructor called with wrong arguments")
> +
> +        self.__git_diff_header = re.compile(r'^diff --git a/(.*) b/(.*)$')
> +        self.__git_hunk_header = \
> +            re.compile(r'^@@ -\d+,(\d+) \+(\d+),(\d+) @@.*$')
> +        self.__diff_set = {}
> +        self.__parse()
> +
> +    def get_diff_path(self) -> str:
> +        return self.__diff_file
> +
> +    def add_change_set(self, change_set: ChangeSet) -> None:
> +        if not change_set.is_change_mode(ChangeSet.ChangeMode.NONE):
> +            if change_set.is_change_mode(ChangeSet.ChangeMode.COPY):
> +                # Add copy change mode items using the dst_file key, because
> +                # there might be other changes for the orig_file in this diff
> +                self.__diff_set[change_set.dst_file] = change_set
> +            else:
> +                self.__diff_set[change_set.orig_file] = change_set
> +
> +    def __parse(self) -> None:
> +        def parse_diff_header(line: str) -> ChangeSet | None:
> +            change_item = None
> +            diff_head = self.__git_diff_header.match(line)
> +            if diff_head and diff_head.group(1) and diff_head.group(2):
> +                change_item = ChangeSet(diff_head.group(1), diff_head.group(2))
> +
> +            return change_item
> +
> +        def parse_hunk_header(line: str) -> Tuple[int, int, int]:
> +            file_linenum = -1
> +            hunk_a_linemax = -1
> +            hunk_b_linemax = -1
> +            hunk_head = self.__git_hunk_header.match(line)
> +            if hunk_head and hunk_head.group(1) and hunk_head.group(2) \
> +               and hunk_head.group(3):
> +                file_linenum = int(hunk_head.group(2))
> +                hunk_a_linemax = int(hunk_head.group(1))
> +                hunk_b_linemax = int(hunk_head.group(3))
> +
> +            return (file_linenum, hunk_a_linemax, hunk_b_linemax)
> +
> +        file_linenum = 0
> +        hunk_a_linemax = 0
> +        hunk_b_linemax = 0
> +        diff_elem = None
> +        parse_state = ParserState.FIND_DIFF_HEADER
> +        ChangeMode = ChangeSet.ChangeMode
> +        ChangeType = ChangeSet.ChangeType
> +
> +        for line in self.__diff_lines:
> +            if parse_state == ParserState.FIND_DIFF_HEADER:
> +                diff_elem = parse_diff_header(line)
> +                if diff_elem:
> +                    # Found the diff header, go to the next stage
> +                    parse_state = ParserState.FIND_HUNK_OR_DIFF_HEADER
> +            elif parse_state == ParserState.FIND_HUNK_OR_DIFF_HEADER:
> +                # Only these change modes will be registered here:
> +                # deleted file mode <mode>
> +                # rename from <path>
> +                # rename to <path>
> +                # copy from <path>
> +                # copy to <path>
> +                #
> +                # These will be ignored:
> +                # old mode <mode>
> +                # new mode <mode>
> +                # new file mode <mode>
> +                #
> +                # This info will also be ignored:
> +                # similarity index <number>
> +                # dissimilarity index <number>
> +                # index <hash>..<hash> <mode>
> +                if line.startswith("deleted file"):
> +                    # If the file is deleted, register it but don't walk the
> +                    # changes, which would only be a set of removed lines
> +                    diff_elem.set_change_mode(ChangeMode.DELETE)
> +                    parse_state = ParserState.FIND_DIFF_HEADER
> +                elif line.startswith("new file"):
> +                    # If the file is new, skip it, as it doesn't give any
> +                    # useful information for the report translation
> +                    parse_state = ParserState.FIND_DIFF_HEADER
> +                elif line.startswith("rename to"):
> +                    # A rename can be a pure rename or a rename plus a set
> +                    # of changes, so keep looking for the hunk header
> +                    # anyway
> +                    diff_elem.set_change_mode(ChangeMode.RENAME)
> +                elif line.startswith("copy to"):
> +                    # This is a copy operation, mark it
> +                    diff_elem.set_change_mode(ChangeMode.COPY)
> +                else:
> +                    # Look for the hunk header
> +                    (file_linenum, hunk_a_linemax, hunk_b_linemax) = \
> +                        parse_hunk_header(line)
> +                    if file_linenum >= 0:
> +                        if diff_elem.is_change_mode(ChangeMode.NONE):
> +                            # The file has only changes
> +                            diff_elem.set_change_mode(ChangeMode.CHANGE)
> +                        parse_state = ParserState.REGISTER_CHANGES
> +                    else:
> +                        # ... or there could be a diff header
> +                        new_diff_elem = parse_diff_header(line)
> +                        if new_diff_elem:
> +                            # Found a diff header, register the last change
> +                            # item
> +                            self.add_change_set(diff_elem)
> +                            diff_elem = new_diff_elem
> +            elif parse_state == ParserState.REGISTER_CHANGES:
> +                if (hunk_b_linemax > 0) and line.startswith("+"):
> +                    diff_elem.add_change(file_linenum, ChangeType.ADD)
> +                    hunk_b_linemax -= 1
> +                elif (hunk_a_linemax > 0) and line.startswith("-"):
> +                    diff_elem.add_change(file_linenum, ChangeType.REMOVE)
> +                    hunk_a_linemax -= 1
> +                    file_linenum -= 1
> +                elif ((hunk_a_linemax + hunk_b_linemax) > 0) and \
> +                        line.startswith(" "):
> +                    hunk_a_linemax -= 1 if (hunk_a_linemax > 0) else 0
> +                    hunk_b_linemax -= 1 if (hunk_b_linemax > 0) else 0
> +
> +                if (hunk_a_linemax + hunk_b_linemax) <= 0:
> +                    parse_state = ParserState.FIND_HUNK_OR_DIFF_HEADER
> +
> +                file_linenum += 1
> +
> +        if diff_elem is not None:
> +            self.add_change_set(diff_elem)
> +
> +    def get_change_sets(self) -> dict:
> +        return self.__diff_set
> -- 
> 2.34.1
> 
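For reference (not part of the patch above), the hunk-header pattern the parser relies on can be exercised standalone. A header such as `@@ -12,3 +14,4 @@` says the hunk spans 3 lines of the old file and 4 lines of the new file starting at new-file line 14; the sketch below uses the same regex as the parser's `__git_hunk_header` to pull those values out:

```python
import re

# Same pattern as the parser's __git_hunk_header: captures the old-file
# line count, the new-file start line, and the new-file line count.
hunk_re = re.compile(r'^@@ -\d+,(\d+) \+(\d+),(\d+) @@.*$')

m = hunk_re.match("@@ -12,3 +14,4 @@ def foo():")
assert m is not None
old_count, new_start, new_count = (int(g) for g in m.groups())
print(old_count, new_start, new_count)  # 3 14 4
```

Note the pattern requires explicit `,count` fields on both sides; git can omit a count of 1 (e.g. `@@ -5 +5 @@`), which this regex would not match.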


From xen-devel-bounces@lists.xenproject.org Wed May 17 05:57:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 05:57:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535771.833768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzAAN-0001EG-Hs; Wed, 17 May 2023 05:57:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535771.833768; Wed, 17 May 2023 05:57:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzAAN-0001E7-Cs; Wed, 17 May 2023 05:57:39 +0000
Received: by outflank-mailman (input) for mailman id 535771;
 Wed, 17 May 2023 05:57:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ox5N=BG=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pzAAM-0001Dr-1f
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 05:57:38 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.220]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b7899ce0-f477-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 07:57:34 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4H5vPZnt
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 17 May 2023 07:57:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7899ce0-f477-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1684303045; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=qlywIqirYfVvCXm0Ivti94KV4tzDKWLCMlCwPMFJdun/2rS+nCng0MKAIHL3B+FcY0
    8o+Vj83a3TyrZVgrnPWdzssR5RP7EeUqWh5dlSHwmw6WOQIxwUWBWWG7Bq5p0owFWKZu
    k0PoO1Q95CIkQX0SFeCqMColTRsXGjH4s0Jz1IPEXBi1+1//TJ12AvBz7gGYrS/40nk0
    MN+qnBgWCnGDF2M5ToixRx02hNHviPRfrrtOR4w8WxbgRDCgQ+eSwwGnKYLsfivmCjEY
    RvTDaLVYcMq3RvVBjYc+hPTutksK1I2lwWF2MExp4YuZ/bzt8Y1ywyPwqjwq70ml3sx4
    j13Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1684303045;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=Si4oKzEcLlw3G7OffbBr/cajuDCVeN0U6l5SUR78nAw=;
    b=AzW7el+eUWgRP9yjV4N2FbrHj4pQ6TSsLgUOhC/fbDto32rXB8ohUTapHshdcnWA8C
    af7MJXrmxDIcLT4tZm32srvRfkqgze6tyn9nEkPmDzosf/GnLQ5nDhQKLtAzimX6o6CH
    VyEcWNhz3VZ0mAxGUQNa7iB/QS7fgSr9m3Al3+PVdiE3dGtGxkBslt/SdhUsY+9jIlwV
    F30/Z8dt2qRXvgsaby/iiCAiQ+MBxePc8GIm/dpfYxGPgJqeVTkdNkCCKcnkRmIyoWqW
    jMkom84ypZOhOvA2oByFd4h2h3wp6xgwjBr3E5p0JidQSGoBZUBiBvZjODIcSsbpjbyr
    s1nQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1684303045;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=Si4oKzEcLlw3G7OffbBr/cajuDCVeN0U6l5SUR78nAw=;
    b=cK8PWqXLm6G7PUJClO6dj/gvsZ2LUCvNXolrEwEKuPMoJGbh2r/YfHmxmbAmJd0Aku
    /i2AqE2u4mr8AdPJ2h/yXSm8MQFn3U+hbG8tX8AynhHQmeuGHTjPNI5GmXyGT9t9PCu9
    3DRCwloIDr4wKVHlEJqi32BeTwGbhHZwzDIvFJCI0mOsepErbZ+MhwQi7epxJYkzlIVA
    ZXdysazhahNC7dZ87zQwz2S0QOx6wmyGbexaA+3ohBI2YvAGiDqZe4Hwh3dHJBc0D5bu
    El4Z2yxesbkSUPhgG6k1Tmc1kM+lcdw+iuy+Uy1IadfLkjXYLnnWNmr/Td6ZbcUgU/hp
    kXUw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1684303045;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=Si4oKzEcLlw3G7OffbBr/cajuDCVeN0U6l5SUR78nAw=;
    b=8J8jANwHgUY1QrchMr3OqwFXj4RT8+LbEUz+hEHpS7Nzy6WXiGu8saAxCz32A/u5jt
    gfpAF9dKSVQYNV23apBg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuznLRsvx4Sq0NeWsWjIFVg=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v1] automation: allow to rerun build script
Date: Wed, 17 May 2023 05:57:22 +0000
Message-Id: <20230517055722.4057-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

Calling build twice in the same environment will fail because the
directory 'binaries' already exists from the first run. Use mkdir -p to
tolerate an existing directory and move on to the actual build.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 automation/scripts/build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/automation/scripts/build b/automation/scripts/build
index 197d085f3e..9085cba352 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -36,7 +36,7 @@ fi
 cp xen/.config xen-config
 
 # Directory for the artefacts to be dumped into
-mkdir binaries
+mkdir -p binaries
 
 if [[ "${CPPCHECK}" == "y" ]] && [[ "${HYPERVISOR_ONLY}" == "y" ]]; then
     # Cppcheck analysis invokes Xen-only build.
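The failure mode is easy to reproduce outside the CI environment; this sketch (not part of the patch) shows plain mkdir failing on a second invocation while mkdir -p succeeds:

```shell
# Simulate two runs of the build script in the same work directory.
tmp=$(mktemp -d)
cd "$tmp"
mkdir binaries                      # first run: creates the directory
if ! mkdir binaries 2>/dev/null; then
    echo "second run: plain mkdir fails"
fi
mkdir -p binaries                   # -p tolerates the existing directory
echo "second run with -p: ok"
```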


From xen-devel-bounces@lists.xenproject.org Wed May 17 06:00:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 06:00:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535776.833778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzADP-0002kU-Tn; Wed, 17 May 2023 06:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535776.833778; Wed, 17 May 2023 06:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzADP-0002kN-Qi; Wed, 17 May 2023 06:00:47 +0000
Received: by outflank-mailman (input) for mailman id 535776;
 Wed, 17 May 2023 06:00:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ox5N=BG=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1pzADO-0002kH-Qk
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 06:00:46 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.20]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 295a637f-f478-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 08:00:44 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4H60iZoh
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Wed, 17 May 2023 08:00:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 295a637f-f478-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1684303244; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=V06lVMquFWcqyXSZhOHKtdEmcCCCG+GHqcGes+MX+X9cvdL2mg8zhGQkCIuOJ1aIy/
    G/mzZR84IRLGKXqLIrWGnnLtGrBt6vaYaU/ikLr+p4EnUvpxPcv1U3PdIbdDDpEUWkhd
    ZwnIiqxurPx1sWoqSNLosPSqjDBJrymai0eFCU9FWpLPeGE/ARA4j5KObk6uhvutQ3v0
    GyHFMhOAvRrZj0fg4rJ1xBYWw63TKfDumcoyYBZEXBx/x8IiZezBLVDe4LLV2cgmNWI6
    u3CfIoYupdBUqELH6QKrLciM2dHVjLUcj5AZZpQltZ5jB2Yv0nFDglCRorbteJf8PfM+
    dXxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1684303244;
    s=strato-dkim-0002; d=strato.com;
    h=Message-ID:Subject:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=MhFDJKZnJdarpDmKEzJHJdw14S3sG04JUxt6GAadBKY=;
    b=LrFYoXNhjwOOjNWJst7MypRsmpYf8ukQb7w1NZ02a2W0vGZ4ND2ENeEYCvo/Sk3JUN
    QI/7T4nLxurDK+tEEwk9VWFnf6wq5qCfkqN8Ze2i+q1Tz0OfIyb4pC0jngNfIqY3x8ly
    11yPs2dVvFmSjz9zwcdpesfdizTY/8ENRcBhq/Q5JjCbCdzLk9LfPzZJiXwV0oM0ZygM
    joVN9aZrrhS72h0D/oP5D8+4kZ65wyC7gH7AK8fAbCjlVFeP28hpwXmF23hil4/sBLsD
    iJu1erlJCvuuPCFJbMTNLTxxP/CFaAeaU9ISYYbBFuysZ9m4LtR73zHBFrS/DaPahdtC
    Kjcg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1684303244;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-ID:Subject:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=MhFDJKZnJdarpDmKEzJHJdw14S3sG04JUxt6GAadBKY=;
    b=Rp4Xy09VQnlWT8lQN8EOQJSqbEb18IYd8KRgGheboNPUBd26lyRmZGwNARjN/slz9k
    UDM8tSe7oV37Feo2iy74PWpHKldB3Bk4UAdfboo/x2DLtNTaNjCtK+KksAWsvmzNxPS2
    0r2uit/u3linKUXH3i1+W+JptG6Liq5CvvCPI83CV3ftKkja1Uoqat7JwS0PkwfMDtQM
    gw2hDtpYfQ6Zym1p/YTp+7sVdZPiI74OamNiqrRlWOSLkalgtlsSJJpgSX+/91qg3SiQ
    ZFbUqANVDRccyDDHnTQN7WGxqnOKzIQL4CprqO41iykvIXgGrtUpEHzTUJadko90+tfG
    2d5g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1684303244;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-ID:Subject:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=MhFDJKZnJdarpDmKEzJHJdw14S3sG04JUxt6GAadBKY=;
    b=p26KJiwi+30BISZqx/YNQNXGCKIgGlIx9S4qrXS7+BfJL3UTlcfI8PH8hnRvWvXGva
    EnSmtFZ5JPdVqSQeI9Ag==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QED/SSGq+wjGiUC4kV1cX/0jCNVp4ivfSTHw=="
Date: Wed, 17 May 2023 06:00:24 +0000
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: fix qemu to build with gcc13
Message-ID: <20230517060024.6c5d730a@sender>
X-Mailer: Claws Mail 2023.04.19 (GTK 3.24.34; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/zhdk1pmSVJ9su3FAnEtA_Mw";
 protocol="application/pgp-signature"; micalg=pgp-sha1
Content-Transfer-Encoding: 7bit

--Sig_/zhdk1pmSVJ9su3FAnEtA_Mw
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Hello,

please backport d66ba6dc1cce914673bd8a89fca30a7715ea70d1 to
qemu-xen.git to allow building it with gcc13.

Thanks,
Olaf

--Sig_/zhdk1pmSVJ9su3FAnEtA_Mw
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iF0EARECAB0WIQSkRyP6Rn//f03pRUBdQqD6ppg2fgUCZGRteAAKCRBdQqD6ppg2
fod7AJ94xNQhydRMQazuKNN4F/jGW/FLMACfT/FPSPpkfpzDGoD/7XOxDdWHqUY=
=cvRE
-----END PGP SIGNATURE-----

--Sig_/zhdk1pmSVJ9su3FAnEtA_Mw--


From xen-devel-bounces@lists.xenproject.org Wed May 17 06:34:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 06:34:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535788.833815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzAkC-0007Qd-L8; Wed, 17 May 2023 06:34:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535788.833815; Wed, 17 May 2023 06:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzAkC-0007QW-Hk; Wed, 17 May 2023 06:34:40 +0000
Received: by outflank-mailman (input) for mailman id 535788;
 Wed, 17 May 2023 06:34:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzAkB-0007QM-KZ; Wed, 17 May 2023 06:34:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzAkB-00073m-IE; Wed, 17 May 2023 06:34:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzAkA-0002BJ-F1; Wed, 17 May 2023 06:34:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzAkA-0000dR-CY; Wed, 17 May 2023 06:34:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WfYpImUtgovoWCQt0cIHINOlVlkKSsHUWTU5XB71h2E=; b=eGoISvdDQ4kpw391L/hm8gfWbb
	+9VNRxB998+hNCEgjJEPEd54bEUxFjkOvODxu01u4ZBH7p74lshbYLNUnzgLG45GW3Ab+Tk+0j5H8
	i4fNxX1QmKJOs/hyB+Snx+KKMMVQW62htcI92eDccfHOThnLyo83sdNyWLAF2N8Xa0nY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180682-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180682: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8f9c8274a4e3e860bd777269cb2c91971e9fa69e
X-Osstest-Versions-That:
    xen=b8be19ce432a2edd69c0673768a0beeec77f795a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 May 2023 06:34:38 +0000

flight 180682 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180682/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180671
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180671
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180671
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180671
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180671
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180671
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180671
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180671
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180671
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180671
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180671
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180671
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8f9c8274a4e3e860bd777269cb2c91971e9fa69e
baseline version:
 xen                  b8be19ce432a2edd69c0673768a0beeec77f795a

Last test of basis   180671  2023-05-15 13:38:21 Z    1 days
Failing since        180677  2023-05-15 23:07:02 Z    1 days    2 attempts
Testing same since   180682  2023-05-16 15:10:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b8be19ce43..8f9c8274a4  8f9c8274a4e3e860bd777269cb2c91971e9fa69e -> master


From xen-devel-bounces@lists.xenproject.org Wed May 17 06:39:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 06:39:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535796.833826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzApG-0008Hx-Eu; Wed, 17 May 2023 06:39:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535796.833826; Wed, 17 May 2023 06:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzApG-0008Hq-Af; Wed, 17 May 2023 06:39:54 +0000
Received: by outflank-mailman (input) for mailman id 535796;
 Wed, 17 May 2023 06:39:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ws97=BG=tls.msk.ru=mjt@srs-se1.protection.inumbo.net>)
 id 1pzApF-0008Gy-11
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 06:39:53 +0000
Received: from isrv.corpit.ru (isrv.corpit.ru [86.62.121.231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9e64ce83-f47d-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 08:39:49 +0200 (CEST)
Received: from tsrv.corpit.ru (tsrv.tls.msk.ru [192.168.177.2])
 by isrv.corpit.ru (Postfix) with ESMTP id A4D0A66D9;
 Wed, 17 May 2023 09:39:47 +0300 (MSK)
Received: from [192.168.177.130] (mjt.wg.tls.msk.ru [192.168.177.130])
 by tsrv.corpit.ru (Postfix) with ESMTP id 3B8745D88;
 Wed, 17 May 2023 09:39:47 +0300 (MSK)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e64ce83-f47d-11ed-8611-37d641c3527e
Message-ID: <986d9eca-5fab-cacb-05c7-b85e4d58665b@msgid.tls.msk.ru>
Date: Wed, 17 May 2023 09:39:47 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] xen/pt: fix igd passthrough for pc machine with xen
 accelerator
Content-Language: en-US
To: Chuck Zmudzinski <brchuckz@aol.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Igor Mammedov <imammedo@redhat.com>, xen-devel@lists.xenproject.org,
 qemu-stable@nongnu.org
References: <a304213d26506b066021f803c39b87f6a262ed86.1675820085.git.brchuckz.ref@aol.com>
 <a304213d26506b066021f803c39b87f6a262ed86.1675820085.git.brchuckz@aol.com>
From: Michael Tokarev <mjt@tls.msk.ru>
In-Reply-To: <a304213d26506b066021f803c39b87f6a262ed86.1675820085.git.brchuckz@aol.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

08.02.2023 05:03, Chuck Zmudzinski wrote:
> Commit 998250e97661 ("xen, gfx passthrough: register host bridge specific
> to passthrough") uses the igd-passthrough-i440FX pci host device with
> the xenfv machine type and igd-passthru=on, but using it for the pc
> machine type, xen accelerator, and igd-passthru=on was omitted from that
> commit.
> 
> The igd-passthrough-i440FX pci host device is also needed for guests
> configured with the pc machine type, the xen accelerator, and
> igd-passthru=on. Specifically, tests show that not using the igd-specific
> pci host device with the Intel igd passed through to the guest results
> in slower startup performance and reduced resolution of the display
> during startup. This patch fixes this issue.
> 
> To simplify the logic that is needed to support both the --enable-xen
> and the --disable-xen configure options, introduce the boolean symbol
> pc_xen_igd_gfx_pt_enabled() whose value is set appropriately in the
> sysemu/xen.h header file as the test to determine whether or not
> to use the igd-passthrough-i440FX pci host device instead of the
> normal i440FX pci host device.
> 
> Fixes: 998250e97661 ("xen, gfx passthrough: register host bridge specific to passthrough")
> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>

Has this change been forgotten?  Is it not needed anymore?

Thanks,

/mjt


From xen-devel-bounces@lists.xenproject.org Wed May 17 07:29:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 07:29:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535806.833836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzBaq-000762-Pc; Wed, 17 May 2023 07:29:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535806.833836; Wed, 17 May 2023 07:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzBaq-00075v-MD; Wed, 17 May 2023 07:29:04 +0000
Received: by outflank-mailman (input) for mailman id 535806;
 Wed, 17 May 2023 07:29:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzBap-00075l-LB; Wed, 17 May 2023 07:29:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzBap-0008IV-IT; Wed, 17 May 2023 07:29:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzBap-0003XU-0O; Wed, 17 May 2023 07:29:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzBao-0001bF-W9; Wed, 17 May 2023 07:29:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=73oPkj6Aj0ddqlHA80+9tPrruux2OpLREHJ3u/wE1qk=; b=qXDrshjnsLcp2EP6E6YfGqcbAP
	RkKr0UeBo7y6tv4Cb76qhUt+5e4GFRHl+1VWI8ixCo0yRtGcAiUZCgr9EBciYip0lczDNgpDVP+QG
	APxGjr/jUxuZvj8fvCowSG2weB+3l9TQ1uundvawgNsWV74VJ+du6gF1UHAKr4V2UY6k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180683-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 180683: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.17-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=b773c48e368d9cf1ea29b259fe4ae434b8bb42da
X-Osstest-Versions-That:
    xen=0880df6f5f905bffc86dd181a8af64f16cc62110
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 May 2023 07:29:02 +0000

flight 180683 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180683/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180446
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180446
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180446
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180446
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180446
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180446
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180446
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180446
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180446
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180446
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180446
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180446
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  b773c48e368d9cf1ea29b259fe4ae434b8bb42da
baseline version:
 xen                  0880df6f5f905bffc86dd181a8af64f16cc62110

Last test of basis   180446  2023-04-27 13:08:27 Z   19 days
Testing same since   180683  2023-05-16 15:38:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0880df6f5f..b773c48e36  b773c48e368d9cf1ea29b259fe4ae434b8bb42da -> stable-4.17


From xen-devel-bounces@lists.xenproject.org Wed May 17 08:41:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 08:41:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535824.833854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzCik-00022Z-8L; Wed, 17 May 2023 08:41:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535824.833854; Wed, 17 May 2023 08:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzCik-00022Q-1c; Wed, 17 May 2023 08:41:18 +0000
Received: by outflank-mailman (input) for mailman id 535824;
 Wed, 17 May 2023 08:41:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sUit=BG=citrix.com=prvs=494f6562e=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pzCii-00022E-2v
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 08:41:16 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 93357330-f48e-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 10:41:12 +0200 (CEST)
Received: from mail-dm6nam12lp2177.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 May 2023 04:41:05 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6064.namprd03.prod.outlook.com (2603:10b6:5:393::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Wed, 17 May
 2023 08:41:03 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6387.033; Wed, 17 May 2023
 08:41:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93357330-f48e-11ed-8611-37d641c3527e
X-IronPort-RemoteIP: 104.47.59.177
X-IronPort-MID: 108093322
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,281,1677560400"; 
   d="scan'208";a="108093322"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eAq9fnKH21Pz3T30qI0X/bMwHGnzIAvBXdpHc/TAM1E=;
 b=S+7shMNwzohWKHB5pyi5Pr0SMz5yjWAn9+weLcnJ/Smt+0wbDtZO2wegz3JUf6f26K9TlZR78+l2dbG5i/trxHV4wH8ksR6H40WCF9GYzpDh8gyb5BhwQCzN+HlgX9zpmkDFh/VjOCpoLzB/ByCqUirekeLiohArYC2c62moLjE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 17 May 2023 10:40:56 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Message-ID: <ZGSTGIMh6qvCLZSr@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
 <ZGH+5OKqnjTjUr/F@Air-de-Roger>
 <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>
 <ZGNLArlA0Yei4Fr0@Air-de-Roger>
 <alpine.DEB.2.22.394.2305161522480.128889@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.22.394.2305161522480.128889@ubuntu-linux-20-04-desktop>
X-ClientProxiedBy: LO6P265CA0020.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ff::11) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM4PR03MB6064:EE_
X-MS-Office365-Filtering-Correlation-Id: 891c1fe5-93d2-4d72-ce4b-08db56b2726f
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 891c1fe5-93d2-4d72-ce4b-08db56b2726f
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 08:41:03.1131
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OVmBdDN++gbX5CYQ9FN1ue9QuCYXDV5nZc2jbPkW3/IYFH6PZDapbDyy2BRkXZYLiFPdrkwkX8v2ZneZxLhGEA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6064

On Tue, May 16, 2023 at 04:34:09PM -0700, Stefano Stabellini wrote:
> On Tue, 16 May 2023, Roger Pau Monné wrote:
> > On Mon, May 15, 2023 at 05:11:25PM -0700, Stefano Stabellini wrote:
> > > On Mon, 15 May 2023, Roger Pau Monné wrote:
> > > > On Fri, May 12, 2023 at 06:17:20PM -0700, Stefano Stabellini wrote:
> > > > > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > > 
> > > > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> > > > > the tables in the guest. Instead, copy the tables to Dom0.
> > > > > 
> > > > > This is a workaround.
> > > > > 
> > > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > > ---
> > > > > As mentioned in the cover letter, this is a RFC workaround as I don't
> > > > > know the cause of the underlying problem. I do know that this patch
> > > > > solves what would be otherwise a hang at boot when Dom0 PVH attempts to
> > > > > parse ACPI tables.
> > > > 
> > > > I'm unsure how safe this is for native systems, as it's possible for
> > > > firmware to modify the data in the tables, so copying them would
> > > > break that functionality.
> > > > 
> > > > I think we need to get to the root cause that triggers this behavior
> > > > on QEMU.  Is it the table checksum that fails, or something else?  Is
> > > > there an error from Linux you could reference?
> > > 
> > > I agree with you, but so far I haven't managed to get to the root of
> > > the issue. Here is what I know. These are the logs of a successful
> > > boot using this patch:
> > > 
> > > [   10.437488] ACPI: Early table checksum verification disabled
> > > [   10.439345] ACPI: RSDP 0x000000004005F955 000024 (v02 BOCHS )
> > > [   10.441033] ACPI: RSDT 0x000000004005F979 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> > > [   10.444045] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> > > [   10.445984] ACPI: FACP 0x000000004005FA65 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
> > > [   10.447170] ACPI BIOS Warning (bug): Incorrect checksum in table [FACP] - 0x67, should be 0x30 (20220331/tbprint-174)
> > > [   10.449522] ACPI: DSDT 0x000000004005FB19 00145D (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
> > > [   10.451258] ACPI: FACS 0x000000004005FAD9 000040
> > > [   10.452245] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > > [   10.452389] ACPI: Reserving FACP table memory at [mem 0x4005fa65-0x4005fad8]
> > > [   10.452497] ACPI: Reserving DSDT table memory at [mem 0x4005fb19-0x40060f75]
> > > [   10.452602] ACPI: Reserving FACS table memory at [mem 0x4005fad9-0x4005fb18]
> > > 
> > > 
> > > And these are the logs of the same boot (unsuccessful) without this
> > > patch:
> > > 
> > > [   10.516015] ACPI: Early table checksum verification disabled
> > > [   10.517732] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> > > [   10.519535] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> > > [   10.522523] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> > > [   10.527453] ACPI: ���� 0x000000007FFE149D FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > > [   10.528362] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > > [   10.528491] ACPI: Reserving ���� table memory at [mem 0x7ffe149d-0x17ffe149b]
> > > 
> > > It is clearly a memory corruption around FACS, but I couldn't find the
> > > reason for it. The mapping code looks correct. I hope you can suggest a
> > > way to narrow down the problem. If I could, I would suggest applying
> > > this patch just for the QEMU PVH tests, but we don't have the
> > > infrastructure for that in gitlab-ci, as there is a single Xen build for
> > > all tests.
> > 
> > Would be helpful to see the memory map provided to Linux, just in case
> > we messed up and there's some overlap.
> 
> Everything looks correct. Here are some more logs:
> 
> (XEN) Xen-e820 RAM map:
> (XEN)  [0000000000000000, 000000000009fbff] (usable)
> (XEN)  [000000000009fc00, 000000000009ffff] (reserved)
> (XEN)  [00000000000f0000, 00000000000fffff] (reserved)
> (XEN)  [0000000000100000, 000000007ffdffff] (usable)
> (XEN)  [000000007ffe0000, 000000007fffffff] (reserved)
> (XEN)  [00000000fffc0000, 00000000ffffffff] (reserved)
> (XEN)  [000000fd00000000, 000000ffffffffff] (reserved)
> (XEN) Microcode loading not available
> (XEN) New Xen image base address: 0x7f600000
> (XEN) System RAM: 2047MB (2096636kB)
> (XEN) ACPI: RSDP 000F58D0, 0014 (r0 BOCHS )
> (XEN) ACPI: RSDT 7FFE1B21, 0034 (r1 BOCHS  BXPC            1 BXPC        1)
> (XEN) ACPI: FACP 7FFE19CD, 0074 (r1 BOCHS  BXPC            1 BXPC        1)
> (XEN) ACPI: DSDT 7FFE0040, 198D (r1 BOCHS  BXPC            1 BXPC        1)
> (XEN) ACPI: FACS 7FFE0000, 0040
> (XEN) ACPI: APIC 7FFE1A41, 0080 (r1 BOCHS  BXPC            1 BXPC        1)
> (XEN) ACPI: HPET 7FFE1AC1, 0038 (r1 BOCHS  BXPC            1 BXPC        1)
> (XEN) ACPI: WAET 7FFE1AF9, 0028 (r1 BOCHS  BXPC            1 BXPC        1)
> [...]
> (XEN) Dom0 memory map:
> (XEN)  [0000000000000000, 000000000009efff] (usable)
> (XEN)  [000000000009fc00, 000000000009ffff] (reserved)
> (XEN)  [00000000000f0000, 00000000000fffff] (reserved)
> (XEN)  [0000000000100000, 0000000040060f1d] (usable)
> (XEN)  [0000000040060f1e, 0000000040060fa7] (ACPI data)
> (XEN)  [0000000040061000, 000000007ffdffff] (unusable)
> (XEN)  [000000007ffe0000, 000000007fffffff] (reserved)
> (XEN)  [00000000fffc0000, 00000000ffffffff] (reserved)
> (XEN)  [000000fd00000000, 000000ffffffffff] (reserved)
> [...]
> [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009efff] usable
> [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x00000000000fffff] reserved
> [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000040060f1d] usable
> [    0.000000] BIOS-e820: [mem 0x0000000040060f1e-0x0000000040060fa7] ACPI data
> [    0.000000] BIOS-e820: [mem 0x0000000040061000-0x000000007ffdffff] unusable
> [    0.000000] BIOS-e820: [mem 0x000000007ffe0000-0x000000007fffffff] reserved
> [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
> [    0.000000] BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
> [...]
> [   10.102427] ACPI: Early table checksum verification disabled
> [   10.104455] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> [   10.106250] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> [   10.109549] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> [   10.115173] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> [   10.116054] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> [   10.116182] ACPI: Reserving ���� table memory at [mem 0x7ffe19cd-0x17ffe19cb]
> 
> 
> 
> > It seems like one of the XSDT entries (the FADT one) is corrupt?
> > 
> > Could you maybe add some debug output to the Xen-crafted XSDT placement?
> 
> I added a printk just after:
> 
>   xsdt->table_offset_entry[j++] = tables[i].address;
> 
> And it printed only once:
> 
>   (XEN) DEBUG pvh_setup_acpi_xsdt 1000 name=FACP address=7ffe19cd
> 
> That actually matches the address read by Linux:
> 
>   [   10.175448] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> 
> So the address seems correct. It is the content of the FADT/FACP table
> that is corrupted.
> 
> I wrote the following function in Xen:
> 
> static void check(void)
> {
>     unsigned long addr = 0x7ffe19cd;
>     struct acpi_table_fadt *fadt;
>     fadt = acpi_os_map_memory(addr, sizeof(*fadt));
>     printk("DEBUG %s %d s=%s\n",__func__,__LINE__,fadt->header.signature);
>     acpi_os_unmap_memory(fadt, sizeof(*fadt));
> }
> 
> It prints the right table signature at the end of pvh_setup_acpi.
> I also added a call at the top of xenmem_add_to_physmap_one, and the
> signature is still correct. Then I added a call at the beginning of
> __update_vcpu_system_time. Here is the surprise: from Xen's point of view
> the table never gets corrupted. Here are the logs:
> 
> [...]
> (XEN) DEBUG fadt_check 1551 s=FACPt
> (XEN) DEBUG fadt_check 1551 s=FACPt
> (XEN) DEBUG fadt_check 1551 s=FACPt
> (XEN) DEBUG fadt_check 1551 s=FACPt
> (XEN) d0v0: upcall vector f3
> [    0.000000] Linux version 6.1.19 (root@124de7fbba7f) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_3
> [    0.000000] Command line: console=hvc0
> [...]
> [   10.371610] ACPI: Early table checksum verification disabled
> [   10.373633] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> [   10.375548] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> [   10.378732] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> [   10.384188] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> [   10.385374] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> [   10.385519] ACPI: Reserving ���� table memory at [mem 0x7ffe19cd-0x17ffe19cb]
> [...]
> (XEN) DEBUG fadt_check 1551 s=FACPt
> (XEN) DEBUG fadt_check 1551 s=FACPt
> (XEN) DEBUG fadt_check 1551 s=FACPt
> (XEN) DEBUG fadt_check 1551 s=FACPt
> 
> 
> So it looks like it is a problem with the mapping itself? Xen sees the
> data correctly, but Linux sees it corrupted?

It seems to me like the page is not correctly mapped, so reads return
all 1s (the same behavior as a hole).  IOW: it would seem that MMIO
areas are not correctly handled by nested NPT (I assume you are
running this on AMD).

Does it make a difference if you try to boot with dom0=pvh,shadow?

A couple of wild ideas.  Maybe the nested virt support that you are
using doesn't handle the UC bit in second-stage page table entries?
You could try removing this in p2m_type_to_flags() (see the
p2m_mmio_direct case).

Another wild idea is that the emulated NPT code doesn't like having
bits 63:52 of the PTE set to anything other than 0, and hence only
p2m_ram_rw (type 0) works (p2m_mmio_direct is 5).

> 
> 
> > > If it helps to repro on your side, you can just do the following,
> > > assuming your Xen repo is in /local/repos/xen:
> > > 
> > > 
> > > cd /local/repos/xen
> > > mkdir binaries
> > > cd binaries
> > > mkdir -p dist/install/
> > > 
> > > docker run -it -v `pwd`:`pwd` registry.gitlab.com/xen-project/xen/tests-artifacts/alpine:3.12
> > > cp /initrd* /local/repos/xen/binaries
> > > exit
> > > 
> > > docker run -it -v `pwd`:`pwd` registry.gitlab.com/xen-project/xen/tests-artifacts/kernel:6.1.19
> > > cp /bzImage /local/repos/xen/binaries
> > > exit
> > > 
> > > That's it. Now you have enough pre-built binaries to repro the issue.
> > > Next you can edit automation/scripts/qemu-alpine-x86_64.sh to add
> > > 
> > >   dom0=pvh dom0_mem=1G dom0-iommu=none
> > 
> > Do you get to boot with dom0-iommu=none?  Is there also some trick
> > here to identity-map dom0? I would expect things not to work
> > because addresses used for IO with QEMU-emulated devices won't be
> > correct.
> 
> That's easy: just don't use any devices to boot. Put everything needed
> in the dom0 ramdisk. That's the configuration provided in the gitlab-ci
> script I pointed you to in the previous email, which uses an Alpine Linux
> ramdisk.

Doh, yes :).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 17 08:42:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 08:42:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535830.833864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzCjd-0002fC-Lq; Wed, 17 May 2023 08:42:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535830.833864; Wed, 17 May 2023 08:42:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzCjd-0002f5-Io; Wed, 17 May 2023 08:42:13 +0000
Received: by outflank-mailman (input) for mailman id 535830;
 Wed, 17 May 2023 08:42:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sUit=BG=citrix.com=prvs=494f6562e=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pzCjc-0002SE-Pr
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 08:42:12 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b612b35f-f48e-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 10:42:11 +0200 (CEST)
Received: from mail-mw2nam10lp2103.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.103])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 17 May 2023 04:42:08 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6064.namprd03.prod.outlook.com (2603:10b6:5:393::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Wed, 17 May
 2023 08:42:06 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6387.033; Wed, 17 May 2023
 08:42:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b612b35f-f48e-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 17 May 2023 10:42:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 1/2] xen/x86/pvh: use preset XSDT header for XSDT
 generation
Message-ID: <ZGSTWJcLmWGPg9QM@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-1-sstabellini@kernel.org>
 <ZGHx9Mk3UGPdli1h@Air-de-Roger>
 <81ac6e51-6de9-5c4c-5cbd-7318cae93032@suse.com>
 <alpine.DEB.2.22.394.2305151712390.4125828@ubuntu-linux-20-04-desktop>
 <ZGM6X19p50oSqbNB@Air-de-Roger>
 <ZGM9vzwGm7Jv6i7M@Air-de-Roger>
 <alpine.DEB.2.22.394.2305161509040.128889@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.22.394.2305161509040.128889@ubuntu-linux-20-04-desktop>
X-ClientProxiedBy: LO4P123CA0524.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:2c5::9) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM4PR03MB6064:EE_
X-MS-Office365-Filtering-Correlation-Id: f4809a14-39c9-48d7-89ae-08db56b29810
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f4809a14-39c9-48d7-89ae-08db56b29810
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 08:42:06.0843
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Vt4p6W4Sw4iOcQ2aBddW+OK/i38trmA61zmM6d3DeXJinOysvdf/dvhFrvR+bIy3XOTzx6ebQbaIZOJkn9XJbg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6064

On Tue, May 16, 2023 at 03:11:49PM -0700, Stefano Stabellini wrote:
> On Tue, 16 May 2023, Roger Pau Monné wrote:
> > On Tue, May 16, 2023 at 10:10:07AM +0200, Roger Pau Monné wrote:
> > > On Mon, May 15, 2023 at 05:16:27PM -0700, Stefano Stabellini wrote:
> > > > On Mon, 15 May 2023, Jan Beulich wrote:
> > > > > On 15.05.2023 10:48, Roger Pau Monné wrote:
> > > > > > On Fri, May 12, 2023 at 06:17:19PM -0700, Stefano Stabellini wrote:
> > > > > >> From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > > >>
> > > > > >> Xen always generates an XSDT table even if the firmware provided an RSDT
> > > > > >> table. Instead of copying the XSDT header from the firmware table (that
> > > > > >> might be missing), generate the XSDT header from a preset.
> > > > > >>
> > > > > >> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > > >> ---
> > > > > >>  xen/arch/x86/hvm/dom0_build.c | 32 +++++++++-----------------------
> > > > > >>  1 file changed, 9 insertions(+), 23 deletions(-)
> > > > > >>
> > > > > >> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > > > > >> index 307edc6a8c..5fde769863 100644
> > > > > >> --- a/xen/arch/x86/hvm/dom0_build.c
> > > > > >> +++ b/xen/arch/x86/hvm/dom0_build.c
> > > > > >> @@ -963,13 +963,18 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> > > > > >>                                        paddr_t *addr)
> > > > > >>  {
> > > > > >>      struct acpi_table_xsdt *xsdt;
> > > > > >> -    struct acpi_table_header *table;
> > > > > >> -    struct acpi_table_rsdp *rsdp;
> > > > > >>      const struct acpi_table_desc *tables = acpi_gbl_root_table_list.tables;
> > > > > >>      unsigned long size = sizeof(*xsdt);
> > > > > >>      unsigned int i, j, num_tables = 0;
> > > > > >> -    paddr_t xsdt_paddr;
> > > > > >>      int rc;
> > > > > >> +    struct acpi_table_header header = {
> > > > > >> +        .signature    = "XSDT",
> > > > > >> +        .length       = sizeof(struct acpi_table_header),
> > > > > >> +        .revision     = 0x1,
> > > > > >> +        .oem_id       = "Xen",
> > > > > >> +        .oem_table_id = "HVM",
> > > > > > 
> > > > > > I think this is wrong, as according to the spec the OEM Table ID must
> > > > > > match the OEM Table ID in the FADT.
> > > > > > 
> > > > > > We likely want to copy the OEM ID and OEM Table ID from the RSDP, and
> > > > > > possibly also the other OEM related fields.
> > > > > > 
> > > > > > Alternatively we might want to copy and use the RSDT on systems that
> > > > > > lack an XSDT, or even just copy the header from the RSDT into Xen's
> > > > > > crafted XSDT, since the format of the RSDT and the XSDT headers is
> > > > > > exactly the same (the difference is in the size of the description
> > > > > > headers that come after).
> > > > > 
> > > > > I guess I'd prefer that last variant.
> > > > 
> > > > I tried this approach (together with the second patch for necessity) and
> > > > it worked.
> > > > 
> > > > diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > > > index fd2cbf68bc..11d6d1bc23 100644
> > > > --- a/xen/arch/x86/hvm/dom0_build.c
> > > > +++ b/xen/arch/x86/hvm/dom0_build.c
> > > > @@ -967,6 +967,10 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
> > > >          goto out;
> > > >      }
> > > >      xsdt_paddr = rsdp->xsdt_physical_address;
> > > > +    if ( !xsdt_paddr )
> > > > +    {
> > > > +        xsdt_paddr = rsdp->rsdt_physical_address;
> > > > +    }
> > > >      acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
> > > >      table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
> > > >      if ( !table )
> > > 
> > > To be slightly more consistent, could you use:
> > > 
> > > /*
> > >  * Note the header is the same for both RSDT and XSDT, so it's fine to
> > >  * copy the native RSDT header to the Xen crafted XSDT if no native
> > >  * XSDT is available.
> > >  */
> > > if (rsdp->revision > 1 && rsdp->xsdt_physical_address)
> > >     sdt_paddr = rsdp->xsdt_physical_address;
> > > else
> > >     sdt_paddr = rsdp->rsdt_physical_address;
> > > 
> > > It was an oversight of mine to not check for the RSDP revision, as
> > > RSDP < 2 will never have an XSDT.  Also add:
> > > 
> > > Fixes: 1d74282c455f ('x86: setup PVHv2 Dom0 ACPI tables')
> > 
> > Just realized this will require some more work so that the guest
> > (dom0) provided RSDP is at least revision 2.  You will need to adjust
> > the field and recalculate the checksum if needed.
> 
> But we are always providing RSDP version 2 in pvh_setup_acpi, right?

Yes, as said in the reply to Jan, just ignore this.

Thanks, Roger.
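[Editor's note: the selection and checksum rules discussed in this thread can be sketched as below. This is illustrative code, not Xen's actual implementation; the struct and function names are invented for the example. An RSDP of revision >= 2 carries a 64-bit XSDT pointer, older revisions only the 32-bit RSDT pointer, and any edited ACPI table must have its checksum byte adjusted so that all bytes sum to zero mod 256.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the relevant RSDP fields. */
struct rsdp {
    uint8_t  revision;
    uint32_t rsdt_physical_address;
    uint64_t xsdt_physical_address;
};

/*
 * Pick the XSDT when the RSDP is revision 2+ and provides one,
 * otherwise fall back to the RSDT (both root tables start with the
 * same standard ACPI header, so either can seed the crafted XSDT).
 */
static uint64_t root_sdt_paddr(const struct rsdp *rsdp)
{
    if ( rsdp->revision >= 2 && rsdp->xsdt_physical_address )
        return rsdp->xsdt_physical_address;
    return rsdp->rsdt_physical_address;
}

/* ACPI checksum rule: all bytes of the table sum to 0 (mod 256). */
static uint8_t acpi_checksum(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;

    for ( size_t i = 0; i < len; i++ )
        sum += buf[i];

    return sum;
}

/* Recompute the checksum byte after editing any field of a table. */
static void fix_checksum(uint8_t *table, size_t len, size_t csum_off)
{
    table[csum_off] = 0;
    table[csum_off] -= acpi_checksum(table, len);
}
```

After bumping the revision field as suggested above, a call to the `fix_checksum()` helper over the (extended) RSDP would keep the table self-consistent.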


From xen-devel-bounces@lists.xenproject.org Wed May 17 09:20:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 09:20:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535837.833874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzDL0-0000Lb-Bu; Wed, 17 May 2023 09:20:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535837.833874; Wed, 17 May 2023 09:20:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzDL0-0000LU-97; Wed, 17 May 2023 09:20:50 +0000
Received: by outflank-mailman (input) for mailman id 535837;
 Wed, 17 May 2023 09:20:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzDKz-0000LO-7C
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 09:20:49 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2045.outbound.protection.outlook.com [40.107.13.45])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1b18923b-f494-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 11:20:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9638.eurprd04.prod.outlook.com (2603:10a6:102:273::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Wed, 17 May
 2023 09:20:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Wed, 17 May 2023
 09:20:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b18923b-f494-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a53a77e3-6dcd-2668-0f3c-282de93d8b04@suse.com>
Date: Wed, 17 May 2023 11:20:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
 <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
 <1bef2b0e-04e8-2ca0-cf03-f61cdae484a9@suse.com>
 <b1c56e56-90cc-0f37-4c8a-df1217339e59@citrix.com>
 <22a6bd70-887e-da72-ccff-16b3627463c3@suse.com>
 <54b35fa0-160e-3035-6c22-65a791ed2f84@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <54b35fa0-160e-3035-6c22-65a791ed2f84@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0243.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:af::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9638:EE_
X-MS-Office365-Filtering-Correlation-Id: 68c96103-34e1-45e8-d6ec-08db56b7ed25
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 68c96103-34e1-45e8-d6ec-08db56b7ed25
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 09:20:16.3008
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yr4Qxnzj+mDUy5lRq8udPLtxj0lG/xOUoFd5sWyzr9HJsKVVfuhs01EriO/u2YaY0PNXUXKdpNLdd1kIkgpZng==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9638

On 16.05.2023 21:31, Andrew Cooper wrote:
> On 16/05/2023 3:53 pm, Jan Beulich wrote:
>> On 16.05.2023 16:16, Andrew Cooper wrote:
>>> On 16/05/2023 3:06 pm, Jan Beulich wrote:
>>>> On 16.05.2023 15:51, Andrew Cooper wrote:
>>>>> On 16/05/2023 2:06 pm, Jan Beulich wrote:
>>>>>> On 15.05.2023 16:42, Andrew Cooper wrote:
>>>>>> Further is even just non-default exposure of all the various bits okay
>>>>>> to other than Dom0? IOW is there indeed no further adjustment necessary
>>>>>> to guest_rdmsr()?
>>>> With your reply further down also sufficiently clarifying things for
>>>> me (in particular pointing out the one oversight of mine), the question
>>>> above is the sole part remaining before I'd be okay giving my R-b here.
>>> Oh sorry.  Yes, it is sufficient.  Because VMs (other than dom0) don't
>>> get the ARCH_CAPS CPUID bit, reads of MSR_ARCH_CAPS will #GP.
>>>
>>> Right now, you can set cpuid = "host:arch-caps" in an xl.cfg file.  If
>>> you do this, you get to keep both pieces, as you'll end up advertising
>>> the MSR but with a value of 0 because of the note in patch 4.  libxl
>>> still only understand the xend CPUID format and can't express any MSR
>>> data at all.
>> Hmm, so the CPUID bit being max only results in all the ARCH_CAPS bits
>> getting turned off in the default policy. That is, to enable anything
>> you need to not only enable the CPUID bit, but also the ARCH_CAPS bits
>> you want enabled (with no present means to do so).
> 
> Correct.
> 
>> I guess that's no
>> different from other max-only features with dependents, but I wonder
>> whether that's good behavior.
> 
> It's not really something you get a choice over.
> 
> Default is always less than max, so however you choose to express these
> concepts, when you're opting-in you're always having to put information
> back in which was previously stripped out.

But my point is towards the amount of data you need to specify manually.
I would find it quite helpful if default-on sub-features became available
automatically once the top-level feature was turned on. I guess so far
we don't have many such cases, but here you add a whole bunch.

>> Wouldn't it make more sense for the
>> individual bits' exposure qualifiers to become meaningful once the
>> qualifying feature is enabled? I.e. here this would then mean that
>> some ARCH_CAPS bits may become available, while others may require
>> explicit turning on (assuming they weren't all 'A').
> 
> I'm afraid I don't follow.  You could make some bits of MSR_ARCH_CAPS
> itself 'a' vs 'A', and that would have the expected effect (for any VM
> where arch_caps was visible).

Visible by default, you mean. Whereas I'm considering the case where
the CPUID bit is default-off, and turning it on for a guest doesn't at
the same time turn on all the 'A' bits in ARCH_CAPS (which hardware
offers, or which we synthesize).

Something similar could be seen / utilized for AMX, where in my
pending series I set all the bits to 'a', requiring every individual
bit to be turned on along with turning on AMX-TILE. Yet it would be
more user friendly if only the top-level bit needed enabling manually,
with available sub-features then becoming available "automatically".
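[Editor's note: the behaviour Jan asks for can be illustrated with a hypothetical policy model. None of these names are Xen's; this is a sketch of the concept only: each sub-feature records whether it is default-on ('A') or opt-in ('a'), and enabling the qualifying top-level bit pulls in exactly the default-on dependents, leaving 'a' bits off until explicitly requested.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sub-feature descriptor: bit position within the
 * feature bitmap, plus whether it is default-on ('A') or requires
 * explicit opt-in ('a').
 */
struct subfeat {
    unsigned int bit;
    bool default_on;
};

/*
 * Turning on the qualifying top-level feature makes its default-on
 * dependents available automatically; opt-in ('a') sub-features stay
 * clear until the admin requests them individually.
 */
static uint64_t enable_toplevel(uint64_t policy,
                                const struct subfeat *subs,
                                unsigned int nr_subs)
{
    for ( unsigned int i = 0; i < nr_subs; i++ )
        if ( subs[i].default_on )
            policy |= 1ULL << subs[i].bit;

    return policy;
}
```

Under this model an xl.cfg opt-in would only need to name the top-level bit, rather than re-enumerating every dependent 'A' bit by hand.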

> The thing which is 99% of the complexity with MSR_ARCH_CAPS is getting
> RSBA/RRSBA right.  The moment we advertise MSR_ARCH_CAPS to guests,
> RSBA/RRSBA must be set appropriately for migrate or we're creating a
> security vulnerability in the guest.
> 
> If you're wondering about the block disable, that's because MSRs and
> CPUID are different.  With CPUID, we have
> x86_cpu_policy_clear_out_of_range_leaves() which uses the various
> max_leaf.  e.g. a feat.max_leaf=0 is what causes all of subleaf 1 and 2
> to be zeroed in a policy.
> 
> 
>> But irrespective of that (which is kind of orthogonal) my question was
>> rather with already considering the point in time when the CPUID bit
>> would become 'A'. IOW I was wondering whether at that point having all
>> the individual bits be 'A' is actually going to be correct.
> 
> I've chosen all 'A' for these bits because that is what I expect to be
> correct in due course.  They're all the simple "you're not vulnerable to
> $X" bits, plus eIBRS which in practice is just a qualifying statement on
> IBRS (already fully supported in guests).

Right, upon checking again I agree.

Jan

> The rest of MSR_ARCH_CAPS is pretty much a dumping ground for all of the
> controls we can't give to guests under any circumstance.  (FB_CLEAR_CTRL
> might be an exception - allegedly we might want to give it to guests
> which have passthrough and trust their userspace, but I'm unconvinced of
> this argument and going to insist on concrete numbers from anyone
> wanting to try and implement this usecase.)
> 
> But there certainly could be a feature in there in the future where we
> leave it at 'a' for a while...  It's just feature bitmap data in a
> non-CPUID form factor.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 17 09:48:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 09:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535843.833884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzDlP-00045s-El; Wed, 17 May 2023 09:48:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535843.833884; Wed, 17 May 2023 09:48:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzDlP-00045k-Bn; Wed, 17 May 2023 09:48:07 +0000
Received: by outflank-mailman (input) for mailman id 535843;
 Wed, 17 May 2023 09:48:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NVq3=BG=aim.com=brchuckz@srs-se1.protection.inumbo.net>)
 id 1pzDlN-00045W-4z
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 09:48:05 +0000
Received: from sonic315-8.consmr.mail.gq1.yahoo.com
 (sonic315-8.consmr.mail.gq1.yahoo.com [98.137.65.32])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e900c11b-f497-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 11:48:02 +0200 (CEST)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic315.consmr.mail.gq1.yahoo.com with HTTP; Wed, 17 May 2023 09:47:59 +0000
Received: by hermes--production-bf1-54475bbfff-xh8w9 (Yahoo Inc. Hermes SMTP
 Server) with ESMTPA ID 24aad0db06063cbf22325155fe59148c; 
 Wed, 17 May 2023 09:47:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e900c11b-f497-11ed-b229-6b7b168915f2
X-Sonic-MF: <brchuckz@aim.com>
X-Sonic-ID: 4eb3fcf0-14b8-4781-b11e-557bf4fbb7f4
Message-ID: <47ed3568-2127-a865-4e4f-ff5902484231@aol.com>
Date: Wed, 17 May 2023 05:47:52 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Chuck Zmudzinski <brchuckz@aol.com>
Subject: Re: [PATCH] xen/pt: fix igd passthrough for pc machine with xen
 accelerator
To: Michael Tokarev <mjt@tls.msk.ru>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Igor Mammedov <imammedo@redhat.com>, xen-devel@lists.xenproject.org,
 qemu-stable@nongnu.org
References: <a304213d26506b066021f803c39b87f6a262ed86.1675820085.git.brchuckz.ref@aol.com>
 <a304213d26506b066021f803c39b87f6a262ed86.1675820085.git.brchuckz@aol.com>
 <986d9eca-5fab-cacb-05c7-b85e4d58665b@msgid.tls.msk.ru>
Content-Language: en-US
In-Reply-To: <986d9eca-5fab-cacb-05c7-b85e4d58665b@msgid.tls.msk.ru>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Mailer: WebService/1.1.21471 mail.backend.jedi.jws.acl:role.jedi.acl.token.atz.jws.hermes.aol
Content-Length: 3127

On 5/17/2023 2:39 AM, Michael Tokarev wrote:
> 08.02.2023 05:03, Chuck Zmudzinski wrote:
> > Commit 998250e97661 ("xen, gfx passthrough: register host bridge specific
> > to passthrough") uses the igd-passthrough-i440FX pci host device with
> > the xenfv machine type and igd-passthru=on, but using it for the pc
> > machine type, xen accelerator, and igd-passthru=on was omitted from that
> > commit.
> > 
> > The igd-passthrough-i440FX pci host device is also needed for guests
> > configured with the pc machine type, the xen accelerator, and
> > igd-passthru=on. Specifically, tests show that not using the igd-specific
> > pci host device with the Intel igd passed through to the guest results
> > in slower startup performance and reduced resolution of the display
> > during startup. This patch fixes this issue.
> > 
> > To simplify the logic that is needed to support both the --enable-xen
> > and the --disable-xen configure options, introduce the boolean symbol
> > pc_xen_igd_gfx_pt_enabled() whose value is set appropriately in the
> > sysemu/xen.h header file as the test to determine whether or not
> > to use the igd-passthrough-i440FX pci host device instead of the
> > normal i440FX pci host device.
> > 
> > Fixes: 998250e97661 ("xen, gfx passthrough: register host bridge specific to passthrough")
> > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>
> Has this change been forgotten?  Is it not needed anymore?

Short answer:

After 4f67543b ("xen/pt: reserve PCI slot 2 for Intel igd-passthru") was
applied, I was inclined to think this change was no longer needed. On
reflection, though, I now think it is more correct to apply this change
as well.

Longer explanation:

I strongly wanted at least one of the patches I proposed to improve
support for Intel IGD passthrough with xen to be committed. Since
4f67543b ("xen/pt: reserve PCI slot 2 for Intel igd-passthru"), which
fixed Intel IGD passthrough for the xenfv machine type, has been
committed, I reasoned that there was less need to also fix Intel IGD
passthrough for the pc machine type with xen, so I did not push hard
for this patch to be applied as well.

My requirement was that either the xenfv machine or the pc machine be
fixed; I did not think it was necessary to fix both, and
4f67543b ("xen/pt: reserve PCI slot 2 for Intel igd-passthru") fixed the
xenfv machine. But this patch provides a distinct, additional fix for
the pc machine, and it probably should also be applied so that the pc
and xenfv machines work equally well with Intel IGD passthrough.

In other words, while it is good to fix at least one of the two broken
machine configurations for Intel IGD passthrough with xen, it is better to
fix them both. We already fixed one of them with 4f67543b ("xen/pt:
reserve PCI slot 2 for Intel igd-passthru"); this patch would fix the other.

If you want to add this change also, let's make sure recent changes to the
xen header files do not require the patch to be rebased before committing
it.

Chuck
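As a rough illustration of the selection logic the patch describes: the helper name pc_xen_igd_gfx_pt_enabled() comes from the commit message, but everything else below is a hypothetical, simplified stand-in, not QEMU's actual code:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for QEMU state; in the real code these come
 * from the xen accelerator and the guest configuration. */
static bool xen_enabled_flag;
static bool xen_igd_gfx_passthru;

/* With --disable-xen this compiles to a constant false, so board code
 * needs no #ifdef at the call site -- the simplification the commit
 * message mentions for supporting both configure options. */
static bool pc_xen_igd_gfx_pt_enabled(void)
{
#ifdef CONFIG_XEN
    return xen_enabled_flag && xen_igd_gfx_passthru;
#else
    return false;
#endif
}

/* Pick the IGD-specific host bridge only when Xen IGD passthrough is
 * configured; otherwise use the normal i440FX host bridge.  The type
 * names are placeholders for the devices the thread refers to. */
static const char *pc_host_bridge_type(void)
{
    return pc_xen_igd_gfx_pt_enabled() ? "igd-passthrough-i440FX"
                                       : "i440FX";
}
```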


From xen-devel-bounces@lists.xenproject.org Wed May 17 10:31:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 10:31:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535847.833894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzERf-0002Sv-F3; Wed, 17 May 2023 10:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535847.833894; Wed, 17 May 2023 10:31:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzERf-0002So-BF; Wed, 17 May 2023 10:31:47 +0000
Received: by outflank-mailman (input) for mailman id 535847;
 Wed, 17 May 2023 10:31:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cMB/=BG=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pzERe-0002Sa-3w
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 10:31:46 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0630.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04f827fe-f49e-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 12:31:44 +0200 (CEST)
Received: from DU2PR04CA0289.eurprd04.prod.outlook.com (2603:10a6:10:28c::24)
 by GV1PR08MB7682.eurprd08.prod.outlook.com (2603:10a6:150:61::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Wed, 17 May
 2023 10:31:41 +0000
Received: from DBAEUR03FT058.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28c:cafe::60) by DU2PR04CA0289.outlook.office365.com
 (2603:10a6:10:28c::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17 via Frontend
 Transport; Wed, 17 May 2023 10:31:40 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT058.mail.protection.outlook.com (100.127.142.120) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.17 via Frontend Transport; Wed, 17 May 2023 10:31:40 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Wed, 17 May 2023 10:31:40 +0000
Received: from 8ee269bb859f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DEBDE5C0-E84B-4A77-8FEE-01E33163475F.1; 
 Wed, 17 May 2023 10:31:34 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8ee269bb859f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 17 May 2023 10:31:34 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by GV2PR08MB7929.eurprd08.prod.outlook.com (2603:10a6:150:ac::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Wed, 17 May
 2023 10:31:31 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%6]) with mapi id 15.20.6387.032; Wed, 17 May 2023
 10:31:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04f827fe-f49e-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6znffPVqR9NmWxnKAD9FJRV42x1Y5ng492Dynk2du0c=;
 b=YXJVCFOrTiPhtHECELcIYdkhN60dGFS8CRyOyNoP0QRRLrxSYRZ6LBBr98SVgQqTRXU80+cqbEJrl4/C1QYzoH5DCuM/a5zCIBRUDarLWhxFOZuWu3yB6ba02bRk/exlkdYR6/6E5MyG4mWACimNq8ZSqMZ7JE8YHxBTgAU0b4k=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 46005cd57f16f872
X-CR-MTA-TID: 64aa7808
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 2/2] xen/misra: diff-report.py: add report patching
 feature
Thread-Topic: [PATCH 2/2] xen/misra: diff-report.py: add report patching
 feature
Thread-Index: AQHZfpRdAeUKdaL5DEawBEGR+YhUoK9dwh0AgACWKYA=
Date: Wed, 17 May 2023 10:31:30 +0000
Message-ID: <3A4E52B2-B33D-46FC-A1DB-4935AF06CC49@arm.com>
References: <20230504142523.2989306-1-luca.fancellu@arm.com>
 <20230504142523.2989306-3-luca.fancellu@arm.com>
 <alpine.DEB.2.22.394.2305161827050.128889@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2305161827050.128889@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|GV2PR08MB7929:EE_|DBAEUR03FT058:EE_|GV1PR08MB7682:EE_
X-MS-Office365-Filtering-Correlation-Id: f9b5a4f4-10c4-440c-e9df-08db56c1e6b5
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <66CE121E50BCB14C84B9938ADA893F1E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB7929
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	577f6b67-aeca-4fc1-147f-08db56c1e0b6
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 10:31:40.2259
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f9b5a4f4-10c4-440c-e9df-08db56c1e6b5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7682


> On 17 May 2023, at 02:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 4 May 2023, Luca Fancellu wrote:
>> Add a feature to the diff-report.py script that improves the comparison
>> between two analysis report, one from a baseline codebase and the other
>> from the changes applied to the baseline.
>> 
>> The comparison between reports of different codebase is an issue because
>> entries in the baseline could have been moved in position due to addition
>> or deletion of unrelated lines or can disappear because of deletion of
>> the interested line, making the comparison between two revisions of the
>> code harder.
>> 
>> Having a baseline report, a report of the codebase with the changes
>> called "new report" and a git diff format file that describes the
>> changes happened to the code from the baseline, this feature can
>> understand which entries from the baseline report are deleted or shifted
>> in position due to changes to unrelated lines and can modify them as
>> they will appear in the "new report".
>> 
>> Having the "patched baseline" and the "new report", now it's simple
>> to make the diff between them and print only the entry that are new.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> This is an amazing work!! Thanks Luca!
> 
> I am having issues trying the new patch feature. After applying this
> patch I get:
> 
> sstabellini@ubuntu-linux-20-04-desktop:/local/repos/xen-upstream/xen$ ./scripts/diff-report.py
> Traceback (most recent call last):
>  File "./scripts/diff-report.py", line 5, in <module>
>    from xen_analysis.diff_tool.debug import Debug
>  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/debug.py", line 4, in <module>
>    from .report import Report
>  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/report.py", line 4, in <module>
>    from .unified_format_parser import UnifiedFormatParser, ChangeSet
>  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 56, in <module>
>    class UnifiedFormatParser:
>  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 57, in UnifiedFormatParser
>    def __init__(self, args: str | list) -> None:
> TypeError: unsupported operand type(s) for |: 'type' and 'type'
> 
> Also got a similar error elsewhere:
> 
> sstabellini@ubuntu-linux-20-04-desktop:/local/repos/xen-upstream/xen$ ./scripts/diff-report.py --patch ~/p/1 -b /tmp/1 -r /tmp/1
> Traceback (most recent call last):
>  File "./scripts/diff-report.py", line 127, in <module>
>    main(sys.argv[1:])
>  File "./scripts/diff-report.py", line 102, in main
>    diffs = UnifiedFormatParser(diff_source)
>  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 79, in __init__
>    self.__parse()
>  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 94, in __parse
>    def parse_diff_header(line: str) -> ChangeSet | None:
> TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
> 
> My Python is 2.7.18
> 
> 
> Am I understanding correctly that one should run the scan for the
> baseline (saving the result somewhere), then apply the patch, run the
> scan again. Finally, one should call diff-report.py passing -b
> baseline-report -r new-report --patch the-patch-applied?

Hi Stefano,

Yes indeed, that procedure is correct, I think the error you are seeing comes from the python version,
I am using python 3, version 3.10.6.

The error seems to come from python annotations, I’m surprised you didn’t hit it when testing the first patch,
did you use python2 for that?

Is it a problem if I developed the tool having in mind its usage with python3?

Cheers,
Luca
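The `str | list` and `ChangeSet | None` annotations discussed in the message above use PEP 604 union syntax, which is only evaluated successfully at runtime on Python 3.10+ (or under `from __future__ import annotations` on 3.7+). A sketch of the portable `typing` spelling follows; the `ChangeSet` stub and the header check are illustrative stand-ins, not the diff tool's real code:

```python
from typing import Optional, Union


class ChangeSet:
    """Illustrative stand-in for the diff tool's ChangeSet class."""


# Equivalent to `args: str | list`, but valid on any Python 3:
def parse_args(args: Union[str, list]) -> None:
    pass


# Equivalent to `-> ChangeSet | None`, but valid on any Python 3:
def parse_diff_header(line: str) -> Optional[ChangeSet]:
    # Hypothetical: recognise a unified-diff header line.
    if line.startswith("diff --git"):
        return ChangeSet()
    return None
```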


From xen-devel-bounces@lists.xenproject.org Wed May 17 10:35:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 10:35:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535851.833904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzEUf-0003Bn-Vx; Wed, 17 May 2023 10:34:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535851.833904; Wed, 17 May 2023 10:34:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzEUf-0003Ba-QK; Wed, 17 May 2023 10:34:53 +0000
Received: by outflank-mailman (input) for mailman id 535851;
 Wed, 17 May 2023 10:34:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzEUe-0003A7-TK
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 10:34:53 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20604.outbound.protection.outlook.com
 [2a01:111:f400:fe16::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7381ab00-f49e-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 12:34:50 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS5PR04MB9874.eurprd04.prod.outlook.com (2603:10a6:20b:673::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.32; Wed, 17 May
 2023 10:34:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Wed, 17 May 2023
 10:34:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7381ab00-f49e-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <796b6671-c699-1bbf-b3a7-59c8fceeb625@suse.com>
Date: Wed, 17 May 2023 12:34:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH RFC] xen: Enable -Wwrite-strings
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516203428.1441365-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230516203428.1441365-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0128.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS5PR04MB9874:EE_
X-MS-Office365-Filtering-Correlation-Id: da901812-507d-4aaa-cde4-08db56c25624
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: da901812-507d-4aaa-cde4-08db56c25624
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 10:34:47.3314
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zBRvdg6BlTAqK9JXA6ddq+VznXPvr3nAWWa4imicTtHa/IFJUF4ETAlWFzUz8x9+QH/Je8INZj3zYNTOKqRPeQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS5PR04MB9874

On 16.05.2023 22:34, Andrew Cooper wrote:
> Following on from the MISRA discussions.
> 
> On x86, most are trivial.  The two slightly suspect cases are __hvm_copy()
> where constness is dependent on flags,

But do we ever pass string literals in there? I would certainly
like to avoid adding explicit casts just to get rid of the const there.

> and kextra in __start_xen() which only
> compiles because of laundering the pointer through strstr().

The sole string literal there looks to be the empty string in
cmdline_cook(), which could be easily replaced, I think:

static char * __init cmdline_cook(char *p, const char *loader_name)
{
    static char __initdata empty[] = "";

    p = p ? : empty;

Yet of course only if we were unhappy with the strstr() side effect.
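To make the idea concrete, here is a minimal standalone sketch (cook() is a
hypothetical stand-in, not the actual Xen code):

```c
#include <assert.h>
#include <string.h>

/* With -Wwrite-strings, a "" literal has type const char[1], so it
 * cannot produce a plain char * result; a static mutable empty array
 * avoids both the warning and a cast. */
static char empty[] = "";

static char *cook(char *p)
{
    /* Instead of: return p ? p : "";  (discards const under the flag) */
    return p ? p : empty;
}
```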

> The one case which I can't figure out how to fix is EFI:
> 
>   In file included from arch/x86/efi/boot.c:700:
>   arch/x86/efi/efi-boot.h: In function ‘efi_arch_handle_cmdline’:
>   arch/x86/efi/efi-boot.h:327:16: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
>     327 |         name.s = "xen";
>         |                ^
>   cc1: all warnings being treated as errors
> 
> Why do we have something that looks like this ?
> 
>   union string {
>       CHAR16 *w;
>       char *s;
>       const char *cs;
>   };

Because that was the least clutter (at the respective use sites) that I
could think of at the time. Looks like you could simply assign to
name.cs, now that we have that field (iirc it wasn't there originally).
Of course that's then only papering over the issue.
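A minimal standalone sketch of that name.cs route (default_name() is a
hypothetical helper, not part of efi-boot.h):

```c
#include <assert.h>
#include <string.h>

typedef unsigned short CHAR16;

/* The union from the quoted efi-boot.h error. */
union string {
    CHAR16 *w;
    char *s;
    const char *cs;
};

static const char *default_name(void)
{
    union string name;

    /* Assigning the literal to the const member compiles cleanly,
     * whereas name.s = "xen" discards const and warns. */
    name.cs = "xen";
    return name.cs;
}
```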

> --- a/xen/include/acpi/actypes.h
> +++ b/xen/include/acpi/actypes.h
> @@ -281,7 +281,7 @@ typedef acpi_native_uint acpi_size;
>   */
>  typedef u32 acpi_status;	/* All ACPI Exceptions */
>  typedef u32 acpi_name;		/* 4-byte ACPI name */
> -typedef char *acpi_string;	/* Null terminated ASCII string */
> +typedef const char *acpi_string;	/* Null terminated ASCII string */
>  typedef void *acpi_handle;	/* Actually a ptr to a NS Node */

For all the uses we have at present, this change looks okay, but changing
this header leaves me a little uneasy. At the same time I have no
better suggestion.
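For illustration of what the hunk buys us (name_len() is a hypothetical
caller, not an ACPI function):

```c
#include <assert.h>
#include <string.h>

/* With the const typedef from the quoted actypes.h change, string
 * literals can be passed as acpi_string without casts, and callers
 * that only read through the pointer are unaffected. */
typedef const char *acpi_string;

static size_t name_len(acpi_string name)
{
    return strlen(name);
}
```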

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 17 10:43:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 10:43:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535857.833913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzEcT-0004zO-Pb; Wed, 17 May 2023 10:42:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535857.833913; Wed, 17 May 2023 10:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzEcT-0004zH-MX; Wed, 17 May 2023 10:42:57 +0000
Received: by outflank-mailman (input) for mailman id 535857;
 Wed, 17 May 2023 10:42:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzEcS-0004z3-9I; Wed, 17 May 2023 10:42:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzEcS-0004oG-6d; Wed, 17 May 2023 10:42:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzEcR-0005Y2-Ng; Wed, 17 May 2023 10:42:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzEcR-0006wP-LC; Wed, 17 May 2023 10:42:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w1vmK5UIcKQi0NAeYTP3cUSkLO1Df/jEVtmtJx5zQtA=; b=dsBrC2G7MoVziNxqiwy+jO9I1z
	miT+bFuL63ybj4/WwLXco3pr8lJcISWXPP1PhYWtppgnZ/d+3cZgDJWw82KNZOpAtZPPRDxbx9QgJ
	rOamlAVmnBSP6Fv2SygU4BVqDhWLbMH+D1kO9Y9fwyMxKB04S5Wooc46ihZF1RAtDF0o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180686-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180686: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f9d58e0ca53b3f470b84725a7b5e47fcf446a2ea
X-Osstest-Versions-That:
    qemuu=18b6727083acceac5d76ea0b8cb6f5cdef6858a7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 May 2023 10:42:55 +0000

flight 180686 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180686/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180673
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180673
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180673
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180673
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180673
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180673
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180673
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180673
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                f9d58e0ca53b3f470b84725a7b5e47fcf446a2ea
baseline version:
 qemuu                18b6727083acceac5d76ea0b8cb6f5cdef6858a7

Last test of basis   180673  2023-05-15 17:38:45 Z    1 days
Failing since        180679  2023-05-16 05:45:31 Z    1 days    2 attempts
Testing same since   180686  2023-05-16 20:07:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrei Gudkov <gudkov.andrei@huawei.com>
  Ani Sinha <anisinha@redhat.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <lvivier@redhat.com>
  Lizhi Yang <sledgeh4w@gmail.com>
  Mateusz Krawczuk <mat.krawczuk@gmail.com>
  Peter Foley <pefoley@google.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Sam Li <faithilikerun@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   18b6727083..f9d58e0ca5  f9d58e0ca53b3f470b84725a7b5e47fcf446a2ea -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed May 17 10:50:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 10:50:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535863.833924 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzEjm-00074l-HR; Wed, 17 May 2023 10:50:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535863.833924; Wed, 17 May 2023 10:50:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzEjm-00074e-D3; Wed, 17 May 2023 10:50:30 +0000
Received: by outflank-mailman (input) for mailman id 535863;
 Wed, 17 May 2023 10:50:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cMB/=BG=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pzEjk-00074H-Mi
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 10:50:28 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20614.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::614])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a1b864b5-f4a0-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 12:50:26 +0200 (CEST)
Received: from AM4PR0501CA0055.eurprd05.prod.outlook.com
 (2603:10a6:200:68::23) by AS2PR08MB10265.eurprd08.prod.outlook.com
 (2603:10a6:20b:62c::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.32; Wed, 17 May
 2023 10:50:23 +0000
Received: from AM7EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:200:68:cafe::ca) by AM4PR0501CA0055.outlook.office365.com
 (2603:10a6:200:68::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33 via Frontend
 Transport; Wed, 17 May 2023 10:50:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT016.mail.protection.outlook.com (100.127.140.106) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.15 via Frontend Transport; Wed, 17 May 2023 10:50:23 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Wed, 17 May 2023 10:50:23 +0000
Received: from c4181ce8af37.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5C3AB009-1C46-4A2D-BB27-AB838D802112.1; 
 Wed, 17 May 2023 10:50:16 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c4181ce8af37.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 17 May 2023 10:50:16 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DBAPR08MB5846.eurprd08.prod.outlook.com (2603:10a6:10:1b0::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Wed, 17 May
 2023 10:50:14 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%6]) with mapi id 15.20.6387.032; Wed, 17 May 2023
 10:50:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1b864b5-f4a0-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XSq0NZmxvBCKts1XtcwpXIE078eEUbe1goYAV1VZhBA=;
 b=9HK9LRoyust42/6wxjVBncfDNjy5vyH5sCMe94OcXfY7CR65uNjEUo+ZmQV0KrRPbPs/XATjvNG9HYYl+2WbNlxCjo9mYAcxxyKZkiJx9DnneElxaQ4rvozw3NoTFfDjm2yWj3ebJxh9Z63JIkFDf/L404m5MBQP8+2KR2u7aIk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 424a2b75fe8f5081
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Odf10QGJ2ILhd0JOUUYe0d9zEyGykyDYDBe39tAs0bTrs+vRSWYcoHrzrt9Gfw54QW4kuesbHlM4o7s494G4lRnXVsCeKlXMG1isQfFJSC9hwm9joEbkv5HkZ09bZz5aRQAvIBQ0GOS0nLmlWkUyYOBmO5uUzBw+ZpQ7UvP2GyxPTEY9b/Q4P/80zBdZrb4FFQ0YnHrHWHso34AbBa7korsve9QjdL45aBk5xp3nXGtzDANTGNfsUrCSp2i2EO+3Ub3fiV93eun6lx8khU+Wxac+vlPEhTzyNCgukO2LHMnQk/6ZoU4hM53+cy6W0SQU30Oii1WACxAsnEFuhGmKxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XSq0NZmxvBCKts1XtcwpXIE078eEUbe1goYAV1VZhBA=;
 b=Z0Mpf29aGcr9tJvpTiGJzUBNMmYhodC555LxNqVyKDsEGnxJuM5LJsNAFvzNhFaBtGEF3Vuoe+AxTw6LCZj3LtGFZqzySGi2bjA6O1NamMSlL/x2zzPIARqQgHJeEO5vVOEebBZox1Bpk6Ke4t7YqC3og8uiFNl3EnXbP5lIbzjbFfgOe3CTEmXxKyf1hQ8W3iVWvs+KOKiOcAldCLEzjau3PB0AYa3qFFqopwPuoqa3nknWsPIeDzhxE+G07jecAMi+Hhn+V3D+O1HP0MqqaC9KcZ2+wVHubpMT9t3B+1YOO7F4M5H46iIy35uyNiQdP2QeiyJTou1XkbcwUekXVQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XSq0NZmxvBCKts1XtcwpXIE078eEUbe1goYAV1VZhBA=;
 b=9HK9LRoyust42/6wxjVBncfDNjy5vyH5sCMe94OcXfY7CR65uNjEUo+ZmQV0KrRPbPs/XATjvNG9HYYl+2WbNlxCjo9mYAcxxyKZkiJx9DnneElxaQ4rvozw3NoTFfDjm2yWj3ebJxh9Z63JIkFDf/L404m5MBQP8+2KR2u7aIk=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 1/2] xen/misra: add diff-report.py tool
Thread-Topic: [PATCH 1/2] xen/misra: add diff-report.py tool
Thread-Index: AQHZfpRaHhveqUOI3U+Goaz/TE5FAK9dwC0AgACdVYA=
Date: Wed, 17 May 2023 10:50:14 +0000
Message-ID: <9A9ABE94-957D-4E95-934F-3D09C58CF742@arm.com>
References: <20230504142523.2989306-1-luca.fancellu@arm.com>
 <20230504142523.2989306-2-luca.fancellu@arm.com>
 <alpine.DEB.2.22.394.2305161826460.128889@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2305161826460.128889@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DBAPR08MB5846:EE_|AM7EUR03FT016:EE_|AS2PR08MB10265:EE_
X-MS-Office365-Filtering-Correlation-Id: 93db34f0-e0a1-4fda-6710-08db56c48453
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 8HdLWY/68DGmVCHAuLQ/3RuT+FBoN0n232pL7EENEsB6hHtdrezBiOa3UBcORWKi7+hoGUTpHwz+2G9SgAXENFwdI2VxJVq+aCPkdRd/w9sLhxaqIBW0F2UFzu3DwZwGqqLqQRebNOdhLn9PJHPp+dvFDFNsgGNfiFbp8CLS6y/gJ/iSQM/9bs5ELgcvRbSmbjTnJ8l1I8goDRsI/rO6UfLRtdz2LkdtBpkXCufH7asoh1MXVTW+tqn76BBH7GPN+oRJ/OyDhNUo/pnDYpQyoHXo+C8rfQxczlKg7+7qDi84VLygGM1ZDZH1VM46gIJNLR5YOtxa1OQ+RTwUaiJ5JrvCiMJCkb4bwLxEAxzVSJS7QrbpZHCQU4WRn5aCPLOt7DwjMC6hnf5S9nsYYknh/tEfOAsheUnPozd7LszDtGup61n2FVkGyNfQj+zUO3cIN5Uj/vNP9iJa3xwePgxiVWOZfC/Qayfunhz4oTE1Tdq30VAKuDIpYNYZaXq/VyAF6bjv3EVcTC1FotmouWpqqm0dCyrzcxrZ00VVRXkMIHfk+dhS2WdQrGXJZ56+9Q2DvvSiKhkuvnYn5sJgyICnbjxELjyJ5HEkzLL/IBuONzCMKs/mSOJDBjVc8PyTDUQYZrhrUhU6vox7/DgAGHsMFw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(39860400002)(136003)(366004)(346002)(451199021)(91956017)(478600001)(6486002)(71200400001)(54906003)(2616005)(53546011)(6512007)(6506007)(26005)(186003)(2906002)(4744005)(8936002)(8676002)(5660300002)(36756003)(122000001)(33656002)(41300700001)(6916009)(38100700002)(4326008)(66476007)(66556008)(66946007)(66446008)(64756008)(76116006)(86362001)(316002)(38070700005)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <3C4464FD16DCC649827C156E51E1A03E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5846
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	481de357-04eb-4c14-c011-08db56c47ec4
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	DxNKD92tAiMyfJHmJH64V/qlwCk9pVDgUzt/iexXuszis7krfmYawDW/m/Vqvsb/L8p67HtfuPtg01Dc/kUVhY4KrH2b625rHkdw5NPceyk4HKcDCtkhCWrs8KSrfEahNcVueTxZpniIaxP01x4LQ0M8BRzJIfOrb4zfnfKdZgr1MgOxGqJ5yo1m8r7Qe2A8RHwLtouKjMMieXzDXYkA8BsmFDjoUop1o958Dc3vzJN2rNVyra+gWwWx2P8Y3ieW4sUtKVRFW4xUHFncn5w9xUhM5FCgvuNwaEpjZaxkp1Gluvt37HcujFPiJ/1BkKPD92gnKbHLc/8sDwqzfUv/QTPBw48VXSIeYDULO54chl+18vIQ+3qIg3tbpAZhsbv3i6eT0G7fU19WJf2EKlheNCo0iib99syJVxHU3ntA6kEjnI5xdoH/DE0285i2q8gKOOqe+dvuqjxVYgCFtW9iLHkGzDWRf/aquU2GSZQt7xEaitk5OSoNCnq+PTK+yH6IYspUBuIH3E3dF0II+TaWs+jqq2KD4OsHwYnHqI5SpgE4IQwxa09IeL7I7xkoRWei+JX4AGoC27dej01QrkYRs4DBQlDAs+z0GTMmC0zaNlAniQzOlkYqEeJqdl4CEe6RYeDGlufQtVKNIXxTYbIWVRFRCTT6BdLy5fECYQhbzVEnhjs2KMJmz8ot3aq4PDfoiFBe+95gCa786pSH5cJCabkYrMZO0d0NCCX8mpgoDnJ+UeIhgTnJNJVcxcJlI4Ix
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(376002)(136003)(346002)(451199021)(46966006)(36840700001)(40470700004)(316002)(8936002)(8676002)(6862004)(5660300002)(336012)(82310400005)(47076005)(53546011)(86362001)(6506007)(356005)(6512007)(186003)(81166007)(2616005)(36860700001)(26005)(82740400003)(40460700003)(41300700001)(6486002)(54906003)(33656002)(40480700001)(70206006)(70586007)(478600001)(36756003)(4326008)(4744005)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 10:50:23.6225
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 93db34f0-e0a1-4fda-6710-08db56c48453
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB10265



> On 17 May 2023, at 02:26, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 4 May 2023, Luca Fancellu wrote:
>> Add a new tool, diff-report.py that can be used to make diff between
>> reports generated by xen-analysis.py tool.
>> Currently this tool supports the Xen cppcheck text report format in
>> its operations.
>> 
>> The tool prints every finding that is in the report passed with -r
>> (check report) which is not in the report passed with -b (baseline).
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you Stefano for taking the time to review and test it. I will push
the new version of the series with the stale functions removed, and I will
add your A-by and T-by.

Cheers,
Luca




From xen-devel-bounces@lists.xenproject.org Wed May 17 10:53:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 10:53:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535869.833934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzEmB-0007nt-5O; Wed, 17 May 2023 10:52:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535869.833934; Wed, 17 May 2023 10:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzEmB-0007nm-0u; Wed, 17 May 2023 10:52:59 +0000
Received: by outflank-mailman (input) for mailman id 535869;
 Wed, 17 May 2023 10:52:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ws97=BG=tls.msk.ru=mjt@srs-se1.protection.inumbo.net>)
 id 1pzEm9-0007nS-6u
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 10:52:57 +0000
Received: from isrv.corpit.ru (isrv.corpit.ru [86.62.121.231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa4abd12-f4a0-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 12:52:55 +0200 (CEST)
Received: from tsrv.corpit.ru (tsrv.tls.msk.ru [192.168.177.2])
 by isrv.corpit.ru (Postfix) with ESMTP id 7584E696D;
 Wed, 17 May 2023 13:52:54 +0300 (MSK)
Received: from [192.168.177.130] (mjt.wg.tls.msk.ru [192.168.177.130])
 by tsrv.corpit.ru (Postfix) with ESMTP id B170C5FCC;
 Wed, 17 May 2023 13:52:53 +0300 (MSK)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa4abd12-f4a0-11ed-b229-6b7b168915f2
Message-ID: <2b07603f-6623-9fbf-15df-a86849d9aca3@msgid.tls.msk.ru>
Date: Wed, 17 May 2023 13:52:53 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] xen/pt: fix igd passthrough for pc machine with xen
 accelerator
Content-Language: en-US
To: Chuck Zmudzinski <brchuckz@aol.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Richard Henderson <richard.henderson@linaro.org>,
 Eduardo Habkost <eduardo@habkost.net>, "Michael S. Tsirkin"
 <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Igor Mammedov <imammedo@redhat.com>, xen-devel@lists.xenproject.org,
 qemu-stable@nongnu.org
References: <a304213d26506b066021f803c39b87f6a262ed86.1675820085.git.brchuckz.ref@aol.com>
 <a304213d26506b066021f803c39b87f6a262ed86.1675820085.git.brchuckz@aol.com>
 <986d9eca-5fab-cacb-05c7-b85e4d58665b@msgid.tls.msk.ru>
 <47ed3568-2127-a865-4e4f-ff5902484231@aol.com>
From: Michael Tokarev <mjt@tls.msk.ru>
In-Reply-To: <47ed3568-2127-a865-4e4f-ff5902484231@aol.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

17.05.2023 12:47, Chuck Zmudzinski wrote:
> On 5/17/2023 2:39 AM, Michael Tokarev wrote:
>> 08.02.2023 05:03, Chuck Zmudzinski wrote:...
>>> Fixes: 998250e97661 ("xen, gfx passthrough: register host bridge specific to passthrough")
>>> Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
>>
>> Has this change been forgotten?  Is it not needed anymore?
> 
> Short answer:
> 
> After 4f67543b ("xen/pt: reserve PCI slot 2 for Intel igd-passthru ") was
> applied, I was inclined to think this change is not needed anymore, but
> it would not hurt to add this change also, and now I think it might be
> more correct to also add this change.
...

Well, there were two machines with broken IGD passthrough in xen; now
there's one machine with broken IGD passthrough. Let's fix them all :)
Note this patch is tagged -stable as well.

> If you want to add this change also, let's make sure recent changes to the
> xen header files do not require the patch to be rebased before committing
> it.

It doesn't require rebasing, it looks like - I just built 8.0 and current
master qemu with it applied.  I haven't tried the actual IGD passthrough, though.

It just needs to be picked up the usual way, as all other qemu changes go in.

Thanks,

/mjt


From xen-devel-bounces@lists.xenproject.org Wed May 17 12:46:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 12:46:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535910.833947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzGXB-00071j-5g; Wed, 17 May 2023 12:45:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535910.833947; Wed, 17 May 2023 12:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzGXB-00071b-1J; Wed, 17 May 2023 12:45:37 +0000
Received: by outflank-mailman (input) for mailman id 535910;
 Wed, 17 May 2023 12:45:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=v3JH=BG=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pzGX9-00071T-8L
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 12:45:35 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b668fed2-f4b0-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 14:45:33 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B5DC7646CA;
 Wed, 17 May 2023 12:45:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BC045C4339B;
 Wed, 17 May 2023 12:45:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b668fed2-f4b0-11ed-b229-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684327532;
	bh=iOvwhrIqurJ9VDkNH99fkMyAAsvkDjtRTJDy0SAwuEE=;
	h=From:To:Cc:Subject:Date:From;
	b=KXW/r94Q7ohDPiHr0WynE0BNA5PbgMdv1VdP3fPVRosVwyBRn7cKtcvUHduFDm/6n
	 Z7ROnCIBNMzOYh4W6Lne0zgMcRsWz4vfpGfN2AFOdfrb+uUtd7orQ0KL6Kp5dEGLrs
	 FlzSb7yHd4VOO64a3lGHt429hR9rOA9YlSkxHH/kaptA/8GzG3jIOOIFT34l7SD+GH
	 lne4q4ssxGKV8nM4y86ywSHghXnzr3qmL6B11q5AKGvTW76/nagyX35hi0QLj5CSX8
	 rjKhJ/JdACi3+X78dudTjmyOL6QV+dyifUsO3VQeo+W+pkq6tm0+ts0sn+8kgtqGYL
	 GOd3HPkal7XTw==
From: Arnd Bergmann <arnd@kernel.org>
To: Juergen Gross <jgross@suse.com>
Cc: Arnd Bergmann <arnd@arndb.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Peter Zijlstra <peterz@infradead.org>,
	David Woodhouse <dwmw@amazon.co.uk>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] xen: xen_debug_interrupt prototype to global header
Date: Wed, 17 May 2023 14:45:07 +0200
Message-Id: <20230517124525.929201-1-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

The xen_debug_interrupt() function is only called on x86, which has a
prototype in an architecture-specific header, but the definition also
exists on other architectures, where the lack of a prototype causes a
W=1 warning:

drivers/xen/events/events_2l.c:264:13: error: no previous prototype for 'xen_debug_interrupt' [-Werror=missing-prototypes]

Move the prototype into a global header instead to avoid this warning.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/x86/xen/xen-ops.h | 2 --
 include/xen/events.h   | 3 +++
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 84a35ff1e0c9..0f71ee3fe86b 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -72,8 +72,6 @@ void xen_restore_time_memory_area(void);
 void xen_init_time_ops(void);
 void xen_hvm_init_time_ops(void);
 
-irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
-
 bool xen_vcpu_stolen(int vcpu);
 
 void xen_vcpu_setup(int cpu);
diff --git a/include/xen/events.h b/include/xen/events.h
index 44c2855c76d1..ac1281c5ead6 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -138,4 +138,7 @@ int xen_test_irq_shared(int irq);
 
 /* initialize Xen IRQ subsystem */
 void xen_init_IRQ(void);
+
+irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
+
 #endif	/* _XEN_EVENTS_H */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed May 17 14:01:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 14:01:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535918.833960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzHif-0001F8-Cr; Wed, 17 May 2023 14:01:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535918.833960; Wed, 17 May 2023 14:01:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzHif-0001F1-7s; Wed, 17 May 2023 14:01:33 +0000
Received: by outflank-mailman (input) for mailman id 535918;
 Wed, 17 May 2023 14:01:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzHie-0001El-Ea
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 14:01:32 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20619.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::619])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 52f9de5d-f4bb-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 16:01:30 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7657.eurprd04.prod.outlook.com (2603:10a6:10:1f5::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Wed, 17 May
 2023 14:01:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Wed, 17 May 2023
 14:01:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52f9de5d-f4bb-11ed-b229-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B1FduhauMfMoeukXmqFMEmhvto33x3Zj6lyJgLKqIENnWjpjc0Fv1h+n1xcsbUWKQQ50BV2cSAOJ4qtoOxjihi4xTSVNodfs7bOQs53Z1XlXQYIDAwBPWZkuN3wxfTmy9KmgxzgjLT8E+IeeHjvyb31JhFKGb3lJS3shs9WoUiWU2aTvyoDERQV5QZjNfHamxrC0ym1dZVzZbsjcSXMThyzSpMZFkQl3m+PA5kddz++P8pLf+wu0Ht2c8eKjthwZFNgeD0XdiQtJQTUOvRzTVkxlNOXWUpNkjql3J1jE22ntBYBJDHC/QVsQUqFPvepLq0z/hA7fS9xh2Fy4ysHuAA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jhY/zICz1UVzbcRZk5W3O1k5IkbOKELYWA2ukALRJ3Q=;
 b=WSlfy5ADOdgRgsrJ3+533xzW2VxW5LMvWTmq1t0MVWYkd440+iRkgTzAaQeAP3+F7+3XyXqq/NRgaCd08s9kulDhtrDQb52o2e32YJC/Kg+WQPsDyfLYJDojJ3pzKdBeX0FT40cOTQ2QKRaNOH0AjJOnTShEmrJQWggrn/CXUOaEZHNm4Xg3/uVJHebfz+Rqcgm5tVCKH0ZUjzsfcXSD8A9wn5PfkJUR9y7IHygMymyY7GrL/lp0LmkVsPvYSOAvds5AUhbIqSlYgKtgIsqYjDEHZOYxMgaBu3mNWUFnM5SBzv9DKAxRZ3YWiiRpfb9DfjH2T6J8Vm533XEjlz/aQA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jhY/zICz1UVzbcRZk5W3O1k5IkbOKELYWA2ukALRJ3Q=;
 b=Ec9bUv/v6F4afsDwd7CUqx9l5y6ZJ9mBWw7Od82CERUFMuVklPwPgJoA+618VzNEQdAo8EMDzhYQ1hf8WLbMK03ZuGMkuujxXbB60NrCdHlW3gO1ofXVuSof/h1U35YrPYMdO2askDD7NVVNrorWHkk0rhk4ERN3DcbD2h1bPQhClgv1xLcegGPwBbpfpJszm4t8xZl5wdk4iTDZoBsudJsMOu2ol6v4OZccuaNTQVvhOYYFxxbJVlgETBjr7tCxv4XkIrHI1ph05/JE9v7jR09iiRXalOTynt/6eke83UQ3yI2fVoERyZRYgTVEQbX5tM5AUv9MgIAZXrYwGVRMRg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d5eb0703-63be-0c5c-3fee-37e74e11dcf8@suse.com>
Date: Wed, 17 May 2023 16:01:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/4] x86/cpufeature: Rework {boot_,}cpu_has()
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230516145334.1271347-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0229.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b2::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBBPR04MB7657:EE_
X-MS-Office365-Filtering-Correlation-Id: 418178c6-86e2-4c03-5f0b-08db56df35f0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TmlapVOLUVoXxI2qlB3KwUhY/aTFEJConnZSiTtIUXWlo8mcGGvAIVz8SE6OP4w0DIwdM8YFI1s02iO0xy/PvsddAJA0BxgE/1XF8oASjPBabnz0k6z8U4fE//h1E9ZmAI9zO+sKSZYRyNtm50J1vTxFJfXyTGKU13W+THAiZzDfsMjVw8MApfx7olxwMZdIS8pLdTzKYUyIsdBhWDg/tOpQkwlgCtGMyToXIVerFdP+Kq6Iz2KQW9sQzhEbJpR4G36lGHEawxxlyhKAs7t68lmdiwROS122q+KNEooMPlz7T/d1/oP6qG2gTNRIpAAIoGfsgC27Gof3hCMW97Ss/H5YZxjaFmMAMy8VUrFXcrR8jC5ptk7tDIhGdo2zgPDemiFmBjwGAZjNYwxa2mH/AyAca8GnNYNMYnGlrOrtvJd7YF/lJ62P411l3fr1fhfAHi4KEy+sK93aeOunxUGi/cli81gOPo54qbf+6z4JDrQRUrnKGh6ySKi7IQGvi6WhSMAymvQfgZNJb/k/AiRjPN4W/HkcZxI4wWYKfvrttDp8gbr6h346i9qg6HEN4DW+cWWAEdXgZapG8HdZ3tbVSToLyBf62peN3Q2O4ctc5t4fL3JWHQrzlaJdoJAzZoZQ9Z6E1sHB661GMWmXCiqCDw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(136003)(346002)(39860400002)(366004)(396003)(451199021)(5660300002)(66476007)(41300700001)(66946007)(6916009)(36756003)(6486002)(6666004)(316002)(4326008)(66556008)(31696002)(86362001)(38100700002)(83380400001)(6512007)(6506007)(53546011)(2906002)(4744005)(31686004)(478600001)(2616005)(8676002)(186003)(26005)(54906003)(8936002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?YkpoWm0wUmpMQVNUQmVwMkwvWmh0ZkZMVDNJREpQYkYvRzRValBTV0JjVUZY?=
 =?utf-8?B?VVdJUVk5dXZqS2ZJSmUwN25WWXhBL0sra3BuYjh5RFB5Q1NSN0dpRnlzbTRD?=
 =?utf-8?B?R0lSaFZpMnhvV2dKS3F3VEYvSWpOT0NmUTFzRTBFZXVrZDgrMGJvenBjSjlI?=
 =?utf-8?B?TTA0SHg3Mk9VeTYzU2d4d0xWWGVKYWpxUXhsTWNDcmxGM1RQYjBhdzFUemFV?=
 =?utf-8?B?eHkzeUVuUndEMjhXR09hRjVEczdmbk1Hb2czemp0NktobU1zK09neTRNQ3Y5?=
 =?utf-8?B?VDlPVjlGME1qeWFvUGt1L3FNRTh0VlRNcXQxOEZ1a3RERUd6cmc2RFErUnc5?=
 =?utf-8?B?WDY2YXRLaFZyZmVEcFhsMzNERDF1WTBVTmF5MjhmVm9hOUJ1Qnhkd2NFdHZQ?=
 =?utf-8?B?ankxUHNNR1V5TGs1Z3liMTQ5V3MzTDBSdFlPZ2NRbTlYeTVEelVGb1VhREJG?=
 =?utf-8?B?TEJpQ1RIS1NlUDcvRFBZbmZmYXdCNmZkVnpnZGZoSHpjSXpPYlQvSnVwS0xy?=
 =?utf-8?B?M20vNk56anluNWpTWDg4VXJkL0Y0N0hvYURiSnlRSXRLYzRKd3dBNC9Ndkl5?=
 =?utf-8?B?QWxtRzB3Vm04TmJQVEcxZjJWUXpNWXgvbi9jNStwNFlYSjFtMDdoTE9yVVdm?=
 =?utf-8?B?NCs0MWFRN0tkZ2JobEhzK1YxSlQzZzdHb2dQcklObm9xc2pVMUJzM0xoR0N4?=
 =?utf-8?B?MFVtSUVMa2FHN1p1dFhaOGhaM2FhSGNCTzE3dmZ0VFkzdEZQRlFqK29Ka3du?=
 =?utf-8?B?T04rYnhjVGNNd1BPUk14WmlOeVRqcjAxbHdWUVA0S2I4bkdnbE4zTldlcmF1?=
 =?utf-8?B?MUlQTzF5b1J2U2x1a24rRVpvS1hMTHhrNjZWR3VKcTNDdHNxcS8vbnZUTmFn?=
 =?utf-8?B?TkNKNlhtclkvWWNiV00xbUEzWUF6SENCaG1zY2pHVEc3bzlFR3VVWWkwbEJr?=
 =?utf-8?B?RWJDZzZjVDFTdDI1aWcxcDF2bjU2L2lkMnRkUW1hUVViaGFJaVZ0Z3dGZ0V4?=
 =?utf-8?B?U1RSN1YvM2JZcGRqeXUrbWVKK2h3TFRuMHkvN2hBd0dUTUZOakJUdkVNbFVa?=
 =?utf-8?B?YURTcndHTWhKZFI4K0R2QzVmc1Z5cFdKMU1sK2FJdG42V0tNLzJWcWdpV1Fo?=
 =?utf-8?B?ODcvVzZBb2IzSmpEazNwQy9rRkxWVExPSmdJV1lVYzNkODA1TlN0ckVMZEVr?=
 =?utf-8?B?aW0rMGpwZzNoVW1UZEpueCs4MVN2ZzhkYmdJQ0RqSjlyTWJuRnRmUklac3l5?=
 =?utf-8?B?T0FsV1VhanJ4L0x5eDBUODBtYm5mTDR6eGJkN2FvdnhucklFbGNRcDVydDZm?=
 =?utf-8?B?MmZMazdyL08rd0xYdDBOeVdTRmN0NVVTVndYeFpjbytQZ3NURUd6aEdGd0Nw?=
 =?utf-8?B?UFhTRU9rWUpxYVMvZ3M1Mk1oNlJldHMyR2cyNXNoeXNWaDZkTE9GQkxxV21v?=
 =?utf-8?B?dWJNTklBRG0rNjNzKzhLR2QrVUpEcXZTUjN0R3k0SU15aHZNK0lJa2o2bGdo?=
 =?utf-8?B?SmZLQXpPUTY5YWU3UFRtM1ZPRVhUWWtpYytEdDVCb0dXWW54OFF6ak5aRDk2?=
 =?utf-8?B?L3ZDVUJ2RlRUNlV4c3BZTS8vTmJoTEh2YzBNbExYRzVkWFNmaWtVUEQxcUd1?=
 =?utf-8?B?UW5GZ2RlUndjaEdvaVNuZFBBYlRWU0xmazBRdFZJT3pZQVpUZGd6aEwwRHdC?=
 =?utf-8?B?NE9Hd0R6bFUyTnk1Q3gzZmZ0VnhsS2JUZWI3OFoyaFdHSmlxOUNkT2Y0Smxn?=
 =?utf-8?B?R2w0TjBLcGpGRnVGNXYxOXRwVFlxdGtuRjdtaVUrMURib2svSU5RSktnMjFB?=
 =?utf-8?B?UkdGOEpldkhWWFdqSEpqb2pHSitHZG1wM3ZXV0k0Y0g0REZEQU9jZTFqUkU1?=
 =?utf-8?B?NHFSQUN5RndEYndjSXg1Uk9VV1FqL0RQQ0dqUEpRMCtZUlRmNjFqRkRTVnZV?=
 =?utf-8?B?RUE1akVhRmlBd3Y5ZFVWWFFldGQxRUVwT0l2RS9idnlyWEZncmF5QkpaTm1M?=
 =?utf-8?B?Z0xsUU41dVdQazh0K1VLWDA3M2JCL05wRm1zWjN2OU0zZlRMRUltcm9yUzFr?=
 =?utf-8?B?NTRqMkpVWmRKdCs3QTdXZmJiNGVhNmlxNWdDeWtOQWE0S0I3L1ZISk9oS1du?=
 =?utf-8?Q?DXR5gtbjQCBhkJEGNQhvoOIma?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 418178c6-86e2-4c03-5f0b-08db56df35f0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 14:01:28.8646
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EcF7EENlYtywf7RaHROjwlReTebJJKv0kGYxE3DhpzVeI9zS8EETGx5uX/WP8T+B3R7f41lXaJs7xxVl/CJ3gg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR04MB7657

On 16.05.2023 16:53, Andrew Cooper wrote:
> --- a/xen/arch/x86/include/asm/cpufeature.h
> +++ b/xen/arch/x86/include/asm/cpufeature.h
> @@ -7,6 +7,7 @@
>  #define __ASM_I386_CPUFEATURE_H
>  
>  #include <xen/const.h>
> +#include <xen/stdbool.h>
>  #include <asm/cpuid.h>

This isn't needed up here, and ...

> @@ -17,7 +18,6 @@
>  #define X86_FEATURE_ALWAYS      X86_FEATURE_LM
>  
>  #ifndef __ASSEMBLY__
> -#include <xen/bitops.h>

... putting it here would (a) eliminate a header dependency for
assembly sources including this file (perhaps indirectly) and (b)
eliminate the risk of a build breakage if something was added to
that header which isn't valid assembly.

Preferably with the adjustment
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 17 14:20:00 2023
Message-ID: <f444ccc1-51be-e526-e8e2-7759a68a743d@suse.com>
Date: Wed, 17 May 2023 16:19:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] libelf: make L1_MFN_VALID note known
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

We still don't use it (in the tool stack), and its values (plural) also
aren't fetched correctly, but it is odd to continue to see the
hypervisor log "ELF: note: unknown (0xd)" when loading a Linux Dom0.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -117,6 +117,7 @@ elf_errorstatus elf_xen_parse_note(struc
         [XEN_ELFNOTE_FEATURES] = { "FEATURES", 1},
         [XEN_ELFNOTE_SUPPORTED_FEATURES] = { "SUPPORTED_FEATURES", 0},
         [XEN_ELFNOTE_BSD_SYMTAB] = { "BSD_SYMTAB", 1},
+        [XEN_ELFNOTE_L1_MFN_VALID] = { "L1_MFN_VALID", false },
         [XEN_ELFNOTE_SUSPEND_CANCEL] = { "SUSPEND_CANCEL", 0 },
         [XEN_ELFNOTE_MOD_START_PFN] = { "MOD_START_PFN", 0 },
         [XEN_ELFNOTE_PHYS32_ENTRY] = { "PHYS32_ENTRY", 0 },


From xen-devel-bounces@lists.xenproject.org Wed May 17 14:23:00 2023
Message-ID: <f994f67d-e0de-ad28-d418-1eb5a70bc1b8@suse.com>
Date: Wed, 17 May 2023 16:22:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: do away with HAVE_AS_NEGATIVE_TRUE
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

There's no real need for the associated probing - we can easily convert
to a uniform value without knowing the specific behavior (note also that
the respective comments weren't fully correct and have gone stale). All
we (need to) depend upon is unary ! producing 0 or 1 (and never -1).

For all present purposes yielding a value with all bits set is more
useful.

No difference in generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Unlike in C, there's also binary ! in assembly expressions, and even
binary !!. But those don't get in the way here.

--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -26,10 +26,6 @@ $(call as-option-add,CFLAGS,CC,"invpcid
 $(call as-option-add,CFLAGS,CC,"movdiri %rax$(comma)(%rax)",-DHAVE_AS_MOVDIR)
 $(call as-option-add,CFLAGS,CC,"enqcmd (%rax)$(comma)%rax",-DHAVE_AS_ENQCMD)
 
-# GAS's idea of true is -1.  Clang's idea is 1
-$(call as-option-add,CFLAGS,CC,\
-    ".if ((1 > 0) < 0); .error \"\";.endif",,-DHAVE_AS_NEGATIVE_TRUE)
-
 # Check to see whether the assmbler supports the .nop directive.
 $(call as-option-add,CFLAGS,CC,\
     ".L1: .L2: .nops (.L2 - .L1)$(comma)9",-DHAVE_AS_NOPS_DIRECTIVE)
--- a/xen/arch/x86/include/asm/alternative.h
+++ b/xen/arch/x86/include/asm/alternative.h
@@ -35,19 +35,19 @@ extern void alternative_branches(void);
 #define alt_repl_e(num)    ".LXEN%=_repl_e"#num
 #define alt_repl_len(num)  "(" alt_repl_e(num) " - " alt_repl_s(num) ")"
 
-/* GAS's idea of true is -1, while Clang's idea is 1. */
-#ifdef HAVE_AS_NEGATIVE_TRUE
-# define AS_TRUE "-"
-#else
-# define AS_TRUE ""
-#endif
+/*
+ * GAS's idea of true is sometimes 1 and sometimes -1, while Clang's idea was
+ * consistently 1 up to 6.x (it matches GAS's now).  Transform it to uniformly
+ * -1 (aka ~0).
+ */
+#define AS_TRUE "-!!"
 
-#define as_max(a, b) "(("a") ^ ((("a") ^ ("b")) & -("AS_TRUE"(("a") < ("b")))))"
+#define as_max(a, b) "(("a") ^ ((("a") ^ ("b")) & "AS_TRUE"(("a") < ("b"))))"
 
 #define OLDINSTR(oldinstr, padding)                              \
     ".LXEN%=_orig_s:\n\t" oldinstr "\n .LXEN%=_orig_e:\n\t"      \
     ".LXEN%=_diff = " padding "\n\t"                             \
-    "mknops ("AS_TRUE"(.LXEN%=_diff > 0) * .LXEN%=_diff)\n\t"    \
+    "mknops ("AS_TRUE"(.LXEN%=_diff > 0) & .LXEN%=_diff)\n\t"    \
     ".LXEN%=_orig_p:\n\t"
 
 #define OLDINSTR_1(oldinstr, n1)                                 \
--- a/xen/arch/x86/include/asm/alternative-asm.h
+++ b/xen/arch/x86/include/asm/alternative-asm.h
@@ -29,17 +29,17 @@
 #endif
 .endm
 
-/* GAS's idea of true is -1, while Clang's idea is 1. */
-#ifdef HAVE_AS_NEGATIVE_TRUE
-# define as_true(x) (-(x))
-#else
-# define as_true(x) (x)
-#endif
+/*
+ * GAS's idea of true is sometimes 1 and sometimes -1, while Clang's idea was
+ * consistently 1 up to 6.x (it matches GAS's now).  Transform it to uniformly
+ * -1 (aka ~0).
+ */
+#define as_true(x) (-!!(x))
 
 #define decl_orig(insn, padding)                  \
  .L\@_orig_s: insn; .L\@_orig_e:                  \
  .L\@_diff = padding;                             \
- mknops (as_true(.L\@_diff > 0) * .L\@_diff);     \
+ mknops (as_true(.L\@_diff > 0) & .L\@_diff);     \
  .L\@_orig_p:
 
 #define orig_len               (.L\@_orig_e       -     .L\@_orig_s)
@@ -49,7 +49,7 @@
 #define decl_repl(insn, nr)     .L\@_repl_s\()nr: insn; .L\@_repl_e\()nr:
 #define repl_len(nr)           (.L\@_repl_e\()nr  -     .L\@_repl_s\()nr)
 
-#define as_max(a, b)           ((a) ^ (((a) ^ (b)) & -as_true((a) < (b))))
+#define as_max(a, b)           ((a) ^ (((a) ^ (b)) & as_true((a) < (b))))
 
 .macro ALTERNATIVE oldinstr, newinstr, feature
     decl_orig(\oldinstr, repl_len(1) - orig_len)


From xen-devel-bounces@lists.xenproject.org Wed May 17 14:36:41 2023
Message-ID: <54560de4-1c55-c7f0-61ad-84d1e71e47f5@suse.com>
Date: Wed, 17 May 2023 16:36:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/4] x86/vtx: Remove opencoded MSR_ARCH_CAPS check
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230516145334.1271347-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 16.05.2023 16:53, Andrew Cooper wrote:
> MSR_ARCH_CAPS data is now included in featureset information.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
albeit, like in one of the patches in the earlier series, ...

> --- a/xen/arch/x86/include/asm/cpufeature.h
> +++ b/xen/arch/x86/include/asm/cpufeature.h
> @@ -183,6 +183,9 @@ static inline bool boot_cpu_has(unsigned int feat)
>  #define cpu_has_avx_vnni_int8   boot_cpu_has(X86_FEATURE_AVX_VNNI_INT8)
>  #define cpu_has_avx_ne_convert  boot_cpu_has(X86_FEATURE_AVX_NE_CONVERT)
>  
> +/* MSR_ARCH_CAPS 10A */
> +#define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)

... I'm not convinced that having the (unadorned) MSR index here
is really helpful. In particular, to people who don't know the
indexes by heart (possibly most, except you), the bare number may
look odd / misplaced there.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 17 14:38:31 2023
Message-ID: <d4e44806-098b-13f6-7ccd-0119a8a2d06e@suse.com>
Date: Wed, 17 May 2023 16:38:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 3/4] x86/tsx: Remove opencoded MSR_ARCH_CAPS check
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230516145334.1271347-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 16.05.2023 16:53, Andrew Cooper wrote:
> The current cpu_has_tsx_ctrl tristate is serving double purpose: to signal the
> first pass through tsx_init(), and the availability of MSR_TSX_CTRL.
> 
> Drop the variable, replacing it with a once boolean, and altering
> cpu_has_tsx_ctrl to come out of the feature information.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Wed May 17 14:38:57 2023
Message-ID: <1f3a3e0d-74de-de19-2d87-b24574297356@suse.com>
Date: Wed, 17 May 2023 16:38:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/4] x86/vtx: Remove opencoded MSR_ARCH_CAPS check
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-3-andrew.cooper3@citrix.com>
 <54560de4-1c55-c7f0-61ad-84d1e71e47f5@suse.com>
In-Reply-To: <54560de4-1c55-c7f0-61ad-84d1e71e47f5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.05.2023 16:36, Jan Beulich wrote:
> On 16.05.2023 16:53, Andrew Cooper wrote:
>> MSR_ARCH_CAPS data is now included in featureset information.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>

Oops - this was really meant to be
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan

> albeit, like in one of the patches in the earlier series, ...
> 
>> --- a/xen/arch/x86/include/asm/cpufeature.h
>> +++ b/xen/arch/x86/include/asm/cpufeature.h
>> @@ -183,6 +183,9 @@ static inline bool boot_cpu_has(unsigned int feat)
>>  #define cpu_has_avx_vnni_int8   boot_cpu_has(X86_FEATURE_AVX_VNNI_INT8)
>>  #define cpu_has_avx_ne_convert  boot_cpu_has(X86_FEATURE_AVX_NE_CONVERT)
>>  
>> +/* MSR_ARCH_CAPS 10A */
>> +#define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)
> 
> ... I'm not convinced that having the (unadorned) MSR index here
> is really helpful. In particular, to people who don't know the
> indexes by heart (possibly most, except you), the bare number may
> look odd / misplaced there.
> 
> Jan
> 



From xen-devel-bounces@lists.xenproject.org Wed May 17 14:47:23 2023
Message-ID: <1d06869b-1633-7794-c5c9-92d9c0885f19@suse.com>
Date: Wed, 17 May 2023 16:47:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 4/4] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230516145334.1271347-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0190.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7908:EE_
X-MS-Office365-Filtering-Correlation-Id: eac4f8a5-f15b-43c2-719e-08db56e5961e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: eac4f8a5-f15b-43c2-719e-08db56e5961e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 14:47:07.0471
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7908

On 16.05.2023 16:53, Andrew Cooper wrote:
> @@ -401,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
>          cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
>      if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
>          cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
> +    if ( cpu_has_arch_caps )
> +        rdmsrl(MSR_ARCH_CAPABILITIES, caps);

Why do you read the MSR again? I would have expected this to come out
of raw_cpu_policy now (and incrementally the CPUID pieces as well,
later on).

Apart from this, with all the uses further down gone, perhaps there's no
need for the raw value at all, if you used the bitfields in the
printk(). Which in turn raises the question of whether the #define-s in
msr-index.h are of much use anymore.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 17 15:12:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 15:12:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535954.834030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzIoh-00064g-OR; Wed, 17 May 2023 15:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535954.834030; Wed, 17 May 2023 15:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzIoh-00064Z-L7; Wed, 17 May 2023 15:11:51 +0000
Received: by outflank-mailman (input) for mailman id 535954;
 Wed, 17 May 2023 15:11:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzIog-00064T-Fu
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 15:11:50 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0600.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 24e7932d-f4c5-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 17:11:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7989.eurprd04.prod.outlook.com (2603:10a6:20b:28b::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Wed, 17 May
 2023 15:11:46 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Wed, 17 May 2023
 15:11:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24e7932d-f4c5-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <28ea6dfe-63da-f0bc-bd98-6d5ff1ef933d@suse.com>
Date: Wed, 17 May 2023 17:11:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] docs: fix xenstore-paths doc structure
Content-Language: en-US
To: Yann Dirson <yann.dirson@vates.fr>
References: <20230509102455.813997-1-yann.dirson@vates.fr>
Cc: xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>,
 Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230509102455.813997-1-yann.dirson@vates.fr>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0001.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7989:EE_
X-MS-Office365-Filtering-Correlation-Id: f3c03017-0c2b-48be-7273-08db56e907e0
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f3c03017-0c2b-48be-7273-08db56e907e0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 15:11:46.4650
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7989

On 09.05.2023 12:25, Yann Dirson wrote:
> We currently have "Per Domain Paths" as an empty section, whereas it
> looks like "General Paths" was not intended to include all the
> following sections.
> 
> Signed-off-by: Yann Dirson <yann.dirson@vates.fr>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Cc-ing the xenstore folks for an ack.

Jan

> --- a/docs/misc/xenstore-paths.pandoc
> +++ b/docs/misc/xenstore-paths.pandoc
> @@ -129,7 +129,7 @@ create writable subdirectories as necessary.
>  
>  ## Per Domain Paths
>  
> -## General Paths
> +### General Paths
>  
>  #### ~/vm = PATH []
>  



From xen-devel-bounces@lists.xenproject.org Wed May 17 15:18:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 15:18:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535958.834039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzIua-0006tT-CY; Wed, 17 May 2023 15:17:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535958.834039; Wed, 17 May 2023 15:17:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzIua-0006tM-9T; Wed, 17 May 2023 15:17:56 +0000
Received: by outflank-mailman (input) for mailman id 535958;
 Wed, 17 May 2023 15:17:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzIua-0006tC-0W
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 15:17:56 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0604.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::604])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ff5cefe5-f4c5-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 17:17:55 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB6845.eurprd04.prod.outlook.com (2603:10a6:803:138::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.34; Wed, 17 May
 2023 15:17:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Wed, 17 May 2023
 15:17:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff5cefe5-f4c5-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <93a84fcc-f821-7d7d-7fff-75d10526ac51@suse.com>
Date: Wed, 17 May 2023 17:17:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 0/3] officializing xenstore control/feature-balloon entry
Content-Language: en-US
To: Yann Dirson <yann.dirson@vates.fr>
Cc: xihuan.yang@citrix.com, min.li1@citrix.com, xen-devel@lists.xenproject.org
References: <20230510142011.1120417-1-yann.dirson@vates.fr>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230510142011.1120417-1-yann.dirson@vates.fr>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0143.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB6845:EE_
X-MS-Office365-Filtering-Correlation-Id: f081239e-46f3-4c03-cff1-08db56e9e2a3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f081239e-46f3-4c03-cff1-08db56e9e2a3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 15:17:53.5981
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6845

On 10.05.2023 16:20, Yann Dirson wrote:
> The main topic of this patch series is the ~/control/feature-balloon
> entry used by XAPI, prompted by the report of xe-guest-utilities on
> FreeBSD not being able to report the feature when using just libxl on
> the host.
> 
> The first patch is a bit off-topic, but included here because it fixes
> the text from which this feature description was adapted.
> 
> Yann Dirson (3):
>   docs: fix complex-and-wrong xenstore-path wording
>   docs: document ~/control/feature-balloon
>   libxl: create ~/control/feature-balloon
> 
>  docs/misc/xenstore-paths.pandoc | 16 ++++++++++------
>  tools/libs/light/libxl_create.c |  3 +++
>  2 files changed, 13 insertions(+), 6 deletions(-)

You may want to re-send this series with maintainers properly Cc-ed.
For the docs changes I wouldn't go by get_maintainer.pl, but rather
Cc the xenstore maintainers. (I guess this file ought to be added to
that section, on top of what was recently added there.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 17 15:19:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 15:19:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535964.834049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzIwG-0007a7-Uc; Wed, 17 May 2023 15:19:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535964.834049; Wed, 17 May 2023 15:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzIwG-0007a0-Qw; Wed, 17 May 2023 15:19:40 +0000
Received: by outflank-mailman (input) for mailman id 535964;
 Wed, 17 May 2023 15:19:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzIwG-0007Zr-05
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 15:19:40 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on2060d.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::60d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d63abca-f4c6-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 17:19:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB7684.eurprd04.prod.outlook.com (2603:10a6:20b:287::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Wed, 17 May
 2023 15:19:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%4]) with mapi id 15.20.6387.032; Wed, 17 May 2023
 15:19:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d63abca-f4c6-11ed-b229-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F5vcVeveCulmaT20Mq9/HRE4T1HPjtyjAXudQ6NRWhE=;
 b=etVsbcQe11YXhkNYMc7CPyGWHGQo7SnW3BFwi6zJRTgaHXZMf7fjk6oQadnSNaYYwrNckYFCwR+nLsI9x2w185g2vq/CnK6rquI7xebeEDhV1h+8uiAheW0fjKc+dUACUt6S5aA8RL1KSJQ2tJjk0OXzODl6aENdB2wxoSQwmKGK31+jdn+s/aZ30M43bxgxMux+01dv7hjiRQXvwiI9NbJzyOqAkUVHIKwPEopWBHbni8GeyK8kUPYLiBHJ7bJn5BtqEBVLMdT7QLVIGDtKmTxYKbj8LyTsHBVMAJAGIcHowZRe7bIuh2/MJrsxyi4EUlNlbP4MqCjv8FSeTaYlKA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <599f18f9-880a-c016-9e98-4090e135fdf6@suse.com>
Date: Wed, 17 May 2023 17:19:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] MAINTAINERS: add more xenstore files
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230428132756.8763-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230428132756.8763-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0050.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB7684:EE_
X-MS-Office365-Filtering-Correlation-Id: bf2afef6-cefb-4fea-b09a-08db56ea1fd7
X-LD-Processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bf2afef6-cefb-4fea-b09a-08db56ea1fd7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2023 15:19:36.1832
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qIX+2zXz2ehAltoSkqdnPVGfgMFVZCmkf2YHlnDI3FFoQ2lWJVwlv+oJ3jMr4gT1yJ1UZYUX7SQ38gi1DLr8mg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB7684

On 28.04.2023 15:27, Juergen Gross wrote:
> Xenstore consists of more files than just the tools/xenstore directory.
> 
> Add them to the XENSTORE block.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  MAINTAINERS | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 0e5eba2312..f2f1881b32 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -653,6 +653,11 @@ M:	Wei Liu <wl@xen.org>
>  M:	Juergen Gross <jgross@suse.com>
>  R:	Julien Grall <julien@xen.org>
>  S:	Supported
> +F:	tools/helpers/init-xenstore-domain.c
> +F:	tools/include/xenstore-compat/
> +F:	tools/include/xenstore.h
> +F:	tools/include/xenstore_lib.h
> +F:	tools/libs/store/
>  F:	tools/xenstore/

I wonder if, at the same time, xenstore-specific include files shouldn't
have been purged from LIBS.

Jan
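For illustration (an editor's sketch, not part of the patch), the F: entries
above follow the Linux-style MAINTAINERS convention: an entry ending in "/"
covers the whole subtree beneath it, while any other entry names a single
file. A minimal Python sketch of that matching rule, using the paths added
by the patch:

```python
# Sketch of MAINTAINERS "F:" matching (illustration only): a trailing
# slash covers the whole subtree, otherwise the entry names one file.
XENSTORE_FILES = [
    "tools/helpers/init-xenstore-domain.c",
    "tools/include/xenstore-compat/",
    "tools/include/xenstore.h",
    "tools/include/xenstore_lib.h",
    "tools/libs/store/",
    "tools/xenstore/",
]

def covered(path):
    """Return True if `path` falls under one of the F: entries above."""
    for entry in XENSTORE_FILES:
        if entry.endswith("/"):
            # Directory entry: matches anything below it.
            if path.startswith(entry):
                return True
        elif path == entry:
            # Plain file entry: exact match only.
            return True
    return False
```

With these entries, a patch touching anything under tools/libs/store/ or
tools/xenstore/ would now be routed to the XENSTORE maintainers as well.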


From xen-devel-bounces@lists.xenproject.org Wed May 17 15:56:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 15:56:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535969.834060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzJWB-00053H-HQ; Wed, 17 May 2023 15:56:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535969.834060; Wed, 17 May 2023 15:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzJWB-000539-D8; Wed, 17 May 2023 15:56:47 +0000
Received: by outflank-mailman (input) for mailman id 535969;
 Wed, 17 May 2023 15:56:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RAjd=BG=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1pzJW9-00052x-TB
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 15:56:45 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6b77f47f-f4cb-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 17:56:43 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2ad89c7a84fso9611931fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 17 May 2023 08:56:43 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 q26-20020a19a41a000000b004eb018fac57sm3406499lfc.191.2023.05.17.08.56.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 17 May 2023 08:56:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b77f47f-f4cb-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684339003; x=1686931003;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=yrXiaGV5bMoTvPTJVgpLRTHo1+SVMXCCTl8SFepVr6M=;
        b=FLaEdSNgpcIhsGuOwWgGedh1nc//l8SQal6Yay2U368yHRWyVbs4/ts3Wq07GbUOXe
         OztW+K7SmXc5++SbwkEX/yifkhPp33VsuEY3oqRJP5b1he/9nLEFcX3epy6BnN0HeDMO
         51wgv4YcG98PW2+FTILZsaIZN7xCsZ5qqAfk9LMRtWtjY2dxS6g0rGepAotXzakcOfn6
         RdhwLQFzozxHbWjhEWcmc2Tbt/uqd9W1RxpNnvQHZTPSKgP3BtqTtNvd8Y39bqSEfLaL
         1K4vWtwV1Lv5AWaF+6BMfvS9DAZMILE5GkYRhJvzaKvpplp0nmLCwYyqYnyGBnCVFy1E
         gwBQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684339003; x=1686931003;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=yrXiaGV5bMoTvPTJVgpLRTHo1+SVMXCCTl8SFepVr6M=;
        b=G8Zruvjp/RI2xzUfZ1EpZuAdhUM66yzkp0uFfmEyIDpdjWr+QSbnrop05qpq72VlMs
         HT169g5WSB9xT2sXIhfiS1orwHN62E6H395N+Dvkl8odwlXJ+hqYuS4gMABhSq7v1kHT
         UsC8shvnTX/2fcMjnSiLcLExrm1f0iOqylD2I7sdWFHupplYw3VcvkEWmPvXv7LqEMwN
         m6qMFUZi1D58ehl8ghy34hcmppZRmrcEENROjHL+ZdjfhSxRcwOUPab0duMBjSqHLSeQ
         D954FgUShLyW8M6jZnRnezx/Jo2Wshnd3xF64XAiI+A+yNEJ+XzXaxovHspWz4AzUVb4
         u7pg==
X-Gm-Message-State: AC+VfDxgmqDjEYOFjPvpl4DfsHJqQv3pQU3/zavFEG+D8WgddhEiEaJy
	9xjHFuLFXbTD3m91k109LMg=
X-Google-Smtp-Source: ACHHUZ4jOJt012HmLvZ1XfmUeqBxM2DLoU/u/sI42TxpSPdM4va+BfCdw1IqCLeQkmZeYcWJk8su+w==
X-Received: by 2002:ac2:44a8:0:b0:4f2:5e21:2e2a with SMTP id c8-20020ac244a8000000b004f25e212e2amr341624lfm.9.1684339002912;
        Wed, 17 May 2023 08:56:42 -0700 (PDT)
Message-ID: <eef94bf6e8b67a98ad175125e221c75aeb4ba013.camel@gmail.com>
Subject: Re: [PATCH v7 1/5] xen/riscv: add VM space layout
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Wed, 17 May 2023 18:56:41 +0300
In-Reply-To: <d1529686-ce06-a707-de9e-a4b28c9f2e02@suse.com>
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
	 <7b03dbf21718ed9c05859a629f4442167d74553c.1683824347.git.oleksii.kurochko@gmail.com>
	 <d1529686-ce06-a707-de9e-a4b28c9f2e02@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.1 (3.48.1-1.fc38) 
MIME-Version: 1.0

On Tue, 2023-05-16 at 17:42 +0200, Jan Beulich wrote:
> On 11.05.2023 19:09, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/include/asm/config.h
> > +++ b/xen/arch/riscv/include/asm/config.h
> > @@ -4,6 +4,42 @@
> >  #include <xen/const.h>
> >  #include <xen/page-size.h>
> >
> > +/*
> > + * RISC-V64 Layout:
> > + *
> > + * #ifdef SV39
>
> I did point you at x86'es similar #ifdef. Unlike here, there we use a
> symbol which actually has a meaning, allowing to spot this comment in
> e.g. grep output when looking for uses of that symbol. Hence here
> e.g.
>
> #ifdef RV_STAGE1_MODE == SATP_MODE_SV39
>
> ? (I would also recommend to use the same style as x86 does, such
> that
> the #ifdef and #endif look like normal directives [e.g. again in grep
> output], leaving aside that they're inside a comment.)
It would be better. Thanks.
>
> > + * From the riscv-privileged doc:
> > + *   When mapping between narrower and wider addresses,
> > + *   RISC-V zero-extends a narrower physical address to a wider size.
> > + *   The mapping between 64-bit virtual addresses and the 39-bit usable
> > + *   address space of Sv39 is not based on zero-extension but instead
> > + *   follows an entrenched convention that allows an OS to use one or
> > + *   a few of the most-significant bits of a full-size (64-bit) virtual
> > + *   address to quickly distinguish user and supervisor address regions.
> > + *
> > + * It means that:
> > + *   top VA bits are simply ignored for the purpose of translating to PA.
> > + *
> > + * ============================================================================
> > + *    Start addr    |   End addr       |  Size   | Slot       | area description
> > + * ============================================================================
> > + * FFFFFFFFC0800000 | FFFFFFFFFFFFFFFF | 1016 MB | L2 511     | Unused
> > + * FFFFFFFFC0600000 | FFFFFFFFC0800000 |    2 MB | L2 511     | Fixmap
> > + * FFFFFFFFC0200000 | FFFFFFFFC0600000 |    4 MB | L2 511     | FDT
> > + * FFFFFFFFC0000000 | FFFFFFFFC0200000 |    2 MB | L2 511     | Xen
> > + *                 ...                 |    1 GB | L2 510     | Unused
> > + * 0000003200000000 | 0000007f40000000 |  309 GB | L2 200-509 | Direct map
>
> The upper bound here is 0000007f80000000 afaict,
It should be 0000007f80000000. 0000007f40000000 is the start address of
slot 509.

> which then also makes
> the earlier gap 1Gb in size.
Do you mean that it would be better to write the start and end addresses
( 0000007f80000000 - 7FC0000000 ) of the L2 510 slot explicitly?

~ Oleksii
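The slot numbers being debated can be checked mechanically. A small Python
sketch (an editor's illustration, assuming standard Sv39 semantics: only the
low 39 VA bits take part in translation, and the L2 index VPN[2] is bits
38..30) reproduces the slot numbers from the layout table:

```python
# Sv39 address arithmetic (illustration, not code from the patch):
# the hardware translates only the low 39 bits of a virtual address;
# the L2 page-table index (VPN[2]) is bits 38..30, so each L2 slot
# maps 1 GiB.
SV39_VA_BITS = 39
L2_SHIFT = 30  # 2^30 bytes = 1 GiB per L2 slot

def sv39_usable_bits(va):
    """Drop the ignored top VA bits, keeping the 39 translated ones."""
    return va & ((1 << SV39_VA_BITS) - 1)

def l2_slot(va):
    """L2 page-table index (VPN[2]) for a 64-bit virtual address."""
    return sv39_usable_bits(va) >> L2_SHIFT
```

For the addresses in the table, l2_slot(0xFFFFFFFFC0000000) gives 511 (the
Xen mapping), l2_slot(0x0000003200000000) gives 200 (the direct-map start),
and l2_slot(0x0000007F40000000) gives 509, which is consistent with the
reply above: 0000007f40000000 is the start of slot 509, so the direct map's
end address is one slot higher, at 0000007f80000000.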


From xen-devel-bounces@lists.xenproject.org Wed May 17 16:17:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 16:17:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535975.834070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzJpp-0000J1-2n; Wed, 17 May 2023 16:17:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535975.834070; Wed, 17 May 2023 16:17:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzJpo-0000Iu-Uc; Wed, 17 May 2023 16:17:04 +0000
Received: by outflank-mailman (input) for mailman id 535975;
 Wed, 17 May 2023 16:17:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzJpn-0000Ik-9L; Wed, 17 May 2023 16:17:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzJpn-00052z-5Q; Wed, 17 May 2023 16:17:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzJpm-0000n0-L7; Wed, 17 May 2023 16:17:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzJpm-0002RC-KU; Wed, 17 May 2023 16:17:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YFjpXjEj1qgy4CQNdpaQ2/4+yZqV8kXcBH6dFyXxi5Q=; b=BIH3d1IRCGuMjVvfIdvzInfSog
	JzqqTFF7eumPAhmvIMt+/9+hdDExiBpNoswJqkekLPYP0Ic3qn32uEh0pset1QJkfwFZQn/A16+Wp
	oEFjXaz929aqSjHLjVC1bsrZgOwoe31EQ/5F3WI17iaU2ck0Un2dgLyAIgE6HYW0n2Rg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180687-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180687: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 May 2023 16:17:02 +0000

flight 180687 linux-linus real [real]
flight 180692 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180687/
http://logs.test-lab.xenproject.org/osstest/logs/180692/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   30 days
Failing since        180281  2023-04-17 06:24:36 Z   30 days   56 attempts
Testing same since   180664  2023-05-14 20:12:01 Z    2 days    6 attempts

------------------------------------------------------------
2389 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 302499 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 17 16:18:05 2023
Message-ID: <63f61da8191f6dd1ea233436873a3deb7cd5aab4.camel@gmail.com>
Subject: Re: [PATCH v7 2/5] xen/riscv: introduce setup_initial_pages
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Date: Wed, 17 May 2023 19:17:59 +0300
In-Reply-To: <6954c105-f081-5d9a-5a77-1865fdc07133@suse.com>
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
	 <632384e200b7de0fb4e2dae500a058c2a27628be.1683824347.git.oleksii.kurochko@gmail.com>
	 <6954c105-f081-5d9a-5a77-1865fdc07133@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.48.1 (3.48.1-1.fc38)
MIME-Version: 1.0

On Tue, 2023-05-16 at 18:02 +0200, Jan Beulich wrote:
> On 11.05.2023 19:09, Oleksii Kurochko wrote:
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/page.h
> > @@ -0,0 +1,58 @@
> > +#ifndef _ASM_RISCV_PAGE_H
> > +#define _ASM_RISCV_PAGE_H
> > +
> > +#include <xen/const.h>
> > +#include <xen/types.h>
> > +
> > +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
> > +
> > +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> > +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> > +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> > +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> > +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> > +
> > +#define PTE_VALID                   BIT(0, UL)
> > +#define PTE_READABLE                BIT(1, UL)
> > +#define PTE_WRITABLE                BIT(2, UL)
> > +#define PTE_EXECUTABLE              BIT(3, UL)
> > +#define PTE_USER                    BIT(4, UL)
> > +#define PTE_GLOBAL                  BIT(5, UL)
> > +#define PTE_ACCESSED                BIT(6, UL)
> > +#define PTE_DIRTY                   BIT(7, UL)
> > +#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
> > +
> > +#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
> > +#define PTE_TABLE                   (PTE_VALID)
> > +
> > +/* Calculate the offsets into the pagetables for a given VA */
> > +#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
> > +
> > +#define pt_index(lvl, va) (pt_linear_offset(lvl, (va)) & VPN_MASK)
>
> Nit: Please be consistent with parentheses. Here va doesn't need any,
> but if you added / kept them, then lvl should also gain them.
Sure. I'll fix that.

>
> > +/* Page Table entry */
> > +typedef struct {
> > +#ifdef CONFIG_RISCV_64
> > +    uint64_t pte;
> > +#else
> > +    uint32_t pte;
> > +#endif
> > +} pte_t;
> > +
> > +static inline pte_t paddr_to_pte(paddr_t paddr,
> > +                                 unsigned int permissions)
> > +{
> > +    return (pte_t) { .pte = (paddr >> PAGE_SHIFT) << PTE_PPN_SHIFT | permissions };
>
> Please parenthesize the << against the |. I have also previously
> recommended to avoid open-coding of things like PFN_DOWN() (or
> paddr_to_pfn(), if you like that better) or ...
I'll change it to paddr_to_pfn(). It sounds clearer than PFN_DOWN()
in this context.
Thanks for the reminder.

>
> > +}
> > +
> > +static inline paddr_t pte_to_paddr(pte_t pte)
> > +{
> > +    return ((paddr_t)pte.pte >> PTE_PPN_SHIFT) << PAGE_SHIFT;
>
> ... or pfn_to_paddr() (which here would avoid the misplaced cast).
pfn_to_paddr() would be better here, so I'll update it.

>
> > --- a/xen/arch/riscv/include/asm/processor.h
> > +++ b/xen/arch/riscv/include/asm/processor.h
> > @@ -69,6 +69,11 @@ static inline void die(void)
> >          wfi();
> >  }
> >  
> > +static inline void sfence_vma(void)
> > +{
> > +    __asm__ __volatile__ ("sfence.vma" ::: "memory");
> > +}
>
> Hmm, in switch_stack_and_jump() you use "asm volatile()" (no
> underscores). This is another thing which would be nice if it was
> consistent (possibly among headers as one group, and .c files as
> another - there may be reasons why one wants the underscore
> variants in headers, but the "easier" ones in .c files).
I will remove "__" to be consistent here and in the future.

>
> Also nit: Style (missing blanks inside the parentheses).
Thanks. Missed that.

>
> > +static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
> > +                                         unsigned long map_start,
> > +                                         unsigned long map_end,
> > +                                         unsigned long pa_start)
> > +{
> > +    unsigned int index;
> > +    pte_t *pgtbl;
> > +    unsigned long page_addr;
> > +
> > +    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
> > +    {
> > +        early_printk("(XEN) Xen should be loaded at 4k boundary\n");
> > +        die();
> > +    }
> > +
> > +    if ( map_start & ~XEN_PT_LEVEL_MAP_MASK(0) ||
> > +         pa_start & ~XEN_PT_LEVEL_MAP_MASK(0) )
>
> Nit: Please parenthesize the two & against the ||.
Sure. Thanks.

>
> > +    {
> > +        early_printk("(XEN) map and pa start addresses should be aligned\n");
> > +        /* panic(), BUG() or ASSERT() aren't ready now. */
> > +        die();
> > +    }
> > +
> > +    for ( page_addr = map_start;
> > +          page_addr < map_end;
> > +          page_addr += XEN_PT_LEVEL_SIZE(0) )
> > +    {
> > +        pgtbl = mmu_desc->pgtbl_base;
> > +
> > +        switch ( mmu_desc->num_levels )
> > +        {
> > +        case 4: /* Level 3 */
> > +            HANDLE_PGTBL(3);
> > +        case 3: /* Level 2 */
> > +            HANDLE_PGTBL(2);
> > +        case 2: /* Level 1 */
> > +            HANDLE_PGTBL(1);
> > +        case 1: /* Level 0 */
> > +            {
> > +                unsigned long paddr = (page_addr - map_start) + pa_start;
> > +                unsigned int permissions = PTE_LEAF_DEFAULT;
> > +                pte_t pte_to_be_written;
> > +
> > +                index = pt_index(0, page_addr);
> > +
> > +                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
> > +                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
> > +                    permissions =
> > +                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
> > +
> > +                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
> > +                    permissions = PTE_READABLE | PTE_VALID;
> > +
> > +                pte_to_be_written = paddr_to_pte(paddr, permissions);
> > +
> > +                if ( !pte_is_valid(pgtbl[index]) )
> > +                    pgtbl[index] = pte_to_be_written;
> > +                else
> > +                {
> > +                    if ( (pgtbl[index].pte ^ pte_to_be_written.pte) &
> > +                         ~(PTE_DIRTY | PTE_ACCESSED) )
>
> Nit: Style (indentation).
I'll add the missing space. Thanks.

~ Oleksii



From xen-devel-bounces@lists.xenproject.org Wed May 17 16:35:55 2023
Message-ID: <42cd2479-1eba-11c7-26d8-441045c230ed@citrix.com>
Date: Wed, 17 May 2023 17:35:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/4] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-5-andrew.cooper3@citrix.com>
 <1d06869b-1633-7794-c5c9-92d9c0885f19@suse.com>
In-Reply-To: <1d06869b-1633-7794-c5c9-92d9c0885f19@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 17/05/2023 3:47 pm, Jan Beulich wrote:
> On 16.05.2023 16:53, Andrew Cooper wrote:
>> @@ -401,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
>>          cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
>>      if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
>>          cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
>> +    if ( cpu_has_arch_caps )
>> +        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
> Why do you read the MSR again? I would have expected this to come out
> of raw_cpu_policy now (and incrementally the CPUID pieces as well,
> later on).

Consistency with the surrounding logic.

Also because the raw and host policies don't get sorted until much later
in boot.

> Apart from this, with all the uses further down gone, perhaps there's
> not even a need for the raw value, if you used the bitfields in the
> printk(). Which in turn raises the question whether the #define-s in
> msr-index.h are of much use then anymore.

One of the next phases of work is synthesizing these in the host policy
for CPUs which didn't receive microcode updates (for whatever reason).

There is a valid discussion for whether we ought to render the raw or
host info here (currently we do raw), but I'm not adjusting that in this
patch.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 17 17:00:40 2023
Message-ID: <2308d1ef-4928-bb60-88b0-319ac3370a53@citrix.com>
Date: Wed, 17 May 2023 18:00:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH RFC] xen: Enable -Wwrite-strings
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516203428.1441365-1-andrew.cooper3@citrix.com>
 <796b6671-c699-1bbf-b3a7-59c8fceeb625@suse.com>
In-Reply-To: <796b6671-c699-1bbf-b3a7-59c8fceeb625@suse.com>

On 17/05/2023 11:34 am, Jan Beulich wrote:
> On 16.05.2023 22:34, Andrew Cooper wrote:
>> Following on from the MISRA discussions.
>>
>> On x86, most are trivial.  The two slightly suspect cases are __hvm_copy()
>> where constness is dependent on flags,
> But do we ever pass string literals into there? I certainly would
> like to avoid the explicit casts to get rid of the const there.

The thing which trips it up is the constness of the cmdline param in the
construct_dom0() calltree.  It may have been tied up in the constness
from cmdline_cook() - I wasn't paying that much attention.

Irrespective, from a conceptual point of view, we ought to be able to
use the copy_to_* helpers from a const source.

>> and kextra in __start_xen() which only
>> compiles because of laundering the pointer through strstr().
> The sole string literal there looks to be the empty string in
> cmdline_cook(), which could be easily replaced, I think:
>
> static char * __init cmdline_cook(char *p, const char *loader_name)
> {
>     static char __initdata empty[] = "";
>
>     p = p ? : empty;
>
> Yet of course only if we were unhappy with the strstr() side effect.

It's quite possible we can do something better here.  This logic looks
unnecessarily complicated and fragile.

>
>> The one case which I can't figure out how to fix is EFI:
>>
>>   In file included from arch/x86/efi/boot.c:700:
>>   arch/x86/efi/efi-boot.h: In function ‘efi_arch_handle_cmdline’:
>>   arch/x86/efi/efi-boot.h:327:16: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
>>     327 |         name.s = "xen";
>>         |                ^
>>   cc1: all warnings being treated as errors
>>
>> Why do we have something that looks like this ?
>>
>>   union string {
>>       CHAR16 *w;
>>       char *s;
>>       const char *cs;
>>   };
> Because that was the least clutter (at respective use sites) that I
> could think of at the time. Looks like you could simply assign to
> name.cs, now that we have that field (iirc it wasn't there originally).
> Of course that's then only papering over the issue.

Well yes.  If it's only this one, we could use the same initconst trick
and delete the cs field, but I suspect the field's existence means it
would cause problems elsewhere.

>
>> --- a/xen/include/acpi/actypes.h
>> +++ b/xen/include/acpi/actypes.h
>> @@ -281,7 +281,7 @@ typedef acpi_native_uint acpi_size;
>>   */
>>  typedef u32 acpi_status;	/* All ACPI Exceptions */
>>  typedef u32 acpi_name;		/* 4-byte ACPI name */
>> -typedef char *acpi_string;	/* Null terminated ASCII string */
>> +typedef const char *acpi_string;	/* Null terminated ASCII string */
>>  typedef void *acpi_handle;	/* Actually a ptr to a NS Node */
> For all present uses that we have this change looks okay, but changing
> this header leaves me a little uneasy. At the same time I have no
> better suggestion.

I was honestly tempted to purge this typedef with prejudice.  Hiding
indirection like this is nothing but an obfuscation technique.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 17 17:05:16 2023
Message-ID: <b79b5b32-7bcb-b4cd-1594-e16aaff640e1@citrix.com>
Date: Wed, 17 May 2023 18:05:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86: do away with HAVE_AS_NEGATIVE_TRUE
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <f994f67d-e0de-ad28-d418-1eb5a70bc1b8@suse.com>
In-Reply-To: <f994f67d-e0de-ad28-d418-1eb5a70bc1b8@suse.com>

On 17/05/2023 3:22 pm, Jan Beulich wrote:
> There's no real need for the associated probing - we can easily convert
> to a uniform value without knowing the specific behavior (note also that
> the respective comments weren't fully correct and have gone stale). All
> we (need to) depend upon is unary ! producing 0 or 1 (and never -1).
>
> For all present purposes yielding a value with all bits set is more
> useful.
>
> No difference in generated code.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Unlike in C, there's also binary ! in assembly expressions, and even
> binary !!. But those don't get in the way here.

I had been wanting to do this for a while, but IMO a clearer expression
is to take ((x) & 1) to discard the sign.

It doesn't change any of the logic to use +1 (I don't think), and +1 is
definitely the more common way for a programmer to think of "true".

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 17 19:22:40 2023
	bh=cotWtwoEBc9omPlWdhAXiXoaigkbzGVwDbWMTLQ9FI0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mhbGZFlQo0Odftazgd8GWtAh4dMsnBplqJyvkaaYebadIj15LSJ1WjFFezq+o/DY9
	 aD3OLjg+YIWP0Hpbxyee3UEUXUR3wLS2uaUbJVhHEnfEqA8HqNz/+d5AmlGSWV0Mk/
	 4ylMAFKXC4gYg1rbnODjuSmGpDby9y/ubMcTAt4N8Fsd/qqpGMKs/WEPrhGL7Ibb/W
	 QOb5QESQZ+cjZ1HrAYKWEgw1jLmjdcKN/BB0bK/H00D/OnV96LlakV5+CwC6+BjIaq
	 6qnydeRgHMElLaCDOZt8cRFPDUTMJ80ZH7mBXCI11JEqem3Qmbiri2Ugjbf+2sTfhx
	 U2jfassxwHm5g==
Date: Wed, 17 May 2023 12:22:05 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Olaf Hering <olaf@aepfle.de>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v1] automation: allow to rerun build script
In-Reply-To: <20230517055722.4057-1-olaf@aepfle.de>
Message-ID: <alpine.DEB.2.22.394.2305171222000.128889@ubuntu-linux-20-04-desktop>
References: <20230517055722.4057-1-olaf@aepfle.de>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 17 May 2023, Olaf Hering wrote:
> Calling build twice in the same environment will fail because the
> directory 'binaries' was already created before. Use mkdir -p to ignore
> an existing directory and move on to the actual build.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/scripts/build | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 197d085f3e..9085cba352 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -36,7 +36,7 @@ fi
>  cp xen/.config xen-config
>  
>  # Directory for the artefacts to be dumped into
> -mkdir binaries
> +mkdir -p binaries
>  
>  if [[ "${CPPCHECK}" == "y" ]] && [[ "${HYPERVISOR_ONLY}" == "y" ]]; then
>      # Cppcheck analysis invokes Xen-only build.
> 
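
The behavioural difference the patch relies on can be seen in isolation
(a sketch, not the CI script itself):

```shell
# mkdir fails if the target already exists; mkdir -p succeeds either way,
# which is what makes the build script rerunnable.
workdir=$(mktemp -d)
mkdir "$workdir/binaries"                  # first run: creates the directory
mkdir "$workdir/binaries" 2>/dev/null || echo "plain mkdir fails on rerun"
mkdir -p "$workdir/binaries" && echo "mkdir -p succeeds on rerun"
rm -r "$workdir"
```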


From xen-devel-bounces@lists.xenproject.org Wed May 17 19:29:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 19:29:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536007.834130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzMpX-0003na-1r; Wed, 17 May 2023 19:28:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536007.834130; Wed, 17 May 2023 19:28:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzMpW-0003nT-VA; Wed, 17 May 2023 19:28:58 +0000
Received: by outflank-mailman (input) for mailman id 536007;
 Wed, 17 May 2023 19:28:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrYZ=BG=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pzMpW-0003nN-Ad
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 19:28:58 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 10786f62-f4e9-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 21:28:56 +0200 (CEST)
Received: by mail-ej1-x633.google.com with SMTP id
 a640c23a62f3a-965ab8ed1c0so183162266b.2
 for <xen-devel@lists.xenproject.org>; Wed, 17 May 2023 12:28:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10786f62-f4e9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684351735; x=1686943735;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=09axWUAu7hMSOcfm7NJQZY/JdptKb9Ru1nm1aT0b6uU=;
        b=lnBzNV2+n7/yBxyor14bcPPauELJvTQxJcVUUhwi2eiYFdIfWhQwAjzOkz//b0ZZHC
         smpjNANRklgztHTqHeOSkXtprhgWtu4oNacB2LomxTAjve2UBXxiVCHwTTqSkTQiloXV
         3HF4XAsiGVRkuYIH8BB9d+QZkAiJz11c+G6wHYkT8CwVgarsDoQXdLe0f6apRCbhx3PQ
         RWuLC/fv8cW4rNbaD48JavXSPk6x8irbPniJPLRCf13VeheXdU5yUHN31pkJ9zf35lLq
         So8jbEvuSgPcndw0tvWw6H+m3/1x01JmptzlF9BkkC1DXaGvIPUXvGFTs5DoHcE4YR3n
         cc3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684351735; x=1686943735;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=09axWUAu7hMSOcfm7NJQZY/JdptKb9Ru1nm1aT0b6uU=;
        b=cMNbyAfSIOVPjEeC45JFG1zJkFieS/UMr8JPkKRNS7Kkh8cmk17QV9T4JMDy8vumsa
         SI7hqjYteXqxu49YGiMLo2dqinU+3YyE/XDC7o5q3EiB2FqbiGnMDjTg3QiHgtY+blyh
         G1DsVlKyXsm+7Jbq+Utap2j2i0ui+x5ce7vLjQDsjHkB6B2vqtxSQtIj6FmHxdAYmoKM
         JsoeiahKymzkLFtwxHc5LDkfPl7NCH36+IH0DqkyQtdJSrmFM6u5oPy427e/H1cQ3svf
         TF12h18XhT6NAxYuAlXBMwckBB3pL2gdX7UC66ylGK8zuhuRZ5TXtcl8hkMWkw6p6pzp
         bswA==
X-Gm-Message-State: AC+VfDyWswtbL8ejUfSQEeGF/wZfPYu/nz5OFbLf1qY/itFDSCSQSPet
	DQ0fhF1uqabfVfe6ku2SVZLGQeDPNTKcNBtOrEadaW5O
X-Google-Smtp-Source: ACHHUZ56cD5Zzj2+JNa4rMz97JF+2l1nIbcNmzc1tXD+5Z41FVdagBv5FfxWAcLV87jmhfvbW3J1J2U/S24X9Dlc+Qk=
X-Received: by 2002:a17:907:948e:b0:96a:ec5c:685b with SMTP id
 dm14-20020a170907948e00b0096aec5c685bmr13701204ejc.29.1684351735305; Wed, 17
 May 2023 12:28:55 -0700 (PDT)
MIME-Version: 1.0
References: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
 <def382a6481a9d1bcc106200b971cd5b0f3d19c1.1683321183.git-series.marmarek@invisiblethingslab.com>
In-Reply-To: <def382a6481a9d1bcc106200b971cd5b0f3d19c1.1683321183.git-series.marmarek@invisiblethingslab.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 17 May 2023 15:28:42 -0400
Message-ID: <CAKf6xpuSg9vdxNejKYNix237ScPmo2WmF1np275f=czjT3jqAg@mail.gmail.com>
Subject: Re: [PATCH v2 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
To: =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 5, 2023 at 5:26 PM Marek Marczykowski-Górecki
<marmarek@invisiblethingslab.com> wrote:
>
> In some cases, only few registers on a page needs to be write-protected.

Maybe "In some cases, only part of a page needs to be write-protected"?

> Examples include USB3 console (64 bytes worth of registers) or MSI-X's
> PBA table (which doesn't need to span the whole table either), although
> in the latter case the spec forbids placing other registers on the same
> page. Current API allows only marking whole pages read-only,
> which sometimes may cover other registers that guest may need to
> write into.
>
> Currently, when a guest tries to write to an MMIO page on the
> mmio_ro_ranges, it's either immediately crashed on EPT violation - if
> that's HVM, or if PV, it gets #PF. In case of Linux PV, if access was
> from userspace (like, /dev/mem), it will try to fixup by updating page
> tables (that Xen again will force to read-only) and will hit that #PF
> again (looping endlessly). Both behaviors are undesirable if guest could
> actually be allowed the write.
>
> Introduce an API that allows marking part of a page read-only. Since
> sub-page permissions are not a thing in page tables (they are in EPT,
> but not granular enough), do this via emulation (or simply page fault
> handler for PV) that handles writes that are supposed to be allowed.
> The new subpage_mmio_ro_add() takes a start physical address and the
> region size in bytes. Both start address and the size need to be 8-byte
> aligned, as a practical simplification (allows using smaller bitmask,
> and a smaller granularity isn't really necessary right now).
> It will internally add relevant pages to mmio_ro_ranges, but if either
> start or end address is not page-aligned, it additionally adds that page
> to a list for sub-page R/O handling. The list holds a bitmask which
> dwords are supposed to be read-only and an address where page is mapped
> for write emulation - this mapping is done only on the first access. A
> plain list is used instead of more efficient structure, because there
> isn't supposed to be many pages needing this precise r/o control.
>
> The mechanism this API is plugged in is slightly different for PV and
> HVM. For both paths, it's plugged into mmio_ro_emulated_write(). For PV,
> it's already called for #PF on read-only MMIO page. For HVM however, EPT
> violation on p2m_mmio_direct page results in a direct domain_crash().
> To reach mmio_ro_emulated_write(), change how write violations for
> p2m_mmio_direct are handled - specifically, check if they relate to such
> partially protected page via subpage_mmio_write_accept() and if so, call
> hvm_emulate_one_mmio() for them too. This decodes what the guest is trying
> to write and finally calls mmio_ro_emulated_write(). Note that hitting EPT
> write violation for p2m_mmio_direct page can only happen if the page was
> on mmio_ro_ranges (see ept_p2m_type_to_flags()), so there is no need for
> checking that again.
> Both of those paths need an MFN to which guest tried to write (to check
> which part of the page is supposed to be read-only, and where
> the page is mapped for writes). This information currently isn't
> available directly in mmio_ro_emulated_write(), but in both cases it is
> already resolved somewhere higher in the call tree. Pass it down to
> mmio_ro_emulated_write() via new mmio_ro_emulate_ctxt.mfn field.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

> --- a/xen/arch/x86/include/asm/mm.h
> +++ b/xen/arch/x86/include/asm/mm.h
> @@ -522,9 +522,24 @@ extern struct rangeset *mmio_ro_ranges;
>  void memguard_guard_stack(void *p);
>  void memguard_unguard_stack(void *p);
>
> +/*
> + * Add more precise r/o marking for a MMIO page. Bytes range specified here
> + * will still be R/O, but the rest of the page (nor marked as R/O via another

s/nor/not/

> + * call) will have writes passed through.
> + * The start address and the size must be aligned to SUBPAGE_MMIO_RO_ALIGN.
> + *

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c

> +            /*
> +             * We don't know the write seize at this point yet, so it could be

s/seize/size/

> +             * an unalligned write, but accept it here anyway and deal with it

s/unalligned/unaligned/

> +             * later.
> +             */

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 17 19:31:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 19:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536011.834140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzMs5-0005IS-FC; Wed, 17 May 2023 19:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536011.834140; Wed, 17 May 2023 19:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzMs5-0005IL-Bk; Wed, 17 May 2023 19:31:37 +0000
Received: by outflank-mailman (input) for mailman id 536011;
 Wed, 17 May 2023 19:31:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzMs4-0005Hx-0e; Wed, 17 May 2023 19:31:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzMs3-0002g1-Pj; Wed, 17 May 2023 19:31:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzMs3-0000mV-7R; Wed, 17 May 2023 19:31:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzMs3-00068C-73; Wed, 17 May 2023 19:31:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rKbRHMV8Jhlkw0gB+RFaCCB1EVrzHaheDqYxUFuckYc=; b=aeCHdyspHfGo/9XvIhEhVMoy5E
	slBzs9dcLzyP6ddFfGzvE0KSjG1EqcPnnRhOVJ8AjZlK9aDAtcdk8Ysp2D3xQlSoYZtxLVtjF57Ce
	60dvIH67HpIpnfPJeiLBTiu1W74EGgaqckWAR52UgR9/NbQbDdWffwB4ywGq3kVyi35U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180688-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180688: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=b10bc8f7ab6f9986ccc54ba04fc5b3bad7576be6
X-Osstest-Versions-That:
    libvirt=4a681995bc9f0ba5df779c392b7bebf3470a3f9a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 May 2023 19:31:35 +0000

flight 180688 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180688/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180642
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180642
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180642
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              b10bc8f7ab6f9986ccc54ba04fc5b3bad7576be6
baseline version:
 libvirt              4a681995bc9f0ba5df779c392b7bebf3470a3f9a

Last test of basis   180642  2023-05-13 04:21:42 Z    4 days
Testing same since   180688  2023-05-17 04:20:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   4a681995bc..b10bc8f7ab  b10bc8f7ab6f9986ccc54ba04fc5b3bad7576be6 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 17 19:37:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 19:37:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536019.834150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzMxj-0006D7-7E; Wed, 17 May 2023 19:37:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536019.834150; Wed, 17 May 2023 19:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzMxj-0006D0-49; Wed, 17 May 2023 19:37:27 +0000
Received: by outflank-mailman (input) for mailman id 536019;
 Wed, 17 May 2023 19:37:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO9T=BG=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pzMxh-0006Cu-SM
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 19:37:25 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3e88cb1d-f4ea-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 21:37:23 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 51BA764AAF;
 Wed, 17 May 2023 19:37:22 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 711D6C433A0;
 Wed, 17 May 2023 19:37:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e88cb1d-f4ea-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684352241;
	bh=io9ZhUNsyuLYL/OJGkk5/ml8C/igUfusGusXtLP/Ru0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=q8XOc38b+qot8A+npMw1uhdPcaTWhAxlrVY2pwNzQ/nnVDnK47eRmQ/Zp1w5TxAdT
	 kFisQwOf/pLAhUQgMl6YHRqoXaRR9LBRuSiB9Gowk9AKVY9/wJYVXUVnUl5tU8Jp76
	 Pf+Iq2xc9RTDizPE/3UfSn/sEi0WvZaVjPMuk73n4R9OzF0oligASc8L3GDQNYF9fe
	 Wm3m98ZrnFh/v91n60U3zHrhUYxDEu8kEOYGDTkuXoIMRvcTfKGoWriOLezS1WxwgS
	 ABF2QcVkyb9OGDYh4N2XvM8vTiR4VcgkReuAY9cr1dTv4OorOWcStetavKWrS8JB2T
	 uYhuLhPOT/zHw==
Date: Wed, 17 May 2023 12:37:18 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <Luca.Fancellu@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] xen/misra: diff-report.py: add report patching
 feature
In-Reply-To: <3A4E52B2-B33D-46FC-A1DB-4935AF06CC49@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305171232440.128889@ubuntu-linux-20-04-desktop>
References: <20230504142523.2989306-1-luca.fancellu@arm.com> <20230504142523.2989306-3-luca.fancellu@arm.com> <alpine.DEB.2.22.394.2305161827050.128889@ubuntu-linux-20-04-desktop> <3A4E52B2-B33D-46FC-A1DB-4935AF06CC49@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1258655336-1684352241=:128889"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1258655336-1684352241=:128889
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 17 May 2023, Luca Fancellu wrote:
> > On 17 May 2023, at 02:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Thu, 4 May 2023, Luca Fancellu wrote:
> >> Add a feature to the diff-report.py script that improves the comparison
> >> between two analysis reports, one from a baseline codebase and the other
> >> from the changes applied to the baseline.
> >> 
> >> The comparison between reports of different codebases is an issue because
> >> entries in the baseline could have moved in position due to the addition
> >> or deletion of unrelated lines, or can disappear because of deletion of
> >> the line of interest, making the comparison between two revisions of the
> >> code harder.
> >> 
> >> Having a baseline report, a report of the codebase with the changes
> >> called "new report" and a git diff format file that describes the
> >> changes happened to the code from the baseline, this feature can
> >> understand which entries from the baseline report are deleted or shifted
> >> in position due to changes to unrelated lines and can modify them as
> >> they will appear in the "new report".
> >> 
> >> Having the "patched baseline" and the "new report", now it's simple
> >> to make the diff between them and print only the entry that are new.
> >> 
> >> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> > 
> > This is an amazing work!! Thanks Luca!
> > 
> > I am having issues trying the new patch feature. After applying this
> > patch I get:
> > 
> > sstabellini@ubuntu-linux-20-04-desktop:/local/repos/xen-upstream/xen$ ./scripts/diff-report.py
> > Traceback (most recent call last):
> >  File "./scripts/diff-report.py", line 5, in <module>
> >    from xen_analysis.diff_tool.debug import Debug
> >  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/debug.py", line 4, in <module>
> >    from .report import Report
> >  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/report.py", line 4, in <module>
> >    from .unified_format_parser import UnifiedFormatParser, ChangeSet
> >  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 56, in <module>
> >    class UnifiedFormatParser:
> >  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 57, in UnifiedFormatParser
> >    def __init__(self, args: str | list) -> None:
> > TypeError: unsupported operand type(s) for |: 'type' and 'type'
> > 
> > Also got a similar error elsewhere:
> > 
> > sstabellini@ubuntu-linux-20-04-desktop:/local/repos/xen-upstream/xen$ ./scripts/diff-report.py --patch ~/p/1 -b /tmp/1 -r /tmp/1
> > Traceback (most recent call last):
> >  File "./scripts/diff-report.py", line 127, in <module>
> >    main(sys.argv[1:])
> >  File "./scripts/diff-report.py", line 102, in main
> >    diffs = UnifiedFormatParser(diff_source)
> >  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 79, in __init__
> >    self.__parse()
> >  File "/local/repos/xen-upstream/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py", line 94, in __parse
> >    def parse_diff_header(line: str) -> ChangeSet | None:
> > TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
> > 
> > My Python is 2.7.18
> > 
> > 
> > Am I understanding correctly that one should run the scan for the
> > baseline (saving the result somewhere), then apply the patch, and run
> > the scan again? Finally, one should call diff-report.py passing -b
> > baseline-report -r new-report --patch the-patch-applied?
> 
> Hi Stefano,
> 
> Yes indeed, that procedure is correct. I think the error you are seeing comes from the Python version;
> I am using Python 3, version 3.10.6.
> 
> The error seems to come from the Python annotations. I’m surprised you didn’t hit it when testing the first patch;
> did you use python2 for that?
> 
> Is it a problem if I developed the tool with python3 usage in mind?

Hi Luca,

It is not a problem per se if the script requires python3, but then we
should check for the python version at the beginning of the script and
fail explicitly with a nice error message if python < 3.
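A minimal version guard along those lines (a sketch, not the actual script's code; the function name is made up) could look like:

```python
import sys

def require_python3():
    """Fail with a clear message instead of a confusing TypeError later.

    This must stay syntactically valid under Python 2 (no f-strings, no
    annotations), otherwise the interpreter dies before reaching it.
    """
    if sys.version_info[0] < 3:
        sys.stderr.write("error: this script requires Python 3, "
                         "found %d.%d\n" % sys.version_info[:2])
        sys.exit(1)

require_python3()  # call near the top of diff-report.py
```

The key point is that the guard itself uses only Python 2 compatible syntax, so a Python 2 interpreter reaches the check and prints the message rather than crashing on an annotation.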

I am fine if you want to proceed that way, but if the only issue is the
annotations, I suggest it might be easier to remove them; then you also
get the benefit of python2 compatibility. I'll leave the choice to you.
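For reference, the `str | list` spelling of a union (PEP 604) is only accepted at runtime on Python 3.10+; on older Python 3 the same hints can be kept with `typing.Union`/`Optional` (or deferred with `from __future__ import annotations` on 3.7+). A hypothetical illustration, not the actual parser code:

```python
from typing import List, Optional, Union

def normalize_args(args: Union[str, List[str]]) -> List[str]:
    """Accept a single diff file name or a list of them."""
    return [args] if isinstance(args, str) else list(args)

def header_or_none(line: str) -> Optional[str]:
    """Return the new-file name from a 'diff --git' header line, else None."""
    parts = line.split()
    return parts[-1] if line.startswith("diff --git") else None
```

This keeps the type information while evaluating only constructs that exist back to Python 3.5, so importing the module no longer raises `TypeError` on pre-3.10 interpreters.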

Either way, if you are OK with it, I think you should add a new entry to
the MAINTAINERS file to cover the xen analysis scripts:

xen/scripts/xen_analysis
xen/scripts/xen-analysis.py
xen/scripts/diff-report.py
--8323329-1258655336-1684352241=:128889--


From xen-devel-bounces@lists.xenproject.org Wed May 17 19:41:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 19:41:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536023.834160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzN1O-0007pk-Na; Wed, 17 May 2023 19:41:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536023.834160; Wed, 17 May 2023 19:41:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzN1O-0007pd-KS; Wed, 17 May 2023 19:41:14 +0000
Received: by outflank-mailman (input) for mailman id 536023;
 Wed, 17 May 2023 19:41:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO9T=BG=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pzN1N-0007pS-GV
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 19:41:13 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c679572c-f4ea-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 21:41:11 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 715236453A;
 Wed, 17 May 2023 19:41:10 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3093C433EF;
 Wed, 17 May 2023 19:41:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c679572c-f4ea-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684352469;
	bh=NG9a8/HxZiREAt48f4XHXBbPjBYheg34T+R3HqxYF04=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=KrIqwaxHjOrDIPcl5GFZ4nH02dTUeLfqkP1kFzv/oso+V9602cACvTve631+G0rWd
	 qtmhyeex4Ww16mQqeXJe4I0VdROcdDIphtG8zWUwkGE3VVRJZZ7LU6zfFzPwYK8lB/
	 fPBBGX+YK0Zdziz8IjNrQ7PzM9+6teg4K2OYcAg+f4cPl0CHM0mKhaY5eaKYbHXlD1
	 9k4SV00HQBe9mmYDJwkCc8m+jajh5QE9FlFrVGp0Uh9NpFSbFGrQnduYLqVJb+5HOo
	 6zT00Pg9RJ4tOu/96uMrG1I/nGlSr1tYtyyNU/nvtjXO6MupmAlnFlhPFd+Ip+Gvxv
	 LhBFq2V8MuPgA==
Date: Wed, 17 May 2023 12:41:06 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Arnd Bergmann <arnd@kernel.org>
cc: Juergen Gross <jgross@suse.com>, Arnd Bergmann <arnd@arndb.de>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, 
    Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, 
    x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Jan Beulich <jbeulich@suse.com>, Peter Zijlstra <peterz@infradead.org>, 
    David Woodhouse <dwmw@amazon.co.uk>, xen-devel@lists.xenproject.org, 
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH] xen: xen_debug_interrupt prototype to global header
In-Reply-To: <20230517124525.929201-1-arnd@kernel.org>
Message-ID: <alpine.DEB.2.22.394.2305171240550.128889@ubuntu-linux-20-04-desktop>
References: <20230517124525.929201-1-arnd@kernel.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 17 May 2023, Arnd Bergmann wrote:
> From: Arnd Bergmann <arnd@arndb.de>
> 
> The xen_debug_interrupt() function is only called on x86, which has a
> prototype in an architecture specific header, but the definition also
> exists on others, where the lack of a prototype causes a W=1 warning:
> 
> drivers/xen/events/events_2l.c:264:13: error: no previous prototype for 'xen_debug_interrupt' [-Werror=missing-prototypes]
> 
> Move the prototype into a global header instead to avoid this warning.
> 
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  arch/x86/xen/xen-ops.h | 2 --
>  include/xen/events.h   | 3 +++
>  2 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index 84a35ff1e0c9..0f71ee3fe86b 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -72,8 +72,6 @@ void xen_restore_time_memory_area(void);
>  void xen_init_time_ops(void);
>  void xen_hvm_init_time_ops(void);
>  
> -irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
> -
>  bool xen_vcpu_stolen(int vcpu);
>  
>  void xen_vcpu_setup(int cpu);
> diff --git a/include/xen/events.h b/include/xen/events.h
> index 44c2855c76d1..ac1281c5ead6 100644
> --- a/include/xen/events.h
> +++ b/include/xen/events.h
> @@ -138,4 +138,7 @@ int xen_test_irq_shared(int irq);
>  
>  /* initialize Xen IRQ subsystem */
>  void xen_init_IRQ(void);
> +
> +irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
> +
>  #endif	/* _XEN_EVENTS_H */
> -- 
> 2.39.2
> 


From xen-devel-bounces@lists.xenproject.org Wed May 17 19:49:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 19:49:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536027.834170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzN8l-0000KY-DV; Wed, 17 May 2023 19:48:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536027.834170; Wed, 17 May 2023 19:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzN8l-0000KR-A5; Wed, 17 May 2023 19:48:51 +0000
Received: by outflank-mailman (input) for mailman id 536027;
 Wed, 17 May 2023 19:48:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO9T=BG=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pzN8k-0000KA-4G
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 19:48:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d6f0189f-f4eb-11ed-b22a-6b7b168915f2;
 Wed, 17 May 2023 21:48:48 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 77BC662C1F;
 Wed, 17 May 2023 19:48:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0380AC433EF;
 Wed, 17 May 2023 19:48:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6f0189f-f4eb-11ed-b22a-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684352926;
	bh=YPn6gNnZP/Y9kFaYu3q520M3Lj15uk9Hs9CIbaEEMYs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=T6Dn1ynk7hBxe2VOsJpElfPC2UMuZceV+JJGEhKZp+dppAWHp4m2tFYHdWOKubsta
	 v7A6ZQis69vbs6xGOY8iCjXSY24wiJk6Git2XI13tZvGqy8g2Ls1uTEL7sBRkedW3k
	 v3DEb7JiYLk33qJrwBCxoF9zt8PSLbdBmx5ZjqqGXt8Kjcae5OnXMpKRWXHx3CN+rC
	 uByZ+cA0Bt6sA9O+xV66QL72JHAkjs/30Wb0uXxMRogkftAieHgyWwDGAg8q408xSR
	 hA0Sofb/0iFqjMXlqOVjAbgVl0OMIgw4j7l8/HrGzCO4PAGdiSBcePL54ka9YEdYcD
	 kduzVO233J8vg==
Date: Wed, 17 May 2023 12:48:43 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michael Tokarev <mjt@tls.msk.ru>
cc: Chuck Zmudzinski <brchuckz@aol.com>, qemu-devel@nongnu.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>, 
    Paolo Bonzini <pbonzini@redhat.com>, 
    Richard Henderson <richard.henderson@linaro.org>, 
    Eduardo Habkost <eduardo@habkost.net>, 
    "Michael S. Tsirkin" <mst@redhat.com>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    Igor Mammedov <imammedo@redhat.com>, xen-devel@lists.xenproject.org, 
    qemu-stable@nongnu.org
Subject: Re: [PATCH] xen/pt: fix igd passthrough for pc machine with xen
 accelerator
In-Reply-To: <2b07603f-6623-9fbf-15df-a86849d9aca3@msgid.tls.msk.ru>
Message-ID: <alpine.DEB.2.22.394.2305171246020.128889@ubuntu-linux-20-04-desktop>
References: <a304213d26506b066021f803c39b87f6a262ed86.1675820085.git.brchuckz.ref@aol.com> <a304213d26506b066021f803c39b87f6a262ed86.1675820085.git.brchuckz@aol.com> <986d9eca-5fab-cacb-05c7-b85e4d58665b@msgid.tls.msk.ru> <47ed3568-2127-a865-4e4f-ff5902484231@aol.com>
 <2b07603f-6623-9fbf-15df-a86849d9aca3@msgid.tls.msk.ru>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 17 May 2023, Michael Tokarev wrote:
> 17.05.2023 12:47, Chuck Zmudzinski wrote:
> > On 5/17/2023 2:39 AM, Michael Tokarev wrote:
> > > 08.02.2023 05:03, Chuck Zmudzinski wrote:...
> > > > Fixes: 998250e97661 ("xen, gfx passthrough: register host bridge
> > > > specific to passthrough")
> > > > Signed-off-by: Chuck Zmudzinski <brchuckz@aol.com>
> > > 
> > > Has this change been forgotten?  Is it not needed anymore?
> > 
> > Short answer:
> > 
> > After 4f67543b ("xen/pt: reserve PCI slot 2 for Intel igd-passthru ") was
> > applied, I was inclined to think this change was not needed anymore, but
> > it would not hurt to add it as well, and by now I think it might actually
> > be more correct to do so.
> ...
> 
> Well, there were two machines with broken IGD passthrough in xen, now
> there's one machine with broken IGD passthrough. Let's fix them all :)
> Note this patch is tagged -stable as well.
> 
> > If you want to add this change also, let's make sure recent changes to the
> > xen header files do not require the patch to be rebased before committing
> > it.
> 
> It doesn't look like it requires rebasing - I just built 8.0 and current
> master qemu with it applied.  I haven't tried the actual IGD passthrough,
> though.
> 
> It just needs to be picked up the usual way, as all other qemu changes go
> in.

Hi Michael,

I am OK with this patch and acked it. However, I think it also needs an
ack from one of the i386 maintainers, Michael T or Marcel.


From xen-devel-bounces@lists.xenproject.org Wed May 17 21:00:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 21:00:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536033.834180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzOFn-0002Zy-6p; Wed, 17 May 2023 21:00:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536033.834180; Wed, 17 May 2023 21:00:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzOFn-0002Zr-2P; Wed, 17 May 2023 21:00:11 +0000
Received: by outflank-mailman (input) for mailman id 536033;
 Wed, 17 May 2023 21:00:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO9T=BG=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pzOFl-0002ZY-Hg
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 21:00:09 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ccb07e24-f4f5-11ed-8611-37d641c3527e;
 Wed, 17 May 2023 23:00:06 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 326A264A23;
 Wed, 17 May 2023 21:00:05 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9E350C433EF;
 Wed, 17 May 2023 21:00:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccb07e24-f4f5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684357204;
	bh=yZxdHCXbGtxH9gvDJ8pIKwDgo+h1oCo9Ngu/FosH7NU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nTh4qtbpQIlH5CaYp3IfQjRUOKyuD1/H2ms/+G0UtnkVfBhynLzkDgZs69/RbQKdZ
	 IAfRC9ZcBR1mjrQFwrbpPJgKRcoEMIiNTVUAriCCNKEVqH0I14K4NBorO2oRxoVCqL
	 2mCMuuHD7yv1qhsnuukc4gFL/hGeU97SrcOczZ2Qy2M9+/iN49FzG1iJSe3K7YZbKo
	 ZAsr72U4VzMZEEleVbG6TPiU96wYHwDhSJ4YbAsjIyHQOgWfIoP4gyDFDJEZWxN9Zg
	 3ucIsPQDap1+KWvxIt597enaFnSGpdzYAv/vk2jWG4q7bMbOI6T9WxBppp3KPmgGEZ
	 L9JKliFjApmIQ==
Date: Wed, 17 May 2023 14:00:01 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    jbeulich@suse.com, xen-devel@lists.xenproject.org, 
    Xenia.Ragiadakou@amd.com, Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
In-Reply-To: <ZGSTGIMh6qvCLZSr@Air-de-Roger>
Message-ID: <alpine.DEB.2.22.394.2305171354590.128889@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop> <20230513011720.3978354-2-sstabellini@kernel.org> <ZGH+5OKqnjTjUr/F@Air-de-Roger> <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop> <ZGNLArlA0Yei4Fr0@Air-de-Roger>
 <alpine.DEB.2.22.394.2305161522480.128889@ubuntu-linux-20-04-desktop> <ZGSTGIMh6qvCLZSr@Air-de-Roger>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1353024790-1684357126=:128889"
Content-ID: <alpine.DEB.2.22.394.2305171358480.128889@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1353024790-1684357126=:128889
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305171358481.128889@ubuntu-linux-20-04-desktop>

On Wed, 17 May 2023, Roger Pau Monné wrote:
> On Tue, May 16, 2023 at 04:34:09PM -0700, Stefano Stabellini wrote:
> > On Tue, 16 May 2023, Roger Pau Monné wrote:
> > > On Mon, May 15, 2023 at 05:11:25PM -0700, Stefano Stabellini wrote:
> > > > On Mon, 15 May 2023, Roger Pau Monné wrote:
> > > > > On Fri, May 12, 2023 at 06:17:20PM -0700, Stefano Stabellini wrote:
> > > > > > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > > > 
> > > > > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruption of
> > > > > > the tables in the guest. Instead, copy the tables to Dom0.
> > > > > > 
> > > > > > This is a workaround.
> > > > > > 
> > > > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > > > ---
> > > > > > As mentioned in the cover letter, this is a RFC workaround as I don't
> > > > > > know the cause of the underlying problem. I do know that this patch
> > > > > > solves what would be otherwise a hang at boot when Dom0 PVH attempts to
> > > > > > parse ACPI tables.
> > > > > 
> > > > > I'm unsure how safe this is for native systems, as it's possible for
> > > > > firmware to modify the data in the tables, so copying them would
> > > > > break that functionality.
> > > > > 
> > > > > I think we need to get to the root cause that triggers this behavior
> > > > > on QEMU.  Is it the table checksum that fail, or something else?  Is
> > > > > there an error from Linux you could reference?
> > > > 
> > > > I agree with you, but so far I haven't managed to get to the root
> > > > of the issue. Here is what I know. These are the logs of a successful
> > > > boot using this patch:
> > > > 
> > > > [   10.437488] ACPI: Early table checksum verification disabled
> > > > [   10.439345] ACPI: RSDP 0x000000004005F955 000024 (v02 BOCHS )
> > > > [   10.441033] ACPI: RSDT 0x000000004005F979 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> > > > [   10.444045] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> > > > [   10.445984] ACPI: FACP 0x000000004005FA65 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
> > > > [   10.447170] ACPI BIOS Warning (bug): Incorrect checksum in table [FACP] - 0x67, should be 0x30 (20220331/tbprint-174)
> > > > [   10.449522] ACPI: DSDT 0x000000004005FB19 00145D (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
> > > > [   10.451258] ACPI: FACS 0x000000004005FAD9 000040
> > > > [   10.452245] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > > > [   10.452389] ACPI: Reserving FACP table memory at [mem 0x4005fa65-0x4005fad8]
> > > > [   10.452497] ACPI: Reserving DSDT table memory at [mem 0x4005fb19-0x40060f75]
> > > > [   10.452602] ACPI: Reserving FACS table memory at [mem 0x4005fad9-0x4005fb18]
> > > > 
> > > > 
> > > > And these are the logs of the same boot (unsuccessful) without this
> > > > patch:
> > > > 
> > > > [   10.516015] ACPI: Early table checksum verification disabled
> > > > [   10.517732] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> > > > [   10.519535] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> > > > [   10.522523] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> > > > [   10.527453] ACPI: ���� 0x000000007FFE149D FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > > > [   10.528362] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > > > [   10.528491] ACPI: Reserving ���� table memory at [mem 0x7ffe149d-0x17ffe149b]
> > > > 
> > > > It is clearly a memory corruption around FACS but I couldn't find the
> > > > reason for it. The mapping code looks correct. I hope you can suggest a
> > > > way to narrow down the problem. If I could, I would suggest applying
> > > > this patch just for the QEMU PVH tests, but we don't have the
> > > > infrastructure for that in gitlab-ci as there is a single Xen build for
> > > > all tests.
> > > 
> > > Would be helpful to see the memory map provided to Linux, just in case
> > > we messed up and there's some overlap.
> > 
> > Everything looks correct. Here are some more logs:
> > 
> > (XEN) Xen-e820 RAM map:
> > (XEN)  [0000000000000000, 000000000009fbff] (usable)
> > (XEN)  [000000000009fc00, 000000000009ffff] (reserved)
> > (XEN)  [00000000000f0000, 00000000000fffff] (reserved)
> > (XEN)  [0000000000100000, 000000007ffdffff] (usable)
> > (XEN)  [000000007ffe0000, 000000007fffffff] (reserved)
> > (XEN)  [00000000fffc0000, 00000000ffffffff] (reserved)
> > (XEN)  [000000fd00000000, 000000ffffffffff] (reserved)
> > (XEN) Microcode loading not available
> > (XEN) New Xen image base address: 0x7f600000
> > (XEN) System RAM: 2047MB (2096636kB)
> > (XEN) ACPI: RSDP 000F58D0, 0014 (r0 BOCHS )
> > (XEN) ACPI: RSDT 7FFE1B21, 0034 (r1 BOCHS  BXPC            1 BXPC        1)
> > (XEN) ACPI: FACP 7FFE19CD, 0074 (r1 BOCHS  BXPC            1 BXPC        1)
> > (XEN) ACPI: DSDT 7FFE0040, 198D (r1 BOCHS  BXPC            1 BXPC        1)
> > (XEN) ACPI: FACS 7FFE0000, 0040
> > (XEN) ACPI: APIC 7FFE1A41, 0080 (r1 BOCHS  BXPC            1 BXPC        1)
> > (XEN) ACPI: HPET 7FFE1AC1, 0038 (r1 BOCHS  BXPC            1 BXPC        1)
> > (XEN) ACPI: WAET 7FFE1AF9, 0028 (r1 BOCHS  BXPC            1 BXPC        1)
> > [...]
> > (XEN) Dom0 memory map:
> > (XEN)  [0000000000000000, 000000000009efff] (usable)
> > (XEN)  [000000000009fc00, 000000000009ffff] (reserved)
> > (XEN)  [00000000000f0000, 00000000000fffff] (reserved)
> > (XEN)  [0000000000100000, 0000000040060f1d] (usable)
> > (XEN)  [0000000040060f1e, 0000000040060fa7] (ACPI data)
> > (XEN)  [0000000040061000, 000000007ffdffff] (unusable)
> > (XEN)  [000000007ffe0000, 000000007fffffff] (reserved)
> > (XEN)  [00000000fffc0000, 00000000ffffffff] (reserved)
> > (XEN)  [000000fd00000000, 000000ffffffffff] (reserved)
> > [...]
> > [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009efff] usable
> > [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x00000000000fffff] reserved
> > [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000040060f1d] usable
> > [    0.000000] BIOS-e820: [mem 0x0000000040060f1e-0x0000000040060fa7] ACPI data
> > [    0.000000] BIOS-e820: [mem 0x0000000040061000-0x000000007ffdffff] unusable
> > [    0.000000] BIOS-e820: [mem 0x000000007ffe0000-0x000000007fffffff] reserved
> > [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
> > [    0.000000] BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
> > [...]
> > [   10.102427] ACPI: Early table checksum verification disabled
> > [   10.104455] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> > [   10.106250] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> > [   10.109549] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> > [   10.115173] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > [   10.116054] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > [   10.116182] ACPI: Reserving ���� table memory at [mem 0x7ffe19cd-0x17ffe19cb]
> > 
> > 
> > 
> > > It seems like some of the XSDT entries (the FADT one) is corrupt?
> > > 
> > > Could you maybe add some debug to the Xen-crafted XSDT placement.
> > 
> > I added a printk just after:
> > 
> >   xsdt->table_offset_entry[j++] = tables[i].address;
> > 
> > And it printed only once:
> > 
> >   (XEN) DEBUG pvh_setup_acpi_xsdt 1000 name=FACP address=7ffe19cd
> > 
> > That actually matches the address read by Linux:
> > 
> >   [   10.175448] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > 
> > So the address seems correct. It is the content of the FADT/FACP table
> > that is corrupted.
> > 
> > I wrote the following function in Xen:
> > 
> > static void check(void)
> > {
> >     unsigned long addr = 0x7ffe19cd;
> >     struct acpi_table_fadt *fadt;
> >     fadt = acpi_os_map_memory(addr, sizeof(*fadt));
> >     printk("DEBUG %s %d s=%s\n",__func__,__LINE__,fadt->header.signature);
> >     acpi_os_unmap_memory(fadt, sizeof(*fadt));
> > }
> > 
> > It prints the right table signature at the end of pvh_setup_acpi.
> > I also added a call at the top of xenmem_add_to_physmap_one, and the
> > signature is still correct. Then I added a call at the beginning of
> > __update_vcpu_system_time. Here is the surprise: from Xen's point of view
> > the table never gets corrupted. Here are the logs:
> > 
> > [...]
> > (XEN) DEBUG fadt_check 1551 s=FACPt
> > (XEN) DEBUG fadt_check 1551 s=FACPt
> > (XEN) DEBUG fadt_check 1551 s=FACPt
> > (XEN) DEBUG fadt_check 1551 s=FACPt
> > (XEN) d0v0: upcall vector f3
> > [    0.000000] Linux version 6.1.19 (root@124de7fbba7f) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_3
> > [    0.000000] Command line: console=hvc0
> > [...]
> > [   10.371610] ACPI: Early table checksum verification disabled
> > [   10.373633] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> > [   10.375548] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> > [   10.378732] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> > [   10.384188] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > [   10.385374] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > [   10.385519] ACPI: Reserving ���� table memory at [mem 0x7ffe19cd-0x17ffe19cb]
> > [...]
> > (XEN) DEBUG fadt_check 1551 s=FACPt
> > (XEN) DEBUG fadt_check 1551 s=FACPt
> > (XEN) DEBUG fadt_check 1551 s=FACPt
> > (XEN) DEBUG fadt_check 1551 s=FACPt
> > 
> > 
> > So it looks like it is a problem with the mapping itself? Xen sees the
> > data correctly and Linux sees it corrupted?
> 
> It seems to me like the page is not correctly mapped, and so 1s are
> returned? (same behavior as a hole).  IOW: it would seem to me like MMIO
> areas are not correctly handled by nested NPT? (I assume you are
> running this on AMD).
> 
> Does it make a difference if you try to boot with dom0=pvh,shadow?
> 
> A couple of wild ideas.  Maybe the nested virt support that you are
> using doesn't handle the UC bit in second stage page table entries?
> You could try removing this in p2m_type_to_flags() (see the
> p2m_mmio_direct case).
> 
> Another wild idea I have is that the emulated NPT code doesn't like
> having the bits 63:52 from the PTE set to anything different than 0,
> and hence only p2m_ram_rw works (p2m_mmio_direct is 5).

Many thanks to Xenia for figuring out the root cause of the bug. The
underlying memory region is already added as E820_RESERVED to the guest
(instead of E820_ACPI). When pvh_add_mem_range is called with E820_ACPI
as the type for the table in question, it returns -EEXIST without doing
anything.

The original fix by Xenia was to carve out the relevant subset of the
reserved region and mark it as E820_ACPI. Instead, I rewrote it to change
the type of the entire region to E820_ACPI, because that is simpler and
doesn't have to deal with the edge cases (a range partially overlapping
an existing region, overlapping two existing regions, etc.)

What do you think?


diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index e1043e40d2..6c1c73d853 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -241,6 +241,20 @@ static int __init pvh_add_mem_range(struct domain *d, uint64_t s, uint64_t e,
         if ( rs >= e )
             break;
 
+        if ( re >= e && rs <= s )
+        {
+            /*
+             * An existing overlapping memory range exists and it is
+             * marked as reserved. This happens on QEMU. Change the type
+             * to E820_ACPI.
+             */
+            if ( d->arch.e820[i].type == E820_RESERVED && type == E820_ACPI )
+            {
+                d->arch.e820[i].type = E820_ACPI;
+                break;
+            }
+        }
+
         if ( re > s )
             return -EEXIST;
     }
--8323329-1353024790-1684357126=:128889--


From xen-devel-bounces@lists.xenproject.org Wed May 17 22:10:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 22:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536080.834200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPLz-00058r-8e; Wed, 17 May 2023 22:10:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536080.834200; Wed, 17 May 2023 22:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPLz-00058k-5o; Wed, 17 May 2023 22:10:39 +0000
Received: by outflank-mailman (input) for mailman id 536080;
 Wed, 17 May 2023 22:10:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqXG=BG=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pzPLx-0004sV-Fc
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 22:10:37 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a49c1865-f4ff-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 00:10:34 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-48-VfyR7csfO32fVtGOTuGAng-1; Wed, 17 May 2023 18:10:29 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8DD1A805F5A;
 Wed, 17 May 2023 22:10:28 +0000 (UTC)
Received: from localhost (unknown [10.39.192.14])
 by smtp.corp.redhat.com (Postfix) with ESMTP id AEC6C2166B31;
 Wed, 17 May 2023 22:10:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a49c1865-f4ff-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684361433;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/rde1IRAOm4sc1Tal7ezRnPv14rgbAWjAI/I5FB1Dhc=;
	b=FWyCOElPIxRnQkix7GgSn8cUOgF64boWhooWqybUHODsgKsfjSwXJJpq7PiYYTm6w6oBMf
	lzvjwuRQIXiK+0LcJ3pwIlMoUd1pFuIOQg5AfgbyJnsufP2fLGXk6OvgoNDJGQWtZtnGuQ
	e3+vK3+ZDOI8rO50IlXxcx4UMFrBPsI=
X-MC-Unique: VfyR7csfO32fVtGOTuGAng-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: [PATCH 1/6] block: add blk_io_plug_call() API
Date: Wed, 17 May 2023 18:10:17 -0400
Message-Id: <20230517221022.325091-2-stefanha@redhat.com>
In-Reply-To: <20230517221022.325091-1-stefanha@redhat.com>
References: <20230517221022.325091-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

Introduce a new API for thread-local blk_io_plug() that does not
traverse the block graph. The goal is to make blk_io_plug() multi-queue
friendly.

Instead of having block drivers track whether or not we're in a plugged
section, provide an API that allows them to defer a function call until
we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is
called multiple times with the same fn/opaque pair, then fn() is only
called once when the section is unplugged, resulting in batching.

This patch introduces the API and converts blk_io_plug()/blk_io_unplug().
They no longer take a BlockBackend argument because the plug state is now
thread-local.

Later patches convert block drivers to blk_io_plug_call() and then we
can finally remove .bdrv_co_io_plug() once all block drivers have been
converted.
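The defer-and-deduplicate behavior described above can be sketched in plain C.
This is a minimal, self-contained model for illustration only: the names
plug_begin()/plug_end()/plug_call() and the fixed-size array are hypothetical
stand-ins, while the actual patch uses coroutine-aware TLS and a glib GArray.

```c
/* Hypothetical sketch of thread-local deferred-call batching. Not the QEMU
 * API: plug_begin/plug_end/plug_call are made-up names and the fixed array
 * replaces the GArray used by the real implementation. */
#include <assert.h>
#include <stddef.h>

typedef struct {
    void (*fn)(void *);
    void *opaque;
} DeferredCall;

static _Thread_local unsigned plug_count;            /* nesting depth */
static _Thread_local DeferredCall deferred[16];      /* pending calls */
static _Thread_local size_t n_deferred;

void plug_call(void (*fn)(void *), void *opaque)
{
    if (plug_count == 0) {
        fn(opaque);              /* not plugged: call immediately */
        return;
    }
    for (size_t i = 0; i < n_deferred; i++) {
        if (deferred[i].fn == fn && deferred[i].opaque == opaque) {
            return;              /* same fn/opaque pair: batched into one */
        }
    }
    assert(n_deferred < sizeof(deferred) / sizeof(deferred[0]));
    deferred[n_deferred++] = (DeferredCall){ fn, opaque };
}

void plug_begin(void) { plug_count++; }

void plug_end(void)
{
    assert(plug_count > 0);
    if (--plug_count > 0) {
        return;                  /* only the outermost unplug flushes */
    }
    for (size_t i = 0; i < n_deferred; i++) {
        deferred[i].fn(deferred[i].opaque);
    }
    n_deferred = 0;
}
```

A driver that previously checked a plugged flag would instead enqueue its
request and call plug_call(submit_fn, state); duplicate submissions within
one plugged section then collapse into a single submit_fn() call.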

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 MAINTAINERS                       |   1 +
 include/sysemu/block-backend-io.h |  13 +--
 block/block-backend.c             |  22 -----
 block/plug.c                      | 159 ++++++++++++++++++++++++++++++
 hw/block/dataplane/xen-block.c    |   8 +-
 hw/block/virtio-blk.c             |   4 +-
 hw/scsi/virtio-scsi.c             |   6 +-
 block/meson.build                 |   1 +
 8 files changed, 173 insertions(+), 41 deletions(-)
 create mode 100644 block/plug.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 50585117a0..574202295c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2644,6 +2644,7 @@ F: util/aio-*.c
 F: util/aio-*.h
 F: util/fdmon-*.c
 F: block/io.c
+F: block/plug.c
 F: migration/block*
 F: include/block/aio.h
 F: include/block/aio-wait.h
diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
index d62a7ee773..be4dcef59d 100644
--- a/include/sysemu/block-backend-io.h
+++ b/include/sysemu/block-backend-io.h
@@ -100,16 +100,9 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
 int blk_get_max_iov(BlockBackend *blk);
 int blk_get_max_hw_iov(BlockBackend *blk);
 
-/*
- * blk_io_plug/unplug are thread-local operations. This means that multiple
- * IOThreads can simultaneously call plug/unplug, but the caller must ensure
- * that each unplug() is called in the same IOThread of the matching plug().
- */
-void coroutine_fn blk_co_io_plug(BlockBackend *blk);
-void co_wrapper blk_io_plug(BlockBackend *blk);
-
-void coroutine_fn blk_co_io_unplug(BlockBackend *blk);
-void co_wrapper blk_io_unplug(BlockBackend *blk);
+void blk_io_plug(void);
+void blk_io_unplug(void);
+void blk_io_plug_call(void (*fn)(void *), void *opaque);
 
 AioContext *blk_get_aio_context(BlockBackend *blk);
 BlockAcctStats *blk_get_stats(BlockBackend *blk);
diff --git a/block/block-backend.c b/block/block-backend.c
index ca537cd0ad..1f1d226ba6 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -2568,28 +2568,6 @@ void blk_add_insert_bs_notifier(BlockBackend *blk, Notifier *notify)
     notifier_list_add(&blk->insert_bs_notifiers, notify);
 }
 
-void coroutine_fn blk_co_io_plug(BlockBackend *blk)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    IO_CODE();
-    GRAPH_RDLOCK_GUARD();
-
-    if (bs) {
-        bdrv_co_io_plug(bs);
-    }
-}
-
-void coroutine_fn blk_co_io_unplug(BlockBackend *blk)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    IO_CODE();
-    GRAPH_RDLOCK_GUARD();
-
-    if (bs) {
-        bdrv_co_io_unplug(bs);
-    }
-}
-
 BlockAcctStats *blk_get_stats(BlockBackend *blk)
 {
     IO_CODE();
diff --git a/block/plug.c b/block/plug.c
new file mode 100644
index 0000000000..6738a568ba
--- /dev/null
+++ b/block/plug.c
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Block I/O plugging
+ *
+ * Copyright Red Hat.
+ *
+ * This API defers a function call within a blk_io_plug()/blk_io_unplug()
+ * section, allowing multiple calls to batch up. This is a performance
+ * optimization that is used in the block layer to submit several I/O requests
+ * at once instead of individually:
+ *
+ *   blk_io_plug(); <-- start of plugged region
+ *   ...
+ *   blk_io_plug_call(my_func, my_obj); <-- deferred my_func(my_obj) call
+ *   blk_io_plug_call(my_func, my_obj); <-- another
+ *   blk_io_plug_call(my_func, my_obj); <-- another
+ *   ...
+ *   blk_io_unplug(); <-- end of plugged region, my_func(my_obj) is called once
+ *
+ * This code is actually generic and not tied to the block layer. If another
+ * subsystem needs this functionality, it could be renamed.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/coroutine-tls.h"
+#include "qemu/notify.h"
+#include "qemu/thread.h"
+#include "sysemu/block-backend.h"
+
+/* A function call that has been deferred until unplug() */
+typedef struct {
+    void (*fn)(void *);
+    void *opaque;
+} UnplugFn;
+
+/* Per-thread state */
+typedef struct {
+    unsigned count;       /* how many times has plug() been called? */
+    GArray *unplug_fns;   /* functions to call at unplug time */
+} Plug;
+
+/* Use get_ptr_plug() to fetch this thread-local value */
+QEMU_DEFINE_STATIC_CO_TLS(Plug, plug);
+
+/* Called at thread cleanup time */
+static void blk_io_plug_atexit(Notifier *n, void *value)
+{
+    Plug *plug = get_ptr_plug();
+    g_array_free(plug->unplug_fns, TRUE);
+}
+
+/* This won't involve coroutines, so use __thread */
+static __thread Notifier blk_io_plug_atexit_notifier;
+
+/**
+ * blk_io_plug_call:
+ * @fn: a function pointer to be invoked
+ * @opaque: a user-defined argument to @fn()
+ *
+ * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
+ * section.
+ *
+ * Otherwise defer the call until the end of the outermost
+ * blk_io_plug()/blk_io_unplug() section in this thread. If the same
+ * @fn/@opaque pair has already been deferred, it will only be called once upon
+ * blk_io_unplug() so that accumulated calls are batched into a single call.
+ *
+ * The caller must ensure that @opaque is not freed before @fn() is invoked.
+ */
+void blk_io_plug_call(void (*fn)(void *), void *opaque)
+{
+    Plug *plug = get_ptr_plug();
+
+    /* Call immediately if we're not plugged */
+    if (plug->count == 0) {
+        fn(opaque);
+        return;
+    }
+
+    GArray *array = plug->unplug_fns;
+    if (!array) {
+        array = g_array_new(FALSE, FALSE, sizeof(UnplugFn));
+        plug->unplug_fns = array;
+        blk_io_plug_atexit_notifier.notify = blk_io_plug_atexit;
+        qemu_thread_atexit_add(&blk_io_plug_atexit_notifier);
+    }
+
+    UnplugFn *fns = (UnplugFn *)array->data;
+    UnplugFn new_fn = {
+        .fn = fn,
+        .opaque = opaque,
+    };
+
+    /*
+     * There won't be many, so do a linear search. If this becomes a bottleneck
+     * then a binary search (glib 2.62+) or different data structure could be
+     * used.
+     */
+    for (guint i = 0; i < array->len; i++) {
+        if (memcmp(&fns[i], &new_fn, sizeof(new_fn)) == 0) {
+            return; /* already exists */
+        }
+    }
+
+    g_array_append_val(array, new_fn);
+}
+
+/**
+ * blk_io_plug: Defer blk_io_plug_call() functions until blk_io_unplug()
+ *
+ * blk_io_plug/unplug are thread-local operations. This means that multiple
+ * threads can simultaneously call plug/unplug, but the caller must ensure that
+ * each unplug() is called in the same thread as the matching plug().
+ *
+ * Nesting is supported. blk_io_plug_call() functions are only called at the
+ * outermost blk_io_unplug().
+ */
+void blk_io_plug(void)
+{
+    Plug *plug = get_ptr_plug();
+
+    assert(plug->count < UINT32_MAX);
+
+    plug->count++;
+}
+
+/**
+ * blk_io_unplug: Run any pending blk_io_plug_call() functions
+ *
+ * There must have been a matching blk_io_plug() call in the same thread prior
+ * to this blk_io_unplug() call.
+ */
+void blk_io_unplug(void)
+{
+    Plug *plug = get_ptr_plug();
+
+    assert(plug->count > 0);
+
+    if (--plug->count > 0) {
+        return;
+    }
+
+    GArray *array = plug->unplug_fns;
+    if (!array) {
+        return;
+    }
+
+    UnplugFn *fns = (UnplugFn *)array->data;
+
+    for (guint i = 0; i < array->len; i++) {
+        fns[i].fn(fns[i].opaque);
+    }
+
+    /*
+     * This resets the array without freeing memory so that appending is cheap
+     * in the future.
+     */
+    g_array_set_size(array, 0);
+}
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index d8bc39d359..e49c24f63d 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -537,7 +537,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
      * is below us.
      */
     if (inflight_atstart > IO_PLUG_THRESHOLD) {
-        blk_io_plug(dataplane->blk);
+        blk_io_plug();
     }
     while (rc != rp) {
         /* pull request from ring */
@@ -577,12 +577,12 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
 
         if (inflight_atstart > IO_PLUG_THRESHOLD &&
             batched >= inflight_atstart) {
-            blk_io_unplug(dataplane->blk);
+            blk_io_unplug();
         }
         xen_block_do_aio(request);
         if (inflight_atstart > IO_PLUG_THRESHOLD) {
             if (batched >= inflight_atstart) {
-                blk_io_plug(dataplane->blk);
+                blk_io_plug();
                 batched = 0;
             } else {
                 batched++;
@@ -590,7 +590,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
         }
     }
     if (inflight_atstart > IO_PLUG_THRESHOLD) {
-        blk_io_unplug(dataplane->blk);
+        blk_io_unplug();
     }
 
     return done_something;
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 8f65ea4659..b4286424c1 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1134,7 +1134,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
     bool suppress_notifications = virtio_queue_get_notification(vq);
 
     aio_context_acquire(blk_get_aio_context(s->blk));
-    blk_io_plug(s->blk);
+    blk_io_plug();
 
     do {
         if (suppress_notifications) {
@@ -1158,7 +1158,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
         virtio_blk_submit_multireq(s, &mrb);
     }
 
-    blk_io_unplug(s->blk);
+    blk_io_unplug();
     aio_context_release(blk_get_aio_context(s->blk));
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..534a44ee07 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -799,7 +799,7 @@ static int virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
         return -ENOBUFS;
     }
     scsi_req_ref(req->sreq);
-    blk_io_plug(d->conf.blk);
+    blk_io_plug();
     object_unref(OBJECT(d));
     return 0;
 }
@@ -810,7 +810,7 @@ static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
     if (scsi_req_enqueue(sreq)) {
         scsi_req_continue(sreq);
     }
-    blk_io_unplug(sreq->dev->conf.blk);
+    blk_io_unplug();
     scsi_req_unref(sreq);
 }
 
@@ -836,7 +836,7 @@ static void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
                 while (!QTAILQ_EMPTY(&reqs)) {
                     req = QTAILQ_FIRST(&reqs);
                     QTAILQ_REMOVE(&reqs, req, next);
-                    blk_io_unplug(req->sreq->dev->conf.blk);
+                    blk_io_unplug();
                     scsi_req_unref(req->sreq);
                     virtqueue_detach_element(req->vq, &req->elem, 0);
                     virtio_scsi_free_req(req);
diff --git a/block/meson.build b/block/meson.build
index 486dda8b85..fb4332bd66 100644
--- a/block/meson.build
+++ b/block/meson.build
@@ -23,6 +23,7 @@ block_ss.add(files(
   'mirror.c',
   'nbd.c',
   'null.c',
+  'plug.c',
   'qapi.c',
   'qcow2-bitmap.c',
   'qcow2-cache.c',
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 17 22:10:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 22:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536079.834190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPLx-0004sd-0T; Wed, 17 May 2023 22:10:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536079.834190; Wed, 17 May 2023 22:10:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPLw-0004sW-Tw; Wed, 17 May 2023 22:10:36 +0000
Received: by outflank-mailman (input) for mailman id 536079;
 Wed, 17 May 2023 22:10:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqXG=BG=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pzPLw-0004sP-4b
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 22:10:36 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a3f51616-f4ff-11ed-b22a-6b7b168915f2;
 Thu, 18 May 2023 00:10:33 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-59-lSLG9I1HP4uawS9h9gPBxQ-1; Wed, 17 May 2023 18:10:26 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id EB521185A78B;
 Wed, 17 May 2023 22:10:25 +0000 (UTC)
Received: from localhost (unknown [10.39.192.14])
 by smtp.corp.redhat.com (Postfix) with ESMTP id DFC2BC15BA0;
 Wed, 17 May 2023 22:10:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3f51616-f4ff-11ed-b22a-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684361432;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=N/ldJhRh22XDo67uOMsA8dfc78R1BhZ6UygIAszkpiQ=;
	b=H+dLBQeNZjT+Ew29XQwEO1Fey19IYsAKrgfSRwQ4aGhQdhgy+rq39djD7y9oy+EBT/1JGE
	fy786SdEqqG9ViPlsdehabDUauEWoboSZFosd0M3VWCnKKL365tyk50qb6MZrOO1wQwtxC
	Xlui6DwtefPw8psePEXyT18djal+1ic=
X-MC-Unique: lSLG9I1HP4uawS9h9gPBxQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: [PATCH 0/6] block: add blk_io_plug_call() API
Date: Wed, 17 May 2023 18:10:16 -0400
Message-Id: <20230517221022.325091-1-stefanha@redhat.com>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

The existing blk_io_plug() API is not block layer multi-queue friendly because
the plug state is per-BlockDriverState.

Change blk_io_plug()'s implementation so it is thread-local. This is done by
introducing the blk_io_plug_call() function that block drivers use to batch
calls while plugged. It is relatively easy to convert block drivers from
.bdrv_co_io_plug() to blk_io_plug_call().
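The driver-side shape of such a conversion can be illustrated with a small,
self-contained sketch. FakeQueue, queue_write(), and queue_unplug_fn() are
hypothetical names, and blk_io_plug_call() is stubbed to call immediately
(the unplugged case); inside a plugged section the real QEMU function defers
and deduplicates instead.

```c
/* Hypothetical sketch of a converted driver: enqueue the request, then hand
 * the submission function to blk_io_plug_call() instead of checking a
 * per-BlockDriverState plugged flag. The stub below models only the
 * unplugged case, where the call happens immediately. */
#include <assert.h>
#include <stddef.h>

#define QUEUE_MAX 8

typedef struct {
    int pending[QUEUE_MAX];   /* enqueued request ids */
    size_t n_pending;
    int submitted;            /* requests that reached the "device" */
} FakeQueue;

/* Stub of the API's unplugged path: invoke fn(opaque) at once. */
static void blk_io_plug_call(void (*fn)(void *), void *opaque)
{
    fn(opaque);
}

/* Unplug callback: submit everything queued so far as one batch. */
static void queue_unplug_fn(void *opaque)
{
    FakeQueue *q = opaque;
    q->submitted += (int)q->n_pending;
    q->n_pending = 0;
}

/* Driver entry point: enqueue, then schedule (or perform) submission. */
static void queue_write(FakeQueue *q, int req_id)
{
    assert(q->n_pending < QUEUE_MAX);
    q->pending[q->n_pending++] = req_id;
    blk_io_plug_call(queue_unplug_fn, q);
}
```

In a real conversion the emulated device wraps its request loop in
blk_io_plug()/blk_io_unplug(), so repeated queue_write() calls would
result in a single batched queue_unplug_fn() invocation.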

Random read 4KB performance with virtio-blk on a host NVMe block device:

iodepth   iops   change vs today
1        45612   -4%
2        87967   +2%
4       129872   +0%
8       171096   -3%
16      194508   -4%
32      208947   -1%
64      217647   +0%
128     229629   +0%

The results are within the noise for these benchmarks. This is expected
because this patch series does not change the plugging behavior within a
single thread; only the plug state becomes thread-local.

The following graph compares several approaches:
https://vmsplice.net/~stefan/blk_io_plug-thread-local.png
- v7.2.0: before most of the multi-queue block layer changes landed.
- with-blk_io_plug: today's post-8.0.0 QEMU.
- blk_io_plug-thread-local: this patch series.
- no-blk_io_plug: what happens when we simply remove plugging?
- call-after-dispatch: what if we integrate plugging into the event loop? I
  decided against this approach in the end because it's more likely to
  introduce performance regressions since I/O submission is deferred until the
  end of the event loop iteration.

Aside from the no-blk_io_plug case, which bottlenecks much earlier than the
others, we see that all plugging approaches are more or less equivalent in this
benchmark. It is also clear that QEMU 8.0.0 has lower performance than 7.2.0.

The Ansible playbook, fio results, and a Jupyter notebook are available here:
https://github.com/stefanha/qemu-perf/tree/remove-blk_io_plug

Stefan Hajnoczi (6):
  block: add blk_io_plug_call() API
  block/nvme: convert to blk_io_plug_call() API
  block/blkio: convert to blk_io_plug_call() API
  block/io_uring: convert to blk_io_plug_call() API
  block/linux-aio: convert to blk_io_plug_call() API
  block: remove bdrv_co_io_plug() API

 MAINTAINERS                       |   1 +
 include/block/block-io.h          |   3 -
 include/block/block_int-common.h  |  11 ---
 include/block/raw-aio.h           |  14 ---
 include/sysemu/block-backend-io.h |  13 +--
 block/blkio.c                     |  40 ++++----
 block/block-backend.c             |  22 -----
 block/file-posix.c                |  38 -------
 block/io.c                        |  37 -------
 block/io_uring.c                  |  45 ++++-----
 block/linux-aio.c                 |  41 +++-----
 block/nvme.c                      |  44 +++------
 block/plug.c                      | 159 ++++++++++++++++++++++++++++++
 hw/block/dataplane/xen-block.c    |   8 +-
 hw/block/virtio-blk.c             |   4 +-
 hw/scsi/virtio-scsi.c             |   6 +-
 block/meson.build                 |   1 +
 block/trace-events                |   5 +-
 18 files changed, 236 insertions(+), 256 deletions(-)
 create mode 100644 block/plug.c

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 17 22:10:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 22:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536082.834220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPM0-0005bT-QJ; Wed, 17 May 2023 22:10:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536082.834220; Wed, 17 May 2023 22:10:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPM0-0005bJ-NK; Wed, 17 May 2023 22:10:40 +0000
Received: by outflank-mailman (input) for mailman id 536082;
 Wed, 17 May 2023 22:10:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqXG=BG=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pzPLz-0004sP-Ok
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 22:10:39 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a78f9fea-f4ff-11ed-b22a-6b7b168915f2;
 Thu, 18 May 2023 00:10:39 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-274-PzJp_SQIOtapVDQqpnz-qw-1; Wed, 17 May 2023 18:10:34 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id BAB96857FB2;
 Wed, 17 May 2023 22:10:33 +0000 (UTC)
Received: from localhost (unknown [10.39.192.14])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1EF702166B31;
 Wed, 17 May 2023 22:10:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a78f9fea-f4ff-11ed-b22a-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684361438;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RXMA7sqy2vuXoTTqUl+VUv8v7FFhFPNT89EZySBaASY=;
	b=e8S3k/NQQUr+/Q52PVXxi9RUiPOH8ulNON7h0Nhd4Ewvpbi70kPdGP4T+NZ/MOm1FUdkaz
	kLiU7y2ExV9qAyFf5c2bi0gOKnmFUm/AreJ0nyFS/CSWjccRBESx9gIPZ2p8AxPudAyBnn
	aZ9I+zRlNO4HQ6uXYcHZNzfXtM7RHmI=
X-MC-Unique: PzJp_SQIOtapVDQqpnz-qw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: [PATCH 3/6] block/blkio: convert to blk_io_plug_call() API
Date: Wed, 17 May 2023 18:10:19 -0400
Message-Id: <20230517221022.325091-4-stefanha@redhat.com>
In-Reply-To: <20230517221022.325091-1-stefanha@redhat.com>
References: <20230517221022.325091-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/blkio.c | 40 +++++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..f2a1dc1fb2 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -325,16 +325,28 @@ static void blkio_detach_aio_context(BlockDriverState *bs)
                        false, NULL, NULL, NULL, NULL, NULL);
 }
 
-/* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
-static void blkio_submit_io(BlockDriverState *bs)
+/*
+ * Called by blk_io_unplug() or immediately if not plugged. Called without
+ * blkio_lock.
+ */
+static void blkio_unplug_fn(BlockDriverState *bs)
 {
-    if (qatomic_read(&bs->io_plugged) == 0) {
-        BDRVBlkioState *s = bs->opaque;
+    BDRVBlkioState *s = bs->opaque;
 
+    WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_do_io(s->blkioq, NULL, 0, 0, NULL);
     }
 }
 
+/*
+ * Schedule I/O submission after enqueuing a new request. Called without
+ * blkio_lock.
+ */
+static void blkio_submit_io(BlockDriverState *bs)
+{
+    blk_io_plug_call(blkio_unplug_fn, bs);
+}
+
 static int coroutine_fn
 blkio_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 {
@@ -345,9 +357,9 @@ blkio_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_discard(s->blkioq, offset, bytes, &cod, 0);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
     return cod.ret;
 }
@@ -378,9 +390,9 @@ blkio_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_readv(s->blkioq, offset, iov, iovcnt, &cod, 0);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
 
     if (use_bounce_buffer) {
@@ -423,9 +435,9 @@ static int coroutine_fn blkio_co_pwritev(BlockDriverState *bs, int64_t offset,
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_writev(s->blkioq, offset, iov, iovcnt, &cod, blkio_flags);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
 
     if (use_bounce_buffer) {
@@ -444,9 +456,9 @@ static int coroutine_fn blkio_co_flush(BlockDriverState *bs)
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_flush(s->blkioq, &cod, 0);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
     return cod.ret;
 }
@@ -472,22 +484,13 @@ static int coroutine_fn blkio_co_pwrite_zeroes(BlockDriverState *bs,
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_write_zeroes(s->blkioq, offset, bytes, &cod, blkio_flags);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
     return cod.ret;
 }
 
-static void coroutine_fn blkio_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVBlkioState *s = bs->opaque;
-
-    WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
-        blkio_submit_io(bs);
-    }
-}
-
 typedef enum {
     BMRR_OK,
     BMRR_SKIP,
@@ -1009,7 +1012,6 @@ static void blkio_refresh_limits(BlockDriverState *bs, Error **errp)
         .bdrv_co_pwritev         = blkio_co_pwritev, \
         .bdrv_co_flush_to_disk   = blkio_co_flush, \
         .bdrv_co_pwrite_zeroes   = blkio_co_pwrite_zeroes, \
-        .bdrv_co_io_unplug       = blkio_co_io_unplug, \
         .bdrv_refresh_limits     = blkio_refresh_limits, \
         .bdrv_register_buf       = blkio_register_buf, \
         .bdrv_unregister_buf     = blkio_unregister_buf, \
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 17 22:10:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 22:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536083.834231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPM4-0005wt-Bj; Wed, 17 May 2023 22:10:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536083.834231; Wed, 17 May 2023 22:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPM4-0005wd-69; Wed, 17 May 2023 22:10:44 +0000
Received: by outflank-mailman (input) for mailman id 536083;
 Wed, 17 May 2023 22:10:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqXG=BG=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pzPM3-0004sV-0H
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 22:10:43 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a8b30104-f4ff-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 00:10:41 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-500-iuCQhDZoOHmdirDNUMxiUg-1; Wed, 17 May 2023 18:10:36 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 34265101A54F;
 Wed, 17 May 2023 22:10:36 +0000 (UTC)
Received: from localhost (unknown [10.39.192.14])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 8660D40C6EC4;
 Wed, 17 May 2023 22:10:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8b30104-f4ff-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684361440;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Blp/dScqtxXcjtaEje+/rVMH7PoeKSgRMwItRYa7aR4=;
	b=RyxjFtLx6ZGCwF0gH2t7Mcrg5hJdfkf//cJheEgAcDX5wfwZ5k8wcl6R44dnECe10LDYO1
	lda7AcE07j3Y2qTzlh7G8kfYOOIHbAy1j1fDPH/qgfe60NZ/HGxaJHtznQ5/eo5Lc+dAzi
	mWyKICdj0xKQZ7mIizyyjxcVDtpa9/U=
X-MC-Unique: iuCQhDZoOHmdirDNUMxiUg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: [PATCH 4/6] block/io_uring: convert to blk_io_plug_call() API
Date: Wed, 17 May 2023 18:10:20 -0400
Message-Id: <20230517221022.325091-5-stefanha@redhat.com>
In-Reply-To: <20230517221022.325091-1-stefanha@redhat.com>
References: <20230517221022.325091-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.
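
The core idea — defer a per-backend "unplug" callback until the end of the current batch so queued SQEs are submitted together — can be sketched outside QEMU like this. This is a minimal illustration of the pattern, not the actual blk_io_plug_call() implementation; all names here are toy stand-ins:

```c
/* Minimal sketch (not QEMU code): the blk_io_plug_call() pattern defers a
 * per-backend "unplug" callback until the end of the current batch, so many
 * queued requests get one combined submission instead of one each. */
#include <assert.h>
#include <stddef.h>

typedef void (*UnplugFn)(void *opaque);

/* One pending callback per batch; the real API keeps a per-thread list. */
static UnplugFn pending_fn;
static void *pending_opaque;

/* Schedule a callback to run when the batch ends; deduplicated so repeated
 * calls during one batch register the function only once. */
static void io_plug_call(UnplugFn fn, void *opaque)
{
    if (pending_fn == fn && pending_opaque == opaque) {
        return; /* already scheduled for this batch */
    }
    pending_fn = fn;
    pending_opaque = opaque;
}

/* End of batch: invoke the deferred callback, which submits queued I/O. */
static void io_flush(void)
{
    if (pending_fn) {
        pending_fn(pending_opaque);
        pending_fn = NULL;
        pending_opaque = NULL;
    }
}

/* Toy backend state standing in for LuringState. */
typedef struct {
    int in_queue;   /* requests waiting to be submitted */
    int submitted;  /* requests pushed to the kernel so far */
} ToyState;

static void toy_unplug_fn(void *opaque)
{
    ToyState *s = opaque;
    s->submitted += s->in_queue; /* one combined submission */
    s->in_queue = 0;
}

static void toy_enqueue(ToyState *s)
{
    s->in_queue++;
    io_plug_call(toy_unplug_fn, s); /* defer submission to end of batch */
}
```

With this shape the driver no longer needs a per-queue `plugged` counter: batching state lives with the caller, which is what makes the API usable from multiple queues.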

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/raw-aio.h |  7 -------
 block/file-posix.c      | 10 ---------
 block/io_uring.c        | 45 ++++++++++++++++-------------------------
 block/trace-events      |  5 ++---
 4 files changed, 19 insertions(+), 48 deletions(-)

diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index 0fe85ade77..da60ca13ef 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -81,13 +81,6 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
                                   QEMUIOVector *qiov, int type);
 void luring_detach_aio_context(LuringState *s, AioContext *old_context);
 void luring_attach_aio_context(LuringState *s, AioContext *new_context);
-
-/*
- * luring_io_plug/unplug work in the thread's current AioContext, therefore the
- * caller must ensure that they are paired in the same IOThread.
- */
-void luring_io_plug(void);
-void luring_io_unplug(void);
 #endif
 
 #ifdef _WIN32
diff --git a/block/file-posix.c b/block/file-posix.c
index 0ab158efba..7baa8491dd 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2558,11 +2558,6 @@ static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
         laio_io_plug();
     }
 #endif
-#ifdef CONFIG_LINUX_IO_URING
-    if (s->use_linux_io_uring) {
-        luring_io_plug();
-    }
-#endif
 }
 
 static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
@@ -2573,11 +2568,6 @@ static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
         laio_io_unplug(s->aio_max_batch);
     }
 #endif
-#ifdef CONFIG_LINUX_IO_URING
-    if (s->use_linux_io_uring) {
-        luring_io_unplug();
-    }
-#endif
 }
 
 static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
diff --git a/block/io_uring.c b/block/io_uring.c
index 82cab6a5bd..9a45e5fb8b 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -16,6 +16,7 @@
 #include "block/raw-aio.h"
 #include "qemu/coroutine.h"
 #include "qapi/error.h"
+#include "sysemu/block-backend.h"
 #include "trace.h"
 
 /* Only used for assertions.  */
@@ -41,7 +42,6 @@ typedef struct LuringAIOCB {
 } LuringAIOCB;
 
 typedef struct LuringQueue {
-    int plugged;
     unsigned int in_queue;
     unsigned int in_flight;
     bool blocked;
@@ -267,7 +267,7 @@ static void luring_process_completions_and_submit(LuringState *s)
 {
     luring_process_completions(s);
 
-    if (!s->io_q.plugged && s->io_q.in_queue > 0) {
+    if (s->io_q.in_queue > 0) {
         ioq_submit(s);
     }
 }
@@ -301,29 +301,17 @@ static void qemu_luring_poll_ready(void *opaque)
 static void ioq_init(LuringQueue *io_q)
 {
     QSIMPLEQ_INIT(&io_q->submit_queue);
-    io_q->plugged = 0;
     io_q->in_queue = 0;
     io_q->in_flight = 0;
     io_q->blocked = false;
 }
 
-void luring_io_plug(void)
+static void luring_unplug_fn(void *opaque)
 {
-    AioContext *ctx = qemu_get_current_aio_context();
-    LuringState *s = aio_get_linux_io_uring(ctx);
-    trace_luring_io_plug(s);
-    s->io_q.plugged++;
-}
-
-void luring_io_unplug(void)
-{
-    AioContext *ctx = qemu_get_current_aio_context();
-    LuringState *s = aio_get_linux_io_uring(ctx);
-    assert(s->io_q.plugged);
-    trace_luring_io_unplug(s, s->io_q.blocked, s->io_q.plugged,
-                           s->io_q.in_queue, s->io_q.in_flight);
-    if (--s->io_q.plugged == 0 &&
-        !s->io_q.blocked && s->io_q.in_queue > 0) {
+    LuringState *s = opaque;
+    trace_luring_unplug_fn(s, s->io_q.blocked, s->io_q.in_queue,
+                           s->io_q.in_flight);
+    if (!s->io_q.blocked && s->io_q.in_queue > 0) {
         ioq_submit(s);
     }
 }
@@ -337,7 +325,6 @@ void luring_io_unplug(void)
  * @type: type of request
  *
  * Fetches sqes from ring, adds to pending queue and preps them
- *
  */
 static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
                             uint64_t offset, int type)
@@ -370,14 +357,16 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
 
     QSIMPLEQ_INSERT_TAIL(&s->io_q.submit_queue, luringcb, next);
     s->io_q.in_queue++;
-    trace_luring_do_submit(s, s->io_q.blocked, s->io_q.plugged,
-                           s->io_q.in_queue, s->io_q.in_flight);
-    if (!s->io_q.blocked &&
-        (!s->io_q.plugged ||
-         s->io_q.in_flight + s->io_q.in_queue >= MAX_ENTRIES)) {
-        ret = ioq_submit(s);
-        trace_luring_do_submit_done(s, ret);
-        return ret;
+    trace_luring_do_submit(s, s->io_q.blocked, s->io_q.in_queue,
+                           s->io_q.in_flight);
+    if (!s->io_q.blocked) {
+        if (s->io_q.in_flight + s->io_q.in_queue >= MAX_ENTRIES) {
+            ret = ioq_submit(s);
+            trace_luring_do_submit_done(s, ret);
+            return ret;
+        }
+
+        blk_io_plug_call(luring_unplug_fn, s);
     }
     return 0;
 }
diff --git a/block/trace-events b/block/trace-events
index 32665158d6..c22fb1ed43 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -64,9 +64,8 @@ file_paio_submit(void *acb, void *opaque, int64_t offset, int count, int type) "
 # io_uring.c
 luring_init_state(void *s, size_t size) "s %p size %zu"
 luring_cleanup_state(void *s) "%p freed"
-luring_io_plug(void *s) "LuringState %p plug"
-luring_io_unplug(void *s, int blocked, int plugged, int queued, int inflight) "LuringState %p blocked %d plugged %d queued %d inflight %d"
-luring_do_submit(void *s, int blocked, int plugged, int queued, int inflight) "LuringState %p blocked %d plugged %d queued %d inflight %d"
+luring_unplug_fn(void *s, int blocked, int queued, int inflight) "LuringState %p blocked %d queued %d inflight %d"
+luring_do_submit(void *s, int blocked, int queued, int inflight) "LuringState %p blocked %d queued %d inflight %d"
 luring_do_submit_done(void *s, int ret) "LuringState %p submitted to kernel %d"
 luring_co_submit(void *bs, void *s, void *luringcb, int fd, uint64_t offset, size_t nbytes, int type) "bs %p s %p luringcb %p fd %d offset %" PRId64 " nbytes %zd type %d"
 luring_process_completion(void *s, void *aiocb, int ret) "LuringState %p luringcb %p ret %d"
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 17 22:10:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 22:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536084.834240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPM6-0006En-KS; Wed, 17 May 2023 22:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536084.834240; Wed, 17 May 2023 22:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPM6-0006EY-Fb; Wed, 17 May 2023 22:10:46 +0000
Received: by outflank-mailman (input) for mailman id 536084;
 Wed, 17 May 2023 22:10:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqXG=BG=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pzPM4-0004sP-Nd
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 22:10:44 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aa518528-f4ff-11ed-b22a-6b7b168915f2;
 Thu, 18 May 2023 00:10:44 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-84-iMg22q66OE2OQnfjQYyXYg-1; Wed, 17 May 2023 18:10:39 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E198A870820;
 Wed, 17 May 2023 22:10:38 +0000 (UTC)
Received: from localhost (unknown [10.39.192.14])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 592832166B31;
 Wed, 17 May 2023 22:10:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa518528-f4ff-11ed-b22a-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684361442;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wAL0IqCXxu/U5jmrcTSsPXlVFdfkdt5JIfR3AjEbRH4=;
	b=eUaOjwnN+BLneCHx6+v2uLzbz2heKkbp8hVGIKkrjqKPMsGVfE04zVgrWPy5DkyJYLZqOK
	C5GAzhcqm80uv6tTBi5WPC9gdNMNCVUhW/MEhjcutCaOhpjSrPKGzcT8J7AXps8taJN6Rd
	AhHmmlFYUOiYYLTxD6mw4Rl5/XrbvxE=
X-MC-Unique: iMg22q66OE2OQnfjQYyXYg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: [PATCH 5/6] block/linux-aio: convert to blk_io_plug_call() API
Date: Wed, 17 May 2023 18:10:21 -0400
Message-Id: <20230517221022.325091-6-stefanha@redhat.com>
In-Reply-To: <20230517221022.325091-1-stefanha@redhat.com>
References: <20230517221022.325091-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.
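
The linux-aio conversion keeps one wrinkle: if the queue reaches the device's max batch size, it submits immediately instead of deferring. A rough sketch of that control flow (toy names, fixed threshold in place of laio_max_batch()):

```c
/* Sketch (assumed names, not QEMU code): enqueue requests, flushing
 * immediately once the queue hits a max batch size, otherwise deferring
 * submission to an end-of-batch callback. */
#include <assert.h>

#define MAX_BATCH 4

typedef struct {
    int in_queue;     /* requests waiting for submission */
    int submissions;  /* number of ioq_submit()-style flushes */
    int deferred;     /* end-of-batch callbacks registered */
} BatchState;

static void batch_submit(BatchState *s)
{
    if (s->in_queue > 0) {
        s->submissions++;   /* one syscall for the whole queue */
        s->in_queue = 0;
    }
}

/* Enqueue one request: flush right away at the threshold, else defer. */
static void batch_enqueue(BatchState *s)
{
    s->in_queue++;
    if (s->in_queue >= MAX_BATCH) {
        batch_submit(s);    /* queue full: submit now */
    } else {
        s->deferred++;      /* stands in for blk_io_plug_call() */
    }
}
```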

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/raw-aio.h |  7 -------
 block/file-posix.c      | 28 ----------------------------
 block/linux-aio.c       | 41 +++++++++++------------------------------
 3 files changed, 11 insertions(+), 65 deletions(-)

diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index da60ca13ef..0f63c2800c 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -62,13 +62,6 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
 
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
 void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
-
-/*
- * laio_io_plug/unplug work in the thread's current AioContext, therefore the
- * caller must ensure that they are paired in the same IOThread.
- */
-void laio_io_plug(void);
-void laio_io_unplug(uint64_t dev_max_batch);
 #endif
 /* io_uring.c - Linux io_uring implementation */
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/block/file-posix.c b/block/file-posix.c
index 7baa8491dd..ac1ed54811 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2550,26 +2550,6 @@ static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
     return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
 }
 
-static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
-{
-    BDRVRawState __attribute__((unused)) *s = bs->opaque;
-#ifdef CONFIG_LINUX_AIO
-    if (s->use_linux_aio) {
-        laio_io_plug();
-    }
-#endif
-}
-
-static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVRawState __attribute__((unused)) *s = bs->opaque;
-#ifdef CONFIG_LINUX_AIO
-    if (s->use_linux_aio) {
-        laio_io_unplug(s->aio_max_batch);
-    }
-#endif
-}
-
 static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
 {
     BDRVRawState *s = bs->opaque;
@@ -3914,8 +3894,6 @@ BlockDriver bdrv_file = {
     .bdrv_co_copy_range_from = raw_co_copy_range_from,
     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
     .bdrv_refresh_limits = raw_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
@@ -4286,8 +4264,6 @@ static BlockDriver bdrv_host_device = {
     .bdrv_co_copy_range_from = raw_co_copy_range_from,
     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
     .bdrv_refresh_limits = raw_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
@@ -4424,8 +4400,6 @@ static BlockDriver bdrv_host_cdrom = {
     .bdrv_co_pwritev        = raw_co_pwritev,
     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
     .bdrv_refresh_limits    = cdrom_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
@@ -4552,8 +4526,6 @@ static BlockDriver bdrv_host_cdrom = {
     .bdrv_co_pwritev        = raw_co_pwritev,
     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
     .bdrv_refresh_limits    = cdrom_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 442c86209b..5021aed68f 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -15,6 +15,7 @@
 #include "qemu/event_notifier.h"
 #include "qemu/coroutine.h"
 #include "qapi/error.h"
+#include "sysemu/block-backend.h"
 
 /* Only used for assertions.  */
 #include "qemu/coroutine_int.h"
@@ -46,7 +47,6 @@ struct qemu_laiocb {
 };
 
 typedef struct {
-    int plugged;
     unsigned int in_queue;
     unsigned int in_flight;
     bool blocked;
@@ -236,7 +236,7 @@ static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
 {
     qemu_laio_process_completions(s);
 
-    if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
+    if (!QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
 }
@@ -277,7 +277,6 @@ static void qemu_laio_poll_ready(EventNotifier *opaque)
 static void ioq_init(LaioQueue *io_q)
 {
     QSIMPLEQ_INIT(&io_q->pending);
-    io_q->plugged = 0;
     io_q->in_queue = 0;
     io_q->in_flight = 0;
     io_q->blocked = false;
@@ -354,31 +353,11 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
     return max_batch;
 }
 
-void laio_io_plug(void)
+static void laio_unplug_fn(void *opaque)
 {
-    AioContext *ctx = qemu_get_current_aio_context();
-    LinuxAioState *s = aio_get_linux_aio(ctx);
+    LinuxAioState *s = opaque;
 
-    s->io_q.plugged++;
-}
-
-void laio_io_unplug(uint64_t dev_max_batch)
-{
-    AioContext *ctx = qemu_get_current_aio_context();
-    LinuxAioState *s = aio_get_linux_aio(ctx);
-
-    assert(s->io_q.plugged);
-    s->io_q.plugged--;
-
-    /*
-     * Why max batch checking is performed here:
-     * Another BDS may have queued requests with a higher dev_max_batch and
-     * therefore in_queue could now exceed our dev_max_batch. Re-check the max
-     * batch so we can honor our device's dev_max_batch.
-     */
-    if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch) ||
-        (!s->io_q.plugged &&
-         !s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending))) {
+    if (!s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
 }
@@ -410,10 +389,12 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
 
     QSIMPLEQ_INSERT_TAIL(&s->io_q.pending, laiocb, next);
     s->io_q.in_queue++;
-    if (!s->io_q.blocked &&
-        (!s->io_q.plugged ||
-         s->io_q.in_queue >= laio_max_batch(s, dev_max_batch))) {
-        ioq_submit(s);
+    if (!s->io_q.blocked) {
+        if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch)) {
+            ioq_submit(s);
+        } else {
+            blk_io_plug_call(laio_unplug_fn, s);
+        }
     }
 
     return 0;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 17 22:10:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 22:10:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536081.834205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPLz-0005CP-JP; Wed, 17 May 2023 22:10:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536081.834205; Wed, 17 May 2023 22:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPLz-0005BQ-EJ; Wed, 17 May 2023 22:10:39 +0000
Received: by outflank-mailman (input) for mailman id 536081;
 Wed, 17 May 2023 22:10:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqXG=BG=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pzPLx-0004sP-Ny
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 22:10:37 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a639c45e-f4ff-11ed-b22a-6b7b168915f2;
 Thu, 18 May 2023 00:10:37 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-192-C0RqvpuqMluZCT-d_XeyiQ-1; Wed, 17 May 2023 18:10:32 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5F6833C0BE57;
 Wed, 17 May 2023 22:10:31 +0000 (UTC)
Received: from localhost (unknown [10.39.192.14])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B3F2614171C0;
 Wed, 17 May 2023 22:10:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a639c45e-f4ff-11ed-b22a-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684361435;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gFn0D1QeS7K9r5hk8We93ICMQFKJ73KryVejH0WL6Ew=;
	b=ZZqmvM2fOa51dy/lEyym2F0cKXavI3OdGqa87KJGo54j/9zAiOMmfVu4r29yppxmq23am8
	D0Rpt1N7hoXfyQ4aRhQPNLqOfHDNP2S1YkIpYMTHps2q3udYjbJ/UQsB7Ns3DXSOBHw9vj
	3AzaBs42j1us/eOQaGLu5nS5imBhrsI=
X-MC-Unique: C0RqvpuqMluZCT-d_XeyiQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: [PATCH 2/6] block/nvme: convert to blk_io_plug_call() API
Date: Wed, 17 May 2023 18:10:18 -0400
Message-Id: <20230517221022.325091-3-stefanha@redhat.com>
In-Reply-To: <20230517221022.325091-1-stefanha@redhat.com>
References: <20230517221022.325091-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.
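
For the NVMe driver the deferred callback must take the queue-pair lock before ringing the doorbell and reaping completions, as nvme_unplug_fn() does below. The shape of that callback, sketched with toy names and a plain pthread mutex in place of QEMU's QEMU_LOCK_GUARD():

```c
/* Sketch (assumed names, not QEMU code): an unplug callback that takes the
 * queue lock, then kicks the device once per batch instead of per command. */
#include <assert.h>
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    int need_kick;  /* commands queued since the last doorbell write */
    int kicks;      /* doorbell writes issued */
} ToyQueue;

static void toy_kick(ToyQueue *q)
{
    if (!q->need_kick) {
        return;         /* nothing new in the submission queue */
    }
    q->kicks++;         /* one doorbell write covers the whole batch */
    q->need_kick = 0;
}

/* Deferred end-of-batch callback, mirroring nvme_unplug_fn(). */
static void toy_unplug_fn(void *opaque)
{
    ToyQueue *q = opaque;
    pthread_mutex_lock(&q->lock);
    toy_kick(q);
    pthread_mutex_unlock(&q->lock);
}

/* Per-command submission path: queue the command, defer the doorbell. */
static void toy_submit(ToyQueue *q)
{
    pthread_mutex_lock(&q->lock);
    q->need_kick++;
    pthread_mutex_unlock(&q->lock);
}
```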

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/nvme.c | 44 ++++++++++++--------------------------------
 1 file changed, 12 insertions(+), 32 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..100b38b592 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -25,6 +25,7 @@
 #include "qemu/vfio-helpers.h"
 #include "block/block-io.h"
 #include "block/block_int.h"
+#include "sysemu/block-backend.h"
 #include "sysemu/replay.h"
 #include "trace.h"
 
@@ -119,7 +120,6 @@ struct BDRVNVMeState {
     int blkshift;
 
     uint64_t max_transfer;
-    bool plugged;
 
     bool supports_write_zeroes;
     bool supports_discard;
@@ -282,7 +282,7 @@ static void nvme_kick(NVMeQueuePair *q)
 {
     BDRVNVMeState *s = q->s;
 
-    if (s->plugged || !q->need_kick) {
+    if (!q->need_kick) {
         return;
     }
     trace_nvme_kick(s, q->index);
@@ -387,10 +387,6 @@ static bool nvme_process_completion(NVMeQueuePair *q)
     NvmeCqe *c;
 
     trace_nvme_process_completion(s, q->index, q->inflight);
-    if (s->plugged) {
-        trace_nvme_process_completion_queue_plugged(s, q->index);
-        return false;
-    }
 
     /*
      * Support re-entrancy when a request cb() function invokes aio_poll().
@@ -480,6 +476,15 @@ static void nvme_trace_command(const NvmeCmd *cmd)
     }
 }
 
+static void nvme_unplug_fn(void *opaque)
+{
+    NVMeQueuePair *q = opaque;
+
+    QEMU_LOCK_GUARD(&q->lock);
+    nvme_kick(q);
+    nvme_process_completion(q);
+}
+
 static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
                                 NvmeCmd *cmd, BlockCompletionFunc cb,
                                 void *opaque)
@@ -496,8 +501,7 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
            q->sq.tail * NVME_SQ_ENTRY_BYTES, cmd, sizeof(*cmd));
     q->sq.tail = (q->sq.tail + 1) % NVME_QUEUE_SIZE;
     q->need_kick++;
-    nvme_kick(q);
-    nvme_process_completion(q);
+    blk_io_plug_call(nvme_unplug_fn, q);
     qemu_mutex_unlock(&q->lock);
 }
 
@@ -1567,27 +1571,6 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
     }
 }
 
-static void coroutine_fn nvme_co_io_plug(BlockDriverState *bs)
-{
-    BDRVNVMeState *s = bs->opaque;
-    assert(!s->plugged);
-    s->plugged = true;
-}
-
-static void coroutine_fn nvme_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVNVMeState *s = bs->opaque;
-    assert(s->plugged);
-    s->plugged = false;
-    for (unsigned i = INDEX_IO(0); i < s->queue_count; i++) {
-        NVMeQueuePair *q = s->queues[i];
-        qemu_mutex_lock(&q->lock);
-        nvme_kick(q);
-        nvme_process_completion(q);
-        qemu_mutex_unlock(&q->lock);
-    }
-}
-
 static bool nvme_register_buf(BlockDriverState *bs, void *host, size_t size,
                               Error **errp)
 {
@@ -1664,9 +1647,6 @@ static BlockDriver bdrv_nvme = {
     .bdrv_detach_aio_context  = nvme_detach_aio_context,
     .bdrv_attach_aio_context  = nvme_attach_aio_context,
 
-    .bdrv_co_io_plug          = nvme_co_io_plug,
-    .bdrv_co_io_unplug        = nvme_co_io_unplug,
-
     .bdrv_register_buf        = nvme_register_buf,
     .bdrv_unregister_buf      = nvme_unregister_buf,
 };
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 17 22:10:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 22:10:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536085.834250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPM8-0006YZ-VA; Wed, 17 May 2023 22:10:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536085.834250; Wed, 17 May 2023 22:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPM8-0006YS-QP; Wed, 17 May 2023 22:10:48 +0000
Received: by outflank-mailman (input) for mailman id 536085;
 Wed, 17 May 2023 22:10:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqXG=BG=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1pzPM7-0004sP-CI
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 22:10:47 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id abff6cb0-f4ff-11ed-b22a-6b7b168915f2;
 Thu, 18 May 2023 00:10:46 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-283-ZuGjwf9_O02pHtA8FBXCrg-1; Wed, 17 May 2023 18:10:42 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6832838184E6;
 Wed, 17 May 2023 22:10:41 +0000 (UTC)
Received: from localhost (unknown [10.39.192.14])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C65272026D16;
 Wed, 17 May 2023 22:10:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abff6cb0-f4ff-11ed-b22a-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684361445;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oAMrkGRvsGP9WognGT4SAJgv9CJG1xPxVEQ07vb4i/g=;
	b=OEa3KfXfMrw7VuIR0boDMxvN5PJVyqj7s9NYMoRO/uRz+UWr7xlHzhy78Zq4ynJLYZOnND
	E5LYxNVh19/qVyXxLekzjZePQ2Io4Di4ckiiXhJEWAj1elqImDS7kwG8dPf5gildd1KBuN
	ZGATV/VA0Hui4zYDh1C+sSFYAugRLag=
X-MC-Unique: ZuGjwf9_O02pHtA8FBXCrg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: [PATCH 6/6] block: remove bdrv_co_io_plug() API
Date: Wed, 17 May 2023 18:10:22 -0400
Message-Id: <20230517221022.325091-7-stefanha@redhat.com>
In-Reply-To: <20230517221022.325091-1-stefanha@redhat.com>
References: <20230517221022.325091-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

No block driver implements .bdrv_co_io_plug() anymore. Get rid of the
function pointers.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/block-io.h         |  3 ---
 include/block/block_int-common.h | 11 ----------
 block/io.c                       | 37 --------------------------------
 3 files changed, 51 deletions(-)

diff --git a/include/block/block-io.h b/include/block/block-io.h
index a27e471a87..43af816d75 100644
--- a/include/block/block-io.h
+++ b/include/block/block-io.h
@@ -259,9 +259,6 @@ void coroutine_fn bdrv_co_leave(BlockDriverState *bs, AioContext *old_ctx);
 
 AioContext *child_of_bds_get_parent_aio_context(BdrvChild *c);
 
-void coroutine_fn GRAPH_RDLOCK bdrv_co_io_plug(BlockDriverState *bs);
-void coroutine_fn GRAPH_RDLOCK bdrv_co_io_unplug(BlockDriverState *bs);
-
 bool coroutine_fn GRAPH_RDLOCK
 bdrv_co_can_store_new_dirty_bitmap(BlockDriverState *bs, const char *name,
                                    uint32_t granularity, Error **errp);
diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
index dbec0e3bb4..fa369d83dd 100644
--- a/include/block/block_int-common.h
+++ b/include/block/block_int-common.h
@@ -753,11 +753,6 @@ struct BlockDriver {
     void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_debug_event)(
         BlockDriverState *bs, BlkdebugEvent event);
 
-    /* io queue for linux-aio */
-    void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_io_plug)(BlockDriverState *bs);
-    void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_io_unplug)(
-        BlockDriverState *bs);
-
     /**
      * bdrv_drain_begin is called if implemented in the beginning of a
      * drain operation to drain and stop any internal sources of requests in
@@ -1227,12 +1222,6 @@ struct BlockDriverState {
     unsigned int in_flight;
     unsigned int serialising_in_flight;
 
-    /*
-     * counter for nested bdrv_io_plug.
-     * Accessed with atomic ops.
-     */
-    unsigned io_plugged;
-
     /* do we need to tell the guest if we have a volatile write cache? */
     int enable_write_cache;
 
diff --git a/block/io.c b/block/io.c
index 4d54fda593..56b0c1ce6c 100644
--- a/block/io.c
+++ b/block/io.c
@@ -3219,43 +3219,6 @@ void *qemu_try_blockalign0(BlockDriverState *bs, size_t size)
     return mem;
 }
 
-void coroutine_fn bdrv_co_io_plug(BlockDriverState *bs)
-{
-    BdrvChild *child;
-    IO_CODE();
-    assert_bdrv_graph_readable();
-
-    QLIST_FOREACH(child, &bs->children, next) {
-        bdrv_co_io_plug(child->bs);
-    }
-
-    if (qatomic_fetch_inc(&bs->io_plugged) == 0) {
-        BlockDriver *drv = bs->drv;
-        if (drv && drv->bdrv_co_io_plug) {
-            drv->bdrv_co_io_plug(bs);
-        }
-    }
-}
-
-void coroutine_fn bdrv_co_io_unplug(BlockDriverState *bs)
-{
-    BdrvChild *child;
-    IO_CODE();
-    assert_bdrv_graph_readable();
-
-    assert(bs->io_plugged);
-    if (qatomic_fetch_dec(&bs->io_plugged) == 1) {
-        BlockDriver *drv = bs->drv;
-        if (drv && drv->bdrv_co_io_unplug) {
-            drv->bdrv_co_io_unplug(bs);
-        }
-    }
-
-    QLIST_FOREACH(child, &bs->children, next) {
-        bdrv_co_io_unplug(child->bs);
-    }
-}
-
 /* Helper that undoes bdrv_register_buf() when it fails partway through */
 static void GRAPH_RDLOCK
 bdrv_register_buf_rollback(BlockDriverState *bs, void *host, size_t size,
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 17 22:47:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 17 May 2023 22:47:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536115.834284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPvq-0004xD-Rh; Wed, 17 May 2023 22:47:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536115.834284; Wed, 17 May 2023 22:47:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzPvq-0004x6-Ot; Wed, 17 May 2023 22:47:42 +0000
Received: by outflank-mailman (input) for mailman id 536115;
 Wed, 17 May 2023 22:47:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzPvp-0004ww-CV; Wed, 17 May 2023 22:47:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzPvp-0000Ff-AC; Wed, 17 May 2023 22:47:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzPvo-0005d3-Sv; Wed, 17 May 2023 22:47:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzPvo-0001xL-ST; Wed, 17 May 2023 22:47:40 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=94/s/eD0UNPN6Ixi4qxqr9wU+eWo0pEAbg60SZdHRnY=; b=intLHefw4QOWfLNE6J6NKOwc+W
	KK1XAJKToOlPY+kUb75YGjbo/b2PLwe9k5nj/CCgxOMogzPkKymMvflKB6a9EMuIWVWhRSPDiAdFS
	8RANAq3OMVs20jN/CqRL2WE8AylIWzQ/MmTqcjpXsbF2qbWjYdG055aheefkv40oa284=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180689-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180689: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=42abf5b9c53eb1b1a902002fcda68708234152c3
X-Osstest-Versions-That:
    xen=8f9c8274a4e3e860bd777269cb2c91971e9fa69e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 17 May 2023 22:47:40 +0000

flight 180689 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180689/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180682
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180682
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180682
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180682
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180682
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180682
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180682
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180682
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180682
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180682
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180682
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180682
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  42abf5b9c53eb1b1a902002fcda68708234152c3
baseline version:
 xen                  8f9c8274a4e3e860bd777269cb2c91971e9fa69e

Last test of basis   180682  2023-05-16 15:10:00 Z    1 days
Testing same since   180689  2023-05-17 06:37:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jason Andryuk <jandryuk@gmail.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   8f9c8274a4..42abf5b9c5  42abf5b9c53eb1b1a902002fcda68708234152c3 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 18 00:59:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 00:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536123.834294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzRzY-0006yE-V8; Thu, 18 May 2023 00:59:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536123.834294; Thu, 18 May 2023 00:59:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzRzY-0006xs-PK; Thu, 18 May 2023 00:59:40 +0000
Received: by outflank-mailman (input) for mailman id 536123;
 Thu, 18 May 2023 00:59:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o87v=BH=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pzRzX-0006xm-EZ
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 00:59:39 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 41c74d91-f517-11ed-b22b-6b7b168915f2;
 Thu, 18 May 2023 02:59:36 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 88F7C61990;
 Thu, 18 May 2023 00:59:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0B264C433D2;
 Thu, 18 May 2023 00:59:33 +0000 (UTC)
X-Inumbo-ID: 41c74d91-f517-11ed-b22b-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684371575;
	bh=NXvLc3Z1//DALPEDIkF8W+GldJEwq3S9o4P8wPwCX94=;
	h=Date:From:To:cc:Subject:From;
	b=n+teuAyrQ1fqs3mAAyb9exYm3e3X8si+CiEKueIlkSDAA4pZ9IHoHOtB5bfxa5Vao
	 xVb6AubRrRajE+ufB2Knmd7sZuNDuK29jZ9nLeyh4gL7QLXCv8ATKlFcniDWLl5/cC
	 EqSFkdxDazTsHY2Jl43Y7DZHtQmP3ubDBd35oufziWmfpx7560R2ikFazgloQL3DBZ
	 vVLgoLYzoHVwY5yGNV8yCbrLNcuuCVQ8OdZTxNgQOiiGWIPkfDmvUT4clHs4CE2wU6
	 BBN7sNw2ziTecpRu4D1pc24myys0KBN83fnVk/OcKPbtxgHL87yi/TVQYKrlVzk2vy
	 wKJJR9/VmZf/Q==
Date: Wed, 17 May 2023 17:59:31 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: jbeulich@suse.com, roger.pau@citrix.com
cc: sstabellini@kernel.org, andrew.cooper3@citrix.com, 
    xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com, 
    xenia.ragiadakou@amd.com
Subject: PVH Dom0 related UART failure
Message-ID: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
Zen3 system and we already have a few successful tests with it, see
automation/gitlab-ci/test.yaml.

We managed to narrow the issue down to a console problem. We are
currently using console=com1 com1=115200,8n1,pci,msi as the Xen command
line options; this works with a PV Dom0, and the console is a PCI UART
card.

In the case of a PVH Dom0:
- it works without console=com1
- it works with console=com1 and with the patch appended below
- it doesn't work otherwise and crashes with this error:
https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK

What is the right way to fix it?

Keep in mind that I don't have access to the system except via gitlab-ci
pipelines. Marek (CCed) might have more info on the system and the PCI
UART he is using in case it's needed.

Many thanks for any help you can provide.

Cheers,

Stefano


diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 212a9c49ae..57623bc091 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -429,17 +428,6 @@ static void __init cf_check ns16550_init_postirq(struct serial_port *port)
 #ifdef NS16550_PCI
     if ( uart->bar || uart->ps_bdf_enable )
     {
-        if ( uart->param && uart->param->mmio &&
-             rangeset_add_range(mmio_ro_ranges, PFN_DOWN(uart->io_base),
-                                PFN_UP(uart->io_base + uart->io_size) - 1) )
-            printk(XENLOG_INFO "Error while adding MMIO range of device to mmio_ro_ranges\n");
-
-        if ( pci_ro_device(0, uart->ps_bdf[0],
-                           PCI_DEVFN(uart->ps_bdf[1], uart->ps_bdf[2])) )
-            printk(XENLOG_INFO "Could not mark config space of %02x:%02x.%u read-only.\n",
-                   uart->ps_bdf[0], uart->ps_bdf[1],
-                   uart->ps_bdf[2]);
-
         if ( uart->msi )
         {
             struct msi_info msi = {


From xen-devel-bounces@lists.xenproject.org Thu May 18 02:10:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 02:10:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536152.834321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzT5e-0005fu-9g; Thu, 18 May 2023 02:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536152.834321; Thu, 18 May 2023 02:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzT5e-0005fm-5N; Thu, 18 May 2023 02:10:02 +0000
Received: by outflank-mailman (input) for mailman id 536152;
 Thu, 18 May 2023 02:10:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y6hQ=BH=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1pzT5c-0005YW-Tg
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 02:10:00 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16f49c97-f521-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 04:09:58 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-965a68abfd4so271164366b.2
 for <xen-devel@lists.xenproject.org>; Wed, 17 May 2023 19:09:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16f49c97-f521-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684375798; x=1686967798;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Z50a9Kg5Zk3qOUOw9QwsrjSX3TOX4jrusUWVi33KFUg=;
        b=ORGup4/7ZVy5FGzIsi5RwciSWxM0IW5CBd7Hjchhqj7HoFaOXo5ik0IPAJY+XjLBG+
         69+qDmQf/OtAb3fIlR9hYTIBoJ5eaopf7+/X/slAvvBWwOkMyFh73zbPW71e3WoZButB
         31wDPVV5wbHeRAmC0y3VRpXujzvWx6Xu1iC+1xBXU6AGnPvl7tESXOvg9HYkm8fX26Ya
         j4C//yKZGvsnC1ed2WdhOcc2tTP4Ea9YSfPH3KFluLl9WfDognUARuwJkO9RE7mEWNbD
         Hl+KyYbvIetSY1qj2FKQ65tBKLZpdZTOb56uwhfV4O3SaLcVI3w9ywk+FHnw3lokWHJ3
         cjvQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684375798; x=1686967798;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Z50a9Kg5Zk3qOUOw9QwsrjSX3TOX4jrusUWVi33KFUg=;
        b=JIvxqFObScX0mdmaBMabSn9IS/bj2Pyi3M+z1rRB5i9NOL7riWamgTQw8MHKYV5LI4
         t/Y1FYpevmGuP40txDb8i/Ds3S2UGNI3sFlHpaUG2BM3m6yiRbqtLBZz+t83GR90m4AR
         b85wtDXl5GVzG0D01KyD7iKI3DEq+EIg6HjK5g9jfp8iutcUdoQYInanTMnAfAMY4frI
         5rN+F31MHbIHYTJ5TuQOoBFx19dVQ0lGA9xLB1ZaA1rtLiCozBpZwUNJGBRr0naPIipe
         yXvxXJJEz8PFU2iGIhYCkzqoKMeHKjVEnUgN3j4syp0O2oUMSX9gPExoZrj2WOXzyqjP
         pVDw==
X-Gm-Message-State: AC+VfDweRrQjqiMyU1eVMZmNVTx1HRCToW/SlxIbQ//7DHCaNm5fnQMY
	vq59EGJ1Cf+wjEwlOtD2WyPztTAm/CJVusbgRGQ=
X-Google-Smtp-Source: ACHHUZ5Vejp5o9w3CIlmm6LuPL/4+Z5LjlPnX3IOXyW0yDsUrbLtJfQYNQlGMDq+laS6GFMrkykjnw2r+TGQiB7jzZE=
X-Received: by 2002:a17:906:9b8f:b0:96a:6c3:5e72 with SMTP id
 dd15-20020a1709069b8f00b0096a06c35e72mr29803714ejc.10.1684375798063; Wed, 17
 May 2023 19:09:58 -0700 (PDT)
MIME-Version: 1.0
References: <20230320000554.8219-1-jandryuk@gmail.com> <48c55d33-aa16-4867-a477-f6df45c7d9d9@perard>
In-Reply-To: <48c55d33-aa16-4867-a477-f6df45c7d9d9@perard>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 17 May 2023 22:09:46 -0400
Message-ID: <CAKf6xpvzcR47oTcbyNWwTV9Bu2N0EaqjNh5CDv2XYTwGF5_qEA@mail.gmail.com>
Subject: Re: [PATCH] xen: Fix host pci for stubdom
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	qemu-devel@nongnu.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, May 15, 2023 at 11:04 AM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> On Sun, Mar 19, 2023 at 08:05:54PM -0400, Jason Andryuk wrote:
> > diff --git a/hw/xen/xen-host-pci-device.c b/hw/xen/xen-host-pci-device.c
> > index 8c6e9a1716..51a72b432d 100644
> > --- a/hw/xen/xen-host-pci-device.c
> > +++ b/hw/xen/xen-host-pci-device.c
> > @@ -33,13 +34,101 @@
> >  #define IORESOURCE_PREFETCH     0x00001000      /* No side effects */
> >  #define IORESOURCE_MEM_64       0x00100000
> >
> > +/*
> > + * Non-passthrough (dom0) accesses are local PCI devices and use the given BDF.
> > + * Passthrough (stubdom) accesses are through a PV frontend PCI device.  Those
> > + * either have a BDF identical to the backend's BDF (xen-backend.passthrough=1)
> > + * or a local virtual BDF (xen-backend.passthrough=0).
> > + *
> > + * We are always given the backend's BDF and need to look up the appropriate
> > + * local BDF for sysfs access.
> > + */
> > +static void xen_host_pci_fill_local_addr(XenHostPCIDevice *d, Error **errp)
> > +{
> > +    unsigned int num_devs, len, i;
> > +    unsigned int domain, bus, dev, func;
> > +    char *be_path;
> > +    char path[80];
> > +    char *msg;
> > +
> > +    be_path = qemu_xen_xs_read(xenstore, 0, "device/pci/0/backend", &len);
> > +    if (!be_path) {
> > +        /*
> > +         * be_path doesn't exist, so we are dealing with a local
> > +         * (non-passthrough) device.
> > +         */
> > +        d->local_domain = d->domain;
> > +        d->local_bus = d->bus;
> > +        d->local_dev = d->dev;
> > +        d->local_func = d->func;
> > +
> > +        return;
> > +    }
> > +
> > +    snprintf(path, sizeof(path), "%s/num_devs", be_path);
>
> Is 80 bytes for `path` enough?
> What if the path is truncated due to the limit?
>
>
> There's xs_node_scanf() which might be useful. It does the error
> handling and call scanf(). But I'm not sure if it can be used here, in
> this file.

Thanks for the suggestion - I'll take a look.  Your other comments
sound good, too.

Also, for the next version, I plan to change the From: to Marek. I was
thinking of doing it earlier, but failed to do so when it was time to
send out the patch.  Most of the code is Marek's from his Qubes
stubdom repo.  My modifications were to make it work with non-stubdom
as well.

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 18 02:25:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 02:25:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536165.834330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzTKp-0008Ah-II; Thu, 18 May 2023 02:25:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536165.834330; Thu, 18 May 2023 02:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzTKp-0008Aa-Fk; Thu, 18 May 2023 02:25:43 +0000
Received: by outflank-mailman (input) for mailman id 536165;
 Thu, 18 May 2023 02:25:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzTKn-0008AQ-V5; Thu, 18 May 2023 02:25:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzTKn-0005dG-Kh; Thu, 18 May 2023 02:25:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzTKn-0002Ri-0Q; Thu, 18 May 2023 02:25:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzTKm-0002KQ-WE; Thu, 18 May 2023 02:25:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pgAddrm/pNtZfuupaW6c4liWtxZl+EFK475+exW2oWk=; b=4LcCfO6IIAUiNi9DHnUCn6ZAY1
	GqdUrHL/QOItyrfryIND/B8ep4pvDPWeF4TbHrUjQY+dQ+5bn3nPNIUM+tltcgUuTZIRlYAKW5OJm
	EFu9+HIvl2CBkBwsdHf+fStpc5bb5lxbZhT7G6ToGbl8/rcWjEZz5k4rbSz5gumU55mg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180694-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180694: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
X-Osstest-Versions-That:
    xen=42abf5b9c53eb1b1a902002fcda68708234152c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 May 2023 02:25:40 +0000

flight 180694 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180694/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a
baseline version:
 xen                  42abf5b9c53eb1b1a902002fcda68708234152c3

Last test of basis   180685  2023-05-16 20:03:38 Z    1 days
Testing same since   180694  2023-05-18 00:03:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   42abf5b9c5..753d903a6f  753d903a6f2d1e68d98487d36449b5739c28d65a -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 18 02:26:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 02:26:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536170.834341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzTLi-0000F5-St; Thu, 18 May 2023 02:26:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536170.834341; Thu, 18 May 2023 02:26:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzTLi-0000Ey-PS; Thu, 18 May 2023 02:26:38 +0000
Received: by outflank-mailman (input) for mailman id 536170;
 Thu, 18 May 2023 02:26:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzTLh-0000Ef-Cl; Thu, 18 May 2023 02:26:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzTLh-0005eL-BS; Thu, 18 May 2023 02:26:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzTLh-0002SZ-09; Thu, 18 May 2023 02:26:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzTLg-0002tu-Vy; Thu, 18 May 2023 02:26:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o1NmDYarkwOvbQow3pnBfAcVHJ1Dk0cmFwGWPhtgjFw=; b=TAkEpKCIOY13AxQf2bgGQXTY0T
	4cIJKLoumlSf4QokDNy0K4h8VdmwV2WEuEE4sndj/r67HyL7qZ6QhTPU2z9eq1YQAOadmGlDN4sRe
	fpBOwgCBp099NMs6JXBdDw+OeYgUTkoNgOuNdgRkAKrzZU2Qxc0Xe+11u266xdzlU7RQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180695-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180695: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0abfb0be6cf78a8e962383e85cec57851ddae5bc
X-Osstest-Versions-That:
    ovmf=cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 May 2023 02:26:36 +0000

flight 180695 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180695/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0abfb0be6cf78a8e962383e85cec57851ddae5bc
baseline version:
 ovmf                 cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e

Last test of basis   180675  2023-05-15 21:40:50 Z    2 days
Testing same since   180695  2023-05-18 00:10:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrei Warkentin <andrei.warkentin@intel.com>
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cafb4f3f36..0abfb0be6c  0abfb0be6cf78a8e962383e85cec57851ddae5bc -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 18 05:18:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 05:18:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536220.834367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzW1o-0002HY-FF; Thu, 18 May 2023 05:18:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536220.834367; Thu, 18 May 2023 05:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzW1o-0002HR-CF; Thu, 18 May 2023 05:18:16 +0000
Received: by outflank-mailman (input) for mailman id 536220;
 Thu, 18 May 2023 05:18:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzW1n-0002HH-LB; Thu, 18 May 2023 05:18:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzW1n-0002R5-B8; Thu, 18 May 2023 05:18:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzW1n-0003YO-0g; Thu, 18 May 2023 05:18:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzW1n-0008L2-0G; Thu, 18 May 2023 05:18:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Q/3DqFOuZJIkNgxXZ1UzbIEacoKrn3XhlGm/D/2QOOk=; b=nLX959KcpIMTOMscMbwWuXeqIn
	ZkjjEzeEbO3/ji0vIimET+no6QBcVqgsK6cjXqEHLJezDroZwYtqpCnOC3QwKzPVwyfYF1kKxsiUy
	ysFZ45ns6VE8N2ph5ixLMYnllrQGp76OY5IbN7U7vRq7wVcNRNsAE8OK3sp6P7heL3hk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180690-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 180690: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-xsm:guest-localmigrate/x10:fail:heisenbug
    linux-5.4:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start.2:fail:heisenbug
    linux-5.4:test-amd64-i386-libvirt-raw:xen-install:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f53660ec669f60c772fdf7d75d1c24d288547cee
X-Osstest-Versions-That:
    linux=ea7862c507eca54ea6caad9dcfc8bba5e749fbde
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 May 2023 05:18:15 +0000

flight 180690 linux-5.4 real [real]
flight 180697 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180690/
http://logs.test-lab.xenproject.org/osstest/logs/180697/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-xsm    20 guest-localmigrate/x10 fail pass in 180697-retest
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start      fail pass in 180697-retest
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail pass in 180697-retest
 test-armhf-armhf-xl-multivcpu 19 guest-start.2      fail pass in 180697-retest

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw   7 xen-install                  fail  like 180443
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 180443
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180461
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180461
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180461
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180461
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180461
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180461
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180461
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180461
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180461
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180461
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180461
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180461
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f53660ec669f60c772fdf7d75d1c24d288547cee
baseline version:
 linux                ea7862c507eca54ea6caad9dcfc8bba5e749fbde

Last test of basis   180461  2023-04-28 05:27:42 Z   19 days
Testing same since   180690  2023-05-17 09:46:05 Z    0 days    1 attempts

------------------------------------------------------------
310 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   ea7862c507ec..f53660ec669f  f53660ec669f60c772fdf7d75d1c24d288547cee -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu May 18 07:17:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 07:17:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536228.834378 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzXsm-0006FL-SY; Thu, 18 May 2023 07:17:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536228.834378; Thu, 18 May 2023 07:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzXsm-0006FE-OW; Thu, 18 May 2023 07:17:04 +0000
Received: by outflank-mailman (input) for mailman id 536228;
 Thu, 18 May 2023 07:17:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzXsl-0006F4-TL; Thu, 18 May 2023 07:17:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzXsl-0005q9-Ib; Thu, 18 May 2023 07:17:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzXsl-0008Ek-0W; Thu, 18 May 2023 07:17:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzXsl-0004zi-07; Thu, 18 May 2023 07:17:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+csqpyu6NEItiPhcWiy0SL+uxwuicMLWRwF0k93JfvY=; b=B2msRCM7qYQdRtFNey6cjz3i60
	BsOU93cofvxJC4GjcUo224A2wxsjKEzCdg6+Gtp3DFiiVV1KX9f69DNXID3EUqUPg9ydewDg3pTla
	CS4lekj8zk7Lbl1ykt4ylifXz976Lb4HlVMV5cyltnkjhbrynz7F9w/eaeYqpnNzpSnE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180691-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180691: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
X-Osstest-Versions-That:
    qemuu=f9d58e0ca53b3f470b84725a7b5e47fcf446a2ea
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 May 2023 07:17:03 +0000

flight 180691 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180691/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180686
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180686
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180686
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180686
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180686
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180686
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180686
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180686
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a
baseline version:
 qemuu                f9d58e0ca53b3f470b84725a7b5e47fcf446a2ea

Last test of basis   180686  2023-05-16 20:07:22 Z    1 days
Testing same since   180691  2023-05-17 10:45:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   f9d58e0ca5..6972ef1440  6972ef1440a9d685482d78672620a7482f2bd09a -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu May 18 07:24:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 07:24:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536234.834387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzXzs-0007js-Jt; Thu, 18 May 2023 07:24:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536234.834387; Thu, 18 May 2023 07:24:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzXzs-0007jl-Gt; Thu, 18 May 2023 07:24:24 +0000
Received: by outflank-mailman (input) for mailman id 536234;
 Thu, 18 May 2023 07:24:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tqpr=BH=amd.com=Xenia.Ragiadakou@srs-se1.protection.inumbo.net>)
 id 1pzXzr-0007jf-Ve
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 07:24:24 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20617.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::617])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0054737a-f54d-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 09:24:20 +0200 (CEST)
Received: from MW4PR04CA0264.namprd04.prod.outlook.com (2603:10b6:303:88::29)
 by CH2PR12MB4860.namprd12.prod.outlook.com (2603:10b6:610:6c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Thu, 18 May
 2023 07:24:16 +0000
Received: from CO1NAM11FT008.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:88:cafe::4c) by MW4PR04CA0264.outlook.office365.com
 (2603:10b6:303:88::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 07:24:16 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT008.mail.protection.outlook.com (10.13.175.191) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 07:24:16 +0000
Received: from [10.0.2.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 02:24:14 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0054737a-f54d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dFPQPwBrsbqjLisvRs4nBw3vMETRfQkPh/bJ0Qcm/pcEQJWo0c3gGrAXik+cdmxkY7Pdhlx9p8FJzPaNM4YDD+mXZ7tao0k0xbZwp6ZJu6tM1/jODoRzFRKUS0zjJSMm/nON0KnowMg280+GN/ebYaX58k7xs5Yk8JA+iXALlb1V2AolbM8dmpOl2MH21rwHY5uD8ZV+iWBBYL47c9Y2YzyJ7fNefUuJpRbewthr+KaO5R4pD9N0M0lR54+7wghRIfa98l2xrrJoaIhIpDeu8fwa/nWeInuXcUv5ynIH0vnVq7J0Qh95UufHqp47yJdt5HKMOA/eDxi0XDfs8tsufA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6SNJs7fPg68exoDmqAEaEtOXRxw5Tkn1OXE+a9gCg30=;
 b=XdPNXvpExShLMrENNkHBc5xmRzojkh72mSw4SLiqswmCNQP5EPUc99fiYoTTIqjz6RNY5wrpWJCTdqM0xb+UX/Cd8uC1P/fa0Gu9qY8MBU1H+pxct3MiPmDKvR1llsCO+QUbkbHKwj4o0RBqqkweIG7Y/7Hz7yM6iohjy6bdzwOtDk/nR0YbFfkxoiFggcCIOUfCLKpLZ8FuyCoZ01BrINop7jp0sHRlAwNhSefedlLKJcsedZ7h96ip3YxnX26yOuhYGsm3Fe8CWAfhU/tQaxaZjDvWq/A4cgj4dnC9XKkRIkVm4da3fQ8+Qjv0xtO00Xv702qqv/snPHtZrr74qw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6SNJs7fPg68exoDmqAEaEtOXRxw5Tkn1OXE+a9gCg30=;
 b=KPRAE0DvP7A+E0l+faA/Pzuh2Wo1PNQg5Opd3nGpU9yf3UKl/ev2CcBzuMgrHSzV46FZEJAGhyHEiOUrrr26fENpkiADjCGLa/st6LOyxFbPjNlz/Y3DIPEoSxuxdmcPgJA+a9qTwvVfImnbRPG9pfsX1/PycPmfAqjWwD+uW8I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <6f3f4e12-ae5b-c58c-891c-fbce08283206@amd.com>
Date: Thu, 18 May 2023 10:24:10 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>
CC: <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<stefano.stabellini@amd.com>, <roger.pau@citrix.com>,
	<andrew.cooper3@citrix.com>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
 <c22a8925-15e4-47b9-6f5d-f85bbe802255@suse.com>
From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
In-Reply-To: <c22a8925-15e4-47b9-6f5d-f85bbe802255@suse.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT008:EE_|CH2PR12MB4860:EE_
X-MS-Office365-Filtering-Correlation-Id: 8c4137a0-504b-41f2-edbb-08db5770e36a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 07:24:16.4884
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8c4137a0-504b-41f2-edbb-08db5770e36a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT008.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4860


On 15/5/23 17:17, Jan Beulich wrote:
> On 13.05.2023 03:17, Stefano Stabellini wrote:
>> From: Stefano Stabellini <stefano.stabellini@amd.com>
>>
>> Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
>> the tables in the guest. Instead, copy the tables to Dom0.
> Do you really mean "in the guest" (i.e. from Xen's perspective, i.e.
> ignoring that when running on qemu it is kind of a guest itself)?
>
> I also consider the statement too broad anyway: Various people have
> run PVH Dom0 without running into such an issue, so it's clearly not
> just "leads to".

In my opinion the issue is broader.

In pvh_setup_acpi(), the code adding the ACPI tables to the dom0 memory map 
does not check the return value of pvh_add_mem_range(). If there is an 
overlap and the overlapping region is marked as E820_ACPI, it maps not 
just the allowed tables but the entire overlapping range, while if the 
overlapping range is marked as E820_RESERVED, it does not map the tables 
at all (the issue that Stefano saw with QEMU). Since the dom0 memory map 
is initialized from the native one, the code adding the ACPI table 
memory ranges will naturally fall into one of these two cases.

So even when not running into this issue, pvh_add_mem_range() still 
fails, and the mapped memory range is wider than the allowed one.

Xenia



From xen-devel-bounces@lists.xenproject.org Thu May 18 08:35:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 08:35:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536244.834398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzZ68-0007Ij-2b; Thu, 18 May 2023 08:34:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536244.834398; Thu, 18 May 2023 08:34:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzZ67-0007Ic-Un; Thu, 18 May 2023 08:34:55 +0000
Received: by outflank-mailman (input) for mailman id 536244;
 Thu, 18 May 2023 08:34:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=joHs=BH=citrix.com=prvs=495b323d3=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pzZ66-0007IW-Sa
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 08:34:55 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dab760dd-f556-11ed-b22b-6b7b168915f2;
 Thu, 18 May 2023 10:34:52 +0200 (CEST)
Received: from mail-bn7nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 May 2023 04:34:38 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB6769.namprd03.prod.outlook.com (2603:10b6:510:120::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Thu, 18 May
 2023 08:34:33 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.019; Thu, 18 May 2023
 08:34:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dab760dd-f556-11ed-b22b-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684398891;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=yYyjy6cYQ7Zlm7OGQQ3EZXNb98J8mMHDjexKfljOy80=;
  b=bSDBTN5gg9TVbQYgeZI5NnD7JECkOle5f1454SCxndBNgTX7MKPXMeZJ
   9LAPv7/XEZOqSTVyCVdQ/ta7CTI7QzNG0rJwh2r4AQmzbjkBdp9UyraKy
   dL3ux2Hetq+3jAke6yeYMDLXkTzmUj2iSYZKxXjP4Onk26TjzLSmnVXh+
   4=;
X-IronPort-RemoteIP: 104.47.70.106
X-IronPort-MID: 108247284
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,284,1677560400"; 
   d="scan'208";a="108247284"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZDj1Bw4xwXn7sFbso6cO11RL6ll6YBwvJvxDx88NYvbglgenJrmOWrR49H0FWem+67w196iLVI8OaMDnsxUUl1C7+Uh/wHyl41BtLG2TFqAqan6UZfTfRpmVxRYRh/rfEFJOepXectlSV9mbEa+YNW7goxtKnXp5vxXXZk4vj0EInbL46lJN5L7uBkS+WyuNYA7gdgnEoi/RauWKfRYkdnaTsdhyW5CfEWAhhwihOr8OmV0n3KBbZMnnjidwAWzsOg6Ladlbis7PsaYHjuHpLoRf/O8MK7MS2RBe6d7evprh1iKoyqTwJQaKCSLdQVFECk5MBjKCe/WwmemT+m3rzA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=umiSJINlN7iH5/qvE4Jx983fQszMg/7S/RA0VjeOv14=;
 b=coiXy1qGPgTEU7TMnqONerqfPF4ZSdPRiDKsk+xHX4+gtYIVeCKkyNu+VjEE1He1eeVzq/L7bkNRg5If0BJOOMYTEyhwNyye3HvGt/tTz6F3FSE1Cwei3Cgms0PuCA3vhKVQThfi4uMKMUPTifM3rM+rZrclM+JlPHiG4+08jHZyO2mEkdwPT+BNVPgEGeUiHi1qUf44oZa4OsLqil2poyhNrKHfIK75M0fiqmCb83Sy3/WCDahBGTQunlrrxZU5hCcij3nKzmurZVr+yGQ9yV3EuCmGmKxe9XTnxitAyvSjKqNIyCpdeT1/KoAPDpfdsWSynh+UkaPsakJnhrDbJw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=umiSJINlN7iH5/qvE4Jx983fQszMg/7S/RA0VjeOv14=;
 b=jDRFa9amZ4UHoVx30RwdsS5tUbXF4bnn4QuDQxeaHxwjZongi4dj49g6R1Mk22fKc5Hgh6Mg5nofbCFa8fn2yVBxjs2ENRgRK6dbABkw1D+O4LEax8bTg/U/phBdF0MWQUlFVAiym7SB00Z9Jlu9tF0k2oYFL3Didq6Ba3uqmNE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 18 May 2023 10:34:25 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com,
	xen-devel@lists.xenproject.org, Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Message-ID: <ZGXjEQ2yM5CXqn3c@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
 <ZGH+5OKqnjTjUr/F@Air-de-Roger>
 <alpine.DEB.2.22.394.2305151656500.4125828@ubuntu-linux-20-04-desktop>
 <ZGNLArlA0Yei4Fr0@Air-de-Roger>
 <alpine.DEB.2.22.394.2305161522480.128889@ubuntu-linux-20-04-desktop>
 <ZGSTGIMh6qvCLZSr@Air-de-Roger>
 <alpine.DEB.2.22.394.2305171354590.128889@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.22.394.2305171354590.128889@ubuntu-linux-20-04-desktop>
X-ClientProxiedBy: LO4P123CA0199.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a5::6) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH0PR03MB6769:EE_
X-MS-Office365-Filtering-Correlation-Id: 297b8550-87de-4c29-52fa-08db577ab45e
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 297b8550-87de-4c29-52fa-08db577ab45e
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 08:34:33.1464
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: G2wCCo2UBmxXZStd1/h+2UupshJynhpIa4OLV/MHQg6QL0PQO4rHZj/2+6atfFr+PoMCvyKjVduTZ4Xo57Ympg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6769

On Wed, May 17, 2023 at 02:00:01PM -0700, Stefano Stabellini wrote:
> On Wed, 17 May 2023, Roger Pau Monné wrote:
> > On Tue, May 16, 2023 at 04:34:09PM -0700, Stefano Stabellini wrote:
> > > On Tue, 16 May 2023, Roger Pau Monné wrote:
> > > > On Mon, May 15, 2023 at 05:11:25PM -0700, Stefano Stabellini wrote:
> > > > > On Mon, 15 May 2023, Roger Pau Monné wrote:
> > > > > > On Fri, May 12, 2023 at 06:17:20PM -0700, Stefano Stabellini wrote:
> > > > > > > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > > > > 
> > > > > > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> > > > > > > the tables in the guest. Instead, copy the tables to Dom0.
> > > > > > > 
> > > > > > > This is a workaround.
> > > > > > > 
> > > > > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > > > > ---
> > > > > > > As mentioned in the cover letter, this is a RFC workaround as I don't
> > > > > > > know the cause of the underlying problem. I do know that this patch
> > > > > > > solves what would be otherwise a hang at boot when Dom0 PVH attempts to
> > > > > > > parse ACPI tables.
> > > > > > 
> > > > > > I'm unsure how safe this is for native systems, as it's possible for
> > > > > > firmware to modify the data in the tables, so copying them would
> > > > > > break that functionality.
> > > > > > 
> > > > > > I think we need to get to the root cause that triggers this behavior
> > > > > > on QEMU.  Is it the table checksum that fails, or something else?  Is
> > > > > > there an error from Linux you could reference?
> > > > > 
> > > > > I agree with you but so far I haven't managed to find a way to the root
> > > > > of the issue. Here is what I know. These are the logs of a successful
> > > > > boot using this patch:
> > > > > 
> > > > > [   10.437488] ACPI: Early table checksum verification disabled
> > > > > [   10.439345] ACPI: RSDP 0x000000004005F955 000024 (v02 BOCHS )
> > > > > [   10.441033] ACPI: RSDT 0x000000004005F979 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> > > > > [   10.444045] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> > > > > [   10.445984] ACPI: FACP 0x000000004005FA65 000074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
> > > > > [   10.447170] ACPI BIOS Warning (bug): Incorrect checksum in table [FACP] - 0x67, should be 0x30 (20220331/tbprint-174)
> > > > > [   10.449522] ACPI: DSDT 0x000000004005FB19 00145D (v01 BOCHS  BXPCDSDT 00000001 BXPC 00000001)
> > > > > [   10.451258] ACPI: FACS 0x000000004005FAD9 000040
> > > > > [   10.452245] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > > > > [   10.452389] ACPI: Reserving FACP table memory at [mem 0x4005fa65-0x4005fad8]
> > > > > [   10.452497] ACPI: Reserving DSDT table memory at [mem 0x4005fb19-0x40060f75]
> > > > > [   10.452602] ACPI: Reserving FACS table memory at [mem 0x4005fad9-0x4005fb18]
> > > > > 
> > > > > 
> > > > > And these are the logs of the same boot (unsuccessful) without this
> > > > > patch:
> > > > > 
> > > > > [   10.516015] ACPI: Early table checksum verification disabled
> > > > > [   10.517732] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> > > > > [   10.519535] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
> > > > > [   10.522523] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
> > > > > [   10.527453] ACPI: ���� 0x000000007FFE149D FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > > > > [   10.528362] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > > > > [   10.528491] ACPI: Reserving ���� table memory at [mem 0x7ffe149d-0x17ffe149b]
> > > > > 
> > > > > It is clearly a memory corruption around FACS but I couldn't find the
> > > > > reason for it. The mapping code looks correct. I hope you can suggest a
> > > > > way to narrow down the problem. If I could, I would suggest to apply
> > > > > this patch just for the QEMU PVH tests but we don't have the
> > > > > infrastructure for that in gitlab-ci as there is a single Xen build for
> > > > > all tests.
> > > > 
> > > > Would be helpful to see the memory map provided to Linux, just in case
> > > > we messed up and there's some overlap.
> > > 
> > > Everything looks correct. Here are some more logs:
> > > 
> > > (XEN) Xen-e820 RAM map:
> > > (XEN)  [0000000000000000, 000000000009fbff] (usable)
> > > (XEN)  [000000000009fc00, 000000000009ffff] (reserved)
> > > (XEN)  [00000000000f0000, 00000000000fffff] (reserved)
> > > (XEN)  [0000000000100000, 000000007ffdffff] (usable)
> > > (XEN)  [000000007ffe0000, 000000007fffffff] (reserved)
> > > (XEN)  [00000000fffc0000, 00000000ffffffff] (reserved)
> > > (XEN)  [000000fd00000000, 000000ffffffffff] (reserved)
> > > (XEN) Microcode loading not available
> > > (XEN) New Xen image base address: 0x7f600000
> > > (XEN) System RAM: 2047MB (2096636kB)
> > > (XEN) ACPI: RSDP 000F58D0, 0014 (r0 BOCHS )
> > > (XEN) ACPI: RSDT 7FFE1B21, 0034 (r1 BOCHS  BXPC            1 BXPC        1)
> > > (XEN) ACPI: FACP 7FFE19CD, 0074 (r1 BOCHS  BXPC            1 BXPC        1)
> > > (XEN) ACPI: DSDT 7FFE0040, 198D (r1 BOCHS  BXPC            1 BXPC        1)
> > > (XEN) ACPI: FACS 7FFE0000, 0040
> > > (XEN) ACPI: APIC 7FFE1A41, 0080 (r1 BOCHS  BXPC            1 BXPC        1)
> > > (XEN) ACPI: HPET 7FFE1AC1, 0038 (r1 BOCHS  BXPC            1 BXPC        1)
> > > (XEN) ACPI: WAET 7FFE1AF9, 0028 (r1 BOCHS  BXPC            1 BXPC        1)
> > > [...]
> > > (XEN) Dom0 memory map:
> > > (XEN)  [0000000000000000, 000000000009efff] (usable)
> > > (XEN)  [000000000009fc00, 000000000009ffff] (reserved)
> > > (XEN)  [00000000000f0000, 00000000000fffff] (reserved)
> > > (XEN)  [0000000000100000, 0000000040060f1d] (usable)
> > > (XEN)  [0000000040060f1e, 0000000040060fa7] (ACPI data)
> > > (XEN)  [0000000040061000, 000000007ffdffff] (unusable)
> > > (XEN)  [000000007ffe0000, 000000007fffffff] (reserved)
> > > (XEN)  [00000000fffc0000, 00000000ffffffff] (reserved)
> > > (XEN)  [000000fd00000000, 000000ffffffffff] (reserved)
> > > [...]
> > > [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009efff] usable
> > > [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x00000000000fffff] reserved
> > > [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000040060f1d] usable
> > > [    0.000000] BIOS-e820: [mem 0x0000000040060f1e-0x0000000040060fa7] ACPI data
> > > [    0.000000] BIOS-e820: [mem 0x0000000040061000-0x000000007ffdffff] unusable
> > > [    0.000000] BIOS-e820: [mem 0x000000007ffe0000-0x000000007fffffff] reserved
> > > [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
> > > [    0.000000] BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
> > > [...]
> > > [   10.102427] ACPI: Early table checksum verification disabled
> > > [   10.104455] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> > > [   10.106250] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> > > [   10.109549] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> > > [   10.115173] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > > [   10.116054] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > > [   10.116182] ACPI: Reserving ���� table memory at [mem 0x7ffe19cd-0x17ffe19cb]
> > > 
> > > 
> > > 
> > > > It seems like one of the XSDT entries (the FADT one) is corrupt?
> > > > 
> > > > Could you maybe add some debug to the Xen-crafted XSDT placement.
> > > 
> > > I added a printk just after:
> > > 
> > >   xsdt->table_offset_entry[j++] = tables[i].address;
> > > 
> > > And it printed only once:
> > > 
> > >   (XEN) DEBUG pvh_setup_acpi_xsdt 1000 name=FACP address=7ffe19cd
> > > 
> > > That actually matches the address read by Linux:
> > > 
> > >   [   10.175448] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > > 
> > > So the address seems correct. It is the content of the FADT/FACP table
> > > that is corrupted.
> > > 
> > > I wrote the following function in Xen:
> > > 
> > > static void check(void)
> > > {
> > >     unsigned long addr = 0x7ffe19cd;
> > >     struct acpi_table_fadt *fadt;
> > >     fadt = acpi_os_map_memory(addr, sizeof(*fadt));
> > >     printk("DEBUG %s %d s=%s\n",__func__,__LINE__,fadt->header.signature);
> > >     acpi_os_unmap_memory(fadt, sizeof(*fadt));
> > > }
> > > 
> > > It prints the right table signature at the end of pvh_setup_acpi.
> > > I also added a call at the top of xenmem_add_to_physmap_one, and the
> > > signature is still correct. Then I added a call at the beginning of
> > > __update_vcpu_system_time. Here is the surprise: from Xen point of view
> > > the table never gets corrupted. Here are the logs:
> > > 
> > > [...]
> > > (XEN) DEBUG fadt_check 1551 s=FACPt
> > > (XEN) DEBUG fadt_check 1551 s=FACPt
> > > (XEN) DEBUG fadt_check 1551 s=FACPt
> > > (XEN) DEBUG fadt_check 1551 s=FACPt
> > > (XEN) d0v0: upcall vector f3
> > > [    0.000000] Linux version 6.1.19 (root@124de7fbba7f) (gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_3
> > > [    0.000000] Command line: console=hvc0
> > > [...]
> > > [   10.371610] ACPI: Early table checksum verification disabled
> > > [   10.373633] ACPI: RSDP 0x0000000040060F1E 000024 (v02 BOCHS )
> > > [   10.375548] ACPI: RSDT 0x0000000040060F42 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> > > [   10.378732] ACPI: APIC 0x0000000040060F76 00008A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
> > > [   10.384188] ACPI: ���� 0x000000007FFE19CD FFFFFFFF (v255 ������ �������� FFFFFFFF ���� FFFFFFFF)
> > > [   10.385374] ACPI: Reserving APIC table memory at [mem 0x40060f76-0x40060fff]
> > > [   10.385519] ACPI: Reserving ���� table memory at [mem 0x7ffe19cd-0x17ffe19cb]
> > > [...]
> > > (XEN) DEBUG fadt_check 1551 s=FACPt
> > > (XEN) DEBUG fadt_check 1551 s=FACPt
> > > (XEN) DEBUG fadt_check 1551 s=FACPt
> > > (XEN) DEBUG fadt_check 1551 s=FACPt
> > > 
> > > 
> > > So it looks like it is a problem with the mapping itself? Xen sees the
> > > data correctly and Linux sees it corrupted?
> > 
> > It seems to me like the page is not correctly mapped, and so 1s are
> > returned? (same behavior as a hole).  IOW: would seem to me like MMIO
> > areas are not correctly handled by nested NPT? (I assume you are
> > running this on AMD).
> > 
> > Does it make a difference if you try to boot with dom0=pvh,shadow?
> > 
> > A couple of wild ideas.  Maybe the nested virt support that you are
> > using doesn't handle the UC bit in second stage page table entries?
> > You could try removing this in p2m_type_to_flags() (see the
> > p2m_mmio_direct case).
> > 
> > Another wild idea I have is that the emulated NPT code doesn't like
> > having the bits 63:52 from the PTE set to anything different than 0,
> > and hence only p2m_ram_rw works (p2m_mmio_direct is 5).
> 
> Many thanks to Xenia for figuring out the root cause of the bug. The
> underlying memory region is already added as E820_RESERVED to the guest
> (instead of E820_ACPI). When pvh_add_mem_range is called with E820_ACPI
> as parameter for the interesting table, pvh_add_mem_range returns with
> -EEXIST without doing anything.
> 
> The original fix by Xenia was to carve out the relevant subset of the
> reserved region and mark it as E820_ACPI. Instead, I rewrote it as
> changing the type of the entire region to E820_ACPI because it is
> simpler and doesn't have to deal with the edge cases (partially
> overlapping, overlapping 2 existing regions, etc.)
> 
> What do you think?

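[Editorial note: the failure mode described above can be reproduced with a
minimal Python model of the overlap check. This is a sketch only — the range
endpoints are hypothetical and the real pvh_add_mem_range() in Xen does more
than this — but it shows why the call silently does nothing:]

```python
EEXIST = 17
E820_RESERVED, E820_ACPI = 2, 3

def add_mem_range(e820, start, end, typ):
    """Toy model of pvh_add_mem_range(): refuse any overlapping range."""
    for s, e, _ in e820:
        if start < e and end > s:  # ranges overlap
            return -EEXIST
    e820.append((start, end, typ))
    return 0

# The ACPI table lives inside a region already marked E820_RESERVED in the
# guest memory map, so re-adding it as E820_ACPI fails with -EEXIST, and
# the caller ignores the return value.
e820 = [(0x7ffe0000, 0x80000000, E820_RESERVED)]
rc = add_mem_range(e820, 0x7ffe19cd, 0x7ffe1a4d, E820_ACPI)
print(rc)  # -> -17
```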
Hm, I'm unsure whether wholesale converting reserved regions into ACPI
ones is correct; for example, Xen will handle reserved regions specially
when creating the IOMMU mappings, and RMRRs are also expected to live
in reserved regions.

The issue is IMO with the usage of dom0-iommu=none, which leads to
reserved regions not mapped in the dom0 physmap (see
arch_iommu_hwdom_init()).

One option might be to move the mapping of reserved regions from
arch_iommu_hwdom_init() into PVH dom0 build (as part of
pvh_populate_p2m()) and thus render arch_iommu_hwdom_init() PV only.
That would also imply that for PVH dom0 `dom0-iommu=map-reserved` is
fixed and cannot be modified (iow: reserved regions are always added to
the PVH dom0 physmap).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 18 09:31:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 09:31:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536251.834407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzZyr-0005Ks-BJ; Thu, 18 May 2023 09:31:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536251.834407; Thu, 18 May 2023 09:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzZyr-0005Kl-8F; Thu, 18 May 2023 09:31:29 +0000
Received: by outflank-mailman (input) for mailman id 536251;
 Thu, 18 May 2023 09:31:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=joHs=BH=citrix.com=prvs=495b323d3=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pzZyq-0005Kf-Ez
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 09:31:28 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c17a2eb7-f55e-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 11:31:25 +0200 (CEST)
Received: from mail-bn8nam12lp2175.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 May 2023 05:31:22 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SJ0PR03MB6455.namprd03.prod.outlook.com (2603:10b6:a03:38d::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 09:31:20 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.019; Thu, 18 May 2023
 09:31:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c17a2eb7-f55e-11ed-8611-37d641c3527e
Date: Thu, 18 May 2023 11:31:14 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@amd.com>,
	andrew.cooper3@citrix.com
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Message-ID: <ZGXwYsOX44/EBI3x@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
 <c22a8925-15e4-47b9-6f5d-f85bbe802255@suse.com>
 <6f3f4e12-ae5b-c58c-891c-fbce08283206@amd.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <6f3f4e12-ae5b-c58c-891c-fbce08283206@amd.com>
X-ClientProxiedBy: LO4P302CA0040.GBRP302.PROD.OUTLOOK.COM
 (2603:10a6:600:317::15) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0

On Thu, May 18, 2023 at 10:24:10AM +0300, Xenia Ragiadakou wrote:
> 
> On 15/5/23 17:17, Jan Beulich wrote:
> > On 13.05.2023 03:17, Stefano Stabellini wrote:
> > > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > 
> > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> > > the tables in the guest. Instead, copy the tables to Dom0.
> > Do you really mean "in the guest" (i.e. from Xen's perspective, i.e.
> > ignoring that when running on qemu it is kind of a guest itself)?
> > 
> > I also consider the statement too broad anyway: Various people have
> > run PVH Dom0 without running into such an issue, so it's clearly not
> > just "leads to".
> 
> In my opinion the issue is broader.
> 
> In pvh_setup_acpi(), the code adding the ACPI tables to dom0 memory map does
> not check the return value of pvh_add_mem_range(). If there is an overlap
> and the overlapping region is marked as E820_ACPI, it maps not just the
> allowed tables but the entire overlapping range,

But that's the intended behavior: all ACPI regions will be mapped into
the dom0 physmap; the filtering of the tables exposed to dom0 is done
in the XSDT, not by filtering the mapped regions.  Note such filtering
wouldn't be effective anyway, as the minimal granularity of physmap
entries is 4K, so multiple tables could live in the same 4K region.
Also Xen cannot parse dynamic tables (SSDT) or execute methods, and
hence doesn't know exactly which memory will be used.
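[Editorial note: the 4K granularity point can be illustrated with the RSDT
and APIC addresses and lengths from the boot log quoted at the top of this
thread:]

```python
PAGE_SIZE = 0x1000

def frames(start, length):
    """Set of 4K frame base addresses a [start, start+length) range touches."""
    first = start & ~(PAGE_SIZE - 1)
    return set(range(first, start + length, PAGE_SIZE))

rsdt = frames(0x40060F42, 0x34)   # RSDT from the log above
apic = frames(0x40060F76, 0x8A)   # APIC (MADT) from the log above
# Both tables live in the same 4K frame, so mapping one into the physmap
# necessarily maps the other as well -- per-table filtering of mappings
# cannot work at page granularity.
print(rsdt == apic)  # -> True
```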

Xen relies on the firmware to have the ACPI tables in ACPI, NVS or
RESERVED regions in order for them to be mapped into the guest physmap.
The call to pvh_add_mem_range() in pvh_setup_acpi() is just an attempt
to work around broken systems that have tables placed in memory map
holes, and hence ignoring the return value is fine.

> while if the overlapping
> range is marked as E820_RESERVED, it does not map the tables at all (the
> issue that Stefano saw with qemu). Since dom0 memory map is initialized
> based on the native one, the code adding the ACPI table memory ranges will
> naturally fall into one of the two cases above.

Xen does map them, but that's done in arch_iommu_hwdom_init(), which gets
short-circuited by the usage of dom0-iommu=none in your example.  See
my reply to Stefano about moving such mappings into pvh_populate_p2m().

> So even when not running into this issue, pvh_add_mem_range() still fails
> and the memory range mapped is wider than the allowed one.

The intention of that call to pvh_add_mem_range() is not to limit what
gets mapped into the dom0 physmap, but rather to work around bugs in the
firmware when ACPI tables are placed in memory map holes.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 18 09:35:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 09:35:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536256.834418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pza2U-0005wB-Qv; Thu, 18 May 2023 09:35:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536256.834418; Thu, 18 May 2023 09:35:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pza2U-0005w4-O4; Thu, 18 May 2023 09:35:14 +0000
Received: by outflank-mailman (input) for mailman id 536256;
 Thu, 18 May 2023 09:35:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pza2T-0005vy-Ux
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 09:35:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pza2T-0001um-9B; Thu, 18 May 2023 09:35:13 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.26.27]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pza2S-0005Q1-Vc; Thu, 18 May 2023 09:35:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
Date: Thu, 18 May 2023 10:35:11 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230424060248.1488859-2-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

Sorry for jumping late in the review.

On 24/04/2023 07:02, Luca Fancellu wrote:
> Enable Xen to handle the SVE extension, add code in cpufeature module
> to handle ZCR SVE register, disable trapping SVE feature on system
> boot only when SVE resources are accessed.
> While there, correct coding style for the comment on coprocessor
> trapping.
> 
> Now cptr_el2 is part of the domain context and it will be restored
> on context switch, this is a preparation for saving the SVE context
> which will be part of VFP operations, so restore it before the call
> to save VFP registers.
> To save an additional isb barrier, restore cptr_el2 before an
> existing isb barrier and move the call for saving VFP context after
> that barrier.
> 
> Change the KConfig entry to make ARM64_SVE symbol selectable, by
> default it will be not selected.
> 
> Create sve module and sve_asm.S that contains assembly routines for
> the SVE feature, this code is inspired from linux and it uses
> instruction encoding to be compatible with compilers that does not
> support SVE.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes from v5:
>   - Add R-by Bertrand
> Changes from v4:
>   - don't use fixed types in vl_to_zcr, forgot to address that in
>     v3, by mistake I changed that in patch 2, fixing now (Jan)
> Changes from v3:
>   - no changes
> Changes from v2:
>   - renamed sve_asm.S in sve-asm.S, new files should not contain
>     underscore in the name (Jan)
> Changes from v1:
>   - Add assert to vl_to_zcr, it is never called with vl==0, but just
>     to be sure it won't in the future.
> Changes from RFC:
>   - Moved restoring of cptr before an existing barrier (Julien)
>   - Marked the feature as unsupported for now (Julien)
>   - Trap and un-trap only when using SVE resources in
>     compute_max_zcr() (Julien)
> ---
>   xen/arch/arm/Kconfig                     | 10 +++--
>   xen/arch/arm/arm64/Makefile              |  1 +
>   xen/arch/arm/arm64/cpufeature.c          |  7 ++--
>   xen/arch/arm/arm64/sve-asm.S             | 48 +++++++++++++++++++++++
>   xen/arch/arm/arm64/sve.c                 | 50 ++++++++++++++++++++++++
>   xen/arch/arm/cpufeature.c                |  6 ++-
>   xen/arch/arm/domain.c                    |  9 +++--
>   xen/arch/arm/include/asm/arm64/sve.h     | 43 ++++++++++++++++++++
>   xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
>   xen/arch/arm/include/asm/cpufeature.h    | 14 +++++++
>   xen/arch/arm/include/asm/domain.h        |  1 +
>   xen/arch/arm/include/asm/processor.h     |  2 +
>   xen/arch/arm/setup.c                     |  5 ++-
>   xen/arch/arm/traps.c                     | 28 +++++++------
>   14 files changed, 201 insertions(+), 24 deletions(-)
>   create mode 100644 xen/arch/arm/arm64/sve-asm.S
>   create mode 100644 xen/arch/arm/arm64/sve.c
>   create mode 100644 xen/arch/arm/include/asm/arm64/sve.h
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 239d3aed3c7f..41f45d8d1203 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -112,11 +112,15 @@ config ARM64_PTR_AUTH
>   	  This feature is not supported in Xen.
>   
>   config ARM64_SVE
> -	def_bool n
> +	bool "Enable Scalar Vector Extension support (UNSUPPORTED)" if UNSUPPORTED
>   	depends on ARM_64
>   	help
> -	  Scalar Vector Extension support.
> -	  This feature is not supported in Xen.
> +	  Scalar Vector Extension (SVE/SVE2) support for guests.
> +
> +	  Please be aware that currently, enabling this feature will add latency on
> +	  VM context switch between SVE enabled guests, between not-enabled SVE
> +	  guests and SVE enabled guests and viceversa, compared to the time
> +	  required to switch between not-enabled SVE guests.
>   
>   config ARM64_MTE
>   	def_bool n
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 28481393e98f..54ad55c75cda 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -13,6 +13,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
>   obj-y += mm.o
>   obj-y += smc.o
>   obj-y += smpboot.o
> +obj-$(CONFIG_ARM64_SVE) += sve.o sve-asm.o
>   obj-y += traps.o
>   obj-y += vfp.o
>   obj-y += vsysreg.o
> diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
> index d9039d37b2d1..b4656ff4d80f 100644
> --- a/xen/arch/arm/arm64/cpufeature.c
> +++ b/xen/arch/arm/arm64/cpufeature.c
> @@ -455,15 +455,11 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
>   	ARM64_FTR_END,
>   };
>   
> -#if 0
> -/* TODO: use this to sanitize SVE once we support it */
> -
>   static const struct arm64_ftr_bits ftr_zcr[] = {
>   	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
>   		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
>   	ARM64_FTR_END,
>   };
> -#endif
>   
>   /*
>    * Common ftr bits for a 32bit register with all hidden, strict
> @@ -603,6 +599,9 @@ void update_system_features(const struct cpuinfo_arm *new)
>   
>   	SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
>   
> +	if ( cpu_has_sve )
> +		SANITIZE_REG(zcr64, 0, zcr);
> +
>   	/*
>   	 * Comment from Linux:
>   	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
> new file mode 100644
> index 000000000000..4d1549344733
> --- /dev/null
> +++ b/xen/arch/arm/arm64/sve-asm.S
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Arm SVE assembly routines
> + *
> + * Copyright (C) 2022 ARM Ltd.
> + *
> + * Some macros and instruction encoding in this file are taken from linux 6.1.1,
> + * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
> + * version.
AFAICT, the only modified version is _sve_rdvl, but it is not clear to 
me why we would want to have a modified version?

I am asking this because without an explanation, it would be difficult 
to know how to re-sync the code with Linux.
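[Editorial note: for reference, the opcode the quoted _sve_rdvl macro emits
can be checked numerically. The sketch below simply mirrors the assembler
arithmetic in the macro; it is not Xen code:]

```python
def sve_rdvl_encoding(nx, imm):
    """Mirror of _sve_rdvl: opcode for RDVL X<nx>, #<imm>."""
    assert 0 <= nx <= 30, "bad general register number"
    assert -0x20 <= imm <= 0x1f, "immediate out of range"
    return 0x04BF5000 | nx | ((imm & 0x3F) << 5)

# RDVL X0, #1 -- the instruction sve_get_hw_vl() executes.
print(hex(sve_rdvl_encoding(0, 1)))  # -> 0x4bf5020
```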

> + */
> +
> +/* Sanity-check macros to help avoid encoding garbage instructions */
> +
> +.macro _check_general_reg nr
> +    .if (\nr) < 0 || (\nr) > 30
> +        .error "Bad register number \nr."
> +    .endif
> +.endm
> +
> +.macro _check_num n, min, max
> +    .if (\n) < (\min) || (\n) > (\max)
> +        .error "Number \n out of range [\min,\max]"
> +    .endif
> +.endm
> +
> +/* SVE instruction encodings for non-SVE-capable assemblers */
> +/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
> +
> +/* RDVL X\nx, #\imm */
> +.macro _sve_rdvl nx, imm
> +    _check_general_reg \nx
> +    _check_num (\imm), -0x20, 0x1f
> +    .inst 0x04bf5000                \
> +        | (\nx)                     \
> +        | (((\imm) & 0x3f) << 5)
> +.endm
> +
> +/* Gets the current vector register size in bytes */
> +GLOBAL(sve_get_hw_vl)
> +    _sve_rdvl 0, 1
> +    ret
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> new file mode 100644
> index 000000000000..6f3fb368c59b
> --- /dev/null
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -0,0 +1,50 @@
> +/* SPDX-License-Identifier: GPL-2.0 */

Above, you are using GPL-2.0-only, but here GPL-2.0. We favor the former 
now. Happy to deal with it on commit if there is nothing else to address.

> +/*
> + * Arm SVE feature code
> + *
> + * Copyright (C) 2022 ARM Ltd.
> + */
> +
> +#include <xen/types.h>
> +#include <asm/arm64/sve.h>
> +#include <asm/arm64/sysregs.h>
> +#include <asm/processor.h>
> +#include <asm/system.h>
> +
> +extern unsigned int sve_get_hw_vl(void);
> +
> +register_t compute_max_zcr(void)
> +{
> +    register_t cptr_bits = get_default_cptr_flags();
> +    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
> +    unsigned int hw_vl;
> +
> +    /* Remove trap for SVE resources */
> +    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
> +    isb();
> +
> +    /*
> +     * Set the maximum SVE vector length, doing that we will know the VL
> +     * supported by the platform, calling sve_get_hw_vl()
> +     */
> +    WRITE_SYSREG(zcr, ZCR_EL2);

 From my reading of the Arm ARM (D19-6331, ARM DDI 0487J.a), a direct 
write to a system register would need to be followed by a context 
synchronization event (e.g. isb()) before the software can rely on the 
value.

In this situation, AFAICT, the instruction in sve_get_hw_vl() will use 
the content of ZCR_EL2. So don't we need an isb() here?

> +
> +    /*
> +     * Read the maximum VL, which could be lower than what we imposed before,
> +     * hw_vl contains VL in bytes, multiply it by 8 to use vl_to_zcr() later
> +     */
> +    hw_vl = sve_get_hw_vl() * 8U;
> +
> +    /* Restore CPTR_EL2 */
> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
> +    isb();
> +
> +    return vl_to_zcr(hw_vl);
> +}
> +
> +/* Takes a vector length in bits and returns the ZCR_ELx encoding */
> +register_t vl_to_zcr(unsigned int vl)
> +{
> +    ASSERT(vl > 0);
> +    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
> +}

Missing the emacs magic blocks at the end.
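[Editorial note: as an aside, the quoted VL-to-ZCR mapping is easy to model.
The sketch below assumes, per the quoted code, that SVE_VL_MULTIPLE_VAL is
128; the ZCR_ELx_LEN_MASK value used here is an assumption matching Linux's
definition:]

```python
SVE_VL_MULTIPLE_VAL = 128
ZCR_ELx_LEN_MASK = 0x1FF  # assumed value, as in Linux

def vl_to_zcr(vl):
    """Vector length in bits -> ZCR_ELx LEN field (LEN = VL/128 - 1)."""
    assert vl > 0
    return ((vl // SVE_VL_MULTIPLE_VAL) - 1) & ZCR_ELx_LEN_MASK

# Minimum VL (128 bits) encodes as 0; the architectural max (2048) as 15.
print(vl_to_zcr(128), vl_to_zcr(2048))  # -> 0 15
```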

> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index c4ec38bb2554..83b84368f6d5 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -9,6 +9,7 @@
>   #include <xen/init.h>
>   #include <xen/smp.h>
>   #include <xen/stop_machine.h>
> +#include <asm/arm64/sve.h>
>   #include <asm/cpufeature.h>
>   
>   DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
> @@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
>   
>       c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
>   
> +    if ( cpu_has_sve )
> +        c->zcr64.bits[0] = compute_max_zcr();
> +
>       c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
>   
>       c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
> @@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
>       guest_cpuinfo.pfr64.mpam = 0;
>       guest_cpuinfo.pfr64.mpam_frac = 0;
>   
> -    /* Hide SVE as Xen does not support it */
> +    /* Hide SVE by default to the guests */
>       guest_cpuinfo.pfr64.sve = 0;
>       guest_cpuinfo.zfr64.bits[0] = 0;
>   
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index d8ef6501ff8e..0350d8c61ed8 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -181,9 +181,6 @@ static void ctxt_switch_to(struct vcpu *n)
>       /* VGIC */
>       gic_restore_state(n);
>   
> -    /* VFP */
> -    vfp_restore_state(n);
> -

At the moment ctxt_switch_to() is (mostly?) the reverse of 
ctxt_switch_from(). But with this change, you are going to break it.

I would really prefer if the existing convention stays because it helps 
to confirm that we didn't miss bits in the restore code.

So if you want to move vfp_restore_state() later, then please move 
vfp_save_state() earlier in ctxt_switch_from().


>       /* XXX MPU */
>   
>       /* Fault Status */
> @@ -234,6 +231,7 @@ static void ctxt_switch_to(struct vcpu *n)
>       p2m_restore_state(n);
>   
>       /* Control Registers */
> +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);

I would prefer if this were called closer to vfp_restore_state(), so 
the dependency between the two is easier to spot.

>       WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
>   
>       /*
> @@ -258,6 +256,9 @@ static void ctxt_switch_to(struct vcpu *n)
>   #endif
>       isb();
>   
> +    /* VFP */

Please document in the code that vfp_restore_state() has to be called 
after the CPTR_EL2 write plus a synchronization event.

Similar documentation on top of at least the CPTR_EL2 write and 
possibly isb(). This would help if we need to re-order the code in the 
future.


> +    vfp_restore_state(n);
> +
>       /* CP 15 */
>       WRITE_SYSREG(n->arch.csselr, CSSELR_EL1);
>   
> @@ -548,6 +549,8 @@ int arch_vcpu_create(struct vcpu *v)
>   
>       v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>   
> +    v->arch.cptr_el2 = get_default_cptr_flags();
> +
>       v->arch.hcr_el2 = get_default_hcr_flags();
>   
>       v->arch.mdcr_el2 = HDCR_TDRA | HDCR_TDOSA | HDCR_TDA;
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> new file mode 100644
> index 000000000000..144d2b1cc485
> --- /dev/null
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -0,0 +1,43 @@
> +/* SPDX-License-Identifier: GPL-2.0 */

Use GPL-2.0-only.

> +/*
> + * Arm SVE feature code
> + *
> + * Copyright (C) 2022 ARM Ltd.
> + */
> +
> +#ifndef _ARM_ARM64_SVE_H
> +#define _ARM_ARM64_SVE_H
> +
> +#define SVE_VL_MAX_BITS (2048U)

NIT: The parentheses are unnecessary and we don't tend to add them in Xen.

> +
> +/* Vector length must be multiple of 128 */
> +#define SVE_VL_MULTIPLE_VAL (128U)

NIT: The parentheses are unnecessary

> +
> +#ifdef CONFIG_ARM64_SVE
> +
> +register_t compute_max_zcr(void);
> +register_t vl_to_zcr(unsigned int vl);
> +
> +#else /* !CONFIG_ARM64_SVE */
> +
> +static inline register_t compute_max_zcr(void)
> +{

Is this meant to be called when SVE is not enabled? If not, then please 
add ASSERT_UNREACHABLE().

> +    return 0;
> +}
> +
> +static inline register_t vl_to_zcr(unsigned int vl)
> +{

Is this meant to be called when SVE is not enabled? If not, then please 
add ASSERT_UNREACHABLE().

> +    return 0;
> +}
> +
> +#endif /* CONFIG_ARM64_SVE */
> +
> +#endif /* _ARM_ARM64_SVE_H */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
> index 463899951414..4cabb9eb4d5e 100644
> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
> @@ -24,6 +24,7 @@
>   #define ICH_EISR_EL2              S3_4_C12_C11_3
>   #define ICH_ELSR_EL2              S3_4_C12_C11_5
>   #define ICH_VMCR_EL2              S3_4_C12_C11_7
> +#define ZCR_EL2                   S3_4_C1_C2_0
>   
>   #define __LR0_EL2(x)              S3_4_C12_C12_ ## x
>   #define __LR8_EL2(x)              S3_4_C12_C13_ ## x
> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
> index c62cf6293fd6..6d703e051906 100644
> --- a/xen/arch/arm/include/asm/cpufeature.h
> +++ b/xen/arch/arm/include/asm/cpufeature.h
> @@ -32,6 +32,12 @@
>   #define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
>   #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
>   
> +#ifdef CONFIG_ARM64_SVE
> +#define cpu_has_sve       (boot_cpu_feature64(sve) == 1)
> +#else
> +#define cpu_has_sve       (0)

NIT: The parentheses are unnecessary

> +#endif
> +
>   #ifdef CONFIG_ARM_32
>   #define cpu_has_gicv3     (boot_cpu_feature32(gic) >= 1)
>   #define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
> @@ -323,6 +329,14 @@ struct cpuinfo_arm {
>           };
>       } isa64;
>   
> +    union {
> +        register_t bits[1];
> +        struct {
> +            unsigned long len:4;
> +            unsigned long __res0:60;
> +        };
> +    } zcr64;
> +
>       struct {
>           register_t bits[1];
>       } zfr64;
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index 2a51f0ca688e..e776ee704b7d 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -190,6 +190,7 @@ struct arch_vcpu
>       register_t tpidrro_el0;
>   
>       /* HYP configuration */
> +    register_t cptr_el2;
>       register_t hcr_el2;
>       register_t mdcr_el2;
>   
> diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
> index 54f253087718..bc683334125c 100644
> --- a/xen/arch/arm/include/asm/processor.h
> +++ b/xen/arch/arm/include/asm/processor.h
> @@ -582,6 +582,8 @@ void do_trap_guest_serror(struct cpu_user_regs *regs);
>   
>   register_t get_default_hcr_flags(void);
>   
> +register_t get_default_cptr_flags(void);
> +
>   /*
>    * Synchronize SError unless the feature is selected.
>    * This is relying on the SErrors are currently unmasked.
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 6f9f4d8c8a15..4191a766767a 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -135,10 +135,11 @@ static void __init processor_id(void)
>              cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
>              cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
>              cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
> -    printk("    Extensions:%s%s%s\n",
> +    printk("    Extensions:%s%s%s%s\n",
>              cpu_has_fp ? " FloatingPoint" : "",
>              cpu_has_simd ? " AdvancedSIMD" : "",
> -           cpu_has_gicv3 ? " GICv3-SysReg" : "");
> +           cpu_has_gicv3 ? " GICv3-SysReg" : "",
> +           cpu_has_sve ? " SVE" : "");
>   
>       /* Warn user if we find unknown floating-point features */
>       if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index d40c331a4e9c..c0611c2ef6a5 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -93,6 +93,21 @@ register_t get_default_hcr_flags(void)
>                HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
>   }
>   
> +register_t get_default_cptr_flags(void)
> +{
> +    /*
> +     * Trap all coprocessor registers (0-13) except cp10 and
> +     * cp11 for VFP.
> +     *
> +     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
> +     *
> +     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
> +     * RES1, i.e. they would trap whether we did this write or not.
> +     */
> +    return  ((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
> +             HCPTR_TTA | HCPTR_TAM);
> +}
> +
>   static enum {
>       SERRORS_DIVERSE,
>       SERRORS_PANIC,
> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>   
>   void init_traps(void)
>   {
> +    register_t cptr_bits = get_default_cptr_flags();

Coding style: Please add a newline after the declaration. That said...

>       /*
>        * Setup Hyp vector base. Note they might get updated with the
>        * branch predictor hardening.
> @@ -135,17 +151,7 @@ void init_traps(void)
>       /* Trap CP15 c15 used for implementation defined registers */
>       WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>   
> -    /* Trap all coprocessor registers (0-13) except cp10 and
> -     * cp11 for VFP.
> -     *
> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
> -     *
> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
> -     * RES1, i.e. they would trap whether we did this write or not.
> -     */
> -    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
> -                 HCPTR_TTA | HCPTR_TAM,
> -                 CPTR_EL2);
> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);

... I would combine the two lines as the variable seems unnecessary.

>   
>       /*
>        * Configure HCR_EL2 with the bare minimum to run Xen until a guest

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 18 09:49:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 09:49:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536262.834427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzaFe-0007Ym-45; Thu, 18 May 2023 09:48:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536262.834427; Thu, 18 May 2023 09:48:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzaFe-0007Yf-1P; Thu, 18 May 2023 09:48:50 +0000
Received: by outflank-mailman (input) for mailman id 536262;
 Thu, 18 May 2023 09:48:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pzaFc-0007YZ-BR
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 09:48:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pzaFb-0002Nt-TW; Thu, 18 May 2023 09:48:47 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.26.27]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pzaFb-0005xD-Mi; Thu, 18 May 2023 09:48:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=a6gzCqulOAkdbdDhCePMVFO/KnZP7uVkkzC+wheTeU8=; b=T5xEs8fW2pwlWA71U0TDFQR5Lv
	g40iJL8myO296OK6bxEOMWzs/lDfkfH5BKenpAv5iHN0fO00QGwMCeg+9PfYc1szroFaV4ngHlQ4U
	RjGVEZahD+gCOS0YrKY4Or3rB36kmgisygN75Ahg9f+lsBUeZhPRSeRByiKxTnPtdPCA=;
Message-ID: <9d6de850-48a7-d3f8-d944-f51e4597fef9@xen.org>
Date: Thu, 18 May 2023 10:48:45 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 02/12] xen/arm: add SVE vector length field to the
 domain
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-3-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230424060248.1488859-3-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 24/04/2023 07:02, Luca Fancellu wrote:
> Add sve_vl field to arch_domain and xen_arch_domainconfig struct,
> to allow the domain to have an information about the SVE feature
> and the number of SVE register bits that are allowed for this
> domain.
> 
> sve_vl field is the vector length in bits divided by 128, this
> allows to use less space in the structures.
> 
> The field is used also to allow or forbid a domain to use SVE,
> because a value equal to zero means the guest is not allowed to
> use the feature.
> 
> Check that the requested vector length is lower or equal to the
> platform supported vector length, otherwise fail on domain
> creation.
> 
> Check that only 64 bit domains have SVE enabled, otherwise fail.
> 
> Bump the XEN_DOMCTL_INTERFACE_VERSION because of the new field
> in struct xen_arch_domainconfig.

The domctl interface version was bumped this week (see bdb1184d4f 
"domctl: bump interface version"). So this will not be necessary.

> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v5:
>   - Update commit message stating the interface ver. bump (Bertrand)
>   - in struct arch_domain, protect sve_vl with CONFIG_ARM64_SVE,
>     given the change, move also is_sve_domain() where it's protected
>     inside sve.h and create a stub when the macro is not defined,
>     protect the usage of sve_vl where needed.
>     (Julien)
>   - Add a check for 32 bit guest running on top of 64 bit host that
>     have sve parameter enabled to stop the domain creation, added in
>     construct_domain() of domain_build.c and subarch_do_domctl of
>     domctl.c. (Julien)
> Changes from v4:
>   - Return 0 in get_sys_vl_len() if sve is not supported, code style fix,
>     removed else if since the conditions can't fallthrough, removed not
>     needed condition checking for VL bits validity because it's already
>     covered, so delete is_vl_valid() function. (Jan)
> Changes from v3:
>   - don't use fixed types when not needed, use encoded value also in
>     arch_domain so rename sve_vl_bits in sve_vl. (Jan)
>   - rename domainconfig_decode_vl to sve_decode_vl because it will now
>     be used also to decode from arch_domain value
>   - change sve_vl from uint16_t to uint8_t and move it after "type" field
>     to optimize space.
> Changes from v2:
>   - rename field in xen_arch_domainconfig from "sve_vl_bits" to
>     "sve_vl" and use the implicit padding after gic_version to
>     store it, now this field is the VL/128. (Jan)
>   - Created domainconfig_decode_vl() function to decode the sve_vl
>     field and use it as plain bits value inside arch_domain.
>   - Changed commit message reflecting the changes
> Changes from v1:
>   - no changes
> Changes from RFC:
>   - restore zcr_el2 in sve_restore_state, that will be introduced
>     later in this serie, so remove zcr_el2 related code from this
>     patch and move everything to the later patch (Julien)
>   - add explicit padding into struct xen_arch_domainconfig (Julien)
>   - Don't lower down the vector length, just fail to create the
>     domain. (Julien)
> ---
>   xen/arch/arm/arm64/domctl.c          |  4 ++++
>   xen/arch/arm/arm64/sve.c             | 12 ++++++++++++
>   xen/arch/arm/domain.c                | 29 ++++++++++++++++++++++++++++
>   xen/arch/arm/domain_build.c          |  7 +++++++
>   xen/arch/arm/include/asm/arm64/sve.h | 16 +++++++++++++++
>   xen/arch/arm/include/asm/domain.h    |  5 +++++
>   xen/include/public/arch-arm.h        |  2 ++
>   xen/include/public/domctl.h          |  2 +-
>   8 files changed, 76 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
> index 0de89b42c448..14fc622e9956 100644
> --- a/xen/arch/arm/arm64/domctl.c
> +++ b/xen/arch/arm/arm64/domctl.c
> @@ -10,6 +10,7 @@
>   #include <xen/sched.h>
>   #include <xen/hypercall.h>
>   #include <public/domctl.h>
> +#include <asm/arm64/sve.h>
>   #include <asm/cpufeature.h>
>   
>   static long switch_mode(struct domain *d, enum domain_type type)
> @@ -43,6 +44,9 @@ long subarch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>           case 32:
>               if ( !cpu_has_el1_32 )
>                   return -EINVAL;
> +            /* SVE is not supported for 32 bit domain */
> +            if ( is_sve_domain(d) )
> +                return -EINVAL;
>               return switch_mode(d, DOMAIN_32BIT);
>           case 64:
>               return switch_mode(d, DOMAIN_64BIT);
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index 6f3fb368c59b..86a5e617bfca 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -8,6 +8,7 @@
>   #include <xen/types.h>
>   #include <asm/arm64/sve.h>
>   #include <asm/arm64/sysregs.h>
> +#include <asm/cpufeature.h>
>   #include <asm/processor.h>
>   #include <asm/system.h>
>   
> @@ -48,3 +49,14 @@ register_t vl_to_zcr(unsigned int vl)
>       ASSERT(vl > 0);
>       return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
>   }
> +
> +/* Get the system sanitized value for VL in bits */
> +unsigned int get_sys_vl_len(void)
> +{
> +    if ( !cpu_has_sve )
> +        return 0;
> +
> +    /* ZCR_ELx len field is ((len+1) * 128) = vector bits length */

NIT: Please add a space before and after '+'.

> +    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
> +            SVE_VL_MULTIPLE_VAL;
> +}
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 0350d8c61ed8..143359d0f313 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -13,6 +13,7 @@
>   #include <xen/wait.h>
>   
>   #include <asm/alternative.h>
> +#include <asm/arm64/sve.h>
>   #include <asm/cpuerrata.h>
>   #include <asm/cpufeature.h>
>   #include <asm/current.h>
> @@ -550,6 +551,8 @@ int arch_vcpu_create(struct vcpu *v)
>       v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>   
>       v->arch.cptr_el2 = get_default_cptr_flags();
> +    if ( is_sve_domain(v->domain) )
> +        v->arch.cptr_el2 &= ~HCPTR_CP(8);
>   
>       v->arch.hcr_el2 = get_default_hcr_flags();
>   
> @@ -594,6 +597,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>       unsigned int max_vcpus;
>       unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>       unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>   
>       if ( (config->flags & ~flags_optional) != flags_required )
>       {
> @@ -602,6 +606,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>           return -EINVAL;
>       }
>   
> +    /* Check feature flags */
> +    if ( sve_vl_bits > 0 )
> +    {
> +        unsigned int zcr_max_bits = get_sys_vl_len();
> +
> +        if ( !zcr_max_bits )
> +        {
> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
> +            return -EINVAL;
> +        }
> +
> +        if ( sve_vl_bits > zcr_max_bits )
> +        {
> +            dprintk(XENLOG_INFO,
> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
> +                    sve_vl_bits, zcr_max_bits);
> +            return -EINVAL;
> +        }
> +    }
> +
>       /* The P2M table must always be shared between the CPU and the IOMMU */
>       if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
>       {
> @@ -744,6 +768,11 @@ int arch_domain_create(struct domain *d,
>       if ( (rc = domain_vpci_init(d)) != 0 )
>           goto fail;
>   
> +#ifdef CONFIG_ARM64_SVE
> +    /* Copy the encoded vector length sve_vl from the domain configuration */
> +    d->arch.sve_vl = config->arch.sve_vl;
> +#endif
> +
>       return 0;
>   
>   fail:
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f80fdd1af206..ffabe567ac3f 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -26,6 +26,7 @@
>   #include <asm/platform.h>
>   #include <asm/psci.h>
>   #include <asm/setup.h>
> +#include <asm/arm64/sve.h>
>   #include <asm/cpufeature.h>
>   #include <asm/domain_build.h>
>   #include <xen/event.h>
> @@ -3674,6 +3675,12 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
>           return -EINVAL;
>       }
>   
> +    if ( is_sve_domain(d) && (kinfo->type == DOMAIN_32BIT) )
> +    {
> +        printk("SVE is not available for 32-bit domain\n");
> +        return -EINVAL;
> +    }
> +
>       if ( is_64bit_domain(d) )
>           vcpu_switch_to_aarch64_mode(v);
>   
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index 144d2b1cc485..730c3fb5a9c8 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -13,13 +13,24 @@
>   /* Vector length must be multiple of 128 */
>   #define SVE_VL_MULTIPLE_VAL (128U)
>   
> +static inline unsigned int sve_decode_vl(unsigned int sve_vl)
> +{
> +    /* SVE vector length is stored as VL/128 in xen_arch_domainconfig */
> +    return sve_vl * SVE_VL_MULTIPLE_VAL;
> +}
> +
>   #ifdef CONFIG_ARM64_SVE
>   
> +#define is_sve_domain(d) ((d)->arch.sve_vl > 0)
> +
>   register_t compute_max_zcr(void);
>   register_t vl_to_zcr(unsigned int vl);
> +unsigned int get_sys_vl_len(void);
>   
>   #else /* !CONFIG_ARM64_SVE */
>   
> +#define is_sve_domain(d) (0)

You want to use (d, 0) so 'd' is still evaluated when SVE is not 
enabled. An alternative is to provide a static inline helper.

> +
>   static inline register_t compute_max_zcr(void)
>   {
>       return 0;
> @@ -30,6 +41,11 @@ static inline register_t vl_to_zcr(unsigned int vl)
>       return 0;
>   }
>   
> +static inline unsigned int get_sys_vl_len(void)
> +{
> +    return 0;
> +}
> +
>   #endif /* CONFIG_ARM64_SVE */
>   
>   #endif /* _ARM_ARM64_SVE_H */
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index e776ee704b7d..331da0f3bcc3 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -67,6 +67,11 @@ struct arch_domain
>       enum domain_type type;
>   #endif
>   
> +#ifdef CONFIG_ARM64_SVE
> +    /* max SVE encoded vector length */
> +    uint8_t sve_vl;
> +#endif
> +
>       /* Virtual MMU */
>       struct p2m_domain p2m;
>   
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 1528ced5097a..38311f559581 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -300,6 +300,8 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
>   struct xen_arch_domainconfig {
>       /* IN/OUT */
>       uint8_t gic_version;
> +    /* IN - Contains SVE vector length divided by 128 */
> +    uint8_t sve_vl;
>       /* IN */
>       uint16_t tee_type;
>       /* IN */
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 529801c89ba3..e2e22cb534d6 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -21,7 +21,7 @@
>   #include "hvm/save.h"
>   #include "memory.h"
>   
> -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000015
> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000016
>   
>   /*
>    * NB. xen_domctl.domain is an IN/OUT parameter for this operation.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 18 09:51:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 09:51:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536267.834438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzaIT-0000Xk-IL; Thu, 18 May 2023 09:51:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536267.834438; Thu, 18 May 2023 09:51:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzaIT-0000Xd-EN; Thu, 18 May 2023 09:51:45 +0000
Received: by outflank-mailman (input) for mailman id 536267;
 Thu, 18 May 2023 09:51:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pzaIR-0000XX-Uq
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 09:51:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pzaIR-0002RY-HL; Thu, 18 May 2023 09:51:43 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.26.27]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pzaIR-000607-A3; Thu, 18 May 2023 09:51:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=GR9jO2FVSzo4DjMEKBsiQgSVBVqVBCDL+yA8NIXhThM=; b=n/cinT6NaIhizgd7R4yc+D3JVH
	PyqXpH8cgaBaMYaNpm9Lbe+w7kL+hno1oyu3EuwqQfiqdSsjz6I17nNswDVvzIItdeBg5LwSCEU5A
	LGIOKZJ+EIvUjiT4K0ayDVD0Mmxw1yQ6pCCnoi/RmUe8yDCmnbzH6x2dc0vrOovG2g9I=;
Message-ID: <92f0e233-2832-be64-ce67-0082691c9f3f@xen.org>
Date: Thu, 18 May 2023 10:51:41 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 03/12] xen/arm: Expose SVE feature to the guest
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-4-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230424060248.1488859-4-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 24/04/2023 07:02, Luca Fancellu wrote:
> When a guest is allowed to use SVE, expose the SVE features through
> the identification registers.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

With one remark below:

Acked-by: Julien Grall <jgrall@amazon.com>

> +    case HSR_SYSREG_ID_AA64ZFR0_EL1:
> +    {
> +        /*
> +         * When the guest has the SVE feature enabled, the whole id_aa64zfr0_el1
> +         * needs to be exposed.
> +         */
> +        register_t guest_reg_value = guest_cpuinfo.zfr64.bits[0];

Coding style: Add a newline after the declaration.

> +        if ( is_sve_domain(v->domain) )
> +            guest_reg_value = system_cpuinfo.zfr64.bits[0];
> +
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
> +                                  guest_reg_value);
> +    }
>   
>       /*
>        * Those cases are catching all Reserved registers trapped by TID3 which

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 18 09:55:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 09:55:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536272.834448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzaM4-0001A3-0N; Thu, 18 May 2023 09:55:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536272.834448; Thu, 18 May 2023 09:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzaM3-00019w-Sh; Thu, 18 May 2023 09:55:27 +0000
Received: by outflank-mailman (input) for mailman id 536272;
 Thu, 18 May 2023 09:55:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pzaM2-00019o-Oc
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 09:55:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pzaM2-0002Xd-BL; Thu, 18 May 2023 09:55:26 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.26.27]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pzaM2-0006I7-64; Thu, 18 May 2023 09:55:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=eoTm1ScNklZrv6cO4gidneCDF/zxW+DEZW/xrywpNBg=; b=oChv5Ko4JWJg5iMzKBZZ07sgno
	TXfFvBxBkla9t9M5SoCS635duWqq0IKFYKzebfIlzKh3fRNO5/wOK1BS4vS7V7XbhmxFvWaufNTcd
	Vso5Vjf6NZa9spwndJjtpMHy1E3ebYbJodAgmv5aTnF1/XYBVTsVhOymPWYz09Reztac=;
Message-ID: <a981c6f9-7236-1270-20e1-51ce287ece72@xen.org>
Date: Thu, 18 May 2023 10:55:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 04/12] xen/arm: add SVE exception class handling
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-5-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230424060248.1488859-5-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 24/04/2023 07:02, Luca Fancellu wrote:
> SVE has a new exception class with code 0x19, introduce the new code
> and handle the exception.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 18 10:35:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 10:35:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536276.834458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzayL-0005fO-27; Thu, 18 May 2023 10:35:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536276.834458; Thu, 18 May 2023 10:35:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzayK-0005fH-V7; Thu, 18 May 2023 10:35:00 +0000
Received: by outflank-mailman (input) for mailman id 536276;
 Thu, 18 May 2023 10:35:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=joHs=BH=citrix.com=prvs=495b323d3=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pzayJ-0005fB-RA
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 10:34:59 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a0c207d1-f567-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 12:34:56 +0200 (CEST)
Received: from mail-mw2nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 May 2023 06:34:47 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH7PR03MB6917.namprd03.prod.outlook.com (2603:10b6:510:12c::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 10:34:44 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.019; Thu, 18 May 2023
 10:34:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0c207d1-f567-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684406096;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=s0ml1uIjHIpXatrOozPdarLFDLtExfZEVROgwFh+ALU=;
  b=KYtAlsT1cFy825UZ55PIKv1pVuRtkHvXAMS5ioGCwvuZHNNFOY1x4/SC
   0Q4YjffnZh9fgLdc+UhIbL4+VGMeWv07IOpaifkHjPdFaKUkSVkLqHw+Z
   cnWJz1G2H1ApvG8sY+yaslMGdWEQBdjk+NdAhs35yil8IXnU50Ybbt3+1
   0=;
X-IronPort-RemoteIP: 104.47.55.107
X-IronPort-MID: 108260436
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,285,1677560400"; 
   d="scan'208";a="108260436"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JtRwgYRa+sK56jaBCM7BJVnx5gsui4ogtJ+31uiJz4gGpKBra8dAsJ3F/M5HQxQdD3n9oSH9IWfg2LFiRo0kXT62x5XndbkMSkCGY//1C/bfgM7l8KMBl48ckamPit654RcP+wSureu+K4yf//2iGbnJoFXij9MdKBVgwEEJbyoAAv12RtO2yKc7HfH+H8qLqJ7FEpjViYnLtUP/ZprnImO8grEdlyltOQvZkSeYipHvSD/2awXQL7UDXkJSYALdLaTaIlN1SlBIvY+/nlnxdrR8MAlCfezu0sqV6LG/yrTE8hhkTiB2MqAtyLA47G6z2+1BaomFRQYWU100eGY5FQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7BbAKP2nPjjoO3ABgLWbb/zSXYNFRcfc+NcJ53uOQQk=;
 b=GOLrlNQxcqMzHLWenXsKP3wHHT0DLYsq7/kzB4Ut3NUK73qCVNEeD61HdU7Zg2zpEoLBnFDOIu/w9i9XZKCF84xC/OKk+3V2fFQ0J55W2taVvpqAmJaZ/E2IrVbaAiiyUEbXQwQfJqoFTr4LPXmFjPoKkPSwHTvBpqFnh+we3RzIFxyL0RbHCjHHbNGPFnUGTUSFZgPcpTiQN5cQdNrWAw+U0uSh4+ADxIxJ3oDgqJN4CuxileFVhvIjXEbZ0R5oYlA9qOlCH9fFCNLc3h+PVUlF2Atq3Sm+nZO4d/ReJg77fhWVwiY5GlA3OqEc8DP5Mgr/1lT8O87Q14IDciqPxQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7BbAKP2nPjjoO3ABgLWbb/zSXYNFRcfc+NcJ53uOQQk=;
 b=bGEuEkXB4OnPGPUY8poeBym225FE0agHXXJZ4qgLxTR99LwRglfxqAWtI6mXTiAaubhQQxF9qbr6/amxaxKoXC1JnTbq+yfPkCgFxjRvb7YF6/rLUbfTZDhzfzteI1oHY6/XzLOg0/CgsQ6fGABiClXhs5CZpD0PNiwhjgHe55w=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 18 May 2023 12:34:38 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: jbeulich@suse.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
	xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
Message-ID: <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
X-ClientProxiedBy: LO4P265CA0079.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2bd::12) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH7PR03MB6917:EE_
X-MS-Office365-Filtering-Correlation-Id: 7baf81d2-bf2f-407c-14d8-08db578b7e95
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7baf81d2-bf2f-407c-14d8-08db578b7e95
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 10:34:44.3455
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UoN6U4nICLch50PPobsIctnJvoJdqSz2aha6ncg9N2rWaEMCi5qCbj1sUK5NNnq0RRHddz2onLqo83ShDurfUw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB6917

On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> Hi all,
> 
> I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
> test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
> Zen3 system and we already have a few successful tests with it, see
> automation/gitlab-ci/test.yaml.
> 
> We managed to narrow down the issue to a console problem. We are
> currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
> options, it works with PV Dom0 and it is using a PCI UART card.
> 
> In the case of Dom0 PVH:
> - it works without console=com1
> - it works with console=com1 and with the patch appended below
> - it doesn't work otherwise and crashes with this error:
> https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK

Jan also noticed this, and we have a ticket for it in gitlab:

https://gitlab.com/xen-project/xen/-/issues/85

> What is the right way to fix it?

I think the right fix is to simply prevent hidden devices from being
handled by vPCI; in any case such devices won't work properly with
vPCI, because they are in use by Xen, and so any information cached by
vPCI is likely to become stale, since Xen can modify the device without
vPCI noticing.

I think the chunk below should help.  It's not clear to me, however,
how hidden devices should be handled: is the intention to completely
hide such devices from dom0?

Roger.
---
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 652807a4a454..0baef3a8d3a1 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
     unsigned int i;
     int rc = 0;
 
-    if ( !has_vpci(pdev->domain) )
+    if ( !has_vpci(pdev->domain) ||
+         /*
+          * Ignore RO and hidden devices: those are in use by Xen, and vPCI
+          * won't work on them.
+          */
+         pci_get_pdev(dom_xen, pdev->sbdf) )
         return 0;
 
     /* We should not get here twice for the same device. */



From xen-devel-bounces@lists.xenproject.org Thu May 18 10:58:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 10:58:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536284.834468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzbKg-0008GG-3V; Thu, 18 May 2023 10:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536284.834468; Thu, 18 May 2023 10:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzbKg-0008G9-09; Thu, 18 May 2023 10:58:06 +0000
Received: by outflank-mailman (input) for mailman id 536284;
 Thu, 18 May 2023 10:58:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=joHs=BH=citrix.com=prvs=495b323d3=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pzbKe-0008G3-VE
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 10:58:04 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dabbb531-f56a-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 12:58:02 +0200 (CEST)
Received: from mail-dm6nam10lp2106.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.106])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 May 2023 06:57:54 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by MN2PR03MB4959.namprd03.prod.outlook.com (2603:10b6:208:1a3::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Thu, 18 May
 2023 10:57:51 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.019; Thu, 18 May 2023
 10:57:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dabbb531-f56a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684407482;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=A/uACWFhkkV9jxcmRWs6ZXnElUREbuI1UGoYrXPWvyI=;
  b=SsXcxyfcbQ1Ju8/QEx6rwjnPPJWvtYeIbh/zwHWgJSabPwiHQPIVgJNJ
   TIANUAM3ttXaRptuEREGemmI1KD6Ebz6x0Ln4cTOyxO63yJZACc/eyVKp
   c1ICFHVNdjjQETlY5FVbZjdp2tBOYC46HrCob12cq//8mMzbLL3S0EZtd
   o=;
X-IronPort-RemoteIP: 104.47.58.106
X-IronPort-MID: 111964818
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.99,285,1677560400"; 
   d="scan'208";a="111964818"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UFeESLyG2xHmsQSif/RAtG5zgpFpmir2qRF+YY5jJhEC+Qw+BXqoYPlRn/110n2uCQY2UswMaDdvI2yz4LzuLmJQF3i/hUkr0jps20giWGShLPvCObOFC+xcuHihUNHh49A6ZsFIbxZY3NlnLN/2Qt4N3VzgzF+u9dnvM/iRKtkjQeMb7EmLufs7vPrIbTkv1OVoDmMK8s2ZdGc/vTIct53j6aPGRUkgjLwkGvS+KCSqL7/tRY5xwVE4GfX+i3yTcyJpev9cW2rgahuMnYeVuJvdNUctJ1OLzn/DAlK+wrT+93412+f4ds0vfuUwEUImnrI8O+MseiHyypXjCYn35A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9Z2b5omfJztJFPLzPsbCzsdAso5yPlwMiZXcEZ1CNcM=;
 b=NLp3DzxW6+BM+rQRLdqMeVwCvlIk6BKMwA6sZopC8SpLW/m7ECcMJq9gJelqs/EYnfe9eQYmbwezfV6v7TncEKGqvTyWGn1GYbEPOuOpMPm+2em9fgV+4Cg8F0hA5L20mfc5v5LGXYwWvJOwACXPZoXAl77YT2/3rpnIaqRElVYc3Xj2gWziwkm+iURK2yYqIzuuSzoCBCWm/obcwUz91VgI26N66ZycM6Go/9C3os/nyWN9/CgCoy/Vp+5sLGb8yp5mABo3fWG6iSQptPpaoWS20Fo2MjQN2MeqqrVO0w4zxaz0miFIazsNKNZalK4fpQBuOFlod5cgzaR8+YpS1Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9Z2b5omfJztJFPLzPsbCzsdAso5yPlwMiZXcEZ1CNcM=;
 b=Ve3bvosAmrdsU1/KEyY+P4K2X6n4qytysE+e+io4AiHgjGdluFp4uiZB7vxKLZPu9GZJq3WCuwA7ohm5qDKOJrDuWXvMuK5WKbS+lwU0vbKgVjFhNmxlH40XAcasgdpCCUcwEn+OYsXQEcuvuMHBryXa8suXTboZNMHhbcoNkeA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH] pci: fix pci_get_pdev() to always account for the segment
Date: Thu, 18 May 2023 12:57:38 +0200
Message-Id: <20230518105738.16695-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0085.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8::25) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|MN2PR03MB4959:EE_
X-MS-Office365-Filtering-Correlation-Id: 460b8e72-1681-4ab1-f415-08db578eb91f
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 460b8e72-1681-4ab1-f415-08db578eb91f
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 10:57:50.7338
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sYT8p4i95o4rVXretaqylsSG6RX65/g00YRflWnHz439cud4oxdnE9zDOP+T5dgbEJi8C5SjZlEqeAOELGZLwg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB4959

When a domain parameter is provided to pci_get_pdev(), the search
function matches only against the BDF, without taking the segment into
account.

Fix this by also accounting for the passed segment.

Fixes: 8cf6e0738906 ('PCI: simplify (and thus correct) pci_get_pdev{,_by_domain}()')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
There's no mention in 8cf6e0738906 that avoiding the segment check is
fine, and hence I assume it's an oversight, as it should be possible
to have devices from multiple segments assigned to the same domain.
---
 xen/drivers/passthrough/pci.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index b42acb8d7c09..07d1986d330a 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -552,7 +552,7 @@ struct pci_dev *pci_get_pdev(const struct domain *d, pci_sbdf_t sbdf)
     }
     else
         list_for_each_entry ( pdev, &d->pdev_list, domain_list )
-            if ( pdev->sbdf.bdf == sbdf.bdf )
+            if ( pdev->sbdf.sbdf == sbdf.sbdf )
                 return pdev;
 
     return NULL;
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Thu May 18 11:37:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 11:37:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536289.834478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzbwF-0004EY-0p; Thu, 18 May 2023 11:36:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536289.834478; Thu, 18 May 2023 11:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzbwE-0004ER-UC; Thu, 18 May 2023 11:36:54 +0000
Received: by outflank-mailman (input) for mailman id 536289;
 Thu, 18 May 2023 11:36:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tqpr=BH=amd.com=Xenia.Ragiadakou@srs-se1.protection.inumbo.net>)
 id 1pzbwD-0004EL-GQ
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 11:36:53 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e8d::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 46c447c2-f570-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 13:36:50 +0200 (CEST)
Received: from DM6PR13CA0051.namprd13.prod.outlook.com (2603:10b6:5:134::28)
 by IA1PR12MB6650.namprd12.prod.outlook.com (2603:10b6:208:3a1::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Thu, 18 May
 2023 11:36:46 +0000
Received: from DM6NAM11FT009.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:134:cafe::cb) by DM6PR13CA0051.outlook.office365.com
 (2603:10b6:5:134::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.16 via Frontend
 Transport; Thu, 18 May 2023 11:36:46 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT009.mail.protection.outlook.com (10.13.173.20) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.16 via Frontend Transport; Thu, 18 May 2023 11:36:46 +0000
Received: from [10.0.2.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 06:36:44 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46c447c2-f570-11ed-8611-37d641c3527e
Message-ID: <001b153d-1148-e9d1-e69f-da689a9f395b@amd.com>
Date: Thu, 18 May 2023 14:36:41 +0300
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, <xen-devel@lists.xenproject.org>, "Stefano
 Stabellini" <stefano.stabellini@amd.com>, <andrew.cooper3@citrix.com>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
 <c22a8925-15e4-47b9-6f5d-f85bbe802255@suse.com>
 <6f3f4e12-ae5b-c58c-891c-fbce08283206@amd.com>
 <ZGXwYsOX44/EBI3x@Air-de-Roger>
From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
In-Reply-To: <ZGXwYsOX44/EBI3x@Air-de-Roger>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit


On 18/5/23 12:31, Roger Pau Monné wrote:
> On Thu, May 18, 2023 at 10:24:10AM +0300, Xenia Ragiadakou wrote:
>> On 15/5/23 17:17, Jan Beulich wrote:
>>> On 13.05.2023 03:17, Stefano Stabellini wrote:
>>>> From: Stefano Stabellini <stefano.stabellini@amd.com>
>>>>
>>>> Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
>>>> the tables in the guest. Instead, copy the tables to Dom0.
>>> Do you really mean "in the guest" (i.e. from Xen's perspective, i.e.
>>> ignoring that when running on qemu it is kind of a guest itself)?
>>>
>>> I also consider the statement too broad anyway: Various people have
>>> run PVH Dom0 without running into such an issue, so it's clearly not
>>> just "leads to".
>> In my opinion the issue is broader.
>>
>> In pvh_setup_acpi(), the code adding the ACPI tables to the dom0 memory map
>> does not check the return value of pvh_add_mem_range(). If there is an
>> overlap and the overlapping region is marked as E820_ACPI, it maps not just
>> the allowed tables but the entire overlapping range,
> But that's the intended behavior: all ACPI regions will be mapped into
> the dom0 physmap; the filtering of the tables exposed to dom0 is done
> in the XSDT, not by filtering the mapped regions.  Note such filtering
> wouldn't be effective anyway, as the minimal granularity of physmap
> entries is 4K, so multiple tables could live in the same 4K region.
> Also Xen cannot parse dynamic tables (SSDT) or execute methods, and
> hence doesn't know exactly which memory will be used.
Thanks a lot for the explanation. I checked the code more carefully, and 
it's true that Xen does not aim to restrict dom0 access to the ACPI 
tables. I was confused by the name of the function pvh_acpi_table_allowed().
>
> Xen relies on the firmware to have the ACPI tables in ACPI, NVS or
> RESERVED regions in order for them to be mapped into the guest physmap.
> The call to pvh_add_mem_range() in pvh_setup_acpi() is just an attempt
> to work around broken systems that have tables placed in memory map
> holes, and hence ignoring the return value is fine.
In pvh_setup_acpi(), Xen identity-maps E820_ACPI and E820_NVS ranges to 
dom0. Why does it not do the same for E820_RESERVED, given that ACPI 
tables may also lie there and that it does not know which memory will be used?
>> while if the overlapping
>> range is marked as E820_RESERVED, it does not map the tables at all (the
>> issue that Stefano saw with QEMU). Since the dom0 memory map is initialized
>> based on the native one, the code adding the ACPI table memory ranges will
>> naturally fall into one of the two cases above.
> Xen does map them, but that's done in arch_iommu_hwdom_init() which gets
> short-circuited by the use of dom0-iommu=none in your example.  See
> my reply to Stefano about moving such mappings into pvh_populate_p2m().
Indeed, if dom0-iommu=none is removed from the Xen command line and QEMU 
is configured with an IOMMU, the issue is not triggered. Is that because 
arch_iommu_hwdom_init() identity-maps at least the first 4G to dom0?
>> So even when not running into this issue, pvh_add_mem_range() still fails
>> and the memory range mapped is wider than the allowed one.
> The intention of that call to pvh_add_mem_range() is not to limit what
> gets mapped into the dom0 physmap, but rather to work around bugs in the
> firmware when ACPI tables are placed in memory map holes.
>
> Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 18 11:44:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 11:44:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536293.834487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzc3P-0005hn-On; Thu, 18 May 2023 11:44:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536293.834487; Thu, 18 May 2023 11:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzc3P-0005hg-M1; Thu, 18 May 2023 11:44:19 +0000
Received: by outflank-mailman (input) for mailman id 536293;
 Thu, 18 May 2023 11:44:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=joHs=BH=citrix.com=prvs=495b323d3=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pzc3O-0005ha-7g
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 11:44:18 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4fd87529-f571-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 13:44:15 +0200 (CEST)
Received: from mail-mw2nam04lp2174.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 18 May 2023 07:44:12 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB6493.namprd03.prod.outlook.com (2603:10b6:510:b7::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 11:44:06 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.019; Thu, 18 May 2023
 11:44:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fd87529-f571-11ed-8611-37d641c3527e
Date: Thu, 18 May 2023 13:44:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@amd.com>,
	andrew.cooper3@citrix.com
Subject: Re: [PATCH 2/2] xen/x86/pvh: copy ACPI tables to Dom0 instead of
 mapping
Message-ID: <ZGYPgYiunhlDQsR4@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305121801460.3748626@ubuntu-linux-20-04-desktop>
 <20230513011720.3978354-2-sstabellini@kernel.org>
 <c22a8925-15e4-47b9-6f5d-f85bbe802255@suse.com>
 <6f3f4e12-ae5b-c58c-891c-fbce08283206@amd.com>
 <ZGXwYsOX44/EBI3x@Air-de-Roger>
 <001b153d-1148-e9d1-e69f-da689a9f395b@amd.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <001b153d-1148-e9d1-e69f-da689a9f395b@amd.com>
MIME-Version: 1.0

On Thu, May 18, 2023 at 02:36:41PM +0300, Xenia Ragiadakou wrote:
> 
> On 18/5/23 12:31, Roger Pau Monné wrote:
> > On Thu, May 18, 2023 at 10:24:10AM +0300, Xenia Ragiadakou wrote:
> > > On 15/5/23 17:17, Jan Beulich wrote:
> > > > On 13.05.2023 03:17, Stefano Stabellini wrote:
> > > > > From: Stefano Stabellini <stefano.stabellini@amd.com>
> > > > > 
> > > > > Mapping the ACPI tables to Dom0 PVH 1:1 leads to memory corruptions of
> > > > > the tables in the guest. Instead, copy the tables to Dom0.
> > > > Do you really mean "in the guest" (i.e. from Xen's perspective, i.e.
> > > > ignoring that when running on qemu it is kind of a guest itself)?
> > > > 
> > > > I also consider the statement too broad anyway: Various people have
> > > > run PVH Dom0 without running into such an issue, so it's clearly not
> > > > just "leads to".
> > > In my opinion the issue is broader.
> > > 
> > > In pvh_setup_acpi(), the code adding the ACPI tables to the dom0 memory
> > > map does not check the return value of pvh_add_mem_range(). If there is an
> > > overlap and the overlapping region is marked as E820_ACPI, it maps not
> > > just the allowed tables but the entire overlapping range,
> > But that's the intended behavior: all ACPI regions will be mapped into
> > the dom0 physmap; the filtering of the tables exposed to dom0 is done
> > in the XSDT, not by filtering the mapped regions.  Note such filtering
> > wouldn't be effective anyway, as the minimal granularity of physmap
> > entries is 4K, so multiple tables could live in the same 4K region.
> > Also Xen cannot parse dynamic tables (SSDT) or execute methods, and
> > hence doesn't know exactly which memory will be used.
> Thanks a lot for the explanation. I checked the code more carefully, and
> it's true that Xen does not aim to restrict dom0 access to the ACPI tables.
> I was confused by the name of the function pvh_acpi_table_allowed().
> > 
> > Xen relies on the firmware to have the ACPI tables in ACPI, NVS or
> > RESERVED regions in order for them to be mapped into the guest physmap.
> > The call to pvh_add_mem_range() in pvh_setup_acpi() is just an attempt
> > to work around broken systems that have tables placed in memory map
> > holes, and hence ignoring the return value is fine.
> In pvh_setup_acpi(), Xen identity-maps E820_ACPI and E820_NVS ranges to
> dom0. Why does it not do the same for E820_RESERVED, given that ACPI tables
> may also lie there and that it does not know which memory will be used?

So far I, at least, hadn't considered that ACPI tables could reside in
RESERVED regions.  Given the behavior exposed by QEMU, I think we need
to move the mapping of RESERVED regions from arch_iommu_hwdom_init()
into pvh_populate_p2m() for PVH dom0, thus rendering
arch_iommu_hwdom_init() PV-only.

> > > while if the overlapping
> > > range is marked as E820_RESERVED, it does not map the tables at all (the
> > > issue that Stefano saw with QEMU). Since the dom0 memory map is
> > > initialized based on the native one, the code adding the ACPI table
> > > memory ranges will naturally fall into one of the two cases above.
> > Xen does map them, but that's done in arch_iommu_hwdom_init() which gets
> > short-circuited by the use of dom0-iommu=none in your example.  See
> > my reply to Stefano about moving such mappings into pvh_populate_p2m().
> Indeed, if dom0-iommu=none is removed from the Xen command line and QEMU is
> configured with an IOMMU, the issue is not triggered. Is that because
> arch_iommu_hwdom_init() identity-maps at least the first 4G to dom0?

For PVH dom0 only reserved regions are identity mapped into the
physmap.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 18 12:12:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 12:12:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536308.834498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzcUO-0000zh-71; Thu, 18 May 2023 12:12:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536308.834498; Thu, 18 May 2023 12:12:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzcUO-0000za-4I; Thu, 18 May 2023 12:12:12 +0000
Received: by outflank-mailman (input) for mailman id 536308;
 Thu, 18 May 2023 12:12:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GxtU=BH=ziepe.ca=jgg@srs-se1.protection.inumbo.net>)
 id 1pzcUM-0000zU-JX
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 12:12:10 +0000
Received: from mail-qk1-x732.google.com (mail-qk1-x732.google.com
 [2607:f8b0:4864:20::732])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 34db536d-f575-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 14:12:07 +0200 (CEST)
Received: by mail-qk1-x732.google.com with SMTP id
 af79cd13be357-7577f03e131so108391585a.0
 for <xen-devel@lists.xenproject.org>; Thu, 18 May 2023 05:12:07 -0700 (PDT)
Received: from ziepe.ca
 (hlfxns017vw-142-68-25-194.dhcp-dynamic.fibreop.ns.bellaliant.net.
 [142.68.25.194]) by smtp.gmail.com with ESMTPSA id
 w8-20020a0562140b2800b006215c5bb2e9sm476635qvj.70.2023.05.18.05.12.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 18 May 2023 05:12:05 -0700 (PDT)
Received: from jgg by wakko with local (Exim 4.95)
 (envelope-from <jgg@ziepe.ca>) id 1pzcUG-0055VE-PF;
 Thu, 18 May 2023 09:12:04 -0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34db536d-f575-11ed-8611-37d641c3527e
Date: Thu, 18 May 2023 09:12:04 -0300
From: Jason Gunthorpe <jgg@ziepe.ca>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Dinh Nguyen <dinguyen@kernel.org>, Jonas Bonn <jonas@southpole.se>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	"David S. Miller" <davem@davemloft.net>,
	Richard Weinberger <richard@nod.at>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Arnd Bergmann <arnd@arndb.de>
Subject: Re: [PATCH v2 00/34] Split ptdesc from struct page
Message-ID: <ZGYWFIfyDtdpeWg1@ziepe.ca>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230501192829.17086-1-vishal.moola@gmail.com>

On Mon, May 01, 2023 at 12:27:55PM -0700, Vishal Moola (Oracle) wrote:
> The MM subsystem is trying to shrink struct page. This patchset
> introduces a memory descriptor for page table tracking - struct ptdesc.
> 
> This patchset introduces ptdesc, splits ptdesc from struct page, and
> converts many callers of page table constructor/destructors to use ptdescs.

Lightly related food for future thought: based on some discussions at
LSF/MM, it would be really nice if an end result of this was that an
rcu_head was always available in the ptdesc, so we don't need to
allocate memory to free a page table.

Jason
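The idea above, embedding an rcu_head in every ptdesc so that deferring the free of a page table never requires a separate allocation, can be sketched with a small userspace model. The struct layouts and the toy call_rcu/grace-period machinery below are illustrative stand-ins, not the kernel's actual definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal stand-in for the kernel's struct rcu_head. */
struct rcu_head {
    struct rcu_head *next;
    void (*func)(struct rcu_head *);
};

/* Hypothetical ptdesc: because the rcu_head lives inside the
 * descriptor itself, queuing a deferred free needs no extra memory. */
struct ptdesc {
    unsigned long pt_flags;
    struct rcu_head rcu;   /* always available for deferred freeing */
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Toy call_rcu: link the embedded head onto a pending list; a real
 * kernel would invoke the callback only after a grace period. */
static struct rcu_head *pending;

static void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *))
{
    head->func = func;
    head->next = pending;
    pending = head;
}

static int freed;

static void free_ptdesc_rcu(struct rcu_head *head)
{
    /* Recover the enclosing ptdesc from its embedded rcu_head. */
    struct ptdesc *pt = container_of(head, struct ptdesc, rcu);
    free(pt);
    freed++;
}

/* Stand-in for a grace period elapsing: run all queued callbacks. */
static void rcu_run_callbacks(void)
{
    while (pending) {
        struct rcu_head *head = pending;
        pending = head->next;
        head->func(head);
    }
}
```

Without an embedded rcu_head, the same deferral would need a separately allocated callback node per page table, which is exactly the allocation the suggestion would avoid.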


From xen-devel-bounces@lists.xenproject.org Thu May 18 12:15:37 2023
From: Rahul Singh <Rahul.Singh@arm.com>
To: Roger Pau Monne <roger.pau@citrix.com>
CC: Xen developer discussion <xen-devel@lists.xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] pci: fix pci_get_pdev() to always account for the segment
Thread-Topic: [PATCH] pci: fix pci_get_pdev() to always account for the
 segment
Thread-Index: AQHZiXevyYqrsZqMy0C54Y28mmUHD69f8ZgA
Date: Thu, 18 May 2023 12:14:46 +0000
Message-ID: <7661952A-477F-46C8-8F8D-AD2D7D81A4EE@arm.com>
References: <20230518105738.16695-1-roger.pau@citrix.com>
In-Reply-To: <20230518105738.16695-1-roger.pau@citrix.com>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi Roger,

> On 18 May 2023, at 11:57 am, Roger Pau Monne <roger.pau@citrix.com> wrote:
> 
> When a domain parameter is provided to pci_get_pdev() the search
> function would match against the bdf, without taking the segment into
> account.
> 
> Fix this and also account for the passed segment.
> 
> Fixes: 8cf6e0738906 ('PCI: simplify (and thus correct) pci_get_pdev{,_by_domain}()')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

I think the correct fixes tag is:
Fixes: a37f9ea7a651 ("PCI: fold pci_get_pdev{,_by_domain}()")

With that:
Reviewed-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul
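The fix under review here — making pci_get_pdev() compare the segment as well as the BDF when searching a domain's device list — can be illustrated with a small standalone sketch. The types and names below are simplified stand-ins, not the actual Xen definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified view of the structures involved. */
struct pci_dev {
    uint16_t seg;   /* PCI segment (a.k.a. PCI domain) */
    uint16_t bdf;   /* bus/device/function, packed */
    struct pci_dev *next;
};

/* The bug: comparing only bdf lets a device on a different segment
 * match.  The fix: require the segment to match as well. */
static struct pci_dev *pci_get_pdev(struct pci_dev *list,
                                    uint16_t seg, uint16_t bdf)
{
    for ( struct pci_dev *pdev = list; pdev; pdev = pdev->next )
        if ( pdev->seg == seg && pdev->bdf == bdf )  /* segment now checked */
            return pdev;
    return NULL;
}
```

Two devices sharing a BDF on different segments are thus no longer conflated: the lookup returns the one on the requested segment, or NULL if that segment has no such device.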


From xen-devel-bounces@lists.xenproject.org Thu May 18 12:24:37 2023
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>, <luca.fancellu@arm.com>
Subject: [PATCH] automation: Enable parallel build with cppcheck analysis
Date: Thu, 18 May 2023 14:24:15 +0200
Message-ID: <20230518122415.8698-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The cppcheck limitation that produced inconsistent reports when running
in parallel mode was fixed by cppcheck commit:
45bfff651173d538239308648c6a6cd7cbe37172

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 automation/scripts/build | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/automation/scripts/build b/automation/scripts/build
index 9085cba35281..38c48ae6d826 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -39,10 +39,8 @@ cp xen/.config xen-config
 mkdir -p binaries
 
 if [[ "${CPPCHECK}" == "y" ]] && [[ "${HYPERVISOR_ONLY}" == "y" ]]; then
-    # Cppcheck analysis invokes Xen-only build.
-    # Known limitation: cppcheck generates inconsistent reports when running
-    # in parallel mode, therefore do not specify -j<n>.
-    xen/scripts/xen-analysis.py --run-cppcheck --cppcheck-misra
+    # Cppcheck analysis invokes Xen-only build
+    xen/scripts/xen-analysis.py --run-cppcheck --cppcheck-misra -- -j$(nproc)
 
     # Preserve artefacts
     cp xen/xen binaries/xen
-- 
2.25.1
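For reference, the resulting control flow of the build script can be sketched standalone as follows. The variable values are assumed for illustration and the analysis command is echoed rather than executed:

```shell
# Assumed settings mirroring the CI job that runs cppcheck analysis.
CPPCHECK=y
HYPERVISOR_ONLY=y

if [[ "${CPPCHECK}" == "y" ]] && [[ "${HYPERVISOR_ONLY}" == "y" ]]; then
    # Cppcheck can now run in parallel, so size -j from the CPU count.
    CMD="xen/scripts/xen-analysis.py --run-cppcheck --cppcheck-misra -- -j$(nproc)"
    echo "${CMD}"
fi
```

The `-- -j$(nproc)` part passes the jobs flag through the analysis script to the underlying build, matching the line added by the diff above.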



From xen-devel-bounces@lists.xenproject.org Thu May 18 12:26:52 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Doug Goldstein
	<cardoe@cardoe.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] automation: Enable parallel build with cppcheck analysis
Thread-Topic: [PATCH] automation: Enable parallel build with cppcheck analysis
Thread-Index: AQHZiYO0TSgWEhB91UKK6TnuwOVmHK9f9N6A
Date: Thu, 18 May 2023 12:26:29 +0000
Message-ID: <2243D77C-F212-422B-8AB4-7D93F651601C@arm.com>
References: <20230518122415.8698-1-michal.orzel@amd.com>
In-Reply-To: <20230518122415.8698-1-michal.orzel@amd.com>
Accept-Language: en-GB, en-US
Content-Language: en-US



> On 18 May 2023, at 13:24, Michal Orzel <michal.orzel@amd.com> wrote:
> 
> The limitation was fixed by the commit:
> 45bfff651173d538239308648c6a6cd7cbe37172
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Hi Michal,

Looks good!

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>



> ---
> automation/scripts/build | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 9085cba35281..38c48ae6d826 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -39,10 +39,8 @@ cp xen/.config xen-config
> mkdir -p binaries
> 
> if [[ "${CPPCHECK}" == "y" ]] && [[ "${HYPERVISOR_ONLY}" == "y" ]]; then
> -    # Cppcheck analysis invokes Xen-only build.
> -    # Known limitation: cppcheck generates inconsistent reports when running
> -    # in parallel mode, therefore do not specify -j<n>.
> -    xen/scripts/xen-analysis.py --run-cppcheck --cppcheck-misra
> +    # Cppcheck analysis invokes Xen-only build
> +    xen/scripts/xen-analysis.py --run-cppcheck --cppcheck-misra -- -j$(nproc)
> 
>     # Preserve artefacts
>     cp xen/xen binaries/xen
> -- 
> 2.25.1
> 



From xen-devel-bounces@lists.xenproject.org Thu May 18 12:40:22 2023
Date: Thu, 18 May 2023 14:39:55 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Xen developer discussion <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] pci: fix pci_get_pdev() to always account for the segment
Message-ID: <ZGYcm6Tr0VUF+NYk@Air-de-Roger>
References: <20230518105738.16695-1-roger.pau@citrix.com>
 <7661952A-477F-46C8-8F8D-AD2D7D81A4EE@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7661952A-477F-46C8-8F8D-AD2D7D81A4EE@arm.com>

On Thu, May 18, 2023 at 12:14:46PM +0000, Rahul Singh wrote:
> Hi Roger,
> 
> > On 18 May 2023, at 11:57 am, Roger Pau Monne <roger.pau@citrix.com> wrote:
> > 
> > When a domain parameter is provided to pci_get_pdev() the search
> > function would match against the bdf, without taking the segment into
> > account.
> > 
> > Fix this and also account for the passed segment.
> > 
> > Fixes: 8cf6e0738906 ('PCI: simplify (and thus correct) pci_get_pdev{,_by_domain}()')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>  
> I think the correct fixes tag is:
> Fixes: a37f9ea7a651 ("PCI: fold pci_get_pdev{,_by_domain}()")

I don't think so, a37f9ea7a651 just changed:

         list_for_each_entry ( pdev, &d->pdev_list, domain_list )
-            if ( pdev->bus == bus && pdev->devfn == devfn )
+            if ( pdev->sbdf.bdf == sbdf.bdf )
                 return pdev;

That code was already wrong, a37f9ea7a651 simply switched it to use
the sbdf struct field.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 18 12:42:40 2023
Message-ID: <480194dd-4757-d9dc-a2f2-7dea9182aeb6@citrix.com>
Date: Thu, 18 May 2023 13:42:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] pci: fix pci_get_pdev() to always account for the segment
Content-Language: en-GB
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
References: <20230518105738.16695-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230518105738.16695-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3c908980-d93d-4513-fcfe-08db579d4f40
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 12:42:15.7606
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FITNHWTdemjipg9Bv9B1qHkLk6U0zw2RtGHJirv9+P9ZXQT6sPcYcpy6Ws8YyQk3QatggjhfUULRpwT3lzMqf8EoVWebSwA8GqFTJuCXIrU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB7366

On 18/05/2023 11:57 am, Roger Pau Monne wrote:
> When a domain parameter is provided to pci_get_pdev() the search
> function would match against the bdf, without taking the segment into
> account.
>
> Fix this and also account for the passed segment.
>
> Fixes: 8cf6e0738906 ('PCI: simplify (and thus correct) pci_get_pdev{,_by_domain}()')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> There's no mention in 8cf6e0738906 that avoiding the segment check is
> fine, and hence I assume it's an oversight, as it should be possible
> to have devices from multiple segments assigned to the same domain.

Oh, absolutely - skipping the segment check is very much not fine.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
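
[Archive note: the following is an illustrative sketch, not the actual Xen code. Struct layout and function names are simplified assumptions; Xen's real pci_get_pdev() operates on struct pci_dev and pci_sbdf_t. It shows the class of bug being fixed: a per-domain device lookup that compares only bus/devfn can return a device from the wrong PCI segment.]

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for Xen's struct pci_dev (hypothetical fields). */
struct pdev {
    uint16_t seg;    /* PCI segment group */
    uint8_t  bus;
    uint8_t  devfn;
    struct pdev *next;
};

/*
 * Buggy variant: matches on BDF only.  If a domain has devices from
 * multiple segments with the same bus/devfn, the wrong one can be
 * returned.
 */
static struct pdev *find_bdf_only(struct pdev *list,
                                  uint8_t bus, uint8_t devfn)
{
    for ( struct pdev *p = list; p; p = p->next )
        if ( p->bus == bus && p->devfn == devfn )
            return p;
    return NULL;
}

/* Fixed variant: the segment must match as well as the BDF. */
static struct pdev *find_sbdf(struct pdev *list, uint16_t seg,
                              uint8_t bus, uint8_t devfn)
{
    for ( struct pdev *p = list; p; p = p->next )
        if ( p->seg == seg && p->bus == bus && p->devfn == devfn )
            return p;
    return NULL;
}
```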



From xen-devel-bounces@lists.xenproject.org Thu May 18 12:59:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 12:59:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <2274c165-5e6a-2e13-278a-da3c9a6dab4c@citrix.com>
Date: Thu, 18 May 2023 13:58:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] pci: fix pci_get_pdev() to always account for the segment
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
References: <20230518105738.16695-1-roger.pau@citrix.com>
 <480194dd-4757-d9dc-a2f2-7dea9182aeb6@citrix.com>
In-Reply-To: <480194dd-4757-d9dc-a2f2-7dea9182aeb6@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 18/05/2023 1:42 pm, Andrew Cooper wrote:
> On 18/05/2023 11:57 am, Roger Pau Monne wrote:
>> When a domain parameter is provided to pci_get_pdev() the search
>> function would match against the bdf, without taking the segment into
>> account.
>>
>> Fix this and also account for the passed segment.
>>
>> Fixes: 8cf6e0738906 ('PCI: simplify (and thus correct) pci_get_pdev{,_by_domain}()')
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> There's no mention in 8cf6e0738906 that avoiding the segment check is
>> fine, and hence I assume it's an oversight, as it should be possible
>> to have devices from multiple segments assigned to the same domain.
> Oh, absolutely - skipping the segment check is very much not fine.
>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

Sorry, I should go on to say.  Xen has had code for segments for years
and years and years, but I've seen plenty of evidence of Xen not having
any kind of regular testing in multi-segment systems.

Sapphire Rapids is the first platform I'm aware of which is
multi-segment in its base configuration and is going to see routine
testing with Xen.

I don't expect this to be the final bugfix before multi-segment works
properly...

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 18 13:07:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 13:07:23 +0000
Date: Thu, 18 May 2023 15:06:33 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] pci: fix pci_get_pdev() to always account for the segment
Message-ID: <ZGYi2dexQbJUnt8v@Air-de-Roger>
References: <20230518105738.16695-1-roger.pau@citrix.com>
 <480194dd-4757-d9dc-a2f2-7dea9182aeb6@citrix.com>
 <2274c165-5e6a-2e13-278a-da3c9a6dab4c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2274c165-5e6a-2e13-278a-da3c9a6dab4c@citrix.com>
MIME-Version: 1.0

On Thu, May 18, 2023 at 01:58:34PM +0100, Andrew Cooper wrote:
> On 18/05/2023 1:42 pm, Andrew Cooper wrote:
> > On 18/05/2023 11:57 am, Roger Pau Monne wrote:
> >> When a domain parameter is provided to pci_get_pdev() the search
> >> function would match against the bdf, without taking the segment into
> >> account.
> >>
> >> Fix this and also account for the passed segment.
> >>
> >> Fixes: 8cf6e0738906 ('PCI: simplify (and thus correct) pci_get_pdev{,_by_domain}()')
> >> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >> ---
> >> There's no mention in 8cf6e0738906 that avoiding the segment check is
> >> fine, and hence I assume it's an oversight, as it should be possible
> >> to have devices from multiple segments assigned to the same domain.
> > Oh, absolutely - skipping the segment check is very much not fine.
> >
> > Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >
> 
> Sorry, I should go on to say.  Xen has had code for segments for years
> and years and years, but I've seen plenty of evidence of Xen not having
> any kind of regular testing in multi-segment systems.
> 
> Sapphire Rapids is the first platform I'm aware of which is
> multi-segment in its base configuration and is going to see routine
> testing with Xen.
> 
> I don't expect this to be the final bugfix before multi-segment works
> properly...

I just found this by code inspection while looking at something else;
it wasn't related to any testing on multi-segment systems.  IOW: don't
take this fix as evidence of me having done any kind of testing on
multi-segment systems.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 18 13:37:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 13:37:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536344.834578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzdoP-000500-Th; Thu, 18 May 2023 13:36:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536344.834578; Thu, 18 May 2023 13:36:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzdoP-0004zt-R5; Thu, 18 May 2023 13:36:57 +0000
Received: by outflank-mailman (input) for mailman id 536344;
 Thu, 18 May 2023 13:36:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzdoO-0004zj-Kp; Thu, 18 May 2023 13:36:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzdoO-0000WO-B5; Thu, 18 May 2023 13:36:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzdoN-0006XR-Of; Thu, 18 May 2023 13:36:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzdoN-0002Yd-OD; Thu, 18 May 2023 13:36:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NpM9qze5c2+NI6Ee0TpW2nFf4Kx9npfQUPxfBjX8CQw=; b=eNf4GQ3c1fUJyKOGMeCXYzYDyG
	snacFRTdtaRKBTdlwh5ZvX1JGl7GX52Rt8bhcdqNsPNGxdLVNaDC/vPWMyewZCaawf6o1vi20kRrI
	DNDWqh8kvt2AorIzsJy1l3Yu+0kZ4EPDYxzJft51Q3j/Gi/rnsiWg0ZJls2g1o2VRHyk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180693-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180693: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pygrub:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 May 2023 13:36:55 +0000

flight 180693 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180693/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pygrub      21 guest-start/debian.repeat  fail pass in 180687

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   31 days
Failing since        180281  2023-04-17 06:24:36 Z   31 days   57 attempts
Testing same since   180664  2023-05-14 20:12:01 Z    3 days    7 attempts

------------------------------------------------------------
2389 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 302499 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 18 13:43:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 13:43:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536355.834608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduf-0006zP-7Q; Thu, 18 May 2023 13:43:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536355.834608; Thu, 18 May 2023 13:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduf-0006zA-4U; Thu, 18 May 2023 13:43:25 +0000
Received: by outflank-mailman (input) for mailman id 536355;
 Thu, 18 May 2023 13:43:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cezo=BH=bombadil.srs.infradead.org=BATV+e1e315a83c1522261844+7207+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1pzdud-0006XN-2k
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 13:43:23 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f34b7459-f581-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 15:43:20 +0200 (CEST)
Received: from [2001:4bb8:188:3dd5:1149:8081:5f51:3e54] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pzduK-00D6SB-1r; Thu, 18 May 2023 13:43:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f34b7459-f581-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=0qIa3nDGH6TUOBAqJwF2s8Hp3AbyQ3Z9bxsITaMukWA=; b=pbfDTeq5MMRbhVEH4CcdLdnaIl
	cLan6u3lCSEqM4f7q8nJqZieDbhwOHHwFhSmsXM+2qxYj6DG3yMd0HOMjR58sstguuZ77BU9YJPyF
	Gq2BxiI4p9KIc19qV7MrhjjquWR8DTa68qrZBvDpK4lspAF2bcFqXQFEGH/ctEOaYlz0ydULrznJu
	fTKWdqL1KcqMzgW8IWNlqgjkpiSonSU/N02Ryto3srwkIe5afJY4CG3YrCwULl9ZWmNuF8if5C7Pj
	f+9G6D6Evcs7iO6Gx1cZhk6B1nbfBHfS3KpZeNIVDKkj5akOhZyeVSrW5OgkXWV06WpKhlDx7WvLX
	bin8x6rQ==;
From: Christoph Hellwig <hch@lst.de>
To: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>,
	Lyude Paul <lyude@redhat.com>
Cc: xen-devel@lists.xenproject.org,
	iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org,
	nouveau@lists.freedesktop.org
Subject: [PATCH 3/4] drm/nouveau: stop using is_swiotlb_active
Date: Thu, 18 May 2023 15:42:52 +0200
Message-Id: <20230518134253.909623-4-hch@lst.de>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230518134253.909623-1-hch@lst.de>
References: <20230518134253.909623-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Drivers have no business looking into dma-mapping internals and
checking which backend is used.  Unfortunately the DRM core is still
broken: it does plain page allocations instead of using the DMA API
allocators by default, and applies various band-aids to decide when to
use dma_alloc_coherent.

Switch nouveau to the same (broken) scheme as amdgpu and radeon to
remove the last driver user of is_swiotlb_active.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/gpu/drm/nouveau/nouveau_ttm.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 1469a88910e45d..486f39f31a38df 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -24,9 +24,9 @@
  */
 
 #include <linux/limits.h>
-#include <linux/swiotlb.h>
 
 #include <drm/ttm/ttm_range_manager.h>
+#include <drm/drm_cache.h>
 
 #include "nouveau_drv.h"
 #include "nouveau_gem.h"
@@ -265,7 +265,6 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	struct nvkm_pci *pci = device->pci;
 	struct nvif_mmu *mmu = &drm->client.mmu;
 	struct drm_device *dev = drm->dev;
-	bool need_swiotlb = false;
 	int typei, ret;
 
 	ret = nouveau_ttm_init_host(drm, 0);
@@ -300,13 +299,10 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 		drm->agp.cma = pci->agp.cma;
 	}
 
-#if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active(dev->dev);
-#endif
-
 	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
 				  dev->anon_inode->i_mapping,
-				  dev->vma_offset_manager, need_swiotlb,
+				  dev->vma_offset_manager,
+				  drm_need_swiotlb(drm->client.mmu.dmabits),
 				  drm->client.mmu.dmabits <= 32);
 	if (ret) {
 		NV_ERROR(drm, "error initialising bo driver, %d\n", ret);
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu May 18 13:43:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 13:43:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536353.834588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduc-0006U5-Ja; Thu, 18 May 2023 13:43:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536353.834588; Thu, 18 May 2023 13:43:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduc-0006Ty-Gf; Thu, 18 May 2023 13:43:22 +0000
Received: by outflank-mailman (input) for mailman id 536353;
 Thu, 18 May 2023 13:43:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cezo=BH=bombadil.srs.infradead.org=BATV+e1e315a83c1522261844+7207+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1pzdua-0006Tn-Vs
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 13:43:21 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f233b2ff-f581-11ed-b22c-6b7b168915f2;
 Thu, 18 May 2023 15:43:19 +0200 (CEST)
Received: from [2001:4bb8:188:3dd5:1149:8081:5f51:3e54] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pzduF-00D6R1-1f; Thu, 18 May 2023 13:42:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f233b2ff-f581-11ed-b22c-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=0CEO3HS6DBexWIDg7vdwTQX1D12Af+6zFPesDZ1ytb0=; b=Ku330CY0SmQD9jlWX8wTaNK0/C
	1516RRZd/G2UiIf/EBbsRc5wdlY9u2fCX7tBPcuG2OTVizGl+rva6bo4eli4DgsenII7NFrZqv105
	dhDGWtSuF0RPj2xjLy+4V59LKKwxizXBoCo/OoVAtUtxKcToKHYLzV0dm8QbMPPfkFgD+qCVmRXbX
	Hjc9o0lFiVHBW1oKJ6/xtZ3xqcQAN0Zs6WaNLnJHaXDpn37cYujKNKyJuu5xsSRvCfbI8yWh0Rju6
	Yd3hFEmsiHR1AfMSqKoDZmKnXdy0A9PHbY8v3JkH9q1OKW0q9HNR4zoeJAf01hXfgCCe+T/RgdFcr
	X/NfkhWQ==;
From: Christoph Hellwig <hch@lst.de>
To: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>,
	Lyude Paul <lyude@redhat.com>
Cc: xen-devel@lists.xenproject.org,
	iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org,
	nouveau@lists.freedesktop.org
Subject: [PATCH 1/4] x86: move a check out of pci_xen_swiotlb_init
Date: Thu, 18 May 2023 15:42:50 +0200
Message-Id: <20230518134253.909623-2-hch@lst.de>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230518134253.909623-1-hch@lst.de>
References: <20230518134253.909623-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the exact checks for when to initialize the Xen swiotlb code out
of pci_xen_swiotlb_init and into the caller, so that it uses readable
positive checks rather than negative ones that would get even more
confusing with another addition.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/x86/kernel/pci-dma.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index de6be0a3965ee4..f887b08ac5ffe4 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -74,8 +74,6 @@ static inline void __init pci_swiotlb_detect(void)
 #ifdef CONFIG_SWIOTLB_XEN
 static void __init pci_xen_swiotlb_init(void)
 {
-	if (!xen_initial_domain() && !x86_swiotlb_enable)
-		return;
 	x86_swiotlb_enable = true;
 	x86_swiotlb_flags |= SWIOTLB_ANY;
 	swiotlb_init_remap(true, x86_swiotlb_flags, xen_swiotlb_fixup);
@@ -113,7 +111,8 @@ static inline void __init pci_xen_swiotlb_init(void)
 void __init pci_iommu_alloc(void)
 {
 	if (xen_pv_domain()) {
-		pci_xen_swiotlb_init();
+		if (xen_initial_domain() || x86_swiotlb_enable)
+			pci_xen_swiotlb_init();
 		return;
 	}
 	pci_swiotlb_detect();
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu May 18 13:43:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 13:43:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536356.834614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduf-00072d-I6; Thu, 18 May 2023 13:43:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536356.834614; Thu, 18 May 2023 13:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduf-00072B-CW; Thu, 18 May 2023 13:43:25 +0000
Received: by outflank-mailman (input) for mailman id 536356;
 Thu, 18 May 2023 13:43:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cezo=BH=bombadil.srs.infradead.org=BATV+e1e315a83c1522261844+7207+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1pzdud-0006XN-Og
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 13:43:23 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2217d41-f581-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 15:43:19 +0200 (CEST)
Received: from [2001:4bb8:188:3dd5:1149:8081:5f51:3e54] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pzduI-00D6RW-0C; Thu, 18 May 2023 13:43:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2217d41-f581-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=05AyJy/WnduYw6cv5NEmxDHJJYQdGW0Vqnw+CujW2f0=; b=YCm6+nCfk/tqJs7qLv0fLb0i6p
	rh6YUXDVTuctprWr6BzRIVl5659Y51hsG50TV0qeGB/0lKu4HFOpG4FGQ2JEFosMgtSoOG1Ow82Uz
	T9PjamZA1bYNoiR5YkTkBWsX5NhqcUyOh6poOma8ugLkFknBvmIaPStma9KpMTsBbUcJJGJj2GKOM
	m4FKMYO7r8cwQykZilTii7odYMouDxqaidikNF/yF3CHzf5J2OZ+NbaaCrMe+2WSAAYCfKSuT1EoI
	IoZGn66ql9UNMrCi9szQ7Xcw0aE6fgi6+RqZgIv8hKUVhPl2kjaVw0wt5trZqQLvqFJegnCIvf3F9
	Xml1tTcw==;
From: Christoph Hellwig <hch@lst.de>
To: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>,
	Lyude Paul <lyude@redhat.com>
Cc: xen-devel@lists.xenproject.org,
	iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org,
	nouveau@lists.freedesktop.org
Subject: [PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront is enabled
Date: Thu, 18 May 2023 15:42:51 +0200
Message-Id: <20230518134253.909623-3-hch@lst.de>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230518134253.909623-1-hch@lst.de>
References: <20230518134253.909623-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Remove the dangerous late initialization of xen-swiotlb in
pci_xen_swiotlb_init_late and instead just always initialize
xen-swiotlb in the boot code if CONFIG_XEN_PCIDEV_FRONTEND is enabled.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/x86/include/asm/xen/swiotlb-xen.h |  6 ------
 arch/x86/kernel/pci-dma.c              | 25 +++----------------------
 drivers/pci/xen-pcifront.c             |  6 ------
 3 files changed, 3 insertions(+), 34 deletions(-)

diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
index 77a2d19cc9909e..abde0f44df57dc 100644
--- a/arch/x86/include/asm/xen/swiotlb-xen.h
+++ b/arch/x86/include/asm/xen/swiotlb-xen.h
@@ -2,12 +2,6 @@
 #ifndef _ASM_X86_SWIOTLB_XEN_H
 #define _ASM_X86_SWIOTLB_XEN_H
 
-#ifdef CONFIG_SWIOTLB_XEN
-extern int pci_xen_swiotlb_init_late(void);
-#else
-static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
-#endif
-
 int xen_swiotlb_fixup(void *buf, unsigned long nslabs);
 int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
 				unsigned int address_bits,
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index f887b08ac5ffe4..c4a7ead9eb674e 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -81,27 +81,6 @@ static void __init pci_xen_swiotlb_init(void)
 	if (IS_ENABLED(CONFIG_PCI))
 		pci_request_acs();
 }
-
-int pci_xen_swiotlb_init_late(void)
-{
-	if (dma_ops == &xen_swiotlb_dma_ops)
-		return 0;
-
-	/* we can work with the default swiotlb */
-	if (!io_tlb_default_mem.nslabs) {
-		int rc = swiotlb_init_late(swiotlb_size_or_default(),
-					   GFP_KERNEL, xen_swiotlb_fixup);
-		if (rc < 0)
-			return rc;
-	}
-
-	/* XXX: this switches the dma ops under live devices! */
-	dma_ops = &xen_swiotlb_dma_ops;
-	if (IS_ENABLED(CONFIG_PCI))
-		pci_request_acs();
-	return 0;
-}
-EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
 #else
 static inline void __init pci_xen_swiotlb_init(void)
 {
@@ -111,7 +90,9 @@ static inline void __init pci_xen_swiotlb_init(void)
 void __init pci_iommu_alloc(void)
 {
 	if (xen_pv_domain()) {
-		if (xen_initial_domain() || x86_swiotlb_enable)
+		if (xen_initial_domain() ||
+		    IS_ENABLED(CONFIG_XEN_PCIDEV_FRONTEND) ||
+		    x86_swiotlb_enable)
 			pci_xen_swiotlb_init();
 		return;
 	}
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index 83c0ab50676dff..11636634ae512f 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -22,7 +22,6 @@
 #include <linux/bitops.h>
 #include <linux/time.h>
 #include <linux/ktime.h>
-#include <linux/swiotlb.h>
 #include <xen/platform_pci.h>
 
 #include <asm/xen/swiotlb-xen.h>
@@ -669,11 +668,6 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
-		err = pci_xen_swiotlb_init_late();
-		if (err)
-			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
-	}
 	return err;
 }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu May 18 13:43:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 13:43:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536357.834620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduf-0007A7-Vs; Thu, 18 May 2023 13:43:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536357.834620; Thu, 18 May 2023 13:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduf-00076t-Mp; Thu, 18 May 2023 13:43:25 +0000
Received: by outflank-mailman (input) for mailman id 536357;
 Thu, 18 May 2023 13:43:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cezo=BH=bombadil.srs.infradead.org=BATV+e1e315a83c1522261844+7207+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1pzdue-0006XN-Oo
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 13:43:24 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f361b9a3-f581-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 15:43:21 +0200 (CEST)
Received: from [2001:4bb8:188:3dd5:1149:8081:5f51:3e54] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pzduC-00D6Qd-2E; Thu, 18 May 2023 13:42:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f361b9a3-f581-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
	Content-ID:Content-Description:In-Reply-To:References;
	bh=piFrG5IdL4hzL74U9vzU8Q8FkrcebDa8Lvd7CJSzX1k=; b=CsnQ9kgCBYwe6gnaRylmDTM7Wg
	VjmAGMjJNouFmojoa2PViZy7KmCYkXfDItlxSiGdMRAKa4u6bzwnXH4xPbITOvKi46E/e8j1923vR
	u9Qj20fH+4Jugzi8umtd5H2NCKBNNKBIBV0dxSh72+mq7t83yJrzFx7FLxqhbec1O5OZi/yVN98fM
	bMfK/BLvhmvFSMgRRzB1zl6hg7RjhlcD4Ba8FO3xlDIUmySBgiGGVXbnu2ZFgjv2AQpplWQ2Swsa/
	jUTX+NHAV9OEu62r3wgbLMxWNseWs9tNZ5xv/HXPufF4KzJy/mSF0tCwCSPpD5y/XE2Ohl7NFw+0f
	4i0qXJRg==;
From: Christoph Hellwig <hch@lst.de>
To: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>,
	Lyude Paul <lyude@redhat.com>
Cc: xen-devel@lists.xenproject.org,
	iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org,
	nouveau@lists.freedesktop.org
Subject: unexport swiotlb_active
Date: Thu, 18 May 2023 15:42:49 +0200
Message-Id: <20230518134253.909623-1-hch@lst.de>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Hi all,

this little series removes the last swiotlb API exposed to modules.

Diffstat:
 arch/x86/include/asm/xen/swiotlb-xen.h |    6 ------
 arch/x86/kernel/pci-dma.c              |   28 ++++------------------------
 drivers/gpu/drm/nouveau/nouveau_ttm.c  |   10 +++-------
 drivers/pci/xen-pcifront.c             |    6 ------
 kernel/dma/swiotlb.c                   |    1 -
 5 files changed, 7 insertions(+), 44 deletions(-)


From xen-devel-bounces@lists.xenproject.org Thu May 18 13:43:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 13:43:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536354.834594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduc-0006XT-Ti; Thu, 18 May 2023 13:43:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536354.834594; Thu, 18 May 2023 13:43:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzduc-0006XJ-OW; Thu, 18 May 2023 13:43:22 +0000
Received: by outflank-mailman (input) for mailman id 536354;
 Thu, 18 May 2023 13:43:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cezo=BH=bombadil.srs.infradead.org=BATV+e1e315a83c1522261844+7207+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1pzdub-0006Tn-QF
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 13:43:21 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f1c14798-f581-11ed-b22c-6b7b168915f2;
 Thu, 18 May 2023 15:43:18 +0200 (CEST)
Received: from [2001:4bb8:188:3dd5:1149:8081:5f51:3e54] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux))
 id 1pzduN-00D6T3-0S; Thu, 18 May 2023 13:43:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1c14798-f581-11ed-b22c-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=NrwaDXndPVrlGW4jqCyf20MCAXo0YJy7aoO3FDItFxI=; b=lh9YNaJgqjsQ4ABDzfBXPDYhnF
	0rgvGegVpdV31RoZMvM9hbEbN94AXBXG4zrc9vSce86yoaAY93okWrN31ceblBj4ur0q1+YmfM6Hg
	v2JiJP+oecPzr+HP7G86rf0eM/dmlMOfuhqIhS4tMRcj+JGXLecrE0QZepRPPoZJwkQxnA+j3FdZt
	RlxkFpSPRj7ljl509O3DDGybPMxj20tGTo404+biDFXzMLHmhd46mUIWs42T8tBhOx2EJsaRBCXG2
	KFhN7YElPvMIUpYDvbAlPmwYYGOnFHuhw97nSyrLo0FGUtm5W07Q29cDo0EwumNuBftywhBKx3uyT
	dshNqNvg==;
From: Christoph Hellwig <hch@lst.de>
To: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>,
	Lyude Paul <lyude@redhat.com>
Cc: xen-devel@lists.xenproject.org,
	iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org,
	nouveau@lists.freedesktop.org
Subject: [PATCH 4/4] swiotlb: unexport is_swiotlb_active
Date: Thu, 18 May 2023 15:42:53 +0200
Message-Id: <20230518134253.909623-5-hch@lst.de>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230518134253.909623-1-hch@lst.de>
References: <20230518134253.909623-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Drivers have no business looking at dma-mapping or swiotlb internals.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index af2e304c672c43..9f1fd28264a067 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -921,7 +921,6 @@ bool is_swiotlb_active(struct device *dev)
 
 	return mem && mem->nslabs;
 }
-EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:19:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:19:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536378.834638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeTZ-0003wc-Na; Thu, 18 May 2023 14:19:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536378.834638; Thu, 18 May 2023 14:19:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeTZ-0003wV-KP; Thu, 18 May 2023 14:19:29 +0000
Received: by outflank-mailman (input) for mailman id 536378;
 Thu, 18 May 2023 14:19:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CvkS=BH=citrix.com=prvs=495754ba3=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pzeTY-0003wP-AH
 for xen-devel@lists.xen.org; Thu, 18 May 2023 14:19:28 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fccd1542-f586-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 16:19:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fccd1542-f586-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684419565;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=s0voyg0vRlyssHFOnfAjzksRgu/GwwFTHZGGiRj/DLI=;
  b=NsngB4HShfgY78Fegjdk9pu7b5W90gc9H4w6PNxppjwo048g9DZV2Qcj
   LiisyWBlPYjIRrAyQDeGVPTgLg4NeaS8AYQF3cCFn76I5vZcd3+o0t6Rq
   BcOGEZfr1GXJbw4GrB2rxXsNyMKalIGVsUGbUbWZSXEc/kTiMJqz3WZU8
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109415103
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:/FkZN6zFkdi4Gi1qvbp6t+dnwSrEfRIJ4+MujC+fZmUNrF6WrkVTn
 2JOCG6FP/uJYzbwe9twPIu/901Sv5OGnd41SAdrpCAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw//F+UIHUMja4mtC5QRjPKAT5jcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KUNJ+
 rs2MT4JVUDZlf+f0ImQaMB3jf12eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+hgGX/dDtJ7kqYv6Mt70DYzRBr0airO93QEjCPbZwNzxrB+
 zuepAwVBDkAPeOO7SqbokuypfaImwbVAIkzHeeno6sCbFq7mTVIVUx+uUGAiee4kEOlW5RcN
 kkd4AIqrK477kvtScPyNzWorXjBshMCVt54F+wh9BrL2qfSpQGDCQAsTDFbb8c9nNQrXjFs3
 ViM9/vrGDhuvbu9WX+bsLCOoluaJykTJmIEeWkLUAoZ/97/iIUyiBvVSZBkCqHdpsbpAzjsx
 CvPoCUgr7ILyMoKzLmgu1TGhTu2od7OVAFdzgzTU3Lj5A5/YoOoT4ip71HB6rBHNonxZlyIo
 HgFltXY9OcPF5CAjgSJQeMEBrbv7PGAWBXbhVNsBIUw7DSF9HuqfIQW6zZ7TG9kKMcHPyTiY
 E7XvQJX67dXPX2jd6gxZJi+Y+wj1aX6HM7pfuzVZNFJJJN2cWe6EDpGPBDKmTq3yQ51zP95Y
 M3AGSqxMZoEIYZgw32YXukZ6u9x1D0X1Vj4Z7ngxC3yhNJye0WpYbsCNVKPaMUw46WFvBjZ/
 r5jCiea9/lMeLagO3eKqOb/OXhPdCFmXs6u96S7Y8bZemJb9Hcd5+g9KF/LU6hshOxrm+jB5
 RlRsWcImQOk1RUrxehnA02PiY8Dv74l9RrX3gR2Zz5EPkTPhq7xhJrzj7NtIdEaGBVLlJaYt
 cUtdcSaGehoQT/a4TkbZpSVhNU8JEjw2VLTZnL5OWBXk3tcq+vhq7fZkvbHrnFSXkJbS+Nly
 1Ff6u8racVaHFkzZConQPmu00mwrRAgpQ6GZGOReoM7UBy1oOBXx9nZ0qdfzzckdU+SmVN3F
 m++XX8lmAU6i9NsoYiZ3fHV9trB/ikXNhMyIlQ3JI2ebUHylldPC6cbOApUVVgxjF/JxZg=
IronPort-HdrOrdr: A9a23:YsK3+K2lhSZj/ZFqg3urMAqjBbFxeYIsimQD101hICG9Lfb0qy
 n+pp4mPEHP4wr5AEtQ4expOMG7IU80hqQFmrX5XI3SFzUO11HYSL2KgbGN/9SkIVyGygc/79
 YrT0EdMqyWMbESt6+TjGaF+pQbsb+6GcuT9ITjJgJWPGRXgtZbnmVE42igc3FedU1jP94UBZ
 Cc7s1Iq36JfmkWVN2yAj0oTvXOvNrCkbPheFojCwQ84AeDoDu04PqieiLokis2Yndq+/MP4G
 LFmwv26uGKtOy68AbV0yv+/olbg9zoz/pEHYiphtIOIjvhpw60bMBKWqGEvhoyvOazgWxa3e
 XkklMFBYBe+nnRdma6rV/GwA/7ygsj7Hfk1BuxnWbjidaRfkN1N+NxwaZiNjfJ4Uspu99xlI
 hR2XiCipZRBRTc2Azg+tnzUQ1wnEbcmwtirQdTtQ0ebWItUs4SkWUtxjIRLH40JlO41GloKp
 grMCiW3ocqTbrTVQGkgkBfhPihWWkyGBCdK3JywPB9lQIm00yRhnFou/A3jzMO8okwRIJD4P
 mBOqN0lKtWRstTdq5lAvwdKPHHfFAlbCi8RF56G26XY50vKjbIsdr68b817OaldNgBy4Yzgo
 3IVBdduXQpc0zjBMWS1NkTmyq9CFmVTHDo0IVT9pJ5srrzSP7iNjCCUkknl4+lr+8ECsPWVv
 6vMNZdAuPlL2HpBYFVtjeOEqV6OD0bSokYq9w7U1WBrobCLZDrrPXSdLLJKL/kAV8fKxbC67
 s4LUrOzel7nzCWsyXD8WbsslvWCz3C1IM1FrTG9O4Oz4VIPpFQsxl9syXL2v22
X-Talos-CUID: 9a23:obdbQ2EICrT89hr8qmJF80FTNNkrdkaNlmXNCl+DIkRAU7asHAo=
X-Talos-MUID: 9a23:qkbiXQSNlYIqaBXORXT2jQ5zENd2s52wK1lKl7Ao48KlPHZvbmI=
X-IronPort-AV: E=Sophos;i="5.99,285,1677560400"; 
   d="scan'208";a="109415103"
Date: Thu, 18 May 2023 15:19:17 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
CC: <xen-devel@lists.xen.org>, Juergen Gross <jgross@suse.com>, Julien Grall
	<julien@xen.org>, Vincent Guittot <vincent.guittot@linaro.org>,
	<stratos-dev@op-lists.linaro.org>, Alex Bennée
	<alex.bennee@linaro.org>, Mathieu Poirier <mathieu.poirier@linaro.com>,
	Oleksandr Tyshchenko <olekstysh@gmail.com>, Erik Schilling
	<erik.schilling@linaro.org>
Subject: Re: [PATCH V2 2/2] libxl: arm: Add grant_usage parameter for virtio
 devices
Message-ID: <b83fc678-d453-4eea-bdaa-59ca636059b2@perard>
References: <782a7b3f54c36a3930a031647f6778e8dd02131d.1683791298.git.viresh.kumar@linaro.org>
 <ccf5b1402fb7156be0ef33b44f7b114efbe76319.1683791298.git.viresh.kumar@linaro.org>
 <5dc217d6-ca8f-4c5f-ad7c-2ab30d6647bd@perard>
 <20230515120600.bsfw6pe3usae4sl4@vireshk-i7>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230515120600.bsfw6pe3usae4sl4@vireshk-i7>

On Mon, May 15, 2023 at 05:36:00PM +0530, Viresh Kumar wrote:
> On 12-05-23, 11:43, Anthony PERARD wrote:
> > On Thu, May 11, 2023 at 01:20:43PM +0530, Viresh Kumar wrote:
> > > diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> > > index 24ac92718288..0405f6efe62a 100644
> > > --- a/docs/man/xl.cfg.5.pod.in
> > > +++ b/docs/man/xl.cfg.5.pod.in
> > > @@ -1619,6 +1619,18 @@ hexadecimal format, without the "0x" prefix and all in lower case, like
> > >  Specifies the transport mechanism for the Virtio device, only "mmio" is
> > >  supported for now.
> > >  
> > > +=item B<grant_usage=STRING>
> > > +
> > > +Specifies the grant usage details for the Virtio device. This can be set to
> > > +one of the following values:
> > > +
> > > +- "default": The default grant setting will be used, enable grants if
> > > +  backend-domid != 0.
> > 
> > I don't think this "default" setting is useful. We could just describe
> > what the default is when "grant_usage" setting is missing from the
> > configuration.
> 
> This is what I suggested earlier [1], maybe I misunderstood what
> Juergen said.

To me, as a user of any program, the default for a configuration option
is what you get when you don't select that option, like when starting a
program for the first time and letting it set things up based on the
environment, if that makes sense. But I guess sometimes, when there are
multiple choices, we can explicitly select "default".

Anyway, I've looked in the xl.cfg man page and there are already plenty
of examples where "default" is an option, so I guess it doesn't really
hurt to have the option to choose not to choose. You still need to
write in the man page that "default" is the default option, i.e. that
in the absence of the option from the configuration the default
behaviour will be used (unless you somehow managed to make the option
mandatory, but is there a reason for that?).

In any case, there's going to be a 3-state option between xl and libxl:
default, false, and true. It doesn't really matter whether a user can
write "default" explicitly or not.

> > > +- "enabled": The grants are always enabled.
> > > +
> > > +- "disabled": The grants are always disabled.
> > 
> > > +            if ((virtio->grant_usage == LIBXL_VIRTIO_GRANT_USAGE_ENABLED) ||
> > > +                ((virtio->grant_usage == LIBXL_VIRTIO_GRANT_USAGE_DEFAULT) &&
> > > +                 (virtio->backend_domid != LIBXL_TOOLSTACK_DOMID))) {
> > 
> > I think libxl can select what the default value should be replace with
> > before we start to setup the guest. There's a *_setdefault() phase were
> > we set the correct value when a configuration value hasn't been set and
> > thus a default value is used. I think this can be done in
> >     libxl__device_virtio_setdefault().
> > After that, virtio->grant_usage will be true or false, and that's the
> > value that should be given to the virtio backend via xenstore.
> 
> I am not great with Xen's flow of stuff and so would like to clarify a
> few things here since I am confused.
> 
> In my case, parse_virtio_config() gets called first followed by
> libxl__prepare_dtb(), where I need to use the "grant_usage" field.
> Later libxl__device_virtio_setdefault() gets called, anything done
> there isn't of much use in my case I guess.

:-(, I feel like something is missing. I would think that
libxl__prepare_dtb() would be called after any _setdefault() function.
Maybe something isn't calling setdefault for virtio devices soon enough
in libxl.

> Setting the default value of grant_usage in
> libxl__device_virtio_setdefault() doesn't work for me (since
> libxl__prepare_dtb() is already called), and I need to set this in
> parse_virtio_config() only.

I don't think that `xl` should set any default; that would be better
done in libxl, since libxl could be used by another program, such as
`libvirt`.

> Currently, virtio->backend_domid is getting set via
> libxl__resolve_domid() in libxl__device_virtio_setdefault(), which is
> too late for me, but is working fine, accidentally I think, since the
> default value of the field is 0, which is same as domain id in my
> case. I would like to understand though how it works for Disk device
> for Oleksandr, since they must also face similar issues. I must be
> doing something wrong here :)

No, I think something is missing for virtio devices.

For disk, there's code in initiate_domain_create() which calls
libxl__disk_devtype.set_default() for every disk, and this happens
before libxl__prepare_dtb(). I don't know how other device types do
this defaulting; I need to search.

There's also a special case for nics: a call to
libxl__device_nic_set_devids() does call set_default() for them.
Otherwise, I think set_default() is called whenever something calls
add().


So, for virtio devices in libxl, I think we will also want to call
set_default() early. Add a call to libxl__virtio_devtype.set_default()
in libxl__domain_config_setdefault(), similar to the one done for
disks. (For disks, at the moment it is done in
initiate_domain_create(), but let's use the newer
libxl__domain_config_setdefault() function for virtio.)

This means that libxl__device_virtio_setdefault() would be called twice
for each device, but that shouldn't be an issue.

Would that work?

> Lastly, libxl__virtio_from_xenstore() is never called in my case.
> Should I just remove it ? I copied it from some other implementation.

I don't think from_xenstore() is normally called when creating a guest,
but if we had an `xl` command called "virtio-list", like there's
"block-list", then from_xenstore() would be used, I think. It could
also be used when doing *-detach, even though virtio doesn't have that
option.


Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 18 14:36:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:36:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536385.834648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzejb-0006OZ-2W; Thu, 18 May 2023 14:36:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536385.834648; Thu, 18 May 2023 14:36:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeja-0006OS-VW; Thu, 18 May 2023 14:36:02 +0000
Received: by outflank-mailman (input) for mailman id 536385;
 Thu, 18 May 2023 14:36:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CvkS=BH=citrix.com=prvs=495754ba3=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pzejZ-0006OM-7M
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:36:01 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4db8491d-f589-11ed-b22c-6b7b168915f2;
 Thu, 18 May 2023 16:35:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4db8491d-f589-11ed-b22c-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684420559;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=YoIYAW8qebn32nOP3GnLdPAr4RdbTEwwJ19kKHSvjNM=;
  b=UamC3Gw4ltwaBjyLJ4tDtBXQt4a3bw2eE9yXNpLu3X7A4mL/6n6pZLRX
   Grry6pdhT/Df9L8YHVZb/IsebgZpAr+YJfNXsjtHxWx0dlmtSNh3vBEwP
   EvXgTMHeMtDsaJzvbQSd5wgnSTQJDZps61ld/DSr+EzcyvfLPsIdaZHBs
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109417320
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:yqrlYqIVMm4ub/asFE+RVJUlxSXFcZb7ZxGr2PjKsXjdYENS0mYOz
 GZNDG2DOquIa2LwftBybIixoUIGupKDxtYxHQRlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpJrfPSwP9TlK6q4mhA4wVuPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c51XkNi8
 8MZKwkDMCiR3+aY/bOwVttj05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TTHZQJxxnD/
 DquE2LRJRpFaduvlTC88Sj8rOXugAf2Zas8G+jtnhJtqALKnTFCYPEMbnOkpdGph0j4XMhQQ
 2Qd8Sovq+496VS3R/H0RRj+q3mB1jYVQ9dKGvc2wB2MwKHTpQ2eAwAsTDFbb8c9nNQrXjFs3
 ViM9/v5CDoqvLCLRHa18raPsSj0KSUTNXUFZyIPUU0C+daLiIM+iAmJUddgFKezgtDvMTXxx
 TmQq245nbp7pcsCza7991fBhTOnp7DAVAtz7QLSNkqP4xllfoeja8qN4ELC8PdbBI+DSx+Ku
 31spiSFxLlQV9fXznXLGbhTWujzvJ5pLQEwn3ZtQLsN8Qus+EetI7hCuDVQGUZiLvQLLGqBj
 FDohStd45paPX2PZKBxYp6sB8lC8ZUMBegJRdiPMIMQP8EZmBuvuXg3OBXOhzyFfF0Ey/lXB
 HuNTSq74Z/244xDxSH+eeoS2KRDKssWlTKKHsCTI/hKPNOjiJ+ppVUtagPmggMRtvnsTODpH
 zF3aaO3J+13CrGWX8Uu2dd7wao2BXY6H4vqjMdca/SOJAFrcEl4Va+Nmu9xJtU9w/0N/gstw
 p1ach4w9bYCrSefdVXiho5LM9sDoqqTXVpkZHdxbD5EKlAoYJq17bd3SqbbiYIPrbQ5pdYtF
 qltRil1KqgXItgx02hHPMaVQU0LXEjDuD9iyAL5OWluL8YxFlWUkjImFyO2nBQz4uOMnZNWi
 9WdOsnzGMZTL+i+JK46sM6S8m4=
IronPort-HdrOrdr: A9a23:JW8oQKjY4Sp4zFeF0M6z/H9rrHBQXgkji2hC6mlwRA09TyX4rb
 HNoB1/73TJYVkqNU3I9ertBED4ewKiyXcX2/hzAV7BZmjbUTCTXeRfBOLZqlWLJ8SZzIFgPM
 xbE5SWZuefMbFxt7ef3OGee+xQpqj/gdjY/tv2/jNPaQlrbq16hj0JdzpzancGPjWv2/ICZf
 2hDvIunUvdRZ29VLXEOkU4
X-Talos-CUID: 9a23:/wrDO2H3zhFQlCnPqmJlq2w4HP8kcUTTkk/Ce0uWNGdiVI+aHAo=
X-Talos-MUID: =?us-ascii?q?9a23=3AlGaswQ2FWNLEZLfK86sxQVP4EzUjxK/xVUc9zZ8?=
 =?us-ascii?q?/uJPUHyZRJhOQsR2ZTdpy?=
X-IronPort-AV: E=Sophos;i="5.99,285,1677560400"; 
   d="scan'208";a="109417320"
Date: Thu, 18 May 2023 15:34:42 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jens Wiklander <jens.wiklander@linaro.org>
CC: <xen-devel@lists.xenproject.org>, <Bertrand.Marquis@arm.com>, Marc Bonnici
	<marc.bonnici@arm.com>, Achin Gupta <achin.gupta@arm.com>, Wei Liu
	<wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: Re: [XEN PATCH v8 03/22] tools: add Arm FF-A mediator
Message-ID: <45f59b7a-592a-4a1e-b606-c2d564b979b8@perard>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-4-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230413071424.3273490-4-jens.wiklander@linaro.org>

On Thu, Apr 13, 2023 at 09:14:05AM +0200, Jens Wiklander wrote:
> Adds a new "ffa" value to the Enumeration "tee_type" to indicate if a
> guest is trusted to use FF-A.

Is "ffa" working yet in the hypervisor, at this point in the patch
series? I'm asking because the doc change is at the end of the patch
series while this patch is at the beginning.

I feel like this patch would be better at the end of the series, just
before the doc change, when the hypervisor is ready for it.

In any case:
Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 18 14:39:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:39:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536391.834657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzen3-00074X-Kp; Thu, 18 May 2023 14:39:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536391.834657; Thu, 18 May 2023 14:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzen3-00074Q-ID; Thu, 18 May 2023 14:39:37 +0000
Received: by outflank-mailman (input) for mailman id 536391;
 Thu, 18 May 2023 14:39:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzen1-00074K-V0
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:39:36 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eae::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd15eceb-f589-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 16:39:32 +0200 (CEST)
Received: from DM6PR07CA0119.namprd07.prod.outlook.com (2603:10b6:5:330::11)
 by DS7PR12MB8371.namprd12.prod.outlook.com (2603:10b6:8:e9::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 14:39:29 +0000
Received: from DM6NAM11FT070.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:330:cafe::7f) by DM6PR07CA0119.outlook.office365.com
 (2603:10b6:5:330::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:39:29 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT070.mail.protection.outlook.com (10.13.173.51) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Thu, 18 May 2023 14:39:29 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:39:29 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:39:28 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:39:27 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd15eceb-f589-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PXp8NrG71WdXvHyJiseIZqZlZF++FeRXuH8CJyDz9/JE2UZL/TLN8iiH5XZJiAIu86WKguhO7hEkZkLUtKk1WBqUaP5nNR8BbN2e7tPANHdS40/zeX9RSJiGq5VSkpUpkAhFXMhKLz+dr0rxgKs6jdyxw7BDjROgr8u3HfnTa6zZr22lGAPIKhQ4/lOqfb9Uu9lFCruPDAQDrBqNuqnGgUKg78EUwXnccUMITF0eFbdvX+uB0D9k1G2JHgR7gI9f6p6Y8Sidt8cxunf1EH1heZrU6X+Z2tRwVlJ2921wZYWwa4eIMaI4LmflhCa8Vi3QnHjTXm0AWN0LdDxxITHeLA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=aJCvJZk+ePYZnsdqF65o/m8Ky/j3D4a3J97pUu8ku9s=;
 b=LdVY9sTMm7WOKIc/SiY74yycRnjVa6NZhCQpwEtc/+9R4eJ3m72PXZVPAy31+gWLidNHNSnoaAnEDAe8lcY0nPB9YN9b1pLcNPDsGfWsmU7muf/jL9F4FHaOOOsROzZG5TmS5jZEvy14OGJySBjH2YCsojsxF9cVMyLLfVK6kW26mJcKTIUJWej0Avcoqe6lMnoGRcSPWUO1U6FFeOwa3w72k6pgNxf7LlsVtYXaGH9xaPgG10IHJTGyqezdjfhV0Pm+twVtf3JT5SXsUN7Ya7uTPwJif93Fw1WAtDUZByUBMM/gAOX3VEAjZ+6pA+l8MN3+mpwXdwMwCSnSBOIJjw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aJCvJZk+ePYZnsdqF65o/m8Ky/j3D4a3J97pUu8ku9s=;
 b=chB5Xbp46K0i3Wkh5ga1Ejjl0c/GtmN1PL9wKclPk/74IqdSOG3RwPb9vWPKQiXjIJnmGJPWCOoGws7/fwsIT1A4OEEv2gu9oPI2KwI0zoB+vRKwqjwrfFIS8/GWiMH2qXBV+TyggC1piypASGQGPqF6p/gkx/SKCGvUlcXE4BQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 00/11] Add support for 32-bit physical address
Date: Thu, 18 May 2023 15:39:09 +0100
Message-ID: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT070:EE_|DS7PR12MB8371:EE_
X-MS-Office365-Filtering-Correlation-Id: fc125919-d51c-4c43-819f-08db57adafde
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:39:29.3829
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fc125919-d51c-4c43-819f-08db57adafde
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT070.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB8371

Hi All,

Please have a look at https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg01465.html
for the context.

The benefits of using 32-bit physical addresses are as follows:

1. It allows Xen to run on platforms (e.g. the Cortex-R52) which support
32-bit physical addresses and have no support for the large physical address
extension. On 32-bit MPU systems which support flat mapping (e.g. the
Cortex-R52), it allows a 32-bit VA to translate to a 32-bit PA.

2. It also enables code optimization when the underlying platform does not
use the large physical address extension.

The following points are to be noted:
1. The device tree always uses uint64_t for addresses and sizes. The caller
needs to translate between uint64_t and unsigned long (when 32-bit physical
addressing is used).
2. Currently, we have enabled this option for Arm_32 only, as the MMU for
Arm_64 uses 48-bit physical addressing.


A note for the Xen committers/reviewers :-

The following patches (1..5) are ready to be committed. They have been reviewed
by at least two people (of whom at least one is a maintainer).
  xen/arm: domain_build: Track unallocated pages using the frame number
  xen/arm: Typecast the DT values into paddr_t
  xen/arm: Introduce a wrapper for dt_device_get_address() to handle
    paddr_t
  xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
    SMMU_CBn_TTBR0
  xen/arm: domain_build: Check if the address fits the range of physical
    address

Out of the remaining patches, the following is the status :-
  xen: dt: Replace u64 with uint64_t as the callback function parameters
    for dt_for_each_range() (Reviewed)
  xen/arm: p2m: Use the pa_range_info table to support ARM_32 and ARM_64
  xen/arm: Introduce choice to enable 64/32 bit physical addressing
  xen/arm: guest_walk: LPAE specific bits should be enclosed within
    "ifndef CONFIG_PHYS_ADDR_T_32"  (Acked)
  xen/arm: Restrict zeroeth_table_offset for ARM_64 (Reviewed and Ack)
  xen/arm: p2m: Enable support for 32bit IPA for ARM_32

I have reordered the patches such that the initial five patches can be
committed without any rebase.

Changes from :

v1 - 1. Reordered the patches such that the first three patches fix issues in
the existing codebase. These can be applied independently of the remaining
patches in this series.

2. Dropped translate_dt_address_size() for the address/size translation between
paddr_t and u64 (as parsed from the device tree). Also, dropped the check for
truncation (while converting u64 to paddr_t).
Instead, we have now modified device_tree_get_reg() and typecast the return
value of dt_read_number() to obtain paddr_t. Also, introduced wrappers for
fdt_get_mem_rsv() and dt_device_get_address() for the same purpose. These can be
found in patch 4/11 and patch 6/11.

3. Split "Other adaptations required to support 32bit paddr" into the following
individual patches for each adaptation :
  xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
    SMMU_CBn_TTBR0
  xen/arm: guest_walk: LPAE specific bits should be enclosed within
    "ifndef CONFIG_ARM_PA_32"

4. Introduced "xen/arm: p2m: Enable support for 32bit IPA".

v2 - 1. Dropped patches 1/11, 2/11 and 3/11 from v2 as they have already been
committed (except 2/11 - "[XEN v5] xen/arm: Use the correct format specifier",
which is waiting to be committed).

2. Introduced a new patch "xen/drivers: ns16550: Use paddr_t for io_base/io_size".

v3 - 1. Combined the patches from https://lists.xenproject.org/archives/html/xen-devel/2023-02/msg00656.html into this series.

v4 - 1. Dropped "xen/drivers: ns16550: Use paddr_t for io_base/io_size" from the patch series.

2. Introduced "xen/arm: domain_build: Check if the address fits the range of physical address".

3. "xen/arm: Use the correct format specifier" has been committed in v4.

v5 - 1. Based on the comments on "[XEN v5 08/10] xen/arm: domain_build: Check if the address fits the range of physical address",
the patch has been modified and split into the following :-

a.  xen: dt: Replace u64 with uint64_t as the callback function parameters
    for dt_for_each_range()
b.  xen/arm: pci: Use 'uint64_t' as the datatype for the function
    parameters.
c.  xen/arm: domain_build: Check if the address fits the range of physical
    address

v6 - 1. Reordered the patches such that only the patches which are dependent on
"CONFIG_PHYS_ADDR_T_32" appear after the Kconfig option is introduced.

Ayan Kumar Halder (11):
  xen/arm: domain_build: Track unallocated pages using the frame number
  xen/arm: Typecast the DT values into paddr_t
  xen/arm: Introduce a wrapper for dt_device_get_address() to handle
    paddr_t
  xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
    SMMU_CBn_TTBR0
  xen/arm: domain_build: Check if the address fits the range of physical
    address
  xen: dt: Replace u64 with uint64_t as the callback function parameters
    for dt_for_each_range()
  xen/arm: p2m: Use the pa_range_info table to support ARM_32 and ARM_64
  xen/arm: Introduce choice to enable 64/32 bit physical addressing
  xen/arm: guest_walk: LPAE specific bits should be enclosed within
    "ifndef CONFIG_PHYS_ADDR_T_32"
  xen/arm: Restrict zeroeth_table_offset for ARM_64
  xen/arm: p2m: Enable support for 32bit IPA for ARM_32

 xen/arch/Kconfig                           |  3 ++
 xen/arch/arm/Kconfig                       | 32 ++++++++++++
 xen/arch/arm/bootfdt.c                     | 46 +++++++++++++----
 xen/arch/arm/domain_build.c                | 57 +++++++++++++++-------
 xen/arch/arm/gic-v2.c                      | 10 ++--
 xen/arch/arm/gic-v3-its.c                  |  4 +-
 xen/arch/arm/gic-v3.c                      | 10 ++--
 xen/arch/arm/guest_walk.c                  |  2 +
 xen/arch/arm/include/asm/lpae.h            |  4 ++
 xen/arch/arm/include/asm/p2m.h             |  6 ---
 xen/arch/arm/include/asm/page-bits.h       |  6 +--
 xen/arch/arm/include/asm/setup.h           |  6 +--
 xen/arch/arm/include/asm/types.h           | 13 +++++
 xen/arch/arm/mm.c                          | 12 ++---
 xen/arch/arm/p2m.c                         | 38 ++++++++++-----
 xen/arch/arm/pci/pci-host-common.c         |  8 +--
 xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
 xen/arch/arm/platforms/brcm.c              |  6 +--
 xen/arch/arm/platforms/exynos5.c           | 32 ++++++------
 xen/arch/arm/platforms/sunxi.c             |  2 +-
 xen/arch/arm/platforms/xgene-storm.c       |  2 +-
 xen/arch/arm/setup.c                       | 14 +++---
 xen/arch/arm/smpboot.c                     |  2 +-
 xen/common/device_tree.c                   | 40 ++++++++++++++-
 xen/drivers/char/cadence-uart.c            |  4 +-
 xen/drivers/char/exynos4210-uart.c         |  4 +-
 xen/drivers/char/imx-lpuart.c              |  4 +-
 xen/drivers/char/meson-uart.c              |  4 +-
 xen/drivers/char/mvebu-uart.c              |  4 +-
 xen/drivers/char/omap-uart.c               |  4 +-
 xen/drivers/char/pl011.c                   |  6 +--
 xen/drivers/char/scif-uart.c               |  4 +-
 xen/drivers/passthrough/arm/ipmmu-vmsa.c   |  8 +--
 xen/drivers/passthrough/arm/smmu-v3.c      |  2 +-
 xen/drivers/passthrough/arm/smmu.c         | 23 ++++-----
 xen/include/xen/device_tree.h              | 42 +++++++++++++++-
 xen/include/xen/libfdt/libfdt-xen.h        | 55 +++++++++++++++++++++
 37 files changed, 380 insertions(+), 141 deletions(-)
 create mode 100644 xen/include/xen/libfdt/libfdt-xen.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:39:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:39:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536392.834668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzenG-0007Mb-Tt; Thu, 18 May 2023 14:39:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536392.834668; Thu, 18 May 2023 14:39:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzenG-0007MU-Qu; Thu, 18 May 2023 14:39:50 +0000
Received: by outflank-mailman (input) for mailman id 536392;
 Thu, 18 May 2023 14:39:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzenF-00074K-4e
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:39:49 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20608.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::608])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d61c4b9f-f589-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 16:39:47 +0200 (CEST)
Received: from DS7PR03CA0179.namprd03.prod.outlook.com (2603:10b6:5:3b2::34)
 by BN9PR12MB5179.namprd12.prod.outlook.com (2603:10b6:408:11c::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 14:39:43 +0000
Received: from DM6NAM11FT003.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b2:cafe::ee) by DS7PR03CA0179.outlook.office365.com
 (2603:10b6:5:3b2::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:39:43 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT003.mail.protection.outlook.com (10.13.173.162) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.15 via Frontend Transport; Thu, 18 May 2023 14:39:43 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:39:42 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:39:41 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d61c4b9f-f589-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Lxn6bxJ9FerscYp3grHBK2B0pUXcoeNVyjdBgNvThgGyKNEALTzfU17PsiFbOT2lG77rc8pFX07dyCuog2cv1WF3zmlSRwUgc/CZmbHYHrsnT51mR9VZRgPX2amK8T+BUVmyXNvtEIytCoBrHSModYYe2pZY1DsegMVCa7l3bf40ybP7innwTfZFOqL2xoX7F+t3n32o6BpUvz73muGw2AgPLjoJeWCHbL1zEGuh83PQMWP40Jsp0pCJaFGgWdLIfqfrHpI6+YuMMg2ZBY3ecQr2JrIUL91308TUoJXSUgwnF3F+aEEnaus0QhZvlrQwpjff/f0DlqldRbj0gRZ+fg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5fJWKj17VMd4J7OD9poo9yDrAqqLVEuS40vyYFKg3Cw=;
 b=Mikke4fJReDcL8VdETUDJEPJTlh97aslDxLn7mhwxEy5HdyuX0uK4/ewLn2mKE/JPizYydafvkHWn61SS2FSFPlg3pLHquoTycPXUfAea4mXsKxz4zD4+TkoXE664IBmcZkKiCu4Tq+b8PGjocVktXu6qNxmfbVOKkr0YfZIZu5SKq6XALk+H+nERAtft9V9CJA7Ax9MGviziMSX6qRjXSqNku9+XfkWUsbnlG1HMcEpyEA0Wq+khh1gbtLBXCEctIzvDGhTRmheSJqO+GdsPeqRMA0snIJFGHCQ4jMl2y3ArsxR2Kvwn9A02mJfl5wDRO5ro9xsTGMQl2x5ZHotLQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5fJWKj17VMd4J7OD9poo9yDrAqqLVEuS40vyYFKg3Cw=;
 b=Cwo6uug56mruDIseFPdUnK8HMVoCkAKspB28uOoMYsrUdAW0nmCQnpp32cEe511W1dH4768pKmFaM7A++JH6INVi8ADy4aQ45/E1tDRdlSi1fCV5FdSDy1Nc6iPDVMXMc1ffeadoxVtBk3c8LNbfnHEzJbrVtWhTNmnFXrcAk44=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 01/11] xen/arm: domain_build: Track unallocated pages using the frame number
Date: Thu, 18 May 2023 15:39:10 +0100
Message-ID: <20230518143920.43186-2-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT003:EE_|BN9PR12MB5179:EE_
X-MS-Office365-Filtering-Correlation-Id: e8295b1e-d244-40d7-87c6-08db57adb821
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:39:43.2283
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e8295b1e-d244-40d7-87c6-08db57adb821
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT003.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5179

rangeset_{xxx}_range() functions are invoked with 'start' and 'size' as
arguments which are either 'uint64_t' or 'paddr_t'. However, the function
accepts 'unsigned long' for 'start' and 'size'. 'unsigned long' is 32 bits for
Arm32. Thus, there is an implicit downcasting from 'uint64_t'/'paddr_t' to
'unsigned long' when invoking rangeset_{xxx}_range().

So, it may seem there is a possibility of loss of data due to truncation.

In reality, 'start' and 'size' are always page aligned, and Arm32 currently
supports a physical address width of 40 bits.
As the addresses are page aligned, the last 12 bits contain zeroes.
Thus, we could instead pass the page frame number, which needs 28 bits
(40 - 12 on Arm32) and can therefore be represented using 'unsigned long'.

On Arm64, this change will not induce any adverse side effect as the maximum
supported physical address width is 48 bits. Thus, the width of 'gfn'
(i.e. 48 - 12 = 36 bits) can be represented using 'unsigned long' (which is
64 bits wide).

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes from -

v3 - 1. Extracted the patch from https://lists.xenproject.org/archives/html/xen-devel/2023-02/msg00657.html
and added it to this series.
2. Modified add_ext_regions() so that it accepts a frame number instead of a
physical address.

v4 - 1. Reworded the commit message to use Arm32/Arm64
(32-bit/64-bit Arm architecture).
2. Replaced pfn with gfn to denote guest frame number in add_ext_regions().
3. Use pfn_to_paddr() to return a physical address from the guest frame number.

v5 - 1. Updated the commit message. Added R-b and A-b.

v6 - 1. No changes.

 xen/arch/arm/domain_build.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 71f307a572..e0ac5db60d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1500,10 +1500,13 @@ static int __init make_resv_memory_node(const struct domain *d,
     return res;
 }
 
-static int __init add_ext_regions(unsigned long s, unsigned long e, void *data)
+static int __init add_ext_regions(unsigned long s_gfn, unsigned long e_gfn,
+                                  void *data)
 {
     struct meminfo *ext_regions = data;
     paddr_t start, size;
+    paddr_t s = pfn_to_paddr(s_gfn);
+    paddr_t e = pfn_to_paddr(e_gfn);
 
     if ( ext_regions->nr_banks >= ARRAY_SIZE(ext_regions->bank) )
         return 0;
@@ -1566,7 +1569,8 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
     {
         start = bootinfo.mem.bank[i].start;
         end = bootinfo.mem.bank[i].start + bootinfo.mem.bank[i].size;
-        res = rangeset_add_range(unalloc_mem, start, end - 1);
+        res = rangeset_add_range(unalloc_mem, PFN_DOWN(start),
+                                 PFN_DOWN(end - 1));
         if ( res )
         {
             printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1580,7 +1584,8 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
     {
         start = assign_mem->bank[i].start;
         end = assign_mem->bank[i].start + assign_mem->bank[i].size;
-        res = rangeset_remove_range(unalloc_mem, start, end - 1);
+        res = rangeset_remove_range(unalloc_mem, PFN_DOWN(start),
+                                    PFN_DOWN(end - 1));
         if ( res )
         {
             printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1595,7 +1600,8 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
         start = bootinfo.reserved_mem.bank[i].start;
         end = bootinfo.reserved_mem.bank[i].start +
             bootinfo.reserved_mem.bank[i].size;
-        res = rangeset_remove_range(unalloc_mem, start, end - 1);
+        res = rangeset_remove_range(unalloc_mem, PFN_DOWN(start),
+                                    PFN_DOWN(end - 1));
         if ( res )
         {
             printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1607,7 +1613,7 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
     /* Remove grant table region */
     start = kinfo->gnttab_start;
     end = kinfo->gnttab_start + kinfo->gnttab_size;
-    res = rangeset_remove_range(unalloc_mem, start, end - 1);
+    res = rangeset_remove_range(unalloc_mem, PFN_DOWN(start), PFN_DOWN(end - 1));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1617,7 +1623,7 @@ static int __init find_unallocated_memory(const struct kernel_info *kinfo,
 
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
-    res = rangeset_report_ranges(unalloc_mem, start, end,
+    res = rangeset_report_ranges(unalloc_mem, PFN_DOWN(start), PFN_DOWN(end),
                                  add_ext_regions, ext_regions);
     if ( res )
         ext_regions->nr_banks = 0;
@@ -1639,7 +1645,7 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
 
     start = addr & PAGE_MASK;
     end = PAGE_ALIGN(addr + len);
-    res = rangeset_remove_range(mem_holes, start, end - 1);
+    res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1677,7 +1683,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
     /* Start with maximum possible addressable physical memory range */
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
-    res = rangeset_add_range(mem_holes, start, end);
+    res = rangeset_add_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end));
     if ( res )
     {
         printk(XENLOG_ERR "Failed to add: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1708,7 +1714,8 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
 
             start = addr & PAGE_MASK;
             end = PAGE_ALIGN(addr + size);
-            res = rangeset_remove_range(mem_holes, start, end - 1);
+            res = rangeset_remove_range(mem_holes, PFN_DOWN(start),
+                                        PFN_DOWN(end - 1));
             if ( res )
             {
                 printk(XENLOG_ERR "Failed to remove: %#"PRIpaddr"->%#"PRIpaddr"\n",
@@ -1735,7 +1742,7 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
 
     start = 0;
     end = (1ULL << p2m_ipa_bits) - 1;
-    res = rangeset_report_ranges(mem_holes, start, end,
+    res = rangeset_report_ranges(mem_holes, PFN_DOWN(start), PFN_DOWN(end),
                                  add_ext_regions,  ext_regions);
     if ( res )
         ext_regions->nr_banks = 0;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:40:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:40:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536397.834677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeo0-0000UH-70; Thu, 18 May 2023 14:40:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536397.834677; Thu, 18 May 2023 14:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeo0-0000UA-3t; Thu, 18 May 2023 14:40:36 +0000
Received: by outflank-mailman (input) for mailman id 536397;
 Thu, 18 May 2023 14:40:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzeny-00074K-Pc
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:40:34 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20601.outbound.protection.outlook.com
 [2a01:111:f400:7e88::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f121d681-f589-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 16:40:32 +0200 (CEST)
Received: from BN0PR04CA0159.namprd04.prod.outlook.com (2603:10b6:408:eb::14)
 by IA0PR12MB7652.namprd12.prod.outlook.com (2603:10b6:208:434::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Thu, 18 May
 2023 14:40:28 +0000
Received: from BN8NAM11FT011.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:eb:cafe::82) by BN0PR04CA0159.outlook.office365.com
 (2603:10b6:408:eb::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.20 via Frontend
 Transport; Thu, 18 May 2023 14:40:28 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT011.mail.protection.outlook.com (10.13.176.140) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 14:40:27 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:40:27 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 07:40:26 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:40:25 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f121d681-f589-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TxZxdQ0XkawVC9kho2WT2FtHwa6ZCNWyuezW2z/Uj+mESMKX9QPd4i3PbEZHDlWc/bOVthBsoiAfmGw+0ecW4LgNCxsOWSmHARpjOeGVkR6LXAtJfF3HyWd2178iTm3t/HsgKJWz7m43ZAKzkZX2lcwlF1dpN0uqiSy9xjKHbWlhdHXnZjdWPTouy7VA6s8QCrKO7+13aVOHwhHFmpC7oVjP90X8z+ftRaBYJOyCJ9rJYwMP6OzS1Uf7XKoSr0oyyDUaqOLkWK0eiI8tKGj5Zlx4nDv6MNFTnFwQqhiPKxzz3i7pMTPXt8dWSxeVgMhDyRvLyMEHxO9sI0p5UyWSJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+tqVoZ2YFpUeNO52o/M+YkHJkClJ6m4riHukSRNfevU=;
 b=bvdvvcSU8MBiXFsivFtD8WxEBDOLU3FpX54Czf5jS1add0Me4UAwKEatyDma8Xb36EhGIdfNIXqbdoC8gg0hkqt8zUi+WNgMzV9MG0LKtUd+3yOOdcvUrXOcvEIY13rJj/wVQaXxxWJu/eq6OpH4ppdCGZ3Ea/MwpZbvQLzhJ1ajrXFn0hhwoy9+6Cuu9AchxYEt26Tav/EaeJJUtDMJqLBXfScH8VT+kOlKkfLcoFjVKtCekHjIWMKmQWZafDdU6uQgKFMniu958fkja+gkVk5NNOdPMGjanygmDrac0ICFvEn12tV21Kt50tI8GDt/JHa28Sz5r6odJoLVrhTP+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+tqVoZ2YFpUeNO52o/M+YkHJkClJ6m4riHukSRNfevU=;
 b=HEPucOwMUXr1jCdrcHk3ZzgO7y00ebptZ1GsVhqsdyFmiQ8JjRZQhpoDNyYe1vWUV++vIHhT0thUv5CpZx2dHgGpceToHUtz1ud14VpA8Ov0vJrFUt1hgO0VA5kgQ94JFxrW9zytJusomjm0oSsYt+1ZWX/DRWoQtrYpsk9aXCY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 02/11] xen/arm: Typecast the DT values into paddr_t
Date: Thu, 18 May 2023 15:39:11 +0100
Message-ID: <20230518143920.43186-3-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT011:EE_|IA0PR12MB7652:EE_
X-MS-Office365-Filtering-Correlation-Id: d5892642-ee00-4618-7d35-08db57add2b4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	4JhO5bt6yWYYWGK41MCPgBxZQ1VZn44Z6/QsFTVshMTRmFamw7T/ocMf5b5JpxFUzWxNGm+c8LaBvlmPCX/inwyy0gYmTVz0rjFg3nsa+4CDGh6y4xNDqGvjrXZjR9HR7xDWnSuizJeWZn1TDaceWRJAMubXpGvd4/XH6ZCuG5pLZwQsfqoUjZbDBOka/9XO2N3Z65TI1o0D1ZFuEbOx5ck/+BxedMnEKTKvjsqlYS9uU1hHp6snM+b9j6sBXu7E8qw5GN+MFYdGx4gfy0x9W58QoJpRfFXrNnvI3WzzQdfXBVHn8X5StIbz4EwgkasQNwcjqDbwt7z/NGUS21wZeNNkC5HPgolEtcCCjxqyl9wT9+9mKz8Rkv7+fyIPnhh/lhMZ0VZM4Zl0v1qG4Q8F9tYHE1u4d3Q7NahHnthIjqkgwoyfUiHFcmOZ/wq0C/Isug8cLUlWvXmQQ30NTX//6hW1gdiF66ff/cpxBDj+lfqlraHFjXaGM0YtEK9mfq2qglNsQtGfmAc7umu0y/Z3d5vxghVo3/JaMC8LWnRmHM8A0WypKLgugz2li13hNkTrfzKqA+etya6Ppv3v6cdtjMWZFtmJQ0hu68ZHHXRQpjyNEYHLhZtU4Hq6t4VlmxnYs06b1DuhPxuH9b1XJPz/GbSKTqpbv0MB6aniXrlEUF6jt0ZP6HoAnYomN2DE3thnXsLEuYEhX9wyP/lKHdzvSvh8eqMVAQ4wp45vXnGfMfAfD5Ex+bt4GiVOniZZ4lJ71qgBruFAVf6NMO/t1OqbdGeGNtsPLiuSc8bpHuF6GKFGp+TIUCwy9ZT4R4aAbNMZ
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(39860400002)(396003)(376002)(451199021)(46966006)(36840700001)(40470700004)(40460700003)(316002)(70206006)(70586007)(478600001)(4326008)(54906003)(6916009)(36756003)(103116003)(83380400001)(47076005)(1076003)(426003)(36860700001)(186003)(26005)(2616005)(86362001)(30864003)(2906002)(5660300002)(8936002)(8676002)(82310400005)(336012)(6666004)(40480700001)(7416002)(356005)(81166007)(41300700001)(82740400003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:40:27.8591
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d5892642-ee00-4618-7d35-08db57add2b4
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT011.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB7652

The DT functions (dt_read_number(), device_tree_get_reg(), fdt_get_mem_rsv())
currently accept or return 64-bit values.

In the future, when we support 32-bit physical addresses, these DT functions are
expected to accept/return 32-bit or 64-bit values (depending on the width of the
physical address). Also, we wish to detect whether any truncation has occurred
(i.e. while parsing 32-bit physical addresses from 64-bit values read from the DT).

device_tree_get_reg() now returns paddr_t. It is invoked by various callers to
obtain the DT address and size.

For fdt_get_mem_rsv(), we have introduced a wrapper named
fdt_get_mem_rsv_paddr() which invokes fdt_get_mem_rsv() and translates
uint64_t to paddr_t. We cannot modify fdt_get_mem_rsv() itself, as it has been
imported from an external source.

For dt_read_number(), we have also introduced a wrapper named dt_read_paddr()
to read physical addresses. We chose not to modify the original function, as it
is used in places that specifically need to read 64-bit values from the DT
(e.g. dt_property_read_u64()).

Xen prints a warning when it detects truncation in cases where it is not able
to return an error.

Also, replaced u32/u64 with uint32_t/uint64_t in the functions touched
by the code changes.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---

Changes from

v1 - 1. Dropped "[XEN v1 2/9] xen/arm: Define translate_dt_address_size() for the translation between u64 and paddr_t" and
"[XEN v1 4/9] xen/arm: Use translate_dt_address_size() to translate between device tree addr/size and paddr_t", instead
this approach achieves the same purpose.

2. No need to check for truncation while converting values from u64 to paddr_t.

v2 - 1. Use "( (dt_start >> (PADDR_SHIFT - 1)) > 1 )" to detect truncation.
2. Introduced libfdt_xen.h to implement fdt_get_mem_rsv_paddr
3. Logged error messages in case truncation is detected.

v3 - 1. Renamed libfdt_xen.h to libfdt-xen.h.
2. Replaced u32/u64 with uint32_t/uint64_t
3. Use "(paddr_t)val != val" to check for truncation.
4. Removed the alias "#define PADDR_SHIFT PADDR_BITS". 

v4 - 1. Added a WARN() when truncation is detected.
2. Always check the return value of fdt_get_mem_rsv().

v5 - 1. Removed the initialization of variables in fdt_get_mem_rsv_paddr().
The warning has been fixed by checking "if (ret < 0)", similar to how it was
being done for fdt_get_mem_rsv().

2. Removed printing "Error:" before WARN().
3. Added the note about implicit casting before dt_read_number()
4. Sanity fixes.

v6 - 1. Added R-b.

 xen/arch/arm/bootfdt.c              | 46 +++++++++++++++++++-----
 xen/arch/arm/domain_build.c         |  2 +-
 xen/arch/arm/include/asm/setup.h    |  4 +--
 xen/arch/arm/setup.c                | 14 ++++----
 xen/arch/arm/smpboot.c              |  2 +-
 xen/include/xen/device_tree.h       | 27 ++++++++++++++
 xen/include/xen/libfdt/libfdt-xen.h | 55 +++++++++++++++++++++++++++++
 7 files changed, 130 insertions(+), 20 deletions(-)
 create mode 100644 xen/include/xen/libfdt/libfdt-xen.h

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index e2f6c7324b..b6f92a174f 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -11,7 +11,7 @@
 #include <xen/efi.h>
 #include <xen/device_tree.h>
 #include <xen/lib.h>
-#include <xen/libfdt/libfdt.h>
+#include <xen/libfdt/libfdt-xen.h>
 #include <xen/sort.h>
 #include <xsm/xsm.h>
 #include <asm/setup.h>
@@ -52,11 +52,37 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
     return false;
 }
 
-void __init device_tree_get_reg(const __be32 **cell, u32 address_cells,
-                                u32 size_cells, u64 *start, u64 *size)
+void __init device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
+                                uint32_t size_cells, paddr_t *start,
+                                paddr_t *size)
 {
-    *start = dt_next_cell(address_cells, cell);
-    *size = dt_next_cell(size_cells, cell);
+    uint64_t dt_start, dt_size;
+
+    /*
+     * dt_next_cell will return uint64_t whereas paddr_t may not be 64-bit.
+     * Thus, there is an implicit cast from uint64_t to paddr_t.
+     */
+    dt_start = dt_next_cell(address_cells, cell);
+    dt_size = dt_next_cell(size_cells, cell);
+
+    if ( dt_start != (paddr_t)dt_start )
+    {
+        printk("Physical address greater than max width supported\n");
+        WARN();
+    }
+
+    if ( dt_size != (paddr_t)dt_size )
+    {
+        printk("Physical size greater than max width supported\n");
+        WARN();
+    }
+
+    /*
+     * Xen will truncate the address/size if it is greater than the maximum
+     * supported width and it will give an appropriate warning.
+     */
+    *start = dt_start;
+    *size = dt_size;
 }
 
 static int __init device_tree_get_meminfo(const void *fdt, int node,
@@ -329,7 +355,7 @@ static int __init process_chosen_node(const void *fdt, int node,
         printk("linux,initrd-start property has invalid length %d\n", len);
         return -EINVAL;
     }
-    start = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
+    start = dt_read_paddr((void *)&prop->data, dt_size_to_cells(len));
 
     prop = fdt_get_property(fdt, node, "linux,initrd-end", &len);
     if ( !prop )
@@ -342,7 +368,7 @@ static int __init process_chosen_node(const void *fdt, int node,
         printk("linux,initrd-end property has invalid length %d\n", len);
         return -EINVAL;
     }
-    end = dt_read_number((void *)&prop->data, dt_size_to_cells(len));
+    end = dt_read_paddr((void *)&prop->data, dt_size_to_cells(len));
 
     if ( start >= end )
     {
@@ -593,9 +619,11 @@ static void __init early_print_info(void)
     for ( i = 0; i < nr_rsvd; i++ )
     {
         paddr_t s, e;
-        if ( fdt_get_mem_rsv(device_tree_flattened, i, &s, &e) < 0 )
+
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &s, &e) < 0 )
             continue;
-        /* fdt_get_mem_rsv returns length */
+
+        /* fdt_get_mem_rsv_paddr returns length */
         e += s;
         printk(" RESVD[%u]: %"PRIpaddr" - %"PRIpaddr"\n", i, s, e);
     }
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e0ac5db60d..7bcd6a83f7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -949,7 +949,7 @@ static int __init process_shm(struct domain *d, struct kernel_info *kinfo,
         BUG_ON(!prop);
         cells = (const __be32 *)prop->value;
         device_tree_get_reg(&cells, addr_cells, addr_cells, &pbase, &gbase);
-        psize = dt_read_number(cells, size_cells);
+        psize = dt_read_paddr(cells, size_cells);
         if ( !IS_ALIGNED(pbase, PAGE_SIZE) || !IS_ALIGNED(gbase, PAGE_SIZE) )
         {
             printk("%pd: physical address 0x%"PRIpaddr", or guest address 0x%"PRIpaddr" is not suitably aligned.\n",
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 38e2ce255f..47ce565d87 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -159,8 +159,8 @@ const char *boot_module_kind_as_string(bootmodule_kind kind);
 extern uint32_t hyp_traps_vector[];
 void init_traps(void);
 
-void device_tree_get_reg(const __be32 **cell, u32 address_cells,
-                         u32 size_cells, u64 *start, u64 *size);
+void device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
+                         uint32_t size_cells, paddr_t *start, paddr_t *size);
 
 u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 6f9f4d8c8a..74b40e527f 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -29,7 +29,7 @@
 #include <xen/virtual_region.h>
 #include <xen/vmap.h>
 #include <xen/trace.h>
-#include <xen/libfdt/libfdt.h>
+#include <xen/libfdt/libfdt-xen.h>
 #include <xen/acpi.h>
 #include <xen/warning.h>
 #include <asm/alternative.h>
@@ -222,11 +222,11 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
     {
         paddr_t r_s, r_e;
 
-        if ( fdt_get_mem_rsv(device_tree_flattened, i, &r_s, &r_e ) < 0 )
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened, i, &r_s, &r_e ) < 0 )
             /* If we can't read it, pretend it doesn't exist... */
             continue;
 
-        r_e += r_s; /* fdt_get_mem_rsv returns length */
+        r_e += r_s; /* fdt_get_mem_rsv_paddr returns length */
 
         if ( s < r_e && r_s < e )
         {
@@ -592,13 +592,13 @@ static paddr_t __init consider_modules(paddr_t s, paddr_t e,
     {
         paddr_t mod_s, mod_e;
 
-        if ( fdt_get_mem_rsv(device_tree_flattened,
-                             i - mi->nr_mods,
-                             &mod_s, &mod_e ) < 0 )
+        if ( fdt_get_mem_rsv_paddr(device_tree_flattened,
+                                   i - mi->nr_mods,
+                                   &mod_s, &mod_e ) < 0 )
             /* If we can't read it, pretend it doesn't exist... */
             continue;
 
-        /* fdt_get_mem_rsv returns length */
+        /* fdt_get_mem_rsv_paddr returns length */
         mod_e += mod_s;
 
         if ( s < mod_e && mod_s < e )
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 4a89b3a834..e107b86b7b 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -159,7 +159,7 @@ static void __init dt_smp_init_cpus(void)
             continue;
         }
 
-        addr = dt_read_number(prop, dt_n_addr_cells(cpu));
+        addr = dt_read_paddr(prop, dt_n_addr_cells(cpu));
 
         hwid = addr;
         if ( hwid != addr )
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 19a74909ce..5f8f61aec8 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -241,6 +241,33 @@ static inline u64 dt_read_number(const __be32 *cell, int size)
     return r;
 }
 
+/* Wrapper for dt_read_number() to return paddr_t (instead of uint64_t) */
+static inline paddr_t dt_read_paddr(const __be32 *cell, int size)
+{
+    uint64_t dt_r;
+    paddr_t r;
+
+    /*
+     * dt_read_number will return uint64_t whereas paddr_t may not be 64-bit.
+     * Thus, there is an implicit cast from uint64_t to paddr_t.
+     */
+    dt_r = dt_read_number(cell, size);
+
+    if ( dt_r != (paddr_t)dt_r )
+    {
+        printk("Physical address greater than max width supported\n");
+        WARN();
+    }
+
+    /*
+     * Xen will truncate the address/size if it is greater than the maximum
+     * supported width and it will give an appropriate warning.
+     */
+    r = dt_r;
+
+    return r;
+}
+
 /* Helper to convert a number of cells to bytes */
 static inline int dt_cells_to_size(int size)
 {
diff --git a/xen/include/xen/libfdt/libfdt-xen.h b/xen/include/xen/libfdt/libfdt-xen.h
new file mode 100644
index 0000000000..a5340bc9f4
--- /dev/null
+++ b/xen/include/xen/libfdt/libfdt-xen.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xen/include/xen/libfdt/libfdt-xen.h
+ *
+ * Wrapper functions for device tree. This helps to convert dt values
+ * between uint64_t and paddr_t.
+ *
+ * Copyright (C) 2023, Advanced Micro Devices, Inc. All Rights Reserved.
+ */
+
+#ifndef LIBFDT_XEN_H
+#define LIBFDT_XEN_H
+
+#include <xen/libfdt/libfdt.h>
+
+static inline int fdt_get_mem_rsv_paddr(const void *fdt, int n,
+                                        paddr_t *address,
+                                        paddr_t *size)
+{
+    uint64_t dt_addr;
+    uint64_t dt_size;
+    int ret;
+
+    ret = fdt_get_mem_rsv(fdt, n, &dt_addr, &dt_size);
+    if ( ret < 0 )
+        return ret;
+
+    if ( dt_addr != (paddr_t)dt_addr )
+    {
+        printk("Error: Physical address greater than max width supported\n");
+        return -FDT_ERR_MAX;
+    }
+
+    if ( dt_size != (paddr_t)dt_size )
+    {
+        printk("Error: Physical size greater than max width supported\n");
+        return -FDT_ERR_MAX;
+    }
+
+    *address = dt_addr;
+    *size = dt_size;
+
+    return ret;
+}
+
+#endif /* LIBFDT_XEN_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:40:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536399.834688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeo8-0000ne-JL; Thu, 18 May 2023 14:40:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536399.834688; Thu, 18 May 2023 14:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeo8-0000nX-Gf; Thu, 18 May 2023 14:40:44 +0000
Received: by outflank-mailman (input) for mailman id 536399;
 Thu, 18 May 2023 14:40:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzeo6-0000Pw-Ts
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:40:43 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on2061d.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5d7d34a-f589-11ed-b22c-6b7b168915f2;
 Thu, 18 May 2023 16:40:41 +0200 (CEST)
Received: from MW4PR03CA0185.namprd03.prod.outlook.com (2603:10b6:303:b8::10)
 by CH2PR12MB4890.namprd12.prod.outlook.com (2603:10b6:610:63::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Thu, 18 May
 2023 14:40:35 +0000
Received: from CO1NAM11FT047.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:b8:cafe::26) by MW4PR03CA0185.outlook.office365.com
 (2603:10b6:303:b8::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:40:35 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT047.mail.protection.outlook.com (10.13.174.132) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 14:40:34 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:40:33 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:40:32 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:40:31 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5d7d34a-f589-11ed-b22c-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MvWkuXgNPbtpKH2Gkd2ZJyIjskmVL09fkLWh5Cymxl4/43Xd7tElhhnP9g0qDZ9iE1Zxy4yo3rT/HuC3Cfs8K9EVeU1BmNgz3TTHlu7F/wFQq3kuZgmTI7qjB2VXHHnvPxr9FEutkAS05C47eqAyCWoAMNJA5ECzuch5OA7XthCgAITGC7a9Sm91wv5WHFutLlU3qQMXtBUr7Kpdb/yShnqugf5NvEfBlcAIvalSu0QL3pUEUFtxhSLDO9XtNJEpJe3neiEGEhJBLN5de1Dtehy7vVlOlf5yyXLrbBmuu4bc40fBuDKeIjVH7AiSuvr55/AiPBzFmsmxwGxtHfeL/Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/D5r9t1oiMhFvvPxy1K92f7LozAVsetKxtPVhdSX3ok=;
 b=WXyIHYEHoWxxMqJUxAnPoauyKFRlIIsMX3tcF1TUHetv8FzHFdV88z7VUHtk53Dok3RiIJPFWXVDh98gLeP4axrfa1vfPmryabu+gBzLZsNuRTORP5v5Kb+QNW+KY2HDQwe4EVDPBw6cGkpqO2UzBQxJpZEvtTMnjn1ACdbwXXiqo8uk/aqOY9kB5lTIXeqkC/9pX7iQM6t1LazsTQ8g5BVrapQgmjRrYOqmPkbTBXDtVYzefZ2Pk9YWHWjyX+ExiHpFso4ZDFMdEhjngrkwTb5vB0IZBYMdq8hIspyztW0WUKe5hZcdqMea2DlXmj+0urpeGAiez9lTceMyY169Eg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/D5r9t1oiMhFvvPxy1K92f7LozAVsetKxtPVhdSX3ok=;
 b=5dQCG+EaXmSaC0WtS+aLzRNpt6FUx0W9+GGpXQQZe+9t/o8TioTLokA21XHMGgtOWPyDYQWBNMw0KojTbwMjudXUiFm0eC4rSaxymKrrAhEJH6QeboatvhXuDtfW/Wx6xAp7YXf4nTR67IsH544+mmXqYBm27NTaWk1vgzW2ljg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 03/11] xen/arm: Introduce a wrapper for dt_device_get_address() to handle paddr_t
Date: Thu, 18 May 2023 15:39:12 +0100
Message-ID: <20230518143920.43186-4-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT047:EE_|CH2PR12MB4890:EE_
X-MS-Office365-Filtering-Correlation-Id: f9d17b1f-6709-47fd-8dc5-08db57add6cb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PFiDoQYP2q9przqJAZ+DalfZF6qpDfs0DBFJZJ0QqpHQoYOJfhDSY1XujaOMCRe4yTzpYkzz8Ofv4ns27VIo6wnxOb6dFA1CjVIASoar8GjBNOMozv5IcEVeULuJnAFbEAvqFdD6imwyLA9Neb3Mrrzw7GkdvC84L/9MiWPcUNlyuzWSaNWU9Wyu2eP3dEDO2HUJ5OyX57QfSo7uhxJIjJeuSsGaZrN00GlmeFXAIXYZLZXBx4rhBCiw8DGMbpL7LwCyE3Kh2A/yluOO+C1a4Qos2R5HEE/CmBpTysRaiG1/GWhwJ19mYL0SoaSTvXH9fEF8NFAVODLlLIFJGjDPopww0xle2cxZ9o7UQ1b1n/cv2ismhU0yptFFbWBw140hc43Rfj2YAzvA/j7wQ2zXKYWDOuLMoTuEINu8ewMxEyDUx5KFrK0r9y8GBk89LPOSTPQidDCgLQtgb+S/DCK1dtwD+WkszeAJtFpIf3v6b6zdjVcdhtRHSShEdm0dxN/9ZettuQ+4V56IweWXrsCPvrLRqCE5A9wLtiU9MELr/7KtmeTU19CJRPxVKbKYIPO2mx+aGZpJSeSwlvNyzb0EdfS5CZTSaIRRYlWMkYa4AX4oGvXdZyhfk+Bfc2ReAyxwM+U5N8WP0c4VgnNM8WazMhbsbAr8EPgAMykF53nIuaQY4exJtokFT5sUwTELxRGtWy9I1jLUXt8vIsYXVb/byAiomiv0eU00MWm/xcnQV4ZfIwa6B0JTQgbNQQUedf2o
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(136003)(346002)(396003)(451199021)(36840700001)(46966006)(40470700004)(86362001)(2616005)(336012)(426003)(4326008)(6666004)(26005)(1076003)(40480700001)(6916009)(70586007)(70206006)(316002)(40460700003)(36756003)(47076005)(81166007)(186003)(478600001)(8936002)(82310400005)(8676002)(54906003)(82740400003)(103116003)(2906002)(30864003)(356005)(7416002)(36860700001)(41300700001)(966005)(83380400001)(5660300002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:40:34.6117
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f9d17b1f-6709-47fd-8dc5-08db57add6cb
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT047.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4890

dt_device_get_address() can only accept uint64_t for the address and size.
However, the address/size denote physical addresses, so they should be
represented by 'paddr_t'.
Consequently, we introduce a wrapper for dt_device_get_address(), i.e.
dt_device_get_paddr(), which accepts the address/size as paddr_t and in turn
invokes dt_device_get_address() with uint64_t intermediaries, converting the
results back to paddr_t.

The reason for introducing this is that in the future 'paddr_t' may not
always be 64-bit. Thus, we need an explicit wrapper to do the type
conversion and return an error in case of truncation.

With this, callers can now invoke dt_device_get_paddr(). However, ns16550.c
is left unchanged as it requires some prior cleanup. For details, see
https://patchew.org/Xen/20230413173735.48387-1-ayan.kumar.halder@amd.com.
This will be addressed in a subsequent series.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes from -

v1 - 1. New patch.

v2 - 1. Extracted part of "[XEN v2 05/11] xen/arm: Use paddr_t instead of u64 for address/size"
into this patch.

2. dt_device_get_address() callers now invoke dt_device_get_paddr() instead.

3. Logged error in case of truncation.

v3 - 1. Modified the truncation checks as "dt_addr != (paddr_t)dt_addr".
2. Some sanity fixes.

v4 - 1. Some sanity fixes.
2. Preserved the declaration of dt_device_get_address() in
xen/include/xen/device_tree.h. The reason being it is currently used by
ns16550.c. This driver requires some more changes as pointed by Jan in
https://lore.kernel.org/xen-devel/6196e90f-752e-e61a-45ce-37e46c22b812@suse.com/
which is to be addressed as a separate series.

v5 - 1. Removed initialization of variables.
2. In dt_device_get_paddr(), added the check
if ( !addr )
    return -EINVAL;

v6 - 1. Added R-b and Ack. 

 xen/arch/arm/domain_build.c                | 10 +++---
 xen/arch/arm/gic-v2.c                      | 10 +++---
 xen/arch/arm/gic-v3-its.c                  |  4 +--
 xen/arch/arm/gic-v3.c                      | 10 +++---
 xen/arch/arm/pci/pci-host-common.c         |  6 ++--
 xen/arch/arm/platforms/brcm-raspberry-pi.c |  2 +-
 xen/arch/arm/platforms/brcm.c              |  6 ++--
 xen/arch/arm/platforms/exynos5.c           | 32 +++++++++----------
 xen/arch/arm/platforms/sunxi.c             |  2 +-
 xen/arch/arm/platforms/xgene-storm.c       |  2 +-
 xen/common/device_tree.c                   | 36 ++++++++++++++++++++++
 xen/drivers/char/cadence-uart.c            |  4 +--
 xen/drivers/char/exynos4210-uart.c         |  4 +--
 xen/drivers/char/imx-lpuart.c              |  4 +--
 xen/drivers/char/meson-uart.c              |  4 +--
 xen/drivers/char/mvebu-uart.c              |  4 +--
 xen/drivers/char/omap-uart.c               |  4 +--
 xen/drivers/char/pl011.c                   |  6 ++--
 xen/drivers/char/scif-uart.c               |  4 +--
 xen/drivers/passthrough/arm/ipmmu-vmsa.c   |  8 ++---
 xen/drivers/passthrough/arm/smmu-v3.c      |  2 +-
 xen/drivers/passthrough/arm/smmu.c         |  8 ++---
 xen/include/xen/device_tree.h              | 13 ++++++++
 23 files changed, 117 insertions(+), 68 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 7bcd6a83f7..50b85ea783 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1698,13 +1698,13 @@ static int __init find_memory_holes(const struct kernel_info *kinfo,
     dt_for_each_device_node( dt_host, np )
     {
         unsigned int naddr;
-        u64 addr, size;
+        paddr_t addr, size;
 
         naddr = dt_number_of_address(np);
 
         for ( i = 0; i < naddr; i++ )
         {
-            res = dt_device_get_address(np, i, &addr, &size);
+            res = dt_device_get_paddr(np, i, &addr, &size);
             if ( res )
             {
                 printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
@@ -2475,7 +2475,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     unsigned int naddr;
     unsigned int i;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
     bool own_device = !dt_device_for_passthrough(dev);
     /*
      * We want to avoid mapping the MMIO in dom0 for the following cases:
@@ -2530,7 +2530,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
     /* Give permission and map MMIOs */
     for ( i = 0; i < naddr; i++ )
     {
-        res = dt_device_get_address(dev, i, &addr, &size);
+        res = dt_device_get_paddr(dev, i, &addr, &size);
         if ( res )
         {
             printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
@@ -2961,7 +2961,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
         if ( res )
         {
             printk(XENLOG_ERR "Unable to permit to dom%d access to"
-                   " 0x%"PRIx64" - 0x%"PRIx64"\n",
+                   " 0x%"PRIpaddr" - 0x%"PRIpaddr"\n",
                    kinfo->d->domain_id,
                    mstart & PAGE_MASK, PAGE_ALIGN(mstart + size) - 1);
             return res;
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 5d4d298b86..6476ff4230 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -993,7 +993,7 @@ static void gicv2_extension_dt_init(const struct dt_device_node *node)
             continue;
 
         /* Get register frame resource from DT. */
-        if ( dt_device_get_address(v2m, 0, &addr, &size) )
+        if ( dt_device_get_paddr(v2m, 0, &addr, &size) )
             panic("GICv2: Cannot find a valid v2m frame address\n");
 
         /*
@@ -1018,19 +1018,19 @@ static void __init gicv2_dt_init(void)
     paddr_t vsize;
     const struct dt_device_node *node = gicv2_info.node;
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("GICv2: Cannot find a valid address for the distributor\n");
 
-    res = dt_device_get_address(node, 1, &cbase, &csize);
+    res = dt_device_get_paddr(node, 1, &cbase, &csize);
     if ( res )
         panic("GICv2: Cannot find a valid address for the CPU\n");
 
-    res = dt_device_get_address(node, 2, &hbase, NULL);
+    res = dt_device_get_paddr(node, 2, &hbase, NULL);
     if ( res )
         panic("GICv2: Cannot find a valid address for the hypervisor\n");
 
-    res = dt_device_get_address(node, 3, &vbase, &vsize);
+    res = dt_device_get_paddr(node, 3, &vbase, &vsize);
     if ( res )
         panic("GICv2: Cannot find a valid address for the virtual CPU\n");
 
diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 1ec9934191..3aa4edda10 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -1004,12 +1004,12 @@ static void gicv3_its_dt_init(const struct dt_device_node *node)
      */
     dt_for_each_child_node(node, its)
     {
-        uint64_t addr, size;
+        paddr_t addr, size;
 
         if ( !dt_device_is_compatible(its, "arm,gic-v3-its") )
             continue;
 
-        if ( dt_device_get_address(its, 0, &addr, &size) )
+        if ( dt_device_get_paddr(its, 0, &addr, &size) )
             panic("GICv3: Cannot find a valid ITS frame address\n");
 
         add_to_host_its_list(addr, size, its);
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index bb59ea94cd..4e6c98bada 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1377,7 +1377,7 @@ static void __init gicv3_dt_init(void)
     int res, i;
     const struct dt_device_node *node = gicv3_info.node;
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("GICv3: Cannot find a valid distributor address\n");
 
@@ -1393,9 +1393,9 @@ static void __init gicv3_dt_init(void)
 
     for ( i = 0; i < gicv3.rdist_count; i++ )
     {
-        uint64_t rdist_base, rdist_size;
+        paddr_t rdist_base, rdist_size;
 
-        res = dt_device_get_address(node, 1 + i, &rdist_base, &rdist_size);
+        res = dt_device_get_paddr(node, 1 + i, &rdist_base, &rdist_size);
         if ( res )
             panic("GICv3: No rdist base found for region %d\n", i);
 
@@ -1417,10 +1417,10 @@ static void __init gicv3_dt_init(void)
      * For GICv3 supporting GICv2, GICC and GICV base address will be
      * provided.
      */
-    res = dt_device_get_address(node, 1 + gicv3.rdist_count,
+    res = dt_device_get_paddr(node, 1 + gicv3.rdist_count,
                                 &cbase, &csize);
     if ( !res )
-        dt_device_get_address(node, 1 + gicv3.rdist_count + 2,
+        dt_device_get_paddr(node, 1 + gicv3.rdist_count + 2,
                               &vbase, &vsize);
 }
 
diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index 7474d877de..5dd62e8013 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -93,7 +93,7 @@ gen_pci_init(struct dt_device_node *dev, const struct pci_ecam_ops *ops)
         cfg_reg_idx = 0;
 
     /* Parse our PCI ecam register address */
-    err = dt_device_get_address(dev, cfg_reg_idx, &addr, &size);
+    err = dt_device_get_paddr(dev, cfg_reg_idx, &addr, &size);
     if ( err )
         goto err_exit;
 
@@ -351,10 +351,10 @@ int __init pci_host_bridge_mappings(struct domain *d)
 
         for ( i = 0; i < dt_number_of_address(dev); i++ )
         {
-            uint64_t addr, size;
+            paddr_t addr, size;
             int err;
 
-            err = dt_device_get_address(dev, i, &addr, &size);
+            err = dt_device_get_paddr(dev, i, &addr, &size);
             if ( err )
             {
                 printk(XENLOG_ERR
diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
index 811b40b1a6..407ec07f63 100644
--- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
+++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
@@ -64,7 +64,7 @@ static void __iomem *rpi4_map_watchdog(void)
     if ( !node )
         return NULL;
 
-    ret = dt_device_get_address(node, 0, &start, &len);
+    ret = dt_device_get_paddr(node, 0, &start, &len);
     if ( ret )
     {
         printk("Cannot read watchdog register address\n");
diff --git a/xen/arch/arm/platforms/brcm.c b/xen/arch/arm/platforms/brcm.c
index d481b2c60f..951e4d6cc3 100644
--- a/xen/arch/arm/platforms/brcm.c
+++ b/xen/arch/arm/platforms/brcm.c
@@ -40,7 +40,7 @@ static __init int brcm_get_dt_node(char *compat_str,
                                    u32 *reg_base)
 {
     const struct dt_device_node *node;
-    u64 reg_base_64;
+    paddr_t reg_base_paddr;
     int rc;
 
     node = dt_find_compatible_node(NULL, NULL, compat_str);
@@ -50,7 +50,7 @@ static __init int brcm_get_dt_node(char *compat_str,
         return -ENOENT;
     }
 
-    rc = dt_device_get_address(node, 0, &reg_base_64, NULL);
+    rc = dt_device_get_paddr(node, 0, &reg_base_paddr, NULL);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "%s: missing \"reg\" prop\n", __func__);
@@ -61,7 +61,7 @@ static __init int brcm_get_dt_node(char *compat_str,
         *dn = node;
 
     if ( reg_base )
-        *reg_base = reg_base_64;
+        *reg_base = reg_base_paddr;
 
     return 0;
 }
diff --git a/xen/arch/arm/platforms/exynos5.c b/xen/arch/arm/platforms/exynos5.c
index 6560507092..c48093cd4f 100644
--- a/xen/arch/arm/platforms/exynos5.c
+++ b/xen/arch/arm/platforms/exynos5.c
@@ -42,8 +42,8 @@ static int exynos5_init_time(void)
     void __iomem *mct;
     int rc;
     struct dt_device_node *node;
-    u64 mct_base_addr;
-    u64 size;
+    paddr_t mct_base_addr;
+    paddr_t size;
 
     node = dt_find_compatible_node(NULL, NULL, "samsung,exynos4210-mct");
     if ( !node )
@@ -52,14 +52,14 @@ static int exynos5_init_time(void)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, &mct_base_addr, &size);
+    rc = dt_device_get_paddr(node, 0, &mct_base_addr, &size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in \"samsung,exynos4210-mct\"\n");
         return -ENXIO;
     }
 
-    dprintk(XENLOG_INFO, "mct_base_addr: %016llx size: %016llx\n",
+    dprintk(XENLOG_INFO, "mct_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
             mct_base_addr, size);
 
     mct = ioremap_nocache(mct_base_addr, size);
@@ -97,9 +97,9 @@ static int __init exynos5_smp_init(void)
     struct dt_device_node *node;
     void __iomem *sysram;
     char *compatible;
-    u64 sysram_addr;
-    u64 size;
-    u64 sysram_offset;
+    paddr_t sysram_addr;
+    paddr_t size;
+    paddr_t sysram_offset;
     int rc;
 
     node = dt_find_compatible_node(NULL, NULL, "samsung,secure-firmware");
@@ -125,13 +125,13 @@ static int __init exynos5_smp_init(void)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, &sysram_addr, &size);
+    rc = dt_device_get_paddr(node, 0, &sysram_addr, &size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in %s\n", compatible);
         return -ENXIO;
     }
-    dprintk(XENLOG_INFO, "sysram_addr: %016llx size: %016llx offset: %016llx\n",
+    dprintk(XENLOG_INFO, "sysram_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr" offset: 0x%"PRIpaddr"\n",
             sysram_addr, size, sysram_offset);
 
     sysram = ioremap_nocache(sysram_addr, size);
@@ -189,7 +189,7 @@ static int exynos5_cpu_power_up(void __iomem *power, int cpu)
     return 0;
 }
 
-static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
+static int exynos5_get_pmu_baseandsize(paddr_t *power_base_addr, paddr_t *size)
 {
     struct dt_device_node *node;
     int rc;
@@ -208,14 +208,14 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
         return -ENXIO;
     }
 
-    rc = dt_device_get_address(node, 0, power_base_addr, size);
+    rc = dt_device_get_paddr(node, 0, power_base_addr, size);
     if ( rc )
     {
         dprintk(XENLOG_ERR, "Error in \"samsung,exynos5XXX-pmu\"\n");
         return -ENXIO;
     }
 
-    dprintk(XENLOG_DEBUG, "power_base_addr: %016llx size: %016llx\n",
+    dprintk(XENLOG_DEBUG, "power_base_addr: 0x%"PRIpaddr" size: 0x%"PRIpaddr"\n",
             *power_base_addr, *size);
 
     return 0;
@@ -223,8 +223,8 @@ static int exynos5_get_pmu_baseandsize(u64 *power_base_addr, u64 *size)
 
 static int exynos5_cpu_up(int cpu)
 {
-    u64 power_base_addr;
-    u64 size;
+    paddr_t power_base_addr;
+    paddr_t size;
     void __iomem *power;
     int rc;
 
@@ -256,8 +256,8 @@ static int exynos5_cpu_up(int cpu)
 
 static void exynos5_reset(void)
 {
-    u64 power_base_addr;
-    u64 size;
+    paddr_t power_base_addr;
+    paddr_t size;
     void __iomem *pmu;
     int rc;
 
diff --git a/xen/arch/arm/platforms/sunxi.c b/xen/arch/arm/platforms/sunxi.c
index e8e4d88bef..2b2c215f20 100644
--- a/xen/arch/arm/platforms/sunxi.c
+++ b/xen/arch/arm/platforms/sunxi.c
@@ -50,7 +50,7 @@ static void __iomem *sunxi_map_watchdog(bool *new_wdt)
         return NULL;
     }
 
-    ret = dt_device_get_address(node, 0, &wdt_start, &wdt_len);
+    ret = dt_device_get_paddr(node, 0, &wdt_start, &wdt_len);
     if ( ret )
     {
         dprintk(XENLOG_ERR, "Cannot read watchdog register address\n");
diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index befd0c3c2d..6fc2f9679e 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -50,7 +50,7 @@ static void __init xgene_check_pirq_eoi(void)
     if ( !node )
         panic("%s: Can not find interrupt controller node\n", __func__);
 
-    res = dt_device_get_address(node, 0, &dbase, NULL);
+    res = dt_device_get_paddr(node, 0, &dbase, NULL);
     if ( res )
         panic("%s: Cannot find a valid address for the distributor\n", __func__);
 
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7b..20bc369367 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -955,6 +955,42 @@ int dt_device_get_address(const struct dt_device_node *dev, unsigned int index,
     return 0;
 }
 
+int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
+                        paddr_t *addr, paddr_t *size)
+{
+    uint64_t dt_addr, dt_size;
+    int ret;
+
+    ret = dt_device_get_address(dev, index, &dt_addr, &dt_size);
+    if ( ret )
+        return ret;
+
+    if ( !addr )
+        return -EINVAL;
+
+    if ( dt_addr != (paddr_t)dt_addr )
+    {
+        printk("Error: Physical address 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
+               dt_addr, dev->name, sizeof(paddr_t));
+        return -ERANGE;
+    }
+
+    *addr = dt_addr;
+
+    if ( size )
+    {
+        if ( dt_size != (paddr_t)dt_size )
+        {
+            printk("Error: Physical size 0x%"PRIx64" for node=%s is greater than max width (%zu bytes) supported\n",
+                   dt_size, dev->name, sizeof(paddr_t));
+            return -ERANGE;
+        }
+
+        *size = dt_size;
+    }
+
+    return ret;
+}
 
 int dt_for_each_range(const struct dt_device_node *dev,
                       int (*cb)(const struct dt_device_node *,
diff --git a/xen/drivers/char/cadence-uart.c b/xen/drivers/char/cadence-uart.c
index 22905ba66c..c38d7ed143 100644
--- a/xen/drivers/char/cadence-uart.c
+++ b/xen/drivers/char/cadence-uart.c
@@ -158,14 +158,14 @@ static int __init cuart_init(struct dt_device_node *dev, const void *data)
     const char *config = data;
     struct cuart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &cuart_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("cadence: Unable to retrieve the base"
diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 43aaf02e18..2503392ccd 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -303,7 +303,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     const char *config = data;
     struct exynos4210_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
@@ -316,7 +316,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     uart->parity    = PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("exynos4210: Unable to retrieve the base"
diff --git a/xen/drivers/char/imx-lpuart.c b/xen/drivers/char/imx-lpuart.c
index 9c1f3b71a3..77f70c2719 100644
--- a/xen/drivers/char/imx-lpuart.c
+++ b/xen/drivers/char/imx-lpuart.c
@@ -204,7 +204,7 @@ static int __init imx_lpuart_init(struct dt_device_node *dev,
     const char *config = data;
     struct imx_lpuart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
@@ -216,7 +216,7 @@ static int __init imx_lpuart_init(struct dt_device_node *dev,
     uart->parity = 0;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("imx8-lpuart: Unable to retrieve the base"
diff --git a/xen/drivers/char/meson-uart.c b/xen/drivers/char/meson-uart.c
index b1e25e0468..c627328122 100644
--- a/xen/drivers/char/meson-uart.c
+++ b/xen/drivers/char/meson-uart.c
@@ -209,14 +209,14 @@ static int __init meson_uart_init(struct dt_device_node *dev, const void *data)
     const char *config = data;
     struct meson_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &meson_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("meson: Unable to retrieve the base address of the UART\n");
diff --git a/xen/drivers/char/mvebu-uart.c b/xen/drivers/char/mvebu-uart.c
index a00618b96f..cc55173513 100644
--- a/xen/drivers/char/mvebu-uart.c
+++ b/xen/drivers/char/mvebu-uart.c
@@ -231,14 +231,14 @@ static int __init mvebu_uart_init(struct dt_device_node *dev, const void *data)
     const char *config = data;
     struct mvebu3700_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &mvebu3700_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("mvebu3700: Unable to retrieve the base address of the UART\n");
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index d6a5d59aa2..8e643cb039 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -324,7 +324,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     struct omap_uart *uart;
     u32 clkspec;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
@@ -344,7 +344,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     uart->parity = UART_PARITY_NONE;
     uart->stop_bits = 1;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("omap-uart: Unable to retrieve the base"
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index be67242bc0..052a651251 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -222,7 +222,7 @@ static struct uart_driver __read_mostly pl011_driver = {
     .vuart_info   = pl011_vuart,
 };
 
-static int __init pl011_uart_init(int irq, u64 addr, u64 size, bool sbsa)
+static int __init pl011_uart_init(int irq, paddr_t addr, paddr_t size, bool sbsa)
 {
     struct pl011 *uart;
 
@@ -258,14 +258,14 @@ static int __init pl011_dt_uart_init(struct dt_device_node *dev,
 {
     const char *config = data;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
     {
         printk("WARNING: UART configuration is not supported\n");
     }
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("pl011: Unable to retrieve the base"
diff --git a/xen/drivers/char/scif-uart.c b/xen/drivers/char/scif-uart.c
index 2fccafe340..1b28ba90e9 100644
--- a/xen/drivers/char/scif-uart.c
+++ b/xen/drivers/char/scif-uart.c
@@ -311,14 +311,14 @@ static int __init scif_uart_init(struct dt_device_node *dev,
     const char *config = data;
     struct scif_uart *uart;
     int res;
-    u64 addr, size;
+    paddr_t addr, size;
 
     if ( strcmp(config, "") )
         printk("WARNING: UART configuration is not supported\n");
 
     uart = &scif_com;
 
-    res = dt_device_get_address(dev, 0, &addr, &size);
+    res = dt_device_get_paddr(dev, 0, &addr, &size);
     if ( res )
     {
         printk("scif-uart: Unable to retrieve the base"
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index 091f09b217..611d9eeba5 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -794,7 +794,7 @@ static void ipmmu_device_reset(struct ipmmu_vmsa_device *mmu)
 static __init bool ipmmu_stage2_supported(void)
 {
     struct dt_device_node *np;
-    uint64_t addr, size;
+    paddr_t addr, size;
     void __iomem *base;
     uint32_t product, cut;
     bool stage2_supported = false;
@@ -806,7 +806,7 @@ static __init bool ipmmu_stage2_supported(void)
         return false;
     }
 
-    if ( dt_device_get_address(np, 0, &addr, &size) )
+    if ( dt_device_get_paddr(np, 0, &addr, &size) )
     {
         printk(XENLOG_ERR "ipmmu: Failed to get PRR MMIO\n");
         return false;
@@ -884,7 +884,7 @@ static int ipmmu_probe(struct dt_device_node *node)
 {
     const struct dt_device_match *match;
     struct ipmmu_vmsa_device *mmu;
-    uint64_t addr, size;
+    paddr_t addr, size;
     uint32_t reg;
     int irq, ret;
 
@@ -905,7 +905,7 @@ static int ipmmu_probe(struct dt_device_node *node)
     bitmap_zero(mmu->ctx, IPMMU_CTX_MAX);
 
     /* Map I/O memory and request IRQ. */
-    ret = dt_device_get_address(node, 0, &addr, &size);
+    ret = dt_device_get_paddr(node, 0, &addr, &size);
     if ( ret )
     {
         dev_err(&node->dev, "Failed to get MMIO\n");
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 4ca55d400a..720aa69ff2 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -2428,7 +2428,7 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 	}
 
 	/* Base address */
-	ret = dt_device_get_address(np, 0, &ioaddr, &iosize);
+	ret = dt_device_get_paddr(np, 0, &ioaddr, &iosize);
 	if (ret)
 		goto out_free_smmu;
 
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b3..79281075ba 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -73,8 +73,8 @@
 /* Xen: Helpers to get device MMIO and IRQs */
 struct resource
 {
-	u64 addr;
-	u64 size;
+	paddr_t addr;
+	paddr_t size;
 	unsigned int type;
 };
 
@@ -101,7 +101,7 @@ static struct resource *platform_get_resource(struct platform_device *pdev,
 
 	switch (type) {
 	case IORESOURCE_MEM:
-		ret = dt_device_get_address(pdev, num, &res.addr, &res.size);
+		ret = dt_device_get_paddr(pdev, num, &res.addr, &res.size);
 
 		return ((ret) ? NULL : &res);
 
@@ -169,7 +169,7 @@ static void __iomem *devm_ioremap_resource(struct device *dev,
 	ptr = ioremap_nocache(res->addr, res->size);
 	if (!ptr) {
 		dev_err(dev,
-			"ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
+			"ioremap failed (addr 0x%"PRIpaddr" size 0x%"PRIpaddr")\n",
 			res->addr, res->size);
 		return ERR_PTR(-ENOMEM);
 	}
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 5f8f61aec8..d514c486a8 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -585,6 +585,19 @@ int dt_find_node_by_gpath(XEN_GUEST_HANDLE(char) u_path, uint32_t u_plen,
  */
 const struct dt_device_node *dt_get_parent(const struct dt_device_node *node);
 
+/**
+ * dt_device_get_paddr - Resolve an address for a device
+ * @device: the device whose address is to be resolved
+ * @index: index of the address to resolve
+ * @addr: address filled by this function
+ * @size: size filled by this function
+ *
+ * This function resolves an address, walking the tree, for a given
+ * device-tree node. It returns 0 on success.
+ */
+int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
+                        paddr_t *addr, paddr_t *size);
+
 /**
  * dt_device_get_address - Resolve an address for a device
  * @device: the device whose address is to be resolved
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:40:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:40:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536401.834698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeoH-0001Bh-W9; Thu, 18 May 2023 14:40:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536401.834698; Thu, 18 May 2023 14:40:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeoH-0001Ba-SV; Thu, 18 May 2023 14:40:53 +0000
Received: by outflank-mailman (input) for mailman id 536401;
 Thu, 18 May 2023 14:40:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzeoG-00074K-Ox
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:40:52 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20628.outbound.protection.outlook.com
 [2a01:111:f400:7eab::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fb41bd51-f589-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 16:40:50 +0200 (CEST)
Received: from MW4PR04CA0318.namprd04.prod.outlook.com (2603:10b6:303:82::23)
 by SJ2PR12MB9209.namprd12.prod.outlook.com (2603:10b6:a03:558::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Thu, 18 May
 2023 14:40:46 +0000
Received: from CO1NAM11FT026.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:82:cafe::1) by MW4PR04CA0318.outlook.office365.com
 (2603:10b6:303:82::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:40:46 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT026.mail.protection.outlook.com (10.13.175.67) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 14:40:45 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:40:42 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:40:42 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:40:41 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb41bd51-f589-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TlmwmNq3D7cLBLdPDzjc9NkISAWvS8oY2BzZMBbutii01/nT/hxFjIzRlo8WpBvobHoPTc2uudmxFK67rpxnansMHlzx7LcFLi42Qrc0SHSkoWWrN+dzHjg6tPQHSlO09lMJDHqjzzM9pDKvFfxIrU4Zi5BhKUhd56/M1Lk4PsxjWQBiqIOuRNFxaOas1dLHiaUOdFUsx5EAg8ZHYWRKNKZ4DUx3lPSRd+LHWWYaNiATLYACTs2t8WWTx6or7O+GcxMAnZo46Xh2Gi6FJNs5U+UCcxkVWlejkQ6PYK/1M6iEyGI+5sWVV/7PECB+SX11MLYtmEBnS78piLNxNZ/o/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EAgb1ysrPbUg0InpEjefR449H1nWxGwdPTeZHCYsBak=;
 b=KKA6e+1ubKQyK1olEGEKD6P6BJTL2SoS0UztOxQFygdkvfZ2ZThlQe/tCgBVvRpeTNu5LeH+7peaqUnC93wCIbWtg4AgULpafuQBq2jw/GiHXpDu08v2/6I5/TI3RZFQ4sfZ66+eQZ2QKlvV/gn/DDkeGsv7PsaPsI90m7HC1AuWLqXHIDkQyUfZPDcVBEPk9u9JVi1jF9ktck5z+uOE77bvp4Ue5Os/qQzMSYpstKws4w0cPk2PmrSI1Ow4gIghiSABxmK84KUT/vPocO9UakMuzcZaEDA6DwMMRV2KheqivHvDZ7sYtpYyqJ/X/xpy5O+bjO2wnuQYxYd7LrGB7g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=temperror (sender ip
 is 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org
 smtp.mailfrom=amd.com; dmarc=temperror action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EAgb1ysrPbUg0InpEjefR449H1nWxGwdPTeZHCYsBak=;
 b=2sHMdsD53eCq5+6Y8pcMcBY2s2rZG9VliEa7xNTLE7wKFVtB3mDau/nJZQ8J9QWI1HELxgeM345X1aggLSVS71NK/T3LPDwZGlTCtrhZ9HErWsXzFQkwK2iJaeyc9ky37Js8UcrxHYKxAXB9sSOY2+gXId9gEhADmFb71vse8XE=
X-MS-Exchange-Authentication-Results: spf=temperror (sender IP is
 165.204.84.17) smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=temperror action=none header.from=amd.com;
Received-SPF: TempError (protection.outlook.com: error in processing during
 lookup of amd.com: DNS Timeout)
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 04/11] xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to SMMU_CBn_TTBR0
Date: Thu, 18 May 2023 15:39:13 +0100
Message-ID: <20230518143920.43186-5-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT026:EE_|SJ2PR12MB9209:EE_
X-MS-Office365-Filtering-Correlation-Id: bd4570f7-76c0-488d-9dc1-08db57addd82
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:40:45.8778
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bd4570f7-76c0-488d-9dc1-08db57addd82
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT026.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB9209

As per ARM IHI 0062D.c ID070116 (SMMU v2 spec), section 17.3.9 (page
17-360), SMMU_CBn_TTBR0 is a 64-bit register. Thus, one can use
writeq_relaxed_non_atomic() to write to it, instead of invoking
writel_relaxed() twice for the lower and upper halves of the register.

This also helps us because p2maddr is 'paddr_t' (which may become a
32-bit type in the future). Thus, one can assign p2maddr to a 64-bit
variable and do the bit manipulation on it to generate the value for
SMMU_CBn_TTBR0.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
---
Changes from -

v1 - 1. Extracted the patch from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".
Use writeq_relaxed_non_atomic() to write u64 register in a non-atomic
fashion.

v2 - 1. Added R-b.

v3 - 1. No changes.

v4 - 1. Reordered the R-b. No further changes.
(This patch can be committed independent of the series).

v5 - Used 'uint64_t' instead of u64. As the change looked trivial to me, I
retained the R-b.

v6 - Added R-b.

 xen/drivers/passthrough/arm/smmu.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 79281075ba..c37fa9af13 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -499,8 +499,7 @@ enum arm_smmu_s2cr_privcfg {
 #define ARM_SMMU_CB_SCTLR		0x0
 #define ARM_SMMU_CB_RESUME		0x8
 #define ARM_SMMU_CB_TTBCR2		0x10
-#define ARM_SMMU_CB_TTBR0_LO		0x20
-#define ARM_SMMU_CB_TTBR0_HI		0x24
+#define ARM_SMMU_CB_TTBR0		0x20
 #define ARM_SMMU_CB_TTBCR		0x30
 #define ARM_SMMU_CB_S1_MAIR0		0x38
 #define ARM_SMMU_CB_FSR			0x58
@@ -1083,6 +1082,7 @@ static void arm_smmu_flush_pgtable(struct arm_smmu_device *smmu, void *addr,
 static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 {
 	u32 reg;
+	uint64_t reg64;
 	bool stage1;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
@@ -1177,12 +1177,13 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 	dev_notice(smmu->dev, "d%u: p2maddr 0x%"PRIpaddr"\n",
 		   smmu_domain->cfg.domain->domain_id, p2maddr);
 
-	reg = (p2maddr & ((1ULL << 32) - 1));
-	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_LO);
-	reg = (p2maddr >> 32);
+	reg64 = p2maddr;
+
 	if (stage1)
-		reg |= ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT;
-	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0_HI);
+		reg64 |= (((uint64_t) (ARM_SMMU_CB_ASID(cfg) << TTBRn_HI_ASID_SHIFT))
+		          << 32);
+
+	writeq_relaxed_non_atomic(reg64, cb_base + ARM_SMMU_CB_TTBR0);
 
 	/*
 	 * TTBCR
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:41:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:41:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536403.834708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeoR-0001ka-A4; Thu, 18 May 2023 14:41:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536403.834708; Thu, 18 May 2023 14:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeoR-0001kT-64; Thu, 18 May 2023 14:41:03 +0000
Received: by outflank-mailman (input) for mailman id 536403;
 Thu, 18 May 2023 14:41:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzeoQ-0000Pw-1B
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:41:02 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on20622.outbound.protection.outlook.com
 [2a01:111:f400:7e83::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 01770aa1-f58a-11ed-b22c-6b7b168915f2;
 Thu, 18 May 2023 16:41:01 +0200 (CEST)
Received: from BN9PR03CA0178.namprd03.prod.outlook.com (2603:10b6:408:f4::33)
 by CH0PR12MB5387.namprd12.prod.outlook.com (2603:10b6:610:d6::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 14:40:57 +0000
Received: from BN8NAM11FT092.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f4:cafe::f6) by BN9PR03CA0178.outlook.office365.com
 (2603:10b6:408:f4::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:40:57 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT092.mail.protection.outlook.com (10.13.176.180) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 14:40:57 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:40:56 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:40:56 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:40:54 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01770aa1-f58a-11ed-b22c-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gJzO8xv45zn9w1Dkr6iS3sdn+uUGIm0t3eT8Y99//7eksGU4+5ND2qXPc0gRrb4OwJ6jS4JMp93Z1i8RkmMrt87qdrblGMBbP6LJm0pW0z+l6R7+JXmWyDGBJCUgEM3TRGszwdv0eKtgSYWrPcBDLQ8J/qxMNZTBEA70nOAKXQP3D2rkqNXCIXTYsnuLYzSslSlgnvq+W3jyQpc9ATrBqgSHYX43RfoALa0hXO+98MUdl7QWrG+M/62MXWDPq0jn3vi66NLz0Ptq7IKnm37iAVmLwrHEQNFgQmAukcJhJe9FKHvoxjwPCQX9oyur4iWc8TMFOnincM6CHqtEUrFtGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dIbGtyu/gWerwDuQ2gvCU+RRCAtN1+d/wfgGFegCZv8=;
 b=b52H8s8t5TiuEfTjY1BYae43ctiUlp5SrDAghP7Ch5lU5qRAxXZvo315DTaQ/OmE96vBx9DO2+5Jm+R1GUS2IAm82ct9aJXb1TOenhUG/w518yyEbpulMzbyl5+lYNZcKzx38sFYrHYC+Xm2o5jbkRBJdnSI2/dLUt5PaaeMSWs2juLy6vxetn+srTreohqXk4oFgfTIuv/0KpkhFHinvwsEKlrpQJDpB3j3Bh5/UYVF05fJg8aVUVHqkfvnQLPKoh1DM+Q4OetQbUeGAlTkYsuTo9qCW4LKkpaShuvHsxxxi38VNwCYsJ5AGaY5W9S4AG+RTPAnO8Io15qUG0n0Cw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dIbGtyu/gWerwDuQ2gvCU+RRCAtN1+d/wfgGFegCZv8=;
 b=3PcfitGTM7m6Qvsh/k33vn7y/9jZIWX+YmiNxh+R4bztP2zKWlw9w66fA/sBoeaLq1gGMrVYIXzvmNv1jrFBITkRtcRDrMPT/nSMIptpeWwEUms7vhdpfvgTonh5SXlLjimCbLHB7hm0tGylvT5tCGYSi9qesfemifJgaUdtS+I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 05/11] xen/arm: domain_build: Check if the address fits the range of physical address
Date: Thu, 18 May 2023 15:39:14 +0100
Message-ID: <20230518143920.43186-6-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT092:EE_|CH0PR12MB5387:EE_
X-MS-Office365-Filtering-Correlation-Id: bbfb26b1-c771-4668-1066-08db57ade439
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:40:57.2492
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bbfb26b1-c771-4668-1066-08db57ade439
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT092.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR12MB5387

handle_pci_range() and map_range_to_domain() take addr and len as
uint64_t parameters. Frame numbers are then obtained from addr and len
by right shifting by PAGE_SHIFT, and are expressed as 'unsigned long'.

Now, when a 64-bit value is right shifted by PAGE_SHIFT, the result can
still be up to 52 bits wide. On a 32-bit system, 'unsigned long' is 32
bits wide. Thus, there is a potential loss of value when the result is
stored as 'unsigned long'.

To mitigate this issue, we check whether the start and end addresses
fit within the range of physical addresses supported on the system. If
not, an appropriate error is returned.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
Changes from :-
v1...v4 - NA. New patch introduced in v5.

v5 - 1. Updated the error message
2. Used "(((paddr_t)~0 - addr) < len)" to check the limit on len.
3. Changes in the prototype of "map_range_to_domain()" has been
addressed by the patch 8.

v6 - Trivial changes. Added R-b.

 xen/arch/arm/domain_build.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 50b85ea783..cb23f531a8 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1643,6 +1643,13 @@ static int __init handle_pci_range(const struct dt_device_node *dev,
     paddr_t start, end;
     int res;
 
+    if ( (addr != (paddr_t)addr) || (((paddr_t)~0 - addr) < len) )
+    {
+        printk(XENLOG_ERR "%s: [0x%"PRIx64", 0x%"PRIx64"] exceeds the maximum allowed PA width (%u bits)",
+               dt_node_full_name(dev), addr, (addr + len), PADDR_BITS);
+        return -ERANGE;
+    }
+
     start = addr & PAGE_MASK;
     end = PAGE_ALIGN(addr + len);
     res = rangeset_remove_range(mem_holes, PFN_DOWN(start), PFN_DOWN(end - 1));
@@ -2333,6 +2340,13 @@ int __init map_range_to_domain(const struct dt_device_node *dev,
     struct domain *d = mr_data->d;
     int res;
 
+    if ( (addr != (paddr_t)addr) || (((paddr_t)~0 - addr) < len) )
+    {
+        printk(XENLOG_ERR "%s: [0x%"PRIx64", 0x%"PRIx64"] exceeds the maximum allowed PA width (%u bits)",
+               dt_node_full_name(dev), addr, (addr + len), PADDR_BITS);
+        return -ERANGE;
+    }
+
     /*
      * reserved-memory regions are RAM carved out for a special purpose.
      * They are not MMIO and therefore a domain should not be able to
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:41:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:41:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536406.834717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeoW-00028V-I0; Thu, 18 May 2023 14:41:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536406.834717; Thu, 18 May 2023 14:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeoW-00028O-F1; Thu, 18 May 2023 14:41:08 +0000
Received: by outflank-mailman (input) for mailman id 536406;
 Thu, 18 May 2023 14:41:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzeoU-00074K-K6
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:41:06 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on20625.outbound.protection.outlook.com
 [2a01:111:f400:7e83::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 03ad2234-f58a-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 16:41:04 +0200 (CEST)
Received: from MW4PR02CA0022.namprd02.prod.outlook.com (2603:10b6:303:16d::15)
 by SA1PR12MB7037.namprd12.prod.outlook.com (2603:10b6:806:24c::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Thu, 18 May
 2023 14:41:01 +0000
Received: from CO1NAM11FT004.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:16d:cafe::4a) by MW4PR02CA0022.outlook.office365.com
 (2603:10b6:303:16d::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:41:01 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT004.mail.protection.outlook.com (10.13.175.89) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.20 via Frontend Transport; Thu, 18 May 2023 14:41:00 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:41:00 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:40:58 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03ad2234-f58a-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aS+z61dIEOh9wHoYQV5HXZhDJBTlh4TPoLbW+9gdcSiLp3XAhVn/tBxddyrVUTKUk4yq1AT/ovHB07PoVgvS99vV47y8G2dE7XfGsIhT7sUrLEMOSx+AA/YhLt/XirD4t8e1qEKICwLqEP0nUj4+voZ3vjdR/jOxiR64NEJV89nmJlN4bV3Wd3XbBLUGRhKgLmhoB7X3vjQPvujpbkjElB68qQyQICCnqNtpCAAmZsHFSUZjT0XqfRA8p7VzAV/atuxZDMZbqK4D3jMXVE4Pl0nlAx0jht8jhsIucB9zqc9JrG+HiNbBmRo5T5llVni8iXn7rE0ZCztJ4/9BsQfEug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fc/6ksSNIY14+cMIRv0fMrHzTXfYwm3DIOoQYaSFZAM=;
 b=mmIwTPUwjh0down3t8bHqjdQQblEEBW29Qv65oYd67eza7VGyDqQxU2Oy6ynaZSDbwrNUnpeKRlJBtnIryTiSA1Wv3RJ+mEtO5P4LevGyX/cAl5Q49AQN6CYgXnAQVQHvMTTA7OfJV0h7IosWdb+oDKtjDrXR9qLAVf9sxkeSV3u9KB8ufcZ4/tuo4TIrC4iRjiUgLXeSyXWLbHdOlXhXrsO35pripi3T6Tzjk6CHZv+r0xcufJy0O/DeBiHqaArAisCR8iu/eYW7sTPyJ6ujPdRk9k7UhKNXsRJQqKtNMOEPrVHPG/2B6gYIvZM8S/UMjSW5MaPLjsssVfsAN1BzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fc/6ksSNIY14+cMIRv0fMrHzTXfYwm3DIOoQYaSFZAM=;
 b=HmA0QSxc/Gsqf83zKSAxc3OM9X71iHPC+SfCoVacqvu7aRu1yT9ptN4ocilEI31+bKTiCWlFexGXwZjpTD+q1lT79tscMYRIczWkpaTD+g7vTJXr++sanlK7ead8i6q7SJdCN7cX3BypzpB1MKmUFC6SuJxygs842R+RregMTms=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 06/11] xen: dt: Replace u64 with uint64_t as the callback function parameters for dt_for_each_range()
Date: Thu, 18 May 2023 15:39:15 +0100
Message-ID: <20230518143920.43186-7-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT004:EE_|SA1PR12MB7037:EE_
X-MS-Office365-Filtering-Correlation-Id: 92281347-216f-4037-489d-08db57ade65e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:41:00.7235
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 92281347-216f-4037-489d-08db57ade65e
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT004.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7037

In the callback functions invoked by dt_for_each_range(), i.e.
handle_pci_range() and map_range_to_domain(), 'u64' should be replaced
with 'uint64_t' as the parameter type, since the Xen coding style says
that u32/u64 should be avoided.

Also, dt_for_each_range() invokes the callback functions with
'uint64_t' arguments. Thus, the parameter types of is_bar_valid() need
to be changed accordingly.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
---
Changes from :-

v1-v5 - New patch introduced in v6.

v6 - 1. Merged "[XEN v6 08/12] xen: dt: Replace u64 with uint64_t as the callback"
and "[XEN v6 09/12] xen/arm: pci: Use 'uint64_t' as the datatype for the .."

2. Added R-b.

 xen/arch/arm/domain_build.c        | 4 ++--
 xen/arch/arm/include/asm/setup.h   | 2 +-
 xen/arch/arm/pci/pci-host-common.c | 2 +-
 xen/common/device_tree.c           | 4 ++--
 xen/include/xen/device_tree.h      | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index cb23f531a8..3f4558ade6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1637,7 +1637,7 @@ out:
 }
 
 static int __init handle_pci_range(const struct dt_device_node *dev,
-                                   u64 addr, u64 len, void *data)
+                                   uint64_t addr, uint64_t len, void *data)
 {
     struct rangeset *mem_holes = data;
     paddr_t start, end;
@@ -2334,7 +2334,7 @@ static int __init map_dt_irq_to_domain(const struct dt_device_node *dev,
 }
 
 int __init map_range_to_domain(const struct dt_device_node *dev,
-                               u64 addr, u64 len, void *data)
+                               uint64_t addr, uint64_t len, void *data)
 {
     struct map_range_data *mr_data = data;
     struct domain *d = mr_data->d;
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 47ce565d87..fe17cb0a4a 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -166,7 +166,7 @@ u32 device_tree_get_u32(const void *fdt, int node,
                         const char *prop_name, u32 dflt);
 
 int map_range_to_domain(const struct dt_device_node *dev,
-                        u64 addr, u64 len, void *data);
+                        uint64_t addr, uint64_t len, void *data);
 
 extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
 
diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
index 5dd62e8013..7cdfc89e52 100644
--- a/xen/arch/arm/pci/pci-host-common.c
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -381,7 +381,7 @@ int __init pci_host_bridge_mappings(struct domain *d)
  * right place for alignment check.
  */
 static int is_bar_valid(const struct dt_device_node *dev,
-                        paddr_t addr, paddr_t len, void *data)
+                        uint64_t addr, uint64_t len, void *data)
 {
     struct pdev_bar_check *bar_data = data;
     paddr_t s = bar_data->start;
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 20bc369367..8da1052911 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -994,7 +994,7 @@ int dt_device_get_paddr(const struct dt_device_node *dev, unsigned int index,
 
 int dt_for_each_range(const struct dt_device_node *dev,
                       int (*cb)(const struct dt_device_node *,
-                                u64 addr, u64 length,
+                                uint64_t addr, uint64_t length,
                                 void *),
                       void *data)
 {
@@ -1057,7 +1057,7 @@ int dt_for_each_range(const struct dt_device_node *dev,
 
     for ( ; rlen >= rone; rlen -= rone, ranges += rone )
     {
-        u64 a, s;
+        uint64_t a, s;
         int ret;
 
         memcpy(addr, ranges + na, 4 * pna);
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index d514c486a8..c2eada7489 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -681,7 +681,7 @@ int dt_for_each_irq_map(const struct dt_device_node *dev,
  */
 int dt_for_each_range(const struct dt_device_node *dev,
                       int (*cb)(const struct dt_device_node *,
-                                u64 addr, u64 length,
+                                uint64_t addr, uint64_t length,
                                 void *),
                       void *data);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:42:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:42:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536419.834728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzepm-0003Rz-2o; Thu, 18 May 2023 14:42:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536419.834728; Thu, 18 May 2023 14:42:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzepl-0003Rs-Ul; Thu, 18 May 2023 14:42:25 +0000
Received: by outflank-mailman (input) for mailman id 536419;
 Thu, 18 May 2023 14:42:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzepl-0003Qv-9x
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:42:25 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7e83::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32eca288-f58a-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 16:42:23 +0200 (CEST)
Received: from SJ0PR03CA0245.namprd03.prod.outlook.com (2603:10b6:a03:3a0::10)
 by DM4PR12MB5326.namprd12.prod.outlook.com (2603:10b6:5:39a::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Thu, 18 May
 2023 14:42:21 +0000
Received: from CO1NAM11FT093.eop-nam11.prod.protection.outlook.com
 (2603:10b6:a03:3a0:cafe::f2) by SJ0PR03CA0245.outlook.office365.com
 (2603:10b6:a03:3a0::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:42:20 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT093.mail.protection.outlook.com (10.13.175.59) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 14:42:18 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:42:17 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:42:16 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32eca288-f58a-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N1ytZ9+1/jjUtOKV2xnBBUtBFccy7gY3JEPHzcduwLOydo886bmdhUZzcYn17U0Epcb2sSnwMT9YnrUttBjYyeTKBeCcvGztaIcC4jVPrgS9vAS4VzWXgxs2dlcN9ZXak3sR0cJMLF3FFTOfCJCGRIjz6jh8jUmFp73+losru7GNZB63YkF6gdE8BjOCpRKhgYttCVPJJS9yhUUVwFFkEjY76o91onxVMKADe5zb3gGnLLbH/kBZYrbSpsROS3vpLQd8G50kA2grOMGbMV7Qip/EBjM6Ka8+NV6b3Qyc51vnDpaU08ETaBy1QIJgbl4Lugvc6C8acO4a27UpC878Gg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=S39xs8zKJNXxpJ6L+8N1p//laX3Tu7QlivwVG5FaGX0=;
 b=G1U+XtoGmrB/hRrojmjGvTg1/XQC872Hu6H9acixnQqP8kgI6m3Zq1N5x1XHXmxY55Ica42V5Fwa0c+YMFuwQEAzmTX7kcFRNacvwnkivmwSfKQkiiDd6goSplw/aN9Ne0yczb28hWKdb9vFASmX7aRkcoiyYj4sC8Pt7eEtbPovBvYlcJn74uPmwjkKp5pH+r9AXR6QUFq+TwDZ2hPQv3YMh83+36Nvn1ch32XIUilNAUhr9rhtz5p/pVlZ1Ih9z4UCYk0yr3PiD2JvIav344Xi9QAJ3QKRUgDqGf9C2qCkFM9UVee6waH+1KrseX15YOd11AK3HMzwrYTZbnsgJA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=temperror (sender ip
 is 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org
 smtp.mailfrom=amd.com; dmarc=temperror action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=S39xs8zKJNXxpJ6L+8N1p//laX3Tu7QlivwVG5FaGX0=;
 b=TAsE1B73aaMxisiJ0wMMX7VZTmaIGJCQ8gJwwUtuleD8Wq4wkIUsGhYb97mpFUQ+IFfwBCloOSlnNM0KUCvkIidLeFWfWCCE4A+WY31W1tuxz3i2w8p+fSwj4hzRzNJst9hY9cTkHssZGNerniCi8AexXT949rRfzoUXFNCpcus=
X-MS-Exchange-Authentication-Results: spf=temperror (sender IP is
 165.204.84.17) smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=temperror action=none header.from=amd.com;
Received-SPF: TempError (protection.outlook.com: error in processing during
 lookup of amd.com: DNS Timeout)
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 10/11] xen/arm: Restrict zeroeth_table_offset for ARM_64
Date: Thu, 18 May 2023 15:39:19 +0100
Message-ID: <20230518143920.43186-11-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT093:EE_|DM4PR12MB5326:EE_
X-MS-Office365-Filtering-Correlation-Id: bb8a2e8d-598c-4549-0786-08db57ae14ab
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	GfcY/a1Ux/k4PtXxNmQNowT1RJhJeykLBnEv8rBjAiqQ8GoNDuQxssfxgeJjK/E9CYSeXhHZdmZeaP+MZKRYpf09lL315a+UpsB3miICqGreIZ/I8qFVjd3/xE49tnxS8TZffHcD3wf0o7EYdVNx926KufAG0p/cXjmvfWGp+XpsKxlJwPVIqnRN9aIsULGa6GnIEtWkST04SU3r5TmBdKfOEtUdw5Z6JfXAFq+03qtTqXZ5HepLmrbWNAf0b6Tgh2L6ZhlgQxWsTAZcoF9HZYtk7R3AJnOVmSdKh2hZVrMkSbKYnhL3YCVd/d7Bv2xIkKzKqdPkv8seWIZfXYFQ4yfqZhmg0xnwHkxwU+mdpEneS0o0xnhs5EWaoc4hNeMacDFUo2Je1ezowB9IZWCKJFWpEkCvymd4MtHUntIptX0hiK8X4WvJzGUnKC+afGzcw4M0BgolX5fvAJwHaMUVaUghQUi4SbFe5H7Gw5yk4ATc4uTV/vDJFwM4Ft17YOF2KH4G/SIR6Q+vB668MHXNMqg12OnnFSAKJjrZk9RCZLvueqmJUCjs+PaugujXzWRISriiWUhXNT/hyRyw9CDX/upbl4kEK/HfQ85gwW4hz1scb/Nm60Wqr0WKeJ8RSJ6+M58I3/E/QmRAZ9PAGN+eNuSIps2lmUSRmMTfXrr+rSbtDeW/UG9zlANX+OQieNeVWQvnNahYn2KKz/jjnpYejTIEa0cBc0XtGqEL/1UlSE82JJAaDv3aQs2DmOtv9fUn7Mis9PFn+YUR+PDhDSRwhg==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(396003)(376002)(346002)(451199021)(46966006)(36840700001)(40470700004)(36860700001)(63350400001)(426003)(47076005)(336012)(186003)(1076003)(2616005)(54906003)(478600001)(63370400001)(26005)(83380400001)(2906002)(8676002)(36756003)(5660300002)(8936002)(6916009)(81166007)(82740400003)(103116003)(40460700003)(7416002)(41300700001)(356005)(70206006)(70586007)(82310400005)(40480700001)(86362001)(4326008)(316002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:42:18.4173
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bb8a2e8d-598c-4549-0786-08db57ae14ab
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT093.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5326

When 32-bit physical addresses are used (i.e. PHYS_ADDR_T_32=y),
"va >> ZEROETH_SHIFT" causes an overflow.
Also, there is no zeroeth-level page table on Arm32.

Also take the opportunity to clean up dump_pt_walk(): use the
DECLARE_OFFSETS() macro instead of open-coding an array of page table
offsets.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes from -

v1 - Removed the duplicate declaration for DECLARE_OFFSETS.

v2 - 1. Reworded the commit message. 
2. Use CONFIG_ARM_PA_32 to restrict zeroeth_table_offset.

v3 - 1. Added R-b and Ack.

v4 - 1. Removed R-b and Ack as we use CONFIG_PHYS_ADDR_T_32
instead of CONFIG_ARM_PA_BITS_32. This is to be in parity with our earlier
patches where we use CONFIG_PHYS_ADDR_T_32 to denote 32-bit physical addr
support.

v5 - 1. Added R-b and Ack.

v6 - 1. No changes.

 xen/arch/arm/include/asm/lpae.h | 4 ++++
 xen/arch/arm/mm.c               | 7 +------
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
index 3fdd5d0de2..7d2f6fd1bd 100644
--- a/xen/arch/arm/include/asm/lpae.h
+++ b/xen/arch/arm/include/asm/lpae.h
@@ -259,7 +259,11 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
 #define first_table_offset(va)  TABLE_OFFSET(first_linear_offset(va))
 #define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
 #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
+#ifdef CONFIG_PHYS_ADDR_T_32
+#define zeroeth_table_offset(va)  0
+#else
 #define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
+#endif
 
 /*
  * Macros to define page-tables:
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 5ef5fd8c49..e460249736 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -233,12 +233,7 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr,
 {
     static const char *level_strs[4] = { "0TH", "1ST", "2ND", "3RD" };
     const mfn_t root_mfn = maddr_to_mfn(ttbr);
-    const unsigned int offsets[4] = {
-        zeroeth_table_offset(addr),
-        first_table_offset(addr),
-        second_table_offset(addr),
-        third_table_offset(addr)
-    };
+    DECLARE_OFFSETS(offsets, addr);
     lpae_t pte, *mapping;
     unsigned int level, root_table;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:42:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:42:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536420.834738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzepx-0003lI-9T; Thu, 18 May 2023 14:42:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536420.834738; Thu, 18 May 2023 14:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzepx-0003lB-68; Thu, 18 May 2023 14:42:37 +0000
Received: by outflank-mailman (input) for mailman id 536420;
 Thu, 18 May 2023 14:42:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzepv-0003kU-MI
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:42:35 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20602.outbound.protection.outlook.com
 [2a01:111:f400:fe59::602])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 39761814-f58a-11ed-b22c-6b7b168915f2;
 Thu, 18 May 2023 16:42:34 +0200 (CEST)
Received: from BN9PR03CA0320.namprd03.prod.outlook.com (2603:10b6:408:112::25)
 by PH8PR12MB6961.namprd12.prod.outlook.com (2603:10b6:510:1bc::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.24; Thu, 18 May
 2023 14:42:30 +0000
Received: from BN8NAM11FT065.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:112:cafe::a8) by BN9PR03CA0320.outlook.office365.com
 (2603:10b6:408:112::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:42:30 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT065.mail.protection.outlook.com (10.13.177.63) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Thu, 18 May 2023 14:42:29 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:42:29 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:42:27 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39761814-f58a-11ed-b22c-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kEXfwzU6VoApqgyxVB3k7eR/qeEpzyqikzzA+lp3LwD+CSpMml3MzBfPuun6t4H9k7B9oNYx1AxvJFIlt6bBrGwa3PA/4JHHjIVtSL1su7mb88SetivxZ3VLaVp7XZpdXbbVfDGKwJ7WNKLWRSK8/qMQNuwpPdaN4lcewwuddHIB4F+iLebbN7Aos0A64VBpPBCeykKDEK8F+SbL4iTnneqyTRX28xAZLDKdNaCDE01ycp803GBVxE7KK+RZV8MgPwMYqL0/stzJyD+tKYxDVwDMO80kEWrMpexNtm1qa6puB4RMTSg1pnLKfOSwVRNmU7KrX0DORwWOH57PeF91Qg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=M9J/5ThoetEUno6+i8DrRu1jgcNPeeZ3w3OXUkCp52Y=;
 b=I+QM4nORvC6tA/vLOZTiia1jZ5y081VwqSyMKSl5w71GS/53GKuICTnBuTv1x9/CPxkc0SmQ+I16nkbOogXhZJJ/NyO3fSred/1Dnd939vJp5XsyqqyqWff+flnMHFb5PE3Ieh9F88qASi5aG112ZpdNYki9bRIENGXHNTRwUuYTbpl+ImOfZwnX8u48FSq9KQVOV/M5raQEoKHN5KRlZi9XicE1+ONfWJ9ZuEgyWyVZrU3loG/sfpfokaa3ZfDgZ6R+Odhf6ldLM/fHfL1wIrICO3Nwb/CrAF/Diyx/9APpSvtGo9xTr2lA/+N6+PPTDgJ96tN6MemmngeC/mz9gg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M9J/5ThoetEUno6+i8DrRu1jgcNPeeZ3w3OXUkCp52Y=;
 b=b+iGFAU602aOhWComooD9HzBGj81gEbGY6fJRSQvuZ54Hwdxh1lkQy2uLZGda43UOoiNiNpmKOuFQaxAOYqOaI/DwPxrNGOkm0PQ8OXuod0H+SWQtUlVOtXQQlo1tvb9qZxKcisExyJ9s7BkpH4fGu7Ucw5qcS6IlVDRfYfELsY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 11/11] xen/arm: p2m: Enable support for 32bit IPA for ARM_32
Date: Thu, 18 May 2023 15:39:20 +0100
Message-ID: <20230518143920.43186-12-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT065:EE_|PH8PR12MB6961:EE_
X-MS-Office365-Filtering-Correlation-Id: 7efc5c60-cf6f-4e54-b064-08db57ae1b6b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pTgoIEc+qksOtZzpFWX/aRxYRbHyE3smwQR+oM4eLzI0kmyuuqrkLg5+QtgGN/tkJdIPwXajxygGUC5V3hA2eOH9XHv6F/0eDI/+28DI1K6kXbsIxFoklzCcV14SP4BHTx1v3mK2rZ56Ubfys78mqIHl40rryW4FKsnfEBG0w55znkScPiTKDWdvf2oMDrJ0wJiSICXjpHprmM+bQbWDxJrv/MJ3w47TlIjYBjmkI7LXuCPcV0s6m5xBvqzRTB8vBoMA24QQtZcI847djFwHW50nS2vFhMEI0cM7UOyKledWbZCpTiBu6wPk5or1U3d0hevmCbY6c20xvhtmGaWeSsIh/y7RAWM6R4iAxqOLb46W28unFRIf7InclsE5hsEobEK7vUb5Dek0V6xOItMKbQNmYcVduXEPh+m3F/vjME/qVnh6Mt75NiAcAfhTuhv0kn08Yjab1kX+gX4ZfN91Q0eKk0DN+VJWliaPjIij2qJOuZAa1CljC1QmrtCILkD9NcUSANiYVOBYbRljwQfMXyHYXCb1MVS9tXyqvAVQLBcPNU0ACSLp7ub0ztwUodWugS/XV7GN9L76lPxPX18U/tr4UgQ48S02zr/cOTIjWUCUZG2B7+JnPU1ez5Y1zWpTP4viVhh3wtqOfYlw/CumWr602jT+8iApxXxjgxgYRvlmoZPtsrSFXj3xMnMnt0ZbnsbKu4UwSmcifTjtY5RQKqnq9geuXGi2YiZDooVKlMIqmTEHPY2Xk2fHjgkkILBNeUXWjcJXj4BWp67HxpMR6w==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(376002)(396003)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(7416002)(8676002)(8936002)(186003)(1076003)(26005)(336012)(426003)(83380400001)(47076005)(36860700001)(5660300002)(2616005)(6666004)(41300700001)(82310400005)(356005)(103116003)(70206006)(81166007)(40460700003)(54906003)(2906002)(316002)(4326008)(478600001)(82740400003)(40480700001)(6916009)(86362001)(36756003)(70586007)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:42:29.8556
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7efc5c60-cf6f-4e54-b064-08db57ae1b6b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT065.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB6961

Refer to ARM DDI 0406C.d ID040418, B3-1345:

"A stage 2 translation with an input address range of 31-34 bits can
start the translation either:

- With a first-level lookup, accessing a first-level translation
  table with 2-16 entries.

- With a second-level lookup, accessing a set of concatenated
  second-level translation tables"

Thus, for a 32-bit IPA, there are no concatenated root-level tables,
so the root order is 0.

Also, refer to ARM DDI 0406C.d ID040418, B3-1348:
"Determining the required first lookup level for stage 2 translations

For a stage 2 translation, the output address range from the stage 1
translations determines the required input address range for the stage 2
translation. The permitted values of VTCR.SL0 are:
0b00 Stage 2 translation lookup must start at the second level.
0b01 Stage 2 translation lookup must start at the first level.

VTCR.T0SZ must indicate the required input address range. The size of
the input address region is 2^(32-T0SZ) bytes."

Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = 0 when the size of
the input address region is 2^32 bytes.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v1 - New patch.

v2 - 1. Added Ack.

v3 - 1. Dropped Ack. 
2. Rebased the patch based on the previous change.

v4 - 1. t0sz is 0 for 32-bit IPA on Arm32.
2. Updated the commit message to explain t0sz, sl0 and root_order.

v5 - 1. Rebased on top of the changes in the previous patch.

v6 - 1. Removed the index for ARM_32.

 xen/arch/arm/p2m.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 755cb86c5b..08b209e7c9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2265,6 +2265,7 @@ void __init setup_virt_paging(void)
         [6] = { 52,      12/*12*/,  4,          2 },
         [7] = { 0 }  /* Invalid */
 #else
+        { 32,      0/*0*/,    0,          1 },
         { 40,      24/*24*/,  1,          1 },
         { 0 },  /* Invalid */
 #endif
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:47:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:47:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536431.834763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeul-000532-J2; Thu, 18 May 2023 14:47:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536431.834763; Thu, 18 May 2023 14:47:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeul-00051s-Bu; Thu, 18 May 2023 14:47:35 +0000
Received: by outflank-mailman (input) for mailman id 536431;
 Thu, 18 May 2023 14:47:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzeoZ-00074K-GH
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:41:11 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eae::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06f2fec4-f58a-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 16:41:09 +0200 (CEST)
Received: from BN1PR14CA0029.namprd14.prod.outlook.com (2603:10b6:408:e3::34)
 by SA0PR12MB4576.namprd12.prod.outlook.com (2603:10b6:806:93::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Thu, 18 May
 2023 14:41:05 +0000
Received: from BN8NAM11FT106.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e3:cafe::6a) by BN1PR14CA0029.outlook.office365.com
 (2603:10b6:408:e3::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:41:04 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT106.mail.protection.outlook.com (10.13.177.7) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 14:41:03 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:41:03 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 07:41:02 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:41:01 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06f2fec4-f58a-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GPNXG2IM5Ehud4tR8pGKKxkxebCruwh99eJLIRN7YSosKkRwMpYDaLefAKSJNS5vqrv9+vgdy3LeW1Ab497ytE8Nk+HNzcxBbEhghekitWuLvf1e0K1ucstY8gl5qp+Py6Ey3nh49ROe0gRnIHQk0pThD9cPq0YkI77+zonfYEXdQmOe7ArjIp7i2y6p8jJtGqyNvvxSOwxp0XNolRx6QcUhTwln7xipvn+SxBaucLKO+7hzuktQykLpC0Pmspq1RJvh4iBH1OH+qbgktJFzodTX7mw/mbqMbiH+zGobvvVLYlTD/95Q8BBXw0uLVyGrnAIIful49LSn+HuNnA2PGg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=b9lGGyp31QTztHVbOYMqPwY8XuOJ0Ip79morXeJYXlE=;
 b=GMidtoGFfHj6WV7uCedW5BvLrbHb+m2JB4InlSwfoJkDfiP63jLCpFl7tNvdropKx02hh5MzIiXb+sBl9o6oBOwkU5QdTgV+XxOSnbRWtPgpBbQCq+TfTg5d2q0/WJyfkOJsWPvdvGzUxCQIZJVXXojEwgVvya5VxdgryuDSdw17TXpegDjRMD+t/XxAxlFG4mMxal8tUD3B2HioS9AqfiqdMZLmrDLM6Il8AOcQkzqjkYSN9ibAYaASE1xym6DUhvcN1611DWhH1WWHGrMgcVD9cGexoSMiydxv6lr+eWSNSbcuTdGFH3BvDh4P19x9V/L60gcGC13YlBEco8zqXw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b9lGGyp31QTztHVbOYMqPwY8XuOJ0Ip79morXeJYXlE=;
 b=IT2LIcMcwekH1Ud2KAbMJOkR8BIIlplEaDM5TbXL5kI3vJt1A7r1i6ECUJrPoVUeuIr5afCedunYlt4X+Mp7Hlug62g1Yy2aY050nL+8DohCij06uInyZkuiTxyrkNITCV+AfF4maA9Msy64YN6mrIxlNCsPdeB3x2N2SUjjXrU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 07/11] xen/arm: p2m: Use the pa_range_info table to support ARM_32 and ARM_64
Date: Thu, 18 May 2023 15:39:16 +0100
Message-ID: <20230518143920.43186-8-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT106:EE_|SA0PR12MB4576:EE_
X-MS-Office365-Filtering-Correlation-Id: 45b85d64-7e76-4884-f1e6-08db57ade836
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PxuliIDOJ40wAcLwYeedNUZOELXqQoDpOsDruCR20x86iC3IdBng0iwshjGch7LRBxVEyXgsbbNYP8mFrR1v1X2KjJZXMwi55nsfkeMUntTx0+U/IfjZvxxoRLvj1WJdRVFKyeaWO56HlR20nrk0AxNoKQQIyKo4XEkTL75VTk3ohlH4FzF+9NONIHCCEpjNIBf9EMrqWH5LuSxCUis14vUE1XiOUdGVnKVofSf95S/iS5apfc1gokykrMMD5Fx8CfAji6+DF8Nqih8YyM1NfEJR7Tff1Lk9I1Z+XYE0Ov0VsURBeOSLzdbU+UtVr1Qifl+nAOWsaGFmpDAGQhW+fgQPjzZFDJR9W7zsi+C2g4T3Xc7wT8K61BDZtDxiGyycRwaZZp2NF0kMy8p5KiqQ8hMKpA09w42sH/U415Zd9xtNqhQTZ4CPqziAeGwsa8sJ1vLTSOSX+GnmJs8mKEjyBd/s7rsRf0Zstc792r7BGLogKuJ/aEcyhcMZ/byDRz8qBfnVGGp6/j8Vbm//kEip55Jbh60hobXVqsjM2XGhk6dmdYX8ETb+1dEyJow2pTnODIJjiEwCg12XcWTP/mvJCHThv8S8fl9Bd7hH/ZLV45Qj+kRRnz+GlOigomCff+3ExpTCHfXQuvUHgKCU8IwtKV9o8E2bV6mRk0cx/DvuJSVbduXUt86PTXrT8eZxWJjlw+/VfWIRngT5kpwat7+lt2xqmYV/AJ7AojJ0EiCVw59MDrjMhCh6ekianD8mksifos3xeCqf6ZjWp/14tmD/HQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(376002)(39860400002)(396003)(451199021)(36840700001)(40470700004)(46966006)(6666004)(83380400001)(2906002)(82310400005)(47076005)(36756003)(426003)(336012)(2616005)(186003)(5660300002)(82740400003)(8676002)(26005)(1076003)(54906003)(7416002)(8936002)(81166007)(86362001)(41300700001)(40460700003)(36860700001)(70586007)(6916009)(103116003)(478600001)(70206006)(356005)(4326008)(316002)(40480700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:41:03.9462
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 45b85d64-7e76-4884-f1e6-08db57ade836
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT106.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4576

Restructure the code so that the pa_range_info[] table can be used for
both ARM_32 and ARM_64.

Also, remove the hardcoded P2M_ROOT_ORDER and P2M_ROOT_LEVEL, as
p2m_root_order can be obtained from pa_range_info[].root_order and
p2m_root_level from pa_range_info[].sl0.

Refer to ARM DDI 0406C.d ID040418, B3-1345,
"Use of concatenated first-level translation tables

...However, a 40-bit input address range with a translation granularity of 4KB
requires a total of 28 bits of address resolution. Therefore, a stage 2
translation that supports a 40-bit input address range requires two concatenated
first-level translation tables,..."

Thus, the root order is 1 for a 40-bit IPA on ARM_32.

Refer to ARM DDI 0406C.d ID040418, B3-1348,

"Determining the required first lookup level for stage 2 translations

For a stage 2 translation, the output address range from the stage 1
translations determines the required input address range for the stage 2
translation. The permitted values of VTCR.SL0 are:

0b00 Stage 2 translation lookup must start at the second level.
0b01 Stage 2 translation lookup must start at the first level.

VTCR.T0SZ must indicate the required input address range. The size of the input
address region is 2^(32-T0SZ) bytes."

Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of the
input address region is 2^40 bytes.

Thus, pa_range_info[].t0sz encodes VTCR.S = 1 (bit [4]) and VTCR.T0SZ = 8
(bits [3:0]), i.e. 0b11000, which is 24 (-8 in 5-bit two's complement).

VTCR.T0SZ is bits [5:0] for ARM_64.
For ARM_32, VTCR.T0SZ is bits [3:0] and S (the sign extension bit) is
bit [4].

Thus, the VTCR.T0SZ bits are interpreted differently for each
architecture. For this, a union is used.

pa_range_info[] is indexed by ID_AA64MMFR0_EL1.PARange, which is present
on Arm64 only. This is why we do not specify the indices for ARM_32.
Also, the entry "{ 40,      24/*24*/,  1,          1 }" is duplicated
between ARM_64 and ARM_32 to avoid introducing extra #ifdefs.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -

v3 - 1. New patch introduced in v4.
2. Restructure the code such that pa_range_info[] is used both by ARM_32 as
well as ARM_64.

v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and P2M_ROOT_LEVEL.
The reason being root_order will not be always 1 (See the next patch).
2. Updated the commit message to explain t0sz, sl0 and root_order values for
32-bit IPA on Arm32.
3. Some sanity fixes.

v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range, i.e. when
PARange is 0 the PA size is 32, when it is 1 the PA size is 36, and so on.
pa_range_info[] has been updated accordingly.
For ARM_32, pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do not support
the 32-bit and 36-bit physical address ranges yet.

v6 - 1. Added pa_range_info[] entries for ARM_32 without indices. Some entries
may be duplicated between ARM_64 and ARM_32.
2. Recalculated p2m_ipa_bits for ARM_32 from T0SZ (similar to ARM_64).
3. Introduced a union to reinterpret the T0SZ bits between ARM_32 and ARM_64.

 xen/arch/arm/include/asm/p2m.h |  6 ------
 xen/arch/arm/p2m.c             | 37 +++++++++++++++++++++++-----------
 2 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index f67e9ddc72..940495d42b 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -14,16 +14,10 @@
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
 
-#ifdef CONFIG_ARM_64
 extern unsigned int p2m_root_order;
 extern unsigned int p2m_root_level;
 #define P2M_ROOT_ORDER    p2m_root_order
 #define P2M_ROOT_LEVEL p2m_root_level
-#else
-/* First level P2M is always 2 consecutive pages */
-#define P2M_ROOT_ORDER    1
-#define P2M_ROOT_LEVEL 1
-#endif
 
 struct domain;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 418997843d..755cb86c5b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -19,9 +19,9 @@
 
 #define INVALID_VMID 0 /* VMID 0 is reserved */
 
-#ifdef CONFIG_ARM_64
 unsigned int __read_mostly p2m_root_order;
 unsigned int __read_mostly p2m_root_level;
+#ifdef CONFIG_ARM_64
 static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
 /* VMID is by default 8 bit width on AArch64 */
 #define MAX_VMID       max_vmid
@@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
     /* Setup Stage 2 address translation */
     register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
 
-#ifdef CONFIG_ARM_32
-    if ( p2m_ipa_bits < 40 )
-        panic("P2M: Not able to support %u-bit IPA at the moment\n",
-              p2m_ipa_bits);
-
-    printk("P2M: 40-bit IPA\n");
-    p2m_ipa_bits = 40;
-    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
-    val |= VTCR_SL0(0x1); /* P2M starts at first level */
-#else /* CONFIG_ARM_64 */
     static const struct {
         unsigned int pabits; /* Physical Address Size */
         unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
@@ -2265,6 +2255,7 @@ void __init setup_virt_paging(void)
     } pa_range_info[] __initconst = {
         /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
         /*      PA size, t0sz(min), root-order, sl0(max) */
+#ifdef CONFIG_ARM_64
         [0] = { 32,      32/*32*/,  0,          1 },
         [1] = { 36,      28/*28*/,  0,          1 },
         [2] = { 40,      24/*24*/,  1,          1 },
@@ -2273,11 +2264,22 @@ void __init setup_virt_paging(void)
         [5] = { 48,      16/*16*/,  0,          2 },
         [6] = { 52,      12/*12*/,  4,          2 },
         [7] = { 0 }  /* Invalid */
+#else
+        { 40,      24/*24*/,  1,          1 },
+        { 0 },  /* Invalid */
+#endif
     };
 
     unsigned int i;
     unsigned int pa_range = 0x10; /* Larger than any possible value */
 
+    /* Typecast pa_range_info[].t0sz into ARM_32 and ARM_64 bit variants. */
+    union {
+        signed int t0sz_32:5;
+        unsigned int t0sz_64:6;
+    } t0sz;
+
+#ifdef CONFIG_ARM_64
     /*
      * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
      * with IPA bits == PA bits, compare against "pabits".
@@ -2291,6 +2293,7 @@ void __init setup_virt_paging(void)
      */
     if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
         max_vmid = MAX_VMID_16_BIT;
+#endif
 
     /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits". */
     for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
@@ -2306,24 +2309,34 @@ void __init setup_virt_paging(void)
     if ( pa_range >= ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_range].pabits )
         panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
 
+#ifdef CONFIG_ARM_64
     val |= VTCR_PS(pa_range);
     val |= VTCR_TG0_4K;
 
     /* Set the VS bit only if 16 bit VMID is supported. */
     if ( MAX_VMID == MAX_VMID_16_BIT )
         val |= VTCR_VS;
+#endif
+
     val |= VTCR_SL0(pa_range_info[pa_range].sl0);
     val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
 
     p2m_root_order = pa_range_info[pa_range].root_order;
     p2m_root_level = 2 - pa_range_info[pa_range].sl0;
+
+#ifdef CONFIG_ARM_64
+    t0sz.t0sz_64 = pa_range_info[pa_range].t0sz;
     p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
+#else
+    t0sz.t0sz_32 = pa_range_info[pa_range].t0sz;
+    p2m_ipa_bits = 32 - t0sz.t0sz_32;
+#endif
 
     printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n",
            p2m_ipa_bits,
            pa_range_info[pa_range].pabits,
            ( MAX_VMID == MAX_VMID_16_BIT ) ? 16 : 8);
-#endif
+
     printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n",
            4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:47:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:47:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536430.834758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeul-00050w-9j; Thu, 18 May 2023 14:47:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536430.834758; Thu, 18 May 2023 14:47:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeul-00050p-4j; Thu, 18 May 2023 14:47:35 +0000
Received: by outflank-mailman (input) for mailman id 536430;
 Thu, 18 May 2023 14:47:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzepP-0000Pw-5W
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:42:03 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on20627.outbound.protection.outlook.com
 [2a01:111:f400:7eb2::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 26296225-f58a-11ed-b22c-6b7b168915f2;
 Thu, 18 May 2023 16:42:02 +0200 (CEST)
Received: from MW2PR2101CA0029.namprd21.prod.outlook.com (2603:10b6:302:1::42)
 by SA1PR12MB6895.namprd12.prod.outlook.com (2603:10b6:806:24e::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Thu, 18 May
 2023 14:41:59 +0000
Received: from CO1NAM11FT053.eop-nam11.prod.protection.outlook.com
 (2603:10b6:302:1:cafe::7d) by MW2PR2101CA0029.outlook.office365.com
 (2603:10b6:302:1::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.6 via Frontend
 Transport; Thu, 18 May 2023 14:41:59 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT053.mail.protection.outlook.com (10.13.175.63) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 14:41:58 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:41:57 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 07:41:57 -0700
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:41:56 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26296225-f58a-11ed-b22c-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AxljZyJCoDuKiwjuzQwORT+0OarCdHjVU5cRm/0sqKn2D5JLiOwF3KZkPNLpMtA3uVx8Vw0W8KW8zCrdnHj69mle55n9q/NFNmdu5giN3K0VzVzR7FD8ZxxcxwLa0dWELcAS07SztYIUYAodG7Qt384rEJSY4JTXzR06zg4JZmk+ER31v+RcozkuqQDW7l5GgMl3jMOHK555wQK4WgwJRojhnYtrSb+DxFmbe+5o8647i8W+UP1FNv80Z0cyF4xBhmsTRPGAqD642NlI+CEGevYkLVfXTPO/W3rx116rPKT5kZjGltnM2Clg0RWgvEpGMGCyu6LvsrF63hG88mhNBg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IXMycSPCMoOkBk7CFnL74O37ftF1ESExFvr5Bqg0mxU=;
 b=UKEDB6pSB7tlAHpUpvDxLi9GylDEOGzTcV2pBrL3w6QTUO6ewvALKzyxe8afEVPDhUR9nVY1ysDKH5GuagYhE0SDRBnJIJ+qP9FVT239oRCjEHByOwk2cSo8V/Mr8L202tVSTppbW/hSKlTnLoYGY0jANgqUvROnaVqhlmHenFsGg+LgirLrptR4eSLoRtOpZyRGuyZ8rVQ9gLPQTEnuSp62Bdis7gcPdzfcKvp7ilvlyukuwYBiTJeMAxYTx+kAurhWfjicMraoEmRCfuR/rPEOfMsDJKxqy8tUmTpPzLUN6+Ee+ztywCP3LQ8rlDXaPoWAsYbAEf4p2TaAu9PTpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IXMycSPCMoOkBk7CFnL74O37ftF1ESExFvr5Bqg0mxU=;
 b=s1v+xe6V4xa89bZ+XG7/a1/1i7GpxU0ONiQ5dk0Pwas4qtJwdSavRh7zErod5VxbCZpwNY6m5+rTRHHncZ3KeUwY/Qhs2f6Wxp3YNbFpVbsdYKeiQtM6ODsaEmRtDYwrJAp9xHi8GO8xC+CSuvbXPXDeYA6ZqzaMcQ4xG+NcGR8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 09/11] xen/arm: guest_walk: LPAE specific bits should be enclosed within "ifndef CONFIG_PHYS_ADDR_T_32"
Date: Thu, 18 May 2023 15:39:18 +0100
Message-ID: <20230518143920.43186-10-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT053:EE_|SA1PR12MB6895:EE_
X-MS-Office365-Filtering-Correlation-Id: d91d4848-3879-4e23-c422-08db57ae08d7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5ZSHmTx+EeRWpab+dReGyHtsuWWdQ5XcBVv9SyWuxsgmUAVUDTgD6OBmx4XUok+7htNOm0IiSYGueeTH0+enGRxsP3CiHYBnDxjy4LtSTqSO3wSHU1hF0gtR1xrt8Ifp2FRcIrMQ8L6TeeL38FeJeGAeI/eW11IC1nmwpFIN1rke6w/RsmK0hvaNIh+b8EoqZjqdRJTJoY8iu2Mcvw15G+Ta0k8W+0wU4pNw3FAlzf4GjDEurrf+NoZE0Bvl+GBhmfYG6iH7bx88ej4GhFOekRX7SI1YpvQ8kP9LgIJo5yP5uTZYBFvAufYEuKJFs5eXO7RJM/UqlPUKw5WZnMm5MTvX7yprEHr2CVg7i0ArhIU2P3aQFZ3+4MgJBo+K8i8YjYPlQko1+1waJEWAr2xUOa7Kr7DuY/PemyEmyeGL/WPk6a6LUWVoCrc4lZqjC9ByoOW9+o6vA1K/+h6CcEuMQ11ycDVSFChAlQdf4sKfhu6fm5TZTTmV7CpwtCv1TpIiazNa3gVa5RORwT+6LWuRuggagSM5ZConGr1fjrie+V4G++ODircD38WUmQr6HYQNOQJXL+hG9MmYs8BoyhOlnB1NgO4W3joiHG/FG8yFhCZuPBXbISQwd/7rS7Nd+6ngWLs77DoKw83VGGMaK3BwvArIbV141ZSKD/c0lFuh8dlkbhvm8UzlWR1bJthw+R3SzXagMWV3j3oA21ZBvksWauFRTXnR4MU12pofuCDpU6omN/6qhWvHthuwOnGZeocBLcbfNCGoRd3XutoTmXsqsw==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(136003)(39860400002)(346002)(451199021)(46966006)(40470700004)(36840700001)(6916009)(36756003)(4326008)(2906002)(7416002)(8676002)(426003)(336012)(5660300002)(86362001)(8936002)(40460700003)(41300700001)(70206006)(54906003)(6666004)(103116003)(70586007)(316002)(40480700001)(478600001)(356005)(2616005)(82740400003)(83380400001)(47076005)(1076003)(36860700001)(82310400005)(26005)(81166007)(186003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:41:58.5778
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d91d4848-3879-4e23-c422-08db57ae08d7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT053.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB6895

As the previous patch introduces CONFIG_PHYS_ADDR_T_32 to support 32-bit
physical addresses, the code specific to the Large Physical Address Extension
(i.e. LPAE) should be enclosed within "ifndef CONFIG_PHYS_ADDR_T_32".

Refer to xen/arch/arm/include/asm/short-desc.h, where "short_desc_l1_supersec_t"
contains:
unsigned int extbase1:4;    /* Extended base address, PA[35:32] */
unsigned int extbase2:4;    /* Extended base address, PA[39:36] */

Thus, extbase1 and extbase2 are not valid when only 32-bit physical addresses
are supported.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from -
v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".

v2 - 1. Reordered this patch so that it appears after CONFIG_ARM_PA_32 is
introduced (in 6/9).

v3 - 1. Updated the commit message.
2. Added Ack.

v4 - 1. No changes.

v5 - 1. No changes.

v6 - 1. No changes.

 xen/arch/arm/guest_walk.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index 43d3215304..c80a0ce55b 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -154,8 +154,10 @@ static bool guest_walk_sd(const struct vcpu *v,
             mask = (1ULL << L1DESC_SUPERSECTION_SHIFT) - 1;
             *ipa = gva & mask;
             *ipa |= (paddr_t)(pte.supersec.base) << L1DESC_SUPERSECTION_SHIFT;
+#ifndef CONFIG_PHYS_ADDR_T_32
             *ipa |= (paddr_t)(pte.supersec.extbase1) << L1DESC_SUPERSECTION_EXT_BASE1_SHIFT;
             *ipa |= (paddr_t)(pte.supersec.extbase2) << L1DESC_SUPERSECTION_EXT_BASE2_SHIFT;
+#endif /* CONFIG_PHYS_ADDR_T_32 */
         }
 
         /* Set permissions so that the caller can check the flags by herself. */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 14:47:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 14:47:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536426.834748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeuj-0004lb-Sj; Thu, 18 May 2023 14:47:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536426.834748; Thu, 18 May 2023 14:47:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzeuj-0004lO-O3; Thu, 18 May 2023 14:47:33 +0000
Received: by outflank-mailman (input) for mailman id 536426;
 Thu, 18 May 2023 14:47:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7bG=BH=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1pzepA-00074K-A9
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 14:41:48 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20617.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::617])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c61c6a2-f58a-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 16:41:46 +0200 (CEST)
Received: from MW4PR04CA0086.namprd04.prod.outlook.com (2603:10b6:303:6b::31)
 by SJ1PR12MB6361.namprd12.prod.outlook.com (2603:10b6:a03:455::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 14:41:42 +0000
Received: from CO1NAM11FT094.eop-nam11.prod.protection.outlook.com
 (2603:10b6:303:6b:cafe::cc) by MW4PR04CA0086.outlook.office365.com
 (2603:10b6:303:6b::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 14:41:42 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT094.mail.protection.outlook.com (10.13.174.161) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 14:41:42 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:41:39 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 09:41:39 -0500
Received: from xcbayankuma41x.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34
 via Frontend Transport; Thu, 18 May 2023 09:41:37 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c61c6a2-f58a-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mrjx0LnzcAsYrkxe9sr49t7CfQz8YRveGilrmREb3T55OTvXIfk5vftZcaklyT2Q2ClKmXRbDhLNLuMzmeTyjSN4q/uw5inXX/EahlDOP9nbF4NGBOBs0UERmmIdeVOrPub4wsYn680+hWnpT3hgGv1Nl2JBDNJCQgDFvc8Gj3Nk6sZYSjOR8UrJcjiibq2x3cGK6ff9pSqUpoLQcPlOGzwCHBvQLf5mygFqOYBlRsO9hjtCewBCLDk3sXUfGyNnvrZB7/9pTaW/weordR8uVXEsR7looC1B30BjfpiPiveSLf2LLD1eizJFGvE5ApTjO8JAVjr7O+LNzk9i3dKQcA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=V/cEYIxV/h5yAer+sUbKU2ootiXZ/Hl1Xk3NfMqs0ok=;
 b=cekbC89ycqQaL2NJBYBi4LxcMT1Qy3csY7+umXBul7zEJK5x3IEjB2IZrlTvhhYntwOJy+DRXtQKQphEFViGzJ9jh477uHWncmG0Fd2hEDFlH0SJPjuA/7QCJF71eIuuazUbveGCjL/pchw5RU9GJg/DijeXdNe1hwMkwE3nK7Cq+Y3ivviBlkVpDdvkNUZqxtMO8GMes405fE+iMijOwPsuhltuKsLp9k1xbXZZjfi31iIS+s8A97rI5VFwfjRZg07gIdaZxZUOq5ho6+K1Nwmv8I9Y7sMO+7wVf50TBomJYUSdhuCSXCUDbRdw57ln9BKIVEbuW4/ouvo5vt5fXA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V/cEYIxV/h5yAer+sUbKU2ootiXZ/Hl1Xk3NfMqs0ok=;
 b=Day8QmvpJRuveSsSIqt6QF20wWiSSMrhNXhkAv9+bBZ7wwhcLUmhjbr6ur5pXpcy9vaIJwnVBvBRmLP1AlW2ynVmYi95KhbiswJpb0ysK8d9uxU4YLgaRQsJslqGcFOppURWCJX2ZklpDL/ZEN682h22+rGzkibNmEoxlJ4iKN4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>, <michal.orzel@amd.com>, "Ayan Kumar
 Halder" <ayan.kumar.halder@amd.com>
Subject: [XEN v7 08/11] xen/arm: Introduce choice to enable 64/32 bit physical addressing
Date: Thu, 18 May 2023 15:39:17 +0100
Message-ID: <20230518143920.43186-9-ayan.kumar.halder@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT094:EE_|SJ1PR12MB6361:EE_
X-MS-Office365-Filtering-Correlation-Id: d53ad848-18ad-4aed-c475-08db57adff1f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2tPYg221YJj44gHiKTuNUNfbzlpM6cHHXVOHpWr1ssKv+gTnx5khAUCA7NvbWU3z/uK9TNLYNWH+C5EQP5FjYmN6ZWNIq1V0LaMhiuy6ec3es13eML0LPJ4egBmJ1QSZHOuHW4rpJt4Eeg9L4RjKy40MRJ5tMjQMwdSxD6y0AR2DV6olYZUgl58Luvn5T3XyqdBEiOPsC8cz3exlzgLy2EfDm2CmftG5ol8lPyVq+a3fWv/w6PN0QUaofWYXZah5RJANqAbytfLXOzbdBCcrpccX9IEN9M0LrJ/5LJo4ThFKaYya8jDOaeUnn71mO6xSTvPi9rnXcR7Ec5GYpH6aN78Mc+Zjwd310kN5YjMImjUBQMNW/GlkiV5AlthNqgiJPbgeyySM5Nr1BJmIFh/xSogj2GsOmaKm+7cEqHSEmvRVocJDfFcNCGfqoh6Gk0b13aZaZuI1XQfZZOfm/kEOwLw40ZsRHtKn/v/zJFK6mRxkRC3tspcFCbpBEKcnywKdBIE1OPbBO5wA/4JVrCvyq0kga2dZ6kuIMc9pEndUhb79f8I45TNvvU7gpAJMrK1glKMdQ144hiLNFaStCwa4jOuEIVsG4bS2nn/a8xiP5nNeL+inPtrGorSdlmw1D82nPakTnGM92Etl2PAJTK0I7V/smKWWmdIwtUa5gx7sIDhoce09X0mHZPC7diqbsR4y9t/k5XhEgNuxVrkhL5Yo0m5wnFSgqwwQ7AvayWMMaZd2hqKtnQASlEB4ZT8C7J3k80LOusOvgDsFJwovjk3yaQ==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(39860400002)(136003)(376002)(451199021)(40470700004)(46966006)(36840700001)(5660300002)(70206006)(70586007)(2906002)(8936002)(478600001)(316002)(8676002)(54906003)(6916009)(7416002)(4326008)(41300700001)(6666004)(40460700003)(26005)(1076003)(82740400003)(356005)(186003)(40480700001)(426003)(336012)(83380400001)(36756003)(36860700001)(47076005)(86362001)(82310400005)(2616005)(81166007)(103116003)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 14:41:42.1788
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d53ad848-18ad-4aed-c475-08db57adff1f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT094.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ1PR12MB6361

Some Arm-based hardware platforms which do not support LPAE (e.g. Cortex-R52)
use 32-bit physical addresses.
Also, users may choose to use 32 bits to represent physical addresses as an
optimization.

To support the above use cases, we have introduced an architecture-independent
config option to choose whether physical addresses are represented using
32 bits (PHYS_ADDR_T_32) or 64 bits (!PHYS_ADDR_T_32).
For now, only ARM_32 provides support for 32-bit physical addressing.

When PHYS_ADDR_T_32 is defined, PADDR_BITS is set to 32. Note that we use
"unsigned long" (not "uint32_t") as the datatype of physical addresses. This
avoids a cast each time the PAGE_* macros are used on paddr_t. On a 32-bit
architecture, "unsigned long" is 32 bits wide, so it can represent any
physical address.

When PHYS_ADDR_T_32 is not defined for ARM_32, PADDR_BITS is set to 40.
For ARM_64, PADDR_BITS is set to 48.
The last two match the configuration Xen uses today.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
---
Changes from -
v1 - 1. Extracted from "[XEN v1 8/9] xen/arm: Other adaptations required to support 32bit paddr".

v2 - 1. Introduced Kconfig choice. ARM_64 can select PHYS_ADDR_64 only whereas
ARM_32 can select PHYS_ADDR_32 or PHYS_ADDR_64.
2. For CONFIG_ARM_PA_32, paddr_t is defined as 'unsigned long'. 

v3 - 1. Allow user to define PADDR_BITS by selecting different config options
ARM_PA_BITS_32, ARM_PA_BITS_40 and ARM_PA_BITS_48.
2. Add the choice under "Architecture Features".

v4 - 1. Removed PHYS_ADDR_T_64 as !PHYS_ADDR_T_32 means PHYS_ADDR_T_64.

v5 - 1. Removed ARM_PA_BITS_48 as there is no choice for ARM_64.
2. In ARM_PA_BITS_32, "help" is moved to last, and "depends on" before "select".

v6 - 1. Explained why we use "unsigned long" to represent physical address
for ARM_32.

 xen/arch/Kconfig                     |  3 +++
 xen/arch/arm/Kconfig                 | 32 ++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/page-bits.h |  6 +-----
 xen/arch/arm/include/asm/types.h     | 13 +++++++++++
 xen/arch/arm/mm.c                    |  5 +++++
 5 files changed, 54 insertions(+), 5 deletions(-)

diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
index 7028f7b74f..67ba38f32f 100644
--- a/xen/arch/Kconfig
+++ b/xen/arch/Kconfig
@@ -1,6 +1,9 @@
 config 64BIT
 	bool
 
+config PHYS_ADDR_T_32
+	bool
+
 config NR_CPUS
 	int "Maximum number of CPUs"
 	range 1 4095
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..63f4d35dab 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -26,6 +26,38 @@ config ARCH_DEFCONFIG
 
 menu "Architecture Features"
 
+choice
+	prompt "Physical address space size" if ARM_32
+	default ARM_PA_BITS_40 if ARM_32
+	help
+	  User can choose to represent the width of physical address. This can
+	  sometimes help in optimizing the size of image when user chooses a
+	  smaller size to represent physical address.
+
+config ARM_PA_BITS_32
+	bool "32-bit"
+	depends on ARM_32
+	select PHYS_ADDR_T_32
+	help
+	  On platforms where any physical address can be represented within 32 bits,
+	  user should choose this option. This will help in reduced size of the
+	  binary.
+	  Xen uses "unsigned long" and not "uint32_t" to denote the datatype of
+	  physical address. This is done to avoid using a cast each time PAGE_*
+	  macros are used on paddr_t. On 32-bit architecture, "unsigned long" is
+	  32-bit wide. Thus, it can be used to denote physical address.
+
+config ARM_PA_BITS_40
+	bool "40-bit"
+	depends on ARM_32
+endchoice
+
+config PADDR_BITS
+	int
+	default 32 if ARM_PA_BITS_32
+	default 40 if ARM_PA_BITS_40
+	default 48 if ARM_64
+
 source "arch/Kconfig"
 
 config ACPI
diff --git a/xen/arch/arm/include/asm/page-bits.h b/xen/arch/arm/include/asm/page-bits.h
index 5d6477e599..deb381ceeb 100644
--- a/xen/arch/arm/include/asm/page-bits.h
+++ b/xen/arch/arm/include/asm/page-bits.h
@@ -3,10 +3,6 @@
 
 #define PAGE_SHIFT              12
 
-#ifdef CONFIG_ARM_64
-#define PADDR_BITS              48
-#else
-#define PADDR_BITS              40
-#endif
+#define PADDR_BITS              CONFIG_PADDR_BITS
 
 #endif /* __ARM_PAGE_SHIFT_H__ */
diff --git a/xen/arch/arm/include/asm/types.h b/xen/arch/arm/include/asm/types.h
index e218ed77bd..01d9d39e4b 100644
--- a/xen/arch/arm/include/asm/types.h
+++ b/xen/arch/arm/include/asm/types.h
@@ -34,9 +34,22 @@ typedef signed long long s64;
 typedef unsigned long long u64;
 typedef u32 vaddr_t;
 #define PRIvaddr PRIx32
+#if defined(CONFIG_PHYS_ADDR_T_32)
+
+/*
+ * We use "unsigned long" and not "uint32_t" to denote the type. This is done
+ * to avoid having a cast each time PAGE_* macros are used on paddr_t.
+ * On 32-bit architecture, "unsigned long" is 32-bit wide. Thus, we can use it
+ * to denote physical address.
+ */
+typedef unsigned long paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PRIpaddr "08lx"
+#else
 typedef u64 paddr_t;
 #define INVALID_PADDR (~0ULL)
 #define PRIpaddr "016llx"
+#endif
 typedef u32 register_t;
 #define PRIregister "08x"
 #elif defined (CONFIG_ARM_64)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 74f6ff2c6f..5ef5fd8c49 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -703,6 +703,11 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
     int rc;
 
+    /*
+     * The size of paddr_t should be sufficient for the complete range of
+     * physical address.
+     */
+    BUILD_BUG_ON((sizeof(paddr_t) * BITS_PER_BYTE) < PADDR_BITS);
     BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
 
     if ( frametable_size > FRAMETABLE_SIZE )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 16:02:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 16:02:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536449.834788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzg4i-0006aR-3S; Thu, 18 May 2023 16:01:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536449.834788; Thu, 18 May 2023 16:01:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzg4i-0006aK-0i; Thu, 18 May 2023 16:01:56 +0000
Received: by outflank-mailman (input) for mailman id 536449;
 Thu, 18 May 2023 16:01:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CvkS=BH=citrix.com=prvs=495754ba3=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pzg4h-0006In-5P
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 16:01:55 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d8768a7-f595-11ed-b22c-6b7b168915f2;
 Thu, 18 May 2023 18:01:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d8768a7-f595-11ed-b22c-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684425713;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=q2xGzEM6PVAM5NYwRSzGWmWCN2ifAS7zDZYHRybU8xs=;
  b=Q0es/77bDluoc6txRTfR6BktLj1m0FkHsI5DuwuuwAbiDY3EAxYNO9py
   SpYTZVXallU18wvi/+LgbsQKls4ohd/Z+sc9ddfE/8e97WKN5nxH9xL+S
   CnEnfCJmbt0cxNDW3cx5Zq39CWBYangy0L/6KaJUYO2yl7dwL5r62TWam
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109940866
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:PSjrz6OlcuU47EbvrR1El8FynXyQoLVcMsEvi/4bfWQNrUomgzUBm
 mUbDzvQOv6La2GnKohwao+0/B4A7JfSytY2GQto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGj9SuvPrRC9H5qyo42tF5wxmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0tp4DFsJ0
 fY2EQImXC6zq+uHn+ilavY506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLo3mvuogX/uNSVVsluPqYI84nTJzRw327/oWDbQUoXSFJQJxhbG+
 Aoq+UzgJxxFLcCB1wagqEmguuPykX/jXLANQejQGvlC3wTImz175ActfUW6ouOwjwixUshfN
 EUQ0iMroe4580nDZsnwWVi0rWCJujYYWsFMCKsq5QeV0K3W7g2FQG8eQVZpc8c6vcU7QTgr0
 F6hnN7zAzFr9rqPRhq187afrTq2fy8PP2IGTSYBQU0O5NyLiJ43pgLCSJBkCqHdpt/6Azbr2
 BiRsTMzwb4UiKYj3r2251ndjxqwp5LCSUg+4QC/Y46+xlonPsj/PdXusAWFq68ad+51U2Vto
 lA4lMWkq8QEI6i/vx6zcMkELe7z+daaZWi0bUFUI7Et8DGk+niGdI9W4S1jKEoBDvvoaQMFc
 2eI51oPucY70G+CKPYuPtnvU5hCIb3IT4yNaxzCUjZZjnGdniej9TomW0Of1nuFfKMEwfBmY
 sfznSpB4B8n5UVbINieHb91PVwDnHpWKYbvqXfTkXyaPUK2PiL9dFv8GALmghoFxK2Fuh7J1
 N1UKtGHzR5SOMWnPHmLqtJDcAlScSFjbXwTlyCwXr/rH+abMDt5V6+5LU0JIOSJYJi5Zs+Xp
 yrgCye0OXL0hGHdKBXiV02PnIjHBM4lxVpiZHxEALpd8yR7CWpZxPtFJsRfkHhO3LAL8MOYu
 NFZIpzaWqwSF2SWk9nfBLGkxLFfmN2QrVrmF0KYjPIXJvaMmyShFgfYQzbS
IronPort-HdrOrdr: A9a23:y23s2a0kIKgLiV8WPZILcwqjBLwkLtp133Aq2lEZdPU1SKClfq
 WV98jzuiWatN98Yh8dcLK7WJVoMEm8yXcd2+B4V9qftWLdyQiVxe9ZnO7f6gylNyri9vNMkY
 dMGpIObOEY1GIK7/rH3A==
X-Talos-CUID: 9a23:KOs9lG32l3g5SUZ9amL0sbxfJs8oSDr5xlnpEWSKN1ZFQYzKE0Gw5/Yx
X-Talos-MUID: 9a23:fE1x4wRkyQVTit71RXT1oTw9Ldt2/ZiLUnsBrZk7t5WcMR5vbmI=
X-IronPort-AV: E=Sophos;i="5.99,285,1677560400"; 
   d="scan'208";a="109940866"
Date: Thu, 18 May 2023 17:01:47 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>
CC: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<michal.orzel@amd.com>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: Re: [XEN][PATCH v6 18/19] tools/libs/light: Implement new libxl
 functions for device tree overlay ops
Message-ID: <009387b9-4a51-4731-bb13-c1b388860a88@perard>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-19-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230502233650.20121-19-vikram.garhwal@amd.com>

On Tue, May 02, 2023 at 04:36:49PM -0700, Vikram Garhwal wrote:
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 18 16:02:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 16:02:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536448.834778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzg4V-0006J0-P9; Thu, 18 May 2023 16:01:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536448.834778; Thu, 18 May 2023 16:01:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzg4V-0006It-Lu; Thu, 18 May 2023 16:01:43 +0000
Received: by outflank-mailman (input) for mailman id 536448;
 Thu, 18 May 2023 16:01:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CvkS=BH=citrix.com=prvs=495754ba3=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pzg4U-0006In-Ic
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 16:01:42 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 455115db-f595-11ed-b22c-6b7b168915f2;
 Thu, 18 May 2023 18:01:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 455115db-f595-11ed-b22c-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684425699;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=cuHFcbnouck4+Y5DtNKw2g6vnG+PHUITkuDdurVMx30=;
  b=VJRvURBM6U4Bmr/P1j5KhxXZY/X+frjJFYNUdiuj7TSZpsbMJr3MJpwD
   wOLLxzqj22JqzX1pWyVTVmqSQxhPXJSqJxnGmF1E1Ci6CW8KeTfHDzHtR
   6KVnnJb+ryZzAKsqt6dLWogjA2C7qSWkOvnnP/iV0H77BhszPWCW7s6C4
   o=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108303028
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:dn43nKoA6Ss0E1egx29LsGrAXDVeBmL5ZRIvgKrLsJaIsI4StFCzt
 garIBmObPreZ2vyLt10OYq08UICscTWnNVkSFdkqilhEiwV8puZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKm06WJwUmAWP6gR5weDzilNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAAhQVhGAoOuR+aqidupn3vUDdfXoJLpK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVRrk6VoqwmpXDe1gVr3JDmMcbPe8zMTsJQ9qqdj
 juerz+hWUlAZLRzzxLazXCCp7fVxhinc4kOGL+eqfhQo2OMkzl75Bo+CgLg/KjRZlSFc8xeK
 FYd/2whpLIy60WvZtD4U1uzp3vslg4RXZ9cHvM37CmJy7HI+ECJC24cVDlDZdc68sgsSlQCz
 USVltnkAThutry9Sn+H8LqQ6zSoNkA9KG4JZSYACwwf8dTniIg2glTESdMLOLGxps34H3f32
 T/ihCoxnbIIluYQyr62u1vAhlqEr4DEVAcv6i3LX2iu6UVyY4vNWmCzwQGFt7Aadt/fFwTf+
 iFewKBy8dziE7m1yRSMQ8IEO4ix/sqGO2XtoQdTHIEYomHFF2GYQahc5zR3JUFMO8kCeCP0b
 EK7hT699KO/L1PxM/YpPtvZ59ACiPG5SI+7Dqy8gs9mOMAZSeORwM15iad8NUjJmVNkr6wwM
 IzznS2EXSdDUvQPINZbqo4gPV4XKsIWnzu7qXPTlU7PPV+iiJm9F9843KOmNLxR0U99iFy9H
 yxjH8WL0Q5Dd+b1fzPa94UeRXhTcyhnXsym9pUPKrfbSuaDJI3GI665/F/cU9Y9w/Q9ehngp
 RlRpXO0OHKg3CaaeG1mm1hoaa/1XIYXkE/XyRcEZA7ys1B6ONbH0UvqX8dvFVXR3LA5nKEco
 jhsU5noP8mjvRybom1HNcGg8dwzHPlp7CrXVxeYjPEEV8YIb2T0FhXMJ2MDKAFm4vKLiPYD
IronPort-HdrOrdr: A9a23:low69qlbYIu12nZmmxgBE8NO7k/pDfIo3DAbv31ZSRFFG/Fw8P
 re+8jztCWE7Ar5PUtKpTnuAsW9qB/nmqKdgrNwAV7BZmfbUQKTRekJgLcKqAeAJ8SRzJ8+6Y
 5QN4R4Fd3sHRxboK/BkWyF+g8bsbq6zJw=
X-Talos-CUID: 9a23:rTmZwWFcc59NQDziqmJfqXYVGecsVUTF1UvTKBeJKT9VS6WsHAo=
X-Talos-MUID: 9a23:NSNSrQTdjTQfJCnGRXSzuzNMFZ9yuJ/yI0sRnZcg5M++FhJvbmI=
X-IronPort-AV: E=Sophos;i="5.99,285,1677560400"; 
   d="scan'208";a="108303028"
Date: Thu, 18 May 2023 17:01:28 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>
CC: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<michal.orzel@amd.com>, Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>
Subject: Re: [XEN][PATCH v6 17/19] tools/libs/ctrl: Implement new xc
 interfaces for dt overlay
Message-ID: <8a78ce55-a677-4006-b04b-8fc600b3f75e@perard>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-18-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230502233650.20121-18-vikram.garhwal@amd.com>

On Tue, May 02, 2023 at 04:36:48PM -0700, Vikram Garhwal wrote:
> xc_dt_overlay() sends the device tree binary overlay, the size of the
> .dtbo, and the overlay operation type (i.e. add or remove) to Xen.
> 
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 18 16:02:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 16:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536450.834797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzg4z-00071z-BC; Thu, 18 May 2023 16:02:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536450.834797; Thu, 18 May 2023 16:02:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzg4z-00071s-8D; Thu, 18 May 2023 16:02:13 +0000
Received: by outflank-mailman (input) for mailman id 536450;
 Thu, 18 May 2023 16:02:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CvkS=BH=citrix.com=prvs=495754ba3=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1pzg4y-0006xf-1V
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 16:02:12 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 56a52496-f595-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 18:02:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56a52496-f595-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684425729;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=RJjMOipmfKuyICk91c6ZWe8Uz+qr7h8zmhitlprumOg=;
  b=EP9kE8kSDxTdvTNgMivC45C/8Kwi6TpW2JoiERglNjvaw1PxUxIQMpRw
   aDHZHV9NbnfAkGErSBafYtA7Heyhz2qzM/vBKV8qc33iV5wJZazccJlFo
   U9IqjtrF38Rr40FfneeylCKKHLHvYcXO3IZfnJtP6csbU6xXLisUC4nEM
   w=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109429920
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:JpsXya4go2v6qEXt8K5yfwxRtO7HchMFZxGqfqrLsTDasY5as4F+v
 mIaWG7TPPrZYDemLtwiaYy/phxTvZPdnYRqSwE4rSg0Hi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9VU+7ZwehBtC5gZlPa0S7AeE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m0
 P0iMSACcUy/2LyLz5zrcbdSjMAzM5y+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrnD5bz1frkPTvact6nLf5AdwzKLsIJzefdniqcB9xx7J+
 jiXrj6hav0cHNmR5AKv3FCqv7T0ggylU6hKEJua0fE/1TV/wURMUUZLBDNXu8KRlE+9Qdtab
 UMd4CoxpKwa/UmnCNL6WnWQsHOC+xIRRddUO+k78x2WjLrZ5R6DAWoJRSIHb8Yp3OctXiAj3
 FKNm9LvBBRsvaeTRHbb8a2bxRu3OCMVJGtEYjUWQA8t6tzv5oo0i3rnUdJLAKOzyNrvFlnYy
 iiHtiEki50PjMQA0OOw+lWvvt63jsGXFEhvvFyRBz/7qFojP+ZJerBE93D+7MxZPImGEmXe4
 kULtZilst4ECYGSwXnlrPo2IJml4POMMTv5iFFpHoU8+znFx0NPbby88xkleh43b59slSvBJ
 RaK5FgPvMM70G6CN/cfXm6nNyg9IUEM//zBX+ucUNdBa4MZmOSvrHA3Ph74M4wAfSERfUAD1
 XWzK57E4ZUyU/4PIN+KqwA1j9cWKtgWnz+7eHwC503PPUCiTHCUU6wZF1CFc/o06qiJyC2Mr
 YYDbJfalkwHDLSmCsUyzWL0BQFiEJTGLcqu95w/mhCreWKK513N+9eOmOh8KuSJboxel/vS/
 2HVZ3K0PGHX3CWdQS3TMyALVV8adconxZ7NFXB2bAnANrlKSdrH0ZrzgLNrJOB9rLQ+kK4lJ
 xTHEu3Zaslypv3802x1RfHAQEZKL3xHWSrm0/KZXQUC
IronPort-HdrOrdr: A9a23:vPx2SK7N3J7BVCuYswPXwMbXdLJyesId70hD6qkRc3Bom6mj/P
 xG88516faZslgssRMb+exoSZPgfZq0z/cci+Qs1NyZLWrbUQWTXeRfxLqn7zr8GzDvss5xvJ
 0QF5SW0eeAb2RHsQ==
X-Talos-CUID: 9a23:t5ZgmW55Ush0crHkTtss0U8uA9AqbSHhkyntLGHoMjhXQrmFVgrF
X-Talos-MUID: 9a23:mpd9BAY2z5JYhOBTsjHBvBtMFsFRuJ+xJx4Gu80CneujOnkl
X-IronPort-AV: E=Sophos;i="5.99,285,1677560400"; 
   d="scan'208";a="109429920"
Date: Thu, 18 May 2023 17:01:57 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Vikram Garhwal <vikram.garhwal@amd.com>
CC: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<michal.orzel@amd.com>, Wei Liu <wl@xen.org>
Subject: Re: [XEN][PATCH v6 19/19] tools/xl: Add new xl command overlay for
 device tree overlay support
Message-ID: <b5d77faa-e20d-4648-b900-8474e3574844@perard>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-20-vikram.garhwal@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230502233650.20121-20-vikram.garhwal@amd.com>

On Tue, May 02, 2023 at 04:36:50PM -0700, Vikram Garhwal wrote:
> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 18 16:54:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 16:54:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536467.834808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzgtj-0004To-6t; Thu, 18 May 2023 16:54:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536467.834808; Thu, 18 May 2023 16:54:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzgtj-0004Th-3o; Thu, 18 May 2023 16:54:39 +0000
Received: by outflank-mailman (input) for mailman id 536467;
 Thu, 18 May 2023 16:54:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzgth-0004TX-Sw; Thu, 18 May 2023 16:54:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzgth-00070h-JC; Thu, 18 May 2023 16:54:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzgth-0005wq-1l; Thu, 18 May 2023 16:54:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzgth-00013g-1L; Thu, 18 May 2023 16:54:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AQkqy1YGBFXYkU+uTvjsCfRjqofrQmN/ZMd/ThxWscU=; b=MyvkPsR5N8xrvwOPO9hrP+U8Fj
	kWokblXUgK6LVat/9rE1oszYHMnRFEB6qKxSnmgBcxeXj9v+Ce8slDvQ3bmp9VOAl8VCvEXmQznoq
	Sw6DtBokPzuUpDkaEF04BoBNQ3n3Xt5Ceho0nLy0f1jhb3rTb3uHekN8lBWL1BlLwb1U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180696-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180696: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=42abf5b9c53eb1b1a902002fcda68708234152c3
X-Osstest-Versions-That:
    xen=42abf5b9c53eb1b1a902002fcda68708234152c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 May 2023 16:54:37 +0000

flight 180696 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180696/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180689
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180689
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180689
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180689
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180689
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180689
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180689
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180689
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180689
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180689
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180689
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180689
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  42abf5b9c53eb1b1a902002fcda68708234152c3
baseline version:
 xen                  42abf5b9c53eb1b1a902002fcda68708234152c3

Last test of basis   180696  2023-05-18 01:51:57 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu May 18 17:01:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 17:01:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536465.834824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzh05-00062t-8E; Thu, 18 May 2023 17:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536465.834824; Thu, 18 May 2023 17:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzh05-00062m-2w; Thu, 18 May 2023 17:01:13 +0000
Received: by outflank-mailman (input) for mailman id 536465;
 Thu, 18 May 2023 16:53:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tUbc=BH=intel.com=aleksander.lobakin@srs-se1.protection.inumbo.net>)
 id 1pzgsx-0004SQ-Fk
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 16:53:51 +0000
Received: from mga05.intel.com (mga05.intel.com [192.55.52.43])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8be6dc9b-f59c-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 18:53:45 +0200 (CEST)
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 May 2023 09:53:41 -0700
Received: from fmsmsx601.amr.corp.intel.com ([10.18.126.81])
 by orsmga008.jf.intel.com with ESMTP; 18 May 2023 09:53:40 -0700
Received: from fmsmsx612.amr.corp.intel.com (10.18.126.92) by
 fmsmsx601.amr.corp.intel.com (10.18.126.81) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 18 May 2023 09:53:39 -0700
Received: from fmsmsx610.amr.corp.intel.com (10.18.126.90) by
 fmsmsx612.amr.corp.intel.com (10.18.126.92) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 18 May 2023 09:53:39 -0700
Received: from fmsedg601.ED.cps.intel.com (10.1.192.135) by
 fmsmsx610.amr.corp.intel.com (10.18.126.90) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Thu, 18 May 2023 09:53:39 -0700
Received: from NAM02-DM3-obe.outbound.protection.outlook.com (104.47.56.46) by
 edgegateway.intel.com (192.55.55.70) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Thu, 18 May 2023 09:53:39 -0700
Received: from DM6PR11MB3625.namprd11.prod.outlook.com (2603:10b6:5:13a::21)
 by MN2PR11MB4693.namprd11.prod.outlook.com (2603:10b6:208:261::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 16:53:37 +0000
Received: from DM6PR11MB3625.namprd11.prod.outlook.com
 ([fe80::64d9:76b5:5b43:1590]) by DM6PR11MB3625.namprd11.prod.outlook.com
 ([fe80::64d9:76b5:5b43:1590%2]) with mapi id 15.20.6411.019; Thu, 18 May 2023
 16:53:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8be6dc9b-f59c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1684428825; x=1715964825;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=LIf4f/2ILzMdE7QYHyioK8PC0ClFgzFfLjK3Dby2vAE=;
  b=Qbm03zYuVZF674UEcdn5eaHHdvPRsKPBA5KUhjhQTTfWkproY0pHRYKD
   Avrkp2MujlLZbV5R7xIpd4xh/DygPGRM+gxn8iwdVZ8hWvLR0J/AnvaGw
   2KbdTYN2AHQTPUVE2ni6JMFtSw6tpd5QoLf+Y4VwyIPkx1jvw/cEM9my7
   oOSUfH06V3Q5DVHy5lM959FpcH/LQV528k2mJMAc0traEASwkb7IPhe+H
   P7pi1SRvafQy8RtjKqXLFB6Sox775Id2nIbj9aeYuYM1Ju90KQw6znBkr
   DqAd2LFll6QPfdxKjMKf54RJg1tbgNozS9G6gxjfzyU4vp1/TXDx/mb7U
   A==;
X-IronPort-AV: E=McAfee;i="6600,9927,10714"; a="438468371"
X-IronPort-AV: E=Sophos;i="6.00,174,1681196400"; 
   d="scan'208";a="438468371"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10714"; a="732963028"
X-IronPort-AV: E=Sophos;i="6.00,285,1681196400"; 
   d="scan'208";a="732963028"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oTYS52vneRMF6B1SJtknomY8kON6EmrYXkIKJSOp+eIOu1hSkR99FazmKzgsz08dfSlNcDeAurcFI5C7T+gDjWNlycqVPz9fWzAWZHLTzVaW4O7qs78yyB1mKQQcLey/Kg0s8HKwgxSGnJE9KSQAc3t7vfxQAIKstm2dFgpTJrMKAGH6C75n6ge9SSbt+ViPw3re2EVgnqrGegltD4YCZh8ZzgLmOHKvJwwst+0EtcmuaijG1X2GdVmc9MODHPjtsQGLzYCxardnvwffPP84GfY+I6crKdD/ag8BrJJOZMfRcJaJWwMtuN1ywuijdzAVHpJMwtqC8wCoV8gZ1OugBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fiNVHMRZRgjKJsPlgxcLC7Vv43LhrmjaOmxRpNWnEm4=;
 b=kCtYhAvf9e17UsyTpGivThsh3pfaCbkGQPeJ44D0A1UTIwAq3yk3YevgX4aldGnJqE01iQH4G8mdee80Byvm0SXTIdLCNIL0sM/GJdMdTSn8J5msjIsrzpvvrXCoCdPtS8mefTqqgIYfJXgLbZKHf82Klg/Z6YoaOiaf6IBLj3yKtV1J3SOU3AL6PqONHcpfdMAUV42B3I/4wFsnDyPSHC0cgikUtIzXcwGRPDDRFfx3Kp7Z61nidFCIXz4yx8hVaQYfX60jzQafQRwA7sFD0OtUoZOUquVnk4uNgx/iB6FxQrJC8T/A+yl0F22hNTOQJG7UhfFmVsDbxTC2W2adzw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
Message-ID: <996ee80f-f3d0-d0f5-d045-4636c35ce9b2@intel.com>
Date: Thu, 18 May 2023 18:52:34 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 05/20] x86: head: add dummy prototype for
 mk_early_pgtbl_32
Content-Language: en-US
To: Arnd Bergmann <arnd@kernel.org>
CC: <x86@kernel.org>, Arnd Bergmann <arnd@arndb.de>, Thomas Gleixner
	<tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov
	<bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin"
	<hpa@zytor.com>, Andy Lutomirski <luto@kernel.org>, Steven Rostedt
	<rostedt@goodmis.org>, Masami Hiramatsu <mhiramat@kernel.org>, Mark Rutland
	<mark.rutland@arm.com>, Juergen Gross <jgross@suse.com>, "Srivatsa S. Bhat
 (VMware)" <srivatsa@csail.mit.edu>, Alexey Makhalov <amakhalov@vmware.com>,
	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>, Peter Zijlstra
	<peterz@infradead.org>, Darren Hart <dvhart@infradead.org>, Andy Shevchenko
	<andy@infradead.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, "Rafael
 J. Wysocki" <rafael@kernel.org>, <linux-kernel@vger.kernel.org>,
	<linux-trace-kernel@vger.kernel.org>,
	<virtualization@lists.linux-foundation.org>, <linux-pci@vger.kernel.org>,
	<platform-driver-x86@vger.kernel.org>, <xen-devel@lists.xenproject.org>,
	<linux-pm@vger.kernel.org>, <linux-mm@kvack.org>
References: <20230516193549.544673-1-arnd@kernel.org>
 <20230516193549.544673-6-arnd@kernel.org>
From: Alexander Lobakin <aleksander.lobakin@intel.com>
In-Reply-To: <20230516193549.544673-6-arnd@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0072.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::23) To DM6PR11MB3625.namprd11.prod.outlook.com
 (2603:10b6:5:13a::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6PR11MB3625:EE_|MN2PR11MB4693:EE_
X-MS-Office365-Filtering-Correlation-Id: f5c355bb-406c-4694-ca43-08db57c06c49
X-LD-Processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-MS-Exchange-CrossTenant-Network-Message-Id: f5c355bb-406c-4694-ca43-08db57c06c49
X-MS-Exchange-CrossTenant-AuthSource: DM6PR11MB3625.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 16:53:36.8257
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: unuITrb1NbgY33qlq+dkhVF/KvZ2G37sJ7EbviQNjPmyDmWyOJCWsTe6Kij8lrVqTeVDjiYnTOvNp7pOF86IQa9d3Bs4jBqN59xhZTWv/54=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR11MB4693
X-OriginatorOrg: intel.com

From: Arnd Bergmann <arnd@kernel.org>
Date: Tue, 16 May 2023 21:35:34 +0200

> From: Arnd Bergmann <arnd@arndb.de>
> 
> 'make W=1' warns about a function without a prototype in the x86-32 head code:
> 
> arch/x86/kernel/head32.c:72:13: error: no previous prototype for 'mk_early_pgtbl_32' [-Werror=missing-prototypes]
> 
> This is called from assembler code, so it does not actually need a prototype.
> I could not find an appropriate header for it, so just declare it in front
> of the definition to shut up th warning.

                               ^^
                               the :p

> 
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>

I'd say, for the whole series:

Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>

BUT apart from the Xen part, it's all black magic rituals to me :D

Thanks,
Olek


From xen-devel-bounces@lists.xenproject.org Thu May 18 17:01:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 17:01:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535914.834818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzh04-0005zW-U9; Thu, 18 May 2023 17:01:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535914.834818; Thu, 18 May 2023 17:01:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzh04-0005zP-QI; Thu, 18 May 2023 17:01:12 +0000
Received: by outflank-mailman (input) for mailman id 535914;
 Wed, 17 May 2023 12:47:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5JRk=BG=linux.microsoft.com=madvenka@srs-se1.protection.inumbo.net>)
 id 1pzGYw-0007bO-IN
 for xen-devel@lists.xenproject.org; Wed, 17 May 2023 12:47:26 +0000
Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id f8f14aac-f4b0-11ed-b229-6b7b168915f2;
 Wed, 17 May 2023 14:47:25 +0200 (CEST)
Received: from [192.168.254.32] (unknown [47.186.50.133])
 by linux.microsoft.com (Postfix) with ESMTPSA id D164820F069A;
 Wed, 17 May 2023 05:47:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8f14aac-f4b0-11ed-b229-6b7b168915f2
DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com D164820F069A
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com;
	s=default; t=1684327643;
	bh=LhIkPpJhPvIClwJ39pJZmrIh6zoXx6jE68xfB8W+VWs=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=mumhyWRm4PGisYlV+6k79BLPaamhkccyeiIlh7D48PRTzPjbzjxh2vVIF5IMgB/kg
	 ffvKm60yYk2+lOntvrropzUTEW8F4BOwI+8ppjyBg+mNGMVj/ubDGGKoWDypVfUwG4
	 iLdY81ZYifmsEzXVwEDBYtRVqeqDXr8NLjfUncbw=
Message-ID: <e8fcc1b8-6c0f-9556-a110-bd994d3fe3c6@linux.microsoft.com>
Date: Wed, 17 May 2023 07:47:20 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v1 3/9] virt: Implement Heki common code
To: Wei Liu <wei.liu@kernel.org>, =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?=
 <mic@digikod.net>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H . Peter Anvin" <hpa@zytor.com>,
 Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>,
 Paolo Bonzini <pbonzini@redhat.com>, Sean Christopherson
 <seanjc@google.com>, Thomas Gleixner <tglx@linutronix.de>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 James Morris <jamorris@linux.microsoft.com>,
 John Andersen <john.s.andersen@intel.com>, Liran Alon
 <liran.alon@oracle.com>, Marian Rotariu <marian.c.rotariu@gmail.com>,
 =?UTF-8?Q?Mihai_Don=c8=9bu?= <mdontu@bitdefender.com>,
 =?UTF-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 =?UTF-8?Q?=c8=98tefan_=c8=98icleru?= <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-4-mic@digikod.net>
 <ZFkxhWhjyIzrPkt8@liuwe-devbox-debian-v2>
Content-Language: en-US
From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
In-Reply-To: <ZFkxhWhjyIzrPkt8@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Sorry for the delay. See inline...

On 5/8/23 12:29, Wei Liu wrote:
> On Fri, May 05, 2023 at 05:20:40PM +0200, Mickaël Salaün wrote:
>> From: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
>>
>> Hypervisor Enforced Kernel Integrity (Heki) is a feature that will use
>> the hypervisor to enhance guest virtual machine security.
>>
>> Configuration
>> =============
>>
>> Define the config variables for the feature. This feature depends on
>> support from the architecture as well as the hypervisor.
>>
>> Enabling HEKI
>> =============
>>
>> Define a kernel command line parameter "heki" to turn the feature on or
>> off. By default, Heki is on.
> 
> For such a newfangled feature can we have it off by default? Especially
> when there are unsolved issues around dynamically loaded code.
> 

Yes. We can certainly do that.

>>
> [...]
>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>> index 3604074a878b..5cf5a7a97811 100644
>> --- a/arch/x86/Kconfig
>> +++ b/arch/x86/Kconfig
>> @@ -297,6 +297,7 @@ config X86
>>  	select FUNCTION_ALIGNMENT_4B
>>  	imply IMA_SECURE_AND_OR_TRUSTED_BOOT    if EFI
>>  	select HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
>> +	select ARCH_SUPPORTS_HEKI		if X86_64
> 
> Why is there a restriction on X86_64?
> 

We want to get the PoC working and reviewed on x86-64 first; so far we have
tested it only on x86-64.

>>  
>>  config INSTRUCTION_DECODER
>>  	def_bool y
>> diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/asm/sections.h
>> index a6e8373a5170..42ef1e33b8a5 100644
>> --- a/arch/x86/include/asm/sections.h
>> +++ b/arch/x86/include/asm/sections.h
> [...]
>>  
>> +#ifdef CONFIG_HEKI
>> +
>> +/*
>> + * Gather all of the statically defined sections so heki_late_init() can
>> + * protect these sections in the host page table.
>> + *
>> + * The sections are defined under "SECTIONS" in vmlinux.lds.S
>> + * Keep this array in sync with SECTIONS.
>> + */
> 
> This seems a bit fragile, because it requires constant attention from
> people who care about this functionality. Can this table be
> automatically generated?
> 

We realize that, but I don't know of a way this table can be generated
automatically. Also, the permissions for each section are specific to how that
section is used; the developer who introduces a new section is the one who
knows what its permissions should be.

If anyone has ideas on how we could generate this table automatically, or even
just add a build-time check of some sort, please let us know.

Thanks.

Madhavan

> Thanks,
> Wei.
> 
>> +struct heki_va_range __initdata heki_va_ranges[] = {
>> +	{
>> +		.va_start = _stext,
>> +		.va_end = _etext,
>> +		.attributes = HEKI_ATTR_MEM_NOWRITE | HEKI_ATTR_MEM_EXEC,
>> +	},
>> +	{
>> +		.va_start = __start_rodata,
>> +		.va_end = __end_rodata,
>> +		.attributes = HEKI_ATTR_MEM_NOWRITE,
>> +	},
>> +#ifdef CONFIG_UNWINDER_ORC
>> +	{
>> +		.va_start = __start_orc_unwind_ip,
>> +		.va_end = __stop_orc_unwind_ip,
>> +		.attributes = HEKI_ATTR_MEM_NOWRITE,
>> +	},
>> +	{
>> +		.va_start = __start_orc_unwind,
>> +		.va_end = __stop_orc_unwind,
>> +		.attributes = HEKI_ATTR_MEM_NOWRITE,
>> +	},
>> +	{
>> +		.va_start = orc_lookup,
>> +		.va_end = orc_lookup_end,
>> +		.attributes = HEKI_ATTR_MEM_NOWRITE,
>> +	},
>> +#endif /* CONFIG_UNWINDER_ORC */
>> +};
>> +


From xen-devel-bounces@lists.xenproject.org Thu May 18 17:01:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 17:01:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.535315.834838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzh0J-0006bm-Ow; Thu, 18 May 2023 17:01:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 535315.834838; Thu, 18 May 2023 17:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzh0J-0006ba-K2; Thu, 18 May 2023 17:01:27 +0000
Received: by outflank-mailman (input) for mailman id 535315;
 Tue, 16 May 2023 15:15:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2FW8=BF=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1pywOM-0004qr-A9
 for xen-devel@lists.xenproject.org; Tue, 16 May 2023 15:15:11 +0000
Received: from mail-pf1-x435.google.com (mail-pf1-x435.google.com
 [2607:f8b0:4864:20::435])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70958a3e-f3fc-11ed-8611-37d641c3527e;
 Tue, 16 May 2023 17:15:07 +0200 (CEST)
Received: by mail-pf1-x435.google.com with SMTP id
 d2e1a72fcca58-643465067d1so10493422b3a.0
 for <xen-devel@lists.xenproject.org>; Tue, 16 May 2023 08:15:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70958a3e-f3fc-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684250106; x=1686842106;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=Uimow1co1NIkZ3BdMbyKNXUqyAN2/nqWi2XjMGAiT3Y=;
        b=IwyCbbiKy0xdtTP727Uve9Je8FPdxHdjNxRMtN5aUaxSp99DJdDhN6ok/0+73KRzX/
         ztij6RMrj39sbsm6c5slHH36xr8gEP2bO4cetJwBMjpSRugcB8t0iq+FzBdvo5CUf/UB
         mw3zsEqgXomZjkwpnLzLOiIlGdlE5x5h9uFH6SXLVbOhoChQmkxdLPn+KN3KTZihTYM5
         yXvKQG9eun0aUsiWzhdYEaVRWTtIUCcDTRzcUqk+oHQJC6g/a0DKkWhADHUMwPbn5I0/
         J547ysranMlNjNduj3JeEsS2D7EWHAahH+iD8Qu49Sa5E53Mr3WSD1tjQQNwThaqpCT5
         RMUA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684250106; x=1686842106;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=Uimow1co1NIkZ3BdMbyKNXUqyAN2/nqWi2XjMGAiT3Y=;
        b=VnY8avIbNJn4tPbuLi3md16BC0FcPe9P/dDuNKkfDlr8zh1gr6OASwxY9q5mmB+YT/
         vBXzIPiZN9iuehyuDEGTkdlr6egQ7hkViouLCaAKKe8CXRPUXIX+z224ubWgPHgDrhoZ
         Wf7S49E34P8KVeuAutcITrGqf3FKmt0MFYGQJTPG5q2vYeqwv2HzLlBczuHHg4sPSnwy
         DnFzZtBil6bV+YX/w+Wv26VpueTFlpf+cPiyy34nXthn1+vj8kpkV655M/ns29xoVs7j
         pQWy/tE8qDIiA06kt3pUO7SDMD5TqKLi2djYsbPYi/y9S4yLdRpjuT8A9Kf0aNYfZusq
         zYyA==
X-Gm-Message-State: AC+VfDzBaFOs36/OYD0iGJSS9TSS6I+WiS7OH5BtutxLMWbjIuRznAr7
	S/PSNi+RCUdHGhsxdeo6LiIj/tg6Y08szL/G2bCmvjOjggfEwA==
X-Google-Smtp-Source: ACHHUZ69GFBoTpApyNgUPnSHogX0NrmW9hgrj0pnBddtG5m10vMFL9HvdwOeu274QHqMBI2yyI9/+pbkuKJmPFhxrNY=
X-Received: by 2002:a05:6a00:2392:b0:645:ac97:5295 with SMTP id
 f18-20020a056a00239200b00645ac975295mr38826906pfc.9.1684250105182; Tue, 16
 May 2023 08:15:05 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <alpine.DEB.2.22.394.2304251120530.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2vWP76fxNS3wCWumNFSBd9knVmdSdStsfRpfOr1iQQw+A@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com> <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
 <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com> <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
 <CA+SAi2tErcaAkRT5zhTwSE=-jszwAWNtEAnm5jNGEP1NoqbQ3w@mail.gmail.com>
 <53af4bc6-97ad-d806-003b-38e70ccd2b58@amd.com> <CA+SAi2vrB714Tc9dn4SbtDo3VrT3Q8OpSiFXRLRaO5=0BJo_rQ@mail.gmail.com>
 <f0e6ca10-2142-7c39-0c7b-042c454e7e08@amd.com>
In-Reply-To: <f0e6ca10-2142-7c39-0c7b-042c454e7e08@amd.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Tue, 16 May 2023 18:14:49 +0300
Message-ID: <CA+SAi2tCVDiQ1BLdvuH2XnvTDGDCnPBDCq70AVbsO+TZKMERSw@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com

Hi guys,

Thanks Michal.

So if I have more RAM, it is possible to increase the color size?

For example, 8GB/16 is approximately 512MB.
Is this correct?
Regards,
Oleg
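
A minimal sketch of that arithmetic (assuming, as in the replies quoted below, that each color simply gets an equal share of DRAM):

```python
GiB = 1 << 30
MiB = 1 << 20

def color_size(dram_bytes: int, num_colors: int) -> int:
    """Approximate memory available per cache color: DRAM / number of colors."""
    return dram_bytes // num_colors

# 4 GiB with 16 colors (e.g. Cortex-A53) -> 256 MiB per color
print(color_size(4 * GiB, 16) // MiB)  # 256
# 8 GiB with the same 16 colors -> 512 MiB per color
print(color_size(8 * GiB, 16) // MiB)  # 512
```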

Tue, 16 May 2023 at 17:40, Michal Orzel <michal.orzel@amd.com>:

> Hi Oleg,
>
> On 16/05/2023 14:15, Oleg Nikitenko wrote:
> >
> >
> >
> > Hello,
> >
> > Thanks a lot Michal.
> >
> > Then the next question.
> > When I just started my experiments with xen, Stefano mentioned that each
> > cache's color size is 256M.
> > Is it possible to extend this figure ?
> With 16 colors (e.g. on Cortex-A53) and 4GB of memory, roughly each color
> is 256M (i.e. 4GB/16 = 256M).
> So as you can see this figure depends on the number of colors and memory
> size.
>
> ~Michal
>
> >
> > Regards,
> > Oleg
> >
> > Mon, 15 May 2023 at 11:57, Michal Orzel <michal.orzel@amd.com>:
> >
> >     Hi Oleg,
> >
> >     On 15/05/2023 10:51, Oleg Nikitenko wrote:
> >     >
> >     >
> >     >
> >     > Hello guys,
> >     >
> >     > Thanks a lot.
> >     > After a long problem list I was able to run xen with Dom0 with a
> cache color.
> >     > One more question from my side.
> >     > I want to run a guest with color mode too.
> >     > I inserted a string into guest config file llc-colors = "9-13"
> >     > I got an error
> >     > [  457.517004] loop0: detected capacity change from 0 to 385840
> >     > Parsing config from /xen/red_config.cfg
> >     > /xen/red_config.cfg:26: config parsing error near `-colors': lexical error
> >     > warning: Config file looks like it contains Python code.
> >     > warning:  Arbitrary Python is no longer supported.
> >     > warning:  See https://wiki.xen.org/wiki/PythonInXlConfig
> >     > Failed to parse config: Invalid argument
> >     > So this is a question.
> >     > Is it possible to assign a color mode for the DomU by config file?
> >     > If so, what string should I use?
> >     Please, always refer to the relevant documentation. In this case,
> for xl.cfg:
> >
> >     https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/man/xl.cfg.5.pod.in#L2890
> >
> >     ~Michal
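
For reference, a hypothetical DomU config sketch along these lines (the exact key name should be verified against the xl.cfg.5 page linked above; xl's config lexer rejects hyphens in option names, which is why `llc-colors` fails, and xl options conventionally use underscores):

```
# Hypothetical DomU config sketch -- verify the coloring key name against
# the Xilinx xl.cfg.5 documentation before use.
name   = "domu-red"
kernel = "/xen/Image"
memory = 512
vcpus  = 1
llc_colors = "9-13"   # assumed spelling; xl keys use underscores, not hyphens
```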
> >
> >     >
> >     > Regards,
> >     > Oleg
> >     >
> >     > Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >     >
> >     >     Hi Michal,
> >     >
> >     >     Thanks.
> >     >     This compilation previously had a name CONFIG_COLORING.
> >     >     It mixed me up.
> >     >
> >     >     Regards,
> >     >     Oleg
> >     >
> >     >     Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com>:
> >     >
> >     >         Hi Oleg,
> >     >
> >     >         On 11/05/2023 12:02, Oleg Nikitenko wrote:
> >     >         >
> >     >         >
> >     >         >
> >     >         > Hello,
> >     >         >
> >     >         > Thanks Stefano.
> >     >         > Then the next question.
> >     >         > I cloned xen repo from xilinx site https://github.com/Xilinx/xen.git
> >     >         > I managed to build a xlnx_rebase_4.17 branch in my environment.
> >     >         > I did it without coloring first. I did not find any color footprints at this branch.
> >     >         > I realized coloring is not in the xlnx_rebase_4.17 branch yet.
> >     >         This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the docs:
> >     >         https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
> >     >
> >     >         It describes the feature and documents the required
> properties.
> >     >
> >     >         ~Michal
> >     >
> >     >         >
> >     >         >
> >     >         > Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org>:
> >     >         >
> >     >         >     We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
> >     >         >     (twice a year) is tested with cache coloring enabled. The last Petalinux
> >     >         >     release is 2023.1 and the kernel used is this:
> >     >         >     https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
> >     >         >
> >     >         >
> >     >         >     On Tue, 9 May 2023, Oleg Nikitenko wrote:
> >     >         >     > Hello guys,
> >     >         >     >
> >     >         >     > I have a couple of more questions.
> >     >         >     > Have you ever run xen with the cache coloring at
> Zynq UltraScale+ MPSoC zcu102 xczu15eg ?
> >     >         >     > When did you run xen with the cache coloring last
> time ?
> >     >         >     > What kernel version did you use for Dom0 when you
> ran xen with the cache coloring last time ?
> >     >         >     >
> >     >         >     > Regards,
> >     >         >     > Oleg
> >     >         >     >
> >     >         >     > Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >     >         >     >       Hi Michal,
> >     >         >     >
> >     >         >     > Thanks.
> >     >         >     >
> >     >         >     > Regards,
> >     >         >     > Oleg
> >     >         >     >
> >     >         >     > Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
> >     >         >     >       Hi Oleg,
> >     >         >     >
> >     >         >     >       Replying, so that you do not need to wait
> for Stefano.
> >     >         >     >
> >     >         >     >       On 05/05/2023 10:28, Oleg Nikitenko wrote:
> >     >         >     >       >
> >     >         >     >       >
> >     >         >     >       >
> >     >         >     >       > Hello Stefano,
> >     >         >     >       >
> >     >         >     >       > I would like to try a xen cache color
> property from this repo https://xenbits.xen.org/git-http/xen.git
> >     >         >     >       > Could you tell what branch I should use?
> >     >         >     >       The cache coloring feature is not part of the upstream tree and is still under review.
> >     >         >     >       You can only find it integrated in the Xilinx Xen tree.
> >     >         >     >
> >     >         >     >       ~Michal
> >     >         >     >
> >     >         >     >       >
> >     >         >     >       > Regards,
> >     >         >     >       > Oleg
> >     >         >     >       >
> >     >         >     >       > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org>:
> >     >         >     >       >
> >     >         >     >       >     I am familiar with the zcu102 but I don't know how you could possibly
> >     >         >     >       >     generate an SError.
> >     >         >     >       >
> >     >         >     >       >     I suggest trying ImageBuilder [1] to generate the boot
> >     >         >     >       >     configuration as a test, because that is known to work well for zcu102.
> >     >         >     >       >
> >     >         >     >       >     [1]
> https://gitlab.com/xen-project/imagebuilder
> >     >         >     >       >
> >     >         >     >       >
> >     >         >     >       >     On Thu, 27 Apr 2023, Oleg Nikitenko
> wrote:
> >     >         >     >       >     > Hello Stefano,
> >     >         >     >       >     >
> >     >         >     >       >     > Thanks for clarification.
> >     >         >     >       >     > We use neither ImageBuilder nor a u-boot boot script.
> >     >         >     >       >     > A model is zcu102 compatible.
> >     >         >     >       >     >
> >     >         >     >       >     > Regards,
> >     >         >     >       >     > O.
> >     >         >     >       >     >
> >     >         >     >       >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
> >     >         >     >       >     >       This is interesting. Are you using Xilinx hardware by any chance? If so,
> >     >         >     >       >     >       which board?
> >     >         >     >       >     >
> >     >         >     >       >     >       Are you using ImageBuilder to generate your boot.scr boot script? If so,
> >     >         >     >       >     >       could you please post your ImageBuilder config file? If not, can you
> >     >         >     >       >     >       post the source of your uboot boot script?
> >     >         >     >       >     >
> >     >         >     >       >     >       SErrors are supposed to be related to a hardware failure of some kind.
> >     >         >     >       >     >       You are not supposed to be able to trigger an SError easily by
> >     >         >     >       >     >       "mistake". I have not seen SErrors due to wrong cache coloring
> >     >         >     >       >     >       configurations on any Xilinx board before.
> >     >         >     >       >     >
> >     >         >     >       >     >       The differences between Xen with and without cache coloring from a
> >     >         >     >       >     >       hardware perspective are:
> >     >         >     >       >     >
> >     >         >     >       >       - With cache coloring, the SMMU is enabled and does address translations
> >     >         >     >       >         even for dom0. Without cache coloring the SMMU could be disabled, and
> >     >         >     >       >         if enabled, the SMMU doesn't do any address translations for Dom0. If
> >     >         >     >       >         there is a hardware failure related to SMMU address translation it
> >     >         >     >       >         could only trigger with cache coloring. This would be my normal
> >     >         >     >       >         suggestion for you to explore, but the failure happens too early
> >     >         >     >       >         before any DMA-capable device is programmed. So I don't think this can
> >     >         >     >       >         be the issue.
> >     >         >     >       >     >
> >     >         >     >       >       - With cache coloring, the memory allocation is very different so you'll
> >     >         >     >       >         end up using different DDR regions for Dom0. So if your DDR is
> >     >         >     >       >         defective, you might only see a failure with cache coloring enabled
> >     >         >     >       >         because you end up using different regions.
> >     >         >     >       >     >
> >     >         >     >       >     >
> >     >         >     >       >     >       On Tue, 25 Apr 2023, Oleg
> Nikitenko wrote:
> >     >         >     >       >     >       > Hi Stefano,
> >     >         >     >       >     >       >
> >     >         >     >       >     >       > Thank you.
> >     >         >     >       > If I build xen without colors support there is not this error.
> >     >         >     >       > All the domains are booted well.
> >     >         >     >       > Hence it cannot be a hardware issue.
> >     >         >     >       > This panic arrived during unpacking the rootfs.
> >     >         >     >       > Here I attached the boot log xen/Dom0 without color.
> >     >         >     >       > The highlighted strings are printed exactly after the place where the panic first arrived.
> >     >         >     >       >     >       >
> >     >         >     >       >     >       >  Xen 4.16.1-pre
> >     >         >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> >     >         >     >       >     >       > (XEN) Latest ChangeSet: Wed
> Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
> >     >         >     >       >     >       > (XEN) build-id:
> c1847258fdb1b79562fc710dda40008f96c0fde5
> >     >         >     >       >     >       > (XEN) Processor:
> 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03,rev 0x4
> >     >         >     >       >     >       > (XEN) 64-bit Execution:
> >     >         >     >       >     >       > (XEN)   Processor Features:
> 0000000000002222 0000000000000000
> >     >         >     >       >     >       > (XEN)     Exception Levels:
> EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> >     >         >     >       >     >       > (XEN)     Extensions:
> FloatingPoint AdvancedSIMD
> >     >         >     >       >     >       > (XEN)   Debug Features:
> 0000000010305106 0000000000000000
> >     >         >     >       >     >       > (XEN)   Auxiliary Features:
> 0000000000000000 0000000000000000
> >     >         >     >       >     >       > (XEN)   Memory Model
> Features: 0000000000001122 0000000000000000
> >     >         >     >       >     >       > (XEN)   ISA Features:
>  0000000000011120 0000000000000000
> >     >         >     >       >     >       > (XEN) 32-bit Execution:
> >     >         >     >       >     >       > (XEN)   Processor Features:
> 0000000000000131:0000000000011011
> >     >         >     >       >     >       > (XEN)     Instruction Sets:
> AArch32 A32 Thumb Thumb-2 Jazelle
> >     >         >     >       >     >       > (XEN)     Extensions:
> GenericTimer Security
> >     >         >     >       >     >       > (XEN)   Debug Features:
> 0000000003010066
> >     >         >     >       >     >       > (XEN)   Auxiliary Features:
> 0000000000000000
> >     >         >     >       >     >       > (XEN)   Memory Model
> Features: 0000000010201105 0000000040000000
> >     >         >     >       >     >       > (XEN)
>    0000000001260000 0000000002102211
> >     >         >     >       >     >       > (XEN)   ISA Features:
> 0000000002101110 0000000013112111 0000000021232042
> >     >         >     >       >     >       > (XEN)
> 0000000001112131 0000000000011142 0000000000011121
> >     >         >     >       >     >       > (XEN) Using SMC Calling
> Convention v1.2
> >     >         >     >       >     >       > (XEN) Using PSCI v1.1
> >     >         >     >       >     >       > (XEN) SMP: Allowing 4 CPUs
> >     >         >     >       >     >       > (XEN) Generic Timer IRQ:
> phys=30 hyp=26 virt=27 Freq: 100000 KHz
> >     >         >     >       >     >       > (XEN) GICv2 initialization:
> >     >         >     >       > (XEN) gic_dist_addr=00000000f9010000
> >     >         >     >       > (XEN) gic_cpu_addr=00000000f9020000
> >     >         >     >       > (XEN) gic_hyp_addr=00000000f9040000
> >     >         >     >       > (XEN) gic_vcpu_addr=00000000f9060000
> >     >         >     >       > (XEN) gic_maintenance_irq=25
> >     >         >     >       >     >       > (XEN) GICv2: Adjusting CPU
> interface base to 0xf902f000
> >     >         >     >       >     >       > (XEN) GICv2: 192 lines, 4
> cpus, secure (IID 0200143b).
> >     >         >     >       >     >       > (XEN) Using scheduler: null
> Scheduler (null)
> >     >         >     >       >     >       > (XEN) Initializing null
> scheduler
> >     >         >     >       >     >       > (XEN) WARNING: This is
> experimental software in development.
> >     >         >     >       >     >       > (XEN) Use at your own risk.
> >     >         >     >       > (XEN) Allocated console ring of 32 KiB.
> >     >         >     >       >     >       > (XEN) CPU0: Guest atomics
> will try 12 times before pausing the domain
> >     >         >     >       >     >       > (XEN) Bringing up CPU1
> >     >         >     >       >     >       > (XEN) CPU1: Guest atomics
> will try 13 times before pausing the domain
> >     >         >     >       >     >       > (XEN) CPU 1 booted.
> >     >         >     >       >     >       > (XEN) Bringing up CPU2
> >     >         >     >       >     >       > (XEN) CPU2: Guest atomics
> will try 13 times before pausing the domain
> >     >         >     >       >     >       > (XEN) CPU 2 booted.
> >     >         >     >       >     >       > (XEN) Bringing up CPU3
> >     >         >     >       >     >       > (XEN) CPU3: Guest atomics
> will try 13 times before pausing the domain
> >     >         >     >       >     >       > (XEN) Brought up 4 CPUs
> >     >         >     >       >     >       > (XEN) CPU 3 booted.
> >     >         >     >       >     >       > (XEN) smmu:
> /axi/smmu@fd800000: probing hardware configuration...
> >     >         >     >       >     >       > (XEN) smmu:
> /axi/smmu@fd800000: SMMUv2 with:
> >     >         >     >       >     >       > (XEN) smmu:
> /axi/smmu@fd800000: stage 2 translation
> >     >         >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff
> >     >         >     >       > (XEN) smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
> >     >         >     >       >     >       > (XEN) smmu:
> /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
> >     >         >     >       >     >       > (XEN) smmu:
> /axi/smmu@fd800000: registered 29 master devices
> >     >         >     >       >     >       > (XEN) I/O virtualisation
> enabled
> >     >         >     >       >     >       > (XEN)  - Dom0 mode: Relaxed
> >     >         >     >       >     >       > (XEN) P2M: 40-bit IPA with
> 40-bit PA and 8-bit VMID
> >     >         >     >       >     >       > (XEN) P2M: 3 levels with
> order-1 root, VTCR 0x0000000080023558
> >     >         >     >       >     >       > (XEN) Scheduling
> granularity: cpu, 1 CPU per sched-resource
> >     >         >     >       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
> >     >         >     >       >     >       > (XEN) *** LOADING DOMAIN 0
> ***
> >     >         >     >       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
> >     >         >     >       >     >       > (XEN) Loading ramdisk from
> boot module @ 0000000002000000
> >     >         >     >       >     >       > (XEN) Allocating 1:1
> mappings totalling 1600MB for dom0:
> >     >         >     >       >     >       > (XEN) BANK[0]
> 0x00000010000000-0x00000020000000 (256MB)
> >     >         >     >       >     >       > (XEN) BANK[1]
> 0x00000024000000-0x00000028000000 (64MB)
> >     >         >     >       >     >       > (XEN) BANK[2]
> 0x00000030000000-0x00000080000000 (1280MB)
> >     >         >     >       >     >       > (XEN) Grant table range:
> 0x00000000e00000-0x00000000e40000
> >     >         >     >       >     >       > (XEN) smmu:
> /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
> >     >         >     >       >     >       > (XEN) Allocating PPI 16 for
> event channel interrupt
> >     >         >     >       >     >       > (XEN) Extended region 0:
> 0x81200000->0xa0000000
> >     >         >     >       >     >       > (XEN) Extended region 1:
> 0xb1200000->0xc0000000
> >     >         >     >       >     >       > (XEN) Extended region 2:
> 0xc8000000->0xe0000000
> >     >         >     >       >     >       > (XEN) Extended region 3:
> 0xf0000000->0xf9000000
> >     >         >     >       >     >       > (XEN) Extended region 4:
> 0x100000000->0x600000000
> >     >         >     >       >     >       > (XEN) Extended region 5:
> 0x880000000->0x8000000000
> >     >         >     >       >     >       > (XEN) Extended region 6:
> 0x8001000000->0x10000000000
> >     >         >     >       >     >       > (XEN) Loading zImage from
> 0000000001000000 to 0000000010000000-0000000010e41008
> >     >         >     >       > (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
> >     >         >     >       >     >       > (XEN) Loading d0 DTB to
> 0x0000000013400000-0x000000001340cbdc
> >     >         >     >       >     >       > (XEN) Initial low memory
> virq threshold set at 0x4000 pages.
> >     >         >     >       >     >       > (XEN) Std. Loglevel: All
> >     >         >     >       >     >       > (XEN) Guest Loglevel: All
> >     >         >     >       >     >       > (XEN) *** Serial input to
> DOM0 (type 'CTRL-a' three times to switch input)
> >     >         >     >       > (XEN) null.c:353: 0 <-- d0v0
> >     >         >     >       > (XEN) Freed 356kB init memory.
> >     >         >     >       >     >       > (XEN) d0v0 Unhandled
> SMC/HVC: 0x84000050
> >     >         >     >       >     >       > (XEN) d0v0 Unhandled
> SMC/HVC: 0x8600ff01
> >     >         >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> >     >         >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> >     >         >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> >     >         >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> >     >         >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> >     >         >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> >     >         >     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> >     >         >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> >     >         >     >       >     >       > [    0.000000] Machine
> model: D14 Viper Board - White Unit
> >     >         >     >       >     >       > [    0.000000] Xen 4.16
> support found
> >     >         >     >       >     >       > [    0.000000] Zone ranges:
> >     >         >     >       >     >       > [    0.000000]   DMA
>  [mem 0x0000000010000000-0x000000007fffffff]
> >     >         >     >       >     >       > [    0.000000]   DMA32
>  empty
> >     >         >     >       >     >       > [    0.000000]   Normal
> empty
> >     >         >     >       >     >       > [    0.000000] Movable zone
> start for each node
> >     >         >     >       >     >       > [    0.000000] Early memory
> node ranges
> >     >         >     >       >     >       > [    0.000000]   node   0:
> [mem 0x0000000010000000-0x000000001fffffff]
> >     >         >     >       >     >       > [    0.000000]   node   0:
> [mem 0x0000000022000000-0x0000000022147fff]
> >     >         >     >       >     >       > [    0.000000]   node   0:
> [mem 0x0000000022200000-0x0000000022347fff]
> >     >         >     >       >     >       > [    0.000000]   node   0:
> [mem 0x0000000024000000-0x0000000027ffffff]
> >     >         >     >       >     >       > [    0.000000]   node   0:
> [mem 0x0000000030000000-0x000000007fffffff]
> >     >         >     >       > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
> >     >         >     >       >     >       > [    0.000000] On node 0,
> zone DMA: 8192 pages in unavailable ranges
> >     >         >     >       >     >       > [    0.000000] On node 0,
> zone DMA: 184 pages in unavailable ranges
> >     >         >     >       >     >       > [    0.000000] On node 0,
> zone DMA: 7352 pages in unavailable ranges
> >     >         >     >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
> >     >         >     >       > [    0.000000] psci: probing for conduit method from DT.
> >     >         >     >       >     >       > [    0.000000] psci:
> PSCIv1.1 detected in firmware.
> >     >         >     >       >     >       > [    0.000000] psci: Using
> standard PSCI v0.2 function IDs
> >     >         >     >       > [    0.000000] psci: Trusted OS migration not required
> >     >         >     >       >     >       > [    0.000000] psci: SMC
> Calling Convention v1.1
> >     >         >     >       >     >       > [    0.000000] percpu:
> Embedded 16 pages/cpu s32792 r0 d32744 u65536
> >     >         >     >       > [    0.000000] Detected VIPT I-cache on CPU0
> >     >         >     >       > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
> >     >         >     >       > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
> >     >         >     >       >     >       > [    0.000000] Built 1
> zonelists, mobility grouping on.  Total pages: 403845
> >     >         >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> >     >         >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
> >     >         >     >       >     >       > [    0.000000] Dentry cache
> hash table entries: 262144 (order: 9, 2097152 bytes, linear)
> >     >         >     >       >     >       > [    0.000000] Inode-cache
> hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> >     >         >     >       >     >       > [    0.000000] mem
> auto-init: stack:off, heap alloc:on, heap free:on
> >     >         >     >       >     >       > [    0.000000] mem
> auto-init: clearing system memory may take some time...
> >     >         >     >       >     >       > [    0.000000] Memory:
> 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata=
,
> 1536K
> >     >         >     >       init, 262K bss,
> >     >         >     >       >     >       256944K reserved,
> >     >         >     >       >     >       > 262144K cma-reserved)
> >     >         >     >       >     >       > [    0.000000] SLUB:
> HWalign=3D64, Order=3D0-3, MinObjects=3D0, CPUs=3D2, Nodes=3D1
> >     >         >     >       >     >       > [    0.000000] rcu:
> Hierarchical RCU implementation.
> >     >         >     >       >     >       > [    0.000000] rcu: RCU
> event tracing is enabled.
> >     >         >     >       >     >       > [    0.000000] rcu: RCU
> restricting CPUs from NR_CPUS=3D8 to nr_cpu_ids=3D2.
> >     >         >     >       >     >       > [    0.000000] rcu: RCU
> calculated value of scheduler-enlistment delay is 25 jiffies.
> >     >         >     >       >     >       > [    0.000000] rcu:
> Adjusting geometry for rcu_fanout_leaf=3D16, nr_cpu_ids=3D2
> >     >         >     >       >     >       > [    0.000000] NR_IRQS: 64,
> nr_irqs: 64, preallocated irqs: 0
> >     >         >     >       >     >       > [    0.000000] Root IRQ
> handler: gic_handle_irq
> >     >         >     >       >     >       > [    0.000000] arch_timer:
> cp15 timer(s) running at 100.00MHz (virt).
> >     >         >     >       >     >       > [    0.000000] clocksource:
> arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0,
> >     >         >     >       max_idle_ns: 440795205315 ns
> >     >         >     >       >     >       > [    0.000000] sched_clock:
> 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
> >     >         >     >       >     >       > [    0.000258] Console:
> colour dummy device 80x25
> >     >         >     >       >     >       > [    0.310231] printk:
> console [hvc0] enabled
> >     >         >     >       >     >       > [    0.314403] Calibrating
> delay loop (skipped), value calculated using timer frequency.. 200.00
> BogoMIPS
> >     >         >     >       (lpj=3D400000)
> >     >         >     >       >     >       > [    0.324851] pid_max:
> default: 32768 minimum: 301
> >     >         >     >       >     >       > [    0.329706] LSM: Securit=
y
> Framework initializing
> >     >         >     >       >     >       > [    0.334204] Yama:
> becoming mindful.
> >     >         >     >       >     >       > [    0.337865] Mount-cache
> hash table entries: 4096 (order: 3, 32768 bytes, linear)
> >     >         >     >       >     >       > [    0.345180]
> Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> >     >         >     >       >     >       > [    0.354743]
> xen:grant_table: Grant tables using version 1 layout
> >     >         >     >       >     >       > [    0.359132] Grant table
> initialized
> >     >         >     >       >     >       > [    0.362664] xen:events:
> Using FIFO-based ABI
> >     >         >     >       >     >       > [    0.366993] Xen:
> initializing cpu0
> >     >         >     >       >     >       > [    0.370515] rcu:
> Hierarchical SRCU implementation.
> >     >         >     >       >     >       > [    0.375930] smp: Bringin=
g
> up secondary CPUs ...
> >     >         >     >       >     >       > (XEN) null.c:353: 1 <-- d0v=
1
> >     >         >     >       >     >       > (XEN) d0v1: vGICD: unhandle=
d
> word write 0x000000ffffffff to ICACTIVER0
> >     >         >     >       >     >       > [    0.382549] Detected VIP=
T
> I-cache on CPU1
> >     >         >     >       >     >       > [    0.388712] Xen:
> initializing cpu1
> >     >         >     >       >     >       > [    0.388743] CPU1: Booted
> secondary processor 0x0000000001 [0x410fd034]
> >     >         >     >       >     >       > [    0.388829] smp: Brought
> up 1 node, 2 CPUs
> >     >         >     >       >     >       > [    0.406941] SMP: Total o=
f
> 2 processors activated.
> >     >         >     >       >     >       > [    0.411698] CPU features=
:
> detected: 32-bit EL0 Support
> >     >         >     >       >     >       > [    0.416888] CPU features=
:
> detected: CRC32 instructions
> >     >         >     >       >     >       > [    0.422121] CPU: All
> CPU(s) started at EL1
> >     >         >     >       >     >       > [    0.426248] alternatives=
:
> patching kernel code
> >     >         >     >       >     >       > [    0.431424] devtmpfs:
> initialized
> >     >         >     >       >     >       > [    0.441454] KASLR enable=
d
> >     >         >     >       >     >       > [    0.441602] clocksource:
> jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns:
> >     >         >     >       7645041785100000 ns
> >     >         >     >       >     >       > [    0.448321] futex hash
> table entries: 512 (order: 3, 32768 bytes, linear)
> >     >         >     >       >     >       > [    0.496183] NET:
> Registered PF_NETLINK/PF_ROUTE protocol family
> >     >         >     >       >     >       > [    0.498277] DMA:
> preallocated 256 KiB GFP_KERNEL pool for atomic allocations
> >     >         >     >       >     >       > [    0.503772] DMA:
> preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
> >     >         >     >       >     >       > [    0.511610] DMA:
> preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> >     >         >     >       >     >       > [    0.519478] audit:
> initializing netlink subsys (disabled)
> >     >         >     >       >     >       > [    0.524985] audit:
> type=3D2000 audit(0.336:1): state=3Dinitialized audit_enabled=3D0 res=3D1
> >     >         >     >       >     >       > [    0.529169] thermal_sys:
> Registered thermal governor 'step_wise'
> >     >         >     >       >     >       > [    0.533023]
> hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> >     >         >     >       >     >       > [    0.545608] ASID
> allocator initialised with 32768 entries
> >     >         >     >       >     >       > [    0.551030]
> xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> >     >         >     >       >     >       > [    0.559332] software IO
> TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
> >     >         >     >       >     >       > [    0.583565] HugeTLB
> registered 1.00 GiB page size, pre-allocated 0 pages
> >     >         >     >       >     >       > [    0.584721] HugeTLB
> registered 32.0 MiB page size, pre-allocated 0 pages
> >     >         >     >       >     >       > [    0.591478] HugeTLB
> registered 2.00 MiB page size, pre-allocated 0 pages
> >     >         >     >       >     >       > [    0.598225] HugeTLB
> registered 64.0 KiB page size, pre-allocated 0 pages
> >     >         >     >       >     >       > [    0.636520] DRBG:
> Continuing without Jitter RNG
> >     >         >     >       >     >       > [    0.737187] raid6: neonx=
8
>   gen()  2143 MB/s
> >     >         >     >       >     >       > [    0.805294] raid6: neonx=
8
>   xor()  1589 MB/s
> >     >         >     >       >     >       > [    0.873406] raid6: neonx=
4
>   gen()  2177 MB/s
> >     >         >     >       >     >       > [    0.941499] raid6: neonx=
4
>   xor()  1556 MB/s
> >     >         >     >       >     >       > [    1.009612] raid6: neonx=
2
>   gen()  2072 MB/s
> >     >         >     >       >     >       > [    1.077715] raid6: neonx=
2
>   xor()  1430 MB/s
> >     >         >     >       >     >       > [    1.145834] raid6: neonx=
1
>   gen()  1769 MB/s
> >     >         >     >       >     >       > [    1.213935] raid6: neonx=
1
>   xor()  1214 MB/s
> >     >         >     >       >     >       > [    1.282046] raid6:
> int64x8  gen()  1366 MB/s
> >     >         >     >       >     >       > [    1.350132] raid6:
> int64x8  xor()   773 MB/s
> >     >         >     >       >     >       > [    1.418259] raid6:
> int64x4  gen()  1602 MB/s
> >     >         >     >       >     >       > [    1.486349] raid6:
> int64x4  xor()   851 MB/s
> >     >         >     >       >     >       > [    1.554464] raid6:
> int64x2  gen()  1396 MB/s
> >     >         >     >       >     >       > [    1.622561] raid6:
> int64x2  xor()   744 MB/s
> >     >         >     >       >     >       > [    1.690687] raid6:
> int64x1  gen()  1033 MB/s
> >     >         >     >       >     >       > [    1.758770] raid6:
> int64x1  xor()   517 MB/s
> >     >         >     >       >     >       > [    1.758809] raid6: using
> algorithm neonx4 gen() 2177 MB/s
> >     >         >     >       >     >       > [    1.762941] raid6: ....
> xor() 1556 MB/s, rmw enabled
> >     >         >     >       >     >       > [    1.767957] raid6: using
> neon recovery algorithm
> >     >         >     >       >     >       > [    1.772824] xen:balloon:
> Initialising balloon driver
> >     >         >     >       >     >       > [    1.778021] iommu:
> Default domain type: Translated
> >     >         >     >       >     >       > [    1.782584] iommu: DMA
> domain TLB invalidation policy: strict mode
> >     >         >     >       >     >       > [    1.789149] SCSI
> subsystem initialized
> >     >         >     >       >     >       > [    1.792820] usbcore:
> registered new interface driver usbfs
> >     >         >     >       >     >       > [    1.798254] usbcore:
> registered new interface driver hub
> >     >         >     >       >     >       > [    1.803626] usbcore:
> registered new device driver usb
> >     >         >     >       >     >       > [    1.808761] pps_core:
> LinuxPPS API ver. 1 registered
> >     >         >     >       >     >       > [    1.813716] pps_core:
> Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <
> giometti@linux.it <mailto:giometti@linux.it> <mailto:giometti@linux.it
> <mailto:giometti@linux.it>> <mailto:giometti@linux.it <mailto:
> giometti@linux.it> <mailto:giometti@linux.it <mailto:giometti@linux.it>>>
> >     >         >     >       <mailto:giometti@linux.it <mailto:
> giometti@linux.it> <mailto:giometti@linux.it <mailto:giometti@linux.it>>
> <mailto:giometti@linux.it <mailto:giometti@linux.it> <mailto:
> giometti@linux.it <mailto:giometti@linux.it>>>>>
> >     >         >     >       >     >       > [    1.822903] PTP clock
> support registered
> >     >         >     >       >     >       > [    1.826893] EDAC MC: Ver=
:
> 3.0.0
> >     >         >     >       >     >       > [    1.830375]
> zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX
> channels.
> >     >         >     >       >     >       > [    1.838863]
> zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX
> channels.
> >     >         >     >       >     >       > [    1.847356]
> zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX
> channels.
> >     >         >     >       >     >       > [    1.855907] FPGA manager
> framework
> >     >         >     >       >     >       > [    1.859952] clocksource:
> Switched to clocksource arch_sys_counter
> >     >         >     >       >     >       > [    1.871712] NET:
> Registered PF_INET protocol family
> >     >         >     >       >     >       > [    1.871838] IP idents
> hash table entries: 32768 (order: 6, 262144 bytes, linear)
> >     >         >     >       >     >       > [    1.879392]
> tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes,
> linear)
> >     >         >     >       >     >       > [    1.887078] Table-pertur=
b
> hash table entries: 65536 (order: 6, 262144 bytes, linear)
> >     >         >     >       >     >       > [    1.894846] TCP
> established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> >     >         >     >       >     >       > [    1.902900] TCP bind has=
h
> table entries: 16384 (order: 6, 262144 bytes, linear)
> >     >         >     >       >     >       > [    1.910350] TCP: Hash
> tables configured (established 16384 bind 16384)
> >     >         >     >       >     >       > [    1.916778] UDP hash
> table entries: 1024 (order: 3, 32768 bytes, linear)
> >     >         >     >       >     >       > [    1.923509] UDP-Lite has=
h
> table entries: 1024 (order: 3, 32768 bytes, linear)
> >     >         >     >       >     >       > [    1.930759] NET:
> Registered PF_UNIX/PF_LOCAL protocol family
> >     >         >     >       >     >       > [    1.936834] RPC:
> Registered named UNIX socket transport module.
> >     >         >     >       >     >       > [    1.942342] RPC:
> Registered udp transport module.
> >     >         >     >       >     >       > [    1.947088] RPC:
> Registered tcp transport module.
> >     >         >     >       >     >       > [    1.951843] RPC:
> Registered tcp NFSv4.1 backchannel transport module.
> >     >         >     >       >     >       > [    1.958334] PCI: CLS 0
> bytes, default 64
> >     >         >     >       >     >       > [    1.962709] Trying to
> unpack rootfs image as initramfs...
> >     >         >     >       >     >       > [    1.977090] workingset:
> timestamp_bits=3D62 max_order=3D19 bucket_order=3D0
> >     >         >     >       >     >       > [    1.982863] Installing
> knfsd (copyright (C) 1996 okir@monad.swb.de <mailto:okir@monad.swb.de>
> <mailto:okir@monad.swb.de <mailto:okir@monad.swb.de>> <mailto:
> okir@monad.swb.de <mailto:okir@monad.swb.de> <mailto:okir@monad.swb.de
> <mailto:okir@monad.swb.de>>> <mailto:okir@monad.swb.de <mailto:
> okir@monad.swb.de> <mailto:okir@monad.swb.de <mailto:okir@monad.swb.de>>
> <mailto:okir@monad.swb.de <mailto:okir@monad.swb.de> <mailto:
> okir@monad.swb.de <mailto:okir@monad.swb.de>>>>).
> >     >         >     >       >     >       > [    2.021045] NET:
> Registered PF_ALG protocol family
> >     >         >     >       >     >       > [    2.021122] xor:
> measuring software checksum speed
> >     >         >     >       >     >       > [    2.029347]    8regs
>       :  2366 MB/sec
> >     >         >     >       >     >       > [    2.033081]    32regs
>      :  2802 MB/sec
> >     >         >     >       >     >       > [    2.038223]    arm64_neo=
n
>      :  2320 MB/sec
> >     >         >     >       >     >       > [    2.038385] xor: using
> function: 32regs (2802 MB/sec)
> >     >         >     >       >     >       > [    2.043614] Block layer
> SCSI generic (bsg) driver version 0.4 loaded (major 247)
> >     >         >     >       >     >       > [    2.050959] io scheduler
> mq-deadline registered
> >     >         >     >       >     >       > [    2.055521] io scheduler
> kyber registered
> >     >         >     >       >     >       > [    2.068227]
> xen:xen_evtchn: Event-channel device installed
> >     >         >     >       >     >       > [    2.069281] Serial:
> 8250/16550 driver, 4 ports, IRQ sharing disabled
> >     >         >     >       >     >       > [    2.076190] cacheinfo:
> Unable to detect cache hierarchy for CPU 0
> >     >         >     >       >     >       > [    2.085548] brd: module
> loaded
> >     >         >     >       >     >       > [    2.089290] loop: module
> loaded
> >     >         >     >       >     >       > [    2.089341] Invalid
> max_queues (4), will use default max: 2.
> >     >         >     >       >     >       > [    2.094565] tun:
> Universal TUN/TAP device driver, 1.6
> >     >         >     >       >     >       > [    2.098655] xen_netfront=
:
> Initialising Xen virtual ethernet driver
> >     >         >     >       >     >       > [    2.104156] usbcore:
> registered new interface driver rtl8150
> >     >         >     >       >     >       > [    2.109813] usbcore:
> registered new interface driver r8152
> >     >         >     >       >     >       > [    2.115367] usbcore:
> registered new interface driver asix
> >     >         >     >       >     >       > [    2.120794] usbcore:
> registered new interface driver ax88179_178a
> >     >         >     >       >     >       > [    2.126934] usbcore:
> registered new interface driver cdc_ether
> >     >         >     >       >     >       > [    2.132816] usbcore:
> registered new interface driver cdc_eem
> >     >         >     >       >     >       > [    2.138527] usbcore:
> registered new interface driver net1080
> >     >         >     >       >     >       > [    2.144256] usbcore:
> registered new interface driver cdc_subset
> >     >         >     >       >     >       > [    2.150205] usbcore:
> registered new interface driver zaurus
> >     >         >     >       >     >       > [    2.155837] usbcore:
> registered new interface driver cdc_ncm
> >     >         >     >       >     >       > [    2.161550] usbcore:
> registered new interface driver r8153_ecm
> >     >         >     >       >     >       > [    2.168240] usbcore:
> registered new interface driver cdc_acm
> >     >         >     >       >     >       > [    2.173109] cdc_acm: USB
> Abstract Control Model driver for USB modems and ISDN adapters
> >     >         >     >       >     >       > [    2.181358] usbcore:
> registered new interface driver uas
> >     >         >     >       >     >       > [    2.186547] usbcore:
> registered new interface driver usb-storage
> >     >         >     >       >     >       > [    2.192643] usbcore:
> registered new interface driver ftdi_sio
> >     >         >     >       >     >       > [    2.198384] usbserial:
> USB Serial support registered for FTDI USB Serial Device
> >     >         >     >       >     >       > [    2.206118] udc-core:
> couldn't find an available UDC - added [g_mass_storage] to list of pendin=
g
> >     >         >     >       drivers
> >     >         >     >       >     >       > [    2.215332] i2c_dev: i2c
> /dev entries driver
> >     >         >     >       >     >       > [    2.220467] xen_wdt
> xen_wdt: initialized (timeout=3D60s, nowayout=3D0)
> >     >         >     >       >     >       > [    2.225923]
> device-mapper: uevent: version 1.0.3
> >     >         >     >       >     >       > [    2.230668]
> device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com>> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com>>>
> >     >         >     >       <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com>> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com>>>>
> >     >         >     >       >     >       > [    2.239315] EDAC MC0:
> Giving out device to module 1 controller synps_ddr_controller: DEV
> synps_edac
> >     >         >     >       (INTERRUPT)
> >     >         >     >       >     >       > [    2.249405] EDAC DEVICE0=
:
> Giving out device to module zynqmp-ocm-edac controller zynqmp_ocm: DEV
> >     >         >     >       >     >       ff960000.memory-controller
> (INTERRUPT)
> >     >         >     >       >     >       > [    2.261719] sdhci: Secur=
e
> Digital Host Controller Interface driver
> >     >         >     >       >     >       > [    2.267487] sdhci:
> Copyright(c) Pierre Ossman
> >     >         >     >       >     >       > [    2.271890] sdhci-pltfm:
> SDHCI platform and OF driver helper
> >     >         >     >       >     >       > [    2.278157] ledtrig-cpu:
> registered to indicate activity on CPUs
> >     >         >     >       >     >       > [    2.283816]
> zynqmp_firmware_probe Platform Management API v1.1
> >     >         >     >       >     >       > [    2.289554]
> zynqmp_firmware_probe Trustzone version v1.0
> >     >         >     >       >     >       > [    2.327875] securefw
> securefw: securefw probed
> >     >         >     >       >     >       > [    2.328324] alg: No test
> for xilinx-zynqmp-aes (zynqmp-aes)
> >     >         >     >       >     >       > [    2.332563] zynqmp_aes
> firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registered
> >     >         >     >       >     >       > [    2.341183] alg: No test
> for xilinx-zynqmp-rsa (zynqmp-rsa)
> >     >         >     >       >     >       > [    2.347667] remoteproc
> remoteproc0: ff9a0000.rf5ss:r5f_0 is available
> >     >         >     >       >     >       > [    2.353003] remoteproc
> remoteproc1: ff9a0000.rf5ss:r5f_1 is available
> >     >         >     >       >     >       > [    2.362605] fpga_manager
> fpga0: Xilinx ZynqMP FPGA Manager registered
> >     >         >     >       >     >       > [    2.366540]
> viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
> >     >         >     >       >     >       > [    2.372525] viper-vdpp
> a4000000.vdpp: Device Tree Probing
> >     >         >     >       >     >       > [    2.377778] viper-vdpp
> a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >     >         >     >       >     >       > [    2.386432] viper-vdpp
> a4000000.vdpp: Unable to register tamper handler. Retrying...
> >     >         >     >       >     >       > [    2.394094]
> viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
> >     >         >     >       >     >       > [    2.399854]
> viper-vdpp-net a5000000.vdpp_net: Device registered
> >     >         >     >       >     >       > [    2.405931]
> viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
> >     >         >     >       >     >       > [    2.412037]
> viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Even=
t
> Count: 32
> >     >         >     >       >     >       > [    2.420856] default pres=
et
> >     >         >     >       >     >       > [    2.423797]
> viper-vdpp-stat a8000000.vdpp_stat: Device registered
> >     >         >     >       >     >       > [    2.430054]
> viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
> >     >         >     >       >     >       > [    2.435948]
> viper-vdpp-rng ac000000.vdpp_rng: Device registered
> >     >         >     >       >     >       > [    2.441976] vmcu driver
> init
> >     >         >     >       >     >       > [    2.444922] VMCU: :
> (240:0) registered
> >     >         >     >       >     >       > [    2.444956] In K81
> Updater init
> >     >         >     >       >     >       > [    2.449003] pktgen:
> Packet Generator for packet performance testing. Version: 2.75
> >     >         >     >       >     >       > [    2.468833] Initializing
> XFRM netlink socket
> >     >         >     >       >     >       > [    2.468902] NET:
> Registered PF_PACKET protocol family
> >     >         >     >       >     >       > [    2.472729] Bridge
> firewalling registered
> >     >         >     >       >     >       > [    2.476785] 8021q: 802.1=
Q
> VLAN Support v1.8
> >     >         >     >       >     >       > [    2.481341] registered
> taskstats version 1
> >     >         >     >       >     >       > [    2.486394] Btrfs loaded=
,
> crc32c=3Dcrc32c-generic, zoned=3Dno, fsverity=3Dno
> >     >         >     >       >     >       > [    2.503145]
> ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq =3D 36, base_baud =3D 625=
0000)
> is a xuartps
> >     >         >     >       >     >       > [    2.507103]
> of-fpga-region fpga-full: FPGA Region probed
> >     >         >     >       >     >       > [    2.512986]
> xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.520267]
> xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.528239]
> xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.536152]
> xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.544153]
> xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.552127]
> xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.560178]
> xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.567987]
> xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.576018]
> xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.583889]
> xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >         >     >       >     >       > [    2.946379] spi-nor
> spi0.0: mt25qu512a (131072 Kbytes)
> >     >         >     >       >     >       > [    2.946467] 2
> fixed-partitions partitions found on MTD device spi0.0
> >     >         >     >       >     >       > [    2.952393] Creating 2
> MTD partitions on "spi0.0":
> >     >         >     >       >     >       > [    2.957231]
> 0x000004000000-0x000008000000 : "bank A"
> >     >         >     >       >     >       > [    2.963332]
> 0x000000000000-0x000004000000 : "bank B"
> >     >         >     >       >     >       > [    2.968694] macb
> ff0b0000.ethernet: Not enabling partial store and forward
> >     >         >     >       >     >       > [    2.975333] macb
> ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25
> >     >         >     >       (18:41:fe:0f:ff:02)
> >     >         >     >       >     >       > [    2.984472] macb
> ff0c0000.ethernet: Not enabling partial store and forward
> >     >         >     >       >     >       > [    2.992144] macb
> ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26
> >     >         >     >       (18:41:fe:0f:ff:03)
> >     >         >     >       >     >       > [    3.001043] viper_enet
> viper_enet: Viper power GPIOs initialised
> >     >         >     >       >     >       > [    3.007313] viper_enet
> viper_enet vnet0 (uninitialized): Validate interface QSGMII
> >     >         >     >       >     >       > [    3.014914] viper_enet
> viper_enet vnet1 (uninitialized): Validate interface QSGMII
> >     >         >     >       >     >       > [    3.022138] viper_enet
> viper_enet vnet1 (uninitialized): Validate interface type 18
> >     >         >     >       >     >       > [    3.030274] viper_enet
> viper_enet vnet2 (uninitialized): Validate interface QSGMII
> >     >         >     >       >     >       > [    3.037785] viper_enet
> viper_enet vnet3 (uninitialized): Validate interface QSGMII
> >     >         >     >       >     >       > [    3.045301] viper_enet
> viper_enet: Viper enet registered
> >     >         >     >       >     >       > [    3.050958]
> xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
> >     >         >     >       >     >       > [    3.057135]
> xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
> >     >         >     >       >     >       > [    3.063538]
> xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
> >     >         >     >       >     >       > [    3.069920]
> xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
> >     >         >     >       >     >       > [    3.097729] si70xx: prob=
e
> of 2-0040 failed with error -5
> >     >         >     >       >     >       > [    3.098042] cdns-wdt
> fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> >     >         >     >       >     >       > [    3.105111] cdns-wdt
> ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> >     >         >     >       >     >       > [    3.112457] viper-tamper
> viper-tamper: Device registered
> >     >         >     >       >     >       > [    3.117593] active_bank
> active_bank: boot bank: 1
> >     >         >     >       >     >       > [    3.122184] active_bank
> active_bank: boot mode: (0x02) qspi32
> >     >         >     >       >     >       > [    3.128247] viper-vdpp
> a4000000.vdpp: Device Tree Probing
> >     >         >     >       >     >       > [    3.133439] viper-vdpp
> a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: 32
> >     >         >     >       >     >       > [    3.142151] viper-vdpp
> a4000000.vdpp: Tamper handler registered
> >     >         >     >       >     >       > [    3.147438] viper-vdpp
> a4000000.vdpp: Device registered
> >     >         >     >       >     >       > [    3.153007] lpc55_l2
> spi1.0: registered handler for protocol 0
> >     >         >     >       >     >       > [    3.158582] lpc55_user
> lpc55_user: The major number for your device is 236
> >     >         >     >       >     >       > [    3.165976] lpc55_l2
> spi1.0: registered handler for protocol 1
> >     >         >     >       >     >       > [    3.181999] rtc-lpc55
> rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> >     >         >     >       >     >       > [    3.182856] rtc-lpc55
> rtc_lpc55: registered as rtc0
> >     >         >     >       >     >       > [    3.188656] lpc55_l2
> spi1.0: (2) mcu still not ready?
> >     >         >     >       >     >       > [    3.193744] lpc55_l2
> spi1.0: (3) mcu still not ready?
> >     >         >     >       >     >       > [    3.198848] lpc55_l2
> spi1.0: (4) mcu still not ready?
> >     >         >     >       >     >       > [    3.202932] mmc0: SDHCI
> controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> >     >         >     >       >     >       > [    3.210689] lpc55_l2
> spi1.0: (5) mcu still not ready?
> >     >         >     >       >     >       > [    3.215694] lpc55_l2
> spi1.0: rx error: -110
> [    3.284438] mmc0: new HS200 MMC card at address 0001
> [    3.285179] mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> [    3.291784]  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> [    3.293915] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> [    3.299054] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> [    3.303905] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> [    3.582676] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> [    3.583332] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> [    3.591252] cdns-i2c ff020000.i2c: recovery information complete
> [    3.597085] at24 0-0050: supply vcc not found, using dummy regulator
> [    3.603011] lpc55_l2 spi1.0: (2) mcu still not ready?
> [    3.608093] at24 0-0050: 256 byte spd EEPROM, read-only
> [    3.613620] lpc55_l2 spi1.0: (3) mcu still not ready?
> [    3.619362] lpc55_l2 spi1.0: (4) mcu still not ready?
> [    3.624224] rtc-rv3028 0-0052: registered as rtc1
> [    3.628343] lpc55_l2 spi1.0: (5) mcu still not ready?
> [    3.633253] lpc55_l2 spi1.0: rx error: -110
> [    3.639104] k81_bootloader 0-0010: probe
> [    3.641628] VMCU: : (235:0) registered
> [    3.641635] k81_bootloader 0-0010: probe completed
> [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> [    3.737549] sfp_register_socket: got sfp_bus
> [    3.740709] sfp_register_socket: register sfp_bus
> [    3.745459] sfp_register_bus: ops ok!
> [    3.749179] sfp_register_bus: Try to attach
> [    3.753419] sfp_register_bus: Attach succeeded
> [    3.757914] sfp_register_bus: upstream ops attach
> [    3.762677] sfp_register_bus: Bus registered
> [    3.766999] sfp_register_socket: register sfp_bus succeeded
> [    3.775870] of_cfs_init
> [    3.776000] of_cfs_init: OK
> [    3.778211] clk: Not disabling unused clocks
> [   11.278477] Freeing initrd memory: 206056K
> [   11.279406] Freeing unused kernel memory: 1536K
> [   11.314006] Checked W+X mappings: passed, no W+X pages found
> [   11.314142] Run /init as init process
> INIT: version 3.01 booting
> fsck (busybox 1.35.0)
> /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> Starting random number generator daemon.
> [   11.580662] random: crng init done
> Starting udev
> [   11.613159] udevd[142]: starting version 3.2.10
> [   11.620385] udevd[143]: starting eudev-3.2.10
> [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Mon Feb 27 08:40:53 UTC 2023
> [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> hwclock: RTC_SET_TIME: Invalid exchange
> [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> Starting mcud
> INIT: Entering runlevel: 5
> Configuring network interfaces... done.
> resetting network interface
> [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> [   12.732151] pps pps0: new PPS source ptp0
> [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> [   12.761804] pps pps1: new PPS source ptp1
> [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> Auto-negotiation: off
> Auto-negotiation: off
> [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> Starting Failsafe Secure Shell server in port 2222: sshd
> done.
> Starting rpcbind daemon...done.
>
> [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Starting State Manager Service
> Start state-manager restarter...
> (XEN) d0v1 Forwarding AES operation: 3254779951
> Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> [   17.350670] BTRFS info (device dm-0): has skinny extents
> [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> [   17.872699] BTRFS info (device dm-1): using free space tree
> [   17.872771] BTRFS info (device dm-1): has skinny extents
> [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> [   17.895695] BTRFS info (device dm-1): checking UUID tree
>
> Setting domain 0 name, domid and JSON config...
> Done setting up Dom0
> Starting xenconsoled...
> Starting QEMU as disk backend for dom0
> Starting domain watchdog daemon: xenwatchdogd startup
>
> [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> [done]
> [   18.465552] BTRFS info (device dm-2): using free space tree
> [   18.465629] BTRFS info (device dm-2): has skinny extents
> [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> [   18.486659] BTRFS info (device dm-2): checking UUID tree
> OK
> starting rsyslogd ... Log partition ready after 0 poll loops
> done
> rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>
> Please insert USB token and enter your role in login prompt.
>
> login:
>
> Regards,
> O.
>
> Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org>:
> > Hi Oleg,
> >
> > Here is the issue from your logs:
> >
> > SError Interrupt on CPU0, code 0xbe000000 -- SError
> >
> > SErrors are special signals to notify software of serious hardware
> > errors. Something is going very wrong. Defective hardware is a
> > possibility. Another possibility is software accessing address ranges
> > that it is not supposed to; that sometimes causes SErrors.
> >
> > Cheers,
> >
> > Stefano
> >
> > On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> > > Hello,
> > >
> > > Thanks guys.
> > > I found out where the problem was.
> > > Now dom0 boots further, but I have a new problem:
> > > a kernel panic during Dom0 loading.
> > > Maybe someone is able to suggest something?
> > >
> > > Regards,
> > > O.
> > >
> > > [    3.771362] sfp_register_bus: upstream ops attach
> > > [    3.776119] sfp_register_bus: Bus registered
> > > [    3.780459] sfp_register_socket: register sfp_bus succeeded
> > > [    3.789399] of_cfs_init
> > > [    3.789499] of_cfs_init: OK
> > > [    3.791685] clk: Not disabling unused clocks
> > > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> > > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010393] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > > [   11.010422] pc : simple_write_end+0xd0/0x130
> > > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> > > [   11.010438] sp : ffffffc00809b910
> > > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> > > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> > > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> > > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> > > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> > > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> > > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> > > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> > > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> > > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> > > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> > > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> > > [   11.010548] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010556] Call trace:
> > > [   11.010558]  dump_backtrace+0x0/0x1c4
> > > [   11.010567]  show_stack+0x18/0x2c
> > > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> > > [   11.010583]  dump_stack+0x18/0x34
> > > [   11.010588]  panic+0x14c/0x2f8
> > > [   11.010597]  print_tainted+0x0/0xb0
> > > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> > > [   11.010614]  do_serror+0x28/0x60
> > > [   11.010621]  el1h_64_error_handler+0x30/0x50
> > > [   11.010628]  el1h_64_error+0x78/0x7c
> > > [   11.010633]  simple_write_end+0xd0/0x130
> > > [   11.010639]  generic_perform_write+0x118/0x1e0
> > > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> > > [   11.010650]  generic_file_write_iter+0x78/0xd0
> > > [   11.010656]  __kernel_write+0xfc/0x2ac
> > > [   11.010665]  kernel_write+0x88/0x160
> > > [   11.010673]  xwrite+0x44/0x94
> > > [   11.010680]  do_copy+0xa8/0x104
> > > [   11.010686]  write_buffer+0x38/0x58
> > > [   11.010692]  flush_buffer+0x4c/0xbc
> > > [   11.010698]  __gunzip+0x280/0x310
> > > [   11.010704]  gunzip+0x1c/0x28
> > > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> > > [   11.010715]  do_populate_rootfs+0x80/0x164
> > > [   11.010722]  async_run_entry_fn+0x48/0x164
> > > [   11.010728]  process_one_work+0x1e4/0x3a0
> > > [   11.010736]  worker_thread+0x7c/0x4c0
> > > [   11.010743]  kthread+0x120/0x130
> > > [   11.010750]  ret_from_fork+0x10/0x20
> > > [   11.010757] SMP: stopping secondary CPUs
> > > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> > > [   11.010788] PHYS_OFFSET: 0x0
> > > [   11.010790] CPU features: 0x00000401,00000842
> > > [   11.010795] Memory Limit: none
> > > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> > >
> > > Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com>:
> > > > Hi Oleg,
> > > >
> > > > On 21/04/2023 14:49, Oleg Nikitenko wrote:
> > > > > Hello Michal,
> > > > >
> > > > > I was not able to enable earlyprintk in Xen for now.
> > > > > I decided to choose another way.
> > > > > This is the complete Xen command line that I found out:
> > > > >
> > > > > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> > > > Yes, adding a printk() in Xen was also a good idea.
> > > >
> > > > > So you are absolutely right about the command line.
> > > > > Now I am going to find out why Xen did not get the correct parameters from the device tree.
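
For reference, Xen on Arm picks up its own command line (and dom0's) from the `/chosen` node of the host device tree. A minimal sketch of the relevant properties follows; the property names come from Xen's `docs/misc/arm/device-tree/booting.txt` referenced in this thread, while the dom0 bootargs value shown here is an illustrative assumption, not this board's actual configuration:

```dts
/* Hypothetical /chosen fragment; only the bootargs properties are shown. */
chosen {
    /* Command line consumed by the Xen hypervisor itself */
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0";

    /* Command line handed to the dom0 kernel (value is a placeholder) */
    xen,dom0-bootargs = "console=hvc0";
};
```

If `xen,xen-bootargs` is absent or overwritten by the boot firmware, Xen ends up with an empty or wrong command line, which matches the symptom being debugged here.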
> > > > Maybe you will find this document helpful:
> > > > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
> > > >
> > > > ~Michal
> > > > >
> > > > > Regards,
> > > > > Oleg
> > > > >
> > > > > Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com>:
> >     >         >     >       >     >       >       >       >
> >     >         >     >       >     >       >       >       >
> >     >         >     >       >     >       >       >       >     On
> 21/04/2023 10:04, Oleg Nikitenko wrote:
> >     >         >     >       >     >       >       >       >     >
> >     >         >     >       >     >       >       >       >     >
> >     >         >     >       >     >       >       >       >     >
> >     >         >     >       >     >       >       >       >     > Hello
> Michal,
> >     >         >     >       >     >       >       >       >     >
> >     >         >     >       >     >       >       >       >     > Yes, =
I
> use yocto.
>
> Yesterday all day long I tried to follow your suggestions. I faced a
> problem.
>
> Manually in the xen config build file I pasted the strings:

In the .config file or in some Yocto file (listing additional Kconfig
options) added to SRC_URI?
You shouldn't really modify the .config file, but if you do, you should
execute "make olddefconfig" afterwards.

> CONFIG_EARLY_PRINTK
> CONFIG_EARLY_PRINTK_ZYNQMP
> CONFIG_EARLY_UART_CHOICE_CADENCE

I hope you added =y to them.

Anyway, you have at least the following solutions:
1) Run "bitbake xen -c menuconfig" to properly set early printk
2) Find out how you enable other Kconfig options in your project
   (e.g. CONFIG_COLORING=y, which is not enabled by default)
3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
   CONFIG_EARLY_PRINTK_ZYNQMP=y

~Michal

> Host hangs at build time.
> Maybe I did not set something in the config build file?
>
> Regards,
> Oleg
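The Yocto-side route Michal mentions (listing additional Kconfig options in a file added to SRC_URI) is commonly done with a layer-local append plus a Kconfig fragment. This is only a sketch: the `.bbappend` and fragment file names are hypothetical, and whether the xen recipe in your particular layer merges `.cfg` fragments needs to be checked against that recipe.

```
# xen_%.bbappend  (hypothetical file in your own layer)
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://early-printk.cfg"

# files/early-printk.cfg  (Kconfig fragment with the options from this thread)
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
CONFIG_EARLY_UART_CHOICE_CADENCE=y
```

On older Yocto releases the override syntax is `FILESEXTRAPATHS_prepend` rather than `FILESEXTRAPATHS:prepend`.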
>
> Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com>:
>> Thanks Michal,
>>
>> You gave me an idea.
>> I am going to try it today.
>>
>> Regards,
>> O.
>>
>> Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com>:
>>> Thanks Stefano.
>>>
>>> I am going to do it today.
>>>
>>> Regards,
>>> O.
>>>
>>> Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org>:
>>>> On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
>>>>> Hi Michal,
>>>>>
>>>>> I corrected xen's command line.
>>>>> Now it is
>>>>> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>>>>> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>>>>> timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>>>>
>>>> 4 colors is way too many for xen, just do xen_colors=0-0. There is no
>>>> advantage in using more than 1 color for Xen.
>>>>
>>>> 4 colors is too few for dom0, if you are giving 1600M of memory to Dom0.
>>>> Each color is 256M. For 1600M you should give at least 7 colors. Try:
>>>>
>>>> xen_colors=0-0 dom0_colors=1-8
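Stefano's sizing rule reduces to a one-line ceiling division. A minimal sketch (the 256M-per-color figure is taken directly from his message; it depends on the platform's memory size and cache geometry, so treat it as an input, not a constant):

```python
import math

def colors_needed(dom0_mem_mib: int, mib_per_color: int = 256) -> int:
    """Minimum number of cache colors whose memory can hold dom0."""
    return math.ceil(dom0_mem_mib / mib_per_color)

# 1600M of dom0 memory at 256M per color needs at least 7 colors,
# hence a range like dom0_colors=1-8 leaves a little headroom.
print(colors_needed(1600))  # 7
```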
>>>>> Unfortunately the result was the same.
>>>>>
>>>>> (XEN)  - Dom0 mode: Relaxed
>>>>> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
>>>>> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
>>>>> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
>>>>> (XEN) Coloring general information
>>>>> (XEN) Way size: 64kB
>>>>> (XEN) Max. number of colors available: 16
>>>>> (XEN) Xen color(s): [ 0 ]
>>>>> (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
>>>>> (XEN) Color array allocation failed for dom0
>>>>> (XEN)
>>>>> (XEN) ****************************************
>>>>> (XEN) Panic on CPU 0:
>>>>> (XEN) Error creating domain 0
>>>>> (XEN) ****************************************
>>>>> (XEN)
>>>>> (XEN) Reboot in five seconds...
>>>>>
>>>>> I am going to find out how command line arguments are passed and parsed.
>>>>>
>>>>> Regards,
>>>>> Oleg
>>>>>
>>>>> Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>>>>>> Hi Michal,
>>>>>>
>>>>>> You put my nose into the problem. Thank you.
>>>>>> I am going to use your point.
>>>>>> Let's see what happens.
>>>>>>
>>>>>> Regards,
>>>>>> Oleg
>>>>>>
>>>>>> Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
>>>>>>> Hi Oleg,
>>>>>>>
>>>>>>> On 19/04/2023 09:03, Oleg Nikitenko wrote:
>>>>>>>> Hello Stefano,
>>>>>>>>
>>>>>>>> Thanks for the clarification.
>>>>>>>> My company uses yocto for image generation.
>>>>>>>> What kind of information do you need to consult me in this case?
>>>>>>>>
>>>>>>>> Maybe module sizes/addresses which were mentioned by @Julien Grall
>>>>>>>> <julien@xen.org>?
>>>>>>>
>>>>>>> Sorry for jumping into the discussion, but FWICS the Xen command line
>>>>>>> you provided seems not to be the one Xen booted with. The error you
>>>>>>> are observing is most likely due to the dom0 colors configuration not
>>>>>>> being specified (i.e. lack of a dom0_colors=<> parameter). Although
>>>>>>> this parameter is set in the command line you provided, I strongly
>>>>>>> doubt that this is the actual command line in use.
>>>>>>>
>>>>>>> You wrote:
>>>>>>> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>>>>>>> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>>>>>>> timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>>>>>>>
>>>>>>> but:
>>>>>>> 1) way_szize has a typo
>>>>>>> 2) you specified 4 colors (0-3) for Xen, but the boot log says that
>>>>>>>    Xen has only one:
>>>>>>> (XEN) Xen color(s): [ 0 ]
>>>>>>>
>>>>>>> This makes me believe that no colors configuration actually ended up
>>>>>>> in the command line that Xen booted with. A single color for Xen is
>>>>>>> the "default if not specified", and the way size was probably
>>>>>>> calculated by asking the HW.
>>>>>>>
>>>>>>> So I would suggest first cross-checking the command line in use.
>>>>>>>
>>>>>>> ~Michal
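One way to do the cross-check Michal suggests is to inspect the device tree Xen was actually handed: on Arm, Xen takes its command line from the `xen,xen-bootargs` property of the `/chosen` node. A minimal illustrative fragment follows, folding in the `way_szize` spelling fix and the color ranges Stefano suggested; `serial0` must be a valid alias on the board, and the exact property set depends on how the boot flow (e.g. ImageBuilder or a hand-written DT) populates `/chosen`.

```
/* /chosen fragment -- illustrative only, not a complete boot setup */
chosen {
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-0 dom0_colors=1-8";
};
```

Dumping the DT at the U-Boot prompt (or checking the `(XEN) Command line:` boot message) shows whether these arguments actually reached Xen.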
>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Oleg
>>>>>>>>
>>>>>>>> Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@kernel.org>:
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> >     >         >     >       >     >       >       >       >     >
>      >       >     > Hi Julien,
> >     >         >     >       >     >       >       >       >     >
>      >       >     >
> >     >         >     >       >     >       >       >       >     >
>      >       >     > >> This feature has not been merged in Xen upstream =
yet
> >     >         >     >       >     >       >       >       >     >
>      >       >     >
> >     >         >     >       >     >       >       >       >     >
>      >       >     > > would assume that upstream + the series on the ML =
[1]
> >     >         >     >       work
> >     >         >     >       >     >       >       >       >     >
>      >       >     >
> >     >         >     >       >     >       >       >       >     >
>      >       >     > Please clarify this point.
> >     >         >     >       >     >       >       >       >     >
>      >       >     > Because the two thoughts are controversial.
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >     Hi Oleg,
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >     As Julien wrote, there is nothing controversial. As yo=
u
> >     >         >     >       are aware,
> >     >         >     >       >     >       >       >       >     >
>      >       >     Xilinx maintains a separate Xen tree specific for Xili=
nx
> >     >         >     >       here:
> >     >         >     >       >     >       >       >       >     >
>      >       >     https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>>
> >     >         >     >       <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>>>>
> >     >         >     >       >     >       >       <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>
> >     >         >     >       >     >       >       >       <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>>>
> >     >         >     >       <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>>
> >     >         >     >       <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>>>>
> >     >         >     >       >     >       >       <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>
> >     >         >     >       >     >       >       >       <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>>>>>>
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >     and the branch you are using (xlnx_rebase_4.16) comes
> >     >         >     >       from there.
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >     Instead, the upstream Xen tree lives here:
> >     >         >     >       >     >       >       >       >     >
>      >       >     https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummar=
y <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >     >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>
> >     >         >     >       >     >       >       >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >     >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >     >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>
> >     >         >     >       >     >       >       >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >     >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >     >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>
> >     >         >     >       >     >       >       >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >     >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >     >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>
> >     >         >     >       >     >       >       >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>
> >     >         >     >       <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary> <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary <
> https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary>>>>>>>
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >     The Cache Coloring feature that you are trying to
> >     >         >     >       configure is present
> >     >         >     >       >     >       >       >       >     >
>      >       >     in xlnx_rebase_4.16, but not yet present upstream (the=
re
> >     >         >     >       is an
> >     >         >     >       >     >       >       >       >     >
>      >       >     outstanding patch series to add cache coloring to Xen
> >     >         >     >       upstream but it
> >     >         >     >       >     >       >       >       >     >
>      >       >     hasn't been merged yet.)
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't
> >     >         >     >       matter too much for
> >     >         >     >       >     >       >       >       >     >
>      >       >     you as you already have Cache Coloring as a feature
> >     >         >     >       there.
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >     I take you are using ImageBuilder to generate the boot
> >     >         >     >       configuration? If
> >     >         >     >       >     >       >       >       >     >
>      >       >     so, please post the ImageBuilder config file that you =
are
> >     >         >     >       using.
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >       >     But from the boot message, it looks like the colors
> >     >         >     >       configuration for
> >     >         >     >       >     >       >       >       >     >
>      >       >     Dom0 is incorrect.
> >     >         >     >       >     >       >       >       >     >
>      >       >
> >     >         >     >       >     >       >       >       >     >
>      >
> >     >         >     >       >     >       >       >       >     >
>      >
> >     >         >     >       >     >       >       >       >     >
>      >
> >     >         >     >       >     >       >       >       >     >
> >     >         >     >       >     >       >       >       >
> >     >         >     >       >     >       >       >
> >     >         >     >       >     >       >       >
> >     >         >     >       >     >       >       >
> >     >         >     >       >     >       >
> >     >         >     >       >     >       >
> >     >         >     >       >     >       >
> >     >         >     >       >     >
> >     >         >     >       >     >
> >     >         >     >       >     >
> >     >         >     >       >
> >     >         >     >
> >     >         >     >
> >     >         >     >
> >     >         >
> >     >
> >
>
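The coloring feature discussed above partitions memory by last-level-cache color; per the feature's documentation, the number of available colors on a platform is the LLC way size divided by the page size. A minimal sketch of that derivation (the cache geometry below is an assumption for a Cortex-A53-class 1 MiB, 16-way L2, not taken from this thread):

```python
PAGE_SIZE = 4096  # 4 KiB pages, as used by Xen on Arm by default

def num_colors(llc_size: int, llc_ways: int, page_size: int = PAGE_SIZE) -> int:
    """Number of LLC colors: way size divided by page size."""
    way_size = llc_size // llc_ways
    return way_size // page_size

# Assumed Cortex-A53-class L2: 1 MiB, 16-way -> 64 KiB way -> 16 colors
print(num_colors(1024 * 1024, 16))
```

This is why the thread keeps referring to 16 colors on this platform: the valid color indices for a Dom0 or DomU configuration would then be 0-15.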

--0000000000003b806f05fbd10971
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: base64

PGRpdiBkaXI9Imx0ciI+SGkgZ3V5cyw8ZGl2Pjxicj48L2Rpdj48ZGl2PlRoYW5rcyBNaWNoYWwu
PC9kaXY+PGRpdj48YnI+PC9kaXY+PGRpdj5TbyBpZiBJIGhhdmUgbW9yZSBSQU0gSXQgaXMgcG9z
c2libGXCoHRvIGluY3JlYXNlIHRoZSBjb2xvciBkZW5zaXR5LjwvZGl2PjxkaXY+PGJyPjwvZGl2
PjxkaXY+Rm9yIGV4YW1wbGUgOEdiLzE2IGl0IGlzIDUxMiBNYiBhcHByb3hpbWF0ZWx5LjwvZGl2
PjxkaXY+SXMgdGhpcyBjb3JyZWN0ID88L2Rpdj48ZGl2PlJlZ2FyZHMsPC9kaXY+PGRpdj5PbGVn
PC9kaXY+PC9kaXY+PGJyPjxkaXYgY2xhc3M9ImdtYWlsX3F1b3RlIj48ZGl2IGRpcj0ibHRyIiBj
bGFzcz0iZ21haWxfYXR0ciI+0LLRgiwgMTYg0LzQsNGPIDIwMjPigK/Qsy4g0LIgMTc6NDAsIE1p
Y2hhbCBPcnplbCAmbHQ7PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIj5taWNo
YWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7Ojxicj48L2Rpdj48YmxvY2txdW90ZSBjbGFzcz0iZ21h
aWxfcXVvdGUiIHN0eWxlPSJtYXJnaW46MHB4IDBweCAwcHggMC44ZXg7Ym9yZGVyLWxlZnQtd2lk
dGg6MXB4O2JvcmRlci1sZWZ0LXN0eWxlOnNvbGlkO2JvcmRlci1sZWZ0LWNvbG9yOnJnYigyMDQs
MjA0LDIwNCk7cGFkZGluZy1sZWZ0OjFleCI+SGkgT2xlZyw8YnI+DQo8YnI+DQpPbiAxNi8wNS8y
MDIzIDE0OjE1LCBPbGVnIE5pa2l0ZW5rbyB3cm90ZTo8YnI+DQomZ3Q7wqAgwqAgwqAgwqA8YnI+
DQomZ3Q7IDxicj4NCiZndDsgPGJyPg0KJmd0OyBIZWxsbyw8YnI+DQomZ3Q7IDxicj4NCiZndDsg
VGhhbmtzIGEgbG90IE1pY2hhbC48YnI+DQomZ3Q7IDxicj4NCiZndDsgVGhlbiB0aGUgbmV4dCBx
dWVzdGlvbi48YnI+DQomZ3Q7IFdoZW4gSSBqdXN0IHN0YXJ0ZWQgbXkgZXhwZXJpbWVudHMgd2l0
aCB4ZW4sIFN0ZWZhbm8gbWVudGlvbmVkIHRoYXQgZWFjaCBjYWNoZSYjMzk7cyBjb2xvciBzaXpl
IGlzIDI1Nk0uPGJyPg0KJmd0OyBJcyBpdCBwb3NzaWJsZSB0byBleHRlbmQgdGhpcyBmaWd1cmUg
Pzxicj4NCldpdGggMTYgY29sb3JzIChlLmcuIG9uIENvcnRleC1BNTMpIGFuZCA0R0Igb2YgbWVt
b3J5LCByb3VnaGx5IGVhY2ggY29sb3IgaXMgMjU2TSAoaS5lLiA0R0IvMTYgPSAyNTZNKS48YnI+
DQpTbyBhcyB5b3UgY2FuIHNlZSB0aGlzIGZpZ3VyZSBkZXBlbmRzIG9uIHRoZSBudW1iZXIgb2Yg
Y29sb3JzIGFuZCBtZW1vcnkgc2l6ZS48YnI+DQo8YnI+DQp+TWljaGFsPGJyPg0KPGJyPg0KJmd0
OyA8YnI+DQomZ3Q7IFJlZ2FyZHMsPGJyPg0KJmd0OyBPbGVnPGJyPg0KJmd0OyA8YnI+DQomZ3Q7
INC/0L0sIDE1INC80LDRjyAyMDIz4oCv0LMuINCyIDExOjU3LCBNaWNoYWwgT3J6ZWwgJmx0Ozxh
IGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hh
bC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6
ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsm
Z3Q7Ojxicj4NCiZndDsgPGJyPg0KJmd0O8KgIMKgIMKgSGkgT2xlZyw8YnI+DQomZ3Q7IDxicj4N
CiZndDvCoCDCoCDCoE9uIDE1LzA1LzIwMjMgMTA6NTEsIE9sZWcgTmlraXRlbmtvIHdyb3RlOjxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoDxicj4NCiZndDvCoCDCoCDCoCZndDs8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0OyBIZWxsbyBndXlzLDxicj4N
CiZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7IFRoYW5rcyBhIGxvdC48YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7IEFmdGVyIGEgbG9uZyBwcm9ibGVtIGxpc3QgSSB3YXMgYWJsZSB0
byBydW4geGVuIHdpdGggRG9tMCB3aXRoIGEgY2FjaGUgY29sb3IuPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0OyBPbmUgbW9yZSBxdWVzdGlvbiBmcm9tIG15IHNpZGUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0
OyBJIHdhbnQgdG8gcnVuIGEgZ3Vlc3Qgd2l0aCBjb2xvciBtb2RlIHRvby48YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7IEkgaW5zZXJ0ZWQgYSBzdHJpbmcgaW50byBndWVzdCBjb25maWcgZmlsZSBsbGMt
Y29sb3JzID0gJnF1b3Q7OS0xMyZxdW90Ozxicj4NCiZndDvCoCDCoCDCoCZndDsgSSBnb3QgYW4g
ZXJyb3I8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7IFsgwqA0NTcuNTE3MDA0XSBsb29wMDogZGV0ZWN0
ZWQgY2FwYWNpdHkgY2hhbmdlIGZyb20gMCB0byAzODU4NDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
IFBhcnNpbmcgY29uZmlnIGZyb20gL3hlbi9yZWRfY29uZmlnLmNmZzxicj4NCiZndDvCoCDCoCDC
oCZndDsgL3hlbi9yZWRfY29uZmlnLmNmZzoyNjogY29uZmlnIHBhcnNpbmcgZXJyb3IgbmVhciBg
LWNvbG9ycyYjMzk7OiBsZXhpY2FsIGVycm9yPGJyPg0KJmd0O8KgIMKgIMKgJmd0OyB3YXJuaW5n
OiBDb25maWcgZmlsZSBsb29rcyBsaWtlIGl0IGNvbnRhaW5zIFB5dGhvbiBjb2RlLjxicj4NCiZn
dDvCoCDCoCDCoCZndDsgd2FybmluZzogwqBBcmJpdHJhcnkgUHl0aG9uIGlzIG5vIGxvbmdlciBz
dXBwb3J0ZWQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0OyB3YXJuaW5nOiDCoFNlZSA8YSBocmVmPSJo
dHRwczovL3dpa2kueGVuLm9yZy93aWtpL1B5dGhvbkluWGxDb25maWciIHJlbD0ibm9yZWZlcnJl
ciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vd2lraS54ZW4ub3JnL3dpa2kvUHl0aG9uSW5YbENv
bmZpZzwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vd2lraS54ZW4ub3JnL3dpa2kvUHl0aG9uSW5Y
bENvbmZpZyIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly93aWtpLnhl
bi5vcmcvd2lraS9QeXRob25JblhsQ29uZmlnPC9hPiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8v
d2lraS54ZW4ub3JnL3dpa2kvUHl0aG9uSW5YbENvbmZpZyIgcmVsPSJub3JlZmVycmVyIiB0YXJn
ZXQ9Il9ibGFuayI+aHR0cHM6Ly93aWtpLnhlbi5vcmcvd2lraS9QeXRob25JblhsQ29uZmlnPC9h
PiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly93aWtpLnhlbi5vcmcvd2lraS9QeXRob25JblhsQ29uZmln
IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3dpa2kueGVuLm9yZy93
aWtpL1B5dGhvbkluWGxDb25maWc8L2E+Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7IEZh
aWxlZCB0byBwYXJzZSBjb25maWc6IEludmFsaWQgYXJndW1lbnQ8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7IFNvIHRoaXMgaXMgYSBxdWVzdGlvbi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7IElzIGl0IHBv
c3NpYmxlIHRvIGFzc2lnbiBhIGNvbG9yIG1vZGUgZm9yIHRoZSBEb21VIGJ5IGNvbmZpZyBmaWxl
ID88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7IElmIHNvLCB3aGF0IHN0cmluZyBzaG91bGQgSSB1c2U/
PGJyPg0KJmd0O8KgIMKgIMKgUGxlYXNlLCBhbHdheXMgcmVmZXIgdG8gdGhlIHJlbGV2YW50IGRv
Y3VtZW50YXRpb24uIEluIHRoaXMgY2FzZSwgZm9yIHhsLmNmZzo8YnI+DQomZ3Q7wqAgwqAgwqA8
YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQu
MTcvZG9jcy9tYW4veGwuY2ZnLjUucG9kLmluI0wyODkwIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdl
dD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNl
XzQuMTcvZG9jcy9tYW4veGwuY2ZnLjUucG9kLmluI0wyODkwPC9hPiAmbHQ7PGEgaHJlZj0iaHR0
cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE3L2RvY3MvbWFu
L3hsLmNmZy41LnBvZC5pbiNMMjg5MCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+
aHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE3L2RvY3Mv
bWFuL3hsLmNmZy41LnBvZC5pbiNMMjg5MDwvYT4mZ3Q7PGJyPg0KJmd0OyA8YnI+DQomZ3Q7wqAg
wqAgwqB+TWljaGFsPGJyPg0KJmd0OyA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0OyBSZWdhcmRzLDxicj4NCiZndDvCoCDCoCDCoCZndDsgT2xlZzxicj4NCiZndDvC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7INGH0YIsIDExINC80LDRjyAyMDIz4oCv
0LMuINCyIDEzOjMyLCBPbGVnIE5pa2l0ZW5rbyAmbHQ7PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3
b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4g
Jmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9
Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVm
PSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdv
b2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBn
bWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyZn
dDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqBIaSBNaWNoYWwsPGJyPg0KJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoFRoYW5rcy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqBUaGlzIGNvbXBpbGF0
aW9uIHByZXZpb3VzbHkgaGFkIGEgbmFtZSBDT05GSUdfQ09MT1JJTkcuPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgSXQgbWl4ZWQgbWUgdXAuPGJyPg0KJmd0O8KgIMKgIMKgJmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoFJlZ2FyZHMsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgT2xlZzxicj4NCiZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqDRh9GCLCAxMSDQvNCw0Y8gMjAyM+KAr9CzLiDQsiAxMzoxNSwgTWljaGFsIE9yemVsICZs
dDs8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5t
aWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFs
Lm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4m
Z3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJn
ZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0i
bWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVs
QGFtZC5jb208L2E+Jmd0OyZndDsmZ3Q7Ojxicj4NCiZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqBIaSBPbGVnLDxicj4NCiZndDvCoCDCoCDCoCZndDs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqBPbiAxMS8wNS8yMDIzIDEyOjAyLCBP
bGVnIE5pa2l0ZW5rbyB3cm90ZTo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7PGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCZndDsgSGVsbG8sPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDsgVGhhbmtz
IFN0ZWZhbm8uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0OyBUaGVuIHRo
ZSBuZXh0IHF1ZXN0aW9uLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDsg
SSBjbG9uZWQgeGVuIHJlcG8gZnJvbSB4aWxpbnggc2l0ZSA8YSBocmVmPSJodHRwczovL2dpdGh1
Yi5jb20vWGlsaW54L3hlbi5naXQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0
dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuLmdpdDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8v
Z2l0aHViLmNvbS9YaWxpbngveGVuLmdpdCIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFu
ayI+aHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4uZ2l0PC9hPiZndDsgJmx0OzxhIGhyZWY9
Imh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuLmdpdCIgcmVsPSJub3JlZmVycmVyIiB0YXJn
ZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4uZ2l0PC9hPiAmbHQ7PGEg
aHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4uZ2l0IiByZWw9Im5vcmVmZXJyZXIi
IHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20vWGlsaW54L3hlbi5naXQ8L2E+Jmd0
OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuLmdpdCIgcmVs
PSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94
ZW4uZ2l0PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4uZ2l0
IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20vWGls
aW54L3hlbi5naXQ8L2E+Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlu
eC94ZW4uZ2l0IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1
Yi5jb20vWGlsaW54L3hlbi5naXQ8L2E+ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20v
WGlsaW54L3hlbi5naXQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
Z2l0aHViLmNvbS9YaWxpbngveGVuLmdpdDwvYT4mZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7IEkgbWFuYWdlZCB0byBidWlsZCBhIHhsbnhfcmViYXNl
XzQuMTcgYnJhbmNoIGluIG15IGVudmlyb25tZW50Ljxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCZndDsgSSBkaWQgaXQgd2l0aG91dCBjb2xvcmluZyBmaXJzdC4gSSBkaWQgbm90
IGZpbmQgYW55IGNvbG9yIGZvb3RwcmludHMgYXQgdGhpcyBicmFuY2guPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0OyBJIHJlYWxpemVkIGNvbG9yaW5nIGlzIG5vdCBpbiB0
aGUgeGxueF9yZWJhc2VfNC4xNyBicmFuY2ggeWV0Ljxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoFRoaXMgaXMgbm90IHRydWUuIENhY2hlIGNvbG9yaW5nIGlzIGluIHhsbnhfcmVi
YXNlXzQuMTcuIFBsZWFzZSBzZWUgdGhlIGRvY3M6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgPGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54
X3JlYmFzZV80LjE3L2RvY3MvbWlzYy9hcm0vY2FjaGUtY29sb3JpbmcucnN0IiByZWw9Im5vcmVm
> > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
> >
> > It describes the feature and documents the required properties.
> >
> > ~Michal
> >
> > > On Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
> > > (twice a year) is tested with cache coloring enabled. The last Petalinux
> > > release is 2023.1 and the kernel used is this:
> > > https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
> > > On Tue, 9 May 2023, Oleg Nikitenko wrote:
> > > > Hello guys,
> > > >
> > > > I have a couple more questions.
> > > > Have you ever run Xen with cache coloring on a Zynq UltraScale+ MPSoC zcu102 (xczu15eg)?
> > > > When did you last run Xen with cache coloring?
> > > > What kernel version did you use for Dom0 the last time you ran Xen with cache coloring?
> > > >
> > > > Regards,
> > > > Oleg
> > > > On Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > Hi Michal,
> > > > >
> > > > > Thanks.
> > > > >
> > > > > Regards,
> > > > > Oleg
> > > > > On Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > > > Hi Oleg,
> > > > > >
> > > > > > Replying, so that you do not need to wait for Stefano.
> > > > > >
> > > > > > On 05/05/2023 10:28, Oleg Nikitenko wrote:
> > > > > > > Hello Stefano,
> > > > > > >
> > > > > > > I would like to try the Xen cache color property from this repo:
> > > > > > > https://xenbits.xen.org/git-http/xen.git
> > > > > > > Could you tell me what branch I should use?
> > > > > > The cache coloring feature is not part of the upstream tree and it is still under review.
> > > > > > You can only find it integrated in the Xilinx Xen tree.
> > > > > >
> > > > > > ~Michal
> > > > > >
> > > > > > > Regards,
> > > > > > > Oleg
> > > > > > > On Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > I am familiar with the zcu102, but I don't know how you could possibly
> > > > > > > > generate an SError.
> > > > > > > >
> > > > > > > > I suggest trying ImageBuilder [1] to generate the boot configuration as
> > > > > > > > a test, because that is known to work well for zcu102.
> > > > > > > >
> > > > > > > > [1] https://gitlab.com/xen-project/imagebuilder
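[For context on the suggestion above: ImageBuilder is driven by a plain-text config file of shell-style variables. The fragment below is only a rough sketch of its shape for a zcu102-style board; every file name, address, and command-line value is a placeholder illustration, not a configuration taken from this thread. See the ImageBuilder README for the authoritative option list.]

```sh
# Hypothetical ImageBuilder config sketch; all paths/values are placeholders.
MEMORY_START="0x0"          # start of DDR visible to the boot script
MEMORY_END="0x80000000"     # end of DDR (2 GiB here, as an assumption)

XEN="xen"                                         # Xen hypervisor binary
XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1G"  # example Xen arguments

DOM0_KERNEL="Image"                  # Dom0 Linux kernel image
DOM0_RAMDISK="dom0-rootfs.cpio.gz"   # Dom0 initramfs

NUM_DOMUS=0          # no DomUs in this minimal sketch

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

The boot script would then be generated with an invocation along the lines of `bash ./scripts/uboot-script-gen -c config -d . -t tftp`, which emits a `boot.scr` that U-Boot can run.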
> > > > > > > > On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > > Hello Stefano,
> > > > > > > > >
> > > > > > > > > Thanks for the clarification.
> > > > > > > > > We use neither ImageBuilder nor a u-boot boot script.
> > > > > > > > > The model is zcu102-compatible.
> > > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > > O.
> > > > > > > > > On Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > > > This is interesting. Are you using Xilinx hardware by any chance?
> > > > > > > > > > If so, which board?
> > > > > > > > > >
> > > > > > > > > > Are you using ImageBuilder to generate your boot.scr boot script?
> > > > > > > > > > If so, could you please post your ImageBuilder config file? If not,
> > > > > > > > > > can you post the source of your u-boot boot script?
> > > > > > > > > >
> > > > > > > > > > SErrors are supposed to be related to a hardware failure of some
> > > > > > > > > > kind. You are not supposed to be able to trigger an SError easily
> > > > > > > > > > by "mistake". I have not seen SErrors due to wrong cache coloring
> > > > > > > > > > configurations on any Xilinx board before.
> > > > > > > > > >
> > > > > > > > > > The differences between Xen with and without cache coloring from a
> > > > > > > > > > hardware perspective are:
> > > > > > > > > >
> > > > > > > > > > - With cache coloring, the SMMU is enabled and does address
> > > > > > > > > >   translations even for dom0. Without cache coloring the SMMU could
> > > > > > > > > >   be disabled, and if enabled, the SMMU doesn't do any address
> > > > > > > > > >   translations for Dom0. If there is a hardware failure related to
> > > > > > > > > >   SMMU address translation, it could only trigger with cache
> > > > > > > > > >   coloring. This would be my normal suggestion for you to explore,
> > > > > > > > > >   but the failure happens too early, before any DMA-capable device
> > > > > > > > > >   is programmed. So I don't think this can be the issue.
> > > > > > > > > >
> > > > > > > > > > - With cache coloring, the memory allocation is very different, so
> > > > > > > > > >   you'll end up using different DDR regions for Dom0. So if your DDR
> > > > > > > > > >   is defective, you might only see a failure with cache coloring
> > > > > > > > > >   enabled because you end up using different regions.
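[The "different DDR regions" point above follows from the coloring arithmetic: the number of available colors is the LLC way size divided by the page size, and a page's color is determined by the physical-address bits just above the page offset. A small sketch, assuming the commonly cited ZynqMP Cortex-A53 L2 parameters (1 MiB, 16-way, 4 KiB pages) rather than figures from this thread:]

```sh
# Cache-coloring arithmetic sketch; the LLC size/associativity below are
# assumptions for a ZynqMP-class A53 cluster, not values from this thread.
llc_size=$((1 << 20))   # 1 MiB shared L2 (assumption)
llc_ways=16             # 16-way set-associative (assumption)
page_size=4096          # 4 KiB pages

way_size=$((llc_size / llc_ways))      # bytes covered by one cache way
num_colors=$((way_size / page_size))   # distinct page colors available

echo "way_size=$way_size num_colors=$num_colors"   # way_size=65536 num_colors=16

# A page's color comes from the address bits just above the page offset,
# so two pages way_size (64 KiB) apart share a color and contend in the LLC:
echo $(( (0x00000 / page_size) % num_colors ))   # 0
echo $(( (0x10000 / page_size) % num_colors ))   # 0
```

With 16 colors, giving Dom0 only a subset of them confines it to a strided subset of physical page frames, which is why a colored build touches different DDR lines than an uncolored one.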
> > > > > > > > > > On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > > > > Hi Stefano,
> > > > > > > > > > >
> > > > > > > > > > > Thank you.
> > > > > > > > > > > If I build Xen without color support, this error does not occur.
> > > > > > > > > > > All the domains boot well.
> > > > > > > > > > > Hence it cannot be a hardware issue.
> > > > > > > > > > > The panic arrives during unpacking of the rootfs.
> > > > > > > > > > > Here I attached the Xen/Dom0 boot log without color.
> > > > > > > > > > > The highlighted strings are printed exactly after the place where
> > > > > > > > > > > the panic first arrived.
> > > > > > > > > > >
> > > > > > > > > > >  Xen 4.16.1-pre
> > > > > > > > > > > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> > > > > > > > > > > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
> > > > > > > > > > > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
> > > > > > > > > > > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
> > > > > > > > > > > (XEN) 64-bit Execution:
> > > > > > > > > > > (XEN)   Processor Features: 0000000000002222 0000000000000000
> > > > > > > > > > > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> > > > > > > > > > > (XEN)     Extensions: FloatingPoint AdvancedSIMD
> > > > > > > > > > > (XEN)   Debug Features: 0000000010305106 0000000000000000
> > > > > > > > > > > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> > > > > > > > > > > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
> > > > > > > > > > > (XEN)   ISA Features: 0000000000011120 0000000000000000
> > > > > > > > > > > (XEN) 32-bit Execution:
> > > > > > > > > > > (XEN)   Processor Features: 0000000000000131:0000000000011011
> > > > > > > > > > > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
> > > > > > > > > > > (XEN)     Extensions: GenericTimer Security
> > > > > > > > > > > (XEN)   Debug Features: 0000000003010066
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgQXV4aWxpYXJ5IEZlYXR1cmVzOiAwMDAwMDAwMDAw
MDAwMDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCBNZW1v
cnkgTW9kZWwgRmVhdHVyZXM6IDAwMDAwMDAwMTAyMDExMDUgMDAwMDAwMDA0MDAwMDAwMDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAgwqAgwqAgwqAgwqAgwqAg
wqAgwqAgwqAgwqAgwqAgwqAgwqAwMDAwMDAwMDAxMjYwMDAwIDAwMDAwMDAwMDIxMDIyMTE8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIElTQSBGZWF0dXJlczog
MDAwMDAwMDAwMjEwMTExMCAwMDAwMDAwMDEzMTEyMTExIDAwMDAwMDAwMjEyMzIwNDI8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKgIMKgIMKgIMKgIMKg
IMKgIDAwMDAwMDAwMDExMTIxMzEgMDAwMDAwMDAwMDAxMTE0MiAwMDAwMDAwMDAwMDExMTIxPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBVc2luZyBTTUMgQ2FsbGlu
ZyBDb252ZW50aW9uIHYxLjI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIFVzaW5nIFBTQ0kgdjEuMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgU01QOiBBbGxvd2luZyA0IENQVXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IChYRU4pIEdlbmVyaWMgVGltZXIgSVJROiBwaHlzPTMwIGh5cD0yNiB2aXJ0PTI3IEZy
ZXE6IDEwMDAwMCBLSHo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4p
IEdJQ3YyIGluaXRpYWxpemF0aW9uOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgKFhFTikgwqAgwqAgwqAgwqAgZ2ljX2Rpc3RfYWRkcj0wMDAwMDAwMGY5MDEwMDAwPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSDCoCDCoCDCoCDCoCBnaWNfY3B1
X2FkZHI9MDAwMDAwMDBmOTAyMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgKFhFTikgwqAgwqAgwqAgwqAgZ2ljX2h5cF9hZGRyPTAwMDAwMDAwZjkwNDAwMDA8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIMKgIMKgIMKgIMKgIGdpY192Y3B1
X2FkZHI9MDAwMDAwMDBmOTA2MDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgKFhFTikgwqAgwqAgwqAgwqAgZ2ljX21haW50ZW5hbmNlX2lycT0yNTxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgR0lDdjI6IEFkanVzdGluZyBDUFUgaW50ZXJm
YWNlIGJhc2UgdG8gMHhmOTAyZjAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgKFhFTikgR0lDdjI6IDE5MiBsaW5lcywgNCBjcHVzLCBzZWN1cmUgKElJRCAwMjAwMTQzYiku
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBVc2luZyBzY2hlZHVs
ZXI6IG51bGwgU2NoZWR1bGVyIChudWxsKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgSW5pdGlhbGl6aW5nIG51bGwgc2NoZWR1bGVyPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBXQVJOSU5HOiBUaGlzIGlzIGV4cGVyaW1lbnRhbCBz
b2Z0d2FyZSBpbiBkZXZlbG9wbWVudC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IChYRU4pIFVzZSBhdCB5b3VyIG93biByaXNrLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgKFhFTikgQWxsb2NhdGVkIGNvbnNvbGUgcmluZyBvZiAzMiBLaUIuPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBDUFUwOiBHdWVzdCBhdG9taWNz
IHdpbGwgdHJ5IDEyIHRpbWVzIGJlZm9yZSBwYXVzaW5nIHRoZSBkb21haW48YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEJyaW5naW5nIHVwIENQVTE8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIENQVTE6IEd1ZXN0IGF0b21pY3Mgd2ls
bCB0cnkgMTMgdGltZXMgYmVmb3JlIHBhdXNpbmcgdGhlIGRvbWFpbjxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQ1BVIDEgYm9vdGVkLjxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQnJpbmdpbmcgdXAgQ1BVMjxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQ1BVMjogR3Vlc3QgYXRvbWljcyB3aWxsIHRy
eSAxMyB0aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9tYWluPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBDUFUgMiBib290ZWQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBCcmluZ2luZyB1cCBDUFUzPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBDUFUzOiBHdWVzdCBhdG9taWNzIHdpbGwgdHJ5IDEz
IHRpbWVzIGJlZm9yZSBwYXVzaW5nIHRoZSBkb21haW48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IChYRU4pIEJyb3VnaHQgdXAgNCBDUFVzPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBDUFUgMyBib290ZWQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBzbW11OiAvYXhpL3NtbXVAZmQ4MDAwMDA6IHByb2Jpbmcg
aGFyZHdhcmUgY29uZmlndXJhdGlvbi4uLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgc21tdTogL2F4aS9zbW11QGZkODAwMDAwOiBTTU1VdjIgd2l0aDo8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIHNtbXU6IC9heGkvc21tdUBmZDgw
MDAwMDogc3RhZ2UgMiB0cmFuc2xhdGlvbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgc21tdTogL2F4aS9zbW11QGZkODAwMDAwOiBzdHJlYW0gbWF0Y2hpbmcgd2l0
aCA0OCByZWdpc3RlciBncm91cHMsIG1hc2sgMHg3ZmZmJmx0OzImZ3Q7c21tdTo8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAvYXhp
L3NtbXVAZmQ4MDAwMDA6IDE2IGNvbnRleHQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqBiYW5rcyAoMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgc3RhZ2UtMiBv
bmx5KTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgc21tdTogL2F4
aS9zbW11QGZkODAwMDAwOiBTdGFnZS0yOiA0OC1iaXQgSVBBIC0mZ3Q7IDQ4LWJpdCBQQTxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgc21tdTogL2F4aS9zbW11QGZk
ODAwMDAwOiByZWdpc3RlcmVkIDI5IG1hc3RlciBkZXZpY2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5hYmxlZDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgwqAtIERvbTAgbW9kZTogUmVsYXhl
ZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgUDJNOiA0MC1iaXQg
SVBBIHdpdGggNDAtYml0IFBBIGFuZCA4LWJpdCBWTUlEPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyAoWEVOKSBQMk06IDMgbGV2ZWxzIHdpdGggb3JkZXItMSByb290LCBWVENS
IDB4MDAwMDAwMDA4MDAyMzU1ODxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
KFhFTikgU2NoZWR1bGluZyBncmFudWxhcml0eTogY3B1LCAxIENQVSBwZXIgc2NoZWQtcmVzb3Vy
Y2U8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIGFsdGVybmF0aXZl
czogUGF0Y2hpbmcgd2l0aCBhbHQgdGFibGUgMDAwMDAwMDAwMDJjYzVjOCAtJmd0OyAwMDAwMDAw
MDAwMmNjYjJjPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSAqKiog
TE9BRElORyBET01BSU4gMCAqKio8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IChYRU4pIExvYWRpbmcgZDAga2VybmVsIGZyb20gYm9vdCBtb2R1bGUgQCAwMDAwMDAwMDAxMDAw
MDAwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBMb2FkaW5nIHJh
bWRpc2sgZnJvbSBib290IG1vZHVsZSBAIDAwMDAwMDAwMDIwMDAwMDA8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEFsbG9jYXRpbmcgMToxIG1hcHBpbmdzIHRvdGFs
bGluZyAxNjAwTUIgZm9yIGRvbTA6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyAoWEVOKSBCQU5LWzBdIDB4MDAwMDAwMTAwMDAwMDAtMHgwMDAwMDAyMDAwMDAwMCAoMjU2TUIp
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBCQU5LWzFdIDB4MDAw
MDAwMjQwMDAwMDAtMHgwMDAwMDAyODAwMDAwMCAoNjRNQik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEJBTktbMl0gMHgwMDAwMDAzMDAwMDAwMC0weDAwMDAwMDgw
MDAwMDAwICgxMjgwTUIpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVO
KSBHcmFudCB0YWJsZSByYW5nZTogMHgwMDAwMDAwMGUwMDAwMC0weDAwMDAwMDAwZTQwMDAwPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBzbW11OiAvYXhpL3NtbXVA
ZmQ4MDAwMDA6IGQwOiBwMm1hZGRyIDB4MDAwMDAwMDg3YmY5NDAwMDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgQWxsb2NhdGluZyBQUEkgMTYgZm9yIGV2ZW50IGNo
YW5uZWwgaW50ZXJydXB0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVO
KSBFeHRlbmRlZCByZWdpb24gMDogMHg4MTIwMDAwMC0mZ3Q7MHhhMDAwMDAwMDxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDE6IDB4YjEy
MDAwMDAtJmd0OzB4YzAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiAyOiAweGM4MDAwMDAwLSZndDsweGUwMDAwMDAwPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBFeHRlbmRlZCByZWdpb24gMzog
MHhmMDAwMDAwMC0mZ3Q7MHhmOTAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgRXh0ZW5kZWQgcmVnaW9uIDQ6IDB4MTAwMDAwMDAwLSZndDsweDYwMDAwMDAw
MDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgRXh0ZW5kZWQgcmVn
aW9uIDU6IDB4ODgwMDAwMDAwLSZndDsweDgwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIEV4dGVuZGVkIHJlZ2lvbiA2OiAweDgwMDEwMDAwMDAtJmd0
OzB4MTAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4p
IExvYWRpbmcgekltYWdlIGZyb20gMDAwMDAwMDAwMTAwMDAwMCB0byAwMDAwMDAwMDEwMDAwMDAw
LTAwMDAwMDAwMTBlNDEwMDg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChY
RU4pIExvYWRpbmcgZDAgaW5pdHJkIGZyb20gMDAwMDAwMDAwMjAwMDAwMCB0byAweDAwMDAwMDAw
MTM2MDAwMDAtMHgwMDAwMDAwMDFmZjNhNjE3PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyAoWEVOKSBMb2FkaW5nIGQwIERUQiB0byAweDAwMDAwMDAwMTM0MDAwMDAtMHgwMDAw
MDAwMDEzNDBjYmRjPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBJ
bml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQgc2V0IGF0IDB4NDAwMCBwYWdlcy48YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIFN0ZC4gTG9nbGV2ZWw6IEFs
bDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgR3Vlc3QgTG9nbGV2
ZWw6IEFsbDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgKioqIFNl
cmlhbCBpbnB1dCB0byBET00wICh0eXBlICYjMzk7Q1RSTC1hJiMzOTsgdGhyZWUgdGltZXMgdG8g
c3dpdGNoIGlucHV0KTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikg
bnVsbC5jOjM1MzogMCAmbHQ7LS0gZDB2MDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgKFhFTikgRnJlZWQgMzU2a0IgaW5pdCBtZW1vcnkuPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBkMHYwIFVuaGFuZGxlZCBTTUMvSFZDOiAweDg0MDAwMDUw
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBkMHYwIFVuaGFuZGxl
ZCBTTUMvSFZDOiAweDg2MDBmZjAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyAoWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZm
ZiB0byBJQ0FDVElWRVI0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVO
KSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJ
Q0FDVElWRVI4PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBkMHYw
OiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElW
RVIxMjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgKFhFTikgZDB2MDogdkdJ
Q0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMTY8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IChYRU4pIGQwdjA6IHZHSUNEOiB1
bmhhbmRsZWQgd29yZCB3cml0ZSAweDAwMDAwMGZmZmZmZmZmIHRvIElDQUNUSVZFUjIwPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAoWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5k
bGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIwPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIEJvb3RpbmcgTGludXgg
b24gcGh5c2ljYWwgQ1BVIDB4MDAwMDAwMDAwMCBbMHg0MTBmZDAzNF08YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gTGludXggdmVyc2lvbiA1LjE1
LjcyLXhpbGlueC12MjAyMi4xIChvZS11c2VyQG9lLWhvc3QpIChhYXJjaDY0LXBvcnRhYmxlLWxp
bnV4LWdjYyAoR0NDKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoDExLjMuMCwgR05VIGxkIChHTlU8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqBCaW51dGlscyk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IDIuMzguMjAyMjA3MDgpICMxIFNNUCBUdWUgRmViIDIxIDA1OjQ3OjU0IFVUQyAyMDIzPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE1hY2hp
bmUgbW9kZWw6IEQxNCBWaXBlciBCb2FyZCAtIFdoaXRlIFVuaXQ8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gWGVuIDQuMTYgc3VwcG9ydCBmb3Vu
ZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBa
b25lIHJhbmdlczo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjAwMDAwMF0gwqAgRE1BIMKgIMKgIMKgW21lbSAweDAwMDAwMDAwMTAwMDAwMDAtMHgwMDAwMDAw
MDdmZmZmZmZmXTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
MDAwMDAwXSDCoCBETUEzMiDCoCDCoGVtcHR5PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIMKgIE5vcm1hbCDCoCBlbXB0eTxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBNb3ZhYmxlIHpvbmUgc3Rh
cnQgZm9yIGVhY2ggbm9kZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDAuMDAwMDAwXSBFYXJseSBtZW1vcnkgbm9kZSByYW5nZXM8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gwqAgbm9kZSDCoCAwOiBbbWVtIDB4
MDAwMDAwMDAxMDAwMDAwMC0weDAwMDAwMDAwMWZmZmZmZmZdPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIMKgIG5vZGUgwqAgMDogW21lbSAweDAw
MDAwMDAwMjIwMDAwMDAtMHgwMDAwMDAwMDIyMTQ3ZmZmXTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSDCoCBub2RlIMKgIDA6IFttZW0gMHgwMDAw
MDAwMDIyMjAwMDAwLTB4MDAwMDAwMDAyMjM0N2ZmZl08YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gwqAgbm9kZSDCoCAwOiBbbWVtIDB4MDAwMDAw
MDAyNDAwMDAwMC0weDAwMDAwMDAwMjdmZmZmZmZdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIMKgIG5vZGUgwqAgMDogW21lbSAweDAwMDAwMDAw
MzAwMDAwMDAtMHgwMDAwMDAwMDdmZmZmZmZmXTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBJbml0bWVtIHNldHVwIG5vZGUgMCBbbWVtIDB4MDAw
MDAwMDAxMDAwMDAwMC0weDAwMDAwMDAwN2ZmZmZmZmZdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE9uIG5vZGUgMCwgem9uZSBETUE6IDgxOTIg
cGFnZXMgaW4gdW5hdmFpbGFibGUgcmFuZ2VzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE9uIG5vZGUgMCwgem9uZSBETUE6IDE4NCBwYWdlcyBp
biB1bmF2YWlsYWJsZSByYW5nZXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAwLjAwMDAwMF0gT24gbm9kZSAwLCB6b25lIERNQTogNzM1MiBwYWdlcyBpbiB1bmF2
YWlsYWJsZSByYW5nZXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAwLjAwMDAwMF0gY21hOiBSZXNlcnZlZCAyNTYgTWlCIGF0IDB4MDAwMDAwMDA2ZTAwMDAwMDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBwc2Np
OiBwcm9iaW5nIGZvciBjb25kdWl0IG1ldGhvZCBmcm9tIERULjxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBwc2NpOiBQU0NJdjEuMSBkZXRlY3Rl
ZCBpbiBmaXJtd2FyZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAwLjAwMDAwMF0gcHNjaTogVXNpbmcgc3RhbmRhcmQgUFNDSSB2MC4yIGZ1bmN0aW9uIElEczxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBwc2Np
OiBUcnVzdGVkIE9TIG1pZ3JhdGlvbiBub3QgcmVxdWlyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcHNjaTogU01DIENhbGxpbmcgQ29udmVu
dGlvbiB2MS4xPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4w
MDAwMDBdIHBlcmNwdTogRW1iZWRkZWQgMTYgcGFnZXMvY3B1IHMzMjc5MiByMCBkMzI3NDQgdTY1
NTM2PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBd
IERldGVjdGVkIFZJUFQgSS1jYWNoZSBvbiBDUFUwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIENQVSBmZWF0dXJlczoga2VybmVsIHBhZ2UgdGFi
bGUgaXNvbGF0aW9uIGZvcmNlZCBPTiBieSBLQVNMUjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDAuMDAwMDAwXSBDUFUgZmVhdHVyZXM6IGRldGVjdGVkOiBLZXJu
ZWwgcGFnZSB0YWJsZSBpc29sYXRpb24gKEtQVEkpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIEJ1aWx0IDEgem9uZWxpc3RzLCBtb2JpbGl0eSBn
cm91cGluZyBvbi7CoCBUb3RhbCBwYWdlczogNDAzODQ1PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIEtlcm5lbCBjb21tYW5kIGxpbmU6IGNvbnNv
bGU9aHZjMCBlYXJseWNvbj14ZW4gZWFybHlwcmludGs9eGVuIGNsa19pZ25vcmVfdW51c2VkIGZp
cHM9MTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoHJvb3Q9L2Rldi9yYW0wPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
bWF4Y3B1cz0yPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4w
MDAwMDBdIFVua25vd24ga2VybmVsIGNvbW1hbmQgbGluZSBwYXJhbWV0ZXJzICZxdW90O2Vhcmx5
cHJpbnRrPXhlbiBmaXBzPTEmcXVvdDssIHdpbGwgYmUgcGFzc2VkIHRvIHVzZXI8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBzcGFj
ZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0g
RGVudHJ5IGNhY2hlIGhhc2ggdGFibGUgZW50cmllczogMjYyMTQ0IChvcmRlcjogOSwgMjA5NzE1
MiBieXRlcywgbGluZWFyKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDAuMDAwMDAwXSBJbm9kZS1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDEzMTA3MiAob3Jk
ZXI6IDgsIDEwNDg1NzYgYnl0ZXMsIGxpbmVhcik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gbWVtIGF1dG8taW5pdDogc3RhY2s6b2ZmLCBoZWFw
IGFsbG9jOm9uLCBoZWFwIGZyZWU6b248YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjAwMDAwMF0gbWVtIGF1dG8taW5pdDogY2xlYXJpbmcgc3lzdGVtIG1lbW9y
eSBtYXkgdGFrZSBzb21lIHRpbWUuLi48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAwLjAwMDAwMF0gTWVtb3J5OiAxMTIxOTM2Sy8xNjQxMDI0SyBhdmFpbGFibGUg
KDk3MjhLIGtlcm5lbCBjb2RlLCA4MzZLIHJ3ZGF0YSwgMjM5Nksgcm9kYXRhLCAxNTM2Szxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oGluaXQsIDI2MksgYnNzLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoDI1Njk0NEsg
cmVzZXJ2ZWQsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyAyNjIxNDRLIGNt
YS1yZXNlcnZlZCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAw
LjAwMDAwMF0gU0xVQjogSFdhbGlnbj02NCwgT3JkZXI9MC0zLCBNaW5PYmplY3RzPTAsIENQVXM9
MiwgTm9kZXM9MTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDAu
MDAwMDAwXSByY3U6IEhpZXJhcmNoaWNhbCBSQ1UgaW1wbGVtZW50YXRpb24uPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHJjdTogUkNVIGV2ZW50
IHRyYWNpbmcgaXMgZW5hYmxlZC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAwLjAwMDAwMF0gcmN1OiBSQ1UgcmVzdHJpY3RpbmcgQ1BVcyBmcm9tIE5SX0NQVVM9
OCB0byBucl9jcHVfaWRzPTIuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMC4wMDAwMDBdIHJjdTogUkNVIGNhbGN1bGF0ZWQgdmFsdWUgb2Ygc2NoZWR1bGVyLWVu
bGlzdG1lbnQgZGVsYXkgaXMgMjUgamlmZmllcy48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gcmN1OiBBZGp1c3RpbmcgZ2VvbWV0cnkgZm9yIHJj
dV9mYW5vdXRfbGVhZj0xNiwgbnJfY3B1X2lkcz0yPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIE5SX0lSUVM6IDY0LCBucl9pcnFzOiA2NCwgcHJl
YWxsb2NhdGVkIGlycXM6IDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAwLjAwMDAwMF0gUm9vdCBJUlEgaGFuZGxlcjogZ2ljX2hhbmRsZV9pcnE8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAwLjAwMDAwMF0gYXJjaF90aW1lcjog
Y3AxNSB0aW1lcihzKSBydW5uaW5nIGF0IDEwMC4wME1IeiAodmlydCkuPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIGNsb2Nrc291cmNlOiBhcmNo
X3N5c19jb3VudGVyOiBtYXNrOiAweGZmZmZmZmZmZmZmZmZmIG1heF9jeWNsZXM6IDB4MTcxMDI0
ZTdlMCw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqBtYXhfaWRsZV9uczogNDQwNzk1MjA1MzE1IG5zPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMC4wMDAwMDBdIHNjaGVkX2Nsb2NrOiA1NiBiaXRz
IGF0IDEwME1IeiwgcmVzb2x1dGlvbiAxMG5zLCB3cmFwcyBldmVyeSA0Mzk4MDQ2NTExMTAwbnM8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
> [    0.000258] Console: colour dummy device 80x25
> [    0.310231] printk: console [hvc0] enabled
> [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> [    0.324851] pid_max: default: 32768 minimum: 301
> [    0.329706] LSM: Security Framework initializing
> [    0.334204] Yama: becoming mindful.
> [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> [    0.354743] xen:grant_table: Grant tables using version 1 layout
> [    0.359132] Grant table initialized
> [    0.362664] xen:events: Using FIFO-based ABI
> [    0.366993] Xen: initializing cpu0
> [    0.370515] rcu: Hierarchical SRCU implementation.
> [    0.375930] smp: Bringing up secondary CPUs ...
> (XEN) null.c:353: 1 <-- d0v1
> (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> [    0.382549] Detected VIPT I-cache on CPU1
> [    0.388712] Xen: initializing cpu1
> [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
> [    0.388829] smp: Brought up 1 node, 2 CPUs
> [    0.406941] SMP: Total of 2 processors activated.
> [    0.411698] CPU features: detected: 32-bit EL0 Support
> [    0.416888] CPU features: detected: CRC32 instructions
> [    0.422121] CPU: All CPU(s) started at EL1
> [    0.426248] alternatives: patching kernel code
> [    0.431424] devtmpfs: initialized
> [    0.441454] KASLR enabled
> [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
> [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
> [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
> [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
> [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
> [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> [    0.519478] audit: initializing netlink subsys (disabled)
> [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
> [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> [    0.545608] ASID allocator initialised with 32768 entries
> [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
> [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
> [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
> [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
> [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
> [    0.636520] DRBG: Continuing without Jitter RNG
> [    0.737187] raid6: neonx8   gen()  2143 MB/s
> [    0.805294] raid6: neonx8   xor()  1589 MB/s
> [    0.873406] raid6: neonx4   gen()  2177 MB/s
> [    0.941499] raid6: neonx4   xor()  1556 MB/s
> [    1.009612] raid6: neonx2   gen()  2072 MB/s
> [    1.077715] raid6: neonx2   xor()  1430 MB/s
> [    1.145834] raid6: neonx1   gen()  1769 MB/s
> [    1.213935] raid6: neonx1   xor()  1214 MB/s
> [    1.282046] raid6: int64x8  gen()  1366 MB/s
> [    1.350132] raid6: int64x8  xor()   773 MB/s
> [    1.418259] raid6: int64x4  gen()  1602 MB/s
> [    1.486349] raid6: int64x4  xor()   851 MB/s
> [    1.554464] raid6: int64x2  gen()  1396 MB/s
> [    1.622561] raid6: int64x2  xor()   744 MB/s
> [    1.690687] raid6: int64x1  gen()  1033 MB/s
> [    1.758770] raid6: int64x1  xor()   517 MB/s
> [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> [    1.767957] raid6: using neon recovery algorithm
> [    1.772824] xen:balloon: Initialising balloon driver
> [    1.778021] iommu: Default domain type: Translated
> [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
> [    1.789149] SCSI subsystem initialized
> [    1.792820] usbcore: registered new interface driver usbfs
> [    1.798254] usbcore: registered new interface driver hub
> [    1.803626] usbcore: registered new device driver usb
> [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> [    1.822903] PTP clock support registered
> [    1.826893] EDAC MC: Ver: 3.0.0
> [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.855907] FPGA manager framework
> [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> [    1.871712] NET: Registered PF_INET protocol family
> [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
> [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
> [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
> [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
> [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> [    1.936834] RPC: Registered named UNIX socket transport module.
> [    1.942342] RPC: Registered udp transport module.
> [    1.947088] RPC: Registered tcp transport module.
> [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> [    1.958334] PCI: CLS 0 bytes, default 64
> [    1.962709] Trying to unpack rootfs image as initramfs...
> [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
> [    1.982863] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
> [    2.021045] NET: Registered PF_ALG protocol family
> [    2.021122] xor: measuring software checksum speed
> [    2.029347]    8regs           :  2366 MB/sec
> [    2.033081]    32regs          :  2802 MB/sec
> [    2.038223]    arm64_neon      :  2320 MB/sec
> [    2.038385] xor: using function: 32regs (2802 MB/sec)
> [    2.043614] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
> [    2.050959] io scheduler mq-deadline registered
> [    2.055521] io scheduler kyber registered
> [    2.068227] xen:xen_evtchn: Event-channel device installed
> [    2.069281] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> [    2.076190] cacheinfo: Unable to detect cache hierarchy for CPU 0
> [    2.085548] brd: module loaded
> [    2.089290] loop: module loaded
> [    2.089341] Invalid max_queues (4), will use default max: 2.
> [    2.094565] tun: Universal TUN/TAP device driver, 1.6
> [    2.098655] xen_netfront: Initialising Xen virtual ethernet driver
> [    2.104156] usbcore: registered new interface driver rtl8150
> [    2.109813] usbcore: registered new interface driver r8152
> [    2.115367] usbcore: registered new interface driver asix
> [    2.120794] usbcore: registered new interface driver ax88179_178a
> [    2.126934] usbcore: registered new interface driver cdc_ether
> [    2.132816] usbcore: registered new interface driver cdc_eem
> [    2.138527] usbcore: registered new interface driver net1080
> [    2.144256] usbcore: registered new interface driver cdc_subset
> [    2.150205] usbcore: registered new interface driver zaurus
> [    2.155837] usbcore: registered new interface driver cdc_ncm
> [    2.161550] usbcore: registered new interface driver r8153_ecm
> [    2.168240] usbcore: registered new interface driver cdc_acm
> [    2.173109] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
> [    2.181358] usbcore: registered new interface driver uas
> [    2.186547] usbcore: registered new interface driver usb-storage
> [    2.192643] usbcore: registered new interface driver ftdi_sio
> [    2.198384] usbserial: USB Serial support registered for FTDI USB Serial Device
> [    2.206118] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
> [    2.215332] i2c_dev: i2c /dev entries driver
> [    2.220467] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> [    2.225923] device-mapper: uevent: version 1.0.3
> [    2.230668] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
> [    2.239315] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller: DEV synps_edac (INTERRUPT)
> [    2.249405] EDAC DEVICE0: Giving out device t
byBtb2R1bGUgenlucW1wLW9jbS1lZGFjIGNvbnRyb2xsZXIgenlucW1wX29jbTogREVWPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgZmY5NjAwMDAubWVtb3J5LWNvbnRyb2xsZXIgKElO
VEVSUlVQVCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjI2
MTcxOV0gc2RoY2k6IFNlY3VyZSBEaWdpdGFsIEhvc3QgQ29udHJvbGxlciBJbnRlcmZhY2UgZHJp
dmVyPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi4yNjc0ODdd
IHNkaGNpOiBDb3B5cmlnaHQoYykgUGllcnJlIE9zc21hbjxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjcxODkwXSBzZGhjaS1wbHRmbTogU0RIQ0kgcGxhdGZv
cm0gYW5kIE9GIGRyaXZlciBoZWxwZXI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAyLjI3ODE1N10gbGVkdHJpZy1jcHU6IHJlZ2lzdGVyZWQgdG8gaW5kaWNhdGUg
YWN0aXZpdHkgb24gQ1BVczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDIuMjgzODE2XSB6eW5xbXBfZmlybXdhcmVfcHJvYmUgUGxhdGZvcm0gTWFuYWdlbWVudCBB
UEkgdjEuMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMjg5
NTU0XSB6eW5xbXBfZmlybXdhcmVfcHJvYmUgVHJ1c3R6b25lIHZlcnNpb24gdjEuMDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuMzI3ODc1XSBzZWN1cmVmdyBz
ZWN1cmVmdzogc2VjdXJlZncgcHJvYmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi4zMjgzMjRdIGFsZzogTm8gdGVzdCBmb3IgeGlsaW54LXp5bnFtcC1hZXMg
KHp5bnFtcC1hZXMpPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
Mi4zMzI1NjNdIHp5bnFtcF9hZXMgZmlybXdhcmU6enlucW1wLWZpcm13YXJlOnp5bnFtcC1hZXM6
IEFFUyBTdWNjZXNzZnVsbHkgUmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDIuMzQxMTgzXSBhbGc6IE5vIHRlc3QgZm9yIHhpbGlueC16eW5xbXAt
cnNhICh6eW5xbXAtcnNhKTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDIuMzQ3NjY3XSByZW1vdGVwcm9jIHJlbW90ZXByb2MwOiBmZjlhMDAwMC5yZjVzczpyNWZf
MCBpcyBhdmFpbGFibGU8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAyLjM1MzAwM10gcmVtb3RlcHJvYyByZW1vdGVwcm9jMTogZmY5YTAwMDAucmY1c3M6cjVmXzEg
aXMgYXZhaWxhYmxlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
Mi4zNjI2MDVdIGZwZ2FfbWFuYWdlciBmcGdhMDogWGlsaW54IFp5bnFNUCBGUEdBIE1hbmFnZXIg
cmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIu
MzY2NTQwXSB2aXBlci14ZW4tcHJveHkgdmlwZXIteGVuLXByb3h5OiBWaXBlciBYZW4gUHJveHkg
cmVnaXN0ZXJlZDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIu
MzcyNTI1XSB2aXBlci12ZHBwIGE0MDAwMDAwLnZkcHA6IERldmljZSBUcmVlIFByb2Jpbmc8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjM3Nzc3OF0gdmlwZXIt
dmRwcCBhNDAwMDAwMC52ZHBwOiBWRFBQIFZlcnNpb246IDEuMy45LjAgSW5mbzogMS41MTIuMTUu
MCBLZXlMZW46IDMyPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
Mi4zODY0MzJdIHZpcGVyLXZkcHAgYTQwMDAwMDAudmRwcDogVW5hYmxlIHRvIHJlZ2lzdGVyIHRh
bXBlciBoYW5kbGVyLiBSZXRyeWluZy4uLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDIuMzk0MDk0XSB2aXBlci12ZHBwLW5ldCBhNTAwMDAwMC52ZHBwX25ldDog
RGV2aWNlIFRyZWUgUHJvYmluZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDIuMzk5ODU0XSB2aXBlci12ZHBwLW5ldCBhNTAwMDAwMC52ZHBwX25ldDogRGV2aWNl
IHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAy
LjQwNTkzMV0gdmlwZXItdmRwcC1zdGF0IGE4MDAwMDAwLnZkcHBfc3RhdDogRGV2aWNlIFRyZWUg
UHJvYmluZzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuNDEy
MDM3XSB2aXBlci12ZHBwLXN0YXQgYTgwMDAwMDAudmRwcF9zdGF0OiBCdWlsZCBwYXJhbWV0ZXJz
OiBWVEkgQ291bnQ6IDUxMiBFdmVudCBDb3VudDogMzI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQyMDg1Nl0gZGVmYXVsdCBwcmVzZXQ8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQyMzc5N10gdmlwZXItdmRwcC1zdGF0
IGE4MDAwMDAwLnZkcHBfc3RhdDogRGV2aWNlIHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQzMDA1NF0gdmlwZXItdmRwcC1ybmcgYWMwMDAw
MDAudmRwcF9ybmc6IERldmljZSBUcmVlIFByb2Jpbmc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQzNTk0OF0gdmlwZXItdmRwcC1ybmcgYWMwMDAwMDAudmRw
cF9ybmc6IERldmljZSByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMi40NDE5NzZdIHZtY3UgZHJpdmVyIGluaXQ8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ0NDkyMl0gVk1DVTogOiAoMjQwOjApIHJlZ2lz
dGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ0NDk1
Nl0gSW4gSzgxIFVwZGF0ZXIgaW5pdDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCDCoDIuNDQ5MDAzXSBwa3RnZW46IFBhY2tldCBHZW5lcmF0b3IgZm9yIHBhY2tldCBw
ZXJmb3JtYW5jZSB0ZXN0aW5nLiBWZXJzaW9uOiAyLjc1PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMi40Njg4MzNdIEluaXRpYWxpemluZyBYRlJNIG5ldGxpbmsg
c29ja2V0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi40Njg5
MDJdIE5FVDogUmVnaXN0ZXJlZCBQRl9QQUNLRVQgcHJvdG9jb2wgZmFtaWx5PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi40NzI3MjldIEJyaWRnZSBmaXJld2Fs
bGluZyByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMi40NzY3ODVdIDgwMjFxOiA4MDIuMVEgVkxBTiBTdXBwb3J0IHYxLjg8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjQ4MTM0MV0gcmVnaXN0ZXJlZCB0YXNr
c3RhdHMgdmVyc2lvbiAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMi40ODYzOTRdIEJ0cmZzIGxvYWRlZCwgY3JjMzJjPWNyYzMyYy1nZW5lcmljLCB6b25lZD1u
bywgZnN2ZXJpdHk9bm88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAyLjUwMzE0NV0gZmYwMTAwMDAuc2VyaWFsOiB0dHlQUzEgYXQgTU1JTyAweGZmMDEwMDAwIChp
cnEgPSAzNiwgYmFzZV9iYXVkID0gNjI1MDAwMCkgaXMgYSB4dWFydHBzPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MDcxMDNdIG9mLWZwZ2EtcmVnaW9uIGZw
Z2EtZnVsbDogRlBHQSBSZWdpb24gcHJvYmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMi41MTI5ODZdIHhpbGlueC16eW5xbXAtZG1hIGZkNTAwMDAwLmRtYS1j
b250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MjAyNjddIHhpbGlueC16eW5xbXAtZG1h
IGZkNTEwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNz
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41MjgyMzldIHhp
bGlueC16eW5xbXAtZG1hIGZkNTIwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZl
ciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMi41MzYxNTJdIHhpbGlueC16eW5xbXAtZG1hIGZkNTMwMDAwLmRtYS1jb250cm9sbGVyOiBa
eW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMi41NDQxNTNdIHhpbGlueC16eW5xbXAtZG1hIGZkNTQwMDAwLmRt
YS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NTIxMjddIHhpbGlueC16eW5xbXAt
ZG1hIGZkNTUwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNj
ZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NjAxNzhd
IHhpbGlueC16eW5xbXAtZG1hIGZmYTgwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRy
aXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMi41Njc5ODddIHhpbGlueC16eW5xbXAtZG1hIGZmYTkwMDAwLmRtYS1jb250cm9sbGVy
OiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41NzYwMThdIHhpbGlueC16eW5xbXAtZG1hIGZmYWEwMDAw
LmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBzdWNjZXNzPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi41ODM4ODldIHhpbGlueC16eW5x
bXAtZG1hIGZmYWIwMDAwLmRtYS1jb250cm9sbGVyOiBaeW5xTVAgRE1BIGRyaXZlciBQcm9iZSBz
dWNjZXNzPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMi45NDYz
NzldIHNwaS1ub3Igc3BpMC4wOiBtdDI1cXU1MTJhICgxMzEwNzIgS2J5dGVzKTxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuOTQ2NDY3XSAyIGZpeGVkLXBhcnRp
dGlvbnMgcGFydGl0aW9ucyBmb3VuZCBvbiBNVEQgZGV2aWNlIHNwaTAuMDxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDIuOTUyMzkzXSBDcmVhdGluZyAyIE1URCBw
YXJ0aXRpb25zIG9uICZxdW90O3NwaTAuMCZxdW90Ozo8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjk1NzIzMV0gMHgwMDAwMDQwMDAwMDAtMHgwMDAwMDgwMDAw
MDAgOiAmcXVvdDtiYW5rIEEmcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgwqAyLjk2MzMzMl0gMHgwMDAwMDAwMDAwMDAtMHgwMDAwMDQwMDAwMDAgOiAmcXVv
dDtiYW5rIEImcXVvdDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAyLjk2ODY5NF0gbWFjYiBmZjBiMDAwMC5ldGhlcm5ldDogTm90IGVuYWJsaW5nIHBhcnRpYWwg
c3RvcmUgYW5kIGZvcndhcmQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAyLjk3NTMzM10gbWFjYiBmZjBiMDAwMC5ldGhlcm5ldCBldGgwOiBDYWRlbmNlIEdFTSBy
ZXYgMHg1MDA3MDEwNiBhdCAweGZmMGIwMDAwIGlycSAyNTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCgxODo0MTpmZTowZjpmZjow
Mik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjk4NDQ3Ml0g
bWFjYiBmZjBjMDAwMC5ldGhlcm5ldDogTm90IGVuYWJsaW5nIHBhcnRpYWwgc3RvcmUgYW5kIGZv
cndhcmQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAyLjk5MjE0
NF0gbWFjYiBmZjBjMDAwMC5ldGhlcm5ldCBldGgxOiBDYWRlbmNlIEdFTSByZXYgMHg1MDA3MDEw
NiBhdCAweGZmMGMwMDAwIGlycSAyNjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCgxODo0MTpmZTowZjpmZjowMyk8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjAwMTA0M10gdmlwZXJfZW5ldCB2
aXBlcl9lbmV0OiBWaXBlciBwb3dlciBHUElPcyBpbml0aWFsaXNlZDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMDA3MzEzXSB2aXBlcl9lbmV0IHZpcGVyX2Vu
ZXQgdm5ldDAgKHVuaW5pdGlhbGl6ZWQpOiBWYWxpZGF0ZSBpbnRlcmZhY2UgUVNHTUlJPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4wMTQ5MTRdIHZpcGVyX2Vu
ZXQgdmlwZXJfZW5ldCB2bmV0MSAodW5pbml0aWFsaXplZCk6IFZhbGlkYXRlIGludGVyZmFjZSBR
U0dNSUk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjAyMjEz
OF0gdmlwZXJfZW5ldCB2aXBlcl9lbmV0IHZuZXQxICh1bmluaXRpYWxpemVkKTogVmFsaWRhdGUg
aW50ZXJmYWNlIHR5cGUgMTg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAzLjAzMDI3NF0gdmlwZXJfZW5ldCB2aXBlcl9lbmV0IHZuZXQyICh1bmluaXRpYWxpemVk
KTogVmFsaWRhdGUgaW50ZXJmYWNlIFFTR01JSTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDMuMDM3Nzg1XSB2aXBlcl9lbmV0IHZpcGVyX2VuZXQgdm5ldDMgKHVu
aW5pdGlhbGl6ZWQpOiBWYWxpZGF0ZSBpbnRlcmZhY2UgUVNHTUlJPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4wNDUzMDFdIHZpcGVyX2VuZXQgdmlwZXJfZW5l
dDogVmlwZXIgZW5ldCByZWdpc3RlcmVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMy4wNTA5NThdIHhpbGlueC1heGlwbW9uIGZmYTAwMDAwLnBlcmYtbW9uaXRv
cjogUHJvYmVkIFhpbGlueCBBUE08YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgwqAzLjA1NzEzNV0geGlsaW54LWF4aXBtb24gZmQwYjAwMDAucGVyZi1tb25pdG9yOiBQ
cm9iZWQgWGlsaW54IEFQTTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDMuMDYzNTM4XSB4aWxpbngtYXhpcG1vbiBmZDQ5MDAwMC5wZXJmLW1vbml0b3I6IFByb2Jl
ZCBYaWxpbnggQVBNPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
My4wNjk5MjBdIHhpbGlueC1heGlwbW9uIGZmYTEwMDAwLnBlcmYtbW9uaXRvcjogUHJvYmVkIFhp
bGlueCBBUE08YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjA5
NzcyOV0gc2k3MHh4OiBwcm9iZSBvZiAyLTAwNDAgZmFpbGVkIHdpdGggZXJyb3IgLTU8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjA5ODA0Ml0gY2Rucy13ZHQg
ZmQ0ZDAwMDAud2F0Y2hkb2c6IFhpbGlueCBXYXRjaGRvZyBUaW1lciB3aXRoIHRpbWVvdXQgNjBz
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4xMDUxMTFdIGNk
bnMtd2R0IGZmMTUwMDAwLndhdGNoZG9nOiBYaWxpbnggV2F0Y2hkb2cgVGltZXIgd2l0aCB0aW1l
b3V0IDEwczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTEy
NDU3XSB2aXBlci10YW1wZXIgdmlwZXItdGFtcGVyOiBEZXZpY2UgcmVnaXN0ZXJlZDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTE3NTkzXSBhY3RpdmVfYmFu
ayBhY3RpdmVfYmFuazogYm9vdCBiYW5rOiAxPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMy4xMjIxODRdIGFjdGl2ZV9iYW5rIGFjdGl2ZV9iYW5rOiBib290IG1v
ZGU6ICgweDAyKSBxc3BpMzI8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsg
wqAgwqAzLjEyODI0N10gdmlwZXItdmRwcCBhNDAwMDAwMC52ZHBwOiBEZXZpY2UgVHJlZSBQcm9i
aW5nPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4xMzM0Mzld
IHZpcGVyLXZkcHAgYTQwMDAwMDAudmRwcDogVkRQUCBWZXJzaW9uOiAxLjMuOS4wIEluZm86IDEu
NTEyLjE1LjAgS2V5TGVuOiAzMjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDMuMTQyMTUxXSB2aXBlci12ZHBwIGE0MDAwMDAwLnZkcHA6IFRhbXBlciBoYW5kbGVy
IHJlZ2lzdGVyZWQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAz
LjE0NzQzOF0gdmlwZXItdmRwcCBhNDAwMDAwMC52ZHBwOiBEZXZpY2UgcmVnaXN0ZXJlZDxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTUzMDA3XSBscGM1NV9s
MiBzcGkxLjA6IHJlZ2lzdGVyZWQgaGFuZGxlciBmb3IgcHJvdG9jb2wgMDxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTU4NTgyXSBscGM1NV91c2VyIGxwYzU1
X3VzZXI6IFRoZSBtYWpvciBudW1iZXIgZm9yIHlvdXIgZGV2aWNlIGlzIDIzNjxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTY1OTc2XSBscGM1NV9sMiBzcGkx
LjA6IHJlZ2lzdGVyZWQgaGFuZGxlciBmb3IgcHJvdG9jb2wgMTxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMTgxOTk5XSBydGMtbHBjNTUgcnRjX2xwYzU1OiBs
cGM1NV9ydGNfZ2V0X3RpbWU6IGJhZCByZXN1bHQ6IDE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjE4Mjg1Nl0gcnRjLWxwYzU1IHJ0Y19scGM1NTogcmVnaXN0
ZXJlZCBhcyBydGMwPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKg
My4xODg2NTZdIGxwYzU1X2wyIHNwaTEuMDogKDIpIG1jdSBzdGlsbCBub3QgcmVhZHk/PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4xOTM3NDRdIGxwYzU1X2wy
IHNwaTEuMDogKDMpIG1jdSBzdGlsbCBub3QgcmVhZHk/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4xOTg4NDhdIGxwYzU1X2wyIHNwaTEuMDogKDQpIG1jdSBz
dGlsbCBub3QgcmVhZHk/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMy4yMDI5MzJdIG1tYzA6IFNESENJIGNvbnRyb2xsZXIgb24gZmYxNjAwMDAubW1jIFtmZjE2
MDAwMC5tbWNdIHVzaW5nIEFETUEgNjQtYml0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIMKgMy4yMTA2ODldIGxwYzU1X2wyIHNwaTEuMDogKDUpIG1jdSBzdGlsbCBu
b3QgcmVhZHk/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4y
MTU2OTRdIGxwYzU1X2wyIHNwaTEuMDogcnggZXJyb3I6IC0xMTA8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjI4NDQzOF0gbW1jMDogbmV3IEhTMjAwIE1NQyBj
YXJkIGF0IGFkZHJlc3MgMDAwMTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
WyDCoCDCoDMuMjg1MTc5XSBtbWNibGswOiBtbWMwOjAwMDEgU0VNMTZHIDE0LjYgR2lCPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4yOTE3ODRdIMKgbW1jYmxr
MDogcDEgcDIgcDMgcDQgcDUgcDYgcDcgcDg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgwqAzLjI5MzkxNV0gbW1jYmxrMGJvb3QwOiBtbWMwOjAwMDEgU0VNMTZHIDQu
MDAgTWlCPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy4yOTkw
NTRdIG1tY2JsazBib290MTogbW1jMDowMDAxIFNFTTE2RyA0LjAwIE1pQjxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuMzAzOTA1XSBtbWNibGswcnBtYjogbW1j
MDowMDAxIFNFTTE2RyA0LjAwIE1pQiwgY2hhcmRldiAoMjQ0OjApPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy41ODI2NzZdIHJ0Yy1scGM1NSBydGNfbHBjNTU6
IGxwYzU1X3J0Y19nZXRfdGltZTogYmFkIHJlc3VsdDogMTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNTgzMzMyXSBydGMtbHBjNTUgcnRjX2xwYzU1OiBoY3Rv
c3lzOiB1bmFibGUgdG8gcmVhZCB0aGUgaGFyZHdhcmUgY2xvY2s8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjU5MTI1Ml0gY2Rucy1pMmMgZmYwMjAwMDAuaTJj
OiByZWNvdmVyeSBpbmZvcm1hdGlvbiBjb21wbGV0ZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCDCoDMuNTk3MDg1XSBhdDI0IDAtMDA1MDogc3VwcGx5IHZjYyBub3Qg
Zm91bmQsIHVzaW5nIGR1bW15IHJlZ3VsYXRvcjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDsgWyDCoCDCoDMuNjAzMDExXSBscGM1NV9sMiBzcGkxLjA6ICgyKSBtY3Ugc3RpbGwg
bm90IHJlYWR5Pzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMu
NjA4MDkzXSBhdDI0IDAtMDA1MDogMjU2IGJ5dGUgc3BkIEVFUFJPTSwgcmVhZC1vbmx5PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy42MTM2MjBdIGxwYzU1X2wy
IHNwaTEuMDogKDMpIG1jdSBzdGlsbCBub3QgcmVhZHk/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMy42MTkzNjJdIGxwYzU1X2wyIHNwaTEuMDogKDQpIG1jdSBz
dGlsbCBub3QgcmVhZHk/PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKg
IMKgMy42MjQyMjRdIHJ0Yy1ydjMwMjggMC0wMDUyOiByZWdpc3RlcmVkIGFzIHJ0YzE8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjYyODM0M10gbHBjNTVfbDIg
c3BpMS4wOiAoNSkgbWN1IHN0aWxsIG5vdCByZWFkeT88YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjYzMzI1M10gbHBjNTVfbDIgc3BpMS4wOiByeCBlcnJvcjog
LTExMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNjM5MTA0
XSBrODFfYm9vdGxvYWRlciAwLTAwMTA6IHByb2JlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIMKgMy42NDE2MjhdIFZNQ1U6IDogKDIzNTowKSByZWdpc3RlcmVkPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy42NDE2MzVdIGs4MV9i
b290bG9hZGVyIDAtMDAxMDogcHJvYmUgY29tcGxldGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBbIMKgIMKgMy42NjgzNDZdIGNkbnMtaTJjIGZmMDIwMDAwLmkyYzogNDAw
IGtIeiBtbWlvIGZmMDIwMDAwIGlycSAyODxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCDCoDMuNjY5MTU0XSBjZG5zLWkyYyBmZjAzMDAwMC5pMmM6IHJlY292ZXJ5IGlu
Zm9ybWF0aW9uIGNvbXBsZXRlPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBb
IMKgIMKgMy42NzU0MTJdIGxtNzUgMS0wMDQ4OiBzdXBwbHkgdnMgbm90IGZvdW5kLCB1c2luZyBk
dW1teSByZWd1bGF0b3I8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAzLjY4MjkyMF0gbG03NSAxLTAwNDg6IGh3bW9uMTogc2Vuc29yICYjMzk7dG1wMTEyJiMzOTs8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjY4NjU0OF0gaTJj
IGkyYy0xOiBBZGRlZCBtdWx0aXBsZXhlZCBpMmMgYnVzIDM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjY5MDc5NV0gaTJjIGkyYy0xOiBBZGRlZCBtdWx0aXBs
ZXhlZCBpMmMgYnVzIDQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAg
wqAzLjY5NTYyOV0gaTJjIGkyYy0xOiBBZGRlZCBtdWx0aXBsZXhlZCBpMmMgYnVzIDU8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjcwMDQ5Ml0gaTJjIGkyYy0x
OiBBZGRlZCBtdWx0aXBsZXhlZCBpMmMgYnVzIDY8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgwqAzLjcwNTE1N10gcGNhOTU0eCAxLTAwNzA6IHJlZ2lzdGVyZWQgNCBt
dWx0aXBsZXhlZCBidXNzZXMgZm9yIEkyQyBzd2l0Y2ggcGNhOTU0Njxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzEzMDQ5XSBhdDI0IDEtMDA1NDogc3VwcGx5
IHZjYyBub3QgZm91bmQsIHVzaW5nIGR1bW15IHJlZ3VsYXRvcjxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCDCoDMuNzIwMDY3XSBhdDI0IDEtMDA1NDogMTAyNCBieXRl
IDI0YzA4IEVFUFJPTSwgcmVhZC1vbmx5PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0OyBbIMKgIMKgMy43MjQ3NjFdIGNkbnMtaTJjIGZmMDMwMDAwLmkyYzogMTAwIGtIeiBtbWlv
IGZmMDMwMDAwIGlycSAyOTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDC
oCDCoDMuNzMxMjcyXSBzZnAgdmlwZXJfZW5ldDpzZnAtZXRoMTogSG9zdCBtYXhpbXVtIHBvd2Vy
IDIuMFc8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjczNzU0
OV0gc2ZwX3JlZ2lzdGVyX3NvY2tldDogZ290IHNmcF9idXM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgwqAzLjc0MDcwOV0gc2ZwX3JlZ2lzdGVyX3NvY2tldDogcmVn
aXN0ZXIgc2ZwX2J1czxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCDC
oDMuNzQ1NDU5XSBzZnBfcmVnaXN0ZXJfYnVzOiBvcHMgb2shPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBbIMKgIMKgMy43NDkxNzldIHNmcF9yZWdpc3Rlcl9idXM6IFRyeSB0
byBhdHRhY2g8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
> [    3.753419] sfp_register_bus: Attach succeeded
> [    3.757914] sfp_register_bus: upstream ops attach
> [    3.762677] sfp_register_bus: Bus registered
> [    3.766999] sfp_register_socket: register sfp_bus succeeded
> [    3.775870] of_cfs_init
> [    3.776000] of_cfs_init: OK
> [    3.778211] clk: Not disabling unused clocks
> [   11.278477] Freeing initrd memory: 206056K
> [   11.279406] Freeing unused kernel memory: 1536K
> [   11.314006] Checked W+X mappings: passed, no W+X pages found
> [   11.314142] Run /init as init process
> INIT: version 3.01 booting
> fsck (busybox 1.35.0)
> /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> Starting random number generator daemon.
> [   11.580662] random: crng init done
> Starting udev
> [   11.613159] udevd[142]: starting version 3.2.10
> [   11.620385] udevd[143]: starting eudev-3.2.10
> [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Mon Feb 27 08:40:53 UTC 2023
> [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> hwclock: RTC_SET_TIME: Invalid exchange
> [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> Starting mcud
> INIT: Entering runlevel: 5
> Configuring network interfaces... done.
> resetting network interface
> [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> [   12.732151] pps pps0: new PPS source ptp0
> [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> [   12.761804] pps pps1: new PPS source ptp1
> [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> Auto-negotiation: off
> Auto-negotiation: off
> [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> Starting Failsafe Secure Shell server in port 2222: sshd
> done.
> Starting rpcbind daemon...done.
>
> [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Starting State Manager Service
> Start state-manager restarter...
> (XEN) d0v1 Forwarding AES operation: 3254779951
> Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> [   17.350670] BTRFS info (device dm-0): has skinny extents
> [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> [   17.872699] BTRFS info (device dm-1): using free space tree
> [   17.872771] BTRFS info (device dm-1): has skinny extents
> [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> [   17.895695] BTRFS info (device dm-1): checking UUID tree
>
> Setting domain 0 name, domid and JSON config...
> Done setting up Dom0
> Starting xenconsoled...
> Starting QEMU as disk backend for dom0
> Starting domain watchdog daemon: xenwatchdogd startup
>
> [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> [done]
> [   18.465552] BTRFS info (device dm-2): using free space tree
> [   18.465629] BTRFS info (device dm-2): has skinny extents
> [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> [   18.486659] BTRFS info (device dm-2): checking UUID tree
> OK
> starting rsyslogd ... Log partition ready after 0 poll loops
> done
> rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>
> Please insert USB token and enter your role in login prompt.
>
> login:
>
> Regards,
> O.
>
> On Mon, Apr 24, 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> Hi Oleg,
>>
>> Here is the issue from your logs:
>>
>> SError Interrupt on CPU0, code 0xbe000000 -- SError
>>
>> SErrors are special signals to notify software of serious hardware
>> errors. Something is going very wrong. Defective hardware is a
>> possibility. Another possibility is software accessing address ranges
>> that it is not supposed to; that sometimes causes SErrors.
>>
>> Cheers,
>> Stefano
>>
>> On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
>>> Hello,
>>>
>>> Thanks guys.
>>> I found out where the problem was.
>>> Now dom0 boots further, but I have a new problem:
>>> a kernel panic during Dom0 loading.
>>> Maybe someone is able to suggest something?
>>>
>>> Regards,
>>> O.
>>>
>>> [    3.771362] sfp_register_bus: upstream ops attach
>>> [    3.776119] sfp_register_bus: Bus registered
>>> [    3.780459] sfp_register_socket: register sfp_bus succeeded
>>> [    3.789399] of_cfs_init
>>> [    3.789499] of_cfs_init: OK
>>> [    3.791685] clk: Not disabling unused clocks
>>> [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
>>> [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
>>> [   11.010393] Workqueue: events_unbound async_run_entry_fn
>>> [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>>> [   11.010422] pc : simple_write_end+0xd0/0x130
>>> [   11.010431] lr : generic_perform_write+0x118/0x1e0
>>> [   11.010438] sp : ffffffc00809b910
>>> [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
>>> [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ1OV0g
eDIzOiBmZmZmZmZjMDA4MDliYTkwIHgyMjogMDAwMDAwMDAwMmFhYzAwMCB4MjE6IGZmZmZmZjgw
NzMxNWEyNjA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgMTEuMDEwNDcyXSB4MjA6IDAwMDAwMDAwMDAwMDEwMDAgeDE5OiBmZmZmZmZmZTAy
MDAwMDAwIHgxODogMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA0ODFdIHgxNzogMDAwMDAwMDBmZmZm
ZmZmZiB4MTY6IDAwMDAwMDAwMDAwMDgwMDAgeDE1OiAwMDAwMDAwMDAwMDAwMDAwPGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDQ5
MF0geDE0OiAwMDAwMDAwMDAwMDAwMDAwIHgxMzogMDAwMDAwMDAwMDAwMDAwMCB4MTI6IDAwMDAw
MDAwMDAwMDAwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgMTEuMDEwNDk4XSB4MTE6IDAwMDAwMDAwMDAwMDAwMDAgeDEwOiAwMDAwMDAw
MDAwMDAwMDAwIHg5IDogMDAwMDAwMDAwMDAwMDAwMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1MDddIHg4IDogMDAwMDAwMDAw
MDAwMDAwMCB4NyA6IGZmZmZmZmVmNjkzYmE2ODAgeDYgOiAwMDAwMDAwMDJkODliNzAwPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAx
MDUxNV0geDUgOiBmZmZmZmZmZTAyMDAwMDAwIHg0IDogZmZmZmZmODA3MzE1YTNjOCB4MyA6IDAw
MDAwMDAwMDAwMDEwMDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTI0XSB4MiA6IDAwMDAwMDAwMDJhYWIwMDAgeDEgOiAwMDAw
MDAwMDAwMDAwMDAxIHgwIDogMDAwMDAwMDAwMDAwMDAwNTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1MzRdIEtlcm5lbCBwYW5p
YyAtIG5vdCBzeW5jaW5nOiBBc3luY2hyb25vdXMgU0Vycm9yIEludGVycnVwdDxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1Mzld
IENQVTogMCBQSUQ6IDkgQ29tbToga3dvcmtlci91NDowIE5vdCB0YWludGVkIDUuMTUuNzIteGls
aW54LXYyMDIyLjEgIzE8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTQ1XSBIYXJkd2FyZSBuYW1lOiBEMTQgVmlwZXIgQm9hcmQg
LSBXaGl0ZSBVbml0IChEVCk8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTQ4XSBXb3JrcXVldWU6IGV2ZW50c191bmJvdW5kIGFz
eW5jX3J1bl9lbnRyeV9mbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1NTZdIENhbGwgdHJhY2U6PGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDU1OF0gwqBkdW1w
X2JhY2t0cmFjZSsweDAvMHgxYzQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTY3XSDCoHNob3dfc3RhY2srMHgxOC8weDJjPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEx
LjAxMDU3NF0gwqBkdW1wX3N0YWNrX2x2bCsweDdjLzB4YTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNTgzXSDCoGR1bXBfc3Rh
Y2srMHgxOC8weDM0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDExLjAxMDU4OF0gwqBwYW5pYysweDE0Yy8weDJmODxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA1OTddIMKg
cHJpbnRfdGFpbnRlZCsweDAvMHhiMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2MDZdIMKgYXJtNjRfc2Vycm9yX3BhbmljKzB4
NmMvMHg3Yzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDsgWyDCoCAxMS4wMTA2MTRdIMKgZG9fc2Vycm9yKzB4MjgvMHg2MDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2MjFdIMKgZWwx
aF82NF9lcnJvcl9oYW5kbGVyKzB4MzAvMHg1MDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2MjhdIMKgZWwxaF82NF9lcnJvcisw
eDc4LzB4N2M8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IFsgwqAgMTEuMDEwNjMzXSDCoHNpbXBsZV93cml0ZV9lbmQrMHhkMC8weDEzMDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA2
MzldIMKgZ2VuZXJpY19wZXJmb3JtX3dyaXRlKzB4MTE4LzB4MWUwPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDY0NF0gwqBfX2dl
bmVyaWNfZmlsZV93cml0ZV9pdGVyKzB4MTM4LzB4MWM0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDY1MF0gwqBnZW5lcmljX2Zp
bGVfd3JpdGVfaXRlcisweDc4LzB4ZDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjU2XSDCoF9fa2VybmVsX3dyaXRlKzB4ZmMv
MHgyYWM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgMTEuMDEwNjY1XSDCoGtlcm5lbF93cml0ZSsweDg4LzB4MTYwPGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDY3M10gwqB4
d3JpdGUrMHg0NC8weDk0PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0OyBbIMKgIDExLjAxMDY4MF0gwqBkb19jb3B5KzB4YTgvMHgxMDQ8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjg2
XSDCoHdyaXRlX2J1ZmZlcisweDM4LzB4NTg8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNjkyXSDCoGZsdXNoX2J1ZmZlcisweDRj
LzB4YmM8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgMTEuMDEwNjk4XSDCoF9fZ3VuemlwKzB4MjgwLzB4MzEwPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDcwNF0gwqBndW56
aXArMHgxYy8weDI4PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0OyBbIMKgIDExLjAxMDcwOV0gwqB1bnBhY2tfdG9fcm9vdGZzKzB4MTcwLzB4MmIwPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDEx
LjAxMDcxNV0gwqBkb19wb3B1bGF0ZV9yb290ZnMrMHg4MC8weDE2NDxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3MjJdIMKgYXN5
bmNfcnVuX2VudHJ5X2ZuKzB4NDgvMHgxNjQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFsgwqAgMTEuMDEwNzI4XSDCoHByb2Nlc3Nfb25lX3dvcmsr
MHgxZTQvMHgzYTA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7IFsgwqAgMTEuMDEwNzM2XSDCoHdvcmtlcl90aHJlYWQrMHg3Yy8weDRjMDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3
NDNdIMKga3RocmVhZCsweDEyMC8weDEzMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3NTBdIMKgcmV0X2Zyb21fZm9yaysweDEw
LzB4MjA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
IFsgwqAgMTEuMDEwNzU3XSBTTVA6IHN0b3BwaW5nIHNlY29uZGFyeSBDUFVzPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAxMDc4NF0g
S2VybmVsIE9mZnNldDogMHgyZjYxMjAwMDAwIGZyb20gMHhmZmZmZmZjMDA4MDAwMDAwPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBbIMKgIDExLjAx
MDc4OF0gUEhZU19PRkZTRVQ6IDB4MDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4wMTA3OTBdIENQVSBmZWF0dXJlczogMHgwMDAwMDQw
MSwwMDAwMDg0Mjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDsgWyDCoCAxMS4wMTA3OTVdIE1lbW9yeSBMaW1pdDogbm9uZTxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgWyDCoCAxMS4yNzc1MDldIC0tLVsg
ZW5kIEtlcm5lbCBwYW5pYyAtIG5vdCBzeW5jaW5nOiBBc3luY2hyb25vdXMgU0Vycm9yIEludGVy
cnVwdCBdLS0tPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsg
0L/RgiwgMjEg0LDQv9GALiAyMDIz4oCv0LMuINCyIDE1OjUyLCBNaWNoYWwgT3J6ZWwgJmx0Ozxh
IGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hh
bC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6
ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsg
Jmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0i
X2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1k
LmNvbTwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxA
YW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsi
Pm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86
bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNv
bTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRh
cmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7Jmd0OyZndDsgJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5r
Ij5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWlj
aGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwv
YT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0
YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJl
Zj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9y
emVsQGFtZC5jb208L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFs
Lm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4g
Jmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0i
X2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0i
bWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVs
QGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQu
Y29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsmZ3Q7
Jmd0OyZndDs6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgSGkgT2xlZyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgT24gMjEvMDQvMjAyMyAxNDo0OSwgT2xlZyBOaWtpdGVu
a28gd3JvdGU6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7IEhlbGxvIE1pY2hhbCw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0OyBJIHdhcyBub3QgYWJsZSB0byBl
bmFibGUgZWFybHlwcmludGsgaW4gdGhlIHhlbiBmb3Igbm93Ljxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDsgSSBkZWNpZGVk
IHRvIGNob29zZSBhbm90aGVyIHdheS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IFRoaXMgaXMgYSB4ZW4mIzM5O3MgY29t
bWFuZCBsaW5lIHRoYXQgSSBmb3VuZCBvdXQgY29tcGxldGVseS48YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyAoWEVOKSAkJCQkIGNvbnNvbGU9ZHR1YXJ0IGR0dWFydD1zZXJpYWwwIGRvbTBfbWVtPTE2MDBN
IGRvbTBfbWF4X3ZjcHVzPTIgZG9tMF92Y3B1c19waW48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBib290c2NydWI9MDxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHZ3Zmk9bmF0aXZlPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgc2NoZWQ9bnVsbDxicj4NCiZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHRpbWVyX3Nsb3A9MDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoFllcywgYWRkaW5nIGEgcHJpbnRrKCkgaW4gWGVuIHdhcyBhbHNvIGEgZ29vZCBpZGVhLjxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0OyBTbyB5b3UgYXJlIGFic29sdXRlbHkgcmlnaHQgYWJvdXQgYSBjb21tYW5k
IGxpbmUuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0OyBOb3cgSSBhbSBnb2luZyB0byBmaW5kIG91dCB3aHkgeGVuIGRpZCBu
b3QgaGF2ZSB0aGUgY29ycmVjdCBwYXJhbWV0ZXJzIGZyb20gdGhlIGRldmljZTxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoHRyZWUu
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgTWF5YmUgeW91IHdpbGwgZmluZCB0aGlzIGRvY3VtZW50IGhlbHBmdWw6PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgPGEg
aHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2
L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQiIHJlbD0ibm9yZWZlcnJlciIg
dGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2IveGxueF9y
ZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0PC9hPiAmbHQ7
PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80
LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQiIHJlbD0ibm9yZWZlcnJl
ciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2IveGxu
eF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0PC9hPiZn
dDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2IveGxueF9y
ZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0IiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20vWGlsaW54L3hlbi9i
bG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290aW5nLnR4
dDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2IveGxu
eF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0IiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20vWGlsaW54L3hl
bi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290aW5n
LnR4dDwvYT4mZ3Q7Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94
ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGlu
Zy50eHQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNv
bS9YaWxpbngveGVuL2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10
cmVlL2Jvb3RpbmcudHh0PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlu
eC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9v
dGluZy50eHQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHVi
LmNvbS9YaWxpbngveGVuL2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2Rldmlj
ZS10cmVlL2Jvb3RpbmcudHh0PC9hPiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNv
bS9YaWxpbngveGVuL2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10
cmVlL2Jvb3RpbmcudHh0IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczov
L2dpdGh1Yi5jb20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2Fy
bS9kZXZpY2UtdHJlZS9ib290aW5nLnR4dDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHVi
LmNvbS9YaWxpbngveGVuL2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2Rldmlj
ZS10cmVlL2Jvb3RpbmcudHh0IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRw
czovL2dpdGh1Yi5jb20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNj
L2FybS9kZXZpY2UtdHJlZS9ib290aW5nLnR4dDwvYT4mZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEg
aHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2
L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQiIHJlbD0ibm9yZWZlcnJlciIg
dGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2IveGxueF9y
ZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0PC9hPiAmbHQ7
PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94ZW4vYmxvYi94bG54X3JlYmFzZV80
LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGluZy50eHQiIHJlbD0ibm9yZWZlcnJl
ciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2IveGxu
eF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0PC9hPiZn
dDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2IveGxueF9y
ZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0IiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20vWGlsaW54L3hlbi9i
bG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290aW5nLnR4
dDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS9YaWxpbngveGVuL2Jsb2IveGxu
eF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10cmVlL2Jvb3RpbmcudHh0IiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20vWGlsaW54L3hl
bi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2FybS9kZXZpY2UtdHJlZS9ib290aW5n
LnR4dDwvYT4mZ3Q7Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlueC94
ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9vdGlu
Zy50eHQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNv
bS9YaWxpbngveGVuL2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10
cmVlL2Jvb3RpbmcudHh0PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL1hpbGlu
eC94ZW4vYmxvYi94bG54X3JlYmFzZV80LjE2L2RvY3MvbWlzYy9hcm0vZGV2aWNlLXRyZWUvYm9v
dGluZy50eHQiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHVi
LmNvbS9YaWxpbngveGVuL2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2Rldmlj
ZS10cmVlL2Jvb3RpbmcudHh0PC9hPiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNv
bS9YaWxpbngveGVuL2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2RldmljZS10
cmVlL2Jvb3RpbmcudHh0IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczov
L2dpdGh1Yi5jb20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNjL2Fy
bS9kZXZpY2UtdHJlZS9ib290aW5nLnR4dDwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHVi
LmNvbS9YaWxpbngveGVuL2Jsb2IveGxueF9yZWJhc2VfNC4xNi9kb2NzL21pc2MvYXJtL2Rldmlj
ZS10cmVlL2Jvb3RpbmcudHh0IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRw
czovL2dpdGh1Yi5jb20vWGlsaW54L3hlbi9ibG9iL3hsbnhfcmViYXNlXzQuMTYvZG9jcy9taXNj
L2FybS9kZXZpY2UtdHJlZS9ib290aW5nLnR4dDwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoH5NaWNoYWw8
YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDsgUmVnYXJkcyw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7IE9sZWc8YnI+DQomZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
OyDQv9GCLCAyMSDQsNC/0YAuIDIwMjPigK/Qsy4g0LIgMTE6MTYsIE1pY2hhbCBPcnplbCAmbHQ7
PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWlj
aGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5v
cnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0
OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0
PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBh
bWQuY29tPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnpl
bEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFu
ayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQu
Y29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIg
dGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7Jmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9
Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFt
ZC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1k
LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1p
Y2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5j
b208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0
YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxh
IGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hh
bC5vcnplbEBhbWQuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6
ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsm
Z3Q7Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5j
b20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNo
YWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hh
bC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9
Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBo
cmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwu
b3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVs
QGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9i
bGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5j
b208L2E+Jmd0OyZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnpl
bEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFu
ayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQu
Y29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIg
dGFyZ2V0PSJfYmxhbmsiPm1pY2hhbC5vcnplbEBhbWQuY29tPC9hPiZndDsmZ3Q7ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+
bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hh
bC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+
Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzptaWNoYWwub3J6ZWxAYW1kLmNvbSIgdGFy
> … Michal Orzel <michal.orzel@amd.com>:
>
> On 21/04/2023 10:04, Oleg Nikitenko wrote:
> > Hello Michal,
> >
> > Yes, I use yocto.
> > Yesterday all day long I tried to follow your suggestions.
> > I faced a problem.
> > Manually in the xen config build file I pasted the strings:
>
> In the .config file or in some Yocto file (listing additional Kconfig
> options) added to SRC_URI?
> You shouldn't really modify the .config file, but if you do, you should
> execute "make olddefconfig" afterwards.
>
> > CONFIG_EARLY_PRINTK
> > CONFIG_EARLY_PRINTK_ZYNQMP
> > CONFIG_EARLY_UART_CHOICE_CADENCE
>
> I hope you added =y to them.
>
> Anyway, you have at least the following solutions:
> 1) Run "bitbake xen -c menuconfig" to properly set early printk.
> 2) Find out how you enable other Kconfig options in your project
>    (e.g. CONFIG_COLORING=y, which is not enabled by default).
> 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>    CONFIG_EARLY_PRINTK_ZYNQMP=y
>
> ~Michal
>
> > The host hangs at build time.
> > Maybe I did not set something in the config build file?
> >
> > Regards,
> > Oleg
> >
> > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > Thanks Michal,
> > >
> > > You gave me an idea.
> > > I am going to try it today.
> > >
> > > Regards,
> > > O.
> > >
> > > On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > Thanks Stefano.
> > > >
> > > > I am going to do it today.
> > > >
> > > > Regards,
> > > > O.
> > > >
> > > > On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
PC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0
YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5z
c3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3Rh
YmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9y
ZzwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJu
ZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxh
bmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlA
a2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZn
dDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9i
bGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0i
bWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGlu
aUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBr
ZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7
Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0
YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBo
cmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3Rh
YmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9y
ZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIg
dGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZndDsmZ3Q7ICZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0i
X2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlA
a2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9
Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxh
IGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0
YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJl
bGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8
L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3Jn
IiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNz
dGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozo8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgT24gV2VkLCAxOSBBcHIgMjAyMywgT2xlZyBOaWtpdGVua28gd3JvdGU6PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBIaSBNaWNoYWwsPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgSSBjb3JyZWN0ZWQgeGVuJiMzOTtzIGNvbW1hbmQgbGlu
ZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IE5vdyBpdCBp
czxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgeGVuLHhlbi1i
b290YXJncyA9ICZxdW90O2NvbnNvbGU9ZHR1YXJ0IGR0dWFydD1zZXJpYWwwIGRvbTBfbWVtPTE2
MDBNPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgZG9tMF9tYXhfdmNwdXM9Mjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oGRvbTBfdmNwdXNfcGluPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgYm9vdHNjcnViPTAgdndmaT1uYXRpdmUgc2NoZWQ9bnVsbDxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgdGltZXJfc2xvcD0w
IHdheV9zaXplPTY1NTM2IHhlbl9jb2xvcnM9MC0zIGRvbTBfY29sb3JzPTQtNyZxdW90Ozs8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgNCBjb2xvcnMgaXMgd2F5IHRvbyBtYW55IGZvciB4ZW4sIGp1c3QgZG8geGVuX2NvbG9ycz0w
LTAuIFRoZXJlIGlzIG5vPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
YWR2YW50YWdlIGluIHVzaW5nIG1vcmUgdGhhbiAxIGNvbG9yIGZvciBYZW4uPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoDQgY29s
b3JzIGlzIHRvbyBmZXcgZm9yIGRvbTAsIGlmIHlvdSBhcmUgZ2l2aW5nIDE2MDBNIG9mIG1lbW9y
eSB0bzxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoERvbTAuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
RWFjaCBjb2xvciBpcyAyNTZNLiBGb3IgMTYwME0geW91IHNob3VsZCBnaXZlIGF0IGxlYXN0IDcg
Y29sb3JzLiBUcnk6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoHhlbl9jb2xvcnM9MC0wIGRvbTBfY29sb3JzPTEtODxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgVW5mb3J0dW5h
dGVseSB0aGUgcmVzdWx0IHdhcyB0aGUgc2FtZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0OyAoWEVOKSDCoC0gRG9tMCBtb2RlOiBSZWxheGVkPGJyPg0KJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKSBQMk06IDQwLWJpdCBJUEEgd2l0aCA0MC1i
aXQgUEEgYW5kIDgtYml0IFZNSUQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAg
wqAgwqAmZ3Q7IChYRU4pIFAyTTogMyBsZXZlbHMgd2l0aCBvcmRlci0xIHJvb3QsIFZUQ1IgMHgw
MDAwMDAwMDgwMDIzNTU4PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBTY2hlZHVsaW5nIGdyYW51bGFyaXR5OiBjcHUsIDEgQ1BVIHBlciBzY2hlZC1y
ZXNvdXJjZTxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgKFhF
TikgQ29sb3JpbmcgZ2VuZXJhbCBpbmZvcm1hdGlvbjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCDCoCDCoCDCoCZndDsgKFhFTikgV2F5IHNpemU6IDY0a0I8YnI+DQomZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IChYRU4pIE1heC4gbnVtYmVyIG9mIGNvbG9y
cyBhdmFpbGFibGU6IDE2PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0OyAoWEVOKSBYZW4gY29sb3Iocyk6IFsgMCBdPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKSBhbHRlcm5hdGl2ZXM6IFBhdGNoaW5nIHdpdGggYWx0
IHRhYmxlIDAwMDAwMDAwMDAyY2M2OTAgLSZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAwMDAwMDAwMDAwMmNjYzBjPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVOKSBDb2xvciBhcnJh
eSBhbGxvY2F0aW9uIGZhaWxlZCBmb3IgZG9tMDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCDCoCDCoCDCoCZndDsgKFhFTik8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7IChYRU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
Kio8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IChYRU4pIFBh
bmljIG9uIENQVSAwOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDsgKFhFTikgRXJyb3IgY3JlYXRpbmcgZG9tYWluIDA8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IChYRU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKio8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
IChYRU4pPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyAoWEVO
KSBSZWJvb3QgaW4gZml2ZSBzZWNvbmRzLi4uPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDsgSSBhbSBnb2luZyB0byBmaW5kIG91dCBob3cgY29tbWFuZCBsaW5lIGFyZ3VtZW50cyBw
YXNzZWQgYW5kIHBhcnNlZC48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBSZWdh
cmRzLDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgT2xlZzxi
cj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7INGB0YAsIDE5INCw0L/RgC4gMjAyM+KA
r9CzLiDQsiAxMToyNSwgT2xlZyBOaWtpdGVua28gJmx0OzxhIGhyZWY9Im1haWx0bzpvbGVzaGlp
d29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+
ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0
PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJl
Zj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3
b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RA
Z21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsm
Z3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFy
Z2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVm
PSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdv
b2RAZ21haWwuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdv
b2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0i
X2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFu
ayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpv
bGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5j
b208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5j
b20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9s
ZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFp
bC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNv
bSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsi
Pm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xl
c2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29t
PC9hPiZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2Js
YW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWls
LmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWls
LmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+
b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdt
YWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwu
Y29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsgJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFu
ayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpv
bGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5j
b208L2E+Jmd0OyZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29k
QGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9i
bGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RA
Z21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFp
bC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyZndDsg
Jmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9
Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBn
bWFpbC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBn
bWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxh
bmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZs
dDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJf
YmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21h
aWwuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21h
aWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5r
Ij5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RA
Z21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFp
bC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2Js
YW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWls
LmNvbTwvYT4mZ3Q7Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdv
b2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0i
X2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29v
ZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdt
YWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0
OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdl
dD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0i
bWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29k
QGdtYWlsLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29k
QGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9i
bGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsmZ3Q7Jmd0OyZndDsgJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+
b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVz
aGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208
L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20i
IHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNo
aWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpv
bGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5j
b208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIg
dGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86
PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9s
ZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hp
aXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9h
PiZndDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29k
QGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxhbmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4gJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9i
bGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9Il9ibGFuayI+b2xlc2hpaXdvb2RA
Z21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBnbWFp
bC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+Jmd0OyZndDsg
Jmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86b2xlc2hpaXdvb2RAZ21haWwuY29tIiB0YXJnZXQ9
Il9ibGFuayI+b2xlc2hpaXdvb2RAZ21haWwuY29tPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1h
aWx0bzpvbGVzaGlpd29vZEBnbWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBn
bWFpbC5jb208L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpvbGVzaGlpd29vZEBn
bWFpbC5jb20iIHRhcmdldD0iX2JsYW5rIj5vbGVzaGlpd29vZEBnbWFpbC5jb208L2E+ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOm9sZXNoaWl3b29kQGdtYWlsLmNvbSIgdGFyZ2V0PSJfYmxh
bmsiPm9sZXNoaWl3b29kQGdtYWlsLmNvbTwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7
Ojxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oEhpIE1pY2hhbCw8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBZb3UgcHV0IG15
IG5vc2UgaW50byB0aGUgcHJvYmxlbS4gVGhhbmsgeW91Ljxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDsgSSBhbSBnb2luZyB0byB1c2UgeW91ciBwb2ludC48YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IExldCYjMzk7cyBzZWUg
d2hhdCBoYXBwZW5zLjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7IFJlZ2FyZHMs
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0OyBPbGVnPGJyPg0K
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAgwqAgwqAgwqAmZ3Q7INGB0YAsIDE5INCw0L/RgC4gMjAyM+KAr9CzLiDQsiAxMDozNywg
TWljaGFsIE9yemVsICZsdDs8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRh
cmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVm
PSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6
ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnpl
bEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFu
ayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxA
YW1kLmNvbTwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5j
b20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+
bWljaGFsLm9yemVsQGFtZC5jb208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hh
bC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+
Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVs
QGFtZC5jb20iIHRhcmdldD0iX2JsYW5rIj5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4gJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86bWljaGFsLm9yemVsQGFtZC5jb20iIHRhcmdldD0iX2JsYW5r
Ij5taWNoYWwub3J6ZWxAYW1kLmNvbTwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
Om1pY2hhbC5vcnplbEBhbWQuY29tIiB0YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5j
b208L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOm1pY2hhbC5vcnplbEBhbWQuY29tIiB0
YXJnZXQ9Il9ibGFuayI+bWljaGFsLm9yemVsQGFtZC5jb208L2E+Jmd0OyZndDsgJmx0O21haWx0
> Hi Oleg,
>
> On 19/04/2023 09:03, Oleg Nikitenko wrote:
>> Hello Stefano,
>>
>> Thanks for the clarification.
>> My company uses yocto for image generation.
>> What kind of information do you need to consult me on in this case?
>> Maybe the module sizes/addresses which were mentioned by @Julien Grall
>> <julien@xen.org>?
>>
>>> Sorry for jumping into the discussion, but FWICS the Xen command line
>>> you provided does not seem to be the one Xen booted with. The error
>>> you are observing is most likely due to the dom0 colors configuration
>>> not being specified (i.e. lack of a dom0_colors=<> parameter).
>>> Although this parameter is set in the command line you provided, I
>>> strongly doubt that this is the actual command line in use.
>>>
>>> You wrote:
>>> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
>>> dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
>>> timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>>>
>>> but:
>>> 1) way_szize has a typo
>>> 2) you specified 4 colors (0-3) for Xen, but the boot log says that
>>>    Xen has only one:
>>> (XEN) Xen color(s): [ 0 ]
>>>
>>> This makes me believe that no colors configuration actually ends up
>>> in the command line that Xen booted with. A single color for Xen is
>>> the default if not specified, and the way size was probably
>>> calculated by asking the HW.
>>>
>>> So I would suggest to first cross-check the command line in use.
>>>
>>> ~Michal
>>
>> Regards,
>> Oleg
>>
>> On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini
>> <sstabellini@kernel.org> wrote:
YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJl
bGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8
L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWls
dG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5r
Ij5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7
Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+
c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
OnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJu
ZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwu
b3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyAm
bHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9
Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGlu
aUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4g
Jmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0
PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZndDsmZ3Q7Jmd0OyZndDsm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21h
aWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxh
bmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRv
OnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJu
ZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2Js
YW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJl
Zj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVs
bGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGlu
aUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4m
Z3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRh
cmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhy
ZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJl
bGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWls
dG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0
O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJf
YmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBr
ZXJuZWwub3JnPC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9h
PiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3Rh
YmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVs
bGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwv
YT4mZ3Q7Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJl
bGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8
L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRh
cmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8
YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNz
dGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3Jn
PC9hPiZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFu
ayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBr
ZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJu
ZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0
OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIg
dGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEg
aHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3Rh
YmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0
YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmci
IHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsmZ3Q7ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2Js
YW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0
bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7
bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9i
bGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4N
CiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdl
dD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9
Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxp
bmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9h
PiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+
c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0
YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwu
b3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0
bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsi
PnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZndDsmZ3Q7ICZsdDttYWlsdG86PGEgaHJl
Zj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVs
bGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGlu
aUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4m
Z3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRh
cmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhy
ZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJl
bGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpz
c3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVs
Lm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9y
ZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFu
ayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86
c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmc8L2E+Jmd0OyZndDsmZ3Q7Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3Rh
YmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9y
ZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIg
dGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRv
OjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+
c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0
YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5v
cmc8L2E+Jmd0OyZndDsgJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+ICZsdDtt
YWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2Js
YW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsgJmx0O21haWx0bzo8YSBocmVmPSJt
YWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5p
QGtlcm5lbC5vcmc8L2E+ICZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtl
cm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiZndDsm
Z3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZsdDttYWlsdG86PGEgaHJlZj0ibWFpbHRvOnNzdGFiZWxsaW5pQGtlcm5l
bC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBrZXJuZWwub3JnPC9hPiAmbHQ7bWFp
bHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFu
ayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7ICZsdDttYWlsdG86PGEgaHJlZj0ibWFp
bHRvOnNzdGFiZWxsaW5pQGtlcm5lbC5vcmciIHRhcmdldD0iX2JsYW5rIj5zc3RhYmVsbGluaUBr
ZXJuZWwub3JnPC9hPiAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJu
ZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4mZ3Q7Jmd0
OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVsbGluaUBrZXJuZWwub3JnIiB0YXJn
ZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwvYT4gJmx0O21haWx0bzo8YSBocmVm
PSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFyZ2V0PSJfYmxhbmsiPnNzdGFiZWxs
aW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyAmbHQ7bWFpbHRvOjxhIGhyZWY9Im1haWx0bzpzc3RhYmVs
bGluaUBrZXJuZWwub3JnIiB0YXJnZXQ9Il9ibGFuayI+c3N0YWJlbGxpbmlAa2VybmVsLm9yZzwv
YT4gJmx0O21haWx0bzo8YSBocmVmPSJtYWlsdG86c3N0YWJlbGxpbmlAa2VybmVsLm9yZyIgdGFy
Z2V0PSJfYmxhbmsiPnNzdGFiZWxsaW5pQGtlcm5lbC5vcmc8L2E+Jmd0OyZndDsmZ3Q7Jmd0OyZn
dDsmZ3Q7Jmd0OyZndDs6PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoE9uIFR1ZSwgMTggQXByIDIwMjMsIE9sZWcg
TmlraXRlbmtvIHdyb3RlOjxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDsgSGkgSnVsaWVuLDxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7ICZndDsmZ3Q7IFRoaXMgZmVhdHVyZSBoYXMgbm90IGJlZW4gbWVy
Z2VkIGluIFhlbiB1cHN0cmVhbSB5ZXQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyAm
Z3Q7IHdvdWxkIGFzc3VtZSB0aGF0IHVwc3RyZWFtICsgdGhlIHNlcmllcyBvbiB0aGUgTUwgWzFd
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgd29yazxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7IFBsZWFzZSBjbGFyaWZ5
IHRoaXMgcG9pbnQuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0OyBCZWNhdXNlIHRoZSB0d28gdGhvdWdodHMgYXJl
IGNvbnRyb3ZlcnNpYWwuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEhpIE9sZWcsPGJyPg0KJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEFz
IEp1bGllbiB3cm90ZSwgdGhlcmUgaXMgbm90aGluZyBjb250cm92ZXJzaWFsLiBBcyB5b3U8YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqBhcmUgYXdhcmUsPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgWGlsaW54IG1haW50YWlucyBhIHNlcGFyYXRlIFhlbiB0
cmVlIHNwZWNpZmljIGZvciBYaWxpbng8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqBoZXJlOjxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoDxhIGhyZWY9Imh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2Js
YW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBz
Oi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5r
Ij5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7ICZsdDs8YSBocmVmPSJodHRw
czovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFu
ayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBocmVmPSJodHRwczov
L2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+
aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0OyZndDsgJmx0OzxhIGhyZWY9Imh0
dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2Js
YW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBz
Oi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5r
Ij5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7ICZsdDs8YSBocmVmPSJodHRw
czovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFu
ayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBocmVmPSJodHRwczov
L2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+
aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmx0Ozxh
IGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRh
cmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhy
ZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdl
dD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7ICZsdDs8YSBo
cmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJn
ZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBocmVm
PSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9
Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0OyZndDsgJmx0Ozxh
IGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRh
cmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhy
ZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdl
dD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7ICZsdDs8YSBo
cmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJn
ZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBocmVm
PSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9
Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0OyZndDsmZ3Q7Jmd0
OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZl
cnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAm
bHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJl
ciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsg
Jmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJy
ZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0
OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIi
IHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7Jmd0
OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZl
cnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAm
bHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJl
ciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsg
Jmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJy
ZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0
OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIi
IHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7Jmd0
OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwv
YT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVm
ZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4m
Z3Q7ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3Jl
ZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+
ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVy
cmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0
OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwv
YT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVm
ZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4m
Z3Q7ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3Jl
ZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+
ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVy
cmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0
OyZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0i
bm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVu
PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9y
ZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9h
PiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwv
YT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVm
ZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4m
Z3Q7Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0i
bm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVu
PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9y
ZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9h
PiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5v
cmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwv
YT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVm
ZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4m
Z3Q7Jmd0OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiBy
ZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54
L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hl
bjwvYT4mZ3Q7ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVs
PSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW48L2E+ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJu
b3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48
L2E+Jmd0OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiBy
ZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54
L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9
Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hl
bjwvYT4mZ3Q7ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVs
PSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94
ZW48L2E+ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJu
b3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48
L2E+Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20v
eGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW48L2E+Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hp
bGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHVi
LmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlu
eC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuPC9hPiZndDsmZ3Q7ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20v
eGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRo
dWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGls
aW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW48L2E+Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hp
bGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHVi
LmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlu
eC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuPC9hPiZndDsmZ3Q7Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0
aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dp
dGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94
aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1
Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7Jmd0OyAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIu
Y29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8v
Z2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29t
L3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0
aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNv
bS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dp
dGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94
aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1
Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmbHQ7PGEg
aHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJl
Zj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0
PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsgJmx0OzxhIGhy
ZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdl
dD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9
Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0i
X2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7Jmd0OyAmbHQ7PGEg
aHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiAmbHQ7PGEgaHJl
Zj0iaHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW4iIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0
PSJfYmxhbmsiPmh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuPC9hPiZndDsgJmx0OzxhIGhy
ZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdl
dD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0OzxhIGhyZWY9
Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0i
X2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7Jmd0OyZndDsgJmx0
OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIi
IHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0Ozxh
IGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRh
cmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4mZ3Q7ICZsdDs8
YSBocmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0
YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+ICZsdDs8YSBo
cmVmPSJodHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbiIgcmVsPSJub3JlZmVycmVyIiB0YXJn
ZXQ9Il9ibGFuayI+aHR0cHM6Ly9naXRodWIuY29tL3hpbGlueC94ZW48L2E+Jmd0OyZndDsgJmx0
OzxhIGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIi
IHRhcmdldD0iX2JsYW5rIj5odHRwczovL2dpdGh1Yi5jb20veGlsaW54L3hlbjwvYT4gJmx0Ozxh
IGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS94aWxpbngveGVuIiByZWw9Im5vcmVmZXJyZXIiIHRh
> >>>> https://github.com/xilinx/xen
> >>>> and the branch you are using (xlnx_rebase_4.16) comes
> >>>> from there.
> >>>>
> >>>> Instead, the upstream Xen tree lives here:
> >>>> https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
bHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1z
dW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMu
eGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8YSBocmVmPSJodHRw
czovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9y
ZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4mZ3Q7ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMu
eGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFy
Z2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9
c3VtbWFyeTwvYT4gJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/
cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0
cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZndDsm
Z3Q7Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZsdDs8YSBocmVmPSJodHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIv
P3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIgdGFyZ2V0PSJfYmxhbmsiPmh0
dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeTwvYT4gJmx0
OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3Vt
bWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhl
bi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiZndDsgJmx0OzxhIGhyZWY9Imh0
dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJu
b3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFuayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2Vi
Lz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9hPiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhl
bi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdl
dD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1
bW1hcnk8L2E+Jmd0OyZndDsgJmx0OzxhIGhyZWY9Imh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dp
dHdlYi8/cD14ZW4uZ2l0O2E9c3VtbWFyeSIgcmVsPSJub3JlZmVycmVyIiB0YXJnZXQ9Il9ibGFu
ayI+aHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5PC9h
PiAmbHQ7PGEgaHJlZj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7
YT1zdW1tYXJ5IiByZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJp
dHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+Jmd0OyAmbHQ7PGEgaHJl
Zj0iaHR0cHM6Ly94ZW5iaXRzLnhlbi5vcmcvZ2l0d2ViLz9wPXhlbi5naXQ7YT1zdW1tYXJ5IiBy
ZWw9Im5vcmVmZXJyZXIiIHRhcmdldD0iX2JsYW5rIj5odHRwczovL3hlbmJpdHMueGVuLm9yZy9n
aXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnk8L2E+ICZsdDs8YSBocmVmPSJodHRwczovL3hlbmJp
dHMueGVuLm9yZy9naXR3ZWIvP3A9eGVuLmdpdDthPXN1bW1hcnkiIHJlbD0ibm9yZWZlcnJlciIg
dGFyZ2V0PSJfYmxhbmsiPmh0dHBzOi8veGVuYml0cy54ZW4ub3JnL2dpdHdlYi8/cD14ZW4uZ2l0
O2E9c3VtbWFyeTwvYT4mZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyZndDsmZ3Q7PGJyPg0KJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqBUaGUgQ2FjaGUgQ29sb3JpbmcgZmVhdHVyZSB0aGF0IHlvdSBhcmUgdHJ5aW5nIHRvPGJy
Pg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgY29uZmlndXJlIGlzIHByZXNlbnQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBpbiB4bG54X3JlYmFzZV80LjE2LCBi
dXQgbm90IHlldCBwcmVzZW50IHVwc3RyZWFtICh0aGVyZTxicj4NCiZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGlzIGFuPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
b3V0c3RhbmRpbmcgcGF0Y2ggc2VyaWVzIHRvIGFkZCBjYWNoZSBjb2xvcmluZyB0byBYZW48YnI+
DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqB1cHN0cmVhbSBidXQgaXQ8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqBoYXNuJiMzOTt0IGJlZW4gbWVyZ2VkIHlldC4p
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAgwqAmZ3Q7wqAgwqAgwqBBbnl3YXksIGlmIHlvdSBhcmUgdXNpbmcgeGxueF9yZWJhc2Vf
NC4xNiBpdCBkb2VzbiYjMzk7dDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoG1hdHRlciB0b28gbXVjaCBmb3I8YnI+DQomZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAg
wqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqB5
b3UgYXMgeW91IGFscmVhZHkgaGF2ZSBDYWNoZSBDb2xvcmluZyBhcyBhIGZlYXR1cmU8YnI+DQom
Z3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqB0
aGVyZS48YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEkgdGFrZSB5b3UgYXJlIHVzaW5nIEltYWdlQnVpbGRl
ciB0byBnZW5lcmF0ZSB0aGUgYm9vdDxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGNvbmZpZ3VyYXRpb24/IElmPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
c28sIHBsZWFzZSBwb3N0IHRoZSBJbWFnZUJ1aWxkZXIgY29uZmlnIGZpbGUgdGhhdCB5b3UgYXJl
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgdXNpbmcuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoEJ1dCBmcm9tIHRoZSBib290IG1lc3NhZ2UsIGl0
IGxvb2tzIGxpa2UgdGhlIGNvbG9yczxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoGNvbmZpZ3VyYXRpb24gZm9yPGJyPg0KJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKg
RG9tMCBpcyBpbmNvcnJlY3QuPGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZn
dDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAgwqAgwqAmZ3Q7
PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKg
Jmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvC
oCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAg
wqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAg
wqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0
O8KgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDC
oCDCoCZndDvCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAg
wqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAm
Z3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0O8Kg
IMKgIMKgIMKgJmd0O8KgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDC
oCDCoCZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0O8KgIMKgIMKgIMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDC
oCZndDvCoCDCoCDCoCZndDs8YnI+DQomZ3Q7wqAgwqAgwqAmZ3Q7wqAgwqAgwqAgwqAgwqAmZ3Q7
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0O8KgIMKgIMKgJmd0O8KgIMKgIMKgIMKgIMKgJmd0O8KgIMKg
IMKgJmd0Ozxicj4NCiZndDvCoCDCoCDCoCZndDvCoCDCoCDCoCDCoCDCoCZndDs8YnI+DQomZ3Q7
wqAgwqAgwqAmZ3Q7PGJyPg0KJmd0OyA8YnI+DQo8L2Jsb2NrcXVvdGU+PC9kaXY+DQo=
--0000000000003b806f05fbd10971--


From xen-devel-bounces@lists.xenproject.org Thu May 18 17:29:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 17:29:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536486.834847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzhR7-0001Nd-7P; Thu, 18 May 2023 17:29:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536486.834847; Thu, 18 May 2023 17:29:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzhR7-0001NW-4a; Thu, 18 May 2023 17:29:09 +0000
Received: by outflank-mailman (input) for mailman id 536486;
 Thu, 18 May 2023 17:29:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EfMJ=BH=intel.com=dave.hansen@srs-se1.protection.inumbo.net>)
 id 1pzhR5-0001NK-68
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 17:29:07 +0000
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 79fc4896-f5a1-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 19:29:02 +0200 (CEST)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 May 2023 10:28:58 -0700
Received: from nroy-mobl1.amr.corp.intel.com (HELO [10.209.81.123])
 ([10.209.81.123])
 by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 May 2023 10:28:58 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79fc4896-f5a1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1684430942; x=1715966942;
  h=message-id:date:mime-version:subject:to:cc:references:
   from:in-reply-to:content-transfer-encoding;
  bh=YbB0ITNWC6W+DKq+pyGIH80h6yd4xEjccS7jew/3CTQ=;
  b=aKZT4vEpd/1SU6W4n3H/qvzp094XJ3lgXfgWAzUXUkefbl/I8nNwkN5P
   2JM6nSjwuvkv1EcEf4gorDf1Zar2ueQ91us7eP5v4ooLxlRqgYayYZ9Ul
   2PjHBjnx04lMQa6DqTQrHLX/9kOpVStF1oGyqImObVWW1CEKgXev7A3Bv
   NvlRB/JrIVT+F5FfO8MWkBIAV60xb4B+dnP/1Tb4Xkcx9aHrK2mMOIEEJ
   Lrt2yFUa2aZHftKCc9ga42ij0Irohm724zB3XP3daqs5VQPSdvALIypPP
   rRKbCg1BIyxApr0SNpgCp8QjZM/bCvKRQwSegvA+iV8OVStktLHz9QMhH
   Q==;
X-IronPort-AV: E=McAfee;i="6600,9927,10714"; a="354474559"
X-IronPort-AV: E=Sophos;i="6.00,174,1681196400"; 
   d="scan'208";a="354474559"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10714"; a="702214993"
X-IronPort-AV: E=Sophos;i="6.00,174,1681196400"; 
   d="scan'208";a="702214993"
Message-ID: <cabdd839-71d5-aabb-aee6-d37ebcabf2ab@intel.com>
Date: Thu, 18 May 2023 10:28:58 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 10/20] x86: xen: add missing prototypes
Content-Language: en-US
To: Arnd Bergmann <arnd@kernel.org>, x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>,
 Andy Lutomirski <luto@kernel.org>, Steven Rostedt <rostedt@goodmis.org>,
 Masami Hiramatsu <mhiramat@kernel.org>, Mark Rutland <mark.rutland@arm.com>,
 Juergen Gross <jgross@suse.com>,
 "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
 Alexey Makhalov <amakhalov@vmware.com>,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 Peter Zijlstra <peterz@infradead.org>, Darren Hart <dvhart@infradead.org>,
 Andy Shevchenko <andy@infradead.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, linux-pci@vger.kernel.org,
 platform-driver-x86@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-pm@vger.kernel.org, linux-mm@kvack.org
References: <20230516193549.544673-1-arnd@kernel.org>
 <20230516193549.544673-11-arnd@kernel.org>
From: Dave Hansen <dave.hansen@intel.com>
In-Reply-To: <20230516193549.544673-11-arnd@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 5/16/23 12:35, Arnd Bergmann wrote:
> From: Arnd Bergmann <arnd@arndb.de>
> 
> arch/x86/xen/enlighten_pv.c:1233:34: error: no previous prototype for 'xen_start_kernel' [-Werror=missing-prototypes]
> arch/x86/xen/irq.c:22:14: error: no previous prototype for 'xen_force_evtchn_callback' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:358:20: error: no previous prototype for 'xen_pte_val' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:366:20: error: no previous prototype for 'xen_pgd_val' [-Werror=missing-prototypes]

What's the deal with this one?

The patch is touching a bunch of functions on top of the ones from the
commit message.  Were you just showing a snippet of the actual set of
warnings?

Also, fwiw, it would be nice to have actual words in the changelog, even
for these maddeningly repetitive series.  Even something like:

	Xen has a bunch of these because of how the paravirt code uses
	inline assembly.


From xen-devel-bounces@lists.xenproject.org Thu May 18 17:31:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 17:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536490.834858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzhTI-0002kv-K3; Thu, 18 May 2023 17:31:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536490.834858; Thu, 18 May 2023 17:31:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzhTI-0002ko-HI; Thu, 18 May 2023 17:31:24 +0000
Received: by outflank-mailman (input) for mailman id 536490;
 Thu, 18 May 2023 17:31:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EfMJ=BH=intel.com=dave.hansen@srs-se1.protection.inumbo.net>)
 id 1pzhTH-0002kg-Ax
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 17:31:23 +0000
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cce18eeb-f5a1-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 19:31:21 +0200 (CEST)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 May 2023 10:31:18 -0700
Received: from nroy-mobl1.amr.corp.intel.com (HELO [10.209.81.123])
 ([10.209.81.123])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 May 2023 10:31:17 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cce18eeb-f5a1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1684431081; x=1715967081;
  h=message-id:date:mime-version:subject:to:cc:references:
   from:in-reply-to:content-transfer-encoding;
  bh=Wl8BDYVG0i7AvYNjyPawAfZhnam6nIO2zxc9kJVuY98=;
  b=WpzfTPZYDSOXeNuKq7AQYSu6P41BySw4wNiz2aSfmDSWcklIFwwzpmpZ
   B4UFkZm0pjHH0TmTiV95F7HAFLxBPFOo1VID+8WSCB+7DWyrRfMVXXpTT
   Err7TrnNCyaQNuQ1TKl2fUhx4gjZx81DCvB8wcHnF1MIAeYourh/hE48P
   HCIIrEV77WlNurieCvNBQQcYqDKWFxMv8/rrpRW/BLzjkHSdZMMmALzls
   TfWpeKgymFi5/zYdyLjr0Vwf56UxfQ9P6wObk0YhbUP9z+kuHhhFJtdsE
   1UM4aHF6vLxBcbl/T42EiaPlaKStMiXid40AYFt/gwyZ5SczSHegn3ftt
   g==;
X-IronPort-AV: E=McAfee;i="6600,9927,10714"; a="354475450"
X-IronPort-AV: E=Sophos;i="6.00,174,1681196400"; 
   d="scan'208";a="354475450"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10714"; a="1032284552"
X-IronPort-AV: E=Sophos;i="6.00,174,1681196400"; 
   d="scan'208";a="1032284552"
Message-ID: <d03ef733-8098-69b7-97c2-304f1195e2a4@intel.com>
Date: Thu, 18 May 2023 10:31:16 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 00/20] x86: address -Wmissing-prototype warnings
Content-Language: en-US
To: Arnd Bergmann <arnd@kernel.org>, x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>,
 Andy Lutomirski <luto@kernel.org>, Steven Rostedt <rostedt@goodmis.org>,
 Masami Hiramatsu <mhiramat@kernel.org>, Mark Rutland <mark.rutland@arm.com>,
 Juergen Gross <jgross@suse.com>,
 "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
 Alexey Makhalov <amakhalov@vmware.com>,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 Peter Zijlstra <peterz@infradead.org>, Darren Hart <dvhart@infradead.org>,
 Andy Shevchenko <andy@infradead.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, linux-pci@vger.kernel.org,
 platform-driver-x86@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-pm@vger.kernel.org, linux-mm@kvack.org
References: <20230516193549.544673-1-arnd@kernel.org>
From: Dave Hansen <dave.hansen@intel.com>
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 5/16/23 12:35, Arnd Bergmann wrote:
> The ones that are a bit awkward are those that just add a prototype to
> shut up the warning, but the prototypes are never used for calling the
> function because the only caller is in assembler code. I tried to come up
> with other ways to shut up the compiler using the asmlinkage annotation,
> but with no success.

I went looking for the same thing.  It's too bad gcc doesn't have an
__attribute__ for it.


From xen-devel-bounces@lists.xenproject.org Thu May 18 18:19:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 18:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536494.834867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pziDI-0007OJ-0u; Thu, 18 May 2023 18:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536494.834867; Thu, 18 May 2023 18:18:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pziDH-0007OC-UP; Thu, 18 May 2023 18:18:55 +0000
Received: by outflank-mailman (input) for mailman id 536494;
 Thu, 18 May 2023 18:18:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bBlf=BH=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pziDG-0007O6-Og
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 18:18:55 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6d6086cb-f5a8-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 20:18:49 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 467EA5C0189;
 Thu, 18 May 2023 14:18:45 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Thu, 18 May 2023 14:18:45 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 18 May 2023 14:18:42 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d6086cb-f5a8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm3; t=
	1684433925; x=1684520325; bh=8axqpDZcBJUL6/RSlQEGx3UykZkeYDfsoFq
	sKS4DlH8=; b=LTDJ72ZJBNU4Q+wdQtTMHbKI7Rf7xCwPmPlFPa/OWtk6KR2yIsb
	wUx4TgI9fwxNTiD9tCkEW9fGhx6uFOOexL8CANKKVptANJ3uJC8nIcLpDQnNeuJ5
	9tieBz0pNDhKd4uSNPNux7XAJUkWci2MO18bTW3KhOKheIthnn54/C2EIz2B7n0p
	ArsV7dtXoA+8i7LqZVw5QBTuof9aSMA381lGLdwimEdPy6/4q2hSQupv4lFLphcq
	Xz1nGSd9blAmwB1xoafB3KUXW/7h0gRdkot2+/Ph2RsVSc6V0quQymU13VP2wcXw
	AuAa+ike1zV5TPGd9EvSg2UdVQTk5IVi74g==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1684433925; x=1684520325; bh=8axqpDZcBJUL6
	/RSlQEGx3UykZkeYDfsoFqsKS4DlH8=; b=ahTrDDMNHMprtR6j7zsInmHT0unIR
	qAyeVpbtq7fyzs/HsksHaOtng51EzIdD1Y4DmO8WoPBkTWZ5M3rG0m/mcJ6JDmPw
	Ecy4AFX/C+Uewwq8xxSQ0sk7Is2WYhYYslyESxZijdWH/cei3IxLH5Fq8X6pTphq
	SxFdOZPf2JQXVzxxMxEilNQPw4mbB+ZrezIiDlKcdSqxv2j25fEZnzUl0MJ8yfLX
	4IBsMTnU58G/IcA7jgPp15uxmuqzn8dHYvRHo01VfoDYtRBqchFq9RUOtpPcUI9N
	ZflIOipXAScjPuYZeecwD2XoSt2p9rjtUfsNR+ACw3XeX8tFtlst0182g==
X-ME-Sender: <xms:BGxmZEHwrxvzj2rUdyQFtkeClxzfILEtdDb-xaRPscDsugNfa_gRhg>
    <xme:BGxmZNXNpf0AMvINhbKCxiq-tUTzBaefoJnHwWVNaDikDkZjQkGPm_8uhSk2JyFpn
    4sOIN4kpu3nRQ>
X-ME-Received: <xmr:BGxmZOJZQmSuEVRCprIH4kzePADrtGYRp1bSQUHUENFrUWHdOe2cZ4QgvTtelEMwRsTpyFOh_IZf-gSqvSecmq8Ebl-rrYiRAFA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeeifedguddvfecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpefg
    udelteefvefhfeehieetleeihfejhfeludevteetkeevtedtvdegueetfeejudenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:BGxmZGG9PGqozfwHtPqImZp0h9jq6zXETkRQMUn2ych4Vg8Ht-Fv7g>
    <xmx:BGxmZKWMsb0cJjwvkTZF5RgdlJXSFKEsQNnO8v9cj_4G3rifEk9oVA>
    <xmx:BGxmZJO894ds2AjAhhHmsTz7sH_UR233C-BOjr4cInJFMiOoHG__fw>
    <xmx:BWxmZDVQItjPzKlagxJxlOPA1P5hwMRsW1KjHLFzxth6sl8DCB60tA>
Feedback-ID: i1568416f:Fastmail
Date: Thu, 18 May 2023 20:18:39 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront
 is enabling
Message-ID: <ZGZr/xgbUmVqpOpN@mail-itl>
References: <20230518134253.909623-1-hch@lst.de>
 <20230518134253.909623-3-hch@lst.de>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="CsM+DO8kg05XbydD"
Content-Disposition: inline
In-Reply-To: <20230518134253.909623-3-hch@lst.de>


--CsM+DO8kg05XbydD
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Thu, 18 May 2023 20:18:39 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront
 is enabling

On Thu, May 18, 2023 at 03:42:51PM +0200, Christoph Hellwig wrote:
> Remove the dangerous late initialization of xen-swiotlb in
> pci_xen_swiotlb_init_late and instead just always initialize
> xen-swiotlb in the boot code if CONFIG_XEN_PCIDEV_FRONTEND is enabled.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Doesn't this mean that all PV guests will each waste 64MB of RAM by
default if they don't actually have any PCI devices?

> ---
>  arch/x86/include/asm/xen/swiotlb-xen.h |  6 ------
>  arch/x86/kernel/pci-dma.c              | 25 +++----------------------
>  drivers/pci/xen-pcifront.c             |  6 ------
>  3 files changed, 3 insertions(+), 34 deletions(-)
>
> diff --git a/arch/x86/include/asm/xen/swiotlb-xen.h b/arch/x86/include/asm/xen/swiotlb-xen.h
> index 77a2d19cc9909e..abde0f44df57dc 100644
> --- a/arch/x86/include/asm/xen/swiotlb-xen.h
> +++ b/arch/x86/include/asm/xen/swiotlb-xen.h
> @@ -2,12 +2,6 @@
>  #ifndef _ASM_X86_SWIOTLB_XEN_H
>  #define _ASM_X86_SWIOTLB_XEN_H
> 
> -#ifdef CONFIG_SWIOTLB_XEN
> -extern int pci_xen_swiotlb_init_late(void);
> -#else
> -static inline int pci_xen_swiotlb_init_late(void) { return -ENXIO; }
> -#endif
> -
>  int xen_swiotlb_fixup(void *buf, unsigned long nslabs);
>  int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
>  				unsigned int address_bits,
> diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
> index f887b08ac5ffe4..c4a7ead9eb674e 100644
> --- a/arch/x86/kernel/pci-dma.c
> +++ b/arch/x86/kernel/pci-dma.c
> @@ -81,27 +81,6 @@ static void __init pci_xen_swiotlb_init(void)
>  	if (IS_ENABLED(CONFIG_PCI))
>  		pci_request_acs();
>  }
> -
> -int pci_xen_swiotlb_init_late(void)
> -{
> -	if (dma_ops == &xen_swiotlb_dma_ops)
> -		return 0;
> -
> -	/* we can work with the default swiotlb */
> -	if (!io_tlb_default_mem.nslabs) {
> -		int rc = swiotlb_init_late(swiotlb_size_or_default(),
> -					   GFP_KERNEL, xen_swiotlb_fixup);
> -		if (rc < 0)
> -			return rc;
> -	}
> -
> -	/* XXX: this switches the dma ops under live devices! */
> -	dma_ops = &xen_swiotlb_dma_ops;
> -	if (IS_ENABLED(CONFIG_PCI))
> -		pci_request_acs();
> -	return 0;
> -}
> -EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
>  #else
>  static inline void __init pci_xen_swiotlb_init(void)
>  {
> @@ -111,7 +90,9 @@ static inline void __init pci_xen_swiotlb_init(void)
>  void __init pci_iommu_alloc(void)
>  {
>  	if (xen_pv_domain()) {
> -		if (xen_initial_domain() || x86_swiotlb_enable)
> +		if (xen_initial_domain() ||
> +		    IS_ENABLED(CONFIG_XEN_PCIDEV_FRONTEND) ||
> +		    x86_swiotlb_enable)
>  			pci_xen_swiotlb_init();
>  		return;
>  	}
> diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
> index 83c0ab50676dff..11636634ae512f 100644
> --- a/drivers/pci/xen-pcifront.c
> +++ b/drivers/pci/xen-pcifront.c
> @@ -22,7 +22,6 @@
>  #include <linux/bitops.h>
>  #include <linux/time.h>
>  #include <linux/ktime.h>
> -#include <linux/swiotlb.h>
>  #include <xen/platform_pci.h>
> 
>  #include <asm/xen/swiotlb-xen.h>
> @@ -669,11 +668,6 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
> 
>  	spin_unlock(&pcifront_dev_lock);
> 
> -	if (!err && !is_swiotlb_active(&pdev->xdev->dev)) {
> -		err = pci_xen_swiotlb_init_late();
> -		if (err)
> -			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
> -	}
>  	return err;
>  }
> 
> --
> 2.39.2
>
>

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--CsM+DO8kg05XbydD
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRma/8ACgkQ24/THMrX
1yzT+gf/S1uwHKmjBfQtKqzw50EE/PUeNAO8869wN0cpWYT6WkKJT5BxXOVgbJfW
9mibLfMCcVO0H1cE8+PCxvC9BIv3ldhC7KVtSQks99V24zf0mPiqBiGM2mAI9VRD
SYMwmipIXDSRFERIcBo1XomAt4ytJj/BkqCv+Xy5PgYIqdABz9R4G3HT2q6rN0Lq
M2sLnKWdGwoYdk8hOmlTY5F3/iYdv/Zlel4Ki2s5ZzLUBxCZ1IKErb31wRaRid8p
APcTnrs8RZE8+YBL7nzdr9HudKMlaChsiZsPkIo0v0aI5oFa+OHs6Z5sOXOP0RAB
qmCDKShHiVUc6Lqc1gT6VHLmZdEEig==
=Y97F
-----END PGP SIGNATURE-----

--CsM+DO8kg05XbydD--


From xen-devel-bounces@lists.xenproject.org Thu May 18 18:27:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 18:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536499.834878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pziLr-0000TX-TB; Thu, 18 May 2023 18:27:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536499.834878; Thu, 18 May 2023 18:27:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pziLr-0000TK-PX; Thu, 18 May 2023 18:27:47 +0000
Received: by outflank-mailman (input) for mailman id 536499;
 Thu, 18 May 2023 18:27:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pziLr-0000TC-5v
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 18:27:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pziLq-0001Jf-Fx; Thu, 18 May 2023 18:27:46 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.26.27]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pziLq-0007lt-6S; Thu, 18 May 2023 18:27:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=rJ95ros767tiVD/9rk/xr/iTTMFUGfe1uzd3kHsbOa0=; b=jEMydz+r2otD+Exd5ghSqdh3JN
	fJ73yaMaok/qN34s8RWZu0icII3tiDwlfEcfIaZly46UTc/0K1ZHj8WDpNMzaD9m+4vK9iSs392nM
	ZbjLocuvykf0atWwknlLzu2W1izvvgXhsVdku317LPekG1ySRgyzAeJNBZTiOhAr/Fng=;
Message-ID: <d735e539-a8ad-0c14-2eda-22fbad19191f@xen.org>
Date: Thu, 18 May 2023 19:27:44 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-6-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230424060248.1488859-6-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 24/04/2023 07:02, Luca Fancellu wrote:
> Save/restore the SVE context on context switch. Allocate memory to
> hold the Z0-31 registers, whose length is at most 2048 bits each, and
> FFR, which can be at most 256 bits; how much memory is allocated
> depends on the vector length for the domain and on how many bits are
> supported by the platform.
> 
> Save P0-15, whose length is at most 256 bits each; in this case the
> memory used comes from the fpregs field in struct vfp_state, because
> V0-31 are part of Z0-31 and this space would otherwise be unused for
> an SVE domain.
> 
> Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
> creation given the requested vector length, and restore it on context
> switch; save/restore the ZCR_EL1 value as well.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v5:
>   - use XFREE instead of xfree, keep the headers (Julien)
>   - Avoid math computation for every save/restore, store the computation
>     in struct vfp_state once (Bertrand)
>   - protect access to v->domain->arch.sve_vl inside arch_vcpu_create now
>     that sve_vl is available only on arm64
> Changes from v4:
>   - No changes
> Changes from v3:
>   - don't use fixed len types when not needed (Jan)
>   - now VL is an encoded value, decode it before using.
> Changes from v2:
>   - No changes
> Changes from v1:
>   - No changes
> Changes from RFC:
>   - Moved zcr_el2 field introduction in this patch, restore its
>     content inside sve_restore_state function. (Julien)
> ---
>   xen/arch/arm/arm64/sve-asm.S             | 141 +++++++++++++++++++++++
>   xen/arch/arm/arm64/sve.c                 |  63 ++++++++++
>   xen/arch/arm/arm64/vfp.c                 |  79 +++++++------
>   xen/arch/arm/domain.c                    |   9 ++
>   xen/arch/arm/include/asm/arm64/sve.h     |  13 +++
>   xen/arch/arm/include/asm/arm64/sysregs.h |   3 +
>   xen/arch/arm/include/asm/arm64/vfp.h     |  12 ++
>   xen/arch/arm/include/asm/domain.h        |   2 +
>   8 files changed, 288 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
> index 4d1549344733..8c37d7bc95d5 100644
> --- a/xen/arch/arm/arm64/sve-asm.S
> +++ b/xen/arch/arm/arm64/sve-asm.S

Are all the new helpers added in this patch taken from Linux? If so, it 
would be good to clarify this (again) in the commit message, as it helps 
with the review (I can diff against Linux rather than reviewing them 
from scratch).

> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index 86a5e617bfca..064832b450ff 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -5,6 +5,8 @@
>    * Copyright (C) 2022 ARM Ltd.
>    */
>   
> +#include <xen/sched.h>
> +#include <xen/sizes.h>
>   #include <xen/types.h>
>   #include <asm/arm64/sve.h>
>   #include <asm/arm64/sysregs.h>
> @@ -13,6 +15,24 @@
>   #include <asm/system.h>
>   
>   extern unsigned int sve_get_hw_vl(void);
> +extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
> +extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
> +                         int restore_ffr);

 From the use, it is not entirely clear what restore_ffr/save_ffr are 
meant to be. Are they booleans? If so, maybe use bool? At minimum, they 
probably want to be unsigned int.
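
For illustration, the suggested prototypes could look like this (an
untested sketch of the reviewer's suggestion, not part of the patch):

```c
extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, bool save_ffr);
extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
                         bool restore_ffr);
```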

> +
> +static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
> +{
> +    /*
> +     * Z0-31 registers size in bytes is computed from VL that is in bits, so VL
> +     * in bytes is VL/8.
> +     */
> +    return (vl / 8U) * 32U;
> +}
> +
> +static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
> +{
> +    /* FFR register size is VL/8, which is in bytes (VL/8)/8 */
> +    return (vl / 64U);
> +}
>   
>   register_t compute_max_zcr(void)
>   {
> @@ -60,3 +80,46 @@ unsigned int get_sys_vl_len(void)
>       return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
>               SVE_VL_MULTIPLE_VAL;
>   }
> +
> +int sve_context_init(struct vcpu *v)
> +{
> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
> +                             sve_ffrreg_ctx_size(sve_vl_bits),
> +                             L1_CACHE_BYTES);
> +
> +    if ( !ctx )
> +        return -ENOMEM;
> +
> +    /* Point to the end of Z0-Z31 memory, just before FFR memory */

NIT: I would add that the logic should be kept in sync with 
sve_context_free(). Same...

> +    v->arch.vfp.sve_zreg_ctx_end = ctx +
> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
> +
> +    return 0;
> +}
> +
> +void sve_context_free(struct vcpu *v)
> +{
> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> +
> +    /* Point back to the beginning of Z0-Z31 + FFR memory */

... here (but with sve_context_init()). So it is clearer that if the 
logic changes in one place then it needs to be changed in the other.

> +    v->arch.vfp.sve_zreg_ctx_end -=
> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));

 From my understanding, sve_context_free() could be called with 
sve_zreg_ctx_end equal to NULL (i.e. because sve_context_init() failed). 
So wouldn't we end up subtracting the value from NULL and therefore...

> +
> +    XFREE(v->arch.vfp.sve_zreg_ctx_end);

... free a random pointer?
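
One way to avoid that (an untested sketch, not the actual patch) would
be to bail out early when the context was never allocated:

```c
void sve_context_free(struct vcpu *v)
{
    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);

    /* Nothing to free if sve_context_init() failed or never ran. */
    if ( !v->arch.vfp.sve_zreg_ctx_end )
        return;

    /* Point back to the beginning of Z0-Z31 + FFR memory */
    v->arch.vfp.sve_zreg_ctx_end -=
        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));

    XFREE(v->arch.vfp.sve_zreg_ctx_end);
}
```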

> +}
> +
> +void sve_save_state(struct vcpu *v)
> +{
> +    v->arch.zcr_el1 = READ_SYSREG(ZCR_EL1);
> +
> +    sve_save_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
> +}
> +
> +void sve_restore_state(struct vcpu *v)
> +{
> +    WRITE_SYSREG(v->arch.zcr_el1, ZCR_EL1);
> +    WRITE_SYSREG(v->arch.zcr_el2, ZCR_EL2);

AFAIU, this value will be used for the restore below. So don't we need 
an isb()?
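
Concretely, the change being hinted at would presumably look like this
(untested sketch):

```c
void sve_restore_state(struct vcpu *v)
{
    WRITE_SYSREG(v->arch.zcr_el1, ZCR_EL1);
    WRITE_SYSREG(v->arch.zcr_el2, ZCR_EL2);
    /* Ensure the new ZCR_EL2 vector length is in effect before the loads. */
    isb();

    sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
}
```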

> +
> +    sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
> +}
> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
> index 47885e76baae..2d0d7c2e6ddb 100644
> --- a/xen/arch/arm/arm64/vfp.c
> +++ b/xen/arch/arm/arm64/vfp.c
> @@ -2,29 +2,35 @@
>   #include <asm/processor.h>
>   #include <asm/cpufeature.h>
>   #include <asm/vfp.h>
> +#include <asm/arm64/sve.h>
>   
>   void vfp_save_state(struct vcpu *v)
>   {
>       if ( !cpu_has_fp )
>           return;
>   
> -    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
> -                 "stp q2, q3, [%1, #16 * 2]\n\t"
> -                 "stp q4, q5, [%1, #16 * 4]\n\t"
> -                 "stp q6, q7, [%1, #16 * 6]\n\t"
> -                 "stp q8, q9, [%1, #16 * 8]\n\t"
> -                 "stp q10, q11, [%1, #16 * 10]\n\t"
> -                 "stp q12, q13, [%1, #16 * 12]\n\t"
> -                 "stp q14, q15, [%1, #16 * 14]\n\t"
> -                 "stp q16, q17, [%1, #16 * 16]\n\t"
> -                 "stp q18, q19, [%1, #16 * 18]\n\t"
> -                 "stp q20, q21, [%1, #16 * 20]\n\t"
> -                 "stp q22, q23, [%1, #16 * 22]\n\t"
> -                 "stp q24, q25, [%1, #16 * 24]\n\t"
> -                 "stp q26, q27, [%1, #16 * 26]\n\t"
> -                 "stp q28, q29, [%1, #16 * 28]\n\t"
> -                 "stp q30, q31, [%1, #16 * 30]\n\t"
> -                 : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
> +    if ( is_sve_domain(v->domain) )
> +        sve_save_state(v);
> +    else
> +    {
> +        asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
> +                     "stp q2, q3, [%1, #16 * 2]\n\t"
> +                     "stp q4, q5, [%1, #16 * 4]\n\t"
> +                     "stp q6, q7, [%1, #16 * 6]\n\t"
> +                     "stp q8, q9, [%1, #16 * 8]\n\t"
> +                     "stp q10, q11, [%1, #16 * 10]\n\t"
> +                     "stp q12, q13, [%1, #16 * 12]\n\t"
> +                     "stp q14, q15, [%1, #16 * 14]\n\t"
> +                     "stp q16, q17, [%1, #16 * 16]\n\t"
> +                     "stp q18, q19, [%1, #16 * 18]\n\t"
> +                     "stp q20, q21, [%1, #16 * 20]\n\t"
> +                     "stp q22, q23, [%1, #16 * 22]\n\t"
> +                     "stp q24, q25, [%1, #16 * 24]\n\t"
> +                     "stp q26, q27, [%1, #16 * 26]\n\t"
> +                     "stp q28, q29, [%1, #16 * 28]\n\t"
> +                     "stp q30, q31, [%1, #16 * 30]\n\t"
> +                     : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
> +    }
>   
>       v->arch.vfp.fpsr = READ_SYSREG(FPSR);
>       v->arch.vfp.fpcr = READ_SYSREG(FPCR);
> @@ -37,23 +43,28 @@ void vfp_restore_state(struct vcpu *v)
>       if ( !cpu_has_fp )
>           return;
>   
> -    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
> -                 "ldp q2, q3, [%1, #16 * 2]\n\t"
> -                 "ldp q4, q5, [%1, #16 * 4]\n\t"
> -                 "ldp q6, q7, [%1, #16 * 6]\n\t"
> -                 "ldp q8, q9, [%1, #16 * 8]\n\t"
> -                 "ldp q10, q11, [%1, #16 * 10]\n\t"
> -                 "ldp q12, q13, [%1, #16 * 12]\n\t"
> -                 "ldp q14, q15, [%1, #16 * 14]\n\t"
> -                 "ldp q16, q17, [%1, #16 * 16]\n\t"
> -                 "ldp q18, q19, [%1, #16 * 18]\n\t"
> -                 "ldp q20, q21, [%1, #16 * 20]\n\t"
> -                 "ldp q22, q23, [%1, #16 * 22]\n\t"
> -                 "ldp q24, q25, [%1, #16 * 24]\n\t"
> -                 "ldp q26, q27, [%1, #16 * 26]\n\t"
> -                 "ldp q28, q29, [%1, #16 * 28]\n\t"
> -                 "ldp q30, q31, [%1, #16 * 30]\n\t"
> -                 : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
> +    if ( is_sve_domain(v->domain) )
> +        sve_restore_state(v);
> +    else
> +    {
> +        asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
> +                     "ldp q2, q3, [%1, #16 * 2]\n\t"
> +                     "ldp q4, q5, [%1, #16 * 4]\n\t"
> +                     "ldp q6, q7, [%1, #16 * 6]\n\t"
> +                     "ldp q8, q9, [%1, #16 * 8]\n\t"
> +                     "ldp q10, q11, [%1, #16 * 10]\n\t"
> +                     "ldp q12, q13, [%1, #16 * 12]\n\t"
> +                     "ldp q14, q15, [%1, #16 * 14]\n\t"
> +                     "ldp q16, q17, [%1, #16 * 16]\n\t"
> +                     "ldp q18, q19, [%1, #16 * 18]\n\t"
> +                     "ldp q20, q21, [%1, #16 * 20]\n\t"
> +                     "ldp q22, q23, [%1, #16 * 22]\n\t"
> +                     "ldp q24, q25, [%1, #16 * 24]\n\t"
> +                     "ldp q26, q27, [%1, #16 * 26]\n\t"
> +                     "ldp q28, q29, [%1, #16 * 28]\n\t"
> +                     "ldp q30, q31, [%1, #16 * 30]\n\t"
> +                     : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
> +    }
>   
>       WRITE_SYSREG(v->arch.vfp.fpsr, FPSR);
>       WRITE_SYSREG(v->arch.vfp.fpcr, FPCR);
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 143359d0f313..24c722a4a11e 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -552,7 +552,14 @@ int arch_vcpu_create(struct vcpu *v)
>   
>       v->arch.cptr_el2 = get_default_cptr_flags();
>       if ( is_sve_domain(v->domain) )
> +    {
> +        if ( (rc = sve_context_init(v)) != 0 )
> +            goto fail;
>           v->arch.cptr_el2 &= ~HCPTR_CP(8);
> +#ifdef CONFIG_ARM64_SVE

This #ifdef reads a bit odd to me because you are protecting 
v->arch.zcr_el2 but not the rest. This is one of the cases where I would 
surround the full if with the #ifdef, because it makes it clearer that 
there is no way the rest of the code can be reached if !CONFIG_ARM64_SVE.

That said, I would actually prefer if...

> +        v->arch.zcr_el2 = vl_to_zcr(sve_decode_vl(v->domain->arch.sve_vl));

... this line is moved in sve_context_init() because this is related to 
the SVE context.
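
In other words, something along these lines (untested sketch combining
the two suggestions above):

```c
int sve_context_init(struct vcpu *v)
{
    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
                             sve_ffrreg_ctx_size(sve_vl_bits),
                             L1_CACHE_BYTES);

    if ( !ctx )
        return -ENOMEM;

    /* Point to the end of Z0-Z31 memory, just before FFR memory */
    v->arch.vfp.sve_zreg_ctx_end = ctx +
        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));

    /* Part of the SVE context, so initialise it here rather than in
     * arch_vcpu_create(). */
    v->arch.zcr_el2 = vl_to_zcr(sve_vl_bits);

    return 0;
}
```

with the #ifdef then no longer needed in arch_vcpu_create().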

> +#endif
> +    }
>   
>       v->arch.hcr_el2 = get_default_hcr_flags();
>   
> @@ -582,6 +589,8 @@ fail:
>   
>   void arch_vcpu_destroy(struct vcpu *v)
>   {
> +    if ( is_sve_domain(v->domain) )
> +        sve_context_free(v);
>       vcpu_timer_destroy(v);
>       vcpu_vgic_free(v);
>       free_xenheap_pages(v->arch.stack, STACK_ORDER);
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index 730c3fb5a9c8..582405dfdf6a 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -26,6 +26,10 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
>   register_t compute_max_zcr(void);
>   register_t vl_to_zcr(unsigned int vl);
>   unsigned int get_sys_vl_len(void);
> +int sve_context_init(struct vcpu *v);
> +void sve_context_free(struct vcpu *v);
> +void sve_save_state(struct vcpu *v);
> +void sve_restore_state(struct vcpu *v);
>   
>   #else /* !CONFIG_ARM64_SVE */
>   
> @@ -46,6 +50,15 @@ static inline unsigned int get_sys_vl_len(void)
>       return 0;
>   }
>   
> +static inline int sve_context_init(struct vcpu *v)
> +{
> +    return 0;
> +}
> +
> +static inline void sve_context_free(struct vcpu *v) {}
> +static inline void sve_save_state(struct vcpu *v) {}
> +static inline void sve_restore_state(struct vcpu *v) {}
> +
>   #endif /* CONFIG_ARM64_SVE */
>   
>   #endif /* _ARM_ARM64_SVE_H */
> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
> index 4cabb9eb4d5e..3fdeb9d8cdef 100644
> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
> @@ -88,6 +88,9 @@
>   #ifndef ID_AA64ISAR2_EL1
>   #define ID_AA64ISAR2_EL1            S3_0_C0_C6_2
>   #endif
> +#ifndef ZCR_EL1
> +#define ZCR_EL1                     S3_0_C1_C2_0
> +#endif
>   
>   /* ID registers (imported from arm64/include/asm/sysreg.h in Linux) */
>   
> diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
> index e6e8c363bc16..4aa371e85d26 100644
> --- a/xen/arch/arm/include/asm/arm64/vfp.h
> +++ b/xen/arch/arm/include/asm/arm64/vfp.h
> @@ -6,7 +6,19 @@
>   
>   struct vfp_state
>   {
> +    /*
> +     * When SVE is enabled for the guest, fpregs memory will be used to
> +     * save/restore P0-P15 registers, otherwise it will be used for the V0-V31
> +     * registers.
> +     */
>       uint64_t fpregs[64] __vfp_aligned;
> +    /*
> +     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
> +     * where Z0-Z31 registers and FFR can be saved/restored, it points at the
> +     * end of the Z0-Z31 space and at the beginning of the FFR space, it's done
> +     * like that to ease the save/restore assembly operations.
> +     */
> +    uint64_t *sve_zreg_ctx_end;
>       register_t fpcr;
>       register_t fpexc32_el2;
>       register_t fpsr;
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index 331da0f3bcc3..814652d92568 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -195,6 +195,8 @@ struct arch_vcpu
>       register_t tpidrro_el0;
>   
>       /* HYP configuration */
> +    register_t zcr_el1;
> +    register_t zcr_el2;
>       register_t cptr_el2;
>       register_t hcr_el2;
>       register_t mdcr_el2;

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 18 18:30:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 18:30:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536505.834887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pziOO-0001w1-DH; Thu, 18 May 2023 18:30:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536505.834887; Thu, 18 May 2023 18:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pziOO-0001vu-Ag; Thu, 18 May 2023 18:30:24 +0000
Received: by outflank-mailman (input) for mailman id 536505;
 Thu, 18 May 2023 18:30:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pziOM-0001vk-Pp
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 18:30:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pziOM-0001P6-Dq; Thu, 18 May 2023 18:30:22 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.26.27]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pziOM-0007nP-8d; Thu, 18 May 2023 18:30:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=27+5mE4ZVTNFrOH1jB3mQzfQNjqckp9iiGm+9CxUwOU=; b=stRZrsjSuAJyoEHazgcR+F1uaL
	p8Qh6X8IuwpHbJE+0Elkuucy3K//2gd02XT6O4Flhr/EU4o9InDgDDGokn3iwRkAj7UbdLtSvXX2N
	FQV5luKdB329KzXU/G07otN1h4H88lMQCNNJh4CrUCK0kysdkmRJbne0lwqzEBG1xiQE=;
Message-ID: <779e46a5-a3d0-187a-6d15-e1a12f71278f@xen.org>
Date: Thu, 18 May 2023 19:30:20 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-6-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230424060248.1488859-6-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

One more remark.

On 24/04/2023 07:02, Luca Fancellu wrote:
>   #else /* !CONFIG_ARM64_SVE */
>   
> @@ -46,6 +50,15 @@ static inline unsigned int get_sys_vl_len(void)
>       return 0;
>   }
>   
> +static inline int sve_context_init(struct vcpu *v)
> +{
> +    return 0;

The call is protected by is_sve_domain(). So I think we want to return 
an error, just in case someone calls it outside of its intended use.
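
For instance (one possible shape of the stub, untested):

```c
static inline int sve_context_init(struct vcpu *v)
{
    /* Should never be reached when !CONFIG_ARM64_SVE. */
    return -EOPNOTSUPP;
}
```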

> +}
> +
> +static inline void sve_context_free(struct vcpu *v) {}
> +static inline void sve_save_state(struct vcpu *v) {}
> +static inline void sve_restore_state(struct vcpu *v) {}
> +

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 18 18:39:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 18:39:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536509.834898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pziWx-0002bc-8N; Thu, 18 May 2023 18:39:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536509.834898; Thu, 18 May 2023 18:39:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pziWx-0002bV-5A; Thu, 18 May 2023 18:39:15 +0000
Received: by outflank-mailman (input) for mailman id 536509;
 Thu, 18 May 2023 18:39:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1pziWv-0002bP-Ao
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 18:39:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pziWr-0001nb-2B; Thu, 18 May 2023 18:39:09 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=[192.168.26.27]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1pziWq-0008FY-S2; Thu, 18 May 2023 18:39:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=BcYQED4F2cqPOXbxdtEQiFqYi1G038BgSr3Wel6ihBY=; b=vAdSuWg3p8DwIA8T4usdohmXMv
	776EPTI+m7bGGgaBoVM9oInuC5XQwTTdwwZ4wmPhi+9AiC3OUDRCCcEuKnvlCfKqcgsdduvsVnCie
	Pn/csV0sQXlSSOJuvcrG6Q+2VsAb3AvJl1xHZ5rIGqhHlsPs32rIMoTUpgPgVixcD+PE=;
Message-ID: <ccbb32bb-56cc-11c0-b3d6-f4506dadc541@xen.org>
Date: Thu, 18 May 2023 19:39:06 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>, Jan Beulich <jbeulich@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-8-luca.fancellu@arm.com>
 <589fdeec-a0cf-1dc0-18b2-bd20c76832d2@suse.com>
 <7064B21E-414F-4FB5-BCC9-349388B32EA5@arm.com>
 <11e92082-6603-7180-f405-b96a14d430dd@suse.com>
 <37C35493-D5DA-4102-9B93-0045732E6F94@arm.com>
 <d49f1df6-ac49-27ef-d55f-b6284c76b055@suse.com>
 <5535FDB0-989E-4536-AF7B-8F0BB561667A@arm.com>
 <bd064b44-3531-a1b0-a7a8-1ad7ae434394@suse.com>
 <300BE89F-CA37-4A28-9CC5-5875E10D4A0C@arm.com>
 <a268313d-03be-9281-3627-c38115d3e5de@suse.com>
 <B534E482-71BF-4C5F-B9A8-3D567367F7AA@arm.com>
 <f9e631e1-02bb-a565-4df4-ccbb66fbaf49@suse.com>
 <C1815EB3-E875-4D49-831A-56E152BF4B61@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <C1815EB3-E875-4D49-831A-56E152BF4B61@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

Sorry for the late reply.

On 25/04/2023 07:04, Luca Fancellu wrote:
> 
> 
>> On 24 Apr 2023, at 17:10, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 24.04.2023 17:43, Luca Fancellu wrote:
>>>> On 24 Apr 2023, at 16:41, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 24.04.2023 17:34, Luca Fancellu wrote:
>>>>>> On 24 Apr 2023, at 16:25, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>> On 24.04.2023 17:18, Luca Fancellu wrote:
>>>>>>> Oh ok, I don’t know; here is what I get if, for example, I build arm32:
>>>>>>>
>>>>>>> arm-linux-gnueabihf-ld -EL -T arch/arm/xen.lds -N prelink.o \
>>>>>>> ./common/symbols-dummy.o -o ./.xen-syms.0
>>>>>>> arm-linux-gnueabihf-ld: prelink.o: in function `create_domUs':
>>>>>>> (.init.text+0x13464): undefined reference to `sve_domctl_vl_param'
>>>>>>
>>>>>> In particular with seeing this: What you copied here is a build with the
>>>>>> series applied only up to this patch? I ask because the patch here adds a
>>>>>> call only out of create_dom0().
>>>>>
>>>>> No I’ve do the changes on top of the serie, I’ve tried it now, only to this patch and it builds correctly,
>>>>> It was my mistake to don’t read carefully the error output.
>>>>>
>>>>> Anyway, I guess this change is not applicable, because we don’t have a symbol that
>>>>> is plainly 0 for domUs that could be used inside create_domUs.
>>>>
>>>> Possible, but would you mind first telling me in which other patch(es) the
>>>> further reference(s) are introduced, so I could take a look without
>>>> (again) digging through the entire series?
>>>
>>> Sure, the other references to the function are introduced in patch 11, "xen/arm: add sve property for dom0less domUs".
>>
>> Personally I'm inclined to suggest adding "#ifdef CONFIG_ARM64_SVE" there.
>> But I guess that may again go against your desire not to ignore inapplicable
>> options. Still, I can't resist at least asking how an "sve" node on Arm32 is
>> different from an entirely unknown one.
> 
> It would be OK for me to use #ifdef CONFIG_ARM64_SVE and fail in the #else branch,
> but I have had the feeling in the past that the Arm maintainers are not very happy with
> #ifdefs. I might be wrong, so I'll wait for them to give an opinion, and then I will be happy to follow it.

IIRC, your suggestion is for patch #11. In that case, my preference is 
the #ifdef plus throwing an error in the #else branch. This would avoid 
silently ignoring the property if SVE is not enabled (both Bertrand and 
I agreed it should not be ignored, see [1]).

Cheers,

[1] 
https://lore.kernel.org/all/7614AE25-F59D-430A-9C3E-30B1CE0E1580@arm.com/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 18 19:37:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 19:37:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536514.834909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzjRD-0000YF-NO; Thu, 18 May 2023 19:37:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536514.834909; Thu, 18 May 2023 19:37:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzjRD-0000Y8-IN; Thu, 18 May 2023 19:37:23 +0000
Received: by outflank-mailman (input) for mailman id 536514;
 Thu, 18 May 2023 19:37:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzjRC-0000Xy-Mp; Thu, 18 May 2023 19:37:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzjRC-0003W3-Gu; Thu, 18 May 2023 19:37:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzjRB-0003qx-Te; Thu, 18 May 2023 19:37:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzjRB-0004fi-T6; Thu, 18 May 2023 19:37:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g+MdtslopQ3z6dkxv2ayg+pmdLBIDMc4aeLqvPYAm4s=; b=NbDlhArW/UOeK/9iDuQH64JI/6
	Y2i9lEBGALO8+ifCM3w17U6k4ytm/7Kw2vTFSQwuOryXK/LXfEbNxVnN1vUh4BeIF8Pj9rRV678G2
	dobTFD1gJcgVhfyHsKIGfRW+zrE4C67HknDPzEpsRjjWkiQXuLCxMJ6py7757xd24kpI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180698-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180698: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=5ff58a0ce7a6ad452919a86a05e27427ccf1f27b
X-Osstest-Versions-That:
    libvirt=b10bc8f7ab6f9986ccc54ba04fc5b3bad7576be6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 May 2023 19:37:21 +0000

flight 180698 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180698/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180688
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180688
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180688
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              5ff58a0ce7a6ad452919a86a05e27427ccf1f27b
baseline version:
 libvirt              b10bc8f7ab6f9986ccc54ba04fc5b3bad7576be6

Last test of basis   180688  2023-05-17 04:20:21 Z    1 days
Testing same since   180698  2023-05-18 04:18:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   b10bc8f7ab..5ff58a0ce7  5ff58a0ce7a6ad452919a86a05e27427ccf1f27b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 18 20:29:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 20:29:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536520.834917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzkFn-000616-Jm; Thu, 18 May 2023 20:29:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536520.834917; Thu, 18 May 2023 20:29:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzkFn-00060z-Gv; Thu, 18 May 2023 20:29:39 +0000
Received: by outflank-mailman (input) for mailman id 536520;
 Thu, 18 May 2023 20:29:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzkFm-00060p-0Z; Thu, 18 May 2023 20:29:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzkFl-0005G3-No; Thu, 18 May 2023 20:29:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzkFl-00077L-Cm; Thu, 18 May 2023 20:29:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzkFl-0002bW-C6; Thu, 18 May 2023 20:29:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ipPWF7UbJu7UukY68WVGOATrSspFJTjeP/ujTokov3w=; b=lHIp8H9mdzrsxvlp0WFOZf0Z0P
	qUiFohnfvD49AKlwuU2OmTb8VMtCx+WKz3Ygq+yS6s6+AFlWoDBrCk8N0zGghMyiSzIaw3xymaaDw
	fJAJrpVmKUNaas/HgwbkFjK8CkjV2hkzWbEUsSEQJCUYlOFojQTU0NX+79NOELMGWrBs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180699-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180699: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:test-amd64-coresched-amd64-xl:<job status>:broken:regression
    qemu-mainline:test-amd64-coresched-amd64-xl:capture-logs(26):broken:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d27e7c359330ba7020bdbed7ed2316cb4cf6ffc1
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 18 May 2023 20:29:37 +0000

flight 180699 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180699/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-coresched-amd64-xl    <job status>                 broken
 test-amd64-coresched-amd64-xl 26 capture-logs(26)      broken REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d27e7c359330ba7020bdbed7ed2316cb4cf6ffc1
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    1 days
Testing same since   180699  2023-05-18 07:21:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  Igor Mammedov <imammedo@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Markus Armbruster <armbru@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                broken  
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-coresched-amd64-xl broken
broken-step test-amd64-coresched-amd64-xl capture-logs(26)

Not pushing.

------------------------------------------------------------
commit d27e7c359330ba7020bdbed7ed2316cb4cf6ffc1
Author: Markus Armbruster <armbru@redhat.com>
Date:   Wed May 17 08:16:00 2023 +0200

    qapi/parser: Drop two bad type hints for now
    
    Two type hints fail centos-stream-8-x86_64 CI.  They are actually
    broken.  Changing them to Optional[re.Match[str]] fixes them locally
    for me, but then CI fails differently.  Drop them for now.
    
    Fixes: 3e32dca3f0d1 (qapi: Rewrite parsing of doc comment section symbols and tags)
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Message-Id: <20230517061600.1782455-1-armbru@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Tested-by: Igor Mammedov <imammedo@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 19200a0edf67a193275f2b194f7b3b731b3817b3
Merge: 6972ef1440 1e35d32789
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Wed May 17 05:42:14 2023 -0700

    Merge tag 'linux-user-for-8.1-pull-request' of https://github.com/vivier/qemu into staging
    
    linux-user pull request 20230512-v4
    
    add open_tree(), move_mount()
    add /proc/cpuinfo for riscv
    fixes and cleanup
    
    # -----BEGIN PGP SIGNATURE-----
    #
    # iQJGBAABCAAwFiEEzS913cjjpNwuT1Fz8ww4vT8vvjwFAmRkiZISHGxhdXJlbnRA
    # dml2aWVyLmV1AAoJEPMMOL0/L748FdIP/RC1JaCftkP7ajAstNbZLMLegMxjUYHV
    # TrdhsMOsm804ZmLgTqqfS3bJ080mIHup0xUnHBckcEtUcwaz54cJ1BAR2WlM3/8A
    # t3fHMt3PDkh3OPd/3AnmpLE8XRh7yBztirPYfZc6SKqnFzT0TZrwBoQnwprEnZ5r
    # c0gbrgLZLunZhrWU1BbQmuIufW1qDoQo4PzwnyZeux1fHA1/v/dx3wgSLpv3V4k6
    # x0Kj8TvtMUU4/io2RqYF4jKopfhwsh0jnr9rlOmydOExalKq1VbRptJI2UC4KVOY
    # MZuApF1EaZfrW+v/WSlvmzaZ/zRzP1L0X3Xh0wB4J9Rj3057/elXr6bi+R+rM46p
    # xGTcti9ahWKP2J4/xrazRw2lfPsLcw/YbqVGG79AX1xLJPCiWq6lamzc/g3ptFnx
    # F/RRETe65z7apzF/nzU7SDOsMdN5p4/fMb1SysLuAov5OepNVjNVWyiTgqOHB5uC
    # ye+lOYkkvk+qRdMbls/fIcjDQ3C4AjoBWj4QlgRc0/Qf6ac4TkVjzPa70Y6eyzzS
    # LEV9D4fXD8EZgYWENNGmbbKPNbtfqc9uR6gXdgkEsKDx/rf5IMf1d6r1C99dhB3A
    # nbu0JpFKKY2lhD2oGVPDE3UQMW9DXXhZpDApUBsLNiEwfuoXZee+apH+6jc8tbn6
    # r+8LFB1mM9os
    # =NfIV
    # -----END PGP SIGNATURE-----
    # gpg: Signature made Wed 17 May 2023 01:00:18 AM PDT
    # gpg:                using RSA key CD2F75DDC8E3A4DC2E4F5173F30C38BD3F2FBE3C
    # gpg:                issuer "laurent@vivier.eu"
    # gpg: Good signature from "Laurent Vivier <lvivier@redhat.com>" [undefined]
    # gpg:                 aka "Laurent Vivier <laurent@vivier.eu>" [undefined]
    # gpg:                 aka "Laurent Vivier (Red Hat) <lvivier@redhat.com>" [undefined]
    # gpg: WARNING: This key is not certified with a trusted signature!
    # gpg:          There is no indication that the signature belongs to the owner.
    # Primary key fingerprint: CD2F 75DD C8E3 A4DC 2E4F  5173 F30C 38BD 3F2F BE3C
    
    * tag 'linux-user-for-8.1-pull-request' of https://github.com/vivier/qemu:
      linux-user: fix getgroups/setgroups allocations
      linux-user: Fix mips fp64 executables loading
      linux-user: Don't require PROT_READ for mincore
      linux-user: Add new flag VERIFY_NONE
      linux-user/main: Use list_cpus() instead of cpu_list()
      linux-user: Add open_tree() syscall
      linux-user: Add move_mount() syscall
      linux-user: report ENOTTY for unknown ioctls
      linux-user: Emulate /proc/cpuinfo output for riscv
    
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 1e35d327890bdd117a67f79c52e637fb12bb1bf4
Author: Michael Tokarev <mjt@tls.msk.ru>
Date:   Sun Apr 9 13:53:27 2023 +0300

    linux-user: fix getgroups/setgroups allocations
    
    linux-user getgroups(), setgroups(), getgroups32() and setgroups32()
    used alloca() to allocate grouplist arrays, with unchecked gidsetsize
    coming from the "guest".  With NGROUPS_MAX being 65536 (on Linux; it
    is common for an application to allocate NGROUPS_MAX entries for
    getgroups()), a typical allocation is half a megabyte on the stack.
    That simply overflows the stack, leading to an immediate SIGSEGV in
    the actual system getgroups() implementation.
    
    An example of such an issue is aptitude, e.g.
    https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=811087#72
    
    Cap gidsetsize to NGROUPS_MAX (return EINVAL if it is larger than that),
    and use heap allocation for grouplist instead of alloca().  While at it,
    fix coding style and make all 4 implementations identical.
    
    Try not to impose arbitrary limits - for example, allow gidsetsize to
    be negative for getgroups() - just do not allocate a negative-sized
    grouplist in that case, but still make the actual getgroups() call.
    Do not allow a negative gidsetsize for setgroups(), though, since its
    argument is unsigned.
    
    Capping at NGROUPS_MAX is somewhat arbitrary - we could allow more,
    and a set size of NGROUPS_MAX+1 is not in itself an error. But we
    must not allow integer overflow in the size of the array being
    allocated. Maybe it is enough to just call g_try_new() and return
    ENOMEM if it fails.
    
    Maybe there is also no need to convert setgroups(), since its set is
    usually smaller and known beforehand (KERN_NGROUPS_MAX is actually
    63; this is apparently a kernel-imposed limit on the runtime group
    set).
    
    The patch fixes aptitude segfault mentioned above.
    
    Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
    Message-Id: <20230409105327.1273372-1-mjt@msgid.tls.msk.ru>
    Signed-off-by: Laurent Vivier <laurent@vivier.eu>
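
A minimal sketch of the allocation policy described in the commit message
above: cap at NGROUPS_MAX, allocate the grouplist on the heap instead of
with alloca(), and tolerate negative sizes for getgroups(). All names here
are invented for illustration; this is not QEMU's actual linux-user code.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Invented constant name; mirrors the Linux NGROUPS_MAX value. */
#define SKETCH_NGROUPS_MAX 65536

/* Validate a guest-supplied gidsetsize for getgroups(): values above
 * NGROUPS_MAX are rejected with EINVAL, negative values are allowed
 * (no array will be allocated, but the syscall is still made). */
static int check_getgroups_size(int gidsetsize)
{
    return (gidsetsize > SKETCH_NGROUPS_MAX) ? -EINVAL : 0;
}

/* Allocate the grouplist on the heap instead of the stack; sizes <= 0
 * yield no allocation at all. Caller frees with free(). */
static unsigned int *alloc_grouplist(int gidsetsize)
{
    if (gidsetsize <= 0)
        return NULL;
    return calloc((size_t)gidsetsize, sizeof(unsigned int));
}
```

With this shape, a 65536-entry request costs 256 KiB of heap rather than
an unchecked stack allocation that can fault before the syscall is made.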

commit a0f8d2701b205d9d7986aa555e0566b13dc18fa0
Author: Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
Date:   Tue Apr 4 08:21:54 2023 +0300

    linux-user: Fix mips fp64 executables loading
    
    If a program requires fr1, we should set the FR bit of the CP0
    control/status register and add the F64 hardware flag. The
    corresponding `else if` branch is copied from the Linux kernel
    sources (see the `arch_check_elf` function in
    linux/arch/mips/kernel/elf.c).
    
    Signed-off-by: Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
    Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Message-Id: <20230404052153.16617-1-dkovalev@compiler-toolchain-for.me>
    Signed-off-by: Laurent Vivier <laurent@vivier.eu>
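
The FR-bit decision above can be sketched roughly as follows. CP0ST_FR
mirrors the real Status.FR bit position (bit 26); the fp-ABI and hwcap
constants are invented placeholders, not the kernel's or QEMU's values.

```c
#include <assert.h>

#define CP0ST_FR        (1u << 26)  /* Status.FR: 64-bit FPU register mode */
#define MIPS_ABI_FP_64  2u          /* placeholder for the fr1 requirement */
#define HWCAP_F64       (1u << 0)   /* placeholder hardware flag */

/* If the ELF's fp ABI requires fr1, enable Status.FR and advertise
 * the F64 capability via hwcap. Returns the updated status word. */
static unsigned apply_elf_fp_mode(unsigned cp0_status, unsigned fp_abi,
                                  unsigned *hwcap)
{
    if (fp_abi == MIPS_ABI_FP_64) {
        cp0_status |= CP0ST_FR;
        *hwcap |= HWCAP_F64;
    }
    return cp0_status;
}
```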

commit f443a26cc6c077f792a5114c5229020ecf44ba3b
Author: Thomas Weißschuh <thomas@t-8ch.de>
Date:   Sat Apr 22 12:03:14 2023 +0200

    linux-user: Don't require PROT_READ for mincore
    
    The kernel does not require PROT_READ for addresses passed to mincore.
    For example the fincore(1) tool from util-linux uses PROT_NONE and
    currently does not work under qemu-user.
    
    Example (with fincore(1) from util-linux 2.38):
    
    $ fincore /proc/self/exe
    RES PAGES  SIZE FILE
    24K     6 22.1K /proc/self/exe
    
    $ qemu-x86_64 /usr/bin/fincore /proc/self/exe
    fincore: failed to do mincore: /proc/self/exe: Cannot allocate memory
    
    With this patch:
    
    $ ./build/qemu-x86_64 /usr/bin/fincore /proc/self/exe
    RES PAGES  SIZE FILE
    24K     6 22.1K /proc/self/exe
    
    Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>
    Reviewed-by: Laurent Vivier <laurent@vivier.eu>
    Message-Id: <20230422100314.1650-3-thomas@t-8ch.de>
    Signed-off-by: Laurent Vivier <laurent@vivier.eu>

commit 64d06015f6f44e3338af0ab2968ef7467dd2f3ef
Author: Thomas Weißschuh <thomas@t-8ch.de>
Date:   Sat Apr 22 12:03:13 2023 +0200

    linux-user: Add new flag VERIFY_NONE
    
    This can be used to validate that an address range is mapped but without
    being readable or writable.
    
    It will be used by an updated implementation of mincore().
    
    Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>
    Reviewed-by: Laurent Vivier <laurent@vivier.eu>
    Message-Id: <20230422100314.1650-2-thomas@t-8ch.de>
    Signed-off-by: Laurent Vivier <laurent@vivier.eu>
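
One way to picture the flag, using invented page-flag values rather than
QEMU's actual definitions: VERIFY_NONE requests only that the range is
mapped at all, so a PROT_NONE mapping passes the check, which is exactly
what the mincore() fix above needs.

```c
#include <assert.h>

/* Invented flag values for illustration only. */
enum {
    PAGE_VALID = 1 << 0,   /* range is mapped at all */
    PAGE_READ  = 1 << 1,
    PAGE_WRITE = 1 << 2,
};
enum {
    VERIFY_NONE  = 0,           /* mapped, possibly PROT_NONE */
    VERIFY_READ  = PAGE_READ,
    VERIFY_WRITE = PAGE_WRITE,
};

/* Does a page with the given flags satisfy the requested access check?
 * An unmapped page always fails; beyond that, every bit requested by
 * 'verify' must be present in the page's flags. */
static int access_ok_sketch(int page_flags, int verify)
{
    if (!(page_flags & PAGE_VALID))
        return 0;
    return (page_flags & verify) == verify;
}
```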

commit b67e5cb43b64cd511785aa1b35876b5e5bf55f69
Author: Thomas Huth <thuth@redhat.com>
Date:   Mon Apr 24 14:21:26 2023 +0200

    linux-user/main: Use list_cpus() instead of cpu_list()
    
    This way we can get rid of the #ifdef'fery and the XXX comment here
    (it is repeated in the list_cpus() function anyway).
    
    Signed-off-by: Thomas Huth <thuth@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Message-Id: <20230424122126.236586-1-thuth@redhat.com>
    Signed-off-by: Laurent Vivier <laurent@vivier.eu>

commit 7f696cddd9d7bbde0ecc489eb9a29c7196b29727
Author: Thomas Weißschuh <thomas@t-8ch.de>
Date:   Mon Apr 24 17:34:29 2023 +0200

    linux-user: Add open_tree() syscall
    
    Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>
    Reviewed-by: Laurent Vivier <laurent@vivier.eu>
    Message-Id: <20230424153429.276788-2-thomas@t-8ch.de>
    [lv: move declaration at the beginning of the block,
         define syscall]
    Signed-off-by: Laurent Vivier <laurent@vivier.eu>

commit 4b2d2753e88bdb25db5eab84c172135200f15c99
Author: Thomas Weißschuh <thomas@t-8ch.de>
Date:   Mon Apr 24 17:34:28 2023 +0200

    linux-user: Add move_mount() syscall
    
    Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>
    Reviewed-by: Laurent Vivier <laurent@vivier.eu>
    [lv: define syscall]
    Message-Id: <20230424153429.276788-1-thomas@t-8ch.de>
    Signed-off-by: Laurent Vivier <laurent@vivier.eu>

commit 59d11727768a0a29675a78a18c3f87390d5dc90a
Author: Thomas Weißschuh <thomas@t-8ch.de>
Date:   Wed Apr 26 09:06:59 2023 +0200

    linux-user: report ENOTTY for unknown ioctls
    
    The correct error number for unknown ioctls is ENOTTY.
    
    ENOSYS would mean that the ioctl() syscall itself is not implemented,
    which is very improbable and unexpected for userspace.
    
    ENOTTY means "Inappropriate ioctl for device". This is what the kernel
    returns on unknown ioctls, what qemu is trying to express and what
    userspace is prepared to handle.
    
    Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    Message-Id: <20230426070659.80649-1-thomas@t-8ch.de>
    Signed-off-by: Laurent Vivier <laurent@vivier.eu>
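
The error-code choice can be demonstrated with a toy dispatcher (the
function name and request value are invented for this sketch):

```c
#include <assert.h>
#include <errno.h>

#define SKETCH_TCGETS 0x5401u   /* an ioctl this toy dispatcher knows */

/* Return 0 for known requests and -ENOTTY ("Inappropriate ioctl for
 * device") for unknown ones -- never -ENOSYS, which would claim the
 * ioctl() syscall itself is unimplemented. */
static long do_ioctl_sketch(unsigned long request)
{
    switch (request) {
    case SKETCH_TCGETS:
        return 0;
    default:
        return -ENOTTY;
    }
}
```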

commit 8ddc171b7b302844d9f4598125fed925b72c686c
Author: Afonso Bordado <afonsobordado@gmail.com>
Date:   Sun Mar 5 14:34:37 2023 +0000

    linux-user: Emulate /proc/cpuinfo output for riscv
    
    RISC-V does not expose all extensions via hwcaps, thus some userspace
    applications may want to query these via /proc/cpuinfo.
    
    Currently, when querying this file, the host's file is shown instead,
    which is slightly confusing. Emulate a basic /proc/cpuinfo file with
    mmu info and an ISA string.
    
    Signed-off-by: Afonso Bordado <afonsobordado@gmail.com>
    Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
    Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
    Reviewed-by: Laurent Vivier <laurent@vivier.eu>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
    Message-Id: <167873059442.9885.15152085316575248452-0@git.sr.ht>
    [lv: removed the test that fails in CI for unknown reason]
    Signed-off-by: Laurent Vivier <laurent@vivier.eu>
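
A hedged sketch of what such an emulated /proc/cpuinfo entry might look
like; the field layout, function name, and "sv48" mmu value are
assumptions for illustration, not the commit's exact output.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a minimal riscv /proc/cpuinfo entry with an ISA string and an
 * mmu field into buf. Returns the number of characters that snprintf
 * would have written. */
static int format_riscv_cpuinfo(char *buf, size_t len, int xlen,
                                const char *extensions)
{
    return snprintf(buf, len,
                    "processor\t: 0\n"
                    "hart\t\t: 0\n"
                    "isa\t\t: rv%d%s\n"
                    "mmu\t\t: sv48\n",
                    xlen, extensions);
}
```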


From xen-devel-bounces@lists.xenproject.org Thu May 18 21:04:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 21:04:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536530.834928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzknd-00022S-E7; Thu, 18 May 2023 21:04:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536530.834928; Thu, 18 May 2023 21:04:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzknd-00022L-BO; Thu, 18 May 2023 21:04:37 +0000
Received: by outflank-mailman (input) for mailman id 536530;
 Thu, 18 May 2023 21:04:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yK0N=BH=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pzknb-00022F-KX
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 21:04:35 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on20608.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 957be541-f5bf-11ed-b22d-6b7b168915f2;
 Thu, 18 May 2023 23:04:32 +0200 (CEST)
Received: from BN9P220CA0029.NAMP220.PROD.OUTLOOK.COM (2603:10b6:408:13e::34)
 by DS7PR12MB6006.namprd12.prod.outlook.com (2603:10b6:8:7d::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 21:04:29 +0000
Received: from BN8NAM11FT089.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:13e:cafe::f7) by BN9P220CA0029.outlook.office365.com
 (2603:10b6:408:13e::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 21:04:29 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT089.mail.protection.outlook.com (10.13.176.105) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.20 via Frontend Transport; Thu, 18 May 2023 21:04:29 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 16:04:28 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 18 May 2023 16:04:27 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 957be541-f5bf-11ed-b22d-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=T6+AhEIkwA9zB0CpYlpi2x/xWUDwNWP5GuE6V+/Cncw=;
 b=ep+QJRmSuKpjYoMLSznFDWbWzw6dxVMPO5owNytJv1kE+xeJkY0drm/P/oE/vZ5aUS9o9njnkO/9W5sECKV+LAASYwLQOj7swd8109q2P3DBaNPKuMEAoWQmd+eTrR7bwaPwfJrfm+SToK/tUaYHT/TPo9w9nGQPgHsu824+3GQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <2810bdac-9c89-7373-3a5a-0365127a2928@amd.com>
Date: Thu, 18 May 2023 17:04:21 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 3/8] iommu/arm: Introduce iommu_add_dt_pci_device API
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Paul Durrant
	<paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
 <20230511191654.400720-4-stewart.hildebrand@amd.com>
 <71df3cfb-602d-e543-33bb-02e708e85f5e@suse.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <71df3cfb-602d-e543-33bb-02e708e85f5e@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT089:EE_|DS7PR12MB6006:EE_
X-MS-Office365-Filtering-Correlation-Id: 818231c8-0317-44d6-728b-08db57e3784e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 21:04:29.0188
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 818231c8-0317-44d6-728b-08db57e3784e
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT089.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB6006

On 5/12/23 03:05, Jan Beulich wrote:
> On 11.05.2023 21:16, Stewart Hildebrand wrote:
>> --- a/xen/include/xen/iommu.h
>> +++ b/xen/include/xen/iommu.h
>> @@ -219,7 +219,8 @@ int iommu_dt_domain_init(struct domain *d);
>>  int iommu_release_dt_devices(struct domain *d);
>>
>>  /*
>> - * Helper to add master device to the IOMMU using generic IOMMU DT bindings.
>> + * Helpers to add master device to the IOMMU using generic (PCI-)IOMMU
>> + * DT bindings.
>>   *
>>   * Return values:
>>   *  0 : device is protected by an IOMMU
>> @@ -228,6 +229,9 @@ int iommu_release_dt_devices(struct domain *d);
>>   *      (IOMMU is not enabled/present or device is not connected to it).
>>   */
>>  int iommu_add_dt_device(struct dt_device_node *np);
>> +#ifdef CONFIG_HAS_PCI
>> +int iommu_add_dt_pci_device(struct pci_dev *pdev);
>> +#endif
> 
> Is the #ifdef really necessary?

No, I will remove it in v3.


From xen-devel-bounces@lists.xenproject.org Thu May 18 21:05:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 21:05:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536533.834938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzkoK-0002YL-Pc; Thu, 18 May 2023 21:05:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536533.834938; Thu, 18 May 2023 21:05:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzkoK-0002YE-Mp; Thu, 18 May 2023 21:05:20 +0000
Received: by outflank-mailman (input) for mailman id 536533;
 Thu, 18 May 2023 21:05:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yK0N=BH=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pzkoJ-0002Wp-9h
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 21:05:19 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on20608.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b168079e-f5bf-11ed-b22d-6b7b168915f2;
 Thu, 18 May 2023 23:05:18 +0200 (CEST)
Received: from BN0PR04CA0079.namprd04.prod.outlook.com (2603:10b6:408:ea::24)
 by DS0PR12MB8217.namprd12.prod.outlook.com (2603:10b6:8:f1::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 21:05:12 +0000
Received: from BN8NAM11FT014.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ea:cafe::73) by BN0PR04CA0079.outlook.office365.com
 (2603:10b6:408:ea::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 21:05:12 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT014.mail.protection.outlook.com (10.13.177.142) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 21:05:12 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 16:05:11 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 18 May 2023 16:05:09 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b168079e-f5bf-11ed-b22d-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gI3C+9aSno4SDqtQbilkfQ8C+dQxysKiAvmQPjqz+uc=;
 b=aP38xa4Zgf+nmogtaQXlwhWsIRBaZ12byb1gKeaVhlFvNWmdTfRNZX8YqueHTOBJqAfjesX/KsmvEj80E0scUP2sJiTwllB965gKVJxUclxf2h81us0OnFaqkpfCsX5JTaUMr20ml4HAuaC4NJqXUEXJ0eMIL0LealKplnOoSww=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <c47c6f34-b887-b768-b460-69c7076e9c67@amd.com>
Date: Thu, 18 May 2023 17:05:08 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 5/8] pci/arm: Use iommu_add_dt_pci_device()
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant
	<paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Julien Grall <julien@xen.org>, Rahul Singh <rahul.singh@arm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<xen-devel@lists.xenproject.org>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
 <20230511191654.400720-6-stewart.hildebrand@amd.com>
 <61ae93e8-ac8f-b373-4fa7-0a8aeb61ef4f@suse.com>
 <f7d78b4e-3a16-d342-59d2-caa4d2b75b9c@amd.com>
 <cdb08ae5-1cbb-ac43-67bb-0c73bde7f479@suse.com>
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <cdb08ae5-1cbb-ac43-67bb-0c73bde7f479@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT014:EE_|DS0PR12MB8217:EE_
X-MS-Office365-Filtering-Correlation-Id: b336dfa7-7308-46a6-50c3-08db57e39200
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 21:05:12.1275
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b336dfa7-7308-46a6-50c3-08db57e39200
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT014.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB8217

On 5/15/23 03:30, Jan Beulich wrote:
> On 12.05.2023 23:03, Stewart Hildebrand wrote:
>> On 5/12/23 03:25, Jan Beulich wrote:
>>> On 11.05.2023 21:16, Stewart Hildebrand wrote:
>>>> @@ -762,9 +767,20 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>>>>              pdev->domain = NULL;
>>>>              goto out;
>>>>          }
>>>> +#ifdef CONFIG_HAS_DEVICE_TREE
>>>> +        ret = iommu_add_dt_pci_device(pdev);
>>>> +        if ( ret < 0 )
>>>> +        {
>>>> +            printk(XENLOG_ERR "pci-iommu translation failed: %d\n", ret);
>>>> +            goto out;
>>>> +        }
>>>> +#endif
>>>>          ret = iommu_add_device(pdev);
>>>
>>> Hmm, am I misremembering that in the earlier patch you had #else to
>>> invoke the alternative behavior?
>>
>> You are remembering correctly. v1 had an #else, v2 does not.
>>
>>> Now you end up calling both functions;
>>> if that's indeed intended,
>>
>> Yes, this is intentional.
>>
>>> this may still want doing differently.
>>> Looking at the earlier patch introducing the function, I can't infer
>>> though whether that's intended: iommu_add_dt_pci_device() checks that
>>> the add_device hook is present, but then I didn't find any use of this
>>> hook. The revlog there suggests the check might be stale.
>>
>> Ah, right, the ops->add_device check is stale in the other patch. Good catch, I'll remove it there.
>>
>>> If indeed the function does only preparatory work, I don't see why it
>>> would need naming "iommu_..."; I'd rather consider pci_add_dt_device()
>>> then.
>>
>> The function has now been reduced to reading SMMU configuration data from DT and mapping RID/BDF -> AXI stream ID. However, it is still SMMU related, and it is still invoking another iommu_ops hook function, dt_xlate (which is yet another AXI stream ID translation, separate from what is being discussed here). Does this justify keeping "iommu_..." in the name? I'm not convinced pci_add_dt_device() is a good name for it either (more on this below).
> 
> The function being SMMU-related pretty strongly suggests it wants to be
> invoked via a hook. If the add_device() one isn't suitable, perhaps we
> need a new (optional) prepare_device() one? With pci_add_device() then
> calling iommu_prepare_device(), wrapping the hook invocation?
> 
> But just to be clear: A new hook would need enough justification as to
> the existing one being unsuitable.

I'll move it to the add_device hook in v3


From xen-devel-bounces@lists.xenproject.org Thu May 18 21:06:02 2023
Message-ID: <e06aa189-dcb6-cdc0-122a-0743ee831ce8@amd.com>
Date: Thu, 18 May 2023 17:05:40 -0400
Subject: Re: [RFC PATCH v2 6/8] pci/arm: don't do iommu call for phantom
 functions
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Paul Durrant <paul@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Rahul Singh <rahul.singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
In-Reply-To: <b1403a6b-80f8-6277-5bd0-b21a2c8f0dd9@suse.com>
References: <20230511191654.400720-1-stewart.hildebrand@amd.com>
 <20230511191654.400720-7-stewart.hildebrand@amd.com>
 <b1403a6b-80f8-6277-5bd0-b21a2c8f0dd9@suse.com>

On 5/12/23 03:28, Jan Beulich wrote:
> On 11.05.2023 21:16, Stewart Hildebrand wrote:
>> It's not necessary to add/remove/assign/deassign PCI phantom functions
>> for the ARM SMMU drivers. All associated AXI stream IDs are added during
>> the iommu call for the base PCI device/function.
>>
>> However, the ARM SMMU drivers can cope with the extra/unnecessary calls just
>> fine, so this patch is RFC as it's not strictly required.
> 
> Tying the skipping to IS_ENABLED(CONFIG_HAS_DEVICE_TREE) goes against
> one of Julien's earlier comments, towards DT and ACPI wanting to
> co-exist at some point. So I think keeping the supposedly unnecessary
> calls is going to be unavoidable, unless you have a runtime property
> that you could check instead.

I'll drop this patch in v3.


From xen-devel-bounces@lists.xenproject.org Thu May 18 21:07:43 2023
Message-ID: <20230518210658.66156-1-stewart.hildebrand@amd.com>
Date: Thu, 18 May 2023 17:06:52 -0400
Subject: [PATCH v3 0/6] SMMU handling for PCIe Passthrough on ARM
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: xen-devel@lists.xenproject.org
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Rahul Singh <rahul.singh@arm.com>, Jan Beulich <jbeulich@suse.com>,
 Paul Durrant <paul@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.1

This series introduces SMMU handling for PCIe passthrough on ARM. These patches
are independent of the in-progress vPCI reference counting/locking work and can
be upstreamed separately.

v2->v3:
* drop "pci/arm: Use iommu_add_dt_pci_device()"
* drop "RFC: pci/arm: don't do iommu call for phantom functions"
* move invocation of sideband ID mapping function to add_device()
  platform_ops/iommu_ops hook

v1->v2:
* phantom device handling
* shuffle around iommu_add_dt_pci_device()

Oleksandr Andrushchenko (1):
  xen/arm: smmuv2: Add PCI devices support for SMMUv2

Oleksandr Tyshchenko (3):
  xen/arm: Move is_protected flag to struct device
  iommu/arm: Add iommu_dt_xlate()
  iommu/arm: Introduce iommu_add_dt_pci_sideband_ids API

Rahul Singh (1):
  xen/arm: smmuv3: Add PCI devices support for SMMUv3

Stewart Hildebrand (1):
  iommu/arm: iommu_add_dt_pci_sideband_ids phantom handling

 xen/arch/arm/domain_build.c              |   4 +-
 xen/arch/arm/include/asm/device.h        |  13 ++
 xen/common/device_tree.c                 |   2 +-
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |   8 +-
 xen/drivers/passthrough/arm/smmu-v3.c    |  83 +++++++--
 xen/drivers/passthrough/arm/smmu.c       | 116 ++++++++++---
 xen/drivers/passthrough/device_tree.c    | 204 ++++++++++++++++++++---
 xen/include/xen/device_tree.h            |  38 +++--
 xen/include/xen/iommu.h                  |  17 +-
 9 files changed, 408 insertions(+), 77 deletions(-)

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 21:08:10 2023
Message-ID: <20230518210658.66156-2-stewart.hildebrand@amd.com>
Date: Thu, 18 May 2023 17:06:53 -0400
Subject: [PATCH v3 1/6] xen/arm: Move is_protected flag to struct device
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: xen-devel@lists.xenproject.org
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Rahul Singh <rahul.singh@arm.com>,
 Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <20230518210658.66156-1-stewart.hildebrand@amd.com>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This flag will be reused for PCI devices by subsequent patches.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v2->v3:
* no change

v1->v2:
* no change

downstream->v1:
* rebase
* s/dev_node->is_protected/dev_node->dev.is_protected/ in smmu.c
* s/dt_device_set_protected(dev_to_dt(dev))/device_set_protected(dev)/ in smmu-v3.c
* remove redundant device_is_protected checks in smmu-v3.c/ipmmu-vmsa.c

(cherry picked from commit 59753aac77528a584d3950936b853ebf264b68e7 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/arch/arm/domain_build.c              |  4 ++--
 xen/arch/arm/include/asm/device.h        | 13 +++++++++++++
 xen/common/device_tree.c                 |  2 +-
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |  8 +-------
 xen/drivers/passthrough/arm/smmu-v3.c    |  7 +------
 xen/drivers/passthrough/arm/smmu.c       |  2 +-
 xen/drivers/passthrough/device_tree.c    | 15 +++++++++------
 xen/include/xen/device_tree.h            | 13 -------------
 8 files changed, 28 insertions(+), 36 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 71f307a572e9..d228da641367 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2503,7 +2503,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
             return res;
         }
 
-        if ( dt_device_is_protected(dev) )
+        if ( device_is_protected(dt_to_dev(dev)) )
         {
             dt_dprintk("%s setup iommu\n", dt_node_full_name(dev));
             res = iommu_assign_dt_device(d, dev);
@@ -3003,7 +3003,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
         return res;
 
     /* If xen_force, we allow assignment of devices without IOMMU protection. */
-    if ( xen_force && !dt_device_is_protected(node) )
+    if ( xen_force && !device_is_protected(dt_to_dev(node)) )
         return 0;
 
     return iommu_assign_dt_device(kinfo->d, node);
diff --git a/xen/arch/arm/include/asm/device.h b/xen/arch/arm/include/asm/device.h
index b5d451e08776..086dde13eb6b 100644
--- a/xen/arch/arm/include/asm/device.h
+++ b/xen/arch/arm/include/asm/device.h
@@ -1,6 +1,8 @@
 #ifndef __ASM_ARM_DEVICE_H
 #define __ASM_ARM_DEVICE_H
 
+#include <xen/types.h>
+
 enum device_type
 {
     DEV_DT,
@@ -20,6 +22,7 @@ struct device
 #endif
     struct dev_archdata archdata;
     struct iommu_fwspec *iommu_fwspec; /* per-device IOMMU instance data */
+    bool is_protected; /* Shows that device is protected by IOMMU */
 };
 
 typedef struct device device_t;
@@ -94,6 +97,16 @@ int device_init(struct dt_device_node *dev, enum device_class class,
  */
 enum device_class device_get_class(const struct dt_device_node *dev);
 
+static inline void device_set_protected(struct device *device)
+{
+    device->is_protected = true;
+}
+
+static inline bool device_is_protected(const struct device *device)
+{
+    return device->is_protected;
+}
+
 #define DT_DEVICE_START(_name, _namestr, _class)                    \
 static const struct device_desc __dev_desc_##_name __used           \
 __section(".dev.info") = {                                          \
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 6c9712ab7bda..1d5d7cb5f01b 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1874,7 +1874,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
         /* By default dom0 owns the device */
         np->used_by = 0;
         /* By default the device is not protected */
-        np->is_protected = false;
+        np->dev.is_protected = false;
         INIT_LIST_HEAD(&np->domain_list);
 
         if ( new_format )
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index 091f09b21752..039212a3a990 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -1288,14 +1288,8 @@ static int ipmmu_add_device(u8 devfn, struct device *dev)
     if ( !to_ipmmu(dev) )
         return -ENODEV;
 
-    if ( dt_device_is_protected(dev_to_dt(dev)) )
-    {
-        dev_err(dev, "Already added to IPMMU\n");
-        return -EEXIST;
-    }
-
     /* Let Xen know that the master device is protected by an IOMMU. */
-    dt_device_set_protected(dev_to_dt(dev));
+    device_set_protected(dev);
 
     dev_info(dev, "Added master device (IPMMU %s micro-TLBs %u)\n",
              dev_name(fwspec->iommu_dev), fwspec->num_ids);
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index 4ca55d400a7b..f5910e79922f 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -1521,13 +1521,8 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
 	 */
 	arm_smmu_enable_pasid(master);
 
-	if (dt_device_is_protected(dev_to_dt(dev))) {
-		dev_err(dev, "Already added to SMMUv3\n");
-		return -EEXIST;
-	}
-
 	/* Let Xen know that the master device is protected by an IOMMU. */
-	dt_device_set_protected(dev_to_dt(dev));
+	device_set_protected(dev);
 
 	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 0a514821b336..5b6024d579a8 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -838,7 +838,7 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 	master->of_node = dev_node;
 
 	/* Xen: Let Xen know that the device is protected by an SMMU */
-	dt_device_set_protected(dev_node);
+	device_set_protected(dev);
 
 	for (i = 0; i < fwspec->num_ids; ++i) {
 		if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 1c32d7b50cce..b5bd13393b56 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -34,7 +34,7 @@ int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev)
     if ( !is_iommu_enabled(d) )
         return -EINVAL;
 
-    if ( !dt_device_is_protected(dev) )
+    if ( !device_is_protected(dt_to_dev(dev)) )
         return -EINVAL;
 
     spin_lock(&dtdevs_lock);
@@ -65,7 +65,7 @@ int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev)
     if ( !is_iommu_enabled(d) )
         return -EINVAL;
 
-    if ( !dt_device_is_protected(dev) )
+    if ( !device_is_protected(dt_to_dev(dev)) )
         return -EINVAL;
 
     spin_lock(&dtdevs_lock);
@@ -87,7 +87,7 @@ static bool_t iommu_dt_device_is_assigned(const struct dt_device_node *dev)
 {
     bool_t assigned = 0;
 
-    if ( !dt_device_is_protected(dev) )
+    if ( !device_is_protected(dt_to_dev(dev)) )
         return 0;
 
     spin_lock(&dtdevs_lock);
@@ -141,12 +141,15 @@ int iommu_add_dt_device(struct dt_device_node *np)
         return -EINVAL;
 
     /*
-     * The device may already have been registered. As there is no harm in
-     * it just return success early.
+     * This is needed in case a device has both the iommus property and
+     * also appears in the mmu-masters list.
      */
-    if ( dev_iommu_fwspec_get(dev) )
+    if ( device_is_protected(dev) )
         return 0;
 
+    if ( dev_iommu_fwspec_get(dev) )
+        return -EEXIST;
+
     /*
      * According to the Documentation/devicetree/bindings/iommu/iommu.txt
      * from Linux.
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 19a74909cece..c1e4751a581f 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -90,9 +90,6 @@ struct dt_device_node {
     struct dt_device_node *next; /* TODO: Remove it. Only use to know the last children */
     struct dt_device_node *allnext;
 
-    /* IOMMU specific fields */
-    bool is_protected;
-
     /* HACK: Remove this if there is a need of space */
     bool_t static_evtchn_created;
 
@@ -302,16 +299,6 @@ static inline domid_t dt_device_used_by(const struct dt_device_node *device)
     return device->used_by;
 }
 
-static inline void dt_device_set_protected(struct dt_device_node *device)
-{
-    device->is_protected = true;
-}
-
-static inline bool dt_device_is_protected(const struct dt_device_node *device)
-{
-    return device->is_protected;
-}
-
 static inline bool_t dt_property_name_is_equal(const struct dt_property *pp,
                                                const char *name)
 {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 21:08:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 21:08:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536549.834978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzkrK-0004my-6K; Thu, 18 May 2023 21:08:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536549.834978; Thu, 18 May 2023 21:08:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzkrK-0004mr-30; Thu, 18 May 2023 21:08:26 +0000
Received: by outflank-mailman (input) for mailman id 536549;
 Thu, 18 May 2023 21:08:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yK0N=BH=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pzkrI-000492-8T
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 21:08:24 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20600.outbound.protection.outlook.com
 [2a01:111:f400:7eae::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1f5db55d-f5c0-11ed-b22d-6b7b168915f2;
 Thu, 18 May 2023 23:08:23 +0200 (CEST)
Received: from DM6PR02CA0110.namprd02.prod.outlook.com (2603:10b6:5:1b4::12)
 by SJ2PR12MB7847.namprd12.prod.outlook.com (2603:10b6:a03:4d2::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Thu, 18 May
 2023 21:08:17 +0000
Received: from DM6NAM11FT029.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:1b4:cafe::70) by DM6PR02CA0110.outlook.office365.com
 (2603:10b6:5:1b4::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 21:08:17 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT029.mail.protection.outlook.com (10.13.173.23) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Thu, 18 May 2023 21:08:16 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 16:08:16 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 16:08:15 -0500
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 18 May 2023 16:08:14 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f5db55d-f5c0-11ed-b22d-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Rahul Singh
	<rahul.singh@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>, "Stewart
 Hildebrand" <stewart.hildebrand@amd.com>
Subject: [PATCH v3 2/6] iommu/arm: Add iommu_dt_xlate()
Date: Thu, 18 May 2023 17:06:54 -0400
Message-ID: <20230518210658.66156-3-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230518210658.66156-1-stewart.hildebrand@amd.com>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT029:EE_|SJ2PR12MB7847:EE_
X-MS-Office365-Filtering-Correlation-Id: 0b74e7f2-c0f4-4465-27e6-08db57e3fffd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 21:08:16.6249
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b74e7f2-c0f4-4465-27e6-08db57e3fffd
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT029.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB7847

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Move the code for processing a DT IOMMU specifier to a separate helper.
This helper will be re-used when adding PCI devices in subsequent
patches, as we will need exactly the same actions for processing a
DT PCI-IOMMU specifier.

While at it, introduce NO_IOMMU to avoid the magic "1".
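As a standalone illustration of the return convention this patch makes explicit (a negative value is an error, NO_IOMMU means the device has no usable IOMMU specifier, and 0 means the specifier was translated), here is a hedged sketch; the names fake_dt_xlate and handle_device are hypothetical and not part of the Xen API:

```c
#include <assert.h>
#include <string.h>

/* Mirrors the convention used by iommu_dt_xlate()/iommu_add_dt_device():
 * <0 on error, NO_IOMMU (1) when the device is not behind an IOMMU,
 * 0 on successful translation. */
#define NO_IOMMU 1

/* Illustrative stand-in for the xlate step; not a real Xen function. */
static int fake_dt_xlate(int has_iommu_spec, int iommu_available)
{
    if ( !has_iommu_spec )
        return NO_IOMMU;    /* no "iommus" entry: device unprotected */
    if ( !iommu_available )
        return -19;         /* -ENODEV-style hard error */
    return 0;               /* specifier translated successfully */
}

/* A caller distinguishes all three outcomes rather than treating the
 * magic "1" as just another nonzero value. */
static const char *handle_device(int has_spec, int avail)
{
    int rc = fake_dt_xlate(has_spec, avail);

    if ( rc < 0 )
        return "error";
    if ( rc == NO_IOMMU )
        return "unprotected";
    return "protected";
}
```

The point of the named constant is that callers such as handle_device() above can tell "not behind an IOMMU" apart from both success and failure without relying on a bare literal.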

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com> # rename
---
v2->v3:
* no change

v1->v2:
* no change

downstream->v1:
* trivial rebase
* s/dt_iommu_xlate/iommu_dt_xlate/

(cherry picked from commit c26bab0415ca303df86aba1d06ef8edc713734d3 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/drivers/passthrough/device_tree.c | 42 +++++++++++++++++----------
 1 file changed, 27 insertions(+), 15 deletions(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index b5bd13393b56..1b50f4670944 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -127,15 +127,39 @@ int iommu_release_dt_devices(struct domain *d)
     return 0;
 }
 
+/* This correlation must not be altered */
+#define NO_IOMMU    1
+
+static int iommu_dt_xlate(struct device *dev,
+                          struct dt_phandle_args *iommu_spec)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    int rc;
+
+    if ( !dt_device_is_available(iommu_spec->np) )
+        return NO_IOMMU;
+
+    rc = iommu_fwspec_init(dev, &iommu_spec->np->dev);
+    if ( rc )
+        return rc;
+
+    /*
+     * Provide DT IOMMU specifier which describes the IOMMU master
+     * interfaces of that device (device IDs, etc) to the driver.
+     * The driver is responsible to decide how to interpret them.
+     */
+    return ops->dt_xlate(dev, iommu_spec);
+}
+
 int iommu_add_dt_device(struct dt_device_node *np)
 {
     const struct iommu_ops *ops = iommu_get_ops();
     struct dt_phandle_args iommu_spec;
     struct device *dev = dt_to_dev(np);
-    int rc = 1, index = 0;
+    int rc = NO_IOMMU, index = 0;
 
     if ( !iommu_enabled )
-        return 1;
+        return NO_IOMMU;
 
     if ( !ops )
         return -EINVAL;
@@ -164,19 +188,7 @@ int iommu_add_dt_device(struct dt_device_node *np)
         if ( !ops->add_device || !ops->dt_xlate )
             return -EINVAL;
 
-        if ( !dt_device_is_available(iommu_spec.np) )
-            break;
-
-        rc = iommu_fwspec_init(dev, &iommu_spec.np->dev);
-        if ( rc )
-            break;
-
-        /*
-         * Provide DT IOMMU specifier which describes the IOMMU master
-         * interfaces of that device (device IDs, etc) to the driver.
-         * The driver is responsible to decide how to interpret them.
-         */
-        rc = ops->dt_xlate(dev, &iommu_spec);
+        rc = iommu_dt_xlate(dev, &iommu_spec);
         if ( rc )
             break;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 21:08:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 21:08:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536552.834988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzkrg-0005HR-GX; Thu, 18 May 2023 21:08:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536552.834988; Thu, 18 May 2023 21:08:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzkrg-0005HI-Cd; Thu, 18 May 2023 21:08:48 +0000
Received: by outflank-mailman (input) for mailman id 536552;
 Thu, 18 May 2023 21:08:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yK0N=BH=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pzkrf-0003gZ-BP
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 21:08:47 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20612.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::612])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2b72a6b1-f5c0-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 23:08:44 +0200 (CEST)
Received: from BN8PR07CA0016.namprd07.prod.outlook.com (2603:10b6:408:ac::29)
 by BN9PR12MB5081.namprd12.prod.outlook.com (2603:10b6:408:132::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 21:08:39 +0000
Received: from BN8NAM11FT081.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ac:cafe::8) by BN8PR07CA0016.outlook.office365.com
 (2603:10b6:408:ac::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.20 via Frontend
 Transport; Thu, 18 May 2023 21:08:38 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT081.mail.protection.outlook.com (10.13.177.233) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 21:08:38 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 16:08:38 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 14:08:37 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 18 May 2023 16:08:35 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b72a6b1-f5c0-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Rahul Singh
	<rahul.singh@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>, "Stewart
 Hildebrand" <stewart.hildebrand@amd.com>
Subject: [PATCH v3 3/6] iommu/arm: Introduce iommu_add_dt_pci_sideband_ids API
Date: Thu, 18 May 2023 17:06:55 -0400
Message-ID: <20230518210658.66156-4-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230518210658.66156-1-stewart.hildebrand@amd.com>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT081:EE_|BN9PR12MB5081:EE_
X-MS-Office365-Filtering-Correlation-Id: 28f549cc-d34e-43c8-f4dc-08db57e40cff
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 21:08:38.4849
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 28f549cc-d34e-43c8-f4dc-08db57e40cff
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT081.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5081

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The main purpose of this patch is to add a way to register a PCI device
(which is behind the IOMMU) using the generic PCI-IOMMU DT bindings [1]
before assigning that device to a domain.

This behaves in almost the same way as the existing iommu_add_dt_device
API; the differences are the devices handled and the DT bindings used.

The function of_map_id, which translates an ID through a downstream
mapping (and is also suitable for mapping a Requester ID), was borrowed
from Linux (v5.10-rc6) and adapted to the Xen code base.

XXX: I don't port pci_for_each_dma_alias from Linux, which is part of
the PCI-IOMMU bindings infrastructure, as I don't have a good
understanding of how it is expected to work in the Xen environment.
Also, it is not completely clear whether we need to distinguish between
different PCI types here (DEV_TYPE_PCI, DEV_TYPE_PCI_HOST_BRIDGE, etc.).
For example, how should we behave if the host bridge doesn't have
a stream ID (i.e. is not described in the iommu-map property): simply
fail, or bypass translation?

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-iommu.txt
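The iommu-map lookup implemented by iommu_dt_pci_map_id() in this patch follows the arithmetic from the binding [1]: the Requester ID is masked with iommu-map-mask, and if it falls in a map entry's [id-base, id-base + length) window, the output is masked_id - id_base + out_base. The following is a minimal sketch of just that arithmetic; the struct and function names are illustrative, not the patch's actual interfaces:

```c
#include <assert.h>
#include <stdint.h>

/* One (id-base, out-base, length) window of an iommu-map property;
 * the phandle to the target IOMMU is omitted for this sketch. */
struct map_entry {
    uint32_t id_base;
    uint32_t out_base;
    uint32_t length;
};

/* Translate a Requester ID through the map. Returns 0 and writes *out
 * on a hit, -1 if no entry covers the masked ID (the patch treats the
 * no-entry case as an error, unlike Linux, which bypasses translation). */
static int map_rid(const struct map_entry *map, unsigned int n,
                   uint32_t mask, uint32_t rid, uint32_t *out)
{
    uint32_t masked = rid & mask;   /* iommu-map-mask applied first */
    unsigned int i;

    for ( i = 0; i < n; i++ )
    {
        if ( masked >= map[i].id_base &&
             masked < map[i].id_base + map[i].length )
        {
            /* Linear remap within the matching window */
            *out = masked - map[i].id_base + map[i].out_base;
            return 0;
        }
    }

    return -1;
}
```

For instance, with an entry { .id_base = 0x8, .out_base = 0x200, .length = 0x8 }, a masked RID of 0x9 would translate to stream ID 0x201.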

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v2->v3:
* new patch title (was: iommu/arm: Introduce iommu_add_dt_pci_device API)
* renamed function
  from: iommu_add_dt_pci_device
  to: iommu_add_dt_pci_sideband_ids
* removed stale ops->add_device check
* iommu.h: add empty stub iommu_add_dt_pci_sideband_ids for !HAS_DEVICE_TREE
* iommu.h: add iommu_add_pci_sideband_ids helper
* iommu.h: don't wrap prototype in #ifdef CONFIG_HAS_PCI
* s/iommu_fwspec_free(pci_to_dev(pdev))/iommu_fwspec_free(dev)/

v1->v2:
* remove extra devfn parameter since pdev fully describes the device
* remove ops->add_device() call from iommu_add_dt_pci_device(). Instead, rely on
  the existing iommu call in iommu_add_device().
* move the ops->add_device and ops->dt_xlate checks earlier

downstream->v1:
* rebase
* add const qualifier to struct dt_device_node *np arg in dt_map_id()
* add const qualifier to struct dt_device_node *np declaration in iommu_add_pci_device()
* use stdint.h types instead of u8/u32/etc...
* rename functions:
  s/dt_iommu_xlate/iommu_dt_xlate/
  s/dt_map_id/iommu_dt_pci_map_id/
  s/iommu_add_pci_device/iommu_add_dt_pci_device/
* add device_is_protected check in iommu_add_dt_pci_device
* wrap prototypes in CONFIG_HAS_PCI

(cherry picked from commit 734e3bf6ee77e7947667ab8fa96c25b349c2e1da from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---
 xen/drivers/passthrough/device_tree.c | 140 ++++++++++++++++++++++++++
 xen/include/xen/device_tree.h         |  25 +++++
 xen/include/xen/iommu.h               |  17 +++-
 3 files changed, 181 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 1b50f4670944..d568166e19ec 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -151,6 +151,146 @@ static int iommu_dt_xlate(struct device *dev,
     return ops->dt_xlate(dev, iommu_spec);
 }
 
+#ifdef CONFIG_HAS_PCI
+int iommu_dt_pci_map_id(const struct dt_device_node *np, uint32_t id,
+                        const char *map_name, const char *map_mask_name,
+                        struct dt_device_node **target, uint32_t *id_out)
+{
+    uint32_t map_mask, masked_id, map_len;
+    const __be32 *map = NULL;
+
+    if ( !np || !map_name || (!target && !id_out) )
+        return -EINVAL;
+
+    map = dt_get_property(np, map_name, &map_len);
+    if ( !map )
+    {
+        if ( target )
+            return -ENODEV;
+        /* Otherwise, no map implies no translation */
+        *id_out = id;
+        return 0;
+    }
+
+    if ( !map_len || map_len % (4 * sizeof(*map)) )
+    {
+        printk(XENLOG_ERR "%pOF: Error: Bad %s length: %d\n", np,
+            map_name, map_len);
+        return -EINVAL;
+    }
+
+    /* The default is to select all bits. */
+    map_mask = 0xffffffff;
+
+    /*
+     * Can be overridden by "{iommu,msi}-map-mask" property.
+     * If of_property_read_u32() fails, the default is used.
+     */
+    if ( map_mask_name )
+        dt_property_read_u32(np, map_mask_name, &map_mask);
+
+    masked_id = map_mask & id;
+    for ( ; (int)map_len > 0; map_len -= 4 * sizeof(*map), map += 4 )
+    {
+        struct dt_device_node *phandle_node;
+        uint32_t id_base = be32_to_cpup(map + 0);
+        uint32_t phandle = be32_to_cpup(map + 1);
+        uint32_t out_base = be32_to_cpup(map + 2);
+        uint32_t id_len = be32_to_cpup(map + 3);
+
+        if ( id_base & ~map_mask )
+        {
+            printk(XENLOG_ERR "%pOF: Invalid %s translation - %s-mask (0x%x) ignores id-base (0x%x)\n",
+                   np, map_name, map_name, map_mask, id_base);
+            return -EFAULT;
+        }
+
+        if ( masked_id < id_base || masked_id >= id_base + id_len )
+            continue;
+
+        phandle_node = dt_find_node_by_phandle(phandle);
+        if ( !phandle_node )
+            return -ENODEV;
+
+        if ( target )
+        {
+            if ( !*target )
+                *target = phandle_node;
+
+            if ( *target != phandle_node )
+                continue;
+        }
+
+        if ( id_out )
+            *id_out = masked_id - id_base + out_base;
+
+        printk(XENLOG_DEBUG "%pOF: %s, using mask %08x, id-base: %08x, out-base: %08x, length: %08x, id: %08x -> %08x\n",
+               np, map_name, map_mask, id_base, out_base, id_len, id,
+               masked_id - id_base + out_base);
+        return 0;
+    }
+
+    printk(XENLOG_ERR "%pOF: no %s translation for id 0x%x on %pOF\n",
+           np, map_name, id, target && *target ? *target : NULL);
+
+    /*
+     * NOTE: Linux bypasses translation without returning an error here,
+     * but should we behave in the same way on Xen? Restrict for now.
+     */
+    return -EFAULT;
+}
+
+int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    struct dt_phandle_args iommu_spec = { .args_count = 1 };
+    struct device *dev = pci_to_dev(pdev);
+    const struct dt_device_node *np;
+    int rc = NO_IOMMU;
+
+    if ( !iommu_enabled )
+        return NO_IOMMU;
+
+    if ( !ops )
+        return -EINVAL;
+
+    if ( device_is_protected(dev) )
+        return 0;
+
+    if ( dev_iommu_fwspec_get(dev) )
+        return -EEXIST;
+
+    np = pci_find_host_bridge_node(pdev);
+    if ( !np )
+        return -ENODEV;
+
+    /*
+     * A driver that supports the generic PCI-IOMMU DT bindings must have
+     * this callback implemented.
+     */
+    if ( !ops->dt_xlate )
+        return -EINVAL;
+
+    /*
+     * As described in Documentation/devicetree/bindings/pci/pci-iommu.txt
+     * from Linux.
+     */
+    rc = iommu_dt_pci_map_id(np, PCI_BDF(pdev->bus, pdev->devfn), "iommu-map",
+                             "iommu-map-mask", &iommu_spec.np, iommu_spec.args);
+    if ( rc )
+        return rc == -ENODEV ? NO_IOMMU : rc;
+
+    rc = iommu_dt_xlate(dev, &iommu_spec);
+    if ( rc < 0 )
+    {
+        iommu_fwspec_free(dev);
+        return -EINVAL;
+    }
+
+    return rc;
+}
+#endif /* CONFIG_HAS_PCI */
+
 int iommu_add_dt_device(struct dt_device_node *np)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index c1e4751a581f..dc40fdfb9231 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -852,6 +852,31 @@ int dt_count_phandle_with_args(const struct dt_device_node *np,
  */
 int dt_get_pci_domain_nr(struct dt_device_node *node);
 
+#ifdef CONFIG_HAS_PCI
+/**
+ * iommu_dt_pci_map_id - Translate an ID through a downstream mapping.
+ * @np: root complex device node.
+ * @id: device ID to map.
+ * @map_name: property name of the map to use.
+ * @map_mask_name: optional property name of the mask to use.
+ * @target: optional pointer to a target device node.
+ * @id_out: optional pointer to receive the translated ID.
+ *
+ * Given a device ID, look up the appropriate implementation-defined
+ * platform ID and/or the target device which receives transactions on that
+ * ID, as per the "iommu-map" and "msi-map" bindings. Either of @target or
+ * @id_out may be NULL if only the other is required. If @target points to
+ * a non-NULL device node pointer, only entries targeting that node will be
+ * matched; if it points to a NULL value, it will receive the device node of
+ * the first matching target phandle, with a reference held.
+ *
+ * Return: 0 on success or a standard error code on failure.
+ */
+int iommu_dt_pci_map_id(const struct dt_device_node *np, uint32_t id,
+                        const char *map_name, const char *map_mask_name,
+                        struct dt_device_node **target, uint32_t *id_out);
+#endif /* CONFIG_HAS_PCI */
+
 struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle);
 
 #ifdef CONFIG_DEVICE_TREE_DEBUG
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 405db59971c5..e83de1fced67 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -26,6 +26,7 @@
 #include <xen/spinlock.h>
 #include <public/domctl.h>
 #include <public/hvm/ioreq.h>
+#include <asm/acpi.h>
 #include <asm/device.h>
 
 TYPE_SAFE(uint64_t, dfn);
@@ -219,7 +220,8 @@ int iommu_dt_domain_init(struct domain *d);
 int iommu_release_dt_devices(struct domain *d);
 
 /*
- * Helper to add master device to the IOMMU using generic IOMMU DT bindings.
+ * Helpers to add master device to the IOMMU using generic (PCI-)IOMMU
+ * DT bindings.
  *
  * Return values:
  *  0 : device is protected by an IOMMU
@@ -228,12 +230,25 @@ int iommu_release_dt_devices(struct domain *d);
  *      (IOMMU is not enabled/present or device is not connected to it).
  */
 int iommu_add_dt_device(struct dt_device_node *np);
+int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev);
 
 int iommu_do_dt_domctl(struct xen_domctl *, struct domain *,
                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
+#else /* !HAS_DEVICE_TREE */
+static inline int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev)
+{
+    return 0;
+}
 #endif /* HAS_DEVICE_TREE */
 
+static inline int iommu_add_pci_sideband_ids(struct pci_dev *pdev)
+{
+    if ( acpi_disabled )
+        return iommu_add_dt_pci_sideband_ids(pdev);
+    return 0;
+}
+
 struct page_info;
 
 /*
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 21:09:17 2023
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Rahul Singh
	<rahul.singh@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>, "Jan
 Beulich" <jbeulich@suse.com>
Subject: [PATCH v3 4/6] iommu/arm: iommu_add_dt_pci_sideband_ids phantom handling
Date: Thu, 18 May 2023 17:06:56 -0400
Message-ID: <20230518210658.66156-5-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230518210658.66156-1-stewart.hildebrand@amd.com>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Handle phantom functions in iommu_add_dt_pci_sideband_ids(). Each phantom
function has a unique requester ID (RID)/BDF, so on Arm the RID/BDF of each
phantom function must be translated to an AXI stream ID according to the
pci-iommu device tree binding [1]. The RID/BDF -> AXI stream ID mapping in
DT may assign a different AXI stream ID to each (phantom) function of a
device.

[1] https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-iommu.txt

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v2->v3:
* new patch title (was: iommu/arm: iommu_add_dt_pci_device phantom handling)
* rework loop to reduce duplication
* s/iommu_fwspec_free(pci_to_dev(pdev))/iommu_fwspec_free(dev)/

v1->v2:
* new patch

---
 xen/drivers/passthrough/device_tree.c | 33 ++++++++++++++++-----------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index d568166e19ec..c18ddae3e993 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -247,6 +247,7 @@ int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev)
     struct device *dev = pci_to_dev(pdev);
     const struct dt_device_node *np;
     int rc = NO_IOMMU;
+    unsigned int devfn = pdev->devfn;
 
     if ( !iommu_enabled )
         return NO_IOMMU;
@@ -271,21 +272,27 @@ int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev)
     if ( !ops->dt_xlate )
         return -EINVAL;
 
-    /*
-     * As described in Documentation/devicetree/bindings/pci/pci-iommu.txt
-     * from Linux.
-     */
-    rc = iommu_dt_pci_map_id(np, PCI_BDF(pdev->bus, pdev->devfn), "iommu-map",
-                             "iommu-map-mask", &iommu_spec.np, iommu_spec.args);
-    if ( rc )
-        return rc == -ENODEV ? NO_IOMMU : rc;
+    do {
+        /*
+         * As described in Documentation/devicetree/bindings/pci/pci-iommu.txt
+         * from Linux.
+         */
+        rc = iommu_dt_pci_map_id(np, PCI_BDF(pdev->bus, devfn), "iommu-map",
+                                 "iommu-map-mask", &iommu_spec.np, iommu_spec.args);
+        if ( rc )
+            return rc == -ENODEV ? NO_IOMMU : rc;
 
-    rc = iommu_dt_xlate(dev, &iommu_spec);
-    if ( rc < 0 )
-    {
-        iommu_fwspec_free(dev);
-        return -EINVAL;
+        rc = iommu_dt_xlate(dev, &iommu_spec);
+        if ( rc < 0 )
+        {
+            iommu_fwspec_free(dev);
+            return -EINVAL;
+        }
+
+        devfn += pdev->phantom_stride;
     }
+    while ( devfn != pdev->devfn &&
+            PCI_SLOT(devfn) == PCI_SLOT(pdev->devfn));
 
     return rc;
 }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 21:11:20 2023
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Rahul Singh <rahul.singh@arm.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Stewart Hildebrand <stewart.hildebrand@amd.com>
Subject: [PATCH v3 6/6] xen/arm: smmuv3: Add PCI devices support for SMMUv3
Date: Thu, 18 May 2023 17:06:58 -0400
Message-ID: <20230518210658.66156-7-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230518210658.66156-1-stewart.hildebrand@amd.com>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

From: Rahul Singh <rahul.singh@arm.com>

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v2->v3:
* rebase
* invoke iommu_add_pci_sideband_ids() from add_device hook

v1->v2:
* ignore add_device/assign_device/reassign_device calls for phantom functions
  (i.e. devfn != pdev->devfn)

downstream->v1:
* rebase
* move 2 replacements of s/dt_device_set_protected(dev_to_dt(dev))/device_set_protected(dev)/
  from this commit to ("xen/arm: Move is_protected flag to struct device")
  so as not to break the ability to bisect
* adjust patch title (remove stray space)
* arm_smmu_(de)assign_dev: return error instead of crashing system
* remove arm_smmu_remove_device() stub
* update condition in arm_smmu_reassign_dev
* style fixup

(cherry picked from commit 7ed6c3ab250d899fe6e893a514278e406a2893e8 from
 the downstream branch poc/pci-passthrough from
 https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
---

This is a file imported from Linux with modifications for Xen. What should be
the coding style used for Xen modifications?
---
 xen/drivers/passthrough/arm/smmu-v3.c | 76 +++++++++++++++++++++++++--
 1 file changed, 72 insertions(+), 4 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
index f5910e79922f..a9ca889bd437 100644
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -1469,14 +1469,32 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 }
 /* Forward declaration */
 static struct arm_smmu_device *arm_smmu_get_by_dev(const struct device *dev);
+static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
+			struct device *dev, u32 flag);
 
 static int arm_smmu_add_device(u8 devfn, struct device *dev)
 {
 	int i, ret;
 	struct arm_smmu_device *smmu;
 	struct arm_smmu_master *master;
-	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+	struct iommu_fwspec *fwspec;
+
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+		int ret;
+
+		if ( devfn != pdev->devfn )
+			return 0;
+
+		ret = iommu_add_pci_sideband_ids(pdev);
+		if ( ret < 0 )
+			iommu_fwspec_free(dev);
+	}
+#endif
 
+	fwspec = dev_iommu_fwspec_get(dev);
 	if (!fwspec)
 		return -ENODEV;
 
@@ -1527,6 +1545,17 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
 	dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
 			dev_name(fwspec->iommu_dev), fwspec->num_ids);
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		ret = arm_smmu_assign_dev(pdev->domain, devfn, dev, 0);
+		if (ret)
+			goto err_free_master;
+	}
+#endif
+
 	return 0;
 
 err_free_master:
@@ -2616,6 +2645,27 @@ static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
 	struct arm_smmu_domain *smmu_domain;
 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) && !is_hardware_domain(d) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Assigning device %04x:%02x:%02x.%u to dom%d\n",
+			pdev->seg, pdev->bus, PCI_SLOT(devfn),
+			PCI_FUNC(devfn), d->domain_id);
+
+		if ( devfn != pdev->devfn || pdev->domain == d )
+			return 0;
+
+		list_move(&pdev->domain_list, &d->pdev_list);
+		pdev->domain = d;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	spin_lock(&xen_domain->lock);
 
 	/*
@@ -2649,7 +2699,7 @@ out:
 	return ret;
 }
 
-static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+static int arm_smmu_deassign_dev(struct domain *d, uint8_t devfn, struct device *dev)
 {
 	struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
 	struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
@@ -2661,6 +2711,24 @@ static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
 		return -ESRCH;
 	}
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Deassigning device %04x:%02x:%02x.%u from dom%d\n",
+			pdev->seg, pdev->bus, PCI_SLOT(devfn),
+			PCI_FUNC(devfn), d->domain_id);
+
+		if ( devfn != pdev->devfn )
+			return 0;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	spin_lock(&xen_domain->lock);
 
 	arm_smmu_detach_dev(master);
@@ -2680,13 +2748,13 @@ static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
 	int ret = 0;
 
 	/* Don't allow remapping on other domain than hwdom */
-	if ( t && !is_hardware_domain(t) )
+	if ( t && !is_hardware_domain(t) && (t != dom_io) )
 		return -EPERM;
 
 	if (t == s)
 		return 0;
 
-	ret = arm_smmu_deassign_dev(s, dev);
+	ret = arm_smmu_deassign_dev(s, devfn, dev);
 	if (ret)
 		return ret;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 21:16:59 2023
X-Inumbo-ID: 4d7aaf33-f5c1-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684444608;
	bh=HoxGkgzFbxf63dORm+ogvEfSTinTLo9wBA1ezIhqqa8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=AF/6hg6qgN+xM4xcYrFdHGdKeM5E1rRA4v7yqZhAO61m+c75ak/Zoje/CToS6hINQ
	 RW5ntP37VKKRgYwS6Lmt27+tJ9A0RM5ZPz8bAavwFGsLl/VPfsLmhyXYdYCUNmeAmU
	 iJQSV6p74JgiFyso36AUfuhfNIhnZjdhySUqkxAt2ZaB3qYoBbQ3gQLeK5KpDpBeiJ
	 JEvjDXgd+2csv/lkkNh3SFvQpMPKtU8/7dkqKGVT0wRfnpWo0KNxB3qCFWH2vEuX3s
	 Okm11oIYAuy5X6+Abium2paMC7gm2D58e+5iMfniKtZLIPk/GbrTrXYavE9AKef70J
	 ZcYb5uHuudFvg==
Date: Thu, 18 May 2023 14:16:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Michal Orzel <michal.orzel@amd.com>
cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, luca.fancellu@arm.com
Subject: Re: [PATCH] automation: Enable parallel build with cppcheck
 analysis
In-Reply-To: <20230518122415.8698-1-michal.orzel@amd.com>
Message-ID: <alpine.DEB.2.22.394.2305181416400.128889@ubuntu-linux-20-04-desktop>
References: <20230518122415.8698-1-michal.orzel@amd.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 18 May 2023, Michal Orzel wrote:
> The limitation was fixed by the commit:
> 45bfff651173d538239308648c6a6cd7cbe37172
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/scripts/build | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/automation/scripts/build b/automation/scripts/build
> index 9085cba35281..38c48ae6d826 100755
> --- a/automation/scripts/build
> +++ b/automation/scripts/build
> @@ -39,10 +39,8 @@ cp xen/.config xen-config
>  mkdir -p binaries
>  
>  if [[ "${CPPCHECK}" == "y" ]] && [[ "${HYPERVISOR_ONLY}" == "y" ]]; then
> -    # Cppcheck analysis invokes Xen-only build.
> -    # Known limitation: cppcheck generates inconsistent reports when running
> -    # in parallel mode, therefore do not specify -j<n>.
> -    xen/scripts/xen-analysis.py --run-cppcheck --cppcheck-misra
> +    # Cppcheck analysis invokes Xen-only build
> +    xen/scripts/xen-analysis.py --run-cppcheck --cppcheck-misra -- -j$(nproc)
>  
>      # Preserve artefacts
>      cp xen/xen binaries/xen
> -- 
> 2.25.1
> 
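
The updated invocation can be sketched as a standalone shell snippet. This is a hypothetical illustration only: the script path assumes a Xen checkout, and the command is printed rather than executed; everything after "--" is what xen-analysis.py forwards to the underlying make invocation.

```shell
# Hypothetical sketch of the updated CI step; xen-analysis.py is assumed to
# exist in a Xen checkout, so the command is only constructed and printed here.
cmd="xen/scripts/xen-analysis.py --run-cppcheck --cppcheck-misra -- -j$(nproc)"
echo "$cmd"
```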


From xen-devel-bounces@lists.xenproject.org Thu May 18 21:17:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 21:17:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536573.835027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzl0A-0000Bb-6e; Thu, 18 May 2023 21:17:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536573.835027; Thu, 18 May 2023 21:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzl0A-0000BU-40; Thu, 18 May 2023 21:17:34 +0000
Received: by outflank-mailman (input) for mailman id 536573;
 Thu, 18 May 2023 21:17:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yK0N=BH=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pzksW-0003gZ-NY
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 21:09:40 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7e88::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4baa4d84-f5c0-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 23:09:38 +0200 (CEST)
Received: from BN7PR06CA0051.namprd06.prod.outlook.com (2603:10b6:408:34::28)
 by PH8PR12MB8606.namprd12.prod.outlook.com (2603:10b6:510:1ce::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 21:09:32 +0000
Received: from BN8NAM11FT093.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:34:cafe::55) by BN7PR06CA0051.outlook.office365.com
 (2603:10b6:408:34::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 21:09:32 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT093.mail.protection.outlook.com (10.13.177.22) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Thu, 18 May 2023 21:09:32 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 16:09:32 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 14:09:31 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 18 May 2023 16:09:30 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4baa4d84-f5c0-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=E6Wr2OYG3a+Fm5SXOFHd2FRWHDmDxhUP8zsTxSyevBLLKvW+98zwcJC5HrmIS20DUDQ7A2WsSwf5xfZ/zJN+hziLpXUi7WBcsmV9w5iKVWEInVlHBfDuTnHBNYepfJrlVAtdcQRrbH+xOL1zpImvnKniWUoZrlXcVcqBVvy8spZf7RcTuut1ku/LL9zq0sFmvZiwu0q0OgdHvJjvNuRyDuDpU6SwT3shx6cvYukY+JXlXyWwuTgTQ1R0lRvfSWTs2rZA1ArmBYdSeoKCq/gWnXh8BF6UzPCJJBkzbgAOzTxDJlru3vmJeED1VXoalUM3MaRJ/SIlSGDRHCifHYwXUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RAz7bk/eEBRtyO3ChhU4KoUxEkKuzfEMkl7Zwu/CsVI=;
 b=ZPCzaYJ6jvLx/ysZ+rUPQl3JSBxcdRp/qWmEwKlUmwc26buxu0w/dy6A5hSZ9hLrutq2pSLH9Ak/5nuBPM17lzaG1bW2lhNNr5uxFzg960VFoulR5xEkpHc95jHgi0bLvgfC6vVjtEloo4pA0UeebVrPmAJMSyg6Lf3rLUx0CISv10gKzzClOi+gWxidlOq2MaLsU1b2n1e0+awiM12/eUamoz/Tem4HNOwSm0bBNtkHMRRfsmtuE724tpAjUlL3U2rbDNp0rfLsZl7/ofXwmh7lwWHLGhhkN5hwAnLho9CLT+1q+9Aoa6nG19gQHVYCTAErjjSUPjCpAdvmg6G9Sw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RAz7bk/eEBRtyO3ChhU4KoUxEkKuzfEMkl7Zwu/CsVI=;
 b=ClklKs6DNata2RzT7hRoGKX5EWLWUZ+YquIktNtvQHO3mVIgFUGck54FwqnO999CAYT+0roNx/mcnY9fi+WdzYq+x+/8o+n5heL5/A5ZjLc0i3P7AfzBXi3Ifn416eY3nGracvn/NPTPJxE6XEbePrliYqlUyIiouasamqDfgKQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Julien Grall
	<julien@xen.org>, Rahul Singh <rahul.singh@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Stewart Hildebrand
	<stewart.hildebrand@amd.com>
Subject: [PATCH v3 5/6] xen/arm: smmuv2: Add PCI devices support for SMMUv2
Date: Thu, 18 May 2023 17:06:57 -0400
Message-ID: <20230518210658.66156-6-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230518210658.66156-1-stewart.hildebrand@amd.com>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT093:EE_|PH8PR12MB8606:EE_
X-MS-Office365-Filtering-Correlation-Id: aac9e896-4d5c-4822-0496-08db57e42d44
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 21:09:32.6337
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: aac9e896-4d5c-4822-0496-08db57e42d44
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT093.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB8606

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v2->v3:
* invoke iommu_add_pci_sideband_ids() from add_device hook

v1->v2:
* ignore add_device/assign_device/reassign_device calls for phantom functions
  (i.e. devfn != pdev->devfn)

downstream->v1:
* wrap unused function in #if 0
* remove the remove_device() stub since it was submitted separately to the list
  [XEN][PATCH v6 12/19] xen/smmu: Add remove_device callback for smmu_iommu ops
  https://lists.xenproject.org/archives/html/xen-devel/2023-05/msg00204.html
* arm_smmu_(de)assign_dev: return error instead of crashing system
* update condition in arm_smmu_reassign_dev
* style fixup
* add && !is_hardware_domain(d) into condition in arm_smmu_assign_dev()

(cherry picked from commit 0c11a7f65f044c26d87d1e27ac6283ef1f9cfb7a from
 the downstream branch spider-master from
 https://github.com/xen-troops/xen.git)
---

This is a file imported from Linux with modifications for Xen. What should be
the coding style for Xen modifications?
---
 xen/drivers/passthrough/arm/smmu.c | 114 +++++++++++++++++++++++------
 1 file changed, 93 insertions(+), 21 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 5b6024d579a8..d426920d8f9b 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -134,8 +134,20 @@ typedef enum irqreturn irqreturn_t;
 /* Device logger functions
  * TODO: Handle PCI
  */
-#define dev_print(dev, lvl, fmt, ...)						\
-	 printk(lvl "smmu: %s: " fmt, dt_node_full_name(dev_to_dt(dev)), ## __VA_ARGS__)
+#ifndef CONFIG_HAS_PCI
+#define dev_print(dev, lvl, fmt, ...)    \
+    printk(lvl "smmu: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#else
+#define dev_print(dev, lvl, fmt, ...) ({                                \
+    if ( !dev_is_pci((dev)) )                                           \
+        printk(lvl "smmu: %s: " fmt, dev_name((dev)), ## __VA_ARGS__);  \
+    else                                                                \
+    {                                                                   \
+        struct pci_dev *pdev = dev_to_pci((dev));                       \
+        printk(lvl "smmu: %pp: " fmt, &pdev->sbdf, ## __VA_ARGS__);     \
+    }                                                                   \
+})
+#endif
 
 #define dev_dbg(dev, fmt, ...) dev_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
 #define dev_notice(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
@@ -187,6 +199,7 @@ static void __iomem *devm_ioremap_resource(struct device *dev,
  * Xen: PCI functions
  * TODO: It should be implemented when PCI will be supported
  */
+#if 0 /* unused */
 #define to_pci_dev(dev)	(NULL)
 static inline int pci_for_each_dma_alias(struct pci_dev *pdev,
 					 int (*fn) (struct pci_dev *pdev,
@@ -196,6 +209,7 @@ static inline int pci_for_each_dma_alias(struct pci_dev *pdev,
 	BUG();
 	return 0;
 }
+#endif
 
 /* Xen: misc */
 #define PHYS_MASK_SHIFT		PADDR_BITS
@@ -632,7 +646,7 @@ struct arm_smmu_master_cfg {
 	for (i = 0; idx = cfg->smendx[i], i < num; ++i)
 
 struct arm_smmu_master {
-	struct device_node		*of_node;
+	struct device			*dev;
 	struct rb_node			node;
 	struct arm_smmu_master_cfg	cfg;
 };
@@ -724,7 +738,7 @@ arm_smmu_get_fwspec(struct arm_smmu_master_cfg *cfg)
 {
 	struct arm_smmu_master *master = container_of(cfg,
 			                                      struct arm_smmu_master, cfg);
-	return dev_iommu_fwspec_get(&master->of_node->dev);
+	return dev_iommu_fwspec_get(master->dev);
 }
 
 static void parse_driver_options(struct arm_smmu_device *smmu)
@@ -757,7 +771,7 @@ static struct device_node *dev_get_dev_node(struct device *dev)
 }
 
 static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu,
-						struct device_node *dev_node)
+						struct device *dev)
 {
 	struct rb_node *node = smmu->masters.rb_node;
 
@@ -766,9 +780,9 @@ static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu,
 
 		master = container_of(node, struct arm_smmu_master, node);
 
-		if (dev_node < master->of_node)
+		if (dev < master->dev)
 			node = node->rb_left;
-		else if (dev_node > master->of_node)
+		else if (dev > master->dev)
 			node = node->rb_right;
 		else
 			return master;
@@ -803,9 +817,9 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
 			= container_of(*new, struct arm_smmu_master, node);
 
 		parent = *new;
-		if (master->of_node < this->of_node)
+		if (master->dev < this->dev)
 			new = &((*new)->rb_left);
-		else if (master->of_node > this->of_node)
+		else if (master->dev > this->dev)
 			new = &((*new)->rb_right);
 		else
 			return -EEXIST;
@@ -824,18 +838,18 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 	struct arm_smmu_master *master;
 	struct device_node *dev_node = dev_get_dev_node(dev);
 
-	master = find_smmu_master(smmu, dev_node);
+	master = find_smmu_master(smmu, dev);
 	if (master) {
 		dev_err(dev,
 			"rejecting multiple registrations for master device %s\n",
-			dev_node->name);
+			dev_node ? dev_node->name : "");
 		return -EBUSY;
 	}
 
 	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
 	if (!master)
 		return -ENOMEM;
-	master->of_node = dev_node;
+	master->dev = dev;
 
 	/* Xen: Let Xen know that the device is protected by an SMMU */
 	device_set_protected(dev);
@@ -845,7 +859,7 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
 		     (fwspec->ids[i] >= smmu->num_mapping_groups)) {
 			dev_err(dev,
 				"stream ID for master device %s greater than maximum allowed (%d)\n",
-				dev_node->name, smmu->num_mapping_groups);
+				dev_node ? dev_node->name : "", smmu->num_mapping_groups);
 			return -ERANGE;
 		}
 		master->cfg.smendx[i] = INVALID_SMENDX;
@@ -881,6 +895,21 @@ static int arm_smmu_dt_add_device_generic(u8 devfn, struct device *dev)
 	struct arm_smmu_device *smmu;
 	struct iommu_fwspec *fwspec;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+		int ret;
+
+		if ( devfn != pdev->devfn )
+			return 0;
+
+		ret = iommu_add_pci_sideband_ids(pdev);
+		if ( ret < 0 )
+			iommu_fwspec_free(dev);
+	}
+#endif
+
 	fwspec = dev_iommu_fwspec_get(dev);
 	if (fwspec == NULL)
 		return -ENXIO;
@@ -912,11 +941,10 @@ static struct arm_smmu_device *find_smmu_for_device(struct device *dev)
 {
 	struct arm_smmu_device *smmu;
 	struct arm_smmu_master *master = NULL;
-	struct device_node *dev_node = dev_get_dev_node(dev);
 
 	spin_lock(&arm_smmu_devices_lock);
 	list_for_each_entry(smmu, &arm_smmu_devices, list) {
-		master = find_smmu_master(smmu, dev_node);
+		master = find_smmu_master(smmu, dev);
 		if (master)
 			break;
 	}
@@ -2006,6 +2034,7 @@ static bool arm_smmu_capable(enum iommu_cap cap)
 }
 #endif
 
+#if 0 /* Not used */
 static int __arm_smmu_get_pci_sid(struct pci_dev *pdev, u16 alias, void *data)
 {
 	*((u16 *)data) = alias;
@@ -2016,6 +2045,7 @@ static void __arm_smmu_release_pci_iommudata(void *data)
 {
 	kfree(data);
 }
+#endif
 
 static int arm_smmu_add_device(struct device *dev)
 {
@@ -2023,12 +2053,13 @@ static int arm_smmu_add_device(struct device *dev)
 	struct arm_smmu_master_cfg *cfg;
 	struct iommu_group *group;
 	void (*releasefn)(void *) = NULL;
-	int ret;
 
 	smmu = find_smmu_for_device(dev);
 	if (!smmu)
 		return -ENODEV;
 
+	/* There is no need to distinguish here, thanks to PCI-IOMMU DT bindings */
+#if 0
 	if (dev_is_pci(dev)) {
 		struct pci_dev *pdev = to_pci_dev(dev);
 		struct iommu_fwspec *fwspec;
@@ -2053,10 +2084,12 @@ static int arm_smmu_add_device(struct device *dev)
 				       &fwspec->ids[0]);
 		releasefn = __arm_smmu_release_pci_iommudata;
 		cfg->smmu = smmu;
-	} else {
+	} else
+#endif
+	{
 		struct arm_smmu_master *master;
 
-		master = find_smmu_master(smmu, dev->of_node);
+		master = find_smmu_master(smmu, dev);
 		if (!master) {
 			return -ENODEV;
 		}
@@ -2724,6 +2757,27 @@ static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
 			return -ENOMEM;
 	}
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) && !is_hardware_domain(d) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Assigning device %04x:%02x:%02x.%u to dom%d\n",
+		       pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+		       d->domain_id);
+
+		if ( devfn != pdev->devfn || pdev->domain == d )
+			return 0;
+
+		list_move(&pdev->domain_list, &d->pdev_list);
+		pdev->domain = d;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	if (!dev_iommu_group(dev)) {
 		ret = arm_smmu_add_device(dev);
 		if (ret)
@@ -2773,11 +2827,29 @@ out:
 	return ret;
 }
 
-static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+static int arm_smmu_deassign_dev(struct domain *d, u8 devfn, struct device *dev)
 {
 	struct iommu_domain *domain = dev_iommu_domain(dev);
 	struct arm_smmu_xen_domain *xen_domain;
 
+#ifdef CONFIG_HAS_PCI
+	if ( dev_is_pci(dev) )
+	{
+		struct pci_dev *pdev = dev_to_pci(dev);
+
+		printk(XENLOG_INFO "Deassigning device %04x:%02x:%02x.%u from dom%d\n",
+		       pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+		       d->domain_id);
+
+		if ( devfn != pdev->devfn )
+			return 0;
+
+		/* dom_io is used as a sentinel for quarantined devices */
+		if ( d == dom_io )
+			return 0;
+	}
+#endif
+
 	xen_domain = dom_iommu(d)->arch.priv;
 
 	if (!domain || domain->priv->cfg.domain != d) {
@@ -2805,13 +2877,13 @@ static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
 	int ret = 0;
 
 	/* Don't allow remapping on other domain than hwdom */
-	if ( t && !is_hardware_domain(t) )
+	if ( t && !is_hardware_domain(t) && t != dom_io )
 		return -EPERM;
 
 	if (t == s)
 		return 0;
 
-	ret = arm_smmu_deassign_dev(s, dev);
+	ret = arm_smmu_deassign_dev(s, devfn, dev);
 	if (ret)
 		return ret;
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 18 21:27:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 21:27:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536581.835038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzl9c-0001oj-90; Thu, 18 May 2023 21:27:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536581.835038; Thu, 18 May 2023 21:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzl9c-0001oc-6F; Thu, 18 May 2023 21:27:20 +0000
Received: by outflank-mailman (input) for mailman id 536581;
 Thu, 18 May 2023 21:27:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yK0N=BH=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1pzl9b-0001oV-AT
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 21:27:19 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20631.outbound.protection.outlook.com
 [2a01:111:f400:7e8c::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c32efc80-f5c2-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 23:27:16 +0200 (CEST)
Received: from BN0PR04CA0125.namprd04.prod.outlook.com (2603:10b6:408:ed::10)
 by SA1PR12MB8918.namprd12.prod.outlook.com (2603:10b6:806:386::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Thu, 18 May
 2023 21:27:13 +0000
Received: from BN8NAM11FT048.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ed:cafe::f2) by BN0PR04CA0125.outlook.office365.com
 (2603:10b6:408:ed::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Thu, 18 May 2023 21:27:12 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT048.mail.protection.outlook.com (10.13.177.117) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Thu, 18 May 2023 21:27:12 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 16:27:07 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 14:27:07 -0700
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 18 May 2023 16:27:06 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c32efc80-f5c2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GFuKoENbNPnebXoLEdzjJBAtns+18JEWgH+L0KI3D4eSp2TLcq249KogLMfwp7u177/0K8EnkfTAhp8y76df+2TVEyf2tc6BrRbXjsacxqjSPkwSTI9w7tksznTiQxr3sUQ+Iko1U6R2gjk0/340ZOYUFwGcjrF9Vy03jSeQk6Y1EgI6K/1kTFirP++tXquPOWyLhlpfqfEXjiKeTWg3F8oiZbkJX0aCAWiYE/4H536b+BW/jN6mjoh+yGDnXFIzPADPnsreHN8zC/3hPOkQde3JGplUugDUsjFoay6B5JoHPjVZQHXeBRsT+w15MPhQ90zYajht2h3k66oOoX1I0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xppf23Ldn6VHQaqMu3fTD0tDbMfuwtf5e9zTUkPLz9g=;
 b=XnpWOIiL19pRoUuvewRmZ5dugnxN0bCKtJ+kWjwfbE3AG1Kbv2we7j+LIbGAmSx7PwaMgrE+TSzBd1hsD9t6OBTc7II3NW5PsqqDegrU97HaUQZcEzSTQ3/RyTBh1XrPqYKDdGiMalAY+QeHWcnA3J3RE66BAf2DLpAPq2vPuqVdLMYqiDo7TjULO1ywTqiCmIvdTFAgZna7Us7/r3uEMO066xqXnalpz44C4Nftfbh3ANrbtZ3cngvSr2WcvwiwjCIlTzMYbV0uLPrsvjVIlgMLka7p8rKHKcDto5nP8PIuZcKO4u6uqwTh4OKZdF0Ge2oI5d47r4r1wR3sIRzt8g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xppf23Ldn6VHQaqMu3fTD0tDbMfuwtf5e9zTUkPLz9g=;
 b=Nto+BzKZ1gfuT5pQ39OJpdRGBYMWfTznbW5JXKIlywQ57kQ+jVJNZN9rpuWkQMCuCGsloo1DGsC4bg3t1rgcLVucocYSPzZjeQDCSrr4y4313dHg2dn/SmLRII0bNAqYp0GcPj78B5otldTuCmAucygM1YjTG+Mxc24cSCBZeEU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <93adef92-90fb-80de-c6b4-b41872b74682@amd.com>
Date: Thu, 18 May 2023 17:27:05 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v1] xen/sched/null: avoid crash after failed domU creation
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
	<dfaggioli@suse.com>
References: <20230501203046.168856-1-stewart.hildebrand@amd.com>
 <30246788-c90e-e338-de4b-e7bb2e440f4e@suse.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <30246788-c90e-e338-de4b-e7bb2e440f4e@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT048:EE_|SA1PR12MB8918:EE_
X-MS-Office365-Filtering-Correlation-Id: 18bab9b4-4998-45af-4d20-08db57e6a51a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 21:27:12.6647
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 18bab9b4-4998-45af-4d20-08db57e6a51a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT048.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB8918

On 5/5/23 01:59, Juergen Gross wrote:
> On 01.05.23 22:30, Stewart Hildebrand wrote:
>> When a domU creation fails, there is a corner case that may lead to a
>> crash in the null scheduler when running a debug build of Xen.
>>
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
>> (XEN) ****************************************
>>
>> The events leading to the crash are:
>>
>> * null_unit_insert() was invoked with the unit offline. Since the unit was
>>    offline, unit_assign() was not called, and null_unit_insert() returned.
>> * Later during domain creation, the unit was onlined.
>> * Eventually, domain creation failed due to bad configuration.
>> * null_unit_remove() was invoked with the unit still online. Since the unit was
>>    online, it called unit_deassign() and triggered an ASSERT.
>>
>> To fix this, only call unit_deassign() when npc->unit is non-NULL in
>> null_unit_remove().
>>
>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thanks for the review. Does this still need a maintainer ack?


From xen-devel-bounces@lists.xenproject.org Thu May 18 21:57:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 21:57:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536586.835048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzlcO-0005F8-Hz; Thu, 18 May 2023 21:57:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536586.835048; Thu, 18 May 2023 21:57:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzlcO-0005F1-Ef; Thu, 18 May 2023 21:57:04 +0000
Received: by outflank-mailman (input) for mailman id 536586;
 Thu, 18 May 2023 21:57:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EfMJ=BH=intel.com=dave.hansen@srs-se1.protection.inumbo.net>)
 id 1pzlcM-0005Ev-G8
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 21:57:02 +0000
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e71b5cef-f5c6-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 23:56:56 +0200 (CEST)
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 May 2023 14:56:53 -0700
Received: from nroy-mobl1.amr.corp.intel.com (HELO [10.209.81.123])
 ([10.209.81.123])
 by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 May 2023 14:56:52 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e71b5cef-f5c6-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1684447017; x=1715983017;
  h=message-id:date:mime-version:subject:to:cc:references:
   from:in-reply-to:content-transfer-encoding;
  bh=TfncyPoLkgEVKt4cWoEjhnDcsRhR4hVCsiDKfdnguT0=;
  b=i6qdD5EqNO9/VxpfPWtOWkvlKjQ4zTyz9oZu1izzSorxItwCJEn+AcPm
   G1kC/QSc39zCxCdPGq/E8POBF0DIM24tKld8tN5uvxElwQ6IZI26dYL2z
   ltGnCr/ZkZkbPVMubNeFfJBYQJGNLoj7Nz7hPAGMZrcX88JP8SSU7kJB6
   pSnMvOOqBGT8ZcTp2nM1hHE6jJ0NYBpXxVFm3m7WR7t+QTY/Q2v9IMdby
   jNGYqT+OITq+U+2e+JdSH9GQL15d6E5zmmj8ca0JUGMguNfmTAhqpPixo
   apAcuE0VzXI3z2XG7eusZMoeDYje0ffxb2742+YvMo4PWpsXq6kygzX53
   w==;
X-IronPort-AV: E=McAfee;i="6600,9927,10714"; a="354550239"
X-IronPort-AV: E=Sophos;i="6.00,175,1681196400"; 
   d="scan'208";a="354550239"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10714"; a="948852100"
X-IronPort-AV: E=Sophos;i="6.00,175,1681196400"; 
   d="scan'208";a="948852100"
Message-ID: <a78d9dcd-0bc1-7e98-a8f1-e5d6cd0c09a3@intel.com>
Date: Thu, 18 May 2023 14:56:52 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH 00/20] x86: address -Wmissing-prototype warnings
Content-Language: en-US
To: Arnd Bergmann <arnd@kernel.org>, x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>,
 Andy Lutomirski <luto@kernel.org>, Steven Rostedt <rostedt@goodmis.org>,
 Masami Hiramatsu <mhiramat@kernel.org>, Mark Rutland <mark.rutland@arm.com>,
 Juergen Gross <jgross@suse.com>,
 "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
 Alexey Makhalov <amakhalov@vmware.com>,
 VMware PV-Drivers Reviewers <pv-drivers@vmware.com>,
 Peter Zijlstra <peterz@infradead.org>, Darren Hart <dvhart@infradead.org>,
 Andy Shevchenko <andy@infradead.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 "Rafael J. Wysocki" <rafael@kernel.org>, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, linux-pci@vger.kernel.org,
 platform-driver-x86@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-pm@vger.kernel.org, linux-mm@kvack.org
References: <20230516193549.544673-1-arnd@kernel.org>
From: Dave Hansen <dave.hansen@intel.com>
In-Reply-To: <20230516193549.544673-1-arnd@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 5/16/23 12:35, Arnd Bergmann wrote:
> From: Arnd Bergmann <arnd@arndb.de>
> 
> This addresses all x86 specific prototype warnings. The majority of the
> patches should be straightforward, either adding an #include statement
> to get the right header, or ensuring that an unused global function is
> left out of the build when the prototype is hidden.
> 
> The ones that are a bit awkward are those that just add a prototype to
> shut up the warning, but the prototypes are never used for calling the
> function because the only caller is in assembler code. I tried to come up
> with other ways to shut up the compiler using the asmlinkage annotation,
> but with no success.
> 
> All of the warnings have to be addressed in some form before the warning
> can be enabled by default.

I picked up the ones that were blatantly obvious, but left out 03, 04,
10, 12 and 19 for the moment.

BTW, I think the i386 allyesconfig is getting pretty lightly tested
these days.  I think you and I hit the same mlx4 __bad_copy_from()
compile issue.


From xen-devel-bounces@lists.xenproject.org Thu May 18 23:45:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 18 May 2023 23:45:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536591.835058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pznIx-00089e-1R; Thu, 18 May 2023 23:45:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536591.835058; Thu, 18 May 2023 23:45:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pznIw-00089X-Us; Thu, 18 May 2023 23:45:06 +0000
Received: by outflank-mailman (input) for mailman id 536591;
 Thu, 18 May 2023 23:45:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+wJ3=BH=amd.com=stefano.stabellini@srs-se1.protection.inumbo.net>)
 id 1pznIu-00089P-Q6
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 23:45:04 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2062a.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 00a1e0ef-f5d6-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 01:45:01 +0200 (CEST)
Received: from DS7PR03CA0015.namprd03.prod.outlook.com (2603:10b6:5:3b8::20)
 by IA0PR12MB8350.namprd12.prod.outlook.com (2603:10b6:208:40d::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Thu, 18 May
 2023 23:44:56 +0000
Received: from DM6NAM11FT010.eop-nam11.prod.protection.outlook.com
 (2603:10b6:5:3b8:cafe::2) by DS7PR03CA0015.outlook.office365.com
 (2603:10b6:5:3b8::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.20 via Frontend
 Transport; Thu, 18 May 2023 23:44:56 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT010.mail.protection.outlook.com (10.13.172.222) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 23:44:55 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 18:44:55 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 18 May
 2023 18:44:54 -0500
Received: from ubuntu-20.04.2-arm64.shared (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2375.34 via Frontend Transport; Thu, 18 May 2023 18:44:54 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00a1e0ef-f5d6-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SZb1yO3ncl/YKeaSmEwHFtLIPsBYzrqJWcwg6AS3HDrAz8n8yxB/43tQdc+2tndrXqgXZ1gHgglxO5YbrIvnuOtRPICNrpEOF3JCthYq05nSWduAxFAxAJp8BPuMuxU7HKwPTnb5aT9WkHBlaJ8Hghrkx55Pkj5HjMN5GBYe8Bok8FTG3OHfgDmJVMPTQPH0BTjhZcjO1slHk5GSvR9E2p/CVFclsnmpLj5xvgHUb9e5l0cTslLFd56Ks0tOkO3A47noNQU80CVJTBq5agFKo7BdrAAQo6qgTN0O10FiDGW9bAK9bZosuVc9RKybiHQJ8gvMwhcbUnZ8du+Wv458fQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eBEWT/dwzg5bHP+LJqopgQqdSF83NK+k3UHbU9VyfRg=;
 b=fBJ5rvflvABHgSYNohQ7163SBLGNGlB8fQoNH/FF/w0u+dLobOf7kNEN9OcSHlvh0wLouTbxoXmMjDTPeucKHzOX6TSbfwXrD9PDb11F0Of0HK8G/Xie+flL9/Pc0Pj8BJrUByLY1J2Sx/4Mos68NypCcfSxw+M6tzwWIZRI6gdqiM0f41uLho4DkTgmaHfrCFHXsWPbjMMq/wOKTy9+S7Cd1k3S+LPUp3w85zyqe//KrQG6hHHs/Y85ntYw7M6QPxL6awDagQq6+5+A/ZnsIRyQx/yasNeFYLDK+97s1ucjsNUVwUbbJjjTqaPq5v1FOe0wOMF1YOK9kyYweGcDWA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eBEWT/dwzg5bHP+LJqopgQqdSF83NK+k3UHbU9VyfRg=;
 b=Q0v5s6MLucf9OOGgzMCaMuLtMKhKMexB6mKxwgaYs3CbQM5a7DjegbXnIXqqyXar+ywMlvH/3syPB/4aqn6vd0PdB7TGaNYPuqcU8S1JkMBvyEkLKnq61OuyD36hUeHb8U2lvFeM6VyrkHl+lvJGQELheix/k8c7t+GWqGUcAIA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Date: Thu, 18 May 2023 16:44:53 -0700
From: Stefano Stabellini <stefano.stabellini@amd.com>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: <xen-devel@lists.xenproject.org>
CC: <stefano.stabellini@amd.com>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<xenia.ragiadakou@amd.com>
Subject: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix
Message-ID: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT010:EE_|IA0PR12MB8350:EE_
X-MS-Office365-Filtering-Correlation-Id: d1b2950f-209f-4638-5800-08db57f9e260
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gD2iGDXdeZcGHutBNg3SbRQuLkDroKdIR3UuhaPuUFJCrY+VGwwHoVkDnlgz6pVcvKfdam0V7H8UyDKzx1//pu9LwQbq+NstJ4KrYwXAOvWko+Ixz/Td8lTCrZluhZy/VNpk5KX9V+yuk+BFt8ML8dt7G9E7jWsswO6vmPb0QJSL3Ic3/JCDCvk/zdQAFdaxRmnVva6duUlM3qOGDWn0lxKsZnozZybsPqzgzqFEUAY5fb4+TjPYUSBfm5B8dE6mMixDcKjCW3DMogignTzdC2E6Upcn15YA9C44HlIroSOBSnikRi8Sdrf7ttN2CbWPTG39nLsEvk/Rsw5SR5a9Sab0DEEZG2ceBEWJbKGmzCh4dODdj/zi/uTW38No8Q84xE9IZjPZ0uXomrtNif30MIO9r0fKcGEU9OH83Q+9O7C4jG7JALP0rFykC2v+BAHWQhSnRAcdZ3SivGHnAMO3zxoUoy4amjV2UsPEn7JGUyrosBbh3kg1se7C12OFu+ipeQxhDkBUxEwO7my3IMVEJpsJKb/FM4taLe4C1Yk0et8t5t0Q2fmP0hxxDSrX1AGFb4Hjkg7Ur0WKMmT9HCeH957IagZLky6+nzvoWfOWpphCbhK+ogeJ3CQXmAKwE+Xsk7l7uoLm1dQKQLHWhZHiqRoXPlRn8GZ/oMpgaXePn8CyuyybDrKdiYdqqQvy+uBiJVGMefpM3CnWnkbo/AKo+FCZHJgiwJInoU6kFgsHb0gjdo+9gz0JxAvkwYSAhpoIZgnqoYqN44faC53N360RWg==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(7916004)(376002)(136003)(346002)(39860400002)(396003)(451199021)(46966006)(40470700004)(36840700001)(33716001)(356005)(81166007)(40480700001)(86362001)(40460700003)(82310400005)(82740400003)(44832011)(478600001)(186003)(26005)(9686003)(41300700001)(5660300002)(15650500001)(8676002)(8936002)(70206006)(70586007)(6916009)(316002)(4326008)(54906003)(2906002)(36860700001)(83380400001)(426003)(336012)(47076005)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 23:44:55.8557
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d1b2950f-209f-4638-5800-08db57f9e260
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT010.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB8350

Hi all,

After many PVH Dom0 suspend/resume cycles we are seeing the following
Xen crash (it is random and doesn't reproduce reliably):

(XEN) [555.042981][<ffff82d04032a137>] R arch/x86/irq.c#_clear_irq_vector+0x214/0x2bd
(XEN) [555.042986][<ffff82d04032a74c>] F destroy_irq+0xe2/0x1b8
(XEN) [555.042991][<ffff82d0403276db>] F msi_free_irq+0x5e/0x1a7
(XEN) [555.042995][<ffff82d04032da2d>] F unmap_domain_pirq+0x441/0x4b4
(XEN) [555.043001][<ffff82d0402d29b9>] F arch/x86/hvm/vmsi.c#vpci_msi_disable+0xc0/0x155
(XEN) [555.043006][<ffff82d0402d39fc>] F vpci_msi_arch_disable+0x1e/0x2b
(XEN) [555.043013][<ffff82d04026561c>] F drivers/vpci/msi.c#control_write+0x109/0x10e
(XEN) [555.043018][<ffff82d0402640c3>] F vpci_write+0x11f/0x268
(XEN) [555.043024][<ffff82d0402c726a>] F arch/x86/hvm/io.c#vpci_portio_write+0xa0/0xa7
(XEN) [555.043029][<ffff82d0402c6682>] F hvm_process_io_intercept+0x203/0x26f
(XEN) [555.043034][<ffff82d0402c6718>] F hvm_io_intercept+0x2a/0x4c
(XEN) [555.043039][<ffff82d0402b6353>] F arch/x86/hvm/emulate.c#hvmemul_do_io+0x29b/0x5f6
(XEN) [555.043043][<ffff82d0402b66dd>] F arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x2f/0x6a
(XEN) [555.043048][<ffff82d0402b7bde>] F hvmemul_do_pio_buffer+0x33/0x35
(XEN) [555.043053][<ffff82d0402c7042>] F handle_pio+0x6d/0x1b4
(XEN) [555.043059][<ffff82d04029ec20>] F svm_vmexit_handler+0x10bf/0x18b0
(XEN) [555.043064][<ffff82d0402034e5>] F svm_stgi_label+0x8/0x18
(XEN) [555.043067]
(XEN) [555.469861]
(XEN) [555.471855] ****************************************
(XEN) [555.477315] Panic on CPU 9:
(XEN) [555.480608] Assertion 'per_cpu(vector_irq, cpu)[old_vector] == irq' failed at arch/x86/irq.c:233
(XEN) [555.489882] ****************************************

Looking at the code in question, the ASSERT looks wrong to me.

Specifically, looking at send_cleanup_vector() and
irq_move_cleanup_interrupt(), it is entirely possible to have old_vector
still valid and move_in_progress still set, but only some of the
per_cpu(vector_irq, me)[vector] entries cleared. It seems to me that this
could happen especially when an MSI has a large old_cpu_mask.

While per_cpu(vector_irq, me)[vector] is being cleared on each CPU one by
one, there is a window in which not all per_cpu(vector_irq, me)[vector]
entries have been cleared yet and old_vector is still set.

If at this point we enter _clear_irq_vector, we are going to hit the
ASSERT above.

My suggestion was to turn the ASSERT into an if. Any better ideas?

Cheers,

Stefano

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 20150b1c7f..c82c6b350a 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -230,9 +230,11 @@ static void _clear_irq_vector(struct irq_desc *desc)
 
         for_each_cpu(cpu, tmp_mask)
         {
-            ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
-            TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
-            per_cpu(vector_irq, cpu)[old_vector] = ~irq;
+            if ( per_cpu(vector_irq, cpu)[old_vector] == irq )
+            {
+                TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
+                per_cpu(vector_irq, cpu)[old_vector] = ~irq;
+            }
         }
 
         release_old_vec(desc);


From xen-devel-bounces@lists.xenproject.org Fri May 19 00:05:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 00:05:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536595.835068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzncI-0002oX-Nj; Fri, 19 May 2023 00:05:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536595.835068; Fri, 19 May 2023 00:05:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzncI-0002oP-Ix; Fri, 19 May 2023 00:05:06 +0000
Received: by outflank-mailman (input) for mailman id 536595;
 Fri, 19 May 2023 00:05:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I2Cg=BI=redhat.com=eblake@srs-se1.protection.inumbo.net>)
 id 1pzncG-0002oJ-TL
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 00:05:05 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cc6634c7-f5d8-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 02:05:02 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-43-Ql_NEbjYM2m2kNrCkzuzAg-1; Thu, 18 May 2023 20:04:57 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 75D80185A790;
 Fri, 19 May 2023 00:04:56 +0000 (UTC)
Received: from redhat.com (unknown [10.2.16.57])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 811342026D25;
 Fri, 19 May 2023 00:04:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc6634c7-f5d8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684454700;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=L1bW03zBeN+IDObTz00Q0SyhMk0VOr4i17hlQ4QvYdg=;
	b=XNsg5tjHCKtRzRepCzd339kPCJ3A5bJljLlFNarEf6rO53B6hMQYg6zjJIe3vZs4lFyY5V
	2kWHlf5ELqK2BRTzCCmjuicQD8HwbhnmDfPcsT4b85VFunGisAON1viNdrt/c4dg8cVPEl
	NSEYFPgw9aqNCH0xD6fN0R8YXqyVy3k=
X-MC-Unique: Ql_NEbjYM2m2kNrCkzuzAg-1
Date: Thu, 18 May 2023 19:04:52 -0500
From: Eric Blake <eblake@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Stefano Garzarella <sgarzare@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 1/6] block: add blk_io_plug_call() API
Message-ID: <7bsmwvpfmf6kelaxv32p6nhqcx2f2um2vqhvhu6uw5cooztrhe@oijddrxc2ysx>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-2-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230517221022.325091-2-stefanha@redhat.com>
User-Agent: NeoMutt/20230517
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

On Wed, May 17, 2023 at 06:10:17PM -0400, Stefan Hajnoczi wrote:
> Introduce a new API for thread-local blk_io_plug() that does not
> traverse the block graph. The goal is to make blk_io_plug() multi-queue
> friendly.
> 
> Instead of having block drivers track whether or not we're in a plugged
> section, provide an API that allows them to defer a function call until
> we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is
> called multiple times with the same fn/opaque pair, then fn() is only
> called once at the end of the function - resulting in batching.
> 
> This patch introduces the API and changes blk_io_plug()/blk_io_unplug().
> blk_io_plug()/blk_io_unplug() no longer require a BlockBackend argument
> because the plug state is now thread-local.
> 
> Later patches convert block drivers to blk_io_plug_call() and then we
> can finally remove .bdrv_co_io_plug() once all block drivers have been
> converted.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---

> +++ b/block/plug.c
> +
> +/**
> + * blk_io_plug_call:
> + * @fn: a function pointer to be invoked
> + * @opaque: a user-defined argument to @fn()
> + *
> + * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
> + * section.
> + *
> + * Otherwise defer the call until the end of the outermost
> + * blk_io_plug()/blk_io_unplug() section in this thread. If the same
> + * @fn/@opaque pair has already been deferred, it will only be called once upon
> + * blk_io_unplug() so that accumulated calls are batched into a single call.
> + *
> + * The caller must ensure that @opaque is not be freed before @fn() is invoked.

s/be //

> + */
> +void blk_io_plug_call(void (*fn)(void *), void *opaque)

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Fri May 19 00:06:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 00:06:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536601.835078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzndx-0003Po-59; Fri, 19 May 2023 00:06:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536601.835078; Fri, 19 May 2023 00:06:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzndx-0003Ph-22; Fri, 19 May 2023 00:06:49 +0000
Received: by outflank-mailman (input) for mailman id 536601;
 Fri, 19 May 2023 00:06:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I2Cg=BI=redhat.com=eblake@srs-se1.protection.inumbo.net>)
 id 1pzndw-0003Pb-Gi
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 00:06:48 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0abc846c-f5d9-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 02:06:46 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-36-CmACskPSNxmEXXeyD8i6Wg-1; Thu, 18 May 2023 20:06:41 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 248FB2A59578;
 Fri, 19 May 2023 00:06:41 +0000 (UTC)
Received: from redhat.com (unknown [10.2.16.57])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C17901121314;
 Fri, 19 May 2023 00:06:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0abc846c-f5d9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684454805;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=cc51f8Bp7mzNOGGwWjxbUeVDIv23LIv26o+E1C5Y2V4=;
	b=BKqEIBZ1204dlF5rc4O/NCwE+eSt3emyQnnek6aE4/NDiVRboqu72bHpmXGmGF6EN3kmoP
	4eJm6o7c+qPXLQ+Ape8railremb9vZQWlb7nv52WXO2E0V82yTK0kYIl7BIA9lBCwuBKaB
	ryQVpAqtVar8cCw5nPUtMOVUdOEuZkY=
X-MC-Unique: CmACskPSNxmEXXeyD8i6Wg-1
Date: Thu, 18 May 2023 19:06:38 -0500
From: Eric Blake <eblake@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Stefano Garzarella <sgarzare@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 2/6] block/nvme: convert to blk_io_plug_call() API
Message-ID: <r5bg7a7fb2v6sn7wysssbacbh4pwze5lyemnv4yu5uya2sb67d@a7b63bbtmczp>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-3-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230517221022.325091-3-stefanha@redhat.com>
User-Agent: NeoMutt/20230517
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

On Wed, May 17, 2023 at 06:10:18PM -0400, Stefan Hajnoczi wrote:
> Stop using the .bdrv_co_io_plug() API because it is not multi-queue
> block layer friendly. Use the new blk_io_plug_call() API to batch I/O
> submission instead.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/nvme.c | 44 ++++++++++++--------------------------------
>  1 file changed, 12 insertions(+), 32 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Fri May 19 00:12:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 00:12:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536605.835088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pznjj-0004tF-Qm; Fri, 19 May 2023 00:12:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536605.835088; Fri, 19 May 2023 00:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pznjj-0004t8-Mx; Fri, 19 May 2023 00:12:47 +0000
Received: by outflank-mailman (input) for mailman id 536605;
 Fri, 19 May 2023 00:12:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I2Cg=BI=redhat.com=eblake@srs-se1.protection.inumbo.net>)
 id 1pznji-0004t2-3o
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 00:12:46 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e059a794-f5d9-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 02:12:44 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-82-UzPaRfgVNQCxxpIntwth8g-1; Thu, 18 May 2023 20:12:40 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B9430863E84;
 Fri, 19 May 2023 00:12:39 +0000 (UTC)
Received: from redhat.com (unknown [10.2.16.57])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id D973340C6EC4;
 Fri, 19 May 2023 00:12:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e059a794-f5d9-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684455163;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=h61XM7tpix6Mzm/aeJgroszAT7MeIj1AnU8gTEhsEM8=;
	b=bvj3WZ41n9PTnv14mqvMgOTh7RqkLP3mrtpPGvUOpjVqs4EnAaVNl2doBJGDN+rV/BiAWw
	APGWHCXSJiuIA8/cV4eEPs68UKg3mklImFres91nlac1JYFab9IHsobVEP5QNn02rBOfjV
	lTUe0DkWdZmVv6ITB3MDxrbQW+qpRlQ=
X-MC-Unique: UzPaRfgVNQCxxpIntwth8g-1
Date: Thu, 18 May 2023 19:12:36 -0500
From: Eric Blake <eblake@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Stefano Garzarella <sgarzare@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 3/6] block/blkio: convert to blk_io_plug_call() API
Message-ID: <v2uohowqlo4whvhlreumxn4zlahxv5p3cfec7piv5s2ldvnp2f@bfhyy6mcl3rs>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-4-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230517221022.325091-4-stefanha@redhat.com>
User-Agent: NeoMutt/20230517
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2

On Wed, May 17, 2023 at 06:10:19PM -0400, Stefan Hajnoczi wrote:
> Stop using the .bdrv_co_io_plug() API because it is not multi-queue
> block layer friendly. Use the new blk_io_plug_call() API to batch I/O
> submission instead.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/blkio.c | 40 +++++++++++++++++++++-------------------
>  1 file changed, 21 insertions(+), 19 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Fri May 19 00:19:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 00:19:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536609.835097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pznps-0005X9-GM; Fri, 19 May 2023 00:19:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536609.835097; Fri, 19 May 2023 00:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pznps-0005X2-Dl; Fri, 19 May 2023 00:19:08 +0000
Received: by outflank-mailman (input) for mailman id 536609;
 Fri, 19 May 2023 00:19:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I2Cg=BI=redhat.com=eblake@srs-se1.protection.inumbo.net>)
 id 1pznpq-0005Ww-Rq
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 00:19:06 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c3539d35-f5da-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 02:19:05 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-286-uzkgamGeMV2u9rf44o44xQ-1; Thu, 18 May 2023 20:19:01 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 7BA94101A551;
 Fri, 19 May 2023 00:19:00 +0000 (UTC)
Received: from redhat.com (unknown [10.2.16.57])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 2016640C2063;
 Fri, 19 May 2023 00:18:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3539d35-f5da-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684455544;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=oaXrwy83qppglQqMsQxjXsygcQ3kYt7b1pYV3/XPqVE=;
	b=UXXHo3+s4LDUD1FmQVmKk5xfCaY9PwXb+6oaDOStazyGmjMYr3jO7TtVz3l9mbXoh6LvT8
	DhSn3MOeAtaY3NfPrtASYwAHHPDFEuqbShlOQN64iUINN78dKJBNwAHlIbhm11S93M4JSw
	szpnM4lmxnRIC2wk8mtaRRjjAUnVdTM=
X-MC-Unique: uzkgamGeMV2u9rf44o44xQ-1
Date: Thu, 18 May 2023 19:18:42 -0500
From: Eric Blake <eblake@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Stefano Garzarella <sgarzare@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 4/6] block/io_uring: convert to blk_io_plug_call() API
Message-ID: <7xerljqzrhzvl73beu7dboq3d6jbxbkrxbhs25xzcw5ozopgbn@3olwj3w5fil5>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-5-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230517221022.325091-5-stefanha@redhat.com>
User-Agent: NeoMutt/20230517
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

On Wed, May 17, 2023 at 06:10:20PM -0400, Stefan Hajnoczi wrote:
> Stop using the .bdrv_co_io_plug() API because it is not multi-queue
> block layer friendly. Use the new blk_io_plug_call() API to batch I/O
> submission instead.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  include/block/raw-aio.h |  7 -------
>  block/file-posix.c      | 10 ---------
>  block/io_uring.c        | 45 ++++++++++++++++-------------------------
>  block/trace-events      |  5 ++---
>  4 files changed, 19 insertions(+), 48 deletions(-)
> 

> @@ -337,7 +325,6 @@ void luring_io_unplug(void)
>   * @type: type of request
>   *
>   * Fetches sqes from ring, adds to pending queue and preps them
> - *
>   */
>  static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
>                              uint64_t offset, int type)
> @@ -370,14 +357,16 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,

Looks a bit like a stray hunk, but you are touching the function, so
it's okay.

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Fri May 19 00:29:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 00:29:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536613.835109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pznzU-00073d-EY; Fri, 19 May 2023 00:29:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536613.835109; Fri, 19 May 2023 00:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pznzU-00073W-9d; Fri, 19 May 2023 00:29:04 +0000
Received: by outflank-mailman (input) for mailman id 536613;
 Fri, 19 May 2023 00:29:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I2Cg=BI=redhat.com=eblake@srs-se1.protection.inumbo.net>)
 id 1pznzS-00073Q-PU
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 00:29:02 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 26594b53-f5dc-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 02:29:01 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-593-tqpyHlUyPEW6dSc08Bg6qg-1; Thu, 18 May 2023 20:28:56 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com
 [10.11.54.5])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 35DB0185A7A4;
 Fri, 19 May 2023 00:28:56 +0000 (UTC)
Received: from redhat.com (unknown [10.2.16.57])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C8C9863F5F;
 Fri, 19 May 2023 00:28:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26594b53-f5dc-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684456140;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xeGWmHxXOnaXNqDj+FYNqz1jw8Uegaz/pUc2EBnBJJw=;
	b=Ku/z6UT3zS8cC+WxwXFZYn5o4HCfNwOew+j/3vsBkRzEx8jZvvJlbUfpZO0GhX29RS1KN6
	DkvyTKeKu2WtG99u+kZvDPRj0Ql8zs0Nv4nclUtj9tELS2WGWdBt9CpCTNaCxANyOVpLAr
	AB6sob4ASuXoLEZTlpwO6F6C5scs+34=
X-MC-Unique: tqpyHlUyPEW6dSc08Bg6qg-1
Date: Thu, 18 May 2023 19:28:52 -0500
From: Eric Blake <eblake@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Stefano Garzarella <sgarzare@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 5/6] block/linux-aio: convert to blk_io_plug_call() API
Message-ID: <a5ziex34rcalbgh5bmklgsac7m2yirfbrhjruvzh5nd2h5srt6@gqdom2tnxx3k>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-6-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230517221022.325091-6-stefanha@redhat.com>
User-Agent: NeoMutt/20230517
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5

On Wed, May 17, 2023 at 06:10:21PM -0400, Stefan Hajnoczi wrote:
> Stop using the .bdrv_co_io_plug() API because it is not multi-queue
> block layer friendly. Use the new blk_io_plug_call() API to batch I/O
> submission instead.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  include/block/raw-aio.h |  7 -------
>  block/file-posix.c      | 28 ----------------------------
>  block/linux-aio.c       | 41 +++++++++++------------------------------
>  3 files changed, 11 insertions(+), 65 deletions(-)
>

Nice to see that not only is it friendlier to multi-queue, it's also
fewer lines of code.

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Fri May 19 00:29:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 00:29:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536616.835117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzo01-0007Y3-Ln; Fri, 19 May 2023 00:29:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536616.835117; Fri, 19 May 2023 00:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzo01-0007Xw-JD; Fri, 19 May 2023 00:29:37 +0000
Received: by outflank-mailman (input) for mailman id 536616;
 Fri, 19 May 2023 00:29:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I2Cg=BI=redhat.com=eblake@srs-se1.protection.inumbo.net>)
 id 1pzo00-00073Q-MB
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 00:29:36 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3b11ac20-f5dc-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 02:29:36 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-53-rNDO6Pq6NgOB6AuhL2p-gA-1; Thu, 18 May 2023 20:29:33 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 15634185A7A4;
 Fri, 19 May 2023 00:29:33 +0000 (UTC)
Received: from redhat.com (unknown [10.2.16.57])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id BD7402026D25;
 Fri, 19 May 2023 00:29:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b11ac20-f5dc-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684456174;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=izck6U+p15XsA5Y7NeSwny6iry6I1uPGq8jJ+Yvs74g=;
	b=fWTd0vHRdj7MGez+2OWCgCMJdrHn1U+JOAFo78Wi519Z4TVSe69lXy/vkz858b4g87+SbJ
	c6Kch7okY20JymjXDdsvOqow22vu52Agi7LileA/1L6R3aNNijYZAnmUYsqPr+Bv/PfUao
	B+qa6Qmgv3Yk0uRuJPHuZ1OtndMTVoY=
X-MC-Unique: rNDO6Pq6NgOB6AuhL2p-gA-1
Date: Thu, 18 May 2023 19:29:29 -0500
From: Eric Blake <eblake@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Stefano Garzarella <sgarzare@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 6/6] block: remove bdrv_co_io_plug() API
Message-ID: <eygnwu5upxmsoh4laxobo2x3i5ongnktxhp35m52mtsacgptas@ily4molhj2me>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-7-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230517221022.325091-7-stefanha@redhat.com>
User-Agent: NeoMutt/20230517
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

On Wed, May 17, 2023 at 06:10:22PM -0400, Stefan Hajnoczi wrote:
> No block driver implements .bdrv_co_io_plug() anymore. Get rid of the
> function pointers.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  include/block/block-io.h         |  3 ---
>  include/block/block_int-common.h | 11 ----------
>  block/io.c                       | 37 --------------------------------
>  3 files changed, 51 deletions(-)

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Fri May 19 01:10:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 01:10:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536622.835128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzodh-0002kx-Pt; Fri, 19 May 2023 01:10:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536622.835128; Fri, 19 May 2023 01:10:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzodh-0002kq-Lj; Fri, 19 May 2023 01:10:37 +0000
Received: by outflank-mailman (input) for mailman id 536622;
 Fri, 19 May 2023 01:10:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzodg-0002kg-8R; Fri, 19 May 2023 01:10:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzodg-0002p5-3V; Fri, 19 May 2023 01:10:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzodf-0007xB-L0; Fri, 19 May 2023 01:10:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzodf-00016B-KV; Fri, 19 May 2023 01:10:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tL/uaXbILzQIBndKF+56pMjLreobyW5jkYuzgL3Hxis=; b=h+SQnik6AlPOW/7SZ7u4ztkZnE
	WXyLZZodFrkvi0sg7O55n7aO+Ww6fT2Yve3qJl624UUgw9lQhvkzuxGQLXL+cZjLcIFkIWWJvh+0/
	7MIfnr6hbwpiEpY2FUrQrUHwXXPdjY7NCQHSiRWHRC97bdtmH4ulVD0NNRynF3ZV3oaY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180700-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180700: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4d6d4c7f541d7027beed4fb86eb2c451bd8d6fff
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 01:10:35 +0000

flight 180700 linux-linus real [real]
flight 180703 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180700/
http://logs.test-lab.xenproject.org/osstest/logs/180703/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                4d6d4c7f541d7027beed4fb86eb2c451bd8d6fff
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   32 days
Failing since        180281  2023-04-17 06:24:36 Z   31 days   58 attempts
Testing same since   180700  2023-05-18 13:40:14 Z    0 days    1 attempts

------------------------------------------------------------
2391 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 302906 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 19 01:26:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 01:26:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536630.835137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzosl-0004UD-69; Fri, 19 May 2023 01:26:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536630.835137; Fri, 19 May 2023 01:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzosl-0004U6-3V; Fri, 19 May 2023 01:26:11 +0000
Received: by outflank-mailman (input) for mailman id 536630;
 Fri, 19 May 2023 01:26:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzosj-0004Tu-G5; Fri, 19 May 2023 01:26:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzosj-000345-8V; Fri, 19 May 2023 01:26:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzosi-0008Ht-N5; Fri, 19 May 2023 01:26:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzosi-00061i-Mf; Fri, 19 May 2023 01:26:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aW5QVNzOy/uZNfO69mdhbAzjuz6sca/CzM3kQooMxEQ=; b=g+2aBtIT5l/dv/By/l80DdmN39
	UolmPgcy/tXkL/o4bnvPHxLW834/uFDMoHx4KWHg6Eh3ifdOxCUWT6FYc2YQe31/oDPSnkwasMPuC
	yskQ8ONg3VCjuZkzIP2HJIEyr8+DV3zQwNKw8gRoSVNhuW7pSlB85zeu5uT7+YFG+iOM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180702-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180702: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=297e8182194e634baa0cbbfd96d2e09e2a0bcd40
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 01:26:08 +0000

flight 180702 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180702/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                297e8182194e634baa0cbbfd96d2e09e2a0bcd40
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    1 days
Failing since        180699  2023-05-18 07:21:24 Z    0 days    2 attempts
Testing same since   180702  2023-05-18 20:40:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Gavin Shan <gshan@redhat.com>
  Igor Mammedov <imammedo@redhat.com>
  John Snow <jsnow@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Markus Armbruster <armbru@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Steve Sistare <steven.sistare@oracle.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2118 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 19 01:47:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 01:47:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536637.835151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzpCt-000704-WA; Fri, 19 May 2023 01:46:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536637.835151; Fri, 19 May 2023 01:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzpCt-0006zw-Qs; Fri, 19 May 2023 01:46:59 +0000
Received: by outflank-mailman (input) for mailman id 536637;
 Fri, 19 May 2023 01:46:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sk0S=BI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1pzpCs-0006zq-RF
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 01:46:58 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0966dd9b-f5e7-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 03:46:57 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id DA471652CD;
 Fri, 19 May 2023 01:46:55 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5A58AC433D2;
 Fri, 19 May 2023 01:46:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0966dd9b-f5e7-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684460815;
	bh=bcUly72PDa+s+h9TKc25Y8B9YjfC6eEFUcRDf3u8aa8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=hbUeXdf1ZuLbFOTgRYBBKo9jx9E37qk0bA/OFKFNtifgifstixYPChl9IbTzxOBge
	 lHgMVqO39YdMTiutJ9JxH8oCJ+6FbzhMVB73jC9CH1mlZqNdW+pxZtQbcpvfUE+u7u
	 7Q/svKuhOAGZftdMx10eaNqHimQfqexOOJfbzdDfj8gSmYIysDpVaN1mbXvDITQwf0
	 iD6rGRb4Nb1/qisDL9UKLpaSJkp9hQfLMfaHD3XxblyAvGeaU2N/hhyU95tE8aktoo
	 MHFftyhC4i8tSKZh5NJ9VxbPCZo63ho76bc93uixcsPT3rHM5xoVjlrYu+LnLa97h7
	 3pUDkV6Ctk9+A==
Date: Thu, 18 May 2023 18:46:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, jbeulich@suse.com, 
    andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org, 
    marmarek@invisiblethingslab.com, xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
In-Reply-To: <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
Message-ID: <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop> <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-461541215-1684457310=:128889"
Content-ID: <alpine.DEB.2.22.394.2305181748310.128889@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-461541215-1684457310=:128889
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305181748311.128889@ubuntu-linux-20-04-desktop>

On Thu, 18 May 2023, Roger Pau Monné wrote:
> On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> > Hi all,
> > 
> > I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
> > test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
> > Zen3 system and we already have a few successful tests with it, see
> > automation/gitlab-ci/test.yaml.
> > 
> > We managed to narrow down the issue to a console problem. We are
> > currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
> > options; it works with PV Dom0 and it is using a PCI UART card.
> > 
> > In the case of Dom0 PVH:
> > - it works without console=com1
> > - it works with console=com1 and with the patch appended below
> > - it doesn't work otherwise and crashes with this error:
> > https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> 
> Jan also noticed this, and we have a ticket for it in gitlab:
> 
> https://gitlab.com/xen-project/xen/-/issues/85
> 
> > What is the right way to fix it?
> 
> I think the right fix is to simply prevent hidden devices from being
> handled by vPCI; in any case such devices won't work properly with
> vPCI because they are in use by Xen, and so any information cached by
> vPCI is likely to become stale, as Xen can modify the device without
> vPCI noticing.
> 
> I think the chunk below should help.  It's not clear to me, however, how
> hidden devices should be handled: is the intention to completely hide
> such devices from dom0?

I like the idea but the patch below still failed:

(XEN) Xen call trace:
(XEN)    [<ffff82d0402682b0>] R drivers/vpci/header.c#modify_bars+0x2b3/0x44d
(XEN)    [<ffff82d040268714>] F drivers/vpci/header.c#init_bars+0x2ca/0x372
(XEN)    [<ffff82d0402673b3>] F vpci_add_handlers+0xd5/0x10a
(XEN)    [<ffff82d0404408b9>] F drivers/passthrough/pci.c#setup_one_hwdom_device+0x73/0x97
(XEN)    [<ffff82d0404409b0>] F drivers/passthrough/pci.c#_setup_hwdom_pci_devices+0x63/0x15b
(XEN)    [<ffff82d04027df08>] F drivers/passthrough/pci.c#pci_segments_iterate+0x43/0x69
(XEN)    [<ffff82d040440e29>] F setup_hwdom_pci_devices+0x25/0x2c
(XEN)    [<ffff82d04043cb1a>] F drivers/passthrough/amd/pci_amd_iommu.c#amd_iommu_hwdom_init+0xd4/0xdd
(XEN)    [<ffff82d0404403c9>] F iommu_hwdom_init+0x49/0x53
(XEN)    [<ffff82d04045175e>] F dom0_construct_pvh+0x160/0x138d
(XEN)    [<ffff82d040468914>] F construct_dom0+0x5c/0xb7
(XEN)    [<ffff82d0404619c1>] F __start_xen+0x2423/0x272d
(XEN)    [<ffff82d040203344>] F __high_start+0x94/0xa0

I haven't managed to figure out why yet.


> --- 
> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> index 652807a4a454..0baef3a8d3a1 100644
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
>      unsigned int i;
>      int rc = 0;
>  
> -    if ( !has_vpci(pdev->domain) )
> +    if ( !has_vpci(pdev->domain) ||
> +         /*
> +          * Ignore RO and hidden devices, those are in use by Xen and vPCI
> +          * won't work on them.
> +          */
> +         pci_get_pdev(dom_xen, pdev->sbdf) )
>          return 0;
>  
>      /* We should not get here twice for the same device. */

--8323329-461541215-1684457310=:128889--


From xen-devel-bounces@lists.xenproject.org Fri May 19 04:04:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 04:04:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536655.835197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzrLf-0004qT-73; Fri, 19 May 2023 04:04:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536655.835197; Fri, 19 May 2023 04:04:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzrLf-0004qM-32; Fri, 19 May 2023 04:04:11 +0000
Received: by outflank-mailman (input) for mailman id 536655;
 Fri, 19 May 2023 04:04:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jOyF=BI=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1pzrLe-0004qG-DW
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 04:04:10 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 34096fef-f5fa-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 06:04:08 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 6AB6268AFE; Fri, 19 May 2023 06:04:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34096fef-f5fa-11ed-b22d-6b7b168915f2
Date: Fri, 19 May 2023 06:04:05 +0200
From: Christoph Hellwig <hch@lst.de>
To: Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>
Cc: Christoph Hellwig <hch@lst.de>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when
 xen-pcifront is enabled
Message-ID: <20230519040405.GA10818@lst.de>
References: <20230518134253.909623-1-hch@lst.de> <20230518134253.909623-3-hch@lst.de> <ZGZr/xgbUmVqpOpN@mail-itl>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZGZr/xgbUmVqpOpN@mail-itl>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, May 18, 2023 at 08:18:39PM +0200, Marek Marczykowski-Górecki wrote:
> On Thu, May 18, 2023 at 03:42:51PM +0200, Christoph Hellwig wrote:
> > Remove the dangerous late initialization of xen-swiotlb in
> > pci_xen_swiotlb_init_late and instead just always initialize
> > xen-swiotlb in the boot code if CONFIG_XEN_PCIDEV_FRONTEND is enabled.
> > 
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> 
> Doesn't it mean that all PV guests will basically waste 64MB of RAM
> each by default if they don't really have PCI devices?

If CONFIG_XEN_PCIDEV_FRONTEND is enabled, and the kernel isn't booted
with swiotlb=noforce, yes.



From xen-devel-bounces@lists.xenproject.org Fri May 19 06:01:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 06:01:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536671.835238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pztBB-0000vq-Uc; Fri, 19 May 2023 06:01:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536671.835238; Fri, 19 May 2023 06:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pztBB-0000vj-Q5; Fri, 19 May 2023 06:01:29 +0000
Received: by outflank-mailman (input) for mailman id 536671;
 Fri, 19 May 2023 06:01:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pztBA-0000vd-Jq
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 06:01:28 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2074.outbound.protection.outlook.com [40.107.7.74])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 973b9ade-f60a-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 08:01:26 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9363.eurprd04.prod.outlook.com (2603:10a6:20b:4e8::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 06:00:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 06:00:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 973b9ade-f60a-11ed-b22d-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BSeGtClR+KejmygyDgqPrvKuUAT9gGUEr5MA54PAiMk=;
 b=4YFFoat30/hHxubgQTk2J1M9HB5gB7l2M8dYgCCcL2ZrHO6uL5iYzSMcogAYKn6iiFuHDs4WtZEJWmyfTMwa39zkHiEcXTT/7PjnWQq6QRSNaAyYcR3zWG91+yPG++DSneac2gG01u709QBhcWWTD4+TkzTzYZCFhC5lJfkJML8KQpUCgJfM+Rx8SDBUWICOQ1gdLulVvB1ZhaK0uQBU+NuV9WKvIQ983kME843DlbkCA3hwZ73oAcmAkvx1Ns2DhRdSUhUmKROx3OgiBb1DlM36gr/lF8GhaIm4drFLtXF+Dbaq1KPY/NQWvk48l+ZoKjvmfr7O4/fV7CFw1EsVfA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <fb95d033-7a71-7cc1-bb8f-ee2a74d1c5cf@suse.com>
Date: Fri, 19 May 2023 08:00:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 4/4] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-5-andrew.cooper3@citrix.com>
 <1d06869b-1633-7794-c5c9-92d9c0885f19@suse.com>
 <42cd2479-1eba-11c7-26d8-441045c230ed@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <42cd2479-1eba-11c7-26d8-441045c230ed@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0165.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b3::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9363:EE_
X-MS-Office365-Filtering-Correlation-Id: 97f9dce8-e67f-43a1-c74b-08db582e6952
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 97f9dce8-e67f-43a1-c74b-08db582e6952
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 06:00:56.4215
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LyjRmGZuiXEkwTyNFm563fLe2HOHWNgL5Zn3CzNDPoJpLDmAqcTomA01hmlJks5gBl4LEZxG2j2ljbbHLZiHLw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9363

On 17.05.2023 18:35, Andrew Cooper wrote:
> On 17/05/2023 3:47 pm, Jan Beulich wrote:
>> On 16.05.2023 16:53, Andrew Cooper wrote:
>>> @@ -401,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
>>>          cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
>>>      if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
>>>          cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
>>> +    if ( cpu_has_arch_caps )
>>> +        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
>> Why do you read the MSR again? I would have expected this to come out
>> of raw_cpu_policy now (and incrementally the CPUID pieces as well,
>> later on).
> 
> Consistency with the surrounding logic.

I view this as relevant only when the code invoking CPUID directly is
intended to stay.

> Also because the raw and host policies don't get sorted until much later
> in boot.

identify_cpu(), which invokes init_host_cpu_policies(), is called
ahead of init_speculation_mitigations(), isn't it?

>> Apart from this, with all the uses further down gone, perhaps there's
>> not even a need for the raw value, if you used the bitfields in the
>> printk(). Which in turn raises the question whether the #define-s in
>> msr-index.h are of much use then anymore.
> 
> One of the next phases of work is synthesizing these in the host policy
> for CPUs which didn't receive microcode updates (for whatever reason).
> 
> There is a valid discussion for whether we ought to render the raw or
> host info here (currently we do raw), but I'm not adjusting that in this
> patch.

In the end I think both have their merits for logging. So far my
assumption was that "Hardware {hints,features}:" was intended to cover
raw, while "Xen settings:" was meant to be close to "host" (but of
course there's quite a bit of a delta).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 06:06:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 06:06:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536677.835253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pztGQ-0001aB-KP; Fri, 19 May 2023 06:06:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536677.835253; Fri, 19 May 2023 06:06:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pztGQ-0001a4-Gq; Fri, 19 May 2023 06:06:54 +0000
Received: by outflank-mailman (input) for mailman id 536677;
 Fri, 19 May 2023 06:06:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pztGO-0001Zu-HE; Fri, 19 May 2023 06:06:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pztGO-0001oR-FH; Fri, 19 May 2023 06:06:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pztGO-0001bG-0c; Fri, 19 May 2023 06:06:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pztGO-00028e-03; Fri, 19 May 2023 06:06:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CrX3cliINKMjiC8NAZFEwTIgxGfQqVtMAwS/RSMgu7M=; b=ME5q37jJr4oU3Y8eh03DOCUpnG
	evsOKUKpOfDQPZCYIULm3LXSarI6mGDxGuIx16k5Xo7w83OSvU2w8AQsdtxsFL0L7FkIrpqMu6/lT
	ueXP2PJkQ8jwHtaK5QtYN+MuYjRL6cu3F+eD6oy4tKe5e2OXJgiItDA7sfjj0Mx2JBZw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180704-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180704: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64-pvops:kernel-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=297e8182194e634baa0cbbfd96d2e09e2a0bcd40
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 06:06:52 +0000

flight 180704 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180704/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                297e8182194e634baa0cbbfd96d2e09e2a0bcd40
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    1 days
Failing since        180699  2023-05-18 07:21:24 Z    0 days    3 attempts
Testing same since   180702  2023-05-18 20:40:14 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Gavin Shan <gshan@redhat.com>
  Igor Mammedov <imammedo@redhat.com>
  John Snow <jsnow@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Markus Armbruster <armbru@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Steve Sistare <steven.sistare@oracle.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2118 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 19 06:10:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 06:10:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536685.835263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pztJo-00037Z-7x; Fri, 19 May 2023 06:10:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536685.835263; Fri, 19 May 2023 06:10:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pztJo-00037S-47; Fri, 19 May 2023 06:10:24 +0000
Received: by outflank-mailman (input) for mailman id 536685;
 Fri, 19 May 2023 06:10:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pztJm-00037M-5d
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 06:10:22 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20609.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::609])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d5bcfd1d-f60b-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 08:10:21 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9789.eurprd04.prod.outlook.com (2603:10a6:10:4ed::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 06:10:18 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 06:10:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5bcfd1d-f60b-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DjRH5EnmaHGL/nRRN3iWrUzDycrI8FeTVE4fMUOYWTDBbkZGnsPzYx7MTV8qfMfKNYENtqJVH84U/Hwa8PQ1as5bWq68HXlhE98gmJbYjKjEXiaarFOvlPa9Qmx29SP90lsEzpSDo7LqoyuNTrsFxhfZGGMJmHGqLWkNO6Rn2/wSreibpvvoXvQ8+pj+XUzT4NKjvQklf2lU/MmKFsdAZrKeapOHwfhNqFwKeryFfnCh/EU4QMpFqgyafnS+9ja9hBdvPrlOhSAezLJ5pDid7psvEA+k6qsdDps5gU5UAw8j+tiV54X/+5T+C35TqwOQkEJ7gh0Q+w7W54079btSBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=evtFuPv6XMF+Tdp6iGbdxcxMnpHnJtUVWds36uzNMSw=;
 b=m2NRsNhtqknXjvhx4BilCTJKVV+22Xi2k0SSYSpB6MeHIEWS4iz1L9etwHOdOHt9WUfAfA4gIzoRnlbj0FS8TJ6/7AvUN8A2YRAL1l4IxbVie9p5bhMYwt14222MtRULiRoY8mDLayYx4O0+79vmZXBJb6lnBxVn9KEZ7zFoC/uJOU0eMiYDdHDdKHlinfOxhaKzOdXkLRHOvnb/7KuXdZuKyDW9AeU4nW5cYpoUuN076qLKRyMLbPgR+voNWT7n2SpQondodnZjXnRsvHgJtslPAdMGXqvfSciq8P50fpz6g1+T5KyrEoIDGNCkRv69PO/sCw/tPDTLxO0w3xvFsg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=evtFuPv6XMF+Tdp6iGbdxcxMnpHnJtUVWds36uzNMSw=;
 b=4Dh11Vqli31FIL9POyofIoKcwqRk9tZGV7KXqRl8Dy7TVReTHSQwpFjwDGm7L/iv5fe/nmimRY3KlCRs65XWB3d2K+fxX5eu8EuZ3HiMJnfFOcr4kpGyEtu1lZOcigoqfNXf4HGwm7R/Knl7L7xv7HZjamTD2xTrQBjl7vRTO7OyEWhcCv7Z33AMLMRCTT3MaZ4SrN3tDpF9FYY19R/4pzWPsMw13k/oVj702iiykelgqKkOl3DVAscU+794ZdRcJ4USWFFSg9Syt/oE1+HODwKyeL3bjfuYHHLYmqLTxHGbYFomY4NqeQPfikaykv0e/5I7xUi9varATHcR1gN/ag==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a871ff38-5a68-d703-6909-2d217f7fdc7d@suse.com>
Date: Fri, 19 May 2023 08:10:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH RFC] xen: Enable -Wwrite-strings
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516203428.1441365-1-andrew.cooper3@citrix.com>
 <796b6671-c699-1bbf-b3a7-59c8fceeb625@suse.com>
 <2308d1ef-4928-bb60-88b0-319ac3370a53@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2308d1ef-4928-bb60-88b0-319ac3370a53@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0153.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9789:EE_
X-MS-Office365-Filtering-Correlation-Id: 99ba670b-1bd1-43ad-4497-08db582fb89b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 99ba670b-1bd1-43ad-4497-08db582fb89b
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 06:10:18.8350
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7oJxQGt/JqoIE+hjTbJNyRZhmaatZkb7GwNbY51k3pLRrqsmjx6x4a3zGLdVfRSqAJwKhnITsdWb/GglHvq5RQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9789

On 17.05.2023 19:00, Andrew Cooper wrote:
> On 17/05/2023 11:34 am, Jan Beulich wrote:
>> On 16.05.2023 22:34, Andrew Cooper wrote:
>>> Following on from the MISRA discussions.
>>>
>>> On x86, most are trivial.  The two slightly suspect cases are __hvm_copy()
>>> where constness is dependent on flags,
>> But do we ever pass string literals in there? I certainly would
>> like to avoid the explicit casts to get rid of the const there.
> 
> The thing which trips it up is the constness of the cmdline param in the
> construct_dom0() calltree.  It may have been tied up in the constness
> from cmdline_cook() - I wasn't paying that much attention.
> 
> Irrespective, from a conceptual point of view, we ought to be able to
> use the copy_to_* helpers from a const source.

True. Yet then, as a minimal additional change, may I ask that you drop
the cast that copy_to_user_hvm() has in exchange for the one(s) you
add?

>>> The one case which I can't figure out how to fix is EFI:
>>>
>>>   In file included from arch/x86/efi/boot.c:700:
>>>   arch/x86/efi/efi-boot.h: In function ‘efi_arch_handle_cmdline’:
>>>   arch/x86/efi/efi-boot.h:327:16: error: assignment discards ‘const’ qualifier from pointer target type [-Werror=discarded-qualifiers]
>>>     327 |         name.s = "xen";
>>>         |                ^
>>>   cc1: all warnings being treated as errors
>>>
>>> Why do we have something that looks like this ?
>>>
>>>   union string {
>>>       CHAR16 *w;
>>>       char *s;
>>>       const char *cs;
>>>   };
>> Because that was the least clutter (at respective use sites) that I
>> could think of at the time. Looks like you could simply assign to
>> name.cs, now that we have that field (iirc it wasn't there originally).
>> Of course that's then only papering over the issue.
> 
> Well yes.  If it's only this one, we could use the same initconst trick
> and delete the cs field, but I suspect the field's existence means it
> would cause problems elsewhere.

I'm pretty sure it would (which is why I didn't suggest it); as said, I
think this field was added much later, maybe in the context of the
unified EFI image work.

>>> --- a/xen/include/acpi/actypes.h
>>> +++ b/xen/include/acpi/actypes.h
>>> @@ -281,7 +281,7 @@ typedef acpi_native_uint acpi_size;
>>>   */
>>>  typedef u32 acpi_status;	/* All ACPI Exceptions */
>>>  typedef u32 acpi_name;		/* 4-byte ACPI name */
>>> -typedef char *acpi_string;	/* Null terminated ASCII string */
>>> +typedef const char *acpi_string;	/* Null terminated ASCII string */
>>>  typedef void *acpi_handle;	/* Actually a ptr to a NS Node */
>> For all present uses that we have this change looks okay, but changing
>> this header leaves me a little uneasy. At the same time I have no
>> better suggestion.
> 
> I was honestly tempted to purge this typedef with prejudice.  Hiding
> indirection like this is nothing but an obfuscation technique.

To be honest, I think I'd be fine with purging it (but then better in
a separate patch).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 06:16:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 06:16:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536689.835273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pztPH-0003js-RY; Fri, 19 May 2023 06:16:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536689.835273; Fri, 19 May 2023 06:16:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pztPH-0003jl-Op; Fri, 19 May 2023 06:16:03 +0000
Received: by outflank-mailman (input) for mailman id 536689;
 Fri, 19 May 2023 06:16:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pztPG-0003jf-L8
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 06:16:02 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on2061b.outbound.protection.outlook.com
 [2a01:111:f400:7d00::61b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a06d1006-f60c-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 08:16:01 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7987.eurprd04.prod.outlook.com (2603:10a6:20b:24d::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 06:15:59 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 06:15:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a06d1006-f60c-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oOPe9Z1m83hDuQs34Z+/8xHn8yg3xSaUUYOBOrHyNd/9ubEO9+hC2av+tY079GX75mXvcb9F26hQIExmF1Md0nYABsfp4mW7yV2KwCodjGKttUCgfJFl+IH++ZUShILCq/stlbqx//G8xTtGcQlSHN61UQ9WTTSox/E3gaXoStWbUj2QAA3Jk7xKlc50hxLwX8Oa+mmcfD3u4ht69KLR9DvCxjFiXqsleJCEFfHsobAHWtLV4nGVbofuaixID3plwPRlFh9G/Onykc6GU6jVWhD9bD2CFBmqQm1xjQ7ww3uKriFyysNbQvQpm4NvtPPkBn/bbGQTVhycrhcZltQV+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sJF0+RYrUrsBk6SF7twEGkhRDSpxnS1tjodPwav43Ck=;
 b=XBe183Z9c385vV2/ll2Hu6KmxblDjhKh5A4xnfUKWJj8xQUzd4qLYzpNEcbEBMnqNwotlzMEL/3bILSNUIo/L9DvaY6SXto75CzS6wtMY07020Qh7lkkJzP8eCaymx/1FW3oI7Ono7fNVxVzjKKbBWV5eTyf379VIUp8yVv6y0OI+gT+oZsMYiHn9lg8i+O7o2xlcOw9sE9t3cuw+LUZ4SlIb+mN17EYVu7MHcHKm+Bx+5Z2oYs+O6gmQLH6VfkATQ+laX/2OFLlO5q4dRFo6akeIRByTMcb6y0PThpyUD2WdGH20+C7Sf5u783stsS2qHdzYRRg7oZcvehS6+3rfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sJF0+RYrUrsBk6SF7twEGkhRDSpxnS1tjodPwav43Ck=;
 b=bdugUjkWA65yw779l+bDizSZl8ePFvokReT64lHP2RKu5ka/ClgByc9jpN4+cpWPP790y/+Vt+suN/62kwSHwR9LQoSJbV9sEye0FMDJVoBtS6//gYpoUQp0fl6pFsNA34I7GDAMdR2xqxRHLFZM3LH2DNychCM4aLNmPcFJ3o5pLsAC85o+Lup1rgigP11e+Y8iZCNVfFOr+y9YPwX+C6cV1NzhR0YuFC/MbVvQRk/jgEHhXlGDCOCh5NphdNw7aqpbW/N4lP6qEkzp7QepAACgve/7q7H7bCIeRZkdjfp2+rHeO+1F1Kp+qCnt8EXyHJkUzYLP8V6eMOhmKkSKlw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2c867a48-1442-e4bc-0d51-d87c77aba8a9@suse.com>
Date: Fri, 19 May 2023 08:15:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86: do away with HAVE_AS_NEGATIVE_TRUE
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f994f67d-e0de-ad28-d418-1eb5a70bc1b8@suse.com>
 <b79b5b32-7bcb-b4cd-1594-e16aaff640e1@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <b79b5b32-7bcb-b4cd-1594-e16aaff640e1@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0056.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7987:EE_
X-MS-Office365-Filtering-Correlation-Id: 539178e1-c6fc-40ca-f69a-08db5830839c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	P87xZ0JzJCOmPggMV5xZLNctAk2woztsjPNHN6LFImijw5uTZ3nMmSOl0m3xJEPTyrXirGHVHz4j0r/zkXXCWLnWlK1WMTEnLVtS071GjDK+hP5ixpqOLzEENv/2D4KofeiynWiH2Tn4stK/WVUWduL2NgxVtV/edHiqcmrKOxlr2Of0gnPPr3esIHgmxKibypTdNKD7yXv8IqOnCIEzRnTGXCKGvhZP986CaHK0H4d9BTWNV+/C419n1Sk1iX3ZQ4mlJPvRxYZWRPafJY9j2zryqNFeJylRfP11GI1kjC0MR+s1g4+4jdpqY5wQV+tIDGXoklDJcbUXNdt7u+uo7EE4v0y32/fwHosMw7KXCPbvVARgmIpLXAt6yhcVTOhUr7Co3dpuqlzOgd+ug1b6Y3lwe9Rbwd5Ypi3m1pvbAv3WQQfzGnpYeMoG8iwUMakIwG5Qmh2uUix2wT/ivSpsiyCfQjQciZgr59DjxK+TjojHPnr9aYhyuxoRvAav4FRXqHraN3vYKQAmXW9ED8hcbzJwvovsoTOYsICxtRnSJQosgz1A/igXVr4WP9ZXjWGJ70L3I2P04sDD9H+zCEwzyONArwkns8BNYAuNkob/BSWcSC2LPxBywWHJKtfAMuD6O7UMCOBKF7gQTogPugR+tQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(39860400002)(136003)(376002)(346002)(396003)(451199021)(31686004)(53546011)(6512007)(6506007)(2616005)(8936002)(26005)(8676002)(36756003)(86362001)(31696002)(2906002)(6916009)(478600001)(66476007)(41300700001)(66556008)(5660300002)(186003)(316002)(66946007)(54906003)(6486002)(4326008)(38100700002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?N09tNFRqd2pKa0Fxc1NJTzQ2U1kzelpYL211Tkh0K1V3cUpBdmxvTHJjRjJv?=
 =?utf-8?B?VEZTQjZDcGJJMWZjYTJQUk1KM0EzWFZtZy9ySitKZG0wa3A2Wmw2Sy9iOVRR?=
 =?utf-8?B?Ulp2WG5VODRSUU5VSTVJYzNXTDNaVTNobktDMkpvNXhWTjNuYlVpUUUwRUdk?=
 =?utf-8?B?WithWUdHTUMwQTZSY1NwN3hRMUd0OW41QkkrQ2t0R0NwQ1JOeFFYVm0xMlJo?=
 =?utf-8?B?RHp6ZHJ2MTFVY1RWNlFlYm4rNHRhaTV6bExDcCsxOUZHVXprTU0yR3JpdE5O?=
 =?utf-8?B?emVJcHptZnMvVGpRN1Y4c1IrYWtYN1Z4azhMUkoxcmYrOTIrV1FabHdWREto?=
 =?utf-8?B?LzZENnQ0S21IWnZFM1JyT3I3QzlmdzdkbUM3T0JUMlhyU0Exc05HMTUxSWNH?=
 =?utf-8?B?N0pmUVdqcjFPc0ErUXB2OGVUR1BZVjQzZXZIaUFCSkVwbmtORHNHMUZib0k2?=
 =?utf-8?B?dndqYUhuOExya1BRb3NZWlRFOGFuMUNUSU5zUWwyRnB1Y0lBNjdlTDVOZXlH?=
 =?utf-8?B?SndQMlg0WUxrQ1o5YlhpT09HL00ydCtpZGxyRlRNL2RnZm9rdVducGZyTlZu?=
 =?utf-8?B?SUYyeEl4bHRvbC9MVlBLZnkrMHUvVEpwdXkwSDZyQkVBL1ZKNG5WYWFvMFpw?=
 =?utf-8?B?ZGZjMFNTajZRSFpkbHI1d3JSQUtBd2llanNaWFJRdEM5b0QvWm9RVzFFWUlW?=
 =?utf-8?B?MzI5UXkySUliOWRUcjdNWFkrbDROQ20zTDAwbHNOeFJGcjl6SmEwRitmNk5r?=
 =?utf-8?B?M2sxaWprYVc0cmQ1Mm1IRXVNNS9BR0oxWit1YUdlemQxaTA4dElDeDlNZCt0?=
 =?utf-8?B?cEZMQmgyVU5wTysycUY0NVNCajFibU1KWWZCZ1BnaCtxSHJMWTdydkNWdVZl?=
 =?utf-8?B?N0VXT3k4a0tPVzN5bm5xRXVuN2w0b3pjUTdRVk51aE5SN1p3bldoeXhwY3pE?=
 =?utf-8?B?TUl1ZjY4clE2RDdnRHpNemtoaENCYXpWOEhQZENoZlM3OHBRaUlaQ1J0aWFX?=
 =?utf-8?B?OW5mNU1NLzVGRzdTWlRJUmR5OXhBN3laWTA0ZjdOZ1pYeDZQS0FTWUpQNWwr?=
 =?utf-8?B?d1ErdDFrcDJ1b1lxRDdUdWwyTUdIOGpsUWRRVzhRdDZ6aTdSYW84c3pPYXpU?=
 =?utf-8?B?OHpzbGszUmdJdFpsY0dmZG5jbldnYmNEcElKMHkxQlFEQWcyV1ptbFhicnNI?=
 =?utf-8?B?cXNKeHlBYlB5SW1wQVJKYTlkUGlTMnlWQ2lqd1ROdERrQUxvRm1ldm12NGJj?=
 =?utf-8?B?eGN3QmxybjZVSXl1a3RML1AxbjZKNzNZcFNRM2hiOGNUeDVGVFU0UW9BUHFS?=
 =?utf-8?B?U3ZCei91UEpPZzNyOEZCd2JjdDlDTVlXbHJRa2ovOUx6RnVHU0J2ams4QTM5?=
 =?utf-8?B?T3E2Sk9UdEFMd1IyMCtoZ1hJai9lOEM3MUxUYXdxb2JSRG9EenMwZEdJRE1R?=
 =?utf-8?B?aHpNSEE1cmVYeStPaTN3eGVydnVuY2syVDR6and3elRxTURXd29kQkVHQXVT?=
 =?utf-8?B?Z3FBM0kvcUZRcnFuc0lEb1gvSkluYUJiRlhHcVZYREVEL0ROQm1UV1U5eFJO?=
 =?utf-8?B?cXA2eVlZYlhuS2RlbzhqQzZQczZDT1JVMVU5MU5IM0FuelQrM1hSMmUvbkhP?=
 =?utf-8?B?ZzdJeHN2cGdDZzhyeGRFcGdrVXpPajRVK2FCMjBIZDBxRDAwdEd4L1RSYWNS?=
 =?utf-8?B?MWhvNVp5TUdjZnFmZVlnaVlLWUNNaTZBYkVvQjU4WDRQazNoZUJIN2hnUVFn?=
 =?utf-8?B?TFNsMFdreWpveDd2ek1jd2ZEeGdkZ2kyRHgxMDVEOFpnWlhDTWt3STQvSFZZ?=
 =?utf-8?B?bkRvdG5YTzJwUWtwNUpYRFFwNnhqTmtVTE1GSlNGeEw5ejVaeWlvWVY2enp6?=
 =?utf-8?B?MFBMUlQzQUJBbVVaaWZWZ2Y4NVo1bmZjWGpSUzRiVU1zNXlVamxNbVdoZnpW?=
 =?utf-8?B?aUMrTnBKQjBjR2hlMG5sclI4U29hbXU0NUVtNWNZdzljbjhjNm9yUUdFenNO?=
 =?utf-8?B?UXc1d1VUdUtMbEZDWFJYQjZLdlFrd2cxMGxvSnI4K0dBUnVPZ1ZwaDhWckNj?=
 =?utf-8?B?Yjh3OGx4bEs4K2Ewa3kwQTRaYXR4bHBtVHJqcGk0VTZxRlg4cUVqY252QzJw?=
 =?utf-8?Q?1Pv6hB0c4UG4NBlsfC4kOU8YZ?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 539178e1-c6fc-40ca-f69a-08db5830839c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 06:15:59.5014
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DubonVAOqVLRHDVcwnV7GS/NguDdB1LD8US9tQyXUp+Ra15ECqFlYGfnX+QjWO+UvZE4RVtgS9R4cz0Cmx2tJg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7987

On 17.05.2023 19:05, Andrew Cooper wrote:
> On 17/05/2023 3:22 pm, Jan Beulich wrote:
>> There's no real need for the associated probing - we can easily convert
>> to a uniform value without knowing the specific behavior (note also that
>> the respective comments weren't fully correct and have gone stale). All
>> we (need to) depend upon is unary ! producing 0 or 1 (and never -1).
>>
>> For all present purposes yielding a value with all bits set is more
>> useful.
>>
>> No difference in generated code.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Unlike in C, there's also binary ! in assembly expressions, and even
>> binary !!. But those don't get in the way here.
> 
> I had been wanting to do this for a while, but IMO a clearer expression
> is to take ((x) & 1) to discard the sign.
> 
> It doesn't change any of the logic to use & 1 (I don't think), and it's
> definitely the more common way for the programmer to think.

Well, I can certainly switch. It simply seemed to me that, given our many
uses of !! elsewhere, using it here as well would be consistent.
(I did in fact consider the ... & 1 alternative.)
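To make the tradeoff concrete, here is a small Python model of the two
idioms under discussion (Python stand-ins for the assembler expressions,
not the actual Xen macros): unary ! yields 0 or 1, so -(!!x) gives 0 or an
all-bits-set value, while ((x) & 1) first discards everything but bit 0,
which also covers assemblers where "true" is -1.

```python
# Python model (not the real assembler code) of normalizing an assembler
# truth value -- which may be 0/1 or 0/-1 -- into an all-bits-set mask.
MASK = (1 << 64) - 1  # model 64-bit two's-complement arithmetic

def via_bang_bang(x: int) -> int:
    """-(!!x): !! maps any non-zero value to 1, then negation sets all bits."""
    return -int(x != 0) & MASK

def via_and_1(x: int) -> int:
    """-((x) & 1): keep only bit 0 (discarding any sign), then negate."""
    return -(x & 1) & MASK

# For genuine truth values (0, 1, or all-bits-set -1) the two idioms agree:
for truth in (0, 1, -1 & MASK):
    assert via_bang_bang(truth) == via_and_1(truth)
```

The two only diverge for inputs that are not truth values in the first
place (e.g. 2), which is exactly why either spelling works here.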

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 07:09:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 07:09:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536699.835300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuEQ-0000ph-Tx; Fri, 19 May 2023 07:08:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536699.835300; Fri, 19 May 2023 07:08:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuEQ-0000pa-RN; Fri, 19 May 2023 07:08:54 +0000
Received: by outflank-mailman (input) for mailman id 536699;
 Fri, 19 May 2023 07:08:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9j/4=BI=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pzuEP-0000pT-7o
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 07:08:53 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20623.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::623])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id feae246a-f613-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 09:08:46 +0200 (CEST)
Received: from MW2PR2101CA0003.namprd21.prod.outlook.com (2603:10b6:302:1::16)
 by CY8PR12MB8244.namprd12.prod.outlook.com (2603:10b6:930:72::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Fri, 19 May
 2023 07:08:42 +0000
Received: from CO1NAM11FT003.eop-nam11.prod.protection.outlook.com
 (2603:10b6:302:1:cafe::c0) by MW2PR2101CA0003.outlook.office365.com
 (2603:10b6:302:1::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.7 via Frontend
 Transport; Fri, 19 May 2023 07:08:41 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT003.mail.protection.outlook.com (10.13.175.93) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Fri, 19 May 2023 07:08:41 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 02:08:39 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 02:08:39 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 19 May 2023 02:08:37 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: feae246a-f613-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TJrZN+yVT+HLYIuQcXuyUxxYyMr5i7BAXTPCdRBWfVx/ZXOANE/MZurUzDGL9h1QNNkFcPCUKCCbgUrAY0D2yBB4FCgcc2USnG97c8Mg6KleTh2rY2twJe1GgB0A4rC+b38cxciMujT2ko/z6aJLaSsrRlwLyTJowRVfkst4iT5HJ+fbXXeWgi6QK/MLOv/S2g3+LoK9UG26PDJ8FhlcYwZtY8JtlcH8MrW3OlgcRMVz+vmfljaL4xgljXfR5NpEFX+Ez9czhRgo4C/zv3u2TNdciMxwIWVAwXtLPI90hKT+kj+t4BlZqW7wQg+o1mNab9BH14t2+kU5NrqnyF7fhw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/b4M/CxF91kxc/f6hPCJkHkDP4+LP+C59xmPdLzvC8o=;
 b=Zq3HNYv2/tBok8SWjjdBcAAMIe0/QOXqqq8oW94YXL+HGfpiA60dc3ClVS2lPCcB3FebzMzRZnkJyBPmYUdyQCvfP1+vpRmna9W9uNxFmu30YbT5iqse/p2SznUkh8ssMgMXn4Ll52uBqdxvSjygwOP/bFhe+Pforvi63b2vw/V4E0c+eryzdb6CYiAT7pT/edL1tsQVYm3QsfyOX7Rb3rvTWomR5wh2GrkcH+AS9bWYfDBWiMH2MNGOSDEz0awip5vLwSEJNys71U7HpBk+qPjj6IPXASUqeqx1XwgMFY1HBGVlDzVppGY0lreTfYGYaSvxEuUzgzOUcrFFGoGWPA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=kernel.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/b4M/CxF91kxc/f6hPCJkHkDP4+LP+C59xmPdLzvC8o=;
 b=zi2WQ1mpBHPdtCDq8jk46NFMOQf05cUwSBUKz2blO7Yu/JcJ37PcBLdTsHDZILsdy6Zj95IktUcJ6EvFt6h1MXS7Zyw4ipCM7MkyCh8qCZ2tutMZ9LQLnNQlyTCIbKtbx2qhBRjDPg6JKTay4I6gN1aYFamifX8it9WzPo3YheM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <a0d6197a-53e8-0121-c7e0-ddbdaf970c7e@amd.com>
Date: Fri, 19 May 2023 09:08:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 3/3] xen/misra: xen-analysis.py: use the relative path
 from the ...
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>, Luca Fancellu
	<luca.fancellu@arm.com>
CC: <xen-devel@lists.xenproject.org>, <bertrand.marquis@arm.com>,
	<wei.chen@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>
References: <20230504131245.2985400-1-luca.fancellu@arm.com>
 <20230504131245.2985400-4-luca.fancellu@arm.com>
 <alpine.DEB.2.22.394.2305161743520.128889@ubuntu-linux-20-04-desktop>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <alpine.DEB.2.22.394.2305161743520.128889@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT003:EE_|CY8PR12MB8244:EE_
X-MS-Office365-Filtering-Correlation-Id: 9b5ed2ac-5270-4895-510d-08db5837e0aa
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WZl57d7RHMhp4IP15j9MjQj/a30dWoHZeHnjmDatinTTuxBlhIxtzUEh5+cSMNcEx9TcOyCd4+8ueZACLRfFLSdLChZI+6Ro7vUrtCrfnSJmPGuZyYI2VMzyVPz49aSBVaXmLYpiLCMG4xnhzOzognaBRNkJaHxaNvWr8VL/t+1adCR+OLsH9Oqs/yvWWehlyi8DiLedFNb22vAyZW78MTaopt1b8m+7OulymuGY2yiT+MYwTXGlrreKBogGlam08JI2tEGuYtPqNepI55VSn1JVs4IAANab4+nXwoKw0rY+kKZEi9nyPoXITVVqBNTCYFR3LLuvrbCMRCQ5YeBhIwdzEhh1NkSuCaAhsRmDCxF5vyJAn9D8/DEyINBCod0R5sPnNYydGn9Wj0H4IwMkH5Lar6/zwTf28b7AwEC0N+JholWtrfr+Z8fX1zKjGhbs0Q7T8I7McGlmv+7G7OggLaEKEH4iK6taaFkyaG7A7zZtTavUIGB4a0JAMKQxVuNxVZTIdj9r007xGBHZZ7bA6aqIVuQ9ApUNy6aActUlhL79z2DLVBE/ZQ97AsrAQSvOycNTd+EDJwT3+X1A+QKpprfAZlhdfC9Yh83+ggfhxjIJ/Bjk8NlQBbH2bsvY1QGHG1E3FipQTzgRUpvasSUJNID96sh91XDnCTVHWZcRExOW4Uw6n6i6OJPAcFO2QU7AbEHUblSDftv2yal6BnzRwoDEUSVt0zhYDuXuj0AbHZ0bqzgwpsWd3RWGUGsKNa9ZqU4g22bNODbR9zH941qy57oDJP4f5CWL2vq8OmGXWbQrwKoSimFiE83zGpQNK9Zw
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(346002)(376002)(396003)(39860400002)(451199021)(36840700001)(46966006)(40470700004)(81166007)(6666004)(478600001)(356005)(86362001)(40480700001)(82740400003)(31696002)(36756003)(336012)(83380400001)(426003)(53546011)(26005)(186003)(316002)(40460700003)(2616005)(47076005)(36860700001)(44832011)(82310400005)(8936002)(31686004)(2906002)(7416002)(41300700001)(8676002)(5660300002)(54906003)(16576012)(110136005)(70586007)(70206006)(4326008)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 07:08:41.7198
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9b5ed2ac-5270-4895-510d-08db5837e0aa
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT003.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB8244

Hi Luca,

On 17/05/2023 02:44, Stefano Stabellini wrote:
> 
> 
> On Thu, 4 May 2023, Luca Fancellu wrote:
>> repository in the reports
>>
>> Currently the cppcheck report entries show the relative file path
>> from the /xen folder of the repository instead of the base folder.
>> To ease cross-checking, for example when comparing a git diff
>> output against the report, use the repository folder as the base.
>>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>

I know this patch is now committed, but there is something confusing here.
At the moment, the cppcheck report has paths relative to xen/, e.g.:
arch/arm/arm64/lib/bitops.c(117,1):...

So after this patch, I would expect to see paths relative to the root of the repository, e.g.:
*xen/*arch/arm/arm64/lib/bitops.c(117,1):...

However, with or without this patch the behavior is the same.
Did I misunderstand your patch?
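In other words, the behavior I expected from the patch description amounts
to something like the following sketch (the function name and the "xen"
prefix are illustrative only; this is not code from xen-analysis.py):

```python
import posixpath

# Illustrative sketch only -- not taken from xen-analysis.py. Rebase a
# cppcheck report path that is relative to xen/ so that it becomes
# relative to the repository root, as the patch description suggests.
def rebase_report_path(path_rel_to_xen: str) -> str:
    return posixpath.join("xen", path_rel_to_xen)

print(rebase_report_path("arch/arm/arm64/lib/bitops.c"))
# -> xen/arch/arm/arm64/lib/bitops.c
```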

~Michal


From xen-devel-bounces@lists.xenproject.org Fri May 19 07:12:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 07:12:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536705.835311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuHy-0002IK-GJ; Fri, 19 May 2023 07:12:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536705.835311; Fri, 19 May 2023 07:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuHy-0002ID-DS; Fri, 19 May 2023 07:12:34 +0000
Received: by outflank-mailman (input) for mailman id 536705;
 Fri, 19 May 2023 07:12:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzuHx-0002I1-A1; Fri, 19 May 2023 07:12:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzuHx-0003OI-5t; Fri, 19 May 2023 07:12:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzuHw-0003I9-SE; Fri, 19 May 2023 07:12:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzuHw-0004Bl-Ro; Fri, 19 May 2023 07:12:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QnEEne9b5HVoK/U0fuC+wXJ1fMjwQ8Jx6BDMvCRbKPI=; b=EmCW2cN9Fn5y2XUouDB6krQmoa
	SSAENo3C8yXkQZK37nMkyaqDNzhehWmJCp+F+4uLUBZu6lC1vTV4FbfKJoEnjncbt/Ej4RzPBtRpD
	4mYHpzq3W8mw/j+1mUUbGPz64tpUUeaeXRwEPg0EALgurFboKvH2eN5vOgzRF7zC8/r0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180701-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180701: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    xen-unstable:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
X-Osstest-Versions-That:
    xen=42abf5b9c53eb1b1a902002fcda68708234152c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 07:12:32 +0000

flight 180701 xen-unstable real [real]
flight 180719 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180701/
http://logs.test-lab.xenproject.org/osstest/logs/180719/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 180696
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 180696

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1  14 guest-start         fail pass in 180719-retest
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail pass in 180719-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 180719 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 180719 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180696
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180696
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180696
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180696
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180696
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180696
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180696
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180696
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180696
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a
baseline version:
 xen                  42abf5b9c53eb1b1a902002fcda68708234152c3

Last test of basis   180696  2023-05-18 01:51:57 Z    1 days
Testing same since   180701  2023-05-18 17:09:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 753d903a6f2d1e68d98487d36449b5739c28d65a
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed May 17 05:57:22 2023 +0000

    automation: allow to rerun build script
    
    Calling build twice in the same environment fails because the
    directory 'binaries' already exists after the first run. Use
    mkdir -p to ignore an existing directory and move on to the actual
    build.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
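
[Editorial sketch: the commit changes a shell script to use 'mkdir -p'.
The Python equivalent, os.makedirs(..., exist_ok=True), illustrates the
same rerun-safe semantics; all names below are illustrative except
'binaries', which comes from the commit message.]

```python
import os
import tempfile

# Work in a scratch directory; 'binaries' is the output directory the
# commit message refers to.
workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "binaries")

os.makedirs(target)                 # first run: creates the directory
# Calling os.makedirs(target) again would raise FileExistsError, just
# as plain 'mkdir binaries' fails on a rerun.
os.makedirs(target, exist_ok=True)  # rerun: succeeds, like 'mkdir -p'
```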

commit 816d2797468dbcc8a3d23f67592b06929f67b2ab
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue May 16 15:41:27 2023 +0000

    automation: update documentation about how to build a container
    
    The command used in the example is different from the command used in
    the Gitlab CI pipelines. Adjust it to simulate what will be used by CI.
    This is essentially the build script, which is invoked with a number of
    expected environment variables such as CC, CXX and debug.
    
    In addition, the input should not be a tty; this disables colors
    from meson and interactive questions from kconfig.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit bdf48bf170bf1257b236b8f467d3de6c3adb9608
Author: Stefano Stabellini <stefano.stabellini@amd.com>
Date:   Thu May 11 16:22:37 2023 -0700

    docs/misra: adds Mandatory rules
    
    Add the Mandatory rules agreed by the MISRA C working group to
    docs/misra/rules.rst.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Tested-by: Luca Fancellu <luca.fancellu@arm.com>

commit b046f7e374893dd0eadc84d7010f928ea7e8fcf2
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 4 14:12:45 2023 +0100

    xen/misra: xen-analysis.py: use the relative path from the ...
    
    repository in the reports
    
    Currently the cppcheck report entries show the file path relative
    to the /xen folder of the repository instead of the base folder.
    To ease checks, for example when comparing a git diff output with
    the report, use the repository folder as the base.
    
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
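
[Editorial sketch of the path adjustment described above; the helper
name is hypothetical and this is not the actual xen-analysis.py code.]

```python
import os

def report_path(abs_path: str, repo_root: str) -> str:
    """Return abs_path relative to the repository root, so a report
    entry reads xen/arch/... rather than arch/... as before."""
    return os.path.relpath(abs_path, repo_root)

# e.g. report_path("/repo/xen/arch/arm/arm64/lib/bitops.c", "/repo")
# yields "xen/arch/arm/arm64/lib/bitops.c"
```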

commit 069cb96fbd7595c80bf2af6a06454ce5c732721e
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 4 14:12:44 2023 +0100

    xen/misra: xen-analysis.py: allow cppcheck version above 2.7
    
    Allow the use of Cppcheck versions above 2.7, with the exception
    of 2.8, which is known and documented to be broken.
    
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
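
[Editorial sketch of the version gate described above; the function
name is hypothetical and the real check lives in xen-analysis.py.]

```python
def cppcheck_version_ok(version: str) -> bool:
    """Accept Cppcheck 2.7 and above, except 2.8, which the commit
    message notes is known and documented to be broken."""
    major, minor = (int(part) for part in version.split(".")[:2])
    if (major, minor) == (2, 8):
        return False
    return (major, minor) >= (2, 7)
```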

commit 45bfff651173d538239308648c6a6cd7cbe37172
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 4 14:12:43 2023 +0100

    xen/misra: xen-analysis.py: fix parallel analysis Cppcheck errors
    
    Currently Cppcheck has a limitation that prevents using make with
    a parallel build while running a parallel Cppcheck invocation on
    each translation unit (the .c files), because of spurious internal
    errors.
    
    The issue comes from the fact that, when using the build directory,
    Cppcheck saves temporary files as <filename>.c.<many-extensions>;
    this doesn't work well when files with the same name are analysed
    at the same time, leading to race conditions.
    
    Fix the issue by recreating, under the build directory, the
    directory structure of the file being analysed, to avoid any clash.
    
    Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
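
[Editorial sketch of the clash-avoidance scheme described above; the
helper names are hypothetical, not the real xen-analysis.py logic.]

```python
import os

def cppcheck_build_subdir(build_root: str, source_file: str) -> str:
    """Mirror the source file's directory under the build directory,
    so two files named e.g. bitops.c in different folders never share
    Cppcheck temporary files."""
    subdir = os.path.join(build_root, os.path.dirname(source_file))
    os.makedirs(subdir, exist_ok=True)  # like 'mkdir -p'
    return subdir
```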
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 19 07:13:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 07:13:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536708.835321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuIP-0002k5-Qz; Fri, 19 May 2023 07:13:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536708.835321; Fri, 19 May 2023 07:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuIP-0002jy-MT; Fri, 19 May 2023 07:13:01 +0000
Received: by outflank-mailman (input) for mailman id 536708;
 Fri, 19 May 2023 07:13:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pzuIN-0002hC-Qe
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 07:12:59 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2061d.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 94c04fa1-f614-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 09:12:57 +0200 (CEST)
Received: from DUZPR01CA0326.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4ba::25) by AS4PR08MB8168.eurprd08.prod.outlook.com
 (2603:10a6:20b:58f::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.18; Fri, 19 May
 2023 07:12:50 +0000
Received: from DBAEUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4ba:cafe::d0) by DUZPR01CA0326.outlook.office365.com
 (2603:10a6:10:4ba::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 07:12:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT060.mail.protection.outlook.com (100.127.142.238) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.19 via Frontend Transport; Fri, 19 May 2023 07:12:50 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Fri, 19 May 2023 07:12:50 +0000
Received: from 8daee9c73488.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1094027E-E627-439B-9FFC-A350D399DFF7.1; 
 Fri, 19 May 2023 07:12:38 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8daee9c73488.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 19 May 2023 07:12:38 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB6069.eurprd08.prod.outlook.com (2603:10a6:20b:29c::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 07:12:36 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 07:12:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94c04fa1-f614-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zACNNrTEZFcme8uOsvJd1/loasiAK/ZPNOiFKwUi3dA=;
 b=xoDkWLfsGlUGJPlGenpotuWP4iFHDIBwPGDKVQcc/kw/T2jEHY54AtvUZ5is/Zk8ZiGbxSiOBAc37YCsY+uiathPUQ9nsNm12kbinySYakWrvdq4iwXGQbxcC9C3M5KTnonkuUDtQ7XSVYMWb9vG2ESNzb6cB8SU98+y/gKXc5I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6e696b46e8e509af
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Xen-devel
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 3/3] xen/misra: xen-analysis.py: use the relative path
 from the ...
Thread-Topic: [PATCH 3/3] xen/misra: xen-analysis.py: use the relative path
 from the ...
Thread-Index: AQHZfooyuAY6OgOAbESkC2BSWxnO0K9dtEiAgAOQFACAAAETgA==
Date: Fri, 19 May 2023 07:12:33 +0000
Message-ID: <B087CAA6-0DCD-48C8-8199-A328BDA649A8@arm.com>
References: <20230504131245.2985400-1-luca.fancellu@arm.com>
 <20230504131245.2985400-4-luca.fancellu@arm.com>
 <alpine.DEB.2.22.394.2305161743520.128889@ubuntu-linux-20-04-desktop>
 <a0d6197a-53e8-0121-c7e0-ddbdaf970c7e@amd.com>
In-Reply-To: <a0d6197a-53e8-0121-c7e0-ddbdaf970c7e@amd.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB6069:EE_|DBAEUR03FT060:EE_|AS4PR08MB8168:EE_
X-MS-Office365-Filtering-Correlation-Id: de3e8a14-d931-415b-73ff-08db583874ab
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <5A54DD60D2946243A6D4F2239976A99D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6069
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5f9e36e7-ff02-48e5-3d13-08db58386ac0
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 07:12:50.1863
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: de3e8a14-d931-415b-73ff-08db583874ab
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8168

> On 19 May 2023, at 08:08, Michal Orzel <michal.orzel@amd.com> wrote:
> 
> Hi Luca,
> 
> On 17/05/2023 02:44, Stefano Stabellini wrote:
>> 
>> 
>> On Thu, 4 May 2023, Luca Fancellu wrote:
>>> repository in the reports
>>> 
>>> Currently the cppcheck report entries shows the relative file path
>>> from the /xen folder of the repository instead of the base folder.
>>> In order to ease the checks, for example, when looking a git diff
>>> output and the report, use the repository folder as base.
>>> 
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> 
>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> I know this patch is now committed but there is something confusing here.
> At the moment, in the cppcheck report we have paths relative to xen/ e.g.:
> arch/arm/arm64/lib/bitops.c(117,1):...
> 
> So after this patch, I would expect to see the path relative to root of repository e.g.:
> *xen/*arch/arm/arm64/lib/bitops.c(117,1):...
> 
> However, with or without this patch the behavior is the same.
> Did I misunderstand your patch?

Hi Michal,

Thank you for spotting this. During my tests I was using xen-analysis.py so that it
calls the makefile with an out-of-tree build; after your mail I found that, when it
calls the makefile with an in-tree build, cppcheck is run from /xen/xen, which causes
it to produce paths relative to there in the TXT fragments, showing the issue you
observed.

I have a fix ready for that and I'll push it soon.

> 
> ~Michal


From xen-devel-bounces@lists.xenproject.org Fri May 19 07:23:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 07:23:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536719.835337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuSY-0004Tt-W4; Fri, 19 May 2023 07:23:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536719.835337; Fri, 19 May 2023 07:23:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuSY-0004Tm-SH; Fri, 19 May 2023 07:23:30 +0000
Received: by outflank-mailman (input) for mailman id 536719;
 Fri, 19 May 2023 07:23:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzuSX-0004Td-Nz
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 07:23:29 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2071.outbound.protection.outlook.com [40.107.13.71])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0be520cf-f616-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 09:23:27 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6876.eurprd04.prod.outlook.com (2603:10a6:10:116::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 07:22:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 07:22:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0be520cf-f616-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YZ+UKFB4K9aCPllreBIfm+fAymFgDdZOO5V1ZCFBbtHw5SvaA6UedyfSBsI7AfcJsP9wJ1osuutJ42aU3jcPdV3R2c4/I5CukaJoLyV6igBZtvg0PcPiybyzNleRWtuj9XcsO0MY59rqbu86uD6FgHULUL3MJ002ATTB+0GmFHG+0LO+JNtrMp+ymol8D4hf8hSy8OvT8/k+xYpaWQEhWSJyVTPXNEqthCGlqlU+y0+cNkZw2IPOVPz3rS6lyuC44iP1iyYowrZW8XUngj3rK2QuC/VS4EcnVO3c4vwEOrEGMgVcN7Jrnp7japHqWcTLD6/1rREgaPtWhDCGHGvfNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XBsY3zEC5jAZo3joefLdxCqspcjjKyANbBsqhEwAJug=;
 b=Htax4CMDDTUhlamdUYeIuuCmNOMYYgYHDyV4ACUnfanijxSlKkJVd4Qqt5TwKzIay74EpOlxj/Fl6eSBF1O8Rascl5Bfi2R3FktKvhxnDL78ok1FeJGVuqQVqas6LLhMcQPaYQDEItZGlaswYdRGTz3bYYHJ+WV3DM6oBAPMucu7/wfKbDUZXxL8ekzB26bNjHcR7u0Zon2zTwj0eSe2CxCloeXjm7cMdpdfVkx0BBq571WrAsMo08VEoyrRgJc3N0yA4mKMAhcyfTaG5TEsDAOkiJsJ0pLj42Ua3Q3r5E7KVgRr2Jp/w+LVnDSlbCTa85zaJkJ3cGlYcJXe2vzKCg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XBsY3zEC5jAZo3joefLdxCqspcjjKyANbBsqhEwAJug=;
 b=QBQvnEkU2OtP8gi5k6k6AIRDr73uwwLU0UD7Lk7VfmYnrXmIL31R8ra1CE7aeo3h6Ym3dey1ZAOJRXMVEaEfihX7Ahiq+IMTWQNu8gk1SJj7RxOvAOOCW6dvjSRwJQAhhI3/EbWEpwGjwFma40igjiaX5wlyA7d5ZRMJB02YpO8rGrHUtp3OMg/PeoVZg/vz/v7ejSICHCdgw5fhhfGm6WHcRntp+x6A6GBuxa4Quflph2mn6TG5c1y4ROHonQ0RNzpBuOMogPocxG3ahYYZnpNsfxJUALnYWivwai16DwHJzPtxjY/0f4GpQPL7IECeKTNyMeUBX9QIdJMqmz3r7w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b411f7aa-7fd2-7b1c-1bcd-35b989f528b6@suse.com>
Date: Fri, 19 May 2023 09:22:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: PVH Dom0 related UART failure
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 marmarek@invisiblethingslab.com, xenia.ragiadakou@amd.com
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0018.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1d::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6876:EE_
X-MS-Office365-Filtering-Correlation-Id: d2bbdbce-594d-4121-4655-08db5839ded5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UhfjMlLfg+3RVzxMAYGJ1j/hkwoMuyYucWeWZcX01brVdE4WW9qRtBB8yZmEuQi5avwz0Czc9ybqwK+S5DgtLNHwxy0atIyaHyRwot0w845hqEdlPcmpZ2xBKyv/z7RYJt3VbhYeFueR47C+ZIU5hMQp2TpEcKibwByfdyUCpYDB019p8zZYoN/sb5Kn/9n7AN0HSWvuHZOoShZzXrTyDINInlw+GsDF3GoYZvDNKEQyagRHpZOuwoYVRFGXoib2oWmQ6jQYJkP9lTregHW5aiKQMcBzd7rzMe2YMHedjLam6X3i/6myi0XIvqO6urM5ilW+K9qnNBegbkLU2D6us8Wcki0IpPiXBdO6p6bmHmJJM3vAsuOK8OosyPDeGPAmkrMDuC94qceNIYmvdGUFSvMv/54+w6BHonmhW6WT+lEA+DSov3CHMgws/LhuvAnUzAzUgz8uu+u8q8WCkv5kgjOqrte81gWQ+GEwh9JxyGA5cxlNwqxJOJGdgzAC1DRWzskfxDnp+5OzhYvJdV5Ys4Kc0dsj8P5AByt54YicZx2An05p9y+orS1ge+7Y27HUvx5SUYpCpTJNwbxnAFLNahHATFc7NVZXzWyowL7k/vpySqHJrhdejUzaKsWlRnpdgp4mAUMARI4HbfYvKKXORg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(39860400002)(136003)(376002)(346002)(396003)(366004)(451199021)(31696002)(86362001)(4326008)(36756003)(110136005)(316002)(966005)(66946007)(66476007)(478600001)(66556008)(6486002)(8936002)(2906002)(8676002)(5660300002)(38100700002)(41300700001)(53546011)(6506007)(83380400001)(6512007)(186003)(26005)(2616005)(66899021)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TkFwNS8zSG9aYjZqME13alNhaVJsa2szaWx6MHdSQlVlUjhDUHlFSm5id096?=
 =?utf-8?B?NDFlMDZXbFFuaFp3bVhTU1kwakZvdGhMdHF0T3ZQRnZDL0poeU85SE1vSmlt?=
 =?utf-8?B?WWNTSzZiTXVoTnJJUlZBUUd6VTg0OUwyRU4zcUdyZkxUZmFmYm1HNTFkNytY?=
 =?utf-8?B?Ulc3SFJBTlBnYlgzN1l1M0hPQmhpdU9QbkdPTC9CbDlpZ2Vqc000WktDdXVT?=
 =?utf-8?B?c1BuMFVlY1NOblJWTjV0bHBOTWNMUDJIamFVTjErei9RRVdZbW5KTEVqNnNh?=
 =?utf-8?B?TE5WMU5wM3NWNVBFcS8wRjIyQmNmQzRpUnVaSExvekI5aVo2QVplMU1obk1X?=
 =?utf-8?B?MTAzT0p2M2p2ditScU01QmpMc1ByUGlLYnd0Z3ZYbEhtYmV1VWFkeU9QSUNG?=
 =?utf-8?B?em1rQkRPVk5Ea1hFSEpNbC9jMUdVckwrSmI2MWhkMzFzU1JoOW43c2NTRVBl?=
 =?utf-8?B?TVhuY3hrSGJzWDdOWVV5R203NEw1UGEvaVFQK2lUQzVGSDFRR2IxZG5oV2lH?=
 =?utf-8?B?YnByemlxQWZiR1hNa1pwT1JVelY1SlFYWjhjbk1OQjF0ei9QS2FBMlVLTmpz?=
 =?utf-8?B?YUlTcHgwWFFyY3hKNXQ4ei8rU0lmSlR3REZpRzgzNTZ0a2JHQ2p3NTUxbG5O?=
 =?utf-8?B?NysraFpMdnp3QjlBRXcxSHM1c3Izbk4zbXlwd2pkeUtlQUJnRkgvK28zN3lT?=
 =?utf-8?B?ZCszNzFFY2xNbFgxZXVOUDRSK1BXK0lwRXBhSlZua3NZVGkzZWxQOXNXV0lO?=
 =?utf-8?B?eFdjMHUyWEk0RUpvcjZoTXNWcitmb2Rtb24wYjhuMHBXN1hHUWN5Q2tNVmdL?=
 =?utf-8?B?MUl4NnVhV0dCdHlidGVtd2VNMHkzWjdtZ1VWbWlWSitSOVFMaGN3cEFjSzNK?=
 =?utf-8?B?SGdXV1dQYXpDaWtZanhTbFhYekVhalRtd0JXcTdWVmx4N1BhemRpZVM5ZlNx?=
 =?utf-8?B?OEpNU1ZLZGgvSGFMZnZnRGsxV3BJa1c4Z0dla01ZV0lQTWdKdmtudVVSOEhu?=
 =?utf-8?B?b2EvekxUMUNITHRSdHhRM0FHY0tnNUcvWjJGaEpQOHA0Qkp1d3I1bk85YXFX?=
 =?utf-8?B?RFg0OWtuOGNRN2JsTks3bmZ3eUxDYmdJcTU4eTI3bTZ0bmZuZUR0cDVoNHJq?=
 =?utf-8?B?cnlnaktvdWFzcjdDUGdhZ0VDa1FpSjBzejdTRTA1Q2RJQVUxL0pXc2NxVDFt?=
 =?utf-8?B?V1hYaWRyQjRVbms3WlhHVi90SzZibmVXYWtDNDl1M2pObGxoRmJJRWVwQitY?=
 =?utf-8?B?OFdCUnREek5HaVlEQyttVkxHVWlaazVVSUpFZFVkOW9ZeDI2ZXdZaGdzVmg0?=
 =?utf-8?B?Z2x0OFFNbVhBaEdUSjNXMjF0REVlN0h1ZVNOSEE0Yis5dEJOdllJeXpndGVi?=
 =?utf-8?B?K0JNcGM1VWUzdDY5MkJ3ejB2VGxyZzQ3ek9TdkZLK29wbFhjVVlVQ05tRzFU?=
 =?utf-8?B?N0Q0QUNrc1BBTUFDMXFLR1FHVGxMUVNTeDdpSDZwY2RGM0NQTCtQZVhveWxl?=
 =?utf-8?B?YnltZ201Ky9ybWtDNUJwaDNrVnFocXRPMVFEaThXLy94Z3pJTjNSQk1zcEpo?=
 =?utf-8?B?cURwdmt0dW1DMUh6TmNGKzBjdFJoNUpXS0VhOUxERlpRbTdIdHpnT0xEbUc0?=
 =?utf-8?B?VXhERHhuZjFZQVhXbmxzRktXdkFkaHVZdkFDSVAwMHc2aGFPQjdVSXNsQklK?=
 =?utf-8?B?OVNPWG9tRy9kZ09PaE1EUlhaTUo4bVNIVzc0dGRqN2ZYTFhpaWJIUkZSbUZy?=
 =?utf-8?B?QUI1UTJvaXA4amx5MTluOXBFZ0VGU1ZJSFJIVXlFcjJ6N2N4NFU5WkxENmJz?=
 =?utf-8?B?b3F5UG5DQTN6MS9UaENwOFhCUHZka3VRdkZYVEFDRjBzeWJLMElNUHIwOVV0?=
 =?utf-8?B?ZHM4eEZ5SWJIM3YwUGpJcXEwd2liT2paTmYrK2hBK0k1WTNKOEtkUUI1L0V5?=
 =?utf-8?B?NFh4V2orQXFjcjNlTjhyWVBQeDJ2Vm4wMUZWSjVBeklDa3lyVDI3VktkMFVs?=
 =?utf-8?B?NWV5MEZFV2ltTWRiU1ZUOExPczdIY1AveGFIRDFqOHBXUExodGJLdTRRUktk?=
 =?utf-8?B?OW1uVVZWaVNTeS94K2xtQlJ0Z2l3ZXJ5VFFqaE15OTNDcUJHbWFsTGpqN0lq?=
 =?utf-8?Q?ay2AYYjy8KYqxSzEqBTjotTql?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d2bbdbce-594d-4121-4655-08db5839ded5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 07:22:58.0296
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KB3JuqdI9iSD/UBiA0BEAtEy4m6rFb13ul7sxrIDFLJwtE4kVUfM5fNbhfkY7LS6ASu9lFmjyjUg9bHf9J5xRw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6876

On 18.05.2023 12:34, Roger Pau Monné wrote:
> On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
>> I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
>> test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
>> Zen3 system and we already have a few successful tests with it, see
>> automation/gitlab-ci/test.yaml.
>>
>> We managed to narrow down the issue to a console problem. We are
>> currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
>> options; it works with PV Dom0 and it is using a PCI UART card.
>>
>> In the case of Dom0 PVH:
>> - it works without console=com1
>> - it works with console=com1 and with the patch appended below
>> - it doesn't work otherwise and crashes with this error:
>> https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> 
> Jan also noticed this, and we have a ticket for it in gitlab:
> 
> https://gitlab.com/xen-project/xen/-/issues/85
> 
>> What is the right way to fix it?
> 
> I think the right fix is to simply prevent hidden devices from being
> handled by vPCI; in any case such devices won't work properly with
> vPCI, because they are in use by Xen, and so any information cached by
> vPCI is likely to become stale, as Xen can modify the device without
> vPCI noticing.
> 
> I think the chunk below should help.  It's not clear to me, however,
> how hidden devices should be handled: is the intention to completely
> hide such devices from dom0?

No, Dom0 should still be able to see them in a (mostly) r/o fashion.
Hence my earlier RFC patch making vPCI actually deal with them.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 07:26:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 07:26:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536723.835347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuV8-00053x-D5; Fri, 19 May 2023 07:26:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536723.835347; Fri, 19 May 2023 07:26:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuV8-00053q-A4; Fri, 19 May 2023 07:26:10 +0000
Received: by outflank-mailman (input) for mailman id 536723;
 Fri, 19 May 2023 07:26:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9j/4=BI=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pzuV7-00053k-2X
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 07:26:09 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20611.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::611])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 68fa1211-f616-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 09:26:05 +0200 (CEST)
Received: from BN8PR12CA0014.namprd12.prod.outlook.com (2603:10b6:408:60::27)
 by PH8PR12MB7422.namprd12.prod.outlook.com (2603:10b6:510:22a::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 07:25:58 +0000
Received: from BN8NAM11FT085.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:60:cafe::96) by BN8PR12CA0014.outlook.office365.com
 (2603:10b6:408:60::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 07:25:57 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT085.mail.protection.outlook.com (10.13.176.100) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Fri, 19 May 2023 07:25:57 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 02:25:57 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 00:25:57 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 19 May 2023 02:25:55 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68fa1211-f616-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hwOtdoPugVnA9wPpxb0XUWgOP1GWj89wlOHLQJMmq0kZEedGPczU90Af0sYkkaZIjSA0tGB+rWHNWX8UWBWRe9aVpFAx5eaOcW5touZKYWFgX1jVXGyAV2FSpxM8rT8O5PjOkv+KiZC70Hy62HGnYkkt/OHUS0UX3f0KOqkaIZc9wDrcBp3Y0dZg+ZXTJSylIIGVb19GZpVn6PgRGPa7yOyfdtqy3NGEbigZ5jsrTGWElaPP/I4NRz07mZUxQ/WMrJCX5PpHRbmVBtRbgutDEL0urzhhLSE/2eRl7gvCq29pfPWF00lJ7vJKN80l8ayKpJmI5bWUcO6Kl6oioowwNg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kb0a18HqGbaJZ9tZ/7ciRPB+FTXd7WBltR8R5VrxBZA=;
 b=eqpANoGtqVTT3lZHfLJHbpODAaFkkzfP2G2leuSf60kpl6JsswsbAtgDOu14cb7udgWGXXi5y9KzOtka3ddIM5jzrNbfxxUUK1JCTdaGC8mU+FPRGETGZCFBq4YgCkKSIsE0JtEynDehYVD3nSYXFjHzU7URK1ZEFtbPsoeHiyFnY+Sq1CeXZw67OWD4Yphm+XbBGhMOJxJ+CibQKMWyA3iGssKLfOqZHsqIS9McJnVKve/7KvBygZ1puCTnnH5BqV3ClYLAV+e+RE45+FFmrJ3ol93VPP2mxUeXKwNswhBmtnLq6GYwp8El9ntTmN8YPMKbHYpYdr7aJpWmQozbeg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kb0a18HqGbaJZ9tZ/7ciRPB+FTXd7WBltR8R5VrxBZA=;
 b=47xEyRzhy2vw2yuEGT4IOLCIl98kHJyxDlaJvAc8/Ti4YVqik1j8tKGe+WesUEo3mMRGCbYnWvo0+abR9G4SQL8keqL6Ynfafv4j0slNEhEllOy3hy9wFccSF5rK3TsAi2SWsnxiNDumx7RjT4Cb+c2On7iXV6M5gNYkMjMHczA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <acb39086-69f1-4bd0-96f8-d9c9420cbb41@amd.com>
Date: Fri, 19 May 2023 09:25:49 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 3/3] xen/misra: xen-analysis.py: use the relative path
 from the ...
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Xen-devel
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>
References: <20230504131245.2985400-1-luca.fancellu@arm.com>
 <20230504131245.2985400-4-luca.fancellu@arm.com>
 <alpine.DEB.2.22.394.2305161743520.128889@ubuntu-linux-20-04-desktop>
 <a0d6197a-53e8-0121-c7e0-ddbdaf970c7e@amd.com>
 <B087CAA6-0DCD-48C8-8199-A328BDA649A8@arm.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <B087CAA6-0DCD-48C8-8199-A328BDA649A8@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT085:EE_|PH8PR12MB7422:EE_
X-MS-Office365-Filtering-Correlation-Id: d31c1254-2b88-429b-a3a1-08db583a4a0c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	kLOycE++EKjvjWdZtpZFDi3lExeb2HBYDVLG3pndXcSQL56Ngo87XXGcu7Ka42EXQIpOfepAiKVkZQpkHHOLxnWzTn5bJVJX4KQ+KoL6I9xaoDi+ZkExIG9iShcVBod34YBO0krOee0WhCg9vcLx9uVMe9ox43FO39jF0xKF6K/7EzBoHmu+Q0E/K1Fazgy3UQNBhZySVR5NVBUNYDi5LXtU7VujXLkLDyfyAkmq6xP/srJrrS2UYEtQSThVOOmX+b325bMbWLE6fZ6QjCY/RDmritmPFxdvaZw/ejuD2z4LeZqISXkeJjeHFgG5Cgn7LNqIwZJhGYXZz0E0denJfKNB9BJgAHyvHay6NptV+qTHFq5mgHNjsmnKtrykEg59EQHyBkN7ZD0RpwjhJvyT1roHx4P2eK0J8JKoo926BQAfmpMGuKEyozITEkmlrWO7CTa/kDc8TKVk5P4nhY68oCoyNxTG9eCO6II61VTJScUiR8mZzVOqShb8NKCJ24/FKexa/meTz26BEoSdQO4t2mXBrh1Js6l53Yd0fiQ+NAlGLhoTUqwUpM8D9pSdnGfBlnw9BW9ZpYm3g+lWXzwZCGd8dTJtMaNO3fJVg/S7ZiY6vCnvD7ZCHeCPv+VvR5nPXLmoMmqxU7ZOt9kIIfoqeJFPrTjEQAnekGI8t1CivPwYXskwyd/8fIUMiJNU6vjXLOIKDdF+juT4Y6Kf4mAINinQef+GikCrGxPlL2Ci6TI5JAM4oS7EQ5nKdyscHvBC1Be1RWoBkMikmTvupddCda20whLBKHHlhX3lPBavbn73g2IEDZEejIMdLx2J9kOF
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(136003)(39860400002)(396003)(376002)(346002)(451199021)(36840700001)(46966006)(40470700004)(2906002)(316002)(478600001)(41300700001)(4326008)(8676002)(6916009)(31686004)(8936002)(16576012)(54906003)(7416002)(5660300002)(44832011)(6666004)(70586007)(70206006)(53546011)(26005)(40460700003)(81166007)(186003)(82740400003)(356005)(83380400001)(40480700001)(36860700001)(47076005)(426003)(36756003)(82310400005)(2616005)(336012)(86362001)(31696002)(36900700001)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 07:25:57.6278
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d31c1254-2b88-429b-a3a1-08db583a4a0c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT085.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB7422


On 19/05/2023 09:12, Luca Fancellu wrote:
> 
> 
>> On 19 May 2023, at 08:08, Michal Orzel <michal.orzel@amd.com> wrote:
>>
>> Hi Luca,
>>
>> On 17/05/2023 02:44, Stefano Stabellini wrote:
>>>
>>>
>>> On Thu, 4 May 2023, Luca Fancellu wrote:
>>>> repository in the reports
>>>>
>>>> Currently the cppcheck report entries show the relative file path
>>>> from the /xen folder of the repository instead of the base folder.
>>>> In order to ease the checks, for example when comparing a git diff
>>>> output against the report, use the repository folder as base.
>>>>
>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>
>>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>>> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
>>
>> I know this patch is now committed but there is something confusing here.
>> At the moment, in the cppcheck report we have paths relative to xen/ e.g.:
>> arch/arm/arm64/lib/bitops.c(117,1):...
>>
>> So after this patch, I would expect to see the path relative to root of repository e.g.:
>> *xen/*arch/arm/arm64/lib/bitops.c(117,1):...
>>
>> However, with or without this patch the behavior is the same.
>> Did I misunderstand your patch?
> 
> Hi Michal,
> 
> Thank you for having spotted this. During my tests I was using xen-analysis.py so that it
> calls the makefile with an out-of-tree build. I've found, after your mail, that when it calls the makefile
> with an in-tree build, cppcheck is run from /xen/xen, which causes it to produce paths relative to
> that directory in the TXT fragments, showing the issue you observed.
OK, the way I test it is the same as in our gitlab CI, so this needs to be fixed.

> 
> I have a fix ready for that and I'll push it soon.
Thanks.

~Michal


From xen-devel-bounces@lists.xenproject.org Fri May 19 07:39:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 07:39:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536728.835360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuhb-0006dK-JP; Fri, 19 May 2023 07:39:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536728.835360; Fri, 19 May 2023 07:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzuhb-0006dD-Fw; Fri, 19 May 2023 07:39:03 +0000
Received: by outflank-mailman (input) for mailman id 536728;
 Fri, 19 May 2023 07:39:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kTyP=BI=citrix.com=prvs=49624ea57=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pzuha-0006d7-7U
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 07:39:02 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 367a423d-f618-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 09:38:59 +0200 (CEST)
Received: from mail-mw2nam12lp2041.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 May 2023 03:38:56 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by MW4PR03MB6427.namprd03.prod.outlook.com (2603:10b6:303:122::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Fri, 19 May
 2023 07:38:54 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 07:38:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 367a423d-f618-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684481938;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Q1BAOgB3ZUzGU1gDfM5OLJopGjOjCz97ebthPmZ1aA8=;
  b=EEJwgt0dEw5LqRGD92ym6MIXyBt/9q9BaCa2HJImTxUKyKIJ7WddyEBH
   7KH9ZnHFJUv4YeF00uJxDTo+HOoz/xA41FTuVXHs8GCSkBO3Eyi2MkfVj
   ELxQREUtDgUopWIAMJFcHBMAHRDldIB3Pcka4NbkCmvyC8rruHwZrGNaQ
   Q=;
X-IronPort-RemoteIP: 104.47.66.41
X-IronPort-MID: 109509093
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:ltpYx6A6GD/IihVW/8Xiw5YqxClBgxIJ4kV8jS/XYbTApDwj0GYHz
 jAYCj2HM6mJZGb9edglOtm1901VuMfUxtQ2QQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbCRMs8pvlDs15K6p4G5B7wRnDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIw1dx6BDF16
 /AjJS0tQj+uiOvox6LkRbw57igjBJGD0II3nFhFlGucKMl8BJfJTuPN+MNS2yo2ioZWB/HCa
 sEFaD1pKhPdfxlIPVRRA5U79AuqriCnL3sE9xTL++xrugA/zyQouFTpGMDSddGQA91cg26Tp
 37c/nS/CRYfXDCa4WPdoy/017eexksXXqpKT4ex+vl6uWfI4UczFloZaHSypaCm3xvWt9V3b
 hZ8FjAVhao4+VGvT9L9dwalu3PCtRkZM/JLCPEz4gyJzqvS4i6aC3ICQzoHb8Yp3OcmSDpv2
 lKXktfBAT10rKbTWX+b7q2Trz65JW4SN2BqTSoNVw4M+dTgiIA1kBPUT9xnHbK1j9v6AjX5y
 XaBqy1Wr6Uei88Ckb+y8lHOjzuvoYXhRws5oA7QWwqYAhhRYYekY8mt9gLd5PMZdoKBFAHd5
 T4DhtSU6/0IAdeVjiuRTe4RHbavofGYLDnbhl0pFJ4kn9iwx0OekUlryGkWDC9U3gwsIFcFv
 Ge7Vdtt2aJu
IronPort-HdrOrdr: A9a23:kDEslat08UB+4PKztveVOY6e7skCEYAji2hC6mlwRA09TyXGra
 2TdaUgvyMc1gx7ZJhBo7+90We7MBbhHLpOkPEs1NCZLXLbUQqTXfhfBO7ZrwEIdBefygcw79
 YCT0E6MqyLMbEYt7eE3ODbKadG/DDvysnB64bjJjVWPGdXgslbnntE422gYylLrWd9dPgE/M
 323Ls7m9PsQwVfUiz9bUN1LNTrlpnurtbLcBQGDxko5E2nii6p0qfzF1y1zwoTSDRGxJYl6C
 zgnxbi7quunvmnwluEvlWjoqh+qZ/E8J9uFcaMgs8aJnHFjRupXp1oX/mvrS04u+am7XctiZ
 3prw07N8p+xnvNdiWeoAfr2SPnzDEygkWSg2OwsD/Gm4jUVTg6A81OicZwdQbY0VMpuJVZ3L
 hQ12yUmpJLBVeY9R6NrOTgZlVPrA6ZsHAimekcgzh2VpYfUqZYqcg68FlOGJkNMSrm4MQMEf
 VoDuvb+PFKGGnqJ0zxjy1K+piBT34zFhCJTgwrvdGU6SFfmDRDw04R1KUk7wM93aN4b6MBy/
 XPM6xumr0LZNQRd7hBCOAIRtbyInDRQDrXWVjiYGjPJeUiATbgupT36LI66KWBY5oT1qY/n5
 zHTRdxqXMyQUTzEseDtac7vCwleF/NHggF9/supaSQ4tbHNf/W2Gy4OR8TevKb0rUi6paxYY
 f2BHpUa8WTWFcGV7w5mDEWYKMiWUX2YPdlxOrTZGj+0/4jCreawdAzI8yjUobFIHIDZl7VJE
 clcXzaGPhgh3rbKEMQxiKhF0/QRg==
X-Talos-CUID: 9a23:OYeBlW5DqW5U5Cv099ss23QZH5AebHHmwWrwD2CVBm83T63KRgrF
X-Talos-MUID: 9a23:B+S3jQhbyMz9rRhRwss6WsMpbJxS6qipGHk3yrItm9fDOhFfMjPEk2Hi
X-IronPort-AV: E=Sophos;i="6.00,176,1681185600"; 
   d="scan'208";a="109509093"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T60KxpkpLARhhvqzILge/0/DHCeeRE0usyW6nYjr4DeBCxBz3CUhOx/vnMKbEj5Xf2HlYfzQzO9oopbZzkdIE/Xir9oObFJ52itmYRVC+Qg88kg4OuWBFBp9yaocd1m5eNJCYxlPewZ5vDjmPM9Czb6iH607IvAV1ZOdwPEJnNt0UHHElvjoik2EWt63KoHGbeB10l4huhaKdsQrv0eqF0DPdcaeQobqJnlA2YvwOTnhKRiScxCHN/cAtnLBbQdg5Va9M104vVXl5x+fzOCd8uMRhvS5adGTJILLQl9TIyihTE6Tg6IsC1RPZNOvM40LunXAuCcxBmRaiKqZWnL1Mw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3hUDmsjUGTRRMByIBsebnZ+jzvOiPbptetj/uyiskZc=;
 b=BrVRRtg5ZaQzIMm5z7rWhv1Ykwd9O6fUlIEvCYvx4EQcihAYNLRU+ab0vRZSXoigBo573NxMzhSd6v5dl1stcOUMrYEr2CoJjRceDWoaeMpQ30c3Fkt5XAly9ZEL0pmk0jOH1Lj5MsjlKcLNYDGSLO5FSjHf9zCPTjJLA+QrAU05eTa6f7pwlS+fw7QBSvQuI3BC5trz5KSnTOrpG+ohwbHmswkGRWuM8mwqKSpg7HuPxSDDDul7Pup3c4udQO5Sk0MJWFFyEF2NXVe2jc71QBCaYHk42g2Fdscv8QQ0gqZKdWcWpBZiVjFVT4zyHQLiBfjIhS2a3JrX/+5a8dVAXg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3hUDmsjUGTRRMByIBsebnZ+jzvOiPbptetj/uyiskZc=;
 b=tNeJpwosJ4NRh7Joe7qyugyNvfMKKoMsYgVeumuNQXt/TITnyqTFTNYuILcQLAvKji1Lk4/wAlkmdH9WDSgziJn0Yln4aYiNrkVz3A7vsLRFO6QABN/Y2NJbO3B19oEYV0gihtAI0VHkEfhvn+VguR6LbClGwDjnMhZ/6gY0Jr0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 19 May 2023 09:38:47 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
	xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
Message-ID: <ZGcnh/DZvFAIBR/n@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <b411f7aa-7fd2-7b1c-1bcd-35b989f528b6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b411f7aa-7fd2-7b1c-1bcd-35b989f528b6@suse.com>
X-ClientProxiedBy: LO6P123CA0039.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:2fe::19) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|MW4PR03MB6427:EE_
X-MS-Office365-Filtering-Correlation-Id: 589d631a-ecf6-416e-f279-08db583c1872
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	RFwkWcz+Euw2z8v79CK/LNk+PxjP2oMiIwMaXGdQtkzndD68HvMweAp5WUa6AwKbaxbxteCA8zDCAnRgCAQJVu9yZNY0oO7g24mPpxrhiJT0JHYV3Ehnk5q0m5Cu9M7mPVyLrVYT3KmEz/3R2bm5d2O1bZ4kC32UQshWV602QuDOk0+1UyiftujqE8O31AdWx0oLqQaaO4ruL8MxHmfpWWW192Pddl6fDjn55drBWwfSVLsObSxG46Yz+qsemeXN9tDMMu1F5aItZw8NnPscXYebo4iqk+BoetpU63M+rxEm0apyIYxsT/6COtVAL9bnNRJj3LdOgDiLtRKSQw8uQkeI1P5JBaOk2Arx8/708dDE5HBZAMb+EMhEjoGBSxzK0lINsznLgjlJsrBlGds86wNQuDMea/eNaFupzuz4mp/lpsAE7Vba7p7zzr2FSjyV/4M8SVh+5Alm/XI3CXEo8kKSW2qnBaQs+00Ji1b/K1c5L5e00J6FjEI+OX5T8T4isRwGV24zLdIuUqvpCypYNdjLfGgH4UnenIFxeeeTkuc=
X-Forefront-Antispam-Report:
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 07:38:53.8560
 (UTC)

On Fri, May 19, 2023 at 09:22:58AM +0200, Jan Beulich wrote:
> On 18.05.2023 12:34, Roger Pau Monné wrote:
> > On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> >> I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
> >> test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
> >> Zen3 system and we already have a few successful tests with it, see
> >> automation/gitlab-ci/test.yaml.
> >>
> >> We managed to narrow down the issue to a console problem. We are
> >> currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
> >> options; it works with PV Dom0 and it is using a PCI UART card.
> >>
> >> In the case of Dom0 PVH:
> >> - it works without console=com1
> >> - it works with console=com1 and with the patch appended below
> >> - it doesn't work otherwise and crashes with this error:
> >> https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> > 
> > Jan also noticed this, and we have a ticket for it in gitlab:
> > 
> > https://gitlab.com/xen-project/xen/-/issues/85
> > 
> >> What is the right way to fix it?
> > 
> > I think the right fix is to simply prevent hidden devices from being
> > handled by vPCI; in any case such devices won't work properly with
> > vPCI because they are in use by Xen, and so any information cached by
> > vPCI is likely to become stale, as Xen can modify the device without
> > vPCI noticing.
> > 
> > I think the chunk below should help.  It's not clear to me however how
> > hidden devices should be handled; is the intention to completely hide
> > such devices from dom0?
> 
> No, Dom0 should still be able to see them in a (mostly) r/o fashion.
> Hence my earlier RFC patch making vPCI actually deal with them.

What's the difference between a hidden device and one that's marked RO
then?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 19 08:10:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 08:10:51 +0000
Date: Fri, 19 May 2023 10:10:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: jbeulich@suse.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
	xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
Message-ID: <ZGcu7EWW1cuNjwDA@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0

On Thu, May 18, 2023 at 06:46:52PM -0700, Stefano Stabellini wrote:
> On Thu, 18 May 2023, Roger Pau Monné wrote:
> > On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> > > Hi all,
> > > 
> > > I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
> > > test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
> > > Zen3 system and we already have a few successful tests with it, see
> > > automation/gitlab-ci/test.yaml.
> > > 
> > > We managed to narrow down the issue to a console problem. We are
> > > currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
> > > options; it works with PV Dom0 and it is using a PCI UART card.
> > > 
> > > In the case of Dom0 PVH:
> > > - it works without console=com1
> > > - it works with console=com1 and with the patch appended below
> > > - it doesn't work otherwise and crashes with this error:
> > > https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> > 
> > Jan also noticed this, and we have a ticket for it in gitlab:
> > 
> > https://gitlab.com/xen-project/xen/-/issues/85
> > 
> > > What is the right way to fix it?
> > 
> > I think the right fix is to simply prevent hidden devices from being
> > handled by vPCI; in any case such devices won't work properly with
> > vPCI because they are in use by Xen, and so any information cached by
> > vPCI is likely to become stale, as Xen can modify the device without
> > vPCI noticing.
> > 
> > I think the chunk below should help.  It's not clear to me however how
> > hidden devices should be handled; is the intention to completely hide
> > such devices from dom0?
> 
> I like the idea but the patch below still failed:
> 
> (XEN) Xen call trace:
> (XEN)    [<ffff82d0402682b0>] R drivers/vpci/header.c#modify_bars+0x2b3/0x44d
> (XEN)    [<ffff82d040268714>] F drivers/vpci/header.c#init_bars+0x2ca/0x372
> (XEN)    [<ffff82d0402673b3>] F vpci_add_handlers+0xd5/0x10a
> (XEN)    [<ffff82d0404408b9>] F drivers/passthrough/pci.c#setup_one_hwdom_device+0x73/0x97
> (XEN)    [<ffff82d0404409b0>] F drivers/passthrough/pci.c#_setup_hwdom_pci_devices+0x63/0x15b
> (XEN)    [<ffff82d04027df08>] F drivers/passthrough/pci.c#pci_segments_iterate+0x43/0x69
> (XEN)    [<ffff82d040440e29>] F setup_hwdom_pci_devices+0x25/0x2c
> (XEN)    [<ffff82d04043cb1a>] F drivers/passthrough/amd/pci_amd_iommu.c#amd_iommu_hwdom_init+0xd4/0xdd
> (XEN)    [<ffff82d0404403c9>] F iommu_hwdom_init+0x49/0x53
> (XEN)    [<ffff82d04045175e>] F dom0_construct_pvh+0x160/0x138d
> (XEN)    [<ffff82d040468914>] F construct_dom0+0x5c/0xb7
> (XEN)    [<ffff82d0404619c1>] F __start_xen+0x2423/0x272d
> (XEN)    [<ffff82d040203344>] F __high_start+0x94/0xa0
> 
> I haven't managed to figure out why yet.

Do you have some other patches applied?

I've tested this by manually hiding a device on my system and can
confirm that without the fix I hit the ASSERT, but with the patch
applied I no longer hit it.  I have no idea how you can get into
init_bars if the device is hidden and thus belongs to dom_xen.

FWIW, I've used the following chunk to make a device RO and enable
memory decoding:

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 07d1986d330a..e4de372af7c9 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1111,6 +1111,16 @@ static int __hwdom_init cf_check _setup_hwdom_pci_devices(
 {
     struct setup_hwdom *ctxt = arg;
     int bus, devfn;
+    pci_sbdf_t hide = {
+        .seg = 0,
+        .bus = 0,
+        .dev = 8,
+        .fn = 0,
+    };
+    uint16_t cmd = pci_conf_read16(hide, PCI_COMMAND);
+
+    pci_conf_write16(hide, PCI_COMMAND, cmd | PCI_COMMAND_MEMORY);
+    printk("hide dev: %d\n", pci_ro_device(0, 0, PCI_DEVFN(8, 0)));
 
     for ( bus = 0; bus < 256; bus++ )
     {


From xen-devel-bounces@lists.xenproject.org Fri May 19 08:20:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 08:20:41 +0000
Message-ID: <7a00019e-da64-ad0c-d107-d002cf6bce85@suse.com>
Date: Fri, 19 May 2023 10:20:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: PVH Dom0 related UART failure
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
 xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
 xenia.ragiadakou@amd.com
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <b411f7aa-7fd2-7b1c-1bcd-35b989f528b6@suse.com>
 <ZGcnh/DZvFAIBR/n@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZGcnh/DZvFAIBR/n@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 41e22116-e8b6-4170-0c22-08db5841e46d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 08:20:23.3057
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LU2s6lTquIchcJI1W+T0P4o2l6fV6ogKRSDVbTpYRORqbfj3kJMR10W5s351UEKRgl3UNu9agtBYz2jIbd0TWg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR04MB9877

On 19.05.2023 09:38, Roger Pau Monné wrote:
> On Fri, May 19, 2023 at 09:22:58AM +0200, Jan Beulich wrote:
>> On 18.05.2023 12:34, Roger Pau Monné wrote:
>>> On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
>>>> I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
>>>> test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
>>>> Zen3 system and we already have a few successful tests with it, see
>>>> automation/gitlab-ci/test.yaml.
>>>>
>>>> We managed to narrow down the issue to a console problem. We are
>>>> currently using console=com1 com1=115200,8n1,pci,msi as the Xen
>>>> command line options; this works with PV Dom0 and uses a PCI UART
>>>> card.
>>>>
>>>> In the case of Dom0 PVH:
>>>> - it works without console=com1
>>>> - it works with console=com1 and with the patch appended below
>>>> - it doesn't work otherwise and crashes with this error:
>>>> https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
>>>
>>> Jan also noticed this, and we have a ticket for it in gitlab:
>>>
>>> https://gitlab.com/xen-project/xen/-/issues/85
>>>
>>>> What is the right way to fix it?
>>>
>>> I think the right fix is to simply prevent hidden devices from being
>>> handled by vPCI; in any case such devices won't work properly with
>>> vPCI, because they are in use by Xen, and so any information cached
>>> by vPCI is likely to become stale, as Xen can modify the device
>>> without vPCI noticing.
>>>
>>> I think the chunk below should help.  It's not clear to me, however,
>>> how hidden devices should be handled: is the intention to completely
>>> hide such devices from dom0?
>>
>> No, Dom0 should still be able to see them in a (mostly) r/o fashion.
>> Hence my earlier RFC patch making vPCI actually deal with them.
> 
> What's the difference between a hidden device and one that's marked RO
> then?

pci_hide_device() simply makes the device unavailable for assignment
(by having it owned by DomXEN). pci_ro_device() additionally adds the
device to the segment's ro_map, thus protecting its config space from
Dom0 writes.

And just to clarify: a PCI dev containing a UART isn't "hidden" in the
above sense, but "r/o" (by virtue of pci_ro_device() having been called
on it). But the issue reported long ago (and now re-discovered by
Stefano) affects "r/o" and "hidden" devices alike, since it's the
"hidden" aspect, shared by both, that matters here.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 08:28:57 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <b1145824-b5f3-2f2d-7cd4-97c30fdd9fe1@suse.com>
Date: Fri, 19 May 2023 10:28:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v7 1/5] xen/riscv: add VM space layout
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1683824347.git.oleksii.kurochko@gmail.com>
 <7b03dbf21718ed9c05859a629f4442167d74553c.1683824347.git.oleksii.kurochko@gmail.com>
 <d1529686-ce06-a707-de9e-a4b28c9f2e02@suse.com>
 <eef94bf6e8b67a98ad175125e221c75aeb4ba013.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <eef94bf6e8b67a98ad175125e221c75aeb4ba013.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17.05.2023 17:56, Oleksii wrote:
> On Tue, 2023-05-16 at 17:42 +0200, Jan Beulich wrote:
>> On 11.05.2023 19:09, Oleksii Kurochko wrote:
>>> + * ============================================================================
>>> + *    Start addr    |   End addr        |  Size  | Slot       | area description
>>> + * ============================================================================
>>> + * FFFFFFFFC0800000 |  FFFFFFFFFFFFFFFF |1016 MB | L2 511     | Unused
>>> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
>>> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
>>> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
>>> + *                 ...                  |  1 GB  | L2 510     | Unused
>>> + * 0000003200000000 |  0000007f40000000 | 309 GB | L2 200-509 | Direct map
>>
>> The upper bound here is 0000007f80000000 afaict, 
> It should be 0000007f80000000. 0000007f40000000 is the start address
> of slot 509.
> 
>> which then also makes
>> the earlier gap 1Gb in size.
> Do you mean that it would be better to write the start and end
> addresses (0000007f80000000 - 7FC0000000) of the L2 510 slot
> explicitly?

No, not really. The ... there is quite okay imo, because of the differing
upper bits. I was merely pointing out that, as you had it, the gap was
2 GB (from 0000007f40000000 to FFFFFFFFC0000000, leaving aside the
ignored upper bits).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 08:33:37 2023
Message-ID: <b563dbfc-e7dd-9ae2-01c1-2e0c7251a550@suse.com>
Date: Fri, 19 May 2023 10:33:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v1] xen/sched/null: avoid crash after failed domU creation
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <20230501203046.168856-1-stewart.hildebrand@amd.com>
 <30246788-c90e-e338-de4b-e7bb2e440f4e@suse.com>
 <93adef92-90fb-80de-c6b4-b41872b74682@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <93adef92-90fb-80de-c6b4-b41872b74682@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18.05.2023 23:27, Stewart Hildebrand wrote:
> On 5/5/23 01:59, Juergen Gross wrote:
>> On 01.05.23 22:30, Stewart Hildebrand wrote:
>>> When domU creation fails, there is a corner case that may lead to a
>>> crash in the null scheduler when running a debug build of Xen.
>>>
>>> (XEN) ****************************************
>>> (XEN) Panic on CPU 0:
>>> (XEN) Assertion 'npc->unit == unit' failed at common/sched/null.c:379
>>> (XEN) ****************************************
>>>
>>> The events leading to the crash are:
>>>
>>> * null_unit_insert() was invoked with the unit offline. Since the unit was
>>>    offline, unit_assign() was not called, and null_unit_insert() returned.
>>> * Later during domain creation, the unit was onlined
>>> * Eventually, domain creation failed due to bad configuration
>>> * null_unit_remove() was invoked with the unit still online. Since the unit was
>>>    online, it called unit_deassign() and triggered an ASSERT.
>>>
>>> To fix this, only call unit_deassign() in null_unit_remove() when
>>> npc->unit is non-NULL.
>>>
>>> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
>>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
> 
> Thanks for the review. Does this still need a maintainer ack?

In principle yes. I might be willing to time out at some point, but
not before at least one ping was sent (and some more time has passed
afterwards).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 08:46:14 2023
Message-ID: <d9618948-d4da-cfea-ad80-d130dc50d3cb@suse.com>
Date: Fri, 19 May 2023 10:45:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v3 3/6] iommu/arm: Introduce iommu_add_dt_pci_sideband_ids
 API
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Rahul Singh <rahul.singh@arm.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, xen-devel@lists.xenproject.org
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
 <20230518210658.66156-4-stewart.hildebrand@amd.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230518210658.66156-4-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.05.2023 23:06, Stewart Hildebrand wrote:
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -26,6 +26,7 @@
>  #include <xen/spinlock.h>
>  #include <public/domctl.h>
>  #include <public/hvm/ioreq.h>
> +#include <asm/acpi.h>
>  #include <asm/device.h>

I view this as problematic: It'll require all architectures with an
IOMMU implementation to have an asm/acpi.h. I think this wants to go
inside an "#ifdef CONFIG_ACPI" and then ...

> @@ -228,12 +230,25 @@ int iommu_release_dt_devices(struct domain *d);
>   *      (IOMMU is not enabled/present or device is not connected to it).
>   */
>  int iommu_add_dt_device(struct dt_device_node *np);
> +int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev);
>  
>  int iommu_do_dt_domctl(struct xen_domctl *, struct domain *,
>                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
>  
> +#else /* !HAS_DEVICE_TREE */
> +static inline int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev)
> +{
> +    return 0;
> +}
>  #endif /* HAS_DEVICE_TREE */
>  
> +static inline int iommu_add_pci_sideband_ids(struct pci_dev *pdev)
> +{
> +    if ( acpi_disabled )

... the same #ifdef would be added around this if().

All of this of course only if this is deemed enough to allow co-existence
of DT and ACPI (which I'm not convinced it is, but I don't know enough
about DT and e.g. possible mixed configurations).

Jan

> +        return iommu_add_dt_pci_sideband_ids(pdev);
> +    return 0;
> +}
> +
>  struct page_info;
>  
>  /*



From xen-devel-bounces@lists.xenproject.org Fri May 19 08:46:24 2023
Date: Fri, 19 May 2023 10:45:57 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 1/6] block: add blk_io_plug_call() API
Message-ID: <mzxjz4d3ab3sq6grwsle6wlacysh2uffz42ojpdze3hmqimbr5@fxgkad47nnim>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-2-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230517221022.325091-2-stefanha@redhat.com>
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline

On Wed, May 17, 2023 at 06:10:17PM -0400, Stefan Hajnoczi wrote:
>Introduce a new API for thread-local blk_io_plug() that does not
>traverse the block graph. The goal is to make blk_io_plug() multi-queue
>friendly.
>
>Instead of having block drivers track whether or not we're in a plugged
>section, provide an API that allows them to defer a function call until
>we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is
>called multiple times with the same fn/opaque pair, then fn() is only
>called once at the end of the function - resulting in batching.
>
>This patch introduces the API and changes blk_io_plug()/blk_io_unplug().
>blk_io_plug()/blk_io_unplug() no longer require a BlockBackend argument
>because the plug state is now thread-local.
>
>Later patches convert block drivers to blk_io_plug_call() and then we
>can finally remove .bdrv_co_io_plug() once all block drivers have been
>converted.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>---
> MAINTAINERS                       |   1 +
> include/sysemu/block-backend-io.h |  13 +--
> block/block-backend.c             |  22 -----
> block/plug.c                      | 159 ++++++++++++++++++++++++++++++
> hw/block/dataplane/xen-block.c    |   8 +-
> hw/block/virtio-blk.c             |   4 +-
> hw/scsi/virtio-scsi.c             |   6 +-
> block/meson.build                 |   1 +
> 8 files changed, 173 insertions(+), 41 deletions(-)
> create mode 100644 block/plug.c
>
>diff --git a/MAINTAINERS b/MAINTAINERS
>index 50585117a0..574202295c 100644
>--- a/MAINTAINERS
>+++ b/MAINTAINERS
>@@ -2644,6 +2644,7 @@ F: util/aio-*.c
> F: util/aio-*.h
> F: util/fdmon-*.c
> F: block/io.c
>+F: block/plug.c
> F: migration/block*
> F: include/block/aio.h
> F: include/block/aio-wait.h
>diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
>index d62a7ee773..be4dcef59d 100644
>--- a/include/sysemu/block-backend-io.h
>+++ b/include/sysemu/block-backend-io.h
>@@ -100,16 +100,9 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
> int blk_get_max_iov(BlockBackend *blk);
> int blk_get_max_hw_iov(BlockBackend *blk);
>
>-/*
>- * blk_io_plug/unplug are thread-local operations. This means that multiple
>- * IOThreads can simultaneously call plug/unplug, but the caller must ensure
>- * that each unplug() is called in the same IOThread of the matching plug().
>- */
>-void coroutine_fn blk_co_io_plug(BlockBackend *blk);
>-void co_wrapper blk_io_plug(BlockBackend *blk);
>-
>-void coroutine_fn blk_co_io_unplug(BlockBackend *blk);
>-void co_wrapper blk_io_unplug(BlockBackend *blk);
>+void blk_io_plug(void);
>+void blk_io_unplug(void);
>+void blk_io_plug_call(void (*fn)(void *), void *opaque);
>
> AioContext *blk_get_aio_context(BlockBackend *blk);
> BlockAcctStats *blk_get_stats(BlockBackend *blk);
>diff --git a/block/block-backend.c b/block/block-backend.c
>index ca537cd0ad..1f1d226ba6 100644
>--- a/block/block-backend.c
>+++ b/block/block-backend.c
>@@ -2568,28 +2568,6 @@ void blk_add_insert_bs_notifier(BlockBackend *blk, Notifier *notify)
>     notifier_list_add(&blk->insert_bs_notifiers, notify);
> }
>
>-void coroutine_fn blk_co_io_plug(BlockBackend *blk)
>-{
>-    BlockDriverState *bs = blk_bs(blk);
>-    IO_CODE();
>-    GRAPH_RDLOCK_GUARD();
>-
>-    if (bs) {
>-        bdrv_co_io_plug(bs);
>-    }
>-}
>-
>-void coroutine_fn blk_co_io_unplug(BlockBackend *blk)
>-{
>-    BlockDriverState *bs = blk_bs(blk);
>-    IO_CODE();
>-    GRAPH_RDLOCK_GUARD();
>-
>-    if (bs) {
>-        bdrv_co_io_unplug(bs);
>-    }
>-}
>-
> BlockAcctStats *blk_get_stats(BlockBackend *blk)
> {
>     IO_CODE();
>diff --git a/block/plug.c b/block/plug.c
>new file mode 100644
>index 0000000000..6738a568ba
>--- /dev/null
>+++ b/block/plug.c
>@@ -0,0 +1,159 @@
>+/* SPDX-License-Identifier: GPL-2.0-or-later */
>+/*
>+ * Block I/O plugging
>+ *
>+ * Copyright Red Hat.
>+ *
>+ * This API defers a function call within a blk_io_plug()/blk_io_unplug()
>+ * section, allowing multiple calls to batch up. This is a performance
>+ * optimization that is used in the block layer to submit several I/O requests
>+ * at once instead of individually:
>+ *
>+ *   blk_io_plug(); <-- start of plugged region
>+ *   ...
>+ *   blk_io_plug_call(my_func, my_obj); <-- deferred my_func(my_obj) call
>+ *   blk_io_plug_call(my_func, my_obj); <-- another
>+ *   blk_io_plug_call(my_func, my_obj); <-- another
>+ *   ...
>+ *   blk_io_unplug(); <-- end of plugged region, my_func(my_obj) is called once
>+ *
>+ * This code is actually generic and not tied to the block layer. If another
>+ * subsystem needs this functionality, it could be renamed.
>+ */
>+
>+#include "qemu/osdep.h"
>+#include "qemu/coroutine-tls.h"
>+#include "qemu/notify.h"
>+#include "qemu/thread.h"
>+#include "sysemu/block-backend.h"
>+
>+/* A function call that has been deferred until unplug() */
>+typedef struct {
>+    void (*fn)(void *);
>+    void *opaque;
>+} UnplugFn;
>+
>+/* Per-thread state */
>+typedef struct {
>+    unsigned count;       /* how many times has plug() been called? */
>+    GArray *unplug_fns;   /* functions to call at unplug time */
>+} Plug;
>+
>+/* Use get_ptr_plug() to fetch this thread-local value */
>+QEMU_DEFINE_STATIC_CO_TLS(Plug, plug);
>+
>+/* Called at thread cleanup time */
>+static void blk_io_plug_atexit(Notifier *n, void *value)
>+{
>+    Plug *plug = get_ptr_plug();
>+    g_array_free(plug->unplug_fns, TRUE);
>+}
>+
>+/* This won't involve coroutines, so use __thread */
>+static __thread Notifier blk_io_plug_atexit_notifier;
>+
>+/**
>+ * blk_io_plug_call:
>+ * @fn: a function pointer to be invoked
>+ * @opaque: a user-defined argument to @fn()
>+ *
>+ * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
>+ * section.

Just to understand better, what if two BlockDrivers share the same
iothread but one calls blk_io_plug()/blk_io_unplug(), while the other
calls this function not in a blk_io_plug()/blk_io_unplug() section?

If the call is in the middle of the other BlockDriver's section, it is
deferred, right?

Is this situation possible?
Or should we prevent blk_io_plug_call() from being called out of a
blk_io_plug()/blk_io_unplug() section?

Thanks,
Stefano

>+ *
>+ * Otherwise defer the call until the end of the outermost
>+ * blk_io_plug()/blk_io_unplug() section in this thread. If the same
>+ * @fn/@opaque pair has already been deferred, it will only be called once upon
>+ * blk_io_unplug() so that accumulated calls are batched into a single call.
>+ *
>+ * The caller must ensure that @opaque is not freed before @fn() is invoked.
>+ */
>+void blk_io_plug_call(void (*fn)(void *), void *opaque)
>+{
>+    Plug *plug = get_ptr_plug();
>+
>+    /* Call immediately if we're not plugged */
>+    if (plug->count == 0) {
>+        fn(opaque);
>+        return;
>+    }
>+
>+    GArray *array = plug->unplug_fns;
>+    if (!array) {
>+        array = g_array_new(FALSE, FALSE, sizeof(UnplugFn));
>+        plug->unplug_fns = array;
>+        blk_io_plug_atexit_notifier.notify = blk_io_plug_atexit;
>+        qemu_thread_atexit_add(&blk_io_plug_atexit_notifier);
>+    }
>+
>+    UnplugFn *fns = (UnplugFn *)array->data;
>+    UnplugFn new_fn = {
>+        .fn = fn,
>+        .opaque = opaque,
>+    };
>+
>+    /*
>+     * There won't be many, so do a linear search. If this becomes a bottleneck
>+     * then a binary search (glib 2.62+) or different data structure could be
>+     * used.
>+     */
>+    for (guint i = 0; i < array->len; i++) {
>+        if (memcmp(&fns[i], &new_fn, sizeof(new_fn)) == 0) {
>+            return; /* already exists */
>+        }
>+    }
>+
>+    g_array_append_val(array, new_fn);
>+}
>+
>+/**
>+ * blk_io_plug: Defer blk_io_plug_call() functions until blk_io_unplug()
>+ *
>+ * blk_io_plug/unplug are thread-local operations. This means that multiple
>+ * threads can simultaneously call plug/unplug, but the caller must ensure that
>+ * each unplug() is called in the same thread of the matching plug().
>+ *
>+ * Nesting is supported. blk_io_plug_call() functions are only called at the
>+ * outermost blk_io_unplug().
>+ */
>+void blk_io_plug(void)
>+{
>+    Plug *plug = get_ptr_plug();
>+
>+    assert(plug->count < UINT32_MAX);
>+
>+    plug->count++;
>+}
>+
>+/**
>+ * blk_io_unplug: Run any pending blk_io_plug_call() functions
>+ *
>+ * There must have been a matching blk_io_plug() call in the same thread prior
>+ * to this blk_io_unplug() call.
>+ */
>+void blk_io_unplug(void)
>+{
>+    Plug *plug = get_ptr_plug();
>+
>+    assert(plug->count > 0);
>+
>+    if (--plug->count > 0) {
>+        return;
>+    }
>+
>+    GArray *array = plug->unplug_fns;
>+    if (!array) {
>+        return;
>+    }
>+
>+    UnplugFn *fns = (UnplugFn *)array->data;
>+
>+    for (guint i = 0; i < array->len; i++) {
>+        fns[i].fn(fns[i].opaque);
>+    }
>+
>+    /*
>+     * This resets the array without freeing memory so that appending is cheap
>+     * in the future.
>+     */
>+    g_array_set_size(array, 0);
>+}
>diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
>index d8bc39d359..e49c24f63d 100644
>--- a/hw/block/dataplane/xen-block.c
>+++ b/hw/block/dataplane/xen-block.c
>@@ -537,7 +537,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
>      * is below us.
>      */
>     if (inflight_atstart > IO_PLUG_THRESHOLD) {
>-        blk_io_plug(dataplane->blk);
>+        blk_io_plug();
>     }
>     while (rc != rp) {
>         /* pull request from ring */
>@@ -577,12 +577,12 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
>
>         if (inflight_atstart > IO_PLUG_THRESHOLD &&
>             batched >= inflight_atstart) {
>-            blk_io_unplug(dataplane->blk);
>+            blk_io_unplug();
>         }
>         xen_block_do_aio(request);
>         if (inflight_atstart > IO_PLUG_THRESHOLD) {
>             if (batched >= inflight_atstart) {
>-                blk_io_plug(dataplane->blk);
>+                blk_io_plug();
>                 batched = 0;
>             } else {
>                 batched++;
>@@ -590,7 +590,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
>         }
>     }
>     if (inflight_atstart > IO_PLUG_THRESHOLD) {
>-        blk_io_unplug(dataplane->blk);
>+        blk_io_unplug();
>     }
>
>     return done_something;
>diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
>index 8f65ea4659..b4286424c1 100644
>--- a/hw/block/virtio-blk.c
>+++ b/hw/block/virtio-blk.c
>@@ -1134,7 +1134,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
>     bool suppress_notifications = virtio_queue_get_notification(vq);
>
>     aio_context_acquire(blk_get_aio_context(s->blk));
>-    blk_io_plug(s->blk);
>+    blk_io_plug();
>
>     do {
>         if (suppress_notifications) {
>@@ -1158,7 +1158,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
>         virtio_blk_submit_multireq(s, &mrb);
>     }
>
>-    blk_io_unplug(s->blk);
>+    blk_io_unplug();
>     aio_context_release(blk_get_aio_context(s->blk));
> }
>
>diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
>index 612c525d9d..534a44ee07 100644
>--- a/hw/scsi/virtio-scsi.c
>+++ b/hw/scsi/virtio-scsi.c
>@@ -799,7 +799,7 @@ static int virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
>         return -ENOBUFS;
>     }
>     scsi_req_ref(req->sreq);
>-    blk_io_plug(d->conf.blk);
>+    blk_io_plug();
>     object_unref(OBJECT(d));
>     return 0;
> }
>@@ -810,7 +810,7 @@ static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
>     if (scsi_req_enqueue(sreq)) {
>         scsi_req_continue(sreq);
>     }
>-    blk_io_unplug(sreq->dev->conf.blk);
>+    blk_io_unplug();
>     scsi_req_unref(sreq);
> }
>
>@@ -836,7 +836,7 @@ static void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
>                 while (!QTAILQ_EMPTY(&reqs)) {
>                     req = QTAILQ_FIRST(&reqs);
>                     QTAILQ_REMOVE(&reqs, req, next);
>-                    blk_io_unplug(req->sreq->dev->conf.blk);
>+                    blk_io_unplug();
>                     scsi_req_unref(req->sreq);
>                     virtqueue_detach_element(req->vq, &req->elem, 0);
>                     virtio_scsi_free_req(req);
>diff --git a/block/meson.build b/block/meson.build
>index 486dda8b85..fb4332bd66 100644
>--- a/block/meson.build
>+++ b/block/meson.build
>@@ -23,6 +23,7 @@ block_ss.add(files(
>   'mirror.c',
>   'nbd.c',
>   'null.c',
>+  'plug.c',
>   'qapi.c',
>   'qcow2-bitmap.c',
>   'qcow2-cache.c',
>-- 
>2.40.1
>



From xen-devel-bounces@lists.xenproject.org Fri May 19 08:46:49 2023
Date: Fri, 19 May 2023 10:46:25 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 2/6] block/nvme: convert to blk_io_plug_call() API
Message-ID: <utuievutol5cux2axpym7x3t4tueresl4tbqadizc36f5yblpi@ndpva7u6croa>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-3-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230517221022.325091-3-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline

On Wed, May 17, 2023 at 06:10:18PM -0400, Stefan Hajnoczi wrote:
>Stop using the .bdrv_co_io_plug() API because it is not multi-queue
>block layer friendly. Use the new blk_io_plug_call() API to batch I/O
>submission instead.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>---
> block/nvme.c | 44 ++++++++++++--------------------------------
> 1 file changed, 12 insertions(+), 32 deletions(-)
>
>diff --git a/block/nvme.c b/block/nvme.c
>index 5b744c2bda..100b38b592 100644
>--- a/block/nvme.c
>+++ b/block/nvme.c
>@@ -25,6 +25,7 @@
> #include "qemu/vfio-helpers.h"
> #include "block/block-io.h"
> #include "block/block_int.h"
>+#include "sysemu/block-backend.h"
> #include "sysemu/replay.h"
> #include "trace.h"
>
>@@ -119,7 +120,6 @@ struct BDRVNVMeState {
>     int blkshift;
>
>     uint64_t max_transfer;
>-    bool plugged;
>
>     bool supports_write_zeroes;
>     bool supports_discard;
>@@ -282,7 +282,7 @@ static void nvme_kick(NVMeQueuePair *q)
> {
>     BDRVNVMeState *s = q->s;
>
>-    if (s->plugged || !q->need_kick) {
>+    if (!q->need_kick) {
>         return;
>     }
>     trace_nvme_kick(s, q->index);
>@@ -387,10 +387,6 @@ static bool nvme_process_completion(NVMeQueuePair *q)
>     NvmeCqe *c;
>
>     trace_nvme_process_completion(s, q->index, q->inflight);
>-    if (s->plugged) {
>-        trace_nvme_process_completion_queue_plugged(s, q->index);

Should we also remove the now-unused trace event
nvme_process_completion_queue_plugged(void *s, unsigned q_index) "s %p q #%u"
from block/trace-events?

The rest LGTM!

Thanks,
Stefano
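[For context, the batching pattern this series converts to can be modelled in plain C. The following is a hypothetical, standalone sketch of what a deferred-call API in the spirit of blk_io_plug_call() does (queue each (fn, opaque) pair at most once while a plug section is open, flush on the outermost unplug, call immediately otherwise); it is not QEMU's actual implementation and all names here are illustrative.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of a deferred-call batching API: while at least one
 * plug section is open, each (fn, opaque) pair is queued at most once;
 * the outermost unplug flushes the queue in order. Outside a plug section
 * the callback runs immediately. */

typedef void UnplugFn(void *opaque);

enum { MAX_CALLS = 16 };

static struct {
    unsigned depth;              /* nesting level of open plug sections */
    size_t ncalls;               /* number of queued callbacks */
    UnplugFn *fns[MAX_CALLS];
    void *opaques[MAX_CALLS];
} plug_state;

static void io_plug(void)
{
    plug_state.depth++;
}

static void io_plug_call(UnplugFn *fn, void *opaque)
{
    if (plug_state.depth == 0) {
        fn(opaque);              /* not plugged: submit immediately */
        return;
    }
    for (size_t i = 0; i < plug_state.ncalls; i++) {
        if (plug_state.fns[i] == fn && plug_state.opaques[i] == opaque) {
            return;              /* already queued: coalesce the call */
        }
    }
    assert(plug_state.ncalls < MAX_CALLS);
    plug_state.fns[plug_state.ncalls] = fn;
    plug_state.opaques[plug_state.ncalls] = opaque;
    plug_state.ncalls++;
}

static void io_unplug(void)
{
    assert(plug_state.depth > 0);
    if (--plug_state.depth > 0) {
        return;                  /* inner unplug: keep batching */
    }
    for (size_t i = 0; i < plug_state.ncalls; i++) {
        plug_state.fns[i](plug_state.opaques[i]);
    }
    plug_state.ncalls = 0;
}

/* Tiny demo callback: counts how many times the queue was "kicked". */
static void demo_unplug_fn(void *opaque)
{
    ++*(int *)opaque;
}
```

A driver would call io_plug_call() after enqueuing each request; many requests submitted inside one plug section then produce a single kick of the hardware queue.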



From xen-devel-bounces@lists.xenproject.org Fri May 19 08:47:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 08:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536776.835458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzvlY-0001YN-0f; Fri, 19 May 2023 08:47:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536776.835458; Fri, 19 May 2023 08:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzvlX-0001YE-UG; Fri, 19 May 2023 08:47:11 +0000
Received: by outflank-mailman (input) for mailman id 536776;
 Fri, 19 May 2023 08:47:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qg83=BI=redhat.com=sgarzare@srs-se1.protection.inumbo.net>)
 id 1pzvlW-0000ix-IN
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 08:47:10 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bd053da0-f621-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 10:47:09 +0200 (CEST)
Received: from mail-ej1-f70.google.com (mail-ej1-f70.google.com
 [209.85.218.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-675-B_QYlGeiPvWhY3_A1-Ml5w-1; Fri, 19 May 2023 04:47:05 -0400
Received: by mail-ej1-f70.google.com with SMTP id
 a640c23a62f3a-94a355c9028so388782366b.3
 for <xen-devel@lists.xenproject.org>; Fri, 19 May 2023 01:47:05 -0700 (PDT)
Received: from sgarzare-redhat (c-115-213.cust-q.wadsl.it. [212.43.115.213])
 by smtp.gmail.com with ESMTPSA id
 v15-20020a170906380f00b0096165b2703asm1988689ejc.110.2023.05.19.01.47.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 19 May 2023 01:47:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd053da0-f621-11ed-b22d-6b7b168915f2
X-Received: by 2002:a17:907:7f1d:b0:88a:1ea9:a5ea with SMTP id qf29-20020a1709077f1d00b0088a1ea9a5eamr1032502ejc.65.1684486024362;
        Fri, 19 May 2023 01:47:04 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ6fY1cGrDo7A0FTNLr+0xW3nb/0HDfhIGyDb6BB5UI2BqcERvG/eCqnKv++kjeGQo4yNVO08Q==
X-Received: by 2002:a17:907:7f1d:b0:88a:1ea9:a5ea with SMTP id qf29-20020a1709077f1d00b0088a1ea9a5eamr1032472ejc.65.1684486023998;
        Fri, 19 May 2023 01:47:03 -0700 (PDT)
Date: Fri, 19 May 2023 10:47:00 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 3/6] block/blkio: convert to blk_io_plug_call() API
Message-ID: <wtyut5kd4v5vapon7fzpvi3kghvpplokcas5ovcwnjhiwyuccb@rm6eb6jjhhp5>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-4-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230517221022.325091-4-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Wed, May 17, 2023 at 06:10:19PM -0400, Stefan Hajnoczi wrote:
>Stop using the .bdrv_co_io_plug() API because it is not multi-queue
>block layer friendly. Use the new blk_io_plug_call() API to batch I/O
>submission instead.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>---
> block/blkio.c | 40 +++++++++++++++++++++-------------------
> 1 file changed, 21 insertions(+), 19 deletions(-)

With this patch, the build fails in several places; maybe the patch is
based on an older version of the tree:

../block/blkio.c:347:5: error: implicit declaration of function 
‘blk_io_plug_call’ [-Werror=implicit-function-declaration]
   347 |     blk_io_plug_call(blkio_unplug_fn, bs);

../block/blkio.c:348:22: error: passing argument 1 of ‘blk_io_plug_call’ 
from incompatible pointer type [-Werror=incompatible-pointer-types]
   348 |     blk_io_plug_call(blkio_unplug_fn, bs);

Thanks,
Stefano

>
>diff --git a/block/blkio.c b/block/blkio.c
>index 0cdc99a729..f2a1dc1fb2 100644
>--- a/block/blkio.c
>+++ b/block/blkio.c
>@@ -325,16 +325,28 @@ static void blkio_detach_aio_context(BlockDriverState *bs)
>                        false, NULL, NULL, NULL, NULL, NULL);
> }
>
>-/* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
>-static void blkio_submit_io(BlockDriverState *bs)
>+/*
>+ * Called by blk_io_unplug() or immediately if not plugged. Called without
>+ * blkio_lock.
>+ */
>+static void blkio_unplug_fn(BlockDriverState *bs)
> {
>-    if (qatomic_read(&bs->io_plugged) == 0) {
>-        BDRVBlkioState *s = bs->opaque;
>+    BDRVBlkioState *s = bs->opaque;
>
>+    WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
>         blkioq_do_io(s->blkioq, NULL, 0, 0, NULL);
>     }
> }
>
>+/*
>+ * Schedule I/O submission after enqueuing a new request. Called without
>+ * blkio_lock.
>+ */
>+static void blkio_submit_io(BlockDriverState *bs)
>+{
>+    blk_io_plug_call(blkio_unplug_fn, bs);
>+}
>+
> static int coroutine_fn
> blkio_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
> {
>@@ -345,9 +357,9 @@ blkio_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
>
>     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
>         blkioq_discard(s->blkioq, offset, bytes, &cod, 0);
>-        blkio_submit_io(bs);
>     }
>
>+    blkio_submit_io(bs);
>     qemu_coroutine_yield();
>     return cod.ret;
> }
>@@ -378,9 +390,9 @@ blkio_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
>
>     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
>         blkioq_readv(s->blkioq, offset, iov, iovcnt, &cod, 0);
>-        blkio_submit_io(bs);
>     }
>
>+    blkio_submit_io(bs);
>     qemu_coroutine_yield();
>
>     if (use_bounce_buffer) {
>@@ -423,9 +435,9 @@ static int coroutine_fn blkio_co_pwritev(BlockDriverState *bs, int64_t offset,
>
>     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
>         blkioq_writev(s->blkioq, offset, iov, iovcnt, &cod, blkio_flags);
>-        blkio_submit_io(bs);
>     }
>
>+    blkio_submit_io(bs);
>     qemu_coroutine_yield();
>
>     if (use_bounce_buffer) {
>@@ -444,9 +456,9 @@ static int coroutine_fn blkio_co_flush(BlockDriverState *bs)
>
>     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
>         blkioq_flush(s->blkioq, &cod, 0);
>-        blkio_submit_io(bs);
>     }
>
>+    blkio_submit_io(bs);
>     qemu_coroutine_yield();
>     return cod.ret;
> }
>@@ -472,22 +484,13 @@ static int coroutine_fn blkio_co_pwrite_zeroes(BlockDriverState *bs,
>
>     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
>         blkioq_write_zeroes(s->blkioq, offset, bytes, &cod, blkio_flags);
>-        blkio_submit_io(bs);
>     }
>
>+    blkio_submit_io(bs);
>     qemu_coroutine_yield();
>     return cod.ret;
> }
>
>-static void coroutine_fn blkio_co_io_unplug(BlockDriverState *bs)
>-{
>-    BDRVBlkioState *s = bs->opaque;
>-
>-    WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
>-        blkio_submit_io(bs);
>-    }
>-}
>-
> typedef enum {
>     BMRR_OK,
>     BMRR_SKIP,
>@@ -1009,7 +1012,6 @@ static void blkio_refresh_limits(BlockDriverState *bs, Error **errp)
>         .bdrv_co_pwritev         = blkio_co_pwritev, \
>         .bdrv_co_flush_to_disk   = blkio_co_flush, \
>         .bdrv_co_pwrite_zeroes   = blkio_co_pwrite_zeroes, \
>-        .bdrv_co_io_unplug       = blkio_co_io_unplug, \
>         .bdrv_refresh_limits     = blkio_refresh_limits, \
>         .bdrv_register_buf       = blkio_register_buf, \
>         .bdrv_unregister_buf     = blkio_unregister_buf, \
>-- 
>2.40.1
>



From xen-devel-bounces@lists.xenproject.org Fri May 19 08:54:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 08:54:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536781.835468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzvsp-0003AH-Ol; Fri, 19 May 2023 08:54:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536781.835468; Fri, 19 May 2023 08:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzvsp-0003AA-Lq; Fri, 19 May 2023 08:54:43 +0000
Received: by outflank-mailman (input) for mailman id 536781;
 Fri, 19 May 2023 08:54:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9j/4=BI=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pzvsn-0003A4-Lg
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 08:54:41 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20614.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::614])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c8e3e056-f622-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 10:54:39 +0200 (CEST)
Received: from BN0PR02CA0023.namprd02.prod.outlook.com (2603:10b6:408:e4::28)
 by PH8PR12MB7133.namprd12.prod.outlook.com (2603:10b6:510:22e::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 08:54:35 +0000
Received: from BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:e4:cafe::f4) by BN0PR02CA0023.outlook.office365.com
 (2603:10b6:408:e4::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 08:54:34 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT010.mail.protection.outlook.com (10.13.177.53) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Fri, 19 May 2023 08:54:34 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 03:54:34 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 01:54:27 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 19 May 2023 03:54:25 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8e3e056-f622-11ed-b22d-6b7b168915f2
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <66f04988-9a4b-39c0-fb17-c508b98e3bdf@amd.com>
Date: Fri, 19 May 2023 10:54:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN v7 07/11] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
 <20230518143920.43186-8-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230518143920.43186-8-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT010:EE_|PH8PR12MB7133:EE_
X-MS-Office365-Filtering-Correlation-Id: d6895996-3463-4658-b521-08db5846ab0a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 08:54:34.3149
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d6895996-3463-4658-b521-08db5846ab0a
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT010.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB7133

Hi Ayan,

On 18/05/2023 16:39, Ayan Kumar Halder wrote:
> Restructure the code so that one can use pa_range_info[] table for both
> ARM_32 as well as ARM_64.
> 
> Also, removed the hardcoding for P2M_ROOT_ORDER and P2M_ROOT_LEVEL as
> p2m_root_order can be obtained from the pa_range_info[].root_order and
> p2m_root_level can be obtained from pa_range_info[].sl0.
> 
> Refer ARM DDI 0406C.d ID040418, B3-1345,
> "Use of concatenated first-level translation tables
> 
> ...However, a 40-bit input address range with a translation granularity of 4KB
> requires a total of 28 bits of address resolution. Therefore, a stage 2
> translation that supports a 40-bit input address range requires two concatenated
> first-level translation tables,..."
> 
> Thus, root-order is 1 for 40-bit IPA on ARM_32.
> 
> Refer ARM DDI 0406C.d ID040418, B3-1348,
> 
> "Determining the required first lookup level for stage 2 translations
> 
> For a stage 2 translation, the output address range from the stage 1
> translations determines the required input address range for the stage 2
> translation. The permitted values of VTCR.SL0 are:
> 
> 0b00 Stage 2 translation lookup must start at the second level.
> 0b01 Stage 2 translation lookup must start at the first level.
> 
> VTCR.T0SZ must indicate the required input address range. The size of the input
> address region is 2^(32-T0SZ) bytes."
> 
> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of input
> address region is 2^40 bytes.
> 
> Thus, pa_range_info[].t0sz = (1 (VTCR.S) << 4) | 8 (VTCR.T0SZ), i.e. 11000b, which is 24.
> 
> VTCR.T0SZ, is bits [5:0] for ARM_64.
> VTCR.T0SZ is bits [3:0] and S(sign extension), bit[4] for ARM_32.
> 
> Thus, the VTCR.T0SZ bits are interpreted differently on each architecture.
> For this, we have used a union.
> 
> pa_range_info[] is indexed by ID_AA64MMFR0_EL1.PARange which is present in Arm64
> only. This is the reason we do not specify the indices for ARM_32. Also, we
> duplicated the entry "{ 40,      24/*24*/,  1,          1 }" between ARM_64 and
> ARM_32. This is done to avoid introducing extra #if-defs.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> ---
> Changes from -
> 
> v3 - 1. New patch introduced in v4.
> 2. Restructure the code such that pa_range_info[] is used both by ARM_32 as
> well as ARM_64.
> 
> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and P2M_ROOT_LEVEL.
> The reason being root_order will not be always 1 (See the next patch).
> 2. Updated the commit message to explain t0sz, sl0 and root_order values for
> 32-bit IPA on Arm32.
> 3. Some sanity fixes.
> 
> v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range. ie
> when PARange is 0, the PA size is 32, 1 -> 36 and so on. So pa_range_info[] has
> been updated accordingly.
> For ARM_32 pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do not support
> 32-bit, 36-bit physical address range yet.
> 
> v6 - 1. Added pa_range_info[] entries for ARM_32 without indices. Some entry
> may be duplicated between ARM_64 and ARM_32.
> 2. Recalculate p2m_ipa_bits for ARM_32 from T0SZ (similar to ARM_64).
> 3. Introduced an union to reinterpret T0SZ bits between ARM_32 and ARM_64.
> 
>  xen/arch/arm/include/asm/p2m.h |  6 ------
>  xen/arch/arm/p2m.c             | 37 +++++++++++++++++++++++-----------
>  2 files changed, 25 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
> index f67e9ddc72..940495d42b 100644
> --- a/xen/arch/arm/include/asm/p2m.h
> +++ b/xen/arch/arm/include/asm/p2m.h
> @@ -14,16 +14,10 @@
>  /* Holds the bit size of IPAs in p2m tables.  */
>  extern unsigned int p2m_ipa_bits;
>  
> -#ifdef CONFIG_ARM_64
>  extern unsigned int p2m_root_order;
>  extern unsigned int p2m_root_level;
>  #define P2M_ROOT_ORDER    p2m_root_order
>  #define P2M_ROOT_LEVEL p2m_root_level
> -#else
> -/* First level P2M is always 2 consecutive pages */
> -#define P2M_ROOT_ORDER    1
> -#define P2M_ROOT_LEVEL 1
> -#endif
>  
>  struct domain;
>  
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 418997843d..755cb86c5b 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -19,9 +19,9 @@
>  
>  #define INVALID_VMID 0 /* VMID 0 is reserved */
>  
> -#ifdef CONFIG_ARM_64
>  unsigned int __read_mostly p2m_root_order;
>  unsigned int __read_mostly p2m_root_level;
> +#ifdef CONFIG_ARM_64
>  static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>  /* VMID is by default 8 bit width on AArch64 */
>  #define MAX_VMID       max_vmid
> @@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
>      /* Setup Stage 2 address translation */
>      register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>  
> -#ifdef CONFIG_ARM_32
> -    if ( p2m_ipa_bits < 40 )
> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
> -              p2m_ipa_bits);
> -
> -    printk("P2M: 40-bit IPA\n");
> -    p2m_ipa_bits = 40;
> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
> -#else /* CONFIG_ARM_64 */
>      static const struct {
>          unsigned int pabits; /* Physical Address Size */
>          unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
> @@ -2265,6 +2255,7 @@ void __init setup_virt_paging(void)
>      } pa_range_info[] __initconst = {
>          /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
>          /*      PA size, t0sz(min), root-order, sl0(max) */
> +#ifdef CONFIG_ARM_64
>          [0] = { 32,      32/*32*/,  0,          1 },
>          [1] = { 36,      28/*28*/,  0,          1 },
>          [2] = { 40,      24/*24*/,  1,          1 },
> @@ -2273,11 +2264,22 @@ void __init setup_virt_paging(void)
>          [5] = { 48,      16/*16*/,  0,          2 },
>          [6] = { 52,      12/*12*/,  4,          2 },
>          [7] = { 0 }  /* Invalid */
> +#else
> +        { 40,      24/*24*/,  1,          1 },
> +        { 0 },  /* Invalid */
Do we really need this invalid entry?

> +#endif
>      };
>  
>      unsigned int i;
>      unsigned int pa_range = 0x10; /* Larger than any possible value */
>  
> +    /* Typecast pa_range_info[].t0sz into ARM_32 and ARM_64 bit variants. */
This would want some explanation in the code.

> +    union {
> +        signed int t0sz_32:5;
> +        unsigned int t0sz_64:6;
> +    } t0sz;
> +
> +#ifdef CONFIG_ARM_64
>      /*
>       * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
>       * with IPA bits == PA bits, compare against "pabits".
> @@ -2291,6 +2293,7 @@ void __init setup_virt_paging(void)
>       */
>      if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
>          max_vmid = MAX_VMID_16_BIT;
> +#endif
>  
>      /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits". */
>      for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
> @@ -2306,24 +2309,34 @@ void __init setup_virt_paging(void)
>      if ( pa_range >= ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_range].pabits )
>          panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
>  
> +#ifdef CONFIG_ARM_64
>      val |= VTCR_PS(pa_range);
>      val |= VTCR_TG0_4K;
>  
>      /* Set the VS bit only if 16 bit VMID is supported. */
>      if ( MAX_VMID == MAX_VMID_16_BIT )
>          val |= VTCR_VS;
> +#endif
> +
>      val |= VTCR_SL0(pa_range_info[pa_range].sl0);
>      val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
>  
>      p2m_root_order = pa_range_info[pa_range].root_order;
>      p2m_root_level = 2 - pa_range_info[pa_range].sl0;
> +
> +#ifdef CONFIG_ARM_64
> +    t0sz.t0sz_64 = pa_range_info[pa_range].t0sz;
>      p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
This should be:
p2m_ipa_bits = 64 - t0sz.t0sz_64;

Another alternative would be to use an anonymous union + struct inside pa_range_info, e.g.:
        union {
            unsigned int t0sz;
            struct {
                signed int t0sz_32:5;
            };
        };
so, if t0sz stores 24, t0sz_32 would automatically store -8.
This could simplify the code later on, as you could just do:

#ifdef CONFIG_ARM_64
    p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
#else
    p2m_ipa_bits = 32 - pa_range_info[pa_range].t0sz_32;
#endif

However, I think it would require placing extra braces around the initializers, i.e.:
[0] = { 32,      {32/*32*/},  0,          1 },
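[As a quick sanity check of the sign-extension behaviour discussed above, here is a standalone sketch of the proposed union (illustrative only, not Xen code; it assumes bit-fields are allocated from the least significant bit, as on the common Arm and x86 ABIs):]

```c
#include <assert.h>

/* The Arm32 VTCR.T0SZ field is 4 bits plus an S sign bit, i.e. a 5-bit
 * signed quantity, while Arm64 uses a 6-bit unsigned T0SZ. Reading the
 * unsigned encoding 24 (0b11000) back through a signed 5-bit bit-field
 * yields -8, so 32 - (-8) = 40 recovers the 40-bit IPA size on Arm32
 * while 64 - 24 = 40 does the same on Arm64. */
union t0sz_bits {
    unsigned int t0sz_64:6;  /* Arm64 view: plain unsigned T0SZ */
    signed int   t0sz_32:5;  /* Arm32 view: T0SZ[3:0] plus S sign bit */
};
```

With this, both 64 - t0sz_64 and 32 - t0sz_32 evaluate to 40 for the table entry { 40, 24, 1, 1 } shared between ARM_64 and ARM_32.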

~Michal


From xen-devel-bounces@lists.xenproject.org Fri May 19 08:55:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 08:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536787.835478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzvtO-0003jw-4s; Fri, 19 May 2023 08:55:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536787.835478; Fri, 19 May 2023 08:55:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzvtO-0003jp-1B; Fri, 19 May 2023 08:55:18 +0000
Received: by outflank-mailman (input) for mailman id 536787;
 Fri, 19 May 2023 08:55:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kTyP=BI=citrix.com=prvs=49624ea57=roger.pau@srs-se1.protection.inumbo.net>)
 id 1pzvtM-0003dr-N5
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 08:55:17 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dd8bf4b8-f622-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 10:55:14 +0200 (CEST)
Received: from mail-mw2nam12lp2040.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 May 2023 04:55:11 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB6804.namprd03.prod.outlook.com (2603:10b6:510:11b::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.29; Fri, 19 May
 2023 08:55:08 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 08:55:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd8bf4b8-f622-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684486514;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=aFHWZCpozCx9cXUTBKdiNUYWOvGEG+rsbkG6J+0ya2M=;
  b=QqsR7t1ilD+GFQ05hcvy17RQkF+shU8QvIh5EMUBrDaTihMSpHGVrZPJ
   eKn/W3wXRcjHtHlqrZMPFZUmu/h7qtxQjqsP2GlvUgOvvBZkQpJLL35gr
   lgXHhV1S3Wz8AXY6pZemrj/TjH7eDO6CsUisF6ZaBrQa2rFKd2FifegUX
   0=;
X-IronPort-RemoteIP: 104.47.66.40
X-IronPort-MID: 110029355
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kG+dnqewUEZqYTTtkXl7IzyQLjL2twTudOd0o4KRsPn+ZR2RN8R4MOPne6HZnx1Ux5JRvyLxx66rhjJTh2+utSJwkpd9Z7K4j1JQ87+UwUApjthFbZtAPNlNfje2z5DX+pcmrojAxA2m6x78jJeWwpgMyr/jtOmOKH6YG36SWis+oRjVIwBto2muEz4aL3Fey6V5Nc1ojkwn4LR4jqLWCtt4EY1jzf1kF6/mriSyapzZvojFpXiDVaBdh6MHhPM8+3uZRSNHgsDlKNpgYEi3/piwIudtTQBCQMlIiI2295KcWBTckngAwudktOFcDaSbiD4w3yOxY6/kG7IRksIf9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TkaIWG97UiepBuE5L4ND/Gu5vs12Grc760SxaRnzto0=;
 b=Thos1XbuCJVwRXN4oz3pzL+8oOIm/STYAcPLtWiOu1FK9AqzhO9dC5YzOoGlG74xQAxDENrGQ2Q5+K6AIfKAoJ0G9FYf3CCIeaGt0mO5UtHzoAzKDFilWNAw6TkWrJVI+Rw8r4HWvdgctmJ+OkGs2mBzZk5h9R4ZlCtFwOR+QQ2/R66go+H1kYcMrBUIhSaDYaAsXotwA/AeP/6qpcJDtW1927vw8MbYPULOx6ay7LjjmUVYRcTzrjaLlDhL/NSb8/kILkiv/YL5bwiFj0fkPj/eK8OofeHY47OlRzJXalnluSVgtYQJTWZevLKQyr+nschFSm/p4vObKQbz8tOLZA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TkaIWG97UiepBuE5L4ND/Gu5vs12Grc760SxaRnzto0=;
 b=q33styVAfsqxj19/zWYSHPcKAGCDj/xYBD0MZBwiak+qm5v4eeDd3mgVm2yHdWO+Rb+72xa6KaqqeYH6HLJfNNLL7s33+BNiByrexZeyAP6NEdAiNQqEkAQaY0NspRG32Q21PRpI9S3NxvsKHHUZKReytn5rr/xxjPhCFTB+Jj8=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Fri, 19 May 2023 10:55:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
	xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
Message-ID: <ZGc5ZfebSL21W5sY@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <b411f7aa-7fd2-7b1c-1bcd-35b989f528b6@suse.com>
 <ZGcnh/DZvFAIBR/n@Air-de-Roger>
 <7a00019e-da64-ad0c-d107-d002cf6bce85@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7a00019e-da64-ad0c-d107-d002cf6bce85@suse.com>
X-ClientProxiedBy: LO6P265CA0025.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ff::18) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH0PR03MB6804:EE_
X-MS-Office365-Filtering-Correlation-Id: 8cd9a82b-810e-4180-6a78-08db5846bf1e
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8cd9a82b-810e-4180-6a78-08db5846bf1e
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 08:55:08.3681
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: GjoxGTU1fS5ZAePudIEstY2fZG2an0dKzHXrKzL52hlFUhxnL6nY7bi17+pI96YtZH0aJcfqiFdTgZP6/3RqWQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6804

On Fri, May 19, 2023 at 10:20:24AM +0200, Jan Beulich wrote:
> On 19.05.2023 09:38, Roger Pau Monné wrote:
> > On Fri, May 19, 2023 at 09:22:58AM +0200, Jan Beulich wrote:
> >> On 18.05.2023 12:34, Roger Pau Monné wrote:
> >>> On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> >>>> I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
> >>>> test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
> >>>> Zen3 system and we already have a few successful tests with it, see
> >>>> automation/gitlab-ci/test.yaml.
> >>>>
> >>>> We managed to narrow down the issue to a console problem. We are
> >>>> currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
> >>>> options, it works with PV Dom0 and it is using a PCI UART card.
> >>>>
> >>>> In the case of Dom0 PVH:
> >>>> - it works without console=com1
> >>>> - it works with console=com1 and with the patch appended below
> >>>> - it doesn't work otherwise and crashes with this error:
> >>>> https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> >>>
> >>> Jan also noticed this, and we have a ticket for it in gitlab:
> >>>
> >>> https://gitlab.com/xen-project/xen/-/issues/85
> >>>
> >>>> What is the right way to fix it?
> >>>
> >>> I think the right fix is to simply avoid hidden devices from being
> >>> handled by vPCI; in any case such devices won't work properly with
> >>> vPCI because they are in use by Xen, and so any information cached
> >>> by vPCI is likely to become stale, as Xen can modify the device
> >>> without vPCI noticing.
> >>>
> >>> I think the chunk below should help.  It's not clear to me however how
> >>> hidden devices should be handled, is the intention to completely hide
> >>> such devices from dom0?
> >>
> >> No, Dom0 should still be able to see them in a (mostly) r/o fashion.
> >> Hence my earlier RFC patch making vPCI actually deal with them.
> > 
> > What's the difference between a hidden device and one that's marked RO
> > then?
> 
> pci_hide_device() simply makes the device unavailable for assignment
> (by having it owned by DomXEN). pci_ro_device() additionally adds the
> device to the segment's ro_map, thus protecting its config space from
> Dom0 writes.
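
The distinction can be sketched roughly as follows (hypothetical, simplified types and helper names, not the actual Xen structures):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified model of the two levels described above:
 * these are NOT the real Xen data structures. */
struct domain { int id; };
static struct domain dom_xen = { .id = -1 };

struct pdev {
    struct domain *owner;   /* which domain the device is assigned to */
    bool ro;                /* present in the segment's ro_map?       */
};

/* pci_hide_device(): unavailable for assignment (owned by DomXEN),
 * but Dom0 may still write the config space. */
static void hide_device(struct pdev *d)
{
    d->owner = &dom_xen;
}

/* pci_ro_device(): hidden *and* additionally added to the ro_map,
 * protecting the config space from Dom0 writes. */
static void ro_device(struct pdev *d)
{
    hide_device(d);
    d->ro = true;
}

static bool assignable(const struct pdev *d) { return d->owner != &dom_xen; }
static bool dom0_may_write_cfg(const struct pdev *d) { return !d->ro; }
```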

But above you mention that hidden devices should be visible to dom0
"in a (mostly) r/o fashion".

I understand that for RO devices the whole config space of the device
is RO, in which case we should simply avoid using vPCI for them.  We
however likely want to have the BARs of such devices permanently
mapped into the dom0 physmap (if memory decoding is enabled).
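
A minimal sketch of the memory-decoding condition mentioned above (the register offset and bit value are standard PCI; the helper name is made up):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Standard PCI config-space values: the command register lives at
 * offset 0x04, and bit 1 enables memory decoding. */
#define PCI_COMMAND        0x04
#define PCI_COMMAND_MEMORY 0x02

/* Hypothetical helper: BARs should only be (permanently) mapped into
 * the dom0 physmap while memory decoding is enabled in the command
 * register, as the BAR contents are meaningless otherwise. */
static bool bars_mappable(uint16_t cmd)
{
    return cmd & PCI_COMMAND_MEMORY;
}
```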

But for hidden devices it's not clear to me what needs to be RO, do we
also need to protect the config space from dom0 accesses?

It might be complicated for vPCI to deal with devices that have MSI-X
interrupts in use by Xen, for example.  So I would suggest that, at
least for the time being, we don't handle hidden devices with vPCI.

We might want to do something similar to RO devices and prevent write
access to the config space for those also.

> And just to clarify - a PCI dev containing a UART isn't "hidden" in the
> above sense, but "r/o" (by virtue of calling pci_ro_device() on it).
> But the issue reported long ago (and now re-discovered by Stefano) is
> common to "r/o" and "hidden" devices (it's the "hidden" aspect that
> counts here, which is common for both).

Indeed, the issue is with any device assigned to dom_xen (or, more
generally, any device not assigned to the hardware domain but still
accessible by the hardware domain through vPCI).
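
The suggested avoidance could be sketched as follows (hypothetical helper and field names, not the real vPCI code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the real Xen types and globals. */
struct domain { int id; };
static struct domain dom_xen = { .id = -1 };
static struct domain dom0    = { .id = 0 };

struct pdev { struct domain *owner; bool has_vpci; };

/* Sketch of the proposed check: devices owned by dom_xen (hidden or
 * r/o) are skipped entirely, so vPCI never caches state that Xen may
 * change behind its back. */
static bool vpci_init(struct pdev *d)
{
    if (d->owner == &dom_xen)
        return false;           /* in use by Xen: no vPCI handling */
    d->has_vpci = true;
    return true;
}
```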


From xen-devel-bounces@lists.xenproject.org Fri May 19 09:02:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:02:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536791.835487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzw02-0005I8-Rb; Fri, 19 May 2023 09:02:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536791.835487; Fri, 19 May 2023 09:02:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzw02-0005I1-Ox; Fri, 19 May 2023 09:02:10 +0000
Received: by outflank-mailman (input) for mailman id 536791;
 Fri, 19 May 2023 09:02:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9j/4=BI=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1pzw01-0005Hv-6M
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:02:09 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d3dcc792-f623-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 11:02:05 +0200 (CEST)
Received: from BN9PR03CA0361.namprd03.prod.outlook.com (2603:10b6:408:f7::6)
 by CH3PR12MB8331.namprd12.prod.outlook.com (2603:10b6:610:12f::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 09:02:02 +0000
Received: from BN8NAM11FT025.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:f7:cafe::23) by BN9PR03CA0361.outlook.office365.com
 (2603:10b6:408:f7::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 09:02:02 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT025.mail.protection.outlook.com (10.13.177.136) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Fri, 19 May 2023 09:02:02 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 04:02:02 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 04:02:01 -0500
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 19 May 2023 04:01:59 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3dcc792-f623-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q/sOPuPCfqeA5d5eK0IYYiJC0huvLBoKzwOcvWRJ0QDZswhqwx9HQNS4BfwSNbLA5zH6DM4pNMba5z3BhkABhmrVDo5z5cwRdOhRh3YasGNjI3kLySET6pmVKx0KicUIxA+60HPopeYXONyn80X5XeplEWEa4PYLcG1VPvEZQBIYu/d4mwW+7Pa7CCiA5GfIT9Hkhbm0ugiR7FlUdxfWSlNSadHOBRw6mHipu8DN3o5R4QD0n2cAQXemgquGX/P6ckZ1R0nzGGOLwY9gBU0AQ3SRu8Ib8QJooVMAITUQW2UURIl3EMoGKIhpT7X4y5NXra5OvNIWI7J21w8fn7D7VQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KOUjcdEH9fDlsrAVmFsfUK4vNJ6hQlSxplVv/eOZy/8=;
 b=i0xsMhWguTf6qqUVmWLc6Q5y3ehY7XgHa5TVN7teFofhRQMtBYB5L9fHjEKhjYOVnvLGrzV0iZkdmal9mv5UmJ5W5SwrXJ3vnNBOyLkn8MR9hwg8zz4oZ1mdScwN2dUK0VG8BoGFJCxavcpKocSTjMmzZKn9uxTbjZ51kLNEyA15W8KWq5o2S1qp8a4Ek2ZmwJQS4ciceM6c/DARmtieUilIRVrZDFqdYsfNrrv4nw/Jhnt8Sd2AP40wOWL9FOL2ImxofrB65vNC85EDJx2LYBS+sSgJxnxPP5lep8OkO0a5mwi3MtDGI9iYIBfm3+SWJbqNsE5/KIqUc8XRED+dwQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KOUjcdEH9fDlsrAVmFsfUK4vNJ6hQlSxplVv/eOZy/8=;
 b=Hm/6YsIYYGN1wTlNDjjIi2aPDfTelrVrutbd3X3N1UCCFwDPoA+UfSVW2orF4nkOOxpBt9iTExZdv/ow8iQG/b9H6/lyunp/ccoA3/Hymt8namqttDqx0xLKnZ9PdMl8NNnXyghn6fJBUEGfJmNC3o4ToeE6b/GElJusIZVh6i8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <a75e3528-8583-d8d3-6946-91f171b62511@amd.com>
Date: Fri, 19 May 2023 11:01:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN v7 08/11] xen/arm: Introduce choice to enable 64/32 bit
 physical addressing
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, <stefano.stabellini@amd.com>, <julien@xen.org>,
	<Volodymyr_Babchuk@epam.com>, <bertrand.marquis@arm.com>,
	<andrew.cooper3@citrix.com>, <george.dunlap@citrix.com>, <jbeulich@suse.com>,
	<wl@xen.org>, <rahul.singh@arm.com>
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
 <20230518143920.43186-9-ayan.kumar.halder@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230518143920.43186-9-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT025:EE_|CH3PR12MB8331:EE_
X-MS-Office365-Filtering-Correlation-Id: 509c234e-daa7-4e86-b9a3-08db5847b629
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 09:02:02.4698
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 509c234e-daa7-4e86-b9a3-08db5847b629
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT025.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8331

Hi Ayan,

On 18/05/2023 16:39, Ayan Kumar Halder wrote:
> Some Arm based hardware platforms which do not support LPAE
> (e.g. Cortex-R52) use 32-bit physical addresses.
> Also, users may choose to use 32 bits to represent physical addresses
> as an optimization.
> 
> To support the above use cases, we have introduced arch independent
> config to choose if the physical address can be represented using
> 32 bits (PHYS_ADDR_T_32) or 64 bits (!PHYS_ADDR_T_32).
> For now only ARM_32 provides support to enable 32 bit physical
> addressing.
> 
> When PHYS_ADDR_T_32 is defined, PADDR_BITS is set to 32. Note that we
> use "unsigned long" (not "uint32_t") to denote the datatype of physical
> address. This is done to avoid using a cast each time PAGE_* macros are
> used on paddr_t. On a 32-bit architecture, "unsigned long" is 32 bits
> wide. Thus, it can be used to denote a physical address.
I think the only issue is when printing, but you do not mention it as the root cause.
Also, FWIR it all comes down to PAGE_SIZE being defined with an 'L' (long) suffix.

I leave it up to you to decide if this is sufficient explanation.
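
A minimal illustration of the type interaction being discussed (the macro shapes mirror Xen's L-suffixed PAGE_SIZE, but this is only a sketch, not the in-tree definitions):

```c
#include <assert.h>

/* PAGE_SIZE carries an 'L' (long) suffix, so PAGE_* arithmetic is
 * performed in (unsigned) long. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1L << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* With paddr_t as unsigned long, masking needs no casts; on a 32-bit
 * build unsigned long is 32 bits wide, matching a 32-bit physical
 * address space, whereas a uint32_t paddr_t would need a cast at each
 * PAGE_* use. */
typedef unsigned long paddr_t;

static paddr_t page_align(paddr_t addr)
{
    return addr & PAGE_MASK;
}
```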

Small notice:
Generally each commit in a series should build successfully. This is not the case starting from this patch
if a user (or randconfig) selects 32-bit PA. But I know that other patches depend on PHYS_ADDR_T_32, so
I guess it would be difficult.

> 
> When PHYS_ADDR_T_32 is not defined for ARM_32, PADDR_BITS is set to 40.
> For ARM_64, PADDR_BITS is set to 48.
> The last two are the same as the current configuration used in Xen.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>

Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal


From xen-devel-bounces@lists.xenproject.org Fri May 19 09:03:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:03:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536795.835497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzw17-0005oN-4M; Fri, 19 May 2023 09:03:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536795.835497; Fri, 19 May 2023 09:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzw17-0005oG-1g; Fri, 19 May 2023 09:03:17 +0000
Received: by outflank-mailman (input) for mailman id 536795;
 Fri, 19 May 2023 09:03:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzw15-0005lA-Nd
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:03:15 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20630.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc6a6a18-f623-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 11:03:14 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8766.eurprd04.prod.outlook.com (2603:10a6:102:20d::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 09:03:11 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 09:03:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc6a6a18-f623-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OI66tjwjUHfH/lR+tUDi+bKyscnhhxpy6Cnr0NkEi6ZNcrfDPDloZoHadVmnEueMoea9hPTvoWY9omRGotXWIUe/yvTDh1wWgc2WriiE5Jg+4g1HZX5IECp/vJwhBXFyYlDkdHckhcTDfFcAUgKt8yCUUn+hEQpdJFtXFwcfyY4SWH26auyvpoupEP4OToxJyerLpmVaTevmfa0vr947+vsHpRMeNanemAm3k64kOCG41djBQGwqHwdUeLllXB1+rZTtKaYQzwQ8UmeADGJx2shADF0P/2UlObIx+D5pcLORza2xGovpnf3lDvE3B7+3F9BNdclUfWf0ciZmCTHzwA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=6CO/tG9xjduflqkPIA/YpkDUJU43YBT8Ah8eOFkSyE0=;
 b=Z01Tptgqghby6b0Y5bfNTrSIiZzKEp+JnnN5WPwu/YnAfF5x1MWbwHJ1eIoJYbel9E/kylRJMP3tX8tJ2tiNfjAqkhpExR2vNWJ2YfNC2xbxtYMxWMhZ9wDfo/t+Kz68hW8Gr6mduRxwKlqG8BdvlLVFQY94wQGWnSg8EYc+k0Llkq8Iyy7ntEq+r7Xc5lapoT3ek3EFTpSGKTtVW6lgc1Dv5IdkkWfCmL8hNV8/340RX0dfL47pUES0tvhO/h6JoAW5rfAdF7T1jDK4+hfooPrtpDXnkb32a62UkodqxI7z1n54lx5J3ko2X7Ba3ynw4pcpb6Q52jl1psjTOJp6rA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6CO/tG9xjduflqkPIA/YpkDUJU43YBT8Ah8eOFkSyE0=;
 b=ZMFqq2+01Hfio5U/mGjgUlaAUIC9OdSSI2G4CnjEtFmxGDpcHc/lm4/z4vtM7pA1ifnXvCmJ3Ng6v5I6R8sx2/KvkMUYwxg7g2pZSN3LopX2+ARtkOKE94h+gQaL29zQdmCVHqG7APygyjKhaTMqshJrM3G3qtUmvAB0MCAjDokYts1PIM6n0By3Etv6puRglwTkDmAzqDnxxN4oPwH/Eh5fvHZ4wDXcFfeBf5xoBsOsMU27MgE47IHpkYH7djlYdiB8OPNmtQ0URGpqNHLIF4MI0j/rSY1K9AhqBvdnxujw8IDIry1eDJPvtYIuOKGByNi4f4KVm8+b3dyDkjtSBg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <81eae127-b3d7-b1ea-901d-d139ebf5aa21@suse.com>
Date: Fri, 19 May 2023 11:03:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] pci: fix pci_get_pdev() to always account for the segment
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230518105738.16695-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230518105738.16695-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0257.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8766:EE_
X-MS-Office365-Filtering-Correlation-Id: 403ca812-e1d4-495e-caef-08db5847de9a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 403ca812-e1d4-495e-caef-08db5847de9a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 09:03:10.7424
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8vjQQcTpB2pzlzQ+wfq9ajG7OmjVX7Lcr9v8SVR7rqb4ZcV49TIsbZ3Tw/PXuXmgufc4xm3lT3nk1vPVD8bvJQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8766

On 18.05.2023 12:57, Roger Pau Monne wrote:
> When a domain parameter is provided to pci_get_pdev(), the search
> function would match against the bdf only, without taking the segment
> into account.
> 
> Fix this by also accounting for the passed segment.
> 
> Fixes: 8cf6e0738906 ('PCI: simplify (and thus correct) pci_get_pdev{,_by_domain}()')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> ---
> There's no mention in 8cf6e0738906 that avoiding the segment check is
> fine, and hence I assume it's an oversight, as it should be possible
> to have devices from multiple segments assigned to the same domain.

I guess this was a lack of editing after copy-and-paste from the
loops iterating over pseg->alldevs_list. Thanks much for spotting!
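
The bug class is easy to see in a small Python model (the real code is C inside Xen; the names and data structures below are hypothetical stand-ins for illustration, not the actual implementation):

```python
# Model of a domain's device list: devices from two segments can share a bdf.
domain_devs = [{"seg": 1, "bdf": 0x0800},
               {"seg": 0, "bdf": 0x0800}]

def get_pdev_buggy(devs, seg, bdf):
    # Pre-fix behaviour: matches on bdf alone, so a device with the right
    # bdf but the wrong segment can be returned.
    return next((d for d in devs if d["bdf"] == bdf), None)

def get_pdev_fixed(devs, seg, bdf):
    # Post-fix behaviour: matches on (segment, bdf).
    return next((d for d in devs
                 if (d["seg"], d["bdf"]) == (seg, bdf)), None)

assert get_pdev_buggy(domain_devs, 0, 0x0800)["seg"] == 1  # wrong device
assert get_pdev_fixed(domain_devs, 0, 0x0800)["seg"] == 0  # correct device
```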

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 09:09:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:09:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536801.835508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzw6u-0006Yl-TO; Fri, 19 May 2023 09:09:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536801.835508; Fri, 19 May 2023 09:09:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzw6u-0006Ye-Ql; Fri, 19 May 2023 09:09:16 +0000
Received: by outflank-mailman (input) for mailman id 536801;
 Fri, 19 May 2023 09:09:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzw6s-0006YY-S9
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:09:14 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on060a.outbound.protection.outlook.com
 [2a01:111:f400:fe02::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d226edd0-f624-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 11:09:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8766.eurprd04.prod.outlook.com (2603:10a6:102:20d::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 09:09:10 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 09:09:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d226edd0-f624-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q67JiSTv0UB/SDlng+V597s8OrIKmrcnwYO7mRvFv0yhuqe8BVsOmz+MelQC/EwwT3c0TO+0fJXw0Tv79Pg08zn1UzU6GbIg1DseY5v8IsN6Vcw1SqdXHuoxkWjSWUKdaD+JMLobkYzL2eAHZ21JZqon8pz960GlsGhdVlhAa40BFR3XQQFo1ro9x/miaaTRpxwUUTi2AREa5woa3Q0v8a9mITafBxHqMVyl4wGmjZQr1aSgfFtQW0M0Tz6S0E9X7S6p6QGmu7oZbtioanEylcgHl55K13cqabt4qnOfaAgLHAng5pOlv7zBtodnBLWjXCS1vONSd1ShwBpy89w+mA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fsB3vCLgthXRHPhn4STfbolD2zd1U7buZWW3Ey+MIRQ=;
 b=jESaUzIVAoMmVzrT7IbFJydoBqxEyNMCJ+FpYhRFd6zb4pBlOf8JIcTqfdBZQHsrFwCz+yE1MiVoUfwvotT4fPhxl4Llzbu38EGOB5ucctmPNxrn5BfAVbhGFJRVE1itOdZg/+4IfEWf/JQh7m0IkWzQrFYpnOT8Kxx4doMkt0rc4z8tOLamiEMG+XGhqFJqoQzGG7BA200BqfLFUkFHh/mRx0T7mrm4xeWP0J5aQwfsdihjyoA2WaDJ/ZXi71D3JVP+pUrLb/QZJCIfbc1Gb7loErKx10+0FdXc9EXkbt7/mVdxC/n7x35GCt+ZUMaee4QxKYfbhGJ5pFNWx8wkqA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fsB3vCLgthXRHPhn4STfbolD2zd1U7buZWW3Ey+MIRQ=;
 b=e0P2Vg/1aYa6hDfY7mllpQLfhqyIfCxd23gz2xZhz5tkpmHxXzIEu4GEH4bXq0QzRnvF+XKdCf4pkvyj8/HopiEOpngu39c+HqTzwPkwtz6/y+YJcj4gPSK8oe4sgDsla1Duof+0U4xgSs9zGVb01KntT6EC/9F9l4VrUKixrpdN4mwq7N91Zfqhe11gpfZDqb72E58gPqhlkwRTVp6HuWIi7PJfRmumgdmQNEzuU1E4cR6WkNzZ7k8SXZLmQ9UkjPMkudZp0xQXA8bZcFWQWgZRfWN00zW876LxFxb/dj+XvA7hOHXtFWKTX1DH0tIZVuE1oFpLfIUt0b4OWDuaIA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8214dfbe-8438-8adc-6ac2-6300e8eb7923@suse.com>
Date: Fri, 19 May 2023 11:09:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: PVH Dom0 related UART failure
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
 xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
 xenia.ragiadakou@amd.com
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <b411f7aa-7fd2-7b1c-1bcd-35b989f528b6@suse.com>
 <ZGcnh/DZvFAIBR/n@Air-de-Roger>
 <7a00019e-da64-ad0c-d107-d002cf6bce85@suse.com>
 <ZGc5ZfebSL21W5sY@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZGc5ZfebSL21W5sY@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0079.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::19) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8766:EE_
X-MS-Office365-Filtering-Correlation-Id: 30720829-8b80-43c5-2547-08db5848b545
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 30720829-8b80-43c5-2547-08db5848b545
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 09:09:10.6802
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: R1IeZfcTt991DMideG86gJv0kD29FQuGSKStDOe4tsntm6DvFFZwssXBWT6Ki0iYJ1Lqxs91iNA5xE8TJHeuYg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8766

On 19.05.2023 10:55, Roger Pau Monné wrote:
> On Fri, May 19, 2023 at 10:20:24AM +0200, Jan Beulich wrote:
>> On 19.05.2023 09:38, Roger Pau Monné wrote:
>>> On Fri, May 19, 2023 at 09:22:58AM +0200, Jan Beulich wrote:
>>>> On 18.05.2023 12:34, Roger Pau Monné wrote:
>>>>> On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
>>>>>> I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
>>>>>> test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
>>>>>> Zen3 system and we already have a few successful tests with it, see
>>>>>> automation/gitlab-ci/test.yaml.
>>>>>>
>>>>>> We managed to narrow down the issue to a console problem. We are
>>>>>> currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
>>>>>> options, it works with PV Dom0 and it is using a PCI UART card.
>>>>>>
>>>>>> In the case of Dom0 PVH:
>>>>>> - it works without console=com1
>>>>>> - it works with console=com1 and with the patch appended below
>>>>>> - it doesn't work otherwise and crashes with this error:
>>>>>> https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
>>>>>
>>>>> Jan also noticed this, and we have a ticket for it in gitlab:
>>>>>
>>>>> https://gitlab.com/xen-project/xen/-/issues/85
>>>>>
>>>>>> What is the right way to fix it?
>>>>>
>>>>> I think the right fix is to simply prevent hidden devices from being
>>>>> handled by vPCI; in any case such devices won't work properly with
>>>>> vPCI because they are in use by Xen, and so any information cached by
>>>>> vPCI is likely to become stale, as Xen can modify the device without
>>>>> vPCI noticing.
>>>>>
>>>>> I think the chunk below should help.  It's not clear to me however how
>>>>> hidden devices should be handled, is the intention to completely hide
>>>>> such devices from dom0?
>>>>
>>>> No, Dom0 should still be able to see them in a (mostly) r/o fashion.
>>>> Hence my earlier RFC patch making vPCI actually deal with them.
>>>
>>> What's the difference between a hidden device and one that's marked RO
>>> then?
>>
>> pci_hide_device() simply makes the device unavailable for assignment
>> (by having it owned by DomXEN). pci_ro_device() additionally adds the
>> device to the segment's ro_map, thus protecting its config space from
>> Dom0 writes.
> 
> But above you mention that hidden devices should be visible to dom0
> "in a (mostly) r/o fashion".

I'm sorry for the confusion. My reply was in the context of the UART
question here, which is a "r/o device" case. I didn't realize you
were asking a question not directly related to such UART devices.

> I understand that for RO devices the whole config space of the device
> is RO, in which case we should simply avoid using vPCI for them.  We
> however likely want to have the BARs of such devices permanently
> mapped into the dom0 physmap (if memory decoding is enabled).
> 
> But for hidden devices it's not clear to me what needs to be RO, do we
> also need to protect the config space from dom0 accesses?

No, then they would be identical to r/o devices. Dom0 should be allowed
to deal with hidden devices normally, I think, with the sole exception
that they cannot be assigned to another domain (not even DomIO, if I'm
not mistaken).
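
As a rough illustration of the distinction being discussed (a hedged Python sketch, not Xen's actual C code; pci_hide_device()/pci_ro_device() are reduced here to the ownership and ro_map effects described above):

```python
# Hypothetical model: "hidden" only changes ownership; "r/o" additionally
# write-protects the device's config space via the segment's ro_map.

class Segment:
    def __init__(self):
        self.ro_map = set()   # bdfs whose config space Dom0 may not write

def hide_device(pdev):
    # Unavailable for assignment to any other domain (owned by DomXEN),
    # but Dom0 can otherwise deal with it normally.
    pdev["owner"] = "DomXEN"

def ro_device(pdev, seg):
    hide_device(pdev)             # r/o implies hidden ...
    seg.ro_map.add(pdev["bdf"])   # ... plus config space protected from Dom0

seg = Segment()
uart = {"bdf": 0x00C0, "owner": "Dom0"}   # e.g. a PCI UART used by Xen
ro_device(uart, seg)
assert uart["owner"] == "DomXEN" and 0x00C0 in seg.ro_map
```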

> It might be complicated for vPCI to deal with devices that have MSI-X
> interrupts in use by Xen, for example.  So I would suggest that at
> least for the time being we don't handle hidden devices with vPCI.

If any interrupts are in use on a device, I think it needs to be made
"r/o", not just "hidden".

Jan

> We might want to do similar to RO devices and prevent write access to
> the config space for those also.
> 
>> And just to clarify - a PCI dev containing a UART isn't "hidden" in the
>> above sense, but "r/o" (by virtue of calling pci_ro_device() on it).
>> But the issue reported long ago (and now re-discovered by Stefano) is
>> common to "r/o" and "hidden" devices (it's the "hidden" aspect that
>> counts here, which is common for both).
> 
> Indeed, the issue is with any device assigned to dom_xen (or not
> assigned to the hardware domain, but accessible by the hardware domain
> from vPCI).



From xen-devel-bounces@lists.xenproject.org Fri May 19 09:29:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:29:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536846.835541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwQT-0001mT-An; Fri, 19 May 2023 09:29:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536846.835541; Fri, 19 May 2023 09:29:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwQT-0001mM-85; Fri, 19 May 2023 09:29:29 +0000
Received: by outflank-mailman (input) for mailman id 536846;
 Fri, 19 May 2023 09:29:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FzOY=BI=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1pzwQR-0001mG-7J
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:29:27 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a4b0c3bb-f627-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 11:29:25 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 2AC7361040;
 Fri, 19 May 2023 09:29:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 95151C433D2;
 Fri, 19 May 2023 09:29:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4b0c3bb-f627-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684488563;
	bh=XoazJ8+NY/zUkaU8NzggrB/ghI/4odceV/1JnUi+Dmc=;
	h=From:To:Cc:Subject:Date:From;
	b=F8lhzdDq6FFa/B9xQpGC3UMBhg06/S8aUAed0Bx6y3p7D8C0KePzwA28eJ6MwOWnB
	 HCzOB9mw0ArwodrqG93+d2iUANl2qG2+g6/eoIrKQBPQYczyF3XxlvWCzbTErhUzlC
	 GPiDfT2Ni/svQaioieFFywhgVN8es8lASKkH163niPjsMEAF2S/hhKx9UEKU4f68gh
	 UvhCYp96WcUev936+h1S2Dg7pP5GOWLVnlJntnTlGVEu43C36hqbCfHb/tZP+ZWYm/
	 n9+do87UuZ1KvZwRdyJAhg/zhegv053Vq3jEgxkSNlSa7IaD1Jje2B00yvYt9tcKx/
	 nDRHkSehZEcng==
From: Arnd Bergmann <arnd@kernel.org>
To: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] [v2] x86: xen: add missing prototypes
Date: Fri, 19 May 2023 11:28:40 +0200
Message-Id: <20230519092905.3828633-1-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

These functions are all called from assembler files, or from inline assembler,
so there is no immediate need for a prototype in a header; but if -Wmissing-prototypes
is enabled, the compiler warns about them:

arch/x86/xen/efi.c:130:13: error: no previous prototype for 'xen_efi_init' [-Werror=missing-prototypes]
arch/x86/platform/pvh/enlighten.c:120:13: error: no previous prototype for 'xen_prepare_pvh' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:358:20: error: no previous prototype for 'xen_pte_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:366:20: error: no previous prototype for 'xen_pgd_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:372:17: error: no previous prototype for 'xen_make_pte' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:380:17: error: no previous prototype for 'xen_make_pgd' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:387:20: error: no previous prototype for 'xen_pmd_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:425:17: error: no previous prototype for 'xen_make_pmd' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:432:20: error: no previous prototype for 'xen_pud_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:438:17: error: no previous prototype for 'xen_make_pud' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:522:20: error: no previous prototype for 'xen_p4d_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:528:17: error: no previous prototype for 'xen_make_p4d' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:1442:17: error: no previous prototype for 'xen_make_pte_init' [-Werror=missing-prototypes]
arch/x86/xen/enlighten_pv.c:1233:34: error: no previous prototype for 'xen_start_kernel' [-Werror=missing-prototypes]
arch/x86/xen/irq.c:22:14: error: no previous prototype for 'xen_force_evtchn_callback' [-Werror=missing-prototypes]
arch/x86/entry/common.c:302:24: error: no previous prototype for 'xen_pv_evtchn_do_upcall' [-Werror=missing-prototypes]

Declare all of them in an appropriate header file to avoid the warnings.
For consistency, also move the asm_cpu_bringup_and_idle() declaration out of
smp_pv.c.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
v2: fix up changelog
---
 arch/x86/xen/efi.c     |  2 ++
 arch/x86/xen/smp.h     |  4 ++++
 arch/x86/xen/smp_pv.c  |  1 -
 arch/x86/xen/xen-ops.h | 14 ++++++++++++++
 include/xen/xen.h      |  3 +++
 5 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
index 7d7ffb9c826a..863d0d6b3edc 100644
--- a/arch/x86/xen/efi.c
+++ b/arch/x86/xen/efi.c
@@ -16,6 +16,8 @@
 #include <asm/setup.h>
 #include <asm/xen/hypercall.h>
 
+#include "xen-ops.h"
+
 static efi_char16_t vendor[100] __initdata;
 
 static efi_system_table_t efi_systab_xen __initdata = {
diff --git a/arch/x86/xen/smp.h b/arch/x86/xen/smp.h
index 22fb982ff971..81a7821dd07f 100644
--- a/arch/x86/xen/smp.h
+++ b/arch/x86/xen/smp.h
@@ -1,7 +1,11 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #ifndef _XEN_SMP_H
 
+void asm_cpu_bringup_and_idle(void);
+asmlinkage void cpu_bringup_and_idle(void);
+
 #ifdef CONFIG_SMP
+
 extern void xen_send_IPI_mask(const struct cpumask *mask,
 			      int vector);
 extern void xen_send_IPI_mask_allbutself(const struct cpumask *mask,
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index a92e8002b5cf..d5ae5de2daa2 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -55,7 +55,6 @@ static DEFINE_PER_CPU(struct xen_common_irq, xen_irq_work) = { .irq = -1 };
 static DEFINE_PER_CPU(struct xen_common_irq, xen_pmu_irq) = { .irq = -1 };
 
 static irqreturn_t xen_irq_work_interrupt(int irq, void *dev_id);
-void asm_cpu_bringup_and_idle(void);
 
 static void cpu_bringup(void)
 {
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 6d7f6318fc07..0f71ee3fe86b 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -160,4 +160,18 @@ void xen_hvm_post_suspend(int suspend_cancelled);
 static inline void xen_hvm_post_suspend(int suspend_cancelled) {}
 #endif
 
+void xen_force_evtchn_callback(void);
+pteval_t xen_pte_val(pte_t pte);
+pgdval_t xen_pgd_val(pgd_t pgd);
+pte_t xen_make_pte(pteval_t pte);
+pgd_t xen_make_pgd(pgdval_t pgd);
+pmdval_t xen_pmd_val(pmd_t pmd);
+pmd_t xen_make_pmd(pmdval_t pmd);
+pudval_t xen_pud_val(pud_t pud);
+pud_t xen_make_pud(pudval_t pud);
+p4dval_t xen_p4d_val(p4d_t p4d);
+p4d_t xen_make_p4d(p4dval_t p4d);
+pte_t xen_make_pte_init(pteval_t pte);
+void xen_start_kernel(struct start_info *si);
+
 #endif /* XEN_OPS_H */
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 0efeb652f9b8..f989162983c3 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -31,6 +31,9 @@ extern uint32_t xen_start_flags;
 
 #include <xen/interface/hvm/start_info.h>
 extern struct hvm_start_info pvh_start_info;
+void xen_prepare_pvh(void);
+struct pt_regs;
+void xen_pv_evtchn_do_upcall(struct pt_regs *regs);
 
 #ifdef CONFIG_XEN_DOM0
 #include <xen/interface/xen.h>
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri May 19 09:30:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:30:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536849.835551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwRV-00038y-L9; Fri, 19 May 2023 09:30:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536849.835551; Fri, 19 May 2023 09:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwRV-00038n-Gt; Fri, 19 May 2023 09:30:33 +0000
Received: by outflank-mailman (input) for mailman id 536849;
 Fri, 19 May 2023 09:30:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pzwRU-00038Z-ER
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:30:32 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id cc6e24ca-f627-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 11:30:31 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AEEC62F4;
 Fri, 19 May 2023 02:31:15 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9D3123F73F;
 Fri, 19 May 2023 02:30:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc6e24ca-f627-11ed-b22d-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/3] xen/misra: xen-analysis.py: Improve the cppcheck version check
Date: Fri, 19 May 2023 10:30:17 +0100
Message-Id: <20230519093019.2131896-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230519093019.2131896-1-luca.fancellu@arm.com>
References: <20230519093019.2131896-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use tuple comparison to check the cppcheck version.

While at it, harden the regex by escaping the dots, so that they match
literal dots instead of any character.
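
The check can be sketched as a standalone function (a hypothetical,
self-contained rendition of the patched logic, not the script itself):

```python
import re

def check_cppcheck_version(banner):
    # Parse "Cppcheck <major>.<minor>[.<patch>]"; the dots are escaped so
    # they match literal dots rather than any character.
    m = re.search(r'^Cppcheck (\d+)\.(\d+)(?:\.\d+)?$', banner, flags=re.M)
    if not m:
        raise ValueError("Can't find cppcheck version: {}".format(banner))
    version = (int(m.group(1)), int(m.group(2)))
    # Tuples compare element-wise, so (2, 10) > (2, 7) holds, unlike a
    # naive string comparison where "2.10" < "2.7".
    if version < (2, 7) or version == (2, 8):
        raise ValueError("Cppcheck version < 2.7 or 2.8 are not supported")
    return version
```

Tuple comparison is what makes the single-condition check safe for
double-digit minor versions.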

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/xen_analysis/cppcheck_analysis.py | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
index c8abbe0fca79..8dc45e653b79 100644
--- a/xen/scripts/xen_analysis/cppcheck_analysis.py
+++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
@@ -157,7 +157,7 @@ def generate_cppcheck_deps():
             "Error occured retrieving cppcheck version:\n{}\n\n{}"
         )
 
-    version_regex = re.search('^Cppcheck (\d+).(\d+)(?:.\d+)?$',
+    version_regex = re.search('^Cppcheck (\d+)\.(\d+)(?:\.\d+)?$',
                               invoke_cppcheck, flags=re.M)
     # Currently, only cppcheck version >= 2.7 is supported, but version 2.8 is
     # known to be broken, please refer to docs/misra/cppcheck.txt
@@ -166,15 +166,10 @@ def generate_cppcheck_deps():
             "Can't find cppcheck version or version not identified: "
             "{}".format(invoke_cppcheck)
         )
-    major = int(version_regex.group(1))
-    minor = int(version_regex.group(2))
-    if major < 2 or (major == 2 and minor < 7):
+    version = (int(version_regex.group(1)), int(version_regex.group(2)))
+    if version < (2, 7) or version == (2, 8):
         raise CppcheckDepsPhaseError(
-            "Cppcheck version < 2.7 is not supported"
-        )
-    if major == 2 and minor == 8:
-        raise CppcheckDepsPhaseError(
-            "Cppcheck version 2.8 is known to be broken, see the documentation"
+            "Cppcheck version < 2.7 or 2.8 are not supported"
         )
 
     # If misra option is selected, append misra addon and generate cppcheck
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 19 09:30:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:30:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536850.835555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwRV-0003CM-Rj; Fri, 19 May 2023 09:30:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536850.835555; Fri, 19 May 2023 09:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwRV-0003C7-Ox; Fri, 19 May 2023 09:30:33 +0000
Received: by outflank-mailman (input) for mailman id 536850;
 Fri, 19 May 2023 09:30:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pzwRV-00038h-2N
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:30:33 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id cb94a9ea-f627-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 11:30:30 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 417D11FB;
 Fri, 19 May 2023 02:31:14 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 256453F73F;
 Fri, 19 May 2023 02:30:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb94a9ea-f627-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/3] Fix and improvements to xen-analysis.py - Pt.2
Date: Fri, 19 May 2023 10:30:16 +0100
Message-Id: <20230519093019.2131896-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series includes one improvement suggested by Andrew Cooper and two bug
fixes for the xen-analysis.py tool.

Luca Fancellu (3):
  xen/misra: xen-analysis.py: Improve the cppcheck version check
  xen/misra: xen-analysis.py: Fix latent bug
  xen/misra: xen-analysis.py: Fix cppcheck report relative paths

 xen/scripts/xen_analysis/cppcheck_analysis.py | 13 +++------
 .../xen_analysis/cppcheck_report_utils.py     | 27 +++++++++++++++----
 2 files changed, 26 insertions(+), 14 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 19 09:30:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:30:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536851.835571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwRX-0003dn-5S; Fri, 19 May 2023 09:30:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536851.835571; Fri, 19 May 2023 09:30:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwRX-0003dg-1t; Fri, 19 May 2023 09:30:35 +0000
Received: by outflank-mailman (input) for mailman id 536851;
 Fri, 19 May 2023 09:30:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pzwRV-00038Z-G5
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:30:33 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id cd4ebcb5-f627-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 11:30:32 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2864015BF;
 Fri, 19 May 2023 02:31:17 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 16E1C3F73F;
 Fri, 19 May 2023 02:30:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd4ebcb5-f627-11ed-b22d-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/3] xen/misra: xen-analysis.py: Fix latent bug
Date: Fri, 19 May 2023 10:30:18 +0100
Message-Id: <20230519093019.2131896-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230519093019.2131896-1-luca.fancellu@arm.com>
References: <20230519093019.2131896-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently there is a latent bug that is not triggered because the
function cppcheck_merge_txt_fragments is called with a strip_paths
parameter containing only one element.

The bug is that the split operation should not be inside the loop
over strip_paths, but one level up; fix it.
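
The bug can be reproduced in isolation (the input line and paths below are
made up for illustration; with a single-element strip_paths the bug stays
latent):

```python
strip_paths = ["/build/a", "/build/b"]
line = "/build/b/xen/arch/arm/mm.c(10,5):rule:text"

def buggy(line, strip_paths):
    # split() inside the loop: on the second iteration 'line' is already
    # a list, and list objects have no replace() method.
    for path in strip_paths:
        line = line.replace(path + "/", "")
        line = line.split(":")
    return line

def fixed(line, strip_paths):
    # Strip every path first, split once afterwards.
    for path in strip_paths:
        line = line.replace(path + "/", "")
    return line.split(":")

print(fixed(line, strip_paths))  # ['xen/arch/arm/mm.c(10,5)', 'rule', 'text']
```

Calling buggy() with two or more paths raises AttributeError, which is
exactly the failure the single-element call never exposed.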

Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 xen/scripts/xen_analysis/cppcheck_report_utils.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
index c5f466aff141..fdc299c7e029 100644
--- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
+++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
@@ -104,8 +104,8 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
                 for path in strip_paths:
                     text_report_content[i] = text_report_content[i].replace(
                                                                 path + "/", "")
-                    # Split by : separator
-                    text_report_content[i] = text_report_content[i].split(":")
+                # Split by : separator
+                text_report_content[i] = text_report_content[i].split(":")
 
             # sort alphabetically for second field (misra rule) and as second
             # criteria for the first field (file name)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 19 09:30:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:30:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536852.835581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwRY-0003uy-JO; Fri, 19 May 2023 09:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536852.835581; Fri, 19 May 2023 09:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwRY-0003uo-GL; Fri, 19 May 2023 09:30:36 +0000
Received: by outflank-mailman (input) for mailman id 536852;
 Fri, 19 May 2023 09:30:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pzwRX-00038Z-3N
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:30:35 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ce3f963c-f627-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 11:30:34 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B1CD615DB;
 Fri, 19 May 2023 02:31:18 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 84D9D3F73F;
 Fri, 19 May 2023 02:30:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce3f963c-f627-11ed-b22d-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH 3/3] xen/misra: xen-analysis.py: Fix cppcheck report relative paths
Date: Fri, 19 May 2023 10:30:19 +0100
Message-Id: <20230519093019.2131896-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230519093019.2131896-1-luca.fancellu@arm.com>
References: <20230519093019.2131896-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Fix the generation of the relative path from the repository root, for
cppcheck reports, when the script launches make with an in-tree build.

Fixes: b046f7e37489 ("xen/misra: xen-analysis.py: use the relative path from the ...")
Reported-by: Michal Orzel <michal.orzel@amd.com>
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 .../xen_analysis/cppcheck_report_utils.py     | 25 ++++++++++++++++---
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
index fdc299c7e029..10100f6c6a57 100644
--- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
+++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
@@ -1,6 +1,7 @@
 #!/usr/bin/env python3
 
-import os
+import os, re
+from . import settings
 from xml.etree import ElementTree
 
 class CppcheckHTMLReportError(Exception):
@@ -101,12 +102,28 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
             text_report_content = list(text_report_content)
             # Strip path from report lines
             for i in list(range(0, len(text_report_content))):
-                for path in strip_paths:
-                    text_report_content[i] = text_report_content[i].replace(
-                                                                path + "/", "")
                 # Split by : separator
                 text_report_content[i] = text_report_content[i].split(":")
 
+                for path in strip_paths:
+                    text_report_content[i][0] = \
+                        text_report_content[i][0].replace(path + "/", "")
+
+                # When the compilation is in-tree, the makefile places
+                # the directory in /xen/xen, making cppcheck produce
+                # relative path from there, so check if "xen/" is a prefix
+                # of the path and if it's not, check if it can be added to
+                # have a relative path from the repository instead of from
+                # /xen/xen
+                if not text_report_content[i][0].startswith("xen/"):
+                    # cppcheck first entry is in this format:
+                    # path/to/file(line,cols), remove (line,cols)
+                    cppcheck_file = re.sub(r'\(.*\)', '',
+                                           text_report_content[i][0])
+                    if os.path.isfile(settings.xen_dir + "/" + cppcheck_file):
+                        text_report_content[i][0] = \
+                            "xen/" + text_report_content[i][0]
+
             # sort alphabetically for second field (misra rule) and as second
             # criteria for the first field (file name)
             text_report_content.sort(key = lambda x: (x[1], x[0]))
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 19 09:46:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:46:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536873.835590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwgs-0006VG-Qi; Fri, 19 May 2023 09:46:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536873.835590; Fri, 19 May 2023 09:46:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwgs-0006V9-Nx; Fri, 19 May 2023 09:46:26 +0000
Received: by outflank-mailman (input) for mailman id 536873;
 Fri, 19 May 2023 09:46:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pzwgs-0006V3-4y
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:46:26 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 048a94e9-f62a-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 11:46:24 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B6A4C1FB;
 Fri, 19 May 2023 02:47:08 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A175A3F73F;
 Fri, 19 May 2023 02:46:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 048a94e9-f62a-11ed-b22d-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] diff-report.py tool
Date: Fri, 19 May 2023 10:46:11 +0100
Message-Id: <20230519094613.2134153-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

--------------------------------------------------------------------------------
This series depends on the following patch, in case the cppcheck reports are
generated using xen-analysis.py, which calls the makefile with an in-tree build:
https://patchwork.kernel.org/project/xen-devel/patch/20230519093019.2131896-4-luca.fancellu@arm.com/
--------------------------------------------------------------------------------

Now that we have a tool (xen-analysis.py) that wraps cppcheck to generate
reports, we have a general overview of how many static analysis issues and
MISRA C non-compliances we have for a given revision of the codebase.

This is great, and eventually the goal is simply to have fewer and fewer
findings in the report until we reach zero.

This is an ideal trend, because in practice we may have issues that come
from existing code (macros, for example) that are not going to be fixed
soon for whatever reason, but we would still like to see how many issues
are introduced by new commits (ideally zero; if any are added and the fault
resides outside the changed code, maintainers might decide to accept them
anyway).

So the idea is to compare two reports of the codebase: one called the
"baseline", which is basically the current codebase, and the other called
the "new report", which is the codebase after the changes.
To see whether any new finding is added, we need to look at every finding
in the "new report" that is not listed in the "baseline".

It sounds very simple, but what can happen to existing findings in the code
after a commit is applied?
Existing findings can shift in position due to changes to unrelated lines,
or they can be deleted or fixed by changes involving the finding's own line
(Michal was the first to point that out).

So comparing the two reports directly is quite difficult: a naive diff
shows all the new findings plus all the findings that merely changed
position due to the changes applied.
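
A raw set-difference comparison illustrates the problem (the entry format
and the findings below are made up for illustration, not the tool's real
data model):

```python
baseline = {
    ("xen/arch/arm/mm.c", 10, "MISRA R10.1"),
    ("xen/common/sched.c", 42, "MISRA R8.4"),
}
new_report = {
    ("xen/arch/arm/mm.c", 10, "MISRA R10.1"),  # unchanged finding
    ("xen/common/sched.c", 45, "MISRA R8.4"),  # same finding, shifted by 3
    ("xen/common/event.c", 7, "MISRA R2.1"),   # genuinely new finding
}

# The naive diff reports two "new" findings, but only one is real:
# the sched.c entry merely moved because unrelated lines were added.
new_findings = new_report - baseline
for entry in sorted(new_findings):
    print(entry)
```

This is why the baseline has to be "patched" before the subtraction.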

To overcome this, the diff-report.py tool is created; it can "patch" the
"baseline" report by looking at the changes applied to the baseline
codebase, as described by git diff.
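
The line-number translation can be sketched from the unified diff hunk
headers (a minimal, hypothetical version of the idea, assuming hunks appear
in file order; the tool's actual parser is more complete):

```python
import re

# A unified diff hunk header looks like "@@ -old,len +new,len @@";
# a missing length defaults to 1.
HUNK_RE = re.compile(r'^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@')

def translate_line(diff_lines, line):
    """Map a line number in the old file to the new file, or return None
    if the line falls inside a changed hunk (finding possibly fixed)."""
    offset = 0
    for text in diff_lines:
        m = HUNK_RE.match(text)
        if not m:
            continue
        old_start = int(m.group(1))
        old_len = int(m.group(2) or "1")
        new_start = int(m.group(3))
        new_len = int(m.group(4) or "1")
        if line < old_start:
            break  # hunks are ordered: no later hunk affects this line
        if line < old_start + old_len:
            return None  # inside a changed region: drop the entry
        offset = (new_start + new_len) - (old_start + old_len)
    return line + offset

diff = ["@@ -5,3 +5,6 @@"]  # 3 old lines become 6: +3 offset below the hunk
print(translate_line(diff, 2))   # before the hunk: unchanged -> 2
print(translate_line(diff, 6))   # inside the hunk -> None
print(translate_line(diff, 20))  # after the hunk: shifted -> 23
```

Applying this translation to every baseline entry yields the "patched
baseline" that can then be subtracted from the new report.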

This series is organised in two patches; I've tried to split the code into
meaningful pieces. The first patch contains everything needed to import
cppcheck reports and do a "raw" diff between reports, which gives a hint
about new findings plus old findings that have changed position.

The second patch adds the "patching" system: a class that parses the
git diff output and then "patches" the baseline before doing the
comparison. This last option is activated only when passing the git diff
changes to the tool, and everything is described (I hope) in the help.

Some considerations need to be made: this tool can translate the
coordinates (file, line) of the findings from the "baseline" to the "new
report", using the git diff output as, let's say, a translation matrix.
This doesn't mean it can understand the meaning of the findings and
recognise them in the new codebase; for example, a finding related to a
line that is moved to another part of the file won't be recognised as an
"old finding" and will simply be removed from the "patched baseline
report"; however, we will still find it in the new report unless it
contains a fix for the reported issue.

This means the tool is not really suited to be a gatekeeper for the merge
action; it is more suitable for helping maintainers understand when a
change introduces new issues, without having to manually compare two
reports of (nowadays) hundreds of findings.
Eventually we could run it in the CI and have the CI reply to the patchwork
thread with its output!

The tool also has a debug argument that, when enabled, generates extra
files that can be checked against the originals: for example, the reports
are imported into the tool, and the debug code then regenerates them from
the imported data; the two should be identical (if everything works).
Another debug check exports the representation of the parsed git diff
output, so that the developer can verify how, and whether, the parser
interpreted the data correctly.

Future work for this tool might be to also parse Coverity reports and
eventually (I don't know if it is possible) ECLAIR text reports as well.


Luca Fancellu (2):
  xen/misra: add diff-report.py tool
  xen/misra: diff-report.py: add report patching feature

 xen/scripts/diff-report.py                    | 130 ++++++++++
 .../xen_analysis/diff_tool/__init__.py        |   0
 .../xen_analysis/diff_tool/cppcheck_report.py |  44 ++++
 xen/scripts/xen_analysis/diff_tool/debug.py   |  61 +++++
 xen/scripts/xen_analysis/diff_tool/report.py  | 187 ++++++++++++++
 .../diff_tool/unified_format_parser.py        | 231 ++++++++++++++++++
 6 files changed, 653 insertions(+)
 create mode 100755 xen/scripts/diff-report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/unified_format_parser.py

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 19 09:46:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:46:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536875.835611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwgw-0006zt-9q; Fri, 19 May 2023 09:46:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536875.835611; Fri, 19 May 2023 09:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwgw-0006zi-6D; Fri, 19 May 2023 09:46:30 +0000
Received: by outflank-mailman (input) for mailman id 536875;
 Fri, 19 May 2023 09:46:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1pzwgu-0006V3-Vw
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:46:29 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 0660987c-f62a-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 11:46:27 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D94DF15BF;
 Fri, 19 May 2023 02:47:11 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id AAD083F73F;
 Fri, 19 May 2023 02:46:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0660987c-f62a-11ed-b22d-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/2] xen/misra: diff-report.py: add report patching feature
Date: Fri, 19 May 2023 10:46:13 +0100
Message-Id: <20230519094613.2134153-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230519094613.2134153-1-luca.fancellu@arm.com>
References: <20230519094613.2134153-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a feature to the diff-report.py script that improves the comparison
between two analysis reports, one from a baseline codebase and the other
from the codebase with the changes applied.

Comparing reports from different codebases is an issue because entries in
the baseline may have shifted in position due to the addition or deletion
of unrelated lines, or may have disappeared because the affected line was
deleted, making the comparison between two revisions of the code harder.

Given a baseline report, a report of the codebase with the changes applied
(called the "new report"), and a file in git diff format describing the
changes made to the baseline code, this feature can work out which entries
from the baseline report are deleted or shifted in position due to changes
to unrelated lines, and can rewrite them as they will appear in the
"new report".

With the "patched baseline" and the "new report", it is then simple to
diff the two and print only the entries that are new.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v1:
 - Made the script compatible with python2 (Stefano)
---
 xen/scripts/diff-report.py                    |  55 ++++-
 xen/scripts/xen_analysis/diff_tool/debug.py   |  21 ++
 xen/scripts/xen_analysis/diff_tool/report.py  |  87 +++++++
 .../diff_tool/unified_format_parser.py        | 232 ++++++++++++++++++
 4 files changed, 393 insertions(+), 2 deletions(-)
 create mode 100644 xen/scripts/xen_analysis/diff_tool/unified_format_parser.py

diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
index f97cb2355cc3..d608e3a05aa1 100755
--- a/xen/scripts/diff-report.py
+++ b/xen/scripts/diff-report.py
@@ -7,6 +7,10 @@ from argparse import ArgumentParser
 from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
 from xen_analysis.diff_tool.debug import Debug
 from xen_analysis.diff_tool.report import ReportError
+from xen_analysis.diff_tool.unified_format_parser import \
+    (UnifiedFormatParser, UnifiedFormatParseError)
+from xen_analysis.settings import repo_dir
+from xen_analysis.utils import invoke_command
 
 
 def log_info(text, end='\n'):
@@ -36,9 +40,32 @@ def main(argv):
                              "against the baseline.")
     parser.add_argument("-v", "--verbose", action='store_true',
                         help="Print more informations during the run.")
+    parser.add_argument("--patch", type=str,
+                        help="The patch file containing the changes to the "
+                             "code, from the baseline analysis result to the "
+                             "'check report' analysis result.\n"
+                             "Do not use with --baseline-rev/--report-rev")
+    parser.add_argument("--baseline-rev", type=str,
+                        help="Revision or SHA of the codebase analysed to "
+                             "create the baseline report.\n"
+                             "Use together with --report-rev")
+    parser.add_argument("--report-rev", type=str,
+                        help="Revision or SHA of the codebase analysed to "
+                             "create the 'check report'.\n"
+                             "Use together with --baseline-rev")
 
     args = parser.parse_args()
 
+    if args.patch and (args.baseline_rev or args.report_rev):
+        print("ERROR: '--patch' argument can't be used with '--baseline-rev'"
+              " or '--report-rev'.")
+        sys.exit(1)
+
+    if bool(args.baseline_rev) != bool(args.report_rev):
+        print("ERROR: '--baseline-rev' must be used together with "
+              "'--report-rev'.")
+        sys.exit(1)
+
     if args.out == "stdout":
         file_out = sys.stdout
     else:
@@ -63,11 +90,35 @@ def main(argv):
         new_rep.parse()
         debug.debug_print_parsed_report(new_rep)
         log_info(" [OK]")
-    except ReportError as e:
+        diff_source = None
+        if args.patch:
+            diff_source = os.path.realpath(args.patch)
+        elif args.baseline_rev:
+            git_diff = invoke_command(
+                "git --git-dir={} diff -C -C {}..{}".format(repo_dir,
+                                                            args.baseline_rev,
+                                                            args.report_rev),
+                True, "Error occurred invoking:\n{}\n\n{}"
+            )
+            diff_source = git_diff.splitlines(keepends=True)
+        if diff_source:
+            log_info("Parsing changes...", "")
+            diffs = UnifiedFormatParser(diff_source)
+            debug.debug_print_parsed_diff(diffs)
+            log_info(" [OK]")
+    except (ReportError, UnifiedFormatParseError) as e:
         print("ERROR: {}".format(e))
         sys.exit(1)
 
-    output = new_rep - baseline
+    if args.patch or args.baseline_rev:
+        log_info("Patching baseline...", "")
+        baseline_patched = baseline.patch(diffs)
+        debug.debug_print_patched_report(baseline_patched)
+        log_info(" [OK]")
+        output = new_rep - baseline_patched
+    else:
+        output = new_rep - baseline
+
     print(output, end="", file=file_out)
 
     if len(output) > 0:
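The option-consistency rules added above can be sketched standalone as follows (a simplified, hypothetical parser, not the actual diff-report.py code):

```python
from argparse import ArgumentParser

# Simplified sketch of the rules enforced above: --patch excludes the
# revision options, and --baseline-rev/--report-rev must be given together.
parser = ArgumentParser(prog="diff-report.py")
parser.add_argument("--patch")
parser.add_argument("--baseline-rev")
parser.add_argument("--report-rev")

def validate(argv):
    args = parser.parse_args(argv)
    if args.patch and (args.baseline_rev or args.report_rev):
        return "--patch excludes --baseline-rev/--report-rev"
    if bool(args.baseline_rev) != bool(args.report_rev):
        return "--baseline-rev and --report-rev go together"
    return "ok"
```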
diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
index 65cca2464110..fcf1d861b5cf 100644
--- a/xen/scripts/xen_analysis/diff_tool/debug.py
+++ b/xen/scripts/xen_analysis/diff_tool/debug.py
@@ -3,6 +3,7 @@
 from __future__ import print_function
 import os
 from .report import Report
+from .unified_format_parser import UnifiedFormatParser
 
 
 class Debug:
@@ -38,3 +39,23 @@ class Debug:
         if not self.args.debug:
             return
         self.__debug_print_report(report, ".parsed")
+
+    def debug_print_patched_report(self, report):
+        # type: (Report) -> None
+        if not self.args.debug:
+            return
+        # The patched report already contains .patched in its name
+        self.__debug_print_report(report, "")
+
+    def debug_print_parsed_diff(self, diff):
+        # type: (UnifiedFormatParser) -> None
+        if not self.args.debug:
+            return
+        diff_filename = diff.get_diff_path()
+        out_pathname = self.__get_debug_out_filename(diff_filename, ".parsed")
+        try:
+            with open(out_pathname, "wt") as outfile:
+                for change_obj in diff.get_change_sets().values():
+                    print(change_obj, end="", file=outfile)
+        except OSError as e:
+            print("ERROR: Issue opening file {}: {}".format(out_pathname, e))
diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
index 4a303d61b3ea..b80eb31114f0 100644
--- a/xen/scripts/xen_analysis/diff_tool/report.py
+++ b/xen/scripts/xen_analysis/diff_tool/report.py
@@ -1,6 +1,7 @@
 #!/usr/bin/env python3
 
 import os
+from .unified_format_parser import UnifiedFormatParser, ChangeSet
 
 
 class ReportError(Exception):
@@ -47,6 +48,92 @@ class Report(object):
             self.__entries[entry_path] = [entry]
         self.__last_line_order += 1
 
+    def remove_entries(self, entry_file_path):
+        # type: (str) -> None
+        del self.__entries[entry_file_path]
+
+    def remove_entry(self, entry_path, line_id):
+        # type: (str, int) -> None
+        if entry_path in self.__entries.keys():
+            entries = self.__entries[entry_path]
+            # Drop the entry carrying this line_id (the list holds
+            # ReportEntry objects, so compare on their line_id field)
+            entries[:] = [e for e in entries if e.line_id != line_id]
+            if len(entries) == 0:
+                del self.__entries[entry_path]
+
+    def patch(self, diff_obj):
+        # type: (UnifiedFormatParser) -> Report
+        filename, file_extension = os.path.splitext(self.__path)
+        patched_report = self.__class__(filename + ".patched" + file_extension)
+        remove_files = []
+        rename_files = []
+        remove_entry = []
+        ChangeMode = ChangeSet.ChangeMode
+
+        # Copy entries from this report to the report we are going to patch
+        for entries in self.__entries.values():
+            for entry in entries:
+                patched_report.add_entry(entry.file_path, entry.line_number,
+                                         entry.text)
+
+        # Patch the output report
+        patched_rep_entries = patched_report.get_report_entries()
+        for file_diff, change_obj in diff_obj.get_change_sets().items():
+            if change_obj.is_change_mode(ChangeMode.COPY):
+                # Copy the original entries for change_obj.orig_file into a
+                # new key of the patched report named change_obj.dst_file
+                # (which here is the content of file_diff), because a COPY
+                # change_obj is pushed into the change_sets under the
+                # change_obj.dst_file key
+                if change_obj.orig_file in self.__entries.keys():
+                    for entry in self.__entries[change_obj.orig_file]:
+                        patched_report.add_entry(file_diff,
+                                                 entry.line_number,
+                                                 entry.text)
+
+            if file_diff in patched_rep_entries.keys():
+                if change_obj.is_change_mode(ChangeMode.DELETE):
+                    # No need to check changes here, just remember to delete
+                    # the file from the report
+                    remove_files.append(file_diff)
+                    continue
+                elif change_obj.is_change_mode(ChangeMode.RENAME):
+                    # Remember to rename the file entry on this report
+                    rename_files.append(change_obj)
+
+                for line_num, change_type in change_obj.get_change_set():
+                    len_rep = len(patched_rep_entries[file_diff])
+                    for i in range(len_rep):
+                        rep_item = patched_rep_entries[file_diff][i]
+                        if change_type == ChangeSet.ChangeType.REMOVE:
+                            if rep_item.line_number == line_num:
+                                # This line is removed by these changes;
+                                # append it to the entries to be removed
+                                remove_entry.append(rep_item)
+                            elif rep_item.line_number > line_num:
+                                rep_item.line_number -= 1
+                        elif change_type == ChangeSet.ChangeType.ADD:
+                            if rep_item.line_number >= line_num:
+                                rep_item.line_number += 1
+                    # Remove deleted entries from the list
+                    if len(remove_entry) > 0:
+                        for entry in remove_entry:
+                            patched_report.remove_entry(entry.file_path,
+                                                        entry.line_id)
+                        del remove_entry[:]
+
+        if len(remove_files) > 0:
+            for file_name in remove_files:
+                patched_report.remove_entries(file_name)
+
+        if len(rename_files) > 0:
+            for change_obj in rename_files:
+                patched_rep_entries[change_obj.dst_file] = \
+                    patched_rep_entries.pop(change_obj.orig_file)
+
+        return patched_report
+
     def to_list(self):
         # type: () -> list
         report_list = []
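The renumbering performed by Report.patch() above can be illustrated with a standalone sketch (a hypothetical mini-model using plain [line_number, text] pairs, not the actual ReportEntry objects):

```python
# Hypothetical mini-model of the renumbering in Report.patch(): a REMOVE at
# line N drops entries on that line and shifts later entries up by one; an
# ADD at line N shifts entries at or after N down by one.
def apply_change(entries, line_num, change_type):
    survivors = []
    for entry in entries:
        if change_type == "REMOVE":
            if entry[0] == line_num:
                continue  # the finding disappears with its line
            if entry[0] > line_num:
                entry[0] -= 1
        elif change_type == "ADD":
            if entry[0] >= line_num:
                entry[0] += 1
        survivors.append(entry)
    return survivors

entries = [[3, "finding A"], [7, "finding B"], [9, "finding C"]]
entries = apply_change(entries, 7, "REMOVE")  # drops B; C moves up to 8
entries = apply_change(entries, 1, "ADD")     # everything shifts down by 1
```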
diff --git a/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py b/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
new file mode 100644
index 000000000000..8b3fbc318df7
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
@@ -0,0 +1,232 @@
+#!/usr/bin/env python3
+
+import re
+import sys
+
+try:
+    from enum import Enum
+except Exception:
+    if sys.version_info[0] == 2:
+        print("Please install enum34 package when using python 2.")
+    else:
+        print("Please use python version 3.5 or above.")
+    sys.exit(1)
+
+try:
+    from typing import Tuple
+except Exception:
+    if sys.version_info[0] == 2:
+        print("Please install typing package when using python 2.")
+    else:
+        print("Please use python version 3.5 or above.")
+    sys.exit(1)
+
+
+class UnifiedFormatParseError(Exception):
+    pass
+
+
+class ParserState(Enum):
+    FIND_DIFF_HEADER = 0
+    REGISTER_CHANGES = 1
+    FIND_HUNK_OR_DIFF_HEADER = 2
+
+
+class ChangeSet(object):
+    class ChangeType(Enum):
+        REMOVE = 0
+        ADD = 1
+
+    class ChangeMode(Enum):
+        NONE = 0
+        CHANGE = 1
+        RENAME = 2
+        DELETE = 3
+        COPY = 4
+
+    def __init__(self, a_file, b_file):
+        # type: (str, str) -> None
+        self.orig_file = a_file
+        self.dst_file = b_file
+        self.change_mode = ChangeSet.ChangeMode.NONE
+        self.__changes = []
+
+    def __str__(self):
+        # type: () -> str
+        str_out = "{}: {} -> {}:\n{}\n".format(
+            str(self.change_mode), self.orig_file, self.dst_file,
+            str(self.__changes)
+        )
+        return str_out
+
+    def set_change_mode(self, change_mode):
+        # type: (ChangeMode) -> None
+        self.change_mode = change_mode
+
+    def is_change_mode(self, change_mode):
+        # type: (ChangeMode) -> bool
+        return self.change_mode == change_mode
+
+    def add_change(self, line_number, change_type):
+        # type: (int, ChangeType) -> None
+        self.__changes.append((line_number, change_type))
+
+    def get_change_set(self):
+        # type: () -> list
+        return self.__changes
+
+
+class UnifiedFormatParser(object):
+    def __init__(self, args):
+        # type: (str | list) -> None
+        if isinstance(args, str):
+            self.__diff_file = args
+            try:
+                with open(self.__diff_file, "rt") as infile:
+                    self.__diff_lines = infile.readlines()
+            except OSError as e:
+                raise UnifiedFormatParseError(
+                    "Issue with reading file {}: {}"
+                    .format(self.__diff_file, e)
+                )
+        elif isinstance(args, list):
+            self.__diff_file = "git-diff-local.txt"
+            self.__diff_lines = args
+        else:
+            raise UnifiedFormatParseError(
+                "UnifiedFormatParser constructor called with wrong arguments")
+
+        self.__git_diff_header = re.compile(r'^diff --git a/(.*) b/(.*)$')
+        self.__git_hunk_header = \
+            re.compile(r'^@@ -\d+,(\d+) \+(\d+),(\d+) @@.*$')
+        self.__diff_set = {}
+        self.__parse()
+
+    def get_diff_path(self):
+        # type: () -> str
+        return self.__diff_file
+
+    def add_change_set(self, change_set):
+        # type: (ChangeSet) -> None
+        if not change_set.is_change_mode(ChangeSet.ChangeMode.NONE):
+            if change_set.is_change_mode(ChangeSet.ChangeMode.COPY):
+                # Add copy change mode items using the dst_file key, because
+                # there might be other changes for the orig_file in this diff
+                self.__diff_set[change_set.dst_file] = change_set
+            else:
+                self.__diff_set[change_set.orig_file] = change_set
+
+    def __parse(self):
+        # type: () -> None
+        def parse_diff_header(line):
+            # type: (str) -> ChangeSet | None
+            change_item = None
+            diff_head = self.__git_diff_header.match(line)
+            if diff_head and diff_head.group(1) and diff_head.group(2):
+                change_item = ChangeSet(diff_head.group(1), diff_head.group(2))
+
+            return change_item
+
+        def parse_hunk_header(line):
+            # type: (str) -> Tuple[int, int, int]
+            file_linenum = -1
+            hunk_a_linemax = -1
+            hunk_b_linemax = -1
+            hunk_head = self.__git_hunk_header.match(line)
+            if hunk_head and hunk_head.group(1) and hunk_head.group(2) \
+               and hunk_head.group(3):
+                file_linenum = int(hunk_head.group(2))
+                hunk_a_linemax = int(hunk_head.group(1))
+                hunk_b_linemax = int(hunk_head.group(3))
+
+            return (file_linenum, hunk_a_linemax, hunk_b_linemax)
+
+        file_linenum = 0
+        hunk_a_linemax = 0
+        hunk_b_linemax = 0
+        diff_elem = None
+        parse_state = ParserState.FIND_DIFF_HEADER
+        ChangeMode = ChangeSet.ChangeMode
+        ChangeType = ChangeSet.ChangeType
+
+        for line in self.__diff_lines:
+            if parse_state == ParserState.FIND_DIFF_HEADER:
+                diff_elem = parse_diff_header(line)
+                if diff_elem:
+                    # Found the diff header, go to the next stage
+                    parse_state = ParserState.FIND_HUNK_OR_DIFF_HEADER
+            elif parse_state == ParserState.FIND_HUNK_OR_DIFF_HEADER:
+                # Only the following change modes will be registered:
+                # deleted file mode <mode>
+                # rename from <path>
+                # rename to <path>
+                # copy from <path>
+                # copy to <path>
+                #
+                # These will be ignored:
+                # old mode <mode>
+                # new mode <mode>
+                # new file mode <mode>
+                #
+                # This information will also be ignored:
+                # similarity index <number>
+                # dissimilarity index <number>
+                # index <hash>..<hash> <mode>
+                if line.startswith("deleted file"):
+                    # If the file is deleted, register it but don't walk
+                    # its changes, which are only a set of removed lines
+                    diff_elem.set_change_mode(ChangeMode.DELETE)
+                    parse_state = ParserState.FIND_DIFF_HEADER
+                elif line.startswith("new file"):
+                    # If the file is new, skip it, as it doesn't provide
+                    # any useful information for the report translation
+                    parse_state = ParserState.FIND_DIFF_HEADER
+                elif line.startswith("rename to"):
+                    # A rename operation can be a pure rename or a rename
+                    # plus a set of changes, so keep looking for the hunk
+                    # header
+                    diff_elem.set_change_mode(ChangeMode.RENAME)
+                elif line.startswith("copy to"):
+                    # This is a copy operation, mark it
+                    diff_elem.set_change_mode(ChangeMode.COPY)
+                else:
+                    # Look for the hunk header
+                    (file_linenum, hunk_a_linemax, hunk_b_linemax) = \
+                        parse_hunk_header(line)
+                    if file_linenum >= 0:
+                        if diff_elem.is_change_mode(ChangeMode.NONE):
+                            # The file only carries content changes
+                            diff_elem.set_change_mode(ChangeMode.CHANGE)
+                        parse_state = ParserState.REGISTER_CHANGES
+                    else:
+                        # ... or there could be a diff header
+                        new_diff_elem = parse_diff_header(line)
+                        if new_diff_elem:
+                            # Found a diff header, register the last change
+                            # item
+                            self.add_change_set(diff_elem)
+                            diff_elem = new_diff_elem
+            elif parse_state == ParserState.REGISTER_CHANGES:
+                if (hunk_b_linemax > 0) and line.startswith("+"):
+                    diff_elem.add_change(file_linenum, ChangeType.ADD)
+                    hunk_b_linemax -= 1
+                elif (hunk_a_linemax > 0) and line.startswith("-"):
+                    diff_elem.add_change(file_linenum, ChangeType.REMOVE)
+                    hunk_a_linemax -= 1
+                    file_linenum -= 1
+                elif ((hunk_a_linemax + hunk_b_linemax) > 0) and \
+                        line.startswith(" "):
+                    hunk_a_linemax -= 1 if (hunk_a_linemax > 0) else 0
+                    hunk_b_linemax -= 1 if (hunk_b_linemax > 0) else 0
+
+                if (hunk_a_linemax + hunk_b_linemax) <= 0:
+                    parse_state = ParserState.FIND_HUNK_OR_DIFF_HEADER
+
+                file_linenum += 1
+
+        if diff_elem is not None:
+            self.add_change_set(diff_elem)
+
+    def get_change_sets(self):
+        # type: () -> dict
+        return self.__diff_set
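The hunk-header regular expression used by the parser can be exercised in isolation; the sample header below is made up for illustration:

```python
import re

# The hunk-header regex from unified_format_parser.py: group(1) is the
# old-range length, group(2) the new-range start line, group(3) the
# new-range length.
git_hunk_header = re.compile(r'^@@ -\d+,(\d+) \+(\d+),(\d+) @@.*$')

match = git_hunk_header.match("@@ -36,9 +40,32 @@ def main(argv):")
```

Note that git omits the `,<length>` part of a range when it is exactly one line (e.g. `@@ -5 +5,2 @@`), a form this pattern would not match.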
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 19 09:46:33 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] xen/misra: add diff-report.py tool
Date: Fri, 19 May 2023 10:46:12 +0100
Message-Id: <20230519094613.2134153-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230519094613.2134153-1-luca.fancellu@arm.com>
References: <20230519094613.2134153-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new tool, diff-report.py, that can be used to compute the
difference between reports generated by the xen-analysis.py tool.
Currently this tool supports the Xen cppcheck text report format.

The tool prints every finding that is in the report passed with -r
(the check report) and not in the report passed with -b (the baseline).

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v1:
 - Removed 2 methods from class ReportEntry that landed there by
   mistake during a rebase.
 - Made the script compatible also with python2 (Stefano)
---
 xen/scripts/diff-report.py                    |  80 ++++++++++++++
 .../xen_analysis/diff_tool/__init__.py        |   0
 .../xen_analysis/diff_tool/cppcheck_report.py |  44 ++++++++
 xen/scripts/xen_analysis/diff_tool/debug.py   |  40 +++++++
 xen/scripts/xen_analysis/diff_tool/report.py  | 100 ++++++++++++++++++
 5 files changed, 264 insertions(+)
 create mode 100755 xen/scripts/diff-report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py

diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
new file mode 100755
index 000000000000..f97cb2355cc3
--- /dev/null
+++ b/xen/scripts/diff-report.py
@@ -0,0 +1,80 @@
+#!/usr/bin/env python3
+
+from __future__ import print_function
+import os
+import sys
+from argparse import ArgumentParser
+from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
+from xen_analysis.diff_tool.debug import Debug
+from xen_analysis.diff_tool.report import ReportError
+
+
+def log_info(text, end='\n'):
+    # type: (str, str) -> None
+    global args
+    global file_out
+
+    if args.verbose:
+        print(text, end=end, file=file_out)
+
+
+def main(argv):
+    # type: (list) -> None
+    global args
+    global file_out
+
+    parser = ArgumentParser(prog="diff-report.py")
+    parser.add_argument("-b", "--baseline", required=True, type=str,
+                        help="Path to the baseline report.")
+    parser.add_argument("--debug", action='store_true',
+                        help="Produce intermediate reports during operations.")
+    parser.add_argument("-o", "--out", default="stdout", type=str,
+                        help="Where to print the tool output. Default is "
+                             "stdout")
+    parser.add_argument("-r", "--report", required=True, type=str,
+                        help="Path to the 'check report', the one checked "
+                             "against the baseline.")
+    parser.add_argument("-v", "--verbose", action='store_true',
+                        help="Print more information during the run.")
+
+    args = parser.parse_args()
+
+    if args.out == "stdout":
+        file_out = sys.stdout
+    else:
+        try:
+            file_out = open(args.out, "wt")
+        except OSError as e:
+            print("ERROR: Issue opening file {}: {}".format(args.out, e))
+            sys.exit(1)
+
+    debug = Debug(args)
+
+    try:
+        baseline_path = os.path.realpath(args.baseline)
+        log_info("Loading baseline report {}".format(baseline_path), "")
+        baseline = CppcheckReport(baseline_path)
+        baseline.parse()
+        debug.debug_print_parsed_report(baseline)
+        log_info(" [OK]")
+        new_rep_path = os.path.realpath(args.report)
+        log_info("Loading check report {}".format(new_rep_path), "")
+        new_rep = CppcheckReport(new_rep_path)
+        new_rep.parse()
+        debug.debug_print_parsed_report(new_rep)
+        log_info(" [OK]")
+    except ReportError as e:
+        print("ERROR: {}".format(e))
+        sys.exit(1)
+
+    output = new_rep - baseline
+    print(output, end="", file=file_out)
+
+    if len(output) > 0:
+        sys.exit(1)
+
+    sys.exit(0)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/xen/scripts/xen_analysis/diff_tool/__init__.py b/xen/scripts/xen_analysis/diff_tool/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
new file mode 100644
index 000000000000..e7e80a9dde84
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
@@ -0,0 +1,44 @@
+#!/usr/bin/env python3
+
+import re
+from .report import Report, ReportError
+
+
+class CppcheckReport(Report):
+    def __init__(self, report_path):
+        # type: (str) -> None
+        super(CppcheckReport, self).__init__(report_path)
+        # This matches a string like:
+        # path/to/file.c(<line number>,<digits>):<whatever>
+        # and captures file name path and line number
+        # the last capture group is used for text substitution in __str__
+        self.__report_entry_regex = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')
+
+    def parse(self):
+        # type: () -> None
+        report_path = self.get_report_path()
+        try:
+            with open(report_path, "rt") as infile:
+                report_lines = infile.readlines()
+        except OSError as e:
+            raise ReportError("Issue with reading file {}: {}"
+                              .format(report_path, e))
+        for line in report_lines:
+            entry = self.__report_entry_regex.match(line)
+            if entry and entry.group(1) and entry.group(2):
+                file_path = entry.group(1)
+                line_number = int(entry.group(2))
+                self.add_entry(file_path, line_number, line)
+            else:
+                raise ReportError("Malformed report entry in file {}:\n{}"
+                                  .format(report_path, line))
+
+    def __str__(self):
+        # type: () -> str
+        ret = ""
+        for entry in self.to_list():
+            ret += re.sub(self.__report_entry_regex,
+                          r'{}({}\3'.format(entry.file_path,
+                                            entry.line_number),
+                          entry.text)
+        return ret
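The report-entry regular expression above can be checked against a sample cppcheck text line (file path and message invented for illustration):

```python
import re

# The report-entry regex from cppcheck_report.py: group(1) is the file
# path, group(2) the line number, group(3) the remainder reused for text
# substitution in __str__.
report_entry_regex = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')

entry = report_entry_regex.match(
    "xen/arch/arm/mm.c(123,21): style: sample finding text")
```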
diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
new file mode 100644
index 000000000000..65cca2464110
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/debug.py
@@ -0,0 +1,40 @@
+#!/usr/bin/env python3
+
+from __future__ import print_function
+import os
+from .report import Report
+
+
+class Debug:
+    def __init__(self, args):
+        self.args = args
+
+    def __get_debug_out_filename(self, path, type):
+        # type: (str, str) -> str
+        # Take basename
+        file_name = os.path.basename(path)
+        # Split in name and extension
+        file_name = os.path.splitext(file_name)
+        if self.args.out != "stdout":
+            out_folder = os.path.dirname(self.args.out)
+        else:
+            out_folder = "./"
+        dbg_report_path = os.path.join(out_folder, file_name[0] + type + file_name[1])
+
+        return dbg_report_path
+
+    def __debug_print_report(self, report, type):
+        # type: (Report, str) -> None
+        report_name = self.__get_debug_out_filename(report.get_report_path(),
+                                                    type)
+        try:
+            with open(report_name, "wt") as outfile:
+                print(report, end="", file=outfile)
+        except OSError as e:
+            print("ERROR: Issue opening file {}: {}".format(report_name, e))
+
+    def debug_print_parsed_report(self, report):
+        # type: (Report) -> None
+        if not self.args.debug:
+            return
+        self.__debug_print_report(report, ".parsed")
diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
new file mode 100644
index 000000000000..4a303d61b3ea
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/report.py
@@ -0,0 +1,100 @@
+#!/usr/bin/env python3
+
+import os
+
+
+class ReportError(Exception):
+    pass
+
+
+class Report(object):
+    class ReportEntry:
+        def __init__(self, file_path, line_number, entry_text, line_id):
+            # type: (str, int, str, int) -> None
+            if not isinstance(line_number, int) or \
+               not isinstance(line_id, int):
+                raise ReportError("ReportEntry constructor wrong type args")
+            self.file_path = file_path
+            self.line_number = line_number
+            self.text = entry_text
+            self.line_id = line_id
+
+    def __init__(self, report_path):
+        # type: (str) -> None
+        self.__entries = {}
+        self.__path = report_path
+        self.__last_line_order = 0
+
+    def parse(self):
+        # type: () -> None
+        raise ReportError("Please create a specialised class from 'Report'.")
+
+    def get_report_path(self):
+        # type: () -> str
+        return self.__path
+
+    def get_report_entries(self):
+        # type: () -> dict
+        return self.__entries
+
+    def add_entry(self, entry_path, entry_line_number, entry_text):
+        # type: (str, int, str) -> None
+        entry = Report.ReportEntry(entry_path, entry_line_number, entry_text,
+                                   self.__last_line_order)
+        if entry_path in self.__entries.keys():
+            self.__entries[entry_path].append(entry)
+        else:
+            self.__entries[entry_path] = [entry]
+        self.__last_line_order += 1
+
+    def to_list(self):
+        # type: () -> list
+        report_list = []
+        for _, entries in self.__entries.items():
+            for entry in entries:
+                report_list.append(entry)
+
+        report_list.sort(key=lambda x: x.line_id)
+        return report_list
+
+    def __str__(self):
+        # type: () -> str
+        ret = ""
+        for entry in self.to_list():
+            ret += "{}:{}:{}".format(entry.file_path, entry.line_number, entry.text)
+
+        return ret
+
+    def __len__(self):
+        # type: () -> int
+        return len(self.to_list())
+
+    def __sub__(self, report_b):
+        # type: (Report) -> Report
+        if self.__class__ != report_b.__class__:
+            raise ReportError("Diff of different type of report!")
+
+        filename, file_extension = os.path.splitext(self.__path)
+        diff_report = self.__class__(filename + ".diff" + file_extension)
+        # Put in the diff report only records of this report that are not
+        # present in the report_b.
+        for file_path, entries in self.__entries.items():
+            rep_b_entries = report_b.get_report_entries()
+            if file_path in rep_b_entries.keys():
+                # File path exists in report_b, so check what entries of that
+                # file path doesn't exist in report_b and add them to the diff
+                rep_b_entries_num = [
+                    x.line_number for x in rep_b_entries[file_path]
+                ]
+                for entry in entries:
+                    if entry.line_number not in rep_b_entries_num:
+                        diff_report.add_entry(file_path, entry.line_number,
+                                              entry.text)
+            else:
+                # File path doesn't exist in report_b, so add every entry
+                # of that file path to the diff
+                for entry in entries:
+                    diff_report.add_entry(file_path, entry.line_number,
+                                          entry.text)
+
+        return diff_report
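The subtraction implemented above keeps an entry of the check report only when the baseline has no entry with the same file path and line number; a minimal model of that rule (paths invented for illustration):

```python
# Mini-model of Report.__sub__: entries are (file_path, line_number) pairs;
# the entry text is not compared, only path and line number.
baseline = {("mm.c", 10), ("mm.c", 20), ("io.c", 5)}
check = {("mm.c", 10), ("mm.c", 30), ("io.c", 5), ("setup.c", 1)}

diff = sorted(entry for entry in check if entry not in baseline)
```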
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 19 09:48:47 2023
Received: ("Tessian outbound e13c2446394c:v136");
 Fri, 19 May 2023 09:48:37 +0000
Received: from 438853850870.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6E59D5B5-2E83-4C41-9A49-D57BC1E61E46.1; 
 Fri, 19 May 2023 09:48:27 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 438853850870.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 19 May 2023 09:48:27 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB8208.eurprd08.prod.outlook.com (2603:10a6:10:3b1::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 09:48:24 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 09:48:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57cf9d14-f62a-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gdBCT2ALX5Wxv5EZGQ6wrQqEMqsMT1tYY2k01sLMPS0=;
 b=8HHmubthp8xQw5hulC4LH4Eiz/RN+JCMVg+HEVEO8y+TokYFFT6qrrKlZsHmgvIfyeqMSPaECgqcSb+HmxSTEcOk1N/rZ8Rkm+g8IAVRRto6I0KDIkJ8Va0AvPTrx8we7ZECV84AQfcpMemi84/rMOToLNRQayH9Yob8yyde3AA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8d4045b0b297e259
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N64MGL6x8aWxkr47NOizVmqrR0LVrncssg3Aog02CtgEIyA60EW0hRWa8Qb8IM0nMYe849pDAENkUdPWM/Be4j2g4ObeBAE0Dp5PTZX3PlXEJ/WTk/Eqv2AF2g+O9VKDgtCPc/n6Ka1MilAKkMxJsaM11VH3TkITolCTDfWGAHE+BGf9IWE+z20AYj6WYqhB9oPxR6mlAUoWBJytqcUz3EpQzNtXEoOgCxmiDyqjBqVr63Xa+mGNetNF6ID11gv62EnMLje/RYUck0Rsnkl6Y0a5r0ymrFrccOkSbhBp74D75SGvoqF/uKjRYpQsdeIzf0BKZWZLP6cD3lx9HlVUPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gdBCT2ALX5Wxv5EZGQ6wrQqEMqsMT1tYY2k01sLMPS0=;
 b=DDBD5I/IbYqAGMuEtSlUoOBu9GemtXFdyZVCcR24Vt3GpiW/e1zFyxM+3oCh8fUtHHGMBVnlCtvHOzx6md+o2w7OcxnoftqsXIqRZSUomAOgxOoH0+CTlftRcsPmp1JAkEpq1VMhwNk6MX78z3DWH6efb7Ni7wyiVKdlxWDqCLNqdmnhLdFQSE/3pTHjlPzysTX498uCiaxQ+N9UhcSfWnt/i/ofQfPu47+94FMHK2Jf/77kI1pTzcYQMpHZJ4Q1kx6FXzef3YVIlz+7BGl0JfFPaUaNcpfjl/rI6ufGEUvsqDpfzohoDuzp5dJPCLHHHIWJ7BmQd+ncEAT9l6Q2nw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gdBCT2ALX5Wxv5EZGQ6wrQqEMqsMT1tYY2k01sLMPS0=;
 b=8HHmubthp8xQw5hulC4LH4Eiz/RN+JCMVg+HEVEO8y+TokYFFT6qrrKlZsHmgvIfyeqMSPaECgqcSb+HmxSTEcOk1N/rZ8Rkm+g8IAVRRto6I0KDIkJ8Va0AvPTrx8we7ZECV84AQfcpMemi84/rMOToLNRQayH9Yob8yyde3AA=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Xen-devel
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 3/3] xen/misra: xen-analysis.py: use the relative path
 from the ...
Thread-Topic: [PATCH 3/3] xen/misra: xen-analysis.py: use the relative path
 from the ...
Thread-Index: AQHZfooyuAY6OgOAbESkC2BSWxnO0K9dtEiAgAOQFACAAAETgIAAA8GAgAAnyAA=
Date: Fri, 19 May 2023 09:48:23 +0000
Message-ID: <FAFAD44A-0241-432A-8439-F3D92D4D3A53@arm.com>
References: <20230504131245.2985400-1-luca.fancellu@arm.com>
 <20230504131245.2985400-4-luca.fancellu@arm.com>
 <alpine.DEB.2.22.394.2305161743520.128889@ubuntu-linux-20-04-desktop>
 <a0d6197a-53e8-0121-c7e0-ddbdaf970c7e@amd.com>
 <B087CAA6-0DCD-48C8-8199-A328BDA649A8@arm.com>
 <acb39086-69f1-4bd0-96f8-d9c9420cbb41@amd.com>
In-Reply-To: <acb39086-69f1-4bd0-96f8-d9c9420cbb41@amd.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB8208:EE_|DBAEUR03FT057:EE_|DU2PR08MB10186:EE_
X-MS-Office365-Filtering-Correlation-Id: b10c7300-fd20-4b5a-dd29-08db584e385a
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 tzWXpvvKHKkHY2dXI1LW8QmAJ2oTQ02CxZg32VFzeqeo4SUTxaW3/h+FfbWhlS3pK2FFKSC5a/vuvXX+gNeUqvPqPkg5BQA4EC5sf1LxdZrYN1SYte4sfqs9y8mJhFeC8HI20D5qIA5UG27eA20YjV0n33s4vnrwdpeuKrWl2NINlNTNnM/XBlXG9E7oZewVt/n+r7vtAJc4tx0Uk/QMdp7mUm8RKipyq6nBHE+4ASInlzImr1hxMsO//tHkjPrSCUBxKuj7TEGnCBvDmmieejahGpF9AtxWfdHvxflrhNuu/nbJ0qyJfWhWftlqT1KV0+xfEIITe8J6EKKgjza9GRmRaCpbSnulCbPKAhy4vIcIP/QzrsH5xAvZ76leQUv9dKW+m9aqFVeIYiMbThJs3zEJrIBt4UzniG3vg7hLcDay40BTOnPXuGjxdR9FeRhZmQ5+TUw06q6Gc6KXSx/rh4MgVlmkb/zCt7Fq6mOMbjydiYHP/qDTpNTWNOF2mV5LPSgr5hoCJyIDnFu0I0AfaxZ1eYCRDT/4GdhmYFKLbarwdkJDOrRTdMV3mBPjUqhcsperK66HP3sfht5kugnFYc7QQpVFZNdatbbqZVx7xc1fWdUj23sNqQVF9P+aHfq7Ks9xqfwC0sr7UfV+gou32Q==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(396003)(346002)(39860400002)(376002)(136003)(451199021)(38070700005)(36756003)(33656002)(86362001)(91956017)(54906003)(316002)(6916009)(66946007)(66556008)(66476007)(64756008)(76116006)(66446008)(966005)(4326008)(478600001)(71200400001)(6486002)(8936002)(8676002)(5660300002)(2906002)(38100700002)(122000001)(41300700001)(2616005)(26005)(186003)(53546011)(6506007)(6512007)(83380400001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <487662D172FD0045B92E79C11CE1F525@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8208
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f06f2006-68c2-46c0-2c2f-08db584e2fea
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	c/otDLz/nyCz9pidzWvj0fFDISfvcwo7zp7p0Sfmp56HlMRFBgtIgsOuKY88MeUuxpslSZi4Xp73tc7xIlPxa7zsKFNRZiy6p4c492ZYGa7NMOPa+ucKvZQTPLMXAemRNtBnfYL2izyeBhqEHLX5o8Jq9vaBVV+qScvRJaxAb4/V/wgWuRpa5mtCoAhZ6SqQmcBxwlPyKERooI8yBzdDe6zXUqtaaoH4QilQUYfWHSUKTr6Ts+G8pXluYppjavYlpdxfsuvPGa4P9oSXEnbXB6FZGwOCxTCym5sXxRNKlotCgZhDp6/qef9W9raiz0w788Jf2xn9Kqys6Eq5FYy+i6tuFv/0GvmukIt3I5ths0EwNvnh145v4iWpgj0XeHS+5LlgZ4acTWef3b0MqD0KbwiDlvGqa9xmYNfm6jkgf7zoDm5XihMfuEZgB6aZuCGALoxDsr3nySQhxCKWDGm/LLEk/UyUeTBLXDhad8ME6SVgGN0bY3hOK9oFkFxHen7vLvyMR1xx/0nuiJFPbpoKIC6lGx3fVag+0NsYAZPTB33ERl7H11BFuDc8cEZ5DN4aZt/5GjuGehxiz+YhpOSKj7ryTePMBYPXblj4pRdLY1z+ErJuShYJWHiR9tDqKxFAx8iHSY7/Gorp0custlu8IOhTs5dSFa6g5/tXWzgdxHbJVTFsc2nTRlTB2ZsXTMCDxNBdElyvENQIx9Hmn/LkGAt01VFN5PIXsbwkYINqwBI=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(376002)(136003)(346002)(451199021)(46966006)(36840700001)(40470700004)(83380400001)(36860700001)(186003)(966005)(336012)(47076005)(2616005)(54906003)(478600001)(6486002)(6512007)(26005)(53546011)(6506007)(8936002)(6862004)(36756003)(5660300002)(40460700003)(81166007)(33656002)(82740400003)(356005)(40480700001)(82310400005)(41300700001)(70586007)(8676002)(70206006)(4326008)(2906002)(316002)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 09:48:37.9229
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b10c7300-fd20-4b5a-dd29-08db584e385a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB10186

> On 19 May 2023, at 08:25, Michal Orzel <michal.orzel@amd.com> wrote:
>
>
> On 19/05/2023 09:12, Luca Fancellu wrote:
>>
>>
>>> On 19 May 2023, at 08:08, Michal Orzel <michal.orzel@amd.com> wrote:
>>>
>>> Hi Luca,
>>>
>>> On 17/05/2023 02:44, Stefano Stabellini wrote:
>>>>
>>>>
>>>> On Thu, 4 May 2023, Luca Fancellu wrote:
>>>>> repository in the reports
>>>>>
>>>>> Currently the cppcheck report entries shows the relative file path
>>>>> from the /xen folder of the repository instead of the base folder.
>>>>> In order to ease the checks, for example, when looking a git diff
>>>>> output and the report, use the repository folder as base.
>>>>>
>>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>>
>>>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>>>> Tested-by: Stefano Stabellini <sstabellini@kernel.org>
>>>
>>> I know this patch is now committed but there is something confusing here.
>>> At the moment, in the cppcheck report we have paths relative to xen/ e.g.:
>>> arch/arm/arm64/lib/bitops.c(117,1):...
>>>
>>> So after this patch, I would expect to see the path relative to root of repository e.g.:
>>> *xen/*arch/arm/arm64/lib/bitops.c(117,1):...
>>>
>>> However, with or without this patch the behavior is the same.
>>> Did I misunderstand your patch?
>>
>> Hi Michal,
>>
>> Thank you for having spotted this, during my tests I was using Xen-analysis.py so that it
>> calls the makefile with out-of-tree build, I’ve found after your mail that when it calls the makefile
>> with in-tree-build, cppcheck is run from /xen/xen and it causes it to produce relative path from
>> there in the TXT fragments, showing the issue you observed.
> Ok, the way I test it is the same as in our gitlab CI so this needs to be fixed.

Here it is the fix: https://patchwork.kernel.org/project/xen-devel/patch/20230519093019.2131896-4-luca.fancellu@arm.com/

I’ve updated my internal test script to test it on in-tree and out-of-tree makefile invocation. Hope I did not forget anything,
apologies for the inconvenience!


>
>>
>> I have ready a fix for that and I’ll push that soon.
> Thanks.
>
> ~Michal
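The in-tree vs out-of-tree discrepancy discussed in this thread (report paths relative to the repository root versus relative to xen/, depending on cppcheck's working directory) can be illustrated with a small sketch. This is a hypothetical helper, not code from the patch; the repository path and file names are illustrative only.

```python
import os

def to_repo_relative(entry_path, cwd, repo_root):
    """Rebase a (possibly relative) report path onto repo_root.

    entry_path: file path as emitted in a report entry.
    cwd: the directory the analysis tool ran from.
    repo_root: the repository base folder to normalize against.
    """
    absolute = os.path.abspath(os.path.join(cwd, entry_path))
    return os.path.relpath(absolute, repo_root)

repo = "/src/xen-repo"
# Out-of-tree: the tool runs from the repository root, so entries already
# carry the leading "xen/" component.
out_of_tree = to_repo_relative("xen/arch/arm/arm64/lib/bitops.c", repo, repo)
# In-tree: the tool runs from <repo>/xen, so entries lack "xen/"; rebasing
# against the repository root makes both cases produce the same path.
in_tree = to_repo_relative("arch/arm/arm64/lib/bitops.c",
                           os.path.join(repo, "xen"), repo)
```

Both invocations yield `xen/arch/arm/arm64/lib/bitops.c`, which is the behavior Michal expected from the original patch.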


From xen-devel-bounces@lists.xenproject.org Fri May 19 09:52:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 09:52:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536921.835648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwmL-0002Dd-IQ; Fri, 19 May 2023 09:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536921.835648; Fri, 19 May 2023 09:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzwmL-0002DW-Fh; Fri, 19 May 2023 09:52:05 +0000
Received: by outflank-mailman (input) for mailman id 536921;
 Fri, 19 May 2023 09:52:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oBaS=BI=gmail.com=andy.shevchenko@srs-se1.protection.inumbo.net>)
 id 1pzwmK-00029R-Ps
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 09:52:04 +0000
Received: from mail-qv1-xf34.google.com (mail-qv1-xf34.google.com
 [2607:f8b0:4864:20::f34])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ce2ca888-f62a-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 11:52:03 +0200 (CEST)
Received: by mail-qv1-xf34.google.com with SMTP id
 6a1803df08f44-62388997422so12307466d6.1
 for <xen-devel@lists.xenproject.org>; Fri, 19 May 2023 02:52:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce2ca888-f62a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684489922; x=1687081922;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lYlFoKtA5IkPlUC08PPhE1HRAZ9SjNGPuMwfpP8csYw=;
        b=nYxSjSYcaXL9wBLYYxr18nPzFhoS6t/XMYh9wWv4olQ5Ajdh0ltDPhpnV6tMT0kw2L
         7PVzfD8g+760Xthp7t3msGPUjuZ59gQw4rwzaEmCtQ8hyGpsue8N+xUoLitFBMGgdB9W
         jVGc5vSq1XyykVvXwliYjJhs52PRT3SjBYJ7iwE5d7/KP6vVYRumAVbxhCiJ5soKreN4
         brAcrHDbctTGfUlLlmhxftAZM3pPIDGyFEnEQC4iY77FwsJnKWjlVhqa4CFYiyr2O4Qf
         7YR2QIMjUYV5AhK5aiXR9mtSexGX2S4hbthpMwYW3yJ58fFaMoJ96fz6odyG79IA5mY6
         tJ0Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684489922; x=1687081922;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=lYlFoKtA5IkPlUC08PPhE1HRAZ9SjNGPuMwfpP8csYw=;
        b=inIYRxKo9jYgXzMHUioK984EqLXOJ9fFsVjWzvdy1DV9FHLaO8RkxJ63QvEikp7VR3
         LA4X6+T+Yimob94fAM7cN+L51CMhmA0KjVPguV/ZnPCw83hay3s9CBLZLunG3OGuOqxg
         7L+wu5GsF29rhVpx6wsJ3ZQRHsjp1f/v7C8gWSMm8p9jgNdgF7mclDArJWI7lDXZr3Y1
         hRYutDsXXJo+R6S3fA35WhJ648Gcsbr6bw7SW6L7I5f9f8fVtXZBU9Rq+SQMf6lq8lrg
         y/wOTyYyDB+2iuDd22QwvJEEggF0UFnh4qjdjHQeP4VbXb8Eyvsu2AJOmIaJ6mWSIcgN
         G+vw==
X-Gm-Message-State: AC+VfDypLrMYCjEqEZ6tW6Bzo5r89bFCf/aXcnlemqmV1/JKnegd7h1e
	2nxoayiIo1Bf8DSaJ/7316qkmN0cuPNaxjEbG4g=
X-Google-Smtp-Source: ACHHUZ5LXB2GGpTth50QwGaF1r9ATLp3VHmm5/m7EW57RPp9upvSN9LQS14BGDeYC2Kr+/Vi0wX8BeWxerm5Q629S9k=
X-Received: by 2002:a05:6214:5183:b0:61a:fe65:4481 with SMTP id
 kl3-20020a056214518300b0061afe654481mr2595650qvb.51.1684489921968; Fri, 19
 May 2023 02:52:01 -0700 (PDT)
MIME-Version: 1.0
References: <20230516193549.544673-1-arnd@kernel.org> <a78d9dcd-0bc1-7e98-a8f1-e5d6cd0c09a3@intel.com>
In-Reply-To: <a78d9dcd-0bc1-7e98-a8f1-e5d6cd0c09a3@intel.com>
From: Andy Shevchenko <andy.shevchenko@gmail.com>
Date: Fri, 19 May 2023 12:51:25 +0300
Message-ID: <CAHp75VeX9=1+apLMZsidudUziO_s4WUb=HOd0mraRHL17DN+cw@mail.gmail.com>
Subject: Re: [PATCH 00/20] x86: address -Wmissing-prototype warnings
To: Dave Hansen <dave.hansen@intel.com>
Cc: Arnd Bergmann <arnd@kernel.org>, x86@kernel.org, Arnd Bergmann <arnd@arndb.de>, 
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "H. Peter Anvin" <hpa@zytor.com>, 
	Andy Lutomirski <luto@kernel.org>, Steven Rostedt <rostedt@goodmis.org>, 
	Masami Hiramatsu <mhiramat@kernel.org>, Mark Rutland <mark.rutland@arm.com>, 
	Juergen Gross <jgross@suse.com>, "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>, 
	Alexey Makhalov <amakhalov@vmware.com>, VMware PV-Drivers Reviewers <pv-drivers@vmware.com>, 
	Peter Zijlstra <peterz@infradead.org>, Darren Hart <dvhart@infradead.org>, 
	Andy Shevchenko <andy@infradead.org>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	"Rafael J. Wysocki" <rafael@kernel.org>, linux-kernel@vger.kernel.org, 
	linux-trace-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, 
	linux-pci@vger.kernel.org, platform-driver-x86@vger.kernel.org, 
	xen-devel@lists.xenproject.org, linux-pm@vger.kernel.org, linux-mm@kvack.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 19, 2023 at 12:56 AM Dave Hansen <dave.hansen@intel.com> wrote:
> On 5/16/23 12:35, Arnd Bergmann wrote:

> I picked up the ones that were blatantly obvious, but left out 03, 04,
> 10, 12 and 19 for the moment.

Btw, there is a series that went unnoticed

https://lore.kernel.org/all/20211119110017.48510-1-andriy.shevchenko@linux.intel.com/

I dunno why.

-- 
With Best Regards,
Andy Shevchenko


From xen-devel-bounces@lists.xenproject.org Fri May 19 10:10:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 10:10:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536929.835658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzx4J-0004p9-4G; Fri, 19 May 2023 10:10:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536929.835658; Fri, 19 May 2023 10:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzx4J-0004p2-0O; Fri, 19 May 2023 10:10:39 +0000
Received: by outflank-mailman (input) for mailman id 536929;
 Fri, 19 May 2023 10:10:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PdMp=BI=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1pzx4H-0004ow-As
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 10:10:37 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 63ef2352-f62d-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 12:10:35 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id 590545C014B;
 Fri, 19 May 2023 06:10:32 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Fri, 19 May 2023 06:10:32 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 19 May 2023 06:10:29 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63ef2352-f62d-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:sender:subject:subject:to:to; s=fm1; t=
	1684491032; x=1684577432; bh=WmoTlhd7e1kwJpAAsaCCOOQJxMdCzpL/nDP
	L1E42Cfo=; b=P5i+sNrzGgkybQ2llHpdSW2AKLSaYvzjLV/X1tKoCOnMEuquyum
	Xz7L7fUs5n+bcz3EquSOz/LyYzo4+bnInf6pePRv8KXFTe1Ri70wn0/aQTymk6Nj
	3cOgoNnjULfVW8UqeTZdOxRVQ+EBZ+LWRXrUhP2M9QLZF0gH6sQkNPc9qKZCOrO9
	9MfZ9MN3Y9m8U5zroM3O8KCYAEdroMxEt0djE4C7v+kuG5NSqqLCvp420255lmj3
	Zfhv2QVb/G0g/MmOCnqA0OBNGNTtKmfHSBfBvNe+eSi5vldVRS0rB6sPFGLPp1+q
	imYfiENJ01Bse+gNazzWEmT6kALsLF8G68A==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1684491032; x=1684577432; bh=WmoTlhd7e1kwJ
	pAAsaCCOOQJxMdCzpL/nDPL1E42Cfo=; b=xGmbjPzlgjhHajV61PT0fxHOkTPA9
	P/BKddYJeZL8r3HwmUtZQoGkTKDM3sMX2G2E47LEKMHpnsftch7DUZCK3BGFhY94
	qp7VHIhqnZbVw9bKSeYk7xj1S+ywBdu1R3Z+/eMTB4h0v5/lNdk7d0UjJ6+USLn/
	P/hvq8ohUkOE9QafrdILUftyhnmjgg631lGCldrTurUHMVTM73uZKI3dDsT/NJWJ
	X/UjqL6iaIaAhkO2lv6CnDY+vYB14qlO0xDRW15Eu4p2VpQ+NsZcRo5gdaBSL5/u
	UwkL3yHLMhGBwUUudXDfwgnrAIf6s8EPADQJkSMIokbsiTEV/s8yHvbEw==
X-ME-Sender: <xms:F0tnZHkDT-Yelrchp8VYR0XaK9crha449BTfficNd73wBREUMflA5A>
    <xme:F0tnZK1MRXbgM7v-C9t2xZc8E_Or2jhOZiSv4A9o_b74SKo8qEk7L9IO1Y9kSOg3U
    ai88nLPPn61bA>
X-ME-Received: <xmr:F0tnZNprQUsXTFl7lvK1CHl92nb7NWLH5Vh17LdECZL8ASWuJUyNkriWpp4gXXVb2KbHmK8hrICDHD5zZXDlUakiFA6EOjfasJI>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeeihedgvdeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:F0tnZPkznI8Dzysbg3Je1q1Haflx3_kES29BO_hzPvRFEZZya4yEfA>
    <xmx:F0tnZF2bA5DdY11UUPtFQPCIHoGt_tmnpY_iR1S6AbEzh2Q89gSLLg>
    <xmx:F0tnZOtQwe94V3TNIOi1__ZV8Bk1B_Z4LPUSCGxb79tkGw_tF69vFw>
    <xmx:GEtnZE3OXQaasri_Ttvfhpz8-Ksd1lV6mvoGRbShRHswItGlXDiKsw>
Feedback-ID: i1568416f:Fastmail
Date: Fri, 19 May 2023 12:10:26 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront
 is enabling
Message-ID: <ZGdLErBzi9MANL3i@mail-itl>
References: <20230518134253.909623-1-hch@lst.de>
 <20230518134253.909623-3-hch@lst.de>
 <ZGZr/xgbUmVqpOpN@mail-itl>
 <20230519040405.GA10818@lst.de>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="A3eIvvbx7EttM1KL"
Content-Disposition: inline
In-Reply-To: <20230519040405.GA10818@lst.de>


--A3eIvvbx7EttM1KL
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 19 May 2023 12:10:26 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront
 is enabling

On Fri, May 19, 2023 at 06:04:05AM +0200, Christoph Hellwig wrote:
> On Thu, May 18, 2023 at 08:18:39PM +0200, Marek Marczykowski-Górecki wrote:
> > On Thu, May 18, 2023 at 03:42:51PM +0200, Christoph Hellwig wrote:
> > > Remove the dangerous late initialization of xen-swiotlb in
> > > pci_xen_swiotlb_init_late and instead just always initialize
> > > xen-swiotlb in the boot code if CONFIG_XEN_PCIDEV_FRONTEND is enabled.
> > >
> > > Signed-off-by: Christoph Hellwig <hch@lst.de>
> >
> > Doesn't it mean all the PV guests will basically waste 64MB of RAM
> > by default each if they don't really have PCI devices?
>
> If CONFIG_XEN_PCIDEV_FRONTEND is enabled, and the kernel's isn't booted
> with swiotlb=noforce, yes.

That's "a bit" unfortunate, since that might be a significant part of the
VM memory, or if you have a lot of VMs, a significant part of the host
memory - it quickly adds up.
While I would say PCI passthrough is not very common for PV guests, can
the decision about xen-swiotlb be delayed until you can enumerate
xenstore to check if there are any PCI devices connected (and not
allocate xen-swiotlb by default if there are none)? This would
still not cover the hotplug case (in which case you'd need to force it
with a cmdline), but at least you wouldn't lose much memory just
because one of your VMs may use PCI passthrough (so you have it enabled
in your kernel).
Please remember that the guest kernel is not always under full control of
the host admin, so making guests lose 64MB of RAM always in the default
setup isn't good for customers of such VMs...

-- 
Best Regards,
Marek Marczykowski-G=C3=B3recki
Invisible Things Lab

--A3eIvvbx7EttM1KL
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmRnSxEACgkQ24/THMrX
1yxoZQgAigYwLANKEDjGQFkztgAogWxy1LFDTtNJ3E+BQRNBEzsMU2jP0ND6oIWA
fuzuDMHRtd7zAop8RQSHwJ+x9OTpPlBR1a7tSxaszuF9tm+l1lWN/6M+fFKNNG+C
C8hCEe5NjlcrGsCOzfsPrU2/141dN/1DayOWQ6DPpBawF7PBOZrhqdEKVV+SfVEL
D7HM1k8hZj8Nxn39zU0AztoC4HpnhA/ovojpuL7HhyKrs/PbUFgQeJJhYAWDxsNI
1CGjRlyzNFMFBtmC1r7foXOX8AKpcOGeGLcnw5aoMCBnAlZi7rUE/WY+fHGud1tk
iPlvgAJ6SgmEF/kRu7VrzljzR8eZZA==
=fKf1
-----END PGP SIGNATURE-----

--A3eIvvbx7EttM1KL--


From xen-devel-bounces@lists.xenproject.org Fri May 19 10:24:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 10:24:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536936.835673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzxHF-0006OJ-8g; Fri, 19 May 2023 10:24:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536936.835673; Fri, 19 May 2023 10:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzxHF-0006OC-5s; Fri, 19 May 2023 10:24:01 +0000
Received: by outflank-mailman (input) for mailman id 536936;
 Fri, 19 May 2023 10:24:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pzxHE-0006O6-1P
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 10:24:00 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20621.outbound.protection.outlook.com
 [2a01:111:f400:fe12::621])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 424cd24e-f62f-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 12:23:55 +0200 (CEST)
Received: from DU2PR04CA0210.eurprd04.prod.outlook.com (2603:10a6:10:28d::35)
 by PAVPR08MB9089.eurprd08.prod.outlook.com (2603:10a6:102:32a::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Fri, 19 May
 2023 10:23:48 +0000
Received: from DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28d:cafe::e3) by DU2PR04CA0210.outlook.office365.com
 (2603:10a6:10:28d::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 10:23:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT011.mail.protection.outlook.com (100.127.142.132) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.7 via Frontend Transport; Fri, 19 May 2023 10:23:47 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Fri, 19 May 2023 10:23:47 +0000
Received: from f3f839bfce5b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 93AA62C8-995D-4139-886A-3B1A9C3BABDA.1; 
 Fri, 19 May 2023 10:23:41 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f3f839bfce5b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 19 May 2023 10:23:41 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB8423.eurprd08.prod.outlook.com (2603:10a6:10:405::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 10:23:39 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 10:23:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 424cd24e-f62f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DpVyXnAofFG5X42IZvUp2FuIueTA9OH2z4soqWItk6A=;
 b=FvQX18rFZ3bf+T5G6BAkZs+AyiEsqC49RxNVEh7ITiHelMe+l1/TNoaPpoIJrKUBIUsoOZFL0YrXFScCj/C6xlvgi0S/n9Ry4UQPDIXR02aroNN0gRRfRpCvgtgAzMF0tihlrp/M0cZ8YTHfI8zGCM/V66yWWSvB+35babSdlCg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 410ce92359396816
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Gov6Usji3yYyzRkvNIv9GBv5i6rbNvnnABdppAdmzmT0/WnOZCziZQpbOqOOI9igngIBL9EUYmUapKUsGHboxglLYsg0xAa3VenEnsRsuwpBq7qyA3iAeOG2dMd95qRbX+Fs5mU5crhW37pZGlsKA8XKlAG+sXp7nO99KlLkzRKHVU7zIoY3pwoy7MO6kqN9DMPNZAslQFCjkNPrBX1JT6zqPFg9VKEPaDkJIB8GAwFR2q0U+mSo4MZwMOAbCccDGlsLubkhGR2M0bX6Cx9Ti9ZECEQU2ODAy4EctvsHBHpB/Urbpfwz0AoX/jGhpVA/ZjkvPZmwi3HSf9Ovo0wB2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=DpVyXnAofFG5X42IZvUp2FuIueTA9OH2z4soqWItk6A=;
 b=bxtktPbkZe4Li7crQdeTIdf7RT3V8nePHbIEF2MzvEsZb25dbYp8sbl8OA3maQirbtPLHLN5TZn42jp1ID+xnVRHwOv8atzzlZN+D34w12udtzFUyvX7zMm3rsW1YjP0H9oTShIyHUbHBBnB2bxmDLu0mAY191d7ROhPwojVwbfMwv1HdAW6GUbatxUJroOQQosEZaB9Khjr7RBjUqlsiz0AyAhckYqBM224TeQw0lYbBAATxL0Y4pSD1D0ZzTs5VNOTI0Yz6oLz+L+cD4G9OJ+rCAtXjnzUGxOmzGr9ocIe6bOuWhnLrHlNl/jWYWRHfdUDH1NOlvdw5PeI65np1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DpVyXnAofFG5X42IZvUp2FuIueTA9OH2z4soqWItk6A=;
 b=FvQX18rFZ3bf+T5G6BAkZs+AyiEsqC49RxNVEh7ITiHelMe+l1/TNoaPpoIJrKUBIUsoOZFL0YrXFScCj/C6xlvgi0S/n9Ry4UQPDIXR02aroNN0gRRfRpCvgtgAzMF0tihlrp/M0cZ8YTHfI8zGCM/V66yWWSvB+35babSdlCg=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 2/2] xen/misra: diff-report.py: add report patching
 feature
Thread-Topic: [PATCH 2/2] xen/misra: diff-report.py: add report patching
 feature
Thread-Index: AQHZijbbQbnxRobwCkSJohEa6eKRVa9hY3kA
Date: Fri, 19 May 2023 10:23:39 +0000
Message-ID: <CA6576E0-E49F-4E36-9363-CEB23B508DCE@arm.com>
References: <20230519094613.2134153-1-luca.fancellu@arm.com>
 <20230519094613.2134153-3-luca.fancellu@arm.com>
In-Reply-To: <20230519094613.2134153-3-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB8423:EE_|DBAEUR03FT011:EE_|PAVPR08MB9089:EE_
X-MS-Office365-Filtering-Correlation-Id: 70f612db-fe20-45b1-a226-08db58532201
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 +9haEDQXvfuSFVMIc/h0iZ2hakbh0Cj1PmuRK0ET0i2BXcpD8xcHMurfQWXJ9ZkYqWs9x38yA4+4yKJkIpfGRSUlabtTSvUoei73KBn+1blVwTQuf5K/4xC9FBFPuTXmSX55nlszQvsZroqk/1XR9Tq++PGWInJbQDhOE9iD+p0gj1JgfG0907YIZ0UAypwl3oz8XgH5Ux8WCpahZuQr7EA1arisAzSxzIblCNtTgHnX16GyNOQi0GhuegADzNbXR7wDIO0t+oZovapl+NsKhrp09qffT/pxJR9pabjWu/T6nQr3Q8zhO7XHEeXaO+xKoRiRvreGxz745nRQHkYCtnqcidof78i0RzoDWfgdfJDtDKK6CtoVT898E8OCy4YBoMDUExM+sVrWnRfFZYkPiA1eXMItdIbF2VZiNmxkVtVSicj/uGTr6x0CeqTQ/Nv+Q7XMQ34ImWzJAZexLkx9BjkH/4qhBLgNk5Y5foh5eCVvcTz8AwIqHV0ANpfaP3p9omFhN7hvPHi2uw/xx4Yuv+s0F5SxFP8z3kpyrdtb86s7W9Zwdif976BDzeeqSYur+ObeOelD8Kl/Gac1RQqJyaDpwX5S36hYCKFkcDaMkh2czkrgBJZsUCZLfU+LnpF9cA2HNoz3t6rhy3Q8uHZzyQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(136003)(376002)(396003)(39860400002)(366004)(346002)(451199021)(478600001)(316002)(5660300002)(8936002)(8676002)(41300700001)(4326008)(6916009)(66446008)(2906002)(64756008)(66476007)(91956017)(76116006)(66556008)(66946007)(54906003)(33656002)(36756003)(6486002)(6512007)(186003)(6506007)(26005)(38070700005)(53546011)(38100700002)(86362001)(2616005)(122000001)(71200400001)(83380400001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <D0F63F1F421E2D459BEB18FE28870D9F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8423
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f0a161c0-020d-4b26-b043-08db58531cfb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2X9pwNg6VbXtMJHkyp3PC2zWvmdetrzRYYWGkm4FzUt/A1S9dFoUpvPAgtgu/Z1l0yTsSbREiyzS+niG022eP3DeRWNOiotqjvnxq8lmXQZ2G1DNoz1TWlbRUJA+n69kHJUfwPGJj+oz6gJ6wdLHRtbHyxEZYN+4dvB7n6m4U3IMu3YMljtUyQpBKRGG5papuG+tj7aeV3pQYBRzL1/kBfbKWGAu1VuTrylEv58KUcsmNnuCbYWKR+2Qq8VvAQMronfobpIRiUV82sOvCnfYyBikqZXtkIT2679WsWfYFrNaKAFnXd6v+j47cQLXkjmJzGB1bCcTzDkzEL0ELl26U1YXdlIzKjrX44lT7beRJAWDDwa/I35d3oofZ6GN4TaqwjlSwcgnwcFiFcOw+TAw9QWUmXG0TweuRxpragIBFzt0nwiO2nq3kUW9fvLTmSZpU3exOvroQt2P0bsLRdzPBeWUHkVNnMVWnk2NMYwWJ1q9C0KySAfmFMAKFwwvlpmmafK/GaaQdMRVfij55JlwnhAbT8KjqMptNMs+yUcsTXXvSJpPUJeqsJ92aSMPJrARCD6I0hLCdjjyd9cVdLv+kGPwrUSPsgvx2h2Q6G3OD7wI+lWxE8hI9FgdyzpctqfXH7nePv+CeIYlE3awuGntJsmu9+8zlQthbAziflqIoW0HooUzf1FKW0xrR2KZTaz2LeTO+xb+3tUThPD1x2xgn20kPyXZebFoL8R8S3dFBSU0lNYoKSqMLJsKX6oHpHak
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(39860400002)(136003)(346002)(376002)(451199021)(46966006)(40470700004)(36840700001)(70206006)(70586007)(5660300002)(54906003)(8936002)(8676002)(478600001)(4326008)(6916009)(316002)(41300700001)(6486002)(40460700003)(36860700001)(81166007)(40480700001)(83380400001)(6506007)(53546011)(47076005)(26005)(336012)(356005)(82310400005)(2616005)(82740400003)(86362001)(6512007)(186003)(33656002)(2906002)(36756003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 10:23:47.8650
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 70f612db-fe20-45b1-a226-08db58532201
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9089



> On 19 May 2023, at 10:46, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> Add a feature to the diff-report.py script that improves the comparison
> between two analysis report, one from a baseline codebase and the other
> from the changes applied to the baseline.
> 
> The comparison between reports of different codebase is an issue because
> entries in the baseline could have been moved in position due to addition
> or deletion of unrelated lines or can disappear because of deletion of
> the interested line, making the comparison between two revisions of the
> code harder.
> 
> Having a baseline report, a report of the codebase with the changes
> called "new report" and a git diff format file that describes the
> changes happened to the code from the baseline, this feature can
> understand which entries from the baseline report are deleted or shifted
> in position due to changes to unrelated lines and can modify them as
> they will appear in the "new report".
> 
> Having the "patched baseline" and the "new report", now it's simple
> to make the diff between them and print only the entry that are new.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v1:
> - Made the script compatible with python2 (Stefano)
> ---
> xen/scripts/diff-report.py                    |  55 ++++-
> xen/scripts/xen_analysis/diff_tool/debug.py   |  21 ++
> xen/scripts/xen_analysis/diff_tool/report.py  |  87 +++++++
> .../diff_tool/unified_format_parser.py        | 232 ++++++++++++++++++
> 4 files changed, 393 insertions(+), 2 deletions(-)
> create mode 100644 xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
> 
> diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
> index f97cb2355cc3..d608e3a05aa1 100755
> --- a/xen/scripts/diff-report.py
> +++ b/xen/scripts/diff-report.py
> @@ -7,6 +7,10 @@ from argparse import ArgumentParser
> from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
> from xen_analysis.diff_tool.debug import Debug
> from xen_analysis.diff_tool.report import ReportError
> +from xen_analysis.diff_tool.unified_format_parser import \
> +    (UnifiedFormatParser, UnifiedFormatParseError)
> +from xen_analysis.settings import repo_dir
> +from xen_analysis.utils import invoke_command
> 
> 
> def log_info(text, end='\n'):
> @@ -36,9 +40,32 @@ def main(argv):
>                              "against the baseline.")
>     parser.add_argument("-v", "--verbose", action='store_true',
>                         help="Print more informations during the run.")
> +    parser.add_argument("--patch", type=str,
> +                        help="The patch file containing the changes to the "
> +                             "code, from the baseline analysis result to the "
> +                             "'check report' analysis result.\n"
> +                             "Do not use with --baseline-rev/--report-rev")
> +    parser.add_argument("--baseline-rev", type=str,
> +                        help="Revision or SHA of the codebase analysed to "
> +                             "create the baseline report.\n"
> +                             "Use together with --report-rev")
> +    parser.add_argument("--report-rev", type=str,
> +                        help="Revision or SHA of the codebase analysed to "
> +                             "create the 'check report'.\n"
> +                             "Use together with --baseline-rev")
> 
>     args = parser.parse_args()
> 
> +    if args.patch and (args.baseline_rev or args.report_rev):
> +        print("ERROR: '--patch' argument can't be used with '--baseline-rev'"
> +              " or '--report-rev'.")
> +        sys.exit(1)
> +
> +    if bool(args.baseline_rev) != bool(args.report_rev):
> +        print("ERROR: '--baseline-rev' must be used together with "
> +              "'--report-rev'.")
> +        sys.exit(1)
> +
>     if args.out == "stdout":
>         file_out = sys.stdout
>     else:
> @@ -63,11 +90,35 @@ def main(argv):
>         new_rep.parse()
>         debug.debug_print_parsed_report(new_rep)
>         log_info(" [OK]")
> -    except ReportError as e:
> +        diff_source = None
> +        if args.patch:
> +            diff_source = os.path.realpath(args.patch)
> +        elif args.baseline_rev:
> +            git_diff = invoke_command(
> +                "git diff --git-dir={} -C -C {}..{}".format(repo_dir,
> +                                                            args.baseline_rev,
> +                                                            args.report_rev),
> +                True, "Error occured invoking:\n{}\n\n{}"
> +            )

I’ve noticed now an issue here, when using --baseline-rev/--report-rev, the fix is this one:

diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
index d608e3a05aa1..636f98f5eebe 100755
--- a/xen/scripts/diff-report.py
+++ b/xen/scripts/diff-report.py
@@ -95,9 +95,8 @@ def main(argv):
             diff_source = os.path.realpath(args.patch)
         elif args.baseline_rev:
             git_diff = invoke_command(
-                "git diff --git-dir={} -C -C {}..{}".format(repo_dir,
-                                                            args.baseline_rev,
-                                                            args.report_rev),
+                "git --git-dir={}/.git diff -C -C {}..{}"
+                .format(repo_dir, args.baseline_rev, args.report_rev),
                 True, "Error occured invoking:\n{}\n\n{}"
             )
             diff_source = git_diff.splitlines(keepends=True)

I’ll wait for other feedback on the patch before sending it again.
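The fix in the message above works because `--git-dir` is an option of the `git` command itself, not of the `diff` subcommand, so it must appear before `diff` (and it must point at the `.git` directory, hence the appended `/.git`). A quick self-contained check of that behaviour (illustrative, not part of the patch; it assumes only that a `git` binary is on PATH):

```python
import subprocess
import tempfile


def git(*args):
    """Run git and capture the result instead of raising on failure."""
    return subprocess.run(["git"] + list(args), capture_output=True, text=True)


with tempfile.TemporaryDirectory() as repo:
    ident = ["-c", "user.email=a@example.com", "-c", "user.name=a"]
    git("init", "-q", repo)
    git("-C", repo, *ident + ["commit", "-q", "--allow-empty", "-m", "base"])
    git("-C", repo, *ident + ["commit", "-q", "--allow-empty", "-m", "next"])

    # Original form: --git-dir handed to the diff subcommand, which
    # does not understand that option, so the command fails.
    wrong = git("diff", "--git-dir={}/.git".format(repo), "HEAD~1..HEAD")

    # Fixed form: --git-dir given to git itself, before the subcommand.
    right = git("--git-dir={}/.git".format(repo), "diff", "-C", "-C",
                "HEAD~1..HEAD")

    print(wrong.returncode != 0)  # True
    print(right.returncode)       # 0
```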

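As an aside on the message above: the first of its two manual option checks (`--patch` versus the revision pair) could also be expressed with argparse's built-in mutually exclusive group, while the "must be used together" pairing still needs a hand-written check, since argparse has no such group. A sketch under those assumptions (option names mirror the patch; this is not the patch's code):

```python
import argparse

parser = argparse.ArgumentParser()
# --patch and --baseline-rev conflict automatically via the group.
group = parser.add_mutually_exclusive_group()
group.add_argument("--patch", type=str)
group.add_argument("--baseline-rev", type=str)
parser.add_argument("--report-rev", type=str)


def validate(argv):
    args = parser.parse_args(argv)
    # argparse offers no "must appear together" group, so the pairing of
    # --baseline-rev and --report-rev still needs an explicit check,
    # just as the patch does with bool(...) != bool(...).
    if bool(args.baseline_rev) != bool(args.report_rev):
        return "rev-pair-error"
    return "ok"


print(validate(["--patch", "fix.diff"]))                           # ok
print(validate(["--baseline-rev", "abc", "--report-rev", "def"]))  # ok
print(validate(["--baseline-rev", "abc"]))                         # rev-pair-error
```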

From xen-devel-bounces@lists.xenproject.org Fri May 19 10:54:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 10:54:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536942.835684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzxkJ-0001SF-Nc; Fri, 19 May 2023 10:54:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536942.835684; Fri, 19 May 2023 10:54:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzxkJ-0001S8-Ji; Fri, 19 May 2023 10:54:03 +0000
Received: by outflank-mailman (input) for mailman id 536942;
 Fri, 19 May 2023 10:54:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1pzxkI-0001S0-DB
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 10:54:02 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2061d.outbound.protection.outlook.com
 [2a01:111:f400:fe16::61d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 76383ec1-f633-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 12:54:00 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9224.eurprd04.prod.outlook.com (2603:10a6:102:2a3::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 10:53:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 10:53:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76383ec1-f633-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lgXT8LVcKp/r87nRK+8BKUJTdrRJSxn8GmB7Mo48RlN4cNTBM7Jm9DI9A+H6M7u3DWUh4OY9Q3+/iPE+QgTReIiaOwPdN4vyeZJuIOxVgYXxIKdNYE/H6CO2mORHPZlI97mnfF6gyuswEJ5yfwL7+ikaSzDJugP4myqw6QOfy7nANKI0/jnCQ14oY0oy/h4+vflUG3fBMKKoOMcCgAoKJJqWg+R57d2/XWITeuf3xpXOIjl1zc2MG2qeDBVSeEowTIdzbUSVfOq7J7h/d2yAJcQ2Azz0c9oL6TUABM5H9FkivWI9SRvf909Vcmu4IJ/cMNyY1OMi2sUFfwC1V5xYTg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/ib20T4hZdR1RQXO2Snx3fTo7JI+hIMSkXXpI4ZrdNg=;
 b=hWB6sWj/PBhcA6TcD4FGQT+4R2Deh4C8Dvrc6sX+AjM3+ybCvk0At9DJ9Oz2aUJzwIWqFyzfNNdfwRG942iS6Xb3l9JNILPDfAWP7NlaSPVJTQh+fkZck2U/pKjbAMCh0P741Zx4mAamsmlN12MzcQ05/cfTEEyrVhhJ9MhSwy5NDxo6pq8o/tR3HvEHLr0jYw8pVla2aT5uNC2UhEygH/zy+mW+jQeriql6jj9f80m84hd2c4hVUIZpfpqYX92xgt4uE1z5my4TFBjtxPm1yf3HyfYH0KGIP/+ZGnNJk9QQzyimuAgM02R76J29CFpWGNvIa2yR+0n1/KLaSYd5DQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/ib20T4hZdR1RQXO2Snx3fTo7JI+hIMSkXXpI4ZrdNg=;
 b=oouLQodKS5YCe/lYF2QEWkH+WAGe5KZQMMoBiu2mZY/2/Lr9x0Js6E4C9JPl2tLzGTDtBatk8InmMq1XyXnXRYTEhl/WTyT/WOh8GuslR79sI4vGiYFKBiWeUxhf5KW7V6OTmdesKNA3cQRLriP4TiNK0E07pK9LnhBFJ3mHqq7HL1H7vvYYCuRsXmyFsqtnij6utpefDbVp4ah/1V59lnwQc+63vCzplAoygBwSKszWe7QI15T0zjhU7hSPeplArURdkJC4WAj2EVk6rLbVxq3IPnwDnQ83lksn/Nr+6WRnD6eYSYSq67b9jNfIX2DJsTrppv5JgG4EKUIcwjSfKQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <94cc53ec-17c4-6ec7-7f13-e4f0f9e31e2d@suse.com>
Date: Fri, 19 May 2023 12:53:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] xen/misra: add diff-report.py tool
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230519094613.2134153-1-luca.fancellu@arm.com>
 <20230519094613.2134153-2-luca.fancellu@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230519094613.2134153-2-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0087.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9224:EE_
X-MS-Office365-Filtering-Correlation-Id: c3dc33b8-0d72-467f-b76b-08db58575850
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	kvpXnzf36ICWmFLz4nGryfzI8LvmBJxa6eqaaFo/fCVh0npDcO5+2Ijf8/pbUxh91gPmbAGLF5OrHCrvRJv5NJOJFkwEbk9E/h3rQyGph3io7hbgYluX8H0g3ukFw4d/VjVBZOhy4Ps9ypqhIz/1WZg6jKQt4XF5A+hFSwxvDf/8djc0eU6OlJkedCmOLbLcjKOoZ3TfY4XHx/LRuWq/TVKFJNKIM+7yZTk8pDk9Dqjy8e7gpFQUuiq9mCS5icI6+ePTva3xGeOedim16E54l27QM0D14WalJFe7E8E9CMA2mjtsVhbo2dqp1Hu5lPtLTTpqW9w7a9mVxHfrdq9kBTHboFJSJ7GnvgNI1UEtAovoyQb7wOG2yLjN8M7RN296Zc89km0iViFHvuf7eDfkDwCHbTyXGp1RzPQlbRjfOx1yoi7+uK1KqTtN+2QTEEH65j5SX/1x09pmVB+tSKK4FJ3sa+9tULVLylhJKVOFS22nk4zaX1hNMDaBDH3m71dADfYHnlTm74gpVpsYWUke6ryg4S+KU56okl60la3f+Wtw8W89NAaTv8X521bwQF0Ku/8COlR/8lz+CQ5xE2ZsCrobgi/9jvrl9Ikn6GXcgVRldLWWcJ7Ev5lZMf3dS1+aEE543FLFIKCH/4MIs2HSdQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(39860400002)(346002)(396003)(136003)(376002)(451199021)(31686004)(4326008)(2906002)(6916009)(86362001)(31696002)(8936002)(316002)(5660300002)(8676002)(66946007)(66476007)(41300700001)(66556008)(38100700002)(53546011)(186003)(478600001)(6512007)(36756003)(6506007)(26005)(2616005)(54906003)(6486002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UVJzTXZ1N3VzU21oZE1wWWRQMkxJYmQ2dGxxSi9vVzlPand1aDhCSUpnUjlO?=
 =?utf-8?B?dDFBV2JvQThobmcreWhHcEwxSkN1amR1S20wOUJVY05VQS8yUTlIdGczQUpq?=
 =?utf-8?B?OWxzazd2TjNjdzVaR2FCTTU5bkE1MzF6ejBFVm84TGhzcHZXNkl4UnhkUkFF?=
 =?utf-8?B?WGExTG4xV0pNalJkS1Bsdkc4Y010TmNHcnBuWGh5dDcybmxoYnArbHdaQkpu?=
 =?utf-8?B?RDFNZXJYSnF6eGMvODNwY3AvNVlDZWtwb3pCVFpZbDRQRjVHRGJ2cnYzaXVt?=
 =?utf-8?B?cnFlQVlUSEdIZEZ5RDJUVGxjUEljM1c4NXBOMlN6ZE55NzVNRTlLa1AweHBO?=
 =?utf-8?B?c1JteWJRVFVUaVJ6YTczMkJGcmhIV0hON3lveEJIcFRDaHZWbnd3b1d2aEwy?=
 =?utf-8?B?K3owMDByT3ZPbnBBcFplQ0Y1RFRRMFIrZ1FaSldjaUJob2YvSzZIckZJV29n?=
 =?utf-8?B?NmFTazB2U08zRWpDaGNxQlRLVzZCNnBQdTI4OXFUREdUcWRXV0VqMUFRdTNF?=
 =?utf-8?B?UWVrT1lCSnJuL2p4RUxwM3MvdkJNakxPRzJLQm51b0h2b3pwaUJrOXV2OXdG?=
 =?utf-8?B?UTV4MW1SRnFNWGVld0lIbHVXRDRFSlNsejNEaURMWVlEam5Cb090MVdGR1lP?=
 =?utf-8?B?azdjak1oVXpITjlpRGRtVXk5MDBCbGUySHhDWkZCOCtDM3BVZG5yV1FFMC9i?=
 =?utf-8?B?alUrK2h0VEZvNmY1Z2dDM3dkWWYrTU00VkhoT2puYWZOYjdMSDJndFZSZkdJ?=
 =?utf-8?B?Mm1GcjhERFAwUVBZRXlSbzU5WUk1aTJTZlBZMlV4b3hYTDRHaWgwQUlsMzZ6?=
 =?utf-8?B?L0RCRmQxbnJtNG16S1NaVUU0N1B3STI2V2JLdGtqTEkzTGx2cTJRYXFENHlY?=
 =?utf-8?B?UndKSzRZV09UM2cwblJPeFNvWjBnQWozVWhLZ2Rlb3A5SE05VDVmc1FBdVo5?=
 =?utf-8?B?MzZ2MStnK0plNGVrVmZ5dUN4eEEzSVJRVmlZSk8zUGJLa1Z1L1dBVGhMZWZK?=
 =?utf-8?B?T1Rqd2tXUjFxNzBUTUQ5Z3FkOG1HZm5zZ2F0a01yS2IyZGI1N0pTdFlYSG9Y?=
 =?utf-8?B?RUJLQlFtWTZOcXVBYzN6RU9pVlJWTExFSzQ0Z2RPK2EwUjhvL1RRVmFVdG52?=
 =?utf-8?B?K0toL1JaNE5aUmp4ZTdsMEtUOHNleWlzeWFZZ2d0UXcrSzZUeEVqSnh1bzcy?=
 =?utf-8?B?YVJtclk4N2RURXFMVXR3cHY5bytRVkVqUVVha2hhdWhmTWNrZkQrc0daMEQ2?=
 =?utf-8?B?dkp6aTkvNGx1b2xzWmVsM1Y3NCt2M1laM2xOMjhzcWdnRXlZSVNXRWtZSXZE?=
 =?utf-8?B?T01vOXR1N2l2ZVU2VHNMWDlodUlwWE40dG9ldDhOUGY1dnZsb3JIQ2wzZml0?=
 =?utf-8?B?N1BlK0UzYkdxR2MyblUyd2h0YXFnU0tkMlNhUlZZS2l5Y0tFcGhZaW5QM25R?=
 =?utf-8?B?SUlBbWdabjdneE03Y2dCVDFUaUVUb01PRFRqT3ZSb1BxMk5ZQy9CdXdMR0ta?=
 =?utf-8?B?KzNZOGN2ejRXZ1E2dVgzZXNWODJobWs5UmNtYVhYMGJua3RHNG4wcFVpN1Q2?=
 =?utf-8?B?aUgwOGpOWXhLdUFIdG01MlR5bnJEb1FCVjFLMytJaVRmMWhhL0xUTGU1b1RZ?=
 =?utf-8?B?cjNmRnhteVZPVUdaNTlBbUZjYVllZWYzM1BRSmdSa2hZRG9ndmEyTTRuYzE3?=
 =?utf-8?B?ckphTmx4MkNZdE9uMjZIdWU2OXpBeDZ4cnFvNEpJc0k0WE9lV2ZQcCt6WHBZ?=
 =?utf-8?B?S3lRR3FLMFhqbW10UE9nQk95cWo3NC9JcGhpWWJDQXZPOGV0UEZNeVhIQ1BK?=
 =?utf-8?B?NVFGa0R0d29ObFdBQVFib2hEQmdSRVhMSzhlbTVTaEV6bVNpclg2S2JoZXhn?=
 =?utf-8?B?a1pnKzRwYjg0dDhMRnJWOEhCampHeGwxUTIrTEFRWDhJd1htdG5hZkNTalpK?=
 =?utf-8?B?cXkzdERMTWdwWkJ6cEw0OXVOOFlDQ1JqbXlBU0w3VjdZT2hYd2ZScmlEY2Nw?=
 =?utf-8?B?ZTlGQm5wZCt0VkN4OEJXWFhFSk9pQk9SdmVDdjRYZFRpaWdEWUlmWTlQWnha?=
 =?utf-8?B?Ujh6OU9ka0lyYnVhdEZoM3FMQ1ZHN2dUSXExNWxHNXNOZjEwYmVrSDhxeEdj?=
 =?utf-8?Q?cTBZXSC+TpWj6QdML5TkGlRdp?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c3dc33b8-0d72-467f-b76b-08db58575850
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 10:53:57.2362
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: q0NoAGSTi4Rvadp7mnnECVwzJlb1wprrsX/LpBXWxs+ySHg5XTQn/A3QenaeXAwicKeQ05aymrEcxopUBS1cow==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9224

On 19.05.2023 11:46, Luca Fancellu wrote:
> Add a new tool, diff-report.py that can be used to make diff between
> reports generated by xen-analysis.py tool.
> Currently this tool supports the Xen cppcheck text report format in
> its operations.
> 
> The tool prints every finding that is in the report passed with -r
> (check report) which is not in the report passed with -b (baseline).
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v1:
>  - removed 2 methods from class ReportEntry that landed there by
>    mistake on rebase.
>  - Made the script compatible also with python2 (Stefano)
> ---
>  xen/scripts/diff-report.py                    |  80 ++++++++++++++
>  .../xen_analysis/diff_tool/__init__.py        |   0
>  .../xen_analysis/diff_tool/cppcheck_report.py |  44 ++++++++
>  xen/scripts/xen_analysis/diff_tool/debug.py   |  40 +++++++
>  xen/scripts/xen_analysis/diff_tool/report.py  | 100 ++++++++++++++++++
>  5 files changed, 264 insertions(+)
>  create mode 100755 xen/scripts/diff-report.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py

If I'm not mistaken Python has no issue with dashes in path names.
Hence it would once again be better if the underscores were avoided
in the new directory names.

Jan
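
For context on the naming constraint at play here: Python runs any file path it is handed, dashes included, but a dash is not valid in an identifier, so it cannot appear in an `import` statement. A small illustration (file name chosen to mirror the tool's; not from the thread):

```python
import pathlib
import runpy
import tempfile

# A dashed filename is fine when the file is executed by path,
# as diff-report.py is.
with tempfile.TemporaryDirectory() as d:
    script = pathlib.Path(d) / "diff-report.py"
    script.write_text("RESULT = 40 + 2\n")
    namespace = runpy.run_path(str(script))
    print(namespace["RESULT"])  # 42

# But a dash is not valid in a Python identifier, so a dashed module
# or package name cannot be referenced by an import statement.
try:
    compile("import diff-report", "<check>", "exec")
    print("accepted")
except SyntaxError:
    print("SyntaxError")
```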


From xen-devel-bounces@lists.xenproject.org Fri May 19 11:07:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 11:07:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536974.835717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzxxO-00045Q-Bk; Fri, 19 May 2023 11:07:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536974.835717; Fri, 19 May 2023 11:07:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzxxO-00045J-8k; Fri, 19 May 2023 11:07:34 +0000
Received: by outflank-mailman (input) for mailman id 536974;
 Fri, 19 May 2023 11:07:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sKmR=BI=arndb.de=arnd@srs-se1.protection.inumbo.net>)
 id 1pzxuO-000307-4m
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 11:04:28 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb033717-f634-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 13:04:26 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id 888FD5C018C;
 Fri, 19 May 2023 07:04:25 -0400 (EDT)
Received: from imap51 ([10.202.2.101])
 by compute6.internal (MEProxy); Fri, 19 May 2023 07:04:25 -0400
Received: by mailuser.nyi.internal (Postfix, from userid 501)
 id 9A414B60089; Fri, 19 May 2023 07:04:23 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb033717-f634-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arndb.de; h=cc
	:cc:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1684494265; x=1684580665; bh=/C
	wla6Z93CmdqEZ213cheaHU/C7xFoaUH7cr2bjxpHQ=; b=fgD3+fClg58VBaUs//
	zaPCHvCJP5VwPquLsJQCejG585X4z+0cgq8+PVQLQhfsm5YWTZtpvK/KaaglSt4v
	AZlDlbvGF3uoZD3fC2SSE25eKMeiJ6Wom5J1YcG2GkUcncWYRhxQyk68GPcM/VQp
	34Ei3ImxNpBRwXdc9tym77ZcH2TZYznA5nJLd0dV2c5Gry+glxktVg9gmMI9439a
	f3L5sFm8wopBNd1vAmNNyXTneaLVTBRRVuEaIadcCeApET3YIvFnFJalDw9tA0h/
	2TZol3iqiGR+i22jK3aoA3M0ZmXVLJnvyzijOnDj+ydVHvdZRUx6GiucYtjtNpoN
	JBTA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1684494265; x=1684580665; bh=/Cwla6Z93Cmdq
	EZ213cheaHU/C7xFoaUH7cr2bjxpHQ=; b=dFs0aALapqY2/RKBVBoRgsFC+kwwn
	aghUbwauR7c1rX0XPL5DOBgegPR2+r/bkGNCHfJxUzqFkNmnzE8s/4FBbZi5lUoP
	nAOFrE15xfCXbV5De8PpVHEPPQaEzx07LPV8M2YiUR692Ae414HT9CDIPsp6/adS
	ri78lMwnrM8+0M82lszLmKjmmPSM5wfqXIhxwrHo/HhPJmcwFpyMcgcL4OhJuOKD
	iQ+iy6dLE5E+W30aMejTZcd8FnCpaTSh3UIXb6IRU53VnkL0NvNyaDJ3yMEEg3jy
	Amhm2R5vJ7t1jx6KTPd6J1GyNs70g8SgYMjfUjxhve2s0eAY/TVDwYw5g==
X-ME-Sender: <xms:t1dnZJJPaJ2g_2nxuUp5GipLJ04nJ7R8n701jCM3WJMdOGe7-3CtvQ>
    <xme:t1dnZFIpdftyeukP7ox0wWnWvdRtWV5S4-6mNAnBSP0uR5dlSKF5AEHwrDn2R_GBj
    PS_1pR-COVEJwI7y-M>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeeihedgfeeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepofgfggfkjghffffhvfevufgtsehttdertderredtnecuhfhrohhmpedftehr
    nhguuceuvghrghhmrghnnhdfuceorghrnhgusegrrhhnuggsrdguvgeqnecuggftrfgrth
    htvghrnhepffehueegteeihfegtefhjefgtdeugfegjeelheejueethfefgeeghfektdek
    teffnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomheprg
    hrnhgusegrrhhnuggsrdguvg
X-ME-Proxy: <xmx:t1dnZBuc-n5XDtHdGoa_Vunbhz_oDVhyhorHNRW0yPyAxt_gejgLQg>
    <xmx:t1dnZKYlvNBeT1qmqCedxwoaVPlHQKmXv30SdUKAdaGMxpMsm1ja6g>
    <xmx:t1dnZAbwOs_Q1raPox-9GKEVt3JVUpAGWNgWD_H68sPb8ESiQXlXfg>
    <xmx:uVdnZLvOcIo3POIIQLS4ujcc4StrWhcyu6DRLzy0SJ32ArlhSgUt0g>
Feedback-ID: i56a14606:Fastmail
X-Mailer: MessagingEngine.com Webmail Interface
User-Agent: Cyrus-JMAP/3.9.0-alpha0-431-g1d6a3ebb56-fm-20230511.001-g1d6a3ebb
Mime-Version: 1.0
Message-Id: <1f771dae-1bc7-4fd3-8514-613cf3b12e1a@app.fastmail.com>
In-Reply-To: <cabdd839-71d5-aabb-aee6-d37ebcabf2ab@intel.com>
References: <20230516193549.544673-1-arnd@kernel.org>
 <20230516193549.544673-11-arnd@kernel.org>
 <cabdd839-71d5-aabb-aee6-d37ebcabf2ab@intel.com>
Date: Fri, 19 May 2023 13:04:03 +0200
From: "Arnd Bergmann" <arnd@arndb.de>
To: "Dave Hansen" <dave.hansen@intel.com>, "Arnd Bergmann" <arnd@kernel.org>,
 x86@kernel.org
Cc: "Thomas Gleixner" <tglx@linutronix.de>, "Ingo Molnar" <mingo@redhat.com>,
 "Borislav Petkov" <bp@alien8.de>,
 "Dave Hansen" <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "Andy Lutomirski" <luto@kernel.org>,
 "Steven Rostedt" <rostedt@goodmis.org>,
 "Masami Hiramatsu" <mhiramat@kernel.org>,
 "Mark Rutland" <mark.rutland@arm.com>, "Juergen Gross" <jgross@suse.com>,
 "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
 "Alexey Makhalov" <amakhalov@vmware.com>,
 "VMware PV-Drivers Reviewers" <pv-drivers@vmware.com>,
 "Peter Zijlstra" <peterz@infradead.org>,
 "Darren Hart" <dvhart@infradead.org>,
 "Andy Shevchenko" <andy@infradead.org>,
 "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
 "Rafael J . Wysocki" <rafael@kernel.org>, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, linux-pci@vger.kernel.org,
 platform-driver-x86@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-pm@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 10/20] x86: xen: add missing prototypes
Content-Type: text/plain

On Thu, May 18, 2023, at 19:28, Dave Hansen wrote:
> On 5/16/23 12:35, Arnd Bergmann wrote:
>> 
>> arch/x86/xen/enlighten_pv.c:1233:34: error: no previous prototype for 'xen_start_kernel' [-Werror=missing-prototypes]
>> arch/x86/xen/irq.c:22:14: error: no previous prototype for 'xen_force_evtchn_callback' [-Werror=missing-prototypes]
>> arch/x86/xen/mmu_pv.c:358:20: error: no previous prototype for 'xen_pte_val' [-Werror=missing-prototypes]
>> arch/x86/xen/mmu_pv.c:366:20: error: no previous prototype for 'xen_pgd_val' [-Werror=missing-prototypes]
>
> What's the deal with this one?
>
> The patch is touching a bunch of functions on top of the ones from the commit
> message.  Were you just showing a snippet of what the actual set of
> warnings is?

I missed this one when going through the changelogs before sending them
out; I thought I had added proper text to each one, but it fell through
the cracks. I've followed up with a v2 patch that has a proper changelog
now.

      Arnd


From xen-devel-bounces@lists.xenproject.org Fri May 19 11:10:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 11:10:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536985.835727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzy0M-0005WB-Qg; Fri, 19 May 2023 11:10:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536985.835727; Fri, 19 May 2023 11:10:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzy0M-0005W4-NJ; Fri, 19 May 2023 11:10:38 +0000
Received: by outflank-mailman (input) for mailman id 536985;
 Fri, 19 May 2023 11:10:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1pzy0K-0005Vu-Ro
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 11:10:37 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0600.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c6609db4-f635-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 13:10:34 +0200 (CEST)
Received: from AS9P251CA0007.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:50f::9)
 by VE1PR08MB5728.eurprd08.prod.outlook.com (2603:10a6:800:1a0::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 11:10:31 +0000
Received: from AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:50f:cafe::e9) by AS9P251CA0007.outlook.office365.com
 (2603:10a6:20b:50f::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 11:10:31 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT031.mail.protection.outlook.com (100.127.140.84) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.7 via Frontend Transport; Fri, 19 May 2023 11:10:30 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Fri, 19 May 2023 11:10:30 +0000
Received: from 62848bcf881c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8A5908B4-1D24-4C4B-84C4-EE0EB33623C0.1; 
 Fri, 19 May 2023 11:10:19 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 62848bcf881c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 19 May 2023 11:10:19 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS2PR08MB9713.eurprd08.prod.outlook.com (2603:10a6:20b:607::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 11:10:16 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 11:10:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6609db4-f635-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wzdU8G0j7JJ8NyWlHgRsEf2AwWK9aby/sOqP4FNFMc8=;
 b=p7g+aS3kKrhywn1w3vHFW05qXAgxUVFYdwFAPzubdKHqkwjJChhd2P2MhGIcph939XHUNXaFuEGefXJyt/5uxe6D825cHL7NRqcmZTcQ9y5sqB5yupu3yLdfQP1kI73nUnlBJMzeq3qzBfu+F5Jdnh7ypbQBiLau0HldkxoDuBo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3ef959593854b2fa
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LJZ7sDTO4NyLK+ndBOMm/SBTC60lC4o8gbpZpAfb+0ChflScBt17PGbthDH4NW6QnRcm+6VqgSDuvupEtsA1+nv7IEbb2BAruZBXDsfakYtGhqMF1zOBpEwegUWNGB0MBqe+nw1rnDr6LDoPsQj793d5KCTIAekJL1H2T9Q4xyJtwAsmZa8AKm3Ivdd783ulfvw5IheDAGsZ4AkB5kMWvP6ga43mKS59NfvePZ6dIDvEz/Et2rIoywsTOQ4Fp7G1M4bjJkH1aKODZqBKpF5pwuy+LalmZr7Z6wtmhS6VgHVsBRZx00VJmWTAipabpIY0ut4NE7S9GepTBrTvP4Qrmw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wzdU8G0j7JJ8NyWlHgRsEf2AwWK9aby/sOqP4FNFMc8=;
 b=IOmmycFuqGMHquMk/vWtKXUkfAvO17UCCq5x6asNpnd/wYR1dqXr9YUvgCOikPz2nz8gCAx9wH9+FTCcYJpDtuoWFZEH3kBaYr74E4kBXCsurFPMZtcef29orjfd/qTueYWlAzqw4EWI2j5FKnqvC4wZbElHaH3TYsktiUdyde8AQ5HGnJIb35pfYwtEqzNSeCaTkI/f9QHSMjHv38DnoGgCVpWQACpLm4FSK2tR+xhyrNrKGLR4d1OumRW0k3X8XezHzx202JjGorlERfdLas7v5ggIDtLgikzvDw/eIPr09kOGXoR2J+z/E+KkxLdgDZUA88rwmDuPZYbvTWqO9g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wzdU8G0j7JJ8NyWlHgRsEf2AwWK9aby/sOqP4FNFMc8=;
 b=p7g+aS3kKrhywn1w3vHFW05qXAgxUVFYdwFAPzubdKHqkwjJChhd2P2MhGIcph939XHUNXaFuEGefXJyt/5uxe6D825cHL7NRqcmZTcQ9y5sqB5yupu3yLdfQP1kI73nUnlBJMzeq3qzBfu+F5Jdnh7ypbQBiLau0HldkxoDuBo=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 1/2] xen/misra: add diff-report.py tool
Thread-Topic: [PATCH 1/2] xen/misra: add diff-report.py tool
Thread-Index: AQHZijbarhOmFRE5RUW1W8RyRgYzra9ha/uAgAAEhIA=
Date: Fri, 19 May 2023 11:10:16 +0000
Message-ID: <4BCB4841-3E13-468E-BB89-E72EDFF9D3E1@arm.com>
References: <20230519094613.2134153-1-luca.fancellu@arm.com>
 <20230519094613.2134153-2-luca.fancellu@arm.com>
 <94cc53ec-17c4-6ec7-7f13-e4f0f9e31e2d@suse.com>
In-Reply-To: <94cc53ec-17c4-6ec7-7f13-e4f0f9e31e2d@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS2PR08MB9713:EE_|AM7EUR03FT031:EE_|VE1PR08MB5728:EE_
X-MS-Office365-Filtering-Correlation-Id: d4a63b69-4e37-4b93-e65e-08db5859a859
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 S/1fRWkP0LkEL4cq0FMsa+A7zf+JerpLCYokfiWmhLqVJgn4f5IBrbQJviKG6M9CFAB+REno8NKx5yvFAA0bdGxWWKyhq8aMNvoMHJEtuolfA5dDnB4rIm4Cnruc7t7wj2swNLU682+VeGBkg6eec5ZApo4ibX5eU9sbtEibvaBcrwPCw+F5hp1UthixQbib2SRhKwfOBY8gmr7InzasI+9WiL7HQhw5BIVR2txjycRNYBnh5ji/obiZw3PRs8FwQjluHbJqJpg/qmsw/Jo7tf2z18IgOytzEMEFEaDkBnQZr270PIA3xktGAvwZNv0XfAQvir6hscWH2bNvWqfLsxmhype3zEdcPunL9/OmjOr+lNmgWsa2jcRiSBoKjMdzT9qv00Efbut+tay0f3Fn+VWwefsVCWGnqRHgV178oDUWts0Wm8lpLZ6k25wSXNXE8Qsx9yeoo32DwosaBMGUvd5UP+akzwQB1z1mDAlaZ2RWEsqJnzrvgyzv1UNABYhYzGwbNKGACbe6kcEI7OobDVQrXxXJ3dx8Y18UOQR1rOiC9FQFQpmYztq8q3f1IhxYxZOrtgVzklB327wdqletn0//ntL6q4th8Qk6q/YdcPYzrt7ob1fvoYsmH+H843KCPS9d6+aszIIVZRE8B/4T9g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(366004)(396003)(136003)(39860400002)(451199021)(71200400001)(6486002)(41300700001)(26005)(6506007)(6512007)(53546011)(38100700002)(5660300002)(2616005)(36756003)(186003)(38070700005)(2906002)(33656002)(86362001)(122000001)(8676002)(8936002)(478600001)(6916009)(66946007)(66446008)(64756008)(66556008)(76116006)(66476007)(4326008)(91956017)(54906003)(316002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <01CED32C5C31714E94B3034808742752@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9713
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8553e973-e9f5-4cda-cbe1-08db5859a050
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OmadSVlJ53dyI5pUDnEXbmC48lXG8hmeb06iyO2crLxVTsQphw61ocVs52hYaNy4GRc2Sv7LPk20M3gpS3u87Ll9x22JCqq0WWjl3ALbWmrIGZHgBDjcWwhx36EgniFmiWIexcobGq10M11yOo8JKjiUPkpue+WGyK8ooTudiny1hRHr4qdEp/i9yKwroamk3Hk5ye6tDnlDD19rmIxmIfEUxQlmARNBzlnTCpry0ubakG4mc9vo0xI81epwARlbDbSm2hpLrJTljPX9C99AzdO9BA4RAocBAMceAld7v1PaeVfNBRbDwOyaWyGgL85LPqQ4ifV5HsIguBcmWRzFJi33M3WiErhntaTSpyX1nYshiyaqnyyh65V65b9Xg1yxuJGdlp+IoidwIeAbLf3U94PfZWr8w4xImXtxfrSXvyWO9KB1EHop4gnnvswinJC5ZIjznyAtL7bVMWK34yF4O97lUgRjTA90DfXuO6FnByk6ZIbvriSd5wHq2ZKT2ASliW6bP/e0+8enjVolRV+zWH3CArTG8tbYfqNJ9fnrjJ2OIdLLG8zgngYvXkEhTbIodZvJWbGk9NJAOtXXNFlMk4BXUkH/vb5UPaCRxkHcJwEjxd5AbRJ4qMnhFRXhiIz2z0j+MdZ/aPKJ065cMf9wFkfwTGmv7/NmvDeYoI/ZTky2nsYnOO9RBChAtHdyH38B0wW/dSLTuWU2CF2uQ0EpM3HO7HoX+UZghuMlaVScZ7w=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(396003)(376002)(39860400002)(346002)(451199021)(36840700001)(40470700004)(46966006)(186003)(6512007)(40480700001)(33656002)(2906002)(6486002)(478600001)(82310400005)(86362001)(54906003)(36756003)(53546011)(82740400003)(81166007)(40460700003)(356005)(26005)(6506007)(41300700001)(316002)(8676002)(2616005)(70206006)(70586007)(6862004)(8936002)(4326008)(47076005)(336012)(36860700001)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 11:10:30.2229
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d4a63b69-4e37-4b93-e65e-08db5859a859
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5728

> On 19 May 2023, at 11:53, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 19.05.2023 11:46, Luca Fancellu wrote:
>> Add a new tool, diff-report.py that can be used to make diff between
>> reports generated by xen-analysis.py tool.
>> Currently this tool supports the Xen cppcheck text report format in
>> its operations.
>> 
>> The tool prints every finding that is in the report passed with -r
>> (check report) which is not in the report passed with -b (baseline).
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> Changes from v1:
>> - removed 2 method from class ReportEntry that landed there by a
>>   mistake on rebase.
>> - Made the script compatible also with python2 (Stefano)
>> ---
>> xen/scripts/diff-report.py                    |  80 ++++++++++++++
>> .../xen_analysis/diff_tool/__init__.py        |   0
>> .../xen_analysis/diff_tool/cppcheck_report.py |  44 ++++++++
>> xen/scripts/xen_analysis/diff_tool/debug.py   |  40 +++++++
>> xen/scripts/xen_analysis/diff_tool/report.py  | 100 ++++++++++++++++++
>> 5 files changed, 264 insertions(+)
>> create mode 100755 xen/scripts/diff-report.py
>> create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
>> create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
>> create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
>> create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py
> 
> If I'm not mistaken Python has no issue with dashes in path names.
> Hence it would once again be better if the underscores were avoided
> in the new directory names.

Hi Jan,

From what I know python can’t use import for module with dashes in the name, unless
using some tricks, but if anyone knows more about that please correct me if I’m wrong.

The style guide for python (https://peps.python.org/pep-0008/#package-and-module-names)
says:

Modules should have short, all-lowercase names. Underscores can be used in the module
name if it improves readability. Python packages should also have short, all-lowercase names,
although the use of underscores is discouraged.

So, yes, the use is discouraged, but here I think it improves the readability. Unless we want
to use “difftool” instead of “diff_tool” and “cppcheckreport” instead of “cppcheck_report”.

Can I ask the reason why we need to avoid underscores in file names?

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 11:48:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 11:48:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537023.835753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzybK-0001iI-3y; Fri, 19 May 2023 11:48:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537023.835753; Fri, 19 May 2023 11:48:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzybK-0001iB-1F; Fri, 19 May 2023 11:48:50 +0000
Received: by outflank-mailman (input) for mailman id 537023;
 Fri, 19 May 2023 11:48:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=937s=BI=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1pzybI-0001i5-Ak
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 11:48:48 +0000
Received: from sender3-of-o59.zoho.com (sender3-of-o59.zoho.com
 [136.143.184.59]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1bcaec07-f63b-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 13:48:46 +0200 (CEST)
Delivered-To: dpsmith@apertussolutions.com
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1684496920935451.16488822791234;
 Fri, 19 May 2023 04:48:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bcaec07-f63b-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1684496922; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=cB8C1krIFJBzTOzr2aap3/rRvAvTPTk6svyUvCp6o94IsrbeG8JXjeqUa7/L1HbsbVNUKQHGWWudTfBBfidyS+ARkIaqs8BCY+4igND0CzNe4JPieh2sfgzUYUsew4hxmm8/wcxLP7j7Be4yKu43LGYRZlgSdfbZhHGXsH8PARI=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1684496922; h=Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=CyO3tHf0BVTIqiWlpbO9WHzMK2kr7jU8+ZeKunsHBt8=; 
	b=X3LDdxkhbez70bEVmHMh4XR+NdgjBAur5bMT60crvKwnh4ZQzz/AMqEkjZgFc9mD4YK+xZlnRjtbC/kr8zJJA86F8HILw1F/lDpvPMe+kAduvWLF7LSnhN/1eD3cGFaRUC3Utf9PtABoUlxTtKAkiwxrsvv7UmMjz52IKFMf8TY=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1684496922;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:MIME-Version:Content-Transfer-Encoding:Reply-To;
	bh=CyO3tHf0BVTIqiWlpbO9WHzMK2kr7jU8+ZeKunsHBt8=;
	b=d43WFm+J7t2vcywaoNt7eD4mfYtVR+iKoijGN0YhXzyv+cQp5C7xeVL85zbR/2ll
	zZDp0WddNEdDWig4QG21WL8jEmTqrJeA5knvV3HNszsjuM9+f1YwTbm72oDg5TSKaHt
	tw5UHu+K2INT8wlIkwal6JUCNORRoryR3r61uz94=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] maintainers: add regex matching for xsm
Date: Fri, 19 May 2023 07:48:24 -0400
Message-Id: <20230519114824.12482-1-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

XSM is a subsystem where how and where its hooks are called is just as
important as the implementation of the hooks themselves. The people best
suited to evaluate the how and where are the XSM maintainers and
reviewers. This creates a challenge: the hooks are used throughout the
hypervisor, yet the XSM maintainers and reviewers are not, and should
not be, listed as reviewers for each of those subsystems in the
MAINTAINERS file. The MAINTAINERS file does, however, support regex
matches via the 'K' identifier, which are applied to both the commit
message and the commit delta. Adding 'K' entries declares that any patch
relating to XSM requires input from the XSM maintainers and reviewers.
For those using the get_maintainers script, the 'K' entries will cause
the XSM maintainers and reviewers to be added automatically. Anyone not
using get_maintainers remains responsible for ensuring that, if their
work touches an XSM hook, the XSM maintainers and reviewers are copied.

This patch adds a pair of regular expressions to the XSM section. The
first is `xsm_.*`, which matches uses of the XSM hooks in the commit's
delta. The second is `\b(xsm|XSM)\b`, which matches only the standalone
words xsm or XSM and will not capture longer words that merely contain
"xsm" as a substring.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index f2f1881b32..f7c8961dbb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -674,6 +674,8 @@ F:	tools/flask/
 F:	xen/include/xsm/
 F:	xen/xsm/
 F:	docs/misc/xsm-flask.txt
+K:	xsm_.*
+K:	\b(xsm|XSM)\b
 
 THE REST
 M:	Andrew Cooper <andrew.cooper3@citrix.com>
-- 
2.20.1
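
[Editorial note: the word-boundary behaviour claimed for the second pattern is
easy to verify with Python's re module; the sample strings below are invented
for illustration:]

```python
import re

# The two patterns proposed for the MAINTAINERS 'K' entries.
hook_pat = re.compile(r"xsm_.*")
word_pat = re.compile(r"\b(xsm|XSM)\b")

# The first pattern matches hook invocations in a commit delta.
assert hook_pat.search("+    rc = xsm_domain_create(d, ssidref);")

# The second matches the standalone word, in either case...
assert word_pat.search("xsm: refactor hook registration")
assert word_pat.search("Move the XSM checks earlier")

# ...but \b prevents matching inside a longer word.
assert not word_pat.search("xsmfoo")
print("patterns behave as described")
```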



From xen-devel-bounces@lists.xenproject.org Fri May 19 12:09:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 12:09:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537039.835764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzyvT-0004U2-5e; Fri, 19 May 2023 12:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537039.835764; Fri, 19 May 2023 12:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzyvT-0004Tv-1v; Fri, 19 May 2023 12:09:39 +0000
Received: by outflank-mailman (input) for mailman id 537039;
 Fri, 19 May 2023 12:09:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sKmR=BI=arndb.de=arnd@srs-se1.protection.inumbo.net>)
 id 1pzyvR-0004Tp-QG
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 12:09:38 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 046b5f1b-f63e-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 14:09:34 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id AC3AD5C0208;
 Fri, 19 May 2023 08:09:33 -0400 (EDT)
Received: from imap51 ([10.202.2.101])
 by compute6.internal (MEProxy); Fri, 19 May 2023 08:09:33 -0400
Received: by mailuser.nyi.internal (Postfix, from userid 501)
 id 147E1B6008D; Fri, 19 May 2023 08:09:31 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 046b5f1b-f63e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arndb.de; h=cc
	:cc:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1684498173; x=1684584573; bh=Mx
	aybIo+rxpirmU88Zvah8XfxfWvwDtpl+cNoSkys6I=; b=Q0MlMmxTzyw4pVVcJT
	vRqaslXkNToR2VaA4IHp2uzwMBULCK8c/Sxfynl9jnpFU5Oeb/24g73/8G2VcEMe
	GHJz37cuzeUD0J8zSy96C0q06yH5b0NYLYhaWn1Gw0LDYgl3zuB8ruBcheCitHnb
	oEz9J2qJ4iS/mhfdVw1BhrWZyZbuaN92Wd4Ns1DOvhwXE0dZEf387chWc1qNPvA7
	H6LeV3AEaLZ3wIAIievFk9VXl4/o5oNpipBDcCmKN05sLL2BwTEPCmjK2IV/OmoV
	ehq/Hcy4ULr/4ilyo7AGGTveSq5qN9w/6o90ZYcpQhCvG1Fc5t2twZRNORTKjrk+
	i6Vg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1684498173; x=1684584573; bh=MxaybIo+rxpir
	mU88Zvah8XfxfWvwDtpl+cNoSkys6I=; b=rt1BNYexiUNroHAALgEkYqfePno/6
	XwxxgFiiA/Wbfk6hTu/XlsXLpj2AHFEuRDmj8gEVhc6SQdZ+PpfN5LOu72VUeQ9+
	y6HkzIq9c2aK8wyOuJTxVTj9nErufpZ5PcVxle9HXA9TfrFSnW+uaPpPlyO2Ceid
	Nq2uMrvdwvogH385uqOMUcv+nw9gADpplGLQQvcHLwSkeghzTZwikd2RghzmIra5
	kT91kkOSEcJ3Cm2ptafQldBC85kh7Ch2Nsk9MatlA38qTzcYrL6g4GF9GsbeeEAL
	pjztTxP5cRHtNfbhFK7i7HyixdZzM2u3vpo0Ij/FyoEjhsgGDuAN+0LPw==
X-ME-Sender: <xms:_GZnZJnWZB2IyypOr_yRnE0D3CvKXbEq05w1xtt0PEPHojUtrJ9MMQ>
    <xme:_GZnZE3xvucdo-tfM8QW2EAJZB2cLg5FK-b55YHFAak6yQxPgdokQkFRcuORdo1-b
    j0iVbQuppIa6aIaKBA>
X-ME-Proxy: <xmx:_GZnZPpDdGmXZCCb7m59J49-q5TnDSqQBwV6B31eJLU3NxqW9Mifmg>
    <xmx:_GZnZJn57hY2WDzP4ha-F1l54B-WWf1X7xyQqCE9g3bQ7QrrPpXeig>
    <xmx:_GZnZH2AT3tC_hFcIMFDqo84dtlpJ0mh489IMmq_BtxdyO71BrapSA>
    <xmx:_WZnZLLmQYCUSB-Wfx2NvnHGRoXma16YUbBHKSLs0RdQMpJEHg4p2w>
Feedback-ID: i56a14606:Fastmail
X-Mailer: MessagingEngine.com Webmail Interface
User-Agent: Cyrus-JMAP/3.9.0-alpha0-431-g1d6a3ebb56-fm-20230511.001-g1d6a3ebb
Mime-Version: 1.0
Message-Id: <9aaed1c3-9a7d-4348-b15f-2bb9be654bef@app.fastmail.com>
In-Reply-To: <a78d9dcd-0bc1-7e98-a8f1-e5d6cd0c09a3@intel.com>
References: <20230516193549.544673-1-arnd@kernel.org>
 <a78d9dcd-0bc1-7e98-a8f1-e5d6cd0c09a3@intel.com>
Date: Fri, 19 May 2023 14:09:11 +0200
From: "Arnd Bergmann" <arnd@arndb.de>
To: "Dave Hansen" <dave.hansen@intel.com>, "Arnd Bergmann" <arnd@kernel.org>,
 x86@kernel.org
Cc: "Thomas Gleixner" <tglx@linutronix.de>, "Ingo Molnar" <mingo@redhat.com>,
 "Borislav Petkov" <bp@alien8.de>,
 "Dave Hansen" <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "Andy Lutomirski" <luto@kernel.org>,
 "Steven Rostedt" <rostedt@goodmis.org>,
 "Masami Hiramatsu" <mhiramat@kernel.org>,
 "Mark Rutland" <mark.rutland@arm.com>, "Juergen Gross" <jgross@suse.com>,
 "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>,
 "Alexey Makhalov" <amakhalov@vmware.com>,
 "VMware PV-Drivers Reviewers" <pv-drivers@vmware.com>,
 "Peter Zijlstra" <peterz@infradead.org>,
 "Darren Hart" <dvhart@infradead.org>,
 "Andy Shevchenko" <andy@infradead.org>,
 "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
 "Rafael J . Wysocki" <rafael@kernel.org>, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, linux-pci@vger.kernel.org,
 platform-driver-x86@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-pm@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 00/20] x86: address -Wmissing-prototype warnings
Content-Type: text/plain

On Thu, May 18, 2023, at 23:56, Dave Hansen wrote:
> On 5/16/23 12:35, Arnd Bergmann wrote:
>> 
>> All of the warnings have to be addressed in some form before the warning
>> can be enabled by default.
>
> I picked up the ones that were blatantly obvious, but left out 03, 04,
> 10, 12 and 19 for the moment.

Ok, thanks!

I've already sent a fixed version of patch 10, let me know if you
need anything else for the other ones.

> BTW, I think the i386 allyesconfig is getting pretty lightly tested
> these days.  I think you and I hit the same mlx4 __bad_copy_from()
> compile issue.

I did all my testing on randconfig builds, so I probably caught a lot
of the more obscure corner cases, but it doesn't always hit everything
that is in allyesconfig/allmodconfig.
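
A randconfig workflow of that kind can be sketched as a small loop; the grep pattern matches gcc's -Wmissing-prototypes diagnostic, while the overridable MAKE variable and iteration handling are illustrative assumptions, not my actual harness:

```shell
# Hypothetical sketch of a randconfig build loop; MAKE is overridable
# so the loop can be exercised outside a kernel tree.
MAKE="${MAKE:-make}"

# run_randconfigs N: build N random configurations and report whether
# each one produced a -Wmissing-prototypes diagnostic.
run_randconfigs() {
    n="$1"
    i=1
    while [ "$i" -le "$n" ]; do
        $MAKE randconfig >/dev/null 2>&1
        if $MAKE 2>&1 | grep -q 'no previous prototype'; then
            echo "iteration $i: warning found"
        else
            echo "iteration $i: clean"
        fi
        i=$((i + 1))
    done
}
```

Each iteration picks a fresh pseudo-random configuration, so repeated runs cover option combinations that a single allyesconfig build never exercises.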

       Arnd


From xen-devel-bounces@lists.xenproject.org Fri May 19 12:41:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 12:41:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537045.835777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzzQF-0000U4-KE; Fri, 19 May 2023 12:41:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537045.835777; Fri, 19 May 2023 12:41:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzzQF-0000Tx-GZ; Fri, 19 May 2023 12:41:27 +0000
Received: by outflank-mailman (input) for mailman id 537045;
 Fri, 19 May 2023 12:41:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jOyF=BI=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1pzzQE-0000Tb-33
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 12:41:26 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 75787794-f642-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 14:41:22 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id A400368C7B; Fri, 19 May 2023 14:41:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75787794-f642-11ed-8611-37d641c3527e
Date: Fri, 19 May 2023 14:41:18 +0200
From: Christoph Hellwig <hch@lst.de>
To: Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>
Cc: Christoph Hellwig <hch@lst.de>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when
 xen-pcifront is enabling
Message-ID: <20230519124118.GA5869@lst.de>
References: <20230518134253.909623-1-hch@lst.de> <20230518134253.909623-3-hch@lst.de> <ZGZr/xgbUmVqpOpN@mail-itl> <20230519040405.GA10818@lst.de> <ZGdLErBzi9MANL3i@mail-itl>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZGdLErBzi9MANL3i@mail-itl>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, May 19, 2023 at 12:10:26PM +0200, Marek Marczykowski-Górecki wrote:
> While I would say PCI passthrough is not very common for PV guests, can
> the decision about xen-swiotlb be delayed until you can enumerate
> xenstore to check if there are any PCI devices connected (and not
> allocate xen-swiotlb by default if there are none)? This would
> still not cover the hotplug case (in which case, you'd need to force it
> with a cmdline), but at least you wouldn't lose much memory just
> because one of your VMs may use PCI passthrough (so, you have it enabled
> in your kernel).

How early can we query xenstore?  We'd need to do this before setting
up DMA for any device.

The alternative would be to finally merge swiotlb-xen into swiotlb, in
which case we might be able to do this later.  Let me see what I can
do there.


From xen-devel-bounces@lists.xenproject.org Fri May 19 12:50:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 12:50:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537051.835790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzzYh-00021Y-FG; Fri, 19 May 2023 12:50:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537051.835790; Fri, 19 May 2023 12:50:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzzYh-00021R-Cc; Fri, 19 May 2023 12:50:11 +0000
Received: by outflank-mailman (input) for mailman id 537051;
 Fri, 19 May 2023 12:50:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9pCJ=BI=citrix.com=prvs=496ec590c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1pzzYf-00021J-KT
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 12:50:10 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ada2c795-f643-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 14:50:07 +0200 (CEST)
Received: from mail-dm6nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 May 2023 08:49:57 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MN2PR03MB5295.namprd03.prod.outlook.com (2603:10b6:208:1e7::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 12:49:54 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 12:49:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ada2c795-f643-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684500607;
  h=message-id:date:subject:to:cc:references:from:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=9QpGMXHLf28yi25Yw8XWa19UktQ24AfX69lAVg6gM74=;
  b=dL7zvfIocc++HpPJXHCPzHcxWFThfi37LUQ3v4Ca6PNAJCiTGJrPOb7r
   +XcMMOaD6r4hQ2bAMNY45BUjc85t4VUe+NU+irhZeygQnYfZSsX+ooEhF
   QS2jtWQKGw4F0FYCdJwTmYF4lRVjR7nAi6tdVDecr8N7w4FCYn4K9XpN2
   k=;
X-IronPort-RemoteIP: 104.47.58.104
X-IronPort-MID: 112119261
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J+6ZXVXyM/syUMNX9tfUX1QuZlCMIbSj1CdJWYBKw0N3vaHbq7dN3E0ln02q+dIRhKapwlQiMCfM2iFOiy6LJXjPpnH1F2ooa5EbiA+N5clY+FrEgzr+QUoIkz2Gb2mbV85kamBN+obcEYdTjPEOKYM+SSPTiTzTHJTU//DPLuwbG37UEYBQPdMb3T6w94v7GplTPhQNbeFggpj1GoKvIYjZ4r6qg/cdHc3mNB6mZwVvfZ0bY8tEI6JelruzqCMEiIs/STUwhCv2Gdwh6/W3xSF+8PDiC2/3b9mNtzL8/owptH4A8ixV9tzceESiK1iixp03utWfRo8aN0hbroSsRg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=VDQBl+uRnD5Rxb5ncMwm8f/5UoO1di+Yo3Jv2boMr5Q=;
 b=a/cbCsaquwgYn/k37srtbka6CKCrreEtFthMba7R9kchzLNoPF1gGs2X0+wLmAu4jO+9jFS/H/HCfA3wiXeVUYhpLU8QUBbe7iY3+46xoTwFqnE6GMqgPzFEBwyJrFAnkmLdttPv5dUm6y92Qj4YsjuUOXMyx7EYOB5B2CVyaLv/gcvHkxBkaihcymdw4CsZ8ki5Veovzl7P6fq1Vqa9+sM6XLDqKxbLU+44yODBpy2wJJ0CYBnj9iHMF7UpSX2GBgL501mgEcmLL6b2tUDPP7TzOBEKmp1Ip9G2J97JucvZ6kgqIFFIUcN4GbDKLtkrSGbdu+b594zpwxEymFOORQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VDQBl+uRnD5Rxb5ncMwm8f/5UoO1di+Yo3Jv2boMr5Q=;
 b=Gajfih0x2xIO0ginFwPb9cjRZyEE0Weyp1vzyAIKrXvzf189xttvuWbkpOxH5fMhHQ6OLu3RcyMmrdauv84MwHyRpq21BQy+9sqVihO/9RisaedLIJPIe/gXpR2EVBCjSi6/nr+r44O1QXG580qoNY2qM6BxtNVr/DHGe0LQVLY=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <8617570c-6dc4-74f5-7418-98f04f7e0ece@citrix.com>
Date: Fri, 19 May 2023 13:49:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront
 is enabling
Content-Language: en-GB
To: Christoph Hellwig <hch@lst.de>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 Ben Skeggs <bskeggs@redhat.com>, Karol Herbst <kherbst@redhat.com>,
 Lyude Paul <lyude@redhat.com>, xen-devel@lists.xenproject.org,
 iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
 nouveau@lists.freedesktop.org
References: <20230518134253.909623-1-hch@lst.de>
 <20230518134253.909623-3-hch@lst.de> <ZGZr/xgbUmVqpOpN@mail-itl>
 <20230519040405.GA10818@lst.de> <ZGdLErBzi9MANL3i@mail-itl>
 <20230519124118.GA5869@lst.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20230519124118.GA5869@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0093.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:191::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MN2PR03MB5295:EE_
X-MS-Office365-Filtering-Correlation-Id: 8dc79e00-f35c-41d5-6b08-08db58678ae3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8dc79e00-f35c-41d5-6b08-08db58678ae3
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 12:49:54.2319
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nb170M4+s1uheOmp+e5TioG2rQjxhz3jbSKB57FKIJh2ujuNPrazcw8OfmUPpkncEXT3zVqa1pdHiBrCcIicHJpQem5lb/SEKdf2g1Q4MOg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR03MB5295

On 19/05/2023 1:41 pm, Christoph Hellwig wrote:
> On Fri, May 19, 2023 at 12:10:26PM +0200, Marek Marczykowski-Górecki wrote:
>> While I would say PCI passthrough is not very common for PV guests, can
>> the decision about xen-swiotlb be delayed until you can enumerate
>> xenstore to check if there are any PCI devices connected (and not
>> allocate xen-swiotlb by default if there are none)? This would
>> still not cover the hotplug case (in which case, you'd need to force it
>> with a cmdline), but at least you wouldn't lose much memory just
>> because one of your VMs may use PCI passthrough (so, you have it enabled
>> in your kernel).
> How early can we query xenstore?  We'd need to do this before setting
> up DMA for any device.

Not that early.  One supported configuration has xenstore not starting
for an indefinite period of time after boot.

> The alternative would be to finally merge swiotlb-xen into swiotlb, in
> which case we might be able to do this later.  Let me see what I can
> do there.

If that is an option, it would be great to reduce the special-casing.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 19 12:59:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 12:59:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537057.835806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzzhH-0002kR-DT; Fri, 19 May 2023 12:59:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537057.835806; Fri, 19 May 2023 12:59:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzzhH-0002kK-AM; Fri, 19 May 2023 12:59:03 +0000
Received: by outflank-mailman (input) for mailman id 537057;
 Fri, 19 May 2023 12:59:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jOyF=BI=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1pzzhF-0002kE-IN
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 12:59:01 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ec76cf31-f644-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 14:59:00 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 725BF68C7B; Fri, 19 May 2023 14:58:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec76cf31-f644-11ed-b22d-6b7b168915f2
Date: Fri, 19 May 2023 14:58:57 +0200
From: Christoph Hellwig <hch@lst.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Christoph Hellwig <hch@lst.de>,
	Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when
 xen-pcifront is enabling
Message-ID: <20230519125857.GA6994@lst.de>
References: <20230518134253.909623-1-hch@lst.de> <20230518134253.909623-3-hch@lst.de> <ZGZr/xgbUmVqpOpN@mail-itl> <20230519040405.GA10818@lst.de> <ZGdLErBzi9MANL3i@mail-itl> <20230519124118.GA5869@lst.de> <8617570c-6dc4-74f5-7418-98f04f7e0ece@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8617570c-6dc4-74f5-7418-98f04f7e0ece@citrix.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, May 19, 2023 at 01:49:46PM +0100, Andrew Cooper wrote:
> > The alternative would be to finally merge swiotlb-xen into swiotlb, in
> > which case we might be able to do this later.  Let me see what I can
> > do there.
> 
> If that is an option, it would be great to reduce the special-casing.

I think it's doable, and I've been wanting it for a while.  I just
need motivated testers, but it seems like I just found at least two :)


From xen-devel-bounces@lists.xenproject.org Fri May 19 13:13:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 13:13:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537064.835821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzzv2-0005Fe-M2; Fri, 19 May 2023 13:13:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537064.835821; Fri, 19 May 2023 13:13:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1pzzv2-0005FX-JJ; Fri, 19 May 2023 13:13:16 +0000
Received: by outflank-mailman (input) for mailman id 537064;
 Fri, 19 May 2023 13:13:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzzv1-0005FL-Ah; Fri, 19 May 2023 13:13:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzzv0-0003ZP-Q1; Fri, 19 May 2023 13:13:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1pzzv0-0002VD-Dr; Fri, 19 May 2023 13:13:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1pzzv0-0003nq-DN; Fri, 19 May 2023 13:13:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7foy0Op69mossXTk6vlNGRQmTCCjQI0i44ONFeo3FEU=; b=E1yMgx34mYvuJyGDN+JRXce/Rq
	Xl8S0E6Vfe5QDWhLOT9VLEg6hXGCkZxm1jy5FNQy2vNz+JmxDqyTzAH9kjr4VOR11m5d4EMQywj3w
	noOSQosbMjSRzztHtwgm4OFyI95gZPnL3GgveK2KVbJzL86ygw2SSu4hFJP5ZtVItUJM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180706-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180706: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2d1bcbc6cd703e64caf8df314e3669b4786e008a
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 13:13:14 +0000

flight 180706 linux-linus real [real]
flight 180734 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180706/
http://logs.test-lab.xenproject.org/osstest/logs/180734/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd11-amd64 21 guest-start/freebsd.repeat fail pass in 180734-retest
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180734-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2d1bcbc6cd703e64caf8df314e3669b4786e008a
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   32 days
Failing since        180281  2023-04-17 06:24:36 Z   32 days   59 attempts
Testing same since   180706  2023-05-19 01:42:57 Z    0 days    1 attempts

------------------------------------------------------------
2416 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 306654 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 19 13:46:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 13:46:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537077.835841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00RU-0000UV-LH; Fri, 19 May 2023 13:46:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537077.835841; Fri, 19 May 2023 13:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00RU-0000UO-HJ; Fri, 19 May 2023 13:46:48 +0000
Received: by outflank-mailman (input) for mailman id 537077;
 Fri, 19 May 2023 13:46:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q00RT-0000UI-Cg
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 13:46:47 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2061e.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::61e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 95a09b93-f64b-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 15:46:41 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6972.eurprd04.prod.outlook.com (2603:10a6:10:11c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 13:46:39 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 13:46:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95a09b93-f64b-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EhXG10KDVNIVpJ5C3VC/2bihgl4o8O5c1AjyFW9cMh4GjvaulGP+xnHwoiGj5Wjkg7PzO4eOVzn5X74eygknaYrtNiPsETOm8tuQYHxJtRPjBRFhyNxbGXYmL6GBWMGNNtargffZi/lOzWO22SSyc1eb0BLDV2SuG1r7hYI269AYjYk6+lb64VaXIQiQR0ip4ulbSBnkJGr1u9mgRpC4TtKh2U5FcmeJ+seliQu77vbANfsGLKEhw4m+XNxS95+gb5/MSMq8kpO4WyKLhkeeEbpHVXxxaLUn7Wq7thopF4TZpLi956Y2ntytXgZwpDGrbIGvs3dplsn04RYDGapsUw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rhHA8hH7QwqfXdKGI77LLNSsHMPt7mHwyDyOqpZH4lA=;
 b=kyLfp5RE3w3N8bboGxYdmn5PQ+fyOb+i0IV5zSlbD0iFh1RyPxMLRaO99F96tDQ9aemLgc1kDdxocmEXdoB3hW2vujytip+fo9Osa1sW6FJPCHqPh69z+rrW5zl3xxZythaQpH4iqJ+1vdFuhLipzCEb2DpLsLBsdBu814smsqJQdt8BTzZJRWBs9NVu3uL4CRccy4zcbXTnf6TDN9H55jKk3Gf7Pyjfq2HKcZv71KBYSWR3utAMfrjnY0cg9fByzDgrNcx/ZQMCj4JoTBPe9EWzRD1r44s6FXMC+mBOnwU48T4kAfPfTfOorCluPYuWvPetXArSRezqKQWWbXZS7Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rhHA8hH7QwqfXdKGI77LLNSsHMPt7mHwyDyOqpZH4lA=;
 b=ub40dt2J60ZWo7rPT1P2yAAiopEDgCCHluBDqd7s5TwvYP5RJZw08+7RdhubF2lHMwr81LXeWMAHdvNoN4RBPPh/q7ZZAUBS0Cb3zGLHl/thMzQRIrGpz73w415EMqz9UNDI3xsvLbSlBr6LPXIyWkI1h30DF8Yb92eNRlZVPqgh7suMTpBPr3jQtD7tyiEY2s6teqfdYintSUfAdnZ9YqVeBNm3rYevsc4wrSYY1q0mImyDzL4NhZZQh7e/yOc6nL+z+OmMPta3+OQdMJTuc1tKg3MPWiCJuo1u2gfJ20TFVwHGYM5sn2/WVlVmNwinYyiuVNAuZO6m8Bco01iDUA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9624d303-d894-ca35-dd20-8bc6924dce01@suse.com>
Date: Fri, 19 May 2023 15:46:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/2] xen/misra: add diff-report.py tool
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230519094613.2134153-1-luca.fancellu@arm.com>
 <20230519094613.2134153-2-luca.fancellu@arm.com>
 <94cc53ec-17c4-6ec7-7f13-e4f0f9e31e2d@suse.com>
 <4BCB4841-3E13-468E-BB89-E72EDFF9D3E1@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <4BCB4841-3E13-468E-BB89-E72EDFF9D3E1@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0024.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB6972:EE_
X-MS-Office365-Filtering-Correlation-Id: ef8e2e81-b387-4273-1ff7-08db586f7856
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	cqasDpKyEiB9K0LEY4mCAu/BkUZNC0jOuddUYiqZJx1VIkPy9O+xwDu1TbUdQwnawQPwBYTHVbel+Xmdmt/eW10hDX4kgWD+7oHV4VW9Z7xJOihPsylU+4nqDmFet+wFZAdT6/47PZOwIpNxAMxBT78fgoxkIgmAk7eocTUMJIxM8P3n0pLXSb0ddqPQkLBU31ubW0uia4SrMyWEIrqeKhQo9W2g6S+V3pF0s5lyNwfddyZ9pn7SJ1BNPAWN6xJmKPFRDQ/+Ljy93EiDjeQM05FKmOxnxSXhNDYv8jUBcfj0ZwufO2J+6m16zfcX27oFdyCybjra5YJ7EQ3iglURe5zK0SfLpWtp+WjQcMpOlLhvqktifx/7+kzIYFf8ejNx0Kqq12vXTvQy/dHRmlkiHBS4tWdWZLcTQnfj7vUk4ZKMjoHQ1D431DJ5lb3PhfuhSfwflkx3STsYT63vEJgdCj4TlrIMw6zFffwRoAs7nx1zOS6ma2a0P12WD6iEHsOfbPPmK3k0lGbQLpWvwHc2b6QwBirHUwqVXENYqFkN3lcR9QVgmjDiqFRueevIDlnJksdkAYn19VBnPfsLbWz66CVX3LM1OI53KcXI835TGNR4aomiSbSTaemThpHTEhLnmSANEo2Jmm/32U2/uXicQw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(346002)(396003)(136003)(39860400002)(366004)(451199021)(6512007)(6506007)(31686004)(478600001)(53546011)(26005)(2906002)(41300700001)(6486002)(186003)(316002)(2616005)(54906003)(66476007)(36756003)(8936002)(6916009)(38100700002)(31696002)(4326008)(5660300002)(86362001)(8676002)(66556008)(66946007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?QWdES3Z2YUpkRk5aWjViOU56a0hTTUFEYStGcjdEeU9BRFhVTktwYnZKdlg4?=
 =?utf-8?B?QkJsT2xpQ28rd0lqUUdVeU1NR3JxRnk4ZXNtYVpwYUNSdjhXLy9FbjhVWlpT?=
 =?utf-8?B?YXJCLzJ3a1JNeUV0UExla2lMYTNmQ1RGSzVaM0VaVGRZd0grd3c4YXd0WlQ0?=
 =?utf-8?B?MStjUFgxZGtHanE0aTdNbnk5UjR5dmRrUnF0L1lJVXo2NEtmbmMzdXFYaEgr?=
 =?utf-8?B?LzNXWTRXVUtFUVpIVkZoVGJ0U0R5bCt4Vkk5NUhucjN5RjBCWjczTnphS2Z2?=
 =?utf-8?B?N3VFNk9VTjhzL2VZei82bjlsSFRMSWJWWTAxZTRzNUgzUVJIRXE5QnF1VTgz?=
 =?utf-8?B?NXJtNDBOTktHWm9Kc0Q3QWpsb2dQanVieFgxbmpPYVM5czlxSDF3cUVTaGRj?=
 =?utf-8?B?ZmpBM1FNT0VWS0l3TFNLcHRvdklLUU8ra0pBQ1Fwb1lJakR3NEpqKzdHOG04?=
 =?utf-8?B?WGd6cnJJTDhuT0E0RXQ3SGdjdk1OakRremM3dDRBVW9xVzU4WlBvVzFBVXho?=
 =?utf-8?B?aGo3elVNbDFWcFp1a3ZqV3R5Y1BqcGdCckxXaG12MjRDNUJnOSs5Z3FFWWJy?=
 =?utf-8?B?ZzJXSE5ITUxabDlYWm93SGROQ1M4dmx4Y3h1bkJsZ3RQVVJNeFlsalQrU1Y5?=
 =?utf-8?B?N0dCbnBjOXBtVGNYUzdzeGRweGkxNVNhTHZkSm51eCtBQTYwa0M5dlVXdUJR?=
 =?utf-8?B?VG5NM2s1dWM0WlNRS1JEUzdFSnBZZ2dXcUtnYVZ5TE9SZnQ0RzBWaGtTMjZV?=
 =?utf-8?B?cmwrR0w4Wmt5VTd6d2xIWUJydzZwNHNzK1RjOU1EeFBjUGRzUGZFWlY4VklQ?=
 =?utf-8?B?UndFeFBVSTdpZUlxL3RMZHJOaEtMd1o0YzdZZWR2K25qT1NlMU5CZGRXQ0NX?=
 =?utf-8?B?eWZ5eTluUS9ac3g3ZkFSTjZsWFlYM1VhT0FZajRiOWcrbTJQMVZXZUlGUk45?=
 =?utf-8?B?RXYvWnZPbUxaQzlyZCtqbG45TnZIekorTTFmMWpKb0ZxNFN3V2szVEtrbDBN?=
 =?utf-8?B?Q0lCMHJvUjJ2WmFCaWd6QTdBTnQ2SkFyaGFlT1cwQ3dhdCtMd04xM2dDZmlF?=
 =?utf-8?B?WkU3c1ROaGh2MUx4K1EySHJxbStLY2pYcEdwbkVwckdkTHdzL28waGxiV0RS?=
 =?utf-8?B?SGxJdWRuemtKYU52d1VMVzBqbzYrSDBOTmlHZkRnV0FUWmtMRm1DajZFbzNa?=
 =?utf-8?B?NFlxNUF0aUZrbHVyYXpQQksvanhoSW5nUC9NODFnQ1UrUjBRVi9NVWtzclNq?=
 =?utf-8?B?NjZZNmZGZWtocG55aUtwQ2l0NVBtai9lRU51ZHFtSmVMekcxMVBRa2Yya0xr?=
 =?utf-8?B?WjJESjY0d3FDMUg2SnBRTDBFSVgyRDV0TUhrSFNEYXRQcTkyTS96ekcwL1dI?=
 =?utf-8?B?R1pFL1p0NDN1cTl0T2hHcFpFMklWbURpSmNrNG1ENmVLYVFZcXErdnR1cWZi?=
 =?utf-8?B?QU80ZHZlWElEd2hzZXFjRVUzbFZKMGJINjk5N0ZjcHh3UGQ4NTJweWg3T2pZ?=
 =?utf-8?B?QkNCSGpxWnY5VEFSN3RQajlXQ3dKclpsWjZRMXkxSnBDT3liZ2djeU0vRDg2?=
 =?utf-8?B?VmJWNElQdnMxZnpoZWN5VnZTdTUyU0lCSXNOeFpqUlM3VzBxRjVwcXpaelRo?=
 =?utf-8?B?WnJkVmEzVHdDeGhxUjJyNEswS1hhWDZTRTdiaGQyMytDV2hnRUs0eGJ5WWFH?=
 =?utf-8?B?dGhZekRlckF4QTNDWWtyYXkwSFowNEpEOW5NdFA0QnI1MjhLc0tXK2d5ZFFX?=
 =?utf-8?B?UWJoR1pKQ2g2NUY1RjRoVnRQRjgrcmpDcDMwc1lVcXF4SWVDNXlpVnBXQi93?=
 =?utf-8?B?U0UrUjhhYVBWa2Q4Sk9ia0F3MzlUR3pia1Rma1I3UU9lOGZqb3BOVmVZWG9Q?=
 =?utf-8?B?RWtIVnlCL1Q4OVRadG03QTFSRmU4MVd3YkwyamFXUkxEdWZJeUtPS0VSV04v?=
 =?utf-8?B?NnJYUml0Rmg4bW1hSks0ekplWWlROFlhNXhJWVNmS3laM3d3V29wR2ZNN2F5?=
 =?utf-8?B?Q3FGbVF6K3ptWCs4Mi9QMmpKYkdRZjdIa1VDRDhsZDd6WWlCNkVWdkcrVGVM?=
 =?utf-8?B?eTVOcmdaMVI2OWVjMGxodThIOGUvMEM0WlVuTDBMQXFRVnIxYkZzYWZqOFRZ?=
 =?utf-8?Q?G8D7h0Jwomn+zlufjDEffXj7s?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ef8e2e81-b387-4273-1ff7-08db586f7856
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 13:46:38.9762
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 89i1T1XCxZ1AyUKwGVS5IKRprGt0dOoXKCqtCLYL2jAPc+KY61FvhiVQ0ZjSIB5JYQsC1+vESEfVgN+zIwnk4A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB6972

On 19.05.2023 13:10, Luca Fancellu wrote:
> 
> 
>> On 19 May 2023, at 11:53, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 19.05.2023 11:46, Luca Fancellu wrote:
>>> Add a new tool, diff-report.py that can be used to make diff between
>>> reports generated by xen-analysis.py tool.
>>> Currently this tool supports the Xen cppcheck text report format in
>>> its operations.
>>>
>>> The tool prints every finding that is in the report passed with -r
>>> (check report) which is not in the report passed with -b (baseline).
>>>
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>> Changes from v1:
>>> - removed 2 method from class ReportEntry that landed there by a
>>>   mistake on rebase.
>>> - Made the script compatible also with python2 (Stefano)
>>> ---
>>> xen/scripts/diff-report.py                    |  80 ++++++++++++++
>>> .../xen_analysis/diff_tool/__init__.py        |   0
>>> .../xen_analysis/diff_tool/cppcheck_report.py |  44 ++++++++
>>> xen/scripts/xen_analysis/diff_tool/debug.py   |  40 +++++++
>>> xen/scripts/xen_analysis/diff_tool/report.py  | 100 ++++++++++++++++++
>>> 5 files changed, 264 insertions(+)
>>> create mode 100755 xen/scripts/diff-report.py
>>> create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
>>> create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
>>> create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
>>> create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py
>>
>> If I'm not mistaken Python has no issue with dashes in path names.
>> Hence it would once again be better if the underscores were avoided
>> in the new directory names.
> 
> From what I know python can’t use import for module with dashes in the name, unless
> using some tricks, but if anyone knows more about that please correct me if I’m wrong.
> 
> The style guide for python (https://peps.python.org/pep-0008/#package-and-module-names)
> Says:
> 
> Modules should have short, all-lowercase names. Underscores can be used in the module
> name if it improves readability. Python packages should also have short, all-lowercase names,
> although the use of underscores is discouraged.

Hmm, I was initially thinking there might be such a restriction, but
then I checked a pretty old installation and found plat-linux2/ there
with several .py / .pyo / .pyc files underneath. Which suggested to
me that, for them to be of any use, such a path name must be permitted.
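The two observations are compatible, and the distinction can be sketched briefly (the module names below are hypothetical, standing in for the ones under discussion): the dash restriction applies only to names written in an `import` statement, not to directories that merely sit on `sys.path`.

```python
# `import diff-report` never reaches module lookup: the parser reads
# the dash as a minus sign, so the statement is a SyntaxError.
try:
    compile("import diff-report", "<demo>", "exec")
except SyntaxError:
    print("dashed name rejected by the import statement")

# An underscored name is an ordinary identifier and parses fine:
compile("import diff_tool", "<demo>", "exec")
print("underscored name parses")

# The directory *containing* modules may be named anything, since it is
# added to sys.path rather than named in an import statement -- which is
# why .py files under a plat-linux2/ directory were still importable.
# (importlib.import_module("diff-report") is the kind of trick that can
# load a dashed file by string name, bypassing the parser restriction.)
```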

But well, if underscores are required whenever a separator is wanted,
so be it. Albeit ...

> So, yes, the use is discouraged, but here I think it improves the readability. Unless we want
> to use “difftool” instead of “diff_tool” and “cppcheckreport” instead of “cppcheck_report”.

... personally I'd like both shorter variants better, plus perhaps
xen_ dropped from xen_analysis, or some different name used there
altogether (to me this name doesn't really tell me what to expect
there, but maybe that's indeed just me).

> Can I ask the reason why we need to avoid underscores in file names?

First of all they're odd; a space or dash is simply more natural to
use. From my pov they ought to be used only when a visual separator
is wanted, but neither space nor dash fit the purpose (e.g. for
lexical reasons in programming languages). Plus typing them requires,
on all keyboards I'm aware of, <shift> to be used where dash doesn't.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 13:51:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 13:51:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537081.835851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00Vp-0001wm-6h; Fri, 19 May 2023 13:51:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537081.835851; Fri, 19 May 2023 13:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00Vp-0001wf-2X; Fri, 19 May 2023 13:51:17 +0000
Received: by outflank-mailman (input) for mailman id 537081;
 Fri, 19 May 2023 13:51:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9j/4=BI=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1q00Vo-0001wZ-7I
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 13:51:16 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2060f.outbound.protection.outlook.com
 [2a01:111:f400:fe59::60f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 36287e52-f64c-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 15:51:11 +0200 (CEST)
Received: from BYAPR07CA0096.namprd07.prod.outlook.com (2603:10b6:a03:12b::37)
 by CH3PR12MB8457.namprd12.prod.outlook.com (2603:10b6:610:154::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 13:51:08 +0000
Received: from DM6NAM11FT004.eop-nam11.prod.protection.outlook.com
 (2603:10b6:a03:12b:cafe::d7) by BYAPR07CA0096.outlook.office365.com
 (2603:10b6:a03:12b::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 13:51:08 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 DM6NAM11FT004.mail.protection.outlook.com (10.13.172.217) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Fri, 19 May 2023 13:51:08 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 08:51:07 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 08:51:07 -0500
Received: from [10.31.192.57] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 19 May 2023 08:51:05 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36287e52-f64c-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FMeuCN6/YIyNP7kp+7PqjfEvtYCtLZUMwlIqMyBAQefm/xt/+Hko5UQqaNxRk8K+SzDG/m4Yt5cNt+8wxAbDZZHJB0LbZGcJMJN23xp8wSVInkhEIqTGdm+RBjEzYYsdMwmWDJYpq0mAlim6h+5hch3nn0VAbpPwSZivhKOZumsSurMAbFP3OKlaRAyUBH661F4+gBx0P/5tpNlT0hOEkOz4nZnF9D8izNdQ82G7qGyCw5wAx6IKICCFbrwqbpbUBqyiK+4uJuT90r9D6QlbvvXqVMuc0+cIPFXtv4I4LGwbpKSxvVDo6JvpJrA+AsIboEmjEXy96qE2v1U13ieJhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Ne1kIOEt0to5/Wp87p+qktz7CIMqfTtz1Jeunn9xaUs=;
 b=Pc104TGmDxpaMYlKNnmIDUt1I/UL9tyKSGaOPi8rHn+JSqrEOt+2KSSx5cfbPr41S0mVh+Xqfp9ag7Z7jpw3jKPzGGK80daZ/uKFyvwxXfx+KnAuOlsfAs4pmvkLVwwnzrc4kwiw2UQGC6isIPSz3ljdUsQeNFfbqvcbzHzsfHQa2FZShyWk5FwSHkD3oyuC6Kxzk48fpdtVkaeXxLqRidp3rhOSn1cU6AqhVMEdTo5qVRLWR35f2ZDPyFhyQ5pRMjrXgCKPj0RQqxOyKxzOt6+9RVc/5+4SmxKLANwGG9YrTd4v3yqDDQP7b8cd6GfbCPQD0Gnf9NFIIX0D+sap1Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=gmail.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ne1kIOEt0to5/Wp87p+qktz7CIMqfTtz1Jeunn9xaUs=;
 b=ZhH7lxEASAMapN491GwJBy8jdDCTbAZplqqdqay6RTIR1lVDHskBpPeMZBveGoZnjIF/uB4neXQfUnAun+BBvkXbpjVm9Ldq/zr0pOJcVLSBUdHVbM4LekvK6voC17DeioLsEh+tlNx1JYThi6fAJ4Xg1cGex9mDclqYHNIKCWg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <c3ed3834-fafb-ddfa-a422-2d3b3dd30a06@amd.com>
Date: Fri, 19 May 2023 15:51:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: xen cache colors in ARM
Content-Language: en-US
To: Oleg Nikitenko <oleshiiwood@gmail.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>,
	<Stewart.Hildebrand@amd.com>
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com>
 <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
 <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com>
 <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
 <CA+SAi2tErcaAkRT5zhTwSE=-jszwAWNtEAnm5jNGEP1NoqbQ3w@mail.gmail.com>
 <53af4bc6-97ad-d806-003b-38e70ccd2b58@amd.com>
 <CA+SAi2vrB714Tc9dn4SbtDo3VrT3Q8OpSiFXRLRaO5=0BJo_rQ@mail.gmail.com>
 <f0e6ca10-2142-7c39-0c7b-042c454e7e08@amd.com>
 <CA+SAi2tCVDiQ1BLdvuH2XnvTDGDCnPBDCq70AVbsO+TZKMERSw@mail.gmail.com>
 <fa9ebe0a-6ba0-6f1a-0df1-ad65ec1e93b3@amd.com>
 <CA+SAi2u9uxSgUHZ6EdNuQHSh9FYn3YR0kEg9BB3BQwrkf11zCw@mail.gmail.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <CA+SAi2u9uxSgUHZ6EdNuQHSh9FYn3YR0kEg9BB3BQwrkf11zCw@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT004:EE_|CH3PR12MB8457:EE_
X-MS-Office365-Filtering-Correlation-Id: cfb29d2b-069f-4ae3-1b94-08db587018f3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NNzNQlM2P8LjCouaSO8QzyH8g5XPBLVDdYDFslh06GA5cphpxabOcmRVUBcZ6hl7jf35W4NNcF/A8f+02wZJrjGDUyGmssuimGQJov4/5T2FZAZBw8zy1zR7MaBuXhQPL2r9FX1ESENIJKIOAVEQXGf6g1Cn2dIIgs/mGDcK6MKQcBvFmU5UQy2gJrNcCqMBUgxjMsKw81D6+qMA8ZzKiiNFPIXMlDNsnz4mMEUE7HOd68SnH+URh4z66/KiJa9o/eU49pBG1esSN6MKlYdJmCAefB4/8rm8vhv8/LvrGoHulFwNhWlFMf/P0qWfotENUUAHQnCReffFM4XKT/BVKWQVhhY4jL3OrOd2At+sb2w3m81MsU1aKspWzGxd5H8u5stG5NGxNU+YZ6PtpB+bhpIZYtHNFBCxB/c3CsSK2OrtYsJshBRQ1IcbCQUOcx15UMW0pdLdmoR5MWzk7ZwXnXWlRQXRNnxpIEq1pODlKiECXYU8OarTU4jE+rk2azJu/P+vkOJkoXlfRCLfd9HtieeTMFSlnudnNuypWJYtth5tNAvjanAvZHinqPJamP3dh4z6xxRQwQr98G7y3ocE1QxgbkyDS8heL7Gt1cbVgR/eHwfgq2XZkMeIxCHzUwmQIxEObQxiloQ9VZm8sTG4PCtnnGtWg4Go8nbuB9NAePdPcPZ/sIvA51dpfeWAsfTGIlJuYUwef0sV5cvylAYECDCXI2r1CyAqkXsdGg46OAQf6U5cIvzhPWsQlAkQz9jl
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(396003)(136003)(346002)(39860400002)(376002)(451199021)(36840700001)(40470700004)(46966006)(336012)(478600001)(40460700003)(54906003)(16576012)(86362001)(2616005)(426003)(47076005)(83380400001)(36860700001)(26005)(186003)(53546011)(31696002)(356005)(82740400003)(81166007)(82310400005)(36756003)(40480700001)(4326008)(6916009)(316002)(70586007)(70206006)(2906002)(8936002)(8676002)(31686004)(44832011)(41300700001)(5660300002)(43740500002)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 13:51:08.0442
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cfb29d2b-069f-4ae3-1b94-08db587018f3
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT004.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8457

Hi Oleg,

On 19/05/2023 15:38, Oleg Nikitenko wrote:
> 
> Hello,
> 
> Thanks Michal.
> 
> Then the next question. Now it is related more to integration than to development.
> The license for Xen in the 4.17 revision on the xlnx_rebase_4.17 branch of the Xilinx repo has changed.
> I found this out when I built that version.
> Now the bitbake/Yocto build fails because the md5 hash of the COPYING file no longer matches.
> The expected md5 hash is stored in the sources/libs/meta-virtualization/recipes-extended/xen directory, in the files:
> xen-tools_4.15.bb
> xen_4.15.bb
> xen_git.bb
> xen-tools_git.bb
> xen-tools_4.16.bb
> xen_4.16.bb
> So the question is: should I update the license file for all our branches, or is it possible to keep the old one for old branches?

I'm not a Yocto expert, but it looks like you are trying to build Xen 4.17 with an old meta-virtualization layer that has no support for 4.17.
Anyway, this discussion is no longer related to the original issue (nor to Xen itself), which I hope I helped you overcome.
In case of other Xen-related issues, please send a new e-mail to the mailing list.

~Michal


From xen-devel-bounces@lists.xenproject.org Fri May 19 13:51:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 13:51:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537082.835861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00Vs-0002Cx-EF; Fri, 19 May 2023 13:51:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537082.835861; Fri, 19 May 2023 13:51:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00Vs-0002Co-BQ; Fri, 19 May 2023 13:51:20 +0000
Received: by outflank-mailman (input) for mailman id 537082;
 Fri, 19 May 2023 13:51:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q00Vr-0002C2-DH; Fri, 19 May 2023 13:51:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q00Vr-0004LO-2Y; Fri, 19 May 2023 13:51:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q00Vq-0003Xt-GD; Fri, 19 May 2023 13:51:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q00Vq-0006Ru-Fj; Fri, 19 May 2023 13:51:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z9JomDfclueZhS0wKciwxJHr+18exI4CHrSqbWMuANo=; b=ZEnyRnlPbmTOL8+RPYaVAcg12+
	CD5kYmyMljom8N8fvyiUHXd4q+rBmn8BRbtanBcE8hP2aTSjVfWOQLJACzZc55flIn8ACNxnDMP9R
	Cd+e/3tszpTccvME55C1vVyHyP98LOTxiEfBgooPvy0dtgS8XPFdc5OpnO+TE9q+lRHc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180721-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180721: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=146f515110e86aefe3bc2e8eb581ab724614060f
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 13:51:18 +0000

flight 180721 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180721/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                146f515110e86aefe3bc2e8eb581ab724614060f
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    2 days
Failing since        180699  2023-05-18 07:21:24 Z    1 days    4 attempts
Testing same since   180721  2023-05-19 06:11:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Gavin Shan <gshan@redhat.com>
  Igor Mammedov <imammedo@redhat.com>
  John Snow <jsnow@redhat.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Markus Armbruster <armbru@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Steve Sistare <steven.sistare@oracle.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2380 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 19 13:54:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 13:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537094.835873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00Yp-0003AP-1t; Fri, 19 May 2023 13:54:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537094.835873; Fri, 19 May 2023 13:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00Yo-0003AI-VN; Fri, 19 May 2023 13:54:22 +0000
Received: by outflank-mailman (input) for mailman id 537094;
 Fri, 19 May 2023 13:54:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1HX=BI=citrix.com=prvs=496750f7c=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q00Yn-0003A8-TU
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 13:54:21 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a5cd3c71-f64c-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 15:54:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5cd3c71-f64c-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684504459;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=TCnM66PND3tboRMB8gcvZFeeqFeCuf9rRPG+txHkYnQ=;
  b=iMGhEGsVfAYZpuKG5g27EZpWcytLqoEcpRtM8foaWE/5UTzHXjc/kW+0
   Vv7b4U9kr0hwKVwnliVE5H2j48+RqBpNX2QkQSCrpDjpgG7vJZOSKCzU8
   YuIE4Ocmq3Tc1fZxGT0gTleBugwNRws7QSAaONxWGfGYEDtRKtRbgbna4
   M=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108992975
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,176,1681185600"; 
   d="scan'208";a="108992975"
Date: Fri, 19 May 2023 14:53:54 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>
Subject: Re: [PATCH v3 08/14 RESEND] libxc: Include hwp_para in definitions
Message-ID: <f8f43ced-f34a-4438-8821-7d41f80ab9ec@perard>
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-9-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230501193034.88575-9-jandryuk@gmail.com>

On Mon, May 01, 2023 at 03:30:28PM -0400, Jason Andryuk wrote:
> Expose the hwp_para fields through libxc.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 19 13:55:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 13:55:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537099.835889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00Zx-0003kO-CS; Fri, 19 May 2023 13:55:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537099.835889; Fri, 19 May 2023 13:55:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00Zx-0003kH-95; Fri, 19 May 2023 13:55:33 +0000
Received: by outflank-mailman (input) for mailman id 537099;
 Fri, 19 May 2023 13:55:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h1HX=BI=citrix.com=prvs=496750f7c=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q00Zv-0003jx-93
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 13:55:31 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ceb16f2e-f64c-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 15:55:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ceb16f2e-f64c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684504528;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=Uuu4LSxqVICmUg6LyP4Utj+sprKIwwLKqyqTjZYhcnM=;
  b=HkpcduDQzrW6eznXZSCx4VvRM2WUIz+hLJtKl6Q7xP7e1gCJLMcfd/kr
   Q0IkNGwLaPaT+TbCduJ8fcO5CYrjvfFjz5d5fVJq8HNu6EajqVc7VKK5K
   nJ5nzfo6hsgvsQH274lErHPejwvBMaCvRw815EyxDohSIJU4tETTAl66N
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109674701
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,176,1681185600"; 
   d="scan'208";a="109674701"
Date: Fri, 19 May 2023 14:55:01 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>
Subject: Re: [PATCH v3 11/14 RESEND] libxc: Add xc_set_cpufreq_hwp
Message-ID: <683eb8e5-91eb-43a5-a23a-17ba837da204@perard>
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-12-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230501193034.88575-12-jandryuk@gmail.com>

On Mon, May 01, 2023 at 03:30:31PM -0400, Jason Andryuk wrote:
> Add xc_set_cpufreq_hwp to allow calling xen_sysctl_pm_op
> SET_CPUFREQ_HWP.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 19 14:05:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 14:05:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537104.835899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00j5-0005NS-7h; Fri, 19 May 2023 14:04:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537104.835899; Fri, 19 May 2023 14:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q00j5-0005NL-55; Fri, 19 May 2023 14:04:59 +0000
Received: by outflank-mailman (input) for mailman id 537104;
 Fri, 19 May 2023 14:04:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q00j3-0005NF-SJ
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 14:04:57 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20621.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 223ee2e7-f64e-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 16:04:56 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6823.eurprd04.prod.outlook.com (2603:10a6:20b:102::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 14:04:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 14:04:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 223ee2e7-f64e-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TaEU7duZZ9+UvXnZ52YZBavccb0KJ5WmKBkimg6aVdvt2y+p2rHKOiPqaK4dVcgK6DKObgQQhQNjGdQHnCuKKjuYfe+b2/c8Y2qdTXr7p9KfGGDYL2tDNe0HIsHfhWtz7rlBe7EO/7l+B9MNIkfWxC8CqvPK3Fg4skuouIlOx1BoEiMxICu1v2QJ8TZpv9OgJCqt7j2m+20/tVWEL0nB2OWazfdge+xT6xh96awjRi/uc12exTXPsd/zCQwGTYBLMnKDyQ8uh0lNKgbt9Xeu0goy98IEhvPSzvo5A6TCeT8KrUu6o8OTChrs/TyMWvQnxR2iWbk88qdG8KnPmLLj5g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EkoE74H0DL22Rh4UD4F39wlDsfv/7WlSaI1htlhZv5A=;
 b=K/m3o13dwqsMg/A3+A8hN7crmYyHVWBjD4b3qCmsJbZJP/KkEJ0/QvTs5gKm21EaR4WJIo44MyHsHLLU3W2AzvTONcYnIbHA5p6w925huNbGe3Oo2IjbeB+FGaR0bjl4lGNGrubd829mDUa7/rR190TyibbPx3AnLajvWgLZrDQ87dkOup1Gto5i8cHMFCSz68eh9VbYuonZtAfubTv14gy530ddKvtoVXZM5MxQacyjICdrV7AC3n6HLSSxx9o+/lyaFEPu90zg99BAYnJO6kaAxiVRQfl6fa1PoCIWD9Oe6IHpuHF49UuJt0hkGhIPKFIsgo+v24m0nS1yUfobGA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EkoE74H0DL22Rh4UD4F39wlDsfv/7WlSaI1htlhZv5A=;
 b=IIrtW7Svjn0hh8vCg9KBTdGcoiaXqjc5g7s7KV2ZrLgWsQlQ6R3M+O6fDvIeB99GZ80El/reHxHRhdZcn4ImQnjRdMox7zbtJdTIi6bwYXFMrYrr0WEhQUswX3m/+Pf2MR3K46+6DcShJvjlZ59IWuSvek35CB4Mj9eV4nobJ4/gaMLHLzhoFQdfuNLL8QjhwCh0kYSM78mxOUhdHSZOrvq3HkkZkF9vi5/znrCQ1PYaJ/ZY/kE4Qz8ssRn+56NpZBNK7+UCDLv9C5lgSljGRsG2G2mwkky8HrKQJzNta6VSTIG1aUGvw4zq6oWSrvKtOyLSA6G7vNhYaX4AJjn6Ow==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1b837835-f0bc-b59b-751c-2831f106b7f2@suse.com>
Date: Fri, 19 May 2023 16:04:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86: do away with HAVE_AS_NEGATIVE_TRUE
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f994f67d-e0de-ad28-d418-1eb5a70bc1b8@suse.com>
 <b79b5b32-7bcb-b4cd-1594-e16aaff640e1@citrix.com>
 <2c867a48-1442-e4bc-0d51-d87c77aba8a9@suse.com>
In-Reply-To: <2c867a48-1442-e4bc-0d51-d87c77aba8a9@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0167.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b4::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB6823:EE_
X-MS-Office365-Filtering-Correlation-Id: 097ff62d-ff9a-4644-a133-08db58720537
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 097ff62d-ff9a-4644-a133-08db58720537
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 14:04:54.1470
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aAOEkdR4xf54qcatN1KW7yTsjvzC5ZgMTzP4KQxW4NETdkr8c9/y2DO7hJmQz1jgDTUMrtNYfbd5Bzv9Leip7Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6823

On 19.05.2023 08:15, Jan Beulich wrote:
> On 17.05.2023 19:05, Andrew Cooper wrote:
>> On 17/05/2023 3:22 pm, Jan Beulich wrote:
>>> There's no real need for the associated probing - we can easily convert
>>> to a uniform value without knowing the specific behavior (note also that
>>> the respective comments weren't fully correct and have gone stale). All
>>> we (need to) depend upon is unary ! producing 0 or 1 (and never -1).
>>>
>>> For all present purposes yielding a value with all bits set is more
>>> useful.
>>>
>>> No difference in generated code.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> Unlike in C, there's also binary ! in assembly expressions, and even
>>> binary !!. But those don't get in the way here.
>>
>> I had been wanting to do this for a while, but IMO a clearer expression
>> is to take ((x) & 1) to discard the sign.
>>
>> It doesn't change any of the logic to use +1 (I don't think), and it's
>> definitely the more common way for the programmer to think.
> 
> Well, I can certainly switch. It simply seemed to me that with our many
> uses of !! elsewhere, using this here as well would only be consistent.
> (I did in fact consider the ... & 1 alternative.)

Before even starting with this - you do realize that the C macro
(AS_TRUE) expands to just a prefix for the expression to be dealt
with? That would then become "1 & ", which I have to admit I find
a little odd. The alternative of making this a more ordinary macro
(with a parameter) would likely be more intrusive. Plus I assume
you had a reason to do it the way it is right now (and I might end
up figuring that reason the hard way when trying to change things).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 19 14:26:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 14:26:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537108.835909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q013r-0007zM-Vo; Fri, 19 May 2023 14:26:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537108.835909; Fri, 19 May 2023 14:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q013r-0007zF-T2; Fri, 19 May 2023 14:26:27 +0000
Received: by outflank-mailman (input) for mailman id 537108;
 Fri, 19 May 2023 14:26:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q013q-0007z3-AO
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 14:26:26 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20616.outbound.protection.outlook.com
 [2a01:111:f400:fe16::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 20edb9ee-f651-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 16:26:22 +0200 (CEST)
Received: from DU2PR04CA0071.eurprd04.prod.outlook.com (2603:10a6:10:232::16)
 by GV2PR08MB9327.eurprd08.prod.outlook.com (2603:10a6:150:d3::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 14:26:19 +0000
Received: from DBAEUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:232:cafe::ea) by DU2PR04CA0071.outlook.office365.com
 (2603:10a6:10:232::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 14:26:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT006.mail.protection.outlook.com (100.127.142.72) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.21 via Frontend Transport; Fri, 19 May 2023 14:26:18 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Fri, 19 May 2023 14:26:18 +0000
Received: from 04863da4bb8d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D17D8615-5F92-4668-9852-431D1B390AE9.1; 
 Fri, 19 May 2023 14:26:06 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 04863da4bb8d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 19 May 2023 14:26:06 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAVPR08MB9114.eurprd08.prod.outlook.com (2603:10a6:102:327::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 14:26:04 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 14:26:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20edb9ee-f651-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QrQ0UXtKTZTupXR+pzQbxmP6Cg1wi4yPPTDGyooj8dc=;
 b=xctBp+Vu8S0Si4NXcuIdCza8XP/L5HgT/nsSoRnlB8Dfc6bAApPuYPOhLlpBChcvezD6bM39PF7PILv2aItGYGad+CVSFkm1nrjky1uOYMOYJoJVv1aV1uZcirXUImvD/2a53Zz2UZ6fpqrr+3aV/BBRbbI6juqSJZtI/KSJu2s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: a8e1e1f11282b4b4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TjcmLSWUR85poPa/0NjIWp3Yo7nU699bejpJkoPglEEMtItgyCJOtu6u4JxkWiwhKpfNUVRT2BqYADgJkzQ55ZCaErl4yTCH3M8Eg8Q4rQJqB30LCWinfpxpjg5JFtxA6FSASU/2e2VQuQ+h8KPzqM1BhSPDXI5dK/ZXLfyvX9jk1QrNghB6XNO1TgbbLoC7lECMTugcfIFGbGvQAmnQg/60tN37C4KFXKi+ubDAvaIsy4HNKDKCGwL+U6EBjLVWAR98HnHTy2a+qT6sOVMKmMW5+XyE2XVIGJt4ooKUPAiex/QSvAz4QqNUlTSFwVPGLwctAhz7TVLG2ZyU79snlA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QrQ0UXtKTZTupXR+pzQbxmP6Cg1wi4yPPTDGyooj8dc=;
 b=ZWDcyLZ9rUTkq1y3mJlWuGud+85spQOd+rcbPUeY05BHYLZLVo9Q8QGz5RRrdgFlVebnyY5EIxrvV1mivy7alMv/p/AZIHgNpDeZkKSiWp4WLy12M9412XO9ERXgdFNGKkFcMsejxX/nJNs8JllzfPROj9dkosP8PHz5e66yBmEZEzGoHQAFatWTuZH14aL22ho4D60vL44klc7rT0+Ex6EjSkxSuzQsMudnT9b7N8O6rLCVXRp9A4l14u+E7L6NWe3CRIZ3PELO4IwNJEvBlEeUud1YCDywmluOOUqGPM/frHWkPtXOdTYGRpt70rS1Cg+UI+2DWmWqB68Xg0T5SQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QrQ0UXtKTZTupXR+pzQbxmP6Cg1wi4yPPTDGyooj8dc=;
 b=xctBp+Vu8S0Si4NXcuIdCza8XP/L5HgT/nsSoRnlB8Dfc6bAApPuYPOhLlpBChcvezD6bM39PF7PILv2aItGYGad+CVSFkm1nrjky1uOYMOYJoJVv1aV1uZcirXUImvD/2a53Zz2UZ6fpqrr+3aV/BBRbbI6juqSJZtI/KSJu2s=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Index: AQHZdnKBJmFHv6Wq8kSjvXctD+zlI69f6zCAgAHjioA=
Date: Fri, 19 May 2023 14:26:01 +0000
Message-ID: <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
In-Reply-To: <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAVPR08MB9114:EE_|DBAEUR03FT006:EE_|GV2PR08MB9327:EE_
X-MS-Office365-Filtering-Correlation-Id: c0f82d48-bf5a-49e5-30ec-08db587502d4
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <7FAB04BC7E6600429F233353EA9EF6E8@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB9114
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8c2aa960-c821-4b9a-c53b-08db5874f8e4
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 14:26:18.4935
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c0f82d48-bf5a-49e5-30ec-08db587502d4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB9327


> On 18 May 2023, at 10:35, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> Sorry for jumping late in the review.

Hi Julien,

Thank you for taking the time to review.

>> 
>>   /*
>>    * Comment from Linux:
>>    * Userspace may perform DC ZVA instructions. Mismatched block sizes
>> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
>> new file mode 100644
>> index 000000000000..4d1549344733
>> --- /dev/null
>> +++ b/xen/arch/arm/arm64/sve-asm.S
>> @@ -0,0 +1,48 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * Arm SVE assembly routines
>> + *
>> + * Copyright (C) 2022 ARM Ltd.
>> + *
>> + * Some macros and instruction encoding in this file are taken from linux 6.1.1,
>> + * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
>> + * version.
> AFAICT, the only modified version is _sve_rdvl, but it is not clear to me why we would want to have a modified version?
> 
> I am asking this because without an explanation, it would be difficult to know how to re-sync the code with Linux.

In this patch the macros are exactly equal to Linux, apart from the coding style that uses spaces instead of tabs.
I was not expecting to keep them in sync, as they seem unlikely to change soon; let me know if I need to
also use tabs and be 100% equal to Linux.

The following macros, coming in patch 5, are equal apart from sve_save/sve_load, which are different because
of the construction differences between the storage buffers here and in Linux. If you want, I can put a comment
on them to explain this difference in patch 5.

>> 
>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>> new file mode 100644
>> index 000000000000..6f3fb368c59b
>> --- /dev/null
>> +++ b/xen/arch/arm/arm64/sve.c
>> @@ -0,0 +1,50 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
> 
> Above, you are using GPL-2.0-only, but here GPL-2.0. We favor the former now. Happy to deal with it on commit if there is nothing else to address.

No problem, I will fix it in the next push.

> 
>> +/*
>> + * Arm SVE feature code
>> + *
>> + * Copyright (C) 2022 ARM Ltd.
>> + */
>> +
>> +#include <xen/types.h>
>> +#include <asm/arm64/sve.h>
>> +#include <asm/arm64/sysregs.h>
>> +#include <asm/processor.h>
>> +#include <asm/system.h>
>> +
>> +extern unsigned int sve_get_hw_vl(void);
>> +
>> +register_t compute_max_zcr(void)
>> +{
>> +    register_t cptr_bits = get_default_cptr_flags();
>> +    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
>> +    unsigned int hw_vl;
>> +
>> +    /* Remove trap for SVE resources */
>> +    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
>> +    isb();
>> +
>> +    /*
>> +     * Set the maximum SVE vector length, doing that we will know the VL
>> +     * supported by the platform, calling sve_get_hw_vl()
>> +     */
>> +    WRITE_SYSREG(zcr, ZCR_EL2);
> 
> From my reading of the Arm ARM (D19-6331, ARM DDI 0487J.a), a direct write to a system register would need to be followed by a context synchronization event (e.g. isb()) before the software can rely on the value.
> 
> In this situation, AFAICT, the instruction in sve_get_hw_vl() will use the content of ZCR_EL2. So don't we need an isb() here?

From what I’ve read in the manual for ZCR_ELx:

An indirect read of ZCR_EL2.LEN appears to occur in program order relative to a direct write of
the same register, without the need for explicit synchronization

I’ve interpreted it as “there is no need to sync before write”, and I’ve looked into Linux, where no
synchronisation mechanism appears after a write to that register; but if I am wrong I can certainly
add an isb if you prefer.

> 
>> +
>> +    /*
>> +     * Read the maximum VL, which could be lower than what we imposed before,
>> +     * hw_vl contains VL in bytes, multiply it by 8 to use vl_to_zcr() later
>> +     */
>> +    hw_vl = sve_get_hw_vl() * 8U;
>> +
>> +    /* Restore CPTR_EL2 */
>> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
>> +    isb();
>> +
>> +    return vl_to_zcr(hw_vl);
>> +}
>> +
>> +/* Takes a vector length in bits and returns the ZCR_ELx encoding */
>> +register_t vl_to_zcr(unsigned int vl)
>> +{
>> +    ASSERT(vl > 0);
>> +    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
>> +}
> 
> Missing the emacs magic blocks at the end.

I’ll add them.

> 
>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>> index c4ec38bb2554..83b84368f6d5 100644
>> --- a/xen/arch/arm/cpufeature.c
>> +++ b/xen/arch/arm/cpufeature.c
>> @@ -9,6 +9,7 @@
>>  #include <xen/init.h>
>>  #include <xen/smp.h>
>>  #include <xen/stop_machine.h>
>> +#include <asm/arm64/sve.h>
>>  #include <asm/cpufeature.h>
>>  
>>  DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>> @@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
>>      c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
>>  
>> +    if ( cpu_has_sve )
>> +        c->zcr64.bits[0] = compute_max_zcr();
>> +
>>      c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
>>      c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
>> @@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
>>      guest_cpuinfo.pfr64.mpam = 0;
>>      guest_cpuinfo.pfr64.mpam_frac = 0;
>>  
>> -    /* Hide SVE as Xen does not support it */
>> +    /* Hide SVE by default to the guests */
>>      guest_cpuinfo.pfr64.sve = 0;
>>      guest_cpuinfo.zfr64.bits[0] = 0;
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index d8ef6501ff8e..0350d8c61ed8 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -181,9 +181,6 @@ static void ctxt_switch_to(struct vcpu *n)
>>      /* VGIC */
>>      gic_restore_state(n);
>>  
>> -    /* VFP */
>> -    vfp_restore_state(n);
>> -
> 
> At the moment ctxt_switch_to() is (mostly?) the reverse of ctxt_switch_from(). But with this change, you are going to break it.
> 
> I would really prefer if the existing convention stays because it helps to confirm that we didn't miss bits in the restore code.
> 
> So if you want to move vfp_restore_state() later, then please move vfp_save_state() earlier in ctxt_switch_from().

Ok, I will move vfp_save_state earlier, and ...

> 
> 
>>      /* XXX MPU */
>>  
>>      /* Fault Status */
>> @@ -234,6 +231,7 @@ static void ctxt_switch_to(struct vcpu *n)
>>      p2m_restore_state(n);
>>  
>>      /* Control Registers */
>> +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
> 
> I would prefer if this were called closer to vfp_restore_state(), so the dependency between the two is easier to spot.
> 
>>      WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
>>  
>>      /*
>> @@ -258,6 +256,9 @@ static void ctxt_switch_to(struct vcpu *n)
>>  #endif
>>      isb();
>>  
>> +    /* VFP */
> 
> Please document in the code that vfp_restore_state() has to be called after CPTR_EL2 + a synchronization event.
> 
> Similar documentation on top of at least CPTR_EL2 and possibly isb(). This would help if we need to re-order the code in the future.

I will put comments on top of CPTR_EL2 and vfp_restore_state to explain the sequence and the synchronisation.

> 
> 
>> +    vfp_restore_state(n);
>> +
>>      /* CP 15 */
>>      WRITE_SYSREG(n->arch.csselr, CSSELR_EL1);
>> @@ -548,6 +549,8 @@ int arch_vcpu_create(struct vcpu *v)
>>      v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>>  
>> +    v->arch.cptr_el2 = get_default_cptr_flags();
>> +
>>      v->arch.hcr_el2 = get_default_hcr_flags();
>>      v->arch.mdcr_el2 = HDCR_TDRA | HDCR_TDOSA | HDCR_TDA;
>> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
>> new file mode 100644
>> index 000000000000..144d2b1cc485
>> --- /dev/null
>> +++ b/xen/arch/arm/include/asm/arm64/sve.h
>> @@ -0,0 +1,43 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
> 
> Use GPL-2.0-only.

Ok

> 
>> +/*
>> + * Arm SVE feature code
>> + *
>> + * Copyright (C) 2022 ARM Ltd.
>> + */
>> +
>> +#ifndef _ARM_ARM64_SVE_H
>> +#define _ARM_ARM64_SVE_H
>> +
>> +#define SVE_VL_MAX_BITS (2048U)
> 
> NIT: The parentheses are unnecessary and we don't tend to add them in Xen.

Ok

> 
>> +
>> +/* Vector length must be multiple of 128 */
>> +#define SVE_VL_MULTIPLE_VAL (128U)
> 
> NIT: The parentheses are unnecessary

Ok

> 
>> +
>> +#ifdef CONFIG_ARM64_SVE
>> +
>> +register_t compute_max_zcr(void);
>> +register_t vl_to_zcr(unsigned int vl);
>> +
>> +#else /* !CONFIG_ARM64_SVE */
>> +
>> +static inline register_t compute_max_zcr(void)
>> +{
> 
> Is this meant to be called when SVE is not enabled? If not, then please add ASSERT_UNREACHABLE().

I’ll add it.

> 
>> +    return 0;
>> +}
>> +
>> +static inline register_t vl_to_zcr(unsigned int vl)
>> +{
> 
> Is this meant to be called when SVE is not enabled? If not, then please add ASSERT_UNREACHABLE().

It seems that for this patch this was unneeded; maybe some change from v1 led to that. I will remove this.

> 
>> +    return 0;
>> +}
>> +
>> +#endif /* CONFIG_ARM64_SVE */
>> +
>> +#endif /* _ARM_ARM64_SVE_H */
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
>> index 463899951414..4cabb9eb4d5e 100644
>> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
>> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
>> @@ -24,6 +24,7 @@
>>  #define ICH_EISR_EL2              S3_4_C12_C11_3
>>  #define ICH_ELSR_EL2              S3_4_C12_C11_5
>>  #define ICH_VMCR_EL2              S3_4_C12_C11_7
>> +#define ZCR_EL2                   S3_4_C1_C2_0
>>  
>>  #define __LR0_EL2(x)              S3_4_C12_C12_ ## x
>>  #define __LR8_EL2(x)              S3_4_C12_C13_ ## x
>> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
>> index c62cf6293fd6..6d703e051906 100644
>> --- a/xen/arch/arm/include/asm/cpufeature.h
>> +++ b/xen/arch/arm/include/asm/cpufeature.h
>> @@ -32,6 +32,12 @@
>>  #define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
>>  #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
>>  
>> +#ifdef CONFIG_ARM64_SVE
>> +#define cpu_has_sve       (boot_cpu_feature64(sve) == 1)
>> +#else
>> +#define cpu_has_sve       (0)
> 
> NIT: The parentheses are unnecessary

Ok

> 
>> +#endif
>> +
>>  #ifdef CONFIG_ARM_32
>>  #define cpu_has_gicv3     (boot_cpu_feature32(gic) >= 1)
>>  #define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
>> @@ -323,6 +329,14 @@ struct cpuinfo_arm {
>>          };
>>      } isa64;
>>  
>> +    union {
>> +        register_t bits[1];
>> +        struct {
>> +            unsigned long len:4;
>> +            unsigned long __res0:60;
>> +        };
>> +    } zcr64;
>> +
>>      struct {
>>          register_t bits[1];
>>      } zfr64;
>> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
>> index 2a51f0ca688e..e776ee704b7d 100644
>> --- a/xen/arch/arm/include/asm/domain.h
>> +++ b/xen/arch/arm/include/asm/domain.h
>> @@ -190,6 +190,7 @@ struct arch_vcpu
>>      register_t tpidrro_el0;
>>  
>>      /* HYP configuration */
>> +    register_t cptr_el2;
>>      register_t hcr_el2;
>>      register_t mdcr_el2;
>> diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
>> index 54f253087718..bc683334125c 100644
>> --- a/xen/arch/arm/include/asm/processor.h
>> +++ b/xen/arch/arm/include/asm/processor.h
>> @@ -582,6 +582,8 @@ void do_trap_guest_serror(struct cpu_user_regs *regs);
>>  
>>  register_t get_default_hcr_flags(void);
>>  
>> +register_t get_default_cptr_flags(void);
>> +
>>  /*
>>   * Synchronize SError unless the feature is selected.
>>   * This is relying on the SErrors are currently unmasked.
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 6f9f4d8c8a15..4191a766767a 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -135,10 +135,11 @@ static void __init processor_id(void)
>>             cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
>>             cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
>>             cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
>> -    printk("    Extensions:%s%s%s\n",
>> +    printk("    Extensions:%s%s%s%s\n",
>>             cpu_has_fp ? " FloatingPoint" : "",
>>             cpu_has_simd ? " AdvancedSIMD" : "",
>> -           cpu_has_gicv3 ? " GICv3-SysReg" : "");
>> +           cpu_has_gicv3 ? " GICv3-SysReg" : "",
>> +           cpu_has_sve ? " SVE" : "");
>>  
>>      /* Warn user if we find unknown floating-point features */
>>      if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index d40c331a4e9c..c0611c2ef6a5 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -93,6 +93,21 @@ register_t get_default_hcr_flags(void)
>>               HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
>>  }
>>  
>> +register_t get_default_cptr_flags(void)
>> +{
>> +    /*
>> +     * Trap all coprocessor registers (0-13) except cp10 and
>> +     * cp11 for VFP.
>> +     *
>> +     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
>> +     *
>> +     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
>> +     * RES1, i.e. they would trap whether we did this write or not.
>> +     */
>> +    return  ((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
>> +             HCPTR_TTA | HCPTR_TAM);
>> +}
>> +
>>  static enum {
>>      SERRORS_DIVERSE,
>>      SERRORS_PANIC,
>> @@ -122,6 +137,7 @@ __initcall(update_serrors_cpu_caps);
>>  
>>  void init_traps(void)
>>  {
>> +    register_t cptr_bits = get_default_cptr_flags();
> 
> Coding style: Please add a newline after the declaration. That said...
> 
>>      /*
>>       * Setup Hyp vector base. Note they might get updated with the
>>       * branch predictor hardening.
>> @@ -135,17 +151,7 @@ void init_traps(void)
>>      /* Trap CP15 c15 used for implementation defined registers */
>>      WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
>>  
>> -    /* Trap all coprocessor registers (0-13) except cp10 and
>> -     * cp11 for VFP.
>> -     *
>> -     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
>> -     *
>> -     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
>> -     * RES1, i.e. they would trap whether we did this write or not.
>> -     */
>> -    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
>> -                 HCPTR_TTA | HCPTR_TAM,
>> -                 CPTR_EL2);
>> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
> 
> ... I would combine the two lines as the variable seems unnecessary.

I will combine them.

> 
>>  
>>      /*
>>       * Configure HCR_EL2 with the bare minimum to run Xen until a guest
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 19 14:27:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 14:27:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537111.835920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q014V-0008UG-Cd; Fri, 19 May 2023 14:27:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537111.835920; Fri, 19 May 2023 14:27:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q014V-0008U9-9X; Fri, 19 May 2023 14:27:07 +0000
Received: by outflank-mailman (input) for mailman id 537111;
 Fri, 19 May 2023 14:27:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C4tO=BI=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1q014U-0007z3-5f
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 14:27:06 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on2062e.outbound.protection.outlook.com
 [2a01:111:f400:7eab::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 38c1e270-f651-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 16:27:04 +0200 (CEST)
Received: from BN0PR04CA0021.namprd04.prod.outlook.com (2603:10b6:408:ee::26)
 by BY5PR12MB4193.namprd12.prod.outlook.com (2603:10b6:a03:20c::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 14:26:59 +0000
Received: from BN8NAM11FT071.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:ee:cafe::8f) by BN0PR04CA0021.outlook.office365.com
 (2603:10b6:408:ee::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend
 Transport; Fri, 19 May 2023 14:26:59 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT071.mail.protection.outlook.com (10.13.177.92) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.21 via Frontend Transport; Fri, 19 May 2023 14:26:59 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 09:26:59 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Fri, 19 May
 2023 09:26:58 -0500
Received: from [192.168.137.15] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Fri, 19 May 2023 09:26:56 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38c1e270-f651-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <0241103b-4c82-aad3-a7e9-d359d805c675@amd.com>
Date: Fri, 19 May 2023 10:26:51 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v3 3/6] iommu/arm: Introduce iommu_add_dt_pci_sideband_ids
 API
To: Jan Beulich <jbeulich@suse.com>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Paul Durrant
	<paul@xen.org>, Roger Pau Monné <roger.pau@citrix.com>,
	Rahul Singh <rahul.singh@arm.com>, Bertrand Marquis
	<bertrand.marquis@arm.com>, <xen-devel@lists.xenproject.org>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
 <20230518210658.66156-4-stewart.hildebrand@amd.com>
 <d9618948-d4da-cfea-ad80-d130dc50d3cb@suse.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <d9618948-d4da-cfea-ad80-d130dc50d3cb@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT071:EE_|BY5PR12MB4193:EE_
X-MS-Office365-Filtering-Correlation-Id: 317cf9b3-11f5-429e-9350-08db58751b30
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 14:26:59.3174
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 317cf9b3-11f5-429e-9350-08db58751b30
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT071.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4193

On 5/19/23 04:45, Jan Beulich wrote:
> On 18.05.2023 23:06, Stewart Hildebrand wrote:
>> --- a/xen/include/xen/iommu.h
>> +++ b/xen/include/xen/iommu.h
>> @@ -26,6 +26,7 @@
>>  #include <xen/spinlock.h>
>>  #include <public/domctl.h>
>>  #include <public/hvm/ioreq.h>
>> +#include <asm/acpi.h>
>>  #include <asm/device.h>
> 
> I view this as problematic: It'll require all architectures with an
> IOMMU implementation to have an asm/acpi.h. I think this wants to go
> inside an "#ifdef CONFIG_ACPI" and then ...

Will do

>> @@ -228,12 +230,25 @@ int iommu_release_dt_devices(struct domain *d);
>>   *      (IOMMU is not enabled/present or device is not connected to it).
>>   */
>>  int iommu_add_dt_device(struct dt_device_node *np);
>> +int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev);
>>
>>  int iommu_do_dt_domctl(struct xen_domctl *, struct domain *,
>>                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
>>
>> +#else /* !HAS_DEVICE_TREE */
>> +static inline int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev)
>> +{
>> +    return 0;
>> +}
>>  #endif /* HAS_DEVICE_TREE */
>>
>> +static inline int iommu_add_pci_sideband_ids(struct pci_dev *pdev)
>> +{
>> +    if ( acpi_disabled )
> 
> ... the same #ifdef would be added around this if().

Okay. I will take care to avoid an unreachable return 0; by introducing a local variable:

static inline int iommu_add_pci_sideband_ids(struct pci_dev *pdev)
{
    int ret = 0;
#ifdef CONFIG_ACPI
    if ( acpi_disabled )
#endif
        ret = iommu_add_dt_pci_sideband_ids(pdev);
    return ret;
}

> All of this of course only if this is deemed enough to allow co-existance
> of DT and ACPI (which I'm not convinced it is, but I don't know enough
> about DT and e.g. possible mixed configurations).
> 
> Jan

On ARM, we dynamically check at boot for the existence of a valid device tree and set acpi_disabled accordingly. I did some basic testing on ARM with both CONFIG_ACPI=y and "# CONFIG_ACPI is not set". My understanding is that this works, and that it should allow ACPI support on ARM to be implemented in the future.

>> +        return iommu_add_dt_pci_sideband_ids(pdev);
>> +    return 0;
>> +}
>> +
>>  struct page_info;
>>
>>  /*
> 


From xen-devel-bounces@lists.xenproject.org Fri May 19 14:38:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 14:38:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537120.835936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01Fl-0001lD-GZ; Fri, 19 May 2023 14:38:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537120.835936; Fri, 19 May 2023 14:38:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01Fl-0001l6-De; Fri, 19 May 2023 14:38:45 +0000
Received: by outflank-mailman (input) for mailman id 537120;
 Fri, 19 May 2023 14:38:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9pCJ=BI=citrix.com=prvs=496ec590c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q01Fj-0001ky-D1
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 14:38:43 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d87db198-f652-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 16:38:41 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 May 2023 10:38:30 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BY5PR03MB5252.namprd03.prod.outlook.com (2603:10b6:a03:224::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 14:38:24 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 14:38:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d87db198-f652-11ed-8611-37d641c3527e
Message-ID: <a8013bb5-b0bb-6e42-85de-2e12d7b6f83c@citrix.com>
Date: Fri, 19 May 2023 15:38:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/4] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-5-andrew.cooper3@citrix.com>
 <1d06869b-1633-7794-c5c9-92d9c0885f19@suse.com>
 <42cd2479-1eba-11c7-26d8-441045c230ed@citrix.com>
 <fb95d033-7a71-7cc1-bb8f-ee2a74d1c5cf@suse.com>
In-Reply-To: <fb95d033-7a71-7cc1-bb8f-ee2a74d1c5cf@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO3P265CA0001.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:bb::6) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 19/05/2023 7:00 am, Jan Beulich wrote:
> On 17.05.2023 18:35, Andrew Cooper wrote:
>> On 17/05/2023 3:47 pm, Jan Beulich wrote:
>>> On 16.05.2023 16:53, Andrew Cooper wrote:
>>>> @@ -401,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
>>>>          cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
>>>>      if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
>>>>          cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
>>>> +    if ( cpu_has_arch_caps )
>>>> +        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
>>> Why do you read the MSR again? I would have expected this to come out
>>> of raw_cpu_policy now (and incrementally the CPUID pieces as well,
>>> later on).
>> Consistency with the surrounding logic.
> I view this as relevant only when the code invoking CPUID directly is
> intended to stay.

Quite the contrary.  It stays because this patch, and changing the
semantics of the print block are unrelated things and should not be
mixed together.

>> Also because the raw and host policies don't get sorted until much later
>> in boot.
> identify_cpu(), which invokes init_host_cpu_policies(), is called
> ahead of init_speculation_mitigations(), isn't it?

What is init_host_cpu_policies() ?

I had a plan for what it was going to be, had the prior MSR work not ground
to a halt, but it's a bit too late for that now.

(To answer the question properly, no the policies aren't set up until
just before building dom0, and that's not something that is trivial to
change.)

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 19 14:43:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 14:43:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537124.835946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01KU-0003Dh-3i; Fri, 19 May 2023 14:43:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537124.835946; Fri, 19 May 2023 14:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01KT-0003Da-V7; Fri, 19 May 2023 14:43:37 +0000
Received: by outflank-mailman (input) for mailman id 537124;
 Fri, 19 May 2023 14:43:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q01KT-0003DQ-AP; Fri, 19 May 2023 14:43:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q01KT-0005bl-2t; Fri, 19 May 2023 14:43:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q01KS-0006UY-OA; Fri, 19 May 2023 14:43:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q01KS-00015J-Nh; Fri, 19 May 2023 14:43:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180714-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180714: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=90404c53682f464b4a26efd618887dc336d9da80
X-Osstest-Versions-That:
    libvirt=5ff58a0ce7a6ad452919a86a05e27427ccf1f27b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 14:43:36 +0000

flight 180714 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180714/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180698
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180698
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180698
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              90404c53682f464b4a26efd618887dc336d9da80
baseline version:
 libvirt              5ff58a0ce7a6ad452919a86a05e27427ccf1f27b

Last test of basis   180698  2023-05-18 04:18:52 Z    1 days
Testing same since   180714  2023-05-19 04:18:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiang Jiacheng <jiangjiacheng@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/libvirt.git
   5ff58a0ce7..90404c5368  90404c53682f464b4a26efd618887dc336d9da80 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 19 14:46:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 14:46:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537132.835956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01NI-0003s7-Jt; Fri, 19 May 2023 14:46:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537132.835956; Fri, 19 May 2023 14:46:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01NI-0003s0-GQ; Fri, 19 May 2023 14:46:32 +0000
Received: by outflank-mailman (input) for mailman id 537132;
 Fri, 19 May 2023 14:46:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q01NI-0003rs-4m
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 14:46:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q01NH-0005eB-Lk; Fri, 19 May 2023 14:46:31 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.7.127]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q01NH-0007Wl-GI; Fri, 19 May 2023 14:46:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=evzsQn0cxznW7Q5CwHPtUbJ0lQbK8eu+IxdLYMEyEtE=; b=pXIUUsNIoFpv1AkWZGrdSmaDQt
	yKNR2Q1h1GEsQi0cHKiIF0gxKmX5iKfeWqX4AddUtBefWhelsYgrTag6lOJW3oHRBCS7TWHE6fucU
	Ie3i7804o7V7YOUbpqFYUlrHwTK/Yx3ZpJwTGs5J8AR7l4rrmNx0qzrh30BkbeP6yfwg=;
Message-ID: <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
Date: Fri, 19 May 2023 15:46:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
 <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 19/05/2023 15:26, Luca Fancellu wrote:
>> On 18 May 2023, at 10:35, Julien Grall <julien@xen.org> wrote:
>>>    /*
>>>     * Comment from Linux:
>>>     * Userspace may perform DC ZVA instructions. Mismatched block sizes
>>> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
>>> new file mode 100644
>>> index 000000000000..4d1549344733
>>> --- /dev/null
>>> +++ b/xen/arch/arm/arm64/sve-asm.S
>>> @@ -0,0 +1,48 @@
>>> +/* SPDX-License-Identifier: GPL-2.0-only */
>>> +/*
>>> + * Arm SVE assembly routines
>>> + *
>>> + * Copyright (C) 2022 ARM Ltd.
>>> + *
>>> + * Some macros and instruction encoding in this file are taken from linux 6.1.1,
>>> + * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
>>> + * version.
>> AFAICT, the only modified version is _sve_rdvl, but it is not clear to me why we would want to have a modified version?
>>
>> I am asking this because without an explanation, it would be difficult to know how to re-sync the code with Linux.
> 
> In this patch the macros are exactly equal to Linux, apart from the coding style that uses spaces instead of
> tabs. I was not expecting to keep them in sync, as they seem unlikely to change soon; let me know if I should
> also use tabs and be 100% identical to Linux.

The file is small enough, so I think it would be OK if this is converted 
to the Xen coding style.

> 
> The following macros, coming in patch 5, are equal apart from sve_save/sve_load, which are different because
> of the construction differences between the storage buffers here and in Linux. If you want, I can put a comment
> on them to explain this difference in patch 5.

That would be good. Also, can you update 
arch/arm/README.LinuxPrimitives? The file lists the primitives imported 
from Linux and when they were imported.

> 
>>>
>>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>>> new file mode 100644
>>> index 000000000000..6f3fb368c59b
>>> --- /dev/null
>>> +++ b/xen/arch/arm/arm64/sve.c
>>> @@ -0,0 +1,50 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>
>> Above, you are using GPL-2.0-only, but here GPL-2.0. We favor the former now. Happy to deal with it on commit if there is nothing else to address.
> 
> No problem, I will fix in the next push
> 
>>
>>> +/*
>>> + * Arm SVE feature code
>>> + *
>>> + * Copyright (C) 2022 ARM Ltd.
>>> + */
>>> +
>>> +#include <xen/types.h>
>>> +#include <asm/arm64/sve.h>
>>> +#include <asm/arm64/sysregs.h>
>>> +#include <asm/processor.h>
>>> +#include <asm/system.h>
>>> +
>>> +extern unsigned int sve_get_hw_vl(void);
>>> +
>>> +register_t compute_max_zcr(void)
>>> +{
>>> +    register_t cptr_bits = get_default_cptr_flags();
>>> +    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
>>> +    unsigned int hw_vl;
>>> +
>>> +    /* Remove trap for SVE resources */
>>> +    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
>>> +    isb();
>>> +
>>> +    /*
>>> +     * Set the maximum SVE vector length, doing that we will know the VL
>>> +     * supported by the platform, calling sve_get_hw_vl()
>>> +     */
>>> +    WRITE_SYSREG(zcr, ZCR_EL2);
>>
>>  From my reading of the Arm ARM (D19-6331, ARM DDI 0487J.a), a direct write to a system register would need to be followed by a context synchronization event (e.g. isb()) before the software can rely on the value.
>>
>> In this situation, AFAICT, the instruction in sve_get_hw_vl() will use the content of ZCR_EL2. So don't we need an ISB() here?
> 
>  From what I’ve read in the manual for ZCR_ELx:
> 
> An indirect read of ZCR_EL2.LEN appears to occur in program order relative to a direct write of
> the same register, without the need for explicit synchronization
> 
> I’ve interpreted it as “there is no need to sync after the write”, and I’ve looked into Linux: there does not
> appear to be any synchronisation mechanism after a write to that register. But if I am wrong I can certainly
> add an isb if you prefer.

Ah, I was reading the generic section about synchronization and didn't 
realize there was a paragraph in the ZCR_EL2 section as well.

Reading the new section, I agree with your understanding. The isb() is 
not necessary.

So please ignore this comment :).
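[Editor's note: the vl_to_zcr() helper discussed above can be sketched on the host as follows. This is an illustrative model, not the Xen implementation; it assumes vl_to_zcr() maps a vector length in bits to the ZCR_ELx.LEN field, which the architecture defines so that VL = (LEN + 1) * 128 bits.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch, not the Xen implementation: ZCR_ELx.LEN constrains
 * the SVE vector length to (LEN + 1) * 128 bits, so a helper such as
 * vl_to_zcr() can map a VL in bits to the LEN field value.
 */
#define SVE_VL_MIN_BITS 128U
#define SVE_VL_MAX_BITS 2048U

static uint64_t vl_to_zcr_sketch(unsigned int vl_bits)
{
    assert(vl_bits >= SVE_VL_MIN_BITS && vl_bits <= SVE_VL_MAX_BITS);
    return (vl_bits / SVE_VL_MIN_BITS) - 1; /* LEN field, bits [3:0] */
}
```

Requesting LEN = 15 (VL = 2048 bits) and then reading the implemented vector length back is what compute_max_zcr() does with the real registers: hardware clamps to the largest VL it supports.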

>>>       /* XXX MPU */
>>>         /* Fault Status */
>>> @@ -234,6 +231,7 @@ static void ctxt_switch_to(struct vcpu *n)
>>>       p2m_restore_state(n);
>>>         /* Control Registers */
>>> +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>
>> I would prefer if this was called closer to vfp_restore_state(). So the dependency between the two is easier to spot.
>>
>>>       WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
>>>         /*
>>> @@ -258,6 +256,9 @@ static void ctxt_switch_to(struct vcpu *n)
>>>   #endif
>>>       isb();
>>>   +    /* VFP */
>>
>> Please document in the code that vfp_restore_state() has to be called after the CPTR_EL2 write + a synchronization event.
>>
>> Similar documentation on top of at least CPTR_EL2 and possibly isb(). This would help if we need to re-order the code in the future.
> 
> I will put comments on top of CPTR_EL2 and vfp_restore_state to explain the sequence and the synchronisation.

Just to clarify, does this mean you will keep CPTR_EL2 where it 
currently is? (See my comment just above in the previous e-mail)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 19 14:52:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 14:52:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537138.835972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01SX-0005M5-6j; Fri, 19 May 2023 14:51:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537138.835972; Fri, 19 May 2023 14:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01SX-0005Ly-3s; Fri, 19 May 2023 14:51:57 +0000
Received: by outflank-mailman (input) for mailman id 537138;
 Fri, 19 May 2023 14:51:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q01SW-0005Lq-3S
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 14:51:56 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20627.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::627])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b10c1b3e-f654-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 16:51:53 +0200 (CEST)
Received: from DU2PR04CA0030.eurprd04.prod.outlook.com (2603:10a6:10:3b::35)
 by DB9PR08MB6442.eurprd08.prod.outlook.com (2603:10a6:10:259::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Fri, 19 May
 2023 14:51:47 +0000
Received: from DBAEUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3b:cafe::a2) by DU2PR04CA0030.outlook.office365.com
 (2603:10a6:10:3b::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 14:51:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT062.mail.protection.outlook.com (100.127.142.64) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.21 via Frontend Transport; Fri, 19 May 2023 14:51:47 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Fri, 19 May 2023 14:51:47 +0000
Received: from f06b72459915.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 09040BA9-4DA2-49E7-A266-D1BD6F6464A1.1; 
 Fri, 19 May 2023 14:51:36 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f06b72459915.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 19 May 2023 14:51:35 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DB4PR08MB9190.eurprd08.prod.outlook.com (2603:10a6:10:3fd::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 14:51:33 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 14:51:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b10c1b3e-f654-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kitbsA1zlG6W+qc+6IX2A383irQKmXMjPHJcr6b9TrI=;
 b=n4XGIPmrJXv5T0UyjHLRwKr7yczOucGkB1Kkuk0k+TYtbetGucbhmHJfLyQ5l0ZPF0Mc/VyJxkErJBo0L3KJ6wcL67U5K9qbXo3PQfQmfvBZdovKVDdBElFGzAF1XtvAVGQRuyJF7qI1lPvIv9WtwcbLnncvAojhBBAqadPLVNQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7b2e81bb1b7df554
X-CR-MTA-TID: 64aa7808
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Index: AQHZdnKBJmFHv6Wq8kSjvXctD+zlI69f6zCAgAHjioCAAAXEgIAAAV4A
Date: Fri, 19 May 2023 14:51:32 +0000
Message-ID: <AAE00F7D-612B-47AC-82F9-BEEE9CB17804@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
 <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
 <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
In-Reply-To: <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DB4PR08MB9190:EE_|DBAEUR03FT062:EE_|DB9PR08MB6442:EE_
X-MS-Office365-Filtering-Correlation-Id: 6fbb1383-819f-4613-18af-08db58789234
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <5D3878CDF557084381970A3B5FB0AA01@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB9190
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8e6433ab-ac20-4f33-7134-08db5878898f
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 14:51:47.5108
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6fbb1383-819f-4613-18af-08db58789234
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6442



> On 19 May 2023, at 15:46, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 19/05/2023 15:26, Luca Fancellu wrote:
>>> On 18 May 2023, at 10:35, Julien Grall <julien@xen.org> wrote:
>>>>   /*
>>>>    * Comment from Linux:
>>>>    * Userspace may perform DC ZVA instructions. Mismatched block sizes
>>>> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
>>>> new file mode 100644
>>>> index 000000000000..4d1549344733
>>>> --- /dev/null
>>>> +++ b/xen/arch/arm/arm64/sve-asm.S
>>>> @@ -0,0 +1,48 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0-only */
>>>> +/*
>>>> + * Arm SVE assembly routines
>>>> + *
>>>> + * Copyright (C) 2022 ARM Ltd.
>>>> + *
>>>> + * Some macros and instruction encoding in this file are taken from linux 6.1.1,
>>>> + * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
>>>> + * version.
>>> AFAICT, the only modified version is _sve_rdvl, but it is not clear to me why we would want to have a modified version?
>>> 
>>> I am asking this because without an explanation, it would be difficult to know how to re-sync the code with Linux.
>> In this patch the macros are exactly equal to Linux, apart from the coding style that uses spaces instead of tabs,
>> I was not expecting to keep them in sync as they seems to be not prone to change soon, let me know if I need to
>> use also tabs and be 100% equal to Linux.
> 
> The file is small enough, so I think it would be OK if this is converted to the Xen coding style.
> 
>> The following macros that are coming in patch 5 are equal apart from sve_save/sve_load, that are different because
>> of the construction differences between the storage buffers here and in Linux, if you want I can put a comment on them
>> to explain this difference in patch 5
> 
> That would be good. Also, can you update arch/arm/README.LinuxPrimitives? The file is listing primitives imported from Linux and when.

Sure I will

> 
>>>> 
>>>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>>>> new file mode 100644
>>>> index 000000000000..6f3fb368c59b
>>>> --- /dev/null
>>>> +++ b/xen/arch/arm/arm64/sve.c
>>>> @@ -0,0 +1,50 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> 
>>> Above, you are using GPL-2.0-only, but here GPL-2.0. We favor the former now. Happy to deal it on commit if there is nothing else to address.
>> No problem, I will fix in the next push
>>> 
>>>> +/*
>>>> + * Arm SVE feature code
>>>> + *
>>>> + * Copyright (C) 2022 ARM Ltd.
>>>> + */
>>>> +
>>>> +#include <xen/types.h>
>>>> +#include <asm/arm64/sve.h>
>>>> +#include <asm/arm64/sysregs.h>
>>>> +#include <asm/processor.h>
>>>> +#include <asm/system.h>
>>>> +
>>>> +extern unsigned int sve_get_hw_vl(void);
>>>> +
>>>> +register_t compute_max_zcr(void)
>>>> +{
>>>> +    register_t cptr_bits = get_default_cptr_flags();
>>>> +    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
>>>> +    unsigned int hw_vl;
>>>> +
>>>> +    /* Remove trap for SVE resources */
>>>> +    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
>>>> +    isb();
>>>> +
>>>> +    /*
>>>> +     * Set the maximum SVE vector length, doing that we will know the VL
>>>> +     * supported by the platform, calling sve_get_hw_vl()
>>>> +     */
>>>> +    WRITE_SYSREG(zcr, ZCR_EL2);
>>> 
>>> From my reading of the Arm (D19-6331, ARM DDI 0487J.a), a direct write to a system register would need to be followed by an context synchronization event (e.g. isb()) before the software can rely on the value.
>>> 
>>> In this situation, AFAICT, the instruciton in sve_get_hw_vl() will use the content of ZCR_EL2. So don't we need an ISB() here?
>> From what I’ve read in the manual for ZCR_ELx:
>> An indirect read of ZCR_EL2.LEN appears to occur in program order relative to a direct write of
>> the same register, without the need for explicit synchronization
>> I’ve interpreted it as “there is no need to sync before write” and I’ve looked into Linux and it does not
>> Appear any synchronisation mechanism after a write to that register, but if I am wrong I can for sure
>> add an isb if you prefer.
> 
> Ah, I was reading the generic section about synchronization and didn't realize there was a paragraph in the ZCR_EL2 section as well.
> 
> Reading the new section, I agree with your understanding. The isb() is not necessary.
> 
> So please ignore this comment :).
> 
>>>>      /* XXX MPU */
>>>>        /* Fault Status */
>>>> @@ -234,6 +231,7 @@ static void ctxt_switch_to(struct vcpu *n)
>>>>      p2m_restore_state(n);
>>>>        /* Control Registers */
>>>> +    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>> 
>>> I would prefer if this called closer to vfp_restore_state(). So the dependency between the two is easier to spot.
>>> 
>>>>      WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
>>>>        /*
>>>> @@ -258,6 +256,9 @@ static void ctxt_switch_to(struct vcpu *n)
>>>>  #endif
>>>>      isb();
>>>>  +    /* VFP */
>>> 
>>> Please document in the code that vfp_restore_state() have to be called after CPTR_EL2() + a synchronization event.
>>> 
>>> Similar docoumentation on top of at least CPTR_EL2 and possibly isb(). This would help if we need to re-order the code in the future.
>> I will put comments on top of CPTR_EL2 and vfp_restore_state to explain the sequence and the synchronisation.
> 
> Just to clarify, does this mean you will keep CPTR_EL2 where it currently is? (See my comment just above in the previous e-mail)

This is how I changed the code:

/* Control Registers */
/*
* CPTR_EL2 needs to be written before calling vfp_restore_state, a
* synchronization instruction is expected after the write (isb)
*/
WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);

/*
* This write to sysreg CONTEXTIDR_EL1 ensures we don't hit erratum
* #852523 (Cortex-A57) or #853709 (Cortex-A72).
* I.e DACR32_EL2 is not correctly synchronized.
*/
WRITE_SYSREG(n->arch.contextidr, CONTEXTIDR_EL1);
WRITE_SYSREG(n->arch.tpidr_el0, TPIDR_EL0);
WRITE_SYSREG(n->arch.tpidrro_el0, TPIDRRO_EL0);
WRITE_SYSREG(n->arch.tpidr_el1, TPIDR_EL1);

if ( is_32bit_domain(n->domain) && cpu_has_thumbee )
{
WRITE_SYSREG(n->arch.teecr, TEECR32_EL1);
WRITE_SYSREG(n->arch.teehbr, TEEHBR32_EL1);
}

#ifdef CONFIG_ARM_32
WRITE_CP32(n->arch.joscr, JOSCR);
WRITE_CP32(n->arch.jmcr, JMCR);
#endif
isb();

/* VFP - call vfp_restore_state after writing on CPTR_EL2 + isb */
vfp_restore_state(n);

Maybe I misunderstood your preference, do you want me to put the write to CPTR_EL2
right before the isb() that precedes vfp_restore_state?


> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 19 15:01:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 15:01:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537142.835982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01bA-0006tT-23; Fri, 19 May 2023 15:00:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537142.835982; Fri, 19 May 2023 15:00:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01b9-0006tM-Uw; Fri, 19 May 2023 15:00:51 +0000
Received: by outflank-mailman (input) for mailman id 537142;
 Fri, 19 May 2023 15:00:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q01b8-0006tG-V4
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 15:00:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q01b8-0005v5-Ew; Fri, 19 May 2023 15:00:50 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.7.127]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q01b8-00088W-7n; Fri, 19 May 2023 15:00:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=mIxbErU89H5ZUs45gpL/Zz35b8gcDd3Ef2SkjVi5JNg=; b=DMwDtvnVD1zpDPj4B3OhQwOVFu
	VeYEaLLAQkMHoHdLtgAvtyHmJJI4zCoXEpOPDavkHC9Ss+iBs59grGGdMzGGbjlQ3Rs7wG7ZrV10M
	GD9CFPLub7cHVZtmpiLaPFdhVPup25zRDuElFZ07fxVZzZ5mNJjadYva+TdCjKrhVljw=;
Message-ID: <b7bb99fb-c8d5-8852-9f35-3430a61d39b7@xen.org>
Date: Fri, 19 May 2023 16:00:48 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
 <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
 <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
 <AAE00F7D-612B-47AC-82F9-BEEE9CB17804@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <AAE00F7D-612B-47AC-82F9-BEEE9CB17804@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 19/05/2023 15:51, Luca Fancellu wrote:
> /* Control Registers */
> /*
> * CPTR_EL2 needs to be written before calling vfp_restore_state, a
> * synchronization instruction is expected after the write (isb)
> */
> WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
> WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
> 
> /*
> * This write to sysreg CONTEXTIDR_EL1 ensures we don't hit erratum
> * #852523 (Cortex-A57) or #853709 (Cortex-A72).
> * I.e DACR32_EL2 is not correctly synchronized.
> */
> WRITE_SYSREG(n->arch.contextidr, CONTEXTIDR_EL1);
> WRITE_SYSREG(n->arch.tpidr_el0, TPIDR_EL0);
> WRITE_SYSREG(n->arch.tpidrro_el0, TPIDRRO_EL0);
> WRITE_SYSREG(n->arch.tpidr_el1, TPIDR_EL1);
> 
> if ( is_32bit_domain(n->domain) && cpu_has_thumbee )
> {
> WRITE_SYSREG(n->arch.teecr, TEECR32_EL1);
> WRITE_SYSREG(n->arch.teehbr, TEEHBR32_EL1);
> }
> 
> #ifdef CONFIG_ARM_32
> WRITE_CP32(n->arch.joscr, JOSCR);
> WRITE_CP32(n->arch.jmcr, JMCR);
> #endif
> isb();
> 
> /* VFP - call vfp_restore_state after writing on CPTR_EL2 + isb */
> vfp_restore_state(n);
> 
> Maybe I misunderstood your preference, do you want me to put the write to CPTR_EL2
> right before the isb() that precedes vfp_restore_state?

Yes please. Unless there is a reason to keep it "far away". The comments 
look good to me.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 19 15:14:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 15:14:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537151.836000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01o8-00005w-8S; Fri, 19 May 2023 15:14:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537151.836000; Fri, 19 May 2023 15:14:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01o8-00005p-5l; Fri, 19 May 2023 15:14:16 +0000
Received: by outflank-mailman (input) for mailman id 537151;
 Fri, 19 May 2023 15:14:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q01o7-00005j-62
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 15:14:15 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on062a.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cfe74c93-f657-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 17:14:13 +0200 (CEST)
Received: from AM6P191CA0010.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8b::23)
 by AS8PR08MB9148.eurprd08.prod.outlook.com (2603:10a6:20b:57f::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 15:14:11 +0000
Received: from AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8b:cafe::3f) by AM6P191CA0010.outlook.office365.com
 (2603:10a6:209:8b::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 15:14:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT032.mail.protection.outlook.com (100.127.140.65) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.7 via Frontend Transport; Fri, 19 May 2023 15:14:10 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Fri, 19 May 2023 15:14:09 +0000
Received: from eb3788c04473.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 440772C1-C3E7-488C-9937-2BBDF32D0B2B.1; 
 Fri, 19 May 2023 15:13:58 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id eb3788c04473.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 19 May 2023 15:13:58 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM8PR08MB6545.eurprd08.prod.outlook.com (2603:10a6:20b:368::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 15:13:54 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 15:13:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfe74c93-f657-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nz/EAoLLiHuYxeS/yb49OuErXbPQ6y/GQYH2gzgWFyE=;
 b=NVPYi4uSQU+y3uKOmf8kdOuJSbaOF/e+3FJVvoQ62g4SlM2INO6xyhhnGXJLksS71d3SpXKtH2GaqnC6B6agdSUVygOpFApqYzkTsBYas/nr3ODr0fgR61zq+QU09RC5o0PzSE+e9NUhygov764wW+4q+Kr2yu3n240+ViLvQp0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5c1cffdfff0bf19a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RCWrhkqXqmkCGF5WPs78yifEPl1IAzea/YJ+I3NTYl9nJoRzeUWjA186ZfVixhde9+lh6qy0jylMw3FbJSltl9jU75J3vDz1b6AQl4CRl0cAq17FSTpg1LlyrTSbe8tn3NIisXh/JkS4tl/yTmkp4iRRG7KDGS/UanKwc+yv0PWS87dx7rvfiWsZLwhnFFrNlfutHJJsjmYO92/JRMwfhR8Mqf+5UOZ9WdAqygdze8qffZeoCy3lzAqOf4ffvfimNtMdOUxiWNrrM5Hw7noWW8Ir+Be9/uT7nPFj/Tx3AZA4l2pu+DFJUrhXS2A3uMcSftTkTKgmkb/bV31hFdQNcw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nz/EAoLLiHuYxeS/yb49OuErXbPQ6y/GQYH2gzgWFyE=;
 b=IZRR4i4wD1m2ZOPUoOi1NQLdSjQMgxsAoDEzIzMq2+O7BrdPqDr48C3IG3BmsUg//jgh/S5KC5mBfEmtwQPqtnQ7RWQsotNtWwksh75DcImaPeZIuYRuo6I7me8p+A94hf6+QD1/77GodI7paP84VX6AU5PHk0H8iHt2hZo451bccy9bkSP5ZBcEkgkdy8M/oX2zv8G0flUa3qvUi5V3aZIuHu0gXe7Gafw0qaO4iXgjdBV5XDuiu0PZrwUQGFF0ushP4qx1K5vDHQAwAU5qfB7/z9uRixUJFqH296sFhseL1Lv8P8qhHoiJnusI8VByOeH/FiH7EiQkZPPoEAlEBg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nz/EAoLLiHuYxeS/yb49OuErXbPQ6y/GQYH2gzgWFyE=;
 b=NVPYi4uSQU+y3uKOmf8kdOuJSbaOF/e+3FJVvoQ62g4SlM2INO6xyhhnGXJLksS71d3SpXKtH2GaqnC6B6agdSUVygOpFApqYzkTsBYas/nr3ODr0fgR61zq+QU09RC5o0PzSE+e9NUhygov764wW+4q+Kr2yu3n240+ViLvQp0=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Index:
 AQHZdnKBJmFHv6Wq8kSjvXctD+zlI69f6zCAgAHjioCAAAXEgIAAAV4AgAACogCAAAOZAA==
Date: Fri, 19 May 2023 15:13:51 +0000
Message-ID: <13FF90CF-22ED-4A7E-AAEB-633FF8CAF999@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
 <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
 <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
 <AAE00F7D-612B-47AC-82F9-BEEE9CB17804@arm.com>
 <b7bb99fb-c8d5-8852-9f35-3430a61d39b7@xen.org>
In-Reply-To: <b7bb99fb-c8d5-8852-9f35-3430a61d39b7@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM8PR08MB6545:EE_|AM7EUR03FT032:EE_|AS8PR08MB9148:EE_
X-MS-Office365-Filtering-Correlation-Id: de587e3c-628c-4b7a-de0e-08db587bb27d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <BDE669DC3F704A4C80CEACAAC70802F9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6545
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	fe20f238-d2df-4642-2bdd-08db587ba77f
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 15:14:10.1403
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: de587e3c-628c-4b7a-de0e-08db587bb27d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9148



> On 19 May 2023, at 16:00, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 19/05/2023 15:51, Luca Fancellu wrote:
>> /* Control Registers */
>> /*
>> * CPTR_EL2 needs to be written before calling vfp_restore_state, a
>> * synchronization instruction is expected after the write (isb)
>> */
>> WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>> WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
>> /*
>> * This write to sysreg CONTEXTIDR_EL1 ensures we don't hit erratum
>> * #852523 (Cortex-A57) or #853709 (Cortex-A72).
>> * I.e DACR32_EL2 is not correctly synchronized.
>> */
>> WRITE_SYSREG(n->arch.contextidr, CONTEXTIDR_EL1);
>> WRITE_SYSREG(n->arch.tpidr_el0, TPIDR_EL0);
>> WRITE_SYSREG(n->arch.tpidrro_el0, TPIDRRO_EL0);
>> WRITE_SYSREG(n->arch.tpidr_el1, TPIDR_EL1);
>> if ( is_32bit_domain(n->domain) && cpu_has_thumbee )
>> {
>> WRITE_SYSREG(n->arch.teecr, TEECR32_EL1);
>> WRITE_SYSREG(n->arch.teehbr, TEEHBR32_EL1);
>> }
>> #ifdef CONFIG_ARM_32
>> WRITE_CP32(n->arch.joscr, JOSCR);
>> WRITE_CP32(n->arch.jmcr, JMCR);
>> #endif
>> isb();
>> /* VFP - call vfp_restore_state after writing on CPTR_EL2 + isb */
>> vfp_restore_state(n);
>> Maybe I misunderstood your preference, do you want me to put the write to CPTR_EL2
>> right before the isb() that precedes vfp_restore_state?
> 
> Yes please. Unless there is a reason to keep it "far away". The comments look good to me.

Ok, a question regarding README.LinuxPrimitives, is it some file taken from an automated tool?
Because I see there is some kind of structure, how can I know if my syntax is correct?

> 
> Cheers,
> 
> -- 
> Julien Grall




From xen-devel-bounces@lists.xenproject.org Fri May 19 15:17:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 15:17:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537156.836014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01rA-0000iV-Mj; Fri, 19 May 2023 15:17:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537156.836014; Fri, 19 May 2023 15:17:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01rA-0000iO-Jx; Fri, 19 May 2023 15:17:24 +0000
Received: by outflank-mailman (input) for mailman id 537156;
 Fri, 19 May 2023 15:17:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q01r9-0000iG-5h
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 15:17:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q01r8-0006Mn-S2; Fri, 19 May 2023 15:17:22 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.7.127]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q01r8-0000Ns-M8; Fri, 19 May 2023 15:17:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Vaf9tYYMa/4VADQCkHAr+y9FmmnTjGo2uTwPzNEJPNM=; b=do2HzavNi+xEV/PrJweg34P7tt
	EZ6imFs08P3QheOOqkuoxuAXV2HKhek/dcgItGqNYeNK6l40MDKpybQpZIQsvGhHorH50MlXHK1iT
	dX+Y6Q6YzTLuV8bvAazSSWSxH/r0W0oVEB5UCtqSKAopfgER/kqJLWTYBpKp2l4RZrJE=;
Message-ID: <751c0d88-ee5e-7bc6-7d66-a478b9411fa4@xen.org>
Date: Fri, 19 May 2023 16:17:20 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
 <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
 <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
 <AAE00F7D-612B-47AC-82F9-BEEE9CB17804@arm.com>
 <b7bb99fb-c8d5-8852-9f35-3430a61d39b7@xen.org>
 <13FF90CF-22ED-4A7E-AAEB-633FF8CAF999@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <13FF90CF-22ED-4A7E-AAEB-633FF8CAF999@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 19/05/2023 16:13, Luca Fancellu wrote:
>> On 19/05/2023 15:51, Luca Fancellu wrote:
>>> /* Control Registers */
>>> /*
>>> * CPTR_EL2 needs to be written before calling vfp_restore_state, a
>>> * synchronization instruction is expected after the write (isb)
>>> */
>>> WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
>>> WRITE_SYSREG(n->arch.cpacr, CPACR_EL1);
>>> /*
>>> * This write to sysreg CONTEXTIDR_EL1 ensures we don't hit erratum
>>> * #852523 (Cortex-A57) or #853709 (Cortex-A72).
>>> * I.e DACR32_EL2 is not correctly synchronized.
>>> */
>>> WRITE_SYSREG(n->arch.contextidr, CONTEXTIDR_EL1);
>>> WRITE_SYSREG(n->arch.tpidr_el0, TPIDR_EL0);
>>> WRITE_SYSREG(n->arch.tpidrro_el0, TPIDRRO_EL0);
>>> WRITE_SYSREG(n->arch.tpidr_el1, TPIDR_EL1);
>>> if ( is_32bit_domain(n->domain) && cpu_has_thumbee )
>>> {
>>> WRITE_SYSREG(n->arch.teecr, TEECR32_EL1);
>>> WRITE_SYSREG(n->arch.teehbr, TEEHBR32_EL1);
>>> }
>>> #ifdef CONFIG_ARM_32
>>> WRITE_CP32(n->arch.joscr, JOSCR);
>>> WRITE_CP32(n->arch.jmcr, JMCR);
>>> #endif
>>> isb();
>>> /* VFP - call vfp_restore_state after writing on CPTR_EL2 + isb */
>>> vfp_restore_state(n);
>>> Maybe I misunderstood your preference, do you want me to put the write to CPTR_EL2
>>> right before the isb() that precedes vfp_restore_state?
>>
>> Yes please. Unless there is a reason to keep it "far away". The comments look good to me.
> 
> Ok, a question regarding README.LinuxPrimitives, is it some file taken from an automated tool?

I am not aware of any automated tools using it. All the re-syncs I have 
seen recently were manual.

> Because I see there is some kind of structure, how can I know if my syntax is correct?

There are some commands to help with the syncing when the file is very 
close to the Linux one. In your case, I would follow what we did for 
the atomics, as only some of the functions are the same.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 19 15:24:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 15:24:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537162.836029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01y5-0002Er-G5; Fri, 19 May 2023 15:24:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537162.836029; Fri, 19 May 2023 15:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q01y5-0002Ek-DA; Fri, 19 May 2023 15:24:33 +0000
Received: by outflank-mailman (input) for mailman id 537162;
 Fri, 19 May 2023 15:24:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9pCJ=BI=citrix.com=prvs=496ec590c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q01y3-0002Ec-Or
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 15:24:32 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3de87cb6-f659-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 17:24:28 +0200 (CEST)
Received: from mail-dm6nam04lp2049.outbound.protection.outlook.com (HELO
 NAM04-DM6-obe.outbound.protection.outlook.com) ([104.47.73.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 May 2023 11:24:17 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH7PR03MB7003.namprd03.prod.outlook.com (2603:10b6:510:12e::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.17; Fri, 19 May
 2023 15:24:16 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 15:24:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3de87cb6-f659-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684509868;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=hpnx/oMSOpf9eRJO9qIixDVv1lS7/xn+mem9fTZbvho=;
  b=a0kTAr4Cukn6g4v8QcsyCNfgjH/ZtyYRRTaJ8MlugSg+YmS8RXS1+ZvT
   hdQX/izcsW9zuxbOKhWmH3BudnUpwo2kOAm4K02pNfSTJlH3yN1JZRpiG
   GPMyN2gIXA7vrYv6zvF7yqLHV37bwKncTTQt62F8PA/1ZnaZaVvPPZtji
   Q=;
X-IronPort-RemoteIP: 104.47.73.49
X-IronPort-MID: 108433237
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,177,1681185600"; 
   d="scan'208";a="108433237"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HkWfVYTjbq8epxVTY/cT7C5jrROlxxX+VD6Rf2puIOaIB7hDtKCmkTMnXAEWd2Fdqt0SQdtcnKn1Nry0zp3m/I+ZRTS0zVIkvt0szQHtmN13lz+n8IzYxWcI5oKb8TQMYHc0zNOkjqRS9CY934XlvZYP4sGN3slpMwKWvOEVMbp0uWQCPaAjk2ObOrXmrPeQZN9bhtrJcx4Fy6Gsy8TNY9mqH0Da+eV5tcDH75FkTqv4Jp6QNPdDb8MDGju1fVLNY9AY3mA8ITcY9aRRyJTDHGab1GYFFJjoGoVK9uiHWG3xn8dROjSGPlXuTE3en4AMfdP7UoD6jilp8DuJb7DjCw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HNP+hd6PJCKQUi6fnXVetMNIKRZNgvYVVI0wBMveiV0=;
 b=SI/11bbarwkNm+CNG1+epwpbB3U0QKgoF97xYTOLKN129rGub3KYvgt5+acbr89LLm5PQm2HPUv9BVaooLa8ffxnW0/aWaRJ5OGTBC3beWxrd2Xsm7Q3qxQgUkqrIZcWn+BngmUjk4/nq1Akxwc8GDoq6p760WBN55DhtnBEP2gP7ggkXOfWwxICtKflMpNAl8EUii4mt6Ii+E/ZJI+JO+uGt5JD2pnW1dmhOZi9SLfvtPCKRRqUXH8lwdUktH/QX5adDDQtfG6/CPNn+i00Jrf9Ckct/rPtLukNKXFRzEYrNJ4cg6sxjTd/RwUdvPHEWuDWonFCRmmfO5L/mZWhQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HNP+hd6PJCKQUi6fnXVetMNIKRZNgvYVVI0wBMveiV0=;
 b=F8A2AoHCIFs/b3rw3hLZ18zhIbMv1fKJ9eaMwvMY2S7WjxTWp0Wk9CyCr/QpkB0SWfP0czzJQ3lD78hr8irV83vGpQbeonRuudw+US/IMynnmyb2SfVMxTo1iV0mphiN8MkqOMhzgnw4AW/JV6AKAufn4DifDtrNl6+qMprnL7c=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <9cd79b9a-2e9d-aa0b-3ea2-747a6840f5f1@citrix.com>
Date: Fri, 19 May 2023 16:24:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/4] x86/cpufeature: Rework {boot_,}cpu_has()
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-2-andrew.cooper3@citrix.com>
 <d5eb0703-63be-0c5c-3fee-37e74e11dcf8@suse.com>
In-Reply-To: <d5eb0703-63be-0c5c-3fee-37e74e11dcf8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0033.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::21) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH7PR03MB7003:EE_
X-MS-Office365-Filtering-Correlation-Id: 069e435d-a3ec-44f0-acc6-08db587d1add
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 069e435d-a3ec-44f0-acc6-08db587d1add
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 15:24:15.3384
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5rOCbcotRkNd3bOByOpuD64uAa+BofUtTUZO+mkoE17FeSGeEQlirgFhFBCdNncTaeXqGzBKmxjZpYoEZPD73WU0ioouSgQIxRoR0P1WBnc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB7003

On 17/05/2023 3:01 pm, Jan Beulich wrote:
> On 16.05.2023 16:53, Andrew Cooper wrote:
>> --- a/xen/arch/x86/include/asm/cpufeature.h
>> +++ b/xen/arch/x86/include/asm/cpufeature.h
>> @@ -7,6 +7,7 @@
>>  #define __ASM_I386_CPUFEATURE_H
>>  
>>  #include <xen/const.h>
>> +#include <xen/stdbool.h>
>>  #include <asm/cpuid.h>
> This isn't needed up here, and ...
>
>> @@ -17,7 +18,6 @@
>>  #define X86_FEATURE_ALWAYS      X86_FEATURE_LM
>>  
>>  #ifndef __ASSEMBLY__
>> -#include <xen/bitops.h>
> ... putting it here would (a) eliminate a header dependency for
> assembly sources including this file (perhaps indirectly) and (b)
> eliminate the risk of a build breakage if something was added to
> that header which isn't valid assembly.

b) That's a weak argument for headers in general, but you're saying it
about our copy of stdbool.h, which is probably the least likely header
for which that could ever be true.

a) Not really, because cpuid.h pulls in a reasonable chunk of the world,
including types.h and therefore stdbool.h.  cpuid.h is necessary to make
the X86_FEATURE_ALWAYS -> X86_FEATURE_LM mapping work, which is used by
asm for alternatives.

I'm tempted to just omit it.  cpufeature.h has one of the most tangled
sets of headers we've got, and I can't find any reasonable way to make
it less bad.

> Preferably with the adjustment
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 19 15:37:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 15:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537168.836040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q02A3-0003tg-O1; Fri, 19 May 2023 15:36:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537168.836040; Fri, 19 May 2023 15:36:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q02A3-0003tZ-Kf; Fri, 19 May 2023 15:36:55 +0000
Received: by outflank-mailman (input) for mailman id 537168;
 Fri, 19 May 2023 15:36:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9pCJ=BI=citrix.com=prvs=496ec590c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q02A2-0003tR-1F
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 15:36:54 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8ae665a-f65a-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 17:36:51 +0200 (CEST)
Received: from mail-mw2nam12lp2049.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.49])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 May 2023 11:36:48 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SA3PR03MB7467.namprd03.prod.outlook.com (2603:10b6:806:3a1::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 15:36:45 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 15:36:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8ae665a-f65a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684510611;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=FlQh7b5LTJf4+44J5I14Xqx887u+lfrJucGy/YqvKUk=;
  b=MJGlZO18n0oCqPi0lxoy77OKWRuHxZ/EpVAnBykGrXXXu7yJZK/vtmfu
   o1XiotHcp6awnJRtwpuWH36Jama57/aPDSrkeWtXVJwCQe4vcORzd4mZQ
   VXbmJKrUspDKNDZOFxkiK8YPkwtjOFyc3PPme+69LSFLuwF2ad7tMWFjZ
   s=;
X-IronPort-RemoteIP: 104.47.66.49
X-IronPort-MID: 109688956
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:Ih+TQ2HdiY7VFPAgqmJVyUBMFckdY0SG1XLAJVPjGEdTea+KHAo=
X-Talos-MUID: 9a23:xi5CQAg4j/dmIfn3GF1NRsMpO4B0s5iWDhkxgao0lJG+Ch4hMBC/tWHi
X-IronPort-AV: E=Sophos;i="6.00,177,1681185600"; 
   d="scan'208";a="109688956"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LmmDhRAefJtRKCKfZxkPZeXOsi57s7lHCV5udf2DoMAePp3iX3FoHVICLKdH2HxBpCQtaDgqK2KsT/PaUwXugL371kqeilZ6fAQRfCuufshCAwgnc7ijuDd1/b/bq0cO+brmtrsy1bJaFADoLRmXXosPZuRGxcZ70F8vueE+rLqv9OARoCqOjDBysupW9XrOX6RN0sXKOaANWXjslO2tCY3aeKxs16N5SNUDyIbOvWXri0qEroNbzDTco+zf0+d4ADOvMj38k7iArBGC7PbAZcP6gbEZdPaj1wApgAjTCergU9chGuw99XScks+vmjLS82husGXFV/5+4caflNAdoA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TtMiiOK4EZSQ52xTb4CP2P8D+baoolUYBsdhmsSZFbk=;
 b=cyF5qzzK5lwQXhkgbyYSCSSRWm6T0d6YOzZ6XxQvLf44+2W5A7EavHfaibIbMnpGT2oaZhRpxLPhg1xefqWR5OoZp5vfcp6kwuo8XDtl4ZLR1YskpszwRSNyvr2BEh2u4vThx4TJJs/MTGDlE3w0epZEXyW8Z1Bqbt6iYL9bfVkH5WHYTt8Jr5oVbFreR/YZIdZPzCWFFYFIl406XoCBybEZvaXKd/B4Sq7YrMmFsK8hMbOxYNUMmMY2OnInL35eyJuMjIFxpdg8AdmkTX0ZL0eIulDKA1xRwh8C9XYZJYA9CYdxTEfdua/NqOhjsgjtDU/lVu7wTNFV0B+QdC1vBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TtMiiOK4EZSQ52xTb4CP2P8D+baoolUYBsdhmsSZFbk=;
 b=LYCfKRDFm3WWaNQNA8FlAVQ77mQ/N7fDSb14mIKmxwBmlkyVcYPdjoOMVXaSZZJPl9rE5TL7FUsnb56cK66oQmz9UG0zS41ytj2S1vd0UciPMZKHGg0K3e1gg3Jvrw19qNsGeTgxczRu+YHaDyvUTMEYiN5Z3UjuUsenOzeS9I0=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <ea8bb0da-6326-55d6-18b7-0ce681046d53@citrix.com>
Date: Fri, 19 May 2023 16:36:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/6] x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-4-andrew.cooper3@citrix.com>
 <347219e4-6c3a-a0ad-b010-4dbd7282c7ad@suse.com>
In-Reply-To: <347219e4-6c3a-a0ad-b010-4dbd7282c7ad@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0225.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:315::10) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SA3PR03MB7467:EE_
X-MS-Office365-Filtering-Correlation-Id: d5fdaebe-f469-4b74-7b00-08db587eda17
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d5fdaebe-f469-4b74-7b00-08db587eda17
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 15:36:45.4638
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: D5tCeFcqkrL1t9P7Yd0umwI5S7dR2X1gVmqsPmrGJ+9X4AVlvVMyQ8KB30HwNE46ZROwpxRvcVWQtKrNiQshCaeeX4e66a3EWA8Paq1unAI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR03MB7467

On 16/05/2023 1:02 pm, Jan Beulich wrote:
> On 15.05.2023 16:42, Andrew Cooper wrote:
>> Bits through 24 are already defined, meaning that we're not far off needing
>> the second word.  Put both in right away.
>>
>> The bool bitfield names in the arch_caps union are unused, and somewhat out of
>> date.  They'll shortly be automatically generated.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> I'm largely okay, but I'd like to raise a couple of naming / presentation
> questions:
>
>> --- a/tools/misc/xen-cpuid.c
>> +++ b/tools/misc/xen-cpuid.c
>> @@ -226,6 +226,14 @@ static const char *const str_7d2[32] =
>>      [ 4] = "bhi-ctrl",      [ 5] = "mcdt-no",
>>  };
>>  
>> +static const char *const str_10Al[32] =
>> +{
>> +};
>> +
>> +static const char *const str_10Ah[32] =
>> +{
>> +};
>> +
>>  static const struct {
>>      const char *name;
>>      const char *abbr;
>> @@ -248,6 +256,8 @@ static const struct {
>>      { "0x00000007:2.edx", "7d2", str_7d2 },
>>      { "0x00000007:1.ecx", "7c1", str_7c1 },
>>      { "0x00000007:1.edx", "7d1", str_7d1 },
>> +    { "0x0000010a.lo",   "10Al", str_10Al },
>> +    { "0x0000010a.hi",   "10Ah", str_10Ah },
> The MSR-ness can certainly be inferred from the .lo / .hi and l/h
> suffixes of the strings, but I wonder whether having it e.g. like
>
>     { "MSR0000010a.lo",   "m10Al", str_10Al },
>     { "MSR0000010a.hi",   "m10Ah", str_10Ah },
>
> or
>
>     { "MSR[010a].lo",   "m10Al", str_10Al },
>     { "MSR[010a].hi",   "m10Ah", str_10Ah },
>
> or even
>
>     { "ARCH_CAPS.lo",   "m10Al", str_10Al },
>     { "ARCH_CAPS.hi",   "m10Ah", str_10Ah },
>
> wouldn't make it more obvious.

Well, it takes something which is consistent, and introduces
inconsistencies.

The 'e' is logically part of the leaf number, so using 'm' for MSRs is
not equivalent.  If you do want to prefix MSRs, you need to prefix the
non-extended leaves too, and 'c' isn't obviously CPUID when there's the
Centaur range at 0xC000xxxx.

Nor can you reasonably have MSR[...] in the long names without
CPUID[...] too, and that's taking some pretty long lines and making them
even longer.

>  For the two str_*, see below.
>
>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>> @@ -307,6 +307,10 @@ XEN_CPUFEATURE(AVX_VNNI_INT8,      15*32+ 4) /*A  AVX-VNNI-INT8 Instructions */
>>  XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
>>  XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
>>  
>> +/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */
>> +
>> +/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.edx, word 17 */
> Right here I'd be inclined to omit the MSR index; the name ought to
> be sufficient.

It doesn't hurt to have it, and it might be helpful for people who
don't know the indices off by heart.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 19 15:52:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 15:52:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537174.836056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q02P2-0006NQ-3p; Fri, 19 May 2023 15:52:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537174.836056; Fri, 19 May 2023 15:52:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q02P2-0006NJ-0X; Fri, 19 May 2023 15:52:24 +0000
Received: by outflank-mailman (input) for mailman id 537174;
 Fri, 19 May 2023 15:52:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9pCJ=BI=citrix.com=prvs=496ec590c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q02P0-0006ND-CF
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 15:52:22 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 21874dd9-f65d-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 17:52:19 +0200 (CEST)
Received: from mail-bn8nam04lp2048.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 May 2023 11:52:15 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6504.namprd03.prod.outlook.com (2603:10b6:a03:394::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Fri, 19 May
 2023 15:52:10 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 15:52:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21874dd9-f65d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684511538;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=67iwhlo1gSg4uAUZlHc9WXoYmjyPevZMdWTL6UP7ghQ=;
  b=UGmS7ptmMnKLncqvgxYijTd5aSEVH5XxJTE7jM6g3AUEIQmJkrIwbKwh
   Dspj3JNJO/Ey8B7IKAlqK5/QPcE49G+jl/K8bWAQgkM5loF34eXY6Ar6L
   ldYk4OqLLQOLl/wT5YjxSQJRigGoJ6uEq+cUPJPgTtAC7h0dmwWLVLd3v
   U=;
X-IronPort-RemoteIP: 104.47.74.48
X-IronPort-MID: 110080316
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:YmIBG24sBd9EXYh2Qtss10koMcMgclrhl1CBJx+TM3gzaOKfRgrF
X-Talos-MUID: 9a23:yJAnIATLUDKLKGJuRXTJpiBaM9dF+J6iS14OsM4WgZKENndJbmI=
X-IronPort-AV: E=Sophos;i="6.00,177,1681185600"; 
   d="scan'208";a="110080316"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nXFM8jzBKp+kBNJHQGcWtgEqwrDNXC4TQKaAYeYvWf6gIFhJUhfkaTAqgRfSsIj7xt5U4of3mVU35IQLRHTrA76yGKba2xXa6iYl31UO2rdWCkiVE0Jw35sAcIioGO6AhT6Oe7Nwo3Pjp2d3m/+Oh0MDEDIRlIgXx/6YUgA7JT60HvbR7MNcGGkYBJioOREcq6UWBu3K0jYlBniqsWFrQ0jUWGuc0ge8EluUcBHzFVBmq+B/o60WEdXk3hvwdRp8vjrfeVQh2GR12tq/ZxzzC9OATRlY7B4F2CuOmY8Vvq/+vvH0M8t1NjC9WShPlNsbU3z5BY/sPfWVykTu8SXxjw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=67iwhlo1gSg4uAUZlHc9WXoYmjyPevZMdWTL6UP7ghQ=;
 b=aG1/r9yCTA0smDoOtn5w5EwPZ2iQyNUmxbq9uTk2U4l7in4ZcePUZ3mkrr1OAtz7aaoj1ZFvKM7xUvNFslzkHYYko+fHs/PZtQW62D3SnKeYLLDt23ls3siKpt3o3JRDFeXvLmJ1Lgm+lH8XXEGo4P3L8kgSkHvrZE/oCp1eToPTKYdPwuUR550sub4vc03yo8aB/vsrV9hj9hf3hPM93IlGuHKuxI2bDCj4YH+tFnMT3+X2ifkWSUnL+rxbtNlzUGkBRgDkzKfsWUaTgoiL2tVpSpxS1uhzhHqXI92TpAG7bZEBzd4UCkcXgWwJ42v/qywUkvH0ob7KGOU476i1wA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=67iwhlo1gSg4uAUZlHc9WXoYmjyPevZMdWTL6UP7ghQ=;
 b=UHjgyEAi0SAsNgNYQIJE349hD9hX69103UlQYwgXf84jJQrkYb/k335ObWpEsMuvERrMy0/8idN/gLRFDk9Kj8XxTFNFqMIjcIKnNrYq/Fbo4Lve5WgOyNdHjT9rYeEXCBcpKcChqtq4C+iDuYeoUJ5oYvaUyx9/L/8Mqm6smXQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <897bac23-b17d-ec4b-613b-d4d1b4c77d58@citrix.com>
Date: Fri, 19 May 2023 16:52:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
 <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
 <1bef2b0e-04e8-2ca0-cf03-f61cdae484a9@suse.com>
 <b1c56e56-90cc-0f37-4c8a-df1217339e59@citrix.com>
 <22a6bd70-887e-da72-ccff-16b3627463c3@suse.com>
 <54b35fa0-160e-3035-6c22-65a791ed2f84@citrix.com>
 <a53a77e3-6dcd-2668-0f3c-282de93d8b04@suse.com>
In-Reply-To: <a53a77e3-6dcd-2668-0f3c-282de93d8b04@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0074.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:190::7) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6504:EE_
X-MS-Office365-Filtering-Correlation-Id: c4cf129f-f565-4d8a-cc2f-08db5881012b
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	bu3X8zHsYEg2YNtNqFT23lscNXm6ij7BvJmS4a53G4JbgVKq0HF4BYDErE1H+KtefSOEpF3oijwyeoJ/01f6bNhO+n7gDlo30VCyeJN0lQYbfNHhyCtxBQtNuURy+KFzyBDKdH0RYVozWQk/2utxyt7eNsf69wtv0m15k0ieb17b8K6cV3IJPxOD4yksOtRstuHO4MJfMgRrTPDm976UuaLgY3fGuBToPRtrhAEJv0KnC5xpjzjlCU0poaKwokIIGmtVgkCh68p4eoUOmmGxHAZ+I7odvtJqqOwt2Fc5wmLMq6gOyRSxJIQny2k9AFymWvZ7nOJcpsiqqrQ+dKgFO/B0zCtftuHxHlMLuCkzo6/sCinKS2D5hECDKR5xvDo/jTDERp6p1RzhVDc3KS2J/c94lwblrSV0B2gtrqloSE7MKSDUzdEV2JhW7QtO4JFkdlmlVkcAgEC5+xZGETBCeyKtaWNqwrdBK7v/2s6+ZmORlTCPoYpkWqMbhkT5pULy56MmXI866EsEJKqNX2A14EXcm1owVjcteZLVAcbQhkjJ+okJWOHfmpPS/9lb9yCv0tl8Rtah8Nn1k+jy3kEnUbLLbe3bMJ37DaeEczUf2xwWSX0wM3QMOdpqScWW7DI6LsXVmB8x3fj7YJTQI43mgvoOvhyPo1Quu6rxs7L+o9IoGoLOj1BNOMnj6p0l7h47ypwd5IO5MjMmVUZHgeOY5zMDhGN/30HrZfJCzo+GJaBV24KLrE8Hld69lDmJLl9N7T1gqckc6IDBoyRojzTtxwEdVLFFtyVI7/Kojadbzd9GcZWnlD4BHjujSpkNkiWE
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c4cf129f-f565-4d8a-cc2f-08db5881012b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 15:52:10.1719
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wnA0Y6Mj30xL9Bkl1/K4uC+euRhqE9ydtMeDSsg9eV6DiJiz12M/U7wgUIsB4PRqm0qvdf7HOjoGFNfTh53CCWDJdcNlcb91ZfwmbNuBk5M=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6504

On 17/05/2023 10:20 am, Jan Beulich wrote:
> On 16.05.2023 21:31, Andrew Cooper wrote:
>> On 16/05/2023 3:53 pm, Jan Beulich wrote:
>>> I guess that's no
>>> different from other max-only features with dependents, but I wonder
>>> whether that's good behavior.
>> It's not really something you get a choice over.
>>
>> Default is always less than max, so however you choose to express these
>> concepts, when you're opting-in you're always having to put information
>> back in which was previously stripped out.
> But my point is towards the amount of data you need to specify manually.
> I would find it quite helpful if default-on sub-features became available
> automatically once the top-level feature was turned on. I guess so far
> we don't have many such cases, but here you add a whole bunch.

I'm not suggesting specifying it manually.  That would be a dumb UX move.

But you absolutely cannot have "user turns on EIBRS" meaning "turn on
ARCH_CAPS too", because a) that requires creating the reverse feature
map which is massively larger than the forward feature map, and b) it
creates a vulnerability in the guest, and c) it's ambiguous - e.g. what
does turning on a sub-mode of AVX-512 mean for sibling sub-modes?

Whichever algorithm you want, you still need *something* to know that
ARCH_CAPS is special and how to arrange the defaults given a non-default
overarching setting.

When the toolstack infrastructure grows the ability to say no to the
user, it will be able to identify explicit user settings which cannot be
fulfilled.  (And with a bit more complicated logic, why.)

>>> Wouldn't it make more sense for the
>>> individual bits' exposure qualifiers to become meaningful once the
>>> qualifying feature is enabled? I.e. here this would then mean that
>>> some ARCH_CAPS bits may become available, while others may require
>>> explicit turning on (assuming they weren't all 'A').
>> I'm afraid I don't follow.  You could make some bits of MSR_ARCH_CAPS
>> itself 'a' vs 'A', and that would have the expected effect (for any VM
>> where arch_caps was visible).
> Visible by default, you mean. Whereas I'm considering the case where
> the CPUID bit is default-off, and turning it on for a guest doesn't at
> the same time turn on all the 'A' bits in ARCH_CAPS (which hardware
> offers, or which we synthesize).
>
> Something similar could be seen / utilized for AMX, where in my
> pending series I set all the bits to 'a', requiring every individual
> bit to be turned on along with turning on AMX-TILE. Yet it would be
> more user friendly if only the top-level bit needed enabling manually,
> with available sub-features then becoming available "automatically".

I think I've covered all of this in the reply above?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 19 15:52:36 2023
Message-ID: <64e3689f-8b5b-9eea-c756-85261de4c2e1@citrix.com>
Date: Fri, 19 May 2023 16:52:21 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <327eb858-f5f5-bf09-edfb-53c5c23a6c17@suse.com>
In-Reply-To: <327eb858-f5f5-bf09-edfb-53c5c23a6c17@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 16/05/2023 3:58 pm, Jan Beulich wrote:
> On 15.05.2023 16:42, Andrew Cooper wrote:
>> --- a/xen/arch/x86/cpu-policy.c
>> +++ b/xen/arch/x86/cpu-policy.c
>> @@ -408,6 +408,25 @@ static void __init calculate_host_policy(void)
>>      p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
>>  }
>>  
>> +static void __init guest_common_max_feature_adjustments(uint32_t *fs)
>> +{
>> +    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
>> +    {
>> +        /*
>> +         * MSR_ARCH_CAPS is just feature data, and we can offer it to guests
>> +         * unconditionally, although limit it to Intel systems as it is highly
>> +         * uarch-specific.
>> +         *
>> +         * In particular, the RSBA and RRSBA bits mean "you might migrate to a
>> +         * system where RSB underflow uses alternative predictors (a.k.a
>> +         * Retpoline not safe)", so these need to be visible to a guest in all
>> +         * cases, even when it's only some other server in the pool which
>> +         * suffers the identified behaviour.
>> +         */
>> +        __set_bit(X86_FEATURE_ARCH_CAPS, fs);
>> +    }
>> +}
> Wouldn't this better be accompanied by marking the bit !a in the public header?

Yes, probably.
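
Something roughly like this (a sketch of the annotation, not a quote of the real xen/include/public/arch-x86/cpufeatureset.h entry -- the exact spelling is an assumption here; lowercase 'a' meaning max-only/opt-in, and '!' meaning the bit needs special handling beyond the automatic default and dependency calculations):

```
XEN_CPUFEATURE(ARCH_CAPS, 9*32+29) /*!a IA32_ARCH_CAPABILITIES MSR */
```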

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 19 16:14:51 2023
Date: Fri, 19 May 2023 17:14:12 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Peter Hoyes <Peter.Hoyes@arm.com>
CC: <xen-devel@lists.xenproject.org>, <wei.chen@arm.com>,
	<bertrand.marquis@arm.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/xendomains: Only save/restore/migrate if supported
 by xenlight
Message-ID: <d52c31c7-57b1-41c1-af35-a9b847683c0a@perard>
References: <20230322135800.3869458-1-peter.hoyes@arm.com>
 <fa320fd7-31fa-4e96-a804-172e70ef1c80@perard>
 <6b98b95d-d737-5495-9c03-7857886cb1ce@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <6b98b95d-d737-5495-9c03-7857886cb1ce@arm.com>

On Thu, Apr 06, 2023 at 03:57:12PM +0100, Peter Hoyes wrote:
> On 04/04/2023 17:28, Anthony PERARD wrote:
> > On Wed, Mar 22, 2023 at 01:58:00PM +0000, Peter Hoyes wrote:
> > > From: Peter Hoyes <Peter.Hoyes@arm.com>
> > > 
> > > Saving, restoring and migrating domains are not currently supported on
> > > arm and arm64 platforms, so xendomains prints the warning:
> > > 
> > >    An error occurred while saving domain:
> > >    command not implemented
> > > 
> > > when attempting to run `xendomains stop`. It otherwise continues to shut
> > > down the domains cleanly, with the unsupported steps skipped.
> > The patch looks kind of ok, but shouldn't $XENDOMAINS_SAVE be set to an
> > empty string in the config by the admin instead?
> > 
> > Or is the issue that $XENDOMAINS_SAVE is set by default, even on arm* ?
> Yea the default is the issue. We are building for embedded, using Yocto, so
> there isn't really an admin.
> > 
> > Maybe it's easier to check that the command is implemented at run time
> > rather than trying to have a good default value for XENDOMAINS_SAVE at
> > install/package time.
> 
> It would be cleaner to do this at build time for sure, but I'm not sure the
> autotools config file approach for sysconfig.xendomains.in can handle the
> logic for this?

I think that's possible; we can already change the config in ./configure
based on the CPU.
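
Something along these lines, perhaps (a hypothetical configure.ac sketch -- the variable name is made up, not from any actual patch, and it assumes AC_CANONICAL_HOST is already in effect so $host_cpu is set):

```
AS_CASE([$host_cpu],
  [arm*|aarch64*], [xendomains_save_default=""],
  [xendomains_save_default="/var/lib/xen/save"])
AC_SUBST([xendomains_save_default])
```

sysconfig.xendomains.in would then use @xendomains_save_default@ for XENDOMAINS_SAVE, so arm* installs default to not saving.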

I'm preparing a patch.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 19 16:25:22 2023
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Peter Hoyes
	<Peter.Hoyes@arm.com>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH] tools/xendomains: Don't auto save/restore/migrate on Arm*
Date: Fri, 19 May 2023 17:24:54 +0100
Message-ID: <20230519162454.50337-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <d52c31c7-57b1-41c1-af35-a9b847683c0a@perard>
References: <d52c31c7-57b1-41c1-af35-a9b847683c0a@perard>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Saving, restoring and migrating domains are not currently supported on
arm and arm64 platforms, so xendomains prints the warning:

  An error occurred while saving domain:
  command not implemented

when attempting to run `xendomains stop`. It otherwise continues to shut
down the domains cleanly, with the unsupported steps skipped.
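The skip works because an empty XENDOMAINS_SAVE value makes the save step a no-op. A minimal sketch of that effect (the function name is hypothetical, not taken from the actual xendomains script):

```shell
# Hedged sketch, not the real xendomains code: an empty XENDOMAINS_SAVE
# disables the save step, so the script falls through to a plain shutdown.
maybe_save_domains() {
    if [ -n "$XENDOMAINS_SAVE" ]; then
        echo "saving domains to $XENDOMAINS_SAVE"
    else
        echo "save disabled; shutting domains down instead"
    fi
}

XENDOMAINS_SAVE=""   # what configure now substitutes on Arm*
maybe_save_domains
```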

Also in sysconfig.xendomains, change "Default" to "Example" as the
real default is an empty value.

Reported-by: Peter Hoyes <Peter.Hoyes@arm.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Peter, what do you think of this approach?

For reference, there's also a way to find out whether a macro exists, with
AC_CHECK_DECL(), but the libxl.h header depends on other headers that
need to be generated.
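As a rough sketch only (untested, and not part of this patch), the AC_CHECK_DECL() alternative could look like this, assuming the generated headers libxl.h depends on were already available on the include path:

```
dnl Hypothetical sketch: probe libxl.h for the marker macro instead of
dnl matching on $host_cpu. Only viable once libxl.h's generated
dnl prerequisite headers exist at configure time.
AC_CHECK_DECL([LIBXL_HAVE_NO_SUSPEND_RESUME],
    [XENDOMAINS_SAVE_DIR=],
    [XENDOMAINS_SAVE_DIR="$XEN_LIB_DIR/save"],
    [[#include <libxl.h>]])
```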
---
 tools/configure                                    | 11 +++++++++++
 tools/configure.ac                                 | 13 +++++++++++++
 tools/hotplug/Linux/init.d/sysconfig.xendomains.in |  4 ++--
 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/tools/configure b/tools/configure
index 52b4717d01..a722f72c08 100755
--- a/tools/configure
+++ b/tools/configure
@@ -624,6 +624,7 @@ ac_includes_default="\
 
 ac_subst_vars='LTLIBOBJS
 LIBOBJS
+XENDOMAINS_SAVE_DIR
 pvshim
 ninepfs
 SYSTEMD_LIBS
@@ -10155,6 +10156,16 @@ if test "$ax_found" = "0"; then :
 fi
 
 
+case "$host_cpu" in
+    arm*|aarch64)
+        XENDOMAINS_SAVE_DIR=
+        ;;
+    *)
+        XENDOMAINS_SAVE_DIR="$XEN_LIB_DIR/save"
+        ;;
+esac
+
+
 cat >confcache <<\_ACEOF
 # This file is a shell script that caches the results of configure
 # tests run on this system so they can be shared between configure
diff --git a/tools/configure.ac b/tools/configure.ac
index 3cccf41960..0f0983f6b7 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -517,4 +517,17 @@ AS_IF([test "x$pvshim" = "xy"], [
 
 AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
 
+dnl Disable autosave of domain in xendomains on shutdown
+dnl due to missing support. This should be in sync with
+dnl LIBXL_HAVE_NO_SUSPEND_RESUME in libxl.h
+case "$host_cpu" in
+    arm*|aarch64)
+        XENDOMAINS_SAVE_DIR=
+        ;;
+    *)
+        XENDOMAINS_SAVE_DIR="$XEN_LIB_DIR/save"
+        ;;
+esac
+AC_SUBST(XENDOMAINS_SAVE_DIR)
+
 AC_OUTPUT()
diff --git a/tools/hotplug/Linux/init.d/sysconfig.xendomains.in b/tools/hotplug/Linux/init.d/sysconfig.xendomains.in
index f61ef9c4d1..3c49f18bb0 100644
--- a/tools/hotplug/Linux/init.d/sysconfig.xendomains.in
+++ b/tools/hotplug/Linux/init.d/sysconfig.xendomains.in
@@ -45,7 +45,7 @@ XENDOMAINS_CREATE_USLEEP=5000000
 XENDOMAINS_MIGRATE=""
 
 ## Type: string
-## Default: @XEN_LIB_DIR@/save
+## Example: @XEN_LIB_DIR@/save
 #
 # Directory to save running domains to when the system (dom0) is
 # shut down. Will also be used to restore domains from if # XENDOMAINS_RESTORE
@@ -53,7 +53,7 @@ XENDOMAINS_MIGRATE=""
 # (e.g. because you rather shut domains down).
 # If domain saving does succeed, SHUTDOWN will not be executed.
 #
-XENDOMAINS_SAVE=@XEN_LIB_DIR@/save
+XENDOMAINS_SAVE=@XENDOMAINS_SAVE_DIR@
 
 ## Type: string
 ## Default: "--wait"
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri May 19 16:41:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 16:41:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537198.836108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q039t-0005ir-M7; Fri, 19 May 2023 16:40:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537198.836108; Fri, 19 May 2023 16:40:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q039t-0005ij-J0; Fri, 19 May 2023 16:40:49 +0000
Received: by outflank-mailman (input) for mailman id 537198;
 Fri, 19 May 2023 16:40:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q039s-0005iZ-9x; Fri, 19 May 2023 16:40:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q039s-0000Rv-2C; Fri, 19 May 2023 16:40:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q039r-0002Tf-LD; Fri, 19 May 2023 16:40:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q039r-0007yD-Kk; Fri, 19 May 2023 16:40:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bhus9IvY6y6NQUnD7gUPe+b53zsIIXk+g7ZzNt7Wp+I=; b=wVIQNlsGYYnsZnBezz69aVUTOu
	znbR7UTh7k8XgnkscUDyyaaujnyqzWo8N6VyMJm7RuCcrPFzZ0EQXy3bskmkQIB7kQMn89IOd6mY2
	rgUkQA+NRWsK4lVp1sBMUnOvCpz/sOxWijnGXfOKrNb+Rr05wrLpIiWi0uN+CPb7DWGk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180742-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180742: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=146f515110e86aefe3bc2e8eb581ab724614060f
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 16:40:47 +0000

flight 180742 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180742/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                146f515110e86aefe3bc2e8eb581ab724614060f
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    2 days
Failing since        180699  2023-05-18 07:21:24 Z    1 days    5 attempts
Testing same since   180721  2023-05-19 06:11:20 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Gavin Shan <gshan@redhat.com>
  Igor Mammedov <imammedo@redhat.com>
  John Snow <jsnow@redhat.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Markus Armbruster <armbru@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Steve Sistare <steven.sistare@oracle.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2380 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 19 16:57:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 16:57:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537209.836127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q03QH-0007TA-BP; Fri, 19 May 2023 16:57:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537209.836127; Fri, 19 May 2023 16:57:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q03QH-0007T3-7v; Fri, 19 May 2023 16:57:45 +0000
Received: by outflank-mailman (input) for mailman id 537209;
 Fri, 19 May 2023 16:57:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9pCJ=BI=citrix.com=prvs=496ec590c=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q03QG-0007Sx-0L
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 16:57:44 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 43a3a5b6-f666-11ed-b22d-6b7b168915f2;
 Fri, 19 May 2023 18:57:41 +0200 (CEST)
Received: from mail-bn7nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 19 May 2023 12:57:29 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6094.namprd03.prod.outlook.com (2603:10b6:5:395::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Fri, 19 May
 2023 16:57:25 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6411.021; Fri, 19 May 2023
 16:57:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43a3a5b6-f666-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684515461;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=jG1556UdD29OJoaALVIo8wkISlGBR2/HX2MVx7arj5c=;
  b=hZFnLzcEMCGRrVHVVm4O6fuF6RMS0fTzsl3BVLutl1m7I6r3ZsPqmp1F
   Tz+mKRmS9AfHaab/FVvaCpT9Br47TVS7gZbi4IwO1B/421vN8qaJ97goM
   jz2RXEkbXq4f48Ae1rp3SkuiaAR/sJKax3TCBxyetSoA9bTpLay9y3YFj
   A=;
X-IronPort-RemoteIP: 104.47.70.101
X-IronPort-MID: 109017135
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,177,1681185600"; 
   d="scan'208";a="109017135"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MgAsgsjHRP7EACn8XdeluJuZnDeCrbDFUNWSYrQwf9EWPvc2U3QCpNuZkEIaAaJj/uBf1y9LRIE73ZY7Aa7OmpVH6obv5AWBqMG9ftapYrjxF4Q6gJnZ+a7FnX9LxPqeI7iebPZUFW8Ef9uxf1hC6JHIYOgO8XyZTZUcnO+yW7L6rFKvwUPu+nspeVeL7Sswf6cm1CW/YRkZPw/NsQFll+LUq0uKqJSB2zM57xarS2E7XrASFjbUhMEglGhyzHy//TXZzT+PSZ6BVvngatGOjkMHQiHCDkxCKIZGVWgOQWvAq9WTsRgfVFOjyEgNI9SiA5EVGjNnAsdy1kUPw9Btmg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jG1556UdD29OJoaALVIo8wkISlGBR2/HX2MVx7arj5c=;
 b=kSMIAT5FgOyy4xytPuQDWzmb3Rxu0rGzlOQnZKImPvpD4yOqcuAnCAUA7NXQ5i4ybhNq7WVpZCgpBKlAauA4n5LgJUJ3pwWsi1PNXb9W6/zP0ouxAc4HFaMo/mnICeRovvH++Mo3eX8JUqxUF69adhCOZ5w8bM7j2rUtpkpZYeh+6eHX5OMTVc1aTT8WUQv9XCSIiMFtv1eix+OjpMjGZdJRH3WIyNNmMu/it4xI1bv3R08XGO7c4+EXNmcaRqnxxWac8FbTwnpHI7HLmV4eoYNjdUNqzKcIDylmbQEFVHZpcv2EPHV05SeUy0OlBfhAMqCUFG/Y0r/ekMqETzZFJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jG1556UdD29OJoaALVIo8wkISlGBR2/HX2MVx7arj5c=;
 b=cZheZ+t1tSj8VzO8OZt7FcFuNNuVvMq4SA6YzUM0neC7cE+FAwECBmiQLtbBIW8+/T3qo5kjKnq4hsntqwwMY3RIyVNL+GyOy9O26OvrqMoGUfD9Dl2MEiMh432dLemuLL5lrEXItcdUlyJYBuPxVpq3Ft0zfQvPWHmqPNdELDg=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <0cafbfcb-2430-6d90-ee77-4e5de08ee1da@citrix.com>
Date: Fri, 19 May 2023 17:57:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [patch V4 36/37] x86/smpboot: Support parallel startup of
 secondary CPUs
Content-Language: en-GB
To: Jeffrey Hugo <quic_jhugo@quicinc.com>,
 Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>,
 David Woodhouse <dwmw2@infradead.org>
Cc: x86@kernel.org, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>,
 Sean Christopherson <seanjc@google.com>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Ross Philipson <ross.philipson@oracle.com>,
 David Woodhouse <dwmw@amazon.co.uk>
References: <20230512203426.452963764@linutronix.de>
 <20230512205257.411554373@linutronix.de>
 <16562305-3bc0-c69f-0cb5-1b9da1014f19@quicinc.com>
In-Reply-To: <16562305-3bc0-c69f-0cb5-1b9da1014f19@quicinc.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0014.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ad::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM4PR03MB6094:EE_
X-MS-Office365-Filtering-Correlation-Id: e9ddc14d-f273-4528-925d-08db588a1e69
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e9ddc14d-f273-4528-925d-08db588a1e69
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 16:57:24.6082
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oA7o/o5FRMYXgF7+HJQsjjD6oFRqFrjubLB/IOEu4eWMa4HGuq+mqx21q5OL9JS/+A3v0qUIiGguLhwBCLjoGM9OLw5zEtHveQM/AJQhTac=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6094

On 19/05/2023 5:28 pm, Jeffrey Hugo wrote:
>   DESCEND objtool
>   INSTALL libsubcmd_headers
>   CALL    scripts/checksyscalls.sh
>   AS      arch/x86/kernel/head_64.o
> arch/x86/kernel/head_64.S: Assembler messages:
> arch/x86/kernel/head_64.S:261: Error: missing ')'
> arch/x86/kernel/head_64.S:261: Error: junk `UL<<10)' after expression
>   CC      arch/x86/kernel/head64.o
>   CC      arch/x86/kernel/ebda.o
>   CC      arch/x86/kernel/platform-quirks.o
> scripts/Makefile.build:374: recipe for target
> 'arch/x86/kernel/head_64.o' failed
> make[3]: *** [arch/x86/kernel/head_64.o] Error 1
> make[3]: *** Waiting for unfinished jobs....
> scripts/Makefile.build:494: recipe for target 'arch/x86/kernel' failed
> make[2]: *** [arch/x86/kernel] Error 2
> scripts/Makefile.build:494: recipe for target 'arch/x86' failed
> make[1]: *** [arch/x86] Error 2
> make[1]: *** Waiting for unfinished jobs....
> Makefile:2026: recipe for target '.' failed
> make: *** [.] Error 2
>
> This is with GCC 5.4.0, if it matters.
>
> Reverting this change allows the build to move forward, although I
> also need to revert "x86/smpboot/64: Implement
> arch_cpuhp_init_parallel_bringup() and enable it" for the build to
> fully succeed.
>
> I'm not familiar with this code, and nothing obvious stands out to me.
> What can I do to help root cause this?

Can you try:

-#define XAPIC_ENABLE    (1UL << 11)
-#define X2APIC_ENABLE    (1UL << 10)
+#define XAPIC_ENABLE    BIT(11)
+#define X2APIC_ENABLE    BIT(10)

The UL suffix isn't understood by older binutils, and this patch adds
the first use of these constants in assembly.
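
For reference, a minimal sketch (modelled on the kernel's const.h pattern, not the literal headers) of why BIT() is safe here: it is layered on an _AC()/UL() wrapper that only appends the "UL" suffix when compiling C, so assembly sources see a plain integer that even old gas can evaluate:

```c
/*
 * Sketch of the const.h pattern: when __ASSEMBLY__ is defined (as it is
 * while assembling .S files), _AC() drops the "UL" suffix so old gas
 * sees a bare integer; in C it pastes the suffix back on, keeping the
 * unsigned long type.
 */
#ifdef __ASSEMBLY__
#define _AC(X, Y)   X
#else
#define __AC(X, Y)  (X##Y)
#define _AC(X, Y)   __AC(X, Y)
#endif

#define UL(x)       (_AC(x, UL))
#define BIT(nr)     (UL(1) << (nr))

#define XAPIC_ENABLE    BIT(11)   /* (1UL << 11) in C, (1 << 11) in .S */
#define X2APIC_ENABLE   BIT(10)
```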

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 19 17:36:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 17:36:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537218.836152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q041C-0003aA-BQ; Fri, 19 May 2023 17:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537218.836152; Fri, 19 May 2023 17:35:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q041C-0003a3-7T; Fri, 19 May 2023 17:35:54 +0000
Received: by outflank-mailman (input) for mailman id 537218;
 Fri, 19 May 2023 17:35:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Umm=BI=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q041B-0003Zx-3N
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 17:35:53 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on062e.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 961bd3a6-f66b-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 19:35:46 +0200 (CEST)
Received: from DU2PR04CA0201.eurprd04.prod.outlook.com (2603:10a6:10:28d::26)
 by AM0PR08MB5332.eurprd08.prod.outlook.com (2603:10a6:208:17e::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 17:35:42 +0000
Received: from DBAEUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28d:cafe::e) by DU2PR04CA0201.outlook.office365.com
 (2603:10a6:10:28d::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21 via Frontend
 Transport; Fri, 19 May 2023 17:35:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT005.mail.protection.outlook.com (100.127.142.81) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.21 via Frontend Transport; Fri, 19 May 2023 17:35:41 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Fri, 19 May 2023 17:35:41 +0000
Received: from cf0b35af8a2a.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 42E12A3F-70EA-4132-82EC-FD09309099F1.1; 
 Fri, 19 May 2023 17:35:35 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cf0b35af8a2a.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 19 May 2023 17:35:35 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PR3PR08MB5786.eurprd08.prod.outlook.com (2603:10a6:102:85::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 17:35:32 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.019; Fri, 19 May 2023
 17:35:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 961bd3a6-f66b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0+UMVRzaD+CWOar4o5v5DYZiy3og6QZZF9IHD80Ncr0=;
 b=IANhoUUxDYhwVZNJEIAbwInsVe3d5bGsBjsKhWnkIJNSOqiAXty4bWADlwZhh3OtvFNG2JnyflseWBYKhVC/WCTAgO+ucsAADtMNUuVLscsocE4xQYslA6kPOetiVMz2CcHTCCgtOq9dHGhvaKUQ55X9qWF/1C03t0Jrld264zU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: ba0ad3a4333b7f6a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d+xur66z5BMlEO2jX883JvQ98AprKS6278IuEPrUlAl83py2o1GDQo6ohKRkWmsAoPA2LPi5KTGWIDIlADj+Ko4Y5C91J+ATohhuk5hMGcfmdEojMrsMrw/VeXTuyZ1718pynyiPQ+/E+tYO2pkIzBj6Y/h5/XYaUTiSECg/wwm4XKSiNr84tEczEaafcnjkfodwwIn1JetXXznpcCdqf3Xdos8SapICKw09f1viDmSUBRcwlLRWXt2OtjmPTRM5aW2urBrL7v77MYl2x3dNOPkzHYWbo+ZfRPFc9zrDsS9X8U1lGi+hNeMeR/NcZ8VG6lEneTqiHJ+fbLqDkqi66A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0+UMVRzaD+CWOar4o5v5DYZiy3og6QZZF9IHD80Ncr0=;
 b=RJFa1g3B9XElNepFE3RD3Bg17UAnQTQcpXIiF6OLeVfPUa/bSzhmw23BskRQ6/EFj5XjeYztOOx5XVM2HTE7KmBJH3zmrMeQLUFFIF6aMSohR54o0xIpDk0vWP3rJTsNIAmVkRu6UtII8V25Mh99p6jZJx5gOdeD2GVKAUeMDHXwJvcVTnZZpsqijN49nFxRacMuRg+eCNeqA72ZqxwFHOF9NZOQznL5LdMrLtfqllYgF7IGtoT5ijPDKvw2grMihLo73n9ZZf/b++wck0LCY/qYPZ/CNtXuDJVY8/qwVBscPO4OoR0pMkqiKn3xo+z1ChWNGJLru1MM/V39YGGjxA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0+UMVRzaD+CWOar4o5v5DYZiy3og6QZZF9IHD80Ncr0=;
 b=IANhoUUxDYhwVZNJEIAbwInsVe3d5bGsBjsKhWnkIJNSOqiAXty4bWADlwZhh3OtvFNG2JnyflseWBYKhVC/WCTAgO+ucsAADtMNUuVLscsocE4xQYslA6kPOetiVMz2CcHTCCgtOq9dHGhvaKUQ55X9qWF/1C03t0Jrld264zU=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZdnJ92qfFeC693kK/O74mILlnGa9gf/oAgAGDsIA=
Date: Fri, 19 May 2023 17:35:30 +0000
Message-ID: <57A30CAD-678F-45F8-93A5-7CB65764E26D@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-6-luca.fancellu@arm.com>
 <d735e539-a8ad-0c14-2eda-22fbad19191f@xen.org>
In-Reply-To: <d735e539-a8ad-0c14-2eda-22fbad19191f@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PR3PR08MB5786:EE_|DBAEUR03FT005:EE_|AM0PR08MB5332:EE_
X-MS-Office365-Filtering-Correlation-Id: 32dfb869-2e5b-4744-9156-08db588f77a5
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <2C87FE54521BF04FB491EFE3317A4315@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5786
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f7e2bb6a-613c-4eb9-afc0-08db588f710c
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 17:35:41.3807
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 32dfb869-2e5b-4744-9156-08db588f77a5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5332


> On 18 May 2023, at 19:27, Julien Grall <julien@xen.org> wrote:
>
> Hi Luca,
>
> On 24/04/2023 07:02, Luca Fancellu wrote:
>> Save/restore context switch for SVE, allocate memory to contain
>> the Z0-31 registers whose length is maximum 2048 bits each and
>> FFR who can be maximum 256 bits, the allocated memory depends on
>> how many bits is the vector length for the domain and how many bits
>> are supported by the platform.
>> Save P0-15 whose length is maximum 256 bits each, in this case the
>> memory used is from the fpregs field in struct vfp_state,
>> because V0-31 are part of Z0-31 and this space would have been
>> unused for SVE domain otherwise.
>> Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
>> creation given the requested vector length and restore it on
>> context switch, save/restore ZCR_EL1 value as well.
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> Changes from v5:
>>  - use XFREE instead of xfree, keep the headers (Julien)
>>  - Avoid math computation for every save/restore, store the computation
>>    in struct vfp_state once (Bertrand)
>>  - protect access to v->domain->arch.sve_vl inside arch_vcpu_create now
>>    that sve_vl is available only on arm64
>> Changes from v4:
>>  - No changes
>> Changes from v3:
>>  - don't use fixed len types when not needed (Jan)
>>  - now VL is an encoded value, decode it before using.
>> Changes from v2:
>>  - No changes
>> Changes from v1:
>>  - No changes
>> Changes from RFC:
>>  - Moved zcr_el2 field introduction in this patch, restore its
>>    content inside sve_restore_state function. (Julien)
>> ---
>>  xen/arch/arm/arm64/sve-asm.S             | 141 ++++++++++++++++++
>>  xen/arch/arm/arm64/sve.c                 |  63 ++++++++++
>>  xen/arch/arm/arm64/vfp.c                 |  79 +++++++------
>>  xen/arch/arm/domain.c                    |   9 ++
>>  xen/arch/arm/include/asm/arm64/sve.h     |  13 +++
>>  xen/arch/arm/include/asm/arm64/sysregs.h |   3 +
>>  xen/arch/arm/include/asm/arm64/vfp.h     |  12 ++
>>  xen/arch/arm/include/asm/domain.h        |   2 +
>>  8 files changed, 288 insertions(+), 34 deletions(-)
>> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
>> index 4d1549344733..8c37d7bc95d5 100644
>> --- a/xen/arch/arm/arm64/sve-asm.S
>> +++ b/xen/arch/arm/arm64/sve-asm.S
>
> Are all the new helpers added in this patch taken from Linux? If so, it would be good to clarify this (again) in the commit message as it helps for the review (I can diff with Linux rather than properly reviewing them).
>
>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>> index 86a5e617bfca..064832b450ff 100644
>> --- a/xen/arch/arm/arm64/sve.c
>> +++ b/xen/arch/arm/arm64/sve.c
>> @@ -5,6 +5,8 @@
>>   * Copyright (C) 2022 ARM Ltd.
>>   */
>>  +#include <xen/sched.h>
>> +#include <xen/sizes.h>
>>  #include <xen/types.h>
>>  #include <asm/arm64/sve.h>
>>  #include <asm/arm64/sysregs.h>
>> @@ -13,6 +15,24 @@
>>  #include <asm/system.h>
>>    extern unsigned int sve_get_hw_vl(void);
>> +extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
>> +extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
>> +                         int restore_ffr);
>
> From the use, it is not entirely what restore_ffr/save_ffr is meant t
byBiZS4gQXJlIHRoZXkgYm9vbD8gSWYgc28sIG1heWJlIHVzZSBib29sPyBBdCBtaW1pbXVtLCB0
aGV5IHByb2JhYmx5IHdhbnQgdG8gYmUgdW5zaWduZWQgaW50Lg0KDQpJIGhhdmUgdG8gc2F5IHRo
YXQgSSB0cnVzdGVkIHRoZSBMaW51eCBpbXBsZW1lbnRhdGlvbiBoZXJlLCBpbiBhcmNoL3JtNjQv
aW5jbHVkZS9hc20vZnBzaW1kLmgsIHRoYXQgdXNlcyBpbnQ6DQoNCmV4dGVybiB2b2lkIHN2ZV9z
YXZlX3N0YXRlKHZvaWQgKnN0YXRlLCB1MzIgKnBmcHNyLCBpbnQgc2F2ZV9mZnIpOw0KZXh0ZXJu
IHZvaWQgc3ZlX2xvYWRfc3RhdGUodm9pZCBjb25zdCAqc3RhdGUsIHUzMiBjb25zdCAqcGZwc3Is
DQppbnQgcmVzdG9yZV9mZnIpOw0KDQpCdXQgaWYgeW91IHByZWZlciBJIGNhbiBwdXQgdW5zaWdu
ZWQgaW50IGluc3RlYWQuDQoNCj4gDQo+PiArDQo+PiArc3RhdGljIGlubGluZSB1bnNpZ25lZCBp
bnQgc3ZlX3pyZWdfY3R4X3NpemUodW5zaWduZWQgaW50IHZsKQ0KPj4gK3sNCj4+ICsgICAgLyoN
Cj4+ICsgICAgICogWjAtMzEgcmVnaXN0ZXJzIHNpemUgaW4gYnl0ZXMgaXMgY29tcHV0ZWQgZnJv
bSBWTCB0aGF0IGlzIGluIGJpdHMsIHNvIFZMDQo+PiArICAgICAqIGluIGJ5dGVzIGlzIFZMLzgu
DQo+PiArICAgICAqLw0KPj4gKyAgICByZXR1cm4gKHZsIC8gOFUpICogMzJVOw0KPj4gK30NCj4+
ICsNCj4+ICtzdGF0aWMgaW5saW5lIHVuc2lnbmVkIGludCBzdmVfZmZycmVnX2N0eF9zaXplKHVu
c2lnbmVkIGludCB2bCkNCj4+ICt7DQo+PiArICAgIC8qIEZGUiByZWdpc3RlciBzaXplIGlzIFZM
LzgsIHdoaWNoIGlzIGluIGJ5dGVzIChWTC84KS84ICovDQo+PiArICAgIHJldHVybiAodmwgLyA2
NFUpOw0KPj4gK30NCj4+ICAgIHJlZ2lzdGVyX3QgY29tcHV0ZV9tYXhfemNyKHZvaWQpDQo+PiAg
ew0KPj4gQEAgLTYwLDMgKzgwLDQ2IEBAIHVuc2lnbmVkIGludCBnZXRfc3lzX3ZsX2xlbih2b2lk
KQ0KPj4gICAgICByZXR1cm4gKChzeXN0ZW1fY3B1aW5mby56Y3I2NC5iaXRzWzBdICYgWkNSX0VM
eF9MRU5fTUFTSykgKyAxVSkgKg0KPj4gICAgICAgICAgICAgIFNWRV9WTF9NVUxUSVBMRV9WQUw7
DQo+PiAgfQ0KPj4gKw0KPj4gK2ludCBzdmVfY29udGV4dF9pbml0KHN0cnVjdCB2Y3B1ICp2KQ0K
Pj4gK3sNCj4+ICsgICAgdW5zaWduZWQgaW50IHN2ZV92bF9iaXRzID0gc3ZlX2RlY29kZV92bCh2
LT5kb21haW4tPmFyY2guc3ZlX3ZsKTsNCj4+ICsgICAgdWludDY0X3QgKmN0eCA9IF94emFsbG9j
KHN2ZV96cmVnX2N0eF9zaXplKHN2ZV92bF9iaXRzKSArDQo+PiArICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBzdmVfZmZycmVnX2N0eF9zaXplKHN2ZV92bF9iaXRzKSwNCj4+ICsgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIEwxX0NBQ0hFX0JZVEVTKTsNCj4+ICsNCj4+ICsgICAgaWYg
KCAhY3R4ICkNCj4+ICsgICAgICAgIHJldHVybiAtRU5PTUVNOw0KPj4gKw0KPj4gKyAgICAvKiBQ
b2ludCB0byB0aGUgZW5kIG9mIFowLVozMSBtZW1vcnksIGp1c3QgYmVmb3JlIEZGUiBtZW1vcnkg
Ki8NCj4gDQo+IE5JVDogSSB3b3VsZCBhZGQgdGhhdCB0aGUgbG9naWMgc2hvdWxkIGJlIGtlcHQg
aW4gc3luYyB3aXRoIHN2ZV9jb250ZXh0X2ZyZWUoKS4gU2FtZS4uLg0KPiANCj4+ICsgICAgdi0+
YXJjaC52ZnAuc3ZlX3pyZWdfY3R4X2VuZCA9IGN0eCArDQo+PiArICAgICAgICAoc3ZlX3pyZWdf
Y3R4X3NpemUoc3ZlX3ZsX2JpdHMpIC8gc2l6ZW9mKHVpbnQ2NF90KSk7DQo+PiArDQo+PiArICAg
IHJldHVybiAwOw0KPj4gK30NCj4+ICsNCj4+ICt2b2lkIHN2ZV9jb250ZXh0X2ZyZWUoc3RydWN0
IHZjcHUgKnYpDQo+PiArew0KPj4gKyAgICB1bnNpZ25lZCBpbnQgc3ZlX3ZsX2JpdHMgPSBzdmVf
ZGVjb2RlX3ZsKHYtPmRvbWFpbi0+YXJjaC5zdmVfdmwpOw0KPj4gKw0KPj4gKyAgICAvKiBQb2lu
dCBiYWNrIHRvIHRoZSBiZWdpbm5pbmcgb2YgWjAtWjMxICsgRkZSIG1lbW9yeSAqLw0KPiANCj4g
Li4uIGhlcmUgKGJ1dCB3aXRoIHN2ZV9jb250ZXh0X2luaXQoKSkuIFNvIGl0IGlzIGNsZWFyZXIg
dGhhdCBpZiB0aGUgbG9naWMgY2hhbmdlIGluIG9uZSBwbGFjZSB0aGVuIGl0IG5lZWRzIHRvIGJl
IGNoYW5nZWQgaW4gdGhlIG90aGVyLg0KDQpTdXJlIEkgd2lsbA0KDQo+IA0KPj4gKyAgICB2LT5h
cmNoLnZmcC5zdmVfenJlZ19jdHhfZW5kIC09DQo+PiArICAgICAgICAoc3ZlX3pyZWdfY3R4X3Np
emUoc3ZlX3ZsX2JpdHMpIC8gc2l6ZW9mKHVpbnQ2NF90KSk7DQo+IA0KPiBGcm9tIG15IHVuZGVy
c3RhbmRpbmcsIHN2ZV9jb250ZXh0X2ZyZWUoKSBjb3VsZCBiZSBjYWxsZWQgd2l0aCBzdmVfenJl
Z19jdHh0X2VuZCBlcXVhbCB0byBOVUxMIChpLmUuIGJlY2F1c2Ugc3ZlX2NvbnRleHRfaW5pdCgp
IGZhaWxlZCkuIFNvIHdvdWxkbid0IHdlIGVuZCB1cCB0byBzdWJzdHJhY3QgdGhlIHZhbHVlIHRv
IE5VTEwgYW5kIHRoZXJlZm9yZS4uLg0KPiANCj4+ICsNCj4+ICsgICAgWEZSRUUodi0+YXJjaC52
ZnAuc3ZlX3pyZWdfY3R4X2VuZCk7DQo+IA0KPiAuLi4gZnJlZSBhIHJhbmRvbSBwb2ludGVyPw0K
DQpUaGFuayB5b3UgZm9yIHNwb3R0aW5nIHRoaXMsIEkgd2lsbCBzdXJyb3VuZCB0aGUgb3BlcmF0
aW9ucyBpbiBzdmVfY29udGV4dF9mcmVlIGJ5OiANCg0KaWYgKCB2LT5hcmNoLnZmcC5zdmVfenJl
Z19jdHhfZW5kICkNCg0KSeKAmW0gYXNzdW1pbmcgdGhlIG1lbW9yeSBzaG91bGQgYmUgemVybyBp
bml0aWFsaXNlZCBmb3IgdGhlIHZmcCBzdHJ1Y3R1cmUsIHBsZWFzZQ0KY29ycmVjdCBtZSBpZiBJ
4oCZbSB3cm9uZy4NCg0KPiANCj4+ICt9DQo+PiArDQo+PiArdm9pZCBzdmVfc2F2ZV9zdGF0ZShz
dHJ1Y3QgdmNwdSAqdikNCj4+ICt7DQo+PiArICAgIHYtPmFyY2guemNyX2VsMSA9IFJFQURfU1lT
UkVHKFpDUl9FTDEpOw0KPj4gKw0KPj4gKyAgICBzdmVfc2F2ZV9jdHgodi0+YXJjaC52ZnAuc3Zl
X3pyZWdfY3R4X2VuZCwgdi0+YXJjaC52ZnAuZnByZWdzLCAxKTsNCj4+ICt9DQo+PiArDQo+PiAr
dm9pZCBzdmVfcmVzdG9yZV9zdGF0ZShzdHJ1Y3QgdmNwdSAqdikNCj4+ICt7DQo+PiArICAgIFdS
SVRFX1NZU1JFRyh2LT5hcmNoLnpjcl9lbDEsIFpDUl9FTDEpOw0KPj4gKyAgICBXUklURV9TWVNS
RUcodi0+YXJjaC56Y3JfZWwyLCBaQ1JfRUwyKTsNCj4gDQo+IEFGQUlVLCB0aGlzIHZhbHVlIHdp
bGwgYmUgdXNlZCBmb3IgdGhlIHJlc3RvcmUgYmVsb3cuIFNvIGRvbid0IHdlIG5lZWQgYW4gaXNi
KCk/DQoNCldlIHJlYWNoZWQgdGhlIGFncmVlbWVudCBvbiB0aGlzIGluIHBhdGNoIDENCg0KPiAN
Cj4+ICsNCj4+ICsgICAgc3ZlX2xvYWRfY3R4KHYtPmFyY2gudmZwLnN2ZV96cmVnX2N0eF9lbmQs
IHYtPmFyY2gudmZwLmZwcmVncywgMSk7DQo+PiArfQ0KPj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNo
L2FybS9hcm02NC92ZnAuYyBiL3hlbi9hcmNoL2FybS9hcm02NC92ZnAuYw0KPj4gaW5kZXggNDc4
ODVlNzZiYWFlLi4yZDBkN2MyZTZkZGIgMTAwNjQ0DQo+PiAtLS0gYS94ZW4vYXJjaC9hcm0vYXJt
NjQvdmZwLmMNCj4+ICsrKyBiL3hlbi9hcmNoL2FybS9hcm02NC92ZnAuYw0KPj4gQEAgLTIsMjkg
KzIsMzUgQEANCj4+ICAjaW5jbHVkZSA8YXNtL3Byb2Nlc3Nvci5oPg0KPj4gICNpbmNsdWRlIDxh
c20vY3B1ZmVhdHVyZS5oPg0KPj4gICNpbmNsdWRlIDxhc20vdmZwLmg+DQo+PiArI2luY2x1ZGUg
PGFzbS9hcm02NC9zdmUuaD4NCj4+ICAgIHZvaWQgdmZwX3NhdmVfc3RhdGUoc3RydWN0IHZjcHUg
KnYpDQo+PiAgew0KPj4gICAgICBpZiAoICFjcHVfaGFzX2ZwICkNCj4+ICAgICAgICAgIHJldHVy
bjsNCj4+ICAtICAgIGFzbSB2b2xhdGlsZSgic3RwIHEwLCBxMSwgWyUxLCAjMTYgKiAwXVxuXHQi
DQo+PiAtICAgICAgICAgICAgICAgICAic3RwIHEyLCBxMywgWyUxLCAjMTYgKiAyXVxuXHQiDQo+
PiAtICAgICAgICAgICAgICAgICAic3RwIHE0LCBxNSwgWyUxLCAjMTYgKiA0XVxuXHQiDQo+PiAt
ICAgICAgICAgICAgICAgICAic3RwIHE2LCBxNywgWyUxLCAjMTYgKiA2XVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAic3RwIHE4LCBxOSwgWyUxLCAjMTYgKiA4XVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHExMCwgcTExLCBbJTEsICMxNiAqIDEwXVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHExMiwgcTEzLCBbJTEsICMxNiAqIDEyXVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHExNCwgcTE1LCBbJTEsICMxNiAqIDE0XVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHExNiwgcTE3LCBbJTEsICMxNiAqIDE2XVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHExOCwgcTE5LCBbJTEsICMxNiAqIDE4XVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHEyMCwgcTIxLCBbJTEsICMxNiAqIDIwXVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHEyMiwgcTIzLCBbJTEsICMxNiAqIDIyXVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHEyNCwgcTI1LCBbJTEsICMxNiAqIDI0XVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHEyNiwgcTI3LCBbJTEsICMxNiAqIDI2XVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHEyOCwgcTI5LCBbJTEsICMxNiAqIDI4XVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICAic3RwIHEzMCwgcTMxLCBbJTEsICMxNiAqIDMwXVxuXHQiDQo+PiAtICAgICAg
ICAgICAgICAgICA6ICI9USIgKCp2LT5hcmNoLnZmcC5mcHJlZ3MpIDogInIiICh2LT5hcmNoLnZm
cC5mcHJlZ3MpKTsNCj4+ICsgICAgaWYgKCBpc19zdmVfZG9tYWluKHYtPmRvbWFpbikgKQ0KPj4g
KyAgICAgICAgc3ZlX3NhdmVfc3RhdGUodik7DQo+PiArICAgIGVsc2UNCj4+ICsgICAgew0KPj4g
KyAgICAgICAgYXNtIHZvbGF0aWxlKCJzdHAgcTAsIHExLCBbJTEsICMxNiAqIDBdXG5cdCINCj4+
ICsgICAgICAgICAgICAgICAgICAgICAic3RwIHEyLCBxMywgWyUxLCAjMTYgKiAyXVxuXHQiDQo+
PiArICAgICAgICAgICAgICAgICAgICAgInN0cCBxNCwgcTUsIFslMSwgIzE2ICogNF1cblx0Ig0K
Pj4gKyAgICAgICAgICAgICAgICAgICAgICJzdHAgcTYsIHE3LCBbJTEsICMxNiAqIDZdXG5cdCIN
Cj4+ICsgICAgICAgICAgICAgICAgICAgICAic3RwIHE4LCBxOSwgWyUxLCAjMTYgKiA4XVxuXHQi
DQo+PiArICAgICAgICAgICAgICAgICAgICAgInN0cCBxMTAsIHExMSwgWyUxLCAjMTYgKiAxMF1c
blx0Ig0KPj4gKyAgICAgICAgICAgICAgICAgICAgICJzdHAgcTEyLCBxMTMsIFslMSwgIzE2ICog
MTJdXG5cdCINCj4+ICsgICAgICAgICAgICAgICAgICAgICAic3RwIHExNCwgcTE1LCBbJTEsICMx
NiAqIDE0XVxuXHQiDQo+PiArICAgICAgICAgICAgICAgICAgICAgInN0cCBxMTYsIHExNywgWyUx
LCAjMTYgKiAxNl1cblx0Ig0KPj4gKyAgICAgICAgICAgICAgICAgICAgICJzdHAgcTE4LCBxMTks
IFslMSwgIzE2ICogMThdXG5cdCINCj4+ICsgICAgICAgICAgICAgICAgICAgICAic3RwIHEyMCwg
cTIxLCBbJTEsICMxNiAqIDIwXVxuXHQiDQo+PiArICAgICAgICAgICAgICAgICAgICAgInN0cCBx
MjIsIHEyMywgWyUxLCAjMTYgKiAyMl1cblx0Ig0KPj4gKyAgICAgICAgICAgICAgICAgICAgICJz
dHAgcTI0LCBxMjUsIFslMSwgIzE2ICogMjRdXG5cdCINCj4+ICsgICAgICAgICAgICAgICAgICAg
ICAic3RwIHEyNiwgcTI3LCBbJTEsICMxNiAqIDI2XVxuXHQiDQo+PiArICAgICAgICAgICAgICAg
ICAgICAgInN0cCBxMjgsIHEyOSwgWyUxLCAjMTYgKiAyOF1cblx0Ig0KPj4gKyAgICAgICAgICAg
ICAgICAgICAgICJzdHAgcTMwLCBxMzEsIFslMSwgIzE2ICogMzBdXG5cdCINCj4+ICsgICAgICAg
ICAgICAgICAgICAgICA6ICI9USIgKCp2LT5hcmNoLnZmcC5mcHJlZ3MpIDogInIiICh2LT5hcmNo
LnZmcC5mcHJlZ3MpKTsNCj4+ICsgICAgfQ0KPj4gICAgICAgIHYtPmFyY2gudmZwLmZwc3IgPSBS
RUFEX1NZU1JFRyhGUFNSKTsNCj4+ICAgICAgdi0+YXJjaC52ZnAuZnBjciA9IFJFQURfU1lTUkVH
KEZQQ1IpOw0KPj4gQEAgLTM3LDIzICs0MywyOCBAQCB2b2lkIHZmcF9yZXN0b3JlX3N0YXRlKHN0
cnVjdCB2Y3B1ICp2KQ0KPj4gICAgICBpZiAoICFjcHVfaGFzX2ZwICkNCj4+ICAgICAgICAgIHJl
dHVybjsNCj4+ICAtICAgIGFzbSB2b2xhdGlsZSgibGRwIHEwLCBxMSwgWyUxLCAjMTYgKiAwXVxu
XHQiDQo+PiAtICAgICAgICAgICAgICAgICAibGRwIHEyLCBxMywgWyUxLCAjMTYgKiAyXVxuXHQi
DQo+PiAtICAgICAgICAgICAgICAgICAibGRwIHE0LCBxNSwgWyUxLCAjMTYgKiA0XVxuXHQiDQo+
PiAtICAgICAgICAgICAgICAgICAibGRwIHE2LCBxNywgWyUxLCAjMTYgKiA2XVxuXHQiDQo+PiAt
ICAgICAgICAgICAgICAgICAibGRwIHE4LCBxOSwgWyUxLCAjMTYgKiA4XVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHExMCwgcTExLCBbJTEsICMxNiAqIDEwXVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHExMiwgcTEzLCBbJTEsICMxNiAqIDEyXVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHExNCwgcTE1LCBbJTEsICMxNiAqIDE0XVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHExNiwgcTE3LCBbJTEsICMxNiAqIDE2XVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHExOCwgcTE5LCBbJTEsICMxNiAqIDE4XVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHEyMCwgcTIxLCBbJTEsICMxNiAqIDIwXVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHEyMiwgcTIzLCBbJTEsICMxNiAqIDIyXVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHEyNCwgcTI1LCBbJTEsICMxNiAqIDI0XVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHEyNiwgcTI3LCBbJTEsICMxNiAqIDI2XVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHEyOCwgcTI5LCBbJTEsICMxNiAqIDI4XVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICAibGRwIHEzMCwgcTMxLCBbJTEsICMxNiAqIDMwXVxuXHQiDQo+PiAtICAg
ICAgICAgICAgICAgICA6IDogIlEiICgqdi0+YXJjaC52ZnAuZnByZWdzKSwgInIiICh2LT5hcmNo
LnZmcC5mcHJlZ3MpKTsNCj4+ICsgICAgaWYgKCBpc19zdmVfZG9tYWluKHYtPmRvbWFpbikgKQ0K
Pj4gKyAgICAgICAgc3ZlX3Jlc3RvcmVfc3RhdGUodik7DQo+PiArICAgIGVsc2UNCj4+ICsgICAg
ew0KPj4gKyAgICAgICAgYXNtIHZvbGF0aWxlKCJsZHAgcTAsIHExLCBbJTEsICMxNiAqIDBdXG5c
dCINCj4+ICsgICAgICAgICAgICAgICAgICAgICAibGRwIHEyLCBxMywgWyUxLCAjMTYgKiAyXVxu
XHQiDQo+PiArICAgICAgICAgICAgICAgICAgICAgImxkcCBxNCwgcTUsIFslMSwgIzE2ICogNF1c
blx0Ig0KPj4gKyAgICAgICAgICAgICAgICAgICAgICJsZHAgcTYsIHE3LCBbJTEsICMxNiAqIDZd
XG5cdCINCj4+ICsgICAgICAgICAgICAgICAgICAgICAibGRwIHE4LCBxOSwgWyUxLCAjMTYgKiA4
XVxuXHQiDQo+PiArICAgICAgICAgICAgICAgICAgICAgImxkcCBxMTAsIHExMSwgWyUxLCAjMTYg
KiAxMF1cblx0Ig0KPj4gKyAgICAgICAgICAgICAgICAgICAgICJsZHAgcTEyLCBxMTMsIFslMSwg
IzE2ICogMTJdXG5cdCINCj4+ICsgICAgICAgICAgICAgICAgICAgICAibGRwIHExNCwgcTE1LCBb
JTEsICMxNiAqIDE0XVxuXHQiDQo+PiArICAgICAgICAgICAgICAgICAgICAgImxkcCBxMTYsIHEx
NywgWyUxLCAjMTYgKiAxNl1cblx0Ig0KPj4gKyAgICAgICAgICAgICAgICAgICAgICJsZHAgcTE4
LCBxMTksIFslMSwgIzE2ICogMThdXG5cdCINCj4+ICsgICAgICAgICAgICAgICAgICAgICAibGRw
IHEyMCwgcTIxLCBbJTEsICMxNiAqIDIwXVxuXHQiDQo+PiArICAgICAgICAgICAgICAgICAgICAg
ImxkcCBxMjIsIHEyMywgWyUxLCAjMTYgKiAyMl1cblx0Ig0KPj4gKyAgICAgICAgICAgICAgICAg
ICAgICJsZHAgcTI0LCBxMjUsIFslMSwgIzE2ICogMjRdXG5cdCINCj4+ICsgICAgICAgICAgICAg
ICAgICAgICAibGRwIHEyNiwgcTI3LCBbJTEsICMxNiAqIDI2XVxuXHQiDQo+PiArICAgICAgICAg
ICAgICAgICAgICAgImxkcCBxMjgsIHEyOSwgWyUxLCAjMTYgKiAyOF1cblx0Ig0KPj4gKyAgICAg
ICAgICAgICAgICAgICAgICJsZHAgcTMwLCBxMzEsIFslMSwgIzE2ICogMzBdXG5cdCINCj4+ICsg
ICAgICAgICAgICAgICAgICAgICA6IDogIlEiICgqdi0+YXJjaC52ZnAuZnByZWdzKSwgInIiICh2
LT5hcmNoLnZmcC5mcHJlZ3MpKTsNCj4+ICsgICAgfQ0KPj4gICAgICAgIFdSSVRFX1NZU1JFRyh2
LT5hcmNoLnZmcC5mcHNyLCBGUFNSKTsNCj4+ICAgICAgV1JJVEVfU1lTUkVHKHYtPmFyY2gudmZw
LmZwY3IsIEZQQ1IpOw0KPj4gZGlmZiAtLWdpdCBhL3hlbi9hcmNoL2FybS9kb21haW4uYyBiL3hl
bi9hcmNoL2FybS9kb21haW4uYw0KPj4gaW5kZXggMTQzMzU5ZDBmMzEzLi4yNGM3MjJhNGExMWUg
MTAwNjQ0DQo+PiAtLS0gYS94ZW4vYXJjaC9hcm0vZG9tYWluLmMNCj4+ICsrKyBiL3hlbi9hcmNo
L2FybS9kb21haW4uYw0KPj4gQEAgLTU1Miw3ICs1NTIsMTQgQEAgaW50IGFyY2hfdmNwdV9jcmVh
dGUoc3RydWN0IHZjcHUgKnYpDQo+PiAgICAgICAgdi0+YXJjaC5jcHRyX2VsMiA9IGdldF9kZWZh
dWx0X2NwdHJfZmxhZ3MoKTsNCj4+ICAgICAgaWYgKCBpc19zdmVfZG9tYWluKHYtPmRvbWFpbikg
KQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBpZiAoIChyYyA9IHN2ZV9jb250ZXh0X2luaXQodikp
ICE9IDAgKQ0KPj4gKyAgICAgICAgICAgIGdvdG8gZmFpbDsNCj4+ICAgICAgICAgIHYtPmFyY2gu
Y3B0cl9lbDIgJj0gfkhDUFRSX0NQKDgpOw0KPj4gKyNpZmRlZiBDT05GSUdfQVJNNjRfU1ZFDQo+
IA0KPiBUaGlzICNpZmRlZiByZWFkcyBhIGJpdCBvZGQgdG8gbWUgYmVjYXVzZSB5b3UgYXJlIHBy
b3RlY3Rpbmcgdi0+YXJjaC56Y3JfZWwyIGJ1dCBub3QgdGhlIHJlc3QuIFRoaXMgaXMgb25lIG9m
IHRoZSBjYXNlIHdoZXJlIEkgd291bGQgc3Vycm91bmQgdGhlIGZ1bGwgaWYgd2l0aCB0aGUgI2lm
ZGVmIGJlY2F1c2UgaXQgbWFrZXMgY2xlYXJlciB0aGF0IHRoZXJlIGlzIG5vIHdheSB0aGUgcmVz
dCBvZiB0aGUgY29kZSBjYW4gYmUgcmVhY2hlZCBpZiAhQ09ORklHX0FSTTY0X1NWRS4NCj4gDQo+
IFRoYXQgc2FpZCwgSSB3b3VsZCBhY3R1YWxseSBwcmVmZXIgaWYuLi4NCj4gDQo+PiArICAgICAg
ICB2LT5hcmNoLnpjcl9lbDIgPSB2bF90b196Y3Ioc3ZlX2RlY29kZV92bCh2LT5kb21haW4tPmFy
Y2guc3ZlX3ZsKSk7DQo+IA0KPiAuLi4gdGhpcyBsaW5lIGlzIG1vdmVkIGluIHN2ZV9jb250ZXh0
X2luaXQoKSBiZWNhdXNlIHRoaXMgaXMgcmVsYXRlZCB0byB0aGUgU1ZFIGNvbnRleHQuDQoNClN1
cmUgSSB3aWxsIGRvIHRoYXQsIHNvIGlmIEnigJl2ZSB1bmRlcnN0b29kIGNvcnJlY3RseSwgeW91
IHdhbnQgbWUgdG8ga2VlcCB0aGlzOg0KDQoNCnYtPmFyY2guY3B0cl9lbDIgPSBnZXRfZGVmYXVs
dF9jcHRyX2ZsYWdzKCk7DQppZiAoIGlzX3N2ZV9kb21haW4odi0+ZG9tYWluKSApDQp7DQogICAg
aWYgKCAocmMgPSBzdmVfY29udGV4dF9pbml0KHYpKSAhPSAwICkNCiAgICAgICAgZ290byBmYWls
Ow0KICAgIHYtPmFyY2guY3B0cl9lbDIgJj0gfkhDUFRSX0NQKDgpOw0KfQ0KDQpXaXRob3V0ICNp
ZmRlZiBDT05GSUdfQVJNNjRfU1ZFDQoNCj4gDQo+PiArI2VuZGlmDQo+PiArICAgIH0NCj4+ICAg
ICAgICB2LT5hcmNoLmhjcl9lbDIgPSBnZXRfZGVmYXVsdF9oY3JfZmxhZ3MoKTsNCj4+ICBAQCAt
NTgyLDYgKzU4OSw4IEBAIGZhaWw6DQo+PiAgICB2b2lkIGFyY2hfdmNwdV9kZXN0cm95KHN0cnVj
dCB2Y3B1ICp2KQ0KPj4gIHsNCj4+ICsgICAgaWYgKCBpc19zdmVfZG9tYWluKHYtPmRvbWFpbikg
KQ0KPj4gKyAgICAgICAgc3ZlX2NvbnRleHRfZnJlZSh2KTsNCj4+ICAgICAgdmNwdV90aW1lcl9k
ZXN0cm95KHYpOw0KPj4gICAgICB2Y3B1X3ZnaWNfZnJlZSh2KTsNCj4+ICAgICAgZnJlZV94ZW5o
ZWFwX3BhZ2VzKHYtPmFyY2guc3RhY2ssIFNUQUNLX09SREVSKTsNCj4+IGRpZmYgLS1naXQgYS94
ZW4vYXJjaC9hcm0vaW5jbHVkZS9hc20vYXJtNjQvc3ZlLmggYi94ZW4vYXJjaC9hcm0vaW5jbHVk
ZS9hc20vYXJtNjQvc3ZlLmgNCj4+IGluZGV4IDczMGMzZmI1YTljOC4uNTgyNDA1ZGZkZjZhIDEw
MDY0NA0KPj4gLS0tIGEveGVuL2FyY2gvYXJtL2luY2x1ZGUvYXNtL2FybTY0L3N2ZS5oDQo+PiAr
KysgYi94ZW4vYXJjaC9hcm0vaW5jbHVkZS9hc20vYXJtNjQvc3ZlLmgNCj4+IEBAIC0yNiw2ICsy
NiwxMCBAQCBzdGF0aWMgaW5saW5lIHVuc2lnbmVkIGludCBzdmVfZGVjb2RlX3ZsKHVuc2lnbmVk
IGludCBzdmVfdmwpDQo+PiAgcmVnaXN0ZXJfdCBjb21wdXRlX21heF96Y3Iodm9pZCk7DQo+PiAg
cmVnaXN0ZXJfdCB2bF90b196Y3IodW5zaWduZWQgaW50IHZsKTsNCj4+ICB1bnNpZ25lZCBpbnQg
Z2V0X3N5c192bF9sZW4odm9pZCk7DQo+PiAraW50IHN2ZV9jb250ZXh0X2luaXQoc3RydWN0IHZj
cHUgKnYpOw0KPj4gK3ZvaWQgc3ZlX2NvbnRleHRfZnJlZShzdHJ1Y3QgdmNwdSAqdik7DQo+PiAr
dm9pZCBzdmVfc2F2ZV9zdGF0ZShzdHJ1Y3QgdmNwdSAqdik7DQo+PiArdm9pZCBzdmVfcmVzdG9y
ZV9zdGF0ZShzdHJ1Y3QgdmNwdSAqdik7DQo+PiAgICAjZWxzZSAvKiAhQ09ORklHX0FSTTY0X1NW
RSAqLw0KPj4gIEBAIC00Niw2ICs1MCwxNSBAQCBzdGF0aWMgaW5saW5lIHVuc2lnbmVkIGludCBn
ZXRfc3lzX3ZsX2xlbih2b2lkKQ0KPj4gICAgICByZXR1cm4gMDsNCj4+ICB9DQo+PiAgK3N0YXRp
YyBpbmxpbmUgaW50IHN2ZV9jb250ZXh0X2luaXQoc3RydWN0IHZjcHUgKnYpDQo+PiArew0KPj4g
KyAgICByZXR1cm4gMDsNCj4+ICt9DQo+PiArDQo+PiArc3RhdGljIGlubGluZSB2b2lkIHN2ZV9j
b250ZXh0X2ZyZWUoc3RydWN0IHZjcHUgKnYpIHt9DQo+PiArc3RhdGljIGlubGluZSB2b2lkIHN2
ZV9zYXZlX3N0YXRlKHN0cnVjdCB2Y3B1ICp2KSB7fQ0KPj4gK3N0YXRpYyBpbmxpbmUgdm9pZCBz
dmVfcmVzdG9yZV9zdGF0ZShzdHJ1Y3QgdmNwdSAqdikge30NCj4+ICsNCj4+ICAjZW5kaWYgLyog
Q09ORklHX0FSTTY0X1NWRSAqLw0KPj4gICAgI2VuZGlmIC8qIF9BUk1fQVJNNjRfU1ZFX0ggKi8N
Cj4+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vaW5jbHVkZS9hc20vYXJtNjQvc3lzcmVncy5o
IGIveGVuL2FyY2gvYXJtL2luY2x1ZGUvYXNtL2FybTY0L3N5c3JlZ3MuaA0KPj4gaW5kZXggNGNh
YmI5ZWI0ZDVlLi4zZmRlYjlkOGNkZWYgMTAwNjQ0DQo+PiAtLS0gYS94ZW4vYXJjaC9hcm0vaW5j
bHVkZS9hc20vYXJtNjQvc3lzcmVncy5oDQo+PiArKysgYi94ZW4vYXJjaC9hcm0vaW5jbHVkZS9h
c20vYXJtNjQvc3lzcmVncy5oDQo+PiBAQCAtODgsNiArODgsOSBAQA0KPj4gICNpZm5kZWYgSURf
QUE2NElTQVIyX0VMMQ0KPj4gICNkZWZpbmUgSURfQUE2NElTQVIyX0VMMSAgICAgICAgICAgIFMz
XzBfQzBfQzZfMg0KPj4gICNlbmRpZg0KPj4gKyNpZm5kZWYgWkNSX0VMMQ0KPj4gKyNkZWZpbmUg
WkNSX0VMMSAgICAgICAgICAgICAgICAgICAgIFMzXzBfQzFfQzJfMA0KPj4gKyNlbmRpZg0KPj4g
ICAgLyogSUQgcmVnaXN0ZXJzIChpbXBvcnRlZCBmcm9tIGFybTY0L2luY2x1ZGUvYXNtL3N5c3Jl
Zy5oIGluIExpbnV4KSAqLw0KPj4gIGRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vaW5jbHVkZS9h
c20vYXJtNjQvdmZwLmggYi94ZW4vYXJjaC9hcm0vaW5jbHVkZS9hc20vYXJtNjQvdmZwLmgNCj4+
IGluZGV4IGU2ZThjMzYzYmMxNi4uNGFhMzcxZTg1ZDI2IDEwMDY0NA0KPj4gLS0tIGEveGVuL2Fy
Y2gvYXJtL2luY2x1ZGUvYXNtL2FybTY0L3ZmcC5oDQo+PiArKysgYi94ZW4vYXJjaC9hcm0vaW5j
bHVkZS9hc20vYXJtNjQvdmZwLmgNCj4+IEBAIC02LDcgKzYsMTkgQEANCj4+ICAgIHN0cnVjdCB2
ZnBfc3RhdGUNCj4+ICB7DQo+PiArICAgIC8qDQo+PiArICAgICAqIFdoZW4gU1ZFIGlzIGVuYWJs
ZWQgZm9yIHRoZSBndWVzdCwgZnByZWdzIG1lbW9yeSB3aWxsIGJlIHVzZWQgdG8NCj4+ICsgICAg
ICogc2F2ZS9yZXN0b3JlIFAwLVAxNSByZWdpc3RlcnMsIG90aGVyd2lzZSBpdCB3aWxsIGJlIHVz
ZWQgZm9yIHRoZSBWMC1WMzENCj4+ICsgICAgICogcmVnaXN0ZXJzLg0KPj4gKyAgICAgKi8NCj4+
ICAgICAgdWludDY0X3QgZnByZWdzWzY0XSBfX3ZmcF9hbGlnbmVkOw0KPj4gKyAgICAvKg0KPj4g
KyAgICAgKiBXaGVuIFNWRSBpcyBlbmFibGVkIGZvciB0aGUgZ3Vlc3QsIHN2ZV96cmVnX2N0eF9l
bmQgcG9pbnRzIHRvIG1lbW9yeQ0KPj4gKyAgICAgKiB3aGVyZSBaMC1aMzEgcmVnaXN0ZXJzIGFu
ZCBGRlIgY2FuIGJlIHNhdmVkL3Jlc3RvcmVkLCBpdCBwb2ludHMgYXQgdGhlDQo+PiArICAgICAq
IGVuZCBvZiB0aGUgWjAtWjMxIHNwYWNlIGFuZCBhdCB0aGUgYmVnaW5uaW5nIG9mIHRoZSBGRlIg
c3BhY2UsIGl0J3MgZG9uZQ0KPj4gKyAgICAgKiBsaWtlIHRoYXQgdG8gZWFzZSB0aGUgc2F2ZS9y
ZXN0b3JlIGFzc2VtYmx5IG9wZXJhdGlvbnMuDQo+PiArICAgICAqLw0KPj4gKyAgICB1aW50NjRf
dCAqc3ZlX3pyZWdfY3R4X2VuZDsNCj4+ICAgICAgcmVnaXN0ZXJfdCBmcGNyOw0KPj4gICAgICBy
ZWdpc3Rlcl90IGZwZXhjMzJfZWwyOw0KPj4gICAgICByZWdpc3Rlcl90IGZwc3I7DQo+PiBkaWZm
IC0tZ2l0IGEveGVuL2FyY2gvYXJtL2luY2x1ZGUvYXNtL2RvbWFpbi5oIGIveGVuL2FyY2gvYXJt
L2luY2x1ZGUvYXNtL2RvbWFpbi5oDQo+PiBpbmRleCAzMzFkYTBmM2JjYzMuLjgxNDY1MmQ5MjU2
OCAxMDA2NDQNCj4+IC0tLSBhL3hlbi9hcmNoL2FybS9pbmNsdWRlL2FzbS9kb21haW4uaA0KPj4g
KysrIGIveGVuL2FyY2gvYXJtL2luY2x1ZGUvYXNtL2RvbWFpbi5oDQo+PiBAQCAtMTk1LDYgKzE5
NSw4IEBAIHN0cnVjdCBhcmNoX3ZjcHUNCj4+ICAgICAgcmVnaXN0ZXJfdCB0cGlkcnJvX2VsMDsN
Cj4+ICAgICAgICAvKiBIWVAgY29uZmlndXJhdGlvbiAqLw0KPj4gKyAgICByZWdpc3Rlcl90IHpj
cl9lbDE7DQo+PiArICAgIHJlZ2lzdGVyX3QgemNyX2VsMjsNCj4+ICAgICAgcmVnaXN0ZXJfdCBj
cHRyX2VsMjsNCj4+ICAgICAgcmVnaXN0ZXJfdCBoY3JfZWwyOw0KPj4gICAgICByZWdpc3Rlcl90
IG1kY3JfZWwyOw0KPiANCj4gQ2hlZXJzLA0KPiANCj4gLS0gDQo+IEp1bGllbiBHcmFsbA0KDQoN
Cg==


From xen-devel-bounces@lists.xenproject.org Fri May 19 17:53:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 17:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537226.836162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q04Hg-00067Q-Sv; Fri, 19 May 2023 17:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537226.836162; Fri, 19 May 2023 17:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q04Hg-00067J-Pn; Fri, 19 May 2023 17:52:56 +0000
Received: by outflank-mailman (input) for mailman id 537226;
 Fri, 19 May 2023 17:52:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q04Hf-00067D-So
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 17:52:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q04Hf-0001x8-Cc; Fri, 19 May 2023 17:52:55 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=[192.168.7.127]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q04Hf-0003UG-6k; Fri, 19 May 2023 17:52:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=OFAmjKXftJ4RBA1ryqRaIdThFbC4gw18/FRgeaN8hg4=; b=tVtXrYf8MtPH2icEtvRD9C6h5I
	xuHdIdefsPtcytILybeE8ndNdHJk/BPBK6iEmUuY6cG50n4crkBnQa6qYJy2fZ2oGwcznZFxHseEr
	N6H5xQbP4MRsjpb+HU7+m+wRF9wXbgFRicEcpigL5wjybxuWfMVvHTVVaGtGR9RBlgIE=;
Message-ID: <df689411-07dd-30fe-2ded-4c68b0313a81@xen.org>
Date: Fri, 19 May 2023 18:52:53 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-6-luca.fancellu@arm.com>
 <d735e539-a8ad-0c14-2eda-22fbad19191f@xen.org>
 <57A30CAD-678F-45F8-93A5-7CB65764E26D@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <57A30CAD-678F-45F8-93A5-7CB65764E26D@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 19/05/2023 18:35, Luca Fancellu wrote:
> 
> 
>> On 18 May 2023, at 19:27, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> On 24/04/2023 07:02, Luca Fancellu wrote:
>>> Save/restore context switch for SVE: allocate memory to contain
>>> the Z0-31 registers, whose length is at most 2048 bits each, and
>>> FFR, which can be at most 256 bits; the amount of memory allocated
>>> depends on the vector length for the domain and on how many bits
>>> are supported by the platform.
>>> Save P0-15, whose length is at most 256 bits each; in this case the
>>> memory used is the fpregs field in struct vfp_state,
>>> because V0-31 are part of Z0-31 and this space would otherwise have
>>> been unused for an SVE domain.
>>> Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
>>> creation given the requested vector length and restore it on
>>> context switch; save/restore the ZCR_EL1 value as well.
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>> ---
>>> Changes from v5:
>>>   - use XFREE instead of xfree, keep the headers (Julien)
>>>   - Avoid math computation for every save/restore, store the computation
>>>     in struct vfp_state once (Bertrand)
>>>   - protect access to v->domain->arch.sve_vl inside arch_vcpu_create now
>>>     that sve_vl is available only on arm64
>>> Changes from v4:
>>>   - No changes
>>> Changes from v3:
>>>   - don't use fixed len types when not needed (Jan)
>>>   - now VL is an encoded value, decode it before using.
>>> Changes from v2:
>>>   - No changes
>>> Changes from v1:
>>>   - No changes
>>> Changes from RFC:
>>>   - Moved zcr_el2 field introduction in this patch, restore its
>>>     content inside sve_restore_state function. (Julien)
>>> ---
>>>   xen/arch/arm/arm64/sve-asm.S             | 141 +++++++++++++++++++++++
>>>   xen/arch/arm/arm64/sve.c                 |  63 ++++++++++
>>>   xen/arch/arm/arm64/vfp.c                 |  79 +++++++------
>>>   xen/arch/arm/domain.c                    |   9 ++
>>>   xen/arch/arm/include/asm/arm64/sve.h     |  13 +++
>>>   xen/arch/arm/include/asm/arm64/sysregs.h |   3 +
>>>   xen/arch/arm/include/asm/arm64/vfp.h     |  12 ++
>>>   xen/arch/arm/include/asm/domain.h        |   2 +
>>>   8 files changed, 288 insertions(+), 34 deletions(-)
>>> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
>>> index 4d1549344733..8c37d7bc95d5 100644
>>> --- a/xen/arch/arm/arm64/sve-asm.S
>>> +++ b/xen/arch/arm/arm64/sve-asm.S
>>
>> Are all the new helpers added in this patch taken from Linux? If so, it would be good to clarify this (again) in the commit message, as it helps with the review (I can diff against Linux rather than reviewing them from scratch).
>>
>>> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
>>> index 86a5e617bfca..064832b450ff 100644
>>> --- a/xen/arch/arm/arm64/sve.c
>>> +++ b/xen/arch/arm/arm64/sve.c
>>> @@ -5,6 +5,8 @@
>>>    * Copyright (C) 2022 ARM Ltd.
>>>    */
>>>   +#include <xen/sched.h>
>>> +#include <xen/sizes.h>
>>>   #include <xen/types.h>
>>>   #include <asm/arm64/sve.h>
>>>   #include <asm/arm64/sysregs.h>
>>> @@ -13,6 +15,24 @@
>>>   #include <asm/system.h>
>>>     extern unsigned int sve_get_hw_vl(void);
>>> +extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
>>> +extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
>>> +                         int restore_ffr);
>>
>>  From the use, it is not entirely clear what restore_ffr/save_ffr is meant to be. Are they bool? If so, maybe use bool? At minimum, they probably want to be unsigned int.
> 
> I have to say that I trusted the Linux implementation here, in arch/arm64/include/asm/fpsimd.h, which uses int:

Ah, so this is a verbatim copy of the Linux code? If so...

> 
> extern void sve_save_state(void *state, u32 *pfpsr, int save_ffr);
> extern void sve_load_state(void const *state, u32 const *pfpsr,
> int restore_ffr);
> 
> But if you prefer I can put unsigned int instead.

... keep it as-is (Linux seems to like using 'int' for bool) but I would 
suggest to document the expected values.

> 
>>
>>> +
>>> +static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
>>> +{
>>> +    /*
>>> +     * Z0-31 registers size in bytes is computed from VL that is in bits, so VL
>>> +     * in bytes is VL/8.
>>> +     */
>>> +    return (vl / 8U) * 32U;
>>> +}
>>> +
>>> +static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
>>> +{
>>> +    /* FFR register size is VL/8, which is in bytes (VL/8)/8 */
>>> +    return (vl / 64U);
>>> +}
>>>     register_t compute_max_zcr(void)
>>>   {
>>> @@ -60,3 +80,46 @@ unsigned int get_sys_vl_len(void)
>>>       return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
>>>               SVE_VL_MULTIPLE_VAL;
>>>   }
>>> +
>>> +int sve_context_init(struct vcpu *v)
>>> +{
>>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>>> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
>>> +                             sve_ffrreg_ctx_size(sve_vl_bits),
>>> +                             L1_CACHE_BYTES);
>>> +
>>> +    if ( !ctx )
>>> +        return -ENOMEM;
>>> +
>>> +    /* Point to the end of Z0-Z31 memory, just before FFR memory */
>>
>> NIT: I would add that the logic should be kept in sync with sve_context_free(). Same...
>>
>>> +    v->arch.vfp.sve_zreg_ctx_end = ctx +
>>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +void sve_context_free(struct vcpu *v)
>>> +{
>>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>>> +
>>> +    /* Point back to the beginning of Z0-Z31 + FFR memory */
>>
>> ... here (but with sve_context_init()). So it is clearer that if the logic changes in one place then it needs to be changed in the other.
> 
> Sure I will
> 
>>
>>> +    v->arch.vfp.sve_zreg_ctx_end -=
>>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>>
>>  From my understanding, sve_context_free() could be called with sve_zreg_ctx_end equal to NULL (i.e. because sve_context_init() failed). So wouldn't we end up subtracting the value from NULL and therefore...
>>
>>> +
>>> +    XFREE(v->arch.vfp.sve_zreg_ctx_end);
>>
>> ... free a random pointer?
> 
> Thank you for spotting this, I will surround the operations in sve_context_free by:
> 
> if ( v->arch.vfp.sve_zreg_ctx_end )

Rather than surrounding, how about adding:

if ( !v->arch.vfp...)
   return;

This would avoid an extra indentation.

> 
> I’m assuming the memory should be zero initialised for the vfp structure, please
> correct me if I’m wrong.

This is part of the struct vcpu. So yes (see alloc_vcpu_struct()).

[...]

>>> index 143359d0f313..24c722a4a11e 100644
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -552,7 +552,14 @@ int arch_vcpu_create(struct vcpu *v)
>>>         v->arch.cptr_el2 = get_default_cptr_flags();
>>>       if ( is_sve_domain(v->domain) )
>>> +    {
>>> +        if ( (rc = sve_context_init(v)) != 0 )
>>> +            goto fail;
>>>           v->arch.cptr_el2 &= ~HCPTR_CP(8);
>>> +#ifdef CONFIG_ARM64_SVE
>>
>> This #ifdef reads a bit odd to me because you are protecting v->arch.zcr_el2 but not the rest. This is one of the cases where I would surround the full if with the #ifdef, because it makes it clearer that there is no way the rest of the code can be reached if !CONFIG_ARM64_SVE.
>>
>> That said, I would actually prefer if...
>>
>>> +        v->arch.zcr_el2 = vl_to_zcr(sve_decode_vl(v->domain->arch.sve_vl));
>>
>> ... this line were moved into sve_context_init(), because this is related to the SVE context.
> 
> Sure I will do that, so if I’ve understood correctly, you want me to keep this:
> 
> 
> v->arch.cptr_el2 = get_default_cptr_flags();
> if ( is_sve_domain(v->domain) )
> {
>      if ( (rc = sve_context_init(v)) != 0 )
>          goto fail;
>      v->arch.cptr_el2 &= ~HCPTR_CP(8);
> }
> 
> Without #ifdef CONFIG_ARM64_SVE

Yes please.

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 19 19:01:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 19:01:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537240.836191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q05MJ-0005Gs-1C; Fri, 19 May 2023 19:01:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537240.836191; Fri, 19 May 2023 19:01:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q05MI-0005Gl-SP; Fri, 19 May 2023 19:01:46 +0000
Received: by outflank-mailman (input) for mailman id 537240;
 Fri, 19 May 2023 19:01:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q05MI-0005Gb-72; Fri, 19 May 2023 19:01:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q05MH-0003Uj-N9; Fri, 19 May 2023 19:01:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q05MH-0005oL-GS; Fri, 19 May 2023 19:01:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q05MH-0000sf-Fu; Fri, 19 May 2023 19:01:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=+hUwEAA7cOXbwHgcWPQitjvEVoY7UObWS804vOfluHU=; b=gwNtBrhtcCjj4rABc6LIGYaUV/
	MqGrWOeW9ziU5l/qlLaYzG+Nm5Su5asGwn6IhCFlQwyKiu6I/pHT13595HmoC4f7qpcLt7jY1ojl7
	gLs6H1RvKmTxxmDgv1wwSS6XpiHkhwWnz79eGAHT0b5XSZBxXCgKatLJbUCeprulDrc4=;
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-amd64-xsm
Message-Id: <E1q05MH-0000sf-Fu@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 19:01:45 +0000

branch xen-unstable
xenbranch xen-unstable
job build-amd64-xsm
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180758/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-amd64-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-amd64-xsm.xen-build --summary-out=tmp/180758.bisection-summary --basis-template=180691 --blessings=real,real-bisect,real-retry qemu-mainline build-amd64-xsm xen-build
Searching for failure / basis pass:
 180742 fail [host=himrod2] / 180691 ok.
Failure / basis pass flights: 180742 / 180691
(tree with no url: minios)
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
Basis pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e-0abfb0be6cf78a8e962383e85cec57851ddae5bc git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 https://gitlab.com/qemu-project/qemu.git#6972ef1440a9d685482d78672620a7482f2bd09a-146f515110e86aefe3bc2e8eb581ab724614060f git://xenbits.xen.org/osstest/seabios.git#be7e899350caa7b74d8271a34264c3b4aef25ab0-be7e899350caa7b74d8271a34264c3b4aef25ab0 git://xenbits.xen.org/xen.git#8f9c8274a4e3e860bd777269cb2c91971e9fa69e-42abf5b9c53eb1b1a902002fcda68708234152c3
Loaded 34991 nodes in revision graph
Searching for test results:
 180702 [host=himrod0]
 180691 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
 180705 [host=himrod0]
 180707 [host=himrod0]
 180708 [host=himrod0]
 180709 [host=himrod0]
 180710 [host=himrod0]
 180711 [host=himrod0]
 180712 [host=himrod0]
 180713 [host=himrod0]
 180715 [host=himrod0]
 180716 [host=himrod0]
 180717 [host=himrod0]
 180718 [host=himrod0]
 180704 [host=himrod0]
 180720 [host=himrod0]
 180722 [host=himrod0]
 180723 [host=himrod0]
 180724 [host=himrod0]
 180727 [host=himrod0]
 180728 [host=himrod0]
 180730 [host=himrod0]
 180731 [host=himrod0]
 180733 [host=himrod0]
 180736 [host=himrod0]
 180737 [host=himrod0]
 180721 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180739 [host=himrod0]
 180741 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
 180743 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180744 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 721fa5e563e8174d6c33e1e413cebb5442625932 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180745 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dee01b827ffc26577217697074052b8b7f4770dc be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180747 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f6652ebc2717b28c5788e6364c6dab09bb0ac44 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180748 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd48b477e90c3200b970545d1953e12e8c1431db be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180749 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e80bdbf283fb7a3643172b7f85b41d9dd312091c be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180742 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180750 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180752 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180754 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180755 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180757 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180758 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
Searching for interesting versions
 Result found: flight 180691 (pass), for basis pass
 Result found: flight 180721 (fail), for basis failure
 Repro found: flight 180741 (pass), for basis pass
 Repro found: flight 180742 (fail), for basis failure
 0 revisions at 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
No revisions left to test, checking graph state.
 Result found: flight 180750 (pass), for last pass
 Result found: flight 180752 (fail), for first failure
 Repro found: flight 180754 (pass), for last pass
 Repro found: flight 180755 (fail), for first failure
 Repro found: flight 180757 (pass), for last pass
 Repro found: flight 180758 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180758/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-amd64-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
180758: tolerable ALL FAIL

flight 180758 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/180758/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64-xsm               6 xen-build               fail baseline untested


jobs:
 build-amd64-xsm                                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri May 19 20:18:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 20:18:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537257.836233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q06YK-0004Tn-HJ; Fri, 19 May 2023 20:18:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537257.836233; Fri, 19 May 2023 20:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q06YK-0004Tg-EV; Fri, 19 May 2023 20:18:16 +0000
Received: by outflank-mailman (input) for mailman id 537257;
 Fri, 19 May 2023 20:18:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q06YI-0004TW-V7; Fri, 19 May 2023 20:18:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q06YI-0005Qj-OV; Fri, 19 May 2023 20:18:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q06YI-0007bD-Dv; Fri, 19 May 2023 20:18:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q06YI-0003iE-DR; Fri, 19 May 2023 20:18:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CimCG4UFOLTsGWfad1unz8pFSEqM+1yl9egy4BNVKrc=; b=vWVDORVCALzbXXOHPR8b93vJLw
	K8nYMbpS3jkVpAPkH78o8poiOUhkISI3Jh0wO1PECcUOLDTkhek/OmDjI8FhRU0Sk0IZOsBNMHkQI
	DrcnP8ee5RjsTm/uxvgCFkrLCbhYsgi+rvk5/Ry9tL14UQECwmLuzXQN6UmI1SL+KG+Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180726-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180726: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
X-Osstest-Versions-That:
    xen=42abf5b9c53eb1b1a902002fcda68708234152c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 20:18:14 +0000

flight 180726 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180726/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 180696

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  7 xen-install   fail in 180701 pass in 180726
 test-armhf-armhf-xl-credit1  14 guest-start      fail in 180701 pass in 180726
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 180701 pass in 180726
 test-amd64-i386-libvirt-raw   7 xen-install                fail pass in 180701
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180701

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 180701 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180696
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180696
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180696
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180696
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180696
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180696
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180696
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180696
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180696
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a
baseline version:
 xen                  42abf5b9c53eb1b1a902002fcda68708234152c3

Last test of basis   180696  2023-05-18 01:51:57 Z    1 days
Testing same since   180701  2023-05-18 17:09:58 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 753d903a6f2d1e68d98487d36449b5739c28d65a
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed May 17 05:57:22 2023 +0000

    automation: allow to rerun build script
    
    Calling build twice in the same environment fails because the
    directory 'binaries' already exists from the first run. Use mkdir -p
    to tolerate an existing directory and move on to the actual build.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 816d2797468dbcc8a3d23f67592b06929f67b2ab
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue May 16 15:41:27 2023 +0000

    automation: update documentation about how to build a container
    
    The command used in the example is different from the command used in
    the Gitlab CI pipelines. Adjust it to simulate what will be used by CI.
    This is essentially the build script, which is invoked with a number of
    expected environment variables such as CC, CXX and debug.
    
    In addition, the input should not be a tty, which disables colored
    output from meson and interactive questions from kconfig.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit bdf48bf170bf1257b236b8f467d3de6c3adb9608
Author: Stefano Stabellini <stefano.stabellini@amd.com>
Date:   Thu May 11 16:22:37 2023 -0700

    docs/misra: adds Mandatory rules
    
    Add the Mandatory rules agreed by the MISRA C working group to
    docs/misra/rules.rst.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Tested-by: Luca Fancellu <luca.fancellu@arm.com>

commit b046f7e374893dd0eadc84d7010f928ea7e8fcf2
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 4 14:12:45 2023 +0100

    xen/misra: xen-analysis.py: use the relative path from the
    repository in the reports
    
    Currently the cppcheck report entries show the file path relative
    to the repository's /xen folder instead of the base folder. To ease
    checks, for example when comparing a git diff against the report,
    use the repository folder as the base.
    
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>

commit 069cb96fbd7595c80bf2af6a06454ce5c732721e
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 4 14:12:44 2023 +0100

    xen/misra: xen-analysis.py: allow cppcheck version above 2.7
    
    Allow the use of Cppcheck versions above 2.7, except for 2.8, which
    is known and documented to be broken.
    
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>

commit 45bfff651173d538239308648c6a6cd7cbe37172
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 4 14:12:43 2023 +0100

    xen/misra: xen-analysis.py: fix parallel analysis Cppcheck errors
    
    Currently Cppcheck has a limitation that prevents using make with a
    parallel build and having a parallel Cppcheck invocation on each
    translation unit (the .c files), because of spurious internal errors.
    
    The issue comes from the fact that when using the build directory,
    Cppcheck saves temporary files as <filename>.c.<many-extensions>, but
    this doesn't work well when files with the same name are being
    analysed at the same time, leading to race conditions.
    
    Fix the issue by creating, under the build directory, the same
    directory structure as the file being analysed, to avoid any clash.
    
    Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 19 22:26:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 22:26:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537277.836272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q08Xj-0001Fl-1n; Fri, 19 May 2023 22:25:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537277.836272; Fri, 19 May 2023 22:25:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q08Xi-0001Fe-Ul; Fri, 19 May 2023 22:25:46 +0000
Received: by outflank-mailman (input) for mailman id 537277;
 Fri, 19 May 2023 22:25:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rtmv=BI=oracle.com=boris.ostrovsky@srs-se1.protection.inumbo.net>)
 id 1q08Xh-0001FW-LB
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 22:25:45 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 15d6e7ba-f694-11ed-b22d-6b7b168915f2;
 Sat, 20 May 2023 00:25:42 +0200 (CEST)
Received: from pps.filterd (m0246631.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 34JJECoZ001734; Fri, 19 May 2023 22:24:40 GMT
Received: from iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta01.appoci.oracle.com [130.35.100.223])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3qnkuxbdh5-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 May 2023 22:24:40 +0000
Received: from pps.filterd
 (iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 34JMLInf032189; Fri, 19 May 2023 22:24:39 GMT
Received: from nam12-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam12lp2171.outbound.protection.outlook.com [104.47.59.171])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id
 3qj10ep7vs-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 19 May 2023 22:24:39 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by DS7PR10MB5135.namprd10.prod.outlook.com (2603:10b6:5:38e::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May
 2023 22:24:37 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::1be4:11e3:2446:aee4]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::1be4:11e3:2446:aee4%3]) with mapi id 15.20.6411.018; Fri, 19 May 2023
 22:24:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15d6e7ba-f694-11ed-b22d-6b7b168915f2
Message-ID: <35c82bbd-4c33-05da-1252-6eeec946ea22@oracle.com>
Date: Fri, 19 May 2023 18:24:32 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Content-Language: en-US
To: Arnd Bergmann <arnd@kernel.org>, Juergen Gross <jgross@suse.com>,
        Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
        Borislav Petkov <bp@alien8.de>,
        Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
        Stefano Stabellini <sstabellini@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>, "H. Peter Anvin" <hpa@zytor.com>,
        Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
        "Peter Zijlstra (Intel)" <peterz@infradead.org>,
        xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <20230519092905.3828633-1-arnd@kernel.org>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH] [v2] x86: xen: add missing prototypes
In-Reply-To: <20230519092905.3828633-1-arnd@kernel.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0



On 5/19/23 5:28 AM, Arnd Bergmann wrote:

> diff --git a/arch/x86/xen/smp.h b/arch/x86/xen/smp.h
> index 22fb982ff971..81a7821dd07f 100644
> --- a/arch/x86/xen/smp.h
> +++ b/arch/x86/xen/smp.h
> @@ -1,7 +1,11 @@
>   /* SPDX-License-Identifier: GPL-2.0 */
>   #ifndef _XEN_SMP_H
>   
> +void asm_cpu_bringup_and_idle(void);
> +asmlinkage void cpu_bringup_and_idle(void);

These can go under CONFIG_SMP.
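
A sketch of the placement this comment suggests (an illustrative fragment
of arch/x86/xen/smp.h, not the final patch):

```c
/* Fragment sketch: the new prototypes moved inside the existing
 * CONFIG_SMP block rather than declared above it. */
#ifdef CONFIG_SMP
void asm_cpu_bringup_and_idle(void);
asmlinkage void cpu_bringup_and_idle(void);

extern void xen_send_IPI_mask(const struct cpumask *mask,
			      int vector);
#endif /* CONFIG_SMP */
```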

> +
>   #ifdef CONFIG_SMP
> +
>   extern void xen_send_IPI_mask(const struct cpumask *mask,
>   			      int vector);
>   extern void xen_send_IPI_mask_allbutself(const struct cpumask *mask,
> diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
> index a92e8002b5cf..d5ae5de2daa2 100644



> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index 6d7f6318fc07..0f71ee3fe86b 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -160,4 +160,18 @@ void xen_hvm_post_suspend(int suspend_cancelled);
>   static inline void xen_hvm_post_suspend(int suspend_cancelled) {}
>   #endif
>   
> +void xen_force_evtchn_callback(void);

These ...

> +pteval_t xen_pte_val(pte_t pte);
> +pgdval_t xen_pgd_val(pgd_t pgd);
> +pte_t xen_make_pte(pteval_t pte);
> +pgd_t xen_make_pgd(pgdval_t pgd);
> +pmdval_t xen_pmd_val(pmd_t pmd);
> +pmd_t xen_make_pmd(pmdval_t pmd);
> +pudval_t xen_pud_val(pud_t pud);
> +pud_t xen_make_pud(pudval_t pud);
> +p4dval_t xen_p4d_val(p4d_t p4d);
> +p4d_t xen_make_p4d(p4dval_t p4d);
> +pte_t xen_make_pte_init(pteval_t pte);
> +void xen_start_kernel(struct start_info *si);


... should go under '#ifdef CONFIG_XEN_PV'
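
Sketched out (an illustrative fragment of arch/x86/xen/xen-ops.h, not a
final patch), the suggestion is:

```c
/* Fragment sketch: the PV-only declarations wrapped as suggested, so
 * they are only visible when PV guest support is built in. */
#ifdef CONFIG_XEN_PV
void xen_force_evtchn_callback(void);
pteval_t xen_pte_val(pte_t pte);
pte_t xen_make_pte(pteval_t pte);
/* ... the remaining page-table accessors likewise ... */
void xen_start_kernel(struct start_info *si);
#endif /* CONFIG_XEN_PV */
```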



-boris


From xen-devel-bounces@lists.xenproject.org Fri May 19 22:45:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 19 May 2023 22:45:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537283.836290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q08qK-0003pS-Me; Fri, 19 May 2023 22:45:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537283.836290; Fri, 19 May 2023 22:45:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q08qK-0003pL-IA; Fri, 19 May 2023 22:45:00 +0000
Received: by outflank-mailman (input) for mailman id 537283;
 Fri, 19 May 2023 22:44:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sk0S=BI=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q08qJ-0003pD-0f
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 22:44:59 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c64b8cd6-f696-11ed-8611-37d641c3527e;
 Sat, 20 May 2023 00:44:56 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id AFCCB65AF5;
 Fri, 19 May 2023 22:44:54 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 28D8AC433EF;
 Fri, 19 May 2023 22:44:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c64b8cd6-f696-11ed-8611-37d641c3527e
Date: Fri, 19 May 2023 15:44:48 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com, 
    xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
In-Reply-To: <b411f7aa-7fd2-7b1c-1bcd-35b989f528b6@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305191544420.815658@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop> <ZGX/Pvgy3+onJOJZ@Air-de-Roger> <b411f7aa-7fd2-7b1c-1bcd-35b989f528b6@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-464613732-1684536293=:815658"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-464613732-1684536293=:815658
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 19 May 2023, Jan Beulich wrote:
> On 18.05.2023 12:34, Roger Pau Monné wrote:
> > On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> >> I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
> >> test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
> >> Zen3 system and we already have a few successful tests with it, see
> >> automation/gitlab-ci/test.yaml.
> >>
> >> We managed to narrow down the issue to a console problem. We are
> >> currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
> >> options, it works with PV Dom0 and it is using a PCI UART card.
> >>
> >> In the case of Dom0 PVH:
> >> - it works without console=com1
> >> - it works with console=com1 and with the patch appended below
> >> - it doesn't work otherwise and crashes with this error:
> >> https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> > 
> > Jan also noticed this, and we have a ticket for it in gitlab:
> > 
> > https://gitlab.com/xen-project/xen/-/issues/85
> > 
> >> What is the right way to fix it?
> > 
> > I think the right fix is to simply avoid hidden devices from being
> > handled by vPCI; in any case such devices won't work properly with
> > vPCI because they are in use by Xen, and so any information cached by
> > vPCI is likely to become stale as Xen can modify the device without
> > vPCI noticing.
> > 
> > I think the chunk below should help.  It's not clear to me however how
> > hidden devices should be handled, is the intention to completely hide
> > such devices from dom0?
> 
> No, Dom0 should still be able to see them in a (mostly) r/o fashion.

But why? If something is in use by Xen (e.g. the IOMMU, a serial PCI
device, etc.) ideally Dom0 shouldn't even know of its existence, because
the device is not exposed to Dom0. Dom0 is not meant to use it. Why let
Dom0 know it exists if Dom0 should not use it?

In Xen on ARM, initially we didn't expose devices used by Xen to Dom0
at all. However, hiding them completely required complex device tree
manipulations. Now we instead leave the device nodes in the device tree
as they are, but change the "status" property to "disabled".

The idea is still that we completely hide Xen devices from Dom0, but
because of implementation complexity, instead of completely taking away
the corresponding nodes from device tree, we change them to disabled,
which still leads to the same result: the guest OS will skip them.

I am saying this without being familiar with the x86 PVH implementation,
so pardon my ignorance here, but it seems to me that as we are moving
toward a better architecture with PVH, one that allows any OS to be
Dom0, not just Linux, we would want to also completely hide devices
owned by Xen from Dom0.

That way we don't need any workaround in the guest OS for it not to use
them.
--8323329-464613732-1684536293=:815658--


From xen-devel-bounces@lists.xenproject.org Fri May 19 23:36:20 2023
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-amd64
Message-Id: <E1q09df-00042C-Jl@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 19 May 2023 23:35:59 +0000

branch xen-unstable
xenbranch xen-unstable
job build-amd64
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180774/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-amd64.xen-build --summary-out=tmp/180774.bisection-summary --basis-template=180691 --blessings=real,real-bisect,real-retry qemu-mainline build-amd64 xen-build
Searching for failure / basis pass:
 180742 fail [host=himrod2] / 180691 ok.
Failure / basis pass flights: 180742 / 180691
(tree with no url: minios)
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
Basis pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e-0abfb0be6cf78a8e962383e85cec57851ddae5bc git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 https://gitlab.com/qemu-project/qemu.git#6972ef1440a9d685482d78672620a7482f2bd09a-146f515110e86aefe3bc2e8eb581ab724614060f git://xenbits.xen.org/osstest/seabios.git#be7e899350caa7b74d8271a34264c3b\
 4aef25ab0-be7e899350caa7b74d8271a34264c3b4aef25ab0 git://xenbits.xen.org/xen.git#8f9c8274a4e3e860bd777269cb2c91971e9fa69e-42abf5b9c53eb1b1a902002fcda68708234152c3
Loaded 34991 nodes in revision graph
Searching for test results:
 180702 [host=himrod0]
 180691 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
 180704 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 297e8182194e634baa0cbbfd96d2e09e2a0bcd40 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180721 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180742 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180759 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
 180760 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180761 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 34f983d86fe40ffe5975369c1cf5e6a61688383a be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180762 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dee01b827ffc26577217697074052b8b7f4770dc be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180764 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6b0cedcdc7c52feda1a6b5d6c6f30356290af0ec be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180765 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd48b477e90c3200b970545d1953e12e8c1431db be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180767 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e80bdbf283fb7a3643172b7f85b41d9dd312091c be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180769 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180770 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180771 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180772 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180773 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180774 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
Searching for interesting versions
 Result found: flight 180691 (pass), for basis pass
 Result found: flight 180721 (fail), for basis failure
 Repro found: flight 180759 (pass), for basis pass
 Repro found: flight 180760 (fail), for basis failure
 0 revisions at 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
No revisions left to test, checking graph state.
 Result found: flight 180769 (pass), for last pass
 Result found: flight 180770 (fail), for first failure
 Repro found: flight 180771 (pass), for last pass
 Repro found: flight 180772 (fail), for first failure
 Repro found: flight 180773 (pass), for last pass
 Repro found: flight 180774 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180774/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
180774: tolerable ALL FAIL

flight 180774 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/180774/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   6 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat May 20 00:02:34 2023
Date: Fri, 19 May 2023 17:02:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Roger Pau Monné <roger.pau@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, jbeulich@suse.com, 
    andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org, 
    marmarek@invisiblethingslab.com, xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
In-Reply-To: <ZGcu7EWW1cuNjwDA@Air-de-Roger>
Message-ID: <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop> <ZGX/Pvgy3+onJOJZ@Air-de-Roger> <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop> <ZGcu7EWW1cuNjwDA@Air-de-Roger>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1971014780-1684540943=:815658"


--8323329-1971014780-1684540943=:815658
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 19 May 2023, Roger Pau Monné wrote:
> On Thu, May 18, 2023 at 06:46:52PM -0700, Stefano Stabellini wrote:
> > On Thu, 18 May 2023, Roger Pau Monné wrote:
> > > On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> > > > Hi all,
> > > > 
> > > > I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
> > > > test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
> > > > Zen3 system and we already have a few successful tests with it, see
> > > > automation/gitlab-ci/test.yaml.
> > > > 
> > > > We managed to narrow down the issue to a console problem. We are
> > > > currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
> > > > options, it works with PV Dom0 and it is using a PCI UART card.
> > > > 
> > > > In the case of Dom0 PVH:
> > > > - it works without console=com1
> > > > - it works with console=com1 and with the patch appended below
> > > > - it doesn't work otherwise and crashes with this error:
> > > > https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> > > 
> > > Jan also noticed this, and we have a ticket for it in gitlab:
> > > 
> > > https://gitlab.com/xen-project/xen/-/issues/85
> > > 
> > > > What is the right way to fix it?
> > > 
> > > > I think the right fix is to simply prevent hidden devices from being
> > > > handled by vPCI; in any case such devices won't work properly with
> > > > vPCI because they are in use by Xen, and so any information cached by
> > > > vPCI is likely to become stale, as Xen can modify the device without
> > > > vPCI noticing.
> > > 
> > > > I think the chunk below should help.  It's not clear to me however how
> > > > hidden devices should be handled: is the intention to completely hide
> > > > such devices from dom0?
> > 
> > I like the idea but the patch below still failed:
> > 
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82d0402682b0>] R drivers/vpci/header.c#modify_bars+0x2b3/0x44d
> > (XEN)    [<ffff82d040268714>] F drivers/vpci/header.c#init_bars+0x2ca/0x372
> > (XEN)    [<ffff82d0402673b3>] F vpci_add_handlers+0xd5/0x10a
> > (XEN)    [<ffff82d0404408b9>] F drivers/passthrough/pci.c#setup_one_hwdom_device+0x73/0x97
> > (XEN)    [<ffff82d0404409b0>] F drivers/passthrough/pci.c#_setup_hwdom_pci_devices+0x63/0x15b
> > (XEN)    [<ffff82d04027df08>] F drivers/passthrough/pci.c#pci_segments_iterate+0x43/0x69
> > (XEN)    [<ffff82d040440e29>] F setup_hwdom_pci_devices+0x25/0x2c
> > (XEN)    [<ffff82d04043cb1a>] F drivers/passthrough/amd/pci_amd_iommu.c#amd_iommu_hwdom_init+0xd4/0xdd
> > (XEN)    [<ffff82d0404403c9>] F iommu_hwdom_init+0x49/0x53
> > (XEN)    [<ffff82d04045175e>] F dom0_construct_pvh+0x160/0x138d
> > (XEN)    [<ffff82d040468914>] F construct_dom0+0x5c/0xb7
> > (XEN)    [<ffff82d0404619c1>] F __start_xen+0x2423/0x272d
> > (XEN)    [<ffff82d040203344>] F __high_start+0x94/0xa0
> > 
> > I haven't managed to figure out why yet.
> 
> Do you have some other patches applied?
> 
> I've tested this by manually hiding a device on my system and can
> confirm that without the fix I hit the ASSERT, but with the patch
> > applied I no longer hit it.  I have no idea how you can get into
> > init_bars if the device is hidden and thus belongs to dom_xen.

Unfortunately it doesn't work. Here are the full logs with interesting
DEBUG messages (search for "DEBUG"):
https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4318489116
https://gitlab.com/xen-project/people/sstabellini/xen/-/commit/31c400caa7b86d4c14f9553138e02af18d3b3284

[...]
(XEN) DEBUG ns16550_init_postirq 432  03:00.0
[...]
(XEN) DEBUG vpci_add_handlers 75 0000:00:00.0 0^M
(XEN) DEBUG vpci_add_handlers 75 0000:00:00.2 1^M
(XEN) DEBUG vpci_add_handlers 78 0000:00:00.2^M
(XEN) DEBUG vpci_add_handlers 75 0000:00:01.0 0^M
(XEN) DEBUG vpci_add_handlers 75 0000:00:02.0 0^M
(XEN) DEBUG vpci_add_handlers 75 0000:00:02.1 0^M

Then crash on drivers/vpci/header.c#modify_bars

vpci_add_handlers hasn't even been called yet for the interesting device,
which is 03:00.0 (not 00:02.1).

At that point I doubted myself about the previous test, so I went back
and re-ran it. I confirm that the patch below works on its own (used
instead of, not in addition to, the earlier chunk):


diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 212a9c49ae..24abfaae30 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -429,17 +429,6 @@ static void __init cf_check ns16550_init_postirq(struct serial_port *port)
 #ifdef NS16550_PCI
     if ( uart->bar || uart->ps_bdf_enable )
     {
-        if ( uart->param && uart->param->mmio &&
-             rangeset_add_range(mmio_ro_ranges, PFN_DOWN(uart->io_base),
-                                PFN_UP(uart->io_base + uart->io_size) - 1) )
-            printk(XENLOG_INFO "Error while adding MMIO range of device to mmio_ro_ranges\n");
-
-        if ( pci_ro_device(0, uart->ps_bdf[0],
-                           PCI_DEVFN(uart->ps_bdf[1], uart->ps_bdf[2])) )
-            printk(XENLOG_INFO "Could not mark config space of %02x:%02x.%u read-only.\n",
-                   uart->ps_bdf[0], uart->ps_bdf[1],
-                   uart->ps_bdf[2]);
-
         if ( uart->msi )
         {
             struct msi_info msi = {
--8323329-1971014780-1684540943=:815658--


From xen-devel-bounces@lists.xenproject.org Sat May 20 00:35:27 2023
Date: Sat, 20 May 2023 02:34:56 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
	jbeulich@suse.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
Message-ID: <ZGgVsRuAm/+RqA1C@mail-itl>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
 <ZGcu7EWW1cuNjwDA@Air-de-Roger>
 <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Cl3FzbKZSqTcwbFj"
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>


--Cl3FzbKZSqTcwbFj
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sat, 20 May 2023 02:34:56 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	jbeulich@suse.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure

On Fri, May 19, 2023 at 05:02:21PM -0700, Stefano Stabellini wrote:
> On Fri, 19 May 2023, Roger Pau Monn=C3=A9 wrote:
> > On Thu, May 18, 2023 at 06:46:52PM -0700, Stefano Stabellini wrote:
> > > On Thu, 18 May 2023, Roger Pau Monn=C3=A9 wrote:
> > > > On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> > > > > Hi all,
> > > > >=20
> > > > > I have run into another PVH Dom0 issue. I am trying to enable a P=
VH Dom0
> > > > > test with the brand new gitlab-ci runner offered by Qubes. It is =
an AMD
> > > > > Zen3 system and we already have a few successful tests with it, s=
ee
> > > > > automation/gitlab-ci/test.yaml.
> > > > >=20
> > > > > We managed to narrow down the issue to a console problem. We are
> > > > > currently using console=3Dcom1 com1=3D115200,8n1,pci,msi as Xen c=
ommand line
> > > > > options, it works with PV Dom0 and it is using a PCI UART card.
> > > > >=20
> > > > > In the case of Dom0 PVH:
> > > > > - it works without console=3Dcom1
> > > > > - it works with console=3Dcom1 and with the patch appended below
> > > > > - it doesn't work otherwise and crashes with this error:
> > > > > https://matrix-client.matrix.org/_matrix/media/r0/download/invisi=
blethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> > > >=20
> > > > Jan also noticed this, and we have a ticket for it in gitlab:
> > > >=20
> > > > https://gitlab.com/xen-project/xen/-/issues/85
> > > >=20
> > > > > What is the right way to fix it?
> > > >=20
> > > > I think the right fix is to simply avoid hidden devices from being
> > > > handled by vPCI, in any case such devices won't work propewrly with
> > > > vPCI because they are in use by Xen, and so any cached information =
by
> > > > vPCI is likely to become stable as Xen can modify the device without
> > > > vPCI noticing.
> > > >=20
> > > > I think the chunk below should help.  It's not clear to me however =
how
> > > > hidden devices should be handled, is the intention to completely hi=
de
> > > > such devices from dom0?
> > >
> > > I like the idea but the patch below still failed:
> > >
> > > (XEN) Xen call trace:
> > > (XEN)    [<ffff82d0402682b0>] R drivers/vpci/header.c#modify_bars+0x2b3/0x44d
> > > (XEN)    [<ffff82d040268714>] F drivers/vpci/header.c#init_bars+0x2ca/0x372
> > > (XEN)    [<ffff82d0402673b3>] F vpci_add_handlers+0xd5/0x10a
> > > (XEN)    [<ffff82d0404408b9>] F drivers/passthrough/pci.c#setup_one_hwdom_device+0x73/0x97
> > > (XEN)    [<ffff82d0404409b0>] F drivers/passthrough/pci.c#_setup_hwdom_pci_devices+0x63/0x15b
> > > (XEN)    [<ffff82d04027df08>] F drivers/passthrough/pci.c#pci_segments_iterate+0x43/0x69
> > > (XEN)    [<ffff82d040440e29>] F setup_hwdom_pci_devices+0x25/0x2c
> > > (XEN)    [<ffff82d04043cb1a>] F drivers/passthrough/amd/pci_amd_iommu.c#amd_iommu_hwdom_init+0xd4/0xdd
> > > (XEN)    [<ffff82d0404403c9>] F iommu_hwdom_init+0x49/0x53
> > > (XEN)    [<ffff82d04045175e>] F dom0_construct_pvh+0x160/0x138d
> > > (XEN)    [<ffff82d040468914>] F construct_dom0+0x5c/0xb7
> > > (XEN)    [<ffff82d0404619c1>] F __start_xen+0x2423/0x272d
> > > (XEN)    [<ffff82d040203344>] F __high_start+0x94/0xa0
> > >
> > > I haven't managed to figure out why yet.
> >
> > Do you have some other patches applied?
> >
> > I've tested this by manually hiding a device on my system and can
> > confirm that without the fix I hit the ASSERT, but with the patch
> > applied I no longer hit it.  I have no idea how you can get into
> > init_bars if the device is hidden and thus belongs to dom_xen.
>
> Unfortunately it doesn't work. Here are the full logs with interesting
> DEBUG messages (search for "DEBUG"):
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4318489116
> https://gitlab.com/xen-project/people/sstabellini/xen/-/commit/31c400caa7b86d4c14f9553138e02af18d3b3284
>
> [...]
> (XEN) DEBUG ns16550_init_postirq 432  03:00.0
> [...]
> (XEN) DEBUG vpci_add_handlers 75 0000:00:00.0 0
> (XEN) DEBUG vpci_add_handlers 75 0000:00:00.2 1
> (XEN) DEBUG vpci_add_handlers 78 0000:00:00.2
> (XEN) DEBUG vpci_add_handlers 75 0000:00:01.0 0
> (XEN) DEBUG vpci_add_handlers 75 0000:00:02.0 0
> (XEN) DEBUG vpci_add_handlers 75 0000:00:02.1 0
>
> Then it crashes in drivers/vpci/header.c#modify_bars.
>
> vpci_add_handlers hasn't even been called yet for the interesting device,
> which is 03:00.0 (not 00:02.1).

This device is behind a bridge; could it maybe be related to marking
(part of) the BAR of such a device as R/O? FWIW, the bridge is at 00:02.4.

> At that point I doubted myself on the previous test, so I went back and
> re-ran it. I do confirm that the patch below (instead of, not in
> addition to, the one above) works:
>
>
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 212a9c49ae..24abfaae30 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -429,17 +429,6 @@ static void __init cf_check ns16550_init_postirq(struct serial_port *port)
>  #ifdef NS16550_PCI
>      if ( uart->bar || uart->ps_bdf_enable )
>      {
> -        if ( uart->param && uart->param->mmio &&
> -             rangeset_add_range(mmio_ro_ranges, PFN_DOWN(uart->io_base),
> -                                PFN_UP(uart->io_base + uart->io_size) - 1) )
> -            printk(XENLOG_INFO "Error while adding MMIO range of device to mmio_ro_ranges\n");
> -
> -        if ( pci_ro_device(0, uart->ps_bdf[0],
> -                           PCI_DEVFN(uart->ps_bdf[1], uart->ps_bdf[2])) )
> -            printk(XENLOG_INFO "Could not mark config space of %02x:%02x.%u read-only.\n",
> -                   uart->ps_bdf[0], uart->ps_bdf[1],
> -                   uart->ps_bdf[2]);
> -
>          if ( uart->msi )
>          {
>              struct msi_info msi = {

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Sat May 20 01:06:24 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180738-mainreport@xen.org>
Subject: [linux-linus test] 180738: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 01:06:10 +0000

flight 180738 linux-linus real [real]
flight 180776 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180738/
http://logs.test-lab.xenproject.org/osstest/logs/180776/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2d1bcbc6cd703e64caf8df314e3669b4786e008a
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   33 days
Failing since        180281  2023-04-17 06:24:36 Z   32 days   60 attempts
Testing same since   180706  2023-05-19 01:42:57 Z    0 days    2 attempts

------------------------------------------------------------
2416 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 306654 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 20 01:48:21 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180753-mainreport@xen.org>
Subject: [qemu-mainline test] 180753: regressions - FAIL
X-Osstest-Versions-This:
    qemuu=d009607d08d22f91ca399b72828c6693855e7325
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 01:48:08 +0000

flight 180753 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180753/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                d009607d08d22f91ca399b72828c6693855e7325
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    2 days
Failing since        180699  2023-05-18 07:21:24 Z    1 days    6 attempts
Testing same since   180753  2023-05-19 17:07:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Anton Johansson <anjo@rev.ng>
  Brian Cain <bcain@quicinc.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Gavin Shan <gshan@redhat.com>
  Igor Mammedov <imammedo@redhat.com>
  John Snow <jsnow@redhat.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sid Manning <sidneym@quicinc.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3303 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 20 05:37:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 05:37:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537366.836481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0FH0-0006C7-ID; Sat, 20 May 2023 05:36:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537366.836481; Sat, 20 May 2023 05:36:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0FH0-0006C0-Ep; Sat, 20 May 2023 05:36:58 +0000
Received: by outflank-mailman (input) for mailman id 537366;
 Sat, 20 May 2023 05:36:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GQEh=BJ=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q0FGz-0006Bu-1r
 for xen-devel@lists.xenproject.org; Sat, 20 May 2023 05:36:57 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 52a43a09-f6d0-11ed-8611-37d641c3527e;
 Sat, 20 May 2023 07:36:52 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 809765C00DA;
 Sat, 20 May 2023 01:36:51 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute5.internal (MEProxy); Sat, 20 May 2023 01:36:51 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sat,
 20 May 2023 01:36:50 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52a43a09-f6d0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:message-id:mime-version:reply-to
	:sender:subject:subject:to:to; s=fm1; t=1684561011; x=
	1684647411; bh=8IitWPFe78G+zxKofkqDDSKZmdplMZYDBzCeut9KBls=; b=q
	6/r7wB2zQZ2jEjveKe/1SGYEU/FNePYE70vakyrRqUbYr2BimK+WMM8slYFnwfiA
	kP3ete9uVx90mRdW9y10J7bTUqDD/6Ncjt167n72EZMyS36oYCRoVMQ+3VFxZYTT
	nGonFB8vQiXBFq4hjQBGxY5ipJgmS3gO569RucVleuufPDVLVlJC2FTn2VQ4tkez
	fs1GvvfkT8QIsNcDh2faPmjYcUYqIKWEzkuU8RQnq7AANj4iAnG8zs3P658lz6r3
	Z8ghyu8GASq+YqBuBsmbWRVdnEdgazOp2hqdq/ZSpbh9EiqRT1h8FsyfqbSMg6qt
	nE67bYhXmjhn/hT/T1ykg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:message-id
	:mime-version:reply-to:sender:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=
	1684561011; x=1684647411; bh=8IitWPFe78G+zxKofkqDDSKZmdplMZYDBzC
	eut9KBls=; b=TMSSDr0UaFz1SHJZdiS8/CeBFuUkKAjOjkdjnHdiz4AsyrkC7sG
	whDCBDHNOjWZhbZ5E59Qtn/tMwwH6w8meCw0atTapN22IiNPH+1hasgsOvf3ooEU
	3Wb5gqvHQcrxqGchnsSKGXAAYUA3W3phzvEaFKXm7soy0mrhczruUJge+B+Qe9Ps
	OxICwtN+gmDjNisVX6uJsQ9f2mLhw8LY0vXw/bcN4HpxX3Cv5B+ODM6PAYNkMAvO
	s0Nfr3inATVRJpWKPFxWY1DtILafHxEr/1wo9V6pOHxopDG18xU8skLjTjXF4Rrg
	/xAFfY0mnctQsJrCqRPV+n9DUGDjYQlBPHA==
X-ME-Sender: <xms:c1xoZPkxsJ2ADDhMkmc7fzFYgrgOF6F4JqkAm9tfnfVgWPa8NYO4HA>
    <xme:c1xoZC0ZbbgGWHfo--rImLALvQPWtYrOX6E2ncrPrIJ9N8VftBvB1PgP71dXOF6rm
    p5Qzpcn4ozjToI>
X-ME-Received: <xmr:c1xoZFp-7r2T7PxWn6VEqUpTwGSzP1mP-dmAUQ_rNWdPDreA8lzirtyngvY>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeeiiedgleeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkgggtugesghdtreertddtjeenucfhrhhomhepffgvmhhiucfo
    rghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhnghhslh
    grsgdrtghomheqnecuggftrfgrthhtvghrnhepueevleffkeefueelieeuveehfeeigfff
    gefgudeiueejveevheffgfdthfeijefhnecuvehluhhsthgvrhfuihiivgeptdenucfrrg
    hrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhnghhslhgr
    sgdrtghomh
X-ME-Proxy: <xmx:c1xoZHkU7O8JRorYdPUhsShUQkgo5Jsk3HTrd_JVxR_CKWgUrEhIKQ>
    <xmx:c1xoZN0xuUuIh8AiQcciNySbGKzeRBFyZ4CyM_0z_zzCqmAhwkvINQ>
    <xmx:c1xoZGtlSb-iDCRRgSTCsKR6geWYZAoMM6KwDMG9QVRj9dnG8sLbvQ>
    <xmx:c1xoZKhk-I6OtuUPbpkXDVqXnGoonlGzZNgVmUoKvRYlLJB1blboaA>
Feedback-ID: iac594737:Fastmail
Date: Sat, 20 May 2023 01:36:44 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Xen developer discussion <xen-devel@lists.xenproject.org>
Cc: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Removing Linux memory hotplug limitations
Message-ID: <ZGhccUNlipyTIm5/@itl-email>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="Em6P91+DqSFMPKbb"
Content-Disposition: inline


--Em6P91+DqSFMPKbb
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sat, 20 May 2023 01:36:44 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Xen developer discussion <xen-devel@lists.xenproject.org>
Cc: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Removing Linux memory hotplug limitations

Qubes OS is trying to switch from relying on populate-on-demand to
memory hotplug for Linux guests.  However, this runs into a problem,
which is that only a limited amount of memory can be hotplugged.

My experiments with Qubes OS's build of Linux 6.3.2 reveal:

- The more memory a guest starts off with, the more memory that can be
  added to it via hotplug.

- The memory that the guest is not able to use remains available on the
  host and can be assigned to dom0 (and, presumably, to other guests).

- There is no sudden jump at 2GiB or 3GiB as far as I can tell.

- There are no kernel warning messages unless I try to add a huge amount
  of memory (far beyond what can be successfully hotplugged).  In
  particular, there are no warning messages from drivers/xen/balloon.c.

- There are several waits in the balloon driver that should probably
  have comments added.

- `cat /sys/devices/system/memory/memory*/state` reveals that all memory
  devices are online.

- `echo $((1 << 63)) | sudo tee /sys/devices/system/xen_memory/xen_memory0/target`
  causes a kernel crash (BUG_ON(ret != nr_pages) in drivers/xen/balloon.c).
  Patch coming.

- The initial amount of memory assigned to a guest is irrelevant.

- The maximum amount of memory assigned to a guest is highly relevant.
  Table below:

   Initial maximum memory:      Maximum after hotplug
             400M                       2733M
             500M                       3067M
             600M                       3535M
             700M                       3919M
             800M                       4315M
             900M                       4711M
             1000M                      5105M

SciPy linear regression gives:

- Slope: 3.9943
- Y-Intercept: 1116.14
- R: 0.99965
- stderr: 0.047

In short, there is a very clear, nearly linear relationship between the
amount of cold-plugged memory and the amount of memory that can be
hotplugged later.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--Em6P91+DqSFMPKbb
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmRoXHEACgkQsoi1X/+c
IsEMwg//ST7yUUKUJZL4pqYw+siRHT6dD8oax7dAGQx2Zo6CLyQ0K3fu27FOqTEM
hB7Ua3h05DzYtOSJbDcdjqlYendfIo2Yj7ItBgt3VF1rPTBKy+sNYqxjr838LvUR
18adFho7fiyqmuRB+kqmdQ/BW2S1mQos5AIq1rY7E+MaIgGyzXQHc2oIyGw1X6uY
FJ3RoLRo3cHk1dbwsmSc+OXjxKPAW6I1DLd8GhFLhiBXUV1Q+QlUt5lJTPS7Ci5b
nbIvEh+1kOyg35ZUefLWlMzdTGIMW6rpzIT3Q1C6YDr2K0EXeigUbgvhZIrbId0T
zILnEI44MLzGAHyrC52BXyCPik8l97rlRHxW9/nV0dMXiM7/vVj1INURGK1Vm3y1
waNiLfBohbKpGKb5cz874FgQV8dNl1704zSkKgchVRRTBQjte0PhgUDpHNM+IdtG
E1fZZJ4wquJikUUYLqS2Hc8Z3O0H+2ALWVJwz8ZnosEARw8aZcICIbKpQgaReMHB
+dNxmyy4nSxBBzSMllLJhjsTjqA33MXAFIVRtAjmOqODR1sxhMjALdBCvTu5qXXt
MycpQY4xVOVMC9E0QdD4XiKNbkFg+CHT0dzJmE5JQm3J/NZdGxeBAa5nbweXzJMX
VJnF5eJogsnPXHAr42QFmW5HWrci3tg2ATGg9UOUSlQCzu7stV0=
=3tFv
-----END PGP SIGNATURE-----

--Em6P91+DqSFMPKbb--


From xen-devel-bounces@lists.xenproject.org Sat May 20 05:44:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 05:44:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537373.836491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0FNp-0007l0-FC; Sat, 20 May 2023 05:44:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537373.836491; Sat, 20 May 2023 05:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0FNp-0007kt-Al; Sat, 20 May 2023 05:44:01 +0000
Received: by outflank-mailman (input) for mailman id 537373;
 Sat, 20 May 2023 05:43:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0FNn-0007kj-Rk; Sat, 20 May 2023 05:43:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0FNn-00022G-JI; Sat, 20 May 2023 05:43:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0FNn-0006MI-Cl; Sat, 20 May 2023 05:43:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0FNn-0003eI-CG; Sat, 20 May 2023 05:43:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=6gjEcNgTEnz3Dp27QG3nnWbTbNPvcCjcODcRjIQCJXg=; b=gOi95AYy2PwtCvDVUDzIvboS98
	X2GS9E70E5CNArF/426k0/joCn6MdvEWO1Jrj0CIpyvkny/k/qpVMNCrrYMCCsTbeC4PDcKJx+UQB
	jflyK8x/1yNtJjDmZ6y8p6YWCYAwbgz9yUsGYk1yOOtFTLiAs95KY0pFEe02I1Gpy9c8=;
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-i386-xsm
Message-Id: <E1q0FNn-0003eI-CG@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 05:43:59 +0000

branch xen-unstable
xenbranch xen-unstable
job build-i386-xsm
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180796/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-i386-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-i386-xsm.xen-build --summary-out=tmp/180796.bisection-summary --basis-template=180691 --blessings=real,real-bisect,real-retry qemu-mainline build-i386-xsm xen-build
Searching for failure / basis pass:
 180753 fail [host=albana0] / 180691 [host=huxelrebe1] 180686 [host=huxelrebe1] 180673 [host=elbling0] 180659 ok.
Failure / basis pass flights: 180753 / 180659
(tree with no url: minios)
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d009607d08d22f91ca399b72828c6693855e7325 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
Basis pass d3225577123767fd09c91201d27e9c91663ae132 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8844bb8d896595ee1d25d21c770e6e6f29803097 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#d3225577123767fd09c91201d27e9c91663ae132-0abfb0be6cf78a8e962383e85cec57851ddae5bc git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 https://gitlab.com/qemu-project/qemu.git#8844bb8d896595ee1d25d21c770e6e6f29803097-d009607d08d22f91ca399b72828c6693855e7325 git://xenbits.xen.org/osstest/seabios.git#ea1b7a0733906b8425d948ae94fba63c32b1d425-be7e899350caa7b74d8271a34264c3b4aef25ab0 git://xenbits.xen.org/xen.git#4c507d8a6b6e8be90881a335b0a66eb28e0f7737-42abf5b9c53eb1b1a902002fcda68708234152c3
Loaded 43247 nodes in revision graph
Searching for test results:
 180659 pass d3225577123767fd09c91201d27e9c91663ae132 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8844bb8d896595ee1d25d21c770e6e6f29803097 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
 180673 [host=elbling0]
 180686 [host=huxelrebe1]
 180702 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 297e8182194e634baa0cbbfd96d2e09e2a0bcd40 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180691 [host=huxelrebe1]
 180704 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 297e8182194e634baa0cbbfd96d2e09e2a0bcd40 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180721 [host=albana1]
 180742 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180775 pass d3225577123767fd09c91201d27e9c91663ae132 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8844bb8d896595ee1d25d21c770e6e6f29803097 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
 180777 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180778 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 91608e2a44f36e79cb83f863b8a7bb57d2c98061 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180779 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 66e2c6cbacea9302a1fc5528906243d36c103fc7 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180780 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4baf3978c02b387c39dc6a75d323126ab386aece be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180781 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bfa72590df14e4c94c03d2464f3abe18bf2e5dac be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
 180753 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d009607d08d22f91ca399b72828c6693855e7325 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180783 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3887702e5f8995638c98f9d9326b4913fb107be7 be7e899350caa7b74d8271a34264c3b4aef25ab0 c8e4bbb5b8ee22fd1591ba6a5a3cef4466dda323
 180784 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d009607d08d22f91ca399b72828c6693855e7325 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180786 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ab4c44d657aeca7e1da6d6dcb1741c8e7d357b8b be7e899350caa7b74d8271a34264c3b4aef25ab0 fc1b51268025233a81e5fd9c5eabe170bc830720
 180787 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0e7e3bf1a552c178924867fa7c2f30ccc8a179e0 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180788 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0b15c42b81ff1e66ccbab3c2f2cef1535cbb9d24 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180789 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 68ea6d17fe531e383394573251359ab4f99f7091 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180790 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd48b477e90c3200b970545d1953e12e8c1431db be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180791 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180792 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180793 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180794 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180795 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180796 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
Searching for interesting versions
 Result found: flight 180659 (pass), for basis pass
 Result found: flight 180753 (fail), for basis failure (at ancestor ~1)
 Repro found: flight 180775 (pass), for basis pass
 Repro found: flight 180784 (fail), for basis failure
 0 revisions at 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
No revisions left to test, checking graph state.
 Result found: flight 180791 (pass), for last pass
 Result found: flight 180792 (fail), for first failure
 Repro found: flight 180793 (pass), for last pass
 Repro found: flight 180794 (fail), for first failure
 Repro found: flight 180795 (pass), for last pass
 Repro found: flight 180796 (fail), for first failure
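
The search log above follows the usual bisection-with-reproduction pattern: binary-search for the last-pass/first-fail boundary, then re-run both boundary builds to confirm the result before blaming a changeset. A minimal sketch of that pattern (purely illustrative, not osstest's actual code; `test` stands in for launching a real flight and `bisect_first_failure` is a hypothetical helper):

```python
# Sketch of bisection with reproduction, in the spirit of the log above.
# `revisions` is ordered oldest -> newest; `test` runs a build/test and
# returns True on pass.  Assumes revisions[0] passes and revisions[-1] fails.

def bisect_first_failure(revisions, test, repros=2):
    lo, hi = 0, len(revisions) - 1          # lo passes, hi fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if test(revisions[mid]):
            lo = mid
        else:
            hi = mid
    # Reproduce both sides of the boundary before blaming a commit,
    # as the "Repro found" lines above do.
    for _ in range(repros):
        if not test(revisions[lo]) or test(revisions[hi]):
            return None                     # flaky result -> inconclusive
    return revisions[hi]                    # first failing revision

revs = list(range(10))                      # pretend revision 6 broke the build
print(bisect_first_failure(revs, lambda r: r < 6))   # 6
```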

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180796/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
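
The commit message above describes the behaviour the bisector flagged. As a rough illustration only (not QEMU's actual configure code; the directory names here are assumptions), "create a python venv unconditionally" amounts to something like:

```python
# Illustrative sketch -- NOT QEMU's configure implementation.
# The gist of the flagged commit: build a private venv on every configure
# run and resolve python tools (meson, sphinx, ...) from it, rather than
# only when the host python is missing something.
import os
import venv

venv_dir = os.path.join("build", "pyvenv")   # assumed path, for illustration

# Create the venv unconditionally, even if the host python already
# provides the needed tools; clear=True rebuilds it from scratch.
venv.create(venv_dir, with_pip=False, clear=True)

# Put the venv's bin directory first on PATH so its tools win.
bindir = "Scripts" if os.name == "nt" else "bin"
os.environ["PATH"] = os.path.join(venv_dir, bindir) + os.pathsep + os.environ["PATH"]
```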

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-i386-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
180796: tolerable ALL FAIL

flight 180796 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/180796/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-i386-xsm                6 xen-build               fail baseline untested


jobs:
 build-i386-xsm                                               fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat May 20 06:02:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 06:02:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.536528.836510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Ffa-0001y2-1k; Sat, 20 May 2023 06:02:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 536528.836510; Sat, 20 May 2023 06:02:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0FfZ-0001xv-VH; Sat, 20 May 2023 06:02:21 +0000
Received: by outflank-mailman (input) for mailman id 536528;
 Thu, 18 May 2023 20:30:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FKe9=BH=redhat.com=lyude@srs-se1.protection.inumbo.net>)
 id 1pzkH3-0007P4-7h
 for xen-devel@lists.xenproject.org; Thu, 18 May 2023 20:30:57 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e28fedd7-f5ba-11ed-8611-37d641c3527e;
 Thu, 18 May 2023 22:30:54 +0200 (CEST)
Received: from mail-qv1-f70.google.com (mail-qv1-f70.google.com
 [209.85.219.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-270-nyCJTTlrODCn2wl02aVmcQ-1; Thu, 18 May 2023 16:30:51 -0400
Received: by mail-qv1-f70.google.com with SMTP id
 6a1803df08f44-62394519189so8034906d6.0
 for <xen-devel@lists.xenproject.org>; Thu, 18 May 2023 13:30:51 -0700 (PDT)
Received: from ?IPv6:2600:4040:5c62:8200::feb? ([2600:4040:5c62:8200::feb])
 by smtp.gmail.com with ESMTPSA id
 cx3-20020a056214188300b006238f82cde4sm763000qvb.108.2023.05.18.13.30.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 18 May 2023 13:30:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e28fedd7-f5ba-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684441853;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ks6xt/wC6E+ECyHt7NO8D94Kf2RxqGGaF/pXlLsIP+4=;
	b=hhMSrOa6g48vToxh/Dd/Meb0nIyRItdD3x56+Pvx4vZNW8lYpx/ZWFCIjE1Cibp0bRi47A
	Zr/qGc51mSJwItxCfv7n/Y46nuAHhstwqqch/hDYSZKOXXB482E9yU7pvvGLggl6It46OH
	y8rYAoxwjdM8kKq+GR4L4EaWLs0oNaU=
X-MC-Unique: nyCJTTlrODCn2wl02aVmcQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684441851; x=1687033851;
        h=mime-version:user-agent:content-transfer-encoding:organization
         :references:in-reply-to:date:cc:to:from:subject:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=UBHXSM6N4oYTfABvCGd3gPEAq2HM572bDIbMq8xh2Y4=;
        b=aaL4W8SYC8mEqc5GFLGPu5g72/xgT3RoA+WX2lJq6fUvEgaypk4BMHDRBAwHGfrKtb
         pTwn7frzyLNi0LC+KJ/DihydEIdazwxXdOA+ERmeIhkszolWdjqS8hZ4h/XvZx87BK4l
         hoOonM+4eoGvEdIqE562SyMeQ9a6Mv2c1Q/c0pMVBKWv6btFz+Wm043wZoVY8hJH8FOt
         pwNoEbAAQ1cN7epzPEa7B+80VJFsJ7moATxsAi7b6AQk7dcTtJzHDv/VX9n6nPAM+XdX
         qMQqPawC02uoQzdniyyhbgexhMOZuHsjJSO6nx0NYgGgJPiISGQK0Usvnh9OoXVKZJXQ
         Wtcw==
X-Gm-Message-State: AC+VfDwYgwL8FwlvDBDo47c8jW9AOb043XXHEVxcApWZaaYIqLIr6uMg
	IMpsw3VRPuXd3Nbij6hTmbEiNOWVy0Hdl0lL+s6hEm4rFRK8jvsELcT21hfaYkLJRBXZFyJD2wI
	+aePjlOeGR1l2xDG64XD7U/1erMo=
X-Received: by 2002:a05:6214:e4d:b0:5c7:d03c:f2b2 with SMTP id o13-20020a0562140e4d00b005c7d03cf2b2mr361301qvc.28.1684441851010;
        Thu, 18 May 2023 13:30:51 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ7vnjCpdmwKeoU/jIRT/FUAUaf5A1UGyq52Jfrp1Qr7akRm4t7XVgexBhARVnPncRSkeITgNQ==
X-Received: by 2002:a05:6214:e4d:b0:5c7:d03c:f2b2 with SMTP id o13-20020a0562140e4d00b005c7d03cf2b2mr361277qvc.28.1684441850746;
        Thu, 18 May 2023 13:30:50 -0700 (PDT)
Message-ID: <b07c93bc7cb71a32091794cd97f7c702c34539da.camel@redhat.com>
Subject: Re: [PATCH 3/4] drm/nouveau: stop using is_swiotlb_active
From: Lyude Paul <lyude@redhat.com>
To: Christoph Hellwig <hch@lst.de>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>,  Borislav Petkov <bp@alien8.de>, Dave
 Hansen <dave.hansen@linux.intel.com>, x86@kernel.org, "H. Peter Anvin"
 <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>, Karol Herbst
 <kherbst@redhat.com>
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux.dev, 
	linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Date: Thu, 18 May 2023 16:30:49 -0400
In-Reply-To: <20230518134253.909623-4-hch@lst.de>
References: <20230518134253.909623-1-hch@lst.de>
	 <20230518134253.909623-4-hch@lst.de>
Organization: Red Hat Inc.
User-Agent: Evolution 3.44.4 (3.44.4-3.fc36)
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Reviewed-by: Lyude Paul <lyude@redhat.com>

Thanks for getting to this!

On Thu, 2023-05-18 at 15:42 +0200, Christoph Hellwig wrote:
> Drivers have no business looking into dma-mapping internals and checking
> what backend is used.  Unfortunately the DRM core is still broken and
> tries to do plain page allocations instead of using DMA API allocators
> by default, and uses various bandaids on when to use dma_alloc_coherent.
>
> Switch nouveau to use the same (broken) scheme as amdgpu and radeon
> to remove the last driver user of is_swiotlb_active.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/gpu/drm/nouveau/nouveau_ttm.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> index 1469a88910e45d..486f39f31a38df 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
> @@ -24,9 +24,9 @@
>   */
>
>  #include <linux/limits.h>
> -#include <linux/swiotlb.h>
>
>  #include <drm/ttm/ttm_range_manager.h>
> +#include <drm/drm_cache.h>
>
>  #include "nouveau_drv.h"
>  #include "nouveau_gem.h"
> @@ -265,7 +265,6 @@ nouveau_ttm_init(struct nouveau_drm *drm)
>  	struct nvkm_pci *pci = device->pci;
>  	struct nvif_mmu *mmu = &drm->client.mmu;
>  	struct drm_device *dev = drm->dev;
> -	bool need_swiotlb = false;
>  	int typei, ret;
>
>  	ret = nouveau_ttm_init_host(drm, 0);
> @@ -300,13 +299,10 @@ nouveau_ttm_init(struct nouveau_drm *drm)
>  		drm->agp.cma = pci->agp.cma;
>  	}
>
> -#if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
> -	need_swiotlb = is_swiotlb_active(dev->dev);
> -#endif
> -
>  	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
>  				  dev->anon_inode->i_mapping,
> -				  dev->vma_offset_manager, need_swiotlb,
> +				  dev->vma_offset_manager,
> +				  drm_need_swiotlb(drm->client.mmu.dmabits),
>  				  drm->client.mmu.dmabits <= 32);
>  	if (ret) {
>  		NV_ERROR(drm, "error initialising bo driver, %d\n", ret);

-- 
Cheers,
 Lyude Paul (she/her)
 Software Engineer at Red Hat



From xen-devel-bounces@lists.xenproject.org Sat May 20 06:02:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 06:02:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537074.836515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Ffa-00023d-De; Sat, 20 May 2023 06:02:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537074.836515; Sat, 20 May 2023 06:02:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Ffa-0001zn-7Y; Sat, 20 May 2023 06:02:22 +0000
Received: by outflank-mailman (input) for mailman id 537074;
 Fri, 19 May 2023 13:33:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IQyD=BI=gmail.com=oleshiiwood@srs-se1.protection.inumbo.net>)
 id 1q00Eb-0007qv-Qa
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 13:33:30 +0000
Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com
 [2607:f8b0:4864:20::1029])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb3e465c-f649-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 15:33:26 +0200 (CEST)
Received: by mail-pj1-x1029.google.com with SMTP id
 98e67ed59e1d1-24e14a24c9dso2573239a91.0
 for <xen-devel@lists.xenproject.org>; Fri, 19 May 2023 06:33:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb3e465c-f649-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684503204; x=1687095204;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=OEEI39IrNGfso+EqF91j5RdJQZ9anQGMII+ULCY0T78=;
        b=ks/Mgju6qJ+G2BcmQmpf89e4b3+qQ9o2EaZJBzESO5wRlJzOZE4LYDOsvDvluoaurl
         glxz6b9prqh8tmrOcPJ56dcYclfeV4kkK7hlcCN1+T7ecGtjkiTI7jDveo9cco3+mJ3Z
         HBHV6jNVAOIOdYBth+Buj9BwDhwR8oXaBNKAW612i6CNRHvCS7l9EmiAddWpSeP+SLjH
         xrqaIAxfo6Ux8LxcG9Ok0Hd14ohM28FXSVQUbYYUnKvxqdW62irVOsmLcoE9WVvkvOCW
         ng7qsQY4Ll5HpmyXKoS+pT/M19RNfSxZayml8skgX9Ukkl0R1WsP9vNU44iwveT33Vuz
         hI9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684503204; x=1687095204;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=OEEI39IrNGfso+EqF91j5RdJQZ9anQGMII+ULCY0T78=;
        b=Yu7AThLPDbnuLDtXyGr5aVexwxAWvy8AffXrT+Kx1w6Stn4pT5gsytKZYlApZNuOXF
         u+ZzFea4A8sETIThvTwaX0LA7yksB+FC95J1NyV8WNUj4zf5wD9oO6a3uFXzh45B/40T
         XztKLhsWYjocvV5g/Vr3sU5x7RgP19OBQuVQ5Y5zcbyh3DYk6z95oeNXR/mOlKS4xtbE
         KraC+bPnR+1DivKAVkPiHxDDg9DtAZ4F6x1O37vi4ym0dDI9e/1xKb0qciaZBWsPMNYx
         gSPCR9jwwyEhKyv93zWTF5XLAulfj9tv3cqrqXF9ilM1Mdc8+5syX2mS6N5kPkLm3Da+
         se+Q==
X-Gm-Message-State: AC+VfDzXK8Q1KQbp/LCs7Q/EyUNaZR05J1a1SFwMFzo6DkgynYlB5ntM
	92L0+Sg9Jcfkin8E4tuVRUKn+lBQX3+zqY+dp8A=
X-Google-Smtp-Source: ACHHUZ4hBKr9YfPsxtg1DKVb68SthpLvpGuVq47imTCUeTs6lJNZZwioyZSdS7FsshyoXFESkGzT8/BqCqTmD8Ra/jE=
X-Received: by 2002:a17:90b:358b:b0:24b:8480:39d6 with SMTP id
 mm11-20020a17090b358b00b0024b848039d6mr2185017pjb.0.1684503203898; Fri, 19
 May 2023 06:33:23 -0700 (PDT)
MIME-Version: 1.0
References: <CA+SAi2uwrKFYN1tkYJ1_LVC-f+b-xb46RWTUv6wDOUO41yx8zg@mail.gmail.com>
 <alpine.DEB.2.22.394.2304271451420.3419@ubuntu-linux-20-04-desktop>
 <CA+SAi2u4rqdJwO5s_wU2brHgqtV=GrOpBkk+7ZXr9D4rpKME9w@mail.gmail.com>
 <4e859659-8532-7ba2-63b9-a06d91cb0ffc@amd.com> <CA+SAi2u3UdjDkpMWT0ScY8b84GutXmn+7hdMYSxJSDictgzhXw@mail.gmail.com>
 <CA+SAi2u9uT7R6u1csxg1PqTLnJ-i=+71H3ymP5REv09-srJEYA@mail.gmail.com>
 <alpine.DEB.2.22.394.2305091248560.974517@ubuntu-linux-20-04-desktop>
 <CA+SAi2u_gwuotOWexJ1MXii82NkLx8inx4VO_f_EjO9NqgM+CQ@mail.gmail.com>
 <bcac90c2-ef35-2908-9fe6-f09c1b1e2340@amd.com> <CA+SAi2sgHbUKk6mQVnFWQWJ1LBY29GW+eagrqHNN6TLDmv2AgQ@mail.gmail.com>
 <CA+SAi2tErcaAkRT5zhTwSE=-jszwAWNtEAnm5jNGEP1NoqbQ3w@mail.gmail.com>
 <53af4bc6-97ad-d806-003b-38e70ccd2b58@amd.com> <CA+SAi2vrB714Tc9dn4SbtDo3VrT3Q8OpSiFXRLRaO5=0BJo_rQ@mail.gmail.com>
 <f0e6ca10-2142-7c39-0c7b-042c454e7e08@amd.com> <CA+SAi2tCVDiQ1BLdvuH2XnvTDGDCnPBDCq70AVbsO+TZKMERSw@mail.gmail.com>
 <fa9ebe0a-6ba0-6f1a-0df1-ad65ec1e93b3@amd.com>
In-Reply-To: <fa9ebe0a-6ba0-6f1a-0df1-ad65ec1e93b3@amd.com>
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Fri, 19 May 2023 16:38:27 +0300
Message-ID: <CA+SAi2u9uxSgUHZ6EdNuQHSh9FYn3YR0kEg9BB3BQwrkf11zCw@mail.gmail.com>
Subject: Re: xen cache colors in ARM
To: Michal Orzel <michal.orzel@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Carlo Nonato <carlo.nonato@minervasys.tech>, 
	Stewart.Hildebrand@amd.com
Content-Type: multipart/alternative; boundary="000000000000176ab505fc0bf7c0"

--000000000000176ab505fc0bf7c0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello,

Thanks Michal.

Then the next question. This one is more related to integration than to
development.
The license for Xen at the xlnx_rebase_4.17 branch of the Xilinx repo has
changed; I found this out when I built that version.
Now the bitbake/Yocto build fails because the md5 hash of the COPYING file
no longer matches the expected one.
The expected md5 hashes are stored in the
sources/libs/meta-virtualization/recipes-extended/xen directory, in the files:
xen-tools_4.15.bb
xen_4.15.bb
xen_git.bb
xen-tools_git.bb
xen-tools_4.16.bb
xen_4.16.bb
So the question is: should I update the license file for all our
branches, or is it possible to keep the old one for old branches ?

Regards,
Oleg


Tue, 16 May 2023 at 21:00, Michal Orzel <michal.orzel@amd.com>:

>
>
> On 16/05/2023 17:14, Oleg Nikitenko wrote:
> >
> >
> >
> > Hi guys,
> >
> > Thanks Michal.
> >
> > So if I have more RAM, it is possible to increase the color size.
> >
> > For example, 8GB/16 is approximately 512MB.
> > Is this correct ?
> Yes.
> To my previous reply I should also add that the number of colors depends
> on the page size,
> but in Xen, we use 4kB pages so 64kB way size results in 16 colors.
>
> ~Michal
>
> > Regards,
> > Oleg
> >
> > Tue, 16 May 2023 at 17:40, Michal Orzel <michal.orzel@amd.com>:
> >
> >     Hi Oleg,
> >
> >     On 16/05/2023 14:15, Oleg Nikitenko wrote:
> >     >
> >     >
> >     >
> >     > Hello,
> >     >
> >     > Thanks a lot Michal.
> >     >
> >     > Then the next question.
> >     > When I just started my experiments with xen, Stefano mentioned
> that each cache's color size is 256M.
> >     > Is it possible to extend this figure ?
> >     With 16 colors (e.g. on Cortex-A53) and 4GB of memory, roughly each
> color is 256M (i.e. 4GB/16 = 256M).
> >     So as you can see this figure depends on the number of colors and
> memory size.
> >
> >     ~Michal
> >
> >     >
> >     > Regards,
> >     > Oleg
> >     >
> >     > Mon, 15 May 2023 at 11:57, Michal Orzel <michal.orzel@amd.com>:
> >     >
> >     >     Hi Oleg,
> >     >
> >     >     On 15/05/2023 10:51, Oleg Nikitenko wrote:
> >     >     >
> >     >     >
> >     >     >
> >     >     > Hello guys,
> >     >     >
> >     >     > Thanks a lot.
> >     >     > After a long problem list I was able to run xen with Dom0
> with a cache color.
> >     >     > One more question from my side.
> >     >     > I want to run a guest with color mode too.
> >     >     > I inserted a string into the guest config file: llc-colors = "9-13"
> >     >     > I got an error
> >     >     > [  457.517004] loop0: detected capacity change from 0 to 385840
> >     >     > Parsing config from /xen/red_config.cfg
> >     >     > /xen/red_config.cfg:26: config parsing error near `-colors': lexical error
> >     >     > warning: Config file looks like it contains Python code.
> >     >     > warning:  Arbitrary Python is no longer supported.
> >     >     > warning:  See https://wiki.xen.org/wiki/PythonInXlConfig
> >     >     > Failed to parse config: Invalid argument
> >     >     > So this is the question.
> >     >     > Is it possible to assign a color mode for the DomU by config file ?
> >     >     > If so, what string should I use?
> >     >     Please, always refer to the relevant documentation. In this
> case, for xl.cfg:
> >     >
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/man/xl.cfg.5.pod.in#L2890
> >     >
> >     >     ~Michal
> >     >
> >     >     >
> >     >     > Regards,
> >     >     > Oleg
> >     >     >
> >     >     > Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >     >     >
> >     >     >     Hi Michal,
> >     >     >
> >     >     >     Thanks.
> >     >     >     This compilation previously had a name CONFIG_COLORING.
> >     >     >     It mixed me up.
> >     >     >
> >     >     >     Regards,
> >     >     >     Oleg
> >     >     >
> >     >     >     Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com>:
> >     >     >
> >     >     >         Hi Oleg,
> >     >     >
> >     >     >         On 11/05/2023 12:02, Oleg Nikitenko wrote:
> >     >     >         >
> >     >     >         >
> >     >     >         >
> >     >     >         > Hello,
> >     >     >         >
> >     >     >         > Thanks Stefano.
> >     >     >         > Then the next question.
> >     >     >         > I cloned the xen repo from the xilinx site
> https://github.com/Xilinx/xen.git
> >     >     >         > I managed to build the xlnx_rebase_4.17 branch in my
> environment.
> >     >     >         > I did it without coloring first. I did not find
> any color footprints in this branch.
> >     >     >         > I realized coloring is not in the xlnx_rebase_4.17
> branch yet.
> >     >     >         This is not true. Cache coloring is in
> xlnx_rebase_4.17. Please see the docs:
> >     >     >
> https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
> >     >     >
> >     >     >         It describes the feature and documents the required
> properties.
> >     >     >
> >     >     >         ~Michal
> >     >     >
> >     >     >         >
> >     >     >         >
> >     >     >         > Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org>:
> >     >     >         >
> >     >     >         >     We test Xen Cache Coloring regularly on
> zcu102. Every Petalinux release
> >     >     >         >     (twice a year) is tested with cache coloring
> enabled. The last Petalinux
> >     >     >         >     release is 2023.1 and the kernel used is this:
> >     >     >         >
> https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
> >     >     >         >
> >     >     >         >
> >     >     >         >     On Tue, 9 May 2023, Oleg Nikitenko wrote:
> >     >     >         >     > Hello guys,
> >     >     >         >     >
> >     >     >         >     > I have a couple of more questions.
> >     >     >         >     > Have you ever run xen with the cache
> coloring at Zynq UltraScale+ MPSoC zcu102 xczu15eg ?
> >     >     >         >     > When did you run xen with the cache coloring last time ?
> >     >     >         >     > What kernel version did you use for Dom0
> when you ran xen with the cache coloring last time ?
> >     >     >         >     >
> >     >     >         >     > Regards,
> >     >     >         >     > Oleg
> >     >     >         >     >
> >     >     >         >     > Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com>:
> >     >     >         >     >       Hi Michal,
> >     >     >         >     >
> >     >     >         >     > Thanks.
> >     >     >         >     >
> >     >     >         >     > Regards,
> >     >     >         >     > Oleg
> >     >     >         >     >
> >     >     >         >     > Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com>:
> >     >     >         >     >       Hi Oleg,
> >     >     >         >     >
> >     >     >         >     >       Replying, so that you do not need to
> wait for Stefano.
> >     >     >         >     >
> >     >     >         >     >       On 05/05/2023 10:28, Oleg Nikitenko wrote:
> >     >     >         >     >       >
> >     >     >         >     >       >
> >     >     >         >     >       >
> >     >     >         >     >       > Hello Stefano,
> >     >     >         >     >       >
> >     >     >         >     >       > I would like to try the Xen cache coloring feature from this repo: https://xenbits.xen.org/git-http/xen.git
> >     >     >         >     >       > Could you tell me which branch I should use?
> >     >     >         >     >       The cache coloring feature is not part of the upstream tree; it is still under review. You can find it integrated only in the Xilinx Xen tree.
> >     >     >         >     >
> >     >     >         >     >       ~Michal
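
For context, cache coloring in that tree is driven from the Xen command line. The option names below (`way_size`, `dom0_colors`) are taken from one revision of the coloring series and may differ in the tree you actually build, so treat this purely as a hedged illustration and check the coloring documentation shipped in that tree:

```dts
/* Illustrative /chosen fragment enabling cache coloring for Dom0.
 * "way_size" and "dom0_colors" are assumed option names from one
 * revision of the coloring series -- not confirmed for this tree. */
xen,xen-bootargs = "console=dtuart dom0_mem=1600M sched=null way_size=65536 dom0_colors=0-3";
```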
> >     >     >         >     >
> >     >     >         >     >       >
> >     >     >         >     >       > Regards,
> >     >     >         >     >       > Oleg
> >     >     >         >     >       >
> >     >     >         >     >       > Fri, 28 Apr 2023 at 00:51, Stefano Stabellini <sstabellini@kernel.org>:
> >     >     >         >     >       >
> >     >     >         >     >       >     I am familiar with the zcu102, but I don't know how you could possibly generate an SError.
> >     >     >         >     >       >
> >     >     >         >     >       >     I suggest trying ImageBuilder [1] to generate the boot configuration, as it is known to work well for the zcu102.
> >     >     >         >     >       >
> >     >     >         >     >       >     [1] https://gitlab.com/xen-project/imagebuilder
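
A minimal ImageBuilder config for a zcu102-class setup might look like the sketch below. The variable names follow the ImageBuilder README; the file names, addresses, and sizes are placeholders (not values from this thread) and must match your own boot artifacts:

```shell
# Hedged sketch of an ImageBuilder config -- variable names per the
# ImageBuilder README, values are placeholders for a zcu102-class board.
MEMORY_START="0x0"
MEMORY_END="0x80000000"

DEVICE_TREE="system.dtb"        # device tree blob passed to Xen
XEN="xen"                       # Xen hypervisor binary
XEN_CMD="console=dtuart dom0_mem=1600M"

DOM0_KERNEL="Image"             # Dom0 Linux kernel
DOM0_RAMDISK="rootfs.cpio.gz"   # Dom0 initrd

NUM_DOMUS=0                     # start with Dom0 only while debugging

UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

The boot script is then generated with something like `bash ./scripts/uboot-script-gen -c <config> -d . -t tftp`, and the resulting boot.scr is what u-boot runs at boot.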
> >     >     >         >     >       >
> >     >     >         >     >       >
> >     >     >         >     >       >     On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> >     >     >         >     >       >     > Hello Stefano,
> >     >     >         >     >       >     >
> >     >     >         >     >       >     > Thanks for clarification.
> >     >     >         >     >       >     > We use neither ImageBuilder nor a u-boot boot script.
> >     >     >         >     >       >     > The model is zcu102-compatible.
> >     >     >         >     >       >     >
> >     >     >         >     >       >     > Regards,
> >     >     >         >     >       >     > O.
> >     >     >         >     >       >     >
> >     >     >         >     >       >     > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
> >     >     >         >     >       >     >       This is interesting. Are you using Xilinx hardware by any chance? If so, which board?
> >     >     >         >     >       >     >
> >     >     >         >     >       >     >       Are you using ImageBuilder to generate your boot.scr boot script? If so, could you please post your ImageBuilder config file? If not, can you post the source of your u-boot boot script?
> >     >     >         >     >       >     >
> >     >     >         >     >       >     >       SErrors are supposed to be related to a hardware failure of some kind. You are not supposed to be able to trigger an SError easily by "mistake". I have not seen SErrors due to wrong cache coloring configurations on any Xilinx board before.
> >     >     >         >     >       >     >
> >     >     >         >     >       >     >       The differences between Xen with and without cache coloring, from a hardware perspective, are:
> >     >     >         >     >       >     >
> >     >     >         >     >       >     >       - With cache coloring, the SMMU is enabled and does address translations even for Dom0. Without cache coloring the SMMU could be disabled, and if enabled, the SMMU doesn't do any address translations for Dom0. If there is a hardware failure related to SMMU address translation, it could only trigger with cache coloring. This would be my normal suggestion for you to explore, but the failure happens too early, before any DMA-capable device is programmed. So I don't think this can be the issue.
> >     >     >         >     >       >     >
> >     >     >         >     >       >     >       - With cache coloring, the memory allocation is very different, so you'll end up using different DDR regions for Dom0. So if your DDR is defective, you might only see a failure with cache coloring enabled, because you end up using different regions.
> >     >     >         >     >       >     >
> >     >     >         >     >       >     >
> >     >     >         >     >       >     >       On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> >     >     >         >     >       >     >       > Hi Stefano,
> >     >     >         >     >       >     >       >
> >     >     >         >     >       >     >       > Thank you.
> >     >     >         >     >       >     >       > If I build Xen without color support, this error does not occur.
> >     >     >         >     >       >     >       > All the domains boot well.
> >     >     >         >     >       >     >       > Hence it cannot be a hardware issue.
> >     >     >         >     >       >     >       > The panic occurred during unpacking of the rootfs.
> >     >     >         >     >       >     >       > I have attached the Xen/Dom0 boot log without coloring.
> >     >     >         >     >       >     >       > The highlighted lines are printed exactly after the point where the panic first occurred.
> >     >     >         >     >       >     >       >
> >     >     >         >     >       >     >       > Xen 4.16.1-pre
> >     >     >         >     >       >     >       > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> >     >     >         >     >       >     >       > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
> >     >     >         >     >       >     >       > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
> >     >     >         >     >       >     >       > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
> >     >     >         >     >       >     >       > (XEN) 64-bit Execution:
> >     >     >         >     >       >     >       > (XEN)   Processor Features: 0000000000002222 0000000000000000
> >     >     >         >     >       >     >       > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> >     >     >         >     >       >     >       > (XEN)     Extensions: FloatingPoint AdvancedSIMD
> >     >     >         >     >       >     >       > (XEN)   Debug Features: 0000000010305106 0000000000000000
> >     >     >         >     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> >     >     >         >     >       >     >       > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
> >     >     >         >     >       >     >       > (XEN)   ISA Features:  0000000000011120 0000000000000000
> >     >     >         >     >       >     >       > (XEN) 32-bit Execution:
> >     >     >         >     >       >     >       > (XEN)   Processor Features: 0000000000000131:0000000000011011
> >     >     >         >     >       >     >       > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
> >     >     >         >     >       >     >       > (XEN)     Extensions: GenericTimer Security
> >     >     >         >     >       >     >       > (XEN)   Debug Features: 0000000003010066
> >     >     >         >     >       >     >       > (XEN)   Auxiliary Features: 0000000000000000
> >     >     >         >     >       >     >       > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
> >     >     >         >     >       >     >       > (XEN)                          0000000001260000 0000000002102211
> >     >     >         >     >       >     >       > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
> >     >     >         >     >       >     >       > (XEN)                 0000000001112131 0000000000011142 0000000000011121
> >     >     >         >     >       >     >       > (XEN) Using SMC Calling Convention v1.2
> >     >     >         >     >       >     >       > (XEN) Using PSCI v1.1
> >     >     >         >     >       >     >       > (XEN) SMP: Allowing 4 CPUs
> >     >     >         >     >       >     >       > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> >     >     >         >     >       >     >       > (XEN) GICv2 initialization:
> >     >     >         >     >       >     >       > (XEN)         gic_dist_addr=00000000f9010000
> >     >     >         >     >       >     >       > (XEN)         gic_cpu_addr=00000000f9020000
> >     >     >         >     >       >     >       > (XEN)         gic_hyp_addr=00000000f9040000
> >     >     >         >     >       >     >       > (XEN)         gic_vcpu_addr=00000000f9060000
> >     >     >         >     >       >     >       > (XEN)         gic_maintenance_irq=25
> >     >     >         >     >       >     >       > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
> >     >     >         >     >       >     >       > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
> >     >     >         >     >       >     >       > (XEN) Using scheduler: null Scheduler (null)
> >     >     >         >     >       >     >       > (XEN) Initializing null scheduler
> >     >     >         >     >       >     >       > (XEN) WARNING: This is experimental software in development.
> >     >     >         >     >       >     >       > (XEN) Use at your own risk.
> >     >     >         >     >       >     >       > (XEN) Allocated console ring of 32 KiB.
> >     >     >         >     >       >     >       > (XEN) CPU0: Guest atomics will try 12 times before pausing the domain
> >     >     >         >     >       >     >       > (XEN) Bringing up CPU1
> >     >     >         >     >       >     >       > (XEN) CPU1: Guest atomics will try 13 times before pausing the domain
> >     >     >         >     >       >     >       > (XEN) CPU 1 booted.
> >     >     >         >     >       >     >       > (XEN) Bringing up CPU2
> >     >     >         >     >       >     >       > (XEN) CPU2: Guest atomics will try 13 times before pausing the domain
> >     >     >         >     >       >     >       > (XEN) CPU 2 booted.
> >     >     >         >     >       >     >       > (XEN) Bringing up CPU3
> >     >     >         >     >       >     >       > (XEN) CPU3: Guest atomics will try 13 times before pausing the domain
> >     >     >         >     >       >     >       > (XEN) Brought up 4 CPUs
> >     >     >         >     >       >     >       > (XEN) CPU 3 booted.
> >     >     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: probing hardware configuration...
> >     >     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: SMMUv2 with:
> >     >     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stage 2 translation
> >     >     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: stream matching with 48 register groups, mask 0x7fff<2>smmu: /axi/smmu@fd800000: 16 context banks (0 stage-2 only)
> >     >     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: Stage-2: 48-bit IPA -> 48-bit PA
> >     >     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: registered 29 master devices
> >     >     >         >     >       >     >       > (XEN) I/O virtualisation enabled
> >     >     >         >     >       >     >       > (XEN)  - Dom0 mode: Relaxed
> >     >     >         >     >       >     >       > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> >     >     >         >     >       >     >       > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> >     >     >         >     >       >     >       > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> >     >     >         >     >       >     >       > (XEN) alternatives: Patching with alt table 00000000002cc5c8 -> 00000000002ccb2c
> >     >     >         >     >       >     >       > (XEN) *** LOADING DOMAIN 0 ***
> >     >     >         >     >       >     >       > (XEN) Loading d0 kernel from boot module @ 0000000001000000
> >     >     >         >     >       >     >       > (XEN) Loading ramdisk from boot module @ 0000000002000000
> >     >     >         >     >       >     >       > (XEN) Allocating 1:1 mappings totalling 1600MB for dom0:
> >     >     >         >     >       >     >       > (XEN) BANK[0] 0x00000010000000-0x00000020000000 (256MB)
> >     >     >         >     >       >     >       > (XEN) BANK[1] 0x00000024000000-0x00000028000000 (64MB)
> >     >     >         >     >       >     >       > (XEN) BANK[2] 0x00000030000000-0x00000080000000 (1280MB)
> >     >     >         >     >       >     >       > (XEN) Grant table range: 0x00000000e00000-0x00000000e40000
> >     >     >         >     >       >     >       > (XEN) smmu: /axi/smmu@fd800000: d0: p2maddr 0x000000087bf94000
> >     >     >         >     >       >     >       > (XEN) Allocating PPI 16 for event channel interrupt
> >     >     >         >     >       >     >       > (XEN) Extended region 0: 0x81200000->0xa0000000
> >     >     >         >     >       >     >       > (XEN) Extended region 1: 0xb1200000->0xc0000000
> >     >     >         >     >       >     >       > (XEN) Extended region 2: 0xc8000000->0xe0000000
> >     >     >         >     >       >     >       > (XEN) Extended region 3: 0xf0000000->0xf9000000
> >     >     >         >     >       >     >       > (XEN) Extended region 4: 0x100000000->0x600000000
> >     >     >         >     >       >     >       > (XEN) Extended region 5: 0x880000000->0x8000000000
> >     >     >         >     >       >     >       > (XEN) Extended region 6: 0x8001000000->0x10000000000
> >     >     >         >     >       >     >       > (XEN) Loading zImage from 0000000001000000 to 0000000010000000-0000000010e41008
> >     >     >         >     >       >     >       > (XEN) Loading d0 initrd from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617
> >     >     >         >     >       >     >       > (XEN) Loading d0 DTB to 0x0000000013400000-0x000000001340cbdc
> >     >     >         >     >       >     >       > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> >     >     >         >     >       >     >       > (XEN) Std. Loglevel: All
> >     >     >         >     >       >     >       > (XEN) Guest Loglevel: All
> >     >     >         >     >       >     >       > (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> >     >     >         >     >       >     >       > (XEN) null.c:353: 0 <-- d0v0
> >     >     >         >     >       >     >       > (XEN) Freed 356kB init memory.
> >     >     >         >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x84000050
> >     >     >         >     >       >     >       > (XEN) d0v0 Unhandled SMC/HVC: 0x8600ff01
> >     >     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> >     >     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> >     >     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> >     >     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> >     >     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> >     >     >         >     >       >     >       > (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> >     >     >         >     >       >     >       > [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
> >     >     >         >     >       >     >       > [    0.000000] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable-linux-gcc (GCC) 11.3.0, GNU ld (GNU Binutils) 2.38.20220708) #1 SMP Tue Feb 21 05:47:54 UTC 2023
> >     >     >         >     >       >     >       > [    0.000000] Machine model: D14 Viper Board - White Unit
> >     >     >         >     >       >     >       > [    0.000000] Xen 4.16 support found
> >     >     >         >     >       >     >       > [    0.000000] Zone ranges:
> >     >     >         >     >       >     >       > [    0.000000]   DMA      [mem 0x0000000010000000-0x000000007fffffff]
> >     >     >         >     >       >     >       > [    0.000000]   DMA32    empty
> >     >     >         >     >       >     >       > [    0.000000]   Normal   empty
> >     >     >         >     >       >     >       > [    0.000000] Movable zone start for each node
> >     >     >         >     >       >     >       > [    0.000000] Early memory node ranges
> >     >     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000010000000-0x000000001fffffff]
> >     >     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000022000000-0x0000000022147fff]
> >     >     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000022200000-0x0000000022347fff]
> >     >     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000024000000-0x0000000027ffffff]
> >     >     >         >     >       >     >       > [    0.000000]   node   0: [mem 0x0000000030000000-0x000000007fffffff]
> >     >     >         >     >       >     >       > [    0.000000] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]
> >     >     >         >     >       >     >       > [    0.000000] On node 0, zone DMA: 8192 pages in unavailable ranges
> >     >     >         >     >       >     >       > [    0.000000] On node 0, zone DMA: 184 pages in unavailable ranges
> >     >     >         >     >       >     >       > [    0.000000] On node 0, zone DMA: 7352 pages in unavailable ranges
> >     >     >         >     >       >     >       > [    0.000000] cma: Reserved 256 MiB at 0x000000006e000000
> >     >     >         >     >       >     >       > [    0.000000] psci: probing for conduit method from DT.
> >     >     >         >     >       >     >       > [    0.000000] psci: PSCIv1.1 detected in firmware.
> >     >     >         >     >       >     >       > [    0.000000] psci: Using standard PSCI v0.2 function IDs
> >     >     >         >     >       >     >       > [    0.000000] psci: Trusted OS migration not required
> >     >     >         >     >       >     >       > [    0.000000] psci: SMC Calling Convention v1.1
> >     >     >         >     >       >     >       > [    0.000000] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536
> >     >     >         >     >       >     >       > [    0.000000] Detected VIPT I-cache on CPU0
> >     >     >         >     >       >     >       > [    0.000000] CPU features: kernel page table isolation forced ON by KASLR
> >     >     >         >     >       >     >       > [    0.000000] CPU features: detected: Kernel page table isolation (KPTI)
> >     >     >         >     >       >     >       > [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 403845
> >     >     >         >     >       >     >       > [    0.000000] Kernel command line: console=hvc0 earlycon=xen earlyprintk=xen clk_ignore_unused fips=1 root=/dev/ram0 maxcpus=2
> >     >     >         >     >       >     >       > [    0.000000] Unknown kernel command line parameters "earlyprintk=xen fips=1", will be passed to user space.
> >     >     >         >     >       >     >       > [    0.000000] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
> >     >     >         >     >       >     >       > [    0.000000] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> >     >     >         >     >       >     >       > [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
> >     >     >         >     >       >     >       > [    0.000000] mem auto-init: clearing system memory may take some time...
> >     >     >         >     >       >     >       > [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
> >     >     >         >     >       >     >       > [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> >     >     >         >     >       >     >       > [    0.000000] rcu: Hierarchical RCU implementation.
> >     >     >         >     >       >     >       > [    0.000000] rcu: RCU event tracing is enabled.
> >     >     >         >     >       >     >       > [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> >     >     >         >     >       >     >       > [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
> >     >     >         >     >       >     >       > [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
> >     >     >         >     >       >     >       > [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> >     >     >         >     >       >     >       > [    0.000000] Root IRQ handler: gic_handle_irq
> >     >     >         >     >       >     >       > [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
> >     >     >         >     >       >     >       > [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
> >     >     >         >     >       >     >       > [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
> >     >     >         >     >       >     >       > [    0.000258] Console: colour dummy device 80x25
> >     >     >         >     >       >     >       > [    0.310231] printk: console [hvc0] enabled
> >     >     >         >     >       >     >       > [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> >     >     >         >     >       >     >       > [    0.324851] pid_max: default: 32768 minimum: 301
> >     >     >         >     >       >     >       > [    0.329706] LSM: Security Framework initializing
> >     >     >         >     >       >     >       > [    0.334204] Yama: becoming mindful.
> >     >     >         >     >       >     >       > [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> >     >     >         >     >       >     >       > [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> >     >     >         >     >       >     >       > [    0.354743] xen:grant_table: Grant tables using version 1 layout
> >     >     >         >     >       >     >       > [    0.359132] Grant table initialized
> >     >     >         >     >       >     >       > [    0.362664] xen:events: Using FIFO-based ABI
> >     >     >         >     >       >     >       > [    0.366993] Xen: initializing cpu0
> >     >     >         >     >       >     >       > [    0.370515] rcu: Hierarchical SRCU implementation.
> >     >     >         >     >       >     >       > [    0.375930] smp: Bringing up secondary CPUs ...
> >     >     >         >     >       >     >       > (XEN) null.c:353: 1 <-- d0v1
> >     >     >         >     >       >     >       > (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> >     >     >         >     >       >     >       > [    0.382549] Detected VIPT I-cache on CPU1
> >     >     >         >     >       >     >       > [    0.388712] Xen: initializing cpu1
> >     >     >         >     >       >     >       > [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
> >     >     >         >     >       >     >       > [    0.388829] smp: Brought up 1 node, 2 CPUs
> >     >     >         >     >       >     >       > [    0.406941] SMP: Total of 2 processors activated.
> >     >     >         >     >       >     >       > [    0.411698] CPU features: detected: 32-bit EL0 Support
> >     >     >         >     >       >     >       > [    0.416888] CPU features: detected: CRC32 instructions
> >     >     >         >     >       >     >       > [    0.422121] CPU: All CPU(s) started at EL1
> >     >     >         >     >       >     >       > [    0.426248] alternatives: patching kernel code
> >     >     >         >     >       >     >       > [    0.431424] devtmpfs: initialized
> >     >     >         >     >       >     >       > [    0.441454] KASLR enabled
> >     >     >         >     >       >     >       > [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
> >     >     >         >     >       >     >       > [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
> >     >     >         >     >       >     >       > [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
> >     >     >         >     >       >     >       > [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
> >     >     >         >     >       >     >       > [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
> >     >     >         >     >       >     >       > [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> >     >     >         >     >       >     >       > [    0.519478] audit: initializing netlink subsys (disabled)
> >     >     >         >     >       >     >       > [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> >     >     >         >     >       >     >       > [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
> >     >     >         >     >       >     >       > [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> >     >     >         >     >       >     >       > [    0.545608] ASID allocator initialised with 32768 entries
> >     >     >         >     >       >     >       > [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> >     >     >         >     >       >     >       > [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
> >     >     >         >     >       >     >       > [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
> >     >     >         >     >       >     >       > [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
> >     >     >         >     >       >     >       > [    0.591478] HugeTL=
B
> registered 2.00 MiB page size, pre-allocated 0 pages
> >     >     >         >     >       >     >       > [    0.598225] HugeTL=
B
> registered 64.0 KiB page size, pre-allocated 0 pages
> >     >     >         >     >       >     >       > [    0.636520] DRBG:
> Continuing without Jitter RNG
> >     >     >         >     >       >     >       > [    0.737187] raid6:
> neonx8   gen()  2143 MB/s
> >     >     >         >     >       >     >       > [    0.805294] raid6:
> neonx8   xor()  1589 MB/s
> >     >     >         >     >       >     >       > [    0.873406] raid6:
> neonx4   gen()  2177 MB/s
> >     >     >         >     >       >     >       > [    0.941499] raid6:
> neonx4   xor()  1556 MB/s
> >     >     >         >     >       >     >       > [    1.009612] raid6:
> neonx2   gen()  2072 MB/s
> >     >     >         >     >       >     >       > [    1.077715] raid6:
> neonx2   xor()  1430 MB/s
> >     >     >         >     >       >     >       > [    1.145834] raid6:
> neonx1   gen()  1769 MB/s
> >     >     >         >     >       >     >       > [    1.213935] raid6:
> neonx1   xor()  1214 MB/s
> >     >     >         >     >       >     >       > [    1.282046] raid6:
> int64x8  gen()  1366 MB/s
> >     >     >         >     >       >     >       > [    1.350132] raid6:
> int64x8  xor()   773 MB/s
> >     >     >         >     >       >     >       > [    1.418259] raid6:
> int64x4  gen()  1602 MB/s
> >     >     >         >     >       >     >       > [    1.486349] raid6:
> int64x4  xor()   851 MB/s
> >     >     >         >     >       >     >       > [    1.554464] raid6:
> int64x2  gen()  1396 MB/s
> >     >     >         >     >       >     >       > [    1.622561] raid6:
> int64x2  xor()   744 MB/s
> >     >     >         >     >       >     >       > [    1.690687] raid6:
> int64x1  gen()  1033 MB/s
> >     >     >         >     >       >     >       > [    1.758770] raid6:
> int64x1  xor()   517 MB/s
> >     >     >         >     >       >     >       > [    1.758809] raid6:
> using algorithm neonx4 gen() 2177 MB/s
> >     >     >         >     >       >     >       > [    1.762941] raid6:
> .... xor() 1556 MB/s, rmw enabled
> >     >     >         >     >       >     >       > [    1.767957] raid6:
> using neon recovery algorithm
> >     >     >         >     >       >     >       > [    1.772824]
> xen:balloon: Initialising balloon driver
> >     >     >         >     >       >     >       > [    1.778021] iommu:
> Default domain type: Translated
> >     >     >         >     >       >     >       > [    1.782584] iommu:
> DMA domain TLB invalidation policy: strict mode
> >     >     >         >     >       >     >       > [    1.789149] SCSI
> subsystem initialized
> >     >     >         >     >       >     >       > [    1.792820]
> usbcore: registered new interface driver usbfs
> >     >     >         >     >       >     >       > [    1.798254]
> usbcore: registered new interface driver hub
> >     >     >         >     >       >     >       > [    1.803626]
> usbcore: registered new device driver usb
> >     >     >         >     >       >     >       > [    1.808761]
> pps_core: LinuxPPS API ver. 1 registered
> >     >     >         >     >       >     >       > [    1.813716]
> pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <
> giometti@linux.it <mailto:giometti@linux.it> <mailto:giometti@linux.it
> <mailto:giometti@linux.it>> <mailto:giometti@linux.it <mailto:
> giometti@linux.it> <mailto:giometti@linux.it <mailto:giometti@linux.it>>>
> <mailto:giometti@linux.it <mailto:giometti@linux.it> <mailto:
> giometti@linux.it <mailto:giometti@linux.it>> <mailto:giometti@linux.it
> <mailto:giometti@linux.it> <mailto:giometti@linux.it <mailto:
> giometti@linux.it>>>>
> >     >     >         >     >       <mailto:giometti@linux.it <mailto:
> giometti@linux.it> <mailto:giometti@linux.it <mailto:giometti@linux.it>>
> <mailto:giometti@linux.it <mailto:giometti@linux.it> <mailto:
> giometti@linux.it <mailto:giometti@linux.it>>> <mailto:giometti@linux.it
> <mailto:giometti@linux.it> <mailto:giometti@linux.it <mailto:
> giometti@linux.it>> <mailto:giometti@linux.it <mailto:giometti@linux.it>
> <mailto:giometti@linux.it <mailto:giometti@linux.it>>>>>>
> >     >     >         >     >       >     >       > [    1.822903] PTP
> clock support registered
> >     >     >         >     >       >     >       > [    1.826893] EDAC
> MC: Ver: 3.0.0
> >     >     >         >     >       >     >       > [    1.830375]
> zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX
> channels.
> >     >     >         >     >       >     >       > [    1.838863]
> zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX
> channels.
> >     >     >         >     >       >     >       > [    1.847356]
> zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX
> channels.
> >     >     >         >     >       >     >       > [    1.855907] FPGA
> manager framework
> >     >     >         >     >       >     >       > [    1.859952]
> clocksource: Switched to clocksource arch_sys_counter
> >     >     >         >     >       >     >       > [    1.871712] NET:
> Registered PF_INET protocol family
> >     >     >         >     >       >     >       > [    1.871838] IP
> idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> >     >     >         >     >       >     >       > [    1.879392]
> tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes,
> linear)
> >     >     >         >     >       >     >       > [    1.887078]
> Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> >     >     >         >     >       >     >       > [    1.894846] TCP
> established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> >     >     >         >     >       >     >       > [    1.902900] TCP
> bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
> >     >     >         >     >       >     >       > [    1.910350] TCP:
> Hash tables configured (established 16384 bind 16384)
> >     >     >         >     >       >     >       > [    1.916778] UDP
> hash table entries: 1024 (order: 3, 32768 bytes, linear)
> >     >     >         >     >       >     >       > [    1.923509]
> UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
> >     >     >         >     >       >     >       > [    1.930759] NET:
> Registered PF_UNIX/PF_LOCAL protocol family
> >     >     >         >     >       >     >       > [    1.936834] RPC:
> Registered named UNIX socket transport module.
> >     >     >         >     >       >     >       > [    1.942342] RPC:
> Registered udp transport module.
> >     >     >         >     >       >     >       > [    1.947088] RPC:
> Registered tcp transport module.
> >     >     >         >     >       >     >       > [    1.951843] RPC:
> Registered tcp NFSv4.1 backchannel transport module.
> >     >     >         >     >       >     >       > [    1.958334] PCI:
> CLS 0 bytes, default 64
> >     >     >         >     >       >     >       > [    1.962709] Trying
> to unpack rootfs image as initramfs...
> >     >     >         >     >       >     >       > [    1.977090]
> workingset: timestamp_bits=3D62 max_order=3D19 bucket_order=3D0
> >     >     >         >     >       >     >       > [    1.982863]
> Installing knfsd (copyright (C) 1996 okir@monad.swb.de <mailto:
> okir@monad.swb.de> <mailto:okir@monad.swb.de <mailto:okir@monad.swb.de>>
> <mailto:okir@monad.swb.de <mailto:okir@monad.swb.de> <mailto:
> okir@monad.swb.de <mailto:okir@monad.swb.de>>> <mailto:okir@monad.swb.de
> <mailto:okir@monad.swb.de> <mailto:okir@monad.swb.de <mailto:
> okir@monad.swb.de>> <mailto:okir@monad.swb.de <mailto:okir@monad.swb.de>
> <mailto:okir@monad.swb.de <mailto:okir@monad.swb.de>>>> <mailto:
> okir@monad.swb.de <mailto:okir@monad.swb.de> <mailto:okir@monad.swb.de
> <mailto:okir@monad.swb.de>> <mailto:okir@monad.swb.de <mailto:
> okir@monad.swb.de> <mailto:okir@monad.swb.de <mailto:okir@monad.swb.de>>>
> <mailto:okir@monad.swb.de <mailto:okir@monad.swb.de> <mailto:
> okir@monad.swb.de <mailto:okir@monad.swb.de>> <mailto:okir@monad.swb.de
> <mailto:okir@monad.swb.de> <mailto:okir@monad.swb.de <mailto:
> okir@monad.swb.de>>>>>).
> >     >     >         >     >       >     >       > [    2.021045] NET:
> Registered PF_ALG protocol family
> >     >     >         >     >       >     >       > [    2.021122] xor:
> measuring software checksum speed
> >     >     >         >     >       >     >       > [    2.029347]
>  8regs           :  2366 MB/sec
> >     >     >         >     >       >     >       > [    2.033081]
>  32regs          :  2802 MB/sec
> >     >     >         >     >       >     >       > [    2.038223]
>  arm64_neon      :  2320 MB/sec
> >     >     >         >     >       >     >       > [    2.038385] xor:
> using function: 32regs (2802 MB/sec)
> >     >     >         >     >       >     >       > [    2.043614] Block
> layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
> >     >     >         >     >       >     >       > [    2.050959] io
> scheduler mq-deadline registered
> >     >     >         >     >       >     >       > [    2.055521] io
> scheduler kyber registered
> >     >     >         >     >       >     >       > [    2.068227]
> xen:xen_evtchn: Event-channel device installed
> >     >     >         >     >       >     >       > [    2.069281] Serial=
:
> 8250/16550 driver, 4 ports, IRQ sharing disabled
> >     >     >         >     >       >     >       > [    2.076190]
> cacheinfo: Unable to detect cache hierarchy for CPU 0
> >     >     >         >     >       >     >       > [    2.085548] brd:
> module loaded
> >     >     >         >     >       >     >       > [    2.089290] loop:
> module loaded
> >     >     >         >     >       >     >       > [    2.089341] Invali=
d
> max_queues (4), will use default max: 2.
> >     >     >         >     >       >     >       > [    2.094565] tun:
> Universal TUN/TAP device driver, 1.6
> >     >     >         >     >       >     >       > [    2.098655]
> xen_netfront: Initialising Xen virtual ethernet driver
> >     >     >         >     >       >     >       > [    2.104156]
> usbcore: registered new interface driver rtl8150
> >     >     >         >     >       >     >       > [    2.109813]
> usbcore: registered new interface driver r8152
> >     >     >         >     >       >     >       > [    2.115367]
> usbcore: registered new interface driver asix
> >     >     >         >     >       >     >       > [    2.120794]
> usbcore: registered new interface driver ax88179_178a
> >     >     >         >     >       >     >       > [    2.126934]
> usbcore: registered new interface driver cdc_ether
> >     >     >         >     >       >     >       > [    2.132816]
> usbcore: registered new interface driver cdc_eem
> >     >     >         >     >       >     >       > [    2.138527]
> usbcore: registered new interface driver net1080
> >     >     >         >     >       >     >       > [    2.144256]
> usbcore: registered new interface driver cdc_subset
> >     >     >         >     >       >     >       > [    2.150205]
> usbcore: registered new interface driver zaurus
> >     >     >         >     >       >     >       > [    2.155837]
> usbcore: registered new interface driver cdc_ncm
> >     >     >         >     >       >     >       > [    2.161550]
> usbcore: registered new interface driver r8153_ecm
> >     >     >         >     >       >     >       > [    2.168240]
> usbcore: registered new interface driver cdc_acm
> >     >     >         >     >       >     >       > [    2.173109]
> cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapte=
rs
> >     >     >         >     >       >     >       > [    2.181358]
> usbcore: registered new interface driver uas
> >     >     >         >     >       >     >       > [    2.186547]
> usbcore: registered new interface driver usb-storage
> >     >     >         >     >       >     >       > [    2.192643]
> usbcore: registered new interface driver ftdi_sio
> >     >     >         >     >       >     >       > [    2.198384]
> usbserial: USB Serial support registered for FTDI USB Serial Device
> >     >     >         >     >       >     >       > [    2.206118]
> udc-core: couldn't find an available UDC - added [g_mass_storage] to list
> of pending
> >     >     >         >     >       drivers
> >     >     >         >     >       >     >       > [    2.215332]
> i2c_dev: i2c /dev entries driver
> >     >     >         >     >       >     >       > [    2.220467] xen_wd=
t
> xen_wdt: initialized (timeout=3D60s, nowayout=3D0)
> >     >     >         >     >       >     >       > [    2.225923]
> device-mapper: uevent: version 1.0.3
> >     >     >         >     >       >     >       > [    2.230668]
> device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com>> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com>>> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com>> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com> <mailto:
> dm-devel@redhat.com <mailto:dm-devel@redhat.com>>>>
> >     >     >         >     >       <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com>> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com>>> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com>> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com> <mailto:dm-devel@redhat.com <mailto:
> dm-devel@redhat.com>>>>>
> >     >     >         >     >       >     >       > [    2.239315] EDAC
> MC0: Giving out device to module 1 controller synps_ddr_controller: DEV
> synps_edac
> >     >     >         >     >       (INTERRUPT)
> >     >     >         >     >       >     >       > [    2.249405] EDAC
> DEVICE0: Giving out device to module zynqmp-ocm-edac controller zynqmp_oc=
m:
> DEV
> >     >     >         >     >       >     >
>  ff960000.memory-controller (INTERRUPT)
> >     >     >         >     >       >     >       > [    2.261719] sdhci:
> Secure Digital Host Controller Interface driver
> >     >     >         >     >       >     >       > [    2.267487] sdhci:
> Copyright(c) Pierre Ossman
> >     >     >         >     >       >     >       > [    2.271890]
> sdhci-pltfm: SDHCI platform and OF driver helper
> >     >     >         >     >       >     >       > [    2.278157]
> ledtrig-cpu: registered to indicate activity on CPUs
> >     >     >         >     >       >     >       > [    2.283816]
> zynqmp_firmware_probe Platform Management API v1.1
> >     >     >         >     >       >     >       > [    2.289554]
> zynqmp_firmware_probe Trustzone version v1.0
> >     >     >         >     >       >     >       > [    2.327875]
> securefw securefw: securefw probed
> >     >     >         >     >       >     >       > [    2.328324] alg: N=
o
> test for xilinx-zynqmp-aes (zynqmp-aes)
> >     >     >         >     >       >     >       > [    2.332563]
> zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Register=
ed
> >     >     >         >     >       >     >       > [    2.341183] alg: N=
o
> test for xilinx-zynqmp-rsa (zynqmp-rsa)
> >     >     >         >     >       >     >       > [    2.347667]
> remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available
> >     >     >         >     >       >     >       > [    2.353003]
> remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available
> >     >     >         >     >       >     >       > [    2.362605]
> fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered
> >     >     >         >     >       >     >       > [    2.366540]
> viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered
> >     >     >         >     >       >     >       > [    2.372525]
> viper-vdpp a4000000.vdpp: Device Tree Probing
> >     >     >         >     >       >     >       > [    2.377778]
> viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: =
32
> >     >     >         >     >       >     >       > [    2.386432]
> viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...
> >     >     >         >     >       >     >       > [    2.394094]
> viper-vdpp-net a5000000.vdpp_net: Device Tree Probing
> >     >     >         >     >       >     >       > [    2.399854]
> viper-vdpp-net a5000000.vdpp_net: Device registered
> >     >     >         >     >       >     >       > [    2.405931]
> viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing
> >     >     >         >     >       >     >       > [    2.412037]
> viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Even=
t
> Count: 32
> >     >     >         >     >       >     >       > [    2.420856] defaul=
t
> preset
> >     >     >         >     >       >     >       > [    2.423797]
> viper-vdpp-stat a8000000.vdpp_stat: Device registered
> >     >     >         >     >       >     >       > [    2.430054]
> viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing
> >     >     >         >     >       >     >       > [    2.435948]
> viper-vdpp-rng ac000000.vdpp_rng: Device registered
> >     >     >         >     >       >     >       > [    2.441976] vmcu
> driver init
> >     >     >         >     >       >     >       > [    2.444922] VMCU: =
:
> (240:0) registered
> >     >     >         >     >       >     >       > [    2.444956] In K81
> Updater init
> >     >     >         >     >       >     >       > [    2.449003] pktgen=
:
> Packet Generator for packet performance testing. Version: 2.75
> >     >     >         >     >       >     >       > [    2.468833]
> Initializing XFRM netlink socket
> >     >     >         >     >       >     >       > [    2.468902] NET:
> Registered PF_PACKET protocol family
> >     >     >         >     >       >     >       > [    2.472729] Bridge
> firewalling registered
> >     >     >         >     >       >     >       > [    2.476785] 8021q:
> 802.1Q VLAN Support v1.8
> >     >     >         >     >       >     >       > [    2.481341]
> registered taskstats version 1
> >     >     >         >     >       >     >       > [    2.486394] Btrfs
> loaded, crc32c=3Dcrc32c-generic, zoned=3Dno, fsverity=3Dno
> >     >     >         >     >       >     >       > [    2.503145]
> ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq =3D 36, base_baud =3D 625=
0000)
> is a xuartps
> >     >     >         >     >       >     >       > [    2.507103]
> of-fpga-region fpga-full: FPGA Region probed
> >     >     >         >     >       >     >       > [    2.512986]
> xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.520267]
> xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.528239]
> xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.536152]
> xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.544153]
> xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.552127]
> xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.560178]
> xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.567987]
> xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.576018]
> xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.583889]
> xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe succes=
s
> >     >     >         >     >       >     >       > [    2.946379] spi-no=
r
> spi0.0: mt25qu512a (131072 Kbytes)
> >     >     >         >     >       >     >       > [    2.946467] 2
> fixed-partitions partitions found on MTD device spi0.0
> >     >     >         >     >       >     >       > [    2.952393]
> Creating 2 MTD partitions on "spi0.0":
> >     >     >         >     >       >     >       > [    2.957231]
> 0x000004000000-0x000008000000 : "bank A"
> >     >     >         >     >       >     >       > [    2.963332]
> 0x000000000000-0x000004000000 : "bank B"
> >     >     >         >     >       >     >       > [    2.968694] macb
> ff0b0000.ethernet: Not enabling partial store and forward
> >     >     >         >     >       >     >       > [    2.975333] macb
> ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 irq 25
> >     >     >         >     >       (18:41:fe:0f:ff:02)
> >     >     >         >     >       >     >       > [    2.984472] macb
> ff0c0000.ethernet: Not enabling partial store and forward
> >     >     >         >     >       >     >       > [    2.992144] macb
> ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 irq 26
> >     >     >         >     >       (18:41:fe:0f:ff:03)
> >     >     >         >     >       >     >       > [    3.001043]
> viper_enet viper_enet: Viper power GPIOs initialised
> >     >     >         >     >       >     >       > [    3.007313]
> viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII
> >     >     >         >     >       >     >       > [    3.014914]
> viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII
> >     >     >         >     >       >     >       > [    3.022138]
> viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18
> >     >     >         >     >       >     >       > [    3.030274]
> viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII
> >     >     >         >     >       >     >       > [    3.037785]
> viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII
> >     >     >         >     >       >     >       > [    3.045301]
> viper_enet viper_enet: Viper enet registered
> >     >     >         >     >       >     >       > [    3.050958]
> xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM
> >     >     >         >     >       >     >       > [    3.057135]
> xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM
> >     >     >         >     >       >     >       > [    3.063538]
> xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM
> >     >     >         >     >       >     >       > [    3.069920]
> xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM
> >     >     >         >     >       >     >       > [    3.097729] si70xx=
:
> probe of 2-0040 failed with error -5
> >     >     >         >     >       >     >       > [    3.098042]
> cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s
> >     >     >         >     >       >     >       > [    3.105111]
> cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s
> >     >     >         >     >       >     >       > [    3.112457]
> viper-tamper viper-tamper: Device registered
> >     >     >         >     >       >     >       > [    3.117593]
> active_bank active_bank: boot bank: 1
> >     >     >         >     >       >     >       > [    3.122184]
> active_bank active_bank: boot mode: (0x02) qspi32
> >     >     >         >     >       >     >       > [    3.128247]
> viper-vdpp a4000000.vdpp: Device Tree Probing
> >     >     >         >     >       >     >       > [    3.133439]
> viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen: =
32
> >     >     >         >     >       >     >       > [    3.142151]
> viper-vdpp a4000000.vdpp: Tamper handler registered
> >     >     >         >     >       >     >       > [    3.147438]
> viper-vdpp a4000000.vdpp: Device registered
> >     >     >         >     >       >     >       > [    3.153007]
> lpc55_l2 spi1.0: registered handler for protocol 0
> >     >     >         >     >       >     >       > [    3.158582]
> lpc55_user lpc55_user: The major number for your device is 236
> >     >     >         >     >       >     >       > [    3.165976]
> lpc55_l2 spi1.0: registered handler for protocol 1
> >     >     >         >     >       >     >       > [    3.181999]
> rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> >     >     >         >     >       >     >       > [    3.182856]
> rtc-lpc55 rtc_lpc55: registered as rtc0
> >     >     >         >     >       >     >       > [    3.188656]
> lpc55_l2 spi1.0: (2) mcu still not ready?
> >     >     >         >     >       >     >       > [    3.193744]
> lpc55_l2 spi1.0: (3) mcu still not ready?
> >     >     >         >     >       >     >       > [    3.198848]
> lpc55_l2 spi1.0: (4) mcu still not ready?
> >     >     >         >     >       >     >       > [    3.202932] mmc0:
> SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit
> >     >     >         >     >       >     >       > [    3.210689]
> lpc55_l2 spi1.0: (5) mcu still not ready?
> >     >     >         >     >       >     >       > [    3.215694]
> lpc55_l2 spi1.0: rx error: -110
> >     >     >         >     >       >     >       > [    3.284438] mmc0:
> new HS200 MMC card at address 0001
> >     >     >         >     >       >     >       > [    3.285179]
> mmcblk0: mmc0:0001 SEM16G 14.6 GiB
> >     >     >         >     >       >     >       > [    3.291784]
>  mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8
> >     >     >         >     >       >     >       > [    3.293915]
> mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB
> >     >     >         >     >       >     >       > [    3.299054]
> mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB
> >     >     >         >     >       >     >       > [    3.303905]
> mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)
> >     >     >         >     >       >     >       > [    3.582676]
> rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> >     >     >         >     >       >     >       > [    3.583332]
> rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock
> >     >     >         >     >       >     >       > [    3.591252]
> cdns-i2c ff020000.i2c: recovery information complete
> >     >     >         >     >       >     >       > [    3.597085] at24
> 0-0050: supply vcc not found, using dummy regulator
> >     >     >         >     >       >     >       > [    3.603011]
> lpc55_l2 spi1.0: (2) mcu still not ready?
> >     >     >         >     >       >     >       > [    3.608093] at24
> 0-0050: 256 byte spd EEPROM, read-only
> >     >     >         >     >       >     >       > [    3.613620]
> lpc55_l2 spi1.0: (3) mcu still not ready?
> >     >     >         >     >       >     >       > [    3.619362]
> lpc55_l2 spi1.0: (4) mcu still not ready?
> >     >     >         >     >       >     >       > [    3.624224]
> rtc-rv3028 0-0052: registered as rtc1
> >     >     >         >     >       >     >       > [    3.628343]
> lpc55_l2 spi1.0: (5) mcu still not ready?
> >     >     >         >     >       >     >       > [    3.633253]
> lpc55_l2 spi1.0: rx error: -110
> >     >     >         >     >       >     >       > [    3.639104]
> k81_bootloader 0-0010: probe
> >     >     >         >     >       >     >       > [    3.641628] VMCU: =
:
> (235:0) registered
> [    3.641635] k81_bootloader 0-0010: probe completed
> [    3.668346] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28
> [    3.669154] cdns-i2c ff030000.i2c: recovery information complete
> [    3.675412] lm75 1-0048: supply vs not found, using dummy regulator
> [    3.682920] lm75 1-0048: hwmon1: sensor 'tmp112'
> [    3.686548] i2c i2c-1: Added multiplexed i2c bus 3
> [    3.690795] i2c i2c-1: Added multiplexed i2c bus 4
> [    3.695629] i2c i2c-1: Added multiplexed i2c bus 5
> [    3.700492] i2c i2c-1: Added multiplexed i2c bus 6
> [    3.705157] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546
> [    3.713049] at24 1-0054: supply vcc not found, using dummy regulator
> [    3.720067] at24 1-0054: 1024 byte 24c08 EEPROM, read-only
> [    3.724761] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29
> [    3.731272] sfp viper_enet:sfp-eth1: Host maximum power 2.0W
> [    3.737549] sfp_register_socket: got sfp_bus
> [    3.740709] sfp_register_socket: register sfp_bus
> [    3.745459] sfp_register_bus: ops ok!
> [    3.749179] sfp_register_bus: Try to attach
> [    3.753419] sfp_register_bus: Attach succeeded
> [    3.757914] sfp_register_bus: upstream ops attach
> [    3.762677] sfp_register_bus: Bus registered
> [    3.766999] sfp_register_socket: register sfp_bus succeeded
> [    3.775870] of_cfs_init
> [    3.776000] of_cfs_init: OK
> [    3.778211] clk: Not disabling unused clocks
> [   11.278477] Freeing initrd memory: 206056K
> [   11.279406] Freeing unused kernel memory: 1536K
> [   11.314006] Checked W+X mappings: passed, no W+X pages found
> [   11.314142] Run /init as init process
> INIT: version 3.01 booting
> fsck (busybox 1.35.0)
> /dev/mmcblk0p1: clean, 12/102400 files, 238162/409600 blocks
> /dev/mmcblk0p2: clean, 12/102400 files, 171972/409600 blocks
> /dev/mmcblk0p3 was not cleanly unmounted, check forced.
> /dev/mmcblk0p3: 20/4096 files (0.0% non-contiguous), 663/16384 blocks
> [   11.553073] EXT4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota mode: disabled.
> Starting random number generator daemon.
> [   11.580662] random: crng init done
> Starting udev
> [   11.613159] udevd[142]: starting version 3.2.10
> [   11.620385] udevd[143]: starting eudev-3.2.10
> [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Mon Feb 27 08:40:53 UTC 2023
> [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> hwclock: RTC_SET_TIME: Invalid exchange
> [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> Starting mcud
> INIT: Entering runlevel: 5
> Configuring network interfaces... done.
> resetting network interface
> [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> [   12.732151] pps pps0: new PPS source ptp0
> [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> [   12.761804] pps pps1: new PPS source ptp1
> [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> Auto-negotiation: off
> Auto-negotiation: off
> [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> Starting Failsafe Secure Shell server in port 2222: sshd
> done.
> Starting rpcbind daemon...done.
>
> [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> hwclock: RTC_RD_TIME: Invalid exchange
> Starting State Manager Service
> Start state-manager restarter...
> (XEN) d0v1 Forwarding AES operation: 3254779951
> Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> [   17.350670] BTRFS info (device dm-0): has skinny extents
> [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> [   17.872699] BTRFS info (device dm-1): using free space tree
> [   17.872771] BTRFS info (device dm-1): has skinny extents
> [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> [   17.895695] BTRFS info (device dm-1): checking UUID tree
>
> Setting domain 0 name, domid and JSON config...
> Done setting up Dom0
> Starting xenconsoled...
> Starting QEMU as disk backend for dom0
> Starting domain watchdog daemon: xenwatchdogd startup
>
> [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> [done]
> [   18.465552] BTRFS info (device dm-2): using free space tree
> [   18.465629] BTRFS info (device dm-2): has skinny extents
> [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> [   18.486659] BTRFS info (device dm-2): checking UUID tree
> OK
> starting rsyslogd ... Log partition ready after 0 poll loops
> done
> rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
>
> Please insert USB token and enter your role in login prompt.
>
> login:
>
> Regards,
> O.
>
> On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > Hi Oleg,
> >
> > Here is the issue from your logs:
> >
> > SError Interrupt on CPU0, code 0xbe000000 -- SError
> >
> > SErrors are special signals to notify software of serious hardware
> > errors. Something is going very wrong. Defective hardware is a
> > possibility. Another possibility is software accessing address ranges
> > that it is not supposed to; that sometimes causes SErrors.
> >
> > Cheers,
> >
> > Stefano
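To see how that code breaks down, the ESR value from the log can be split into its architected fields. Below is a minimal sketch; the bit layout is the AArch64 ESR_ELx encoding from the Arm ARM, and `decode_esr` is a hypothetical helper name, not a kernel or Xen API:

```python
def decode_esr(esr: int) -> dict:
    """Split an AArch64 ESR_ELx value into its top-level fields."""
    return {
        "EC": (esr >> 26) & 0x3F,   # exception class, bits [31:26]
        "IL": (esr >> 25) & 0x1,    # instruction-length bit [25]
        "ISS": esr & 0x1FFFFFF,     # syndrome-specific bits [24:0]
    }

fields = decode_esr(0xBE000000)     # the code reported in the log
print(hex(fields["EC"]))            # 0x2f -> the "SError interrupt" class
print(hex(fields["ISS"]))           # 0x0  -> no further syndrome reported
```

EC 0x2f is the architected SError class, and the all-zero ISS means the CPU reported no additional syndrome, which is why the log line alone cannot identify the faulting access.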
> >
> > On Mon, 24 Apr 2023, Oleg Nikitenko wrote:
> > > Hello,
> > >
> > > Thanks guys.
> > > I found out where the problem was.
> > > Now dom0 boots further, but I have a new problem:
> > > a kernel panic during Dom0 loading.
> > > Maybe someone is able to suggest something?
> > >
> > > Regards,
> > > O.
> > >
> > > [    3.771362] sfp_register_bus: upstream ops attach
> > > [    3.776119] sfp_register_bus: Bus registered
> > > [    3.780459] sfp_register_socket: register sfp_bus succeeded
> > > [    3.789399] of_cfs_init
> > > [    3.789499] of_cfs_init: OK
> > > [    3.791685] clk: Not disabling unused clocks
> > > [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError
> > > [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010393] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > > [   11.010422] pc : simple_write_end+0xd0/0x130
> > > [   11.010431] lr : generic_perform_write+0x118/0x1e0
> > > [   11.010438] sp : ffffffc00809b910
> > > [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0
> > > [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000
> > > [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260
> > > [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000
> > > [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000
> > > [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> > > [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
> > > [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700
> > > [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000
> > > [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005
> > > [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt
> > > [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1
> > > [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)
> > > [   11.010548] Workqueue: events_unbound async_run_entry_fn
> > > [   11.010556] Call trace:
> > > [   11.010558]  dump_backtrace+0x0/0x1c4
> > > [   11.010567]  show_stack+0x18/0x2c
> > > [   11.010574]  dump_stack_lvl+0x7c/0xa0
> > > [   11.010583]  dump_stack+0x18/0x34
> > > [   11.010588]  panic+0x14c/0x2f8
> > > [   11.010597]  print_tainted+0x0/0xb0
> > > [   11.010606]  arm64_serror_panic+0x6c/0x7c
> > > [   11.010614]  do_serror+0x28/0x60
> > > [   11.010621]  el1h_64_error_handler+0x30/0x50
> > > [   11.010628]  el1h_64_error+0x78/0x7c
> > > [   11.010633]  simple_write_end+0xd0/0x130
> > > [   11.010639]  generic_perform_write+0x118/0x1e0
> > > [   11.010644]  __generic_file_write_iter+0x138/0x1c4
> > > [   11.010650]  generic_file_write_iter+0x78/0xd0
> > > [   11.010656]  __kernel_write+0xfc/0x2ac
> > > [   11.010665]  kernel_write+0x88/0x160
> > > [   11.010673]  xwrite+0x44/0x94
> > > [   11.010680]  do_copy+0xa8/0x104
> > > [   11.010686]  write_buffer+0x38/0x58
> > > [   11.010692]  flush_buffer+0x4c/0xbc
> > > [   11.010698]  __gunzip+0x280/0x310
> > > [   11.010704]  gunzip+0x1c/0x28
> > > [   11.010709]  unpack_to_rootfs+0x170/0x2b0
> > > [   11.010715]  do_populate_rootfs+0x80/0x164
> > > [   11.010722]  async_run_entry_fn+0x48/0x164
> > > [   11.010728]  process_one_work+0x1e4/0x3a0
> > > [   11.010736]  worker_thread+0x7c/0x4c0
> > > [   11.010743]  kthread+0x120/0x130
> > > [   11.010750]  ret_from_fork+0x10/0x20
> > > [   11.010757] SMP: stopping secondary CPUs
> > > [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000
> > > [   11.010788] PHYS_OFFSET: 0x0
> > > [   11.010790] CPU features: 0x00000401,00000842
> > > [   11.010795] Memory Limit: none
> > > [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---
> > >
> > > On Fri, 21 Apr 2023 at 15:52, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > Hi Oleg,
> > > >
> > > > On 21/04/2023 14:49, Oleg Nikitenko wrote:
> > > > > Hello Michal,
> > > > >
> > > > > I was not able to enable earlyprintk in Xen for now.
> > > > > I decided to choose another way.
> > > > > This is Xen's command line, which I have now found out in full:
> > > > >
> > > > > (XEN) $$$$ console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0
> > > >
> > > > Yes, adding a printk() in Xen was also a good idea.
> > > >
> > > > > So you are absolutely right about the command line.
> > > > > Now I am going to find out why Xen did not get the correct parameters from the device tree.
> > > >
> > > > Maybe you will find this document helpful:
> > > > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt
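For context on the document linked above: on Arm, Xen picks up its own command line (and dom0's) from properties in the `/chosen` node of the host device tree. A minimal sketch of such a node; the property names follow that booting.txt, while the values here are illustrative, reusing the command line Oleg quoted:

```dts
/ {
    chosen {
        /* Command line for the Xen hypervisor itself */
        xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0";
        /* Command line handed to the dom0 kernel (illustrative) */
        xen,dom0-bootargs = "console=hvc0";
    };
};
```

If these properties are missing or the bootloader overwrites `/chosen`, Xen boots with a different command line than expected, which matches the symptom discussed here.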
> > > >
> > > > ~Michal
> > > >
> > > > > Regards,
> > > > > Oleg
> > > > >
> > > > > On Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com> wrote:
> >     >     >         >     >       >     >       >       >       >
> >     >     >         >     >       >     >       >       >       >
> >     >     >         >     >       >     >       >       >       >     O=
n
> 21/04/2023 10:04, Oleg Nikitenko wrote:
> > Hello Michal,
> >
> > Yes, I use Yocto.
> >
> > Yesterday all day long I tried to follow your suggestions.
> > I faced a problem.
> > Manually in the xen config build file I pasted the strings:
>
> In the .config file, or in some Yocto file (listing additional Kconfig
> options) added to SRC_URI? You shouldn't really modify the .config file,
> but if you do, you should execute "make olddefconfig" afterwards.
>
> > CONFIG_EARLY_PRINTK
> > CONFIG_EARLY_PRINTK_ZYNQMP
> > CONFIG_EARLY_UART_CHOICE_CADENCE
>
> I hope you added =y to them.
>
> Anyway, you have at least the following solutions:
> 1) Run "bitbake xen -c menuconfig" to properly set early printk.
> 2) Find out how you enable other Kconfig options in your project
>    (e.g. CONFIG_COLORING=y, which is not enabled by default).
> 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>    CONFIG_EARLY_PRINTK_ZYNQMP=y
>
> ~Michal
>
> > The host hangs at build time.
> > Maybe I did not set something in the config build file?
> >
> > Regards,
> > Oleg
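[Option 3 above can be scripted so that repeated builds do not duplicate entries. A minimal sketch, assuming a plain-file append to the defconfig is acceptable; the temporary file here stands in for xen/arch/arm/configs/arm64_defconfig, and the duplicate-guard is an illustration, not something from the thread:]

```shell
# Append the early-printk option to a defconfig, skipping it if already
# present. A temp file stands in for xen/arch/arm/configs/arm64_defconfig.
DEFCONFIG=$(mktemp)
printf 'CONFIG_ARM64=y\n' > "$DEFCONFIG"

for opt in CONFIG_EARLY_PRINTK_ZYNQMP=y; do
    # grep -qx matches the whole line exactly, so re-running is a no-op.
    grep -qx "$opt" "$DEFCONFIG" || printf '%s\n' "$opt" >> "$DEFCONFIG"
done

cat "$DEFCONFIG"
```

[After editing the defconfig this way, a clean rebuild of the xen recipe would be needed for the option to take effect.]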
>
> On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> Thanks Michal,
>
> You gave me an idea.
> I am going to try it today.
>
> Regards,
> O.
>
> On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> Thanks Stefano.
>
> I am going to do it today.
>
> Regards,
> O.
>
> On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@kernel.org> wrote:
> On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> > Hi Michal,
> >
> > I corrected xen's command line. Now it is
> > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
>
> 4 colors is way too many for Xen; just do xen_colors=0-0. There is no
> advantage in using more than 1 color for Xen.
>
> 4 colors is too few for dom0 if you are giving 1600M of memory to dom0.
> Each color is 256M. For 1600M you should give at least 7 colors. Try:
>
> xen_colors=0-0 dom0_colors=1-8
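[Stefano's sizing rule can be checked with a little arithmetic: a 64kB way size with 4kB pages gives 16 colors, and, per his message, each color maps 256M on this platform, so a 1600M dom0 needs at least ceil(1600/256) = 7 colors. A sketch of that arithmetic; the 256M-per-color figure is taken from his message, not derived here:]

```shell
# Colors available = cache way size / page size.
way_size=65536        # bytes, matching way_size=65536 on the Xen command line
page_size=4096
colors=$(( way_size / page_size ))

# Per Stefano, each color maps 256M on this platform; a 1600M dom0
# therefore needs ceil(1600 / 256) colors (integer ceiling division).
color_size_mb=256
dom0_mem_mb=1600
needed=$(( (dom0_mem_mb + color_size_mb - 1) / color_size_mb ))

echo "colors available: $colors, colors needed for dom0: $needed"
```

[This is why dom0_colors=1-8 (8 colors, disjoint from Xen's color 0) satisfies the "at least 7" requirement.]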
> > Unfortunately the result was the same.
> >
> > (XEN)  - Dom0 mode: Relaxed
> > (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> > (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> > (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> > (XEN) Coloring general information
> > (XEN) Way size: 64kB
> > (XEN) Max. number of colors available: 16
> > (XEN) Xen color(s): [ 0 ]
> > (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> > (XEN) Color array allocation failed for dom0
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Error creating domain 0
> > (XEN) ****************************************
> > (XEN)
> > (XEN) Reboot in five seconds...
> >
> > I am going to find out how the command line arguments are passed and parsed.
> >
> > Regards,
> > Oleg
> >
> > On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> >     >     >         >     >       >     >       >       >       >     >
>            >       Hi Michal,
> >     >     >         >     >       >     >       >       >       >     >
>            >
> >     >     >         >     >       >     >       >       >       >     >
>            > You put my nose into the problem. Thank you.
> >     >     >         >     >       >     >       >       >       >     >
>            > I am going to use your point.
> >     >     >         >     >       >     >       >       >       >     >
>            > Let's see what happens.
> >     >     >         >     >       >     >       >       >       >     >
>            >
> >     >     >         >     >       >     >       >       >       >     >
>            > Regards,
> >     >     >         >     >       >     >       >       >       >     >
>            > Oleg
> >     >     >         >     >       >     >       >       >       >     >
>            >
> >     >     >         >     >       >     >       >       >       >     >
>            >
> >     >     >         >     >       >     >       >       >       >     >
> On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com> wrote:
> > Hi Oleg,
> >
> > On 19/04/2023 09:03, Oleg Nikitenko wrote:
> > > Hello Stefano,
> > >
> > > Thanks for the clarification.
> > > My company uses yocto for image generation.
> > > What kind of information do you need to consult me in this case?
> > >
> > > Maybe the module sizes/addresses which were mentioned by
> > > @Julien Grall <julien@xen.org>?
> >
> > Sorry for jumping into the discussion, but FWICS the Xen command line
> > you provided does not seem to be the one Xen booted with. The error you
> > are observing is most likely due to the dom0 colors configuration not
> > being specified (i.e. a lack of the dom0_colors=<> parameter). Although
> > this parameter is set in the command line you provided, I strongly
> > doubt that this is the actual command line in use.
> >
> > You wrote:
> > xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M
> > dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> > timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
> >
> > but:
> > 1) way_szize has a typo
> > 2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen
> > has only one:
> > (XEN) Xen color(s): [ 0 ]
> >
> > This makes me believe that no colors configuration actually ended up in
> > the command line that Xen booted with. A single color for Xen is the
> > default if not specified, and the way size was probably calculated by
> > querying the HW.
> >
> > So I would suggest first cross-checking the command line in use.
> >
> > ~Michal
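[Editor's note: putting Michal's two findings together, a corrected `chosen` node for the device tree might look like the sketch below. This is only an illustration, not text from the thread: the node layout around `xen,xen-bootargs` is assumed, and `way_size` is the presumed correct spelling of the misspelled `way_szize` parameter from the quoted command line.]

```dts
/* Sketch only: surrounding node structure is assumed.
 * "way_szize" from the original thread is corrected to "way_size";
 * all other arguments are kept exactly as quoted.
 */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";
};
```

If these arguments actually take effect, the boot log should list the four requested Xen colors rather than the single-color default `(XEN) Xen color(s): [ 0 ]` quoted above.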
> > >
> > > Regards,
> > > Oleg
> > >
> > > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini
> > > <sstabellini@kernel.org> wrote:
> > > >
> > > > On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
> > > > > Hi Julien,
> > > > >
> > > > > >> This feature has not been merged in Xen upstream yet
> > > > >
> > > > > > would assume that upstream + the series on the ML [1] work
> > > > >
> > > > > Please clarify this point.
> > > > > Because the two thoughts are controversial.
> > > >
> > > > Hi Oleg,
> > > >
> > > > As Julien wrote, there is nothing controversial. As you are aware,
> > > > Xilinx maintains a separate Xen tree specific for Xilinx here:
> > > > https://github.com/xilinx/xen
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>>> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen> <
> https://github.com/xilinx/xen <https://github.com/xilinx/xen>> <
> https://github.com/xilinx/xen
> >     <https://github.com/xilinx/xen> <https://github.com/xilinx/xen <
> https://github.com/xilinx/xen>>>>>>>>
> >     >     >         >     >       >     >       >       >       >     >
>            >       >
> >     >     >         >     >       >     >       >       >       >     >
>            >       >     and the branch you are using (xlnx_rebase_4.16)
> comes
> >     >     >         >     >       from there.
>
>     Instead, the upstream Xen tree lives here:
>     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>
>     The Cache Coloring feature that you are trying to configure is
>     present in xlnx_rebase_4.16, but not yet present upstream (there is
>     an outstanding patch series to add cache coloring to Xen upstream,
>     but it hasn't been merged yet).
>
>     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much
>     for you, as you already have Cache Coloring as a feature there.
>
>     I take it you are using ImageBuilder to generate the boot
>     configuration? If so, please post the ImageBuilder config file that
>     you are using.
>
>     But from the boot message, it looks like the colors configuration
>     for Dom0 is incorrect.
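[Editorial note: for readers following along, ImageBuilder consumes a small shell-style config file that its uboot-script-gen script turns into a U-Boot boot script. Below is a minimal sketch of such a file; the variable names are taken from ImageBuilder's documentation as best I recall it, and all paths and addresses are placeholders, so verify everything against the ImageBuilder README for your board.]

```
# ImageBuilder config sketch (placeholders; check ImageBuilder's README)
MEMORY_START="0x0"
MEMORY_END="0x80000000"

XEN="xen"
DOM0_KERNEL="Image"
DOM0_RAMDISK="dom0-ramdisk.cpio"
DEVICE_TREE="system.dtb"

NUM_DOMUS=0

# Cache-coloring arguments for Xen/Dom0 go on the Xen command line;
# see the xlnx_rebase docs for the exact option names on your branch.
XEN_CMD="console=dtuart dom0_mem=1024M"
```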

--000000000000176ab505fc0bf7c0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hello,

Thanks Michal.

Then the next question; now it is more related to the integration than
to the development. The license for Xen at revision 4.17 on the
xlnx_rebase_4.17 branch of the Xilinx repo has changed. I found this out
when I built that version. Now the bitbake/Yocto build fails because the
md5 hash of the COPYING file no longer matches. The expected md5 hashes
are stored in the sources/libs/meta-virtualization/recipes-extended/xen
directory, in these files:
xen-tools_4.15.bb
xen_4.15.bb
xen_git.bb
xen-tools_git.bb
xen-tools_4.16.bb
xen_4.16.bb
So this is the question: should I update the license file for all our
branches, or is it possible to keep the old one for the old branches?

Regards,
Oleg
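[Editorial note, a sketch rather than Xilinx guidance: the md5 mismatch comes from Yocto's LIC_FILES_CHKSUM check in the recipes listed above; when the license text legitimately changes, the usual fix is to recompute the checksum of the new COPYING and update that variable in the recipe. The license text below is a stand-in; run this against the real COPYING from your checkout.]

```python
import hashlib

def license_md5(data: bytes) -> str:
    # Same digest bitbake compares against LIC_FILES_CHKSUM.
    return hashlib.md5(data).hexdigest()

# Stand-in content; read your Xen checkout's COPYING instead.
new_hash = license_md5(b"GNU GENERAL PUBLIC LICENSE\n")

# The recipe (e.g. xen_git.bb) then needs the matching line:
print(f'LIC_FILES_CHKSUM = "file://COPYING;md5={new_hash}"')
```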

On Tue, 16 May 2023 at 21:00, Michal Orzel <michal.orzel@amd.com> wrote:


On 16/05/2023 17:14, Oleg Nikitenko wrote:
>
> Hi guys,
>
> Thanks Michal.
>
> So if I have more RAM it is possible to increase the color density.
>
> For example 8GB/16 is 512MB approximately.
> Is this correct ?
Yes.
To my previous reply I should also add that the number of colors depends
on the page size, but in Xen we use 4kB pages, so a 64kB way size
results in 16 colors.

~Michal

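[Editorial note: the way-size arithmetic above is easy to sanity-check. A small sketch follows; the 64kB way size and 4kB page size are the figures from this thread, and in general way size = cache size / associativity for your SoC's LLC.]

```python
GiB = 1024 ** 3
MiB = 1024 ** 2

def llc_colors(way_size: int, page_size: int = 4096) -> int:
    # Number of usable colors = way size / page size.
    return way_size // page_size

def per_color_mem(mem_bytes: int, colors: int) -> int:
    # Approximate memory reachable through a single color.
    return mem_bytes // colors

colors = llc_colors(way_size=64 * 1024)       # 64kB way, 4kB pages
print(colors)                                 # 16
print(per_color_mem(4 * GiB, colors) // MiB)  # 256 -> matches 4GB/16 = 256M
print(per_color_mem(8 * GiB, colors) // MiB)  # 512 -> matches the 8GB estimate
```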
> Regards,
> Oleg
>
> On Tue, 16 May 2023 at 17:40, Michal Orzel <michal.orzel@amd.com> wrote:
>
>     Hi Oleg,
>
>     On 16/05/2023 14:15, Oleg Nikitenko wrote:
>     >
>     > Hello,
>     >
>     > Thanks a lot Michal.
>     >
>     > Then the next question.
>     > When I just started my experiments with xen, Stefano mentioned
>     > that each cache color's size is 256M.
>     > Is it possible to extend this figure ?
>     With 16 colors (e.g. on Cortex-A53) and 4GB of memory, roughly each
>     color is 256M (i.e. 4GB/16 = 256M).
>     So as you can see this figure depends on the number of colors and
>     memory size.
>
>     ~Michal
>
>     > Regards,
>     > Oleg
>     >
>     > On Mon, 15 May 2023 at 11:57, Michal Orzel <michal.orzel@amd.com> wrote:
>     >
>     >     Hi Oleg,
>     >
>     >     On 15/05/2023 10:51, Oleg Nikitenko wrote:
>     >     > Hello guys,
>     >     >
>     >     > Thanks a lot.
>     >     > After a long problem list I was able to run xen with Dom0
>     >     > with a cache color.
>     >     > One more question from my side.
>     >     > I want to run a guest with color mode too.
>     >     > I inserted a string into the guest config file: llc-colors = "9-13"
>     >     > I got an error:
>     >     > [  457.517004] loop0: detected capacity change from 0 to 385840
>     >     > Parsing config from /xen/red_config.cfg
>     >     > /xen/red_config.cfg:26: config parsing error near `-colors': lexical error
>     >     > warning: Config file looks like it contains Python code.
>     >     > warning:  Arbitrary Python is no longer supported.
>     >     > warning:  See https://wiki.xen.org/wiki/PythonInXlConfig
>     >     > Failed to parse config: Invalid argument
>     >     > So this is a question.
>     >     > Is it possible to assign a color mode for the DomU by config file ?
>     >     > If so, what string should I use?
> > Please, always refer to the relevant documentation. In this case, for xl.cfg:
> > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/man/xl.cfg.5.pod.in#L2890
> >
> > ~Michal
> >
> > > Regards,
> > > Oleg
> > >
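[Editor's note: as a concrete illustration of what such a config line might look like, the sketch below assumes the xl.cfg option documented at the link above is named `llc_colors` and takes a color-range string; both the key name and the value syntax are assumptions taken from the public cache-coloring patch series, to be verified against that manual page.]

```
# domu.cfg -- minimal sketch, not verified against the Xilinx tree.
# The llc_colors key and its "0-3" range syntax are assumptions from
# the public cache-coloring series; check xl.cfg(5) at the link above.
name   = "domu-colored"
memory = 512
vcpus  = 2
kernel = "/boot/Image"
llc_colors = "0-3"   # hypothetical: assign LLC colors 0-3 to this DomU
```

An invalid value for such a key is one way to end up with the "Failed to parse config: Invalid argument" error quoted above.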
> > > On Thu, 11 May 2023 at 13:32, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > >
> > > Hi Michal,
> > >
> > > Thanks.
> > > This build option previously had the name CONFIG_COLORING.
> > > That confused me.
> > >
> > > Regards,
> > > Oleg
> > >
> > > On Thu, 11 May 2023 at 13:15, Michal Orzel <michal.orzel@amd.com> wrote:
> > >
> > > Hi Oleg,
> > >
> > > On 11/05/2023 12:02, Oleg Nikitenko wrote:
> > > >
> > > > Hello,
> > > >
> > > > Thanks Stefano.
> > > > Then the next question.
> > > > I cloned the Xen repo from the Xilinx site: https://github.com/Xilinx/xen.git
> > > > I managed to build the xlnx_rebase_4.17 branch in my environment.
> > > > I did it without coloring first. I did not find any coloring footprints in this branch.
> > > > I realized coloring is not in the xlnx_rebase_4.17 branch yet.
> > > This is not true. Cache coloring is in xlnx_rebase_4.17. Please see the docs:
> > > https://github.com/Xilinx/xen/blob/xlnx_rebase_4.17/docs/misc/arm/cache-coloring.rst
> > >
> > > It describes the feature and documents the required properties.
> > >
> > > ~Michal
> > >
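[Editor's note: to make "the required properties" concrete, enabling coloring also involves Xen boot parameters. The device-tree fragment below is only a sketch; the parameter names (`llc-coloring`, `dom0-llc-colors`) come from the public cache-coloring patch series and are assumptions to verify against the cache-coloring.rst linked above.]

```
/* Sketch of a /chosen node enabling coloring at boot. The parameter
 * names (llc-coloring, dom0-llc-colors) are assumptions from the
 * public cache-coloring series; confirm them in cache-coloring.rst. */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1G llc-coloring=on dom0-llc-colors=0-3";
};
```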
> > > > On Tue, 9 May 2023 at 22:49, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > >
> > > > > We test Xen Cache Coloring regularly on zcu102. Every Petalinux release
> > > > > (twice a year) is tested with cache coloring enabled. The last Petalinux
> > > > > release is 2023.1 and the kernel used is this:
> > > > > https://github.com/Xilinx/linux-xlnx/tree/xlnx_rebase_v6.1_LTS
> > > > >
> > > > > On Tue, 9 May 2023, Oleg Nikitenko wrote:
> > > > > > Hello guys,
> > > > > >
> > > > > > I have a couple more questions.
> > > > > > Have you ever run Xen with cache coloring on a Zynq UltraScale+ MPSoC zcu102 (xczu15eg)?
> > > > > > When did you last run Xen with cache coloring?
> > > > > > What kernel version did you use for Dom0 the last time you ran Xen with cache coloring?
> > > > > >
> > > > > > Regards,
> > > > > > Oleg
> > > > > >
> > > > > > On Fri, 5 May 2023 at 11:48, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > > > > Hi Michal,
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Oleg
> > > > > > >
> > > > > > > On Fri, 5 May 2023 at 11:34, Michal Orzel <michal.orzel@amd.com> wrote:
> > > > > > > > Hi Oleg,
> > > > > > > >
> > > > > > > > Replying, so that you do not need to wait for Stefano.
> > > > > > > >
> > > > > > > > On 05/05/2023 10:28, Oleg Nikitenko wrote:
> > > > > > > > >
> > > > > > > > > Hello Stefano,
> > > > > > > > >
> > > > > > > > > I would like to try the Xen cache color property from this repo: https://xenbits.xen.org/git-http/xen.git
> > > > > > > > > Could you tell me what branch I should use?
> > > > > > > > The cache coloring feature is not part of the upstream tree; it is still under review.
> > > > > > > > You can only find it integrated in the Xilinx Xen tree.
> > > > > > > >
> > > > > > > > ~Michal
> > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > > Oleg
> > > > > > > > >
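[Editor's note: combining Michal's pointer to the Xilinx tree with the branch named earlier in the thread, fetching the sources might look like the sketch below. It only prints the clone command (the network operation is left to run manually), and the branch name is taken from this thread, so verify it still exists.]

```shell
# Sketch: fetch the Xilinx Xen tree that carries cache coloring.
# Branch name taken from this thread; the clone itself is left for the
# reader to run (network access required), so we only print the command.
REPO="https://github.com/Xilinx/xen.git"
BRANCH="xlnx_rebase_4.17"
echo "git clone --depth 1 -b ${BRANCH} ${REPO}"
# The feature documentation then lives at docs/misc/arm/cache-coloring.rst
```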
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; =
=D0=BF=D1=82, 28 =D0=B0=D0=BF=D1=80. 2023=E2=80=AF=D0=B3. =D0=B2 00:51, Ste=
fano Stabellini &lt;<a href=3D"mailto:sstabellini@kernel.org" target=3D"_bl=
ank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@ke=
rnel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt; &lt;mailto:<a hr=
ef=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.o=
rg</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blan=
k">sstabellini@kernel.org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:sstabell=
ini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a =
href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel=
.org</a>&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D=
"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellin=
i@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt;&gt;&gt; &lt;=
mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabell=
ini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" tar=
get=3D"_blank">sstabellini@kernel.org</a>&gt; &lt;mailto:<a href=3D"mailto:=
sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a> &lt;ma=
ilto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellin=
i@kernel.org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.or=
g" target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailt=
o:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt; =
&lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">ssta=
bellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org"=
 target=3D"_blank">sstabellini@kernel.org</a>&gt;&gt;&gt;&gt; &lt;mailto:<a=
 href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kerne=
l.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_b=
lank">sstabellini@kernel.org</a>&gt; &lt;mailto:<a href=3D"mailto:sstabelli=
ni@kernel.org" target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a h=
ref=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.=
org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=
=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabel=
lini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt; &lt;mailt=
o:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@k=
ernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=
=3D"_blank">sstabellini@kernel.org</a>&gt;&gt;&gt; &lt;mailto:<a href=3D"ma=
ilto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a> &=
lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstab=
ellini@kernel.org</a>&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.o=
rg" target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mail=
to:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt;=
&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&lt;mailto:<a href=3D"mailto:sstabellini@kernel.org=
" target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto=
:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt; &=
lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstab=
ellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" =
target=3D"_blank">sstabellini@kernel.org</a>&gt;&gt;&gt;&gt;&gt;&gt;:<br>
> > > > > >
> > > > > > I am familiar with the zcu102, but I don't know how you could
> > > > > > possibly generate an SError.
> > > > > >
> > > > > > I suggest trying ImageBuilder [1] to generate the boot
> > > > > > configuration as a test, because that is known to work well
> > > > > > for the zcu102.
> > > > > >
> > > > > > [1] https://gitlab.com/xen-project/imagebuilder
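For anyone following the suggestion above, an ImageBuilder configuration is a small shell-sourced file passed to its script generator. A minimal sketch for a zcu102-class board might look like the following; every file name and the memory range are placeholders for illustration, not values taken from this thread:

```shell
# Illustrative ImageBuilder config file ("config") -- placeholder values.
MEMORY_START="0x0"            # start of DDR visible to U-Boot
MEMORY_END="0x80000000"       # assumed 2 GiB; adjust to the actual board
DEVICE_TREE="system.dtb"      # device tree binary passed to Xen
XEN="xen"                     # Xen hypervisor binary
DOM0_KERNEL="Image"           # dom0 Linux kernel image
DOM0_RAMDISK="rootfs.cpio.gz" # dom0 initial ramdisk
NUM_DOMUS=0                   # no domUs for a first boot test
UBOOT_SOURCE="boot.source"
UBOOT_SCRIPT="boot.scr"
```

The boot script would then be generated with ImageBuilder's script generator, e.g. `bash ./scripts/uboot-script-gen -t sd -d . -c config`.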
> > > > > >
> > > > > > On Thu, 27 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > Hello Stefano,
> > > > > > >
> > > > > > > Thanks for the clarification.
> > > > > > > We use neither ImageBuilder nor a U-Boot boot script.
> > > > > > > The model is zcu102-compatible.
> > > > > > >
> > > > > > > Regards,
> > > > > > > O.
> > > > > > >
> > > > > > > Tue, 25 Apr 2023 at 21:21, Stefano Stabellini <sstabellini@kernel.org>:
> > > > > > > This is interesting. Are you using Xilinx hardware by any
> > > > > > > chance? If so, which board?
> > > > > > >
> > > > > > > Are you using ImageBuilder to generate your boot.scr boot
> > > > > > > script? If so, could you please post your ImageBuilder
> > > > > > > config file? If not, can you post the source of your U-Boot
> > > > > > > boot script?
> > > > > > >
> > > > > > > SErrors are supposed to be related to a hardware failure of
> > > > > > > some kind. You are not supposed to be able to trigger an
> > > > > > > SError easily by "mistake". I have not seen SErrors due to
> > > > > > > wrong cache coloring configurations on any Xilinx board
> > > > > > > before.
> > > > > > >
> > > > > > > The differences between Xen with and without cache
> > > > > > > coloring, from a hardware perspective, are:
> > > > > > >
> > > > > > > - With cache coloring, the SMMU is enabled and does address
> > > > > > >   translations even for dom0. Without cache coloring the
> > > > > > >   SMMU could be disabled and, if enabled, does not do any
> > > > > > >   address translations for dom0. If there is a hardware
> > > > > > >   failure related to SMMU address translation, it could
> > > > > > >   only trigger with cache coloring. This would be my normal
> > > > > > >   suggestion for you to explore, but the failure happens
> > > > > > >   too early, before any DMA-capable device is programmed,
> > > > > > >   so I don't think this can be the issue.
> > > > > > >
> > > > > > > - With cache coloring, the memory allocation is very
> > > > > > >   different, so you'll end up using different DDR regions
> > > > > > >   for dom0. So if your DDR is defective, you might only see
> > > > > > >   a failure with cache coloring enabled, because you end up
> > > > > > >   using different regions.
> > > > > > >
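The second point follows from how a page's cache color is derived from its physical address: restricting dom0 to a subset of colors selects a strided subset of DDR rather than one contiguous block. A small illustrative sketch (the 1 MiB LLC way size, 4 KiB page size, and allowed-color set below are example values, not taken from this thread):

```python
PAGE_SIZE = 4096                         # assumed 4 KiB pages
LLC_WAY_SIZE = 1 << 20                   # assumed 1 MiB per LLC way
NUM_COLORS = LLC_WAY_SIZE // PAGE_SIZE   # -> 256 colors

def page_color(phys_addr: int) -> int:
    """Color = the LLC set-index bits just above the page offset."""
    return (phys_addr // PAGE_SIZE) % NUM_COLORS

# A dom0 restricted to colors {0, 1} only receives pages whose
# addresses repeat every LLC_WAY_SIZE bytes -- a sparse, strided
# subset of DDR instead of one contiguous region.
allowed = {0, 1}
pages = [a for a in range(0, 4 * LLC_WAY_SIZE, PAGE_SIZE)
         if page_color(a) in allowed]
print(len(pages))                # -> 8 pages out of 1024 scanned
print([hex(p) for p in pages[:4]])
# -> ['0x0', '0x1000', '0x100000', '0x101000']
```

This is why a colored build exercises physical memory that a non-colored build may never touch, which is the basis of the defective-DDR hypothesis above.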
> > > > > > > On Tue, 25 Apr 2023, Oleg Nikitenko wrote:
> > > > > > > > Hi Stefano,
> > > > > > > >
> > > > > > > > Thank you.
> > > > > > > > If I build Xen without color support, this error does not
> > > > > > > > occur and all the domains boot well. Hence it cannot be a
> > > > > > > > hardware issue.
> > > > > > > > The panic arrives during unpacking of the rootfs.
> > > > > > > > Here I attached the Xen/Dom0 boot log without coloring.
> > > > > > > > The highlighted strings are printed exactly after the
> > > > > > > > place where the panic first arrived.
> > > > > > > >
> > > > > > > >  Xen 4.16.1-pre
> > > > > > > > (XEN) Xen version 4.16.1-pre (nole2390@(none)) (aarch64-portable-linux-gcc (GCC) 11.3.0) debug=y 2023-04-21
> > > > > > > > (XEN) Latest ChangeSet: Wed Apr 19 12:56:14 2023 +0300 git:321687b231-dirty
> > > > > > > > (XEN) build-id: c1847258fdb1b79562fc710dda40008f96c0fde5
> > > > > > > > (XEN) Processor: 00000000410fd034: "ARM Limited", variant: 0x0, part 0xd03, rev 0x4
> > > > > > > > (XEN) 64-bit Execution:
> > > > > > > > (XEN)   Processor Features: 0000000000002222 0000000000000000
> > > > > > > > (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> > > > > > > > (XEN)     Extensions: FloatingPoint AdvancedSIMD
> > > > > > > > (XEN)   Debug Features: 0000000010305106 0000000000000000
> > > > > > > > (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> > > > > > > > (XEN)   Memory Model Features: 0000000000001122 0000000000000000
> > > > > > > > (XEN)   ISA Features:  0000000000011120 0000000000000000
> > > > > > > > (XEN) 32-bit Execution:
> > > > > > > > (XEN)   Processor Features: 0000000000000131:0000000000011011
> > > > > > > > (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
> > > > > > > > (XEN)     Extensions: GenericTimer Security
> > > > > > > > (XEN)   Debug Features: 0000000003010066
> > > > > > > > (XEN)   Auxiliary Features: 0000000000000000
> > > > > > > > (XEN)   Memory Model Features: 0000000010201105 0000000040000000
> > > > > > > > (XEN)                         0000000001260000 0000000002102211
> > > > > > > > (XEN)   ISA Features: 0000000002101110 0000000013112111 0000000021232042
> > > > > > > > (XEN)                 0000000001112131 0000000000011142 0000000000011121
> > > > > > > > (XEN) Using SMC Calling Convention v1.2
> > > > > > > > (XEN) Using PSCI v1.1
> > > > > > > > (XEN) SMP: Allowing 4 CPUs
> > > > > > > > (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 100000 KHz
> > > > > > > > (XEN) GICv2 initialization:
> > > > > > > > (XEN)         gic_dist_addr=00000000f9010000
> > > > > > > > (XEN)         gic_cpu_addr=00000000f9020000
> > > > > > > > (XEN)         gic_hyp_addr=00000000f9040000
> > > > > > > > (XEN)         gic_vcpu_addr=00000000f9060000
> > > > > > > > (XEN)         gic_maintenance_irq=25
> > > > > > > > (XEN) GICv2: Adjusting CPU interface base to 0xf902f000
> > > > > > > > (XEN) GICv2: 192 lines, 4 cpus, secure (IID 0200143b).
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Using scheduler:=
 null Scheduler (null)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Initializing nul=
l scheduler<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) WARNING: This is=
 experimental software in development.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Use at your own =
risk.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Allocated consol=
e ring of 32 KiB.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) CPU0: Guest atom=
ics will try 12 times before pausing the domain<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Bringing up CPU1=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) CPU1: Guest atom=
ics will try 13 times before pausing the domain<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) CPU 1 booted.<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Bringing up CPU2=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) CPU2: Guest atom=
ics will try 13 times before pausing the domain<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) CPU 2 booted.<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Bringing up CPU3=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) CPU3: Guest atom=
ics will try 13 times before pausing the domain<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Brought up 4 CPU=
s<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) CPU 3 booted.<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) smmu: /axi/smmu@=
fd800000: probing hardware configuration...<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) smmu: /axi/smmu@=
fd800000: SMMUv2 with:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) smmu: /axi/smmu@=
fd800000: stage 2 translation<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) smmu: /axi/smmu@=
fd800000: stream matching with 48 register groups, mask 0x7fff&lt;2&gt;smmu=
:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0/axi/sm=
mu@fd800000: 16 context<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0banks (0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; stage-2 only)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) smmu: /axi/smmu@=
fd800000: Stage-2: 48-bit IPA -&gt; 48-bit PA<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) smmu: /axi/smmu@=
fd800000: registered 29 master devices<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) I/O virtualisati=
on enabled<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) =C2=A0- Dom0 mod=
e: Relaxed<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) P2M: 40-bit IPA =
with 40-bit PA and 8-bit VMID<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) P2M: 3 levels wi=
th order-1 root, VTCR 0x0000000080023558<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Scheduling granu=
larity: cpu, 1 CPU per sched-resource<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) alternatives: Pa=
tching with alt table 00000000002cc5c8 -&gt; 00000000002ccb2c<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) *** LOADING DOMA=
IN 0 ***<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Loading d0 kerne=
l from boot module @ 0000000001000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Loading ramdisk =
from boot module @ 0000000002000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Allocating 1:1 m=
appings totalling 1600MB for dom0:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) BANK[0] 0x000000=
10000000-0x00000020000000 (256MB)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) BANK[1] 0x000000=
24000000-0x00000028000000 (64MB)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) BANK[2] 0x000000=
30000000-0x00000080000000 (1280MB)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Grant table rang=
e: 0x00000000e00000-0x00000000e40000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) smmu: /axi/smmu@=
fd800000: d0: p2maddr 0x000000087bf94000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Allocating PPI 1=
6 for event channel interrupt<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Extended region =
0: 0x81200000-&gt;0xa0000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Extended region =
1: 0xb1200000-&gt;0xc0000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Extended region =
2: 0xc8000000-&gt;0xe0000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Extended region =
3: 0xf0000000-&gt;0xf9000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Extended region =
4: 0x100000000-&gt;0x600000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Extended region =
5: 0x880000000-&gt;0x8000000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Extended region =
6: 0x8001000000-&gt;0x10000000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Loading zImage f=
rom 0000000001000000 to 0000000010000000-0000000010e41008<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Loading d0 initr=
d from 0000000002000000 to 0x0000000013600000-0x000000001ff3a617<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Loading d0 DTB t=
o 0x0000000013400000-0x000000001340cbdc<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Initial low memo=
ry virq threshold set at 0x4000 pages.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Std. Loglevel: A=
ll<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Guest Loglevel: =
All<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) *** Serial input=
 to DOM0 (type &#39;CTRL-a&#39; three times to switch input)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) null.c:353: 0 &l=
t;-- d0v0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Freed 356kB init=
 memory.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) d0v0 Unhandled S=
MC/HVC: 0x84000050<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) d0v0 Unhandled S=
MC/HVC: 0x8600ff01<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) d0v0: vGICD: unh=
andled word write 0x000000ffffffff to ICACTIVER4<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) d0v0: vGICD: unh=
andled word write 0x000000ffffffff to ICACTIVER8<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) d0v0: vGICD: unh=
andled word write 0x000000ffffffff to ICACTIVER12<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) d0v0: vGICD: unh=
andled word write 0x000000ffffffff to ICACTIVER16<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) d0v0: vGICD: unh=
andled word write 0x000000ffffffff to ICACTIVER20<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) d0v0: vGICD: unh=
andled word write 0x000000ffffffff to ICACTIVER0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Booting Linux on physical CPU 0x0000000000 [0x410fd034]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Linux version 5.15.72-xilinx-v2022.1 (oe-user@oe-host) (aarch64-portable=
-linux-gcc (GCC)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A011.3.0,=
 GNU ld (GNU<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0Binutils)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; 2.38.20220708) #1 SMP =
Tue Feb 21 05:47:54 UTC 2023<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Machine model: D14 Viper Board - White Unit<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Xen 4.16 support found<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Zone ranges:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] =C2=A0 DMA =C2=A0 =C2=A0 =C2=A0[mem 0x0000000010000000-0x000000007ffffff=
f]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] =C2=A0 DMA32 =C2=A0 =C2=A0empty<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] =C2=A0 Normal =C2=A0 empty<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Movable zone start for each node<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Early memory node ranges<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] =C2=A0 node =C2=A0 0: [mem 0x0000000010000000-0x000000001fffffff]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] =C2=A0 node =C2=A0 0: [mem 0x0000000022000000-0x0000000022147fff]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] =C2=A0 node =C2=A0 0: [mem 0x0000000022200000-0x0000000022347fff]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] =C2=A0 node =C2=A0 0: [mem 0x0000000024000000-0x0000000027ffffff]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] =C2=A0 node =C2=A0 0: [mem 0x0000000030000000-0x000000007fffffff]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Initmem setup node 0 [mem 0x0000000010000000-0x000000007fffffff]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] On node 0, zone DMA: 8192 pages in unavailable ranges<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] On node 0, zone DMA: 184 pages in unavailable ranges<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] On node 0, zone DMA: 7352 pages in unavailable ranges<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] cma: Reserved 256 MiB at 0x000000006e000000<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] psci: probing for conduit method from DT.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] psci: PSCIv1.1 detected in firmware.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] psci: Using standard PSCI v0.2 function IDs<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] psci: Trusted OS migration not required<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] psci: SMC Calling Convention v1.1<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] percpu: Embedded 16 pages/cpu s32792 r0 d32744 u65536<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Detected VIPT I-cache on CPU0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] CPU features: kernel page table isolation forced ON by KASLR<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] CPU features: detected: Kernel page table isolation (KPTI)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Built 1 zonelists, mobility grouping on.=C2=A0 Total pages: 403845<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Kernel command line: console=3Dhvc0 earlycon=3Dxen earlyprintk=3Dxen clk=
_ignore_unused fips=3D1<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0root=3D=
/dev/ram0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0maxcpus=3D2<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Unknown kernel command line parameters &quot;earlyprintk=3Dxen fips=3D1&=
quot;, will be passed to user<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0space.<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
0] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear=
)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A00.00000=
> 0] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> [    0.000000] mem auto-init: stack:off, heap alloc:on, heap free:on
> [    0.000000] mem auto-init: clearing system memory may take some time...
> [    0.000000] Memory: 1121936K/1641024K available (9728K kernel code, 836K rwdata, 2396K rodata, 1536K init, 262K bss, 256944K reserved, 262144K cma-reserved)
> [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
> [    0.000000] rcu: Hierarchical RCU implementation.
> [    0.000000] rcu: RCU event tracing is enabled.
> [    0.000000] rcu: RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
> [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
> [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
> [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> [    0.000000] Root IRQ handler: gic_handle_irq
> [    0.000000] arch_timer: cp15 timer(s) running at 100.00MHz (virt).
> [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x171024e7e0, max_idle_ns: 440795205315 ns
> [    0.000000] sched_clock: 56 bits at 100MHz, resolution 10ns, wraps every 4398046511100ns
> [    0.000258] Console: colour dummy device 80x25
> [    0.310231] printk: console [hvc0] enabled
> [    0.314403] Calibrating delay loop (skipped), value calculated using timer frequency.. 200.00 BogoMIPS (lpj=400000)
> [    0.324851] pid_max: default: 32768 minimum: 301
> [    0.329706] LSM: Security Framework initializing
> [    0.334204] Yama: becoming mindful.
> [    0.337865] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> [    0.345180] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
> [    0.354743] xen:grant_table: Grant tables using version 1 layout
> [    0.359132] Grant table initialized
> [    0.362664] xen:events: Using FIFO-based ABI
> [    0.366993] Xen: initializing cpu0
> [    0.370515] rcu: Hierarchical SRCU implementation.
> [    0.375930] smp: Bringing up secondary CPUs ...
> (XEN) null.c:353: 1 <-- d0v1
> (XEN) d0v1: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> [    0.382549] Detected VIPT I-cache on CPU1
> [    0.388712] Xen: initializing cpu1
> [    0.388743] CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
> [    0.388829] smp: Brought up 1 node, 2 CPUs
> [    0.406941] SMP: Total of 2 processors activated.
> [    0.411698] CPU features: detected: 32-bit EL0 Support
> [    0.416888] CPU features: detected: CRC32 instructions
> [    0.422121] CPU: All CPU(s) started at EL1
> [    0.426248] alternatives: patching kernel code
> [    0.431424] devtmpfs: initialized
> [    0.441454] KASLR enabled
> [    0.441602] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
> [    0.448321] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
> [    0.496183] NET: Registered PF_NETLINK/PF_ROUTE protocol family
> [    0.498277] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations
> [    0.503772] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
> [    0.511610] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
> [    0.519478] audit: initializing netlink subsys (disabled)
> [    0.524985] audit: type=2000 audit(0.336:1): state=initialized audit_enabled=0 res=1
> [    0.529169] thermal_sys: Registered thermal governor 'step_wise'
> [    0.533023] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> [    0.545608] ASID allocator initialised with 32768 entries
> [    0.551030] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> [    0.559332] software IO TLB: mapped [mem 0x0000000011800000-0x0000000011c00000] (4MB)
> [    0.583565] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
> [    0.584721] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
> [    0.591478] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
> [    0.598225] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
> [    0.636520] DRBG: Continuing without Jitter RNG
> [    0.737187] raid6: neonx8   gen()  2143 MB/s
> [    0.805294] raid6: neonx8   xor()  1589 MB/s
> [    0.873406] raid6: neonx4   gen()  2177 MB/s
> [    0.941499] raid6: neonx4   xor()  1556 MB/s
> [    1.009612] raid6: neonx2   gen()  2072 MB/s
> [    1.077715] raid6: neonx2   xor()  1430 MB/s
> [    1.145834] raid6: neonx1   gen()  1769 MB/s
> [    1.213935] raid6: neonx1   xor()  1214 MB/s
> [    1.282046] raid6: int64x8  gen()  1366 MB/s
> [    1.350132] raid6: int64x8  xor()   773 MB/s
> [    1.418259] raid6: int64x4  gen()  1602 MB/s
> [    1.486349] raid6: int64x4  xor()   851 MB/s
> [    1.554464] raid6: int64x2  gen()  1396 MB/s
> [    1.622561] raid6: int64x2  xor()   744 MB/s
> [    1.690687] raid6: int64x1  gen()  1033 MB/s
> [    1.758770] raid6: int64x1  xor()   517 MB/s
> [    1.758809] raid6: using algorithm neonx4 gen() 2177 MB/s
> [    1.762941] raid6: .... xor() 1556 MB/s, rmw enabled
> [    1.767957] raid6: using neon recovery algorithm
> [    1.772824] xen:balloon: Initialising balloon driver
> [    1.778021] iommu: Default domain type: Translated
> [    1.782584] iommu: DMA domain TLB invalidation policy: strict mode
> [    1.789149] SCSI subsystem initialized
> [    1.792820] usbcore: registered new interface driver usbfs
> [    1.798254] usbcore: registered new interface driver hub
> [    1.803626] usbcore: registered new device driver usb
> [    1.808761] pps_core: LinuxPPS API ver. 1 registered
> [    1.813716] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> [    1.822903] PTP clock support registered
> [    1.826893] EDAC MC: Ver: 3.0.0
> [    1.830375] zynqmp-ipi-mbox mailbox@ff990400: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.838863] zynqmp-ipi-mbox mailbox@ff990600: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.847356] zynqmp-ipi-mbox mailbox@ff990800: Registered ZynqMP IPI mbox with TX/RX channels.
> [    1.855907] FPGA manager framework
> [    1.859952] clocksource: Switched to clocksource arch_sys_counter
> [    1.871712] NET: Registered PF_INET protocol family
> [    1.871838] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
> [    1.879392] tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
> [    1.887078] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
> [    1.894846] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
> [    1.902900] TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
> [    1.910350] TCP: Hash tables configured (established 16384 bind 16384)
> [    1.916778] UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
> [    1.923509] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
> [    1.930759] NET: Registered PF_UNIX/PF_LOCAL protocol family
> [    1.936834] RPC: Registered named UNIX socket transport module.
> [    1.942342] RPC: Registered udp transport module.
> [    1.947088] RPC: Registered tcp transport module.
> [    1.951843] RPC: Registered tcp NFSv4.1 backchannel transport module.
> [    1.958334] PCI: CLS 0 bytes, default 64
> [    1.962709] Trying to unpack rootfs image as initramfs...
> [    1.977090] workingset: timestamp_bits=62 max_order=19 bucket_order=0
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A01.98286=
3] Installing knfsd (copyright (C) 1996 <a href=3D"mailto:okir@monad.swb.de=
" target=3D"_blank">okir@monad.swb.de</a> &lt;mailto:<a href=3D"mailto:okir=
@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a>&gt; &lt;mailto:<a hr=
ef=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a> &lt=
;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad.s=
wb.de</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D=
"_blank">okir@monad.swb.de</a> &lt;mailto:<a href=3D"mailto:okir@monad.swb.=
de" target=3D"_blank">okir@monad.swb.de</a>&gt; &lt;mailto:<a href=3D"mailt=
o:okir@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a> &lt;mailto:<a =
href=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a>&g=
t;&gt;&gt; &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blank=
">okir@monad.swb.de</a> &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" tar=
get=3D"_blank">okir@monad.swb.de</a>&gt; &lt;mailto:<a href=3D"mailto:okir@=
monad.swb.de" target=3D"_blank">okir@monad.swb.de</a> &lt;mailto:<a href=3D=
"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a>&gt;&gt; =
&lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@mona=
d.swb.de</a> &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_bla=
nk">okir@monad.swb.de</a>&gt; &lt;mailto:<a href=3D"mailto:okir@monad.swb.d=
e" target=3D"_blank">okir@monad.swb.de</a> &lt;mailto:<a href=3D"mailto:oki=
r@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a>&gt;&gt;&gt;&gt; &lt=
;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad.s=
wb.de</a> &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blank"=
>okir@monad.swb.de</a>&gt; &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" =
target=3D"_blank">okir@monad.swb.de</a> &lt;mailto:<a href=3D"mailto:okir@m=
onad.swb.de" target=3D"_blank">okir@monad.swb.de</a>&gt;&gt; &lt;mailto:<a =
href=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a> &=
lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad=
.swb.de</a>&gt; &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_=
blank">okir@monad.swb.de</a> &lt;mailto:<a href=3D"mailto:okir@monad.swb.de=
" target=3D"_blank">okir@monad.swb.de</a>&gt;&gt;&gt; &lt;mailto:<a href=3D=
"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a> &lt;mail=
to:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad.swb.de=
</a>&gt; &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blank">=
okir@monad.swb.de</a> &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" targe=
t=3D"_blank">okir@monad.swb.de</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:oki=
r@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a> &lt;mailto:<a href=
=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad.swb.de</a>&gt; &=
lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blank">okir@monad=
.swb.de</a> &lt;mailto:<a href=3D"mailto:okir@monad.swb.de" target=3D"_blan=
k">okir@monad.swb.de</a>&gt;&gt;&gt;&gt;&gt;).<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.02104=
5] NET: Registered PF_ALG protocol family<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.02112=
2] xor: measuring software checksum speed<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.02934=
7] =C2=A0 =C2=A08regs =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 : =C2=A02366 MB/se=
c<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.03308=
1] =C2=A0 =C2=A032regs =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0: =C2=A02802 MB/se=
c<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.03822=
3] =C2=A0 =C2=A0arm64_neon =C2=A0 =C2=A0 =C2=A0: =C2=A02320 MB/sec<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.03838=
5] xor: using function: 32regs (2802 MB/sec)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.04361=
4] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.05095=
9] io scheduler mq-deadline registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.05552=
1] io scheduler kyber registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.06822=
7] xen:xen_evtchn: Event-channel device installed<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.06928=
1] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.07619=
0] cacheinfo: Unable to detect cache hierarchy for CPU 0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.08554=
8] brd: module loaded<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.08929=
0] loop: module loaded<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.08934=
1] Invalid max_queues (4), will use default max: 2.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.09456=
5] tun: Universal TUN/TAP device driver, 1.6<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.09865=
5] xen_netfront: Initialising Xen virtual ethernet driver<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.10415=
6] usbcore: registered new interface driver rtl8150<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.10981=
3] usbcore: registered new interface driver r8152<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.11536=
7] usbcore: registered new interface driver asix<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.12079=
4] usbcore: registered new interface driver ax88179_178a<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.12693=
4] usbcore: registered new interface driver cdc_ether<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.13281=
6] usbcore: registered new interface driver cdc_eem<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.13852=
7] usbcore: registered new interface driver net1080<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.14425=
6] usbcore: registered new interface driver cdc_subset<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.15020=
5] usbcore: registered new interface driver zaurus<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.15583=
7] usbcore: registered new interface driver cdc_ncm<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.16155=
0] usbcore: registered new interface driver r8153_ecm<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.16824=
0] usbcore: registered new interface driver cdc_acm<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.17310=
9] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapt=
ers<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.18135=
8] usbcore: registered new interface driver uas<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.18654=
7] usbcore: registered new interface driver usb-storage<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.19264=
3] usbcore: registered new interface driver ftdi_sio<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.19838=
4] usbserial: USB Serial support registered for FTDI USB Serial Device<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.20611=
8] udc-core: couldn&#39;t find an available UDC - added [g_mass_storage] to=
 list of pending<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0drivers=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.21533=
2] i2c_dev: i2c /dev entries driver<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.22046=
7] xen_wdt xen_wdt: initialized (timeout=3D60s, nowayout=3D0)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.22592=
3] device-mapper: uevent: version 1.0.3<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.23066=
8] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: <a href=3D"=
mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat.com</a> &lt;m=
ailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@red=
hat.com</a>&gt; &lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D=
"_blank">dm-devel@redhat.com</a> &lt;mailto:<a href=3D"mailto:dm-devel@redh=
at.com" target=3D"_blank">dm-devel@redhat.com</a>&gt;&gt; &lt;mailto:<a hre=
f=3D"mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat.com</a> =
&lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blank">dm-deve=
l@redhat.com</a>&gt; &lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" targ=
et=3D"_blank">dm-devel@redhat.com</a> &lt;mailto:<a href=3D"mailto:dm-devel=
@redhat.com" target=3D"_blank">dm-devel@redhat.com</a>&gt;&gt;&gt; &lt;mail=
to:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat=
.com</a> &lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blank=
">dm-devel@redhat.com</a>&gt; &lt;mailto:<a href=3D"mailto:dm-devel@redhat.=
com" target=3D"_blank">dm-devel@redhat.com</a> &lt;mailto:<a href=3D"mailto=
:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat.com</a>&gt;&gt; &lt=
;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@r=
edhat.com</a> &lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_=
blank">dm-devel@redhat.com</a>&gt; &lt;mailto:<a href=3D"mailto:dm-devel@re=
dhat.com" target=3D"_blank">dm-devel@redhat.com</a> &lt;mailto:<a href=3D"m=
ailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat.com</a>&gt;&gt=
;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mai=
lto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@redha=
t.com</a> &lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blan=
k">dm-devel@redhat.com</a>&gt; &lt;mailto:<a href=3D"mailto:dm-devel@redhat=
.com" target=3D"_blank">dm-devel@redhat.com</a> &lt;mailto:<a href=3D"mailt=
o:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat.com</a>&gt;&gt; &l=
t;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@=
redhat.com</a> &lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"=
_blank">dm-devel@redhat.com</a>&gt; &lt;mailto:<a href=3D"mailto:dm-devel@r=
edhat.com" target=3D"_blank">dm-devel@redhat.com</a> &lt;mailto:<a href=3D"=
mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat.com</a>&gt;&g=
t;&gt; &lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blank">=
dm-devel@redhat.com</a> &lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" t=
arget=3D"_blank">dm-devel@redhat.com</a>&gt; &lt;mailto:<a href=3D"mailto:d=
m-devel@redhat.com" target=3D"_blank">dm-devel@redhat.com</a> &lt;mailto:<a=
 href=3D"mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat.com<=
/a>&gt;&gt; &lt;mailto:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_bl=
ank">dm-devel@redhat.com</a> &lt;mailto:<a href=3D"mailto:dm-devel@redhat.c=
om" target=3D"_blank">dm-devel@redhat.com</a>&gt; &lt;mailto:<a href=3D"mai=
lto:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat.com</a> &lt;mail=
to:<a href=3D"mailto:dm-devel@redhat.com" target=3D"_blank">dm-devel@redhat=
.com</a>&gt;&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.23931=
5] EDAC MC0: Giving out device to module 1 controller synps_ddr_controller:=
 DEV synps_edac<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0(INTERR=
UPT)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.24940=
5] EDAC DEVICE0: Giving out device to module zynqmp-ocm-edac controller zyn=
qmp_ocm: DEV<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0ff960000.memory-controller =
(INTERRUPT)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.26171=
9] sdhci: Secure Digital Host Controller Interface driver<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.26748=
7] sdhci: Copyright(c) Pierre Ossman<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.27189=
0] sdhci-pltfm: SDHCI platform and OF driver helper<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.27815=
7] ledtrig-cpu: registered to indicate activity on CPUs<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.28381=
6] zynqmp_firmware_probe Platform Management API v1.1<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.28955=
4] zynqmp_firmware_probe Trustzone version v1.0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.32787=
5] securefw securefw: securefw probed<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.32832=
4] alg: No test for xilinx-zynqmp-aes (zynqmp-aes)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.33256=
3] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: AES Successfully Registe=
red<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.34118=
3] alg: No test for xilinx-zynqmp-rsa (zynqmp-rsa)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.34766=
7] remoteproc remoteproc0: ff9a0000.rf5ss:r5f_0 is available<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.35300=
3] remoteproc remoteproc1: ff9a0000.rf5ss:r5f_1 is available<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.36260=
5] fpga_manager fpga0: Xilinx ZynqMP FPGA Manager registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.36654=
0] viper-xen-proxy viper-xen-proxy: Viper Xen Proxy registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.37252=
5] viper-vdpp a4000000.vdpp: Device Tree Probing<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.37777=
8] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen:=
 32<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.38643=
2] viper-vdpp a4000000.vdpp: Unable to register tamper handler. Retrying...=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.39409=
4] viper-vdpp-net a5000000.vdpp_net: Device Tree Probing<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.39985=
4] viper-vdpp-net a5000000.vdpp_net: Device registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.40593=
1] viper-vdpp-stat a8000000.vdpp_stat: Device Tree Probing<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.41203=
7] viper-vdpp-stat a8000000.vdpp_stat: Build parameters: VTI Count: 512 Eve=
nt Count: 32<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.42085=
6] default preset<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.42379=
7] viper-vdpp-stat a8000000.vdpp_stat: Device registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.43005=
4] viper-vdpp-rng ac000000.vdpp_rng: Device Tree Probing<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.43594=
8] viper-vdpp-rng ac000000.vdpp_rng: Device registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.44197=
6] vmcu driver init<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.44492=
2] VMCU: : (240:0) registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.44495=
6] In K81 Updater init<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.44900=
3] pktgen: Packet Generator for packet performance testing. Version: 2.75<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.46883=
3] Initializing XFRM netlink socket<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.46890=
2] NET: Registered PF_PACKET protocol family<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.47272=
9] Bridge firewalling registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.47678=
5] 8021q: 802.1Q VLAN Support v1.8<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.48134=
1] registered taskstats version 1<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.48639=
4] Btrfs loaded, crc32c=3Dcrc32c-generic, zoned=3Dno, fsverity=3Dno<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.50314=
5] ff010000.serial: ttyPS1 at MMIO 0xff010000 (irq =3D 36, base_baud =3D 62=
50000) is a xuartps<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.50710=
3] of-fpga-region fpga-full: FPGA Region probed<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.51298=
6] xilinx-zynqmp-dma fd500000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.52026=
7] xilinx-zynqmp-dma fd510000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.52823=
9] xilinx-zynqmp-dma fd520000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.53615=
2] xilinx-zynqmp-dma fd530000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.54415=
3] xilinx-zynqmp-dma fd540000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.55212=
7] xilinx-zynqmp-dma fd550000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.56017=
8] xilinx-zynqmp-dma ffa80000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.56798=
7] xilinx-zynqmp-dma ffa90000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.57601=
8] xilinx-zynqmp-dma ffaa0000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.58388=
9] xilinx-zynqmp-dma ffab0000.dma-controller: ZynqMP DMA driver Probe succe=
ss<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.94637=
9] spi-nor spi0.0: mt25qu512a (131072 Kbytes)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.94646=
7] 2 fixed-partitions partitions found on MTD device spi0.0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.95239=
3] Creating 2 MTD partitions on &quot;spi0.0&quot;:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.95723=
1] 0x000004000000-0x000008000000 : &quot;bank A&quot;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.96333=
2] 0x000000000000-0x000004000000 : &quot;bank B&quot;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.96869=
4] macb ff0b0000.ethernet: Not enabling partial store and forward<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.97533=
3] macb ff0b0000.ethernet eth0: Cadence GEM rev 0x50070106 at 0xff0b0000 ir=
q 25<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0(18:41:=
fe:0f:ff:02)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.98447=
2] macb ff0c0000.ethernet: Not enabling partial store and forward<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A02.99214=
4] macb ff0c0000.ethernet eth1: Cadence GEM rev 0x50070106 at 0xff0c0000 ir=
q 26<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0(18:41:=
fe:0f:ff:03)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.00104=
3] viper_enet viper_enet: Viper power GPIOs initialised<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.00731=
3] viper_enet viper_enet vnet0 (uninitialized): Validate interface QSGMII<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.01491=
4] viper_enet viper_enet vnet1 (uninitialized): Validate interface QSGMII<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.02213=
8] viper_enet viper_enet vnet1 (uninitialized): Validate interface type 18<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.03027=
4] viper_enet viper_enet vnet2 (uninitialized): Validate interface QSGMII<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.03778=
5] viper_enet viper_enet vnet3 (uninitialized): Validate interface QSGMII<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.04530=
1] viper_enet viper_enet: Viper enet registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.05095=
8] xilinx-axipmon ffa00000.perf-monitor: Probed Xilinx APM<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.05713=
5] xilinx-axipmon fd0b0000.perf-monitor: Probed Xilinx APM<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.06353=
8] xilinx-axipmon fd490000.perf-monitor: Probed Xilinx APM<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.06992=
0] xilinx-axipmon ffa10000.perf-monitor: Probed Xilinx APM<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.09772=
9] si70xx: probe of 2-0040 failed with error -5<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.09804=
2] cdns-wdt fd4d0000.watchdog: Xilinx Watchdog Timer with timeout 60s<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.10511=
1] cdns-wdt ff150000.watchdog: Xilinx Watchdog Timer with timeout 10s<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.11245=
7] viper-tamper viper-tamper: Device registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.11759=
3] active_bank active_bank: boot bank: 1<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.12218=
4] active_bank active_bank: boot mode: (0x02) qspi32<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.12824=
7] viper-vdpp a4000000.vdpp: Device Tree Probing<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.13343=
9] viper-vdpp a4000000.vdpp: VDPP Version: 1.3.9.0 Info: 1.512.15.0 KeyLen:=
 32<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.14215=
1] viper-vdpp a4000000.vdpp: Tamper handler registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.14743=
8] viper-vdpp a4000000.vdpp: Device registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.15300=
7] lpc55_l2 spi1.0: registered handler for protocol 0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.15858=
2] lpc55_user lpc55_user: The major number for your device is 236<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.16597=
6] lpc55_l2 spi1.0: registered handler for protocol 1<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.18199=
9] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.18285=
6] rtc-lpc55 rtc_lpc55: registered as rtc0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.18865=
6] lpc55_l2 spi1.0: (2) mcu still not ready?<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.19374=
4] lpc55_l2 spi1.0: (3) mcu still not ready?<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.19884=
8] lpc55_l2 spi1.0: (4) mcu still not ready?<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.20293=
2] mmc0: SDHCI controller on ff160000.mmc [ff160000.mmc] using ADMA 64-bit<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.21068=
9] lpc55_l2 spi1.0: (5) mcu still not ready?<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.21569=
4] lpc55_l2 spi1.0: rx error: -110<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.28443=
8] mmc0: new HS200 MMC card at address 0001<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.28517=
9] mmcblk0: mmc0:0001 SEM16G 14.6 GiB<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.29178=
4] =C2=A0mmcblk0: p1 p2 p3 p4 p5 p6 p7 p8<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.29391=
5] mmcblk0boot0: mmc0:0001 SEM16G 4.00 MiB<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.29905=
4] mmcblk0boot1: mmc0:0001 SEM16G 4.00 MiB<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.30390=
5] mmcblk0rpmb: mmc0:0001 SEM16G 4.00 MiB, chardev (244:0)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.58267=
6] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.58333=
2] rtc-lpc55 rtc_lpc55: hctosys: unable to read the hardware clock<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.59125=
2] cdns-i2c ff020000.i2c: recovery information complete<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.59708=
5] at24 0-0050: supply vcc not found, using dummy regulator<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.60301=
1] lpc55_l2 spi1.0: (2) mcu still not ready?<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.60809=
3] at24 0-0050: 256 byte spd EEPROM, read-only<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.61362=
0] lpc55_l2 spi1.0: (3) mcu still not ready?<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.61936=
2] lpc55_l2 spi1.0: (4) mcu still not ready?<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.62422=
4] rtc-rv3028 0-0052: registered as rtc1<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.62834=
3] lpc55_l2 spi1.0: (5) mcu still not ready?<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.63325=
3] lpc55_l2 spi1.0: rx error: -110<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.63910=
4] k81_bootloader 0-0010: probe<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.64162=
8] VMCU: : (235:0) registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.64163=
5] k81_bootloader 0-0010: probe completed<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.66834=
6] cdns-i2c ff020000.i2c: 400 kHz mmio ff020000 irq 28<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.66915=
4] cdns-i2c ff030000.i2c: recovery information complete<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.67541=
2] lm75 1-0048: supply vs not found, using dummy regulator<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.68292=
0] lm75 1-0048: hwmon1: sensor &#39;tmp112&#39;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.68654=
8] i2c i2c-1: Added multiplexed i2c bus 3<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.69079=
5] i2c i2c-1: Added multiplexed i2c bus 4<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.69562=
9] i2c i2c-1: Added multiplexed i2c bus 5<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.70049=
2] i2c i2c-1: Added multiplexed i2c bus 6<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.70515=
7] pca954x 1-0070: registered 4 multiplexed busses for I2C switch pca9546<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.71304=
9] at24 1-0054: supply vcc not found, using dummy regulator<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.72006=
7] at24 1-0054: 1024 byte 24c08 EEPROM, read-only<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.72476=
1] cdns-i2c ff030000.i2c: 100 kHz mmio ff030000 irq 29<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.73127=
2] sfp viper_enet:sfp-eth1: Host maximum power 2.0W<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.73754=
9] sfp_register_socket: got sfp_bus<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.74070=
9] sfp_register_socket: register sfp_bus<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.74545=
9] sfp_register_bus: ops ok!<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.74917=
9] sfp_register_bus: Try to attach<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.75341=
9] sfp_register_bus: Attach succeeded<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.75791=
4] sfp_register_bus: upstream ops attach<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.76267=
7] sfp_register_bus: Bus registered<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.76699=
9] sfp_register_socket: register sfp_bus succeeded<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.77587=
0] of_cfs_init<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.77600=
0] of_cfs_init: OK<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 =C2=A03.77821=
1] clk: Not disabling unused clocks<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 11.278477] Fr=
eeing initrd memory: 206056K<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 11.279406] Fr=
eeing unused kernel memory: 1536K<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 11.314006] Ch=
ecked W+X mappings: passed, no W+X pages found<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 11.314142] Ru=
n /init as init process<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; INIT: version 3.01 boo=
ting<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; fsck (busybox 1.35.0)<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; /dev/mmcblk0p1: clean,=
 12/102400 files, 238162/409600 blocks<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; /dev/mmcblk0p2: clean,=
 12/102400 files, 171972/409600 blocks<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; /dev/mmcblk0p3 was not=
 cleanly unmounted, check forced.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; /dev/mmcblk0p3: 20/409=
6 files (0.0% non-contiguous), 663/16384 blocks<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; [ =C2=A0 11.553073] EX=
T4-fs (mmcblk0p3): mounted filesystem without journal. Opts: (null). Quota =
mode:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0disable=
d.<br>
> > > > > > > > Starting random number generator daemon.
> > > > > > > > [   11.580662] random: crng init done
> > > > > > > > Starting udev
> > > > > > > > [   11.613159] udevd[142]: starting version 3.2.10
> > > > > > > > [   11.620385] udevd[143]: starting eudev-3.2.10
> > > > > > > > [   11.704481] macb ff0b0000.ethernet control_red: renamed from eth0
> > > > > > > > [   11.720264] macb ff0c0000.ethernet control_black: renamed from eth1
> > > > > > > > [   12.063396] ip_local_port_range: prefer different parity for start/end values.
> > > > > > > > [   12.084801] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > > > > > > hwclock: RTC_RD_TIME: Invalid exchange
> > > > > > > > Mon Feb 27 08:40:53 UTC 2023
> > > > > > > > [   12.115309] rtc-lpc55 rtc_lpc55: lpc55_rtc_set_time: bad result
> > > > > > > > hwclock: RTC_SET_TIME: Invalid exchange
> > > > > > > > [   12.131027] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > > > > > > Starting mcud
> > > > > > > > INIT: Entering runlevel: 5
> > > > > > > > Configuring network interfaces... done.
> > > > > > > > resetting network interface
> > > > > > > > [   12.718295] macb ff0b0000.ethernet control_red: PHY [ff0b0000.ethernet-ffffffff:02] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> > > > > > > > [   12.723919] macb ff0b0000.ethernet control_red: configuring for phy/gmii link mode
> > > > > > > > [   12.732151] pps pps0: new PPS source ptp0
> > > > > > > > [   12.735563] macb ff0b0000.ethernet: gem-ptp-timer ptp clock registered.
> > > > > > > > [   12.745724] macb ff0c0000.ethernet control_black: PHY [ff0c0000.ethernet-ffffffff:01] driver [Xilinx PCS/PMA PHY] (irq=POLL)
> > > > > > > > [   12.753469] macb ff0c0000.ethernet control_black: configuring for phy/gmii link mode
> > > > > > > > [   12.761804] pps pps1: new PPS source ptp1
> > > > > > > > [   12.765398] macb ff0c0000.ethernet: gem-ptp-timer ptp clock registered.
> > > > > > > > Auto-negotiation: off
> > > > > > > > Auto-negotiation: off
> > > > > > > > [   16.828151] macb ff0b0000.ethernet control_red: unable to generate target frequency: 125000000 Hz
> > > > > > > > [   16.834553] macb ff0b0000.ethernet control_red: Link is Up - 1Gbps/Full - flow control off
> > > > > > > > [   16.860552] macb ff0c0000.ethernet control_black: unable to generate target frequency: 125000000 Hz
> > > > > > > > [   16.867052] macb ff0c0000.ethernet control_black: Link is Up - 1Gbps/Full - flow control off
> > > > > > > > Starting Failsafe Secure Shell server in port 2222: sshd
> > > > > > > > done.
> > > > > > > > Starting rpcbind daemon...done.
> > > > > > > >
> > > > > > > > [   17.093019] rtc-lpc55 rtc_lpc55: lpc55_rtc_get_time: bad result: 1
> > > > > > > > hwclock: RTC_RD_TIME: Invalid exchange
> > > > > > > > Starting State Manager Service
> > > > > > > > Start state-manager restarter...
> > > > > > > > (XEN) d0v1 Forwarding AES operation: 3254779951
> > > > > > > > Starting /usr/sbin/xenstored....[   17.265256] BTRFS: device fsid 80efc224-c202-4f8e-a949-4dae7f04a0aa devid 1 transid 744 /dev/dm-0 scanned by udevd (385)
> > > > > > > > [   17.349933] BTRFS info (device dm-0): disk space caching is enabled
> > > > > > > > [   17.350670] BTRFS info (device dm-0): has skinny extents
> > > > > > > > [   17.364384] BTRFS info (device dm-0): enabling ssd optimizations
> > > > > > > > [   17.830462] BTRFS: device fsid 27ff666b-f4e5-4f90-9054-c210db5b2e2e devid 1 transid 6 /dev/mapper/client_prov scanned by mkfs.btrfs (526)
> > > > > > > > [   17.872699] BTRFS info (device dm-1): using free space tree
> > > > > > > > [   17.872771] BTRFS info (device dm-1): has skinny extents
> > > > > > > > [   17.878114] BTRFS info (device dm-1): flagging fs with big metadata feature
> > > > > > > > [   17.894289] BTRFS info (device dm-1): enabling ssd optimizations
> > > > > > > > [   17.895695] BTRFS info (device dm-1): checking UUID tree
> > > > > > > >
> > > > > > > > Setting domain 0 name, domid and JSON config...
> > > > > > > > Done setting up Dom0
> > > > > > > > Starting xenconsoled...
> > > > > > > > Starting QEMU as disk backend for dom0
> > > > > > > > Starting domain watchdog daemon: xenwatchdogd startup
> > > > > > > >
> > > > > > > > [   18.408647] BTRFS: device fsid 5e08d5e9-bc2a-46b9-af6a-44c7087b8921 devid 1 transid 6 /dev/mapper/client_config scanned by mkfs.btrfs (574)
> > > > > > > > [done]
> > > > > > > > [   18.465552] BTRFS info (device dm-2): using free space tree
> > > > > > > > [   18.465629] BTRFS info (device dm-2): has skinny extents
> > > > > > > > [   18.471002] BTRFS info (device dm-2): flagging fs with big metadata feature
> > > > > > > > Starting crond: [   18.482371] BTRFS info (device dm-2): enabling ssd optimizations
> > > > > > > > [   18.486659] BTRFS info (device dm-2): checking UUID tree
> > > > > > > > OK
> > > > > > > > starting rsyslogd ... Log partition ready after 0 poll loops
> > > > > > > > done
> > > > > > > > rsyslogd: cannot connect to 172.18.0.1:514: Network is unreachable [v8.2208.0 try https://www.rsyslog.com/e/2027 ]
> > > > > > > > [   18.670637] BTRFS: device fsid 39d7d9e1-967d-478e-94ae-690deb722095 devid 1 transid 608 /dev/dm-3 scanned by udevd (518)
> > > > > > > >
> > > > > > > > Please insert USB token and enter your role in login prompt.
> > > > > > > >
> > > > > > > > login:
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > O.
> > > > > > > >
> > > > > > > >
> > > > > > > > On Mon, 24 Apr 2023 at 23:39, Stefano Stabellini <sstabellini@kernel.org> wrote:
&gt; Hi Oleg,<br>
&gt;<br>
&gt; Here is the issue from your logs:<br>
&gt;<br>
&gt;     SError Interrupt on CPU0, code 0xbe000000 -- SError<br>
&gt;<br>
&gt; SErrors are special signals that notify software of serious hardware<br>
&gt; errors. Something is going very wrong. Defective hardware is one<br>
&gt; possibility. Another possibility is software accessing address ranges<br>
&gt; that it is not supposed to; that can also raise an SError.<br>
&gt;<br>
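A quick way to sanity-check that syndrome value (a sketch, not from the thread; it assumes the standard ESR_EL1 layout, with EC in bits [31:26], IL in bit [25], and ISS in bits [24:0]):

```python
# Minimal sketch: split the reported SError syndrome value into its
# coarse ESR_EL1 fields. Assumes the standard AArch64 layout:
#   EC  = bits [31:26]  (exception class; 0x2f is "SError interrupt")
#   IL  = bit  [25]     (instruction length flag)
#   ISS = bits [24:0]   (instruction-specific syndrome)
def decode_esr(esr: int) -> dict:
    return {
        "EC": (esr >> 26) & 0x3F,
        "IL": (esr >> 25) & 0x1,
        "ISS": esr & 0x1FFFFFF,
    }

fields = decode_esr(0xBE000000)
print(hex(fields["EC"]), fields["IL"], hex(fields["ISS"]))
```

For 0xbe000000 this yields EC 0x2f (SError interrupt), IL set, and an all-zero ISS, i.e. the syndrome itself carries no further detail about the cause.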
&gt; Cheers,<br>
&gt;<br>
&gt; Stefano<br>
&gt;<br>
&gt; On Mon, 24 Apr 2023, Oleg Nikitenko wrote:<br>
&gt; &gt;<br>
&gt; &gt; Hello,<br>
&gt; &gt;<br>
&gt; &gt; Thanks guys.<br>
&gt; &gt; I found out where the problem was.<br>
&gt; &gt; Now dom0 boots further, but I have a new problem.<br>
&gt; &gt; This is a kernel panic during Dom0 loading.<br>
&gt; &gt; Maybe someone is able to suggest something?<br>
&gt; &gt;<br>
&gt; &gt; Regards,<br>
&gt; &gt; O.<br>
&gt; &gt;<br>
&gt; &gt; [    3.771362] sfp_register_bus: upstream ops attach<br>
&gt; &gt; [    3.776119] sfp_register_bus: Bus registered<br>
&gt; &gt; [    3.780459] sfp_register_socket: register sfp_bus succeeded<br>
&gt; &gt; [    3.789399] of_cfs_init<br>
&gt; &gt; [    3.789499] of_cfs_init: OK<br>
&gt; &gt; [    3.791685] clk: Not disabling unused clocks<br>
&gt; &gt; [   11.010355] SError Interrupt on CPU0, code 0xbe000000 -- SError<br>
&gt; &gt; [   11.010380] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1<br>
&gt; &gt; [   11.010393] Workqueue: events_unbound async_run_entry_fn<br>
&gt; &gt; [   11.010414] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)<br>
&gt; &gt; [   11.010422] pc : simple_write_end+0xd0/0x130<br>
&gt; &gt; [   11.010431] lr : generic_perform_write+0x118/0x1e0<br>
&gt; &gt; [   11.010438] sp : ffffffc00809b910<br>
&gt; &gt; [   11.010441] x29: ffffffc00809b910 x28: 0000000000000000 x27: ffffffef69ba88c0<br>
&gt; &gt; [   11.010451] x26: 0000000000003eec x25: ffffff807515db00 x24: 0000000000000000<br>
&gt; &gt; [   11.010459] x23: ffffffc00809ba90 x22: 0000000002aac000 x21: ffffff807315a260<br>
&gt; &gt; [   11.010472] x20: 0000000000001000 x19: fffffffe02000000 x18: 0000000000000000<br>
&gt; &gt; [   11.010481] x17: 00000000ffffffff x16: 0000000000008000 x15: 0000000000000000<br>
&gt; &gt; [   11.010490] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000<br>
&gt; &gt; [   11.010498] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000<br>
&gt; &gt; [   11.010507] x8 : 0000000000000000 x7 : ffffffef693ba680 x6 : 000000002d89b700<br>
&gt; &gt; [   11.010515] x5 : fffffffe02000000 x4 : ffffff807315a3c8 x3 : 0000000000001000<br>
&gt; &gt; [   11.010524] x2 : 0000000002aab000 x1 : 0000000000000001 x0 : 0000000000000005<br>
&gt; &gt; [   11.010534] Kernel panic - not syncing: Asynchronous SError Interrupt<br>
&gt; &gt; [   11.010539] CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 5.15.72-xilinx-v2022.1 #1<br>
&gt; &gt; [   11.010545] Hardware name: D14 Viper Board - White Unit (DT)<br>
&gt; &gt; [   11.010548] Workqueue: events_unbound async_run_entry_fn<br>
&gt; &gt; [   11.010556] Call trace:<br>
&gt; &gt; [   11.010558]  dump_backtrace+0x0/0x1c4<br>
&gt; &gt; [   11.010567]  show_stack+0x18/0x2c<br>
&gt; &gt; [   11.010574]  dump_stack_lvl+0x7c/0xa0<br>
&gt; &gt; [   11.010583]  dump_stack+0x18/0x34<br>
&gt; &gt; [   11.010588]  panic+0x14c/0x2f8<br>
&gt; &gt; [   11.010597]  print_tainted+0x0/0xb0<br>
&gt; &gt; [   11.010606]  arm64_serror_panic+0x6c/0x7c<br>
&gt; &gt; [   11.010614]  do_serror+0x28/0x60<br>
&gt; &gt; [   11.010621]  el1h_64_error_handler+0x30/0x50<br>
&gt; &gt; [   11.010628]  el1h_64_error+0x78/0x7c<br>
&gt; &gt; [   11.010633]  simple_write_end+0xd0/0x130<br>
&gt; &gt; [   11.010639]  generic_perform_write+0x118/0x1e0<br>
&gt; &gt; [   11.010644]  __generic_file_write_iter+0x138/0x1c4<br>
&gt; &gt; [   11.010650]  generic_file_write_iter+0x78/0xd0<br>
&gt; &gt; [   11.010656]  __kernel_write+0xfc/0x2ac<br>
&gt; &gt; [   11.010665]  kernel_write+0x88/0x160<br>
&gt; &gt; [   11.010673]  xwrite+0x44/0x94<br>
&gt; &gt; [   11.010680]  do_copy+0xa8/0x104<br>
&gt; &gt; [   11.010686]  write_buffer+0x38/0x58<br>
&gt; &gt; [   11.010692]  flush_buffer+0x4c/0xbc<br>
&gt; &gt; [   11.010698]  __gunzip+0x280/0x310<br>
&gt; &gt; [   11.010704]  gunzip+0x1c/0x28<br>
&gt; &gt; [   11.010709]  unpack_to_rootfs+0x170/0x2b0<br>
&gt; &gt; [   11.010715]  do_populate_rootfs+0x80/0x164<br>
&gt; &gt; [   11.010722]  async_run_entry_fn+0x48/0x164<br>
&gt; &gt; [   11.010728]  process_one_work+0x1e4/0x3a0<br>
&gt; &gt; [   11.010736]  worker_thread+0x7c/0x4c0<br>
&gt; &gt; [   11.010743]  kthread+0x120/0x130<br>
&gt; &gt; [   11.010750]  ret_from_fork+0x10/0x20<br>
&gt; &gt; [   11.010757] SMP: stopping secondary CPUs<br>
&gt; &gt; [   11.010784] Kernel Offset: 0x2f61200000 from 0xffffffc008000000<br>
&gt; &gt; [   11.010788] PHYS_OFFSET: 0x0<br>
&gt; &gt; [   11.010790] CPU features: 0x00000401,00000842<br>
&gt; &gt; [   11.010795] Memory Limit: none<br>
&gt; &gt; [   11.277509] ---[ end Kernel panic - not syncing: Asynchronous SError Interrupt ]---<br>
&gt; &gt;<br>
&gt; &gt; On Fri, 21 Apr 2023 at 15:52, Michal Orzel &lt;<a href="mailto:michal.orzel@amd.com" target="_blank">michal.orzel@amd.com</a>&gt;:<br>
&gt; &gt; &gt; Hi Oleg,<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; On 21/04/2023 14:49, Oleg Nikitenko wrote:<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; Hello Michal,<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; I was not able to enable earlyprintk in Xen for now.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; I decided to choose another way.<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; This is a xen&#39;s command line =
that I found out completely.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) $$$$ console=3Ddtuart dtuar=
t=3Dserial0 dom0_mem=3D1600M dom0_max_vcpus=3D2 dom0_vcpus_pin<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0bootscr=
ub=3D0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0vwfi=3Dnative<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0sched=3Dnull<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0timer_slop=3D0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0Yes, adding a printk() in Xen was also=
 a good idea.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; So you are absolutely right about=
 a command line.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; Now I am going to find out why xe=
n did not have the correct parameters from the device<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0tree.<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0Maybe you will find this document help=
ful:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0<a href=3D"https://github.com/Xilinx/x=
en/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt" rel=3D"nore=
ferrer" target=3D"_blank">https://github.com/Xilinx/xen/blob/xlnx_rebase_4.=
16/docs/misc/arm/device-tree/booting.txt</a> &lt;<a href=3D"https://github.=
com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt"=
 rel=3D"noreferrer" target=3D"_blank">https://github.com/Xilinx/xen/blob/xl=
nx_rebase_4.16/docs/misc/arm/device-tree/booting.txt</a>&gt; &lt;<a href=3D=
"https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-t=
ree/booting.txt" rel=3D"noreferrer" target=3D"_blank">https://github.com/Xi=
linx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt</a> &l=
t;<a href=3D"https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/=
arm/device-tree/booting.txt" rel=3D"noreferrer" target=3D"_blank">https://g=
ithub.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/bootin=
g.txt</a>&gt;&gt; &lt;<a href=3D"https://github.com/Xilinx/xen/blob/xlnx_re=
base_4.16/docs/misc/arm/device-tree/booting.txt" rel=3D"noreferrer" target=
=3D"_blank">https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/a=
rm/device-tree/booting.txt</a> &lt;<a href=3D"https://github.com/Xilinx/xen=
/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt" rel=3D"norefe=
rrer" target=3D"_blank">https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16=
/docs/misc/arm/device-tree/booting.txt</a>&gt; &lt;<a href=3D"https://githu=
b.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.tx=
t" rel=3D"noreferrer" target=3D"_blank">https://github.com/Xilinx/xen/blob/=
xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt</a> &lt;<a href=3D"h=
ttps://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tre=
e/booting.txt" rel=3D"noreferrer" target=3D"_blank">https://github.com/Xili=
nx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-tree/booting.txt</a>&gt;&=
gt;&gt; &lt;<a href=3D"https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/=
docs/misc/arm/device-tree/booting.txt" rel=3D"noreferrer" target=3D"_blank"=
>https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/arm/device-t=
ree/booting.txt</a> &lt;<a href=3D"https://github.com/Xilinx/xen/blob/xlnx_=
rebase_4.16/docs/misc/arm/device-tree/booting.txt" rel=3D"noreferrer" targe=
t=3D"_blank">https://github.com/Xilinx/xen/blob/xlnx_rebase_4.16/docs/misc/=
arm/device-tree/booting.txt</a>&gt;<br>
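For reference, the booting.txt document linked above describes how Xen's command line is taken from the /chosen node of the host device tree; a minimal sketch of such a node, with the property and compatible names taken from that document (the dom0 command line, addresses, and sizes here are example values only):

```dts
/* Sketch of a /chosen node per docs/misc/arm/device-tree/booting.txt.
 * xen,xen-bootargs feeds Xen's own command line; xen,dom0-bootargs feeds dom0's.
 */
chosen {
    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0";
    xen,dom0-bootargs = "console=hvc0 earlycon=xenboot"; /* example dom0 args */

    dom0 {
        compatible = "xen,linux-zimage", "xen,multiboot-module";
        reg = <0x0 0x80000 0x3100000>; /* hypothetical load address and size */
    };
};
```

Dumping the device tree the bootloader actually hands to Xen and comparing its /chosen node against the intended one is one way to see where unexpected parameters come from.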
>
> ~Michal
>
>> Regards,
>> Oleg
>>
>> Fri, 21 Apr 2023 at 11:16, Michal Orzel <michal.orzel@amd.com>:
>>
>> On 21/04/2023 10:04, Oleg Nikitenko wrote:
>>> Hello Michal,
>>>
>>> Yes, I use yocto.
>>>
>>> Yesterday all day long I tried to follow your suggestions.
>>> I faced a problem.
>>> Manually in the xen config build file I pasted the strings:
>> In the .config file, or in some Yocto file (listing additional Kconfig options) added to SRC_URI?
>> You shouldn't really modify the .config file, but if you do, you should execute "make olddefconfig" afterwards.
>>
>>> CONFIG_EARLY_PRINTK
>>> CONFIG_EARLY_PRINTK_ZYNQMP
>>> CONFIG_EARLY_UART_CHOICE_CADENCE
>> I hope you added =y to them.
>>
>> Anyway, you have at least the following solutions:
>> 1) Run "bitbake xen -c menuconfig" to properly set early printk
>> 2) Find out how you enable other Kconfig options in your project (e.g. CONFIG_COLORING=y, which is not enabled by default)
>> 3) Append the following to "xen/arch/arm/configs/arm64_defconfig":
>> CONFIG_EARLY_PRINTK_ZYNQMP=y
>>
>> ~Michal
>
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
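[Editor's note: option 3 above can be made repeatable with a small shell helper. This is only a sketch: append_opt is a hypothetical helper name, and DEFCONFIG defaults to a scratch temp file for illustration; in a real checkout point it at xen/arch/arm/configs/arm64_defconfig.]

```shell
# Sketch of option 3: idempotently append "<option>=y" to the arm64
# defconfig. DEFCONFIG defaults to a throwaway temp file here so the
# snippet is safe to run; override it with the real defconfig path.
DEFCONFIG="${DEFCONFIG:-$(mktemp)}"

append_opt() {
    # Append "$1=y" only if it is not already present, so reruns are no-ops.
    grep -q "^$1=y\$" "$DEFCONFIG" 2>/dev/null || printf '%s=y\n' "$1" >> "$DEFCONFIG"
}

append_opt CONFIG_EARLY_PRINTK_ZYNQMP
append_opt CONFIG_EARLY_PRINTK_ZYNQMP   # second call appends nothing
```

Because the append is guarded by a grep, the same script can run on every build without duplicating lines in the defconfig.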
> > > Host hangs at build time.
> > > Maybe I did not set something in the build config file?
> > >
> > > Regards,
> > > Oleg
> > >
> > > On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
> > > > Thanks Michal,
> > > >
> > > > You gave me an idea.
> > > > I am going to try it today.
> > > >
> > > > Regards,
> > > > O.
> > > >
> > > > On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@gmail.com> wrote:
"_blank">oleshiiwood@gmail.com</a> &lt;mailto:<a href=3D"mailto:oleshiiwood=
@gmail.com" target=3D"_blank">oleshiiwood@gmail.com</a>&gt; &lt;mailto:<a h=
ref=3D"mailto:oleshiiwood@gmail.com" target=3D"_blank">oleshiiwood@gmail.co=
m</a> &lt;mailto:<a href=3D"mailto:oleshiiwood@gmail.com" target=3D"_blank"=
>oleshiiwood@gmail.com</a>&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:oleshii=
wood@gmail.com" target=3D"_blank">oleshiiwood@gmail.com</a> &lt;mailto:<a h=
ref=3D"mailto:oleshiiwood@gmail.com" target=3D"_blank">oleshiiwood@gmail.co=
m</a>&gt; &lt;mailto:<a href=3D"mailto:oleshiiwood@gmail.com" target=3D"_bl=
ank">oleshiiwood@gmail.com</a> &lt;mailto:<a href=3D"mailto:oleshiiwood@gma=
il.com" target=3D"_blank">oleshiiwood@gmail.com</a>&gt;&gt; &lt;mailto:<a h=
ref=3D"mailto:oleshiiwood@gmail.com" target=3D"_blank">oleshiiwood@gmail.co=
m</a> &lt;mailto:<a href=3D"mailto:oleshiiwood@gmail.com" target=3D"_blank"=
>oleshiiwood@gmail.com</a>&gt; &lt;mailto:<a href=3D"mailto:oleshiiwood@gma=
il.com" target=3D"_blank">oleshiiwood@gmail.com</a> &lt;mailto:<a href=3D"m=
ailto:oleshiiwood@gmail.com" target=3D"_blank">oleshiiwood@gmail.com</a>&gt=
;&gt;&gt;&gt;&gt;&gt;&gt;&gt;:<br>
&gt; Thanks Stefano.<br>
&gt;<br>
&gt; I am going to do it today.<br>
&gt;<br>
&gt; Regards,<br>
&gt; O.<br>
&gt;<br>
&gt; Wed, 19 Apr 2023 at 23:05, Stefano Stabellini &lt;<a href="mailto:sstabellini@kernel.org" target="_blank">sstabellini@kernel.org</a>&gt;:<br>
&gt;&gt;<br>
&gt;&gt; On Wed, 19 Apr 2023, Oleg Nikitenko wrote:<br>
&gt;&gt;&gt; Hi Michal,<br>
&gt;&gt;&gt;<br>
&gt;&gt;&gt; I corrected Xen&#39;s command line. Now it is:<br>
&gt;&gt;&gt; xen,xen-bootargs = &quot;console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7&quot;;<br>
&gt;&gt;&gt;<br>
&gt;&gt; 4 colors is way too many for Xen; just do xen_colors=0-0. There is no advantage in using more than 1 color for Xen.<br>
&gt;&gt;<br>
&gt;&gt; 4 colors is too few for Dom0 if you are giving 1600M of memory to Dom0.<br>
&gt;&gt; Each color is 256M. For 1600M you should give at least 7 colors. Try:<br>
&gt;&gt;<br>
&gt;&gt; xen_colors=0-0 dom0_colors=1-8<br>
&gt;&gt;<br>
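The sizing advice above can be sketched numerically. The 4 GiB total-DRAM figure below is an assumption (not stated in the thread) chosen only because it reproduces Stefano's "each color is 256M" figure with the 16 colors the boot log reports; the way size and dom0_mem values come from the command line quoted above.

```python
# Sketch of the cache-coloring arithmetic discussed in the thread.
PAGE_SIZE = 4096              # 4 KiB pages
MiB = 1024 * 1024

def max_colors(way_size: int) -> int:
    """One color per page-sized slice of a cache way."""
    return way_size // PAGE_SIZE

def min_colors_for(mem_bytes: int, per_color_bytes: int) -> int:
    """Smallest number of colors whose memory covers mem_bytes (ceiling division)."""
    return -(-mem_bytes // per_color_bytes)

way_size = 65536                          # way_size=65536 from the command line
print(max_colors(way_size))               # 16, matching "Max. number of colors available: 16"

total_dram = 4 * 1024 * MiB               # assumed board DRAM, hypothetical
per_color = total_dram // max_colors(way_size)
print(per_color // MiB)                   # 256 MiB per color

# dom0_mem=1600M needs ceil(1600 / 256) = 7 colors,
# so the suggested dom0_colors=1-8 (8 colors) is sufficient.
print(min_colors_for(1600 * MiB, per_color))
```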
&gt;&gt;&gt; Unfortunately the result was the same.<br>
&gt;&gt;&gt;<br>
&gt;&gt;&gt; (XEN)  - Dom0 mode: Relaxed<br>
&gt;&gt;&gt; (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID<br>
&gt;&gt;&gt; (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558<br>
&gt;&gt;&gt; (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource<br>
&gt;&gt;&gt; (XEN) Coloring general information<br>
&gt;&gt;&gt; (XEN) Way size: 64kB<br>
&gt;&gt;&gt; (XEN) Max. number of colors available: 16<br>
&gt;&gt;&gt; (XEN) Xen color(s): [ 0 ]<br>
&gt;&gt;&gt; (XEN) alternatives: Patching with alt table 00000000002cc690 -&gt; 00000000002ccc0c<br>
&gt;&gt;&gt; (XEN) Color array allocation failed for dom0<br>
&gt;&gt;&gt; (XEN)<br>
&gt;&gt;&gt; (XEN) ****************************************<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Panic on CPU 0:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Error creating domain 0<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) **************************=
**************<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; (XEN) Reboot in five seconds...<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
> I am going to find out how command line arguments are passed and parsed.
>
> Regards,
> Oleg
>
> Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@gmail.com>:
>
>> Hi Michal,
>>
>> You put my nose into the problem. Thank you.
>> I am going to use your point.
>> Let's see what happens.
>>
>> Regards,
>> Oleg
>>
>> Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@amd.com>:
target=3D"_blank">michal.orzel@amd.com</a> &lt;mailto:<a href=3D"mailto:mic=
hal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.com</a>&gt;&gt; &lt;m=
ailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orze=
l@amd.com</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"=
_blank">michal.orzel@amd.com</a>&gt; &lt;mailto:<a href=3D"mailto:michal.or=
zel@amd.com" target=3D"_blank">michal.orzel@amd.com</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0&lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" =
target=3D"_blank">michal.orzel@amd.com</a>&gt;&gt;&gt;&gt;&gt; &lt;mailto:<=
a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.c=
om</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank"=
>michal.orzel@amd.com</a>&gt; &lt;mailto:<a href=3D"mailto:michal.orzel@amd=
.com" target=3D"_blank">michal.orzel@amd.com</a> &lt;mailto:<a href=3D"mail=
to:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.com</a>&gt;&gt;=
 &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">micha=
l.orzel@amd.com</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" targ=
et=3D"_blank">michal.orzel@amd.com</a>&gt; &lt;mailto:<a href=3D"mailto:mic=
hal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.com</a> &lt;mailto:<a=
 href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.co=
m</a>&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=
=3D"_blank">michal.orzel@amd.com</a> &lt;mailto:<a href=3D"mailto:michal.or=
zel@amd.com" target=3D"_blank">michal.orzel@amd.com</a>&gt; &lt;mailto:<a h=
ref=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.com<=
/a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">mi=
chal.orzel@amd.com</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:michal.orzel@am=
d.com" target=3D"_blank">michal.orzel@amd.com</a> &lt;mailto:<a href=3D"mai=
lto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.com</a>&gt; &l=
t;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.o=
rzel@amd.com</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=
=3D"_blank">michal.orzel@amd.com</a>&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mai=
lto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@=
amd.com</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_b=
lank">michal.orzel@amd.com</a>&gt; &lt;mailto:<a href=3D"mailto:michal.orze=
l@amd.com" target=3D"_blank">michal.orzel@amd.com</a> &lt;mailto:<a href=3D=
"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.com</a>&gt=
;&gt; &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">=
michal.orzel@amd.com</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com"=
 target=3D"_blank">michal.orzel@amd.com</a>&gt; &lt;mailto:<a href=3D"mailt=
o:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.com</a> &lt;mail=
to:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@a=
md.com</a>&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" t=
arget=3D"_blank">michal.orzel@amd.com</a> &lt;mailto:<a href=3D"mailto:mich=
al.orzel@amd.com" target=3D"_blank">michal.orzel@amd.com</a>&gt; &lt;mailto=
:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd=
.com</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blan=
k">michal.orzel@amd.com</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:michal.orz=
el@amd.com" target=3D"_blank">michal.orzel@amd.com</a> &lt;mailto:<a href=
=3D"mailto:michal.orzel@amd.com" target=3D"_blank">michal.orzel@amd.com</a>=
&gt; &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" target=3D"_blank">m=
ichal.orzel@amd.com</a> &lt;mailto:<a href=3D"mailto:michal.orzel@amd.com" =
target=3D"_blank">michal.orzel@amd.com</a>&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;:=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0Hi Ole=
g,<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0On 19/=
04/2023 09:03, Oleg Nikitenko wrote:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0 =C2=A0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; H=
ello Stefano,<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; T=
hanks for the clarification.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; M=
y company uses yocto for image generation.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; W=
hat kind of information do you need to consult me in this<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0case ?<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt; M=
aybe modules sizes/addresses which were mentioned by @Julien<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0Grall<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien=
@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank"=
>julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org"=
 target=3D"_blank">julien@xen.org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:=
julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"=
mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:=
<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;=
mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</=
a>&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blan=
k">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.=
org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:juli=
en@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&gt; &lt;mailto:<a href=
=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:=
<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; =
&lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.o=
rg</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julie=
n@xen.org</a>&gt;&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" =
target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xe=
n.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailt=
o:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=
=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&gt; &lt=
;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org<=
/a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@x=
en.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blan=
k">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a>&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:jul=
ien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mai=
lto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a =
href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mai=
lto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&=
gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">juli=
en@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blan=
k">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" targ=
et=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.or=
g" target=3D"_blank">julien@xen.org</a>&gt;&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mailto:<a href=3D"mailto:julien@xe=
n.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:ju=
lien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=
=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:=
<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&=
gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@x=
en.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">j=
ulien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org"=
 target=3D"_blank">julien@xen.org</a>&gt;&gt;&gt; &lt;mailto:<a href=3D"mai=
lto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=
=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mai=
lto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> =
&lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.o=
rg</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blan=
k">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.=
org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:juli=
en@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&gt;&gt;&gt; &lt;mailto=
:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt=
;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org<=
/a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">juli=
en@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blan=
k">julien@xen.org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" =
target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xe=
n.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailt=
o:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=
=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&gt;&gt;=
 &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.=
org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">juli=
en@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_=
blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" targ=
et=3D"_blank">julien@xen.org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:julie=
n@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailt=
o:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a hr=
ef=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailt=
o:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt=
;&gt;&gt;&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org"=
 target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:juli=
en@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mail=
to:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&gt; &lt;mailto:=
<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;=
mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</=
a>&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org"=
 target=3D"_blank">julien@xen.org</a>&gt;&gt;&gt; &lt;mailto:<a href=3D"mai=
lto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=
=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mai=
lto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> =
&lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.o=
rg</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blan=
k">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.=
org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:juli=
en@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mai=
lto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> =
&lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.o=
rg</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">j=
ulien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_b=
lank">julien@xen.org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.or=
g" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien=
@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"ma=
ilto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a hre=
f=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&gt;&gt=
; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen=
.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">jul=
ien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"=
_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" tar=
get=3D"_blank">julien@xen.org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:juli=
en@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mail=
to:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a h=
ref=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mail=
to:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&g=
t;&gt;&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_=
blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" targ=
et=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xe=
n.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:ju=
lien@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&gt; &lt;mailto:<a hr=
ef=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailt=
o:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt=
; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen=
.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">jul=
ien@xen.org</a>&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" ta=
rget=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.=
org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:=
julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"=
mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt;&gt; &lt;mai=
lto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> =
&lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.o=
rg</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">j=
ulien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_b=
lank">julien@xen.org</a>&gt;&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:julie=
n@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailt=
o:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a hr=
ef=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailt=
o:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt=
;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien=
@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank"=
>julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0&lt;mailto:<a href=3D"mailto:julien@xen.org" target=
=3D"_blank">julien@xen.org</a>&gt;&gt;&gt; &lt;mailto:<a href=3D"mailto:jul=
ien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mai=
lto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&gt; &lt;mailto:<a =
href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a> &lt;mai=
lto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">julien@xen.org</a>&=
gt;&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blank">juli=
en@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.org" target=3D"_blan=
k">julien@xen.org</a>&gt; &lt;mailto:<a href=3D"mailto:julien@xen.org" targ=
et=3D"_blank">julien@xen.org</a> &lt;mailto:<a href=3D"mailto:julien@xen.or=
g" target=3D"_blank">julien@xen.org</a>&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; ?<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0Sorry =
for jumping into discussion, but FWICS the Xen command<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0line yo=
u provided<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0seems to be<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0not the<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0one<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0Xen bo=
oted with. The error you are observing most likely is due<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0to dom0=
 colors<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0configuration not<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0being<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0specif=
ied (i.e. lack of dom0_colors=3D&lt;&gt; parameter). Although in<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0the com=
mand line you<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0provided, this<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0parameter<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0is set=
, I strongly doubt that this is the actual command line<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0in use.=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0You wr=
ote:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0xen,xe=
n-bootargs =3D &quot;console=3Ddtuart dtuart=3Dserial0<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0dom0_me=
m=3D1600M dom0_max_vcpus=3D2<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0dom0_vcpus_pin<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
&gt; bootscrub=0 vwfi=native<br>
&gt; sched=null timer_slop=0 way_szize=65536 xen_colors=0-3<br>
&gt; dom0_colors=4-7&quot;;<br>
&gt;<br>
&gt; but:<br>
&gt; 1) way_szize has a typo<br>
&gt; 2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:<br>
&gt; (XEN) Xen color(s): [ 0 ]<br>
&gt;<br>
&gt; This makes me believe that no colors configuration actually ended up in the command line that Xen booted with.<br>
&gt; A single color for Xen is the &quot;default if not specified&quot;, and the way size was probably calculated by asking the HW.<br>
&gt;<br>
&gt; So I would suggest to first cross-check the command line in use.<br>
&gt;<br>
&gt; ~Michal<br>
&gt;<br>
&gt; &gt;<br>
&gt; &gt; Regards,<br>
&gt; &gt; Oleg<br>
&gt; &gt;<br>
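Michal's cross-check can be done mechanically once the boot log is captured. A minimal sketch, assuming the log sits in a file; the path `/tmp/xen-boot.log` and the log excerpt are hypothetical, modeled on the lines quoted above (on a real board the log would come from `xl dmesg` in dom0 or from the serial console):

```shell
# Hypothetical Xen boot log excerpt, modeled on this thread: the coloring
# options are absent from the parsed command line, so Xen falls back to a
# single default color.
cat > /tmp/xen-boot.log <<'EOF'
(XEN) Command line: console=dtuart sched=null timer_slop=0 vwfi=native
(XEN) Xen color(s): [ 0 ]
EOF

# Did xen_colors= actually reach the hypervisor?
if grep -q 'Command line:.*xen_colors=' /tmp/xen-boot.log; then
    echo "coloring options reached Xen"
else
    echo "coloring options missing from Xen command line"
fi
```

If the `xen_colors=` option never appears on the `(XEN) Command line:` line, the bootloader (or device tree `bootargs`) dropped it before Xen ever saw it, which would explain the single-color output quoted above.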
&gt; &gt; On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini<br>
&lt;<a href="mailto:sstabellini@kernel.org" target="_blank">sstabellini@kernel.org</a>&gt;:<br>
&lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabelli=
ni@kernel.org</a>&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" =
target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:s=
stabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt;&gt;=
&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank"=
>sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel=
.org" target=3D"_blank">sstabellini@kernel.org</a>&gt; &lt;mailto:<a href=
=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org=
</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank"=
>sstabellini@kernel.org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:sstabellin=
i@kernel.org" target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a hr=
ef=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.o=
rg</a>&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_=
blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@=
kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt;&gt;&gt;&gt;<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;mai=
lto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini=
@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=
=3D"_blank">sstabellini@kernel.org</a>&gt; &lt;mailto:<a href=3D"mailto:sst=
abellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a> &lt;mailt=
o:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@k=
ernel.org</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" =
target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:s=
stabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt; &lt=
;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabel=
lini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" ta=
rget=3D"_blank">sstabellini@kernel.org</a>&gt;&gt;&gt; &lt;mailto:<a href=
=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org=
</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank"=
>sstabellini@kernel.org</a>&gt; &lt;mailto:<a href=3D"mailto:sstabellini@ke=
rnel.org" target=3D"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=
=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kernel.org=
</a>&gt;&gt; &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D=
"_blank">sstabellini@kernel.org</a> &lt;mailto:<a href=3D"mailto:sstabellin=
i@kernel.org" target=3D"_blank">sstabellini@kernel.org</a>&gt; &lt;mailto:<=
a href=3D"mailto:sstabellini@kernel.org" target=3D"_blank">sstabellini@kern=
el.org</a> &lt;mailto:<a href=3D"mailto:sstabellini@kernel.org" target=3D"_=
blank">sstabellini@kernel.org</a>&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0On Tue, 18 Apr 2023, Oleg Nikitenko wrote:<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0&gt; Hi Julien,<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0&gt; &gt;&gt; This feature has not been merged in Xen u=
pstream yet<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0&gt; &gt; would assume that upstream + the series on th=
e ML [1]<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0work<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0&gt; Please clarify this point.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0&gt; Because the two thoughts are controversial.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0Hi Oleg,<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0As Julien wrote, there is nothing controversial. As you=
<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0are awa=
re,<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0Xilinx maintains a separate Xen tree specific for Xilin=
x<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0here:<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0<a href=3D"https://github.com/xilinx/xen" rel=3D"norefe=
rrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"ht=
tps://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://g=
ithub.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" =
rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;=
<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_bla=
nk">https://github.com/xilinx/xen</a>&gt;&gt; &lt;<a href=3D"https://github=
.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xi=
linx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferre=
r" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"h=
ttps://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://=
github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=
=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt;=
&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targe=
t=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://githu=
b.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/x=
ilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"nore=
ferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"=
https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https:/=
/github.com/xilinx/xen</a>&gt;&gt; &lt;<a href=3D"https://github.com/xilinx=
/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a=
> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=
=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://gi=
thub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.co=
m/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noref=
errer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt;&gt;&gt;<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a =
href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank"=
>https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx=
/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a=
>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targ=
et=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://gith=
ub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/=
xilinx/xen</a>&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D=
"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a hre=
f=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">ht=
tps://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilin=
x/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</=
a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=
=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt;&gt; &lt;<a href=3D"ht=
tps://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://g=
ithub.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=
=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt=
;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_bl=
ank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xi=
linx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xe=
n</a>&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferr=
er" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"http=
s://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://git=
hub.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" re=
l=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a=
 href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank=
">https://github.com/xilinx/xen</a>&gt;&gt;&gt;&gt;&gt; &lt;<a href=3D"http=
s://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://git=
hub.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D=
"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a=
 href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank=
">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilin=
x/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</=
a>&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer"=
 target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https:/=
/github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github=
.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=
=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a =
href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank"=
>https://github.com/xilinx/xen</a>&gt;&gt;&gt; &lt;<a href=3D"https://githu=
b.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/x=
ilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferr=
er" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"=
https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https:/=
/github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" re=
l=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt=
; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=
=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github=
.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xi=
linx/xen</a>&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://github.com/xilinx/xen" rel=
=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a =
href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank"=
>https://github.com/xilinx/xen</a>&gt;&gt;&gt;&gt; &lt;<a href=3D"https://g=
ithub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.c=
om/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"nore=
ferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a href=
=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">htt=
ps://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen=
" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt=
;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targ=
et=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://gith=
ub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/=
xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"nor=
eferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D=
"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https:=
//github.com/xilinx/xen</a>&gt;&gt;&gt; &lt;<a href=3D"https://github.com/x=
ilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/x=
en</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" tar=
get=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https:/=
/github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github=
.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"no=
referrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt; &lt;<=
a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blan=
k">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xili=
nx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen<=
/a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" ta=
rget=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://gi=
thub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.co=
m/xilinx/xen</a>&gt;&gt;&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targ=
et=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://gith=
ub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/=
xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"nor=
eferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D=
"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https:=
//github.com/xilinx/xen</a>&gt;&gt; &lt;<a href=3D"https://github.com/xilin=
x/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</=
a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=
=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://gi=
thub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.co=
m/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noref=
errer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt;&gt; &lt;=
<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_bla=
nk">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xil=
inx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen=
</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" t=
arget=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://g=
ithub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.c=
om/xilinx/xen</a>&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=
=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a =
href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank"=
>https://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xi=
linx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xe=
n</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targ=
et=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt;&gt;&gt; &lt;<a href=
=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">htt=
ps://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen=
" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt=
; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=
=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github=
.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xi=
linx/xen</a>&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"n=
oreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=
=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">htt=
ps://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx=
/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a=
> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=
=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt;&gt; &lt;<a href=3D"ht=
tps://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://g=
ithub.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=
=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt=
;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_bl=
ank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xi=
linx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xe=
n</a>&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferr=
er" target=3D"_blank">https://github.com/xilinx/xen</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://github.com/xilinx/xen" rel=
=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt=
;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_bl=
ank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xi=
linx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xe=
n</a>&gt;&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://github.com/xili=
nx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen<=
/a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=
=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://gi=
thub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.co=
m/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noref=
errer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt; &lt;<a h=
ref=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">=
https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/=
xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>=
&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targe=
t=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://githu=
b.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/x=
ilinx/xen</a>&gt;&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=
=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a =
href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank"=
>https://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xi=
linx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xe=
n</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targ=
et=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt; &lt;<a href=3D"http=
s://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://git=
hub.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D=
"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a=
 href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank=
">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilin=
x/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</=
a>&gt;&gt;&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"nor=
eferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D=
"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https:=
//github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xe=
n" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &=
lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_=
blank">https://github.com/xilinx/xen</a>&gt;&gt; &lt;<a href=3D"https://git=
hub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com=
/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"norefe=
rrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt;<a href=
=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">htt=
ps://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx/xen=
" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt=
;&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" =
target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://=
github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.=
com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D=
"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a hre=
f=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">ht=
tps://github.com/xilinx/xen</a>&gt;&gt; &lt;<a href=3D"https://github.com/x=
ilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/x=
en</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://github.com/xilinx/xen" rel=
=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a>&gt; &lt=
;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_bl=
ank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xi=
linx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xe=
n</a>&gt;&gt;&gt;&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a =
href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank"=
>https://github.com/xilinx/xen</a> &lt;<a href=3D"https://github.com/xilinx=
/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a=
>&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" targ=
et=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a href=3D"https://gith=
ub.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/=
xilinx/xen</a>&gt;&gt; &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D=
"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</a> &lt;<a hre=
f=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">ht=
tps://github.com/xilinx/xen</a>&gt; &lt;<a href=3D"https://github.com/xilin=
x/xen" rel=3D"noreferrer" target=3D"_blank">https://github.com/xilinx/xen</=
a> &lt;<a href=3D"https://github.com/xilinx/xen" rel=3D"noreferrer" target=
=3D"_blank">https://github.com/xilinx/xen</a>&gt;&gt;&gt; &lt;<a href=3D"ht=
tps://github.com/xilinx/xen" rel=3D"noreferrer" target=3D"_blank">https:=
//github.com/xilinx/xen</a><br>
&gt; and the branch you are using (xlnx_rebase_4.16) comes from there.<br>
&gt;<br>
&gt; Instead, the upstream Xen tree lives here:<br>
&gt; <a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" =
rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://xenbits.xen.org=
/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">http=
s://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"http=
s://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" tar=
get=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&=
gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary"=
 rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dx=
en.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dx=
en.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D=
"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.g=
it;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt; &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" ta=
rget=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>=
&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsum=
mary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://x=
enbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=
=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt=
;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D=
"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;=
a=3Dsummary</a>&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a =
href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"nor=
eferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/g=
itweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt=
;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummar=
y" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D=
"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a>&gt;&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_bl=
ank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank"=
>https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D=
"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer=
" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary=
</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsum=
mary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt;&gt;&gt;&gt;&gt; =
&lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=
=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.g=
it;a=3Dsummary</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3D=
xen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.x=
en.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_b=
lank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a hr=
ef=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noref=
errer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsu=
mmary</a>&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.gi=
t;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org=
/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org=
/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">http=
s://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"=
https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer"=
 target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary<=
/a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary"=
 rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dx=
en.git;a=3Dsummary</a>&gt;&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/g=
itweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt=
; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" r=
el=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen=
.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen=
.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.=
org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xenb=
its.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_=
blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a h=
ref=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"nore=
ferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Ds=
ummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https:/=
/xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a =
href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"nor=
eferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/g=
itweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt=
;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummar=
y" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D=
"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a>&gt;&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_bl=
ank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank"=
>https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D=
"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer=
" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary=
</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsum=
mary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://xenbits.xen.org=
/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">http=
s://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"http=
s://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" tar=
get=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&=
gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary"=
 rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dx=
en.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dx=
en.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D=
"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.g=
it;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt; &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" ta=
rget=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>=
&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsum=
mary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://x=
enbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=
=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt=
;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D=
"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;=
a=3Dsummary</a>&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a =
href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"nor=
eferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/g=
itweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt=
;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummar=
y" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D=
"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a>&gt;&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_bl=
ank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank"=
>https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D=
"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer=
" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary=
</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsum=
mary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt;&gt;&gt;&gt; &lt;=
<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"=
noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a><br>
&gt;=C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3D=
xen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.x=
en.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_b=
lank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a hr=
ef=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noref=
errer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsu=
mmary</a>&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.gi=
t;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org=
/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org=
/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">http=
s://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"=
https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer"=
 target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary<=
/a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary"=
 rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dx=
en.git;a=3Dsummary</a>&gt;&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/g=
itweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt=
; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" r=
el=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen=
.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen=
.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.=
org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xenb=
its.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_=
blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a h=
ref=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"nore=
ferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Ds=
ummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https:/=
/xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a =
href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"nor=
eferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/g=
itweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt=
;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummar=
y" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D=
"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a>&gt;&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_bl=
ank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank"=
>https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D=
"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer=
" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary=
</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsum=
mary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a href=3D"https://xenbits.xen.org=
/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">http=
s://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"http=
s://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" tar=
get=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&=
gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary"=
 rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dx=
en.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dx=
en.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D=
"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.g=
it;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt; &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.or=
g/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"htt=
ps://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" ta=
rget=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>=
&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsum=
mary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://x=
enbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=
=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt=
;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D=
"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;=
a=3Dsummary</a>&gt;&gt;&gt;&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&lt;<a =
href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"nor=
eferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3D=
summary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitwe=
b/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/g=
itweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https:=
//xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" targe=
t=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt=
;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummar=
y" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a href=3D"https://xe=
nbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D=
"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a>&gt;&gt;&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=
=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbit=
s.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_bl=
ank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt; &lt;<a=
 href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"no=
referrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=
=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gi=
tweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt; &lt;<a href=3D"https://xenbits.xe=
n.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank"=
>https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D=
"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer=
" target=3D"_blank">https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary=
</a>&gt; &lt;<a href=3D"https://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dsum=
mary" rel=3D"noreferrer" target=3D"_blank">https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary</a> &lt;<a href=3D"https://xenbits.xen.org/gitweb/?=
p=3Dxen.git;a=3Dsummary" rel=3D"noreferrer" target=3D"_blank">https://xenbi=
ts.xen.org/gitweb/?p=3Dxen.git;a=3Dsummary</a>&gt;&gt;&gt;&gt;&gt;&gt;&gt;&=
gt;<br>
&gt;<br>
&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0The Cache Coloring feature that you are trying to<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0configu=
re is present<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0in xlnx_rebase_4.16, but not yet present upstream (ther=
e<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0is an<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0outstanding patch series to add cache coloring to Xen<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0upstrea=
m but it<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0hasn&#39;t been merged yet.)<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0Anyway, if you are using xlnx_rebase_4.16 it doesn&#39;=
t<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0matter =
too much for<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0you as you already have Cache Coloring as a feature<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0there.<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0I take you are using ImageBuilder to generate the boot<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0configu=
ration? If<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0so, please post the ImageBuilder config file that you a=
re<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0using.<=
br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0But from the boot message, it looks like the colors<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0configu=
ration for<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=
=C2=A0 =C2=A0 =C2=A0Dom0 is incorrect.<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<b=
r>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;=C2=
=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =C2=A0&gt;<br=
>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0 =
=C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt;=C2=A0 =C2=A0 =C2=A0&gt;<br>
&gt; <br>
</blockquote></div>

--000000000000176ab505fc0bf7c0--


From xen-devel-bounces@lists.xenproject.org Sat May 20 06:02:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 06:02:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537194.836524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Ffa-0002CA-Th; Sat, 20 May 2023 06:02:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537194.836524; Sat, 20 May 2023 06:02:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Ffa-0002B1-Na; Sat, 20 May 2023 06:02:22 +0000
Received: by outflank-mailman (input) for mailman id 537194;
 Fri, 19 May 2023 16:30:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4+OQ=BI=quicinc.com=quic_jhugo@srs-se1.protection.inumbo.net>)
 id 1q030B-0004h7-QP
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 16:30:47 +0000
Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com
 [205.220.168.131]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8044404c-f662-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 18:30:44 +0200 (CEST)
Received: from pps.filterd (m0279864.ppops.net [127.0.0.1])
 by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 34JBfjsB011872; Fri, 19 May 2023 16:29:01 GMT
Received: from nalasppmta05.qualcomm.com (Global_NAT1.qualcomm.com
 [129.46.96.20])
 by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3qp4ccs7tp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 19 May 2023 16:29:01 +0000
Received: from nalasex01a.na.qualcomm.com (nalasex01a.na.qualcomm.com
 [10.47.209.196])
 by NALASPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 34JGSxwm014994
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 19 May 2023 16:28:59 GMT
Received: from [10.226.59.182] (10.80.80.8) by nalasex01a.na.qualcomm.com
 (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.42; Fri, 19 May
 2023 09:28:57 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8044404c-f662-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=message-id : date :
 mime-version : subject : to : cc : references : from : in-reply-to :
 content-type : content-transfer-encoding; s=qcppdkim1;
 bh=yZ+c8uZcJrlAlno5FxTnPElzATRQ31IF/+qQxmh4X7Y=;
 b=JXYLKwcI2GbfAQOyp3pqNSg6DjU0Y7APV0UkqvAq+7mPhE6jU2niZ2R4O5GEnV8Jv3vY
 GKGhPl9R8u8z9GTQRJNoPGaU+0mzwGn4MRHzlqrkBUJqPBmbmvIdtKopPW+l45x2R2N4
 N53tfhOYno5CyDkZBijlZR00Ad7cvVpDEdEpXoDABtsI85/Pq3Lm9EQeADaOgh7BPKhU
 vCAzv/tTJH7NrLUDiiqElo6FSx31LTwIxiOoWrGKsGUnDbsEy4Z69PrEn6OpGxkbbO+u
 mhtvqcavuyOXh7qnKKh25rs7jG7FJgFRefJwgVrMfeYuSSMcB+UmlbDChCuJ2ha23dwu fw== 
Message-ID: <16562305-3bc0-c69f-0cb5-1b9da1014f19@quicinc.com>
Date: Fri, 19 May 2023 10:28:56 -0600
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.6.0
Subject: Re: [patch V4 36/37] x86/smpboot: Support parallel startup of
 secondary CPUs
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>, LKML <linux-kernel@vger.kernel.org>,
        David Woodhouse <dwmw2@infradead.org>
CC: <x86@kernel.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
        Brian Gerst
	<brgerst@gmail.com>,
        Arjan van de Veen <arjan@linux.intel.com>,
        Paolo Bonzini
	<pbonzini@redhat.com>,
        Paul McKenney <paulmck@kernel.org>,
        Tom Lendacky
	<thomas.lendacky@amd.com>,
        Sean Christopherson <seanjc@google.com>,
        Oleksandr
 Natalenko <oleksandr@natalenko.name>,
        Paul Menzel <pmenzel@molgen.mpg.de>,
        "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
        Piotr Gorski
	<lucjan.lucjanov@gmail.com>,
        Usama Arif <usama.arif@bytedance.com>,
        Juergen
 Gross <jgross@suse.com>,
        Boris Ostrovsky <boris.ostrovsky@oracle.com>,
        <xen-devel@lists.xenproject.org>, Russell King <linux@armlinux.org.uk>,
        Arnd
 Bergmann <arnd@arndb.de>, <linux-arm-kernel@lists.infradead.org>,
        Catalin
 Marinas <catalin.marinas@arm.com>,
        Will Deacon <will@kernel.org>, Guo Ren
	<guoren@kernel.org>,
        <linux-csky@vger.kernel.org>,
        Thomas Bogendoerfer
	<tsbogend@alpha.franken.de>,
        <linux-mips@vger.kernel.org>,
        "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>,
        Helge Deller
	<deller@gmx.de>, <linux-parisc@vger.kernel.org>,
        Paul Walmsley
	<paul.walmsley@sifive.com>,
        Palmer Dabbelt <palmer@dabbelt.com>, <linux-riscv@lists.infradead.org>,
        Mark Rutland <mark.rutland@arm.com>,
        Sabin
 Rapan <sabrapan@amazon.com>,
        "Michael Kelley (LINUX)"
	<mikelley@microsoft.com>,
        Ross Philipson <ross.philipson@oracle.com>,
        David
 Woodhouse <dwmw@amazon.co.uk>
References: <20230512203426.452963764@linutronix.de>
 <20230512205257.411554373@linutronix.de>
From: Jeffrey Hugo <quic_jhugo@quicinc.com>
In-Reply-To: <20230512205257.411554373@linutronix.de>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.80.80.8]
X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To
 nalasex01a.na.qualcomm.com (10.47.209.196)
X-QCInternal: smtphost
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085
X-Proofpoint-GUID: l2X6Bcn3k_Nws-GO_lVKOQ5EQSj3BbaM
X-Proofpoint-ORIG-GUID: l2X6Bcn3k_Nws-GO_lVKOQ5EQSj3BbaM
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.573,FMLib:17.11.170.22
 definitions=2023-05-19_11,2023-05-17_02,2023-02-09_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 priorityscore=1501
 lowpriorityscore=0 adultscore=0 impostorscore=0 spamscore=0 bulkscore=0
 phishscore=0 mlxlogscore=999 suspectscore=0 malwarescore=0 clxscore=1011
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2304280000
 definitions=main-2305190140

On 5/12/2023 3:07 PM, Thomas Gleixner wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> In parallel startup mode the APs are kicked alive by the control CPU
> quickly after each other and run through the early startup code in
> parallel. The real-mode startup code is already serialized with a
> bit-spinlock to protect the real-mode stack.
> 
> In parallel startup mode the smpboot_control variable obviously cannot
> contain the Linux CPU number so the APs have to determine their Linux CPU
> number on their own. This is required to find the CPUs per CPU offset in
> order to find the idle task stack and other per CPU data.
> 
> To achieve this, export the cpuid_to_apicid[] array so that each AP can
> find its own CPU number by searching therein based on its APIC ID.
> 
> Introduce a flag in the top bits of smpboot_control which indicates that
> the AP should find its CPU number by reading the APIC ID from the APIC.
> 
> This is required because CPUID based APIC ID retrieval can only provide the
> initial APIC ID, which might have been overruled by the firmware. Some AMD
> APUs come up with APIC ID = initial APIC ID + 0x10, so the APIC ID to CPU
> number lookup would fail miserably if based on CPUID. Also virtualization
> can make its own APIC ID assignments. The only requirement is that the
> APIC IDs are consistent with the ACPI/MADT table.
> 
> For the boot CPU, or in case parallel bringup is disabled, the control bits
> are empty and the CPU number is directly available in bits 0-23 of
> smpboot_control.
> 
> [ tglx: Initial proof of concept patch with bitlock and APIC ID lookup ]
> [ dwmw2: Rework and testing, commit message, CPUID 0x1 and CPU0 support ]
> [ seanc: Fix stray override of initial_gs in common_cpu_up() ]
> [ Oleksandr Natalenko: reported suspend/resume issue fixed in
>    x86_acpi_suspend_lowlevel ]
> [ tglx: Make it read the APIC ID from the APIC instead of using CPUID,
>    	split the bitlock part out ]
> 
> Co-developed-by: Thomas Gleixner <tglx@linutronix.de>
> Co-developed-by: Brian Gerst <brgerst@gmail.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Brian Gerst <brgerst@gmail.com>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Michael Kelley <mikelley@microsoft.com>
> ---
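For readers following along, the lookup the quoted commit message describes can be sketched as standalone C. `cpuid_to_apicid[]` and `smpboot_control` are names taken from the patch; the flag value, mask, and helper name below are hypothetical illustrations, not the kernel's actual definitions:

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS              8
#define STARTUP_READ_APICID  (1U << 31)  /* hypothetical "find CPU via APIC ID" flag in the top bits */
#define STARTUP_CPU_MASK     0x00ffffffU /* per the commit message, the CPU number lives in bits 0-23 */

/* Stand-in for the exported cpuid_to_apicid[] array (populated from ACPI/MADT). */
static uint32_t cpuid_to_apicid[NR_CPUS] = {
    0x10, 0x11, 0x12, 0x13, 0x20, 0x21, 0x22, 0x23
};

/* An AP recovers its Linux CPU number: either it is inline in
 * smpboot_control (serial bringup), or the AP searches the table
 * for its own APIC ID (parallel bringup). */
static int find_cpu_number(uint32_t smpboot_control, uint32_t my_apicid)
{
    if (!(smpboot_control & STARTUP_READ_APICID))
        return (int)(smpboot_control & STARTUP_CPU_MASK);

    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (cpuid_to_apicid[cpu] == my_apicid)
            return cpu;
    return -1; /* APIC ID not found in the MADT-derived table */
}
```

In the parallel path the APIC ID is read from the APIC itself rather than via CPUID, since (as noted above) the firmware may have overridden the initial APIC ID that CPUID reports.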

I pulled in this change via the -next tree (tag next-20230519), and I get a
build failure using the x86_64_defconfig:

   DESCEND objtool
   INSTALL libsubcmd_headers
   CALL    scripts/checksyscalls.sh
   AS      arch/x86/kernel/head_64.o
arch/x86/kernel/head_64.S: Assembler messages:
arch/x86/kernel/head_64.S:261: Error: missing ')'
arch/x86/kernel/head_64.S:261: Error: junk `UL<<10)' after expression
   CC      arch/x86/kernel/head64.o
   CC      arch/x86/kernel/ebda.o
   CC      arch/x86/kernel/platform-quirks.o
scripts/Makefile.build:374: recipe for target 
'arch/x86/kernel/head_64.o' failed
make[3]: *** [arch/x86/kernel/head_64.o] Error 1
make[3]: *** Waiting for unfinished jobs....
scripts/Makefile.build:494: recipe for target 'arch/x86/kernel' failed
make[2]: *** [arch/x86/kernel] Error 2
scripts/Makefile.build:494: recipe for target 'arch/x86' failed
make[1]: *** [arch/x86] Error 2
make[1]: *** Waiting for unfinished jobs....
Makefile:2026: recipe for target '.' failed
make: *** [.] Error 2

This is with GCC 5.4.0, if it matters.

Reverting this change allows the build to move forward, although I also 
need to revert "x86/smpboot/64: Implement 
arch_cpuhp_init_parallel_bringup() and enable it" for the build to fully 
succeed.

I'm not familiar with this code, and nothing obvious stands out to me. 
What can I do to help root cause this?

-Jeff


From xen-devel-bounces@lists.xenproject.org Sat May 20 06:02:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 06:02:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537224.836530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Ffb-0002JR-6o; Sat, 20 May 2023 06:02:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537224.836530; Sat, 20 May 2023 06:02:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Ffb-0002Hu-23; Sat, 20 May 2023 06:02:23 +0000
Received: by outflank-mailman (input) for mailman id 537224;
 Fri, 19 May 2023 17:46:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4+OQ=BI=quicinc.com=quic_jhugo@srs-se1.protection.inumbo.net>)
 id 1q04B7-0005Ax-Dp
 for xen-devel@lists.xenproject.org; Fri, 19 May 2023 17:46:09 +0000
Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com
 [205.220.168.131]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0772ecf6-f66d-11ed-8611-37d641c3527e;
 Fri, 19 May 2023 19:46:06 +0200 (CEST)
Received: from pps.filterd (m0279867.ppops.net [127.0.0.1])
 by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 34JFMfDd012512; Fri, 19 May 2023 17:45:03 GMT
Received: from nalasppmta05.qualcomm.com (Global_NAT1.qualcomm.com
 [129.46.96.20])
 by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3qp95v0w04-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 19 May 2023 17:45:03 +0000
Received: from nalasex01a.na.qualcomm.com (nalasex01a.na.qualcomm.com
 [10.47.209.196])
 by NALASPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 34JHj1wR008027
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 19 May 2023 17:45:01 GMT
Received: from [10.226.59.182] (10.80.80.8) by nalasex01a.na.qualcomm.com
 (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.42; Fri, 19 May
 2023 10:44:59 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0772ecf6-f66d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=message-id : date :
 mime-version : subject : to : cc : references : from : in-reply-to :
 content-type : content-transfer-encoding; s=qcppdkim1;
 bh=eudHzY6WP7JinnfOr70AykPZvSkkpEMxuNg8hMfppYo=;
 b=DMV0U+nIXEVCY+aepXRfFYvdfBeVO+N4QF0je1NreoKWWMhdhmNCmkCFY69GHp3G1MUl
 W4uLhQQIfJlWzCmL2UlwpB67SLBLlnvd3eaMcPBXY3xpzTd4DPesNiwRSj6AMqZBspVo
 08MDwx36HiK4jQA4XpyJQxV1CWo2sx0aEh71PEANzSjVcRsNE8URDp/9OH/+p2Kh4/Bd
 u8dY8/MWw20kSLCJ06My3a975QQQC9ySq7nhEqEvzzuIHXWUfJmJ+Gzj0a0CNBUiRPUE
 dcRr1k8AfSKrqIM3q0pDGScz3hGnOYsEF+bviVQ6TCuLFhAIK7GTUA+1LUQeisPdvBFr pg== 
Message-ID: <ebe36911-024a-839c-3b7e-05c99bfb0d66@quicinc.com>
Date: Fri, 19 May 2023 11:44:58 -0600
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Thunderbird/91.6.0
Subject: Re: [patch V4 36/37] x86/smpboot: Support parallel startup of
 secondary CPUs
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
        Thomas Gleixner
	<tglx@linutronix.de>,
        LKML <linux-kernel@vger.kernel.org>,
        David Woodhouse
	<dwmw2@infradead.org>
CC: <x86@kernel.org>, Brian Gerst <brgerst@gmail.com>,
        Arjan van de Veen
	<arjan@linux.intel.com>,
        Paolo Bonzini <pbonzini@redhat.com>,
        Paul McKenney
	<paulmck@kernel.org>,
        Tom Lendacky <thomas.lendacky@amd.com>,
        Sean
 Christopherson <seanjc@google.com>,
        Oleksandr Natalenko
	<oleksandr@natalenko.name>,
        Paul Menzel <pmenzel@molgen.mpg.de>,
        "Guilherme
 G. Piccoli" <gpiccoli@igalia.com>,
        Piotr Gorski <lucjan.lucjanov@gmail.com>,
        Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
        Boris
 Ostrovsky <boris.ostrovsky@oracle.com>,
        <xen-devel@lists.xenproject.org>, Russell King <linux@armlinux.org.uk>,
        Arnd Bergmann <arnd@arndb.de>, <linux-arm-kernel@lists.infradead.org>,
        Catalin Marinas
	<catalin.marinas@arm.com>,
        Will Deacon <will@kernel.org>, Guo Ren
	<guoren@kernel.org>,
        <linux-csky@vger.kernel.org>,
        Thomas Bogendoerfer
	<tsbogend@alpha.franken.de>,
        <linux-mips@vger.kernel.org>,
        "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>,
        Helge Deller
	<deller@gmx.de>, <linux-parisc@vger.kernel.org>,
        Paul Walmsley
	<paul.walmsley@sifive.com>,
        Palmer Dabbelt <palmer@dabbelt.com>, <linux-riscv@lists.infradead.org>,
        Mark Rutland <mark.rutland@arm.com>,
        Sabin
 Rapan <sabrapan@amazon.com>,
        "Michael Kelley (LINUX)"
	<mikelley@microsoft.com>,
        Ross Philipson <ross.philipson@oracle.com>,
        David
 Woodhouse <dwmw@amazon.co.uk>
References: <20230512203426.452963764@linutronix.de>
 <20230512205257.411554373@linutronix.de>
 <16562305-3bc0-c69f-0cb5-1b9da1014f19@quicinc.com>
 <0cafbfcb-2430-6d90-ee77-4e5de08ee1da@citrix.com>
From: Jeffrey Hugo <quic_jhugo@quicinc.com>
In-Reply-To: <0cafbfcb-2430-6d90-ee77-4e5de08ee1da@citrix.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit
X-Originating-IP: [10.80.80.8]
X-ClientProxiedBy: nasanex01a.na.qualcomm.com (10.52.223.231) To
 nalasex01a.na.qualcomm.com (10.47.209.196)
X-QCInternal: smtphost
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085
X-Proofpoint-ORIG-GUID: wzgfaxTFUND7jtAhGx51l9t9Ic-_fDjs
X-Proofpoint-GUID: wzgfaxTFUND7jtAhGx51l9t9Ic-_fDjs
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.573,FMLib:17.11.170.22
 definitions=2023-05-19_12,2023-05-17_02,2023-02-09_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 lowpriorityscore=0 impostorscore=0 phishscore=0 bulkscore=0 adultscore=0
 mlxscore=0 mlxlogscore=999 malwarescore=0 clxscore=1015 spamscore=0
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2304280000 definitions=main-2305190152

On 5/19/2023 10:57 AM, Andrew Cooper wrote:
> On 19/05/2023 5:28 pm, Jeffrey Hugo wrote:
>>    DESCEND objtool
>>    INSTALL libsubcmd_headers
>>    CALL    scripts/checksyscalls.sh
>>    AS      arch/x86/kernel/head_64.o
>> arch/x86/kernel/head_64.S: Assembler messages:
>> arch/x86/kernel/head_64.S:261: Error: missing ')'
>> arch/x86/kernel/head_64.S:261: Error: junk `UL<<10)' after expression
>>    CC      arch/x86/kernel/head64.o
>>    CC      arch/x86/kernel/ebda.o
>>    CC      arch/x86/kernel/platform-quirks.o
>> scripts/Makefile.build:374: recipe for target
>> 'arch/x86/kernel/head_64.o' failed
>> make[3]: *** [arch/x86/kernel/head_64.o] Error 1
>> make[3]: *** Waiting for unfinished jobs....
>> scripts/Makefile.build:494: recipe for target 'arch/x86/kernel' failed
>> make[2]: *** [arch/x86/kernel] Error 2
>> scripts/Makefile.build:494: recipe for target 'arch/x86' failed
>> make[1]: *** [arch/x86] Error 2
>> make[1]: *** Waiting for unfinished jobs....
>> Makefile:2026: recipe for target '.' failed
>> make: *** [.] Error 2
>>
>> This is with GCC 5.4.0, if it matters.
>>
>> Reverting this change allows the build to move forward, although I
>> also need to revert "x86/smpboot/64: Implement
>> arch_cpuhp_init_parallel_bringup() and enable it" for the build to
>> fully succeed.
>>
>> I'm not familiar with this code, and nothing obvious stands out to me.
>> What can I do to help root cause this?
> 
> Can you try:
> 
> -#define XAPIC_ENABLE    (1UL << 11)
> -#define X2APIC_ENABLE    (1UL << 10)
> +#define XAPIC_ENABLE    BIT(11)
> +#define X2APIC_ENABLE    BIT(10)
> 
> The UL suffix isn't understood by older binutils, and this patch adds
> the first use of these constants in assembly.

Ah, makes sense.

Your suggested change works for me.  No more compile error.

I assume you will be following up with a patch to address this.  Feel 
free to add the following tags as you see fit:

Reported-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
Tested-by: Jeffrey Hugo <quic_jhugo@quicinc.com>

-Jeff


From xen-devel-bounces@lists.xenproject.org Sat May 20 06:21:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 06:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537401.836561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Fxm-0006Dn-ER; Sat, 20 May 2023 06:21:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537401.836561; Sat, 20 May 2023 06:21:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Fxm-0006Dg-Bf; Sat, 20 May 2023 06:21:10 +0000
Received: by outflank-mailman (input) for mailman id 537401;
 Sat, 20 May 2023 06:21:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FQK3=BJ=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1q0Fxl-0006Da-A0
 for xen-devel@lists.xenproject.org; Sat, 20 May 2023 06:21:09 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8154bae3-f6d6-11ed-b22d-6b7b168915f2;
 Sat, 20 May 2023 08:21:07 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 5472F68CFE; Sat, 20 May 2023 08:21:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8154bae3-f6d6-11ed-b22d-6b7b168915f2
Date: Sat, 20 May 2023 08:21:03 +0200
From: Christoph Hellwig <hch@lst.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Christoph Hellwig <hch@lst.de>,
	Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>,
	Karol Herbst <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when
 xen-pcifront is enabling
Message-ID: <20230520062103.GA1225@lst.de>
References: <20230518134253.909623-1-hch@lst.de> <20230518134253.909623-3-hch@lst.de> <ZGZr/xgbUmVqpOpN@mail-itl> <20230519040405.GA10818@lst.de> <ZGdLErBzi9MANL3i@mail-itl> <20230519124118.GA5869@lst.de> <8617570c-6dc4-74f5-7418-98f04f7e0ece@citrix.com> <20230519125857.GA6994@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230519125857.GA6994@lst.de>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, May 19, 2023 at 02:58:57PM +0200, Christoph Hellwig wrote:
> On Fri, May 19, 2023 at 01:49:46PM +0100, Andrew Cooper wrote:
> > > The alternative would be to finally merge swiotlb-xen into swiotlb, in
> > > which case we might be able to do this later.  Let me see what I can
> > > do there.
> > 
> > If that is an option, it would be great to reduce the special-casing.
> 
> I think it's doable, and I've been wanting it for a while.  I just
> need motivated testers, but it seems like I just found at least two :)

So looking at swiotlb-xen, it does these odd things where it takes a value
originally generated by xen_phys_to_dma, then does only a dma_to_phys
to go back and calls pfn_valid on the result.  Does this make sense, or
is it wrong and just works by accident?  I.e. is the patch below correct?


diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 67aa74d201627d..3396c5766f0dd8 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -90,9 +90,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 
 static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 {
-	unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
-	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
-	phys_addr_t paddr = (phys_addr_t)xen_pfn << XEN_PAGE_SHIFT;
+	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
 
 	/* If the address is outside our domain, it CAN
 	 * have the same virtual address as another address
@@ -234,7 +232,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 
 done:
 	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dev_addr))))
+		if (pfn_valid(PFN_DOWN(phys)))
 			arch_sync_dma_for_device(phys, size, dir);
 		else
 			xen_dma_sync_for_device(dev, dev_addr, size, dir);
@@ -258,7 +256,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 	BUG_ON(dir == DMA_NONE);
 
 	if (!dev_is_dma_coherent(hwdev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(hwdev, dev_addr))))
+		if (pfn_valid(PFN_DOWN(paddr)))
 			arch_sync_dma_for_cpu(paddr, size, dir);
 		else
 			xen_dma_sync_for_cpu(hwdev, dev_addr, size, dir);
@@ -276,7 +274,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
 
 	if (!dev_is_dma_coherent(dev)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
+		if (pfn_valid(PFN_DOWN(paddr)))
 			arch_sync_dma_for_cpu(paddr, size, dir);
 		else
 			xen_dma_sync_for_cpu(dev, dma_addr, size, dir);
@@ -296,7 +294,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
+		if (pfn_valid(PFN_DOWN(paddr)))
 			arch_sync_dma_for_device(paddr, size, dir);
 		else
 			xen_dma_sync_for_device(dev, dma_addr, size, dir);


From xen-devel-bounces@lists.xenproject.org Sat May 20 07:04:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 07:04:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537413.836591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Gd6-0002LU-Ra; Sat, 20 May 2023 07:03:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537413.836591; Sat, 20 May 2023 07:03:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Gd6-0002LN-NB; Sat, 20 May 2023 07:03:52 +0000
Received: by outflank-mailman (input) for mailman id 537413;
 Sat, 20 May 2023 07:03:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Gd5-0002LC-C1; Sat, 20 May 2023 07:03:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Gd4-00046N-TV; Sat, 20 May 2023 07:03:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Gd4-0008EF-GZ; Sat, 20 May 2023 07:03:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Gd4-0000cI-G6; Sat, 20 May 2023 07:03:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4VzfqpKsVtcqFIvvVAjs4GDeZMpGmOqbDiEI31nagFE=; b=wU7qwZJRVi9yzUB/iFCzgT93a/
	I68svzrhc//7ylEm4Yp3XzQOngtlFT6Bh0hZ9m1yWDZhT2MrcCQwFF5J6uAOBUs3kaJkMqCArrmMl
	Te14kgOMgCdoUzYTrzEf5KxAiF0aTdL8lxs8Xud/aTFUKH+cGvOHqQiMDzTjEZQoOL2w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180766-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180766: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:regression
    xen-unstable:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
X-Osstest-Versions-That:
    xen=42abf5b9c53eb1b1a902002fcda68708234152c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 07:03:50 +0000

flight 180766 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180766/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 180696

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 180726 pass in 180766
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180726 pass in 180766
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail pass in 180726

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180696
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180696
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180696
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180696
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180696
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180696
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180696
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180696
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180696
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a
baseline version:
 xen                  42abf5b9c53eb1b1a902002fcda68708234152c3

Last test of basis   180696  2023-05-18 01:51:57 Z    2 days
Testing same since   180701  2023-05-18 17:09:58 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 753d903a6f2d1e68d98487d36449b5739c28d65a
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed May 17 05:57:22 2023 +0000

    automation: allow to rerun build script
    
    Calling build twice in the same environment will fail because the
    directory 'binaries' was already created before. Use mkdir -p to ignore
    an existing directory and move on to the actual build.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
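    [The change this commit describes is the standard idempotent-directory
    pattern; a minimal sketch, using the 'binaries' directory name from the
    commit message:]

```shell
# mkdir -p succeeds whether or not the directory already exists,
# so the build script can safely be re-run in the same checkout.
mkdir -p binaries
mkdir -p binaries   # second invocation is a no-op, not an error
```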

commit 816d2797468dbcc8a3d23f67592b06929f67b2ab
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue May 16 15:41:27 2023 +0000

    automation: update documentation about how to build a container
    
    The command used in the example is different from the command used in
    the Gitlab CI pipelines. Adjust it to simulate what will be used by CI.
    This is essentially the build script, which is invoked with a number of
    expected environment variables such as CC, CXX and debug.
    
    In addition, the input should not be a tty, which disables colors from
    meson and interactive questions from kconfig.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit bdf48bf170bf1257b236b8f467d3de6c3adb9608
Author: Stefano Stabellini <stefano.stabellini@amd.com>
Date:   Thu May 11 16:22:37 2023 -0700

    docs/misra: adds Mandatory rules
    
    Add the Mandatory rules agreed by the MISRA C working group to
    docs/misra/rules.rst.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Tested-by: Luca Fancellu <luca.fancellu@arm.com>

commit b046f7e374893dd0eadc84d7010f928ea7e8fcf2
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 4 14:12:45 2023 +0100

    xen/misra: xen-analysis.py: use the relative path from the repository in the reports
    
    Currently the cppcheck report entries show the file path relative to
    the repository's /xen folder instead of the repository base folder.
    To ease checks, for example when comparing a git diff output against
    the report, use the repository folder as the base.
    
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>

commit 069cb96fbd7595c80bf2af6a06454ce5c732721e
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 4 14:12:44 2023 +0100

    xen/misra: xen-analysis.py: allow cppcheck version above 2.7
    
    Allow the use of Cppcheck versions above 2.7, with the exception of
    2.8, which is known and documented to be broken.
    
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
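
A version gate of this shape can be sketched in shell (a hypothetical
helper for illustration; the actual xen-analysis.py logic is Python and
is not reproduced here):

```shell
# Accept Cppcheck 2.7 and newer, but reject 2.8, which the message
# above notes is documented to be broken.
cppcheck_version_ok() {
    major=${1%%.*}
    minor=${1#*.}; minor=${minor%%.*}
    [ "$major.$minor" = "2.8" ] && return 1
    [ "$major" -gt 2 ] && return 0
    [ "$major" -eq 2 ] && [ "$minor" -ge 7 ]
}
cppcheck_version_ok 2.9 && echo "2.9 accepted"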

commit 45bfff651173d538239308648c6a6cd7cbe37172
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 4 14:12:43 2023 +0100

    xen/misra: xen-analysis.py: fix parallel analysis Cppcheck errors
    
    Currently Cppcheck has a limitation that prevents using make with a
    parallel build while running a parallel Cppcheck invocation on each
    translation unit (the .c files), because of spurious internal errors.

    The issue comes from the fact that, when using the build directory,
    Cppcheck saves temporary files as <filename>.c.<many-extensions>, but
    this doesn't work well when files with the same name are analysed at
    the same time, leading to race conditions.

    Fix the issue by recreating, under the build directory, the directory
    structure of the file being analysed, to avoid any clash.
    
    Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
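
The per-file directory layout described above can be sketched in shell
(illustrative; the real xen-analysis.py is Python, and the example paths
are made up):

```shell
# Two translation units both named setup.c would otherwise collide on
# <filename>.c.<ext> temporary files in a flat build directory;
# mirroring each source file's directory keeps their temp files apart.
build_dir=$(mktemp -d)
for src in xen/arch/x86/setup.c xen/arch/arm/setup.c; do
    mkdir -p "$build_dir/$(dirname "$src")"
done
```
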
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat May 20 07:53:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 07:53:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537425.836611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0HPI-00089f-R3; Sat, 20 May 2023 07:53:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537425.836611; Sat, 20 May 2023 07:53:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0HPI-00089Y-Nw; Sat, 20 May 2023 07:53:40 +0000
Received: by outflank-mailman (input) for mailman id 537425;
 Sat, 20 May 2023 07:53:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0HPI-00089O-D9; Sat, 20 May 2023 07:53:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0HPI-0005Le-3v; Sat, 20 May 2023 07:53:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0HPH-0000tx-Jn; Sat, 20 May 2023 07:53:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0HPH-0003iQ-JM; Sat, 20 May 2023 07:53:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LYo9KUs2pbO2NZ+3GvoYCHpqZIs/1BEgHPNkBKtVBe0=; b=UvrQ8eoDWvdi2OYbBWiQcObjoF
	DTgbu/dj8RZJEa9gKoCnbrvA0PaJGVXkuf/4qTbmstF8gOwMCiaumOOkg9WwiMw/kOUloid+3ThXy
	9v6zTP1t5Jgkzn3kivIKDS4PRFAGqo9OJXiiLg1HG67IPOjdkxuw+lNK0a8PI/gtEwhI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180785-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180785: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 07:53:39 +0000

flight 180785 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180785/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    2 days
Failing since        180699  2023-05-18 07:21:24 Z    2 days    7 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 20 10:29:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 10:29:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537438.836628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Jpv-0007Bf-AS; Sat, 20 May 2023 10:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537438.836628; Sat, 20 May 2023 10:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Jpv-0007BY-6k; Sat, 20 May 2023 10:29:19 +0000
Received: by outflank-mailman (input) for mailman id 537438;
 Sat, 20 May 2023 10:29:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gC4k=BJ=citrix.com=prvs=49700c0d4=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q0Jps-0007BS-RS
 for xen-devel@lists.xenproject.org; Sat, 20 May 2023 10:29:17 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 286ac7a9-f6f9-11ed-8611-37d641c3527e;
 Sat, 20 May 2023 12:29:12 +0200 (CEST)
Received: from mail-bn8nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 20 May 2023 06:29:08 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SJ0PR03MB5424.namprd03.prod.outlook.com (2603:10b6:a03:288::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Sat, 20 May
 2023 10:29:05 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.019; Sat, 20 May 2023
 10:29:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Sat, 20 May 2023 12:28:59 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: jbeulich@suse.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
	xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
Message-ID: <ZGig68UTddfEwR6P@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
 <ZGcu7EWW1cuNjwDA@Air-de-Roger>
 <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
MIME-Version: 1.0

On Fri, May 19, 2023 at 05:02:21PM -0700, Stefano Stabellini wrote:
> On Fri, 19 May 2023, Roger Pau Monné wrote:
> > On Thu, May 18, 2023 at 06:46:52PM -0700, Stefano Stabellini wrote:
> > > On Thu, 18 May 2023, Roger Pau Monné wrote:
> > > > On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> > > > > Hi all,
> > > > > 
> > > > > I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
> > > > > test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
> > > > > Zen3 system and we already have a few successful tests with it, see
> > > > > automation/gitlab-ci/test.yaml.
> > > > > 
> > > > > We managed to narrow down the issue to a console problem. We are
> > > > > currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
> > > > > options, it works with PV Dom0 and it is using a PCI UART card.
> > > > > 
> > > > > In the case of Dom0 PVH:
> > > > > - it works without console=com1
> > > > > - it works with console=com1 and with the patch appended below
> > > > > - it doesn't work otherwise and crashes with this error:
> > > > > https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> > > > 
> > > > Jan also noticed this, and we have a ticket for it in gitlab:
> > > > 
> > > > https://gitlab.com/xen-project/xen/-/issues/85
> > > > 
> > > > > What is the right way to fix it?
> > > > 
> > > > I think the right fix is to simply avoid hidden devices from being
> > > > handled by vPCI; in any case such devices won't work properly with
> > > > vPCI because they are in use by Xen, and so any cached information by
> > > > vPCI is likely to become stale, as Xen can modify the device without
> > > > vPCI noticing.
> > > > 
> > > > I think the chunk below should help.  It's not clear to me however how
> > > > hidden devices should be handled; is the intention to completely hide
> > > > such devices from dom0?
> > > 
> > > I like the idea but the patch below still failed:
> > > 
> > > (XEN) Xen call trace:
> > > (XEN)    [<ffff82d0402682b0>] R drivers/vpci/header.c#modify_bars+0x2b3/0x44d
> > > (XEN)    [<ffff82d040268714>] F drivers/vpci/header.c#init_bars+0x2ca/0x372
> > > (XEN)    [<ffff82d0402673b3>] F vpci_add_handlers+0xd5/0x10a
> > > (XEN)    [<ffff82d0404408b9>] F drivers/passthrough/pci.c#setup_one_hwdom_device+0x73/0x97
> > > (XEN)    [<ffff82d0404409b0>] F drivers/passthrough/pci.c#_setup_hwdom_pci_devices+0x63/0x15b
> > > (XEN)    [<ffff82d04027df08>] F drivers/passthrough/pci.c#pci_segments_iterate+0x43/0x69
> > > (XEN)    [<ffff82d040440e29>] F setup_hwdom_pci_devices+0x25/0x2c
> > > (XEN)    [<ffff82d04043cb1a>] F drivers/passthrough/amd/pci_amd_iommu.c#amd_iommu_hwdom_init+0xd4/0xdd
> > > (XEN)    [<ffff82d0404403c9>] F iommu_hwdom_init+0x49/0x53
> > > (XEN)    [<ffff82d04045175e>] F dom0_construct_pvh+0x160/0x138d
> > > (XEN)    [<ffff82d040468914>] F construct_dom0+0x5c/0xb7
> > > (XEN)    [<ffff82d0404619c1>] F __start_xen+0x2423/0x272d
> > > (XEN)    [<ffff82d040203344>] F __high_start+0x94/0xa0
> > > 
> > > I haven't managed to figure out why yet.
> > 
> > Do you have some other patches applied?
> > 
> > I've tested this by manually hiding a device on my system and can
> > confirm that without the fix I hit the ASSERT, but with the patch
> > applied I no longer hit it.  I have no idea how you can get into
> > init_bars if the device is hidden and thus belongs to dom_xen.
> 
> Unfortunately it doesn't work. Here are the full logs with interesting
> DEBUG messages (search for "DEBUG"):
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4318489116
> https://gitlab.com/xen-project/people/sstabellini/xen/-/commit/31c400caa7b86d4c14f9553138e02af18d3b3284
> 
> [...]
> (XEN) DEBUG ns16550_init_postirq 432  03:00.0
> [...]
> (XEN) DEBUG vpci_add_handlers 75 0000:00:00.0 0^M
> (XEN) DEBUG vpci_add_handlers 75 0000:00:00.2 1^M
> (XEN) DEBUG vpci_add_handlers 78 0000:00:00.2^M

This device is not handled by vPCI either, and is not the console
device.

> (XEN) DEBUG vpci_add_handlers 75 0000:00:01.0 0^M
> (XEN) DEBUG vpci_add_handlers 75 0000:00:02.0 0^M
> (XEN) DEBUG vpci_add_handlers 75 0000:00:02.1 0^M
> 
> Then crash on drivers/vpci/header.c#modify_bars

Interesting.  The crash however is a page fault instead of the
previous assert:

(XEN) ----[ Xen-4.18-unstable  x86_64  debug=y  Tainted:   C    ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d040268312>] drivers/vpci/header.c#modify_bars+0x2b3/0x44d
[...]
(XEN) Xen call trace:
(XEN)    [<ffff82d040268312>] R drivers/vpci/header.c#modify_bars+0x2b3/0x44d
(XEN)    [<ffff82d040268776>] F drivers/vpci/header.c#init_bars+0x2ca/0x372
(XEN)    [<ffff82d040267412>] F vpci_add_handlers+0x134/0x16c
(XEN)    [<ffff82d0404408e5>] F drivers/passthrough/pci.c#setup_one_hwdom_device+0x73/0x97
(XEN)    [<ffff82d0404409dc>] F drivers/passthrough/pci.c#_setup_hwdom_pci_devices+0x63/0x15b
(XEN)    [<ffff82d04027df6a>] F drivers/passthrough/pci.c#pci_segments_iterate+0x43/0x69
(XEN)    [<ffff82d040440e55>] F setup_hwdom_pci_devices+0x25/0x2c
(XEN)    [<ffff82d04043cb46>] F drivers/passthrough/amd/pci_amd_iommu.c#amd_iommu_hwdom_init+0xd4/0xdd
(XEN)    [<ffff82d0404403f5>] F iommu_hwdom_init+0x49/0x53
(XEN)    [<ffff82d04045177e>] F dom0_construct_pvh+0x160/0x138d
(XEN)    [<ffff82d040468934>] F construct_dom0+0x5c/0xb7
(XEN)    [<ffff82d0404619e1>] F __start_xen+0x2423/0x272d
(XEN)    [<ffff82d040203344>] F __high_start+0x94/0xa0
(XEN) 
(XEN) Pagetable walk from 000000000000002c:
(XEN)  L4[0x000] = 000000039015b063 ffffffffffffffff
(XEN)  L3[0x000] = 000000039015a063 ffffffffffffffff
(XEN)  L2[0x000] = 0000000390159063 ffffffffffffffff
(XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 000000000000002c
(XEN) ****************************************

Looks like a NULL pointer deref.

Using addr2line it points at xen/drivers/vpci/header.c:314, which is:

    for_each_pdev ( pdev->domain, tmp )
    {
        if ( tmp == pdev )
        {
            /*
             * Need to store the device so it's not constified and defer_map
             * can modify it in case of error.
             */
            dev = tmp;
            if ( !rom_only )
                /*
                 * If memory decoding is toggled avoid checking against the
                 * same device, or else all regions will be removed from the
                 * memory map in the unmap case.
                 */
                continue;
        }

        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
        {
            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
            unsigned long start = PFN_DOWN(bar->addr);
            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);

->          if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||

So we have a device added to the domain device list that doesn't have
vPCI enabled.  I'm unsure how we get into that situation in the
current scenario, but Xen should be capable of coping with a domain
having devices not handled by vPCI.

Can you please try the following combined fix?  It should also print
the offending device.

Thanks, Roger.
---
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index ec2e978a4e6b..0ff8e940fa8d 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -289,6 +289,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
      */
     for_each_pdev ( pdev->domain, tmp )
     {
+        if ( !tmp->vpci )
+        {
+            printk(XENLOG_G_WARNING "%pp: not handled by vPCI for %pd\n",
+                   &tmp->sbdf, pdev->domain);
+            continue;
+        }
+
         if ( tmp == pdev )
         {
             /*
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 652807a4a454..0baef3a8d3a1 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
     unsigned int i;
     int rc = 0;
 
-    if ( !has_vpci(pdev->domain) )
+    if ( !has_vpci(pdev->domain) ||
+         /*
+          * Ignore RO and hidden devices, those are in use by Xen and vPCI
+          * won't work on them.
+          */
+         pci_get_pdev(dom_xen, pdev->sbdf) )
         return 0;
 
     /* We should not get here twice for the same device. */


From xen-devel-bounces@lists.xenproject.org Sat May 20 12:27:30 2023
Date: Sat, 20 May 2023 14:27:04 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, jbeulich@suse.com,
	andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
	xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
Message-ID: <ZGi8mMTXaKVmPi66@mail-itl>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
 <ZGcu7EWW1cuNjwDA@Air-de-Roger>
 <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
 <ZGig68UTddfEwR6P@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <ZGig68UTddfEwR6P@Air-de-Roger>

On Sat, May 20, 2023 at 12:28:59PM +0200, Roger Pau Monné wrote:
> On Fri, May 19, 2023 at 05:02:21PM -0700, Stefano Stabellini wrote:
> > On Fri, 19 May 2023, Roger Pau Monné wrote:
> > > On Thu, May 18, 2023 at 06:46:52PM -0700, Stefano Stabellini wrote:
> > > > On Thu, 18 May 2023, Roger Pau Monné wrote:
> > > > > On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
> > > > > > Hi all,
> > > > > >
> > > > > > I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
> > > > > > test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
> > > > > > Zen3 system and we already have a few successful tests with it, see
> > > > > > automation/gitlab-ci/test.yaml.
> > > > > >
> > > > > > We managed to narrow down the issue to a console problem. We are
> > > > > > currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
> > > > > > options, it works with PV Dom0 and it is using a PCI UART card.
> > > > > >
> > > > > > In the case of Dom0 PVH:
> > > > > > - it works without console=com1
> > > > > > - it works with console=com1 and with the patch appended below
> > > > > > - it doesn't work otherwise and crashes with this error:
> > > > > > https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
> > > > >
> > > > > Jan also noticed this, and we have a ticket for it in gitlab:
> > > > >
> > > > > https://gitlab.com/xen-project/xen/-/issues/85
> > > > >
> > > > > > What is the right way to fix it?
> > > > >
> > > > > I think the right fix is to simply avoid hidden devices from being
> > > > > handled by vPCI; in any case such devices won't work properly with
> > > > > vPCI because they are in use by Xen, and so any cached information by
> > > > > vPCI is likely to become stale, as Xen can modify the device without
> > > > > vPCI noticing.
> > > > >
> > > > > I think the chunk below should help.  It's not clear to me however how
> > > > > hidden devices should be handled; is the intention to completely hide
> > > > > such devices from dom0?
> > > >
> > > > I like the idea but the patch below still failed:
> > > >
> > > > (XEN) Xen call trace:
> > > > (XEN)    [<ffff82d0402682b0>] R drivers/vpci/header.c#modify_bars+0x2b3/0x44d
> > > > (XEN)    [<ffff82d040268714>] F drivers/vpci/header.c#init_bars+0x2ca/0x372
> > > > (XEN)    [<ffff82d0402673b3>] F vpci_add_handlers+0xd5/0x10a
> > > > (XEN)    [<ffff82d0404408b9>] F drivers/passthrough/pci.c#setup_one_hwdom_device+0x73/0x97
> > > > (XEN)    [<ffff82d0404409b0>] F drivers/passthrough/pci.c#_setup_hwdom_pci_devices+0x63/0x15b
> > > > (XEN)    [<ffff82d04027df08>] F drivers/passthrough/pci.c#pci_segments_iterate+0x43/0x69
> > > > (XEN)    [<ffff82d040440e29>] F setup_hwdom_pci_devices+0x25/0x2c
> > > > (XEN)    [<ffff82d04043cb1a>] F drivers/passthrough/amd/pci_amd_iommu.c#amd_iommu_hwdom_init+0xd4/0xdd
> > > > (XEN)    [<ffff82d0404403c9>] F iommu_hwdom_init+0x49/0x53
> > > > (XEN)    [<ffff82d04045175e>] F dom0_construct_pvh+0x160/0x138d
> > > > (XEN)    [<ffff82d040468914>] F construct_dom0+0x5c/0xb7
> > > > (XEN)    [<ffff82d0404619c1>] F __start_xen+0x2423/0x272d
> > > > (XEN)    [<ffff82d040203344>] F __high_start+0x94/0xa0
> > > >
> > > > I haven't managed to figure out why yet.
> > >
> > > Do you have some other patches applied?
> > >
> > > I've tested this by manually hiding a device on my system and can
> > > confirm that without the fix I hit the ASSERT, but with the patch
> > > applied I no longer hit it.  I have no idea how you can get into
> > > init_bars if the device is hidden and thus belongs to dom_xen.
> >
> > Unfortunately it doesn't work. Here are the full logs with interesting
> > DEBUG messages (search for "DEBUG"):
> > https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4318489116
> > https://gitlab.com/xen-project/people/sstabellini/xen/-/commit/31c400caa7b86d4c14f9553138e02af18d3b3284
> >
> > [...]
> > (XEN) DEBUG ns16550_init_postirq 432  03:00.0
> > [...]
> > (XEN) DEBUG vpci_add_handlers 75 0000:00:00.0 0^M
> > (XEN) DEBUG vpci_add_handlers 75 0000:00:00.2 1^M
> > (XEN) DEBUG vpci_add_handlers 78 0000:00:00.2^M
> 
> This device is not handled by vPCI either, and is not the console
> device.

That's the IOMMU.

Full lspci:

00:00.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h PCIe Root Complex [1022:14b5] (rev 01)
00:00.2 IOMMU [0806]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h IOMMU [1022:14b6]
00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h PCIe Dummy Host Bridge [1022:14b7] (rev 01)
00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h PCIe Dummy Host Bridge [1022:14b7] (rev 01)
00:02.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h PCIe GPP Bridge [1022:14ba]
00:02.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h PCIe GPP Bridge [1022:14ba]
00:02.4 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h PCIe GPP Bridge [1022:14ba]
00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h PCIe Dummy Host Bridge [1022:14b7] (rev 01)
00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 19h USB4/Thunderbolt PCIe tunnel [1022:14cd]
00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h PCIe Dummy Host Bridge [1022:14b7] (rev 01)
00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h PCIe Dummy Host Bridge [1022:14b7] (rev 01)
00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h Internal PCIe GPP Bridge [1022:14b9] (rev 10)
00:08.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h Internal PCIe GPP Bridge [1022:14b9] (rev 10)
00:08.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h-19h Internal PCIe GPP Bridge [1022:14b9] (rev 10)
00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 71)
00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Rembrandt Data Fabric: Device 18h; Function 0 [1022:1679]
00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Rembrandt Data Fabric: Device 18h; Function 1 [1022:167a]
00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Rembrandt Data Fabric: Device 18h; Function 2 [1022:167b]
00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Rembrandt Data Fabric: Device 18h; Function 3 [1022:167c]
00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Rembrandt Data Fabric: Device 18h; Function 4 [1022:167d]
00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Rembrandt Data Fabric: Device 18h; Function 5 [1022:167e]
00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Rembrandt Data Fabric: Device 18h; Function 6 [1022:167f]
00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Rembrandt Data Fabric: Device 18h; Function 7 [1022:1680]
01:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
02:00.0 Network controller [0280]: MEDIATEK Corp. MT7921K (RZ608) Wi-Fi 6E 80MHz [14c3:0608]
03:00.0 Serial controller [0700]: Exar Corp. XR17V3521 Dual PCIe UART [13a8:0352] (rev 03)
34:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt [Radeon 680M] [1002:1681] (rev 0a)
34:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt Radeon High Definition Audio Controller [1002:1640]
34:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] VanGogh PSP/CCP [1022:1649]
34:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Rembrandt USB4 XHCI controller #3 [1022:161d]
34:00.4 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Rembrandt USB4 XHCI controller #4 [1022:161e]
34:00.5 Multimedia controller [0480]: Advanced Micro Devices, Inc. [AMD] ACP/ACP3X/ACP6x Audio Coprocessor [1022:15e2] (rev 60)
34:00.6 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h/19h HD Audio Controller [1022:15e3]
35:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev a1)
36:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Rembrandt USB4 XHCI controller #8 [1022:161f]
36:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Rembrandt USB4 XHCI controller #5 [1022:15d6]
36:00.4 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Rembrandt USB4 XHCI controller #6 [1022:15d7]
36:00.5 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Rembrandt USB4/Thunderbolt NHI controller #1 [1022:162e]

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Sat May 20 12:29:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 12:29:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537458.836671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Li2-0003OS-Dk; Sat, 20 May 2023 12:29:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537458.836671; Sat, 20 May 2023 12:29:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Li2-0003OL-As; Sat, 20 May 2023 12:29:18 +0000
Received: by outflank-mailman (input) for mailman id 537458;
 Sat, 20 May 2023 12:29:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Li0-0003O9-Mk; Sat, 20 May 2023 12:29:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Li0-00043K-9d; Sat, 20 May 2023 12:29:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Lhz-00054I-Vy; Sat, 20 May 2023 12:29:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Lhz-0007O9-Va; Sat, 20 May 2023 12:29:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=thIWZYGzEwbUFaVUQxwVHJC9bEIaA2rcdDXB+O6sess=; b=msNzkfpYP6fPDHYDQ00tNgr3EQ
	s3+S3LnsLiNTftHML/V3oR7AAJzJkfF4/BIwJln4x8V5AVVQxuf2VgK88midqjiEv/THsImpURtZx
	grXbmBjx62VGkvp5qtL+UjEvf6zlU0zRYpoN46qzCcAHJEIalVDD3eN9w9d3OzvrfLWQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180807-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180807: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 12:29:15 +0000

flight 180807 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180807/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    3 days
Failing since        180699  2023-05-18 07:21:24 Z    2 days    8 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 20 12:57:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 12:57:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537471.836693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0M9L-0006xP-Rw; Sat, 20 May 2023 12:57:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537471.836693; Sat, 20 May 2023 12:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0M9L-0006xI-P8; Sat, 20 May 2023 12:57:31 +0000
Received: by outflank-mailman (input) for mailman id 537471;
 Sat, 20 May 2023 12:57:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0M9K-0006x8-Gu; Sat, 20 May 2023 12:57:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0M9K-0004ju-AV; Sat, 20 May 2023 12:57:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0M9K-0006VL-1d; Sat, 20 May 2023 12:57:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0M9K-0000az-1D; Sat, 20 May 2023 12:57:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3LT9OC0zYmiuR9ac0tuQmmCKa/YI5xywFzg1bATCjXY=; b=VQ82dYGtf/uYuQ1AFdcYnBu3iP
	lO0H++CZoGo9wrw7vcooL7gMWxCQgAm2+lwChsurXnpFh3arEEur8TxizPnrl30Xs632MkxEhukZL
	XRFs/WXJbRz1PTwUjOQNFbVxBe40MS6VmZv958Fp2kGd4y9P/uM1fw3baXLdQZRHvltA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180782-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180782: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5565ec4ef4f0d676fc8518556e239ac6945b5186
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 12:57:30 +0000

flight 180782 linux-linus real [real]
flight 180810 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180782/
http://logs.test-lab.xenproject.org/osstest/logs/180810/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                5565ec4ef4f0d676fc8518556e239ac6945b5186
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   33 days
Failing since        180281  2023-04-17 06:24:36 Z   33 days   61 attempts
Testing same since   180782  2023-05-20 01:11:52 Z    0 days    1 attempts

------------------------------------------------------------
2453 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 308902 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 20 15:16:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 15:16:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537483.836722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0OIz-0004Rk-Ra; Sat, 20 May 2023 15:15:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537483.836722; Sat, 20 May 2023 15:15:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0OIz-0004Rd-OE; Sat, 20 May 2023 15:15:37 +0000
Received: by outflank-mailman (input) for mailman id 537483;
 Sat, 20 May 2023 15:15:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0OIy-0004RT-88; Sat, 20 May 2023 15:15:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0OIx-0008Ly-WF; Sat, 20 May 2023 15:15:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0OIx-0004I3-HR; Sat, 20 May 2023 15:15:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0OIx-00026F-Gz; Sat, 20 May 2023 15:15:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=M4HI8Z972nR9TO5XSWyxm3NbK9CHLcPJDAAlEmNBPuk=; b=oaTJ9OZJRMv7OTwDiCP9+hikM1
	AJzCAThVzyJlRkZLEPSQGOfFJ+RcFD25Wo7pcFUjksbgeHuTZbPKsf26y8P2cuzI4/tuQP1RwsJJ1
	DWykf25l/z8p0Y++AtedrieHLFmu/j1zXhoVWoO9yyLXargZMlIWeJPCk4DXCPXZ8Lkk=;
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-i386
Message-Id: <E1q0OIx-00026F-Gz@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 15:15:35 +0000

branch xen-unstable
xenbranch xen-unstable
job build-i386
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180820/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

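The commit quoted above makes QEMU's configure bootstrap a python virtual environment on every run. A minimal sketch of that behaviour, assuming only stock python3 with the stdlib venv module (the directory name "pyvenv" is illustrative, not necessarily QEMU's actual layout):

```shell
# Create a virtual environment unconditionally, as the commit describes,
# then use its interpreter for subsequent build tooling (e.g. meson).
python3 -m venv pyvenv
pyvenv/bin/python -m pip --version
```

A build that previously used the host's python directly would instead invoke pyvenv/bin/python for all later steps, which is the kind of change that can break downstream builds (as seen in this bisection) when the venv bootstrap itself fails.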

For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-i386.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-i386.xen-build --summary-out=tmp/180820.bisection-summary --basis-template=180691 --blessings=real,real-bisect,real-retry qemu-mainline build-i386 xen-build
Searching for failure / basis pass:
 180807 fail [host=albana0] / 180691 [host=huxelrebe1] 180686 [host=huxelrebe1] 180673 [host=elbling0] 180659 ok.
Failure / basis pass flights: 180807 / 180659
(tree with no url: minios)
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
Basis pass d3225577123767fd09c91201d27e9c91663ae132 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8844bb8d896595ee1d25d21c770e6e6f29803097 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#d3225577123767fd09c91201d27e9c91663ae132-0abfb0be6cf78a8e962383e85cec57851ddae5bc git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 https://gitlab.com/qemu-project/qemu.git#8844bb8d896595ee1d25d21c770e6e6f29803097-aa222a8e4f975284b3f8f131653a4114b3d333b3 git://xenbits.xen.org/osstest/seabios.git#ea1b7a0733906b8425d948ae94fba63c32b1d425-be7e899350caa7b74d8271a34264c3b4aef25ab0 git://xenbits.xen.org/xen.git#4c507d8a6b6e8be90881a335b0a66eb28e0f7737-42abf5b9c53eb1b1a902002fcda68708234152c3
Loaded 30609 nodes in revision graph
Searching for test results:
 180659 pass d3225577123767fd09c91201d27e9c91663ae132 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8844bb8d896595ee1d25d21c770e6e6f29803097 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
 180673 [host=elbling0]
 180686 [host=huxelrebe1]
 180702 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 297e8182194e634baa0cbbfd96d2e09e2a0bcd40 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180691 [host=huxelrebe1]
 180704 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 297e8182194e634baa0cbbfd96d2e09e2a0bcd40 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180721 [host=albana1]
 180742 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f515110e86aefe3bc2e8eb581ab724614060f be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180753 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d009607d08d22f91ca399b72828c6693855e7325 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180797 pass d3225577123767fd09c91201d27e9c91663ae132 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8844bb8d896595ee1d25d21c770e6e6f29803097 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
 180798 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d009607d08d22f91ca399b72828c6693855e7325 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180799 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 91608e2a44f36e79cb83f863b8a7bb57d2c98061 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180800 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 66e2c6cbacea9302a1fc5528906243d36c103fc7 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180801 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4baf3978c02b387c39dc6a75d323126ab386aece be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180802 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bfa72590df14e4c94c03d2464f3abe18bf2e5dac be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
 180803 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3887702e5f8995638c98f9d9326b4913fb107be7 be7e899350caa7b74d8271a34264c3b4aef25ab0 c8e4bbb5b8ee22fd1591ba6a5a3cef4466dda323
 180805 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ab4c44d657aeca7e1da6d6dcb1741c8e7d357b8b be7e899350caa7b74d8271a34264c3b4aef25ab0 fc1b51268025233a81e5fd9c5eabe170bc830720
 180785 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180806 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0e7e3bf1a552c178924867fa7c2f30ccc8a179e0 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180808 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180809 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0b15c42b81ff1e66ccbab3c2f2cef1535cbb9d24 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180816 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180811 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 68ea6d17fe531e383394573251359ab4f99f7091 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180812 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd48b477e90c3200b970545d1953e12e8c1431db be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180807 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180813 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180814 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180818 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180819 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180820 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
Searching for interesting versions
 Result found: flight 180659 (pass), for basis pass
 Result found: flight 180785 (fail), for basis failure
 Repro found: flight 180797 (pass), for basis pass
 Repro found: flight 180807 (fail), for basis failure
 0 revisions at 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
No revisions left to test, checking graph state.
 Result found: flight 180813 (pass), for last pass
 Result found: flight 180814 (fail), for first failure
 Repro found: flight 180816 (pass), for last pass
 Repro found: flight 180818 (fail), for first failure
 Repro found: flight 180819 (pass), for last pass
 Repro found: flight 180820 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180820/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-i386.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
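The bisection log above shows the confirmation pattern osstest applies before blaming a changeset: after narrowing to a last-pass/first-fail pair (180813/180814), each endpoint is re-run twice ("Repro found" for 180816, 180818, 180819, 180820) to rule out flaky results. An illustrative sketch of that idea, not osstest's actual code (function and parameter names here are hypothetical):

```python
def confirm(endpoint, run_test, repros=2):
    """Return True when `run_test(endpoint)` gives the same result
    on `repros` additional runs, i.e. the result is reproducible."""
    first = run_test(endpoint)
    return all(run_test(endpoint) == first for _ in range(repros))
```

Only when both the last-pass and first-fail endpoints confirm does the harness print the "Found and reproduced problem changeset" verdict; an inconsistent repro would instead send the search back for more testing.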
180820: tolerable ALL FAIL

flight 180820 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/180820/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-i386                    6 xen-build               fail baseline untested


jobs:
 build-i386                                                   fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat May 20 16:43:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 16:43:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537495.836734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0PfH-00067K-51; Sat, 20 May 2023 16:42:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537495.836734; Sat, 20 May 2023 16:42:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0PfH-00067D-2F; Sat, 20 May 2023 16:42:43 +0000
Received: by outflank-mailman (input) for mailman id 537495;
 Sat, 20 May 2023 16:42:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0PfF-00066k-JS; Sat, 20 May 2023 16:42:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0PfF-0002Kg-Ar; Sat, 20 May 2023 16:42:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0PfE-0007N2-Qw; Sat, 20 May 2023 16:42:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0PfE-0004sZ-QT; Sat, 20 May 2023 16:42:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WgKVNWASWyOMhoC3VRnBqRhOLqLJL2xR2Ywbn3jPkFI=; b=YxzPLmar/SPnRn1AtmyM7B1+i9
	Y+SIOpDttagQnObkdIXinc/x0kAtValwYLmpbAs9ICasTH+nmDHXfPkARQmbVc5pDtJTFNC4p83E8
	CRqJgSx04QHF4cprlH5drS8ctWsr5A2xluxS4E5Iyul2iYiSg+1V9cfHAOu9eSMcU0UA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180804-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180804: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
X-Osstest-Versions-That:
    xen=42abf5b9c53eb1b1a902002fcda68708234152c3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 16:42:40 +0000

flight 180804 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180804/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 180726 pass in 180804
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180726 pass in 180804
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 180726 pass in 180804
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail pass in 180726

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180696
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180696
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180696
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180696
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180696
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180696
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180696
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180696
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180696
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180696
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a
baseline version:
 xen                  42abf5b9c53eb1b1a902002fcda68708234152c3

Last test of basis   180696  2023-05-18 01:51:57 Z    2 days
Testing same since   180701  2023-05-18 17:09:58 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   42abf5b9c5..753d903a6f  753d903a6f2d1e68d98487d36449b5739c28d65a -> master


From xen-devel-bounces@lists.xenproject.org Sat May 20 17:49:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 20 May 2023 17:49:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537515.836786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Qhd-0004aZ-Ie; Sat, 20 May 2023 17:49:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537515.836786; Sat, 20 May 2023 17:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Qhd-0004aS-Fn; Sat, 20 May 2023 17:49:13 +0000
Received: by outflank-mailman (input) for mailman id 537515;
 Sat, 20 May 2023 17:49:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Qhc-0004aI-AW; Sat, 20 May 2023 17:49:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Qhc-0003lW-0Z; Sat, 20 May 2023 17:49:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Qhb-0000Vn-Ik; Sat, 20 May 2023 17:49:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Qhb-0004cl-IF; Sat, 20 May 2023 17:49:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=caQWpsrBt2pZPbaTOH5BasiprA12YUS1iQAVCKBrx6Q=; b=QRF3pcpvElq0NtjunB9Gl4RQW/
	DGNaNh06bcl3rKIcI2vxi2rXufbAVmOH4viWSVKC0Ef8v1lbaeSVS7ZjcnOhmnHUC3t8O0F4km9u6
	fiCQ1FBJiEaNWSxpDIEtJBAfknk23uFGtFILtfCpqJORLgnTHhmY3j5LuHdAs6K60fWM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180815-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180815: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 20 May 2023 17:49:11 +0000

flight 180815 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180815/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    3 days
Failing since        180699  2023-05-18 07:21:24 Z    2 days    9 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 00:28:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 00:28:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537535.836839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0WvV-0002bQ-PG; Sun, 21 May 2023 00:27:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537535.836839; Sun, 21 May 2023 00:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0WvV-0002bJ-Li; Sun, 21 May 2023 00:27:57 +0000
Received: by outflank-mailman (input) for mailman id 537535;
 Sun, 21 May 2023 00:27:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0WvU-0002b6-Ve; Sun, 21 May 2023 00:27:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0WvU-0004ka-KS; Sun, 21 May 2023 00:27:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0WvU-0000y6-6Q; Sun, 21 May 2023 00:27:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0WvU-00080c-62; Sun, 21 May 2023 00:27:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2SJeUoeomNbO9lXC4Z47lDud60FQw3zq3d0/uDVa8Fc=; b=iMyqjROv6FtyQPdojJciIkR5js
	8APTv7wnihpZ/vL5bHbkxMHAk+t7V81umCcdsTiK4PRlXw1SaUn1tDRIsXMoxTaZTEKyzIYJuYGd8
	pqZPbReJw9aQpwNErK2rko4rXsey7sqFvlEBqdQFZuM4XvJ+9g0RzbIat3UnIjyPT9JI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180825-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180825: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 00:27:56 +0000

flight 180825 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180825/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    3 days
Failing since        180699  2023-05-18 07:21:24 Z    2 days   10 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 01:44:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 01:44:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537551.836873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Y7U-0000os-LH; Sun, 21 May 2023 01:44:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537551.836873; Sun, 21 May 2023 01:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Y7U-0000ol-IK; Sun, 21 May 2023 01:44:24 +0000
Received: by outflank-mailman (input) for mailman id 537551;
 Sun, 21 May 2023 01:44:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Y7U-0000ob-6e; Sun, 21 May 2023 01:44:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Y7U-0004u8-0n; Sun, 21 May 2023 01:44:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Y7T-0002g5-Oe; Sun, 21 May 2023 01:44:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Y7T-0004UB-OI; Sun, 21 May 2023 01:44:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fY6O/cUAGIkcaoJIuKBzwMWYLTHDDj8fdC3m7ko3zAw=; b=wEYxRxfaljh+ZYRGQvCcuzawSV
	9kYIaXcjA9snza1vzCEQODAGNKOcKiXomSpNl2q//fXKVL5Fa3XsImrXUoQ6jgyP+YRqNUa2N3Zdr
	Hq4lbrsB99xZPh0QBLKBK6BTP2kGwW4VKr0yhV/MYpbGOV4bSSzKJI95352o1oUFWJ9k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180817-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180817: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d635f6cc934bcd467c5d67148ece74632fd96abf
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 01:44:23 +0000

flight 180817 linux-linus real [real]
flight 180834 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180817/
http://logs.test-lab.xenproject.org/osstest/logs/180834/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180834-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d635f6cc934bcd467c5d67148ece74632fd96abf
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   34 days
Failing since        180281  2023-04-17 06:24:36 Z   33 days   62 attempts
Testing same since   180817  2023-05-20 13:00:15 Z    0 days    1 attempts

------------------------------------------------------------
2457 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 309726 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 02:33:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 02:33:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537561.836895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Ysp-0006Xb-EI; Sun, 21 May 2023 02:33:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537561.836895; Sun, 21 May 2023 02:33:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0Ysp-0006XU-BA; Sun, 21 May 2023 02:33:19 +0000
Received: by outflank-mailman (input) for mailman id 537561;
 Sun, 21 May 2023 02:33:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Yso-0006XK-K7; Sun, 21 May 2023 02:33:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Yso-0006Om-EF; Sun, 21 May 2023 02:33:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Yso-0003im-68; Sun, 21 May 2023 02:33:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0Yso-0003UY-5l; Sun, 21 May 2023 02:33:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UZbZI7NsUQ3Xxfw3tEtu4HL4vI8MxxffEt/SuKyLgbI=; b=oiKVPiU84as36N/VHWlToLoU5a
	QPkoUcXyskWpf9DpQif71rM3M+mgCHrk19gslZp+4s3ASpZqElXw7N5goEp1sLkaHzQTVWr22lEjo
	ePmSYEDSyBotAlBn4krw5Vaq2yCopKMTyLJffFh5tABVgpweBmqHOd1TUboDLQFZA5+w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180835-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180835: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 02:33:18 +0000

flight 180835 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180835/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    3 days
Failing since        180699  2023-05-18 07:21:24 Z    2 days   11 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    1 day     5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 06:19:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 06:19:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537584.836947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0cPw-00044e-MC; Sun, 21 May 2023 06:19:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537584.836947; Sun, 21 May 2023 06:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0cPw-00044X-H5; Sun, 21 May 2023 06:19:44 +0000
Received: by outflank-mailman (input) for mailman id 537584;
 Sun, 21 May 2023 06:19:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0cPv-00044N-9M; Sun, 21 May 2023 06:19:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0cPv-0003qc-5O; Sun, 21 May 2023 06:19:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0cPu-000856-QY; Sun, 21 May 2023 06:19:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0cPu-0001gI-Q5; Sun, 21 May 2023 06:19:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=39sYjoNR77HRLN5hasswd0L5k0xdSILz1Vxhh33EByA=; b=2i0LaSuLRGovOMjLUB6ilN91Cb
	vwrrTgJESXNGfs5J5jvH4jhwBmoNQZT8+e5rdmpX3/YStJ47HTCuxIZT2ZQpJfszTxQNqXD4OaABG
	Ppigxzptqj5AtyoFScx7SwV408cN0RdjLPmk7vI3S744YNkJLVCziFbAlydLWyV4UDYA=;
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-arm64
Message-Id: <E1q0cPu-0001gI-Q5@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 06:19:42 +0000

branch xen-unstable
xenbranch xen-unstable
job build-arm64
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180850/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

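[Editorial note: the bisected commit above makes QEMU's configure bootstrap a Python virtual environment unconditionally. As a rough, hypothetical illustration of that behaviour only (not QEMU's actual configure code), a minimal sketch using the standard-library `venv` module:]

```python
import os
import tempfile
import venv

# Hypothetical stand-in for the behaviour the commit describes: the build
# always creates its own venv before running the Python-based tooling.
build_dir = tempfile.mkdtemp()
env_dir = os.path.join(build_dir, "pyvenv")
venv.create(env_dir, with_pip=False)  # with_pip=False keeps the sketch fast

# Every venv carries a pyvenv.cfg marker; a real configure script would
# now invoke meson via the venv's interpreter.
print(os.path.exists(os.path.join(env_dir, "pyvenv.cfg")))  # -> True
```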

For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-arm64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-arm64.xen-build --summary-out=tmp/180850.bisection-summary --basis-template=180691 --blessings=real,real-bisect,real-retry qemu-mainline build-arm64 xen-build
Searching for failure / basis pass:
 180835 fail [host=rochester0] / 180691 ok.
Failure / basis pass flights: 180835 / 180691
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
Basis pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e-0abfb0be6cf78a8e962383e85cec57851ddae5bc https://gitlab.com/qemu-project/qemu.git#6972ef1440a9d685482d78672620a7482f2bd09a-aa222a8e4f975284b3f8f131653a4114b3d333b3 git://xenbits.xen.org/osstest/seabios.git#be7e899350caa7b74d8271a34264c3b4aef25ab0-be7e899350caa7b74d8271a34264c3b4aef25ab0 git://xenbits.xen.org/xen.git#8f9c8274a4e3e860bd777269cb2c91971e9fa69e-753d903a6f2d1e68d98487d36449b5739c28d65a
Loaded 25008 nodes in revision graph
Searching for test results:
 180702 [host=rochester1]
 180691 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
 180704 fail irrelevant
 180721 fail irrelevant
 180742 [host=rochester1]
 180753 fail irrelevant
 180785 [host=rochester1]
 180807 fail irrelevant
 180821 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
 180822 fail irrelevant
 180823 fail irrelevant
 180815 fail irrelevant
 180824 pass irrelevant
 180826 fail irrelevant
 180827 pass irrelevant
 180828 fail irrelevant
 180829 pass irrelevant
 180830 fail irrelevant
 180831 pass irrelevant
 180825 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180832 fail irrelevant
 180833 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 6972ef1440a9d685482d78672620a7482f2bd09a be7e899350caa7b74d8271a34264c3b4aef25ab0 8f9c8274a4e3e860bd777269cb2c91971e9fa69e
 180836 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180837 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc fe3ab4eb2de46076cbafcbc86b22e71ad24894c6 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180838 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 928348949d1d04f67715fa7125e7e1fa3ff40f7c be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180841 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 3b087f79a48807f348ea61469175e66b28ba44de be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180835 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180842 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 66e2c6cbacea9302a1fc5528906243d36c103fc7 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180844 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc dd48b477e90c3200b970545d1953e12e8c1431db be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180845 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180846 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180847 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180848 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180849 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180850 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
Searching for interesting versions
 Result found: flight 180691 (pass), for basis pass
 Result found: flight 180825 (fail), for basis failure
 Repro found: flight 180833 (pass), for basis pass
 Repro found: flight 180835 (fail), for basis failure
 0 revisions at 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
No revisions left to test, checking graph state.
 Result found: flight 180845 (pass), for last pass
 Result found: flight 180846 (fail), for first failure
 Repro found: flight 180847 (pass), for last pass
 Repro found: flight 180848 (fail), for first failure
 Repro found: flight 180849 (pass), for last pass
 Repro found: flight 180850 (fail), for first failure
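
[Editorial note: the narrowing log above is the classic bisection pattern — a known-good basis, a known-bad tip, and repeated midpoint tests until adjacent pass/fail revisions remain. A minimal, self-contained sketch of that pattern (hypothetical predicate and abbreviated revision IDs; not osstest's actual cs-bisection-step code):]

```python
def first_bad(revs, is_bad):
    """Return the first revision in revs for which is_bad() is true.

    Assumes revs is ordered oldest -> newest, revs[0] passes and
    revs[-1] fails (the 'basis pass' / 'basis failure' above).
    """
    lo, hi = 0, len(revs) - 1  # revs[lo] known good, revs[hi] known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(revs[mid]):
            hi = mid  # failure moved earlier: bug is at or before mid
        else:
            lo = mid  # still passing: bug is after mid
    return revs[hi]

# Hypothetical revision list; "81e2b19" plays the role of the first bad commit.
revs = ["2274817", "dd48b47", "81e2b19", "3b087f7", "aa222a8"]
bad = {"81e2b19", "3b087f7", "aa222a8"}
print(first_bad(revs, lambda r: r in bad))  # -> 81e2b19
```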

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180850/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
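
  [Editorial note: the venv bootstrap the commit message describes can be sketched as follows. This is a hypothetical illustration using Python's standard `venv` module, not QEMU's actual configure code, which is a shell script; the name `ensure_venv` and the `pyvenv` path are assumptions for the example.]

  ```python
  # Hypothetical sketch of an "unconditional venv" bootstrap, in the spirit
  # of the commit above; NOT QEMU's actual configure logic.
  import os
  import venv

  def ensure_venv(path="pyvenv"):
      """Create a virtual environment at `path` if one is not already
      present, and return the path of its python interpreter."""
      if not os.path.isdir(path):
          # with_pip=False keeps the sketch fast and self-contained; the
          # real bootstrap would also provision pip and a vendored meson
          # inside the venv before handing control to the build.
          venv.EnvBuilder(with_pip=False).create(path)
      bindir = "Scripts" if os.name == "nt" else "bin"
      return os.path.join(path, bindir, "python")

  if __name__ == "__main__":
      print(ensure_venv())
  ```

  The point of doing this unconditionally is that every build then runs against one known interpreter environment, instead of whatever python the host happens to expose.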

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-arm64.xen-build.{dot,ps,png,html,svg}.
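
[Editorial note: the "Searching for interesting versions" log above is the output of a bisection over the revision range between the basis pass and the basis failure. A minimal sketch of that search, assuming a single pass-to-fail transition (this is an illustration, not osstest's cs-bisection-step implementation; `revs` and `is_bad` are hypothetical names):]

```python
def bisect(revs, is_bad):
    """Return the first bad revision in `revs`, given that revs[0] is
    known good, revs[-1] is known bad, and there is a single
    pass->fail transition somewhere in between."""
    lo, hi = 0, len(revs) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(revs[mid]):   # in osstest, one test flight per probe
            hi = mid            # failure: the culprit is at or before mid
        else:
            lo = mid            # pass: the culprit is after mid
    return revs[hi]
```

In the real harness each `is_bad` probe is a full build/test flight, and each result is reproduced at least once (the "Repro found" lines) before the changeset is blamed.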
----------------------------------------
180850: tolerable ALL FAIL

flight 180850 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/180850/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64                   6 xen-build               fail baseline untested


jobs:
 build-arm64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun May 21 06:38:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 06:38:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537591.836960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0ciQ-0006Yy-5W; Sun, 21 May 2023 06:38:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537591.836960; Sun, 21 May 2023 06:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0ciQ-0006Yr-2C; Sun, 21 May 2023 06:38:50 +0000
Received: by outflank-mailman (input) for mailman id 537591;
 Sun, 21 May 2023 06:38:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0ciO-0006Yh-8m; Sun, 21 May 2023 06:38:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0ciO-0004Kf-26; Sun, 21 May 2023 06:38:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0ciN-0000LB-Km; Sun, 21 May 2023 06:38:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0ciN-0001A3-KH; Sun, 21 May 2023 06:38:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zDtSZoBA3K1EbYpYZHpqwRWqiexQRa5VRN3M+zo+/ro=; b=1YhDyG7U0kjOdr9mcW6xUpi1if
	buHaH6qYTnBPmHF2A0RNNyGh98Hd/aiu20dj6LpYnYnsU/sSw68SDmAglGG6aLuFtLbwho8Re4Oi3
	DpT8aFpBhyWB/edOL0Xt/xy58caKO8FNawWUe5nz/0Zqz8NSORAoT5x6NOK8fr8dez9A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180843-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180843: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 06:38:47 +0000

flight 180843 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180843/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    3 days
Failing since        180699  2023-05-18 07:21:24 Z    2 days   12 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    1 day     6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 




Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 10:44:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 10:44:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537613.837000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0gXk-0006jh-8F; Sun, 21 May 2023 10:44:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537613.837000; Sun, 21 May 2023 10:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0gXk-0006ja-54; Sun, 21 May 2023 10:44:04 +0000
Received: by outflank-mailman (input) for mailman id 537613;
 Sun, 21 May 2023 10:44:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gXj-0006jQ-KC; Sun, 21 May 2023 10:44:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gXj-0001vo-Da; Sun, 21 May 2023 10:44:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gXj-0001CY-3V; Sun, 21 May 2023 10:44:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gXj-0004xL-34; Sun, 21 May 2023 10:44:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Wza+2fiVusvPu+vduqjsPL5jEP45yKiGWJNR04XYl+Q=; b=TDdwBLpPRrPuRF6ecAUs/+cZcp
	knqz9EnRZWTFrhOpK5gPgJk/fmeRXp8pHXyyZ25XoOi3DguK0YSRhsomuuSAfMzkbPH2YU7n1VFd1
	AnnuTfGBKXftSNOEhBVFnzaxyMdai9vOOqxK00O9z9BrS7/IqeUp1beIpu+du4hZ33Nk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180853-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180853: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 10:44:03 +0000

flight 180853 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180853/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    3 days
Failing since        180699  2023-05-18 07:21:24 Z    3 days   13 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    1 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 10:59:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 10:59:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537621.837017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0gmg-0008OS-NQ; Sun, 21 May 2023 10:59:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537621.837017; Sun, 21 May 2023 10:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0gmg-0008OL-IX; Sun, 21 May 2023 10:59:30 +0000
Received: by outflank-mailman (input) for mailman id 537621;
 Sun, 21 May 2023 10:59:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gmf-0008OB-Ve; Sun, 21 May 2023 10:59:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gmf-0002AZ-MF; Sun, 21 May 2023 10:59:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gmf-0001XL-8V; Sun, 21 May 2023 10:59:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gmf-00017i-84; Sun, 21 May 2023 10:59:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yFTinn+bYKP+WvsLT0YsuHWCkg0LCWEE5U/knCgcZiw=; b=agdsx46uRc8sJWu7hlosoTnTLr
	SQ29hNGJdmQWVAALafr1ZbzbvKD7ww+8TkkPkReqbZukJ4/oumzu3RB4DtvTrcls2WnGGijWDJFAW
	/GLNSZ/VNlzeStRMaqr7OCQWjSh1wxewEhMHAEpQ7xQOuQA8ve2YV+tHm36dhWnUPAu8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180840-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180840: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-start/debianhvm.repeat:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0dd2a6fb1e34d6dcb96806bc6b111388ad324722
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 10:59:29 +0000

flight 180840 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180840/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 test-amd64-amd64-xl-qemuu-ovmf-amd64 20 guest-start/debianhvm.repeat fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                0dd2a6fb1e34d6dcb96806bc6b111388ad324722
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   34 days
Failing since        180281  2023-04-17 06:24:36 Z   34 days   63 attempts
Testing same since   180840  2023-05-21 01:58:17 Z    0 days    1 attempts

------------------------------------------------------------
2469 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 310868 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 11:11:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 11:11:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537631.837033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0gyF-0002RI-1O; Sun, 21 May 2023 11:11:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537631.837033; Sun, 21 May 2023 11:11:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0gyE-0002RB-Ru; Sun, 21 May 2023 11:11:26 +0000
Received: by outflank-mailman (input) for mailman id 537631;
 Sun, 21 May 2023 11:11:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gyE-0002R1-5I; Sun, 21 May 2023 11:11:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gyE-0002XR-13; Sun, 21 May 2023 11:11:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gyD-0001n7-Kn; Sun, 21 May 2023 11:11:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0gyD-0004nS-KQ; Sun, 21 May 2023 11:11:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ALE/zczyoLpdibfP7Q6GEzs9G86+7HyBkIi5E+L5rIU=; b=t5OjsuMqRzCGoqjsYOuvN6TpWz
	KSLesZ28ePN5PLs8SitUR/cklxImK9TYZgxtg80x6j++5TFva6YQ/vG/Gz6qBLbwXyTFOVuG6mBtA
	5IuiA96QgyLkjM4uHezn1uVnTR8nYO/DkF+sgaQDxWUcvUmzQoZMgRSq20KH16oP7RkM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180839-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180839: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
X-Osstest-Versions-That:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 11:11:25 +0000

flight 180839 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180839/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180804
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180804
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180804
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180804
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180804
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180804
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180804
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180804
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180804
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180804
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180804
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180804
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a
baseline version:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a

Last test of basis   180839  2023-05-21 01:53:27 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 21 13:03:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 13:03:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537649.837066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0iis-0005Vw-A7; Sun, 21 May 2023 13:03:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537649.837066; Sun, 21 May 2023 13:03:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0iis-0005Vp-65; Sun, 21 May 2023 13:03:42 +0000
Received: by outflank-mailman (input) for mailman id 537649;
 Sun, 21 May 2023 13:03:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0iiq-0005Vf-FW; Sun, 21 May 2023 13:03:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0iiq-0004q2-AQ; Sun, 21 May 2023 13:03:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0iiq-0005w3-3N; Sun, 21 May 2023 13:03:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0iig-0007b4-KZ; Sun, 21 May 2023 13:03:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z5S3Aq8J8wl7A7W9hSTOYTml+jlW758Zv2WM10zXsJ0=; b=l5ndHnEWgzSN4Q2eaNFaXgg6Nu
	LARZKUjVftuhghweG+eVpWhTdGAVk7oBInWl4OCnv3/AviBihHFO2DRGmwdiA+jhlSORXBHNZdXhu
	0mOgsgYv/aOZy3bPGwAKtaxxg5tPgffv8ZKcgzBChPgPQ6QIv1SGBVVFXC6oNrJzQTl8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180860-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180860: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 13:03:30 +0000

flight 180860 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180860/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    4 days
Failing since        180699  2023-05-18 07:21:24 Z    3 days   14 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    1 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 16:04:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 16:04:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537667.837106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0lXa-0007IO-6J; Sun, 21 May 2023 16:04:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537667.837106; Sun, 21 May 2023 16:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0lXa-0007IH-3e; Sun, 21 May 2023 16:04:14 +0000
Received: by outflank-mailman (input) for mailman id 537667;
 Sun, 21 May 2023 16:04:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0lXY-0007I7-Ck; Sun, 21 May 2023 16:04:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0lXY-0001Go-5R; Sun, 21 May 2023 16:04:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0lXX-0003FM-Ql; Sun, 21 May 2023 16:04:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0lXX-0003T1-QH; Sun, 21 May 2023 16:04:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=n/d88YPnyceQXav6O5Ov6kl91BJHD63SE7NuSs5aI2I=; b=albh2XXdQxZdpFSeTkrKrlkHmq
	L3fk1rF+YK+Xyzm4udIUbaeh92zFjpSLVfC+I+Tvk3rCoX07kFRH86j+mXiio6o3hqh3SFxmO+2/M
	lU9T43I/qpEgrxfkB1xxuQaG2VhVuZXR2oXLwZiLuaX2b4uYsyVIGVCMgPlLZUjt0ag4=;
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-arm64-xsm
Message-Id: <E1q0lXX-0003T1-QH@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 16:04:11 +0000

branch xen-unstable
xenbranch xen-unstable
job build-arm64-xsm
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180871/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-arm64-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-arm64-xsm.xen-build --summary-out=tmp/180871.bisection-summary --basis-template=180691 --blessings=real,real-bisect,real-retry qemu-mainline build-arm64-xsm xen-build
Searching for failure / basis pass:
 180860 fail [host=rochester0] / 180691 [host=rochester1] 180686 [host=rochester1] 180673 [host=rochester1] 180659 ok.
Failure / basis pass flights: 180860 / 180659
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
Basis pass d3225577123767fd09c91201d27e9c91663ae132 8844bb8d896595ee1d25d21c770e6e6f29803097 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#d3225577123767fd09c91201d27e9c91663ae132-0abfb0be6cf78a8e962383e85cec57851ddae5bc https://gitlab.com/qemu-project/qemu.git#8844bb8d896595ee1d25d21c770e6e6f29803097-aa222a8e4f975284b3f8f131653a4114b3d333b3 git://xenbits.xen.org/osstest/seabios.git#ea1b7a0733906b8425d948ae94fba63c32b1d425-be7e899350caa7b74d8271a34264c3b4aef25ab0 git://xenbits.xen.org/xen.git#4c507d8a6b6e8be90881a335b0a66eb28e0f7737-753d903a6f2d1e68d98487d36449b5739c28d65a
Loaded 30609 nodes in revision graph
Searching for test results:
 180659 pass d3225577123767fd09c91201d27e9c91663ae132 8844bb8d896595ee1d25d21c770e6e6f29803097 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
 180673 [host=rochester1]
 180686 [host=rochester1]
 180702 [host=rochester1]
 180691 [host=rochester1]
 180704 fail irrelevant
 180721 fail irrelevant
 180742 [host=rochester1]
 180753 fail irrelevant
 180785 [host=rochester1]
 180807 fail irrelevant
 180815 fail irrelevant
 180825 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180835 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180843 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180851 pass d3225577123767fd09c91201d27e9c91663ae132 8844bb8d896595ee1d25d21c770e6e6f29803097 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
 180852 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180854 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 266ccbb27b3ec6661f22395ec2c41d854c94d761 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180855 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 7d478306e84259678b2941e8af7496ef32a9c4c5 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180856 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 3d8ff94e59770ec7f5effe509c94246b2cbe9ce0 be7e899350caa7b74d8271a34264c3b4aef25ab0 c8e4bbb5b8ee22fd1591ba6a5a3cef4466dda323
 180853 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180857 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc ed8d95182bc994e31e730c59e1c8bfec4822b27d be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180858 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc a988b4c56143d90f98034daf176e416b08dddf36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180859 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc e80bdbf283fb7a3643172b7f85b41d9dd312091c be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180862 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc a9dbde71da553fe0b132ffac6d1a0de16892a90d be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180863 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc f1ad527ff5f789a19c79f5f39a87f7a8f78d81b9 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180864 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc dd48b477e90c3200b970545d1953e12e8c1431db be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180860 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180865 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180867 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180868 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180869 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180870 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180871 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
Searching for interesting versions
 Result found: flight 180659 (pass), for basis pass
 Result found: flight 180825 (fail), for basis failure
 Repro found: flight 180851 (pass), for basis pass
 Repro found: flight 180852 (fail), for basis failure
 0 revisions at 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
No revisions left to test, checking graph state.
 Result found: flight 180865 (pass), for last pass
 Result found: flight 180867 (fail), for first failure
 Repro found: flight 180868 (pass), for last pass
 Repro found: flight 180869 (fail), for first failure
 Repro found: flight 180870 (pass), for last pass
 Repro found: flight 180871 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180871/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

pnmtopng: 246 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/build-arm64-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
180871: tolerable ALL FAIL

flight 180871 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/180871/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64-xsm               6 xen-build               fail baseline untested


jobs:
 build-arm64-xsm                                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun May 21 16:09:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 16:09:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537677.837118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0lcv-00081G-V7; Sun, 21 May 2023 16:09:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537677.837118; Sun, 21 May 2023 16:09:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0lcv-000819-S0; Sun, 21 May 2023 16:09:45 +0000
Received: by outflank-mailman (input) for mailman id 537677;
 Sun, 21 May 2023 16:09:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0lcu-00080y-6r; Sun, 21 May 2023 16:09:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0lcu-0001Ve-0r; Sun, 21 May 2023 16:09:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0lct-0003MY-OJ; Sun, 21 May 2023 16:09:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0lct-0003sS-Nk; Sun, 21 May 2023 16:09:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=h9+FJ7XInlbkKJ/LUDpePgIz0nO25Vt92NwfdTZdD98=; b=KAosO8uIndAOUyNQ/5J/2BWZHK
	R6TgoOvkQQWyiIHRubVRFPjTFZi4Fyg+DDheWEenNPyOgNURyKHMFZMFxyfi5xJTjF57J6p2Su1VJ
	zQhq/jzOCFNYT3+7LeYcMToy/f5lVpP0dKjMysne2VK1Citla2R40qXWVcLGyNAsT1lM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180866-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180866: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 16:09:43 +0000

flight 180866 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180866/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    4 days
Failing since        180699  2023-05-18 07:21:24 Z    3 days   15 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    1 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 20:17:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 20:17:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537692.837147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0pTx-0007Y5-LF; Sun, 21 May 2023 20:16:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537692.837147; Sun, 21 May 2023 20:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0pTx-0007Xy-IO; Sun, 21 May 2023 20:16:45 +0000
Received: by outflank-mailman (input) for mailman id 537692;
 Sun, 21 May 2023 20:16:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0pTw-0007Xn-7J; Sun, 21 May 2023 20:16:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0pTv-0007QJ-Mk; Sun, 21 May 2023 20:16:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0pTv-0000Wd-BZ; Sun, 21 May 2023 20:16:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0pTv-00031S-BB; Sun, 21 May 2023 20:16:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e5a7vONMLUmIB6RfxoSV+6Nm1FY/Y4Thv9R2VEGwqDI=; b=K2DPRk6psYj4VxwRbthBURBpbv
	zUczFo2PLef99SlJaUi2BkKyMaVuSXu8Feqix5xEsbBGE50JedOAuJyAHnSUlm684wRXYgByYdoom
	/4jLdxfPFE3mK0EBnCko3yXkKiKNPJ2C1a1zRFEvrTg2KvHLt0/ek6BM5dqIesp2mig0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180861-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180861: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0dd2a6fb1e34d6dcb96806bc6b111388ad324722
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 20:16:43 +0000

flight 180861 linux-linus real [real]
flight 180876 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180861/
http://logs.test-lab.xenproject.org/osstest/logs/180876/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0dd2a6fb1e34d6dcb96806bc6b111388ad324722
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   35 days
Failing since        180281  2023-04-17 06:24:36 Z   34 days   64 attempts
Testing same since   180840  2023-05-21 01:58:17 Z    0 days    2 attempts

------------------------------------------------------------
2469 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 310868 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 20:34:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 20:34:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537700.837157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0plF-0001dm-7u; Sun, 21 May 2023 20:34:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537700.837157; Sun, 21 May 2023 20:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0plF-0001df-4u; Sun, 21 May 2023 20:34:37 +0000
Received: by outflank-mailman (input) for mailman id 537700;
 Sun, 21 May 2023 20:34:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0plE-0001dV-7T; Sun, 21 May 2023 20:34:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0plD-0007m5-RQ; Sun, 21 May 2023 20:34:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0plD-0000w6-EV; Sun, 21 May 2023 20:34:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0plD-000825-E4; Sun, 21 May 2023 20:34:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iCxs4gpNpXoipz/6RoD/JgMGKe/0ktnMHYQnT5Bw6NE=; b=B+YxtkhnWdLv3lyMdC7cfdyz8B
	dHgBQseOq4grY12vrpQISTJXFz7mWmusiM3iIUQV8415SPNftAUIfkqZbSErg5dq28pH1zPApdbpr
	lLlT7S4YoA4gWS5+4dV4iEOKBzvCzCoz1Fn5AUNqAHl9+Zik65kAE6Q/SZnAJE7p+MF0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180873-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180873: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 20:34:35 +0000

flight 180873 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180873/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    4 days
Failing since        180699  2023-05-18 07:21:24 Z    3 days   16 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 21 22:03:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 21 May 2023 22:03:35 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180878-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180878: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 21 May 2023 22:03:21 +0000

flight 180878 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180878/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    4 days
Failing since        180699  2023-05-18 07:21:24 Z    3 days   17 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    1 day     11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 22 04:06:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 04:06:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537727.837201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0wo1-0004II-Di; Mon, 22 May 2023 04:05:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537727.837201; Mon, 22 May 2023 04:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0wo1-0004Hw-6z; Mon, 22 May 2023 04:05:57 +0000
Received: by outflank-mailman (input) for mailman id 537727;
 Mon, 22 May 2023 04:05:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0wo0-0004Hm-0c; Mon, 22 May 2023 04:05:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0wnz-0001Aj-M9; Mon, 22 May 2023 04:05:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0wnz-0007j9-4I; Mon, 22 May 2023 04:05:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0wnz-0006cH-3t; Mon, 22 May 2023 04:05:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rOT7qFnXBBhRWv9HXSaHxj19SrV9TK67Yby2kixtO9c=; b=DzAV1vWCH8cPVS18dbTt6/6d+9
	g2WZIHw1D23y941BQU5ukZehDp+hfodHNg+g6dfdb44FPLAtAEx5YT74OVK8aKgSrSx1IzPWWoseh
	Pe58GFZAQbwB2GtNNDVnX2U233GtCaZZ4HjxNnYwDUmB12Xi8BpeuQl0T25k4RD4A21s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180881-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180881: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 May 2023 04:05:55 +0000

flight 180881 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180881/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    4 days
Failing since        180699  2023-05-18 07:21:24 Z    3 days   18 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    2 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 22 04:11:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 04:11:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537733.837211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0wt2-0005lG-T7; Mon, 22 May 2023 04:11:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537733.837211; Mon, 22 May 2023 04:11:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0wt2-0005l9-Pz; Mon, 22 May 2023 04:11:08 +0000
Received: by outflank-mailman (input) for mailman id 537733;
 Mon, 22 May 2023 04:11:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0wt1-0005kz-F0; Mon, 22 May 2023 04:11:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0wt1-0001Qu-9l; Mon, 22 May 2023 04:11:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0wt1-000862-2B; Mon, 22 May 2023 04:11:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0wt1-0000Hk-1j; Mon, 22 May 2023 04:11:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a5UPuJWlJ18DSj0XRo/K+FU15Jt+1JicA3kQWnBb5bA=; b=mQW560XsRN43NrghLvwIqegzNW
	K+DUpgrP/fGXt8YgT00HDgcWPTFyJecIAxOxxn5djoRBEm3zD8pJF+9vCzLZExxtx1jTnKZjzefyD
	3TD/I1wDmY/67uLRYRwHmO6u++TUviIOSQkEOrYoacVePfli4Yi5y2JfI2cyPKC45Ex0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180883-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180883: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c5cf7f69c98baed40754ca4a821cb504fd5423cd
X-Osstest-Versions-That:
    ovmf=0abfb0be6cf78a8e962383e85cec57851ddae5bc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 May 2023 04:11:07 +0000

flight 180883 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180883/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c5cf7f69c98baed40754ca4a821cb504fd5423cd
baseline version:
 ovmf                 0abfb0be6cf78a8e962383e85cec57851ddae5bc

Last test of basis   180695  2023-05-18 00:10:44 Z    4 days
Testing same since   180883  2023-05-22 01:40:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0abfb0be6c..c5cf7f69c9  c5cf7f69c98baed40754ca4a821cb504fd5423cd -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 22 06:48:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 06:48:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537750.837226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zKg-0004vT-Pd; Mon, 22 May 2023 06:47:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537750.837226; Mon, 22 May 2023 06:47:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zKg-0004vM-Mt; Mon, 22 May 2023 06:47:50 +0000
Received: by outflank-mailman (input) for mailman id 537750;
 Mon, 22 May 2023 06:47:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0zKf-0004vC-Fk; Mon, 22 May 2023 06:47:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0zKf-0005PN-7K; Mon, 22 May 2023 06:47:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q0zKe-00074z-QZ; Mon, 22 May 2023 06:47:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q0zKe-0002w8-Q9; Mon, 22 May 2023 06:47:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/BludxJTNgQlhWgbFFsW41/KUfuUhQPKbaJTE6MxCqU=; b=sVJEWA0jCTGMKNW1K/n1HiC15x
	00SWDoYkmGudXtyZ/d/zMfRW+2cwI2ZQovUNGLHe0Qq0V0lUjCxBJXuNsBMs6IjUiH43nCuqZtGbx
	Id9/2FNkwVAUQpVTqVZJ9hl6cTd2XQqZOMcwooTVxWDKhlTbmt+U6FipFOXQe6Kn7cp8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180879-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180879: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4927cb98f0eeaa5dbeac882e8372f4b16dc62624
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 May 2023 06:47:48 +0000

flight 180879 linux-linus real [real]
flight 180888 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180879/
http://logs.test-lab.xenproject.org/osstest/logs/180888/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd11-amd64 19 guest-localmigrate/x10 fail pass in 180888-retest
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180888-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                4927cb98f0eeaa5dbeac882e8372f4b16dc62624
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   35 days
Failing since        180281  2023-04-17 06:24:36 Z   35 days   65 attempts
Testing same since   180879  2023-05-21 20:41:55 Z    0 days    1 attempts

------------------------------------------------------------
2474 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 311545 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 22 06:51:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 06:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537758.837237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zOJ-0006P1-Ew; Mon, 22 May 2023 06:51:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537758.837237; Mon, 22 May 2023 06:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zOJ-0006Ou-CE; Mon, 22 May 2023 06:51:35 +0000
Received: by outflank-mailman (input) for mailman id 537758;
 Mon, 22 May 2023 06:51:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oqCu=BL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q0zOI-0006Oo-8j
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 06:51:34 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2089.outbound.protection.outlook.com [40.107.7.89])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 166b62a3-f86d-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 08:51:33 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7199.eurprd04.prod.outlook.com (2603:10a6:800:11d::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 06:51:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.027; Mon, 22 May 2023
 06:51:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 166b62a3-f86d-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F6D3DGWMDyX6g6vAkGwf8Qov1gw7g9INnzD15pIMuDRIxDW4m4RFSFduyTmdJM5RVLLputZBM446LOGnxMqOmw4kDtd0EkxYyhuc2qfTqk02Y45tPPVRAOxBPMSxb6zLLCpBmlOJ+ROWtbcstVXA9nU8MJSDViQQBjfMZ2RyCz3x+09yuT+/skZZv7EwWER4Io2wlKWOVDgLLBdAIQZTi6qQp4SHpAWV613BPRkUij7qJceivsRUGqhCfJ+FIWqV89KK02sfPz7gS8U/N+VATvXhgs4BdOcO1oBxKpTZUBFDUDlFPmotBlCXRyxLkaAh7NN+pLnMOx4Zj0QJFY3moA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+kvn0ux3XSvDmdXSLLqfDwSD82qdAjr2L57M30ZkTR8=;
 b=dwMHX9eW4/VatHSH+r1NXMr8sy/8BSGsiAZeMmQYpY9uzir4OGTZC6CQz2m1EULoVza+5x+BgaF+oUNyDCO6OUYyWUhtE8ww4nCuMEwDkRx/jx1GaOKipOfAuwIPQWUqafVJXpgDc7IjGmHyWLHJqNXY1mkY36h+0WbgxoD5iFFUA4mRaWq5ZGj9sv7ipK4W7p2HeyO4RRX8xXPL2GZFiRDY/y/AmF4KkeoYBEv2vW95uMdwaOL594M/0pUGlgBG1tjVfvAVXtjjbi/sye4u10mtMaCy392UBCLG+wxdsIvbiDcIMJnFHX6yzIxowpDWsUrSP8EnyvYS6dX4YFdqtQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+kvn0ux3XSvDmdXSLLqfDwSD82qdAjr2L57M30ZkTR8=;
 b=l719+tapiyBtMslfdljqIp/AoP4+hElLv0xTZtZNRyEjUehsjVyATp7V+ZTFF2JJDSDKE4xGEPm2+DjrfXP7ROgzN+MmgN+VJXufwIKJ7cOloxyjMkWPdPO+CV/wlI1RpFkhSbHcQC/32lEWK5P5p8izTsiL31wCDRh9YmNYJFrhxN7d16PdFf2GLq28IoTpHSuQ8cmgFjB9R+qViUuMFRM9PP8AA9AhxhpXtRbrYngAKiyvAb+UTruvXfu5fo51ZwrjQ9gTliarzwy7Irh2pCr1rNwjbr0rM+N70kaS6lwpaVU6lOloeLTpmTA4lsFr1Mg79gDqG7/98urT+8yBJg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <24587cae-293f-f807-0eab-18bef573e68b@suse.com>
Date: Mon, 22 May 2023 08:51:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: PVH Dom0 related UART failure
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 marmarek@invisiblethingslab.com, xenia.ragiadakou@amd.com
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <b411f7aa-7fd2-7b1c-1bcd-35b989f528b6@suse.com>
 <alpine.DEB.2.22.394.2305191544420.815658@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305191544420.815658@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0087.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1e::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7199:EE_
X-MS-Office365-Filtering-Correlation-Id: 4e342564-feb3-4673-b6e7-08db5a90e87e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e342564-feb3-4673-b6e7-08db5a90e87e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 06:51:03.1364
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NwC0J0/BHi9RijiIuh0pYWHsghYMPY+HunhtnDGRl3qAuUTMexDH+sxromqIBjXsWuX1vjfqQn2D3N4SLcba/w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7199

On 20.05.2023 00:44, Stefano Stabellini wrote:
> On Fri, 19 May 2023, Jan Beulich wrote:
>> On 18.05.2023 12:34, Roger Pau Monné wrote:
>>> On Wed, May 17, 2023 at 05:59:31PM -0700, Stefano Stabellini wrote:
>>>> I have run into another PVH Dom0 issue. I am trying to enable a PVH Dom0
>>>> test with the brand new gitlab-ci runner offered by Qubes. It is an AMD
>>>> Zen3 system and we already have a few successful tests with it, see
>>>> automation/gitlab-ci/test.yaml.
>>>>
>>>> We managed to narrow down the issue to a console problem. We are
>>>> currently using console=com1 com1=115200,8n1,pci,msi as Xen command line
>>>> options, it works with PV Dom0 and it is using a PCI UART card.
>>>>
>>>> In the case of Dom0 PVH:
>>>> - it works without console=com1
>>>> - it works with console=com1 and with the patch appended below
>>>> - it doesn't work otherwise and crashes with this error:
>>>> https://matrix-client.matrix.org/_matrix/media/r0/download/invisiblethingslab.com/uzcmldIqHptFZuxqsJtviLZK
>>>
>>> Jan also noticed this, and we have a ticket for it in gitlab:
>>>
>>> https://gitlab.com/xen-project/xen/-/issues/85
>>>
>>>> What is the right way to fix it?
>>>
>>> I think the right fix is to simply prevent hidden devices from being
>>> handled by vPCI; in any case such devices won't work properly with
>>> vPCI because they are in use by Xen, and so any information cached by
>>> vPCI is likely to become stale, as Xen can modify the device without
>>> vPCI noticing.
>>>
>>> I think the chunk below should help.  It's not clear to me however how
>>> hidden devices should be handled, is the intention to completely hide
>>> such devices from dom0?
>>
>> No, Dom0 should still be able to see them in a (mostly) r/o fashion.
> 
> But why? If something is in use by Xen (e.g. the IOMMU, a serial PCI
> device, etc.), ideally Dom0 shouldn't even know of its existence,
> because the device is not exposed to Dom0. Dom0 is not meant to use it.
> Why let Dom0 know it exists if Dom0 should not use it?

Because we want to disturb the topology Dom0 sees as little as possible.
For example, imagine the device is func 0 of a multi-function device.
How would Dom0 know to even look at the other functions, when it can't
even read the relevant bit in func 0's config space?
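The multi-function argument hinges on a concrete bit of config space: during enumeration an OS reads function 0's Header Type register (config-space offset 0x0E) and only probes functions 1-7 when bit 7 is set. A minimal sketch of that check follows; the constants mirror the PCI specification, while the helper itself is purely illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PCI_HEADER_TYPE     0x0e  /* config-space offset of Header Type */
#define PCI_HEADER_TYPE_MF  0x80  /* bit 7: device is multi-function */

/* Given the Header Type byte read from function 0's config space,
 * decide whether functions 1-7 need to be probed at all.  If func 0's
 * config space is unreadable, this bit can never be seen and the other
 * functions go undiscovered. */
static bool pci_is_multifunction(uint8_t header_type)
{
    return header_type & PCI_HEADER_TYPE_MF;
}
```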

Jan

> In Xen on ARM, initially we didn't expose devices used by Xen to Dom0
> at all.  However, to hide them completely we had to perform complex
> device tree manipulations. Now instead we leave the device nodes in the
> device tree as is, but we change the "status" property to "disabled".
> 
> The idea is still that we completely hide Xen devices from Dom0, but
> because of implementation complexity, instead of completely taking away
> the corresponding nodes from the device tree, we change them to
> disabled, which still leads to the same result: the guest OS will skip
> them.
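As a hypothetical illustration of the ARM approach described above, a Xen-owned UART's node could be left in the device tree Dom0 sees but marked disabled (the node name, compatible string, and addresses here are invented for the example):

```dts
uart0: serial@1c28000 {
    compatible = "snps,dw-apb-uart";
    reg = <0x0 0x01c28000 0x0 0x400>;
    /* Device is owned by Xen; per the DT spec, "disabled" tells
     * the guest OS not to bind a driver to this node. */
    status = "disabled";
};
```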
> 
> I am saying this without being familiar with the x86 PVH implementation,
> so pardon my ignorance here, but it seems to me that as we are moving
> toward a better architecture with PVH, one that allows any OS to be
> Dom0, not just Linux, we would want to also completely hide devices
> owned by Xen from Dom0.
> 
> That way we don't need any workaround in the guest OS for it not to use
> them.



From xen-devel-bounces@lists.xenproject.org Mon May 22 06:59:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 06:59:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537762.837247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zVz-00076g-7c; Mon, 22 May 2023 06:59:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537762.837247; Mon, 22 May 2023 06:59:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zVz-00076Z-4I; Mon, 22 May 2023 06:59:31 +0000
Received: by outflank-mailman (input) for mailman id 537762;
 Mon, 22 May 2023 06:59:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oqCu=BL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q0zVy-00076T-29
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 06:59:30 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2053.outbound.protection.outlook.com [40.107.13.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3221de81-f86e-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 08:59:29 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7809.eurprd04.prod.outlook.com (2603:10a6:20b:242::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 06:58:59 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.027; Mon, 22 May 2023
 06:58:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3221de81-f86e-11ed-b22d-6b7b168915f2
Message-ID: <94ca471c-b12c-43bd-4f94-201756024b3e@suse.com>
Date: Mon, 22 May 2023 08:58:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 1/4] x86/cpufeature: Rework {boot_,}cpu_has()
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-2-andrew.cooper3@citrix.com>
 <d5eb0703-63be-0c5c-3fee-37e74e11dcf8@suse.com>
 <9cd79b9a-2e9d-aa0b-3ea2-747a6840f5f1@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9cd79b9a-2e9d-aa0b-3ea2-747a6840f5f1@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0113.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 19.05.2023 17:24, Andrew Cooper wrote:
> On 17/05/2023 3:01 pm, Jan Beulich wrote:
>> On 16.05.2023 16:53, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/include/asm/cpufeature.h
>>> +++ b/xen/arch/x86/include/asm/cpufeature.h
>>> @@ -7,6 +7,7 @@
>>>  #define __ASM_I386_CPUFEATURE_H
>>>  
>>>  #include <xen/const.h>
>>> +#include <xen/stdbool.h>
>>>  #include <asm/cpuid.h>
>> This isn't needed up here, and ...
>>
>>> @@ -17,7 +18,6 @@
>>>  #define X86_FEATURE_ALWAYS      X86_FEATURE_LM
>>>  
>>>  #ifndef __ASSEMBLY__
>>> -#include <xen/bitops.h>
>> ... putting it here would (a) eliminate a header dependency for
>> assembly sources including this file (perhaps indirectly) and (b)
>> eliminate the risk of a build breakage if something was added to
>> that header which isn't valid assembly.
> 
> b) That's a weak argument for headers in general, but you're saying it
> about our copy of stdbool.h, which is probably the least likely header
> for that ever to be true.
> 
> a) Not really, because cpuid.h pulls in a reasonable chunk of the world,
> including types.h and therefore stdbool.h.  cpuid.h is necessary to make
> the X86_FEATURE_ALWAYS -> X86_FEATURE_LM mapping, which is used by asm
> for alternatives.
> 
> I'm tempted to just omit it.  cpufeature.h has one of the most tangled
> sets of headers we've got, and I can't find any reasonable way to make
> it less bad.

Yeah, omitting would certainly be fine with me.

Jan
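For context, the concern in (a)/(b) above is the common pattern of keeping C-only content behind an __ASSEMBLY__ guard so a header stays safe to include from assembly sources. A self-contained sketch of that pattern follows; this is not Xen's actual cpufeature.h, and all names besides the guard macro convention are illustrative:

```c
#include <assert.h>

/* Sketch of a header that both C and assembly translation units may
 * include.  Assembly builds would define __ASSEMBLY__, so they only
 * ever see the plain #define lines. */
#ifndef EXAMPLE_CPUFEATURE_H
#define EXAMPLE_CPUFEATURE_H

#define X86_FEATURE_LM      (2 * 32 + 29)   /* usable from asm */
#define X86_FEATURE_ALWAYS  X86_FEATURE_LM

#ifndef __ASSEMBLY__
/* Everything below is only valid C; guarding it means a later addition
 * here can never break an assembly source that includes this header. */
#include <stdbool.h>

static inline bool cpu_has(unsigned int feature)
{
    return feature == X86_FEATURE_ALWAYS;   /* placeholder logic */
}
#endif /* __ASSEMBLY__ */

#endif /* EXAMPLE_CPUFEATURE_H */
```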


From xen-devel-bounces@lists.xenproject.org Mon May 22 07:11:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 07:11:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537766.837257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zh9-00015R-9C; Mon, 22 May 2023 07:11:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537766.837257; Mon, 22 May 2023 07:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zh9-00015K-5c; Mon, 22 May 2023 07:11:03 +0000
Received: by outflank-mailman (input) for mailman id 537766;
 Mon, 22 May 2023 07:11:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oqCu=BL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q0zh7-00015E-Ne
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 07:11:01 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2054.outbound.protection.outlook.com [40.107.7.54])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cdc48339-f86f-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 09:10:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8879.eurprd04.prod.outlook.com (2603:10a6:102:20e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 07:10:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.027; Mon, 22 May 2023
 07:10:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdc48339-f86f-11ed-b22d-6b7b168915f2
Message-ID: <678e997a-0e3e-a6b3-1bba-5e66ff74de48@suse.com>
Date: Mon, 22 May 2023 09:10:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 4/4] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-5-andrew.cooper3@citrix.com>
 <1d06869b-1633-7794-c5c9-92d9c0885f19@suse.com>
 <42cd2479-1eba-11c7-26d8-441045c230ed@citrix.com>
 <fb95d033-7a71-7cc1-bb8f-ee2a74d1c5cf@suse.com>
 <a8013bb5-b0bb-6e42-85de-2e12d7b6f83c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a8013bb5-b0bb-6e42-85de-2e12d7b6f83c@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0019.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 19.05.2023 16:38, Andrew Cooper wrote:
> On 19/05/2023 7:00 am, Jan Beulich wrote:
>> On 17.05.2023 18:35, Andrew Cooper wrote:
>>> On 17/05/2023 3:47 pm, Jan Beulich wrote:
>>>> On 16.05.2023 16:53, Andrew Cooper wrote:
>>>>> @@ -401,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
>>>>>          cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
>>>>>      if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
>>>>>          cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
>>>>> +    if ( cpu_has_arch_caps )
>>>>> +        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
>>>> Why do you read the MSR again? I would have expected this to come out
>>>> of raw_cpu_policy now (and incrementally the CPUID pieces as well,
>>>> later on).
>>> Consistency with the surrounding logic.
>> I view this as relevant only when the code invoking CPUID directly is
>> intended to stay.
> 
> Quite the contrary.  It stays because this patch, and changing the
> semantics of the print block are unrelated things and should not be
> mixed together.

Hmm. On one hand I can see your point, yet otoh we move things in a longer
term intended direction in other cases where we need to touch code anyway.
While I'm not going to refuse to ack this change just because of this, I
don't feel like you've answered the original question. In particular I
don't see how taking the value from a memory location we've already cached
it in is changing any semantics here. While some masking may apply even to
the raw policy (to zap unknown bits), this should be meaningless here. No
bit used here should be unmentioned in the policy.
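The point about taking the value from an already-cached policy rather than re-reading the MSR can be sketched generically. All names below are hypothetical stand-ins, not Xen's actual raw_cpu_policy code, and the MSR read is faked so the sketch is self-contained:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical cache mirroring the idea behind raw_cpu_policy: the MSR
 * is read once early on, and later consumers use the stored copy. */
struct cpu_policy { uint64_t arch_caps; };

static struct cpu_policy raw_cpu_policy;

/* Stand-in for rdmsrl(MSR_ARCH_CAPABILITIES, ...). */
static uint64_t fake_rdmsr_arch_caps(void) { return 0x2b; }

static void policy_init(void)
{
    raw_cpu_policy.arch_caps = fake_rdmsr_arch_caps(); /* one read */
}

/* A print_details()-style consumer takes the cached value; as long as
 * the hypervisor never writes the MSR itself, this is semantically the
 * same as performing a fresh read. */
static uint64_t get_arch_caps(void)
{
    return raw_cpu_policy.arch_caps;
}
```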

>>> Also because the raw and host policies don't get sorted until much later
>>> in boot.
>> identify_cpu(), which invokes init_host_cpu_policies(), is called
>> ahead of init_speculation_mitigations(), isn't it?
> 
> What is init_host_cpu_policies() ?

Oops. I did use my own tree as reference during review. See the long
pending "x86/xstate: drop xstate_offsets[] and xstate_sizes[]" [1]. Maybe
you simply didn't tell me yet ...

> I had a plan for what it would be if prior MSR work hadn't ground to
> a halt, but it's a bit too late for that now.
> 
> (To answer the question properly, no the policies aren't set up until
> just before building dom0, and that's not something that is trivial to
> change.)

... that what I'm doing there is too simplistic?

Jan

[1] https://lists.xen.org/archives/html/xen-devel/2021-04/msg01335.html


From xen-devel-bounces@lists.xenproject.org Mon May 22 07:19:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 07:19:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537772.837267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zoj-0001pP-0s; Mon, 22 May 2023 07:18:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537772.837267; Mon, 22 May 2023 07:18:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q0zoi-0001pI-UV; Mon, 22 May 2023 07:18:52 +0000
Received: by outflank-mailman (input) for mailman id 537772;
 Mon, 22 May 2023 07:18:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oqCu=BL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q0zoh-0001pB-8o
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 07:18:51 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20606.outbound.protection.outlook.com
 [2a01:111:f400:7d00::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e62acfb4-f870-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 09:18:50 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB9051.eurprd04.prod.outlook.com (2603:10a6:10:2e6::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 07:18:47 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.027; Mon, 22 May 2023
 07:18:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e62acfb4-f870-11ed-b22d-6b7b168915f2
Message-ID: <3248d4a9-13a6-d85c-65e6-151fcca79cbd@suse.com>
Date: Mon, 22 May 2023 09:18:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 3/6] x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-4-andrew.cooper3@citrix.com>
 <347219e4-6c3a-a0ad-b010-4dbd7282c7ad@suse.com>
 <ea8bb0da-6326-55d6-18b7-0ce681046d53@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ea8bb0da-6326-55d6-18b7-0ce681046d53@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 19.05.2023 17:36, Andrew Cooper wrote:
> On 16/05/2023 1:02 pm, Jan Beulich wrote:
>> On 15.05.2023 16:42, Andrew Cooper wrote:
>>> Bits through 24 are already defined, meaning that we're not far off needing
>>> the second word.  Put both in right away.
>>>
>>> The bool bitfield names in the arch_caps union are unused, and somewhat out of
>>> date.  They'll shortly be automatically generated.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> I'm largely okay, but I'd like to raise a couple of naming / presentation
>> questions:
>>
>>> --- a/tools/misc/xen-cpuid.c
>>> +++ b/tools/misc/xen-cpuid.c
>>> @@ -226,6 +226,14 @@ static const char *const str_7d2[32] =
>>>      [ 4] = "bhi-ctrl",      [ 5] = "mcdt-no",
>>>  };
>>>  
>>> +static const char *const str_10Al[32] =
>>> +{
>>> +};
>>> +
>>> +static const char *const str_10Ah[32] =
>>> +{
>>> +};
>>> +
>>>  static const struct {
>>>      const char *name;
>>>      const char *abbr;
>>> @@ -248,6 +256,8 @@ static const struct {
>>>      { "0x00000007:2.edx", "7d2", str_7d2 },
>>>      { "0x00000007:1.ecx", "7c1", str_7c1 },
>>>      { "0x00000007:1.edx", "7d1", str_7d1 },
>>> +    { "0x0000010a.lo",   "10Al", str_10Al },
>>> +    { "0x0000010a.hi",   "10Ah", str_10Ah },
>> The MSR-ness can certainly be inferred from the .lo / .hi and l/h
>> suffixes of the strings, but I wonder whether having it e.g. like
>>
>>     { "MSR0000010a.lo",   "m10Al", str_10Al },
>>     { "MSR0000010a.hi",   "m10Ah", str_10Ah },
>>
>> or
>>
>>     { "MSR[010a].lo",   "m10Al", str_10Al },
>>     { "MSR[010a].hi",   "m10Ah", str_10Ah },
>>
>> or even
>>
>>     { "ARCH_CAPS.lo",   "m10Al", str_10Al },
>>     { "ARCH_CAPS.hi",   "m10Ah", str_10Ah },
>>
>> wouldn't make it more obvious.
> 
> Well, it takes something which is consistent, and introduces
> inconsistencies.
> 
> The e is logically part of the leaf number, so using m for MSRs is not
> equivalent.  If you do want to prefix MSRs, you need to prefix the
> non-extended leaves, and c isn't obviously CPUID when there's the
> Centaur range at 0xC000xxxx.
> 
> Nor can you reasonably have MSR[...] in the long names without
> CPUID[...] too, and that's taking some pretty long lines and making them
> even longer.

I disagree, simply because of the name of the tool we're talking about
here. It's all about CPUID, so calling that out isn't relevant. Calling
out parts which aren't CPUID is, imo.

>>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>>> @@ -307,6 +307,10 @@ XEN_CPUFEATURE(AVX_VNNI_INT8,      15*32+ 4) /*A  AVX-VNNI-INT8 Instructions */
>>>  XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
>>>  XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
>>>  
>>> +/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */
>>> +
>>> +/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.edx, word 17 */
>> Right here I'd be inclined to omit the MSR index; the name ought to
>> be sufficient.
> 
> It doesn't hurt to have it, and it might be helpful for people who
> don't know the indices off by heart.

I'm one of those who don't, yet I still view the number as odd here.
Imo it has no relevance in this context. But anyway, I'm going to
accept this part whichever way you want it, while I continue to be
unconvinced of the xen-cpuid side of things.

Roger, do you have any opinion here? If you and Andrew agreed, I'd
certainly accept that.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 07:31:48 2023
Message-ID: <9bc8f75b-e46f-48fc-3083-aac30995ec29@suse.com>
Date: Mon, 22 May 2023 09:31:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
 <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
 <1bef2b0e-04e8-2ca0-cf03-f61cdae484a9@suse.com>
 <b1c56e56-90cc-0f37-4c8a-df1217339e59@citrix.com>
 <22a6bd70-887e-da72-ccff-16b3627463c3@suse.com>
 <54b35fa0-160e-3035-6c22-65a791ed2f84@citrix.com>
 <a53a77e3-6dcd-2668-0f3c-282de93d8b04@suse.com>
 <897bac23-b17d-ec4b-613b-d4d1b4c77d58@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <897bac23-b17d-ec4b-613b-d4d1b4c77d58@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.05.2023 17:52, Andrew Cooper wrote:
> On 17/05/2023 10:20 am, Jan Beulich wrote:
>> On 16.05.2023 21:31, Andrew Cooper wrote:
>>> On 16/05/2023 3:53 pm, Jan Beulich wrote:
>>>> I guess that's no
>>>> different from other max-only features with dependents, but I wonder
>>>> whether that's good behavior.
>>> It's not really something you get a choice over.
>>>
>>> Default is always less than max, so however you choose to express these
>>> concepts, when you're opting-in you're always having to put information
>>> back in which was previously stripped out.
>> But my point is towards the amount of data you need to specify manually.
>> I would find it quite helpful if default-on sub-features became available
>> automatically once the top-level feature was turned on. I guess so far
>> we don't have many such cases, but here you add a whole bunch.
> 
> I'm not suggesting specifying it manually.  That would be a dumb UX move.
> 
> But you absolutely cannot have "user turns on EIBRS" meaning "turn on
> ARCH_CAPS too", because a) that requires creating the reverse feature
> map which is massively larger than the forward feature map, and b) it
> creates a vulnerability in the guest, and c) it's ambiguous - e.g. what
> does turning on a sub-mode of AVX-512 mean for sibling sub-modes?

Feels like a misunderstanding at this point, because what you're describing
above is not what I was referring to. Instead I was specifically referring
to "cpuid=...,arch-caps" not having any effect beyond turning on the MSR
itself (with all-zero content). This is made worse by libxl right now not
even having a way to express something like "arch-caps=..." to enable some
of the sub-features for guests.

To explicitly answer the AVX-512 part of your reply: Turning on a sub-mode
alone could be useful as well (with the effect of also turning on the
main feature, and with or without [policy question] also turning on other
default-on sub-features of AVX-512). But again, that direction of possible
"implications" isn't what my earlier reply was about. As you say, reverse
maps would then also be needed, whereas for what I'm suggesting only the
deep-deps we already have are necessary: We'd grab the main feature's
dependencies and AND that with the default mask before ORing into the
domain's policy.

> Whichever algorithm you want, you still need *something* to know that
> ARCH_CAPS is special and how to arrange the defaults given a non-default
> overarching setting.

Afaict right now that would be achieved by the same 'a', 'A', '!a', and
'!A' annotations, suitably placed on each of the sub-features.

Jan

> When the toolstack infrastructure grows the ability to say no to the
> user, it will be able to identify explicit user settings which cannot be
> fulfilled.  (And with a bit more complicated logic, why.)
> 
>>>> Wouldn't it make more sense for the
>>>> individual bits' exposure qualifiers to become meaningful once the
>>>> qualifying feature is enabled? I.e. here this would then mean that
>>>> some ARCH_CAPS bits may become available, while others may require
>>>> explicit turning on (assuming they weren't all 'A').
>>> I'm afraid I don't follow.  You could make some bits of MSR_ARCH_CAPS
>>> itself 'a' vs 'A', and that would have the expected effect (for any VM
>>> where arch_caps was visible).
>> Visible by default, you mean. Whereas I'm considering the case where
>> the CPUID bit is default-off, and turning it on for a guest doesn't at
>> the same time turn on all the 'A' bits in ARCH_CAPS (which hardware
>> offers, or which we synthesize).
>>
>> Something similar could be seen / utilized for AMX, where in my
>> pending series I set all the bits to 'a', requiring every individual
>> bit to be turned on along with turning on AMX-TILE. Yet it would be
>> more user friendly if only the top-level bit needed enabling manually,
>> with available sub-features then becoming available "automatically".
> 
> I think I've covered all of this in the reply above?
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Mon May 22 07:51:08 2023
Message-ID: <a21f2917-052a-ddb5-3de5-1ea58cb55252@suse.com>
Date: Mon, 22 May 2023 09:50:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
 <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
 <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7198

On 19.05.2023 16:46, Julien Grall wrote:
> On 19/05/2023 15:26, Luca Fancellu wrote:
>>> On 18 May 2023, at 10:35, Julien Grall <julien@xen.org> wrote:
>>>> +/*
>>>> + * Arm SVE feature code
>>>> + *
>>>> + * Copyright (C) 2022 ARM Ltd.
>>>> + */
>>>> +
>>>> +#include <xen/types.h>
>>>> +#include <asm/arm64/sve.h>
>>>> +#include <asm/arm64/sysregs.h>
>>>> +#include <asm/processor.h>
>>>> +#include <asm/system.h>
>>>> +
>>>> +extern unsigned int sve_get_hw_vl(void);
>>>> +
>>>> +register_t compute_max_zcr(void)
>>>> +{
>>>> +    register_t cptr_bits = get_default_cptr_flags();
>>>> +    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
>>>> +    unsigned int hw_vl;
>>>> +
>>>> +    /* Remove trap for SVE resources */
>>>> +    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
>>>> +    isb();
>>>> +
>>>> +    /*
>>>> +     * Set the maximum SVE vector length, doing that we will know the VL
>>>> +     * supported by the platform, calling sve_get_hw_vl()
>>>> +     */
>>>> +    WRITE_SYSREG(zcr, ZCR_EL2);
>>>
>>> From my reading of the Arm ARM (D19-6331, ARM DDI 0487J.a), a direct write to a system register would need to be followed by a context synchronization event (e.g. isb()) before the software can rely on the value.
>>>
>>> In this situation, AFAICT, the instruction in sve_get_hw_vl() will use the content of ZCR_EL2. So don't we need an isb() here?
>>
>>  From what I’ve read in the manual for ZCR_ELx:
>>
>> An indirect read of ZCR_EL2.LEN appears to occur in program order relative to a direct write of
>> the same register, without the need for explicit synchronization
>>
>> I’ve interpreted it as “there is no need to synchronize after the write”, and I’ve looked into Linux:
>> it does not appear to use any synchronisation mechanism after a write to that register. But if I am
>> wrong I can certainly add an isb() if you prefer.
> 
> Ah, I was reading the generic section about synchronization and didn't 
> realize there was a paragraph in the ZCR_EL2 section as well.
> 
> Reading the new section, I agree with your understanding. The isb() is 
> not necessary.

And does RDVL count as an "indirect read"? I'm pretty sure "normal" SVE insn
use falls into that category, but RDVL might also be viewed as more
similar to MRS in this regard. While the pseudocode construct CurrentVL is
used in either case, I'm still not sure this goes without saying.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 07:56:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 07:56:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537785.837296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q10P2-0007Lt-GV; Mon, 22 May 2023 07:56:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537785.837296; Mon, 22 May 2023 07:56:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q10P2-0007Lm-Dt; Mon, 22 May 2023 07:56:24 +0000
Received: by outflank-mailman (input) for mailman id 537785;
 Mon, 22 May 2023 07:52:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J+bL=BL=tesarici.cz=petr@srs-se1.protection.inumbo.net>)
 id 1q10LQ-0007FT-Rm
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 07:52:41 +0000
Received: from bee.tesarici.cz (bee.tesarici.cz [77.93.223.253])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9f50eddf-f875-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 09:52:39 +0200 (CEST)
Received: from meshulam.tesarici.cz
 (dynamic-2a00-1028-83b8-1e7a-4427-cc85-6706-c595.ipv6.o2.cz
 [IPv6:2a00:1028:83b8:1e7a:4427:cc85:6706:c595])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (2048 bits) server-digest
 SHA256) (No client certificate requested)
 by bee.tesarici.cz (Postfix) with ESMTPSA id EF59213B56F;
 Mon, 22 May 2023 09:52:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f50eddf-f875-11ed-b22d-6b7b168915f2
Authentication-Results: mail.tesarici.cz; dmarc=fail (p=none dis=none) header.from=tesarici.cz
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=tesarici.cz; s=mail;
	t=1684741957; bh=jsaH7ZUR8wFCXeTPqJ2mL6nQoFiuy/hnhhZO4oD2tJI=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=fXLslpJh2pwyFKw46yDmLuFNtmDG417D/1pluDHbFUpLWOwxyyXphbZ+9lLt3heq7
	 TOb5LFM2ep8I/lYW4Rxr5pHLuRjmIlNZoYtrmqI1HI1Aj8EtmH0K1aKdNONSyGD1Py
	 XdEor7u4dVunv3kxrnDKD6H6CPX2AwcjRZNJe7gb2lMO9/J9SGd4Q4jlhu1PXBOR9X
	 umCg5oPeTTUZBH6IYcTRYkSLv2mz+GU1YhZ1d2/UJvK2yGwyYjAY4LOa34SP/h5Vwo
	 3uqicj2hzNb+4IDukPrPnygCZF3c4jC6VDTzVjC0Yz6vqF4Y4Ik5ldeKmw8O39I5xE
	 IgfjiqYv6v6TQ==
Date: Mon, 22 May 2023 09:52:36 +0200
From: Petr =?UTF-8?B?VGVzYcWZw61r?= <petr@tesarici.cz>
To: Christoph Hellwig <hch@lst.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Marek =?UTF-8?B?TWFyY3p5?=
 =?UTF-8?B?a293c2tpLUfDs3JlY2tp?= <marmarek@invisiblethingslab.com>, Juergen
 Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>, Ben Skeggs
 <bskeggs@redhat.com>, Karol Herbst <kherbst@redhat.com>, Lyude Paul
 <lyude@redhat.com>, xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
 linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when
 xen-pcifront is enabling
Message-ID: <20230522095236.27460d93@meshulam.tesarici.cz>
In-Reply-To: <20230520062103.GA1225@lst.de>
References: <20230518134253.909623-1-hch@lst.de>
	<20230518134253.909623-3-hch@lst.de>
	<ZGZr/xgbUmVqpOpN@mail-itl>
	<20230519040405.GA10818@lst.de>
	<ZGdLErBzi9MANL3i@mail-itl>
	<20230519124118.GA5869@lst.de>
	<8617570c-6dc4-74f5-7418-98f04f7e0ece@citrix.com>
	<20230519125857.GA6994@lst.de>
	<20230520062103.GA1225@lst.de>
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.38; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Hi Christoph,

On Sat, 20 May 2023 08:21:03 +0200
Christoph Hellwig <hch@lst.de> wrote:

> On Fri, May 19, 2023 at 02:58:57PM +0200, Christoph Hellwig wrote:
> > On Fri, May 19, 2023 at 01:49:46PM +0100, Andrew Cooper wrote:  
> > > > The alternative would be to finally merge swiotlb-xen into swiotlb, in
> > > > which case we might be able to do this later.  Let me see what I can
> > > > do there.  
> > > 
> > > If that is an option, it would be great to reduce the special-casing.
> > 
> > I think it's doable, and I've been wanting it for a while.  I just
> > need motivated testers, but it seems like I just found at least two :)  
> 
> So looking at swiotlb-xen, it does these odd things where it takes a value
> generated originally by xen_phys_to_dma, then only does a dma_to_phys
> to go back and calls pfn_valid on the result.  Does this make sense, or
> is it wrong and just works by accident?  I.e. is the patch below correct?
> 
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 67aa74d201627d..3396c5766f0dd8 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -90,9 +90,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>  
>  static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
>  {
> -	unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
> -	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
> -	phys_addr_t paddr = (phys_addr_t)xen_pfn << XEN_PAGE_SHIFT;
> +	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);


I'm no big Xen expert, but I think this is wrong. Let's go through it
line by line:

- bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));

  Take the DMA address (as seen by devices on the bus), convert it to a
  physical address (as seen by the CPU on the bus) and shift it right
  by XEN_PAGE_SHIFT. The result is a Xen machine PFN.

- xen_pfn = bfn_to_local_pfn(bfn);

  Take the machine PFN and convert it to a physical PFN.

- paddr = (phys_addr_t)xen_pfn << XEN_PAGE_SHIFT;

  Convert the physical PFN to a physical address.

The important thing here is that Xen PV does not have auto-translated
physical addresses, so physical address != machine address. Physical
addresses in Xen PV domains are "artificial", used by the kernel to
index the mem_map array, so a PFN can be easily converted to a struct
page pointer and back. However, these addresses are never used by
hardware, not even by the CPU. The addresses used by the CPU are called
machine addresses. There is no address translation between VCPUs and
CPUs, because a PV domain runs directly on the CPU. After all, that's
why it is called _para_virtualized.

HTH
Petr T


From xen-devel-bounces@lists.xenproject.org Mon May 22 07:58:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 07:58:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537788.837307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q10Qk-0007vn-T9; Mon, 22 May 2023 07:58:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537788.837307; Mon, 22 May 2023 07:58:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q10Qk-0007vg-Pq; Mon, 22 May 2023 07:58:10 +0000
Received: by outflank-mailman (input) for mailman id 537788;
 Mon, 22 May 2023 07:55:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=J+bL=BL=tesarici.cz=petr@srs-se1.protection.inumbo.net>)
 id 1q10O6-0007KH-EO
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 07:55:26 +0000
Received: from bee.tesarici.cz (bee.tesarici.cz [77.93.223.253])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 01bc6901-f876-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 09:55:24 +0200 (CEST)
Received: from meshulam.tesarici.cz
 (dynamic-2a00-1028-83b8-1e7a-4427-cc85-6706-c595.ipv6.o2.cz
 [IPv6:2a00:1028:83b8:1e7a:4427:cc85:6706:c595])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (2048 bits) server-digest
 SHA256) (No client certificate requested)
 by bee.tesarici.cz (Postfix) with ESMTPSA id 1897E13C2DA;
 Mon, 22 May 2023 09:55:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01bc6901-f876-11ed-8611-37d641c3527e
Authentication-Results: mail.tesarici.cz; dmarc=fail (p=none dis=none) header.from=tesarici.cz
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=tesarici.cz; s=mail;
	t=1684742122; bh=+ecvThwVNIF+3jNDHwnBtc4nIfmBUPqhUAXqbUCfwlk=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=w6Ocr72GrnkWKmosOG9bJ8iIBfigFQC9bv9xRTioGZ2CSHlV1/RuH3eE+gUvmnvfs
	 nupeVvjngA4t2geGD3q69X34CzBWx4qAmKkpMuMnSFHVYByJ5zbwogqgIPUcoeZdqa
	 RqNtDRI+JKpfH9XtQLhgST01DHruuB5GwQejM4Ac7KHF3ZawwFK/uucm8u7sEHcsSb
	 n5PuaYK1sl0TjJmngp2og1zrSsW9f7hvCWGHGBD5YJUn25qj/Sh4f0J4vGA0wTbHg3
	 1jgpWjjEIZaxcs9YzhpV6ZMzjLM+wn23DgInohFErWKjQU2sYMZYfNvHQ7zx6gSJuh
	 Mz/r0JnFTJi2w==
Date: Mon, 22 May 2023 09:54:08 +0200
From: Petr =?UTF-8?B?VGVzYcWZw61r?= <petr@tesarici.cz>
To: Marek =?UTF-8?B?TWFyY3p5a293c2tpLUfDs3JlY2tp?=
 <marmarek@invisiblethingslab.com>
Cc: Christoph Hellwig <hch@lst.de>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, x86@kernel.org, "H. Peter Anvin"
 <hpa@zytor.com>, Ben Skeggs <bskeggs@redhat.com>, Karol Herbst
 <kherbst@redhat.com>, Lyude Paul <lyude@redhat.com>,
 xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
 linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when
 xen-pcifront is enabling
Message-ID: <20230522095408.02874498@meshulam.tesarici.cz>
In-Reply-To: <ZGdLErBzi9MANL3i@mail-itl>
References: <20230518134253.909623-1-hch@lst.de>
	<20230518134253.909623-3-hch@lst.de>
	<ZGZr/xgbUmVqpOpN@mail-itl>
	<20230519040405.GA10818@lst.de>
	<ZGdLErBzi9MANL3i@mail-itl>
X-Mailer: Claws Mail 4.1.1 (GTK 3.24.38; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/4FW=_VIffR_gzuETX7eJ3Lw";
 protocol="application/pgp-signature"; micalg=pgp-sha512

--Sig_/4FW=_VIffR_gzuETX7eJ3Lw
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On Fri, 19 May 2023 12:10:26 +0200
Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> wrote:

> On Fri, May 19, 2023 at 06:04:05AM +0200, Christoph Hellwig wrote:
> > On Thu, May 18, 2023 at 08:18:39PM +0200, Marek Marczykowski-Górecki wrote:
> > > On Thu, May 18, 2023 at 03:42:51PM +0200, Christoph Hellwig wrote:
> > > > Remove the dangerous late initialization of xen-swiotlb in
> > > > pci_xen_swiotlb_init_late and instead just always initialize
> > > > xen-swiotlb in the boot code if CONFIG_XEN_PCIDEV_FRONTEND is enabled.
> > > >
> > > > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > >
> > > Doesn't it mean all the PV guests will basically waste 64MB of RAM
> > > by default each if they don't really have PCI devices?
> >
> > If CONFIG_XEN_PCIDEV_FRONTEND is enabled, and the kernel isn't booted
> > with swiotlb=noforce, yes.
>
> That's "a bit" unfortunate, since that might be a significant part of the
> VM memory, or if you have a lot of VMs, a significant part of the host
> memory - it quickly adds up.

I wonder if dynamic swiotlb allocation might also help with this...

Petr T

--Sig_/4FW=_VIffR_gzuETX7eJ3Lw
Content-Type: application/pgp-signature
Content-Description: Digitální podpis OpenPGP

-----BEGIN PGP SIGNATURE-----

iHUEARYKAB0WIQQR36mnYrQDNXFnn8/Pem5ZkryZSgUCZGsfoQAKCRDPem5ZkryZ
SqVBAP0bQzVwflJ+O+7tLcfRm7IETeA07OEHXqZhmGqGL5eX8QD9HjbTN1pnVQK5
45KGLgOriHcE1PpDjMqZrgg7MRy/3gY=
=o6/4
-----END PGP SIGNATURE-----

--Sig_/4FW=_VIffR_gzuETX7eJ3Lw--


From xen-devel-bounces@lists.xenproject.org Mon May 22 08:32:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 08:32:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537803.837323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q10yB-0004NX-W6; Mon, 22 May 2023 08:32:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537803.837323; Mon, 22 May 2023 08:32:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q10yB-0004NQ-TR; Mon, 22 May 2023 08:32:43 +0000
Received: by outflank-mailman (input) for mailman id 537803;
 Mon, 22 May 2023 08:32:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DxKA=BL=bounce.vates.fr=bounce-md_30504962.646b28a6.v1-48590a49bbd54d81a738e5d7d69af5dd@srs-se1.protection.inumbo.net>)
 id 1q10yB-0004NJ-1R
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 08:32:43 +0000
Received: from mail180-5.suw31.mandrillapp.com
 (mail180-5.suw31.mandrillapp.com [198.2.180.5])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 358f5920-f87b-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 10:32:38 +0200 (CEST)
Received: from pmta11.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail180-5.suw31.mandrillapp.com (Mailchimp) with ESMTP id 4QPrKy2NhzzG0CBWK
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 08:32:38 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 48590a49bbd54d81a738e5d7d69af5dd; Mon, 22 May 2023 08:32:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 358f5920-f87b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1684744358; x=1685004858; i=yann.dirson@vates.fr;
	bh=5ZA08ivYkpiAmPEWMcPTL2Cj/1pwkC/WlHb0/EmOyLI=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=PSE81vvQ/lABa/82Jx1ImzCi3GgEpblToN9kGvj3nbM+f2fIiWBil6vl28NUOrXk/
	 xFVxXzB1NfE7tM8QrZtK0ysAHcvdhahAZWQL4VaeBGGdEKbOYDdT/JiBCxCRUhEd44
	 G+DKUidt98xfr1oYaGL46R7GZWMhaad2s5GLMMrg=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1684744358; h=From : 
 Subject : Message-Id : To : Cc : References : In-Reply-To : Date : 
 MIME-Version : Content-Type : Content-Transfer-Encoding : From : 
 Subject : Date : X-Mandrill-User : List-Unsubscribe; 
 bh=5ZA08ivYkpiAmPEWMcPTL2Cj/1pwkC/WlHb0/EmOyLI=; 
 b=Nu0PL0HEFubr6IB6CEeU9xAnoiCyM5b5d32KXMwjEkzUahpmFCtrlM4Z/tMXl7cdbQSshZ
 fGwg6cE85cNget1LXBKR9LkRPhF9UX0/cx1Q7J0OCdmRZgzCFTU6Vr8bdzB3niws6OjM388P
 r4dwoCh2cSpJoZtyD+8Ff5SUrSav4=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?Re:=20[PATCH=202/3]=20docs:=20document=20~/control/feature-balloon?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9f954236-9dc1-4070-9d34-807dea7ccea1
X-Bm-Transport-Timestamp: 1684744355983
Message-Id: <46399590-322d-c66d-9917-2a55e97de2dc@vates.fr>
To: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xihuan.yang@citrix.com, min.li1@citrix.com
References: <20230510142011.1120417-1-yann.dirson@vates.fr> <20230510142011.1120417-3-yann.dirson@vates.fr> <bb215c55-5064-7f48-820c-bf41d01529bd@suse.com>
In-Reply-To: <bb215c55-5064-7f48-820c-bf41d01529bd@suse.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.48590a49bbd54d81a738e5d7d69af5dd?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230522:md
Date: Mon, 22 May 2023 08:32:38 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit



On 5/11/23 11:21, Jan Beulich wrote:
> On 10.05.2023 16:20, Yann Dirson wrote:
>> --- a/docs/misc/xenstore-paths.pandoc
>> +++ b/docs/misc/xenstore-paths.pandoc
>> @@ -509,6 +509,12 @@ This may be initialized to "" by the toolstack and may then be set
>>   to 0 or 1 by a guest to indicate whether it is capable of responding
>>   to a mode value written to ~/control/laptop-slate-mode.
>>   
>> +#### ~/control/feature-balloon
>> +
>> +This may be initialized to "" by the toolstack and may then be set to
>> +0 or 1 by a guest to indicate whether it is capable of memory
>> +ballooning, and responds to values written to ~/memory/target.
> 
> Besides correctly saying "may", I guess this wants to go further and also
> clarify what the (intended) behavior is when the node is absent. Aiui PV
> guests are always expected to have a balloon driver, so the assumed
> value likely needs to be "1" there. Furthermore I'm afraid it doesn't
> really become clear what value this node is if it's only optionally
> present, while its absence doesn't really allow uniform assumptions
> towards a default value.


Things are indeed more complicated than I originally identified: the way
this xenstore entry is currently used seems to make it difficult to
introduce in a backward-compatible manner.

I guess this and a number of details ought to be discussed at the XAPI
level first.

Details: the squeezed assumption [1] is that a domain which has not set 
this to 1 is not yet ready to be ballooned, which implies the default 
has to be 0 whatever the guest type, as squeezed requires the total 
number of pages used by the domain to be stable.  So I guess we can see 
it as not just a feature flag.


[1] 
https://github.com/xapi-project/xen-api/tree/master/ocaml/squeezed/doc/design#environmental-assumptions


-- 
Yann Dirson | Vates Platform Developer
XCP-ng & Xen Orchestra - Vates solutions
w: vates.tech | xcp-ng.org | xen-orchestra.com




From xen-devel-bounces@lists.xenproject.org Mon May 22 08:37:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 08:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537807.837333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q112W-0004z4-JR; Mon, 22 May 2023 08:37:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537807.837333; Mon, 22 May 2023 08:37:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q112W-0004yx-FC; Mon, 22 May 2023 08:37:12 +0000
Received: by outflank-mailman (input) for mailman id 537807;
 Mon, 22 May 2023 08:37:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fRnz=BL=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q112V-0004yr-K9
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 08:37:11 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d7c5b393-f87b-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 10:37:10 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 16A8621972;
 Mon, 22 May 2023 08:37:10 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 91E1913776;
 Mon, 22 May 2023 08:37:09 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id HVzUIbUpa2QeWwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 22 May 2023 08:37:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7c5b393-f87b-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1684744630; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8ga5Zo8jqgo2Pq5tah2wZHCDCAl0WaiL8Zh0l8OnPpQ=;
	b=XtmMC6KGID7aLqQib45KPdLWlpll/Q7yy8XjVtDHEyg/w+IGN6zQsEw6yZj5sMUqzrozb7
	h8m+RBA1Hkf+x20fuf254GrTKCkUBksQdHvINgYp6+moEjtdKNsLqbmjnla407bw2Zaf8I
	yT72i5TTiaJxVLCVoumxC5JJe2HZjbs=
Message-ID: <c5defff8-882e-3482-0de1-e50a4bcdfa99@suse.com>
Date: Mon, 22 May 2023 10:37:09 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront
 is enabling
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Christoph Hellwig <hch@lst.de>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 Ben Skeggs <bskeggs@redhat.com>, Karol Herbst <kherbst@redhat.com>,
 Lyude Paul <lyude@redhat.com>, xen-devel@lists.xenproject.org,
 iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
 nouveau@lists.freedesktop.org
References: <20230518134253.909623-1-hch@lst.de>
 <20230518134253.909623-3-hch@lst.de> <ZGZr/xgbUmVqpOpN@mail-itl>
 <20230519040405.GA10818@lst.de> <ZGdLErBzi9MANL3i@mail-itl>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <ZGdLErBzi9MANL3i@mail-itl>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------yxQ05rE4I03r9lxlh0bCXS2U"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------yxQ05rE4I03r9lxlh0bCXS2U
Content-Type: multipart/mixed; boundary="------------lebpN3m0ix011maVFGvYhzJB";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Christoph Hellwig <hch@lst.de>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 Ben Skeggs <bskeggs@redhat.com>, Karol Herbst <kherbst@redhat.com>,
 Lyude Paul <lyude@redhat.com>, xen-devel@lists.xenproject.org,
 iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
 nouveau@lists.freedesktop.org
Message-ID: <c5defff8-882e-3482-0de1-e50a4bcdfa99@suse.com>
Subject: Re: [PATCH 2/4] x86: always initialize xen-swiotlb when xen-pcifront
 is enabling
References: <20230518134253.909623-1-hch@lst.de>
 <20230518134253.909623-3-hch@lst.de> <ZGZr/xgbUmVqpOpN@mail-itl>
 <20230519040405.GA10818@lst.de> <ZGdLErBzi9MANL3i@mail-itl>
In-Reply-To: <ZGdLErBzi9MANL3i@mail-itl>

--------------lebpN3m0ix011maVFGvYhzJB
Content-Type: multipart/mixed; boundary="------------mgbNAlWVI77vXQcN1yCpbTLa"

--------------mgbNAlWVI77vXQcN1yCpbTLa
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 19.05.23 12:10, Marek Marczykowski-Górecki wrote:
> On Fri, May 19, 2023 at 06:04:05AM +0200, Christoph Hellwig wrote:
>> On Thu, May 18, 2023 at 08:18:39PM +0200, Marek Marczykowski-Górecki wrote:
>>> On Thu, May 18, 2023 at 03:42:51PM +0200, Christoph Hellwig wrote:
>>>> Remove the dangerous late initialization of xen-swiotlb in
>>>> pci_xen_swiotlb_init_late and instead just always initialize
>>>> xen-swiotlb in the boot code if CONFIG_XEN_PCIDEV_FRONTEND is enabled.
>>>>
>>>> Signed-off-by: Christoph Hellwig <hch@lst.de>
>>>
>>> Doesn't it mean all the PV guests will basically waste 64MB of RAM
>>> by default each if they don't really have PCI devices?
>>
>> If CONFIG_XEN_PCIDEV_FRONTEND is enabled, and the kernel's isn't booted
>> with swiotlb=noforce, yes.
> 
> That's "a bit" unfortunate, since that might be significant part of the
> VM memory, or if you have a lot of VMs, a significant part of the host
> memory - it quickly adds up.
> While I would say PCI passthrough is not very common for PV guests, can
> the decision about xen-swiotlb be delayed until you can enumerate
> xenstore to check if there are any PCI devices connected (and not
> allocate xen-swiotlb by default if there are none)? This would
> still not cover the hotplug case (in which case, you'd need to force it
> with a cmdline), but at least you wouldn't loose much memory just
> because one of your VMs may use PCI passthrough (so, you have it enabled
> in your kernel).
> Please remember that guest kernel is not always under full control of
> the host admin, so making guests loose 64MB of RAM always, in default
> setup isn't good for customers of such VMs...
> 

In normal cases PCI passthrough in PV guests requires to start the guest
with e820_host=1. So it should be rather easy to limit allocating the
64MB in PV guests to the cases where the memory map has non-RAM regions
especially in the first 1MB of the memory.

This will cover even hotplug cases. The only case not covered would be a
guest started with e820_host=1 even if no PCI passthrough was planned.
But this should be rather rare (at least I hope so).


Juergen
--------------mgbNAlWVI77vXQcN1yCpbTLa--

--------------lebpN3m0ix011maVFGvYhzJB--

--------------yxQ05rE4I03r9lxlh0bCXS2U--


From xen-devel-bounces@lists.xenproject.org Mon May 22 08:43:57 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Index: AQHZdnKBJmFHv6Wq8kSjvXctD+zlI69f6zCAgAHjioCAAAXEgIAEQtyAgAAOngA=
Date: Mon, 22 May 2023 08:43:21 +0000
Message-ID: <8A5D1D62-0FCF-4A2F-8B09-D216002D168C@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
 <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
 <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
 <a21f2917-052a-ddb5-3de5-1ea58cb55252@suse.com>
In-Reply-To: <a21f2917-052a-ddb5-3de5-1ea58cb55252@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
x-mailer: Apple Mail (2.3731.500.231)
Content-Type: text/plain; charset="utf-8"
Content-ID: <861D923A732B9D419094B9EBBA033C7C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0


> On 22 May 2023, at 08:50, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 19.05.2023 16:46, Julien Grall wrote:
>> On 19/05/2023 15:26, Luca Fancellu wrote:
>>>> On 18 May 2023, at 10:35, Julien Grall <julien@xen.org> wrote:
>>>>> +/*
>>>>> + * Arm SVE feature code
>>>>> + *
>>>>> + * Copyright (C) 2022 ARM Ltd.
>>>>> + */
>>>>> +
>>>>> +#include <xen/types.h>
>>>>> +#include <asm/arm64/sve.h>
>>>>> +#include <asm/arm64/sysregs.h>
>>>>> +#include <asm/processor.h>
>>>>> +#include <asm/system.h>
>>>>> +
>>>>> +extern unsigned int sve_get_hw_vl(void);
>>>>> +
>>>>> +register_t compute_max_zcr(void)
>>>>> +{
>>>>> +    register_t cptr_bits = get_default_cptr_flags();
>>>>> +    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
>>>>> +    unsigned int hw_vl;
>>>>> +
>>>>> +    /* Remove trap for SVE resources */
>>>>> +    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
>>>>> +    isb();
>>>>> +
>>>>> +    /*
>>>>> +     * Set the maximum SVE vector length, doing that we will know the VL
>>>>> +     * supported by the platform, calling sve_get_hw_vl()
>>>>> +     */
>>>>> +    WRITE_SYSREG(zcr, ZCR_EL2);
>>>> 
>>>> From my reading of the Arm (D19-6331, ARM DDI 0487J.a), a direct write to a system register would need to be followed by an context synchronization event (e.g. isb()) before the software can rely on the value.
>>>> 
>>>> In this situation, AFAICT, the instruciton in sve_get_hw_vl() will use the content of ZCR_EL2. So don't we need an ISB() here?
>>> 
>>> From what I've read in the manual for ZCR_ELx:
>>> 
>>> An indirect read of ZCR_EL2.LEN appears to occur in program order relative to a direct write of
>>> the same register, without the need for explicit synchronization
>>> 
>>> I've interpreted it as "there is no need to sync before write" and I've looked into Linux and it does not
>>> Appear any synchronisation mechanism after a write to that register, but if I am wrong I can for sure
>>> add an isb if you prefer.
>> 
>> Ah, I was reading the generic section about synchronization and didn't 
>> realize there was a paragraph in the ZCR_EL2 section as well.
>> 
>> Reading the new section, I agree with your understanding. The isb() is 
>> not necessary.
> 
> And RDVL counts as an "indirect read"? I'm pretty sure "normal" SVE insn
> use is falling in that category, but RDVL might also be viewed as more
> similar to MRS in this regard? While the construct CurrentVL is used in
> either case, I'm still not sure this goes without saying.

Hi Jan,

Looking into the Linux code, in function vec_probe_vqs(...) in arch/arm64/kernel/fpsimd.c,
ZCR_EL1 is written, without synchronisation, and afterwards RDVL is used.

I think ZCR_EL2 has the same behaviour.

Cheers,
Luca

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 08:45:38 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Message-ID: <6bfbfc9f-1f41-1ed2-fb0e-cc7efaad9947@suse.com>
Date: Mon, 22 May 2023 10:44:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 1/2] build: shorten macro references
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Connor Davis <connojdavis@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a261b10b-25e4-3374-d6c4-05b307619d81@suse.com>
 <aa76774a-0d73-989e-e054-1b30490160e1@suse.com>
In-Reply-To: <aa76774a-0d73-989e-e054-1b30490160e1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 08.05.2023 14:58, Jan Beulich wrote:
> Presumably through copy-and-paste we've accumulated a number of instances
> of $(@D)/$(@F), which really is nothing other than $@. The split form is
> only needed when we want to e.g. insert a leading . at the beginning of
> the file name portion of the full name.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> v2: Insert blanks after ">".

Any chance of a RISC-V side ack, please?

Thanks, Jan

> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -104,9 +104,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
>  	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
>  	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
>  	    $(@D)/.$(@F).1.o -o $@
> -	$(NM) -pa --format=sysv $(@D)/$(@F) \
> +	$(NM) -pa --format=sysv $@ \
>  		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> -		>$(@D)/$(@F).map
> +		> $@.map
>  	rm -f $(@D)/.$(@F).[0-9]*
>  
>  .PHONY: include
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -10,9 +10,9 @@ $(TARGET): $(TARGET)-syms
>  
>  $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
>  	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
> -	$(NM) -pa --format=sysv $(@D)/$(@F) \
> +	$(NM) -pa --format=sysv $@ \
>  		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> -		>$(@D)/$(@F).map
> +		> $@.map
>  
>  $(obj)/xen.lds: $(src)/xen.lds.S FORCE
>  	$(call if_changed_dep,cpp_lds_S)
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -150,9 +150,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
>  	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
>  	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
>  	    $(orphan-handling-y) $(@D)/.$(@F).1.o -o $@
> -	$(NM) -pa --format=sysv $(@D)/$(@F) \
> +	$(NM) -pa --format=sysv $@ \
>  		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> -		>$(@D)/$(@F).map
> +		> $@.map
>  	rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
>  ifeq ($(CONFIG_XEN_IBT),y)
>  	$(SHELL) $(srctree)/tools/check-endbr.sh $@
> @@ -224,8 +224,9 @@ endif
>  	$(MAKE) $(build)=$(@D) .$(@F).1r.o .$(@F).1s.o
>  	$(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
>  	      $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
> -	$(NM) -pa --format=sysv $(@D)/$(@F) \
> -		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
> +	$(NM) -pa --format=sysv $@ \
> +		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> +		> $@.map
>  ifeq ($(CONFIG_DEBUG_INFO),y)
>  	$(if $(filter --strip-debug,$(EFI_LDFLAGS)),:$(space))$(OBJCOPY) -O elf64-x86-64 $@ $@.elf
>  endif
> 
> 
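[As an aside for readers less used to make's automatic variables: $(@D) and $(@F) are simply the directory and file-name parts of $@, so recombining them reproduces $@ itself — which is the whole point of the patch. A quick sketch of the relationship, using a hypothetical target path rather than make itself:

```python
import os.path

# Hypothetical target path, i.e. what $@ would expand to for xen-syms:
target = "xen/arch/arm/xen-syms"

# $(@D) and $(@F) are just the directory and file parts of $@ ...
d = os.path.dirname(target)    # plays the role of $(@D)
f = os.path.basename(target)   # plays the role of $(@F)

# ... so recombining them gives back $@ itself:
assert f"{d}/{f}" == target

# The split form remains useful when the file name part needs
# decorating, e.g. the intermediate .xen-syms.1.o objects:
assert f"{d}/.{f}.1.o" == "xen/arch/arm/.xen-syms.1.o"
```
]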



From xen-devel-bounces@lists.xenproject.org Mon May 22 08:49:44 2023
Message-ID: <f1f4ff704480d34931161fac75e5341b9a5e2b2d.camel@suse.com>
Subject: Re: [PATCH v1] xen/sched/null: avoid crash after failed domU
 creation
From: Dario Faggioli <dfaggioli@suse.com>
To: Stewart Hildebrand <stewart.hildebrand@amd.com>, Juergen Gross
	 <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
Date: Mon, 22 May 2023 10:49:36 +0200
In-Reply-To: <93adef92-90fb-80de-c6b4-b41872b74682@amd.com>
References: <20230501203046.168856-1-stewart.hildebrand@amd.com>
	 <30246788-c90e-e338-de4b-e7bb2e440f4e@suse.com>
	 <93adef92-90fb-80de-c6b4-b41872b74682@amd.com>

On Thu, 2023-05-18 at 17:27 -0400, Stewart Hildebrand wrote:
> On 5/5/23 01:59, Juergen Gross wrote:
> > >
> > > Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
> >
> > Reviewed-by: Juergen Gross <jgross@suse.com>
>
> Thanks for the review. Does this still need a maintainer ack?
>
Acked-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


From xen-devel-bounces@lists.xenproject.org Mon May 22 09:23:41 2023
Message-ID: <4a8c30bd-ebe7-3800-37ae-9e3e6c37a7d0@suse.com>
Date: Mon, 22 May 2023 11:23:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] maintainers: add regex matching for xsm
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230519114824.12482-1-dpsmith@apertussolutions.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230519114824.12482-1-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.05.2023 13:48, Daniel P. Smith wrote:
> XSM is a subsystem where how and where its hooks are called is as
> important as the implementation of the hooks themselves. The people best
> suited to evaluate the how and where are the XSM maintainers and
> reviewers. This creates a challenge, as the hooks are used throughout
> the hypervisor, yet the XSM maintainers and reviewers are not, and
> should not be, listed as reviewers for each of those subsystems in the
> MAINTAINERS file. The MAINTAINERS file does, however, support regex
> matches via the 'K' identifier, which are applied to both the commit
> message and the commit delta. Adding the 'K' identifier declares that
> any patch relating to XSM requires input from the XSM maintainers and
> reviewers. For those using the get_maintainers script, the 'K'
> identifier will automatically add the XSM maintainers and reviewers.

With, aiui, a fair chance of false positives when e.g. XSM hook invocations
are only in patch context. Much like ...

> Anyone not using
> get_maintainers will be responsible for ensuring that, if their work
> touches an XSM hook, the XSM maintainers and reviewers are copied.

... manual intervention is needed in the case of not using the script, I
think people should also be at least asked to see about avoiding stray Cc-s
in that case. Unless of course I'm misreading get_maintainers.pl (my Perl
isn't really great) or the script would be adjusted to only look at added/
removed lines (albeit even that would leave a certain risk of false
positives).
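
[Jan's false-positive concern can be sketched with a small example. The hunk below is hypothetical, and this only approximates how get_maintainers.pl applies 'K:' patterns — namely, to the whole diff rather than only to added/removed lines:

```python
import re

# The two patterns proposed for the MAINTAINERS 'K:' entries.
patterns = [re.compile(r"xsm_.*"), re.compile(r"\b(xsm|XSM)\b")]

# A hypothetical patch hunk: only the middle line is an actual change;
# the surrounding lines are unchanged context.
hunk = [
    "     rc = xsm_domctl(XSM_OTHER, d, op->cmd);",  # context line
    "+    if ( rc )",                                # added line
    "     return rc;",                               # context line
]

def matches(line):
    return any(p.search(line) for p in patterns)

# Since 'K:' is applied to the whole delta, the context-only mention of
# an XSM hook is already enough to CC the XSM maintainers, even though
# the actual change does not touch XSM at all:
assert matches(hunk[0]) and not matches(hunk[1])
```
]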

> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -674,6 +674,8 @@ F:	tools/flask/
>  F:	xen/include/xsm/
>  F:	xen/xsm/
>  F:	docs/misc/xsm-flask.txt
> +K:  xsm_.*
> +K:  \b(xsm|XSM)\b

Nit: Please make padding match that of adjacent lines.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 09:30:27 2023
Message-ID: <5fb23d0c-dbab-58c3-71d4-f3d5254249fc@xen.org>
Date: Mon, 22 May 2023 10:30:08 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>, Jan Beulich <jbeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
 <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
 <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
 <a21f2917-052a-ddb5-3de5-1ea58cb55252@suse.com>
 <8A5D1D62-0FCF-4A2F-8B09-D216002D168C@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <8A5D1D62-0FCF-4A2F-8B09-D216002D168C@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 22/05/2023 09:43, Luca Fancellu wrote:
>> On 22 May 2023, at 08:50, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 19.05.2023 16:46, Julien Grall wrote:
>>> On 19/05/2023 15:26, Luca Fancellu wrote:
>>>>> On 18 May 2023, at 10:35, Julien Grall <julien@xen.org> wrote:
>>>>>> +/*
>>>>>> + * Arm SVE feature code
>>>>>> + *
>>>>>> + * Copyright (C) 2022 ARM Ltd.
>>>>>> + */
>>>>>> +
>>>>>> +#include <xen/types.h>
>>>>>> +#include <asm/arm64/sve.h>
>>>>>> +#include <asm/arm64/sysregs.h>
>>>>>> +#include <asm/processor.h>
>>>>>> +#include <asm/system.h>
>>>>>> +
>>>>>> +extern unsigned int sve_get_hw_vl(void);
>>>>>> +
>>>>>> +register_t compute_max_zcr(void)
>>>>>> +{
>>>>>> +    register_t cptr_bits = get_default_cptr_flags();
>>>>>> +    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
>>>>>> +    unsigned int hw_vl;
>>>>>> +
>>>>>> +    /* Remove trap for SVE resources */
>>>>>> +    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
>>>>>> +    isb();
>>>>>> +
>>>>>> +    /*
>>>>>> +     * Set the maximum SVE vector length, doing that we will know the VL
>>>>>> +     * supported by the platform, calling sve_get_hw_vl()
>>>>>> +     */
>>>>>> +    WRITE_SYSREG(zcr, ZCR_EL2);
>>>>>
>>>>> From my reading of the Arm ARM (D19-6331, ARM DDI 0487J.a), a direct write to a system register would need to be followed by a context synchronization event (e.g. isb()) before the software can rely on the value.
>>>>>
>>>>> In this situation, AFAICT, the instruction in sve_get_hw_vl() will use the content of ZCR_EL2. So don't we need an isb() here?
>>>>
>>>> From what I've read in the manual for ZCR_ELx:
>>>>
>>>> An indirect read of ZCR_EL2.LEN appears to occur in program order relative to a direct write of
>>>> the same register, without the need for explicit synchronization
>>>>
>>>> I've interpreted it as "there is no need to sync after the write", and I've
>>>> looked into Linux: there does not appear to be any synchronisation mechanism
>>>> after a write to that register. But if I am wrong I can of course add an
>>>> isb() if you prefer.
>>>
>>> Ah, I was reading the generic section about synchronization and didn't
>>> realize there was a paragraph in the ZCR_EL2 section as well.
>>>
>>> Reading the new section, I agree with your understanding. The isb() is
>>> not necessary.
>>
>> And RDVL counts as an "indirect read"? I'm pretty sure "normal" SVE insn
>> use is falling in that category, but RDVL might also be viewed as more
>> similar to MRS in this regard? While the construct CurrentVL is used in
>> either case, I'm still not sure this goes without saying.
> 
> Hi Jan,
> 
> Looking into the Linux code, in function vec_probe_vqs(...) in arch/arm64/kernel/fpsimd.c,
> ZCR_EL1 is written, without synchronisation, and afterwards RDVL is used.

You are making the assumption that the Linux code is correct. That is 
most likely the case, but in general it is best to justify barriers 
based on the Arm Arm, because it is authoritative.

In this case, the Arm Arm is pretty clear on the difference between 
an indirect read and a direct read (see D19-63333 ARM DDI 0487J.A). The 
latter refers only to use of the MRS instruction. RDVL is its own 
instruction, and therefore this is an indirect read.

Cheers,

-- 
Julien Grall


 bh=bAnErE0iPmulMM7K+O4tVeGFFSPj/8XYBKtQ54e1JQU=;
 b=Er+3HxUf2+oy7OK0tcISmp9UAKXe8GIr13mJMV3ni+ol7nknO7exEkpn/m3db9ViUZy1yAGmdMzlHTx63s68iAGJ9/fD9LAzctzaH9esi/fF8Bq35sCRIxx/6dTWfm31OhpVnfLsb+pkXcJJrzjst+6EBB60uxcZpeZb5957t7lEVeKZ1AfGVbT7fvkSpizYJKajzl7Bfs144yf7KMCg+jz78ZKwhT65g8uk9aw0XC2S0Yu2c/Y1c0HgHb7UFaVhA6gyKzkgDeuZuddiRDRy6/PDpkVDiM39cDp2Woa5syhPVZyRJ+CSA2MWp2PkDp4ZI51wuT7TGKUtV9gO8bcnOA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bAnErE0iPmulMM7K+O4tVeGFFSPj/8XYBKtQ54e1JQU=;
 b=TXhrWxq6FumM0/Mb6lFTkYqIaWFFJLLW2dMI+NCoT10fUAaKW2GsOamj3BledkvfNZlnNr//M6tSBJz9JR4oNZcMMdJWG8MLfj3RGGGgyTJpRWQOy56mjwrGCcnBzGZk7VIohtysKq96Y1VBpDR9LD04UmOjEbZJ1FXTFJ99pQs=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Jan Beulich <jbeulich@suse.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v6 01/12] xen/arm: enable SVE extension for Xen
Thread-Index:
 AQHZdnKBJmFHv6Wq8kSjvXctD+zlI69f6zCAgAHjioCAAAXEgIAEQtyAgAAOngCAAA0iAIAAAXeA
Date: Mon, 22 May 2023 09:35:34 +0000
Message-ID: <4A4AB910-A369-4906-A4FE-7246CFB60456@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-2-luca.fancellu@arm.com>
 <1fb3c4a2-8bc7-45e4-7ccf-803157f1b3b1@xen.org>
 <86D7B5C8-2671-4359-A48D-E7D52B06565C@arm.com>
 <2f14dad9-25f5-7ac7-4ff5-d756e6f55718@xen.org>
 <a21f2917-052a-ddb5-3de5-1ea58cb55252@suse.com>
 <8A5D1D62-0FCF-4A2F-8B09-D216002D168C@arm.com>
 <5fb23d0c-dbab-58c3-71d4-f3d5254249fc@xen.org>
In-Reply-To: <5fb23d0c-dbab-58c3-71d4-f3d5254249fc@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM9PR08MB6740:EE_|DBAEUR03FT027:EE_|PAWPR08MB10260:EE_
X-MS-Office365-Filtering-Correlation-Id: a76eb7f4-5b20-48c1-824b-08db5aa7ea96
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Iqt9ZldI8Nc6TPNczmW1Yry5M7Ei1tk3QI8Mgzn7xDhOoi7MYhVpLm44KGPGOJX8tCqqzhGLodeZmWZNBE7O8V9mIBym8ifwEzXPlpiP2CpX8j0ZyKqKnXqgGJjJp9MYN0MiW441SAsKSGPKkX3nuQlrUA++g6q5DmiEvlxb7zTsElYoc7K47UlkJrX1UGobLL+qrtHGqmjci4zq4w8L9TKHhhpcXcQOPZBnmx/j+z0yQ6PJoAVqdEfi86Wch0gmJuIRB1u+evqYJjKYQIVcnNdPlaEomzCSExzUCxz5B7xmkIsGJccnUyatVwmm5gzPvrRfs69GzLEK7hmzkV4P/80oWTb2Ks8Nrz1VaupVQjM6FMD71KpKe4+1BgaD2ka19Xhae7uSL0T5kXJnQIz5dFHY3jT9x85xroeXUfpVy/5zGOqAbAI7faoPWuEpsTuwi5yj+j3O4yNcJ8Q2hcidmPN/J+iu4XADUCxnYgdaZHTVLO/sf6VFkeIlUIyQ2Hbobx53NjBYspk7E1AgsTKZ2Z+KT/ublU4N3KkQExif+AX4zB4vUxkUdIqiiV2N8E0hEChuDe8RvxIpunhHv0d5ZvjbW2ifiA5JYOZNeEPQHp60Gh5meCybhu2uwPqHnM1fViCY77/aYmI9o9bl1FKifw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(136003)(39860400002)(366004)(396003)(376002)(451199021)(71200400001)(2906002)(6486002)(38070700005)(91956017)(6916009)(66556008)(66946007)(76116006)(66476007)(54906003)(64756008)(66446008)(478600001)(2616005)(53546011)(122000001)(38100700002)(8676002)(8936002)(36756003)(316002)(4326008)(41300700001)(5660300002)(33656002)(86362001)(6506007)(83380400001)(26005)(186003)(6512007)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <B412DFC82C802F4DA7CBAF364E7A5745@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6740
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	53c145a6-2d20-4bcf-2471-08db5aa7e4ee
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	SbyW3J0BaADCLBwmwhZkYQCdYxyNdher/hqPG4TngltRzPBOGCZLDdbWj+wrP2b/Ju5QbRBQRSY7bzLuQCvHIa9ZPAwcwmdqsUywJLqFR7wegR4GL+Jqp+rtvNNtm/vU+blxID20xXqSnB8bdERixReuOXKz6uVPQa6IJaeIkjKeSZ246LbQk2Tpc4vp9iESJiy3w3LDaqm1oKVeWAwuCxLlpB5rKrp4Y2csa6lOZUc3e7WLrrRvZw0KD8PbTgBIQxzDcl/r5X0Sl1gB3P6iMbxNet3FPeoQq8bSsFZr4c4SHNst/D8z0LIDeOSP/gsHpZuB0Wto836toyPBWQX9VGe7q8qgJ1Z463Vtdg3eY/jECreLgfJ1ddPBaO2JgugxSVF0DA1qlsXFjvLvpZyGs7NCrVOtNr/+hft6vEQVp0SbFqpTOsfFZ3SzOacg7Pe76sbIjEk/qgTj0tySsUE/qO2I62GOFVMVcRtAmB/78GfBoJosych7oqqszBVlrUFqELNwzrq+G9Cf8l/8EJG5JWUQZ2mOR/Db3/wGbx7DN5fIPnWojaw3Llp9EgUOM7nhg5Ck+k76WCjyYiKLmbijaiKMO6BC43g9/0hS07Imu1w/JjMdDX8dexww1FDGJNjhu1b1iG4/FJvBdsqoSNrENGfYlbNueT13yQdgshfj5m0e04sBtr6m7QSiPsAprfJPPqFirLbsyDWRTa448X+bnJaDRQPNMQug9zXESwx7xyAlrY94PftwCU+1iL4maOUF
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(396003)(136003)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(2906002)(5660300002)(83380400001)(8676002)(8936002)(6862004)(82310400005)(36756003)(70586007)(70206006)(4326008)(54906003)(316002)(6486002)(478600001)(41300700001)(40480700001)(33656002)(86362001)(336012)(26005)(2616005)(107886003)(356005)(81166007)(82740400003)(36860700001)(47076005)(186003)(6512007)(6506007)(53546011)(40460700003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 09:35:44.4827
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a76eb7f4-5b20-48c1-824b-08db5aa7ea96
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB10260

DQoNCj4gT24gMjIgTWF5IDIwMjMsIGF0IDEwOjMwLCBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4u
b3JnPiB3cm90ZToNCj4gDQo+IEhpLA0KPiANCj4gT24gMjIvMDUvMjAyMyAwOTo0MywgTHVjYSBG
YW5jZWxsdSB3cm90ZToNCj4+PiBPbiAyMiBNYXkgMjAyMywgYXQgMDg6NTAsIEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4gd3JvdGU6DQo+Pj4gDQo+Pj4gT24gMTkuMDUuMjAyMyAxNjo0
NiwgSnVsaWVuIEdyYWxsIHdyb3RlOg0KPj4+PiBPbiAxOS8wNS8yMDIzIDE1OjI2LCBMdWNhIEZh
bmNlbGx1IHdyb3RlOg0KPj4+Pj4+IE9uIDE4IE1heSAyMDIzLCBhdCAxMDozNSwgSnVsaWVuIEdy
YWxsIDxqdWxpZW5AeGVuLm9yZz4gd3JvdGU6DQo+Pj4+Pj4+ICsvKg0KPj4+Pj4+PiArICogQXJt
IFNWRSBmZWF0dXJlIGNvZGUNCj4+Pj4+Pj4gKyAqDQo+Pj4+Pj4+ICsgKiBDb3B5cmlnaHQgKEMp
IDIwMjIgQVJNIEx0ZC4NCj4+Pj4+Pj4gKyAqLw0KPj4+Pj4+PiArDQo+Pj4+Pj4+ICsjaW5jbHVk
ZSA8eGVuL3R5cGVzLmg+DQo+Pj4+Pj4+ICsjaW5jbHVkZSA8YXNtL2FybTY0L3N2ZS5oPg0KPj4+
Pj4+PiArI2luY2x1ZGUgPGFzbS9hcm02NC9zeXNyZWdzLmg+DQo+Pj4+Pj4+ICsjaW5jbHVkZSA8
YXNtL3Byb2Nlc3Nvci5oPg0KPj4+Pj4+PiArI2luY2x1ZGUgPGFzbS9zeXN0ZW0uaD4NCj4+Pj4+
Pj4gKw0KPj4+Pj4+PiArZXh0ZXJuIHVuc2lnbmVkIGludCBzdmVfZ2V0X2h3X3ZsKHZvaWQpOw0K
Pj4+Pj4+PiArDQo+Pj4+Pj4+ICtyZWdpc3Rlcl90IGNvbXB1dGVfbWF4X3pjcih2b2lkKQ0KPj4+
Pj4+PiArew0KPj4+Pj4+PiArICAgIHJlZ2lzdGVyX3QgY3B0cl9iaXRzID0gZ2V0X2RlZmF1bHRf
Y3B0cl9mbGFncygpOw0KPj4+Pj4+PiArICAgIHJlZ2lzdGVyX3QgemNyID0gdmxfdG9femNyKFNW
RV9WTF9NQVhfQklUUyk7DQo+Pj4+Pj4+ICsgICAgdW5zaWduZWQgaW50IGh3X3ZsOw0KPj4+Pj4+
PiArDQo+Pj4+Pj4+ICsgICAgLyogUmVtb3ZlIHRyYXAgZm9yIFNWRSByZXNvdXJjZXMgKi8NCj4+
Pj4+Pj4gKyAgICBXUklURV9TWVNSRUcoY3B0cl9iaXRzICYgfkhDUFRSX0NQKDgpLCBDUFRSX0VM
Mik7DQo+Pj4+Pj4+ICsgICAgaXNiKCk7DQo+Pj4+Pj4+ICsNCj4+Pj4+Pj4gKyAgICAvKg0KPj4+
Pj4+PiArICAgICAqIFNldCB0aGUgbWF4aW11bSBTVkUgdmVjdG9yIGxlbmd0aCwgZG9pbmcgdGhh
dCB3ZSB3aWxsIGtub3cgdGhlIFZMDQo+Pj4+Pj4+ICsgICAgICogc3VwcG9ydGVkIGJ5IHRoZSBw
bGF0Zm9ybSwgY2FsbGluZyBzdmVfZ2V0X2h3X3ZsKCkNCj4+Pj4+Pj4gKyAgICAgKi8NCj4+Pj4+
Pj4gKyAgICBXUklURV9TWVNSRUcoemNyLCBaQ1JfRUwyKTsNCj4+Pj4+PiANCj4+Pj4+PiBGcm9t
IG15IHJlYWRpbmcgb2YgdGhlIEFybSAoRDE5LTYzMzEsIEFSTSBEREkgMDQ4N0ouYSksIGEgZGly
ZWN0IHdyaXRlIHRvIGEgc3lzdGVtIHJlZ2lzdGVyIHdvdWxkIG5lZWQgdG8gYmUgZm9sbG93ZWQg
YnkgYW4gY29udGV4dCBzeW5jaHJvbml6YXRpb24gZXZlbnQgKGUuZy4gaXNiKCkpIGJlZm9yZSB0
aGUgc29mdHdhcmUgY2FuIHJlbHkgb24gdGhlIHZhbHVlLg0KPj4+Pj4+IA0KPj4+Pj4+IEluIHRo
aXMgc2l0dWF0aW9uLCBBRkFJQ1QsIHRoZSBpbnN0cnVjaXRvbiBpbiBzdmVfZ2V0X2h3X3ZsKCkg
d2lsbCB1c2UgdGhlIGNvbnRlbnQgb2YgWkNSX0VMMi4gU28gZG9uJ3Qgd2UgbmVlZCBhbiBJU0Io
KSBoZXJlPw0KPj4+Pj4gDQo+Pj4+PiBGcm9tIHdoYXQgSeKAmXZlIHJlYWQgaW4gdGhlIG1hbnVh
bCBmb3IgWkNSX0VMeDoNCj4+Pj4+IA0KPj4+Pj4gQW4gaW5kaXJlY3QgcmVhZCBvZiBaQ1JfRUwy
LkxFTiBhcHBlYXJzIHRvIG9jY3VyIGluIHByb2dyYW0gb3JkZXIgcmVsYXRpdmUgdG8gYSBkaXJl
Y3Qgd3JpdGUgb2YNCj4+Pj4+IHRoZSBzYW1lIHJlZ2lzdGVyLCB3aXRob3V0IHRoZSBuZWVkIGZv
ciBleHBsaWNpdCBzeW5jaHJvbml6YXRpb24NCj4+Pj4+IA0KPj4+Pj4gSeKAmXZlIGludGVycHJl
dGVkIGl0IGFzIOKAnHRoZXJlIGlzIG5vIG5lZWQgdG8gc3luYyBiZWZvcmUgd3JpdGXigJ0gYW5k
IEnigJl2ZSBsb29rZWQgaW50byBMaW51eCBhbmQgaXQgZG9lcyBub3QNCj4+Pj4+IEFwcGVhciBh
bnkgc3luY2hyb25pc2F0aW9uIG1lY2hhbmlzbSBhZnRlciBhIHdyaXRlIHRvIHRoYXQgcmVnaXN0
ZXIsIGJ1dCBpZiBJIGFtIHdyb25nIEkgY2FuIGZvciBzdXJlDQo+Pj4+PiBhZGQgYW4gaXNiIGlm
IHlvdSBwcmVmZXIuDQo+Pj4+IA0KPj4+PiBBaCwgSSB3YXMgcmVhZGluZyB0aGUgZ2VuZXJpYyBz
ZWN0aW9uIGFib3V0IHN5bmNocm9uaXphdGlvbiBhbmQgZGlkbid0DQo+Pj4+IHJlYWxpemUgdGhl
cmUgd2FzIGEgcGFyYWdyYXBoIGluIHRoZSBaQ1JfRUwyIHNlY3Rpb24gYXMgd2VsbC4NCj4+Pj4g
DQo+Pj4+IFJlYWRpbmcgdGhlIG5ldyBzZWN0aW9uLCBJIGFncmVlIHdpdGggeW91ciB1bmRlcnN0
YW5kaW5nLiBUaGUgaXNiKCkgaXMNCj4+Pj4gbm90IG5lY2Vzc2FyeS4NCj4+PiANCj4+PiBBbmQg
UkRWTCBjb3VudHMgYXMgYW4gImluZGlyZWN0IHJlYWQiPyBJJ20gcHJldHR5IHN1cmUgIm5vcm1h
bCIgU1ZFIGluc24NCj4+PiB1c2UgaXMgZmFsbGluZyBpbiB0aGF0IGNhdGVnb3J5LCBidXQgUkRW
TCBtaWdodCBhbHNvIGJlIHZpZXdlZCBhcyBtb3JlDQo+Pj4gc2ltaWxhciB0byBNUlMgaW4gdGhp
cyByZWdhcmQ/IFdoaWxlIHRoZSBjb25zdHJ1Y3QgQ3VycmVudFZMIGlzIHVzZWQgaW4NCj4+PiBl
aXRoZXIgY2FzZSwgSSdtIHN0aWxsIG5vdCBzdXJlIHRoaXMgZ29lcyB3aXRob3V0IHNheWluZy4N
Cj4+IEhpIEphbiwNCj4+IExvb2tpbmcgaW50byB0aGUgTGludXggY29kZSwgaW4gZnVuY3Rpb24g
dmVjX3Byb2JlX3ZxcyguLi4pIGluIGFyY2gvYXJtNjQva2VybmVsL2Zwc2ltZC5jLA0KPj4gWkNS
X0VMMSBpcyB3cml0dGVuLCB3aXRob3V0IHN5bmNocm9uaXNhdGlvbiwgYW5kIGFmdGVyd2FyZHMg
UkRWTCBpcyB1c2VkLg0KPiANCj4gWW91IGFyZSBtYWtpbmcgdGhlIGFzc3VtcHRpb24gdGhhdCB0
aGUgTGludXggY29kZSBpcyBjb3JyZWN0LiBJdCBpcyBtb3N0bHkgbGlrZWx5IHRoZSBjYXNlLCBi
dXQgaW4gZ2VuZXJhbCBpdCBpcyBiZXN0IHRvIGp1c3RpZnkgYmFycmllcnMgYmFzZWQgb24gdGhl
IEFybSBBcm0gYmVjYXVzZSBpdCBpcyBhdXRob3JpdGF0aXZlLg0KPiANCj4gSW4gdGhpcyBjYXNl
LCB0aGUgQXJtIEFybSBpcyBwcmV0dHkgY2xlYXIgb24gdGhlIGRpZmZlcmVuY2UgYmV0d2VlbiBp
bmRpcmVjdCByZWFkIGFuZCBkaXJlY3QgcmVhZCAoc2VlIEQxOS02MzMzMyBBUk0gRERJIDA0ODdK
LkEpLiBUaGUgbGF0dGVyIG9ubHkgcmVmZXJzIHRvIHVzZSBvZiB0aGUgaW5zdHJ1Y3Rpb24gb2Yg
TVJTLiBSRFZMIGlzIGl0cyBvd24gaW5zdHJ1Y3Rpb24gYW5kIHRoZXJlZm9yZSB0aGlzIGlzIGFu
IGluZGlyZWN0IHJlYWQuDQoNClllcyB5b3UgYXJlIHJpZ2h0DQoNCj4gDQo+IENoZWVycywNCj4g
DQo+IC0tIA0KPiBKdWxpZW4gR3JhbGwNCg0KDQo=


From xen-devel-bounces@lists.xenproject.org Mon May 22 10:21:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 10:21:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537841.837403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q12f9-0002rX-Qf; Mon, 22 May 2023 10:21:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537841.837403; Mon, 22 May 2023 10:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q12f9-0002rQ-NV; Mon, 22 May 2023 10:21:11 +0000
Received: by outflank-mailman (input) for mailman id 537841;
 Mon, 22 May 2023 10:21:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cvIk=BL=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q12f8-0002rK-SZ
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 10:21:11 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20602.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::602])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5a8270f7-f88a-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 12:21:03 +0200 (CEST)
Received: from AS8PR04CA0119.eurprd04.prod.outlook.com (2603:10a6:20b:31e::34)
 by PAWPR08MB9544.eurprd08.prod.outlook.com (2603:10a6:102:2f2::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 10:21:01 +0000
Received: from AM7EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:31e:cafe::a6) by AS8PR04CA0119.outlook.office365.com
 (2603:10a6:20b:31e::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28 via Frontend
 Transport; Mon, 22 May 2023 10:21:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT007.mail.protection.outlook.com (100.127.140.242) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.13 via Frontend Transport; Mon, 22 May 2023 10:21:00 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Mon, 22 May 2023 10:20:59 +0000
Received: from de28df80fec4.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 08E058C5-F028-498B-ABC8-D78B25A1E059.1; 
 Mon, 22 May 2023 10:20:53 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id de28df80fec4.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 22 May 2023 10:20:53 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM7PR08MB5461.eurprd08.prod.outlook.com (2603:10a6:20b:10e::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 10:20:51 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.028; Mon, 22 May 2023
 10:20:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a8270f7-f88a-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=veH7LOmkRutYLp6hm7b/yDxduAHVlwwYtauf6OoXvuo=;
 b=8v1jcDX0bZafPoqPUg+10f6SZaVo7xlhRhuuxf6Un91SfR1vLPM6nJdvZbHDVOojwD7V/rnwN23ZMQIwDhhPpd1lAjNYrcQK2M3Yb4vZJ+1B0kD+8ejTpqFEXYDvWPHkJLQvy+qqHa9q8rTkndQ8p4FaTfm2gcfBrYmPEty+raM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: f7ca24b43d32f231
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NxSHU5KBPRoxeEBRjCFibyEZBx8xkhVW4nxA5M+c70hLP/6otgR/XJG27jJVVy5HYJmp5oKazYt64O/zxDUfZrNUya+891iO0DNbZe5G+u7czPz8sc8/yY2lHSYifjuAO1Yc0e75RSfFcw2dmvbPj3w3QVJnL66ugxid70uEe/FNr2rrILYnUHYDQQfYRBhFSYMnrjgca4W76wa41IGsJKnMIZqDDPBWtcop0kUUhYfrmbAfPW3ziLi9tR2+jpuOHfzVWfaCuUfqPrk/6cUixwGy1wvPZjDEgOt0QpmQlb4oL0YXTDkToBc8+gKq5As6joZy9J41fH9eEW0nU45Ndg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=veH7LOmkRutYLp6hm7b/yDxduAHVlwwYtauf6OoXvuo=;
 b=gmxxKL5Dy4mW8tTqfWpQSwbHo88LRN9aaThbVHBgjaVCo5in3KWPFWUy1G2jUFZb9hREO+BYTbE39pECeD+h5kC/oLRrUOSDrc/iGrwtryadldzu5hEfPhDfuOgq+9zdmzwa7vM3Ek+rXhg5OfotWv0Tz0g+oJjO8LJYpfyCdnaLmFsdIb9MJzSoRnsHpRVqagTSomr5/w5wuSDjoZv8NxPfoo3U7qXVPOOd9jzuc8MW2LJlbXvGkE4Bh6rGSnNnYH0ztZ0CIstukRW0Qkwqvi3xRxnvBN+ocdLTWtWp40/apUG4V+NBkZQ2nDkFFibTfoSawIIHBrhEkR6WBqirMQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=veH7LOmkRutYLp6hm7b/yDxduAHVlwwYtauf6OoXvuo=;
 b=8v1jcDX0bZafPoqPUg+10f6SZaVo7xlhRhuuxf6Un91SfR1vLPM6nJdvZbHDVOojwD7V/rnwN23ZMQIwDhhPpd1lAjNYrcQK2M3Yb4vZJ+1B0kD+8ejTpqFEXYDvWPHkJLQvy+qqHa9q8rTkndQ8p4FaTfm2gcfBrYmPEty+raM=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZdnJ92qfFeC693kK/O74mILlnGa9ggLQAgAXAhAA=
Date: Mon, 22 May 2023 10:20:51 +0000
Message-ID: <288F275C-D76F-4E89-B8C6-C8D9AD54D1D9@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-6-luca.fancellu@arm.com>
 <779e46a5-a3d0-187a-6d15-e1a12f71278f@xen.org>
In-Reply-To: <779e46a5-a3d0-187a-6d15-e1a12f71278f@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM7PR08MB5461:EE_|AM7EUR03FT007:EE_|PAWPR08MB9544:EE_
X-MS-Office365-Filtering-Correlation-Id: de384282-e346-44b5-0a0c-08db5aae3d55
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 FbpIZuaEpAMOD8KSSv6ofwBax7dxYAfoMKlLMlWPJ2iuULFR2IZdge/cfyw1Ws82EMh+Q99JD3Dvv03+v+FaJf8Xsvxz1ZptDDacivbI+kC/xOcwrluMYKNG+3AHoXIaWKUtFT8iiuv1murLBHScuvxQX8hvZ2FWFTTUo4xx3AADxdYyFjgvPTYuUhlmVO0Fk7R4Wd2mJ5bMhRYTy92FtQK+hAWmDXJNRAzhKv/s9//UWCAY+P0/dMK7Whg9+5L26OUabLmwlnot1bRg8t1GdduH7GbZDtZlAKeQu7MwuIe2CdeU+Mh2ZxDgXNKH3CZyJTOh2qsPd7da02ttNC1gGd0rVJU3uZWosbklNYJ0jtL/GEW7VyqqwtLZUeChXubjwO7QEmuQidBCyNRzqc8cRynzYoFepUcdI8GUIc5YRuCnJPQSRrd3H+F+wkZKalodcGaLik1Ql0l9PgCPcf13QN7G5/LilzJUgAhUd/8oX5prtPoXhsVfHII0Fn/Oc2WYT27MtxKPxWONknH7sBq2ZPIiJt3WXr9d0bLGXckTA+PfptuRhm0TXKKJesb4IyOpyhgDJyLZJ7QiAKvueeh5qViPkosjrEREdj0HwsgWM9qbmxbvtdN01Um0mWgCyPVWbpRhcPhZOrdvr6p6fVG9ZQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39850400004)(396003)(376002)(366004)(136003)(346002)(451199021)(38070700005)(122000001)(38100700002)(33656002)(86362001)(36756003)(53546011)(6512007)(6506007)(8676002)(8936002)(2616005)(4744005)(2906002)(186003)(54906003)(478600001)(64756008)(316002)(4326008)(6916009)(26005)(41300700001)(71200400001)(66446008)(5660300002)(6486002)(66476007)(76116006)(66556008)(91956017)(66946007)(83380400001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <C8A713054C6B28438C57AE8D0526B272@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5461
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8d914038-8d9d-4965-2e8a-08db5aae37fb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6BFrHOhBqIRpT7ePMuou1fCmXDFYczScVGtHb+FVS85EYrLHr70OpDzn/3uoIrFRDH4V3o+NPEN0YZ6nXMzVXk5nyvvSWYdpFl3GAQg6UgTBy/d7SzjlmgKc39SvMSDHcDLb6X6RGqpb3qD5aXAv/02NICQ/Pfsv8QB/DdL7YlfV6MxvSmUGMteBmCsRjM96YX1tyz6Mlj4TpU7hO3NlsUQ/0URfhVgpOF/XV1Jz+RXxk0xVZjc6qL9Wbc5XkS1uTzQVwlJni2OUlFer1WfDmeJCPTAGwKoMLDgdE6wTuHLSJszTYsf5WfkIOW3jwN0QMZaABnY/deDmlrELXgz3jhi5lN/UAT5nvsX2UZb1prmzlNbaVXpJi65VH5uD0rkAq6OuERTG9aY8DF8gdBmVf5wXjp1kCC3IUG79301s23PGHRo1YRze9c7GhWTV/7mewN5tEDEa1zMCLcCuRMEMbw2TDV/NyQF9dn1nqE67apKqKd+r6x6MsQxq5SLOI9OM9v1aBlVPsv2uLXBldzpAJiriVkIwRXBvtjV9k/v9OhBHC0as/0VZygJBNjlKaGb1i8VkiTRjduojKvrBa6s6CpoCrkh8Os00CUMHk5adh3Ha4ai5YEh/48WfSHbsR01tBRCDYxbvVoPupdykD2mwcD9EKIKHwb1g576ytK6I1yHVV4x/4KtwWOQT6qKRzYZVXRrVZK2svsKpcQ79b12QEAKvyKbnfmAdaYZaPFfn1Nc=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(376002)(136003)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(6862004)(8676002)(8936002)(5660300002)(36860700001)(82310400005)(47076005)(83380400001)(186003)(26005)(53546011)(6506007)(6512007)(86362001)(107886003)(2616005)(81166007)(356005)(82740400003)(336012)(40460700003)(41300700001)(6486002)(40480700001)(33656002)(70206006)(70586007)(316002)(4326008)(36756003)(478600001)(54906003)(4744005)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 10:21:00.2224
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: de384282-e346-44b5-0a0c-08db5aae3d55
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9544

DQoNCj4gT24gMTggTWF5IDIwMjMsIGF0IDE5OjMwLCBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4u
b3JnPiB3cm90ZToNCj4gDQo+IEhpIEx1Y2EsDQo+IA0KPiBPbmUgbW9yZSByZW1hcmsuDQo+IA0K
PiBPbiAyNC8wNC8yMDIzIDA3OjAyLCBMdWNhIEZhbmNlbGx1IHdyb3RlOg0KPj4gICNlbHNlIC8q
ICFDT05GSUdfQVJNNjRfU1ZFICovDQo+PiAgQEAgLTQ2LDYgKzUwLDE1IEBAIHN0YXRpYyBpbmxp
bmUgdW5zaWduZWQgaW50IGdldF9zeXNfdmxfbGVuKHZvaWQpDQo+PiAgICAgIHJldHVybiAwOw0K
Pj4gIH0NCj4+ICArc3RhdGljIGlubGluZSBpbnQgc3ZlX2NvbnRleHRfaW5pdChzdHJ1Y3QgdmNw
dSAqdikNCj4+ICt7DQo+PiArICAgIHJldHVybiAwOw0KPiANCj4gVGhlIGNhbGwgaXMgcHJvdGVj
dGVkIGJ5IGlzX2RvbWFpbl9zdmUoKS4gU28gSSB0aGluayB3ZSB3YW50IHRvIHJldHVybiBhbiBl
cnJvciBqdXN0IGluIGNhc2Ugc29tZW9uZSBpcyBjYWxsaW5nIGl0IG91dHNpZGUgb2YgaXRzIGlu
dGVuZGVkIHVzZS4NCg0KUmVnYXJkaW5nIHRoaXMgb25lLCBzaW5jZSBpdCBzaG91bGQgbm90IGJl
IGNhbGxlZCB3aGVuIFNWRSBpcyBub3QgZW5hYmxlZCwgYXJlIHlvdSBvayBpZiBJ4oCZbGwgZG8g
dGhpczoNCg0Kc3RhdGljIGlubGluZSBpbnQgc3ZlX2NvbnRleHRfaW5pdChzdHJ1Y3QgdmNwdSAq
dikNCnsNCkFTU0VSVF9VTlJFQUNIQUJMRSgpOw0KcmV0dXJuIDA7DQp9DQoNCg0KPiANCj4+ICt9
DQo+PiArDQo+PiArc3RhdGljIGlubGluZSB2b2lkIHN2ZV9jb250ZXh0X2ZyZWUoc3RydWN0IHZj
cHUgKnYpIHt9DQo+PiArc3RhdGljIGlubGluZSB2b2lkIHN2ZV9zYXZlX3N0YXRlKHN0cnVjdCB2
Y3B1ICp2KSB7fQ0KPj4gK3N0YXRpYyBpbmxpbmUgdm9pZCBzdmVfcmVzdG9yZV9zdGF0ZShzdHJ1
Y3QgdmNwdSAqdikge30NCj4+ICsNCj4gDQo+IC0tIA0KPiBKdWxpZW4gR3JhbGwNCg0K


From xen-devel-bounces@lists.xenproject.org Mon May 22 10:27:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 10:27:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537845.837412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q12kt-0003TH-ET; Mon, 22 May 2023 10:27:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537845.837412; Mon, 22 May 2023 10:27:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q12kt-0003TA-Bv; Mon, 22 May 2023 10:27:07 +0000
Received: by outflank-mailman (input) for mailman id 537845;
 Mon, 22 May 2023 10:27:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q12ks-0003T4-Iu
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 10:27:06 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q12kp-0002Ls-Kv; Mon, 22 May 2023 10:27:03 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=[192.168.21.204]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q12kp-00010r-F6; Mon, 22 May 2023 10:27:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Y3t5PCR+goOLoHSXRN+oulTtCiOKZLDA1W2Ufx8gwQY=; b=gFKViP7m4JxZ+4psxwa7INGeju
	xYMw5du1x4NaEieA+x2RKB+gmKQxpXBD16ajlUzEloPqzIcidIHpIOqKF7UxII80ufKzUkD0TpOja
	lY/E7gBlc0ffrFnPNt27CMWK2R5eXyRHPsE0qI6enIFHD/jrWxrizc/uJ2I1Q9E5PhsM=;
Message-ID: <4c72b060-52b7-a852-f966-5849a78ccc19@xen.org>
Date: Mon, 22 May 2023 11:27:01 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH] maintainers: add regex matching for xsm
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230519114824.12482-1-dpsmith@apertussolutions.com>
 <4a8c30bd-ebe7-3800-37ae-9e3e6c37a7d0@suse.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <4a8c30bd-ebe7-3800-37ae-9e3e6c37a7d0@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 22/05/2023 10:23, Jan Beulich wrote:
> On 19.05.2023 13:48, Daniel P. Smith wrote:
>> XSM is a subsystem where how and where its hooks are called is as
>> important as the implementation of the hooks themselves. The people best
>> suited to evaluate the how and where are the XSM maintainers and
>> reviewers. This creates a challenge, because the hooks are used throughout
>> the hypervisor, and the XSM maintainers and reviewers are not, and should
>> not be, reviewers for each of those subsystems in the MAINTAINERS file.
>> However, the MAINTAINERS file does support regex matches, the 'K'
>> identifier, which are applied to both the commit message and the commit
>> delta. Adding the 'K' identifier declares that any patch relating to XSM
>> requires input from the XSM maintainers and reviewers. For those who use
>> the get_maintainers script, the 'K' identifier will automatically add the
>> XSM maintainers and reviewers.
> 
> With, aiui, a fair chance of false positives when e.g. XSM hook invocations
> are only in patch context. Much like ...
> 
>> Anyone not using
>> get_maintainers is responsible for ensuring that, if their work
>> touches an XSM hook, the XSM maintainers and reviewers are copied.
> 
> ... manual intervention is needed when not using the script, I think
> people should also at least be asked to try to avoid stray Cc-s in
> that case.

I don't particularly like this suggestion because the sender may 
mistakenly believe this is a "stray CC".

Personally, I would prefer to be CC'd more often rather than less. I 
think we should leave it to Daniel to decide whether he is happy to 
potentially be CC'd more often.

If it becomes too much, then we can adjust the script.

> Unless of course I'm misreading get_maintainers.pl (my Perl
> isn't really great) or the script would be adjusted to only look at added/
> removed lines (albeit even that would leave a certain risk of false
> positives).
> 
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -674,6 +674,8 @@ F:	tools/flask/
>>   F:	xen/include/xsm/
>>   F:	xen/xsm/
>>   F:	docs/misc/xsm-flask.txt
>> +K:  xsm_.*
>> +K:  \b(xsm|XSM)\b
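As an illustration, the two 'K:' patterns quoted above behave roughly as
follows. This is a hedged sketch only: the real get_maintainers.pl is
Perl and its exact matching logic may differ; `needs_xsm_review` and the
`changed_only` switch (which models Jan's suggestion of scanning only
added/removed lines to cut false positives from patch context) are
hypothetical names introduced here.

```python
import re

# The two K: patterns from the quoted MAINTAINERS hunk.
K_PATTERNS = [re.compile(r"xsm_.*"), re.compile(r"\b(xsm|XSM)\b")]

def needs_xsm_review(commit_message: str, diff: str,
                     changed_only: bool = True) -> bool:
    """Hypothetical model of a K: match: check the commit message and
    the diff. With changed_only=True, only added/removed lines are
    scanned, so an XSM hook that merely appears in unchanged patch
    context does not trigger a match."""
    texts = [commit_message]
    for line in diff.splitlines():
        # '+++'/'---' are file headers, not changed lines.
        if changed_only and not (line.startswith(("+", "-"))
                                 and not line.startswith(("+++", "---"))):
            continue
        texts.append(line)
    return any(p.search(t) for p in K_PATTERNS for t in texts)

print(needs_xsm_review("xen: tighten a domctl check",
                       "+    rc = xsm_domctl(XSM_OTHER, d, op->cmd);"))
# True: the added line invokes an xsm_ hook.
```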

Aside from the nit from Jan, this change only affects the number of 
e-mails the XSM maintainers will receive. So:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 22 10:58:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 10:58:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537851.837428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q13Ed-0006xU-S8; Mon, 22 May 2023 10:57:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537851.837428; Mon, 22 May 2023 10:57:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q13Ed-0006xN-PU; Mon, 22 May 2023 10:57:51 +0000
Received: by outflank-mailman (input) for mailman id 537851;
 Mon, 22 May 2023 10:57:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dXMk=BL=citrix.com=prvs=499503587=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q13Eb-0006xH-UW
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 10:57:50 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7be2ea55-f88f-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 12:57:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7be2ea55-f88f-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684753067;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=FtaOawtEIZcYR8kQ1ZUJu5ElatA0FpUwqecHrTU8yhQ=;
  b=eX85Xl7ewioXqa0+Du9tHPOsygu/HRrr3S4NbAB2VV+cAR+hcRuzESRm
   hUqb1q7jGzeTSdZ4yjUSnvKLEkuYeAfeXg+cqrjy6sqTcnwCXRi4oIOqc
   nP2yM0nEV1e12lCIO3MnH19Xxaj18ogd6rm0ouT628rQcvrZAZi66oetn
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109791905
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-Talos-CUID: 9a23:2S2XfmxwRhoEaCNipankBgUFO984XnKB6k7AAFeEOVZZQuafEHyfrfY=
X-Talos-MUID: =?us-ascii?q?9a23=3A0NkJAA8ep6I1nEDTNh1Sp+OQf+5K8YiPDmlUq8k?=
 =?us-ascii?q?DhpOGNBReIzmy0SviFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,184,1681185600"; 
   d="scan'208";a="109791905"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: <tglx@linutronix.de>
CC: <James.Bottomley@HansenPartnership.com>, <andrew.cooper3@citrix.com>,
	<arjan@linux.intel.com>, <arnd@arndb.de>, <boris.ostrovsky@oracle.com>,
	<brgerst@gmail.com>, <catalin.marinas@arm.com>, <deller@gmx.de>,
	<dwmw2@infradead.org>, <gpiccoli@igalia.com>, <guoren@kernel.org>,
	<jgross@suse.com>, <linux-arm-kernel@lists.infradead.org>,
	<linux-csky@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<linux-mips@vger.kernel.org>, <linux-parisc@vger.kernel.org>,
	<linux-riscv@lists.infradead.org>, <linux@armlinux.org.uk>,
	<lucjan.lucjanov@gmail.com>, <mark.rutland@arm.com>,
	<mikelley@microsoft.com>, <oleksandr@natalenko.name>, <palmer@dabbelt.com>,
	<paul.walmsley@sifive.com>, <paulmck@kernel.org>, <pbonzini@redhat.com>,
	<pmenzel@molgen.mpg.de>, <ross.philipson@oracle.com>, <sabrapan@amazon.com>,
	<seanjc@google.com>, <thomas.lendacky@amd.com>, <tsbogend@alpha.franken.de>,
	<usama.arif@bytedance.com>, <will@kernel.org>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, Jeffrey Hugo <quic_jhugo@quicinc.com>
Subject: [PATCH] x86/apic: Fix use of X{,2}APIC_ENABLE in asm with older binutils
Date: Mon, 22 May 2023 11:57:38 +0100
Message-ID: <20230522105738.2378364-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230512203426.452963764@linutronix.de>
References: <20230512203426.452963764@linutronix.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

"x86/smpboot: Support parallel startup of secondary CPUs" adds the first use
of X2APIC_ENABLE in assembly, but older binutils don't tolerate the UL suffix.

Switch to using BIT() instead.
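For background, a minimal sketch of why BIT() is safe to use from assembly
(this assumes the mechanism mirrors Linux's _AC() trick in
include/uapi/linux/const.h; the in-tree definitions may differ in detail):

```c
/* Sketch: the UL suffix is token-pasted on only when compiling C, so the
 * same header can be included from .S files, where older binutils reject
 * a literal like "1UL" that "(1UL << 11)" would expand to. */
#ifdef __ASSEMBLY__
#define _AC(X, Y)  X             /* assembler sees a bare literal: 1 */
#else
#define __AC(X, Y) (X##Y)
#define _AC(X, Y)  __AC(X, Y)    /* C sees (1UL) after token pasting */
#endif

#define BIT(nr) (_AC(1, UL) << (nr))

#define XAPIC_ENABLE   BIT(11)   /* 0x800 */
#define X2APIC_ENABLE  BIT(10)   /* 0x400 */
```

In C translation units the values are identical to the old `(1UL << 11)`
and `(1UL << 10)` definitions; only the assembly expansion changes.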

Fixes: 7e75178a0950 ("x86/smpboot: Support parallel startup of secondary CPUs")
Reported-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
Tested-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 arch/x86/include/asm/apicdef.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/apicdef.h b/arch/x86/include/asm/apicdef.h
index bf546dfb6e58..4b125e5b3187 100644
--- a/arch/x86/include/asm/apicdef.h
+++ b/arch/x86/include/asm/apicdef.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_X86_APICDEF_H
 #define _ASM_X86_APICDEF_H
 
+#include <linux/bits.h>
+
 /*
  * Constants for various Intel APICs. (local APIC, IOAPIC, etc.)
  *
@@ -140,8 +142,8 @@
 #define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
 #define APIC_BASE_MSR		0x800
 #define APIC_X2APIC_ID_MSR	0x802
-#define XAPIC_ENABLE	(1UL << 11)
-#define X2APIC_ENABLE	(1UL << 10)
+#define XAPIC_ENABLE		BIT(11)
+#define X2APIC_ENABLE		BIT(10)
 
 #ifdef CONFIG_X86_32
 # define MAX_IO_APICS 64

base-commit: 0c7ffa32dbd6b09a87fea4ad1de8b27145dfd9a6
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 22 11:06:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 11:06:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537855.837439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q13Mw-0008Sy-MT; Mon, 22 May 2023 11:06:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537855.837439; Mon, 22 May 2023 11:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q13Mw-0008Sr-JL; Mon, 22 May 2023 11:06:26 +0000
Received: by outflank-mailman (input) for mailman id 537855;
 Mon, 22 May 2023 11:06:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q13Mu-0008Sh-TH; Mon, 22 May 2023 11:06:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q13Mu-0003Ap-LH; Mon, 22 May 2023 11:06:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q13Mu-0001jv-AT; Mon, 22 May 2023 11:06:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q13Mu-0004ud-A1; Mon, 22 May 2023 11:06:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=35s3Cgao5wrZzkfxKVj+Hgg5qI52pznZiMEmCbyA3/c=; b=dFaHMkRxjs1u0Z8YUN6opthMWW
	DuFQsXeY8xoacZpl/4WAinF31jjk3j4cTkGF28pp/w7t3osnyX9m6gjMgFajqQ4fWVoqoM1c/IKHq
	K3xq6/D24+Gtc22mohDpKukKKu2cB9yeqsDsQtZt9ifG35Fc6VVRGPsIUL1f2vE3C9GA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180887-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180887: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 May 2023 11:06:24 +0000

flight 180887 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180887/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    5 days
Failing since        180699  2023-05-18 07:21:24 Z    4 days   19 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    2 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 22 11:18:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 11:18:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537863.837448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q13YS-00020n-R8; Mon, 22 May 2023 11:18:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537863.837448; Mon, 22 May 2023 11:18:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q13YS-00020g-OV; Mon, 22 May 2023 11:18:20 +0000
Received: by outflank-mailman (input) for mailman id 537863;
 Mon, 22 May 2023 11:18:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0eYX=BL=armlinux.org.uk=linux+xen-devel=lists.xenproject.org@srs-se1.protection.inumbo.net>)
 id 1q13YP-00020a-J1
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 11:18:18 +0000
Received: from pandora.armlinux.org.uk (pandora.armlinux.org.uk
 [2001:4d48:ad52:32c8:5054:ff:fe00:142])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5814b7f9-f892-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 13:18:15 +0200 (CEST)
Received: from shell.armlinux.org.uk
 ([fd8f:7570:feb6:1:5054:ff:fe00:4ec]:59006)
 by pandora.armlinux.org.uk with esmtpsa (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2)
 (envelope-from <linux@armlinux.org.uk>)
 id 1q13XQ-0006UY-2o; Mon, 22 May 2023 12:17:16 +0100
Received: from linux by shell.armlinux.org.uk with local (Exim 4.94.2)
 (envelope-from <linux@shell.armlinux.org.uk>)
 id 1q13XE-0007tn-CU; Mon, 22 May 2023 12:17:04 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
X-Inumbo-ID: 5814b7f9-f892-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=armlinux.org.uk; s=pandora-2019; h=Sender:In-Reply-To:Content-Type:
	MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date:
	Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:
	List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=cQTiKEc/43ERh/RUjjvIJQkEsJdOq6qRkauNBfX0QXQ=; b=p9m/ILtgZZh20NKGMrXp1m9S79
	L3gjgt3hqJWh343H0TOYKs01/G2BPEuhvFC4TMuRPeU+MY+G/63vgoqDRNSKM39yVvKzOTaMjmZFK
	7LM8lsFJeYjGWosrrrGG5miTPAuK0YTizFt0WQTN+y4vi/M34EkH3zTVqOfVyGhSWIOf8ow6+E///
	7SsMn4KhRSRjr9A7KCnECWwdnNE+HTrXL6yLyaObfuKu+fRhMgUnrXcHYHyRgw+uyKRbUhZTX2eMd
	dspexsNkPMSmOqiOaZOzjcHaV7Or01naZ8gCOX7wKpbxrnL7eOBmMEZkr02uGpGm9yxIp2RZVNEOy
	4Up4YdGA==;
Date: Mon, 22 May 2023 12:17:04 +0100
From: "Russell King (Oracle)" <linux@armlinux.org.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: tglx@linutronix.de, James.Bottomley@hansenpartnership.com,
	arjan@linux.intel.com, arnd@arndb.de, boris.ostrovsky@oracle.com,
	brgerst@gmail.com, catalin.marinas@arm.com, deller@gmx.de,
	dwmw2@infradead.org, gpiccoli@igalia.com, guoren@kernel.org,
	jgross@suse.com, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
	linux-riscv@lists.infradead.org, lucjan.lucjanov@gmail.com,
	mark.rutland@arm.com, mikelley@microsoft.com,
	oleksandr@natalenko.name, palmer@dabbelt.com,
	paul.walmsley@sifive.com, paulmck@kernel.org, pbonzini@redhat.com,
	pmenzel@molgen.mpg.de, ross.philipson@oracle.com,
	sabrapan@amazon.com, seanjc@google.com, thomas.lendacky@amd.com,
	tsbogend@alpha.franken.de, usama.arif@bytedance.com,
	will@kernel.org, x86@kernel.org, xen-devel@lists.xenproject.org,
	Jeffrey Hugo <quic_jhugo@quicinc.com>
Subject: Re: [PATCH] x86/apic: Fix use of X{,2}APIC_ENABLE in asm with older
 binutils
Message-ID: <ZGtPMHJM/TfklT+2@shell.armlinux.org.uk>
References: <20230512203426.452963764@linutronix.de>
 <20230522105738.2378364-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230522105738.2378364-1-andrew.cooper3@citrix.com>
Sender: Russell King (Oracle) <linux@armlinux.org.uk>

Hi,

Please can you tell me what the relevance of this patch is to me, and
thus why I'm included in the Cc list? I have never touched this file,
not in its current path nor a previous path according to git.

Thanks.

On Mon, May 22, 2023 at 11:57:38AM +0100, Andrew Cooper wrote:
> "x86/smpboot: Support parallel startup of secondary CPUs" adds the first use
> of X2APIC_ENABLE in assembly, but older binutils don't tolerate the UL suffix.
> 
> Switch to using BIT() instead.
> 
> Fixes: 7e75178a0950 ("x86/smpboot: Support parallel startup of secondary CPUs")
> Reported-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
> Tested-by: Jeffrey Hugo <quic_jhugo@quicinc.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  arch/x86/include/asm/apicdef.h | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/apicdef.h b/arch/x86/include/asm/apicdef.h
> index bf546dfb6e58..4b125e5b3187 100644
> --- a/arch/x86/include/asm/apicdef.h
> +++ b/arch/x86/include/asm/apicdef.h
> @@ -2,6 +2,8 @@
>  #ifndef _ASM_X86_APICDEF_H
>  #define _ASM_X86_APICDEF_H
>  
> +#include <linux/bits.h>
> +
>  /*
>   * Constants for various Intel APICs. (local APIC, IOAPIC, etc.)
>   *
> @@ -140,8 +142,8 @@
>  #define APIC_BASE (fix_to_virt(FIX_APIC_BASE))
>  #define APIC_BASE_MSR		0x800
>  #define APIC_X2APIC_ID_MSR	0x802
> -#define XAPIC_ENABLE	(1UL << 11)
> -#define X2APIC_ENABLE	(1UL << 10)
> +#define XAPIC_ENABLE		BIT(11)
> +#define X2APIC_ENABLE		BIT(10)
>  
>  #ifdef CONFIG_X86_32
>  # define MAX_IO_APICS 64
> 
> base-commit: 0c7ffa32dbd6b09a87fea4ad1de8b27145dfd9a6
> -- 
> 2.30.2
> 
> 
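[Editorial note: a minimal illustrative sketch of the issue the patch above fixes. The `MY_BIT`/`MY_X2APIC_ENABLE` names are hypothetical, not the kernel's; the real kernel routes this through `BIT()` in `<linux/bits.h>`. The point is that a header shared between C and assembly cannot use C integer-constant suffixes, because older GNU `as` rejects tokens like `1UL`.]

```c
/* Sketch (assumed names, not the kernel's actual headers): why a "UL"
 * suffix breaks when a C header is also included from .S files.
 *
 * In C this is fine:
 *     #define X2APIC_ENABLE (1UL << 10)
 * but when the assembler preprocesses the same header, older GNU as
 * versions reject the literal token "1UL".
 *
 * The usual fix is to vary the expansion on __ASSEMBLY__, which is
 * what the kernel's BIT()/UL() machinery does internally.
 */
#ifdef __ASSEMBLY__
# define MY_BIT(n)  (1 << (n))      /* no suffix: assembler-friendly */
#else
# define MY_BIT(n)  (1UL << (n))    /* suffixed: unsigned long in C  */
#endif

#define MY_X2APIC_ENABLE  MY_BIT(10)
#define MY_XAPIC_ENABLE   MY_BIT(11)
```

With this pattern, C code still gets an `unsigned long` constant, while assembly sees a plain shifted literal that any binutils version accepts.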

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!


From xen-devel-bounces@lists.xenproject.org Mon May 22 11:19:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 11:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537865.837458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q13Z7-0002VK-3m; Mon, 22 May 2023 11:19:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537865.837458; Mon, 22 May 2023 11:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q13Z7-0002VD-0v; Mon, 22 May 2023 11:19:01 +0000
Received: by outflank-mailman (input) for mailman id 537865;
 Mon, 22 May 2023 11:18:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q13Z5-0002Ux-Sx; Mon, 22 May 2023 11:18:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q13Z5-0003XG-Ly; Mon, 22 May 2023 11:18:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q13Z5-00023O-6a; Mon, 22 May 2023 11:18:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q13Z5-0005ZX-64; Mon, 22 May 2023 11:18:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tsQVzVQs4Y+9xGa5VFhlyxOx8rUY+o5PJ5PiuPJ7EnA=; b=trYfKpJMVpHISFbsME8WMbZ1SR
	YN3Zrpn31WDJWMpWYjEIWZaQ3dVQ1pAZxg85lQyE1CBSgDH3SD9ECK/CD7l9DQhTXZeO45bDgDcXL
	n1D++nX7+eMugRKJp/y9b3UQSO8Ob6C8IJIx27aKWvzrQknNVNIA0dNV58SK1l5oXh9k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180884-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180884: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-arm64-pvops:kernel-build:fail:regression
    xen-unstable:test-amd64-i386-examine-uefi:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start.2:fail:heisenbug
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
X-Osstest-Versions-That:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 May 2023 11:18:59 +0000

flight 180884 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180884/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180839

Tests which are failing intermittently (not blocking):
 test-amd64-i386-examine-uefi  6 xen-install                fail pass in 180839
 test-amd64-amd64-libvirt-vhd 20 guest-start.2              fail pass in 180839

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 180839 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 180839 never pass
 test-arm64-arm64-xl         15 migrate-support-check fail in 180839 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 180839 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 180839 never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail in 180839 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 180839 never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail in 180839 never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 180839 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 180839 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 180839 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 180839 never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail in 180839 never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail in 180839 never pass
 test-arm64-arm64-xl-vhd     14 migrate-support-check fail in 180839 never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail in 180839 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180839
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180839
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180839
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180839
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180839
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180839
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180839
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180839
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180839
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180839
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180839
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180839
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a
baseline version:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a

Last test of basis   180884  2023-05-22 01:52:01 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 22 12:19:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537878.837472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V2-0000Xc-Mg; Mon, 22 May 2023 12:18:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537878.837472; Mon, 22 May 2023 12:18:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V2-0000XU-Js; Mon, 22 May 2023 12:18:52 +0000
Received: by outflank-mailman (input) for mailman id 537878;
 Mon, 22 May 2023 12:18:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ae2Q=BL=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q14V1-0000XN-G2
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:18:51 +0000
Received: from mail-lf1-x131.google.com (mail-lf1-x131.google.com
 [2a00:1450:4864:20::131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cf3a08f8-f89a-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 14:18:50 +0200 (CEST)
Received: by mail-lf1-x131.google.com with SMTP id
 2adb3069b0e04-4f3b5881734so2768429e87.0
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 05:18:50 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 8-20020ac24828000000b004f26aabbd6asm961120lft.130.2023.05.22.05.18.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 22 May 2023 05:18:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf3a08f8-f89a-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684757930; x=1687349930;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=5Mep4fqU20J6OtORroOetFURSGKFcHgL+B0ajvu4ad4=;
        b=NUnayCIPDW8FvhCjttV7QKRCUl87wwhoxlbCu7QVTF6FHyLVZl2MrXxcgfkfHKzxsc
         kbeJ2AniKGd8fBhiagRcU39Vq49yhDfdPVTdHte/cu8C+FQS0+bfEv9V/mrNU4CHnkbO
         F17+Gle3ipPFM/PfGpmZXdEbEFB0uJdnvVrPHnX580+oLkin52YMznqmZaEe6A+E6cXy
         olQZypcIeWRCJIQzX9eTiV2JMklf6StNOzklgA/+CY+1N62Gop6/87JjnTU6PGatpOad
         WwGIBgNeXQNl9NyJlTbsv+M2muidgL15gPaFXK89J6usEAmFSflOoNWeFxuFFOSwGsQJ
         fGHg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684757930; x=1687349930;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=5Mep4fqU20J6OtORroOetFURSGKFcHgL+B0ajvu4ad4=;
        b=EOaikeCm1HIHTvxtc2ZJB7Pg+DdEa/EEJ2NeSlchHK4nFXIQs71KAqMuavFvFa+3d6
         Ljgwzqj5GEq/5bUZLnI0+WoBQtUAfYy6T9LJ4mcnLZ/YlNfL51I2+4YmCv/f4PgkJsdS
         Dk//zVfcPCQGB6kqSwl/q/N0MYk8ZhvmqLxQoaMISOkauw73uGGdNXics9TIw0pvvDQg
         kDpFb/1zh7YN8Rk1+vLsQzASfcbIdS+PcqGQrCK9LeIWQ1dM7QXcqqv2lOsEHCpiAkg2
         1q8/EY6B4r+X1GnEU0kKRGnNKX5Lbfs9REfA3Ba37IHh8krNj9P1qB6B5kqDn0cX+NrN
         vucg==
X-Gm-Message-State: AC+VfDxm3Ym/ULRKhkiT/NXyWrbtbVJ7DZiDgkY9psd5I345AXtwlqiu
	LAaDvFBsTkQE4wpwG55sPTYfYPzjcPg=
X-Google-Smtp-Source: ACHHUZ5DzSmhlM1E/ZdaEQ0hKL6cNoOY3qU7hdIFpMpX2NSUS6UT6gH0Yap56ZQ9nVqrQsqdXwyrXQ==
X-Received: by 2002:ac2:4a85:0:b0:4f1:3eea:eaf9 with SMTP id l5-20020ac24a85000000b004f13eeaeaf9mr3516481lfp.24.1684757929287;
        Mon, 22 May 2023 05:18:49 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v8 0/5] enable MMU for RISC-V
Date: Mon, 22 May 2023 15:18:41 +0300
Message-Id: <cover.1684757267.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following things:
1. Functionality to build the page tables for Xen that map the
   link-time address range to the load-time (physical) one.
2. A check that Xen is less than the page size.
3. A check that load addresses don't overlap with linker addresses.
4. Preparation for the proper switch to the virtual memory world.
5. Loading of the built page table into the SATP register.
6. Enabling of the MMU.

---
Changes in V8:
	- Add "#ifdef RV_STAGE1_MODE == SATP_MODE_SV39" instead of "#ifdef SV39"
	  in the comment with the VM layout description.
	- Update the upper bound of the direct map area in the VM layout description.
	- Add parentheses around lvl in the pt_index() macro.
	- Introduce the macros paddr_to_pfn() and pfn_to_paddr() and use them inside
	  paddr_to_pte()/pte_to_paddr().
	- Remove "__" in sfence_vma() and add blanks inside the parentheses of
	  asm volatile.
	- Parenthesize the two && operands against the || at the start of the
	  setup_initial_mapping() function.
	- Code style fixes.
	- Remove ". = ALIGN(PAGE_SIZE);" before "*(.bss.page_aligned)" in xen.lds.S,
	  as contributions to .bss.page_aligned specify proper alignment themselves.
---
Changes in V7:
	- Fix the frametable range in the RV64 layout.
	- Add "ifdef SV39" to the RV64 layout comment to make it explicit that the
	  description is for SV39 mode.
	- Add a missed row in the RV64 layout table.
	- Define CONFIG_PAGING_LEVELS=2 for SATP_MODE_SV32.
	- Update the switch_stack_and_jump() macro: add constraint 'X' for fn,
	  a memory clobber, and wrap it into do {} while ( false ).
	- Add noreturn to the definition of enable_mmu().
	- Update pt_index() to "(pt_linear_offset(lvl, (va)) & VPN_MASK)".
	- Expand the macros pte_to_addr()/addr_to_pte() in the paddr_to_pte() and
	  pte_to_paddr() functions and then drop them.
	- Remove the inclusion of <asm/config.h>.
	- Update the commit message around the definition of PGTBL_INITIAL_COUNT.
	- Remove PGTBL_ENTRY_AMOUNT and use PAGETABLE_ENTRIES instead.
	- Code style fixes.
	- Remove the permission argument of the setup_initial_mapping() function.
	- Remove calc_pgtbl_lvls_num() as it's no longer needed after the
	  definition of CONFIG_PAGING_LEVELS.
	- Introduce sfence_vma().
	- Remove the satp_mode argument from check_pgtbl_mode_support() and use
	  RV_STAGE1_MODE directly instead.
	- Change .align to .p2align.
	- Drop the inclusion of <asm/asm-offsets.h> from head.S. This change isn't
	  necessary for the current patch series.
	- Create a separate patch for xen.lds.S.
---
Changes in V6:
	- Update the RV VM layout and things related to it.
	- Move PAGE_SHIFT and PADDR_BITS to the top of page-bits.h.
	- Cast the argument x of the pte_to_addr() macro to paddr_t to avoid the
	  risk of overflow for RV32.
	- Update the type of num_levels from 'unsigned long' to 'unsigned int'.
	- Define PGTBL_INITIAL_COUNT as ((CONFIG_PAGING_LEVELS - 1) + 1).
	- Update the type of the permission arguments from 'unsigned long' to
	  'unsigned int'.
	- Fix code style.
	- Switch the 'while' loop to a 'for' loop.
	- Undef HANDLE_PGTBL.
	- Clean the root page table after the MMU is disabled in the
	  check_pgtbl_mode_support() function.
	- Align __bss_start properly.
	- Remove unnecessary const for the paddr_to_pte(), pte_to_paddr(), and
	  pte_is_valid() functions.
	- Add the switch_stack_and_jump() macro and use it inside enable_mmu()
	  before the jump to the cont_after_mmu_is_enabled() function.

---
Changes in V5:
  * Rebase the patch series on top of the current staging.
  * Update the cover letter: the info about the patches on which the MMU patch
    series was based was removed, as they were merged to staging.
  * Add a new patch with a description of the VM layout for RISC-V.
  * Indent the fields of the pte_t struct.
  * Rename addr_to_pte() and ppn_to_paddr() to match their content.
---
Changes in V4:
  * Use the GB() macro instead of defining SZ_1G.
  * Hardcode XEN_VIRT_START and add a comment (ADDRESS_SPACE_END + 1 - GB(1)).
  * Remove the unnecessary 'asm' word at the end of #error.
  * Encapsulate the pte_t definition in a struct.
  * Rename addr_to_ppn() to ppn_to_paddr().
  * Change the type of the paddr argument from const unsigned long to paddr_t.
  * Update the prototype of pte_to_paddr().
  * Calculate the size of the Xen binary based on the number of page tables.
  * Use unsigned int instead of uint32_t, as its use isn't warranted.
  * Remove the extern of bss_{start,end}, as they aren't used in mm.c anymore.
  * Fix code style.
  * Add an argument for the HANDLE_PGTBL macro instead of the curr_lvl_num
    variable.
  * Make enable_mmu() noinline to prevent issues under link-time optimization,
    because of the nature of enable_mmu().
  * Add a function to check that SATP_MODE is supported.
  * Update the commit message.
  * Update setup_initial_pagetables() to set the correct PTE flags in one pass
    instead of calling setup_pte_permissions() after setup_initial_pagetables(),
    as setup_initial_pagetables() isn't used to change permission flags.
---
Changes in V3:
  * Update the cover letter message: the patch series doesn't depend on
    [ RISC-V basic exception handling implementation ], as it was decided
    to enable the MMU before the implementation of exception handling. Also,
    the MMU patch series is based on two other patches which weren't merged,
    [1] and [2].
  * Update the commit message for [ [PATCH v3 1/3]
    xen/riscv: introduce setup_initial_pages ].
  * Update the definition of the pte_t structure to have a proper size of
    pte_t in the case of RV32.
  * Update asm/mm.h with new functions and remove unnecessary 'extern'.
  * Remove the LEVEL_* macros, as XEN_PT_LEVEL_* alone are enough.
  * Update paddr_to_pte() to receive permissions as an argument.
  * Add a check that map_start & pa_start are properly aligned.
  * Move the defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, and PTE_PPN_SHIFT to
    <asm/page-bits.h>.
  * Rename PTE_SHIFT to PTE_PPN_SHIFT.
  * Refactor setup_initial_pagetables(): map all LINK addresses to LOAD
    addresses and afterwards set up the PTE permissions for sections; update
    the check that linker and load addresses don't overlap.
  * Refactor setup_initial_mapping(): allocate page tables 'dynamically' if
    necessary.
  * Rewrite enable_mmu() in C; add a check that map_start and pa_start are
    aligned on a 4k boundary.
  * Update the comment for the setup_initial_pagetables() function.
  * Add RV_STAGE1_MODE to support different MMU modes.
  * Update the commit message to say that the MMU is also enabled here.
  * Set XEN_VIRT_START very high to not overlap with the load address range.
  * Align the bss section.
---
Changes in V2:
  * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and introduce
    XEN_PT_LEVEL_*() and LEVEL_* instead.
  * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*().
  * Remove the clear_pagetables() function, as the page tables were zeroed
    during .bss initialization.
  * Rename _setup_initial_pagetables() to setup_initial_mapping().
  * Make PTE_DEFAULT equal to RX.
  * Update the prototype of setup_initial_mapping(): (..., bool writable) ->
    (..., UL flags).
  * Update the calls of setup_initial_mapping() according to the new
    prototype.
  * Remove the unnecessary call:
    _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
  * Define index* in the loop of setup_initial_mapping().
  * Remove the attribute "__attribute__((section(".entry")))" for
    setup_initial_pagetables(), as there is no such section in xen.lds.S.
  * Make the arguments of paddr_to_pte() and pte_is_valid() const.
  * Use <xen/kernel.h> instead of declaring extern unsigned long _stext,
    _etext, _srodata, _erodata.
  * Update 'extern unsigned long __init_begin' to
    'extern unsigned long __init_begin[]'.
  * Use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))".
  * Set __section(".bss.page_aligned") for the page table arrays.
  * Fix indentations.
  * Change '__attribute__((section(".entry")))' to '__init'.
  * Remove the alignment of {map,pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
    setup_initial_mapping(), as they should already be aligned.
  * Update the argument of pte_is_valid() to "const pte_t *p".
  * Remove the patch "[PATCH v1 3/3] automation: update RISC-V smoke test"
    from the patch series, as a simplified approach for the RISC-V smoke test
    was introduced by Andrew Cooper.
  * Add the patch [ xen/riscv: remove dummy_bss variable ], as there is no
    longer any need for the dummy_bss variable after the introduction of the
    initial page tables.
---

Oleksii Kurochko (5):
  xen/riscv: add VM space layout
  xen/riscv: introduce setup_initial_pages
  xen/riscv: align __bss_start
  xen/riscv: setup initial pagetables
  xen/riscv: remove dummy_bss variable

 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  50 ++++-
 xen/arch/riscv/include/asm/current.h   |  11 +
 xen/arch/riscv/include/asm/mm.h        |  14 ++
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  61 ++++++
 xen/arch/riscv/include/asm/processor.h |   5 +
 xen/arch/riscv/mm.c                    | 277 +++++++++++++++++++++++++
 xen/arch/riscv/setup.c                 |  22 +-
 xen/arch/riscv/xen.lds.S               |   5 +-
 10 files changed, 446 insertions(+), 10 deletions(-)
 create mode 100644 xen/arch/riscv/include/asm/current.h
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 22 12:19:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537880.837487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V4-0000pt-93; Mon, 22 May 2023 12:18:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537880.837487; Mon, 22 May 2023 12:18:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V4-0000oc-3m; Mon, 22 May 2023 12:18:54 +0000
Received: by outflank-mailman (input) for mailman id 537880;
 Mon, 22 May 2023 12:18:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ae2Q=BL=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q14V3-0000XN-9w
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:18:53 +0000
Received: from mail-lf1-x133.google.com (mail-lf1-x133.google.com
 [2a00:1450:4864:20::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d066bf48-f89a-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 14:18:52 +0200 (CEST)
Received: by mail-lf1-x133.google.com with SMTP id
 2adb3069b0e04-4f24cfb8539so6872379e87.3
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 05:18:52 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 8-20020ac24828000000b004f26aabbd6asm961120lft.130.2023.05.22.05.18.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 22 May 2023 05:18:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d066bf48-f89a-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684757932; x=1687349932;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=oF4lEFlSEg4FP4lOEdxDr5KveUkUc0Imo2+IXVhgnpQ=;
        b=Byug/q2nDyLWIoYBDupLy3OAHC11HvZNaXt2FaPW8qNcRyOsI7Q0AaA4ol7xksR8j5
         Ctpb8o2DiOcaeXETarfL3obdWCbvdqpiwKBR8GlkkUGhieekkArztHqH5GBFWG4TetdR
         PjPoQ4bh4xp43680n0s4u17C5c7xgGVrpRmuaE4dce6kyYE0eP9TQl6Buyw3RseVTDsh
         fkIlcVP1RnqG+r6pzG9aD0V6fbG7yzj85Ctj+0xClJKXgkvUaEbEBRNA3IckrfGBNrTE
         WkAnI8x5M7etGOkMR8trE8y/bUrrxVCSaVI4fXbZe1CQVI8T0zLh9QgJNtaJxqEl5zd7
         eyZA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684757932; x=1687349932;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=oF4lEFlSEg4FP4lOEdxDr5KveUkUc0Imo2+IXVhgnpQ=;
        b=gkauuUvKzaWsyFMq/hRZAROCVdXBRJtoAsavTVqnrHSFTRkY9ONAI8ckhXdThQB0ez
         PyxRmfph+igO8y8PbfoR1mbfuxtWnUX+nM+VyXAKcBucbPeMmpgQz/9B8VLRso2J6YbG
         zPUx2eFdzYQf6aCG0PRcH/jJz1aPmn8emQAA7EaWK9RquW8k5791+oEsshzr80V8z7Ij
         sN6P57LVI6+J5y0ui42Nu3luvGv9s6gy+i6HIvPm4+psqT0IHB/3Lpen2utxNqReUt5J
         YVNx0wL+qVtdDRFeQIMNCieC2aW1fiXPrxvKSdBOE/mCgNMYGFcWmu4+c2bKbEduMHCl
         GLRg==
X-Gm-Message-State: AC+VfDwxhcXWgB9e6bk/JjgbTKGxXmtmeYHwuWCfuJk4flZPIeNlCpJp
	h4plp9xowWFjLGRWcgoSwD24BxAP+xk=
X-Google-Smtp-Source: ACHHUZ5TLR5qX4b4bfMAxnISRjnfAd4baqbpopMx3qnZL3oFyk6iq6XcDsDF051LseoD+ZwKsYn/eg==
X-Received: by 2002:ac2:544e:0:b0:4e9:bf52:7898 with SMTP id d14-20020ac2544e000000b004e9bf527898mr3313294lfn.37.1684757931666;
        Mon, 22 May 2023 05:18:51 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v8 3/5] xen/riscv: align __bss_start
Date: Mon, 22 May 2023 15:18:44 +0300
Message-Id: <ef1cfab8bc658c4701833c55af3cf1d4d9a02e68.1684757267.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1684757267.git.oleksii.kurochko@gmail.com>
References: <cover.1684757267.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The .bss clearing loop requires proper alignment of __bss_start.

The ALIGN(PAGE_SIZE) before "*(.bss.page_aligned)" in xen.lds.S
was removed, as any contribution to "*(.bss.page_aligned)" has to
specify proper alignment itself.

Fixes: cfa0409f7cbb ("xen/riscv: initialize .bss section")
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V8:
 * Remove ". = ALIGN(PAGE_SIZE);" before "*(.bss.page_aligned)" in the
   xen.lds.S file, as any contribution to .bss.page_aligned has to specify
   proper alignment itself.
 * Add "Fixes: cfa0409f7cbb ("xen/riscv: initialize .bss section")" to
   the commit message.
 * Add "Reviewed-by: Jan Beulich <jbeulich@suse.com>" to the commit message.
---
Changes in V7:
 * the patch was introduced in the current patch series.
---
 xen/arch/riscv/xen.lds.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index fe475d096d..df71d31e17 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -137,9 +137,9 @@ SECTIONS
     __init_end = .;
 
     .bss : {                     /* BSS */
+        . = ALIGN(POINTER_ALIGN);
         __bss_start = .;
         *(.bss.stack_aligned)
-        . = ALIGN(PAGE_SIZE);
         *(.bss.page_aligned)
         . = ALIGN(PAGE_SIZE);
         __per_cpu_start = .;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 22 12:19:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537883.837522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V7-0001lT-Cq; Mon, 22 May 2023 12:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537883.837522; Mon, 22 May 2023 12:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V7-0001ky-7w; Mon, 22 May 2023 12:18:57 +0000
Received: by outflank-mailman (input) for mailman id 537883;
 Mon, 22 May 2023 12:18:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ae2Q=BL=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q14V5-0000XV-Vg
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:18:55 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d1a9e4e1-f89a-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 14:18:54 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2af189d323fso42251381fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 05:18:54 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 8-20020ac24828000000b004f26aabbd6asm961120lft.130.2023.05.22.05.18.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 22 May 2023 05:18:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1a9e4e1-f89a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684757934; x=1687349934;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=HzaXPM5ihr2hJv4uppdYO8eOKE9U/QW5elpxLedCGQg=;
        b=jnY5gEFp/ARyk8/m3xL34Ukzc2Xc062lnJNAA8t5DcKwHus3bY7wVlNvTy5ZVs12OW
         StCfsbcgp1c/IdKVjBoTzJCwXcQp+cj+guTTF/ZkXfMyoLiTORIxJyTwuPnWTxHItGtL
         8u/F103wlV8AtlQB+faM8cb9AXB0y0YP0uoj73XwcrvsKNuFeMUuxCXqvsrPNmvwtmMh
         z5Q9K5bla1q0ho/ScxCnqGxk7SXtnXlL4WFo+AQy1aRpikZ+GnGEFqL5d6WKW6cQewBD
         AsnlMBUib5aKhGiUe8hy5o2U/l+Dk+8vDlevQh6VCxWo0MXc/G+Sh1jaHE4ipuR7mcuq
         xBgw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684757934; x=1687349934;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=HzaXPM5ihr2hJv4uppdYO8eOKE9U/QW5elpxLedCGQg=;
        b=LjuArUQ6F5BPJpyjcg/tQCznX6D7ZAYuc1Xg+EiiIHw/8OaT39qUoaRSwgwabonaPE
         6qlA+iKajujyLLrGZCH73fDIirvv/2EE1Y9sAjrk5pCwXPCPhY9B+P8WVmpIzCibza/s
         VgNPESwIY0U0yVKt+HReCysNiITrkQGVX3LjlJwEhn2IpbM5deUK+bOo8TB/qW6OsNAi
         Hk3sITBCn9QRXQHHPTgOOj616goONrZU7hNss3DCMolXBpVl7YHNK+DYzDn+wQCUlQO8
         F04EEC8/VNqiCn/xO7NjH1nn0uHZeSMdwFEOjBggXI3BtPTuveNqn3vp3Db8q5dI9Gn8
         J3HQ==
X-Gm-Message-State: AC+VfDwqbRW4eIHekzsRvdC6nou3R4Lj2yrQXwvuIA6A52681n0r2s5v
	MrWEMQ6jAwlO4Cgqa3B/AROMKshby2c=
X-Google-Smtp-Source: ACHHUZ7HhrVcVrH7MEEiRs2CvHg8BVgP3JaRumD6q8cD3Y+Mn7qRFkMARpJcZumxoEUOejgnpwQWPQ==
X-Received: by 2002:ac2:5e86:0:b0:4f0:6aa3:d85a with SMTP id b6-20020ac25e86000000b004f06aa3d85amr3227582lfq.0.1684757933708;
        Mon, 22 May 2023 05:18:53 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v8 5/5] xen/riscv: remove dummy_bss variable
Date: Mon, 22 May 2023 15:18:46 +0300
Message-Id: <3d30db8cad99aa7bb9b22728b37c5184d5ae953f.1684757267.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1684757267.git.oleksii.kurochko@gmail.com>
References: <cover.1684757267.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

After the introduction of the initial page tables, there is no longer
any need for the dummy_bss variable, as the .bss section will not be
empty anymore.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V8:
 - Nothing changed. Only rebase
---
Changes in V7:
 - Nothing changed. Only rebase
---
Changes in V6:
 - Nothing changed. Only rebase
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 * The patch was introduced in this patch series (v3).
---
Changes in V2:
 * The patch was introduced in this patch series (v2).
---
 xen/arch/riscv/setup.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index cf5dc5824e..845d18d86f 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -8,14 +8,6 @@
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
-/*  
- * To be sure that .bss isn't zero. It will simplify code of
- * .bss initialization.
- * TODO:
- *   To be deleted when the first real .bss user appears
- */
-int dummy_bss __attribute__((unused));
-
 void __init noreturn start_xen(unsigned long bootcpu_id,
                                paddr_t dtb_addr)
 {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 22 12:19:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537882.837508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V6-0001Ln-36; Mon, 22 May 2023 12:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537882.837508; Mon, 22 May 2023 12:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V5-0001KH-Up; Mon, 22 May 2023 12:18:55 +0000
Received: by outflank-mailman (input) for mailman id 537882;
 Mon, 22 May 2023 12:18:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ae2Q=BL=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q14V5-0000XV-1F
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:18:55 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d1141695-f89a-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 14:18:53 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2af177f12a5so52339401fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 05:18:53 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 8-20020ac24828000000b004f26aabbd6asm961120lft.130.2023.05.22.05.18.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 22 May 2023 05:18:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1141695-f89a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684757933; x=1687349933;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LvpxCLBBqorbQeBpQfAjag4bfazOq0BLsYa9CfgITIY=;
        b=JmHV/vqB3H+bbZAmtmj3bQEGhoH/cq5iEmr+KxT/6n1qpa67uz2QnkGfjo2rDOWPQy
         saN1tvu8jschQCn+/24IOLucNQJwun2ktIn5ga5Wh0X1681AtwZ6VwvnL+HBWkBm3K9s
         kGb0ywialknHJiwE9QTNAYrn4oZIyglxSPI7H1Dc8ohmz5KzFydOVFhUpIT4QM8NK0v4
         lbZUb4Pbh2I9oN2cuAm7xeRmB5U7qInCwxLTn4TXKH4/yJLrikuJvggK5+iVA9tWQF/k
         PyWIvjMjcQqIL6QqvHiA3ETRlAjLzfeZTOs39+2tK05eFnGzxdMIlUBNbLt3GMqdPJnA
         Gzlg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684757933; x=1687349933;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=LvpxCLBBqorbQeBpQfAjag4bfazOq0BLsYa9CfgITIY=;
        b=D1dI6Gv+d5AxN1sdKIVyiNmMgTWs5U50cRsnKjI2kv7+SGsKw1f7eLfAGmLnoH/61l
         1wJY7ig70atMyTVuPtBlHz/xdTQnfi9ulI9uKBydyclbfXPxyFyl1N+N/TM8NFKC+T4F
         umtQVr17a6fHbmcRvR6P6x6apbEVwivQqR0w5jkM09faM4x03B3PMvRqn/2rSMrbr9CB
         eSDsJeIJb4Gmpdvg9ew2rp7Iv4CKOkIViHG5Os8kszS0BV/njWYor+ek9RxigL+MukzW
         u2/cIJc97c5WpRiXYrLHM+cLmjMLi21DBozMU0TMec/SLHrPCyUZGKdwgck4NsoawnJW
         HblA==
X-Gm-Message-State: AC+VfDyXzxjdlTo23kQbIGMrPqdE+1fWA3u5/rlgJJjErcArdaTghpEt
	OCjqlEsc5b59VtCL6w85MqS7+4Yg2do=
X-Google-Smtp-Source: ACHHUZ4QlJ10wu0A6K9oPFNOy6NPUIvuKoMJdRlJ2MlttbxuuO/WN1L1GTDpkZHZoi3IWsUR0tTdEw==
X-Received: by 2002:a05:651c:8f:b0:2ac:598e:e946 with SMTP id 15-20020a05651c008f00b002ac598ee946mr3989905ljq.3.1684757932827;
        Mon, 22 May 2023 05:18:52 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v8 4/5] xen/riscv: setup initial pagetables
Date: Mon, 22 May 2023 15:18:45 +0300
Message-Id: <ad78ba36f408f01a8a37365865ff032b495bbb2d.1684757267.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1684757267.git.oleksii.kurochko@gmail.com>
References: <cover.1684757267.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch does two things:
1. Set up initial page tables.
2. Enable the MMU, which ends up executing code in
   cont_after_mmu_is_enabled().

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V8:
 - Nothing changed. Only rebase
---
Changes in V7:
 - Nothing changed. Only rebase
---
Changes in V6:
 - Nothing changed. Only rebase
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 - update the commit message that MMU is also enabled here
 - remove early_printk("All set up\n") as it was moved to
   cont_after_mmu_is_enabled() function after MMU is enabled.
---
Changes in V2:
 * Update the commit message
---
 xen/arch/riscv/setup.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 315804aa87..cf5dc5824e 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -21,7 +21,10 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 {
     early_printk("Hello from C env\n");
 
-    early_printk("All set up\n");
+    setup_initial_pagetables();
+
+    enable_mmu();
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 22 12:19:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537879.837482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V4-0000mW-0r; Mon, 22 May 2023 12:18:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537879.837482; Mon, 22 May 2023 12:18:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V3-0000mP-St; Mon, 22 May 2023 12:18:53 +0000
Received: by outflank-mailman (input) for mailman id 537879;
 Mon, 22 May 2023 12:18:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ae2Q=BL=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q14V2-0000XN-VV
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:18:53 +0000
Received: from mail-lf1-x131.google.com (mail-lf1-x131.google.com
 [2a00:1450:4864:20::131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d03079d2-f89a-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 14:18:51 +0200 (CEST)
Received: by mail-lf1-x131.google.com with SMTP id
 2adb3069b0e04-4f3b5881734so2768457e87.0
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 05:18:51 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 8-20020ac24828000000b004f26aabbd6asm961120lft.130.2023.05.22.05.18.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 22 May 2023 05:18:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d03079d2-f89a-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684757931; x=1687349931;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2gkL2Is8EvWZRGU8xS7LanP8IxE/MMMODsHGbdZf8Xk=;
        b=JpcvJoFHdWG09xlp9yDLDMTIiMNTjKd7qDhBPNRseqJyZRgTjHky6BP3rljrKV6sFv
         bGc0Lchmn1L1oG0HDMi3v2RMmcPxNIflCoq9FjHTrPGwtDpBZ8N6ErL9yMWpJmSd7/wm
         eVoear/p0AKJLyT1x++hnf29qbrApsCLG/6saKQimAX6lGZlkKR+USIMqMU3+7NXDR/g
         jcDKuP4y7uwXVj6wmj9lBRKAMQdscgOCNaTdI1k023p6oOOWC3hToTctIND2JAecXjth
         1b4K8D/PPzNMD0vzFZNDe06HbXTSr/ii7x2ZzUpHjqTXkkA03NA2gCSkryIkPFMsYsfA
         C/Qw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684757931; x=1687349931;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2gkL2Is8EvWZRGU8xS7LanP8IxE/MMMODsHGbdZf8Xk=;
        b=NfQwIpSnLvpQZt+mBJP8yKrA4CiVNR9rkLZT8J/Snrd+ZOxmiPePbJEGu4JGEng+xb
         MaP/k0broE0aoWAGiELypDQJZ+gpJVPuPTIPsR5cP3GJjxfRpx6/0tIB+T19agGNrRBQ
         H8PK7QXdvxDGbyeGu2MOcEPxbvmqbJAZ1ui8OxLbPA9q8YH9dBJL/poK/bdUYsftufMB
         l5GGG77UMYVbfIZPitMJ3PtLlhXddJQPshYaUle7pcFEJ5BgRi21LNV+Q6mZXGeh3Kj7
         Cjqmchvh4sOMsRUmFTqI9Wmwvf8pm5bsUrBNrNktNdwu51yviFp1U0dkvLfMm36GUsZ0
         bjig==
X-Gm-Message-State: AC+VfDy2UA9/ZG0UbixoL6aIknHMThzCPOF33xlS6USfS5KTt28NkjOq
	ZeDoOZgnxhzzO5ATCeoWNGL0k6CC4kA=
X-Google-Smtp-Source: ACHHUZ6Y238XM+k0mmlNYiSZdjpSBGGIuTYeuz61LINpSjhIWdGKYdD67wrNLnu1gUE6IyXf7HcYMA==
X-Received: by 2002:a05:6512:14e:b0:4f3:a7ff:60b9 with SMTP id m14-20020a056512014e00b004f3a7ff60b9mr3368806lfo.67.1684757930862;
        Mon, 22 May 2023 05:18:50 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v8 2/5] xen/riscv: introduce setup_initial_pages
Date: Mon, 22 May 2023 15:18:43 +0300
Message-Id: <cd705abf9e44eb7c91895163b73f60080634706f.1684757267.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1684757267.git.oleksii.kurochko@gmail.com>
References: <cover.1684757267.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The idea was taken from xvisor, but the following changes
were made:
* Use only a minimal part of the code, enough to enable the MMU
* rename {_}setup_initial_pagetables functions
* add an argument for setup_initial_mapping to have
  an opportunity to set PTE flags.
* update setup_initial_pagetables function to map sections
  with correct PTE flags.
* Rewrite enable_mmu() to C.
* map the linker address range to the load address range without
  a 1:1 mapping. The mapping is 1:1 only in the case when
  load_start_addr is equal to linker_start_addr.
* add safety checks such as:
  * Xen size is less than the page size
  * the linker address range doesn't overlap the load address
    range
* Rework macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}
* change PTE_LEAF_DEFAULT to RW instead of RWX.
* Remove phys_offset as it is not used now
* Remove the alignment of {map, pa}_start ( &= XEN_PT_LEVEL_MAP_MASK(0) )
  in setup_initial_mapping() as they should already be aligned.
  Check instead that {map, pa}_start are aligned.
* Remove clear_pagetables() as initial pagetables will be
  zeroed during bss initialization
* Remove __attribute__((section(".entry")) for setup_initial_pagetables()
  as there is no such section in xen.lds.S
* Update the argument of pte_is_valid() to "const pte_t *p"
* Add check that Xen's load address is aligned at 4k boundary
* Refactor setup_initial_pagetables() so it maps the linker
  address range to the load address range, and afterwards sets the
  needed permissions for specific sections ( such as .text, .rodata,
  etc. ); otherwise RW permissions are set by default.
* Add function to check that requested SATP_MODE is supported

Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V8:
	- Add parentheses for lvl in pt_index() macros.
	- introduce macros paddr_to_pfn() and pfn_to_paddr() and use them inside
	  paddr_to_pte()/pte_to_paddr().
	- Remove "__" in sfence_vma() and add blanks inside the parentheses of
	  asm volatile.
	- Parenthesize the two & against the || at the start of setup_initial_mapping()
	  function.
	- Code style fixes.
---
Changes in V7:
 	- define CONFIG_PAGING_LEVELS=2 for SATP_MODE_SV32.
 	- update the switch_stack_and_jump() macro: add constraint 'X' for fn,
    a memory clobber, and wrap it in do {} while ( false ).
 	- add noreturn to definition of enable_mmu().
 	- update pt_index() to "(pt_linear_offset(lvl, (va)) & VPN_MASK)".
 	- expand the pte_to_addr()/addr_to_pte() macros in the paddr_to_pte() and
    pte_to_paddr() functions, then drop them.
 	- remove inclusion of <asm/config.h>.
 	- update commit message around definition of PGTBL_INITIAL_COUNT.
 	- remove PGTBL_ENTRY_AMOUNT and use PAGETABLE_ENTRIES instead.
 	- code style fixes
 	- remove permission argument of setup_initial_mapping() function
 	- remove calc_pgtbl_lvls_num() as it's not needed anymore after definition
    of CONFIG_PAGING_LEVELS.
 	- introduce sfence_vma().
 	- remove satp_mode argument from check_pgtbl_mode_support() and use
    RV_STAGE1_MODE directly instead.
 	- change .align to .p2align.
 	- drop inclusion of <asm/asm-offsets.h> from head.S. This change isn't
    necessary for the current patch series.
---
Changes in V6:
 	- move PAGE_SHIFT, PADDR_BITS to the top of page-bits.h
 	- cast argument x of pte_to_addr() macros to paddr_t to avoid risk of overflow for RV32
 	- update type of num_levels from 'unsigned long' to 'unsigned int'
 	- define PGTBL_INITIAL_COUNT as ((CONFIG_PAGING_LEVELS - 1) + 1)
 	- update type of permission arguments, changing it from 'unsigned long' to 'unsigned int'
 	- fix code style
 	- switch 'while' loop to 'for' loop
 	- undef HANDLE_PGTBL
 	- clean root page table after MMU is disabled in check_pgtbl_mode_support() function
 	- align __bss_start properly
 	- remove unnecessary const for paddr_to_pte, pte_to_paddr, pte_is_valid functions
 	- add switch_stack_and_jump macros and use it inside enable_mmu() before jump to
 	  cont_after_mmu_is_enabled() function
---
Changes in V5:
	* Indent fields of pte_t struct
	* Rename addr_to_pte() and ppn_to_paddr() to match their content
---
Changes in V4:
  * use GB() macros instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove unnecessary 'asm' word at the end of #error
  * encapsulate pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr().
  * change type of paddr argument from const unsigned long to paddr_t
  * pte_to_paddr() update prototype.
  * calculate the size of the Xen binary based on the number of page tables
  * use unsigned int instead of uint32_t as its use isn't warranted.
  * remove extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add argument for HANDLE_PGTBL macros instead of curr_lvl_num variable
  * make enable_mmu() noinline to prevent inlining under link-time
    optimization, because of the nature of enable_mmu()
  * add function to check that SATP_MODE is supported.
  * update the commit message
  * update setup_initial_pagetables to set correct PTE flags in one pass
    instead of calling setup_pte_permissions after setup_initial_pagetables()
    as setup_initial_pagetables() isn't used to change permission flags.
---
Changes in V3:
 - update definition of pte_t structure to have a proper size of pte_t
   in case of RV32.
 - update asm/mm.h with new functions and remove unnecessary 'extern'.
 - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
 - update paddr_to_pte() to receive permissions as an argument.
 - add check that map_start & pa_start is properly aligned.
 - move  defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to
   <asm/page-bits.h>
 - Rename PTE_SHIFT to PTE_PPN_SHIFT
 - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses
   and afterwards set up PTE permissions for sections; update the check that
   linker and load addresses don't overlap.
 - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is
   necessary.
 - rewrite enable_mmu in C; add the check that map_start and pa_start are
   aligned on 4k boundary.
 - update the comment for the setup_initial_pagetables function
 - Add RV_STAGE1_MODE to support different MMU modes
 - set XEN_VIRT_START very high to not overlap with load address range
 - align bss section
---
Changes in V2:
 * update the commit message:
 * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK, SIZE,...} and
   introduce XEN_PT_LEVEL_*() and LEVEL_* instead
 * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*()
 * Remove clear_pagetables() functions as pagetables were zeroed during
   .bss initialization
 * Rename _setup_initial_pagetables() to setup_initial_mapping()
 * Make PTE_DEFAULT equal to RX.
 * Update prototype of setup_initial_mapping(..., bool writable) -> 
   setup_initial_mapping(..., UL flags)  
 * Update calls of setup_initial_mapping according to new prototype
 * Remove unnecessary call of:
   _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
 * Define index* in the loop of setup_initial_mapping
 * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
   as we don't have such section
 * make arguments of paddr_to_pte() and pte_is_valid() as const.
 * make xen_second_pagetable static.
 * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
 * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
 * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
 * set __section(".bss.page_aligned") for page tables arrays
 * fix indentations
 * Change '__attribute__((section(".entry")))' to '__init'
 * Remove phys_offset as it isn't used now.
 * Remove the alignment of {map, pa}_start ( &= XEN_PT_LEVEL_MAP_MASK(0) ) in
   setup_initial_mapping() as they should already be aligned.
 * Remove clear_pagetables() as initial pagetables will be
   zeroed during bss initialization
 * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
   as there is no such section in xen.lds.S
 * Update the argument of pte_is_valid() to "const pte_t *p"
---
 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  14 +-
 xen/arch/riscv/include/asm/current.h   |  11 +
 xen/arch/riscv/include/asm/mm.h        |  14 ++
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  61 ++++++
 xen/arch/riscv/include/asm/processor.h |   5 +
 xen/arch/riscv/mm.c                    | 277 +++++++++++++++++++++++++
 xen/arch/riscv/setup.c                 |  11 +
 xen/arch/riscv/xen.lds.S               |   3 +
 10 files changed, 406 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/current.h
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 443f6bf15f..956ceb02df 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += entry.o
+obj-y += mm.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 57c1b33ee5..a7886a5ad3 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -75,12 +75,24 @@
   name:
 #endif
 
-#define XEN_VIRT_START  _AT(UL, 0x80200000)
+#ifdef CONFIG_RISCV_64
+#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
+#else
+#error "RV32 isn't supported"
+#endif
 
 #define SMP_CACHE_BYTES (1 << 6)
 
 #define STACK_SIZE PAGE_SIZE
 
+#ifdef CONFIG_RISCV_64
+#define CONFIG_PAGING_LEVELS 3
+#define RV_STAGE1_MODE SATP_MODE_SV39
+#else
+#define CONFIG_PAGING_LEVELS 2
+#define RV_STAGE1_MODE SATP_MODE_SV32
+#endif
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/include/asm/current.h b/xen/arch/riscv/include/asm/current.h
new file mode 100644
index 0000000000..d87e6717e0
--- /dev/null
+++ b/xen/arch/riscv/include/asm/current.h
@@ -0,0 +1,11 @@
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#define switch_stack_and_jump(stack, fn) do {               \
+    asm volatile (                                          \
+            "mv sp, %0\n"                                   \
+            "j " #fn :: "r" (stack), "X" (fn) : "memory" ); \
+    unreachable();                                          \
+} while ( false )
+
+#endif /* __ASM_CURRENT_H */
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
new file mode 100644
index 0000000000..64293eacee
--- /dev/null
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -0,0 +1,14 @@
+#ifndef _ASM_RISCV_MM_H
+#define _ASM_RISCV_MM_H
+
+#include <asm/page-bits.h>
+
+#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
+#define paddr_to_pfn(pa)  ((unsigned long)((pa) >> PAGE_SHIFT))
+
+void setup_initial_pagetables(void);
+
+void enable_mmu(void);
+void cont_after_mmu_is_enabled(void);
+
+#endif /* _ASM_RISCV_MM_H */
diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
index 1801820294..4a3e33589a 100644
--- a/xen/arch/riscv/include/asm/page-bits.h
+++ b/xen/arch/riscv/include/asm/page-bits.h
@@ -4,4 +4,14 @@
 #define PAGE_SHIFT              12 /* 4 KiB Pages */
 #define PADDR_BITS              56 /* 44-bit PPN */
 
+#ifdef CONFIG_RISCV_64
+#define PAGETABLE_ORDER         (9)
+#else /* CONFIG_RISCV_32 */
+#define PAGETABLE_ORDER         (10)
+#endif
+
+#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
+
+#define PTE_PPN_SHIFT           10
+
 #endif /* __RISCV_PAGE_BITS_H__ */
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
new file mode 100644
index 0000000000..8b7ba91017
--- /dev/null
+++ b/xen/arch/riscv/include/asm/page.h
@@ -0,0 +1,61 @@
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <xen/const.h>
+#include <xen/types.h>
+
+#include <asm/mm.h>
+#include <asm/page-bits.h>
+
+#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))
+
+#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
+#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
+#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
+#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
+#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
+
+#define PTE_VALID                   BIT(0, UL)
+#define PTE_READABLE                BIT(1, UL)
+#define PTE_WRITABLE                BIT(2, UL)
+#define PTE_EXECUTABLE              BIT(3, UL)
+#define PTE_USER                    BIT(4, UL)
+#define PTE_GLOBAL                  BIT(5, UL)
+#define PTE_ACCESSED                BIT(6, UL)
+#define PTE_DIRTY                   BIT(7, UL)
+#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
+
+#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
+#define PTE_TABLE                   (PTE_VALID)
+
+/* Calculate the offsets into the pagetables for a given VA */
+#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
+
+#define pt_index(lvl, va) (pt_linear_offset((lvl), (va)) & VPN_MASK)
+
+/* Page Table entry */
+typedef struct {
+#ifdef CONFIG_RISCV_64
+    uint64_t pte;
+#else
+    uint32_t pte;
+#endif
+} pte_t;
+
+static inline pte_t paddr_to_pte(paddr_t paddr,
+                                 unsigned int permissions)
+{
+    return (pte_t) { .pte = (paddr_to_pfn(paddr) << PTE_PPN_SHIFT) | permissions };
+}
+
+static inline paddr_t pte_to_paddr(pte_t pte)
+{
+    return pfn_to_paddr(pte.pte >> PTE_PPN_SHIFT);
+}
+
+static inline bool pte_is_valid(pte_t p)
+{
+    return p.pte & PTE_VALID;
+}
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/xen/arch/riscv/include/asm/processor.h b/xen/arch/riscv/include/asm/processor.h
index a71448e02e..6db681d805 100644
--- a/xen/arch/riscv/include/asm/processor.h
+++ b/xen/arch/riscv/include/asm/processor.h
@@ -69,6 +69,11 @@ static inline void die(void)
         wfi();
 }
 
+static inline void sfence_vma(void)
+{
+    asm volatile ( "sfence.vma" ::: "memory" );
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
new file mode 100644
index 0000000000..b7f50ec8a4
--- /dev/null
+++ b/xen/arch/riscv/mm.c
@@ -0,0 +1,277 @@
+#include <xen/compiler.h>
+#include <xen/init.h>
+#include <xen/kernel.h>
+#include <xen/pfn.h>
+
+#include <asm/early_printk.h>
+#include <asm/csr.h>
+#include <asm/current.h>
+#include <asm/mm.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+
+struct mmu_desc {
+    unsigned int num_levels;
+    unsigned int pgtbl_count;
+    pte_t *next_pgtbl;
+    pte_t *pgtbl_base;
+};
+
+extern unsigned char cpu0_boot_stack[STACK_SIZE];
+
+#define PHYS_OFFSET ((unsigned long)_start - XEN_VIRT_START)
+#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
+#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
+
+
+/*
+ * Xen is expected to be no larger than 2 MB.
+ * The check in xen.lds.S guarantees that.
+ * At least 3 page tables (in the case of Sv39) are needed to cover 2 MB:
+ * one table per page level, each of PAGE_SIZE = 4 KB.
+ *
+ * One L0 page table can cover 2 MB (512 entries of one table * PAGE_SIZE).
+ *
+ * One more page table might be needed when the Xen load address
+ * isn't 2 MB aligned.
+ */
+#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_root[PAGETABLE_ENTRIES];
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_nonroot[PGTBL_INITIAL_COUNT * PAGETABLE_ENTRIES];
+
+#define HANDLE_PGTBL(curr_lvl_num)                                          \
+    index = pt_index(curr_lvl_num, page_addr);                              \
+    if ( pte_is_valid(pgtbl[index]) )                                       \
+    {                                                                       \
+        /* Find L{ 0-3 } table */                                           \
+        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);                        \
+    }                                                                       \
+    else                                                                    \
+    {                                                                       \
+        /* Allocate new L{0-3} page table */                                \
+        if ( mmu_desc->pgtbl_count == PGTBL_INITIAL_COUNT )                 \
+        {                                                                   \
+            early_printk("(XEN) No initial table available\n");             \
+            /* panic(), BUG() or ASSERT() aren't ready now. */              \
+            die();                                                          \
+        }                                                                   \
+        mmu_desc->pgtbl_count++;                                            \
+        pgtbl[index] = paddr_to_pte((unsigned long)mmu_desc->next_pgtbl,    \
+                                    PTE_VALID);                             \
+        pgtbl = mmu_desc->next_pgtbl;                                       \
+        mmu_desc->next_pgtbl += PAGETABLE_ENTRIES;                          \
+    }
+
+static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
+                                         unsigned long map_start,
+                                         unsigned long map_end,
+                                         unsigned long pa_start)
+{
+    unsigned int index;
+    pte_t *pgtbl;
+    unsigned long page_addr;
+
+    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
+    {
+        early_printk("(XEN) Xen should be loaded at a 4k boundary\n");
+        die();
+    }
+
+    if ( (map_start & ~XEN_PT_LEVEL_MAP_MASK(0)) ||
+         (pa_start & ~XEN_PT_LEVEL_MAP_MASK(0)) )
+    {
+        early_printk("(XEN) map and pa start addresses should be aligned\n");
+        /* panic(), BUG() or ASSERT() aren't ready now. */
+        die();
+    }
+
+    for ( page_addr = map_start;
+          page_addr < map_end;
+          page_addr += XEN_PT_LEVEL_SIZE(0) )
+    {
+        pgtbl = mmu_desc->pgtbl_base;
+
+        switch ( mmu_desc->num_levels )
+        {
+        case 4: /* Level 3 */
+            HANDLE_PGTBL(3);
+        case 3: /* Level 2 */
+            HANDLE_PGTBL(2);
+        case 2: /* Level 1 */
+            HANDLE_PGTBL(1);
+        case 1: /* Level 0 */
+            {
+                unsigned long paddr = (page_addr - map_start) + pa_start;
+                unsigned int permissions = PTE_LEAF_DEFAULT;
+                pte_t pte_to_be_written;
+
+                index = pt_index(0, page_addr);
+
+                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
+                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
+                    permissions =
+                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
+
+                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
+                    permissions = PTE_READABLE | PTE_VALID;
+
+                pte_to_be_written = paddr_to_pte(paddr, permissions);
+
+                if ( !pte_is_valid(pgtbl[index]) )
+                    pgtbl[index] = pte_to_be_written;
+                else
+                {
+                    if ( (pgtbl[index].pte ^ pte_to_be_written.pte) &
+                         ~(PTE_DIRTY | PTE_ACCESSED) )
+                    {
+                        early_printk("PTE override has occurred\n");
+                        /* panic(), <asm/bug.h> aren't ready now. */
+                        die();
+                    }
+                }
+            }
+        }
+    }
+}
+#undef HANDLE_PGTBL
+
+static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
+                                            unsigned long load_start)
+{
+    bool is_mode_supported = false;
+    unsigned int index;
+    unsigned int page_table_level = (mmu_desc->num_levels - 1);
+    unsigned int level_map_mask = XEN_PT_LEVEL_MAP_MASK(page_table_level);
+
+    unsigned long aligned_load_start = load_start & level_map_mask;
+    unsigned long aligned_page_size = XEN_PT_LEVEL_SIZE(page_table_level);
+    unsigned long xen_size = (unsigned long)(_end - start);
+
+    if ( (load_start + xen_size) > (aligned_load_start + aligned_page_size) )
+    {
+        early_printk("Please place Xen within a single superpage, i.e. within "
+                     "XEN_PT_LEVEL_SIZE( {L3 | L2 | L1} ) "
+                     "depending on the expected SATP_MODE\n"
+                     "XEN_PT_LEVEL_SIZE is defined in <asm/page.h>\n");
+        die();
+    }
+
+    index = pt_index(page_table_level, aligned_load_start);
+    stage1_pgtbl_root[index] = paddr_to_pte(aligned_load_start,
+                                            PTE_LEAF_DEFAULT | PTE_EXECUTABLE);
+
+    sfence_vma();
+    csr_write(CSR_SATP,
+              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
+              RV_STAGE1_MODE << SATP_MODE_SHIFT);
+
+    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == RV_STAGE1_MODE )
+        is_mode_supported = true;
+
+    csr_write(CSR_SATP, 0);
+
+    sfence_vma();
+
+    /* Clean MMU root page table */
+    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
+
+    return is_mode_supported;
+}
+
+/*
+ * setup_initial_pagetables:
+ *
+ * Build the initial page tables for Xen as follows:
+ *  1. Determine the number of page table levels.
+ *  2. Initialize the MMU description structure.
+ *  3. Check that the linker address range doesn't overlap
+ *     with the load address range.
+ *  4. Map the linker address range to the load address range
+ *     with RW permissions by default ( the mapping is 1:1
+ *     only when the linker address is equal to the load
+ *     address ).
+ *  5. Set up proper PTE permissions for each section.
+ */
+void __init setup_initial_pagetables(void)
+{
+    struct mmu_desc mmu_desc = { CONFIG_PAGING_LEVELS, 0, NULL, NULL };
+
+    /*
+     * Accesses to _start and _end are always PC-relative, so reading
+     * them yields the load addresses of the start and end of Xen.
+     * To get the linker addresses, LOAD_TO_LINK() must be used.
+     */
+    unsigned long load_start    = (unsigned long)_start;
+    unsigned long load_end      = (unsigned long)_end;
+    unsigned long linker_start  = LOAD_TO_LINK(load_start);
+    unsigned long linker_end    = LOAD_TO_LINK(load_end);
+
+    if ( (linker_start != load_start) &&
+         (linker_start <= load_end) && (load_start <= linker_end) )
+    {
+        early_printk("(XEN) linker and load address ranges overlap\n");
+        die();
+    }
+
+    if ( !check_pgtbl_mode_support(&mmu_desc, load_start) )
+    {
+        early_printk("requested MMU mode isn't supported by the CPU\n"
+                     "Please choose a different one in <asm/config.h>\n");
+        die();
+    }
+
+    mmu_desc.pgtbl_base = stage1_pgtbl_root;
+    mmu_desc.next_pgtbl = stage1_pgtbl_nonroot;
+
+    setup_initial_mapping(&mmu_desc,
+                          linker_start,
+                          linker_end,
+                          load_start);
+}
+
+void __init noreturn noinline enable_mmu(void)
+{
+    /*
+     * Calculate the linker-time address of the mmu_is_enabled
+     * label and update CSR_STVEC with it.
+     * The MMU is configured so that linker addresses are mapped
+     * onto load addresses, so when the linker addresses differ from
+     * the load addresses, enabling the MMU causes an exception
+     * and a jump to the linker-time address.
+     * Otherwise, if the load addresses equal the linker addresses,
+     * the code after the mmu_is_enabled label runs without an exception.
+     */
+    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
+
+    /* Ensure page table writes precede loading the SATP */
+    sfence_vma();
+
+    /* Enable the MMU and load the new pagetable for Xen */
+    csr_write(CSR_SATP,
+              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
+              RV_STAGE1_MODE << SATP_MODE_SHIFT);
+
+    asm volatile ( ".p2align 2" );
+ mmu_is_enabled:
+    /*
+     * The stack should be re-initialized because:
+     * 1. The current stack address is load-time relative, which is a
+     *    problem when the load start address isn't equal to the linker
+     *    start address.
+     * 2. The addresses stored on the stack are also load-time relative,
+     *    which is likewise a problem when the load start address isn't
+     *    equal to the linker start address.
+     *
+     * We can't return to the caller because the stack was reset and the
+     * caller may have stashed variables on it.
+     * Jump to a brand new function instead.
+     */
+
+    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
+                          cont_after_mmu_is_enabled);
+}
+
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 3786f337e0..315804aa87 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -2,6 +2,7 @@
 #include <xen/init.h>
 
 #include <asm/early_printk.h>
+#include <asm/mm.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -26,3 +27,13 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
     unreachable();
 }
+
+void __init noreturn cont_after_mmu_is_enabled(void)
+{
+    early_printk("All set up\n");
+
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index 31e0d3576c..fe475d096d 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -172,3 +172,6 @@ ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
 
 ASSERT(!SIZEOF(.got),      ".got non-empty")
 ASSERT(!SIZEOF(.got.plt),  ".got.plt non-empty")
+
+ASSERT(_end - _start <= MB(2), "Xen too large for early-boot assumptions")
+
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 22 12:19:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537881.837502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V5-0001Gr-Mz; Mon, 22 May 2023 12:18:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537881.837502; Mon, 22 May 2023 12:18:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14V5-0001Gi-Iy; Mon, 22 May 2023 12:18:55 +0000
Received: by outflank-mailman (input) for mailman id 537881;
 Mon, 22 May 2023 12:18:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ae2Q=BL=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q14V4-0000XV-4z
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:18:54 +0000
Received: from mail-lf1-x12e.google.com (mail-lf1-x12e.google.com
 [2a00:1450:4864:20::12e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cf720081-f89a-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 14:18:50 +0200 (CEST)
Received: by mail-lf1-x12e.google.com with SMTP id
 2adb3069b0e04-4f4b0a0b557so1050542e87.1
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 05:18:50 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 8-20020ac24828000000b004f26aabbd6asm961120lft.130.2023.05.22.05.18.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 22 May 2023 05:18:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf720081-f89a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684757930; x=1687349930;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=OekhSlEnpCrynmwo/goaylbLa38lEjHZjQwhuoGYxy0=;
        b=fuhB9mHpKQT3/o2+ATewdSqseNxbmdOuoY4MVh6gOF/5seFHfh/VvPvwIdbgVgGWl2
         YgLPBlOr/XrbD8jsuJFJPxJFxqvHuJSxwGlGS9GSjOW+QvxZkv9dpO8xlRW2GSMBigkP
         F7K3oELRFyf+4Ficlqf0RGpDUt2poi34tvRpPKIgfbY0BcXHEJE3R710BZuSjq8vt9cK
         ndYd20QblTXf+iFBj+eobd5VjTMGjWkEnm0vaZjR34lMd9Sugplux0VGuswLgN54Z/xz
         yrH0TtMIltCr2p1/yCyLCkPeh6ijJ8HNMHnku+g7VYl1S4Vxxgwh+meE5gdpu/AmjpuB
         FLeA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684757930; x=1687349930;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=OekhSlEnpCrynmwo/goaylbLa38lEjHZjQwhuoGYxy0=;
        b=SYbX4U/F32JPs1O7IF4yqCvPwyyaGZRtG7gxyjouRsgBS56fN7F26YGChUNl48zGvF
         AbmukOzwTT52E7Vpj71XRgJZH3cFlQiHwkAWaJwBoKBuXLEZX1IiiCDsjS6Urb7JZ862
         w7KLaZoqtI+2W3yvpLNNaKNV95ZnL7M1ydWr7k9Na6f6B1Kq+EUuMVgHkXvHw4CKdPG6
         uOF9dkrmn+vD4b+iOiZRqg71HZqavBw3yW3/Xy4BZsYoo0mnzpS1Ht6Bb5lEf5JR4lUG
         /JXIZeUaVnguxH/bdtuMgGorokbazVQJ9HiksSjAYMv6X6QtrT7pRcWDamaGu6bTMnkA
         yaxA==
X-Gm-Message-State: AC+VfDxgmi5twGCKceX/TFD0CkcTlELD9t4vGIY4U8xHy9zz61X2iEpD
	c5xzqz9wZCTPZLqCDn8r32y+T4Hdp0I=
X-Google-Smtp-Source: ACHHUZ4OAbUB3t2ZbtNxLXTsfQJ29AZfges+Oaz2iMgvLHGAYH7u83xZpFq5hqtIWR28cQnZ/4IAYw==
X-Received: by 2002:ac2:5298:0:b0:4f2:5aae:937 with SMTP id q24-20020ac25298000000b004f25aae0937mr3118884lfm.64.1684757929933;
        Mon, 22 May 2023 05:18:49 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v8 1/5] xen/riscv: add VM space layout
Date: Mon, 22 May 2023 15:18:42 +0300
Message-Id: <bbdfbf59db6d329d65956839c79e6e42bbf13bb7.1684757267.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1684757267.git.oleksii.kurochko@gmail.com>
References: <cover.1684757267.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Also add an explanation of why the top VA bits are ignored.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V8:
 - Add "#ifdef RV_STAGE1_MODE == SATP_MODE_SV39" instead of "#ifdef SV39"
   in the comment to VM layout description.
 - Update the upper bound of direct map area in VM layout description.
---
Changes in V7:
 - Fix range of frametable range in RV64 layout.
 - Add ifdef SV39 to the RV64 layout comment to make it explicit that
   the description is for SV39 mode.
 - Add missed row in the RV64 layout table.
---
Changes in V6:
 - update comment above the RISCV-64 layout table
 - add Slot column to the table with RISCV-64 Layout
 - update RV-64 layout table.
---
Changes in V5:
* the patch was introduced in the current patch series.
---
 xen/arch/riscv/include/asm/config.h | 36 +++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 763a922a04..57c1b33ee5 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -4,6 +4,42 @@
 #include <xen/const.h>
 #include <xen/page-size.h>
 
+/*
+ * RISC-V64 Layout:
+ *
+ * #ifdef RV_STAGE1_MODE == SATP_MODE_SV39
+ *
+ * From the riscv-privileged doc:
+ *   When mapping between narrower and wider addresses,
+ *   RISC-V zero-extends a narrower physical address to a wider size.
+ *   The mapping between 64-bit virtual addresses and the 39-bit usable
+ *   address space of Sv39 is not based on zero-extension but instead
+ *   follows an entrenched convention that allows an OS to use one or
+ *   a few of the most-significant bits of a full-size (64-bit) virtual
+ *   address to quickly distinguish user and supervisor address regions.
+ *
+ * This means that the top VA bits are simply ignored for the purpose
+ * of translating a VA to a PA.
+ *
+ * ============================================================================
+ *    Start addr    |   End addr        |  Size  | Slot       |area description
+ * ============================================================================
+ * FFFFFFFFC0800000 |  FFFFFFFFFFFFFFFF |1016 MB | L2 511     | Unused
+ * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
+ * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
+ * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
+ *                 ...                  |  1 GB  | L2 510     | Unused
+ * 0000003200000000 |  0000007f80000000 | 309 GB | L2 200-509 | Direct map
+ *                 ...                  |  1 GB  | L2 199     | Unused
+ * 0000003100000000 |  00000031C0000000 |  3 GB  | L2 196-198 | Frametable
+ *                 ...                  |  1 GB  | L2 195     | Unused
+ * 0000003080000000 |  00000030C0000000 |  1 GB  | L2 194     | VMAP
+ *                 ...                  | 194 GB | L2 0 - 193 | Unused
+ * ============================================================================
+ *
+ * #endif
+ */
+
 #if defined(CONFIG_RISCV_64)
 # define LONG_BYTEORDER 3
 # define ELFSIZE 64
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 22 12:42:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:42:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537904.837532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14rP-0006j1-6R; Mon, 22 May 2023 12:41:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537904.837532; Mon, 22 May 2023 12:41:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14rP-0006iu-3G; Mon, 22 May 2023 12:41:59 +0000
Received: by outflank-mailman (input) for mailman id 537904;
 Mon, 22 May 2023 12:41:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oqCu=BL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q14rN-0006in-Q3
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:41:57 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20604.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::604])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0959667a-f89e-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 14:41:56 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9412.eurprd04.prod.outlook.com (2603:10a6:20b:4eb::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 12:41:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.027; Mon, 22 May 2023
 12:41:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0959667a-f89e-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NURwf9W7lfJBn84thO40Cgg5KgkpO5tPFORTDrB24FzMbPTf4HFH0qyLBhMvgm9bvzOOwjy0lUuYAKo6qNJ0FmAZKbnUZXWq1+VkCEATH5NaWUa17huLbplUi+n8zARPBnJmI19IvszBFY/v8pJYE+V/txUaLFY7EQDdtvW5I9XOnnwhqc/sLXq0csmtONS29fAY5V1e20UQaYj6vZtHp8EsMke9XsMiEtKBCTjba0TEdx0vY4TAUCzU8GXUCXjHI79/szP/x9pi4R8+Wd144eA5Vdr3KU9ZAwgebrl9QDaB3P2jpCmGkLDXpWBIv6oJIPoQB4RZ/OPdkUKfjrZ5vQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=djhZqu2mP+2/iJCck3s6urYbtm0UlkyDVzwDbEbfYNQ=;
 b=UBB2Xhvm5CKsIBT+vVSrvQROlfeKMC5jCNOqYROY9TIvJePpQfFB01S0CB/NGC+DoBFAB6RkjSUsPL/wP4ahWv+sSknUu0gqGYC/XgxHvG9uahXQ/Xr2usQlTQ+BkVeUxwf+odouopgBr6TybFbD/SsweDt0RORH3JO2VCemGtzdasqL2yxSIueIwAounzd2Sla1lDYm00xn2x2cnR7RLaAdHLTIDmt7HKCdf0CvoAM2zdQqeWzxW8Ac5PFf015is7H9239uo7rRSCUvUBCVuHnJoIZ+dVNKtJLSuu9G3SXCpGnPHDIz3y6oRFYVW/7uQHKU8DNkRggAw7ZtHCxEpQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=djhZqu2mP+2/iJCck3s6urYbtm0UlkyDVzwDbEbfYNQ=;
 b=0CDkOLc3lgrRYfqX1vkPMZRvHIU8f/RtlsUNj3XNW7Jd4ljRw+AZdUQE1lUA58nEfwzBgqlKHl0RTPSkpDl/eEUyhydoEow0i4FqUUIZ6DldayfJGGDpR3JiVr4ZR0PrC/5X4v0CmXIfwRSTt4WGSsukcdYgc5YuZKX6Qzphw3t9KNMXj8uzf/1+MUz13BAJ+qvp4nwXzg8As/QCZ7svajwY47jDeS0vBiSsEnETwHyjXpcV8fXcqZ+I+P53tyztvZPdMhsBTMvmFF4mCMP2N9Xj8s8izx1rlIkDEh6k6Tw88nsO8JST+ioHbUqfBL5t/UkQTdzWpKcWi/2047/3YQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a2e282a6-8443-2650-dc63-e64b5fb72266@suse.com>
Date: Mon, 22 May 2023 14:41:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Julien Grall <julien@xen.org>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-6-luca.fancellu@arm.com>
 <779e46a5-a3d0-187a-6d15-e1a12f71278f@xen.org>
 <288F275C-D76F-4E89-B8C6-C8D9AD54D1D9@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <288F275C-D76F-4E89-B8C6-C8D9AD54D1D9@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0091.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9412:EE_
X-MS-Office365-Filtering-Correlation-Id: f1f86927-ae9f-4255-9eb3-08db5ac1ec4e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Bs6a04A4FXZDbMr0e3YqjVHHa8BO45oQRwWOmho1fZHuZZZW00jtmEexW5tN/ZYzFzI4O9ID8gB3f/uzYea4v0YgkghwG173CZ8WmP0LFePEUvdB3u2dOJH2PSqDvKdsgpC9KE0/YE7OLuLX3KD+5G4gCKiIhVaRzntkdhcxOkQfig10ixZoDBzF9wffVp5NSOWxwj81I7/rZCz6rXnWTsU3c23P9tqaLkLT+0cKWDxylhKf7l/3xTo/2EMg/phAanIKhCJEFAuJnhTriJ82qYa6+rdK6bF9NHuL02/e0SAbI9UW+OEzBxznCuzHCB34CCr1SkBVJeSZlwLhzkId9OOPUj6DJ9MsjMmf/QnlVRdbeHr0PiOxGJhY3hhBPpSkU40MKluQ3TSBiwSneVdX6yRJoc//oxvRijuSTTVlfrTBhho7hQ7XJCJUyCyE29yXQQFEmJQv+eK2q4dUjbQFFbj9j9tJAO2UYfIGG6oetSDSUcppw4rhVShZeexW41PG2dmcNQeBg93q43HV6LIde2EaFzrhARsy2KL+xzoNaHvlz+NcwS89SaFp5sworZGbODmxWMFDaX7k76da5lPrUh9YF+cXLhjUuYOOmpf/hx9+oMr4YW4iWBi/qrv8PTOJPu0Mne56Np5on2dZRI1iFQ==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(396003)(366004)(346002)(136003)(39860400002)(451199021)(31686004)(4744005)(2906002)(54906003)(478600001)(5660300002)(8676002)(8936002)(41300700001)(316002)(66476007)(66556008)(66946007)(6916009)(36756003)(4326008)(6486002)(53546011)(26005)(6512007)(6506007)(38100700002)(86362001)(2616005)(31696002)(186003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TWQxL25hc0I3UnhESDM4NENsTTRVOFB2d2VsWWJzZnlDUUpGbUlneXd5RWpx?=
 =?utf-8?B?OWtINFZtMnVtb1gwVTM4V2VKbndxZjZVdW9QVXYzcVB4aDUxaW1NZ3pvWVM4?=
 =?utf-8?B?N2FIQ1lBUUNCcVdSM1F3K2tISEhNTFB2bTkvZTlBOFB4Y3V6K0JQZnNWRlhP?=
 =?utf-8?B?OGpSaExhOFN2bXQrR2tldGtQUlRTalA4Y0VJNEJpK1JjVmNzWmxjV1gvTVVQ?=
 =?utf-8?B?Y3BPdzY4WE9JTHgxSWVNNG45dTdsUXVZOHRDVDF2aVNvVnBkQkdmMjQzSjAv?=
 =?utf-8?B?TTgwejU5alU1Y1lmQ0hZdy9wbGxHeXdZVjhHZUltRkJVZXloM0xlZTJhRC9i?=
 =?utf-8?B?RURlOVJLWENsYVNNZzhWd2FHcjUzTkRPNXUzRFZuL3NaQnRqQ05qZjhsQ0Ny?=
 =?utf-8?B?WjNhTmNuUXNTbkFwY1BvOURQV2w2NmRMelJQaFhYbDhpbDRyc1NtQ2lEOUsr?=
 =?utf-8?B?Ky9zd3hEVVN5Y0duQkxxK0I5OTVLQlQvYjVRRmpoZjlhNmpubXNEbjJpK1Q2?=
 =?utf-8?B?UUdLYWRhN01MMVN0UmFJc0xRenZKV0FvZWVLL3lZd0pqY2FXM3BDRHFhc3Uw?=
 =?utf-8?B?ZXVGcUg5SXpwSmEzMndMcnMrQ3phODQ2MUpnblNkL2cveVZVWURpR2h4RnhB?=
 =?utf-8?B?aGVWRnVrVk5kRk8rOVV3cXpZcHp6dXJhcW4zRVBPMHNHdkpjaFc2VUdXTU9W?=
 =?utf-8?B?blIxNXQ1Q255RHdKaGZuNHhoMklYa1pGdWtJZUo0YTZxNmkrWlVGdnZ2aDA4?=
 =?utf-8?B?KzNMUFI3djJtNk5ZNFNOY1YrajJ5bUQ2Qi9iSWRpVkhwRW1IN0dLc0owMGl3?=
 =?utf-8?B?NFFrSjNPVUN6VHc3M0xhSVY2OE5LbnJLRDE0eXFUeUdNbi9WeUh1YTZJbWlV?=
 =?utf-8?B?OTMwcHJEMkdSK2g1UWdkMzhDNElqME1tK1YrbHM5UmJDUyt2cTBtM3dMQ0Fy?=
 =?utf-8?B?bmFuMFZDV1R3a1ZXMHVjb0NGTnFacE1NTjRLUVZFdnoySU4rdWdUVmhlL096?=
 =?utf-8?B?V3hDRG9FTHExY3JsamJpUFhESi9uNmVxRVZUbzk4VnVPRFFCNWpKdGw4elhM?=
 =?utf-8?B?Z0tiSHBKRzk3MXRNckFCTVNoZzlScVZkK0hpa0JMT29JdTZwdEozS1BGRlp0?=
 =?utf-8?B?dWthMk8yT1lFaHhKMEV1czVQeGVmOHZuZWE5ZEZOa2txcmpVcmx1YWtKajBt?=
 =?utf-8?B?WE1DUUdmaXRsOWpvTi9BZmIyWWRUT1lGQ2pwNE50eGFFRFdiSktNYXE4Zytr?=
 =?utf-8?B?UjV2NmlPS2lTNEl6eWYrcGIvNkpXZXpCNkQ0Nmc1VUxWZWZlZXdFM244NEdQ?=
 =?utf-8?B?Rys1MjE2RmlrK1BXd1E3dXUwSytrdFdQRVl0c1ljT2wrSkxCSVVLb3Y2MDlU?=
 =?utf-8?B?Rmo5VzN0RndJR3oxbERlNEJkNTQydExIT0N2b0lRdDY3a0c0M1kxQ1VUNEdw?=
 =?utf-8?B?SjVIVGJkdWFQRmtqQWNybUZyZDUxbTd3bWkvdG8xWjNGK1NLL0FyUDFQWFhK?=
 =?utf-8?B?cVhPL3dCdEdpQ0pYOWc0dUd1bTBuVGR3UXF4ZDRBeC8wRXJjYU1Qa2xBbVQ1?=
 =?utf-8?B?RVVQVCtPZ2dxQTRmc29JcXRqTUh4TDRITHJvWXVpcVJyMWd6SHc1TDcvUDl3?=
 =?utf-8?B?L2k0TDdDMUtnQlBGMEhpT0doSitnZ2xXekJiRkU3R09IR1Z3anZsTXlDV1Bv?=
 =?utf-8?B?Yzc2UHB4RGR4bUNsa2VFejhuQWFtL241VzNRNFBUM0QvVVZpcVh3S2o0VTJH?=
 =?utf-8?B?QWtyY2trcVpzWkYxU2Y5TUFuUHd5enhhYTFZaUttbDMwRVJXVHBwa2JhQUwr?=
 =?utf-8?B?RFJWRGdyWVRreFUrUGo3VDFTd0tESWgrVUUycEsvVmx5UTVkb1g1azJxY1RJ?=
 =?utf-8?B?aStmVko3YXd4cUtTQUw0NnI0UWloNi9YcWtwNm9sR1lYaDlSWmdQTE9NMllx?=
 =?utf-8?B?RDg3R1d4K0R4NWJNOGtyaTVWZnF1ZGdMN1puWkFjdGpjai9BOTAvTmN1OGdH?=
 =?utf-8?B?MUkvenBzSXBRTzFkaHRZYzFJRTJTcm5reTZTTG1xZjIrTWlzYzNRVjBPcXE2?=
 =?utf-8?B?RDJoUDc3T05nUXJSaC90Z1dsaHd0ZVNMZkJoQU9lbnRTWlM2RkxHYlRkNCtV?=
 =?utf-8?Q?ggaY/VQTxAdPy+/QCHcbekl5Q?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f1f86927-ae9f-4255-9eb3-08db5ac1ec4e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 12:41:54.5544
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8tEGAgThDUp9T68NoWNo1iiCPMp+g6e26vfAIc3Us+eeDa/Y+pK9MS9lRst3g9e7TdrwsDoIVGzgqVhMKDfSTA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9412

On 22.05.2023 12:20, Luca Fancellu wrote:
> 
> 
>> On 18 May 2023, at 19:30, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Luca,
>>
>> One more remark.
>>
>> On 24/04/2023 07:02, Luca Fancellu wrote:
>>>  #else /* !CONFIG_ARM64_SVE */
>>>  @@ -46,6 +50,15 @@ static inline unsigned int get_sys_vl_len(void)
>>>      return 0;
>>>  }
>>>  +static inline int sve_context_init(struct vcpu *v)
>>> +{
>>> +    return 0;
>>
>> The call is protected by is_domain_sve(). So I think we want to return an error just in case someone is calling it outside of its intended use.
> 
> Regarding this one, since it should not be called when SVE is not enabled, are you ok if I’ll do this:
> 
> static inline int sve_context_init(struct vcpu *v)
> {
> ASSERT_UNREACHABLE();
> return 0;
> }

Do you need such a stub in the first place? I.e. can't you arrange for
DCE to take care of unreachable function calls, thus letting you get
away with just an always-visible declaration (and no definition when
!ARM64_SVE)?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 12:44:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:44:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537910.837542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14tY-0007ML-J2; Mon, 22 May 2023 12:44:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537910.837542; Mon, 22 May 2023 12:44:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14tY-0007ME-GL; Mon, 22 May 2023 12:44:12 +0000
Received: by outflank-mailman (input) for mailman id 537910;
 Mon, 22 May 2023 12:44:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cvIk=BL=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q14tW-0007Ls-P3
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:44:10 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2075.outbound.protection.outlook.com [40.107.7.75])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5825364c-f89e-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 14:44:08 +0200 (CEST)
Received: from DUZPR01CA0347.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4b8::20) by AS4PR08MB8023.eurprd08.prod.outlook.com
 (2603:10a6:20b:586::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 12:43:35 +0000
Received: from DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4b8:cafe::60) by DUZPR01CA0347.outlook.office365.com
 (2603:10a6:10:4b8::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28 via Frontend
 Transport; Mon, 22 May 2023 12:43:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT028.mail.protection.outlook.com (100.127.142.236) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.13 via Frontend Transport; Mon, 22 May 2023 12:43:34 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Mon, 22 May 2023 12:43:34 +0000
Received: from d3257aa310d4.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 89DC387D-2243-4147-A16E-158F1FA094E0.1; 
 Mon, 22 May 2023 12:43:23 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d3257aa310d4.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 22 May 2023 12:43:23 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAWPR08MB9520.eurprd08.prod.outlook.com (2603:10a6:102:2f2::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 12:43:21 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.028; Mon, 22 May 2023
 12:43:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5825364c-f89e-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pppqCx3b5ghZhsVDbkSYebN87OX1KJ+kKUlMjO1a4xc=;
 b=1vvOTMKUUOC9ch0QEEyX0J7w5e2h87vX++tXzKJz0Zf8KXLC85BnlWynjjN8yU1BPBeXSnVcfD+OYnjwkK8A262x8xCuNqDNreQrcQJlpkXjxvP2Q6u75f/Z3gt/yucw2c05Yrm8EGaADurP9KJztcNfv2cigZpLcN6PRmzumN0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: bd15c8988769b56f
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MHcf/q5mm/Z5QyIyiHD4FxoZFa/QKT3v+xPt8fHhezc9P+F64EeGrhvRmbtw+4Uu/svqT4UJOW7oxUYdoOpWUGCNNXmWXtyX7P7TBI3ykRPeEZHrsZs4c4TUvO/yTAPl2SYv/GeD7S8ScYRPHcRsSR1sE5QacxqXh9/lIzlu69e1P6PWOQExz3Kz9o7tYo4EaJmfawBNamNymKW4bGYjgegX4tjT+s3S4GZdCKRi2MGmug/PVvBc/zRWo1i6Y5OGAS6Ba6HPDbxs7e55Eap9iU++EOWkY5sJ0EArMQDFEDbNzZceoG5PazFVFEEdnkh0nugEnGW5nTGQBq//dVqPyg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=pppqCx3b5ghZhsVDbkSYebN87OX1KJ+kKUlMjO1a4xc=;
 b=czZBtvZM5Id6a+uuXtYbKO+0YgWYqZtbecJwwqIxpPAzyZF7J1oEGWgF4fw8HxRhuIexanTIDO7V7wOrvxpnc6YjaTvmIltjkxAoZRBGRDV8jlhY1N2IUftYOY02dcjRVCV2Inlk0HjTWzWFzqz00QWvViQ2yLe+0u6uKNyPRZTj5EAaMRow6I+6THLQFZ5b86luK9UPrAfkNyqkpu9w0Ll8Tu4uqpEjEnm3YeN+GSPHbmoTdXaPCXMMf3gBiD3mgeMYnqtuKlNNfu4wqxgIaZzDv7yBOqtdVvurWf7MVSoz5ZCAWN+1EepNdeo3lrt0JqR5NY3hofKHdh6MUB59Cg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pppqCx3b5ghZhsVDbkSYebN87OX1KJ+kKUlMjO1a4xc=;
 b=1vvOTMKUUOC9ch0QEEyX0J7w5e2h87vX++tXzKJz0Zf8KXLC85BnlWynjjN8yU1BPBeXSnVcfD+OYnjwkK8A262x8xCuNqDNreQrcQJlpkXjxvP2Q6u75f/Z3gt/yucw2c05Yrm8EGaADurP9KJztcNfv2cigZpLcN6PRmzumN0=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <julien@xen.org>
Subject: Re: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v6 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZdnJ92qfFeC693kK/O74mILlnGa9ggLQAgAXAhACAACdzAIAAAF0A
Date: Mon, 22 May 2023 12:43:21 +0000
Message-ID: <F88CF36C-0A10-41E3-8273-8ACCCC1836BD@arm.com>
References: <20230424060248.1488859-1-luca.fancellu@arm.com>
 <20230424060248.1488859-6-luca.fancellu@arm.com>
 <779e46a5-a3d0-187a-6d15-e1a12f71278f@xen.org>
 <288F275C-D76F-4E89-B8C6-C8D9AD54D1D9@arm.com>
 <a2e282a6-8443-2650-dc63-e64b5fb72266@suse.com>
In-Reply-To: <a2e282a6-8443-2650-dc63-e64b5fb72266@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAWPR08MB9520:EE_|DBAEUR03FT028:EE_|AS4PR08MB8023:EE_
X-MS-Office365-Filtering-Correlation-Id: 1e9f9014-6aeb-4baf-fce0-08db5ac22846
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <E85DE17FD3F7A747B2391C3736780A0B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAWPR08MB9520
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f3edf340-cb76-4e95-5570-08db5ac2200a
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 12:43:34.8953
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1e9f9014-6aeb-4baf-fce0-08db5ac22846
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB8023

> On 22 May 2023, at 13:41, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 22.05.2023 12:20, Luca Fancellu wrote:
>> 
>> 
>>> On 18 May 2023, at 19:30, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Luca,
>>> 
>>> One more remark.
>>> 
>>> On 24/04/2023 07:02, Luca Fancellu wrote:
>>>> #else /* !CONFIG_ARM64_SVE */
>>>> @@ -46,6 +50,15 @@ static inline unsigned int get_sys_vl_len(void)
>>>>     return 0;
>>>> }
>>>> +static inline int sve_context_init(struct vcpu *v)
>>>> +{
>>>> +    return 0;
>>> 
>>> The call is protected by is_domain_sve(). So I think we want to return an error just in case someone is calling it outside of its intended use.
>> 
>> Regarding this one, since it should not be called when SVE is not enabled, are you ok if I'll do this:
>> 
>> static inline int sve_context_init(struct vcpu *v)
>> {
>> ASSERT_UNREACHABLE();
>> return 0;
>> }
> 
> Do you need such a stub in the first place? I.e. can't you arrange for
> DCE to take care of unreachable function calls, thus letting you get
> away with just an always-visible declaration (and no definition when
> !ARM64_SVE)?
> 

Right, I always forget about this approach, I'll try that

> Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 12:46:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:46:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537916.837552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14vR-0007xr-1s; Mon, 22 May 2023 12:46:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537916.837552; Mon, 22 May 2023 12:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14vQ-0007xk-V4; Mon, 22 May 2023 12:46:08 +0000
Received: by outflank-mailman (input) for mailman id 537916;
 Mon, 22 May 2023 12:46:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yzf4=BL=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1q14vQ-0007xe-0L
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:46:08 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9ebba97a-f89e-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 14:46:07 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-50bcb229adaso10984563a12.2
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 05:46:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ebba97a-f89e-11ed-b22d-6b7b168915f2
MIME-Version: 1.0
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-11-jandryuk@gmail.com>
 <1166bdf1-4d54-30bb-bdf9-65dfaeb6b29e@suse.com>
In-Reply-To: <1166bdf1-4d54-30bb-bdf9-65dfaeb6b29e@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 22 May 2023 08:45:53 -0400
Message-ID: <CAKf6xpti23_fmwVWOafjUU+OPHPOA7EWOfkShGT9Qqr9=mR9zQ@mail.gmail.com>
Subject: Re: [PATCH v3 10/14 RESEND] xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, May 8, 2023 at 7:27 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 01.05.2023 21:30, Jason Andryuk wrote:
> > @@ -531,6 +533,100 @@ int get_hwp_para(const struct cpufreq_policy *policy,
> >      return 0;
> >  }
> >
> > +int set_hwp_para(struct cpufreq_policy *policy,
> > +                 struct xen_set_hwp_para *set_hwp)
>
> const?

set_hwp can be const.  policy is passed to hwp_cpufreq_target() &
on_selected_cpus(), so it cannot readily be made const.

> > +{
> > +    unsigned int cpu = policy->cpu;
> > +    struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
> > +
> > +    if ( data == NULL )
> > +        return -EINVAL;
> > +
> > +    /* Validate all parameters first */
> > +    if ( set_hwp->set_params & ~XEN_SYSCTL_HWP_SET_PARAM_MASK )
> > +        return -EINVAL;
> > +
> > +    if ( set_hwp->activity_window & ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK )
> > +        return -EINVAL;
>
> Below you limit checks to when the respective control bit is set. I
> think you want the same here.

I'm not sure whether you mean feature_hwp_activity_window or the bit
in set_params as the control bit.  But, yes, both could use some
additional checking.  IIRC, I wanted to always check
~XEN_SYSCTL_HWP_ACT_WINDOW_MASK, because those bits should never be
set, whether or not the activity window is supported by hardware.

> > +    if ( !feature_hwp_energy_perf &&
> > +         (set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF) &&
> > +         set_hwp->energy_perf > IA32_ENERGY_BIAS_MAX_POWERSAVE )
> > +        return -EINVAL;
> > +
> > +    if ( (set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED) &&
> > +         set_hwp->desired != 0 &&
> > +         (set_hwp->desired < data->hw.lowest ||
> > +          set_hwp->desired > data->hw.highest) )
> > +        return -EINVAL;
> > +
> > +    /*
> > +     * minimum & maximum are not validated as hardware doesn't seem to care
> > +     * and the SDM says CPUs will clip internally.
> > +     */
> > +
> > +    /* Apply presets */
> > +    switch ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_PRESET_MASK )
> > +    {
> > +    case XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE:
> > +        data->minimum = data->hw.lowest;
> > +        data->maximum = data->hw.lowest;
> > +        data->activity_window = 0;
> > +        if ( feature_hwp_energy_perf )
> > +            data->energy_perf = HWP_ENERGY_PERF_MAX_POWERSAVE;
> > +        else
> > +            data->energy_perf = IA32_ENERGY_BIAS_MAX_POWERSAVE;
> > +        data->desired = 0;
> > +        break;
> > +
> > +    case XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE:
> > +        data->minimum = data->hw.highest;
> > +        data->maximum = data->hw.highest;
> > +        data->activity_window = 0;
> > +        data->energy_perf = HWP_ENERGY_PERF_MAX_PERFORMANCE;
> > +        data->desired = 0;
> > +        break;
> > +
> > +    case XEN_SYSCTL_HWP_SET_PRESET_BALANCE:
> > +        data->minimum = data->hw.lowest;
> > +        data->maximum = data->hw.highest;
> > +        data->activity_window = 0;
> > +        if ( feature_hwp_energy_perf )
> > +            data->energy_perf = HWP_ENERGY_PERF_BALANCE;
> > +        else
> > +            data->energy_perf = IA32_ENERGY_BIAS_BALANCE;
> > +        data->desired = 0;
> > +        break;
> > +
> > +    case XEN_SYSCTL_HWP_SET_PRESET_NONE:
> > +        break;
> > +
> > +    default:
> > +        return -EINVAL;
> > +    }
>
> So presets set all the values for which the individual item control bits
> are clear. That's not exactly what I would have expected, and it took me
> reading the code several times until I realized that you write live per-
> CPU data fields here, not fields of some intermediate variable. I think
> this could do with being stated explicitly in the public header (if that
> is indeed the intended model).

The commit message mentioned the idea of using a preset and further
refinement.  The comments above "/* Apply presets */" and below "/*
Further customize presets if needed */" were an attempt to highlight
that.  But you are right that the public header should state this
clearly.

> > +    /* Further customize presets if needed */
> > +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_MINIMUM )
> > +        data->minimum = set_hwp->minimum;
> > +
> > +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_MAXIMUM )
> > +        data->maximum = set_hwp->maximum;
> > +
> > +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF )
> > +        data->energy_perf = set_hwp->energy_perf;
> > +
> > +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED )
> > +        data->desired = set_hwp->desired;
> > +
> > +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_ACT_WINDOW )
> > +        data->activity_window = set_hwp->activity_window &
> > +                                XEN_SYSCTL_HWP_ACT_WINDOW_MASK;
> > +
> > +    hwp_cpufreq_target(policy, 0, 0);
> > +
> > +    return 0;
>
> I don't think you should assume here that hwp_cpufreq_target() will
> only ever return 0. Plus by returning its return value here you
> allow the compiler to tail-call optimize this code.

Thanks for catching that.  Yeah, I made hwp_cpufreq_target() return a
value per your earlier comment, so its value should be returned now.

> > --- a/xen/drivers/acpi/pmstat.c
> > +++ b/xen/drivers/acpi/pmstat.c
> > @@ -398,6 +398,20 @@ static int set_cpufreq_para(struct xen_sysctl_pm_op *op)
> >      return ret;
> >  }
> >
> > +static int set_cpufreq_hwp(struct xen_sysctl_pm_op *op)
>
> const?

Yes

> > --- a/xen/include/public/sysctl.h
> > +++ b/xen/include/public/sysctl.h
> > @@ -317,6 +317,34 @@ struct xen_hwp_para {
> >      uint8_t energy_perf;
> >  };
> >
> > +/* set multiple values simultaneously when set_args bit is set */
>
> What "set_args bit" does this comment refer to?

That should be set_params. IIRC, set_args was the previous name.

> > +struct xen_set_hwp_para {
> > +#define XEN_SYSCTL_HWP_SET_DESIRED              (1U << 0)
> > +#define XEN_SYSCTL_HWP_SET_ENERGY_PERF          (1U << 1)
> > +#define XEN_SYSCTL_HWP_SET_ACT_WINDOW           (1U << 2)
> > +#define XEN_SYSCTL_HWP_SET_MINIMUM              (1U << 3)
> > +#define XEN_SYSCTL_HWP_SET_MAXIMUM              (1U << 4)
> > +#define XEN_SYSCTL_HWP_SET_PRESET_MASK          0xf000
> > +#define XEN_SYSCTL_HWP_SET_PRESET_NONE          0x0000
> > +#define XEN_SYSCTL_HWP_SET_PRESET_BALANCE       0x1000
> > +#define XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE     0x2000
> > +#define XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE   0x3000
> > +#define XEN_SYSCTL_HWP_SET_PARAM_MASK ( \
> > +                                  XEN_SYSCTL_HWP_SET_PRESET_MASK | \
> > +                                  XEN_SYSCTL_HWP_SET_DESIRED     | \
> > +                                  XEN_SYSCTL_HWP_SET_ENERGY_PERF | \
> > +                                  XEN_SYSCTL_HWP_SET_ACT_WINDOW  | \
> > +                                  XEN_SYSCTL_HWP_SET_MINIMUM     | \
> > +                                  XEN_SYSCTL_HWP_SET_MAXIMUM     )
> > +    uint16_t set_params; /* bitflags for valid values */
> > +#define XEN_SYSCTL_HWP_ACT_WINDOW_MASK          0x03ff
> > +    uint16_t activity_window; /* See comment in struct xen_hwp_para */
> > +    uint8_t minimum;
> > +    uint8_t maximum;
> > +    uint8_t desired;
> > +    uint8_t energy_perf; /* 0-255 or 0-15 depending on HW support */
>
> Instead of (or in addition to) the "HW support" reference, could this
> gain a reference to the "get para" bit determining which range to use?

I've removed the fallback 0-15 support locally, so this will be removed.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Mon May 22 12:48:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 12:48:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537920.837562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14x9-00007N-EY; Mon, 22 May 2023 12:47:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537920.837562; Mon, 22 May 2023 12:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q14x9-00007G-A2; Mon, 22 May 2023 12:47:55 +0000
Received: by outflank-mailman (input) for mailman id 537920;
 Mon, 22 May 2023 12:47:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oqCu=BL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q14x8-00006t-CP
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 12:47:54 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2057.outbound.protection.outlook.com [40.107.7.57])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id de18e538-f89e-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 14:47:53 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7135.eurprd04.prod.outlook.com (2603:10a6:800:12c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 12:47:21 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.027; Mon, 22 May 2023
 12:47:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de18e538-f89e-11ed-b22d-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <13494185-96c5-24ff-7ae6-a57d706420f2@suse.com>
Date: Mon, 22 May 2023 14:47:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v8 1/5] xen/riscv: add VM space layout
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1684757267.git.oleksii.kurochko@gmail.com>
 <bbdfbf59db6d329d65956839c79e6e42bbf13bb7.1684757267.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <bbdfbf59db6d329d65956839c79e6e42bbf13bb7.1684757267.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0183.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7135:EE_
X-MS-Office365-Filtering-Correlation-Id: f615dea1-2a8b-419c-3579-08db5ac2af3d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f615dea1-2a8b-419c-3579-08db5ac2af3d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 12:47:21.4397
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yGcRfTKaiaeCXxE+g/z31GcF66pIhk3S3OLshqirtvBqjq0kLDbRvnlBEgx09IGYrhtY7LoNNNhbtFoWhaH2+w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7135

On 22.05.2023 14:18, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -4,6 +4,42 @@
>  #include <xen/const.h>
>  #include <xen/page-size.h>
>  
> +/*
> + * RISC-V64 Layout:
> + *
> + * #ifdef RV_STAGE1_MODE == SATP_MODE_SV39

Nit: #if please, not #ifdef. Also may I stress again that ideally this
would be formatted such that when e.g. grep-ing for SATP_MODE_SV39, the
matching line here would _not_ give the impression of being "a comment
only" (making people possibly pay less attention)? My earlier reference
to the x86 way of doing things still stands.

> + * From the riscv-privileged doc:
> + *   When mapping between narrower and wider addresses,
> + *   RISC-V zero-extends a narrower physical address to a wider size.
> + *   The mapping between 64-bit virtual addresses and the 39-bit usable
> + *   address space of Sv39 is not based on zero-extension but instead
> + *   follows an entrenched convention that allows an OS to use one or
> + *   a few of the most-significant bits of a full-size (64-bit) virtual
> + *   address to quickly distinguish user and supervisor address regions.
> + *
> + * It means that:
> + *   top VA bits are simply ignored for the purpose of translating to PA.
> + *
> + * ============================================================================
> + *    Start addr    |   End addr        |  Size  | Slot       |area description
> + * ============================================================================
> + * FFFFFFFFC0800000 |  FFFFFFFFFFFFFFFF |1016 MB | L2 511     | Unused
> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
> + *                 ...                  |  1 GB  | L2 510     | Unused
> + * 0000003200000000 |  0000007f80000000 | 309 GB | L2 200-509 | Direct map

And another, yet more minor nit: it would be nice if all addresses here
were spelled uniformly, i.e. with upper vs lower case of hex digits
used consistently throughout the table.

Jan

> + *                 ...                  |  1 GB  | L2 199     | Unused
> + * 0000003100000000 |  00000031C0000000 |  3 GB  | L2 196-198 | Frametable
> + *                 ...                  |  1 GB  | L2 195     | Unused
> + * 0000003080000000 |  00000030C0000000 |  1 GB  | L2 194     | VMAP
> + *                 ...                  | 194 GB | L2 0 - 193 | Unused
> + * ============================================================================
> + *
> + * #endif
> + */
> +
>  #if defined(CONFIG_RISCV_64)
>  # define LONG_BYTEORDER 3
>  # define ELFSIZE 64



From xen-devel-bounces@lists.xenproject.org Mon May 22 12:59:56 2023

References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-14-jandryuk@gmail.com>
 <46c46e0b-0fba-2f3f-6289-f68737a3d8d9@suse.com>
In-Reply-To: <46c46e0b-0fba-2f3f-6289-f68737a3d8d9@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 22 May 2023 08:59:24 -0400
Message-ID: <CAKf6xptjgvSsNobYirrF_W1cWUdVVfvX55DFNa=bWy3A24kN5A@mail.gmail.com>
Subject: Re: [PATCH v3 13/14 RESEND] xenpm: Add set-cpufreq-hwp subcommand
To: Jan Beulich <jbeulich@suse.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org

On Mon, May 8, 2023 at 7:56 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 01.05.2023 21:30, Jason Andryuk wrote:
> > @@ -67,6 +68,27 @@ void show_help(void)
> >              " set-max-cstate        <num>|'unlimited' [<num2>|'unlimit=
ed']\n"
> >              "                                     set the C-State limi=
tation (<num> >=3D 0) and\n"
> >              "                                     optionally the C-sub=
-state limitation (<num2> >=3D 0)\n"
> > +            " set-cpufreq-hwp       [cpuid] [balance|performance|power=
save] <param:val>*\n"
> > +            "                                     set Hardware P-State=
 (HWP) parameters\n"
> > +            "                                     optionally a preset =
of one of\n"
> > +            "                                       balance|performanc=
e|powersave\n"
> > +            "                                     an optional list of =
param:val arguments\n"
> > +            "                                       minimum:N  lowest =
... highest\n"
> > +            "                                       maximum:N  lowest =
... highest\n"
> > +            "                                       desired:N  lowest =
... highest\n"
>
> Personally I consider these three uses of "lowest ... highest" confusing:
> It's not clear at all whether they're part of the option or merely mean
> to express the allowable range for N (which I think they do). Perhaps ...
>
> > +            "                                           Set explicit p=
erformance target.\n"
> > +            "                                           non-zero disab=
les auto-HWP mode.\n"
> > +            "                                       energy-perf:0-255 =
(or 0-15)\n"
>
> ..., also taking this into account:
>
>             "                                       energy-perf:N (0-255 =
or 0-15)\n"
>
> and then use parentheses as well for the earlier value range explanations
> (and again below)?

lowest and highest were supposed to reference the values from `xenpm
get-cpufreq-para`.  You removed some later lines that state
"get-cpufreq-para returns lowest/highest".  However, they aren't
enforced limits.  You can program from the range 0-255 and the
hardware is supposed to clip internally, so your idea of
"energy-perf:N (0-255)" seems good to me.

> Also up from here you suddenly start having full stops on the lines. I
> guess you also want to be consistent in your use of capital letters at
> the start of lines (I didn't go check how consistent pre-existing code
> is in this regard).

Looks like the existing code is consistently non-capital letters, but
the full stops are inconsistent.  I'll go with non-capital and full
stop for this addition.

> > @@ -1299,6 +1321,213 @@ void disable_turbo_mode(int argc, char *argv[])
> >                  errno, strerror(errno));
> >  }
> >
> > +/*
> > + * Parse activity_window:NNN{us,ms,s} and validate range.
> > + *
> > + * Activity window is a 7bit mantissa (0-127) with a 3bit exponent (0-7) base
> > + * 10 in microseconds.  So the range is 1 microsecond to 1270 seconds.  A value
> > + * of 0 lets the hardware autonomously select the window.
> > + *
> > + * Return 0 on success
> > + *       -1 on error
> > + */
> > +static int parse_activity_window(xc_set_hwp_para_t *set_hwp, unsigned long u,
> > +                                 const char *suffix)
> > +{
> > +    unsigned int exponent = 0;
> > +    unsigned int multiplier = 1;
> > +
> > +    if ( suffix && suffix[0] )
> > +    {
> > +        if ( strcasecmp(suffix, "s") =3D=3D 0 )
> > +        {
> > +            multiplier =3D 1000 * 1000;
> > +            exponent =3D 6;
> > +        }
> > +        else if ( strcasecmp(suffix, "ms") =3D=3D 0 )
> > +        {
> > +            multiplier =3D 1000;
> > +            exponent =3D 3;
> > +        }
> > +        else if ( strcasecmp(suffix, "us") =3D=3D 0 )
> > +        {
> > +            multiplier =3D 1;
> > +            exponent =3D 0;
> > +        }
>
> Considering the initializers, this "else if" body isn't really needed,
> and ...
>
> > +        else
>
> ... instead this could become "else if ( strcmp() != 0 )".
>
> Note also that I use strcmp() there - none of s, ms, or us are commonly
> expressed by capital letters.

That sounds fine.

> (I wonder though whether μs shouldn't also
> be recognized.)

While that makes sense, I do not plan to change it.  I don't know the
proper way to deal with unicode from C.  (I suppose a memcmp with the
UTF-8 encoding would be possible, but I don't know if there are corner
cases I'm overlooking.)

> > +        {
> > +            fprintf(stderr, "invalid activity window units: \"%s\"\n", suffix);
> > +
> > +            return -1;
> > +        }
> > +    }
> > +
> > +    /* u * multiplier > 1270 * 1000 * 1000 transformed to avoid overflow. */
> > +    if ( u > 1270 * 1000 * 1000 / multiplier )
> > +    {
> > +        fprintf(stderr, "activity window is too large\n");
> > +
> > +        return -1;
> > +    }
> > +
> > +    /* looking for 7 bits of mantissa and 3 bits of exponent */
> > +    while ( u > 127 )
> > +    {
> > +        u += 5; /* Round up to mitigate truncation rounding down
> > +                   e.g. 128 -> 120 vs 128 -> 130. */
> > +        u /= 10;
> > +        exponent += 1;
> > +    }
> > +
> > +    set_hwp->activity_window = (exponent & HWP_ACT_WINDOW_EXPONENT_MASK) <<
> > +                                   HWP_ACT_WINDOW_EXPONENT_SHIFT |
>
> The shift wants parenthesizing against the | and the shift amount wants
> indenting slightly less. (Really this would want to be MASK_INSR().)

I'll use MASK_INSR.

> > +                               (u & HWP_ACT_WINDOW_MANTISSA_MASK);
> > +    set_hwp->set_params |= XEN_SYSCTL_HWP_SET_ACT_WINDOW;
> > +
> > +    return 0;
> > +}
> > +
> > +static int parse_hwp_opts(xc_set_hwp_para_t *set_hwp, int *cpuid,
> > +                          int argc, char *argv[])
> > +{
> > +    int i = 0;
> > +
> > +    if ( argc < 1 ) {
> > +        fprintf(stderr, "Missing arguments\n");
> > +        return -1;
> > +    }
> > +
> > +    if ( parse_cpuid_non_fatal(argv[i], cpuid) == 0 )
> > +    {
> > +        i++;
> > +    }
>
> I don't think you need the earlier patch and the separate helper:
> Whether a CPU number is present can be told by checking
> isdigit(argv[i][0]).

> Hmm, yes, there is "all", but your help text doesn't mention it and
> since you're handling a variable number of arguments anyway, there's
> no need for anyone to say "all" - they can simply omit the optional
> argument.

Most xenpm commands take "all" or a numeric cpuid, so I intended to be
consistent with them.  That was the whole point of
parse_cpuid_non_fatal() - to reuse the existing parsing code for
consistency.

I didn't read the other help text carefully enough to see that the
numeric cpuid and "all" handling was repeated.

For consistency, I would retain parse_cpuid_non_fatal() and expand the
help text.  If you don't want that, I'll switch to isdigit(argv[i][0])
and have the omission of a digit indicate all CPUs as you suggest.
Just let me know what you want.

> Also (nit) note how you're mixing brace placement throughout this
> function.

Will fix.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Mon May 22 13:04:11 2023
Message-ID: <612dc0c0-7039-b63d-220b-8075fc8c35db@suse.com>
Date: Mon, 22 May 2023 15:03:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix
Content-Language: en-US
To: Stefano Stabellini <stefano.stabellini@amd.com>
Cc: andrew.cooper3@citrix.com, roger.pau@citrix.com,
 xenia.ragiadakou@amd.com, xen-devel@lists.xenproject.org
References: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.05.2023 01:44, Stefano Stabellini wrote:
> Hi all,
> 
> After many PVH Dom0 suspend/resume cycles we are seeing the following
> Xen crash (it is random and doesn't reproduce reliably):
> 
> (XEN) [555.042981][<ffff82d04032a137>] R arch/x86/irq.c#_clear_irq_vector+0x214/0x2bd
> (XEN) [555.042986][<ffff82d04032a74c>] F destroy_irq+0xe2/0x1b8
> (XEN) [555.042991][<ffff82d0403276db>] F msi_free_irq+0x5e/0x1a7
> (XEN) [555.042995][<ffff82d04032da2d>] F unmap_domain_pirq+0x441/0x4b4
> (XEN) [555.043001][<ffff82d0402d29b9>] F arch/x86/hvm/vmsi.c#vpci_msi_disable+0xc0/0x155
> (XEN) [555.043006][<ffff82d0402d39fc>] F vpci_msi_arch_disable+0x1e/0x2b
> (XEN) [555.043013][<ffff82d04026561c>] F drivers/vpci/msi.c#control_write+0x109/0x10e
> (XEN) [555.043018][<ffff82d0402640c3>] F vpci_write+0x11f/0x268
> (XEN) [555.043024][<ffff82d0402c726a>] F arch/x86/hvm/io.c#vpci_portio_write+0xa0/0xa7
> (XEN) [555.043029][<ffff82d0402c6682>] F hvm_process_io_intercept+0x203/0x26f
> (XEN) [555.043034][<ffff82d0402c6718>] F hvm_io_intercept+0x2a/0x4c
> (XEN) [555.043039][<ffff82d0402b6353>] F arch/x86/hvm/emulate.c#hvmemul_do_io+0x29b/0x5f6
> (XEN) [555.043043][<ffff82d0402b66dd>] F arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x2f/0x6a
> (XEN) [555.043048][<ffff82d0402b7bde>] F hvmemul_do_pio_buffer+0x33/0x35
> (XEN) [555.043053][<ffff82d0402c7042>] F handle_pio+0x6d/0x1b4
> (XEN) [555.043059][<ffff82d04029ec20>] F svm_vmexit_handler+0x10bf/0x18b0
> (XEN) [555.043064][<ffff82d0402034e5>] F svm_stgi_label+0x8/0x18
> (XEN) [555.043067]
> (XEN) [555.469861]
> (XEN) [555.471855] ****************************************
> (XEN) [555.477315] Panic on CPU 9:
> (XEN) [555.480608] Assertion 'per_cpu(vector_irq, cpu)[old_vector] == irq' failed at arch/x86/irq.c:233
> (XEN) [555.489882] ****************************************
> 
> Looking at the code in question, the ASSERT looks wrong to me.
> 
> Specifically, if you see send_cleanup_vector and
> irq_move_cleanup_interrupt, it is entirely possible to have old_vector
> still valid and also move_in_progress still set, but only some of the
> per_cpu(vector_irq, me)[vector] cleared. It seems to me that this could
> happen especially when an MSI has a large old_cpu_mask.
> 
> While we go and clear per_cpu(vector_irq, me)[vector] in each CPU one by
> one, the state is that not all per_cpu(vector_irq, me)[vector] are
> cleared and old_vector is still set.
> 
> If at this point we enter _clear_irq_vector, we are going to hit the
> ASSERT above.
> 
> My suggestion was to turn the ASSERT into an if. Any better ideas?

While I'm fully willing to believe there is something that needs fixing,
we need to understand what's going wrong before applying any change.
Even more so when the suggestion is the simplest possible, converting an
assertion to an if(). There's no explanation at all what happens in the
"else" case: Are we (silently) leaking vectors then? Any other undue
side effects?

What I'm particularly missing is any connection to suspend/resume. There
my primary suspect would be an issue with/in fixup_irqs().

What might be relevant in the investigation is what the value is that
is found in the array slot. In particular if it was already ~irq, it
may hint at where the issue is coming from and/or that the assertion
may merely need a little bit of relaxing. Allowing for that value in an
array slot would in particular not raise any questions towards "what if
not" (as mentioned above), because that's already the value we store
there anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 13:11:16 2023
Message-ID: <602ff9ef-e573-279e-441f-463ca7613fa6@suse.com>
Date: Mon, 22 May 2023 15:10:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v3 10/14 RESEND] xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-11-jandryuk@gmail.com>
 <1166bdf1-4d54-30bb-bdf9-65dfaeb6b29e@suse.com>
 <CAKf6xpti23_fmwVWOafjUU+OPHPOA7EWOfkShGT9Qqr9=mR9zQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xpti23_fmwVWOafjUU+OPHPOA7EWOfkShGT9Qqr9=mR9zQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0188.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB6769:EE_
X-MS-Office365-Filtering-Correlation-Id: 45b8a08c-5694-45c7-3911-08db5ac5fdbd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 45b8a08c-5694-45c7-3911-08db5ac5fdbd
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 13:11:01.8856
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vTYsvS+wbON6SAR1rUC0zqlDg+VvUYzBA5r6pl4dYWSERzWmN8pH+H7FeDSKWaJsH+VY6NKBkYvqVvYovo6YKA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB6769

On 22.05.2023 14:45, Jason Andryuk wrote:
> On Mon, May 8, 2023 at 7:27 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>> @@ -531,6 +533,100 @@ int get_hwp_para(const struct cpufreq_policy *policy,
>>>      return 0;
>>>  }
>>>
>>> +int set_hwp_para(struct cpufreq_policy *policy,
>>> +                 struct xen_set_hwp_para *set_hwp)
>>
>> const?
> 
> set_hwp can be const.  policy is passed to hwp_cpufreq_target() &
> on_selected_cpus(), so it cannot readily be made const.

I meant only the 2nd parameter, yes.

>>> +{
>>> +    unsigned int cpu = policy->cpu;
>>> +    struct hwp_drv_data *data = per_cpu(hwp_drv_data, cpu);
>>> +
>>> +    if ( data == NULL )
>>> +        return -EINVAL;
>>> +
>>> +    /* Validate all parameters first */
>>> +    if ( set_hwp->set_params & ~XEN_SYSCTL_HWP_SET_PARAM_MASK )
>>> +        return -EINVAL;
>>> +
>>> +    if ( set_hwp->activity_window & ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK )
>>> +        return -EINVAL;
>>
>> Below you limit checks to when the respective control bit is set. I
>> think you want the same here.
> 
> Not sure if you mean feature_hwp_activity_window or the bit in
> set_params as the control bit.  But, yes, they can both use some
> additional checking.  IIRC, I wanted to always check
> ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK, because those bits should never be
> set, whether or not the activity window is supported by hardware.

I took ...

>>> +    if ( !feature_hwp_energy_perf &&
>>> +         (set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF) &&
>>> +         set_hwp->energy_perf > IA32_ENERGY_BIAS_MAX_POWERSAVE )
>>> +        return -EINVAL;
>>> +
>>> +    if ( (set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED) &&
>>> +         set_hwp->desired != 0 &&
>>> +         (set_hwp->desired < data->hw.lowest ||
>>> +          set_hwp->desired > data->hw.highest) )
>>> +        return -EINVAL;

... e.g. this for comparison, where you apply the range check only when
the XEN_SYSCTL_HWP_* bit is set. I think you want to be consistent in
such checking: either you always allow the caller to not care about
fields that aren't going to be consumed when their controlling bit is
off, or you always check validity. Both approaches have their pros and
cons, I think.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 13:21:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 13:21:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537947.837606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q15T1-0006aj-VX; Mon, 22 May 2023 13:20:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537947.837606; Mon, 22 May 2023 13:20:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q15T1-0006ac-Rj; Mon, 22 May 2023 13:20:51 +0000
Received: by outflank-mailman (input) for mailman id 537947;
 Mon, 22 May 2023 13:20:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oqCu=BL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q15T0-0006aT-2f
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 13:20:50 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2070.outbound.protection.outlook.com [40.107.13.70])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 77952a67-f8a3-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 15:20:49 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB8061.eurprd04.prod.outlook.com (2603:10a6:102:bb::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 13:20:19 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.027; Mon, 22 May 2023
 13:20:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77952a67-f8a3-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ds+mSz9gDQ19aoiiZkw6Af6xNzx6tdtjDoiHulaQuHuNXEo9ZPLrQx/s6aE/OhB1WO2el986xu+KrTIh1+EkptJ31bWVMk0BUAin5gdMhwaFMbuVsapnKAXUZ84CiF4NUAoq8T46U/lSZfqILLPLitmeR8ujMSl3UgXJjPYqUCjOHyQUOAa0PIAaXCxujggaW6VJepigv77g5j0iK1KSPpUr+gun5Hkg9Y0SWYjdVeNb5Wplf0JEPI3kNK13i+9O+9vk9xvmeJtOXGIdIThbpZYsWZBrn82tjJXpZIapZwv92RM3gY8IVCdKjM2Mtho2dzA6HXIVz45GaeHUL2MmGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JOMokPvjoGqxgL86YeDqkbzpLR6vssQObSPnnAiJB9s=;
 b=FUAuhGPBZQuZh6m6vVyriKuG9tVFE7oukReVHa0lcjvwW8iEeK/xBUL9O1/uYFpU/RgPYE7BAJImOTA9N0cvymuF2Cl2oo2cP9fbOFc0z2JPVVvRQJPQiH8odkKuYrV5UiWLsyYYFukJdJzQaqB4Q9sjOVPap5a3PQh02MW1a90nTgbvhnJ9P7MGd3hnA/PJbgcZeiM6dqv0dooCi5UimuvC7NSy0u6cI0eW6ZKMKbcEojnwFIjWw39J8gwbNGMK13CDVoC7s3JTC3le2Zc8URweGvWZ9c3X91+MvOP6zblv5kMedlPJpqwZx0F0lhabxYCud9pg9CbbKb21qzCaAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JOMokPvjoGqxgL86YeDqkbzpLR6vssQObSPnnAiJB9s=;
 b=g4MYYTNH5NIh9YqlX/nONUnVZr8WFiE4BO4du5YSb0wWIFIbuLWz78vHwjlGM/V7IlB+tEut9GtEI+cNftnXQ1ZaboPUg++yyiEy2YGmYie6hJmhRes2bKj7pZ5B615zyDvk/tTmX34sKpUH68tK4iyZ6CVUhsyDdgEcYS/OXDr5b+bq0PYpwd1smRlWBvjPpUhCPaVY6QW4gO4JH+M66jWml66W2XaMp59AdM7HpwziTTqCRp3cLlSVPu0KoC6zPs+/0kkG3tosC8WvSqLedJpFrzhx0mwFM/0s1BJiYXcOTUTHeHSy/3Fp8kId/p25sRyBTUrLmjXH1bXt6tsdWQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <47818514-b03a-8094-1c03-24e4c7a67daf@suse.com>
Date: Mon, 22 May 2023 15:20:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v3 13/14 RESEND] xenpm: Add set-cpufreq-hwp subcommand
Content-Language: en-US
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230501193034.88575-1-jandryuk@gmail.com>
 <20230501193034.88575-14-jandryuk@gmail.com>
 <46c46e0b-0fba-2f3f-6289-f68737a3d8d9@suse.com>
 <CAKf6xptjgvSsNobYirrF_W1cWUdVVfvX55DFNa=bWy3A24kN5A@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKf6xptjgvSsNobYirrF_W1cWUdVVfvX55DFNa=bWy3A24kN5A@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AS9PR0301CA0007.eurprd03.prod.outlook.com
 (2603:10a6:20b:468::27) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB8061:EE_
X-MS-Office365-Filtering-Correlation-Id: 64449c2e-08ba-4a80-1a72-08db5ac749cf
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 64449c2e-08ba-4a80-1a72-08db5ac749cf
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 13:20:18.8468
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: eYNvNVRg36YcyxctkyCKCtZ42AtlfRa1ueI8PgMgvJYbTGQ94aolQvBQbEHOQi/cWEpVM3kViHgsbFkbte86Zw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB8061

On 22.05.2023 14:59, Jason Andryuk wrote:
> On Mon, May 8, 2023 at 7:56 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>> +static int parse_hwp_opts(xc_set_hwp_para_t *set_hwp, int *cpuid,
>>> +                          int argc, char *argv[])
>>> +{
>>> +    int i = 0;
>>> +
>>> +    if ( argc < 1 ) {
>>> +        fprintf(stderr, "Missing arguments\n");
>>> +        return -1;
>>> +    }
>>> +
>>> +    if ( parse_cpuid_non_fatal(argv[i], cpuid) == 0 )
>>> +    {
>>> +        i++;
>>> +    }
>>
>> I don't think you need the earlier patch and the separate helper:
>> Whether a CPU number is present can be told by checking
>> isdigit(argv[i][0]).
> 
>> Hmm, yes, there is "all", but your help text doesn't mention it and
>> since you're handling a variable number of arguments anyway, there's
>> no need for anyone to say "all" - they can simply omit the optional
>> argument.
> 
> Most xenpm commands take "all" or a numeric cpuid, so I intended to be
> consistent with them.  That was the whole point of
> parse_cpuid_non_fatal() - to reuse the existing parsing code for
> consistency.
> 
> I didn't read the other help text carefully enough to see that the
> numeric cpuid and "all" handling was repeated.
> 
> For consistency, I would retain parse_cpuid_non_fatal() and expand the
> help text.  If you don't want that, I'll switch to isdigit(argv[i][0])
> and have the omission of a digit indicate all CPUs as you suggest.
> Just let me know what you want.

While I don't want to push you towards something you don't like yourself,
my view on the "all" has been "Why did they introduce that?" It makes
some sense when it's a placeholder to avoid needing to deal with a
variable number of arguments, but even that doesn't apply to all the
pre-existing operations. Note how many functions already have

    if ( argc > 0 )
        parse_cpuid(argv[0], &cpuid);

and {en,dis}able-turbo-mode don't properly mention "all" in their help
text either.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:02:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 14:02:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537954.837621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q167M-0002oJ-QL; Mon, 22 May 2023 14:02:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537954.837621; Mon, 22 May 2023 14:02:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q167M-0002oC-Nd; Mon, 22 May 2023 14:02:32 +0000
Received: by outflank-mailman (input) for mailman id 537954;
 Mon, 22 May 2023 14:02:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dXMk=BL=citrix.com=prvs=499503587=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q167K-0002o4-Cy
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 14:02:30 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 47cc2ea7-f8a9-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 16:02:27 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 22 May 2023 10:02:12 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by MW4PR03MB6635.namprd03.prod.outlook.com (2603:10b6:303:12a::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 14:02:09 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6411.028; Mon, 22 May 2023
 14:02:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47cc2ea7-f8a9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684764147;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=NMkvZl1QB6H1Mk8oacbZNLa9GQY57EkbQ9CLZAexc3M=;
  b=CNTg9+UGIcai9CkrENona8rcQ/qgHUOoXNQLfHM0VKPH5Dqb4E7/jmbS
   UMuL+HY0eemEV1AKRCpJ/W4ass+EujdR2xAN4skHtOQ+hdUmFPoKpiTt+
   9wBQAnZueC3QKS/L0VO/Vlld72eQH1Gv2VgYzsjHXRAegVtp6dqdVWUWW
   0=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 108690935
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jwiqFEZlweyk4wTZloS39R8PnptRKPdxmVgrlcu1WCDNwQY8tB8fz8lDfFUPghtTerYU1JfEEJ3k2ucmkKdKErlCkWhOOfSl5pfrBX6gl7uvRxaHKJlHKKW5uvCbipS3ourn5NQYuj7cS6Bfyf0V2UMvH6DSakonSZYajMyJKDPcih0vm3anFCIss2KKLAxOYvrKHVnYG760d1NGYegh3n2OpK892Kyzqo0Byx+ylaIP3suH8BmKkz0WylV5XIWpqSPyH4ZF039X2NeNIYcZNy0Gdzf18HH4tfPhCqU9Sj6vRXYBpJlaWV8DDBCBk8t/dAbLNWKnPE597pOyLsJbSw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NMkvZl1QB6H1Mk8oacbZNLa9GQY57EkbQ9CLZAexc3M=;
 b=Omr0IMetnoDlz068Nt66eOocEjBDKYHv0eQwL7rHCMFpQcHdF9eTZOQ4Tdo9uO6nAR5iGPXG8838s7QjIFvHT5DXhiFGxUn5dGraYT51xjUlMMpNGlDITJUoztsnfTSi/5tPHZavESW2eIvRHFGnoM3ozkLZJR3ywzzuoWOXpxpSDQlLrnJL4Zn2eX33FR0EkztC7aabCuEhjNd3mGWlXQYlYzL48IKyyO7p0YMuihyVnPr8PCl8RrCDkMRKNiCJkB933jxd6U+n/2fyR8mbxb4om1nnQBxm1XgAssG/+BwZKUJNSIlrxvdPHINeYEJCHUjJ48WWI4BK+uyJh7yWCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NMkvZl1QB6H1Mk8oacbZNLa9GQY57EkbQ9CLZAexc3M=;
 b=KmXzgDf4ntfUu/80u3xTLOkDQl7yOG0W0/S/2FNPMDktse9L1YAAOewdgXf3dACB9ZRik7O6obz45BIlqOeRzYAuUEwzgTNGnDeC1Wy7fTEl0fcBvZ3aLyr3cRRtR6kcMsuq1aTH4KH9wcQNMwyUG70I15Eg946aCg7dYbJvMqo=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <b7822620-d2cf-0486-7e43-acb7adae73e0@citrix.com>
Date: Mon, 22 May 2023 15:02:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 6/6] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230515144259.1009245-1-andrew.cooper3@citrix.com>
 <20230515144259.1009245-7-andrew.cooper3@citrix.com>
 <a545a6c9-db48-9d91-c23b-59ea97def769@suse.com>
 <25421dbc-5889-a33c-37dd-d82476d56ce4@citrix.com>
 <1bef2b0e-04e8-2ca0-cf03-f61cdae484a9@suse.com>
 <b1c56e56-90cc-0f37-4c8a-df1217339e59@citrix.com>
 <22a6bd70-887e-da72-ccff-16b3627463c3@suse.com>
 <54b35fa0-160e-3035-6c22-65a791ed2f84@citrix.com>
 <a53a77e3-6dcd-2668-0f3c-282de93d8b04@suse.com>
 <897bac23-b17d-ec4b-613b-d4d1b4c77d58@citrix.com>
 <9bc8f75b-e46f-48fc-3083-aac30995ec29@suse.com>
In-Reply-To: <9bc8f75b-e46f-48fc-3083-aac30995ec29@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0162.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::30) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|MW4PR03MB6635:EE_
X-MS-Office365-Filtering-Correlation-Id: 78a3d184-3a8e-4cee-e9a4-08db5acd220a
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 78a3d184-3a8e-4cee-e9a4-08db5acd220a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 14:02:09.2670
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: d5RAAy0JwuI4f6Por23xArXRL5T50naonxN/inEzhAsH5BIDV+Kzmg5ymGwZSAwVFwDOA6+cRQH0EEZrTpGJRXgLCnc/KuMGbIBPj/aBcOI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR03MB6635

On 22/05/2023 8:31 am, Jan Beulich wrote:
> On 19.05.2023 17:52, Andrew Cooper wrote:
>> On 17/05/2023 10:20 am, Jan Beulich wrote:
>>> On 16.05.2023 21:31, Andrew Cooper wrote:
>>>> On 16/05/2023 3:53 pm, Jan Beulich wrote:
>>>>> I guess that's no
>>>>> different from other max-only features with dependents, but I wonder
>>>>> whether that's good behavior.
>>>> It's not really something you get a choice over.
>>>>
>>>> Default is always less than max, so however you choose to express these
>>>> concepts, when you're opting-in you're always having to put information
>>>> back in which was previously stripped out.
>>> But my point is towards the amount of data you need to specify manually.
>>> I would find it quite helpful if default-on sub-features became available
>>> automatically once the top-level feature was turned on. I guess so far
>>> we don't have many such cases, but here you add a whole bunch.
>> I'm not suggesting specifying it manually.  That would be a dumb UX move.
>>
>> But you absolutely cannot have "user turns on EIBRS" meaning "turn on
>> ARCH_CAPS too", because a) that requires creating the reverse feature
>> map which is massively larger than the forward feature map, and b) it
>> creates a vulnerability in the guest, and c) it's ambiguous - e.g. what
>> does turning on a sub-mode of AVX-512 mean for sibling sub-modes?
> Feels like a misunderstanding at this point, because what you're describing
> above is not what I was referring to. Instead I was specifically referring
> to "cpuid=...,arch-caps" not having any effect beyond the turning on of the
> MSR itself (with all-zero content). This is even worse with libxl not even
> having a way right now to express something like "arch-caps=..." to enable
> some of the sub-features for guests.

We discussed this on the x86 call.  In summary:

Right now, arch-caps is off-by-default because it doesn't work safely or
nicely.  I'm working on safely first, and nicely will come second.

The end result of my work is going to have arch-caps active by default
and supported.  End users aren't going to be in a position of needing to
specify cpuid="...,arch-caps" in their config files.

Fixing libxl's ability to specify the contents is something that needs to
be done before we can say arch-caps is fully supported, because right now
that is the only way users of xl/libxl have of levelling a VM for migration.

> To explicitly answer the AVX512 part of your reply: Turning on a sub-mode
> alone could be useful as well (with the effect of also turning on the
> main feature, and with or without [policy question] also turning on other
> default-on subfeatures of AVX512). But again - that direction of possible
> "implications" isn't what my earlier reply was about. As you say, reverse
> maps would then also be needed, whereas for what I'm suggesting only the
> deep-deps we already have are necessary: We'd grab the main feature's
> dependencies and AND that with the default mask before ORing into the
> domain's policy.

Having discussed this, I agree, and specifically have an idea for how to
use it for AVX512.  But it is also going to take a fair amount of
rearranging in libxl/libxc to make this work.

I.e., yes, but not immediately right now.
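The deep-deps scheme described above can be sketched in a few lines. This is an illustrative model only, not Xen's actual featureset code; the feature names, dependency map, and default mask below are invented for the example:

```python
# Illustrative model of "grab the main feature's dependencies and AND
# that with the default mask before ORing into the domain's policy".
# Names and masks are invented; Xen's real data lives in its generated
# deep-deps tables.

DEEP_DEPS = {
    # feature -> sub-features that become usable once it is enabled
    "arch-caps": {"rdcl-no", "ibrs-all", "skip-l1dfl", "mds-no"},
}

def enable_with_defaults(policy, feature, default_mask):
    """Enable `feature`, plus those of its dependents that are default-on."""
    dependents = DEEP_DEPS.get(feature, set())
    return policy | {feature} | (dependents & default_mask)

# With skip-l1dfl (hypothetically) not default-on, it stays off:
new_policy = enable_with_defaults(set(), "arch-caps",
                                  {"rdcl-no", "ibrs-all", "mds-no"})
```

The point of the AND with the default mask is that only sub-features the host would have offered by default get enabled implicitly; anything opt-in still needs an explicit request.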

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:15:03 2023
Message-ID: <e0d0de09-2da0-f961-f3d5-576cd7268f32@citrix.com>
Date: Mon, 22 May 2023 15:14:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/4] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-5-andrew.cooper3@citrix.com>
 <1d06869b-1633-7794-c5c9-92d9c0885f19@suse.com>
 <42cd2479-1eba-11c7-26d8-441045c230ed@citrix.com>
 <fb95d033-7a71-7cc1-bb8f-ee2a74d1c5cf@suse.com>
 <a8013bb5-b0bb-6e42-85de-2e12d7b6f83c@citrix.com>
 <678e997a-0e3e-a6b3-1bba-5e66ff74de48@suse.com>
In-Reply-To: <678e997a-0e3e-a6b3-1bba-5e66ff74de48@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 22/05/2023 8:10 am, Jan Beulich wrote:
> On 19.05.2023 16:38, Andrew Cooper wrote:
>> On 19/05/2023 7:00 am, Jan Beulich wrote:
>>> On 17.05.2023 18:35, Andrew Cooper wrote:
>>>> On 17/05/2023 3:47 pm, Jan Beulich wrote:
>>>>> On 16.05.2023 16:53, Andrew Cooper wrote:
>>>>>> @@ -401,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
>>>>>>          cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
>>>>>>      if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
>>>>>>          cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
>>>>>> +    if ( cpu_has_arch_caps )
>>>>>> +        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
>>>>> Why do you read the MSR again? I would have expected this to come out
>>>>> of raw_cpu_policy now (and incrementally the CPUID pieces as well,
>>>>> later on).
>>>> Consistency with the surrounding logic.
>>> I view this as relevant only when the code invoking CPUID directly is
>>> intended to stay.
>> Quite the contrary.  It stays because this patch, and changing the
>> semantics of the print block are unrelated things and should not be
>> mixed together.
> Hmm. On one hand I can see your point, yet otoh we move things in a longer
> term intended direction in other cases where we need to touch code anyway.
> While I'm not going to refuse to ack this change just because of this, I
> don't feel like you've answered the original question. In particular I
> don't see how taking the value from a memory location we've already cached
> it in is changing any semantics here. While some masking may apply even to
> the raw policy (to zap unknown bits), this should be meaningless here. No
> bit used here should be unmentioned in the policy.

The very next thing I'm going to need to do is start synthesizing arch
caps bits for hardware with known properties but without appropriate
enumerations.  This is necessary to make migration work.

Because we have not taken a decision about what the printed block means,
it needs to not change when I start using setup_force_cpu_cap().
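For illustration only, such synthesis might take the following shape. The bit position and model numbers are entirely made up, and this is not Xen's real mechanism (which forces bits in the boot CPU's featureset); it just shows the idea of advertising a property the hardware is known to have but does not enumerate:

```python
# Hypothetical sketch: force a capability bit for hardware known (from a
# curated table) to have the property even though it does not enumerate
# it.  RDCL_NO's position and the model numbers are invented for the
# example, not real data.

RDCL_NO = 1 << 0                      # illustrative bit, not the real layout
KNOWN_RDCL_NO_MODELS = {0xAA, 0xBB}   # hypothetical "known good" models

def synthesize_arch_caps(raw_caps, model):
    """Return arch-caps with known-but-unenumerated bits forced on."""
    caps = raw_caps
    if model in KNOWN_RDCL_NO_MODELS:
        caps |= RDCL_NO               # hardware is known immune; advertise it
    return caps
```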

>>>> Also because the raw and host policies don't get sorted until much later
>>>> in boot.
>>> identify_cpu(), which invokes init_host_cpu_policies(), is called
>>> ahead of init_speculation_mitigations(), isn't it?
>> What is init_host_cpu_policies() ?
> Oops. I did use my own tree as reference during review. See the long
> pending "x86/xstate: drop xstate_offsets[] and xstate_sizes[]" [1]. Maybe
> you simply didn't tell me yet ...
>
>> I have a plan for what it's going to be if prior MSR work hadn't ground
>> to a halt, but it's a bit too late for that now.
>>
>> (To answer the question properly, no the policies aren't set up until
>> just before building dom0, and that's not something that is trivial to
>> change.)
> ... that what I'm doing there is too simplistic?

Raw is fine.  I found complexities with Host when doing that.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:18:08 2023
Message-ID: <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
Date: Mon, 22 May 2023 16:17:50 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Content-Language: en-US
To: Borislav Petkov <bp@alien8.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
 mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
 Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------n0B4FE3QHZoi0kCIT4NnqRK6"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------n0B4FE3QHZoi0kCIT4NnqRK6
Content-Type: multipart/mixed; boundary="------------0FQRRT0t3Hqeswcs3KGFjuki";
 protected-headers="v1"

--------------0FQRRT0t3Hqeswcs3KGFjuki
Content-Type: multipart/mixed; boundary="------------1qimvhvxGrvQzPahjkpLpV2o"

--------------1qimvhvxGrvQzPahjkpLpV2o
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 11.05.23 18:32, Borislav Petkov wrote:
> On Wed, May 10, 2023 at 05:53:15PM +0200, Juergen Gross wrote:
>> Urgh, yes, there is something missing:
>>
>> diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
>> index 031f7ea8e72b..9544e7d13bb3 100644
>> --- a/arch/x86/kernel/cpu/mtrr/generic.c
>> +++ b/arch/x86/kernel/cpu/mtrr/generic.c
>> @@ -521,8 +521,12 @@ u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform)
>>          for (i = 0; i < cache_map_n && start < end; i++) {
>>                  if (start >= cache_map[i].end)
>>                          continue;
>
> So the loop will go through the map until...
>
>> -               if (start < cache_map[i].start)
>> +               if (start < cache_map[i].start) {
>
> ... it reaches the first entry where that is true.
>
>>                          type = type_merge(type, mtrr_state.def_type, uniform);
>
> the @type argument is MTRR_TYPE_INVALID, def_type is WRBACK so what
> this'll do is simply get you the default WRBACK type:
>
> type_merge:
>          if (type == MTRR_TYPE_INVALID)
>                  return new_type;
>
>> +                       start = cache_map[i].start;
>> +                       if (end <= start)
>> +                               break;
>
> Now you break here because end <= start. Why?
>
> You can just as well do:
>
> 	if (start < cache_map[i].start) {
> 		/* region non-overlapping with the region in the map */
> 		if (end <= cache_map[i].start)
> 			return type_merge(type, mtrr_state.def_type, uniform);
>
> 		... rest of the processing ...
>
> In general, I get it that your code is slick but I want it to be
> maintainable - not slick. I'd like for when people look at this, not
> have to add a bunch of debugging output in order to swap the whole
> thing back into their brains.
>
> So mtrr_type_lookup() definitely needs comments explaining what goes
> where.
>
> You can send it as a diff ontop - I'll merge it.

The attached diff is for patch 13.


Juergen
--------------1qimvhvxGrvQzPahjkpLpV2o
Content-Type: text/x-patch; charset=UTF-8; name="diff.patch"
Content-Disposition: attachment; filename="diff.patch"
Content-Transfer-Encoding: base64

ZGlmZiAtLWdpdCBhL2FyY2gveDg2L2tlcm5lbC9jcHUvbXRyci9nZW5lcmljLmMgYi9hcmNo
L3g4Ni9rZXJuZWwvY3B1L210cnIvZ2VuZXJpYy5jCmluZGV4IDAzMWY3ZWE4ZTcyYi4uNDE3
MTc4OGI4NzU0IDEwMDY0NAotLS0gYS9hcmNoL3g4Ni9rZXJuZWwvY3B1L210cnIvZ2VuZXJp
Yy5jCisrKyBiL2FyY2gveDg2L2tlcm5lbC9jcHUvbXRyci9nZW5lcmljLmMKQEAgLTUxOSwx
NSArNTE5LDI2IEBAIHU4IG10cnJfdHlwZV9sb29rdXAodTY0IHN0YXJ0LCB1NjQgZW5kLCB1
OCAqdW5pZm9ybSkKIAkJcmV0dXJuIE1UUlJfVFlQRV9JTlZBTElEOwogCiAJZm9yIChpID0g
MDsgaSA8IGNhY2hlX21hcF9uICYmIHN0YXJ0IDwgZW5kOyBpKyspIHsKKwkJLyogUmVnaW9u
IGFmdGVyIGN1cnJlbnQgbWFwIGVudHJ5PyAtPiBjb250aW51ZSB3aXRoIG5leHQgb25lLiAq
LwogCQlpZiAoc3RhcnQgPj0gY2FjaGVfbWFwW2ldLmVuZCkKIAkJCWNvbnRpbnVlOwotCQlp
ZiAoc3RhcnQgPCBjYWNoZV9tYXBbaV0uc3RhcnQpCisKKwkJLyogU3RhcnQgb2YgcmVnaW9u
IG5vdCBjb3ZlcmVkIGJ5IGN1cnJlbnQgbWFwIGVudHJ5PyAqLworCQlpZiAoc3RhcnQgPCBj
YWNoZV9tYXBbaV0uc3RhcnQpIHsKKwkJCS8qIEF0IGxlYXN0IHNvbWUgcGFydCBvZiByZWdp
b24gaGFzIGRlZmF1bHQgdHlwZS4gKi8KIAkJCXR5cGUgPSB0eXBlX21lcmdlKHR5cGUsIG10
cnJfc3RhdGUuZGVmX3R5cGUsIHVuaWZvcm0pOworCQkJLyogRW5kIG9mIHJlZ2lvbiBub3Qg
Y292ZXJlZCwgdG9vPyAtPiBsb29rdXAgZG9uZS4gKi8KKwkJCWlmIChlbmQgPD0gY2FjaGVf
bWFwW2ldLnN0YXJ0KQorCQkJCXJldHVybiB0eXBlOworCQl9CisKKwkJLyogQXQgbGVhc3Qg
cGFydCBvZiByZWdpb24gY292ZXJlZCBieSBtYXAgZW50cnkuICovCiAJCXR5cGUgPSB0eXBl
X21lcmdlKHR5cGUsIGNhY2hlX21hcFtpXS50eXBlLCB1bmlmb3JtKTsKIAogCQlzdGFydCA9
IGNhY2hlX21hcFtpXS5lbmQ7CiAJfQogCisJLyogRW5kIG9mIHJlZ2lvbiBwYXN0IGxhc3Qg
ZW50cnkgaW4gbWFwPyAtPiB1c2UgZGVmYXVsdCB0eXBlLiAqLwogCWlmIChzdGFydCA8IGVu
ZCkKIAkJdHlwZSA9IHR5cGVfbWVyZ2UodHlwZSwgbXRycl9zdGF0ZS5kZWZfdHlwZSwgdW5p
Zm9ybSk7CiAK
--------------1qimvhvxGrvQzPahjkpLpV2o
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------1qimvhvxGrvQzPahjkpLpV2o--

--------------0FQRRT0t3Hqeswcs3KGFjuki--

--------------n0B4FE3QHZoi0kCIT4NnqRK6
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRreY4FAwAAAAAACgkQsN6d1ii/Ey8p
8Af8CD4ulrGY4bIE4VheoIlbQepsapBmA1yFswSZvFCwEUOD4f/+u0VxDyUASSsZoBjBjQ95PXgn
N+0eu2iex+F+1hUPViRGL6POrIMtb5MEjeLD26awEAPQLawmkjBqQvb46GsKH5ecLRs94L+gCK+i
u3WuTYq6QyOhd4O7odSRtyn+2CXD31tiSaDX9Ur1CDGVszGNMsFFQyIhOim6kHEIYOjHnGlbBZat
lpl+vYhqxrbELUg+JDfjxBg5P4LV/Q/GoUuOOgVM2HlcaKptGe8qrYLQ6baMx1fJJ8RD7hO2is/9
0pYwhkMMpUIWP1kxnU8nFoFyGro1YGwxU0JilZIPGg==
=WxiL
-----END PGP SIGNATURE-----

--------------n0B4FE3QHZoi0kCIT4NnqRK6--


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:23:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 14:23:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537976.837656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q16RC-0006iI-4K; Mon, 22 May 2023 14:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537976.837656; Mon, 22 May 2023 14:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q16RC-0006iB-0t; Mon, 22 May 2023 14:23:02 +0000
Received: by outflank-mailman (input) for mailman id 537976;
 Mon, 22 May 2023 14:23:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fRnz=BL=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q16RA-0006i5-IV
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 14:23:00 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2555ad51-f8ac-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 16:22:56 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6F1821FF55;
 Mon, 22 May 2023 14:22:56 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 2B36113336;
 Mon, 22 May 2023 14:22:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id GbL6CMB6a2QlHAAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 22 May 2023 14:22:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2555ad51-f8ac-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1684765376; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=C4NNUJ0p3UejTtuJtDpLt5athMDKhxztzYwBpT9sLic=;
	b=HU5egji8GIchdffyRW1XlaC48/Na4pnRvt9yZGj60dlQMxB+mBCkOhAI/WIRlHxbQcmcqM
	5pDED6HXvwVzLzjeQIPBPqPlXrviparTVol/pZUC/stOPmDMen4EEdvDaNao9azmee7Jc4
	jGp5mxdk3XsiO2mINsbTQQ4rAstpvkU=
Message-ID: <e06f42d2-191e-a516-d0d3-8f84c9c79471@suse.com>
Date: Mon, 22 May 2023 16:22:55 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] MAINTAINERS: add more xenstore files
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230428132756.8763-1-jgross@suse.com>
 <599f18f9-880a-c016-9e98-4090e135fdf6@suse.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <599f18f9-880a-c016-9e98-4090e135fdf6@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------5SMXePJ5LDu3v9aKf305CuNY"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------5SMXePJ5LDu3v9aKf305CuNY
Content-Type: multipart/mixed; boundary="------------jJof0c0WtZLyPk57OlBDW0HX";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Message-ID: <e06f42d2-191e-a516-d0d3-8f84c9c79471@suse.com>
Subject: Re: [PATCH] MAINTAINERS: add more xenstore files
References: <20230428132756.8763-1-jgross@suse.com>
 <599f18f9-880a-c016-9e98-4090e135fdf6@suse.com>
In-Reply-To: <599f18f9-880a-c016-9e98-4090e135fdf6@suse.com>

--------------jJof0c0WtZLyPk57OlBDW0HX
Content-Type: multipart/mixed; boundary="------------lefej49ZRXVFMdCJn5vmzwrX"

--------------lefej49ZRXVFMdCJn5vmzwrX
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMTcuMDUuMjMgMTc6MTksIEphbiBCZXVsaWNoIHdyb3RlOg0KPiBPbiAyOC4wNC4yMDIz
IDE1OjI3LCBKdWVyZ2VuIEdyb3NzIHdyb3RlOg0KPj4gWGVuc3RvcmUgY29uc2lzdHMgb2Yg
bW9yZSBmaWxlcyB0aGFuIGp1c3QgdGhlIHRvb2xzL3hlbnN0b3JlIGRpcmVjdG9yeS4NCj4+
DQo+PiBBZGQgdGhlbSB0byB0aGUgWEVOU1RPUkUgYmxvY2suDQo+Pg0KPj4gU3VnZ2VzdGVk
LWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPg0KPj4gU2ln
bmVkLW9mZi1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPg0KPj4gLS0tDQo+
PiAgIE1BSU5UQUlORVJTIHwgNSArKysrKw0KPj4gICAxIGZpbGUgY2hhbmdlZCwgNSBpbnNl
cnRpb25zKCspDQo+Pg0KPj4gZGlmZiAtLWdpdCBhL01BSU5UQUlORVJTIGIvTUFJTlRBSU5F
UlMNCj4+IGluZGV4IDBlNWViYTIzMTIuLmYyZjE4ODFiMzIgMTAwNjQ0DQo+PiAtLS0gYS9N
QUlOVEFJTkVSUw0KPj4gKysrIGIvTUFJTlRBSU5FUlMNCj4+IEBAIC02NTMsNiArNjUzLDEx
IEBAIE06CVdlaSBMaXUgPHdsQHhlbi5vcmc+DQo+PiAgIE06CUp1ZXJnZW4gR3Jvc3MgPGpn
cm9zc0BzdXNlLmNvbT4NCj4+ICAgUjoJSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4N
Cj4+ICAgUzoJU3VwcG9ydGVkDQo+PiArRjoJdG9vbHMvaGVscGVycy9pbml0LXhlbnN0b3Jl
LWRvbWFpbi5jDQo+PiArRjoJdG9vbHMvaW5jbHVkZS94ZW5zdG9yZS1jb21wYXQvDQo+PiAr
RjoJdG9vbHMvaW5jbHVkZS94ZW5zdG9yZS5oDQo+PiArRjoJdG9vbHMvaW5jbHVkZS94ZW5z
dG9yZV9saWIuaA0KPj4gK0Y6CXRvb2xzL2xpYnMvc3RvcmUvDQo+PiAgIEY6CXRvb2xzL3hl
bnN0b3JlLw0KPiANCj4gSSB3b25kZXIgaWYgYXQgdGhlIHNhbWUgdGltZSB4ZW5zdG9yZSBz
cGVjaWZpYyBpbmNsdWRlIGZpbGVzIHNob3VsZG4ndA0KPiBoYXZlIGJlZW4gcHVyZ2VkIGZy
b20gTElCUy4NCg0KUHJvYmFibHksIHllcy4gSSdsbCBzZW5kIGEgcGF0Y2guDQoNCg0KSnVl
cmdlbg0KDQo=
--------------lefej49ZRXVFMdCJn5vmzwrX
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------lefej49ZRXVFMdCJn5vmzwrX--

--------------jJof0c0WtZLyPk57OlBDW0HX--

--------------5SMXePJ5LDu3v9aKf305CuNY
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRrer8FAwAAAAAACgkQsN6d1ii/Ey9Z
Agf+LTEmp0N1mzSGVvL1KnipjTSQEzhYJaWZRKRYXX8G9pjJ/Pi8DBHTEHivBhsptSx6SwrYSC+N
FmABEz8RAc5XsXLv8X7LbCbKA1tC4BaZxPZmc2sEK4ALrgf9eUkpKP1asZkLaXC0PfP+0cYKzvpN
A8b9Y0eNXtZ+ljW08iyp+B1jcponHnqVAwPz1j3dZ/+ty2gsNiD35J3JMOZj1Dl6VGTsHwlGzKzb
Tru7/L3kR5sbtqoCANPciHu/NbQuE2LNboKhGz+72IpsKDiGWeNmid1UF1x2ukaojWvfnMGWb+vt
VnDcB2S1UTlpqlVnyaMoWJaQrrgQ+JqKLb7cPVWLDw==
=dMrK
-----END PGP SIGNATURE-----

--------------5SMXePJ5LDu3v9aKf305CuNY--


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:24:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 14:24:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.537980.837666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q16Sf-0007Fo-EK; Mon, 22 May 2023 14:24:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 537980.837666; Mon, 22 May 2023 14:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q16Sf-0007Fh-BL; Mon, 22 May 2023 14:24:33 +0000
Received: by outflank-mailman (input) for mailman id 537980;
 Mon, 22 May 2023 14:24:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oqCu=BL=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q16Sd-0007FZ-Iq
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 14:24:31 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::628])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5c0002e7-f8ac-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 16:24:28 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VI1PR04MB7070.eurprd04.prod.outlook.com (2603:10a6:800:123::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 14:24:25 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.027; Mon, 22 May 2023
 14:24:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c0002e7-f8ac-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BXKuecOZgCnbw26bCWrjmW0w4eU6hCUYYLtv7QL75v9xzbqvoB9204OPV4/nw5mgjOFmJ2JojSA8+4jKe2/T3fmyTvvGDmQFj4VAqi+vWYAjsd+SDQz5ShLJUdqp/LwsJBKYyek1oBxvjm2O/gj/QfIdnXBqy9O371ZWW38zeNRNPwUgWjlXHw2YC1zNdqFwv6Zlv0ilA6coFCKvCXqjPNgof8V4J9XcYJ3kV1/C8XRm7LihwDYCGULR1lYanJ49cC+ChXkXfvVzMb4NrG94E6iJVSvp46Jgghrosf1FqKiUwHa7DSq68paZLKaT3LViLFEtJA+3nvB5qIErIsno+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LJ8S4kwInCreaZDstvSczi/tr9qkWZ0yKrweweCBVU0=;
 b=FzOzfXeGfIXpTRMBFZvfd8ixIG5O4rlaIbJQ8fBBHOPDm5DwBuQYLuV6LgoLOkkkgMEvY9PCi7aGbS1nG31eV5PtYw9K92If6KVbIYR/cjf7VA2b/kXswP7SdS2ZEm1H/2amv5HLv5U42hjbIuiuSFPKsmYPWdlydxIdSJiQwQeRkQA5WDxiUJdbVV8+w/CIXg7JeZW1pgbv5wsxvcO4aa35/1W/VHJe5jAdh2o7IefhwtKDvRSdYdkHleKmDjMF8BjafKHirZE1vQWIT5cF9xCY54K1mZZkhLUB6ZzOvsCR4d/vZ+6j6JE2uDwncBFQJ0yybiiUX8gkCfsTFJkxBw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LJ8S4kwInCreaZDstvSczi/tr9qkWZ0yKrweweCBVU0=;
 b=DFCO7CUDFGcKz+OKfbSVewpn2ie9bwBv/IFynFNHSaTocTL4W5qTw3mzDp966JcHjdyrkDN/cRHv3si6snu/ocavXH+SJ1itKa/T08p1icBlBebKnbjrsfRdlrH1P0IxZBzpPfvGzhUHwAFyoE3Z5hN+oYGKO1WXEI4up0rhKZU1fB8IiB1uLTw4kvvls5mcjltGdR65ifRNFROs70NluWN/Ee2TzHEEwTbNdDgrhd31WH5dtr7Yl7jd0XqoMVEZOPZ3/tk9f47J4INDS9Wqoi2IEBtLfhcJ/7vb9YOza5PcL9JoFSeVAZ44eQ6YH5x4SXcL5Fk5gZfjMdDNGHQnng==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0ddd0533-635d-48c6-7f19-d8ef2ee04302@suse.com>
Date: Mon, 22 May 2023 16:24:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v8 2/5] xen/riscv: introduce setup_initial_pages
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1684757267.git.oleksii.kurochko@gmail.com>
 <cd705abf9e44eb7c91895163b73f60080634706f.1684757267.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cd705abf9e44eb7c91895163b73f60080634706f.1684757267.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0122.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7070:EE_
X-MS-Office365-Filtering-Correlation-Id: a0513f71-e67b-4be2-7b1e-08db5ad03e18
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9pnTYj0kX1HPgrvAyGHWQZ1rJHoZUTkCVGL7EFv3YB2QrKbVB0pa1DT4dsEr9lfpbY216Lv2NhZmkiyfXmLTzS/WltaYtt9Pg6HFau3YXeIlaKgd4u9CQ/5UqjjCTajo14cdYfcP0H5eroSm02BCVIIQxXSTqDgFTbBe9Vra6LJyNkxMmEHMlRUZ0mY1+S+/k871htZgknUIyGBTvNmJ33N6Rell3unvm7BHbq2QnAoFwGPdZUjke4PZDUSlTd1a8hgTCsT4zudkuDFrTnr3Zugc27SwTZQrDXap3tmD7y3UOOuNDcKiYifQyPVLSfZsL9rWKMQtS3i5cKrUMCO/uxd9u079uCen/mG5U87Mlv5vOjaPgUuCzRxgooEyf34u9l9qEC7BAqP9Crydamr9NUm/h5a8SkkzspHmhUDNMebWbc/EQsMouNMLTlVNmxxmocINsdBvPs4D49+yyrpJOOdscfESHs844siwSX4QkKTLi3j+WnKXFv1oQjrtJimAzkqcImnOC4IYsJAA3/ekkhTrr55vPCm2ZighsVdr98QWCbuBUaxpD99H2o7VWr5DFnwFgAvNH23LuBJwIVPU8ew0hUECiIlXlB1KA/qq3mP0iC7GwQ3l/L2t6YX4wgYl8GvTEaUirMGPvoX9EvIXGw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(39850400004)(366004)(376002)(346002)(396003)(451199021)(6666004)(31686004)(86362001)(316002)(54906003)(478600001)(41300700001)(6486002)(66946007)(4326008)(66556008)(66476007)(6916009)(38100700002)(8676002)(8936002)(5660300002)(83380400001)(2616005)(36756003)(31696002)(53546011)(26005)(2906002)(186003)(6512007)(6506007)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WmF5cVVCQ2dsMm5qeERSU2VuZVJYcnN0K3JkZG4yOEs2VHlMcW4vdjZTMnAr?=
 =?utf-8?B?dmdOSDdDb1pzOWQrRDUyWmVQelZCM0NlWThGeGhvRVpVZTk4ZDhmV0ZkRXNC?=
 =?utf-8?B?R3JKTUUxYXl6V3g0clJWOUdnU0NtUHVVTkdiTGVBc2pRZEQvRVB1NzVtUUVY?=
 =?utf-8?B?bzVpZ1N0L0hmOUVtZGgyanVGYXJsVFh5aVV1YzYvRmRxeUZlb1VDeEU2TW5l?=
 =?utf-8?B?aWlrSGsxTk9QZWlvWTMvQzY5Wm1zenJVQWNWZVJCYmtmOGRMNk41RzFNYnJl?=
 =?utf-8?B?MjdTandOVVhzeEdESGJxZnBYa3FEa2piUFU1THA1ck1IYmZmTUo5WVd2Q0Fk?=
 =?utf-8?B?bWJFakpBV3Btd0RxbXN6UkFFc1NJbXlKQzlSRnN6Wms5dU0weWdoRVBwV01m?=
 =?utf-8?B?Zi9VSmdmY21VVDFRWU1rd1JYZFh2UTZjUHI3WFVocmhHYUdTaVB3NzJJS095?=
 =?utf-8?B?MEgvM1B3dE9iS3BVTWxmc1lIOHIxcHo2U2c0OUtwRFBuWGFOcDlVbVR0ajQ1?=
 =?utf-8?B?ZUZzMTNJT1Brcjl2a1hYRWpyOFdlNU1zZVhkVVQwYzJ2UTBUN3ZwQXA4RjFy?=
 =?utf-8?B?V0hZMm9tb2s2K0t5ZlBCZUxqblRhM0VyeHNDelU4RHRWSmhzSWdxa1IzbWF1?=
 =?utf-8?B?Y21DdkJaaVQzTkZaek5rL3lVbytNclVQZ0d6VzZqVGxJM3lSRDNReVA5RTh2?=
 =?utf-8?B?OW0wTnIxZ2VnTlE3cVVWQ01ydER0OGxiK2FrOWVRQ0F0UWtFU2piYkgxWjVl?=
 =?utf-8?B?RmdlQlIwNW9QUXlCVTVYZ2l1YU1wK0gxUVVxS0ZaZjJORVhXbGVYZ20rbEhO?=
 =?utf-8?B?YnpyZ1lZSE92NW5NS0dCNURQWjlBSWpxZHpSWWFHd3VFYXByMS9UN2xHejIv?=
 =?utf-8?B?Ny91NjBNd05FRHpodXBRazZnYkNkc1AyaCs4UFhUb1F0MjZjNW1LcFpqL3lV?=
 =?utf-8?B?YzI1dXFUYnNRd1hZR1V3QWpqZWZTUU51bWhwVHB4S2tkdVFIeWwxNGM1L1E1?=
 =?utf-8?B?QUJEODVFNU0zZ0wrNU1UWnBHSGZCWnVPYkEyMWsramRTaTNIK0pvVXVXK3dw?=
 =?utf-8?B?aTBKZ1U0b2czajBXOFlWOG5hc0xoTDY2R0c0RVhEQ0hyOStQdURuUC9QTmEy?=
 =?utf-8?B?ZEhsLzBkS0JJeTc5OWRCKzcwSFFFdjBQQmc1VDJJeitTaXFkdEp4WDVYTUk2?=
 =?utf-8?B?MEMySDgvNFJtekJDMk1kWkVhQ2tEdFg4UWdBZDNFT1V5ZW5KbkxyMFFndXgv?=
 =?utf-8?B?T1BQUHZKZkRQcVcyVHpVWko0YkxWbW52Y2p0d1dhYmZpMWdBdWVsRnNVd1l2?=
 =?utf-8?B?YThncnVEOFdiUEVqcFVTai9seWRVZnF6WlV5THFQUkJpaTN4MDZFY2FYOXZa?=
 =?utf-8?B?N0dhS2dNZnlpNmE0eDRzYk9UZUlxdURzU0VFRkV0aVo5MkJVODFVNldvRWZi?=
 =?utf-8?B?RExTZU1TL0h0cnRva2ZNL2tNb1kvMURhc2pCU01HSDB3SitRRTdFMFJaNWpt?=
 =?utf-8?B?TFBHeXVsN3A2T0hDblhwY211QlQ4ZGFxWkhZbUZDTjBrQW5VVWhwVld0cUdq?=
 =?utf-8?B?dlN5TmIraG02M0tJRlpxK3NVb1dPT1d3a2pkdHN6ZzUvNDI3Ky9tVVhGTTZ2?=
 =?utf-8?B?WmNDbjhXVnYzRVkyU1RhUEQySTBPbDhvRHVieW5Yem9namJBK3hLaTZPMy9G?=
 =?utf-8?B?b1hNR21ucTFMcHVJTmlwUkozcm9xVTUyMUFlZjlyY1NYMUg4UDRlNGt2STR1?=
 =?utf-8?B?SDhiSXhIWHRtNllxbFBTREFxUUxMVmJKdzUyeEMvRW84czhlL0hKR0hSM2Fy?=
 =?utf-8?B?ZDdkZjYvb2FPNnVMd3lSYmgvalErVmJVRU1COEwyY3U4M2ZqU0hTUlVhNllS?=
 =?utf-8?B?TGhNR0JRQkIrS0xtV3lKOTNXd3lTV3RSQ3BOelFnSHZtb0tHcVVuWml6Tmdn?=
 =?utf-8?B?b1AwWmxHSWo4TDFkbm9UeCs3QjVxYTlsNngvRnUvODZNKzBhejFUTU1ZNm9T?=
 =?utf-8?B?Z3NVREZPWStnVEozcVBNVjlyZzlRZFdjVUJRVEo2WjJwenNzVWVzazRaaUlr?=
 =?utf-8?B?anUrRFdaT0d1bjlhWW56NGdnR1RMQSsrVk1xeTJiYXlTLzl2KzVkb3hoZVJs?=
 =?utf-8?Q?2g8rW4Aog4oSmXndRHX39FHtC?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a0513f71-e67b-4be2-7b1e-08db5ad03e18
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 14:24:24.6578
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0S3Lh9HQ2O3uumMt1NEHjpDsSQ0/sVZ4KJjE8KU1K5uJb9xUZyclqqgL2du+pv794gDIOmdWiQ+/ugvu/D2nLQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7070

On 22.05.2023 14:18, Oleksii Kurochko wrote:
> The idea was taken from xvisor but the following changes
> were made:
> * Use only a minimal part of the code, enough to enable the MMU
> * rename {_}setup_initial_pagetables functions
> * add an argument for setup_initial_mapping to have
>   an opportunity to set PTE flags.
> * update setup_initial_pagetables function to map sections
>   with correct PTE flags.
> * Rewrite enable_mmu() to C.
> * map linker addresses range to load addresses range without
>   1:1 mapping. It will be 1:1 only in case when
>   load_start_addr is equal to linker_start_addr.
> * add safety checks such as:
>   * Xen size is less than page size
>   * linker addresses range doesn't overlap load addresses
>     range
> * Rework macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}
> * change PTE_LEAF_DEFAULT to RW instead of RWX.
> * Remove phys_offset as it is not used now
> * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);
>   in setup_initial_mapping() as they should already be aligned.
>   Add a check that {map, pa}_start are aligned.
> * Remove clear_pagetables() as initial pagetables will be
>   zeroed during bss initialization
> * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
>   as there is no such section in xen.lds.S
> * Update the argument of pte_is_valid() to "const pte_t *p"
> * Add check that Xen's load address is aligned at 4k boundary
> * Refactor setup_initial_pagetables() so it maps the linker
>   address range to the load address range, then sets the needed
>   permissions for specific sections (such as .text, .rodata, etc.);
>   otherwise RW permissions are set by default.
> * Add function to check that requested SATP_MODE is supported
> 
> Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one more nit and a remark:

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/page.h
> @@ -0,0 +1,61 @@
> +#ifndef _ASM_RISCV_PAGE_H
> +#define _ASM_RISCV_PAGE_H
> +
> +#include <xen/const.h>
> +#include <xen/types.h>
> +
> +#include <asm/mm.h>
> +#include <asm/page-bits.h>
> +
> +#define VPN_MASK                    ((unsigned long)(PAGETABLE_ENTRIES - 1))

Just as a remark - this could also be just

#define VPN_MASK                    (PAGETABLE_ENTRIES - 1UL)

> --- /dev/null
> +++ b/xen/arch/riscv/mm.c
> @@ -0,0 +1,277 @@
> +#include <xen/compiler.h>
> +#include <xen/init.h>
> +#include <xen/kernel.h>
> +#include <xen/pfn.h>
> +
> +#include <asm/early_printk.h>
> +#include <asm/csr.h>
> +#include <asm/current.h>
> +#include <asm/mm.h>
> +#include <asm/page.h>
> +#include <asm/processor.h>
> +
> +struct mmu_desc {
> +    unsigned int num_levels;
> +    unsigned int pgtbl_count;
> +    pte_t *next_pgtbl;
> +    pte_t *pgtbl_base;
> +};
> +
> +extern unsigned char cpu0_boot_stack[STACK_SIZE];
> +
> +#define PHYS_OFFSET ((unsigned long)_start - XEN_VIRT_START)
> +#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
> +#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
> +
> +

Nit: No double blank lines please.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:24:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 14:24:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <d87557e4a82419fc881182a03f39bb13cacf1384.camel@gmail.com>
Subject: Re: [PATCH v8 1/5] xen/riscv: add VM space layout
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, 
 xen-devel@lists.xenproject.org
Date: Mon, 22 May 2023 17:24:45 +0300
In-Reply-To: <13494185-96c5-24ff-7ae6-a57d706420f2@suse.com>
References: <cover.1684757267.git.oleksii.kurochko@gmail.com>
	 <bbdfbf59db6d329d65956839c79e6e42bbf13bb7.1684757267.git.oleksii.kurochko@gmail.com>
	 <13494185-96c5-24ff-7ae6-a57d706420f2@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.1 (3.48.1-1.fc38) 
MIME-Version: 1.0

On Mon, 2023-05-22 at 14:47 +0200, Jan Beulich wrote:
> On 22.05.2023 14:18, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/include/asm/config.h
> > +++ b/xen/arch/riscv/include/asm/config.h
> > @@ -4,6 +4,42 @@
> >  #include <xen/const.h>
> >  #include <xen/page-size.h>
> >  
> > +/*
> > + * RISC-V64 Layout:
> > + *
> > + * #ifdef RV_STAGE1_MODE == SATP_MODE_SV39
> 
> Nit: #if please, not #ifdef. Also may I stress again that ideally this
> would be formatted such that when e.g. grep-ing for SATP_MODE_SV39, the
> matching line here would _not_ give the impression of being "a comment
> only" (making people possibly pay less attention)? My referral to the
> x86 way of doing things remains.
Oh, I double checked the x86's config.h and understood what you mean ( my
editor's colour scheme drew it as a comment, so I missed that there is no
'*' at the start of the "#ifndef CONFIG_BIGMEM" line ):
  the '*' should be removed at the start of the "#ifdef RV..." line.
> 
> > + * From the riscv-privileged doc:
> > + *   When mapping between narrower and wider addresses,
> > + *   RISC-V zero-extends a narrower physical address to a wider size.
> > + *   The mapping between 64-bit virtual addresses and the 39-bit usable
> > + *   address space of Sv39 is not based on zero-extension but instead
> > + *   follows an entrenched convention that allows an OS to use one or
> > + *   a few of the most-significant bits of a full-size (64-bit) virtual
> > + *   address to quickly distinguish user and supervisor address regions.
> > + *
> > + * It means that:
> > + *   top VA bits are simply ignored for the purpose of translating to PA.
> > + *
> > + * ============================================================================
> > + *    Start addr    |   End addr       |  Size  | Slot       | area description
> > + * ============================================================================
> > + * FFFFFFFFC0800000 | FFFFFFFFFFFFFFFF |1016 MB | L2 511     | Unused
> > + * FFFFFFFFC0600000 | FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
> > + * FFFFFFFFC0200000 | FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
> > + * FFFFFFFFC0000000 | FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
> > + *                 ...                 |  1 GB  | L2 510     | Unused
> > + * 0000003200000000 | 0000007f80000000 | 309 GB | L2 200-509 | Direct map
> 
> And another, yet more minor nit: Would be nice if all addresses here
> were spelled uniformly, i.e. also with upper vs lower case of hex
> digit letter consistent.
Sure. I will make all upper case. Thanks.
> 
> > + *                 ...                 |  1 GB  | L2 199     | Unused
> > + * 0000003100000000 | 00000031C0000000 |  3 GB  | L2 196-198 | Frametable
> > + *                 ...                 |  1 GB  | L2 195     | Unused
> > + * 0000003080000000 | 00000030C0000000 |  1 GB  | L2 194     | VMAP
> > + *                 ...                 | 194 GB | L2 0 - 193 | Unused
> > + * ============================================================================
> > + *
> > + * #endif
> > + */
> > +
> >  #if defined(CONFIG_RISCV_64)
> >  # define LONG_BYTEORDER 3
> >  # define ELFSIZE 64
> 
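I.e. the grep-friendly shape (following the x86 config.h convention) would be
roughly this sketch, keeping the patch's names:

```c
/*
 * RISC-V64 Layout:
 *
#if RV_STAGE1_MODE == SATP_MODE_SV39
 *
 *  ... address map table ...
 *
#endif
 */
```

so that a grep for SATP_MODE_SV39 hits a line that does not start with '*'.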

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:29:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 14:29:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <0033f950-eacd-6af5-8256-38f4cbea87e3@amd.com>
Date: Mon, 22 May 2023 16:28:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v3 1/6] xen/arm: Move is_protected flag to struct device
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Rahul Singh <rahul.singh@arm.com>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
 <20230518210658.66156-2-stewart.hildebrand@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230518210658.66156-2-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Stewart,

On 18/05/2023 23:06, Stewart Hildebrand wrote:
> 
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This flag will be re-used for PCI devices by the subsequent
> patches.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
> ---
> v2->v3:
> * no change
> 
> v1->v2:
> * no change
> 
> downstream->v1:
> * rebase
> * s/dev_node->is_protected/dev_node->dev.is_protected/ in smmu.c
> * s/dt_device_set_protected(dev_to_dt(dev))/device_set_protected(dev)/ in smmu-v3.c
> * remove redundant device_is_protected checks in smmu-v3.c/ipmmu-vmsa.c
> 
> (cherry picked from commit 59753aac77528a584d3950936b853ebf264b68e7 from
>  the downstream branch poc/pci-passthrough from
>  https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
> ---
>  xen/arch/arm/domain_build.c              |  4 ++--
>  xen/arch/arm/include/asm/device.h        | 13 +++++++++++++
>  xen/common/device_tree.c                 |  2 +-
>  xen/drivers/passthrough/arm/ipmmu-vmsa.c |  8 +-------
>  xen/drivers/passthrough/arm/smmu-v3.c    |  7 +------
>  xen/drivers/passthrough/arm/smmu.c       |  2 +-
>  xen/drivers/passthrough/device_tree.c    | 15 +++++++++------
>  xen/include/xen/device_tree.h            | 13 -------------
>  8 files changed, 28 insertions(+), 36 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 71f307a572e9..d228da641367 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2503,7 +2503,7 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
>              return res;
>          }
> 
> -        if ( dt_device_is_protected(dev) )
> +        if ( device_is_protected(dt_to_dev(dev)) )
>          {
>              dt_dprintk("%s setup iommu\n", dt_node_full_name(dev));
>              res = iommu_assign_dt_device(d, dev);
> @@ -3003,7 +3003,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
>          return res;
> 
>      /* If xen_force, we allow assignment of devices without IOMMU protection. */
> -    if ( xen_force && !dt_device_is_protected(node) )
> +    if ( xen_force && !device_is_protected(dt_to_dev(node)) )
>          return 0;
> 
>      return iommu_assign_dt_device(kinfo->d, node);
> diff --git a/xen/arch/arm/include/asm/device.h b/xen/arch/arm/include/asm/device.h
> index b5d451e08776..086dde13eb6b 100644
> --- a/xen/arch/arm/include/asm/device.h
> +++ b/xen/arch/arm/include/asm/device.h
> @@ -1,6 +1,8 @@
>  #ifndef __ASM_ARM_DEVICE_H
>  #define __ASM_ARM_DEVICE_H
> 
> +#include <xen/types.h>
> +
>  enum device_type
>  {
>      DEV_DT,
> @@ -20,6 +22,7 @@ struct device
>  #endif
>      struct dev_archdata archdata;
>      struct iommu_fwspec *iommu_fwspec; /* per-device IOMMU instance data */
> +    bool is_protected; /* Shows that device is protected by IOMMU */
This will add 7 bytes of padding on arm64. Did you check if there is a hole you can reuse?

>  };
> 
>  typedef struct device device_t;
> @@ -94,6 +97,16 @@ int device_init(struct dt_device_node *dev, enum device_class class,
>   */
>  enum device_class device_get_class(const struct dt_device_node *dev);
> 
> +static inline void device_set_protected(struct device *device)
> +{
> +    device->is_protected = true;
> +}
> +
> +static inline bool device_is_protected(const struct device *device)
> +{
> +    return device->is_protected;
> +}
> +
>  #define DT_DEVICE_START(_name, _namestr, _class)                    \
>  static const struct device_desc __dev_desc_##_name __used           \
>  __section(".dev.info") = {                                          \
> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> index 6c9712ab7bda..1d5d7cb5f01b 100644
> --- a/xen/common/device_tree.c
> +++ b/xen/common/device_tree.c
> @@ -1874,7 +1874,7 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
>          /* By default dom0 owns the device */
>          np->used_by = 0;
>          /* By default the device is not protected */
> -        np->is_protected = false;
> +        np->dev.is_protected = false;
>          INIT_LIST_HEAD(&np->domain_list);
> 
>          if ( new_format )
> diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> index 091f09b21752..039212a3a990 100644
> --- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> +++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
> @@ -1288,14 +1288,8 @@ static int ipmmu_add_device(u8 devfn, struct device *dev)
>      if ( !to_ipmmu(dev) )
>          return -ENODEV;
> 
> -    if ( dt_device_is_protected(dev_to_dt(dev)) )
> -    {
> -        dev_err(dev, "Already added to IPMMU\n");
> -        return -EEXIST;
> -    }
This removal, and the corresponding one in smmu-v3, need to be explained in the commit message.

> -
>      /* Let Xen know that the master device is protected by an IOMMU. */
> -    dt_device_set_protected(dev_to_dt(dev));
> +    device_set_protected(dev);
> 
>      dev_info(dev, "Added master device (IPMMU %s micro-TLBs %u)\n",
>               dev_name(fwspec->iommu_dev), fwspec->num_ids);
> diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
> index 4ca55d400a7b..f5910e79922f 100644
> --- a/xen/drivers/passthrough/arm/smmu-v3.c
> +++ b/xen/drivers/passthrough/arm/smmu-v3.c
> @@ -1521,13 +1521,8 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev)
>          */
>         arm_smmu_enable_pasid(master);
> 
> -       if (dt_device_is_protected(dev_to_dt(dev))) {
> -               dev_err(dev, "Already added to SMMUv3\n");
> -               return -EEXIST;
> -       }
> -
>         /* Let Xen know that the master device is protected by an IOMMU. */
> -       dt_device_set_protected(dev_to_dt(dev));
> +       device_set_protected(dev);
> 
>         dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
>                         dev_name(fwspec->iommu_dev), fwspec->num_ids);
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 0a514821b336..5b6024d579a8 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -838,7 +838,7 @@ static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
>         master->of_node = dev_node;
> 
>         /* Xen: Let Xen know that the device is protected by an SMMU */
> -       dt_device_set_protected(dev_node);
> +       device_set_protected(dev);
> 
>         for (i = 0; i < fwspec->num_ids; ++i) {
>                 if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index 1c32d7b50cce..b5bd13393b56 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -34,7 +34,7 @@ int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev)
>      if ( !is_iommu_enabled(d) )
>          return -EINVAL;
> 
> -    if ( !dt_device_is_protected(dev) )
> +    if ( !device_is_protected(dt_to_dev(dev)) )
>          return -EINVAL;
> 
>      spin_lock(&dtdevs_lock);
> @@ -65,7 +65,7 @@ int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev)
>      if ( !is_iommu_enabled(d) )
>          return -EINVAL;
> 
> -    if ( !dt_device_is_protected(dev) )
> +    if ( !device_is_protected(dt_to_dev(dev)) )
>          return -EINVAL;
> 
>      spin_lock(&dtdevs_lock);
> @@ -87,7 +87,7 @@ static bool_t iommu_dt_device_is_assigned(const struct dt_device_node *dev)
>  {
>      bool_t assigned = 0;
> 
> -    if ( !dt_device_is_protected(dev) )
> +    if ( !device_is_protected(dt_to_dev(dev)) )
>          return 0;
> 
>      spin_lock(&dtdevs_lock);
> @@ -141,12 +141,15 @@ int iommu_add_dt_device(struct dt_device_node *np)
>          return -EINVAL;
> 
>      /*
> -     * The device may already have been registered. As there is no harm in
> -     * it just return success early.
> +     * This is needed in case a device has both the iommus property and
> +     * also appears in the mmu-masters list.
>       */
> -    if ( dev_iommu_fwspec_get(dev) )
> +    if ( device_is_protected(dev) )
>          return 0;
> 
> +    if ( dev_iommu_fwspec_get(dev) )
> +        return -EEXIST;
This needs to be explained as well, because before this change we were returning 0 here.

> +
>      /*
>       * According to the Documentation/devicetree/bindings/iommu/iommu.txt
>       * from Linux.
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 19a74909cece..c1e4751a581f 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -90,9 +90,6 @@ struct dt_device_node {
>      struct dt_device_node *next; /* TODO: Remove it. Only use to know the last children */
>      struct dt_device_node *allnext;
> 
> -    /* IOMMU specific fields */
> -    bool is_protected;
> -
>      /* HACK: Remove this if there is a need of space */
>      bool_t static_evtchn_created;
Not your fault (and I don't know if you should do anything about it) but this single field now causes
the whole structure to be 8 bytes larger than it could be on arm64.

~Michal


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:31:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 14:31:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <f477a115-f8af-f3e8-26e3-13ea38109bf2@suse.com>
Date: Mon, 22 May 2023 16:30:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 4/4] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-5-andrew.cooper3@citrix.com>
 <1d06869b-1633-7794-c5c9-92d9c0885f19@suse.com>
 <42cd2479-1eba-11c7-26d8-441045c230ed@citrix.com>
 <fb95d033-7a71-7cc1-bb8f-ee2a74d1c5cf@suse.com>
 <a8013bb5-b0bb-6e42-85de-2e12d7b6f83c@citrix.com>
 <678e997a-0e3e-a6b3-1bba-5e66ff74de48@suse.com>
 <e0d0de09-2da0-f961-f3d5-576cd7268f32@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e0d0de09-2da0-f961-f3d5-576cd7268f32@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM0PR02CA0163.eurprd02.prod.outlook.com
 (2603:10a6:20b:28d::30) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VI1PR04MB7070:EE_
X-MS-Office365-Filtering-Correlation-Id: 4956a2ad-ab8c-481e-92dd-08db5ad125f8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4956a2ad-ab8c-481e-92dd-08db5ad125f8
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 14:30:53.6245
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8N0rL12myqdC9k6KW8X72ezKu0ODK7RXie627YMFUP0qRzAW3JT/L/X3q26rbGcDgmLBSFacsGfJqDd84jHPtQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB7070

On 22.05.2023 16:14, Andrew Cooper wrote:
> On 22/05/2023 8:10 am, Jan Beulich wrote:
>> On 19.05.2023 16:38, Andrew Cooper wrote:
>>> On 19/05/2023 7:00 am, Jan Beulich wrote:
>>>> On 17.05.2023 18:35, Andrew Cooper wrote:
>>>>> On 17/05/2023 3:47 pm, Jan Beulich wrote:
>>>>>> On 16.05.2023 16:53, Andrew Cooper wrote:
>>>>>>> @@ -401,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
>>>>>>>          cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
>>>>>>>      if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
>>>>>>>          cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
>>>>>>> +    if ( cpu_has_arch_caps )
>>>>>>> +        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
>>>>>> Why do you read the MSR again? I would have expected this to come out
>>>>>> of raw_cpu_policy now (and incrementally the CPUID pieces as well,
>>>>>> later on).
>>>>> Consistency with the surrounding logic.
>>>> I view this as relevant only when the code invoking CPUID directly is
>>>> intended to stay.
>>> Quite the contrary.  It stays because this patch, and changing the
>>> semantics of the print block are unrelated things and should not be
>>> mixed together.
>> Hmm. On one hand I can see your point, yet otoh we move things in a longer
>> term intended direction in other cases where we need to touch code anyway.
>> While I'm not going to refuse to ack this change just because of this, I
don't feel like you've answered the original question. In particular I
don't see how taking the value from a memory location we've already cached
>> it in is changing any semantics here. While some masking may apply even to
>> the raw policy (to zap unknown bits), this should be meaningless here. No
>> bit used here should be unmentioned in the policy.
> 
> The very next thing I'm going to need to do is start synthesizing arch
> caps bits for the hardware with known properties but without appropriate
> enumerations.  This is necessary to make migration work.

But you wouldn't alter the raw featureset, would you? As much as ...

> Because we have not taken a decision about what the printed block means,
> it needs to not change when I start using setup_force_cpu_cap().

... setup_force_cpu_cap() doesn't affect raw.
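For reference, the alternative being suggested here — consuming the value
cached at policy-collection time rather than re-executing rdmsrl() in the
print block — can be sketched standalone as below. All names are
illustrative stand-ins, not the actual Xen structures.

```c
#include <stdint.h>

/* Hypothetical model of the raw policy cache under discussion. */
struct cpu_policy {
    uint64_t arch_caps; /* cached MSR_ARCH_CAPABILITIES value */
};

static struct cpu_policy raw_cpu_policy;

/* One-time collection at boot; the real rdmsrl() would happen here. */
static void collect_raw_policy(uint64_t msr_arch_caps)
{
    raw_cpu_policy.arch_caps = msr_arch_caps;
}

/*
 * Later consumers (e.g. a diagnostic print block) take the cached copy.
 * As long as the raw policy is neither masked nor synthesized, this is
 * identical to a fresh MSR read, which is the crux of the argument above.
 */
static uint64_t get_arch_caps(void)
{
    return raw_cpu_policy.arch_caps;
}
```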

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:43:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 14:43:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538000.837706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q16l1-00032v-U2; Mon, 22 May 2023 14:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538000.837706; Mon, 22 May 2023 14:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q16l1-00032o-Qi; Mon, 22 May 2023 14:43:31 +0000
Received: by outflank-mailman (input) for mailman id 538000;
 Mon, 22 May 2023 14:43:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yzf4=BL=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1q16l0-00032f-Ea
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 14:43:30 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 040c9f5f-f8af-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 16:43:29 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-96652cb7673so975759866b.0
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 07:43:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 040c9f5f-f8af-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684766608; x=1687358608;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=pC21Wt79F0ocPDNkemtlfkU3jZZKz4hZTtPDV2VYO3c=;
        b=DQalti/cz4Z23gv4roSvFIVC9PPLMBk2SJPEM9VksL+7ZRyclHv+epcDNfZXqtCD04
         WCVnuPv2i/2uBgGvI/mMw8lSj+mQtwH5z3B13K7VGGQc2Jpt1vdBsiQEFb6ZlILO0eIf
         aXz9PlFy55KzLdrNH9iyebCX8DVK/1mcwnwlYW/62rgsVVx1efNRLGBsoYz4Ol49nmYu
         5yuyS0RCIsZhljMd25N380+CKKarR2PoimHQO5YRTNjT3jvtZyA/gOBrVwFqmP1tX4ju
         fE7VjgqvoAxFZp7W4AM39O19VNDWHoBYjUKA6Qmpajw+avhIzE85vFKYZd93wpVVSPv1
         nWIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684766608; x=1687358608;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=pC21Wt79F0ocPDNkemtlfkU3jZZKz4hZTtPDV2VYO3c=;
        b=hhUaTYG8lFmzQAegtOcX7ZqIX4I6xGCOqc3msXTzjgRFYerNlO7CZn3eYaDxjD90Q7
         N9YVTndPfs+xIre1iwTZbCxEsfZcemtMLGdmR7sNKMd3HUKfgaVgy6Q6sj+7saqrmAgj
         YGJqv17/7N9ph3DWZHxYI2lgEGMb6zNWnECZ0OT+GCioBCxIWUK1153HGZ0K8uzJLSBx
         YF/u2n3Cgx7b/sVYGRLYKyZPXHt3zKmyPoHKf1S4o26pbhRIWpwWH4AkPwK3ArqWPx9r
         Q8fFXIb46NpdVIOiFKKRjY2S9PXQcWy0B2Ltyi7+of1Ng3u5Wwh8f2yoLQ4VZU6cWni6
         Wf9Q==
X-Gm-Message-State: AC+VfDyIXP0iRw7quSbFEkMaL+j8GSmuKfkbCwb7at0SgoBb5nqCcpUU
	CxyA4ARqIIxV9sEz0DCdOUFoq7lHNmpvm92ahcM=
X-Google-Smtp-Source: ACHHUZ78FtPyo6gMtcvvdGIMLHmE24bokwLE8lG9CMeQSCb9560NBylM0oGgwx2jCoG/ilHOICyZ6ZPi0ozP2dgjCI4=
X-Received: by 2002:a17:907:74d:b0:94f:7edf:8fa1 with SMTP id
 xc13-20020a170907074d00b0094f7edf8fa1mr10151530ejb.32.1684766608315; Mon, 22
 May 2023 07:43:28 -0700 (PDT)
MIME-Version: 1.0
References: <20230501193034.88575-1-jandryuk@gmail.com> <20230501193034.88575-11-jandryuk@gmail.com>
 <1166bdf1-4d54-30bb-bdf9-65dfaeb6b29e@suse.com> <CAKf6xpti23_fmwVWOafjUU+OPHPOA7EWOfkShGT9Qqr9=mR9zQ@mail.gmail.com>
 <602ff9ef-e573-279e-441f-463ca7613fa6@suse.com>
In-Reply-To: <602ff9ef-e573-279e-441f-463ca7613fa6@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 22 May 2023 10:43:16 -0400
Message-ID: <CAKf6xptFN4s4yazcmmgqMWqNqmyYQjE+PaV6f=NdjR5g=NUx_g@mail.gmail.com>
Subject: Re: [PATCH v3 10/14 RESEND] xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, May 22, 2023 at 9:11 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 22.05.2023 14:45, Jason Andryuk wrote:
> > On Mon, May 8, 2023 at 7:27 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 01.05.2023 21:30, Jason Andryuk wrote:
> >>> +    if ( set_hwp->activity_window & ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK )
> >>> +        return -EINVAL;
> >>
> >> Below you limit checks to when the respective control bit is set. I
> >> think you want the same here.
> >
> > Not sure if you mean feature_hwp_activity_window or the bit in
> > set_params as control bit.  But, yes, they can both use some
> > additional checking.  IIRC, I wanted to always check
> > ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK, because bits should never be set
> > whether or not the activity window is supported by hardware.
>
> I took ...
>
> >>> +    if ( !feature_hwp_energy_perf &&
> >>> +         (set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF) &&
> >>> +         set_hwp->energy_perf > IA32_ENERGY_BIAS_MAX_POWERSAVE )
> >>> +        return -EINVAL;
> >>> +
> >>> +    if ( (set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED) &&
> >>> +         set_hwp->desired != 0 &&
> >>> +         (set_hwp->desired < data->hw.lowest ||
> >>> +          set_hwp->desired > data->hw.highest) )
> >>> +        return -EINVAL;
>
> ... e.g. this for comparison, where you apply the range check only when
> the XEN_SYSCTL_HWP_* bit is set. I think you want to be consistent in
> such checking: Either you always allow the caller to not care about
> fields that aren't going to be consumed when their controlling bit is
> off, or you always check validity. Both approaches have their pros and
> cons, I think.

Ok, good point.  I wrote it inconsistently because the SDM states the
desired limit: "When set to a non-zero value (between the range of
Lowest_Performance and Highest_Performance of IA32_HWP_CAPABILITIES)
conveys an explicit performance request hint to the hardware;
effectively disabling HW Autonomous selection."  And I was trying to
follow that.  But later "The HWP hardware clips and resolves the field
values as necessary to the valid range." seems to override that.  I'll
test to verify that it is correct, and drop the lowest/highest
checking if so.
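The consistent policy Jan describes — validate a field only when its
controlling bit in set_params is set — can be sketched as below. The bit
and mask names mirror the patch, but the values, scalar parameters, and
helper itself are assumptions for illustration, not the actual code.

```c
#include <stdint.h>
#include <errno.h>

/* Assumed values; the real definitions live in the sysctl interface. */
#define XEN_SYSCTL_HWP_SET_ACT_WINDOW  (1u << 0)
#define XEN_SYSCTL_HWP_SET_DESIRED     (1u << 1)
#define XEN_SYSCTL_HWP_ACT_WINDOW_MASK 0x3ffu

static int hwp_validate(uint32_t set_params, uint32_t activity_window,
                        uint8_t desired, uint8_t lowest, uint8_t highest)
{
    /* Reserved activity-window bits are rejected only when the caller
     * actually asked to set the activity window. */
    if ( (set_params & XEN_SYSCTL_HWP_SET_ACT_WINDOW) &&
         (activity_window & ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK) )
        return -EINVAL;

    /* Likewise, desired is range-checked only when its bit is set;
     * zero means "hardware-autonomous" and is always allowed. */
    if ( (set_params & XEN_SYSCTL_HWP_SET_DESIRED) &&
         desired != 0 &&
         (desired < lowest || desired > highest) )
        return -EINVAL;

    return 0;
}
```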

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Mon May 22 14:48:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 14:48:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538004.837717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q16pt-0003fu-HN; Mon, 22 May 2023 14:48:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538004.837717; Mon, 22 May 2023 14:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q16pt-0003fn-Cv; Mon, 22 May 2023 14:48:33 +0000
Received: by outflank-mailman (input) for mailman id 538004;
 Mon, 22 May 2023 14:48:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qrm0=BL=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1q16ps-0003fh-45
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 14:48:32 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20616.outbound.protection.outlook.com
 [2a01:111:f400:7e89::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b59410c5-f8af-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 16:48:28 +0200 (CEST)
Received: from BYAPR05CA0054.namprd05.prod.outlook.com (2603:10b6:a03:74::31)
 by DM6PR12MB4073.namprd12.prod.outlook.com (2603:10b6:5:217::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 14:48:25 +0000
Received: from DM6NAM11FT099.eop-nam11.prod.protection.outlook.com
 (2603:10b6:a03:74:cafe::81) by BYAPR05CA0054.outlook.office365.com
 (2603:10b6:a03:74::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.13 via Frontend
 Transport; Mon, 22 May 2023 14:48:24 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 DM6NAM11FT099.mail.protection.outlook.com (10.13.172.241) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6411.29 via Frontend Transport; Mon, 22 May 2023 14:48:24 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 22 May
 2023 09:48:22 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Mon, 22 May
 2023 07:48:22 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Mon, 22 May 2023 09:48:20 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b59410c5-f8af-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Obt0/fydd3hKwj9etMrkyV6F48TcgzFv86SkRL7pCjrJrG/lrHHS1Fd91jnPBWPTR31bZ5eXPSFAuA0q+r9FgB8veeKYjdp3tY3wTio6PdjcIzs8jWSlOVuU4KKckGCOdIj2zEh+66ZtvbqRL0LcG8HnDXtii6gw6xJgVW7+4Hj7KLIqS2xKSdFDnexub33cALOpS/51+IkuiINpb7mTL2DdfxLclYlYlmojsbkXEUzJXzYxQquSF2PlIXsHYcKPWPKZCQgm5S2vvIMj2/ntsrkijtD2aT/05UbvcJYB6tpGPFi2NHVx417p/PHkePFU8xjV5miAxQWhqu4HhQs8Jw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bu8tc6QUaD2f1Bg+pJtdesvDnIdQHpfh5WkPGu9o9vI=;
 b=A5Hfsge6A+oMgi1YfWN5VJy+BPp3d9OHwem8fQP0xw6oHwtwvK9PKH8MMPoWoZZjXdLogOfvmWH6ynSUhKhD5BtWiuVjNZLHZ+Jrn6WUEXKcpBwMZ/xTSjCC5Bs5ZfYA1M/qJCfuFPJfuX9uvSUhg/mJC/x9W8UAyG8gPSoN3+LTmTAl+an3JW3dp7C38JUPYJJz46le3kRSk0hyhXJZf87R09lpf7gFXsMYaeB9O/UkzPXzK6L1+EKaFCJoZrP4gC8qNMQmkfPUCMi7CHOQAlSHiqAd6lhFFrvX6QZ0vsd1POOSJq5DSOGWFnEuQiBGQhyxGFVLGAtgIq3J+LAfNQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bu8tc6QUaD2f1Bg+pJtdesvDnIdQHpfh5WkPGu9o9vI=;
 b=PGVwJLI9VokB550zZY4xWnKIiG1XYpF2KiHuoIczmGEbXwo1QVysjzoUfaOnguhc6i8wOrgchtZ9ngtcwBDS0P/E4W7MUK5IWZnHSaM5RTzecOTQ/xT/X51AGvDR+qX1eWFQAk9X01oSErDMeTX1+PXYCBlR5kcrDdDRShyuqic=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <c6dd2581-7dd9-a0da-8b27-3523744349c0@amd.com>
Date: Mon, 22 May 2023 16:48:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v3 2/6] iommu/arm: Add iommu_dt_xlate()
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Rahul Singh
	<rahul.singh@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
 <20230518210658.66156-3-stewart.hildebrand@amd.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230518210658.66156-3-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM6NAM11FT099:EE_|DM6PR12MB4073:EE_
X-MS-Office365-Filtering-Correlation-Id: 72aabe80-d278-4f64-9bd9-08db5ad39849
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 14:48:24.1836
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 72aabe80-d278-4f64-9bd9-08db5ad39849
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DM6NAM11FT099.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4073

Hi Stewart,

On 18/05/2023 23:06, Stewart Hildebrand wrote:
> 
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> Move code for processing DT IOMMU specifier to a separate helper.
> This helper will be re-used for adding PCI devices by the subsequent
> patches, as we will need exactly the same actions for processing
> DT PCI-IOMMU specifier.
> 
> While at it introduce NO_IOMMU to avoid magic "1".
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com> # rename
> ---
> v2->v3:
> * no change
> 
> v1->v2:
> * no change
> 
> downstream->v1:
> * trivial rebase
> * s/dt_iommu_xlate/iommu_dt_xlate/
> 
> (cherry picked from commit c26bab0415ca303df86aba1d06ef8edc713734d3 from
>  the downstream branch poc/pci-passthrough from
>  https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
> ---
>  xen/drivers/passthrough/device_tree.c | 42 +++++++++++++++++----------
>  1 file changed, 27 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index b5bd13393b56..1b50f4670944 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -127,15 +127,39 @@ int iommu_release_dt_devices(struct domain *d)
>      return 0;
>  }
> 
> +/* This correlation must not be altered */
> +#define NO_IOMMU    1
> +
> +static int iommu_dt_xlate(struct device *dev,
> +                          struct dt_phandle_args *iommu_spec)
I think iommu_spec can be const.

> +{
> +    const struct iommu_ops *ops = iommu_get_ops();
> +    int rc;
> +
> +    if ( !dt_device_is_available(iommu_spec->np) )
> +        return NO_IOMMU;
> +
> +    rc = iommu_fwspec_init(dev, &iommu_spec->np->dev);
> +    if ( rc )
> +        return rc;
> +
> +    /*
> +     * Provide DT IOMMU specifier which describes the IOMMU master
> +     * interfaces of that device (device IDs, etc) to the driver.
> +     * The driver is responsible to decide how to interpret them.
> +     */
> +    return ops->dt_xlate(dev, iommu_spec);
Wouldn't it be better to move the check (!ops->dt_xlate) from iommu_add_dt_device to this helper?
After all, it is the only function that calls dt_xlate, so this seems the natural placement.
Looking at the next patch, it would also avoid duplicating the similar check in iommu_add_dt_pci_sideband_ids.
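The suggestion can be sketched standalone as below: the hook check is
folded into the helper so callers no longer repeat it. The types are
incomplete stand-ins, the ops argument is passed explicitly here (the real
code uses iommu_get_ops()), and the error value is an assumption.

```c
#include <stddef.h>
#include <errno.h>

struct device;          /* opaque stand-in */
struct dt_phandle_args; /* opaque stand-in */

struct iommu_ops {
    int (*dt_xlate)(struct device *dev, const struct dt_phandle_args *spec);
};

static int iommu_dt_xlate(struct device *dev,
                          const struct dt_phandle_args *iommu_spec,
                          const struct iommu_ops *ops)
{
    /* Check hoisted from iommu_add_dt_device(), since this helper is the
     * only caller of the dt_xlate hook. */
    if ( !ops || !ops->dt_xlate )
        return -EOPNOTSUPP;

    return ops->dt_xlate(dev, iommu_spec);
}

/* Tiny stub used only to exercise the helper standalone. */
static int stub_xlate(struct device *dev, const struct dt_phandle_args *spec)
{
    (void)dev;
    (void)spec;
    return 0;
}

static const struct iommu_ops stub_ops = { .dt_xlate = stub_xlate };
static const struct iommu_ops null_ops = { .dt_xlate = NULL };
```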

~Michal


From xen-devel-bounces@lists.xenproject.org Mon May 22 15:14:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 15:14:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538016.837729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q17EP-000797-GF; Mon, 22 May 2023 15:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538016.837729; Mon, 22 May 2023 15:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q17EP-000790-Da; Mon, 22 May 2023 15:13:53 +0000
Received: by outflank-mailman (input) for mailman id 538016;
 Mon, 22 May 2023 15:13:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1r6C=BL=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1q17EO-00078u-Dn
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 15:13:52 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0612.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::612])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 412106a0-f8b3-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 17:13:49 +0200 (CEST)
Received: from AM6PR01CA0044.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::21) by DBAPR08MB5782.eurprd08.prod.outlook.com
 (2603:10a6:10:1b2::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 15:13:44 +0000
Received: from AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::27) by AM6PR01CA0044.outlook.office365.com
 (2603:10a6:20b:e0::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28 via Frontend
 Transport; Mon, 22 May 2023 15:13:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT003.mail.protection.outlook.com (100.127.140.227) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.13 via Frontend Transport; Mon, 22 May 2023 15:13:43 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Mon, 22 May 2023 15:13:43 +0000
Received: from c205ad55f2b6.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3E441F9F-4C23-4D04-A077-C75D88EDBDE0.1; 
 Mon, 22 May 2023 15:13:36 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c205ad55f2b6.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 22 May 2023 15:13:36 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by GV2PR08MB8631.eurprd08.prod.outlook.com (2603:10a6:150:b3::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 22 May
 2023 15:13:33 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482%4]) with mapi id 15.20.6411.028; Mon, 22 May 2023
 15:13:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 412106a0-f8b3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+dMpDnEtIqS+4ZqN9Ar8dOyygSGAkVUmErcE18V8Wik=;
 b=xKLf7bBPSQWIii9B3vxS0ZHXaH13/7VZkTdfeuX9BLU0z1cgoqfs/iEbPW900nIBxBcp/yqDUCCUFreZdLuAQ2FMcTealCaBFRQ8BYxxku57MkueyqugK1hrpL86h3Mq7aYLgQsYls6WWRl8rwTFDCj5kHZe/JcyLyWoh66E9nM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 214bec7b6c064d0b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PmStE4ngD+jbDetzy13gtNKppGWocF1ufjKykr1c2KPEAwQbl7PIB2qpbNzlPsGDXXzrgqGayfVGlGoNbft0hk2f21VQ1z+dCEZYAqEnFx67r2Q0Mv/DlDIOjxUooXYqt3idrEbrqzJcHRAM/BlmyhzKFZkBAWAYE60A6D7b/T/Uy5gNcR2ij911bm8v4R1HZ60cckZdwTAaPJIZA576jlRRaavWmIvZnNfyd26vHxX7nodVEV5xyxd4lWkgyoY8fs7YwYpJy8mwl55l0V84WM3FJrc4QXsHJlAFgWx2oy3bWwt8iYR/dheDTqpcf46oGWJvsT+oLQ5OXCWVRpgZQQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+dMpDnEtIqS+4ZqN9Ar8dOyygSGAkVUmErcE18V8Wik=;
 b=NiLQp39EJn7dqfccnwO3usiyARqa96qQnxD1k7FWkgkdAICCxFLMSCRgZ/QL3zDibhl/uXLSH8LH1BfZUUWr2MXYgqWVwPCftq+VF8WR4g1MjMwYO4Lo9m0K4rrLGaf/yawBBdkh1tedZNmgifTZE/lEnD9fJBfJZ9dJZelYk+iwDUmVqCEhak55WHRdcbN1fN8Y9N5Wp+3LsjAWytXI9shSrDT9aHcAQuQZKj29j46YOSf35agiVaWVW19t+Nb3N0UTCT2sESUob6L8ztUJM1a/S5vtJySWWmAMfdjs0v5hgTghjbSZrhbcDqZMshCzfupYxBLH04qGiMG4hkRLkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	"roger.pau@citrix.com" <roger.pau@citrix.com>, "julien@xen.org"
	<julien@xen.org>, Stefano Stabellini <stefano.stabellini@amd.com>
Subject: Re: [PATCH] docs/misra: adds Mandatory rules
Thread-Topic: [PATCH] docs/misra: adds Mandatory rules
Thread-Index: AQHZhF+CDPpocrDIxEOFvzb4c9SB5a9mdySA
Date: Mon, 22 May 2023 15:13:31 +0000
Message-ID: <D8D826F7-7301-4F01-AD6E-7011346A590A@arm.com>
References: <20230511232237.3720769-1-sstabellini@kernel.org>
In-Reply-To: <20230511232237.3720769-1-sstabellini@kernel.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|GV2PR08MB8631:EE_|AM7EUR03FT003:EE_|DBAPR08MB5782:EE_
X-MS-Office365-Filtering-Correlation-Id: 5f056a84-e89a-4c9e-ebc2-08db5ad72201
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <596EB45BF9BB7D4AAD193E16449FF4C0@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8631
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	28b74984-41df-4a35-b103-08db5ad71af1
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 15:13:43.7451
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5f056a84-e89a-4c9e-ebc2-08db5ad72201
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5782

Hi Stefano,


> On 12 May 2023, at 01:22, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> From: Stefano Stabellini <stefano.stabellini@amd.com>
>
> Add the Mandatory rules agreed by the MISRA C working group to
> docs/misra/rules.rst.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>

Acked-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> docs/misra/rules.rst | 62 ++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 62 insertions(+)
>
> diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
> index 83f01462f7..d5a6ee8cb6 100644
> --- a/docs/misra/rules.rst
> +++ b/docs/misra/rules.rst
> @@ -204,6 +204,12 @@ existing codebase are work-in-progress.
>        braces
>      -
>
> +   * - `Rule 12.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_12_05.c>`_
> +     - Mandatory
> +     - The sizeof operator shall not have an operand which is a function
> +       parameter declared as "array of type"
> +     -
> +
>    * - `Rule 13.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_13_06.c>`_
>      - Mandatory
>      - The operand of the sizeof operator shall not contain any
> @@ -274,3 +280,59 @@ existing codebase are work-in-progress.
>        in the same file as the #if #ifdef or #ifndef directive to which
>        they are related
>      -
> +
> +   * - `Rule 21.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_13.c>`_
> +     - Mandatory
> +     - Any value passed to a function in <ctype.h> shall be representable as an
> +       unsigned char or be the value EOF
> +     -
> +
> +   * - `Rule 21.17 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_17.c>`_
> +     - Mandatory
> +     - Use of the string handling functions from <string.h> shall not result in
> +       accesses beyond the bounds of the objects referenced by their pointer
> +       parameters
> +     -
> +
> +   * - `Rule 21.18 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_18.c>`_
> +     - Mandatory
> +     - The size_t argument passed to any function in <string.h> shall have an
> +       appropriate value
> +     -
> +
> +   * - `Rule 21.19 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_19.c>`_
> +     - Mandatory
> +     - The pointers returned by the Standard Library functions localeconv,
> +       getenv, setlocale or, strerror shall only be used as if they have
> +       pointer to const-qualified type
> +     -
> +
> +   * - `Rule 21.20 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_20.c>`_
> +     - Mandatory
> +     - The pointer returned by the Standard Library functions asctime ctime
> +       gmtime localtime localeconv getenv setlocale or strerror shall not be
> +       used following a subsequent call to the same function
> +     -
> +
> +   * - `Rule 22.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_02.c>`_
> +     - Mandatory
> +     - A block of memory shall only be freed if it was allocated by means of a
> +       Standard Library function
> +     -
> +
> +   * - `Rule 22.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_04.c>`_
> +     - Mandatory
> +     - There shall be no attempt to write to a stream which has been opened as
> +       read-only
> +     -
> +
> +   * - `Rule 22.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_05.c>`_
> +     - Mandatory
> +     - A pointer to a FILE object shall not be dereferenced
> +     -
> +
> +   * - `Rule 22.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_06.c>`_
> +     - Mandatory
> +     - The value of a pointer to a FILE shall not be used after the associated
> +       stream has been closed
> +     -
> --
> 2.25.1
>



From xen-devel-bounces@lists.xenproject.org Mon May 22 15:42:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 15:42:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538022.837740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q17fp-00028m-PF; Mon, 22 May 2023 15:42:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538022.837740; Mon, 22 May 2023 15:42:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q17fp-00028f-Mb; Mon, 22 May 2023 15:42:13 +0000
Received: by outflank-mailman (input) for mailman id 538022;
 Mon, 22 May 2023 15:42:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=g3Ry=BL=gmail.com=shentey@srs-se1.protection.inumbo.net>)
 id 1q17fo-00028Z-HJ
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 15:42:12 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 36022f7d-f8b7-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 17:42:08 +0200 (CEST)
Received: by mail-ej1-x630.google.com with SMTP id
 a640c23a62f3a-96fbc74fbf1so339148466b.1
 for <xen-devel@lists.xenproject.org>; Mon, 22 May 2023 08:42:08 -0700 (PDT)
Received: from [127.0.0.1] (dynamic-089-012-142-218.89.12.pool.telefonica.de.
 [89.12.142.218]) by smtp.gmail.com with ESMTPSA id
 z14-20020a1709067e4e00b00969dfd160aesm3224573ejr.109.2023.05.22.08.42.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 22 May 2023 08:42:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36022f7d-f8b7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1684770128; x=1687362128;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ntBwRRWm0xdALQrGuEtYHqvRbQtMjEFrpMKv/FDPAtc=;
        b=Bq+iCjCjRCZk7Z2o2PEXXeN3EHl2PVNoeR7dHYGP77ELQXFLrOsTqbRZNuu2EZXNox
         ee4KV72bxzASbNu8HHDb+tecCgvuR1+j9zcQuziJ3WHXgGbDgMgR4q35VvaAvQ8FIzhd
         7BH7AFq52xE4g3RpUx86aW1PvdW2V9Ih7SQVS6OCPje7weuu4TmPucCfY8QPq554ONKh
         clrmC4luNFd0SSLT6qeD3mKwfFdwOBez4hfM2ae+Z6si5DOoX53MgZ8dc0DuEBSKZ6PM
         a1t9uxlb3K5N0McFOznAuyrBO9Or/4Cg+9jkY6n5rQpK6mtOO6vRrHc8hpGr/m3xTHGe
         HFOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684770128; x=1687362128;
        h=content-transfer-encoding:mime-version:message-id:references
         :in-reply-to:subject:cc:to:from:date:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ntBwRRWm0xdALQrGuEtYHqvRbQtMjEFrpMKv/FDPAtc=;
        b=h9TQ/kqdiZzLxb2CgfZTJJValuyNTjamzNT7qVGgFaJpXaqv2LSjBLOr4mT6R8Gjy/
         qqoMBNTdY1xz2L64WPOncWdMIIyvLMlCu1haTjkOKEIuyLdCMHTCtICDqqqH/ge7o+bi
         vMk1v/+CzKDhq1piFQYt6kUhX/FeBmz+tQsQx3VuZ0ks4/Tu76qlhTrBylGgrZ7ULj1e
         bFIX3e2O6CGJtb8eJmf1M1hZLwvHMEiDdQyna1BsxmBT7sKs/fkk+hWL00d5irN5aKXF
         SQNI9WiidzJUwcS6igJJ//43ZxBWRUXFl4zt/EVLOkbV17COmLFQn5NY2BGsRpY3Pakt
         4xfA==
X-Gm-Message-State: AC+VfDwyEQt/llmRnX2bzbe84q9cRQPbWjUhJeDJp3hVxvA+42cYMdID
	Si20HdzuPK41gSlY81Bk3V0=
X-Google-Smtp-Source: ACHHUZ7G6JHBlLpI2JhNhYUZZZnAm4QfrynURn4NvO9azaPSCc58vc0V5uJ4Z0Rkvw7XgawRIq94UA==
X-Received: by 2002:a17:907:72c5:b0:96f:5f44:ea02 with SMTP id du5-20020a17090772c500b0096f5f44ea02mr10250547ejc.8.1684770128067;
        Mon, 22 May 2023 08:42:08 -0700 (PDT)
Date: Mon, 22 May 2023 15:42:03 +0000
From: Bernhard Beschow <shentey@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>, mst@redhat.com
CC: qemu-devel@nongnu.org, Richard Henderson <richard.henderson@linaro.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 David Woodhouse <dwmw@amazon.co.uk>, Eduardo Habkost <eduardo@habkost.net>,
 Chuck Zmudzinski <brchuckz@aol.com>, Aurelien Jarno <aurelien@aurel32.net>,
 Hervé Poussineau <hpoussin@reactos.org>,
 Paul Durrant <paul@xen.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Philippe Mathieu-Daudé <philmd@linaro.org>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 0/7] Resolve TYPE_PIIX3_XEN_DEVICE
In-Reply-To: <alpine.DEB.2.22.394.2305151350180.4125828@ubuntu-linux-20-04-desktop>
References: <20230403074124.3925-1-shentey@gmail.com> <20230421033757-mutt-send-email-mst@kernel.org> <9EB9A984-61E5-4226-8352-B5DDC6E2C62E@gmail.com> <alpine.DEB.2.22.394.2305151350180.4125828@ubuntu-linux-20-04-desktop>
Message-ID: <EB3E61EB-B543-4B15-94A9-C16A66437601@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: quoted-printable



On 15 May 2023 20:52:40 UTC, Stefano Stabellini <sstabellini@kernel.org> wrote:
>On Sat, 13 May 2023, Bernhard Beschow wrote:
>> On 21 April 2023 07:38:10 UTC, "Michael S. Tsirkin" <mst@redhat.com> wrote:
>> >On Mon, Apr 03, 2023 at 09:41:17AM +0200, Bernhard Beschow wrote:
>> >> There is currently a dedicated PIIX3 device model for use under Xen. By reusing
>> >> existing PCI API during initialization this device model can be eliminated and
>> >> the plain PIIX3 device model can be used instead.
>> >>
>> >> Resolving TYPE_PIIX3_XEN_DEVICE results in less code while also making Xen
>> >> agnostic towards the precise south bridge being used in the PC machine. The
>> >> latter might become particularly interesting once PIIX4 becomes usable in the
>> >> PC machine, avoiding the "Frankenstein" use of PIIX4_ACPI in PIIX3.
>> >
>> >xen stuff so I assume that tree?
>>
>> Ping
>
>I am OK either way. Michael, what do you prefer?
>
>Normally I would suggest for you to pick up the patches. But as it
>happens I'll likely have to send another pull request in a week or two
>and I can add these patches to it.
>
>Let me know your preference and I am happy to follow it.

Hi Stefano,

Michael's PR was merged last week. How about including this series in your PR then?

Best regards,
Bernhard

>
>
>> >
>> >> Testing done:
>> >> - `make check`
>> >> - Run `xl create` with the following config:
>> >>     name = "Manjaro"
>> >>     type = 'hvm'
>> >>     memory = 1536
>> >>     apic = 1
>> >>     usb = 1
>> >>     disk = [ "file:manjaro-kde-21.2.6-220416-linux515.iso,hdc:cdrom,r" ]
>> >>     device_model_override = "/usr/bin/qemu-system-x86_64"
>> >>     vga = "stdvga"
>> >>     sdl = 1
>> >> - `qemu-system-x86_64 -M pc -m 2G -cpu host -accel kvm \
>> >>     -cdrom manjaro-kde-21.2.6-220416-linux515.iso`
>> >>
>> >> v4:
>> >> - Add patch fixing latent memory leak in pci_bus_irqs() (Anthony)
>> >>
>> >> v3:
>> >> - Rebase onto master
>> >>
>> >> v2:
>> >> - xen_piix3_set_irq() is already generic. Just rename it. (Chuck)
>> >>
>> >> Tested-by: Chuck Zmudzinski <brchuckz@aol.com>
>> >>
>> >> Bernhard Beschow (7):
>> >>   include/hw/xen/xen: Rename xen_piix3_set_irq() to xen_intx_set_irq()
>> >>   hw/pci/pci.c: Don't leak PCIBus::irq_count[] in pci_bus_irqs()
>> >>   hw/isa/piix3: Reuse piix3_realize() in piix3_xen_realize()
>> >>   hw/isa/piix3: Wire up Xen PCI IRQ handling outside of PIIX3
>> >>   hw/isa/piix3: Avoid Xen-specific variant of piix3_write_config()
>> >>   hw/isa/piix3: Resolve redundant k->config_write assignments
>> >>   hw/isa/piix3: Resolve redundant TYPE_PIIX3_XEN_DEVICE
>> >>
>> >>  include/hw/southbridge/piix.h |  1 -
>> >>  include/hw/xen/xen.h          |  2 +-
>> >>  hw/i386/pc_piix.c             | 36 +++++++++++++++++++--
>> >>  hw/i386/xen/xen-hvm.c         |  2 +-
>> >>  hw/isa/piix3.c                | 60 +----------------------------------
>> >>  hw/pci/pci.c                  |  2 ++
>> >>  stubs/xen-hw-stub.c           |  2 +-
>> >>  7 files changed, 39 insertions(+), 66 deletions(-)
>> >>
>> >> --
>> >> 2.40.0
>> >>
>> >
>>


From xen-devel-bounces@lists.xenproject.org Mon May 22 15:49:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 15:49:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538026.837750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q17n9-0002mf-Hf; Mon, 22 May 2023 15:49:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538026.837750; Mon, 22 May 2023 15:49:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q17n9-0002mY-Eb; Mon, 22 May 2023 15:49:47 +0000
Received: by outflank-mailman (input) for mailman id 538026;
 Mon, 22 May 2023 15:49:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y+8G=BL=citrix.com=prvs=4993fc31b=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q17n7-0002mS-Kz
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 15:49:46 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 43cba945-f8b8-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 17:49:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43cba945-f8b8-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684770583;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=SJc3SCfBe5OsiyVSoP2ByjfbA3Gd7WO976ExFwtGNZw=;
  b=JOyJp7BVMoyZs3VlFu2H4ohZNgtDrJgkTIGyxMKK+t5Sb8JuidDgYApK
   YDhhoCjtJoXCBoT1RePCnaLU7S9SB8g4ngNKgxljG4GfQ+Bd6cbA6ibEw
   xV3PHO/2HAXLrYi5ovbuLyfib6DdplKiwvqwXKTczHrYb+jVjpcALE70a
   Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109831211
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-Talos-CUID: 9a23:whXJ7W2nffzWPgL4Rvr0PLxfGto3KGXllUvqfBGkTltgZKDOEkWcwfYx
X-Talos-MUID: =?us-ascii?q?9a23=3A8Zlwyg8NyPzlUkv9ZQrsJyqQf9xtyY+MBhEHqK8?=
 =?us-ascii?q?PnJa1KwJzAxGzsDviFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,184,1681185600"; 
   d="scan'208";a="109831211"
Date: Mon, 22 May 2023 16:49:36 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH RFC] build: respect top-level .config also for
 out-of-tree hypervisor builds
Message-ID: <a87f9103-2262-4fc2-9598-7442074df71a@perard>
References: <beace0ce-e90b-eb79-4419-03de45ea7360@suse.com>
 <a08f3650-0c81-4873-ae10-f5200f8b7613@perard>
 <3c9f8fd2-b60b-5540-00be-87351fec656e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <3c9f8fd2-b60b-5540-00be-87351fec656e@suse.com>

On Mon, May 08, 2023 at 08:23:43AM +0200, Jan Beulich wrote:
> On 05.05.2023 18:08, Anthony PERARD wrote:
> > On Wed, Mar 15, 2023 at 03:58:59PM +0100, Jan Beulich wrote:
> >> With in-tree builds Config.mk includes a .config file (if present) from
> >> the top-level directory. Similar functionality is wanted with out-of-
> >> tree builds. Yet the concept of "top-level directory" becomes fuzzy in
> >> that case, because there is not really a requirement to have identical
> >> top-level directory structure in the output tree; in fact there's no
> >> need for anything top-level-ish there. Look for such a .config, but only
> >> if the tree layout matches (read: if the directory we're building in is
> >> named "xen").
> > 
> > Well, as long as the "xen/" part of the repository is the only build
> > system to be able to build out-of-srctree, there isn't going to be a
> > top-level .config possible in the build tree, as such .config will be
> > outside of the build tree. Reading outside of the build tree or source
> > tree might be problematic.
> > 
> > It's possible that some project might want to build just the
> > hypervisor and happened to name the build tree "xen", while they
> > also use ".config" for something else. Kconfig itself uses ".config"
> > for other purposes, for example, like we do to build Xen.
> 
> Right, that's what my first RFC remark is about.
> 
> > It might be better to have a different name instead of ".config", and
> > to put it in the build tree rather than the parent directory. Maybe
> > ".xenbuild-config"?
> 
> Using a less ambiguous name is clearly an option. Putting the file in
> the (Xen) build directory, otoh, imo isn't: In the hope that further
> sub-trees would be enabled to build out-of-tree (qemu for instance in
> principle can already, and I guess at least some of stubdom's sub-
> packages also can), the present functionality of the top-level
> .config (or whatever its new name) also controlling those builds would
> better be retained.

I'm not sure what an out-of-tree build of the whole tree will look like,
but it's probably going to be `/path/to/configure && make`. After that,
Config.mk should know what kind of build it's doing, and probably only
load ".config" from the build tree. In the meantime, it feels out of place
for xen/Makefile or xen/Rules.mk to load a ".config" that happens to be
present in the parent directory of the build dir.

> > I've been using the optional makefile named "xen-version" to adjust
> > variables of the build system, with content like:
> >
> >     export XEN_TARGET_ARCH=arm64
> >     export CROSS_COMPILE=aarch64-linux-gnu-
> >
> > so that I can have multiple build trees for different arches, and not
> > have to do anything other than running make to get the expected build.
> > Maybe that's not possible because, for some reason, the build system
> > still reconfigures some variables when entering a sub-directory, but
> > that's a start.
> 
> Hmm, interesting idea. I could (ab)use this at least in the short
> term. Yet even then the file would still need to be included from
> Rules.mk. Requiring all variables defined there to be exported isn't
> a good idea, imo. Iirc not all make variables can usefully be
> exported. For example, I have a local extension to CROSS_COMPILE in
> place, which uses a variable with a dash in its name.
> 
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> RFC: The directory name based heuristic of course isn't nice. But I
> >>      couldn't think of anything better. Suggestions?
> > 
> > I can only think of looking at a file that is in the build tree rather
> > than outside of it. A file named ".xenbuild-config", as proposed
> > earlier, for example.
> > 
> >> RFC: There also being a .config in the top-level source dir would be a
> >>      little problematic: It would be included _after_ the one in the
> >>      object tree. Yet if such a scenario is to be expected/supported at
> >>      all, it makes more sense the other way around.
> > 
> > Well, that would mean teaching Config.mk about the out-of-tree builds
> > that parts of the repository are capable of; better would be to stop
> > trying to read ".config" from Config.mk, and to handle the file in the
> > different sub-build systems.
> 
> Hmm, is that a viable option? Or put differently: Wouldn't this mean doing
> away with ./Config.mk altogether? Which in turn would mean no central
> place anymore where XEN_TARGET_ARCH and friends could be overridden. (I
> view this as a capability that we want to retain, if nothing else then for
> the - see above - future option of allowing more than just xen/ to be
> built out-of-tree.)

No, I'm not trying to get rid of Config.mk. There are a few things in it
that I'd like to remove, but not everything. I don't know how to deal
with ".config" at the moment, but I guess that if Config.mk knew about
out-of-tree builds, it should probably only read one ".config", the one
in the build tree.

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon May 22 16:00:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 16:00:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538030.837759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q17xI-0005Xh-FA; Mon, 22 May 2023 16:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538030.837759; Mon, 22 May 2023 16:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q17xI-0005Xa-CX; Mon, 22 May 2023 16:00:16 +0000
Received: by outflank-mailman (input) for mailman id 538030;
 Mon, 22 May 2023 16:00:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fRnz=BL=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q17xG-0005XU-M0
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 16:00:14 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb16cc69-f8b9-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 18:00:12 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A4DB51FFE7;
 Mon, 22 May 2023 16:00:10 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 57DC513336;
 Mon, 22 May 2023 16:00:10 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id EWP8E4qRa2T4UwAAMHmgww
 (envelope-from <jgross@suse.com>); Mon, 22 May 2023 16:00:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb16cc69-f8b9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1684771210; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=BP7MEx0wqpUzQfE6+pnt/+uKKqY6p022N72TDH/hyBc=;
	b=W57TZcpl8adxTy2DVoueq7OMTobqxZWxjHg21F10L+uNbshI1NvlxtaOr+VQTGI4mJHZu4
	gwtqXCDBTPn3RmreeHTLlagoiiODgphbTf/H7GWKPpePfrhQ7SVDgs0iEwHUP7rxjMEyDa
	cUZPZm6l8ZY3Yoqo03bbYZII3FIe9Zw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] MAINTAINERS: remove xenstore related files from LIBS
Date: Mon, 22 May 2023 18:00:08 +0200
Message-Id: <20230522160008.27779-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is no need to have the Xenstore headers listed in the LIBS
section now that they have been added to the XENSTORE section.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 MAINTAINERS | 2 --
 1 file changed, 2 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index f2f1881b32..82ad8d1709 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -411,8 +411,6 @@ F:	tools/include/xengnttab.h
 F:	tools/include/xenguest.h
 F:	tools/include/xenhypfs.h
 F:	tools/include/xenstat.h
-F:	tools/include/xenstore*.h
-F:	tools/include/xenstore-compat/*.h
 F:	tools/include/xentoolcore*.h
 F:	tools/include/xentoollog.h
 F:	tools/libs/
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Mon May 22 16:26:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 16:26:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538034.837770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q18MR-0008Bw-Cf; Mon, 22 May 2023 16:26:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538034.837770; Mon, 22 May 2023 16:26:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q18MR-0008Bp-9p; Mon, 22 May 2023 16:26:15 +0000
Received: by outflank-mailman (input) for mailman id 538034;
 Mon, 22 May 2023 16:26:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dXMk=BL=citrix.com=prvs=499503587=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q18MQ-0008BT-4o
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 16:26:14 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5c4381af-f8bd-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 18:26:11 +0200 (CEST)
Received: from mail-mw2nam12lp2040.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.40])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 22 May 2023 12:25:54 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB6746.namprd03.prod.outlook.com (2603:10b6:a03:409::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Mon, 22 May
 2023 16:25:52 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6411.028; Mon, 22 May 2023
 16:25:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c4381af-f8bd-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684772771;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=x/z7FRGiRtq5/93DtD1VroEMpJhQJKR2RPRMQ9XH0SU=;
  b=RRiuvGRb9PWoaN9rXcQXCCOlzZOgiVV7AeizVxHKCaT00GAUbYZTgvsb
   uNwhuHo7fAYq1sBVGAU5qv8ZHrN85CeUrRH9ooYcfY9fCDyXKO5O1k5uW
   W1GnU/QsA+kGreW78sgsjDANtGnHvr+DiVhBk0FlgU8sD128C4+Zb6noa
   o=;
X-IronPort-RemoteIP: 104.47.66.40
X-IronPort-MID: 109276189
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,184,1681185600"; 
   d="scan'208";a="109276189"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x/z7FRGiRtq5/93DtD1VroEMpJhQJKR2RPRMQ9XH0SU=;
 b=S+YJqXWLl8nJwSAjgusAblXUZFQx4WwSUiXq8R320xuHKdhdAOzgUVEzclIfkFV+1VtEQ7eClu3D484C7tZ5d6DjCACuiSZF1bnYQAqy1ixaYVxiq4ABknThIbAhsqSyDgUYo+2kKIfsTYfz1ebuzyW/pTKRpNGLd7zn1mCRjFY=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <d6aa8dee-49e2-1493-adb5-2adb474af067@citrix.com>
Date: Mon, 22 May 2023 17:25:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 00/10] x86: support AVX512-FP16
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
In-Reply-To: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0021.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ae::22) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB6746:EE_
X-MS-Office365-Filtering-Correlation-Id: c898c02c-6762-427c-09d7-08db5ae13589
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c898c02c-6762-427c-09d7-08db5ae13589
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2023 16:25:52.1043
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2Do5+ZgufIxIq/aNyZHxgK6nZdj7YoERvVKD4FqzZJKU0FAFCKE+By/+bAK1Tv5Cs/sTRCeDb2ZtsbHv2oT5Z26UnXIaU0PBLu+dgrYwYbE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6746

On 03/04/2023 3:56 pm, Jan Beulich wrote:
> While I (quite obviously) don't have any suitable hardware, Intel's
> SDE allows testing the implementation. And since there's no new
> state (registers) associated with this ISA extension, this should
> suffice for integration.

I've given this a spin on a Sapphire Rapids system.

Relevant (AFAICT) bits of the log:

Testing vfpclasspsz $0x46,64(%edx),%k2...okay
Testing vfpclassphz $0x46,128(%ecx),%k3...okay
...
Testing avx512_fp16/all disp8 handling...okay
Testing avx512_fp16/128 disp8 handling...okay
...
Testing AVX512_FP16 f16 scal native execution...okay
Testing AVX512_FP16 f16 scal 64-bit code sequence...okay
Testing AVX512_FP16 f16 scal 32-bit code sequence...okay
Testing AVX512_FP16 f16x32 native execution...okay
Testing AVX512_FP16 f16x32 64-bit code sequence...okay
Testing AVX512_FP16 f16x32 32-bit code sequence...okay
Testing AVX512_FP16+VL f16x8 native execution...okay
Testing AVX512_FP16+VL f16x8 64-bit code sequence...okay
Testing AVX512_FP16+VL f16x8 32-bit code sequence...okay
Testing AVX512_FP16+VL f16x16 native execution...okay
Testing AVX512_FP16+VL f16x16 64-bit code sequence...okay
Testing AVX512_FP16+VL f16x16 32-bit code sequence...okay

and it exits zero, so everything seems fine.


One thing however, this series ups the minimum GCC version required to
build the emulator at all:

make: Entering directory '/local/xen.git/tools/tests/x86_emulator'
gcc: error: unrecognized command-line option ‘-mavx512fp16’; did you
mean ‘-mavx512bf16’?
Makefile:121: Test harness not built, use newer compiler than "gcc"
(version 10) and an "{evex}" capable assembler

and I'm not sure we want to do this.  Upping the version of GCC while
leaving binutils as-was does lead to a build of the harness without
AVX512-FP16 active, which is the preferred behaviour here.
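The usual way to keep such a harness building against older toolchains is
to probe the flag rather than gate on a version number. A standalone
sketch of such a probe; the function name is made up, and this is not a
claim about what the harness Makefile currently does:

```shell
# cc_supports_flag CC FLAG: succeed iff the given compiler accepts FLAG.
# Compiling an empty-ish translation unit is enough to trigger the
# "unrecognized command-line option" diagnostic on compilers that lack it.
cc_supports_flag() {
    cc=$1
    flag=$2
    echo 'int main(void) { return 0; }' | \
        "$cc" "$flag" -x c -c -o /dev/null - 2>/dev/null
}
```

A Makefile can then add `-mavx512fp16` (and the corresponding test code)
only when the probe succeeds, so an older GCC still builds the harness,
just without the AVX512-FP16 tests.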

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 22 16:47:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 16:47:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538040.837780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q18hJ-0002HX-6b; Mon, 22 May 2023 16:47:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538040.837780; Mon, 22 May 2023 16:47:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q18hJ-0002HQ-3d; Mon, 22 May 2023 16:47:49 +0000
Received: by outflank-mailman (input) for mailman id 538040;
 Mon, 22 May 2023 16:47:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eDGG=BL=arm.com=Peter.Hoyes@srs-se1.protection.inumbo.net>)
 id 1q18hH-0002HK-Sv
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 16:47:47 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 60632598-f8c0-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 18:47:45 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E9C6E11FB;
 Mon, 22 May 2023 09:48:29 -0700 (PDT)
Received: from [10.1.199.64] (unknown [10.1.199.64])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7B8BE3F762;
 Mon, 22 May 2023 09:47:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60632598-f8c0-11ed-b22d-6b7b168915f2
Message-ID: <f59cf610-7e1c-8143-8608-f76dc43e13b6@arm.com>
Date: Mon, 22 May 2023 17:47:37 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH] tools/xendomains: Don't auto save/restore/migrate on
 Arm*
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>
References: <d52c31c7-57b1-41c1-af35-a9b847683c0a@perard>
 <20230519162454.50337-1-anthony.perard@citrix.com>
Content-Language: en-US
From: Peter Hoyes <Peter.Hoyes@arm.com>
In-Reply-To: <20230519162454.50337-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 19/05/2023 17:24, Anthony PERARD wrote:
> Saving, restoring and migrating domains are not currently supported on
> arm and arm64 platforms, so xendomains prints the warning:
>
>    An error occurred while saving domain:
>    command not implemented
>
> when attempting to run `xendomains stop`. It otherwise continues to shut
> down the domains cleanly, with the unsupported steps skipped.
>
> Also in sysconfig.xendomains, change "Default" to "Example" as the
> real default is an empty value.
>
> Reported-by: Peter Hoyes <Peter.Hoyes@arm.com>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>
> Peter, what do you think of this approach?

Thanks for preparing this. I've just validated that the warning above is
not present with either qemu-aarch64 or an internal stack.

Tested-by: Peter Hoyes <peter.hoyes@arm.com>

>
> For reference, there's also a way to find out if a macro exists, with
> AC_CHECK_DECL(), but the libxl.h header depends on other headers that
> need to be generated.
> ---
>   tools/configure                                    | 11 +++++++++++
>   tools/configure.ac                                 | 13 +++++++++++++
>   tools/hotplug/Linux/init.d/sysconfig.xendomains.in |  4 ++--
>   3 files changed, 26 insertions(+), 2 deletions(-)
>
> diff --git a/tools/configure b/tools/configure
> index 52b4717d01..a722f72c08 100755
> --- a/tools/configure
> +++ b/tools/configure
> @@ -624,6 +624,7 @@ ac_includes_default="\
>   
>   ac_subst_vars='LTLIBOBJS
>   LIBOBJS
> +XENDOMAINS_SAVE_DIR
>   pvshim
>   ninepfs
>   SYSTEMD_LIBS
> @@ -10155,6 +10156,16 @@ if test "$ax_found" = "0"; then :
>   fi
>   
>   
> +case "$host_cpu" in
> +    arm*|aarch64)
> +        XENDOMAINS_SAVE_DIR=
> +        ;;
> +    *)
> +        XENDOMAINS_SAVE_DIR="$XEN_LIB_DIR/save"
> +        ;;
> +esac
> +
> +
>   cat >confcache <<\_ACEOF
>   # This file is a shell script that caches the results of configure
>   # tests run on this system so they can be shared between configure
> diff --git a/tools/configure.ac b/tools/configure.ac
> index 3cccf41960..0f0983f6b7 100644
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -517,4 +517,17 @@ AS_IF([test "x$pvshim" = "xy"], [
>   
>   AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
>   
> +dnl Disable autosave of domain in xendomains on shutdown
> +dnl due to missing support. This should be in sync with
> +dnl LIBXL_HAVE_NO_SUSPEND_RESUME in libxl.h
> +case "$host_cpu" in
> +    arm*|aarch64)
> +        XENDOMAINS_SAVE_DIR=
> +        ;;
> +    *)
> +        XENDOMAINS_SAVE_DIR="$XEN_LIB_DIR/save"
> +        ;;
> +esac
> +AC_SUBST(XENDOMAINS_SAVE_DIR)
> +
>   AC_OUTPUT()
> diff --git a/tools/hotplug/Linux/init.d/sysconfig.xendomains.in b/tools/hotplug/Linux/init.d/sysconfig.xendomains.in
> index f61ef9c4d1..3c49f18bb0 100644
> --- a/tools/hotplug/Linux/init.d/sysconfig.xendomains.in
> +++ b/tools/hotplug/Linux/init.d/sysconfig.xendomains.in
> @@ -45,7 +45,7 @@ XENDOMAINS_CREATE_USLEEP=5000000
>   XENDOMAINS_MIGRATE=""
>   
>   ## Type: string
> -## Default: @XEN_LIB_DIR@/save
> +## Example: @XEN_LIB_DIR@/save
>   #
>   # Directory to save running domains to when the system (dom0) is
>   # shut down. Will also be used to restore domains from if # XENDOMAINS_RESTORE
> @@ -53,7 +53,7 @@ XENDOMAINS_MIGRATE=""
>   # (e.g. because you rather shut domains down).
>   # If domain saving does succeed, SHUTDOWN will not be executed.
>   #
> -XENDOMAINS_SAVE=@XEN_LIB_DIR@/save
> +XENDOMAINS_SAVE=@XENDOMAINS_SAVE_DIR@
>   
>   ## Type: string
>   ## Default: "--wait"
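The arch test in the patch above is easy to exercise on its own. This
sketch mirrors the configure.ac case statement, with "/var/lib/xen/save"
standing in for the @XEN_LIB_DIR@/save substitution (an illustrative
value, not necessarily what configure produces):

```shell
# xendomains_save_dir HOST_CPU: mirror the case logic from the patch.
# An empty result means domain saving is disabled for that architecture.
xendomains_save_dir() {
    case "$1" in
        arm*|aarch64)
            echo ""
            ;;
        *)
            echo "/var/lib/xen/save"
            ;;
    esac
}
```

So armv7l and aarch64 hosts get an empty XENDOMAINS_SAVE, which makes
xendomains skip the unsupported save step, while x86 hosts keep the
save directory.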


From xen-devel-bounces@lists.xenproject.org Mon May 22 17:44:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 17:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538057.837800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q19aE-0000ID-I8; Mon, 22 May 2023 17:44:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538057.837800; Mon, 22 May 2023 17:44:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q19aE-0000I3-FC; Mon, 22 May 2023 17:44:34 +0000
Received: by outflank-mailman (input) for mailman id 538057;
 Mon, 22 May 2023 17:44:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q19aD-0000Ht-1O; Mon, 22 May 2023 17:44:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q19aC-0005tV-JG; Mon, 22 May 2023 17:44:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q19aC-00045E-2o; Mon, 22 May 2023 17:44:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q19aC-0003CF-29; Mon, 22 May 2023 17:44:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=u6L3L7qbJ+vH9M6w1/IcR+WxQ00XU2vB767a7KPq4UI=; b=B96U+9ZE0siUt/ldbVUpLFCRm6
	NmnIknnlOgvpftg1XgqJbcDWnZUX4CVvkb+t1rLJOO8pDMDDFWQpp/v7k7Api4cw/qM2EOSj8KFFk
	TcIP09+ZAR/gIGW0ah3Qt1WARM1ccK+pX102TXEPX8FgYJC0ld5D/F2NIEHsEfnGaaKs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180897-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180897: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c7908869ac26961a3919491705e521179ad3fc0e
X-Osstest-Versions-That:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 May 2023 17:44:32 +0000

flight 180897 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180897/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c7908869ac26961a3919491705e521179ad3fc0e
baseline version:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a

Last test of basis   180694  2023-05-18 00:03:29 Z    4 days
Testing same since   180897  2023-05-22 15:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>
  Yann Dirson <yann.dirson@vates.fr>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   753d903a6f..c7908869ac  c7908869ac26961a3919491705e521179ad3fc0e -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 22 19:11:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 19:11:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538068.837814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Aw1-00016V-QI; Mon, 22 May 2023 19:11:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538068.837814; Mon, 22 May 2023 19:11:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Aw1-00016O-MW; Mon, 22 May 2023 19:11:09 +0000
Received: by outflank-mailman (input) for mailman id 538068;
 Mon, 22 May 2023 19:11:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iirr=BL=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1q1Avz-00016D-U5
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 19:11:08 +0000
Received: from sender3-of-o57.zoho.com (sender3-of-o57.zoho.com
 [136.143.184.57]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 62eab82e-f8d4-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 21:11:01 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 16847826465641012.4911080837867;
 Mon, 22 May 2023 12:10:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62eab82e-f8d4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1684782647; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=TI/LPNqVMPHQLTqxPnbta7GkraYlp4fhOsfKXhPKbph+SQmynfRzGypC+swalw2w44SFnfVVdECHWNB6eQo30LXhmH2S0YlniQbCCunVRbqklqRYB3w0zdTatIbT1GhrBH26UDL0+6z7KPw9hom8KJMPoipG3nKOoKw1Tvuo/uc=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1684782647; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=XAgV32Qod+z5MHCfvaT/mP1HVEx9HWoewIm0vYR7fa4=; 
	b=OsBxectFPE1ytTAJlvJOXOV0ZzpfAtgh4dcyB9faGxZjX7b2OD8J/2epAI4FeAVMt3/X2VRVXRuLWMUb0bHUuaSWE+PTAxkK/KtdNdO8M6CflqF0huQ3/G864RIJRY4ppImF/Sn4fODTeJSpVIdlC/SBK/7aAwgTHreRQlDYD0Y=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1684782647;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:To:To:Cc:Cc:References:From:From:Subject:Subject:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=XAgV32Qod+z5MHCfvaT/mP1HVEx9HWoewIm0vYR7fa4=;
	b=WRvQ6tECHHjI1Ayv2WDUAew+SkLDfZ5J8VesDdK/kOOMNMMeRHU3H07vMDXmCPYc
	RS/ntmg4VDoNoVisukTiJ4ML85RE0G8LhT+2otIA2GXGbWe8lwMiXP5JJt5zhylk6KE
	sDufutK74Ialpe0MGwfnMdsVlM5aC+t05ktzuMLs=
Message-ID: <aed5a8d6-42ae-200b-0bd7-ce6e7a5f02d8@apertussolutions.com>
Date: Mon, 22 May 2023 15:10:44 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230519114824.12482-1-dpsmith@apertussolutions.com>
 <4a8c30bd-ebe7-3800-37ae-9e3e6c37a7d0@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH] maintainers: add regex matching for xsm
In-Reply-To: <4a8c30bd-ebe7-3800-37ae-9e3e6c37a7d0@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 5/22/23 05:23, Jan Beulich wrote:
> On 19.05.2023 13:48, Daniel P. Smith wrote:
>> XSM is a subsystem for which how and where its hooks are called is as
>> important as the implementation of the hooks themselves. The people best
>> suited for evaluating the how and where are the XSM maintainers and
>> reviewers. This creates a challenge, as the hooks are used throughout the
>> hypervisor, yet the XSM maintainers and reviewers are not, and should not
>> be, listed as reviewers for each of those subsystems in the MAINTAINERS
>> file. The MAINTAINERS file does, however, support regex matches via the
>> 'K' identifier, which are applied to both the commit message and the
>> commit delta. Adding the 'K' identifier will declare that any patch
>> relating to XSM requires the input of the XSM maintainers and reviewers.
>> For those that use the get_maintainers script, the 'K' identifier will
>> automatically add the XSM maintainers and reviewers.
> 
> With, aiui, a fair chance of false positives when e.g. XSM hook invocations
> are only in patch context. Much like ...

I was torn between matching any line containing `xsm_` and matching only 
lines whose first non-whitespace character is a `+` and which contain 
`xsm_`. In the end, I opted for the former because the concern is not 
just a change to the line with the XSM hook, but changes to the context 
around the hook. As a result, yes, there will be false positives, as 
well as potential false negatives when a relevant context change happens 
far enough outside the diff scope. Regardless, the end result will be 
increased awareness at the cost of some noise. IMHO, that is a better 
situation than the one we are in today.
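
The trade-off between the two candidate approaches can be illustrated
with a small sketch. This uses Python's `re` module rather than the Perl
engine get_maintainer.pl actually uses (both treat these patterns the
same), and the diff hunk below is made up for illustration:

```python
import re

# The two alternatives weighed above, applied per line of a made-up hunk.
any_line = re.compile(r"xsm_")          # chosen: also fires on context lines
added_only = re.compile(r"^\+.*xsm_")   # rejected: only fires on added lines

hunk = [
    " static int domain_create(struct domain *d)",  # context (hypothetical)
    "     rc = xsm_domain_create(d);",              # unchanged hook, context
    "+    if ( rc )",                               # added line, no hook
]
# Matching any line notices the hook sitting in the patch context ...
assert any(any_line.search(line) for line in hunk)
# ... while matching only added lines would miss this patch entirely.
assert not any(added_only.search(line) for line in hunk)
```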

>> Anyone not using
>> get_maintainers will be responsible for ensuring that, if their work
>> touches an XSM hook, the XSM maintainers and reviewers are copied.
> 
> ... manual intervention is needed in the case of not using the script, I
> think people should also be at least asked to see about avoiding stray Cc-s
> in that case. Unless of course I'm misreading get_maintainers.pl (my Perl
> isn't really great) or the script would be adjusted to only look at added/
> removed lines (albeit even that would leave a certain risk of false
> positives).
> 
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -674,6 +674,8 @@ F:	tools/flask/
>>   F:	xen/include/xsm/
>>   F:	xen/xsm/
>>   F:	docs/misc/xsm-flask.txt
>> +K:  xsm_.*
>> +K:  \b(xsm|XSM)\b
> 
> Nit: Please make padding match that of adjacent lines.
Apologies, I didn't catch that expandtab was on; I will resubmit with 
hard tabs in place.

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Mon May 22 19:15:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 19:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538078.837829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Azt-0001jS-Bp; Mon, 22 May 2023 19:15:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538078.837829; Mon, 22 May 2023 19:15:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Azt-0001jL-9L; Mon, 22 May 2023 19:15:09 +0000
Received: by outflank-mailman (input) for mailman id 538078;
 Mon, 22 May 2023 19:15:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iirr=BL=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1q1Azs-0001jA-C2
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 19:15:08 +0000
Received: from sender3-of-o57.zoho.com (sender3-of-o57.zoho.com
 [136.143.184.57]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f54518db-f8d4-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 21:15:06 +0200 (CEST)
Delivered-To: dpsmith@apertussolutions.com
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 168478290119119.98926170790287;
 Mon, 22 May 2023 12:15:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f54518db-f8d4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1684782902; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=O0kdTS3rolyoS3J0hIN0vOe5o99wlAJsZ5oFiSdxjTwPjmYLX6z2Mu1YfkgGqBnGgPmAR1770DwoPRLCHOfz3XZyZzpOtEZ7mWz+r1QGfZwSTjG0Q0h2ZcbXhTMqxGIlNwVoOdxqiWMZOO5nneZ2ablU5n2jJUnltesKf0hJERs=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1684782902; h=Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=dAPFVcJYTMmZ5NEfQ3IJYFc1oVcKeuaNKP6R+EjiTdk=; 
	b=UlJXX835xzOCgshy0+MfMpcIqMn+lpz5bHL3bbKWFXtG0DkGSLp2WGkNPvvV6Bm/j3qD9B+zKdbrhUAPkPHWbUiXAzs99dDlwO1eoj5wVPi3hQq0ZCcVWDtsyl3PsVM+2VWbNG8HaSAHiAo5fE5aAQUh6s+GV4Hi8I6AK6A2eX4=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1684782902;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:MIME-Version:Content-Transfer-Encoding:Reply-To;
	bh=dAPFVcJYTMmZ5NEfQ3IJYFc1oVcKeuaNKP6R+EjiTdk=;
	b=VuBKVrqwSvsbAJiUWpvnJWjNwMvLhfYMAv1iwEaMDTdpswUXY6Nf/7AkgQWVJ7e6
	FubF7dfkzxFR5lQK7lNs9hJrZRGf40Z1O2vsGqHwt3nAFmVQb1+ZaZs8sR/KFcnDMSt
	z3UAWukmsCXny0fKWvJUp6XTe08l83KtFtAvo8vA=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] maintainers: add regex matching for xsm
Date: Mon, 22 May 2023 15:14:49 -0400
Message-Id: <20230522191450.5665-1-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

XSM is a subsystem for which how and where its hooks are called is as
important as the implementation of the hooks themselves. The people best
suited for evaluating the how and where are the XSM maintainers and
reviewers. This creates a challenge, as the hooks are used throughout the
hypervisor, yet the XSM maintainers and reviewers are not, and should not
be, listed as reviewers for each of those subsystems in the MAINTAINERS
file. The MAINTAINERS file does, however, support regex matches via the
'K' identifier, which are applied to both the commit message and the
commit delta. Adding the 'K' identifier will declare that any patch
relating to XSM requires the input of the XSM maintainers and reviewers.
For those that use the get_maintainers script, the 'K' identifier will
automatically add the XSM maintainers and reviewers. Anyone not using
get_maintainers will be responsible for ensuring that, if their work
touches an XSM hook, the XSM maintainers and reviewers are copied.

This patch adds a pair of regexes to the XSM section. The first is
`xsm_.*`, which seeks to match XSM hooks in the commit's delta. The second
is `\b(xsm|XSM)\b`, which matches strictly the words xsm or XSM and should
not capture words containing "xsm" as a substring.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index f2f1881b32..b0f0823d21 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -674,6 +674,8 @@ F:	tools/flask/
 F:	xen/include/xsm/
 F:	xen/xsm/
 F:	docs/misc/xsm-flask.txt
+K:	xsm_.*
+K:	\b(xsm|XSM)\b
 
 THE REST
 M:	Andrew Cooper <andrew.cooper3@citrix.com>
-- 
2.20.1
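
The two `K:` patterns in the patch above can be sanity-checked with a
quick sketch. This uses Python's `re` module rather than the Perl engine
that get_maintainer.pl actually uses, but both engines treat these
particular patterns identically; the sample strings are made up for
illustration:

```python
import re

# The two K: patterns from the patch. get_maintainer.pl applies such
# patterns line by line against the commit message and delta.
hook_pat = re.compile(r"xsm_.*")
word_pat = re.compile(r"\b(xsm|XSM)\b")

assert hook_pat.search("rc = xsm_domain_create(XSM_HOOK, d, ssidref);")
assert word_pat.search("Enable XSM support in the build")
assert word_pat.search("tighten an xsm check")
# \b requires a word boundary, so substrings do not match:
assert not word_pat.search("fooxsmbar")
```

One subtlety: `_` counts as a word character, so `\b(xsm|XSM)\b` does not
match inside identifiers such as `XSM_HOOK` (nor does the lowercase
`xsm_.*`); matches on those rely on the surrounding message or delta.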



From xen-devel-bounces@lists.xenproject.org Mon May 22 19:35:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 19:35:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538082.837840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1BJ2-0004DV-W1; Mon, 22 May 2023 19:34:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538082.837840; Mon, 22 May 2023 19:34:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1BJ2-0004DO-Sl; Mon, 22 May 2023 19:34:56 +0000
Received: by outflank-mailman (input) for mailman id 538082;
 Mon, 22 May 2023 19:34:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q1BJ1-0004DI-LJ
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 19:34:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q1BJ0-0000V1-Ko; Mon, 22 May 2023 19:34:54 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.102])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q1BJ0-0005ie-Fu; Mon, 22 May 2023 19:34:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Irn1Cg7gBaFqvIZrMdYcTS+cLcoPNbY4R08SgynHaIY=; b=05AAonTa1UCWacF2zlO3XTrPKw
	ZkPmoHV15Baye4qS9ltXjwzobYZRz/JrJf2Ew7ggK31rc/Q/fVvMr6RpIybbYihR4qjDwxl4CdciT
	SthMy7OhjK31PIkc5QuFqdLeyi8pwHgFdPBNG2lxX8mSaqRIjBPm0vKPbyhbBqIpG1Q8=;
Message-ID: <b18f7dcb-b790-2571-93e1-aa9a9132276a@xen.org>
Date: Mon, 22 May 2023 20:34:52 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.10.1
Subject: Re: [XEN PATCH] tools/xendomains: Don't auto save/restore/migrate on
 Arm*
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Peter Hoyes <Peter.Hoyes@arm.com>, Wei Liu <wl@xen.org>
References: <d52c31c7-57b1-41c1-af35-a9b847683c0a@perard>
 <20230519162454.50337-1-anthony.perard@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230519162454.50337-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Anthony,

On 19/05/2023 17:24, Anthony PERARD wrote:
> Saving, restoring and migrating domains are not currently supported on
> arm and arm64 platforms, so xendomains prints the warning:
> 
>    An error occurred while saving domain:
>    command not implemented
> 
> when attempting to run `xendomains stop`. It otherwise continues to shut
> down the domains cleanly, with the unsupported steps skipped.
> 
> Also in sysconfig.xendomains, change "Default" to "Example" as the
> real default is an empty value.
> 
> Reported-by: Peter Hoyes <Peter.Hoyes@arm.com>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> 
> Peter, what do you think of this approach?
> 
> For reference, there's also a way to find out if a macro exists, with
> AC_CHECK_DECL(), but the libxl.h header depends on other headers that
> need to be generated.
> ---
>   tools/configure                                    | 11 +++++++++++
>   tools/configure.ac                                 | 13 +++++++++++++
>   tools/hotplug/Linux/init.d/sysconfig.xendomains.in |  4 ++--
>   3 files changed, 26 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/configure b/tools/configure
> index 52b4717d01..a722f72c08 100755
> --- a/tools/configure
> +++ b/tools/configure
> @@ -624,6 +624,7 @@ ac_includes_default="\
>   
>   ac_subst_vars='LTLIBOBJS
>   LIBOBJS
> +XENDOMAINS_SAVE_DIR
>   pvshim
>   ninepfs
>   SYSTEMD_LIBS
> @@ -10155,6 +10156,16 @@ if test "$ax_found" = "0"; then :
>   fi
>   
>   
> +case "$host_cpu" in
> +    arm*|aarch64)
> +        XENDOMAINS_SAVE_DIR=
> +        ;;
> +    *)
> +        XENDOMAINS_SAVE_DIR="$XEN_LIB_DIR/save"
> +        ;;
> +esac
> +
> +
>   cat >confcache <<\_ACEOF
>   # This file is a shell script that caches the results of configure
>   # tests run on this system so they can be shared between configure
> diff --git a/tools/configure.ac b/tools/configure.ac
> index 3cccf41960..0f0983f6b7 100644
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -517,4 +517,17 @@ AS_IF([test "x$pvshim" = "xy"], [
>   
>   AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
>   
> +dnl Disable autosave of domain in xendomains on shutdown
> +dnl due to missing support. This should be in sync with
> +dnl LIBXL_HAVE_NO_SUSPEND_RESUME in libxl.h

Quite likely, a developer adding a new arch will first look at the 
definition of LIBXL_HAVE_NO_SUSPEND_RESUME. So it would be good if we 
had a similar message there to remind them to update this case. That 
said...

> +case "$host_cpu" in
> +    arm*|aarch64)
> +        XENDOMAINS_SAVE_DIR=
> +        ;;
> +    *)
> +        XENDOMAINS_SAVE_DIR="$XEN_LIB_DIR/save"
> +        ;;
> +esac

... I am wondering if the switch should be the other way around. IOW, 
the default should be no support for suspend/resume. This would make it 
easier to add support for RISC-V (I suspect support for suspend/resume 
will not be in the first version) or any other new arch.
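
Inverting the switch as suggested might look like the sketch below
(hypothetical: the `save_dir_for` helper, the hard-coded save directory,
and the list of architectures known to support save/restore are
assumptions for illustration, not the actual patch):

```shell
# Hypothetical inverted default: only architectures known to support
# save/restore opt in; arm*, aarch64, riscv* and any new arch fall
# through to "disabled" (empty save dir).
save_dir_for() {
    case "$1" in
        x86_64|i[3-6]86)
            echo "/var/lib/xen/save"    # stands in for $XEN_LIB_DIR/save
            ;;
        *)
            echo ""                     # autosave disabled
            ;;
    esac
}
```

With this shape, `save_dir_for aarch64` prints nothing, so xendomains
would skip the save step without any per-arch additions.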

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 22 19:35:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 19:35:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538083.837850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1BJI-0004Vf-7a; Mon, 22 May 2023 19:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538083.837850; Mon, 22 May 2023 19:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1BJI-0004VW-4h; Mon, 22 May 2023 19:35:12 +0000
Received: by outflank-mailman (input) for mailman id 538083;
 Mon, 22 May 2023 19:35:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iirr=BL=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1q1BJG-0004Ud-GD
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 19:35:10 +0000
Received: from sender4-of-o50.zoho.com (sender4-of-o50.zoho.com
 [136.143.188.50]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c1eb3879-f8d7-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 21:35:08 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 16847841019723.4284023614443413;
 Mon, 22 May 2023 12:35:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1eb3879-f8d7-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1684784103; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=NZD3h/6MSHBDgza8k9G0UMwscqDDu8wBb/sEltpJfAmGWHwBYjPbXcZTW/D/EgV1QD90XEZa25nNA4iEjlhdrBiZqSf2gO8uIayZyEAwlnrY9hAUJMT7g/3u8Th6c44QftbzTwPVGt5eUGQhLpNwmPOm6vLW8yALTxnnlLOqFyQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1684784103; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=LHtXxuQLJ2qd3EJg+oiO4vWWwXemlyBNcmjyBgrOsEI=; 
	b=V4HQghP2xS1Oh4wqTTWJT3hEk5bSvOeRVtvn/SjmKON3XVexFDS6rOpI79iFC0++cCQNcKosnya7dFDMoBGSEcAb4WyD0DuDdPT2fkK2QVjS8sb33pkMq8/t9ZCvk+8gupXtEUWJf5eE/CtPqCg2T/eUF2O/d1IuCALg2xNa1Ck=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1684784103;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:Subject:Subject:To:To:Cc:Cc:References:From:From:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=LHtXxuQLJ2qd3EJg+oiO4vWWwXemlyBNcmjyBgrOsEI=;
	b=QKqBKbAMwOMFEacp/FFpoY6l4QtyAfj0SHU6qV1pF2V+4Twx1x8rib/sg+ZwBwdw
	Iuv4pckJEj0psdvrRQUVfQsL8TafUPN4kRlplM9G8cWyxcBLK3EKqQ+giKiFT724TnF
	u2o/17K5/697qS1B5MobmyWDJ4/XJZ+D5XIHGGVw=
Message-ID: <6028fc89-9a9a-5d3d-66f7-e761a6c9439b@apertussolutions.com>
Date: Mon, 22 May 2023 15:35:00 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [PATCH] maintainers: add regex matching for xsm
Content-Language: en-US
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230519114824.12482-1-dpsmith@apertussolutions.com>
 <4a8c30bd-ebe7-3800-37ae-9e3e6c37a7d0@suse.com>
 <4c72b060-52b7-a852-f966-5849a78ccc19@xen.org>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <4c72b060-52b7-a852-f966-5849a78ccc19@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 5/22/23 06:27, Julien Grall wrote:
> Hi,
> 
> On 22/05/2023 10:23, Jan Beulich wrote:
>> On 19.05.2023 13:48, Daniel P. Smith wrote:
>>> XSM is a subsystem for which how and where its hooks are called is
>>> as important as the implementation of the hooks themselves. The
>>> people best suited for evaluating the how and where are the XSM
>>> maintainers and reviewers. This creates a challenge, as the hooks
>>> are used throughout the hypervisor, yet the XSM maintainers and
>>> reviewers are not, and should not be, listed as reviewers for each
>>> of those subsystems in the MAINTAINERS file. The MAINTAINERS file
>>> does, however, support regex matches via the 'K' identifier, which
>>> are applied to both the commit message and the commit delta. Adding
>>> the 'K' identifier will declare that any patch relating to XSM
>>> requires the input of the XSM maintainers and reviewers. For those
>>> that use the get_maintainers script, the 'K' identifier will
>>> automatically add the XSM maintainers and reviewers.
>>
>> With, aiui, a fair chance of false positives when e.g. XSM hook
>> invocations are only in patch context. Much like ...
>>
>>> Anyone not using get_maintainers will be responsible for ensuring
>>> that, if their work touches an XSM hook, the XSM maintainers and
>>> reviewers are copied.
>>
>> ... manual intervention is needed in the case of not using the script, I
>> think people should also be at least asked to see about avoiding stray
>> Cc-s in that case.
> 
> I don't particularly like this suggestion because the sender may 
> mistakenly believe this is a "stray CC".
> 
> Personally, I would prefer to be in CC more often than less often. I 
> think we should give the choice to Daniel to decide whether he is happy 
> to be in CC potentially more often.
> 
> If it is becoming too much then we can decide to adjust the script.
> 
>> Unless of course I'm misreading get_maintainers.pl (my Perl isn't
>> really great) or the script would be adjusted to only look at
>> added/removed lines (albeit even that would leave a certain risk of
>> false positives).
>>
>>> --- a/MAINTAINERS
>>> +++ b/MAINTAINERS
>>> @@ -674,6 +674,8 @@ F:    tools/flask/
>>>   F:    xen/include/xsm/
>>>   F:    xen/xsm/
>>>   F:    docs/misc/xsm-flask.txt
>>> +K:  xsm_.*
>>> +K:  \b(xsm|XSM)\b
> 
> Aside from the nit from Jan, this change only affects the number of
> e-mails the XSM maintainers will receive. So:
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Apologies, I missed your ack before I sent v2 out.

v/r,
dps


From xen-devel-bounces@lists.xenproject.org Mon May 22 19:44:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 19:44:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538092.837860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1BSG-0006M6-6d; Mon, 22 May 2023 19:44:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538092.837860; Mon, 22 May 2023 19:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1BSG-0006Lz-3i; Mon, 22 May 2023 19:44:28 +0000
Received: by outflank-mailman (input) for mailman id 538092;
 Mon, 22 May 2023 19:44:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1BSE-0006Ln-IR; Mon, 22 May 2023 19:44:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1BSE-0000sF-2k; Mon, 22 May 2023 19:44:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1BSD-0000m2-MS; Mon, 22 May 2023 19:44:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1BSD-0007UJ-Ly; Mon, 22 May 2023 19:44:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ewb+oZSbv6i5MXLnBc/zW7P43ned9X3W05DyI2hr/Y0=; b=0SQJLZnu3ztzs1W/1BqwbNuEB0
	Q1+JEbshrJPwxabxvxIXRcaiEPk5rHAUABGG/4X/lrLK06bWINRrxCxLpIiyYSk11oRkO2I1Faoo+
	v68bWkIt6/yxnpwsmepYZIL9PVhWmXVpAu0IQ4nEptImwai27vpnNwUnbuaJ08vHFEoc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180894-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180894: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=aa222a8e4f975284b3f8f131653a4114b3d333b3
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 22 May 2023 19:44:25 +0000

flight 180894 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180894/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                aa222a8e4f975284b3f8f131653a4114b3d333b3
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    5 days
Failing since        180699  2023-05-18 07:21:24 Z    4 days   20 attempts
Testing same since   180785  2023-05-20 01:55:58 Z    2 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4282 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 22 19:46:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 19:46:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538098.837869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1BTi-0006v1-HS; Mon, 22 May 2023 19:45:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538098.837869; Mon, 22 May 2023 19:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1BTi-0006uu-Eu; Mon, 22 May 2023 19:45:58 +0000
Received: by outflank-mailman (input) for mailman id 538098;
 Mon, 22 May 2023 19:45:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y+KS=BL=kernel.org=broonie@srs-se1.protection.inumbo.net>)
 id 1q1BTg-0006uo-Oj
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 19:45:56 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4311c0dc-f8d9-11ed-8611-37d641c3527e;
 Mon, 22 May 2023 21:45:54 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0318B62A8F;
 Mon, 22 May 2023 19:45:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1426CC433D2;
 Mon, 22 May 2023 19:45:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4311c0dc-f8d9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684784752;
	bh=hjg+nEkGnT5wX4cHznoLQRDdNnOeCH9yJ5NNOV5v5V8=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=hA34ih2YynFmHhjBrG8aSg5C3lBqNWxO+7zUy6BcX+aQ+XsHNhgCm4xunK35wUfDJ
	 04AL6sb9/gUSPWXOZMaAq5Y7JIXyvMtkGkZMTG9vTCt4A8+skUC7ElqxtgBp2x2krK
	 KXPIKvhskol6TlkDwgxlGGlJ5y2FCniz+hQ98V0mD9bUkw4wvKfKbmP/xvUTM9q8Ve
	 fONIqzl3Sgg70s+V3M83exiIb/68r7UKMiD+x7d50jdm3a/1eRP+3d2dz9MSWt954u
	 vNN+OVP0DZVxqN+BeDQuT8WJBBnv0iFmhuwNmLFk5+EAJ8WntO0QmIY8JmuCh+aiqH
	 WbAh9K2p4Y8cA==
Date: Mon, 22 May 2023 20:45:41 +0100
From: Mark Brown <broonie@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Ross Philipson <ross.philipson@oracle.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch V4 33/37] cpu/hotplug: Allow "parallel" bringup up to
 CPUHP_BP_KICK_AP_STATE
Message-ID: <4ca39e58-055f-432c-8124-7c747fa4e85b@sirena.org.uk>
References: <20230512203426.452963764@linutronix.de>
 <20230512205257.240231377@linutronix.de>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="nxuSzC5SBKt722Ku"
Content-Disposition: inline
In-Reply-To: <20230512205257.240231377@linutronix.de>
X-Cookie: Even bytes get lonely for a little bit.


--nxuSzC5SBKt722Ku
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, May 12, 2023 at 11:07:50PM +0200, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
>
> There is often significant latency in the early stages of CPU bringup, and
> time is wasted by waking each CPU (e.g. with INIT/SIPI/SIPI on x86) and
> then waiting for it to respond before moving on to the next.
>
> Allow a platform to enable parallel setup which brings all to-be-onlined
> CPUs up to the CPUHP_BP_KICK_AP state. While this state advancement on the
> control CPU (BP) is single-threaded, the important part is the last state,
> CPUHP_BP_KICK_AP, which wakes the to-be-onlined CPUs up.

We're seeing a regression on ThunderX2 systems with 256 CPUs running an
arm64 defconfig on -next, which I've bisected to this patch.  Before
this commit we bring up all 256 CPUs:

[   29.137225] GICv3: CPU254: found redistributor 11e03 region 1:0x0000000441f60000
[   29.137238] GICv3: CPU254: using allocated LPI pending table @0x00000008818e0000
[   29.137305] CPU254: Booted secondary processor 0x0000011e03 [0x431f0af1]
[   29.292421] Detected PIPT I-cache on CPU255
[   29.292635] GICv3: CPU255: found redistributor 11f03 region 1:0x0000000441fe0000
[   29.292648] GICv3: CPU255: using allocated LPI pending table @0x00000008818f0000
[   29.292715] CPU255: Booted secondary processor 0x0000011f03 [0x431f0af1]
[   29.292859] smp: Brought up 2 nodes, 256 CPUs
[   29.292864] SMP: Total of 256 processors activated.

but afterwards we bring up only 255, missing the 256th:

[   29.165888] GICv3: CPU254: found redistributor 11e03 region 1:0x0000000441f60000
[   29.165901] GICv3: CPU254: using allocated LPI pending table @0x00000008818e0000
[   29.165968] CPU254: Booted secondary processor 0x0000011e03 [0x431f0af1]
[   29.166120] smp: Brought up 2 nodes, 255 CPUs
[   29.166125] SMP: Total of 255 processors activated.

I can't immediately see an issue with the patch itself; for systems
without CONFIG_HOTPLUG_PARALLEL=y it should replace the loop over
cpu_present_mask done by for_each_present_cpu() with an open-coded one.
I didn't check the rest of the series yet.

The KernelCI bisection bot also bisected an issue on Odroid XU3 (a 32
bit arm system), where the final CPU of the 8 on the system does not
come up, to the same patch:
  https://groups.io/g/kernelci-results/message/42480?p=%2C%2C%2C20%2C0%2C0%2C0%3A%3Acreated%2C0%2Call-cpus%2C20%2C2%2C0%2C99054444

Other boards I've checked (including some with multiple CPU clusters)
seem to be bringing up all their CPUs so it doesn't seem to just be
general breakage.

Log from my bisect:

git bisect start
# bad: [9f258af06b6268be8e960f63c3f66e88bdbbbdb0] Add linux-next specific files for 20230522
git bisect bad 9f258af06b6268be8e960f63c3f66e88bdbbbdb0
# good: [44c026a73be8038f03dbdeef028b642880cf1511] Linux 6.4-rc3
git bisect good 44c026a73be8038f03dbdeef028b642880cf1511
# good: [914db90ee0172753ab5298a48c63ac4f1fe089cf] Merge branch 'for-linux-next' of git://anongit.freedesktop.org/drm/drm-misc
git bisect good 914db90ee0172753ab5298a48c63ac4f1fe089cf
# good: [4624865b65777295cbe97cf1b98e6e49d81119d3] Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
git bisect good 4624865b65777295cbe97cf1b98e6e49d81119d3
# bad: [be7220c44fbc06825f7f122d06051630e1bf51e4] Merge branch 'for-next' of git://github.com/cminyard/linux-ipmi.git
git bisect bad be7220c44fbc06825f7f122d06051630e1bf51e4
# good: [cc677f7bec0da862a93d176524cdad5f416d58ef] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git
git bisect good cc677f7bec0da862a93d176524cdad5f416d58ef
# bad: [cdcc744aee1b886cbe4737798c0b8178b9ba5ae5] next-20230518/rcu
git bisect bad cdcc744aee1b886cbe4737798c0b8178b9ba5ae5
# bad: [8397dce1586a35af63fe9ea3e8fb3344758e55b5] Merge branch into tip/master: 'x86/mm'
git bisect bad 8397dce1586a35af63fe9ea3e8fb3344758e55b5
# bad: [0c7ffa32dbd6b09a87fea4ad1de8b27145dfd9a6] x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it
git bisect bad 0c7ffa32dbd6b09a87fea4ad1de8b27145dfd9a6
# good: [ab24eb9abb9c60c45119370731735b79ed79f36c] x86/xen/hvm: Get rid of DEAD_FROZEN handling
git bisect good ab24eb9abb9c60c45119370731735b79ed79f36c
# good: [72b11aa7f8f93449141544cecb21b2963416902d] riscv: Switch to hotplug core state synchronization
git bisect good 72b11aa7f8f93449141544cecb21b2963416902d
# good: [f54d4434c281f38b975d58de47adeca671beff4f] x86/apic: Provide cpu_primary_thread mask
git bisect good f54d4434c281f38b975d58de47adeca671beff4f
# bad: [bea629d57d006733d155bdb65ba4867788da69b6] x86/apic: Save the APIC virtual base address
git bisect bad bea629d57d006733d155bdb65ba4867788da69b6
# bad: [18415f33e2ac4ab382cbca8b5ff82a9036b5bd49] cpu/hotplug: Allow "parallel" bringup up to CPUHP_BP_KICK_AP_STATE
git bisect bad 18415f33e2ac4ab382cbca8b5ff82a9036b5bd49
# first bad commit: [18415f33e2ac4ab382cbca8b5ff82a9036b5bd49] cpu/hotplug: Allow "parallel" bringup up to CPUHP_BP_KICK_AP_STATE



From xen-devel-bounces@lists.xenproject.org Mon May 22 21:04:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 21:04:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538110.837886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Chc-0006qs-7g; Mon, 22 May 2023 21:04:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538110.837886; Mon, 22 May 2023 21:04:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Chc-0006ql-4d; Mon, 22 May 2023 21:04:24 +0000
Received: by outflank-mailman (input) for mailman id 538110;
 Mon, 22 May 2023 21:04:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7M57=BL=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q1ChZ-0006qf-J2
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 21:04:21 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 37aac9be-f8e4-11ed-b22d-6b7b168915f2;
 Mon, 22 May 2023 23:04:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37aac9be-f8e4-11ed-b22d-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1684789457;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NByQKI51jy2j672ZUXdG7FihvnPAeSwdTvvAedoGpTg=;
	b=MX+9r6HXu/5QEBqc2yVGdayzLN9QC3Spcof7x6wlSwPAvG+JCGl3dIF3Tqjwo831B+7A0z
	HjgfkuU57m3Mx12VWLwF1kWcZOoV7A8fkpM0+EmI1ItXgmcU/k6TXgHZPx8IDVYMbQizgt
	tGVREPK3sLYjlK2mAsLydenhzGmSLA3xyVTQNIN837Vka6O+yHpXvp2jLe0l2HJV7Nc9Qe
	pnPkq0PwnFymxCDoDBNb/vRRCWh8VNuMos17F9fUWM3d52sMdPly/QVK6LuLIuAUqckaId
	NvxIeaxkD6G74LbIeNkj/Kr2rL0IzG/VT2eU1eqW2iuY9+IYD3SC/W2JWHiiMQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1684789457;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NByQKI51jy2j672ZUXdG7FihvnPAeSwdTvvAedoGpTg=;
	b=vhXfTHSmkg8sMek6N/YM9Kxa/vm8sXoYRsypyfUtGXP1dQSKCaL+3v90Q5MYWJoZJcTP6l
	6NVrzQcJMR+6yiAA==
To: Mark Brown <broonie@kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Ross Philipson <ross.philipson@oracle.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch V4 33/37] cpu/hotplug: Allow "parallel" bringup up to
 CPUHP_BP_KICK_AP_STATE
In-Reply-To: <4ca39e58-055f-432c-8124-7c747fa4e85b@sirena.org.uk>
References: <20230512203426.452963764@linutronix.de>
 <20230512205257.240231377@linutronix.de>
 <4ca39e58-055f-432c-8124-7c747fa4e85b@sirena.org.uk>
Date: Mon, 22 May 2023 23:04:17 +0200
Message-ID: <87bkicw01a.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Mon, May 22 2023 at 20:45, Mark Brown wrote:
> On Fri, May 12, 2023 at 11:07:50PM +0200, Thomas Gleixner wrote:
>> From: Thomas Gleixner <tglx@linutronix.de>
>> 
>> There is often significant latency in the early stages of CPU bringup, and
>> time is wasted by waking each CPU (e.g. with SIPI/INIT/INIT on x86) and
>> then waiting for it to respond before moving on to the next.
>> 
>> Allow a platform to enable parallel setup which brings all to be onlined
>> CPUs up to the CPUHP_BP_KICK_AP state. While this state advancement on the
>> control CPU (BP) is single-threaded the important part is the last state
>> CPUHP_BP_KICK_AP which wakes the to be onlined CPUs up.
>
> We're seeing a regression on ThunderX2 systems with 256 CPUs with an
> arm64 defconfig running -next which I've bisected to this patch.  Before
> this commit we bring up 256 CPUs:
>
> [   29.137225] GICv3: CPU254: found redistributor 11e03 region 1:0x0000000441f60000
> [   29.137238] GICv3: CPU254: using allocated LPI pending table @0x00000008818e0000
> [   29.137305] CPU254: Booted secondary processor 0x0000011e03 [0x431f0af1]
> [   29.292421] Detected PIPT I-cache on CPU255
> [   29.292635] GICv3: CPU255: found redistributor 11f03 region 1:0x0000000441fe0000
> [   29.292648] GICv3: CPU255: using allocated LPI pending table @0x00000008818f0000
> [   29.292715] CPU255: Booted secondary processor 0x0000011f03 [0x431f0af1]
> [   29.292859] smp: Brought up 2 nodes, 256 CPUs
> [   29.292864] SMP: Total of 256 processors activated.
>
> but after we only bring up 255, missing the 256th:
>
> [   29.165888] GICv3: CPU254: found redistributor 11e03 region 1:0x0000000441f60000
> [   29.165901] GICv3: CPU254: using allocated LPI pending table @0x00000008818e0000
> [   29.165968] CPU254: Booted secondary processor 0x0000011e03 [0x431f0af1]
> [   29.166120] smp: Brought up 2 nodes, 255 CPUs
> [   29.166125] SMP: Total of 255 processors activated.
>
> I can't immediately see an issue with the patch itself, for systems
> without CONFIG_HOTPLUG_PARALLEL=y it should replace the loop over
> cpu_present_mask done by for_each_present_cpu() with an open coded one.
> I didn't check the rest of the series yet.
>
> The KernelCI bisection bot also isolated an issue on Odroid XU3 (a 32
> bit arm system) with the final CPU of the 8 on the system not coming up
> to the same patch:
>
>   https://groups.io/g/kernelci-results/message/42480?p=%2C%2C%2C20%2C0%2C0%2C0%3A%3Acreated%2C0%2Call-cpus%2C20%2C2%2C0%2C99054444
>
> Other boards I've checked (including some with multiple CPU clusters)
> seem to be bringing up all their CPUs so it doesn't seem to just be
> general breakage.

That does not make any sense at all and my tired brain does not help
either.

Can you please apply the below debug patch and provide the output?

Thanks,

        tglx
---
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 005f863a3d2b..90a9b2ae8391 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1767,13 +1767,20 @@ static void __init cpuhp_bringup_mask(const struct cpumask *mask, unsigned int n
 {
 	unsigned int cpu;
 
+	pr_info("Bringup max %u CPUs to %d\n", ncpus, target);
+
 	for_each_cpu(cpu, mask) {
 		struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+		int ret;
+
+		pr_info("Bringup CPU%u left %u\n", cpu, ncpus);
 
 		if (!--ncpus)
 			break;
 
-		if (cpu_up(cpu, target) && can_rollback_cpu(st)) {
+		ret = cpu_up(cpu, target);
+		pr_info("Bringup CPU%u %d\n", cpu, ret);
+		if (ret && can_rollback_cpu(st)) {
 			/*
 			 * If this failed then cpu_up() might have only
 			 * rolled back to CPUHP_BP_KICK_AP for the final


From xen-devel-bounces@lists.xenproject.org Mon May 22 22:21:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 22:21:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538122.837900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Dta-0006i1-LN; Mon, 22 May 2023 22:20:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538122.837900; Mon, 22 May 2023 22:20:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Dta-0006hu-Im; Mon, 22 May 2023 22:20:50 +0000
Received: by outflank-mailman (input) for mailman id 538122;
 Mon, 22 May 2023 22:20:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yAX1=BL=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q1DtZ-0006ho-U2
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 22:20:49 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e65d829f-f8ee-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 00:20:47 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8911E619F4;
 Mon, 22 May 2023 22:20:46 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 01D60C433D2;
 Mon, 22 May 2023 22:20:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e65d829f-f8ee-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684794045;
	bh=J/cPXQUnTb4WpGkFmVanjp+Ar1eIhUDhY6TNtVq40SY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=uNQuv5oxrax8aCnpLa/6t05M4ajzoAKiqANgp5oBmGoR4nWQ304zmuqp5I2E8shOq
	 Bi3xpUZNyR7uPepSmauDMF7jkw7XgKdyrg/TFQsbNy5yQ0W9vD8tlPlyNAGekzt0vO
	 1OSCZlnJ0tw0ywjZkwUjYROWX3DajEjOOxLx3Xgg5IwRp6ezQLZw0dEfIhk3sC0bfU
	 IAhj+BKRTYGt6X89zG8+eukxT5rw1ykOswsVlIEm7I4qOcVWG8SRAQFeyfI74aP818
	 afKB1qrzIKAznaBfLQDNrx5S2Op0WhnQj8qKlqvR2E1P/e/Tv+3+LHtqQ+DprWUdA7
	 uPwy+Yd+eOytw==
Date: Mon, 22 May 2023 15:20:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, jbeulich@suse.com, 
    andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org, 
    marmarek@invisiblethingslab.com, xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
In-Reply-To: <ZGig68UTddfEwR6P@Air-de-Roger>
Message-ID: <alpine.DEB.2.22.394.2305221520350.1553709@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop> <ZGX/Pvgy3+onJOJZ@Air-de-Roger> <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop> <ZGcu7EWW1cuNjwDA@Air-de-Roger> <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
 <ZGig68UTddfEwR6P@Air-de-Roger>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1250537822-1684793996=:1553709"
Content-ID: <alpine.DEB.2.22.394.2305221520000.1553709@ubuntu-linux-20-04-desktop>

--8323329-1250537822-1684793996=:1553709
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305221520001.1553709@ubuntu-linux-20-04-desktop>

On Sat, 20 May 2023, Roger Pau Monné wrote:
> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> index ec2e978a4e6b..0ff8e940fa8d 100644
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -289,6 +289,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>       */
>      for_each_pdev ( pdev->domain, tmp )
>      {
> +        if ( !tmp->vpci )
> +        {
> +            printk(XENLOG_G_WARNING "%pp: not handled by vPCI for %pd\n",
> +                   &tmp->sbdf, pdev->domain);
> +            continue;
> +        }
> +
>          if ( tmp == pdev )
>          {
>              /*
> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> index 652807a4a454..0baef3a8d3a1 100644
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
>      unsigned int i;
>      int rc = 0;
>  
> -    if ( !has_vpci(pdev->domain) )
> +    if ( !has_vpci(pdev->domain) ||
> +         /*
> +          * Ignore RO and hidden devices, those are in use by Xen and vPCI
> +          * won't work on them.
> +          */
> +         pci_get_pdev(dom_xen, pdev->sbdf) )
>          return 0;
>  
>      /* We should not get here twice for the same device. */


Now this patch works! Thank you!! :-)

You can check the full logs here
https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4329259080

Is the patch ready to be upstreamed aside from the commit message?
--8323329-1250537822-1684793996=:1553709--


From xen-devel-bounces@lists.xenproject.org Mon May 22 22:27:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 22:27:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538126.837910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1E0G-0007LV-CK; Mon, 22 May 2023 22:27:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538126.837910; Mon, 22 May 2023 22:27:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1E0G-0007LO-8m; Mon, 22 May 2023 22:27:44 +0000
Received: by outflank-mailman (input) for mailman id 538126;
 Mon, 22 May 2023 22:27:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y+KS=BL=kernel.org=broonie@srs-se1.protection.inumbo.net>)
 id 1q1E0F-0007LI-49
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 22:27:43 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc2ec7f9-f8ef-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 00:27:40 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D589462C45;
 Mon, 22 May 2023 22:27:38 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6FCDCC433D2;
 Mon, 22 May 2023 22:27:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc2ec7f9-f8ef-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684794458;
	bh=jUlZ1O2Cbsh9SfN/Yy703X/c15uKYJ0gTtetpIgZ2S4=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=CiNLVKc17r/cJ4HWwHuvyIz8vM3BIDWsStXKy3YJ72j6CnPPVjujExI2+gDkgxJKc
	 Tztnm2SF/vHe4ofGmtcSA8WAJ3iNrC1CVtsr/8wmv2oRkKrcldUU5JFwZ0UOEV5M/b
	 BOx6j31aV0L0vX5GCLf2tjtNWNWeKSw+cTethqtmilyJnqye/dNqNMpPNYX6igQqYx
	 Ivwtya1CrxIDBSs8awDppMytm6Ef3Vs6lZGxbTbh46I1cBC6PIJAKs/g+992F1FKdi
	 H0xie8ukBsHwy/uUt4HXd4Qj41UA/tFYAyBmwwhSrhQgnaIeH/RoXD4PwKl/laChsg
	 dN3Ep20H9UYUA==
Date: Mon, 22 May 2023 23:27:26 +0100
From: Mark Brown <broonie@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Ross Philipson <ross.philipson@oracle.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch V4 33/37] cpu/hotplug: Allow "parallel" bringup up to
 CPUHP_BP_KICK_AP_STATE
Message-ID: <2ed3ff77-c973-4e23-9e2f-f10776e432b7@sirena.org.uk>
References: <20230512203426.452963764@linutronix.de>
 <20230512205257.240231377@linutronix.de>
 <4ca39e58-055f-432c-8124-7c747fa4e85b@sirena.org.uk>
 <87bkicw01a.ffs@tglx>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="lnLTz5q6oum7yiEn"
Content-Disposition: inline
In-Reply-To: <87bkicw01a.ffs@tglx>
X-Cookie: HOST SYSTEM RESPONDING, PROBABLY UP...


--lnLTz5q6oum7yiEn
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Mon, May 22, 2023 at 11:04:17PM +0200, Thomas Gleixner wrote:

> That does not make any sense at all and my tired brain does not help
> either.

> Can you please apply the below debug patch and provide the output?

Here's the log. A quick glance suggests the

	if (!--ncpus)
		break;

check is doing the wrong thing when CONFIG_NR_CPUS=256, as it is for the
arm64 defconfig, and the system actually has 256 CPUs. The Odroid looks
like the same issue: the Exynos defconfig that fails there has
NR_CPUS=8, which is what the board has.

[    0.048542] smp: Bringing up secondary CPUs ...
[    0.048545] Bringup max 256 CPUs to 235
[    0.048547] Bringup CPU0 left 256
[    0.048575] Bringup CPU0 0
[    0.048577] Bringup CPU1 left 255
[    0.124561] Detected PIPT I-cache on CPU1
[    0.124586] GICv3: CPU1: found redistributor 100 region 0:0x000000040108=
0000
[    0.124595] GICv3: CPU1: using allocated LPI pending table @0x0000000880=
910000
[    0.124654] CPU1: Booted secondary processor 0x0000000100 [0x431f0af1]
[    0.124759] Bringup CPU1 0
[    0.124763] Bringup CPU2 left 254
[    0.195421] Detected PIPT I-cache on CPU2
[    0.195445] GICv3: CPU2: found redistributor 200 region 0:0x000000040110=
0000
[    0.195453] GICv3: CPU2: using allocated LPI pending table @0x0000000880=
920000
[    0.195510] CPU2: Booted secondary processor 0x0000000200 [0x431f0af1]
[    0.195611] Bringup CPU2 0
[    0.195615] Bringup CPU3 left 253
[    0.273859] Detected PIPT I-cache on CPU3
[    0.273885] GICv3: CPU3: found redistributor 300 region 0:0x000000040118=
0000
[    0.273893] GICv3: CPU3: using allocated LPI pending table @0x0000000880=
930000
[    0.273949] CPU3: Booted secondary processor 0x0000000300 [0x431f0af1]
[    0.274050] Bringup CPU3 0
[    0.274053] Bringup CPU4 left 252
[    0.351345] Detected PIPT I-cache on CPU4
[    0.351374] GICv3: CPU4: found redistributor 400 region 0:0x000000040120=
0000
[    0.351382] GICv3: CPU4: using allocated LPI pending table @0x0000000880=
940000
[    0.351438] CPU4: Booted secondary processor 0x0000000400 [0x431f0af1]
[    0.351540] Bringup CPU4 0
[    0.351543] Bringup CPU5 left 251
[    0.431068] Detected PIPT I-cache on CPU5
[    0.431099] GICv3: CPU5: found redistributor 500 region 0:0x000000040128=
0000
[    0.431107] GICv3: CPU5: using allocated LPI pending table @0x0000000880=
950000
[    0.431162] CPU5: Booted secondary processor 0x0000000500 [0x431f0af1]
[    0.431264] Bringup CPU5 0
[    0.431267] Bringup CPU6 left 250
[    0.503403] Detected PIPT I-cache on CPU6
[    0.503435] GICv3: CPU6: found redistributor 600 region 0:0x000000040130=
0000
[    0.503443] GICv3: CPU6: using allocated LPI pending table @0x0000000880=
960000
[    0.503498] CPU6: Booted secondary processor 0x0000000600 [0x431f0af1]
[    0.503600] Bringup CPU6 0
[    0.503604] Bringup CPU7 left 249
[    0.580128] Detected PIPT I-cache on CPU7
[    0.580162] GICv3: CPU7: found redistributor 700 region 0:0x000000040138=
0000
[    0.580171] GICv3: CPU7: using allocated LPI pending table @0x0000000880=
970000
[    0.580226] CPU7: Booted secondary processor 0x0000000700 [0x431f0af1]
[    0.580328] Bringup CPU7 0
[    0.580332] Bringup CPU8 left 248
[    0.660158] Detected PIPT I-cache on CPU8
[    0.660194] GICv3: CPU8: found redistributor 800 region 0:0x000000040140=
0000
[    0.660203] GICv3: CPU8: using allocated LPI pending table @0x0000000880=
980000
[    0.660258] CPU8: Booted secondary processor 0x0000000800 [0x431f0af1]
[    0.660359] Bringup CPU8 0
[    0.660363] Bringup CPU9 left 247
[    0.741063] Detected PIPT I-cache on CPU9
[    0.741102] GICv3: CPU9: found redistributor 900 region 0:0x000000040148=
0000
[    0.741110] GICv3: CPU9: using allocated LPI pending table @0x0000000880=
990000
[    0.741166] CPU9: Booted secondary processor 0x0000000900 [0x431f0af1]
[    0.741268] Bringup CPU9 0
[    0.741272] Bringup CPU10 left 246
[    0.817643] Detected PIPT I-cache on CPU10
[    0.817684] GICv3: CPU10: found redistributor a00 region 0:0x00000004015=
00000
[    0.817692] GICv3: CPU10: using allocated LPI pending table @0x000000088=
09a0000
[    0.817747] CPU10: Booted secondary processor 0x0000000a00 [0x431f0af1]
[    0.817850] Bringup CPU10 0
[    0.817854] Bringup CPU11 left 245
[    0.896094] Detected PIPT I-cache on CPU11
[    0.896137] GICv3: CPU11: found redistributor b00 region 0:0x00000004015=
80000
[    0.896145] GICv3: CPU11: using allocated LPI pending table @0x000000088=
09b0000
[    0.896202] CPU11: Booted secondary processor 0x0000000b00 [0x431f0af1]
[    0.896304] Bringup CPU11 0
[    0.896308] Bringup CPU12 left 244
[    0.976966] Detected PIPT I-cache on CPU12
[    0.977010] GICv3: CPU12: found redistributor c00 region 0:0x00000004016=
00000
[    0.977018] GICv3: CPU12: using allocated LPI pending table @0x000000088=
09c0000
[    0.977074] CPU12: Booted secondary processor 0x0000000c00 [0x431f0af1]
[    0.977179] Bringup CPU12 0
[    0.977183] Bringup CPU13 left 243
[    1.053939] Detected PIPT I-cache on CPU13
[    1.053985] GICv3: CPU13: found redistributor d00 region 0:0x00000004016=
80000
[    1.053994] GICv3: CPU13: using allocated LPI pending table @0x000000088=
09d0000
[    1.054050] CPU13: Booted secondary processor 0x0000000d00 [0x431f0af1]
[    1.054169] Bringup CPU13 0
[    1.054172] Bringup CPU14 left 242
[    1.133133] Detected PIPT I-cache on CPU14
[    1.133182] GICv3: CPU14: found redistributor e00 region 0:0x00000004017=
00000
[    1.133190] GICv3: CPU14: using allocated LPI pending table @0x000000088=
09e0000
[    1.133248] CPU14: Booted secondary processor 0x0000000e00 [0x431f0af1]
[    1.133352] Bringup CPU14 0
[    1.133356] Bringup CPU15 left 241
[    1.214963] Detected PIPT I-cache on CPU15
[    1.215013] GICv3: CPU15: found redistributor f00 region 0:0x00000004017=
80000
[    1.215022] GICv3: CPU15: using allocated LPI pending table @0x000000088=
09f0000
[    1.215078] CPU15: Booted secondary processor 0x0000000f00 [0x431f0af1]
[    1.215179] Bringup CPU15 0
[    1.215183] Bringup CPU16 left 240
[    1.292692] Detected PIPT I-cache on CPU16
[    1.292746] GICv3: CPU16: found redistributor 1000 region 0:0x0000000401=
800000
[    1.292755] GICv3: CPU16: using allocated LPI pending table @0x000000088=
0a00000
[    1.292810] CPU16: Booted secondary processor 0x0000001000 [0x431f0af1]
[    1.292914] Bringup CPU16 0
[    1.292918] Bringup CPU17 left 239
[    1.371926] Detected PIPT I-cache on CPU17
[    1.371982] GICv3: CPU17: found redistributor 1100 region 0:0x0000000401=
880000
[    1.371991] GICv3: CPU17: using allocated LPI pending table @0x000000088=
0a10000
[    1.372047] CPU17: Booted secondary processor 0x0000001100 [0x431f0af1]
[    1.372153] Bringup CPU17 0
[    1.372156] Bringup CPU18 left 238
[    1.454059] Detected PIPT I-cache on CPU18
[    1.454117] GICv3: CPU18: found redistributor 1200 region 0:0x0000000401=
900000
[    1.454126] GICv3: CPU18: using allocated LPI pending table @0x000000088=
0a20000
[    1.454182] CPU18: Booted secondary processor 0x0000001200 [0x431f0af1]
[    1.454287] Bringup CPU18 0
[    1.454290] Bringup CPU19 left 237
[    1.530831] Detected PIPT I-cache on CPU19
[    1.530891] GICv3: CPU19: found redistributor 1300 region 0:0x0000000401=
980000
[    1.530900] GICv3: CPU19: using allocated LPI pending table @0x000000088=
0a30000
[    1.530956] CPU19: Booted secondary processor 0x0000001300 [0x431f0af1]
[    1.531061] Bringup CPU19 0
[    1.531065] Bringup CPU20 left 236
[    1.609799] Detected PIPT I-cache on CPU20
[    1.609860] GICv3: CPU20: found redistributor 1400 region 0:0x0000000401=
a00000
[    1.609869] GICv3: CPU20: using allocated LPI pending table @0x000000088=
0a40000
[    1.609927] CPU20: Booted secondary processor 0x0000001400 [0x431f0af1]
[    1.610032] Bringup CPU20 0
[    1.610036] Bringup CPU21 left 235
[    1.692060] Detected PIPT I-cache on CPU21
[    1.692123] GICv3: CPU21: found redistributor 1500 region 0:0x0000000401=
a80000
[    1.692132] GICv3: CPU21: using allocated LPI pending table @0x000000088=
0a50000
[    1.692188] CPU21: Booted secondary processor 0x0000001500 [0x431f0af1]
[    1.692296] Bringup CPU21 0
[    1.692299] Bringup CPU22 left 234
[    1.768877] Detected PIPT I-cache on CPU22
[    1.768942] GICv3: CPU22: found redistributor 1600 region 0:0x0000000401=
b00000
[    1.768951] GICv3: CPU22: using allocated LPI pending table @0x000000088=
0a60000
[    1.769008] CPU22: Booted secondary processor 0x0000001600 [0x431f0af1]
[    1.769115] Bringup CPU22 0
[    1.769119] Bringup CPU23 left 233
[    1.847956] Detected PIPT I-cache on CPU23
[    1.848023] GICv3: CPU23: found redistributor 1700 region 0:0x0000000401=
b80000
[    1.848032] GICv3: CPU23: using allocated LPI pending table @0x000000088=
0a70000
[    1.848088] CPU23: Booted secondary processor 0x0000001700 [0x431f0af1]
[    1.848196] Bringup CPU23 0
[    1.848199] Bringup CPU24 left 232
[    1.920945] Detected PIPT I-cache on CPU24
[    1.921015] GICv3: CPU24: found redistributor 1800 region 0:0x0000000401=
c00000
[    1.921024] GICv3: CPU24: using allocated LPI pending table @0x000000088=
0a80000
[    1.921081] CPU24: Booted secondary processor 0x0000001800 [0x431f0af1]
[    1.921187] Bringup CPU24 0
[    1.921190] Bringup CPU25 left 231
[    1.997836] Detected PIPT I-cache on CPU25
[    1.997908] GICv3: CPU25: found redistributor 1900 region 0:0x0000000401c80000
[    1.997917] GICv3: CPU25: using allocated LPI pending table @0x0000000880a90000
[    1.997973] CPU25: Booted secondary processor 0x0000001900 [0x431f0af1]
[    1.998080] Bringup CPU25 0
[    1.998084] Bringup CPU26 left 230
[    2.071111] Detected PIPT I-cache on CPU26
[    2.071185] GICv3: CPU26: found redistributor 1a00 region 0:0x0000000401d00000
[    2.071194] GICv3: CPU26: using allocated LPI pending table @0x0000000880aa0000
[    2.071250] CPU26: Booted secondary processor 0x0000001a00 [0x431f0af1]
[    2.071358] Bringup CPU26 0
[    2.071362] Bringup CPU27 left 229
[    2.147890] Detected PIPT I-cache on CPU27
[    2.147966] GICv3: CPU27: found redistributor 1b00 region 0:0x0000000401d80000
[    2.147975] GICv3: CPU27: using allocated LPI pending table @0x0000000880ab0000
[    2.148032] CPU27: Booted secondary processor 0x0000001b00 [0x431f0af1]
[    2.148138] Bringup CPU27 0
[    2.148142] Bringup CPU28 left 228
[    2.230081] Detected PIPT I-cache on CPU28
[    2.230159] GICv3: CPU28: found redistributor 1c00 region 0:0x0000000401e00000
[    2.230168] GICv3: CPU28: using allocated LPI pending table @0x0000000880ac0000
[    2.230224] CPU28: Booted secondary processor 0x0000001c00 [0x431f0af1]
[    2.230343] Bringup CPU28 0
[    2.230347] Bringup CPU29 left 227
[    2.311933] Detected PIPT I-cache on CPU29
[    2.312012] GICv3: CPU29: found redistributor 1d00 region 0:0x0000000401e80000
[    2.312021] GICv3: CPU29: using allocated LPI pending table @0x0000000880ad0000
[    2.312077] CPU29: Booted secondary processor 0x0000001d00 [0x431f0af1]
[    2.312183] Bringup CPU29 0
[    2.312187] Bringup CPU30 left 226
[    2.392181] Detected PIPT I-cache on CPU30
[    2.392263] GICv3: CPU30: found redistributor 1e00 region 0:0x0000000401f00000
[    2.392272] GICv3: CPU30: using allocated LPI pending table @0x0000000880ae0000
[    2.392328] CPU30: Booted secondary processor 0x0000001e00 [0x431f0af1]
[    2.392435] Bringup CPU30 0
[    2.392439] Bringup CPU31 left 225
[    2.466927] Detected PIPT I-cache on CPU31
[    2.467010] GICv3: CPU31: found redistributor 1f00 region 0:0x0000000401f80000
[    2.467020] GICv3: CPU31: using allocated LPI pending table @0x0000000880af0000
[    2.467076] CPU31: Booted secondary processor 0x0000001f00 [0x431f0af1]
[    2.467184] Bringup CPU31 0
[    2.467188] Bringup CPU32 left 224
[    2.535705] Detected PIPT I-cache on CPU32
[    2.535734] GICv3: CPU32: found redistributor 1 region 0:0x0000000401020000
[    2.535743] GICv3: CPU32: using allocated LPI pending table @0x0000000880b00000
[    2.535798] CPU32: Booted secondary processor 0x0000000001 [0x431f0af1]
[    2.535910] Bringup CPU32 0
[    2.535914] Bringup CPU33 left 223
[    2.604057] Detected PIPT I-cache on CPU33
[    2.604084] GICv3: CPU33: found redistributor 101 region 0:0x00000004010a0000
[    2.604092] GICv3: CPU33: using allocated LPI pending table @0x0000000880b10000
[    2.604145] CPU33: Booted secondary processor 0x0000000101 [0x431f0af1]
[    2.604235] Bringup CPU33 0
[    2.604237] Bringup CPU34 left 222
[    2.672660] Detected PIPT I-cache on CPU34
[    2.672691] GICv3: CPU34: found redistributor 201 region 0:0x0000000401120000
[    2.672699] GICv3: CPU34: using allocated LPI pending table @0x0000000880b20000
[    2.672753] CPU34: Booted secondary processor 0x0000000201 [0x431f0af1]
[    2.672843] Bringup CPU34 0
[    2.672846] Bringup CPU35 left 221
[    2.740986] Detected PIPT I-cache on CPU35
[    2.741018] GICv3: CPU35: found redistributor 301 region 0:0x00000004011a0000
[    2.741027] GICv3: CPU35: using allocated LPI pending table @0x0000000880b30000
[    2.741080] CPU35: Booted secondary processor 0x0000000301 [0x431f0af1]
[    2.741169] Bringup CPU35 0
[    2.741172] Bringup CPU36 left 220
[    2.809469] Detected PIPT I-cache on CPU36
[    2.809503] GICv3: CPU36: found redistributor 401 region 0:0x0000000401220000
[    2.809511] GICv3: CPU36: using allocated LPI pending table @0x0000000880b40000
[    2.809564] CPU36: Booted secondary processor 0x0000000401 [0x431f0af1]
[    2.809654] Bringup CPU36 0
[    2.809657] Bringup CPU37 left 219
[    2.878071] Detected PIPT I-cache on CPU37
[    2.878108] GICv3: CPU37: found redistributor 501 region 0:0x00000004012a0000
[    2.878116] GICv3: CPU37: using allocated LPI pending table @0x0000000880b50000
[    2.878169] CPU37: Booted secondary processor 0x0000000501 [0x431f0af1]
[    2.878259] Bringup CPU37 0
[    2.878262] Bringup CPU38 left 218
[    2.946487] Detected PIPT I-cache on CPU38
[    2.946525] GICv3: CPU38: found redistributor 601 region 0:0x0000000401320000
[    2.946533] GICv3: CPU38: using allocated LPI pending table @0x0000000880b60000
[    2.946587] CPU38: Booted secondary processor 0x0000000601 [0x431f0af1]
[    2.946677] Bringup CPU38 0
[    2.946680] Bringup CPU39 left 217
[    3.014963] Detected PIPT I-cache on CPU39
[    3.015003] GICv3: CPU39: found redistributor 701 region 0:0x00000004013a0000
[    3.015012] GICv3: CPU39: using allocated LPI pending table @0x0000000880b70000
[    3.015064] CPU39: Booted secondary processor 0x0000000701 [0x431f0af1]
[    3.015158] Bringup CPU39 0
[    3.015161] Bringup CPU40 left 216
[    3.083400] Detected PIPT I-cache on CPU40
[    3.083443] GICv3: CPU40: found redistributor 801 region 0:0x0000000401420000
[    3.083451] GICv3: CPU40: using allocated LPI pending table @0x0000000880b80000
[    3.083504] CPU40: Booted secondary processor 0x0000000801 [0x431f0af1]
[    3.083598] Bringup CPU40 0
[    3.083601] Bringup CPU41 left 215
[    3.152018] Detected PIPT I-cache on CPU41
[    3.152063] GICv3: CPU41: found redistributor 901 region 0:0x00000004014a0000
[    3.152071] GICv3: CPU41: using allocated LPI pending table @0x0000000880b90000
[    3.152123] CPU41: Booted secondary processor 0x0000000901 [0x431f0af1]
[    3.152211] Bringup CPU41 0
[    3.152214] Bringup CPU42 left 214
[    3.220435] Detected PIPT I-cache on CPU42
[    3.220481] GICv3: CPU42: found redistributor a01 region 0:0x0000000401520000
[    3.220489] GICv3: CPU42: using allocated LPI pending table @0x0000000880ba0000
[    3.220542] CPU42: Booted secondary processor 0x0000000a01 [0x431f0af1]
[    3.220634] Bringup CPU42 0
[    3.220637] Bringup CPU43 left 213
[    3.288786] Detected PIPT I-cache on CPU43
[    3.288833] GICv3: CPU43: found redistributor b01 region 0:0x00000004015a0000
[    3.288842] GICv3: CPU43: using allocated LPI pending table @0x0000000880bb0000
[    3.288895] CPU43: Booted secondary processor 0x0000000b01 [0x431f0af1]
[    3.288985] Bringup CPU43 0
[    3.288988] Bringup CPU44 left 212
[    3.357271] Detected PIPT I-cache on CPU44
[    3.357321] GICv3: CPU44: found redistributor c01 region 0:0x0000000401620000
[    3.357329] GICv3: CPU44: using allocated LPI pending table @0x0000000880bc0000
[    3.357382] CPU44: Booted secondary processor 0x0000000c01 [0x431f0af1]
[    3.357475] Bringup CPU44 0
[    3.357478] Bringup CPU45 left 211
[    3.426136] Detected PIPT I-cache on CPU45
[    3.426188] GICv3: CPU45: found redistributor d01 region 0:0x00000004016a0000
[    3.426196] GICv3: CPU45: using allocated LPI pending table @0x0000000880bd0000
[    3.426249] CPU45: Booted secondary processor 0x0000000d01 [0x431f0af1]
[    3.426342] Bringup CPU45 0
[    3.426345] Bringup CPU46 left 210
[    3.494971] Detected PIPT I-cache on CPU46
[    3.495026] GICv3: CPU46: found redistributor e01 region 0:0x0000000401720000
[    3.495034] GICv3: CPU46: using allocated LPI pending table @0x0000000880be0000
[    3.495087] CPU46: Booted secondary processor 0x0000000e01 [0x431f0af1]
[    3.495178] Bringup CPU46 0
[    3.495181] Bringup CPU47 left 209
[    3.563244] Detected PIPT I-cache on CPU47
[    3.563300] GICv3: CPU47: found redistributor f01 region 0:0x00000004017a0000
[    3.563308] GICv3: CPU47: using allocated LPI pending table @0x0000000880bf0000
[    3.563362] CPU47: Booted secondary processor 0x0000000f01 [0x431f0af1]
[    3.563453] Bringup CPU47 0
[    3.563455] Bringup CPU48 left 208
[    3.631916] Detected PIPT I-cache on CPU48
[    3.631975] GICv3: CPU48: found redistributor 1001 region 0:0x0000000401820000
[    3.631984] GICv3: CPU48: using allocated LPI pending table @0x0000000880c00000
[    3.632036] CPU48: Booted secondary processor 0x0000001001 [0x431f0af1]
[    3.632129] Bringup CPU48 0
[    3.632132] Bringup CPU49 left 207
[    3.700602] Detected PIPT I-cache on CPU49
[    3.700664] GICv3: CPU49: found redistributor 1101 region 0:0x00000004018a0000
[    3.700672] GICv3: CPU49: using allocated LPI pending table @0x0000000880c10000
[    3.700725] CPU49: Booted secondary processor 0x0000001101 [0x431f0af1]
[    3.700817] Bringup CPU49 0
[    3.700820] Bringup CPU50 left 206
[    3.769387] Detected PIPT I-cache on CPU50
[    3.769450] GICv3: CPU50: found redistributor 1201 region 0:0x0000000401920000
[    3.769458] GICv3: CPU50: using allocated LPI pending table @0x0000000880c20000
[    3.769512] CPU50: Booted secondary processor 0x0000001201 [0x431f0af1]
[    3.769605] Bringup CPU50 0
[    3.769607] Bringup CPU51 left 205
[    3.838160] Detected PIPT I-cache on CPU51
[    3.838225] GICv3: CPU51: found redistributor 1301 region 0:0x00000004019a0000
[    3.838233] GICv3: CPU51: using allocated LPI pending table @0x0000000880c30000
[    3.838286] CPU51: Booted secondary processor 0x0000001301 [0x431f0af1]
[    3.838379] Bringup CPU51 0
[    3.838381] Bringup CPU52 left 204
[    3.906682] Detected PIPT I-cache on CPU52
[    3.906749] GICv3: CPU52: found redistributor 1401 region 0:0x0000000401a20000
[    3.906757] GICv3: CPU52: using allocated LPI pending table @0x0000000880c40000
[    3.906810] CPU52: Booted secondary processor 0x0000001401 [0x431f0af1]
[    3.906905] Bringup CPU52 0
[    3.906907] Bringup CPU53 left 203
[    3.975408] Detected PIPT I-cache on CPU53
[    3.975477] GICv3: CPU53: found redistributor 1501 region 0:0x0000000401aa0000
[    3.975485] GICv3: CPU53: using allocated LPI pending table @0x0000000880c50000
[    3.975538] CPU53: Booted secondary processor 0x0000001501 [0x431f0af1]
[    3.975631] Bringup CPU53 0
[    3.975633] Bringup CPU54 left 202
[    4.044084] Detected PIPT I-cache on CPU54
[    4.044154] GICv3: CPU54: found redistributor 1601 region 0:0x0000000401b20000
[    4.044162] GICv3: CPU54: using allocated LPI pending table @0x0000000880c60000
[    4.044216] CPU54: Booted secondary processor 0x0000001601 [0x431f0af1]
[    4.044309] Bringup CPU54 0
[    4.044312] Bringup CPU55 left 201
[    4.112725] Detected PIPT I-cache on CPU55
[    4.112797] GICv3: CPU55: found redistributor 1701 region 0:0x0000000401ba0000
[    4.112805] GICv3: CPU55: using allocated LPI pending table @0x0000000880c70000
[    4.112858] CPU55: Booted secondary processor 0x0000001701 [0x431f0af1]
[    4.112952] Bringup CPU55 0
[    4.112954] Bringup CPU56 left 200
[    4.181726] Detected PIPT I-cache on CPU56
[    4.181801] GICv3: CPU56: found redistributor 1801 region 0:0x0000000401c20000
[    4.181809] GICv3: CPU56: using allocated LPI pending table @0x0000000880c80000
[    4.181863] CPU56: Booted secondary processor 0x0000001801 [0x431f0af1]
[    4.181956] Bringup CPU56 0
[    4.181959] Bringup CPU57 left 199
[    4.250209] Detected PIPT I-cache on CPU57
[    4.250285] GICv3: CPU57: found redistributor 1901 region 0:0x0000000401ca0000
[    4.250293] GICv3: CPU57: using allocated LPI pending table @0x0000000880c90000
[    4.250347] CPU57: Booted secondary processor 0x0000001901 [0x431f0af1]
[    4.250441] Bringup CPU57 0
[    4.250443] Bringup CPU58 left 198
[    4.319062] Detected PIPT I-cache on CPU58
[    4.319141] GICv3: CPU58: found redistributor 1a01 region 0:0x0000000401d20000
[    4.319149] GICv3: CPU58: using allocated LPI pending table @0x0000000880ca0000
[    4.319203] CPU58: Booted secondary processor 0x0000001a01 [0x431f0af1]
[    4.319298] Bringup CPU58 0
[    4.319300] Bringup CPU59 left 197
[    4.387907] Detected PIPT I-cache on CPU59
[    4.387988] GICv3: CPU59: found redistributor 1b01 region 0:0x0000000401da0000
[    4.387996] GICv3: CPU59: using allocated LPI pending table @0x0000000880cb0000
[    4.388050] CPU59: Booted secondary processor 0x0000001b01 [0x431f0af1]
[    4.388143] Bringup CPU59 0
[    4.388146] Bringup CPU60 left 196
[    4.456533] Detected PIPT I-cache on CPU60
[    4.456615] GICv3: CPU60: found redistributor 1c01 region 0:0x0000000401e20000
[    4.456624] GICv3: CPU60: using allocated LPI pending table @0x0000000880cc0000
[    4.456679] CPU60: Booted secondary processor 0x0000001c01 [0x431f0af1]
[    4.456792] Bringup CPU60 0
[    4.456795] Bringup CPU61 left 195
[    4.525028] Detected PIPT I-cache on CPU61
[    4.525113] GICv3: CPU61: found redistributor 1d01 region 0:0x0000000401ea0000
[    4.525121] GICv3: CPU61: using allocated LPI pending table @0x0000000880cd0000
[    4.525174] CPU61: Booted secondary processor 0x0000001d01 [0x431f0af1]
[    4.525269] Bringup CPU61 0
[    4.525272] Bringup CPU62 left 194
[    4.593720] Detected PIPT I-cache on CPU62
[    4.593808] GICv3: CPU62: found redistributor 1e01 region 0:0x0000000401f20000
[    4.593816] GICv3: CPU62: using allocated LPI pending table @0x0000000880ce0000
[    4.593868] CPU62: Booted secondary processor 0x0000001e01 [0x431f0af1]
[    4.593962] Bringup CPU62 0
[    4.593965] Bringup CPU63 left 193
[    4.662153] Detected PIPT I-cache on CPU63
[    4.662241] GICv3: CPU63: found redistributor 1f01 region 0:0x0000000401fa0000
[    4.662249] GICv3: CPU63: using allocated LPI pending table @0x0000000880cf0000
[    4.662301] CPU63: Booted secondary processor 0x0000001f01 [0x431f0af1]
[    4.662399] Bringup CPU63 0
[    4.662402] Bringup CPU64 left 192
[    4.731197] Detected PIPT I-cache on CPU64
[    4.731234] GICv3: CPU64: found redistributor 2 region 0:0x0000000401040000
[    4.731243] GICv3: CPU64: using allocated LPI pending table @0x0000000880d00000
[    4.731296] CPU64: Booted secondary processor 0x0000000002 [0x431f0af1]
[    4.731396] Bringup CPU64 0
[    4.731399] Bringup CPU65 left 191
[    4.799636] Detected PIPT I-cache on CPU65
[    4.799671] GICv3: CPU65: found redistributor 102 region 0:0x00000004010c0000
[    4.799679] GICv3: CPU65: using allocated LPI pending table @0x0000000880d10000
[    4.799731] CPU65: Booted secondary processor 0x0000000102 [0x431f0af1]
[    4.799822] Bringup CPU65 0
[    4.799825] Bringup CPU66 left 190
[    4.868445] Detected PIPT I-cache on CPU66
[    4.868486] GICv3: CPU66: found redistributor 202 region 0:0x0000000401140000
[    4.868494] GICv3: CPU66: using allocated LPI pending table @0x0000000880d20000
[    4.868546] CPU66: Booted secondary processor 0x0000000202 [0x431f0af1]
[    4.868638] Bringup CPU66 0
[    4.868640] Bringup CPU67 left 189
[    4.937078] Detected PIPT I-cache on CPU67
[    4.937120] GICv3: CPU67: found redistributor 302 region 0:0x00000004011c0000
[    4.937128] GICv3: CPU67: using allocated LPI pending table @0x0000000880d30000
[    4.937180] CPU67: Booted secondary processor 0x0000000302 [0x431f0af1]
[    4.937276] Bringup CPU67 0
[    4.937278] Bringup CPU68 left 188
[    5.005580] Detected PIPT I-cache on CPU68
[    5.005623] GICv3: CPU68: found redistributor 402 region 0:0x0000000401240000
[    5.005631] GICv3: CPU68: using allocated LPI pending table @0x0000000880d40000
[    5.005683] CPU68: Booted secondary processor 0x0000000402 [0x431f0af1]
[    5.005776] Bringup CPU68 0
[    5.005779] Bringup CPU69 left 187
[    5.074306] Detected PIPT I-cache on CPU69
[    5.074352] GICv3: CPU69: found redistributor 502 region 0:0x00000004012c0000
[    5.074360] GICv3: CPU69: using allocated LPI pending table @0x0000000880d50000
[    5.074412] CPU69: Booted secondary processor 0x0000000502 [0x431f0af1]
[    5.074506] Bringup CPU69 0
[    5.074508] Bringup CPU70 left 186
[    5.142762] Detected PIPT I-cache on CPU70
[    5.142810] GICv3: CPU70: found redistributor 602 region 0:0x0000000401340000
[    5.142818] GICv3: CPU70: using allocated LPI pending table @0x0000000880d60000
[    5.142870] CPU70: Booted secondary processor 0x0000000602 [0x431f0af1]
[    5.142964] Bringup CPU70 0
[    5.142967] Bringup CPU71 left 185
[    5.211351] Detected PIPT I-cache on CPU71
[    5.211400] GICv3: CPU71: found redistributor 702 region 0:0x00000004013c0000
[    5.211408] GICv3: CPU71: using allocated LPI pending table @0x0000000880d70000
[    5.211460] CPU71: Booted secondary processor 0x0000000702 [0x431f0af1]
[    5.211552] Bringup CPU71 0
[    5.211555] Bringup CPU72 left 184
[    5.279995] Detected PIPT I-cache on CPU72
[    5.280048] GICv3: CPU72: found redistributor 802 region 0:0x0000000401440000
[    5.280056] GICv3: CPU72: using allocated LPI pending table @0x0000000880d80000
[    5.280109] CPU72: Booted secondary processor 0x0000000802 [0x431f0af1]
[    5.280201] Bringup CPU72 0
[    5.280204] Bringup CPU73 left 183
[    5.348712] Detected PIPT I-cache on CPU73
[    5.348766] GICv3: CPU73: found redistributor 902 region 0:0x00000004014c0000
[    5.348774] GICv3: CPU73: using allocated LPI pending table @0x0000000880d90000
[    5.348826] CPU73: Booted secondary processor 0x0000000902 [0x431f0af1]
[    5.348920] Bringup CPU73 0
[    5.348923] Bringup CPU74 left 182
[    5.417266] Detected PIPT I-cache on CPU74
[    5.417321] GICv3: CPU74: found redistributor a02 region 0:0x0000000401540000
[    5.417329] GICv3: CPU74: using allocated LPI pending table @0x0000000880da0000
[    5.417381] CPU74: Booted secondary processor 0x0000000a02 [0x431f0af1]
[    5.417477] Bringup CPU74 0
[    5.417480] Bringup CPU75 left 181
[    5.485736] Detected PIPT I-cache on CPU75
[    5.485793] GICv3: CPU75: found redistributor b02 region 0:0x00000004015c0000
[    5.485801] GICv3: CPU75: using allocated LPI pending table @0x0000000880db0000
[    5.485854] CPU75: Booted secondary processor 0x0000000b02 [0x431f0af1]
[    5.485972] Bringup CPU75 0
[    5.485975] Bringup CPU76 left 180
[    5.554355] Detected PIPT I-cache on CPU76
[    5.554414] GICv3: CPU76: found redistributor c02 region 0:0x0000000401640000
[    5.554422] GICv3: CPU76: using allocated LPI pending table @0x0000000880dc0000
[    5.554475] CPU76: Booted secondary processor 0x0000000c02 [0x431f0af1]
[    5.554569] Bringup CPU76 0
[    5.554572] Bringup CPU77 left 179
[    5.623035] Detected PIPT I-cache on CPU77
[    5.623097] GICv3: CPU77: found redistributor d02 region 0:0x00000004016c0000
[    5.623105] GICv3: CPU77: using allocated LPI pending table @0x0000000880dd0000
[    5.623157] CPU77: Booted secondary processor 0x0000000d02 [0x431f0af1]
[    5.623249] Bringup CPU77 0
[    5.623252] Bringup CPU78 left 178
[    5.691609] Detected PIPT I-cache on CPU78
[    5.691674] GICv3: CPU78: found redistributor e02 region 0:0x0000000401740000
[    5.691683] GICv3: CPU78: using allocated LPI pending table @0x0000000880de0000
[    5.691735] CPU78: Booted secondary processor 0x0000000e02 [0x431f0af1]
[    5.691829] Bringup CPU78 0
[    5.691832] Bringup CPU79 left 177
[    5.760010] Detected PIPT I-cache on CPU79
[    5.760075] GICv3: CPU79: found redistributor f02 region 0:0x00000004017c0000
[    5.760083] GICv3: CPU79: using allocated LPI pending table @0x0000000880df0000
[    5.760136] CPU79: Booted secondary processor 0x0000000f02 [0x431f0af1]
[    5.760230] Bringup CPU79 0
[    5.760232] Bringup CPU80 left 176
[    5.828861] Detected PIPT I-cache on CPU80
[    5.828929] GICv3: CPU80: found redistributor 1002 region 0:0x0000000401840000
[    5.828938] GICv3: CPU80: using allocated LPI pending table @0x0000000880e00000
[    5.828991] CPU80: Booted secondary processor 0x0000001002 [0x431f0af1]
[    5.829090] Bringup CPU80 0
[    5.829093] Bringup CPU81 left 175
[    5.897816] Detected PIPT I-cache on CPU81
[    5.897886] GICv3: CPU81: found redistributor 1102 region 0:0x00000004018c0000
[    5.897894] GICv3: CPU81: using allocated LPI pending table @0x0000000880e10000
[    5.897946] CPU81: Booted secondary processor 0x0000001102 [0x431f0af1]
[    5.898043] Bringup CPU81 0
[    5.898046] Bringup CPU82 left 174
[    5.966694] Detected PIPT I-cache on CPU82
[    5.966767] GICv3: CPU82: found redistributor 1202 region 0:0x0000000401940000
[    5.966776] GICv3: CPU82: using allocated LPI pending table @0x0000000880e20000
[    5.966828] CPU82: Booted secondary processor 0x0000001202 [0x431f0af1]
[    5.966925] Bringup CPU82 0
[    5.966927] Bringup CPU83 left 173
[    6.035887] Detected PIPT I-cache on CPU83
[    6.035962] GICv3: CPU83: found redistributor 1302 region 0:0x00000004019c0000
[    6.035971] GICv3: CPU83: using allocated LPI pending table @0x0000000880e30000
[    6.036023] CPU83: Booted secondary processor 0x0000001302 [0x431f0af1]
[    6.036118] Bringup CPU83 0
[    6.036121] Bringup CPU84 left 172
[    6.104513] Detected PIPT I-cache on CPU84
[    6.104588] GICv3: CPU84: found redistributor 1402 region 0:0x0000000401a40000
[    6.104597] GICv3: CPU84: using allocated LPI pending table @0x0000000880e40000
[    6.104650] CPU84: Booted secondary processor 0x0000001402 [0x431f0af1]
[    6.104745] Bringup CPU84 0
[    6.104747] Bringup CPU85 left 171
[    6.173344] Detected PIPT I-cache on CPU85
[    6.173421] GICv3: CPU85: found redistributor 1502 region 0:0x0000000401ac0000
[    6.173430] GICv3: CPU85: using allocated LPI pending table @0x0000000880e50000
[    6.173483] CPU85: Booted secondary processor 0x0000001502 [0x431f0af1]
[    6.173579] Bringup CPU85 0
[    6.173582] Bringup CPU86 left 170
[    6.242129] Detected PIPT I-cache on CPU86
[    6.242208] GICv3: CPU86: found redistributor 1602 region 0:0x0000000401b40000
[    6.242217] GICv3: CPU86: using allocated LPI pending table @0x0000000880e60000
[    6.242270] CPU86: Booted secondary processor 0x0000001602 [0x431f0af1]
[    6.242366] Bringup CPU86 0
[    6.242369] Bringup CPU87 left 169
[    6.310865] Detected PIPT I-cache on CPU87
[    6.310946] GICv3: CPU87: found redistributor 1702 region 0:0x0000000401bc0000
[    6.310955] GICv3: CPU87: using allocated LPI pending table @0x0000000880e70000
[    6.311007] CPU87: Booted secondary processor 0x0000001702 [0x431f0af1]
[    6.311104] Bringup CPU87 0
[    6.311107] Bringup CPU88 left 168
[    6.379680] Detected PIPT I-cache on CPU88
[    6.379765] GICv3: CPU88: found redistributor 1802 region 0:0x0000000401c40000
[    6.379774] GICv3: CPU88: using allocated LPI pending table @0x0000000880e80000
[    6.379827] CPU88: Booted secondary processor 0x0000001802 [0x431f0af1]
[    6.379924] Bringup CPU88 0
[    6.379927] Bringup CPU89 left 167
[    6.448290] Detected PIPT I-cache on CPU89
[    6.448376] GICv3: CPU89: found redistributor 1902 region 0:0x0000000401cc0000
[    6.448385] GICv3: CPU89: using allocated LPI pending table @0x0000000880e90000
[    6.448438] CPU89: Booted secondary processor 0x0000001902 [0x431f0af1]
[    6.448536] Bringup CPU89 0
[    6.448538] Bringup CPU90 left 166
[    6.517179] Detected PIPT I-cache on CPU90
[    6.517268] GICv3: CPU90: found redistributor 1a02 region 0:0x0000000401d40000
[    6.517277] GICv3: CPU90: using allocated LPI pending table @0x0000000880ea0000
[    6.517330] CPU90: Booted secondary processor 0x0000001a02 [0x431f0af1]
[    6.517426] Bringup CPU90 0
[    6.517429] Bringup CPU91 left 165
[    6.585792] Detected PIPT I-cache on CPU91
[    6.585881] GICv3: CPU91: found redistributor 1b02 region 0:0x0000000401dc0000
[    6.585890] GICv3: CPU91: using allocated LPI pending table @0x0000000880eb0000
[    6.585944] CPU91: Booted secondary processor 0x0000001b02 [0x431f0af1]
[    6.586065] Bringup CPU91 0
[    6.586069] Bringup CPU92 left 164
[    6.654569] Detected PIPT I-cache on CPU92
[    6.654660] GICv3: CPU92: found redistributor 1c02 region 0:0x0000000401e40000
[    6.654669] GICv3: CPU92: using allocated LPI pending table @0x0000000880ec0000
[    6.654722] CPU92: Booted secondary processor 0x0000001c02 [0x431f0af1]
[    6.654821] Bringup CPU92 0
[    6.654824] Bringup CPU93 left 163
[    6.723164] Detected PIPT I-cache on CPU93
[    6.723258] GICv3: CPU93: found redistributor 1d02 region 0:0x0000000401ec0000
[    6.723267] GICv3: CPU93: using allocated LPI pending table @0x0000000880ed0000
[    6.723320] CPU93: Booted secondary processor 0x0000001d02 [0x431f0af1]
[    6.723415] Bringup CPU93 0
[    6.723418] Bringup CPU94 left 162
[    6.791967] Detected PIPT I-cache on CPU94
[    6.792065] GICv3: CPU94: found redistributor 1e02 region 0:0x0000000401f40000
[    6.792074] GICv3: CPU94: using allocated LPI pending table @0x0000000880ee0000
[    6.792127] CPU94: Booted secondary processor 0x0000001e02 [0x431f0af1]
[    6.792226] Bringup CPU94 0
[    6.792229] Bringup CPU95 left 161
[    6.860520] Detected PIPT I-cache on CPU95
[    6.860618] GICv3: CPU95: found redistributor 1f02 region 0:0x0000000401fc0000
[    6.860627] GICv3: CPU95: using allocated LPI pending table @0x0000000880ef0000
[    6.860681] CPU95: Booted secondary processor 0x0000001f02 [0x431f0af1]
[    6.860777] Bringup CPU95 0
[    6.860780] Bringup CPU96 left 160
[    6.929652] Detected PIPT I-cache on CPU96
[    6.929700] GICv3: CPU96: found redistributor 3 region 0:0x0000000401060000
[    6.929710] GICv3: CPU96: using allocated LPI pending table @0x0000000880f00000
[    6.929763] CPU96: Booted secondary processor 0x0000000003 [0x431f0af1]
[    6.929866] Bringup CPU96 0
[    6.929869] Bringup CPU97 left 159
[    6.998236] Detected PIPT I-cache on CPU97
[    6.998279] GICv3: CPU97: found redistributor 103 region 0:0x00000004010e0000
[    6.998287] GICv3: CPU97: using allocated LPI pending table @0x0000000880f10000
[    6.998339] CPU97: Booted secondary processor 0x0000000103 [0x431f0af1]
[    6.998434] Bringup CPU97 0
[    6.998437] Bringup CPU98 left 158
[    7.066986] Detected PIPT I-cache on CPU98
[    7.067036] GICv3: CPU98: found redistributor 203 region 0:0x0000000401160000
[    7.067045] GICv3: CPU98: using allocated LPI pending table @0x0000000880f20000
[    7.067097] CPU98: Booted secondary processor 0x0000000203 [0x431f0af1]
[    7.067194] Bringup CPU98 0
[    7.067197] Bringup CPU99 left 157
[    7.135657] Detected PIPT I-cache on CPU99
[    7.135709] GICv3: CPU99: found redistributor 303 region 0:0x00000004011e0000
[    7.135717] GICv3: CPU99: using allocated LPI pending table @0x0000000880f30000
[    7.135770] CPU99: Booted secondary processor 0x0000000303 [0x431f0af1]
[    7.135870] Bringup CPU99 0
[    7.135872] Bringup CPU100 left 156
[    7.204283] Detected PIPT I-cache on CPU100
[    7.204337] GICv3: CPU100: found redistributor 403 region 0:0x0000000401260000
[    7.204346] GICv3: CPU100: using allocated LPI pending table @0x0000000880f40000
[    7.204399] CPU100: Booted secondary processor 0x0000000403 [0x431f0af1]
[    7.204496] Bringup CPU100 0
[    7.204499] Bringup CPU101 left 155
[    7.273135] Detected PIPT I-cache on CPU101
[    7.273191] GICv3: CPU101: found redistributor 503 region 0:0x00000004012e0000
[    7.273200] GICv3: CPU101: using allocated LPI pending table @0x0000000880f50000
[    7.273252] CPU101: Booted secondary processor 0x0000000503 [0x431f0af1]
[    7.273354] Bringup CPU101 0
[    7.273357] Bringup CPU102 left 154
[    7.341720] Detected PIPT I-cache on CPU102
[    7.341777] GICv3: CPU102: found redistributor 603 region 0:0x0000000401360000
[    7.341786] GICv3: CPU102: using allocated LPI pending table @0x0000000880f60000
[    7.341837] CPU102: Booted secondary processor 0x0000000603 [0x431f0af1]
[    7.341937] Bringup CPU102 0
[    7.341940] Bringup CPU103 left 153
[    7.410435] Detected PIPT I-cache on CPU103
[    7.410495] GICv3: CPU103: found redistributor 703 region 0:0x00000004013e0000
[    7.410503] GICv3: CPU103: using allocated LPI pending table @0x0000000880f70000
[    7.410556] CPU103: Booted secondary processor 0x0000000703 [0x431f0af1]
[    7.410653] Bringup CPU103 0
[    7.410655] Bringup CPU104 left 152
[    7.479258] Detected PIPT I-cache on CPU104
[    7.479319] GICv3: CPU104: found redistributor 803 region 0:0x0000000401460000
[    7.479328] GICv3: CPU104: using allocated LPI pending table @0x0000000880f80000
[    7.479381] CPU104: Booted secondary processor 0x0000000803 [0x431f0af1]
[    7.479479] Bringup CPU104 0
[    7.479482] Bringup CPU105 left 151
[    7.548112] Detected PIPT I-cache on CPU105
[    7.548174] GICv3: CPU105: found redistributor 903 region 0:0x00000004014e0000
[    7.548184] GICv3: CPU105: using allocated LPI pending table @0x0000000880f90000
[    7.548236] CPU105: Booted secondary processor 0x0000000903 [0x431f0af1]
[    7.548331] Bringup CPU105 0
[    7.548334] Bringup CPU106 left 150
[    7.616781] Detected PIPT I-cache on CPU106
[    7.616848] GICv3: CPU106: found redistributor a03 region 0:0x0000000401560000
[    7.616858] GICv3: CPU106: using allocated LPI pending table @0x0000000880fa0000
[    7.616910] CPU106: Booted secondary processor 0x0000000a03 [0x431f0af1]
[    7.617006] Bringup CPU106 0
[    7.617009] Bringup CPU107 left 149
[    7.685372] Detected PIPT I-cache on CPU107
[    7.685439] GICv3: CPU107: found redistributor b03 region 0:0x00000004015e0000
[    7.685449] GICv3: CPU107: using allocated LPI pending table @0x0000000880fb0000
[    7.685501] CPU107: Booted secondary processor 0x0000000b03 [0x431f0af1]
[    7.685626] Bringup CPU107 0
[    7.685629] Bringup CPU108 left 148
[    7.754320] Detected PIPT I-cache on CPU108
[    7.754389] GICv3: CPU108: found redistributor c03 region 0:0x0000000401660000
[    7.754398] GICv3: CPU108: using allocated LPI pending table @0x0000000880fc0000
[    7.754451] CPU108: Booted secondary processor 0x0000000c03 [0x431f0af1]
[    7.754548] Bringup CPU108 0
[    7.754551] Bringup CPU109 left 147
[    7.823128] Detected PIPT I-cache on CPU109
[    7.823199] GICv3: CPU109: found redistributor d03 region 0:0x00000004016e0000
[    7.823208] GICv3: CPU109: using allocated LPI pending table @0x0000000880fd0000
[    7.823260] CPU109: Booted secondary processor 0x0000000d03 [0x431f0af1]
[    7.823359] Bringup CPU109 0
[    7.823362] Bringup CPU110 left 146
[    7.891829] Detected PIPT I-cache on CPU110
[    7.891904] GICv3: CPU110: found redistributor e03 region 0:0x0000000401760000
[    7.891913] GICv3: CPU110: using allocated LPI pending table @0x0000000880fe0000
[    7.891965] CPU110: Booted secondary processor 0x0000000e03 [0x431f0af1]
[    7.892064] Bringup CPU110 0
[    7.892067] Bringup CPU111 left 145
[    7.960367] Detected PIPT I-cache on CPU111
[    7.960442] GICv3: CPU111: found redistributor f03 region 0:0x00000004017e0000
[    7.960451] GICv3: CPU111: using allocated LPI pending table @0x0000000880ff0000
[    7.960503] CPU111: Booted secondary processor 0x0000000f03 [0x431f0af1]
[    7.960600] Bringup CPU111 0
[    7.960603] Bringup CPU112 left 144
[    8.029232] Detected PIPT I-cache on CPU112
[    8.029310] GICv3: CPU112: found redistributor 1003 region 0:0x0000000401860000
[    8.029319] GICv3: CPU112: using allocated LPI pending table @0x0000000881000000
[    8.029370] CPU112: Booted secondary processor 0x0000001003 [0x431f0af1]
[    8.029471] Bringup CPU112 0
[    8.029474] Bringup CPU113 left 143
[    8.098113] Detected PIPT I-cache on CPU113
[    8.098193] GICv3: CPU113: found redistributor 1103 region 0:0x00000004018e0000
[    8.098203] GICv3: CPU113: using allocated LPI pending table @0x0000000881010000
[    8.098255] CPU113: Booted secondary processor 0x0000001103 [0x431f0af1]
[    8.098353] Bringup CPU113 0
[    8.098356] Bringup CPU114 left 142
[    8.167090] Detected PIPT I-cache on CPU114
[    8.167172] GICv3: CPU114: found redistributor 1203 region 0:0x0000000401960000
[    8.167181] GICv3: CPU114: using allocated LPI pending table @0x0000000881020000
[    8.167234] CPU114: Booted secondary processor 0x0000001203 [0x431f0af1]
[    8.167333] Bringup CPU114 0
[    8.167336] Bringup CPU115 left 141
[    8.236006] Detected PIPT I-cache on CPU115
[    8.236090] GICv3: CPU115: found redistributor 1303 region 0:0x000000040=
19e0000
[    8.236099] GICv3: CPU115: using allocated LPI pending table @0x00000008=
81030000
[    8.236152] CPU115: Booted secondary processor 0x0000001303 [0x431f0af1]
[    8.236252] Bringup CPU115 0
[    8.236255] Bringup CPU116 left 140
[    8.304797] Detected PIPT I-cache on CPU116
[    8.304883] GICv3: CPU116: found redistributor 1403 region 0:0x000000040=
1a60000
[    8.304892] GICv3: CPU116: using allocated LPI pending table @0x00000008=
81040000
[    8.304944] CPU116: Booted secondary processor 0x0000001403 [0x431f0af1]
[    8.305045] Bringup CPU116 0
[    8.305048] Bringup CPU117 left 139
[    8.373728] Detected PIPT I-cache on CPU117
[    8.373816] GICv3: CPU117: found redistributor 1503 region 0:0x000000040=
1ae0000
[    8.373826] GICv3: CPU117: using allocated LPI pending table @0x00000008=
81050000
[    8.373879] CPU117: Booted secondary processor 0x0000001503 [0x431f0af1]
[    8.373982] Bringup CPU117 0
[    8.373985] Bringup CPU118 left 138
[    8.442617] Detected PIPT I-cache on CPU118
[    8.442707] GICv3: CPU118: found redistributor 1603 region 0:0x000000040=
1b60000
[    8.442716] GICv3: CPU118: using allocated LPI pending table @0x00000008=
81060000
[    8.442768] CPU118: Booted secondary processor 0x0000001603 [0x431f0af1]
[    8.442868] Bringup CPU118 0
[    8.442871] Bringup CPU119 left 137
[    8.511459] Detected PIPT I-cache on CPU119
[    8.511550] GICv3: CPU119: found redistributor 1703 region 0:0x0000000401be0000
[    8.511560] GICv3: CPU119: using allocated LPI pending table @0x0000000881070000
[    8.511612] CPU119: Booted secondary processor 0x0000001703 [0x431f0af1]
[    8.511712] Bringup CPU119 0
[    8.511715] Bringup CPU120 left 136
[    8.580428] Detected PIPT I-cache on CPU120
[    8.580521] GICv3: CPU120: found redistributor 1803 region 0:0x0000000401c60000
[    8.580531] GICv3: CPU120: using allocated LPI pending table @0x0000000881080000
[    8.580583] CPU120: Booted secondary processor 0x0000001803 [0x431f0af1]
[    8.580682] Bringup CPU120 0
[    8.580685] Bringup CPU121 left 135
[    8.649122] Detected PIPT I-cache on CPU121
[    8.649218] GICv3: CPU121: found redistributor 1903 region 0:0x0000000401ce0000
[    8.649227] GICv3: CPU121: using allocated LPI pending table @0x0000000881090000
[    8.649279] CPU121: Booted secondary processor 0x0000001903 [0x431f0af1]
[    8.649379] Bringup CPU121 0
[    8.649382] Bringup CPU122 left 134
[    8.718460] Detected PIPT I-cache on CPU122
[    8.718558] GICv3: CPU122: found redistributor 1a03 region 0:0x0000000401d60000
[    8.718567] GICv3: CPU122: using allocated LPI pending table @0x00000008810a0000
[    8.718620] CPU122: Booted secondary processor 0x0000001a03 [0x431f0af1]
[    8.718751] Bringup CPU122 0
[    8.718754] Bringup CPU123 left 133
[    8.787419] Detected PIPT I-cache on CPU123
[    8.787519] GICv3: CPU123: found redistributor 1b03 region 0:0x0000000401de0000
[    8.787528] GICv3: CPU123: using allocated LPI pending table @0x00000008810b0000
[    8.787580] CPU123: Booted secondary processor 0x0000001b03 [0x431f0af1]
[    8.787683] Bringup CPU123 0
[    8.787686] Bringup CPU124 left 132
[    8.856255] Detected PIPT I-cache on CPU124
[    8.856357] GICv3: CPU124: found redistributor 1c03 region 0:0x0000000401e60000
[    8.856366] GICv3: CPU124: using allocated LPI pending table @0x00000008810c0000
[    8.856419] CPU124: Booted secondary processor 0x0000001c03 [0x431f0af1]
[    8.856518] Bringup CPU124 0
[    8.856522] Bringup CPU125 left 131
[    8.924931] Detected PIPT I-cache on CPU125
[    8.925034] GICv3: CPU125: found redistributor 1d03 region 0:0x0000000401ee0000
[    8.925044] GICv3: CPU125: using allocated LPI pending table @0x00000008810d0000
[    8.925095] CPU125: Booted secondary processor 0x0000001d03 [0x431f0af1]
[    8.925196] Bringup CPU125 0
[    8.925199] Bringup CPU126 left 130
[    8.993823] Detected PIPT I-cache on CPU126
[    8.993931] GICv3: CPU126: found redistributor 1e03 region 0:0x0000000401f60000
[    8.993941] GICv3: CPU126: using allocated LPI pending table @0x00000008810e0000
[    8.993993] CPU126: Booted secondary processor 0x0000001e03 [0x431f0af1]
[    8.994095] Bringup CPU126 0
[    8.994098] Bringup CPU127 left 129
[    9.062463] Detected PIPT I-cache on CPU127
[    9.062570] GICv3: CPU127: found redistributor 1f03 region 0:0x0000000401fe0000
[    9.062579] GICv3: CPU127: using allocated LPI pending table @0x00000008810f0000
[    9.062631] CPU127: Booted secondary processor 0x0000001f03 [0x431f0af1]
[    9.062734] Bringup CPU127 0
[    9.062737] Bringup CPU128 left 128
[    9.220384] Detected PIPT I-cache on CPU128
[    9.220561] GICv3: CPU128: found redistributor 10000 region 1:0x0000000441000000
[    9.220580] GICv3: CPU128: using allocated LPI pending table @0x0000000881100000
[    9.220666] CPU128: Booted secondary processor 0x0000010000 [0x431f0af1]
[    9.221068] Bringup CPU128 0
[    9.221072] Bringup CPU129 left 127
[    9.387854] Detected PIPT I-cache on CPU129
[    9.387987] GICv3: CPU129: found redistributor 10100 region 1:0x0000000441080000
[    9.387998] GICv3: CPU129: using allocated LPI pending table @0x0000000881110000
[    9.388068] CPU129: Booted secondary processor 0x0000010100 [0x431f0af1]
[    9.388215] Bringup CPU129 0
[    9.388219] Bringup CPU130 left 126
[    9.545912] Detected PIPT I-cache on CPU130
[    9.546046] GICv3: CPU130: found redistributor 10200 region 1:0x0000000441100000
[    9.546057] GICv3: CPU130: using allocated LPI pending table @0x0000000881120000
[    9.546126] CPU130: Booted secondary processor 0x0000010200 [0x431f0af1]
[    9.546275] Bringup CPU130 0
[    9.546279] Bringup CPU131 left 125
[    9.711527] Detected PIPT I-cache on CPU131
[    9.711666] GICv3: CPU131: found redistributor 10300 region 1:0x0000000441180000
[    9.711678] GICv3: CPU131: using allocated LPI pending table @0x0000000881130000
[    9.711750] CPU131: Booted secondary processor 0x0000010300 [0x431f0af1]
[    9.711898] Bringup CPU131 0
[    9.711901] Bringup CPU132 left 124
[    9.879364] Detected PIPT I-cache on CPU132
[    9.879505] GICv3: CPU132: found redistributor 10400 region 1:0x0000000441200000
[    9.879517] GICv3: CPU132: using allocated LPI pending table @0x0000000881140000
[    9.879589] CPU132: Booted secondary processor 0x0000010400 [0x431f0af1]
[    9.879805] Bringup CPU132 0
[    9.879809] Bringup CPU133 left 123
[   10.048128] Detected PIPT I-cache on CPU133
[   10.048269] GICv3: CPU133: found redistributor 10500 region 1:0x0000000441280000
[   10.048281] GICv3: CPU133: using allocated LPI pending table @0x0000000881150000
[   10.048350] CPU133: Booted secondary processor 0x0000010500 [0x431f0af1]
[   10.048498] Bringup CPU133 0
[   10.048502] Bringup CPU134 left 122
[   10.214683] Detected PIPT I-cache on CPU134
[   10.214828] GICv3: CPU134: found redistributor 10600 region 1:0x0000000441300000
[   10.214840] GICv3: CPU134: using allocated LPI pending table @0x0000000881160000
[   10.214910] CPU134: Booted secondary processor 0x0000010600 [0x431f0af1]
[   10.215055] Bringup CPU134 0
[   10.215059] Bringup CPU135 left 121
[   10.372067] Detected PIPT I-cache on CPU135
[   10.372213] GICv3: CPU135: found redistributor 10700 region 1:0x0000000441380000
[   10.372225] GICv3: CPU135: using allocated LPI pending table @0x0000000881170000
[   10.372295] CPU135: Booted secondary processor 0x0000010700 [0x431f0af1]
[   10.372444] Bringup CPU135 0
[   10.372448] Bringup CPU136 left 120
[   10.539095] Detected PIPT I-cache on CPU136
[   10.539244] GICv3: CPU136: found redistributor 10800 region 1:0x0000000441400000
[   10.539256] GICv3: CPU136: using allocated LPI pending table @0x0000000881180000
[   10.539325] CPU136: Booted secondary processor 0x0000010800 [0x431f0af1]
[   10.539471] Bringup CPU136 0
[   10.539474] Bringup CPU137 left 119
[   10.705053] Detected PIPT I-cache on CPU137
[   10.705204] GICv3: CPU137: found redistributor 10900 region 1:0x0000000441480000
[   10.705216] GICv3: CPU137: using allocated LPI pending table @0x0000000881190000
[   10.705285] CPU137: Booted secondary processor 0x0000010900 [0x431f0af1]
[   10.705430] Bringup CPU137 0
[   10.705434] Bringup CPU138 left 118
[   10.871710] Detected PIPT I-cache on CPU138
[   10.871863] GICv3: CPU138: found redistributor 10a00 region 1:0x0000000441500000
[   10.871875] GICv3: CPU138: using allocated LPI pending table @0x00000008811a0000
[   10.871944] CPU138: Booted secondary processor 0x0000010a00 [0x431f0af1]
[   10.872094] Bringup CPU138 0
[   10.872097] Bringup CPU139 left 117
[   11.037668] Detected PIPT I-cache on CPU139
[   11.037823] GICv3: CPU139: found redistributor 10b00 region 1:0x0000000441580000
[   11.037835] GICv3: CPU139: using allocated LPI pending table @0x00000008811b0000
[   11.037906] CPU139: Booted secondary processor 0x0000010b00 [0x431f0af1]
[   11.038125] Bringup CPU139 0
[   11.038129] Bringup CPU140 left 116
[   11.203709] Detected PIPT I-cache on CPU140
[   11.203866] GICv3: CPU140: found redistributor 10c00 region 1:0x0000000441600000
[   11.203878] GICv3: CPU140: using allocated LPI pending table @0x00000008811c0000
[   11.203948] CPU140: Booted secondary processor 0x0000010c00 [0x431f0af1]
[   11.204101] Bringup CPU140 0
[   11.204105] Bringup CPU141 left 115
[   11.370278] Detected PIPT I-cache on CPU141
[   11.370434] GICv3: CPU141: found redistributor 10d00 region 1:0x0000000441680000
[   11.370446] GICv3: CPU141: using allocated LPI pending table @0x00000008811d0000
[   11.370517] CPU141: Booted secondary processor 0x0000010d00 [0x431f0af1]
[   11.370669] Bringup CPU141 0
[   11.370672] Bringup CPU142 left 114
[   11.534969] Detected PIPT I-cache on CPU142
[   11.535132] GICv3: CPU142: found redistributor 10e00 region 1:0x0000000441700000
[   11.535144] GICv3: CPU142: using allocated LPI pending table @0x00000008811e0000
[   11.535213] CPU142: Booted secondary processor 0x0000010e00 [0x431f0af1]
[   11.535359] Bringup CPU142 0
[   11.535363] Bringup CPU143 left 113
[   11.701078] Detected PIPT I-cache on CPU143
[   11.701242] GICv3: CPU143: found redistributor 10f00 region 1:0x0000000441780000
[   11.701254] GICv3: CPU143: using allocated LPI pending table @0x00000008811f0000
[   11.701323] CPU143: Booted secondary processor 0x0000010f00 [0x431f0af1]
[   11.701468] Bringup CPU143 0
[   11.701472] Bringup CPU144 left 112
[   11.869090] Detected PIPT I-cache on CPU144
[   11.869258] GICv3: CPU144: found redistributor 11000 region 1:0x0000000441800000
[   11.869271] GICv3: CPU144: using allocated LPI pending table @0x0000000881200000
[   11.869341] CPU144: Booted secondary processor 0x0000011000 [0x431f0af1]
[   11.869491] Bringup CPU144 0
[   11.869495] Bringup CPU145 left 111
[   12.034171] Detected PIPT I-cache on CPU145
[   12.034340] GICv3: CPU145: found redistributor 11100 region 1:0x0000000441880000
[   12.034353] GICv3: CPU145: using allocated LPI pending table @0x0000000881210000
[   12.034423] CPU145: Booted secondary processor 0x0000011100 [0x431f0af1]
[   12.034650] Bringup CPU145 0
[   12.034654] Bringup CPU146 left 110
[   12.200891] Detected PIPT I-cache on CPU146
[   12.201062] GICv3: CPU146: found redistributor 11200 region 1:0x0000000441900000
[   12.201075] GICv3: CPU146: using allocated LPI pending table @0x0000000881220000
[   12.201144] CPU146: Booted secondary processor 0x0000011200 [0x431f0af1]
[   12.201300] Bringup CPU146 0
[   12.201304] Bringup CPU147 left 109
[   12.366599] Detected PIPT I-cache on CPU147
[   12.366771] GICv3: CPU147: found redistributor 11300 region 1:0x0000000441980000
[   12.366784] GICv3: CPU147: using allocated LPI pending table @0x0000000881230000
[   12.366854] CPU147: Booted secondary processor 0x0000011300 [0x431f0af1]
[   12.367007] Bringup CPU147 0
[   12.367011] Bringup CPU148 left 108
[   12.532735] Detected PIPT I-cache on CPU148
[   12.532911] GICv3: CPU148: found redistributor 11400 region 1:0x0000000441a00000
[   12.532924] GICv3: CPU148: using allocated LPI pending table @0x0000000881240000
[   12.532994] CPU148: Booted secondary processor 0x0000011400 [0x431f0af1]
[   12.533147] Bringup CPU148 0
[   12.533151] Bringup CPU149 left 107
[   12.699101] Detected PIPT I-cache on CPU149
[   12.699279] GICv3: CPU149: found redistributor 11500 region 1:0x0000000441a80000
[   12.699292] GICv3: CPU149: using allocated LPI pending table @0x0000000881250000
[   12.699363] CPU149: Booted secondary processor 0x0000011500 [0x431f0af1]
[   12.699518] Bringup CPU149 0
[   12.699522] Bringup CPU150 left 106
[   12.864456] Detected PIPT I-cache on CPU150
[   12.864634] GICv3: CPU150: found redistributor 11600 region 1:0x0000000441b00000
[   12.864647] GICv3: CPU150: using allocated LPI pending table @0x0000000881260000
[   12.864717] CPU150: Booted secondary processor 0x0000011600 [0x431f0af1]
[   12.864871] Bringup CPU150 0
[   12.864874] Bringup CPU151 left 105
[   13.032476] Detected PIPT I-cache on CPU151
[   13.032656] GICv3: CPU151: found redistributor 11700 region 1:0x0000000441b80000
[   13.032669] GICv3: CPU151: using allocated LPI pending table @0x0000000881270000
[   13.032739] CPU151: Booted secondary processor 0x0000011700 [0x431f0af1]
[   13.032893] Bringup CPU151 0
[   13.032897] Bringup CPU152 left 104
[   13.200736] Detected PIPT I-cache on CPU152
[   13.200918] GICv3: CPU152: found redistributor 11800 region 1:0x0000000441c00000
[   13.200931] GICv3: CPU152: using allocated LPI pending table @0x0000000881280000
[   13.201003] CPU152: Booted secondary processor 0x0000011800 [0x431f0af1]
[   13.201234] Bringup CPU152 0
[   13.201238] Bringup CPU153 left 103
[   13.369144] Detected PIPT I-cache on CPU153
[   13.369326] GICv3: CPU153: found redistributor 11900 region 1:0x0000000441c80000
[   13.369338] GICv3: CPU153: using allocated LPI pending table @0x0000000881290000
[   13.369410] CPU153: Booted secondary processor 0x0000011900 [0x431f0af1]
[   13.369566] Bringup CPU153 0
[   13.369570] Bringup CPU154 left 102
[   13.537130] Detected PIPT I-cache on CPU154
[   13.537316] GICv3: CPU154: found redistributor 11a00 region 1:0x0000000441d00000
[   13.537328] GICv3: CPU154: using allocated LPI pending table @0x00000008812a0000
[   13.537398] CPU154: Booted secondary processor 0x0000011a00 [0x431f0af1]
[   13.537551] Bringup CPU154 0
[   13.537555] Bringup CPU155 left 101
[   13.704667] Detected PIPT I-cache on CPU155
[   13.704852] GICv3: CPU155: found redistributor 11b00 region 1:0x0000000441d80000
[   13.704865] GICv3: CPU155: using allocated LPI pending table @0x00000008812b0000
[   13.704935] CPU155: Booted secondary processor 0x0000011b00 [0x431f0af1]
[   13.705088] Bringup CPU155 0
[   13.705091] Bringup CPU156 left 100
[   13.873414] Detected PIPT I-cache on CPU156
[   13.873602] GICv3: CPU156: found redistributor 11c00 region 1:0x0000000441e00000
[   13.873615] GICv3: CPU156: using allocated LPI pending table @0x00000008812c0000
[   13.873685] CPU156: Booted secondary processor 0x0000011c00 [0x431f0af1]
[   13.873841] Bringup CPU156 0
[   13.873845] Bringup CPU157 left 99
[   14.041030] Detected PIPT I-cache on CPU157
[   14.041223] GICv3: CPU157: found redistributor 11d00 region 1:0x0000000441e80000
[   14.041235] GICv3: CPU157: using allocated LPI pending table @0x00000008812d0000
[   14.041306] CPU157: Booted secondary processor 0x0000011d00 [0x431f0af1]
[   14.041458] Bringup CPU157 0
[   14.041462] Bringup CPU158 left 98
[   14.209238] Detected PIPT I-cache on CPU158
[   14.209434] GICv3: CPU158: found redistributor 11e00 region 1:0x0000000441f00000
[   14.209446] GICv3: CPU158: using allocated LPI pending table @0x00000008812e0000
[   14.209516] CPU158: Booted secondary processor 0x0000011e00 [0x431f0af1]
[   14.209751] Bringup CPU158 0
[   14.209755] Bringup CPU159 left 97
[   14.375193] Detected PIPT I-cache on CPU159
[   14.375393] GICv3: CPU159: found redistributor 11f00 region 1:0x0000000441f80000
[   14.375405] GICv3: CPU159: using allocated LPI pending table @0x00000008812f0000
[   14.375475] CPU159: Booted secondary processor 0x0000011f00 [0x431f0af1]
[   14.375633] Bringup CPU159 0
[   14.375637] Bringup CPU160 left 96
[   14.530664] Detected PIPT I-cache on CPU160
[   14.530800] GICv3: CPU160: found redistributor 10001 region 1:0x0000000441020000
[   14.530812] GICv3: CPU160: using allocated LPI pending table @0x0000000881300000
[   14.530880] CPU160: Booted secondary processor 0x0000010001 [0x431f0af1]
[   14.531019] Bringup CPU160 0
[   14.531022] Bringup CPU161 left 95
[   14.686047] Detected PIPT I-cache on CPU161
[   14.686189] GICv3: CPU161: found redistributor 10101 region 1:0x00000004410a0000
[   14.686200] GICv3: CPU161: using allocated LPI pending table @0x0000000881310000
[   14.686269] CPU161: Booted secondary processor 0x0000010101 [0x431f0af1]
[   14.686405] Bringup CPU161 0
[   14.686409] Bringup CPU162 left 94
[   14.841025] Detected PIPT I-cache on CPU162
[   14.841168] GICv3: CPU162: found redistributor 10201 region 1:0x0000000441120000
[   14.841179] GICv3: CPU162: using allocated LPI pending table @0x0000000881320000
[   14.841247] CPU162: Booted secondary processor 0x0000010201 [0x431f0af1]
[   14.841380] Bringup CPU162 0
[   14.841384] Bringup CPU163 left 93
[   14.996501] Detected PIPT I-cache on CPU163
[   14.996648] GICv3: CPU163: found redistributor 10301 region 1:0x00000004411a0000
[   14.996660] GICv3: CPU163: using allocated LPI pending table @0x0000000881330000
[   14.996726] CPU163: Booted secondary processor 0x0000010301 [0x431f0af1]
[   14.996868] Bringup CPU163 0
[   14.996871] Bringup CPU164 left 92
[   15.151407] Detected PIPT I-cache on CPU164
[   15.151552] GICv3: CPU164: found redistributor 10401 region 1:0x0000000441220000
[   15.151564] GICv3: CPU164: using allocated LPI pending table @0x0000000881340000
[   15.151633] CPU164: Booted secondary processor 0x0000010401 [0x431f0af1]
[   15.151776] Bringup CPU164 0
[   15.151780] Bringup CPU165 left 91
[   15.306915] Detected PIPT I-cache on CPU165
[   15.307063] GICv3: CPU165: found redistributor 10501 region 1:0x00000004412a0000
[   15.307075] GICv3: CPU165: using allocated LPI pending table @0x0000000881350000
[   15.307141] CPU165: Booted secondary processor 0x0000010501 [0x431f0af1]
[   15.307364] Bringup CPU165 0
[   15.307368] Bringup CPU166 left 90
[   15.461950] Detected PIPT I-cache on CPU166
[   15.462101] GICv3: CPU166: found redistributor 10601 region 1:0x0000000441320000
[   15.462113] GICv3: CPU166: using allocated LPI pending table @0x0000000881360000
[   15.462180] CPU166: Booted secondary processor 0x0000010601 [0x431f0af1]
[   15.462325] Bringup CPU166 0
[   15.462328] Bringup CPU167 left 89
[   15.616738] Detected PIPT I-cache on CPU167
[   15.616891] GICv3: CPU167: found redistributor 10701 region 1:0x00000004413a0000
[   15.616903] GICv3: CPU167: using allocated LPI pending table @0x0000000881370000
[   15.616971] CPU167: Booted secondary processor 0x0000010701 [0x431f0af1]
[   15.617107] Bringup CPU167 0
[   15.617111] Bringup CPU168 left 88
[   15.771979] Detected PIPT I-cache on CPU168
[   15.772133] GICv3: CPU168: found redistributor 10801 region 1:0x0000000441420000
[   15.772145] GICv3: CPU168: using allocated LPI pending table @0x0000000881380000
[   15.772215] CPU168: Booted secondary processor 0x0000010801 [0x431f0af1]
[   15.772355] Bringup CPU168 0
[   15.772359] Bringup CPU169 left 87
[   15.927422] Detected PIPT I-cache on CPU169
[   15.927578] GICv3: CPU169: found redistributor 10901 region 1:0x00000004414a0000
[   15.927590] GICv3: CPU169: using allocated LPI pending table @0x0000000881390000
[   15.927659] CPU169: Booted secondary processor 0x0000010901 [0x431f0af1]
[   15.927796] Bringup CPU169 0
[   15.927799] Bringup CPU170 left 86
[   16.082419] Detected PIPT I-cache on CPU170
[   16.082581] GICv3: CPU170: found redistributor 10a01 region 1:0x0000000441520000
[   16.082592] GICv3: CPU170: using allocated LPI pending table @0x00000008813a0000
[   16.082660] CPU170: Booted secondary processor 0x0000010a01 [0x431f0af1]
[   16.082800] Bringup CPU170 0
[   16.082803] Bringup CPU171 left 85
[   16.238203] Detected PIPT I-cache on CPU171
[   16.238364] GICv3: CPU171: found redistributor 10b01 region 1:0x00000004415a0000
[   16.238376] GICv3: CPU171: using allocated LPI pending table @0x00000008813b0000
[   16.238445] CPU171: Booted secondary processor 0x0000010b01 [0x431f0af1]
[   16.238585] Bringup CPU171 0
[   16.238589] Bringup CPU172 left 84
[   16.393091] Detected PIPT I-cache on CPU172
[   16.393253] GICv3: CPU172: found redistributor 10c01 region 1:0x0000000441620000
[   16.393265] GICv3: CPU172: using allocated LPI pending table @0x00000008813c0000
[   16.393334] CPU172: Booted secondary processor 0x0000010c01 [0x431f0af1]
[   16.393560] Bringup CPU172 0
[   16.393564] Bringup CPU173 left 83
[   16.548527] Detected PIPT I-cache on CPU173
[   16.548690] GICv3: CPU173: found redistributor 10d01 region 1:0x00000004416a0000
[   16.548702] GICv3: CPU173: using allocated LPI pending table @0x00000008813d0000
[   16.548769] CPU173: Booted secondary processor 0x0000010d01 [0x431f0af1]
[   16.548917] Bringup CPU173 0
[   16.548920] Bringup CPU174 left 82
[   16.703287] Detected PIPT I-cache on CPU174
[   16.703458] GICv3: CPU174: found redistributor 10e01 region 1:0x0000000441720000
[   16.703470] GICv3: CPU174: using allocated LPI pending table @0x00000008813e0000
[   16.703537] CPU174: Booted secondary processor 0x0000010e01 [0x431f0af1]
[   16.703675] Bringup CPU174 0
[   16.703678] Bringup CPU175 left 81
[   16.858143] Detected PIPT I-cache on CPU175
[   16.858311] GICv3: CPU175: found redistributor 10f01 region 1:0x00000004417a0000
[   16.858323] GICv3: CPU175: using allocated LPI pending table @0x00000008813f0000
[   16.858391] CPU175: Booted secondary processor 0x0000010f01 [0x431f0af1]
[   16.858533] Bringup CPU175 0
[   16.858536] Bringup CPU176 left 80
[   17.013712] Detected PIPT I-cache on CPU176
[   17.013881] GICv3: CPU176: found redistributor 11001 region 1:0x0000000441820000
[   17.013893] GICv3: CPU176: using allocated LPI pending table @0x0000000881400000
[   17.013960] CPU176: Booted secondary processor 0x0000011001 [0x431f0af1]
[   17.014098] Bringup CPU176 0
[   17.014101] Bringup CPU177 left 79
[   17.169073] Detected PIPT I-cache on CPU177
[   17.169243] GICv3: CPU177: found redistributor 11101 region 1:0x00000004418a0000
[   17.169255] GICv3: CPU177: using allocated LPI pending table @0x0000000881410000
[   17.169323] CPU177: Booted secondary processor 0x0000011101 [0x431f0af1]
[   17.169459] Bringup CPU177 0
[   17.169463] Bringup CPU178 left 78
[   17.324558] Detected PIPT I-cache on CPU178
[   17.324730] GICv3: CPU178: found redistributor 11201 region 1:0x0000000441920000
[   17.324742] GICv3: CPU178: using allocated LPI pending table @0x0000000881420000
[   17.324809] CPU178: Booted secondary processor 0x0000011201 [0x431f0af1]
[   17.325040] Bringup CPU178 0
[   17.325043] Bringup CPU179 left 77
[   17.479665] Detected PIPT I-cache on CPU179
[   17.479840] GICv3: CPU179: found redistributor 11301 region 1:0x00000004419a0000
[   17.479852] GICv3: CPU179: using allocated LPI pending table @0x0000000881430000
[   17.479923] CPU179: Booted secondary processor 0x0000011301 [0x431f0af1]
[   17.480070] Bringup CPU179 0
[   17.480074] Bringup CPU180 left 76
[   17.634812] Detected PIPT I-cache on CPU180
[   17.634993] GICv3: CPU180: found redistributor 11401 region 1:0x0000000441a20000
[   17.635005] GICv3: CPU180: using allocated LPI pending table @0x0000000881440000
[   17.635073] CPU180: Booted secondary processor 0x0000011401 [0x431f0af1]
[   17.635216] Bringup CPU180 0
[   17.635219] Bringup CPU181 left 75
[   17.790362] Detected PIPT I-cache on CPU181
[   17.790547] GICv3: CPU181: found redistributor 11501 region 1:0x0000000441aa0000
[   17.790559] GICv3: CPU181: using allocated LPI pending table @0x0000000881450000
[   17.790627] CPU181: Booted secondary processor 0x0000011501 [0x431f0af1]
[   17.790770] Bringup CPU181 0
[   17.790774] Bringup CPU182 left 74
[   17.945435] Detected PIPT I-cache on CPU182
[   17.945616] GICv3: CPU182: found redistributor 11601 region 1:0x0000000441b20000
[   17.945628] GICv3: CPU182: using allocated LPI pending table @0x0000000881460000
[   17.945695] CPU182: Booted secondary processor 0x0000011601 [0x431f0af1]
[   17.945839] Bringup CPU182 0
[   17.945842] Bringup CPU183 left 73
[   18.100344] Detected PIPT I-cache on CPU183
[   18.100525] GICv3: CPU183: found redistributor 11701 region 1:0x0000000441ba0000
[   18.100537] GICv3: CPU183: using allocated LPI pending table @0x0000000881470000
[   18.100606] CPU183: Booted secondary processor 0x0000011701 [0x431f0af1]
[   18.100751] Bringup CPU183 0
[   18.100755] Bringup CPU184 left 72
[   18.255485] Detected PIPT I-cache on CPU184
[   18.255675] GICv3: CPU184: found redistributor 11801 region 1:0x0000000441c20000
[   18.255688] GICv3: CPU184: using allocated LPI pending table @0x0000000881480000
[   18.255755] CPU184: Booted secondary processor 0x0000011801 [0x431f0af1]
[   18.255896] Bringup CPU184 0
[   18.255899] Bringup CPU185 left 71
[   18.410979] Detected PIPT I-cache on CPU185
[   18.411166] GICv3: CPU185: found redistributor 11901 region 1:0x0000000441ca0000
[   18.411178] GICv3: CPU185: using allocated LPI pending table @0x0000000881490000
[   18.411246] CPU185: Booted secondary processor 0x0000011901 [0x431f0af1]
[   18.411485] Bringup CPU185 0
[   18.411489] Bringup CPU186 left 70
[   18.566521] Detected PIPT I-cache on CPU186
[   18.566712] GICv3: CPU186: found redistributor 11a01 region 1:0x0000000441d20000
[   18.566724] GICv3: CPU186: using allocated LPI pending table @0x00000008814a0000
[   18.566791] CPU186: Booted secondary processor 0x0000011a01 [0x431f0af1]
[   18.566944] Bringup CPU186 0
[   18.566947] Bringup CPU187 left 69
[   18.721510] Detected PIPT I-cache on CPU187
[   18.721706] GICv3: CPU187: found redistributor 11b01 region 1:0x0000000441da0000
[   18.721718] GICv3: CPU187: using allocated LPI pending table @0x00000008814b0000
[   18.721787] CPU187: Booted secondary processor 0x0000011b01 [0x431f0af1]
[   18.721931] Bringup CPU187 0
[   18.721934] Bringup CPU188 left 68
[   18.877191] Detected PIPT I-cache on CPU188
[   18.877386] GICv3: CPU188: found redistributor 11c01 region 1:0x0000000441e20000
[   18.877399] GICv3: CPU188: using allocated LPI pending table @0x00000008814c0000
[   18.877467] CPU188: Booted secondary processor 0x0000011c01 [0x431f0af1]
[   18.877612] Bringup CPU188 0
[   18.877616] Bringup CPU189 left 67
[   19.032656] Detected PIPT I-cache on CPU189
[   19.032852] GICv3: CPU189: found redistributor 11d01 region 1:0x0000000441ea0000
[   19.032863] GICv3: CPU189: using allocated LPI pending table @0x00000008814d0000
[   19.032931] CPU189: Booted secondary processor 0x0000011d01 [0x431f0af1]
[   19.033071] Bringup CPU189 0
[   19.033074] Bringup CPU190 left 66
[   19.188112] Detected PIPT I-cache on CPU190
[   19.188314] GICv3: CPU190: found redistributor 11e01 region 1:0x0000000441f20000
[   19.188326] GICv3: CPU190: using allocated LPI pending table @0x00000008814e0000
[   19.188394] CPU190: Booted secondary processor 0x0000011e01 [0x431f0af1]
[   19.188532] Bringup CPU190 0
[   19.188536] Bringup CPU191 left 65
[   19.343549] Detected PIPT I-cache on CPU191
[   19.343747] GICv3: CPU191: found redistributor 11f01 region 1:0x0000000441fa0000
[   19.343759] GICv3: CPU191: using allocated LPI pending table @0x00000008814f0000
[   19.343826] CPU191: Booted secondary processor 0x0000011f01 [0x431f0af1]
[   19.344062] Bringup CPU191 0
[   19.344066] Bringup CPU192 left 64
[   19.499181] Detected PIPT I-cache on CPU192
[   19.499333] GICv3: CPU192: found redistributor 10002 region 1:0x0000000441040000
[   19.499344] GICv3: CPU192: using allocated LPI pending table @0x0000000881500000
[   19.499410] CPU192: Booted secondary processor 0x0000010002 [0x431f0af1]
[   19.499555] Bringup CPU192 0
[   19.499559] Bringup CPU193 left 63
[   19.654687] Detected PIPT I-cache on CPU193
[   19.654833] GICv3: CPU193: found redistributor 10102 region 1:0x00000004410c0000
[   19.654845] GICv3: CPU193: using allocated LPI pending table @0x0000000881510000
[   19.654914] CPU193: Booted secondary processor 0x0000010102 [0x431f0af1]
[   19.655055] Bringup CPU193 0
[   19.655058] Bringup CPU194 left 62
[   19.809738] Detected PIPT I-cache on CPU194
[   19.809888] GICv3: CPU194: found redistributor 10202 region 1:0x0000000441140000
[   19.809899] GICv3: CPU194: using allocated LPI pending table @0x0000000881520000
[   19.809966] CPU194: Booted secondary processor 0x0000010202 [0x431f0af1]
[   19.810110] Bringup CPU194 0
[   19.810114] Bringup CPU195 left 61
[   19.964932] Detected PIPT I-cache on CPU195
[   19.965086] GICv3: CPU195: found redistributor 10302 region 1:0x00000004411c0000
[   19.965097] GICv3: CPU195: using allocated LPI pending table @0x0000000881530000
[   19.965164] CPU195: Booted secondary processor 0x0000010302 [0x431f0af1]
[   19.965304] Bringup CPU195 0
[   19.965307] Bringup CPU196 left 60
[   20.119941] Detected PIPT I-cache on CPU196
[   20.120094] GICv3: CPU196: found redistributor 10402 region 1:0x0000000441240000
[   20.120106] GICv3: CPU196: using allocated LPI pending table @0x0000000881540000
[   20.120174] CPU196: Booted secondary processor 0x0000010402 [0x431f0af1]
[   20.120315] Bringup CPU196 0
[   20.120319] Bringup CPU197 left 59
[   20.275680] Detected PIPT I-cache on CPU197
[   20.275841] GICv3: CPU197: found redistributor 10502 region 1:0x00000004412c0000
[   20.275852] GICv3: CPU197: using allocated LPI pending table @0x0000000881550000
[   20.275919] CPU197: Booted secondary processor 0x0000010502 [0x431f0af1]
[   20.276064] Bringup CPU197 0
[   20.276068] Bringup CPU198 left 58
[   20.430754] Detected PIPT I-cache on CPU198
[   20.430913] GICv3: CPU198: found redistributor 10602 region 1:0x0000000441340000
[   20.430924] GICv3: CPU198: using allocated LPI pending table @0x0000000881560000
[   20.430992] CPU198: Booted secondary processor 0x0000010602 [0x431f0af1]
[   20.431230] Bringup CPU198 0
[   20.431234] Bringup CPU199 left 57
[   20.585910] Detected PIPT I-cache on CPU199
[   20.586072] GICv3: CPU199: found redistributor 10702 region 1:0x00000004413c0000
[   20.586085] GICv3: CPU199: using allocated LPI pending table @0x0000000881570000
[   20.586150] CPU199: Booted secondary processor 0x0000010702 [0x431f0af1]
[   20.586298] Bringup CPU199 0
[   20.586301] Bringup CPU200 left 56
[   20.741300] Detected PIPT I-cache on CPU200
[   20.741469] GICv3: CPU200: found redistributor 10802 region 1:0x0000000441440000
[   20.741481] GICv3: CPU200: using allocated LPI pending table @0x0000000881580000
[   20.741548] CPU200: Booted secondary processor 0x0000010802 [0x431f0af1]
[   20.741690] Bringup CPU200 0
[   20.741694] Bringup CPU201 left 55
[   20.896632] Detected PIPT I-cache on CPU201
[   20.896796] GICv3: CPU201: found redistributor 10902 region 1:0x00000004414c0000
[   20.896808] GICv3: CPU201: using allocated LPI pending table @0x0000000881590000
[   20.896874] CPU201: Booted secondary processor 0x0000010902 [0x431f0af1]
[   20.897016] Bringup CPU201 0
[   20.897020] Bringup CPU202 left 54
[   21.051752] Detected PIPT I-cache on CPU202
[   21.051918] GICv3: CPU202: found redistributor 10a02 region 1:0x0000000441540000
[   21.051930] GICv3: CPU202: using allocated LPI pending table @0x00000008815a0000
[   21.051998] CPU202: Booted secondary processor 0x0000010a02 [0x431f0af1]
[   21.052138] Bringup CPU202 0
[   21.052142] Bringup CPU203 left 53
[   21.207990] Detected PIPT I-cache on CPU203
[   21.208156] GICv3: CPU203: found redistributor 10b02 region 1:0x00000004415c0000
[   21.208168] GICv3: CPU203: using allocated LPI pending table @0x00000008815b0000
[   21.208235] CPU203: Booted secondary processor 0x0000010b02 [0x431f0af1]
[   21.208377] Bringup CPU203 0
[   21.208380] Bringup CPU204 left 52
[   21.363111] Detected PIPT I-cache on CPU204
[   21.363279] GICv3: CPU204: found redistributor 10c02 region 1:0x0000000441640000
[   21.363292] GICv3: CPU204: using allocated LPI pending table @0x00000008815c0000
[   21.363359] CPU204: Booted secondary processor 0x0000010c02 [0x431f0af1]
[   21.363598] Bringup CPU204 0
[   21.363601] Bringup CPU205 left 51
[   21.518927] Detected PIPT I-cache on CPU205
[   21.519098] GICv3: CPU205: found redistributor 10d02 region 1:0x00000004416c0000
[   21.519110] GICv3: CPU205: using allocated LPI pending table @0x00000008815d0000
[   21.519177] CPU205: Booted secondary processor 0x0000010d02 [0x431f0af1]
[   21.519326] Bringup CPU205 0
[   21.519329] Bringup CPU206 left 50
[   21.673907] Detected PIPT I-cache on CPU206
[   21.674079] GICv3: CPU206: found redistributor 10e02 region 1:0x0000000441740000
[   21.674091] GICv3: CPU206: using allocated LPI pending table @0x00000008815e0000
[   21.674158] CPU206: Booted secondary processor 0x0000010e02 [0x431f0af1]
[   21.674299] Bringup CPU206 0
[   21.674302] Bringup CPU207 left 49
[   21.828905] Detected PIPT I-cache on CPU207
[   21.829083] GICv3: CPU207: found redistributor 10f02 region 1:0x00000004417c0000
[   21.829095] GICv3: CPU207: using allocated LPI pending table @0x00000008815f0000
[   21.829162] CPU207: Booted secondary processor 0x0000010f02 [0x431f0af1]
[   21.829303] Bringup CPU207 0
[   21.829307] Bringup CPU208 left 48
[   21.984625] Detected PIPT I-cache on CPU208
[   21.984804] GICv3: CPU208: found redistributor 11002 region 1:0x00000004=
41840000
[   21.984817] GICv3: CPU208: using allocated LPI pending table @0x00000008=
81600000
[   21.984883] CPU208: Booted secondary processor 0x0000011002 [0x431f0af1]
[   21.985027] Bringup CPU208 0
[   21.985030] Bringup CPU209 left 47
[   22.140149] Detected PIPT I-cache on CPU209
[   22.140328] GICv3: CPU209: found redistributor 11102 region 1:0x00000004=
418c0000
[   22.140340] GICv3: CPU209: using allocated LPI pending table @0x00000008=
81610000
[   22.140407] CPU209: Booted secondary processor 0x0000011102 [0x431f0af1]
[   22.140551] Bringup CPU209 0
[   22.140554] Bringup CPU210 left 46
[   22.295766] Detected PIPT I-cache on CPU210
[   22.295948] GICv3: CPU210: found redistributor 11202 region 1:0x00000004=
41940000
[   22.295960] GICv3: CPU210: using allocated LPI pending table @0x00000008=
81620000
[   22.296028] CPU210: Booted secondary processor 0x0000011202 [0x431f0af1]
[   22.296176] Bringup CPU210 0
[   22.296180] Bringup CPU211 left 45
[   22.450955] Detected PIPT I-cache on CPU211
[   22.451137] GICv3: CPU211: found redistributor 11302 region 1:0x00000004=
419c0000
[   22.451150] GICv3: CPU211: using allocated LPI pending table @0x00000008=
81630000
[   22.451216] CPU211: Booted secondary processor 0x0000011302 [0x431f0af1]
[   22.451467] Bringup CPU211 0
[   22.451471] Bringup CPU212 left 44
[   22.606322] Detected PIPT I-cache on CPU212
[   22.606507] GICv3: CPU212: found redistributor 11402 region 1:0x00000004=
41a40000
[   22.606519] GICv3: CPU212: using allocated LPI pending table @0x00000008=
81640000
[   22.606587] CPU212: Booted secondary processor 0x0000011402 [0x431f0af1]
[   22.606742] Bringup CPU212 0
[   22.606746] Bringup CPU213 left 43
[   22.762153] Detected PIPT I-cache on CPU213
[   22.762342] GICv3: CPU213: found redistributor 11502 region 1:0x00000004=
41ac0000
[   22.762354] GICv3: CPU213: using allocated LPI pending table @0x00000008=
81650000
[   22.762422] CPU213: Booted secondary processor 0x0000011502 [0x431f0af1]
[   22.762565] Bringup CPU213 0
[   22.762569] Bringup CPU214 left 42
[   22.917237] Detected PIPT I-cache on CPU214
[   22.917427] GICv3: CPU214: found redistributor 11602 region 1:0x00000004=
41b40000
[   22.917439] GICv3: CPU214: using allocated LPI pending table @0x00000008=
81660000
[   22.917506] CPU214: Booted secondary processor 0x0000011602 [0x431f0af1]
[   22.917647] Bringup CPU214 0
[   22.917651] Bringup CPU215 left 41
[   23.072267] Detected PIPT I-cache on CPU215
[   23.072458] GICv3: CPU215: found redistributor 11702 region 1:0x00000004=
41bc0000
[   23.072470] GICv3: CPU215: using allocated LPI pending table @0x00000008=
81670000
[   23.072537] CPU215: Booted secondary processor 0x0000011702 [0x431f0af1]
[   23.072678] Bringup CPU215 0
[   23.072682] Bringup CPU216 left 40
[   23.227546] Detected PIPT I-cache on CPU216
[   23.227739] GICv3: CPU216: found redistributor 11802 region 1:0x00000004=
41c40000
[   23.227752] GICv3: CPU216: using allocated LPI pending table @0x00000008=
81680000
[   23.227818] CPU216: Booted secondary processor 0x0000011802 [0x431f0af1]
[   23.227963] Bringup CPU216 0
[   23.227966] Bringup CPU217 left 39
[   23.383191] Detected PIPT I-cache on CPU217
[   23.383384] GICv3: CPU217: found redistributor 11902 region 1:0x00000004=
41cc0000
[   23.383396] GICv3: CPU217: using allocated LPI pending table @0x00000008=
81690000
[   23.383464] CPU217: Booted secondary processor 0x0000011902 [0x431f0af1]
[   23.383608] Bringup CPU217 0
[   23.383612] Bringup CPU218 left 38
[   23.538788] Detected PIPT I-cache on CPU218
[   23.538985] GICv3: CPU218: found redistributor 11a02 region 1:0x00000004=
41d40000
[   23.538997] GICv3: CPU218: using allocated LPI pending table @0x00000008=
816a0000
[   23.539066] CPU218: Booted secondary processor 0x0000011a02 [0x431f0af1]
[   23.539317] Bringup CPU218 0
[   23.539321] Bringup CPU219 left 37
[   23.694100] Detected PIPT I-cache on CPU219
[   23.694299] GICv3: CPU219: found redistributor 11b02 region 1:0x00000004=
41dc0000
[   23.694312] GICv3: CPU219: using allocated LPI pending table @0x00000008=
816b0000
[   23.694380] CPU219: Booted secondary processor 0x0000011b02 [0x431f0af1]
[   23.694532] Bringup CPU219 0
[   23.694536] Bringup CPU220 left 36
[   23.850295] Detected PIPT I-cache on CPU220
[   23.850496] GICv3: CPU220: found redistributor 11c02 region 1:0x00000004=
41e40000
[   23.850509] GICv3: CPU220: using allocated LPI pending table @0x00000008=
816c0000
[   23.850576] CPU220: Booted secondary processor 0x0000011c02 [0x431f0af1]
[   23.850723] Bringup CPU220 0
[   23.850727] Bringup CPU221 left 35
[   24.005688] Detected PIPT I-cache on CPU221
[   24.005888] GICv3: CPU221: found redistributor 11d02 region 1:0x00000004=
41ec0000
[   24.005901] GICv3: CPU221: using allocated LPI pending table @0x00000008=
816d0000
[   24.005968] CPU221: Booted secondary processor 0x0000011d02 [0x431f0af1]
[   24.006112] Bringup CPU221 0
[   24.006115] Bringup CPU222 left 34
[   24.161241] Detected PIPT I-cache on CPU222
[   24.161448] GICv3: CPU222: found redistributor 11e02 region 1:0x00000004=
41f40000
[   24.161461] GICv3: CPU222: using allocated LPI pending table @0x00000008=
816e0000
[   24.161529] CPU222: Booted secondary processor 0x0000011e02 [0x431f0af1]
[   24.161672] Bringup CPU222 0
[   24.161675] Bringup CPU223 left 33
[   24.316729] Detected PIPT I-cache on CPU223
[   24.316935] GICv3: CPU223: found redistributor 11f02 region 1:0x00000004=
41fc0000
[   24.316948] GICv3: CPU223: using allocated LPI pending table @0x00000008=
816f0000
[   24.317015] CPU223: Booted secondary processor 0x0000011f02 [0x431f0af1]
[   24.317163] Bringup CPU223 0
[   24.317167] Bringup CPU224 left 32
[   24.472373] Detected PIPT I-cache on CPU224
[   24.472527] GICv3: CPU224: found redistributor 10003 region 1:0x00000004=
41060000
[   24.472540] GICv3: CPU224: using allocated LPI pending table @0x00000008=
81700000
[   24.472606] CPU224: Booted secondary processor 0x0000010003 [0x431f0af1]
[   24.472865] Bringup CPU224 0
[   24.472869] Bringup CPU225 left 31
[   24.628094] Detected PIPT I-cache on CPU225
[   24.628251] GICv3: CPU225: found redistributor 10103 region 1:0x00000004=
410e0000
[   24.628263] GICv3: CPU225: using allocated LPI pending table @0x00000008=
81710000
[   24.628331] CPU225: Booted secondary processor 0x0000010103 [0x431f0af1]
[   24.628485] Bringup CPU225 0
[   24.628489] Bringup CPU226 left 30
[   24.783218] Detected PIPT I-cache on CPU226
[   24.783376] GICv3: CPU226: found redistributor 10203 region 1:0x00000004=
41160000
[   24.783389] GICv3: CPU226: using allocated LPI pending table @0x00000008=
81720000
[   24.783455] CPU226: Booted secondary processor 0x0000010203 [0x431f0af1]
[   24.783602] Bringup CPU226 0
[   24.783606] Bringup CPU227 left 29
[   24.938519] Detected PIPT I-cache on CPU227
[   24.938679] GICv3: CPU227: found redistributor 10303 region 1:0x00000004=
411e0000
[   24.938692] GICv3: CPU227: using allocated LPI pending table @0x00000008=
81730000
[   24.938758] CPU227: Booted secondary processor 0x0000010303 [0x431f0af1]
[   24.938902] Bringup CPU227 0
[   24.938906] Bringup CPU228 left 28
[   25.093622] Detected PIPT I-cache on CPU228
[   25.093789] GICv3: CPU228: found redistributor 10403 region 1:0x00000004=
41260000
[   25.093802] GICv3: CPU228: using allocated LPI pending table @0x00000008=
81740000
[   25.093868] CPU228: Booted secondary processor 0x0000010403 [0x431f0af1]
[   25.094015] Bringup CPU228 0
[   25.094019] Bringup CPU229 left 27
[   25.249382] Detected PIPT I-cache on CPU229
[   25.249548] GICv3: CPU229: found redistributor 10503 region 1:0x00000004=
412e0000
[   25.249560] GICv3: CPU229: using allocated LPI pending table @0x00000008=
81750000
[   25.249629] CPU229: Booted secondary processor 0x0000010503 [0x431f0af1]
[   25.249772] Bringup CPU229 0
[   25.249776] Bringup CPU230 left 26
[   25.404562] Detected PIPT I-cache on CPU230
[   25.404728] GICv3: CPU230: found redistributor 10603 region 1:0x00000004=
41360000
[   25.404741] GICv3: CPU230: using allocated LPI pending table @0x00000008=
81760000
[   25.404808] CPU230: Booted secondary processor 0x0000010603 [0x431f0af1]
[   25.404951] Bringup CPU230 0
[   25.404955] Bringup CPU231 left 25
[   25.559585] Detected PIPT I-cache on CPU231
[   25.559755] GICv3: CPU231: found redistributor 10703 region 1:0x00000004=
413e0000
[   25.559768] GICv3: CPU231: using allocated LPI pending table @0x00000008=
81770000
[   25.559837] CPU231: Booted secondary processor 0x0000010703 [0x431f0af1]
[   25.560097] Bringup CPU231 0
[   25.560101] Bringup CPU232 left 24
[   25.715151] Detected PIPT I-cache on CPU232
[   25.715321] GICv3: CPU232: found redistributor 10803 region 1:0x00000004=
41460000
[   25.715334] GICv3: CPU232: using allocated LPI pending table @0x00000008=
81780000
[   25.715404] CPU232: Booted secondary processor 0x0000010803 [0x431f0af1]
[   25.715556] Bringup CPU232 0
[   25.715560] Bringup CPU233 left 23
[   25.870587] Detected PIPT I-cache on CPU233
[   25.870759] GICv3: CPU233: found redistributor 10903 region 1:0x00000004=
414e0000
[   25.870771] GICv3: CPU233: using allocated LPI pending table @0x00000008=
81790000
[   25.870839] CPU233: Booted secondary processor 0x0000010903 [0x431f0af1]
[   25.870978] Bringup CPU233 0
[   25.870982] Bringup CPU234 left 22
[   26.026220] Detected PIPT I-cache on CPU234
[   26.026396] GICv3: CPU234: found redistributor 10a03 region 1:0x00000004=
41560000
[   26.026409] GICv3: CPU234: using allocated LPI pending table @0x00000008=
817a0000
[   26.026475] CPU234: Booted secondary processor 0x0000010a03 [0x431f0af1]
[   26.026623] Bringup CPU234 0
[   26.026627] Bringup CPU235 left 21
[   26.182382] Detected PIPT I-cache on CPU235
[   26.182559] GICv3: CPU235: found redistributor 10b03 region 1:0x00000004=
415e0000
[   26.182572] GICv3: CPU235: using allocated LPI pending table @0x00000008=
817b0000
[   26.182639] CPU235: Booted secondary processor 0x0000010b03 [0x431f0af1]
[   26.182782] Bringup CPU235 0
[   26.182785] Bringup CPU236 left 20
[   26.337520] Detected PIPT I-cache on CPU236
[   26.337697] GICv3: CPU236: found redistributor 10c03 region 1:0x00000004=
41660000
[   26.337711] GICv3: CPU236: using allocated LPI pending table @0x00000008=
817c0000
[   26.337777] CPU236: Booted secondary processor 0x0000010c03 [0x431f0af1]
[   26.337925] Bringup CPU236 0
[   26.337929] Bringup CPU237 left 19
[   26.493156] Detected PIPT I-cache on CPU237
[   26.493336] GICv3: CPU237: found redistributor 10d03 region 1:0x00000004=
416e0000
[   26.493349] GICv3: CPU237: using allocated LPI pending table @0x00000008=
817d0000
[   26.493418] CPU237: Booted secondary processor 0x0000010d03 [0x431f0af1]
[   26.493688] Bringup CPU237 0
[   26.493691] Bringup CPU238 left 18
[   26.648336] Detected PIPT I-cache on CPU238
[   26.648520] GICv3: CPU238: found redistributor 10e03 region 1:0x00000004=
41760000
[   26.648533] GICv3: CPU238: using allocated LPI pending table @0x00000008=
817e0000
[   26.648601] CPU238: Booted secondary processor 0x0000010e03 [0x431f0af1]
[   26.648757] Bringup CPU238 0
[   26.648761] Bringup CPU239 left 17
[   26.803436] Detected PIPT I-cache on CPU239
[   26.803620] GICv3: CPU239: found redistributor 10f03 region 1:0x00000004=
417e0000
[   26.803633] GICv3: CPU239: using allocated LPI pending table @0x00000008=
817f0000
[   26.803700] CPU239: Booted secondary processor 0x0000010f03 [0x431f0af1]
[   26.803842] Bringup CPU239 0
[   26.803845] Bringup CPU240 left 16
[   26.959245] Detected PIPT I-cache on CPU240
[   26.959431] GICv3: CPU240: found redistributor 11003 region 1:0x00000004=
41860000
[   26.959445] GICv3: CPU240: using allocated LPI pending table @0x00000008=
81800000
[   26.959513] CPU240: Booted secondary processor 0x0000011003 [0x431f0af1]
[   26.959656] Bringup CPU240 0
[   26.959660] Bringup CPU241 left 15
[   27.114861] Detected PIPT I-cache on CPU241
[   27.115049] GICv3: CPU241: found redistributor 11103 region 1:0x00000004=
418e0000
[   27.115061] GICv3: CPU241: using allocated LPI pending table @0x00000008=
81810000
[   27.115127] CPU241: Booted secondary processor 0x0000011103 [0x431f0af1]
[   27.115278] Bringup CPU241 0
[   27.115281] Bringup CPU242 left 14
[   27.270560] Detected PIPT I-cache on CPU242
[   27.270753] GICv3: CPU242: found redistributor 11203 region 1:0x00000004=
41960000
[   27.270767] GICv3: CPU242: using allocated LPI pending table @0x00000008=
81820000
[   27.270834] CPU242: Booted secondary processor 0x0000011203 [0x431f0af1]
[   27.270980] Bringup CPU242 0
[   27.270983] Bringup CPU243 left 13
[   27.426107] Detected PIPT I-cache on CPU243
[   27.426299] GICv3: CPU243: found redistributor 11303 region 1:0x00000004=
419e0000
[   27.426312] GICv3: CPU243: using allocated LPI pending table @0x00000008=
81830000
[   27.426380] CPU243: Booted secondary processor 0x0000011303 [0x431f0af1]
[   27.426525] Bringup CPU243 0
[   27.426529] Bringup CPU244 left 12
[   27.581504] Detected PIPT I-cache on CPU244
[   27.581699] GICv3: CPU244: found redistributor 11403 region 1:0x00000004=
41a60000
[   27.581713] GICv3: CPU244: using allocated LPI pending table @0x00000008=
81840000
[   27.581781] CPU244: Booted secondary processor 0x0000011403 [0x431f0af1]
[   27.582055] Bringup CPU244 0
[   27.582059] Bringup CPU245 left 11
[   27.737400] Detected PIPT I-cache on CPU245
[   27.737596] GICv3: CPU245: found redistributor 11503 region 1:0x00000004=
41ae0000
[   27.737608] GICv3: CPU245: using allocated LPI pending table @0x00000008=
81850000
[   27.737676] CPU245: Booted secondary processor 0x0000011503 [0x431f0af1]
[   27.737830] Bringup CPU245 0
[   27.737833] Bringup CPU246 left 10
[   27.892579] Detected PIPT I-cache on CPU246
[   27.892778] GICv3: CPU246: found redistributor 11603 region 1:0x00000004=
41b60000
[   27.892791] GICv3: CPU246: using allocated LPI pending table @0x00000008=
81860000
[   27.892857] CPU246: Booted secondary processor 0x0000011603 [0x431f0af1]
[   27.893004] Bringup CPU246 0
[   27.893008] Bringup CPU247 left 9
[   28.047725] Detected PIPT I-cache on CPU247
[   28.047925] GICv3: CPU247: found redistributor 11703 region 1:0x00000004=
41be0000
[   28.047938] GICv3: CPU247: using allocated LPI pending table @0x00000008=
81870000
[   28.048008] CPU247: Booted secondary processor 0x0000011703 [0x431f0af1]
[   28.048159] Bringup CPU247 0
[   28.048163] Bringup CPU248 left 8
[   28.203549] Detected PIPT I-cache on CPU248
[   28.203752] GICv3: CPU248: found redistributor 11803 region 1:0x00000004=
41c60000
[   28.203765] GICv3: CPU248: using allocated LPI pending table @0x00000008=
81880000
[   28.203831] CPU248: Booted secondary processor 0x0000011803 [0x431f0af1]
[   28.203976] Bringup CPU248 0
[   28.203980] Bringup CPU249 left 7
[   28.359259] Detected PIPT I-cache on CPU249
[   28.359465] GICv3: CPU249: found redistributor 11903 region 1:0x00000004=
41ce0000
[   28.359478] GICv3: CPU249: using allocated LPI pending table @0x00000008=
81890000
[   28.359546] CPU249: Booted secondary processor 0x0000011903 [0x431f0af1]
[   28.359696] Bringup CPU249 0
[   28.359699] Bringup CPU250 left 6
[   28.514990] Detected PIPT I-cache on CPU250
[   28.515196] GICv3: CPU250: found redistributor 11a03 region 1:0x00000004=
41d60000
[   28.515209] GICv3: CPU250: using allocated LPI pending table @0x00000008=
818a0000
[   28.515278] CPU250: Booted secondary processor 0x0000011a03 [0x431f0af1]
[   28.515427] Bringup CPU250 0
[   28.515431] Bringup CPU251 left 5
[   28.670202] Detected PIPT I-cache on CPU251
[   28.670410] GICv3: CPU251: found redistributor 11b03 region 1:0x00000004=
41de0000
[   28.670424] GICv3: CPU251: using allocated LPI pending table @0x00000008=
818b0000
[   28.670490] CPU251: Booted secondary processor 0x0000011b03 [0x431f0af1]
[   28.670764] Bringup CPU251 0
[   28.670767] Bringup CPU252 left 4
[   28.826271] Detected PIPT I-cache on CPU252
[   28.826482] GICv3: CPU252: found redistributor 11c03 region 1:0x00000004=
41e60000
[   28.826495] GICv3: CPU252: using allocated LPI pending table @0x00000008=
818c0000
[   28.826564] CPU252: Booted secondary processor 0x0000011c03 [0x431f0af1]
[   28.826721] Bringup CPU252 0
[   28.826725] Bringup CPU253 left 3
[   28.981779] Detected PIPT I-cache on CPU253
[   28.981990] GICv3: CPU253: found redistributor 11d03 region 1:0x00000004=
41ee0000
[   28.982003] GICv3: CPU253: using allocated LPI pending table @0x00000008=
818d0000
[   28.982071] CPU253: Booted secondary processor 0x0000011d03 [0x431f0af1]
[   28.982224] Bringup CPU253 0
[   28.982227] Bringup CPU254 left 2
[   29.137440] Detected PIPT I-cache on CPU254
[   29.137658] GICv3: CPU254: found redistributor 11e03 region 1:0x00000004=
41f60000
[   29.137671] GICv3: CPU254: using allocated LPI pending table @0x00000008=
818e0000
[   29.137737] CPU254: Booted secondary processor 0x0000011e03 [0x431f0af1]
[   29.137895] Bringup CPU254 0
[   29.137899] Bringup CPU255 left 1
[   29.137901] smp: Brought up 2 nodes, 255 CPUs
[   29.137904] SMP: Total of 255 processors activated.

--lnLTz5q6oum7yiEn
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAABCgAdFiEEreZoqmdXGLWf4p/qJNaLcl1Uh9AFAmRr7E4ACgkQJNaLcl1U
h9BNvgf/ftrexCGa7QYVeyqWe115HCddY0kTkMyPmo914XpHBkMRqLQepkpMpR/d
e07ZTuXjHbYtICJ+/1W2zHRqa2r4KmguwWOFCZtWgTLE+Zeb2bNOc29LjTUU9c1X
G1l3ZQfCCfvXKFHJosNVg3dQUOacMoQ/TrC5evE3/XffcXrJKdyzdsfJal5E2OV0
HHPzR/yZa+LInZADOxwJiJiYF1xNOg4dwNorMhtTRCfjCnA1W4mGuw7RkHX5i0zp
ugP5GXE/ktORobIt8tTTKqaoYzr/dh6OXBw3EhyJgzGuQwmAZmsiMViaYw0fEzaQ
EEVPXFaSEp4e3/DgE5KOT5fl4RwI4w==
=Iiz6
-----END PGP SIGNATURE-----

--lnLTz5q6oum7yiEn--


From xen-devel-bounces@lists.xenproject.org Mon May 22 23:12:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 22 May 2023 23:12:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538136.837925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Ehb-0004Gw-Pv; Mon, 22 May 2023 23:12:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538136.837925; Mon, 22 May 2023 23:12:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Ehb-0004Gp-ND; Mon, 22 May 2023 23:12:31 +0000
Received: by outflank-mailman (input) for mailman id 538136;
 Mon, 22 May 2023 23:12:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7M57=BL=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q1Eha-0004Gj-LJ
 for xen-devel@lists.xenproject.org; Mon, 22 May 2023 23:12:30 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ec436ff-f8f6-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 01:12:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ec436ff-f8f6-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1684797146;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=C7TsK7GgGmkvmWTTcWM2ucSNGAZooIPJARpsVm1elw0=;
	b=yrUtqJV10ki6ehHolL2hnu9DMcCIuWlud8kBCfHX2rWfH90kquusd5JROH50fXuSFSmQwU
	PKNQy6H0pPg6Mk5OQdYPhKYbeQI1YUR9KMUd/a7Qt8XePk2e/SyRiHPAMSjuFqp7Gvl67j
	jsYzQTS2roesS5yiUarkF8GfeI3DeGPf6Ebcoc4Ljc1KYqKx7NP63xqe3/V5tuO0wj3vfK
	JFIGfRt4eOzIcmOGPkAafZCHGsIxOB41QQ0hviBcfX4KV8V6SP+E8PZS1myMMBnon3NhCD
	iok8IPIaOi8QqOj9b+eAxWl9vgJ4JWbE94zRxWOk3/fV7mjhSGxDcyoSZunF8g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1684797146;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=C7TsK7GgGmkvmWTTcWM2ucSNGAZooIPJARpsVm1elw0=;
	b=LsHOhlHhAmIjOmv4OFf6rm8oVhRsGLXF3Ede/E4PfuqZRooT2DLrTIb9VMr4oD66YERTjE
	beGkkZmQ8Ci6bkBQ==
To: Mark Brown <broonie@kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Ross Philipson <ross.philipson@oracle.com>,
 David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch V4 33/37] cpu/hotplug: Allow "parallel" bringup up to
 CPUHP_BP_KICK_AP_STATE
In-Reply-To: <2ed3ff77-c973-4e23-9e2f-f10776e432b7@sirena.org.uk>
References: <20230512203426.452963764@linutronix.de>
 <20230512205257.240231377@linutronix.de>
 <4ca39e58-055f-432c-8124-7c747fa4e85b@sirena.org.uk> <87bkicw01a.ffs@tglx>
 <2ed3ff77-c973-4e23-9e2f-f10776e432b7@sirena.org.uk>
Date: Tue, 23 May 2023 01:12:26 +0200
Message-ID: <87wn10ufj9.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Mon, May 22 2023 at 23:27, Mark Brown wrote:
> On Mon, May 22, 2023 at 11:04:17PM +0200, Thomas Gleixner wrote:
>
>> That does not make any sense at all and my tired brain does not help
>> either.
>
>> Can you please apply the below debug patch and provide the output?
>
> Here's the log; a quick glance says the
>
> 	if (!--ncpus)
> 		break;
>
> check is doing the wrong thing

Obviously.

Let me find a brown paper bag and go to sleep before I even try to
compile the obvious fix.

---
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 005f863a3d2b..88a7ede322bd 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1770,9 +1770,6 @@ static void __init cpuhp_bringup_mask(const struct cpumask *mask, unsigned int n
 	for_each_cpu(cpu, mask) {
 		struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 
-		if (!--ncpus)
-			break;
-
 		if (cpu_up(cpu, target) && can_rollback_cpu(st)) {
 			/*
 			 * If this failed then cpu_up() might have only
@@ -1781,6 +1778,9 @@ static void __init cpuhp_bringup_mask(const struct cpumask *mask, unsigned int n
 			 */
 			WARN_ON(cpuhp_invoke_callback_range(false, cpu, st, CPUHP_OFFLINE));
 		}
+
+		if (!--ncpus)
+			break;
 	}
 }
 


From xen-devel-bounces@lists.xenproject.org Tue May 23 00:36:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 00:36:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538164.837948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1G0O-0004th-Lv; Tue, 23 May 2023 00:36:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538164.837948; Tue, 23 May 2023 00:36:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1G0O-0004ta-Id; Tue, 23 May 2023 00:36:00 +0000
Received: by outflank-mailman (input) for mailman id 538164;
 Tue, 23 May 2023 00:35:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1G0N-0004tQ-O7; Tue, 23 May 2023 00:35:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1G0N-0001Az-Dh; Tue, 23 May 2023 00:35:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1G0M-0005iq-VU; Tue, 23 May 2023 00:35:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1G0M-0005i8-V1; Tue, 23 May 2023 00:35:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=DcjRthjQhHKaPkeiOa+PxZKMgAE1lyFRE0Kc9DScR0w=; b=h+XdNXcNvl+QAp9/fLub4Fh587
	tPbnISbL8QeI/wRtI0YdU/cJg6bWFkCYUMSGrUWD/0aGF5NRR5Sd+49y5AThkTXSQv3ZYZEX4gAAi
	VE6BIqob8+RjHbPAF2mVatM1yo3fq7s/w7/fVPNPzanAfBcCAy3Ru69MYYwwaRSYqTBQ=;
To: xen-devel@lists.xenproject.org
Subject: [qemu-mainline bisection] complete build-armhf
Message-Id: <E1q1G0M-0005i8-V1@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 00:35:58 +0000

branch xen-unstable
xenbranch xen-unstable
job build-armhf
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180904/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-armhf.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-armhf.xen-build --summary-out=tmp/180904.bisection-summary --basis-template=180691 --blessings=real,real-bisect,real-retry qemu-mainline build-armhf xen-build
Searching for failure / basis pass:
 180894 fail [host=cubietruck-metzinger] / 180691 [host=cubietruck-gleizes] 180686 [host=cubietruck-gleizes] 180673 ok.
Failure / basis pass flights: 180894 / 180673
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu https://gitlab.com/qemu-project/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c5cf7f69c98baed40754ca4a821cb504fd5423cd aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
Basis pass 80bc13db83ddbd5bbe757a20abcdd34daf4871f8 18b6727083acceac5d76ea0b8cb6f5cdef6858a7 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#80bc13db83ddbd5bbe757a20abcdd34daf4871f8-c5cf7f69c98baed40754ca4a821cb504fd5423cd https://gitlab.com/qemu-project/qemu.git#18b6727083acceac5d76ea0b8cb6f5cdef6858a7-aa222a8e4f975284b3f8f131653a4114b3d333b3 git://xenbits.xen.org/osstest/seabios.git#ea1b7a0733906b8425d948ae94fba63c32b1d425-be7e899350caa7b74d8271a34264c3b4aef25ab0 git://xenbits.xen.org/xen.git#4c507d8a6b6e8be90881a335b0a66eb28e0f7737-753d903a6f2d1e68d98487d36449b5739c28d65a
Loaded 30630 nodes in revision graph
Searching for test results:
 180673 pass 80bc13db83ddbd5bbe757a20abcdd34daf4871f8 18b6727083acceac5d76ea0b8cb6f5cdef6858a7 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
 180686 [host=cubietruck-gleizes]
 180702 [host=cubietruck-picasso]
 180691 [host=cubietruck-gleizes]
 180704 [host=cubietruck-picasso]
 180721 [host=cubietruck-gleizes]
 180742 [host=cubietruck-gleizes]
 180753 [host=cubietruck-gleizes]
 180785 [host=cubietruck-gleizes]
 180807 [host=cubietruck-gleizes]
 180815 [host=cubietruck-gleizes]
 180825 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180835 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180843 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180853 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180860 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180866 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180872 pass 80bc13db83ddbd5bbe757a20abcdd34daf4871f8 18b6727083acceac5d76ea0b8cb6f5cdef6858a7 ea1b7a0733906b8425d948ae94fba63c32b1d425 4c507d8a6b6e8be90881a335b0a66eb28e0f7737
 180874 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180875 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 266ccbb27b3ec6661f22395ec2c41d854c94d761 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180873 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180877 pass cafb4f3f36e2101cab2ed6db3ea246a5a3e4708e 7d478306e84259678b2941e8af7496ef32a9c4c5 be7e899350caa7b74d8271a34264c3b4aef25ab0 42abf5b9c53eb1b1a902002fcda68708234152c3
 180878 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180880 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc ed8d95182bc994e31e730c59e1c8bfec4822b27d be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180882 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc a988b4c56143d90f98034daf176e416b08dddf36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180885 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc e80bdbf283fb7a3643172b7f85b41d9dd312091c be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180881 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180889 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc f1ad527ff5f789a19c79f5f39a87f7a8f78d81b9 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180886 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc a9dbde71da553fe0b132ffac6d1a0de16892a90d be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180891 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc dd48b477e90c3200b970545d1953e12e8c1431db be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180887 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180892 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180896 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180899 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180894 fail c5cf7f69c98baed40754ca4a821cb504fd5423cd aa222a8e4f975284b3f8f131653a4114b3d333b3 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180901 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180903 pass 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
 180904 fail 0abfb0be6cf78a8e962383e85cec57851ddae5bc 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
Searching for interesting versions
 Result found: flight 180673 (pass), for basis pass
 Result found: flight 180825 (fail), for basis failure (at ancestor ~1)
 Repro found: flight 180872 (pass), for basis pass
 Repro found: flight 180894 (fail), for basis failure
 0 revisions at 0abfb0be6cf78a8e962383e85cec57851ddae5bc 2274817f6c499fd31081d7973b7cbfdca17c44a8 be7e899350caa7b74d8271a34264c3b4aef25ab0 753d903a6f2d1e68d98487d36449b5739c28d65a
No revisions left to test, checking graph state.
 Result found: flight 180892 (pass), for last pass
 Result found: flight 180896 (fail), for first failure
 Repro found: flight 180899 (pass), for last pass
 Repro found: flight 180901 (fail), for first failure
 Repro found: flight 180903 (pass), for last pass
 Repro found: flight 180904 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu https://gitlab.com/qemu-project/qemu.git
  Bug introduced:  81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Bug not present: 2274817f6c499fd31081d7973b7cbfdca17c44a8
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/180904/


  commit 81e2b198a8cb4ee5fdf108bd438f44b193ee3a36
  Author: John Snow <jsnow@redhat.com>
  Date:   Wed May 10 23:54:23 2023 -0400
  
      configure: create a python venv unconditionally
      
      This patch changes the configure script so that it always creates and
      uses a python virtual environment unconditionally.
      
      Meson bootstrapping is temporarily altered to force the use of meson
      from git or vendored source (as packaged in our source tarballs). A
      subsequent commit restores the use of distribution-vendored Meson.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20230511035435.734312-16-jsnow@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

pnmtopng: 253 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/build-armhf.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
180904: tolerable ALL FAIL

flight 180904 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/180904/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-armhf                   6 xen-build               fail baseline untested


jobs:
 build-armhf                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue May 23 00:42:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 00:42:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538170.837958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1G6S-0006Lg-An; Tue, 23 May 2023 00:42:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538170.837958; Tue, 23 May 2023 00:42:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1G6S-0006LZ-84; Tue, 23 May 2023 00:42:16 +0000
Received: by outflank-mailman (input) for mailman id 538170;
 Tue, 23 May 2023 00:42:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1G6R-0006LP-3X; Tue, 23 May 2023 00:42:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1G6Q-0001U9-Q1; Tue, 23 May 2023 00:42:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1G6Q-0005qc-GB; Tue, 23 May 2023 00:42:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1G6Q-0007j3-Fc; Tue, 23 May 2023 00:42:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=O5q7hD0czsx+2G3QdAlGpXXT2bIiGhRDmLprAziFd/s=; b=BM8QNErYau6ommM868bbCw0Rue
	n23Yq3SLLee2B3FBZHj34CVHPizpm6toiYn/ia7n9NHfobU2PSCML9MHJRkXylxi/VhTogUGNPhhg
	BaY/YDk7032Q11P/JGvHhTw33b1UYYgtUCSPvGbxdnW6jI8cpWmtoUMNrirfhN3VWcOk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180890-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180890: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=44c026a73be8038f03dbdeef028b642880cf1511
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 00:42:14 +0000

flight 180890 linux-linus real [real]
flight 180905 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180890/
http://logs.test-lab.xenproject.org/osstest/logs/180905/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd11-amd64 21 guest-start/freebsd.repeat fail pass in 180905-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                44c026a73be8038f03dbdeef028b642880cf1511
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   36 days
Failing since        180281  2023-04-17 06:24:36 Z   35 days   66 attempts
Testing same since   180890  2023-05-22 06:50:33 Z    0 days    1 attempts

------------------------------------------------------------
2480 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 313583 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 23 01:06:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 01:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538179.837968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1GTZ-0007DO-Cj; Tue, 23 May 2023 01:06:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538179.837968; Tue, 23 May 2023 01:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1GTZ-0007DH-9J; Tue, 23 May 2023 01:06:09 +0000
Received: by outflank-mailman (input) for mailman id 538179;
 Tue, 23 May 2023 01:06:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1GTY-0007D7-5N; Tue, 23 May 2023 01:06:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1GTX-0000SJ-VP; Tue, 23 May 2023 01:06:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1GTX-0006V0-KK; Tue, 23 May 2023 01:06:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1GTX-0008Jj-Jp; Tue, 23 May 2023 01:06:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ois/LkUqh1IjmYRvDU3p+KaIGklUwvk4WHofHONrmME=; b=ZQE0e0W6Gda5YeKncvhl7Re0cx
	dWLzHUfN/Ckw/TBBCmPhqtQX/+WY/zpL4fYRv70MQx32/QvOVoOE9utk9TmaRLqHcVSnXAwDOylc5
	IN2XJQkIOLvm7MuM5RBJdWtEiEY/vD1vVS+iiX56Vu3mI5yLU0rU1TYbVDKTOUKWX13w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180902-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180902: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=ad3387396a71416cacc0b394e5e440dd6e9ba19a
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 01:06:07 +0000

flight 180902 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180902/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                ad3387396a71416cacc0b394e5e440dd6e9ba19a
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    5 days
Failing since        180699  2023-05-18 07:21:24 Z    4 days   21 attempts
Testing same since   180902  2023-05-22 20:10:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4955 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 23 03:46:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 03:46:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538201.837986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1IyC-0006u3-Ru; Tue, 23 May 2023 03:45:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538201.837986; Tue, 23 May 2023 03:45:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1IyC-0006tv-NT; Tue, 23 May 2023 03:45:56 +0000
Received: by outflank-mailman (input) for mailman id 538201;
 Tue, 23 May 2023 03:45:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1IyB-0006tl-Av; Tue, 23 May 2023 03:45:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1IyB-0005nD-0g; Tue, 23 May 2023 03:45:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1IyA-000504-GG; Tue, 23 May 2023 03:45:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1IyA-0007ZK-Fp; Tue, 23 May 2023 03:45:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Jyq1+fvJJgK1afT4I3JHGvCWElF/7ZIOEPWmcLogqcw=; b=GBb+OwkL0sUrUP4ft2XQai0KLl
	fbPdooeA3+zNVc5FqGX75cQudbPUWEdj8ezHLDTncz6WFdjlq7nrUqSxim3oPwyxIxbm0+zdg1BPr
	rJ6tGuj6xYcqsbJ3/GpSrMvhDvs6+ZXvOnflWE/7+2S10u1d+0Q0dcTAKzidInEzbPC8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180906-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180906: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=37246d54d656933035094ed95f2d8e4708058856
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 03:45:54 +0000

flight 180906 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180906/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                37246d54d656933035094ed95f2d8e4708058856
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    5 days
Failing since        180699  2023-05-18 07:21:24 Z    4 days   22 attempts
Testing same since   180906  2023-05-23 01:08:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5306 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 23 04:24:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 04:24:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538209.837996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1JYx-0002w9-S4; Tue, 23 May 2023 04:23:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538209.837996; Tue, 23 May 2023 04:23:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1JYx-0002w2-Os; Tue, 23 May 2023 04:23:55 +0000
Received: by outflank-mailman (input) for mailman id 538209;
 Tue, 23 May 2023 04:23:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1JYw-0002vs-0t; Tue, 23 May 2023 04:23:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1JYv-00077N-Pw; Tue, 23 May 2023 04:23:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1JYv-0007Jv-5U; Tue, 23 May 2023 04:23:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1JYv-00060o-4z; Tue, 23 May 2023 04:23:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jBeCxAzN7cuVbmwWmmqKbtpAOhmi2uxY+2ZDEsMmrf4=; b=xievT7mjckrH4atYgqxu9aO01T
	w+uQnSizOe8NDCnzh7niYIThHEtw/LWGq+CdwzprwlfLKEnwR9r+BL4h9ktv5830VnelFzAxWMS/z
	eVNvSY9djRMEFoiFsF4i6hHlasHBuWdeOcj4MD0evzcjGvlOE1fW9XLqgYwPW/twrjys=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180908-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180908: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5ce29ae84db340244c3c3299f84713a88dec5171
X-Osstest-Versions-That:
    ovmf=c5cf7f69c98baed40754ca4a821cb504fd5423cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 04:23:53 +0000

flight 180908 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180908/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5ce29ae84db340244c3c3299f84713a88dec5171
baseline version:
 ovmf                 c5cf7f69c98baed40754ca4a821cb504fd5423cd

Last test of basis   180883  2023-05-22 01:40:47 Z    1 days
Testing same since   180908  2023-05-23 01:12:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Michael D Kinney <michael.d.kinney@intel.com>
  Oliver Steffen <osteffen@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c5cf7f69c9..5ce29ae84d  5ce29ae84db340244c3c3299f84713a88dec5171 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 23 05:03:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 05:03:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538215.838006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1KBH-0007jl-Sh; Tue, 23 May 2023 05:03:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538215.838006; Tue, 23 May 2023 05:03:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1KBH-0007je-Of; Tue, 23 May 2023 05:03:31 +0000
Received: by outflank-mailman (input) for mailman id 538215;
 Tue, 23 May 2023 05:03:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1KBF-0007jR-Sg; Tue, 23 May 2023 05:03:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1KBF-0000F9-JC; Tue, 23 May 2023 05:03:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1KBF-0000K5-51; Tue, 23 May 2023 05:03:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1KBF-0005B1-4R; Tue, 23 May 2023 05:03:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2EP6/5Kw9oVGOMxtlLFj8PB5UJdhae4wYjtxKzuNUgw=; b=JLqKmN0OH3CIcWvpb/vUoRScUi
	nwBiNu8fqFa6TZV7o77wqpWkegVeFb7KLemq4Im02am9iij2ibQ90KGSlXywubUywJ45zn17DS5Vy
	zDi/YmX323RZnXSVEvFRDKhIFiXeNahHQqx6QjyzhsIZVsjmHnlrFbTN7DxAtCCz+7/g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180909-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180909: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=886c0453cbf10eebd42a9ccf89c3e46eb389c357
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 05:03:29 +0000

flight 180909 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180909/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                886c0453cbf10eebd42a9ccf89c3e46eb389c357
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    5 days
Failing since        180699  2023-05-18 07:21:24 Z    4 days   23 attempts
Testing same since   180909  2023-05-23 03:48:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5427 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 23 06:32:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 06:32:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538223.838016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1LYy-00008I-DJ; Tue, 23 May 2023 06:32:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538223.838016; Tue, 23 May 2023 06:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1LYy-00008B-9L; Tue, 23 May 2023 06:32:04 +0000
Received: by outflank-mailman (input) for mailman id 538223;
 Tue, 23 May 2023 06:32:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MIFg=BM=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q1LYw-000085-JQ
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 06:32:02 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0630.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 84af7b4a-f933-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 08:31:58 +0200 (CEST)
Received: from AM5PR04CA0021.eurprd04.prod.outlook.com (2603:10a6:206:1::34)
 by DB3PR08MB9111.eurprd08.prod.outlook.com (2603:10a6:10:43c::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 06:31:56 +0000
Received: from AM7EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:1:cafe::bc) by AM5PR04CA0021.outlook.office365.com
 (2603:10a6:206:1::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29 via Frontend
 Transport; Tue, 23 May 2023 06:31:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT064.mail.protection.outlook.com (100.127.140.127) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.14 via Frontend Transport; Tue, 23 May 2023 06:31:55 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Tue, 23 May 2023 06:31:55 +0000
Received: from b6b94f1fd7ca.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 61D0EAB4-BA31-46E5-ADE3-611831B8E445.1; 
 Tue, 23 May 2023 06:31:46 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b6b94f1fd7ca.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 23 May 2023 06:31:46 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB7905.eurprd08.prod.outlook.com (2603:10a6:10:3b3::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 06:31:43 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%4]) with mapi id 15.20.6411.028; Tue, 23 May 2023
 06:31:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84af7b4a-f933-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ruz11yEVISzMnnmv3xI0C/m+aDQpZaUTIWLpZzpQ1CE=;
 b=Q3ST0mC9I5t8cgDwvlVVQRiWdLkNFDdYC6AGww7quxhDBoOo8vxfsMUDBiqO2kMwak37cCwVGQQxbrBxha1fxjCxnPxwhhDAi3ZGHMiMXyw019y3frg6SZgVbLOAa5MXf8tCINy9Zi46OHnfgI3nBTVluAcB1wnC4OFiiY1/KJQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Thread-Topic: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
 tree NUMA distance map
Thread-Index:
 AQHZd0ulMpqFICe/LUiDVq6+aDsW9687sveAgAFTkCCAACNagIAAAPgggAADbgCAKli08A==
Date: Tue, 23 May 2023 06:31:41 +0000
Message-ID:
 <AS8PR08MB79919CD7C233345113F8D5C992409@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
 <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <13635377-e296-370d-121b-5b617dc210bc@suse.com>
 <AS8PR08MB7991DCE0DFC850FEA920BF8C92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <c195ef53-1151-1fb2-0cf9-f6f47d20b75e@suse.com>
In-Reply-To: <c195ef53-1151-1fb2-0cf9-f6f47d20b75e@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 5630A10A977B5749875297F31FB003CF.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB7905:EE_|AM7EUR03FT064:EE_|DB3PR08MB9111:EE_
X-MS-Office365-Filtering-Correlation-Id: 36abae3f-6c1a-4150-ae35-08db5b576788
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB7905
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	691f5c2b-71a7-4b80-d439-08db5b575f2c
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 06:31:55.9412
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 36abae3f-6c1a-4150-ae35-08db5b576788
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB3PR08MB9111

Hi Jan,

> -----Original Message-----
> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
> tree NUMA distance map
>
> >>>>> +        /* The default value in node_distance_map is
> >> NUMA_NO_DISTANCE
> >>>> */
> >>>>> +        if ( opposite == NUMA_NO_DISTANCE )
> >>>>
> >>>> And the matrix you're reading from can't hold NUMA_NO_DISTANCE
> >> entries?
> >>>> I ask because you don't check this above; you only check against
> >>>> NUMA_LOCAL_DISTANCE.
> >>>
> >>> My understanding for the purpose of this part of code is to check if the
> >> opposite
> >>> way distance has already been set, so we need to compare the opposite
> >> way
> >>> distance with the default value NUMA_NO_DISTANCE here.
> >>>
> >>> Back to your question, I can see your point of the question. However I
> don't
> >> think
> >>> NUMA_NO_DISTANCE is a valid value to describe the node distance in the
> >> device
> >>> tree. This is because I hunted down the previous discussions and found
> [2]
> >> about
> >>> we should try to keep consistent between the value used in device tree
> and
> >> ACPI
> >>> tables. From the ACPI spec, 0xFF, i.e. NUMA_NO_DISTANCE means
> >> unreachable.
> >>> I think this is also the reason why NUMA_NO_DISTANCE can be used as
> the
> >> default
> >>> value of the distance map, otherwise we won't have any value to use.
> >>
> >> The [2] link you provided discusses NUMA_LOCAL_DISTANCE.
> >
> > I inferred the discussion as "we should try to keep consistent between the
> value
> > used in device tree and ACPI tables". Maybe my inference is wrong.
> >
> >> Looking at
> >> Linux'es Documentation/devicetree/numa.txt, there's no mention of an
> >> upper bound on the distance values. It only says that on the diagonal
> >> entries should be 10 (i.e. matching ACPI, without really saying so).
> >
> > I agree that the NUMA device tree binding is a little bit vague. So I cannot
> > say the case that you provided is not valid. I would like to ask Arm
> maintainers
> > (putting them into To:) opinion on this as I think I am not the one to decide
> the
> > expected behavior on Arm.
> >
> > Bertrand/Julien/Stefano: Would you please kindly share your opinion on
> which
> > value should be used as the default value of the node distance map? Do
> you
> > think reusing the "unreachable" distance, i.e. 0xFF, as the default node
> distance
> > is acceptable here? Thanks!
>
> My suggestion would be to, rather than rejecting values >= 0xff, to saturate
> at 0xfe, while keeping 0xff for NUMA_NO_DISTANCE (and overall keeping
> things
> consistent with ACPI).

Since it has been a while and there were no feedback from Arm maintainers,
I would like to follow your suggestions for v5. However while I was writing the
code for the "saturation", i.e., adding below snippet in numa_set_distance():
```
if ( distance > NUMA_NO_DISTANCE )
        distance = NUMA_MAX_DISTANCE;
```
I noticed an issue which needs your clarification:
Currently, the default value of the distance map is NUMA_NO_DISTANCE,
which indicates the nodes are not reachable. IMHO, if in device tree, we do
saturations like above for ACPI invalid distances (distances >= 0xff), by saturating
the distance to 0xfe, we are making the unreachable nodes to reachable. I am
not sure if this will lead to problems. Do you have any more thoughts? Thanks!

Kind regards,
Henry

>
> Jan
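
The saturation scheme under discussion can be sketched as below. This is a
minimal standalone sketch, not Xen's actual numa_set_distance() code; the
helper name `saturate_distance` is made up for illustration, and the bounds
follow the ACPI SLIT convention quoted in the thread (0xff = unreachable,
so 0xfe is the largest usable distance):

```c
#include <stdint.h>

/* Conventions mirrored from the ACPI SLIT usage quoted in the thread:
 * 0xff means "no distance"/unreachable, so valid distances stop at 0xfe. */
#define NUMA_NO_DISTANCE   0xffu
#define NUMA_MAX_DISTANCE  0xfeu

/*
 * Hypothetical helper: rather than rejecting device-tree distances
 * >= 0xff, saturate them at 0xfe, keeping 0xff reserved for
 * NUMA_NO_DISTANCE.  Sketch only -- not the real implementation.
 */
static uint8_t saturate_distance(unsigned int distance)
{
    if ( distance >= NUMA_NO_DISTANCE )
        return NUMA_MAX_DISTANCE;
    return (uint8_t)distance;
}
```

Note the open question raised in the reply: saturating an input of 0xff this
way turns a firmware-declared "unreachable" pair of nodes into a reachable
one at maximum distance.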


From xen-devel-bounces@lists.xenproject.org Tue May 23 06:34:16 2023
MIME-Version: 1.0
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-4-jens.wiklander@linaro.org> <45f59b7a-592a-4a1e-b606-c2d564b979b8@perard>
In-Reply-To: <45f59b7a-592a-4a1e-b606-c2d564b979b8@perard>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Tue, 23 May 2023 08:34:00 +0200
Message-ID: <CAHUa44HSXOWeyi4PhCtyTLz=uCoiu9zEWuvvvgz7Pn1tTf1VHQ@mail.gmail.com>
Subject: Re: [XEN PATCH v8 03/22] tools: add Arm FF-A mediator
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com, 
	Marc Bonnici <marc.bonnici@arm.com>, Achin Gupta <achin.gupta@arm.com>, Wei Liu <wl@xen.org>, 
	Juergen Gross <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 18, 2023 at 4:35 PM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> On Thu, Apr 13, 2023 at 09:14:05AM +0200, Jens Wiklander wrote:
> > Adds a new "ffa" value to the Enumeration "tee_type" to indicate if a
> > guest is trusted to use FF-A.
>
> Is "ffa" working yet in the hypervisor? (At this point in the patch
> series) I'm asking because the doc change is at the end of the patch
> series and this patch at the beginning.
>
> I feel like this patch would be better at the end of the series, just
> before the doc change, when the hypervisor is ready for it.

Makes sense, I'll move it.
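[For readers following the thread: once the series lands, the new enumeration value is selected in the guest configuration. A hypothetical xl.cfg fragment, sketched from the patch description rather than the final documented syntax:]

```
# Allow this guest to use the Arm FF-A mediator
# ("ffa" is the tee_type value this patch adds)
tee = "ffa"
```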

>
> In any case:
> Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,
Jens

>
> Thanks,
>
> --
> Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 23 06:35:11 2023
Message-ID: <fc1aca4e-c333-957c-d0e2-bbb4edc7af72@suse.com>
Date: Tue, 23 May 2023 08:35:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 00/10] x86: support AVX512-FP16
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <8cbbab55-d670-5632-30ee-3e8ca352f048@suse.com>
 <d6aa8dee-49e2-1493-adb5-2adb474af067@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d6aa8dee-49e2-1493-adb5-2adb474af067@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 22.05.2023 18:25, Andrew Cooper wrote:
> On 03/04/2023 3:56 pm, Jan Beulich wrote:
>> While I (quite obviously) don't have any suitable hardware, Intel's
>> SDE allows testing the implementation. And since there's no new
>> state (registers) associated with this ISA extension, this should
>> suffice for integration.
> 
> I've given this a spin on a Sapphire Rapids system.
> 
> Relevant (AFAICT) bits of the log:
> 
> Testing vfpclasspsz $0x46,64(%edx),%k2...okay
> Testing vfpclassphz $0x46,128(%ecx),%k3...okay
> ...
> Testing avx512_fp16/all disp8 handling...okay
> Testing avx512_fp16/128 disp8 handling...okay
> ...
> Testing AVX512_FP16 f16 scal native execution...okay
> Testing AVX512_FP16 f16 scal 64-bit code sequence...okay
> Testing AVX512_FP16 f16 scal 32-bit code sequence...okay
> Testing AVX512_FP16 f16x32 native execution...okay
> Testing AVX512_FP16 f16x32 64-bit code sequence...okay
> Testing AVX512_FP16 f16x32 32-bit code sequence...okay
> Testing AVX512_FP16+VL f16x8 native execution...okay
> Testing AVX512_FP16+VL f16x8 64-bit code sequence...okay
> Testing AVX512_FP16+VL f16x8 32-bit code sequence...okay
> Testing AVX512_FP16+VL f16x16 native execution...okay
> Testing AVX512_FP16+VL f16x16 64-bit code sequence...okay
> Testing AVX512_FP16+VL f16x16 32-bit code sequence...okay
> 
> and it exits zero, so everything seems fine.
> 
> 
> One thing however, this series ups the minimum GCC version required to
> build the emulator at all:
> 
> make: Entering directory '/local/xen.git/tools/tests/x86_emulator'
> gcc: error: unrecognized command-line option ‘-mavx512fp16’; did you
> mean ‘-mavx512bf16’?
> Makefile:121: Test harness not built, use newer compiler than "gcc"
> (version 10) and an "{evex}" capable assembler
> 
> and I'm not sure we want to do this.  Upping the version of GCC while
> leaving binutils as-was does lead to a build of the harness without
> AVX512-FP16 active, which is the preferred behaviour here.

Well, this series on its own does, but I did notice the issue already.
Hence "x86emul: rework compiler probing in the test harness" [1].

Jan

[1] https://lists.xen.org/archives/html/xen-devel/2023-03/msg00123.html
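[The capability check under discussion can be sketched as a standalone probe: try compiling a trivial translation unit with the new flag and record the result. This is an illustration only; the harness's real probing lives in tools/tests/x86_emulator/Makefile and the rework in [1].]

```shell
# Probe whether the toolchain accepts -mavx512fp16; fall back gracefully
# instead of failing the whole build when it does not.
CC=${CC:-cc}
if command -v "$CC" >/dev/null 2>&1 &&
   echo 'int main(void){return 0;}' |
   "$CC" -mavx512fp16 -x c -o /dev/null - 2>/dev/null
then
    result=supported
else
    result=unsupported
fi
echo "avx512fp16: $result"
```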


From xen-devel-bounces@lists.xenproject.org Tue May 23 06:37:44 2023
Message-ID: <79859cfb-ab60-8661-e1ec-75fac74531b4@suse.com>
Date: Tue, 23 May 2023 08:37:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
 <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <13635377-e296-370d-121b-5b617dc210bc@suse.com>
 <AS8PR08MB7991DCE0DFC850FEA920BF8C92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <c195ef53-1151-1fb2-0cf9-f6f47d20b75e@suse.com>
 <AS8PR08MB79919CD7C233345113F8D5C992409@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB79919CD7C233345113F8D5C992409@AS8PR08MB7991.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.05.2023 08:31, Henry Wang wrote:
>> -----Original Message-----
>> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
>> tree NUMA distance map
>>
>>>>>>> +        /* The default value in node_distance_map is
>>>> NUMA_NO_DISTANCE
>>>>>> */
>>>>>>> +        if ( opposite == NUMA_NO_DISTANCE )
>>>>>>
>>>>>> And the matrix you're reading from can't hold NUMA_NO_DISTANCE
>>>> entries?
>>>>>> I ask because you don't check this above; you only check against
>>>>>> NUMA_LOCAL_DISTANCE.
>>>>>
>>>>> My understanding for the purpose of this part of code is to check if the
>>>> opposite
>>>>> way distance has already been set, so we need to compare the opposite
>>>> way
>>>>> distance with the default value NUMA_NO_DISTANCE here.
>>>>>
>>>>> Back to your question, I can see your point of the question. However I
>> don't
>>>> think
>>>>> NUMA_NO_DISTANCE is a valid value to describe the node distance in the
>>>> device
>>>>> tree. This is because I hunted down the previous discussions and found
>> [2]
>>>> about
>>>>> we should try to keep consistent between the value used in device tree
>> and
>>>> ACPI
>>>>> tables. From the ACPI spec, 0xFF, i.e. NUMA_NO_DISTANCE means
>>>> unreachable.
>>>>> I think this is also the reason why NUMA_NO_DISTANCE can be used as
>> the
>>>> default
>>>>> value of the distance map, otherwise we won't have any value to use.
>>>>
>>>> The [2] link you provided discusses NUMA_LOCAL_DISTANCE.
>>>
>>> I inferred the discussion as "we should try to keep consistent between the
>> value
>>> used in device tree and ACPI tables". Maybe my inference is wrong.
>>>
>>>> Looking at
>>>> Linux'es Documentation/devicetree/numa.txt, there's no mention of an
>>>> upper bound on the distance values. It only says that on the diagonal
>>>> entries should be 10 (i.e. matching ACPI, without really saying so).
>>>
>>> I agree that the NUMA device tree binding is a little bit vague. So I cannot
>>> say the case that you provided is not valid. I would like to ask Arm
>> maintainers
>>> (putting them into To:) opinion on this as I think I am not the one to decide
>> the
>>> expected behavior on Arm.
>>>
>>> Bertrand/Julien/Stefano: Would you please kindly share your opinion on
>> which
>>> value should be used as the default value of the node distance map? Do
>> you
>>> think reusing the "unreachable" distance, i.e. 0xFF, as the default node
>> distance
>>> is acceptable here? Thanks!
>>
>> My suggestion would be, rather than rejecting values >= 0xff, to saturate
>> at 0xfe, while keeping 0xff for NUMA_NO_DISTANCE (and overall keeping
>> things consistent with ACPI).
> 
> Since it has been a while and there was no feedback from the Arm
> maintainers, I would like to follow your suggestion for v5. However, while
> I was writing the code for the "saturation", i.e., adding the snippet below
> in numa_set_distance():
> ```
> if ( distance >= NUMA_NO_DISTANCE )
>     distance = NUMA_MAX_DISTANCE;
> ```
> I noticed an issue which needs your clarification:
> Currently, the default value of the distance map is NUMA_NO_DISTANCE,
> which indicates the nodes are not reachable. IMHO, if for device tree we
> saturate ACPI-invalid distances (distances >= 0xff) to 0xfe as above, we
> are making the unreachable nodes reachable. I am not sure whether this
> will lead to problems. Do you have any more thoughts? Thanks!
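For reference, the two clamping policies being weighed can be sketched as
below. This is only an illustrative sketch: the helper names are made up, and
the macro values are assumed to follow the ACPI convention (0xFF unreachable,
0xFE largest usable distance).

```c
#include <assert.h>

/* Assumed values, following ACPI SLIT conventions:
 * 0xFF means "unreachable", so 0xFE is the largest usable distance. */
#define NUMA_NO_DISTANCE  0xFF
#define NUMA_MAX_DISTANCE 0xFE

/* Policy A: saturate out-of-range values but preserve "unreachable". */
static unsigned int clamp_keep_unreachable(unsigned int distance)
{
    if ( distance != NUMA_NO_DISTANCE && distance > NUMA_MAX_DISTANCE )
        distance = NUMA_MAX_DISTANCE;
    return distance;
}

/* Policy B: saturate everything >= 0xFF, turning "unreachable" into the
 * largest reachable distance (the behavior questioned above). */
static unsigned int clamp_all(unsigned int distance)
{
    if ( distance >= NUMA_NO_DISTANCE )
        distance = NUMA_MAX_DISTANCE;
    return distance;
}
```

Policy A keeps the ACPI meaning of 0xFF intact; policy B is the literal
saturation, and is what raises the reachability concern.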

I can only answer this with a question back: How is "unreachable" represented
in DT? Or is "unreachable" simply expressed by the absence of any data?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 06:45:09 2023
Message-ID: <818859a6-883a-81c0-1222-84c62ada3200@suse.com>
Date: Tue, 23 May 2023 08:44:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: PVH Dom0 related UART failure
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 marmarek@invisiblethingslab.com, xenia.ragiadakou@amd.com,
 Roger Pau Monné <roger.pau@citrix.com>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
 <ZGcu7EWW1cuNjwDA@Air-de-Roger>
 <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
 <ZGig68UTddfEwR6P@Air-de-Roger>
 <alpine.DEB.2.22.394.2305221520350.1553709@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305221520350.1553709@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 23.05.2023 00:20, Stefano Stabellini wrote:
> On Sat, 20 May 2023, Roger Pau Monné wrote:
>> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
>> index ec2e978a4e6b..0ff8e940fa8d 100644
>> --- a/xen/drivers/vpci/header.c
>> +++ b/xen/drivers/vpci/header.c
>> @@ -289,6 +289,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>       */
>>      for_each_pdev ( pdev->domain, tmp )
>>      {
>> +        if ( !tmp->vpci )
>> +        {
>> +            printk(XENLOG_G_WARNING "%pp: not handled by vPCI for %pd\n",
>> +                   &tmp->sbdf, pdev->domain);
>> +            continue;
>> +        }
>> +
>>          if ( tmp == pdev )
>>          {
>>              /*
>> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
>> index 652807a4a454..0baef3a8d3a1 100644
>> --- a/xen/drivers/vpci/vpci.c
>> +++ b/xen/drivers/vpci/vpci.c
>> @@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
>>      unsigned int i;
>>      int rc = 0;
>>  
>> -    if ( !has_vpci(pdev->domain) )
>> +    if ( !has_vpci(pdev->domain) ||
>> +         /*
>> +          * Ignore RO and hidden devices, those are in use by Xen and vPCI
>> +          * won't work on them.
>> +          */
>> +         pci_get_pdev(dom_xen, pdev->sbdf) )
>>          return 0;
>>  
>>      /* We should not get here twice for the same device. */
> 
> 
> Now this patch works! Thank you!! :-)
> 
> You can check the full logs here
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4329259080
> 
> Is the patch ready to be upstreamed aside from the commit message?

I don't think so. vPCI ought to work on "r/o" devices. Out of curiosity,
have you also tried my (hackish and hence RFC) patch [1]?

Jan

[1] https://lists.xen.org/archives/html/xen-devel/2021-08/msg01489.html


From xen-devel-bounces@lists.xenproject.org Tue May 23 06:56:59 2023
Message-ID: <4644c0a3-24c6-caf9-fecb-ac868f0e6226@suse.com>
Date: Tue, 23 May 2023 08:56:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC] build: respect top-level .config also for out-of-tree
 hypervisor builds
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <beace0ce-e90b-eb79-4419-03de45ea7360@suse.com>
 <a08f3650-0c81-4873-ae10-f5200f8b7613@perard>
 <3c9f8fd2-b60b-5540-00be-87351fec656e@suse.com>
 <a87f9103-2262-4fc2-9598-7442074df71a@perard>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a87f9103-2262-4fc2-9598-7442074df71a@perard>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 22.05.2023 17:49, Anthony PERARD wrote:
> On Mon, May 08, 2023 at 08:23:43AM +0200, Jan Beulich wrote:
>> On 05.05.2023 18:08, Anthony PERARD wrote:
>>> On Wed, Mar 15, 2023 at 03:58:59PM +0100, Jan Beulich wrote:
>>>> With in-tree builds Config.mk includes a .config file (if present) from
>>>> the top-level directory. Similar functionality is wanted with out-of-
>>>> tree builds. Yet the concept of "top-level directory" becomes fuzzy in
>>>> that case, because there is not really a requirement to have identical
>>>> top-level directory structure in the output tree; in fact there's no
>>>> need for anything top-level-ish there. Look for such a .config, but only
>>>> if the tree layout matches (read: if the directory we're building in is
>>>> named "xen").
>>>
>>> Well, as long as the "xen/" part of the repository is the only build
>>> system able to build out-of-srctree, there isn't going to be a top-level
>>> .config in the build tree, as such a .config would be outside of the
>>> build tree. Reading outside of the build tree or source tree might be
>>> problematic.
>>>
>>> It's possible that some project might want to build just the
>>> hypervisor, happen to name the build tree "xen", and also use a
>>> ".config" for something else. Kconfig, for example, uses ".config"
>>> for other purposes, like we do to build Xen.
>>
>> Right, that's what my first RFC remark is about.
>>
>>> It might be better to have a different name instead of ".config", and
>>> to put it in the build tree rather than the parent directory. Maybe
>>> ".xenbuild-config"?
>>
>> Using a less ambiguous name is clearly an option. Putting the file in
>> the (Xen) build directory, otoh, imo isn't: In the hope that further
>> sub-trees would be enabled to build out-of-tree (qemu for instance in
>> principle can already, and I guess at least some of stubdom's sub-
>> packages also can), the present functionality of the top-level
>> .config (or whatever its new name) also controlling those builds would
>> better be retained.
> 
> I'm not sure what an out-of-tree build for the whole tree will look like,
> but it's probably going to be `/path/to/configure && make`. After that,
> Config.mk should know what kind of build it's doing, and probably only
> load ".config" from the build tree.

Except that the hypervisor build still isn't really connected to
./configure's results.

> In the meantime, it feels out of place
> for xen/Makefile or xen/Rules.mk to load a ".config" that would be
> present in the parent directory of the build dir.

Right, hence me looking for possible alternatives (and using this
approach only for the apparent lack thereof).

>>> I've been using the optional makefile named "xen-version" to adjust
>>> variable of the build system, with content like:
>>>
>>>     export XEN_TARGET_ARCH=arm64
>>>     export CROSS_COMPILE=aarch64-linux-gnu-
>>>
>>> so that I can have multiple build tree for different arch, and not have
>>> to do anything other than running make and do the expected build. Maybe
>>> that's not possible because for some reason, the build system still
>>> reconfigure some variable when entering a sub-directory, but that's a
>>> start.
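An optional per-tree settings file like the one above can be hooked in with a
guarded include; a minimal sketch (the file name follows the example above,
and nothing here is taken from Xen's actual build system):

```make
# Optionally pull in per-build-tree settings; "-include" (unlike
# "include") does not fail when the file is absent.
-include $(CURDIR)/xen-version
```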
>>
>> Hmm, interesting idea. I could (ab)use this at least in the short
>> term. Yet even then the file would further need including from
>> Rules.mk. Requiring all variables defined there to be exported isn't
>> a good idea, imo. Iirc not all make variables can usefully be
>> exported. For example, I have a local extension to CROSS_COMPILE in
>> place, which uses a variable with a dash in its name.
>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> RFC: The directory name based heuristic of course isn't nice. But I
>>>>      couldn't think of anything better. Suggestions?
>>>
>>> I can only think of looking at a file that is in the build tree rather
>>> than outside of it. A file named ".xenbuild-config", as proposed earlier,
>>> for example.
>>>
>>>> RFC: There also being a .config in the top-level source dir would be a
>>>>      little problematic: It would be included _after_ the one in the
>>>>      object tree. Yet if such a scenario is to be expected/supported at
>>>>      all, it makes more sense the other way around.
>>>
>>> Well, that would mean teaching Config.mk about the out-of-tree builds
>>> that parts of the repository are capable of. Better would be to stop
>>> trying to read ".config" from Config.mk, and to handle the file in the
>>> different sub-build systems.
>>
>> Hmm, is that a viable option? Or put differently: Wouldn't this mean doing
>> away with ./Config.mk altogether? Which in turn would mean no central
>> place anymore where XEN_TARGET_ARCH and friends could be overridden. (I
>> view this as a capability that we want to retain, if nothing else then for
>> the - see above - future option of allowing more than just xen/ to be
>> built out-of-tree.)
> 
> No, I'm not trying to get rid of Config.mk. There are a few things in it
> that I'd like to remove, but not everything. I don't know how to deal
> with ".config" at the moment, but I guess that if Config.mk knew about
> out-of-tree builds, it should probably only read one ".config": the one
> in the build tree.

And that (2nd) .config would then be placed where in that build tree,
according to what you're envisioning?

Just to mention it: Since a similar problem exists in Linux, for many
years I've been carrying private logic to record necessary overrides
in the Makefile that's generated into the build tree. But of course
that's hackery, i.e. doing just enough to fit my own needs. I'd like
to avoid having to carry similar hackery for Xen.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 07:15:57 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180900: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c7908869ac26961a3919491705e521179ad3fc0e
X-Osstest-Versions-That:
    xen=753d903a6f2d1e68d98487d36449b5739c28d65a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 07:15:40 +0000

flight 180900 xen-unstable real [real]
flight 180911 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180900/
http://logs.test-lab.xenproject.org/osstest/logs/180911/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail pass in 180911-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 180911 like 180884
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180884
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180884
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180884
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180884
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180884
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180884
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180884
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180884
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180884
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180884
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180884
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c7908869ac26961a3919491705e521179ad3fc0e
baseline version:
 xen                  753d903a6f2d1e68d98487d36449b5739c28d65a

Last test of basis   180884  2023-05-22 01:52:01 Z    1 days
Testing same since   180900  2023-05-22 18:07:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>
  Yann Dirson <yann.dirson@vates.fr>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   753d903a6f..c7908869ac  c7908869ac26961a3919491705e521179ad3fc0e -> master


From xen-devel-bounces@lists.xenproject.org Tue May 23 07:33:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 07:33:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538263.838098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MVx-0001l1-Qc; Tue, 23 May 2023 07:33:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538263.838098; Tue, 23 May 2023 07:33:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MVx-0001ku-M6; Tue, 23 May 2023 07:33:01 +0000
Received: by outflank-mailman (input) for mailman id 538263;
 Tue, 23 May 2023 07:33:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MIFg=BM=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q1MVw-0001ko-0I
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 07:33:00 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0603.outbound.protection.outlook.com
 [2a01:111:f400:fe02::603])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 091b3c29-f93c-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 09:32:56 +0200 (CEST)
Received: from DB6PR07CA0191.eurprd07.prod.outlook.com (2603:10a6:6:42::21) by
 DB9PR08MB7844.eurprd08.prod.outlook.com (2603:10a6:10:39f::13) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.28; Tue, 23 May 2023 07:32:53 +0000
Received: from DBAEUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:42:cafe::38) by DB6PR07CA0191.outlook.office365.com
 (2603:10a6:6:42::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.14 via Frontend
 Transport; Tue, 23 May 2023 07:32:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT018.mail.protection.outlook.com (100.127.142.74) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.14 via Frontend Transport; Tue, 23 May 2023 07:32:52 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Tue, 23 May 2023 07:32:52 +0000
Received: from 13639c6974b2.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3CB0692C-8997-4BDF-9F4C-512F26EAA047.1; 
 Tue, 23 May 2023 07:32:46 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 13639c6974b2.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 23 May 2023 07:32:46 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DBBPR08MB6123.eurprd08.prod.outlook.com (2603:10a6:10:20a::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 07:32:44 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%4]) with mapi id 15.20.6411.028; Tue, 23 May 2023
 07:32:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 091b3c29-f93c-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ueLP7IRQmccCYSodzUQhi55gYNLSPf8dyedW1AWxZfY=;
 b=qMpp8nqiXDz5sJ9+vSQwjlW3aEVDqHPUSDo4ifTwX1r6e+cjIUR9wJmo73CQQir+5kxu3BF5/JAJfAs4UtJiO/RKOGnM6q/ZOcwU5HcTVC8xluBCMatcC1pYOlMFeT0DUIXGFpScVwJENlQE4cXcSofLrthhgbSe7FQHFgxCBp0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OrsrULf2EZI7TtHOH3B9N5Wd4Xj/eT42GMZ3a2pQglsUuJvGMmHSdM4maIZeHTOYerl52nRfQQsJOxVLvMkUo3Tylb7SHcjaZGom90COQa4qFQOWLKEvopM7ylT1XWWbfcnbC+dRAfP9qxPYuB/r8LxPrjFmqKPS82IftWmfOEVK4V9jzMSiUVAzSKX+1UlNbGwuy4yugcVjQYxTYEo9C6WnODla53JXQU2idHk13NQ2G2nbDUQ0NSd6+YHPrtU+Cn7HKXXl1MOW75frKM5rkKIISfSMoImJ3pwUNjUZ41DuphqVk8ayZSRdmke7DigEf0q56Gti5rICHirrUaKx8A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ueLP7IRQmccCYSodzUQhi55gYNLSPf8dyedW1AWxZfY=;
 b=lzP0T0gqpapp850Itr6D0Vk0ZcUKxOXCneHiFpg9IiiUMIKLqYLQwZKAImvybk9ygEOjZB7lDIbKtlBUhvfAG6bLz8kkum2c2BUMuOdqSDMTczEq+gyY2CELHaoFU+W/qte8L+M2R5gBlkdxh4NrF8S4jdVTwFPJ4nLq2Ufo8YFJ8hSWoc/pERumhs91yIWMY/YOwekKas72pVDo0S+q6aPqVVbIT7ppHxu4ZN09Lpe3llzyH5LSoOAPiyebaMw1e9/ucEtj6gsfil1bGRhpM+yl752LTaASahKdrKLqmEptQcQW+jqvUGwpAXfIrhR/wddo5d5RbvzkDe25fIWOxw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ueLP7IRQmccCYSodzUQhi55gYNLSPf8dyedW1AWxZfY=;
 b=qMpp8nqiXDz5sJ9+vSQwjlW3aEVDqHPUSDo4ifTwX1r6e+cjIUR9wJmo73CQQir+5kxu3BF5/JAJfAs4UtJiO/RKOGnM6q/ZOcwU5HcTVC8xluBCMatcC1pYOlMFeT0DUIXGFpScVwJENlQE4cXcSofLrthhgbSe7FQHFgxCBp0=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Thread-Topic: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
 tree NUMA distance map
Thread-Index:
 AQHZd0ulMpqFICe/LUiDVq6+aDsW9687sveAgAFTkCCAACNagIAAAPgggAADbgCAKli08IAADIuAgAADKTA=
Date: Tue, 23 May 2023 07:32:43 +0000
Message-ID:
 <AS8PR08MB7991C727E0A90875055DA1D492409@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
 <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <13635377-e296-370d-121b-5b617dc210bc@suse.com>
 <AS8PR08MB7991DCE0DFC850FEA920BF8C92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <c195ef53-1151-1fb2-0cf9-f6f47d20b75e@suse.com>
 <AS8PR08MB79919CD7C233345113F8D5C992409@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <79859cfb-ab60-8661-e1ec-75fac74531b4@suse.com>
In-Reply-To: <79859cfb-ab60-8661-e1ec-75fac74531b4@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: D034F9FEEDFF1545A02644F6BDD11414.0
x-checkrecipientchecked: true
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DBBPR08MB6123:EE_|DBAEUR03FT018:EE_|DB9PR08MB7844:EE_
X-MS-Office365-Filtering-Correlation-Id: 46218956-1992-4988-d754-08db5b5feb07
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 HVYVoXM5tNUFVTO/vFnUv5sWHq7yss3KehxCxS+Gc3f/tErge9khm300nBcGEOmPSCGsVgJd90D3k6w8Ow1pIwet8T4rM7UZvRkNuli1+0RT43iRiCaK/VPUJc25s/MLFJgCFQ4aLH9PBh5rzxZP7DSyJj4oPgTAlzXPreYOLqAh9kauUDBLFKp7EGFRvasIRYHSQzKgdVX+CSzVClZNVEyUMWVSLwPWoczJVGprN+JFO0bS+SMXdEsVd0ed3m56G5VIA0DBqnXmjZmReFjxOoXFBBlAY294BJPdkBJDZ8mM2F3F1S4NNwPwjeQwf7wAI7QlGEjj8A8mYkZEbDmpb/r/1S1Fb4MgA3mwTr3LYayVhH5x8CpImv6EUo2I3gMQfOpC2gTvct6cNrQlItOjyzOdrI3f+wIXeQXl3deA5Dufp/7nZlzQIC52K+FUxw+RE8pmwgSAsKe4z/GMNL8QIa724t7XFwImLADYPuQTU3PU+Ea3/4f3DoiaXaiEJWOcHA9GY1w0wHQuU0+gXzhm6KQ5EebEhRo5pF+BkO7H1llG3C8QYIGc295k7Czt6AcQUcTy7N6REbW13a0BNGwAJEeFsQi9mE1AFF3yyMQif3Y8B1K6DyhQNUpyPty/dyyo
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(346002)(396003)(366004)(136003)(39860400002)(451199021)(38070700005)(122000001)(38100700002)(33656002)(86362001)(55016003)(9686003)(52536014)(7696005)(6506007)(478600001)(8936002)(8676002)(2906002)(186003)(54906003)(4326008)(6916009)(64756008)(316002)(26005)(66446008)(71200400001)(5660300002)(41300700001)(66476007)(76116006)(66556008)(66946007)(83380400001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6123
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0f32ab2d-2e58-4cac-adc0-08db5b5fe5ab
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	w00FU5YcyQYpao0j7QRtJVwS520WZ7FLwaer0EuNZD1Eltl7Nckyad0zH2lVbgEbtIB1zBus5+cBXnNH17TFPIqeb05QA//xJAOC82nPIFdYBURE/cInCmwUJjLC3qm6LMhqZCRnrI0B/U/H8pc7hQL8CJcWjyjEAN0DU3O6xzeBpZuddSpRfrjRRWP4Kkja3P298fYUUQl33yHZRE4OuHr3QqK1UHCQW0O4EOWEVwEsOpuAyYHhUhPINwcrgxiX8mIFEab6XbQUGvyphG4yOTqWBslrq5aFovo2Z+l1FQAqfpuf+OxX4mwre1s5XFAcvd1apBGc4uT2BlCHOZneTLsaUoLAj7VrY6TNCja0HmUaimTozLe0iZU7imdWroIYAhmvnMorgKVUOGcBs0msHfg95A3M4Nsotedj1a+3b3nscM0tDxXHV2HscuWtD4cgxyBuR8Mygm2OF00o1QWDILu/whvwMuoxUcatq+Nyk73ZuHGcSSXJ1q8kI3hQomflU5xteHbMQ3OtL12CM+Awjc7KgJbYjOKmZIc9n6RWmPCDMEbzyNngNNxSD23pxlEZ0RfWIgSX2X8sLuGj8Ri7l16EOXKIxIlXWlj6yWg+w0n3pw5Xk0oCzehv70HBd1xJXdSWDQxG0i4/ic0wZxM7mxF31x8TMnwqfqPAjgsKPkw2i4LeVueulPipCHSt36xeCmO3F6P5gFanBJTX+wsT8xXaBgfk6Dy9sDYIjlNPMm3fQ/F2+RAkIgIyR1wU7KlZ
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(376002)(39860400002)(451199021)(46966006)(40470700004)(36840700001)(5660300002)(52536014)(40460700003)(8676002)(6862004)(8936002)(47076005)(33656002)(36860700001)(83380400001)(336012)(186003)(356005)(81166007)(82740400003)(2906002)(86362001)(55016003)(82310400005)(40480700001)(6506007)(26005)(9686003)(316002)(70206006)(4326008)(70586007)(478600001)(41300700001)(54906003)(7696005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 07:32:52.6237
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 46218956-1992-4988-d754-08db5b5feb07
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7844

Hi Jan,

> -----Original Message-----
> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
> tree NUMA distance map
> 
> >>>> The [2] link you provided discusses NUMA_LOCAL_DISTANCE.
> >>>
> >>> I inferred the discussion as "we should try to keep consistent between the
> >> value
> >>> used in device tree and ACPI tables". Maybe my inference is wrong.
> >>>
> >>>> Looking at
> >>>> Linux'es Documentation/devicetree/numa.txt, there's no mention of an
> >>>> upper bound on the distance values. It only says that on the diagonal
> >>>> entries should be 10 (i.e. matching ACPI, without really saying so).
> >>>
> >>> I agree that the NUMA device tree binding is a little bit vague. So I cannot
> >>> say the case that you provided is not valid. I would like to ask Arm
> >> maintainers
> >>> (putting them into To:) opinion on this as I think I am not the one to
> decide
> >> the
> >>> expected behavior on Arm.
> >>>
> >>> Bertrand/Julien/Stefano: Would you please kindly share your opinion on
> >> which
> >>> value should be used as the default value of the node distance map? Do
> >> you
> >>> think reusing the "unreachable" distance, i.e. 0xFF, as the default node
> >> distance
> >>> is acceptable here? Thanks!
> >>
> >> My suggestion would be to, rather than rejecting values >= 0xff, to saturate
> >> at 0xfe, while keeping 0xff for NUMA_NO_DISTANCE (and overall keeping
> >> things
> >> consistent with ACPI).
> >
> > Since it has been a while and there were no feedback from Arm
> maintainers,
> > I would like to follow your suggestions for v5. However while I was writing
> the
> > code for the "saturation", i.e., adding below snippet in
> numa_set_distance():
> > ```
> > if ( distance > NUMA_NO_DISTANCE )
> >         distance = NUMA_MAX_DISTANCE;
> > ```
> > I noticed an issue which needs your clarification:
> > Currently, the default value of the distance map is NUMA_NO_DISTANCE,
> > which indicates the nodes are not reachable. IMHO, if in device tree, we do
> > saturations like above for ACPI invalid distances (distances >= 0xff), by
> saturating
> > the distance to 0xfe, we are making the unreachable nodes reachable. I
> am
> > not sure if this will lead to problems. Do you have any more thoughts?
> Thanks!
> 
> I can only answer this with a question back: How is "unreachable"
> represented
> in DT?

Exactly, that is also what I am trying to find but failed. My understanding
is that the spec of NUMA is defined in ACPI, and the DT NUMA binding
only specifies how users can use DT to represent the same set of ACPI data,
instead of defining another standard.

By looking into the existing implementation of NUMA for DT
in Linux, from drivers/base/arch_numa.c: numa_set_distance(), there is a
"if ((u8)distance != distance)" then return. So I think at least in Linux the
distance value is consistent with the ACPI spec.

> Or is "unreachable" simply expressed by the absence of any data?

Maybe I am wrong but I don't think so, as in the Linux implementation,
drivers/of/of_numa.c: of_numa_parse_distance_map_v1(), the for loop
"for (i = 0; i + 2 < entry_count; i += 3)" basically implies no fields should
be omitted in the distance map entry.

Kind regards,
Henry

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 07:43:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538267.838108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgF-0003Ia-OY; Tue, 23 May 2023 07:43:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538267.838108; Tue, 23 May 2023 07:43:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgF-0003IT-Ll; Tue, 23 May 2023 07:43:39 +0000
Received: by outflank-mailman (input) for mailman id 538267;
 Tue, 23 May 2023 07:43:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kqVG=BM=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q1MgE-0003IN-Eu
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 07:43:38 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 8680845a-f93d-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 09:43:36 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E8485139F;
 Tue, 23 May 2023 00:44:20 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9A0523F762;
 Tue, 23 May 2023 00:43:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8680845a-f93d-11ed-b22d-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH v7 00/12] SVE feature for arm guests
Date: Tue, 23 May 2023 08:43:14 +0100
Message-Id: <20230523074326.3035745-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series introduces the possibility for Dom0 and DomU guests to use
SVE/SVE2 instructions.

The SVE feature introduces new instructions and registers to improve the
performance of floating point operations.

The SVE feature is advertised via the SVE field of the ID_AA64PFR0_EL1
register; when available, the ID_AA64ZFR0_EL1 register provides additional
information about the implemented version and other SVE features.

New registers added by the SVE feature are Z0-Z31, P0-P15, FFR, ZCR_ELx.

Z0-Z31 are scalable vector registers whose size is implementation defined and
ranges from 128 bits up to a maximum of 2048 bits; the term "vector length"
will be used to refer to this quantity.
P0-P15 are predicate registers whose size is the vector length divided by 8;
the FFR (First Fault Register) has the same size.
ZCR_ELx is a register that can control and restrict the maximum vector length
used by exception level <x> and all lower exception levels, so for example
EL3 can restrict the vector length usable by EL3, EL2, EL1 and EL0.

The platform has a maximum implemented vector length, so for every value
written to the ZCR register, if that value is above the implemented length,
the lower implemented value is used instead. The RDVL instruction can be used
to check which vector length the HW is actually using after setting ZCR.

For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is no
need to save them separately: saving Z0-Z31 implicitly saves V0-V31 as well.

SVE usage can be trapped using a flag in CPTR_EL2, hence in this series the
register is added to the domain state, so that only the guests that are not
allowed to use SVE are trapped.

This series introduces a command line parameter to enable Dom0 to use SVE and
to set its maximum vector length, which by default is 0, meaning the guest is
not allowed to use SVE. Values from 128 to 2048 mean the guest can use SVE,
with the selected value used as the maximum allowed vector length (which could
be lower if the implemented one is lower).
For DomUs, an XL parameter used in the same way is introduced, and a dom0less
DTB binding is created.

The context switch is the most critical part because there can be large
registers to save; in this series a simple approach is used and the context
is saved/restored every time for the guests that are allowed to use SVE.

Luca Fancellu (12):
  xen/arm: enable SVE extension for Xen
  xen/arm: add SVE vector length field to the domain
  xen/arm: Expose SVE feature to the guest
  xen/arm: add SVE exception class handling
  arm/sve: save/restore SVE context switch
  xen/common: add dom0 xen command line argument for Arm
  xen: enable Dom0 to use SVE feature
  xen/physinfo: encode Arm SVE vector length in arch_capabilities
  tools: add physinfo arch_capabilities handling for Arm
  xen/tools: add sve parameter in XL configuration
  xen/arm: add sve property for dom0less domUs
  xen/changelog: Add SVE and "dom0" options to the changelog for Arm

 CHANGELOG.md                                  |   3 +
 SUPPORT.md                                    |   6 +
 docs/man/xl.cfg.5.pod.in                      |  16 ++
 docs/misc/arm/device-tree/booting.txt         |  16 ++
 docs/misc/xen-command-line.pandoc             |  20 +-
 tools/golang/xenlight/helpers.gen.go          |   4 +
 tools/golang/xenlight/types.gen.go            |  24 +++
 tools/include/libxl.h                         |  11 +
 .../include/xen-tools/arm-arch-capabilities.h |  28 +++
 tools/include/xen-tools/common-macros.h       |   2 +
 tools/libs/light/libxl.c                      |   1 +
 tools/libs/light/libxl_arm.c                  |  33 +++
 tools/libs/light/libxl_internal.h             |   1 -
 tools/libs/light/libxl_types.idl              |  23 +++
 tools/ocaml/libs/xc/xenctrl.ml                |   4 +-
 tools/ocaml/libs/xc/xenctrl.mli               |   4 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c           |   8 +-
 tools/python/xen/lowlevel/xc/xc.c             |   8 +-
 tools/xl/xl_info.c                            |   8 +
 tools/xl/xl_parse.c                           |   8 +
 xen/arch/arm/Kconfig                          |  10 +-
 xen/arch/arm/README.LinuxPrimitives           |  11 +
 xen/arch/arm/arm64/Makefile                   |   1 +
 xen/arch/arm/arm64/cpufeature.c               |   7 +-
 xen/arch/arm/arm64/domctl.c                   |   4 +
 xen/arch/arm/arm64/sve-asm.S                  | 195 ++++++++++++++++++
 xen/arch/arm/arm64/sve.c                      | 182 ++++++++++++++++
 xen/arch/arm/arm64/vfp.c                      |  79 ++++---
 xen/arch/arm/arm64/vsysreg.c                  |  41 +++-
 xen/arch/arm/cpufeature.c                     |   6 +-
 xen/arch/arm/domain.c                         |  55 ++++-
 xen/arch/arm/domain_build.c                   |  66 ++++++
 xen/arch/arm/include/asm/arm64/sve.h          |  72 +++++++
 xen/arch/arm/include/asm/arm64/sysregs.h      |   4 +
 xen/arch/arm/include/asm/arm64/vfp.h          |  12 ++
 xen/arch/arm/include/asm/cpufeature.h         |  14 ++
 xen/arch/arm/include/asm/domain.h             |   8 +
 xen/arch/arm/include/asm/processor.h          |   3 +
 xen/arch/arm/setup.c                          |   5 +-
 xen/arch/arm/sysctl.c                         |   4 +
 xen/arch/arm/traps.c                          |  36 +++-
 xen/arch/x86/dom0_build.c                     |  48 ++---
 xen/common/domain.c                           |  23 +++
 xen/common/kernel.c                           |  28 +++
 xen/include/public/arch-arm.h                 |   2 +
 xen/include/public/sysctl.h                   |   4 +
 xen/include/xen/domain.h                      |   1 +
 xen/include/xen/lib.h                         |  10 +
 48 files changed, 1052 insertions(+), 107 deletions(-)
 create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h
 create mode 100644 xen/arch/arm/arm64/sve-asm.S
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 07:43:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538269.838128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgI-0003nA-Dv; Tue, 23 May 2023 07:43:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538269.838128; Tue, 23 May 2023 07:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgI-0003n3-Ay; Tue, 23 May 2023 07:43:42 +0000
Received: by outflank-mailman (input) for mailman id 538269;
 Tue, 23 May 2023 07:43:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kqVG=BM=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q1MgH-0003Zx-FA
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 07:43:41 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 8734ffaa-f93d-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 09:43:38 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 26B57150C;
 Tue, 23 May 2023 00:44:22 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 252613F762;
 Tue, 23 May 2023 00:43:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8734ffaa-f93d-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v7 01/12] xen/arm: enable SVE extension for Xen
Date: Tue, 23 May 2023 08:43:15 +0100
Message-Id: <20230523074326.3035745-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Enable Xen to handle the SVE extension: add code in the cpufeature
module to handle the ZCR SVE register, and disable trapping of the
SVE feature on system boot only while SVE resources are accessed.
While there, correct the coding style of the comment on coprocessor
trapping.

cptr_el2 is now part of the domain context and will be restored on
context switch; this is a preparation for saving the SVE context,
which will be part of the VFP operations, so restore it before the
call that saves the VFP registers.
To save an additional isb barrier, restore cptr_el2 before an
existing isb barrier and move the call that saves the VFP context
after that barrier. To keep ctxt_switch_to() and ctxt_switch_from()
(mostly) symmetrical, move vfp_save_state() up in the function.

Change the Kconfig entry to make the ARM64_SVE symbol selectable; by
default it is not selected.

Create the sve module and sve-asm.S, which contains assembly routines
for the SVE feature. This code is inspired by Linux and uses
instruction encodings to be compatible with assemblers that do not
support SVE; the imported instructions are documented in
README.LinuxPrimitives.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v6:
 - modified licence, add emacs block, move vfp_save_state up in the
   function, add comments to CPTR_EL2 and vfp_restore_state, don't
   use variable in init_traps(), code style fixes,
   add entries to README.LinuxPrimitives (Julien)
 - vl_to_zcr is moved into sve.c module as changes to the series led
   to its usage only inside it, remove stub for compute_max_zcr and
   rely on compiler DCE.
Changes from v5:
 - Add R-by Bertrand
Changes from v4:
 - don't use fixed types in vl_to_zcr, forgot to address that in
   v3, by mistake I changed that in patch 2, fixing now (Jan)
Changes from v3:
 - no changes
Changes from v2:
 - renamed sve_asm.S to sve-asm.S, new files should not contain
   an underscore in the name (Jan)
Changes from v1:
 - Add assert to vl_to_zcr, it is never called with vl==0, but just
   to be sure it won't in the future.
Changes from RFC:
 - Moved restoring of cptr before an existing barrier (Julien)
 - Marked the feature as unsupported for now (Julien)
 - Trap and un-trap only when using SVE resources in
   compute_max_zcr() (Julien)
---
 xen/arch/arm/Kconfig                     | 10 ++--
 xen/arch/arm/README.LinuxPrimitives      |  9 ++++
 xen/arch/arm/arm64/Makefile              |  1 +
 xen/arch/arm/arm64/cpufeature.c          |  7 ++-
 xen/arch/arm/arm64/sve-asm.S             | 48 +++++++++++++++++++
 xen/arch/arm/arm64/sve.c                 | 59 ++++++++++++++++++++++++
 xen/arch/arm/cpufeature.c                |  6 ++-
 xen/arch/arm/domain.c                    | 20 +++++---
 xen/arch/arm/include/asm/arm64/sve.h     | 27 +++++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
 xen/arch/arm/include/asm/cpufeature.h    | 14 ++++++
 xen/arch/arm/include/asm/domain.h        |  1 +
 xen/arch/arm/include/asm/processor.h     |  2 +
 xen/arch/arm/setup.c                     |  5 +-
 xen/arch/arm/traps.c                     | 27 ++++++-----
 15 files changed, 210 insertions(+), 27 deletions(-)
 create mode 100644 xen/arch/arm/arm64/sve-asm.S
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c7f..41f45d8d1203 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -112,11 +112,15 @@ config ARM64_PTR_AUTH
 	  This feature is not supported in Xen.
 
 config ARM64_SVE
-	def_bool n
+	bool "Enable Scalable Vector Extension support (UNSUPPORTED)" if UNSUPPORTED
 	depends on ARM_64
 	help
-	  Scalar Vector Extension support.
-	  This feature is not supported in Xen.
+	  Scalable Vector Extension (SVE/SVE2) support for guests.
+
+	  Please be aware that currently, enabling this feature will add latency
+	  to the VM context switch between SVE-enabled guests, and when switching
+	  between a non-SVE guest and an SVE-enabled guest (and vice versa),
+	  compared to the time required to switch between non-SVE guests.
 
 config ARM64_MTE
 	def_bool n
diff --git a/xen/arch/arm/README.LinuxPrimitives b/xen/arch/arm/README.LinuxPrimitives
index 1d53e6a898da..76c8df29e416 100644
--- a/xen/arch/arm/README.LinuxPrimitives
+++ b/xen/arch/arm/README.LinuxPrimitives
@@ -62,6 +62,15 @@ done
 linux/arch/arm64/lib/clear_page.S       xen/arch/arm/arm64/lib/clear_page.S
 linux/arch/arm64/lib/copy_page.S        unused in Xen
 
+---------------------------------------------------------------------
+
+SVE assembly macro: last sync @ v6.3.0 (last commit: 457391b03803)
+
+linux/arch/arm64/include/asm/fpsimdmacros.h   xen/arch/arm/arm64/sve-asm.S
+
+The following macros were taken from Linux:
+    _check_general_reg, _check_num, _sve_rdvl
+
 =====================================================================
 arm32
 =====================================================================
diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 28481393e98f..54ad55c75cda 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -13,6 +13,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mm.o
 obj-y += smc.o
 obj-y += smpboot.o
+obj-$(CONFIG_ARM64_SVE) += sve.o sve-asm.o
 obj-y += traps.o
 obj-y += vfp.o
 obj-y += vsysreg.o
diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
index d9039d37b2d1..b4656ff4d80f 100644
--- a/xen/arch/arm/arm64/cpufeature.c
+++ b/xen/arch/arm/arm64/cpufeature.c
@@ -455,15 +455,11 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
 	ARM64_FTR_END,
 };
 
-#if 0
-/* TODO: use this to sanitize SVE once we support it */
-
 static const struct arm64_ftr_bits ftr_zcr[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
 		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
 	ARM64_FTR_END,
 };
-#endif
 
 /*
  * Common ftr bits for a 32bit register with all hidden, strict
@@ -603,6 +599,9 @@ void update_system_features(const struct cpuinfo_arm *new)
 
 	SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
 
+	if ( cpu_has_sve )
+		SANITIZE_REG(zcr64, 0, zcr);
+
 	/*
 	 * Comment from Linux:
 	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
new file mode 100644
index 000000000000..4d1549344733
--- /dev/null
+++ b/xen/arch/arm/arm64/sve-asm.S
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE assembly routines
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ *
+ * Some macros and instruction encoding in this file are taken from linux 6.1.1,
+ * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
+ * version.
+ */
+
+/* Sanity-check macros to help avoid encoding garbage instructions */
+
+.macro _check_general_reg nr
+    .if (\nr) < 0 || (\nr) > 30
+        .error "Bad register number \nr."
+    .endif
+.endm
+
+.macro _check_num n, min, max
+    .if (\n) < (\min) || (\n) > (\max)
+        .error "Number \n out of range [\min,\max]"
+    .endif
+.endm
+
+/* SVE instruction encodings for non-SVE-capable assemblers */
+/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
+
+/* RDVL X\nx, #\imm */
+.macro _sve_rdvl nx, imm
+    _check_general_reg \nx
+    _check_num (\imm), -0x20, 0x1f
+    .inst 0x04bf5000                \
+        | (\nx)                     \
+        | (((\imm) & 0x3f) << 5)
+.endm
+
+/* Gets the current vector register size in bytes */
+GLOBAL(sve_get_hw_vl)
+    _sve_rdvl 0, 1
+    ret
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
new file mode 100644
index 000000000000..e05ccc38a896
--- /dev/null
+++ b/xen/arch/arm/arm64/sve.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#include <xen/types.h>
+#include <asm/arm64/sve.h>
+#include <asm/arm64/sysregs.h>
+#include <asm/processor.h>
+#include <asm/system.h>
+
+extern unsigned int sve_get_hw_vl(void);
+
+/* Takes a vector length in bits and returns the ZCR_ELx encoding */
+static inline register_t vl_to_zcr(unsigned int vl)
+{
+    ASSERT(vl > 0);
+    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
+}
+
+register_t compute_max_zcr(void)
+{
+    register_t cptr_bits = get_default_cptr_flags();
+    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
+    unsigned int hw_vl;
+
+    /* Remove trap for SVE resources */
+    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
+    isb();
+
+    /*
+     * Set the maximum SVE vector length; after this write, sve_get_hw_vl()
+     * will report the VL actually supported by the platform
+     */
+    WRITE_SYSREG(zcr, ZCR_EL2);
+
+    /*
+     * Read the maximum VL, which could be lower than what we imposed before,
+     * hw_vl contains VL in bytes, multiply it by 8 to use vl_to_zcr() later
+     */
+    hw_vl = sve_get_hw_vl() * 8U;
+
+    /* Restore CPTR_EL2 */
+    WRITE_SYSREG(cptr_bits, CPTR_EL2);
+    isb();
+
+    return vl_to_zcr(hw_vl);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index c4ec38bb2554..83b84368f6d5 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -9,6 +9,7 @@
 #include <xen/init.h>
 #include <xen/smp.h>
 #include <xen/stop_machine.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
@@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
 
     c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
 
+    if ( cpu_has_sve )
+        c->zcr64.bits[0] = compute_max_zcr();
+
     c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
 
     c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
@@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
     guest_cpuinfo.pfr64.mpam = 0;
     guest_cpuinfo.pfr64.mpam_frac = 0;
 
-    /* Hide SVE as Xen does not support it */
+    /* Hide SVE from the guests by default */
     guest_cpuinfo.pfr64.sve = 0;
     guest_cpuinfo.zfr64.bits[0] = 0;
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index d8ef6501ff8e..d5ab15db46c4 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -95,6 +95,9 @@ static void ctxt_switch_from(struct vcpu *p)
     /* CP 15 */
     p->arch.csselr = READ_SYSREG(CSSELR_EL1);
 
+    /* VFP */
+    vfp_save_state(p);
+
     /* Control Registers */
     p->arch.cpacr = READ_SYSREG(CPACR_EL1);
 
@@ -155,9 +158,6 @@ static void ctxt_switch_from(struct vcpu *p)
 
     /* XXX MPU */
 
-    /* VFP */
-    vfp_save_state(p);
-
     /* VGIC */
     gic_save_state(p);
 
@@ -181,9 +181,6 @@ static void ctxt_switch_to(struct vcpu *n)
     /* VGIC */
     gic_restore_state(n);
 
-    /* VFP */
-    vfp_restore_state(n);
-
     /* XXX MPU */
 
     /* Fault Status */
@@ -256,8 +253,17 @@ static void ctxt_switch_to(struct vcpu *n)
     WRITE_CP32(n->arch.joscr, JOSCR);
     WRITE_CP32(n->arch.jmcr, JMCR);
 #endif
+
+    /*
+     * CPTR_EL2 needs to be written before calling vfp_restore_state, a
+     * synchronization instruction is expected after the write (isb)
+     */
+    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
     isb();
 
+    /* VFP - call vfp_restore_state after writing on CPTR_EL2 + isb */
+    vfp_restore_state(n);
+
     /* CP 15 */
     WRITE_SYSREG(n->arch.csselr, CSSELR_EL1);
 
@@ -548,6 +554,8 @@ int arch_vcpu_create(struct vcpu *v)
 
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
+    v->arch.cptr_el2 = get_default_cptr_flags();
+
     v->arch.hcr_el2 = get_default_hcr_flags();
 
     v->arch.mdcr_el2 = HDCR_TDRA | HDCR_TDOSA | HDCR_TDA;
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
new file mode 100644
index 000000000000..c0466243c7bc
--- /dev/null
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#ifndef _ARM_ARM64_SVE_H
+#define _ARM_ARM64_SVE_H
+
+#define SVE_VL_MAX_BITS 2048U
+
+/* Vector length must be multiple of 128 */
+#define SVE_VL_MULTIPLE_VAL 128U
+
+register_t compute_max_zcr(void);
+
+#endif /* _ARM_ARM64_SVE_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 463899951414..4cabb9eb4d5e 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -24,6 +24,7 @@
 #define ICH_EISR_EL2              S3_4_C12_C11_3
 #define ICH_ELSR_EL2              S3_4_C12_C11_5
 #define ICH_VMCR_EL2              S3_4_C12_C11_7
+#define ZCR_EL2                   S3_4_C1_C2_0
 
 #define __LR0_EL2(x)              S3_4_C12_C12_ ## x
 #define __LR8_EL2(x)              S3_4_C12_C13_ ## x
diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index c62cf6293fd6..03fe684b4d36 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -32,6 +32,12 @@
 #define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
+#ifdef CONFIG_ARM64_SVE
+#define cpu_has_sve       (boot_cpu_feature64(sve) == 1)
+#else
+#define cpu_has_sve       0
+#endif
+
 #ifdef CONFIG_ARM_32
 #define cpu_has_gicv3     (boot_cpu_feature32(gic) >= 1)
 #define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
@@ -323,6 +329,14 @@ struct cpuinfo_arm {
         };
     } isa64;
 
+    union {
+        register_t bits[1];
+        struct {
+            unsigned long len:4;
+            unsigned long __res0:60;
+        };
+    } zcr64;
+
     struct {
         register_t bits[1];
     } zfr64;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 2a51f0ca688e..e776ee704b7d 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -190,6 +190,7 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
 
diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index 54f253087718..bc683334125c 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -582,6 +582,8 @@ void do_trap_guest_serror(struct cpu_user_regs *regs);
 
 register_t get_default_hcr_flags(void);
 
+register_t get_default_cptr_flags(void);
+
 /*
  * Synchronize SError unless the feature is selected.
  * This is relying on the SErrors are currently unmasked.
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 6f9f4d8c8a15..4191a766767a 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -135,10 +135,11 @@ static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_sve ? " SVE" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index d40c331a4e9c..3393e10b52e6 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -93,6 +93,21 @@ register_t get_default_hcr_flags(void)
              HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
+register_t get_default_cptr_flags(void)
+{
+    /*
+     * Trap all coprocessor registers (0-13) except cp10 and
+     * cp11 for VFP.
+     *
+     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
+     *
+     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
+     * RES1, i.e. they would trap whether we did this write or not.
+     */
+    return  ((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
+             HCPTR_TTA | HCPTR_TAM);
+}
+
 static enum {
     SERRORS_DIVERSE,
     SERRORS_PANIC,
@@ -135,17 +150,7 @@ void init_traps(void)
     /* Trap CP15 c15 used for implementation defined registers */
     WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
 
-    /* Trap all coprocessor registers (0-13) except cp10 and
-     * cp11 for VFP.
-     *
-     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
-     *
-     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
-     * RES1, i.e. they would trap whether we did this write or not.
-     */
-    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
-                 HCPTR_TTA | HCPTR_TAM,
-                 CPTR_EL2);
+    WRITE_SYSREG(get_default_cptr_flags(), CPTR_EL2);
 
     /*
      * Configure HCR_EL2 with the bare minimum to run Xen until a guest
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:48 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v7 02/12] xen/arm: add SVE vector length field to the domain
Date: Tue, 23 May 2023 08:43:16 +0100
Message-Id: <20230523074326.3035745-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add an sve_vl field to the arch_domain and xen_arch_domainconfig
structs, so that a domain carries information about the SVE feature
and the number of SVE register bits allowed for it.

The sve_vl field holds the vector length in bits divided by 128;
this encoding uses less space in the structures.

The field is also used to allow or forbid a domain to use SVE:
a value of zero means the guest is not allowed to use the feature.

Check that the requested vector length is less than or equal to the
vector length supported by the platform, otherwise fail on domain
creation.

Check that only 64-bit domains have SVE enabled, otherwise fail.
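
The VL/128 encoding described above can be sketched as follows;
sve_encode_vl() is a hypothetical inverse helper added for
illustration, while the decode mirrors the sve_decode_vl() introduced
by this patch:

```c
#include <assert.h>

/* The domain configuration stores the vector length divided by 128, so
 * the maximum architectural VL of 2048 bits fits in a uint8_t field.
 * sve_encode_vl() is a hypothetical helper, not part of the patch. */
#define SVE_VL_MULTIPLE_VAL 128U

static unsigned int sve_encode_vl(unsigned int vl_bits)
{
    return vl_bits / SVE_VL_MULTIPLE_VAL;
}

static unsigned int sve_decode_vl(unsigned int sve_vl)
{
    return sve_vl * SVE_VL_MULTIPLE_VAL;
}
```

A value of zero round-trips to zero, matching the convention that
sve_vl == 0 means SVE is disabled for the domain.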

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v6:
 - Style fix, have is_sve_domain as static inline instead of macro
   (Julien)
Changes from v5:
 - Update commit message stating the interface ver. bump (Bertrand)
 - in struct arch_domain, protect sve_vl with CONFIG_ARM64_SVE,
   given the change, move also is_sve_domain() where it's protected
   inside sve.h and create a stub when the macro is not defined,
   protect the usage of sve_vl where needed.
   (Julien)
 - Add a check that stops domain creation for a 32-bit guest running
   on top of a 64-bit host with the sve parameter enabled, added in
   construct_domain() of domain_build.c and subarch_do_domctl of
   domctl.c. (Julien)
Changes from v4:
 - Return 0 in get_sys_vl_len() if sve is not supported, code style fix,
   removed else if since the conditions can't fallthrough, removed not
   needed condition checking for VL bits validity because it's already
   covered, so delete is_vl_valid() function. (Jan)
Changes from v3:
 - don't use fixed-width types when not needed; use the encoded value
   also in arch_domain, so rename sve_vl_bits to sve_vl. (Jan)
 - rename domainconfig_decode_vl to sve_decode_vl because it will now
   be used also to decode from arch_domain value
 - change sve_vl from uint16_t to uint8_t and move it after "type" field
   to optimize space.
Changes from v2:
 - rename field in xen_arch_domainconfig from "sve_vl_bits" to
   "sve_vl" and use the implicit padding after gic_version to
   store it, now this field is the VL/128. (Jan)
 - Created domainconfig_decode_vl() function to decode the sve_vl
   field and use it as plain bits value inside arch_domain.
 - Changed commit message reflecting the changes
Changes from v1:
 - no changes
Changes from RFC:
 - restore zcr_el2 in sve_restore_state, which will be introduced
   later in this series, so remove the zcr_el2 related code from this
   patch and move everything to the later patch (Julien)
 - add explicit padding into struct xen_arch_domainconfig (Julien)
 - Don't lower down the vector length, just fail to create the
   domain. (Julien)
---
 xen/arch/arm/arm64/domctl.c          |  4 ++++
 xen/arch/arm/arm64/sve.c             | 12 +++++++++++
 xen/arch/arm/domain.c                | 29 ++++++++++++++++++++++++++
 xen/arch/arm/domain_build.c          |  7 +++++++
 xen/arch/arm/include/asm/arm64/sve.h | 31 ++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/domain.h    |  5 +++++
 xen/include/public/arch-arm.h        |  2 ++
 7 files changed, 90 insertions(+)

diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
index 0de89b42c448..14fc622e9956 100644
--- a/xen/arch/arm/arm64/domctl.c
+++ b/xen/arch/arm/arm64/domctl.c
@@ -10,6 +10,7 @@
 #include <xen/sched.h>
 #include <xen/hypercall.h>
 #include <public/domctl.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 
 static long switch_mode(struct domain *d, enum domain_type type)
@@ -43,6 +44,9 @@ long subarch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         case 32:
             if ( !cpu_has_el1_32 )
                 return -EINVAL;
+            /* SVE is not supported for 32 bit domain */
+            if ( is_sve_domain(d) )
+                return -EINVAL;
             return switch_mode(d, DOMAIN_32BIT);
         case 64:
             return switch_mode(d, DOMAIN_64BIT);
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index e05ccc38a896..a9144e48ef6b 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -8,6 +8,7 @@
 #include <xen/types.h>
 #include <asm/arm64/sve.h>
 #include <asm/arm64/sysregs.h>
+#include <asm/cpufeature.h>
 #include <asm/processor.h>
 #include <asm/system.h>
 
@@ -49,6 +50,17 @@ register_t compute_max_zcr(void)
     return vl_to_zcr(hw_vl);
 }
 
+/* Get the system sanitized value for VL in bits */
+unsigned int get_sys_vl_len(void)
+{
+    if ( !cpu_has_sve )
+        return 0;
+
+    /* ZCR_ELx len field is ((len + 1) * 128) = vector bits length */
+    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
+            SVE_VL_MULTIPLE_VAL;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index d5ab15db46c4..6c22551b0ed2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -13,6 +13,7 @@
 #include <xen/wait.h>
 
 #include <asm/alternative.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
 #include <asm/current.h>
@@ -555,6 +556,8 @@ int arch_vcpu_create(struct vcpu *v)
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
     v->arch.cptr_el2 = get_default_cptr_flags();
+    if ( is_sve_domain(v->domain) )
+        v->arch.cptr_el2 &= ~HCPTR_CP(8);
 
     v->arch.hcr_el2 = get_default_hcr_flags();
 
@@ -599,6 +602,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     unsigned int max_vcpus;
     unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
     unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
+    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
 
     if ( (config->flags & ~flags_optional) != flags_required )
     {
@@ -607,6 +611,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    /* Check feature flags */
+    if ( sve_vl_bits > 0 )
+    {
+        unsigned int zcr_max_bits = get_sys_vl_len();
+
+        if ( !zcr_max_bits )
+        {
+            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
+            return -EINVAL;
+        }
+
+        if ( sve_vl_bits > zcr_max_bits )
+        {
+            dprintk(XENLOG_INFO,
+                    "Requested SVE vector length (%u) > supported length (%u)\n",
+                    sve_vl_bits, zcr_max_bits);
+            return -EINVAL;
+        }
+    }
+
     /* The P2M table must always be shared between the CPU and the IOMMU */
     if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
     {
@@ -749,6 +773,11 @@ int arch_domain_create(struct domain *d,
     if ( (rc = domain_vpci_init(d)) != 0 )
         goto fail;
 
+#ifdef CONFIG_ARM64_SVE
+    /* Copy the encoded vector length sve_vl from the domain configuration */
+    d->arch.sve_vl = config->arch.sve_vl;
+#endif
+
     return 0;
 
 fail:
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 71f307a572e9..9dd1ed5bce44 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -26,6 +26,7 @@
 #include <asm/platform.h>
 #include <asm/psci.h>
 #include <asm/setup.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 #include <asm/domain_build.h>
 #include <xen/event.h>
@@ -3670,6 +3671,12 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
         return -EINVAL;
     }
 
+    if ( is_sve_domain(d) && (kinfo->type == DOMAIN_32BIT) )
+    {
+        printk("SVE is not available for 32-bit domain\n");
+        return -EINVAL;
+    }
+
     if ( is_64bit_domain(d) )
         vcpu_switch_to_aarch64_mode(v);
 
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index c0466243c7bc..4b63412727fc 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -8,13 +8,44 @@
 #ifndef _ARM_ARM64_SVE_H
 #define _ARM_ARM64_SVE_H
 
+#include <xen/sched.h>
+
 #define SVE_VL_MAX_BITS 2048U
 
 /* Vector length must be multiple of 128 */
 #define SVE_VL_MULTIPLE_VAL 128U
 
+static inline unsigned int sve_decode_vl(unsigned int sve_vl)
+{
+    /* SVE vector length is stored as VL/128 in xen_arch_domainconfig */
+    return sve_vl * SVE_VL_MULTIPLE_VAL;
+}
+
 register_t compute_max_zcr(void);
 
+#ifdef CONFIG_ARM64_SVE
+
+static inline bool is_sve_domain(const struct domain *d)
+{
+    return d->arch.sve_vl > 0;
+}
+
+unsigned int get_sys_vl_len(void);
+
+#else /* !CONFIG_ARM64_SVE */
+
+static inline bool is_sve_domain(const struct domain *d)
+{
+    return false;
+}
+
+static inline unsigned int get_sys_vl_len(void)
+{
+    return 0;
+}
+
+#endif /* CONFIG_ARM64_SVE */
+
 #endif /* _ARM_ARM64_SVE_H */
 
 /*
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index e776ee704b7d..331da0f3bcc3 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -67,6 +67,11 @@ struct arch_domain
     enum domain_type type;
 #endif
 
+#ifdef CONFIG_ARM64_SVE
+    /* max SVE encoded vector length */
+    uint8_t sve_vl;
+#endif
+
     /* Virtual MMU */
     struct p2m_domain p2m;
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 1528ced5097a..38311f559581 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -300,6 +300,8 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
 struct xen_arch_domainconfig {
     /* IN/OUT */
     uint8_t gic_version;
+    /* IN - Contains SVE vector length divided by 128 */
+    uint8_t sve_vl;
     /* IN */
     uint16_t tee_type;
     /* IN */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:48 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v7 03/12] xen/arm: Expose SVE feature to the guest
Date: Tue, 23 May 2023 08:43:17 +0100
Message-Id: <20230523074326.3035745-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a guest is allowed to use SVE, expose the SVE features through
the identification registers.
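
The splice of the system SVE field into the guest view of
ID_AA64PFR0_EL1 can be sketched roughly as below; splice_sve_field()
and the sample register values are illustrative rather than the actual
Xen code, though the 4-bit field at ID_AA64PFR0_SVE_SHIFT (32) matches
the architecture:

```c
#include <assert.h>
#include <stdint.h>

/* Build a mask of bits h..l inclusive, as in Xen's GENMASK. */
#define GENMASK(h, l) (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define ID_AA64PFR0_SVE_SHIFT 32

/* Replace the guest's SVE field with the sanitized system value,
 * leaving every other ID field of the guest view untouched. */
static uint64_t splice_sve_field(uint64_t guest_reg, uint64_t sys_reg)
{
    uint64_t mask = GENMASK(ID_AA64PFR0_SVE_SHIFT + 3,
                            ID_AA64PFR0_SVE_SHIFT);
    uint64_t sysval = (sys_reg & mask) >> ID_AA64PFR0_SVE_SHIFT;

    return (guest_reg & ~mask) | ((sysval << ID_AA64PFR0_SVE_SHIFT) & mask);
}
```

For ID_AA64ZFR0_EL1 the patch instead exposes the whole sanitized
system value, since every field in that register describes SVE.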

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes from v6:
 - code style fix, add A-by Julien
Changes from v5:
 - given the move of is_sve_domain() in asm/arm64/sve.h, add the
   header to vsysreg.c
 - dropping Bertrand's R-by because of the change
Changes from v4:
 - no changes
Changes from v3:
 - no changes
Changes from v2:
 - no changes
Changes from v1:
 - No changes
Changes from RFC:
 - No changes
---
 xen/arch/arm/arm64/vsysreg.c | 41 ++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 758750983c11..fe31f7b3827f 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -18,6 +18,8 @@
 
 #include <xen/sched.h>
 
+#include <asm/arm64/cpufeature.h>
+#include <asm/arm64/sve.h>
 #include <asm/current.h>
 #include <asm/regs.h>
 #include <asm/traps.h>
@@ -295,7 +297,28 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
     GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
     GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
-    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
+
+    case HSR_SYSREG_ID_AA64PFR0_EL1:
+    {
+        register_t guest_reg_value = guest_cpuinfo.pfr64.bits[0];
+
+        if ( is_sve_domain(v->domain) )
+        {
+            /* 4 is the SVE field width in id_aa64pfr0_el1 */
+            uint64_t mask = GENMASK(ID_AA64PFR0_SVE_SHIFT + 4 - 1,
+                                    ID_AA64PFR0_SVE_SHIFT);
+            /* sysval is the sve field on the system */
+            uint64_t sysval = cpuid_feature_extract_unsigned_field_width(
+                                system_cpuinfo.pfr64.bits[0],
+                                ID_AA64PFR0_SVE_SHIFT, 4);
+            guest_reg_value &= ~mask;
+            guest_reg_value |= (sysval << ID_AA64PFR0_SVE_SHIFT) & mask;
+        }
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
+
     GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
     GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
     GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
@@ -306,7 +329,21 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
     GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
     GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
-    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
+
+    case HSR_SYSREG_ID_AA64ZFR0_EL1:
+    {
+        /*
+         * When the guest has the SVE feature enabled, the whole id_aa64zfr0_el1
+         * needs to be exposed.
+         */
+        register_t guest_reg_value = guest_cpuinfo.zfr64.bits[0];
+
+        if ( is_sve_domain(v->domain) )
+            guest_reg_value = system_cpuinfo.zfr64.bits[0];
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
 
     /*
      * Those cases are catching all Reserved registers trapped by TID3 which
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:48 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v7 05/12] arm/sve: save/restore SVE context switch
Date: Tue, 23 May 2023 08:43:19 +0100
Message-Id: <20230523074326.3035745-6-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Save/restore the SVE context on context switch. Allocate memory to
hold the Z0-31 registers, whose length is at most 2048 bits each, and
FFR, which can be at most 256 bits; the amount of memory allocated
depends on the vector length configured for the domain and on how many
bits the platform supports.

Save P0-15, whose length is at most 256 bits each, into the fpregs
field of struct vfp_state: V0-31 are part of Z0-31, so that space
would otherwise be unused for an SVE domain.

Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
creation from the requested vector length and restore it on context
switch; save/restore the ZCR_EL1 value as well.

List import macros from Linux in README.LinuxPrimitives.
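
As a rough sketch of the sizing described above (an assumed layout,
not the exact allocation in sve.c): with a vector length of vl bits,
each of the 32 Z registers needs vl/8 bytes and FFR, like a predicate
register, needs one bit per vector byte, i.e. vl/64 bytes:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sizing helper: 32 Z registers of vl/8 bytes each, plus
 * FFR at one predicate bit per vector byte (vl/64 bytes).  The real
 * allocation in sve.c may round or lay the buffer out differently. */
static size_t sve_zreg_ctx_size(unsigned int vl_bits)
{
    size_t zreg_bytes = vl_bits / 8;   /* one Z register */
    size_t ffr_bytes  = vl_bits / 64;  /* FFR is vl/8 bits wide */

    return 32 * zreg_bytes + ffr_bytes;
}
```

At the architectural maximum of 2048 bits this is 8 KiB for Z0-31 plus
32 bytes for FFR; P0-15 live in the existing fpregs space as the
commit message explains.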

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v6:
 - Add a comment explaining why sve_save/sve_load are different from
   Linux, add macros in xen/arch/arm/README.LinuxPrimitives (Julien)
 - Add comments in sve_context_init and sve_context_free, handle the
   case where sve_zreg_ctx_end is NULL, move setting of v->arch.zcr_el2
   in sve_context_init (Julien)
 - remove stubs for sve_context_* and sve_save_* and rely on compiler
   DCE (Jan)
 - Add comments for sve_save_ctx/sve_load_ctx (Julien)
Changes from v5:
 - use XFREE instead of xfree, keep the headers (Julien)
 - Avoid math computation for every save/restore, store the computation
   in struct vfp_state once (Bertrand)
 - protect access to v->domain->arch.sve_vl inside arch_vcpu_create now
   that sve_vl is available only on arm64
Changes from v4:
 - No changes
Changes from v3:
 - don't use fixed len types when not needed (Jan)
 - now VL is an encoded value, decode it before using.
Changes from v2:
 - No changes
Changes from v1:
 - No changes
Changes from RFC:
 - Moved zcr_el2 field introduction in this patch, restore its
   content inside sve_restore_state function. (Julien)

---
 xen/arch/arm/README.LinuxPrimitives      |   4 +-
 xen/arch/arm/arm64/sve-asm.S             | 147 +++++++++++++++++++++++
 xen/arch/arm/arm64/sve.c                 |  91 ++++++++++++++
 xen/arch/arm/arm64/vfp.c                 |  79 ++++++------
 xen/arch/arm/domain.c                    |   6 +
 xen/arch/arm/include/asm/arm64/sve.h     |   4 +
 xen/arch/arm/include/asm/arm64/sysregs.h |   3 +
 xen/arch/arm/include/asm/arm64/vfp.h     |  12 ++
 xen/arch/arm/include/asm/domain.h        |   2 +
 9 files changed, 313 insertions(+), 35 deletions(-)

diff --git a/xen/arch/arm/README.LinuxPrimitives b/xen/arch/arm/README.LinuxPrimitives
index 76c8df29e416..301c0271bbe4 100644
--- a/xen/arch/arm/README.LinuxPrimitives
+++ b/xen/arch/arm/README.LinuxPrimitives
@@ -69,7 +69,9 @@ SVE assembly macro: last sync @ v6.3.0 (last commit: 457391b03803)
 linux/arch/arm64/include/asm/fpsimdmacros.h   xen/arch/arm/include/asm/arm64/sve-asm.S
 
 The following macros were taken from Linux:
-    _check_general_reg, _check_num, _sve_rdvl
+    _check_general_reg, _check_num, _sve_rdvl, __for, _for, _sve_check_zreg,
+    _sve_check_preg, _sve_str_v, _sve_ldr_v, _sve_str_p, _sve_ldr_p, _sve_rdffr,
+    _sve_wrffr
 
 =====================================================================
 arm32
diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
index 4d1549344733..59dbefbbb252 100644
--- a/xen/arch/arm/arm64/sve-asm.S
+++ b/xen/arch/arm/arm64/sve-asm.S
@@ -17,6 +17,18 @@
     .endif
 .endm
 
+.macro _sve_check_zreg znr
+    .if (\znr) < 0 || (\znr) > 31
+        .error "Bad Scalable Vector Extension vector register number \znr."
+    .endif
+.endm
+
+.macro _sve_check_preg pnr
+    .if (\pnr) < 0 || (\pnr) > 15
+        .error "Bad Scalable Vector Extension predicate register number \pnr."
+    .endif
+.endm
+
 .macro _check_num n, min, max
     .if (\n) < (\min) || (\n) > (\max)
         .error "Number \n out of range [\min,\max]"
@@ -26,6 +38,54 @@
 /* SVE instruction encodings for non-SVE-capable assemblers */
 /* (pre binutils 2.28, all kernel capable clang versions support SVE) */
 
+/* STR (vector): STR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (vector): LDR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* STR (predicate): STR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (predicate): LDR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
 /* RDVL X\nx, #\imm */
 .macro _sve_rdvl nx, imm
     _check_general_reg \nx
@@ -35,11 +95,98 @@
         | (((\imm) & 0x3f) << 5)
 .endm
 
+/* RDFFR (unpredicated): RDFFR P\np.B */
+.macro _sve_rdffr np
+    _sve_check_preg \np
+    .inst 0x2519f000                \
+        | (\np)
+.endm
+
+/* WRFFR P\np.B */
+.macro _sve_wrffr np
+    _sve_check_preg \np
+    .inst 0x25289000                \
+        | ((\np) << 5)
+.endm
+
+.macro __for from:req, to:req
+    .if (\from) == (\to)
+        _for__body %\from
+    .else
+        __for %\from, %((\from) + ((\to) - (\from)) / 2)
+        __for %((\from) + ((\to) - (\from)) / 2 + 1), %\to
+    .endif
+.endm
+
+.macro _for var:req, from:req, to:req, insn:vararg
+    .macro _for__body \var:req
+        .noaltmacro
+        \insn
+        .altmacro
+    .endm
+
+    .altmacro
+    __for \from, \to
+    .noaltmacro
+
+    .purgem _for__body
+.endm
+
+/*
+ * sve_save and sve_load differ from the Linux versions because Xen uses
+ * different buffers to save the context; for example, Linux uses these
+ * macros to also save/restore fpsr and fpcr, while Xen does that in C.
+ */
+
+.macro sve_save nxzffrctx, nxpctx, save_ffr
+    _for n, 0, 31, _sve_str_v \n, \nxzffrctx, \n - 32
+    _for n, 0, 15, _sve_str_p \n, \nxpctx, \n
+        cbz \save_ffr, 1f
+        _sve_rdffr 0
+        _sve_str_p 0, \nxzffrctx
+        _sve_ldr_p 0, \nxpctx
+        b 2f
+1:
+        str xzr, [x\nxzffrctx]      // Zero out FFR
+2:
+.endm
+
+.macro sve_load nxzffrctx, nxpctx, restore_ffr
+    _for n, 0, 31, _sve_ldr_v \n, \nxzffrctx, \n - 32
+        cbz \restore_ffr, 1f
+        _sve_ldr_p 0, \nxzffrctx
+        _sve_wrffr 0
+1:
+    _for n, 0, 15, _sve_ldr_p \n, \nxpctx, \n
+.endm
+
 /* Gets the current vector register size in bytes */
 GLOBAL(sve_get_hw_vl)
     _sve_rdvl 0, 1
     ret
 
+/*
+ * Save the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Save FFR if non-zero
+ */
+GLOBAL(sve_save_ctx)
+    sve_save 0, 1, x2
+    ret
+
+/*
+ * Load the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Restore FFR if non-zero
+ */
+GLOBAL(sve_load_ctx)
+    sve_load 0, 1, x2
+    ret
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index a9144e48ef6b..84a6dedc1fd7 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -5,6 +5,7 @@
  * Copyright (C) 2022 ARM Ltd.
  */
 
+#include <xen/sizes.h>
 #include <xen/types.h>
 #include <asm/arm64/sve.h>
 #include <asm/arm64/sysregs.h>
@@ -14,6 +15,25 @@
 
 extern unsigned int sve_get_hw_vl(void);
 
+/*
+ * Save the SVE context
+ *
+ * sve_ctx - pointer to buffer for Z0-31 + FFR
+ * pregs - pointer to buffer for P0-15
+ * save_ffr - Save FFR if non-zero
+ */
+extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
+
+/*
+ * Load the SVE context
+ *
+ * sve_ctx - pointer to buffer for Z0-31 + FFR
+ * pregs - pointer to buffer for P0-15
+ * restore_ffr - Restore FFR if non-zero
+ */
+extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
+                         int restore_ffr);
+
 /* Takes a vector length in bits and returns the ZCR_ELx encoding */
 static inline register_t vl_to_zcr(unsigned int vl)
 {
@@ -21,6 +41,21 @@ static inline register_t vl_to_zcr(unsigned int vl)
     return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
 }
 
+static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
+{
+    /*
+     * The size in bytes of the Z0-Z31 registers is computed from VL, which is
+     * in bits, so each register is VL/8 bytes and there are 32 of them.
+     */
+    return (vl / 8U) * 32U;
+}
+
+static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
+{
+    /* The FFR register is VL/8 bits wide, i.e. (VL/8)/8 = VL/64 bytes */
+    return (vl / 64U);
+}
+
 register_t compute_max_zcr(void)
 {
     register_t cptr_bits = get_default_cptr_flags();
@@ -61,6 +96,62 @@ unsigned int get_sys_vl_len(void)
             SVE_VL_MULTIPLE_VAL;
 }
 
+int sve_context_init(struct vcpu *v)
+{
+    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
+                             sve_ffrreg_ctx_size(sve_vl_bits),
+                             L1_CACHE_BYTES);
+
+    if ( !ctx )
+        return -ENOMEM;
+
+    /*
+     * Point to the end of Z0-Z31 memory, just before FFR memory, to be kept in
+     * sync with sve_context_free()
+     */
+    v->arch.vfp.sve_zreg_ctx_end = ctx +
+        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
+
+    v->arch.zcr_el2 = vl_to_zcr(sve_vl_bits);
+
+    return 0;
+}
+
+void sve_context_free(struct vcpu *v)
+{
+    unsigned int sve_vl_bits;
+
+    if ( !v->arch.vfp.sve_zreg_ctx_end )
+        return;
+
+    sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+
+    /*
+     * Point to the end of Z0-Z31 memory, just before FFR memory, to be kept
+     * in sync with sve_context_init()
+     */
+    v->arch.vfp.sve_zreg_ctx_end -=
+        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
+
+    XFREE(v->arch.vfp.sve_zreg_ctx_end);
+}
+
+void sve_save_state(struct vcpu *v)
+{
+    v->arch.zcr_el1 = READ_SYSREG(ZCR_EL1);
+
+    sve_save_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
+}
+
+void sve_restore_state(struct vcpu *v)
+{
+    WRITE_SYSREG(v->arch.zcr_el1, ZCR_EL1);
+    WRITE_SYSREG(v->arch.zcr_el2, ZCR_EL2);
+
+    sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index 47885e76baae..2d0d7c2e6ddb 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -2,29 +2,35 @@
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
 #include <asm/vfp.h>
+#include <asm/arm64/sve.h>
 
 void vfp_save_state(struct vcpu *v)
 {
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
-                 "stp q2, q3, [%1, #16 * 2]\n\t"
-                 "stp q4, q5, [%1, #16 * 4]\n\t"
-                 "stp q6, q7, [%1, #16 * 6]\n\t"
-                 "stp q8, q9, [%1, #16 * 8]\n\t"
-                 "stp q10, q11, [%1, #16 * 10]\n\t"
-                 "stp q12, q13, [%1, #16 * 12]\n\t"
-                 "stp q14, q15, [%1, #16 * 14]\n\t"
-                 "stp q16, q17, [%1, #16 * 16]\n\t"
-                 "stp q18, q19, [%1, #16 * 18]\n\t"
-                 "stp q20, q21, [%1, #16 * 20]\n\t"
-                 "stp q22, q23, [%1, #16 * 22]\n\t"
-                 "stp q24, q25, [%1, #16 * 24]\n\t"
-                 "stp q26, q27, [%1, #16 * 26]\n\t"
-                 "stp q28, q29, [%1, #16 * 28]\n\t"
-                 "stp q30, q31, [%1, #16 * 30]\n\t"
-                 : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_save_state(v);
+    else
+    {
+        asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
+                     "stp q2, q3, [%1, #16 * 2]\n\t"
+                     "stp q4, q5, [%1, #16 * 4]\n\t"
+                     "stp q6, q7, [%1, #16 * 6]\n\t"
+                     "stp q8, q9, [%1, #16 * 8]\n\t"
+                     "stp q10, q11, [%1, #16 * 10]\n\t"
+                     "stp q12, q13, [%1, #16 * 12]\n\t"
+                     "stp q14, q15, [%1, #16 * 14]\n\t"
+                     "stp q16, q17, [%1, #16 * 16]\n\t"
+                     "stp q18, q19, [%1, #16 * 18]\n\t"
+                     "stp q20, q21, [%1, #16 * 20]\n\t"
+                     "stp q22, q23, [%1, #16 * 22]\n\t"
+                     "stp q24, q25, [%1, #16 * 24]\n\t"
+                     "stp q26, q27, [%1, #16 * 26]\n\t"
+                     "stp q28, q29, [%1, #16 * 28]\n\t"
+                     "stp q30, q31, [%1, #16 * 30]\n\t"
+                     : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    }
 
     v->arch.vfp.fpsr = READ_SYSREG(FPSR);
     v->arch.vfp.fpcr = READ_SYSREG(FPCR);
@@ -37,23 +43,28 @@ void vfp_restore_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
-                 "ldp q2, q3, [%1, #16 * 2]\n\t"
-                 "ldp q4, q5, [%1, #16 * 4]\n\t"
-                 "ldp q6, q7, [%1, #16 * 6]\n\t"
-                 "ldp q8, q9, [%1, #16 * 8]\n\t"
-                 "ldp q10, q11, [%1, #16 * 10]\n\t"
-                 "ldp q12, q13, [%1, #16 * 12]\n\t"
-                 "ldp q14, q15, [%1, #16 * 14]\n\t"
-                 "ldp q16, q17, [%1, #16 * 16]\n\t"
-                 "ldp q18, q19, [%1, #16 * 18]\n\t"
-                 "ldp q20, q21, [%1, #16 * 20]\n\t"
-                 "ldp q22, q23, [%1, #16 * 22]\n\t"
-                 "ldp q24, q25, [%1, #16 * 24]\n\t"
-                 "ldp q26, q27, [%1, #16 * 26]\n\t"
-                 "ldp q28, q29, [%1, #16 * 28]\n\t"
-                 "ldp q30, q31, [%1, #16 * 30]\n\t"
-                 : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_restore_state(v);
+    else
+    {
+        asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
+                     "ldp q2, q3, [%1, #16 * 2]\n\t"
+                     "ldp q4, q5, [%1, #16 * 4]\n\t"
+                     "ldp q6, q7, [%1, #16 * 6]\n\t"
+                     "ldp q8, q9, [%1, #16 * 8]\n\t"
+                     "ldp q10, q11, [%1, #16 * 10]\n\t"
+                     "ldp q12, q13, [%1, #16 * 12]\n\t"
+                     "ldp q14, q15, [%1, #16 * 14]\n\t"
+                     "ldp q16, q17, [%1, #16 * 16]\n\t"
+                     "ldp q18, q19, [%1, #16 * 18]\n\t"
+                     "ldp q20, q21, [%1, #16 * 20]\n\t"
+                     "ldp q22, q23, [%1, #16 * 22]\n\t"
+                     "ldp q24, q25, [%1, #16 * 24]\n\t"
+                     "ldp q26, q27, [%1, #16 * 26]\n\t"
+                     "ldp q28, q29, [%1, #16 * 28]\n\t"
+                     "ldp q30, q31, [%1, #16 * 30]\n\t"
+                     : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    }
 
     WRITE_SYSREG(v->arch.vfp.fpsr, FPSR);
     WRITE_SYSREG(v->arch.vfp.fpcr, FPCR);
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 6c22551b0ed2..add9929b7943 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -557,7 +557,11 @@ int arch_vcpu_create(struct vcpu *v)
 
     v->arch.cptr_el2 = get_default_cptr_flags();
     if ( is_sve_domain(v->domain) )
+    {
+        if ( (rc = sve_context_init(v)) != 0 )
+            goto fail;
         v->arch.cptr_el2 &= ~HCPTR_CP(8);
+    }
 
     v->arch.hcr_el2 = get_default_hcr_flags();
 
@@ -587,6 +591,8 @@ fail:
 
 void arch_vcpu_destroy(struct vcpu *v)
 {
+    if ( is_sve_domain(v->domain) )
+        sve_context_free(v);
     vcpu_timer_destroy(v);
     vcpu_vgic_free(v);
     free_xenheap_pages(v->arch.stack, STACK_ORDER);
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 4b63412727fc..65b46685d263 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -22,6 +22,10 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
 }
 
 register_t compute_max_zcr(void);
+int sve_context_init(struct vcpu *v);
+void sve_context_free(struct vcpu *v);
+void sve_save_state(struct vcpu *v);
+void sve_restore_state(struct vcpu *v);
 
 #ifdef CONFIG_ARM64_SVE
 
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 4cabb9eb4d5e..3fdeb9d8cdef 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -88,6 +88,9 @@
 #ifndef ID_AA64ISAR2_EL1
 #define ID_AA64ISAR2_EL1            S3_0_C0_C6_2
 #endif
+#ifndef ZCR_EL1
+#define ZCR_EL1                     S3_0_C1_C2_0
+#endif
 
 /* ID registers (imported from arm64/include/asm/sysreg.h in Linux) */
 
diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
index e6e8c363bc16..4aa371e85d26 100644
--- a/xen/arch/arm/include/asm/arm64/vfp.h
+++ b/xen/arch/arm/include/asm/arm64/vfp.h
@@ -6,7 +6,19 @@
 
 struct vfp_state
 {
+    /*
+     * When SVE is enabled for the guest, fpregs memory will be used to
+     * save/restore P0-P15 registers, otherwise it will be used for the V0-V31
+     * registers.
+     */
     uint64_t fpregs[64] __vfp_aligned;
+    /*
+     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
+     * where Z0-Z31 registers and FFR can be saved/restored, it points at the
+     * end of the Z0-Z31 space and at the beginning of the FFR space, it's done
+     * like that to ease the save/restore assembly operations.
+     */
+    uint64_t *sve_zreg_ctx_end;
     register_t fpcr;
     register_t fpexc32_el2;
     register_t fpsr;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 331da0f3bcc3..814652d92568 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -195,6 +195,8 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t zcr_el1;
+    register_t zcr_el2;
     register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:48 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v7 06/12] xen/common: add dom0 xen command line argument for Arm
Date: Tue, 23 May 2023 08:43:20 +0100
Message-Id: <20230523074326.3035745-7-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently x86 defines a Xen command line argument dom0=<list> in which
dom0 controlling sub-options can be specified. To use it also on Arm,
move the code that loops through the list of arguments from x86 to the
common code and, from there, call architecture specific functions to
handle the comma separated sub-options.

No functional changes are intended.
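
The loop structure being moved to common code can be sketched as a small
standalone C snippet; the demo_* names below are hypothetical stand-ins
(demo_arch_hook plays the role of parse_arch_dom0_param), not the Xen code
itself:

```c
#include <string.h>

/* Stand-in for parse_arch_dom0_param(): accepts only "pv" and "pvh". */
static int demo_arch_hook(const char *s, const char *e)
{
    size_t len = (size_t)(e - s);

    if ( len == 2 && !strncmp(s, "pv", 2) )
        return 0;
    if ( len == 3 && !strncmp(s, "pvh", 3) )
        return 0;
    return -22; /* -EINVAL */
}

/*
 * Mirrors the shape of the new common parse_dom0_param(): walk a comma
 * separated option string, hand each [s, ss) sub-option to the arch hook,
 * and report the first error while still visiting every entry.
 */
static int demo_parse_dom0(const char *s)
{
    const char *ss;
    int rc = 0;

    do {
        int ret;

        ss = strchr(s, ',');
        if ( !ss )
            ss = strchr(s, '\0');

        ret = demo_arch_hook(s, ss);
        if ( ret && !rc )
            rc = ret; /* remember only the first failure */

        s = ss + 1;
    } while ( *ss );

    return rc;
}
```

Note that unknown sub-options do not abort the walk: later entries are still
parsed, and only the first error code is propagated, matching the behaviour
of the original x86 loop.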

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v6:
 - no changes
Changes from v5:
 - Add Bertrand R-by
Changes from v4:
 - return EINVAL in Arm implementation of parse_arch_dom0_param,
   shorten variable name in the function from str_begin, str_end to
   s, e. Removed variable rc from x86 parse_arch_dom0_param
   implementation. (Jan)
 - Add R-By Jan
Changes from v3:
 - new patch
---
 xen/arch/arm/domain_build.c |  5 ++++
 xen/arch/x86/dom0_build.c   | 48 ++++++++++++++-----------------------
 xen/common/domain.c         | 23 ++++++++++++++++++
 xen/include/xen/domain.h    |  1 +
 4 files changed, 47 insertions(+), 30 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9dd1ed5bce44..f373a5024783 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -60,6 +60,11 @@ static int __init parse_dom0_mem(const char *s)
 }
 custom_param("dom0_mem", parse_dom0_mem);
 
+int __init parse_arch_dom0_param(const char *s, const char *e)
+{
+    return -EINVAL;
+}
+
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 79234f18ff01..9f5300a3efbb 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -266,42 +266,30 @@ bool __initdata opt_dom0_pvh = !IS_ENABLED(CONFIG_PV);
 bool __initdata opt_dom0_verbose = IS_ENABLED(CONFIG_VERBOSE_DEBUG);
 bool __initdata opt_dom0_msr_relaxed;
 
-static int __init cf_check parse_dom0_param(const char *s)
+int __init parse_arch_dom0_param(const char *s, const char *e)
 {
-    const char *ss;
-    int rc = 0;
+    int val;
 
-    do {
-        int val;
-
-        ss = strchr(s, ',');
-        if ( !ss )
-            ss = strchr(s, '\0');
-
-        if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
-            opt_dom0_pvh = false;
-        else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
-            opt_dom0_pvh = true;
+    if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
+        opt_dom0_pvh = false;
+    else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
+        opt_dom0_pvh = true;
 #ifdef CONFIG_SHADOW_PAGING
-        else if ( (val = parse_boolean("shadow", s, ss)) >= 0 )
-            opt_dom0_shadow = val;
+    else if ( (val = parse_boolean("shadow", s, e)) >= 0 )
+        opt_dom0_shadow = val;
 #endif
-        else if ( (val = parse_boolean("verbose", s, ss)) >= 0 )
-            opt_dom0_verbose = val;
-        else if ( IS_ENABLED(CONFIG_PV) &&
-                  (val = parse_boolean("cpuid-faulting", s, ss)) >= 0 )
-            opt_dom0_cpuid_faulting = val;
-        else if ( (val = parse_boolean("msr-relaxed", s, ss)) >= 0 )
-            opt_dom0_msr_relaxed = val;
-        else
-            rc = -EINVAL;
-
-        s = ss + 1;
-    } while ( *ss );
+    else if ( (val = parse_boolean("verbose", s, e)) >= 0 )
+        opt_dom0_verbose = val;
+    else if ( IS_ENABLED(CONFIG_PV) &&
+              (val = parse_boolean("cpuid-faulting", s, e)) >= 0 )
+        opt_dom0_cpuid_faulting = val;
+    else if ( (val = parse_boolean("msr-relaxed", s, e)) >= 0 )
+        opt_dom0_msr_relaxed = val;
+    else
+        return -EINVAL;
 
-    return rc;
+    return 0;
 }
-custom_param("dom0", parse_dom0_param);
 
 static char __initdata opt_dom0_ioports_disable[200] = "";
 string_param("dom0_ioports_disable", opt_dom0_ioports_disable);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 6a440590fe2a..caaa40263792 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -364,6 +364,29 @@ static int __init cf_check parse_extra_guest_irqs(const char *s)
 }
 custom_param("extra_guest_irqs", parse_extra_guest_irqs);
 
+static int __init cf_check parse_dom0_param(const char *s)
+{
+    const char *ss;
+    int rc = 0;
+
+    do {
+        int ret;
+
+        ss = strchr(s, ',');
+        if ( !ss )
+            ss = strchr(s, '\0');
+
+        ret = parse_arch_dom0_param(s, ss);
+        if ( ret && !rc )
+            rc = ret;
+
+        s = ss + 1;
+    } while ( *ss );
+
+    return rc;
+}
+custom_param("dom0", parse_dom0_param);
+
 /*
  * Release resources held by a domain.  There may or may not be live
  * references to the domain, and it may or may not be fully constructed.
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 26f9c4f6dd5b..1df8f933d076 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -16,6 +16,7 @@ typedef union {
 struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id);
 
 unsigned int dom0_max_vcpus(void);
+int parse_arch_dom0_param(const char *s, const char *e);
 struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
 
 int vcpu_reset(struct vcpu *);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:48 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v7 04/12] xen/arm: add SVE exception class handling
Date: Tue, 23 May 2023 08:43:18 +0100
Message-Id: <20230523074326.3035745-5-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SVE has a new exception class with code 0x19; introduce the new code
and handle the exception.
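
On AArch64 the exception class is encoded in bits [31:26] of the syndrome
register (HSR/ESR_EL2), so recognising the new class amounts to a shift and
mask. A minimal standalone sketch, with illustrative DEMO_* names rather
than Xen's own macros:

```c
#include <stdint.h>

#define DEMO_HSR_EC_SHIFT 26
#define DEMO_HSR_EC_MASK  0x3fU
#define DEMO_HSR_EC_SVE   0x19U

/* Extract the exception class field (bits [31:26]) of a syndrome value. */
static uint32_t demo_hsr_ec(uint32_t hsr)
{
    return (hsr >> DEMO_HSR_EC_SHIFT) & DEMO_HSR_EC_MASK;
}

/* True when the syndrome reports an SVE access trap (EC 0x19). */
static int demo_is_sve_trap(uint32_t hsr)
{
    return demo_hsr_ec(hsr) == DEMO_HSR_EC_SVE;
}
```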

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
Changes from v6:
 - Add R-by Julien
Changes from v5:
 - modified error messages (Julien)
 - add R-by Bertrand
Changes from v4:
 - No changes
Changes from v3:
 - No changes
Changes from v2:
 - No changes
Changes from v1:
 - No changes
Changes from RFC:
 - No changes
---
 xen/arch/arm/include/asm/processor.h | 1 +
 xen/arch/arm/traps.c                 | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index bc683334125c..7e42ff8811fc 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -426,6 +426,7 @@
 #define HSR_EC_HVC64                0x16
 #define HSR_EC_SMC64                0x17
 #define HSR_EC_SYSREG               0x18
+#define HSR_EC_SVE                  0x19
 #endif
 #define HSR_EC_INSTR_ABORT_LOWER_EL 0x20
 #define HSR_EC_INSTR_ABORT_CURR_EL  0x21
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 3393e10b52e6..f6437f6aa9c9 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2172,6 +2172,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
         perfc_incr(trap_sysreg);
         do_sysreg(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        GUEST_BUG_ON(regs_mode_is_32bit(regs));
+        gprintk(XENLOG_WARNING, "Domain tried to use SVE while not allowed\n");
+        inject_undef_exception(regs, hsr);
+        break;
 #endif
 
     case HSR_EC_INSTR_ABORT_LOWER_EL:
@@ -2201,6 +2206,10 @@ void do_trap_hyp_sync(struct cpu_user_regs *regs)
     case HSR_EC_BRK:
         do_trap_brk(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        /* An SVE exception is a bug somewhere in hypervisor code */
+        do_unexpected_trap("SVE trap at EL2", regs);
+        break;
 #endif
     case HSR_EC_DATA_ABORT_CURR_EL:
     case HSR_EC_INSTR_ABORT_CURR_EL:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:49 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Date: Tue, 23 May 2023 08:43:21 +0100
Message-Id: <20230523074326.3035745-8-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a command line parameter to allow Dom0 the use of SVE resources.
The command line parameter sve=<integer>, a sub argument of dom0=,
controls the feature on this domain and sets the maximum SVE vector
length for Dom0.

Add a new function, parse_signed_integer(), to parse an integer
command line argument.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v6:
 - Fixed case for e==NULL in parse_signed_integer, drop parenthesis
   from if conditions, delete inline sve_domctl_vl_param and rely on
   DCE from the compiler (Jan)
 - Drop parenthesis from opt_dom0_sve (Julien)
 - Do not continue if 'sve' is in command line args but
   CONFIG_ARM64_SVE is not selected:
   https://lore.kernel.org/all/7614AE25-F59D-430A-9C3E-30B1CE0E1580@arm.com/
Changes from v5:
 - stop the domain if VL error occurs (Julien, Bertrand)
 - update the documentation
 - Rename sve_sanitize_vl_param to sve_domctl_vl_param to
   mark the fact that we are sanitizing a parameter coming from
   the user before encoding it into sve_vl in domctl structure.
   (suggestion from Bertrand in a separate discussion)
 - update comment in parse_signed_integer, return boolean in
   sve_domctl_vl_param (Jan).
Changes from v4:
 - Negative values as user param means max supported HW VL (Jan)
 - update documentation, make use of no_config_param(), rename
   parse_integer into parse_signed_integer and take long long *,
   also put a comment on the -2 return condition, update
   declaration comment to reflect the modifications (Jan)
Changes from v3:
 - Don't use fixed len types when not needed (Jan)
 - renamed domainconfig_encode_vl to sve_encode_vl
 - Use a sub argument of dom0= to enable the feature (Jan)
 - Add parse_integer() function
Changes from v2:
 - xen_domctl_createdomain field has changed into sve_vl and its
   value now is the VL / 128, create an helper function for that.
Changes from v1:
 - No changes
Changes from RFC:
 - Changed docs to explain that the domain won't be created if the
   requested vector length is above the supported one from the
   platform.
---
 docs/misc/xen-command-line.pandoc    | 20 ++++++++++++++++++--
 xen/arch/arm/arm64/sve.c             | 20 ++++++++++++++++++++
 xen/arch/arm/domain_build.c          | 26 ++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sve.h | 10 ++++++++++
 xen/common/kernel.c                  | 28 ++++++++++++++++++++++++++++
 xen/include/xen/lib.h                | 10 ++++++++++
 6 files changed, 112 insertions(+), 2 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d3319..47e5b4eb6199 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -777,9 +777,9 @@ Specify the bit width of the DMA heap.
 
 ### dom0
     = List of [ pv | pvh, shadow=<bool>, verbose=<bool>,
-                cpuid-faulting=<bool>, msr-relaxed=<bool> ]
+                cpuid-faulting=<bool>, msr-relaxed=<bool> ] (x86)
 
-    Applicability: x86
+    = List of [ sve=<integer> ] (Arm)
 
 Controls for how dom0 is constructed on x86 systems.
 
@@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
 
     If using this option is necessary to fix an issue, please report a bug.
 
+Enables features on dom0 on Arm systems.
+
+*   The `sve` integer parameter enables Arm SVE usage for the Dom0 domain and
+    sets the maximum SVE vector length; the option is applicable only to
+    AArch64 guests.
+    A value of 0 disables the feature; this is the default.
+    A value below 0 means the maximum SVE vector length supported by the
+    hardware is used, if SVE is supported.
+    A value above 0 explicitly sets the maximum SVE vector length for Dom0;
+    allowed values range from 128 to 2048 and must be multiples of 128.
+    Please note that when the user explicitly specifies a value, if that
+    value is above the maximum SVE vector length supported by the hardware,
+    domain creation will fail and the system will stop; the same occurs if
+    the option is given a non-zero value but the platform doesn't support
+    SVE.
+
 ### dom0-cpuid
     = List of comma separated booleans
 
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index 84a6dedc1fd7..feaca2cf647d 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -13,6 +13,9 @@
 #include <asm/processor.h>
 #include <asm/system.h>
 
+/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
+int __initdata opt_dom0_sve;
+
 extern unsigned int sve_get_hw_vl(void);
 
 /*
@@ -152,6 +155,23 @@ void sve_restore_state(struct vcpu *v)
     sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
 }
 
+bool __init sve_domctl_vl_param(int val, unsigned int *out)
+{
+    /*
+     * Negative SVE parameter value means to use the maximum supported
+     * vector length, otherwise if a positive value is provided, check if the
+     * vector length is a multiple of 128
+     */
+    if ( val < 0 )
+        *out = get_sys_vl_len();
+    else if ( (val % SVE_VL_MULTIPLE_VAL) == 0 )
+        *out = val;
+    else
+        return false;
+
+    return true;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f373a5024783..9202a96d9c28 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -62,6 +62,22 @@ custom_param("dom0_mem", parse_dom0_mem);
 
 int __init parse_arch_dom0_param(const char *s, const char *e)
 {
+    long long val;
+
+    if ( !parse_signed_integer("sve", s, e, &val) )
+    {
+#ifdef CONFIG_ARM64_SVE
+        if ( (val >= INT_MIN) && (val <= INT_MAX) )
+            opt_dom0_sve = val;
+        else
+            printk(XENLOG_INFO "'sve=%lld' value out of range!\n", val);
+
+        return 0;
+#else
+        panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
+#endif
+    }
+
     return -EINVAL;
 }
 
@@ -4113,6 +4129,16 @@ void __init create_dom0(void)
     if ( iommu_enabled )
         dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
 
+    if ( opt_dom0_sve )
+    {
+        unsigned int vl;
+
+        if ( sve_domctl_vl_param(opt_dom0_sve, &vl) )
+            dom0_cfg.arch.sve_vl = sve_encode_vl(vl);
+        else
+            panic("SVE vector length error\n");
+    }
+
     dom0 = domain_create(0, &dom0_cfg, CDF_privileged | CDF_directmap);
     if ( IS_ERR(dom0) )
         panic("Error creating domain 0 (rc = %ld)\n", PTR_ERR(dom0));
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 65b46685d263..a71d6a295dcc 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -21,14 +21,22 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
     return sve_vl * SVE_VL_MULTIPLE_VAL;
 }
 
+static inline unsigned int sve_encode_vl(unsigned int sve_vl_bits)
+{
+    return sve_vl_bits / SVE_VL_MULTIPLE_VAL;
+}
+
 register_t compute_max_zcr(void);
 int sve_context_init(struct vcpu *v);
 void sve_context_free(struct vcpu *v);
 void sve_save_state(struct vcpu *v);
 void sve_restore_state(struct vcpu *v);
+bool sve_domctl_vl_param(int val, unsigned int *out);
 
 #ifdef CONFIG_ARM64_SVE
 
+extern int opt_dom0_sve;
+
 static inline bool is_sve_domain(const struct domain *d)
 {
     return d->arch.sve_vl > 0;
@@ -38,6 +46,8 @@ unsigned int get_sys_vl_len(void);
 
 #else /* !CONFIG_ARM64_SVE */
 
+#define opt_dom0_sve     0
+
 static inline bool is_sve_domain(const struct domain *d)
 {
     return false;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index f7b1f65f373c..7cd00a4c999a 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -314,6 +314,34 @@ int parse_boolean(const char *name, const char *s, const char *e)
     return -1;
 }
 
+int __init parse_signed_integer(const char *name, const char *s, const char *e,
+                                long long *val)
+{
+    size_t slen, nlen;
+    const char *str;
+    long long pval;
+
+    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
+    nlen = strlen(name);
+
+    if ( !e )
+        e = s + slen;
+
+    /* Check that this is the name we're looking for and a value was provided */
+    if ( slen <= nlen || strncmp(s, name, nlen) || s[nlen] != '=' )
+        return -1;
+
+    pval = simple_strtoll(&s[nlen + 1], &str, 10);
+
+    /* Number not recognised */
+    if ( str != e )
+        return -2;
+
+    *val = pval;
+
+    return 0;
+}
+
 int cmdline_strcmp(const char *frag, const char *name)
 {
     for ( ; ; frag++, name++ )
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index e914ccade095..5343ee7a944a 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -94,6 +94,16 @@ int parse_bool(const char *s, const char *e);
  */
 int parse_boolean(const char *name, const char *s, const char *e);
 
+/**
+ * Given a specific name, parses a string of the form:
+ *   $NAME=<integer number>
+ * returning 0 and a value in val, for a recognised integer.
+ * Returns -1 for name not found or general errors, or -2 if the name is
+ * found but the number is not recognised.
+ */
+int parse_signed_integer(const char *name, const char *s, const char *e,
+                         long long *val);
+
 /**
  * Very similar to strcmp(), but will declare a match if the NUL in 'name'
  * lines up with comma, colon, semicolon or equals in 'frag'.  Designed for
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 07:43:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538275.838187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgQ-0005UG-KZ; Tue, 23 May 2023 07:43:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538275.838187; Tue, 23 May 2023 07:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgQ-0005U1-FR; Tue, 23 May 2023 07:43:50 +0000
Received: by outflank-mailman (input) for mailman id 538275;
 Tue, 23 May 2023 07:43:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kqVG=BM=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q1MgO-0003Zx-Im
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 07:43:48 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 8cc1d5eb-f93d-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 09:43:46 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9597C139F;
 Tue, 23 May 2023 00:44:31 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 448513F762;
 Tue, 23 May 2023 00:43:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cc1d5eb-f93d-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v7 08/12] xen/physinfo: encode Arm SVE vector length in arch_capabilities
Date: Tue, 23 May 2023 08:43:22 +0100
Message-Id: <20230523074326.3035745-9-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the Arm platform supports SVE, advertise the feature in the
arch_capabilities field of struct xen_sysctl_physinfo by encoding
the SVE vector length in it.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v6:
 - no changes
Changes from v5:
 - Add R-by from Bertrand
Changes from v4:
 - Write arch_capabilities from arch_do_physinfo instead of using
   stub functions (Jan)
Changes from v3:
 - domainconfig_encode_vl is now named sve_encode_vl
Changes from v2:
 - Remove XEN_SYSCTL_PHYSCAP_ARM_SVE_SHFT, use MASK_INSR and
   protect with ifdef XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK (Jan)
 - Use the helper function sve_arch_cap_physinfo to encode
   the VL into physinfo arch_capabilities field.
Changes from v1:
 - Use only arch_capabilities and some defines to encode SVE VL
   (Bertrand, Stefano, Jan)
Changes from RFC:
 - new patch
---
 xen/arch/arm/sysctl.c       | 4 ++++
 xen/include/public/sysctl.h | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index b0a78a8b10d0..e9a0661146e4 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -11,11 +11,15 @@
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/hypercall.h>
+#include <asm/arm64/sve.h>
 #include <public/sysctl.h>
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 {
     pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm | XEN_SYSCTL_PHYSCAP_hap;
+
+    pi->arch_capabilities |= MASK_INSR(sve_encode_vl(get_sys_vl_len()),
+                                       XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK);
 }
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd00e..9d06e92d0f6a 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -94,6 +94,10 @@ struct xen_sysctl_tbuf_op {
 /* Max XEN_SYSCTL_PHYSCAP_* constant.  Used for ABI checking. */
 #define XEN_SYSCTL_PHYSCAP_MAX XEN_SYSCTL_PHYSCAP_gnttab_v2
 
+#if defined(__arm__) || defined(__aarch64__)
+#define XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK  (0x1FU)
+#endif
+
 struct xen_sysctl_physinfo {
     uint32_t threads_per_core;
     uint32_t cores_per_socket;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 07:43:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538276.838198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgS-0005oV-3V; Tue, 23 May 2023 07:43:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538276.838198; Tue, 23 May 2023 07:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgR-0005ns-Tj; Tue, 23 May 2023 07:43:51 +0000
Received: by outflank-mailman (input) for mailman id 538276;
 Tue, 23 May 2023 07:43:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kqVG=BM=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q1MgQ-0003Zx-Ot
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 07:43:50 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 8dc0bc1a-f93d-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 09:43:48 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5ABED1515;
 Tue, 23 May 2023 00:44:33 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C8D603F762;
 Tue, 23 May 2023 00:43:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8dc0bc1a-f93d-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Christian Lindig <christian.lindig@cloud.com>
Subject: [PATCH v7 09/12] tools: add physinfo arch_capabilities handling for Arm
Date: Tue, 23 May 2023 08:43:23 +0100
Message-Id: <20230523074326.3035745-10-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On Arm, the SVE vector length is encoded in the arch_capabilities
field of struct xen_sysctl_physinfo; make use of this field in the
tools when building for Arm.

Create header arm-arch-capabilities.h to handle the arch_capabilities
field of physinfo for Arm.

Remove the include of xen-tools/common-macros.h in
python/xen/lowlevel/xc/xc.c because it is already included by the
arm-arch-capabilities.h header.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
Acked-by: Christian Lindig <christian.lindig@cloud.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
Changes from v6:
 - Fix licence header in arm-arch-capabilities.h, add R-by (Anthony)
Changes from v5:
 - no changes
Changes from v4:
 - Move arm-arch-capabilities.h into xen-tools/, add LIBXL_HAVE_,
   fixed python return type to I instead of i. (Anthony)
Changes from v3:
 - add Ack-by for the Golang bits (George)
 - add Ack-by for the OCaml tools (Christian)
 - now xen-tools/libs.h is named xen-tools/common-macros.h
 - changed commit message to explain why the header modification
   in python/xen/lowlevel/xc/xc.c
Changes from v2:
 - rename arm_arch_capabilities.h to arm-arch-capabilities.h, use
   MASK_EXTR.
 - Now arm-arch-capabilities.h needs the MASK_EXTR macro, but it is
   defined in libxl_internal.h; it doesn't feel right to include
   that header, so move MASK_EXTR into xen-tools/libs.h, which is
   also included in libxl_internal.h
Changes from v1:
 - now SVE VL is encoded in arch_capabilities on Arm
Changes from RFC:
 - new patch
---
 tools/golang/xenlight/helpers.gen.go          |  2 ++
 tools/golang/xenlight/types.gen.go            |  1 +
 tools/include/libxl.h                         |  6 ++++
 .../include/xen-tools/arm-arch-capabilities.h | 28 +++++++++++++++++++
 tools/include/xen-tools/common-macros.h       |  2 ++
 tools/libs/light/libxl.c                      |  1 +
 tools/libs/light/libxl_internal.h             |  1 -
 tools/libs/light/libxl_types.idl              |  1 +
 tools/ocaml/libs/xc/xenctrl.ml                |  4 +--
 tools/ocaml/libs/xc/xenctrl.mli               |  4 +--
 tools/ocaml/libs/xc/xenctrl_stubs.c           |  8 ++++--
 tools/python/xen/lowlevel/xc/xc.c             |  8 ++++--
 tools/xl/xl_info.c                            |  8 ++++++
 13 files changed, 62 insertions(+), 12 deletions(-)
 create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 0a203d22321f..35397be2f9e2 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -3506,6 +3506,7 @@ x.CapVmtrace = bool(xc.cap_vmtrace)
 x.CapVpmu = bool(xc.cap_vpmu)
 x.CapGnttabV1 = bool(xc.cap_gnttab_v1)
 x.CapGnttabV2 = bool(xc.cap_gnttab_v2)
+x.ArchCapabilities = uint32(xc.arch_capabilities)
 
  return nil}
 
@@ -3540,6 +3541,7 @@ xc.cap_vmtrace = C.bool(x.CapVmtrace)
 xc.cap_vpmu = C.bool(x.CapVpmu)
 xc.cap_gnttab_v1 = C.bool(x.CapGnttabV1)
 xc.cap_gnttab_v2 = C.bool(x.CapGnttabV2)
+xc.arch_capabilities = C.uint32_t(x.ArchCapabilities)
 
  return nil
  }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index a7c17699f80e..3d968a496744 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -1079,6 +1079,7 @@ CapVmtrace bool
 CapVpmu bool
 CapGnttabV1 bool
 CapGnttabV2 bool
+ArchCapabilities uint32
 }
 
 type Connectorinfo struct {
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index cfa1a191318c..4fa09ff7635a 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -525,6 +525,12 @@
  */
 #define LIBXL_HAVE_PHYSINFO_CAP_GNTTAB 1
 
+/*
+ * LIBXL_HAVE_PHYSINFO_ARCH_CAPABILITIES indicates that libxl_physinfo has a
+ * arch_capabilities field.
+ */
+#define LIBXL_HAVE_PHYSINFO_ARCH_CAPABILITIES 1
+
 /*
  * LIBXL_HAVE_MAX_GRANT_VERSION indicates libxl_domain_build_info has a
  * max_grant_version field for setting the max grant table version per
diff --git a/tools/include/xen-tools/arm-arch-capabilities.h b/tools/include/xen-tools/arm-arch-capabilities.h
new file mode 100644
index 000000000000..3849e897925d
--- /dev/null
+++ b/tools/include/xen-tools/arm-arch-capabilities.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: LGPL-2.1-only */
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#ifndef ARM_ARCH_CAPABILITIES_H
+#define ARM_ARCH_CAPABILITIES_H
+
+#include <stdint.h>
+#include <xen/sysctl.h>
+
+#include <xen-tools/common-macros.h>
+
+static inline
+unsigned int arch_capabilities_arm_sve(unsigned int arch_capabilities)
+{
+#if defined(__aarch64__)
+    unsigned int sve_vl = MASK_EXTR(arch_capabilities,
+                                    XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK);
+
+    /* Vector length is divided by 128 before storing it in arch_capabilities */
+    return sve_vl * 128U;
+#else
+    return 0;
+#endif
+}
+
+#endif /* ARM_ARCH_CAPABILITIES_H */
diff --git a/tools/include/xen-tools/common-macros.h b/tools/include/xen-tools/common-macros.h
index 76b55bf62085..d53b88182560 100644
--- a/tools/include/xen-tools/common-macros.h
+++ b/tools/include/xen-tools/common-macros.h
@@ -72,6 +72,8 @@
 #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 #endif
 
+#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
+
 #ifndef __must_check
 #define __must_check __attribute__((__warn_unused_result__))
 #endif
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index a0bf7d186f69..175d6dde0b80 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -409,6 +409,7 @@ int libxl_get_physinfo(libxl_ctx *ctx, libxl_physinfo *physinfo)
         !!(xcphysinfo.capabilities & XEN_SYSCTL_PHYSCAP_gnttab_v1);
     physinfo->cap_gnttab_v2 =
         !!(xcphysinfo.capabilities & XEN_SYSCTL_PHYSCAP_gnttab_v2);
+    physinfo->arch_capabilities = xcphysinfo.arch_capabilities;
 
     GC_FREE;
     return 0;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 5244fde6239a..8aba3e138909 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -132,7 +132,6 @@
 
 #define DIV_ROUNDUP(n, d) (((n) + (d) - 1) / (d))
 
-#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
 #define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))
 
 #define LIBXL__LOGGING_ENABLED
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index c10292e0d7e3..fd31dacf7d5a 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -1133,6 +1133,7 @@ libxl_physinfo = Struct("physinfo", [
     ("cap_vpmu", bool),
     ("cap_gnttab_v1", bool),
     ("cap_gnttab_v2", bool),
+    ("arch_capabilities", uint32),
     ], dir=DIR_OUT)
 
 libxl_connectorinfo = Struct("connectorinfo", [
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index e4096bf92c1d..bf23ca50bb15 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -128,12 +128,10 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type arm_physinfo_cap_flag
-
 type x86_physinfo_cap_flag
 
 type arch_physinfo_cap_flags =
-  | ARM of arm_physinfo_cap_flag list
+  | ARM of int
   | X86 of x86_physinfo_cap_flag list
 
 type physinfo =
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index ef2254537430..ed1e28ea30a0 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -113,12 +113,10 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type arm_physinfo_cap_flag
-
 type x86_physinfo_cap_flag
 
 type arch_physinfo_cap_flags =
-  | ARM of arm_physinfo_cap_flag list
+  | ARM of int
   | X86 of x86_physinfo_cap_flag list
 
 type physinfo = {
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index f686db3124ee..a03da31f6f2c 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -851,13 +851,15 @@ CAMLprim value stub_xc_physinfo(value xch_val)
 	arch_cap_list = Tag_cons;
 
 	arch_cap_flags_tag = 1; /* tag x86 */
-#else
-	caml_failwith("Unhandled architecture");
-#endif
 
 	arch_cap_flags = caml_alloc_small(1, arch_cap_flags_tag);
 	Store_field(arch_cap_flags, 0, arch_cap_list);
 	Store_field(physinfo, 10, arch_cap_flags);
+#elif defined(__aarch64__)
+	Store_field(physinfo, 10, Val_int(c_physinfo.arch_capabilities));
+#else
+	caml_failwith("Unhandled architecture");
+#endif
 
 	CAMLreturn(physinfo);
 }
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 9728b34185ac..b3699fdac58e 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -22,6 +22,7 @@
 #include <xen/hvm/hvm_info_table.h>
 #include <xen/hvm/params.h>
 
+#include <xen-tools/arm-arch-capabilities.h>
 #include <xen-tools/common-macros.h>
 
 /* Needed for Python versions earlier than 2.3. */
@@ -897,7 +898,7 @@ static PyObject *pyxc_physinfo(XcObject *self)
     if ( p != virt_caps )
       *(p-1) = '\0';
 
-    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
+    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s,s:I}",
                             "nr_nodes",         pinfo.nr_nodes,
                             "threads_per_core", pinfo.threads_per_core,
                             "cores_per_socket", pinfo.cores_per_socket,
@@ -907,7 +908,10 @@ static PyObject *pyxc_physinfo(XcObject *self)
                             "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
                             "cpu_khz",          pinfo.cpu_khz,
                             "hw_caps",          cpu_cap,
-                            "virt_caps",        virt_caps);
+                            "virt_caps",        virt_caps,
+                            "arm_sve_vl",
+                              arch_capabilities_arm_sve(pinfo.arch_capabilities)
+                        );
 }
 
 static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)
diff --git a/tools/xl/xl_info.c b/tools/xl/xl_info.c
index 712b7638b013..ddc42f96b979 100644
--- a/tools/xl/xl_info.c
+++ b/tools/xl/xl_info.c
@@ -27,6 +27,7 @@
 #include <libxl_json.h>
 #include <libxl_utils.h>
 #include <libxlutil.h>
+#include <xen-tools/arm-arch-capabilities.h>
 
 #include "xl.h"
 #include "xl_utils.h"
@@ -224,6 +225,13 @@ static void output_physinfo(void)
          info.cap_gnttab_v2 ? " gnttab-v2" : ""
         );
 
+    /* Print arm SVE vector length only on ARM platforms */
+#if defined(__aarch64__)
+    maybe_printf("arm_sve_vector_length  : %u\n",
+         arch_capabilities_arm_sve(info.arch_capabilities)
+        );
+#endif
+
     vinfo = libxl_get_version_info(ctx);
     if (vinfo) {
         i = (1 << 20) / vinfo->pagesize;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 07:43:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538277.838207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgT-00068h-LN; Tue, 23 May 2023 07:43:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538277.838207; Tue, 23 May 2023 07:43:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgT-000684-Ds; Tue, 23 May 2023 07:43:53 +0000
Received: by outflank-mailman (input) for mailman id 538277;
 Tue, 23 May 2023 07:43:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kqVG=BM=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q1MgS-0003Zx-4y
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 07:43:52 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 8e98f432-f93d-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 09:43:50 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A852A139F;
 Tue, 23 May 2023 00:44:34 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8C96C3F762;
 Tue, 23 May 2023 00:43:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e98f432-f93d-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v7 10/12] xen/tools: add sve parameter in XL configuration
Date: Tue, 23 May 2023 08:43:24 +0100
Message-Id: <20230523074326.3035745-11-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the sve parameter to the XL configuration to allow guests to use
the SVE feature.
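As an illustration, a guest configuration fragment using the new option might look like the following sketch (the domain name, memory size, and kernel path are hypothetical):

```
# Hypothetical AArch64 guest enabling SVE
name = "guest-sve"
memory = 512
vcpus = 2
kernel = "/path/to/Image"

# Enable SVE with a maximum vector length of 256 bits;
# sve = "hw" would request the platform maximum instead.
sve = "256"
```

libxl rejects values that are not multiples of 128 or that exceed the vector length reported by the platform in physinfo.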

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v6:
 - Add check for sve_vl be multiple of 128 (Anthony)
Changes from v5:
 - Update documentation
 - re-generated golang files
Changes from v4:
 - Rename sve field to sve_vl (Anthony), changed type to
   libxl_sve_type
 - Sanity check of sve field in libxl instead of xl, update docs
   (Anthony)
 - drop Ack-by from George because of the changes in the Golang bits
Changes from v3:
 - no changes
Changes from v2:
 - the domain configuration field name has changed to sve_vl, and
   its value is now VL/128.
 - Add Ack-by George for the Golang bits
Changes from v1:
 - updated to use arch_capabilities field for vector length
Changes from RFC:
 - changed libxl_types.idl sve field to uint16
 - now toolstack uses info from physinfo to check against the
   sve XL value
 - Changed documentation
---
 docs/man/xl.cfg.5.pod.in             | 16 ++++++++++++++
 tools/golang/xenlight/helpers.gen.go |  2 ++
 tools/golang/xenlight/types.gen.go   | 23 +++++++++++++++++++
 tools/include/libxl.h                |  5 +++++
 tools/libs/light/libxl_arm.c         | 33 ++++++++++++++++++++++++++++
 tools/libs/light/libxl_types.idl     | 22 +++++++++++++++++++
 tools/xl/xl_parse.c                  |  8 +++++++
 7 files changed, 109 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 24ac92718288..1b4e13ab647b 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2955,6 +2955,22 @@ Currently, only the "sbsa_uart" model is supported for ARM.
 
 =back
 
+=item B<sve="vl">
+
+The `sve` parameter enables Arm Scalable Vector Extension (SVE) usage for the
+guest and sets the maximum SVE vector length; the option is applicable only to
+AArch64 guests.
+A value of "disabled" disables the feature; this is the default.
+Allowed values are "disabled", "128", "256", "384", "512", "640", "768", "896",
+"1024", "1152", "1280", "1408", "1536", "1664", "1792", "1920", "2048", "hw".
+Specifying "hw" means that the maximum vector length supported by the platform
+will be used.
+Please be aware that if a specific vector length is passed and its value is
+above the maximum vector length supported by the platform, an error will be
+raised.
+
+=back
+
 =head3 x86
 
 =over 4
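As a usage sketch of the option documented above, a guest configuration
fragment might look like this (the domain name and other fields are purely
illustrative):

```
# Hypothetical AArch64 guest config fragment; unrelated fields omitted
name = "guest-arm64"
vcpus = 2
memory = 1024
sve = "256"    # or sve = "hw" to use the platform maximum
```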
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 35397be2f9e2..cd1a16e32eac 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1149,6 +1149,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
 x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
+x.ArchArm.SveVl = SveType(xc.arch_arm.sve_vl)
 if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
@@ -1653,6 +1654,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
 xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
+xc.arch_arm.sve_vl = C.libxl_sve_type(x.ArchArm.SveVl)
 if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 3d968a496744..b131a7eedc9d 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -490,6 +490,28 @@ TeeTypeNone TeeType = 0
 TeeTypeOptee TeeType = 1
 )
 
+type SveType int
+const(
+SveTypeHw SveType = -1
+SveTypeDisabled SveType = 0
+SveType128 SveType = 128
+SveType256 SveType = 256
+SveType384 SveType = 384
+SveType512 SveType = 512
+SveType640 SveType = 640
+SveType768 SveType = 768
+SveType896 SveType = 896
+SveType1024 SveType = 1024
+SveType1152 SveType = 1152
+SveType1280 SveType = 1280
+SveType1408 SveType = 1408
+SveType1536 SveType = 1536
+SveType1664 SveType = 1664
+SveType1792 SveType = 1792
+SveType1920 SveType = 1920
+SveType2048 SveType = 2048
+)
+
 type RdmReserve struct {
 Strategy RdmReserveStrategy
 Policy RdmReservePolicy
@@ -564,6 +586,7 @@ TypeUnion DomainBuildInfoTypeUnion
 ArchArm struct {
 GicVersion GicVersion
 Vuart VuartType
+SveVl SveType
 }
 ArchX86 struct {
 MsrRelaxed Defbool
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 4fa09ff7635a..cac641a7eba2 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -283,6 +283,11 @@
  */
 #define LIBXL_HAVE_BUILDINFO_ARCH_ARM_TEE 1
 
+/*
+ * libxl_domain_build_info has the arch_arm.sve_vl field.
+ */
+#define LIBXL_HAVE_BUILDINFO_ARCH_ARM_SVE_VL 1
+
 /*
  * LIBXL_HAVE_SOFT_RESET indicates that libxl supports performing
  * 'soft reset' for domains and there is 'soft_reset' shutdown reason
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 97c80d7ed0fa..35f76dfc21e4 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -3,6 +3,8 @@
 #include "libxl_libfdt_compat.h"
 #include "libxl_arm.h"
 
+#include <xen-tools/arm-arch-capabilities.h>
+
 #include <stdbool.h>
 #include <libfdt.h>
 #include <assert.h>
@@ -211,6 +213,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
+    /* Parameter is sanitised in libxl__arch_domain_build_info_setdefault */
+    if (d_config->b_info.arch_arm.sve_vl) {
+        /* Vector length is divided by 128 in struct xen_domctl_createdomain */
+        config->arch.sve_vl = d_config->b_info.arch_arm.sve_vl / 128U;
+    }
+
     return 0;
 }
 
@@ -1685,6 +1693,31 @@ int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
 
+    /* Sanitise SVE parameter */
+    if (b_info->arch_arm.sve_vl) {
+        unsigned int max_sve_vl =
+            arch_capabilities_arm_sve(physinfo->arch_capabilities);
+
+        if (!max_sve_vl) {
+            LOG(ERROR, "SVE is unsupported on this machine.");
+            return ERROR_FAIL;
+        }
+
+        if (LIBXL_SVE_TYPE_HW == b_info->arch_arm.sve_vl) {
+            b_info->arch_arm.sve_vl = max_sve_vl;
+        } else if (b_info->arch_arm.sve_vl > max_sve_vl) {
+            LOG(ERROR,
+                "Invalid sve value: %d. Platform supports up to %u bits",
+                b_info->arch_arm.sve_vl, max_sve_vl);
+            return ERROR_FAIL;
+        } else if (b_info->arch_arm.sve_vl % 128) {
+            LOG(ERROR,
                "Invalid sve value: %d. It must be a multiple of 128",
+                b_info->arch_arm.sve_vl);
+            return ERROR_FAIL;
+        }
+    }
+
     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
         return 0;
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index fd31dacf7d5a..9e48bb772646 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -523,6 +523,27 @@ libxl_tee_type = Enumeration("tee_type", [
     (1, "optee")
     ], init_val = "LIBXL_TEE_TYPE_NONE")
 
+libxl_sve_type = Enumeration("sve_type", [
+    (-1, "hw"),
+    (0, "disabled"),
+    (128, "128"),
+    (256, "256"),
+    (384, "384"),
+    (512, "512"),
+    (640, "640"),
+    (768, "768"),
+    (896, "896"),
+    (1024, "1024"),
+    (1152, "1152"),
+    (1280, "1280"),
+    (1408, "1408"),
+    (1536, "1536"),
+    (1664, "1664"),
+    (1792, "1792"),
+    (1920, "1920"),
+    (2048, "2048")
+    ], init_val = "LIBXL_SVE_TYPE_DISABLED")
+
 libxl_rdm_reserve = Struct("rdm_reserve", [
     ("strategy",    libxl_rdm_reserve_strategy),
     ("policy",      libxl_rdm_reserve_policy),
@@ -690,6 +711,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
                                ("vuart", libxl_vuart_type),
+                               ("sve_vl", libxl_sve_type),
                               ])),
     ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool),
                               ])),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 1f6f47daf4e1..f036e56fc239 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2887,6 +2887,14 @@ skip_usbdev:
         }
     }
 
+    if (!xlu_cfg_get_string (config, "sve", &buf, 1)) {
+        e = libxl_sve_type_from_string(buf, &b_info->arch_arm.sve_vl);
+        if (e) {
+            fprintf(stderr, "Unknown sve \"%s\" specified\n", buf);
+            exit(EXIT_FAILURE);
+        }
+    }
+
     parse_vkb_list(config, d_config);
 
     d_config->virtios = NULL;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:43:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 07:43:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538278.838216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgU-0006TC-WB; Tue, 23 May 2023 07:43:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538278.838216; Tue, 23 May 2023 07:43:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MgU-0006SV-Pt; Tue, 23 May 2023 07:43:54 +0000
Received: by outflank-mailman (input) for mailman id 538278;
 Tue, 23 May 2023 07:43:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kqVG=BM=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q1MgS-0003Zx-W2
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 07:43:52 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 8f3e398e-f93d-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 09:43:51 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C0552153B;
 Tue, 23 May 2023 00:44:35 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D9ADB3F762;
 Tue, 23 May 2023 00:43:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f3e398e-f93d-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v7 11/12] xen/arm: add sve property for dom0less domUs
Date: Tue, 23 May 2023 08:43:25 +0100
Message-Id: <20230523074326.3035745-12-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a device tree property in the dom0less domU configuration
to enable the guest to use SVE.

Update documentation.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v6:
 - Use ifdef in create_domUs and fail if 'sve' is used on systems
   with CONFIG_ARM64_SVE not selected (Bertrand, Julien, Jan)
Changes from v5:
 - Stop the domain creation if SVE not supported or SVE VL
   errors (Julien, Bertrand)
 - sve_sanitize_vl_param is now renamed to sve_domctl_vl_param
   and returns a boolean; the affected code is updated accordingly.
 - Reworded documentation.
Changes from v4:
 - Now it is possible to specify the property "sve" for a dom0less
   device tree node without any value, meaning that the platform
   supported VL will be used.
Changes from v3:
 - Now domainconfig_encode_vl is named sve_encode_vl
Changes from v2:
 - xen_domctl_createdomain field name has changed to sve_vl
   and its value is VL/128; use domainconfig_encode_vl
   to encode a plain VL in bits.
Changes from v1:
 - No changes
Changes from RFC:
 - Changed documentation
---
 docs/misc/arm/device-tree/booting.txt | 16 +++++++++++++++
 xen/arch/arm/domain_build.c           | 28 +++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 3879340b5e0a..32a0e508c471 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -193,6 +193,22 @@ with the following properties:
     Optional. Handle to a xen,cpupool device tree node that identifies the
     cpupool where the guest will be started at boot.
 
+- sve
+
+    Optional. The `sve` property enables Arm SVE usage for the domain and sets
+    the maximum SVE vector length; the option is applicable only to AArch64
+    guests.
+    A value equal to 0 disables the feature; this is the default value.
+    Specifying this property with no value means that the SVE vector length
+    will be set equal to the maximum vector length supported by the platform.
+    Values above 0 explicitly set the maximum SVE vector length for the domain,
+    allowed values are multiples of 128, from a minimum of 128 up to 2048.
+    Please note that when the user explicitly specifies the value, if that
+    value is above the maximum SVE vector length supported by the hardware,
+    the domain creation will fail and the system will stop. The same will
+    occur if the option is provided with a non-zero value but the platform
+    doesn't support SVE.
+
 - xen,enhanced
 
     A string property. Possible property values are:
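As a sketch of the property documented above, a dom0less domU node might look
like this (the node name and the other properties are illustrative, not taken
from this patch):

```
domU1 {
    compatible = "xen,domain";
    #address-cells = <1>;
    #size-cells = <1>;
    cpus = <1>;
    memory = <0 0x20000>;
    sve = <512>;    /* max SVE vector length of 512 bits */
    /* alternatively, a bare "sve;" requests the platform maximum */
};
```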
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9202a96d9c28..ba4fe9e165ee 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -4008,6 +4008,34 @@ void __init create_domUs(void)
             d_cfg.max_maptrack_frames = val;
         }
 
+        if ( dt_get_property(node, "sve", &val) )
+        {
+#ifdef CONFIG_ARM64_SVE
+            unsigned int sve_vl_bits;
+            bool ret = false;
+
+            if ( !val )
+            {
+                /* Property found with no value, means max HW VL supported */
+                ret = sve_domctl_vl_param(-1, &sve_vl_bits);
+            }
+            else
+            {
+                if ( dt_property_read_u32(node, "sve", &val) )
+                    ret = sve_domctl_vl_param(val, &sve_vl_bits);
+                else
+                    panic("Error reading 'sve' property");
+            }
+
+            if ( ret )
+                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
+            else
+                panic("SVE vector length error\n");
+#else
+            panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
+#endif
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 07:47:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 07:47:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538324.838228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MkB-0001mB-Kj; Tue, 23 May 2023 07:47:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538324.838228; Tue, 23 May 2023 07:47:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1MkB-0001m4-HD; Tue, 23 May 2023 07:47:43 +0000
Received: by outflank-mailman (input) for mailman id 538324;
 Tue, 23 May 2023 07:47:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kqVG=BM=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q1MgU-0003Zx-Db
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 07:43:54 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 903d1134-f93d-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 09:43:52 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6A44E139F;
 Tue, 23 May 2023 00:44:37 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F2E603F762;
 Tue, 23 May 2023 00:43:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 903d1134-f93d-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v7 12/12] xen/changelog: Add SVE and "dom0" options to the changelog for Arm
Date: Tue, 23 May 2023 08:43:26 +0100
Message-Id: <20230523074326.3035745-13-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Arm can now use the "dom0=" Xen command line option, and support for
guests running SVE instructions has been added; put entries for both in
the changelog.

Mention the "Tech Preview" status and add an entry in SUPPORT.md

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG
---
Changes from v6:
 - Add Henry's A-by to CHANGELOG
Changes from v5:
 - Add Tech Preview status and add entry in SUPPORT.md (Bertrand)
Changes from v4:
 - No changes
Change from v3:
 - new patch
---
 CHANGELOG.md | 3 +++
 SUPPORT.md   | 6 ++++++
 2 files changed, 9 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5bfd3aa5c0d5..512b7bdc0fcb 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -11,6 +11,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    cap toolstack provided values.
  - Ignore VCPUOP_set_singleshot_timer's VCPU_SSHOTTMR_future flag. The only
    known user doesn't use it properly, leading to in-guest breakage.
+ - The "dom0" option is now supported on Arm, and its "sve=" sub-option can
+   be used to enable dom0 to use SVE/SVE2 instructions.
 
 ### Added
  - On x86, support for features new in Intel Sapphire Rapids CPUs:
@@ -20,6 +22,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
      wide impact of a guest misusing atomic instructions.
  - xl/libxl can customize SMBIOS strings for HVM guests.
+ - On Arm, Xen supports guests running SVE/SVE2 instructions. (Tech Preview)
 
 ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
 
diff --git a/SUPPORT.md b/SUPPORT.md
index 6dbed9d5d029..e0fa2246807b 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -99,6 +99,12 @@ Extension to the GICv3 interrupt controller to support MSI.
 
     Status: Experimental
 
+### ARM Scalable Vector Extension (SVE/SVE2)
+
+AArch64 guests can use the Scalable Vector Extension (SVE/SVE2).
+
+    Status: Tech Preview
+
 ## Guest Type
 
 ### x86/PV
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 10:02:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 10:02:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538374.838255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1OqV-0000la-0Y; Tue, 23 May 2023 10:02:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538374.838255; Tue, 23 May 2023 10:02:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1OqU-0000lT-Rz; Tue, 23 May 2023 10:02:22 +0000
Received: by outflank-mailman (input) for mailman id 538374;
 Tue, 23 May 2023 10:02:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jTts=BM=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1OqT-0000lN-Ez
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 10:02:21 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2060d.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e6772e08-f950-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 12:02:18 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9457.eurprd04.prod.outlook.com (2603:10a6:20b:4e9::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 10:02:16 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Tue, 23 May 2023
 10:02:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6772e08-f950-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N5v6Qb4mRX7uYfC58GM0LleJbOcGTPIhJD5JeWQqNBJ0C7l9dXvbhYA4C8yHZVjngDCNpgrH/iL1gCFPZHsUD9FI7d/LUDFPjGibjDn3CeK6WQKIkIOOVwmP9HLtcv/ghDU2epxtkDL+pqc5nI6ElryaFRhCUbLg7C7MRU8+mQvypKWxOTo0xhy50x6HU9NmqLvPQEOna57zsT4k7IzT5tf6eT37FTjogRGpYf9YGIn1a22c9OsBbBfAF+Y9TS3r/pszQkGkfgGiMnEYdjreQtNggaYiPz9OqAkJk5BEDL5b5WnXoFMsLU98l5G70hsOxdBMsbVM8jBokUHt62K0LA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=lw0hmpCUQr1vTjf5tGYOdFJpFJQmi1l+9VsKQDyvw8c=;
 b=YrWuyJEBzUxACj8PiP9+oMfa6weHSXkMBhycgW7ZjRLn5QoV+b7bKQU+P8BQRBNpb1R2Hq3SKMmtp+YCasPm6ShLJl1Cc33xlk7G8bQAxWKo3K4+ksuBNNJGPVdpELgIkdQquDExuesimf9ysF+S5aVm8b0sAZPi5u403fGY8t8hFx27yS7t06gY4rr1oEcAF7EE/j1+2nWsVLjBcoWeMB/X6dpoxhG8WNpUrGabFIzHr+Q2f+F3bWv/6xoT3mcUNnHobuYyywsDnZtv26K5iz2yWom8PI7bmzA6oHMhQgFkb3odOCKNWJrdsPaG+0yg7c3yAaRDWs6rdJFUML59AQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lw0hmpCUQr1vTjf5tGYOdFJpFJQmi1l+9VsKQDyvw8c=;
 b=ujcs1J/9Xbn+dDOortqreVSmamp+EmWtXodx0niH12Qbheq5bkpR1MIloz+/jt13UKgduDj10037wBAlRQWM8dwUnyaKNKZ7aiaXl7KZbaJ63o9wydpOpKnP+SZeuBhhxgP1VOh7T+kkzdO9tMKK9DiV3KRBrie+tJLquaXbTJYkqKdfSEhL18hfw00uPoJikbxNSpfLL+wI2c5YVPf2JxC5PgHqu5LlRUPOIhjF5TuJMt5Uvw3DmsR72QuOEntIZQQqdO6GUb/48GEULSLzXs61rpXCp5HQB1Yn9MFsKuAL34g9LIZ8yEM4lX8YiJ4zcuaQohT/tUH0yhTCF7ezEg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <48722696-ea22-1af9-2a0a-aa78972d118a@suse.com>
Date: Tue, 23 May 2023 12:02:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel@lists.xenproject.org
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-8-luca.fancellu@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523074326.3035745-8-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0260.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9457:EE_
X-MS-Office365-Filtering-Correlation-Id: deae2056-9152-46de-4ebd-08db5b74c960
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	l5HNmpKh/0SJ1f+8fWy7yDNwPqk9NMWk2wkZANsNsvhpyRnHC5PBEfyH4hO33DQtZ7pwBOOYBg0hq0swp51xT8Fj5eP1G4GHn4uj/tK1D2Y5AfX4oNJtk2vos9tiDbKgXrDp27XJLnhursztTyr+sOx2dJ12tWCBconfUOzce0xiwwBqS8w2YrANG6yfNOQv/tvTaYxQzZf1RzMvZjzTHEHeZTkau82DFd3zum29cYhi3NOYX37nyw5sZ93tN+UWqTD/XwQpBLITniviPUY+nU3b7YfIlAZfgjq8z/eSgNes5VjScWNQxAbFvoAcya+YSvUQMr+8rV6yTAILh7cCtBRS2YcFU26TsSb861SSg49Rjg8FM4QlvcViZXkUrQNBO214qVhA9NCNgqezMSeyeXOndz9k/gtvdrOU8gEoVCec/ifysVApy1UZWA7uiijCam7z4JX7jfd/QuhWsnoZcbTGoByLMNpZBCnyhUbWRYk8lWJqKV6IhikNqkqbaTUPn1F+Eptv6U957QYE2us7UOFoHSfRpwUBIqEjihbC8zlhRcfD+VtjXIh7XkwCK4Ec0yJpuoaGz+87Kp77uzQYlPN60GkYFg7zWuhwnO6yqaKrzwAc5w2KpJlY9XXg4YxjYF+pd+nJscjRdXV9hagtQw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(136003)(366004)(346002)(376002)(39860400002)(451199021)(8676002)(8936002)(5660300002)(7416002)(186003)(6512007)(6506007)(53546011)(31696002)(2616005)(86362001)(26005)(38100700002)(41300700001)(6486002)(66476007)(66946007)(66556008)(316002)(478600001)(6916009)(36756003)(4326008)(54906003)(2906002)(31686004)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TVFUMjEwbXh3TklkUlBaSVROUW5OUGJBNWpkQURDL0RZN1N5dnpxbzlpK002?=
 =?utf-8?B?SFpnaUxCR2JpSStBRFlMT0FaSVlMUjMzUDU2TTROMGN6SFFjSExCTjFyZnNU?=
 =?utf-8?B?UThhSzFzejZHVUVHcElDald5VWhNdjJpZW5HRktKaFlTOXBjdVpFL2ZzS0Nw?=
 =?utf-8?B?UDUzd3k0cEVFSVBNRU4ybk9TVStGNEw4TlZ4dllpYW5sNU0rWjlaZnVPQmFm?=
 =?utf-8?B?YkhDNVFvWFBLTWZlZkMxcEFYemV1azlvN0NkVmlRYy9lL2VLSm5WZjNZSzNH?=
 =?utf-8?B?My9tS1lSVmEzS21oVURRMFVySTVaR0lkR3VpbFVBT3RVYTR5WjJST1BVOGtW?=
 =?utf-8?B?OFgyZVJxbVlTbjZNN1h1Tm5FaVlSUW9RbzA4VUd4STJJdWtxSEhFbmxpUmE4?=
 =?utf-8?B?NHBEMEZGT0hPYXdXWFFnT283RVk0eWUxR2l3Skd2VVdEc0laM000Y3hZT2d2?=
 =?utf-8?B?eXZlM2FWOVBIVTJqVlF4Wll2VVk0SEJQUk81VVJQbVh1b0dKRGFUMmVPdHJy?=
 =?utf-8?B?dzh1MC9DdGx4dXlLZk9sZUxSaFRSWVZMbFFwb2hXWGtHbWFESVVBeUl5ZFd6?=
 =?utf-8?B?RHpSZi80THBVYktmT2kvUFlQOVF0QXVneSs1cS8zR0VVVENFbS94dUZtZFJG?=
 =?utf-8?B?d0lveUh1RENWMlEyS2h3ZlJzQU8yclJYdVZvVlVLNVJ0b0tIQjIwUlFKNWVi?=
 =?utf-8?B?eVJEVzhZOXV0eldBQjlJVmtxSU9JS1NTYm42TGQ5QnBWWElPU3FCVlVMYVdx?=
 =?utf-8?B?SjYxR0swaVRSSXBlL1NUYTIzSmUyWFB5OGIwanl1Q05Ec1BJZmZyVUZkRGpo?=
 =?utf-8?B?TUVZNUVFc1pjbXhYYXJISFpOcWtYVjcrQzVteHhSZXdSVEp5cDJYN1ZlUUdk?=
 =?utf-8?B?bHV0bElGVmlGcnpoTXFRc1AyWlNiMGRweHVqc2w4b1FEdXlCb3lkbjRKMVNV?=
 =?utf-8?B?emNFbGVMRnlWbjM3U0l6dmpzemlPU3cxNHUyOFhKOFVHNGdOR09NbEQ0UjRV?=
 =?utf-8?B?QS9QclVOOCtaYjE0NzBEU1NDWDhVRU1QOFUyTmROK0hMbDAySTF5cy9Td3lR?=
 =?utf-8?B?akhGS2FlYkp1eXgvNVdpT3ZzRjUycGFHSGxuc1IrblNvYTVOdHZCa2w4d3dB?=
 =?utf-8?B?bFpRSTRyYWt0SXRza3o2MGpHdk5MM3VXeEZ4SE9LamlOaHB4RUxNeVo1Zmxw?=
 =?utf-8?B?SUlWSzV6SmRzLzB1MG5XZDRLZS9aMzZWTitKamlNbCszcjIxZ05Hb2ZKck9K?=
 =?utf-8?B?Z3UxY2JnR0pCWi9aYjZmYnpCMXpDaEFOck9lVTVOKzVMdzBXNXR3eDFiUFhj?=
 =?utf-8?B?ZVlUckNrK3dORHZmMkI5d1l3VjFvVTBwZ2F2elg1aVQxMTk4cEZpVG16RnVo?=
 =?utf-8?B?Mi8xR2tIdStUa3FTcHJDbUlIK09nSVdqV2NxOHlRcUFPL2lrYUZqc05ER2hP?=
 =?utf-8?B?V2tCd1h0OW41K2RwM2hWTFZjcElQYnBYYXNNd21FWC8rU2RVSUczR0RiWmdK?=
 =?utf-8?B?cmM1eWVVaHJYNWZzbk1SUWUyVldxcm9OQzNybmh3RWhhYlMvbkt1WWtMd2l5?=
 =?utf-8?B?UVBtK2VNRnhCMlpRb0x3QUZmMzNtTlZXc1RBZDhCT1R1TkoxQmVkK2phTEZk?=
 =?utf-8?B?TEV2YVBBSDdOZVVTVHVJYk1LeU96Q2RFc0VsQThzUk82OUYrTXIzaGhjejlO?=
 =?utf-8?B?UWpJRDNzTUVUVWlXKzlaTEw3S3Y0YlQ5dFFKbi9KNmtXM0VSUldEbVU0SFdq?=
 =?utf-8?B?MWJxOWJEejR5T1hIejBoZmUvdno2bHc1cE5yQUozMnR3aFJyQzRuUktSV0hW?=
 =?utf-8?B?NVlEb1RjcGxTVHlUK0N6L2NSSjIxUjJTbDd0UDJ3R1dkUzI1czBWdVQrdjdh?=
 =?utf-8?B?VnZpMWpZQkVpT05mem1ucDZ4VmhGUWFPMmpEbUtMM1pFM0YyWW9GSHZqVFJ4?=
 =?utf-8?B?alhyWjNVRS9rVnhmUm9YcDVyaVRUZDVnc3ZrZDVwRGttaG0xNHRvdjlQY2JQ?=
 =?utf-8?B?UExTakduQnhaNVdRN3dXRkxWdklYOElvbk03V21DTkhpZ2xUK1Nsb0dpUEFy?=
 =?utf-8?B?d1c2ZU0xb0tGMzZjaXR4WHRHcW4zSGRsSm5aTWpJVkZTMThBb0Nja0tpdDkz?=
 =?utf-8?Q?CKJUUg7/LLMkVFD/0b6mbd0K2?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: deae2056-9152-46de-4ebd-08db5b74c960
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 10:02:15.8254
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mPtFo+ugBzC1FzMaDeRUQxYiabuhA2lyupqZRXaRHS3J6s5tlDykCr5FZmikr8bZvLRYAWHyhncTc+mmW4XXLQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9457

On 23.05.2023 09:43, Luca Fancellu wrote:
> Add a command line parameter to allow Dom0 the use of SVE resources,
> the command line parameter sve=<integer>, sub argument of dom0=,
> controls the feature on this domain and sets the maximum SVE vector
> length for Dom0.
> 
> Add a new function, parse_signed_integer(), to parse an integer
> command line argument.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com> # !arm

> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -777,9 +777,9 @@ Specify the bit width of the DMA heap.
>  
>  ### dom0
>      = List of [ pv | pvh, shadow=<bool>, verbose=<bool>,
> -                cpuid-faulting=<bool>, msr-relaxed=<bool> ]
> +                cpuid-faulting=<bool>, msr-relaxed=<bool> ] (x86)
>  
> -    Applicability: x86
> +    = List of [ sve=<integer> ] (Arm)

While in the text below you mention this is Arm64 only, I think the tag
here would better express this as well.

> @@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
>  
>      If using this option is necessary to fix an issue, please report a bug.
>  
> +Enables features on dom0 on Arm systems.
> +
> +*   The `sve` integer parameter enables Arm SVE usage for Dom0 domain and sets
> +    the maximum SVE vector length, the option is applicable only to AArch64
> +    guests.

Why "guests"? Does the option affect more than Dom0?

> +    A value equal to 0 disables the feature, this is the default value.
> +    Values below 0 means the feature uses the maximum SVE vector length
> +    supported by hardware, if SVE is supported.
> +    Values above 0 explicitly set the maximum SVE vector length for Dom0,
> +    allowed values are from 128 to maximum 2048, being multiple of 128.
> +    Please note that when the user explicitly specifies the value, if that value
> +    is above the hardware supported maximum SVE vector length, the domain
> +    creation will fail and the system will stop, the same will occur if the
> +    option is provided with a non zero value, but the platform doesn't support
> +    SVE.

Assuming this also covers the -1 case, I wonder if that isn't a little too
strict. "Maximum supported" imo can very well be 0.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 10:05:39 2023
Message-ID: <7f01db34-3256-0c79-018b-f081863f2599@suse.com>
Date: Tue, 23 May 2023 12:05:30 +0200
Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Content-Language: en-US
To: Henry Wang <Henry.Wang@arm.com>
Cc: Wei Chen <Wei.Chen@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
 <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <13635377-e296-370d-121b-5b617dc210bc@suse.com>
 <AS8PR08MB7991DCE0DFC850FEA920BF8C92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <c195ef53-1151-1fb2-0cf9-f6f47d20b75e@suse.com>
 <AS8PR08MB79919CD7C233345113F8D5C992409@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <79859cfb-ab60-8661-e1ec-75fac74531b4@suse.com>
 <AS8PR08MB7991C727E0A90875055DA1D492409@AS8PR08MB7991.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <AS8PR08MB7991C727E0A90875055DA1D492409@AS8PR08MB7991.eurprd08.prod.outlook.com>

On 23.05.2023 09:32, Henry Wang wrote:
>> -----Original Message-----
>> Subject: Re: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
>> tree NUMA distance map
>>
>>>>>> The [2] link you provided discusses NUMA_LOCAL_DISTANCE.
>>>>>
>>>>> I inferred the discussion as "we should try to keep consistent between the
>>>>> value used in device tree and ACPI tables". Maybe my inference is wrong.
>>>>>
>>>>>> Looking at
>>>>>> Linux's Documentation/devicetree/numa.txt, there's no mention of an
>>>>>> upper bound on the distance values. It only says that the diagonal
>>>>>> entries should be 10 (i.e. matching ACPI, without really saying so).
>>>>>
>>>>> I agree that the NUMA device tree binding is a little bit vague. So I cannot
>>>>> say the case that you provided is not valid. I would like to ask the Arm
>>>>> maintainers' (putting them into To:) opinion on this, as I think I am not
>>>>> the one to decide the expected behavior on Arm.
>>>>>
>>>>> Bertrand/Julien/Stefano: Would you please kindly share your opinion on
>>>>> which value should be used as the default value of the node distance map?
>>>>> Do you think reusing the "unreachable" distance, i.e. 0xFF, as the default
>>>>> node distance is acceptable here? Thanks!
>>>>
>>>> My suggestion would be, rather than rejecting values >= 0xff, to saturate
>>>> at 0xfe, while keeping 0xff for NUMA_NO_DISTANCE (and overall keeping
>>>> things consistent with ACPI).
>>>
>>> Since it has been a while and there was no feedback from Arm maintainers,
>>> I would like to follow your suggestions for v5. However, while I was writing
>>> the code for the "saturation", i.e., adding the below snippet in
>>> numa_set_distance():
>>> ```
>>> if ( distance > NUMA_NO_DISTANCE )
>>>         distance = NUMA_MAX_DISTANCE;
>>> ```
>>> I noticed an issue which needs your clarification:
>>> Currently, the default value of the distance map is NUMA_NO_DISTANCE,
>>> which indicates the nodes are not reachable. IMHO, if in device tree we do
>>> saturation as above for ACPI-invalid distances (distances >= 0xff), by
>>> saturating the distance to 0xfe, we are making unreachable nodes reachable.
>>> I am not sure if this will lead to problems. Do you have any more thoughts?
>>> Thanks!
>>
>> I can only answer this with a question back: How is "unreachable"
>> represented
>> in DT? 
> 
> Exactly, that is also what I was trying to find but failed. My understanding
> is that the NUMA spec is defined in ACPI, and the DT NUMA binding
> only specifies how users can use DT to represent the same set of ACPI data,
> instead of defining another standard.
> 
> By looking into the existing DT NUMA implementation in Linux,
> drivers/base/arch_numa.c:numa_set_distance() contains an
> "if ((u8)distance != distance)" check followed by an early return. So I think
> at least in Linux the distance value is consistent with the ACPI spec.

Can we simply gain a similar check (suitably commented), eliminating the
need for saturation?
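For comparison, the Linux-style range check quoted above and the saturating alternative from earlier in the thread could be sketched side by side. The helper names are hypothetical, and NUMA_MAX_DISTANCE is assumed to be 0xFE per the earlier suggestion; this is not Xen's or Linux's actual code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NUMA_NO_DISTANCE  0xFF   /* ACPI SLIT "unreachable"           */
#define NUMA_MAX_DISTANCE 0xFE   /* largest meaningful distance value */

/*
 * The Linux-style range check: drivers/base/arch_numa.c rejects any
 * distance that does not fit in a u8.  Helper name is hypothetical.
 */
static bool distance_is_valid(unsigned int distance)
{
    /* Equivalent to Linux's "if ((u8)distance != distance)" rejection. */
    return (uint8_t)distance == distance;
}

/* The saturating alternative discussed earlier in the thread. */
static unsigned int saturate_distance(unsigned int distance)
{
    return distance > NUMA_NO_DISTANCE ? NUMA_MAX_DISTANCE : distance;
}
```

The check rejects out-of-range input outright (so 0xFF stays reserved for "unreachable"), while saturation silently turns an over-large distance into a reachable one, which is exactly the concern raised above.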

Jan

>> Or is "unreachable" simply expressed by the absence of any data?
> 
> Maybe I am wrong, but I don't think so: in the Linux implementation,
> drivers/of/of_numa.c:of_numa_parse_distance_map_v1(), the for loop
> "for (i = 0; i + 2 < entry_count; i += 3)" basically implies that no field may
> be omitted from a distance map entry.
> 
> Kind regards,
> Henry
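The distance-map layout implied by the loop Henry cites (complete triples of <node-a, node-b, distance>) could be sketched like this; the function name `parse_distance_map` and its output-array interface are hypothetical, not the actual Xen or Linux code.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of parsing a DT "distance-matrix" property as complete triples
 * <node-a, node-b, distance>, mirroring the loop quoted from Linux's
 * of_numa_parse_distance_map_v1().  Returns the number of triples
 * parsed, or -1 for a malformed (partial) map.
 */
static long parse_distance_map(const unsigned int *matrix,
                               size_t entry_count,
                               unsigned int *dist_out, size_t max_out)
{
    size_t i, n = 0;

    /* A partial trailing entry means a malformed map: reject it. */
    if ( entry_count % 3 )
        return -1;

    for ( i = 0; i + 2 < entry_count; i += 3 )
    {
        if ( n < max_out )
            dist_out[n] = matrix[i + 2];   /* the distance field */
        n++;
    }

    return (long)n;
}
```

This supports Henry's reading: there is no slot in the encoding for an omitted field, so "unreachable" cannot be expressed by absence of data within an entry.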



From xen-devel-bounces@lists.xenproject.org Tue May 23 10:16:35 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180912-mainreport@xen.org>
Subject: [qemu-mainline test] 180912: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 10:16:18 +0000

flight 180912 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180912/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                886c0453cbf10eebd42a9ccf89c3e46eb389c357
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    5 days
Failing since        180699  2023-05-18 07:21:24 Z    5 days   24 attempts
Testing same since   180909  2023-05-23 03:48:14 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5427 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 23 10:19:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 10:19:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538392.838285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1P7L-0003aX-41; Tue, 23 May 2023 10:19:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538392.838285; Tue, 23 May 2023 10:19:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1P7L-0003aQ-0I; Tue, 23 May 2023 10:19:47 +0000
Received: by outflank-mailman (input) for mailman id 538392;
 Tue, 23 May 2023 10:19:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BCIw=BM=kernel.org=broonie@srs-se1.protection.inumbo.net>)
 id 1q1P7J-0003aK-S1
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 10:19:45 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 551d9682-f953-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 12:19:43 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E39526299B;
 Tue, 23 May 2023 10:19:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D0C28C433EF;
 Tue, 23 May 2023 10:19:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 551d9682-f953-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684837181;
	bh=Hw4JvCY5c+o4xH0G4/ilqKI5AiMs+w7Ev9t6iQW4VSA=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=oF2yAr9oIBfEhw1WDj0+q079mjfRIQXD6lWaXj8yNXAn1n4z5I3T3WlcM2TNAp7BF
	 3+mepvSeWgLSFPGRpuTo+CmYRAec4PvQfzIorMGn6yyibuLZ+uMHnKEPKbGU/cqsEL
	 HIGrssrbPot9AwfgDePO4CWvJXLEo2uEycOdlIOPtFLlMGIpRPahL70lwHpFEa+/X0
	 VMbLgxkGmQ2R6F7FOqgy2ab4kG5KRr2d+OOQm9hFDt2oSgWdT83u8H5GBi+RujDc0e
	 O4ePjS+L2NzuJecudp/Eshc86kg5/RUvBP1TJO5lUtLG1gH6E0FDVmMY2aqUv7Oz66
	 lyfVOGhf/Vr2A==
Date: Tue, 23 May 2023 11:19:30 +0100
From: Mark Brown <broonie@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,
	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>,
	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,
	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Ross Philipson <ross.philipson@oracle.com>,
	David Woodhouse <dwmw@amazon.co.uk>
Subject: Re: [patch V4 33/37] cpu/hotplug: Allow "parallel" bringup up to
 CPUHP_BP_KICK_AP_STATE
Message-ID: <7377ca00-cb66-430f-9c97-55c60bf5d40e@sirena.org.uk>
References: <20230512203426.452963764@linutronix.de>
 <20230512205257.240231377@linutronix.de>
 <4ca39e58-055f-432c-8124-7c747fa4e85b@sirena.org.uk>
 <87bkicw01a.ffs@tglx>
 <2ed3ff77-c973-4e23-9e2f-f10776e432b7@sirena.org.uk>
 <87wn10ufj9.ffs@tglx>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="Folw/pFEu4pCxXPv"
Content-Disposition: inline
In-Reply-To: <87wn10ufj9.ffs@tglx>
X-Cookie: Beware of low-flying butterflies.


--Folw/pFEu4pCxXPv
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Tue, May 23, 2023 at 01:12:26AM +0200, Thomas Gleixner wrote:

> Let me find a brown paperbag and go to sleep before I even try to
> compile the obvious fix.

That fixes the problem on TX2 - thanks!

Tested-by: Mark Brown <broonie@kernel.org>

--Folw/pFEu4pCxXPv
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAABCgAdFiEEreZoqmdXGLWf4p/qJNaLcl1Uh9AFAmRskzEACgkQJNaLcl1U
h9BsgAf9GF4VsHf+jkaRuKOvlac1rE2bFEpVYeMuNf6KbqeD4odFEYIAUQA+YDLK
cQfMclu7qJPPej3ztYBbyVR+3mXH9m+tx5c784FHFJnEyG4Fz5bbVGEBLlcT+PCQ
AIM3pioDp3hHXcHZFKw8MyT/3VII4gf4yGIsw/rtWZZgPOvk0a1G3nctOClgwrqm
asYtDEGLWUnXYKNWNv5dqfCq7olIGl+y4R3mn9KobyW13YvPEmn9uhHz7WuOGHrI
XeXeNgg92QVhmIrSrSbLU+lUrxTqdNNE1CBav+R6fS0nmwNoKdMqgdlZok0toOUw
PoCHQ6x7O7dUMlMoIsrUsIcDOSqrdw==
=lXgK
-----END PGP SIGNATURE-----

--Folw/pFEu4pCxXPv--


From xen-devel-bounces@lists.xenproject.org Tue May 23 10:22:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 10:22:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538396.838295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1P9S-0004yZ-G9; Tue, 23 May 2023 10:21:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538396.838295; Tue, 23 May 2023 10:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1P9S-0004yS-CQ; Tue, 23 May 2023 10:21:58 +0000
Received: by outflank-mailman (input) for mailman id 538396;
 Tue, 23 May 2023 10:21:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kqVG=BM=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1P9Q-0004yM-Si
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 10:21:57 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20616.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a334c981-f953-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 12:21:54 +0200 (CEST)
Received: from AS9PR06CA0388.eurprd06.prod.outlook.com (2603:10a6:20b:460::31)
 by AS2PR08MB9546.eurprd08.prod.outlook.com (2603:10a6:20b:60d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 10:21:48 +0000
Received: from AM7EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:460:cafe::1d) by AS9PR06CA0388.outlook.office365.com
 (2603:10a6:20b:460::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29 via Frontend
 Transport; Tue, 23 May 2023 10:21:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT012.mail.protection.outlook.com (100.127.141.26) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.14 via Frontend Transport; Tue, 23 May 2023 10:21:47 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Tue, 23 May 2023 10:21:47 +0000
Received: from 8a674dac0949.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4C8B512E-1E01-4BA9-AEC9-C71C64BE2AEE.1; 
 Tue, 23 May 2023 10:21:41 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8a674dac0949.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 23 May 2023 10:21:41 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PAXPR08MB6336.eurprd08.prod.outlook.com (2603:10a6:102:158::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 10:21:38 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6411.028; Tue, 23 May 2023
 10:21:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a334c981-f953-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8NbhuJvCEb4CBeWUGnXX/Hb2tIitJI+3/qVN7WX+HUA=;
 b=SNA64zG8J7v/mPc0h9WMAwfnryE8kGJv3BbrqYkcnZ0t8ypmGWpzgVBcDF4uzAszVXj5duouw12eDZR/7LiYBr+0EMN0t9REm+vR5Xm5TI6YUgLgeRDS5HbtGViKctg6bkpSqrjZjYqXeOitgstjl5VPqCkrOyfFDp9VqmEWeUA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: a004a54d70825d36
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ImkGYcI1p5giYYJ/az1+UMuXjNxAf6ltTUqJcs5yhXTnE3a9UhwHtF3ML6Kb0A9MeK5oASt8TLkc7gTZk4y4GfYox7Zl7r43tTJh2eQ/yjri7ijngUO754ygXD7/Ewql4Q21nXdvh4C2rl22IjP4oelVlC7eWxlAp0iuVl2McpHguJNBoH9I/CTDnl5gDrASkIsySvrZA4PswaWNxYIZZXT00Si1AVZkwuDgfmHMU9rwnEx1ru3iEg1Gk/XpZs0x9uyEQqIPPW/gWC0X78annVikPWu18fLQjllH048A8Zo+Cy6s/GfDDy1CmuRfBh3HvTiWQF3eYCF4gic9LfqQAQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8NbhuJvCEb4CBeWUGnXX/Hb2tIitJI+3/qVN7WX+HUA=;
 b=iPoLw10OnhiZAOLy90q61uQ7aVy/knsbHCwje1gzgjIkE34c5K8x40wPv2PqiE+0QvkVOCfv4xBaopzHy9uMbxkwnl8nGeLQQDQ1JvwIhuEJk3QmWbMtMRc7+6AwKBdZqU18X0VLoPjLk11heK4ghwGM3OPV5HYFCkZFvuCWZ60zugOQTayrMTj6bFTLawegyB8KJ1ivd4mUXwzSohN+H49Jrn/A0jyzSsE5D4wcw946fE8sGI3NYs/TFtWmjVbtNuoXN2lX1nlK9jE84LycxTNnQQrLDQvj8icyJBRIVjHSwafBFTcKLML1HFt6C3B7Q5+Vh8fr7NPclFh9ez0+zg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8NbhuJvCEb4CBeWUGnXX/Hb2tIitJI+3/qVN7WX+HUA=;
 b=SNA64zG8J7v/mPc0h9WMAwfnryE8kGJv3BbrqYkcnZ0t8ypmGWpzgVBcDF4uzAszVXj5duouw12eDZR/7LiYBr+0EMN0t9REm+vR5Xm5TI6YUgLgeRDS5HbtGViKctg6bkpSqrjZjYqXeOitgstjl5VPqCkrOyfFDp9VqmEWeUA=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZjUpfkyJmVIVJBUW8KdO0aW075a9noLWAgAAFYIA=
Date: Tue, 23 May 2023 10:21:38 +0000
Message-ID: <6DEA0CC3-C3EF-4509-A869-807CE4B21401@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-8-luca.fancellu@arm.com>
 <48722696-ea22-1af9-2a0a-aa78972d118a@suse.com>
In-Reply-To: <48722696-ea22-1af9-2a0a-aa78972d118a@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PAXPR08MB6336:EE_|AM7EUR03FT012:EE_|AS2PR08MB9546:EE_
X-MS-Office365-Filtering-Correlation-Id: 8e022454-cfb6-41d8-19e6-08db5b77841b
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <F47FD606904CCC42BEEAA3CB54C9D44C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6336
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	afb09f92-556b-44f2-3fa5-08db5b777e8d
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 10:21:47.8036
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8e022454-cfb6-41d8-19e6-08db5b77841b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9546

> On 23 May 2023, at 11:02, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 23.05.2023 09:43, Luca Fancellu wrote:
>> Add a command line parameter to allow Dom0 the use of SVE resources,
>> the command line parameter sve=<integer>, sub argument of dom0=,
>> controls the feature on this domain and sets the maximum SVE vector
>> length for Dom0.
>> 
>> Add a new function, parse_signed_integer(), to parse an integer
>> command line argument.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com> # !arm
> 
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -777,9 +777,9 @@ Specify the bit width of the DMA heap.
>> 
>> ### dom0
>>     = List of [ pv | pvh, shadow=<bool>, verbose=<bool>,
>> -                cpuid-faulting=<bool>, msr-relaxed=<bool> ]
>> +                cpuid-faulting=<bool>, msr-relaxed=<bool> ] (x86)
>> 
>> -    Applicability: x86
>> +    = List of [ sve=<integer> ] (Arm)
> 
> While in the text below you mention this is Arm64 only, I think the tag
> here would better express this as well.

Ok I can use Arm64 instead if there is no opposition from others

> 
>> @@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
>> 
>>     If using this option is necessary to fix an issue, please report a bug.
>> 
>> +Enables features on dom0 on Arm systems.
>> +
>> +*   The `sve` integer parameter enables Arm SVE usage for Dom0 domain and sets
>> +    the maximum SVE vector length, the option is applicable only to AArch64
>> +    guests.
> 
> Why "guests"? Does the option affect more than Dom0?

I used “guests” because in my mind I was referring to all the aarch64 OS that can be used
as control domain, I can change it if it sounds bad.

> 
>> +    A value equal to 0 disables the feature, this is the default value.
>> +    Values below 0 means the feature uses the maximum SVE vector length
>> +    supported by hardware, if SVE is supported.
>> +    Values above 0 explicitly set the maximum SVE vector length for Dom0,
>> +    allowed values are from 128 to maximum 2048, being multiple of 128.
>> +    Please note that when the user explicitly specifies the value, if that value
>> +    is above the hardware supported maximum SVE vector length, the domain
>> +    creation will fail and the system will stop, the same will occur if the
>> +    option is provided with a non zero value, but the platform doesn't support
>> +    SVE.
> 
> Assuming this also covers the -1 case, I wonder if that isn't a little too
> strict. "Maximum supported" imo can very well be 0.

Maximum supported, when platforms uses SVE, can be at minimum 128 by arm specs.

> 
> Jan
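
[Editor's note: the `sve=<integer>` sub-option of `dom0=` debated in this thread (see the Subject above) has documented semantics: 0 disables the feature (the default), a negative value requests the hardware maximum, and a positive value must be a multiple of 128 between 128 and 2048 and no larger than what the hardware supports, otherwise domain creation fails. A minimal Python sketch of that acceptance logic follows; the helper name `check_sve_value` is hypothetical, and Xen's actual implementation is C code using `parse_signed_integer()` plus arch-specific checks.]

```python
def check_sve_value(sve: int, hw_max_bits: int) -> int:
    """Illustrate the documented dom0 sve= semantics (hypothetical helper).

    Returns the effective maximum SVE vector length in bits, or raises
    ValueError where Xen would refuse to create the domain.
    hw_max_bits is 0 on platforms without SVE, otherwise at least 128.
    """
    if sve == 0:
        return 0                      # feature disabled (default)
    if hw_max_bits == 0:
        # Any non-zero request on a platform without SVE is an error.
        raise ValueError("platform does not support SVE")
    if sve < 0:
        return hw_max_bits            # negative: use the hardware maximum
    if sve % 128 != 0 or not 128 <= sve <= 2048:
        raise ValueError("length must be a multiple of 128 in 128..2048")
    if sve > hw_max_bits:
        raise ValueError("requested length exceeds hardware maximum")
    return sve
```

[A value such as `sve=256` on hardware supporting 512-bit vectors would yield 256, while `sve=512` on 256-bit hardware would fail, matching the "domain creation will fail" behaviour described in the quoted documentation.]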


From xen-devel-bounces@lists.xenproject.org Tue May 23 10:31:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 10:31:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538400.838304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1PJ4-0006UC-DR; Tue, 23 May 2023 10:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538400.838304; Tue, 23 May 2023 10:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1PJ4-0006U5-Af; Tue, 23 May 2023 10:31:54 +0000
Received: by outflank-mailman (input) for mailman id 538400;
 Tue, 23 May 2023 10:31:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jTts=BM=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1PJ3-0006Tz-FH
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 10:31:53 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20610.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 07500e1b-f955-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 12:31:51 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DUZPR04MB10062.eurprd04.prod.outlook.com (2603:10a6:10:4e2::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 10:31:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Tue, 23 May 2023
 10:31:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07500e1b-f955-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=V8Krp7ZEIyHS8zQAvhS3JSxMXQmWsqAYTwP/hqLKyxP3dzwZL+TKko2auCB1ZnJxIHSrJUW/MlL/a2C7ExEPQFq2MwG8TJJtfgp2X7n8epCswYx66Z8lUQcBvbdLvWvezR0f6RQR0Zl98HQova82eapWLiQmYPmoftTQYpZj3+QPY8VAIh6jhpewrvzytTLgTtYrwJFB3CR7qvavexVxaVQvoOAWTxS6Z/5SVLmt6CFSdMiPXSOh0r2O8IPRxXQu3aWk03K6P1/hn5lI7e82eABs5HrkidhO1j5J94hMLguHMjBrdlfrr5RuW8K7mYA0ttd/mygavwE97sUfuOsVgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Ahmiuw8dLkApBHTPn4Wt72OFNmIZiMr5ocHFJRn6XSM=;
 b=NronREirkXlc8273I2cClzclErVTUQZZZtZzS5yABiKEoQ3XYHLrpZhBVEbxyce1/dMRPiyJ/SLEPBGupAu7dXO9hzhsOCHMwd8C8EY8kTqVpttdSD2JeNEGx2kBvW93oHBnSlmin2BI8MmYYcz4k8FEBEkATjH5YZpWe1oSjbZA9tx6PgdEKFZ/vBsJlMCjS9NyVpjkBD1jMmgr7FQ14MuNhCVrsImsWAkC19m84gDDU+tl7wmg1nrtWhRJD/BihGtUMikHDHPJuds6VN4fQpFfIHKLTMFa8u+4IWfeqxrDdYTwpPtZA1e4Z7L3IMPYzJjmi96ipwS9iHNWYNGODA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ahmiuw8dLkApBHTPn4Wt72OFNmIZiMr5ocHFJRn6XSM=;
 b=RggcN0XFV2DYPBCL+EQPNC0LUaaOHMvChNfRPloeOyJwjT7WMC1A67FvavB13oZmECAsHxRMNbmBrW40kRP1D5t/PgzN8CPpm3TwOG4U1fWHOZajfOv72jX4VrjzO0O9EEuLW71iSoqE0vmxlsWItNBYSSE4jn3eSOX3DGlYTmDaW0eKWiYxKJRe/Ek9ZjFb73eA6w91u7fS2/wIAJ28TPCTfP0CsrA6jobps9/1du2WpqKuvAskIal4qftc0wzm1OV0tmvM5nQmTKMlwDPOfvTayOxRg9wKI9htOY/uJ12bUD8V7jWSd7KQq70F5jcZyE4GC3c5gV1bLQE5I9J2Xg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <767b11a1-4d43-9057-1fcc-6516fea64fb3@suse.com>
Date: Tue, 23 May 2023 12:31:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-8-luca.fancellu@arm.com>
 <48722696-ea22-1af9-2a0a-aa78972d118a@suse.com>
 <6DEA0CC3-C3EF-4509-A869-807CE4B21401@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6DEA0CC3-C3EF-4509-A869-807CE4B21401@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0076.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1f::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DUZPR04MB10062:EE_
X-MS-Office365-Filtering-Correlation-Id: 5c63b4a0-f35c-4a07-d062-08db5b78e9c3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	aScPZEK1uja/4MQqaosyBiOkwoXWCmRbRBRLHs6t+U8rp8nKxQjnsPm41wNjL5wSm56GeUtnMP/+kVaHarrdSMI2OcRWtaS9e/R7Aukt1kUz37aHY4A7AJRbh2Hgi2t8bLkiDyAGxRESntE6vvXoIft3qTyyVA0mIPWYDi7uQ8+UesYvLehdqQFAYKJXZFq5q3AuIFecPTb4vzwXtETicBKK9Usgt28gbVLEzQUlDdb904JfdWqIEXTnQFirSkCna4WvDMNWj01nMV+j2AMtqCCKPkkjF9dAfIWaeRLHGkmbvxcTlMMFeqA7YV2WfpnB+HQ6OGMl1rxT2JLj40na1vUeQdpk8ojwKeKVjmM3oPhEmxmlarAHGZqW8etPGt5GGX+VYrTd0K0HZBtmxNg+C2DiiZvjO0cSN+cV9gWcrr7KHRDFmCx726WCi0UCEJ+EeqGS4gTLrRC0w1e+ws4Ix0YvSQiE5Kh87wlxr1wOYAv95YqoqbGJmpq37BpzlTwyrQSZUOQS2iyTW2lconeexmoZypdC0JTi4A4VjgLg1UZhXsGIar2zhwfoUi9pTSriT+s3eBHhKDKqzCtMrSS7Qbe167mh9VoBYqfVJyVf1U+jOTy35FS0/U5fvBmK/RHvasEMAflIbwW/WlnD+R0bpg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(396003)(346002)(376002)(39860400002)(366004)(136003)(451199021)(7416002)(5660300002)(8936002)(8676002)(2906002)(36756003)(186003)(2616005)(38100700002)(86362001)(31696002)(26005)(6512007)(6506007)(53546011)(54906003)(31686004)(316002)(478600001)(66476007)(4326008)(6916009)(66556008)(66946007)(6486002)(41300700001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?L2F2clFFR2lPKy9PeHV2YUpqVmxicWsrZzA2bjN2RWxURmNDcnF1eVhQbXBa?=
 =?utf-8?B?ZW12LzlFTkwvNTJqQnZLOGVNZUdNOFNmMUxaUFA1Mk5mQlYybThzdlQwWFRQ?=
 =?utf-8?B?SDhZaTZqSDI5Wlpmd0tkMGpqOXlSeGdNdTh0LzhXbVZ0ODNNaWY3eDRLbk1U?=
 =?utf-8?B?WVNPL2k3Ri9pQWNRZUtsMU5wZ0dkbisybEthakhhTG9ucXkyTmFick4xcXNh?=
 =?utf-8?B?SXROVFB2UEwyQk5selI3Um9WbkhvaVNDeXV1cnBCQmNIYktmcCswdGl2Ny9n?=
 =?utf-8?B?VEpib2VPVFI4S3E0TTNZbE9BVDJvdlpycHhQaVI3NnV4TnFCS3RyZHc4REpS?=
 =?utf-8?B?V1BYSmorTkowclB6eHpCZFFndGJHQjh4SGZLemEvM0hmWko2eTdWUkJsVG9v?=
 =?utf-8?B?Lzd5cklwNG9PU0MwdnVFd0gzZlhyVmNFellHV1FqYUJUam9EaDQwRU8yZlJx?=
 =?utf-8?B?RlpEbDRsblBhWFpLYTUwVnhOaXlUT3dHdStmVWlqeWZpQWlwUHJqcXRVMmwz?=
 =?utf-8?B?dWFQRHI4aTJ6MDVUbnJ5bzMwaEdhZDk4Ui9KUW96bGJQTnNKcGxibjVaWUNF?=
 =?utf-8?B?YUkxQVlRVmo4S1dTU3JGUVYvUEtsRnVDczNDZE5EK1FJdkh4bDFhUFVvS0V2?=
 =?utf-8?B?eUlrQ3FGMVBIQ3dmS2VqUUoyckJnMmRwMi9uOWtIcW9LamdDZkFnL0JnUE9L?=
 =?utf-8?B?VmZqT2NaTWxEYndRYURoTEdtR3U3dEdtZ08yc3FjY3JzVitIb2Z6YlY3YllV?=
 =?utf-8?B?M3hjbXVQS2pJam1wbmRweFhQM0NleUJER3RlaWRES1JRR1pRMElOeE44cjl5?=
 =?utf-8?B?V09WVk9id0FKeUliNVhLZXRoSWJOUmt1WWJVTGppR21KUzhBNHR3U0g5eDFS?=
 =?utf-8?B?ZFhWaUEzR1lNTU1CdDMvTm4xNGd6R0JWT0Qwb21WZHh5QVc2aGw3TXJ5eE1z?=
 =?utf-8?B?b0poNWVnNE9CbUxEZnJvYWFZdHdzRTB2UEZPNGRCb0l0VGc1MElLampjNmhR?=
 =?utf-8?B?OVhGSGtpM09GTitaSjd4aGpjLzl5Kzl1ejVEM0RTTUNabUwzaHA4M3l2K0pB?=
 =?utf-8?B?ZkVnWUNkN0prdmJKWjlMQlFJY0RQcVpvaWJKS3F1SWd5ZjVlVW1XbHFTWEds?=
 =?utf-8?B?b0ZkWGoxWDkvNUFMT3lGRGRoc0svYWFTWS9uTmdzdlBUZURMUTZuMjU4Vmpl?=
 =?utf-8?B?SHZsaklhUGtITlRhUmlCMUNGU1ViU0dQWWRnVkRKWEhkVmdBaWlvUU91NVRX?=
 =?utf-8?B?d2hUUDhqV3BoOEd6aGI4cjE3bmV1WXpiYjFZa0lJNVZXNUh4eHBPZXFTL25F?=
 =?utf-8?B?OXNmaEFUdUZ1NXNFWDl5clp3ZXA4SisyYllGd0VzZGg2MFpQaXBqN1lpUXpO?=
 =?utf-8?B?eFlyMHlaL0Y0SDlDMGprWXRjbjdQMEtsd0h6cm1GYWFjcnRSRkZoOHZFLzg2?=
 =?utf-8?B?ZDk4K0llOS9NNm9hTE41M3VQZ1pWeHlmdDNSOHNRaUpIbnpNNms0Y2wwSVlP?=
 =?utf-8?B?MTdUZlpqeldSeE40LzZHZlJRZXg3UzZIM243aFgwUkJ5bjFJZ1pJcHpYUXpE?=
 =?utf-8?B?eHJRUUpMUzBxa3lsWEpCZk0zUDB2QUhQbEIrSkIrK05xY1BlM2xkWHJCT3Jl?=
 =?utf-8?B?RzZPMzMvcEdYdnlUZFNhbkppQ0h5OGF0ZXdxR2MwRkRPZjlQaUlOZ3VHOXg0?=
 =?utf-8?B?QjFCcWVLSHo3MCtvQ00veGlsSXBOWUw4VkpaM09wVTZKNGpweTMwU1VsazI1?=
 =?utf-8?B?VXc5b2lCYlJJWGVmTEc5RURqN3E5eU5VT0NWVU1oQmpjaUdkNVJya3lrMWhP?=
 =?utf-8?B?K1lLLzhzT0w5cTVBdVptc01oNUp6WGJvSWEreGN1S0NYT3RleTkxQ0I1N1U3?=
 =?utf-8?B?aHFoMStIRk13NElhekxvbzIvSER6clh5eFJBNS9PRm9XdzBFWHJ4a1R6NFM5?=
 =?utf-8?B?VjhwLzJOeStLREpic0s5cXlxTjI1dVlacktWNjJwV0Rxd04xVWJ0Z2V0NVVJ?=
 =?utf-8?B?UWNQczRUM041ZmJCWUFwaHFoUUJWeVdESjdiSjRtaFBKdE1Iekt3V2tYM2NZ?=
 =?utf-8?B?Z0xUWHF6WmkxcExjOERpWkZDK1BUUmpIaEZzRlVXTDRZWFdaU0orMTNnWUlD?=
 =?utf-8?Q?QgvRsMq9VQnT9UeE4h7DLB32r?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c63b4a0-f35c-4a07-d062-08db5b78e9c3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 10:31:48.4436
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: AUrqkEsWXcB/6cJA3/Szj9K3I8Anr8vxg143LG+8T5pa9koTMI736Y2l8TBP+24LcN/c2Q5INash8HdJrprL5A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DUZPR04MB10062

On 23.05.2023 12:21, Luca Fancellu wrote:
>> On 23 May 2023, at 11:02, Jan Beulich <jbeulich@suse.com> wrote:
>> On 23.05.2023 09:43, Luca Fancellu wrote:
>>> @@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
>>>
>>>     If using this option is necessary to fix an issue, please report a bug.
>>>
>>> +Enables features on dom0 on Arm systems.
>>> +
>>> +*   The `sve` integer parameter enables Arm SVE usage for the Dom0 domain
>>> +    and sets the maximum SVE vector length; the option is applicable only
>>> +    to AArch64 guests.
>>
>> Why "guests"? Does the option affect more than Dom0?
> 
> I used “guests” because in my mind I was referring to all the aarch64 OSes that can be
> used as the control domain; I can change it if it sounds bad.

If you mean OSes, then better also say OSes. But maybe this doesn't need
expressing specifically, e.g. by saying "..., the option is applicable
only on AArch64"? Or can a Dom0 be 32-bit on Arm64 Xen?

>>> +    A value equal to 0 disables the feature; this is the default value.
>>> +    Values below 0 mean the feature uses the maximum SVE vector length
>>> +    supported by the hardware, if SVE is supported.
>>> +    Values above 0 explicitly set the maximum SVE vector length for Dom0;
>>> +    allowed values are from 128 to 2048, in multiples of 128.
>>> +    Please note that if the user explicitly specifies a value above the
>>> +    hardware-supported maximum SVE vector length, domain creation will fail
>>> +    and the system will stop. The same will occur if the option is provided
>>> +    with a non-zero value but the platform doesn't support SVE.
>>
>> Assuming this also covers the -1 case, I wonder if that isn't a little too
>> strict. "Maximum supported" imo can very well be 0.
> 
> Maximum supported, when a platform uses SVE, is at minimum 128 per the Arm specs.

When there is SVE - sure. But when there's no SVE, 0 is kind of the implied
length. And I'd view a command-line option value of -1 as quite okay in that
case: they've asked for the maximum supported, so they'll get 0. No reason
to crash the system during boot.
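To illustrate the distinction, something along these lines (purely a
sketch of the semantics being suggested, not the actual Xen parsing
code; the helper name and shape are made up):

```c
#include <assert.h>

/*
 * Illustrative sketch only, not the Xen implementation: "opt" is the
 * dom0 sve= command line value, "hw_max_vl" the hardware maximum SVE
 * vector length in bits (0 when the platform has no SVE).  Returns
 * the vector length to use, or -1 to signal a configuration error.
 */
int resolve_sve_vl(int opt, int hw_max_vl)
{
    if ( opt == 0 )        /* feature disabled (the default) */
        return 0;

    if ( opt < 0 )         /* "maximum supported": may legitimately be 0 */
        return hw_max_vl;

    /* Explicit values must be multiples of 128 in [128, 2048] ... */
    if ( opt < 128 || opt > 2048 || opt % 128 )
        return -1;

    /* ... and must not exceed what the hardware offers. */
    return opt <= hw_max_vl ? opt : -1;
}
```

With this shape, sve=-1 on a non-SVE platform quietly resolves to 0
rather than stopping the system, while explicit non-zero requests on
unsupported hardware still fail.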

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 10:36:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 10:36:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538406.838315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1PNB-00079T-1D; Tue, 23 May 2023 10:36:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538406.838315; Tue, 23 May 2023 10:36:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1PNA-00079M-UQ; Tue, 23 May 2023 10:36:08 +0000
Received: by outflank-mailman (input) for mailman id 538406;
 Tue, 23 May 2023 10:36:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MIFg=BM=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q1PN9-00079G-Ev
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 10:36:07 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0625.outbound.protection.outlook.com
 [2a01:111:f400:fe02::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9f122520-f955-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 12:36:05 +0200 (CEST)
Received: from AM5PR0601CA0026.eurprd06.prod.outlook.com
 (2603:10a6:203:68::12) by AM0PR08MB5299.eurprd08.prod.outlook.com
 (2603:10a6:208:18d::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Tue, 23 May
 2023 10:36:02 +0000
Received: from AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:68:cafe::38) by AM5PR0601CA0026.outlook.office365.com
 (2603:10a6:203:68::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29 via Frontend
 Transport; Tue, 23 May 2023 10:36:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT031.mail.protection.outlook.com (100.127.140.84) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.14 via Frontend Transport; Tue, 23 May 2023 10:36:01 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Tue, 23 May 2023 10:36:01 +0000
Received: from 61a77d812223.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 10907E15-410A-412C-81E6-E0EA105F3A9F.1; 
 Tue, 23 May 2023 10:35:49 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 61a77d812223.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 23 May 2023 10:35:49 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com (2603:10a6:20b:570::15)
 by DU0PR08MB8472.eurprd08.prod.outlook.com (2603:10a6:10:404::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Tue, 23 May
 2023 10:35:48 +0000
Received: from AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6]) by AS8PR08MB7991.eurprd08.prod.outlook.com
 ([fe80::2db3:aa30:7be0:10a6%4]) with mapi id 15.20.6411.028; Tue, 23 May 2023
 10:35:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f122520-f955-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HA0mV2B5cDKm8ei7Ve+40Brx4aFdGQWaPzgY9XxKeHk=;
 b=nTAScV4zfh45q2hhZ6WLgpDXYY7psT0idPEWDUlffZSl7q9U3wBl8Pm+QJUuIO3ZlVg2vTaVRLTD5pr8k2Is9u25zxLt+CJKyWsmFN1pSLPLVTJp8sXLKXRzdy9OCz9305ykgXrZmQkeRe+93k9/O5E4iNJWjkvjz8bFeEEZ+ug=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1abb61c29e232e67
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RDocUBfFhxuFdEWf/ZzsMoxM83oNMKrxpJyK/Fxm7iM7GSdNpXYZvZFBaWyOsyVOgnQsQtGv7nNhS7DBdL6PTNvQp5jqRqko0jM+wbBCu9WhFmNVKk/xgXkNdehVr9qLWQaqvNDlvjWrL5/SY2DSzS8gsDiCm/k/9ExV5YS9h0k1l++mfaSFMWphDqcAMf8aZIzVb4kzNNCNfLloMSH3ODMi1vH12JMddUIX0oGu0+f8Z3KCMiMBmqXA09WSDycv05y6Bc0aOMPwcm0/EdgWlshFE0Bs8ol8Zu+ezaR57l22LnI7ebSgMxwu1D885mugJE9bVM/fsFRR6WqQxsRikg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HA0mV2B5cDKm8ei7Ve+40Brx4aFdGQWaPzgY9XxKeHk=;
 b=iuY9x6pm/3BgNlLX5CEV8VavEWK4dbvFORCgBBu22oKMK/6BFx9M2hz51NHtTHNNNtGNXP1mupb+KS8+oMH0jQ2LdMtHFiMvMKEHp/LxRQdk/SQIc6xCJvn1sph9t9bTkfd9NkjBKNMRTBB6gfGUYBme21HQ8REdyeAeCFutTORLOpoF/il4VkDfB20L8xOJWUPY0E4RouTHVrBneWbcn4YGdNoZAA0WGJNiUqaZ82Lhm1DMHW4GtuoDXQ/ICPIUaRwB/+Zn696c6W0CfAVGJz76yT3rwGVFa6btZ3ZmhnUoF+DnyNEJib6WdYYVHcVQAvFnFenjjE9I7ft9IyckPw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HA0mV2B5cDKm8ei7Ve+40Brx4aFdGQWaPzgY9XxKeHk=;
 b=nTAScV4zfh45q2hhZ6WLgpDXYY7psT0idPEWDUlffZSl7q9U3wBl8Pm+QJUuIO3ZlVg2vTaVRLTD5pr8k2Is9u25zxLt+CJKyWsmFN1pSLPLVTJp8sXLKXRzdy9OCz9305ykgXrZmQkeRe+93k9/O5E4iNJWjkvjz8bFeEEZ+ug=
From: Henry Wang <Henry.Wang@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Chen <Wei.Chen@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
Subject: RE: [PATCH v4 09/17] xen/arm: introduce a helper to parse device tree
 NUMA distance map
Thread-Topic: [PATCH v4 09/17] xen/arm: introduce a helper to parse device
 tree NUMA distance map
Thread-Index:
 AQHZd0ulMpqFICe/LUiDVq6+aDsW9687sveAgAFTkCCAACNagIAAAPgggAADbgCAKli08IAADIuAgAADKTCAADbvAIAAB2ng
Date: Tue, 23 May 2023 10:35:47 +0000
Message-ID:
 <AS8PR08MB7991E08D46F3609CBC5805F992409@AS8PR08MB7991.eurprd08.prod.outlook.com>
References: <20230425075655.4037980-1-Henry.Wang@arm.com>
 <20230425075655.4037980-10-Henry.Wang@arm.com>
 <e03bbb52-1a19-7d18-4abe-75bbef8a0aee@suse.com>
 <AS8PR08MB799117EDD6BAB892CAB870A192659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <13635377-e296-370d-121b-5b617dc210bc@suse.com>
 <AS8PR08MB7991DCE0DFC850FEA920BF8C92659@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <c195ef53-1151-1fb2-0cf9-f6f47d20b75e@suse.com>
 <AS8PR08MB79919CD7C233345113F8D5C992409@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <79859cfb-ab60-8661-e1ec-75fac74531b4@suse.com>
 <AS8PR08MB7991C727E0A90875055DA1D492409@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <7f01db34-3256-0c79-018b-f081863f2599@suse.com>
In-Reply-To: <7f01db34-3256-0c79-018b-f081863f2599@suse.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 600217892549AD4BBD9DB346F7F6AB12.0
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7991:EE_|DU0PR08MB8472:EE_|AM7EUR03FT031:EE_|AM0PR08MB5299:EE_
X-MS-Office365-Filtering-Correlation-Id: 8a6407a6-18de-4b94-ece5-08db5b798127
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 oavPctWkw2+Kk3iX8+VKy8DmG5su8740j+EXIwojGss+ehP6YVuChuESn9DpVaJglhWRwj56zci6boGVJu9FcyWxBmiynF9mlsCY20L3sN4KdBFIZZV3A22cHtOuZMkM11an0N4Ue/M7JZr8f6+kNuDCP0wJeGPk5n4ANVHq7VK2AXW8mY+CMdHhLjrxCx9gtvNqXGaZOZnp1hocsb6PdJ+ksoE0ROz0mDmKySBUkMe3Ftf4uY3vsPzhcL7miC1gTj9NutGCXLCqcpw1Vcwy+nyOB14CCvAkvAQ9Djhr+VNbMXJwoIYikuLaF6VZXaWAuJnzL6RdNteQ6Xk32afhADqPGQLwLtR0l1SsXQpwC5r35uypSyLmd6IZrYs7RO5IfwewMxa5c9B5RK2Gac51WwVcfezDRc4JKWQs0+ZUcCyeO/TSq+VQUfJEMHxIbKBvnBOoIaybIUzoE3kHLjZKY1orLUYSDQcG9zuIBU8eYwKEanLGEG9bKL7AwQqk3vvtES2+vwy1x7A2hgZoI3MmtFa1ESdzUs8tr9LUBgJRdxD+elV7o1yvvLHT/T9CjBPWyv2m+KXGJWi1dIa3f9QlcCVhHwt0yAfOhiexYPLGvjA=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7991.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(396003)(39860400002)(346002)(376002)(136003)(451199021)(186003)(2906002)(4744005)(55016003)(8676002)(8936002)(6506007)(26005)(9686003)(52536014)(53546011)(5660300002)(478600001)(66946007)(66556008)(64756008)(76116006)(66476007)(66446008)(6916009)(4326008)(86362001)(38100700002)(38070700005)(54906003)(122000001)(83380400001)(41300700001)(7696005)(71200400001)(316002)(33656002);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8472
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f65823bb-634e-4f31-0468-08db5b7978d2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	sgSsUbS8xLhQyD/XJnT+n6V3WgkHCux5UEzX4h7g/jKY9KV9bDU2d9GPV0ccgGbj1VLgFNCiSWAtA2lBRFM71liRffK6uFBon52CQ/3HW21Fdbz2lPuZJfijJ35S43Wfxhvk6XXDZvwUa90cazh6b4GgaEFwKhTdTuxZam8KCqvNQSZevT/EfJLwdn2bJjarrFoOlHTJWI0IgHcMgaFsCpt/z2ioI/JTBLcZt/md0e68DL8GUKOFeHemNkvuIIm/ISAmsFX1EdizbT9VUw1bnfY+puXAVP7a+MiR01w1m5uthSMUZK7ogm/9qL+wANzcCeuxNcS6LZwFNHCp8t8hgHv4CzKrxWA61n4HTVrqq2H+RwULDhRtNUwjsgRlPxURVhfiQNP0adej2vCNPn1TXvvjOpVObNbBsvgh3nY+9UerB+e/MGfcotRhAX70ragkgYsrlddq+zWLGaxxCYqxaADIFVJm13eg3DoyvKWZ8rGR2bcNIHZuk+E4DTNg2QY4VqqJe29RKE5eP1C3llgM1pYg7oLEH/b2+U8aiMSVAdy8KESEI8vCe5gDm6JKr9m5nlmqZu2g2r9h+RiD13xZ5GqqcvQAI4mabB7dvAZCYJ9WrtsKUAjAlRFc1hTZe/AubHyHM+wBYgeZ2znYmKcXZA6LbZgZMnAEm8q6phrhjqz/f3lwcRBnZDwgsUjWWNa0qo59qOWWge8KwAmGRTyGfO21W1yXsE4LML4GaK/lPm8=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(346002)(396003)(39860400002)(451199021)(36840700001)(40470700004)(46966006)(41300700001)(7696005)(316002)(26005)(53546011)(6506007)(9686003)(40480700001)(55016003)(82740400003)(8676002)(81166007)(356005)(8936002)(5660300002)(52536014)(6862004)(40460700003)(86362001)(478600001)(82310400005)(36860700001)(83380400001)(54906003)(2906002)(47076005)(4744005)(336012)(70586007)(70206006)(4326008)(186003)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 10:36:01.8480
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8a6407a6-18de-4b94-ece5-08db5b798127
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5299

SGkgSmFuLA0KDQo+IC0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQo+IE9uIDIzLjA1LjIwMjMg
MDk6MzIsIEhlbnJ5IFdhbmcgd3JvdGU6DQo+ID4gQnkgbG9va2luZyBpbnRvIHRoZSBleGlzdGlu
ZyBpbXBsZW1lbnRhdGlvbiBvZiBOVU1BIGZvciBEVCwNCj4gPiBpbiBMaW51eCwgZnJvbSBkcml2
ZXJzL2Jhc2UvYXJjaF9udW1hLmM6IG51bWFfc2V0X2Rpc3RhbmNlKCksIHRoZXJlIGlzIGENCj4g
PiAiaWYgKCh1OClkaXN0YW5jZSAhPSBkaXN0YW5jZSkiIHRoZW4gcmV0dXJuLiBTbyBJIHRoaW5r
IGF0IGxlYXN0IGluIExpbnV4IHRoZQ0KPiA+IGRpc3RhbmNlIHZhbHVlIGlzIGNvbnNpc3RlbnQg
d2l0aCB0aGUgQUNQSSBzcGVjLg0KPiANCj4gQ2FuIHdlIHNpbXBseSBnYWluIGEgc2ltaWxhciBj
aGVjayAoc3VpdGFibHkgY29tbWVudGVkKSwgZWxpbWluYXRpbmcgdGhlDQo+IG5lZWQgZm9yIHNh
dHVyYXRpb24/DQoNClllcywgSSB3aWxsIHVzZSB0aGUgc2ltaWxhciBjaGVjayBhbmQgYWRkIHRo
ZSByZWxhdGVkIGNvbW1lbnRzIGluIHY1LiBUaGFua3MNCmZvciB5b3VyIGNvbmZpcm1hdGlvbiEN
Cg0KS2luZCByZWdhcmRzLA0KSGVucnkNCg0KPiANCj4gSmFuDQo=


From xen-devel-bounces@lists.xenproject.org Tue May 23 10:58:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 10:58:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538410.838324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Pis-0001Er-Lj; Tue, 23 May 2023 10:58:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538410.838324; Tue, 23 May 2023 10:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Pis-0001Ek-J5; Tue, 23 May 2023 10:58:34 +0000
Received: by outflank-mailman (input) for mailman id 538410;
 Tue, 23 May 2023 10:58:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1Pir-0001Ea-0W; Tue, 23 May 2023 10:58:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1Piq-0001qU-Rb; Tue, 23 May 2023 10:58:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1Piq-0008Lo-Gg; Tue, 23 May 2023 10:58:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1Piq-00024K-GD; Tue, 23 May 2023 10:58:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dalH7Id8mkH1QD1nkMbKXbCUw2IJn+Uuxzm+rKScls8=; b=X8h5QZlwLwmsAbLlOuh6CGwb5v
	hYRVNf1gxGVpSmqtVmo+lRMHab0c7f44ZSnyrjXziyga6x5Ns4Z1n79/ea072y0qGnEAr+OhjxR2G
	kqGktRMIdFuZCeLp7j2LUTCxvAWjZSu9uuWWs6dEWPqsP6PrATxC/84/eDOaP0S2TX14=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180907-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180907: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ae8373a5add4ea39f032563cf12a02946d1e3546
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 10:58:32 +0000

flight 180907 linux-linus real [real]
flight 180915 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180907/
http://logs.test-lab.xenproject.org/osstest/logs/180915/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ae8373a5add4ea39f032563cf12a02946d1e3546
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   36 days
Failing since        180281  2023-04-17 06:24:36 Z   36 days   67 attempts
Testing same since   180907  2023-05-23 01:11:53 Z    0 days    1 attempts

------------------------------------------------------------
2480 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 313746 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 23 11:00:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 11:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538415.838334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1PkS-0002cj-0t; Tue, 23 May 2023 11:00:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538415.838334; Tue, 23 May 2023 11:00:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1PkR-0002cc-U1; Tue, 23 May 2023 11:00:11 +0000
Received: by outflank-mailman (input) for mailman id 538415;
 Tue, 23 May 2023 11:00:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YHTC=BM=citrix.com=prvs=500ef747c=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q1PkQ-0002cU-Dt
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 11:00:10 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f96e7451-f958-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 13:00:08 +0200 (CEST)
Received: from mail-dm3nam02lp2046.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 May 2023 06:59:59 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SN7PR03MB7132.namprd03.prod.outlook.com (2603:10b6:806:352::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 10:59:56 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Tue, 23 May 2023
 10:59:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f96e7451-f958-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684839608;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=QASePHSTrMv7/KOl6YpulxDw6XcjQASsOTUOoT4aOog=;
  b=Lq8plXSpd91KYF1ks98sbRJEw/AIFyorDVKUAHfH3dM8UZsAXZ4pct+x
   FS3OMf6m8s1vvADlnjfG9F6FuAxSfVujM4tV57g6y14uTm+U/L81PsuQp
   1ZZAMYQEpaT3l6jEQNpcfBsrL27lrIfQXtK8siZdZP+qF6JBWAF5ZRKzI
   w=;
X-IronPort-RemoteIP: 104.47.56.46
X-IronPort-MID: 112519965
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 23 May 2023 12:59:49 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
	xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
	xenia.ragiadakou@amd.com
Subject: Re: PVH Dom0 related UART failure
Message-ID: <ZGycpaCkSvWecsuE@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
 <ZGcu7EWW1cuNjwDA@Air-de-Roger>
 <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
 <ZGig68UTddfEwR6P@Air-de-Roger>
 <alpine.DEB.2.22.394.2305221520350.1553709@ubuntu-linux-20-04-desktop>
 <818859a6-883a-81c0-1222-84c62ada3200@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <818859a6-883a-81c0-1222-84c62ada3200@suse.com>
X-ClientProxiedBy: LO6P123CA0058.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:310::19) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SN7PR03MB7132:EE_
X-MS-Office365-Filtering-Correlation-Id: 5a5f1a92-2b5e-49bc-e8ff-08db5b7cd7db
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5a5f1a92-2b5e-49bc-e8ff-08db5b7cd7db
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 10:59:56.2086
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IzDhX2kwXe6OiFuX0onEkXcayLJueturpBIpoKqlaPBLO4gusP51C7t2KBI2zD88JIA+14WpCSA+uIvPqvOR4g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR03MB7132

On Tue, May 23, 2023 at 08:44:48AM +0200, Jan Beulich wrote:
> On 23.05.2023 00:20, Stefano Stabellini wrote:
> > On Sat, 20 May 2023, Roger Pau Monné wrote:
> >> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> >> index ec2e978a4e6b..0ff8e940fa8d 100644
> >> --- a/xen/drivers/vpci/header.c
> >> +++ b/xen/drivers/vpci/header.c
> >> @@ -289,6 +289,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>       */
> >>      for_each_pdev ( pdev->domain, tmp )
> >>      {
> >> +        if ( !tmp->vpci )
> >> +        {
> >> +            printk(XENLOG_G_WARNING "%pp: not handled by vPCI for %pd\n",
> >> +                   &tmp->sbdf, pdev->domain);
> >> +            continue;
> >> +        }
> >> +
> >>          if ( tmp == pdev )
> >>          {
> >>              /*
> >> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> >> index 652807a4a454..0baef3a8d3a1 100644
> >> --- a/xen/drivers/vpci/vpci.c
> >> +++ b/xen/drivers/vpci/vpci.c
> >> @@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
> >>      unsigned int i;
> >>      int rc = 0;
> >>  
> >> -    if ( !has_vpci(pdev->domain) )
> >> +    if ( !has_vpci(pdev->domain) ||
> >> +         /*
> >> +          * Ignore RO and hidden devices, those are in use by Xen and vPCI
> >> +          * won't work on them.
> >> +          */
> >> +         pci_get_pdev(dom_xen, pdev->sbdf) )
> >>          return 0;
> >>  
> >>      /* We should not get here twice for the same device. */
> > 
> > 
> > Now this patch works! Thank you!! :-)
> > 
> > You can check the full logs here
> > https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4329259080
> > 
> > Is the patch ready to be upstreamed aside from the commit message?
> 
> I don't think so. vPCI ought to work on "r/o" devices. Out of curiosity,
> have you also tried my (hackish and hence RFC) patch [1]?

For r/o devices there should be no need for vPCI handlers; reading the
config space of such devices can be done directly.

There's some work to be done for hidden devices, as for those dom0 has
write access to the config space and thus needs vPCI to be set up
properly.

The change to modify_bars() to handle devices without vpci populated
is a bugfix, as it's already possible to have devices assigned to a
domain that don't have vpci set up, if the call to
vpci_add_handlers() in setup_one_hwdom_device() fails.  That one could
go in separately from the rest of the work to enable support for
hidden devices.
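The modify_bars() guard above (skip any pdev whose vpci field was never
populated) can be illustrated with a small stand-alone sketch.  The
struct and the count_vpci_tracked() helper below are illustrative
stand-ins, not Xen's actual types:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for Xen's struct pci_dev: only the fields we need. */
struct pci_dev {
    void *vpci;              /* NULL when vPCI handlers were never set up */
    struct pci_dev *next;    /* domain's device list */
};

/* Walk a domain's device list, counting only devices vPCI tracks.
 * Devices with a NULL vpci pointer are skipped instead of dereferenced,
 * which mirrors the bugfix: such devices can legitimately exist if
 * vpci_add_handlers() failed during hwdom setup. */
static size_t count_vpci_tracked(const struct pci_dev *head)
{
    size_t n = 0;

    for ( const struct pci_dev *tmp = head; tmp; tmp = tmp->next )
    {
        if ( !tmp->vpci )
            continue;        /* would have been a NULL deref before the fix */
        n++;
    }

    return n;
}
```

The real modify_bars() additionally logs a warning when it skips such a
device, so the condition stays visible in the hypervisor log.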

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 23 11:30:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 11:30:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538424.838345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1QDT-00068X-ER; Tue, 23 May 2023 11:30:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538424.838345; Tue, 23 May 2023 11:30:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1QDT-00068Q-BQ; Tue, 23 May 2023 11:30:11 +0000
Received: by outflank-mailman (input) for mailman id 538424;
 Tue, 23 May 2023 11:30:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jTts=BM=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1QDS-00068K-6u
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 11:30:10 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20618.outbound.protection.outlook.com
 [2a01:111:f400:7d00::618])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2b7a6c5b-f95d-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 13:30:07 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8684.eurprd04.prod.outlook.com (2603:10a6:20b:43f::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Tue, 23 May
 2023 11:30:06 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Tue, 23 May 2023
 11:30:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b7a6c5b-f95d-11ed-b22d-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
Date: Tue, 23 May 2023 13:30:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] x86: annotate entry points with type and size
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
Content-Language: en-US
In-Reply-To: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

The model introduced in patch 1 is in principle arch-agnostic, which is
why I'm including Arm and RISC-V reviewers here as well.

1: annotate entry points with type and size
2: also mark assembler globals hidden

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 11:31:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 11:31:01 +0000
Message-ID: <fd492a4a-11ba-b63a-daf4-99697db0db0e@suse.com>
Date: Tue, 23 May 2023 13:30:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: [PATCH v2 1/2] x86: annotate entry points with type and size
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
 <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
In-Reply-To: <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Recent gas versions generate minimalistic Dwarf debug info for items
which are annotated as functions and have their sizes specified [1].
"Borrow" Arm's END() and (remotely) derive the other annotation
infrastructure from Linux's.

For switch_to_kernel() and restore_all_guest(), the so far implicit
alignment (from being first in their respective sections) is made
explicit (i.e. by using FUNC() without a 2nd argument), whereas for
{,compat}create_bounce_frame() and autogen_entrypoints[] alignment is
newly arranged for.
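
For illustration, a sketch only (directive spelling per the macros added
by this patch; the body shown is hypothetical), FUNC(entry_int82) paired
with END(entry_int82) expands to roughly:

```asm
        .type entry_int82, STT_FUNC
        .globl entry_int82
        .balign 16, 0x90        /* pad up to default 16-byte alignment with NOPs */
entry_int82:
        /* ... function body ... */
        .size entry_int82, . - entry_int82
```

That is, the label gains an ELF type and size, which is what allows gas
to emit the Dwarf info mentioned above.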

Except for the added alignment padding (including its knock-on
effects), there is no change in generated code/data.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

[1] https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=591cc9fbbfd6d51131c0f1d4a92e7893edcc7a28
---
v2: Full rework.
---
Only two of the assembly files are being converted for now. More could
be done right here, or as follow-ons in separate patches.

In principle it should be possible for other architectures to use this
framework as well. If we want that, the main questions are going to
be:
- What header file name? (I don't really like Linux's linkage.h, so I'd
  prefer e.g. asm-defns.h or asm_defns.h, as we already have in x86.)
- How much per-arch customization do we want to permit up front (i.e.
  without knowing how much of it is going to be needed)? Initially I'd
  expect only the default function alignment (and padding) to require
  per-arch definitions.

Note that the FB-label in autogen_stubs() cannot be converted just yet:
such labels cannot be used with .type. We could further diverge from
Linux's model and avoid setting STT_NOTYPE explicitly (that's the type
labels get by default anyway).

Note that we can't use ALIGN() (in place of SYM_ALIGN()) as long as we
still have ALIGN.

--- a/xen/arch/x86/include/asm/asm_defns.h
+++ b/xen/arch/x86/include/asm/asm_defns.h
@@ -81,6 +81,45 @@ register unsigned long current_stack_poi
 
 #ifdef __ASSEMBLY__
 
+#define SYM_ALIGN(algn...) .balign algn
+
+#define SYM_L_GLOBAL(name) .globl name
+#define SYM_L_WEAK(name)   .weak name
+#define SYM_L_LOCAL(name)  /* nothing */
+
+#define SYM_T_FUNC         STT_FUNC
+#define SYM_T_DATA         STT_OBJECT
+#define SYM_T_NONE         STT_NOTYPE
+
+#define SYM(name, typ, linkage, algn...)          \
+        .type name, SYM_T_ ## typ;                \
+        SYM_L_ ## linkage(name);                  \
+        SYM_ALIGN(algn);                          \
+        name:
+
+#define END(name) .size name, . - name
+
+#define ARG1_(x, y...) (x)
+#define ARG2_(x, y...) ARG1_(y)
+
+#define LAST__(nr) ARG ## nr ## _
+#define LAST_(nr)  LAST__(nr)
+#define LAST(x, y...) LAST_(count_args(x, ## y))(x, ## y)
+
+#define FUNC(name, algn...) \
+        SYM(name, FUNC, GLOBAL, LAST(16, ## algn), 0x90)
+#define LABEL(name, algn...) \
+        SYM(name, NONE, GLOBAL, LAST(16, ## algn), 0x90)
+#define DATA(name, algn...) \
+        SYM(name, DATA, GLOBAL, LAST(0, ## algn), 0xff)
+
+#define FUNC_LOCAL(name, algn...) \
+        SYM(name, FUNC, LOCAL, LAST(16, ## algn), 0x90)
+#define LABEL_LOCAL(name, algn...) \
+        SYM(name, NONE, LOCAL, LAST(16, ## algn), 0x90)
+#define DATA_LOCAL(name, algn...) \
+        SYM(name, DATA, LOCAL, LAST(0, ## algn), 0xff)
+
 #ifdef HAVE_AS_QUOTED_SYM
 #define SUBSECTION_LBL(tag)                        \
         .ifndef .L.tag;                            \
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -8,10 +8,11 @@
 #include <asm/page.h>
 #include <asm/processor.h>
 #include <asm/desc.h>
+#include <xen/lib.h>
 #include <public/xen.h>
 #include <irq_vectors.h>
 
-ENTRY(entry_int82)
+FUNC(entry_int82)
         ENDBR64
         ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $0
@@ -27,9 +28,10 @@ ENTRY(entry_int82)
 
         mov   %rsp, %rdi
         call  do_entry_int82
+END(entry_int82)
 
 /* %rbx: struct vcpu */
-ENTRY(compat_test_all_events)
+FUNC(compat_test_all_events)
         ASSERT_NOT_IN_ATOMIC
         cli                             # tests must not race interrupts
 /*compat_test_softirqs:*/
@@ -66,24 +68,21 @@ compat_test_guest_events:
         call  compat_create_bounce_frame
         jmp   compat_test_all_events
 
-        ALIGN
 /* %rbx: struct vcpu */
-compat_process_softirqs:
+LABEL_LOCAL(compat_process_softirqs)
         sti
         call  do_softirq
         jmp   compat_test_all_events
 
-        ALIGN
 /* %rbx: struct vcpu, %rdx: struct trap_bounce */
-.Lcompat_process_trapbounce:
+LABEL_LOCAL(.Lcompat_process_trapbounce)
         sti
 .Lcompat_bounce_exception:
         call  compat_create_bounce_frame
         jmp   compat_test_all_events
 
-	ALIGN
 /* %rbx: struct vcpu */
-compat_process_mce:
+LABEL_LOCAL(compat_process_mce)
         testb $1 << VCPU_TRAP_MCE,VCPU_async_exception_mask(%rbx)
         jnz   .Lcompat_test_guest_nmi
         sti
@@ -97,9 +96,8 @@ compat_process_mce:
         movb %dl,VCPU_async_exception_mask(%rbx)
         jmp   compat_process_trap
 
-	ALIGN
 /* %rbx: struct vcpu */
-compat_process_nmi:
+LABEL_LOCAL(compat_process_nmi)
         testb $1 << VCPU_TRAP_NMI,VCPU_async_exception_mask(%rbx)
         jnz   compat_test_guest_events
         sti
@@ -116,9 +114,10 @@ compat_process_trap:
         leaq  VCPU_trap_bounce(%rbx),%rdx
         call  compat_create_bounce_frame
         jmp   compat_test_all_events
+END(compat_test_all_events)
 
 /* %rbx: struct vcpu, interrupts disabled */
-ENTRY(compat_restore_all_guest)
+FUNC(compat_restore_all_guest)
         ASSERT_INTERRUPTS_DISABLED
         mov   $~(X86_EFLAGS_IOPL | X86_EFLAGS_VM), %r11d
         and   UREGS_eflags(%rsp),%r11d
@@ -161,9 +160,10 @@ ENTRY(compat_restore_all_guest)
         RESTORE_ALL adj=8 compat=1
 .Lft0:  iretq
         _ASM_PRE_EXTABLE(.Lft0, handle_exception)
+END(compat_restore_all_guest)
 
 /* This mustn't modify registers other than %rax. */
-ENTRY(cr4_pv32_restore)
+FUNC(cr4_pv32_restore)
         push  %rdx
         GET_CPUINFO_FIELD(cr4, dx)
         mov   (%rdx), %rax
@@ -193,8 +193,9 @@ ENTRY(cr4_pv32_restore)
         pop   %rdx
         xor   %eax, %eax
         ret
+END(cr4_pv32_restore)
 
-ENTRY(compat_syscall)
+FUNC(compat_syscall)
         /* Fix up reported %cs/%ss for compat domains. */
         movl  $FLAT_COMPAT_USER_SS, UREGS_ss(%rsp)
         movl  $FLAT_COMPAT_USER_CS, UREGS_cs(%rsp)
@@ -222,8 +223,9 @@ UNLIKELY_END(compat_syscall_gpf)
         movw  %si,TRAPBOUNCE_cs(%rdx)
         movb  %cl,TRAPBOUNCE_flags(%rdx)
         jmp   .Lcompat_bounce_exception
+END(compat_syscall)
 
-ENTRY(compat_sysenter)
+FUNC(compat_sysenter)
         CR4_PV32_RESTORE
         movq  VCPU_trap_ctxt(%rbx),%rcx
         cmpb  $X86_EXC_GP, UREGS_entry_vector(%rsp)
@@ -236,17 +238,19 @@ ENTRY(compat_sysenter)
         movw  %ax,TRAPBOUNCE_cs(%rdx)
         call  compat_create_bounce_frame
         jmp   compat_test_all_events
+END(compat_sysenter)
 
-ENTRY(compat_int80_direct_trap)
+FUNC(compat_int80_direct_trap)
         CR4_PV32_RESTORE
         call  compat_create_bounce_frame
         jmp   compat_test_all_events
+END(compat_int80_direct_trap)
 
 /* CREATE A BASIC EXCEPTION FRAME ON GUEST OS (RING-1) STACK:            */
 /*   {[ERRCODE,] EIP, CS, EFLAGS, [ESP, SS]}                             */
 /* %rdx: trap_bounce, %rbx: struct vcpu                                  */
 /* On return only %rbx and %rdx are guaranteed non-clobbered.            */
-compat_create_bounce_frame:
+FUNC_LOCAL(compat_create_bounce_frame)
         ASSERT_INTERRUPTS_ENABLED
         mov   %fs,%edi
         ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
@@ -352,3 +356,4 @@ compat_crash_page_fault:
         jmp   .Lft14
 .previous
         _ASM_EXTABLE(.Lft14, .Lfx14)
+END(compat_create_bounce_frame)
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -9,6 +9,7 @@
 #include <asm/asm_defns.h>
 #include <asm/page.h>
 #include <asm/processor.h>
+#include <xen/lib.h>
 #include <public/xen.h>
 #include <irq_vectors.h>
 
@@ -24,7 +25,7 @@
 
 #ifdef CONFIG_PV
 /* %rbx: struct vcpu */
-switch_to_kernel:
+FUNC_LOCAL(switch_to_kernel)
         leaq  VCPU_trap_bounce(%rbx),%rdx
 
         /* TB_eip = 32-bit syscall ? syscall32_addr : syscall_addr */
@@ -89,24 +90,21 @@ test_guest_events:
         call  create_bounce_frame
         jmp   test_all_events
 
-        ALIGN
 /* %rbx: struct vcpu */
-process_softirqs:
+LABEL_LOCAL(process_softirqs)
         sti
         call do_softirq
         jmp  test_all_events
 
-        ALIGN
 /* %rbx: struct vcpu, %rdx struct trap_bounce */
-.Lprocess_trapbounce:
+LABEL_LOCAL(.Lprocess_trapbounce)
         sti
 .Lbounce_exception:
         call  create_bounce_frame
         jmp   test_all_events
 
-        ALIGN
 /* %rbx: struct vcpu */
-process_mce:
+LABEL_LOCAL(process_mce)
         testb $1 << VCPU_TRAP_MCE, VCPU_async_exception_mask(%rbx)
         jnz  .Ltest_guest_nmi
         sti
@@ -120,9 +118,8 @@ process_mce:
         movb %dl, VCPU_async_exception_mask(%rbx)
         jmp  process_trap
 
-        ALIGN
 /* %rbx: struct vcpu */
-process_nmi:
+LABEL_LOCAL(process_nmi)
         testb $1 << VCPU_TRAP_NMI, VCPU_async_exception_mask(%rbx)
         jnz  test_guest_events
         sti
@@ -139,11 +136,12 @@ process_trap:
         leaq VCPU_trap_bounce(%rbx), %rdx
         call create_bounce_frame
         jmp  test_all_events
+END(switch_to_kernel)
 
         .section .text.entry, "ax", @progbits
 
 /* %rbx: struct vcpu, interrupts disabled */
-restore_all_guest:
+FUNC_LOCAL(restore_all_guest)
         ASSERT_INTERRUPTS_DISABLED
 
         /* Stash guest SPEC_CTRL value while we can read struct vcpu. */
@@ -220,8 +218,7 @@ restore_all_guest:
         sysretq
 1:      sysretl
 
-        ALIGN
-.Lrestore_rcx_iret_exit_to_guest:
+LABEL_LOCAL(.Lrestore_rcx_iret_exit_to_guest)
         movq  8(%rsp), %rcx           # RIP
 /* No special register assumptions. */
 iret_exit_to_guest:
@@ -230,6 +227,7 @@ iret_exit_to_guest:
         addq  $8,%rsp
 .Lft0:  iretq
         _ASM_PRE_EXTABLE(.Lft0, handle_exception)
+END(restore_all_guest)
 
 /*
  * When entering SYSCALL from kernel mode:
@@ -246,7 +244,7 @@ iret_exit_to_guest:
  *  - Guest %rsp stored in %rax
  *  - Xen stack loaded, pointing at the %ss slot
  */
-ENTRY(lstar_enter)
+FUNC(lstar_enter)
 #ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
 #endif
@@ -281,9 +279,10 @@ ENTRY(lstar_enter)
         mov   %rsp, %rdi
         call  pv_hypercall
         jmp   test_all_events
+END(lstar_enter)
 
 /* See lstar_enter for entry register state. */
-ENTRY(cstar_enter)
+FUNC(cstar_enter)
 #ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
 #endif
@@ -321,8 +320,9 @@ ENTRY(cstar_enter)
         jne   compat_syscall
 #endif
         jmp   switch_to_kernel
+END(cstar_enter)
 
-ENTRY(sysenter_entry)
+FUNC(sysenter_entry)
         ENDBR64
 #ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
@@ -331,7 +331,7 @@ ENTRY(sysenter_entry)
         pushq $FLAT_USER_SS
         pushq $0
         pushfq
-GLOBAL(sysenter_eflags_saved)
+LABEL(sysenter_eflags_saved, 0)
         ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $3 /* ring 3 null cs */
         pushq $0 /* null rip */
@@ -385,8 +385,9 @@ UNLIKELY_END(sysenter_gpf)
         jne   compat_sysenter
 #endif
         jmp   .Lbounce_exception
+END(sysenter_entry)
 
-ENTRY(int80_direct_trap)
+FUNC(int80_direct_trap)
         ENDBR64
         ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $0
@@ -474,6 +475,7 @@ int80_slow_path:
          */
         GET_STACK_END(14)
         jmp   handle_exception_saved
+END(int80_direct_trap)
 
         /* create_bounce_frame & helpers don't need to be in .text.entry */
         .text
@@ -482,7 +484,7 @@ int80_slow_path:
 /*   { RCX, R11, [ERRCODE,] RIP, CS, RFLAGS, RSP, SS }                   */
 /* %rdx: trap_bounce, %rbx: struct vcpu                                  */
 /* On return only %rbx and %rdx are guaranteed non-clobbered.            */
-create_bounce_frame:
+FUNC_LOCAL(create_bounce_frame)
         ASSERT_INTERRUPTS_ENABLED
         testb $TF_kernel_mode,VCPU_thread_flags(%rbx)
         jnz   1f
@@ -618,6 +620,7 @@ ENTRY(dom_crash_sync_extable)
         xorl  %edi,%edi
         jmp   asm_domain_crash_synchronous /* Does not return */
         .popsection
+END(create_bounce_frame)
 #endif /* CONFIG_PV */
 
 /* --- CODE BELOW THIS LINE (MOSTLY) NOT GUEST RELATED --- */
@@ -626,7 +629,7 @@ ENTRY(dom_crash_sync_extable)
 
 /* No special register assumptions. */
 #ifdef CONFIG_PV
-ENTRY(continue_pv_domain)
+FUNC(continue_pv_domain)
         ENDBR64
         call  check_wakeup_from_wait
 ret_from_intr:
@@ -641,26 +644,28 @@ ret_from_intr:
 #else
         jmp   test_all_events
 #endif
+END(continue_pv_domain)
 #else
-ret_from_intr:
+FUNC(ret_from_intr, 0)
         ASSERT_CONTEXT_IS_XEN
         jmp   restore_all_xen
+END(ret_from_intr)
 #endif
 
         .section .init.text, "ax", @progbits
-ENTRY(early_page_fault)
+FUNC(early_page_fault)
         ENDBR64
         movl  $X86_EXC_PF, 4(%rsp)
         SAVE_ALL
         movq  %rsp, %rdi
         call  do_early_page_fault
         jmp   restore_all_xen
+END(early_page_fault)
 
         .section .text.entry, "ax", @progbits
 
-        ALIGN
 /* No special register assumptions. */
-restore_all_xen:
+FUNC_LOCAL(restore_all_xen)
         /*
          * Check whether we need to switch to the per-CPU page tables, in
          * case we return to late PV exit code (from an NMI or #MC).
@@ -677,8 +682,9 @@ UNLIKELY_END(exit_cr3)
 
         RESTORE_ALL adj=8
         iretq
+END(restore_all_xen)
 
-ENTRY(common_interrupt)
+FUNC(common_interrupt)
         ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         SAVE_ALL
 
@@ -707,12 +713,14 @@ ENTRY(common_interrupt)
         mov   %r15, STACK_CPUINFO_FIELD(xen_cr3)(%r14)
         mov   %bl, STACK_CPUINFO_FIELD(use_pv_cr3)(%r14)
         jmp ret_from_intr
+END(common_interrupt)
 
-ENTRY(page_fault)
+FUNC(page_fault)
         ENDBR64
         movl  $X86_EXC_PF, 4(%rsp)
+END(page_fault)
 /* No special register assumptions. */
-GLOBAL(handle_exception)
+FUNC(handle_exception, 0)
         ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         SAVE_ALL
 
@@ -882,92 +890,108 @@ FATAL_exception_with_ints_disabled:
         movq  %rsp,%rdi
         call  fatal_trap
         BUG   /* fatal_trap() shouldn't return. */
+END(handle_exception)
 
-ENTRY(divide_error)
+FUNC(divide_error)
         ENDBR64
         pushq $0
         movl  $X86_EXC_DE, 4(%rsp)
         jmp   handle_exception
+END(divide_error)
 
-ENTRY(coprocessor_error)
+FUNC(coprocessor_error)
         ENDBR64
         pushq $0
         movl  $X86_EXC_MF, 4(%rsp)
         jmp   handle_exception
+END(coprocessor_error)
 
-ENTRY(simd_coprocessor_error)
+FUNC(simd_coprocessor_error)
         ENDBR64
         pushq $0
         movl  $X86_EXC_XM, 4(%rsp)
         jmp   handle_exception
+END(simd_coprocessor_error)
 
-ENTRY(device_not_available)
+FUNC(device_not_available)
         ENDBR64
         pushq $0
         movl  $X86_EXC_NM, 4(%rsp)
         jmp   handle_exception
+END(device_not_available)
 
-ENTRY(debug)
+FUNC(debug)
         ENDBR64
         pushq $0
         movl  $X86_EXC_DB, 4(%rsp)
         jmp   handle_ist_exception
+END(debug)
 
-ENTRY(int3)
+FUNC(int3)
         ENDBR64
         pushq $0
         movl  $X86_EXC_BP, 4(%rsp)
         jmp   handle_exception
+END(int3)
 
-ENTRY(overflow)
+FUNC(overflow)
         ENDBR64
         pushq $0
         movl  $X86_EXC_OF, 4(%rsp)
         jmp   handle_exception
+END(overflow)
 
-ENTRY(bounds)
+FUNC(bounds)
         ENDBR64
         pushq $0
         movl  $X86_EXC_BR, 4(%rsp)
         jmp   handle_exception
+END(bounds)
 
-ENTRY(invalid_op)
+FUNC(invalid_op)
         ENDBR64
         pushq $0
         movl  $X86_EXC_UD, 4(%rsp)
         jmp   handle_exception
+END(invalid_op)
 
-ENTRY(invalid_TSS)
+FUNC(invalid_TSS)
         ENDBR64
         movl  $X86_EXC_TS, 4(%rsp)
         jmp   handle_exception
+END(invalid_TSS)
 
-ENTRY(segment_not_present)
+FUNC(segment_not_present)
         ENDBR64
         movl  $X86_EXC_NP, 4(%rsp)
         jmp   handle_exception
+END(segment_not_present)
 
-ENTRY(stack_segment)
+FUNC(stack_segment)
         ENDBR64
         movl  $X86_EXC_SS, 4(%rsp)
         jmp   handle_exception
+END(stack_segment)
 
-ENTRY(general_protection)
+FUNC(general_protection)
         ENDBR64
         movl  $X86_EXC_GP, 4(%rsp)
         jmp   handle_exception
+END(general_protection)
 
-ENTRY(alignment_check)
+FUNC(alignment_check)
         ENDBR64
         movl  $X86_EXC_AC, 4(%rsp)
         jmp   handle_exception
+END(alignment_check)
 
-ENTRY(entry_CP)
+FUNC(entry_CP)
         ENDBR64
         movl  $X86_EXC_CP, 4(%rsp)
         jmp   handle_exception
+END(entry_CP)
 
-ENTRY(double_fault)
+FUNC(double_fault)
         ENDBR64
         movl  $X86_EXC_DF, 4(%rsp)
         /* Set AC to reduce chance of further SMAP faults */
@@ -991,8 +1015,9 @@ ENTRY(double_fault)
         movq  %rsp,%rdi
         call  do_double_fault
         BUG   /* do_double_fault() shouldn't return. */
+END(double_fault)
 
-ENTRY(nmi)
+FUNC(nmi)
         ENDBR64
         pushq $0
         movl  $X86_EXC_NMI, 4(%rsp)
@@ -1120,21 +1145,24 @@ handle_ist_exception:
         ASSERT_CONTEXT_IS_XEN
         jmp   restore_all_xen
 #endif
+END(nmi)
 
-ENTRY(machine_check)
+FUNC(machine_check)
         ENDBR64
         pushq $0
         movl  $X86_EXC_MC, 4(%rsp)
         jmp   handle_ist_exception
+END(machine_check)
 
 /* No op trap handler.  Required for kexec crash path. */
-GLOBAL(trap_nop)
+FUNC(trap_nop, 0)
         ENDBR64
         iretq
+END(trap_nop)
 
 /* Table of automatically generated entry points.  One per vector. */
         .pushsection .init.rodata, "a", @progbits
-GLOBAL(autogen_entrypoints)
+DATA(autogen_entrypoints, 8)
         /* pop into the .init.rodata section and record an entry point. */
         .macro entrypoint ent
         .pushsection .init.rodata, "a", @progbits
@@ -1143,7 +1171,7 @@ GLOBAL(autogen_entrypoints)
         .endm
 
         .popsection
-autogen_stubs: /* Automatically generated stubs. */
+FUNC_LOCAL(autogen_stubs, 0) /* Automatically generated stubs. */
 
         vec = 0
         .rept X86_NR_VECTORS
@@ -1187,6 +1215,7 @@ autogen_stubs: /* Automatically generate
 
         vec = vec + 1
         .endr
+END(autogen_stubs)
 
         .section .init.rodata, "a", @progbits
-        .size autogen_entrypoints, . - autogen_entrypoints
+END(autogen_entrypoints)
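
The FUNC()/END() pairs above replace the bare ENTRY()/GLOBAL() labels with properly typed and sized symbols, as can be seen where END(autogen_entrypoints) supersedes the explicit `.size` directive. A rough sketch of what these annotations expand to (illustrative only; the real definitions live in the SYM_*() helpers of asm_defns.h, and alignment details may differ):

```asm
/* Illustrative expansion only -- not the literal macro output. */

/* FUNC(common_interrupt) roughly becomes: */
        .globl  common_interrupt
        .type   common_interrupt, @function
        .balign 16
common_interrupt:

/* ...function body... */

/* END(common_interrupt) roughly becomes: */
        .size   common_interrupt, . - common_interrupt
```

Recording type and size this way lets tools such as objdump, livepatch checking, and the linker reason about symbol extents instead of treating everything as anonymous code.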



From xen-devel-bounces@lists.xenproject.org Tue May 23 11:31:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 11:31:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <52f884cf-b88f-6fd9-e4d9-79da2bfde422@suse.com>
Date: Tue, 23 May 2023 13:31:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: [PATCH v2 2/2] x86: also mark assembler globals hidden
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
 <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
In-Reply-To: <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

Let's have assembler symbols be consistent with C ones. In principle
there are (a few) cases where gas can produce smaller code this way;
for now, however, a gas bug causes the smaller code to be emitted even
when it shouldn't be.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/include/asm/asm_defns.h
+++ b/xen/arch/x86/include/asm/asm_defns.h
@@ -83,7 +83,7 @@ register unsigned long current_stack_poi
 
 #define SYM_ALIGN(algn...) .balign algn
 
-#define SYM_L_GLOBAL(name) .globl name
+#define SYM_L_GLOBAL(name) .globl name; .hidden name
 #define SYM_L_WEAK(name)   .weak name
 #define SYM_L_LOCAL(name)  /* nothing */
 
--- a/xen/arch/x86/include/asm/config.h
+++ b/xen/arch/x86/include/asm/config.h
@@ -45,11 +45,11 @@
 #ifdef __ASSEMBLY__
 #define ALIGN .align 16,0x90
 #define ENTRY(name)                             \
-  .globl name;                                  \
   ALIGN;                                        \
-  name:
+  GLOBAL(name)
 #define GLOBAL(name)                            \
   .globl name;                                  \
+  .hidden name;                                 \
   name:
 #endif
 



From xen-devel-bounces@lists.xenproject.org Tue May 23 11:51:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 11:51:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZjUpfkyJmVIVJBUW8KdO0aW075a9noLWAgAAFYICAAALiAIAAFdcA
Date: Tue, 23 May 2023 11:50:06 +0000
Message-ID: <45285215-4528-435A-B203-B770D60FABAA@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-8-luca.fancellu@arm.com>
 <48722696-ea22-1af9-2a0a-aa78972d118a@suse.com>
 <6DEA0CC3-C3EF-4509-A869-807CE4B21401@arm.com>
 <767b11a1-4d43-9057-1fcc-6516fea64fb3@suse.com>
In-Reply-To: <767b11a1-4d43-9057-1fcc-6516fea64fb3@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <7C0F2C5F71EA58409164D64D794858D7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

> On 23 May 2023, at 11:31, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 23.05.2023 12:21, Luca Fancellu wrote:
>>> On 23 May 2023, at 11:02, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 23.05.2023 09:43, Luca Fancellu wrote:
>>>> @@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
>>>> 
>>>>    If using this option is necessary to fix an issue, please report a bug.
>>>> 
>>>> +Enables features on dom0 on Arm systems.
>>>> +
>>>> +*   The `sve` integer parameter enables Arm SVE usage for Dom0 domain and sets
>>>> +    the maximum SVE vector length, the option is applicable only to AArch64
>>>> +    guests.
>>> 
>>> Why "guests"? Does the option affect more than Dom0?
>> 
>> I used “guests” because in my mind I was referring to all the aarch64 OS that can be used
>> as control domain, I can change it if it sounds bad.
> 
> If you mean OSes then better also say OSes. But maybe this doesn't need
> specifically expressing, by saying e.g. "..., the option is applicable
> only on AArch64"? Or can a Dom0 be 32-bit on Arm64 Xen?

I think there is no limitation so Dom0 can be 32 bit or 64. Maybe I can say
“... AArch64 kernel guests.”?

> 
>>>> +    A value equal to 0 disables the feature, this is the default value.
>>>> +    Values below 0 means the feature uses the maximum SVE vector length
>>>> +    supported by hardware, if SVE is supported.
>>>> +    Values above 0 explicitly set the maximum SVE vector length for Dom0,
>>>> +    allowed values are from 128 to maximum 2048, being multiple of 128.
>>>> +    Please note that when the user explicitly specifies the value, if that value
>>>> +    is above the hardware supported maximum SVE vector length, the domain
>>>> +    creation will fail and the system will stop, the same will occur if the
>>>> +    option is provided with a non zero value, but the platform doesn't support
>>>> +    SVE.
>>> 
>>> Assuming this also covers the -1 case, I wonder if that isn't a little too
>>> strict. "Maximum supported" imo can very well be 0.
>> 
>> Maximum supported, when platforms uses SVE, can be at minimum 128 by arm specs.
> 
> When there is SVE - sure. But when there's no SVE, 0 is kind of the implied
> length. And I'd view a command line option value of -1 quite okay in that
> case: They've asked for the maximum supported, so they'll get 0. No reason
> to crash the system during boot.

Ok I see what you mean, for example when Kconfig SVE is enabled, but the platform doesn’t
have SVE feature, requesting sve=-1 will keep the value to 0, and no system will be stopped.

Maybe I can say: 

“... the same will occur if the option is provided with a positive non zero value,
but the platform doesn't support SVE."



> 
> Jan
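
To make the semantics under discussion concrete, here is a hypothetical C sketch of the `sve=` parameter behaviour as agreed in this exchange: 0 disables the feature, a negative value requests the hardware maximum (resolving to 0 when the platform lacks SVE, without stopping the system), and a positive value must be a multiple of 128 in [128, 2048] and no larger than the hardware maximum. The function name `sve_resolve_vl` and its interface are invented for illustration; this is not Xen's actual code.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the dom0 "sve=" parameter semantics discussed
 * in this thread -- NOT Xen's actual implementation.
 *
 *   opt == 0 : feature disabled (default)
 *   opt <  0 : use the hardware maximum (0 when the platform lacks SVE)
 *   opt >  0 : explicit vector length; must be a multiple of 128 in
 *              [128, 2048] and no larger than the hardware maximum,
 *              otherwise domain creation fails.
 */
static int sve_resolve_vl(int opt, int hw_max_vl, bool *error)
{
    *error = false;

    if (opt == 0)
        return 0;                /* disabled */

    if (opt < 0)
        return hw_max_vl;        /* "maximum supported": 0 without SVE */

    if (opt % 128 || opt < 128 || opt > 2048 || opt > hw_max_vl) {
        *error = true;           /* invalid explicit request */
        return 0;
    }

    return opt;
}
```

Note how `sve_resolve_vl(-1, 0, ...)` quietly yields 0 rather than an error, matching the point that a `-1` request on non-SVE hardware should not crash the system during boot.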


From xen-devel-bounces@lists.xenproject.org Tue May 23 11:54:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 11:54:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 797224ac-f960-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cXIT+p+J4E81pHA1C/LyVfCnhveF06wmpr4ZCEfdE6621WMKURKdVWAmYK5Q4j6l6Gk7f6JAePzCLqkAqt0jY5DvLHtboTANfSM/XcZ2uavex8/6DDra8wHCfJmUsroiK9rVIDmQVa3LmMV6NgUMQqfcGVweQKhm4AGhc3kzTrTJGsn5ULE8c5tvbOkSczNjxb/bKiltTzykiPgg5BXKyR6iFtsTvg5j7WpD0esFDmQ/IDL7XzJ/QzgAJxEnhxyUuci3g2EEXK3rcCTTf2xjnWwjS5CuYKADzI52BgGqMCk/zqTsp5abV8B3Ntsupq9+nfC8Yj7UmaxlxNcSh2Z8+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4R9vTKK29xbkNvEcb/+VwLUjS5BdIwWFLBXFrldruko=;
 b=dVPdvhC7mJ+f7KHEnbplearwIlBx+nV31Ih8nl8Jzv1y+WqE4XiYz9sAc5tWCZb6oj/q4NsZfwn++p4m6tB1sI1258VgB7GWSGYN6YWogzMzKiT8D+ToOqZGuNY1Q13RpluHv5miZqRUtf6GkfyE+zQ0GRBvOidWs25p6EJayFhC8g3c69LTkGfDOws9NkWjGoxWKt6PaoCMZgK0/ttQDILY6BLqTVKRzq57RNv3xA7SeyYfUvOgYhYx7vbstw9/udiygbTKNTaqmEFLJ/2cZViWYKm8FOnjHGxg86LdCHYAGIeUI1TWV7iEBfWj8mtnn9kIMYR3NxcGPOcJBGcDcQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4R9vTKK29xbkNvEcb/+VwLUjS5BdIwWFLBXFrldruko=;
 b=zAj2GzM0DjsXSCexXlvMr0jjVJWHFv4tAYoWOlDnowqSHfckOskDuPHsZlVPh6/iRuYX5D0jyCmU4nAEBQHSvjYH82RA4dlPaysNdFmwq6+MmIC6m8xlizu1jwvmJoZL2d2iFO7vRuVVU1Kx/8I2CONzVkVfenIOWWSx6J1cl54Fqf0gc9OmZMeXTVAzbzRCPHWbZN3dkhjC9Gj3VIKqp9b8DBKVqk6qcj9SL3e69lFezOI6/RVKhfDWbpE55emj/dBYnpCeoo2Rbeq02pfNUCRUylw/0f/qsrAGA3fa0T3V9MvBG+i+4hXsZTuBCYyVqqjTwUWYsssiADR6CTHJmw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4d86d3b6-3ce5-8d25-abb9-4c27b592647f@suse.com>
Date: Tue, 23 May 2023 13:53:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-8-luca.fancellu@arm.com>
 <48722696-ea22-1af9-2a0a-aa78972d118a@suse.com>
 <6DEA0CC3-C3EF-4509-A869-807CE4B21401@arm.com>
 <767b11a1-4d43-9057-1fcc-6516fea64fb3@suse.com>
 <45285215-4528-435A-B203-B770D60FABAA@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <45285215-4528-435A-B203-B770D60FABAA@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 23.05.2023 13:50, Luca Fancellu wrote:
>> On 23 May 2023, at 11:31, Jan Beulich <jbeulich@suse.com> wrote:
>> On 23.05.2023 12:21, Luca Fancellu wrote:
>>>> On 23 May 2023, at 11:02, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 23.05.2023 09:43, Luca Fancellu wrote:
>>>>> @@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
>>>>>
>>>>>    If using this option is necessary to fix an issue, please report a bug.
>>>>>
>>>>> +Enables features on dom0 on Arm systems.
>>>>> +
>>>>> +*   The `sve` integer parameter enables Arm SVE usage for Dom0 domain and sets
>>>>> +    the maximum SVE vector length, the option is applicable only to AArch64
>>>>> +    guests.
>>>>
>>>> Why "guests"? Does the option affect more than Dom0?
>>>
>>> I used “guests” because in my mind I was referring to all the aarch64 OS that can be used
>>> as control domain, I can change it if it sounds bad.
>>
>> If you means OSes then better also say OSes. But maybe this doesn't need
>> specifically expressing, by saying e.g. "..., the option is applicable
>> only on AArch64"? Or can a Dom0 be 32-bit on Arm64 Xen?
> 
> I think there is no limitation so Dom0 can be 32 bit or 64. Maybe I can say
> “... AArch64 kernel guests.”?

I'd recommend to avoid the term "guest" when you talk about Dom0 alone.
Commonly "guest" means ordinary domains only, i.e. in particular excluding
Dom0. What's wrong with "AArch64 Dom0 kernels"?

>>>>> +    A value equal to 0 disables the feature, this is the default value.
>>>>> +    Values below 0 means the feature uses the maximum SVE vector length
>>>>> +    supported by hardware, if SVE is supported.
>>>>> +    Values above 0 explicitly set the maximum SVE vector length for Dom0,
>>>>> +    allowed values are from 128 to maximum 2048, being multiple of 128.
>>>>> +    Please note that when the user explicitly specifies the value, if that value
>>>>> +    is above the hardware supported maximum SVE vector length, the domain
>>>>> +    creation will fail and the system will stop, the same will occur if the
>>>>> +    option is provided with a non zero value, but the platform doesn't support
>>>>> +    SVE.
>>>>
>>>> Assuming this also covers the -1 case, I wonder if that isn't a little too
>>>> strict. "Maximum supported" imo can very well be 0.
>>>
>>> Maximum supported, when platforms uses SVE, can be at minimum 128 by arm specs.
>>
>> When there is SVE - sure. But when there's no SVE, 0 is kind of the implied
>> length. And I'd view a command line option value of -1 quite okay in that
>> case: They've asked for the maximum supported, so they'll get 0. No reason
>> to crash the system during boot.
> 
> Ok I see what you mean, for example when Kconfig SVE is enabled, but the platform doesn’t
> have SVE feature, requesting sve=-1 will keep the value to 0, and no system will be stopped.
> 
> Maybe I can say: 
> 
> “... the same will occur if the option is provided with a positive non zero value,
> but the platform doesn't support SVE."

Right, provided that matches the implementation.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 11:57:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 11:57:53 +0000
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Thread-Index:
 AQHZjUpfkyJmVIVJBUW8KdO0aW075a9noLWAgAAFYICAAALiAIAAFdcAgAABCwCAAAD+gA==
Date: Tue, 23 May 2023 11:57:23 +0000
Message-ID: <E8FD576D-917C-486B-B537-2455C9686A2E@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-8-luca.fancellu@arm.com>
 <48722696-ea22-1af9-2a0a-aa78972d118a@suse.com>
 <6DEA0CC3-C3EF-4509-A869-807CE4B21401@arm.com>
 <767b11a1-4d43-9057-1fcc-6516fea64fb3@suse.com>
 <45285215-4528-435A-B203-B770D60FABAA@arm.com>
 <4d86d3b6-3ce5-8d25-abb9-4c27b592647f@suse.com>
In-Reply-To: <4d86d3b6-3ce5-8d25-abb9-4c27b592647f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0


> On 23 May 2023, at 12:53, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 23.05.2023 13:50, Luca Fancellu wrote:
>>> On 23 May 2023, at 11:31, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 23.05.2023 12:21, Luca Fancellu wrote:
>>>>> On 23 May 2023, at 11:02, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> On 23.05.2023 09:43, Luca Fancellu wrote:
>>>>>> @@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
>>>>>> 
>>>>>>   If using this option is necessary to fix an issue, please report a bug.
>>>>>> 
>>>>>> +Enables features on dom0 on Arm systems.
>>>>>> +
>>>>>> +*   The `sve` integer parameter enables Arm SVE usage for Dom0 domain and sets
>>>>>> +    the maximum SVE vector length, the option is applicable only to AArch64
>>>>>> +    guests.
>>>>> 
>>>>> Why "guests"? Does the option affect more than Dom0?
>>>> 
>>>> I used “guests” because in my mind I was referring to all the aarch64 OS that can be used
>>>> as control domain, I can change it if it sounds bad.
>>> 
>>> If you means OSes then better also say OSes. But maybe this doesn't need
>>> specifically expressing, by saying e.g. "..., the option is applicable
>>> only on AArch64"? Or can a Dom0 be 32-bit on Arm64 Xen?
>> 
>> I think there is no limitation so Dom0 can be 32 bit or 64. Maybe I can say
>> “... AArch64 kernel guests.”?
> 
> I'd recommend to avoid the term "guest" when you talk about Dom0 alone.
> Commonly "guest" means ordinary domains only, i.e. in particular excluding
> Dom0. What's wrong with "AArch64 Dom0 kernels"?

Ok works for me, I will use “AArch64 Dom0 kernels", I thought “guests” were a generic category
and then we have “privileged guests”, for example Dom0 or driver domain, and “unprivileged guests”
like DomUs.

> 
>>>>>> +    A value equal to 0 disables the feature, this is the default value.
>>>>>> +    Values below 0 means the feature uses the maximum SVE vector length
>>>>>> +    supported by hardware, if SVE is supported.
>>>>>> +    Values above 0 explicitly set the maximum SVE vector length for Dom0,
>>>>>> +    allowed values are from 128 to maximum 2048, being multiple of 128.
>>>>>> +    Please note that when the user explicitly specifies the value, if that value
>>>>>> +    is above the hardware supported maximum SVE vector length, the domain
>>>>>> +    creation will fail and the system will stop, the same will occur if the
>>>>>> +    option is provided with a non zero value, but the platform doesn't support
>>>>>> +    SVE.
>>>>> 
>>>>> Assuming this also covers the -1 case, I wonder if that isn't a little too
>>>>> strict. "Maximum supported" imo can very well be 0.
>>>> 
>>>> Maximum supported, when platforms uses SVE, can be at minimum 128 by arm specs.
>>> 
>>> When there is SVE - sure. But when there's no SVE, 0 is kind of the implied
>>> length. And I'd view a command line option value of -1 quite okay in that
>>> case: They've asked for the maximum supported, so they'll get 0. No reason
>>> to crash the system during boot.
>> 
>> Ok I see what you mean, for example when Kconfig SVE is enabled, but the platform doesn't
>> have SVE feature, requesting sve=-1 will keep the value to 0, and no system will be stopped.
>> 
>> Maybe I can say: 
>> 
>> “... the same will occur if the option is provided with a positive non zero value,
>> but the platform doesn't support SVE."
> 
> Right, provided that matches the implementation.

Ok I will do the changes, can I retain your R-by? I suppose it covers also documentation right?

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 12:37:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 12:37:27 +0000
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=VBlSeCkggQ/q/Seh1YdjZGAzuf4J5h3CY4GRei16rpE=;
 b=leCE8DNR+VSWqjyyKgUqaE2qjvRFRmYTi+CDXcIcQKrnfBPQQm8cxBDpoqZ/vmoMzshUSV2PyRbR9gHJO30WBINi1osODq8YuXHqPhEgz8alGRRWTwahlCkMuvqrckUtyd6K/nrNdICTGaFkvvHwdDGYF8mu/IiEIiTU/kej99KE/doPyw4KXxljlcHEk00fx/+18YftJMI1yaeS4KHBE8FhpsekkaVssVFegyU8HvdbgWPe3kogL8V6O6/9BV4DPEdI5I9S2cogL327ceOMt8Awm1yDdljj6R+2OjXbp5p0ojkBpGYzZalYl/YkfTqP8u6B3bWO/Ksf0D9Raxxy3g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VBlSeCkggQ/q/Seh1YdjZGAzuf4J5h3CY4GRei16rpE=;
 b=aLt71JHoLny5N0GGhMpTjFXuAecH/LcdVt2H7b7Pc95XW1QHwVYCnWCtLh/MImmkPqTX5mr1M4nZlJB53wmoYMg6KD5hgLuALvLjiBdLY2A1z8/J/GbApmKwddbqDP/YyARuHtUJaZeNqIuZrbgFGulGG/i+UiE3VD/R4m2/TkrGaj6Vk+V56SrogKfwUggDY0gazjEY0Kzhx7WfB4lf8U0kFQRzK59Y/qKMik7U+McoBMa9j0kF0FThoXSB/Aj8gwgdKgtc81ZP3hzLYtmPnxkRDa14Qo85+r7+LNkUbG4SSoY+OQOrVJrKOJlZ65cu/zOO/6NAr741n6l/+86RqQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2f4234ce-9427-b78c-f7d5-4d9822d4d1c9@suse.com>
Date: Tue, 23 May 2023 14:37:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86: do away with HAVE_AS_NEGATIVE_TRUE
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0114.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB8000:EE_
X-MS-Office365-Filtering-Correlation-Id: ce575273-8c6f-4fb2-2172-08db5b8a6bf3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ce575273-8c6f-4fb2-2172-08db5b8a6bf3
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 12:37:07.9427
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: nI8u7lAEDicmCKvJkXSxuu+rRhroRkLpcl7EDlJ/gBWZXmyl+bKXtpUf1s0PcdFaVj9FHCYiidCvMAOi19XXKg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB8000

There's no real need for the associated probing - we can easily convert
to a uniform value without knowing the specific behavior (note also that
the respective comments weren't fully correct and have gone stale).

No difference in generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Use "& 1".

--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -26,10 +26,6 @@ $(call as-option-add,CFLAGS,CC,"invpcid
 $(call as-option-add,CFLAGS,CC,"movdiri %rax$(comma)(%rax)",-DHAVE_AS_MOVDIR)
 $(call as-option-add,CFLAGS,CC,"enqcmd (%rax)$(comma)%rax",-DHAVE_AS_ENQCMD)
 
-# GAS's idea of true is -1.  Clang's idea is 1
-$(call as-option-add,CFLAGS,CC,\
-    ".if ((1 > 0) < 0); .error \"\";.endif",,-DHAVE_AS_NEGATIVE_TRUE)
-
 # Check to see whether the assembler supports the .nop directive.
 $(call as-option-add,CFLAGS,CC,\
     ".L1: .L2: .nops (.L2 - .L1)$(comma)9",-DHAVE_AS_NOPS_DIRECTIVE)
--- a/xen/arch/x86/include/asm/alternative.h
+++ b/xen/arch/x86/include/asm/alternative.h
@@ -35,19 +35,19 @@ extern void alternative_branches(void);
 #define alt_repl_e(num)    ".LXEN%=_repl_e"#num
 #define alt_repl_len(num)  "(" alt_repl_e(num) " - " alt_repl_s(num) ")"
 
-/* GAS's idea of true is -1, while Clang's idea is 1. */
-#ifdef HAVE_AS_NEGATIVE_TRUE
-# define AS_TRUE "-"
-#else
-# define AS_TRUE ""
-#endif
+/*
+ * GAS's idea of true is sometimes 1 and sometimes -1, while Clang's idea
+ * was consistently 1 up to 6.x (it matches GAS's now).  Transform it to
+ * uniformly 1.
+ */
+#define AS_TRUE(x) "((" x ") & 1)"
 
-#define as_max(a, b) "(("a") ^ ((("a") ^ ("b")) & -("AS_TRUE"(("a") < ("b")))))"
+#define as_max(a, b) "(("a") ^ ((("a") ^ ("b")) & -("AS_TRUE("("a") < ("b")")")))"
 
 #define OLDINSTR(oldinstr, padding)                              \
     ".LXEN%=_orig_s:\n\t" oldinstr "\n .LXEN%=_orig_e:\n\t"      \
     ".LXEN%=_diff = " padding "\n\t"                             \
-    "mknops ("AS_TRUE"(.LXEN%=_diff > 0) * .LXEN%=_diff)\n\t"    \
+    "mknops ("AS_TRUE(".LXEN%=_diff > 0")" * .LXEN%=_diff)\n\t"  \
     ".LXEN%=_orig_p:\n\t"
 
 #define OLDINSTR_1(oldinstr, n1)                                 \
--- a/xen/arch/x86/include/asm/alternative-asm.h
+++ b/xen/arch/x86/include/asm/alternative-asm.h
@@ -29,12 +29,12 @@
 #endif
 .endm
 
-/* GAS's idea of true is -1, while Clang's idea is 1. */
-#ifdef HAVE_AS_NEGATIVE_TRUE
-# define as_true(x) (-(x))
-#else
-# define as_true(x) (x)
-#endif
+/*
+ * GAS's idea of true is sometimes 1 and sometimes -1, while Clang's idea
+ * was consistently 1 up to 6.x (it matches GAS's now).  Transform it to
+ * uniformly 1.
+ */
+#define as_true(x) ((x) & 1)
 
 #define decl_orig(insn, padding)                  \
  .L\@_orig_s: insn; .L\@_orig_e:                  \


From xen-devel-bounces@lists.xenproject.org Tue May 23 12:40:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 12:40:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538472.838414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1RJV-0000AX-L4; Tue, 23 May 2023 12:40:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538472.838414; Tue, 23 May 2023 12:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1RJV-0000AJ-II; Tue, 23 May 2023 12:40:29 +0000
Received: by outflank-mailman (input) for mailman id 538472;
 Tue, 23 May 2023 12:40:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jTts=BM=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1RJU-0000AB-FF
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 12:40:28 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20626.outbound.protection.outlook.com
 [2a01:111:f400:7d00::626])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fea88322-f966-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 14:40:27 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB8000.eurprd04.prod.outlook.com (2603:10a6:102:c1::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Tue, 23 May
 2023 12:40:26 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Tue, 23 May 2023
 12:40:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fea88322-f966-11ed-b22d-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MA/3fyi8oq1oklIc+q0HV5EfComqIi4RHuPcvvi6yQWizHum6WyIa5dheJcUBXvlc9JArrKfRWWJbaEn0P6tY5aGEFR82nHPGftHvruJSIs0MYcuZC6lmNgCbB2tPnTwz4Q1JrHZQhLQhT2YElukPupZaIxBaWGRfi67TETp3lyBGjkeg5YcFDoZVkUtTMApM/9G1KwCSJLp6PyGuZYh2f5DAiRczpOnYqsDseWn2+mrc3LoJQ6l3xkuKhwb7GeLdH015NOBkOhdMtqHH0cxd7cC52ZT/7dgGtpnwlvYrELcW8V7FapQ2Bt2V0DJCJKL5Vd+IAX9JUm7cf0wzXMj5A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Gk7mG3uOeJFcBHhF6CDm1xUDcBtt2eAWWhJ9Omrg5/8=;
 b=S1WBi/NHSquw3ZEjwRRLn3Uebxg6CZQ22tbCfzxV1eRpXVu/8LbF5j6Sz+RHoVSImkcGFVul9od7JEbD95Oxh8ARxJegr0qdCmSgyLRgqjcEPXhCyYOvJNQd+3tptUEjVaVKxn25PhJhSWiFtwLuxPpNBb997xUZ+IqWEykP75GHglvRqZlECBbUD13JSocT0zE1H6Y2kn3uh5uu1cvGWJhw4f3/tADAQM882KmuFVlckxbHmbQ1KxwV5qo1Q72hVAQq6ZEzgXXWXEnYi+LP2th/OJUkwnRR2BrvY/EC+P6xu0ksxWVWrLyx68FoRJV1WeWt/frwZYBLAocH37UC+g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gk7mG3uOeJFcBHhF6CDm1xUDcBtt2eAWWhJ9Omrg5/8=;
 b=3z4fFgVAKxTuf9PQ1EWGo64exoWtNNDpkqOA5Hp3yCeDgb7BOdPe1C53sg4dWSY50CGVvE63nxW463hsvpIvKcdn73DOC+yT/LOsCb3jSLtiogqvmtQaF/KUxY+szXpZEQGIaVRaGgmS6YTgnfPMG6pg0Z/UNtfh9gL+APF6B/nL+c1PpoHyh3agif7MnWSYAXBbMgFQ5MfdBV3Wsa4btQtw5/sIXHzqdnF0xdhDCKiyuGO+8ONE48z6rLJ3lZoq/nrksS5y+b8FVtNwp/woLkOlMIcLiXYFOOglw603ofDeuitDi9O4eA4O2OHt6bZarOBBwTFuFk8uXnFLotAMtQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <87e7f1ac-11a0-834b-2905-e91a800ec7b1@suse.com>
Date: Tue, 23 May 2023 14:40:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-8-luca.fancellu@arm.com>
 <48722696-ea22-1af9-2a0a-aa78972d118a@suse.com>
 <6DEA0CC3-C3EF-4509-A869-807CE4B21401@arm.com>
 <767b11a1-4d43-9057-1fcc-6516fea64fb3@suse.com>
 <45285215-4528-435A-B203-B770D60FABAA@arm.com>
 <4d86d3b6-3ce5-8d25-abb9-4c27b592647f@suse.com>
 <E8FD576D-917C-486B-B537-2455C9686A2E@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <E8FD576D-917C-486B-B537-2455C9686A2E@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0110.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a3::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB8000:EE_
X-MS-Office365-Filtering-Correlation-Id: 2f491457-9829-4156-c8be-08db5b8ae208
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2f491457-9829-4156-c8be-08db5b8ae208
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 12:40:26.0839
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RjY5Qw5+5c0x50/1TxE3cQrZscAKNGbYEmS5quwjoAAi4QHhc8iirTd0Ay02Omckj0UNqCd50V2ZL7nH9a3LYQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB8000

On 23.05.2023 13:57, Luca Fancellu wrote:
>> On 23 May 2023, at 12:53, Jan Beulich <jbeulich@suse.com> wrote:
>> On 23.05.2023 13:50, Luca Fancellu wrote:
>>>> On 23 May 2023, at 11:31, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 23.05.2023 12:21, Luca Fancellu wrote:
>>>>>> On 23 May 2023, at 11:02, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>> On 23.05.2023 09:43, Luca Fancellu wrote:
>>>>>>> @@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
>>>>>>>
>>>>>>>   If using this option is necessary to fix an issue, please report a bug.
>>>>>>>
>>>>>>> +Enables features on dom0 on Arm systems.
>>>>>>> +
>>>>>>> +*   The `sve` integer parameter enables Arm SVE usage for Dom0 domain and sets
>>>>>>> +    the maximum SVE vector length, the option is applicable only to AArch64
>>>>>>> +    guests.
>>>>>>
>>>>>> Why "guests"? Does the option affect more than Dom0?
>>>>>
>>>>> I used “guests” because in my mind I was referring to all the aarch64 OS that can be used
>>>>> as control domain, I can change it if it sounds bad.
>>>>
>>>> If you mean OSes then better also say OSes. But maybe this doesn't need
>>>> specifically expressing, by saying e.g. "..., the option is applicable
>>>> only on AArch64"? Or can a Dom0 be 32-bit on Arm64 Xen?
>>>
>>> I think there is no limitation, so Dom0 can be 32-bit or 64-bit. Maybe I can say
>>> “... AArch64 kernel guests.”?
>>
>> I'd recommend to avoid the term "guest" when you talk about Dom0 alone.
>> Commonly "guest" means ordinary domains only, i.e. in particular excluding
>> Dom0. What's wrong with "AArch64 Dom0 kernels"?
> 
> Ok, works for me, I will use “AArch64 Dom0 kernels”; I thought “guests” was a generic category
> and then we have “privileged guests”, for example Dom0 or a driver domain, and “unprivileged
> guests” like DomUs.

Well, yes - "commonly" doesn't mean "always".

>>>>>>> +    A value equal to 0 disables the feature, this is the default value.
>>>>>>> +    Values below 0 means the feature uses the maximum SVE vector length
>>>>>>> +    supported by hardware, if SVE is supported.
>>>>>>> +    Values above 0 explicitly set the maximum SVE vector length for Dom0,
>>>>>>> +    allowed values are from 128 to maximum 2048, being multiple of 128.
>>>>>>> +    Please note that when the user explicitly specifies the value, if that value
>>>>>>> +    is above the hardware supported maximum SVE vector length, the domain
>>>>>>> +    creation will fail and the system will stop, the same will occur if the
>>>>>>> +    option is provided with a non zero value, but the platform doesn't support
>>>>>>> +    SVE.
>>>>>>
>>>>>> Assuming this also covers the -1 case, I wonder if that isn't a little too
>>>>>> strict. "Maximum supported" imo can very well be 0.
>>>>>
>>>>> Maximum supported, when platforms uses SVE, can be at minimum 128 by arm specs.
>>>>
>>>> When there is SVE - sure. But when there's no SVE, 0 is kind of the implied
>>>> length. And I'd view a command line option value of -1 quite okay in that
>>>> case: They've asked for the maximum supported, so they'll get 0. No reason
>>>> to crash the system during boot.
>>>
>>> Ok I see what you mean, for example when Kconfig SVE is enabled, but the platform doesn’t
>>> have SVE feature, requesting sve=-1 will keep the value to 0, and no system will be stopped.
>>>
>>> Maybe I can say: 
>>>
>>> “... the same will occur if the option is provided with a positive non zero value,
>>> but the platform doesn't support SVE."
>>
>> Right, provided that matches the implementation.
> 
> Ok I will do the changes, can I retain your R-by? I suppose it covers also documentation right?

I guess whether doc is covered is fuzzy. Since the doc part is Arm-
specific, I'd probably consider it not covered with the "!arm" that
I appended. But whichever way you look at it, you can keep the tag
in place.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 12:45:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 12:45:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538476.838425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1ROg-0000oZ-82; Tue, 23 May 2023 12:45:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538476.838425; Tue, 23 May 2023 12:45:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1ROg-0000oS-59; Tue, 23 May 2023 12:45:50 +0000
Received: by outflank-mailman (input) for mailman id 538476;
 Tue, 23 May 2023 12:45:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jTts=BM=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1ROf-0000oL-BM
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 12:45:49 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20625.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bd465075-f967-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 14:45:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB6905.eurprd04.prod.outlook.com (2603:10a6:10:113::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 12:45:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Tue, 23 May 2023
 12:45:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd465075-f967-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=myRCklZztdUdIkKBIKVeEbycdskxvrGYCMlqSIDOrASoPzjAcjY2J8NJGYftDe7XWDHuqXoXdqBocd88FAwX7JDcXqAaVQcpK/vBmEURwQb7yemV6qTjkjmbP1AOTit4410XEk+vcJqcDKPUHsEbmWHxVhXW53NriKwE0aHjFSieWlDBCpF3J+reOkCBPuZ2vy6XctMPoUpM59VvYpAJ1u38zqqL+9Y3iupPq7/3NDLQHgTdwEwtE35uSirDIuIEExk6JgPOjvAySqItGMETkZFcqJXMoXV50JwSea4QBiUjwFD/UX7Vrp/wBxrGUnbv/whS4Q3YRch6f+9JUq5BEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0CXuBeXxOikOKFsNolVs99C2zsNvN7fyTTxA1WR2XbI=;
 b=Ws39qH2Kcu+bTxErxtDPF5kpCVvtN+ObfiCa/e78yK0JWN6hvSja5d5hbHW6djZUDxVCxroFxoXwQGVJVPdwa9AWci8BwGvC6lPlP7mAbI5QHGKyVjNtXTqhIpw8AVbdQOOCB6EecdzPAFkdewWz0knmpByDxTBvyiU+iGvdj3vchVf5xMyHavyzY89PEpbAE+s52MnxYhaccquWZf8cP1P90SwF5uQKigthNUwJ/4GAIICEnA2v77tIvrKg8fsczO/sZ8iIASTu7RfzVjybeHb9ZKWhXwBPliZtjLftxzhbirOePAy+n66U8Aw+hCxXAsfcUzV8X4nS2VDURndh+g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0CXuBeXxOikOKFsNolVs99C2zsNvN7fyTTxA1WR2XbI=;
 b=McHh01lEH4fzXGt7CzmJLeYQo9p/smU5kfseFoT6UIKlrH6iKrz5yqKniQ1Q9sYBOZ/Br/yqWELkbJyFxT/0O329b/osyuqveW3iU4aEnUpcHa+mdok+EIZ6icwjVEsTw3vAUypGM1mPr4yRrQj0nIn+n4zpqxjfT1MhDknfgvWZ89ief+jvIsKy1yFSgqSwysSDCOWPRXgNV6ztBijwJvtYm37wX0ouo/ASrIR08jco8X2EL5eWG7croZskzzrpqjKthK+qE+r8aiGwPp2pIXV3fZfHa1tsAoxj1MwojLMmFeW27tlY4aOMfdOa3U8LIcKZNt4esFEllEfA8AhwIA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <521a7bd6-b6b1-210d-aecc-38e6b42f379a@suse.com>
Date: Tue, 23 May 2023 14:45:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: PVH Dom0 related UART failure
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
 xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
 xenia.ragiadakou@amd.com
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
 <ZGcu7EWW1cuNjwDA@Air-de-Roger>
 <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
 <ZGig68UTddfEwR6P@Air-de-Roger>
 <alpine.DEB.2.22.394.2305221520350.1553709@ubuntu-linux-20-04-desktop>
 <818859a6-883a-81c0-1222-84c62ada3200@suse.com>
 <ZGycpaCkSvWecsuE@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZGycpaCkSvWecsuE@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 23.05.2023 12:59, Roger Pau Monné wrote:
> On Tue, May 23, 2023 at 08:44:48AM +0200, Jan Beulich wrote:
>> On 23.05.2023 00:20, Stefano Stabellini wrote:
>>> On Sat, 20 May 2023, Roger Pau Monné wrote:
>>>> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
>>>> index ec2e978a4e6b..0ff8e940fa8d 100644
>>>> --- a/xen/drivers/vpci/header.c
>>>> +++ b/xen/drivers/vpci/header.c
>>>> @@ -289,6 +289,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>       */
>>>>      for_each_pdev ( pdev->domain, tmp )
>>>>      {
>>>> +        if ( !tmp->vpci )
>>>> +        {
>>>> +            printk(XENLOG_G_WARNING "%pp: not handled by vPCI for %pd\n",
>>>> +                   &tmp->sbdf, pdev->domain);
>>>> +            continue;
>>>> +        }
>>>> +
>>>>          if ( tmp == pdev )
>>>>          {
>>>>              /*
>>>> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
>>>> index 652807a4a454..0baef3a8d3a1 100644
>>>> --- a/xen/drivers/vpci/vpci.c
>>>> +++ b/xen/drivers/vpci/vpci.c
>>>> @@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
>>>>      unsigned int i;
>>>>      int rc = 0;
>>>>  
>>>> -    if ( !has_vpci(pdev->domain) )
>>>> +    if ( !has_vpci(pdev->domain) ||
>>>> +         /*
>>>> +          * Ignore RO and hidden devices, those are in use by Xen and vPCI
>>>> +          * won't work on them.
>>>> +          */
>>>> +         pci_get_pdev(dom_xen, pdev->sbdf) )
>>>>          return 0;
>>>>  
>>>>      /* We should not get here twice for the same device. */
>>>
>>>
>>> Now this patch works! Thank you!! :-)
>>>
>>> You can check the full logs here
>>> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4329259080
>>>
>>> Is the patch ready to be upstreamed aside from the commit message?
>>
>> I don't think so. vPCI ought to work on "r/o" devices. Out of curiosity,
>> have you also tried my (hackish and hence RFC) patch [1]?
> 
> For r/o devices there should be no need for vPCI handlers; reading the
> config space of such devices can be done directly.
> 
> There's some work to be done for hidden devices, as for those dom0 has
> write access to the config space and thus needs vPCI to be setup
> properly.

But then isn't it going to complicate things when dealing with r/o and
hidden devices differently?

> The change to modify_bars() in order to handle devices without vpci
> populated is a bugfix, as it's already possible to have devices
> assigned to a domain that don't have vpci setup, if the call to
> vpci_add_handlers() in setup_one_hwdom_device() fails.  That one could
> go in separately from the rest of the work in order to enable support
> for hidden devices.

You saying "assigned to a domain" makes this sound more problematic
than it probably is: If it really was any domain other than Dom0, I
think there would be a security concern. Yet even for Dom0 I wonder
what good can come out of there not being proper vPCI setup for a
device.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 23 12:49:13 2023
Message-ID: <39156e66-d9e8-f768-1d9d-0bfd0fdde757@citrix.com>
Date: Tue, 23 May 2023 13:48:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86: do away with HAVE_AS_NEGATIVE_TRUE
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <2f4234ce-9427-b78c-f7d5-4d9822d4d1c9@suse.com>
In-Reply-To: <2f4234ce-9427-b78c-f7d5-4d9822d4d1c9@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 23/05/2023 1:37 pm, Jan Beulich wrote:
> There's no real need for the associated probing - we can easily convert
> to a uniform value without knowing the specific behavior (note also that
> the respective comments weren't fully correct and have gone stale).
>
> No difference in generated code.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.  I do think this form is easier to follow.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 23 12:49:18 2023
Message-ID: <b5743ce3-2f3c-c514-2deb-710a3980b2db@xen.org>
Date: Tue, 23 May 2023 13:49:13 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v2] maintainers: add regex matching for xsm
Content-Language: en-US
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230522191450.5665-1-dpsmith@apertussolutions.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230522191450.5665-1-dpsmith@apertussolutions.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Daniel,

On 22/05/2023 20:14, Daniel P. Smith wrote:
> XSM is a subsystem where how and where its hooks are called is as
> important as the implementation of the hooks themselves. The people best
> suited to evaluate the how and where are the XSM maintainers and
> reviewers. This creates a challenge, as the hooks are used throughout the
> hypervisor, yet the XSM maintainers and reviewers are not, and should not
> be, listed as reviewers for each of those subsystems in the MAINTAINERS
> file. The MAINTAINERS file does, however, support regex matches via the
> 'K' identifier, which are applied to both the commit message and the
> commit delta. Adding the 'K' identifier declares that any patch relating
> to XSM requires the input of the XSM maintainers and reviewers. For those
> that use the get_maintainers script, the 'K' identifier will
> automatically add the XSM maintainers and reviewers. Anyone not using
> get_maintainers will be responsible for ensuring that, if their work
> touches an XSM hook, the XSM maintainers and reviewers are copied.
> 
> This patch adds a pair of regex expressions to the XSM section. The first is
> `xsm_.*` which seeks to match XSM hooks in the commit's delta. The second is
> `\b(xsm|XSM)\b` which seeks to match strictly the words xsm or XSM and should
> not capture words with a substring of "xsm".
> 
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>

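As a quick illustration (a hypothetical check, not part of the patch), the
behaviour of the two patterns can be verified with Python's re module; per
the commit message, 'K' patterns are matched against both the commit
message and the commit delta:

```python
import re

# The two patterns from the proposed MAINTAINERS 'K:' entries.
hook = re.compile(r"xsm_.*")         # matches XSM hook identifiers in a diff
word = re.compile(r"\b(xsm|XSM)\b")  # matches the standalone word only

assert hook.search("+    rc = xsm_domain_create(d, ssidref);")
assert word.search("xsm: refactor policy loading")
assert word.search("Enable XSM in the build")
# \b rejects "xsm" appearing as a substring of a longer identifier
# (hypothetical identifier used here purely for illustration).
assert not word.search("flexsm_helper()")
```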
Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
>   MAINTAINERS | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index f2f1881b32..b0f0823d21 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -674,6 +674,8 @@ F:	tools/flask/
>   F:	xen/include/xsm/
>   F:	xen/xsm/
>   F:	docs/misc/xsm-flask.txt
> +K:	xsm_.*
> +K:	\b(xsm|XSM)\b
>   
>   THE REST
>   M:	Andrew Cooper <andrew.cooper3@citrix.com>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 23 13:55:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 13:55:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538494.838455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1STW-0000ru-Ql; Tue, 23 May 2023 13:54:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538494.838455; Tue, 23 May 2023 13:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1STW-0000rn-Ne; Tue, 23 May 2023 13:54:54 +0000
Received: by outflank-mailman (input) for mailman id 538494;
 Tue, 23 May 2023 13:54:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YHTC=BM=citrix.com=prvs=500ef747c=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q1STV-0000qv-Dh
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 13:54:53 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 61e0d436-f971-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 15:54:50 +0200 (CEST)
Received: from mail-bn8nam12lp2172.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.172])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 May 2023 09:54:47 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DS7PR03MB5638.namprd03.prod.outlook.com (2603:10b6:5:2c3::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 13:54:43 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Tue, 23 May 2023
 13:54:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61e0d436-f971-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684850090;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=MQI94l6oUzrg/Adl6XcIVElAggxEE0jHT4ZP2Slxbt8=;
  b=BJCddEDScc+YpPMYLu9f9AtnP6K33+7QDbVabVwOoY8iT+Aip3vkruMP
   H/ufs1flv1FR+HQuZjhwkvkVeC6Oi3GcPBalcdpoONWxXkSDwuTIIT2FO
   4iLX612eLWy2u0pQTIk3702nTPxFc2OyDH9ZMGrjjOCVzxm84nGS2J72O
   w=;
X-IronPort-RemoteIP: 104.47.55.172
X-IronPort-MID: 110085792
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="110085792"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FlxEvfeRhh6aUlQBwsXBARyxqyEcUJg5sQqgJlc3PI0GAQdkOix82tgGTCgJoFS6tE46AAZUtTtSO7f6ttSW6JWcYsQTkDAo0HRwiNATRz5A65QkFG0HPDVmO3P+jx8UbgKpxDsqJ6qC+oVhnfKeZ6oY/PbMxzbZRKtf792OL5VY0EsPdWr8XJnUCEli13zoxnzyVOqzlS9HpGtHu3Mfy23atDkMVKLV+8psukslRXGv1r+4RYuouA+Q4vKI1xpwQbdqZy9e6Hw5uJ/M1Y8q/V1Ggkn1hgn/Emo9Bfo/gecReIwkBsrKO0V1GgT16tMYww3dJ6sTahCEVEciurYBvA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=PQbcCiDxSWHvhUxrrc3j4cucKK+/8BcO/kM9Y/HecYI=;
 b=POxvFr0YHSICreFWYwJHGWRjzg6qLptP58kT6pcmQFv+aYHft8sSYbtw70XmXmQSmrStIbHg5R1qFXiEe5hqRGC6pCNGnVj5kOwoDW0DLulQwZJURBzWC+EVi62J0Ty41z7lRHqii1BoNdA09mGPcLjHNI2urOO/y8vSiUBVy+U02a1xUqsDv2OpiHY+FukCO3k7UKTnU8ql4MOymb9a/n44+IygR0to/UCO4nGNQHZWp0thLAMk45Ja7UdR/bFvcH69i+k0N8a8QtA/65KKgoO+GJpo9oapHN0Soi3/Ei03S8aLU+V4FBq3szG0Na75toxg2H2tpXwWIgFfgf91RA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PQbcCiDxSWHvhUxrrc3j4cucKK+/8BcO/kM9Y/HecYI=;
 b=EzSizDhFfp/cckl9i6FORJdDFxwx1COE1khLoIOyHqwWAjgx7XwDBjrUih6ArlwL1m5mqVFv0/1Tq6BqFwNBTGIcxsxibdYY0z6DsnwMZySb7a8yGQ91oVrs9CvWs/gl27+3K/d5uho3fgIh6Ax/XMrm30LT9i0cOVX4+vXdgPU=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 23 May 2023 15:54:36 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <stefano.stabellini@amd.com>
Cc: xen-devel@lists.xenproject.org, jbeulich@suse.com,
	andrew.cooper3@citrix.com, xenia.ragiadakou@amd.com
Subject: Re: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix
Message-ID: <ZGzFnE2w/YqYT35c@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop>
X-ClientProxiedBy: LO4P265CA0220.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:33a::18) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DS7PR03MB5638:EE_
X-MS-Office365-Filtering-Correlation-Id: a89434d7-edf8-42c5-1801-08db5b95424b
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a89434d7-edf8-42c5-1801-08db5b95424b
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 13:54:42.6764
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UGZEyeSwV3jJNue7J2MsW5IEE4lmrW6WwTT4IyQxiAdS7CsH81cVEackM3ZPF6T45WyjEMI5ejesTrmU+4axbQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5638

On Thu, May 18, 2023 at 04:44:53PM -0700, Stefano Stabellini wrote:
> Hi all,
> 
> After many PVH Dom0 suspend/resume cycles we are seeing the following
> Xen crash (it is random and doesn't reproduce reliably):
> 
> (XEN) [555.042981][<ffff82d04032a137>] R arch/x86/irq.c#_clear_irq_vector+0x214/0x2bd
> (XEN) [555.042986][<ffff82d04032a74c>] F destroy_irq+0xe2/0x1b8
> (XEN) [555.042991][<ffff82d0403276db>] F msi_free_irq+0x5e/0x1a7
> (XEN) [555.042995][<ffff82d04032da2d>] F unmap_domain_pirq+0x441/0x4b4
> (XEN) [555.043001][<ffff82d0402d29b9>] F arch/x86/hvm/vmsi.c#vpci_msi_disable+0xc0/0x155
> (XEN) [555.043006][<ffff82d0402d39fc>] F vpci_msi_arch_disable+0x1e/0x2b
> (XEN) [555.043013][<ffff82d04026561c>] F drivers/vpci/msi.c#control_write+0x109/0x10e
> (XEN) [555.043018][<ffff82d0402640c3>] F vpci_write+0x11f/0x268
> (XEN) [555.043024][<ffff82d0402c726a>] F arch/x86/hvm/io.c#vpci_portio_write+0xa0/0xa7
> (XEN) [555.043029][<ffff82d0402c6682>] F hvm_process_io_intercept+0x203/0x26f
> (XEN) [555.043034][<ffff82d0402c6718>] F hvm_io_intercept+0x2a/0x4c
> (XEN) [555.043039][<ffff82d0402b6353>] F arch/x86/hvm/emulate.c#hvmemul_do_io+0x29b/0x5f6
> (XEN) [555.043043][<ffff82d0402b66dd>] F arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x2f/0x6a
> (XEN) [555.043048][<ffff82d0402b7bde>] F hvmemul_do_pio_buffer+0x33/0x35
> (XEN) [555.043053][<ffff82d0402c7042>] F handle_pio+0x6d/0x1b4
> (XEN) [555.043059][<ffff82d04029ec20>] F svm_vmexit_handler+0x10bf/0x18b0
> (XEN) [555.043064][<ffff82d0402034e5>] F svm_stgi_label+0x8/0x18
> (XEN) [555.043067]
> (XEN) [555.469861]
> (XEN) [555.471855] ****************************************
> (XEN) [555.477315] Panic on CPU 9:
> (XEN) [555.480608] Assertion 'per_cpu(vector_irq, cpu)[old_vector] == irq' failed at arch/x86/irq.c:233
> (XEN) [555.489882] ****************************************
> 
> Looking at the code in question, the ASSERT looks wrong to me.
> 
> Specifically, if you see send_cleanup_vector and
> irq_move_cleanup_interrupt, it is entirely possible to have old_vector
> still valid and also move_in_progress still set, but only some of the
> per_cpu(vector_irq, me)[vector] cleared. It seems to me that this could
> happen especially when an MSI has a large old_cpu_mask.

I guess the only way to get into such a situation would be if you happen
to execute _clear_irq_vector() with a cpu_online_map smaller than the
one in old_cpu_mask, at which point you will leave some old_vector
fields not updated.

Maybe you somehow get into this situation when doing suspend/resume?

Could you try adding:

ASSERT(cpumask_equal(tmp_mask, desc->arch.old_cpu_mask));

before the `for_each_cpu(cpu, tmp_mask)` loop?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 23 14:03:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 14:03:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538500.838465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Sc8-0002Tj-QB; Tue, 23 May 2023 14:03:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538500.838465; Tue, 23 May 2023 14:03:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Sc8-0002Tc-ML; Tue, 23 May 2023 14:03:48 +0000
Received: by outflank-mailman (input) for mailman id 538500;
 Tue, 23 May 2023 14:03:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/YuP=BM=linaro.org=linus.walleij@srs-se1.protection.inumbo.net>)
 id 1q1Sc7-0002TW-4s
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 14:03:47 +0000
Received: from mail-lf1-x131.google.com (mail-lf1-x131.google.com
 [2a00:1450:4864:20::131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a22f03f6-f972-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 16:03:46 +0200 (CEST)
Received: by mail-lf1-x131.google.com with SMTP id
 2adb3069b0e04-4f122ff663eso8053272e87.2
 for <xen-devel@lists.xenproject.org>; Tue, 23 May 2023 07:03:46 -0700 (PDT)
Received: from Fecusia.lan (c-05d8225c.014-348-6c756e10.bbcust.telenor.se.
 [92.34.216.5]) by smtp.gmail.com with ESMTPSA id
 w16-20020ac254b0000000b004f01ae1e63esm1338341lfk.272.2023.05.23.07.03.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 23 May 2023 07:03:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a22f03f6-f972-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1684850625; x=1687442625;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=vPsiQpeoZd8UCXYzsHHeRCuEWCzprLDI2K1UxMG1vNw=;
        b=x+MCP6RIfgrhP/C4vp/VF513UZfXgcHOXjt39EnZYcgmTNS9SmK6xTyRxymm3cAXhy
         /LaZFhOURjsNJOM6lIY9MwAVxzXsmMIWE8DLJaBbKJFE4kIQ0kHMc3SxPyVh6uSwrtk+
         wNU2s2zwxen1OqwemfuCkOAFnIWaN+GP7UW62vDuVvYvmriGIigLWe6Gni7csFiLWmIh
         jchR1tkEcWwy7Bb8Twqgp8qV4sQVPq4jMfbrSvauaswEagc8MJQP8ZxgMukHJvIdRfW4
         SeSUkwhQub6h3PdQEBdBn9B6wSPsBeZUjcBXyY2tNf52dtUjqpcdqhVQtWybc8ItqCU5
         4xkQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684850625; x=1687442625;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=vPsiQpeoZd8UCXYzsHHeRCuEWCzprLDI2K1UxMG1vNw=;
        b=DT3xa5WkxjGCf3NxZYZAwrn3u98c07+sLCWnXIidDBHvIhBXC1gAGp1gDU1SE0LbHW
         DNypVl1cTQ1YAyYtsu4SDSOT3MvdX0G2O4zsSMqq/GzfBmanU4ma5yYNhzurnnYxbWiT
         xA5sA9Um5cV1MEV0Mz7OSs8Wh1czoMIUyrKpczsiQOjqd21HWMtt2T3c3LpCh/mTRNq3
         4dpLRj5gkR57kOMoDE3LgKc5bb24cPGJRCsmyssOwOmXDegPY+nKin6MuR5+vG5Yf534
         BVXo+GpK5yNLBp+VmxIll906aYJFFAea57bYYb38TlG3F6fzutQfdzmNwtREf6mcraqR
         1ctg==
X-Gm-Message-State: AC+VfDxOwgvxalm+BoQpi17eNJ3/cKoZau+5C/tCeBpt5bJRwwh8lGrB
	M857lEK+xiEtEaff6KIJ7r3Glw==
X-Google-Smtp-Source: ACHHUZ4e1X4jxIZwXl+y72sLOP3yazcOb2+MopQ9WDQmhL++ViPZczhGjjS6itrmgC5AhktkZRl30w==
X-Received: by 2002:ac2:5de8:0:b0:4f1:3bd7:e53a with SMTP id z8-20020ac25de8000000b004f13bd7e53amr4566812lfq.49.1684850625594;
        Tue, 23 May 2023 07:03:45 -0700 (PDT)
From: Linus Walleij <linus.walleij@linaro.org>
To: Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	Linus Walleij <linus.walleij@linaro.org>
Subject: [PATCH] xen/netback: Pass (void *) to virt_to_page()
Date: Tue, 23 May 2023 16:03:42 +0200
Message-Id: <20230523140342.2672713-1-linus.walleij@linaro.org>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

virt_to_page() takes a virtual address as argument, but the driver
passes an unsigned long. This works because the target platform(s)
use polymorphic macros to calculate the page.

Since many architectures implement virt_to_pfn() as a macro, this
function becomes polymorphic and accepts both an (unsigned long) and
a (void *).

Fix this up with an explicit (void *) cast.

Cc: Wei Liu <wei.liu@kernel.org>
Cc: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org
Cc: netdev@vger.kernel.org
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
 drivers/net/xen-netback/netback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index c1501f41e2d8..caf0c815436c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -689,7 +689,7 @@ static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 		prev_pending_idx = pending_idx;
 
 		txp = &queue->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
+		page = virt_to_page((void *)idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 14:50:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 14:50:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538508.838475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1TLE-0007j2-7w; Tue, 23 May 2023 14:50:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538508.838475; Tue, 23 May 2023 14:50:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1TLE-0007iv-4a; Tue, 23 May 2023 14:50:24 +0000
Received: by outflank-mailman (input) for mailman id 538508;
 Tue, 23 May 2023 14:50:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YHTC=BM=citrix.com=prvs=500ef747c=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q1TLD-0007ip-AN
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 14:50:23 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 21a2f17e-f979-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 16:50:18 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 23 May 2023 10:50:15 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB6801.namprd03.prod.outlook.com (2603:10b6:510:118::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.29; Tue, 23 May
 2023 14:50:12 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Tue, 23 May 2023
 14:50:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21a2f17e-f979-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684853418;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=l8GXLPbeva7yVj4fNSVD32V8t3CEE94ZtnoDhUJ7Aj4=;
  b=YPcng1zPfunKVIqCqR/2oH7vHZzPxcxrJ5VO2qZ+s6eKrpEESsf9nQLv
   AvhUz7il2utyNMW1bmhtLcfDFVNQ5Iap/saQsM49a9/00o6ZAaez9F4DH
   sGM4CMQghPHCFYBkplZrm4CYXvGhrJX+ap8Bl184fMnyHzDVH1P0DY/ZY
   Y=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 109969725
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="109969725"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hiSGoMs2Kgd0TpOEU9nliTjs+TD5WAPz8tma5wH4q+gjDJYjeJkj/sq4AzpP8zqsDzph4RSWdiu0NSyjdvBhVBSp8rPKU6MU7AjThJZMt4wx+3dlBfUzXrAO9p1QjS7C1shEYzp79J0W6TsBObrKu8iC1edlI8tgyNp3j5qv3SoJSpRbiHpDLPdv1xYZFFq9keVSejoJrE36JFVTSWHMV+rGeMebP79BK4icQ/6ASqTqSi+fznYNDB7sBvaKFLoh1KCHTf1x9FVQd53zZuFxEVTiFBWPktLwF3wA829koJ2e9vFtc+WAcnQqYZD54G6hopI5DN4qjyT3igp/d0nBow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iewsP3RDwBj7upNajhGPj9pL1aILmljqTlcpQ2xqdk0=;
 b=KX9OlMp8khVhj9zo7DfB22N1LdtzOyhx3hI1asm8dkiUXdc4IGmSXUSU1iNcqxPIZm3/BFuwXI6T7iO7BsCx2kbKv/DfHIkBiNbFaVyyY5Knbu2Wcmdxw8iylvAFR9h+uSvN0qEgrxN/XMng3wtZ6Ld1u6jF/Z0oGQOi+FGIK8SwH+P8DsTcQm0BJd+1S9/j8Gd6WAa0C3rBud6Emy/QnViX7fN+sooGWkFWv0yGY46QrrYaJxQ5oQ/6ztOlbSCJIraljFKJmx9Y7VAcQyERsLkOsr4xyJH8WW5Wf3G3enwxdYwWGUt2GIGHjGR8PjXvnNFOE7h+JVYIgwj1VjeKFA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iewsP3RDwBj7upNajhGPj9pL1aILmljqTlcpQ2xqdk0=;
 b=tCDcjBWfwDD7IsqbG5cy6fnayaq0AJgJeqYE8Auwaz5XqDsqwBsVgKQqBnztY+QqNSqCoq+9tPgirMh+WqT2BAsesjK3n/BEWLrbAieOOF2OXq4qT32fkcn+jpIeatqU2xBDCr7hDyqKxHtqhvlvvmeLieRSLYJ09ANj3U+Nh4Y=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 23 May 2023 16:50:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <stefano.stabellini@amd.com>
Cc: xen-devel@lists.xenproject.org, jbeulich@suse.com,
	andrew.cooper3@citrix.com, xenia.ragiadakou@amd.com
Subject: Re: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix
Message-ID: <ZGzSnu8m/IqjmyHx@Air-de-Roger>
References: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop>
 <ZGzFnE2w/YqYT35c@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZGzFnE2w/YqYT35c@Air-de-Roger>
X-ClientProxiedBy: LO4P123CA0278.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:195::13) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH0PR03MB6801:EE_
X-MS-Office365-Filtering-Correlation-Id: a25c4759-0979-4bf7-a030-08db5b9d0306
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a25c4759-0979-4bf7-a030-08db5b9d0306
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 May 2023 14:50:12.4558
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ABs921PSVm+CfjqiVFm8EhF/JjfsQWqdc9thmKkd1AnFnTN1SUPPkIPr3ROG9ty2pDyEFuyJvYS+dWl+KzFpFg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6801

On Tue, May 23, 2023 at 03:54:36PM +0200, Roger Pau Monné wrote:
> On Thu, May 18, 2023 at 04:44:53PM -0700, Stefano Stabellini wrote:
> > Hi all,
> > 
> > After many PVH Dom0 suspend/resume cycles we are seeing the following
> > Xen crash (it is random and doesn't reproduce reliably):
> > 
> > (XEN) [555.042981][<ffff82d04032a137>] R arch/x86/irq.c#_clear_irq_vector+0x214/0x2bd
> > (XEN) [555.042986][<ffff82d04032a74c>] F destroy_irq+0xe2/0x1b8
> > (XEN) [555.042991][<ffff82d0403276db>] F msi_free_irq+0x5e/0x1a7
> > (XEN) [555.042995][<ffff82d04032da2d>] F unmap_domain_pirq+0x441/0x4b4
> > (XEN) [555.043001][<ffff82d0402d29b9>] F arch/x86/hvm/vmsi.c#vpci_msi_disable+0xc0/0x155
> > (XEN) [555.043006][<ffff82d0402d39fc>] F vpci_msi_arch_disable+0x1e/0x2b
> > (XEN) [555.043013][<ffff82d04026561c>] F drivers/vpci/msi.c#control_write+0x109/0x10e
> > (XEN) [555.043018][<ffff82d0402640c3>] F vpci_write+0x11f/0x268
> > (XEN) [555.043024][<ffff82d0402c726a>] F arch/x86/hvm/io.c#vpci_portio_write+0xa0/0xa7
> > (XEN) [555.043029][<ffff82d0402c6682>] F hvm_process_io_intercept+0x203/0x26f
> > (XEN) [555.043034][<ffff82d0402c6718>] F hvm_io_intercept+0x2a/0x4c
> > (XEN) [555.043039][<ffff82d0402b6353>] F arch/x86/hvm/emulate.c#hvmemul_do_io+0x29b/0x5f6
> > (XEN) [555.043043][<ffff82d0402b66dd>] F arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x2f/0x6a
> > (XEN) [555.043048][<ffff82d0402b7bde>] F hvmemul_do_pio_buffer+0x33/0x35
> > (XEN) [555.043053][<ffff82d0402c7042>] F handle_pio+0x6d/0x1b4
> > (XEN) [555.043059][<ffff82d04029ec20>] F svm_vmexit_handler+0x10bf/0x18b0
> > (XEN) [555.043064][<ffff82d0402034e5>] F svm_stgi_label+0x8/0x18
> > (XEN) [555.043067]
> > (XEN) [555.469861]
> > (XEN) [555.471855] ****************************************
> > (XEN) [555.477315] Panic on CPU 9:
> > (XEN) [555.480608] Assertion 'per_cpu(vector_irq, cpu)[old_vector] == irq' failed at arch/x86/irq.c:233
> > (XEN) [555.489882] ****************************************
> > 
> > Looking at the code in question, the ASSERT looks wrong to me.
> > 
> > Specifically, if you look at send_cleanup_vector and
> > irq_move_cleanup_interrupt, it is entirely possible to have old_vector
> > still valid and move_in_progress still set, but only some of the
> > per_cpu(vector_irq, me)[vector] entries cleared. It seems to me that
> > this could happen especially when an MSI has a large old_cpu_mask.
> 
> I guess the only way to get into such a situation would be if you happen
> to execute _clear_irq_vector() with a cpu_online_map smaller than the
> one in old_cpu_mask, at which point the old_vector fields are left
> stale.
> 
> Maybe somehow you get into this situation when doing suspend/resume?
> 
> Could you try to add a:
> 
> ASSERT(cpumask_equal(tmp_mask, desc->arch.old_cpu_mask));
> 
> Before the `for_each_cpu(cpu, tmp_mask)` loop?

I see that the old_cpu_mask is cleared in release_old_vec(), so that
suggestion is not very useful.

Does the crash happen at specific points, for example just after
resume or before suspend?

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 23 15:16:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 15:16:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538512.838485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Tjy-0001qq-8U; Tue, 23 May 2023 15:15:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538512.838485; Tue, 23 May 2023 15:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Tjy-0001qj-4c; Tue, 23 May 2023 15:15:58 +0000
Received: by outflank-mailman (input) for mailman id 538512;
 Tue, 23 May 2023 15:15:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rcTw=BM=gmail.com=wei.liu.linux@srs-se1.protection.inumbo.net>)
 id 1q1Tjw-0001qd-N3
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 15:15:56 +0000
Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com
 [209.85.214.176]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b592934d-f97c-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 17:15:54 +0200 (CEST)
Received: by mail-pl1-f176.google.com with SMTP id
 d9443c01a7336-1ae4c5e1388so71464705ad.1
 for <xen-devel@lists.xenproject.org>; Tue, 23 May 2023 08:15:54 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([20.69.120.36])
 by smtp.gmail.com with ESMTPSA id
 x4-20020a170902ea8400b001ac40488620sm6955882plb.92.2023.05.23.08.15.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 23 May 2023 08:15:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b592934d-f97c-11ed-8611-37d641c3527e
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684854953; x=1687446953;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZJeGQEMrpM8eeqfU3xH3VNZZx7eme2tUb730Gdj7mkI=;
        b=OSNn9pHvfuvI8n53+a6KiMU+k0FeKwkbYJ/ciqdjQYnvvQ33IC6SxVdP73xYgIe7rd
         5/9Zdl1XPf+DEIbq/MngQykqeFFHinkSsW7TStGmFDXGB2qVmhq9hW2cmWEVSN8XUtS7
         Hvd6EzSLbNlKW+o48gx6pAkJN38yKBxytsaUdFVNL+NQzokzEIDSGVTyempLYEg4P4yD
         NzTuTAN9DH37a7KZMcK58obx65D11rTFzQFAoNaXbJ+q+qrYyL80EkOszg7hii5dJB0L
         0AQvaeGbYhWqPyAPq7hauJdRuBpnS5zOYGVvhxKtM2U28oiJyCGCBFO8DDxiveMJ+cKs
         rvCg==
X-Gm-Message-State: AC+VfDwdrMUVpkZ0KXjIFPTN1pgbCmbv1AKtzNPyIDd8GyRHQKH3NRno
	/jmI1/AsLdHCdUuM104h6xM=
X-Google-Smtp-Source: ACHHUZ4xFXkiB/BvdLUoKTFRgeVpyuCwCMcGSxWP57ZlD0BqfYdE3+H9GMSiNjXnpz2La8kYeQDaNQ==
X-Received: by 2002:a17:902:7683:b0:1ac:637d:5888 with SMTP id m3-20020a170902768300b001ac637d5888mr13387115pll.43.1684854952831;
        Tue, 23 May 2023 08:15:52 -0700 (PDT)
Date: Tue, 23 May 2023 15:15:50 +0000
From: Wei Liu <wei.liu@kernel.org>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org, netdev@vger.kernel.org
Subject: Re: [PATCH] xen/netback: Pass (void *) to virt_to_page()
Message-ID: <ZGzYpm/Vs+TfSBMR@liuwe-devbox-debian-v2>
References: <20230523140342.2672713-1-linus.walleij@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230523140342.2672713-1-linus.walleij@linaro.org>

On Tue, May 23, 2023 at 04:03:42PM +0200, Linus Walleij wrote:
> virt_to_page() takes a virtual address as argument but
> the driver passes an unsigned long, which works because
> the target platform(s) use polymorphic macros to calculate
> the page.
> 
> Since many architectures implement virt_to_pfn() as
> a macro, this function becomes polymorphic and accepts both a
> (unsigned long) and a (void *).
> 
> Fix this up with an explicit (void *) cast.
> 
> Cc: Wei Liu <wei.liu@kernel.org>
> Cc: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org
> Cc: netdev@vger.kernel.org
> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

Acked-by: Wei Liu <wei.liu@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue May 23 15:29:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 15:29:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538517.838495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Two-0003O3-CN; Tue, 23 May 2023 15:29:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538517.838495; Tue, 23 May 2023 15:29:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Two-0003Nw-92; Tue, 23 May 2023 15:29:14 +0000
Received: by outflank-mailman (input) for mailman id 538517;
 Tue, 23 May 2023 15:29:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1Twm-0003Nq-VJ
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 15:29:13 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8f28e8ab-f97e-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 17:29:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f28e8ab-f97e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684855749;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=mkCocN7i5a+MVKnvRRH3JXM3dGWs+ajYpoAKaFQWJoM=;
  b=IdQ6EnLXEPt5Xxw2LHyywpTpDLh0Uho8lnfpEYgNKp3XVsO3gHtEfKIl
   i63BcPsFjM4qzbOmkM01l1wG77PEMppMryNF862LVtdD0a+JoAOeXSuXm
   QgZgAeSap1yyzDvOCBl02X1wINQHx3xJu7XSAikISPSvnt0RdOAlYDkZz
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110491764
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="110491764"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [OSSTEST PATCH] ts-xen-build-prep: Install python3-venv for QEMU
Date: Tue, 23 May 2023 16:27:48 +0100
Message-ID: <20230523152748.23726-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Since QEMU commit 81e2b198a8cb ("configure: create a python venv
unconditionally"), a python venv is always created and this new package
is needed on Debian.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 ts-xen-build-prep | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index 3ae8f215..547bbc16 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -209,6 +209,7 @@ sub prep () {
                       libdevmapper-dev libxml-xpath-perl libelf-dev
                       ccache nasm checkpolicy ebtables
 		      python3-docutils python3-dev
+		      python3-venv
                       libtirpc-dev
                       libgnutls28-dev);
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 15:47:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 15:47:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538521.838504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UEL-0005n8-Tf; Tue, 23 May 2023 15:47:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538521.838504; Tue, 23 May 2023 15:47:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UEL-0005n1-Qr; Tue, 23 May 2023 15:47:21 +0000
Received: by outflank-mailman (input) for mailman id 538521;
 Tue, 23 May 2023 15:47:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZ9c=BM=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q1UEJ-0005mv-Sr
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 15:47:20 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 18001ae8-f981-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 17:47:17 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-66-SxXu0-qlNDqLb0_bLpXO-w-1; Tue, 23 May 2023 11:47:11 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 1F1CE8039AD;
 Tue, 23 May 2023 15:47:11 +0000 (UTC)
Received: from localhost (unknown [10.39.195.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 14A2640C6CD7;
 Tue, 23 May 2023 15:47:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18001ae8-f981-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684856836;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=NbSCT8maNtke0NQg6LjoQCaLDrMfiYyIwfNxs+qRW+U=;
	b=D7YFJUnhURPQZO+wzaL4HlP3cN8Yl3gzaFka/Q6my6Cor6HxaVyHtfqmuc0KtB4ooDU2ba
	VgtkIyfy1QsJXnYRf9hBCIHr7c6URvZV3aBmzfGwanfKG8sH9ffm+Ivqz+KDqMTSo1afYG
	zzv1t4OGpWxdLJquU9k+0QxXKZ7NmgY=
X-MC-Unique: SxXu0-qlNDqLb0_bLpXO-w-1
Date: Tue, 23 May 2023 11:47:08 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 1/6] block: add blk_io_plug_call() API
Message-ID: <20230523154708.GB96478@fedora>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-2-stefanha@redhat.com>
 <mzxjz4d3ab3sq6grwsle6wlacysh2uffz42ojpdze3hmqimbr5@fxgkad47nnim>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="IMmHh1O73W2Mz05r"
Content-Disposition: inline
In-Reply-To: <mzxjz4d3ab3sq6grwsle6wlacysh2uffz42ojpdze3hmqimbr5@fxgkad47nnim>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2


--IMmHh1O73W2Mz05r
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, May 19, 2023 at 10:45:57AM +0200, Stefano Garzarella wrote:
> On Wed, May 17, 2023 at 06:10:17PM -0400, Stefan Hajnoczi wrote:
> > Introduce a new API for thread-local blk_io_plug() that does not
> > traverse the block graph. The goal is to make blk_io_plug() multi-queue
> > friendly.
> >=20
> > Instead of having block drivers track whether or not we're in a plugged
> > section, provide an API that allows them to defer a function call until
> > we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is
> > called multiple times with the same fn/opaque pair, then fn() is only
> > called once at the end of the function - resulting in batching.
> >=20
> > This patch introduces the API and changes blk_io_plug()/blk_io_unplug().
> > blk_io_plug()/blk_io_unplug() no longer require a BlockBackend argument
> > because the plug state is now thread-local.
> >=20
> > Later patches convert block drivers to blk_io_plug_call() and then we
> > can finally remove .bdrv_co_io_plug() once all block drivers have been
> > converted.
> >=20
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> > MAINTAINERS                       |   1 +
> > include/sysemu/block-backend-io.h |  13 +--
> > block/block-backend.c             |  22 -----
> > block/plug.c                      | 159 ++++++++++++++++++++++++++++++
> > hw/block/dataplane/xen-block.c    |   8 +-
> > hw/block/virtio-blk.c             |   4 +-
> > hw/scsi/virtio-scsi.c             |   6 +-
> > block/meson.build                 |   1 +
> > 8 files changed, 173 insertions(+), 41 deletions(-)
> > create mode 100644 block/plug.c
> >=20
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 50585117a0..574202295c 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -2644,6 +2644,7 @@ F: util/aio-*.c
> > F: util/aio-*.h
> > F: util/fdmon-*.c
> > F: block/io.c
> > +F: block/plug.c
> > F: migration/block*
> > F: include/block/aio.h
> > F: include/block/aio-wait.h
> > diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
> > index d62a7ee773..be4dcef59d 100644
> > --- a/include/sysemu/block-backend-io.h
> > +++ b/include/sysemu/block-backend-io.h
> > @@ -100,16 +100,9 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
> > int blk_get_max_iov(BlockBackend *blk);
> > int blk_get_max_hw_iov(BlockBackend *blk);
> >=20
> > -/*
> > - * blk_io_plug/unplug are thread-local operations. This means that multiple
> > - * IOThreads can simultaneously call plug/unplug, but the caller must ensure
> > - * that each unplug() is called in the same IOThread of the matching plug().
> > - */
> > -void coroutine_fn blk_co_io_plug(BlockBackend *blk);
> > -void co_wrapper blk_io_plug(BlockBackend *blk);
> > -
> > -void coroutine_fn blk_co_io_unplug(BlockBackend *blk);
> > -void co_wrapper blk_io_unplug(BlockBackend *blk);
> > +void blk_io_plug(void);
> > +void blk_io_unplug(void);
> > +void blk_io_plug_call(void (*fn)(void *), void *opaque);
> >=20
> > AioContext *blk_get_aio_context(BlockBackend *blk);
> > BlockAcctStats *blk_get_stats(BlockBackend *blk);
> > diff --git a/block/block-backend.c b/block/block-backend.c
> > index ca537cd0ad..1f1d226ba6 100644
> > --- a/block/block-backend.c
> > +++ b/block/block-backend.c
> > @@ -2568,28 +2568,6 @@ void blk_add_insert_bs_notifier(BlockBackend *blk, Notifier *notify)
> >     notifier_list_add(&blk->insert_bs_notifiers, notify);
> > }
> >=20
> > -void coroutine_fn blk_co_io_plug(BlockBackend *blk)
> > -{
> > -    BlockDriverState *bs = blk_bs(blk);
> > -    IO_CODE();
> > -    GRAPH_RDLOCK_GUARD();
> > -
> > -    if (bs) {
> > -        bdrv_co_io_plug(bs);
> > -    }
> > -}
> > -
> > -void coroutine_fn blk_co_io_unplug(BlockBackend *blk)
> > -{
> > -    BlockDriverState *bs = blk_bs(blk);
> > -    IO_CODE();
> > -    GRAPH_RDLOCK_GUARD();
> > -
> > -    if (bs) {
> > -        bdrv_co_io_unplug(bs);
> > -    }
> > -}
> > -
> > BlockAcctStats *blk_get_stats(BlockBackend *blk)
> > {
> >     IO_CODE();
> > diff --git a/block/plug.c b/block/plug.c
> > new file mode 100644
> > index 0000000000..6738a568ba
> > --- /dev/null
> > +++ b/block/plug.c
> > @@ -0,0 +1,159 @@
> > +/* SPDX-License-Identifier: GPL-2.0-or-later */
> > +/*
> > + * Block I/O plugging
> > + *
> > + * Copyright Red Hat.
> > + *
> > + * This API defers a function call within a blk_io_plug()/blk_io_unplug()
> > + * section, allowing multiple calls to batch up. This is a performance
> > + * optimization that is used in the block layer to submit several I/O requests
> > + * at once instead of individually:
> > + *
> > + *   blk_io_plug(); <-- start of plugged region
> > + *   ...
> > + *   blk_io_plug_call(my_func, my_obj); <-- deferred my_func(my_obj) call
> > + *   blk_io_plug_call(my_func, my_obj); <-- another
> > + *   blk_io_plug_call(my_func, my_obj); <-- another
> > + *   ...
> > + *   blk_io_unplug(); <-- end of plugged region, my_func(my_obj) is called once
> > + *
> > + * This code is actually generic and not tied to the block layer. If another
> > + * subsystem needs this functionality, it could be renamed.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "qemu/coroutine-tls.h"
> > +#include "qemu/notify.h"
> > +#include "qemu/thread.h"
> > +#include "sysemu/block-backend.h"
> > +
> > +/* A function call that has been deferred until unplug() */
> > +typedef struct {
> > +    void (*fn)(void *);
> > +    void *opaque;
> > +} UnplugFn;
> > +
> > +/* Per-thread state */
> > +typedef struct {
> > +    unsigned count;       /* how many times has plug() been called? */
> > +    GArray *unplug_fns;   /* functions to call at unplug time */
> > +} Plug;
> > +
> > +/* Use get_ptr_plug() to fetch this thread-local value */
> > +QEMU_DEFINE_STATIC_CO_TLS(Plug, plug);
> > +
> > +/* Called at thread cleanup time */
> > +static void blk_io_plug_atexit(Notifier *n, void *value)
> > +{
> > +    Plug *plug = get_ptr_plug();
> > +    g_array_free(plug->unplug_fns, TRUE);
> > +}
> > +
> > +/* This won't involve coroutines, so use __thread */
> > +static __thread Notifier blk_io_plug_atexit_notifier;
> > +
> > +/**
> > + * blk_io_plug_call:
> > + * @fn: a function pointer to be invoked
> > + * @opaque: a user-defined argument to @fn()
> > + *
> > + * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
> > + * section.
>=20
> Just to understand better, what if two BlockDrivers share the same
> iothread but one calls blk_io_plug()/blk_io_unplug(), while the other
> calls this function not in a blk_io_plug()/blk_io_unplug() section?
>=20
> If the call is in the middle of the other BlockDriver's section, it is
> deferred, right?

Yes, the call is deferred until blk_io_unplug().

> Is this situation possible?

One scenario I can think of is when aio_poll() is called between
plug/unplug. In that case, some I/O associated with device B might
happen while device A is between plug and unplug.

> Or should we prevent blk_io_plug_call() from being called out of a
> blk_io_plug()/blk_io_unplug() section?

blk_io_plug_call() is called outside blk_io_plug()/blk_io_unplug() when
device emulation doesn't use plug/unplug. For example, IDE doesn't use
it but still calls down into the same Linux AIO or io_uring code that
invokes blk_io_plug_call(). This is why blk_io_plug_call() calls fn()
immediately when invoked outside plug/unplug.

Stefan

--IMmHh1O73W2Mz05r
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRs3/wACgkQnKSrs4Gr
c8gnMQgAw4yEMupKiX1brU9yWHDC6XCDoa4S8rVmoAMRaPx6lVyo7Czly5q2X5Do
dHWWXPts7lZpMttrxye8v6fupyADoE6kQC9kocucLL1HZdBLabIpQxCxB/tEGVzv
uGXtBsHOApqYGQQ5z4/VR3u5pVXW/GHCcJLxsEPptmRx4zDA9JNyf9d/CBZs2gaq
trm7E3o8rOPuTlbafOzTfGmgTYIGmFHHJzqmFbNdQwvakovTSOhLv5q/o2kRxEvm
3y1VV3vrmlQo3CwNzIJmU3iOyj38m3tKthfLpr0tXKMhRQsQfOXIKRvOCwjruZib
K8i7Fu810Z5RzMPkk8gXCRT6WST5GA==
=1bHM
-----END PGP SIGNATURE-----

--IMmHh1O73W2Mz05r--



From xen-devel-bounces@lists.xenproject.org Tue May 23 15:47:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 15:47:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538522.838515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UEd-00067l-AU; Tue, 23 May 2023 15:47:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538522.838515; Tue, 23 May 2023 15:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UEd-00067c-7Q; Tue, 23 May 2023 15:47:39 +0000
Received: by outflank-mailman (input) for mailman id 538522;
 Tue, 23 May 2023 15:47:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZ9c=BM=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q1UEc-00066y-6e
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 15:47:38 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2359e778-f981-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 17:47:37 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-624-oNyRN5KSNaOub17gdHHNcw-1; Tue, 23 May 2023 11:47:30 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 11057858F1B;
 Tue, 23 May 2023 15:47:30 +0000 (UTC)
Received: from localhost (unknown [10.39.195.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 84A3F492B0A;
 Tue, 23 May 2023 15:47:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2359e778-f981-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684856855;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=O3B1/QDqYYFNADsiJ/SScpfUTui+YFuZRwwUutZa9ls=;
	b=b1b2JOHJXB7VhKcjDTl0U8s7kRuuo+eHyMGNzemKdL1kZPNzroAhm4iX8fP1/p/Jx8zvyC
	Yycxls1hojo173myJxCGOP2La9/Vkoev8Hlm79oDUIIktS47dlHPvKBolobzEnmaqbs4sc
	Y/qO/m5Ir7RaaK3EbpGHV1Ikhzs1mm0=
X-MC-Unique: oNyRN5KSNaOub17gdHHNcw-1
Date: Tue, 23 May 2023 11:47:27 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 2/6] block/nvme: convert to blk_io_plug_call() API
Message-ID: <20230523154727.GC96478@fedora>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-3-stefanha@redhat.com>
 <utuievutol5cux2axpym7x3t4tueresl4tbqadizc36f5yblpi@ndpva7u6croa>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="SagQeXz/GIVguVBW"
Content-Disposition: inline
In-Reply-To: <utuievutol5cux2axpym7x3t4tueresl4tbqadizc36f5yblpi@ndpva7u6croa>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10


--SagQeXz/GIVguVBW
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, May 19, 2023 at 10:46:25AM +0200, Stefano Garzarella wrote:
> On Wed, May 17, 2023 at 06:10:18PM -0400, Stefan Hajnoczi wrote:
> > Stop using the .bdrv_co_io_plug() API because it is not multi-queue
> > block layer friendly. Use the new blk_io_plug_call() API to batch I/O
> > submission instead.
> >=20
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> > block/nvme.c | 44 ++++++++++++--------------------------------
> > 1 file changed, 12 insertions(+), 32 deletions(-)
> >=20
> > diff --git a/block/nvme.c b/block/nvme.c
> > index 5b744c2bda..100b38b592 100644
> > --- a/block/nvme.c
> > +++ b/block/nvme.c
> > @@ -25,6 +25,7 @@
> > #include "qemu/vfio-helpers.h"
> > #include "block/block-io.h"
> > #include "block/block_int.h"
> > +#include "sysemu/block-backend.h"
> > #include "sysemu/replay.h"
> > #include "trace.h"
> >
> > @@ -119,7 +120,6 @@ struct BDRVNVMeState {
> >     int blkshift;
> >=20
> >     uint64_t max_transfer;
> > -    bool plugged;
> >
> >     bool supports_write_zeroes;
> >     bool supports_discard;
> > @@ -282,7 +282,7 @@ static void nvme_kick(NVMeQueuePair *q)
> > {
> >     BDRVNVMeState *s = q->s;
> >
> > -    if (s->plugged || !q->need_kick) {
> > +    if (!q->need_kick) {
> >         return;
> >     }
> >     trace_nvme_kick(s, q->index);
> > @@ -387,10 +387,6 @@ static bool nvme_process_completion(NVMeQueuePair *q)
> >     NvmeCqe *c;
> >
> >     trace_nvme_process_completion(s, q->index, q->inflight);
> > -    if (s->plugged) {
> > -        trace_nvme_process_completion_queue_plugged(s, q->index);
>
> Should we remove "nvme_process_completion_queue_plugged(void *s,
> unsigned q_index) "s %p q #%u" from block/trace-events?

Will fix, thanks!

Stefan

--SagQeXz/GIVguVBW
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRs4A8ACgkQnKSrs4Gr
c8jRLggAnHAGUNyvJQIwz9Dtrtu8UXLxyZdpgSQS/HH5rqZy6JfdddIMYu/fw4JF
3AGDfNBXW+1wq1ox5p45zwVw+Az+NX+m46LDuBjCuEj7dtS7K8ASVrS5UMV1hWQ7
Hb54eovBy/L+nSQBtSX4ILFdyhA+aOIaMC/OFyClMz1/yzNmnUWyupjr/B8jf3Xf
/vGlY2CP3WjoTrYvIDxhZ7JhIStLB3r4VfVIiIMhD3JKkSjxCOjqAiQwPJtFDMQ2
2JI2bZtYLzfxJpq6y1grntDAzDLg3wCYybtPUxMbb45vvR+udLDfN5fcMDkX5vLs
9pChlshANBJlHUF5rzPmx4ROD1Kccw==
=S5t/
-----END PGP SIGNATURE-----

--SagQeXz/GIVguVBW--



From xen-devel-bounces@lists.xenproject.org Tue May 23 15:48:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 15:48:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538526.838525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UEs-0006eW-Is; Tue, 23 May 2023 15:47:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538526.838525; Tue, 23 May 2023 15:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UEs-0006eP-Fb; Tue, 23 May 2023 15:47:54 +0000
Received: by outflank-mailman (input) for mailman id 538526;
 Tue, 23 May 2023 15:47:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZ9c=BM=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q1UEr-00066y-6A
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 15:47:53 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29fc1cae-f981-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 17:47:52 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-203-h1xXJyOoMk6r5QUJju6_Wg-1; Tue, 23 May 2023 11:47:41 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B00423C0C889;
 Tue, 23 May 2023 15:47:40 +0000 (UTC)
Received: from localhost (unknown [10.39.195.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 160DE492B0B;
 Tue, 23 May 2023 15:47:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29fc1cae-f981-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684856866;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Rm9mITxOyG5cxYYxQZ7UqEX4fWrCe1vU4eCgXL+WHCA=;
	b=LvlGxMphQ4CVhzZ5TzXZ9lolvBGHKbH5eRdxkGdYf/lsY11YM4oNgPso9PfwA8OgGaoDwu
	cbaLtIG9pr44u+9GWW+1ANFfywAl1F7PKvnX8G3mECZSxCNlx4m976eonh6GKOZz3qggDc
	2P3ERBFjpKRy/DdUQHljMRLTWsIBE5Y=
X-MC-Unique: h1xXJyOoMk6r5QUJju6_Wg-1
Date: Tue, 23 May 2023 11:47:38 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Eric Blake <eblake@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 1/6] block: add blk_io_plug_call() API
Message-ID: <20230523154738.GD96478@fedora>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-2-stefanha@redhat.com>
 <7bsmwvpfmf6kelaxv32p6nhqcx2f2um2vqhvhu6uw5cooztrhe@oijddrxc2ysx>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="/bz85OT1GdNsEeu4"
Content-Disposition: inline
In-Reply-To: <7bsmwvpfmf6kelaxv32p6nhqcx2f2um2vqhvhu6uw5cooztrhe@oijddrxc2ysx>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10


--/bz85OT1GdNsEeu4
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, May 18, 2023 at 07:04:52PM -0500, Eric Blake wrote:
> On Wed, May 17, 2023 at 06:10:17PM -0400, Stefan Hajnoczi wrote:
> > Introduce a new API for thread-local blk_io_plug() that does not
> > traverse the block graph. The goal is to make blk_io_plug() multi-queue
> > friendly.
> >
> > Instead of having block drivers track whether or not we're in a plugged
> > section, provide an API that allows them to defer a function call until
> > we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is
> > called multiple times with the same fn/opaque pair, then fn() is only
> > called once at the end of the function - resulting in batching.
> >
> > This patch introduces the API and changes blk_io_plug()/blk_io_unplug().
> > blk_io_plug()/blk_io_unplug() no longer require a BlockBackend argument
> > because the plug state is now thread-local.
> >
> > Later patches convert block drivers to blk_io_plug_call() and then we
> > can finally remove .bdrv_co_io_plug() once all block drivers have been
> > converted.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
>
> > +++ b/block/plug.c
> > +
> > +/**
> > + * blk_io_plug_call:
> > + * @fn: a function pointer to be invoked
> > + * @opaque: a user-defined argument to @fn()
> > + *
> > + * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
> > + * section.
> > + *
> > + * Otherwise defer the call until the end of the outermost
> > + * blk_io_plug()/blk_io_unplug() section in this thread. If the same
> > + * @fn/@opaque pair has already been deferred, it will only be called once upon
> > + * blk_io_unplug() so that accumulated calls are batched into a single call.
> > + *
> > + * The caller must ensure that @opaque is not be freed before @fn() is invoked.
>
> s/be //

Will fix, thanks!

Stefan
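The deferred-call semantics described in the patch (immediate invocation when unplugged, deduplicated deferral until the outermost unplug) can be sketched roughly as follows. This is a hypothetical simplified model for illustration only, not the QEMU implementation: the real code uses thread-local state and a growable array, both omitted here for brevity.

```c
#include <assert.h>
#include <stddef.h>

typedef void (*UnplugFn)(void *opaque);

#define MAX_CALLS 16

/* Pending fn/opaque pairs deferred inside a plugged section */
static struct {
    UnplugFn fn;
    void *opaque;
} deferred[MAX_CALLS];
static int num_deferred;
static int plug_depth; /* nesting depth of blk_io_plug() calls */

static void blk_io_plug(void)
{
    plug_depth++;
}

static void blk_io_plug_call(UnplugFn fn, void *opaque)
{
    if (plug_depth == 0) {
        fn(opaque); /* not plugged: call immediately */
        return;
    }
    /* Same fn/opaque pair already deferred: batch into one call */
    for (int i = 0; i < num_deferred; i++) {
        if (deferred[i].fn == fn && deferred[i].opaque == opaque) {
            return;
        }
    }
    assert(num_deferred < MAX_CALLS);
    deferred[num_deferred].fn = fn;
    deferred[num_deferred].opaque = opaque;
    num_deferred++;
}

static void blk_io_unplug(void)
{
    if (--plug_depth > 0) {
        return; /* only flush at the outermost unplug */
    }
    for (int i = 0; i < num_deferred; i++) {
        deferred[i].fn(deferred[i].opaque);
    }
    num_deferred = 0;
}
```

A driver's submission kick (for example nvme_kick()) would be passed as fn, so repeated I/O submissions inside one plugged section collapse into a single doorbell write.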

--/bz85OT1GdNsEeu4
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRs4BoACgkQnKSrs4Gr
c8g/3Qf8DjCIpK9YEJBH+iKEtbVAQHcEC/7bPX1ycHXFuHAraWci8xbeeGhnjLvP
qVo0xbXOZE53Ain3joyIK+OL8L+CEEkQU/uAD+VQpJB26yeEZ/edMdLb2qZekQaa
7rzckl1MWg7YXVg4nOybLVIT2tD2CbXTKYxh2ohMsM1cHEOQqXyoUW43g1VOVx1E
N+q1UH2VHWfDoHtPDu8R2Fdq+IOMf/Kl35rxKH5w79OfZGTI3tOCjWHY6JNO5H9K
uGSSWdR0Ty4DAUIG/i4l/+wigqaR1TuJ4+QojpGad9AkjESBSIuFEDtCkscVcFeU
PJUiGF+ARlcOl/9tR1Xdq/fKTOm5tw==
=NIAu
-----END PGP SIGNATURE-----

--/bz85OT1GdNsEeu4--



From xen-devel-bounces@lists.xenproject.org Tue May 23 15:49:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 15:49:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538534.838534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UG1-0007TY-Ro; Tue, 23 May 2023 15:49:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538534.838534; Tue, 23 May 2023 15:49:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UG1-0007TR-P0; Tue, 23 May 2023 15:49:05 +0000
Received: by outflank-mailman (input) for mailman id 538534;
 Tue, 23 May 2023 15:49:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZ9c=BM=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q1UFz-0007T9-SK
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 15:49:03 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 56a45156-f981-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 17:49:02 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-392-49NU2e_DPTe9RRCFQW3Mrw-1; Tue, 23 May 2023 11:48:57 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 34381296A610;
 Tue, 23 May 2023 15:48:57 +0000 (UTC)
Received: from localhost (unknown [10.39.195.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 8D68920296C6;
 Tue, 23 May 2023 15:48:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56a45156-f981-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684856941;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5vpY+wU+9PQ6HvioSmxsXPA1be3D4AKhGSK68roqb6M=;
	b=LCiAueVULDxs51sa+vAHgaz3iRFxkcBEektGLLzx8SqnUVZhF0iV2JOVbIO8vifWEVnhWS
	v1mQ82EXUmq6wpvIFXEz9iZ8EOe7+GeR/smCgzR0YM7hKLyfPvTB9wr8jwbFfANtyVRH1h
	XstfYfwy9DoyXUgVmqR2oCG3/yMgrd4=
X-MC-Unique: 49NU2e_DPTe9RRCFQW3Mrw-1
Date: Tue, 23 May 2023 11:48:54 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Eric Blake <eblake@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 4/6] block/io_uring: convert to blk_io_plug_call() API
Message-ID: <20230523154854.GF96478@fedora>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-5-stefanha@redhat.com>
 <7xerljqzrhzvl73beu7dboq3d6jbxbkrxbhs25xzcw5ozopgbn@3olwj3w5fil5>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="3aPiKHngEQMNjJoi"
Content-Disposition: inline
In-Reply-To: <7xerljqzrhzvl73beu7dboq3d6jbxbkrxbhs25xzcw5ozopgbn@3olwj3w5fil5>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4


--3aPiKHngEQMNjJoi
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, May 18, 2023 at 07:18:42PM -0500, Eric Blake wrote:
> On Wed, May 17, 2023 at 06:10:20PM -0400, Stefan Hajnoczi wrote:
> > Stop using the .bdrv_co_io_plug() API because it is not multi-queue
> > block layer friendly. Use the new blk_io_plug_call() API to batch I/O
> > submission instead.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  include/block/raw-aio.h |  7 -------
> >  block/file-posix.c      | 10 ---------
> >  block/io_uring.c        | 45 ++++++++++++++++-------------------------
> >  block/trace-events      |  5 ++---
> >  4 files changed, 19 insertions(+), 48 deletions(-)
> >
>
> > @@ -337,7 +325,6 @@ void luring_io_unplug(void)
> >   * @type: type of request
> >   *
> >   * Fetches sqes from ring, adds to pending queue and preps them
> > - *
> >   */
> >  static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
> >                              uint64_t offset, int type)
> > @@ -370,14 +357,16 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
>
> Looks a bit like a stray hunk, but you are touching the function, so
> it's okay.

I'm respinning, so I'll drop this.

Stefan

--3aPiKHngEQMNjJoi
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRs4GYACgkQnKSrs4Gr
c8ijKAf+IFPE85RQqY0hwYwn31qfIo8D9vIz8fpX/MAUGgNXWAh0uABXAhGO+6ug
JRVPu18OjoECTr9rVOFaoTkEiXqs26G+ArMJt6QdX9r5Z+4X09ub4z3cLnpRul9z
e5IUmsZaz2k0qrALqzJf3gUT1xqaVB4heIt5y8WUufxR5V/Cb/bkeMo86hZLiWv6
WT5C0iZy7GZbucPA3JEMiR20zoocHUbFZsqXa9SLdeNwKATYn6uEoXOwa2ZZVgUo
o1cdwDtSBMqQIpZXl4ZP0U5pePUNSZ6zlzkPCiq0eyBm2+C4HfAmvJqdlGHn6YsD
R6F3Gg2PCGTdSBtQtijctZkigpdZVg==
=Ofen
-----END PGP SIGNATURE-----

--3aPiKHngEQMNjJoi--



From xen-devel-bounces@lists.xenproject.org Tue May 23 15:49:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 15:49:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538536.838545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UGE-0007mo-36; Tue, 23 May 2023 15:49:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538536.838545; Tue, 23 May 2023 15:49:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1UGD-0007mh-W6; Tue, 23 May 2023 15:49:18 +0000
Received: by outflank-mailman (input) for mailman id 538536;
 Tue, 23 May 2023 15:49:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZ9c=BM=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q1UGC-0005mv-CU
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 15:49:16 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5dcc5c3e-f981-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 17:49:14 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-22-ydMYk082PL-T9DEkukad9A-1; Tue, 23 May 2023 11:48:48 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8706629AA3B0;
 Tue, 23 May 2023 15:48:20 +0000 (UTC)
Received: from localhost (unknown [10.39.195.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 06F37C5146F;
 Tue, 23 May 2023 15:48:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5dcc5c3e-f981-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684856953;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=rQ6nlvqTSvY4+s9JAwczIEJjWTozdr3kOea/XsYr2bY=;
	b=XybKL35SLveJARaInMFrMuAvJbKeTnEBrbouGBp0fZNkg9B9HLjbns0MWwe2iy168Kn7bF
	Gn1iNTehKw+VupZ53Kg4Gih6+FKlMdEgB3KcjWJWn64He2oUQNXWrjtUeprnlYmfKKKbih
	6HPF0rpGq4r/uUqNy2IAbDARSu8EmqQ=
X-MC-Unique: ydMYk082PL-T9DEkukad9A-1
Date: Tue, 23 May 2023 11:48:18 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>,
	Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 3/6] block/blkio: convert to blk_io_plug_call() API
Message-ID: <20230523154818.GE96478@fedora>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-4-stefanha@redhat.com>
 <wtyut5kd4v5vapon7fzpvi3kghvpplokcas5ovcwnjhiwyuccb@rm6eb6jjhhp5>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="y9sI4btH/QqT9qV/"
Content-Disposition: inline
In-Reply-To: <wtyut5kd4v5vapon7fzpvi3kghvpplokcas5ovcwnjhiwyuccb@rm6eb6jjhhp5>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8


--y9sI4btH/QqT9qV/
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, May 19, 2023 at 10:47:00AM +0200, Stefano Garzarella wrote:
> On Wed, May 17, 2023 at 06:10:19PM -0400, Stefan Hajnoczi wrote:
> > Stop using the .bdrv_co_io_plug() API because it is not multi-queue
> > block layer friendly. Use the new blk_io_plug_call() API to batch I/O
> > submission instead.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> > block/blkio.c | 40 +++++++++++++++++++++-------------------
> > 1 file changed, 21 insertions(+), 19 deletions(-)
>
> With this patch, the build fails in several places, maybe it's an old
> version:
>
> ../block/blkio.c:347:5: error: implicit declaration of function
> ‘blk_io_plug_call’ [-Werror=implicit-function-declaration]
>   347 |     blk_io_plug_call(blkio_unplug_fn, bs);

Will fix, thanks!

Stefan

--y9sI4btH/QqT9qV/
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRs4EIACgkQnKSrs4Gr
c8jxMgf/ZdgG8aDCuwHwEH9aJ8kuEqAjFKKDKa/eAzlnouLXNCYig1RhfL9cDWaZ
4Zd/vBacIMN9YdG/gtrMl8iJlbz8fzNj9XqZnQuEByHQg82r/8+ehKP1tP+lhLWY
92jI+7BiJGCzg7nmFPW9CnADyWDmDvkWgmpXuLOZNGn9O3VTFLlmSjGv0BXi7kUT
pAIPfTrIanIJgLykGqSe606MCIo0aAefJ5zqAUhxT5jAk3R49722wMXYNbG2huFC
gWm3wH6hcTqRYpfbHZNo2hgz0PwEBqETuoH6VpylmcU8PLp54faw7euHo7eEqF1R
rJs7BnbeXR/04nOSnMfHWnLiql1OZw==
=duel
-----END PGP SIGNATURE-----

--y9sI4btH/QqT9qV/--



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:24:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:24:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538543.838555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Und-0004Zu-K5; Tue, 23 May 2023 16:23:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538543.838555; Tue, 23 May 2023 16:23:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Und-0004Zn-Gt; Tue, 23 May 2023 16:23:49 +0000
Received: by outflank-mailman (input) for mailman id 538543;
 Tue, 23 May 2023 16:23:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1Unb-0004Zd-Hl; Tue, 23 May 2023 16:23:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1Unb-0001k0-BE; Tue, 23 May 2023 16:23:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1Una-0004Le-Pv; Tue, 23 May 2023 16:23:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1Una-0001IU-PX; Tue, 23 May 2023 16:23:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o67MDz3Setw89Es7oxMbf0iPfsmidIZ+RznLTQ4/QyM=; b=JncyGgEh3ZnerxHkJ3f8kLFyf7
	ZpZrtwxMVlWrsMxGQjwUtOSGfrU5kmhv8Q0MM5ZwMJmYb/w7n17orOo9bABiHMAidiHQObK9JrAeI
	e1Oeu2TiLLujXrgAJ5aCC1/K1bfu3LCkghRtyHjv2pJorBX3oc8Op8JTmtR9CW7G2xVY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180910-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180910: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=3b6d69237f0a07bf8d9807cd68a387f8d42b076f
X-Osstest-Versions-That:
    libvirt=90404c53682f464b4a26efd618887dc336d9da80
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 16:23:46 +0000

flight 180910 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180910/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180714
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180714
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180714
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              3b6d69237f0a07bf8d9807cd68a387f8d42b076f
baseline version:
 libvirt              90404c53682f464b4a26efd618887dc336d9da80

Last test of basis   180714  2023-05-19 04:18:50 Z    4 days
Testing same since   180910  2023-05-23 04:18:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   90404c5368..3b6d69237f  3b6d69237f0a07bf8d9807cd68a387f8d42b076f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538554.838595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1r-0006xK-P4; Tue, 23 May 2023 16:38:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538554.838595; Tue, 23 May 2023 16:38:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1r-0006x1-MG; Tue, 23 May 2023 16:38:31 +0000
Received: by outflank-mailman (input) for mailman id 538554;
 Tue, 23 May 2023 16:38:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V1q-0006Dr-EW
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:38:30 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3d9ef1d2-f988-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 18:38:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d9ef1d2-f988-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859908;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=gHSnwWQZeNLJTGRbdWUmwUXIbgpyRr1aGRD7CkY8Nag=;
  b=OQ9nSR56CJY6dCqTZ5z6nG8sF/XtQaMkoABf+s1nP8+/6zPXi1ICl3Y2
   Im8BPiG1ZwFTBscauCjoPTIJFIgVPjxbTuP0GUTIyWyw9a02bzhIaEskt
   VTTJJjs18DxMe1TAXL7gt7mmZNZNrNzuzSe6DKkpmjvh/P4RtunucR4vv
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110112539
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:lCo++6LMFJBJQso/FE+R2pUlxSXFcZb7ZxGr2PjKsXjdYENShDNWn
 DcaW2qPOKmJMGGgLtx2bty18EsB7JTTyIRrTFFlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpKrfrbwP9TlK6q4mhA4wZlPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5JWV1Xs
 tYnJApdLRCOm6Wmwq2xZLhF05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TTHJ0MxxzF/
 TOuE2LRWztECPeTyRS+83+Mhff0wC3VYpggC+jtnhJtqALKnTFCYPEMbnOrrP/8hkOgVtZ3L
 00P5jFovaU07FasTNT2Q1u/unHslhwWVsdUEuY6wBqQ0aeS6AGcbkAbShZRZdpgs9U5LQHGz
 XfQwYmvX2Y29uTIFzTErOz8QS6O1TY9Ezc+fGgucgc/s5rjrZ10nhDQRPgyOfvg5jHqIg3Yz
 zePpSk4orwci88Xyqm2lWz6byKQSovhFVBsuFiONo6xxkYgPdP+OdT0gbTOxawYRLt1WGVtq
 5TtdyK2yOkVRa+AmyWWKAnmNOH4vq3VWNEwbLMGInXAy9hP0yT7FWyzyGskTKuMDirjUWGBX
 aMrkVkNjKK/xVPzBUONX6q/Ct4x0Y/rHsn/W/bfY7JmO8YhKFDXpH01NBfIgwgBdXTAdolmY
 /+mnTuEVy5GWcyLMhLtLwvi7VPb7n9nnj6CLXwK5x+mzaCfdBaodFvxC3PXNrpRxPrd8G3oH
 yN3a5PiJ+N3DLevPUE6MOc7cTg3EJTMLcuu8Z0IKbbTc1YO9aNII6a5/I7NsrdNx8x9/tokN
 FnnMqOE4DITXUH6FDg=
IronPort-HdrOrdr: A9a23:4KA0+qG7d0GSW/a9pLqEGMeALOsnbusQ8zAXPiBKJCC9E/bo8/
 xG+c5w6faaslkssR0b9+xoW5PwJE80l6QFgrX5VI3KNGXbUQ2TTb2KhbGI/9SKIVydygcy78
 ddmtNFebrN5VgRt7eH3OG7eexQv+VuJsqT9JnjJ3QGd3AaV0l5hT0JbDpyiidNNXN77ZxSLu
 vk2uN34wCOVF4wdcqBCnwMT4H41qD2fMKPW29/O/Y/gjP+9g+V1A==
X-Talos-CUID: 9a23:ohZfuGF0dMTMVjdhqmJZrHINXeoafkeNj3fdPwioC3ljZuy8HAo=
X-Talos-MUID: 9a23:GBp5vwXBqT4llInq/GbG2g1JOMdG2KWjKEMVqsgIlfOeBzMlbg==
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="110112539"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [XEN PATCH 03/15] build, x86: clean build log for boot/ targets
Date: Tue, 23 May 2023 17:37:59 +0100
Message-ID: <20230523163811.30792-4-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

We add %.lnk to .PRECIOUS, otherwise make would treat those files as
intermediate targets and remove them.
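The intermediate-file behaviour this guards against can be reproduced with a tiny sketch (assuming GNU make; the `prog.o`/`prog.lnk`/`prog.bin` names are illustrative, mirroring the chained `%.o` -> `%.lnk` -> `%.bin` rules of the patch):

```shell
tmp=$(mktemp -d) && cd "$tmp"
# Two chained pattern rules: %.o -> %.lnk -> %.bin
printf 'all: prog.bin\n%%.bin: %%.lnk\n\tcp $< $@\n%%.lnk: %%.o\n\tcp $< $@\n' > Makefile
touch prog.o
make            # make prints "rm prog.lnk" at the end: the .lnk is intermediate
test -f prog.lnk || echo "prog.lnk was deleted"

printf '.PRECIOUS: %%.lnk\n' >> Makefile
rm -f prog.bin
make
test -f prog.lnk && echo "prog.lnk kept"
```

Without the .PRECIOUS line, make deletes `prog.lnk` after building `prog.bin`, because files produced only by chained pattern rules are considered intermediate; with it, the `.lnk` survives for later incremental builds.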

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/arch/x86/boot/Makefile | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
index 03d8ce3a9e..2693b938bd 100644
--- a/xen/arch/x86/boot/Makefile
+++ b/xen/arch/x86/boot/Makefile
@@ -5,6 +5,8 @@ head-bin-objs := cmdline.o reloc.o
 nocov-y   += $(head-bin-objs)
 noubsan-y += $(head-bin-objs)
 targets   += $(head-bin-objs)
+targets   += $(head-bin-objs:.o=.bin)
+targets   += $(head-bin-objs:.o=.lnk)
 
 head-bin-objs := $(addprefix $(obj)/,$(head-bin-objs))
 
@@ -26,10 +28,16 @@ $(head-bin-objs): XEN_CFLAGS := $(CFLAGS_x86_32) -fpic
 LDFLAGS_DIRECT-$(call ld-option,--warn-rwx-segments) := --no-warn-rwx-segments
 LDFLAGS_DIRECT += $(LDFLAGS_DIRECT-y)
 
-%.bin: %.lnk
-	$(OBJCOPY) -j .text -O binary $< $@
+%.bin: OBJCOPYFLAGS := -j .text -O binary
+%.bin: %.lnk FORCE
+	$(call if_changed,objcopy)
 
-%.lnk: %.o $(src)/build32.lds
-	$(LD) $(subst x86_64,i386,$(LDFLAGS_DIRECT)) -N -T $(filter %.lds,$^) -o $@ $<
+quiet_cmd_ld_lnk_o = LD      $@
+cmd_ld_lnk_o = $(LD) $(subst x86_64,i386,$(LDFLAGS_DIRECT)) -N -T $(filter %.lds,$^) -o $@ $<
+
+%.lnk: %.o $(src)/build32.lds FORCE
+	$(call if_changed,ld_lnk_o)
 
 clean-files := *.lnk *.bin
+
+.PRECIOUS: %.lnk
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538551.838565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1p-0006EE-W5; Tue, 23 May 2023 16:38:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538551.838565; Tue, 23 May 2023 16:38:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1p-0006E7-T9; Tue, 23 May 2023 16:38:29 +0000
Received: by outflank-mailman (input) for mailman id 538551;
 Tue, 23 May 2023 16:38:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V1n-0006Dq-JL
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:38:27 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3c3778e6-f988-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 18:38:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c3778e6-f988-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859905;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=tpIIJAj41XN+i19+pdP8PgfnfYVAitDX3tJqFhm43yI=;
  b=J3c7VbvFVLD8QueeVmWJrbdMUpyIsKE38q/D9XeajBLb7HF+0D1Du+zr
   PanUGZtkWJc4cz0qOAtRDuhytpOz3xHhkNvsOOrjQQlyaDRlZjJwmZ/p7
   CE8Jw1OuSFdiipZqZnxrTmCU+Lv5/17Zk3CjPffOoOH+hysDLO3wYqhwp
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109422917
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:8Aj1jq771o4ONcVCDwaPwAxRtAvHchMFZxGqfqrLsTDasY5as4F+v
 mYcD26AOvqPZTehfYwiPYnn9xhV6p/Uy4dlSwBupC83Hi5G8cbLO4+Ufxz6V8+wwm8vb2o8t
 plDNYOQRCwQZiWBzvt4GuG59RGQ7YnRGvynTraCYnsrLeNdYH9JoQp5nOIkiZJfj9G8Agec0
 fv/uMSaM1K+s9JOGjt8B5mr9lU35JwehBtC5gZlPa0R5weE/5UoJMl3yZ+ZfiOQrrZ8RoZWd
 86bpJml82XQ+QsaC9/Nut4XpWVTH9Y+lSDX4pZnc/DKbipq/0Te4Y5iXBYoUm9Fii3hojxE4
 I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m0
 /k+FjJSSjG/rfuT5pm5bc88huV6BZy+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP
 YxDM2MpNUmeJUQVYT/7C7pn9AusrnD5bz1frkPTvact6nLf5AdwzKLsIJzefdniqcB9xx7J+
 jicrj2mav0cHP3Axze61k2Pvezoow/GaqUNJp2W98c/1TV/wURMUUZLBDNXu8KRmkO4Ht5SN
 UEQ0i4vtrQpslymSMHnWB+1q2LCuQQTM/JSGeAn7ACGyoLP/h2UQGMDS1ZpasEitcIwbSwn0
 BmOhdyBLSd0rLSfRHaZ97GVhTC/Iy4YKSkFfyBsZRQBy8nupsc0lB2nczp4OPfr1JuvQ2i2m
 m3U6nFk3N3/kPLnyY2d+Hb5gW2Ih6TjVysTzQfweDKlz1pQMdvNi5OT1XDX6vNJLYC8R1aHv
 WQZl8X20N3iHa1hhwTWHrxTQejBC+KtdWSF3AUxR8VJGyGFoSbLQGxG3N1pyK6F2O4gcCShX
 kLcsBg5CHR7bCrzNv8fj25c5q0XIUnc+TbNDKi8gjlmOMIZmOq7EMZGOyatM5jFyhRErE3GE
 c7znTyQJXgbE7976zG9Wv0Q17QmrghnmzOPGsCjk0/2iOLCDJJwdVviGALUBt3VEYve+FmFm
 zqhH5DiJ+pjvB3WPXCMrN97waEiJnknH5Hmw/Fqmhq4ClM+QgkJUqaBqY7NjqQ5x8y5YM+Up
 CDiMqKZoXKj7UD6xfKiMSg5OeywBcYu8RrW/0UEZD6V5pTqWq73hI93Snf9VeBPGDBLpRKsc
 8Q4Rg==
IronPort-HdrOrdr: A9a23:bPY3nalEYt96bfxg83iSVyEmnKrpDfLo3DAbv31ZSRFFG/Fw9/
 rCoB17726QtN91YhsdcL+7V5VoLUmzyXcX2/hyAV7BZmnbUQKTRekP0WKL+Vbd8kbFh41gPM
 lbEpSXCLfLfCJHZcSR2njELz73quP3jJxBho3lvghQpRkBUdAF0+/gYDzranGfQmN9dP0EPa
 vZ3OVrjRy6d08aa8yqb0N1JNQq97Xw5fTbiQdtPW9f1DWz
X-Talos-CUID: 9a23:6bZIKGMnnAj7Eu5DfixBykxLPuEZWGT46lzSBGnoCUcwYejA
X-Talos-MUID: 9a23:CF0FpwQ0F0JnVbKFRXTN2z9vC+NXwJ+tL34hrKchpPukGy1JbmI=
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="109422917"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 01/15] build: hide that we are updating xen/lib/x86
Date: Tue, 23 May 2023 17:37:57 +0100
Message-ID: <20230523163811.30792-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The change of directory to xen/lib/x86 doesn't need to be shown. If
something gets updated, make will still print the command line.
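The $(Q) prefix added by this patch follows the usual kbuild-style quiet/verbose convention, which can be sketched as follows (a minimal illustration, not the actual Xen definition of Q):

```shell
tmp=$(mktemp -d) && cd "$tmp"
# Q expands to "@" (silencing the command echo) unless V=1 is passed
printf 'Q := $(if $(V),,@)\nall:\n\t$(Q)echo building\n' > Makefile
make            # prints only: building
make V=1        # also echoes the command line: echo building
```

Prefixing a recipe line with $(Q) therefore hides the invocation by default while leaving `make V=1` available for debugging the exact commands run.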

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/include/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/Makefile b/xen/include/Makefile
index edd5322e88..96d5f6f3c8 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -229,7 +229,7 @@ ifeq ($(XEN_TARGET_ARCH),x86_64)
 .PHONY: lib-x86-all
 lib-x86-all:
 	@mkdir -p $(obj)/xen/lib/x86
-	$(MAKE) -C $(obj)/xen/lib/x86 -f $(abs_srctree)/$(src)/xen/lib/x86/Makefile all
+	$(Q)$(MAKE) -C $(obj)/xen/lib/x86 -f $(abs_srctree)/$(src)/xen/lib/x86/Makefile all
 
 all: lib-x86-all
 endif
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538555.838604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1y-0007Jz-2A; Tue, 23 May 2023 16:38:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538555.838604; Tue, 23 May 2023 16:38:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1x-0007Jo-Vj; Tue, 23 May 2023 16:38:37 +0000
Received: by outflank-mailman (input) for mailman id 538555;
 Tue, 23 May 2023 16:38:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V1w-0006Dr-3m
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:38:36 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 415f2f5f-f988-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 18:38:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 415f2f5f-f988-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859914;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=IEGR3JC5y7uEKhXS+Y9a7VtCNob5XWmUbPOoOUvPi10=;
  b=cDkzbdqIILspKHeRoP+W1aLY9tz8Pt3L99A2u5Lx2rOsepGySQEUneJk
   G1c9gnb+mIYXGzxpg0lGW3M5ACERAWmYWBIktrhCmyKI9fEv95a3viToL
   CfOVdgsTtVX/QSi5BLtBTadJDwfsaMtkGR4m7TdQKh7UQFWQLwjwcqLbj
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 112568309
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:GKpitKNvQsKLJbnvrR2tl8FynXyQoLVcMsEvi/4bfWQNrUomhDZTy
 jYXXGyAPfqNZjH3L953bd/j8RhS6pfcy9c1TQto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGjxSs/rrRC9H5qyo42tF5AdmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0uBQGlgV6
 N4REhZTcTuGi/Ky8buJYPY506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLo3mvuogX/uNSVVsluPqYI84nTJzRw327/oWDbQUoXTH5gLzh3A9
 woq+UynEwgTN9ux4gG5rHaGp8KMkyShdYQdQejQGvlC3wTImz175ActfUu2p7y1h1CzX/pbK
 lcI4Ww+oK4q7kupQ9LhGRqirxassgYHXttME8Uz8AyX1rfP+AGdG3QFSThaLtchsacLqScCj
 wHT2YmzXHo27ePTECjGnluJkd+sES4yNlZZeA0Ndy1b/PLmrJE3vxv9ZMk2RcZZkebJMT33x
 jmLqg03iLMSkdMH2s2HwLzXv96/jsOXF1Bov207Skrgt1okP9D9O+RE/HCBtZ59wJClok5tV
 ZTus+yX96gwAJ6Ej0Rhq81dTejyt55p3NAx6GOD/qXNFRz3oxZPnqgKulmSwXuF1e5aEQIFm
 GeJ5WtsCGZ7ZRNGl5NfbYOrENgNxqP9D9njXf28RoMQMsQuJFfbp3A3PRL4M4XRfK8EyPtXB
 HtmWZz0USZy5VpPl1JauNvxIZd0n3tjlAs/tLjwzgi90Kr2WUN5vYwtaQPUBshgtfPsnekg2
 4oHXyd840kFAbKWj+i+2dJ7EG3m2lBgXcqn9JMPJr/fSuekcUl4Y8LsLXoaU9QNt8xoei3gp
 RlRhmcwJILDuED6
IronPort-HdrOrdr: A9a23:ZH80mahGIJG3ZyMc8MxBHr1FMnBQXrkji2hC6mlwRA09TyX4ra
 CTdZEgviMc5wx9ZJhNo7q90cq7IE80i6Qb3WB5B97LYOCMggeVxe9Zg7ff/w==
X-Talos-CUID: 9a23:BCGbFG/R8fdHfma3dIaVv0gzAfl+b1/U9lGOPGS9BXZtebSnY0DFrQ==
X-Talos-MUID: 9a23:U3QjYwnkB/TNRTWZubgZdno8Jd5h4KOAI3sxz5EsmuTHOysqFjGS2WE=
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="112568309"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>
Subject: [XEN PATCH 04/15] build: hide policy.bin commands
Date: Tue, 23 May 2023 17:38:00 +0100
Message-ID: <20230523163811.30792-5-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Instead, show a message only when "policy.bin" has been updated.

We still get the full commands from the flask/policy Makefile, but we
can't change that Makefile.
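The `cmp -s ... || cp ...` idiom that policy_chk wraps can be sketched in isolation (the `policy.src`/`policy.bin` file names are illustrative stand-ins for $(POLICY_SRC) and $@):

```shell
tmp=$(mktemp -d) && cd "$tmp"
# Copy $1 over $2 only when the content differs, printing an UPD line
upd() {
    if ! cmp -s "$1" "$2"; then
        echo "  UPD     $2"
        cp "$1" "$2"
    fi
}
echo v1 > policy.src
upd policy.src policy.bin   # first run: prints "  UPD     policy.bin"
upd policy.src policy.bin   # unchanged: silent, timestamp of policy.bin preserved
echo v2 > policy.src
upd policy.src policy.bin   # content changed: prints the UPD line again
```

Leaving the target's timestamp alone when the content is unchanged is what prevents make from rebuilding everything that depends on policy.bin.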

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/xsm/flask/Makefile | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
index 3fdcf7727e..fc44ad684f 100644
--- a/xen/xsm/flask/Makefile
+++ b/xen/xsm/flask/Makefile
@@ -48,10 +48,15 @@ targets += flask-policy.S
 FLASK_BUILD_DIR := $(abs_objtree)/$(obj)
 POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
 
+policy_chk = \
+    $(Q)if ! cmp -s $(POLICY_SRC) $@; then \
+        $(kecho) '  UPD     $@'; \
+        cp $(POLICY_SRC) $@; \
+    fi
 $(obj)/policy.bin: FORCE
-	$(MAKE) -f $(XEN_ROOT)/tools/flask/policy/Makefile.common \
+	$(Q)$(MAKE) -f $(XEN_ROOT)/tools/flask/policy/Makefile.common \
 	        -C $(XEN_ROOT)/tools/flask/policy \
 	        FLASK_BUILD_DIR=$(FLASK_BUILD_DIR) POLICY_FILENAME=$(POLICY_SRC)
-	cmp -s $(POLICY_SRC) $@ || cp $(POLICY_SRC) $@
+	$(call policy_chk)
 
 clean-files := policy.* $(POLICY_SRC)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538553.838579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1q-0006Nq-NO; Tue, 23 May 2023 16:38:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538553.838579; Tue, 23 May 2023 16:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1q-0006Ld-C3; Tue, 23 May 2023 16:38:30 +0000
Received: by outflank-mailman (input) for mailman id 538553;
 Tue, 23 May 2023 16:38:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V1o-0006Dr-OV
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:38:28 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3cf31a17-f988-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 18:38:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cf31a17-f988-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859906;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=ehPXLskpRZYpYI4YTcAs0IW9d4IX2bw/AKtNWOoxDTU=;
  b=ATyznmYY3c0AfBe/OqDaFuH6QoZnllwByE62Zai3DaVNKMc/v74/ly+3
   VqVFH79/Pmkh/Zdcw/zKMLCA+0VBAevIqnktxQhBxAc30qhelQOxeokpd
   k8n47WvwjF6JiO8T30QkfpwLq2cu7HMBnHSNTeJuxyD9RM2DomeA3dHAR
   A=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108859023
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:latz1qz7XGd82WUK/0p6t+dNwSrEfRIJ4+MujC+fZmUNrF6WrkVWn
 2QYWWzXO6rfYmfxLtt3aorio0gFuZDWn9EyTVNupSAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw/zF8EsHUMja4mtC5QRjP6sT5jcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KT1o+
 PYFDA1TVSqKiOzr2Z2kQNR9uP12eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+hgGX/dDtJ7kqYv6Mt70DYzRBr0airO93QEjCPbZwNzhbF9
 zqcpAwVBDkKOOK45WaI1UmzqfPDuS24BIYiJKKBo6sCbFq7mTVIVUx+uUGAiem0jAuyVsxSL
 2QQ+zEytu4i+UqzVN7/Uhak5nmesXY0V8JLFuwm6CmE0qfO/xuCHW8AUyJAb9o98sQxQFQC1
 ViPhdrlQyNutL69TmiU/bOZ6zi1PEA9JGsDfjMNTBFD7cPqpooylTrQQt0lG6mw5vX3BDXxz
 jaivCU4wbIJgqYjzL6n9FrKhzatoJnhTQMv4AjTGGW/4WtRe4qNd4Gur1/B4p5oJ4+DQl6Ml
 HMNgcSZ4aYFCpTlvCaKSu8cEaqp4/uAOTv0jltmHp1n/DOok1aoeoZW5zNyLVloKe4LfDboZ
 AnYvgY5zJVeJmewZKl7JYe4Ed03zLPIHM7gEPvTa7JmXJ91cwOW+TB0UmSZ1WvtjUsEnLk2P
 NGQdsPEJXQQBLljzTG2b/wAyrJtzSc7rV4/XriikU7hi+DHIifIF/FcagDmgv0FAL2s/CPY+
 ct7CpWx8RxQXrDnTTbdzJ8tFAVfRZQkPqzep8tSf++FBwNpHmA9FvPcqY8cl5xZc7d9zbmRo
 CzkMqNM4B+m3CCcd13WApx2QOm3NauTu07XKsDF0byA/3E4Kbii464EH3fcVel2rbczpRKYo
 hRsRilhPhitYm6fk9j+RcOnxGCHSPhMrVzmAsZdSGJjF6OMviSQkjMeQiPh9TMVEg28vtYkr
 rur22vzGMRTG1syVJqNOav3kztdWETxf8orBSP1ziR7Ih2woOCG1QSq5hPIHy38AUqanWbLv
 +pnKRwZufPMs+cIzTU9vojd993BO7InTiJn85zzse7e2d/yojDynuetkY+gIVjgaY8D0P76P
 7QKkKmsYaFvcZQjm9MULouHBJkWv7PHz4K2BCw9dJkXRzxH0o9dH0Q=
IronPort-HdrOrdr: A9a23:H5lduKALrKZxAenlHelc55DYdb4zR+YMi2TDt3oddfU1SL39qy
 nKpp4mPHDP5wr5NEtPpTniAtjkfZq/z+8X3WB5B97LMDUO3lHIEGgL1+DfKlbbak/DH4BmtZ
 uICJIOb+EZDTJB/LrHCAvTKade/DFQmprY+9s3zB1WPHBXg7kL1XYeNu4CeHcGPjWvA/ACZe
 Ohz/sCnRWMU1INYP+2A3EUNtKz2uEixPrdEGY77wdM0nj0sQ+V
X-Talos-CUID: =?us-ascii?q?9a23=3Aw8kwQGoOghl7nuRt2a9Lb7jmUZwDf3fA6EqOGHW?=
 =?us-ascii?q?xBmg0UZKKQ2KBxpoxxg=3D=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3AdAcT6A69SdyEzUgxu/+t0/W+xox475qJJBwAtq4?=
 =?us-ascii?q?ZlNWIESIzZzOtvCuOF9o=3D?=
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="108859023"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Alistair Francis <alistair.francis@wdc.com>, "Julien
 Grall" <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Doug Goldstein <cardoe@cardoe.com>, Bob Eshleman
	<bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, "Bertrand
 Marquis" <bertrand.marquis@arm.com>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	"Ross Lagerwall" <ross.lagerwall@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>
Subject: [XEN PATCH 00/15] build: cleanup build log, avoid user's CFLAGS, avoid too many include of Config.mk
Date: Tue, 23 May 2023 17:37:56 +0100
Message-ID: <20230523163811.30792-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Patch series available in this git branch:
https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.build-system-xen-removing-config.mk-v1

Hi,

This series of patches cleans up the remaining rules that still display
their command line.

Then, some changes are made in Config.mk to remove the build-id logic
that is only used by the Xen build.

Then, the variables AFLAGS and CFLAGS are renamed XEN_AFLAGS and XEN_CFLAGS
from the beginning, to avoid including the user's CFLAGS, as those are usually
meant for user-space programs, not a kernel, especially in build environments
for distro packages.

The last patch removes the inclusion of Config.mk from xen/Rules.mk, as it
slows down the build unnecessarily. xen/Makefile should do everything
necessary to set up the build of the hypervisor, and is its only entry point.

Thanks,

Anthony PERARD (15):
  build: hide that we are updating xen/lib/x86
  build: rework asm-offsets.* build step to use kbuild
  build, x86: clean build log for boot/ targets
  build: hide policy.bin commands
  build: introduce a generic command for gzip
  build: quiet for .allconfig.tmp target
  build: move XEN_HAS_BUILD_ID out of Config.mk
  build: use $(filechk, ) for all compat/.xlat/%.lst
  build: hide commands run for kconfig
  build: rename $(AFLAGS) to $(XEN_AFLAGS)
  build: rename CFLAGS to XEN_CFLAGS in xen/Makefile
  build: avoid Config.mk's CFLAGS
  build: fix compile.h compiler version command line
  Config.mk: move $(cc-option, ) to config/compiler-testing.mk
  build: remove Config.mk include from Rules.mk

 Config.mk                   | 39 +------------------
 config/compiler-testing.mk  | 25 +++++++++++++
 xen/Makefile                | 75 +++++++++++++++++++++++++------------
 xen/Rules.mk                |  7 +++-
 xen/arch/arm/Makefile       |  2 +-
 xen/arch/arm/arch.mk        |  8 +++-
 xen/arch/riscv/Makefile     |  2 +-
 xen/arch/riscv/arch.mk      |  4 +-
 xen/arch/x86/Makefile       | 12 +++---
 xen/arch/x86/arch.mk        | 62 +++++++++++++++---------------
 xen/arch/x86/boot/Makefile  | 16 ++++++--
 xen/build.mk                | 24 +++++++-----
 xen/common/Makefile         |  8 ++--
 xen/include/Makefile        | 10 ++---
 xen/scripts/Kbuild.include  | 10 +++++
 xen/test/livepatch/Makefile |  4 +-
 xen/tools/kconfig/Makefile  | 14 +++----
 xen/xsm/flask/Makefile      |  9 ++++-
 18 files changed, 193 insertions(+), 138 deletions(-)
 create mode 100644 config/compiler-testing.mk

-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538552.838570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1q-0006Gb-9d; Tue, 23 May 2023 16:38:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538552.838570; Tue, 23 May 2023 16:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1q-0006Fz-4e; Tue, 23 May 2023 16:38:30 +0000
Received: by outflank-mailman (input) for mailman id 538552;
 Tue, 23 May 2023 16:38:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V1o-0006Dr-2J
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:38:28 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3c14679e-f988-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 18:38:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c14679e-f988-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859905;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=ufpNHiN8OYSwXkEse8X/FKw/mqXv9HClfgFsxNyVIRY=;
  b=equVlQO6SOBlBialkLEL+65Z23Fhq6nlIMoGvfhVkOCK5kQBu4j1iAF/
   NPUi12Ath/hbK/1vZMUhz5x47SthJavkLBoq771rzwZFD7t1r9LBTBEaZ
   mhy67+j2H+4oBwGQznj20EsTaMedoEfEt9ikNVuPZ2sKnxHMXPwsgLvWg
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110501175
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:tcMeqanrWvY+NtgZ0nDXRGTo5gy7JkRdPkR7XQ2eYbSJt1+Wr1Gzt
 xIYCGjXMv2MZTHxLdtwOYu//UNSuMSHz9QyTwNr+y4wQiMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE0p5KyaVA8w5ARkPqgW5gSGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 dMVJzcDPg+Dvs2ZxpeXW8Y1oNQ4Eca+aevzulk4pd3YJfMvQJSFSKTW/95Imjw3g6iiH96HO
 ZBfM2A2Kk2dPVsWYAx/5JEWxY9EglH2dSFYr1SE47I6+WHJwCR60aT3McqTcduPLSlQth/B/
 j+epj2mX3n2MvS5xj2HyC6Bq9bRgCHKdJ0TH+O7tft11Qj7Kms7V0RNCArTTeOCol6zXZdTJ
 lIZ/gIqrLMu7wq7Q9/lRRq6rXWY+BkGVLJ4Eec39QWMwar8+BuCCy4PSTspQN47sM47QxQ62
 1nPmMnmbRR0q6GcQ3+Z8raSrBuxNDITIGtEYjULJSMa5/HzrYd1iQjAJuuPC4bs0IezQ2uph
 WnX8m5n3e57YdM3O7uT0l3IhDz8uZjwYSEzwynGcTuD/hhEa9vwD2C30mTz4fFFJYefa1COu
 nkYhsSThNwz4YGxeD+lG7tUQuzwjxqRGHiF2AM0QcF9n9i40yT7Fb289g2SM6uA3iwsXTbyK
 HHetgpKjHO4FCv7NPQnC25d5ilD8EQBKTgHfqqMBjatSsIrHONiwM2JTRD44owVuBJw+ZzTw
 6uzf8e2Fmo9Aq961jewTOp1+eZ1lnxhlTuPHsGil0jPPV+iiJm9EO1tDbdzRrphsPPsTPv9q
 L6zyPdmOz0ACbajM0E7AKYYLEwQLGhTOK0aX/d/L7bZSiI/QTFJNhMk6e95E6R/gb9vn/vFl
 lnkHBcwJKzX2SeWdm1nqxlLNNvSYHqIhSljZ31zYg74iiRLjETGxP53SqbbtIIPrIRLpcOYh
 dFcEylcKpyjkgj6xgk=
IronPort-HdrOrdr: A9a23:QBWQ86BMY0QaU/nlHenP55DYdb4zR+YMi2TDtnoQdfUxSKelfq
 +V8cjzuSWftN9zYhAdcK67V5VoKEm0naKdirN8AV7NZmfbhFc=
X-Talos-CUID: 9a23:ii66Rm2k+/7t96EQWhLX37xfNc0qdXON03fpDGC+LEp4Z+SzWUSt9/Yx
X-Talos-MUID: =?us-ascii?q?9a23=3A2WhmcQ/33Dqs4r4wFMr/YOeQf+Fn5KaUVG0gqIs?=
 =?us-ascii?q?bufioKi17ZTTeiSviFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="110501175"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 02/15] build: rework asm-offsets.* build step to use kbuild
Date: Tue, 23 May 2023 17:37:58 +0100
Message-ID: <20230523163811.30792-3-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Use the $(if_changed_dep, ) macro to generate "asm-offsets.s" and remove
the use of $(move-if-changed,). This means that "asm-offsets.s" will be
rewritten even when its content doesn't change.

But "asm-offsets.s" is only used to generate "asm-offsets.h". So
instead of regenerating "asm-offsets.h" every time "asm-offsets.s"
changes, we use "$(filechk, )" to only update the ".h" when its
content changes. Also, with "$(filechk, )", the file does get
regenerated when the rule changes in the makefile.

This change also results in a cleaner build log.
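
A condensed sketch (with hypothetical file names, not the actual Xen rules) of the two Kbuild idioms this patch combines: `if_changed_dep` always rewrites its target (and re-runs when the command line or a tracked dependency changed), while `filechk` only replaces the target when the generated content actually differs:

```make
# if_changed_dep: the target is rewritten on every run where the input,
# the command line, or a tracked dependency changed.
quiet_cmd_gen.s = CC      $@
cmd_gen.s = $(CC) $(c_flags) -S -g0 $< -o $@

gen.s: gen.c FORCE
	$(call if_changed_dep,gen.s)
targets += gen.s

# filechk: the output is generated into a temporary file and compared;
# the target is only replaced when the content really changed, so
# targets depending on it are not rebuilt needlessly.
define filechk_gen.h
	echo "/* generated */"; \
	sed -n 's/pattern/replacement/p' $<
endef

gen.h: gen.s FORCE
	$(call filechk,gen.h)
```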

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Instead of having a special $(cmd_asm-offsets.s) command, we could
probably reuse $(cmd_cc_s_c) from Rules.mk, but that would mean that
a hypothetical additional flag like "-flto" in CFLAGS would not be
removed anymore; not sure if that matters here.

But then we could write this:

targets += arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s
arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s: CFLAGS-y += -g0
arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s FORCE

instead of having to write a rule for asm-offsets.s
---
 xen/build.mk | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/xen/build.mk b/xen/build.mk
index 758590c68e..e2a78aa806 100644
--- a/xen/build.mk
+++ b/xen/build.mk
@@ -40,13 +40,15 @@ include/xen/compile.h: include/xen/compile.h.in .banner FORCE
 
 targets += include/xen/compile.h
 
--include $(wildcard .asm-offsets.s.d)
-asm-offsets.s: arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.c
-	$(CC) $(call cpp_flags,$(c_flags)) -S -g0 -o $@.new -MQ $@ $<
-	$(call move-if-changed,$@.new,$@)
+quiet_cmd_asm-offsets.s = CC      $@
+cmd_asm-offsets.s = $(CC) $(call cpp_flags,$(c_flags)) -S -g0 $< -o $@
 
-arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: asm-offsets.s
-	@(set -e; \
+asm-offsets.s: arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.c FORCE
+	$(call if_changed_dep,asm-offsets.s)
+
+targets += asm-offsets.s
+
+define filechk_asm-offsets.h
 	  echo "/*"; \
 	  echo " * DO NOT MODIFY."; \
 	  echo " *"; \
@@ -57,9 +59,13 @@ arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: asm-offsets.s
 	  echo "#ifndef __ASM_OFFSETS_H__"; \
 	  echo "#define __ASM_OFFSETS_H__"; \
 	  echo ""; \
-	  sed -rne "/^[^#].*==>/{s:.*==>(.*)<==.*:\1:; s: [\$$#]: :; p;}"; \
+	  sed -rne "/^[^#].*==>/{s:.*==>(.*)<==.*:\1:; s: [\$$#]: :; p;}" $<; \
 	  echo ""; \
-	  echo "#endif") <$< >$@
+	  echo "#endif"
+endef
+
+arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: asm-offsets.s FORCE
+	$(call filechk,asm-offsets.h)
 
 build-dirs := $(patsubst %/built_in.o,%,$(filter %/built_in.o,$(ALL_OBJS) $(ALL_LIBS)))
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538556.838610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1y-0007RW-Iu; Tue, 23 May 2023 16:38:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538556.838610; Tue, 23 May 2023 16:38:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V1y-0007PQ-E5; Tue, 23 May 2023 16:38:38 +0000
Received: by outflank-mailman (input) for mailman id 538556;
 Tue, 23 May 2023 16:38:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V1x-0006Dq-Jl
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:38:37 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 42cb2ca4-f988-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 18:38:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42cb2ca4-f988-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859916;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=oXOI7IkQMJP7o4IIkYu8XpZs5Gw1LndgpIXVIuwSGWc=;
  b=dcSCdl1DFMRYwiNWU9AoKyUN1Zz1KocQVbmVceRRGVDsRwEyOzuz7K24
   vrdrLUMAEiz6DgXMQJykwgMlskB3fte9isnp8k816utr3mLy8JcKLDQUZ
   fF3PlXwGbn5dxcJJatsMUfwhAaxBiU27oXiOzbEoIZ0dkKtxqa9Y1jI42
   0=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108859051
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:MSn1xa06NhDT0h62v/bD5eBxkn2cJEfYwER7XKvMYLTBsI5bpzYOz
 DQdDTjVOP+Pa2qnL4t0b9+z8UlT6J+BmIVhTgRopC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8teTb8HuDgNyo4GlD5gFkPqgR1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfX14fq
 qYfdTYxcgGGrciUzLumZ+VXr5F2RCXrFNt3VnBIyDjYCbAtQIzZQrWM7thdtNsyrpkQR7CEP
 ZNfMGcxKk2aOHWjOX9OYH46tO6umnn4dSwesF+PrLA7y2PS0BZwwP7mN9+9ltmiHJ0FzhvJ/
 j+fl4j/KgAiDO60lzvayFfylPWRp3zpXbw3ObLto5aGh3XMnzdOWXX6T2CTsfS/z0KzRd9bA
 0gV4TY167g/8lSxSdvwVAH+p2SL1jYeUddNF+wx6CmW17HZpQ2eAwAsTCNFadEgnN87Q3otz
 FDht8jyGTVlvbmRSHSc3rSZtzW/PW4SN2BqTTAAZRsI5Z/kuo5bs/7UZo89Sujv1ISzQGyuh
 WnQ90DSmon/k+ZV6PTkp1eahQ6wt53jZCw57xj6UlmMu1YRiJGeW6Sk7l3S7PBlJYmfT0Wcs
 HVsp/Vy/NziHrnWynXTHbxl8KWBoq/cbWaC2QIH84wJrWzFxpK1QWxHDNiSzm9NO91MRzLma
 VS7Veh5tM4KZyvCgUOajuuM5yUWIUrIT4yNuhP8NIAmjn1NmOivoklTiba4hTyFraTVufhX1
 W2nWcitF20GLq9s0SC7QewQuZdymHBimjOLGcuqlkz7uVZ7WJJyYe5fWGZik8hjtP/UyOkr2
 4032zS2J+V3D7SlP3i/HX87JlEWN3krba3LRzhsXrfbeGJOQThxY8I9NJt9I+SJaYwJzLaXl
 px8M2cEoGfCaYrvcl3QOy88MOO+Af6SbxsTZEQRALph4FB7Ca7H0UvVX8BfkWUPnAC78cNJc
 g==
IronPort-HdrOrdr: A9a23:8s1hvaPswhe31MBcTjGjsMiBIKoaSvp037BK7S1MoH1uA6ilfq
 WV9sjzuiWatN98Yh8dcLO7Scy9qBHnhP1ICOAqVN/PYOCBggqVxelZhrcKqAeQeREWmNQ86U
 4aSdkYNDXxZ2IK8foT4mODYqkdKA/sytHXuQ/cpU0dPD2Dc8tbnmFE4p7wKDwNeOFBb6BJba
 a01458iBeLX28YVci/DmltZZm/mzWa/KiWGSLvHnQcmXKzsQ8=
X-Talos-CUID: =?us-ascii?q?9a23=3AZ/4+8mqaoIlVBmf42RefWZ7mUZ4aIyKHyyrWH1O?=
 =?us-ascii?q?pDExDarGcVHKcw4oxxg=3D=3D?=
X-Talos-MUID: 9a23:XrUydwUSV5H+wBDq/C7UvANJasY32YHwGlpTq8oUvvehGCMlbg==
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="108859051"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 06/15] build: quiet for .allconfig.tmp target
Date: Tue, 23 May 2023 17:38:02 +0100
Message-ID: <20230523163811.30792-7-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/Makefile b/xen/Makefile
index e89fc461fc..27f70d2200 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -339,7 +339,7 @@ filechk_kconfig_allconfig = \
     :
 
 .allconfig.tmp: FORCE
-	set -e; { $(call filechk_kconfig_allconfig); } > $@
+	$(Q)set -e; { $(call filechk_kconfig_allconfig); } > $@
 
 config: tools_fixdep outputmakefile FORCE
 	$(Q)$(MAKE) $(build)=tools/kconfig $@
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:38:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538557.838625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V20-0007tq-SG; Tue, 23 May 2023 16:38:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538557.838625; Tue, 23 May 2023 16:38:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V20-0007td-O9; Tue, 23 May 2023 16:38:40 +0000
Received: by outflank-mailman (input) for mailman id 538557;
 Tue, 23 May 2023 16:38:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V1z-0006Dq-EZ
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:38:39 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4409e439-f988-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 18:38:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4409e439-f988-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859918;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=PR1Nr1IkdQd7XuTeHUYIINkb87rvsAFhUL6D4IH39fU=;
  b=WJ655CNxJCkwLw/sitlRzO+VagKoN294YMpWqp81TYVbGh/mLcMpPLh3
   iFEcDf/UD1c0s+SgCXB48gn8ZTfJkTzmEq6W1yFKJ9IeoT0Xt6a7JydZP
   lP4PyQjTmSJ/Tdg0v7AU01PZaxZAkX0AJ3lrRqZTEbRtd0Fu/rXTbe5CJ
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110112559
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:tl6GW6IPxdOCmsGHFE+RmpIlxSXFcZb7ZxGr2PjKsXjdYENS02YAy
 mRODW3TbvfbYmX3eNsjPYy/9xsHsJDTm4BrHgtlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpKrfrbwP9TlK6q4mhA4wZlPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5GWFx0z
 tUULAkVSRKlprus35+iVOhj05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TTHJ0MxxzF+
 zOuE2LRWUAUMtqb0Ca/6nuR2fSSoHrEdp09C+jtnhJtqALKnTFCYPEMbnOrrP/8hkOgVtZ3L
 00P5jFovaU07FasTNT2Q1u/unHslhwWVsdUEuY6wBqQ0aeS6AGcbkAGUzpAZdoOpMIwAzsw2
 TehhMj1DDZitLmUT3O19bqOqz62fy8PIgcqeissXQYDpd75r+kbhB/VUsxqFqLzi9TvACzx2
 BiDti14jLIW5eY10KG88UHCkiibjJHDRQ4o5S3aRmugqAh+YeaNbYui40nW9vZEIYOQSHGOu
 XEFn46V6+VmJZiJlTeRSeQXWr+z7vCOMSb0nlJkWZIm8lyF8Hmle4dS7DhgJVxBPcMNeDuva
 0jW0StS45lJNXfscq5zYKq2Ec0hyaWmHtPgPs04dfIXPMI3LlXeungzOwjJhTuFfFUQfb8XC
 M6mS/2FAXIjEYNl6hvvXfgwyKEqyXVrrY/MfqzTwxOi2LuYQXeaT7YZLVeDBtwEALO4TBb9q
 IgGaZbTo/lLeKinO3SMr9ZPRbwfBSJjba0avfC7YQJqzuBOPGg6Q8Hczro6E2COt/QEz7yYl
 p1Rt6Ix9bYeuZElAV/SApyAQOm1NXqakZ7cFXJEALpQ8yJ/CbtDFY9GH3fNQZEp9fZ40dl/R
 OQfdsOLD5xnE2qXp2tNNcWm/dU6KHxHYD5i2AL8OlACk2NIHVSVqrcIgCO0nMXxMsZHnZRn+
 ODxvu8qaZECWx5jHK7rVR5b9Hvo5SJ1sLsrDyP1zix7JB2EHH5Cd3ag0Zfa4qgkdX3++9dt/
 13OW0ZH+rGd8tNdHRugrfnskrpF2tBWRiJyd1Q3J57tXcUG1gJPGbN9bds=
IronPort-HdrOrdr: A9a23:0IHgGKmRtA4cNC69p2DmB8OQnoLpDfLo3DAbv31ZSRFFG/Fw9/
 rCoB17726QtN91YhsdcL+7V5VoLUmzyXcX2/hyAV7BZmnbUQKTRekP0WKL+Vbd8kbFh41gPM
 lbEpSXCLfLfCJHZcSR2njELz73quP3jJxBho3lvghQpRkBUdAF0+/gYDzranGfQmN9dP0EPa
 vZ3OVrjRy6d08aa8yqb0N1JNQq97Xw5fTbiQdtPW9f1DWz
X-Talos-CUID: 9a23:DZ5qBW2iCXgj3n4NC14DwbxfK5kYSEb0wHPpO2i6L0cwUqK4UEGR9/Yx
X-Talos-MUID: 9a23:/+oh6wQgM/5RClhHRXS31T9kMsNx/562AX1Sisg74vaqKgNvbmI=
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="110112559"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Ross Lagerwall
	<ross.lagerwall@citrix.com>
Subject: [XEN PATCH 07/15] build: move XEN_HAS_BUILD_ID out of Config.mk
Date: Tue, 23 May 2023 17:38:03 +0100
Message-ID: <20230523163811.30792-8-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Whether or not the linker can emit a build id is only used by the
hypervisor build, so move the check there.

Rename $(build_id_linker) to $(XEN_LDFLAGS_BUILD_ID), which is a
better name for an exported variable as it uses the "XEN_*" namespace.

Also update XEN_TREEWIDE_CFLAGS so the flag can be used for
arch/x86/boot/'s CFLAGS_x86_32.

Besides a reordering of the command line where CFLAGS is used, there
shouldn't be any other changes.
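
The ld-ver-build-id probe moved into Kbuild.include works by inspecting the linker's diagnostics for "--build-id". A minimal shell sketch of the same logic (the diagnostic strings below are made up for illustration, not real ld output):

```shell
# If the linker's diagnostics mention "build-id", the option was not
# understood, so the answer is "n"; otherwise it is supported ("y").
probe_build_id() {
    # $1 stands in for the output of: $(LD) --build-id 2>&1
    printf '%s\n' "$1" | grep -q build-id && echo n || echo y
}

probe_build_id "ld: unrecognized option '--build-id'"   # prints n
probe_build_id "ld: no input files"                     # prints y
```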

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 Config.mk                   | 12 ------------
 xen/Makefile                | 12 ++++++++++++
 xen/arch/arm/Makefile       |  2 +-
 xen/arch/riscv/Makefile     |  2 +-
 xen/arch/x86/Makefile       | 12 ++++++------
 xen/scripts/Kbuild.include  |  3 +++
 xen/test/livepatch/Makefile |  4 ++--
 7 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/Config.mk b/Config.mk
index d12d4c2b8f..27f48f654a 100644
--- a/Config.mk
+++ b/Config.mk
@@ -125,18 +125,6 @@ endef
 check-$(gcc) = $(call cc-ver-check,CC,0x040100,"Xen requires at least gcc-4.1")
 $(eval $(check-y))
 
-ld-ver-build-id = $(shell $(1) --build-id 2>&1 | \
-					grep -q build-id && echo n || echo y)
-
-export XEN_HAS_BUILD_ID ?= n
-ifeq ($(call ld-ver-build-id,$(LD)),n)
-build_id_linker :=
-else
-CFLAGS += -DBUILD_ID
-export XEN_HAS_BUILD_ID=y
-build_id_linker := --build-id=sha1
-endif
-
 define buildmakevars2shellvars
     export PREFIX="$(prefix)";                                            \
     export XEN_SCRIPT_DIR="$(XEN_SCRIPT_DIR)";                            \
diff --git a/xen/Makefile b/xen/Makefile
index 27f70d2200..4dc960df2c 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -286,6 +286,18 @@ CFLAGS += $(CLANG_FLAGS)
 export CLANG_FLAGS
 endif
 
+# XEN_HAS_BUILD_ID needed by Kconfig
+ifeq ($(call ld-ver-build-id,$(LD)),n)
+XEN_LDFLAGS_BUILD_ID :=
+XEN_HAS_BUILD_ID := n
+else
+CFLAGS += -DBUILD_ID
+XEN_TREEWIDE_CFLAGS += -DBUILD_ID
+XEN_HAS_BUILD_ID := y
+XEN_LDFLAGS_BUILD_ID := --build-id=sha1
+endif
+export XEN_HAS_BUILD_ID XEN_LDFLAGS_BUILD_ID
+
 export XEN_HAS_CHECKPOLICY := $(call success,$(CHECKPOLICY) -h 2>&1 | grep -q xen)
 
 # ===========================================================================
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 4d076b278b..1cc57d2cf0 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -102,7 +102,7 @@ $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
 	$(NM) -pa --format=sysv $(@D)/.$(@F).1 \
 		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).1.S
 	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
-	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(XEN_LDFLAGS_BUILD_ID) \
 	    $(@D)/.$(@F).1.o -o $@
 	$(NM) -pa --format=sysv $(@D)/$(@F) \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 443f6bf15f..8a0e483c66 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -9,7 +9,7 @@ $(TARGET): $(TARGET)-syms
 	$(OBJCOPY) -O binary -S $< $@
 
 $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
-	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(XEN_LDFLAGS_BUILD_ID) -o $@
 	$(NM) -pa --format=sysv $(@D)/$(@F) \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
 		>$(@D)/$(@F).map
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 2672d7f4ee..c7ec315fe6 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -100,7 +100,7 @@ efi-y := $(shell if [ ! -r $(objtree)/include/xen/compile.h -o \
          $(space)
 efi-$(CONFIG_PV_SHIM_EXCLUSIVE) :=
 
-ifneq ($(build_id_linker),)
+ifneq ($(XEN_LDFLAGS_BUILD_ID),)
 notes_phdrs = --notes
 else
 ifeq ($(CONFIG_PVH_GUEST),y)
@@ -136,19 +136,19 @@ $(TARGET): $(TARGET)-syms $(efi-y) $(obj)/boot/mkelf32
 CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
 
 $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
-	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(XEN_LDFLAGS_BUILD_ID) \
 	    $(objtree)/common/symbols-dummy.o -o $(@D)/.$(@F).0
 	$(NM) -pa --format=sysv $(@D)/.$(@F).0 \
 		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
 		>$(@D)/.$(@F).0.S
 	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).0.o
-	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(XEN_LDFLAGS_BUILD_ID) \
 	    $(@D)/.$(@F).0.o -o $(@D)/.$(@F).1
 	$(NM) -pa --format=sysv $(@D)/.$(@F).1 \
 		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort $(syms-warn-dup-y) \
 		>$(@D)/.$(@F).1.S
 	$(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
-	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(XEN_LDFLAGS_BUILD_ID) \
 	    $(orphan-handling-y) $(@D)/.$(@F).1.o -o $@
 	$(NM) -pa --format=sysv $(@D)/$(@F) \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
@@ -186,10 +186,10 @@ relocs-dummy := $(obj)/efi/relocs-dummy.o
 $(TARGET).efi: ALT_BASE = 0x$(shell $(NM) $(obj)/efi/relocs-dummy.o | sed -n 's, A ALT_START$$,,p')
 endif
 
-ifneq ($(build_id_linker),)
+ifneq ($(XEN_LDFLAGS_BUILD_ID),)
 ifeq ($(call ld-ver-build-id,$(LD) $(filter -m%,$(EFI_LDFLAGS))),y)
 CFLAGS-y += -DBUILD_ID_EFI
-EFI_LDFLAGS += $(build_id_linker)
+EFI_LDFLAGS += $(XEN_LDFLAGS_BUILD_ID)
 note_file := $(obj)/efi/buildid.o
 # NB: this must be the last input in the linker call, because inputs following
 # the -b option will all be treated as being in the specified format.
diff --git a/xen/scripts/Kbuild.include b/xen/scripts/Kbuild.include
index 785a32c32e..d820595e2f 100644
--- a/xen/scripts/Kbuild.include
+++ b/xen/scripts/Kbuild.include
@@ -91,6 +91,9 @@ cc-ifversion = $(shell [ $(CONFIG_GCC_VERSION)0 $(1) $(2)000 ] && echo $(3) || e
 
 clang-ifversion = $(shell [ $(CONFIG_CLANG_VERSION)0 $(1) $(2)000 ] && echo $(3) || echo $(4))
 
+ld-ver-build-id = $(shell $(1) --build-id 2>&1 | \
+					grep -q build-id && echo n || echo y)
+
 ###
 # Shorthand for $(Q)$(MAKE) -f scripts/Makefile.build obj=
 # Usage:
diff --git a/xen/test/livepatch/Makefile b/xen/test/livepatch/Makefile
index c258ab0b59..c78f3ce06f 100644
--- a/xen/test/livepatch/Makefile
+++ b/xen/test/livepatch/Makefile
@@ -37,7 +37,7 @@ $(obj)/modinfo.o:
 
 #
 # This target is only accessible if CONFIG_LIVEPATCH is defined, which
-# depends on $(build_id_linker) being available. Hence we do not
+# depends on $(XEN_LDFLAGS_BUILD_ID) being available. Hence we do not
 # need any checks.
 #
 # N.B. The reason we don't use arch/x86/note.o is that it may
@@ -142,7 +142,7 @@ xen_expectations_fail-objs := xen_expectations_fail.o xen_hello_world_func.o not
 
 
 quiet_cmd_livepatch = LD      $@
-cmd_livepatch = $(LD) $(XEN_LDFLAGS) $(build_id_linker) -r -o $@ $(real-prereqs)
+cmd_livepatch = $(LD) $(XEN_LDFLAGS) $(XEN_LDFLAGS_BUILD_ID) -r -o $@ $(real-prereqs)
 
 $(obj)/%.livepatch: FORCE
 	$(call if_changed,livepatch)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538558.838635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V26-0008Vq-7N; Tue, 23 May 2023 16:38:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538558.838635; Tue, 23 May 2023 16:38:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V26-0008Vc-3s; Tue, 23 May 2023 16:38:46 +0000
Received: by outflank-mailman (input) for mailman id 538558;
 Tue, 23 May 2023 16:38:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V24-0006Dq-9a
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:38:44 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 471c51cd-f988-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 18:38:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 471c51cd-f988-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859923;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=UqV5mZcPJiexqI58U22QslwrAPllWJ5ALOxWdCDzORY=;
  b=O3tEw0qI4P5vtfAvRaAQ8FRNE2lHMLsnTPjy+UQ6yFDEW6qBgrgbpGJ+
   oOjqHpHfCDizEJa89+e1c5pJnR3oP0vkkHtZb4ALLEAhUdOUfaVF9/yzy
   qgG3s5+1Mu6LBfaAPgWhRpl2SGQDYMhpx0nXWsc4XEtXZ0r14pSgbAh9Q
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109422951
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:jbqg+q/wFFznlahHhgozDrUDs36TJUtcMsCJ2f8bNWPcYEJGY0x3y
 jFJCjqFPqqMa2fwKtgkaIiz901U6MeGn9JhGlNkrHw8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ird7ks31BjOkGlA5AdmOKoQ5AW2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklnz
 9BGBDpSVymNpOuxyuujEbIviJgKeZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAj3/jczpeuRSNqLA++WT7xw1tyrn9dtHSf7RmQO0MxhfE/
 DiXoj2R7hcyLMOH2HmPzlGQmebkpjP1Ca9JH4KU+as/6LGU7jNKU0BHPbehmtG7l0q/VtR3O
 0ESvC00osAa9kamU938VB2Qu2Ofs1gXXN84O/I+wBGAzOzT+QnxLnMfUjdLZdgitck3bT8nz
 FmEm5XuHzMHmK2YTzeR+6mZqRu2ODMJNikSaCkcVwwH7tL/5oYpgXryos1LSfDvyIevQHepn
 m7M9XJl71kOsSIV/4yB0Q7riW2Vn5bqRwk/vx2MBSGrsiosMeZJeLeUwVTc6P9BKqOQQV+Ao
 GUIlqCi0QweMX2evHfTGbtQRdlF897AaWSB2gA3Q/HN4hz3oxaekZZsDCaSzauDGuINYnfXb
 UDaomu9D7cDbSLxPcebj29cYvnGLJQM9/y/Dpg4jfIUOPCdkTNrGwkwDXN8J0i3zCARfVgXY
 P93i/qEA3cAErhAxzGrXeob2rJD7nlglT+MFcinlEX+ieb2iJuppVAtaQHmUwzExPnc/FW9H
 yh3bKNmNCmzoMWhO3KKoOb/3HgBLGQhBICelvG7gtWre1I8cEl4Uq+5/F/UU9A990ijvruSr
 y7Vt44x4AaXuEAr3i3RMioyN+y3DccjxZ/5VAR1VWuVN7EYSd7HxM8im1EfJtHLKMQLISZIc
 sQ4
IronPort-HdrOrdr: A9a23:t42FNq4vKkCZFEnorQPXwPLXdLJyesId70hD6qkXc3Bom62j+P
 xG+c5x6faaslgssR0b+OxoWpPwIk80hKQU3WB5B97LNmTbUQCTXeNfBOXZslrdMhy72ulB1b
 pxN4hSYeeAa2SSVPyKhTVQxexQpOW6zA==
X-Talos-CUID: =?us-ascii?q?9a23=3Ad5rPLWqI5dEx0IXWotB7xTPmUekfXiPxxijpGkG?=
 =?us-ascii?q?bAGFjEaXOEhyepIoxxg=3D=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3A1GlSBAyDVmlAkAEsrfuyGZ1UCtOaqLyxEFFOm6k?=
 =?us-ascii?q?lh9aNHhdgNWaw1z+IboByfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="109422951"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 08/15] build: use $(filechk, ) for all compat/.xlat/%.lst
Date: Tue, 23 May 2023 17:38:04 +0100
Message-ID: <20230523163811.30792-9-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Making use of filechk means that we don't have to use
$(move-if-changed,). It also means that we will sometimes have
"UPD .." in the build output when the target changed, rather than
always having "GEN ..." when "xlat.lst" happens to have a more recent
modification timestamp.

While there, replace `grep -v` with a `sed '//d'` expression to avoid
an extra fork and pipe when building.
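
As a sanity check of the `grep -v` to `sed '//d'` conversion, a small shell sketch (using a made-up input file, not the real xlat.lst) showing that both pipelines produce identical output:

```shell
# Filtering comment lines with a sed delete expression gives the same
# result as piping through `grep -v`, with one fewer fork and pipe.
printf '# comment\nfoo @arch@ bar\n' > /tmp/xlat-demo.lst

with_grep=$(grep -v '^#' /tmp/xlat-demo.lst | sed -e 's,@arch@,x86_64,g')
with_sed=$(sed -e '/^#/d' -e 's,@arch@,x86_64,g' /tmp/xlat-demo.lst)

[ "$with_grep" = "$with_sed" ] && echo identical
```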

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/include/Makefile | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/xen/include/Makefile b/xen/include/Makefile
index 96d5f6f3c8..2e61b50139 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -93,15 +93,13 @@ targets += $(patsubst compat/%, compat/.xlat/%, $(headers-y))
 $(obj)/compat/.xlat/%.h: $(obj)/compat/%.h $(obj)/compat/.xlat/%.lst $(srctree)/tools/compat-xlat-header.py FORCE
 	$(call if_changed,xlat_headers)
 
-quiet_cmd_xlat_lst = GEN     $@
-cmd_xlat_lst = \
-	grep -v '^[[:blank:]]*$(pound)' $< | sed -ne 's,@arch@,$(compat-arch-y),g' -re 's,[[:blank:]]+$*\.h[[:blank:]]*$$,,p' >$@.new; \
-	$(call move-if-changed,$@.new,$@)
+filechk_xlat_lst = \
+	sed -ne '/^[[:blank:]]*$(pound)/d' -e 's,@arch@,$(compat-arch-y),g' -re 's,[[:blank:]]+$*\.h[[:blank:]]*$$,,p' $<
 
 .PRECIOUS: $(obj)/compat/.xlat/%.lst
 targets += $(patsubst compat/%.h, compat/.xlat/%.lst, $(headers-y))
 $(obj)/compat/.xlat/%.lst: $(srcdir)/xlat.lst FORCE
-	$(call if_changed,xlat_lst)
+	$(call filechk,xlat_lst)
 
 xlat-y := $(shell sed -ne 's,@arch@,$(compat-arch-y),g' -re 's,^[?!][[:blank:]]+[^[:blank:]]+[[:blank:]]+,,p' $(srcdir)/xlat.lst | uniq)
 xlat-y := $(filter $(patsubst compat/%,%,$(headers-y)),$(xlat-y))
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:38:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538559.838646 (Exim 4.92)
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Doug Goldstein
	<cardoe@cardoe.com>
Subject: [XEN PATCH 09/15] build: hide commands run for kconfig
Date: Tue, 23 May 2023 17:38:05 +0100
Message-ID: <20230523163811.30792-10-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

but still show a log entry for syncconfig. We have to use kecho
instead of $(cmd,) to avoid issues with the interactive prompts from
kconfig.

linux commits for reference:
    23cd88c91343 ("kbuild: hide commands to run Kconfig, and show short log for syncconfig")
    d952cfaf0cff ("kbuild: do not suppress Kconfig prompts for silent build")
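For illustration, the $(Q)-hiding pattern can be sketched with a throwaway
makefile (file name and log message are illustrative, not the actual Xen
rule; note kecho additionally stays quiet under `make -s`, which a plain
echo does not):

```shell
# Generate a tiny makefile where the recipe command is hidden behind
# $(Q)=@, but a short "SYNC" log line is still printed for the user.
printf 'Q := @\nsync:\n\t$(Q)echo "  SYNC    include/config/auto.conf"\n' > /tmp/demo.mk
make -f /tmp/demo.mk sync
# prints:   SYNC    include/config/auto.conf  (the command itself is not echoed)
```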

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/Makefile               |  1 +
 xen/tools/kconfig/Makefile | 14 +++++++-------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/Makefile b/xen/Makefile
index 4dc960df2c..127c0e40b5 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -382,6 +382,7 @@ $(KCONFIG_CONFIG): tools_fixdep
 # This exploits the 'multi-target pattern rule' trick.
 # The syncconfig should be executed only once to make all the targets.
 include/config/%.conf include/config/%.conf.cmd: $(KCONFIG_CONFIG)
+	$(Q)$(kecho) "  SYNC    $@"
 	$(Q)$(MAKE) $(build)=tools/kconfig syncconfig
 
 ifeq ($(CONFIG_DEBUG),y)
diff --git a/xen/tools/kconfig/Makefile b/xen/tools/kconfig/Makefile
index b7b9a419ad..18c0f5d4f1 100644
--- a/xen/tools/kconfig/Makefile
+++ b/xen/tools/kconfig/Makefile
@@ -24,19 +24,19 @@ endif
 unexport CONFIG_
 
 xconfig: $(obj)/qconf
-	$< $(silent) $(Kconfig)
+	$(Q)$< $(silent) $(Kconfig)
 
 gconfig: $(obj)/gconf
-	$< $(silent) $(Kconfig)
+	$(Q)$< $(silent) $(Kconfig)
 
 menuconfig: $(obj)/mconf
-	$< $(silent) $(Kconfig)
+	$(Q)$< $(silent) $(Kconfig)
 
 config: $(obj)/conf
-	$< $(silent) --oldaskconfig $(Kconfig)
+	$(Q)$< $(silent) --oldaskconfig $(Kconfig)
 
 nconfig: $(obj)/nconf
-	$< $(silent) $(Kconfig)
+	$(Q)$< $(silent) $(Kconfig)
 
 build_menuconfig: $(obj)/mconf
 
@@ -70,12 +70,12 @@ simple-targets := oldconfig allnoconfig allyesconfig allmodconfig \
 PHONY += $(simple-targets)
 
 $(simple-targets): $(obj)/conf
-	$< $(silent) --$@ $(Kconfig)
+	$(Q)$< $(silent) --$@ $(Kconfig)
 
 PHONY += savedefconfig defconfig
 
 savedefconfig: $(obj)/conf
-	$< $(silent) --$@=defconfig $(Kconfig)
+	$(Q)$< $(silent) --$@=defconfig $(Kconfig)
 
 defconfig: $(obj)/conf
 ifneq ($(wildcard $(srctree)/arch/$(SRCARCH)/configs/$(KBUILD_DEFCONFIG)),)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:57 2023
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 13/15] build: fix compile.h compiler version command line
Date: Tue, 23 May 2023 17:38:09 +0100
Message-ID: <20230523163811.30792-14-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

CFLAGS at this point only contains the flags from Config.mk; instead, use
the flags actually used to build Xen.
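As a side note on the substitution being changed: the sed expression uses
'!' as its delimiter, presumably because a compiler version line can
contain '/'. A minimal sketch with a hypothetical version string standing
in for `$(CC) $(XEN_CFLAGS) --version 2>&1 | head -1`:

```shell
# Hypothetical first line of the compiler's --version output
compiler='gcc (GCC) 12.2.0'
# Same substitution shape as cmd_compile.h, with '!' as the delimiter
printf 'Compiled by @@compiler@@\n' | sed -e "s!@@compiler@@!$compiler!g"
# prints: Compiled by gcc (GCC) 12.2.0
```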

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    I don't know if CFLAGS is even useful there, just --version without the
    flags might produce the same result.

 xen/build.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/build.mk b/xen/build.mk
index e2a78aa806..d468bb6e26 100644
--- a/xen/build.mk
+++ b/xen/build.mk
@@ -23,7 +23,7 @@ define cmd_compile.h
 	    -e 's/@@whoami@@/$(XEN_WHOAMI)/g' \
 	    -e 's/@@domain@@/$(XEN_DOMAIN)/g' \
 	    -e 's/@@hostname@@/$(XEN_BUILD_HOST)/g' \
-	    -e 's!@@compiler@@!$(shell $(CC) $(CFLAGS) --version 2>&1 | head -1)!g' \
+	    -e 's!@@compiler@@!$(shell $(CC) $(XEN_CFLAGS) --version 2>&1 | head -1)!g' \
 	    -e 's/@@version@@/$(XEN_VERSION)/g' \
 	    -e 's/@@subversion@@/$(XEN_SUBVERSION)/g' \
 	    -e 's/@@extraversion@@/$(XEN_EXTRAVERSION)/g' \
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:38:59 2023
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 05/15] build: introduce a generic command for gzip
Date: Tue, 23 May 2023 17:38:01 +0100
Message-ID: <20230523163811.30792-6-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Make the gzip command generic, and use -9, which wasn't used for
config.gz. (xen.gz already uses -9.)
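A quick sketch of why the generic command also passes -n: without it, gzip
embeds a timestamp in the header, so identical input compressed at
different times produces different archives; with -n the output depends
only on the input (file names are illustrative):

```shell
# Compress the same bytes twice, a second apart; -n drops the timestamp,
# so the two archives are byte-identical (reproducible builds).
printf 'CONFIG_FOO=y\n' | gzip -n -9 > /tmp/a.gz
sleep 1
printf 'CONFIG_FOO=y\n' | gzip -n -9 > /tmp/b.gz
cmp /tmp/a.gz /tmp/b.gz && echo "byte-identical"
# prints: byte-identical
```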

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/Rules.mk        | 5 +++++
 xen/common/Makefile | 8 ++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/xen/Rules.mk b/xen/Rules.mk
index 59072ae8df..68b10ca5ef 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -63,6 +63,11 @@ cmd_objcopy = $(OBJCOPY) $(OBJCOPYFLAGS) $< $@
 quiet_cmd_binfile = BINFILE $@
 cmd_binfile = $(SHELL) $(srctree)/tools/binfile $(BINFILE_FLAGS) $@ $(2)
 
+# gzip
+quiet_cmd_gzip = GZIP    $@
+cmd_gzip = \
+    cat $(real-prereqs) | gzip -n -f -9 > $@
+
 # Figure out what we need to build from the various variables
 # ===========================================================================
 
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 46049eac35..f45f19c391 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -78,13 +78,13 @@ obj-$(CONFIG_NEEDS_LIBELF) += libelf/
 obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
 
 CONF_FILE := $(if $(patsubst /%,,$(KCONFIG_CONFIG)),$(objtree)/)$(KCONFIG_CONFIG)
-$(obj)/config.gz: $(CONF_FILE)
-	gzip -n -c $< >$@
+$(obj)/config.gz: $(CONF_FILE) FORCE
+	$(call if_changed,gzip)
+
+targets += config.gz
 
 $(obj)/config_data.o: $(obj)/config.gz
 
 $(obj)/config_data.S: $(srctree)/tools/binfile FORCE
 	$(call if_changed,binfile,$(obj)/config.gz xen_config_data)
 targets += config_data.S
-
-clean-files := config.gz
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:39:00 2023
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 11/15] build: rename CFLAGS to XEN_CFLAGS in xen/Makefile
Date: Tue, 23 May 2023 17:38:07 +0100
Message-ID: <20230523163811.30792-12-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

This is a preparatory patch. A future patch will not even use
$(CFLAGS) to seed $(XEN_CFLAGS).
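A toy sketch of the end state (variable names match the patch; the flags
themselves are illustrative): XEN_CFLAGS is seeded once from CFLAGS and
then grown independently, so the Config.mk-provided CFLAGS is no longer
modified:

```shell
# Tiny makefile: XEN_CFLAGS := $(CFLAGS) copies the value once; later
# additions go only to XEN_CFLAGS, leaving CFLAGS untouched.
printf 'CFLAGS := -O2\nXEN_CFLAGS := $(CFLAGS)\nXEN_CFLAGS += -nostdinc\nshow:\n\t@echo "CFLAGS=$(CFLAGS) XEN_CFLAGS=$(XEN_CFLAGS)"\n' > /tmp/flags.mk
make -f /tmp/flags.mk show
# prints: CFLAGS=-O2 XEN_CFLAGS=-O2 -nostdinc
```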

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/Makefile           | 41 ++++++++++++++---------------
 xen/arch/arm/arch.mk   |  4 +--
 xen/arch/riscv/arch.mk |  4 +--
 xen/arch/x86/arch.mk   | 58 +++++++++++++++++++++---------------------
 4 files changed, 54 insertions(+), 53 deletions(-)

diff --git a/xen/Makefile b/xen/Makefile
index c4a83fca76..b3bffe8c6f 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -259,6 +259,7 @@ export KBUILD_DEFCONFIG := $(ARCH)_defconfig
 export XEN_TREEWIDE_CFLAGS := $(CFLAGS)
 
 XEN_AFLAGS =
+XEN_CFLAGS = $(CFLAGS)
 
 # CLANG_FLAGS needs to be calculated before calling Kconfig
 ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
@@ -284,7 +285,7 @@ CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
 endif
 
 CLANG_FLAGS += -Werror=unknown-warning-option
-CFLAGS += $(CLANG_FLAGS)
+XEN_CFLAGS += $(CLANG_FLAGS)
 export CLANG_FLAGS
 endif
 
@@ -293,7 +294,7 @@ ifeq ($(call ld-ver-build-id,$(LD)),n)
 XEN_LDFLAGS_BUILD_ID :=
 XEN_HAS_BUILD_ID := n
 else
-CFLAGS += -DBUILD_ID
+XEN_CFLAGS += -DBUILD_ID
 XEN_TREEWIDE_CFLAGS += -DBUILD_ID
 XEN_HAS_BUILD_ID := y
 XEN_LDFLAGS_BUILD_ID := --build-id=sha1
@@ -388,30 +389,30 @@ include/config/%.conf include/config/%.conf.cmd: $(KCONFIG_CONFIG)
 	$(Q)$(MAKE) $(build)=tools/kconfig syncconfig
 
 ifeq ($(CONFIG_DEBUG),y)
-CFLAGS += -O1
+XEN_CFLAGS += -O1
 else
-CFLAGS += -O2
+XEN_CFLAGS += -O2
 endif
 
 ifeq ($(CONFIG_FRAME_POINTER),y)
-CFLAGS += -fno-omit-frame-pointer
+XEN_CFLAGS += -fno-omit-frame-pointer
 else
-CFLAGS += -fomit-frame-pointer
+XEN_CFLAGS += -fomit-frame-pointer
 endif
 
 CFLAGS-$(CONFIG_CC_SPLIT_SECTIONS) += -ffunction-sections -fdata-sections
 
-CFLAGS += -nostdinc -fno-builtin -fno-common
-CFLAGS += -Werror -Wredundant-decls -Wno-pointer-arith
-$(call cc-option-add,CFLAGS,CC,-Wvla)
-CFLAGS += -pipe -D__XEN__ -include $(srctree)/include/xen/config.h
+XEN_CFLAGS += -nostdinc -fno-builtin -fno-common
+XEN_CFLAGS += -Werror -Wredundant-decls -Wno-pointer-arith
+$(call cc-option-add,XEN_CFLAGS,CC,-Wvla)
+XEN_CFLAGS += -pipe -D__XEN__ -include $(srctree)/include/xen/config.h
 CFLAGS-$(CONFIG_DEBUG_INFO) += -g
 
 ifneq ($(CONFIG_CC_IS_CLANG),y)
 # Clang doesn't understand this command line argument, and doesn't appear to
 # have a suitable alternative.  The resulting compiled binary does function,
 # but has an excessively large symbol table.
-CFLAGS += -Wa,--strip-local-absolute
+XEN_CFLAGS += -Wa,--strip-local-absolute
 endif
 
 XEN_AFLAGS += -D__ASSEMBLY__
@@ -420,14 +421,14 @@ $(call cc-option-add,XEN_AFLAGS,CC,-Wa$(comma)--noexecstack)
 
 LDFLAGS-$(call ld-option,--warn-rwx-segments) += --no-warn-rwx-segments
 
-CFLAGS += $(CFLAGS-y)
+XEN_CFLAGS += $(CFLAGS-y)
 # allow extra CFLAGS externally via EXTRA_CFLAGS_XEN_CORE
-CFLAGS += $(EXTRA_CFLAGS_XEN_CORE)
+XEN_CFLAGS += $(EXTRA_CFLAGS_XEN_CORE)
 
 # Most CFLAGS are safe for assembly files:
 #  -std=gnu{89,99} gets confused by #-prefixed end-of-line comments
 #  -flto makes no sense and annoys clang
-XEN_AFLAGS += $(filter-out -std=gnu% -flto,$(CFLAGS)) $(AFLAGS-y)
+XEN_AFLAGS += $(filter-out -std=gnu% -flto,$(XEN_CFLAGS)) $(AFLAGS-y)
 
 # LDFLAGS are only passed directly to $(LD)
 LDFLAGS += $(LDFLAGS_DIRECT) $(LDFLAGS-y)
@@ -439,16 +440,16 @@ CFLAGS_UBSAN :=
 endif
 
 ifeq ($(CONFIG_LTO),y)
-CFLAGS += -flto
+XEN_CFLAGS += -flto
 LDFLAGS-$(CONFIG_CC_IS_CLANG) += -plugin LLVMgold.so
 endif
 
 ifdef building_out_of_srctree
-    CFLAGS += -I$(objtree)/include
-    CFLAGS += -I$(objtree)/arch/$(TARGET_ARCH)/include
+    XEN_CFLAGS += -I$(objtree)/include
+    XEN_CFLAGS += -I$(objtree)/arch/$(TARGET_ARCH)/include
 endif
-CFLAGS += -I$(srctree)/include
-CFLAGS += -I$(srctree)/arch/$(TARGET_ARCH)/include
+XEN_CFLAGS += -I$(srctree)/include
+XEN_CFLAGS += -I$(srctree)/arch/$(TARGET_ARCH)/include
 
 # Note that link order matters!
 ALL_OBJS-y                := common/built_in.o
@@ -463,7 +464,7 @@ ALL_LIBS-y                := lib/lib.a
 include $(srctree)/arch/$(TARGET_ARCH)/arch.mk
 
 # define new variables to avoid the ones defined in Config.mk
-export XEN_CFLAGS := $(CFLAGS)
+export XEN_CFLAGS := $(XEN_CFLAGS)
 export XEN_AFLAGS := $(XEN_AFLAGS)
 export XEN_LDFLAGS := $(LDFLAGS)
 export CFLAGS_UBSAN
diff --git a/xen/arch/arm/arch.mk b/xen/arch/arm/arch.mk
index 58db76c4e1..300b8bf7ae 100644
--- a/xen/arch/arm/arch.mk
+++ b/xen/arch/arm/arch.mk
@@ -1,8 +1,8 @@
 ########################################
 # arm-specific definitions
 
-$(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
-$(call cc-option-add,CFLAGS,CC,-Wnested-externs)
+$(call cc-options-add,XEN_CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
+$(call cc-option-add,XEN_CFLAGS,CC,-Wnested-externs)
 
 # Prevent floating-point variables from creeping into Xen.
 CFLAGS-$(CONFIG_ARM_32) += -msoft-float
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
index 7448f759b4..aadf373ce8 100644
--- a/xen/arch/riscv/arch.mk
+++ b/xen/arch/riscv/arch.mk
@@ -1,7 +1,7 @@
 ########################################
 # RISCV-specific definitions
 
-$(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
+$(call cc-options-add,XEN_CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 
 CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
 
@@ -12,7 +12,7 @@ riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
 # into the upper half _or_ the lower half of the address space.
 # -mcmodel=medlow would force Xen into the lower half.
 
-CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+XEN_CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
 
 # TODO: Drop override when more of the build is working
 override ALL_OBJS-y = arch/$(TARGET_ARCH)/built_in.o
diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk
index 13ec88a628..5df3cf6bc3 100644
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -3,42 +3,42 @@
 
 export XEN_IMG_OFFSET := 0x200000
 
-CFLAGS += -I$(srctree)/arch/x86/include/asm/mach-generic
-CFLAGS += -I$(srctree)/arch/x86/include/asm/mach-default
-CFLAGS += -DXEN_IMG_OFFSET=$(XEN_IMG_OFFSET)
+XEN_CFLAGS += -I$(srctree)/arch/x86/include/asm/mach-generic
+XEN_CFLAGS += -I$(srctree)/arch/x86/include/asm/mach-default
+XEN_CFLAGS += -DXEN_IMG_OFFSET=$(XEN_IMG_OFFSET)
 
 # Prevent floating-point variables from creeping into Xen.
-CFLAGS += -msoft-float
-
-$(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
-$(call cc-option-add,CFLAGS,CC,-Wnested-externs)
-$(call as-option-add,CFLAGS,CC,"vmcall",-DHAVE_AS_VMX)
-$(call as-option-add,CFLAGS,CC,"crc32 %eax$(comma)%eax",-DHAVE_AS_SSE4_2)
-$(call as-option-add,CFLAGS,CC,"invept (%rax)$(comma)%rax",-DHAVE_AS_EPT)
-$(call as-option-add,CFLAGS,CC,"rdrand %eax",-DHAVE_AS_RDRAND)
-$(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
-$(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
-$(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
-$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
-$(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
-$(call as-option-add,CFLAGS,CC,".equ \"x\"$(comma)1",-DHAVE_AS_QUOTED_SYM)
-$(call as-option-add,CFLAGS,CC,"invpcid (%rax)$(comma)%rax",-DHAVE_AS_INVPCID)
-$(call as-option-add,CFLAGS,CC,"movdiri %rax$(comma)(%rax)",-DHAVE_AS_MOVDIR)
-$(call as-option-add,CFLAGS,CC,"enqcmd (%rax)$(comma)%rax",-DHAVE_AS_ENQCMD)
+XEN_CFLAGS += -msoft-float
+
+$(call cc-options-add,XEN_CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
+$(call cc-option-add,XEN_CFLAGS,CC,-Wnested-externs)
+$(call as-option-add,XEN_CFLAGS,CC,"vmcall",-DHAVE_AS_VMX)
+$(call as-option-add,XEN_CFLAGS,CC,"crc32 %eax$(comma)%eax",-DHAVE_AS_SSE4_2)
+$(call as-option-add,XEN_CFLAGS,CC,"invept (%rax)$(comma)%rax",-DHAVE_AS_EPT)
+$(call as-option-add,XEN_CFLAGS,CC,"rdrand %eax",-DHAVE_AS_RDRAND)
+$(call as-option-add,XEN_CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
+$(call as-option-add,XEN_CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
+$(call as-option-add,XEN_CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
+$(call as-option-add,XEN_CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
+$(call as-option-add,XEN_CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
+$(call as-option-add,XEN_CFLAGS,CC,".equ \"x\"$(comma)1",-DHAVE_AS_QUOTED_SYM)
+$(call as-option-add,XEN_CFLAGS,CC,"invpcid (%rax)$(comma)%rax",-DHAVE_AS_INVPCID)
+$(call as-option-add,XEN_CFLAGS,CC,"movdiri %rax$(comma)(%rax)",-DHAVE_AS_MOVDIR)
+$(call as-option-add,XEN_CFLAGS,CC,"enqcmd (%rax)$(comma)%rax",-DHAVE_AS_ENQCMD)
 
 # GAS's idea of true is -1.  Clang's idea is 1.
-$(call as-option-add,CFLAGS,CC,\
+$(call as-option-add,XEN_CFLAGS,CC,\
     ".if ((1 > 0) < 0); .error \"\";.endif",,-DHAVE_AS_NEGATIVE_TRUE)
 
 # Check to see whether the assembler supports the .nops directive.
-$(call as-option-add,CFLAGS,CC,\
+$(call as-option-add,XEN_CFLAGS,CC,\
     ".L1: .L2: .nops (.L2 - .L1)$(comma)9",-DHAVE_AS_NOPS_DIRECTIVE)
 
-CFLAGS += -mno-red-zone -fpic
+XEN_CFLAGS += -mno-red-zone -fpic
 
 # Xen doesn't use MMX or SSE internally.  If the compiler supports it, also skip
 # the SSE setup for variadic function calls.
-CFLAGS += -mno-mmx -mno-sse $(call cc-option,$(CC),-mskip-rax-setup)
+XEN_CFLAGS += -mno-mmx -mno-sse $(call cc-option,$(CC),-mskip-rax-setup)
 
 ifeq ($(CONFIG_INDIRECT_THUNK),y)
 # Compile with gcc thunk-extern, indirect-branch-register if available.
@@ -54,10 +54,10 @@ ifdef CONFIG_XEN_IBT
 # Force -fno-jump-tables to work around
 #   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104816
 #   https://github.com/llvm/llvm-project/issues/54247
-CFLAGS += -fcf-protection=branch -mmanual-endbr -fno-jump-tables
-$(call cc-option-add,CFLAGS,CC,-fcf-check-attribute=no)
+XEN_CFLAGS += -fcf-protection=branch -mmanual-endbr -fno-jump-tables
+$(call cc-option-add,XEN_CFLAGS,CC,-fcf-check-attribute=no)
 else
-$(call cc-option-add,CFLAGS,CC,-fcf-protection=none)
+$(call cc-option-add,XEN_CFLAGS,CC,-fcf-protection=none)
 endif
 
 # If supported by the compiler, reduce stack alignment to 8 bytes. But allow
@@ -91,7 +91,7 @@ efi-check := arch/x86/efi/check
 $(shell mkdir -p $(dir $(efi-check)))
 
 # Check if the compiler supports the MS ABI.
-XEN_BUILD_EFI := $(call if-success,$(CC) $(CFLAGS) -c $(srctree)/$(efi-check).c -o $(efi-check).o,y)
+XEN_BUILD_EFI := $(call if-success,$(CC) $(XEN_CFLAGS) -c $(srctree)/$(efi-check).c -o $(efi-check).o,y)
 
 # Check if the linker supports PE.
 EFI_LDFLAGS := $(patsubst -m%,-mi386pep,$(LDFLAGS)) --subsystem=10
@@ -129,4 +129,4 @@ export EFI_LDFLAGS
 endif
 
 # Set up the assembler include path properly for older toolchains.
-CFLAGS += -Wa,-I$(objtree)/include -Wa,-I$(srctree)/include
+XEN_CFLAGS += -Wa,-I$(objtree)/include -Wa,-I$(srctree)/include
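
For reference, each as-option-add line above amounts to feeding the
instruction to the toolchain and defining the HAVE_AS_* macro only when it
assembles. A shell sketch of that probe -- the `as_option` helper and plain
`cc` here are illustrative stand-ins for the macro, which drives $(CC) with
the flags being accumulated:

```shell
# Hypothetical standalone version of the as-option-add probe: try to
# assemble the instruction; print the define only when it is accepted.
as_option() {
    insn="$1"; define="$2"
    if echo "$insn" | cc -x assembler -c -o /dev/null - 2>/dev/null; then
        echo "$define"
    fi
}
```

e.g. `as_option "rdseed %eax" -DHAVE_AS_RDSEED` prints the define only on
toolchains whose assembler knows the instruction.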
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:39:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:39:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538572.838685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V2L-0002YH-Fp; Tue, 23 May 2023 16:39:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538572.838685; Tue, 23 May 2023 16:39:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V2L-0002Y8-4v; Tue, 23 May 2023 16:39:01 +0000
Received: by outflank-mailman (input) for mailman id 538572;
 Tue, 23 May 2023 16:39:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V2J-0006Dr-UY
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:38:59 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4f8cbac7-f988-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 18:38:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f8cbac7-f988-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859937;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=/u1bPN10AWgMTCILIOBzV6UQMmmPOJyKlHt8eO7R7EE=;
  b=P7sTDJPWEL+hYfGJusW8tGz0ldynvxlDXQZXe+F0bgr3t+tLmJJlvOT4
   5E1MRDAPGijuYt/Z4y5XbLtkCzHf+lrikQ47Zq9Gp/T4b1bwZEXQDrt2J
   6gnamkmMExxLH1e7DilYYS7m5DgoZ/AAoAKsCjbs//ykeQsiwuv4n6vYE
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 112568347
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:A5SOsqiH3MG+/aaBIWpKvC/WX161UxAKZh0ujC45NGQN5FlHY01je
 htvW2jUPPiNNGvwLt5ybYW+9UID7ZPcytU1TAA6q3w1Qysb9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsx+qyq0N8klgZmP6sT4QWFzyN94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQWEx8mPzC9gNmPyYigEPlPo+Ukd9D0adZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJYw1MYH49tL7Aan3XejtEqFWTtOwv7nLa1gBZ27nxKtvFPNeNQK25m27B/
 ziboTSiXk5y2Nq3ziKGsW2FoL/z2jrHXrgvJbPh1fQ3jwjGroAUIEJPDgbqyRWjsWa8RtZeJ
 ko86ico668o+ySDTNPwQhm5q36spQMHVpxbFOhSwB6J4rrZ5UCeHGdsZiVadNUsucsyRDor/
 lyEhdXkAXpoqrL9YWKQ8PKYoC2/PQARLHQefmkUQA0d+d7hrYovyBXVQb5e/LWd14OvX2uqm
 nbT8XZ43u9I5SIW60ml1V78rBn9hqbOdTc83B6NTkGAzwQifZHwMuRE9mPnxfpHKY+YSHyIs
 34Fh9WS4YgyMH2dqMCeaL5TRe/0vp5pJBWZ2AcyRMd5q1xB7lb5JehtDCdCyFCF2yruURvge
 wfttAxY//e/11P6PPYsM+pd5ynHpJUM9OgJtNiONrKigbArLmdrGR2CgmbOt10BaGB2zckC1
 W6zKK5A90oyB6V91yaRTOwAy7ItzS1W7TqNFcykn0z7iuvHPCL9pVI53LymN7pR0U95iF+Nr
 4Y32zWikH2zr9ESkgGIqNVOfDjm3FAwBIzsqtw/S9Nv1jFOQTl7Y9eImONJRmCQt/gN/gs+1
 i3nCxAwJZuWrSGvFDhmnVg4MOm+Askn/SNnVcHuVH7xs0UejU+UxP93X/MKkXMProSPEdYco
 yE5Rvi9
IronPort-HdrOrdr: A9a23:OhteVq0nZ2XJDKANE2fIDAqjBNEkLtp133Aq2lEZdPU1SKylfq
 WV98jzuiWYtN98YhsdcLO7WZVoP0myyXcd2+B4AV7IZmXbUQWTQr1f0Q==
X-Talos-CUID: 9a23:H1KEWW/5dh3YhMKwz02Vv0AvNet0UkPa907ZGVTiGUVMabrOUEDFrQ==
X-Talos-MUID: 9a23:dsWiigkPbcARgL6gz/LjdnplLMQr/I2IWHxKiKkAgfS1O3VTMWqS2WE=
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="112568347"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 14/15] Config.mk: move $(cc-option, ) to config/compiler-testing.mk
Date: Tue, 23 May 2023 17:38:10 +0100
Message-ID: <20230523163811.30792-15-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

In xen/, it isn't necessary to include Config.mk in every Makefile in
subdirectories, as nearly all necessary variables should be calculated
in xen/Makefile. But some Makefiles make use of the macro $(cc-option,),
which is only available in Config.mk.

Extract $(cc-option,) from Config.mk so it can be used without
including Config.mk, and thus without recalculating CFLAGS that would
be ignored anyway.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 Config.mk                  | 27 +--------------------------
 config/compiler-testing.mk | 25 +++++++++++++++++++++++++
 xen/Rules.mk               |  1 +
 3 files changed, 27 insertions(+), 26 deletions(-)
 create mode 100644 config/compiler-testing.mk

diff --git a/Config.mk b/Config.mk
index 27f48f654a..ebc8d0dfdd 100644
--- a/Config.mk
+++ b/Config.mk
@@ -18,6 +18,7 @@ realpath = $(wildcard $(foreach file,$(1),$(shell cd -P $(dir $(file)) && echo "
 or       = $(if $(strip $(1)),$(1),$(if $(strip $(2)),$(2),$(if $(strip $(3)),$(3),$(if $(strip $(4)),$(4)))))
 
 -include $(XEN_ROOT)/.config
+include $(XEN_ROOT)/config/compiler-testing.mk
 
 XEN_COMPILE_ARCH    ?= $(shell uname -m | sed -e s/i.86/x86_32/ \
                          -e s/i86pc/x86_32/ -e s/amd64/x86_64/ \
@@ -79,32 +80,6 @@ PYTHON_PREFIX_ARG ?= --prefix="$(prefix)"
 # to permit the user to set PYTHON_PREFIX_ARG to '' to workaround this bug:
 #  https://bugs.launchpad.net/ubuntu/+bug/362570
 
-# cc-option: Check if compiler supports first option, else fall back to second.
-#
-# This is complicated by the fact that unrecognised -Wno-* options:
-#   (a) are ignored unless the compilation emits a warning; and
-#   (b) even then produce a warning rather than an error
-# To handle this we do a test compile, passing the option-under-test, on a code
-# fragment that will always produce a warning (integer assigned to pointer).
-# We then grep for the option-under-test in the compiler's output, the presence
-# of which would indicate an "unrecognized command-line option" warning/error.
-#
-# Usage: cflags-y += $(call cc-option,$(CC),-march=winchip-c6,-march=i586)
-cc-option = $(shell if test -z "`echo 'void*p=1;' | \
-              $(1) $(2) -c -o /dev/null -x c - 2>&1 | grep -- $(2:-Wa$(comma)%=%) -`"; \
-              then echo "$(2)"; else echo "$(3)"; fi ;)
-
-# cc-option-add: Add an option to compilation flags, but only if supported.
-# Usage: $(call cc-option-add CFLAGS,CC,-march=winchip-c6)
-cc-option-add = $(eval $(call cc-option-add-closure,$(1),$(2),$(3)))
-define cc-option-add-closure
-    ifneq ($$(call cc-option,$$($(2)),$(3),n),n)
-        $(1) += $(3)
-    endif
-endef
-
-cc-options-add = $(foreach o,$(3),$(call cc-option-add,$(1),$(2),$(o)))
-
 # cc-ver: Check compiler against the version requirement. Return boolean 'y'/'n'.
 # Usage: ifeq ($(call cc-ver,$(CC),ge,0x030400),y)
 cc-ver = $(shell if [ $$((`$(1) -dumpversion | awk -F. \
diff --git a/config/compiler-testing.mk b/config/compiler-testing.mk
new file mode 100644
index 0000000000..9677f91abe
--- /dev/null
+++ b/config/compiler-testing.mk
@@ -0,0 +1,25 @@
+# cc-option: Check if compiler supports first option, else fall back to second.
+#
+# This is complicated by the fact that unrecognised -Wno-* options:
+#   (a) are ignored unless the compilation emits a warning; and
+#   (b) even then produce a warning rather than an error
+# To handle this we do a test compile, passing the option-under-test, on a code
+# fragment that will always produce a warning (integer assigned to pointer).
+# We then grep for the option-under-test in the compiler's output, the presence
+# of which would indicate an "unrecognized command-line option" warning/error.
+#
+# Usage: cflags-y += $(call cc-option,$(CC),-march=winchip-c6,-march=i586)
+cc-option = $(shell if test -z "`echo 'void*p=1;' | \
+              $(1) $(2) -c -o /dev/null -x c - 2>&1 | grep -- $(2:-Wa$(comma)%=%) -`"; \
+              then echo "$(2)"; else echo "$(3)"; fi ;)
+
+# cc-option-add: Add an option to compilation flags, but only if supported.
+# Usage: $(call cc-option-add,CFLAGS,CC,-march=winchip-c6)
+cc-option-add = $(eval $(call cc-option-add-closure,$(1),$(2),$(3)))
+define cc-option-add-closure
+    ifneq ($$(call cc-option,$$($(2)),$(3),n),n)
+        $(1) += $(3)
+    endif
+endef
+
+cc-options-add = $(foreach o,$(3),$(call cc-option-add,$(1),$(2),$(o)))
diff --git a/xen/Rules.mk b/xen/Rules.mk
index 68b10ca5ef..8177d405c3 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -19,6 +19,7 @@ __build:
 
 include $(XEN_ROOT)/Config.mk
 include $(srctree)/scripts/Kbuild.include
+include $(XEN_ROOT)/config/compiler-testing.mk
 
 # Initialise some variables
 obj-y :=
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:44:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:44:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538594.838694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V7V-0005zu-8e; Tue, 23 May 2023 16:44:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538594.838694; Tue, 23 May 2023 16:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1V7V-0005zn-60; Tue, 23 May 2023 16:44:21 +0000
Received: by outflank-mailman (input) for mailman id 538594;
 Tue, 23 May 2023 16:44:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V7U-0005zh-7F
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:44:20 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e65b4f3-f989-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 18:44:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e65b4f3-f989-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684860257;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=EgRf1ZV6Y/Xg3Pz/iQe97WoKAP6ERDrGxXFTDkm8Sr8=;
  b=ciFS33VqGqCqSmngsr0aeIhUf4DU6Ctq1tKfYf5O2h8nr5dUb3puONeT
   Y7gTajoKUyKis+4ZrCY9xdaz2HKHLAesSCIEvoMU45L2qAVSx4ARBaWqP
   XM3b1wk8y9IMOw9OJJNalgmYowyY0dD4FTCIe2Tgp10UsmwLXqSkkR2OK
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110113515
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:gGqTkqPLcdpTl+HvrR2Tl8FynXyQoLVcMsEvi/4bfWQNrUok3jwCn
 GpNUGrSP/mLajb8KNBzYIXno0MEsZWGytVkTQto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGjxSs/rrRC9H5qyo42tF5AdmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0vpXBERz1
 fAyEx42Vwqiob2zwrKKVMA506zPLOGzVG8eknRpzDWfBvc6W5HTBa7N4Le03h9p2JoIR6yHI
 ZNEN3w2Nk+ojx5nYz/7DLo3mvuogX/uNSVVsluPqYI84nTJzRw327/oWDbQUoXTHZwMxRvB+
 woq+Uz6Dk5LL4bPlQGV82CAg+7kwQnUBdI7QejQGvlC3wTImz175ActfUS/iem0jAi5Qd03A
 0UM9zAnt6Qa6E2hRd67VBq9yFa8swIRQZxwFPw38ymE0K+S6AGcbkAGUzpAZdoOpMIwAzsw2
 Tehj97vQDBirrCRYXac7auP6yO/PzAPKm0PbjNCShEKi+QPu6lq0EiJFIw6Vvfo0JusQ2qYL
 y22QDYWm5UWqPMr24+A02v+mROK+Z3LTj8rz1CCNo661T9RaImgbo2uzFHU6/dcMYqUJmW8U
 Gg4d9u2t75XU8zU/MCZaKBURezyua7ZWNHJqQQ3d6TN4QhB7JJKkWp4xDhlbHlkPc8fEdMCS
 B+C4FgBjHO/0ZbDUEOWX25TI55ypUQDPY6/PhwxUjapSsYZSeN/1HsyDXN8Jki0+KTWrYkxO
 I2AbeGnBmsABKJswVKeHrlNjeB7nnxllDqLGfgXKihLNpLHPhaopUotagPSPojVEovfyOkqz
 zqvH5TTkEgOOAEPSiLW7ZQSPTg3EJTPPriv85Y/XrfacmJb9JQJV6e5LUUJJ9Y0wMy4V47go
 hmAZ6Ov4AGn3yyYdl3aOywLhXGGdc8XkE/X9BcEZT6As0XPq671hEvDX/PbpYUaydE=
IronPort-HdrOrdr: A9a23:YsrY8aAePEZ28CXlHenP55DYdb4zR+YMi2TDtnoQdfUxSKelfq
 +V8cjzuSWftN9zYhAdcK67V5VoKEm0naKdirN8AV7NZmfbhFc=
X-Talos-CUID: =?us-ascii?q?9a23=3AyQjD2GjmdKaT33HN57SvCzeYmDJubl3Y6UrwL2W?=
 =?us-ascii?q?DJ11QUpyzFlqS+JJ6up87?=
X-Talos-MUID: =?us-ascii?q?9a23=3A24nFeg1/c2XVro5OxoSylpddGjUj4KjxWVtXwck?=
 =?us-ascii?q?9q8iWGSlxEGu3o2nna9py?=
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="110113515"
Date: Tue, 23 May 2023 17:44:07 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Peter Hoyes <Peter.Hoyes@arm.com>, "Wei
 Liu" <wl@xen.org>
Subject: Re: [XEN PATCH] tools/xendomains: Don't auto save/restore/migrate on
 Arm*
Message-ID: <97f676c3-65da-4361-835c-5aa91b99db86@perard>
References: <d52c31c7-57b1-41c1-af35-a9b847683c0a@perard>
 <20230519162454.50337-1-anthony.perard@citrix.com>
 <b18f7dcb-b790-2571-93e1-aa9a9132276a@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <b18f7dcb-b790-2571-93e1-aa9a9132276a@xen.org>

On Mon, May 22, 2023 at 08:34:52PM +0100, Julien Grall wrote:
> On 19/05/2023 17:24, Anthony PERARD wrote:
> > diff --git a/tools/configure.ac b/tools/configure.ac
> > index 3cccf41960..0f0983f6b7 100644
> > --- a/tools/configure.ac
> > +++ b/tools/configure.ac
> > @@ -517,4 +517,17 @@ AS_IF([test "x$pvshim" = "xy"], [
> >   AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
> > +dnl Disable autosave of domain in xendomains on shutdown
> > +dnl due to missing support. This should be in sync with
> > +dnl LIBXL_HAVE_NO_SUSPEND_RESUME in libxl.h
> 
> Quite likely, a developer adding a new arch will first look at the
> definition of LIBXL_HAVE_NO_SUSPEND_RESUME. So it would be good if we had a
> similar message there to remind them to update this case. That said...

Probably true, I'll look at adding a comment there.

> > +case "$host_cpu" in
> > +    arm*|aarch64)
> > +        XENDOMAINS_SAVE_DIR=
> > +        ;;
> > +    *)
> > +        XENDOMAINS_SAVE_DIR="$XEN_LIB_DIR/save"
> > +        ;;
> > +esac
> 
> ... I am wondering if the switch should be the other way around. IOW, the
> default should be no support for suspend/resume. This will make it easier
> to add support for RISC-V (I suspect support for suspend/resume will not be
> in the first version) or any other new arch.

Sounds good, I'll look at that.

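For illustration only, the inverted switch could look something like this
(a sketch: the x86 pattern is an assumption, and the function wrapper is
just for demonstration -- configure.ac would use the bare case statement):

```shell
# Default to "no suspend/resume support" (empty save dir) and opt
# known-supported architectures in, per Julien's suggestion.
set_save_dir() {
    case "$1" in
        x86_64|i?86)
            XENDOMAINS_SAVE_DIR="$XEN_LIB_DIR/save"
            ;;
        *)
            XENDOMAINS_SAVE_DIR=
            ;;
    esac
}
```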
Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 23 16:47:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:47:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538601.838711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VAd-0006ft-3j; Tue, 23 May 2023 16:47:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538601.838711; Tue, 23 May 2023 16:47:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VAc-0006de-Sg; Tue, 23 May 2023 16:47:34 +0000
Received: by outflank-mailman (input) for mailman id 538601;
 Tue, 23 May 2023 16:47:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V2Q-0006Dr-EE
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:39:06 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5365c366-f988-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 18:39:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5365c366-f988-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859944;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=CUr6Ia8hka4MDdYyEkCHkmCAcxvFP9VRnjNhVX7Kyj0=;
  b=QrX94MT2nLvMK7ov1dkRAEoc7v/E8vKdoyPHO/DszDViYpBAmSD6Jnx2
   f8AzT8AZAVMeNCO73ZuztKU1D+10yFDHPwjlIOZRyuzDPY+JhihJ/tBbU
   roLvkU/BUpSneGTVrFzSHhEWn10AsxXKZYkt9nL0o+oSTXOQOflXiJa3c
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 108859061
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:D/UT8aIYpTbgJanbFE+R4pUlxSXFcZb7ZxGr2PjKsXjdYENS1DMPm
 mZKCm7Qb62PZGXxfohyPNy2oUpX68fUy4JmSAtlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpKrfrbwP9TlK6q4mhA4wZlPaojUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c5HX2xi/
 uEJFgscMEGmitOsm7uXds1V05FLwMnDZOvzu1llxDDdS/0nXYrCU+PB4towMDUY354UW6yEP
 oxANGQpNU6bC/FMEg5/5JYWleG0hn75YntApUicv6Yf6GnP1g1hlrPqNbI5f/TTHZUFwxfA+
 DuuE2LREChKNMSNlyi86V2zrLXGnir0c78NLejtnhJtqALKnTFCYPEMbnOguuWwgEO6X9NZK
 mQX9zAooKx081akJvH/Qhm5rXisrhMaHd1KHIUS9wWl2qfSpQGDCQAsTDRMddgnv88eXiEx2
 xmCmNaBLSxitviZRGyQ8p+QrCiuIm4FIGkafygGQAAZpd75r+kOYgnnF4g5VvTv15usRG+2m
 mrRxMQju1kNpf5V2omw4EH5uCPy973EfxRu7ynrelvwu2uVe7WZT4Cv7FHa69NJI4CYUkSNs
 RA4piSO0AwdJcrTzXLQGY3hCJnsvq/Ya2OE3TaDCrF7r1yQF2ifkZe8Cd2UDGNgKY46dDDge
 yc/UisBtcYIbBNGgUKaCr9d6vjGL4C6TbwJtdiONLKih6SdkyfZlByCnWbKgwjQfLEEyMnTw
 6uzf8e2Fmo9Aq961jewTOp1+eZ1lnxhlTuPHsGil0jPPV+iiJm9EO1tDbdzRrphsPPsTPv9q
 L6zyPdmOz0ACbajM0E7AKYYLEwQLGhTOK0aX/d/L7bZSiI/QTFJNhMk6e95E2CTt/gPx7igE
 7DUchMw9WcTclWccV/bNS87OOKzNXu9xFpiVRER0Z+T8yBLSe6SAG03J/PboZFPGDRf8MNJ
IronPort-HdrOrdr: A9a23:xHAr66iFYpiKja7gCBVG0uA6YXBQXuIji2hC6mlwRA09TySZ//
 rBoB19726TtN9xYgBZpTnuAsm9qB/nmaKdpLNhWItKPzOW31dATrsSjrcKqgeIc0aVm9K1l5
 0QF5SWYOeAdGSS5vya3ODXKbkdKaG8gcKVuds=
X-Talos-CUID: 9a23:LWJsim4kRj7cSE9xI9ssxBAZNss4Y0Pm3SmBBkKJEz1gZIGxcArF
X-Talos-MUID: 9a23:rxKyPgjR7Zoc9PX/l/qAdcMpJv4447y8VlI0iroZkZONNiV/EAqCtWHi
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="108859061"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 10/15] build: rename $(AFLAGS) to $(XEN_AFLAGS)
Date: Tue, 23 May 2023 17:38:06 +0100
Message-ID: <20230523163811.30792-11-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

We don't want the AFLAGS from the environment: they are usually meant
for building user-space applications, not the hypervisor.

Config.mk doesn't provide any $(AFLAGS), so we can start a fresh
$(XEN_AFLAGS).

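The difference is easy to demonstrate with a throwaway makefile
(hypothetical example): `+=` on a name present in the environment inherits
its value, while an explicit empty assignment starts fresh.

```shell
# Write a two-variable makefile: AFLAGS appends to whatever the
# environment supplies; XEN_AFLAGS is reset first, so it starts empty.
printf 'AFLAGS += -D__ASSEMBLY__\nXEN_AFLAGS :=\nXEN_AFLAGS += -D__ASSEMBLY__\ndemo:\n\t@echo "AFLAGS=$(AFLAGS)"\n\t@echo "XEN_AFLAGS=$(XEN_AFLAGS)"\n' \
    > /tmp/aflags-demo.mk
# AFLAGS=-g from the environment leaks into the first variable only.
AFLAGS=-g make -s -f /tmp/aflags-demo.mk demo
```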
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/Makefile         | 10 ++++++----
 xen/arch/x86/arch.mk |  2 +-
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/xen/Makefile b/xen/Makefile
index 127c0e40b5..c4a83fca76 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -258,6 +258,8 @@ export KBUILD_DEFCONFIG := $(ARCH)_defconfig
 # reparsing Config.mk by e.g. arch/x86/boot/.
 export XEN_TREEWIDE_CFLAGS := $(CFLAGS)
 
+XEN_AFLAGS =
+
 # CLANG_FLAGS needs to be calculated before calling Kconfig
 ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
 CLANG_FLAGS :=
@@ -412,9 +414,9 @@ ifneq ($(CONFIG_CC_IS_CLANG),y)
 CFLAGS += -Wa,--strip-local-absolute
 endif
 
-AFLAGS += -D__ASSEMBLY__
+XEN_AFLAGS += -D__ASSEMBLY__
 
-$(call cc-option-add,AFLAGS,CC,-Wa$(comma)--noexecstack)
+$(call cc-option-add,XEN_AFLAGS,CC,-Wa$(comma)--noexecstack)
 
 LDFLAGS-$(call ld-option,--warn-rwx-segments) += --no-warn-rwx-segments
 
@@ -425,7 +427,7 @@ CFLAGS += $(EXTRA_CFLAGS_XEN_CORE)
 # Most CFLAGS are safe for assembly files:
 #  -std=gnu{89,99} gets confused by #-prefixed end-of-line comments
 #  -flto makes no sense and annoys clang
-AFLAGS += $(filter-out -std=gnu% -flto,$(CFLAGS)) $(AFLAGS-y)
+XEN_AFLAGS += $(filter-out -std=gnu% -flto,$(CFLAGS)) $(AFLAGS-y)
 
 # LDFLAGS are only passed directly to $(LD)
 LDFLAGS += $(LDFLAGS_DIRECT) $(LDFLAGS-y)
@@ -462,7 +464,7 @@ include $(srctree)/arch/$(TARGET_ARCH)/arch.mk
 
 # define new variables to avoid the ones defined in Config.mk
 export XEN_CFLAGS := $(CFLAGS)
-export XEN_AFLAGS := $(AFLAGS)
+export XEN_AFLAGS := $(XEN_AFLAGS)
 export XEN_LDFLAGS := $(LDFLAGS)
 export CFLAGS_UBSAN
 
diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk
index 7b5be9fe46..13ec88a628 100644
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -80,7 +80,7 @@ ifeq ($(CONFIG_LD_IS_GNU),y)
 AFLAGS-$(call ld-option,--print-output-format) += -DHAVE_LD_SORT_BY_INIT_PRIORITY
 else
 # Assume all versions of LLD support this.
-AFLAGS += -DHAVE_LD_SORT_BY_INIT_PRIORITY
+XEN_AFLAGS += -DHAVE_LD_SORT_BY_INIT_PRIORITY
 endif
 
 ifneq ($(CONFIG_PV_SHIM_EXCLUSIVE),y)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:47:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:47:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538600.838706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VAc-0006d5-RJ; Tue, 23 May 2023 16:47:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538600.838706; Tue, 23 May 2023 16:47:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VAc-0006cy-LI; Tue, 23 May 2023 16:47:34 +0000
Received: by outflank-mailman (input) for mailman id 538600;
 Tue, 23 May 2023 16:47:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V2L-0006Dq-Jd
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:39:01 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 50b7961c-f988-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 18:38:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50b7961c-f988-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859939;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=FFiqj1ESYfIyZ3l/pGkAtoeGdSC6hrZ1HgE4QjQoKLM=;
  b=hMONPKQOWRfooN7SyTyrio+zS/vcRROvZLp181GlE5D33I+65KK2C7ej
   +KemFwvr0142Q2QZ04hlW+oHHrWW4ztf/4NdUzjgCo2cnEJHvjHQQSMIp
   qohS/yv96aFPorMNXf9MrGYx1akmPWjPBixvY2n/1hPRrQraovA3lFhQd
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 112568352
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:4tWeS6heu9iS9JxuHAdsVXmFX161UxAKZh0ujC45NGQN5FlHY01je
 htvXjuAb/aPa2X9eo1xOY2090gO6JPcmtNkHFQ6q39gFC8b9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsx+qyq0N8klgZmP6sT4QWFzyN94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQGEBQ/Szuqp9if/+q4SfRwuPUGNJnSadZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJYw1MYH49tL7Aan3XejtEqFWTtOwv7nLa1gBZ27nxKtvFPNeNQK25m27B/
 ziboTSiX0ty2Nq322uez2qhmez0pAjUQoM8GoKkpsNxjwjGroAUIEJPDgbqyRWjsWa8RtZeJ
 ko86ico668o+ySDTNPwQhm5q36spQMHVpxbFOhSwB6J4rrZ5UCeHGdsZiVadNUsucsyRDor/
 lyEhdXkAXpoqrL9YWKQ8PKYoC2/PQARLHQefmkUQA0d+d7hrYovyBXVQb5e/LWd14OvX2uqm
 nbT8XZ43u9I5SIW60ml1U7/pwCJjbHpdCcKvgbUQ36b3yZhNLfwMuRE9mPnxfpHKY+YSHyIs
 34Fh9WS4YgyMH2dqMCeaL5TRe/0vp5pJBWZ2AcyRMd5q1xB7lb5JehtDCdCyFCF2yruURvge
 wfttAxY//e/11P6PPYsM+pd5ynHpJUM9OgJtNiONrKigbArLmdrGR2CgmbOt10BaGB2zckC1
 W6zKK5A90oyB6V91yaRTOwAy7ItzS1W7TqNFcykn0z7iuvHPCL9pVI53LymN7pR0U95iF+Nr
 4Y32zWikH2zr9ESkgGIqNVOfDjm3FAwBIzsqtw/S9Nv1jFOQTl7Y9eImONJRmCQt/gN/gs+1
 i3nCxAwJZuWrSGvFDhmnVg4MOm+Askn/SNnVcHuVH7xs0UejU+UxP93X/MKkXMProSPEdYco
 yE5Rvi9
IronPort-HdrOrdr: A9a23:IP5oP6h9NcmKU0Mfu0DocS4MvXBQXrkji2hC6mlwRA09TyX4ra
 CTdZEgviMc5wx9ZJhNo7q90cq7IE80i6Qb3WB5B97LYOCMggeVxe9Zg7ff/w==
X-Talos-CUID: 9a23:8Gg8sWwFN2FrOH2iILjFBgUKIuQXV1b07E35HBe7Cnt1doywUQ+prfY=
X-Talos-MUID: =?us-ascii?q?9a23=3AYUm1dgz3etzLgAZcjWMAOCwuFAyaqLb/E30VnIo?=
 =?us-ascii?q?FgPCvGjZxFTmikGSKWYByfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="112568352"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH 15/15] build: remove Config.mk include from Rules.mk
Date: Tue, 23 May 2023 17:38:11 +0100
Message-ID: <20230523163811.30792-16-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Everything needed to build the hypervisor should already be configured
by "xen/Makefile", thus Config.mk shouldn't be needed.

Also, Config.mk keeps testing whether CC supports certain CFLAGS, but the
result of those tests is not used at this stage, so the build is slowed
down unnecessarily.

Likewise, GCC is checked to be at least version 4.2 on entering every
sub-directory, so by this stage the check has already run countless
times.

We only need to export a few more configuration variables, add some
variables to Kbuild.include, and provide macro fallbacks for GNU Make
older than 3.81. (The `or` fallback is added just in case; it's currently
only used in xen/Makefile, which includes Config.mk and thus already has
the fallback.)

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    Benefit of removing Config.mk:
        A rebuild on my computer is about 1 second faster overall,
        saves maybe 3 seconds of user time, and leaves the
        system less loaded.

 xen/Makefile               | 4 ++++
 xen/Rules.mk               | 1 -
 xen/scripts/Kbuild.include | 7 +++++++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/xen/Makefile b/xen/Makefile
index 4dc3acf2a6..9af7223c66 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -246,10 +246,14 @@ export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
                             sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
                                 -e s'/riscv.*/riscv/g')
 
+export XEN_COMPILE_ARCH XEN_TARGET_ARCH
 export CONFIG_SHELL := $(SHELL)
 export CC CXX LD NM OBJCOPY OBJDUMP ADDR2LINE
+export CPP AR
 export YACC = $(if $(BISON),$(BISON),bison)
 export LEX = $(if $(FLEX),$(FLEX),flex)
+export HOSTCC HOSTCXX HOSTCFLAGS
+export EMBEDDED_EXTRA_CFLAGS LDFLAGS_DIRECT
 
 # Default file for 'make defconfig'.
 export KBUILD_DEFCONFIG := $(ARCH)_defconfig
diff --git a/xen/Rules.mk b/xen/Rules.mk
index 8177d405c3..8291e0a573 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -17,7 +17,6 @@ __build:
 
 -include $(objtree)/include/config/auto.conf
 
-include $(XEN_ROOT)/Config.mk
 include $(srctree)/scripts/Kbuild.include
 include $(XEN_ROOT)/config/compiler-testing.mk
 
diff --git a/xen/scripts/Kbuild.include b/xen/scripts/Kbuild.include
index d820595e2f..dfa66f2c8a 100644
--- a/xen/scripts/Kbuild.include
+++ b/xen/scripts/Kbuild.include
@@ -8,6 +8,13 @@ empty   :=
 space   := $(empty) $(empty)
 space_escape := _-_SPACE_-_
 pound   := \#
+comma   := ,
+open    := (
+close   := )
+
+# fallbacks for GNU Make older than 3.81
+realpath = $(wildcard $(foreach file,$(1),$(shell cd -P $(dir $(file)) && echo "$$PWD/$(notdir $(file))")))
+or       = $(if $(strip $(1)),$(1),$(if $(strip $(2)),$(2),$(if $(strip $(3)),$(3),$(if $(strip $(4)),$(4)))))
 
 ###
 # Name of target with a '.' as filename prefix. foo/bar.o => foo/.bar.o
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:47:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:47:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538615.838725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VAf-00078p-8P; Tue, 23 May 2023 16:47:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538615.838725; Tue, 23 May 2023 16:47:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VAf-00078g-5W; Tue, 23 May 2023 16:47:37 +0000
Received: by outflank-mailman (input) for mailman id 538615;
 Tue, 23 May 2023 16:47:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1V2Y-0006Dr-O2
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:39:14 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58185c51-f988-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 18:39:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58185c51-f988-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684859952;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=M4sHTi4LXEAqx1EAv6HnupNoWKrhWbjNFEYPRFSBd30=;
  b=FXqQmfhChlxuws1lESkt8GOLKq5qPnLTRoXombK+s/Fbth5Q4gQVeEMh
   QrMXI+dz7XHobgG4QX3YHRdR2CTLXlKrtF7/KbGleNLtbofhdiC7S+fP5
   T+RQrui7D0yafb/mZTaqta+w0qEKKe1W29oTBaB3olWFrfOblWc8tKi9f
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109422966
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="109422966"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 12/15] build: avoid Config.mk's CFLAGS
Date: Tue, 23 May 2023 17:38:08 +0100
Message-ID: <20230523163811.30792-13-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The variable $(CFLAGS) is too often set in the environment, especially
when building a package for a distribution. Those CFLAGS are usually
intended for building user-space binaries, not a kernel, so packagers
need to take extra steps to build Xen by overriding the CFLAGS provided
by the package build environment.

With this patch, we avoid using the variable $(CFLAGS), and the
hypervisor's build system has complete control over which CFLAGS are
used.

No change is intended to the resulting XEN_CFLAGS, besides some flags
which may appear in a different order on the command line.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---

Notes:
    There's still $(EXTRA_CFLAGS_XEN_CORE), which allows someone building
    Xen to add extra CFLAGS to the hypervisor build.

 xen/Makefile         | 11 ++++++++++-
 xen/arch/arm/arch.mk |  4 ++++
 xen/arch/x86/arch.mk |  2 ++
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/xen/Makefile b/xen/Makefile
index b3bffe8c6f..4dc3acf2a6 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -259,7 +259,16 @@ export KBUILD_DEFCONFIG := $(ARCH)_defconfig
 export XEN_TREEWIDE_CFLAGS := $(CFLAGS)
 
 XEN_AFLAGS =
-XEN_CFLAGS = $(CFLAGS)
+XEN_CFLAGS =
+ifeq ($(XEN_OS),SunOS)
+    XEN_CFLAGS +=  -Wa,--divide -D_POSIX_C_SOURCE=200112L -D__EXTENSIONS__
+endif
+XEN_CFLAGS += -fno-strict-aliasing
+XEN_CFLAGS += -std=gnu99
+XEN_CFLAGS += -Wall -Wstrict-prototypes
+$(call cc-option-add,XEN_CFLAGS,CC,-Wdeclaration-after-statement)
+$(call cc-option-add,XEN_CFLAGS,CC,-Wno-unused-but-set-variable)
+$(call cc-option-add,XEN_CFLAGS,CC,-Wno-unused-local-typedefs)
 
 # CLANG_FLAGS needs to be calculated before calling Kconfig
 ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
diff --git a/xen/arch/arm/arch.mk b/xen/arch/arm/arch.mk
index 300b8bf7ae..0478fadde2 100644
--- a/xen/arch/arm/arch.mk
+++ b/xen/arch/arm/arch.mk
@@ -1,6 +1,10 @@
 ########################################
 # arm-specific definitions
 
+ifeq ($(XEN_TARGET_ARCH),arm32)
+    XEN_CFLAGS += -marm
+endif
+
 $(call cc-options-add,XEN_CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 $(call cc-option-add,XEN_CFLAGS,CC,-Wnested-externs)
 
diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk
index 5df3cf6bc3..fc3b1dc922 100644
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -1,6 +1,8 @@
 ########################################
 # x86-specific definitions
 
+XEN_CFLAGS += -m64
+
 export XEN_IMG_OFFSET := 0x200000
 
 XEN_CFLAGS += -I$(srctree)/arch/x86/include/asm/mach-generic
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 23 16:49:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 16:49:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538625.838735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VCa-0008HU-Jt; Tue, 23 May 2023 16:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538625.838735; Tue, 23 May 2023 16:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VCa-0008HN-GQ; Tue, 23 May 2023 16:49:36 +0000
Received: by outflank-mailman (input) for mailman id 538625;
 Tue, 23 May 2023 16:49:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuGZ=BM=citrix.com=prvs=5000a0748=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q1VCY-0008HF-NS
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 16:49:34 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ca612a8c-f989-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 18:49:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca612a8c-f989-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684860573;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=Zl9vTPqi6RVZJGxKRYtGaBAPuiSWpOT0k2k62r7HVqA=;
  b=D1J7JO5QicrFAlF3DMcgfTEGyNXyzFOihlD1TVqYGILwR52NpYtvfRNo
   pXtQsfICzEgq45U/sBDEmnDpSGAIxLRRnJkE50EbnEnl1j/qGUmVnzCMH
   Gwa//OcQSJ4eiKSO5bJzsPSHsqKQYCPeL6gtk0l3hhFUI72MuclXu8oKF
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 112569755
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,186,1681185600"; 
   d="scan'208";a="112569755"
Date: Tue, 23 May 2023 17:49:19 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Luca Fancellu <luca.fancellu@arm.com>
CC: <xen-devel@lists.xenproject.org>, <bertrand.marquis@arm.com>,
	<wei.chen@arm.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, "Juergen
 Gross" <jgross@suse.com>
Subject: Re: [PATCH v7 10/12] xen/tools: add sve parameter in XL configuration
Message-ID: <1c32737f-771a-4172-84ef-65d24a42e8d4@perard>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-11-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230523074326.3035745-11-luca.fancellu@arm.com>

On Tue, May 23, 2023 at 08:43:24AM +0100, Luca Fancellu wrote:
> Add sve parameter in XL configuration to allow guests to use
> SVE feature.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 23 17:13:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 17:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538633.838755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VZT-0003YZ-QJ; Tue, 23 May 2023 17:13:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538633.838755; Tue, 23 May 2023 17:13:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VZT-0003YS-Mh; Tue, 23 May 2023 17:13:15 +0000
Received: by outflank-mailman (input) for mailman id 538633;
 Tue, 23 May 2023 17:13:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZ9c=BM=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q1VZS-0003Rc-77
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 17:13:14 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1891f5db-f98d-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 19:13:12 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-446-OWN8SbHRN8KoJsCl-5O8JQ-1; Tue, 23 May 2023 13:13:06 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 593C48032FE;
 Tue, 23 May 2023 17:13:05 +0000 (UTC)
Received: from localhost (unknown [10.39.195.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 34221C1ED99;
 Tue, 23 May 2023 17:13:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1891f5db-f98d-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684861991;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lWkemwIYHqUqRrd9czrbQXH/T+H09BEI6lXtVpoxOE0=;
	b=ei7xrdCUKJ1Qm804EVuueRqHxBUiBb96YkfPe+W/ReR7NvJ6pE/oyNTyryVmPxiehR6ri2
	FZiMTVJVpPsJ1+qnztG8mu1QJcMTnQpHxJxXJq4SAB36dnmEJtixw1gidByVK5lXKQBg/b
	90k7GTwrSJ2UlVEVzybamMQ5acwaXyc=
X-MC-Unique: OWN8SbHRN8KoJsCl-5O8JQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	xen-devel@lists.xenproject.org,
	eblake@redhat.com,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-block@nongnu.org
Subject: [PATCH v2 1/6] block: add blk_io_plug_call() API
Date: Tue, 23 May 2023 13:12:55 -0400
Message-Id: <20230523171300.132347-2-stefanha@redhat.com>
In-Reply-To: <20230523171300.132347-1-stefanha@redhat.com>
References: <20230523171300.132347-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

Introduce a new API for thread-local blk_io_plug() that does not
traverse the block graph. The goal is to make blk_io_plug() multi-queue
friendly.

Instead of having block drivers track whether or not we're in a plugged
section, provide an API that allows them to defer a function call until
we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is
called multiple times with the same fn/opaque pair, then fn() is only
called once at the outermost blk_io_unplug() - resulting in batching.

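As an illustration only, the deferral and batching described above can be
sketched in a few lines of standalone C. This is a toy, not QEMU's
implementation: the names (toy_plug(), run_demo()) are invented, it keeps
a single deferred fn/opaque slot instead of a per-thread array, and it
ignores thread-locality:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the deferred-call idea: remember at most one fn/opaque
 * pair and invoke it once at the outermost unplug. */
static unsigned plug_count;
static void (*deferred_fn)(void *);
static void *deferred_opaque;

static void toy_plug(void)
{
    plug_count++;               /* nesting is just a counter */
}

static void toy_plug_call(void (*fn)(void *), void *opaque)
{
    if (plug_count == 0) {      /* not plugged: call immediately */
        fn(opaque);
        return;
    }
    deferred_fn = fn;           /* duplicate fn/opaque pairs collapse */
    deferred_opaque = opaque;
}

static void toy_unplug(void)
{
    assert(plug_count > 0);
    if (--plug_count > 0) {     /* only the outermost unplug flushes */
        return;
    }
    if (deferred_fn) {
        deferred_fn(deferred_opaque);
        deferred_fn = NULL;
    }
}

static void submit(void *opaque)
{
    (*(int *)opaque)++;         /* stand-in for an actual I/O submission */
}

static int run_demo(void)
{
    int submissions = 0;

    toy_plug();
    toy_plug_call(submit, &submissions);    /* deferred */
    toy_plug_call(submit, &submissions);    /* same pair, no extra call */
    toy_plug_call(submit, &submissions);
    toy_unplug();                           /* submit() runs exactly once */

    return submissions;
}
```

run_demo() returns 1: the three identical deferred requests collapse into
a single submit() call at the outermost unplug, which is the batching
effect this patch is after.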
This patch introduces the API and changes blk_io_plug()/blk_io_unplug().
blk_io_plug()/blk_io_unplug() no longer require a BlockBackend argument
because the plug state is now thread-local.

Later patches convert block drivers to blk_io_plug_call() and then we
can finally remove .bdrv_co_io_plug() once all block drivers have been
converted.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
v2
- "is not be freed" -> "is not freed" [Eric]
---
 MAINTAINERS                       |   1 +
 include/sysemu/block-backend-io.h |  13 +--
 block/block-backend.c             |  22 -----
 block/plug.c                      | 159 ++++++++++++++++++++++++++++++
 hw/block/dataplane/xen-block.c    |   8 +-
 hw/block/virtio-blk.c             |   4 +-
 hw/scsi/virtio-scsi.c             |   6 +-
 block/meson.build                 |   1 +
 8 files changed, 173 insertions(+), 41 deletions(-)
 create mode 100644 block/plug.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 1b6466496d..2be6f0c26b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2646,6 +2646,7 @@ F: util/aio-*.c
 F: util/aio-*.h
 F: util/fdmon-*.c
 F: block/io.c
+F: block/plug.c
 F: migration/block*
 F: include/block/aio.h
 F: include/block/aio-wait.h
diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
index d62a7ee773..be4dcef59d 100644
--- a/include/sysemu/block-backend-io.h
+++ b/include/sysemu/block-backend-io.h
@@ -100,16 +100,9 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
 int blk_get_max_iov(BlockBackend *blk);
 int blk_get_max_hw_iov(BlockBackend *blk);
 
-/*
- * blk_io_plug/unplug are thread-local operations. This means that multiple
- * IOThreads can simultaneously call plug/unplug, but the caller must ensure
- * that each unplug() is called in the same IOThread of the matching plug().
- */
-void coroutine_fn blk_co_io_plug(BlockBackend *blk);
-void co_wrapper blk_io_plug(BlockBackend *blk);
-
-void coroutine_fn blk_co_io_unplug(BlockBackend *blk);
-void co_wrapper blk_io_unplug(BlockBackend *blk);
+void blk_io_plug(void);
+void blk_io_unplug(void);
+void blk_io_plug_call(void (*fn)(void *), void *opaque);
 
 AioContext *blk_get_aio_context(BlockBackend *blk);
 BlockAcctStats *blk_get_stats(BlockBackend *blk);
diff --git a/block/block-backend.c b/block/block-backend.c
index ca537cd0ad..1f1d226ba6 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -2568,28 +2568,6 @@ void blk_add_insert_bs_notifier(BlockBackend *blk, Notifier *notify)
     notifier_list_add(&blk->insert_bs_notifiers, notify);
 }
 
-void coroutine_fn blk_co_io_plug(BlockBackend *blk)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    IO_CODE();
-    GRAPH_RDLOCK_GUARD();
-
-    if (bs) {
-        bdrv_co_io_plug(bs);
-    }
-}
-
-void coroutine_fn blk_co_io_unplug(BlockBackend *blk)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    IO_CODE();
-    GRAPH_RDLOCK_GUARD();
-
-    if (bs) {
-        bdrv_co_io_unplug(bs);
-    }
-}
-
 BlockAcctStats *blk_get_stats(BlockBackend *blk)
 {
     IO_CODE();
diff --git a/block/plug.c b/block/plug.c
new file mode 100644
index 0000000000..98a155d2f4
--- /dev/null
+++ b/block/plug.c
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Block I/O plugging
+ *
+ * Copyright Red Hat.
+ *
+ * This API defers a function call within a blk_io_plug()/blk_io_unplug()
+ * section, allowing multiple calls to batch up. This is a performance
+ * optimization that is used in the block layer to submit several I/O requests
+ * at once instead of individually:
+ *
+ *   blk_io_plug(); <-- start of plugged region
+ *   ...
+ *   blk_io_plug_call(my_func, my_obj); <-- deferred my_func(my_obj) call
+ *   blk_io_plug_call(my_func, my_obj); <-- another
+ *   blk_io_plug_call(my_func, my_obj); <-- another
+ *   ...
+ *   blk_io_unplug(); <-- end of plugged region, my_func(my_obj) is called once
+ *
+ * This code is actually generic and not tied to the block layer. If another
+ * subsystem needs this functionality, it could be renamed.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/coroutine-tls.h"
+#include "qemu/notify.h"
+#include "qemu/thread.h"
+#include "sysemu/block-backend.h"
+
+/* A function call that has been deferred until unplug() */
+typedef struct {
+    void (*fn)(void *);
+    void *opaque;
+} UnplugFn;
+
+/* Per-thread state */
+typedef struct {
+    unsigned count;       /* how many times has plug() been called? */
+    GArray *unplug_fns;   /* functions to call at unplug time */
+} Plug;
+
+/* Use get_ptr_plug() to fetch this thread-local value */
+QEMU_DEFINE_STATIC_CO_TLS(Plug, plug);
+
+/* Called at thread cleanup time */
+static void blk_io_plug_atexit(Notifier *n, void *value)
+{
+    Plug *plug = get_ptr_plug();
+    g_array_free(plug->unplug_fns, TRUE);
+}
+
+/* This won't involve coroutines, so use __thread */
+static __thread Notifier blk_io_plug_atexit_notifier;
+
+/**
+ * blk_io_plug_call:
+ * @fn: a function pointer to be invoked
+ * @opaque: a user-defined argument to @fn()
+ *
+ * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
+ * section.
+ *
+ * Otherwise defer the call until the end of the outermost
+ * blk_io_plug()/blk_io_unplug() section in this thread. If the same
+ * @fn/@opaque pair has already been deferred, it will only be called once upon
+ * blk_io_unplug() so that accumulated calls are batched into a single call.
+ *
+ * The caller must ensure that @opaque is not freed before @fn() is invoked.
+ */
+void blk_io_plug_call(void (*fn)(void *), void *opaque)
+{
+    Plug *plug = get_ptr_plug();
+
+    /* Call immediately if we're not plugged */
+    if (plug->count == 0) {
+        fn(opaque);
+        return;
+    }
+
+    GArray *array = plug->unplug_fns;
+    if (!array) {
+        array = g_array_new(FALSE, FALSE, sizeof(UnplugFn));
+        plug->unplug_fns = array;
+        blk_io_plug_atexit_notifier.notify = blk_io_plug_atexit;
+        qemu_thread_atexit_add(&blk_io_plug_atexit_notifier);
+    }
+
+    UnplugFn *fns = (UnplugFn *)array->data;
+    UnplugFn new_fn = {
+        .fn = fn,
+        .opaque = opaque,
+    };
+
+    /*
+     * There won't be many, so do a linear search. If this becomes a bottleneck
+     * then a binary search (glib 2.62+) or different data structure could be
+     * used.
+     */
+    for (guint i = 0; i < array->len; i++) {
+        if (memcmp(&fns[i], &new_fn, sizeof(new_fn)) == 0) {
+            return; /* already exists */
+        }
+    }
+
+    g_array_append_val(array, new_fn);
+}
+
+/**
+ * blk_io_plug: Defer blk_io_plug_call() functions until blk_io_unplug()
+ *
+ * blk_io_plug/unplug are thread-local operations. This means that multiple
+ * threads can simultaneously call plug/unplug, but the caller must ensure that
+ * each unplug() is called in the same thread of the matching plug().
+ *
+ * Nesting is supported. blk_io_plug_call() functions are only called at the
+ * outermost blk_io_unplug().
+ */
+void blk_io_plug(void)
+{
+    Plug *plug = get_ptr_plug();
+
+    assert(plug->count < UINT32_MAX);
+
+    plug->count++;
+}
+
+/**
+ * blk_io_unplug: Run any pending blk_io_plug_call() functions
+ *
+ * There must have been a matching blk_io_plug() call in the same thread prior
+ * to this blk_io_unplug() call.
+ */
+void blk_io_unplug(void)
+{
+    Plug *plug = get_ptr_plug();
+
+    assert(plug->count > 0);
+
+    if (--plug->count > 0) {
+        return;
+    }
+
+    GArray *array = plug->unplug_fns;
+    if (!array) {
+        return;
+    }
+
+    UnplugFn *fns = (UnplugFn *)array->data;
+
+    for (guint i = 0; i < array->len; i++) {
+        fns[i].fn(fns[i].opaque);
+    }
+
+    /*
+     * This resets the array without freeing memory so that appending is cheap
+     * in the future.
+     */
+    g_array_set_size(array, 0);
+}
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index d8bc39d359..e49c24f63d 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -537,7 +537,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
      * is below us.
      */
     if (inflight_atstart > IO_PLUG_THRESHOLD) {
-        blk_io_plug(dataplane->blk);
+        blk_io_plug();
     }
     while (rc != rp) {
         /* pull request from ring */
@@ -577,12 +577,12 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
 
         if (inflight_atstart > IO_PLUG_THRESHOLD &&
             batched >= inflight_atstart) {
-            blk_io_unplug(dataplane->blk);
+            blk_io_unplug();
         }
         xen_block_do_aio(request);
         if (inflight_atstart > IO_PLUG_THRESHOLD) {
             if (batched >= inflight_atstart) {
-                blk_io_plug(dataplane->blk);
+                blk_io_plug();
                 batched = 0;
             } else {
                 batched++;
@@ -590,7 +590,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
         }
     }
     if (inflight_atstart > IO_PLUG_THRESHOLD) {
-        blk_io_unplug(dataplane->blk);
+        blk_io_unplug();
     }
 
     return done_something;
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 8f65ea4659..b4286424c1 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1134,7 +1134,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
     bool suppress_notifications = virtio_queue_get_notification(vq);
 
     aio_context_acquire(blk_get_aio_context(s->blk));
-    blk_io_plug(s->blk);
+    blk_io_plug();
 
     do {
         if (suppress_notifications) {
@@ -1158,7 +1158,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
         virtio_blk_submit_multireq(s, &mrb);
     }
 
-    blk_io_unplug(s->blk);
+    blk_io_unplug();
     aio_context_release(blk_get_aio_context(s->blk));
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..534a44ee07 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -799,7 +799,7 @@ static int virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
         return -ENOBUFS;
     }
     scsi_req_ref(req->sreq);
-    blk_io_plug(d->conf.blk);
+    blk_io_plug();
     object_unref(OBJECT(d));
     return 0;
 }
@@ -810,7 +810,7 @@ static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
     if (scsi_req_enqueue(sreq)) {
         scsi_req_continue(sreq);
     }
-    blk_io_unplug(sreq->dev->conf.blk);
+    blk_io_unplug();
     scsi_req_unref(sreq);
 }
 
@@ -836,7 +836,7 @@ static void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
                 while (!QTAILQ_EMPTY(&reqs)) {
                     req = QTAILQ_FIRST(&reqs);
                     QTAILQ_REMOVE(&reqs, req, next);
-                    blk_io_unplug(req->sreq->dev->conf.blk);
+                    blk_io_unplug();
                     scsi_req_unref(req->sreq);
                     virtqueue_detach_element(req->vq, &req->elem, 0);
                     virtio_scsi_free_req(req);
diff --git a/block/meson.build b/block/meson.build
index 486dda8b85..fb4332bd66 100644
--- a/block/meson.build
+++ b/block/meson.build
@@ -23,6 +23,7 @@ block_ss.add(files(
   'mirror.c',
   'nbd.c',
   'null.c',
+  'plug.c',
   'qapi.c',
   'qcow2-bitmap.c',
   'qcow2-cache.c',
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 17:13:37 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	xen-devel@lists.xenproject.org,
	eblake@redhat.com,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-block@nongnu.org
Subject: [PATCH v2 0/6] block: add blk_io_plug_call() API
Date: Tue, 23 May 2023 13:12:54 -0400
Message-Id: <20230523171300.132347-1-stefanha@redhat.com>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

v2
- Patch 1: "is not be freed" -> "is not freed" [Eric]
- Patch 2: Remove unused nvme_process_completion_queue_plugged trace event
  [Stefano]
- Patch 3: Add missing #include and fix blkio_unplug_fn() prototype [Stefano]
- Patch 4: Removed whitespace hunk [Eric]

The existing blk_io_plug() API is not multi-queue block layer friendly because
the plug state is per-BlockDriverState.

Change blk_io_plug()'s implementation so it is thread-local. This is done by
introducing the blk_io_plug_call() function that block drivers use to batch
calls while plugged. It is relatively easy to convert block drivers from
.bdrv_co_io_plug() to blk_io_plug_call().

Random read 4KB performance with virtio-blk on a host NVMe block device:

iodepth   iops   change vs today
1        45612   -4%
2        87967   +2%
4       129872   +0%
8       171096   -3%
16      194508   -4%
32      208947   -1%
64      217647   +0%
128     229629   +0%

The results are within the noise for these benchmarks. This is to be expected:
the plugging behavior for a single thread hasn't changed in this patch series;
only the state has moved from per-BlockDriverState to thread-local.

The following graph compares several approaches:
https://vmsplice.net/~stefan/blk_io_plug-thread-local.png
- v7.2.0: before most of the multi-queue block layer changes landed.
- with-blk_io_plug: today's post-8.0.0 QEMU.
- blk_io_plug-thread-local: this patch series.
- no-blk_io_plug: what happens when we simply remove plugging?
- call-after-dispatch: what if we integrate plugging into the event loop? I
  decided against this approach in the end because it's more likely to
  introduce performance regressions since I/O submission is deferred until the
  end of the event loop iteration.

Aside from the no-blk_io_plug case, which bottlenecks much earlier than the
others, we see that all plugging approaches are more or less equivalent in this
benchmark. It is also clear that QEMU 8.0.0 has lower performance than 7.2.0.

The Ansible playbook, fio results, and a Jupyter notebook are available here:
https://github.com/stefanha/qemu-perf/tree/remove-blk_io_plug

Stefan Hajnoczi (6):
  block: add blk_io_plug_call() API
  block/nvme: convert to blk_io_plug_call() API
  block/blkio: convert to blk_io_plug_call() API
  block/io_uring: convert to blk_io_plug_call() API
  block/linux-aio: convert to blk_io_plug_call() API
  block: remove bdrv_co_io_plug() API

 MAINTAINERS                       |   1 +
 include/block/block-io.h          |   3 -
 include/block/block_int-common.h  |  11 ---
 include/block/raw-aio.h           |  14 ---
 include/sysemu/block-backend-io.h |  13 +--
 block/blkio.c                     |  43 ++++----
 block/block-backend.c             |  22 -----
 block/file-posix.c                |  38 -------
 block/io.c                        |  37 -------
 block/io_uring.c                  |  44 ++++-----
 block/linux-aio.c                 |  41 +++-----
 block/nvme.c                      |  44 +++------
 block/plug.c                      | 159 ++++++++++++++++++++++++++++++
 hw/block/dataplane/xen-block.c    |   8 +-
 hw/block/virtio-blk.c             |   4 +-
 hw/scsi/virtio-scsi.c             |   6 +-
 block/meson.build                 |   1 +
 block/trace-events                |   6 +-
 18 files changed, 239 insertions(+), 256 deletions(-)
 create mode 100644 block/plug.c

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 17:13:37 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	xen-devel@lists.xenproject.org,
	eblake@redhat.com,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-block@nongnu.org
Subject: [PATCH v2 2/6] block/nvme: convert to blk_io_plug_call() API
Date: Tue, 23 May 2023 13:12:56 -0400
Message-Id: <20230523171300.132347-3-stefanha@redhat.com>
In-Reply-To: <20230523171300.132347-1-stefanha@redhat.com>
References: <20230523171300.132347-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
v2
- Remove unused nvme_process_completion_queue_plugged trace event
  [Stefano]
---
 block/nvme.c       | 44 ++++++++++++--------------------------------
 block/trace-events |  1 -
 2 files changed, 12 insertions(+), 33 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..100b38b592 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -25,6 +25,7 @@
 #include "qemu/vfio-helpers.h"
 #include "block/block-io.h"
 #include "block/block_int.h"
+#include "sysemu/block-backend.h"
 #include "sysemu/replay.h"
 #include "trace.h"
 
@@ -119,7 +120,6 @@ struct BDRVNVMeState {
     int blkshift;
 
     uint64_t max_transfer;
-    bool plugged;
 
     bool supports_write_zeroes;
     bool supports_discard;
@@ -282,7 +282,7 @@ static void nvme_kick(NVMeQueuePair *q)
 {
     BDRVNVMeState *s = q->s;
 
-    if (s->plugged || !q->need_kick) {
+    if (!q->need_kick) {
         return;
     }
     trace_nvme_kick(s, q->index);
@@ -387,10 +387,6 @@ static bool nvme_process_completion(NVMeQueuePair *q)
     NvmeCqe *c;
 
     trace_nvme_process_completion(s, q->index, q->inflight);
-    if (s->plugged) {
-        trace_nvme_process_completion_queue_plugged(s, q->index);
-        return false;
-    }
 
     /*
      * Support re-entrancy when a request cb() function invokes aio_poll().
@@ -480,6 +476,15 @@ static void nvme_trace_command(const NvmeCmd *cmd)
     }
 }
 
+static void nvme_unplug_fn(void *opaque)
+{
+    NVMeQueuePair *q = opaque;
+
+    QEMU_LOCK_GUARD(&q->lock);
+    nvme_kick(q);
+    nvme_process_completion(q);
+}
+
 static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
                                 NvmeCmd *cmd, BlockCompletionFunc cb,
                                 void *opaque)
@@ -496,8 +501,7 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
            q->sq.tail * NVME_SQ_ENTRY_BYTES, cmd, sizeof(*cmd));
     q->sq.tail = (q->sq.tail + 1) % NVME_QUEUE_SIZE;
     q->need_kick++;
-    nvme_kick(q);
-    nvme_process_completion(q);
+    blk_io_plug_call(nvme_unplug_fn, q);
     qemu_mutex_unlock(&q->lock);
 }
 
@@ -1567,27 +1571,6 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
     }
 }
 
-static void coroutine_fn nvme_co_io_plug(BlockDriverState *bs)
-{
-    BDRVNVMeState *s = bs->opaque;
-    assert(!s->plugged);
-    s->plugged = true;
-}
-
-static void coroutine_fn nvme_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVNVMeState *s = bs->opaque;
-    assert(s->plugged);
-    s->plugged = false;
-    for (unsigned i = INDEX_IO(0); i < s->queue_count; i++) {
-        NVMeQueuePair *q = s->queues[i];
-        qemu_mutex_lock(&q->lock);
-        nvme_kick(q);
-        nvme_process_completion(q);
-        qemu_mutex_unlock(&q->lock);
-    }
-}
-
 static bool nvme_register_buf(BlockDriverState *bs, void *host, size_t size,
                               Error **errp)
 {
@@ -1664,9 +1647,6 @@ static BlockDriver bdrv_nvme = {
     .bdrv_detach_aio_context  = nvme_detach_aio_context,
     .bdrv_attach_aio_context  = nvme_attach_aio_context,
 
-    .bdrv_co_io_plug          = nvme_co_io_plug,
-    .bdrv_co_io_unplug        = nvme_co_io_unplug,
-
     .bdrv_register_buf        = nvme_register_buf,
     .bdrv_unregister_buf      = nvme_unregister_buf,
 };
diff --git a/block/trace-events b/block/trace-events
index 32665158d6..048ad27519 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -141,7 +141,6 @@ nvme_kick(void *s, unsigned q_index) "s %p q #%u"
 nvme_dma_flush_queue_wait(void *s) "s %p"
 nvme_error(int cmd_specific, int sq_head, int sqid, int cid, int status) "cmd_specific %d sq_head %d sqid %d cid %d status 0x%x"
 nvme_process_completion(void *s, unsigned q_index, int inflight) "s %p q #%u inflight %d"
-nvme_process_completion_queue_plugged(void *s, unsigned q_index) "s %p q #%u"
 nvme_complete_command(void *s, unsigned q_index, int cid) "s %p q #%u cid %d"
 nvme_submit_command(void *s, unsigned q_index, int cid) "s %p q #%u cid %d"
 nvme_submit_command_raw(int c0, int c1, int c2, int c3, int c4, int c5, int c6, int c7) "%02x %02x %02x %02x %02x %02x %02x %02x"
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 17:13:40 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	xen-devel@lists.xenproject.org,
	eblake@redhat.com,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-block@nongnu.org
Subject: [PATCH v2 4/6] block/io_uring: convert to blk_io_plug_call() API
Date: Tue, 23 May 2023 13:12:58 -0400
Message-Id: <20230523171300.132347-5-stefanha@redhat.com>
In-Reply-To: <20230523171300.132347-1-stefanha@redhat.com>
References: <20230523171300.132347-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
v2
- Removed whitespace hunk [Eric]
---
 include/block/raw-aio.h |  7 -------
 block/file-posix.c      | 10 ----------
 block/io_uring.c        | 44 ++++++++++++++++-------------------------
 block/trace-events      |  5 ++---
 4 files changed, 19 insertions(+), 47 deletions(-)

diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index 0fe85ade77..da60ca13ef 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -81,13 +81,6 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
                                   QEMUIOVector *qiov, int type);
 void luring_detach_aio_context(LuringState *s, AioContext *old_context);
 void luring_attach_aio_context(LuringState *s, AioContext *new_context);
-
-/*
- * luring_io_plug/unplug work in the thread's current AioContext, therefore the
- * caller must ensure that they are paired in the same IOThread.
- */
-void luring_io_plug(void);
-void luring_io_unplug(void);
 #endif
 
 #ifdef _WIN32
diff --git a/block/file-posix.c b/block/file-posix.c
index 0ab158efba..7baa8491dd 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2558,11 +2558,6 @@ static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
         laio_io_plug();
     }
 #endif
-#ifdef CONFIG_LINUX_IO_URING
-    if (s->use_linux_io_uring) {
-        luring_io_plug();
-    }
-#endif
 }
 
 static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
@@ -2573,11 +2568,6 @@ static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
         laio_io_unplug(s->aio_max_batch);
     }
 #endif
-#ifdef CONFIG_LINUX_IO_URING
-    if (s->use_linux_io_uring) {
-        luring_io_unplug();
-    }
-#endif
 }
 
 static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
diff --git a/block/io_uring.c b/block/io_uring.c
index 82cab6a5bd..4e25b56c9c 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -16,6 +16,7 @@
 #include "block/raw-aio.h"
 #include "qemu/coroutine.h"
 #include "qapi/error.h"
+#include "sysemu/block-backend.h"
 #include "trace.h"
 
 /* Only used for assertions.  */
@@ -41,7 +42,6 @@ typedef struct LuringAIOCB {
 } LuringAIOCB;
 
 typedef struct LuringQueue {
-    int plugged;
     unsigned int in_queue;
     unsigned int in_flight;
     bool blocked;
@@ -267,7 +267,7 @@ static void luring_process_completions_and_submit(LuringState *s)
 {
     luring_process_completions(s);
 
-    if (!s->io_q.plugged && s->io_q.in_queue > 0) {
+    if (s->io_q.in_queue > 0) {
         ioq_submit(s);
     }
 }
@@ -301,29 +301,17 @@ static void qemu_luring_poll_ready(void *opaque)
 static void ioq_init(LuringQueue *io_q)
 {
     QSIMPLEQ_INIT(&io_q->submit_queue);
-    io_q->plugged = 0;
     io_q->in_queue = 0;
     io_q->in_flight = 0;
     io_q->blocked = false;
 }
 
-void luring_io_plug(void)
+static void luring_unplug_fn(void *opaque)
 {
-    AioContext *ctx = qemu_get_current_aio_context();
-    LuringState *s = aio_get_linux_io_uring(ctx);
-    trace_luring_io_plug(s);
-    s->io_q.plugged++;
-}
-
-void luring_io_unplug(void)
-{
-    AioContext *ctx = qemu_get_current_aio_context();
-    LuringState *s = aio_get_linux_io_uring(ctx);
-    assert(s->io_q.plugged);
-    trace_luring_io_unplug(s, s->io_q.blocked, s->io_q.plugged,
-                           s->io_q.in_queue, s->io_q.in_flight);
-    if (--s->io_q.plugged == 0 &&
-        !s->io_q.blocked && s->io_q.in_queue > 0) {
+    LuringState *s = opaque;
+    trace_luring_unplug_fn(s, s->io_q.blocked, s->io_q.in_queue,
+                           s->io_q.in_flight);
+    if (!s->io_q.blocked && s->io_q.in_queue > 0) {
         ioq_submit(s);
     }
 }
@@ -370,14 +358,16 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
 
     QSIMPLEQ_INSERT_TAIL(&s->io_q.submit_queue, luringcb, next);
     s->io_q.in_queue++;
-    trace_luring_do_submit(s, s->io_q.blocked, s->io_q.plugged,
-                           s->io_q.in_queue, s->io_q.in_flight);
-    if (!s->io_q.blocked &&
-        (!s->io_q.plugged ||
-         s->io_q.in_flight + s->io_q.in_queue >= MAX_ENTRIES)) {
-        ret = ioq_submit(s);
-        trace_luring_do_submit_done(s, ret);
-        return ret;
+    trace_luring_do_submit(s, s->io_q.blocked, s->io_q.in_queue,
+                           s->io_q.in_flight);
+    if (!s->io_q.blocked) {
+        if (s->io_q.in_flight + s->io_q.in_queue >= MAX_ENTRIES) {
+            ret = ioq_submit(s);
+            trace_luring_do_submit_done(s, ret);
+            return ret;
+        }
+
+        blk_io_plug_call(luring_unplug_fn, s);
     }
     return 0;
 }
diff --git a/block/trace-events b/block/trace-events
index 048ad27519..6f121b7636 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -64,9 +64,8 @@ file_paio_submit(void *acb, void *opaque, int64_t offset, int count, int type) "
 # io_uring.c
 luring_init_state(void *s, size_t size) "s %p size %zu"
 luring_cleanup_state(void *s) "%p freed"
-luring_io_plug(void *s) "LuringState %p plug"
-luring_io_unplug(void *s, int blocked, int plugged, int queued, int inflight) "LuringState %p blocked %d plugged %d queued %d inflight %d"
-luring_do_submit(void *s, int blocked, int plugged, int queued, int inflight) "LuringState %p blocked %d plugged %d queued %d inflight %d"
+luring_unplug_fn(void *s, int blocked, int queued, int inflight) "LuringState %p blocked %d queued %d inflight %d"
+luring_do_submit(void *s, int blocked, int queued, int inflight) "LuringState %p blocked %d queued %d inflight %d"
 luring_do_submit_done(void *s, int ret) "LuringState %p submitted to kernel %d"
 luring_co_submit(void *bs, void *s, void *luringcb, int fd, uint64_t offset, size_t nbytes, int type) "bs %p s %p luringcb %p fd %d offset %" PRId64 " nbytes %zd type %d"
 luring_process_completion(void *s, void *aiocb, int ret) "LuringState %p luringcb %p ret %d"
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 17:13:40 2023
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	xen-devel@lists.xenproject.org,
	eblake@redhat.com,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-block@nongnu.org
Subject: [PATCH v2 3/6] block/blkio: convert to blk_io_plug_call() API
Date: Tue, 23 May 2023 13:12:57 -0400
Message-Id: <20230523171300.132347-4-stefanha@redhat.com>
In-Reply-To: <20230523171300.132347-1-stefanha@redhat.com>
References: <20230523171300.132347-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
v2
- Add missing #include and fix blkio_unplug_fn() prototype [Stefano]
---
 block/blkio.c | 43 ++++++++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 19 deletions(-)

diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..93c6d20d39 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -17,6 +17,7 @@
 #include "qemu/error-report.h"
 #include "qapi/qmp/qdict.h"
 #include "qemu/module.h"
+#include "sysemu/block-backend.h"
 #include "exec/memory.h" /* for ram_block_discard_disable() */
 
 #include "block/block-io.h"
@@ -325,16 +326,30 @@ static void blkio_detach_aio_context(BlockDriverState *bs)
                        false, NULL, NULL, NULL, NULL, NULL);
 }
 
-/* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
-static void blkio_submit_io(BlockDriverState *bs)
+/*
+ * Called by blk_io_unplug() or immediately if not plugged. Called without
+ * blkio_lock.
+ */
+static void blkio_unplug_fn(void *opaque)
 {
-    if (qatomic_read(&bs->io_plugged) == 0) {
-        BDRVBlkioState *s = bs->opaque;
+    BDRVBlkioState *s = opaque;
 
+    WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_do_io(s->blkioq, NULL, 0, 0, NULL);
     }
 }
 
+/*
+ * Schedule I/O submission after enqueuing a new request. Called without
+ * blkio_lock.
+ */
+static void blkio_submit_io(BlockDriverState *bs)
+{
+    BDRVBlkioState *s = bs->opaque;
+
+    blk_io_plug_call(blkio_unplug_fn, s);
+}
+
 static int coroutine_fn
 blkio_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 {
@@ -345,9 +360,9 @@ blkio_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_discard(s->blkioq, offset, bytes, &cod, 0);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
     return cod.ret;
 }
@@ -378,9 +393,9 @@ blkio_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_readv(s->blkioq, offset, iov, iovcnt, &cod, 0);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
 
     if (use_bounce_buffer) {
@@ -423,9 +438,9 @@ static int coroutine_fn blkio_co_pwritev(BlockDriverState *bs, int64_t offset,
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_writev(s->blkioq, offset, iov, iovcnt, &cod, blkio_flags);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
 
     if (use_bounce_buffer) {
@@ -444,9 +459,9 @@ static int coroutine_fn blkio_co_flush(BlockDriverState *bs)
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_flush(s->blkioq, &cod, 0);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
     return cod.ret;
 }
@@ -472,22 +487,13 @@ static int coroutine_fn blkio_co_pwrite_zeroes(BlockDriverState *bs,
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_write_zeroes(s->blkioq, offset, bytes, &cod, blkio_flags);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
     return cod.ret;
 }
 
-static void coroutine_fn blkio_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVBlkioState *s = bs->opaque;
-
-    WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
-        blkio_submit_io(bs);
-    }
-}
-
 typedef enum {
     BMRR_OK,
     BMRR_SKIP,
@@ -1009,7 +1015,6 @@ static void blkio_refresh_limits(BlockDriverState *bs, Error **errp)
         .bdrv_co_pwritev         = blkio_co_pwritev, \
         .bdrv_co_flush_to_disk   = blkio_co_flush, \
         .bdrv_co_pwrite_zeroes   = blkio_co_pwrite_zeroes, \
-        .bdrv_co_io_unplug       = blkio_co_io_unplug, \
         .bdrv_refresh_limits     = blkio_refresh_limits, \
         .bdrv_register_buf       = blkio_register_buf, \
         .bdrv_unregister_buf     = blkio_unregister_buf, \
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 17:13:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 17:13:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538637.838795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VZa-0004bW-5g; Tue, 23 May 2023 17:13:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538637.838795; Tue, 23 May 2023 17:13:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VZa-0004b5-16; Tue, 23 May 2023 17:13:22 +0000
Received: by outflank-mailman (input) for mailman id 538637;
 Tue, 23 May 2023 17:13:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZ9c=BM=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q1VZY-0003JO-Tv
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 17:13:20 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c7d71b6-f98d-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 19:13:19 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-179-gp84_tuxOQSN9WPS2-uenQ-1; Tue, 23 May 2023 13:13:16 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 60B711C08784;
 Tue, 23 May 2023 17:13:15 +0000 (UTC)
Received: from localhost (unknown [10.39.195.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id D1EC01121314;
 Tue, 23 May 2023 17:13:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c7d71b6-f98d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684861998;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3FPGvS4E8XhJl458NpT+4AaqENtWEqDNIS9t7Nd8CmQ=;
	b=EesUVfQ6n503Cqx9uB61hT85qWlbV6147STBzL37tFf+/mOxvLmibMY1x6vbPgh0R7ANdr
	7NXwvrQpTF8FvHXbiMb0R6furkNFjl+7ipDP3M2xEZb+FGboD/iVXuYpcfrkdxhceJkfVU
	2LCUu3IiaR0N88ylnZo4bVHLnr7un94=
X-MC-Unique: gp84_tuxOQSN9WPS2-uenQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	xen-devel@lists.xenproject.org,
	eblake@redhat.com,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-block@nongnu.org
Subject: [PATCH v2 5/6] block/linux-aio: convert to blk_io_plug_call() API
Date: Tue, 23 May 2023 13:12:59 -0400
Message-Id: <20230523171300.132347-6-stefanha@redhat.com>
In-Reply-To: <20230523171300.132347-1-stefanha@redhat.com>
References: <20230523171300.132347-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 include/block/raw-aio.h |  7 -------
 block/file-posix.c      | 28 ----------------------------
 block/linux-aio.c       | 41 +++++++++++------------------------------
 3 files changed, 11 insertions(+), 65 deletions(-)

diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index da60ca13ef..0f63c2800c 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -62,13 +62,6 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
 
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
 void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
-
-/*
- * laio_io_plug/unplug work in the thread's current AioContext, therefore the
- * caller must ensure that they are paired in the same IOThread.
- */
-void laio_io_plug(void);
-void laio_io_unplug(uint64_t dev_max_batch);
 #endif
 /* io_uring.c - Linux io_uring implementation */
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/block/file-posix.c b/block/file-posix.c
index 7baa8491dd..ac1ed54811 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2550,26 +2550,6 @@ static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
     return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
 }
 
-static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
-{
-    BDRVRawState __attribute__((unused)) *s = bs->opaque;
-#ifdef CONFIG_LINUX_AIO
-    if (s->use_linux_aio) {
-        laio_io_plug();
-    }
-#endif
-}
-
-static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVRawState __attribute__((unused)) *s = bs->opaque;
-#ifdef CONFIG_LINUX_AIO
-    if (s->use_linux_aio) {
-        laio_io_unplug(s->aio_max_batch);
-    }
-#endif
-}
-
 static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
 {
     BDRVRawState *s = bs->opaque;
@@ -3914,8 +3894,6 @@ BlockDriver bdrv_file = {
     .bdrv_co_copy_range_from = raw_co_copy_range_from,
     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
     .bdrv_refresh_limits = raw_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
@@ -4286,8 +4264,6 @@ static BlockDriver bdrv_host_device = {
     .bdrv_co_copy_range_from = raw_co_copy_range_from,
     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
     .bdrv_refresh_limits = raw_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
@@ -4424,8 +4400,6 @@ static BlockDriver bdrv_host_cdrom = {
     .bdrv_co_pwritev        = raw_co_pwritev,
     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
     .bdrv_refresh_limits    = cdrom_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
@@ -4552,8 +4526,6 @@ static BlockDriver bdrv_host_cdrom = {
     .bdrv_co_pwritev        = raw_co_pwritev,
     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
     .bdrv_refresh_limits    = cdrom_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 442c86209b..5021aed68f 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -15,6 +15,7 @@
 #include "qemu/event_notifier.h"
 #include "qemu/coroutine.h"
 #include "qapi/error.h"
+#include "sysemu/block-backend.h"
 
 /* Only used for assertions.  */
 #include "qemu/coroutine_int.h"
@@ -46,7 +47,6 @@ struct qemu_laiocb {
 };
 
 typedef struct {
-    int plugged;
     unsigned int in_queue;
     unsigned int in_flight;
     bool blocked;
@@ -236,7 +236,7 @@ static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
 {
     qemu_laio_process_completions(s);
 
-    if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
+    if (!QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
 }
@@ -277,7 +277,6 @@ static void qemu_laio_poll_ready(EventNotifier *opaque)
 static void ioq_init(LaioQueue *io_q)
 {
     QSIMPLEQ_INIT(&io_q->pending);
-    io_q->plugged = 0;
     io_q->in_queue = 0;
     io_q->in_flight = 0;
     io_q->blocked = false;
@@ -354,31 +353,11 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
     return max_batch;
 }
 
-void laio_io_plug(void)
+static void laio_unplug_fn(void *opaque)
 {
-    AioContext *ctx = qemu_get_current_aio_context();
-    LinuxAioState *s = aio_get_linux_aio(ctx);
+    LinuxAioState *s = opaque;
 
-    s->io_q.plugged++;
-}
-
-void laio_io_unplug(uint64_t dev_max_batch)
-{
-    AioContext *ctx = qemu_get_current_aio_context();
-    LinuxAioState *s = aio_get_linux_aio(ctx);
-
-    assert(s->io_q.plugged);
-    s->io_q.plugged--;
-
-    /*
-     * Why max batch checking is performed here:
-     * Another BDS may have queued requests with a higher dev_max_batch and
-     * therefore in_queue could now exceed our dev_max_batch. Re-check the max
-     * batch so we can honor our device's dev_max_batch.
-     */
-    if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch) ||
-        (!s->io_q.plugged &&
-         !s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending))) {
+    if (!s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
 }
@@ -410,10 +389,12 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
 
     QSIMPLEQ_INSERT_TAIL(&s->io_q.pending, laiocb, next);
     s->io_q.in_queue++;
-    if (!s->io_q.blocked &&
-        (!s->io_q.plugged ||
-         s->io_q.in_queue >= laio_max_batch(s, dev_max_batch))) {
-        ioq_submit(s);
+    if (!s->io_q.blocked) {
+        if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch)) {
+            ioq_submit(s);
+        } else {
+            blk_io_plug_call(laio_unplug_fn, s);
+        }
     }
 
     return 0;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 17:13:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 17:13:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538638.838804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VZf-0004yc-De; Tue, 23 May 2023 17:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538638.838804; Tue, 23 May 2023 17:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1VZf-0004yR-Ac; Tue, 23 May 2023 17:13:27 +0000
Received: by outflank-mailman (input) for mailman id 538638;
 Tue, 23 May 2023 17:13:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZ9c=BM=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q1VZd-0003JO-Mf
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 17:13:25 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1f463b3e-f98d-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 19:13:24 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-541-GrPxPye5NiuvRcUuQhgTnA-1; Tue, 23 May 2023 13:13:18 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id A2F6A29ABA06;
 Tue, 23 May 2023 17:13:17 +0000 (UTC)
Received: from localhost (unknown [10.39.195.107])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1E427C1ED99;
 Tue, 23 May 2023 17:13:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f463b3e-f98d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684862002;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=czTdm2uRIahqSqIwAZcPFjYWVpvuZBgfbd8HjxPRqIY=;
	b=NTWymu704nh97POzPIa++gf4NVdb8NzWeFzTQOK4yVJyOrWJcyad59z+N+gw3GE1Iwd8F7
	2KU3NaF/z0XC8qqB9rLXAy/45fsW7jRmreAFtoD5cclVkvgl3+Euww5A8acKVgdc5G2Jfs
	h9LTJHiSDwc7/AnMBwCPI/q1VmhwXAk=
X-MC-Unique: GrPxPye5NiuvRcUuQhgTnA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Hanna Reitz <hreitz@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	xen-devel@lists.xenproject.org,
	eblake@redhat.com,
	Anthony Perard <anthony.perard@citrix.com>,
	qemu-block@nongnu.org
Subject: [PATCH v2 6/6] block: remove bdrv_co_io_plug() API
Date: Tue, 23 May 2023 13:13:00 -0400
Message-Id: <20230523171300.132347-7-stefanha@redhat.com>
In-Reply-To: <20230523171300.132347-1-stefanha@redhat.com>
References: <20230523171300.132347-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

No block driver implements .bdrv_co_io_plug() anymore. Get rid of the
function pointers.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 include/block/block-io.h         |  3 ---
 include/block/block_int-common.h | 11 ----------
 block/io.c                       | 37 --------------------------------
 3 files changed, 51 deletions(-)

diff --git a/include/block/block-io.h b/include/block/block-io.h
index a27e471a87..43af816d75 100644
--- a/include/block/block-io.h
+++ b/include/block/block-io.h
@@ -259,9 +259,6 @@ void coroutine_fn bdrv_co_leave(BlockDriverState *bs, AioContext *old_ctx);
 
 AioContext *child_of_bds_get_parent_aio_context(BdrvChild *c);
 
-void coroutine_fn GRAPH_RDLOCK bdrv_co_io_plug(BlockDriverState *bs);
-void coroutine_fn GRAPH_RDLOCK bdrv_co_io_unplug(BlockDriverState *bs);
-
 bool coroutine_fn GRAPH_RDLOCK
 bdrv_co_can_store_new_dirty_bitmap(BlockDriverState *bs, const char *name,
                                    uint32_t granularity, Error **errp);
diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
index 6492a1e538..958962aa3a 100644
--- a/include/block/block_int-common.h
+++ b/include/block/block_int-common.h
@@ -753,11 +753,6 @@ struct BlockDriver {
     void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_debug_event)(
         BlockDriverState *bs, BlkdebugEvent event);
 
-    /* io queue for linux-aio */
-    void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_io_plug)(BlockDriverState *bs);
-    void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_io_unplug)(
-        BlockDriverState *bs);
-
     /**
      * bdrv_drain_begin is called if implemented in the beginning of a
      * drain operation to drain and stop any internal sources of requests in
@@ -1227,12 +1222,6 @@ struct BlockDriverState {
     unsigned int in_flight;
     unsigned int serialising_in_flight;
 
-    /*
-     * counter for nested bdrv_io_plug.
-     * Accessed with atomic ops.
-     */
-    unsigned io_plugged;
-
     /* do we need to tell the quest if we have a volatile write cache? */
     int enable_write_cache;
 
diff --git a/block/io.c b/block/io.c
index 4d54fda593..56b0c1ce6c 100644
--- a/block/io.c
+++ b/block/io.c
@@ -3219,43 +3219,6 @@ void *qemu_try_blockalign0(BlockDriverState *bs, size_t size)
     return mem;
 }
 
-void coroutine_fn bdrv_co_io_plug(BlockDriverState *bs)
-{
-    BdrvChild *child;
-    IO_CODE();
-    assert_bdrv_graph_readable();
-
-    QLIST_FOREACH(child, &bs->children, next) {
-        bdrv_co_io_plug(child->bs);
-    }
-
-    if (qatomic_fetch_inc(&bs->io_plugged) == 0) {
-        BlockDriver *drv = bs->drv;
-        if (drv && drv->bdrv_co_io_plug) {
-            drv->bdrv_co_io_plug(bs);
-        }
-    }
-}
-
-void coroutine_fn bdrv_co_io_unplug(BlockDriverState *bs)
-{
-    BdrvChild *child;
-    IO_CODE();
-    assert_bdrv_graph_readable();
-
-    assert(bs->io_plugged);
-    if (qatomic_fetch_dec(&bs->io_plugged) == 1) {
-        BlockDriver *drv = bs->drv;
-        if (drv && drv->bdrv_co_io_unplug) {
-            drv->bdrv_co_io_unplug(bs);
-        }
-    }
-
-    QLIST_FOREACH(child, &bs->children, next) {
-        bdrv_co_io_unplug(child->bs);
-    }
-}
-
 /* Helper that undoes bdrv_register_buf() when it fails partway through */
 static void GRAPH_RDLOCK
 bdrv_register_buf_rollback(BlockDriverState *bs, void *host, size_t size,
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 23 18:14:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 18:14:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538663.838815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1WWD-0004iR-P6; Tue, 23 May 2023 18:13:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538663.838815; Tue, 23 May 2023 18:13:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1WWD-0004iK-MO; Tue, 23 May 2023 18:13:57 +0000
Received: by outflank-mailman (input) for mailman id 538663;
 Tue, 23 May 2023 18:13:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1WWC-0004i9-BJ; Tue, 23 May 2023 18:13:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1WWB-0004Ne-W2; Tue, 23 May 2023 18:13:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1WWB-0002Qs-Hv; Tue, 23 May 2023 18:13:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1WWB-0005ct-FZ; Tue, 23 May 2023 18:13:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CozlGr8TQBQupUXQHt4/Bvvkg1ncOM2EdxqRNG6H8Gc=; b=bZLVWaI/Id0r00EcJWYW/O7QSf
	bA4Bayu+X+8uEt+5vIkuUz0oWBZjOY2cO9iGZK/LNouxF9n4C0aUW9ttOdrfvpljaI3ySmPsIF3Dt
	NaSD4HUIozb9v0mSI3QzUusS3RDQ1tnyC0EUQEJJNkaMkV9YGL+Lzx4NQI155DNlzBNs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180916-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180916: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=886c0453cbf10eebd42a9ccf89c3e46eb389c357
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 18:13:55 +0000

flight 180916 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180916/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                886c0453cbf10eebd42a9ccf89c3e46eb389c357
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    6 days
Failing since        180699  2023-05-18 07:21:24 Z    5 days   25 attempts
Testing same since   180909  2023-05-23 03:48:14 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5427 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 23 18:15:13 2023
Message-ID: <f04c4e98-84b1-acde-5acd-2f5e18f591d7@citrix.com>
Date: Tue, 23 May 2023 19:14:42 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [XEN PATCH 13/15] build: fix compile.h compiler version command
 line
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-14-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-14-anthony.perard@citrix.com>

On 23/05/2023 5:38 pm, Anthony PERARD wrote:
> CFLAGS is just from Config.mk, instead use the flags used to build
> Xen.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>
> Notes:
>     I don't know if CFLAGS is even useful there, just --version without the
>     flags might produce the same result.

I can't think of any legitimate reason for CFLAGS to be here.

Any compiler which varies its output based on CFLAGS is probably
one we don't want to be using...

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 23 20:37:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 20:37:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538681.838835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1YlJ-0002Qo-Lj; Tue, 23 May 2023 20:37:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538681.838835; Tue, 23 May 2023 20:37:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1YlJ-0002Qh-IT; Tue, 23 May 2023 20:37:41 +0000
Received: by outflank-mailman (input) for mailman id 538681;
 Tue, 23 May 2023 20:37:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HP3A=BM=arndb.de=arnd@srs-se1.protection.inumbo.net>)
 id 1q1YlI-0002Qb-0b
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 20:37:40 +0000
Received: from wout2-smtp.messagingengine.com (wout2-smtp.messagingengine.com
 [64.147.123.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5780338-f9a9-11ed-8611-37d641c3527e;
 Tue, 23 May 2023 22:37:35 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.west.internal (Postfix) with ESMTP id 97711320039A;
 Tue, 23 May 2023 16:37:31 -0400 (EDT)
Received: from imap51 ([10.202.2.101])
 by compute6.internal (MEProxy); Tue, 23 May 2023 16:37:32 -0400
Received: by mailuser.nyi.internal (Postfix, from userid 501)
 id BE89CB60086; Tue, 23 May 2023 16:37:29 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5780338-f9a9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arndb.de; h=cc
	:cc:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1684874251; x=1684960651; bh=k+
	bwdQP9AGCGHKD++qZv5KMOabdeJ0JpVksfMHT8xqQ=; b=g60/YYiE2JmkqOBOUa
	XY1qnShfctdB+wEArZbQpSLhHcZyqwcCUMjwHTqw05t3g63Ymx7kfjiWEWM4ryH/
	QX67pkxSCtd9+iz3hQiaK0cIZH7EgJrbYd7L76Bu1iFroFmygo0oEt8DgU4AuX+Q
	y0S9gr8VYvoB6OZsxufVjAACGbFGmjYbMPQiyoj+JPlNRtjZpgp6rzYQWEvUa6C/
	W0XhB4M8rRm70IfJaO1VtTNVT6kpM4800mHA5rfCrrbwPN8hGvRNdZvCVkV9ZwwA
	lq+Wuuqd0xmxskSyzlPCTuRhiK+pmfFMKhqQVMTQxOdxleMO0P18kkIY5rpD+g3D
	AFZw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1684874251; x=1684960651; bh=k+bwdQP9AGCGH
	KD++qZv5KMOabdeJ0JpVksfMHT8xqQ=; b=JDyznUMubiXFh8oJaXvwlfxCJWNSa
	PnVs/XoD3hYJrb3FGcvkxKqbR7sWjQHwDk0eKf4k+BT8Z7a6wlYllN8yA20PcF67
	tYTUDjSPg04DOX5pYPGpYmFCC9tlQ9J8wme85X/WqSRyjynju0aR/z0fZEKk1btg
	nVdrIBtwvzImpuiE5xcpIQu0Or6pOI+inTFy34eCXWbhehA5hUoLQEkgGWIvaCJj
	UKyEnH5zn8Nm8VZ/B8aj3rduZZ7VFhvvKIVGZ03ldiXT6k+ubie3SxCcXFWPCZNj
	av+T6vdP6Vfd+hhThCeuoW+iPXSE9TtB3eB4tE2OZsYkXvQG5tExZncXQ==
Feedback-ID: i56a14606:Fastmail
X-Mailer: MessagingEngine.com Webmail Interface
User-Agent: Cyrus-JMAP/3.9.0-alpha0-441-ga3ab13cd6d-fm-20230517.001-ga3ab13cd
Mime-Version: 1.0
Message-Id: <418f75d5-5acb-4eba-96a5-5f9ec7f963a6@app.fastmail.com>
In-Reply-To: <35c82bbd-4c33-05da-1252-6eeec946ea22@oracle.com>
References: <20230519092905.3828633-1-arnd@kernel.org>
 <35c82bbd-4c33-05da-1252-6eeec946ea22@oracle.com>
Date: Tue, 23 May 2023 22:37:09 +0200
From: "Arnd Bergmann" <arnd@arndb.de>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
 "Arnd Bergmann" <arnd@kernel.org>, "Juergen Gross" <jgross@suse.com>,
 "Thomas Gleixner" <tglx@linutronix.de>, "Ingo Molnar" <mingo@redhat.com>,
 "Borislav Petkov" <bp@alien8.de>,
 "Dave Hansen" <dave.hansen@linux.intel.com>, x86@kernel.org,
 "Stefano Stabellini" <sstabellini@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>,
 "Oleksandr Tyshchenko" <oleksandr_tyshchenko@epam.com>,
 "Peter Zijlstra" <peterz@infradead.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH] [v2] x86: xen: add missing prototypes
Content-Type: text/plain

On Sat, May 20, 2023, at 00:24, Boris Ostrovsky wrote:
> On 5/19/23 5:28 AM, Arnd Bergmann wrote:
>
>> diff --git a/arch/x86/xen/smp.h b/arch/x86/xen/smp.h
>> index 22fb982ff971..81a7821dd07f 100644
>> --- a/arch/x86/xen/smp.h
>> +++ b/arch/x86/xen/smp.h
>> @@ -1,7 +1,11 @@
>>   /* SPDX-License-Identifier: GPL-2.0 */
>>   #ifndef _XEN_SMP_H
>>   
>> +void asm_cpu_bringup_and_idle(void);
>> +asmlinkage void cpu_bringup_and_idle(void);
>
> These can go under CONFIG_SMP

I'm not sure there is much point for the second one: since it's
only called from assembler, the #else path is never seen. But I
can do that for consistency if you like.

I generally prefer to avoid the extra #if checks
when there is no strict need for an empty stub.

I guess you also want me to add empty stubs for the other
functions that currently appear only in the #ifdef branch but
not in the #else, then?
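For reference, the pattern being debated can be sketched like this (a
stand-in MY_CONFIG_SMP macro is used here instead of the real Kconfig
machinery, and it is deliberately left undefined so the stub path is
the one that compiles, mirroring a !CONFIG_SMP build):

```c
#include <assert.h>

/* Stand-in for CONFIG_SMP; left undefined so the stub branch is used. */
#ifdef MY_CONFIG_SMP
void cpu_bringup_and_idle(void);          /* real version lives elsewhere */
#else
static inline void cpu_bringup_and_idle(void) { }   /* empty stub */
#endif

/* Callers compile unchanged either way, with no #if at the call site. */
static int probe(void)
{
	cpu_bringup_and_idle();
	return 1;
}
```

The point of the stub is exactly that probe() needs no conditional
compilation of its own.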

>> +void xen_force_evtchn_callback(void);
>
> These ...
>
>> +pteval_t xen_pte_val(pte_t pte);
>> +pgdval_t xen_pgd_val(pgd_t pgd);
>> +pte_t xen_make_pte(pteval_t pte);
>> +pgd_t xen_make_pgd(pgdval_t pgd);
>> +pmdval_t xen_pmd_val(pmd_t pmd);
>> +pmd_t xen_make_pmd(pmdval_t pmd);
>> +pudval_t xen_pud_val(pud_t pud);
>> +pud_t xen_make_pud(pudval_t pud);
>> +p4dval_t xen_p4d_val(p4d_t p4d);
>> +p4d_t xen_make_p4d(p4dval_t p4d);
>> +pte_t xen_make_pte_init(pteval_t pte);
>> +void xen_start_kernel(struct start_info *si);
>
>
> ... should go under '#ifdef CONFIG_XEN_PV'

What should the stubs do here? I guess just return the
unmodified value?
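If so, a minimal sketch of what identity stubs could look like (the
wrapper types are simplified here to single-member structs so the
example is self-contained; the real pte_t/pteval_t definitions live
in the x86 headers):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's page-table wrapper types. */
typedef uint64_t pteval_t;
typedef struct { pteval_t pte; } pte_t;

/* Without CONFIG_XEN_PV there is no hypervisor translation to apply,
 * so the stubs just pass the value through unmodified. */
static inline pteval_t xen_pte_val(pte_t pte) { return pte.pte; }
static inline pte_t xen_make_pte(pteval_t pte) { return (pte_t){ .pte = pte }; }
```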

    Arnd


From xen-devel-bounces@lists.xenproject.org Tue May 23 20:57:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 20:57:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538686.838845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Z4E-0004rn-7B; Tue, 23 May 2023 20:57:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538686.838845; Tue, 23 May 2023 20:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1Z4E-0004rg-44; Tue, 23 May 2023 20:57:14 +0000
Received: by outflank-mailman (input) for mailman id 538686;
 Tue, 23 May 2023 20:57:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0MEn=BM=kernel.org=arnd@srs-se1.protection.inumbo.net>)
 id 1q1Z4D-0004ra-71
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 20:57:13 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 62b00f6d-f9ac-11ed-b22d-6b7b168915f2;
 Tue, 23 May 2023 22:57:11 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E017960ED2;
 Tue, 23 May 2023 20:57:09 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2CAB8C433EF;
 Tue, 23 May 2023 20:57:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62b00f6d-f9ac-11ed-b22d-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684875429;
	bh=VDaXY9Z26DgqTQfA7kdWz2RLA0Qbr8V8gDzJu6fxAGE=;
	h=From:To:Cc:Subject:Date:From;
	b=NdF7czWO5Ya8oF7Qy46hT26QDGGQDjALDUg69WYCsXtnkaxipvAViOFpyY2QsLJ+0
	 6MEaVIbsEaQJaK4TJDtQulv94/YeKuKLvuww1qNDq2gP1NPAYFauQNzAjhTNkPVUzf
	 V44Hi/SXNNJPxg2tpMCYUHWJvxWjhcOTq6utvKbvJEoqLoRFlr24ONC7GJe8taKJjm
	 6MiIw2IhCyvbYBqjy59HghYmcVsbUvyVGQU/sWjm34lzYGDNW8q/8/kigq0SgWxJAT
	 rk2CduI212QtBRPBC2otAcr12GCESPO1ahrnrtK4SN8J2eJRWky1ZbU8j/9YlQpEqt
	 EmBhEuL4buyuA==
From: Arnd Bergmann <arnd@kernel.org>
To: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Roger Pau Monne <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] [v3] x86: xen: add missing prototypes
Date: Tue, 23 May 2023 22:56:46 +0200
Message-Id: <20230523205703.2116910-1-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

These functions are all called from assembler files, or from inline assembler,
so there is no immediate need for a prototype in a header. However, if
-Wmissing-prototypes is enabled, the compiler warns about them:

arch/x86/xen/efi.c:130:13: error: no previous prototype for 'xen_efi_init' [-Werror=missing-prototypes]
arch/x86/platform/pvh/enlighten.c:120:13: error: no previous prototype for 'xen_prepare_pvh' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:358:20: error: no previous prototype for 'xen_pte_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:366:20: error: no previous prototype for 'xen_pgd_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:372:17: error: no previous prototype for 'xen_make_pte' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:380:17: error: no previous prototype for 'xen_make_pgd' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:387:20: error: no previous prototype for 'xen_pmd_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:425:17: error: no previous prototype for 'xen_make_pmd' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:432:20: error: no previous prototype for 'xen_pud_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:438:17: error: no previous prototype for 'xen_make_pud' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:522:20: error: no previous prototype for 'xen_p4d_val' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:528:17: error: no previous prototype for 'xen_make_p4d' [-Werror=missing-prototypes]
arch/x86/xen/mmu_pv.c:1442:17: error: no previous prototype for 'xen_make_pte_init' [-Werror=missing-prototypes]
arch/x86/xen/enlighten_pv.c:1233:34: error: no previous prototype for 'xen_start_kernel' [-Werror=missing-prototypes]
arch/x86/xen/irq.c:22:14: error: no previous prototype for 'xen_force_evtchn_callback' [-Werror=missing-prototypes]
arch/x86/entry/common.c:302:24: error: no previous prototype for 'xen_pv_evtchn_do_upcall' [-Werror=missing-prototypes]

Declare all of them in an appropriate header file to avoid the warnings.
For consistency, also move the asm_cpu_bringup_and_idle() declaration out of
smp_pv.c.
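As a toy reproduction of the warning class being silenced (hypothetical
function name, not the actual kernel code): compiling the definition
below with `gcc -Wmissing-prototypes -c` warns until the prototype line
is visible, which is what moving the declarations into a header achieves.

```c
#include <assert.h>

/* The fix: a prototype visible before the definition, normally
 * placed in a shared header such as xen-ops.h. */
int entry_from_asm(int x);

/* In the real tree such functions are only called from assembly,
 * so no C caller would otherwise force a declaration to exist. */
int entry_from_asm(int x)
{
	return x + 1;
}
```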

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
v3: move declarations of conditional functions into an #ifdef, with stubs
v2: fix up changelog
---
 arch/x86/xen/efi.c     |  2 ++
 arch/x86/xen/smp.h     | 11 +++++++++++
 arch/x86/xen/smp_pv.c  |  1 -
 arch/x86/xen/xen-ops.h | 26 ++++++++++++++++++++++++++
 include/xen/xen.h      |  3 +++
 5 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
index 7d7ffb9c826a..863d0d6b3edc 100644
--- a/arch/x86/xen/efi.c
+++ b/arch/x86/xen/efi.c
@@ -16,6 +16,8 @@
 #include <asm/setup.h>
 #include <asm/xen/hypercall.h>
 
+#include "xen-ops.h"
+
 static efi_char16_t vendor[100] __initdata;
 
 static efi_system_table_t efi_systab_xen __initdata = {
diff --git a/arch/x86/xen/smp.h b/arch/x86/xen/smp.h
index 22fb982ff971..9367502281dc 100644
--- a/arch/x86/xen/smp.h
+++ b/arch/x86/xen/smp.h
@@ -2,6 +2,10 @@
 #ifndef _XEN_SMP_H
 
 #ifdef CONFIG_SMP
+
+void asm_cpu_bringup_and_idle(void);
+asmlinkage void cpu_bringup_and_idle(void);
+
 extern void xen_send_IPI_mask(const struct cpumask *mask,
 			      int vector);
 extern void xen_send_IPI_mask_allbutself(const struct cpumask *mask,
@@ -29,6 +33,13 @@ struct xen_common_irq {
 };
 #else /* CONFIG_SMP */
 
+static inline void asm_cpu_bringup_and_idle(void)
+{
+}
+static inline void cpu_bringup_and_idle(void)
+{
+}
+
 static inline int xen_smp_intr_init(unsigned int cpu)
 {
 	return 0;
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index a92e8002b5cf..d5ae5de2daa2 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -55,7 +55,6 @@ static DEFINE_PER_CPU(struct xen_common_irq, xen_irq_work) = { .irq = -1 };
 static DEFINE_PER_CPU(struct xen_common_irq, xen_pmu_irq) = { .irq = -1 };
 
 static irqreturn_t xen_irq_work_interrupt(int irq, void *dev_id);
-void asm_cpu_bringup_and_idle(void);
 
 static void cpu_bringup(void)
 {
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 6d7f6318fc07..eb4cb30570c7 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -146,12 +146,38 @@ int xen_cpuhp_setup(int (*cpu_up_prepare_cb)(unsigned int),
 void xen_pin_vcpu(int cpu);
 
 void xen_emergency_restart(void);
+void xen_force_evtchn_callback(void);
+
 #ifdef CONFIG_XEN_PV
 void xen_pv_pre_suspend(void);
 void xen_pv_post_suspend(int suspend_cancelled);
+pteval_t xen_pte_val(pte_t pte);
+pgdval_t xen_pgd_val(pgd_t pgd);
+pmdval_t xen_pmd_val(pmd_t pmd);
+pudval_t xen_pud_val(pud_t pud);
+p4dval_t xen_p4d_val(p4d_t p4d);
+pte_t xen_make_pte(pteval_t pte);
+pgd_t xen_make_pgd(pgdval_t pgd);
+pmd_t xen_make_pmd(pmdval_t pmd);
+pud_t xen_make_pud(pudval_t pud);
+p4d_t xen_make_p4d(p4dval_t p4d);
+pte_t xen_make_pte_init(pteval_t pte);
+void xen_start_kernel(struct start_info *si);
 #else
 static inline void xen_pv_pre_suspend(void) {}
 static inline void xen_pv_post_suspend(int suspend_cancelled) {}
+static inline pteval_t xen_pte_val(pte_t pte)	{ return pte.pte; }
+static inline pgdval_t xen_pgd_val(pgd_t pgd)	{ return pgd.pgd; }
+static inline pmdval_t xen_pmd_val(pmd_t pmd)	{ return pmd.pmd; }
+static inline pudval_t xen_pud_val(pud_t pud)	{ return pud.pud; }
+static inline p4dval_t xen_p4d_val(p4d_t p4d)	{ return p4d.p4d; }
+static inline pte_t xen_make_pte(pteval_t pte)	{ return (pte_t){pte}; }
+static inline pgd_t xen_make_pgd(pgdval_t pgd)	{ return (pgd_t){pgd}; }
+static inline pmd_t xen_make_pmd(pmdval_t pmd)	{ return (pmd_t){pmd}; }
+static inline pud_t xen_make_pud(pudval_t pud)	{ return (pud_t){pud}; }
+static inline p4d_t xen_make_p4d(p4dval_t p4d)	{ return (p4d_t){p4d}; }
+static inline pte_t xen_make_pte_init(pteval_t pte)	{ return (pte_t){pte}; }
+static inline void xen_start_kernel(struct start_info *si) {}
 #endif
 
 #ifdef CONFIG_XEN_PVHVM
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 0efeb652f9b8..f989162983c3 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -31,6 +31,9 @@ extern uint32_t xen_start_flags;
 
 #include <xen/interface/hvm/start_info.h>
 extern struct hvm_start_info pvh_start_info;
+void xen_prepare_pvh(void);
+struct pt_regs;
+void xen_pv_evtchn_do_upcall(struct pt_regs *regs);
 
 #ifdef CONFIG_XEN_DOM0
 #include <xen/interface/xen.h>
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue May 23 23:10:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 23 May 2023 23:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538690.838855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1b94-0001wf-J3; Tue, 23 May 2023 23:10:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538690.838855; Tue, 23 May 2023 23:10:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1b94-0001wY-Fz; Tue, 23 May 2023 23:10:22 +0000
Received: by outflank-mailman (input) for mailman id 538690;
 Tue, 23 May 2023 23:10:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=k9aD=BM=oracle.com=boris.ostrovsky@srs-se1.protection.inumbo.net>)
 id 1q1b93-0001wS-0m
 for xen-devel@lists.xenproject.org; Tue, 23 May 2023 23:10:21 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fa9b1eca-f9be-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 01:10:17 +0200 (CEST)
Received: from pps.filterd (m0246629.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 34NM4TkA001429; Tue, 23 May 2023 23:09:17 GMT
Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta01.appoci.oracle.com [138.1.114.2])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3qpp426dr3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 23 May 2023 23:09:16 +0000
Received: from pps.filterd
 (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 34NLhbMt027327; Tue, 23 May 2023 23:09:16 GMT
Received: from nam11-co1-obe.outbound.protection.outlook.com
 (mail-co1nam11lp2175.outbound.protection.outlook.com [104.47.56.175])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id
 3qqk2e7jjf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 23 May 2023 23:09:16 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by SJ0PR10MB5787.namprd10.prod.outlook.com (2603:10b6:a03:3dd::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Tue, 23 May
 2023 23:09:14 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::1be4:11e3:2446:aee4]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::1be4:11e3:2446:aee4%3]) with mapi id 15.20.6411.030; Tue, 23 May 2023
 23:09:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa9b1eca-f9be-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=message-id : date :
 to : cc : references : from : subject : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2023-03-30;
 bh=+qwBvZzx6z2ER5JvViHJG0G/iidDC76dmxZ2VENB0ME=;
 b=HWMuAaT+tCG9LpTLgmw8t9Uie6Q8VlTPqUlZ1R8fY9xrMUYzOT+po4XSlvQE52fI+/gE
 0FhMftfMDcLFR8GH7cgeJKOHHuMDOG3kJZ+B+AKG8RaNTzSn7uHVRyDW1YDS7EH1A6sH
 eF+M8yBQhIgQf3RZe0AH6Qbp7ahLxSrbI4ELzhEy9/JUVdDwtlAuHdX27Eq7DTWvBH2u
 M5GKyTiTyVduBDMY1Wjqw+I3DfooXXTlszvzcDDb+LsXCEO98pk4pkhU3iF4wife14Bf
 CvjwxK8Rmr6f9HIczMjWuzsAMOX2JJxlXDUo9x2bqra6qgEXpY3vtQkg7xOZtMYCQJjg 7A== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Girz8uPNQKjRrsInWgnJ1EQJOUknmipgTiTwpCEHrpcxRUOEGGLtJEt+AHcgxl68K9VQRcuUxCI5QYib4g+iXMj/OJzp2wAiGWd6WD3ySb6BdWMrmDD5tvBhLKxz4ZTx19sEpNT7Y71K7bUHLWViye9JljFmNEp5RMDu01bKVUnW8Nh/xCFFIscj9uPcbnfzKWiBimxZuTvYvej3CocPbMg6lpeOtQL5pOo6cOVf/CITYyo/lgmyiw6bVAjP53+jWbNKADS4YGlM7NbabbM66br1cc1/OIfDw3uWdMmIW+5PTS+RWWeTbS+Kjsr0fc0teP/QmSasPTRJKLyLKwZ9Gw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+qwBvZzx6z2ER5JvViHJG0G/iidDC76dmxZ2VENB0ME=;
 b=adcOjgBJiNBsBxv+04SWIsTmzbRX9irC/NnBkY5Jut4VBHoenkN9xvAARdcdIbpokb/z0VQ7Rk2D/ei9SqXLt3DR4uQDSTl2iVbnG2iCNMWJw+e6jDQFciBP92UK2FKHJWUcaxZ29QUzRvHu2WVg7RtezPOY62Ifv+v0lE2y6xbn/xkP0LsjA1MWBNYCOMk00XBLJSNF8/v5TJXkztW9CRxLumsOLRxXvVidmJtJEp6Z3YiiB3lfkG8wWiOhaWU3/fP7ciIYOL6VbsHlq9iVKjeRy/0FmCrd9Z9EegmtkLYPqU7kMhJtSJYQLYaabE2kuSJq2dc4HcmaJfksbs94zQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+qwBvZzx6z2ER5JvViHJG0G/iidDC76dmxZ2VENB0ME=;
 b=xIkFYSZPEvUwLhcloK7BOGDsYbK2eyOqlc6jNn8rkeDHjE4zQDD7Ci6QKM928E7Cl+Nheeln+s0A1X4Nd9Apsw+kGnRtfYb0WNoaaZU71iydFJTy3ER4CIPWCpCDe4g/7EM4Dx39pMYlZo+4Mt0SLi6Vz5aOcj88HImoG+AZUOA=
Message-ID: <2e5e627e-9f7e-ae63-05a3-d2b55e0703ba@oracle.com>
Date: Tue, 23 May 2023 19:09:06 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Content-Language: en-US
To: Arnd Bergmann <arnd@arndb.de>, Arnd Bergmann <arnd@kernel.org>,
        Juergen Gross <jgross@suse.com>, Thomas Gleixner <tglx@linutronix.de>,
        Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
        Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
        Stefano Stabellini <sstabellini@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>,
        Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
        Peter Zijlstra <peterz@infradead.org>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
References: <20230519092905.3828633-1-arnd@kernel.org>
 <35c82bbd-4c33-05da-1252-6eeec946ea22@oracle.com>
 <418f75d5-5acb-4eba-96a5-5f9ec7f963a6@app.fastmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH] [v2] x86: xen: add missing prototypes
In-Reply-To: <418f75d5-5acb-4eba-96a5-5f9ec7f963a6@app.fastmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SN7PR04CA0206.namprd04.prod.outlook.com
 (2603:10b6:806:126::31) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BLAPR10MB5009:EE_|SJ0PR10MB5787:EE_
X-MS-Office365-Filtering-Correlation-Id: 39426e25-8119-4eef-9d0e-08db5be2b98b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0



On 5/23/23 4:37 PM, Arnd Bergmann wrote:
> On Sat, May 20, 2023, at 00:24, Boris Ostrovsky wrote:
>> On 5/19/23 5:28 AM, Arnd Bergmann wrote:
>>
>>> diff --git a/arch/x86/xen/smp.h b/arch/x86/xen/smp.h
>>> index 22fb982ff971..81a7821dd07f 100644
>>> --- a/arch/x86/xen/smp.h
>>> +++ b/arch/x86/xen/smp.h
>>> @@ -1,7 +1,11 @@
>>>    /* SPDX-License-Identifier: GPL-2.0 */
>>>    #ifndef _XEN_SMP_H
>>>    
>>> +void asm_cpu_bringup_and_idle(void);
>>> +asmlinkage void cpu_bringup_and_idle(void);
>>
>> These can go under CONFIG_SMP
> 
> Not sure if there is much point for the second one, since
> it's only called from assembler, so the #else path is
> never seen, but I can do that for consistency if you
> like.
> 
> I generally prefer to avoid the extra #if checks
> when there is no strict need for an empty stub.

Do we need the empty stubs? Neither of these should be referenced if !CONFIG_SMP (or, more precisely, if !CONFIG_XEN_PV_SMP).

> 
> I guess you also want me to add the empty stubs for the
> other functions that only have a bit in #ifdef but not
> #else then?
> 
>>> +void xen_force_evtchn_callback(void);
>>
>> These ...
>>
>>> +pteval_t xen_pte_val(pte_t pte);
>>> +pgdval_t xen_pgd_val(pgd_t pgd);
>>> +pte_t xen_make_pte(pteval_t pte);
>>> +pgd_t xen_make_pgd(pgdval_t pgd);
>>> +pmdval_t xen_pmd_val(pmd_t pmd);
>>> +pmd_t xen_make_pmd(pmdval_t pmd);
>>> +pudval_t xen_pud_val(pud_t pud);
>>> +pud_t xen_make_pud(pudval_t pud);
>>> +p4dval_t xen_p4d_val(p4d_t p4d);
>>> +p4d_t xen_make_p4d(p4dval_t p4d);
>>> +pte_t xen_make_pte_init(pteval_t pte);
>>> +void xen_start_kernel(struct start_info *si);
>>
>>
>> ... should go under '#ifdef CONFIG_XEN_PV'
> 
> What should the stubs do here? I guess just return the
> unmodified value?


Same here -- these should only be referenced if CONFIG_XEN_PV is defined, which is why I suggested moving them under the #ifdef.


-boris


From xen-devel-bounces@lists.xenproject.org Tue May 23 23:17:00 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180913-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180913: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-raw:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start.2:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c7908869ac26961a3919491705e521179ad3fc0e
X-Osstest-Versions-That:
    xen=c7908869ac26961a3919491705e521179ad3fc0e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 23 May 2023 23:16:54 +0000

flight 180913 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180913/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install fail in 180900 pass in 180913
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install    fail pass in 180900
 test-amd64-i386-libvirt-raw   7 xen-install                fail pass in 180900
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180900
 test-amd64-i386-xl-vhd       22 guest-start.2              fail pass in 180900

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop      fail blocked in 180900
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail in 180900 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180900
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180900
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180900
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180900
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180900
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180900
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180900
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180900
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180900
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180900
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180900
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c7908869ac26961a3919491705e521179ad3fc0e
baseline version:
 xen                  c7908869ac26961a3919491705e521179ad3fc0e

Last test of basis   180913  2023-05-23 07:17:53 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed May 24 00:27:12 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180918-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.16-testing test] 180918: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-4.16-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit2:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-credit1:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-vhd:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-xl-xsm:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-raw:hosts-allocate:starved:nonblocking
    xen-4.16-testing:test-arm64-arm64-libvirt-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=b0806d84d48d983d40a29534e663652887287a78
X-Osstest-Versions-That:
    xen=7251cea957cbe4a0772651a4ab110ed76f689f96
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 00:26:54 +0000

flight 180918 xen-4.16-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180918/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180478
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180478
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180478
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180478
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180478
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180478
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180478
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180478
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180478
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180478
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180478
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180478
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl           3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-credit1   3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-vhd       3 hosts-allocate               starved  n/a
 test-arm64-arm64-xl-xsm       3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-raw  3 hosts-allocate               starved  n/a
 test-arm64-arm64-libvirt-xsm  3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  b0806d84d48d983d40a29534e663652887287a78
baseline version:
 xen                  7251cea957cbe4a0772651a4ab110ed76f689f96

Last test of basis   180478  2023-04-29 03:21:22 Z   24 days
Testing same since   180918  2023-05-23 13:08:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>
  Yann Dirson <yann.dirson@vates.fr>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          starved 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 starved 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      starved 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  starved 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  starved 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 starved 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      starved 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7251cea957..b0806d84d4  b0806d84d48d983d40a29534e663652887287a78 -> stable-4.16


From xen-devel-bounces@lists.xenproject.org Wed May 24 01:14:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 01:14:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538751.838946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1d4c-00077s-Ie; Wed, 24 May 2023 01:13:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538751.838946; Wed, 24 May 2023 01:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1d4c-00077l-Ft; Wed, 24 May 2023 01:13:54 +0000
Received: by outflank-mailman (input) for mailman id 538751;
 Wed, 24 May 2023 01:13:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hEuA=BN=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q1d4b-00077f-Mn
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 01:13:53 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3d6d259a-f9d0-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 03:13:50 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5336E60C05;
 Wed, 24 May 2023 01:13:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C4B67C4339B;
 Wed, 24 May 2023 01:13:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d6d259a-f9d0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684890828;
	bh=Gop+BTAEidpVZdh+7oP0MH+DpcN3udcDsLwAzCMd8kA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=C/Vugyj41E8Erkh4hHLNsYwzs4IT1EcfoHhUyobLMMoGs/SIidG3vo+zgngZiPGf1
	 9ACLl563cepGlYnJz2zUWdpIsbNAlw3Pt+ZjjMlQo/k6cV7gr+9NHRRdCtPJSOKiLd
	 nbyJKdcpwddJezrYfw1gpfBSKow23bB+vWcxBnkzmH9zWFkwYz6uuXGDFJsIaKZ03l
	 0zox2OKaGIcD8HzmgA8FdTfNGROw2e9pOAeRQAotLiO9QfUrE27gLUzYx0DYDlV8Bu
	 fNNrnSzO1PWP9EllCnB5+gIU5/Cf5+XrDPKlgW8xZ5WwehrgU7u73Lsxh2JUXjOd0k
	 sNLmXiUrmLfUA==
Date: Tue, 23 May 2023 18:13:43 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com, 
    xenia.ragiadakou@amd.com, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: PVH Dom0 related UART failure
In-Reply-To: <818859a6-883a-81c0-1222-84c62ada3200@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305231756500.44000@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop> <ZGX/Pvgy3+onJOJZ@Air-de-Roger> <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop> <ZGcu7EWW1cuNjwDA@Air-de-Roger> <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
 <ZGig68UTddfEwR6P@Air-de-Roger> <alpine.DEB.2.22.394.2305221520350.1553709@ubuntu-linux-20-04-desktop> <818859a6-883a-81c0-1222-84c62ada3200@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1473932416-1684889972=:44000"
Content-ID: <alpine.DEB.2.22.394.2305231759370.44000@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1473932416-1684889972=:44000
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305231759371.44000@ubuntu-linux-20-04-desktop>

On Tue, 23 May 2023, Jan Beulich wrote:
> On 23.05.2023 00:20, Stefano Stabellini wrote:
> > On Sat, 20 May 2023, Roger Pau Monné wrote:
> >> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> >> index ec2e978a4e6b..0ff8e940fa8d 100644
> >> --- a/xen/drivers/vpci/header.c
> >> +++ b/xen/drivers/vpci/header.c
> >> @@ -289,6 +289,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>       */
> >>      for_each_pdev ( pdev->domain, tmp )
> >>      {
> >> +        if ( !tmp->vpci )
> >> +        {
> >> +            printk(XENLOG_G_WARNING "%pp: not handled by vPCI for %pd\n",
> >> +                   &tmp->sbdf, pdev->domain);
> >> +            continue;
> >> +        }
> >> +
> >>          if ( tmp == pdev )
> >>          {
> >>              /*
> >> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> >> index 652807a4a454..0baef3a8d3a1 100644
> >> --- a/xen/drivers/vpci/vpci.c
> >> +++ b/xen/drivers/vpci/vpci.c
> >> @@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
> >>      unsigned int i;
> >>      int rc = 0;
> >>  
> >> -    if ( !has_vpci(pdev->domain) )
> >> +    if ( !has_vpci(pdev->domain) ||
> >> +         /*
> >> +          * Ignore RO and hidden devices, those are in use by Xen and vPCI
> >> +          * won't work on them.
> >> +          */
> >> +         pci_get_pdev(dom_xen, pdev->sbdf) )
> >>          return 0;
> >>  
> >>      /* We should not get here twice for the same device. */
> > 
> > 
> > Now this patch works! Thank you!! :-)
> > 
> > You can check the full logs here
> > https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4329259080
> > 
> > Is the patch ready to be upstreamed aside from the commit message?
> 
> I don't think so. vPCI ought to work on "r/o" devices. Out of curiosity,
> have you also tried my (hackish and hence RFC) patch [1]?
> 
> [1] https://lists.xen.org/archives/html/xen-devel/2021-08/msg01489.html

I don't know the code well enough to say which is the best solution, so
I'll let you and Roger figure it out. I would only kindly ask that we
solve this within a few days, so that we can enable the real-hardware
PVH test in gitlab-ci as soon as possible. I think it is critical, as it
will allow us to catch many real issues going forward.

I can certainly test your patch. By the way, it is also really easy for
you to do it yourself: simply push a branch to a repo on gitlab and
watch the results. If you are interested, let me know and I can give you
a tutorial; you just need to create a repo and register the gitlab
runner, and voilà.

This is the outcome:

https://gitlab.com/xen-project/people/sstabellini/xen/-/pipelines/876808194


(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:00.2
(XEN) PCI add device 0000:00:01.0
(XEN) PCI add device 0000:00:02.0
(XEN) Assertion 'd == dom_xen && system_state < SYS_STATE_active' failed at drivers/vpci/header.c:313
(XEN) ----[ Xen-4.18-unstable  x86_64  debug=y  Tainted:   C    ]----
(XEN) CPU:    14
(XEN) RIP:    e008:[<ffff82d04026839e>] drivers/vpci/header.c#modify_bars+0x3ba/0x4a3
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v0)
(XEN) rax: ffff830390164000   rbx: ffff83038bd8a7f0   rcx: 0000000000000010
(XEN) rdx: ffff83038e3b7fff   rsi: ffff83038e3c83a0   rdi: ffff83038e3c8398
(XEN) rbp: ffff83038e3b7c08   rsp: ffff83038e3b7b98   r8:  0000000000000001
(XEN) r9:  ffff83038dcfa000   r10: 000000000000000e   r11: 0000000000000000
(XEN) r12: 0000000000000007   r13: 00000000000dcc03   r14: ffff83039016ad10
(XEN) r15: 00000000000dcc00   cr0: 000000008005003b   cr4: 0000000000f506e0
(XEN) cr3: 000000038ddfe000   cr2: ffff88814e3ff000
(XEN) fsb: 0000000000000000   gsb: ffff888149a00000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen code around <ffff82d04026839e> (drivers/vpci/header.c#modify_bars+0x3ba/0x4a3):
(XEN)  3d c4 d1 37 00 02 76 c0 <0f> 0b 4c 8b 75 b0 0f ae e8 48 83 7d 98 00 74 2b
(XEN) Xen stack trace from rsp=ffff83038e3b7b98:
(XEN)    0000000000000002 ffff830390150230 ffff830390164000 ffff830390164158
(XEN)    ffff830390150230 00ff830300000000 ffff83038dcf8b00 0000000000000003
(XEN)    0000000000000206 ffff83038bd8c010 0000000000000000 0000000000000002
(XEN)    0000000000000002 0000000000000004 ffff83038e3b7c18 ffff82d040268909
(XEN)    ffff83038e3b7ca0 ffff82d040267998 000000118e3b7ca0 0000000000117803
(XEN)    0000000400000000 ffff830390150230 0000000000000000 0000000000000000
(XEN)    0000000400000002 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000002 0000000000000000 0000000000000004 0000000000000000
(XEN)    ffff82d04041df40 ffff83038e3b7cd0 ffff82d0402cb649 0000001140332c9e
(XEN)    ffff83038e3b7d88 0000000000000000 ffff83038dd094a0 ffff83038e3b7d30
(XEN)    ffff82d0402caa72 ffff83038e3b7ce0 00000cfc00000002 0000000000000000
(XEN)    0000000000000002 0000000000000000 ffff83038dd094a0 ffff83038e3b7d88
(XEN)    0000000000000001 0000000000000000 0000000000000000 ffff83038e3b7d58
(XEN)    ffff82d0402cab08 0000000000000002 ffff83038dcfa000 ffff83038e3b7e28
(XEN)    ffff83038e3b7dd0 ffff82d0402ba4ee 0000000000000cfc 0000000000000000
(XEN)    ffff83038e38f000 0000000000000000 0000000000000cfc 0000000000000000
(XEN)    0000000200000001 0001000000000000 0000000000000002 0000000000000002
(XEN)    0000000000000000 ffff83038e3b7e44 ffff83038dcfa000 ffff83038e3b7e10
(XEN)    ffff82d0402ba871 0000000000000000 ffff83038e3b7e44 0000000000000002
(XEN)    0000000000000cfc ffff83038dcf6000 0000000000000000 ffff83038e3b7e30
(XEN) Xen call trace:
(XEN)    [<ffff82d04026839e>] R drivers/vpci/header.c#modify_bars+0x3ba/0x4a3
(XEN)    [<ffff82d040268909>] F drivers/vpci/header.c#cmd_write+0x2c/0x3b
(XEN)    [<ffff82d040267998>] F vpci_write+0x14c/0x268
(XEN)    [<ffff82d0402cb649>] F arch/x86/hvm/io.c#vpci_portio_write+0xa0/0xa7
(XEN)    [<ffff82d0402caa72>] F hvm_process_io_intercept+0x203/0x26f
(XEN)    [<ffff82d0402cab08>] F hvm_io_intercept+0x2a/0x4c
(XEN)    [<ffff82d0402ba4ee>] F arch/x86/hvm/emulate.c#hvmemul_do_io+0x29a/0x5ee
(XEN)    [<ffff82d0402ba871>] F arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x2f/0x6a
(XEN)    [<ffff82d0402bbd10>] F hvmemul_do_pio_buffer+0x33/0x35
(XEN)    [<ffff82d0402cb41d>] F handle_pio+0x6d/0x1b8
(XEN)    [<ffff82d0402a2e4d>] F svm_vmexit_handler+0x10fe/0x18e2
(XEN)    [<ffff82d0402034dc>] F svm_stgi_label+0x5/0x15


You can also check how I applied the patch here:
https://gitlab.com/xen-project/people/sstabellini/xen/-/commit/851e76bf0a1cf6f040b6e90795d216ebfcc069cc
--8323329-1473932416-1684889972=:44000--


From xen-devel-bounces@lists.xenproject.org Wed May 24 05:04:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 05:04:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538755.838955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1gf7-00055Z-Os; Wed, 24 May 2023 05:03:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538755.838955; Wed, 24 May 2023 05:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1gf7-00055S-Li; Wed, 24 May 2023 05:03:49 +0000
Received: by outflank-mailman (input) for mailman id 538755;
 Wed, 24 May 2023 05:03:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1gf6-00055I-Qh; Wed, 24 May 2023 05:03:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1gf6-00029O-KF; Wed, 24 May 2023 05:03:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1gf6-000313-0E; Wed, 24 May 2023 05:03:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1gf5-0007PV-VC; Wed, 24 May 2023 05:03:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g+Adb7pvsPEvUJcmOYyQbzOJrzlfRpLeDNB2sXOtB1M=; b=GE+FYwlNJXwqC1Z6NbniehrCfq
	Me72EXJqr5vWwzEbKIOR451PGBHoFOs1b83jJNGco/8NH9mgEY64mvgaA+AnMsW/mtN05yiJ3CVQt
	i5lz9mP9Pb/AwbdobIWInd3EnMpCcWtvpqqe1sFm4m8fk1l/NiVE/9b3GYVmidnTnMI8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180917-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180917: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ae8373a5add4ea39f032563cf12a02946d1e3546
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 05:03:47 +0000

flight 180917 linux-linus real [real]
flight 180923 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180917/
http://logs.test-lab.xenproject.org/osstest/logs/180923/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ae8373a5add4ea39f032563cf12a02946d1e3546
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   37 days
Failing since        180281  2023-04-17 06:24:36 Z   36 days   68 attempts
Testing same since   180907  2023-05-23 01:11:53 Z    1 days    2 attempts

------------------------------------------------------------
2480 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 313746 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 24 06:18:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 06:18:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538762.838966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1hpY-000416-6Z; Wed, 24 May 2023 06:18:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538762.838966; Wed, 24 May 2023 06:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1hpY-00040z-3e; Wed, 24 May 2023 06:18:40 +0000
Received: by outflank-mailman (input) for mailman id 538762;
 Wed, 24 May 2023 06:18:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1hpW-00040d-Ab
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 06:18:38 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0612.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::612])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ce3eeb4b-f9fa-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 08:18:32 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7003.eurprd04.prod.outlook.com (2603:10a6:10:11d::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 06:18:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 06:18:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce3eeb4b-f9fa-11ed-8611-37d641c3527e
Message-ID: <f0ec68a2-bfd9-22e8-39a8-24ba9a7505ca@suse.com>
Date: Wed, 24 May 2023 08:18:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: PVH Dom0 related UART failure
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 marmarek@invisiblethingslab.com, xenia.ragiadakou@amd.com,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
 <ZGcu7EWW1cuNjwDA@Air-de-Roger>
 <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
 <ZGig68UTddfEwR6P@Air-de-Roger>
 <alpine.DEB.2.22.394.2305221520350.1553709@ubuntu-linux-20-04-desktop>
 <818859a6-883a-81c0-1222-84c62ada3200@suse.com>
 <alpine.DEB.2.22.394.2305231756500.44000@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305231756500.44000@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 24.05.2023 03:13, Stefano Stabellini wrote:
> On Tue, 23 May 2023, Jan Beulich wrote:
>> On 23.05.2023 00:20, Stefano Stabellini wrote:
>>> On Sat, 20 May 2023, Roger Pau Monné wrote:
>>>> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
>>>> index ec2e978a4e6b..0ff8e940fa8d 100644
>>>> --- a/xen/drivers/vpci/header.c
>>>> +++ b/xen/drivers/vpci/header.c
>>>> @@ -289,6 +289,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>       */
>>>>      for_each_pdev ( pdev->domain, tmp )
>>>>      {
>>>> +        if ( !tmp->vpci )
>>>> +        {
>>>> +            printk(XENLOG_G_WARNING "%pp: not handled by vPCI for %pd\n",
>>>> +                   &tmp->sbdf, pdev->domain);
>>>> +            continue;
>>>> +        }
>>>> +
>>>>          if ( tmp == pdev )
>>>>          {
>>>>              /*
>>>> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
>>>> index 652807a4a454..0baef3a8d3a1 100644
>>>> --- a/xen/drivers/vpci/vpci.c
>>>> +++ b/xen/drivers/vpci/vpci.c
>>>> @@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
>>>>      unsigned int i;
>>>>      int rc = 0;
>>>>  
>>>> -    if ( !has_vpci(pdev->domain) )
>>>> +    if ( !has_vpci(pdev->domain) ||
>>>> +         /*
>>>> +          * Ignore RO and hidden devices, those are in use by Xen and vPCI
>>>> +          * won't work on them.
>>>> +          */
>>>> +         pci_get_pdev(dom_xen, pdev->sbdf) )
>>>>          return 0;
>>>>  
>>>>      /* We should not get here twice for the same device. */
>>>
>>>
>>> Now this patch works! Thank you!! :-)
>>>
>>> You can check the full logs here
>>> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4329259080
>>>
>>> Is the patch ready to be upstreamed aside from the commit message?
>>
>> I don't think so. vPCI ought to work on "r/o" devices. Out of curiosity,
>> have you also tried my (hackish and hence RFC) patch [1]?
>>
>> [1] https://lists.xen.org/archives/html/xen-devel/2021-08/msg01489.html
> 
> I don't know the code well enough to discuss what the best solution is.
> I'll let you and Roger figure it out. I would only kindly request that
> this be solved within a few days, so that we can enable the real-hardware
> PVH test in gitlab-ci as soon as possible. I think it is critical, as it
> will allow us to catch many real issues going forward.

Funny. The problem has been pending for almost two years, and now you
expect it to be addressed within a few days?

> For sure I can test your patch. BTW it is also really easy for you to do
> it yourself: simply push a branch to a repo on gitlab and watch for the
> results. If you are interested, let me know and I can give you a
> tutorial; you just need to create a repo and register the gitlab runner,
> and voilà.
> 
> This is the outcome:
> 
> https://gitlab.com/xen-project/people/sstabellini/xen/-/pipelines/876808194
> 
> 
> (XEN) PCI add device 0000:00:00.0
> (XEN) PCI add device 0000:00:00.2
> (XEN) PCI add device 0000:00:01.0
> (XEN) PCI add device 0000:00:02.0
> (XEN) Assertion 'd == dom_xen && system_state < SYS_STATE_active' failed at drivers/vpci/header.c:313

So this is an assertion my patch adds. The right side of the && may be too
strict, but it's been too long for me to recall exactly why I thought the
case should occur only before Dom0 starts. You may want to retry with that
2nd half of the condition dropped. Meanwhile I'll see about refreshing my
memory as to the reasons for the assertion's present shape.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 24 06:43:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 06:43:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538768.838976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1iDD-0007Qy-7a; Wed, 24 May 2023 06:43:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538768.838976; Wed, 24 May 2023 06:43:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1iDD-0007Qr-40; Wed, 24 May 2023 06:43:07 +0000
Received: by outflank-mailman (input) for mailman id 538768;
 Wed, 24 May 2023 06:43:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1iDB-0007Ql-TE
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 06:43:06 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2060c.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::60c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3b0cef3e-f9fe-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 08:43:03 +0200 (CEST)
Received: from AM6P191CA0010.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8b::23)
 by AS8PR08MB8419.eurprd08.prod.outlook.com (2603:10a6:20b:567::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 06:42:59 +0000
Received: from AM7EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8b:cafe::ae) by AM6P191CA0010.outlook.office365.com
 (2603:10a6:209:8b::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 06:42:59 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT062.mail.protection.outlook.com (100.127.140.99) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.14 via Frontend Transport; Wed, 24 May 2023 06:42:58 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Wed, 24 May 2023 06:42:58 +0000
Received: from 10e443ad96d2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5CC41885-5AFB-46C5-A331-CD6E64524D24.1; 
 Wed, 24 May 2023 06:42:52 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 10e443ad96d2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 06:42:52 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by GV2PR08MB8073.eurprd08.prod.outlook.com (2603:10a6:150:75::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 06:42:46 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 06:42:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b0cef3e-f9fe-11ed-b22f-6b7b168915f2
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 01/15] build: hide that we are updating xen/lib/x86
Thread-Topic: [XEN PATCH 01/15] build: hide that we are updating xen/lib/x86
Thread-Index: AQHZjZURKjD8NZdEuk+seL2t35mCkq9o+q6A
Date: Wed, 24 May 2023 06:42:45 +0000
Message-ID: <0355AF41-789F-4BF3-AC81-F325C8E63D61@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-2-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-2-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|GV2PR08MB8073:EE_|AM7EUR03FT062:EE_|AS8PR08MB8419:EE_
X-MS-Office365-Filtering-Correlation-Id: 502fa23f-35a8-414a-bced-08db5c221ce8
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 mj5w0Xs17GFm324Jf620F4O5xLdH8OnPYnvUSFTy8NqCcECaLMwcNX26VOuGSyrtcDrEV0h0Oxqys0HM7gBsgHLCGapP8+YHFAWTnQnexKxhuzmzoZ04wAUltcGrrA4+JPyWkpXB08oX9/umAVWOPqWbNid7eXfxxiGNSj5XQqip0YXlihZG4GbW6pDrD3FKKGDgEGIAMcbdw7UYwnF4k8E7+i7Y1feH7U19Yb557NXmy/rbr4n3npVJORWmVyyAjDb6dNaPNxUCsdaycxUqIwzxFhVP/pJGu1rh8YYXF+s5sGAHedTntqd22sBcvdSwRmGeOUnYLrXPHfzklKMQvpyVkz5YRLJtsUxlt0MV4nq2ZHEBCN9A8YjronOQflc4hBRHFi43zAMzHkHh6auD2dIc/c16fXJqENWKFP2KPsYzbZCRL8adhdF583XmDcWed3SFzg4yChldH4f4bM3H4TMi8ldoS56+ErnLo9vK/l7f5YJlWLYQgPsJmXS5f5Oiuv7kstUxTVidrDh3Lf4KGlrySlbyZakvbt8V7hBtPGiP5mxLjQq75XmVs8jj8+glury6a1MPla4T1VstYEuT3WZye5X9YrRL41E4/DJH+shRVHAYxxM5tV11pGppwh1BxkkahrpxqJqZnArM7CckGQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(366004)(396003)(346002)(136003)(451199021)(53546011)(6512007)(26005)(6506007)(71200400001)(186003)(316002)(33656002)(38100700002)(5660300002)(6486002)(41300700001)(36756003)(86362001)(2616005)(478600001)(83380400001)(38070700005)(54906003)(122000001)(4744005)(2906002)(8936002)(6916009)(91956017)(64756008)(66446008)(66476007)(8676002)(4326008)(66556008)(66946007)(76116006)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <BD607D61DE58554FB7659D0E5F11F593@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8073
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0e4521c3-1605-47d3-6f0b-08db5c221547
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	oUXr8KeiVc2Lpsl3im1rtTA63XPlJJOUXVlYBICY8yUXglE77ooWkFhLN1TzBTWpTs+1UhitWfNvW0B98A49F0v7Vs2iw/vlIq00oLPqRxLRgIuj1lnpX9Vyt/4GXFZmtoZSjl5Mu2Zv5hHgqt05jgiOHXdK2eSWi/6rpLnNFda0uvCw+oN+eN/erFwCGBjmCEGhji3vLwGkAYkRnf2u2lgL7TBJn56USRQYwrKX74381rfhQeot+h4MIsZ8ILUvuFw2b3t16B2TmJXy861e6Gj558+lang+xphidOBmTWA3JYcgJcKBsKjFy12K7gRKnooq/tCucIBKZkUnTO161pwaZyUd54XFsEHqbb2AwUv/d3J3ZKgCuOSBWx9GkuMhgus//V85ENjvwLKxMJuE4Tf3b2yewU8BjIbkeCM2m8Fa273vMbKY8ebjL+mUKRZh8thS/71Dh3QZ0ovXWXW3dcrP+LaRctth14VRmTQyu4GDiDZOfLE+MKMZabMyCfK+SFHOqVXcMEdYDCvRlj2MKzwVzahRtMX2FpWJZmZwDHpZHV0mGO5/J+YdXijuSbI/LbCy4oGVzMPLmnmifOeQFg2fSiCfUogAnsX3DDkNo0K6JUxhP/ccDOY7qAFEbnyPYdIGrrL8RvBER+c2ixUGFAnUPZ8jTjaiLEAWSPL4oXyxRepMNuO+MYOzTd60KPsMu06oTErmw21HnrMy7oO2RsncviBXKeqNkhfqhCWjcZiDfAJXMCWoePORBCJjPu7b
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(396003)(376002)(136003)(451199021)(36840700001)(40470700004)(46966006)(40460700003)(5660300002)(6862004)(8676002)(8936002)(186003)(2906002)(4744005)(33656002)(36860700001)(86362001)(336012)(2616005)(36756003)(83380400001)(47076005)(82740400003)(81166007)(356005)(26005)(6512007)(6506007)(82310400005)(53546011)(40480700001)(316002)(4326008)(54906003)(478600001)(70586007)(70206006)(41300700001)(6486002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 06:42:58.6246
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 502fa23f-35a8-414a-bced-08db5c221ce8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8419



> On 23 May 2023, at 17:37, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> The change of directory to xen/lib/x86 doesn't need to be shown. If
> something gets updated, make will print the command line.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Hi Anthony,

Looks good to me

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>


> ---
> xen/include/Makefile | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/include/Makefile b/xen/include/Makefile
> index edd5322e88..96d5f6f3c8 100644
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -229,7 +229,7 @@ ifeq ($(XEN_TARGET_ARCH),x86_64)
> .PHONY: lib-x86-all
> lib-x86-all:
> @mkdir -p $(obj)/xen/lib/x86
> - $(MAKE) -C $(obj)/xen/lib/x86 -f $(abs_srctree)/$(src)/xen/lib/x86/Makefile all
> + $(Q)$(MAKE) -C $(obj)/xen/lib/x86 -f $(abs_srctree)/$(src)/xen/lib/x86/Makefile all
> 
> all: lib-x86-all
> endif
> -- 
> Anthony PERARD
> 
> 
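The `$(Q)$(MAKE)` change in the quoted hunk relies on the kernel-style quiet-build convention, where `$(Q)` expands to `@` (recipe not echoed) by default and to nothing when the user builds with `V=1`. A rough stand-alone shell sketch of the same idea — the `run` wrapper and the `V` handling here are illustrative, not Xen's actual macros:

```shell
#!/bin/sh
# Sketch of the quiet/verbose convention: echo the command line only when
# the caller opts into verbose output with V=1, mirroring $(Q) in make.
V=${V:-0}

run() {
    if [ "$V" = "1" ]; then
        echo "+ $*"
    fi
    "$@"
}

demo=$(mktemp -d)
run touch "$demo/built.o"   # quiet build: command runs silently, like @-prefixed recipes
V=1
run rm -r "$demo"           # verbose build: the command line is echoed first
```

Under this scheme only deliberate messages (or commands run with `V=1`) reach the build log, which is why silencing the directory-changing sub-make loses nothing: if something does get rebuilt, make still prints that command.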



From xen-devel-bounces@lists.xenproject.org Wed May 24 07:05:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 07:05:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538772.838985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1iYk-0001Xw-W6; Wed, 24 May 2023 07:05:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538772.838985; Wed, 24 May 2023 07:05:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1iYk-0001Xp-TO; Wed, 24 May 2023 07:05:22 +0000
Received: by outflank-mailman (input) for mailman id 538772;
 Wed, 24 May 2023 07:05:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1iYj-0001Xj-Vk
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 07:05:21 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2059.outbound.protection.outlook.com [40.107.7.59])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 57bd8342-fa01-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 09:05:19 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7750.eurprd04.prod.outlook.com (2603:10a6:20b:2aa::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 07:04:51 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 07:04:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57bd8342-fa01-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JLLn+2EfbmKfw3MtY8mW54rhgAPqj2qdstxx8U0FVBM+gLVUXQJbIitPV4PuUnT+nLAylYgqsPgob6Zawq5kR9KsSmJBQaqj/0doP34VHWbLscvrBatloD1cYhYYUz5gjvwXl21KBG1WxqWA7OK4ydyfn4yGy+SLTh1h+Gbam9edTK83xr1NJQqo1paPWyPHhHOJ6EDGLkPfd7WH47eTl1er2ZauHSltPZaPuDmSN2GtVWW7hkNVDLXIgWYCqjkS22IAr1t/n3swt69m/iY0JpC4Cu3xUUaypIIqeE4bmNRCT34W9UOMrVFtDcOMUFwoPRnTkkJCPKCwEHgxb4/OvA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=byzwatVIY7htQyENJLt6VmmEaooMTR/zZB47gLDq2VA=;
 b=STav1CQ8oz1OltJYHNCzkY+FV/NYwsq1tIRsoUpoNXxSYIytHST1YucqWKtfGN4yEXs73mc9KxZUKW2hdxJyNDY4gNfP3NSadEepS/sIqGVzjuFYo4wtE7E8R0kP6RfVV7ntYLcMwdu1I4h9+9uBK6MW/BRavAXNtyBL0Z5H+jbb2ZELCpdrMAUt1a2wo2YCj6yK6b350CLAiZZlbBKVH62fjgGBK2pc756TEVuPAA9g+7A9+uzwaStAVJxj9iWHoV3gtqYoIt9iNA5nKpSQg7diGiPwM5G52GFKmLRc4Zg84eKc6xYdRAv3mMVgmAMkduL7FTn9Rvf98qq6UthSUg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=byzwatVIY7htQyENJLt6VmmEaooMTR/zZB47gLDq2VA=;
 b=MkPn1SVMjwe1DPUcih+fImZCh7uuk1JBKC0LGXg4SNbsqDCWEmAp/+qi8NcLB6KiR8h3+kL2fsTnSaJGsJm4AYM+II/tRrQXHUaLOGYpgxKIgjjZYMFg6kI8AU9yRnrXtx0pIf+kekF81yrtmMbvgFEhUpjqZPxqA6xJwGH6MYA1QOxwMdsMAE+mKOFrb2i0vidBI3W/ev8xS48WyYMKQUlOXAzo7o8UC8La3ytB1JMLpFpnuacSl2lFN2baP9yVN8cKuDI3omlLH82a09f+bEztX+/mjtItnAq1PUuON4ASPFZE+E38TshvUFbhzAuzJ0yuuUP8HbBUmIhk8y0AZQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <faf07a3a-df48-3bf7-4dc2-92fb734c4e2d@suse.com>
Date: Wed, 24 May 2023 09:04:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 01/15] build: hide that we are updating xen/lib/x86
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-2-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-2-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0022.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB7750:EE_
X-MS-Office365-Filtering-Correlation-Id: cd1b34be-41b1-4b05-7b90-08db5c252af6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uHGd8XJKfeJ3XQ1HtPT2lVCKZ6tEtz/Tos8+jggjRXhlFpV4R9AGkBoQ0IOs2pjJ3LF+0TEqIxv67WbxiKh5MHi0oZR4e3T0X3QiV4P05l/YLPU8xgNDCV645XtMWLhjCKR7mhqBLc4SoCHNeNh5fCQB41Pn58K0DOOh9UbgLOB6yC6W+/yeZrQG/hZjK53bq3wtHGYvd2xY8CUpMLfMDb74/wydSSK6WlKLewJ6zr7kwQ31fZUvdgPLTrOzInbm9knNBZZmzl0sfB1gCcvBmuI9a0MHg2JtgzYWZyjbxeCbslkvbFEJe+ehyIhXveoVCKcX3Hn1sPlcC3t2NjKOZYJ5TNwfOpfM1vaMn4HmXIk5yEOYHWpu1NnnKEtclew+5F696fq1qmySX8gQ5EhvKbz+JWdckCBfb9bm5Rk7FoCb5+fQFxuht84VHqjg/5yxdixEI93K6RpzkraxVKNpdLJhM6i8nftBO4eSEW/H/sC6ONZVtVbXBHDWRyPnJMRpkoLQMqg/VBplR8rUbMSgw9LHyCCfKM9mJcOdvVGPqFMmZGo5DCHQxrCdrYfPSY0EJqtmLg7Z++qoQulbAY+yC1597jkBaD6AA3Scozg1Gjiqs3YBPySTR6e1FkxHrSjURr0CwXf1gi10dVGiBx5VXg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(136003)(346002)(366004)(396003)(376002)(39860400002)(451199021)(5660300002)(8676002)(8936002)(31696002)(86362001)(558084003)(26005)(53546011)(6512007)(6506007)(36756003)(83380400001)(2616005)(186003)(2906002)(478600001)(66476007)(54906003)(316002)(38100700002)(6916009)(4326008)(66946007)(66556008)(6486002)(31686004)(41300700001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VkQ0RStlbVZod0xoVDBDQzRwb2ZzWmYzSUx3MzJnKytxbFl1TGpZTUFhbU1a?=
 =?utf-8?B?MG1ZUG5KNWdNaHpwVjNRczU1bjJ5YzZ5WThhMmpMRXhVNC9yMDY5alNkNW1l?=
 =?utf-8?B?UTF2S0RqYmhjOVV5c1NmVWZjeFNqZ1dQVS82b1h3TFlsSlBDbWVEVGMwU0JX?=
 =?utf-8?B?QUV2OVBaUnYvWGZNMTF2UVQyZ3NKSzlyeTVOZjR2NXYwMHRqVFNrbXFqUlZ1?=
 =?utf-8?B?c0w0bERuR25KZTJSL0RaVTJmRm1nT3hFNmNvdS9XSjRPQjhsMStJKzBqMlVY?=
 =?utf-8?B?UDRmbm9mMTU1YUVTbUNZUlZhYzZORldBSVdyMGZ1RDk5SmJuOFR4amY0TmpP?=
 =?utf-8?B?d0daTlQ3RnhQalZJT2EyQ3pPLzB2TFF0RU5RSDdVL1FhVTRCR05LLzNGbWtC?=
 =?utf-8?B?T3Q0TXQzYk9UeHAyWEw0MU9ndkh4SlhOSHNlUnBhblh3T0Z4ZlNKV3U2K0Vw?=
 =?utf-8?B?MUVpM21HaWhYSlNEWHhNV2FvWERUcllJV0J6NTZyeVpWRjY5c21WdDg2MXd3?=
 =?utf-8?B?VDhUdXhFaDBBVjVvYmFTclBucndKWjlTeFJHb3VTQXJubnhkTXJKL3Y3dkhr?=
 =?utf-8?B?dFNVY2NrZ28vS0JZc3dWaVBLenJzeTBTWjBicFh3elBxaGxQWUZNNnlXdWlD?=
 =?utf-8?B?Z3ljb1ZNeHdQUTZyMlM3cWl4T3k4QjNRU2VCck5rUTdTRXFmSVI4d2xMdlJB?=
 =?utf-8?B?L2orZ0dST1ZiU0JGVkJZTW9zTktpT2dzSS81NlEydkhtUU1Vc013WlJ0dUgv?=
 =?utf-8?B?cDV6NzJCVGNES3RyTEZxZStpTW41OFplMVc1VWE1TWQ0eU4vd251N1A5RFBu?=
 =?utf-8?B?aitMbFUxSWIvSUNpdHhWMGhnR3hKSlBWZFlKVDhTT1VGQktSVnNzRlMvMksr?=
 =?utf-8?B?UFkzSG1Za1ovN0hHSGlPdy93N1lVZ0hBRWNzZzRWSS81NFVLWlJ4YlcveFd5?=
 =?utf-8?B?aGJMUDhzdEw3aG5pSE5ianRqemd0dE1lNEVQSmhNZjE0OTV0Uy9MZjU5U3Bi?=
 =?utf-8?B?ZmwzbGpPUjlqNUdaY0ZLMFh5Z3JlYzM5eFNKeWVzYjQ3ZjIyaTY5TDZHV2xT?=
 =?utf-8?B?SzJLM2xGbnRrMlQ3VThwTkV5VGM5ZmxtZTUvWUdsZnFRK0RjQ1hWNzY5R1pG?=
 =?utf-8?B?WUJPaVlhZnRlYjJPaU9OQTB3V0dUN0czN2hOenZMdDBlVjQzOEZSRndYdkMv?=
 =?utf-8?B?NVhWb0VsQTA1SlhyRWpEcndjY25HeWR6MkRZeEplNGdNQStMenh2UnYrbmty?=
 =?utf-8?B?TXZCTjJacDBtVkVyNTl6MEZpandKN3NSd21XOGJ2YlhMQmdHR2hrZDNBRG5X?=
 =?utf-8?B?OVJMVFdIT21Hdlp5U1BWdGV6YXpBRUI0V1pydW9lUVVvbFdBcUQ1RkVBdGkw?=
 =?utf-8?B?K1FEazlEV2NGR3BvZXMveVBBYTJNb1dsalpKQTdlYmtFTEJyNStzbVl2SjZD?=
 =?utf-8?B?MUJjOFhsQklTKzRVU0llbWdrWm1CQjRvVFo0TjlhdGFJTDJieFEybHFPWkpl?=
 =?utf-8?B?VS9SVDlhbXZ3NVI4cnk4QjBIUVZuQWVWNjdZSG1CbmlxQW9OTUpHM0MyeW9p?=
 =?utf-8?B?QmpHdC85NWFndTlQSDVDTHpaMmpJYi80SGE1SkQ0L253Q2tSQ2U3bWZQdmdR?=
 =?utf-8?B?STFOM0g3KzI3bTlPTXFpNitXOGpBclgxVnptUWZ1eHZ0MmpiaHZZT2FSZWpi?=
 =?utf-8?B?bi9pczF0TFlGb3B1bEtKbE45TXRaOUFNZDJISEx3ZUs1MUkyYUJkSHFWZ3VV?=
 =?utf-8?B?ZGVocW9uRlNOMk1OV24vSVI2aTlTUGNjY1BHaTRyT1c0bFhFRm5RNDlLRllV?=
 =?utf-8?B?NjJadW4xVEw4dUdmWVlPMFZzTzlTMTdaTjhtSnczK2NzOW1DTEx4U0hic0VB?=
 =?utf-8?B?bEpnMnA1ZDREdExuYjI3Q0VhTTlkcENEcERmWUNyRHJGOXBQcVJHMUF0dHBx?=
 =?utf-8?B?QmhQYXRaVkxqb0ZFMWVYeFNJWmhTQ0dZOXZrc3VXZjlRMDFhYUZQWE1oZG9M?=
 =?utf-8?B?aHo5bC91ZzJBaG8rL09LdnN4bG1ZVU9WbXlUUnZrWW1oRk1LRDJ2V05XTjhk?=
 =?utf-8?B?VmhpN3gvamR2UW9HVHI4T1lHRG1nN2VXeGhFM0NuMm1GMXptWGxnV1lRbXpF?=
 =?utf-8?Q?wg5r4+Gsxp1tF9wHMUfbCJYKU?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cd1b34be-41b1-4b05-7b90-08db5c252af6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 07:04:50.9593
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oF90PPboiVBcBPMnOvzBSb7iqISCBK4Dypwp/9rGR8gqHB55EmEJN1njCULGdSVve9tsbmBqo0VlrAdeL8Gksg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB7750

On 23.05.2023 18:37, Anthony PERARD wrote:
> The change of directory to xen/lib/x86 doesn't need to be shown. If
> something gets updated, make will print the command line.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed May 24 07:11:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 07:11:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538776.838995 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1ieV-0002yo-K3; Wed, 24 May 2023 07:11:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538776.838995; Wed, 24 May 2023 07:11:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1ieV-0002yh-HB; Wed, 24 May 2023 07:11:19 +0000
Received: by outflank-mailman (input) for mailman id 538776;
 Wed, 24 May 2023 07:11:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1ieT-0002yb-UT
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 07:11:17 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0626.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::626])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c233c62-fa02-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 09:11:15 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS1PR04MB9653.eurprd04.prod.outlook.com (2603:10a6:20b:475::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 07:11:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 07:11:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c233c62-fa02-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fpX8awG+JPCMcscF2udFAkS1ysSp0zzb9OhFoYk0fxeXyh1ABrF9ubbjwxJ/JdaYJ8y1Bj2JhWk+DAcaDH9jC1L8lPc6+gEGwjWSL1ZgGaiMUIVyhgjjW2nbAV3F7xSBdkhK2C2Fr6MdmRCJfpwYCM6pta4eFNz6hZXRE9w+I0fefjdVSMehpeTSOKc0Wa9v37U/mCU/8GaQqIfsif9NZ9bMBa8NEbn3Q0fBlzuFDDHdiY3IvDCYyY+1DrvBycD9Eh+q8nZ2ozvGlVAI69hfI4kNiuziivNWC26orcvN33QS6bsAFklHh/JBBhFIxaOLn1qTcSFW+h6V49mfPzVT7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=d3iEGQwJ6NHRvRWoDuRx/P3UeRJpn+HOmoBB6dyYtQg=;
 b=TtWepDnCW5MIYpIYkIrboQViPc6QPcCIG7umVEhhC69UhKXQ+F6JRn7s/Sg3yNRIsfOoCvaz0Fe2KBZfNX1MaGGv/ztajH2x4c9PthM6zD6+7wKz+baPK4gIA8jeEvo1Ge+dKKuz13UdXmBNn1TbqWiT93OKXZjKM2onET+9uRkhpyssLWuuUXEIh6t2uhcWbpWRehZdY4knvV/ZPpG00++LY69Tn3uysMU7N80qsSPrjBwnShELyLbRVeXXmUBYkWWOugYAg9270Mr8rms9fYbDPbkwpqlFdoBcYyIyb1A+GvtqmGF0xxnD2si7OvXFIOqhIjbAU/kn0gmh4hR1MA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=d3iEGQwJ6NHRvRWoDuRx/P3UeRJpn+HOmoBB6dyYtQg=;
 b=YsiyOsNFN4CypD1ixL68Teu0aMPYa4cQEr6k99XFpb0fcS+PnLDGA/czUDSEsHdIukZIVWHtX4inVUy7nc5p/TdxhwDlnEBXQf7+yBQwYoyoouN36FjmqrwMhc3u8jSO/5Xjh/vUgqNsJDoAHH9AxgwiQM0qHG0VKT1BBtQ9Zf5uw4KOJi6stT1yEaZibbz7rf/GrsxysfLiYRBFke7ZaMjJoTkqEVObHDJ8wUulWrF2CaIiOuGq4JJZPC1gCgqocs6ryrnxGzpOtyz4JD1T2kzSsYPoe+2DfU+HYJK2SNWFqEJrBFlCwirsESUttYJIgtQXqqi64nLEgcIMXRUz5Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c75368db-6444-6910-487c-8ac9477a6785@suse.com>
Date: Wed, 24 May 2023 09:11:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 04/15] build: hide policy.bin commands
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-5-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-5-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0096.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS1PR04MB9653:EE_
X-MS-Office365-Filtering-Correlation-Id: ad9f46c0-3e6b-4372-e92d-08db5c260ef6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lZMpPoCd4cKxyT+p9/3cFJygaM04O4L4fpeI18NU17FKfx7yCVHNfO5A5wCUfv9bJbPzOCNy54kiFZl6NABmDnyyVJpf3NWlriGc9p0+7UMxA+Ehf0cy3H/IHGwgTW6xa71oh0Vr2yUILh0b708DfPWi+XXh1Su/rV7JzaNxsReX6PvWgsOV+et7ujEJkw0utyKbRwGt+1lMRrZkngwO0P9igx4O1EfL4NkFHuUWWnJn//tGiOsaXfJ6E7diwQwMbJayHyILFKZS62ZNJMkAGw60iMyyllBJS2RFRvmjxb0mwHVLU2/24Dgl+fZqjwv5W0ebVRG9ATVl1fw3U/OznphOuvH9LZIqEoSo01Tz6czOlwJ3mW6yxivnOfZ704EgBX0Ps1Z18HCOalaP8o0O8ODfSAgT44wcHKe6HSeO/81r6apXF/FDvJGMvgMzKN+QzZacvw/87qsxM1wldjPLp1iqXtCYKVayAZJbIJJHH58N/Czj0Tr46O2nr4M7LhAwENSTHhoGisWjxhl/lsjO5E1DN9JyqQz989pezV6pDuzw38WpLclbNW1fbnDnkpBGZ8q20g/+XBVv+mK+/8wbKlrxYOdzqnTA3eXqqUNZ3TOfNCzjdn/QXrVh6CAgyZ2P47AOmw1m5X1gweXGHXNWxg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(366004)(39860400002)(376002)(136003)(396003)(451199021)(2906002)(31696002)(53546011)(186003)(86362001)(2616005)(38100700002)(5660300002)(8676002)(8936002)(316002)(36756003)(41300700001)(26005)(66946007)(6486002)(6506007)(6512007)(66476007)(66556008)(478600001)(4326008)(6916009)(31686004)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NWgvYWdWemU1M29qbENSOTl5akYvMm5HQm9paEY2cW91Q1FYQ25BTWFHaWdH?=
 =?utf-8?B?N1pSTUpXZnNWRUpmZ3lBaEZKMXRQWWVHUjFqVURFQ0pIMmoyUmRTUzhQNXZ4?=
 =?utf-8?B?Qkk3VEZVUGRISDU0c2tNVm5LSjdNcU9xNU1jYUtuMW44SDJPR2RMS0tWNEo5?=
 =?utf-8?B?enZEUkdibTcvQ1hZay8yOUl0NytnQ09Sckd3MElOVUxJMXZiT3NUTm5PM1E0?=
 =?utf-8?B?ZThmdnF1Y2pJbXBXSzlDSElOc3F3eW55MTlkaDQzdHZpQ2wvSkYwSko2d1ha?=
 =?utf-8?B?bHJQdVZMSWtPNFBDSWlXYnk2OXlOemJMRGRYcWZCUW9FbkdPcDhCN2pldUNu?=
 =?utf-8?B?cUxodis5bmRUcmwxaWtGSmIzdlQ5UExsTXNvd1ZYQVJtSVQyNG9LK3Q5NWRo?=
 =?utf-8?B?K2dSbGdhT1VrYWRxTk8xMHFCOEh2MU9JeEFGYnY2OEQ5Mmw5cXZWN0w3b1pV?=
 =?utf-8?B?NXhhK2drTUp2WHVWR0hEMlZ6bUNHdit1SzYrS2tNMUJqVldtK2ZMNTV6dUJQ?=
 =?utf-8?B?cHZkUU91c2hnaW5wZmllVGVJbjRQb1RVK21BWXJiMkhsUTFLdHVCOXkyUlo5?=
 =?utf-8?B?akpGZjJHZ3VhY2lWbGpVT3ZLWnpJNHFsU2o4R2xWekhMSDU3SzNtMDg0akdk?=
 =?utf-8?B?QTZEU29EcmswcjJyM1c1NGMwVHRJTmlocytUSTl1eWFtQ21ZUjVaQUNid3dN?=
 =?utf-8?B?TXZOUVNpaGszMUtGM0ViM0Zsa2c2ZEg2U0lmWUlSa3psNUhDNmd3d3ZKY2hs?=
 =?utf-8?B?bHRsV2VJdWl2SkFRZ3doL2ZXMzVSY0JGQ3FJWmtGMXZiRkZDaWpBWU8wR01Y?=
 =?utf-8?B?UU5tMXd5RUhSNWNwUFNWbzdYckpBZ3Z1dGJxKzdNckhtSEZUdnozZlBhR1NT?=
 =?utf-8?B?azQ5Z0hVZnB2YzB4RURuRjE0QnlYNlFlamhFV0NDc0h5QWgzbEhFR1B1SVMr?=
 =?utf-8?B?M2pTaTBVaWJTVFFNNVgyS3F4ejI2R29rT05SSXE4c2V5UnZnMXRZSG94dFJT?=
 =?utf-8?B?Nk1UWVducDI4VmpFMEI4V3FvVWhQQmFodndudGNGUkc4dHNjL25rMGVDaFFj?=
 =?utf-8?B?eDdQWTFhd0ZNWi94cER6S1NRcWllU0thTG5NczNiZUh1TW5sQ01JUkEzbFdr?=
 =?utf-8?B?QVY1Z2VxWlNucGxzVjFuOHdiYkRZczFjUXQ4TDVHNUtwSDJ6Ni8xTWdtSVV1?=
 =?utf-8?B?SnNreFJvTUxjYlNSTlZSVkg4MTFPZHZYYmVTSHJsTTZ3Q1FoMCtDSWV5SjZm?=
 =?utf-8?B?YTFSZG5jRzNDSzIzRnErYXozNFBMdDJzeXNXazB2WkdaUTliWHdyZ2tneHNh?=
 =?utf-8?B?aVlDWThodEowazFoZ0VabjlFaEcreFcxRUpOUUZldDJrYjhyNUxHQ1lxd1NW?=
 =?utf-8?B?L2ZLTThCUkNFMGpBa044WmpvbFhTNWYvMlBEditSdktqaWR4V2U2S1Y2RlZ1?=
 =?utf-8?B?T05uTUhIazZWYWJDdGtVeVFtYW5OQzlUMHNnMnNKNmxRWVVoaEtEVVZVRDd0?=
 =?utf-8?B?L2lUclRPcEVaa3RaTVJwYlZoUmpFVDRDUWlENmlKL3dnNDZsZ2owQnRPa2Np?=
 =?utf-8?B?NVhwdnM3THFmY2dtaTRrYis2SkNHMk44TGFWT1lLV3ovTDFLbzI4b3BseXl5?=
 =?utf-8?B?OUFSRjVBVXM0cytTUHZsRGN5TC9ERGVlVHZPVUtVVXBYUGJiS3NIcVdYSjhw?=
 =?utf-8?B?SE5zM0tzZlozQU95bERGcXowSkIrSytGTWpUald1VnBSV1pGVXA3dEhFVEZW?=
 =?utf-8?B?OTljeVdvT25sQ1pzNDNmWXNxeUx6YnphWFF5cWFOemNzdHVHeDhWcUpsbWF6?=
 =?utf-8?B?djdqNmIvREp1Njk2MmdQd0ZtVE40ZEh0UXN2Qy9xb2hYaEF0U0JkU3VZMTdP?=
 =?utf-8?B?aDN0dWhVam5XOTdYMUFRTmtpbTBTcEdFbTVoMmJYMS9TY2dCWEloSlVoek1I?=
 =?utf-8?B?cVhvTWx6akJ5TDE4RGdBM2FyVDkwYzhZZkdyZmR6ZjQyUEdtQTQ4T1B0VG01?=
 =?utf-8?B?QzhuQkFlZ29DZlhXaXFjL1N0UkpNZ240VmpxQm5GbjljSTQvdENFb3gzWjdU?=
 =?utf-8?B?U2JpUDVKRWVUekVUb3h6U1dpcE9vZWdVdmdBcE9rKzR1bjh5N1VjWUtjTzFa?=
 =?utf-8?Q?u2U9uhlSJ621p9SEQNzwmt33u?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad9f46c0-3e6b-4372-e92d-08db5c260ef6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 07:11:13.5136
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3nfmq0GFnkuK+CGC/SSHFg7SR5aa9is2p8tk0w6voy6Z/Xpu4yWT+tIfZLe7seZgACJgG3HBL02BDu9iWcS4wQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR04MB9653

On 23.05.2023 18:38, Anthony PERARD wrote:
> --- a/xen/xsm/flask/Makefile
> +++ b/xen/xsm/flask/Makefile
> @@ -48,10 +48,15 @@ targets += flask-policy.S
>  FLASK_BUILD_DIR := $(abs_objtree)/$(obj)
>  POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
>  
> +policy_chk = \
> +    $(Q)if ! cmp -s $(POLICY_SRC) $@; then \
> +        $(kecho) '  UPD     $@'; \
> +        cp $(POLICY_SRC) $@; \

Wouldn't this better use move-if-changed? Which, if "UPD ..." output is
desired, would then need overriding from what Config.mk supplies?

In any event, much like move-if-changed itself - please avoid underscores
in names when dashes are fine to use.
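The move-if-changed idiom referenced here can be sketched in plain shell (the real macro in Xen's Config.mk is a make construct and its exact definition may differ; the helper name and file names below are illustrative): replace the target only when the freshly built file actually differs, so the target's timestamp — and hence everything depending on it — is left alone on a no-op rebuild.

```shell
#!/bin/sh
# Illustrative shell model of a "move-if-changed" helper: compare the newly
# built file with the installed target; move it into place only on a real
# change (printing an UPD line), otherwise discard it and keep the old
# target (and its timestamp) untouched.
move_if_changed() {
    new=$1
    target=$2
    if ! cmp -s "$new" "$target"; then
        echo "  UPD     $target"
        mv -f "$new" "$target"
    else
        rm -f "$new"
    fi
}

workdir=$(mktemp -d)
printf 'policy-v1\n' > "$workdir/xenpolicy.new"
printf 'policy-v1\n' > "$workdir/policy.bin"
move_if_changed "$workdir/xenpolicy.new" "$workdir/policy.bin"  # identical: target untouched

printf 'policy-v2\n' > "$workdir/xenpolicy.new"
move_if_changed "$workdir/xenpolicy.new" "$workdir/policy.bin"  # differs: UPD line, file moved
```

This is behaviourally the same compare-then-copy dance as the `policy_chk` macro in the patch, which is why reusing the existing idiom (possibly overridden to emit the "UPD" line) is the suggestion above.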

> +    fi
>  $(obj)/policy.bin: FORCE

Nit: Blank line above here please.

Jan

> -	$(MAKE) -f $(XEN_ROOT)/tools/flask/policy/Makefile.common \
> +	$(Q)$(MAKE) -f $(XEN_ROOT)/tools/flask/policy/Makefile.common \
>  	        -C $(XEN_ROOT)/tools/flask/policy \
>  	        FLASK_BUILD_DIR=$(FLASK_BUILD_DIR) POLICY_FILENAME=$(POLICY_SRC)
> -	cmp -s $(POLICY_SRC) $@ || cp $(POLICY_SRC) $@
> +	$(call policy_chk)
>  
>  clean-files := policy.* $(POLICY_SRC)



From xen-devel-bounces@lists.xenproject.org Wed May 24 07:17:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 07:17:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538782.839007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1ikH-0003eg-Cr; Wed, 24 May 2023 07:17:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538782.839007; Wed, 24 May 2023 07:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1ikH-0003eZ-83; Wed, 24 May 2023 07:17:17 +0000
Received: by outflank-mailman (input) for mailman id 538782;
 Wed, 24 May 2023 07:17:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1ikG-0003eT-86
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 07:17:16 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20624.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::624])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0255c00a-fa03-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 09:17:15 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8813.eurprd04.prod.outlook.com (2603:10a6:102:20c::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 07:17:12 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 07:17:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0255c00a-fa03-11ed-b22f-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HLRbmfC6SI5SUP2Sr34KmLEo90YVSvHrsAkzxyRDWXeNX0nuHyLjwRHsrLygJDpizA6ZTJQBINafseRFoSn9JXqTflz05l+UX83FueHjFmCenpEbdOLz3WW0NyXn3lUe+LTByfOhs91EjeIe4f1wzjSmMp9dEIrj+jz99a2DQNfmMR3FgOR7e6E6zXmuQb75s2kE1olUGWlqFMgPJxTypUjeiYxgufgKIC5L9zZye8xJJdUz9KsyHJRCtTtUXy0sfFhj8ewgpO6ClU1cY7ThsEMS9D+Djy1TixjwqYBBOlJkrG5r8Sw5T80n6MfTx3UmMxM1zuX3G9rrqhu35V6U1g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QaYgCrM4BZ3oLejbA1usdbUF4x0f5gyf5/bJsjpPHUY=;
 b=M813DkbUcBxXq0GydeA2hdEeqRL1n6pJom6bCHy/zVV43SoOkXzd9ev334UsZifrKLRPqXBV1iVv+FgqbAiE71/4Seej4D7Sa5crV3rWWGlCHWB7Y0W5DcccOC0Leb5qOPJKuZ+wvmSRm+Zfty0Mr/ZJ3i2z01GSbYrrohTuYcgEcSlbwx/pr/1I92naIsMRhm5Xz6ubc0Q6TnGjYUNRJ85bkZdjlaBzrhK8E1oweU/AJbSZk3XVreGkNW6I32KOFbVqQ+TLafiYcPCakvaFkNbcQfbH4sIUJQWrHK2C89y+es7ts8kR9/1MmudQeFB+z19ePv5FEBxBpChGrfuZ4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QaYgCrM4BZ3oLejbA1usdbUF4x0f5gyf5/bJsjpPHUY=;
 b=rm4Qt/cAEVriYjyEI/EuJk1LQHyR3YU9iyWdnsOS83uaeJjMZynXfly0rnC2GGPRnetzpk+wN/qkvXOcvwDg77rA95KjrWcY5sce5VeMUHPpKpDSJlRvg6wL4nAkprQLcHiQRYYrGtLN5q22oLL54pqynWkSW9XDF2xAAQd0tNZsdzcdTGnhYfKIQShqP3lyd06u6Z6cGDnELr46clXbs5oIhOE9RmEWk18zSJd7p0nGafmz1ZT0K71CjMGzLDmocz8rjG6P3FHPQiI/GMjprdCwMfjoS7kZV/OHTo+c4WUu9k6hBLebZMvLGFnNK6RJK3GQRWWV2u/ITBislsJLkg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9e8f44e5-7dbd-5369-2ac5-5cf171908648@suse.com>
Date: Wed, 24 May 2023 09:17:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 05/15] build: introduce a generic command for gzip
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-6-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-6-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0252.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:af::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8813:EE_
X-MS-Office365-Filtering-Correlation-Id: 02a15ea8-955a-4dc1-4583-08db5c26e4b5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 02a15ea8-955a-4dc1-4583-08db5c26e4b5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 07:17:12.0412
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: v1lX/DWq+x9lEctvb9eNl+6j8CSdlLH8ey/unHJIojf5lsMHN29rcZ9lWo0PNzyCJzj+wwULHP2CQ6LHKscH7g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8813

On 23.05.2023 18:38, Anthony PERARD wrote:
> Make the gzip command generic and use -9, which wasn't used for
> config.gz. (xen.gz does use -9)

You mention xen.gz here, but you don't make its rule use this new
construct. Is that intentional (and if so, why)? (There we also go
through $@.new, and being consistent in that regard would imo be
desirable as well.)
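
The $@.new idiom referred to here is the usual write-then-rename pattern: output goes to a temporary name and is only renamed once complete, so an interrupted command never leaves a truncated target behind. A minimal shell sketch (illustrative file names, not the actual xen.gz rule):

```shell
# Write to "$out.new" first, then rename; an interrupted gzip can
# never leave a truncated "$out" in place.
printf 'payload\n' > input.txt
out=out.gz
gzip -n -9 -c input.txt > "$out.new" && mv -f "$out.new" "$out"
gzip -dc "$out"    # prints: payload
```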

Jan

> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>  xen/Rules.mk        | 5 +++++
>  xen/common/Makefile | 8 ++++----
>  2 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index 59072ae8df..68b10ca5ef 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -63,6 +63,11 @@ cmd_objcopy = $(OBJCOPY) $(OBJCOPYFLAGS) $< $@
>  quiet_cmd_binfile = BINFILE $@
>  cmd_binfile = $(SHELL) $(srctree)/tools/binfile $(BINFILE_FLAGS) $@ $(2)
>  
> +# gzip
> +quiet_cmd_gzip = GZIP    $@
> +cmd_gzip = \
> +    cat $(real-prereqs) | gzip -n -f -9 > $@
> +
>  # Figure out what we need to build from the various variables
>  # ===========================================================================
>  
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 46049eac35..f45f19c391 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -78,13 +78,13 @@ obj-$(CONFIG_NEEDS_LIBELF) += libelf/
>  obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
>  
>  CONF_FILE := $(if $(patsubst /%,,$(KCONFIG_CONFIG)),$(objtree)/)$(KCONFIG_CONFIG)
> -$(obj)/config.gz: $(CONF_FILE)
> -	gzip -n -c $< >$@
> +$(obj)/config.gz: $(CONF_FILE) FORCE
> +	$(call if_changed,gzip)
> +
> +targets += config.gz
>  
>  $(obj)/config_data.o: $(obj)/config.gz
>  
>  $(obj)/config_data.S: $(srctree)/tools/binfile FORCE
>  	$(call if_changed,binfile,$(obj)/config.gz xen_config_data)
>  targets += config_data.S
> -
> -clean-files := config.gz
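
One point the commit message leaves implicit: -9 only changes the compression level, while the -n flag is what keeps the output reproducible, since it omits the timestamp (and original file name) from the gzip header. Identical input then yields identical bytes across invocations, which is what an if_changed-style rebuild check relies on. A quick shell demonstration (not part of the patch):

```shell
# With -n, the gzip header carries no timestamp or name, so the same
# input compresses to the same bytes on every invocation.
printf 'CONFIG_FOO=y\n' > conf
a=$(gzip -n -9 -c conf | od -An -tx1)
sleep 1                       # different wall-clock time
b=$(gzip -n -9 -c conf | od -An -tx1)
[ "$a" = "$b" ] && echo reproducible   # prints: reproducible
```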



From xen-devel-bounces@lists.xenproject.org Wed May 24 07:30:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 07:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538786.839016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1ixA-00062T-FG; Wed, 24 May 2023 07:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538786.839016; Wed, 24 May 2023 07:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1ixA-00062M-Bb; Wed, 24 May 2023 07:30:36 +0000
Received: by outflank-mailman (input) for mailman id 538786;
 Wed, 24 May 2023 07:30:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1ix8-00062G-5e
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 07:30:34 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060f.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ddb7eca5-fa04-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 09:30:32 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9325.eurprd04.prod.outlook.com (2603:10a6:102:2b9::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.27; Wed, 24 May
 2023 07:30:30 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 07:30:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddb7eca5-fa04-11ed-b22f-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EsXkhW7lKze1Tpy5p9vuqcvXiabUC/zcC8EmtcSOukVpwOQ5NWpdtPsCp+izWHmx6FiEy1WE8SM5zls8O0JQ5pElOkN511wvK6BifCOsV4Su9t7crORC5bwgPC1mkGuYQSXOotEWesAE01/4C4lW0jqKOfqmUnaUXPq9+3rxxCiO1EocQjaVkh41Q1jBlJvJ+Br/9mb71C/pNShjgmtUPBxBPODSw8sFk6zH10yQQqJU5e01zQEBLpTSUAXGLfIYzxHkrSHey3Atztub0IGhHMNpi4dVHwGy/lYj94J/vM+EH4kj72Yp7UYBVUosrZXVG99JU2BvEAMeRpz25J0sKQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sx3iF0BllwDVSP4n8qbeh0/VHvSGvqX8Cp2GTZi1pWs=;
 b=G4Mp2IbO0U7dSaVSXbf2zqlnuXnTP/OW3gl/QW2KK8tYrS4dHPkZ6dLvg40egc9jkT8kB40t2VSBUtO6xU58yosySmEts9J6tjRgaKiizrLsoTNIAN+HUz+CqlVWZk5Xxa1Jj81vnq4PH+PYg7cv//bSjOAkW4+Ztte1VaJoAVUq3QrzCbzqLdqif1JEWuz1eTfwJlEPUVTBloNEE1z9VcKxPVShIdyQ9oSH0T9k0v+Oj+oScwNrDU7+gtxPu2AMO28Oz8vGg/0Pa65ary3/J+hTt/iC1ZQ5Q075S6BvVGhlMMegInF5VaB/4eMU2n0nm3CaLtb45DUmxyU0Ytx9Gw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sx3iF0BllwDVSP4n8qbeh0/VHvSGvqX8Cp2GTZi1pWs=;
 b=KkAPNNdwFcEJh/LeR2DhLjqggp304c7rBM6eMXt9Cx/anyXIUaBX42RBtDqIgk3xzgi5byhEPJitAQ/fJl+RzhCQ9ZXuBCTcl4Nas7fC8r54qCjN+asZABQd2jHl+xs1vrSmqLNnMngt9yn27m21qbIyiqJ4fydJPeOIwqHkS0TTNdvik2zdJ4tywO5aau7B2V+Gdj1XF8solHrUDOX5io43uDdgGIat0PBkhudqsZ1gohG54R7PClkkIHmJmn017Egt/Y/pfL2PyUQSthqt3AKwfYpOpd2Rx/Rj+vLi+GWv3OPH0/NShKNekZDgEIbBT/dYvk6ClANmEJIzYSkx+g==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <ce24476e-6e20-576c-7f9a-74d423776634@suse.com>
Date: Wed, 24 May 2023 09:30:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 06/15] build: quiet for .allconfig.tmp target
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-7-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-7-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0141.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9325:EE_
X-MS-Office365-Filtering-Correlation-Id: 8e6ff75a-b639-4411-a231-08db5c28c03f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8e6ff75a-b639-4411-a231-08db5c28c03f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 07:30:29.8833
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FLc/WsuuodD0M4igVseoSUjpZXgybHIU4zHD1GvKl2wQkiPyNaZOBTx/JQ+0oA1/oyKajNJvs+KfgMcNbhRHeg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9325

On 23.05.2023 18:38, Anthony PERARD wrote:
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -339,7 +339,7 @@ filechk_kconfig_allconfig = \
>      :
>  
>  .allconfig.tmp: FORCE
> -	set -e; { $(call filechk_kconfig_allconfig); } > $@
> +	$(Q)set -e; { $(call filechk_kconfig_allconfig); } > $@

So this then leaves no trace of the generation of this file. This might
be okay, if only the file wasn't needlessly generated for e.g. "make
syncconfig". So I'd be okay with this complete silencing only if prior
to that (or in the same patch) the dependencies were limited to just
the *config goals which actually need the file (and the setting of
KCONFIG_ALLCONFIG).
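
The restriction suggested here would amount to hooking .allconfig.tmp only into the goals that consume it. A rough sketch of that shape (the goal names are illustrative; the real list and the KCONFIG_ALLCONFIG plumbing live in xen/Makefile):

```make
# Hypothetical: generate .allconfig.tmp only when one of the *config
# goals that reads it (via KCONFIG_ALLCONFIG) was actually requested.
allconfig-goals := allyesconfig allnoconfig allmodconfig alldefconfig

ifneq ($(filter $(allconfig-goals),$(MAKECMDGOALS)),)
export KCONFIG_ALLCONFIG := .allconfig.tmp
$(allconfig-goals): .allconfig.tmp
endif
```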

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 24 07:40:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 07:40:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538790.839026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1j6A-0006jM-9e; Wed, 24 May 2023 07:39:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538790.839026; Wed, 24 May 2023 07:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1j6A-0006jF-6d; Wed, 24 May 2023 07:39:54 +0000
Received: by outflank-mailman (input) for mailman id 538790;
 Wed, 24 May 2023 07:39:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1j69-0006iu-Px
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 07:39:53 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20631.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a195061-fa06-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 09:39:50 +0200 (CEST)
Received: from AS9PR06CA0383.eurprd06.prod.outlook.com (2603:10a6:20b:460::30)
 by AS2PR08MB10055.eurprd08.prod.outlook.com (2603:10a6:20b:645::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 07:39:47 +0000
Received: from AM7EUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:460:cafe::8e) by AS9PR06CA0383.outlook.office365.com
 (2603:10a6:20b:460::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 07:39:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT029.mail.protection.outlook.com (100.127.140.143) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 07:39:47 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Wed, 24 May 2023 07:39:46 +0000
Received: from 78039e85eb6e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 146BB87C-85BA-4DE8-9FF6-9B914A89B9BF.1; 
 Wed, 24 May 2023 07:39:40 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 78039e85eb6e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 07:39:40 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by PA4PR08MB5904.eurprd08.prod.outlook.com (2603:10a6:102:e5::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 07:39:38 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 07:39:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a195061-fa06-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ivUBw/lSIJMGqEvNt20YZqVfNrYZCUitn8kD/5DRJVY=;
 b=82CZoqyQGlNFxBQ1QcGT540geuCvUP262asRBmSt8j2krmFyMgm0VsHKbUK0Zhk81SI2bN+PvDxzirx3aiyLNw2TXLIEbKsimlzCBIA23x+6mGJmTQmozv+Jtf6JZkKxxthJ03U+DSwLasM3LlAuBWzfX7L3MDlbPp0McuzZpXQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: da272b1aa70f8e20
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZBRscVxlWrz4dtQHFi+LvG6p8BRevPH2c045uh/2xjacwbU3ixd+O0WjXhojM+XsAdGNseghHryIfCQqhjXYuuIqJ8WQgLOlQh8mrbKUgRLpuNmRgaKgMg5b2CqKqWdf0Po+Vjs5LuSMUuZLugXR0B/mCApFIbAGJ+JU4EY4Tj7m+B/7nhhF7mPDsMLv+0uC2+qPiRl4zw4OP6rLQzrdfAOuIRQrTgdTQ9SCTUCVvWdyTeCfo9cIBDlJhmkkJNlYxbwIv0fxAaOhc2Ui+YVnU8hGgWdbSpqM73f4Dft+noUrj4q+7pRs7USZ5qy0b36xVMsRE11/cn84xB4+Abw4cw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ivUBw/lSIJMGqEvNt20YZqVfNrYZCUitn8kD/5DRJVY=;
 b=ByNxx17tXQVnX+CkXTWe4FgU4wADEVsf8HjmS+OuyKf00r8UVC2AnObOZz8Lq4KDbv+AAj0hjbfEW8M8UpNmF90MZkoenbu1sr8aurW4iNr0kw1aX919LznoembGpxbzcAultZ6geFW4wMl6V7TuCgGdOR6a8Shpx8kIDCDMaaF7Ttfg14Q0wwE0hNo3mQg3iz7sH8ofX/Imy84IjzG7vbMau9tFSMviFfi55HxeOj2kKoRizaMiGGRn0XW9Nz2Jt9aX/MR6ICs/vdOqfesLoaD4ydb0gadkONCaJwDXyMiy7vVR0TnEkAqT54N3fqOhOk/UAWDnUs1yJF+wpr+LDA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ivUBw/lSIJMGqEvNt20YZqVfNrYZCUitn8kD/5DRJVY=;
 b=82CZoqyQGlNFxBQ1QcGT540geuCvUP262asRBmSt8j2krmFyMgm0VsHKbUK0Zhk81SI2bN+PvDxzirx3aiyLNw2TXLIEbKsimlzCBIA23x+6mGJmTQmozv+Jtf6JZkKxxthJ03U+DSwLasM3LlAuBWzfX7L3MDlbPp0McuzZpXQ=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 02/15] build: rework asm-offsets.* build step to use
 kbuild
Thread-Topic: [XEN PATCH 02/15] build: rework asm-offsets.* build step to use
 kbuild
Thread-Index: AQHZjZUYHIYuuKOi9UK9bl3DPb8C/K9pCpGA
Date: Wed, 24 May 2023 07:39:37 +0000
Message-ID: <CBAF4CF3-C7DD-4071-9321-5EC7BEA1D432@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-3-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-3-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|PA4PR08MB5904:EE_|AM7EUR03FT029:EE_|AS2PR08MB10055:EE_
X-MS-Office365-Filtering-Correlation-Id: abb8fcbc-6a38-44e4-d279-08db5c2a0c7d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <3CF09667F70DA94BB7DD1A779AA058B0@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB5904
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1d2df5d9-f67f-48ef-2692-08db5c2a0700
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 07:39:47.0386
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: abb8fcbc-6a38-44e4-d279-08db5c2a0c7d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB10055

> On 23 May 2023, at 17:37, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> Use $(if_changed_dep, ) macro to generate "asm-offsets.s" and remove
> the use of $(move-if-changes,). That mean that "asm-offset.s" will be
> changed even when the output doesn't change.
> 
> But "asm-offsets.s" is only used to generated "asm-offsets.h". So
> instead of regenerating "asm-offsets.h" every time "asm-offsets.s"
> change, we will use "$(filechk, )" to only update the ".h" when the
> output change. Also, with "$(filechk, )", the file does get
> regenerated when the rule change in the makefile.
> 
> This changes also result in a cleaner build log.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> 
> Instead of having a special $(cmd_asm-offsets.s) command, we could
> probably reuse $(cmd_cc_s_c) from Rules.mk, but that would mean that
> an hypothetical additional flags "-flto" in CFLAGS would not be
> removed anymore, not sure if that matter here.
> 
> But then we could write this:
> 
> targets += arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s
> arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s: CFLAGS-y += -g0
> arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s FORCE
> 
> instead of having to write a rule for asm-offsets.s

The solution above seems clean. Maybe I am wrong, but -flto should not matter here, as we are
not building objects to be included in the final link, should it? And the GCC documentation states just:

"It is recommended that you compile all the files participating in the same link with the same
options and also specify those options at link time."

I've also tested this patch and it works fine. I have to say, however, that I preferred
the more verbose output, so that people can check how we are invoking the compiler,
but I guess it's now more consistent with the other invocations, which don't print
the compiler invocation.

So if you want to proceed with this one, it looks fine to me:

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>


From xen-devel-bounces@lists.xenproject.org Wed May 24 07:50:03 2023
Message-ID: <0b842e78-a206-7434-f7a9-d8021dd66731@amd.com>
Date: Wed, 24 May 2023 09:49:44 +0200
Subject: Re: [PATCH v3 3/6] iommu/arm: Introduce iommu_add_dt_pci_sideband_ids
 API
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
	<xen-devel@lists.xenproject.org>
CC: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Rahul Singh
	<rahul.singh@arm.com>, Bertrand Marquis <bertrand.marquis@arm.com>
References: <20230518210658.66156-1-stewart.hildebrand@amd.com>
 <20230518210658.66156-4-stewart.hildebrand@amd.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <20230518210658.66156-4-stewart.hildebrand@amd.com>

Hi Stewart,

On 18/05/2023 23:06, Stewart Hildebrand wrote:
> 
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The main purpose of this patch is to add a way to register a PCI device
> (which is behind the IOMMU) using the generic PCI-IOMMU DT bindings [1]
> before assigning that device to a domain.
> 
> This behaves in almost the same way as the existing iommu_add_dt_device API;
> the difference is in the devices to handle and the DT bindings to use.
> 
> The function of_map_id to translate an ID through a downstream mapping
> (which is also suitable for mapping Requester ID) was borrowed from Linux
> (v5.10-rc6) and updated according to the Xen code base.
> 
> XXX: I don't port pci_for_each_dma_alias from Linux, which is a part
> of the PCI-IOMMU bindings infrastructure, as I don't have a good understanding
> of how it is expected to work in a Xen environment.
> Also it is not completely clear whether we need to distinguish between
> different PCI types here (DEV_TYPE_PCI, DEV_TYPE_PCI_HOST_BRIDGE, etc).
> For example, how should we behave here if the host bridge doesn't have
> a stream ID (so is not described in the iommu-map property): simply
> fail, or bypass translation?
> 
> [1] https://www.kernel.org/doc/Documentation/devicetree/bindings/pci/pci-iommu.txt
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
> ---
> v2->v3:
> * new patch title (was: iommu/arm: Introduce iommu_add_dt_pci_device API)
> * renamed function
>   from: iommu_add_dt_pci_device
>   to: iommu_add_dt_pci_sideband_ids
> * removed stale ops->add_device check
> * iommu.h: add empty stub iommu_add_dt_pci_sideband_ids for !HAS_DEVICE_TREE
> * iommu.h: add iommu_add_pci_sideband_ids helper
> * iommu.h: don't wrap prototype in #ifdef CONFIG_HAS_PCI
> * s/iommu_fwspec_free(pci_to_dev(pdev))/iommu_fwspec_free(dev)/
> 
> v1->v2:
> * remove extra devfn parameter since pdev fully describes the device
> * remove ops->add_device() call from iommu_add_dt_pci_device(). Instead, rely on
>   the existing iommu call in iommu_add_device().
> * move the ops->add_device and ops->dt_xlate checks earlier
> 
> downstream->v1:
> * rebase
> * add const qualifier to struct dt_device_node *np arg in dt_map_id()
> * add const qualifier to struct dt_device_node *np declaration in iommu_add_pci_device()
> * use stdint.h types instead of u8/u32/etc...
> * rename functions:
>   s/dt_iommu_xlate/iommu_dt_xlate/
>   s/dt_map_id/iommu_dt_pci_map_id/
>   s/iommu_add_pci_device/iommu_add_dt_pci_device/
> * add device_is_protected check in iommu_add_dt_pci_device
> * wrap prototypes in CONFIG_HAS_PCI
> 
> (cherry picked from commit 734e3bf6ee77e7947667ab8fa96c25b349c2e1da from
>  the downstream branch poc/pci-passthrough from
>  https://gitlab.com/xen-project/people/bmarquis/xen-arm-poc.git)
> ---
>  xen/drivers/passthrough/device_tree.c | 140 ++++++++++++++++++++++++++
>  xen/include/xen/device_tree.h         |  25 +++++
>  xen/include/xen/iommu.h               |  17 +++-
>  3 files changed, 181 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index 1b50f4670944..d568166e19ec 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -151,6 +151,146 @@ static int iommu_dt_xlate(struct device *dev,
>      return ops->dt_xlate(dev, iommu_spec);
>  }
> 
> +#ifdef CONFIG_HAS_PCI
> +int iommu_dt_pci_map_id(const struct dt_device_node *np, uint32_t id,
> +                        const char *map_name, const char *map_mask_name,
> +                        struct dt_device_node **target, uint32_t *id_out)
> +{
> +    uint32_t map_mask, masked_id, map_len;
> +    const __be32 *map = NULL;
> +
> +    if ( !np || !map_name || (!target && !id_out) )
> +        return -EINVAL;
> +
> +    map = dt_get_property(np, map_name, &map_len);
> +    if ( !map )
> +    {
> +        if ( target )
> +            return -ENODEV;
Please add an empty line here.

> +        /* Otherwise, no map implies no translation */
> +        *id_out = id;
> +        return 0;
> +    }
> +
> +    if ( !map_len || map_len % (4 * sizeof(*map)) )
Could you enclose the second expression in parentheses?

> +    {
> +        printk(XENLOG_ERR "%pOF: Error: Bad %s length: %d\n", np,
%pOF is a Linux-specific printk format to print the full name of a node.
We do not have it in Xen (see printk-formats.txt). If you want to achieve the same, use np->full_name.
This applies to all the uses below.
Also, use %u for map_len, as it is unsigned.

> +            map_name, map_len);
Incorrect alignment: the continuation line should be aligned with the first argument of printk().

> +        return -EINVAL;
> +    }
> +
> +    /* The default is to select all bits. */
> +    map_mask = 0xffffffff;
> +
> +    /*
> +     * Can be overridden by "{iommu,msi}-map-mask" property.
> +     * If of_property_read_u32() fails, the default is used.
s/of_property_read_u32/dt_property_read_u32

> +     */
> +    if ( map_mask_name )
> +        dt_property_read_u32(np, map_mask_name, &map_mask);
> +
> +    masked_id = map_mask & id;
> +    for ( ; (int)map_len > 0; map_len -= 4 * sizeof(*map), map += 4 )
> +    {
> +        struct dt_device_node *phandle_node;
> +        uint32_t id_base = be32_to_cpup(map + 0);
> +        uint32_t phandle = be32_to_cpup(map + 1);
> +        uint32_t out_base = be32_to_cpup(map + 2);
> +        uint32_t id_len = be32_to_cpup(map + 3);
> +
> +        if ( id_base & ~map_mask )
> +        {
> +            printk(XENLOG_ERR "%pOF: Invalid %s translation - %s-mask (0x%x) ignores id-base (0x%x)\n",
we tend to use PRIx32 to print uint32_t values.

> +                   np, map_name, map_name, map_mask, id_base);
> +            return -EFAULT;
> +        }
> +
> +        if ( masked_id < id_base || masked_id >= id_base + id_len )
Could you enclose the expressions in parentheses?

> +            continue;
> +
> +        phandle_node = dt_find_node_by_phandle(phandle);
> +        if ( !phandle_node )
> +            return -ENODEV;
> +
> +        if ( target )
> +        {
> +            if ( !*target )
> +                *target = phandle_node;
> +
> +            if ( *target != phandle_node )
> +                continue;
> +        }
> +
> +        if ( id_out )
> +            *id_out = masked_id - id_base + out_base;
> +
> +        printk(XENLOG_DEBUG "%pOF: %s, using mask %08x, id-base: %08x, out-base: %08x, length: %08x, id: %08x -> %08x\n",
%08"PRIx32"

> +               np, map_name, map_mask, id_base, out_base, id_len, id,
> +               masked_id - id_base + out_base);
> +        return 0;
> +    }
> +
> +    printk(XENLOG_ERR "%pOF: no %s translation for id 0x%x on %pOF\n",
> +           np, map_name, id, target && *target ? *target : NULL);
> +
> +    /*
> +     * NOTE: Linux bypasses translation without returning an error here,
> +     * but should we behave in the same way on Xen? Restrict for now.
> +     */
> +    return -EFAULT;
> +}
> +
> +int iommu_add_dt_pci_sideband_ids(struct pci_dev *pdev)
> +{
> +    const struct iommu_ops *ops = iommu_get_ops();
> +    struct dt_phandle_args iommu_spec = { .args_count = 1 };
> +    struct device *dev = pci_to_dev(pdev);
> +    const struct dt_device_node *np;
> +    int rc = NO_IOMMU;
> +
> +    if ( !iommu_enabled )
> +        return NO_IOMMU;
> +
> +    if ( !ops )
> +        return -EINVAL;
> +
> +    if ( device_is_protected(dev) )
> +        return 0;
> +
> +    if ( dev_iommu_fwspec_get(dev) )
> +        return -EEXIST;
> +
> +    np = pci_find_host_bridge_node(pdev);
> +    if ( !np )
> +        return -ENODEV;
> +
> +    /*
> +     * The driver which supports generic PCI-IOMMU DT bindings must have
> +     * these callback implemented.
> +     */
> +    if ( !ops->dt_xlate )
> +        return -EINVAL;
See my comment on the previous patch: this check could be moved into iommu_dt_xlate().

> +
> +    /*
> +     * According to the Documentation/devicetree/bindings/pci/pci-iommu.txt
> +     * from Linux.
> +     */
> +    rc = iommu_dt_pci_map_id(np, PCI_BDF(pdev->bus, pdev->devfn), "iommu-map",
> +                             "iommu-map-mask", &iommu_spec.np, iommu_spec.args);
> +    if ( rc )
> +        return rc == -ENODEV ? NO_IOMMU : rc;
> +
> +    rc = iommu_dt_xlate(dev, &iommu_spec);
> +    if ( rc < 0 )
> +    {
> +        iommu_fwspec_free(dev);
> +        return -EINVAL;
> +    }
> +
> +    return rc;
> +}
> +#endif /* CONFIG_HAS_PCI */
> +
>  int iommu_add_dt_device(struct dt_device_node *np)
>  {
>      const struct iommu_ops *ops = iommu_get_ops();
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index c1e4751a581f..dc40fdfb9231 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -852,6 +852,31 @@ int dt_count_phandle_with_args(const struct dt_device_node *np,
>   */
>  int dt_get_pci_domain_nr(struct dt_device_node *node);
> 
> +#ifdef CONFIG_HAS_PCI
> +/**
> + * iommu_dt_pci_map_id - Translate an ID through a downstream mapping.
> + * @np: root complex device node.
> + * @id: device ID to map.
> + * @map_name: property name of the map to use.
> + * @map_mask_name: optional property name of the mask to use.
> + * @target: optional pointer to a target device node.
> + * @id_out: optional pointer to receive the translated ID.
> + *
> + * Given a device ID, look up the appropriate implementation-defined
> + * platform ID and/or the target device which receives transactions on that
> + * ID, as per the "iommu-map" and "msi-map" bindings. Either of @target or
> + * @id_out may be NULL if only the other is required. If @target points to
> + * a non-NULL device node pointer, only entries targeting that node will be
> + * matched; if it points to a NULL value, it will receive the device node of
> + * the first matching target phandle, with a reference held.
> + *
> + * Return: 0 on success or a standard error code on failure.
> + */
> +int iommu_dt_pci_map_id(const struct dt_device_node *np, uint32_t id,
> +                        const char *map_name, const char *map_mask_name,
> +                        struct dt_device_node **target, uint32_t *id_out);
Why is this IOMMU function prototype in device_tree.h and not in iommu.h, where all the rest of the
IOMMU DT-related prototypes are placed?

Furthermore, does this function need to be exposed globally?
Looking at this series, it could be static, as it is only used by iommu_add_dt_pci_sideband_ids().
Will there be any use for it later on?

~Michal


From xen-devel-bounces@lists.xenproject.org Wed May 24 08:01:29 2023
Message-ID: <ee97f251-a780-82be-b607-7cc7783a950f@suse.com>
Date: Wed, 24 May 2023 10:01:03 +0200
Subject: Re: [XEN PATCH 02/15] build: rework asm-offsets.* build step to use
 kbuild
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-3-anthony.perard@citrix.com>
 <CBAF4CF3-C7DD-4071-9321-5EC7BEA1D432@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CBAF4CF3-C7DD-4071-9321-5EC7BEA1D432@arm.com>

On 24.05.2023 09:39, Luca Fancellu wrote:
>> On 23 May 2023, at 17:37, Anthony PERARD <anthony.perard@citrix.com> wrote:
>> Instead of having a special $(cmd_asm-offsets.s) command, we could
>> probably reuse $(cmd_cc_s_c) from Rules.mk, but that would mean that
>> a hypothetical additional flag "-flto" in CFLAGS would no longer be
>> removed; not sure if that matters here.
>>
>> But then we could write this:
>>
>> targets += arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s
>> arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s: CFLAGS-y += -g0
>> arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s FORCE
>>
>> instead of having to write a rule for asm-offsets.s
> 
> The solution above seems clean. Maybe I am wrong, but -flto should not matter here, as we are
> not building objects that end up in the final link. And the gcc documentation states only:
> 
> “It is recommended that you compile all the files participating in the same link with the same
> options and also specify those options at link time."
> 
> I’ve also tested this patch and it works fine. I have to say, however, that I preferred
> the more verbose output, so that people can check how we are invoking the compiler,
> but I guess it’s now more consistent with the other invocations, which don’t print
> the compiler invocation.

If you want it more verbose, you can pass V=1 on the make command line.
(Of course that'll affect all commands' output.)

Jan
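[For readers following along: Anthony's suggestion above amounts to dropping the dedicated rule and letting the generic %.s: %.c rule from Rules.mk do the work, with a target-specific flag tweak. A sketch of the resulting fragment, using only the paths and variables from the quoted snippet (not a tested patch):

```make
# Let the generic rule in Rules.mk build asm-offsets.s instead of a
# special $(cmd_asm-offsets.s) command.
targets += arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s

# Target-specific flag: no debug info in the intermediate assembly,
# which is only parsed to generate asm-offsets.h.
arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s: CFLAGS-y += -g0

arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: \
        arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s FORCE
```

The `CFLAGS-y += -g0` line relies on make's target-specific variable values, so the generic rule picks it up only when building that one file.]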


From xen-devel-bounces@lists.xenproject.org Wed May 24 08:06:05 2023
Date: Wed, 24 May 2023 10:05:46 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paolo Bonzini <pbonzini@redhat.com>, qemu-block@nongnu.org, 
	Paul Durrant <paul@xen.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>, Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 1/6] block: add blk_io_plug_call() API
Message-ID: <cwgklwmov3evxbmty56vwgg2xzdpagcrguy66adrdiy43u35eb@jmamrxotsf45>
References: <20230517221022.325091-1-stefanha@redhat.com>
 <20230517221022.325091-2-stefanha@redhat.com>
 <mzxjz4d3ab3sq6grwsle6wlacysh2uffz42ojpdze3hmqimbr5@fxgkad47nnim>
 <20230523154708.GB96478@fedora>
MIME-Version: 1.0
In-Reply-To: <20230523154708.GB96478@fedora>

On Tue, May 23, 2023 at 11:47:08AM -0400, Stefan Hajnoczi wrote:
>On Fri, May 19, 2023 at 10:45:57AM +0200, Stefano Garzarella wrote:
>> On Wed, May 17, 2023 at 06:10:17PM -0400, Stefan Hajnoczi wrote:
>> > Introduce a new API for thread-local blk_io_plug() that does not
>> > traverse the block graph. The goal is to make blk_io_plug() multi-queue
>> > friendly.
>> >
>> > Instead of having block drivers track whether or not we're in a plugged
>> > section, provide an API that allows them to defer a function call until
>> > we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is
>> > called multiple times with the same fn/opaque pair, then fn() is only
>> > called once at the end of the function - resulting in batching.
>> >
>> > This patch introduces the API and changes blk_io_plug()/blk_io_unplug().
>> > blk_io_plug()/blk_io_unplug() no longer require a BlockBackend argument
>> > because the plug state is now thread-local.
>> >
>> > Later patches convert block drivers to blk_io_plug_call() and then we
>> > can finally remove .bdrv_co_io_plug() once all block drivers have been
>> > converted.
>> >
>> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> > ---
>> > MAINTAINERS                       |   1 +
>> > include/sysemu/block-backend-io.h |  13 +--
>> > block/block-backend.c             |  22 -----
>> > block/plug.c                      | 159 ++++++++++++++++++++++++++++++
>> > hw/block/dataplane/xen-block.c    |   8 +-
>> > hw/block/virtio-blk.c             |   4 +-
>> > hw/scsi/virtio-scsi.c             |   6 +-
>> > block/meson.build                 |   1 +
>> > 8 files changed, 173 insertions(+), 41 deletions(-)
>> > create mode 100644 block/plug.c
>> >
>> > diff --git a/MAINTAINERS b/MAINTAINERS
>> > index 50585117a0..574202295c 100644
>> > --- a/MAINTAINERS
>> > +++ b/MAINTAINERS
>> > @@ -2644,6 +2644,7 @@ F: util/aio-*.c
>> > F: util/aio-*.h
>> > F: util/fdmon-*.c
>> > F: block/io.c
>> > +F: block/plug.c
>> > F: migration/block*
>> > F: include/block/aio.h
>> > F: include/block/aio-wait.h
>> > diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
>> > index d62a7ee773..be4dcef59d 100644
>> > --- a/include/sysemu/block-backend-io.h
>> > +++ b/include/sysemu/block-backend-io.h
>> > @@ -100,16 +100,9 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
>> > int blk_get_max_iov(BlockBackend *blk);
>> > int blk_get_max_hw_iov(BlockBackend *blk);
>> >
>> > -/*
>> > - * blk_io_plug/unplug are thread-local operations. This means that multiple
>> > - * IOThreads can simultaneously call plug/unplug, but the caller must ensure
>> > - * that each unplug() is called in the same IOThread of the matching plug().
>> > - */
>> > -void coroutine_fn blk_co_io_plug(BlockBackend *blk);
>> > -void co_wrapper blk_io_plug(BlockBackend *blk);
>> > -
>> > -void coroutine_fn blk_co_io_unplug(BlockBackend *blk);
>> > -void co_wrapper blk_io_unplug(BlockBackend *blk);
>> > +void blk_io_plug(void);
>> > +void blk_io_unplug(void);
>> > +void blk_io_plug_call(void (*fn)(void *), void *opaque);
>> >
>> > AioContext *blk_get_aio_context(BlockBackend *blk);
>> > BlockAcctStats *blk_get_stats(BlockBackend *blk);
>> > diff --git a/block/block-backend.c b/block/block-backend.c
>> > index ca537cd0ad..1f1d226ba6 100644
>> > --- a/block/block-backend.c
>> > +++ b/block/block-backend.c
>> > @@ -2568,28 +2568,6 @@ void blk_add_insert_bs_notifier(BlockBackend *blk, Notifier *notify)
>> >     notifier_list_add(&blk->insert_bs_notifiers, notify);
>> > }
>> >
>> > -void coroutine_fn blk_co_io_plug(BlockBackend *blk)
>> > -{
>> > -    BlockDriverState *bs = blk_bs(blk);
>> > -    IO_CODE();
>> > -    GRAPH_RDLOCK_GUARD();
>> > -
>> > -    if (bs) {
>> > -        bdrv_co_io_plug(bs);
>> > -    }
>> > -}
>> > -
>> > -void coroutine_fn blk_co_io_unplug(BlockBackend *blk)
>> > -{
>> > -    BlockDriverState *bs = blk_bs(blk);
>> > -    IO_CODE();
>> > -    GRAPH_RDLOCK_GUARD();
>> > -
>> > -    if (bs) {
>> > -        bdrv_co_io_unplug(bs);
>> > -    }
>> > -}
>> > -
>> > BlockAcctStats *blk_get_stats(BlockBackend *blk)
>> > {
>> >     IO_CODE();
>> > diff --git a/block/plug.c b/block/plug.c
>> > new file mode 100644
>> > index 0000000000..6738a568ba
>> > --- /dev/null
>> > +++ b/block/plug.c
>> > @@ -0,0 +1,159 @@
>> > +/* SPDX-License-Identifier: GPL-2.0-or-later */
>> > +/*
>> > + * Block I/O plugging
>> > + *
>> > + * Copyright Red Hat.
>> > + *
>> > + * This API defers a function call within a blk_io_plug()/blk_io_unplug()
>> > + * section, allowing multiple calls to batch up. This is a performance
>> > + * optimization that is used in the block layer to submit several I/O requests
>> > + * at once instead of individually:
>> > + *
>> > + *   blk_io_plug(); <-- start of plugged region
>> > + *   ...
>> > + *   blk_io_plug_call(my_func, my_obj); <-- deferred my_func(my_obj) call
>> > + *   blk_io_plug_call(my_func, my_obj); <-- another
>> > + *   blk_io_plug_call(my_func, my_obj); <-- another
>> > + *   ...
>> > + *   blk_io_unplug(); <-- end of plugged region, my_func(my_obj) is called once
>> > + *
>> > + * This code is actually generic and not tied to the block layer. If another
>> > + * subsystem needs this functionality, it could be renamed.
>> > + */
>> > +
>> > +#include "qemu/osdep.h"
>> > +#include "qemu/coroutine-tls.h"
>> > +#include "qemu/notify.h"
>> > +#include "qemu/thread.h"
>> > +#include "sysemu/block-backend.h"
>> > +
>> > +/* A function call that has been deferred until unplug() */
>> > +typedef struct {
>> > +    void (*fn)(void *);
>> > +    void *opaque;
>> > +} UnplugFn;
>> > +
>> > +/* Per-thread state */
>> > +typedef struct {
>> > +    unsigned count;       /* how many times has plug() been called? */
>> > +    GArray *unplug_fns;   /* functions to call at unplug time */
>> > +} Plug;
>> > +
>> > +/* Use get_ptr_plug() to fetch this thread-local value */
>> > +QEMU_DEFINE_STATIC_CO_TLS(Plug, plug);
>> > +
>> > +/* Called at thread cleanup time */
>> > +static void blk_io_plug_atexit(Notifier *n, void *value)
>> > +{
>> > +    Plug *plug = get_ptr_plug();
>> > +    g_array_free(plug->unplug_fns, TRUE);
>> > +}
>> > +
>> > +/* This won't involve coroutines, so use __thread */
>> > +static __thread Notifier blk_io_plug_atexit_notifier;
>> > +
>> > +/**
>> > + * blk_io_plug_call:
>> > + * @fn: a function pointer to be invoked
>> > + * @opaque: a user-defined argument to @fn()
>> > + *
>> > + * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
>> > + * section.
>>
>> Just to understand better, what if two BlockDrivers share the same
>> iothread but one calls blk_io_plug()/blk_io_unplug(), while the other
>> calls this function outside a blk_io_plug()/blk_io_unplug() section?
>>
>> If the call is in the middle of the other BlockDriver's section, it is
>> deferred, right?
>
>Yes, the call is deferred until blk_io_unplug().
>
>> Is this situation possible?
>
>One scenario I can think of is when aio_poll() is called between
>plug/unplug. In that case, some I/O associated with device B might
>happen while device A is inside its plug/unplug section.
>
>> Or should we prevent blk_io_plug_call() from being called out of a
>> blk_io_plug()/blk_io_unplug() section?
>
>blk_io_plug_call() is called outside blk_io_plug()/blk_io_unplug() when
>device emulation doesn't use plug/unplug. For example, IDE doesn't use
>it but still calls down into the same Linux AIO or io_uring code that
>invokes blk_io_plug_call(). This is why blk_io_plug_call() calls fn()
>immediately when invoked outside plug/unplug.

Got it, so it seems that everything should work properly.

Thanks,
Stefano



From xen-devel-bounces@lists.xenproject.org Wed May 24 08:07:55 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Konrad Rzeszutek
 Wilk <konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>
Subject: Re: [XEN PATCH 07/15] build: move XEN_HAS_BUILD_ID out of Config.mk
Date: Wed, 24 May 2023 08:06:44 +0000
Message-ID: <04E10732-4D3A-497C-BA38-584B90216E17@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-8-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-8-anthony.perard@citrix.com>



> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> Whether or not the linker can do build id is only used by the
> hypervisor build, so move that there.
>
> Rename $(build_id_linker) to $(XEN_LDFLAGS_BUILD_ID) as this is a
> better name to export, using the "XEN_*" namespace.
>
> Also update XEN_TREEWIDE_CFLAGS so the flags can be used for
> arch/x86/boot/ CFLAGS_x86_32.
>
> Besides a reordering of the command line where CFLAGS is used, there
> shouldn't be any other changes.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>


Seems good to me!

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>




From xen-devel-bounces@lists.xenproject.org Wed May 24 08:08:31 2023
         v/Xw==
X-Gm-Message-State: AC+VfDxJRTD8TEt+/ACQaCJqVc9UpvYAQKKvZpHGl1HF2mO5naC1Q/Pd
	WJye3tW7qghfEmsq+sdtNlT3vFquerwcMdYQjRX2f3sR4y58bWbMwQF5cZbEvC539iaMKcAbvCI
	gxIbEx+iYOK3rO20+P/0v5GHVE3o=
X-Received: by 2002:a05:600c:2152:b0:3f6:938:1001 with SMTP id v18-20020a05600c215200b003f609381001mr4316374wml.8.1684915702808;
        Wed, 24 May 2023 01:08:22 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ6OZ1cIk0Dqc8AIrN6VBJlUyMvP0RkdvvnqSAdc1TcTNBSiUtLIy6saUk6qZOkQyyZ9l92FHg==
X-Received: by 2002:a05:600c:2152:b0:3f6:938:1001 with SMTP id v18-20020a05600c215200b003f609381001mr4316360wml.8.1684915702550;
        Wed, 24 May 2023 01:08:22 -0700 (PDT)
Date: Wed, 24 May 2023 10:08:18 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Julia Suvorova <jusual@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Hanna Reitz <hreitz@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, xen-devel@lists.xenproject.org, 
	eblake@redhat.com, Anthony Perard <anthony.perard@citrix.com>, 
	qemu-block@nongnu.org
Subject: Re: [PATCH v2 1/6] block: add blk_io_plug_call() API
Message-ID: <dpbl4aehbrgii34tibg3pzgkdsi56vxtvk66657ksqedho3cnv@fhbd2jozlhdp>
References: <20230523171300.132347-1-stefanha@redhat.com>
 <20230523171300.132347-2-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230523171300.132347-2-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline

On Tue, May 23, 2023 at 01:12:55PM -0400, Stefan Hajnoczi wrote:
>Introduce a new API for thread-local blk_io_plug() that does not
>traverse the block graph. The goal is to make blk_io_plug() multi-queue
>friendly.
>
>Instead of having block drivers track whether or not we're in a plugged
>section, provide an API that allows them to defer a function call until
>we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is
>called multiple times with the same fn/opaque pair, then fn() is only
>called once at the end of the function - resulting in batching.
>
>This patch introduces the API and changes blk_io_plug()/blk_io_unplug().
>blk_io_plug()/blk_io_unplug() no longer require a BlockBackend argument
>because the plug state is now thread-local.
>
>Later patches convert block drivers to blk_io_plug_call() and then we
>can finally remove .bdrv_co_io_plug() once all block drivers have been
>converted.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>Reviewed-by: Eric Blake <eblake@redhat.com>
>---
>v2
>- "is not be freed" -> "is not freed" [Eric]
>---
> MAINTAINERS                       |   1 +
> include/sysemu/block-backend-io.h |  13 +--
> block/block-backend.c             |  22 -----
> block/plug.c                      | 159 ++++++++++++++++++++++++++++++
> hw/block/dataplane/xen-block.c    |   8 +-
> hw/block/virtio-blk.c             |   4 +-
> hw/scsi/virtio-scsi.c             |   6 +-
> block/meson.build                 |   1 +
> 8 files changed, 173 insertions(+), 41 deletions(-)
> create mode 100644 block/plug.c
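[Editorial note: the deduplicated-deferral semantics described in the commit message can be sketched as the standalone toy model below. This is NOT QEMU's block/plug.c; the function names mirror the patch, but the fixed-size array, depth counter, and internals here are invented for illustration, and real QEMU state is thread-local.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: calls registered via blk_io_plug_call() with the same
 * fn/opaque pair are coalesced and run exactly once at blk_io_unplug(). */
typedef void (*UnplugFn)(void *opaque);

struct DeferredCall {
    UnplugFn fn;
    void *opaque;
};

static struct DeferredCall calls[16];
static size_t ncalls;
static unsigned plug_depth;

void blk_io_plug(void)
{
    plug_depth++;
}

void blk_io_plug_call(UnplugFn fn, void *opaque)
{
    if (plug_depth == 0) {
        fn(opaque); /* not in a plugged section: call immediately */
        return;
    }
    for (size_t i = 0; i < ncalls; i++) {
        if (calls[i].fn == fn && calls[i].opaque == opaque) {
            return; /* same fn/opaque already deferred: batch into one call */
        }
    }
    assert(ncalls < sizeof(calls) / sizeof(calls[0]));
    calls[ncalls++] = (struct DeferredCall){ fn, opaque };
}

void blk_io_unplug(void)
{
    assert(plug_depth > 0);
    if (--plug_depth == 0) {
        for (size_t i = 0; i < ncalls; i++) {
            calls[i].fn(calls[i].opaque); /* flush deferred calls once */
        }
        ncalls = 0;
    }
}
```

A driver would call blk_io_plug_call(submit_batch, state) from each I/O request; nested plug/unplug pairs only flush at the outermost unplug, so repeated registrations collapse into one submission.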

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>



From xen-devel-bounces@lists.xenproject.org Wed May 24 08:11:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:11:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538825.839085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jad-0005xu-FN; Wed, 24 May 2023 08:11:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538825.839085; Wed, 24 May 2023 08:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jad-0005xn-Cp; Wed, 24 May 2023 08:11:23 +0000
Received: by outflank-mailman (input) for mailman id 538825;
 Wed, 24 May 2023 08:11:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cCLF=BN=redhat.com=sgarzare@srs-se1.protection.inumbo.net>)
 id 1q1jab-0005xh-MY
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:11:21 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8f8404a3-fa0a-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 10:11:19 +0200 (CEST)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-279-ikgPTjepOnaVOThyTc337A-1; Wed, 24 May 2023 04:11:16 -0400
Received: by mail-wr1-f69.google.com with SMTP id
 ffacd0b85a97d-30959544cbdso223068f8f.0
 for <xen-devel@lists.xenproject.org>; Wed, 24 May 2023 01:11:16 -0700 (PDT)
Received: from sgarzare-redhat ([134.0.5.107])
 by smtp.gmail.com with ESMTPSA id
 r14-20020adfce8e000000b00306c5900c10sm13683435wrn.9.2023.05.24.01.11.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 24 May 2023 01:11:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f8404a3-fa0a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684915878;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=G/7YN0/EgAQNZhcWOzYnQDnc6s/74LmWiJRQZI2Rysc=;
	b=g6atmp3WFlWkhSeqVvj1QRj3Be8+aEeMrcm6KD2cyvmEvZx9f04jWHTkUHuYlhd3gZxM0q
	Xazp+7WVxVofacPeRzGZbo9+Y3CNipmgxQfu9fes74TVD73uuOOi7PYkMYbbry9DyBkwph
	o5vy4gOogqFrexE4GT4tNHfKgdHfMT0=
X-MC-Unique: ikgPTjepOnaVOThyTc337A-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684915875; x=1687507875;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=G/7YN0/EgAQNZhcWOzYnQDnc6s/74LmWiJRQZI2Rysc=;
        b=E7HI3zeCO8jinugRpW1f2JGv+Ve7F5NCfeRgVhzfDORiHHE5A7HjSzCcxI2ZnMniTl
         iJwlHE6C0a9mlx1zF4ffgsA1GDyNbPxSZdwBD3QekjJaHlOVEstJxUTsESj2CDLI/Fpi
         xMI3Pu3JTTs3JTJ8ISKiVc7hlgPo3mGMN/pvU5Wm1i0sRBUyicZoGcrAOOrZ1aCwpRJ2
         GUydF0iVesEZAELdNUpp7hYPBhooVKLxx98VOatQDdVki+2gfeqKof+DIOPkZt/4tzdk
         /cbLdxthIHwhmYDqVdcyduUzBKeACD3OlScDj8xT19cSuQLKne6vnM7G3OeLPcsadP5F
         rfqQ==
X-Gm-Message-State: AC+VfDwt42C/O/9gL11nBKgNp98KyHMkgTXDMzF07Fe9ApjZ62WRJuzZ
	LPYrrupOh+8nEKCSf62l/vqQy+NKCqythylOuC9vslm2tvR8oeQUnRhLIz+ZElwn3gLacXlKT6O
	4MqfWIfJsPuHpKCZFCiLK0EyOFx0=
X-Received: by 2002:a5d:5291:0:b0:307:f75:f581 with SMTP id c17-20020a5d5291000000b003070f75f581mr11985751wrv.18.1684915875676;
        Wed, 24 May 2023 01:11:15 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ577uGe1zMc+ifh7kvZp70XkqxdzoAHp3sJz16D+lvdNzKzMhD8u7jKwCYMRYhN9TscLAWlwA==
X-Received: by 2002:a5d:5291:0:b0:307:f75:f581 with SMTP id c17-20020a5d5291000000b003070f75f581mr11985721wrv.18.1684915875357;
        Wed, 24 May 2023 01:11:15 -0700 (PDT)
Date: Wed, 24 May 2023 10:11:11 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Julia Suvorova <jusual@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Hanna Reitz <hreitz@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, xen-devel@lists.xenproject.org, 
	eblake@redhat.com, Anthony Perard <anthony.perard@citrix.com>, 
	qemu-block@nongnu.org
Subject: Re: [PATCH v2 2/6] block/nvme: convert to blk_io_plug_call() API
Message-ID: <5xfhgfmpnjorl4fxzm5b7ow2nijsz4qx27otj7d6jgzpca26st@jg3ah6u46cqq>
References: <20230523171300.132347-1-stefanha@redhat.com>
 <20230523171300.132347-3-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230523171300.132347-3-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline

On Tue, May 23, 2023 at 01:12:56PM -0400, Stefan Hajnoczi wrote:
>Stop using the .bdrv_co_io_plug() API because it is not multi-queue
>block layer friendly. Use the new blk_io_plug_call() API to batch I/O
>submission instead.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>Reviewed-by: Eric Blake <eblake@redhat.com>
>---
>v2
>- Remove unused nvme_process_completion_queue_plugged trace event
>  [Stefano]
>---
> block/nvme.c       | 44 ++++++++++++--------------------------------
> block/trace-events |  1 -
> 2 files changed, 12 insertions(+), 33 deletions(-)

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>



From xen-devel-bounces@lists.xenproject.org Wed May 24 08:13:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:13:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538829.839096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jcg-0006YJ-Rj; Wed, 24 May 2023 08:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538829.839096; Wed, 24 May 2023 08:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jcg-0006YC-Nz; Wed, 24 May 2023 08:13:30 +0000
Received: by outflank-mailman (input) for mailman id 538829;
 Wed, 24 May 2023 08:13:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cCLF=BN=redhat.com=sgarzare@srs-se1.protection.inumbo.net>)
 id 1q1jcf-0006Xy-1f
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:13:29 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc68d831-fa0a-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 10:13:28 +0200 (CEST)
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-104-Sg2ppy_2P6Kiq07pUBPByQ-1; Wed, 24 May 2023 04:13:25 -0400
Received: by mail-wr1-f71.google.com with SMTP id
 ffacd0b85a97d-3095483ea29so211965f8f.1
 for <xen-devel@lists.xenproject.org>; Wed, 24 May 2023 01:13:25 -0700 (PDT)
Received: from sgarzare-redhat ([134.0.5.107])
 by smtp.gmail.com with ESMTPSA id
 o11-20020a05600c378b00b003f195d540d9sm1466237wmr.14.2023.05.24.01.13.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 24 May 2023 01:13:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc68d831-fa0a-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684916007;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=pdqRwguXZFLYTV9FtQQFWD/FtkvuQeDoxuRVCDomYds=;
	b=ikeOwOrgm08Zrx0MECdmF7rufVAQt7xFiQvuNy+0qg0kmmjXGc4j7EvVCwcoFs25Kud2tN
	VFq6lD18kqJPYLyCykOnBgUK7Fw4wGHwMHcSs5hkkqbKiGRE/JXb9dBq38wi0wO7iH+By/
	FZgaLouB9kR4g2uqxqOY3athh7LyPHA=
X-MC-Unique: Sg2ppy_2P6Kiq07pUBPByQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684916004; x=1687508004;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=pdqRwguXZFLYTV9FtQQFWD/FtkvuQeDoxuRVCDomYds=;
        b=U09YoVqh6lDUN6/VGtu8Sehqh+05xKXIe8vPySD6GWXP2NqM6e8lZNrSLAoFr2CwcY
         U2d//4huOEGnrI1eg06cnbdDhrJidrr4LeccZs8ZO/Fdwi49quezc8dmAZTfAC7kUgmX
         Hl5drSs15hiw82jryucqR9fIsw9NfX85wmpgIdDKbik7Tn0nQb5SueOGItOsVl3Jm1Wr
         P0tCBTVMZAm08bt97WQH8YJlpn4GxoxBtY571nJvGbUlqiymJ8n1O7tZjwfjLAodx/dv
         1avUkwXmqCu1cRFXU1x7COKj0E9jce3kdnBCMC8WjAVfdKTyJ2mjvuyBkB5/BWy7AvoC
         BbPQ==
X-Gm-Message-State: AC+VfDxjcus0rJGtv8Qb+bJCjyKIS5cAQjHHn7TPvH4puUz8dhGW+ZQG
	y0PZCj2Irr0AA3IXdeGqpnWFI+SulG4+xBouuQ/IlP1qgHJ1NPUFJkZBRUl1Ht3jWvQOExYg2C1
	sEtg0rZbWD7cKUIOW+uvMgHvX5ZA=
X-Received: by 2002:adf:f74a:0:b0:306:342a:6a01 with SMTP id z10-20020adff74a000000b00306342a6a01mr11799940wrp.47.1684916004609;
        Wed, 24 May 2023 01:13:24 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ4ZnJ0/K9dDPiC0ZaAZ/7pQ5w7egHROoU20w4IJP/Gip9aXS3+Iv+Josl6t+iFMGZ+1JPC4Zw==
X-Received: by 2002:adf:f74a:0:b0:306:342a:6a01 with SMTP id z10-20020adff74a000000b00306342a6a01mr11799919wrp.47.1684916004329;
        Wed, 24 May 2023 01:13:24 -0700 (PDT)
Date: Wed, 24 May 2023 10:13:20 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Julia Suvorova <jusual@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Hanna Reitz <hreitz@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, xen-devel@lists.xenproject.org, 
	eblake@redhat.com, Anthony Perard <anthony.perard@citrix.com>, 
	qemu-block@nongnu.org
Subject: Re: [PATCH v2 3/6] block/blkio: convert to blk_io_plug_call() API
Message-ID: <7nq4x6dcajbnjuoytz7g3egfipbq2y5hyadactbco7t4kr6gxt@g37bnv7sijh2>
References: <20230523171300.132347-1-stefanha@redhat.com>
 <20230523171300.132347-4-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230523171300.132347-4-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline

On Tue, May 23, 2023 at 01:12:57PM -0400, Stefan Hajnoczi wrote:
>Stop using the .bdrv_co_io_plug() API because it is not multi-queue
>block layer friendly. Use the new blk_io_plug_call() API to batch I/O
>submission instead.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>Reviewed-by: Eric Blake <eblake@redhat.com>
>---
>v2
>- Add missing #include and fix blkio_unplug_fn() prototype [Stefano]
>---
> block/blkio.c | 43 ++++++++++++++++++++++++-------------------
> 1 file changed, 24 insertions(+), 19 deletions(-)

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>



From xen-devel-bounces@lists.xenproject.org Wed May 24 08:13:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:13:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538830.839106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jcq-0006pt-35; Wed, 24 May 2023 08:13:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538830.839106; Wed, 24 May 2023 08:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jcq-0006pk-05; Wed, 24 May 2023 08:13:40 +0000
Received: by outflank-mailman (input) for mailman id 538830;
 Wed, 24 May 2023 08:13:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1jco-0006p3-Lt
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:13:38 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2060d.outbound.protection.outlook.com
 [2a01:111:f400:fe12::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e177a745-fa0a-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 10:13:36 +0200 (CEST)
Received: from DUZPR01CA0346.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4b8::29) by GV1PR08MB7940.eurprd08.prod.outlook.com
 (2603:10a6:150:9f::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 08:13:33 +0000
Received: from DBAEUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:4b8:cafe::d7) by DUZPR01CA0346.outlook.office365.com
 (2603:10a6:10:4b8::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 08:13:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT004.mail.protection.outlook.com (100.127.142.103) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 08:13:32 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Wed, 24 May 2023 08:13:32 +0000
Received: from 4e06a1b77e5f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 154FE8B2-93BD-4539-91DF-229DA2FCD565.1; 
 Wed, 24 May 2023 08:13:25 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4e06a1b77e5f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 08:13:25 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DB9PR08MB9708.eurprd08.prod.outlook.com (2603:10a6:10:460::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 08:13:23 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 08:13:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e177a745-fa0a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jE+3IFOIGu7z7+DJWr7+RUgt8xxDUyDgAXMrkJnU1XY=;
 b=C16jrQb2yBw2Z++0wNTS464W1svppxjmmVhBszWShYhqrSIw45nx7IgWGoDKsVT4fzrxDZaIZO18pZNyaAIk27/uGUQucXRRER0RlryUT5TPXlB1d+vLIF4BRcZwv1opiAxbYhs8JwB0raXPaxWOU3X/Xq0PAnAhT4qrooMPYI8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0bdcf7060ca90c2d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N4GiBu3HkUu6m//6PZfP7FXbMp3HJkvMa29DW1tJv2rsemKu/Jq4U3N0bX3AE7PPFMj1bK5XMZrsCwzViJixGTf8qePhAZ8a1BLGO7+wi5o7iZOQnjCIYX3jGYoVGlJZvGLCdoJLopFiUTwhoqJ2zm5H5fxvQUwyemKMmS0mOejpmGm4kiUZ2oHarnNmfT4FDqwJsGCDdkX51bWQFMDcmC9fe51AIMgL/Ke5+CLZACUaArZ9F6Bv2mWmd3VBw330to1/m+fV1/6379mjVomsjc/99WKRsc6WDbcDtzlcQmVdMonTqZuJF/ltttRj/cd4GkvUvh1tLnLssZJjaCb0pw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jE+3IFOIGu7z7+DJWr7+RUgt8xxDUyDgAXMrkJnU1XY=;
 b=jLyvrHLYIZxRlsGxE3ho5bqmktF5EgUe3nxICX0bzYAL8b/lpqWV+Lydv4aIL5klwpMEMUcd3oOLDF4yo98AHD+hwdrjRkVi3gisQQXQppez0JPczNe6r16KCIWAz591/pR3x95RjncsYSe+9iJAxCS5ndHYnyn+prco8hzUhqhhDpBikmARSvr5naxUk+l8w0p/HM8qYR0DtHMmsVu1g6FV99JtX31Q3kiUg4+U5yMWG6AS+bGXOXWLOPbRWM9sobSU/T+R8vvyJpjJ3RlFOZrztiUt+Fw/RdEMZFocE4FCtjf93e2iyxvJOaZdKJYhJyyo0EQSt5Rz+oPnvnq39A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jE+3IFOIGu7z7+DJWr7+RUgt8xxDUyDgAXMrkJnU1XY=;
 b=C16jrQb2yBw2Z++0wNTS464W1svppxjmmVhBszWShYhqrSIw45nx7IgWGoDKsVT4fzrxDZaIZO18pZNyaAIk27/uGUQucXRRER0RlryUT5TPXlB1d+vLIF4BRcZwv1opiAxbYhs8JwB0raXPaxWOU3X/Xq0PAnAhT4qrooMPYI8=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 08/15] build: use $(filechk, ) for all
 compat/.xlat/%.lst
Thread-Topic: [XEN PATCH 08/15] build: use $(filechk, ) for all
 compat/.xlat/%.lst
Thread-Index: AQHZjZUbjxSqGdWpCESp84pzrS3Y8a9pE/8A
Date: Wed, 24 May 2023 08:13:23 +0000
Message-ID: <375E8DDB-2B3B-40DF-9BA4-17FAA90EEE8A@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-9-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-9-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DB9PR08MB9708:EE_|DBAEUR03FT004:EE_|GV1PR08MB7940:EE_
X-MS-Office365-Filtering-Correlation-Id: 91f04230-e682-4cd2-c065-08db5c2ec3e5
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 XP0GFFZIVivI22sEpxzQbvX95D9xiEiMHqpdZN2a1aVlgkmSDL3W8u1FhE02W/b6Di+FtlvhoQ8OGxVMYXlNmLvaonwoAhjy5+HHToRuGCibnkcAuOULs8Ike/y5yTjQ/ZADhNf/Njgo44fS81yvPL18ELadetarxQfYKuD2nOv4KLwbK8ZnSH2Qa3wilXKVqpJ+PEOhwzkm5CaNziJmcMHrN5Pb2Z51QhdCw0SODSR3Bhc2qsliFaw+6nTnsHPqpjjT1WuDzFTWxcjPEE6trgip04kbdAa+xpg3g/f3bbqkDstmUpuZkPwaXxwQQUT0ZbmtQvOY/at7nhwbd9oAbAEezaJxmz2p4ADhkjUDH44g/gMj2P8O4qmGiF8/n+KfrzLHPZAz8tmkXGKiDA8G/d0alBFocotNDYsv8pbM8L/+mlRTG1RiGDr+ikfYdMhvcQDu9zct7EhoTktKyC71aM7vTDXFvsGnWmNtbejq+ecGwBHopdPV6QIlkE8FAbwYszoA4UnudLPUBzL3BYgUygmN1kysFT+TTWagnpbvX2K5l8ukeBBvAsQ5wGu22Tv4/soQn1IpvKU9l1JNKtcFzsfpsJNkqR8DD9G7na5wvX/xTiKwtLC4OLSYp5Xa/MHZfM1ZmVtCA3V3Ze+lU2tbKQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(366004)(39860400002)(136003)(346002)(451199021)(91956017)(76116006)(478600001)(38070700005)(6916009)(66556008)(66946007)(66446008)(66476007)(64756008)(4326008)(38100700002)(122000001)(54906003)(71200400001)(316002)(2616005)(41300700001)(6486002)(86362001)(5660300002)(4744005)(2906002)(8676002)(8936002)(26005)(6512007)(6506007)(53546011)(36756003)(186003)(33656002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <46B6CA87FFEFBF4BB7D8DFA49187EBE5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9708
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0ee1c729-e917-4d58-2f73-08db5c2ebe30
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Wh1L27HK23WwKIAnggXHmupcwCmt3utPK5/dm2b/FdguzwIJ6yQhUOW2IKwbeYmH/7+CpXSyvRE41QHpcJGsQv8uVZK1THRkjahFM5fgYcQzZK9ytikBmGBzhd7o72Stjh2YEt7yvX2iObk8VmZ6Eh32zjJxiv8eX3l38DVIVh9ai3SSUxQvTUY5bmIqYWXmLljIbdRZdzpEVNV2y86vWfqC2L27G2gv3lfwvJzp5by83XTSMFTeHU1pX/cXtuwZQ795A2HfUcP21tK3yUbSIinF53/EKnZbPnd6ZbXti+5mqFRJhVYoqmPpI7Z6qVURc/CEWgmE3KMhQQI/XSpB0LsIl8Xm53XW//chs/eIT+6T2lZl+DqBwj6Hrxe8VmgzPJWIWk9dZT4YxdPCf1uus+y4iSNOUwO/q+0qQHorK8aD+wB/4SnW0mXo/v9ZHzAfyEUuJ28YOwk98sR2bSyVH0DsjwIdxDuw8/OkXcB9B2mZKgygWAEcUjXJ3qM38ui3qaplrqDSCojDx3NtzLRvvjA3ZAPn1gBNRtk9CZPEUSJM8Lxz1yNlcqwAXodage1zVHmtk6voX8DSIz8y94ceyqOg9T2stVnTMZ40DcZvVdGWolBYY37zFNt5sIlartBkjl7q7SjtIGeUrCRjWCmTKli3HCjsZyMrrRPLCNmHfJCYnp3KRKXMGYJ5ZEaJ2qbEfS447DpGsPHCGDK7+DhQ/6yIPVwhzSjLd1kqRtWyV13SF1/B1xw3CAYMUO6FLO7+
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(136003)(39860400002)(346002)(396003)(451199021)(36840700001)(46966006)(40470700004)(54906003)(86362001)(6486002)(41300700001)(478600001)(82310400005)(316002)(4326008)(70586007)(70206006)(5660300002)(8936002)(6862004)(8676002)(81166007)(40460700003)(356005)(33656002)(26005)(82740400003)(186003)(6512007)(6506007)(53546011)(336012)(40480700001)(4744005)(2906002)(2616005)(36756003)(36860700001)(47076005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 08:13:32.7922
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 91f04230-e682-4cd2-c065-08db5c2ec3e5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7940



> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> Making use of filechk means that we don't have to use
> $(move-if-changed,). It also means that we will sometimes have "UPD .." in
> the build output when the target changed, rather than having "GEN ..."
> all the time when "xlat.lst" happens to have a more recent modification
> timestamp.
> 
> While there, replace `grep -v` with `sed '//d'` to avoid an extra
> fork and pipe when building.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
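[Editorial note: the `grep -v` to `sed '//d'` change quoted above folds the filtering into a sed invocation that the rule already runs, saving one fork and one pipe. A minimal sketch with an illustrative pattern (not the actual xlat.lst filter):]

```shell
# Sample input standing in for xlat.lst lines
input='keep1
drop-me
keep2'

# Before: a separate grep process in the pipeline
printf '%s\n' "$input" | grep -v 'drop-me'

# After: the same filter as a sed delete command, one process fewer
printf '%s\n' "$input" | sed '/drop-me/d'
```

Both commands print the same filtered output; `d` deletes every line matching the address pattern, which is exactly what `grep -v` keeps out.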


Seems good to me!

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>





From xen-devel-bounces@lists.xenproject.org Wed May 24 08:18:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:18:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538839.839116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jhD-0007o7-OL; Wed, 24 May 2023 08:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538839.839116; Wed, 24 May 2023 08:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jhD-0007o0-Kf; Wed, 24 May 2023 08:18:11 +0000
Received: by outflank-mailman (input) for mailman id 538839;
 Wed, 24 May 2023 08:18:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1jhC-0007nu-KY
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:18:10 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2061f.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::61f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 84753e40-fa0b-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 10:18:09 +0200 (CEST)
Received: from AS9PR06CA0754.eurprd06.prod.outlook.com (2603:10a6:20b:484::8)
 by GV1PR08MB7378.eurprd08.prod.outlook.com (2603:10a6:150:22::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 08:18:03 +0000
Received: from AM7EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:484:cafe::13) by AS9PR06CA0754.outlook.office365.com
 (2603:10a6:20b:484::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 08:18:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT050.mail.protection.outlook.com (100.127.141.27) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 08:18:02 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Wed, 24 May 2023 08:18:02 +0000
Received: from 8feac114bdd0.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8E22CF17-804B-4FD3-AFD8-74F175F15312.1; 
 Wed, 24 May 2023 08:17:54 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8feac114bdd0.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 08:17:54 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM7PR08MB5464.eurprd08.prod.outlook.com (2603:10a6:20b:10a::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 08:17:53 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 08:17:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84753e40-fa0b-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5/xsYqdaGGe0ZwXsZb3e+Z0gZ7qyQu6w8n7yJgQBAFw=;
 b=ekiGj7HOFk1UF45ybNoA82JICE/VPRImE0c51L1GyzzKq4h5DzjItymyvof1V7i9QcQlr4mLWgoNmsQr0Q5QPiZluQ508cyCd/cAEh7BRtj14aCMVzrFzYeC1jlTYLD4vNVJyAEwrimghbrbLwmkrMVsOkZRDRO72MWGlG9Geog=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0f3e921affead396
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Julien
 Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [XEN PATCH 02/15] build: rework asm-offsets.* build step to use
 kbuild
Thread-Topic: [XEN PATCH 02/15] build: rework asm-offsets.* build step to use
 kbuild
Thread-Index: AQHZjZUYHIYuuKOi9UK9bl3DPb8C/K9pCpGAgAAGCYCAAASnAA==
Date: Wed, 24 May 2023 08:17:53 +0000
Message-ID: <CCBA4DD9-80EA-4B1E-8C60-3340793F3484@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-3-anthony.perard@citrix.com>
 <CBAF4CF3-C7DD-4071-9321-5EC7BEA1D432@arm.com>
 <ee97f251-a780-82be-b607-7cc7783a950f@suse.com>
In-Reply-To: <ee97f251-a780-82be-b607-7cc7783a950f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM7PR08MB5464:EE_|AM7EUR03FT050:EE_|GV1PR08MB7378:EE_
X-MS-Office365-Filtering-Correlation-Id: 399768ef-12b6-4cd9-7ed5-08db5c2f6491
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <8332CE21A9C0C244882086B91C98052D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5464
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9fb8ec2c-0202-4815-d4b1-08db5c2f5f20
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 08:18:02.2907
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 399768ef-12b6-4cd9-7ed5-08db5c2f6491
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7378

> On 24 May 2023, at 09:01, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 24.05.2023 09:39, Luca Fancellu wrote:
>>> On 23 May 2023, at 17:37, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>> Instead of having a special $(cmd_asm-offsets.s) command, we could
>>> probably reuse $(cmd_cc_s_c) from Rules.mk, but that would mean that
>>> an hypothetical additional flags "-flto" in CFLAGS would not be
>>> removed anymore, not sure if that matter here.
>>> 
>>> But then we could write this:
>>> 
>>> targets += arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s
>>> arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s: CFLAGS-y += -g0
>>> arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s FORCE
>>> 
>>> instead of having to write a rule for asm-offsets.s
>> 
>> The solution above seems clean, maybe I am wrong but -flto should not matter here as we are
>> not building objects to include in the final build, isn’t it? And gcc documentation states just:
>> 
>> “It is recommended that you compile all the files participating in the same link with the same
>> options and also specify those options at link time."
>> 
>> I’ve also tested this patch and it works fine, I have to say however that I preferred
>> a more verbose output, so that people can check how we are invoking the compiler,
>> but I guess now it’s more consistent with the other invocations that doesn’t print
>> the compiler invocation.
> 
> If you want it more verbose, you can pass V=1 on the make command line.
> (Of course that'll affect all commands' output.)

Yes I have to say that after sending the mail, I’ve checked the Makefile and I’ve found the comment

# To put more focus on warnings, be less verbose as default
# Use 'make V=1' to see the full commands

Thank you for pointing that out


> 
> Jan
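For context on the `make V=1` behaviour discussed in this thread, here is a minimal sketch of the kbuild-style quiet-command idiom (simplified; not the actual Xen Rules.mk): commands print a short "  CC  foo.o" line by default, and the full command line only when V=1 is passed.

```make
# Terse output by default; 'make V=1' prints the full command lines.
V ?= 0
ifeq ($(V),1)
  Q :=
else
  Q := @
endif

%.o: %.c
	@echo "  CC      $@"
	$(Q)$(CC) $(CFLAGS) -c -o $@ $<
```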


From xen-devel-bounces@lists.xenproject.org Wed May 24 08:20:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:20:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538843.839125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jjR-0000mc-3v; Wed, 24 May 2023 08:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538843.839125; Wed, 24 May 2023 08:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jjR-0000mV-1B; Wed, 24 May 2023 08:20:29 +0000
Received: by outflank-mailman (input) for mailman id 538843;
 Wed, 24 May 2023 08:20:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4LsG=BN=arndb.de=arnd@srs-se1.protection.inumbo.net>)
 id 1q1jjO-0000mN-Uk
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:20:27 +0000
Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com
 [66.111.4.29]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d3c507cb-fa0b-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 10:20:23 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id 1C7A45C02B8;
 Wed, 24 May 2023 04:20:22 -0400 (EDT)
Received: from imap51 ([10.202.2.101])
 by compute6.internal (MEProxy); Wed, 24 May 2023 04:20:22 -0400
Received: by mailuser.nyi.internal (Postfix, from userid 501)
 id 35350B60089; Wed, 24 May 2023 04:20:21 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3c507cb-fa0b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arndb.de; h=cc
	:cc:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:sender
	:subject:subject:to:to; s=fm3; t=1684916422; x=1685002822; bh=tY
	Zdd2ag8G3DyUn++y9TtCSdP5cAJ0TR5Ul9fhN2ndw=; b=DQIgslMqP/BQhUqd+t
	TJ5lyK533g0PXZguW7fgbQbaEZL0TJ6DFtc0ztFW9/a/IPbtrzNHzYuUd8JR+mla
	1YAiG8vP20QJnfs3dqAcyoqHPHe4rpqA16INAETgsmF1hmMduNo7rEsBQmWC/gQN
	o2ok4bed7fLvnqjtX3/8jvusNErESnd6JgLIdbOlYWhjuFOfEQears9rNIm/33fh
	8zYdThXmJRmr/e/u5YEVPR2ESWnKuwRMl3JCGYLrCsZMSC4CV7aGDRix3N2a9qGZ
	DHIHYDNcVwMnW7+oV1XF4xesNa2Fbx451/JOPtk7/sAoKI1hmZn5ssmo8UYZjm1i
	8HMQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1684916422; x=1685002822; bh=tYZdd2ag8G3Dy
	Un++y9TtCSdP5cAJ0TR5Ul9fhN2ndw=; b=MVYQVZPlyWk4dXzXs3bMzLh61TsRG
	0zqO/4LPg+/xnt/ZYmpc4dpLa+b0Rq5DF3bNJjro5mF1G9a6wg6mrSPBZuoCgDgC
	YPgT6qjjKtlzUX0DtvDfH3MG0Sgdg63WknBag7Apfq3JQT4yY/SMIPR6D0db70Wi
	KElTSVJojg0qSn0Qpf5A6Zn+qLZe2INR41OtsyVaPUEhhfs5nvrNE55czpGpHwJg
	4ba/loCsxOaisZ1HO7/FXyriNne2K0HZZz7Lvmqg0jx/B0hqRDCIwVt9GBepRLll
	EEKemLuCquy0JFi9lp+L4/bTgEWNUXUVap2mR0oUNE2OikbW4zcOga/tQ==
Feedback-ID: i56a14606:Fastmail
X-Mailer: MessagingEngine.com Webmail Interface
User-Agent: Cyrus-JMAP/3.9.0-alpha0-441-ga3ab13cd6d-fm-20230517.001-ga3ab13cd
Mime-Version: 1.0
Message-Id: <9dd597c5-1abc-4a25-a1cd-d7488d9d5b33@app.fastmail.com>
In-Reply-To: <2e5e627e-9f7e-ae63-05a3-d2b55e0703ba@oracle.com>
References: <20230519092905.3828633-1-arnd@kernel.org>
 <35c82bbd-4c33-05da-1252-6eeec946ea22@oracle.com>
 <418f75d5-5acb-4eba-96a5-5f9ec7f963a6@app.fastmail.com>
 <2e5e627e-9f7e-ae63-05a3-d2b55e0703ba@oracle.com>
Date: Wed, 24 May 2023 10:20:00 +0200
From: "Arnd Bergmann" <arnd@arndb.de>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
 "Arnd Bergmann" <arnd@kernel.org>, "Juergen Gross" <jgross@suse.com>,
 "Thomas Gleixner" <tglx@linutronix.de>, "Ingo Molnar" <mingo@redhat.com>,
 "Borislav Petkov" <bp@alien8.de>,
 "Dave Hansen" <dave.hansen@linux.intel.com>, x86@kernel.org,
 "Stefano Stabellini" <sstabellini@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>,
 "Oleksandr Tyshchenko" <oleksandr_tyshchenko@epam.com>,
 "Peter Zijlstra" <peterz@infradead.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH] [v2] x86: xen: add missing prototypes
Content-Type: text/plain

On Wed, May 24, 2023, at 01:09, Boris Ostrovsky wrote:
> On 5/23/23 4:37 PM, Arnd Bergmann wrote:
>> On Sat, May 20, 2023, at 00:24, Boris Ostrovsky wrote:
>>> On 5/19/23 5:28 AM, Arnd Bergmann wrote:
>>
>> Not sure if there is much point for the second one, since
>> it's only called from assembler, so the #else path is
>> never seen, but I can do that for consistency if you
>> like.
>> 
>> I generally prefer to avoid the extra #if checks
>> when there is no strict need for an empty stub.
>
> Do we need the empty stubs? Neither of these should be referenced if
> !SMP (or, more precisely, if !CONFIG_XEN_PV_SMP).

We don't need the prototypes at all for building; they
are only there to avoid the missing-prototype warning!

I added the stubs in v3 because you asked for an #ifdef,
and having an #ifdef without an else clause seemed even
weirder: that would only add complexity for something that
is already unused while making it harder to maintain or
use if an actual user comes up.

I'll let someone else figure out what you actually want
here.

     Arnd


From xen-devel-bounces@lists.xenproject.org Wed May 24 08:24:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:24:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538858.839176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jnM-0001pq-38; Wed, 24 May 2023 08:24:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538858.839176; Wed, 24 May 2023 08:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jnL-0001pj-W0; Wed, 24 May 2023 08:24:31 +0000
Received: by outflank-mailman (input) for mailman id 538858;
 Wed, 24 May 2023 08:24:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1jnK-0001pd-Mb
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:24:30 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2080.outbound.protection.outlook.com [40.107.7.80])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6629ed53-fa0c-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 10:24:28 +0200 (CEST)
Received: from AS8PR04CA0142.eurprd04.prod.outlook.com (2603:10a6:20b:127::27)
 by DBAPR08MB5622.eurprd08.prod.outlook.com (2603:10a6:10:1af::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 08:23:59 +0000
Received: from AM7EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:127:cafe::16) by AS8PR04CA0142.outlook.office365.com
 (2603:10a6:20b:127::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 08:23:58 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT023.mail.protection.outlook.com (100.127.140.73) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 08:23:58 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Wed, 24 May 2023 08:23:58 +0000
Received: from 6924a593a8e7.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6019D4E4-6E39-4DCA-983B-553A0D66A8D7.1; 
 Wed, 24 May 2023 08:23:51 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6924a593a8e7.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 08:23:51 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DB8PR08MB5369.eurprd08.prod.outlook.com (2603:10a6:10:11c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 08:23:48 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 08:23:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6629ed53-fa0c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p+ZtTVyHFUj2UKsTLlBul5a5nsSxQFVSTk7EzWTOPLQ=;
 b=mYHa1M0YdwOyi5UKuhiqN9nNnkLOi6NXHa67sDmFpO3MuUSY8O3UR4IYLCcMK07JtXZhMXlXBVcf2ifesY0owH+emGq247Bkh1nze8SriLDKTcZk5Kk/QKR+ebn78vaxAMd1cLTBThdMfejd2evvJDGpiwPwaeYJxCjPPDA4HJg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7dad52d7735b7b93
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Doug Goldstein
	<cardoe@cardoe.com>
Subject: Re: [XEN PATCH 09/15] build: hide commands run for kconfig
Thread-Topic: [XEN PATCH 09/15] build: hide commands run for kconfig
Thread-Index: AQHZjZUZfllK9TyAd06fAPp67+uiZK9pFuiA
Date: Wed, 24 May 2023 08:23:48 +0000
Message-ID: <BF9D6C54-1DA7-4B10-A447-6F53783EA078@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-10-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-10-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DB8PR08MB5369:EE_|AM7EUR03FT023:EE_|DBAPR08MB5622:EE_
X-MS-Office365-Filtering-Correlation-Id: a79e65e7-903a-4c56-015f-08db5c303909
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <4EED588F4DA6AC44A5094A4F8C0A8CFC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5369
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	cc798970-e7c5-479d-4402-08db5c3032de
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Y9bIWh7eei9IZjj7wRA8/BYLmPihtyCKUeLkXanM+dIwNE0SkkKUjQgrkju3XxCrI0amPGGUbpkYGgb07UwyJZMnLTBxdsjL4v3L17J3semVFXEU0uIyepixYXGi/tzkCjSS/31H0Ia/kyx9WyLrcPYNxgdzkAR2YjSbFA+foJDtPXJTZ+HF0vyDq7l4am2SEmbcUUmm5rnqUl6I8U9sVBxGxU8ri8hkN/Ok8Ox4q7AkU26mImhhJAYs62y4SF6DGZNK5YqXC9mq0GC3f52OHxl5MYxqNDuCQqHRnVetoiv2Oe8uC7xEi7o3Ez6BxENcCdyWm+V5AYw+u9N1zWDg2NGdRYOu4XlFiKCmWBgtTrBAb4vXaK+l1GZVW1u8tYIF+XnBqx1MIzAhTiwFd+cN1UwIE2SCRS5PJ/74m3Xb3Gy10s0yWhM4pYmxB9eaoGs8LYyiiyxvwxvxQkJsZf9RdGtazKwOa6i+YtKayGVkQrAa2i0z15Vdb/gs7Ds1QMAMCGQrzRsgKRACkLWeZZR5bMYqMX3NbbbFBpRiIU6Knl7SwZ9NCAzchIu+rpN5gHTYK5UKbiclNhrKM6gTRCt1QvzzfeudwwiFFHeBbXp2O1EQpY1CaCn8nl5YANwsYicAkT9aEaqkpzx3GMxukG2I6HPKmN//5mZ9xySJ6ZaV98rebMQzyngvQ4x9JL1kXQYVB5ZV+FDy+co6+izfkCpWRgNHYY11ojkIQq1FUz7vwkdq3bvipUMlVMMrRMIKRbW2
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 08:23:58.7509
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a79e65e7-903a-4c56-015f-08db5c303909
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5622



> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> but still show a log entry for syncconfig. We have to use kecho
> instead of $(cmd,) to avoid issues with prompts from kconfig.
> 
> linux commits for reference:
>    23cd88c91343 ("kbuild: hide commands to run Kconfig, and show short log for syncconfig")
>    d952cfaf0cff ("kbuild: do not suppress Kconfig prompts for silent build")
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
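
For context, the kbuild idiom being referenced can be sketched roughly as below. This is a loose illustration modelled on Linux's scripts/Kbuild.include and the two commits cited above, not the actual Xen patch; the target name and paths are illustrative.

```make
# $(kecho) expands to `echo` for a normal build and to `:` (a no-op)
# for a silent build, selected via the $(quiet) variable:
       kecho := :
 quiet_kecho := echo
silent_kecho := :
kecho := $($(quiet)kecho)

# Shortened form of a syncconfig-style rule: the short log line comes
# from $(kecho), while the recursive make is deliberately NOT wrapped
# in the quiet_cmd_*/$(cmd,...) machinery, so any prompt raised by
# kconfig still reaches the user's terminal.
include/config/auto.conf: $(KCONFIG_CONFIG)
	$(Q)$(kecho) "  SYNC    $@"
	$(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
```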

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>





From xen-devel-bounces@lists.xenproject.org Wed May 24 08:26:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:26:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538864.839185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jpM-0002T4-Hd; Wed, 24 May 2023 08:26:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538864.839185; Wed, 24 May 2023 08:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jpM-0002Sx-F3; Wed, 24 May 2023 08:26:36 +0000
Received: by outflank-mailman (input) for mailman id 538864;
 Wed, 24 May 2023 08:26:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1jpL-0002Sl-Pe; Wed, 24 May 2023 08:26:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1jpL-0007QC-NI; Wed, 24 May 2023 08:26:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1jpL-00050P-1k; Wed, 24 May 2023 08:26:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1jpL-0008N5-1O; Wed, 24 May 2023 08:26:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CxsfRuZZz1qamFWWQ2YWoCRZtBSisEvExkrjZrLDGOo=; b=hRUcpBo9Z4e5w7Zl/08aTjCyEb
	LSyfDmJAU367yzSXi2ksMu9AAT/m+x5RmHwR1kKrfgh9HIH7VjfOWHRbzeGx4Zrhc55+zSLLjWCAc
	TF+1ZUOd8siaxCeiGUjZ1HVB11X+2oLlnxkots5uhaU43Nf0db7xheKO/HECJssjnm0k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180919-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 180919: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.17-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=47eb94123035a2987dd1e328e9adec6db36e7fb3
X-Osstest-Versions-That:
    xen=b773c48e368d9cf1ea29b259fe4ae434b8bb42da
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 08:26:35 +0000

flight 180919 xen-4.17-testing real [real]
flight 180926 xen-4.17-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180919/
http://logs.test-lab.xenproject.org/osstest/logs/180926/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180926-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180683
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180683
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180683
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180683
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180683
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180683
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180683
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180683
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180683
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180683
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180683
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180683
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  47eb94123035a2987dd1e328e9adec6db36e7fb3
baseline version:
 xen                  b773c48e368d9cf1ea29b259fe4ae434b8bb42da

Last test of basis   180683  2023-05-16 15:38:22 Z    7 days
Testing same since   180919  2023-05-23 13:08:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@amd.com>
  Yann Dirson <yann.dirson@vates.fr>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b773c48e36..47eb941230  47eb94123035a2987dd1e328e9adec6db36e7fb3 -> stable-4.17


From xen-devel-bounces@lists.xenproject.org Wed May 24 08:27:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:27:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538869.839196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jq1-0002yn-Sn; Wed, 24 May 2023 08:27:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538869.839196; Wed, 24 May 2023 08:27:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jq1-0002ye-Pi; Wed, 24 May 2023 08:27:17 +0000
Received: by outflank-mailman (input) for mailman id 538869;
 Wed, 24 May 2023 08:27:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1jq0-0002yJ-EB; Wed, 24 May 2023 08:27:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1jpz-0007Qu-Ve; Wed, 24 May 2023 08:27:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1jpz-00051a-GG; Wed, 24 May 2023 08:27:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1jpz-0000iu-Fi; Wed, 24 May 2023 08:27:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qEtmFrGBigQpmXr0fH0byU+y8tdwmIwhnM6sQ2zTJOw=; b=I5d31sNU3hFNaZzNRIebwZs1lF
	By/9FXaDokA9BCA2OITbZEfDu2TAup4aPKZxvJ294JOZ+FB/2J3gk0pAgNr2gzTlA90m1dT2x15hG
	U57BAIZwP4oIW1L0MyTlA2/XMnnBaSA0ZcnFMcrCqUwnBXa2t4D2n5/apIXXGYmDaJ10=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180921-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180921: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=00f76608a68dcd832c4a1b66694ef3e9cb52912f
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 08:27:15 +0000

flight 180921 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180921/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                00f76608a68dcd832c4a1b66694ef3e9cb52912f
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    6 days
Failing since        180699  2023-05-18 07:21:24 Z    6 days   26 attempts
Testing same since   180921  2023-05-23 18:40:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 6210 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 24 08:30:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:30:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538878.839206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jt9-0004Ws-G9; Wed, 24 May 2023 08:30:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538878.839206; Wed, 24 May 2023 08:30:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1jt9-0004Wl-Cx; Wed, 24 May 2023 08:30:31 +0000
Received: by outflank-mailman (input) for mailman id 538878;
 Wed, 24 May 2023 08:30:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1jt8-0004Wf-28
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:30:30 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20624.outbound.protection.outlook.com
 [2a01:111:f400:7d00::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3aff8bfe-fa0d-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 10:30:27 +0200 (CEST)
Received: from AM6P192CA0007.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:83::20)
 by DU0PR08MB9079.eurprd08.prod.outlook.com (2603:10a6:10:470::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 08:30:22 +0000
Received: from AM7EUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:83:cafe::ce) by AM6P192CA0007.outlook.office365.com
 (2603:10a6:209:83::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 08:30:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT048.mail.protection.outlook.com (100.127.140.86) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 08:30:21 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Wed, 24 May 2023 08:30:21 +0000
Received: from 2ea2d9c5bfa5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A17A1631-65C6-46EE-A673-B62BA9585E51.1; 
 Wed, 24 May 2023 08:30:15 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2ea2d9c5bfa5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 08:30:14 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by GV1PR08MB7684.eurprd08.prod.outlook.com (2603:10a6:150:63::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 08:30:09 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 08:30:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3aff8bfe-fa0d-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ar+Fa7NV0qRGHdNkrriyS/Ocaqk+MyoSIP+/dFJENMM=;
 b=Lbuys9yrrREnOZKe13crnEylh0dkxDahm9CdSkwI71lw8ZmnML0SjCsS7j7o0BhFbW9OClwlV+4lQPNfnbJi/A/wo9bMRqzm8bqmtDktT7q2I7sIUdIuIgchOblyoUutJpyfnFhug9jTY9dF0GR/JCYdzj/RLSe20dbxAp+TuMQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 73c2e526052f83df
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZdA5uq+FoQL3KEkfJYaA+vT5SfjSaB8fsw80pWDSKUF4/DuGAfjxRtViONSB/TnHwqxcwGp0tB5nfUFK/FE4eAhzJyao/K4xdCxxJXs7LJ1/O8znsbjXiH7z7rjlZ7ZRvciuZevm9UsJYxQIlB5mh/VSAIG9WBL8FT4QphYmmqlF1uocWszwxaA1j0R+RWRd5Q7nW0GSYGlS3T8BibXhigIUuZrSHNHe0Ab6ZFxv2ko3Ewc3ab3C7KAF7NCPKoCxuOydqModpnD7Ux+zsM6yO5E5YH5z1IdDkE0rJ8FdLg5rBdZAnZoGIQh2I6Fvz51ZVEa64mnLev+/hk5Qt7/IjA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ar+Fa7NV0qRGHdNkrriyS/Ocaqk+MyoSIP+/dFJENMM=;
 b=JqVThb/jRHjPpwlAw38qjjmAWxggCISnR+ZqibjJyNUB2FUBYkNnb+Z9xASVJXJbqUfcSsi7mSfTfdDdL1Jqr+KJid3Y05ML6Jb7h0VM8gaO4hCHsKGdhN5sXCwf2JNhoiNZELYrEsPO0dulMzjWhSMf89qdVx3AZxLYtH99RK2nkXdcbT/KR420LBWWj3yFpe1bFjN7O4ETcqr2vf4jfjbYaJKqjCk2aSK34st6iZkQ4SjJXuMo2lMWMkNTdfWugaYWbT26FaxbN+Ey7e3e5G4CPl7iKlasbipJ1IOyC17wKNARVJxCYCzDHqZ4Y7GRf5IcD97VdhJZQuNdPbBHEw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ar+Fa7NV0qRGHdNkrriyS/Ocaqk+MyoSIP+/dFJENMM=;
 b=Lbuys9yrrREnOZKe13crnEylh0dkxDahm9CdSkwI71lw8ZmnML0SjCsS7j7o0BhFbW9OClwlV+4lQPNfnbJi/A/wo9bMRqzm8bqmtDktT7q2I7sIUdIuIgchOblyoUutJpyfnFhug9jTY9dF0GR/JCYdzj/RLSe20dbxAp+TuMQ=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH 10/15] build: rename $(AFLAGS) to $(XEN_AFLAGS)
Thread-Topic: [XEN PATCH 10/15] build: rename $(AFLAGS) to $(XEN_AFLAGS)
Thread-Index: AQHZjZZTzywyLasu2UeN6RTgzfRGn69pGKwA
Date: Wed, 24 May 2023 08:30:09 +0000
Message-ID: <17CC7699-2B73-499B-946B-E423F7C9620E@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-11-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-11-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|GV1PR08MB7684:EE_|AM7EUR03FT048:EE_|DU0PR08MB9079:EE_
X-MS-Office365-Filtering-Correlation-Id: b15cea1b-c9dc-4746-39c1-08db5c311d49
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 xQ+Z5EVi6HtJEsi3fo4oX3DOk1A3oc1/bwGJt/kCcyMxWAtRfLaM55tKDR0rs0rLEyqKPlGB53mwZkrJmQO7exCWr9TKqbfzw4SgJIUld3sUzgjkF6b3cs4glPq+JwR/s3rcNmeje/OqsCghSdpTplkhvEzQ3Eh7YQQT1U/p5fZ5P6GdjeATK78/7/RRRKfuIrZRy7lFKGhRL6oOSkuzFSHQGJKBSY0X0833wBkCRYbRyRNG8y+Vdr0Lfld2u9EZaVrkoqydAa4iDJg9aCMFf7Q3XaQVvIP9cIiAKRkq7O97HmwHJrE15Mu0ithziicys6kpE3wcKtFKKGife42sHvoi29dLd5v5Md3jWYbYxBp4XLbYypqMlzsj9fLmEDEXBWfeGSuXL81RXJKsiVYV1HIBeVb4FfiUKZ4Oq/IjNdnsvZWekXZnrviLcKU6jIGfLDK8sHvnKBegcc8USnUFUDz8e78UN3dRCrAs/8bloSKfKimfSL6p4yPF8NgTBYPVGjcAy5N2f8U+onGtW9Gmb6xtMWh4imq9CyQoeKpb/1uRtazjVN4SSCnw5vTmKb0yTpFgD5VYMeW57kcG8jFrv9Jd2cxWoQzZ2qFM5JXYtSFWz5hnzd+Y1Txd0ZwHhWmwEsgWEwW6PT1nhrZMPOKw+w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(136003)(346002)(366004)(396003)(451199021)(6512007)(26005)(33656002)(53546011)(186003)(122000001)(6506007)(38100700002)(4744005)(2616005)(36756003)(2906002)(6486002)(71200400001)(316002)(54906003)(41300700001)(478600001)(66556008)(66946007)(66446008)(66476007)(76116006)(64756008)(6916009)(91956017)(4326008)(38070700005)(86362001)(8936002)(8676002)(5660300002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <078600B4A242364AA9A4027BD93ED7F9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7684
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	311dfa14-3fe2-4419-2269-08db5c3115c7
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	O0/po+DxlbFedtoOYxPlRqL8M8I6sxXEWv3qxwVPnp1I00/7frP1itug11/tEQL/41D/SIotmUjP2Gxs9ecqy0iS3YlUWV+m3nnIket3VP70fz6c3xl4ZTAbLKKf2Kgu/FXkYiNR1glpM+bujX+431kitXZpBqUDPEqT6snMqVH07Ak3d7FJBuelujSdABwtI0DIxCuZaPuwqAGGz+f188ZFJ90p69yxBo0iwLsqE/8C37LdmuD8H5dTY05mWOZ2DDXb+jD/NMvNQXGCRLQaVVCHb89xYagoICQOfx0gu9vTyvxHumai5EHNO2zq14yhss7kegS9dXzLygt8byjyrGpCrcB/MC/pmGfZiBsE19Z2EgouJc55yZM8tZz2RtSrdR2gANslDmi6Ta/aqnNwmbPDP3MvGgUB9DC7aVXUQuTNp779bYJ1BrnKuwZS2mfdaRu0gvqOJgt5q2f0GkOS58JaP3Sv78tfQiXmq0m4ePX7jI5sd8hb5m+vWEqoWFE2tWKdkU52cqfISFgVzQV0fMUQcTBEqqZz7NH0xV/St6NoRMqM7BJX7DmBoJ7WSjPOVX5Fk94rfQsbEKBLK+xysD02dNTdXBZa5HPfqzZFqdofcB/qOuH4Kzj5yhBrKIHIMBUGQ7EF9Hw0jYi/sQa2l0UFEX/g6w8fVNy+Lx7iaZkGCHHfObC3JChETi5NmRbDYXgeED55AQ7rmLTARSlsoDTxJODYrh/Ro3pRxTqDLyo=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(396003)(346002)(136003)(376002)(451199021)(40470700004)(46966006)(36840700001)(26005)(33656002)(6506007)(6512007)(82740400003)(356005)(40460700003)(53546011)(81166007)(107886003)(186003)(2616005)(36860700001)(47076005)(36756003)(336012)(2906002)(4744005)(40480700001)(316002)(70206006)(70586007)(4326008)(41300700001)(6486002)(54906003)(86362001)(478600001)(82310400005)(6862004)(8676002)(8936002)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 08:30:21.6767
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b15cea1b-c9dc-4746-39c1-08db5c311d49
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9079



> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> We don't want the AFLAGS from the environment; they are usually meant
> to build user space applications, not the hypervisor.
>
> Config.mk doesn't provide any $(AFLAGS), so we can start a fresh
> $(XEN_AFLAGS).
>=20
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>
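
The distinction drawn in the patch description above — discarding an environment-supplied AFLAGS and seeding a fresh XEN_AFLAGS instead — can be sketched as a hypothetical Makefile fragment. The specific flags and the CFLAGS-y helper variable here are illustrative, not the actual contents of xen/Makefile:

```make
# AFLAGS may arrive from the environment (e.g. `AFLAGS=-g make`), tuned
# for building user-space applications, not the hypervisor.  Using `:=`
# to assign XEN_AFLAGS from scratch, instead of appending to $(AFLAGS),
# keeps any such environment flags out of the hypervisor build:
XEN_AFLAGS := -D__ASSEMBLY__
XEN_AFLAGS += $(CFLAGS-y)
```

With this shape, `make AFLAGS=-whatever` has no effect on how assembly files are built, because nothing in the fragment ever reads $(AFLAGS).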





From xen-devel-bounces@lists.xenproject.org Wed May 24 08:45:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:45:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538882.839216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1k7h-00066Y-Ms; Wed, 24 May 2023 08:45:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538882.839216; Wed, 24 May 2023 08:45:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1k7h-00066R-K6; Wed, 24 May 2023 08:45:33 +0000
Received: by outflank-mailman (input) for mailman id 538882;
 Wed, 24 May 2023 08:45:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1k7g-00066L-H2
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:45:32 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2062f.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 56d83415-fa0f-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 10:45:31 +0200 (CEST)
Received: from DB7PR02CA0003.eurprd02.prod.outlook.com (2603:10a6:10:52::16)
 by AS2PR08MB9836.eurprd08.prod.outlook.com (2603:10a6:20b:603::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 08:45:22 +0000
Received: from DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:52:cafe::2a) by DB7PR02CA0003.outlook.office365.com
 (2603:10a6:10:52::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 08:45:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT038.mail.protection.outlook.com (100.127.143.23) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 08:45:22 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Wed, 24 May 2023 08:45:22 +0000
Received: from 475d12ffc67c.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D27593EE-B430-496A-8EEE-9301B544C8CC.1; 
 Wed, 24 May 2023 08:45:16 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 475d12ffc67c.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 08:45:16 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS2PR08MB9546.eurprd08.prod.outlook.com (2603:10a6:20b:60d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 08:45:13 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 08:45:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56d83415-fa0f-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9HaKm3gGkaSYZkrT2tkqOvdIV+RmDdenvT9gjfreHAI=;
 b=EnlOcS0Aysk6B5EjtI8Ugu8+t9vXg6NUYi4wvzfiycbBTr7MtIuT8NJdKWcxzsnz5FejMS9dlEc15Lw5cWC6G5iXR+GaQuS99H76za4RqATZqGSlOG8PER2BPZsG1EB+POJqcE9pKph5532PE+DO9QOhJXTecOOnAQ0dfDQVv0M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 91cf3ada4ab5f699
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CAfZCJSyoTacN6HrCNcaQ+Owolae8NA8dzM01i5aIMM0gt2Oqqa6KBCA4oirJWAvNS0V1LQGaDsheSA1+t1V/Ww3yYU96U/IfCOoDSVuRlQ+f2L4FPavQdOU05O9qmx7fWQOKG780KEsWsYXgmWys3AWrPz4Ri659QxSSrL6W1k7X/aZoZhYrBQfE7j8NmvVkKvd2/yn5YRt/oMiX1ao4eJSonJzS+uM4ncOfcBlCLdk0+P8Q0FRWP3lErh+ymSICWro83x5rsfZ32E7pIv8A4XQ7Z3yQaVPPJuyIfHhhmrgIU1XnWuZPcBbNW5i6KUqoSwyUlP3hXm3qPrmiFLK3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9HaKm3gGkaSYZkrT2tkqOvdIV+RmDdenvT9gjfreHAI=;
 b=Hte5s74MtU6Z8NH3V3Ht/wOyxNfdtjVrJVdmhp47MPl8G0rGfdFG1h5m/CqnJQHuB08+EEYiw6c7DSuRGZt7aLFOCo3Io2E3pxHmk/T3k94hmzP7FBKlRXDWdGRrZembm126ihAOmu51n6cGAUHguawZNss5qSMhKNdhsbwCf7jnOusYedPWnul2TriPsRew+tQr4vxfzlEYNmfHCnAiJCuMvwtiLdHNDM/KvndIYugVpCkNbtIaLv7oVaMKx6GhTm+JM7csVDTnx+KGWU5ULLNzA4rZ9sJRoQfCUvMXfAj81THPaetzoaMg+VVPUreWsQ26Ne4x0lvVfqeb6L9foA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9HaKm3gGkaSYZkrT2tkqOvdIV+RmDdenvT9gjfreHAI=;
 b=EnlOcS0Aysk6B5EjtI8Ugu8+t9vXg6NUYi4wvzfiycbBTr7MtIuT8NJdKWcxzsnz5FejMS9dlEc15Lw5cWC6G5iXR+GaQuS99H76za4RqATZqGSlOG8PER2BPZsG1EB+POJqcE9pKph5532PE+DO9QOhJXTecOOnAQ0dfDQVv0M=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH 11/15] build: rename CFLAGS to XEN_CFLAGS in
 xen/Makefile
Thread-Topic: [XEN PATCH 11/15] build: rename CFLAGS to XEN_CFLAGS in
 xen/Makefile
Thread-Index: AQHZjZUfrLy+HiKXh0WMCekPaQT+e69pHOWA
Date: Wed, 24 May 2023 08:45:13 +0000
Message-ID: <7F409345-9F8E-461E-AE41-1C0B59F633B9@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-12-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-12-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS2PR08MB9546:EE_|DBAEUR03FT038:EE_|AS2PR08MB9836:EE_
X-MS-Office365-Filtering-Correlation-Id: c3a54945-2199-4912-8ff7-08db5c33362b
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 m6jsEZixeJZ9f/P6hLt+b/jXeFrqGn7Cg8eFd2HKNEBNd+Lc/+T1shjt8J1tQn5J5n2s6KDzYXUhRZJs/WY2Y3fkJqVK2HNYn/Dzzqbq4JDMQBbiUo32R7LMkicutSEybuQxZDk6IcYkNrW6PAmE1PtMP7bYlOPvM3HcRVlCfQbHRFF8LgTbmzMDo5hlz5OMbWBcn5DqrC3yIfRAH10n4fX4h4qXut00cW5pJa7MOrkMqYs3rAlhIw5r9gIvA5G/+7M7es7BquuoIhNQ4ALnC+BSZpLW2xSUVcrGNlDN/cgJKGf8xadAriMntvIOW8XQzTl9pIqu/aqRZ7UtgFLEekuOeTWID6wLNIuSL+DNMMFQJhgLqInHf19MqnaOpQPtZe1yd3sSUtDgBrcfG89WwxX3DhGMr+ecWEqf9G3Sb/LhZxJa2qeqyEFu3LiQEPdhRjp8lQKp7ilNFev2zInd6MOMDO67FRNXaGKUNRLvw2lVwr7luZvDHVZT1HOdylTQb85OBrl1sAfiBnycHI7R3PqMaY2LryOjW9Z9ps3IWqSzu5vDw8nSdBo3j5uz4dNFrD2G4xsTaSsaaI7bh8VUe7VXLXpFvZPmqAjFi8+Nt6uPDzw9qKPPoRvrfbhUAXyMOBkISx1rWWo/MT0uOUGOzA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(39860400002)(366004)(136003)(376002)(451199021)(6506007)(6512007)(122000001)(53546011)(2616005)(71200400001)(66556008)(66476007)(66446008)(86362001)(186003)(91956017)(478600001)(54906003)(64756008)(26005)(66946007)(76116006)(316002)(33656002)(6916009)(4326008)(558084003)(41300700001)(36756003)(2906002)(38070700005)(5660300002)(38100700002)(8676002)(8936002)(7416002)(6486002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <6EA68C32215A9F42B7499318B48B77B3@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9546
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	66e80116-eb70-4aa4-b495-08db5c333100
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+x+1IdrLGmlp5bOte38LPoaTMBbwlsEVJkn+U9py59yuYbEaND8l6j1vu115NOQdsFgzfBYn6uJ/Uzlgh1q+qnxiACaPtX3I+CMddbAYi7XiJpLsThzJy3oLCPmggyAQipbw0vv+2D+rYmpNNGsHUcntjYS9vFrcEJFyIC/h9SQAd0nLcxGwR5bhQTGpbK9NjJeW+0xfzCDYlKGqVfThANOYmm7gO8m6J2g7v1B0MhUEodUxjcF/g4swBhZAqjH9F/U341Ih/aBtVffVKUe7G6eJTR+6H8uNA/2eOTL/8hzrnEkDXJ1ckYDcFnQIfHFym8tnepGkR+kYhs3qD/UkdBS8CjNWcblbo/oZ/ljedrVOp2n/j0wsjzaXEqUufTYVHZu7OTVy1nGuiPjNzslVTaIh/lM4P4Mf4YsFDqUNZpGQ3h8KSdbAkFJ1eVFJ2C5xCpV5ETjcyo0pxH7wLZ2LmTee0J4er+yNV6WGZCqskW+1XhtuFhU7tiNNf3mrCKJcnOzYtuQP7X+BcyS4QM9dSQ90GrIqqz6+4vjsCPJ95LW/iAeNP5049T2F8EsedY43bgQddBOBxeEEq9iv9f5A5Xl4mUxhxL+iCHPwu0rMdLjslLEVzsxTbv4Sh5g5vFPY3VQZi7bostOl43xx1qp0RURZb3SJuPVin/egXDxmxtyPiV8oKuljtA4f3+5lEtAoNaFN42e5q0nTuSfHIP9aAe9rV6K7WEeKXP/6F2KaDurms5XjeYBPP7POQqVJ0mQ17Brie+mMmbb7KI2aACBlDn3gkb7Eh0F7JaJ/zvkvfU4=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(346002)(396003)(376002)(136003)(451199021)(40470700004)(46966006)(36840700001)(40460700003)(5660300002)(6862004)(8936002)(8676002)(186003)(2906002)(4744005)(33656002)(36860700001)(86362001)(36756003)(336012)(2616005)(47076005)(81166007)(356005)(82740400003)(26005)(82310400005)(6506007)(53546011)(40480700001)(6512007)(107886003)(316002)(4326008)(54906003)(478600001)(70586007)(70206006)(41300700001)(6486002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 08:45:22.4969
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c3a54945-2199-4912-8ff7-08db5c33362b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9836



> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> This is a preparatory patch. A future patch will not even use
> $(CFLAGS) to seed $(XEN_CFLAGS).
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>




From xen-devel-bounces@lists.xenproject.org Wed May 24 08:47:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:47:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538886.839226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1k9R-0006fL-37; Wed, 24 May 2023 08:47:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538886.839226; Wed, 24 May 2023 08:47:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1k9R-0006fE-0J; Wed, 24 May 2023 08:47:21 +0000
Received: by outflank-mailman (input) for mailman id 538886;
 Wed, 24 May 2023 08:47:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1k9P-0006f0-NY
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:47:19 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0619.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::619])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9707593c-fa0f-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 10:47:18 +0200 (CEST)
Received: from AM6PR02CA0024.eurprd02.prod.outlook.com (2603:10a6:20b:6e::37)
 by AM8PR08MB6353.eurprd08.prod.outlook.com (2603:10a6:20b:361::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 08:47:15 +0000
Received: from AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:6e:cafe::af) by AM6PR02CA0024.outlook.office365.com
 (2603:10a6:20b:6e::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29 via Frontend
 Transport; Wed, 24 May 2023 08:47:15 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT046.mail.protection.outlook.com (100.127.140.78) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 08:47:15 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Wed, 24 May 2023 08:47:15 +0000
Received: from c82e35752628.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 21BF890F-99AD-4FA9-93E7-7CCD90646768.1; 
 Wed, 24 May 2023 08:47:08 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c82e35752628.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 08:47:08 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by GV1PR08MB8238.eurprd08.prod.outlook.com (2603:10a6:150:5e::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 08:47:05 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 08:47:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9707593c-fa0f-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IVsR5TmnHCfQt8kl97a300y8QrrIdnnvlz6fJ5xoabU=;
 b=UlBUl1VhYtuC5S/yUGD9se1Wua352Sj7E6O4UhiaTt+pLalph3EN0GBGxIj4+iLTsysQYCvrci3P9ZH3OtxGr1RzxKPlTJD3vIVGPtHmj9ogB5hQK1/uhJIWiGZsLGLNAw3ffaxU08YOtr1v6m9e/RYNby5iKKm4xlOBPd0MrPY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8de4c10e4b07d30d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CJTb9qy4CqzceTX9b6Wr2tJjaIPlUfimsnpLFPQkcHpbD29bxnFENFpoos7XYWPpbJrfHFKqF5SMi07Ui6x4LFOZHFolTpBR4W22jZK6RmEy2jmAx20qIGvCzsL9GCxZ234+A+9ybB45BBAO2gjYwEqX6xqRgjSNTQ0MVcwh8Fhw4NfD5c/vU4I3nb4uejr838r/2n72B312LLi9YSOSTGQSoje/gXZYK2U4cYv7qRnWyXE/i8zkgyx+x06hsbt92aesHsFyUdsEEi/k5LiQoylR0w2JF+RsL5Ol7jm/TErP+CM9opWLPqff6C4JrIjQ6299Smny6yB0TUBTlouFaw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IVsR5TmnHCfQt8kl97a300y8QrrIdnnvlz6fJ5xoabU=;
 b=Upt8kvHKszK//dKEcbrGnXVi2Xukp/oHKCkGOCgQIz76TMDIOZ6WiFtKWDdJnl6ZEp/t4d55mV+mHxlyBqIQ7zXsPw/JvL30krB7nAFf+iYEnGd+b9dp/Q5EzVz9ZGbjGpH6pb1cx2rvOEpKDV9EchLzL6knI+jFfR5XtYiXe/VhRWNia9MgEhtN+kgm2Y415o2eHqm9HTC7pIEVXR52/D15p9lyY+XmnXjgqqJOq/YyP1lehLSw014wZdG8T+xQhXAm6XRyj3WsJEU7bIYOjiCLlI8Xp2uP79WfADZEjqmsYaq+1WzLFTlGywqifz5q8Z3Aya3wcQ30mkC/KlXzoA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IVsR5TmnHCfQt8kl97a300y8QrrIdnnvlz6fJ5xoabU=;
 b=UlBUl1VhYtuC5S/yUGD9se1Wua352Sj7E6O4UhiaTt+pLalph3EN0GBGxIj4+iLTsysQYCvrci3P9ZH3OtxGr1RzxKPlTJD3vIVGPtHmj9ogB5hQK1/uhJIWiGZsLGLNAw3ffaxU08YOtr1v6m9e/RYNby5iKKm4xlOBPd0MrPY=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH 10/15] build: rename $(AFLAGS) to $(XEN_AFLAGS)
Thread-Topic: [XEN PATCH 10/15] build: rename $(AFLAGS) to $(XEN_AFLAGS)
Thread-Index: AQHZjZZTzywyLasu2UeN6RTgzfRGn69pGKwAgAAEuwA=
Date: Wed, 24 May 2023 08:47:05 +0000
Message-ID: <BA29A878-86E4-4B5F-A344-C920C3D82076@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-11-anthony.perard@citrix.com>
 <17CC7699-2B73-499B-946B-E423F7C9620E@arm.com>
In-Reply-To: <17CC7699-2B73-499B-946B-E423F7C9620E@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|GV1PR08MB8238:EE_|AM7EUR03FT046:EE_|AM8PR08MB6353:EE_
X-MS-Office365-Filtering-Correlation-Id: de3d3f70-0ebd-4770-826b-08db5c337973
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 9WdAoY5D3CIqsoWR7Au8obk/OwVS5s/gXccvvsiD87i/2KreoON4eXBx1CnS9s/glejp2pkmjdtDG+umiFqvD4eLoztGTZcnrgYof/6JUR7KSsYP7lCmyIUCa4YwdDmIOaI+mKxSy5RY+EULUMUHzrk/mRvicwClHbdAHqjQOO2EdylyqvvBSD9V23TNe7xemVCKaQUBfoNpVx35AkBNXdrk71C9rGAUlws0l9gcYMdsv5f9VngxrDWjBnhVNPyMJE0U0Ft+OUIfXvcjQoxJlIRvG9y28w24/m6rxR+ItMPec/YI8ZGqU1GXMZF9y3S0fRd8FzYW0EsTpeaQYTFB5lumOEW+xEDrc8hfukkzezuyrT3/q3THdGPYAtlhfsWyt5zknvyuqNIis9GoBmPRgJEUWmTpN+Uof5l1QzYrSJCd+y5vc8uJ8bGX0CGLLLzBYawXSYW8gmxFlzBJaqbMvSPznZVnKfJnEWV6tCt0sRFLHffgXSprfTk3XjXhWxpriNxZu6JlIP+RwLpFlRBOY97y8RVjefK2prz+zeGvWOPL6Se+TK+RuTzjsS5G1QXnkx70iOAbzi4BS5mL9L/XJQa5VYP/LYED5vRlAY9M1RQiotUKH2F6A/qcQafDh5EIENCZ4eYh8PwUXFQtVEn64g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(366004)(39860400002)(346002)(376002)(136003)(451199021)(41300700001)(6486002)(54906003)(478600001)(316002)(91956017)(66556008)(6916009)(66446008)(4326008)(66476007)(71200400001)(66946007)(64756008)(76116006)(86362001)(5660300002)(38070700005)(8676002)(8936002)(38100700002)(6506007)(26005)(6512007)(186003)(122000001)(33656002)(53546011)(4744005)(2906002)(2616005)(36756003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <D3B77C872D9B8C49B7729E963FFF2318@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8238
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	85c4a4dd-9526-414f-fc08-08db5c337368
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	aE5MDas2CkWgZqVlsUg1/j5qvnCzgognfj/kUcyUTPiuW2+egtJjwrnNvFPtgrX2kXez2v5QkwDqh6Urzzy3msep9CHdxJptJX16wP0XAd6XbHbpLcV0RmQeZOZYEgwaNZPn8eMAnnfXKS9xQfO5o9A8lR43mz1sKqGvFTJdUgK8dtNs9oxNM2QuQNf6g1PM/8BBXADdn4UaflAYDKpStxIx7Zzc/HNq32lqhoXtvyJ1zNrCqHAfqey1hamOVvSVvWHb8wPNXol8slWzDqRdIujPMHWe7qs9c4LpuaR6IRYeUtMcQgqdgVleApnR8s43k9hRlu+p73F16cCe3xO2L1dc4E7FbkgZm0RqxiK6ROn+2wJXqGpdYR9xRqpjQ+4f3l22mZiUeU++7wggOoh6ArjWrVwGLhDH9faa2P+O+6Ie5iZ7OD0FIHJWrumwTUyOZ6dSp2Ny3yS4O8fT+u3kDY/EnJJ0tDYaoBWGDc7ibVM4E1dvpRmsJuS+Z7kZNIW3Nen0DY4dqyvpLWTs5Lzke5gmjpe5W0Fmk7QCzTM5HLPewHInKC04+5X9dfXKnwpSRH/YZ3jJCrhLCkGu7xYZ3H9wMQO3nfVpyPXhLbRHIBQoGIsdcbJ1Y+4hj+2V/LW2W8XDDVXRBHs33HKQ/zntUi1AS1MJRiaZwMu22+zdNs33VDxgcJy/nSL7w+yWJutd4cXJ9ZyhtQg+IYFx9utE7+4ja6LyuzrUC+Y6XgPQh4uM7YLZ1g8ZXpPkXVqF9HiM
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(396003)(136003)(346002)(451199021)(36840700001)(40470700004)(46966006)(54906003)(2906002)(4744005)(82310400005)(36860700001)(5660300002)(47076005)(6862004)(8936002)(8676002)(70586007)(40460700003)(41300700001)(478600001)(4326008)(316002)(36756003)(70206006)(33656002)(6486002)(40480700001)(336012)(53546011)(6506007)(6512007)(26005)(81166007)(86362001)(2616005)(107886003)(186003)(356005)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 08:47:15.3132
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: de3d3f70-0ebd-4770-826b-08db5c337973
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6353



> On 24 May 2023, at 09:29, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> 
> 
>> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
>> 
>> We don't want the AFLAGS from the environment, they are usually meant
>> to build user space application and not for the hypervisor.
>> 
>> Config.mk doesn't provied any $(AFLAGS) so we can start a fresh

NIT: there is a typo s/provied/provide/

>> $(XEN_AFLAGS).
>> 
>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>> ---
> 
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> 
> 



From xen-devel-bounces@lists.xenproject.org Wed May 24 08:48:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:48:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538892.839236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kAA-0007Gx-Eb; Wed, 24 May 2023 08:48:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538892.839236; Wed, 24 May 2023 08:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kAA-0007Gq-C1; Wed, 24 May 2023 08:48:06 +0000
Received: by outflank-mailman (input) for mailman id 538892;
 Wed, 24 May 2023 08:48:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cCLF=BN=redhat.com=sgarzare@srs-se1.protection.inumbo.net>)
 id 1q1kA9-0007Gi-RC
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:48:05 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b2014364-fa0f-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 10:48:04 +0200 (CEST)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-609-yfskZc9gPxSUclOgW4g4cA-1; Wed, 24 May 2023 04:48:02 -0400
Received: by mail-wm1-f70.google.com with SMTP id
 5b1f17b1804b1-3f604687a23so3072045e9.1
 for <xen-devel@lists.xenproject.org>; Wed, 24 May 2023 01:48:02 -0700 (PDT)
Received: from sgarzare-redhat ([134.0.5.107])
 by smtp.gmail.com with ESMTPSA id
 r14-20020adfce8e000000b00306c5900c10sm13805227wrn.9.2023.05.24.01.47.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 24 May 2023 01:47:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2014364-fa0f-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684918083;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ice5vyWLZIdFvZsU4KYca0HCuRNiy82oFSrQDE43eC4=;
	b=cQoD5sPLM1F3kKxTKZo+X13YmvgvLgLJ/4VXV3QWZUfSxOnt9UOYJuvThme6jIK+FJl60E
	hN/I4/tpW5Yrgytk+Hmpa5OErY/xKgoGBd+ps7xlMvQxJIgBWj9unu3TKZUDKb+sIWH7on
	lM6DMRriicAcc6TtPxNyP2LLuYH/058=
X-MC-Unique: yfskZc9gPxSUclOgW4g4cA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684918081; x=1687510081;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ice5vyWLZIdFvZsU4KYca0HCuRNiy82oFSrQDE43eC4=;
        b=kZwUwlrhMsuWMhORb1REWUtlKRanmIE0/9fkB5FMM/drUkhyMYITiSrVLcQw8F0gpj
         qW1zZ3L+xMQQl1XStdrOmE5H1ubymr0uzhFXnDVmCBPstB7qS+kAcAQC8MS1ejf3AtyR
         D7EzOgvhWvrhYqmT7J+4SmygfLvvjKtJF5MK2ipxKax/hQb31qp9A1VCA7SXPtX6jaLu
         VGYqgp0S5ZCfRBWmYQSoEAVAPv3F5lkAIhQOQ9bGuk3ITUNHtTQzZ21Lmbj5WSpaDg0S
         Fjs7z7f8lmRPSjd34xsRCBhKNU2f6K6vM3s+ECq7kiBqPZH8nTNHeeV5R+H9OhF2aoe+
         4Kaw==
X-Gm-Message-State: AC+VfDxoc+hlDhaWrRvcA13hWozm/RJNcUzFMCNYJiLN9Iy9/M3SJUzm
	3yuPu0jCwAVHr3HHmNwS8B0MZ4E38jVJflk14wyUw4uIy/K/EQWme3UZoASsAf39o5Y9aUXekeG
	qk5XGjDh8E1S3AuBzfj3zZ+3dBFg=
X-Received: by 2002:a7b:ce88:0:b0:3f6:c7b:d3b7 with SMTP id q8-20020a7bce88000000b003f60c7bd3b7mr3479825wmj.37.1684918080856;
        Wed, 24 May 2023 01:48:00 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ7UHT/Zh7wO+OH1Oxh3QT/tcrOrKhvVcbUbZ7TzycjShzVBmTQRbmnX0N1Gj6T5Sjrqzkd74Q==
X-Received: by 2002:a7b:ce88:0:b0:3f6:c7b:d3b7 with SMTP id q8-20020a7bce88000000b003f60c7bd3b7mr3479809wmj.37.1684918080585;
        Wed, 24 May 2023 01:48:00 -0700 (PDT)
Date: Wed, 24 May 2023 10:47:53 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Julia Suvorova <jusual@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Hanna Reitz <hreitz@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, xen-devel@lists.xenproject.org, 
	eblake@redhat.com, Anthony Perard <anthony.perard@citrix.com>, 
	qemu-block@nongnu.org
Subject: Re: [PATCH v2 4/6] block/io_uring: convert to blk_io_plug_call() API
Message-ID: <qnl7sgmindnbs32daw44rhgbyhcw4lruecqgtctdbeiwc7yog2@yxuukgxwi4m3>
References: <20230523171300.132347-1-stefanha@redhat.com>
 <20230523171300.132347-5-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230523171300.132347-5-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline

On Tue, May 23, 2023 at 01:12:58PM -0400, Stefan Hajnoczi wrote:
>Stop using the .bdrv_co_io_plug() API because it is not multi-queue
>block layer friendly. Use the new blk_io_plug_call() API to batch I/O
>submission instead.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>Reviewed-by: Eric Blake <eblake@redhat.com>
>---
>v2
>- Removed whitespace hunk [Eric]
>---
> include/block/raw-aio.h |  7 -------
> block/file-posix.c      | 10 ----------
> block/io_uring.c        | 44 ++++++++++++++++-------------------------
> block/trace-events      |  5 ++---
> 4 files changed, 19 insertions(+), 47 deletions(-)

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
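The blk_io_plug_call() pattern this series converts to (defer a driver's submission callback while plugged, coalesce duplicate registrations, and flush everything when the outermost plug is released) can be modeled in a small standalone sketch. This is an illustrative simplification, not QEMU's actual implementation: the names blk_io_plug/blk_io_unplug/blk_io_plug_call mirror the API under review, but the fixed-size table and global state here are assumptions made for brevity (QEMU keeps this state per thread).

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the deferred-call batching API discussed in this
 * thread. A single global suffices for illustration. */
typedef void (*UnplugFn)(void *opaque);

#define MAX_CALLS 16

static struct {
    int depth;                   /* nesting level of blk_io_plug() */
    int ncalls;
    UnplugFn fns[MAX_CALLS];
    void *opaques[MAX_CALLS];
} plug_state;

void blk_io_plug(void)
{
    plug_state.depth++;
}

/* Defer fn(opaque) until the outermost unplug; if called while unplugged,
 * invoke it immediately. Duplicate (fn, opaque) pairs are coalesced so a
 * driver's submit callback runs once per batch. */
void blk_io_plug_call(UnplugFn fn, void *opaque)
{
    if (plug_state.depth == 0) {
        fn(opaque);
        return;
    }
    for (int i = 0; i < plug_state.ncalls; i++) {
        if (plug_state.fns[i] == fn && plug_state.opaques[i] == opaque) {
            return; /* already queued for this batch */
        }
    }
    assert(plug_state.ncalls < MAX_CALLS);
    plug_state.fns[plug_state.ncalls] = fn;
    plug_state.opaques[plug_state.ncalls] = opaque;
    plug_state.ncalls++;
}

void blk_io_unplug(void)
{
    assert(plug_state.depth > 0);
    if (--plug_state.depth == 0) {
        for (int i = 0; i < plug_state.ncalls; i++) {
            plug_state.fns[i](plug_state.opaques[i]);
        }
        plug_state.ncalls = 0;
    }
}
```

In this model, a driver such as block/io_uring.c would append each request to its pending queue and register one deferred flush with blk_io_plug_call(), so a single submission system call covers the whole batch at unplug time.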



From xen-devel-bounces@lists.xenproject.org Wed May 24 08:52:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:52:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538896.839245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kEa-0000LR-VQ; Wed, 24 May 2023 08:52:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538896.839245; Wed, 24 May 2023 08:52:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kEa-0000LK-Sf; Wed, 24 May 2023 08:52:40 +0000
Received: by outflank-mailman (input) for mailman id 538896;
 Wed, 24 May 2023 08:52:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cCLF=BN=redhat.com=sgarzare@srs-se1.protection.inumbo.net>)
 id 1q1kEa-0000Ky-0r
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:52:40 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 55b02e66-fa10-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 10:52:39 +0200 (CEST)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-150-NzXqVvZhNByjNFXUNYOTFQ-1; Wed, 24 May 2023 04:52:34 -0400
Received: by mail-wm1-f71.google.com with SMTP id
 5b1f17b1804b1-3f60911a417so2638465e9.2
 for <xen-devel@lists.xenproject.org>; Wed, 24 May 2023 01:52:34 -0700 (PDT)
Received: from sgarzare-redhat ([134.0.5.107])
 by smtp.gmail.com with ESMTPSA id
 c8-20020a05600c0ac800b003f4285629casm1515371wmr.42.2023.05.24.01.52.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 24 May 2023 01:52:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55b02e66-fa10-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684918358;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=3kcVMQ7kvj0Iz6K5ZAjc8mmj/6YQBshqNuaYR17461o=;
	b=hgGcRmNAlUBciCHcGzA5HuoKMZYIbdmOU/NkprZKd7/QpBd8Tk7pUwjMSf+d0w8lGcCi/K
	O3Z6Nc89x8i5lM7UJaV3hLpdWI/oGWNjr6e8PP0O7zl5KL4sUzIDH8/h4NtaIWOjOGogPu
	QeTW8Q+MYI8kJsVSYbCgvAL71suW/zI=
X-MC-Unique: NzXqVvZhNByjNFXUNYOTFQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684918353; x=1687510353;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3kcVMQ7kvj0Iz6K5ZAjc8mmj/6YQBshqNuaYR17461o=;
        b=Kk6P9VTemsx+snlZ9wyTf5Da4h/YGdR41yJsIqmYHkU87E2TfY6qd5xWFSqRYlUw7U
         EaKXH65VqpGe2rQxJCpBWtEc2TjZgcLiGfk/fxMM261eRDgaNBQZnuRmMNbi9r/AnUoA
         a5kaoFisWwriOHh44oGgKK9SvvzUNA6DpFdTVDt+4bdZpA7xKcC0NBQlT7u1JD1eWTey
         FRsTTOMRj4puquTxFv5G3pDrip6VmBiCz8gITUZc2ESE9VR2JytiACDVvS8q9A63SGXA
         DuV+s8xtqAeAHc3DqaCMrrwPgsVltQGR/3+Q/kAh1qcWZ21Ue8ZxTtjMS9z54VuKWknF
         igTQ==
X-Gm-Message-State: AC+VfDxNMIUZXM/4lVH0M620KLiJrq3uYX+9tPqNK84dltpgRSmhK4ir
	x/mnHH1XqEQDyBiEHXskn2JC6Ie7O3/KoVTKFdqgEfw55D1R8NUUHW8tPn2Bn0f1+29IkSRJvZH
	2neMbQp7Oo0kNVuSifQXGPcYE6iQ=
X-Received: by 2002:a7b:cd8a:0:b0:3f6:a66:a372 with SMTP id y10-20020a7bcd8a000000b003f60a66a372mr3673523wmj.1.1684918353366;
        Wed, 24 May 2023 01:52:33 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ7TQg7OL7OGuh6V7LESCC2zr/IQghY3wNUBo+TQKydHcYVu7y54l8KC2f19D6VPXvGmoMotOg==
X-Received: by 2002:a7b:cd8a:0:b0:3f6:a66:a372 with SMTP id y10-20020a7bcd8a000000b003f60a66a372mr3673507wmj.1.1684918353149;
        Wed, 24 May 2023 01:52:33 -0700 (PDT)
Date: Wed, 24 May 2023 10:52:29 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Julia Suvorova <jusual@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Hanna Reitz <hreitz@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, xen-devel@lists.xenproject.org, 
	eblake@redhat.com, Anthony Perard <anthony.perard@citrix.com>, 
	qemu-block@nongnu.org
Subject: Re: [PATCH v2 6/6] block: remove bdrv_co_io_plug() API
Message-ID: <yosdxsqcqcj4utt4dwmfgfdsgnmduu6avjc7wrd33bxzs3vds4@wossen4sa7ve>
References: <20230523171300.132347-1-stefanha@redhat.com>
 <20230523171300.132347-7-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230523171300.132347-7-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline

On Tue, May 23, 2023 at 01:13:00PM -0400, Stefan Hajnoczi wrote:
>No block driver implements .bdrv_co_io_plug() anymore. Get rid of the
>function pointers.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>Reviewed-by: Eric Blake <eblake@redhat.com>
>---
> include/block/block-io.h         |  3 ---
> include/block/block_int-common.h | 11 ----------
> block/io.c                       | 37 --------------------------------
> 3 files changed, 51 deletions(-)

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>



From xen-devel-bounces@lists.xenproject.org Wed May 24 08:52:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 08:52:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538897.839256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kEq-0000ko-7F; Wed, 24 May 2023 08:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538897.839256; Wed, 24 May 2023 08:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kEq-0000kh-3L; Wed, 24 May 2023 08:52:56 +0000
Received: by outflank-mailman (input) for mailman id 538897;
 Wed, 24 May 2023 08:52:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cCLF=BN=redhat.com=sgarzare@srs-se1.protection.inumbo.net>)
 id 1q1kEo-0000hy-KT
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 08:52:54 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5d4f4496-fa10-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 10:52:52 +0200 (CEST)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-132-H62jd5bBNBKhJzcwknVWQg-1; Wed, 24 May 2023 04:52:09 -0400
Received: by mail-wr1-f72.google.com with SMTP id
 ffacd0b85a97d-30a8ba2debdso196206f8f.3
 for <xen-devel@lists.xenproject.org>; Wed, 24 May 2023 01:52:09 -0700 (PDT)
Received: from sgarzare-redhat ([134.0.5.107])
 by smtp.gmail.com with ESMTPSA id
 u10-20020a5d514a000000b003079693eff2sm13651819wrt.41.2023.05.24.01.52.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 24 May 2023 01:52:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d4f4496-fa10-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1684918371;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hS0uvYBcWnukVOxqzspQrxWnJxsCSKpfkO34Ey+pI4g=;
	b=Ci7s6tmqxiVzQNTYA+3lF9QtD5YoVtS7HagY7nu+mzeu6da1l9zCupInQ38HIKDWPxL1YL
	ibJeZz5IlqVQRYbXptZrC5fTZbNTowk10YRijWjbEMeukzU+b4RI5JkGcssiIpB7YekGeB
	fi1OjPpNK2bAet5Wl1Y5fHKMF2mUI/I=
X-MC-Unique: H62jd5bBNBKhJzcwknVWQg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1684918328; x=1687510328;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=hS0uvYBcWnukVOxqzspQrxWnJxsCSKpfkO34Ey+pI4g=;
        b=HqU2/YwC3XKYoHw+Dq3duna3dO1w2F6chAIr5RNjpkG18ke6XXcGUFG3TOXrreY4lT
         jBJT61Ym9bxXDC9LEaolz+TDRxbl379hGiS/Xm1ztEtbkxTJv5Qv6f5yqxFF5Z8vL8+q
         TGZZiFirhIy6ZxtuE1RX2fciVQo8VVPK8pGsSlab7Xl4dcl3P6/GZ2KKsFbIafnCDlW1
         Gc1w2FQk2q0awD9jJoSqGbp2+w+s5asAayd/Fd/BK9JUsrFHZkN2eAttYh0t0dPankjV
         MI1GqYq9RmOWyjfPIxoO/MGIFCgXZzZgVun7UNgFD84PuPoRps4kDJPRWv6ZmRUJr8an
         1cmQ==
X-Gm-Message-State: AC+VfDyNDMFG5FsdzStiC7IU2dmO25qsGQIgt91NR+7jXa4/1lEkXeSV
	e/6Lw3Zpzu83XzkMpyxOKKxTbI2OMmC4hZW4/m8WOkVHZzRDhDmmrbZ9CJJ6A+QhZeoQTP6MCZ/
	0gUhHAPWA4roRxeC0Yw4aZTG33lA=
X-Received: by 2002:a5d:40c4:0:b0:309:43a2:8e9f with SMTP id b4-20020a5d40c4000000b0030943a28e9fmr12034395wrq.27.1684918328470;
        Wed, 24 May 2023 01:52:08 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ48wcBIrQ1YXKTu9Tjicy/7O8ibTJFW/CAMNgZakXBqw0/BbHdtzE9vLxgJQXPbBNkM7YHDjg==
X-Received: by 2002:a5d:40c4:0:b0:309:43a2:8e9f with SMTP id b4-20020a5d40c4000000b0030943a28e9fmr12034374wrq.27.1684918328166;
        Wed, 24 May 2023 01:52:08 -0700 (PDT)
Date: Wed, 24 May 2023 10:52:03 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Julia Suvorova <jusual@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Hanna Reitz <hreitz@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, xen-devel@lists.xenproject.org, 
	eblake@redhat.com, Anthony Perard <anthony.perard@citrix.com>, 
	qemu-block@nongnu.org
Subject: Re: [PATCH v2 5/6] block/linux-aio: convert to blk_io_plug_call() API
Message-ID: <n6hik7dbl26lomhxvfal2kjrq6jhdiknjepb372dvxavuwiw6q@3l3mo4eywoxq>
References: <20230523171300.132347-1-stefanha@redhat.com>
 <20230523171300.132347-6-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230523171300.132347-6-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline

On Tue, May 23, 2023 at 01:12:59PM -0400, Stefan Hajnoczi wrote:
>Stop using the .bdrv_co_io_plug() API because it is not multi-queue
>block layer friendly. Use the new blk_io_plug_call() API to batch I/O
>submission instead.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>Reviewed-by: Eric Blake <eblake@redhat.com>
>---
> include/block/raw-aio.h |  7 -------
> block/file-posix.c      | 28 ----------------------------
> block/linux-aio.c       | 41 +++++++++++------------------------------
> 3 files changed, 11 insertions(+), 65 deletions(-)
>
>diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
>index da60ca13ef..0f63c2800c 100644
>--- a/include/block/raw-aio.h
>+++ b/include/block/raw-aio.h
>@@ -62,13 +62,6 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
>
> void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
> void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
>-
>-/*
>- * laio_io_plug/unplug work in the thread's current AioContext, therefore the
>- * caller must ensure that they are paired in the same IOThread.
>- */
>-void laio_io_plug(void);
>-void laio_io_unplug(uint64_t dev_max_batch);
> #endif
> /* io_uring.c - Linux io_uring implementation */
> #ifdef CONFIG_LINUX_IO_URING
>diff --git a/block/file-posix.c b/block/file-posix.c
>index 7baa8491dd..ac1ed54811 100644
>--- a/block/file-posix.c
>+++ b/block/file-posix.c
>@@ -2550,26 +2550,6 @@ static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
>     return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
> }
>
>-static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
>-{
>-    BDRVRawState __attribute__((unused)) *s = bs->opaque;
>-#ifdef CONFIG_LINUX_AIO
>-    if (s->use_linux_aio) {
>-        laio_io_plug();
>-    }
>-#endif
>-}
>-
>-static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
>-{
>-    BDRVRawState __attribute__((unused)) *s = bs->opaque;
>-#ifdef CONFIG_LINUX_AIO
>-    if (s->use_linux_aio) {
>-        laio_io_unplug(s->aio_max_batch);
>-    }
>-#endif
>-}
>-
> static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
> {
>     BDRVRawState *s = bs->opaque;
>@@ -3914,8 +3894,6 @@ BlockDriver bdrv_file = {
>     .bdrv_co_copy_range_from = raw_co_copy_range_from,
>     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
>     .bdrv_refresh_limits = raw_refresh_limits,
>-    .bdrv_co_io_plug        = raw_co_io_plug,
>-    .bdrv_co_io_unplug      = raw_co_io_unplug,
>     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
>
>     .bdrv_co_truncate                   = raw_co_truncate,
>@@ -4286,8 +4264,6 @@ static BlockDriver bdrv_host_device = {
>     .bdrv_co_copy_range_from = raw_co_copy_range_from,
>     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
>     .bdrv_refresh_limits = raw_refresh_limits,
>-    .bdrv_co_io_plug        = raw_co_io_plug,
>-    .bdrv_co_io_unplug      = raw_co_io_unplug,
>     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
>
>     .bdrv_co_truncate                   = raw_co_truncate,
>@@ -4424,8 +4400,6 @@ static BlockDriver bdrv_host_cdrom = {
>     .bdrv_co_pwritev        = raw_co_pwritev,
>     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
>     .bdrv_refresh_limits    = cdrom_refresh_limits,
>-    .bdrv_co_io_plug        = raw_co_io_plug,
>-    .bdrv_co_io_unplug      = raw_co_io_unplug,
>     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
>
>     .bdrv_co_truncate                   = raw_co_truncate,
>@@ -4552,8 +4526,6 @@ static BlockDriver bdrv_host_cdrom = {
>     .bdrv_co_pwritev        = raw_co_pwritev,
>     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
>     .bdrv_refresh_limits    = cdrom_refresh_limits,
>-    .bdrv_co_io_plug        = raw_co_io_plug,
>-    .bdrv_co_io_unplug      = raw_co_io_unplug,
>     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
>
>     .bdrv_co_truncate                   = raw_co_truncate,
>diff --git a/block/linux-aio.c b/block/linux-aio.c
>index 442c86209b..5021aed68f 100644
>--- a/block/linux-aio.c
>+++ b/block/linux-aio.c
>@@ -15,6 +15,7 @@
> #include "qemu/event_notifier.h"
> #include "qemu/coroutine.h"
> #include "qapi/error.h"
>+#include "sysemu/block-backend.h"
>
> /* Only used for assertions.  */
> #include "qemu/coroutine_int.h"
>@@ -46,7 +47,6 @@ struct qemu_laiocb {
> };
>
> typedef struct {
>-    int plugged;
>     unsigned int in_queue;
>     unsigned int in_flight;
>     bool blocked;
>@@ -236,7 +236,7 @@ static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
> {
>     qemu_laio_process_completions(s);
>
>-    if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
>+    if (!QSIMPLEQ_EMPTY(&s->io_q.pending)) {
>         ioq_submit(s);
>     }
> }
>@@ -277,7 +277,6 @@ static void qemu_laio_poll_ready(EventNotifier *opaque)
> static void ioq_init(LaioQueue *io_q)
> {
>     QSIMPLEQ_INIT(&io_q->pending);
>-    io_q->plugged = 0;
>     io_q->in_queue = 0;
>     io_q->in_flight = 0;
>     io_q->blocked = false;
>@@ -354,31 +353,11 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
>     return max_batch;
> }
>
>-void laio_io_plug(void)
>+static void laio_unplug_fn(void *opaque)
> {
>-    AioContext *ctx = qemu_get_current_aio_context();
>-    LinuxAioState *s = aio_get_linux_aio(ctx);
>+    LinuxAioState *s = opaque;
>
>-    s->io_q.plugged++;
>-}
>-
>-void laio_io_unplug(uint64_t dev_max_batch)
>-{
>-    AioContext *ctx = qemu_get_current_aio_context();
>-    LinuxAioState *s = aio_get_linux_aio(ctx);
>-
>-    assert(s->io_q.plugged);
>-    s->io_q.plugged--;
>-
>-    /*
>-     * Why max batch checking is performed here:
>-     * Another BDS may have queued requests with a higher dev_max_batch and
>-     * therefore in_queue could now exceed our dev_max_batch. Re-check the max
>-     * batch so we can honor our device's dev_max_batch.
>-     */
>-    if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch) ||

Why are we removing this condition?
Could the same situation occur with the new API?

Thanks,
Stefano

>-        (!s->io_q.plugged &&
>-         !s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending))) {
>+    if (!s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
>         ioq_submit(s);
>     }
> }
>@@ -410,10 +389,12 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
>
>     QSIMPLEQ_INSERT_TAIL(&s->io_q.pending, laiocb, next);
>     s->io_q.in_queue++;
>-    if (!s->io_q.blocked &&
>-        (!s->io_q.plugged ||
>-         s->io_q.in_queue >= laio_max_batch(s, dev_max_batch))) {
>-        ioq_submit(s);
>+    if (!s->io_q.blocked) {
>+        if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch)) {
>+            ioq_submit(s);
>+        } else {
>+            blk_io_plug_call(laio_unplug_fn, s);
>+        }
>     }
>
>     return 0;
>-- 
>2.40.1
>
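The new submission decision in the laio_do_submit() hunk quoted above reduces to a small predicate: do nothing while the queue is blocked, submit immediately once the per-device batch limit is reached, otherwise defer through blk_io_plug_call(). The sketch below is a hypothetical standalone restatement of that control flow with illustrative names; it is not the actual block/linux-aio.c code.

```c
#include <assert.h>
#include <stdbool.h>

/* Outcome of queuing one request, mirroring the laio_do_submit() hunk
 * quoted above. Enum and function names are illustrative, not QEMU's. */
enum laio_action { LAIO_NONE, LAIO_SUBMIT_NOW, LAIO_DEFER_TO_UNPLUG };

/* After appending a request to io_q.pending: if the queue is blocked do
 * nothing; if in_queue has reached the device's batch limit, submit now;
 * otherwise register a deferred submission via blk_io_plug_call(). */
static enum laio_action laio_queue_action(unsigned in_queue,
                                          unsigned max_batch,
                                          bool blocked)
{
    if (blocked) {
        return LAIO_NONE;
    }
    if (in_queue >= max_batch) {
        return LAIO_SUBMIT_NOW;
    }
    return LAIO_DEFER_TO_UNPLUG;
}
```

Note that in this shape the max-batch check happens only at enqueue time; the re-check that used to live in laio_io_unplug() (the condition Stefano asks about above) has no counterpart in laio_unplug_fn(), which submits whatever is pending.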



From xen-devel-bounces@lists.xenproject.org Wed May 24 09:03:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 09:03:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538904.839266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kOM-0002RB-7B; Wed, 24 May 2023 09:02:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538904.839266; Wed, 24 May 2023 09:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kOM-0002R4-3J; Wed, 24 May 2023 09:02:46 +0000
Received: by outflank-mailman (input) for mailman id 538904;
 Wed, 24 May 2023 09:02:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VEVP=BN=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1q1kOK-0002Qw-WE
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 09:02:45 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2053.outbound.protection.outlook.com [40.107.7.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bec8e311-fa11-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 11:02:44 +0200 (CEST)
Received: from AM0PR02CA0032.eurprd02.prod.outlook.com (2603:10a6:208:3e::45)
 by DB9PR08MB8740.eurprd08.prod.outlook.com (2603:10a6:10:3d0::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 09:02:15 +0000
Received: from AM7EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:3e:cafe::9c) by AM0PR02CA0032.outlook.office365.com
 (2603:10a6:208:3e::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 09:02:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT054.mail.protection.outlook.com (100.127.140.133) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 09:02:14 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Wed, 24 May 2023 09:02:13 +0000
Received: from 0f70156dfcd2.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 08E81128-BBDB-4B50-8710-4FBACE2D9294.1; 
 Wed, 24 May 2023 09:02:02 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0f70156dfcd2.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 09:02:02 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by GV1PR08MB8258.eurprd08.prod.outlook.com (2603:10a6:150:89::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 09:01:59 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482%4]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 09:01:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bec8e311-fa11-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cEONkkL9IaVASxqNP9PS1iiTVljciUjFEiI0xhZTpkY=;
 b=uPLGcgRzoBPtxrMAeM2kqWOIRPjT+jptjfJCU4NewcBpvitjWWT8qsPwB6jd6LaExoTHEwuPuqKglABnjo7wB5NGzOakoDdMygr9EuH0kyrpOAgfwZ3+4GT5HSg44/PDI8PbypdVKPxg7yQG1pVvuo0ZTsNLOtFVZzp4/6GCoVI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: e73089b0b7829119
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SLuCavJpVuafjEKsehCt2mbq2D8wzkqCpGKXT4DPIqudilfZJlJWfVTuOXTBkH3OPyS/7HwjHPsJoj2F8mNL2QtKkVBmHOXf9sI/6VDAoSZCl8vmiCovsBplJYRWIu/zvSP/WPsN/A2Ii8uS2Y1TEHSck2L1qJIyu5GXA+tEVEMdkp1eiYF/oD26mq2swEwR8J3n6td7Cp1J4kDoOVObrG41XEvf7oz+q45coO22JOOeU530nmIpblht/r7xNZA/9kGctfuM1QegPqjX7SqI1HJYGnri+WG5eAcycILAqBKwaKpPyrwR7YnbehKGIFvaTCkad5ZK/fc/hVX+bBDuOg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cEONkkL9IaVASxqNP9PS1iiTVljciUjFEiI0xhZTpkY=;
 b=T0DwA04jnrFHqMq0jRSjCpt0OSf5YaP9+OSSl9Ekkn8XN6JhdKBRbg2goUusqltGbi7eVRFP/l1M0QMDnycNQYG2HKPptYcZM+seqnzYPb3fAmXld8K2Kw+JSl6hBSkWx+L5emAvwafvLQ3kpwzgLo/KTOclmu8r7Krw9G3zXWWkyFinbtt+zSrzvzoDE1s0v1J47KUnimoS1HqbyKgEJQdHWkJYy3boPLCfJYKEjwdeIGOFtjW+QTyXkIzFA9b9hs+JODcpdCIjXtO99ftVSBM5SBKk5y/3/qrDDdMA4FTGsIqOup3CEo3YP1osPs0rn83yJIwtkyGKm3RbIXDZBw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cEONkkL9IaVASxqNP9PS1iiTVljciUjFEiI0xhZTpkY=;
 b=uPLGcgRzoBPtxrMAeM2kqWOIRPjT+jptjfJCU4NewcBpvitjWWT8qsPwB6jd6LaExoTHEwuPuqKglABnjo7wB5NGzOakoDdMygr9EuH0kyrpOAgfwZ3+4GT5HSg44/PDI8PbypdVKPxg7yQG1pVvuo0ZTsNLOtFVZzp4/6GCoVI=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v7 01/12] xen/arm: enable SVE extension for Xen
Thread-Index: AQHZjUpOMMV4Yflw9E2LiWKuLwqFE69pIiaA
Date: Wed, 24 May 2023 09:01:56 +0000
Message-ID: <98D7529A-FF7E-4079-B4FB-7EA096CB6822@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-2-luca.fancellu@arm.com>
In-Reply-To: <20230523074326.3035745-2-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|GV1PR08MB8258:EE_|AM7EUR03FT054:EE_|DB9PR08MB8740:EE_
X-MS-Office365-Filtering-Correlation-Id: 2c3ed4f2-624b-4667-3ead-08db5c35913b
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 DlGiHYbb6qAWt9NiF9Bk4Bbjj1xCwMTCHQA66GFUsZzPEaFsfvDK+hh+OoVhrzdPJOaFWYhwH4y5UzK6OLdTC8iJtDYC161/tURV/Cm+bIxs0UypOfhJxtx1k98ir3O4zqT740EkRI+3y8u7i6t44mqTf8dx6IzumqBCE4ZEKep598DwnhIQF/zWF9N7jRPtiIObRgCinjGQbz5p7LHVPuL7a79PNy4lq17bVaMYqsYZviX+x+mu0C4FkI2Kah6gxx3vySx2kYO3u16dq+P0/lCaoU9SLzd2+0SsSVjBzgzeVSQsAiroiYjY1Ku1l2ltoZudVdug6Jen30uQonPpwiwv8vZJEh+ZGO04uN33g+4tef3GOWEu40m5c1whGFoi5TSgVH+xt+aSmMDeh2ISBY+8v8wNgRyuohjydBqRbvRbJVsFL+ORWaM0HWsR9l8gzODLVPWz6V/X8Q5p71oICDtuuQFntp6/snw2Egl+hM8DaUspkD5z3EG2ncJ/Pxf7Fc10U1OIdr2JPyzo4pY1nsyezIRwtyIWaKutya6PtT1FCMHNllXtgqwfw4BQCKn0XOMkD2YF1j77s4iL/YlfA0qlSyguxWlpzeCEF5+xxGFDajz+XvbDeYTPmKJ8msdjAKU7rd7smdTLYy0Ftdg5vA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(376002)(366004)(136003)(346002)(396003)(39860400002)(451199021)(54906003)(86362001)(37006003)(41300700001)(6486002)(478600001)(316002)(66446008)(91956017)(4326008)(6636002)(66476007)(66556008)(76116006)(6666004)(71200400001)(64756008)(66946007)(5660300002)(38070700005)(8676002)(6862004)(8936002)(38100700002)(122000001)(30864003)(26005)(33656002)(6512007)(6506007)(53546011)(83380400001)(2906002)(2616005)(186003)(36756003)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1143C7A69B0A4B429C9923E595167895@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8258
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5f949864-f35d-402b-6e0c-08db5c35867f

Hi Luca,

> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Enable Xen to handle the SVE extension, add code in cpufeature module
> to handle ZCR SVE register, disable trapping SVE feature on system
> boot only when SVE resources are accessed.
> While there, correct coding style for the comment on coprocessor
> trapping.
>
> Now cptr_el2 is part of the domain context and it will be restored
> on context switch, this is a preparation for saving the SVE context
> which will be part of VFP operations, so restore it before the call
> to save VFP registers.
> To save an additional isb barrier, restore cptr_el2 before an
> existing isb barrier and move the call for saving VFP context after
> that barrier. To keep a (mostly) specularity of ctxt_switch_to()
> and ctxt_switch_from(), move vfp_save_state() up in the function.
>
> Change the KConfig entry to make ARM64_SVE symbol selectable, by
> default it will be not selected.
>
> Create sve module and sve_asm.S that contains assembly routines for
> the SVE feature, this code is inspired from linux and it uses
> instruction encoding to be compatible with compilers that does not
> support SVE, imported instructions are documented in
> README.LinuxPrimitives.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

with one minor NIT that could be fixed on commit...

> ---
> Changes from v6:
> - modified licence, add emacs block, move vfp_save_state up in the
>   function, add comments to CPTR_EL2 and vfp_restore_state, don't
>   use variable in init_traps(), code style fixes,
>   add entries to README.LinuxPrimitives (Julien)
> - vl_to_zcr is moved into sve.c module as changes to the series led
>   to its usage only inside it, remove stub for compute_max_zcr and
>   rely on compiler DCE.
> Changes from v5:
> - Add R-by Bertrand
> Changes from v4:
> - don't use fixed types in vl_to_zcr, forgot to address that in
>   v3, by mistake I changed that in patch 2, fixing now (Jan)
> Changes from v3:
> - no changes
> Changes from v2:
> - renamed sve_asm.S in sve-asm.S, new files should not contain
>   underscore in the name (Jan)
> Changes from v1:
> - Add assert to vl_to_zcr, it is never called with vl==0, but just
>   to be sure it won't in the future.
> Changes from RFC:
> - Moved restoring of cptr before an existing barrier (Julien)
> - Marked the feature as unsupported for now (Julien)
> - Trap and un-trap only when using SVE resources in
>   compute_max_zcr() (Julien)
> ---
> xen/arch/arm/Kconfig                     | 10 ++--
> xen/arch/arm/README.LinuxPrimitives      |  9 ++++
> xen/arch/arm/arm64/Makefile              |  1 +
> xen/arch/arm/arm64/cpufeature.c          |  7 ++-
> xen/arch/arm/arm64/sve-asm.S             | 48 +++++++++++++++++++
> xen/arch/arm/arm64/sve.c                 | 59 ++++++++++++++++++++++++
> xen/arch/arm/cpufeature.c                |  6 ++-
> xen/arch/arm/domain.c                    | 20 +++++---
> xen/arch/arm/include/asm/arm64/sve.h     | 27 +++++++++++
> xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
> xen/arch/arm/include/asm/cpufeature.h    | 14 ++++++
> xen/arch/arm/include/asm/domain.h        |  1 +
> xen/arch/arm/include/asm/processor.h     |  2 +
> xen/arch/arm/setup.c                     |  5 +-
> xen/arch/arm/traps.c                     | 27 ++++++-----
> 15 files changed, 210 insertions(+), 27 deletions(-)
> create mode 100644 xen/arch/arm/arm64/sve-asm.S
> create mode 100644 xen/arch/arm/arm64/sve.c
> create mode 100644 xen/arch/arm/include/asm/arm64/sve.h
>
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 239d3aed3c7f..41f45d8d1203 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -112,11 +112,15 @@ config ARM64_PTR_AUTH
>  This feature is not supported in Xen.
>
> config ARM64_SVE
> - def_bool n
> + bool "Enable Scalar Vector Extension support (UNSUPPORTED)" if UNSUPPORTED
> depends on ARM_64
> help
> -  Scalar Vector Extension support.
> -  This feature is not supported in Xen.
> +  Scalar Vector Extension (SVE/SVE2) support for guests.
> +
> +  Please be aware that currently, enabling this feature will add latency on
> +  VM context switch between SVE enabled guests, between not-enabled SVE
> +  guests and SVE enabled guests and viceversa, compared to the time
> +  required to switch between not-enabled SVE guests.
>
> config ARM64_MTE
> def_bool n
> diff --git a/xen/arch/arm/README.LinuxPrimitives b/xen/arch/arm/README.LinuxPrimitives
> index 1d53e6a898da..76c8df29e416 100644
> --- a/xen/arch/arm/README.LinuxPrimitives
> +++ b/xen/arch/arm/README.LinuxPrimitives
> @@ -62,6 +62,15 @@ done
> linux/arch/arm64/lib/clear_page.S       xen/arch/arm/arm64/lib/clear_page.S
> linux/arch/arm64/lib/copy_page.S        unused in Xen
>
> +---------------------------------------------------------------------
> +
> +SVE assembly macro: last sync @ v6.3.0 (last commit: 457391b03803)
> +
> +linux/arch/arm64/include/asm/fpsimdmacros.h   xen/arch/arm/include/asm/arm64/sve-asm.S
> +
> +The following macros were taken from Linux:
> +    _check_general_reg, _check_num, _sve_rdvl
> +
> ===========================================================================
> arm32
> ===========================================================================
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 28481393e98f..54ad55c75cda 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -13,6 +13,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
> obj-y += mm.o
> obj-y += smc.o
> obj-y += smpboot.o
> +obj-$(CONFIG_ARM64_SVE) += sve.o sve-asm.o
> obj-y += traps.o
> obj-y += vfp.o
> obj-y += vsysreg.o
> diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
> index d9039d37b2d1..b4656ff4d80f 100644
> --- a/xen/arch/arm/arm64/cpufeature.c
> +++ b/xen/arch/arm/arm64/cpufeature.c
> @@ -455,15 +455,11 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
> ARM64_FTR_END,
> };
>
> -#if 0
> -/* TODO: use this to sanitize SVE once we support it */
> -
> static const struct arm64_ftr_bits ftr_zcr[] = {
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
> ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0), /* LEN */
> ARM64_FTR_END,
> };
> -#endif
>
> /*
>  * Common ftr bits for a 32bit register with all hidden, strict
> @@ -603,6 +599,9 @@ void update_system_features(const struct cpuinfo_arm *new)
>
> SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
>
> + if ( cpu_has_sve )
> + SANITIZE_REG(zcr64, 0, zcr);
> +
> /*
> * Comment from Linux:
> * Userspace may perform DC ZVA instructions. Mismatched block sizes
> diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
> new file mode 100644
> index 000000000000..4d1549344733
> --- /dev/null
> +++ b/xen/arch/arm/arm64/sve-asm.S
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Arm SVE assembly routines
> + *
> + * Copyright (C) 2022 ARM Ltd.
> + *
> + * Some macros and instruction encoding in this file are taken from linux 6.1.1,
> + * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
> + * version.
> + * version.
> + */
> +
> +/* Sanity-check macros to help avoid encoding garbage instructions */
> +
> +.macro _check_general_reg nr
> +    .if (\nr) < 0 || (\nr) > 30
> +        .error "Bad register number \nr."
> +    .endif
> +.endm
> +
> +.macro _check_num n, min, max
> +    .if (\n) < (\min) || (\n) > (\max)
> +        .error "Number \n out of range [\min,\max]"
> +    .endif
> +.endm
> +
> +/* SVE instruction encodings for non-SVE-capable assemblers */
> +/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
> +
> +/* RDVL X\nx, #\imm */
> +.macro _sve_rdvl nx, imm
> +    _check_general_reg \nx
> +    _check_num (\imm), -0x20, 0x1f
> +    .inst 0x04bf5000                \
> +        | (\nx)                     \
> +        | (((\imm) & 0x3f) << 5)
> +.endm
> +
> +/* Gets the current vector register size in bytes */
> +GLOBAL(sve_get_hw_vl)
> +    _sve_rdvl 0, 1
> +    ret
> +
> +/*
> + * Local variables:
> + * mode: ASM
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> new file mode 100644
> index 000000000000..e05ccc38a896
> --- /dev/null
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -0,0 +1,59 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Arm SVE feature code
> + *
> + * Copyright (C) 2022 ARM Ltd.
> + */
> +
> +#include <xen/types.h>
> +#include <asm/arm64/sve.h>
> +#include <asm/arm64/sysregs.h>
> +#include <asm/processor.h>
> +#include <asm/system.h>
> +
> +extern unsigned int sve_get_hw_vl(void);
> +
> +/* Takes a vector length in bits and returns the ZCR_ELx encoding */
> +static inline register_t vl_to_zcr(unsigned int vl)
> +{
> +    ASSERT(vl > 0);
> +    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
> +}
> +
> +register_t compute_max_zcr(void)
> +{
> +    register_t cptr_bits = get_default_cptr_flags();
> +    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
> +    unsigned int hw_vl;
> +
> +    /* Remove trap for SVE resources */
> +    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
> +    isb();
> +
> +    /*
> +     * Set the maximum SVE vector length, doing that we will know the VL
> +     * supported by the platform, calling sve_get_hw_vl()
> +     */
> +    WRITE_SYSREG(zcr, ZCR_EL2);
> +
> +    /*
> +     * Read the maximum VL, which could be lower than what we imposed before,
> +     * hw_vl contains VL in bytes, multiply it by 8 to use vl_to_zcr() later
> +     */
> +    hw_vl = sve_get_hw_vl() * 8U;
> +
> +    /* Restore CPTR_EL2 */
> +    WRITE_SYSREG(cptr_bits, CPTR_EL2);
> +    isb();
> +
> +    return vl_to_zcr(hw_vl);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
> index c4ec38bb2554..83b84368f6d5 100644
> --- a/xen/arch/arm/cpufeature.c
> +++ b/xen/arch/arm/cpufeature.c
> @@ -9,6 +9,7 @@
> #include <xen/init.h>
> #include <xen/smp.h>
> #include <xen/stop_machine.h>
> +#include <asm/arm64/sve.h>
> #include <asm/cpufeature.h>
>
> DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
> @@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
>
>     c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
>
> +    if ( cpu_has_sve )
> +        c->zcr64.bits[0] = compute_max_zcr();
> +
>     c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
>
>     c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
> @@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
>     guest_cpuinfo.pfr64.mpam = 0;
>     guest_cpuinfo.pfr64.mpam_frac = 0;
>
> -    /* Hide SVE as Xen does not support it */
> +    /* Hide SVE by default to the guests */

Everything is for guests and, as Jan mentioned in another comment,
this could be wrongly interpreted.

Here I would suggest to just stick to:
/* Hide SVE by default */

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Wed May 24 09:24:14 2023
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 02/12] xen/arm: add SVE vector length field to the
 domain
Date: Wed, 24 May 2023 09:23:40 +0000
Message-ID: <0052F4DC-79EF-47BF-968E-B3E42F7900D0@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-3-luca.fancellu@arm.com>
In-Reply-To: <20230523074326.3035745-3-luca.fancellu@arm.com>

Hi Luca,

> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Add sve_vl field to arch_domain and xen_arch_domainconfig struct,
> to allow the domain to have an information about the SVE feature
> and the number of SVE register bits that are allowed for this
> domain.
>
> sve_vl field is the vector length in bits divided by 128, this
> allows to use less space in the structures.
>
> The field is used also to allow or forbid a domain to use SVE,
> because a value equal to zero means the guest is not allowed to
> use the feature.
>
> Check that the requested vector length is lower or equal to the
> platform supported vector length, otherwise fail on domain
> creation.
>
> Check that only 64 bit domains have SVE enabled, otherwise fail.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Wed May 24 09:26:12 2023
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Julien Grall
	<jgrall@amazon.com>
Subject: Re: [PATCH v7 03/12] xen/arm: Expose SVE feature to the guest
Date: Wed, 24 May 2023 09:25:37 +0000
Message-ID: <88BD5890-59AE-45E9-AFC0-9D630C8738A7@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-4-luca.fancellu@arm.com>
In-Reply-To: <20230523074326.3035745-4-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Luca,

> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> When a guest is allowed to use SVE, expose the SVE features through
> the identification registers.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Acked-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Changes from v6:
> - code style fix, add A-by Julien
> Changes from v5:
> - given the move of is_sve_domain() in asm/arm64/sve.h, add the
>   header to vsysreg.c
> - dropping Bertrand's R-by because of the change
> Changes from v4:
> - no changes
> Changes from v3:
> - no changes
> Changes from v2:
> - no changes
> Changes from v1:
> - No changes
> Changes from RFC:
> - No changes
> ---
> xen/arch/arm/arm64/vsysreg.c | 41 ++++++++++++++++++++++++++++++++++--
> 1 file changed, 39 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
> index 758750983c11..fe31f7b3827f 100644
> --- a/xen/arch/arm/arm64/vsysreg.c
> +++ b/xen/arch/arm/arm64/vsysreg.c
> @@ -18,6 +18,8 @@
>
> #include <xen/sched.h>
>
> +#include <asm/arm64/cpufeature.h>
> +#include <asm/arm64/sve.h>
> #include <asm/current.h>
> #include <asm/regs.h>
> #include <asm/traps.h>
> @@ -295,7 +297,28 @@ void do_sysreg(struct cpu_user_regs *regs,
>     GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
>     GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
>     GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
> -    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
> +
> +    case HSR_SYSREG_ID_AA64PFR0_EL1:
> +    {
> +        register_t guest_reg_value = guest_cpuinfo.pfr64.bits[0];
> +
> +        if ( is_sve_domain(v->domain) )
> +        {
> +            /* 4 is the SVE field width in id_aa64pfr0_el1 */
> +            uint64_t mask = GENMASK(ID_AA64PFR0_SVE_SHIFT + 4 - 1,
> +                                    ID_AA64PFR0_SVE_SHIFT);
> +            /* sysval is the sve field on the system */
> +            uint64_t sysval = cpuid_feature_extract_unsigned_field_width(
> +                                system_cpuinfo.pfr64.bits[0],
> +                                ID_AA64PFR0_SVE_SHIFT, 4);
> +            guest_reg_value &= ~mask;
> +            guest_reg_value |= (sysval << ID_AA64PFR0_SVE_SHIFT) & mask;
> +        }
> +
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
> +                                  guest_reg_value);
> +    }
> +
>     GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
>     GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
>     GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
> @@ -306,7 +329,21 @@ void do_sysreg(struct cpu_user_regs *regs,
>     GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
>     GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
>     GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
> -    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
> +
> +    case HSR_SYSREG_ID_AA64ZFR0_EL1:
> +    {
> +        /*
> +         * When the guest has the SVE feature enabled, the whole id_aa64zfr0_el1
> +         * needs to be exposed.
> +         */
> +        register_t guest_reg_value = guest_cpuinfo.zfr64.bits[0];
> +
> +        if ( is_sve_domain(v->domain) )
> +            guest_reg_value = system_cpuinfo.zfr64.bits[0];
> +
> +        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
> +                                  guest_reg_value);
> +    }
>
>     /*
>      * Those cases are catching all Reserved registers trapped by TID3 which
> --
> 2.34.1
>



From xen-devel-bounces@lists.xenproject.org Wed May 24 09:28:12 2023
Message-ID: <6a1c8a1e-ce60-7e52-78fe-f82f21a89d1f@citrix.com>
Date: Wed, 24 May 2023 10:27:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [XEN PATCH 00/15] build: cleanup build log, avoid user's CFLAGS,
 avoid too many include of Config.mk
Content-Language: en-GB
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Alistair Francis <alistair.francis@wdc.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Doug Goldstein <cardoe@cardoe.com>, Bob Eshleman <bobbyeshleman@gmail.com>,
 Connor Davis <connojdavis@gmail.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-1-anthony.perard@citrix.com>

On 23/05/2023 5:37 pm, Anthony PERARD wrote:
> Patch series available in this git branch:
> https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.build-system-xen-removing-config.mk-v1
>
> Hi,
>
> This series of patches cleans up the remaining rules that still display
> their command line.
>
> Then, some changes are made in Config.mk to remove the build-id machinery
> that is only used by the Xen build.
>
> Then, the variables AFLAGS and CFLAGS are renamed to XEN_AFLAGS and
> XEN_CFLAGS from the beginning, to avoid inheriting the user's CFLAGS: those
> are usually meant for user-space programs, not a kernel, especially in the
> build environments of distro packages.
>
> The last patch removes the inclusion of Config.mk from xen/Rules.mk, as this
> slows down the build unnecessarily. xen/Makefile should do everything
> necessary to set up the build of the hypervisor, and is its only entry point.

Thank you for doing this.  I'm tempted to summarily ack it, but let's do
things properly.

One thing though, which I think might be a regression, but I'm not sure:
when doing an incremental build (a second build with no changes), we get:

...
  UPD     include/xen/compile.h
 __  __            _  _    _  ___                     _        _     _      
 \ \/ /___ _ __   | || |  / |( _ )    _   _ _ __  ___| |_ __ _| |__ | | ___
  \  // _ \ '_ \  | || |_ | |/ _ \ __| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | | |__   _|| | (_) |__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_|    |_|(_)_|\___/    \__,_|_| |_|___/\__\__,_|_.__/|_|\___|
                                                                            
make[2]: Nothing to be done for 'all'.
make[4]: Nothing to be done for 'all'.
  CC      common/version.o
  CC      arch/x86/efi/boot.o
...

I think those two "Nothing to be done for 'all'" lines are new.  I don't
see them in a build from clean.

~Andrew

P.S. I do have some other notes for further cleanup, but I'm not going
to extend this current series with them.


From xen-devel-bounces@lists.xenproject.org Wed May 24 09:31:33 2023
X-Inumbo-ID: be8fd577-fa15-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684920683;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=o0CYfAJRR1CdRG3v5ySaiIacTYnqN7+GTZRSVh8jMko=;
  b=ZlTl03E73+drb3hrIhYR+pnZKNnEpxeREkA0eCx+0g6A4++pNiDe2ic/
   1nwJpqEJc+/7jIARg3+VOkgQFHO0iqKWa9/DkT4gjbRZSae5gsBBR54nB
   6fdzS9oajBrfG3kIMTiDqAUduBT0h+8AyHhCsZektWh8gMiY1kQczpmR+
   Q=;
X-IronPort-RemoteIP: 104.47.59.174
X-IronPort-MID: 110598937
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:xaBAyK15MwBau8GBQvbD5f5wkn2cJEfYwER7XKvMYLTBsI5bp2QPn
 GEbDG6POv2IYmbxLo90O9y28UhTu5KGzdQ2GVNkpC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8teTb8HuDgNyo4GlD5gFkOagS1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfO08T+
 s4Ib2A3XFOR3/nr6uqASeVpr5F2RCXrFNt3VnBI6xj8VKxjbbWdBqLA6JlfwSs6gd1IEbDGf
 c0FZDFzbRPGJRpSJlMQD5F4l+Ct7pX9W2QA9BTJ+uxqsi6Kk1AZPLvFabI5fvSjQ8lPk1nej
 WXB52njWTkRNcCFyCrD+XWp7gPKtXqiAN9DROzjqZaGhnXCw1BJLDg7R2K8uP68pUi3R+tUJ
 Usbr39GQa8asRbDosPGdw21pjuIswARX/JUEvYm80edx6zM+QGbC2MYCDlbZ7QOtsU7WDgr3
 V+hhM7yCHpkt7j9YW2Z3qeZq3W1Iyd9EIMZTSoNTA9A79y9pog210jLVow6T/LzicDpEzbtx
 TzMtDI5m7gYkc8M0eO84EzDhDWv4JPOS2bZ+znqY45s1SshDKbNWmBiwQGzASpoRGpBcmS8g
 Q==
IronPort-HdrOrdr: A9a23:B1wpQajHhVovICtopNC8zpfC1HBQX8d23DAbv31ZSRFFG/FwyP
 rCoB1L73XJYWgqM03I+eruBEBPewKkyXcH2/h3AV7EZniahILIFvAZ0WKG+VHd8kLFh41gPM
 tbAs1D4ZjLfCNHZKXBkXeF+rQboOVvmZrA7Ym+854ud3ATV0gJ1XYHNu/xKDwTeOApP+teKH
 PR3Lskm9L2Ek5nEvhTS0N1F9Qq4Lbw5eDbSC9DIyRixBiFjDuu5rK/Ox+E3i0GWzcK7aY+/X
 PDmwnZ4Lzml/2g0BfT20La8pwTwbLau5d+Lf3JrvJQBiTniw6uaogkc7qevAotqOXqxEc2nM
 LKqxIAOd02z3/KZGm6rTbkxgGl+jcz7H3Jz0OenBLY0IHEbQN/L/AEqZNScxPf5UZllNZg0J
 hT12bck5ZMFxvPkAn0+tCNDnhR5wCJiEtntdRWo21UUIMYZrMUhYsD/HlNGJNFOC7h8ogoHM
 RnEcmZzvdLdlGxaWzfowBUsZeRd0V2Oi3DblkJu8ST3TQTtHdlz3EAzMhapXsE/IJVcegy28
 30doBT0J1eRM4faqxwQM0bR9GsN2DLSRXQdEqPPFXODsg8SjLwgq+yxI9wyPCheZQOwpd3so
 /GSklkuWk7fF+rIdGS3adM7gvGTAyGLHXQI/llltpEU4DHNf/W2XXpciFrryLgmYRQPiTjYY
 fxBHoMaMWTalcHGu5yrnnDstdpWD8jufYuy6UGsmK107P2w7LRx5zmmdboVczQ+GUfKyrCK0
 pGegTPD+N9yW3uckPEoXHqKgbQkwrEjN1NLJQ=
X-Talos-CUID: =?us-ascii?q?9a23=3ALOFcUGg5bWl38qkCjyUpTQV1RDJuSFOAw0XAZB+?=
 =?us-ascii?q?COyV7EKbIYwer+69hup87?=
X-Talos-MUID: 9a23:psJYwgt9bgzJdJPOKc2n1TRebuNNzraXD1kKlLoAstW9PiJ3NGLI
X-IronPort-AV: E=Sophos;i="6.00,188,1681185600"; 
   d="scan'208";a="110598937"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <27e599a5-c457-cf62-a008-ea871c823312@citrix.com>
Date: Wed, 24 May 2023 10:31:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [XEN PATCH 03/15] build, x86: clean build log for boot/ targets
Content-Language: en-GB
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-4-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-4-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 23/05/2023 5:37 pm, Anthony PERARD wrote:
> diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
> index 03d8ce3a9e..2693b938bd 100644
> --- a/xen/arch/x86/boot/Makefile
> +++ b/xen/arch/x86/boot/Makefile
> @@ -26,10 +28,16 @@ $(head-bin-objs): XEN_CFLAGS := $(CFLAGS_x86_32) -fpic
>  LDFLAGS_DIRECT-$(call ld-option,--warn-rwx-segments) := --no-warn-rwx-segments
>  LDFLAGS_DIRECT += $(LDFLAGS_DIRECT-y)
>  
> -%.bin: %.lnk
> -	$(OBJCOPY) -j .text -O binary $< $@
> +%.bin: OBJCOPYFLAGS := -j .text -O binary
> +%.bin: %.lnk FORCE
> +	$(call if_changed,objcopy)
>  
> -%.lnk: %.o $(src)/build32.lds
> -	$(LD) $(subst x86_64,i386,$(LDFLAGS_DIRECT)) -N -T $(filter %.lds,$^) -o $@ $<
> +quiet_cmd_ld_lnk_o = LD      $@

I'd suggest that this be LD32 because it is different to most other LDs
in the log.

However, this is a definite improvement.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

Happy to fix up on commit.
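For anyone less familiar with the idiom the hunk above adopts: the
`quiet_cmd_*`/`cmd_*` pair consumed by `$(call if_changed,...)` is the
Kbuild-style logging pattern. A simplified, standalone sketch (hypothetical
Makefile, not Xen's actual Rules.mk; the real `if_changed` also re-runs the
command when the command line saved in a `.*.cmd` file changes) might look
like:

```make
# Sketch of the quiet/verbose command pattern.  In a clean build log each
# target produces one short line, e.g. "OBJCOPY cmdline.bin"; passing V=1
# makes make echo the full command instead.
OBJCOPY ?= objcopy

quiet_cmd_objcopy = OBJCOPY $@
      cmd_objcopy = $(OBJCOPY) $(OBJCOPYFLAGS) $< $@

# Simplified if_changed: print the short description unless V=1, then run
# the full command.  (The real macro additionally compares against the
# previously saved command line so flag changes trigger a rebuild.)
if_changed = $(if $(filter 1,$(V)),,@echo '  $(quiet_cmd_$(1))'; )$(cmd_$(1))

%.bin: OBJCOPYFLAGS := -j .text -O binary
%.bin: %.lnk FORCE
	$(call if_changed,objcopy)

FORCE: ;
.PHONY: FORCE
```

Under this scheme the LD32 suggestion above amounts to nothing more than
changing the description string on the `quiet_cmd_` line, which is why it is
trivial to fix up on commit.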


From xen-devel-bounces@lists.xenproject.org Wed May 24 09:35:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 09:35:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538929.839316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kti-0008Er-UD; Wed, 24 May 2023 09:35:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538929.839316; Wed, 24 May 2023 09:35:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1kti-0008Ek-Pv; Wed, 24 May 2023 09:35:10 +0000
Received: by outflank-mailman (input) for mailman id 538929;
 Wed, 24 May 2023 09:35:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1kth-0008Ee-Ev
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 09:35:09 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2042.outbound.protection.outlook.com [40.107.7.42])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4447c46b-fa16-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 11:35:06 +0200 (CEST)
Received: from AM5PR0402CA0009.eurprd04.prod.outlook.com
 (2603:10a6:203:90::19) by AS8PR08MB9242.eurprd08.prod.outlook.com
 (2603:10a6:20b:5a1::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 09:34:38 +0000
Received: from AM7EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:90:cafe::73) by AM5PR0402CA0009.outlook.office365.com
 (2603:10a6:203:90::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29 via Frontend
 Transport; Wed, 24 May 2023 09:34:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT063.mail.protection.outlook.com (100.127.140.221) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 09:34:37 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Wed, 24 May 2023 09:34:37 +0000
Received: from a58fd8093b79.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B8DE8E2A-B3E3-4AB1-9987-71492F0F336D.1; 
 Wed, 24 May 2023 09:34:26 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a58fd8093b79.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 09:34:26 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB9935.eurprd08.prod.outlook.com (2603:10a6:10:401::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 09:34:24 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 09:34:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4447c46b-fa16-11ed-8611-37d641c3527e
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 983b534f4a6f0110
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH 12/15] build: avoid Config.mk's CFLAGS
Thread-Topic: [XEN PATCH 12/15] build: avoid Config.mk's CFLAGS
Thread-Index: AQHZjZZXWyxFq0oXWE+6zQEJ2UT2c69pKp4A
Date: Wed, 24 May 2023 09:34:23 +0000
Message-ID: <6F4AA7BA-C515-4DE0-92CF-D8234C8B852D@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-13-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-13-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <B76962B08763F34294433149BD29DAE2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9935
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;

> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> The variable $(CFLAGS) is too often set in the environment,
> especially when building a package for a distribution. Often, those
> CFLAGS are intended to be use to build user spaces binaries, not a

NIT: s/use/used/

But I'm not a native speaker so I might be wrong on this

> kernel. This mean packager needs to takes extra steps to build Xen by
> overriding the CFLAGS provided by the package build environment.
> 
> With this patch, we avoid using the variable $(CFLAGS). Also, the
> hypervisor's build system have complete control over which CFLAGS are
> used.
> 
> No change intended to XEN_CFLAGS used, beside some flags which may be
> in a different order on the command line.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> 
> Notes:
>    There's still $(EXTRA_CFLAGS_XEN_CORE) which allows to add more CFLAGS
>    if someone building Xen needs to add more CFLAGS to the hypervisor
>    build.
> 

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>


From xen-devel-bounces@lists.xenproject.org Wed May 24 09:44:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 09:44:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538933.839326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1l2d-0001IG-NL; Wed, 24 May 2023 09:44:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538933.839326; Wed, 24 May 2023 09:44:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1l2d-0001I9-Ki; Wed, 24 May 2023 09:44:23 +0000
Received: by outflank-mailman (input) for mailman id 538933;
 Wed, 24 May 2023 09:44:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1l2d-0001I3-65
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 09:44:23 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2058.outbound.protection.outlook.com [40.107.13.58])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8e724e71-fa17-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 11:44:20 +0200 (CEST)
Received: from DUZPR01CA0005.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:3c3::9) by DU0PR08MB8255.eurprd08.prod.outlook.com
 (2603:10a6:10:411::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 09:43:51 +0000
Received: from DBAEUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:3c3:cafe::7) by DUZPR01CA0005.outlook.office365.com
 (2603:10a6:10:3c3::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 09:43:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT061.mail.protection.outlook.com (100.127.143.28) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 09:43:50 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Wed, 24 May 2023 09:43:50 +0000
Received: from 5d54565177c3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6345C028-20ED-4D7D-BE35-E984096BFB8C.1; 
 Wed, 24 May 2023 09:43:44 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5d54565177c3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 09:43:44 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS2PR08MB9102.eurprd08.prod.outlook.com (2603:10a6:20b:5fc::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 09:43:42 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 09:43:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e724e71-fa17-11ed-8611-37d641c3527e
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 87fc1ccb1f27b1a2
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 13/15] build: fix compile.h compiler version command
 line
Thread-Topic: [XEN PATCH 13/15] build: fix compile.h compiler version command
 line
Thread-Index: AQHZjZUfS+Hth9ncdEKi1WC1jlJ0EK9pLTuA
Date: Wed, 24 May 2023 09:43:42 +0000
Message-ID: <35D40E55-2D93-431A-8B16-FCFEBBDA25B1@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-14-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-14-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <76F57324C2EC3E44A4D66AA7FEDB7AA4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0



> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> CFLAGS is just from Config.mk, instead use the flags used to build
> Xen.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> 
> Notes:
>    I don't know if CFLAGS is even useful there, just --version without the
>    flags might produce the same result.
> 
> xen/build.mk | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/build.mk b/xen/build.mk
> index e2a78aa806..d468bb6e26 100644
> --- a/xen/build.mk
> +++ b/xen/build.mk
> @@ -23,7 +23,7 @@ define cmd_compile.h
>    -e 's/@@whoami@@/$(XEN_WHOAMI)/g' \
>    -e 's/@@domain@@/$(XEN_DOMAIN)/g' \
>    -e 's/@@hostname@@/$(XEN_BUILD_HOST)/g' \
> -    -e 's!@@compiler@@!$(shell $(CC) $(CFLAGS) --version 2>&1 | head -1)!g' \
> +    -e 's!@@compiler@@!$(shell $(CC) $(XEN_CFLAGS) --version 2>&1 | head -1)!g' \
>    -e 's/@@version@@/$(XEN_VERSION)/g' \
>    -e 's/@@subversion@@/$(XEN_SUBVERSION)/g' \
>    -e 's/@@extraversion@@/$(XEN_EXTRAVERSION)/g' \
> -- 
> Anthony PERARD
> 
> 

Yes I think Andrew is right, so I guess $(XEN_CFLAGS) can be dropped?

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>

I've tested this patch with and without the $(XEN_CFLAGS), so if you drop it you can
retain my r-by if you want.


From xen-devel-bounces@lists.xenproject.org Wed May 24 09:47:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 09:47:50 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v7 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZjUpTyDGNoUFbjUSAMdNsQrEV069pLt6A
Date: Wed, 24 May 2023 09:47:27 +0000
Message-ID: <04720B92-03CF-464A-8FE7-F90B527F3E7B@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-6-luca.fancellu@arm.com>
In-Reply-To: <20230523074326.3035745-6-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <39AE7D40E2818F409CAEDB99BCA99574@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi Luca,

> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Save/restore context switch for SVE, allocate memory to contain
> the Z0-31 registers whose length is maximum 2048 bits each and
> FFR who can be maximum 256 bits, the allocated memory depends on
> how many bits is the vector length for the domain and how many bits
> are supported by the platform.
>
> Save P0-15 whose length is maximum 256 bits each, in this case the
> memory used is from the fpregs field in struct vfp_state,
> because V0-31 are part of Z0-31 and this space would have been
> unused for SVE domain otherwise.
>
> Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
> creation given the requested vector length and restore it on
> context switch, save/restore ZCR_EL1 value as well.
>
> List import macros from Linux in README.LinuxPrimitives.
>=20
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Just ...

> ---
> Changes from v6:
> - Add comment for explain why sve_save/sve_load are different from
>   Linux, add macros in xen/arch/arm/README.LinuxPrimitives (Julien)
> - Add comments in sve_context_init and sve_context_free, handle the
>   case where sve_zreg_ctx_end is NULL, move setting of v->arch.zcr_el2
>   in sve_context_init (Julien)
> - remove stubs for sve_context_* and sve_save_* and rely on compiler
>   DCE (Jan)
> - Add comments for sve_save_ctx/sve_load_ctx (Julien)
> Changes from v5:
> - use XFREE instead of xfree, keep the headers (Julien)
> - Avoid math computation for every save/restore, store the computation
>   in struct vfp_state once (Bertrand)
> - protect access to v->domain->arch.sve_vl inside arch_vcpu_create now
>   that sve_vl is available only on arm64
> Changes from v4:
> - No changes
> Changes from v3:
> - don't use fixed len types when not needed (Jan)
> - now VL is an encoded value, decode it before using.
> Changes from v2:
> - No changes
> Changes from v1:
> - No changes
> Changes from RFC:
> - Moved zcr_el2 field introduction in this patch, restore its
>   content inside sve_restore_state function. (Julien)
>
> fix patch 5
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Change-Id: Ief65b2ff14fd579afa4fd110ce08a19980e64fa9

You have a Signed-off-by and a Change-Id that should not be here.
They are in the comment section, so they should be removed during push and might be ok :-)

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Wed May 24 09:55:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 09:55:49 +0000
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v7 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZjUpcuuLAh7YurESbASUYh0J8dK9pLuqAgAACLoA=
Date: Wed, 24 May 2023 09:55:26 +0000
Message-ID: <32B75BC9-2687-4D34-89D2-CD014BCA4FB5@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-6-luca.fancellu@arm.com>
 <04720B92-03CF-464A-8FE7-F90B527F3E7B@arm.com>
In-Reply-To: <04720B92-03CF-464A-8FE7-F90B527F3E7B@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <AF2092347A77614FAE5736EEE35CD0C5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0


> On 24 May 2023, at 10:47, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> 
> Hi Luca,
> 
>> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>> 
>> Save/restore context switch for SVE, allocate memory to contain
>> the Z0-31 registers whose length is maximum 2048 bits each and
>> FFR who can be maximum 256 bits, the allocated memory depends on
>> how many bits is the vector length for the domain and how many bits
>> are supported by the platform.
>> 
>> Save P0-15 whose length is maximum 256 bits each, in this case the
>> memory used is from the fpregs field in struct vfp_state,
>> because V0-31 are part of Z0-31 and this space would have been
>> unused for SVE domain otherwise.
>> 
>> Create zcr_el{1,2} fields in arch_vcpu, initialise zcr_el2 on vcpu
>> creation given the requested vector length and restore it on
>> context switch, save/restore ZCR_EL1 value as well.
>> 
>> List import macros from Linux in README.LinuxPrimitives.
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Just ...
> 
>> ---
>> Changes from v6:
>> - Add comment for explain why sve_save/sve_load are different from
>>  Linux, add macros in xen/arch/arm/README.LinuxPrimitives (Julien)
>> - Add comments in sve_context_init and sve_context_free, handle the
>>  case where sve_zreg_ctx_end is NULL, move setting of v->arch.zcr_el2
>>  in sve_context_init (Julien)
>> - remove stubs for sve_context_* and sve_save_* and rely on compiler
>>  DCE (Jan)
>> - Add comments for sve_save_ctx/sve_load_ctx (Julien)
>> Changes from v5:
>> - use XFREE instead of xfree, keep the headers (Julien)
>> - Avoid math computation for every save/restore, store the computation
>>  in struct vfp_state once (Bertrand)
>> - protect access to v->domain->arch.sve_vl inside arch_vcpu_create now
>>  that sve_vl is available only on arm64
>> Changes from v4:
>> - No changes
>> Changes from v3:
>> - don't use fixed len types when not needed (Jan)
>> - now VL is an encoded value, decode it before using.
>> Changes from v2:
>> - No changes
>> Changes from v1:
>> - No changes
>> Changes from RFC:
>> - Moved zcr_el2 field introduction in this patch, restore its
>>  content inside sve_restore_state function. (Julien)
>> 
>> fix patch 5
>> 
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> Change-Id: Ief65b2ff14fd579afa4fd110ce08a19980e64fa9
> 
> You have a signed off and a change-id that should not be here.
> They are in the comment section so should be removed during push so might be ok :-)

Ohh yeah I missed that, probably it's from a squash!

> 
> Cheers
> Bertrand
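The commit message in this thread sizes the per-vcpu SVE save area from the domain's vector length (VL). A rough back-of-the-envelope sketch of that arithmetic (a hypothetical helper, not the series' actual allocation code — the real patch carves P0-15 and FFR out of the existing fpregs space):

```c
#include <stddef.h>

/*
 * Hypothetical illustration of how an SVE context scales with the
 * vector length (VL), per the Arm SVE architecture: 32 Z registers of
 * VL bits each, 16 P predicate registers of VL/8 bits each, plus the
 * FFR, which has the same layout as a predicate register.  This is NOT
 * Xen's actual code; it only shows why the size depends on the VL.
 */
static size_t sve_ctx_bytes(unsigned int vl_bits)
{
    size_t z_bytes   = 32 * (vl_bits / 8);  /* Z0-Z31, VL bits each      */
    size_t p_bytes   = 16 * (vl_bits / 64); /* P0-P15, VL/8 bits each    */
    size_t ffr_bytes = vl_bits / 64;        /* FFR, one predicate's size */

    return z_bytes + p_bytes + ffr_bytes;
}

/*
 * Minimum VL (128 bits):  32*16  + 16*2  + 2  =  546 bytes.
 * Maximum VL (2048 bits): 32*256 + 16*32 + 32 = 8736 bytes.
 */
```

Because the size depends only on the domain's VL, it can be computed once at vcpu creation rather than on every context switch.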


From xen-devel-bounces@lists.xenproject.org Wed May 24 09:58:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 09:58:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538947.839356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1lGM-0003zk-Dv; Wed, 24 May 2023 09:58:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538947.839356; Wed, 24 May 2023 09:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1lGM-0003zd-Az; Wed, 24 May 2023 09:58:34 +0000
Received: by outflank-mailman (input) for mailman id 538947;
 Wed, 24 May 2023 09:58:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q1lGK-0003zX-KU
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 09:58:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q1lGK-0001EJ-3e; Wed, 24 May 2023 09:58:32 +0000
Received: from [15.248.2.62] (helo=[10.24.67.26])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q1lGJ-00007d-SI; Wed, 24 May 2023 09:58:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=br+UEi3hwR7LZWdj/8hnhWarxthBUE4mQwt+5ho3tOE=; b=avD3RVTVF6XBE61FOgv1NcD+xl
	usJyhsljV/7BcGpJeEWGE/WHkeBl9gATqMhOr4dLiKXKbjK6VxIy9jHw57UrflI+cEkXBNdEGsXZk
	LCLmYAYzcZigpSpTXCbA8cuC3hqKvfj5yVFdXB8P+brDNI++UOsUSghF2modjZ8uTnyE=;
Message-ID: <dae8d4f9-7a1e-3940-1f25-0b1a2cb123bf@xen.org>
Date: Wed, 24 May 2023 10:58:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v7 01/12] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-2-luca.fancellu@arm.com>
 <98D7529A-FF7E-4079-B4FB-7EA096CB6822@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <98D7529A-FF7E-4079-B4FB-7EA096CB6822@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 24/05/2023 10:01, Bertrand Marquis wrote:
>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>> index c4ec38bb2554..83b84368f6d5 100644
>> --- a/xen/arch/arm/cpufeature.c
>> +++ b/xen/arch/arm/cpufeature.c
>> @@ -9,6 +9,7 @@
>> #include <xen/init.h>
>> #include <xen/smp.h>
>> #include <xen/stop_machine.h>
>> +#include <asm/arm64/sve.h>
>> #include <asm/cpufeature.h>
>>
>> DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>> @@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
>>
>>      c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
>>
>> +    if ( cpu_has_sve )
>> +        c->zcr64.bits[0] = compute_max_zcr();
>> +
>>      c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
>>
>>      c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
>> @@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
>>      guest_cpuinfo.pfr64.mpam = 0;
>>      guest_cpuinfo.pfr64.mpam_frac = 0;
>>
>> -    /* Hide SVE as Xen does not support it */
>> +    /* Hide SVE by default to the guests */
> 
> Everything is for guests and, as Jan mentioned in another comment,
> this could be wrongly interpreted.

(Not directly related to this patch, so no changes expected here)

Hmmm... The name of the function/variable is confusing as well given 
that the cpuinfo should also apply to dom0. Should we s/guest/domain/?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 24 10:06:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 10:06:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538953.839366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1lNa-0005bs-95; Wed, 24 May 2023 10:06:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538953.839366; Wed, 24 May 2023 10:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1lNa-0005bl-63; Wed, 24 May 2023 10:06:02 +0000
Received: by outflank-mailman (input) for mailman id 538953;
 Wed, 24 May 2023 10:06:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VEVP=BN=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1q1lNZ-0005bf-7k
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 10:06:01 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0610.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 93bedf07-fa1a-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 12:05:57 +0200 (CEST)
Received: from AS9PR06CA0002.eurprd06.prod.outlook.com (2603:10a6:20b:462::7)
 by AS8PR08MB10150.eurprd08.prod.outlook.com (2603:10a6:20b:62a::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 10:05:51 +0000
Received: from AM7EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:462:cafe::fc) by AS9PR06CA0002.outlook.office365.com
 (2603:10a6:20b:462::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 10:05:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT056.mail.protection.outlook.com (100.127.140.107) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.14 via Frontend Transport; Wed, 24 May 2023 10:05:50 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Wed, 24 May 2023 10:05:50 +0000
Received: from f872fec28455.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 747FC9E9-FEF8-4C06-841F-9D3378800A34.1; 
 Wed, 24 May 2023 10:05:44 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f872fec28455.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 10:05:44 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AS2PR08MB8477.eurprd08.prod.outlook.com (2603:10a6:20b:55b::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 10:05:40 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482%4]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 10:05:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93bedf07-fa1a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=al/3yr0pngWD9gXRxZRp0PKFoHd5lNjj1aeMXVh/jog=;
 b=L3gDF0TCv1itvommmVmZ+shUZg3uVjmFTHfpcMP1ZwRBpFIuTSXwo7ziplQuPbk1kEqGv1g5pYfqeFtgxF7xmmvE2KbpA3MuKXVeLxWVQTzGI/YuoKw+N36hADWOHprqhZ4gtB+2H0xS4/odFCgMSHUJQ1o02+cxZJ3mJFFtLsc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3af1e2ff4d36cd07
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eB2YkQfeV240/tN9yHWCohTkkfs3HJAxCbJOBPA1owopX4rElYwDlIIZ+ag5QIzBFuu1J81/p77hwBWflHz+eNQ+q1cw89HuqTXkIIoi4kAysaR5WDKsZ4NWbXVBNdNbORefh7MfGfIqXab9LHoQeYnoE3ZZW9x0CM98BZQx3oMLM8v6Kz7Bkv0GiXp1Go1LUtd3UkLgvzm9oqO24DktdIQrBzl/Jyu6fkhdiRcUGKfHugFw++kz4JkxA9ROqDCxTcaNwm+BpQZU2Co6/X79b7VeR0v4PFin1mGEQO010sri6Z5Eu17USRApDXSHGlLtxvxZFcT14tIIOV6BCoKIxg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=al/3yr0pngWD9gXRxZRp0PKFoHd5lNjj1aeMXVh/jog=;
 b=OkuBT8XScJMv9nQccwk/aC7ZDpQzZGHi/TDczn2/H8CP3YZfwZ68Q32mJDHy8sYylHch5XnVwHDr0slJL+LWWIMnX4T7UBqFwuUeqdhkFpN/rxMPbEaOkGV7bWLvL9wPug38lNgIjlvqIrUUP6856iq9yA5c7ue37Dv1koCR01YB4YBhTmlyYHI6J8y8jeVpOLAPBuBn5VnN5AtRW7ZDTVADR5ki+vbdJMGTZv2HdkPqM2IzC84lC5A+vAmX9k0Z280RGkQyQ/Edk1C15rVhwkEHoZ489kBm0Jk0lFINcQ/8Kh28nqMZk9VVB5FmPnEmOQhnbF8OavNZ2Wrf5xiZ4A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=al/3yr0pngWD9gXRxZRp0PKFoHd5lNjj1aeMXVh/jog=;
 b=L3gDF0TCv1itvommmVmZ+shUZg3uVjmFTHfpcMP1ZwRBpFIuTSXwo7ziplQuPbk1kEqGv1g5pYfqeFtgxF7xmmvE2KbpA3MuKXVeLxWVQTzGI/YuoKw+N36hADWOHprqhZ4gtB+2H0xS4/odFCgMSHUJQ1o02+cxZJ3mJFFtLsc=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Thread-Topic: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Thread-Index: AQHZjUpZciMsvgYXykmY5lL2NeB+Hq9pM/SA
Date: Wed, 24 May 2023 10:05:40 +0000
Message-ID: <04D83C51-E734-465D-BC2D-4F0535C91B77@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-8-luca.fancellu@arm.com>
In-Reply-To: <20230523074326.3035745-8-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AS2PR08MB8477:EE_|AM7EUR03FT056:EE_|AS8PR08MB10150:EE_
X-MS-Office365-Filtering-Correlation-Id: 236e4b64-1ac8-40bb-2c37-08db5c3e73fe
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <094B75803BC03E488BA52F7CCFB8B207@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8477
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	387631a9-8ddd-493c-6f4c-08db5c3e6dfd
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 10:05:50.5728
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 236e4b64-1ac8-40bb-2c37-08db5c3e73fe
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB10150

Hi Luca,

> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> Add a command line parameter to allow Dom0 the use of SVE resources,
> the command line parameter sve=<integer>, sub argument of dom0=,
> controls the feature on this domain and sets the maximum SVE vector
> length for Dom0.
> 
> Add a new function, parse_signed_integer(), to parse an integer
> command line argument.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

with ...

> ---
> Changes from v6:
> - Fixed case for e==NULL in parse_signed_integer, drop parenthesis
>   from if conditions, delete inline sve_domctl_vl_param and rely on
>   DCE from the compiler (Jan)
> - Drop parenthesis from opt_dom0_sve (Julien)
> - Do not continue if 'sve' is in command line args but
>   CONFIG_ARM64_SVE is not selected:
>   https://lore.kernel.org/all/7614AE25-F59D-430A-9C3E-30B1CE0E1580@arm.com/
> Changes from v5:
> - stop the domain if VL error occurs (Julien, Bertrand)
> - update the documentation
> - Rename sve_sanitize_vl_param to sve_domctl_vl_param to
>   mark the fact that we are sanitizing a parameter coming from
>   the user before encoding it into sve_vl in domctl structure.
>   (suggestion from Bertrand in a separate discussion)
> - update comment in parse_signed_integer, return boolean in
>   sve_domctl_vl_param (Jan).
> Changes from v4:
> - Negative values as user param means max supported HW VL (Jan)
> - update documentation, make use of no_config_param(), rename
>   parse_integer into parse_signed_integer and take long long *,
>   also put a comment on the -2 return condition, update
>   declaration comment to reflect the modifications (Jan)
> Changes from v3:
> - Don't use fixed len types when not needed (Jan)
> - renamed domainconfig_encode_vl to sve_encode_vl
> - Use a sub argument of dom0= to enable the feature (Jan)
> - Add parse_integer() function
> Changes from v2:
> - xen_domctl_createdomain field has changed into sve_vl and its
>   value now is the VL / 128, create an helper function for that.
> Changes from v1:
> - No changes
> Changes from RFC:
> - Changed docs to explain that the domain won't be created if the
>   requested vector length is above the supported one from the
>   platform.
> ---
> docs/misc/xen-command-line.pandoc    | 20 ++++++++++++++++++--
> xen/arch/arm/arm64/sve.c             | 20 ++++++++++++++++++++
> xen/arch/arm/domain_build.c          | 26 ++++++++++++++++++++++++++
> xen/arch/arm/include/asm/arm64/sve.h | 10 ++++++++++
> xen/common/kernel.c                  | 28 ++++++++++++++++++++++++++++
> xen/include/xen/lib.h                | 10 ++++++++++
> 6 files changed, 112 insertions(+), 2 deletions(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index e0b89b7d3319..47e5b4eb6199 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -777,9 +777,9 @@ Specify the bit width of the DMA heap.
> 
> ### dom0
>     = List of [ pv | pvh, shadow=<bool>, verbose=<bool>,
> -                cpuid-faulting=<bool>, msr-relaxed=<bool> ]
> +                cpuid-faulting=<bool>, msr-relaxed=<bool> ] (x86)
> 
> -    Applicability: x86
> +    = List of [ sve=<integer> ] (Arm)
> 
> Controls for how dom0 is constructed on x86 systems.
> 
> @@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
> 
>     If using this option is necessary to fix an issue, please report a bug.
> 
> +Enables features on dom0 on Arm systems.
> +
> +*   The `sve` integer parameter enables Arm SVE usage for Dom0 domain and sets
> +    the maximum SVE vector length, the option is applicable only to AArch64
> +    guests.

Here I would just remove "guests"; just AArch64 is enough.
I am ok if you choose to use "AArch64 Dom0 kernels".

> +    A value equal to 0 disables the feature, this is the default value.
> +    Values below 0 means the feature uses the maximum SVE vector length
> +    supported by hardware, if SVE is supported.
> +    Values above 0 explicitly set the maximum SVE vector length for Dom0,
> +    allowed values are from 128 to maximum 2048, being multiple of 128.
> +    Please note that when the user explicitly specifies the value, if that value
> +    is above the hardware supported maximum SVE vector length, the domain
> +    creation will fail and the system will stop, the same will occur if the
> +    option is provided with a non zero value, but the platform doesn't support
> +    SVE.
> +

I agree on the discussion with Jan here so you can keep my R-b if modified as discussed.


Cheers
Bertrand

> ### dom0-cpuid
>     = List of comma separated booleans
> 
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index 84a6dedc1fd7..feaca2cf647d 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -13,6 +13,9 @@
> #include <asm/processor.h>
> #include <asm/system.h>
> 
> +/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
> +int __initdata opt_dom0_sve;
> +
> extern unsigned int sve_get_hw_vl(void);
> 
> /*
> @@ -152,6 +155,23 @@ void sve_restore_state(struct vcpu *v)
>     sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
> }
> 
> +bool __init sve_domctl_vl_param(int val, unsigned int *out)
> +{
> +    /*
> +     * Negative SVE parameter value means to use the maximum supported
> +     * vector length, otherwise if a positive value is provided, check if the
> +     * vector length is a multiple of 128
> +     */
> +    if ( val < 0 )
> +        *out = get_sys_vl_len();
> +    else if ( (val % SVE_VL_MULTIPLE_VAL) == 0 )
> +        *out = val;
> +    else
> +        return false;
> +
> +    return true;
> +}
> +
> /*
>  * Local variables:
>  * mode: C
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f373a5024783..9202a96d9c28 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -62,6 +62,22 @@ custom_param("dom0_mem", parse_dom0_mem);
> 
> int __init parse_arch_dom0_param(const char *s, const char *e)
> {
> +    long long val;
> +
> +    if ( !parse_signed_integer("sve", s, e, &val) )
> +    {
> +#ifdef CONFIG_ARM64_SVE
> +        if ( (val >= INT_MIN) && (val <= INT_MAX) )
> +            opt_dom0_sve = val;
> +        else
> +            printk(XENLOG_INFO "'sve=%lld' value out of range!\n", val);
> +
> +        return 0;
> +#else
> +        panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
> +#endif
> +    }
> +
>     return -EINVAL;
> }
> 
> @@ -4113,6 +4129,16 @@ void __init create_dom0(void)
>     if ( iommu_enabled )
>         dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
> 
> +    if ( opt_dom0_sve )
> +    {
> +        unsigned int vl;
> +
> +        if ( sve_domctl_vl_param(opt_dom0_sve, &vl) )
> +            dom0_cfg.arch.sve_vl = sve_encode_vl(vl);
> +        else
> +            panic("SVE vector length error\n");
> +    }
> +
>     dom0 = domain_create(0, &dom0_cfg, CDF_privileged | CDF_directmap);
>     if ( IS_ERR(dom0) )
>         panic("Error creating domain 0 (rc = %ld)\n", PTR_ERR(dom0));
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index 65b46685d263..a71d6a295dcc 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -21,14 +21,22 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
>     return sve_vl * SVE_VL_MULTIPLE_VAL;
> }
> 
> +static inline unsigned int sve_encode_vl(unsigned int sve_vl_bits)
> +{
> +    return sve_vl_bits / SVE_VL_MULTIPLE_VAL;
> +}
> +
> register_t compute_max_zcr(void);
> int sve_context_init(struct vcpu *v);
> void sve_context_free(struct vcpu *v);
> void sve_save_state(struct vcpu *v);
> void sve_restore_state(struct vcpu *v);
> +bool sve_domctl_vl_param(int val, unsigned int *out);
> 
> #ifdef CONFIG_ARM64_SVE
> 
> +extern int opt_dom0_sve;
> +
> static inline bool is_sve_domain(const struct domain *d)
> {
>     return d->arch.sve_vl > 0;
> @@ -38,6 +46,8 @@ unsigned int get_sys_vl_len(void);
> 
> #else /* !CONFIG_ARM64_SVE */
> 
> +#define opt_dom0_sve     0
> +
> static inline bool is_sve_domain(const struct domain *d)
> {
>     return false;
> diff --git a/xen/common/kernel.c b/xen/common/kernel.c
> index f7b1f65f373c..7cd00a4c999a 100644
> --- a/xen/common/kernel.c
> +++ b/xen/common/kernel.c
> @@ -314,6 +314,34 @@ int parse_boolean(const char *name, const char *s, const char *e)
>     return -1;
> }
> 
> +int __init parse_signed_integer(const char *name, const char *s, const char *e,
> +                                long long *val)
> +{
> +    size_t slen, nlen;
> +    const char *str;
> +    long long pval;
> +
> +    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
> +    nlen = strlen(name);
> +
> +    if ( !e )
> +        e = s + slen;
> +
> +    /* Check that this is the name we're looking for and a value was provided */
> +    if ( slen <= nlen || strncmp(s, name, nlen) || s[nlen] != '=' )
> +        return -1;
> +
> +    pval = simple_strtoll(&s[nlen + 1], &str, 10);
> +
> +    /* Number not recognised */
> +    if ( str != e )
> +        return -2;
> +
> +    *val = pval;
> +
> +    return 0;
> +}
> +
> int cmdline_strcmp(const char *frag, const char *name)
> {
>     for ( ; ; frag++, name++ )
> diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
> index e914ccade095..5343ee7a944a 100644
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -94,6 +94,16 @@ int parse_bool(const char *s, const char *e);
>  */
> int parse_boolean(const char *name, const char *s, const char *e);
> 
> +/**
> + * Given a specific name, parses a string of the form:
> + *   $NAME=<integer number>
> + * returning 0 and a value in val, for a recognised integer.
> + * Returns -1 for name not found, general errors, or -2 if name is found but
> + * not recognised number.
> + */
> +int parse_signed_integer(const char *name, const char *s, const char *e,
> +                         long long *val);
> +
> /**
>  * Very similar to strcmp(), but will declare a match if the NUL in 'name'
>  * lines up with comma, colon, semicolon or equals in 'frag'.  Designed for
> -- 
> 2.34.1
> 
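The return-value contract of the parse_signed_integer() helper quoted in this patch (0 = parsed, -1 = name mismatch, -2 = name matched but the value is not a number) can be exercised with a simplified standalone sketch. It substitutes the standard strtoll() for Xen's simple_strtoll() and assumes a NUL-terminated fragment, so the `e` end pointer is derived rather than passed in:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Simplified sketch of the parse_signed_integer() contract discussed in
 * the patch (assumptions: plain C library strtoll instead of Xen's
 * simple_strtoll, and a NUL-terminated input instead of an (s, e) pair).
 */
static int parse_signed_integer_sketch(const char *name, const char *s,
                                       long long *val)
{
    size_t slen = strlen(s), nlen = strlen(name);
    const char *e = s + slen;
    char *str;
    long long pval;

    /* The fragment must be "name=" followed by the value. */
    if (slen <= nlen || strncmp(s, name, nlen) || s[nlen] != '=')
        return -1;

    pval = strtoll(&s[nlen + 1], &str, 10);

    /* The number must consume the whole remainder of the fragment. */
    if (str != e)
        return -2;

    *val = pval;
    return 0;
}
```

So for a `dom0=sve=<integer>` style option, "sve=-1" parses to -1, "sve=abc" is rejected with -2, and an unrelated "foo=1" falls through with -1 so other sub-options can try it.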


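The vector-length plumbing in the patch above has two halves: sve_domctl_vl_param() sanitises the user's `sve=` value (negative means "hardware maximum", positive must be a multiple of 128), and sve_encode_vl()/sve_decode_vl() convert between a bit count and the VL/128 unit carried in the domctl structure. A compact standalone mirror of that logic (the 512-bit hardware maximum is an assumed example, not a value from the series):

```c
#include <stdbool.h>

#define SVE_VL_MULTIPLE_VAL 128  /* VL granularity, as in the quoted header */
#define EXAMPLE_HW_MAX_VL   512  /* assumed platform maximum, in bits */

/* The domctl interface carries VL / 128 rather than the raw bit count. */
static unsigned int sve_encode_vl(unsigned int sve_vl_bits)
{
    return sve_vl_bits / SVE_VL_MULTIPLE_VAL;
}

static unsigned int sve_decode_vl(unsigned int sve_vl)
{
    return sve_vl * SVE_VL_MULTIPLE_VAL;
}

/* Mirror of sve_domctl_vl_param(): false means the value is rejected. */
static bool sve_vl_param_sketch(int val, unsigned int *out)
{
    if (val < 0)
        *out = EXAMPLE_HW_MAX_VL;            /* "use the hardware maximum" */
    else if ((val % SVE_VL_MULTIPLE_VAL) == 0)
        *out = val;
    else
        return false;                        /* not a multiple of 128 */

    return true;
}
```

In the quoted create_dom0() hunk, the real counterpart's result is stored via sve_encode_vl() on success, and the system panics on rejection.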

From xen-devel-bounces@lists.xenproject.org Wed May 24 10:08:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 10:08:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538957.839375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1lPs-0006B5-MB; Wed, 24 May 2023 10:08:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538957.839375; Wed, 24 May 2023 10:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1lPs-0006Aw-J2; Wed, 24 May 2023 10:08:24 +0000
Received: by outflank-mailman (input) for mailman id 538957;
 Wed, 24 May 2023 10:08:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1lPr-0006Am-8I
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 10:08:23 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2050.outbound.protection.outlook.com [40.107.13.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e9fe9a4e-fa1a-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 12:08:22 +0200 (CEST)
Received: from AM8P191CA0008.EURP191.PROD.OUTLOOK.COM (2603:10a6:20b:21a::13)
 by DB4PR08MB8008.eurprd08.prod.outlook.com (2603:10a6:10:38c::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 10:07:45 +0000
Received: from AM7EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:21a:cafe::24) by AM8P191CA0008.outlook.office365.com
 (2603:10a6:20b:21a::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 10:07:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT050.mail.protection.outlook.com (100.127.141.27) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 10:07:45 +0000
Received: ("Tessian outbound 5bb4c51d5a1f:v136");
 Wed, 24 May 2023 10:07:44 +0000
Received: from d80fb12ad030.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CBB10996-AF54-4F07-9518-598B300C6E2F.1; 
 Wed, 24 May 2023 10:07:38 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d80fb12ad030.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 10:07:38 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB6632.eurprd08.prod.outlook.com (2603:10a6:20b:31c::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 10:07:37 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 10:07:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9fe9a4e-fa1a-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H7Ah2BWtayd7RF8HftxaIIV14nkf4CEjMjb21qwZRNA=;
 b=fLvSTblOe5BydaWMIgHuf1J2u7BuB+qNKZZoAkQSdRW6mTe4dwoQK3Ual48TXMyDnApjylbYpMr2QLWwtVOiFM0KrAgMYAhKwXw17Th87pW0gt9+bFfs5CUYSVp+az7AnCmzx5Tdt8L3Iiwj5jdLqH1eP5l6kaWZ/NGe1j5b88Q=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: dfc32a3c81162e68
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oYbGYIjaIsq4dNfqP0fjoDTyFeI0swZROS2MFXTCAHSpC+Lz3G/GPRCMA+CTcgsYm8Ajey5R+cvEGmz8h8oKBTJT0lHr8h4PTaC3H3pbL+Zlt2tHoUZFS2jl0kfgNaPNqQtmP0JRdql0p+qZ8fl+77ElhYmDHCfRnZUrzDeEp1TutyiYbAT7PJF9AdXPy9EjbwavflHjegZ/5JqvYoe8Ym+SPR+dz+Ma1J11ImB6Ktn1cxHQ1vWlPKAxpiGW1d2g3jzQR8rXQA+dd2xyOrldUtyYozdC0mfdMVlFQxhtkzyBAp6HRfA/4oq/4D1alQ5qUjWosrIXastYzkmXQGkjAA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=H7Ah2BWtayd7RF8HftxaIIV14nkf4CEjMjb21qwZRNA=;
 b=Gwutk3Zh6G2fQPNDQCibSnwXyqAY4B/EhSk1am6WCObyULlc7DKxluWvVCA6/SlsCsZ1ZI1oWVCxcTM20ZlJ26cP4ZZ3kTsmJctHdhgiHIdD4MXC4ofYwRIi7KfKAe2yTIjVpi2yPaEuj3L+9qvZjDcQkssLJxWSO0aKvLtAttkt73hEc8Vfcc0AIpxDs1T3pfvtnPQCZC//tgbGPDE/PvxW4YaN3tbE1NGDMlcHkyQNgsSp/n5Qn7MVfBZhD/1I/NuALc+ERKBbtgQbJhMtWbZNcpLZtwspzlLAL2bM+3+6adL1RdDfhX+ELzRM+RnyvNOHTZWvBYu8jzCyGEbS2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H7Ah2BWtayd7RF8HftxaIIV14nkf4CEjMjb21qwZRNA=;
 b=fLvSTblOe5BydaWMIgHuf1J2u7BuB+qNKZZoAkQSdRW6mTe4dwoQK3Ual48TXMyDnApjylbYpMr2QLWwtVOiFM0KrAgMYAhKwXw17Th87pW0gt9+bFfs5CUYSVp+az7AnCmzx5Tdt8L3Iiwj5jdLqH1eP5l6kaWZ/NGe1j5b88Q=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 14/15] Config.mk: move $(cc-option, ) to
 config/compiler-testing.mk
Thread-Topic: [XEN PATCH 14/15] Config.mk: move $(cc-option, ) to
 config/compiler-testing.mk
Thread-Index: AQHZjZUjhpAhcMjsuE+nmzbJuPpI6q9pM+oA
Date: Wed, 24 May 2023 10:07:36 +0000
Message-ID: <38763156-45E5-4A88-ADA4-F8E68E595A3D@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-15-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-15-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB6632:EE_|AM7EUR03FT050:EE_|DB4PR08MB8008:EE_
X-MS-Office365-Filtering-Correlation-Id: c94c5bb4-d30f-48b1-3ffe-08db5c3eb82d
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <265DEE2B0FAC9C4CB6C4E4590DBD1F21@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6632
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6ae12219-80ae-40f5-f0d9-08db5c3eb35e
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 10:07:45.0133
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c94c5bb4-d30f-48b1-3ffe-08db5c3eb82d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB8008



> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> In xen/, it isn't necessary to include Config.mk in every Makefile in
> subdirectories, as nearly all necessary variables should be calculated
> in xen/Makefile. But some Makefiles make use of the macro $(cc-option,),
> which is only available in Config.mk.
> 
> Extract $(cc-option,) from Config.mk so we can use it without
> including Config.mk, and thus without having to recalculate some CFLAGS
> which would be ignored.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>





From xen-devel-bounces@lists.xenproject.org Wed May 24 10:28:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 10:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538963.839386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1lib-0000Kv-Al; Wed, 24 May 2023 10:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538963.839386; Wed, 24 May 2023 10:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1lib-0000Ko-7e; Wed, 24 May 2023 10:27:45 +0000
Received: by outflank-mailman (input) for mailman id 538963;
 Wed, 24 May 2023 10:27:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T5KZ=BN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q1liZ-0000Ki-Vm
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 10:27:44 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on2060a.outbound.protection.outlook.com
 [2a01:111:f400:fe12::60a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9c1bcb1f-fa1d-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 12:27:40 +0200 (CEST)
Received: from AS9PR06CA0010.eurprd06.prod.outlook.com (2603:10a6:20b:462::33)
 by AS2PR08MB8335.eurprd08.prod.outlook.com (2603:10a6:20b:557::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 10:27:38 +0000
Received: from VI1EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:462:cafe::b2) by AS9PR06CA0010.outlook.office365.com
 (2603:10a6:20b:462::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29 via Frontend
 Transport; Wed, 24 May 2023 10:27:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VI1EUR03FT064.mail.protection.outlook.com (100.127.144.94) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 10:27:37 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Wed, 24 May 2023 10:27:36 +0000
Received: from 01cbd565a69e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5190F13D-6ECC-4609-ACF2-2B9F18D2E7FA.1; 
 Wed, 24 May 2023 10:27:30 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 01cbd565a69e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 10:27:30 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DB9PR08MB9755.eurprd08.prod.outlook.com (2603:10a6:10:460::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 10:27:28 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 10:27:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c1bcb1f-fa1d-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SQDou18rh7izUFLGpVEumTsRr1hHQnvABTTGb046RD4=;
 b=qM+0Hhx5fGomAEI1ZMw5LcRPzfm9iSD9YNuSVV4/Bn3ZmyXd8hEzNQMMVr8yV3z2aAbAUwU+OgtMiErK53U6HIoQgCxm7Yw8KvZoEA8+SzT8becKMQKwGf22oBd7NEsoi3XG4B35IjatdfEuL67NLL7IOltmv/SRae4KZ5WgmvE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9389547f330a9ad5
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eT79aPJI2UNDCc9Sl7Mp/Y/liEzGkb60x7nDOR6qavZ+nYL+Hfr0Z8QTqyVY9T6I27U24roEybJZfQ8LJtVkOpGmy2z1ZiRa9Y/WOlsMDrYLjGvBAh5Ta6F9sjE5Gmu1irCfP5D6Q7VRlX5LfH4vpNAfR8yjJI3hQCbo8HX/xb1pmYE12rBig0WHNDSXTG7n6EhZmsN+u/YNtMJOSt0Lgt4oidxINrfQTFYZqMRhf58Nk+b1yDSd271oUUQHNlaGSazlWDhLcAVwWiJ3IZrgKnZlLzHZMYIM7SuvrYi4etmKPujKUyLPF6/oPXe/f+URae4cxk0aNPIHMSAkIRhMRg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=SQDou18rh7izUFLGpVEumTsRr1hHQnvABTTGb046RD4=;
 b=UvrCIycxDnqJtAfa2WXg/cLnXein9kqqbYuHSJNNqEzZsewfQHO0yFZ0sSDeTu0BFPpAEbyB8y504WCq5rcUMaBGqXZBkW6232TamAJtbxP/WZUDjyaHhxWeUhGsFry0dlLJ7L99aQEhW8XCEUcbeRsDtRL408AV3LtnsAFVhckg4+cTBCKmOkbHYo0HNH3/Jk1LIv21Y6kPgp009Ej3eCtR4UVXCtVuamPDum2vQo58d+5OtxGh1yMeStBJAMVkLEAaazybKPtrCpVGnZwye5LRjFeBk49znoF2uXr/424WsiJA5CXEpMbzKxMGRjCm+g4kt3iyIHghFEYbEFoBog==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SQDou18rh7izUFLGpVEumTsRr1hHQnvABTTGb046RD4=;
 b=qM+0Hhx5fGomAEI1ZMw5LcRPzfm9iSD9YNuSVV4/Bn3ZmyXd8hEzNQMMVr8yV3z2aAbAUwU+OgtMiErK53U6HIoQgCxm7Yw8KvZoEA8+SzT8becKMQKwGf22oBd7NEsoi3XG4B35IjatdfEuL67NLL7IOltmv/SRae4KZ5WgmvE=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Anthony PERARD <anthony.perard@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 15/15] build: remove Config.mk include from Rules.mk
Thread-Topic: [XEN PATCH 15/15] build: remove Config.mk include from Rules.mk
Thread-Index: AQHZjZZcR9VZERXJ80eRAbM26/D8DK9pOXQA
Date: Wed, 24 May 2023 10:27:28 +0000
Message-ID: <A53E5B44-6898-41FC-BB0B-3B30DAA831BB@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-16-anthony.perard@citrix.com>
In-Reply-To: <20230523163811.30792-16-anthony.perard@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DB9PR08MB9755:EE_|VI1EUR03FT064:EE_|AS2PR08MB8335:EE_
X-MS-Office365-Filtering-Correlation-Id: beab92b0-40ce-4a7c-75c6-08db5c417ee9
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <F6603D1D432641499F418BEA893F9D92@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9755
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VI1EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f87960a1-b7cf-4d46-3e38-08db5c4179a1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 10:27:37.3822
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: beab92b0-40ce-4a7c-75c6-08db5c417ee9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VI1EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8335



> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
> 
> Everything needed to build the hypervisor should already be configured
> by "xen/Makefile", thus Config.mk shouldn't be needed.
> 
> Besides, Config.mk keeps on testing support of some CFLAGS with CC; the
> result of this testing is not used at this stage, so the build is
> slowed unnecessarily.
> 
> Likewise, GCC is checked to be at minimum 4.2 when entering every
> sub-directory, so the check has run countless times at this stage.
> 
> We only need to export a few more configuration variables, add some
> variables in Kbuild.include, and add macro fallbacks for Make older
> than 3.81. (Adding `or` just in case; it's only used in xen/Makefile,
> which includes Config.mk and so already has the fallback.)
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> ---

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
Tested-by: Luca Fancellu <luca.fancellu@arm.com>





From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 11:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538968.839406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcj-0006th-O2; Wed, 24 May 2023 11:25:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538968.839406; Wed, 24 May 2023 11:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcj-0006ta-L6; Wed, 24 May 2023 11:25:45 +0000
Received: by outflank-mailman (input) for mailman id 538968;
 Wed, 24 May 2023 11:25:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1mci-0006nQ-8i
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 11:25:44 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b729bd84-fa25-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 13:25:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b729bd84-fa25-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684927542;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=A+NR5il01dJbxGEp0G0M8wxuLUYrizJXy1p52xy/KWk=;
  b=JxN82FwO6dKy9lb2+ngbZy6VHUiBdD9QdygBJ1z3pI/fFHM3+knEMRPY
   JW/xM2Ms1JvwDCWrJHUGcZ7S9zqBw6aCH7TKIyB51FznQoJ8IptSI7B/g
   RhLsdMyTkzAJkGGOA0owWEILNalBDZmJFD4vGEXQ621dEhHaQHNN0Gg+4
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110226275
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-Talos-CUID: 9a23:MW32G2AlI/WX9Yj6ExA6yglIM/IeS3vy8VHOP02FO38qTKLAHA==
X-Talos-MUID: 9a23:RwS9AgtXTfBb2hWyZ82nmx1gFv1z8a6XK1ktrqUsoe6nEhVpEmLI
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="110226275"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 02/10] x86/boot: Adjust MSR_ARCH_CAPS handling for the Host policy
Date: Wed, 24 May 2023 12:25:18 +0100
Message-ID: <20230524112526.3475200-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

We are about to move MSR_ARCH_CAPS into the featureset, but the order of
operations (copy the raw policy, then copy x86_capabilities[] in) will end up
clobbering the ARCH_CAPS value.

Some toolstacks use this information to handle TSX compatibility across the
CPUs and microcode versions where support was removed.

To avoid this transient breakage, read from raw_cpu_policy rather than
modifying it in place.  This logic will be removed entirely in due course.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 5e7e19fbcda8..49f5465ec445 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -411,7 +411,7 @@ static void __init calculate_host_policy(void)
     p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
 
     /* Temporary, until we have known_features[] for feature bits in MSRs. */
-    p->arch_caps.raw &=
+    p->arch_caps.raw = raw_cpu_policy.arch_caps.raw &
         (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
          ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
          ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO |
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 11:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538970.839426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcl-0007O8-G6; Wed, 24 May 2023 11:25:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538970.839426; Wed, 24 May 2023 11:25:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcl-0007Ny-Bq; Wed, 24 May 2023 11:25:47 +0000
Received: by outflank-mailman (input) for mailman id 538970;
 Wed, 24 May 2023 11:25:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1mck-0006nQ-3u
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 11:25:46 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b9607717-fa25-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 13:25:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9607717-fa25-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684927544;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=GLjlEWjBPEDl+W06sqlUB6iIlh53o/yxuzce6ss5CB0=;
  b=Qi+5BXGND01m6HTjC3X8D6gegEGs66jAiIWtUfSqLNU5xepQ2M7q1/oC
   +6Zf0/5C67BqpEkHzLIsv3/4F+WUZgkjwEq1pAltj5fLMwwFmwBMWILBI
   1rOW3pv3H5NkUr7B7XTzMKxS/xNqT9KfqavyzaNNxqHy/nmtOM4vvPbuM
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110226277
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:TvcuLaDjCBcjSBVW/yHjw5YqxClBgxIJ4kV8jS/XYbTApDMg0mAGz
 mAfWW6HPfqKMDejctogad/n8xsEvJ7dydQyQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbCRMs8pvlDs15K6p4G5C4gRlDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIww7tJRj1nx
 8chLh8wSA6fhd+E2fG/Vbw57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTILs4kP2lmT/UdDpApUjOjaE2/3LS3Ep6172F3N/9I4XSH58LxBnHz
 o7A10TjOANBG8Gt8gCq2COBrOaMwgb8XKtHQdVU8dY12QbOlwT/EiY+Sl+TsfS/zEmkVLp3O
 0ESvyYjs6U23EiqVcXmGQ21pmaeuRwRUMYWFPc1gCmPwKfJ5weSBkAfUyVMLtchsacLqScCj
 wHT2YmzXHo27ePTECjGnluJkd+sERFIEyheTB0/dA0q3v/9vIMOvAuMSsk2RcZZkebJMT33x
 jmLqg03iLMSkdMH2s2HwLzXv96/jsOXF1Bov207Skrgt1okP9D9O+RE/HCBtZ59wJClok5tV
 ZTus+yX96gwAJ6Ej0Rhq81dTejyt55p3NAx6GOD/qXNFRz3oxZPnqgKulmSwXuF1e5aEQIFm
 GeJ5WtsCGZ7ZRNGl5NfbYOrENgNxqP9D9njXf28RoMQMsQhKlHXoHowNBT4M4XRfK8EyPtXB
 HtmWZz0USZy5VpPl1JauNvxIZd0n3tjlAs/tLjwzgi90Kr2WUN5vYwtaQPUBshgtfPsnekg2
 4oHXyd840kFAbKWj+i+2dJ7EG3m2lBkWMmp8pULJ7Hrz8gPMDhJNsI9CIgJI+RN95m5XM+Rl
 p1hcie0EGbCuEA=
IronPort-HdrOrdr: A9a23:e5orrazYnQeFjQXbdo0XKrPw6L1zdoMgy1knxilNoHxuH/Bw9v
 re+cjzsCWftN9/Yh4dcLy7VpVoIkmsl6Kdg7NwAV7KZmCP1FdARLsI0WKI+UyCJ8SRzI9gPa
 cLSdkFNDXzZ2IK8PoTNmODYqodKNrsytHWuQ/HpU0dKT2D88tbnn9E4gDwKDwQeCB2QaAXOb
 C7/cR9qz+paR0sH7+G7ilsZZmkmzXT/qiWGCI7Ow==
X-Talos-CUID: 9a23:K7xl32M4NjuJLO5DYXRZsxI9KPscdFLg6mnMIXKKECVtV+jA
X-Talos-MUID: 9a23:QRZiswSiWR5vjkV6RXTMngtwb+h10Z+UJ2pcvL8P+JahMihZbmI=
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="110226277"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 05/10] x86/boot: Record MSR_ARCH_CAPS for the Raw and Host CPU policy
Date: Wed, 24 May 2023 12:25:21 +0100
Message-ID: <20230524112526.3475200-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Extend x86_cpu_policy_fill_native() with a read of ARCH_CAPS based on the
CPUID information just read, removing the special handling in
calculate_raw_cpu_policy().

Right now, the only use of x86_cpu_policy_fill_native() outside of Xen is the
unit tests.  Getting MSR data in this context is left to whoever first
encounters a genuine need to have it.

Extend generic_identify() to read ARCH_CAPS into x86_capability[], which is
fed into the Host Policy.  This in turn means there's no need to special case
arch_caps in calculate_host_policy().

No practical change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Extend commit message to discuss the absence of MSRs outside of __XEN__
---
 xen/arch/x86/cpu-policy.c | 12 ------------
 xen/arch/x86/cpu/common.c |  5 +++++
 xen/lib/x86/cpuid.c       |  7 ++++++-
 3 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 49f5465ec445..dfd9abd8564c 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -354,9 +354,6 @@ void calculate_raw_cpu_policy(void)
 
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* Was already added by probe_cpuid_faulting() */
-
-    if ( cpu_has_arch_caps )
-        rdmsrl(MSR_ARCH_CAPABILITIES, p->arch_caps.raw);
 }
 
 static void __init calculate_host_policy(void)
@@ -409,15 +406,6 @@ static void __init calculate_host_policy(void)
     /* 0x000000ce  MSR_INTEL_PLATFORM_INFO */
     /* probe_cpuid_faulting() sanity checks presence of MISC_FEATURES_ENABLES */
     p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
-
-    /* Temporary, until we have known_features[] for feature bits in MSRs. */
-    p->arch_caps.raw = raw_cpu_policy.arch_caps.raw &
-        (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
-         ARCH_CAPS_SKIP_L1DFL | ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO |
-         ARCH_CAPS_IF_PSCHANGE_MC_NO | ARCH_CAPS_TSX_CTRL | ARCH_CAPS_TAA_NO |
-         ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO | ARCH_CAPS_PSDP_NO |
-         ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA | ARCH_CAPS_BHI_NO |
-         ARCH_CAPS_PBRSB_NO);
 }
 
 static void __init guest_common_default_feature_adjustments(uint32_t *fs)
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 9bbb385db42d..f1084bb1ed36 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -477,6 +477,11 @@ static void generic_identify(struct cpuinfo_x86 *c)
 		cpuid_count(0xd, 1,
 			    &c->x86_capability[FEATURESET_Da1],
 			    &tmp, &tmp, &tmp);
+
+	if (test_bit(X86_FEATURE_ARCH_CAPS, c->x86_capability))
+		rdmsr(MSR_ARCH_CAPABILITIES,
+		      c->x86_capability[FEATURESET_10Al],
+		      c->x86_capability[FEATURESET_10Ah]);
 }
 
 /*
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index e795ce375032..07e550191448 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -226,7 +226,12 @@ void x86_cpu_policy_fill_native(struct cpu_policy *p)
     p->hv_limit = 0;
     p->hv2_limit = 0;
 
-    /* TODO MSRs */
+#ifdef __XEN__
+    /* TODO MSR_PLATFORM_INFO */
+
+    if ( p->feat.arch_caps )
+        rdmsrl(MSR_ARCH_CAPABILITIES, p->arch_caps.raw);
+#endif
 
     x86_cpu_policy_recalc_synth(p);
 }
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 11:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538971.839436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcm-0007e9-O1; Wed, 24 May 2023 11:25:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538971.839436; Wed, 24 May 2023 11:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcm-0007e2-Jy; Wed, 24 May 2023 11:25:48 +0000
Received: by outflank-mailman (input) for mailman id 538971;
 Wed, 24 May 2023 11:25:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1mck-0006nQ-Tz
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 11:25:46 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b8f85bba-fa25-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 13:25:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8f85bba-fa25-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684927545;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=TG0c7JzNnQ5g9G9kFuiZt2yKe8SA/MEe6cjKV8RkyQQ=;
  b=MTkuYHTqVi0rfto+69ICBoQ4JYbkWZwPGvIWmXE4xMu4RThIZS7B2nHM
   9pCcUntrX8vDE8Ls0Xgy+CBe3G57TlXujFPgXLJqyOcuXtOPUBlR5okuN
   ++oV3BE/CnWTMSgRI4lBbrVvRIsNO9jQmWLphUIsKl2DhASDrdtTckWZG
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 112680539
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:cj4uTKBf80eBzRVW/yHjw5YqxClBgxIJ4kV8jS/XYbTApD4i3zQPn
 WEZDDqBM/aJZzCkfNF3aYvn8x4H75LcytJrQQY4rX1jcSlH+JHPbTi7wuUcHAvJd5GeExg3h
 yk6QoOdRCzhZiaE/n9BCpC48T8nk/nOHuGmYAL9EngZbRd+Tys8gg5Ulec8g4p56fC0GArIs
 t7pyyHlEAbNNwVcbCRMs8pvlDs15K6p4G5C4gRlDRx2lAS2e0c9Xcp3yZ6ZdxMUcqEMdsamS
 uDKyq2O/2+x13/B3fv8z94X2mVTKlLjFVDmZkh+AsBOsTAbzsAG6Y4pNeJ0VKtio27hc+ada
 jl6ncfYpQ8BZsUgkQmGOvVSO3kW0aZuoNcrLZUj2CA6IoKvn3bEmp1T4E8K0YIwxtR1K3EXp
 N4jDHMCMTHdv/KX0ZKeVbw57igjBJGD0II3v3hhyXfSDOo8QICFSKLPjTNa9G5u3IYUR6+YP
 pdHL2M1N3wsYDUWUrsTILs4kP2lmT/UdDpApUjOjaE2/3LS3Ep6172F3N/9I4XQG5UNwx3Jz
 o7A1zylDCgTP4Wn9SiI+1umvPbAnyGnH7tHQdVU8dY12QbOlwT/EiY+Sl+TsfS/zEmkVLp3O
 0ESvyYjs6U23EiqVcXmGQ21pmaeuRwRUMYWFPc1gCmv4KfJ5weSBkAfUyVMLtchsacLqScCj
 wHT2YmzXHo27ePTECjGnluJkd+sERQnL0USPH4mdi9G7IbC/rkjrwPva8k2RcZZkebJMT33x
 jmLqg03iLMSkdMH2s2HwLzXv96/jsOXF1Bov207Skrgt1okP9D9O+RE/HCBtZ59wJClok5tV
 ZTus+yX96gwAJ6Ej0Rhq81dTejyt55p3NAx6GOD/qXNFRz3oxZPnqgKulmSwXuF1e5aEQIFm
 GeJ5WtsCGZ7ZRNGl5NfbYOrENgNxqP9D9njXf28RoMQMsQhKlHXoHowNBT4M4XRfK8EyPtXB
 HtmWZz0USZy5VpPl1JauNvxIZd0n3tjlAs/tLjwzgi90Kr2WUN5vYwtaQPUBshgtfPsnekg2
 4oHXyd840kFAbKWj+i+2dJ7EG3m2lBkWMmp8pULJ7Hrz8gPMDhJNsI9CIgJI+RN95m5XM+Sl
 p1hcie0EGbCuEA=
IronPort-HdrOrdr: A9a23:cqGYMKwj+LQVUJwksWgEKrPw6L1zdoMgy1knxilNoHxuH/Bw9v
 re+cjzsCWftN9/Yh4dcLy7VpVoIkmsl6Kdg7NwAV7KZmCP1FdARLsI0WKI+UyCJ8SRzI9gPa
 cLSdkFNDXzZ2IK8PoTNmODYqodKNrsytHWuQ/HpU0dKT2D88tbnn9E4gDwKDwQeCB2QaAXOb
 C7/cR9qz+paR0sH7+G7ilsZZmkmzXT/qiWGCI7Ow==
X-Talos-CUID: 9a23:uRAEsW1s53yQ2HNtnQNqd7xfJvInWG2EnVXrBwy3Vkc3cK20WH+15/Yx
X-Talos-MUID: =?us-ascii?q?9a23=3AN5gHCgx2iwWVAeml2I5+6kLS3y6aqPzyBnkEmsw?=
 =?us-ascii?q?+h9uJPAV6AS6SjjbrBbZyfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="112680539"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 00/10] x86: Introduce MSR_ARCH_CAPS into featuresets
Date: Wed, 24 May 2023 12:25:16 +0100
Message-ID: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Also combined with "x86: Feature check cleanup" for simplicity.

See individual patches for v2 deltas.

Andrew Cooper (10):
  x86/boot: Rework dom0 feature configuration
  x86/boot: Adjust MSR_ARCH_CAPS handling for the Host policy
  x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
  x86/cpu-policy: MSR_ARCH_CAPS feature names
  x86/boot: Record MSR_ARCH_CAPS for the Raw and Host CPU policy
  x86/boot: Expose MSR_ARCH_CAPS data in guest max policies
  x86/cpufeature: Rework {boot_,}cpu_has()
  x86/vtx: Remove opencoded MSR_ARCH_CAPS check
  x86/tsx: Remove opencoded MSR_ARCH_CAPS check
  x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check

 tools/misc/xen-cpuid.c                      | 57 +++++++++-----
 tools/tests/cpu-policy/test-cpu-policy.c    |  5 --
 xen/arch/x86/cpu-policy.c                   | 83 ++++++++++-----------
 xen/arch/x86/cpu/common.c                   |  5 ++
 xen/arch/x86/hvm/vmx/vmx.c                  |  8 +-
 xen/arch/x86/include/asm/cpufeature.h       | 23 +++++-
 xen/arch/x86/include/asm/processor.h        |  2 +-
 xen/arch/x86/spec_ctrl.c                    | 56 +++++++-------
 xen/arch/x86/tsx.c                          | 13 ++--
 xen/include/public/arch-x86/cpufeatureset.h | 29 ++++++-
 xen/include/xen/lib/x86/cpu-policy.h        | 50 ++++++-------
 xen/lib/x86/cpuid.c                         | 11 ++-
 xen/tools/gen-cpuid.py                      |  3 +
 13 files changed, 208 insertions(+), 137 deletions(-)

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 11:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538969.839412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mck-000704-7W; Wed, 24 May 2023 11:25:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538969.839412; Wed, 24 May 2023 11:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mck-0006yY-1f; Wed, 24 May 2023 11:25:46 +0000
Received: by outflank-mailman (input) for mailman id 538969;
 Wed, 24 May 2023 11:25:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1mci-0006nQ-U2
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 11:25:44 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b8ac8520-fa25-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 13:25:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8ac8520-fa25-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684927543;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=GzTaYahCvQbgziVS0gqwaK4qufFZHl9uL0tpi65cPFw=;
  b=NI4TuNsVkjTErUwcS4vm6g7fuSQqvtWQc2W4gjWa6m5lbCxy3EjkBa0J
   A9jUfcGh6wv/4ptscnWyRCaH6gJEbWBLLZdCikQNzQ2KdMJTaJb9XPsjY
   tWF5G5W30Jmc+qT936fFi7Xh+tpGRSC0Ln2UaKHkfO7SVc140Y4pUhaCb
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110226276
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:BPtf06vvZQDj7fcV+yRiuCfg+ufnVF5eMUV32f8akzHdYApBsoF/q
 tZmKW7QOK7ZMWryfNglb4qw90sFupWGz9c3TQQ/+30xQy8S+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg3HVQ+IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4rKq4Fv0gnRkPaoQ5AKEyyFOZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwEW0zKUyqtdmPxp2GbPdwhPkRNYrvI9ZK0p1g5Wmx4fcORJnCR+PB5MNC3Sd2jcdLdRrcT
 5NHM3w1Nk2GOkARfA5NU/rSn8/x7pX7WxRepEiYuuwc5G/LwRYq+LPsLMDUapqBQsA9ckOw/
 zuepT6nWE5EXDCZ4RCe3nn0oObBoX36RL4CE4ay5MQ133TGkwT/DzVJDADm8JFVkHWWS99Zb
 kAZ5Ccqhawz71CwCMnwWQWip3yJtQJaXMBfe8U44gyQzqvf4y6CG3MJCDVGbbQOq8seVTEsk
 FiTkLvU6SdH6ePPDyjHr/HN8G30YHJORYMfWcMaZTJY3Z6/ibMItxTgc/B9DKyIvMS2HS6ll
 lhmsxMCr7kUiMcK0YCy8lbGny+gq/D1c+Il2unEdjn7t10kPeZJc6TtsAGGtqgYcO51W3Hb5
 BA5d96iAPfi5H1nvAiEW60zEb6g/J5p2xWM0Ac0T/HNG9lAkkNPnLy8AhkkfC+F0e5eI1cFh
 XM/XisPjKK/xFPwMcdKj3uZUqzGN5TIG9X/TezzZdFTeJV3fwLv1HgwNRLAgT69zBNywPtX1
 XKnnSGEVCxyNEia5GDuG7d1PUEDnUjSOl8/tbiklk/6gNJylVaeSKsfMUvmU93VGJis+V2Pm
 /4Gbpvi9vmqeLGmCsUh2dJJfA9iwLlSLcyelvG7gcbcf1M5QTh9WqC5LHFIU9UNopm5X9zgp
 hmVMnK0AnKm7ZEbAW1mskxeVY4=
IronPort-HdrOrdr: A9a23:R0+V5KrTmuAziqFCCO7W7M8aV5rveYIsimQD101hICG9Evb0qy
 nOpoV/6faQslwssR4b9uxoVJPvfZq+z+8W3WByB9eftWDd0QPFEGgL1+DfKlbbak7DH4BmtJ
 uJc8JFeafN5VoRt7eG3OFveexQvOVu88qT9JjjJ28Gd3APV0n5hT0JcjpyFCdNNW57LKt8Lr
 WwzOxdqQGtfHwGB/7LfUXsD4D41rv2fIuNW29+OyIa
X-Talos-CUID: =?us-ascii?q?9a23=3A2GQLlWqKhvDRLz/dE7vP32vmUfl4TDqB5UXrGkm?=
 =?us-ascii?q?XNEB3VL+oe0bT/Lwxxg=3D=3D?=
X-Talos-MUID: 9a23:hyHvygW7wTVQmEXq/BX3qDo6Es5m3/qvFHwdlq4MqfPbagUlbg==
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="110226276"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 01/10] x86/boot: Rework dom0 feature configuration
Date: Wed, 24 May 2023 12:25:17 +0100
Message-ID: <20230524112526.3475200-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Right now, dom0's feature configuration is split between the common path and
a dom0-specific one.  This is mostly by accident, and causes some very subtle
bugs.

First, start by clearly defining init_dom0_cpuid_policy() to apply to the
domain that Xen builds automatically.  The late hwdom case is still
constructed in a mostly normal way, with the control domain having full
discretion over the CPU policy.

Identifying this highlights a latent bug - the two halves of the MSR_ARCH_CAPS
bodge are asymmetric with respect to the hardware domain.  This means that
shim, or a control-only dom0 sees the MSR_ARCH_CAPS CPUID bit but none of the
MSR content.  This in turn declares the hardware to be retpoline-safe by
failing to advertise the {R,}RSBA bits appropriately.  Restrict this logic to
the hardware domain, although the special case will cease to exist shortly.

For the CPUID Faulting adjustment, the comment in ctxt_switch_levelling()
isn't actually relevant.  Provide a better explanation.

Move the recalculate_cpuid_policy() call outside of the dom0-cpuid= case.
This is no change for now, but will become necessary shortly.

Finally, place the second half of the MSR_ARCH_CAPS bodge after the
recalculate_cpuid_policy() call.  This is necessary to avoid transiently
breaking the hardware domain's view while the handling is cleaned up.  This
special case will cease to exist shortly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c | 57 +++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index ef6a2d0d180a..5e7e19fbcda8 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -687,29 +687,6 @@ int init_domain_cpu_policy(struct domain *d)
     if ( !p )
         return -ENOMEM;
 
-    /* See comment in ctxt_switch_levelling() */
-    if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
-        p->platform_info.cpuid_faulting = false;
-
-    /*
-     * Expose the "hardware speculation behaviour" bits of ARCH_CAPS to dom0,
-     * so dom0 can turn off workarounds as appropriate.  Temporary, until the
-     * domain policy logic gains a better understanding of MSRs.
-     */
-    if ( is_hardware_domain(d) && cpu_has_arch_caps )
-    {
-        uint64_t val;
-
-        rdmsrl(MSR_ARCH_CAPABILITIES, val);
-
-        p->arch_caps.raw = val &
-            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
-             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
-             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
-             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
-             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
-    }
-
     d->arch.cpu_policy = p;
 
     recalculate_cpuid_policy(d);
@@ -845,11 +822,15 @@ void recalculate_cpuid_policy(struct domain *d)
         p->extd.raw[0x19] = EMPTY_LEAF;
 }
 
+/*
+ * Adjust the CPU policy for dom0.  Really, this is "the domain Xen builds
+ * automatically on boot", and might not have the domid 0 (e.g. pvshim).
+ */
 void __init init_dom0_cpuid_policy(struct domain *d)
 {
     struct cpu_policy *p = d->arch.cpuid;
 
-    /* dom0 can't migrate.  Give it ITSC if available. */
+    /* Dom0 doesn't migrate relative to Xen.  Give it ITSC if available. */
     if ( cpu_has_itsc )
         p->extd.itsc = true;
 
@@ -858,7 +839,7 @@ void __init init_dom0_cpuid_policy(struct domain *d)
      * so dom0 can turn off workarounds as appropriate.  Temporary, until the
      * domain policy logic gains a better understanding of MSRs.
      */
-    if ( cpu_has_arch_caps )
+    if ( is_hardware_domain(d) && cpu_has_arch_caps )
         p->feat.arch_caps = true;
 
     /* Apply dom0-cpuid= command line settings, if provided. */
@@ -876,8 +857,32 @@ void __init init_dom0_cpuid_policy(struct domain *d)
         }
 
         x86_cpu_featureset_to_policy(fs, p);
+    }
+
+    /*
+     * PV Control domains used to require unfiltered CPUID.  This was fixed in
+     * Xen 4.13, but there is a cmdline knob to restore the prior behaviour.
+     *
+     * If the domain is getting unfiltered CPUID, don't let the guest kernel
+     * play with CPUID faulting either, as Xen's CPUID path won't cope.
+     */
+    if ( !opt_dom0_cpuid_faulting && is_control_domain(d) && is_pv_domain(d) )
+        p->platform_info.cpuid_faulting = false;
 
-        recalculate_cpuid_policy(d);
+    recalculate_cpuid_policy(d);
+
+    if ( is_hardware_domain(d) && cpu_has_arch_caps )
+    {
+        uint64_t val;
+
+        rdmsrl(MSR_ARCH_CAPABILITIES, val);
+
+        p->arch_caps.raw = val &
+            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
+             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
+             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
+             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
+             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
     }
 }
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 11:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538967.839396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mch-0006eL-Hb; Wed, 24 May 2023 11:25:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538967.839396; Wed, 24 May 2023 11:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mch-0006eE-EI; Wed, 24 May 2023 11:25:43 +0000
Received: by outflank-mailman (input) for mailman id 538967;
 Wed, 24 May 2023 11:25:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1mcg-0006dp-BG
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 11:25:42 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b4502455-fa25-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 13:25:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4502455-fa25-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684927538;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=cEqAKJhSYdiDhZzPNXI65MjIM+ej5xzXmsT0GY2hH+4=;
  b=ecRy0NUWX5vR+LD/lWKf91PrAFnKjGEXvfsj7EpjZwcZdJKDR35eQoY3
   DzwN3aRSGQ1JCHh5KVtSz5dlyvqJCc6tS52Syko6yV0kbJO8UTBg1Dola
   Wk0zfxYwe+nPj4hetJ9KTHo0wHvWg5u1viVALBodQpZBZXfxegbE9e5J0
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110611672
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:HTHsMKzAyxe0REPuBV96t+c/xirEfRIJ4+MujC+fZmUNrF6WrkUEx
 mIdD2iDOKmCZzD0fN0jYISy/B9X6sSGzYI1TApu/yAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw/zF8EsHUMja4mtC5QRjP6wT5zcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KV4R+
 vo5cS0gVxKCq9rqh+LqWu1Autt2eaEHPKtH0p1h5TTQDPJgSpHfWaTao9Rf2V/chOgXQ6yYP
 ZBAL2MyMlKZOUYn1lQ/UfrSmM+BgHXlfiIeg1WSvactuEDYzRBr0airO93QEjCPbZwMxhjB/
 zyZpQwVBDkQPvLYyGKFzUv2revwty/RepMSM7uno6sCbFq7mTVIVUx+uUGAiem0jAuyVsxSL
 2QQ+zEytu4i+UqzVN7/Uhak5nmesXY0efBdDuk74wGl0bfP7kCSAW1sZiFFQMwrsokxXzNC6
 7OSt4q3X3o16uTTEC/DsO7O9lteJBT5M0c9OiACbFIYzuDhoa0L0lWfH8ZnPJKq24id9S7L/
 xiGqy03hrM2hMEN1rmm8V2vvw9AtqQlXSZuuFyJAzvNAhdRIdf8Otf2sQSzAeNodt7xc7WXg
 JQTdyFyBsgqBIrFqiGCSf5l8FqBt6fca220bbKC8vAcG9WRF5yLJ9g4DNJWfh0B3iM4ldjBP
 ifuVft5vsM7AZdTRfYfj3iNI8or17P8Mt/uS+rZaNFDCrAoKl/brH8wNRLLgTG3+KTJrU3YE
 c7BGftA8F5AUfg3pNZIb7x1PUAXKtAWmjqIGMGTI+WP2ruCfn+FIYo43K+1Rrlhtsus+VyFm
 +uzwuPWk32zpsWiOHiImWPSRHhWRUUG6Wfe9JEOKbfafls5cIzjYteIqY4cl0Vet/w9vo/1E
 ruVAye0FHKXaaX7FDi3
IronPort-HdrOrdr: A9a23:pkObz6kqQG8KfA9w0wipdxQ6n5DpDfLo3DAbv31ZSRFFG/Fw9/
 rCoB17726QtN91YhsdcL+7V5VoLUmzyXcX2/hyAV7BZmnbUQKTRekP0WKL+Vbd8kbFh41gPM
 lbEpSXCLfLfCJHZcSR2njELz73quP3jJxBho3lvghQpRkBUdAF0+/gYDzranGfQmN9dP0EPa
 vZ3OVrjRy6d08aa8yqb0N1JNQq97Xw5fTbiQdtPW9f1DWz
X-Talos-CUID: =?us-ascii?q?9a23=3Azi/5dGmKKTxOD+MkxR/u9dONFnTXOUKCw3HxelK?=
 =?us-ascii?q?VMGRoZIXJDg/B6KdIr9U7zg=3D=3D?=
X-Talos-MUID: 9a23:1/JwWArI4B1l889VhuIezztoGoBpu6SRMRomlaost+OmJCxXPSjI2Q==
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="110611672"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 03/10] x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
Date: Wed, 24 May 2023 12:25:19 +0100
Message-ID: <20230524112526.3475200-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Bits through 24 are already defined, meaning that we're not far off needing
the second word.  Put both in right away.

As both halves are present now, the arch_caps field is full width.  Adjust the
unit test, which notices.

The bool bitfield names in the arch_caps union are unused, and somewhat out of
date.  They'll shortly be automatically generated.

Add CPUID and MSR prefixes to the ./xen-cpuid verbose output, now that there
are a mix of the two.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Adjust the unit test.
 * Use an m prefix on the short name.
 * Add CPUID/MSR to the verbose name.
---
 tools/misc/xen-cpuid.c                      | 44 +++++++++++-------
 tools/tests/cpu-policy/test-cpu-policy.c    |  5 ---
 xen/include/public/arch-x86/cpufeatureset.h |  4 ++
 xen/include/xen/lib/x86/cpu-policy.h        | 50 ++++++++++-----------
 xen/lib/x86/cpuid.c                         |  4 ++
 5 files changed, 59 insertions(+), 48 deletions(-)

diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index 8ec143ebc854..15ad2d33e2a1 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -226,31 +226,41 @@ static const char *const str_7d2[32] =
     [ 4] = "bhi-ctrl",      [ 5] = "mcdt-no",
 };
 
+static const char *const str_10Al[32] =
+{
+};
+
+static const char *const str_10Ah[32] =
+{
+};
+
 static const struct {
     const char *name;
     const char *abbr;
     const char *const *strs;
 } decodes[] =
 {
-    { "0x00000001.edx",   "1d",  str_1d },
-    { "0x00000001.ecx",   "1c",  str_1c },
-    { "0x80000001.edx",   "e1d", str_e1d },
-    { "0x80000001.ecx",   "e1c", str_e1c },
-    { "0x0000000d:1.eax", "Da1", str_Da1 },
-    { "0x00000007:0.ebx", "7b0", str_7b0 },
-    { "0x00000007:0.ecx", "7c0", str_7c0 },
-    { "0x80000007.edx",   "e7d", str_e7d },
-    { "0x80000008.ebx",   "e8b", str_e8b },
-    { "0x00000007:0.edx", "7d0", str_7d0 },
-    { "0x00000007:1.eax", "7a1", str_7a1 },
-    { "0x80000021.eax",  "e21a", str_e21a },
-    { "0x00000007:1.ebx", "7b1", str_7b1 },
-    { "0x00000007:2.edx", "7d2", str_7d2 },
-    { "0x00000007:1.ecx", "7c1", str_7c1 },
-    { "0x00000007:1.edx", "7d1", str_7d1 },
+    { "CPUID 0x00000001.edx",        "1d", str_1d },
+    { "CPUID 0x00000001.ecx",        "1c", str_1c },
+    { "CPUID 0x80000001.edx",       "e1d", str_e1d },
+    { "CPUID 0x80000001.ecx",       "e1c", str_e1c },
+    { "CPUID 0x0000000d:1.eax",     "Da1", str_Da1 },
+    { "CPUID 0x00000007:0.ebx",     "7b0", str_7b0 },
+    { "CPUID 0x00000007:0.ecx",     "7c0", str_7c0 },
+    { "CPUID 0x80000007.edx",       "e7d", str_e7d },
+    { "CPUID 0x80000008.ebx",       "e8b", str_e8b },
+    { "CPUID 0x00000007:0.edx",     "7d0", str_7d0 },
+    { "CPUID 0x00000007:1.eax",     "7a1", str_7a1 },
+    { "CPUID 0x80000021.eax",      "e21a", str_e21a },
+    { "CPUID 0x00000007:1.ebx",     "7b1", str_7b1 },
+    { "CPUID 0x00000007:2.edx",     "7d2", str_7d2 },
+    { "CPUID 0x00000007:1.ecx",     "7c1", str_7c1 },
+    { "CPUID 0x00000007:1.edx",     "7d1", str_7d1 },
+    { "MSR   0x0000010a.lo",      "m10Al", str_10Al },
+    { "MSR   0x0000010a.hi",      "m10Ah", str_10Ah },
 };
 
-#define COL_ALIGN "18"
+#define COL_ALIGN "24"
 
 static const char *const fs_names[] = {
     [XEN_SYSCTL_cpu_featureset_raw]     = "Raw",
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index f1d968adfc39..301df2c00285 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -391,11 +391,6 @@ static void test_msr_deserialise_failure(void)
             .msr = { .idx = 0xce, .val = ~0ull },
             .rc = -EOVERFLOW,
         },
-        {
-            .name = "truncated val",
-            .msr = { .idx = 0x10a, .val = ~0ull },
-            .rc = -EOVERFLOW,
-        },
     };
 
     printf("Testing MSR deserialise failure:\n");
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 8de73aebc3e0..032cec3ccba2 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -307,6 +307,10 @@ XEN_CPUFEATURE(AVX_VNNI_INT8,      15*32+ 4) /*A  AVX-VNNI-INT8 Instructions */
 XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
 XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
 
+/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */
+
+/* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.edx, word 17 */
+
 #endif /* XEN_CPUFEATURE */
 
 /* Clean up from a default include.  Close the enum (for C). */
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index bfa425060464..6d5e9edd269b 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -4,22 +4,24 @@
 
 #include <xen/lib/x86/cpuid-autogen.h>
 
-#define FEATURESET_1d     0 /* 0x00000001.edx      */
-#define FEATURESET_1c     1 /* 0x00000001.ecx      */
-#define FEATURESET_e1d    2 /* 0x80000001.edx      */
-#define FEATURESET_e1c    3 /* 0x80000001.ecx      */
-#define FEATURESET_Da1    4 /* 0x0000000d:1.eax    */
-#define FEATURESET_7b0    5 /* 0x00000007:0.ebx    */
-#define FEATURESET_7c0    6 /* 0x00000007:0.ecx    */
-#define FEATURESET_e7d    7 /* 0x80000007.edx      */
-#define FEATURESET_e8b    8 /* 0x80000008.ebx      */
-#define FEATURESET_7d0    9 /* 0x00000007:0.edx    */
-#define FEATURESET_7a1   10 /* 0x00000007:1.eax    */
-#define FEATURESET_e21a  11 /* 0x80000021.eax      */
-#define FEATURESET_7b1   12 /* 0x00000007:1.ebx    */
-#define FEATURESET_7d2   13 /* 0x00000007:2.edx    */
-#define FEATURESET_7c1   14 /* 0x00000007:1.ecx    */
-#define FEATURESET_7d1   15 /* 0x00000007:1.edx    */
+#define FEATURESET_1d         0 /* 0x00000001.edx      */
+#define FEATURESET_1c         1 /* 0x00000001.ecx      */
+#define FEATURESET_e1d        2 /* 0x80000001.edx      */
+#define FEATURESET_e1c        3 /* 0x80000001.ecx      */
+#define FEATURESET_Da1        4 /* 0x0000000d:1.eax    */
+#define FEATURESET_7b0        5 /* 0x00000007:0.ebx    */
+#define FEATURESET_7c0        6 /* 0x00000007:0.ecx    */
+#define FEATURESET_e7d        7 /* 0x80000007.edx      */
+#define FEATURESET_e8b        8 /* 0x80000008.ebx      */
+#define FEATURESET_7d0        9 /* 0x00000007:0.edx    */
+#define FEATURESET_7a1       10 /* 0x00000007:1.eax    */
+#define FEATURESET_e21a      11 /* 0x80000021.eax      */
+#define FEATURESET_7b1       12 /* 0x00000007:1.ebx    */
+#define FEATURESET_7d2       13 /* 0x00000007:2.edx    */
+#define FEATURESET_7c1       14 /* 0x00000007:1.ecx    */
+#define FEATURESET_7d1       15 /* 0x00000007:1.edx    */
+#define FEATURESET_m10Al     16 /* 0x0000010a.eax      */
+#define FEATURESET_m10Ah     17 /* 0x0000010a.edx      */
 
 struct cpuid_leaf
 {
@@ -350,17 +352,13 @@ struct cpu_policy
      * fixed in hardware.
      */
     union {
-        uint32_t raw;
+        uint64_t raw;
+        struct {
+            uint32_t lo, hi;
+        };
         struct {
-            bool rdcl_no:1;
-            bool ibrs_all:1;
-            bool rsba:1;
-            bool skip_l1dfl:1;
-            bool ssb_no:1;
-            bool mds_no:1;
-            bool if_pschange_mc_no:1;
-            bool tsx_ctrl:1;
-            bool taa_no:1;
+            DECL_BITFIELD(m10Al);
+            DECL_BITFIELD(m10Ah);
         };
     } arch_caps;
 
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 68aafb404927..e795ce375032 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -79,6 +79,8 @@ void x86_cpu_policy_to_featureset(
     fs[FEATURESET_7d2]       = p->feat._7d2;
     fs[FEATURESET_7c1]       = p->feat._7c1;
     fs[FEATURESET_7d1]       = p->feat._7d1;
+    fs[FEATURESET_m10Al]     = p->arch_caps.lo;
+    fs[FEATURESET_m10Ah]     = p->arch_caps.hi;
 }
 
 void x86_cpu_featureset_to_policy(
@@ -100,6 +102,8 @@ void x86_cpu_featureset_to_policy(
     p->feat._7d2             = fs[FEATURESET_7d2];
     p->feat._7c1             = fs[FEATURESET_7c1];
     p->feat._7d1             = fs[FEATURESET_7d1];
+    p->arch_caps.lo          = fs[FEATURESET_m10Al];
+    p->arch_caps.hi          = fs[FEATURESET_m10Ah];
 }
 
 void x86_cpu_policy_recalc_synth(struct cpu_policy *p)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 11:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538972.839440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcn-0007gv-2F; Wed, 24 May 2023 11:25:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538972.839440; Wed, 24 May 2023 11:25:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcm-0007g3-Sb; Wed, 24 May 2023 11:25:48 +0000
Received: by outflank-mailman (input) for mailman id 538972;
 Wed, 24 May 2023 11:25:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1mcl-0006dp-86
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 11:25:47 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b8be0275-fa25-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 13:25:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8be0275-fa25-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684927545;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=wH9OWbCjMtjVvBDJDgEnYY93dMh+0saP5eOPD5fBQ2Q=;
  b=UkJqOY3Cs1Bhc5UXqCQGRE0buql3rLCVWUCnhB61tWy+PeFq7mNWHcqk
   B8eT5qStL1d2zT19x6JO/w1zk++JDv/+v9jGqBCiu6hEHn+kmWf+MryKC
   kmuiRtmaRzpriIEsAB4kzgv2vIetiUnO1NGsIE1Yq0871Me6w+pxV5Tgm
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110226278
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:tomPLagrxQlYzaXd02yFPLYWX161aBAKZh0ujC45NGQN5FlHY01je
 htvXW/UOvyCN2P1KNAkYdy39hkO6JTSytdgHAVoriszEC4b9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsx+qyq0N8klgZmP6sT4QWCzyJ94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQhdQ5KMjmxvNuw74+dVPVuu/Y6b830adZ3VnFIlVk1DN4jSJHHBa7L+cVZzHE7gcUm8fT2P
 pRDL2A1NVKZPkMJYw1MYH49tL7Aan3XWjtUsl+K44Ew5HDe1ldZ27nxKtvFPNeNQK25m27B/
 zqcpTqjXUFy2Nq35xmC3Gmvoff1ki74Z6MsLJyRq/tojwjGroAUIEJPDgbqyRWjsWauVtQaJ
 0EK9y4Gqakp6FftXtT7Rwe/onOPolgbQdU4O+8w5RyJy6HUyx2EHWVCRTlEAPQ5sOcmSDps0
 UWG9+4FHhQ27ufTEyjEsO7J83XrY3N9wXI+iTEsdFY7pIXKkroKiD3yaMh/EpOHl57xBmSlq
 9yVlxQWi7IWhM8N8qy0+1Hbnj6hzqT0oh4JChb/BTz8sF4gDGKxT8nxsAWAs64cRGqMZgPZ1
 EXojfRy+wzn4XulsCWWCNsAE7iyjxpuGG2N2AU/d3XNGtnExpJCQWyyyGsmTKuKGpxeEdMMX
 KM0kV052XOrFCH2BZKbmqroYyjQ8YDuFM7+StffZcdUb556eWevpX8+OR7OgTCxyxZ9y8nT3
 Kt3lu71Vx4n5VlPlmLqF4/xL5dwrszB+Y8jbc+ilEn2uVZvTHWUVa0EIDOzUwzN14vd+F+92
 48GZ6O3J+B3DLWWjt//rdRCcjjn7BETWfjLliCgXrHee1U/QT1wVJc8A9oJIuRYokicrc+Ql
 lnVZ6OS4ACXaaHvQelSVk1eVQ==
IronPort-HdrOrdr: A9a23:z2TrkalLGKL8IPFzhZLjPPJNlXPpDfLo3DAbv31ZSRFFG/Fw9/
 rCoB17726QtN91YhsdcL+7V5VoLUmzyXcX2/hyAV7BZmnbUQKTRekP0WKL+Vbd8kbFh41gPM
 lbEpSXCLfLfCJHZcSR2njELz73quP3jJxBho3lvghQpRkBUdAF0+/gYDzranGfQmN9dP0EPa
 vZ3OVrjRy6d08aa8yqb0N1JNQq97Xw5fTbiQdtPW9f1DWz
X-Talos-CUID: 9a23:3P1k6GBFSgPutLX6ExA6yglIM/IeS3vy8VHOP02FO38qTKLAHA==
X-Talos-MUID: 9a23:goc8Wgqt2kaMpi7bUy8ezx9BM+lz4IO2Mk8AkqonhfGlHHF1OTjI2Q==
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="110226278"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 09/10] x86/tsx: Remove opencoded MSR_ARCH_CAPS check
Date: Wed, 24 May 2023 12:25:25 +0100
Message-ID: <20230524112526.3475200-10-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The current cpu_has_tsx_ctrl tristate is serving a double purpose: to signal
the first pass through tsx_init(), and the availability of MSR_TSX_CTRL.

Drop the variable, replacing it with a once boolean, and altering
cpu_has_tsx_ctrl to come out of the feature information.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/include/asm/cpufeature.h |  1 +
 xen/arch/x86/include/asm/processor.h  |  2 +-
 xen/arch/x86/tsx.c                    | 13 ++++++++-----
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index e3154ec5800d..9047ea43f503 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -184,6 +184,7 @@ static inline bool boot_cpu_has(unsigned int feat)
 
 /* MSR_ARCH_CAPS */
 #define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)
+#define cpu_has_tsx_ctrl        boot_cpu_has(X86_FEATURE_TSX_CTRL)
 
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
index 0eaa2c3094d0..f983ff501d95 100644
--- a/xen/arch/x86/include/asm/processor.h
+++ b/xen/arch/x86/include/asm/processor.h
@@ -535,7 +535,7 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
     return fam;
 }
 
-extern int8_t opt_tsx, cpu_has_tsx_ctrl;
+extern int8_t opt_tsx;
 extern bool rtm_disabled;
 void tsx_init(void);
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index 41b6092cfe16..fc199815994d 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -19,7 +19,6 @@
  * controlling TSX behaviour, and where TSX isn't force-disabled by firmware.
  */
 int8_t __read_mostly opt_tsx = -1;
-int8_t __read_mostly cpu_has_tsx_ctrl = -1;
 bool __read_mostly rtm_disabled;
 
 static int __init cf_check parse_tsx(const char *s)
@@ -37,24 +36,28 @@ custom_param("tsx", parse_tsx);
 
 void tsx_init(void)
 {
+    static bool __read_mostly once;
+
     /*
      * This function is first called between microcode being loaded, and CPUID
      * being scanned generally.  Read into boot_cpu_data.x86_capability[] for
      * the cpu_has_* bits we care about using here.
      */
-    if ( unlikely(cpu_has_tsx_ctrl < 0) )
+    if ( unlikely(!once) )
     {
-        uint64_t caps = 0;
         bool has_rtm_always_abort;
 
+        once = true;
+
         if ( boot_cpu_data.cpuid_level >= 7 )
             boot_cpu_data.x86_capability[FEATURESET_7d0]
                 = cpuid_count_edx(7, 0);
 
         if ( cpu_has_arch_caps )
-            rdmsrl(MSR_ARCH_CAPABILITIES, caps);
+            rdmsr(MSR_ARCH_CAPABILITIES,
+                  boot_cpu_data.x86_capability[FEATURESET_10Al],
+                  boot_cpu_data.x86_capability[FEATURESET_10Ah]);
 
-        cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
         has_rtm_always_abort = cpu_has_rtm_always_abort;
 
         if ( cpu_has_tsx_ctrl && cpu_has_srbds_ctrl )
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 11:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538974.839465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcp-0008OW-9H; Wed, 24 May 2023 11:25:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538974.839465; Wed, 24 May 2023 11:25:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcp-0008Mc-2R; Wed, 24 May 2023 11:25:51 +0000
Received: by outflank-mailman (input) for mailman id 538974;
 Wed, 24 May 2023 11:25:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1mcn-0006nQ-72
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 11:25:49 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bb3d1b93-fa25-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 13:25:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb3d1b93-fa25-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684927548;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=2M4LsdJHQ+3Rfju7ux5VA/VnrdGS15MZcIC/IW123ak=;
  b=LuHGGZ0BzxUb6ZDvCDw3FPG+3diMk3Qc8aW8eGcWQ8iINEtjKIgmOAak
   8vLVD4iL7unWp0ym3/GHyDds34b4iJJXSCEbNeBtJhiBIjQRfoLu5+6UO
   uNKPd0SiQRfGSpPKENjDKkgSv8grg3E6rYcIg4lNb2kEwgidEMektka8R
   g=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 112680560
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:Lwvlqq1rZr2iUEXQzvbD5dtxkn2cJEfYwER7XKvMYLTBsI5bpzIBy
 WUcDT+HP/qLZGv9ctpwb4iyoB4Bv5DUnNJhTQVlpC1hF35El5HIVI+TRqvS04F+DeWYFR46s
 J9OAjXkBJppJpMJjk71atANlVEliefTAOK6ULWeUsxIbVcMYD87jh5+kPIOjIdtgNyoayuAo
 tq3qMDEULOf82cc3lk8teTb8HuDgNyo4GlD5gFkOagQ1LPjvyJ94Kw3dPnZw0TQGuG4LsbiL
 87fwbew+H/u/htFIrtJRZ6iLyXm6paLVeS/oiI+t5qK23CulQRrukoPD9IOaF8/ttm8t4sZJ
 OOhF3CHYVxB0qXkwIzxWvTDes10FfUuFLTveRBTvSEPpqFvnrSFL/hGVSkL0YMkFulfGmJCx
 NY+FCI0TE6qjcG5y6mQR9Zov5F2RCXrFNt3VnBIyDjYCbAtQIzZQrWM7thdtNsyrpkQR7CEP
 ZNfMGcxKk2aOHWjOX9OYH46tM6uimPybHtzr1WNqLBsy2PS0BZwwP7mN9+9ltmiHJwPwBbA+
 zyel4j/KgE6P8Oc1BvfyUy9p7WI2hPWVqMbPqLto5aGh3XMnzdOWXX6T2CTvv2RmkO4HdVFJ
 CQ86ico6KQ/6kGvZt38RAGj5m6JuAYGXNhdGPF87xuCooLW6QuEAmkPThZadccr8sQxQFQXO
 kShxo2zQ2Y16fvMFCzbr+3Pxd+vBcQLBXQBaR4uazcX2vu9iYsQgRnUZdtcFJfg27UZBgrM6
 zyNqSE/gZAagsgKy7i38Dj7vt68mnTaZlVrv1uKBwpJ+is8Pdf4PNLwtTA3+N4adO6kok+9U
 G/ociR0xMQHFtmzmSOEW43h95n5tq/eYFUwbbOCdqTNFghBGVb5Jei8Axkkfi+F1/ronhe3C
 HI/QSsLuPdu0IKCNMebmb6ZBcUw1rTHHt/4TP3SZdcmSsEvJFPXon8+ORXOjjqFfK0QfUYXY
 M3zTCpRJSxCVfQPIMSeHI/xLoPHNghhnDiOFPgXPjys0KaEZW79dIrpxGCmN7hjhIvd+VW9z
 jqqH5fSo/mpeLGkM3a/HE96BQxiEEXX8rip9pUIL7ffc1UO9aNII6a5/I7NsrdNx8x9/tokN
 FnmMqOE4DITXUH6FDg=
IronPort-HdrOrdr: A9a23:a64lz6BZAp9E4RflHelo55DYdb4zR+YMi2TDt3oddfU1SL38qy
 nKpp4mPHDP5wr5NEtPpTniAtjjfZq/z/5ICOAqVN/PYOCPggCVxepZnOjfKlPbehEX9oRmpN
 1dm6oVMqyMMbCt5/yKnDVRELwbsaa6GLjDv5a785/0JzsaE52J6W1Ce2GmO3wzfiZqL7wjGq
 GR48JWzgDQAkj+PqyAdx84t/GonayzqK7b
X-Talos-CUID: =?us-ascii?q?9a23=3AMFJnNWjHRJ6KlXK5rhMCWYb+NTJuaX7R/V32H12?=
 =?us-ascii?q?BKzhAQp6aQkS79Zxgqp87?=
X-Talos-MUID: 9a23:LoRdkAWLSS/Jm3Hq/C/Mjy5hKeYy2Jq/KRpK1qUrteiNHzMlbg==
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="112680560"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v2 07/10] x86/cpufeature: Rework {boot_,}cpu_has()
Date: Wed, 24 May 2023 12:25:23 +0100
Message-ID: <20230524112526.3475200-8-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

One area where Xen deviates from Linux is that test_bit() forces a volatile
read.  This leads to poor code generation, because the optimiser cannot merge
bit operations on the same word.

Drop the use of test_bit(), and write the expressions in regular C.  This
removes the include of bitops.h (which is a frequent source of header
tangles), and it offers the optimiser far more flexibility.

Bloat-o-meter reports a net change of:

  add/remove: 0/0 grow/shrink: 21/87 up/down: 641/-2751 (-2110)

with half of that in x86_emulate() alone.  vmx_ctxt_switch_to() seems to be
the fastpath with the greatest delta at -24, where the optimiser has
successfully removed the branch hidden in cpu_has_msr_tsc_aux.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Drop stdbool.  It is already covered by other includes.
---
 xen/arch/x86/include/asm/cpufeature.h | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 4140ec0938b2..d0ead8e7a51e 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -17,7 +17,6 @@
 #define X86_FEATURE_ALWAYS      X86_FEATURE_LM
 
 #ifndef __ASSEMBLY__
-#include <xen/bitops.h>
 
 struct cpuinfo_x86 {
     unsigned char x86;                 /* CPU family */
@@ -43,8 +42,15 @@ struct cpuinfo_x86 {
 
 extern struct cpuinfo_x86 boot_cpu_data;
 
-#define cpu_has(c, bit)		test_bit(bit, (c)->x86_capability)
-#define boot_cpu_has(bit)	test_bit(bit, boot_cpu_data.x86_capability)
+static inline bool cpu_has(const struct cpuinfo_x86 *info, unsigned int feat)
+{
+    return info->x86_capability[cpufeat_word(feat)] & cpufeat_mask(feat);
+}
+
+static inline bool boot_cpu_has(unsigned int feat)
+{
+    return cpu_has(&boot_cpu_data, feat);
+}
 
 #define CPUID_PM_LEAF                    6
 #define CPUID6_ECX_APERFMPERF_CAPABILITY 0x1
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 11:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.538975.839471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcp-0008VF-Pw; Wed, 24 May 2023 11:25:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 538975.839471; Wed, 24 May 2023 11:25:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1mcp-0008UB-HH; Wed, 24 May 2023 11:25:51 +0000
Received: by outflank-mailman (input) for mailman id 538975;
 Wed, 24 May 2023 11:25:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1mcn-0006nQ-Ue
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 11:25:49 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ba94cc4c-fa25-11ed-b22f-6b7b168915f2;
 Wed, 24 May 2023 13:25:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba94cc4c-fa25-11ed-b22f-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684927548;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=FFrjGy0N3rUhXRjNR9bsStAp23/y+xkzRX/aa0Az0Wg=;
  b=DLSU54IfRzMS8Ancg76nmyn6IvmSihDJLpJ5JAs7qiiKh5SgJcbYDJW2
   naLLtxFHAquerPqT/rhVglecqpQSLQjyD4YgbBTr82wiqO46a98lMkWsp
   /1XnaYyk00BQtJj0IaIg5Hw7b0WNbokWnoUJYmgna5nwznM76pHaAFPDr
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 112680542
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:09FfLa8X5Ql3PVKIFV8kDrUDiH6TJUtcMsCJ2f8bNWPcYEJGY0x3n
 WAYCmzXMvnbZDOjLdB1boq380wE6J/RydNlGVE5q3s8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ird7ks31BjOkGlA5AdmOKoX5AS2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklyx
 /4yOGlcTCunxO3p4eyDWsh9puE8eZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAr3/zaTBH7nmSorI6+TP7xw1tyrn9dtHSf7RmQO0MxxzG9
 jqboz6R7hcyBOWH2zrdrliVoLHEpyTiAKYLDZf7z6s/6LGU7jNKU0BHPbehmtGmjmauVtQZL
 FYbkgI+oK53+EG1Q93VWxyjvGXCrhMaQ8BXEeAx9EeK0KW8yySzC3UATzVBQMc7r8JwTjsvv
 mJlhPuwW2Yp6ufMDyvAqPHN92ja1TUpwXEqWR0GZxtcsvvYhMI80TORdsZCAY6QkYigcd3v+
 AyioC87jrQVqMcE0aSn4FzK6w6RSoj1oh0dvVuOAD/8hu9tTMv8PtHztwCHhRpVBNzBJmRtq
 kTojCR3AAomKZiW3BKAT+wWdF1Cz6bUaWaM6bKD8nRIythMx5JBVdoIiN2dDB0zWirhRdMOS
 BG7hO+pzMUPVEZGlIcuC25LN+wkzLL7CfPuXe3OY9xFb/BZLVHXoHEwOx7MhD68yiDAdJ3T3
 r/CK66R4YsyU/w7nFJauc9GuVPU+szO7TyKHs2qp/hW+bGfeGSUWd84Dbd6VchgtPnsiFyMo
 75i2z6il003vBvWPnOGrub+7DkicRAGOHwBg5MKL7Hae1Y3RwnMyZb5mNscRmCspIwN/s+gw
 513chUwJIbX7ZEfFTi3Vw==
IronPort-HdrOrdr: A9a23:zMW3caHMfC45vcAUpLqELMeALOsnbusQ8zAXPiBKJCC9E/bo8v
 xG+c5w6faaslkssR0b9+xoW5PwI080l6QU3WB5B97LMDUO0FHCEGgI1/qA/9SPIUzDHu4279
 YbT0B9YueAcGSTW6zBkXWF+9VL+qj5zEix792uq0uE1WtRGtldBwESMHf9LmRGADNoKLAeD5
 Sm6s9Ot1ObCA8qhpTSPAhiYwDbzee77a7bXQ==
X-Talos-CUID: 9a23:zOqA4mDk5zJoqX36ExVV9mQEQ5EHSVmewyjxfXeoKHdJbaLAHA==
X-Talos-MUID: 9a23:MxLm+gXM+pQtQl/q/A/VnCNibJtC2oLwA3oKiM0iotCudhUlbg==
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="112680542"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 04/10] x86/cpu-policy: MSR_ARCH_CAPS feature names
Date: Wed, 24 May 2023 12:25:20 +0100
Message-ID: <20230524112526.3475200-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Seed the default visibility from the dom0 special case, which for the most
part just exposes the *_NO bits.  EIBRS is the one non-*_NO bit, which is
"just" a status bit to the guest indicating a change in implementation of IBRS
which is already fully supported.

Insert a block dependency from the ARCH_CAPS CPUID bit to the entire content
of the MSR.  This is because MSRs have no structure information similar to
CPUID's; the dependency is used by x86_cpu_policy_clear_out_of_range_leaves()
to bulk-clear inaccessible words.

The overall CPUID bit is still max-only, so all of MSR_ARCH_CAPS is hidden in
the default policies.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

There is no libxl logic because libxl still uses the older xend format which
is specific to CPUID data.  That is going to need untangling at some other
point.

v2:
 * Don't expose SKIP_L1DFL to guests (it's only applicable for nested virt)
 * Fix SBDR_SSDP_NO and FBSDP_NO names.
 * Extend the commit message.
---
 tools/misc/xen-cpuid.c                      | 13 ++++++++++++
 xen/include/public/arch-x86/cpufeatureset.h | 23 +++++++++++++++++++++
 xen/tools/gen-cpuid.py                      |  3 +++
 3 files changed, 39 insertions(+)

diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index 15ad2d33e2a1..8925a583edd5 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -228,6 +228,19 @@ static const char *const str_7d2[32] =
 
 static const char *const str_10Al[32] =
 {
+    [ 0] = "rdcl-no",             [ 1] = "eibrs",
+    [ 2] = "rsba",                [ 3] = "skip-l1dfl",
+    [ 4] = "intel-ssb-no",        [ 5] = "mds-no",
+    [ 6] = "if-pschange-mc-no",   [ 7] = "tsx-ctrl",
+    [ 8] = "taa-no",              [ 9] = "mcu-ctrl",
+    [10] = "misc-pkg-ctrl",       [11] = "energy-ctrl",
+    [12] = "doitm",               [13] = "sbdr-ssdp-no",
+    [14] = "fbsdp-no",            [15] = "psdp-no",
+    /* 16 */                      [17] = "fb-clear",
+    [18] = "fb-clear-ctrl",       [19] = "rrsba",
+    [20] = "bhi-no",              [21] = "xapic-status",
+    /* 22 */                      [23] = "ovrclk-status",
+    [24] = "pbrsb-no",
 };
 
 static const char *const str_10Ah[32] =
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 032cec3ccba2..033b1a72feea 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -308,6 +308,29 @@ XEN_CPUFEATURE(AVX_NE_CONVERT,     15*32+ 5) /*A  AVX-NE-CONVERT Instructions */
 XEN_CPUFEATURE(CET_SSS,            15*32+18) /*   CET Supervisor Shadow Stacks safe to use */
 
 /* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.eax, word 16 */
+XEN_CPUFEATURE(RDCL_NO,            16*32+ 0) /*A  No Rogue Data Cache Load (Meltdown) */
+XEN_CPUFEATURE(EIBRS,              16*32+ 1) /*A  Enhanced IBRS */
+XEN_CPUFEATURE(RSBA,               16*32+ 2) /*!A RSB Alternative (Retpoline not safe) */
+XEN_CPUFEATURE(SKIP_L1DFL,         16*32+ 3) /*   Don't need to flush L1D on VMEntry */
+XEN_CPUFEATURE(INTEL_SSB_NO,       16*32+ 4) /*A  No Speculative Store Bypass */
+XEN_CPUFEATURE(MDS_NO,             16*32+ 5) /*A  No Microarchitectural Data Sampling */
+XEN_CPUFEATURE(IF_PSCHANGE_MC_NO,  16*32+ 6) /*A  No Instruction fetch #MC */
+XEN_CPUFEATURE(TSX_CTRL,           16*32+ 7) /*   MSR_TSX_CTRL */
+XEN_CPUFEATURE(TAA_NO,             16*32+ 8) /*A  No TSX Async Abort */
+XEN_CPUFEATURE(MCU_CTRL,           16*32+ 9) /*   MSR_MCU_CTRL */
+XEN_CPUFEATURE(MISC_PKG_CTRL,      16*32+10) /*   MSR_MISC_PKG_CTRL */
+XEN_CPUFEATURE(ENERGY_FILTERING,   16*32+11) /*   MSR_MISC_PKG_CTRL.ENERGY_FILTERING */
+XEN_CPUFEATURE(DOITM,              16*32+12) /*   Data Operand Invariant Timing Mode */
+XEN_CPUFEATURE(SBDR_SSDP_NO,       16*32+13) /*A  No Shared Buffer Data Read or Sideband Stale Data Propagation */
+XEN_CPUFEATURE(FBSDP_NO,           16*32+14) /*A  No Fill Buffer Stale Data Propagation */
+XEN_CPUFEATURE(PSDP_NO,            16*32+15) /*A  No Primary Stale Data Propagation */
+XEN_CPUFEATURE(FB_CLEAR,           16*32+17) /*A  Fill Buffers cleared by VERW */
+XEN_CPUFEATURE(FB_CLEAR_CTRL,      16*32+18) /*   MSR_OPT_CPU_CTRL.FB_CLEAR_DIS */
+XEN_CPUFEATURE(RRSBA,              16*32+19) /*!A Restricted RSB Alternative */
+XEN_CPUFEATURE(BHI_NO,             16*32+20) /*A  No Branch History Injection  */
+XEN_CPUFEATURE(XAPIC_STATUS,       16*32+21) /*   MSR_XAPIC_DISABLE_STATUS */
+XEN_CPUFEATURE(OVRCLK_STATUS,      16*32+23) /*   MSR_OVERCLOCKING_STATUS */
+XEN_CPUFEATURE(PBRSB_NO,           16*32+24) /*A  No Post-Barrier RSB predictions */
 
 /* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.edx, word 17 */
 
diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index 86d00bb3c273..f28ff708a2fc 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -325,6 +325,9 @@ def crunch_numbers(state):
 
         # In principle the TSXLDTRK insns could also be considered independent.
         RTM: [TSXLDTRK],
+
+        # The ARCH_CAPS CPUID bit enumerates the availability of the whole register.
+        ARCH_CAPS: list(range(RDCL_NO, RDCL_NO + 64)),
     }
 
     deep_features = tuple(sorted(deps.keys()))
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:02 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 06/10] x86/boot: Expose MSR_ARCH_CAPS data in guest max policies
Date: Wed, 24 May 2023 12:25:22 +0100
Message-ID: <20230524112526.3475200-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

We already have common and default feature adjustment helpers.  Introduce one
for max featuresets too.

Offer MSR_ARCH_CAPS unconditionally in the max policy, and stop clobbering the
data inherited from the Host policy.  This will be necessary to level a VM
safely for migration.  Annotate the ARCH_CAPS CPUID bit as special.  Note:
ARCH_CAPS is still max-only for now, so will not be inherited by the default
policies.

With this done, the special case for dom0 can be shrunk to just resampling the
Host policy (as ARCH_CAPS isn't visible by default yet).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Annotate ARCH_CAPS as special.
---
 xen/arch/x86/cpu-policy.c                   | 42 ++++++++++++---------
 xen/include/public/arch-x86/cpufeatureset.h |  2 +-
 2 files changed, 25 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index dfd9abd8564c..74266d30b551 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -408,6 +408,25 @@ static void __init calculate_host_policy(void)
     p->platform_info.cpuid_faulting = cpu_has_cpuid_faulting;
 }
 
+static void __init guest_common_max_feature_adjustments(uint32_t *fs)
+{
+    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+    {
+        /*
+         * MSR_ARCH_CAPS is just feature data, and we can offer it to guests
+         * unconditionally, although limit it to Intel systems as it is highly
+         * uarch-specific.
+         *
+         * In particular, the RSBA and RRSBA bits mean "you might migrate to a
+         * system where RSB underflow uses alternative predictors (a.k.a
+         * Retpoline not safe)", so these need to be visible to a guest in all
+         * cases, even when it's only some other server in the pool which
+         * suffers the identified behaviour.
+         */
+        __set_bit(X86_FEATURE_ARCH_CAPS, fs);
+    }
+}
+
 static void __init guest_common_default_feature_adjustments(uint32_t *fs)
 {
     /*
@@ -483,6 +502,7 @@ static void __init calculate_pv_max_policy(void)
         __clear_bit(X86_FEATURE_IBRS, fs);
     }
 
+    guest_common_max_feature_adjustments(fs);
     guest_common_feature_adjustments(fs);
 
     sanitise_featureset(fs);
@@ -490,8 +510,6 @@ static void __init calculate_pv_max_policy(void)
     recalculate_xstate(p);
 
     p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
-
-    p->arch_caps.raw = 0; /* Not supported yet. */
 }
 
 static void __init calculate_pv_def_policy(void)
@@ -598,6 +616,7 @@ static void __init calculate_hvm_max_policy(void)
     if ( !cpu_has_vmx )
         __clear_bit(X86_FEATURE_PKS, fs);
 
+    guest_common_max_feature_adjustments(fs);
     guest_common_feature_adjustments(fs);
 
     sanitise_featureset(fs);
@@ -606,8 +625,6 @@ static void __init calculate_hvm_max_policy(void)
 
     /* It's always possible to emulate CPUID faulting for HVM guests */
     p->platform_info.cpuid_faulting = true;
-
-    p->arch_caps.raw = 0; /* Not supported yet. */
 }
 
 static void __init calculate_hvm_def_policy(void)
@@ -828,7 +845,10 @@ void __init init_dom0_cpuid_policy(struct domain *d)
      * domain policy logic gains a better understanding of MSRs.
      */
     if ( is_hardware_domain(d) && cpu_has_arch_caps )
+    {
         p->feat.arch_caps = true;
+        p->arch_caps.raw = host_cpu_policy.arch_caps.raw;
+    }
 
     /* Apply dom0-cpuid= command line settings, if provided. */
     if ( dom0_cpuid_cmdline )
@@ -858,20 +878,6 @@ void __init init_dom0_cpuid_policy(struct domain *d)
         p->platform_info.cpuid_faulting = false;
 
     recalculate_cpuid_policy(d);
-
-    if ( is_hardware_domain(d) && cpu_has_arch_caps )
-    {
-        uint64_t val;
-
-        rdmsrl(MSR_ARCH_CAPABILITIES, val);
-
-        p->arch_caps.raw = val &
-            (ARCH_CAPS_RDCL_NO | ARCH_CAPS_IBRS_ALL | ARCH_CAPS_RSBA |
-             ARCH_CAPS_SSB_NO | ARCH_CAPS_MDS_NO | ARCH_CAPS_IF_PSCHANGE_MC_NO |
-             ARCH_CAPS_TAA_NO | ARCH_CAPS_SBDR_SSDP_NO | ARCH_CAPS_FBSDP_NO |
-             ARCH_CAPS_PSDP_NO | ARCH_CAPS_FB_CLEAR | ARCH_CAPS_RRSBA |
-             ARCH_CAPS_BHI_NO | ARCH_CAPS_PBRSB_NO);
-    }
 }
 
 static void __init __maybe_unused build_assertions(void)
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 033b1a72feea..777041425e0a 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -271,7 +271,7 @@ XEN_CPUFEATURE(AVX512_FP16,   9*32+23) /*   AVX512 FP16 instructions */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */
 XEN_CPUFEATURE(L1D_FLUSH,     9*32+28) /*S  MSR_FLUSH_CMD and L1D flush. */
-XEN_CPUFEATURE(ARCH_CAPS,     9*32+29) /*a  IA32_ARCH_CAPABILITIES MSR */
+XEN_CPUFEATURE(ARCH_CAPS,     9*32+29) /*!a IA32_ARCH_CAPABILITIES MSR */
 XEN_CPUFEATURE(CORE_CAPS,     9*32+30) /*   IA32_CORE_CAPABILITIES MSR */
 XEN_CPUFEATURE(SSBD,          9*32+31) /*A  MSR_SPEC_CTRL.SSBD available */
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:10 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
Subject: [PATCH v2 08/10] x86/vtx: Remove opencoded MSR_ARCH_CAPS check
Date: Wed, 24 May 2023 12:25:24 +0100
Message-ID: <20230524112526.3475200-9-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

MSR_ARCH_CAPS data is now included in featureset information.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c            | 8 ++------
 xen/arch/x86/include/asm/cpufeature.h | 3 +++
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 096c69251d58..9dc16d0cc6b9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2849,8 +2849,6 @@ static void __init ler_to_fixup_check(void);
  */
 static bool __init has_if_pschange_mc(void)
 {
-    uint64_t caps = 0;
-
     /*
      * If we are virtualised, there is nothing we can do.  Our EPT tables are
      * shadowed by our hypervisor, and not walked by hardware.
@@ -2858,10 +2856,8 @@ static bool __init has_if_pschange_mc(void)
     if ( cpu_has_hypervisor )
         return false;
 
-    if ( cpu_has_arch_caps )
-        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
-
-    if ( caps & ARCH_CAPS_IF_PSCHANGE_MC_NO )
+    /* Hardware reports itself as fixed. */
+    if ( cpu_has_if_pschange_mc_no )
         return false;
 
     /*
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index d0ead8e7a51e..e3154ec5800d 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -182,6 +182,9 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_avx_vnni_int8   boot_cpu_has(X86_FEATURE_AVX_VNNI_INT8)
 #define cpu_has_avx_ne_convert  boot_cpu_has(X86_FEATURE_AVX_NE_CONVERT)
 
+/* MSR_ARCH_CAPS */
+#define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)
+
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
 #define cpu_has_cpuid_faulting  boot_cpu_has(X86_FEATURE_CPUID_FAULTING)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:26:16 2023
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 10/10] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
Date: Wed, 24 May 2023 12:25:26 +0100
Message-ID: <20230524112526.3475200-11-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

MSR_ARCH_CAPS data is now included in featureset information.  Replace
opencoded checks with regular feature ones.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/include/asm/cpufeature.h |  7 ++++
 xen/arch/x86/spec_ctrl.c              | 56 +++++++++++++--------------
 2 files changed, 33 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 9047ea43f503..50235f098d70 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -183,8 +183,15 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_avx_ne_convert  boot_cpu_has(X86_FEATURE_AVX_NE_CONVERT)
 
 /* MSR_ARCH_CAPS */
+#define cpu_has_rdcl_no         boot_cpu_has(X86_FEATURE_RDCL_NO)
+#define cpu_has_eibrs           boot_cpu_has(X86_FEATURE_EIBRS)
+#define cpu_has_rsba            boot_cpu_has(X86_FEATURE_RSBA)
+#define cpu_has_skip_l1dfl      boot_cpu_has(X86_FEATURE_SKIP_L1DFL)
+#define cpu_has_mds_no          boot_cpu_has(X86_FEATURE_MDS_NO)
 #define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)
 #define cpu_has_tsx_ctrl        boot_cpu_has(X86_FEATURE_TSX_CTRL)
+#define cpu_has_taa_no          boot_cpu_has(X86_FEATURE_TAA_NO)
+#define cpu_has_fb_clear        boot_cpu_has(X86_FEATURE_FB_CLEAR)
 
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index f81db2143328..50d467f74cf8 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -282,12 +282,10 @@ custom_param("spec-ctrl", parse_spec_ctrl);
 int8_t __read_mostly opt_xpti_hwdom = -1;
 int8_t __read_mostly opt_xpti_domu = -1;
 
-static __init void xpti_init_default(uint64_t caps)
+static __init void xpti_init_default(void)
 {
-    if ( boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
-        caps = ARCH_CAPS_RDCL_NO;
-
-    if ( caps & ARCH_CAPS_RDCL_NO )
+    if ( (boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) ||
+         cpu_has_rdcl_no )
     {
         if ( opt_xpti_hwdom < 0 )
             opt_xpti_hwdom = 0;
@@ -390,9 +388,10 @@ static int __init cf_check parse_pv_l1tf(const char *s)
 }
 custom_param("pv-l1tf", parse_pv_l1tf);
 
-static void __init print_details(enum ind_thunk thunk, uint64_t caps)
+static void __init print_details(enum ind_thunk thunk)
 {
     unsigned int _7d0 = 0, _7d2 = 0, e8b = 0, max = 0, tmp;
+    uint64_t caps = 0;
 
     /* Collect diagnostics about available mitigations. */
     if ( boot_cpu_data.cpuid_level >= 7 )
@@ -401,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
         cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
     if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
         cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
+    if ( cpu_has_arch_caps )
+        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
     printk("Speculative mitigation facilities:\n");
 
@@ -578,7 +579,7 @@ static bool __init check_smt_enabled(void)
 }
 
 /* Calculate whether Retpoline is known-safe on this CPU. */
-static bool __init retpoline_safe(uint64_t caps)
+static bool __init retpoline_safe(void)
 {
     unsigned int ucode_rev = this_cpu(cpu_sig).rev;
 
@@ -596,7 +597,7 @@ static bool __init retpoline_safe(uint64_t caps)
      * Processors offering Enhanced IBRS are not guaranteed to be
      * retpoline-safe.
      */
-    if ( caps & (ARCH_CAPS_RSBA | ARCH_CAPS_IBRS_ALL) )
+    if ( cpu_has_rsba || cpu_has_eibrs )
         return false;
 
     switch ( boot_cpu_data.x86_model )
@@ -845,7 +846,7 @@ static void __init ibpb_calculations(void)
 }
 
 /* Calculate whether this CPU is vulnerable to L1TF. */
-static __init void l1tf_calculations(uint64_t caps)
+static __init void l1tf_calculations(void)
 {
     bool hit_default = false;
 
@@ -933,7 +934,7 @@ static __init void l1tf_calculations(uint64_t caps)
     }
 
     /* Any processor advertising RDCL_NO should not be vulnerable to L1TF. */
-    if ( caps & ARCH_CAPS_RDCL_NO )
+    if ( cpu_has_rdcl_no )
         cpu_has_bug_l1tf = false;
 
     if ( cpu_has_bug_l1tf && hit_default )
@@ -992,7 +993,7 @@ static __init void l1tf_calculations(uint64_t caps)
 }
 
 /* Calculate whether this CPU is vulnerable to MDS. */
-static __init void mds_calculations(uint64_t caps)
+static __init void mds_calculations(void)
 {
     /* MDS is only known to affect Intel Family 6 processors at this time. */
     if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
@@ -1000,7 +1001,7 @@ static __init void mds_calculations(uint64_t caps)
         return;
 
     /* Any processor advertising MDS_NO should not be vulnerable to MDS. */
-    if ( caps & ARCH_CAPS_MDS_NO )
+    if ( cpu_has_mds_no )
         return;
 
     switch ( boot_cpu_data.x86_model )
@@ -1113,10 +1114,6 @@ void __init init_speculation_mitigations(void)
     enum ind_thunk thunk = THUNK_DEFAULT;
     bool has_spec_ctrl, ibrs = false, hw_smt_enabled;
     bool cpu_has_bug_taa;
-    uint64_t caps = 0;
-
-    if ( cpu_has_arch_caps )
-        rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
     hw_smt_enabled = check_smt_enabled();
 
@@ -1163,7 +1160,7 @@ void __init init_speculation_mitigations(void)
              * On all hardware, we'd like to use retpoline in preference to
              * IBRS, but only if it is safe on this hardware.
              */
-            if ( retpoline_safe(caps) )
+            if ( retpoline_safe() )
                 thunk = THUNK_RETPOLINE;
             else if ( has_spec_ctrl )
                 ibrs = true;
@@ -1392,13 +1389,13 @@ void __init init_speculation_mitigations(void)
      * threads.  Activate this if SMT is enabled, and Xen is using a non-zero
      * MSR_SPEC_CTRL setting.
      */
-    if ( boot_cpu_has(X86_FEATURE_IBRSB) && !(caps & ARCH_CAPS_IBRS_ALL) &&
+    if ( boot_cpu_has(X86_FEATURE_IBRSB) && !cpu_has_eibrs &&
          hw_smt_enabled && default_xen_spec_ctrl )
         setup_force_cpu_cap(X86_FEATURE_SC_MSR_IDLE);
 
-    xpti_init_default(caps);
+    xpti_init_default();
 
-    l1tf_calculations(caps);
+    l1tf_calculations();
 
     /*
      * By default, enable PV domU L1TF mitigations on all L1TF-vulnerable
@@ -1419,7 +1416,7 @@ void __init init_speculation_mitigations(void)
     if ( !boot_cpu_has(X86_FEATURE_L1D_FLUSH) )
         opt_l1d_flush = 0;
     else if ( opt_l1d_flush == -1 )
-        opt_l1d_flush = cpu_has_bug_l1tf && !(caps & ARCH_CAPS_SKIP_L1DFL);
+        opt_l1d_flush = cpu_has_bug_l1tf && !cpu_has_skip_l1dfl;
 
     /* We compile lfence's in by default, and nop them out if requested. */
     if ( !opt_branch_harden )
@@ -1442,7 +1439,7 @@ void __init init_speculation_mitigations(void)
             "enabled.  Please assess your configuration and choose an\n"
             "explicit 'smt=<bool>' setting.  See XSA-273.\n");
 
-    mds_calculations(caps);
+    mds_calculations();
 
     /*
      * Parts which enumerate FB_CLEAR are those which are post-MDS_NO and have
@@ -1454,7 +1451,7 @@ void __init init_speculation_mitigations(void)
      * the return-to-guest path.
      */
     if ( opt_unpriv_mmio )
-        opt_fb_clear_mmio = caps & ARCH_CAPS_FB_CLEAR;
+        opt_fb_clear_mmio = cpu_has_fb_clear;
 
     /*
      * By default, enable PV and HVM mitigations on MDS-vulnerable hardware.
@@ -1484,7 +1481,7 @@ void __init init_speculation_mitigations(void)
      */
     if ( opt_md_clear_pv || opt_md_clear_hvm || opt_fb_clear_mmio )
         setup_force_cpu_cap(X86_FEATURE_SC_VERW_IDLE);
-    opt_md_clear_hvm &= !(caps & ARCH_CAPS_SKIP_L1DFL) && !opt_l1d_flush;
+    opt_md_clear_hvm &= !cpu_has_skip_l1dfl && !opt_l1d_flush;
 
     /*
      * Warn the user if they are on MLPDS/MFBDS-vulnerable hardware with HT
@@ -1515,8 +1512,7 @@ void __init init_speculation_mitigations(void)
      *       we check both to spot TSX in a microcode/cmdline independent way.
      */
     cpu_has_bug_taa =
-        (cpu_has_rtm || (caps & ARCH_CAPS_TSX_CTRL)) &&
-        (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
+        (cpu_has_rtm || cpu_has_tsx_ctrl) && cpu_has_mds_no && !cpu_has_taa_no;
 
     /*
      * On TAA-affected hardware, disabling TSX is the preferred mitigation, vs
@@ -1535,7 +1531,7 @@ void __init init_speculation_mitigations(void)
      * plausibly value TSX higher than Hyperthreading...), disable TSX to
      * mitigate TAA.
      */
-    if ( opt_tsx == -1 && cpu_has_bug_taa && (caps & ARCH_CAPS_TSX_CTRL) &&
+    if ( opt_tsx == -1 && cpu_has_bug_taa && cpu_has_tsx_ctrl &&
          ((hw_smt_enabled && opt_smt) ||
           !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
     {
@@ -1560,15 +1556,15 @@ void __init init_speculation_mitigations(void)
     if ( cpu_has_srbds_ctrl )
     {
         if ( opt_srb_lock == -1 && !opt_unpriv_mmio &&
-             (caps & (ARCH_CAPS_MDS_NO|ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO &&
-             (!cpu_has_hle || ((caps & ARCH_CAPS_TSX_CTRL) && rtm_disabled)) )
+             cpu_has_mds_no && !cpu_has_taa_no &&
+             (!cpu_has_hle || (cpu_has_tsx_ctrl && rtm_disabled)) )
             opt_srb_lock = 0;
 
         set_in_mcu_opt_ctrl(MCU_OPT_CTRL_RNGDS_MITG_DIS,
                             opt_srb_lock ? 0 : MCU_OPT_CTRL_RNGDS_MITG_DIS);
     }
 
-    print_details(thunk, caps);
+    print_details(thunk);
 
     /*
      * If MSR_SPEC_CTRL is available, apply Xen's default setting and discard
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 24 11:52:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 11:52:54 +0000
Message-ID: <bc0389af-6bcb-bb8c-2e79-f41b6825b383@suse.com>
Date: Wed, 24 May 2023 13:52:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: PVH Dom0 related UART failure
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 marmarek@invisiblethingslab.com, xenia.ragiadakou@amd.com,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
 <ZGcu7EWW1cuNjwDA@Air-de-Roger>
 <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
 <ZGig68UTddfEwR6P@Air-de-Roger>
 <alpine.DEB.2.22.394.2305221520350.1553709@ubuntu-linux-20-04-desktop>
 <818859a6-883a-81c0-1222-84c62ada3200@suse.com>
 <alpine.DEB.2.22.394.2305231756500.44000@ubuntu-linux-20-04-desktop>
 <a08128b5-b6e7-8329-3127-96e171bb76b9@suse.com>
In-Reply-To: <a08128b5-b6e7-8329-3127-96e171bb76b9@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.05.2023 08:14, Jan Beulich wrote:
> On 24.05.2023 03:13, Stefano Stabellini wrote:
>> For sure I can test your patch. BTW it is also really easy for you to do
>> it yourself, simply by pushing a branch to a repo on gitlab-ci and watching
>> the results. If you are interested, let me know and I can give you a
>> tutorial; you just need to create a repo and register the gitlab runner,
>> and voila'.
>>
>> This is the outcome:
>>
>> https://gitlab.com/xen-project/people/sstabellini/xen/-/pipelines/876808194
>>
>>
>> (XEN) PCI add device 0000:00:00.0
>> (XEN) PCI add device 0000:00:00.2
>> (XEN) PCI add device 0000:00:01.0
>> (XEN) PCI add device 0000:00:02.0
>> (XEN) Assertion 'd == dom_xen && system_state < SYS_STATE_active' failed at drivers/vpci/header.c:313
> 
> So this is an assertion my patch adds. The right side of the && may be too
> strict, but it's been too long to recall why exactly I thought the case
> should occur only before Dom0 starts. You may want to retry with that 2nd
> half of the condition dropped.

And indeed it needs dropping. The patch pre-dates 163db6a72b66 ("x86/PVH:
permit more physdevop-s to be used by Dom0"), and despite me being the
author of that one I failed to make the connection. Which quite clearly
indicates some other oddity, because in principle I should be hitting
that same assertion then as well when booting PVH Dom0. Yet I don't, so
I have more to figure out.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 24 13:19:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 13:19:14 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180922-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180922: tolerable FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 13:18:55 +0000

flight 180922 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180922/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-install fail in 180913 pass in 180922
 test-amd64-i386-libvirt-raw   7 xen-install      fail in 180913 pass in 180922
 test-amd64-i386-xl-vhd       22 guest-start.2    fail in 180913 pass in 180922
 test-amd64-amd64-xl-qemut-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 180913

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180913
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180913
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180913
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180913
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180913
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180913
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180913
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180913
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180913
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180913
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180913
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180913
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180913
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  c7908869ac26961a3919491705e521179ad3fc0e
baseline version:
 xen                  c7908869ac26961a3919491705e521179ad3fc0e

Last test of basis   180922  2023-05-24 01:54:52 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed May 24 13:25:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 13:25:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539044.839526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1oUm-0000ex-8z; Wed, 24 May 2023 13:25:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539044.839526; Wed, 24 May 2023 13:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1oUm-0000eq-5O; Wed, 24 May 2023 13:25:40 +0000
Received: by outflank-mailman (input) for mailman id 539044;
 Wed, 24 May 2023 13:25:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CqhT=BN=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q1oUk-0000ek-PR
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 13:25:38 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77150686-fa36-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 15:25:35 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E1FD41F8BF;
 Wed, 24 May 2023 13:25:34 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 78EC5133E6;
 Wed, 24 May 2023 13:25:34 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id Clw3HE4QbmRTBwAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 24 May 2023 13:25:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77150686-fa36-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1684934734; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ud6MB7nedCZkSb5qPo9d/1Yd567GK4FyPlSF+rxBmoo=;
	b=sjZ7uOhM6BQ/AXe2dfIHc3p4JD/E30fE8XmNSPeiUwRrTVr/oJHH9uLg03XXGffpT1+APg
	NMPKaEgUiDUSoEZHBKULg0Ed0cRwNQ5FKOMmbCX6oeXzUtzPhbJDZwzoYYy/SoWqlyuzIF
	CRv1DXUe7n/XrjtppN6eBXVDaxi1tKE=
Message-ID: <42ca1d5b-f064-9327-90a4-a51a536cca1f@suse.com>
Date: Wed, 24 May 2023 15:25:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: Arnd Bergmann <arnd@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, "H. Peter Anvin"
 <hpa@zytor.com>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 "Peter Zijlstra (Intel)" <peterz@infradead.org>,
 Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20230523205703.2116910-1-arnd@kernel.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] [v3] x86: xen: add missing prototypes
In-Reply-To: <20230523205703.2116910-1-arnd@kernel.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------EEwKLaWCxiBLY3VNTN9kwMkV"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------EEwKLaWCxiBLY3VNTN9kwMkV
Content-Type: multipart/mixed; boundary="------------FRGwNkgU1tOwivNx2Vn2eN1p";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Arnd Bergmann <arnd@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, "H. Peter Anvin"
 <hpa@zytor.com>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 "Peter Zijlstra (Intel)" <peterz@infradead.org>,
 Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
Message-ID: <42ca1d5b-f064-9327-90a4-a51a536cca1f@suse.com>
Subject: Re: [PATCH] [v3] x86: xen: add missing prototypes
References: <20230523205703.2116910-1-arnd@kernel.org>
In-Reply-To: <20230523205703.2116910-1-arnd@kernel.org>

--------------FRGwNkgU1tOwivNx2Vn2eN1p
Content-Type: multipart/mixed; boundary="------------vqX0FtuPaXK098mpV9QoM0mE"

--------------vqX0FtuPaXK098mpV9QoM0mE
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 23.05.23 22:56, Arnd Bergmann wrote:
> From: Arnd Bergmann <arnd@arndb.de>
> 
> These function are all called from assembler files, or from inline assembler,
> so there is no immediate need for a prototype in a header, but if -Wmissing-prototypes
> is enabled, the compiler warns about them:
> 
> arch/x86/xen/efi.c:130:13: error: no previous prototype for 'xen_efi_init' [-Werror=missing-prototypes]
> arch/x86/platform/pvh/enlighten.c:120:13: error: no previous prototype for 'xen_prepare_pvh' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:358:20: error: no previous prototype for 'xen_pte_val' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:366:20: error: no previous prototype for 'xen_pgd_val' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:372:17: error: no previous prototype for 'xen_make_pte' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:380:17: error: no previous prototype for 'xen_make_pgd' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:387:20: error: no previous prototype for 'xen_pmd_val' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:425:17: error: no previous prototype for 'xen_make_pmd' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:432:20: error: no previous prototype for 'xen_pud_val' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:438:17: error: no previous prototype for 'xen_make_pud' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:522:20: error: no previous prototype for 'xen_p4d_val' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:528:17: error: no previous prototype for 'xen_make_p4d' [-Werror=missing-prototypes]
> arch/x86/xen/mmu_pv.c:1442:17: error: no previous prototype for 'xen_make_pte_init' [-Werror=missing-prototypes]

I'd suggest to add the prototypes of the functions defined in
arch/x86/xen/mmu_pv.c to the same source, as they are referenced
nowhere else. This avoids the need of #ifdefs for those functions.

> arch/x86/xen/enlighten_pv.c:1233:34: error: no previous prototype for 'xen_start_kernel' [-Werror=missing-prototypes]
> arch/x86/xen/irq.c:22:14: error: no previous prototype for 'xen_force_evtchn_callback' [-Werror=missing-prototypes]
> arch/x86/entry/common.c:302:24: error: no previous prototype for 'xen_pv_evtchn_do_upcall' [-Werror=missing-prototypes]
> 
> Declare all of them in an appropriate header file to avoid the warnings.
> For consistency, also move the asm_cpu_bringup_and_idle() declaration out of
> smp_pv.c.
> 
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
> v3: move declations of conditional function in an #ifdef with a stub
> v2: fix up changelog
> ---
>   arch/x86/xen/efi.c     |  2 ++
>   arch/x86/xen/smp.h     | 11 +++++++++++
>   arch/x86/xen/smp_pv.c  |  1 -
>   arch/x86/xen/xen-ops.h | 26 ++++++++++++++++++++++++++
>   include/xen/xen.h      |  3 +++
>   5 files changed, 42 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
> index 7d7ffb9c826a..863d0d6b3edc 100644
> --- a/arch/x86/xen/efi.c
> +++ b/arch/x86/xen/efi.c
> @@ -16,6 +16,8 @@
>   #include <asm/setup.h>
>   #include <asm/xen/hypercall.h>
>   
> +#include "xen-ops.h"
> +
>   static efi_char16_t vendor[100] __initdata;
>   
>   static efi_system_table_t efi_systab_xen __initdata = {
> diff --git a/arch/x86/xen/smp.h b/arch/x86/xen/smp.h
> index 22fb982ff971..9367502281dc 100644
> --- a/arch/x86/xen/smp.h
> +++ b/arch/x86/xen/smp.h
> @@ -2,6 +2,10 @@
>   #ifndef _XEN_SMP_H
>   
>   #ifdef CONFIG_SMP
> +
> +void asm_cpu_bringup_and_idle(void);
> +asmlinkage void cpu_bringup_and_idle(void);
> +
>   extern void xen_send_IPI_mask(const struct cpumask *mask,
>   			      int vector);
>   extern void xen_send_IPI_mask_allbutself(const struct cpumask *mask,
> @@ -29,6 +33,13 @@ struct xen_common_irq {
>   };
>   #else /* CONFIG_SMP */
>   
> +static inline void asm_cpu_bringup_and_idle(void)
> +{
> +}
> +static inline void cpu_bringup_and_idle(void)
> +{
> +}
> +

No need for above stubs, as there is no way they can get called
without CONFIG_SMP.

>   static inline int xen_smp_intr_init(unsigned int cpu)
>   {
>   	return 0;
> diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
> index a92e8002b5cf..d5ae5de2daa2 100644
> --- a/arch/x86/xen/smp_pv.c
> +++ b/arch/x86/xen/smp_pv.c
> @@ -55,7 +55,6 @@ static DEFINE_PER_CPU(struct xen_common_irq, xen_irq_work) = { .irq = -1 };
>   static DEFINE_PER_CPU(struct xen_common_irq, xen_pmu_irq) = { .irq = -1 };
>   
>   static irqreturn_t xen_irq_work_interrupt(int irq, void *dev_id);
> -void asm_cpu_bringup_and_idle(void);
>   
>   static void cpu_bringup(void)
>   {
> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index 6d7f6318fc07..eb4cb30570c7 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -146,12 +146,38 @@ int xen_cpuhp_setup(int (*cpu_up_prepare_cb)(unsigned int),
>   void xen_pin_vcpu(int cpu);
>   
>   void xen_emergency_restart(void);
> +void xen_force_evtchn_callback(void);
> +
>   #ifdef CONFIG_XEN_PV
>   void xen_pv_pre_suspend(void);
>   void xen_pv_post_suspend(int suspend_cancelled);
> +pteval_t xen_pte_val(pte_t pte);
> +pgdval_t xen_pgd_val(pgd_t pgd);
> +pmdval_t xen_pmd_val(pmd_t pmd);
> +pudval_t xen_pud_val(pud_t pud);
> +p4dval_t xen_p4d_val(p4d_t p4d);
> +pte_t xen_make_pte(pteval_t pte);
> +pgd_t xen_make_pgd(pgdval_t pgd);
> +pmd_t xen_make_pmd(pmdval_t pmd);
> +pud_t xen_make_pud(pudval_t pud);
> +p4d_t xen_make_p4d(p4dval_t p4d);
> +pte_t xen_make_pte_init(pteval_t pte);

See above.

> +void xen_start_kernel(struct start_info *si);
>   #else
>   static inline void xen_pv_pre_suspend(void) {}
>   static inline void xen_pv_post_suspend(int suspend_cancelled) {}
> +static inline pteval_t xen_pte_val(pte_t pte)	{ return pte.pte; }
> +static inline pgdval_t xen_pgd_val(pgd_t pgd)	{ return pgd.pgd; }
> +static inline pmdval_t xen_pmd_val(pmd_t pmd)	{ return pmd.pmd; }
> +static inline pudval_t xen_pud_val(pud_t pud)	{ return pud.pud; }
> +static inline p4dval_t xen_p4d_val(p4d_t p4d)	{ return p4d.p4d; }
> +static inline pte_t xen_make_pte(pteval_t pte)	{ return (pte_t){pte}; }
> +static inline pgd_t xen_make_pgd(pgdval_t pgd)	{ return (pgd_t){pgd}; }
> +static inline pmd_t xen_make_pmd(pmdval_t pmd)	{ return (pmd_t){pmd}; }
> +static inline pud_t xen_make_pud(pudval_t pud)	{ return (pud_t){pud}; }
> +static inline p4d_t xen_make_p4d(p4dval_t p4d)	{ return (p4d_t){p4d}; }
> +static inline pte_t xen_make_pte_init(pteval_t pte)	{ return (pte_t){pte}; }
> +static inline void xen_start_kernel(struct start_info *si) {}

Drop those stubs, they are not used without CONFIG_XEN_PV, and
they shouldn't be used in this case anyway.

>   #endif
>   
>   #ifdef CONFIG_XEN_PVHVM
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index 0efeb652f9b8..f989162983c3 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -31,6 +31,9 @@ extern uint32_t xen_start_flags;
>   
>   #include <xen/interface/hvm/start_info.h>
>   extern struct hvm_start_info pvh_start_info;
> +void xen_prepare_pvh(void);
> +struct pt_regs;
> +void xen_pv_evtchn_do_upcall(struct pt_regs *regs);
>   
>   #ifdef CONFIG_XEN_DOM0
>   #include <xen/interface/xen.h>


Juergen
--------------vqX0FtuPaXK098mpV9QoM0mE
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------vqX0FtuPaXK098mpV9QoM0mE--

--------------FRGwNkgU1tOwivNx2Vn2eN1p--

--------------EEwKLaWCxiBLY3VNTN9kwMkV
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRuEE4FAwAAAAAACgkQsN6d1ii/Ey9N
MAf+LtwS2j+za6YUynWwQzUfny4yRglRQMREx7rQBDJU9sUcYHztPmuYE7MPL2RlxhMvudM0o+JU
ChzsnrDR5hDn26iB1kDvBYgUXrrYzUrxEbZpqw4cKtsPeaDc/q+4lUYyAPo1kZXuZ9e87rVx6il5
r0WRSM/dHvMy/qSG53GpEHHAgXBEtZArk0Rrtdw13Zt/uWvVvQPWd6J6wW8ODXC78bz34EkokJeB
xbQNQk3mZSWjVFfi7P/391rBWGksrTrfme7VrNUxvxBJQlnawuzP4TNzwE2Dzk5SAin5aXLjomgg
TTltl4YQWgZugUNiaTKweRZObLhEP57EvY1WKuQLnQ==
=8to8
-----END PGP SIGNATURE-----

--------------EEwKLaWCxiBLY3VNTN9kwMkV--


From xen-devel-bounces@lists.xenproject.org Wed May 24 13:46:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 13:46:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539050.839537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1oob-00039H-1N; Wed, 24 May 2023 13:46:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539050.839537; Wed, 24 May 2023 13:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1ooa-00039A-T0; Wed, 24 May 2023 13:46:08 +0000
Received: by outflank-mailman (input) for mailman id 539050;
 Wed, 24 May 2023 13:46:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1ooa-000394-14
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 13:46:08 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 53e72a47-fa39-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 15:46:05 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7280.eurprd04.prod.outlook.com (2603:10a6:800:1af::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 13:46:01 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 13:46:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53e72a47-fa39-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TIOAP64M9OGKguvqRc+5WRZsESRSVtyHh38/m7PcT2hmbXCKyOIk3AAKMQ9wZgLElyCP1o4lWJZtzEQ2y8eQVLIW3rnutFFxx8NO/hL/U9rTU9Wid/Vr1+qJCAb7JBzaPrQA9L+WuORFWZ9FkdMwcjCfUMfJiLeWL7pvx7hFBjbKxBlTjdRg8vay0xDlMg0+EuzCFZn0wn+tzS0YbdQf9XHknOXc1dOFjZ7/Tj+U7v2DPXHcZIozIk1UcpBOPI5AMWL98OYUBOHiqUZisaz97+miN7Ndt2X/OcQlCzFvDZJ5/ZzwThfPSfmO6T0fJSQaSIskohya795oXpHibTWnKg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ujJbCJNF4sWBy6RwEm197UJw1XLCnTru4XS4yhruDZ0=;
 b=ebAesyQyxg+5qM625uEsl3qLHH7NFr25eksRavZwasUbvBuBNjsHa9Kyp6IGSKnw0Lf53Q/tGYNQ47lbsLVn3dwB8yT1X/vRu90hFGLc2gLK4P0hAgRINMtzhjiccUD4PXF+PoJ/vrD+Kugzo9CYfOrn7BAQrFoyi3Ebz2jKWU24DsPEWfTW9YFgk+OClEQtWEqdfWNypEQmSGOPpycw80LC0FMwdK/EGV9ynsvP3NGq+r91FXVwR+tR6SSt9bmg9FN2YVh2i1NAvc3hm5rQA+12yJvqg45eKLwGba4FSc+/5PB5r5ne2ipaifCo5rF2JaUG7V7Q7QEjWmvu8GMuMg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ujJbCJNF4sWBy6RwEm197UJw1XLCnTru4XS4yhruDZ0=;
 b=PLz5dl1AvYSZapm8SJDp2Rd94YY9f0BxKG9mkKdfdGvZY6Dc7mMboFGpS1WRYYzPolODQ6mZvJxkAIEd63wxapwXzCVQnfF3eHNhuzkdLX2VWXJyrars0g5FC8UCb2jA2TCg9uAW0TGh78mlNq97cKjU/OMvWI/inS5vNvDsKiKDn13+hSqIvaGyT0MsTwN3kZuuobsCmTQVW84Bn3lITMRB7NRx77TTxi9vVnDJiOL0OwFh5LBybaaWFBTNdL8Z6PRVGvPxPU+OM4qVWSw7sOxggsdpiSJDlfmEh2VeT94FdwgXwEZkac3n97inbaHMpe03cQ+zHmS7H3gEzx4qiA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
Date: Wed, 24 May 2023 15:45:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH RFC v2] vPCI: account for hidden devices
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
console) are associated with DomXEN, not Dom0. This means that when
looking for overlapping BARs, such devices cannot be found on Dom0's list
of devices; DomXEN's list also needs to be scanned.

Suppress vPCI init altogether for r/o devices (which constitute a subset
of hidden ones).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: The modify_bars() change is intentionally mis-formatted, as the
     necessary re-indentation would make the diff difficult to read. At
     this point I'd merely like to gather input towards possible better
     approaches to solve the issue (not the least because quite possibly
     there are further places needing changing).

RFC: Whether to actually suppress vPCI init is up for debate; adding the
     extra logic is following Roger's suggestion (I'm not convinced it is
     useful to have). If we want to keep that, a 2nd question would be
     whether to keep it in vpci_add_handlers(): Both of its callers (can)
     have a struct pci_seg readily available, so checking ->ro_map at the
     call sites would be easier.

RFC: _setup_hwdom_pci_devices()'s loop may want splitting: For
     modify_bars() to consistently respect BARs of hidden devices while
     setting up "normal" ones (i.e. to avoid as much as possible the
     "continue" path introduced here), setting up of the former may want
     doing first.

RFC: vpci_write()'s check of the r/o map may want moving out of mainline
     code, into the case dealing with !pdev->vpci.
---
v2: Extend existing comment. Relax assertion. Don't initialize vPCI for
    r/o devices.

--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
     struct vpci_header *header = &pdev->vpci->header;
     struct rangeset *mem = rangeset_new(NULL, NULL, 0);
     struct pci_dev *tmp, *dev = NULL;
+    const struct domain *d;
     const struct vpci_msix *msix = pdev->vpci->msix;
     unsigned int i;
     int rc;
@@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
 
     /*
      * Check for overlaps with other BARs. Note that only BARs that are
-     * currently mapped (enabled) are checked for overlaps.
+     * currently mapped (enabled) are checked for overlaps. Note also that
+     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
      */
-    for_each_pdev ( pdev->domain, tmp )
+for ( d = pdev->domain; ; d = dom_xen ) {//todo
+    for_each_pdev ( d, tmp )
     {
         if ( tmp == pdev )
         {
@@ -304,6 +307,7 @@ static int modify_bars(const struct pci_
                  */
                 continue;
         }
+if ( !tmp->vpci ) { ASSERT(d == dom_xen); continue; }//todo
 
         for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
         {
@@ -330,6 +334,7 @@ static int modify_bars(const struct pci_
             }
         }
     }
+if ( !is_hardware_domain(d) ) break; }//todo
 
     ASSERT(dev);
 
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -70,6 +70,7 @@ void vpci_remove_device(struct pci_dev *
 int vpci_add_handlers(struct pci_dev *pdev)
 {
     unsigned int i;
+    const unsigned long *ro_map;
     int rc = 0;
 
     if ( !has_vpci(pdev->domain) )
@@ -78,6 +79,11 @@ int vpci_add_handlers(struct pci_dev *pd
     /* We should not get here twice for the same device. */
     ASSERT(!pdev->vpci);
 
+    /* No vPCI for r/o devices. */
+    ro_map = pci_get_ro_map(pdev->sbdf.seg);
+    if ( ro_map && test_bit(pdev->sbdf.bdf, ro_map) )
+        return 0;
+
     pdev->vpci = xzalloc(struct vpci);
     if ( !pdev->vpci )
         return -ENOMEM;
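For reference, the r/o suppression added to vpci_add_handlers() above can
be modelled on its own. The bitmap layout and the test_bit()/set_bit()
helpers below are simplified stand-ins written for this sketch, not Xen's
implementations; only the shape of the check follows the patch: a NULL
map means the segment has no r/o devices, and a set bit at sbdf.bdf
suppresses vPCI initialization for that device.

```c
/* Minimal model (NOT the Xen implementation) of the r/o check:
 * pci_get_ro_map() yields a per-segment bitmap indexed by BDF. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BDF_BITS      16       /* bus:dev.fn packs into 16 bits */
#define RO_MAP_WORDS  ((1u << BDF_BITS) / (8 * sizeof(unsigned long)))

static bool test_bit(unsigned int bit, const unsigned long *map)
{
    return (map[bit / (8 * sizeof(unsigned long))] >>
            (bit % (8 * sizeof(unsigned long)))) & 1;
}

static void set_bit(unsigned int bit, unsigned long *map)
{
    map[bit / (8 * sizeof(unsigned long))] |=
        1ul << (bit % (8 * sizeof(unsigned long)));
}

/* Mirrors the early exit added to vpci_add_handlers(): NULL map means
 * no r/o devices on this segment at all. */
static bool skip_vpci_init(const unsigned long *ro_map, uint16_t bdf)
{
    return ro_map && test_bit(bdf, ro_map);
}
```

E.g. marking BDF 0x00f8 (bus 0, device 0x1f, function 0) r/o makes
skip_vpci_init() true for that device only, and always false when the
segment has no r/o map.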


From xen-devel-bounces@lists.xenproject.org Wed May 24 13:48:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 13:48:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539054.839546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1oqD-0003hC-Ak; Wed, 24 May 2023 13:47:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539054.839546; Wed, 24 May 2023 13:47:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1oqD-0003h5-7L; Wed, 24 May 2023 13:47:49 +0000
Received: by outflank-mailman (input) for mailman id 539054;
 Wed, 24 May 2023 13:47:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1oqB-0003gz-Oo
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 13:47:47 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on060c.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::60c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 90b1f9f9-fa39-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 15:47:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GVXPR04MB9778.eurprd04.prod.outlook.com (2603:10a6:150:110::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 13:47:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 13:47:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90b1f9f9-fa39-11ed-b230-6b7b168915f2
Message-ID: <78efef03-ff39-3289-d347-396c06e06eb2@suse.com>
Date: Wed, 24 May 2023 15:47:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: PVH Dom0 related UART failure
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 marmarek@invisiblethingslab.com, xenia.ragiadakou@amd.com,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <alpine.DEB.2.22.394.2305171745450.128889@ubuntu-linux-20-04-desktop>
 <ZGX/Pvgy3+onJOJZ@Air-de-Roger>
 <alpine.DEB.2.22.394.2305181748280.128889@ubuntu-linux-20-04-desktop>
 <ZGcu7EWW1cuNjwDA@Air-de-Roger>
 <alpine.DEB.2.22.394.2305191641010.815658@ubuntu-linux-20-04-desktop>
 <ZGig68UTddfEwR6P@Air-de-Roger>
 <alpine.DEB.2.22.394.2305221520350.1553709@ubuntu-linux-20-04-desktop>
 <818859a6-883a-81c0-1222-84c62ada3200@suse.com>
 <alpine.DEB.2.22.394.2305231756500.44000@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305231756500.44000@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 24.05.2023 03:13, Stefano Stabellini wrote:
> On Tue, 23 May 2023, Jan Beulich wrote:
>> On 23.05.2023 00:20, Stefano Stabellini wrote:
>>> On Sat, 20 May 2023, Roger Pau Monné wrote:
>>>> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
>>>> index ec2e978a4e6b..0ff8e940fa8d 100644
>>>> --- a/xen/drivers/vpci/header.c
>>>> +++ b/xen/drivers/vpci/header.c
>>>> @@ -289,6 +289,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>>       */
>>>>      for_each_pdev ( pdev->domain, tmp )
>>>>      {
>>>> +        if ( !tmp->vpci )
>>>> +        {
>>>> +            printk(XENLOG_G_WARNING "%pp: not handled by vPCI for %pd\n",
>>>> +                   &tmp->sbdf, pdev->domain);
>>>> +            continue;
>>>> +        }
>>>> +
>>>>          if ( tmp == pdev )
>>>>          {
>>>>              /*
>>>> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
>>>> index 652807a4a454..0baef3a8d3a1 100644
>>>> --- a/xen/drivers/vpci/vpci.c
>>>> +++ b/xen/drivers/vpci/vpci.c
>>>> @@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
>>>>      unsigned int i;
>>>>      int rc = 0;
>>>>  
>>>> -    if ( !has_vpci(pdev->domain) )
>>>> +    if ( !has_vpci(pdev->domain) ||
>>>> +         /*
>>>> +          * Ignore RO and hidden devices, those are in use by Xen and vPCI
>>>> +          * won't work on them.
>>>> +          */
>>>> +         pci_get_pdev(dom_xen, pdev->sbdf) )
>>>>          return 0;
>>>>  
>>>>      /* We should not get here twice for the same device. */
>>>
>>>
>>> Now this patch works! Thank you!! :-)
>>>
>>> You can check the full logs here
>>> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4329259080
>>>
>>> Is the patch ready to be upstreamed aside from the commit message?
>>
>> I don't think so. vPCI ought to work on "r/o" devices. Out of curiosity,
>> have you also tried my (hackish and hence RFC) patch [1]?
>>
>> [1] https://lists.xen.org/archives/html/xen-devel/2021-08/msg01489.html
> 
> I don't know the code well enough to discuss what is the best solution.
> I'll let you and Roger figure it out. I would only kindly request to
> solve this in a few days so that we can enable the real hardware PVH test
> in gitlab-ci as soon as possible. I think it is critical, as it will
> allow us to catch many real issues going forward.
> 
> For sure I can test your patch. BTW it is also really easy for you to do
> it yourself, simply by pushing a branch to a repo on gitlab-ci and watching
> for the results. If you are interested let me know and I can give you a
> tutorial; you just need to create a repo and register the gitlab runner,
> and voila.
> 
> This is the outcome:
> 
> https://gitlab.com/xen-project/people/sstabellini/xen/-/pipelines/876808194
> 
> 
> (XEN) PCI add device 0000:00:00.0
> (XEN) PCI add device 0000:00:00.2
> (XEN) PCI add device 0000:00:01.0
> (XEN) PCI add device 0000:00:02.0
> (XEN) Assertion 'd == dom_xen && system_state < SYS_STATE_active' failed at drivers/vpci/header.c:313

I've sent an updated RFC patch, integrating a variant of Roger's patch
at the same time.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 24 14:03:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 14:03:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539058.839556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1p5R-00069j-LT; Wed, 24 May 2023 14:03:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539058.839556; Wed, 24 May 2023 14:03:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1p5R-00069c-IO; Wed, 24 May 2023 14:03:33 +0000
Received: by outflank-mailman (input) for mailman id 539058;
 Wed, 24 May 2023 14:03:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1p5P-00069W-G0
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 14:03:31 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c0e751de-fa3b-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 16:03:28 +0200 (CEST)
Received: from mail-dm6nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 24 May 2023 10:03:16 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS7PR03MB5462.namprd03.prod.outlook.com (2603:10b6:5:2d0::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 14:03:13 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 14:03:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0e751de-fa3b-11ed-8611-37d641c3527e
Message-ID: <143ca25b-4689-0cef-e6ce-99defa571159@citrix.com>
Date: Wed, 24 May 2023 15:03:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 05/10] x86/boot: Record MSR_ARCH_CAPS for the Raw and
 Host CPU policy
Content-Language: en-GB
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
 <20230524112526.3475200-6-andrew.cooper3@citrix.com>
In-Reply-To: <20230524112526.3475200-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0643.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:296::10) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DS7PR03MB5462:EE_
X-MS-Office365-Filtering-Correlation-Id: 30f306be-53ea-42af-88a9-08db5c5f9cbe
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 30f306be-53ea-42af-88a9-08db5c5f9cbe
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 14:03:12.8268
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5462

On 24/05/2023 12:25 pm, Andrew Cooper wrote:
> diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
> index 9bbb385db42d..f1084bb1ed36 100644
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -477,6 +477,11 @@ static void generic_identify(struct cpuinfo_x86 *c)
>  		cpuid_count(0xd, 1,
>  			    &c->x86_capability[FEATURESET_Da1],
>  			    &tmp, &tmp, &tmp);
> +
> +	if (test_bit(X86_FEATURE_ARCH_CAPS, c->x86_capability))
> +		rdmsr(MSR_ARCH_CAPABILITIES,
> +		      c->x86_capability[FEATURESET_10Al],
> +		      c->x86_capability[FEATURESET_10Ah]);

I managed to send out a stale version.  I've corrected this and the
other instances locally.
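As an aside, the two-featureset-word pattern in the hunk above comes down to splitting the 64-bit MSR value into low and high 32-bit halves (rdmsr stores the low half in the first argument, the high half in the second). A minimal sketch with shell arithmetic — the value used here is made up, not a real ARCH_CAPABILITIES reading:

```shell
# Illustrative only: how one 64-bit MSR value fills two adjacent
# 32-bit featureset words (10Al = low half, 10Ah = high half).
val=$(( 0x0000002900000c04 ))          # made-up MSR value
lo=$(( val & 0xffffffff ))             # -> FEATURESET_10Al
hi=$(( (val >> 32) & 0xffffffff ))     # -> FEATURESET_10Ah
printf 'lo=%#x hi=%#x\n' "$lo" "$hi"   # prints: lo=0xc04 hi=0x29
```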

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 24 14:09:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 14:09:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539064.839565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pBS-0006qi-Et; Wed, 24 May 2023 14:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539064.839565; Wed, 24 May 2023 14:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pBS-0006qb-CM; Wed, 24 May 2023 14:09:46 +0000
Received: by outflank-mailman (input) for mailman id 539064;
 Wed, 24 May 2023 14:09:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1pBQ-0006qU-Tc
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 14:09:44 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20620.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a1024558-fa3c-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 16:09:42 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DUZPR04MB9918.eurprd04.prod.outlook.com (2603:10a6:10:4db::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 14:09:41 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 14:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1024558-fa3c-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6cf71fcf-789a-e80d-2d9d-407257fe5e3f@suse.com>
Date: Wed, 24 May 2023 16:09:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 02/15] build: rework asm-offsets.* build step to use
 kbuild
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-3-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-3-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0021.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:14::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DUZPR04MB9918:EE_
X-MS-Office365-Filtering-Correlation-Id: 42c0c222-8093-4b9f-b517-08db5c60844c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 42c0c222-8093-4b9f-b517-08db5c60844c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 14:09:41.1433
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DUZPR04MB9918

On 23.05.2023 18:37, Anthony PERARD wrote:
> Use the $(if_changed_dep, ) macro to generate "asm-offsets.s" and remove
> the use of $(move-if-changed,). That means that "asm-offsets.s" will be
> rewritten even when the output doesn't change.
> 
> But "asm-offsets.s" is only used to generate "asm-offsets.h". So,
> instead of regenerating "asm-offsets.h" every time "asm-offsets.s"
> changes, we will use "$(filechk, )" to only update the ".h" when the
> output changes. Also, with "$(filechk, )", the file does get
> regenerated when the rule changes in the makefile.
> 
> This change also results in a cleaner build log.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> 
> Instead of having a special $(cmd_asm-offsets.s) command, we could
> probably reuse $(cmd_cc_s_c) from Rules.mk, but that would mean that
> a hypothetical additional "-flto" flag in CFLAGS would no longer be
> removed; not sure whether that matters here.

There's no real code being generated there; all we're after is the
special .ascii directives. So the presence of -flto should have no
effect, and it would even look more consistent to me if we didn't use
different options (or even different rules) for the .c -> .s
transformation.
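Fwiw, the move-if-changed / $(filechk, ) behaviour being discussed boils down to the following shell idiom (a sketch only — `gen_offsets` is a stand-in for the real asm-offsets generation step, not anything in the tree):

```shell
# Sketch: regenerate into a temp file and install it over the real
# output only when the content differs, so files depending on
# asm-offsets.h are not rebuilt needlessly.
set -e
workdir=$(mktemp -d)
out="$workdir/asm-offsets.h"

gen_offsets() { printf '#define VCPU_id 8\n'; }   # stand-in generator

gen_offsets > "$out.tmp"
if cmp -s "$out.tmp" "$out" 2>/dev/null; then
    rm -f "$out.tmp"       # unchanged: keep old file and its timestamp
    echo unchanged
else
    mv "$out.tmp" "$out"   # first run, or the content changed
    echo updated
fi
```

Running it a second time without changing the generator takes the "unchanged" branch, which is exactly what keeps dependents of the ".h" from being rebuilt.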

> But then we could write this:
> 
> targets += arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s
> arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s: CFLAGS-y += -g0
> arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s FORCE
> 
> instead of having to write a rule for asm-offsets.s

Ftaod, I'd be happy to ack the patch as it is, but I would prefer it if
you could do the rework / simplification outlined above.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 24 14:17:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 14:17:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539068.839576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pIv-0008JH-8c; Wed, 24 May 2023 14:17:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539068.839576; Wed, 24 May 2023 14:17:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pIv-0008JA-5L; Wed, 24 May 2023 14:17:29 +0000
Received: by outflank-mailman (input) for mailman id 539068;
 Wed, 24 May 2023 14:17:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1pIt-0008J4-FW
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 14:17:27 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2051.outbound.protection.outlook.com [40.107.7.51])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b4b2df34-fa3d-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 16:17:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7034.eurprd04.prod.outlook.com (2603:10a6:10:128::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 14:16:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 14:16:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4b2df34-fa3d-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <eaaae1c8-fdef-0746-b744-3a3e04933164@suse.com>
Date: Wed, 24 May 2023 16:16:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 03/15] build, x86: clean build log for boot/ targets
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-4-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-4-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0130.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB8PR04MB7034:EE_
X-MS-Office365-Filtering-Correlation-Id: 3bb4d441-590f-4710-9669-08db5c618786
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3bb4d441-590f-4710-9669-08db5c618786
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 14:16:56.0094
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR04MB7034

On 23.05.2023 18:37, Anthony PERARD wrote:
> We are adding %.lnk to .PRECIOUS, or else make treats them as
> intermediate targets and removes them.

What's wrong with them getting removed? Note also that this is no
different from today, so it's an orthogonal change in any event (unless
I'm overlooking something). Plus, if such behavior were intended,
shouldn't $(targets) be made a prerequisite of .PRECIOUS in the generic
makefile logic?
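For reference, the intermediate-file removal in question can be reproduced with a toy makefile (assumes GNU make >= 3.82; the file names are made up):

```shell
# Chained pattern rules make %.lnk an "intermediate" file, which make
# deletes after building %.bin unless it is covered by .PRECIOUS (or
# .SECONDARY).  Toy demonstration, not the actual boot/ Makefile.
set -e
d=$(mktemp -d); cd "$d"
cat > Makefile <<'EOF'
.RECIPEPREFIX = >
%.bin: %.lnk
> cp $< $@
%.lnk: %.src
> cp $< $@
# Comment out the next line and foo.lnk is removed after the build:
.PRECIOUS: %.lnk
EOF
echo data > foo.src
make -s foo.bin
test -f foo.lnk && echo "foo.lnk kept"
```

Without the .PRECIOUS line, make reports removing foo.lnk once foo.bin is built, so any later rebuild of foo.bin has to recreate the .lnk first.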

> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>  xen/arch/x86/boot/Makefile | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
> index 03d8ce3a9e..2693b938bd 100644
> --- a/xen/arch/x86/boot/Makefile
> +++ b/xen/arch/x86/boot/Makefile
> @@ -5,6 +5,8 @@ head-bin-objs := cmdline.o reloc.o
>  nocov-y   += $(head-bin-objs)
>  noubsan-y += $(head-bin-objs)
>  targets   += $(head-bin-objs)
> +targets   += $(head-bin-objs:.o=.bin)
> +targets   += $(head-bin-objs:.o=.lnk)

Leaving aside the question of whether .lnk really should be part
of $(targets), don't these two lines also ...

> @@ -26,10 +28,16 @@ $(head-bin-objs): XEN_CFLAGS := $(CFLAGS_x86_32) -fpic
>  LDFLAGS_DIRECT-$(call ld-option,--warn-rwx-segments) := --no-warn-rwx-segments
>  LDFLAGS_DIRECT += $(LDFLAGS_DIRECT-y)
>  
> -%.bin: %.lnk
> -	$(OBJCOPY) -j .text -O binary $< $@
> +%.bin: OBJCOPYFLAGS := -j .text -O binary
> +%.bin: %.lnk FORCE
> +	$(call if_changed,objcopy)
>  
> -%.lnk: %.o $(src)/build32.lds
> -	$(LD) $(subst x86_64,i386,$(LDFLAGS_DIRECT)) -N -T $(filter %.lds,$^) -o $@ $<
> +quiet_cmd_ld_lnk_o = LD      $@
> +cmd_ld_lnk_o = $(LD) $(subst x86_64,i386,$(LDFLAGS_DIRECT)) -N -T $(filter %.lds,$^) -o $@ $<
> +
> +%.lnk: %.o $(src)/build32.lds FORCE
> +	$(call if_changed,ld_lnk_o)
>  
>  clean-files := *.lnk *.bin

... eliminate the need for this?

Jan

> +
> +.PRECIOUS: %.lnk



From xen-devel-bounces@lists.xenproject.org Wed May 24 14:22:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 14:22:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539072.839586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pNT-0001KQ-T7; Wed, 24 May 2023 14:22:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539072.839586; Wed, 24 May 2023 14:22:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pNT-0001KJ-P4; Wed, 24 May 2023 14:22:11 +0000
Received: by outflank-mailman (input) for mailman id 539072;
 Wed, 24 May 2023 14:22:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x+R+=BN=amd.com=ayan.kumar.halder@srs-se1.protection.inumbo.net>)
 id 1q1pNS-0001KD-35
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 14:22:10 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20612.outbound.protection.outlook.com
 [2a01:111:f400:fe59::612])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d241d7d-fa3e-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 16:22:07 +0200 (CEST)
Received: from SN6PR12MB2621.namprd12.prod.outlook.com (2603:10b6:805:73::15)
 by SJ2PR12MB8135.namprd12.prod.outlook.com (2603:10b6:a03:4f3::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19; Wed, 24 May
 2023 14:22:04 +0000
Received: from SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::8018:78f7:1b08:7a54]) by SN6PR12MB2621.namprd12.prod.outlook.com
 ([fe80::8018:78f7:1b08:7a54%2]) with mapi id 15.20.6411.028; Wed, 24 May 2023
 14:22:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d241d7d-fa3e-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NzZc9/lLws6oU4dSMRJooDL9UeF3+GaIT1OGbaLuG4u8o39lM7JJGlElN2RsrBkEtjemdqud0Zn/GWCz6HaM5Ey5bd8z3PLj8FdN/uy5hZmkjwWoSjH+5yn1hUNdjRe0No2JaZfcJwvvYouLJof+RSeuvFg4N9r1WwqB5tEHGxpBz7G9Y22Pb5AQvUs/xnIsEnGqF94Aq65vJ2r9Xv6zppsH7RnWT9vR7ATIq9SIaBnybql3WgVJD6qVr/bo+dqeNmDH5nEJWksdQ66+VsyzCivlAEQ7wJBs1ZbCCRPc9IY6rVMSWWg5CDDG9Q9nR47JFd4Q4aDsQ8ln++uVysLqUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NCThOBywJsRM4cIDOyqxsqcp71KqHSL7k+aLtuBgGX4=;
 b=AtQUC3zpAbGuMgQfVn4s+lH8uEUnLOS7WGVEQnhaSbx0bsdyQuMxoSymyHOAPT/FmIs5MZAa+0l0U883k9GXobnvukSzE0uXghJyGBxz3PFyrjSdTZ5WdmUFSoHkUui41Dx2k9HEDE2AEjV5Sf0xY8n5nGg+og6lJ47P/W3gb80Fw540bPLFcfYLVaxadhbRWuN9lWtnuXdEGiUP+Ds2cbtgu83mAkDMtnDpvDLvhBoZW6Ztf+tZ32fPNhnvTB9qXsFb7Wi5wJcOphlt6IpjYck6vVCCWqyeIsiubJLFCO9zv29ldUOdU7etLo0Tu+R1k7kfNenB2wrp6NwB9uc+2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NCThOBywJsRM4cIDOyqxsqcp71KqHSL7k+aLtuBgGX4=;
 b=wGrqLD+46xnN1BKKV0pHLuDUVmLVLAw3VVJU2FjFUjiV0WjfoIpcoTNy1EoZrEzTBEf8tGjaBXgcbMQcHy9Au8SDWTRmvitB9B+ejSJS2qaTq1Kqq4WfJYF3oxzlQ4KYE0kpuYw5UAeQwmli3wrKw51orS0nM/UpWn0aRbnt30o=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <06e08bf5-ac14-b0ae-743e-60da5e2396e4@amd.com>
Date: Wed, 24 May 2023 15:21:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [XEN v7 07/11] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
To: Michal Orzel <michal.orzel@amd.com>,
 Ayan Kumar Halder <ayan.kumar.halder@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com, julien@xen.org,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
 <20230518143920.43186-8-ayan.kumar.halder@amd.com>
 <66f04988-9a4b-39c0-fb17-c508b98e3bdf@amd.com>
From: Ayan Kumar Halder <ayankuma@amd.com>
In-Reply-To: <66f04988-9a4b-39c0-fb17-c508b98e3bdf@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P265CA0242.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:350::16) To SN6PR12MB2621.namprd12.prod.outlook.com
 (2603:10b6:805:73::15)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SN6PR12MB2621:EE_|SJ2PR12MB8135:EE_
X-MS-Office365-Filtering-Correlation-Id: 3bf46f4e-9076-4c2c-b0c4-08db5c623f2b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	OrsW8L9lg5nWz3yz2XlXnnzcBRoa94Gp4ycVMkK0VStmwv1Y5ZDfxgRsObvB84V7yRh3XVW1Uaux6F6W+sHLRBdn91nJ2fjkGkylLrh/f0fvToTRpt0gP4OoDikSQciQMPnsqruC7g4gkOl9FwkSYTfYoxHunFrX3WCJw9JEV6iBps+Tk28qJArkIVvEo0Qcrb5rZ2zHtoFTsuHJ1GotW4wJoaU6NkF8kAKhXpEAOEcUblduwyA/oYwPt7HY08VcWOGOTrU3ReYDFwgKn2XnCD/HUpSwBAzvufoDurJ/OzbPt8mpe6qfq+/NhhyfGh+M7h/GrA8gvGg2hRg1aWqFRC3xKtiHWdLuYZZsESsYdoGGysANN1RkhmVxUtHSPPZ2vCDz0HlecEe6CbTotaUmroMt8BkJN1vyXDmS9ssHzVYQyuriVnj7TYPz8VA7J9EZPNWzU2+jnkZ5QbcY7XLljM+DUKocCuhC2aKJDv23LWxOFmuNa2AV5wNykI8qKEuVd0Q73R+KALQgVtyH9S9MC9SO8806CWQf96QWGwnwwX6eyypBKm8tqGvKXW2dR45grbSbyfKaljZH8UZiST8rqGDCH2k5yjNdVy+CQrkuueHnXIT7UkLYgLsE1CXGOQuQ9MdzL2H3BK5IugSjFavSkw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SN6PR12MB2621.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(346002)(136003)(396003)(39860400002)(376002)(451199021)(31686004)(66476007)(66556008)(66946007)(31696002)(478600001)(41300700001)(6486002)(6666004)(316002)(110136005)(4326008)(5660300002)(8676002)(8936002)(26005)(6512007)(186003)(53546011)(38100700002)(6506007)(7416002)(83380400001)(2906002)(2616005)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bVBXeU5YZG11bVFPazhtbEROcVIyaG81azhqL1RsckZvVzlzbzF2ZGZqSklI?=
 =?utf-8?B?dDY4WTdRUzB4N3R0Mk9qM05uLzBVQ1d4THRXVEpOQWVCT0dkeUJmaXE4WGRj?=
 =?utf-8?B?NUx0K01YSUlDOXBKeCtQcngxdCtMVWNqejJJZUFSQ1ppNjFTQmtHeFIxVnNK?=
 =?utf-8?B?SmpFRk1sQ1I5Q24ya2ZpSHpRU2ZzVU9yU3kyaDBieFQrbFcrckRpS2txaXFx?=
 =?utf-8?B?QU5IN0dEbFBTM0lYRXdhK1lXQTAvdXQ1OGdMQURZUTUwaHZaVUIra0FZNkVs?=
 =?utf-8?B?ZFkyRlBxVDdCN3FFYVBGdUJiVnJodnZsWU1lREM5LzRSQU5ETzIweUlEZ21T?=
 =?utf-8?B?WEdLc0RvanlLb1ZabkFMdWVta1JsaXpZcXdTa1JsM2ZMUjQ0d0h3cnYyUVlB?=
 =?utf-8?B?VG9RTGRNbjVvYTRhRmRyQnhraC9vZU8vYkQ1M04yTHBhWGtxTmVvNlNobzNI?=
 =?utf-8?B?dXRnTWF0QmVXaU1XZ3ZJUy9rd3FOc0lCa00ycE80d2VRU2RoVDBWbjlGRUZS?=
 =?utf-8?B?bStBSFVDVzFqczJtTmliOGE2RXI1WlA3K3VpUGdiOFVaRUpLZ0xkYWRNN1lO?=
 =?utf-8?B?WmR4a1ovQVU4enpNZVk3bEVxRkpGME9FeTk5TjI0TW5OSXA0ZVRSa01uUmhJ?=
 =?utf-8?B?a2hRNHc2amFkamxoVFgrTUJnNXN4VW02NlJIM3BHMWVXYU5uMWdOdVJnRnpk?=
 =?utf-8?B?RXVlcXU1MHpxbE1jWWxIcjU5ZSt2NDBBNWY1TUhJYlcrRkttbzhSTzBvcEds?=
 =?utf-8?B?RnFRZkNEUExWbGdHam9SOWJiN0JsU21SSFN6UkgxRk1yMjlpL252RllIaTgy?=
 =?utf-8?B?Ulp3M2I1WlRMeGZHb2RSWXR3TWRURERNY2hHYUlncWUvTU5WekZRc0Nzd3kv?=
 =?utf-8?B?bnZmSmlTaXZiVStVQmNjSjBEVURiQy9yelZ1czlhQjFZWTVrajM1YjA3NkQz?=
 =?utf-8?B?a3V0b3V3Sy9JdjlWZjFGWU05cE9PYmV1RFdZOFNsZG1FaVpNU2R4KzJPNjlr?=
 =?utf-8?B?bkJDZW85c2pGVFVoTmVDTkt4a1d3Q2RrbmM5Y05sME5KWG1NNXNSOWFzemRM?=
 =?utf-8?B?TlNqYWpDZHdWUGg0UERXWktiNVBPOFFiaWVLeDFTN3NZV2xBbXluRlQvV0VG?=
 =?utf-8?B?MU02UFhtK3l0VXFObU5CcG1FekJWL2x1Zk9DaHAyUnFVVXFlYTdublBUSkFs?=
 =?utf-8?B?RCt1ZGxIQjE1MUttNFVEbktGcms2TVc0c1JLSGZQM0JtcWZtOXVkZWhsVHFW?=
 =?utf-8?B?U1BFSEJHNGJYc0hvU0VSdE5CUnVLT05wQVFseHNBSi9BSDU5akh1NUpGTUhx?=
 =?utf-8?B?eVBnWm8wTEQ3WkkwY3lLYXdYSEJJWVJuakJhV0xnRllFQm13cHNNUFVoYUpv?=
 =?utf-8?B?SDVMUGRyMUU5NUd2UGhkT3JzcEY3bFZCYU9EaVp3blNCR3l4RUJnQ1VNQ2tP?=
 =?utf-8?B?QlhFUlBxNzdESTRjYVlmS1J3S3VBUWRRUGlMWTN6UXI3RXU0U0xYNi9LMFE2?=
 =?utf-8?B?WWpXclFEVmpnWVB2aUxoMVB0Wksyd2JMTnpvR1F6S3FNdXdVWnpQU3c0NXBl?=
 =?utf-8?B?MkEwMXB3VGRLZFdpV1RVMit4S1dqcjM1RzRwa2Q4Z0VwNm5pY1dJUU90NDll?=
 =?utf-8?B?ZW1SUllOWWpQZXRDZVVLWG9ERzZaeHJpMUw1WnV4QjhXdHg1N1U3U0VsT3RH?=
 =?utf-8?B?VFN1M2NMYUVjcTRnUzNYNVBtOWZEYWNwdFhEVGdSRVFXZDdDakRUWHlWTnl5?=
 =?utf-8?B?NFVNVGNhNk9mZlVDclppTzM5RTNTdWgweWJJcysrQjRwOTZnYVFKLzVhNU94?=
 =?utf-8?B?OHJxWkw1OVhOM3grVFNodllUWUNhemI0TlVHYmdYWUZYL2laY3NUN2JiKytt?=
 =?utf-8?B?QnNvb1hpTWljTjNqN1grMmhGQngwWDlxV1hzbDhYb3o1ZVNMYmhQeDdSWHpH?=
 =?utf-8?B?cldjeTAzUTRFa0ZmbnBOcUdtVk83QmhzSHJCc3U1dDBJQnFFaTQ3cTFGdEQ2?=
 =?utf-8?B?WUxLK0l3R0hEaDl5bDl1dFNEdzZ2dXNtSnhubUNWTk5rWFNicWNMWlQ1Slpp?=
 =?utf-8?B?a3UzQ1FSeGZaRFJyUDd3M1RYV0F0czZGNTdwdnJFRDZQQStMdys4NU5QclRB?=
 =?utf-8?Q?/pCH0RbDlgVYOfHYz9EZJpcPX?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3bf46f4e-9076-4c2c-b0c4-08db5c623f2b
X-MS-Exchange-CrossTenant-AuthSource: SN6PR12MB2621.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 14:22:04.1935
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jmOwpMZpjVFj/htW2B/Kw/LB1jY/kz/ZjYB/XAd/spibcl6e0HcNCxF14bxpN2VProb12NpWnlYnZ+6rT/EAkA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB8135


On 19/05/2023 09:54, Michal Orzel wrote:
> Hi Ayan,
Hi Michal,
>
> On 18/05/2023 16:39, Ayan Kumar Halder wrote:
>> Restructure the code so that one can use pa_range_info[] table for both
>> ARM_32 as well as ARM_64.
>>
>> Also, removed the hardcoding for P2M_ROOT_ORDER and P2M_ROOT_LEVEL as
>> p2m_root_order can be obtained from the pa_range_info[].root_order and
>> p2m_root_level can be obtained from pa_range_info[].sl0.
>>
>> Refer ARM DDI 0406C.d ID040418, B3-1345,
>> "Use of concatenated first-level translation tables
>>
>> ...However, a 40-bit input address range with a translation granularity of 4KB
>> requires a total of 28 bits of address resolution. Therefore, a stage 2
>> translation that supports a 40-bit input address range requires two concatenated
>> first-level translation tables,..."
>>
>> Thus, root-order is 1 for 40-bit IPA on ARM_32.
>>
>> Refer ARM DDI 0406C.d ID040418, B3-1348,
>>
>> "Determining the required first lookup level for stage 2 translations
>>
>> For a stage 2 translation, the output address range from the stage 1
>> translations determines the required input address range for the stage 2
>> translation. The permitted values of VTCR.SL0 are:
>>
>> 0b00 Stage 2 translation lookup must start at the second level.
>> 0b01 Stage 2 translation lookup must start at the first level.
>>
>> VTCR.T0SZ must indicate the required input address range. The size of the input
>> address region is 2^(32-T0SZ) bytes."
>>
>> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of input
>> address region is 2^40 bytes.
>>
>> Thus, pa_range_info[].t0sz has VTCR.S = 1 and VTCR.T0SZ = 8, ie 11000b, which is 24.
>>
>> VTCR.T0SZ, is bits [5:0] for ARM_64.
>> VTCR.T0SZ is bits [3:0] and S(sign extension), bit[4] for ARM_32.
>>
>> Thus, VTCR.T0SZ bits are interpreted accordingly for different architecture.
>> For this, we have used union.
>>
>> pa_range_info[] is indexed by ID_AA64MMFR0_EL1.PARange which is present in Arm64
>> only. This is the reason we do not specify the indices for ARM_32. Also, we
>> duplicated the entry "{ 40,      24/*24*/,  1,          1 }" between ARM_64 and
>> ARM_32. This is done to avoid introducing extra #if-defs.
>>
>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>> ---
>> Changes from -
>>
>> v3 - 1. New patch introduced in v4.
>> 2. Restructure the code such that pa_range_info[] is used both by ARM_32 as
>> well as ARM_64.
>>
>> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and P2M_ROOT_LEVEL.
>> The reason being root_order will not be always 1 (See the next patch).
>> 2. Updated the commit message to explain t0sz, sl0 and root_order values for
>> 32-bit IPA on Arm32.
>> 3. Some sanity fixes.
>>
>> v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range. ie
>> when PARange is 0, the PA size is 32, 1 -> 36 and so on. So pa_range_info[] has
>> been updated accordingly.
>> For ARM_32 pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do not support
>> 32-bit, 36-bit physical address range yet.
>>
>> v6 - 1. Added pa_range_info[] entries for ARM_32 without indices. Some entry
>> may be duplicated between ARM_64 and ARM_32.
>> 2. Recalculate p2m_ipa_bits for ARM_32 from T0SZ (similar to ARM_64).
>> 3. Introduced an union to reinterpret T0SZ bits between ARM_32 and ARM_64.
>>
>>   xen/arch/arm/include/asm/p2m.h |  6 ------
>>   xen/arch/arm/p2m.c             | 37 +++++++++++++++++++++++-----------
>>   2 files changed, 25 insertions(+), 18 deletions(-)
>>
>> diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
>> index f67e9ddc72..940495d42b 100644
>> --- a/xen/arch/arm/include/asm/p2m.h
>> +++ b/xen/arch/arm/include/asm/p2m.h
>> @@ -14,16 +14,10 @@
>>   /* Holds the bit size of IPAs in p2m tables.  */
>>   extern unsigned int p2m_ipa_bits;
>>   
>> -#ifdef CONFIG_ARM_64
>>   extern unsigned int p2m_root_order;
>>   extern unsigned int p2m_root_level;
>>   #define P2M_ROOT_ORDER    p2m_root_order
>>   #define P2M_ROOT_LEVEL p2m_root_level
>> -#else
>> -/* First level P2M is always 2 consecutive pages */
>> -#define P2M_ROOT_ORDER    1
>> -#define P2M_ROOT_LEVEL 1
>> -#endif
>>   
>>   struct domain;
>>   
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 418997843d..755cb86c5b 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -19,9 +19,9 @@
>>   
>>   #define INVALID_VMID 0 /* VMID 0 is reserved */
>>   
>> -#ifdef CONFIG_ARM_64
>>   unsigned int __read_mostly p2m_root_order;
>>   unsigned int __read_mostly p2m_root_level;
>> +#ifdef CONFIG_ARM_64
>>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>   /* VMID is by default 8 bit width on AArch64 */
>>   #define MAX_VMID       max_vmid
>> @@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
>>       /* Setup Stage 2 address translation */
>>       register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>>   
>> -#ifdef CONFIG_ARM_32
>> -    if ( p2m_ipa_bits < 40 )
>> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
>> -              p2m_ipa_bits);
>> -
>> -    printk("P2M: 40-bit IPA\n");
>> -    p2m_ipa_bits = 40;
>> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
>> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
>> -#else /* CONFIG_ARM_64 */
>>       static const struct {
>>           unsigned int pabits; /* Physical Address Size */
>>           unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
>> @@ -2265,6 +2255,7 @@ void __init setup_virt_paging(void)
>>       } pa_range_info[] __initconst = {
>>           /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */
>>           /*      PA size, t0sz(min), root-order, sl0(max) */
>> +#ifdef CONFIG_ARM_64
>>           [0] = { 32,      32/*32*/,  0,          1 },
>>           [1] = { 36,      28/*28*/,  0,          1 },
>>           [2] = { 40,      24/*24*/,  1,          1 },
>> @@ -2273,11 +2264,22 @@ void __init setup_virt_paging(void)
>>           [5] = { 48,      16/*16*/,  0,          2 },
>>           [6] = { 52,      12/*12*/,  4,          2 },
>>           [7] = { 0 }  /* Invalid */
>> +#else
>> +        { 40,      24/*24*/,  1,          1 },
>> +        { 0 },  /* Invalid */
> Do we really need this invalid entry?

Actually I preserved it to be consistent with ARM_64, but we can drop it 
for ARM_32.

The only benefit of keeping this entry is that if p2m_ipa_bits is set to 
0 in p2m_restrict_ipa_bits(), then ...

panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);

... will print pa_range as an index pointing to the invalid entry.


Also, this reminds me that I should change the message, as 
ID_AA64MMFR0_EL1 does not exist on ARM_32.

So this may look better for both ARM_64 and ARM_32 :-

panic("Unsupported value for p2m_ipa_bits = 0x%x\n", p2m_ipa_bits);

>
>> +#endif
>>       };
>>   
>>       unsigned int i;
>>       unsigned int pa_range = 0x10; /* Larger than any possible value */
>>   
>> +    /* Typecast pa_range_info[].t0sz into ARM_32 and ARM_64 bit variants. */
> This would want some explanation in the code.

/*
 * VTCR.T0SZ is bits [5:0] for ARM_64.
 * VTCR.T0SZ is bits [3:0], and S (sign extension) is bit [4], for ARM_32.
 * Thus, the VTCR.T0SZ bits are interpreted differently between the two
 * architectures.
 */

>
>> +    union {
>> +        signed int t0sz_32:5;
>> +        unsigned int t0sz_64:6;
>> +    } t0sz;
>> +
>> +#ifdef CONFIG_ARM_64
>>       /*
>>        * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
>>        * with IPA bits == PA bits, compare against "pabits".
>> @@ -2291,6 +2293,7 @@ void __init setup_virt_paging(void)
>>        */
>>       if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
>>           max_vmid = MAX_VMID_16_BIT;
>> +#endif
>>   
>>       /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits". */
>>       for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
>> @@ -2306,24 +2309,34 @@ void __init setup_virt_paging(void)
>>       if ( pa_range >= ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_range].pabits )
>>           panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
>>   
>> +#ifdef CONFIG_ARM_64
>>       val |= VTCR_PS(pa_range);
>>       val |= VTCR_TG0_4K;
>>   
>>       /* Set the VS bit only if 16 bit VMID is supported. */
>>       if ( MAX_VMID == MAX_VMID_16_BIT )
>>           val |= VTCR_VS;
>> +#endif
>> +
>>       val |= VTCR_SL0(pa_range_info[pa_range].sl0);
>>       val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
>>   
>>       p2m_root_order = pa_range_info[pa_range].root_order;
>>       p2m_root_level = 2 - pa_range_info[pa_range].sl0;
>> +
>> +#ifdef CONFIG_ARM_64
>> +    t0sz.t0sz_64 = pa_range_info[pa_range].t0sz;
>>       p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
> This should be:
> p2m_ipa_bits = 64 - t0sz.t0sz_64;
Agreed. I should fix this.
>
> Another alternative would be to use anonymous unions+struct/union inside pa_range_info, e.g:
>          union {
>              unsigned int t0sz;
>              struct {
>                  signed int t0sz_32:5;
>              };
>          };
> so, if t0sz stores 24, t0sz_32 would automatically store -8.
> This could simplify the code later on, as you could just do:
>
> #ifdef CONFIG_ARM_64
>      p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
> #else
>      p2m_ipa_bits = 32 - pa_range_info[pa_range].t0sz_32;
> #endif
>
> However, I think it would require placing extra braces around initializers, i.e:
> [0] = { 32,      {32/*32*/},  0,          1 },

I am not very sure about this alternative approach because of the extra braces.

However, if you think it is better this way, then I can change it.

Maybe Julien/Stefano should also comment on this alternative approach.

- Ayan

>
> ~Michal


From xen-devel-bounces@lists.xenproject.org Wed May 24 14:22:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 14:22:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539073.839595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pNm-0001ik-6j; Wed, 24 May 2023 14:22:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539073.839595; Wed, 24 May 2023 14:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pNm-0001id-3v; Wed, 24 May 2023 14:22:30 +0000
Received: by outflank-mailman (input) for mailman id 539073;
 Wed, 24 May 2023 14:22:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q8ut=BN=citrix.com=prvs=5011a8a4f=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q1pNl-0001KD-MO
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 14:22:29 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6895247c-fa3e-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 16:22:28 +0200 (CEST)
Received: from mail-bn8nam04lp2048.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 24 May 2023 10:22:25 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by CO1PR03MB5666.namprd03.prod.outlook.com (2603:10b6:303:9c::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 14:22:23 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Wed, 24 May 2023
 14:22:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6895247c-fa3e-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684938148;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=FzkJ1hDv25QnRJTEZ8GbIwCNr4VWqj7XJ/9txhKdoCw=;
  b=OeErEvT9zb8/0bb2Jjbp+P34u5+gwH0EIpu7cp4Ek7vSLUZwGSU21H1G
   xj7rt7aJqlBw3vbFECUB1qQKi51EHXWvGv2MdAaT3i1iiwGQgGH6pyvkd
   iMh/LoXcoDWz0T4IekzkxZuyNw01QFCI6E9gbRpYXuv8UU7vqt2PTczh6
   w=;
X-IronPort-RemoteIP: 104.47.74.48
X-IronPort-MID: 108994698
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:1gWgJKgPZ75eC5DIkAYqkNtJX161SxEKZh0ujC45NGQN5FlHY01je
 htvUWyGa/2INzfzc4txbty18hkCvZPVndI3TFFlrSlmFH8b9cadCdqndUqhZCn6wu8v7q5Ex
 55HNoSfdpBcolv0/ErF3m3J9CEkvU2wbuOgTrWCYmYpHlUMpB4J0XpLg/Q+jpNjne+3CgaMv
 cKai8DEMRqu1iUc3lg8sspvkzsx+qyq0N8klgZmP6sT4QWCzyJ94K83fsldEVOpGuG4IcbiL
 wrz5OnR1n/U+R4rFuSknt7TGqHdauePVeQmoiM+t5mK2nCulARrukoIHKN0hXNsoyeIh7hMJ
 OBl7vRcf+uL0prkw4zxWzEAe8130DYvFLXveRBTuuTLp6HKnueFL1yDwyjaMKVBktubD12i+
 tQfAx4xfiyDgdu156uxQ+J9vtktHo7CadZ3VnFIlVk1DN4AaLWbH+Dgw48d2z09wMdTAfzZe
 swVLyJ1awjNaAFOPVFRD48imOCvhT/0dDgwRFC9/PJrpTSMilMtluS9WDbWUoXiqcF9hEGXq
 3iA523kKhobKMae2XyO9XfEaurnxHurCdNOSuzonhJsqGap4Vc5FQUqaV+2j8fo2m2RRvBuE
 1NBr0LCqoB3riRHVOLVXRe1vXqFtR40QMdLHqsx7wTl4rrZ5UOVC3YJShZFacc6r4kmSDoyz
 FiLktj1Qzt1v9W9UmmB/72ZqTezPyk9LmIYYyIACwwf7LHeTJobixvOSpNpFv6zh9itRTXom
 WjW/G45mqkZitMN2+Oj51fbjjmwp5/PCAko+gHQWWHj5QR8DGK4W7GVBZHgxa4oBO6kopOp5
 RDoR+D2ADgyMKyw
IronPort-HdrOrdr: A9a23:1aLDfaDCzONfF/vlHemT55DYdb4zR+YMi2TDGXoBMCC9E/bo7/
 xG+c5w6faaskd1ZJhNo6HjBEDEewK+yXcX2+gs1NWZLW3bUQKTRekI0WKh+V3d8kbFh4lgPM
 lbAs5D4R7LYWSST/yW3OB1KbkdKRC8npyVuQ==
X-Talos-CUID: 9a23:fxrHDWwOZcigy/i5DbzIBgUTMe4ZYkHdyk7TBFayAG14cKCbexi5rfY=
X-Talos-MUID: 9a23:o06WqwlLQoPxDbcKZvZjdnpZDuBTybm2NHwJnLs46pmnGjFSCg+C2WE=
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="108994698"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SFFKlGNgIkfSj0LBTZWwejn2B0DXdYZLqDVADoumeFYhOdfHCI1TCCp8mSqGF0aRqx45Y84Ewac0F/ftYzg9ShjrBxvdC1SPxTWKohgxMJpc512cqDDv84z8o9TDGaLSlVPbBMIUUcLHk+O65wU2EtIVRuKdaF9frK+9Yq+pe/T+yoGQPti6Pk70KMCMB05zh4QXBK+DsN+cGzPxq3ZvUjx5KSFFcCWnVvvC3pt1jsWBygA6rdPiCP0rUJPCcJABe0WDdfoej9dyw80poVpAHtt9hITBPMcoHPISWZzZfSLqWF6oAbk64cEK1c5JLtv1GlT2Rhb7xmJS/ptZJpu7xw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=npxPhcN9pONCtO8Yp8EXA1Q01+mBUV2Kawsye0P5PaE=;
 b=oXEq26fGhg2a0RLpc6t5GKplCGSvyA0XHjETdRLlv3CFFlQqHCtfVH3v1GtPcHyY4wN+F2PSBq9FmC9Ly9sab/lG/NMA3+cvg6fxCpNkmSv1EfiaA5pMqxM6R8mBPcIpIjFC4I8X/tuaMXnznGU/k1cg/pNh4z3HlJVAC8wmcNjwiFcBEf3m7nTB2bWiWWRy0F83DvbVY7ZAhJn0SWegI8owYbhlGk8l4cOJQqVR+0CoQiYecGeeD+t3O0DTMRc0BezVDJ5MfYSh6W86aVV8RDohcCjhtIDJ7iOlssrhuYyoG2mUR9mHyFgeoK7sWexiz2amy+Koz6QC8QWxL74+Ng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=npxPhcN9pONCtO8Yp8EXA1Q01+mBUV2Kawsye0P5PaE=;
 b=FXS+tQAAakzCZN23HJkhf26s8g0Yv9TulqAc8AO/gWHeUuyjaEDGHjV8px7M2RakpbPoIvV4A9O0jnVu/YPFCZidZOhSmsu7WOg5UnsgB1Od4P/B0Nxo+V6x7GW7/BLJbFHxRwfexpm88YAS4+/OnZGcAZcb3YS6Sr+QTRTWxHg=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 24 May 2023 16:22:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Message-ID: <ZG4dmJuzNVUE5UIY@Air-de-Roger>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
X-ClientProxiedBy: LO4P123CA0576.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:276::23) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|CO1PR03MB5666:EE_
X-MS-Office365-Filtering-Correlation-Id: 5ecaa73c-6fcc-4f60-6a0c-08db5c624a38
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ZhxOCBvAtgrKe3vfi3jHHii/Wtr6nZAZvbpXJk1iUDmBP9TMdLn4IXNXy8EiAjJV8Ww95VMclQNdfzx03vsOCH6x13B82xH86sjUxCmRJDgdd+fQcilYR+lRVH4Jv92rLAWhwsPp1+muAnDcELvQ9ocsvhzvWni8xO4PZ69mJOSmSyNGE9FYONxQsAN0XJH8/T2VTlDO3WDW8o5pssFwi+dDHx7ffeoUcdBX7E2iGgm7DFL2Y026Vt+HYpWDTVW32j/ydRX54eiNEpTWno2siq59tBHGeRQb606af1qgPxPWnhJN/qiOh2gpPglfs1nw83j4KuGb7C9jBx7R6A6/zAQhRq4rI2PSOba0iVkfhmsanKOr9eIh9FzC16vBHzD9+fasUoNrAIXkz98Rz1VZh0C8QZJL+EiksVY/9b8suCyb3LanfuWLOpQjvSqdo3ACFYGqXG1mnyx/qou5UVUFCX038ba2Czt2BPP5tJVgB+lCr8j7LKw4+WAXJb9RRFIZ6wQPdQMWX57H+TMscIj6Ew9VgqrlZjcnw61aEG7Ag4iD8XXBooZzstT4WXN+SIM5
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(366004)(346002)(136003)(396003)(39860400002)(376002)(451199021)(33716001)(66476007)(66556008)(66946007)(478600001)(6916009)(41300700001)(6486002)(6666004)(54906003)(316002)(4326008)(86362001)(5660300002)(8676002)(8936002)(66899021)(15650500001)(82960400001)(26005)(6512007)(186003)(38100700002)(9686003)(6506007)(83380400001)(2906002)(85182001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?QUdBNmlydnZrcGZhcDdwb3ZBbVk1MDV6cTNjSTZTT2VBSU9FeXdveURaVFdj?=
 =?utf-8?B?alhhc0NBQ0h4MzlnNmh2SlpScXZEbnVLdDREVzdNWS9pdXJMeUE1VFZ6VG9R?=
 =?utf-8?B?bkhDUTBMbm1hb294N3hvWU9TRXM5TXcwNlB5NWZjNFR0RkdyTHpvTjQxT2tq?=
 =?utf-8?B?T0ZBZ0I3dnB2UFBqeldaT1E5VnIzZ2l1T3dpdDVaVzVabUI4cjZYMU9icFRC?=
 =?utf-8?B?UTg5N2MydmxteDIvaVVlM0doME1QSWVYdWdJNkhnOUxYL3BqWFE0cEw1aS92?=
 =?utf-8?B?NUxrQUJjT3p4eFZEbkxER0JvZUFQczBqblFXL2VzQnhDbVBxNVVqZnoxbHNn?=
 =?utf-8?B?VllxWHZTNFNNV09tSXRGZ2tqQ3Nob3dIOXdTZG03UUdBSWxRcGxTdXlweU5v?=
 =?utf-8?B?VjV1OUllTTU5MzhZb2laamw3RzdFemlxbDk3bGlwYkhtQmFFd0pVWlVZQ2Zs?=
 =?utf-8?B?TUsrejN0ZkJHQmh6YTdYZncyVStpUVZRUExRNHRKUGtieWYzdW9LblF1N0pz?=
 =?utf-8?B?V0RhWUxyY2NYNVBjbUdnTndmVk5ZTXc2YkZNNGhyaHpVM2NpTVBVMVJ5M3JO?=
 =?utf-8?B?WWJVWDdLYlU3TDE1cUxwL2lUZisxbUI4ZlA4WXV3TjQrSEJiblJ0Vytqak5Y?=
 =?utf-8?B?LzhCL3E4YUpLK2pSbnFRN2NwWUNvZjh5UDJ2LzBqWm9OUkpGSWozWW8yWDYw?=
 =?utf-8?B?ei9kYXJYVkhVb0FhaEk3RE1DaVdIdjhHamhKaC9PSVI5UHJIY3pLZnFQUldQ?=
 =?utf-8?B?TlhkbGVNUUdrY2VZbzFyWU5EWEtrYUFKQXdkbFJENjNwa0V0ZnVZRlkxaWln?=
 =?utf-8?B?QzVPdW15Ynl4S0ZjaFVDcmNJSVZDU21tb0F3SWZBMzZEc3lvMHBNZDUxby9t?=
 =?utf-8?B?eUFmbUd2Wk1iSlBSQnF0YzJHRVRRb3k5M3pPT0N1UDFUdnpCY2RsTHZGZkFV?=
 =?utf-8?B?L2R5OXRRZnQvZG1pd3ZndEQ2T3dGa3R1bUVyczFldTNObmVtQWlFSnRXWTh6?=
 =?utf-8?B?TFVaYTZ0MjRrQmhGT0N2bEZWcnNPNStnSHB1WGRFOCtqSktJYkhmVXZTTkRq?=
 =?utf-8?B?WjRjYVJDTldYd2YzVXVpWnYrZCszbXh4bStXelpMZFVqOFpZamZZVjBoV2FE?=
 =?utf-8?B?ay9GV1pJeEd3TUZXYVZXUzdXSTBNei9UdWtjcjBVV25QVFdRa29Ea3FiK3Ew?=
 =?utf-8?B?VUpJa0NLRTB0Y01XMFJFc1owZEoxU2h0N2JpSjYvaCs3L0hubHB4U3MwYUhS?=
 =?utf-8?B?LytMQU9ldS9qN0w3WmRBdTVuSlZFZ3VVS0hkYWl6UlZFQnNnTmVvNlV1RHBL?=
 =?utf-8?B?Ym0rTkpRd2hIbTR0RlNTdEpxa3NldDVYMnA4M0F3eXRxT255c3hjMzdQYkRx?=
 =?utf-8?B?SE5KUWZwWS96d3oyanNEdzNZd25kN3dObkxiWlZwcWcrUEJOT2poenI1cUgx?=
 =?utf-8?B?VXJoZEJjd1lGbGEwbHljM3U1WmlWNWFrQnF1TFNyM3QvOUIzNHVlalZGQWNI?=
 =?utf-8?B?NTJjeVlqL2tJalkrRFVjVkE2ZWdMY3hJVHdSa25HMjB4dS9YZ0E3ZkxaMjk3?=
 =?utf-8?B?TE9TeldNOURRczlKSTFTWThmUnVGclBDaDdDY2p1OUMxQm9qZENHejJwSytU?=
 =?utf-8?B?RFFGeG92QjdwdzhKQWw0Zit6b3pXWkhTNTE1eTBjcnFtdGo3M0hIZFVNRkJW?=
 =?utf-8?B?WDVZUHhjOUxNYVUvREt0c3FkdTJJN0tGa3ZrZCtkZ1hQQkJSYlhRR3RrNzBD?=
 =?utf-8?B?cGk4TjlXUHJRMVlkOEVjSnErOFRqcXFZWVJTb0EzeDFMdkFnWHRGSWxmTXli?=
 =?utf-8?B?aGFmTDMzeFFuMC9heFdWa0pBUmd1M0xSeGpoZVdLaHBXYXFaSnVHVnl5Rytq?=
 =?utf-8?B?eS9nZVEyRVlEdGRIN0lETTVOaXBVS1hRTjVpN2laSC90d3N4Z0pibE5mZ3JR?=
 =?utf-8?B?MW9TSWdoQzkzMWxxbzlTdkVXT2I3dml3dFRLV0pvYXltUnRzR2psakFxN3RL?=
 =?utf-8?B?WXV1eVZIQWJqNWJ5c25EWEVFeWtYWnYxR1NCSy9HNTZ2RXRlcC9FUU9naWE3?=
 =?utf-8?B?TDZpQ1Z6QWhBMWNnVCtvRjU0YzRwSlI3SUNKcWpnVURFMXV6YkZwRUNyYXkx?=
 =?utf-8?Q?KzXrCW5FY+pjycH9Os9aCgApV?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	G6wbguR+vO1/eSLBHV/BqegRJt4hYVioq1Cx/kGOvEDyGrkeWTU8bBjpbAJEduT0kIYcrOz7n9/25FN31YD6MXy11Sw3p7CPMptbi3CzumRqin47VboBEHOgc1l9dXPc+QDIb9Bfm7lX+GaEiv5bIbi7xQNP0BjvTJojw+znshi2ChmcPmbBcMX19BZ2RmhbKNnitE+r9nUqvWewqydTNhQcse41s9K2CjzQSeX4D8LVw/tsNT9Zb4zo2QT6iwqrWu7MDt9n55XSQaoPKIXiKV8XvyOFIlCl+/xdwVcL+CmQZqBmVNxuRdcM23W1Cp535TC58JEROoFLwdKgQiQyrf3fYfXKd2qRhRleV2X+Foi5Y+a4Xg0AKlaHH3RMaChvjZormO9iBZO/uXWeWCE3nk1YDryqsXtuOB6VSNZlsbz60R1wVsAk47wel3jF+ydiggfRlR/GHJwfGO1wLaTl3gL72NWhvS4A2XGZdnFU4jUx+v77rry1QikZvYyKvxD8xlG2/wwhPr+1+v8ki/wYR5TYFHUEue2xPD7agagbk3xkKEO3nMapEI9bs/OwjUlBWPZj/lf7mCJ9+ijWgcM+NFoa1LiBacW9ZdYi+506xQMK7oA/991NsuFA4HGwthvuGCGNQxq/PvGnhQJ9CwGKZxpylGSUjUY6hUxjCrdw1onkyYEbL6OHOfT8w+lZ+rdHowTo/Z7F9JgdPDR/NEq0BI0BPrazuYm/EHYO211H8Bf3nlguZk5mP3PqM6N3gITsI0WgJqiUM5UvcdGsQpTMexXbWoxa8YK7aQl3EXAeK21382UVHcC1SaEDeUm8tN/CHuBnukI0fmgT/59Ycj482w==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5ecaa73c-6fcc-4f60-6a0c-08db5c624a38
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 14:22:22.8116
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dFzJ6DebO3DGSyd2tXHsKLkkwh9KB6HD9lEsCDZ3RSNwsTi/6yLd4RW5jFay75cQDAzqM05G8sg6DTUqhgJ+dw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO1PR03MB5666

On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
> Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
> console) are associated with DomXEN, not Dom0. This means that while
> looking for overlapping BARs such devices cannot be found on Dom0's list
> of devices; DomXEN's list also needs to be scanned.
> 
> Suppress vPCI init altogether for r/o devices (which constitute a subset
> of hidden ones).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> RFC: The modify_bars() change is intentionally mis-formatted, as the
>      necessary re-indentation would make the diff difficult to read. At
>      this point I'd merely like to gather input towards possible better
>      approaches to solve the issue (not the least because quite possibly
>      there are further places needing changing).

I think we should also handle the case of !pdev->vpci for the hardware
domain, as it's allowed for the vpci_add_handlers() call in
setup_one_hwdom_device() to fail and the device would still be assigned
to the hardware domain.

I can submit that as a separate bugfix, as it's already an issue
without taking r/o or hidden devices into account.

> 
> RFC: Whether to actually suppress vPCI init is up for debate; adding the
>      extra logic is following Roger's suggestion (I'm not convinced it is
>      useful to have). If we want to keep that, a 2nd question would be
>      whether to keep it in vpci_add_handlers(): Both of its callers (can)
>      have a struct pci_seg readily available, so checking ->ro_map at the
>      call sites would be easier.

But that would duplicate the logic in the callers, which doesn't seem
very nice to me, and would make it easier to introduce mistakes if
further callers are added without the r/o check.
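The shape of that centralised check can be sketched with toy types (the struct layout, bitmap, and helper names below are illustrative stand-ins, not Xen's actual pci_seg/ro_map API):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Toy stand-in for a segment's r/o device bitmap (one bit per BDF).
 * Illustrative only; Xen's real ro_map lives in struct pci_seg. */
#define MAP_WORDS 64
struct pci_seg {
    unsigned long ro_map[MAP_WORDS];
};

static bool pdev_is_ro(const struct pci_seg *pseg, unsigned int bdf)
{
    const unsigned int bits = 8 * sizeof(unsigned long);

    return (pseg->ro_map[bdf / bits] >> (bdf % bits)) & 1;
}

/* Keeping the check inside the one helper means every current and
 * future caller gets the r/o handling for free. */
static int vpci_add_handlers_sketch(const struct pci_seg *pseg,
                                    unsigned int bdf)
{
    if ( pseg && pdev_is_ro(pseg, bdf) )
        return 0; /* suppress vPCI init for r/o devices */
    /* ... real handler registration would go here ... */
    return 1;
}
```

A new call site then needs no knowledge of the r/o map at all, which is the point being argued above.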

> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
>      modify_bars() to consistently respect BARs of hidden devices while
>      setting up "normal" ones (i.e. to avoid as much as possible the
>      "continue" path introduced here), setting up of the former may want
>      doing first.

But BARs of hidden devices should be mapped into the dom0 physmap,
shouldn't they?  And hence doing those before or after normal devices
will lead to the same result.  The loop in modify_bars() is there to
avoid attempting to map the same range twice, or to unmap a range while
there are devices still using it, but the unmap is never done during
initial device setup.
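For what it's worth, the two-pass iteration under discussion can be modelled in isolation like this (the structs and hand-rolled list walk are simplified stand-ins for Xen's for_each_pdev() machinery, not the real code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-ins for struct domain / struct pci_dev; only the
 * iteration shape is modelled, not rangesets or locking. */
struct pci_dev {
    unsigned long bar_base, bar_size;
    struct pci_dev *next;
};

struct domain {
    struct pci_dev *pdev_list;
    bool is_hwdom;
};

/* Count BARs overlapping [base, base + size): scan the device's own
 * domain and, for the hardware domain only, make a second pass over
 * dom_xen's (hidden) devices. */
static unsigned int count_overlaps(const struct domain *own,
                                   const struct domain *dom_xen,
                                   unsigned long base, unsigned long size)
{
    unsigned int overlaps = 0;
    const struct domain *d = own;

    for ( ; ; d = dom_xen )
    {
        const struct pci_dev *p;

        for ( p = d->pdev_list; p; p = p->next )
            if ( base < p->bar_base + p->bar_size &&
                 p->bar_base < base + size )
                overlaps++;

        if ( !d->is_hwdom ) /* only dom0 needs the dom_xen pass */
            break;
    }

    return overlaps;
}
```

Since only the hardware domain takes the second pass, the ordering of the two passes does not change the set of BARs considered, matching the argument above.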

> 
> RFC: vpci_write()'s check of the r/o map may want moving out of mainline
>      code, into the case dealing with !pdev->vpci.

Indeed.

> ---
> v2: Extend existing comment. Relax assertion. Don't initialize vPCI for
>     r/o devices.
> 
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>      struct vpci_header *header = &pdev->vpci->header;
>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>      struct pci_dev *tmp, *dev = NULL;
> +    const struct domain *d;
>      const struct vpci_msix *msix = pdev->vpci->msix;
>      unsigned int i;
>      int rc;
> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
>  
>      /*
>       * Check for overlaps with other BARs. Note that only BARs that are
> -     * currently mapped (enabled) are checked for overlaps.
> +     * currently mapped (enabled) are checked for overlaps. Note also that
> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
>       */
> -    for_each_pdev ( pdev->domain, tmp )
> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
> +    for_each_pdev ( d, tmp )
>      {
>          if ( tmp == pdev )
>          {
> @@ -304,6 +307,7 @@ static int modify_bars(const struct pci_
>                   */
>                  continue;
>          }
> +if ( !tmp->vpci ) { ASSERT(d == dom_xen); continue; }//todo
>  
>          for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
>          {
> @@ -330,6 +334,7 @@ static int modify_bars(const struct pci_
>              }
>          }
>      }
> +if ( !is_hardware_domain(d) ) break; }//todo

AFAICT in order to support hidden devices there's one bit missing in
vpci_{write,read}(): the call to pci_get_pdev() should be also done
against dom_xen when handling accesses originated from the hardware
domain.

One further question is whether we want to map BARs of r/o devices
into the hardware domain physmap.  Not sure that's very helpful, as
dom0 won't be allowed to modify any of the config space bits of those
devices, so even attempts to size the BARs will fail.  I wonder what
kind of issues this can cause for dom0 OSes.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 24 14:30:38 2023
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Luca Fancellu <Luca.Fancellu@arm.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 01/12] xen/arm: enable SVE extension for Xen
Thread-Topic: [PATCH v7 01/12] xen/arm: enable SVE extension for Xen
Thread-Index: AQHZjUpOMMV4Yflw9E2LiWKuLwqFE69pIiaAgAAP2YCAAEvjgA==
Date: Wed, 24 May 2023 14:30:16 +0000
Message-ID: <10FEE82B-9EEB-4A29-BEFF-F425A396D2B6@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-2-luca.fancellu@arm.com>
 <98D7529A-FF7E-4079-B4FB-7EA096CB6822@arm.com>
 <dae8d4f9-7a1e-3940-1f25-0b1a2cb123bf@xen.org>
In-Reply-To: <dae8d4f9-7a1e-3940-1f25-0b1a2cb123bf@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Julien,

> On 24 May 2023, at 11:58, Julien Grall <julien@xen.org> wrote:
> 
> Hi,
> 
> On 24/05/2023 10:01, Bertrand Marquis wrote:
>>> diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
>>> index c4ec38bb2554..83b84368f6d5 100644
>>> --- a/xen/arch/arm/cpufeature.c
>>> +++ b/xen/arch/arm/cpufeature.c
>>> @@ -9,6 +9,7 @@
>>> #include <xen/init.h>
>>> #include <xen/smp.h>
>>> #include <xen/stop_machine.h>
>>> +#include <asm/arm64/sve.h>
>>> #include <asm/cpufeature.h>
>>> 
>>> DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
>>> @@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
>>> 
>>>     c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
>>> 
>>> +    if ( cpu_has_sve )
>>> +        c->zcr64.bits[0] = compute_max_zcr();
>>> +
>>>     c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
>>> 
>>>     c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
>>> @@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
>>>     guest_cpuinfo.pfr64.mpam = 0;
>>>     guest_cpuinfo.pfr64.mpam_frac = 0;
>>> 
>>> -    /* Hide SVE as Xen does not support it */
>>> +    /* Hide SVE by default to the guests */
>> Everything is for guests, and as Jan mentioned in another comment
>> this could be wrongly interpreted.
> 
> (Not directly related to this patch, so no changes expected here)
> 
> Hmmm... The name of the function/variable is confusing as well given that the cpuinfo should also apply to dom0. Should we s/guest/domain/?

It would make sense to do some kind of coherency check here, using "domain" whenever something applies to dom0 or a guest.
So yes, that would be a good idea, and I can add it to my todo list (after SVE is merged, to prevent conflicts).

Cheers
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Wed May 24 14:34:24 2023
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v2] x86/iommu: adjust type in arch_iommu_hwdom_init()
Date: Wed, 24 May 2023 16:30:50 +0200
Message-Id: <20230524143050.17573-1-roger.pau@citrix.com>

The 'i' iterator index stores a PDX, not a PFN, and hence the initial
assignment from start (which stores a PFN) needs a conversion from PFN
to PDX.

This is harmless currently, as the PDX compression skips the bottom
MAX_ORDER bits, which cover the low 1MB, but still do the conversion
from PFN to PDX for type correctness.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Soften the description as it's not an error.
---
 xen/drivers/passthrough/x86/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cb0788960a08..6bc79e7ec843 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -406,7 +406,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
      */
     start = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;
 
-    for ( i = start, count = 0; i < top; )
+    for ( i = pfn_to_pdx(start), count = 0; i < top; )
     {
         unsigned long pfn = pdx_to_pfn(i);
         unsigned int perms = hwdom_iommu_map(d, pfn, max_pfn);
-- 
2.40.0
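As a side note, the reason the mix-up is harmless below the compression point can be illustrated with a toy PFN<->PDX model (the shift and hole width below are made up for the example; Xen derives the real values from the memory map at boot):

```c
#include <assert.h>

/* Toy model of PDX compression: pretend (purely for illustration)
 * that bits [20, 24) of every valid PFN are zero, so they can be
 * squeezed out of the page index. */
#define PDX_SHIFT 20
#define PDX_HOLE   4

static unsigned long pfn_to_pdx(unsigned long pfn)
{
    unsigned long low = pfn & ((1UL << PDX_SHIFT) - 1);

    return low | ((pfn >> (PDX_SHIFT + PDX_HOLE)) << PDX_SHIFT);
}

static unsigned long pdx_to_pfn(unsigned long pdx)
{
    unsigned long low = pdx & ((1UL << PDX_SHIFT) - 1);

    return low | ((pdx >> PDX_SHIFT) << (PDX_SHIFT + PDX_HOLE));
}
```

Below the compressed bits the mapping is the identity, which is why using PFN_DOWN(MB(1)) (0x100) directly as a PDX happens to work today; for a start above the hole the two values would diverge.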



From xen-devel-bounces@lists.xenproject.org Wed May 24 14:41:35 2023
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ITBSDlz/GmJiDERAYt0kzo1uPR62kO6IlUcd+YHLoAI=; b=XRtydqNIJCXqviYal5DLQIROel
	IRU0YSjfgJUVlRLZJPSLZ0lbJyEDIRpPTGQDLJgcnPtkeETmhWtP+YJAnB/pXDmawusb97q9WqZVO
	gQysiYeMIdNmMFFXixTdg0ixniPNR3NreNwAOEJry+kxHyZELt+FDV/I+Jd7z5UQ8iD4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180928-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180928: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ba91d0292e593df8528b66f99c1b0b14fadc8e16
X-Osstest-Versions-That:
    ovmf=5ce29ae84db340244c3c3299f84713a88dec5171
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 14:41:23 +0000

flight 180928 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180928/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ba91d0292e593df8528b66f99c1b0b14fadc8e16
baseline version:
 ovmf                 5ce29ae84db340244c3c3299f84713a88dec5171

Last test of basis   180908  2023-05-23 01:12:21 Z    1 days
Testing same since   180928  2023-05-24 13:10:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Wendy Liao <wendy.liao@insyde.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5ce29ae84d..ba91d0292e  ba91d0292e593df8528b66f99c1b0b14fadc8e16 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 24 14:45:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 14:45:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539098.839636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pjU-00067j-Pw; Wed, 24 May 2023 14:44:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539098.839636; Wed, 24 May 2023 14:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pjU-00067c-NB; Wed, 24 May 2023 14:44:56 +0000
Received: by outflank-mailman (input) for mailman id 539098;
 Wed, 24 May 2023 14:44:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1pjT-00067W-6g
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 14:44:55 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20614.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::614])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8b5e05b9-fa41-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 16:44:54 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7731.eurprd04.prod.outlook.com (2603:10a6:20b:249::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 14:44:51 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 14:44:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b5e05b9-fa41-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gKD/MBgDCo0iPURngbKZctXRxQ2EgCJ9L+WMEO+10E9h+Mg3E09kyASvs1SEf2koi13JK/rKElzFfZAscQpaRqnSMBx3nQM2pQjGOjuKoR39WRE3SQJLcQvD32zd/KnkNRjlGjRFDAIQgpr9VXsJntwJAY2aP8Ux0clf1cVzBfOg+DJJwiv4DHh+AmprfQ0+qNi40DraJIrCCqCB1t2NJWoCe4TyuZdnPWvUXhDyLnmy9ao8SZnAMidX+ABFxBguoFqq4LqZ3PacN3t3KFOQuAO2l9196ofH30yALigpwX/YTBo3fvFSOtVUebgqrGI018n5eV7WMOPHvJCDXoGJyA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GcE4zPR0D4gZcd26hQ8qWH/3DDzjHPGYoR9nmzA4ibw=;
 b=dYJ2BsljE1fQeINF4Wnjq+dUTkjUG65v/y/VHE5NM7D/gkG9YTCynRHdfrduVlptDLxgaIta72KnV/VznMoxr8dcQFg9eAtaSkRH3cbOnimOvgRgu8oRr1GnRoBJXVlBzNBArV1XmTHgQxz9Wo6jKyxc5h7aDBFpTODE3M5OhU47xyZ+CEa31N7VsfglXR1XIj0qzoVOBMDkc0u2/Kian1u3cvoiLt6hdB4mz5vem6xllcc/rdZX266asq1cy/wfhijHNdM4QPhzxF8uj7XEd1IpUXC3hmWIaIUND5lo0dPHFZZToAyWnz5/qByVLmiFzGxuoEz3gQYK4bK0Dvw6PA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GcE4zPR0D4gZcd26hQ8qWH/3DDzjHPGYoR9nmzA4ibw=;
 b=uGOChJXP+0Tz5F8DVoPvv3nN2TiMNL8VHIcDIAlbhJBy4emFJa8gVlZFpI9r1kPEJHEmyH55lA3b2t9AwvDWPy8huak/OW3fpFyC6MGMD63mK1DfS3UCZ6ghtZcpedvyWMrylC+Xjr9U36CUTV1qOuGoHPEVk23ZX2quZusE+yzfBTpTSW3hbnVJVXW0lN3YDMy+bif5HlRgyAxzbHeQrAXlYpAOCvU/27Z/VIV5axmlF7f3RYPioZs/Nmt8sMOd0v0LclIh0kZgvvwikoh4Dx2ZZexPIOtxxoF9N+uNeBLTvK3mDIM6kyJkC4P0nbcR3gGrUdy3nNSLqdryI1zgYw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
Date: Wed, 24 May 2023 16:44:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4dmJuzNVUE5UIY@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZG4dmJuzNVUE5UIY@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0061.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7731:EE_
X-MS-Office365-Filtering-Correlation-Id: 8a2badaf-20cc-466d-e31c-08db5c656e01
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8a2badaf-20cc-466d-e31c-08db5c656e01
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 14:44:51.1709
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: B+fQoJYK4k1n+j6s9C4xeGod2e6Gp4d6aMvxWkCXTLMBL27ciosVBKm84THG6w9b0ifGwoIYZToGkLv4XU/lCg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7731

On 24.05.2023 16:22, Roger Pau Monné wrote:
> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
>> Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
>> console) are associated with DomXEN, not Dom0. This means that while
>> looking for overlapping BARs such devices cannot be found on Dom0's list
>> of devices; DomXEN's list also needs to be scanned.
>>
>> Suppress vPCI init altogether for r/o devices (which constitute a subset
>> of hidden ones).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> RFC: The modify_bars() change is intentionally mis-formatted, as the
>>      necessary re-indentation would make the diff difficult to read. At
>>      this point I'd merely like to gather input towards possible better
>>      approaches to solve the issue (not the least because quite possibly
>>      there are further places needing changing).
> 
> I think we should also handle the case of !pdev->vpci for the hardware
> domain, as it's allowed for the vpci_add_handlers() call in
> setup_one_hwdom_device() to fail and the device would still be assigned
> to the hardware domain.
> 
> I can submit that as a separate bugfix, as it's already an issue
> without taking r/o or hidden devices into account.

Yeah, I think that wants dealing with separately. I'm not actually sure
though that "is allowed to fail" is proper behavior ...

But anyway - I take this as you agreeing to go that route, which is the
prereq for me to actually make a well-formed patch. Please shout soon
if that's a misunderstanding of mine.

>> RFC: Whether to actually suppress vPCI init is up for debate; adding the
>>      extra logic is following Roger's suggestion (I'm not convinced it is
>>      useful to have). If we want to keep that, a 2nd question would be
>>      whether to keep it in vpci_add_handlers(): Both of its callers (can)
>>      have a struct pci_seg readily available, so checking ->ro_map at the
>>      call sites would be easier.
> 
> But that would duplicate the logic into the callers, which doesn't
> seem very nice to me, and makes it easier to make mistakes if further
> callers are added and r/o is not checked there.

Right, hence why I didn't do it the alternative way from the beginning.
Both approaches have a pro and a con.

But prior to answering the 2nd question, what about the 1st one? Is it
really worth having the extra logic?

>> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
>>      modify_bars() to consistently respect BARs of hidden devices while
>>      setting up "normal" ones (i.e. to avoid as much as possible the
>>      "continue" path introduced here), setting up of the former may want
>>      doing first.
> 
> But BARs of hidden devices should be mapped into dom0's physmap?

Yes.

>  And
> hence doing those before or after normal devices will lead to the same
> result.  The loop in modify_bars() is there to avoid attempting to map
> the same range twice, or to unmap a range while there are devices
> still using it, but the unmap is never done during initial device
> setup.

Okay, so maybe indeed there's no effect on the final result. Yet the
anomaly still bothered me when I saw it in the logs (it actually misled
me initially in my conclusions as to what was going on), when I added a
printk() to that new "continue" path. We would skip hidden devices up
until they get initialized themselves. There would be less skipping if
all (there aren't going to be many) DomXEN devices were initialized
first.

>> RFC: vpci_write()'s check of the r/o map may want moving out of mainline
>>      code, into the case dealing with !pdev->vpci.
> 
> Indeed.

Will extend the patch accordingly, then.

>> --- a/xen/drivers/vpci/header.c
>> +++ b/xen/drivers/vpci/header.c
>> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>>      struct vpci_header *header = &pdev->vpci->header;
>>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>>      struct pci_dev *tmp, *dev = NULL;
>> +    const struct domain *d;
>>      const struct vpci_msix *msix = pdev->vpci->msix;
>>      unsigned int i;
>>      int rc;
>> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
>>  
>>      /*
>>       * Check for overlaps with other BARs. Note that only BARs that are
>> -     * currently mapped (enabled) are checked for overlaps.
>> +     * currently mapped (enabled) are checked for overlaps. Note also that
>> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
>>       */
>> -    for_each_pdev ( pdev->domain, tmp )
>> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
>> +    for_each_pdev ( d, tmp )
>>      {
>>          if ( tmp == pdev )
>>          {
>> @@ -304,6 +307,7 @@ static int modify_bars(const struct pci_
>>                   */
>>                  continue;
>>          }
>> +if ( !tmp->vpci ) { ASSERT(d == dom_xen); continue; }//todo
>>  
>>          for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
>>          {
>> @@ -330,6 +334,7 @@ static int modify_bars(const struct pci_
>>              }
>>          }
>>      }
>> +if ( !is_hardware_domain(d) ) break; }//todo
> 
> AFAICT in order to support hidden devices there's one bit missing in
> vpci_{write,read}(): the call to pci_get_pdev() should be also done
> against dom_xen when handling accesses originated from the hardware
> domain.

Hmm, right. Without that we'd always take the vpci_{read,write}_hw()
path.

> One further question is whether we want to map BARs of r/o devices
> into the hardware domain physmap.  Not sure that's very helpful, as
> dom0 won't be allowed to modify any of the config space bits of those
> devices, so even attempts to size the BARs will fail.  I wonder what
> kind of issues this can cause to dom0 OSes.

This is what Linux (6.3) says:

pci 0000:02:00.1: [Firmware Bug]: reg 0x10: invalid BAR (can't size)
pci 0000:02:00.1: [Firmware Bug]: reg 0x14: invalid BAR (can't size)
pci 0000:02:00.1: [Firmware Bug]: reg 0x24: invalid BAR (can't size)

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 24 14:46:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 14:46:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539102.839646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pko-0006es-4f; Wed, 24 May 2023 14:46:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539102.839646; Wed, 24 May 2023 14:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1pko-0006el-1C; Wed, 24 May 2023 14:46:18 +0000
Received: by outflank-mailman (input) for mailman id 539102;
 Wed, 24 May 2023 14:46:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1pkm-0006ea-EV
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 14:46:16 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb89bd5e-fa41-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 16:46:14 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7731.eurprd04.prod.outlook.com (2603:10a6:20b:249::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 14:46:13 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 14:46:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb89bd5e-fa41-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cTKvqIc34rVLRxPtkz/6dcnWrAEY2BHEOUTV3B3IOGPiVhsi7PXRw6LmCJh3Kd4+pp8aKiYo/76pFrxoJ9BU0KdDO9JIEv9b114RNW7va7NiYYOeLmynr7NOsGliAn5b6pkvgRRx0JTiK5xf0wp8vr8iQfthOKsyyJM3q3wEI1VGvgr2yjeVQwwZ8ZuYzNvxmE9XsuISvRf594AwsvZBFwZlNi2676SjUzlpvdEzVdoB9L1fuBnANqju+I0IWg+GUTBWHcDuyF3V+mtmX5tCcbfgPMZ/QtyCYd/I2Lc/WAXyZfthxaiNGdFoJNm8tR4QELin5iOgxm7hVwgWvIPDZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=uKhIpjGFbTpCafpQ6OgOSTqaUpsjgeE9DjSPmiuRRIc=;
 b=J6G69r79zC4kpP3T/ZY4Rs9OSXqoJlgm57lRS0SOY2TJ7M9NIc6ahW1Nl7JWuWCDFWsptjV8x9aFjgN4bVZ7o1GGk8lVXG4eQGGldXjkaYa8XD3GyCwLEMOj9FidlR/D8VYh/35hxuEw38RBXumDV+zcfzNFGpG5aHd/eTBT430IO5cUjhBV5ffiE5Ee2FLxCRwUYHoExiGQrEhYtf+AjbIq7YkkvT6vJUlSc0n19x7fBJ/3R3PfDCQfwJCGjoplJkTfp+L7C15Z6thWAu8BSPlrcKFL6L8TIKWXgX4Shp6nIirx6hZwzHz2aSwyhCLqBse4SEBBgVYKb+yukQXjSw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uKhIpjGFbTpCafpQ6OgOSTqaUpsjgeE9DjSPmiuRRIc=;
 b=ND42XOid+LfogJJw5bqmuVoWbslP0ar0tQ4+Qfz26KjI89t34aJl+FHPY1pNX4NmmbVewY5Pi1XQ82ew49OutJSGQxRb2sU/fDOI8L2kO8ye0Udu54otCClU4c7b3QAqUltLtMe5rKNidpl9JsBT6Dj7iH6z5V/OfRKFwcCSnqcI+33Pdj23tJ2UZVtX866vjpx2j5VlTdOR47nqlCLCWabVc6yy/5DWkrb4p42ptkkTN56LTk6+uxbe5129ABPQaGK4dvKb78WF2XVQ3LxCJvcZgWMPwku4OwacxZSDRcmrp3c+OcNmDbw+31NjUhCuErUxwaXmuvPeUvTEJ++o+A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <c2df76b4-1a17-8383-d73b-7c7b7fb9ea3f@suse.com>
Date: Wed, 24 May 2023 16:46:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2] x86/iommu: adjust type in arch_iommu_hwdom_init()
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20230524143050.17573-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230524143050.17573-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0136.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:96::10) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7731:EE_
X-MS-Office365-Filtering-Correlation-Id: 48d85244-4e90-439a-3bba-08db5c659eed
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 48d85244-4e90-439a-3bba-08db5c659eed
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 14:46:13.2772
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: rJTcHnciEzFLdOxHk9wvm0hu0oUqGnQi6ptedpN+GR+zteyXxpFmt5unljgimEYdN1YzZUxiL/W0sDW/JtaX+w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7731

On 24.05.2023 16:30, Roger Pau Monne wrote:
> The 'i' iterator index stores a PDX, not a PFN, and hence the initial
> assignment to it from start (which stores a PFN) needs a conversion
> from PFN to PDX.
> 
> This is currently harmless, as the PDX compression skips the bottom
> MAX_ORDER bits, which cover the low 1MB, but still do the conversion
> from PFN to PDX for type correctness.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed May 24 14:54:37 2023
Message-ID: <c144bf13-9e65-483a-6887-9bd8645f25b8@suse.com>
Date: Wed, 24 May 2023 16:53:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 03/10] x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
 <20230524112526.3475200-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230524112526.3475200-4-andrew.cooper3@citrix.com>

On 24.05.2023 13:25, Andrew Cooper wrote:
> Bits through 24 are already defined, meaning that we're not far off needing
> the second word.  Put both in right away.
> 
> As both halves are present now, the arch_caps field is full width.  Adjust the
> unit test, which notices.
> 
> The bool bitfield names in the arch_caps union are unused, and somewhat out of
> date.  They'll shortly be automatically generated.
> 
> Add CPUID and MSR prefixes to the ./xen-cpuid verbose output, now that there
> is a mix of the two.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
albeit ...

> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -226,31 +226,41 @@ static const char *const str_7d2[32] =
>      [ 4] = "bhi-ctrl",      [ 5] = "mcdt-no",
>  };
>  
> +static const char *const str_10Al[32] =
> +{
> +};
> +
> +static const char *const str_10Ah[32] =
> +{
> +};
> +
>  static const struct {
>      const char *name;
>      const char *abbr;
>      const char *const *strs;
>  } decodes[] =
>  {
> -    { "0x00000001.edx",   "1d",  str_1d },
> -    { "0x00000001.ecx",   "1c",  str_1c },
> -    { "0x80000001.edx",   "e1d", str_e1d },
> -    { "0x80000001.ecx",   "e1c", str_e1c },
> -    { "0x0000000d:1.eax", "Da1", str_Da1 },
> -    { "0x00000007:0.ebx", "7b0", str_7b0 },
> -    { "0x00000007:0.ecx", "7c0", str_7c0 },
> -    { "0x80000007.edx",   "e7d", str_e7d },
> -    { "0x80000008.ebx",   "e8b", str_e8b },
> -    { "0x00000007:0.edx", "7d0", str_7d0 },
> -    { "0x00000007:1.eax", "7a1", str_7a1 },
> -    { "0x80000021.eax",  "e21a", str_e21a },
> -    { "0x00000007:1.ebx", "7b1", str_7b1 },
> -    { "0x00000007:2.edx", "7d2", str_7d2 },
> -    { "0x00000007:1.ecx", "7c1", str_7c1 },
> -    { "0x00000007:1.edx", "7d1", str_7d1 },
> +    { "CPUID 0x00000001.edx",        "1d", str_1d },
> +    { "CPUID 0x00000001.ecx",        "1c", str_1c },
> +    { "CPUID 0x80000001.edx",       "e1d", str_e1d },
> +    { "CPUID 0x80000001.ecx",       "e1c", str_e1c },
> +    { "CPUID 0x0000000d:1.eax",     "Da1", str_Da1 },
> +    { "CPUID 0x00000007:0.ebx",     "7b0", str_7b0 },
> +    { "CPUID 0x00000007:0.ecx",     "7c0", str_7c0 },
> +    { "CPUID 0x80000007.edx",       "e7d", str_e7d },
> +    { "CPUID 0x80000008.ebx",       "e8b", str_e8b },
> +    { "CPUID 0x00000007:0.edx",     "7d0", str_7d0 },
> +    { "CPUID 0x00000007:1.eax",     "7a1", str_7a1 },
> +    { "CPUID 0x80000021.eax",      "e21a", str_e21a },
> +    { "CPUID 0x00000007:1.ebx",     "7b1", str_7b1 },
> +    { "CPUID 0x00000007:2.edx",     "7d2", str_7d2 },
> +    { "CPUID 0x00000007:1.ecx",     "7c1", str_7c1 },
> +    { "CPUID 0x00000007:1.edx",     "7d1", str_7d1 },

... I'm not really happy about this added verbosity. In a tool of this
name, I think it's pretty clear that unadorned names are CPUID stuff.

> +    { "MSR   0x0000010a.lo",      "m10Al", str_10Al },
> +    { "MSR   0x0000010a.hi",      "m10Ah", str_10Ah },

Once we gain a few more MSRs, I'm afraid the raw numbers aren't going
to be very useful. As vaguely suggested before, how about

    { "MSR_ARCH_CAPS.lo",      "m10Al", str_10Al },
    { "MSR_ARCH_CAPS.hi",      "m10Ah", str_10Ah },

?

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 24 14:57:31 2023
Message-ID: <e5a723a7-3403-52e4-de2d-ad787f894a6f@suse.com>
Date: Wed, 24 May 2023 16:56:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 04/10] x86/cpu-policy: MSR_ARCH_CAPS feature names
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
 <20230524112526.3475200-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230524112526.3475200-5-andrew.cooper3@citrix.com>

On 24.05.2023 13:25, Andrew Cooper wrote:
> Seed the default visibility from the dom0 special case, which for the most
> part just exposes the *_NO bits.  EIBRS is the one non-*_NO bit; it is
> "just" a status bit to the guest, indicating a change in the implementation
> of IBRS which is already fully supported.
> 
> Insert a block dependency from the ARCH_CAPS CPUID bit to the entire content
> of the MSR.  This is because MSRs have no structure information similar to
> CPUID's, and the dependency is used by
> x86_cpu_policy_clear_out_of_range_leaves() to bulk-clear inaccessible words.
> 
> The overall CPUID bit is still max-only, so all of MSR_ARCH_CAPS is hidden in
> the default policies.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed May 24 15:02:52 2023
Message-ID: <7c0e8676-8623-0b07-9d60-616736eecc98@suse.com>
Date: Wed, 24 May 2023 17:02:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 06/10] x86/boot: Expose MSR_ARCH_CAPS data in guest max
 policies
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
 <20230524112526.3475200-7-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230524112526.3475200-7-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0075.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9a::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7311:EE_
X-MS-Office365-Filtering-Correlation-Id: a9cb9642-305e-43e7-0722-08db5c67e768
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a9cb9642-305e-43e7-0722-08db5c67e768
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 15:02:33.8193
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7311

On 24.05.2023 13:25, Andrew Cooper wrote:
> We already have common and default feature adjustment helpers.  Introduce one
> for max featuresets too.
> 
> Offer MSR_ARCH_CAPS unconditionally in the max policy, and stop clobbering the
> data inherited from the Host policy.  This will be necessary to level a VM
> safely for migration.  Annotate the ARCH_CAPS CPUID bit as special.  Note:
> ARCH_CAPS is still max-only for now, so will not be inherited by the default
> policies.
> 
> With this done, the special case for dom0 can be shrunk to just resampling the
> Host policy (as ARCH_CAPS isn't visible by default yet).
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed May 24 15:06:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:06:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539120.839686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1q41-0002Sj-G5; Wed, 24 May 2023 15:06:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539120.839686; Wed, 24 May 2023 15:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1q41-0002Sc-CW; Wed, 24 May 2023 15:06:09 +0000
Received: by outflank-mailman (input) for mailman id 539120;
 Wed, 24 May 2023 15:06:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1q3z-0002SV-QW
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:06:07 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20611.outbound.protection.outlook.com
 [2a01:111:f400:7d00::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8228acf1-fa44-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 17:06:07 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB9424.eurprd04.prod.outlook.com (2603:10a6:102:2b2::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 15:06:04 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 15:06:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8228acf1-fa44-11ed-b230-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <44f4710f-4cc4-55dc-dee6-33b3f6f70f75@suse.com>
Date: Wed, 24 May 2023 17:06:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 10/10] x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS
 check
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
 <20230524112526.3475200-11-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230524112526.3475200-11-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FRYP281CA0005.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::15)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB9424:EE_
X-MS-Office365-Filtering-Correlation-Id: edcc1309-4b77-4132-94fe-08db5c6864d6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: edcc1309-4b77-4132-94fe-08db5c6864d6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 15:06:04.3204
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB9424

On 24.05.2023 13:25, Andrew Cooper wrote:
> MSR_ARCH_CAPS data is now included in featureset information.  Replace
> opencoded checks with regular feature ones.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed May 24 15:08:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:08:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539125.839695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1q5c-00034e-Tt; Wed, 24 May 2023 15:07:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539125.839695; Wed, 24 May 2023 15:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1q5c-00034X-RC; Wed, 24 May 2023 15:07:48 +0000
Received: by outflank-mailman (input) for mailman id 539125;
 Wed, 24 May 2023 15:07:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1q5a-00034J-Qy; Wed, 24 May 2023 15:07:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1q5a-0000Tb-KL; Wed, 24 May 2023 15:07:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1q5a-0006Vk-56; Wed, 24 May 2023 15:07:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1q5a-0000CL-4c; Wed, 24 May 2023 15:07:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180924-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180924: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=44a0f2f0c8ff5e78c238013ed297b8fce223ac5a
X-Osstest-Versions-That:
    libvirt=3b6d69237f0a07bf8d9807cd68a387f8d42b076f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 15:07:46 +0000

flight 180924 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180924/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180910
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180910
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180910
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              44a0f2f0c8ff5e78c238013ed297b8fce223ac5a
baseline version:
 libvirt              3b6d69237f0a07bf8d9807cd68a387f8d42b076f

Last test of basis   180910  2023-05-23 04:18:51 Z    1 days
Testing same since   180924  2023-05-24 04:20:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Jiri Denemark <jdenemar@redhat.com>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   3b6d69237f..44a0f2f0c8  44a0f2f0c8ff5e78c238013ed297b8fce223ac5a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 24 15:20:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:20:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539132.839706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qHw-0005Sb-1W; Wed, 24 May 2023 15:20:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539132.839706; Wed, 24 May 2023 15:20:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qHv-0005SU-To; Wed, 24 May 2023 15:20:31 +0000
Received: by outflank-mailman (input) for mailman id 539132;
 Wed, 24 May 2023 15:20:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VEVP=BN=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1q1qHu-0005SO-Dl
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:20:30 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20623.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::623])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 842cf15d-fa46-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 17:20:29 +0200 (CEST)
Received: from AM6PR10CA0049.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:80::26)
 by DB3PR08MB8914.eurprd08.prod.outlook.com (2603:10a6:10:42a::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 15:20:27 +0000
Received: from AM7EUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:80:cafe::be) by AM6PR10CA0049.outlook.office365.com
 (2603:10a6:209:80::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 15:20:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT030.mail.protection.outlook.com (100.127.140.180) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 15:20:26 +0000
Received: ("Tessian outbound 99a3040377ca:v136");
 Wed, 24 May 2023 15:20:26 +0000
Received: from bc8c95cf9e27.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CE0F3723-1BF4-40D8-AB35-A24ED89D42BE.1; 
 Wed, 24 May 2023 15:20:15 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bc8c95cf9e27.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 15:20:15 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by DB4PR08MB8128.eurprd08.prod.outlook.com (2603:10a6:10:381::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 15:20:12 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482%4]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 15:20:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 842cf15d-fa46-11ed-b230-6b7b168915f2
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: a07f6aafb21d0a3f
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 11/12] xen/arm: add sve property for dom0less domUs
Thread-Topic: [PATCH v7 11/12] xen/arm: add sve property for dom0less domUs
Thread-Index: AQHZjUpY6xFVIVbG80+SN4dEj9SmF69pi9WA
Date: Wed, 24 May 2023 15:20:12 +0000
Message-ID: <D01601D3-8215-46A0-8539-CF126A5FE11F@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-12-luca.fancellu@arm.com>
In-Reply-To: <20230523074326.3035745-12-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|DB4PR08MB8128:EE_|AM7EUR03FT030:EE_|DB3PR08MB8914:EE_
X-MS-Office365-Filtering-Correlation-Id: 1138b77e-3698-4549-80a4-08db5c6a66ed
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ahLWNEp7j6Y5/5vdCDPuGSjUpmy5v5RG3fsHexteo/GhxzYfo9qPdbZZfdedsPKZlES2ZXEn1quVWpSKHE7PIhAcK8kh9ae3M7qW05BNIu+ZctV8pAzxe9J1TJOrsdk8kKVBqdjQ7tQSy2HGhMbLFtCDZsYgYirj493jwiG7hUK9joE1YaJzJFs7VoZEb36wMxuVd7oZRYMmSJuomvR4Lxmt1R2sU7HuWqH6lZtoGoEOfWkyWvBQLugjknElcn2VRtJh1qJ8WfnE/7neTWTknpnquepKMtI5a96BitISqVFQVBO6kSso8M9hBJq/76vUHpd/zxM/21S8SqKXaaYKphp+cY+Wr/CJj4KWWX71Lw6tA+7lv0JSGrSAPN7RFl4jqQ7kM7XMWBxj0i11s7OikpMMXYG2kzzMjLdkJABxGMwnaJWRr76Yd4Tk6cT7t3QqEm6wTCDcj9BBcKuCprRLtAoDrk3EPpxeaX0vJyPIboxnXkd4rSJVgTrmC2FYWtnvElvuupIds5QW7nTlupWblkX+/8v3WGPu/HP72riIxzl5M4+AqdRJybAlOTUubeZxwqg7v+QhoW2ahdJx9faMLqKQV0khN6FezemFyZRRS+2U/GRSNKzBADXTIneOjyxjKQ/UIbjd2yRJ02ErFcTfpDQSo5KmCfXaArEUX3Mt1GM=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(136003)(366004)(396003)(346002)(451199021)(83380400001)(38100700002)(2906002)(38070700005)(41300700001)(36756003)(8936002)(8676002)(6862004)(6486002)(5660300002)(6512007)(6506007)(86362001)(33656002)(2616005)(71200400001)(122000001)(37006003)(26005)(186003)(53546011)(54906003)(64756008)(6636002)(76116006)(4326008)(66946007)(478600001)(91956017)(66556008)(66446008)(66476007)(316002)(32563001)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <A186961898FF054181C7661A2DFA59B5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB4PR08MB8128
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	47396534-0880-45d3-44db-08db5c6a5e52
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tg4ZmrLGeK1b9wSrJZZVDnq5QPzXY3x+fwsq0CsBCKywVntHpVGXMbDOwLAvTqwK8X22aPxW2t0d6T1+y7fVYoJyixuTYdvERSQURCPfVwBTGKqKSjSxmuh/ArT550w+hLMn5dLy97VIeYduuOBBTUeFRRpr6DUTm5Su9cLKbrKYIvkt8fC7iwUbkeqWl5XA5Dck8pUl66fKP88+ZoWepJcuvuRIRWlE3DFm3DQ420UeLVPPFQn9teeGY2wOBi/iD8An1FiTWwVYkGoJmB/uSKlQ2emCm/2s+SPkHp/u5D7w6whyPyo3IHaNsmozuWGFFl86rlkKcX5n5bezcCVO76HuEn7Zg/hbLMhtgtsT9/Rt0DojfBb5VFePPxE60uOkFZYv3B/svI/4jgjN6uAQKx0pFU0JRHxTaTDoPBMIJV1bfyIDtqrdhTyUfgQkyr6AU/PEqd8g3NOLm/BFqLgkjJJpq0at9SJZPHV8fd7oNjsXPu/gjsecf5NMJ8rvZ/EFzhDlVCCYIrN51aCIusPvXzvS6Oa2LUBWdACJurHUicrL1tS+/W3yt038ngO3xV21DD9SUeaPu6KxYSRv0qoKkMp/k99MiFyjwc2iyghEyF6q/IG5hBV2Z6+yAwjm1S1/J8G+s/Qv4gbWB2qd6T0Q84TeHkG2slo7JJ7poYU8ctK2My9SQOSdWw4X2U/kM8Wo/Fwy2vlpdnmrZBw+w3J9b3M09SfZUGtf2X3rMU8RoPVOJi//KhAu4T+DELBVjP3+wAVcdg4mk2ylPjImMmpElA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(376002)(346002)(396003)(136003)(451199021)(36840700001)(40470700004)(46966006)(4326008)(37006003)(54906003)(70206006)(70586007)(6636002)(316002)(81166007)(82740400003)(356005)(41300700001)(6486002)(82310400005)(478600001)(86362001)(8676002)(8936002)(6862004)(5660300002)(33656002)(40460700003)(107886003)(40480700001)(26005)(6512007)(6506007)(53546011)(36860700001)(36756003)(336012)(186003)(2906002)(83380400001)(47076005)(2616005)(32563001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 15:20:26.5561
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1138b77e-3698-4549-80a4-08db5c6a66ed
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB3PR08MB8914

Hi Luca,

> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Add a device tree property in the dom0less domU configuration
> to enable the guest to use SVE.
>
> Update documentation.
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Changes from v6:
> - Use ifdef in create_domUs and fail if 'sve' is used on systems
>   with CONFIG_ARM64_SVE not selected (Bertrand, Julien, Jan)
> Changes from v5:
> - Stop the domain creation if SVE not supported or SVE VL
>   errors (Julien, Bertrand)
> - now sve_sanitize_vl_param is renamed to sve_domctl_vl_param
>   and returns a boolean, change the affected code.
> - Reworded documentation.
> Changes from v4:
> - Now it is possible to specify the property "sve" for dom0less
>   device tree node without any value, that means the platform
>   supported VL will be used.
> Changes from v3:
> - Now domainconfig_encode_vl is named sve_encode_vl
> Changes from v2:
> - xen_domctl_createdomain field name has changed into sve_vl
>   and its value is the VL/128, use domainconfig_encode_vl
>   to encode a plain VL in bits.
> Changes from v1:
> - No changes
> Changes from RFC:
> - Changed documentation
> ---
> docs/misc/arm/device-tree/booting.txt | 16 +++++++++++++++
> xen/arch/arm/domain_build.c           | 28 +++++++++++++++++++++++++++
> 2 files changed, 44 insertions(+)
>
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 3879340b5e0a..32a0e508c471 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -193,6 +193,22 @@ with the following properties:
>     Optional. Handle to a xen,cpupool device tree node that identifies the
>     cpupool where the guest will be started at boot.
>
> +- sve
> +
> +    Optional. The `sve` property enables Arm SVE usage for the domain and sets
> +    the maximum SVE vector length, the option is applicable only to AArch64
> +    guests.
> +    A value equal to 0 disables the feature, this is the default value.
> +    Specifying this property with no value, means that the SVE vector length
> +    will be set equal to the maximum vector length supported by the platform.
> +    Values above 0 explicitly set the maximum SVE vector length for the domain,
> +    allowed values are from 128 to maximum 2048, being multiple of 128.
> +    Please note that when the user explicitly specifies the value, if that value
> +    is above the hardware supported maximum SVE vector length, the domain
> +    creation will fail and the system will stop, the same will occur if the
> +    option is provided with a non zero value, but the platform doesn't support
> +    SVE.
> +
> - xen,enhanced
>
>     A string property. Possible property values are:
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9202a96d9c28..ba4fe9e165ee 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -4008,6 +4008,34 @@ void __init create_domUs(void)
>             d_cfg.max_maptrack_frames = val;
>         }
>
> +        if ( dt_get_property(node, "sve", &val) )
> +        {
> +#ifdef CONFIG_ARM64_SVE
> +            unsigned int sve_vl_bits;
> +            bool ret = false;
> +
> +            if ( !val )
> +            {
> +                /* Property found with no value, means max HW VL supported */
> +                ret = sve_domctl_vl_param(-1, &sve_vl_bits);
> +            }
> +            else
> +            {
> +                if ( dt_property_read_u32(node, "sve", &val) )
> +                    ret = sve_domctl_vl_param(val, &sve_vl_bits);
> +                else
> +                    panic("Error reading 'sve' property");
> +            }
> +
> +            if ( ret )
> +                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
> +            else
> +                panic("SVE vector length error\n");
> +#else
> +            panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
> +#endif
> +        }
> +
>         /*
>          * The variable max_init_domid is initialized with zero, so here it's
>          * very important to use the pre-increment operator to call
> --
> 2.34.1
>
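For context, a dom0less domU node using the `sve` property described in the quoted patch might look like the sketch below. This is only an illustration: the node name, memory size, cpu count and kernel module values are made-up examples, not taken from the patch; only the `sve` property semantics (0 disables, empty means maximum hardware VL, a multiple of 128 sets an explicit maximum) come from the documentation above.

```dts
/* Hypothetical dom0less domU enabling SVE with a 256-bit max vector length. */
domU1 {
    compatible = "xen,domain";
    #address-cells = <1>;
    #size-cells = <1>;
    memory = <0 0x20000>;     /* example: 128 MB, expressed in KB */
    cpus = <1>;
    sve = <256>;              /* use `sve;` (no value) for the platform max */

    module@4a000000 {
        compatible = "multiboot,kernel", "multiboot,module";
        reg = <0x4a000000 0xffffff>;
        bootargs = "console=ttyAMA0";
    };
};
```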



From xen-devel-bounces@lists.xenproject.org Wed May 24 15:22:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539136.839715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qJo-00060X-Cb; Wed, 24 May 2023 15:22:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539136.839715; Wed, 24 May 2023 15:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qJo-00060Q-9V; Wed, 24 May 2023 15:22:28 +0000
Received: by outflank-mailman (input) for mailman id 539136;
 Wed, 24 May 2023 15:22:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VEVP=BN=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1q1qJm-00060I-Ot
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:22:26 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20600.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c8d260ef-fa46-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 17:22:24 +0200 (CEST)
Received: from DB9PR01CA0023.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:1d8::28) by AS8PR08MB6023.eurprd08.prod.outlook.com
 (2603:10a6:20b:291::5) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 15:22:21 +0000
Received: from DBAEUR03FT008.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:1d8:cafe::ba) by DB9PR01CA0023.outlook.office365.com
 (2603:10a6:10:1d8::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29 via Frontend
 Transport; Wed, 24 May 2023 15:22:21 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT008.mail.protection.outlook.com (100.127.142.107) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 15:22:21 +0000
Received: ("Tessian outbound 5154e9d36775:v136");
 Wed, 24 May 2023 15:22:20 +0000
Received: from c126a362e386.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6754B5D6-8066-452F-85FA-5E04F7E7D3B1.1; 
 Wed, 24 May 2023 15:22:09 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c126a362e386.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 15:22:09 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM8PR08MB5827.eurprd08.prod.outlook.com (2603:10a6:20b:1da::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 15:22:06 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482%4]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 15:22:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8d260ef-fa46-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EnFv3WfxmIdeNHqkTf+1Za3HrPZwsu4WLDuboSjxFp8=;
 b=Qoi3gY1rap1to1k+aOiZGO9T5ss7kSOBFVCCaoEZkt1RwDt/R86YobC7y17eqTvZoa3KWspIChVz4lHncwD8xcMm8sR9Q4UedksxZ/wKKWwaoD5EoucSne7qkxmHaZC0shQeeIU0zofehU+EObg9Cc8+VXIS2VZe5h4VwLSxmR4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: b8120e44f134aa86
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mMX7GCMd49wjGEHJFj0+ridk+PeEk00l00b2J78LCgvAWaMbNP9hHD1modOuyV9h0hxrz6Z7tkZ4d6OximLsMeIbymzP4+ZDiwFsYhYQBXLT+7vE//uAYHcNe2apSkEgG61rVlVRM5TQ8b5O/aNhH4+7TKQwgIlqk4PFpPb4N30KVnYf2ieXTk+wmrOrtJH/czADAAI1yjecWrk8GTQetfKblWYU5BjIzV63GfuyBZ79jG68naJemAhVk5Cdu0oYSKAzhJfSFbAwZLbUxtoZm/btGbj4q0cuB/7jCa/Bu5Qrj82r0U2drG4/KEQa7nxPPqupEbSMTjGe3ql0XQG1PQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EnFv3WfxmIdeNHqkTf+1Za3HrPZwsu4WLDuboSjxFp8=;
 b=mHG75csgvuYLGaVtzuKXp3iM8kUa01fJwUwlGH5BkyDGRGYn0DJSThqt0OtBAZYa3DVXA3lAOqNw8597sywoASHovBNyP3kTJYRsi93CqUYyp7FO4AwyvTEfwoxOwXy0YNhU3OFCPebF/w+68M+6E/hgwmGOxzyS1zmEt/6KLDYYKJwfLFPLx3sZHV+1QkuaakeQEmmTW7oNfptVPeQ9bD8OES1owrK8rfFkdOWj6T07qcHonN7R+0DkPDinNNvXst+9j3iLMIE/+s8zNsJrqbIlLEq8vGxvopWD2YfllP2Rn2z5bwiJj7DacEcL/QZHKdwROWZkZosghJ+TRR82kg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EnFv3WfxmIdeNHqkTf+1Za3HrPZwsu4WLDuboSjxFp8=;
 b=Qoi3gY1rap1to1k+aOiZGO9T5ss7kSOBFVCCaoEZkt1RwDt/R86YobC7y17eqTvZoa3KWspIChVz4lHncwD8xcMm8sR9Q4UedksxZ/wKKWwaoD5EoucSne7qkxmHaZC0shQeeIU0zofehU+EObg9Cc8+VXIS2VZe5h4VwLSxmR4=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Henry Wang <Henry.Wang@arm.com>, Community Manager
	<community.manager@xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v7 12/12] xen/changelog: Add SVE and "dom0" options to the
 changelog for Arm
Thread-Topic: [PATCH v7 12/12] xen/changelog: Add SVE and "dom0" options to
 the changelog for Arm
Thread-Index: AQHZjUpdf9Q15ls77023F7OWvjvckK9pjF2A
Date: Wed, 24 May 2023 15:22:06 +0000
Message-ID: <84B6F092-8007-4217-9EA6-5325BADC66D8@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-13-luca.fancellu@arm.com>
In-Reply-To: <20230523074326.3035745-13-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AM8PR08MB5827:EE_|DBAEUR03FT008:EE_|AS8PR08MB6023:EE_
X-MS-Office365-Filtering-Correlation-Id: 4fa4387d-8be9-4961-ba9b-08db5c6aab24
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 gu3GT+aRL/W/Orz6GtX9BMxzhgngOvTKUjVmtUoxDpSZ94qYAm5v/ecZfSJerxO0+Xm9MKRfBdLCCG9cjeRKRrhBwHgu7LRUdkv98t2MORGmn/sIGax0E6if1m8pf0cSXn6QJvBXbg1+twvkX4qvhiq5Xbn9wYUXEKObB0vHIfJtrlsCjuX4udlFt2AovIhnWXZDCH1ryFgb0wOim+A4yHvnz0rtDom4MBDKNPXzsV6RfNfzVuBQEFSgatfSg6Leupan3itM3kuSPpzllu8Jhu0a5TAI7vbRcXZj1xtJfTvhZtHQR1hIBVXbGKaIKsdX4HQcdID7t3LT1/UY7WtXNzEDSS2nFbca/fHs2I4elyCFM6OKMTbrrRC9cuJykO0V2NmVbcRU89Zsrl0DN9P/kZOYogF0rQkpPIuM9w1hmFuOXReH5zhkHNItOd68JDbiiRQfW4itY0HwzPOkmLV60tRMTFw/LBuUMmUruyqMOMGyMi7GHTLklywYAcGr+nXlIoepsnOA0q1gecNKCSx2v4xunrXB+gydpLTOSrw36mEgN1slarqp0Ws6qlW0jlOjChg3COZ5gR2+A3E2EcSBQavobGpTvLHutMnPLC9G/Jvx33vx3TSVXv2I+aPh944FCW7Dj4El+0uJbkI2v2yQjzz8MXajPP8vgeA1VnVmZ2haTSJTfCuxEVjEtwgxyeA80PXTcoC+Fkt9fLi/UZHjQhMIq6YI/5gnmDr/XP9Chfs=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3784.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(366004)(376002)(39860400002)(136003)(451199021)(37006003)(54906003)(38070700005)(38100700002)(122000001)(66946007)(76116006)(66446008)(91956017)(66556008)(66476007)(478600001)(86362001)(64756008)(6636002)(4326008)(316002)(71200400001)(33656002)(83380400001)(41300700001)(6486002)(2616005)(36756003)(186003)(2906002)(4001150100001)(53546011)(6506007)(26005)(6512007)(5660300002)(8936002)(6862004)(8676002)(219803003)(207903002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <27D5DB122B37B74498E778E284FBF16C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5827
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	abeb1eed-8ff2-4463-fe1f-08db5c6aa252
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Bo1CFbJZdIoltmNognnpmrm+qSWZWCWZ7VkJMmQT9es9LTY4CV2RSKcrit9FUkZ7vsEg2lr54DHYGjMUmVMY168bZ8PMZ0s6f7V4QNXs7WQxEx9ccCe7PupTv6vyFEEpH4mNP/h+5d5zrQgcDNxjLh9fWjSVNBI9WhM3p+bjDLoXL3domK3Rm0NenFtNquR3yPaypx9LWHna95uV+zGrpQa6blS2zdBqtVgeJLx9leNlBctKiGcQJA9AM5m1tuZgjyEdTPwVjAYzCZRFTr7ErRchpE58EkGjEvmRtm9siMzXwdXk8+qLt5CIlK7f4hvRtxQ72DjfOI6+p8u0tiTL0M7opXCHq6Hzjh3W3VAbQuwoj2qOnAPp1HAYiDc3P5qKlLqCZREB+tDge48rI7SqvKOmzz80rl3hNDK2U2PfRLBPoh76mhYw7pWTCHNmjBhFxzJp5dy1lS/7pCwr4AEjPaIhfB5QvXpYOgs8PpMuhmr/DfwuEL0C+9msXRm4T4pJP6D3fL6mDaLvaj9WxFhSOF4mKmoUbKqQxsgrW/UixuZN/Zj9nG94CLhExWV6bkhQC/idQuzMsh7YItS/XtSFlOJLSgoUgsGass0wAYswUnP3VuqGrxfj6UQ+orQ0pyeoT0KBlRcRFZjho+RIQAnZZ8zlyIfA8bFPWXWMeyVMPGus5wpE2iT7dUa3FS0agoWUuxyhT8NcM3ZWcq6wR3i+B3h8AT0m4tGpkgWTlrpH0gibvsZeONJh1oF/Nq9WhdqOYHz9rqQK6gNl26qkc1LuF96kVzv9l9gyINdAI2g5SNbFMpo8T4gFCz2OLp0N+ZfOqeskLi1CqdM954LoiYVevw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(346002)(396003)(39860400002)(136003)(376002)(451199021)(46966006)(40470700004)(36840700001)(6862004)(54906003)(37006003)(82740400003)(70586007)(70206006)(82310400005)(478600001)(86362001)(4326008)(6636002)(316002)(81166007)(36860700001)(33656002)(47076005)(356005)(83380400001)(6486002)(41300700001)(336012)(186003)(36756003)(2616005)(2906002)(4001150100001)(5660300002)(40460700003)(6506007)(6512007)(53546011)(26005)(40480700001)(8676002)(8936002)(207903002)(219803003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 15:22:21.0500
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4fa4387d-8be9-4961-ba9b-08db5c6aab24
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6023

Hi Luca,

> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
> Arm now can use the "dom0=" Xen command line option and the support
> for guests running SVE instructions is added, put entries in the
> changelog.
>
> Mention the "Tech Preview" status and add an entry in SUPPORT.md
>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Changes from v6:
> - Add Henry's A-by to CHANGELOG
> Changes from v5:
> - Add Tech Preview status and add entry in SUPPORT.md (Bertrand)
> Changes from v4:
> - No changes
> Change from v3:
> - new patch
> ---
> CHANGELOG.md | 3 +++
> SUPPORT.md   | 6 ++++++
> 2 files changed, 9 insertions(+)
>
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index 5bfd3aa5c0d5..512b7bdc0fcb 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -11,6 +11,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>    cap toolstack provided values.
>  - Ignore VCPUOP_set_singleshot_timer's VCPU_SSHOTTMR_future flag. The only
>    known user doesn't use it properly, leading to in-guest breakage.
> + - The "dom0" option is now supported on Arm and "sve=" sub-option can be used
> +   to enable dom0 guest to use SVE/SVE2 instructions.
>
> ### Added
>  - On x86, support for features new in Intel Sapphire Rapids CPUs:
> @@ -20,6 +22,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>    - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
>      wide impact of a guest misusing atomic instructions.
>  - xl/libxl can customize SMBIOS strings for HVM guests.
> + - On Arm, Xen supports guests running SVE/SVE2 instructions. (Tech Preview)
>
> ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
>
> diff --git a/SUPPORT.md b/SUPPORT.md
> index 6dbed9d5d029..e0fa2246807b 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -99,6 +99,12 @@ Extension to the GICv3 interrupt controller to support MSI.
>
>     Status: Experimental
>
> +### ARM Scalable Vector Extension (SVE/SVE2)
> +
> +AArch64 guest can use Scalable Vector Extension (SVE/SVE2).
> +
> +    Status: Tech Preview
> +
> ## Guest Type
>
> ### x86/PV
> --
> 2.34.1
>



From xen-devel-bounces@lists.xenproject.org Wed May 24 15:22:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539137.839726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qJw-0006IF-NN; Wed, 24 May 2023 15:22:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539137.839726; Wed, 24 May 2023 15:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qJw-0006I6-KX; Wed, 24 May 2023 15:22:36 +0000
Received: by outflank-mailman (input) for mailman id 539137;
 Wed, 24 May 2023 15:22:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q8ut=BN=citrix.com=prvs=5011a8a4f=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q1qJu-00060I-Qv
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:22:35 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cccfdb45-fa46-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 17:22:32 +0200 (CEST)
Received: from mail-mw2nam12lp2043.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.43])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 24 May 2023 11:22:19 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by LV8PR03MB7472.namprd03.prod.outlook.com (2603:10b6:408:18f::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 15:22:17 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Wed, 24 May 2023
 15:22:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cccfdb45-fa46-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684941752;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=o76yZ4k497HOZQhx8YNmmMYxCb66VFuaVaseEYwZvWw=;
  b=dn65mXJ9/ycP3W/5s5FU3CCqaby88my6tDsGb5Xh5zrHt+fhr9l0oe1F
   lIkxyd/w1T2rCl4cytlf3FYQTeoPHi2E2UZIyIcgSzO4ULl8fU26rVof/
   R2RMGO3lf1ikhLycOAmrYwuIJrLOz/vi4+l4YYouHc+H++Kt/AzHjcXJ3
   o=;
X-IronPort-RemoteIP: 104.47.66.43
X-IronPort-MID: 109567907
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:51bo66KKseUoK6FHFE+RzZQlxSXFcZb7ZxGr2PjKsXjdYENS3zVSx
 2AYUWiEb/7eamX8e9p1YY6y80xXuJPVx9cxSgdlqX01Q3x08seUXt7xwmUcnc+xBpaaEB84t
 ZV2hv3odp1coqr0/0/1WlTZhSAgk/rOHvykU7Ss1hlZHWdMUD0mhQ9oh9k3i4tphcnRKw6Ws
 Jb5rta31GWNglaYCUpKrfrbwP9TlK6q4mhA4wZiPasjUGL2zBH5MrpOfcldEFOgKmVkNrbSb
 /rOyri/4lTY838FYj9yuu+mGqGiaue60Tmm0hK6aYD76vRxjnVaPpIAHOgdcS9qZwChxLid/
 jnvWauYEm/FNoWU8AgUvoIx/ytWZcWq85efSZSzXFD6I+QrvBIAzt03ZHzaM7H09c50WnlW9
 tE1BgwgMEGyn7yumo3qR+dV05FLwMnDZOvzu1lG5BSAV7MDfsqGRK/Ho9hFwD03m8ZCW+7EY
 NYUYiZuaxKGZABTPlAQC9Q1m+LAanvXKmUE7g7K4/RppTSIpOBy+OGF3N79YNuFSN8Thk+Fj
 mnH4374ElcRM9n3JT+tqyr927GQzHukMG4UPOGU1NQ1n3+y/TZQUEYvDFSmr6i9zUHrDrqzL
 GRRoELCt5Ma5EGtC9XwQRC8iHqFpQIHHcpdFfUg7wOAwbaS5ByWbkAHQyBAbpo6tcYwbT0sy
 lKN2djuAFRSXKa9THuc8vKRsmm0MC1Md2saP3dYFk0C/sXpp5w1glTXVNF/HaWpj9rzXzbt3
 zSNqyt4jLIW5SIW65iGEZn8q2rEjvD0osQdvG07gkrNAttFWbOY
IronPort-HdrOrdr: A9a23:bXVz9a5l8iSwtuSMzwPXwaOCI+orL9Y04lQ7vn2ZFiY5TiXIra
 qTdaogviMc6Ax/ZJjvo6HjBEDmewKlyXcV2/hpAV7GZmXbUQSTXeVfBOfZowEIeBeOi9K1q5
 0QFJSWYeeYZTYasS+T2njDLz9K+qjjzEnHv5a88587JjsaEJ2Ioj0JfTpyVSZNNXh7LKt8MK
 DZyttMpjKmd3hSRsOnBkMdV+yGi8zXmIngaRsmAQdizAWVlzun5JPzDhDdh34lInhy6IZn1V
 KAvx3y562lvf3+4hjA11XL55ATtMr9xsBFDMmsjNFQDjn3kA6naKloRrXHljEop+OE7kosjb
 D30l8dFvU2z0mUUnC+oBPr1QWl+DEy60X6wVvdpXf4u8T2SB8zFsIE3OtiA1LkwntlmOs5/L
 NA3mqfuZYSJRTcnB7l79yNcx1xjEK7rVcrjOZWpX1CVok1bqNXsOUkjTVoOaZFOBi/xJEsEe
 FoAs2ZzPFKcWmCZ3SchWVryMzEZAVAIj62Bmw5/uCF2Tlfm350i2ECwtYEo3sG/JUhD7FZ+u
 XtKM1T5f5zZ/5TSZg4KPYKQMOxBGCIawnLKniuLVPuE7xCE27RqqTw/K4+6IiRCdA1JaMJ6d
 X8uW5jxC4PkxqEM7zM4HQLyGGBfIyFZ0Wi9ikEjKIJ+IEVR9LQQF6+oR4V4o6dSs4kc7Pmss
 aISe5r6sDYXBTT8P5yrmvDsrlpWAwjuZ4uy6IGcmPLhP73AavXkcGeWMrvBdPWYEYZsyXEcz
 E+YAQ=
X-Talos-CUID: 9a23:tMFRDGP3f82iO+5DW3J/7E05N+4ZXyOM/W+OOxWHWGtncejA
X-Talos-MUID: 9a23:yNlMKwnMV+4dFOrFAu1Bdno+GOVCvLuMKXtX0s1dgpCLKCIhP3CS2WE=
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="109567907"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KsRf5++0xf5A+sk8HxU+DbQz+3tF3tOe5pjRdABYP1nmAOv0+LtJlsomJpZxdZtaiN577t5kklZXC/fyn4Vz9WN4/J6PaBzSMgFaoXRn7bKAJ+P326/yRU6uwwR0ivmc02pEMpp81DP92+Jwu7+uvjhajwF98aMVYeP4Y/mboPXMr8ae6pZkl1Ax0gOYA3zTIVpanfwGSCrNC431Ea+1oXhSHMu2XavS2XM8fCGhN08frVe9WowwTkJAHUXr6PI8Nd9G/XJJiTK6dsXR20A9iGbi1dI/6NmRQzsEsG4szcXztK4VjEc5ndGMgPR+giWUHZ3Mrhk2ok8mApEk1LWEdg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=30ru+lLePwOyS4uvchiCqk6eSxy4Oc0C9zDO72tKQss=;
 b=FfjddFRe46QMf5MIKEHJIIyzS25SoM0VvbAGbX2vQPxxfihqF2ndELV5W9mWWeVFCHHwbo9ZtOmDdecX4Jby7dKoHXK9vMZ3ERLIJPX43Wo85Iy3rkAaUf+NApbpunU3AuZUOxckBn0MzxKFuoPRHmJL8gLD8GrWnf1erQpTJQfr7SPxHY3ZDP0EZnQnp5F6h/Wz+oiU7wB1Qb7XfD/XJvZRyhHFe/7HJEn9n2Lon+gvzFdwhEguijnqdbiyxRJZp6kEOfuomM4MTJDjdeJWZ6opawLpjKxfA/SMYFULCwfQPhQ7Wu6crJBQCcwcyGjONNhvwcxuGpiqgjgw01bZYw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=30ru+lLePwOyS4uvchiCqk6eSxy4Oc0C9zDO72tKQss=;
 b=sESwpvZrGW0hJ/QkSJDIlulWghC4YtdONy5WzbkUYSlyX5m2V8pBYRg8Wwe24R4ZLFhKHj441K/lUROZS5lZ6JgMPBQ8jMz6IBd7YCRv+Pgo56AxYw0ZdO1rVPMFEp0XjhDqznEpjgkM+ee+LI1gNfEEiIVCXCUtKY+GyV+fV8A=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v2] iommu/vtd: fix address translation for superpages
Date: Wed, 24 May 2023 17:22:08 +0200
Message-Id: <20230524152208.18302-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0164.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::32) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|LV8PR03MB7472:EE_
X-MS-Office365-Filtering-Correlation-Id: 630431e9-139c-4744-e7ce-08db5c6aa874
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 630431e9-139c-4744-e7ce-08db5c6aa874
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 15:22:16.7951
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oL4qMSaMqmiyC9Hz10RY+d4KKmWzGImBeiDwY+nze1xuNYnw1Hz7fwuhhKHIQMbfu3mOfa11RN+DatGWR9JjJA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV8PR03MB7472

When translating an address that falls inside a superpage in the
IOMMU page tables, the fetching of the PTE value wasn't masking off
the contiguity-related data, which caused the returned value to be
corrupt: it contained bits that the caller would wrongly interpret
as part of the address.

Fix this by masking off the contiguous bits.
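The effect of the mask can be sketched with a self-contained snippet
(the constants and the helper name below are illustrative stand-ins,
not the actual Xen definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout: assume bits 52-55 of a PTE carry a contiguity
 * hint (playing the role DMA_PTE_CONTIG_MASK plays in the VT-d code);
 * they are metadata, not part of the machine address. */
#define PAGE_SHIFT      12
#define PAGE_MASK       (~((UINT64_C(1) << PAGE_SHIFT) - 1))
#define PTE_CONTIG_MASK (UINT64_C(0xf) << 52)

/* Mirrors the fixed expression: strip the hint bits from the PTE
 * value before adding the page-aligned residual of the walk. Without
 * the mask, the hint bits would be handed back to the caller as if
 * they were address bits. */
static uint64_t pte_to_maddr(uint64_t pte_val, uint64_t addr,
                             unsigned int offset_bits)
{
    return (pte_val & ~PTE_CONTIG_MASK) +
           (addr & ((UINT64_C(1) << offset_bits) - 1) & PAGE_MASK);
}
```

For a 2MiB superpage (offset_bits = 21), a PTE carrying the hint bits
previously yielded an address with bits 52-55 set; with the mask the
result is just the superpage base plus the page-aligned residual.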

Fixes: c71e55501a61 ('VT-d: have callers specify the target level for page table walks')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Return all the PTE bits except for the contiguous count ones.
---
 xen/drivers/passthrough/vtd/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 130a159cde07..d7ba9a9c349f 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -368,7 +368,7 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
                  * with the address adjusted to account for the residual of
                  * the walk.
                  */
-                pte_maddr = pte->val +
+                pte_maddr = (pte->val & ~DMA_PTE_CONTIG_MASK) +
                     (addr & ((1UL << level_to_offset_bits(level)) - 1) &
                      PAGE_MASK);
                 if ( !target )
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Wed May 24 15:23:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:23:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539145.839736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qKt-00077u-1W; Wed, 24 May 2023 15:23:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539145.839736; Wed, 24 May 2023 15:23:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qKs-00077n-V0; Wed, 24 May 2023 15:23:34 +0000
Received: by outflank-mailman (input) for mailman id 539145;
 Wed, 24 May 2023 15:23:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VEVP=BN=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1q1qKr-00077b-Ut
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:23:34 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061e.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f1c6de0b-fa46-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 17:23:33 +0200 (CEST)
Received: from AS9PR04CA0138.eurprd04.prod.outlook.com (2603:10a6:20b:48a::18)
 by GV2PR08MB8317.eurprd08.prod.outlook.com (2603:10a6:150:bf::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 15:23:30 +0000
Received: from AM7EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:48a:cafe::18) by AS9PR04CA0138.outlook.office365.com
 (2603:10a6:20b:48a::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 15:23:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT060.mail.protection.outlook.com (100.127.140.216) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Wed, 24 May 2023 15:23:29 +0000
Received: ("Tessian outbound 945aec65ec65:v136");
 Wed, 24 May 2023 15:23:29 +0000
Received: from 3bcaad989e26.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 73BD4658-663B-48CA-A6CF-587DBD40728A.1; 
 Wed, 24 May 2023 15:23:23 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3bcaad989e26.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 24 May 2023 15:23:23 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com (2603:10a6:20b:85::25)
 by AM8PR08MB5827.eurprd08.prod.outlook.com (2603:10a6:20b:1da::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 15:23:20 +0000
Received: from AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482]) by AM6PR08MB3784.eurprd08.prod.outlook.com
 ([fe80::5e17:39a6:eec7:c482%4]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 15:23:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1c6de0b-fa46-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XqxQ53YTx85Spgy8Kdqm1nKEC8oNCRb3nyrUPJpb73U=;
 b=FT/BdXE8lZL4vFrd1JMSLUcp/ElqSuNhrKEtAaax+J6WTG16D+NoTgpQBGXKSI2PHeO8gVv51R6Jn8pbhRrLcs0C4IQAdRmIfdakem5PinKSl0dQI7e3k1hPYAnOqbZnIH0rHiJlbfD7SHvZzrvU4IjqWWZ2KT3RrQHzol/Hms8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: e3840f0009ec0d1a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hV+1iMMDCBDB1+rE21Ti1172zPl+mNV5+6ZcAQmp+qC6n/TOc6fk+eOp5+QnKGT58OiLz9FbycgGA0U93T8P5h2AEifrgk+EmBsJyEe7qZ/tdsbKa76b4UeqWjHBi966DX++ZCuUzR408IBrTP5tzJs+rIDkRd2PvqCmT/wxLX0CWRxWL/OLGWEcU8JEujgYkt3n3AwpTyL7RwHt3wk+yQbi9XxpVs/gFs1ustzywaJzTJgB9qEQ5840keCdx2rtP6o2hZ6llR+qkD6XIPUQeOiVrBvq3ebXQpqXE90VQVtpKJ0dmbgo9yXN5CbFng/OTDh/5Bizq3/FaOt8dxzuGQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XqxQ53YTx85Spgy8Kdqm1nKEC8oNCRb3nyrUPJpb73U=;
 b=DZoVXSO4f64ytiMKm47CUIaLCL21v4QwVqeml8WGo7PC4zWq8cKi7F8Jza26K/cdsJSe1bU2BuoAO7+n1yC+52vwMleJLNaL81r0kKgua6ZOZsf9G+Fvba+hfZ+IbZh2ZlyqZIRMY9TCD58pRI3LKOm/DhhRnT7Yk7Mu/wx9q3tr2UTSByrouv1D55Ietr2iMgLIE6/YYaz6yQhGx787nywVpj1fHr++SF+Lu/59l7EShLa3hVedHl26cGiqjFBJiXWt8JT9puXTV/6h2SUcaAMqcflALkYmQtltPMND/1PctGAUPcoN3VeIkw2VSmYvAlzEkdwAYvLN9HNxNQmcTA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XqxQ53YTx85Spgy8Kdqm1nKEC8oNCRb3nyrUPJpb73U=;
 b=FT/BdXE8lZL4vFrd1JMSLUcp/ElqSuNhrKEtAaax+J6WTG16D+NoTgpQBGXKSI2PHeO8gVv51R6Jn8pbhRrLcs0C4IQAdRmIfdakem5PinKSl0dQI7e3k1hPYAnOqbZnIH0rHiJlbfD7SHvZzrvU4IjqWWZ2KT3RrQHzol/Hms8=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>,
	Marc Bonnici <Marc.Bonnici@arm.com>, Achin Gupta <Achin.Gupta@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Jens Wiklander <jens.wiklander@linaro.org>
Subject: Re: [XEN PATCH v8 02/22] xen/arm: tee: add a primitive FF-A mediator
Thread-Topic: [XEN PATCH v8 02/22] xen/arm: tee: add a primitive FF-A mediator
Thread-Index: AQHZbdfHy/Y46nL9n0ej3opYe7XQ5q8pKpqAgAAQ9oCAAUdZgIAftheAgB+SnAA=
Date: Wed, 24 May 2023 15:23:20 +0000
Message-ID: <5351E66A-AC24-41B5-B9DE-4ABAAA16C89E@arm.com>
References: <20230413071424.3273490-1-jens.wiklander@linaro.org>
 <20230413071424.3273490-3-jens.wiklander@linaro.org>
 <ad1d5ebd-38e5-bab9-24ac-6facc8ccb95c@xen.org>
 <d7f18393-262b-f2b1-9af3-a371dae75994@citrix.com>
 <CAHUa44FYGeA-knf2HMR6t4B_q3JZ_WuEq9fpTmD2_sJLMwPoQw@mail.gmail.com>
 <BF745983-F062-4237-B6B9-E3455E72233E@arm.com>
In-Reply-To: <BF745983-F062-4237-B6B9-E3455E72233E@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3784:EE_|AM8PR08MB5827:EE_|AM7EUR03FT060:EE_|GV2PR08MB8317:EE_
X-MS-Office365-Filtering-Correlation-Id: 23387ddb-4dfd-4f8a-b9f9-08db5c6ad413
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <CCC6815310CAC64596A7B2E5A43AAA52@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5827
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e7fe9600-94d1-4a42-bc0b-08db5c6ace7e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qlDMg6xGydQe0zy/+y03lo++FkWacYgphMMW9RZYj0LEeoWkLvxBGJL31qfAxiBqG8vtAxkVwX/RJarWJ1rACnwodfONRt4+WQl0Y0/hzejuILl4QrzjX67JHzZ3mj3BOY1ksGoQwqwRQ95x8XlQVV4A1VDkQtOXJLOREAG8GP0HfnlfS9aEQ5aOgyPuQ4Sw21B6RHR2YaApvVq4atyoQtqx4Zp1BLum9X4XvvGebs2pn9zK5DNSusRUZ3lmHtkhEMYObpPdm8pHPIl957yi5u9kS+N2gaGDa0iiGyuC23n5t+9RVhLWgCdGHr6YjAUg8x73Zo8yecgDD4lh5BFYEcVtNetT6fXg/n8IjXiTGw1fvSi8MPN31MrGnJsQsU2y0TdTIS2OvS9YLRUm6bKLmlX1Jd+e1UXmrbU3zPE/Kzb1vhCO0dM6tRtttM+XkQ+9/i74R7aa0QJ96cuizDgn5fPC6uL6SvVn8/QMaZlBIyZ+uDARocJaNZyC1icATWAvNCjsBEtGGpEwEDm/jq3hFU0nsckJAKgOYA1CFX+Sq6hl5r6sy4V/HmHIemybYGdhXJOH7LK1y7mis4ZqtvUHRljNZ7Cs7WCweZsVbP9zlGefVo7CfUDIGgaWQa+1AIhTIDXm4bT/cRDOGByB5rFTYHCfug5pM9F5JXzeqaW/dGuqvi1UnqEY/oVWEo7LkMS6V9Edw3YeqWG7KEL13HubgMccezvKSYX2khJDCxKyHqu0Z9+twV4KNkYrixiT/G6z
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(136003)(376002)(346002)(396003)(39860400002)(451199021)(40470700004)(46966006)(36840700001)(2906002)(66899021)(70586007)(70206006)(5660300002)(47076005)(336012)(83380400001)(82310400005)(8936002)(8676002)(6862004)(33656002)(4326008)(36756003)(316002)(54906003)(41300700001)(6486002)(40480700001)(478600001)(26005)(186003)(86362001)(356005)(82740400003)(81166007)(53546011)(107886003)(6506007)(6512007)(40460700003)(36860700001)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 15:23:29.6627
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 23387ddb-4dfd-4f8a-b9f9-08db5c6ad413
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8317

Hi Andrew,

> On 4 May 2023, at 15:14, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> 
> Hi Andrew,
> 
>> On 14 Apr 2023, at 10:58, Jens Wiklander <jens.wiklander@linaro.org> wrote:
>> 
>> Hi,
>> 
>> On Thu, Apr 13, 2023 at 3:27 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> 
>>> On 13/04/2023 1:26 pm, Julien Grall wrote:
>>>>> +static int ffa_domain_init(struct domain *d)
>>>>> +{
>>>>> +    struct ffa_ctx *ctx;
>>>>> +
>>>>> +    if ( !ffa_version )
>>>>> +        return -ENODEV;
>>>>> +
>>>>> +    ctx = xzalloc(struct ffa_ctx);
>>>>> +    if ( !ctx )
>>>>> +        return -ENOMEM;
>>>>> +
>>>>> +    d->arch.tee = ctx;
>>>>> +
>>>>> +    return 0;
>>>>> +}
>>>>> +
>>>>> +/* This function is supposed to undo what ffa_domain_init() has done */
>>>> 
>>>> I think there is a problem in the TEE framework. The callback
>>>> .relinquish_resources() will not be called if domain_create() failed.
>>>> So this will result to a memory leak.
>>>> 
>>>> We also can't call .relinquish_resources() on early domain creation
>>>> failure because relinquishing resources can take time and therefore
>>>> needs to be preemptible.
>>>> 
>>>> So I think we need to introduce a new callback domain_free() that will
>>>> be called arch_domain_destroy(). Is this something you can look at?
>>> 
>>> 
>>> Cleanup of an early domain creation failure, however you do it, is at
>>> most "the same amount of time again".  It cannot (absent of development
>>> errors) take the same indefinite time periods of time that a full
>>> domain_destroy() can.
>>> 
>>> The error path in domain_create() explicitly does call domain_teardown()
>>> so we can (eventually) purge these duplicate cleanup paths.  There are
>>> far too many easy errors to be made which occur from having split
>>> cleanup, and we have had to issue XSAs in the past to address some of
>>> them.  (Hence the effort to try and specifically change things, and
>>> remove the ability to introduce the errors in the first place.)
>>> 
>>> 
>>> Right now, it is specifically awkward to do this nicely because
>>> domain_teardown() doesn't call into a suitable arch hook.
>>> 
>>> IMO the best option here is extend domain_teardown() with an
>>> arch_domain_teardown() state/hook, and wire in the TEE cleanup path into
>>> this too.
>>> 
>>> Anything else is explicitly adding to technical debt that I (or someone
>>> else) is going to have to revert further down the line.
>>> 
>>> If you want, I am happy to prototype the arch_domain_teardown() bit of
>>> the fix, but I will have to defer wiring in the TEE part to someone
>>> capable of testing it.
>> 
>> You're more than welcome to prototype the fix, I can test it and add
>> it to the next version of the patch set when we're happy with the
>> result.
> 
> 
> Could you tell us if you are still happy to work on the prototype for
> arch_domain_teardown and when you would be able to give a first prototype ?

Could you answer to this question ?

Cheers
Bertrand
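
A note on the cleanup question in this thread: the property an
arch_domain_teardown() hook needs is that the cleanup be idempotent, so
it can run both from the domain_create() error path and from a full
domain_destroy(). A minimal userspace sketch of that invariant (plain
calloc/free standing in for xzalloc/xfree, and a flattened struct
domain; none of this is the real Xen API):

```c
#include <assert.h>
#include <stdlib.h>

struct ffa_ctx {
    unsigned int guest_vers; /* placeholder field */
};

struct domain {
    struct ffa_ctx *tee;     /* stands in for d->arch.tee */
};

/* Analogue of ffa_domain_init(): allocate the per-domain context. */
static int ffa_domain_init(struct domain *d)
{
    struct ffa_ctx *ctx = calloc(1, sizeof(*ctx));

    if ( !ctx )
        return -1; /* -ENOMEM in Xen */
    d->tee = ctx;
    return 0;
}

/* Analogue of a teardown hook: idempotent, so it is safe whether init
 * completed, failed before the allocation, or teardown already ran
 * once. That is what lets domain_create()'s error path and
 * domain_destroy() share a single cleanup path. */
static void ffa_domain_teardown(struct domain *d)
{
    free(d->tee);
    d->tee = NULL;
}
```

Calling ffa_domain_teardown() a second time, or on a domain whose init
never allocated the context, is a no-op rather than a double free.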


From xen-devel-bounces@lists.xenproject.org Wed May 24 15:34:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539150.839746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qVP-0000HL-0x; Wed, 24 May 2023 15:34:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539150.839746; Wed, 24 May 2023 15:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qVO-0000HE-TI; Wed, 24 May 2023 15:34:26 +0000
Received: by outflank-mailman (input) for mailman id 539150;
 Wed, 24 May 2023 15:34:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q8ut=BN=citrix.com=prvs=5011a8a4f=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q1qVN-0000H8-R3
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:34:25 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 73cac4b4-fa48-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 17:34:23 +0200 (CEST)
Received: from mail-bn8nam12lp2175.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.175])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 24 May 2023 11:33:46 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6158.namprd03.prod.outlook.com (2603:10b6:5:399::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 15:33:42 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Wed, 24 May 2023
 15:33:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73cac4b4-fa48-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684942463;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=+Mzh/Qjr/yt7qldLKgdRLD+TEEQ2cI5k11kGRYowJoc=;
  b=M5w8ljyRr0CIoPMR+3IDoUgydFNQeDpmO/rGEPsagp/Y9rleiWC7hxLB
   +8LZzTqXP4Hd8UPlgYqsiaKwyupVcy85jyF4VJYNFh2etntLw+MQquUUp
   t/LPgyKf4gpFJtdQl2cfemsCCjujp3m8X/UoeS4ZjD73fPgmlwC3Mems6
   w=;
X-IronPort-RemoteIP: 104.47.55.175
X-IronPort-MID: 112715697
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: =?us-ascii?q?9a23=3A6NLLG2kj8ooc/94cmYHntCq9PPfXOXL09Vv+OUa?=
 =?us-ascii?q?oMktSSbvEe1Cowax0nfM7zg=3D=3D?=
X-Talos-MUID: =?us-ascii?q?9a23=3ATtz1vQ8L6rUoqMp8UbwDBSeQf9w46LqEJH0Rqoh?=
 =?us-ascii?q?Yp8mVFC9LMG2sjA3iFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="112715697"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jvq04jnntkFi+S/AHlOjQfHWqrU7Mm42HoztgFx9tfVco8v40m+W3QUEOkqKPzG1hgoaW7IO3Berb1LuRdNcR3hGMpO+JD98ShyK1SAfjIapTfICAYTS0gXK2WLeGgC/rYsCF6pWhbJXVG4HIAZKkuqd4XAj1q+uEVQ+CQ9vaXEKmVgdFt/YzkKKh75Y6iVnW4605c8sSjoYhGyPG7Gg1foThl2nYYLXhgR5P1UUv2mftih8hgGDfeIIn/LEi3gHHSiOJUq69fkjYGR5rsSB7RBGziyPCNfe1iHLDFuHentVPXWqiRNSXlRZMT+VMyXCa+707t/pd6Ke/V1TmVJRtQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=f3FBJFLRaaFplxSjxmtESrM0CxUPXgB4jYJz9Cxhqnk=;
 b=cGAKOSheen/JDw/xBIh4lLzrLi+VH70jSqG7iH+Mj737nCaA+iAQ2j44gF18T9QW073cr3lmX9qG4BiO8OEG4LdYZATo8ZL0pY1IkXLxlyMioCgSuvcSVm0kJ+nxtK+jdQNd3jKeR/jVwPGQEiyaJ4HKgfSRJwACZdZbl1fiXzzR634ej7AWWePpsuh51/2nr8EivnoLRXQw7+LSuGIYXCdJD+P7l2EIjQ45lVeJCCI8hk2aK8tW6+cFl4EXpxj3/+0/FR1SvqO/H4d5ok2vQMuJkMDugpvC1Cp5Se/Sm/c1Udf2ykAsY5L4gjRL4O/iHG2WysmnTL1pDFFye4YIZg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f3FBJFLRaaFplxSjxmtESrM0CxUPXgB4jYJz9Cxhqnk=;
 b=o35ZPq9NiiDQ+1/p4+AvNyLB9RB65N47pewIWQrygzpXSSwR7FtSlNg0Nqrt0LMLB7hViVMDkX4fgKc5NBt/VzX5sUqr/I6623UcRHJtGIoA4MPwG0K/S5Pto8k1Ev+AgL3xEtdiRZue6Yxa8tn8K4L9FbopGWDIMspxvlj2PHo=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 24 May 2023 17:33:36 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Message-ID: <ZG4uUO93Iub36IFp@Air-de-Roger>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4dmJuzNVUE5UIY@Air-de-Roger>
 <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
X-ClientProxiedBy: LO4P123CA0148.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:188::9) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM4PR03MB6158:EE_
X-MS-Office365-Filtering-Correlation-Id: 6267cd00-06b2-4054-5c0d-08db5c6c40f6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6267cd00-06b2-4054-5c0d-08db5c6c40f6
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 15:33:42.1282
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TQfOiR+eL6ybvLqBGC0PB4EtzfDvB36mslxi+Y4492QeNPQQp03gIvw32NpXFDaWZ+Hixgep/TyZkaBWJMDH9g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6158

On Wed, May 24, 2023 at 04:44:49PM +0200, Jan Beulich wrote:
> On 24.05.2023 16:22, Roger Pau Monné wrote:
> > On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
> >> Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
> >> console) are associated with DomXEN, not Dom0. This means that while
> >> looking for overlapping BARs such devices cannot be found on Dom0's list
> >> of devices; DomXEN's list also needs to be scanned.
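[A minimal, self-contained sketch of the two-list scan described above. Toy types only, standing in for Xen's actual pdev/vPCI structures; the names are hypothetical, not Xen's API:]

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for an entry on a domain's device list. */
struct toy_dev {
    uint64_t bar_addr, bar_size;
    struct toy_dev *next;
};

static bool ranges_overlap(uint64_t a, uint64_t as, uint64_t b, uint64_t bs)
{
    return a < b + bs && b < a + as;
}

/* Scan one domain's device list for a BAR overlapping [addr, addr+size). */
static bool list_has_overlap(const struct toy_dev *head,
                             uint64_t addr, uint64_t size)
{
    for ( ; head; head = head->next )
        if ( ranges_overlap(head->bar_addr, head->bar_size, addr, size) )
            return true;
    return false;
}

/*
 * Hidden devices live on DomXEN's list, so checking only Dom0's list
 * misses them: both lists have to be scanned.
 */
static bool bar_overlaps(const struct toy_dev *dom0_devs,
                         const struct toy_dev *domxen_devs,
                         uint64_t addr, uint64_t size)
{
    return list_has_overlap(dom0_devs, addr, size) ||
           list_has_overlap(domxen_devs, addr, size);
}
```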
> >>
> >> Suppress vPCI init altogether for r/o devices (which constitute a subset
> >> of hidden ones).
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> RFC: The modify_bars() change is intentionally mis-formatted, as the
> >>      necessary re-indentation would make the diff difficult to read. At
> >>      this point I'd merely like to gather input towards possible better
> >>      approaches to solve the issue (not the least because quite possibly
> >>      there are further places needing changing).
> > 
> > I think we should also handle the case of !pdev->vpci for the hardware
> > domain, as it's allowed for the vpci_add_handlers() call in
> > setup_one_hwdom_device() to fail and the device would still be assigned
> > to the hardware domain.
> > 
> > I can submit that as a separate bugfix, as it's already an issue
> > without taking r/o or hidden devices into account.
> 
> Yeah, I think that wants dealing with separately. I'm not actually sure
> though that "is allowed to fail" is proper behavior ...

A better option would be to mark the device r/o if the
vpci_add_handlers() call fails in setup_one_hwdom_device(), as that
would prevent dom0 from accessing the native MSI(-X) capabilities.

> But anyway - I take this as you agreeing to go that route, which is the
> prereq for me to actually make a well-formed patch. Please shout soon
> if that's a misunderstanding of mine.

Sure, will send the fix later today or tomorrow so that you can
rebase.

> >> RFC: Whether to actually suppress vPCI init is up for debate; adding the
> >>      extra logic is following Roger's suggestion (I'm not convinced it is
> >>      useful to have). If we want to keep that, a 2nd question would be
> >>      whether to keep it in vpci_add_handlers(): Both of its callers (can)
> >>      have a struct pci_seg readily available, so checking ->ro_map at the
> >>      call sites would be easier.
> > 
> > But that would duplicate the logic into the callers, which doesn't
> > seem very nice to me, and makes it easier to make mistakes if further
> > callers are added and r/o is not checked there.
> 
> Right, hence why I didn't do it the alternative way from the beginning.
> Both approaches have a pro and a con.
> 
> But prior to answering the 2nd question, what about the 1st one? Is it
> really worth having the extra logic?

Why would we want to do all the vPCI initialization for r/o devices?
None of the handlers that would be set up could ever get called, so I
see it the other way around: not short-circuiting vpci_add_handlers()
for r/o devices is a waste of time and resources, because none of the
state set up would be used anyway.
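[As a rough illustration of the short-circuit being argued for: a toy bitmap stands in for the segment's ro_map, and the function name/signature is hypothetical, not the real vpci_add_handlers():]

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for a segment's r/o device bitmap (256 devfn slots). */
struct toy_seg {
    uint64_t ro_map[4];
};

static bool toy_is_ro(const struct toy_seg *seg, unsigned int devfn)
{
    return seg->ro_map[devfn / 64] & (1ULL << (devfn % 64));
}

/*
 * Short-circuit: for r/o devices none of the handlers that would be set
 * up can ever be invoked, so skip the whole initialization.
 */
static int toy_vpci_add_handlers(const struct toy_seg *seg, unsigned int devfn,
                                 unsigned int *handlers_installed)
{
    if ( toy_is_ro(seg, devfn) )
        return 0;                 /* nothing to do, report success */
    ++*handlers_installed;        /* stands in for the real handler setup */
    return 0;
}
```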

> >  And
> > hence doing those before or after normal devices will lead to the same
> > result.  The loop in modify_bars() is there to avoid attempting to map
> > the same range twice, or to unmap a range while there are devices
> > still using it, but the unmap is never done during initial device
> > setup.
> 
> Okay, so maybe indeed there's no effect on the final result. Yet the
> anomaly still bothered me when seeing it in the logs (it actually misled
> me initially in my conclusions as to what was actually going on), when
> I added a printk() to that new "continue" path. We would skip hidden
> devices up until they get initialized themselves. There would be
> less skipping if all (there aren't going to be many) DomXEN devices
> were initialized first.

I think that just makes the logic more complicated for no reason. The
only reason you don't see this with devices assigned to dom0 is that
device addition is interleaved with calls to vpci_add_handlers().
However, it would also be valid to add all devices to dom0 first and
then call vpci_add_handlers() for each of them.

> > One further question is whether we want to map BARs of r/o devices
> > into the hardware domain physmap.  Not sure that's very helpful, as
> > dom0 won't be allowed to modify any of the config space bits of those
> > devices, so even attempts to size the BARs will fail.  I wonder what
> > kind of issues this can cause to dom0 OSes.
> 
> This is what Linux (6.3) says:
> 
> pci 0000:02:00.1: [Firmware Bug]: reg 0x10: invalid BAR (can't size)
> pci 0000:02:00.1: [Firmware Bug]: reg 0x14: invalid BAR (can't size)
> pci 0000:02:00.1: [Firmware Bug]: reg 0x24: invalid BAR (can't size)

OK, seems fine then.  There's no point in mapping the BARs of r/o
devices to the dom0 physmap, as the domain is unable to size them in
the first place.
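[The failure mode behind those Linux messages is easy to see in a toy model of a 32-bit memory BAR. This is illustrative C only, not Linux's __pci_read_base() or any Xen code: sizing writes all-ones and reads back, and on an r/o device the write is dropped, so no bit toggles and the size can't be derived.]

```c
#include <stdint.h>

/* Toy 32-bit memory BAR: low 4 bits are flags, address bits below
 * log2(size) are hardwired to zero, and r/o devices ignore writes. */
struct toy_bar {
    uint32_t val;      /* current register contents (base | flags)  */
    uint32_t size;     /* power of two; writable bits are ~(size-1) */
    int ro;            /* 1 if config space writes are dropped      */
};

static void toy_bar_write(struct toy_bar *b, uint32_t v)
{
    if ( !b->ro )
        b->val = (v & ~(b->size - 1)) | (b->val & 0xfu);
}

static uint32_t toy_bar_read(const struct toy_bar *b)
{
    return b->val;
}

/* Sizing as an OS does it: save, write all-ones, read back, restore.
 * Returns 0 when the BAR can't be sized (cf. Linux's "invalid BAR"). */
static uint32_t toy_size_bar(struct toy_bar *b)
{
    uint32_t orig = toy_bar_read(b);
    toy_bar_write(b, 0xffffffffu);
    uint32_t probe = toy_bar_read(b) & ~0xfu;  /* mask the flag bits */
    toy_bar_write(b, orig);                    /* restore the base   */

    if ( probe == 0 || probe == (orig & ~0xfu) )
        return 0;          /* no bit toggled: can't size */
    return ~probe + 1;     /* lowest writable bit gives the size */
}
```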

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 24 15:44:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:44:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539156.839756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qf4-0001qc-2d; Wed, 24 May 2023 15:44:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539156.839756; Wed, 24 May 2023 15:44:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qf3-0001qV-Ud; Wed, 24 May 2023 15:44:25 +0000
Received: by outflank-mailman (input) for mailman id 539156;
 Wed, 24 May 2023 15:44:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l7k3=BN=amazon.de=prvs=501c83221=mheyne@srs-se1.protection.inumbo.net>)
 id 1q1qf2-0001qP-B6
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:44:24 +0000
Received: from smtp-fw-6001.amazon.com (smtp-fw-6001.amazon.com [52.95.48.154])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d9cbcd41-fa49-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 17:44:22 +0200 (CEST)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-iad-1a-m6i4x-93c3b254.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-6001.iad6.amazon.com with
 ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2023 15:43:32 +0000
Received: from EX19MTAUWC002.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-iad-1a-m6i4x-93c3b254.us-east-1.amazon.com (Postfix)
 with ESMTPS id C180AE4889; Wed, 24 May 2023 15:43:31 +0000 (UTC)
Received: from EX19MTAUWA001.ant.amazon.com (10.250.64.204) by
 EX19MTAUWC002.ant.amazon.com (10.250.64.143) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.2.1118.26; Wed, 24 May 2023 15:43:31 +0000
Received: from dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com (10.15.57.183)
 by mail-relay.amazon.com (10.250.64.204) with Microsoft SMTP Server
 id
 15.2.1118.26 via Frontend Transport; Wed, 24 May 2023 15:43:31 +0000
Received: by dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com (Postfix,
 from userid 5466572)
 id C18DD964; Wed, 24 May 2023 15:43:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9cbcd41-fa49-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
  t=1684943062; x=1716479062;
  h=date:from:to:subject:message-id:references:mime-version:
   in-reply-to;
  bh=9/BFiR9WjSZPq/EuaxN0jvRDDD5ugdAe5QjD82M1rfs=;
  b=HGJlH/oIdvkNfHuDmWp/x95zW5+o42GOXyLReUl0j44zIZKrEAF53yW9
   JibG1YvfvoN7dCfNSbiOP68jD3v/EyJr9sw7QWRqou7D69OV43lX08BuR
   Zii5xOnzO1sDIEoVxcFsFI9FeH8p3w9lMWQgAgqNdMbsEeeeabRmWLKP5
   A=;
X-IronPort-AV: E=Sophos;i="6.00,189,1681171200"; 
   d="scan'208";a="335305194"
Date: Wed, 24 May 2023 15:43:30 +0000
From: Maximilian Heyne <mheyne@amazon.de>
To: Juergen Gross <jgross@suse.com>, Bjorn Helgaas <bhelgaas@google.com>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
	<x86@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>, Marc Zyngier
	<maz@kernel.org>, Kevin Tian <kevin.tian@intel.com>, Jason Gunthorpe
	<jgg@ziepe.ca>, Ashok Raj <ashok.raj@intel.com>, "Ahmed S. Darwish"
	<darwi@linutronix.de>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	<xen-devel@lists.xenproject.org>, <linux-pci@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] x86/pci/xen: populate MSI sysfs entries
Message-ID: <20230524154330.GA52988@dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com>
References: <20230503131656.15928-1-mheyne@amazon.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20230503131656.15928-1-mheyne@amazon.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk

On Wed, May 03, 2023 at 01:16:53PM +0000, Maximilian Heyne wrote:
> Commit bf5e758f02fc ("genirq/msi: Simplify sysfs handling") reworked the
> creation of sysfs entries for MSI IRQs. The creation used to be in
> msi_domain_alloc_irqs_descs_locked after calling ops->domain_alloc_irqs.
> Then it moved into __msi_domain_alloc_irqs which is an implementation of
> domain_alloc_irqs. However, Xen comes with the only other implementation
> of domain_alloc_irqs and hence doesn't run the sysfs population code
> anymore.
> 
> Commit 6c796996ee70 ("x86/pci/xen: Fixup fallout from the PCI/MSI
> overhaul") set the flag MSI_FLAG_DEV_SYSFS for the xen msi_domain_info
> but that doesn't actually have an effect because Xen uses its own
> domain_alloc_irqs implementation.
> 
> Fix this by making use of the fallback functions for sysfs population.
> 
> Fixes: bf5e758f02fc ("genirq/msi: Simplify sysfs handling")
> Signed-off-by: Maximilian Heyne <mheyne@amazon.de>
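[The breakage the commit message describes can be modeled with plain function pointers: once an ops table overrides the default allocation hook, side effects of the default path (here, sysfs population) no longer happen unless the override invokes the fallback itself. A toy model only, with hypothetical names — not the kernel's actual msi_domain API:]

```c
#include <stdbool.h>
#include <stddef.h>

struct toy_info;

struct toy_ops {
    int (*domain_alloc_irqs)(struct toy_info *);
};

struct toy_info {
    const struct toy_ops *ops;
    bool sysfs_populated;
};

/* Fallback helper the default allocation path runs after allocating. */
static void toy_populate_sysfs(struct toy_info *info)
{
    info->sysfs_populated = true;
}

static int toy_default_alloc(struct toy_info *info)
{
    /* ... default IRQ allocation ... */
    toy_populate_sysfs(info);
    return 0;
}

/* An override that forgets the fallback: IRQs work, sysfs entries don't. */
static int toy_xen_alloc_buggy(struct toy_info *info)
{
    (void)info;                /* ... Xen-specific allocation only ... */
    return 0;
}

/* The fixed override calls the population helper itself. */
static int toy_xen_alloc_fixed(struct toy_info *info)
{
    /* ... Xen-specific allocation ... */
    toy_populate_sysfs(info);
    return 0;
}

static int toy_alloc(struct toy_info *info)
{
    return info->ops && info->ops->domain_alloc_irqs
           ? info->ops->domain_alloc_irqs(info)
           : toy_default_alloc(info);
}
```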


Any other feedback on this one? This is definitely a bug, but I understand
that there might be different ways to fix it.



Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879





From xen-devel-bounces@lists.xenproject.org Wed May 24 15:47:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:47:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539160.839765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qiK-0002To-Gt; Wed, 24 May 2023 15:47:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539160.839765; Wed, 24 May 2023 15:47:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qiK-0002Th-DM; Wed, 24 May 2023 15:47:48 +0000
Received: by outflank-mailman (input) for mailman id 539160;
 Wed, 24 May 2023 15:47:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CqhT=BN=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q1qiJ-0002Ta-BF
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:47:47 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 53f4b0f8-fa4a-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 17:47:46 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C9F981F88C;
 Wed, 24 May 2023 15:47:45 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 5D263133E6;
 Wed, 24 May 2023 15:47:45 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id CsNdFaExbmRyUgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 24 May 2023 15:47:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53f4b0f8-fa4a-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1684943265; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=x7bQfub+mU8p72YGXdoxNHqyjDrsX69NoywxvxclMH8=;
	b=ZLIakZ7u2CeMvr4oXafV/pMx8QWHodGRVKibG2FSPeYru2saQa2v1oWy38l99Sn2qykzSu
	NHDQTj1egp/RtLTElQ92rCeW6T36wPDDbiPVC8Lx0Ty+Se0oSxZgPtV6S7mNkKmk1I7jxP
	IGOxYR7eaeECip5vg1o5V2ha9UFKqpk=
Message-ID: <c0f7cf97-f7ea-83f2-3a9c-f77f82dfb689@suse.com>
Date: Wed, 24 May 2023 17:47:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH] x86/pci/xen: populate MSI sysfs entries
Content-Language: en-US
To: Maximilian Heyne <mheyne@amazon.de>, Thomas Gleixner
 <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, linux-pci@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20230503131656.15928-1-mheyne@amazon.de>
 <20230524154330.GA52988@dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com>
Cc: Ashok Raj <ashok.raj@intel.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Marc Zyngier <maz@kernel.org>, "Ahmed S. Darwish" <darwi@linutronix.de>,
 Kevin Tian <kevin.tian@intel.com>, Jason Gunthorpe <jgg@ziepe.ca>,
 Bjorn Helgaas <bhelgaas@google.com>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230524154330.GA52988@dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------tY7iSs0AlaL6tDi0XFsZuMcH"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------tY7iSs0AlaL6tDi0XFsZuMcH
Content-Type: multipart/mixed; boundary="------------h7YXw7ZzpCvooqKrwptAHfub";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Maximilian Heyne <mheyne@amazon.de>, Thomas Gleixner
 <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, linux-pci@vger.kernel.org,
 linux-kernel@vger.kernel.org
Cc: Ashok Raj <ashok.raj@intel.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Marc Zyngier <maz@kernel.org>, "Ahmed S. Darwish" <darwi@linutronix.de>,
 Kevin Tian <kevin.tian@intel.com>, Jason Gunthorpe <jgg@ziepe.ca>,
 Bjorn Helgaas <bhelgaas@google.com>
Message-ID: <c0f7cf97-f7ea-83f2-3a9c-f77f82dfb689@suse.com>
Subject: Re: [PATCH] x86/pci/xen: populate MSI sysfs entries
References: <20230503131656.15928-1-mheyne@amazon.de>
 <20230524154330.GA52988@dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com>
In-Reply-To: <20230524154330.GA52988@dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com>

--------------h7YXw7ZzpCvooqKrwptAHfub
Content-Type: multipart/mixed; boundary="------------mkAVHjLtPHiTlxZTuoU0N2Fu"

--------------mkAVHjLtPHiTlxZTuoU0N2Fu
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.05.23 17:43, Maximilian Heyne wrote:
> On Wed, May 03, 2023 at 01:16:53PM +0000, Maximilian Heyne wrote:
>> Commit bf5e758f02fc ("genirq/msi: Simplify sysfs handling") reworked the
>> creation of sysfs entries for MSI IRQs. The creation used to be in
>> msi_domain_alloc_irqs_descs_locked after calling ops->domain_alloc_irqs.
>> Then it moved into __msi_domain_alloc_irqs which is an implementation of
>> domain_alloc_irqs. However, Xen comes with the only other implementation
>> of domain_alloc_irqs and hence doesn't run the sysfs population code
>> anymore.
>>
>> Commit 6c796996ee70 ("x86/pci/xen: Fixup fallout from the PCI/MSI
>> overhaul") set the flag MSI_FLAG_DEV_SYSFS for the xen msi_domain_info
>> but that doesn't actually have an effect because Xen uses it's own
>> domain_alloc_irqs implementation.
>>
>> Fix this by making use of the fallback functions for sysfs population.
>>
>> Fixes: bf5e758f02fc ("genirq/msi: Simplify sysfs handling")
>> Signed-off-by: Maximilian Heyne <mheyne@amazon.de>
> 
> 
> Any other feedback on this one? This is definitely a bug but I understand that
> there might be different ways to fix it.

I'd be happy to take the patch via the Xen tree, but I think x86 maintainers
should at least ack that.


Juergen

--------------mkAVHjLtPHiTlxZTuoU0N2Fu--

--------------h7YXw7ZzpCvooqKrwptAHfub--

--------------tY7iSs0AlaL6tDi0XFsZuMcH
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRuMaAFAwAAAAAACgkQsN6d1ii/Ey/d
XQf+MHz9Oxk6qpS+Bteq2DR9mN3i32YwO+FxkAYG+F7obmnMtxktZFz4kLUeOW0JfyXIyZt4lE5i
CqCvVRcD9Ip8cJFlWGQH3n4pZPskE5J4bd0emu5zZJkSV6SZIFWmJwRpOPY4H2vx/Wi5jJgP+JDN
SRMbAe8K5JtW3nwky4hH3gGNBV20b4Z2JcI33AlS5rTvjjeQZBvFs+dFLtHCK6u/iLDfVQCEkO5p
+yeE3eUFdKTnaWxpFhkFSHz7PGLYSC7xsCKsrUbA4wcldye6xNu6vJTeMes5x+q48w9WUr10OtFD
VqxWSY0jO7BxQbw2+lmL9SX4aKIp7zHDkhwDt58LBw==
=8m3b
-----END PGP SIGNATURE-----

--------------tY7iSs0AlaL6tDi0XFsZuMcH--


From xen-devel-bounces@lists.xenproject.org Wed May 24 15:57:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:57:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539164.839776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qrM-0003xn-Dr; Wed, 24 May 2023 15:57:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539164.839776; Wed, 24 May 2023 15:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qrM-0003xg-Aq; Wed, 24 May 2023 15:57:08 +0000
Received: by outflank-mailman (input) for mailman id 539164;
 Wed, 24 May 2023 15:57:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q8ut=BN=citrix.com=prvs=5011a8a4f=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q1qrL-0003xa-HR
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:57:07 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a07fbc79-fa4b-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 17:57:05 +0200 (CEST)
Received: from mail-dm6nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 24 May 2023 11:57:02 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA0PR03MB5531.namprd03.prod.outlook.com (2603:10b6:806:bd::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 15:57:01 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Wed, 24 May 2023
 15:57:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a07fbc79-fa4b-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684943825;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=06QIJvnfgSCYeDpOUkD61jE14yPE/j9gzZT/zZITqr4=;
  b=hVlNhI8QQQOQtVcjduEk5wxlRdiwar9WokIP56F/oTTaYJInHUR4j+OU
   CBuZSldwiQ0LqaxC3TrXlRaN8R2StNRnPoM7MbwULpLasMfLrnU2e/uRD
   OphbBwsLqtCExXfuViuzLqHfeyMxBkEtQ/pqIhXQNgouhEY2aYP3J4sLk
   w=;
X-IronPort-RemoteIP: 104.47.59.176
X-IronPort-MID: 110649438
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="110649438"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SkY0LQSYXaehWiyvzyNqitoY3vltcjD6pFbFdTsM9BSfDuTtwvl6K0ZkfFONHJQkozhWy/fy8R/iBS/BhBRHkVGhWcWCnQ2BzRHOLRB7OxJJziyUdyRIcH/CNpr2aX6fokbG5yg0u4gqA3Hs1nKpPb4vqSmMoCoB9r66PlQzh+iV0BeFCvtDad9LgwEcSWUOkSuJwZy5zU1qK6oJqyToJP2BYHKDnU0bOgL6QFN2aYeRwbNOzZHwPnIZSd7JMSUBZHdotaLtfyPY84W5IDVFq1iD3oPtZd7m7CUEe0Ng8NABMDHXmZ3Bvl2fbk3+0umvPQpTOVl4TuUdKOX/eDpFSA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qOuUxbvUgvn+5VOn2eOqg0nklFLGERDRMTjiWI5GnQc=;
 b=f+e8P04hOsLu1y4TkTZ80z7tFkK/hO8bCrSCnMGf2xAecz4Wk0jsZ1qoWZExmgJ0meTcDTgYlPrQpyIg2V5VYLol2P0De7i1bcKE8gcVZ/Sv82JGnB5kEuYs13bxE3UFufTRhf7ggr1R4Lm0DZuGamXrSJERaegJ+Hy7CBfoUF4vbbHtQ9Lft0DZMsu2bfNLuoNTsF9QHW4NT1x0bAygrb3up7Q8xfCpcx8TYQ+7YXDNKKpj9ED1huuV0jkfbiE2b6KH+Mv8SN+xhx7Tya5oVKuDnmlTJbALnfOdSMZLSvaZAQIjXMReNPOyZoU5PVSnus2CkauNRQ9gJiOHGUa0sQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qOuUxbvUgvn+5VOn2eOqg0nklFLGERDRMTjiWI5GnQc=;
 b=tEok5xHsMOXazDg33A9RjvwRTPykIr/8CzgSYGM5T1jClwqtgBeznHgl4G0PrH/fLyeMcFspdUuvQk4vcjDsbvRcee0QYHXkqsvHlqd/bWdfzBs6AF/0zysuGCI+mDU4ku0vs/tsBgRDljm7uhT30oNdUr5mq/xE+b4JKE6h2gM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 24 May 2023 17:56:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Message-ID: <ZG4zx+TvUWTCEMh3@Air-de-Roger>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
X-ClientProxiedBy: LO4P123CA0465.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1aa::20) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA0PR03MB5531:EE_
X-MS-Office365-Filtering-Correlation-Id: 9c66fb08-d32b-4f6a-7e77-08db5c6f82b2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9c66fb08-d32b-4f6a-7e77-08db5c6f82b2
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 15:57:01.0033
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: IHPzm1JLdpU56xy24zMbsMsUvFHuGPYiO23xyc7WjWpAtQOvMGbE8rlbLIjhNX5Uylc+B/XMqLMzvxe8MqyCKw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5531

On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>      struct vpci_header *header = &pdev->vpci->header;
>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>      struct pci_dev *tmp, *dev = NULL;
> +    const struct domain *d;
>      const struct vpci_msix *msix = pdev->vpci->msix;
>      unsigned int i;
>      int rc;
> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
>  
>      /*
>       * Check for overlaps with other BARs. Note that only BARs that are
> -     * currently mapped (enabled) are checked for overlaps.
> +     * currently mapped (enabled) are checked for overlaps. Note also that
> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
>       */
> -    for_each_pdev ( pdev->domain, tmp )
> +for ( d = pdev->domain; ; d = dom_xen ) {//todo

Looking at this again, I think this is slightly more complex: at
runtime dom0 will get here with pdev->domain == hardware_domain OR
dom_xen, and hence you also need to account for the fact that devices
with pdev->domain == dom_xen must iterate over the devices that belong
to the hardware_domain, ie:

for ( d = pdev->domain; ;
      d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )

And we likely want to limit this to devices that belong to the
hardware_domain or to dom_xen (in preparation for vPCI being used for
domUs).
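
A self-contained sketch of the iteration shape being suggested (the
struct domain stand-ins, the visited[] bookkeeping, and the explicit
two-pass break condition are all assumptions for illustration; in
modify_bars() the loop body would be the for_each_pdev() overlap
check):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for Xen's domain handles. */
struct domain { const char *name; };

static struct domain hwdom_obj  = { "hardware_domain" };
static struct domain domxen_obj = { "dom_xen" };
static struct domain *hardware_domain = &hwdom_obj;
static struct domain *dom_xen = &domxen_obj;

/*
 * Visit the owning domain first, then its counterpart
 * (hardware_domain <-> dom_xen), exactly once each.
 * Records the visit order and returns the number of domains visited.
 */
static int visit_domains(struct domain *owner, struct domain *visited[2])
{
    struct domain *d;
    int n = 0;

    for ( d = owner; ;
          d = (owner == dom_xen) ? hardware_domain : dom_xen )
    {
        visited[n++] = d;
        /* In modify_bars() this is where for_each_pdev(d, tmp) runs. */
        if ( n == 2 )   /* assumed break condition: both domains done */
            break;
    }

    return n;
}
```

So a dom0-owned device scans hardware_domain then dom_xen, and a
hidden (dom_xen-owned) device scans dom_xen then hardware_domain.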

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 24 15:59:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 15:59:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539170.839786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qu0-0004bK-0k; Wed, 24 May 2023 15:59:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539170.839786; Wed, 24 May 2023 15:59:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1qtz-0004bD-U2; Wed, 24 May 2023 15:59:51 +0000
Received: by outflank-mailman (input) for mailman id 539170;
 Wed, 24 May 2023 15:59:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4Zbm=BN=intel.com=dave.hansen@srs-se1.protection.inumbo.net>)
 id 1q1qty-0004b7-3B
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 15:59:50 +0000
Received: from mga02.intel.com (mga02.intel.com [134.134.136.20])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 00d9dc89-fa4c-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 17:59:47 +0200 (CEST)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 24 May 2023 08:59:45 -0700
Received: from kknopp-mobl1.amr.corp.intel.com (HELO [10.212.186.147])
 ([10.212.186.147])
 by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 24 May 2023 08:59:44 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00d9dc89-fa4c-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1684943987; x=1716479987;
  h=message-id:date:mime-version:subject:to:cc:references:
   from:in-reply-to:content-transfer-encoding;
  bh=Wx0Ob/xebu07nSB9TJ+GF90jaV0TCKSH0UQ2GQEg59o=;
  b=CH85vFvrR+4NpRd4ezVB0OpvX04ZhRtgrngvhvcsCCGPNJqZM33EE+cG
   WW1M2MBHcqKWaRjRrcnNuECBcASy1cans7CozSOQXQPhEnmDHsOIgQpN0
   +HuRAOqFp2Xwo6ZX/O/iSSvJa5eybEaDkPZnjFipeKW0iyQ1uAtqZZyNF
   CbwI5YDG6YQj0BNSzeh4XBBqACpzjEbqnEd8C1x68D6CKYd4W0vPfpsBU
   M70ylyqTKnv4OUDm505scBUb5YxqSwySZJCzoUpuRYDWFRv4s03f/FPtd
   aFsyicjO1/XX7Uac0ZZ18Z6yILJvcbJBNmMQlLN163WKkHTUadXKRxyoC
   g==;
X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="343073977"
X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; 
   d="scan'208";a="343073977"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="654853056"
X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; 
   d="scan'208";a="654853056"
Message-ID: <571c2a4a-4832-e64e-4f3c-8e7c8a795579@intel.com>
Date: Wed, 24 May 2023 09:00:12 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] x86/pci/xen: populate MSI sysfs entries
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>, Maximilian Heyne <mheyne@amazon.de>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, linux-pci@vger.kernel.org,
 linux-kernel@vger.kernel.org
Cc: Ashok Raj <ashok.raj@intel.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Marc Zyngier <maz@kernel.org>, "Ahmed S. Darwish" <darwi@linutronix.de>,
 Kevin Tian <kevin.tian@intel.com>, Jason Gunthorpe <jgg@ziepe.ca>,
 Bjorn Helgaas <bhelgaas@google.com>
References: <20230503131656.15928-1-mheyne@amazon.de>
 <20230524154330.GA52988@dev-dsk-mheyne-1b-c1362c4d.eu-west-1.amazon.com>
 <c0f7cf97-f7ea-83f2-3a9c-f77f82dfb689@suse.com>
From: Dave Hansen <dave.hansen@intel.com>
In-Reply-To: <c0f7cf97-f7ea-83f2-3a9c-f77f82dfb689@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 5/24/23 08:47, Juergen Gross wrote:
>> Any other feedback on this one? This is definitely a bug but I
>> understand that
>> there might be different ways to fix it.
> 
> I'd be happy to take the patch via the Xen tree, but I think x86
> maintainers should at least ack that.

Ack.

Works for me.


From xen-devel-bounces@lists.xenproject.org Wed May 24 16:06:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 16:06:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539174.839795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1r0N-0006bL-M1; Wed, 24 May 2023 16:06:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539174.839795; Wed, 24 May 2023 16:06:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1r0N-0006bE-JQ; Wed, 24 May 2023 16:06:27 +0000
Received: by outflank-mailman (input) for mailman id 539174;
 Wed, 24 May 2023 16:06:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V2qZ=BN=citrix.com=prvs=5010cee81=ross.lagerwall@srs-se1.protection.inumbo.net>)
 id 1q1r0L-0006b8-Op
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 16:06:25 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ecf495ba-fa4c-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 18:06:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecf495ba-fa4c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1684944383;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=tm65vq3RdyIRKBQDq/1y/ORB5HYwi09XnCHiE+t3B2o=;
  b=IXOyRW1O1dwyTZFgkjs6FBkkW6963VO7lDUxOdBOy9fNuwTVoKa4cFf1
   T7ktKkPK07YpHcX/ZRkMBQ0aaSgTvC4yJz4WiOHmv1D3ijgXVtFFWoDZo
   qlxUyslgFWoprcmAfngHgxwwSXR4wfHO0Bjr9UlbKN28Aj1UPJVMjvF0g
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109010816
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,189,1681185600"; 
   d="scan'208";a="109010816"
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: <linux-kernel@vger.kernel.org>, <xen-devel@lists.xenproject.org>
CC: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
	<x86@kernel.org>, Juergen Gross <jgross@suse.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, Peter Jones <pjones@redhat.com>, "Konrad
 Rzeszutek Wilk" <konrad@kernel.org>, Ross Lagerwall
	<ross.lagerwall@citrix.com>
Subject: [PATCH] iscsi_ibft: Fix finding the iBFT under Xen Dom 0
Date: Wed, 24 May 2023 17:05:58 +0100
Message-ID: <20230524160558.3686226-1-ross.lagerwall@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Since firmware doesn't indicate the iBFT in the E820, add a reserved
region so that it gets identity mapped when running as Dom 0 so that it
is possible to search for it. Move the call to reserve_ibft_region()
later so that it is called after the Xen identity mapping adjustments
are applied.

Finally, instead of using isa_bus_to_virt() which doesn't do the right
thing under Xen, use early_memremap() like the dmi_scan code does.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
---

I tested this to work under both Xen and native Linux.

 arch/x86/kernel/setup.c            |  2 +-
 arch/x86/xen/setup.c               |  8 +++++++-
 drivers/firmware/iscsi_ibft_find.c | 24 +++++++++++++++++-------
 3 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 16babff771bd..616b80507abd 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -796,7 +796,6 @@ static void __init early_reserve_memory(void)
 
 	memblock_x86_reserve_range_setup_data();
 
-	reserve_ibft_region();
 	reserve_bios_regions();
 	trim_snb_memory();
 }
@@ -1032,6 +1031,7 @@ void __init setup_arch(char **cmdline_p)
 	if (efi_enabled(EFI_BOOT))
 		efi_init();
 
+	reserve_ibft_region();
 	dmi_setup();
 
 	/*
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index c2be3efb2ba0..daab59df3b99 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -772,8 +772,14 @@ char * __init xen_memory_setup(void)
 	 * UNUSABLE regions in domUs are not handled and will need
 	 * a patch in the future.
 	 */
-	if (xen_initial_domain())
+	if (xen_initial_domain()) {
 		xen_ignore_unusable();
+		/* Reserve 0.5 MiB to 1 MiB region so iBFT can be found */
+		xen_e820_table.entries[xen_e820_table.nr_entries].addr = 0x80000;
+		xen_e820_table.entries[xen_e820_table.nr_entries].size = 0x80000;
+		xen_e820_table.entries[xen_e820_table.nr_entries].type = E820_TYPE_RESERVED;
+		xen_e820_table.nr_entries++;
+	}
 
 	/* Make sure the Xen-supplied memory map is well-ordered. */
 	e820__update_table(&xen_e820_table);
diff --git a/drivers/firmware/iscsi_ibft_find.c b/drivers/firmware/iscsi_ibft_find.c
index 94b49ccd23ac..e3c1449987dd 100644
--- a/drivers/firmware/iscsi_ibft_find.c
+++ b/drivers/firmware/iscsi_ibft_find.c
@@ -52,9 +52,9 @@ static const struct {
  */
 void __init reserve_ibft_region(void)
 {
-	unsigned long pos;
+	unsigned long pos, virt_pos = 0;
 	unsigned int len = 0;
-	void *virt;
+	void *virt = NULL;
 	int i;
 
 	ibft_phys_addr = 0;
@@ -70,13 +70,20 @@ void __init reserve_ibft_region(void)
 		 * so skip that area */
 		if (pos == VGA_MEM)
 			pos += VGA_SIZE;
-		virt = isa_bus_to_virt(pos);
+
+		/* Map page by page */
+		if (offset_in_page(pos) == 0) {
+			if (virt)
+				early_memunmap(virt, PAGE_SIZE);
+			virt = early_memremap_ro(pos, PAGE_SIZE);
+			virt_pos = pos;
+		}
 
 		for (i = 0; i < ARRAY_SIZE(ibft_signs); i++) {
-			if (memcmp(virt, ibft_signs[i].sign, IBFT_SIGN_LEN) ==
-			    0) {
+			if (memcmp(virt + (pos - virt_pos), ibft_signs[i].sign,
+				   IBFT_SIGN_LEN) == 0) {
 				unsigned long *addr =
-				    (unsigned long *)isa_bus_to_virt(pos + 4);
+				    (unsigned long *)(virt + pos - virt_pos + 4);
 				len = *addr;
 				/* if the length of the table extends past 1M,
 				 * the table cannot be valid. */
@@ -84,9 +91,12 @@ void __init reserve_ibft_region(void)
 					ibft_phys_addr = pos;
 					memblock_reserve(ibft_phys_addr, PAGE_ALIGN(len));
 					pr_info("iBFT found at %pa.\n", &ibft_phys_addr);
-					return;
+					goto out;
 				}
 			}
 		}
 	}
+
+out:
+	early_memunmap(virt, PAGE_SIZE);
 }
-- 
2.31.1
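
[Editor's note: the page-windowed scan in the iscsi_ibft_find.c hunk
above can be modeled in plain user-space C. This is only an
illustrative sketch, not the kernel code: find_sign(), the 16-byte
search stride, and a flat buffer standing in for early_memremap_ro()
windows are assumptions for demonstration.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096u
#define SIGN_LEN  4

/*
 * Scan [mem, mem + len) in 16-byte steps for a 4-byte signature,
 * "remapping" the window only when crossing a page boundary -- the
 * same shape as the early_memremap_ro()/early_memunmap() loop in the
 * patch, where only one page is mapped at a time.
 * Returns the offset of the signature, or (size_t)-1 if not found.
 */
static size_t find_sign(const unsigned char *mem, size_t len,
                        const char *sign)
{
    const unsigned char *virt = NULL;   /* currently "mapped" window */
    size_t virt_pos = 0;
    size_t pos;

    for (pos = 0; pos + SIGN_LEN <= len; pos += 16) {
        if ((pos % PAGE_SIZE) == 0) {
            /* early_memunmap(virt) + early_memremap_ro(pos) stand-in */
            virt = mem + pos;
            virt_pos = pos;
        }
        if (memcmp(virt + (pos - virt_pos), sign, SIGN_LEN) == 0)
            return pos;
    }

    return (size_t)-1;
}
```

The real patch additionally reads the table length through the mapped
window, does the memblock_reserve()/ibft_phys_addr bookkeeping, and
unmaps the final window at the out: label.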



From xen-devel-bounces@lists.xenproject.org Wed May 24 16:11:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 16:11:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539178.839806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1r51-00082K-8d; Wed, 24 May 2023 16:11:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539178.839806; Wed, 24 May 2023 16:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1r51-00082D-5t; Wed, 24 May 2023 16:11:15 +0000
Received: by outflank-mailman (input) for mailman id 539178;
 Wed, 24 May 2023 16:11:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uwFZ=BN=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q1r50-000827-Eu
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 16:11:14 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20616.outbound.protection.outlook.com
 [2a01:111:f400:fe12::616])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9710cfd3-fa4d-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 18:11:07 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8607.eurprd04.prod.outlook.com (2603:10a6:102:21a::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 16:11:05 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6411.029; Wed, 24 May 2023
 16:11:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9710cfd3-fa4d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RZiWlxdS7FO5sQT/6Ne0RIcxcs3a2rr58GOG2GWdoP+b4khxw7idDsXqfJM0adJPR8Hu7d3os2KOuaGcuHvaMNQFxbS1DfPaTwBstNSQ6x0p3aaw5TyisbmCRxpm+CzfxNRBd3tt0z4Qd5+GEXq3GYoFhqqT0X0mfA16gUvzZcfPsL4uH+C2Sk4WHj37qLyeM9qc3tLd4whA81iReTOaAtp9ij0kxKP78wLM1og0jSgWPSBGgh90y/EiKBFEEddWKronOW5a46gUQSAZbkBNaAHIaH9lG3US/dG6obbvzNxeiJxUjkE/aB63UIuVwclxJLX6tSeQj2bx7Obzw0YIRQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cKlbmUGuOtAnnhs3muv7wdY4OoUhyDfht/b6x81YJCc=;
 b=XBPvs5u4AkW/MzLqsB6TYFYtdlzMGxWdNHwTeLxtlbphS6njOYuNiGLTiNknMzIYGYyj1KseodkQN3t5s/zFny45zhgq+LGuI9uIVcnFksyfEi9jfARuEsKcfOOUKxqHB98P2gpQUbDBU/ZgrT+7kzFL6EZHk8wEAVg19smz4mHrdh6FiJZSyfxIK90SUNee99CrMjEoJMwPllAQ10C2f1lX2w2Et6YiL/y30g/rTDs9WZpT6fKoLntsMlxiBkbJHruRVLxXxAzlG1g2+yAmUaRnCyx9u7kwnBQ7nVcsrIjhpr0Ry3bFg/JCeJalh6ntKIkNvuP1QC9CuJ3YhHuPrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cKlbmUGuOtAnnhs3muv7wdY4OoUhyDfht/b6x81YJCc=;
 b=ImErJo9MsZ+Ze/CBxu/4ckAfGbT7t0xrKXEVPpg/05jrirvYg5cWLjPx+YSgIGlRuKpPQjWKDh1i5A1JXzUOA64tCIRqkYE3g57/IGicESiOupaw5Dq1tQpwviRRy+zESLa79XjRfw7g/w99Tp8Hf6ohlWs7o6WhtNPHF2qqBWd4zNMEzCh6TrQP8EMRfCNlyqKbbMLbpBfwfTGu/gjAxj1L/RH47HcOQX8seOd10kNBzeCNe02lXLyh1WKOvXITirxjozkWR6x98YDn/Yr4ldSYZL7iMBBDueeuUD1b2jFt5mM9H4P4qXyIi6bDf4uFoKYJrDNaufwAnDj4ouSGCg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2099b1b9-e3c3-aae2-351e-cbf067dc6ecc@suse.com>
Date: Wed, 24 May 2023 18:11:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2] iommu/vtd: fix address translation for superpages
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20230524152208.18302-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230524152208.18302-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 24.05.2023 17:22, Roger Pau Monne wrote:
> When translating an address that falls inside a superpage in the
> IOMMU page tables, the fetching of the PTE value wasn't masking off the
> contiguity-related data, which caused the returned data to be
> corrupt, as it would contain bits that the caller would interpret as
> part of the address.
> 
> Fix this by masking off the contiguous bits.
> 
> Fixes: c71e55501a61 ('VT-d: have callers specify the target level for page table walks')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Just to clarify: The title says superpages, and you also only deal with
superpages. Yet in the earlier discussion I pointed out that the 4k-page
case also looks to be flawed (I no longer think we iterate one too
many times, but I'm pretty sure the r/w flags are missing in what we
return to intel_iommu_lookup_page()). Did you convince yourself
otherwise in the meantime? Or is that going to be a separate change
(whether by you or someone else, like me)?

> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -368,7 +368,7 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
>                   * with the address adjusted to account for the residual of
>                   * the walk.
>                   */
> -                pte_maddr = pte->val +
> +                pte_maddr = (pte->val & ~DMA_PTE_CONTIG_MASK) +

While this addresses the problem at hand, wouldn't masking by PADDR_MASK
be more forward-compatible (for whenever another of the high bits gets
used)?

Jan

>                      (addr & ((1UL << level_to_offset_bits(level)) - 1) &
>                       PAGE_MASK);
>                  if ( !target )



From xen-devel-bounces@lists.xenproject.org Wed May 24 17:31:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 17:31:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539183.839816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1sKI-0007p0-0g; Wed, 24 May 2023 17:31:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539183.839816; Wed, 24 May 2023 17:31:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1sKH-0007ot-SY; Wed, 24 May 2023 17:31:05 +0000
Received: by outflank-mailman (input) for mailman id 539183;
 Wed, 24 May 2023 17:31:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1sKG-0007oj-RG; Wed, 24 May 2023 17:31:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1sKG-0004Jv-JE; Wed, 24 May 2023 17:31:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1sKG-0001Sg-8M; Wed, 24 May 2023 17:31:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1sKG-0000Fo-7x; Wed, 24 May 2023 17:31:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180929-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180929: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=380c6c170393c48852d4f2b1ea97125a399cfc61
X-Osstest-Versions-That:
    xen=c7908869ac26961a3919491705e521179ad3fc0e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 17:31:04 +0000

flight 180929 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180929/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  380c6c170393c48852d4f2b1ea97125a399cfc61
baseline version:
 xen                  c7908869ac26961a3919491705e521179ad3fc0e

Last test of basis   180897  2023-05-22 15:00:25 Z    2 days
Testing same since   180929  2023-05-24 15:03:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c7908869ac..380c6c1703  380c6c170393c48852d4f2b1ea97125a399cfc61 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 24 18:02:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 18:02:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539189.839826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1sp3-0002yt-F2; Wed, 24 May 2023 18:02:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539189.839826; Wed, 24 May 2023 18:02:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1sp3-0002ym-Ab; Wed, 24 May 2023 18:02:53 +0000
Received: by outflank-mailman (input) for mailman id 539189;
 Wed, 24 May 2023 18:02:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTJK=BN=citrix.com=prvs=501cbbf32=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q1sp2-0002yg-1G
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 18:02:52 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 307917fa-fa5d-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 20:02:48 +0200 (CEST)
Received: from mail-sn1nam02lp2048.outbound.protection.outlook.com (HELO
 NAM02-SN1-obe.outbound.protection.outlook.com) ([104.47.57.48])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 24 May 2023 14:02:37 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5455.namprd03.prod.outlook.com (2603:10b6:a03:27b::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Wed, 24 May
 2023 18:02:35 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.015; Wed, 24 May 2023
 18:02:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 307917fa-fa5d-11ed-8611-37d641c3527e
Message-ID: <b321009e-db23-19eb-e94f-41eea8ec3bc8@citrix.com>
Date: Wed, 24 May 2023 19:02:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 03/10] x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230524112526.3475200-1-andrew.cooper3@citrix.com>
 <20230524112526.3475200-4-andrew.cooper3@citrix.com>
 <c144bf13-9e65-483a-6887-9bd8645f25b8@suse.com>
In-Reply-To: <c144bf13-9e65-483a-6887-9bd8645f25b8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 24/05/2023 3:53 pm, Jan Beulich wrote:
> On 24.05.2023 13:25, Andrew Cooper wrote:
>> Bits through 24 are already defined, meaning that we're not far off needing
>> the second word.  Put both in right away.
>>
>> As both halves are present now, the arch_caps field is full width.  Adjust the
>> unit test, which notices.
>>
>> The bool bitfield names in the arch_caps union are unused, and somewhat out of
>> date.  They'll shortly be automatically generated.
>>
>> Add CPUID and MSR prefixes to the ./xen-cpuid verbose output, now that there
>> are a mix of the two.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks,

> albeit ...
>
>> --- a/tools/misc/xen-cpuid.c
>> +++ b/tools/misc/xen-cpuid.c
>> @@ -226,31 +226,41 @@ static const char *const str_7d2[32] =
>>      [ 4] = "bhi-ctrl",      [ 5] = "mcdt-no",
>>  };
>>  
>> +static const char *const str_10Al[32] =
>> +{
>> +};
>> +
>> +static const char *const str_10Ah[32] =
>> +{
>> +};
>> +
>>  static const struct {
>>      const char *name;
>>      const char *abbr;
>>      const char *const *strs;
>>  } decodes[] =
>>  {
>> -    { "0x00000001.edx",   "1d",  str_1d },
>> -    { "0x00000001.ecx",   "1c",  str_1c },
>> -    { "0x80000001.edx",   "e1d", str_e1d },
>> -    { "0x80000001.ecx",   "e1c", str_e1c },
>> -    { "0x0000000d:1.eax", "Da1", str_Da1 },
>> -    { "0x00000007:0.ebx", "7b0", str_7b0 },
>> -    { "0x00000007:0.ecx", "7c0", str_7c0 },
>> -    { "0x80000007.edx",   "e7d", str_e7d },
>> -    { "0x80000008.ebx",   "e8b", str_e8b },
>> -    { "0x00000007:0.edx", "7d0", str_7d0 },
>> -    { "0x00000007:1.eax", "7a1", str_7a1 },
>> -    { "0x80000021.eax",  "e21a", str_e21a },
>> -    { "0x00000007:1.ebx", "7b1", str_7b1 },
>> -    { "0x00000007:2.edx", "7d2", str_7d2 },
>> -    { "0x00000007:1.ecx", "7c1", str_7c1 },
>> -    { "0x00000007:1.edx", "7d1", str_7d1 },
>> +    { "CPUID 0x00000001.edx",        "1d", str_1d },
>> +    { "CPUID 0x00000001.ecx",        "1c", str_1c },
>> +    { "CPUID 0x80000001.edx",       "e1d", str_e1d },
>> +    { "CPUID 0x80000001.ecx",       "e1c", str_e1c },
>> +    { "CPUID 0x0000000d:1.eax",     "Da1", str_Da1 },
>> +    { "CPUID 0x00000007:0.ebx",     "7b0", str_7b0 },
>> +    { "CPUID 0x00000007:0.ecx",     "7c0", str_7c0 },
>> +    { "CPUID 0x80000007.edx",       "e7d", str_e7d },
>> +    { "CPUID 0x80000008.ebx",       "e8b", str_e8b },
>> +    { "CPUID 0x00000007:0.edx",     "7d0", str_7d0 },
>> +    { "CPUID 0x00000007:1.eax",     "7a1", str_7a1 },
>> +    { "CPUID 0x80000021.eax",      "e21a", str_e21a },
>> +    { "CPUID 0x00000007:1.ebx",     "7b1", str_7b1 },
>> +    { "CPUID 0x00000007:2.edx",     "7d2", str_7d2 },
>> +    { "CPUID 0x00000007:1.ecx",     "7c1", str_7c1 },
>> +    { "CPUID 0x00000007:1.edx",     "7d1", str_7d1 },
> ... I'm not really happy about this added verbosity. In a tool of this
> name, I think it's pretty clear that unadorned names are CPUID stuff.

You might make the connection, but it's unreasonable to expect the same
of everyone else.  This is used by end users.

If nothing else, the name of the binary is made stale by this change.

>> +    { "MSR   0x0000010a.lo",      "m10Al", str_10Al },
>> +    { "MSR   0x0000010a.hi",      "m10Ah", str_10Ah },
> Once we gain a few more MSRs, I'm afraid the raw numbers aren't going
> to be very useful. As vaguely suggested before, how about
>
>     { "MSR_ARCH_CAPS.lo",      "m10Al", str_10Al },
>     { "MSR_ARCH_CAPS.hi",      "m10Ah", str_10Ah },
>
> ?

I've done this.  I remain to be convinced, but it probably is nicer for
people who don't know the MSR indices like I do.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 24 20:41:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 20:41:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539200.839836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vI4-0002DC-Ho; Wed, 24 May 2023 20:41:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539200.839836; Wed, 24 May 2023 20:41:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vI4-0002D5-F3; Wed, 24 May 2023 20:41:00 +0000
Received: by outflank-mailman (input) for mailman id 539200;
 Wed, 24 May 2023 20:40:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1vI2-0002Ct-Cc; Wed, 24 May 2023 20:40:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1vI2-0000P7-6u; Wed, 24 May 2023 20:40:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1vI1-0001Rt-Lw; Wed, 24 May 2023 20:40:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1vI1-0003vM-LX; Wed, 24 May 2023 20:40:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KllHVmEZNwZGqAoIKBDNLHv410zVGsYhIijcYSrYdLk=; b=5IOdshx/JsrOSOtvUWIVcgm9cL
	ayn0gJsdpdwH7v0LaexAbvDqjBZXHq9KQ1N59VTCLaYQWzUNjcNaX2qrqpmqSpKp9ABbtWDJwrieD
	ElYoUuOOmN+PIGe64ZaIpo6ZpsQ5etkxwr3IVlq0UUhhmvzLVJXst0nBBDuTB+4Lo9bw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180927-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180927: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=1c12355b31046a6b35a4f50c85c4f01afb1bd728
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 20:40:57 +0000

flight 180927 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180927/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                1c12355b31046a6b35a4f50c85c4f01afb1bd728
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    7 days
Failing since        180699  2023-05-18 07:21:24 Z    6 days   27 attempts
Testing same since   180927  2023-05-24 08:31:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 6708 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 24 20:41:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 20:41:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539203.839846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vIR-0002Xc-Sr; Wed, 24 May 2023 20:41:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539203.839846; Wed, 24 May 2023 20:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vIR-0002XV-Ol; Wed, 24 May 2023 20:41:23 +0000
Received: by outflank-mailman (input) for mailman id 539203;
 Wed, 24 May 2023 20:41:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0arK=BN=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q1vIP-0002W0-JD
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 20:41:21 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [85.215.255.54]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 549d4ffe-fa73-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 22:41:17 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4OKer0Zq
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 24 May 2023 22:40:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 549d4ffe-fa73-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1684960853; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=jo2po5ZgKoErdZxmFc0n7nKOkv7tj+hfEV4dHjWCncB3bq7G23VvUIB47yabzVsM/Q
    OSsImss9y7SNyaHWSiEIB561SEjrO2JsxwEtzxqyfulRf1EzdqaA/FZaLftIY+J16+m2
    EgI12u2N5nUQNpry4IaPXfU5uFU2o3V9krl70PERTAWvsc90iMXKJU99+RgFtjxU/lD5
    gVREkEfUx4PTgyaUh/hOdqAZRrIHq8e6hlrCELDoGVTP6AnhMWpBh2gcV7aQ7xbjvHfK
    ylYECgwoPK4ZWOduUfxXFt6d3gfEwCo6NF3AvXO1MsGz4HflUaGnmOASnVwt22jTcac4
    0rLg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1684960853;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=22LZU7fLgz2ev8zDdEybHY4CxAHPDwKCqTvuQ9FCQIg=;
    b=GYauEbxUzk9m6p3cIsGgOAfpoX+MFD95xjHM0+U6O7Vrbp0dJBPzqR74MAs3+5DLd/
    kaG5Bu0l3R0ogG6Edskkl/8vPQKEALF/aFlZR2YjBQM/NY7AfPI62drl/CBmvEOx/6jw
    d4C4FpFOZIdPcisIykxoyxNe2/ZoyTQ0oIK9xarJKFxfezj1ct3HWYHjCnPIA6GEDJuB
    s2GvITN2b29UTt1/sahvJWqKllPGvbon6GVDdgaz5qy6H/UzsuW8e5lNJEZBXNhAiBm8
    Orowe6LWSZRN381m7GaoaS8Gg1i9gb3ouKlf0tPx+n56rQEjAzZ+KONr6xAla4NrILes
    Skig==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1684960853;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=22LZU7fLgz2ev8zDdEybHY4CxAHPDwKCqTvuQ9FCQIg=;
    b=gXjJs2gmTF9b3sQJRx07n626IHyJJcc5v6QlZlSpBRFx8MrL7FLLy0F/D9PHlq75X5
    JL9PVl77SYvbgXc/9lBSCB54bxeDU67CKS4zJjfdGxvTqKe9OV0z0QguLqbQNa//AETb
    2i19GVQdL/SjfkEowkP5Jg7tIMssusscxsFcYNRZpsXlagICZ2H5T6aa8GjwHrw3dl1G
    /DVQ+f3UIL4MbNakQHxamFXmtUjvFswkQKU9Qc+KhoMzjdGrkmD9AwlHA2GjYodod54X
    W+2FhagN3Q1ga+K6aZPohb5Aui1tblMWZcZd7g3WKxtfRsARaajt5YM7d1b4I+H8PLHX
    eUuw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1684960853;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=22LZU7fLgz2ev8zDdEybHY4CxAHPDwKCqTvuQ9FCQIg=;
    b=Cd4bZxxys+fzPVxW3gdyH6Nei9RJT/APa4XkU/aeE1026EMHKEFrBF95tJpqSm3C44
    piiA9mZUyHG9rvqYAVBQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xqFv7EJ0tgRX/vKfT/e8Ig6v0dNw4QAWpzMWrRQ=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] xen: fix comment typo regarding IOREQ Server
Date: Wed, 24 May 2023 22:40:32 +0200
Message-Id: <20230524204032.3193-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 xen/include/public/hvm/dm_op.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index acdf91693d..fa98551914 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -17,7 +17,7 @@
 /*
  * IOREQ Servers
  *
- * The interface between an I/O emulator an Xen is called an IOREQ Server.
+ * The interface between an I/O emulator and Xen is called an IOREQ Server.
  * A domain supports a single 'legacy' IOREQ Server which is instantiated if
  * parameter...
  *


From xen-devel-bounces@lists.xenproject.org Wed May 24 20:48:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 20:48:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539213.839856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vPO-0003T0-Ne; Wed, 24 May 2023 20:48:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539213.839856; Wed, 24 May 2023 20:48:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vPO-0003St-Jd; Wed, 24 May 2023 20:48:34 +0000
Received: by outflank-mailman (input) for mailman id 539213;
 Wed, 24 May 2023 20:48:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o7M2=BN=shutemov.name=kirill@srs-se1.protection.inumbo.net>)
 id 1q1vPN-0003Sn-5w
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 20:48:33 +0000
Received: from wout1-smtp.messagingengine.com (wout1-smtp.messagingengine.com
 [64.147.123.24]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 55f7fe42-fa74-11ed-8611-37d641c3527e;
 Wed, 24 May 2023 22:48:30 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.west.internal (Postfix) with ESMTP id 564803200B1C;
 Wed, 24 May 2023 16:48:24 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Wed, 24 May 2023 16:48:27 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 24 May 2023 16:48:21 -0400 (EDT)
Received: by box.shutemov.name (Postfix, from userid 1000)
 id 49820109F9F; Wed, 24 May 2023 23:48:18 +0300 (+03)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55f7fe42-fa74-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov.name;
	 h=cc:cc:content-type:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1684961303; x=
	1685047703; bh=GIQrYXVIJfghlyJeZVTwMC4NvSXNZTZ81GeVUDK35vA=; b=t
	V78D8AQWQriaLyjLryKXcRouM18sDnenIWh1Yu+0MLtrKqbfxYSkYAP7Ah45z0C0
	wqrRshdZnP8Fzs3Y74J2dEDBmVYWYkp0Dr1xsrl5u5j90fMLI6BY4GxnFIanI2ra
	ZHoNHyFdoiSeY0hssG0uaEFpYzR9Lz3q64/uyyqR22r9Bp9R7utjmhCAN40FS5yl
	LMsNTY7sC/uKblPAscJfx1OowRzqDWvUUksw9EcLsmJ5dxmskbo1njKOtT/uAEJb
	p8CBABZQSKnGrDpi9R3ukiBCGNRVKr8j1xz53Ydve6A5EMl4OC/qVLH06zs0pCt+
	L7IqaBqFdMvHDowOKkSCg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1684961303; x=1685047703; bh=GIQrYXVIJfghl
	yJeZVTwMC4NvSXNZTZ81GeVUDK35vA=; b=YXeLdPNcTaSCsiCsBeJv2f5p9SuP4
	8PNIAd5OHvvHh0zi0J6sHQ2iDxJi3KUZPgxTwHgsCiBnsG4iFJMa5e9VSO/9SRsp
	1uf6gxH+qkDhvA2KvyiORL0inPnSLCOCJ7W/hgwUyUdJI1MHVbrXnbq8uhCmMqay
	QCl1ZWzVXrCzb30GT0XC1Wta3d7Y3WxRKULau+sNOAow3o6ZJP+eHzUVYtEPFdUO
	SEJPLcn5mgVKhWahWT0mp2x4brRIS3jKZQ5jeti0GnxZf5j3AV7cUWQLho6bY9Hg
	bJgTAAYblxZs0e6yU12BJKusEGvIJR9TETpkuLHQittJSabdfnNE4zZlw==
X-ME-Sender: <xms:FnhuZDTH2VncK0GAbUd4QSuubawcVJEemEMmfWagwDdFrmymqbynRw>
    <xme:FnhuZEylmO4WOFyEi63qXNPS7KmYBZh0Oi4Ko1ah2WyXbPpIb5vWyfJ0rtnlScPMf
    YwGNKBrFecAJTAStLU>
X-ME-Received: <xmr:FnhuZI0B8fy9Ol2lnVR91k-mp_zuw1fzrmbcod9KqT-pVjgXEb11Z3-99qaxPUeQ45cz-A>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeejhedgudehudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehttddttddttddvnecuhfhrohhmpedfmfhi
    rhhilhhlucetrdcuufhhuhhtvghmohhvfdcuoehkihhrihhllhesshhhuhhtvghmohhvrd
    hnrghmvgeqnecuggftrfgrthhtvghrnhephfeigefhtdefhedtfedthefghedutddvueeh
    tedttdehjeeukeejgeeuiedvkedtnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrg
    hmpehmrghilhhfrhhomhepkhhirhhilhhlsehshhhuthgvmhhovhdrnhgrmhgv
X-ME-Proxy: <xmx:FnhuZDDD4TIO_TlTLMdLlTBKIBaI4Z9lylnIe3wBeaO4FVJJxTQ0FA>
    <xmx:FnhuZMj_AEyzunCa7U0nUvY822b_W_v3y-MdIwmWBPCWgqMitbF1og>
    <xmx:FnhuZHo0gf_JVrLFUK-EgR8w02dXTQBT2P7OqLOR94f80xY3JTy1xg>
    <xmx:F3huZHB2sQNsFiPjk9M0j2hsG8Xww7Nt8oC4qQnHLKoB8ZE2QQW6hQ>
Feedback-ID: ie3994620:Fastmail
Date: Wed, 24 May 2023 23:48:18 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,	Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,	Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
Message-ID: <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230508185218.962208640@linutronix.de>

On Mon, May 08, 2023 at 09:44:17PM +0200, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Make the primary thread tracking CPU mask based in preparation for simpler
> handling of parallel bootup.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Michael Kelley <mikelley@microsoft.com>
> 
> 
> ---
>  arch/x86/include/asm/apic.h     |    2 --
>  arch/x86/include/asm/topology.h |   19 +++++++++++++++----
>  arch/x86/kernel/apic/apic.c     |   20 +++++++++-----------
>  arch/x86/kernel/smpboot.c       |   12 +++---------
>  4 files changed, 27 insertions(+), 26 deletions(-)
> ---
> 

...

> @@ -2386,20 +2386,16 @@ bool arch_match_cpu_phys_id(int cpu, u64
>  }
>  
>  #ifdef CONFIG_SMP
> -/**
> - * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
> - * @apicid: APIC ID to check
> - */
> -bool apic_id_is_primary_thread(unsigned int apicid)
> +static void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid)
>  {
> -	u32 mask;
> -
> -	if (smp_num_siblings == 1)
> -		return true;
>  	/* Isolate the SMT bit(s) in the APICID and check for 0 */
> -	mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
> -	return !(apicid & mask);
> +	u32 mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
> +
> +	if (smp_num_siblings == 1 || !(apicid & mask))
> +		cpumask_set_cpu(cpu, &__cpu_primary_thread_mask);
>  }
> +#else
> +static inline void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid) { }
>  #endif
>  
>  /*

This patch causes a boot regression in TDX guests: the guest crashes during
SMP bring-up.

The change uses smp_num_siblings earlier than before: the mask gets
constructed in the acpi_boot_init() codepath, while smp_num_siblings only
gets updated later, in detect_ht().

In my setup with 16 vCPUs, smp_num_siblings is 16 before detect_ht() and
gets set to 1 in detect_ht().

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Wed May 24 20:53:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 20:53:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539217.839866 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vU6-0004t4-8r; Wed, 24 May 2023 20:53:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539217.839866; Wed, 24 May 2023 20:53:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vU6-0004sx-5d; Wed, 24 May 2023 20:53:26 +0000
Received: by outflank-mailman (input) for mailman id 539217;
 Wed, 24 May 2023 20:53:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9d7M=BN=linux.microsoft.com=madvenka@srs-se1.protection.inumbo.net>)
 id 1q1vU5-0004sr-5F
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 20:53:25 +0000
Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 059cc036-fa75-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 22:53:23 +0200 (CEST)
Received: from [192.168.4.26] (unknown [47.186.50.133])
 by linux.microsoft.com (Postfix) with ESMTPSA id E7B6E20FBA6D;
 Wed, 24 May 2023 13:53:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 059cc036-fa75-11ed-b230-6b7b168915f2
DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com E7B6E20FBA6D
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com;
	s=default; t=1684961602;
	bh=jaoGaATWl6qH0LBuT7PMpFhq3wR8UB0SRjcinmr2MGw=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=gdGCklkn+W2FG8nWXMknVmZRf7MOqfBXFQSg1XVTieJjYv2pZpbSK13nBh6SaoEhT
	 jcL639wc1nVXAHEnmQmaX6AIuaD25yFRRA9zSSEUAf6N/FK7V8PguD80UGHHcNbtZY
	 /rfQmT61JJHHOojvC0XgtjYl2COQYPXjTiszLZkM=
Message-ID: <b1ffbf50-7728-64a1-5d46-10331a17530d@linux.microsoft.com>
Date: Wed, 24 May 2023 15:53:18 -0500
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v1 2/9] KVM: x86/mmu: Add support for prewrite page
 tracking
To: Sean Christopherson <seanjc@google.com>, =?UTF-8?B?TWlja2HDq2wgU2FsYcO8?=
 =?UTF-8?Q?n?= <mic@digikod.net>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H . Peter Anvin" <hpa@zytor.com>,
 Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>,
 Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 James Morris <jamorris@linux.microsoft.com>,
 John Andersen <john.s.andersen@intel.com>, Liran Alon
 <liran.alon@oracle.com>, Marian Rotariu <marian.c.rotariu@gmail.com>,
 =?UTF-8?Q?Mihai_Don=c8=9bu?= <mdontu@bitdefender.com>,
 =?UTF-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 =?UTF-8?Q?=c8=98tefan_=c8=98icleru?= <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-3-mic@digikod.net> <ZFUumGdZDNs1tkQA@google.com>
 <6412bf27-4d05-eab8-3db1-d4efa44af3aa@digikod.net>
 <ZFU9YzqG/T+Ty9gY@google.com>
Content-Language: en-US
From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
In-Reply-To: <ZFU9YzqG/T+Ty9gY@google.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit



On 5/5/23 12:31, Sean Christopherson wrote:
> On Fri, May 05, 2023, Mickaël Salaün wrote:
>>
>> On 05/05/2023 18:28, Sean Christopherson wrote:
>>> I have no doubt that we'll need to solve performance and scaling issues with the
>>> memory attributes implementation, e.g. to utilize xarray multi-range support
>>> instead of storing information on a per-4KiB-page basis, but AFAICT, the core
>>> idea is sound.  And a very big positive from a maintenance perspective is that
>>> any optimizations, fixes, etc. for one use case (CoCo vs. hardening) should also
>>> benefit the other use case.
>>>
>>> [1] https://lore.kernel.org/all/20230311002258.852397-22-seanjc@google.com
>>> [2] https://lore.kernel.org/all/Y2WB48kD0J4VGynX@google.com
>>> [3] https://lore.kernel.org/all/Y1a1i9vbJ%2FpVmV9r@google.com
>>
>> I agree, I used this mechanism because it was easier at first to rely on a
>> previous work, but while I was working on the MBEC support, I realized that
>> it's not the optimal way to do it.
>>
>> I was thinking about using a new special EPT bit similar to
>> EPT_SPTE_HOST_WRITABLE, but it may not be portable though. What do you
>> think?
> 
> On x86, SPTEs are even more ephemeral than memslots.  E.g. for historical reasons,
> KVM zaps all SPTEs if _any_ memslot is deleted, which is problematic if the guest
> is moving around BARs, using option ROMs, etc.
> 
> ARM's pKVM tracks metadata in its stage-2 PTEs, i.e. doesn't need an xarray to
> track attributes, but that works only because pKVM is more privileged than the
> host kernel, and the shared vs. private memory attribute that pKVM cares about
> is very, very restricted in how it can be used and changed.
> 
> I tried shoehorning private vs. shared metadata into x86's SPTEs in the past, and
> it ended up being a constant battle with the kernel, e.g. page migration, and with
> KVM itself, e.g. the above memslot mess.

Sorry for the delay in responding to this. I wanted to study the KVM code and fully
understand your comment before responding.

Yes, I quite agree with you. I will make an attempt to address this in the next version.
I am working on it right now.

Thanks.

Madhavan


From xen-devel-bounces@lists.xenproject.org Wed May 24 21:05:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 21:05:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539221.839876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vfO-0006Rx-9z; Wed, 24 May 2023 21:05:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539221.839876; Wed, 24 May 2023 21:05:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1vfO-0006Rq-6i; Wed, 24 May 2023 21:05:06 +0000
Received: by outflank-mailman (input) for mailman id 539221;
 Wed, 24 May 2023 21:05:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eCI3=BN=quicinc.com=quic_tsoni@srs-se1.protection.inumbo.net>)
 id 1q1vfN-0006Rj-42
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 21:05:05 +0000
Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com
 [205.220.180.131]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a68a492e-fa76-11ed-b230-6b7b168915f2;
 Wed, 24 May 2023 23:05:03 +0200 (CEST)
Received: from pps.filterd (m0279873.ppops.net [127.0.0.1])
 by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 34OKx7uq018706; Wed, 24 May 2023 21:04:10 GMT
Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com
 [199.106.103.254])
 by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3qsqqj89sj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 24 May 2023 21:04:10 +0000
Received: from nasanex01a.na.qualcomm.com (nasanex01a.na.qualcomm.com
 [10.52.223.231])
 by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 34OL495t029551
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 24 May 2023 21:04:09 GMT
Received: from [10.110.74.38] (10.80.80.8) by nasanex01a.na.qualcomm.com
 (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.42; Wed, 24 May
 2023 14:04:07 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a68a492e-fa76-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=message-id : date :
 mime-version : subject : to : cc : references : from : in-reply-to :
 content-type : content-transfer-encoding; s=qcppdkim1;
 bh=0IusaswoV4GMJ2krC0g9zaR7+NoenIcA9sZufP1qfbs=;
 b=Pn2tBz5Uml86ZbHPbF67KQqgCh73xR1i5yg3+TTBB9MLfOHQJBgnxJvGEGTnO1sjnbb5
 U3hN6SQPVGqczxdXEc5jZ3BsRuG4v1PjwOlJ+BUwBbaWu+Gb3kIhHgI2dJBJPsyyvDlc
 ARiZPVoxGryM9wKzC1smnDkbMLUTn0aGCjNLX7jJ+uzKz+02TGLf7WIxRqGxPINN1O7b
 H6OfnfxKsUpexRRPsIok/i/FsmrmmJJiNBeLOKhyCebYuCM7GoxBrHUKCD7zLJFN1tYw
 9ScZ+UfUU+jIHpa/GwHnf1SHGoNE2ZSw01qWH55IQJ4A/JSEPQvbLCoS3Id3Q46OwxcM vQ== 
Message-ID: <1e10da25-5704-18ee-b0ce-6de704e6f0e1@quicinc.com>
Date: Wed, 24 May 2023 14:04:07 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Content-Language: en-US
To: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>,
        Borislav Petkov
	<bp@alien8.de>,
        Dave Hansen <dave.hansen@linux.intel.com>,
        "H . Peter Anvin"
	<hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
        Kees Cook
	<keescook@chromium.org>,
        Paolo Bonzini <pbonzini@redhat.com>,
        Sean
 Christopherson <seanjc@google.com>,
        Thomas Gleixner <tglx@linutronix.de>,
        Vitaly Kuznetsov <vkuznets@redhat.com>,
        Wanpeng Li <wanpengli@tencent.com>
CC: Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
        James Morris <jamorris@linux.microsoft.com>,
        John Andersen
	<john.s.andersen@intel.com>,
        Liran Alon <liran.alon@oracle.com>,
        "Madhavan T
 . Venkataraman" <madvenka@linux.microsoft.com>,
        Marian Rotariu
	<marian.c.rotariu@gmail.com>,
        =?UTF-8?Q?Mihai_Don=c8=9bu?=
	<mdontu@bitdefender.com>,
        =?UTF-8?B?TmljdciZb3IgQ8OuyJt1?=
	<nicu.citu@icloud.com>,
        Rick Edgecombe <rick.p.edgecombe@intel.com>,
        Thara
 Gopinath <tgopinath@microsoft.com>,
        Will Deacon <will@kernel.org>,
        Zahra
 Tarkhani <ztarkhani@microsoft.com>,
        =?UTF-8?Q?=c8=98tefan_=c8=98icleru?=
	<ssicleru@bitdefender.com>,
        <dev@lists.cloudhypervisor.org>, <kvm@vger.kernel.org>,
        <linux-hardening@vger.kernel.org>, <linux-hyperv@vger.kernel.org>,
        <linux-kernel@vger.kernel.org>,
        <linux-security-module@vger.kernel.org>, <qemu-devel@nongnu.org>,
        <virtualization@lists.linux-foundation.org>, <x86@kernel.org>,
        <xen-devel@lists.xenproject.org>
References: <20230505152046.6575-1-mic@digikod.net>
From: Trilok Soni <quic_tsoni@quicinc.com>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit
X-Originating-IP: [10.80.80.8]
X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To
 nasanex01a.na.qualcomm.com (10.52.223.231)
X-QCInternal: smtphost
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085
X-Proofpoint-GUID: 05vNi3kgc2wfcP2iD9QY9INL0A2A4C1U
X-Proofpoint-ORIG-GUID: 05vNi3kgc2wfcP2iD9QY9INL0A2A4C1U
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.573,FMLib:17.11.176.26
 definitions=2023-05-24_15,2023-05-24_01,2023-05-22_02
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 impostorscore=0 bulkscore=0 adultscore=0 suspectscore=0 clxscore=1011
 lowpriorityscore=0 spamscore=0 phishscore=0 mlxscore=0 malwarescore=0
 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2304280000 definitions=main-2305240176

On 5/5/2023 8:20 AM, Mickaël Salaün wrote:
> Hi,
> 
> This patch series is a proof-of-concept that implements new KVM features
> (extended page tracking, MBEC support, CR pinning) and defines a new API to
> protect guest VMs. No VMM (e.g., Qemu) modification is required.
> 
> The main idea being that kernel self-protection mechanisms should be delegated
> to a more privileged part of the system, hence the hypervisor. It is still the
> role of the guest kernel to request such restrictions according to its

Only for guest kernel images here? Why not for the host OS kernel? The
embedded devices with Android that you mention below seem to support the
host OS as well, right?

Are you suggesting that all of this functionality should be implemented in
the hypervisor (NS-EL2 for ARM), or even at a secure exception level such
as Secure-EL1 (ARM)?

I am hoping that whatever interface we define here from the guest to the
hypervisor becomes the ABI, right?


> 
> # Current limitations
> 
> The main limitation of this patch series is the statically enforced
> permissions. This is not an issue for kernels without modules, but this needs to
> be addressed.  Mechanisms that dynamically impact kernel executable memory are
> not handled for now (e.g., kernel modules, tracepoints, eBPF JIT), and such
> code will need to be authenticated.  Because the hypervisor is highly
> privileged and critical to the security of all the VMs, we don't want to
> implement a code authentication mechanism in the hypervisor itself but delegate
> this verification to something much less privileged. We are thinking of two
> ways to solve this: implement this verification in the VMM or spawn a dedicated
> special VM (similar to Windows's VBS). There are pros and cons to each approach:
> complexity, verification code ownership (guest's or VMM's), access to guest
> memory (i.e., confidential computing).

Do you foresee performance regressions due to the amount of tracking here?
Production kernels have a lot of tracepoints, and we use them as a feature
in the GKI kernel for the vendor-hooks implementation; in those cases every
vendor driver is a module. Doesn't a separate VM further fragment this
design and delegate more of it to proprietary solutions?

Do you have any performance numbers with the current RFC?

---Trilok Soni


From xen-devel-bounces@lists.xenproject.org Wed May 24 22:16:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 22:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539225.839886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1wmF-0005ZS-IK; Wed, 24 May 2023 22:16:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539225.839886; Wed, 24 May 2023 22:16:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1wmF-0005ZL-FN; Wed, 24 May 2023 22:16:15 +0000
Received: by outflank-mailman (input) for mailman id 539225;
 Wed, 24 May 2023 22:16:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hEuA=BN=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q1wmD-0005ZF-Hx
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 22:16:13 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 92e5017b-fa80-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 00:16:05 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 272D16160D;
 Wed, 24 May 2023 22:16:04 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 79F05C433EF;
 Wed, 24 May 2023 22:16:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92e5017b-fa80-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684966563;
	bh=eeFLFGZMVCEPE2HtejpmbcNN7wAgLIPWb5dnlfIaXaw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nGeFOg2t9r+NdRLZlcfTcSJKZSQLR87Re7VTo2y+6Y9OWe74yeYHhhNp5+a30ZJnk
	 Urxw5nunjRL11DWDZtsWMbodGja+n8fByepPt8bUsicL18hAjIiz+Ef+BvdilruEPU
	 Bi9NRxs5PzGSuXaz66rJvnhzEYWRTx2prOnvo1VqItlw7ss/5j4k2nKxbRE+WJiNeB
	 UQS4fUkHXFQ+ceNtK2IYMzTLf+g+pDYs5aDAGbIdLppbjKcBZk3N/MnlEffE1Xty33
	 w8lzXwVqfbHO5m+LGSToVFUTkB6qAI0mAzSuZ67jFkQIRzEmVTBMpUvqECPztmsBdX
	 czYcN+7GYgliQ==
Date: Wed, 24 May 2023 15:16:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Olaf Hering <olaf@aepfle.de>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] xen: fix comment typo regarding IOREQ Server
In-Reply-To: <20230524204032.3193-1-olaf@aepfle.de>
Message-ID: <alpine.DEB.2.22.394.2305241515390.44000@ubuntu-linux-20-04-desktop>
References: <20230524204032.3193-1-olaf@aepfle.de>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 24 May 2023, Olaf Hering wrote:
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/include/public/hvm/dm_op.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
> index acdf91693d..fa98551914 100644
> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -17,7 +17,7 @@
>  /*
>   * IOREQ Servers
>   *
> - * The interface between an I/O emulator an Xen is called an IOREQ Server.
> + * The interface between an I/O emulator and Xen is called an IOREQ Server.
>   * A domain supports a single 'legacy' IOREQ Server which is instantiated if
>   * parameter...
>   *
> 


From xen-devel-bounces@lists.xenproject.org Wed May 24 22:20:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 22:20:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539229.839895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1wqV-00070j-3a; Wed, 24 May 2023 22:20:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539229.839895; Wed, 24 May 2023 22:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1wqV-00070c-0n; Wed, 24 May 2023 22:20:39 +0000
Received: by outflank-mailman (input) for mailman id 539229;
 Wed, 24 May 2023 22:20:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dqxP=BN=intel.com=rick.p.edgecombe@srs-se1.protection.inumbo.net>)
 id 1q1wqU-00070W-B1
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 22:20:38 +0000
Received: from mga01.intel.com (mga01.intel.com [192.55.52.88])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3313e025-fa81-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 00:20:35 +0200 (CEST)
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 24 May 2023 15:20:25 -0700
Received: from fmsmsx603.amr.corp.intel.com ([10.18.126.83])
 by orsmga008.jf.intel.com with ESMTP; 24 May 2023 15:20:23 -0700
Received: from fmsmsx612.amr.corp.intel.com (10.18.126.92) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Wed, 24 May 2023 15:20:23 -0700
Received: from fmsmsx603.amr.corp.intel.com (10.18.126.83) by
 fmsmsx612.amr.corp.intel.com (10.18.126.92) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Wed, 24 May 2023 15:20:23 -0700
Received: from FMSEDG603.ED.cps.intel.com (10.1.192.133) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Wed, 24 May 2023 15:20:23 -0700
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (104.47.55.104)
 by edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Wed, 24 May 2023 15:20:22 -0700
Received: from MN0PR11MB5963.namprd11.prod.outlook.com (2603:10b6:208:372::10)
 by DM4PR11MB5455.namprd11.prod.outlook.com (2603:10b6:5:39b::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Wed, 24 May
 2023 22:20:16 +0000
Received: from MN0PR11MB5963.namprd11.prod.outlook.com
 ([fe80::6984:19a5:fe1c:dfec]) by MN0PR11MB5963.namprd11.prod.outlook.com
 ([fe80::6984:19a5:fe1c:dfec%6]) with mapi id 15.20.6433.016; Wed, 24 May 2023
 22:20:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3313e025-fa81-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1684966835; x=1716502835;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=R0XpjUyWH/rXrr7TI130MtsaAWuAja2kckR/B6IKM4w=;
  b=gb1m8V0I4x8b/vuyQ6HBd3XxOAaEGLjTffMy0WIUjv+OC6dK3W+sAHuh
   dAy9Sct0L1Y1XQreSAkoTwCOMigmn3UGCKnXuCNSpXLgX8XeRurJFmt8o
   LjxzmWRE6wsY9zDiblK46srh2e14OGvkzZu/c30cxJipa0yQBo5uf2XGs
   KUAcAEDtUjX4QcI4mhK77kL+nXxz2tuNqZHBmjH5AQKs1yY64S9gmgpoE
   qbwX3cTWZumNW1lc+CCya1tHPZYWA7+2wMWjt7gFvXfD+jR1nkVFllqC2
   a01V7EHBKMwAFN3R5gQWfrUyvMuXC6gx7GNtjaw9f1IroRCF8gcmXDazL
   w==;
X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="381953520"
X-IronPort-AV: E=Sophos;i="6.00,190,1681196400"; 
   d="scan'208";a="381953520"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="735351617"
X-IronPort-AV: E=Sophos;i="6.00,190,1681196400"; 
   d="scan'208";a="735351617"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a1wwXd55doA+ig+0rv0ZGDtyRyhVT7BgbeGlPmbmb0pXAOC5w9ZfFnphANzbIr9a1yx62PJXwq9ZBmazuJ1hmvtnOre4A2mOy6aHQTbr/l/EMUIN3QIc8hBYRwRLBsw0QNRzPzLhLXkYpmlEDQkyYiLWDDKxprA/v3e9gAEc4xPKRaBAXhTurK8yEioyTZcEo09wwXK6IqatF0xRun0LZZuMQHthlsfwrO6Sx6h7ldArnu6+kfitqtmygk1GQciLLDvxxqjfaIdcd4tKveRnKvB9sfdfmc0DOipq0/cP7m4zqNPbctoQ4oYgRNGuKljHQNRW4orKTpFeA35x+u+Jzw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=R0XpjUyWH/rXrr7TI130MtsaAWuAja2kckR/B6IKM4w=;
 b=dNKoh2LcJNH4xtkVdyQ9rAlYOqeIo7oIpWo+5Xwd60pXQFXVbOsCThbGO0phkRIlAIq2bzv6J3RS62jR8l+iJAoug+Er9lEcmDCjvFKG2rZV7t4GZlVd/xz2YLCwwwgZ+wclRxAjmVGIIfcKcJxGDKDJM4m1c7AImIsH5kQ06Y+zr9iqTCw239L9ji9OVXAtAJWxsfRCDO73DiojMbDzbpLcgLyppmdXOGECc+LtlH8tF+sudgQ9tWmmWdw3TzNnD8T20vlWNh1RJSLkHIcUg1HW77KVKpdG9xfA4y1qKK9A9rK7Nh+5DD1mDznoLrnHSR5jVL0vSmgH0xGa88EpAw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
From: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
To: "mic@digikod.net" <mic@digikod.net>, "Christopherson,, Sean"
	<seanjc@google.com>, "bp@alien8.de" <bp@alien8.de>,
	"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
	"keescook@chromium.org" <keescook@chromium.org>, "hpa@zytor.com"
	<hpa@zytor.com>, "mingo@redhat.com" <mingo@redhat.com>, "tglx@linutronix.de"
	<tglx@linutronix.de>, "pbonzini@redhat.com" <pbonzini@redhat.com>,
	"wanpengli@tencent.com" <wanpengli@tencent.com>, "vkuznets@redhat.com"
	<vkuznets@redhat.com>
CC: "kvm@vger.kernel.org" <kvm@vger.kernel.org>, "yuanyu@google.com"
	<yuanyu@google.com>, "jamorris@linux.microsoft.com"
	<jamorris@linux.microsoft.com>, "marian.c.rotariu@gmail.com"
	<marian.c.rotariu@gmail.com>, "Graf, Alexander" <graf@amazon.com>, "Andersen,
 John S" <john.s.andersen@intel.com>, "madvenka@linux.microsoft.com"
	<madvenka@linux.microsoft.com>, "liran.alon@oracle.com"
	<liran.alon@oracle.com>, "ssicleru@bitdefender.com"
	<ssicleru@bitdefender.com>, "tgopinath@microsoft.com"
	<tgopinath@microsoft.com>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "qemu-devel@nongnu.org"
	<qemu-devel@nongnu.org>, "linux-security-module@vger.kernel.org"
	<linux-security-module@vger.kernel.org>, "will@kernel.org" <will@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>,
	"mdontu@bitdefender.com" <mdontu@bitdefender.com>,
	"linux-hardening@vger.kernel.org" <linux-hardening@vger.kernel.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>, "nicu.citu@icloud.com"
	<nicu.citu@icloud.com>, "ztarkhani@microsoft.com" <ztarkhani@microsoft.com>,
	"x86@kernel.org" <x86@kernel.org>
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Thread-Topic: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Thread-Index: AQHZf2Ve5xw6RDm4tUGLnlOpizAoCa9qHQcA
Date: Wed, 24 May 2023 22:20:15 +0000
Message-ID: <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
References: <20230505152046.6575-1-mic@digikod.net>
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Evolution 3.44.4-0ubuntu1 
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: MN0PR11MB5963:EE_|DM4PR11MB5455:EE_
x-ms-office365-filtering-correlation-id: 4d6de98d-376a-4f9e-607c-08db5ca50cdc
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <0058187AD508C14A816AB1B9AA8BFE1D@namprd11.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MN0PR11MB5963.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4d6de98d-376a-4f9e-607c-08db5ca50cdc
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 May 2023 22:20:15.7775
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 2wsf3UOKJr2UaKwGumskXSPj5m7xmbhO708YN3Vqg3JHfDQ+MQv//CVZTCMF/CN2YpolXKhuYyIskKrbrzIIsCxIswV4W3867yr8QFf4VHA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR11MB5455
X-OriginatorOrg: intel.com

On Fri, 2023-05-05 at 17:20 +0200, Mickaël Salaün wrote:
> # How does it work?
>
> This implementation mainly leverages KVM capabilities to control the
> Second Layer Address Translation (or Two Dimensional Paging, e.g.,
> Intel's EPT or AMD's RVI/NPT) and Mode Based Execution Control
> (Intel's MBEC), introduced with the Kaby Lake (7th generation)
> architecture. This allows setting permissions on memory pages in a
> way that complements the guest kernel's own managed memory
> permissions. Once these permissions are set, they are locked and
> there is no way back.
>
> A first KVM_HC_LOCK_MEM_PAGE_RANGES hypercall enables the guest
> kernel to lock a set of its memory page ranges with either the
> HEKI_ATTR_MEM_NOWRITE or the HEKI_ATTR_MEM_EXEC attribute. The first
> one denies write access to a specific set of pages (allow-list
> approach), and the second only allows kernel execution for a set of
> pages (deny-list approach).
>
> The current implementation sets the whole kernel's .rodata (i.e., any
> const or __ro_after_init variables, which includes critical security
> data such as LSM parameters) and .text sections as non-writable, and
> the .text section is the only one where kernel execution is allowed.
> This is possible thanks to the new MBEC support also brought by this
> series (otherwise the vDSO would have to be executable). Thanks to
> this hardware support (VT-x, EPT and MBEC), the performance impact of
> such guest protection is negligible.
>
> The second KVM_HC_LOCK_CR_UPDATE hypercall enables guests to pin some
> of their CPU control register flags (e.g., X86_CR0_WP, X86_CR4_SMEP,
> X86_CR4_SMAP), which is another complementary hardening mechanism.
>
> Heki can be enabled with the heki=1 boot command argument.
>

Can the guest kernel ask the host VMM's emulated devices to DMA into
the protected data? It should go through the host userspace mappings, I
think, which don't care about EPT permissions. Or did I miss where you
are protecting that another way? There are a lot of easy ways to ask
the host to write to guest memory that don't involve the EPT. You
probably need to protect the host userspace mappings, and also the
places in KVM that kmap a GPA provided by the guest.

[ snip ]

>
> # Current limitations
>
> The main limitation of this patch series is the statically enforced
> permissions. This is not an issue for kernels without modules, but it
> needs to be addressed. Mechanisms that dynamically impact kernel
> executable memory are not handled for now (e.g., kernel modules,
> tracepoints, eBPF JIT), and such code will need to be authenticated.
> Because the hypervisor is highly privileged and critical to the
> security of all the VMs, we don't want to implement a code
> authentication mechanism in the hypervisor itself but delegate this
> verification to something much less privileged. We are thinking of
> two ways to solve this: implement this verification in the VMM or
> spawn a dedicated special VM (similar to Windows's VBS). There are
> pros and cons to each approach: complexity, verification code
> ownership (guest's or VMM's), access to guest memory (i.e.,
> confidential computing).

The kernel often creates writable aliases in order to write to
protected data (kernel text, etc). Some of this is done right as text
is first being written out (alternatives, for example), and some
happens way later (jump labels, etc). So for verification, I wonder
what stage you would be verifying? If you want to verify the end
state, you would have to maintain knowledge in the verifier of all the
touch-ups the kernel does. I think it would get very tricky.

It also seems it will be a decent ask for the guest kernel to keep
track of GPA permissions as well as normal virtual memory permissions,
if this thing is not widely used.

So I wonder if you could go in two directions with this:
1. Make this a feature only for super locked-down kernels (no modules,
etc). Forbid any configurations that might modify text. But eBPF is
used for seccomp, so you might be turning off some security
protections to get this.
2. Loosen the rules to allow the protections to not be so one-way
enable. Get less security, but be used more widely.

There were similar dilemmas with the PV CR pinning stuff.


From xen-devel-bounces@lists.xenproject.org Wed May 24 22:30:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 22:30:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539235.839906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1x08-00009T-5v; Wed, 24 May 2023 22:30:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539235.839906; Wed, 24 May 2023 22:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1x08-00009M-2r; Wed, 24 May 2023 22:30:36 +0000
Received: by outflank-mailman (input) for mailman id 539235;
 Wed, 24 May 2023 22:30:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1x06-00009C-UZ; Wed, 24 May 2023 22:30:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1x06-0002mH-Lc; Wed, 24 May 2023 22:30:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1x06-0006qp-3H; Wed, 24 May 2023 22:30:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1x06-0006rZ-2t; Wed, 24 May 2023 22:30:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eQxSCsgJc8YTOl/6IFoDO3j+Dw3gU67TsH2UKe1ZpFs=; b=35mJiOpK9uE23KZHtJ6pVWQoft
	7PRQSh1r5KfiY1KKH4Cvyptl8kldwQDkfwBUsekwTLb9/FV65jymkvwiQTu7JT+Qq9fWA1OyzzQfn
	E83lolHbCelFKJW1P/zu+AmJU8EublXcqmqNoLClNM/vyh+MyMqdu/0+svkN612jm1pg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180925-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180925: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9d646009f65d62d32815f376465a3b92d8d9b046
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 22:30:34 +0000

flight 180925 linux-linus real [real]
flight 180932 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180925/
http://logs.test-lab.xenproject.org/osstest/logs/180932/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-vhd 21 guest-start/debian.repeat fail pass in 180932-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9d646009f65d62d32815f376465a3b92d8d9b046
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   38 days
Failing since        180281  2023-04-17 06:24:36 Z   37 days   69 attempts
Testing same since   180925  2023-05-24 05:07:18 Z    0 days    1 attempts

------------------------------------------------------------
2480 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 313915 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 24 22:56:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 22:56:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539241.839916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1xOl-0002jB-3M; Wed, 24 May 2023 22:56:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539241.839916; Wed, 24 May 2023 22:56:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1xOk-0002j4-WB; Wed, 24 May 2023 22:56:03 +0000
Received: by outflank-mailman (input) for mailman id 539241;
 Wed, 24 May 2023 22:56:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1xOk-0002iu-2U; Wed, 24 May 2023 22:56:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1xOj-0003Lh-Q8; Wed, 24 May 2023 22:56:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q1xOj-0007XS-BC; Wed, 24 May 2023 22:56:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q1xOj-0006Wn-Al; Wed, 24 May 2023 22:56:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xbPTjfKwI/W8sC4o0vROd2/Lh138esPnXryhZ3YaeyM=; b=sZeyAfKbmHTqceMx3Id2wAfH5B
	mE9+PgF98farHW7t0tIAOD/ok3NYzuUqdOGN261FN68ncqODvZpINnipnBllBTnlYso5SdDuVsgtO
	EoHITK77cmC58kc1zdKPx2c8jL0NywpGjD9NAOBng5RRArTTXlfbipa3qcflkUVy+1Qc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180931-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180931: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=511b9f286c3dadd041e0d90beeff7d47c9bf3b7a
X-Osstest-Versions-That:
    xen=380c6c170393c48852d4f2b1ea97125a399cfc61
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 24 May 2023 22:56:01 +0000

flight 180931 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180931/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  511b9f286c3dadd041e0d90beeff7d47c9bf3b7a
baseline version:
 xen                  380c6c170393c48852d4f2b1ea97125a399cfc61

Last test of basis   180929  2023-05-24 15:03:16 Z    0 days
Testing same since   180931  2023-05-24 20:01:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   380c6c1703..511b9f286c  511b9f286c3dadd041e0d90beeff7d47c9bf3b7a -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 24 23:38:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 23:38:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539247.839926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1y3C-0007B8-8u; Wed, 24 May 2023 23:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539247.839926; Wed, 24 May 2023 23:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1y3C-0007B1-6A; Wed, 24 May 2023 23:37:50 +0000
Received: by outflank-mailman (input) for mailman id 539247;
 Wed, 24 May 2023 23:37:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hEuA=BN=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q1y3A-0007Av-LS
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 23:37:48 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fc3c7283-fa8b-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 01:37:46 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4079C60A20;
 Wed, 24 May 2023 23:37:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id F270BC433EF;
 Wed, 24 May 2023 23:37:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc3c7283-fa8b-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684971464;
	bh=6uKVCP+Qbgk4oPiNn1yhMJ0omp7uIoP9Abjj3g+v0yA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QbO8uFfwcaRLr+/+sTj8tgSV/6oOglhFPWu6PynKT8WwXP6Lj0jTT/fm5zPfHQuRq
	 +hf/xXSSOATFx5esYPzwXM7IWKGWrM5NthxQ3gpzKqBYxlq7H8SpxWMDcpM1RMn4Q1
	 H18dturO1ekTAp3iu9VG2ca0YywjAcOY9rLA3qPJsRXiUz36t7cWaLx2fHFsmVRYSq
	 502on6+AFSZRfB/uSAMgGVG1aqhuydMXeR6WvbXqReOYfv44PGCg+Mu8NuNQ+9HFEK
	 f4sr+8Heo7ceRkyGWr6UjqgXA2t9bA/gG539vUMXSZvS4Mf9uNGV0rs9PXPuoEFyoq
	 WHK8epqseXAGw==
Date: Wed, 24 May 2023 16:37:42 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
In-Reply-To: <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com> <ZG4dmJuzNVUE5UIY@Air-de-Roger> <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 24 May 2023, Jan Beulich wrote:
> >> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
> >>      modify_bars() to consistently respect BARs of hidden devices while
> >>      setting up "normal" ones (i.e. to avoid as much as possible the
> >>      "continue" path introduced here), setting up of the former may want
> >>      doing first.
> > 
> > But BARs of hidden devices should be mapped into dom0 physmap?
> 
> Yes.

The BARs would be mapped read-only (not read-write), right? Otherwise we
let dom0 access devices that belong to Xen, which doesn't seem like a
good idea.

But even if we map the BARs read-only, what is the benefit of mapping
them to Dom0? If Dom0 loads a driver for it and the driver wants to
initialize the device, the driver will crash because the MMIO region is
read-only instead of read-write, right?

How does this device hiding work for dom0? How does dom0 know not to
access a device that is present on the PCI bus but is used by Xen?

The reason why I suggested hiding the device completely in the past was
that I assumed we had to hide the device (make it "disappear" on the
PCI bus), as otherwise Dom0 would access it. Is there another way to
mark a PCI device as "inaccessible" or "disabled"?
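[To make the "inaccessible" idea concrete: a minimal stand-alone sketch,
not Xen code. It models a per-segment read-only bitmap in the spirit of
Xen's pci_get_ro_map(), used to decide whether config-space writes to a
Xen-owned device are allowed; all names and the layout are illustrative
assumptions only.]

```c
/* Minimal model (not Xen code): a per-segment read-only bitmap deciding
 * whether a domain may write to a device's config space.  Reads can still
 * be permitted, so dom0 sees that the device exists but cannot poke it. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BDF_COUNT 65536            /* 8-bit bus, 5-bit dev, 3-bit fn */
#define BITS_PER_WORD (8 * sizeof(unsigned long))

struct pci_seg_model {
    unsigned long ro_map[BDF_COUNT / BITS_PER_WORD];
};

static void mark_ro(struct pci_seg_model *seg, uint16_t bdf)
{
    seg->ro_map[bdf / BITS_PER_WORD] |= 1UL << (bdf % BITS_PER_WORD);
}

static bool is_ro(const struct pci_seg_model *seg, uint16_t bdf)
{
    return seg->ro_map[bdf / BITS_PER_WORD] &
           (1UL << (bdf % BITS_PER_WORD));
}

/* Config-space writes are dropped for r/o (Xen-owned) devices. */
static bool cfg_write_allowed(const struct pci_seg_model *seg, uint16_t bdf)
{
    return !is_ro(seg, bdf);
}
```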


From xen-devel-bounces@lists.xenproject.org Wed May 24 23:41:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 23:41:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539251.839935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1y69-00009d-Nd; Wed, 24 May 2023 23:40:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539251.839935; Wed, 24 May 2023 23:40:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1y69-00009W-KV; Wed, 24 May 2023 23:40:53 +0000
Received: by outflank-mailman (input) for mailman id 539251;
 Wed, 24 May 2023 23:40:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hEuA=BN=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q1y68-00009Q-2Y
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 23:40:52 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 69f3e674-fa8c-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 01:40:50 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 7999561BAC;
 Wed, 24 May 2023 23:40:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 433F3C433EF;
 Wed, 24 May 2023 23:40:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69f3e674-fa8c-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684971648;
	bh=mustg59kTrFir/r1vrmDKAg/EoZXV6B6Ri2bio/sLdE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=uTAn9aQ+GK5oDtzW3Xv4QcbIPbxkAePP6lxgDEZVhqYLo+Pb7LlbpJIYRiYobUbtB
	 xACnHRQrzGnrn7iNcI6FvvOl3uWZkshB4iJsOvyiTFeTyu71ytuDxl7U5p4iouW24l
	 Zp17OKbziVmtd13nwE3lkscTrlpiUVHBUrwrjfOZhu/Pot2XkAHeiPVM5vS2I/B5Rz
	 77uY45xvXwPySDQ7fdTuodCZCGrcYVCVBBLk/FO+wwQMrykYkZjlH7djcWQG3fRu39
	 G7oZPESoBNSPjAQ+A8k8buhzwlXIAdkpbD78jaesyg7J3006ZUtiP4c/re9Wt/3Oqv
	 MQbPpYgNvFBXQ==
Date: Wed, 24 May 2023 16:40:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
In-Reply-To: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305241639150.44000@ubuntu-linux-20-04-desktop>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 24 May 2023, Jan Beulich wrote:
> Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
> console) are associated with DomXEN, not Dom0. This means that while
> looking for overlapping BARs such devices cannot be found on Dom0's list
> of devices; DomXEN's list also needs to be scanned.
> 
> Suppress vPCI init altogether for r/o devices (which constitute a subset
> of hidden ones).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

This works! Ship it! :-)
https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4346896950

I understand this is an RFC and there are still open questions, but
thank you for addressing the issue quickly.



> ---
> RFC: The modify_bars() change is intentionally mis-formatted, as the
>      necessary re-indentation would make the diff difficult to read. At
>      this point I'd merely like to gather input towards possible better
>      approaches to solve the issue (not the least because quite possibly
>      there are further places needing changing).
> 
> RFC: Whether to actually suppress vPCI init is up for debate; adding the
>      extra logic is following Roger's suggestion (I'm not convinced it is
>      useful to have). If we want to keep that, a 2nd question would be
>      whether to keep it in vpci_add_handlers(): Both of its callers (can)
>      have a struct pci_seg readily available, so checking ->ro_map at the
>      call sites would be easier.
> 
> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
>      modify_bars() to consistently respect BARs of hidden devices while
>      setting up "normal" ones (i.e. to avoid as much as possible the
>      "continue" path introduced here), setting up of the former may want
>      doing first.
> 
> RFC: vpci_write()'s check of the r/o map may want moving out of mainline
>      code, into the case dealing with !pdev->vpci.
> ---
> v2: Extend existing comment. Relax assertion. Don't initialize vPCI for
>     r/o devices.
> 
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>      struct vpci_header *header = &pdev->vpci->header;
>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>      struct pci_dev *tmp, *dev = NULL;
> +    const struct domain *d;
>      const struct vpci_msix *msix = pdev->vpci->msix;
>      unsigned int i;
>      int rc;
> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
>  
>      /*
>       * Check for overlaps with other BARs. Note that only BARs that are
> -     * currently mapped (enabled) are checked for overlaps.
> +     * currently mapped (enabled) are checked for overlaps. Note also that
> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
>       */
> -    for_each_pdev ( pdev->domain, tmp )
> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
> +    for_each_pdev ( d, tmp )
>      {
>          if ( tmp == pdev )
>          {
> @@ -304,6 +307,7 @@ static int modify_bars(const struct pci_
>                   */
>                  continue;
>          }
> +if ( !tmp->vpci ) { ASSERT(d == dom_xen); continue; }//todo
>  
>          for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
>          {
> @@ -330,6 +334,7 @@ static int modify_bars(const struct pci_
>              }
>          }
>      }
> +if ( !is_hardware_domain(d) ) break; }//todo
>  
>      ASSERT(dev);
>  
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -70,6 +70,7 @@ void vpci_remove_device(struct pci_dev *
>  int vpci_add_handlers(struct pci_dev *pdev)
>  {
>      unsigned int i;
> +    const unsigned long *ro_map;
>      int rc = 0;
>  
>      if ( !has_vpci(pdev->domain) )
> @@ -78,6 +79,11 @@ int vpci_add_handlers(struct pci_dev *pd
>      /* We should not get here twice for the same device. */
>      ASSERT(!pdev->vpci);
>  
> +    /* No vPCI for r/o devices. */
> +    ro_map = pci_get_ro_map(pdev->sbdf.seg);
> +    if ( ro_map && test_bit(pdev->sbdf.bdf, ro_map) )
> +        return 0;
> +
>      pdev->vpci = xzalloc(struct vpci);
>      if ( !pdev->vpci )
>          return -ENOMEM;
> 
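[For readers of the archive: the modify_bars() hunk above is intentionally
mis-formatted, per the RFC note. The control flow it introduces, re-indented
and with Xen types replaced by hypothetical minimal stand-ins (this is a
sketch of the idea, not the actual patch), looks roughly like this: scan the
device's own domain, and for the hardware domain additionally scan DomXEN's
list, skipping hidden devices that have no vPCI state.]

```c
/* Sketch (not Xen code): BAR overlap scan over the device's own domain
 * plus, for the hardware domain, DomXEN's hidden devices. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct bar_model { unsigned long start, end; bool enabled; };

struct pdev_model {
    struct bar_model bar;
    bool has_vpci;                  /* hidden r/o devices may lack vPCI */
    struct pdev_model *next;
};

struct domain_model {
    struct pdev_model *pdevs;
    bool is_hwdom;
};

static bool overlaps(const struct bar_model *a, const struct bar_model *b)
{
    return a->enabled && b->enabled &&
           a->start <= b->end && b->start <= a->end;
}

static bool bar_overlap_check(const struct domain_model *own,
                              const struct domain_model *dom_xen,
                              const struct pdev_model *pdev)
{
    const struct domain_model *d = own;

    for ( ; ; d = dom_xen )
    {
        for ( const struct pdev_model *tmp = d->pdevs; tmp; tmp = tmp->next )
        {
            if ( tmp == pdev || !tmp->has_vpci )
                continue;           /* skip self and vPCI-less hidden devices */
            if ( overlaps(&tmp->bar, &pdev->bar) )
                return true;
        }
        /* Only the hardware domain also walks DomXEN's list, once. */
        if ( !own->is_hwdom || d == dom_xen )
            break;
    }
    return false;
}
```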


From xen-devel-bounces@lists.xenproject.org Wed May 24 23:51:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 24 May 2023 23:51:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539257.839946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1yGI-0001kp-Qe; Wed, 24 May 2023 23:51:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539257.839946; Wed, 24 May 2023 23:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1yGI-0001ki-Nz; Wed, 24 May 2023 23:51:22 +0000
Received: by outflank-mailman (input) for mailman id 539257;
 Wed, 24 May 2023 23:51:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BDQD=BN=amd.com=stefano.stabellini@srs-se1.protection.inumbo.net>)
 id 1q1yGH-0001kc-Qh
 for xen-devel@lists.xenproject.org; Wed, 24 May 2023 23:51:22 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20620.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dff569f3-fa8d-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 01:51:17 +0200 (CEST)
Received: from BN9PR03CA0316.namprd03.prod.outlook.com (2603:10b6:408:112::21)
 by CH3PR12MB7617.namprd12.prod.outlook.com (2603:10b6:610:140::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Wed, 24 May
 2023 23:51:13 +0000
Received: from BN8NAM11FT104.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:112:cafe::3f) by BN9PR03CA0316.outlook.office365.com
 (2603:10b6:408:112::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Wed, 24 May 2023 23:51:13 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN8NAM11FT104.mail.protection.outlook.com (10.13.177.160) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6433.16 via Frontend Transport; Wed, 24 May 2023 23:51:13 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Wed, 24 May
 2023 18:51:10 -0500
Received: from ubuntu-20.04.2-arm64.shared (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2375.34 via Frontend Transport; Wed, 24 May 2023 18:51:09 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dff569f3-fa8d-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UG1uQHf1PS6aFhM68Cz+ovRWYTp1o7KBU0gWE5szrB/v0i7Bgm2Fgw+a8S75IBMaAR86Vwu+5KVe1JSzGCb1NUG/fP5BnUHDiwfCOsRXNb385O2fm0QMu01dXWu8G11lv+3Woxvw7Dkf1sh4+a7M8njUbOiwYYCeDNqGla6Fr5p9tCIh38YI0JQ3y1FMh/hIhdYZ9A1pBEFAHYDcL+fYtwnTgwMvf5ymfg5ixe1NIvlbnoG343/46mkMTj7AyVnnjoXY2ZfcYJ8eXtsmqPBY3eRFIYG8pqU1Uo8P03r6t6vkShVXi0lMJcIsbcc0tJlGp6ga3N2omJIPsu679p1skw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=iA7i5SpS5XOBzknTlY79r/o2KC7fLSJ3ksCljteN7JY=;
 b=XMnVA2t49t0AXh7GFL1EvZSKzKObSwwkMSxAcvqw7GhKzKwsErCMnWOUY2LupuWiB+N7XHBGqnCBDubkiyj12kS7J9kT3qDfVsuizpuTzQFPh1cD0QBn5f3Wsr6qGbdGOnP2I8Drm61uT2AP2BICNUQ/SF6canhs208Gs4JaHM58qad8w7kmIF1icMNqzrY38R4y6+H7ykdvF/bznPhiVyScC0sfWLsBePfgRGontwHO+CmixMiGNdWdYV1ohE60VXw0qFjRMg2lG1myFg42K96QI4+sF4B07UJGNX0pLOqZnXYP8HKf10+qdx9xB5Sfb6g0UmnA4b0EY5l3oglbYA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=citrix.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iA7i5SpS5XOBzknTlY79r/o2KC7fLSJ3ksCljteN7JY=;
 b=DgT/Pfu3bSgcU4q5Rk7IiqnajJ9lyepMOSIez412BZAQ3nuc8mj9CbWhU2C2CHhwla4mzm2THC+bLILeuaNWEg3STNZej5l6A9HYIazPD7GrPMGFczyU3fYu8kvkOd/O+1YRlSmwltOb0aye+o2Q4npFRgZFpwOcyaJn0p1eAzM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Date: Wed, 24 May 2023 16:51:03 -0700
From: Stefano Stabellini <stefano.stabellini@amd.com>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@amd.com>,
	<xen-devel@lists.xenproject.org>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <xenia.ragiadakou@amd.com>
Subject: Re: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix
In-Reply-To: <ZGzSnu8m/IqjmyHx@Air-de-Roger>
Message-ID: <alpine.DEB.2.22.394.2305241646520.44000@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop> <ZGzFnE2w/YqYT35c@Air-de-Roger> <ZGzSnu8m/IqjmyHx@Air-de-Roger>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="8323329-690127031-1684972061=:44000"
Content-ID: <alpine.DEB.2.22.394.2305241648030.44000@ubuntu-linux-20-04-desktop>
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT104:EE_|CH3PR12MB7617:EE_
X-MS-Office365-Filtering-Correlation-Id: 3ee378bb-6a7e-42d3-9be1-08db5cb1c1e2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2023 23:51:13.4408
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3ee378bb-6a7e-42d3-9be1-08db5cb1c1e2
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT104.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB7617

--8323329-690127031-1684972061=:44000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2305241648031.44000@ubuntu-linux-20-04-desktop>

On Tue, 23 May 2023, Roger Pau Monné wrote:
> On Tue, May 23, 2023 at 03:54:36PM +0200, Roger Pau Monné wrote:
> > On Thu, May 18, 2023 at 04:44:53PM -0700, Stefano Stabellini wrote:
> > > Hi all,
> > > 
> > > After many PVH Dom0 suspend/resume cycles we are seeing the following
> > > Xen crash (it is random and doesn't reproduce reliably):
> > > 
> > > (XEN) [555.042981][<ffff82d04032a137>] R arch/x86/irq.c#_clear_irq_vector+0x214/0x2bd
> > > (XEN) [555.042986][<ffff82d04032a74c>] F destroy_irq+0xe2/0x1b8
> > > (XEN) [555.042991][<ffff82d0403276db>] F msi_free_irq+0x5e/0x1a7
> > > (XEN) [555.042995][<ffff82d04032da2d>] F unmap_domain_pirq+0x441/0x4b4
> > > (XEN) [555.043001][<ffff82d0402d29b9>] F arch/x86/hvm/vmsi.c#vpci_msi_disable+0xc0/0x155
> > > (XEN) [555.043006][<ffff82d0402d39fc>] F vpci_msi_arch_disable+0x1e/0x2b
> > > (XEN) [555.043013][<ffff82d04026561c>] F drivers/vpci/msi.c#control_write+0x109/0x10e
> > > (XEN) [555.043018][<ffff82d0402640c3>] F vpci_write+0x11f/0x268
> > > (XEN) [555.043024][<ffff82d0402c726a>] F arch/x86/hvm/io.c#vpci_portio_write+0xa0/0xa7
> > > (XEN) [555.043029][<ffff82d0402c6682>] F hvm_process_io_intercept+0x203/0x26f
> > > (XEN) [555.043034][<ffff82d0402c6718>] F hvm_io_intercept+0x2a/0x4c
> > > (XEN) [555.043039][<ffff82d0402b6353>] F arch/x86/hvm/emulate.c#hvmemul_do_io+0x29b/0x5f6
> > > (XEN) [555.043043][<ffff82d0402b66dd>] F arch/x86/hvm/emulate.c#hvmemul_do_io_buffer+0x2f/0x6a
> > > (XEN) [555.043048][<ffff82d0402b7bde>] F hvmemul_do_pio_buffer+0x33/0x35
> > > (XEN) [555.043053][<ffff82d0402c7042>] F handle_pio+0x6d/0x1b4
> > > (XEN) [555.043059][<ffff82d04029ec20>] F svm_vmexit_handler+0x10bf/0x18b0
> > > (XEN) [555.043064][<ffff82d0402034e5>] F svm_stgi_label+0x8/0x18
> > > (XEN) [555.043067]
> > > (XEN) [555.469861]
> > > (XEN) [555.471855] ****************************************
> > > (XEN) [555.477315] Panic on CPU 9:
> > > (XEN) [555.480608] Assertion 'per_cpu(vector_irq, cpu)[old_vector] == irq' failed at arch/x86/irq.c:233
> > > (XEN) [555.489882] ****************************************
> > > 
> > > Looking at the code in question, the ASSERT looks wrong to me.
> > > 
> > > Specifically, if you see send_cleanup_vector and
> > > irq_move_cleanup_interrupt, it is entirely possible to have old_vector
> > > still valid and also move_in_progress still set, but only some of the
> > > per_cpu(vector_irq, me)[vector] cleared. It seems to me that this could
> > > happen especially when an MSI has a large old_cpu_mask.
> > 
> > i guess the only way to get into such situation would be if you happen
> > to execute _clear_irq_vector() with a cpu_online_map smaller than the
> > one in old_cpu_mask, at which point you will leave old_vector fields
> > not updated.
> > 
> > Maybe somehow you get into this situation when doing suspend/resume?
> > 
> > Could you try to add a:
> > 
> > ASSERT(cpumask_equal(tmp_mask, desc->arch.old_cpu_mask));
> > 
> > Before the `for_each_cpu(cpu, tmp_mask)` loop?
> 
> I see that the old_cpu_mask is cleared in release_old_vec(), so that
> suggestion is not very useful.
> 
> Does the crash happen at specific points, for example just after
> resume or before suspend?

Hi Roger, Jan,

Unfortunately we are completely blind on this issue: it is not
reproducible and has only happened once, so we have only our wits to go
on. However, looking at the codebase I think there are a few possible
races. Here is my analysis and a suggested fix.


---
xen/irq: fix races between send_cleanup_vector and _clear_irq_vector

It is possible for send_cleanup_vector and _clear_irq_vector to run at
the same time on different CPUs. In that case there is a race, as both
_clear_irq_vector and irq_move_cleanup_interrupt try to clear
old_vector.

This patch fixes 3 races:

1) While irq_move_cleanup_interrupt is running on multiple CPUs at the
same time as _clear_irq_vector, it is possible that only some of the
per_cpu(vector_irq, cpu)[old_vector] entries are still valid. So, turn
the ASSERT in _clear_irq_vector into an if.

2) It is possible for _clear_irq_vector to run at the same time as
release_old_vec, called from irq_move_cleanup_interrupt. At the moment,
_clear_irq_vector can read a valid old_cpu_mask but an invalid
old_vector (because release_old_vec is in the middle of setting it to
invalid). To avoid this problem, in release_old_vec clear old_cpu_mask
before setting old_vector to invalid. This way, in _clear_irq_vector we
know that if old_vector is invalid then old_cpu_mask is also zero, and
we don't enter the loop.

3) It is possible for release_old_vec to run twice at the same time for
the same old_vector. Remove both ASSERTs in release_old_vec so that
calling it twice for the same old_vector, whether concurrently or back
to back, is safe.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 xen/arch/x86/irq.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 20150b1c7f..d9c139cf1b 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -112,16 +112,11 @@ static void release_old_vec(struct irq_desc *desc)
 {
     unsigned int vector = desc->arch.old_vector;
 
-    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
     cpumask_clear(desc->arch.old_cpu_mask);
+    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
 
-    if ( !valid_irq_vector(vector) )
-        ASSERT_UNREACHABLE();
-    else if ( desc->arch.used_vectors )
-    {
-        ASSERT(test_bit(vector, desc->arch.used_vectors));
+    if ( desc->arch.used_vectors )
         clear_bit(vector, desc->arch.used_vectors);
-    }
 }
 
 static void _trace_irq_mask(uint32_t event, int irq, int vector,
@@ -230,9 +225,11 @@ static void _clear_irq_vector(struct irq_desc *desc)
 
         for_each_cpu(cpu, tmp_mask)
         {
-            ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
-            TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
-            per_cpu(vector_irq, cpu)[old_vector] = ~irq;
+            if ( per_cpu(vector_irq, cpu)[old_vector] == irq )
+            {
+                TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
+                per_cpu(vector_irq, cpu)[old_vector] = ~irq;
+            }
         }
 
         release_old_vec(desc);
-- 
2.25.1
--8323329-690127031-1684972061=:44000--


From xen-devel-bounces@lists.xenproject.org Thu May 25 00:38:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 00:38:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539261.839956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1yzH-0006nR-Fj; Thu, 25 May 2023 00:37:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539261.839956; Thu, 25 May 2023 00:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1yzH-0006nK-Cd; Thu, 25 May 2023 00:37:51 +0000
Received: by outflank-mailman (input) for mailman id 539261;
 Thu, 25 May 2023 00:37:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gGWh=BO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q1yzG-0006mv-20
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 00:37:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e0a4a67-fa94-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 02:37:47 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 3F480640E1;
 Thu, 25 May 2023 00:37:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 501BBC433D2;
 Thu, 25 May 2023 00:37:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e0a4a67-fa94-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684975064;
	bh=R/CEbL3gLijXrnN1wmloMA92/w/+Y2y8jCJRaA0r2j4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=PE8mFMDLFhgHlmb768p/ZNPrUeoe57nW4gUASnYvLAD3hedPozwIkOwTL0HgonGV2
	 kkQFVi2jAdPAwoaeR0bydedXYLccIVxnniErj1G5IaNhVAfytDv60NDiMxqgd87nkH
	 pcklZBLTjVohQVBQv3Li9ct/R/XHKfhJO/3VrAvEeBuNPSAd6AQqxIUjogEtEEaAZ8
	 n81yaIZnc+kAoQrWOSS6thvQB+Vx/evtX7VQ27lmiitjuKq8C+9hkVa+gwqsMhgiv4
	 bsKY88lgH2gjVhuczBSpjAAsZZSDXSEFL205wsF9mHxTWmwK+yR+Mp6eFT4+dO7kXp
	 FfEcBsQvILIFQ==
Date: Wed, 24 May 2023 17:37:41 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/3] xen/misra: xen-analysis.py: Improve the cppcheck
 version check
In-Reply-To: <20230519093019.2131896-2-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305241737340.44000@ubuntu-linux-20-04-desktop>
References: <20230519093019.2131896-1-luca.fancellu@arm.com> <20230519093019.2131896-2-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 19 May 2023, Luca Fancellu wrote:
> Use tuple comparison to check the cppcheck version.
> 
> Take the occasion to harden the regex, escaping the dots so that we
> check for them instead of generic characters.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/scripts/xen_analysis/cppcheck_analysis.py | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_analysis.py b/xen/scripts/xen_analysis/cppcheck_analysis.py
> index c8abbe0fca79..8dc45e653b79 100644
> --- a/xen/scripts/xen_analysis/cppcheck_analysis.py
> +++ b/xen/scripts/xen_analysis/cppcheck_analysis.py
> @@ -157,7 +157,7 @@ def generate_cppcheck_deps():
>              "Error occured retrieving cppcheck version:\n{}\n\n{}"
>          )
>  
> -    version_regex = re.search('^Cppcheck (\d+).(\d+)(?:.\d+)?$',
> +    version_regex = re.search('^Cppcheck (\d+)\.(\d+)(?:\.\d+)?$',
>                                invoke_cppcheck, flags=re.M)
>      # Currently, only cppcheck version >= 2.7 is supported, but version 2.8 is
>      # known to be broken, please refer to docs/misra/cppcheck.txt
> @@ -166,15 +166,10 @@ def generate_cppcheck_deps():
>              "Can't find cppcheck version or version not identified: "
>              "{}".format(invoke_cppcheck)
>          )
> -    major = int(version_regex.group(1))
> -    minor = int(version_regex.group(2))
> -    if major < 2 or (major == 2 and minor < 7):
> +    version = (int(version_regex.group(1)), int(version_regex.group(2)))
> +    if version < (2, 7) or version == (2, 8):
>          raise CppcheckDepsPhaseError(
> -            "Cppcheck version < 2.7 is not supported"
> -        )
> -    if major == 2 and minor == 8:
> -        raise CppcheckDepsPhaseError(
> -            "Cppcheck version 2.8 is known to be broken, see the documentation"
> +            "Cppcheck version < 2.7 or 2.8 are not supported"
>          )
>  
>      # If misra option is selected, append misra addon and generate cppcheck
> -- 
> 2.34.1
> 
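For reference, the tuple-comparison approach in the patch above can be
sketched standalone like this (illustrative only: parse_cppcheck_version
and the sample strings are made up, not the script's actual function or
output):

```python
import re

def parse_cppcheck_version(output):
    """Extract (major, minor) from 'cppcheck --version' output."""
    # Escaped dots match a literal '.', not any character.
    m = re.search(r'^Cppcheck (\d+)\.(\d+)(?:\.\d+)?$', output, flags=re.M)
    if m is None:
        raise ValueError("Can't find cppcheck version: {}".format(output))
    return (int(m.group(1)), int(m.group(2)))

version = parse_cppcheck_version("Cppcheck 2.7\n")
# Tuples compare element-wise, so one condition covers both
# the minimum version and the known-broken 2.8 release.
assert not (version < (2, 7) or version == (2, 8))
assert parse_cppcheck_version("Cppcheck 2.10.3\n") == (2, 10)
```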


From xen-devel-bounces@lists.xenproject.org Thu May 25 00:38:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 00:38:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539264.839966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1yzw-0007GR-O6; Thu, 25 May 2023 00:38:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539264.839966; Thu, 25 May 2023 00:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1yzw-0007GK-Kr; Thu, 25 May 2023 00:38:32 +0000
Received: by outflank-mailman (input) for mailman id 539264;
 Thu, 25 May 2023 00:38:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2bT+=BO=quicinc.com=quic_tsoni@srs-se1.protection.inumbo.net>)
 id 1q1yzv-0007Fv-K3
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 00:38:31 +0000
Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com
 [205.220.180.131]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 766847be-fa94-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 02:38:27 +0200 (CEST)
Received: from pps.filterd (m0279872.ppops.net [127.0.0.1])
 by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 34P0TOdZ014158; Thu, 25 May 2023 00:37:18 GMT
Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com
 [199.106.103.254])
 by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3qscautcq9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 25 May 2023 00:37:18 +0000
Received: from nasanex01a.na.qualcomm.com (nasanex01a.na.qualcomm.com
 [10.52.223.231])
 by NASANPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 34P0bGcV018047
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 25 May 2023 00:37:16 GMT
Received: from [10.110.74.38] (10.80.80.8) by nasanex01a.na.qualcomm.com
 (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.42; Wed, 24 May
 2023 17:37:15 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 766847be-fa94-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=message-id : date :
 mime-version : subject : to : cc : references : from : in-reply-to :
 content-type : content-transfer-encoding; s=qcppdkim1;
 bh=/e6B8PzwIMOA1NUnKfCKwK2WyRgcsLlUL/WWtMhoXOo=;
 b=p3qB04b9XJKftly38O5+vr0J26dCdM35VoVjloUmf9fYS0RaN82z0Hr3YOWh4ZJQsIjf
 BZiPx+ihcB08jdCDDKbmzPherYn1S2m1s//fnj9cxWlR23hvxTUQUVj+hShpm/tBZn5v
 jDiBZ5ioeyyjMpK+LEvalgHQPiEK8wKmXBNCpReedGyLTdn2zyAqM3iDTmQKde8ORkWM
 T86J62r2qVscXNPcq/nHDEto456qzSc9BtrhHoCSk2cNbiTANPUKFw9/ulvZYESobBSt
 WF7jcasL0f/+Lab6joyPDPm4sna1+eyFr3WPINyTsoOpta5EBv9ueff6H1jxJGX7j1Tt ww== 
Message-ID: <cf1d6831-dac9-f738-44b4-a9dbc575b7e9@quicinc.com>
Date: Wed, 24 May 2023 17:37:14 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Content-Language: en-US
To: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
        "mic@digikod.net"
	<mic@digikod.net>,
        "Christopherson,, Sean" <seanjc@google.com>,
        "bp@alien8.de" <bp@alien8.de>,
        "dave.hansen@linux.intel.com"
	<dave.hansen@linux.intel.com>,
        "keescook@chromium.org"
	<keescook@chromium.org>,
        "hpa@zytor.com" <hpa@zytor.com>,
        "mingo@redhat.com"
	<mingo@redhat.com>,
        "tglx@linutronix.de" <tglx@linutronix.de>,
        "pbonzini@redhat.com" <pbonzini@redhat.com>,
        "wanpengli@tencent.com"
	<wanpengli@tencent.com>,
        "vkuznets@redhat.com" <vkuznets@redhat.com>
CC: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
        "yuanyu@google.com"
	<yuanyu@google.com>,
        "jamorris@linux.microsoft.com"
	<jamorris@linux.microsoft.com>,
        "marian.c.rotariu@gmail.com"
	<marian.c.rotariu@gmail.com>,
        "Graf, Alexander" <graf@amazon.com>,
        "Andersen,
 John S" <john.s.andersen@intel.com>,
        "madvenka@linux.microsoft.com"
	<madvenka@linux.microsoft.com>,
        "liran.alon@oracle.com"
	<liran.alon@oracle.com>,
        "ssicleru@bitdefender.com"
	<ssicleru@bitdefender.com>,
        "tgopinath@microsoft.com"
	<tgopinath@microsoft.com>,
        "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>,
        "qemu-devel@nongnu.org"
	<qemu-devel@nongnu.org>,
        "linux-security-module@vger.kernel.org"
	<linux-security-module@vger.kernel.org>,
        "will@kernel.org" <will@kernel.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>,
        "mdontu@bitdefender.com" <mdontu@bitdefender.com>,
        "linux-hardening@vger.kernel.org" <linux-hardening@vger.kernel.org>,
        "linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
        "virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>,
        "nicu.citu@icloud.com"
	<nicu.citu@icloud.com>,
        "ztarkhani@microsoft.com" <ztarkhani@microsoft.com>,
        "x86@kernel.org" <x86@kernel.org>
References: <20230505152046.6575-1-mic@digikod.net>
 <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
From: Trilok Soni <quic_tsoni@quicinc.com>
In-Reply-To: <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit
X-Originating-IP: [10.80.80.8]
X-ClientProxiedBy: nasanex01a.na.qualcomm.com (10.52.223.231) To
 nasanex01a.na.qualcomm.com (10.52.223.231)
X-QCInternal: smtphost
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085
X-Proofpoint-ORIG-GUID: IbOKlK9jsOZG2ChFhg5i0OOOj4Du3a1t
X-Proofpoint-GUID: IbOKlK9jsOZG2ChFhg5i0OOOj4Du3a1t
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.573,FMLib:17.11.176.26
 definitions=2023-05-24_17,2023-05-24_01,2023-05-22_02
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 adultscore=0
 impostorscore=0 priorityscore=1501 spamscore=0 lowpriorityscore=0
 phishscore=0 bulkscore=0 suspectscore=0 clxscore=1015 malwarescore=0
 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2304280000 definitions=main-2305250003

On 5/24/2023 3:20 PM, Edgecombe, Rick P wrote:
> On Fri, 2023-05-05 at 17:20 +0200, Mickaël Salaün wrote:
>> # How does it work?
>>
>> This implementation mainly leverages KVM capabilities to control the
>> Second
>> Layer Address Translation (or the Two Dimensional Paging e.g.,
>> Intel's EPT or
>> AMD's RVI/NPT) and Mode Based Execution Control (Intel's MBEC)
>> introduced with
>> the Kaby Lake (7th generation) architecture. This allows to set
>> permissions on
>> memory pages in a complementary way to the guest kernel's managed
>> memory
>> permissions. Once these permissions are set, they are locked and
>> there is no
>> way back.
>>
>> A first KVM_HC_LOCK_MEM_PAGE_RANGES hypercall enables the guest
>> kernel to lock
>> a set of its memory page ranges with either the HEKI_ATTR_MEM_NOWRITE
>> or the
>> HEKI_ATTR_MEM_EXEC attribute. The first one denies write access to a
>> specific
>> set of pages (allow-list approach), and the second only allows kernel
>> execution
>> for a set of pages (deny-list approach).
>>
>> The current implementation sets the whole kernel's .rodata (i.e., any
>> const or
>> __ro_after_init variables, which includes critical security data such
>> as LSM
>> parameters) and .text sections as non-writable, and the .text section
>> is the
>> only one where kernel execution is allowed. This is possible thanks
>> to the new
>> MBEC support also brought by this series (otherwise the vDSO would
>> have to be
>> executable). Thanks to this hardware support (VT-x, EPT and MBEC),
>> the
>> performance impact of such guest protection is negligible.
>>
>> The second KVM_HC_LOCK_CR_UPDATE hypercall enables guests to pin some
>> of its
>> CPU control register flags (e.g., X86_CR0_WP, X86_CR4_SMEP,
>> X86_CR4_SMAP),
>> which is another complementary hardening mechanism.
>>
>> Heki can be enabled with the heki=1 boot command argument.
>>
>>
> 
> Can the guest kernel ask the host VMM's emulated devices to DMA into
> the protected data? It should go through the host userspace mappings I
> think, which don't care about EPT permissions. Or did I miss where you
> are protecting that another way? There are a lot of easy ways to ask
> the host to write to guest memory that don't involve the EPT. You
> probably need to protect the host userspace mappings, and also the
> places in KVM that kmap a GPA provided by the guest.
> 
> [ snip ]
> 
>>
>> # Current limitations
>>
>> The main limitation of this patch series is the statically enforced
>> permissions. This is not an issue for kernels without module but this
>> needs to
>> be addressed.  Mechanisms that dynamically impact kernel executable
>> memory are
>> not handled for now (e.g., kernel modules, tracepoints, eBPF JIT),
>> and such
>> code will need to be authenticated.  Because the hypervisor is highly
>> privileged and critical to the security of all the VMs, we don't want
>> to
>> implement a code authentication mechanism in the hypervisor itself
>> but delegate
>> this verification to something much less privileged. We are thinking
>> of two
>> ways to solve this: implement this verification in the VMM or spawn a
>> dedicated
>> special VM (similar to Windows's VBS). There are pros on cons to each
>> approach:
>> complexity, verification code ownership (guest's or VMM's), access to
>> guest
>> memory (i.e., confidential computing).
> 
> The kernel often creates writable aliases in order to write to
> protected data (kernel text, etc). Some of this is done right as text
> is being first written out (alternatives for example), and some happens
> way later (jump labels, etc). So for verification, I wonder what stage
> you would be verifying? If you want to verify the end state, you would
> have to maintain knowledge in the verifier of all the touch-ups the
> kernel does. I think it would get very tricky.

Right, and on Arm (from what I know) errata can be applied using the
alternatives framework when a CPU is hotplugged in after boot.

---Trilok Soni


From xen-devel-bounces@lists.xenproject.org Thu May 25 00:46:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 00:46:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539269.839976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1z7v-0000Nw-Iu; Thu, 25 May 2023 00:46:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539269.839976; Thu, 25 May 2023 00:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1z7v-0000Np-Fq; Thu, 25 May 2023 00:46:47 +0000
Received: by outflank-mailman (input) for mailman id 539269;
 Thu, 25 May 2023 00:46:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gGWh=BO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q1z7u-0000Nj-9s
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 00:46:46 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9e5dae11-fa95-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 02:46:44 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id BF8876413A;
 Thu, 25 May 2023 00:46:42 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B3BE3C433D2;
 Thu, 25 May 2023 00:46:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e5dae11-fa95-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684975602;
	bh=nfH3mGgNwMSbNgF7ChnkiSmXD4C5vRPEvASZWFe7nwg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=XLfV3Xaerez7xkpsVhDp+HVK9vXQ1llvLC/3PudOh9845//F+pUOXKfmnpSpW5Z2f
	 U2l0T7wL7P2OOPYX1QWMhbA3I5ZWEyifkXoUDUkllCOwkWFyXozJtUdSCN+4qMGmFj
	 DP8+uUsqFmjsLFGpyqEKvBeugfJnHykEDuPtBGKL5/H+IB1b/dqrauY85EZgvMkoYT
	 9x0XZDuD/tbCw1nRoVYbuZV9g8F39cQtA5N3HWYAxs6a28yARXkILdbKHSF9a9TZgr
	 6iLgtcSt2Ta9+mKwmw1bkLWzGMZ0ChOR7qJCxf2oEzyhKgSD/D94EFuss2+8j0b8QU
	 Ecc7XFGvUI6Gg==
Date: Wed, 24 May 2023 17:46:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, Michal Orzel <michal.orzel@amd.com>
Subject: Re: [PATCH 3/3] xen/misra: xen-analysis.py: Fix cppcheck report
 relative paths
In-Reply-To: <20230519093019.2131896-4-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305241744170.44000@ubuntu-linux-20-04-desktop>
References: <20230519093019.2131896-1-luca.fancellu@arm.com> <20230519093019.2131896-4-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 19 May 2023, Luca Fancellu wrote:
> Fix the generation of the relative path from the repo, for cppcheck
> reports, when the script is launching make with in-tree build.
> 
> Fixes: b046f7e37489 ("xen/misra: xen-analysis.py: use the relative path from the ...")
> Reported-by: Michal Orzel <michal.orzel@amd.com>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  .../xen_analysis/cppcheck_report_utils.py     | 25 ++++++++++++++++---
>  1 file changed, 21 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> index fdc299c7e029..10100f6c6a57 100644
> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> @@ -1,6 +1,7 @@
>  #!/usr/bin/env python3
>  
> -import os
> +import os, re
> +from . import settings
>  from xml.etree import ElementTree
>  
>  class CppcheckHTMLReportError(Exception):
> @@ -101,12 +102,28 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>              text_report_content = list(text_report_content)
>              # Strip path from report lines
>              for i in list(range(0, len(text_report_content))):
> -                for path in strip_paths:
> -                    text_report_content[i] = text_report_content[i].replace(
> -                                                                path + "/", "")
>                  # Split by : separator
>                  text_report_content[i] = text_report_content[i].split(":")
>  
> +                for path in strip_paths:
> +                    text_report_content[i][0] = \
> +                        text_report_content[i][0].replace(path + "/", "")

Why move this for loop to after the "Split by : separator" step? If it
is necessary, would it make sense to do it in the previous patch?


> +                # When the compilation is in-tree, the makefile places
> +                # the directory in /xen/xen, making cppcheck produce
> +                # relative path from there, so check if "xen/" is a prefix
> +                # of the path and if it's not, check if it can be added to
> +                # have a relative path from the repository instead of from
> +                # /xen/xen
> +                if not text_report_content[i][0].startswith("xen/"):
> +                    # cppcheck first entry is in this format:
> +                    # path/to/file(line,cols), remove (line,cols)
> +                    cppcheck_file = re.sub(r'\(.*\)', '',
> +                                           text_report_content[i][0])
> +                    if os.path.isfile(settings.xen_dir + "/" + cppcheck_file):
> +                        text_report_content[i][0] = \
> +                            "xen/" + text_report_content[i][0]
> +
>              # sort alphabetically for second field (misra rule) and as second
>              # criteria for the first field (file name)
>              text_report_content.sort(key = lambda x: (x[1], x[0]))
> -- 
> 2.34.1
> 
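To make the path handling in the hunk above concrete, here is a
standalone sketch of what it does to one report line (the function name,
sample paths, and the repo_files set are made up; repo_files stands in
for the os.path.isfile() check against settings.xen_dir):

```python
import re

def normalize_entry(entry, strip_paths, repo_files):
    """Mimic the hunk on a single 'file(line,cols):rule:msg' report line."""
    fields = entry.split(":")
    # Strip the build-directory prefixes first.
    for path in strip_paths:
        fields[0] = fields[0].replace(path + "/", "")
    if not fields[0].startswith("xen/"):
        # The first field looks like path/to/file(line,cols);
        # drop the (line,cols) decoration before testing for the file.
        candidate = re.sub(r'\(.*\)', '', fields[0])
        if candidate in repo_files:
            fields[0] = "xen/" + fields[0]
    return fields

fields = normalize_entry("/build/xen/xen/arch/x86/irq.c(233,5):rule:msg",
                         ["/build/xen/xen"], {"arch/x86/irq.c"})
assert fields[0] == "xen/arch/x86/irq.c(233,5)"
```

After normalization the first field is a path relative to the
repository root, so the subsequent sort by (rule, file) operates on
consistent keys for both in-tree and out-of-tree builds.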


From xen-devel-bounces@lists.xenproject.org Thu May 25 01:09:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 01:09:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539275.839985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1zTE-0001Dh-DP; Thu, 25 May 2023 01:08:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539275.839985; Thu, 25 May 2023 01:08:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1zTE-0001Da-Ak; Thu, 25 May 2023 01:08:48 +0000
Received: by outflank-mailman (input) for mailman id 539275;
 Thu, 25 May 2023 01:08:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gGWh=BO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q1zTC-0001DU-K0
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 01:08:46 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b1730f5f-fa98-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 03:08:44 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4A6CB638E2;
 Thu, 25 May 2023 01:08:43 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4723DC4339B;
 Thu, 25 May 2023 01:08:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1730f5f-fa98-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684976922;
	bh=OJ3nGvm1kGzimKYOlU3imxP5BIz1dmvBtlW73TAM3k4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=tMmApaSuFxN2lxKaWT3mgVGqmajyVhFOq01m3wMu2TLwdxAmDOJZ+H4YsXVXb66Ov
	 pQDJWHFjbfeBjqDEmGbTGebtkekHnQPwla+GAkyUv05aTJ2o/uILMxtvrMGkf79ysA
	 vXp0HFGShSd9Pr/SmY1MFfW/rnBZ4RPXg4X4HWi54RUSHbV9HPADt28HdR3vFZEEvv
	 JqAfHUIGBLXv6yrcD5q3B07Cfhy8BifbSwv7+K+lWGHKDzSwTBexzDJQuTcIUFvunc
	 nOdtayr9FSzJsOf1UtLOUZEkoAhTaoj8NCwQ5WCsOPGcjFmp0peThTQxVt4SJy10w2
	 YbRgEiBC4Pshw==
Date: Wed, 24 May 2023 18:08:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] xen/misra: add diff-report.py tool
In-Reply-To: <20230519094613.2134153-2-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305241808050.44000@ubuntu-linux-20-04-desktop>
References: <20230519094613.2134153-1-luca.fancellu@arm.com> <20230519094613.2134153-2-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 19 May 2023, Luca Fancellu wrote:
> Add a new tool, diff-report.py, that can be used to compute the
> difference between reports generated by the xen-analysis.py tool.
> Currently the tool supports the Xen cppcheck text report format.
> 
> The tool prints every finding that is in the report passed with -r
> (check report) which is not in the report passed with -b (baseline).
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
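
The core of the tool, Report.__sub__ in report.py below, is essentially a
per-file set difference on line numbers. A simplified standalone sketch of
that idea (hypothetical helper name and data layout, not the patch code
itself; the real class also preserves original entry ordering):

```python
def diff_entries(new_entries, baseline_entries):
    """Keep entries of the new report that are absent from the baseline.

    Each argument maps a file path to a list of (line_number, text)
    tuples, mirroring the per-file dictionary kept by the Report class.
    """
    diff = {}
    for path, entries in new_entries.items():
        # Line numbers the baseline already reports for this file.
        base_lines = {n for n, _ in baseline_entries.get(path, [])}
        kept = [(n, t) for n, t in entries if n not in base_lines]
        if kept:
            diff[path] = kept
    return diff
```

As in the patch, files that appear only in the baseline are simply ignored:
only entries of the "check report" can end up in the diff.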


> ---
> Changes from v1:
>  - removed 2 methods from class ReportEntry that landed there by
>    mistake on rebase.
>  - Made the script also compatible with python2 (Stefano)
> ---
>  xen/scripts/diff-report.py                    |  80 ++++++++++++++
>  .../xen_analysis/diff_tool/__init__.py        |   0
>  .../xen_analysis/diff_tool/cppcheck_report.py |  44 ++++++++
>  xen/scripts/xen_analysis/diff_tool/debug.py   |  40 +++++++
>  xen/scripts/xen_analysis/diff_tool/report.py  | 100 ++++++++++++++++++
>  5 files changed, 264 insertions(+)
>  create mode 100755 xen/scripts/diff-report.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
>  create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py
> 
> diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
> new file mode 100755
> index 000000000000..f97cb2355cc3
> --- /dev/null
> +++ b/xen/scripts/diff-report.py
> @@ -0,0 +1,80 @@
> +#!/usr/bin/env python3
> +
> +from __future__ import print_function
> +import os
> +import sys
> +from argparse import ArgumentParser
> +from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
> +from xen_analysis.diff_tool.debug import Debug
> +from xen_analysis.diff_tool.report import ReportError
> +
> +
> +def log_info(text, end='\n'):
> +    # type: (str, str) -> None
> +    global args
> +    global file_out
> +
> +    if (args.verbose):
> +        print(text, end=end, file=file_out)
> +
> +
> +def main(argv):
> +    # type: (list) -> None
> +    global args
> +    global file_out
> +
> +    parser = ArgumentParser(prog="diff-report.py")
> +    parser.add_argument("-b", "--baseline", required=True, type=str,
> +                        help="Path to the baseline report.")
> +    parser.add_argument("--debug", action='store_true',
> +                        help="Produce intermediate reports during operations.")
> +    parser.add_argument("-o", "--out", default="stdout", type=str,
> +                        help="Where to print the tool output. Default is "
> +                             "stdout")
> +    parser.add_argument("-r", "--report", required=True, type=str,
> +                        help="Path to the 'check report', the one checked "
> +                             "against the baseline.")
> +    parser.add_argument("-v", "--verbose", action='store_true',
> +                        help="Print more information during the run.")
> +
> +    args = parser.parse_args()
> +
> +    if args.out == "stdout":
> +        file_out = sys.stdout
> +    else:
> +        try:
> +            file_out = open(args.out, "wt")
> +        except OSError as e:
> +            print("ERROR: Issue opening file {}: {}".format(args.out, e))
> +            sys.exit(1)
> +
> +    debug = Debug(args)
> +
> +    try:
> +        baseline_path = os.path.realpath(args.baseline)
> +        log_info("Loading baseline report {}".format(baseline_path), "")
> +        baseline = CppcheckReport(baseline_path)
> +        baseline.parse()
> +        debug.debug_print_parsed_report(baseline)
> +        log_info(" [OK]")
> +        new_rep_path = os.path.realpath(args.report)
> +        log_info("Loading check report {}".format(new_rep_path), "")
> +        new_rep = CppcheckReport(new_rep_path)
> +        new_rep.parse()
> +        debug.debug_print_parsed_report(new_rep)
> +        log_info(" [OK]")
> +    except ReportError as e:
> +        print("ERROR: {}".format(e))
> +        sys.exit(1)
> +
> +    output = new_rep - baseline
> +    print(output, end="", file=file_out)
> +
> +    if len(output) > 0:
> +        sys.exit(1)
> +
> +    sys.exit(0)
> +
> +
> +if __name__ == "__main__":
> +    main(sys.argv[1:])
> diff --git a/xen/scripts/xen_analysis/diff_tool/__init__.py b/xen/scripts/xen_analysis/diff_tool/__init__.py
> new file mode 100644
> index 000000000000..e69de29bb2d1
> diff --git a/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
> new file mode 100644
> index 000000000000..e7e80a9dde84
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
> @@ -0,0 +1,44 @@
> +#!/usr/bin/env python3
> +
> +import re
> +from .report import Report, ReportError
> +
> +
> +class CppcheckReport(Report):
> +    def __init__(self, report_path):
> +        # type: (str) -> None
> +        super(CppcheckReport, self).__init__(report_path)
> +        # This matches a string like:
> +        # path/to/file.c(<line number>,<digits>):<whatever>
> +        # and captures file name path and line number
> +        # the last capture group is used for text substitution in __str__
> +        self.__report_entry_regex = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')
> +
> +    def parse(self):
> +        # type: () -> None
> +        report_path = self.get_report_path()
> +        try:
> +            with open(report_path, "rt") as infile:
> +                report_lines = infile.readlines()
> +        except OSError as e:
> +            raise ReportError("Issue with reading file {}: {}"
> +                              .format(report_path, e))
> +        for line in report_lines:
> +            entry = self.__report_entry_regex.match(line)
> +            if entry and entry.group(1) and entry.group(2):
> +                file_path = entry.group(1)
> +                line_number = int(entry.group(2))
> +                self.add_entry(file_path, line_number, line)
> +            else:
> +                raise ReportError("Malformed report entry in file {}:\n{}"
> +                                  .format(report_path, line))
> +
> +    def __str__(self):
> +        # type: () -> str
> +        ret = ""
> +        for entry in self.to_list():
> +            ret += re.sub(self.__report_entry_regex,
> +                          r'{}({}\3'.format(entry.file_path,
> +                                            entry.line_number),
> +                          entry.text)
> +        return ret
> diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
> new file mode 100644
> index 000000000000..65cca2464110
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/debug.py
> @@ -0,0 +1,40 @@
> +#!/usr/bin/env python3
> +
> +from __future__ import print_function
> +import os
> +from .report import Report
> +
> +
> +class Debug:
> +    def __init__(self, args):
> +        self.args = args
> +
> +    def __get_debug_out_filename(self, path, type):
> +        # type: (str, str) -> str
> +        # Take basename
> +        file_name = os.path.basename(path)
> +        # Split in name and extension
> +        file_name = os.path.splitext(file_name)
> +        if self.args.out != "stdout":
> +            out_folder = os.path.dirname(self.args.out)
> +        else:
> +            out_folder = "./"
> +        dbg_report_path = os.path.join(out_folder, file_name[0] + type + file_name[1])
> +
> +        return dbg_report_path
> +
> +    def __debug_print_report(self, report, type):
> +        # type: (Report, str) -> None
> +        report_name = self.__get_debug_out_filename(report.get_report_path(),
> +                                                    type)
> +        try:
> +            with open(report_name, "wt") as outfile:
> +                print(report, end="", file=outfile)
> +        except OSError as e:
> +            print("ERROR: Issue opening file {}: {}".format(report_name, e))
> +
> +    def debug_print_parsed_report(self, report):
> +        # type: (Report) -> None
> +        if not self.args.debug:
> +            return
> +        self.__debug_print_report(report, ".parsed")
> diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
> new file mode 100644
> index 000000000000..4a303d61b3ea
> --- /dev/null
> +++ b/xen/scripts/xen_analysis/diff_tool/report.py
> @@ -0,0 +1,100 @@
> +#!/usr/bin/env python3
> +
> +import os
> +
> +
> +class ReportError(Exception):
> +    pass
> +
> +
> +class Report(object):
> +    class ReportEntry:
> +        def __init__(self, file_path, line_number, entry_text, line_id):
> +            # type: (str, int, str, int) -> None
> +            if not isinstance(line_number, int) or \
> +               not isinstance(line_id, int):
> +                raise ReportError("ReportEntry constructor wrong type args")
> +            self.file_path = file_path
> +            self.line_number = line_number
> +            self.text = entry_text
> +            self.line_id = line_id
> +
> +    def __init__(self, report_path):
> +        # type: (str) -> None
> +        self.__entries = {}
> +        self.__path = report_path
> +        self.__last_line_order = 0
> +
> +    def parse(self):
> +        # type: () -> None
> +        raise ReportError("Please create a specialised class from 'Report'.")
> +
> +    def get_report_path(self):
> +        # type: () -> str
> +        return self.__path
> +
> +    def get_report_entries(self):
> +        # type: () -> dict
> +        return self.__entries
> +
> +    def add_entry(self, entry_path, entry_line_number, entry_text):
> +        # type: (str, int, str) -> None
> +        entry = Report.ReportEntry(entry_path, entry_line_number, entry_text,
> +                                   self.__last_line_order)
> +        if entry_path in self.__entries.keys():
> +            self.__entries[entry_path].append(entry)
> +        else:
> +            self.__entries[entry_path] = [entry]
> +        self.__last_line_order += 1
> +
> +    def to_list(self):
> +        # type: () -> list
> +        report_list = []
> +        for _, entries in self.__entries.items():
> +            for entry in entries:
> +                report_list.append(entry)
> +
> +        report_list.sort(key=lambda x: x.line_id)
> +        return report_list
> +
> +    def __str__(self):
> +        # type: () -> str
> +        ret = ""
> +        for entry in self.to_list():
> +            ret += entry.file_path + ":" + str(entry.line_number) + ":" + entry.text
> +
> +        return ret
> +
> +    def __len__(self):
> +        # type: () -> int
> +        return len(self.to_list())
> +
> +    def __sub__(self, report_b):
> +        # type: (Report) -> Report
> +        if self.__class__ != report_b.__class__:
> +            raise ReportError("Diff of different type of report!")
> +
> +        filename, file_extension = os.path.splitext(self.__path)
> +        diff_report = self.__class__(filename + ".diff" + file_extension)
> +        # Put in the diff report only records of this report that are not
> +        # present in the report_b.
> +        for file_path, entries in self.__entries.items():
> +            rep_b_entries = report_b.get_report_entries()
> +            if file_path in rep_b_entries.keys():
> +                # File path exists in report_b, so check what entries of that
> +                # file path doesn't exist in report_b and add them to the diff
> +                rep_b_entries_num = [
> +                    x.line_number for x in rep_b_entries[file_path]
> +                ]
> +                for entry in entries:
> +                    if entry.line_number not in rep_b_entries_num:
> +                        diff_report.add_entry(file_path, entry.line_number,
> +                                              entry.text)
> +            else:
> +                # File path doesn't exist in report_b, so add every entry
> +                # of that file path to the diff
> +                for entry in entries:
> +                    diff_report.add_entry(file_path, entry.line_number,
> +                                          entry.text)
> +
> +        return diff_report
> -- 
> 2.34.1
> 
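
The entry regex used by CppcheckReport above can be exercised in isolation;
a minimal standalone check using the same pattern (the sample finding line
is invented for illustration):

```python
import re

# Same pattern as CppcheckReport.__report_entry_regex: it matches
#   path/to/file.c(<line>,<column>): <finding text>
# capturing the file path, the line number, and the tail that __str__
# reuses when rewriting entries.
entry_re = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')

m = entry_re.match("xen/arch/arm/mm.c(42,7): style: some finding")
assert m is not None
file_path = m.group(1)
line_number = int(m.group(2))
```

Note that the leading `(.*)` is greedy, so a file name that itself contains
parentheses still resolves to the last `(<digits>,<digits>):` occurrence.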


From xen-devel-bounces@lists.xenproject.org Thu May 25 01:19:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 01:19:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539279.839996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1zdG-0002ia-A5; Thu, 25 May 2023 01:19:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539279.839996; Thu, 25 May 2023 01:19:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q1zdG-0002iT-7Q; Thu, 25 May 2023 01:19:10 +0000
Received: by outflank-mailman (input) for mailman id 539279;
 Thu, 25 May 2023 01:19:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gGWh=BO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q1zdE-0002iJ-GK
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 01:19:08 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 23eb60c4-fa9a-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 03:19:06 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D4B9960CA4;
 Thu, 25 May 2023 01:19:04 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9A28FC433D2;
 Thu, 25 May 2023 01:19:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23eb60c4-fa9a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684977544;
	bh=poYWUzsvU1FJcSFwaQ/BgZYa728xSmNE2fmYEjAePHk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=TwlAAFK3FRSs8E0wCJeO0zyKTDsGA8GOg4PpHpTsM808uQbcmw8FtRWtBRjQzKGQP
	 D5e8TCcNG7EaNG6qC0A8ec1yqixXQNYo5/9MsjpWnzyrSiNdzpsGqc+itgguiGWpFM
	 BA6GIc9K2gz6y3eYyKah+1kBNMoptaLiexw2w728ZCiAOVqjnamTRlIRSEK/AvBZAH
	 IxXvyul2NG5mxvJmPyu7nmwHcpzORicQf3FufaUIsZ9ZkAvL8spJnBanBm+9IfBePZ
	 y3Tamh0evIfK6KuxMBo+Pxvsa3u6Y5FkRsA6E8UAsE0ZDQe4Ka/OQVrEDyc4hi0y5e
	 BXHyxop6nry/g==
Date: Wed, 24 May 2023 18:18:59 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <Luca.Fancellu@arm.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] xen/misra: diff-report.py: add report patching
 feature
In-Reply-To: <CA6576E0-E49F-4E36-9363-CEB23B508DCE@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305241818410.44000@ubuntu-linux-20-04-desktop>
References: <20230519094613.2134153-1-luca.fancellu@arm.com> <20230519094613.2134153-3-luca.fancellu@arm.com> <CA6576E0-E49F-4E36-9363-CEB23B508DCE@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-2005679444-1684977543=:44000"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-2005679444-1684977543=:44000
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 19 May 2023, Luca Fancellu wrote:
> > On 19 May 2023, at 10:46, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> > 
> > Add a feature to the diff-report.py script that improves the comparison
> > between two analysis reports, one from a baseline codebase and the
> > other from the codebase with changes applied on top of the baseline.
> > 
> > Comparing reports from different codebases is an issue because entries
> > in the baseline may have shifted in position due to the addition or
> > deletion of unrelated lines, or may have disappeared because the line
> > of interest was deleted, making the comparison between two revisions
> > of the code harder.
> > 
> > Given a baseline report, a report of the codebase with the changes
> > (called the "new report") and a git diff format file that describes
> > the changes applied to the code since the baseline, this feature can
> > work out which entries of the baseline report were deleted or shifted
> > in position by changes to unrelated lines, and can rewrite them as
> > they will appear in the "new report".
> > 
> > With the "patched baseline" and the "new report" in hand, it is then
> > simple to diff the two and print only the entries that are new.
> > 
> > Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> > ---
> > Changes from v1:
> > - Made the script compatible with python2 (Stefano)
> > ---
> > xen/scripts/diff-report.py                    |  55 ++++-
> > xen/scripts/xen_analysis/diff_tool/debug.py   |  21 ++
> > xen/scripts/xen_analysis/diff_tool/report.py  |  87 +++++++
> > .../diff_tool/unified_format_parser.py        | 232 ++++++++++++++++++
> > 4 files changed, 393 insertions(+), 2 deletions(-)
> > create mode 100644 xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
> > 
> > diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
> > index f97cb2355cc3..d608e3a05aa1 100755
> > --- a/xen/scripts/diff-report.py
> > +++ b/xen/scripts/diff-report.py
> > @@ -7,6 +7,10 @@ from argparse import ArgumentParser
> > from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
> > from xen_analysis.diff_tool.debug import Debug
> > from xen_analysis.diff_tool.report import ReportError
> > +from xen_analysis.diff_tool.unified_format_parser import \
> > +    (UnifiedFormatParser, UnifiedFormatParseError)
> > +from xen_analysis.settings import repo_dir
> > +from xen_analysis.utils import invoke_command
> > 
> > 
> > def log_info(text, end='\n'):
> > @@ -36,9 +40,32 @@ def main(argv):
> >                              "against the baseline.")
> >     parser.add_argument("-v", "--verbose", action='store_true',
> >                         help="Print more information during the run.")
> > +    parser.add_argument("--patch", type=str,
> > +                        help="The patch file containing the changes to the "
> > +                             "code, from the baseline analysis result to the "
> > +                             "'check report' analysis result.\n"
> > +                             "Do not use with --baseline-rev/--report-rev")
> > +    parser.add_argument("--baseline-rev", type=str,
> > +                        help="Revision or SHA of the codebase analysed to "
> > +                             "create the baseline report.\n"
> > +                             "Use together with --report-rev")
> > +    parser.add_argument("--report-rev", type=str,
> > +                        help="Revision or SHA of the codebase analysed to "
> > +                             "create the 'check report'.\n"
> > +                             "Use together with --baseline-rev")
> > 
> >     args = parser.parse_args()
> > 
> > +    if args.patch and (args.baseline_rev or args.report_rev):
> > +        print("ERROR: '--patch' argument can't be used with '--baseline-rev'"
> > +              " or '--report-rev'.")
> > +        sys.exit(1)
> > +
> > +    if bool(args.baseline_rev) != bool(args.report_rev):
> > +        print("ERROR: '--baseline-rev' must be used together with "
> > +              "'--report-rev'.")
> > +        sys.exit(1)
> > +
> >     if args.out == "stdout":
> >         file_out = sys.stdout
> >     else:
> > @@ -63,11 +90,35 @@ def main(argv):
> >         new_rep.parse()
> >         debug.debug_print_parsed_report(new_rep)
> >         log_info(" [OK]")
> > -    except ReportError as e:
> > +        diff_source = None
> > +        if args.patch:
> > +            diff_source = os.path.realpath(args.patch)
> > +        elif args.baseline_rev:
> > +            git_diff = invoke_command(
> > +                "git diff --git-dir={} -C -C {}..{}".format(repo_dir,
> > +                                                            args.baseline_rev,
> > +                                                            args.report_rev),
> > +                True, "Error occurred invoking:\n{}\n\n{}"
> > +            )
> 
> I’ve now noticed an issue here when using --baseline-rev/--report-rev; the fix is this one:
> 
> diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
> index d608e3a05aa1..636f98f5eebe 100755
> --- a/xen/scripts/diff-report.py
> +++ b/xen/scripts/diff-report.py
> @@ -95,9 +95,8 @@ def main(argv):
>              diff_source = os.path.realpath(args.patch)
>          elif args.baseline_rev:
>              git_diff = invoke_command(
> -                "git diff --git-dir={} -C -C {}..{}".format(repo_dir,
> -                                                            args.baseline_rev,
> -                                                            args.report_rev),
> +                "git --git-dir={}/.git diff -C -C {}..{}"
> +                .format(repo_dir, args.baseline_rev, args.report_rev),
>                  True, "Error occurred invoking:\n{}\n\n{}"
>              )
>              diff_source = git_diff.splitlines(keepends=True)
> 
> I’ll wait for other feedback on the patch before sending it again.

With this change:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
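
For reference while reading the commit message (the new
unified_format_parser.py is not quoted in this thread), the
baseline-patching idea reduces to shifting each baseline line number by the
net growth of every diff hunk that precedes it. A hypothetical sketch under
that assumption, with invented names:

```python
def shift_line(line, hunks):
    """Map a baseline line number to its position after the diff.

    hunks is a list of (old_start, old_len, new_start, new_len) tuples,
    e.g. parsed from unified-diff headers such as "@@ -12,3 +14,5 @@",
    ordered by old_start.  Returns None when the line falls inside a
    changed hunk, i.e. the baseline entry may no longer exist in the
    new revision.
    """
    offset = 0
    for old_start, old_len, new_start, new_len in hunks:
        if line >= old_start + old_len:
            # Entirely past this hunk: accumulate its net growth.
            offset += new_len - old_len
        elif line >= old_start:
            # Inside a changed region: the entry may be gone.
            return None
    return line + offset
```

With a table like this per file, every surviving baseline entry can be
rewritten to its new position before the report subtraction runs.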
--8323329-2005679444-1684977543=:44000--


From xen-devel-bounces@lists.xenproject.org Thu May 25 02:01:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 02:01:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539283.840006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q20IB-0008Hp-Gp; Thu, 25 May 2023 02:01:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539283.840006; Thu, 25 May 2023 02:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q20IB-0008Hh-BW; Thu, 25 May 2023 02:01:27 +0000
Received: by outflank-mailman (input) for mailman id 539283;
 Thu, 25 May 2023 02:01:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q20IA-0008HX-4k; Thu, 25 May 2023 02:01:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q20I9-0006x1-RR; Thu, 25 May 2023 02:01:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q20I9-00077u-6I; Thu, 25 May 2023 02:01:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q20I9-0004qL-5t; Thu, 25 May 2023 02:01:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+nCqb+VgCI4lzs5aD50mNGbVJiP8RUUir7WUCotTUxI=; b=Yj+hXpD6h5DuEunAPq4hUb/u18
	tSN7q+naa4Qt4QEqmGi3IT129KBE4jKc/bOiPPxEfR1pcmtaz2YR3STl5wQ/rgX9U8ZAne/omYtfp
	Kqyw8RK+1A14gdx4/qdbMkU/yUo6lxAy+lf9il4P6hkv6ayrkDfRYy5Six4q0T5v4tWA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180933-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180933: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=1c12355b31046a6b35a4f50c85c4f01afb1bd728
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 02:01:25 +0000

flight 180933 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180933/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                1c12355b31046a6b35a4f50c85c4f01afb1bd728
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    7 days
Failing since        180699  2023-05-18 07:21:24 Z    6 days   28 attempts
Testing same since   180927  2023-05-24 08:31:13 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 6708 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 25 02:37:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 02:37:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539294.840028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q20qr-0003ad-Dc; Thu, 25 May 2023 02:37:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539294.840028; Thu, 25 May 2023 02:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q20qr-0003aW-9n; Thu, 25 May 2023 02:37:17 +0000
Received: by outflank-mailman (input) for mailman id 539294;
 Thu, 25 May 2023 02:37:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q20qp-0003aM-OL; Thu, 25 May 2023 02:37:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q20qp-0007gP-Lt; Thu, 25 May 2023 02:37:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q20qp-0008JW-74; Thu, 25 May 2023 02:37:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q20qp-0003So-6X; Thu, 25 May 2023 02:37:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3zC0U64NdyiP9AmChQlgQvtFDyzQtWjvfmPMHb43ybk=; b=Di9FfWlge21dEGGdjZMwrM6tNo
	o501sDkUQYGxsbpQmOkyFOafp7ncYmusWKNVhEo9yp2HmyBdDBPk5bSDpEaant6PcA/cURFxyrtfT
	/P1iVw62bQ4a53MyfTQUIoBedW40K+4iprAwanB5BSb0+7QxdZEqOgcVL206ApPpgMUU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180930-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180930: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=380c6c170393c48852d4f2b1ea97125a399cfc61
X-Osstest-Versions-That:
    xen=c7908869ac26961a3919491705e521179ad3fc0e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 02:37:15 +0000

flight 180930 xen-unstable real [real]
flight 180935 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180930/
http://logs.test-lab.xenproject.org/osstest/logs/180935/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail pass in 180935-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180922
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180922
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180922
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180922
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180922
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180922
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180922
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180922
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180922
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180922
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180922
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180922
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  380c6c170393c48852d4f2b1ea97125a399cfc61
baseline version:
 xen                  c7908869ac26961a3919491705e521179ad3fc0e

Last test of basis   180922  2023-05-24 01:54:52 Z    1 days
Testing same since   180930  2023-05-24 17:38:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c7908869ac..380c6c1703  380c6c170393c48852d4f2b1ea97125a399cfc61 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 25 04:24:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 04:24:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539300.840037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q22WU-0006FV-Tu; Thu, 25 May 2023 04:24:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539300.840037; Thu, 25 May 2023 04:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q22WU-0006FO-RE; Thu, 25 May 2023 04:24:22 +0000
Received: by outflank-mailman (input) for mailman id 539300;
 Thu, 25 May 2023 04:24:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q22WT-0006FE-RN; Thu, 25 May 2023 04:24:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q22WT-0001l6-MS; Thu, 25 May 2023 04:24:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q22WT-0004Vm-BU; Thu, 25 May 2023 04:24:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q22WT-0006VO-B4; Thu, 25 May 2023 04:24:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P1C7eNtXuKSGPEbizVovXvxUyF3hvTU/vIP8c3f+Jmk=; b=5cTZL5XuUABAUnd7XUFSYdI92a
	G0ew2+CijMMckFwevg1XPcnXI1JIPQVmfZnMiE76o5hGSWnIyGrMC1Kc1HYPGKl/4GR0kLEFvC1nH
	dVj6F/HCaeDAbI+Ed0QBgYaK58FoTpBoh8y2P/c9u/Vlna/i/yMMuf5d6dRjuiIVlcPQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180936-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180936: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cca2361947b3c9851b3ded6e43cc48caf5258eee
X-Osstest-Versions-That:
    xen=511b9f286c3dadd041e0d90beeff7d47c9bf3b7a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 04:24:21 +0000

flight 180936 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180936/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cca2361947b3c9851b3ded6e43cc48caf5258eee
baseline version:
 xen                  511b9f286c3dadd041e0d90beeff7d47c9bf3b7a

Last test of basis   180931  2023-05-24 20:01:55 Z    0 days
Testing same since   180936  2023-05-25 02:02:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   511b9f286c..cca2361947  cca2361947b3c9851b3ded6e43cc48caf5258eee -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 25 05:11:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 05:11:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539306.840047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q23GT-0003cX-F6; Thu, 25 May 2023 05:11:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539306.840047; Thu, 25 May 2023 05:11:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q23GT-0003cQ-Cd; Thu, 25 May 2023 05:11:53 +0000
Received: by outflank-mailman (input) for mailman id 539306;
 Thu, 25 May 2023 05:11:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SX5k=BO=kernel.org=kuba@srs-se1.protection.inumbo.net>)
 id 1q23GS-0003cK-9Y
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 05:11:52 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a72d56ee-faba-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 07:11:50 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id EC5D663FA3;
 Thu, 25 May 2023 05:11:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 20665C4339B;
 Thu, 25 May 2023 05:11:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a72d56ee-faba-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684991508;
	bh=a0tzE9UDpxf2Gv03EAySk423E9WJbttajURADQK6RpY=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=E+prJKSG6S1YLfI14dLgGo1xJFb3uRkM/dPsvsquftreGTEc4mdpGqCTE7w1vcjIj
	 vXMGcd2WHPPBm1xLHJDAP249/W6Mjfg3E1wmwyNnriWw5w6b2O48gkDtDkdrifnm1o
	 dVEl9xgtqcSrvqF9PsN5b4phTeSzfIA8mVgdMgaaj9Ww8A1OtUxS2OZhLGE9GyqD9P
	 c/H04SuRMQyVKw3raKrejRwKGEX+ueVginsBjR+mXRD1JokoUZucIdWlb6z37Szt2f
	 we9Y9GD0a5ZcpY+bQ+CiF+F4ncxe/vM8aRqCaCPwGGjMNcIc6pNuAtIvQ5g91kxp2P
	 a4yZbJWlTgikA==
Date: Wed, 24 May 2023 22:11:47 -0700
From: Jakub Kicinski <kuba@kernel.org>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org
Subject: Re: [PATCH] xen/netback: Pass (void *) to virt_to_page()
Message-ID: <20230524221147.5791ba3a@kernel.org>
In-Reply-To: <20230523140342.2672713-1-linus.walleij@linaro.org>
References: <20230523140342.2672713-1-linus.walleij@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue, 23 May 2023 16:03:42 +0200 Linus Walleij wrote:
> virt_to_page() takes a virtual address as argument but
> the driver passes an unsigned long, which works because
> the target platform(s) uses polymorphic macros to calculate
> the page.
> 
> Since many architectures implement virt_to_pfn() as
> a macro, this function becomes polymorphic and accepts both a
> (unsigned long) and a (void *).
> 
> Fix this up by an explicit (void *) cast.

Paul, Wei, looks like netdev may be the usual path for this patch
to flow through, although I'm never 100% sure with Xen.
Please ack or LMK if you prefer to direct the patch elsewhere?


From xen-devel-bounces@lists.xenproject.org Thu May 25 05:12:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 05:12:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539310.840058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q23HQ-00047s-OW; Thu, 25 May 2023 05:12:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539310.840058; Thu, 25 May 2023 05:12:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q23HQ-00047l-LO; Thu, 25 May 2023 05:12:52 +0000
Received: by outflank-mailman (input) for mailman id 539310;
 Thu, 25 May 2023 05:12:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SX5k=BO=kernel.org=kuba@srs-se1.protection.inumbo.net>)
 id 1q23HP-000478-MB
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 05:12:51 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cb491fa4-faba-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 07:12:51 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B25AB63FA3;
 Thu, 25 May 2023 05:12:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D54B4C433EF;
 Thu, 25 May 2023 05:12:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb491fa4-faba-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1684991569;
	bh=gx6rfY8zkLlGyaW9DzoBbw69iftNsi3UHSSsrPAeD1o=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=CZGzlNOxbgpVc1jwE5hJ9N25DOVM2hHKBkE10Fx8i/mMokENlE+s1WXgpJlAIP2zE
	 AV3a6r5qQWo1+V2e+WUPc5i0JqACbO3iVpEWbm6EOsInBQitUVTlPrO5nkzBtpp3gm
	 EMwHARxeNvvDR8FNYkJkxr56fJgaElTTt+cm1ejbnGGXiM8kTprHe20/b0zVsZKlrq
	 RUBI1LwUpsT7Iqt4cOryXHpZ9RHE0kTOb++OAPVdO9AWLkCtmpnXrb1ogiY76AAX3y
	 P5AZwPFdO++llcitqp9a9zDseEXr54y31KCae+z1boirxvyIHacGuk1mjY9kQsuNSg
	 +Ad/B5k5e3UWg==
Date: Wed, 24 May 2023 22:12:47 -0700
From: Jakub Kicinski <kuba@kernel.org>
To: Linus Walleij <linus.walleij@linaro.org>
Cc: Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org
Subject: Re: [PATCH] xen/netback: Pass (void *) to virt_to_page()
Message-ID: <20230524221247.1dc731a8@kernel.org>
In-Reply-To: <20230524221147.5791ba3a@kernel.org>
References: <20230523140342.2672713-1-linus.walleij@linaro.org>
	<20230524221147.5791ba3a@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Wed, 24 May 2023 22:11:47 -0700 Jakub Kicinski wrote:
> On Tue, 23 May 2023 16:03:42 +0200 Linus Walleij wrote:
> > virt_to_page() takes a virtual address as argument but
> > the driver passes an unsigned long, which works because
> > the target platform(s) uses polymorphic macros to calculate
> > the page.
> > 
> > Since many architectures implement virt_to_pfn() as
> > a macro, this function becomes polymorphic and accepts both a
> > (unsigned long) and a (void *).
> > 
> > Fix this up by an explicit (void *) cast.  
> 
> Paul, Wei, looks like netdev may be the usual path for this patch
> to flow through, although I'm never 100% sure with Xen.
> Please ack or LMK if you prefer to direct the patch elsewhere?

Ugh, Wei already acked this, sorry for the noise.


From xen-devel-bounces@lists.xenproject.org Thu May 25 07:23:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 07:23:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539316.840068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q25Jv-0000Tr-UT; Thu, 25 May 2023 07:23:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539316.840068; Thu, 25 May 2023 07:23:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q25Jv-0000Tk-RA; Thu, 25 May 2023 07:23:35 +0000
Received: by outflank-mailman (input) for mailman id 539316;
 Thu, 25 May 2023 07:23:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q25Ju-0000Te-79
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 07:23:34 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0610.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0be6e153-facd-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 09:23:31 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8576.eurprd04.prod.outlook.com (2603:10a6:102:217::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 07:23:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 07:23:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0be6e153-facd-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TfvEw/zzypCqkc5w8togE7vAAxNdOWTTfB55yNlwMUfhXSR4n8f6R+36NV4fi22quGv/Y5B1vbOQ1NJttOeIAJsnJ4r1LSiGUv2ozlbapn8OWQba7OJ1v944MMC+vLcDTGKNjx1ZbQg7bPuyVYvxE8WO6PBTENtlTXrcjtVdmM2Qgb+R8H2e16yMKfLzSlnUy4Nwkka6BR/0njWc0BUoKWR+r7FnYjqQByZToPSpwmFSjKbCDA1HsReQiWdPu94otHBL1chfsvdKCccQ0w3/Y5YmXcscidNgtBkrLEB3pOaOR2/kpXP9XYoSj4YXIS00lzxI2fGibLCbEXdIZII4Xg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=s86RG9AS8FGHvqLTvKEg/XH5nVrjdm1UbFwo7Q5Z18w=;
 b=bnJCGtwdt4132YGkFAZ3zMCUKwyQ0NP/f/Gqdx8Zr5pz1DOsqUv1t7pqIOMmY85u57U075M74mmUAb5p5GRtIjihsoTc1I6bxAaQN7V0/XwWe2PJzDAezcI+oTVXYqwifcNDEw2vg6Z6YAaixZNqF+JyitFJQMq1+A20Mn753bfI0i1B20y5RWWos0J9Mw63kcZgUcUL/f3gOAHCBgsBDJZJlbr6X/roCs0UtPncn/EqzWsGtgZppKXcZmceev8TiDlYws+WxuY4Zy6SMrhcbREU6vOpvPTlG7SARJDz5XNHzlOR+dOLpg02N0yD6TFCxmTMvM4+eBEG/9WRNfTEfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=s86RG9AS8FGHvqLTvKEg/XH5nVrjdm1UbFwo7Q5Z18w=;
 b=U206SHHYCwJK3Z3fAWnTab2wP3virKcXpb4yK7chR4OB9gcap1M3ypy073Pi8jPUzyindQr0VCNnznRwubvWznc0MsH1x5fdiy4MLwfDVVMe51g88jtz3SU5pCB12G5Rfv5ULRIRtvSlyMnULrTnHrnDfUsF8jJkXTUwBtIk6aVsuCHPux3pObj3UDJy+wtMMi59fRlpHmdZVWIWc1YImsh9pvrzHHJ+WdfoaexlhpSkGzxYAjgX3nzs8FLv6rpZ31rTfHK15249jAVFjL50chxQoXgCDl5/auk5j9PjV5vtgzUxk2BD7kR7WaS6Sdn5gZvBUVFVET+7y38cci7TcQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com>
Date: Thu, 25 May 2023 09:23:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4dmJuzNVUE5UIY@Air-de-Roger>
 <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
 <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0115.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8576:EE_
X-MS-Office365-Filtering-Correlation-Id: 645d6067-49af-4a4a-17da-08db5cf0ee9f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 645d6067-49af-4a4a-17da-08db5cf0ee9f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 07:23:27.0067
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9bNCuAZjQhSb3hr4/BRKCUlokqKFFzilmluTsnkyN/YCZHuZCk7ZIiK1h/0+iv2z3vzz4zf4RteFatYil3ealg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8576

On 25.05.2023 01:37, Stefano Stabellini wrote:
> On Wed, 24 May 2023, Jan Beulich wrote:
>>>> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
>>>>      modify_bars() to consistently respect BARs of hidden devices while
>>>>      setting up "normal" ones (i.e. to avoid as much as possible the
>>>>      "continue" path introduced here), setting up of the former may want
>>>>      doing first.
>>>
>>> But BARs of hidden devices should be mapped into dom0 physmap?
>>
>> Yes.
> 
> The BARs would be mapped read-only (not read-write), right? Otherwise we
> let dom0 access devices that belong to Xen, which doesn't seem like a
> good idea.
> 
> But even if we map the BARs read-only, what is the benefit of mapping
> them to Dom0? If Dom0 loads a driver for it and the driver wants to
> initialize the device, the driver will crash because the MMIO region is
> read-only instead of read-write, right?
> 
> How does this device hiding work for dom0? How does dom0 know not to
> access a device that is present on the PCI bus but is used by Xen?

None of these are new questions - this has all been this way for PV Dom0,
and so far we've limped along quite okay. That's not to say that we
shouldn't improve things if we can, but that first requires ideas as to
how.

> The reason why I was suggesting to hide the device completely in the
> past was that I assumed we had to hide the device (make it "disappear"
> on the PCI bus) otherwise Dom0 would access it. Is there another way to
> mark a PCI devices as "inaccessible" or "disabled"?

I'm not aware of any.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 07:35:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 07:35:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539320.840077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q25VU-00020z-0H; Thu, 25 May 2023 07:35:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539320.840077; Thu, 25 May 2023 07:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q25VT-00020s-Tp; Thu, 25 May 2023 07:35:31 +0000
Received: by outflank-mailman (input) for mailman id 539320;
 Thu, 25 May 2023 07:35:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/c8s=BO=citrix.com=prvs=5022a0095=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q25VS-00020m-0u
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 07:35:30 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b74ee2f8-face-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 09:35:28 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 03:35:12 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA2PR03MB5834.namprd03.prod.outlook.com (2603:10b6:806:f8::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 07:35:08 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Thu, 25 May 2023
 07:35:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b74ee2f8-face-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685000127;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=ZT61XllAzQmDrjsQNS1CD2/g6lwbcPG7bOk3Qi1DXG0=;
  b=iXyNIMRMEvHRe5NFY9S94MtZ4cs0PdDFoT1D3H4wyAVz7XszjFaPEdbo
   b/gyKKvnkiQQp4gm0tUyr8TwcEagKlPMJkCVEyACsFyg97v6AgNVxsrFx
   Vzrbl78HohJFajyUDy0m5OIYel3nn2hr/JILCJn2t3CY4WLnrvY36AcB1
   M=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 109093415
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:sB56fW5EfeD0FHbGXdss609LN+olNU/h7VzCH2GdAFZMFKOzYArF
X-Talos-MUID: =?us-ascii?q?9a23=3AILAEhw850QiDMOb+eFs6MVOQf/lmwJmlFx8crZR?=
 =?us-ascii?q?FpcSlHBApAxuhrzviFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,190,1681185600"; 
   d="scan'208";a="109093415"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fG0kSGpzqJfdHQM91nVG+ZjejKvEVAYF9GLlMZU2sselI+eqNnno3tmR4J5d9q0T4M/IrGM/wzrZAclBnQoXyokuSgwyi4VriLGis3wtTdLjp8h+0ASnqR6OhhON545jKIcfNz7nLMnK293bcp4PJ56vfH4cFYJ7BRRrG5rKf8xIi+GyH5PIkuH+/S6SDrbvjLrEGW3qEQQX3Ai1nVFiwy9Cb+XiVPQCn4ZEt4KoCNQk+b8KaN1UAW3fXV3mgEPTYCCealjggAWrH8BIJHhh4ZUvqO65Jj2f3MVB+YbM/qTDMBmjYQeDfi3dSdqsLSMsypNBJx278puTcF7XOb0kyw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JMTB9KDiY/p0p9NLr52mvTizmfg6ZizPVRJTlPx83uo=;
 b=ICSQb0qRWUFMm7dGF8jEfsZz9kpLLMvXbyZXY796qt99m7x2BA6/Q2D2nKUj0w+2hhKcJVsYqJG8IAloHnFKvDfS14bu9ym8mCwmpUvIK7ufMmre+DZrGs3alhFYQypFbnOJo8hrpGvge9tTGFWJyp7MVYxORRHj8aCBSIxHQUHqBi4B/vmXsQ1R/yAKFEjZ7FNMT8Zzmz+chK97+UxWru9NroI8vi1hnfxNRTu11irl6l1+fUFkec/qJ0NXaVDuUh2mEQTa+4o7vKU5uCurrIremg5cfo/s79U0OQAaKd+1wsFSSxMOduFVwnYXFtDIve0a9tIXVkzEFRU5MikW/w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JMTB9KDiY/p0p9NLr52mvTizmfg6ZizPVRJTlPx83uo=;
 b=krW8gnfRdV64VO5YrqkIHg0JdSpgQ73MKGLMrSspC6q0Qnv00nqsuEqgKlcaFQtLdTPCmzvmz1BN/oac0/346QEGJXylazmFGi2f0fl/eY/UAJJNf5MOueIy+Yrh9affBlTcHRJv+umwn+tG1M5vOvhtVZlS9/Ltuhmyv5Viq6A=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 25 May 2023 09:35:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Message-ID: <ZG8PpDlKIy9hiGqg@Air-de-Roger>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4dmJuzNVUE5UIY@Air-de-Roger>
 <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
 <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop>
X-ClientProxiedBy: LO2P265CA0211.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9e::31) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA2PR03MB5834:EE_
X-MS-Office365-Filtering-Correlation-Id: 1336e5da-4a28-4f86-8533-08db5cf28fe4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1336e5da-4a28-4f86-8533-08db5cf28fe4
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 07:35:07.6599
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lbdMoReB8qLDUiQAWr2Af0yUFb3Fqk605J7rJlalMuEkhxstpGK+Ksv8LRw/SOl4Boh0kBaFyRqcF0h4snOj7Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA2PR03MB5834

On Wed, May 24, 2023 at 04:37:42PM -0700, Stefano Stabellini wrote:
> On Wed, 24 May 2023, Jan Beulich wrote:
> > >> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
> > >>      modify_bars() to consistently respect BARs of hidden devices while
> > >>      setting up "normal" ones (i.e. to avoid as much as possible the
> > >>      "continue" path introduced here), setting up of the former may want
> > >>      doing first.
> > > 
> > > But BARs of hidden devices should be mapped into dom0 physmap?
> > 
> > Yes.
> 
> The BARs would be mapped read-only (not read-write), right? Otherwise we
> let dom0 access devices that belong to Xen, which doesn't seem like a
> good idea.

It's my understanding that Xen will mark any regions of the BARs
in-use by the hypervisor as read-only, but the rest will be writable.

> But even if we map the BARs read-only, what is the benefit of mapping
> them to Dom0? If Dom0 loads a driver for it and the driver wants to
> initialize the device, the driver will crash because the MMIO region is
> read-only instead of read-write, right?

I think USB is a good example: Xen uses the debug functionality of
EHCI/XHCI, but marking the device as hidden allows dom0 to also make
use of any remaining USB ports.

For r/o devices I don't see much point in mapping the BARs into dom0:
dom0 is not allowed to size them in the first place, so it would be
able to detect the BAR position, but not the BAR size.  I guess this
could cause issues in the future if dom0 starts repositioning BARs,
but let's leave that aside.

> How does this device hiding work for dom0? How does dom0 know not to
> access a device that is present on the PCI bus but is used by Xen?

I think the objective for hidden devices is to allow dom0 to use the
device, while Xen protects any MMIO region it doesn't want dom0 to
modify.

> The reason why I was suggesting to hide the device completely in the
> past was that I assumed we had to hide the device (make it "disappear"
> on the PCI bus) otherwise Dom0 would access it. Is there another way to
> mark a PCI devices as "inaccessible" or "disabled"?

I'm not aware of anything else, short of using stuff like the ACPI
STAO and reporting disabled devices there, in addition to marking
their config space as r/o.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 25 07:45:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 07:45:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539324.840088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q25f7-0003W7-U6; Thu, 25 May 2023 07:45:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539324.840088; Thu, 25 May 2023 07:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q25f7-0003W0-R4; Thu, 25 May 2023 07:45:29 +0000
Received: by outflank-mailman (input) for mailman id 539324;
 Thu, 25 May 2023 07:45:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/c8s=BO=citrix.com=prvs=5022a0095=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q25f7-0003Vu-2d
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 07:45:29 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1c241a21-fad0-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 09:45:27 +0200 (CEST)
Received: from mail-mw2nam04lp2168.outbound.protection.outlook.com (HELO
 NAM04-MW2-obe.outbound.protection.outlook.com) ([104.47.73.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 03:45:23 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by LV8PR03MB7424.namprd03.prod.outlook.com (2603:10b6:408:192::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.14; Thu, 25 May
 2023 07:45:20 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Thu, 25 May 2023
 07:45:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c241a21-fad0-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685000727;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=BA5WCsCObzoYz5f0dsxhE66miKi5UEwOUHsoAw062Bg=;
  b=CkRVCBFQg7ZlkmlIFtQCGQlwFSC1tS8SV3LQodQ6FugpnPxHHbcq5wVV
   KfdIgqy285CahA1xKMC+aPF/ICqR1AtjycnotewVFOrvk6TJAZauNHBQ4
   nWfdau/lU1B1EeVo0e5Ly1AZjm3pwevKpD8w4pgjX8aiqmy8LA1Wxtcx/
   s=;
X-IronPort-RemoteIP: 104.47.73.168
X-IronPort-MID: 110217133
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:AJ7SpmOoTDQTGu5DHzR9rV4tBecebUbak27dBkuoUmxlR+jA
X-Talos-MUID: =?us-ascii?q?9a23=3AfhoEoQxWOd0VeqawgYwnvrzwUEWaqLqCGmcqn7g?=
 =?us-ascii?q?tgJKnHCleNQzHqDKoc4Byfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,190,1681185600"; 
   d="scan'208";a="110217133"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WYtzxjn+y3dz/Nou8ba4tS8VnqlEIPt3swNz7ZU4YG2pVCXydRmgxWahRPo4ec2yF93sSaiGIs94HccxDo6XUB+HXrP1h1sLBbTqt+1kqALIA/IANI86+ZrbXPwCobvBOfOF9pkj63qEAl3V3nu/HSCuulDYIwb4D7pxxvjzGZGYTEOo6qtgotZ01W6m1PouoLqfXVdBW1VFgx7VnOdcvsllRmG7gimdIh0xVGxyVUSmZHGgrqBTIHNDIO6QQf60deTUjqAEfNuRHI/D1rWqLMc2OKpCRvKFjqBkVNHsVAdjX+GhSia2CcI1bdbRl0j5gucRsCPU+3RrMvrYO1XiFA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EJvHevO7biyUOSdytZ8ksfwQqaBAWDYcktoOqdcqjiM=;
 b=Xhqc+2DfRkKXKiMV1ExxN2uNF7qwof3gt6nJSc6N88Eyj4SGymUR0RMZEaRkVy2vZF3nAVjmYhednBxUicfJrKpTJSVdkCk8UOpo8Vltf4mkuVXZp7fleYaWGWWagK5VcO16MfgKokHgWQSUPKblDVrkXOVY+raQBvL2/Q2i0ZptyocM1Im5xl2GnUM3cVDklxmXiLwNVr5GMPGBURJh2ytJN9VeYTRSFqy6k6yDYU6btPjhZGEt9IaFOwJgy8GUMkmVUb33e8xItBDcojARhpr4loib/YvahQ9Ij4ohDpMpdah4LWEHufKCo5eVooCQZ1yDhhRyTm+d1HXhobGBAA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EJvHevO7biyUOSdytZ8ksfwQqaBAWDYcktoOqdcqjiM=;
 b=Xi5qq/s4p5Etgv2wZ8ZNNzFQGmmY2+ctgp0cIk4COTcyUMY+Y0mluvL5W4CuPzOTqODMlruQjz8MS6xwHKWKKvghFsc5Ak3k8p/sLcreqelPtLKE071+s/yhqS5/ejLlIlAv92RKwpDI2Mh3dAnpaweVz0jgrHovLUx13tv2ZTM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 25 May 2023 09:45:14 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] iommu/vtd: fix address translation for superpages
Message-ID: <ZG8SCgaXxY07daEf@Air-de-Roger>
References: <20230524152208.18302-1-roger.pau@citrix.com>
 <2099b1b9-e3c3-aae2-351e-cbf067dc6ecc@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2099b1b9-e3c3-aae2-351e-cbf067dc6ecc@suse.com>
X-ClientProxiedBy: LO2P265CA0174.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a::18) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|LV8PR03MB7424:EE_
X-MS-Office365-Filtering-Correlation-Id: c61dd321-defc-406b-48ae-08db5cf3fd2f
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c61dd321-defc-406b-48ae-08db5cf3fd2f
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 07:45:20.0736
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV8PR03MB7424

On Wed, May 24, 2023 at 06:11:03PM +0200, Jan Beulich wrote:
> On 24.05.2023 17:22, Roger Pau Monne wrote:
> > When translating an address that falls inside a superpage in the
> > IOMMU page tables, the fetching of the PTE value wasn't masking off
> > the contiguous-related data, which caused the returned data to be
> > corrupted, as it would contain bits that the caller would interpret
> > as part of the address.
> > 
> > Fix this by masking off the contiguous bits.
> > 
> > Fixes: c71e55501a61 ('VT-d: have callers specify the target level for page table walks')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Just to clarify: The title says superpages and you also only deal with
> superpages. Yet in the earlier discussion I pointed out that the 4k-page
> case looks to also be flawed (I don't think anymore we iterate one too
> many times, but I'm pretty sure the r/w flags are missing in what we
> return to intel_iommu_lookup_page()). Did you convince yourself
> otherwise in the meantime? Or is that going to be a separate change
> (whether by you or someone else, like me)?

Gah, no. I did assert that the iterations are OK, but completely forgot
about the r/w bits.

> 
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -368,7 +368,7 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
> >                   * with the address adjusted to account for the residual of
> >                   * the walk.
> >                   */
> > -                pte_maddr = pte->val +
> > +                pte_maddr = (pte->val & ~DMA_PTE_CONTIG_MASK) +
> 
> While this addresses the problem at hand, wouldn't masking by PADDR_MASK
> be more forward compatible (for whenever another of the high bits gets
> used)?

Right, I've just masked with ~DMA_PTE_CONTIG_MASK as is done below when
splitting a superpage, but for the use case here it does make more
sense to use PADDR_MASK.
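For illustration, the effect of the wider mask can be sketched with made-up
bit positions (the real PADDR_MASK and DMA_PTE_CONTIG_MASK values live in
Xen's headers; everything below is an assumption for the sketch, not the
actual layout):

```python
# Assumed, illustrative PTE layout: low 12 bits hold flags, bits 52 and
# up hold software metadata such as the contiguous count.
PAGE_SHIFT = 12
PADDR_BITS = 52
PADDR_MASK = (1 << PADDR_BITS) - 1          # keeps only address bits
DMA_PTE_CONTIG_SHIFT = 52
DMA_PTE_CONTIG_MASK = 0xF << DMA_PTE_CONTIG_SHIFT

def pte_to_maddr(pte_val):
    # Mask off the high metadata bits *and* the low flag bits, so the
    # caller gets back a clean machine address.
    return pte_val & PADDR_MASK & ~((1 << PAGE_SHIFT) - 1)

# A PTE pointing at 0x200000 with flag bits set and a non-zero
# contiguous count in the high bits.
pte = 0x200000 | 0x3 | (0x9 << DMA_PTE_CONTIG_SHIFT)
assert pte & ~((1 << PAGE_SHIFT) - 1) != 0x200000  # high bits leak through
assert pte_to_maddr(pte) == 0x200000               # PADDR_MASK cleans them
```

Masking with ~DMA_PTE_CONTIG_MASK alone also works today; the point of
PADDR_MASK is that it stays correct if another high bit gains a meaning
later.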

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 25 08:08:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:08:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539334.840098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2619-0006bq-Ac; Thu, 25 May 2023 08:08:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539334.840098; Thu, 25 May 2023 08:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2619-0006bj-7u; Thu, 25 May 2023 08:08:15 +0000
Received: by outflank-mailman (input) for mailman id 539334;
 Thu, 25 May 2023 08:08:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zmzE=BO=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q2617-0006bZ-AF
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:08:13 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2062a.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4a78622e-fad3-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 10:08:11 +0200 (CEST)
Received: from DB7PR05CA0020.eurprd05.prod.outlook.com (2603:10a6:10:36::33)
 by GV1PR08MB8714.eurprd08.prod.outlook.com (2603:10a6:150:83::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Thu, 25 May
 2023 08:08:02 +0000
Received: from DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:36:cafe::bb) by DB7PR05CA0020.outlook.office365.com
 (2603:10a6:10:36::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16 via Frontend
 Transport; Thu, 25 May 2023 08:08:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT034.mail.protection.outlook.com (100.127.142.97) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.17 via Frontend Transport; Thu, 25 May 2023 08:08:02 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Thu, 25 May 2023 08:08:02 +0000
Received: from f18fa8b90ba2.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 27CBF12C-16D6-4BB6-A4FD-AC0854470D8E.1; 
 Thu, 25 May 2023 08:07:50 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f18fa8b90ba2.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 25 May 2023 08:07:50 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM9PR08MB6164.eurprd08.prod.outlook.com (2603:10a6:20b:287::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.27; Thu, 25 May
 2023 08:07:47 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Thu, 25 May 2023
 08:07:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a78622e-fad3-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qt09OIczyo/j7ElpWKbXC2KkxF2yoD+ctz0QI77c/lI=;
 b=wLVbzNdFwa/Sndst2i0wVDOYOGXh4mfwCZXsbDUIcIYYt4g/jTw+Udfug1/QQ+cvgVpCnjxxAeGKd8j3NEp2761fHk1ocsQFRYJKhOB5Ulo556qJ/7gsfpKPYGCvFTdGZOLxNqOeXSE894JC3zXMb94T1p5J2W4fk0VdOyNH2YM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5b494e79564c249c
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>, Michal Orzel <michal.orzel@amd.com>
Subject: Re: [PATCH 3/3] xen/misra: xen-analysis.py: Fix cppcheck report
 relative paths
Thread-Topic: [PATCH 3/3] xen/misra: xen-analysis.py: Fix cppcheck report
 relative paths
Thread-Index: AQHZijShF3eaMfwagUuZTgN2kgKHd69qMFGAgAB7NYA=
Date: Thu, 25 May 2023 08:07:47 +0000
Message-ID: <B9D3CF47-6CF9-45C3-9CC1-35B95ACFCC95@arm.com>
References: <20230519093019.2131896-1-luca.fancellu@arm.com>
 <20230519093019.2131896-4-luca.fancellu@arm.com>
 <alpine.DEB.2.22.394.2305241744170.44000@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2305241744170.44000@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM9PR08MB6164:EE_|DBAEUR03FT034:EE_|GV1PR08MB8714:EE_
X-MS-Office365-Filtering-Correlation-Id: e642a45e-4dcc-447c-c6b5-08db5cf7296c
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <0B291FC8C418E1449B346725431B0405@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6164
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	12459d8f-9f7d-4dfd-4576-08db5cf7207b
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 08:08:02.4573
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e642a45e-4dcc-447c-c6b5-08db5cf7296c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB8714

> On 25 May 2023, at 01:46, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Fri, 19 May 2023, Luca Fancellu wrote:
>> Fix the generation of the relative path from the repo, for cppcheck
>> reports, when the script is launching make with in-tree build.
>> 
>> Fixes: b046f7e37489 ("xen/misra: xen-analysis.py: use the relative path from the ...")
>> Reported-by: Michal Orzel <michal.orzel@amd.com>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> .../xen_analysis/cppcheck_report_utils.py     | 25 ++++++++++++++++---
>> 1 file changed, 21 insertions(+), 4 deletions(-)
>> 
>> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
>> index fdc299c7e029..10100f6c6a57 100644
>> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
>> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
>> @@ -1,6 +1,7 @@
>> #!/usr/bin/env python3
>> 
>> -import os
>> +import os, re
>> +from . import settings
>> from xml.etree import ElementTree
>> 
>> class CppcheckHTMLReportError(Exception):
>> @@ -101,12 +102,28 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>>             text_report_content = list(text_report_content)
>>             # Strip path from report lines
>>             for i in list(range(0, len(text_report_content))):
>> -                for path in strip_paths:
>> -                    text_report_content[i] = text_report_content[i].replace(
>> -                                                                path + "/", "")
>>                 # Split by : separator
>>                 text_report_content[i] = text_report_content[i].split(":")
>> 
>> +                for path in strip_paths:
>> +                    text_report_content[i][0] = \
>> +                        text_report_content[i][0].replace(path + "/", "")
> 
> Why moving this for loop after "Split by : separator"? If it is
> necessary, would it make sense to do it in the previous patch?

Hi Stefano,

In the previous patch I was fixing the bug, so I thought it was better not to
introduce other changes. Here I moved the loop after the split because this
way we take into account only the first element of the ':'-separated string,
which is the path + "(line,col)".

Before this patch it was OK to do the replace on the full string, because we
were just going to remove the path; now instead we use that path to check
whether the file actually exists.

> 
> 
>> +                # When the compilation is in-tree, the makefile places
>> +                # the directory in /xen/xen, making cppcheck produce
>> +                # relative path from there, so check if "xen/" is a prefix
>> +                # of the path and if it's not, check if it can be added to
>> +                # have a relative path from the repository instead of from
>> +                # /xen/xen
>> +                if not text_report_content[i][0].startswith("xen/"):
>> +                    # cppcheck first entry is in this format:
>> +                    # path/to/file(line,cols), remove (line,cols)
>> +                    cppcheck_file = re.sub(r'\(.*\)', '',
>> +                                           text_report_content[i][0])
>> +                    if os.path.isfile(settings.xen_dir + "/" + cppcheck_file):
>> +                        text_report_content[i][0] = \
>> +                            "xen/" + text_report_content[i][0]
>> +
>>             # sort alphabetically for second field (misra rule) and as second
>>             # criteria for the first field (file name)
>>             text_report_content.sort(key = lambda x: (x[1], x[0]))
>> -- 
>> 2.34.1
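Decoded, the hunk above normalizes the first ':'-separated field of each
cppcheck report line. Its logic can be sketched as a standalone function
(illustrative only: `normalize_report_line` is a hypothetical name, and the
`strip_paths`/`xen_dir` parameters stand in for the patch's `strip_paths`
and `settings.xen_dir`):

```python
import os
import re

def normalize_report_line(line, strip_paths, xen_dir):
    """Split a cppcheck text-report line on ':' and normalize the first
    field (path/to/file(line,col)) to be relative to the repository."""
    fields = line.split(":")
    # Strip any build-directory prefixes from the path field only.
    for path in strip_paths:
        fields[0] = fields[0].replace(path + "/", "")
    if not fields[0].startswith("xen/"):
        # Drop the trailing "(line,col)" before probing the filesystem.
        cppcheck_file = re.sub(r'\(.*\)', '', fields[0])
        if os.path.isfile(xen_dir + "/" + cppcheck_file):
            fields[0] = "xen/" + fields[0]
    return fields
```

With an in-tree build, a line like `/build/arch/traps.c(10,2): rule: msg`
with `strip_paths=["/build"]` has its first field reduced to
`arch/traps.c(10,2)`, and gains the `xen/` prefix only when
`arch/traps.c` actually exists under `xen_dir`.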


From xen-devel-bounces@lists.xenproject.org Thu May 25 08:08:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:08:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539336.840108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q261X-0006ye-J1; Thu, 25 May 2023 08:08:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539336.840108; Thu, 25 May 2023 08:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q261X-0006yX-G5; Thu, 25 May 2023 08:08:39 +0000
Received: by outflank-mailman (input) for mailman id 539336;
 Thu, 25 May 2023 08:08:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zmzE=BO=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q261V-0006xv-I5
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:08:37 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0630.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 575cb0d9-fad3-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 10:08:33 +0200 (CEST)
Received: from AM6P195CA0019.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:81::32)
 by DU2PR08MB7344.eurprd08.prod.outlook.com (2603:10a6:10:2f3::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 08:08:30 +0000
Received: from AM7EUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:81:cafe::49) by AM6P195CA0019.outlook.office365.com
 (2603:10a6:209:81::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Thu, 25 May 2023 08:08:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT052.mail.protection.outlook.com (100.127.140.214) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.16 via Frontend Transport; Thu, 25 May 2023 08:08:29 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Thu, 25 May 2023 08:08:29 +0000
Received: from c428a9cf5fc2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B86A06AF-D8EC-449F-A5F9-58F0CA247007.1; 
 Thu, 25 May 2023 08:08:22 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c428a9cf5fc2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 25 May 2023 08:08:22 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS2PR08MB8309.eurprd08.prod.outlook.com (2603:10a6:20b:554::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 08:08:20 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Thu, 25 May 2023
 08:08:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 575cb0d9-fad3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Au+jPYUDDeqPC+Ot0IjQKqfRxPc9X6pYFIZFJsySTNM=;
 b=OKhM/0jk5YZsofWIoJRln5Yw9nFJFjepxlNjqKsp33SMm9zDqFJTnrcO3a+vXhY2l5HcTJ34imyEyXjEI0TFFedSw4PQrEp9TE3wERx5VSDnCZk8k5WKbmlIpHA5hRO+bnWQSvieQu7Ws0n4+R2Vvem/QWkdmbOMAWTsuxGp1OQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: d1dc515e75afbecf
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 2/2] xen/misra: diff-report.py: add report patching
 feature
Thread-Topic: [PATCH 2/2] xen/misra: diff-report.py: add report patching
 feature
Thread-Index: AQHZijbbQbnxRobwCkSJohEa6eKRVa9hY3kAgAjV3YCAAHJTAA==
Date: Thu, 25 May 2023 08:08:20 +0000
Message-ID: <A3B0B9E6-1770-4159-8A92-6A213FD9BBBF@arm.com>
References: <20230519094613.2134153-1-luca.fancellu@arm.com>
 <20230519094613.2134153-3-luca.fancellu@arm.com>
 <CA6576E0-E49F-4E36-9363-CEB23B508DCE@arm.com>
 <alpine.DEB.2.22.394.2305241818410.44000@ubuntu-linux-20-04-desktop>
In-Reply-To:
 <alpine.DEB.2.22.394.2305241818410.44000@ubuntu-linux-20-04-desktop>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS2PR08MB8309:EE_|AM7EUR03FT052:EE_|DU2PR08MB7344:EE_
X-MS-Office365-Filtering-Correlation-Id: e58b2a48-f1f0-4af4-3a5a-08db5cf739c9
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <E3AC06F0A8DE794984F40F6B38E155C2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB8309
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	be507f4e-a6fb-4b2f-5e81-08db5cf73458
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 08:08:29.8598
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e58b2a48-f1f0-4af4-3a5a-08db5cf739c9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR08MB7344

>>>    if args.out == "stdout":
>>>        file_out = sys.stdout
>>>    else:
>>> @@ -63,11 +90,35 @@ def main(argv):
>>>        new_rep.parse()
>>>        debug.debug_print_parsed_report(new_rep)
>>>        log_info(" [OK]")
>>> -    except ReportError as e:
>>> +        diff_source = None
>>> +        if args.patch:
>>> +            diff_source = os.path.realpath(args.patch)
>>> +        elif args.baseline_rev:
>>> +            git_diff = invoke_command(
>>> +                "git diff --git-dir={} -C -C {}..{}".format(repo_dir,
>>> +                                                            args.baseline_rev,
>>> +                                                            args.report_rev),
>>> +                True, "Error occured invoking:\n{}\n\n{}"
>>> +            )
>> 
>> I've now noticed an issue here when using --baseline-rev/--report-rev; the fix is this one:
>> 
>> diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
>> index d608e3a05aa1..636f98f5eebe 100755
>> --- a/xen/scripts/diff-report.py
>> +++ b/xen/scripts/diff-report.py
>> @@ -95,9 +95,8 @@ def main(argv):
>>             diff_source = os.path.realpath(args.patch)
>>         elif args.baseline_rev:
>>             git_diff = invoke_command(
>> -                "git diff --git-dir={} -C -C {}..{}".format(repo_dir,
>> -                                                            args.baseline_rev,
>> -                                                            args.report_rev),
>> +                "git --git-dir={}/.git diff -C -C {}..{}"
>> +                .format(repo_dir, args.baseline_rev, args.report_rev),
>>                 True, "Error occured invoking:\n{}\n\n{}"
>>             )
>>             diff_source = git_diff.splitlines(keepends=True)
>> 
>> I'll wait for other feedback on the patch before sending it again.
> 
> With this change:
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> Tested-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you Stefano,

I will push the series with the fix and your tags.
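The fix discussed in this message works because `--git-dir` is an option of the `git` driver itself, not of the `diff` subcommand, so it must precede `diff` and point at the `.git` directory. A minimal sketch of the two command strings (the helper name and repo path below are hypothetical, not taken from xen/scripts/diff-report.py):

```python
def build_diff_command(repo_dir, baseline_rev, report_rev):
    """Build the broken and fixed git invocations discussed in the thread."""
    # Broken: `git diff` does not understand --git-dir, which is a
    # driver-level option that must come before the subcommand.
    broken = "git diff --git-dir={} -C -C {}..{}".format(
        repo_dir, baseline_rev, report_rev)
    # Fixed, matching the quoted patch: option before the subcommand,
    # pointing at the .git directory of the checkout.
    fixed = "git --git-dir={}/.git diff -C -C {}..{}".format(
        repo_dir, baseline_rev, report_rev)
    return broken, fixed


broken, fixed = build_diff_command("/src/xen", "v1", "v2")
print(fixed)  # git --git-dir=/src/xen/.git diff -C -C v1..v2
```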


From xen-devel-bounces@lists.xenproject.org Thu May 25 08:09:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:09:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539344.840118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q262C-0007gS-2s; Thu, 25 May 2023 08:09:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539344.840118; Thu, 25 May 2023 08:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q262B-0007gL-Vs; Thu, 25 May 2023 08:09:19 +0000
Received: by outflank-mailman (input) for mailman id 539344;
 Thu, 25 May 2023 08:09:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/c8s=BO=citrix.com=prvs=5022a0095=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q262B-0006xv-86
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:09:19 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6f946945-fad3-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 10:09:14 +0200 (CEST)
Received: from mail-bn7nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 04:09:12 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SJ0PR03MB6876.namprd03.prod.outlook.com (2603:10b6:a03:43b::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.14; Thu, 25 May
 2023 08:09:10 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Thu, 25 May 2023
 08:09:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f946945-fad3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685002156;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=OhpifKVdT3H0GGYml+BiKdiX37xk/3pT/fvPW7rjgxs=;
  b=Gl3PhYRxEbup0vxdS1Te/6eZxJKYzidU98ox5C4FGQB0hs7jNBCQmUcm
   +8sYvhyO6m284D2ZRLRZTvnI4KclOJ/nnbtK0NQbioW6cJtCf3uX32h1p
   EeydjxSdNumYOZmFDQnZcThqM61zlnd83STSUxVOg/2OnPXB6MaJxOEXy
   M=;
X-IronPort-RemoteIP: 104.47.70.102
X-IronPort-MID: 109097446
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,190,1681185600"; 
   d="scan'208";a="109097446"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UlsVx+vz8iEzY3emfUcaCeeXNrSn049ryPKxtlFAKgc=;
 b=L9ktaMuZd0wiJt6Pq8SyUgTbUZMGskDSxm2n8pRIMmBjJ6EIFqE1oFfSwDuxTQI5CcD0Fe6baOHqch6j+VbQlNqY3hFU8rB5NhdnCXZJFbaIm5fjjJWzvqK5DR/egy9bS8t6Y2CPfrd7L2lvvhUq3bDzVMuyRRfnPfZfXmNbewA=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v3] iommu/vtd: fix address translation for leaf entries
Date: Thu, 25 May 2023 10:08:59 +0200
Message-Id: <20230525080859.29859-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0173.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a::17) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SJ0PR03MB6876:EE_
X-MS-Office365-Filtering-Correlation-Id: 6117e259-897c-4f01-7c31-08db5cf7516f
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6117e259-897c-4f01-7c31-08db5cf7516f
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 08:09:09.8894
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: g5XxW4YokkKRwVGK+Dc3jUtZ1IIs/d2OcWFXo20GE+QExMinMTn0P6QvSZSHivITD+OsLPwtVw/16ZK1eXjDkQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6876

Fix two issues related to leaf address lookups in VT-d:

* When translating an address that falls inside a superpage in the
  IOMMU page tables, the fetching of the PTE value wasn't masking off
  the contiguous-count data, which caused the returned value to be
  corrupt, as it would contain bits that the caller would interpret
  as part of the address.

* When the requested leaf address wasn't mapped by a superpage the
  returned value wouldn't have any of the low 12 bits set, thus missing
  the permission bits expected by the caller.

Take the opportunity to also adjust the function comment to note that
when returning the full PTE the bits above PADDR_BITS are removed.

Fixes: c71e55501a61 ('VT-d: have callers specify the target level for page table walks')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Use PADDR_MASK.
 - Also fix the non-superpage case.

Changes since v1:
 - Return all the PTE bits except for the contiguous count ones.
---
 xen/drivers/passthrough/vtd/iommu.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 130a159cde07..0e3062c820f9 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -310,7 +310,7 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iommu, u8 bus)
  *   failure,
  * - for target > 0 the physical address of the page table holding the leaf
  *   PTE for the requested address,
- * - for target == 0 the full PTE.
+ * - for target == 0 the full PTE contents below PADDR_BITS limit.
  */
 static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
                                        unsigned int target,
@@ -368,7 +368,7 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
                  * with the address adjusted to account for the residual of
                  * the walk.
                  */
-                pte_maddr = pte->val +
+                pte_maddr = (pte->val & PADDR_MASK) +
                     (addr & ((1UL << level_to_offset_bits(level)) - 1) &
                      PAGE_MASK);
                 if ( !target )
@@ -413,7 +413,11 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
         }
 
         if ( --level == target )
+        {
+            if ( !target )
+                pte_maddr = pte->val & PADDR_MASK;
             break;
+        }
 
         unmap_vtd_domain_page(parent);
         parent = map_vtd_domain_page(pte_maddr);
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Thu May 25 08:16:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:16:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539349.840128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q268o-0000qq-R7; Thu, 25 May 2023 08:16:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539349.840128; Thu, 25 May 2023 08:16:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q268o-0000qj-Nu; Thu, 25 May 2023 08:16:10 +0000
Received: by outflank-mailman (input) for mailman id 539349;
 Thu, 25 May 2023 08:16:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zmzE=BO=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q268n-0000qb-Am
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:16:09 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2049.outbound.protection.outlook.com [40.107.7.49])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64b3350f-fad4-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 10:16:04 +0200 (CEST)
Received: from AM0PR02CA0140.eurprd02.prod.outlook.com (2603:10a6:20b:28d::7)
 by GV1PR08MB7804.eurprd08.prod.outlook.com (2603:10a6:150:5b::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 08:15:35 +0000
Received: from AM7EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:28d:cafe::ff) by AM0PR02CA0140.outlook.office365.com
 (2603:10a6:20b:28d::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17 via Frontend
 Transport; Thu, 25 May 2023 08:15:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT064.mail.protection.outlook.com (100.127.140.127) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.17 via Frontend Transport; Thu, 25 May 2023 08:15:35 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Thu, 25 May 2023 08:15:34 +0000
Received: from 796bfa392c9f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 520A2C61-F63C-4453-8C00-F756A70E7474.1; 
 Thu, 25 May 2023 08:15:23 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 796bfa392c9f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 25 May 2023 08:15:23 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB8205.eurprd08.prod.outlook.com (2603:10a6:10:3b9::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 08:15:19 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Thu, 25 May 2023
 08:15:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64b3350f-fad4-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HP5sJB4xUY/NHfvvEPBXJTj8QveLscNRQPhIjfj+vY0=;
 b=48MkGWw3wDscG5iZWlcdjBuzY/kksnO3NUounBGOsYbk7AHV82C2ucPwUt5HYxvX9nySFCYEfkOKGsp5W+7ztRMYtRap7Yu/sFAOR1p4BfWt863QurlJwFIm8rPTqk8XEnxsk3DSUC+KyT57gvcgyBLxShA0HC/1Sra4dQMqJbw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: f210bb6480b0639a
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HP5sJB4xUY/NHfvvEPBXJTj8QveLscNRQPhIjfj+vY0=;
 b=48MkGWw3wDscG5iZWlcdjBuzY/kksnO3NUounBGOsYbk7AHV82C2ucPwUt5HYxvX9nySFCYEfkOKGsp5W+7ztRMYtRap7Yu/sFAOR1p4BfWt863QurlJwFIm8rPTqk8XEnxsk3DSUC+KyT57gvcgyBLxShA0HC/1Sra4dQMqJbw=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Nick Rosbrook
	<rosbrookn@gmail.com>, Anthony PERARD <anthony.perard@citrix.com>, Juergen
 Gross <jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, =?iso-8859-1?Q?Marek_Marczykowski-G=F3recki?=
	<marmarek@invisiblethingslab.com>, Henry Wang <Henry.Wang@arm.com>, Community
 Manager <community.manager@xenproject.org>
Subject: Re: [PATCH v7 00/12] SVE feature for arm guests
Thread-Topic: [PATCH v7 00/12] SVE feature for arm guests
Thread-Index: AQHZjUpmzBq0o2ig+kKXA6VQLOswlq9qp3WA
Date: Thu, 25 May 2023 08:15:19 +0000
Message-ID: <D81DB0C7-F75A-4C41-9731-FC41DC6042E7@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
In-Reply-To: <20230523074326.3035745-1-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB8205:EE_|AM7EUR03FT064:EE_|GV1PR08MB7804:EE_
X-MS-Office365-Filtering-Correlation-Id: f905886a-53ad-43b1-95c0-08db5cf83755
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <9D992C08A14E3443A56A1DC81052708C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB8205
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e9f575de-4c46-4f84-0112-08db5cf82dff
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	veE0XOJ9FVWLp4NNnt/Of015GmBf7NMZL7VgW5/gBMJgzg9USf7DUipSPxfqeOSyAX/j1Oq2nHMrbB4mNUmTKxm4dJfHAzkWqgqu/6f0KScH0V0Od+/so48nwqCIKQ9l2hOC4iXCFOCZ80e+nII9Uk5bjL5iYRc0bFbO6/VwrEsdl8iobr4AV7S/RLnSNgxek+tr09BxaBr5+Ey68IGz8mMBg/MU6wskN0c0CGAxzz7CzmsiYAy0xxFCb5GmZwou+TejXkc7EvJJcPX/U/VGbYf8G80y8SJRsjunWiBn2VbMmJrbrJs9Z2uxUS2J4aI9bJZd4QHkDPvzWzrzfRClXuVBq9JXJMZMeU9h6PPTbqDZ/uL+CmTd9pic9j2RmHJSUAinHUDQbLzB9pznBjvZ2udY3dMZmKTeHsAHeeZAqOumFG2ekk+f/o4iL/bmnJkooAbbA62YpFOpZPA7RsEqYpTMAVv2/JcOwAs2HVRJbbNfLmrTPp/3zTXNcUmfRi9OOTQu7e33iY9U1Wy4OcJbzAqWdYPijkQPpZ49iQKANrMWSQ+0nrF8La1pHuGfSCjFYZx5cguJuh99bykcr8cgiE4UZLDxAEJ6XaymlElULr25NBZMNuxlJIbygGBvnyHcPdtKz5RMBqUeKkGmTuASHaDI/7z3yYIjp1rw6FX1qMOohnvCzG1lW8+hk0u5DD8VqMsnn7fBBVDny6V/PnD9CLE+1xROU3C4+/k0sLaafWWHiGF5KAyAxPG9Prpb1Nnf
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 08:15:35.2120
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f905886a-53ad-43b1-95c0-08db5cf83755
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB7804



> On 23 May 2023, at 08:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> This series introduces the possibility for Dom0 and DomU guests to use
> SVE/SVE2 instructions.
> 
> The SVE feature introduces new instructions and registers to improve the
> performance of floating point operations.
> 
> The SVE feature is advertised through the SVE field of the ID_AA64PFR0_EL1
> register and, when available, the ID_AA64ZFR0_EL1 register provides
> additional information about the implemented version and other SVE features.
> 
> The new registers added by the SVE feature are Z0-Z31, P0-P15, FFR and
> ZCR_ELx.
> 
> Z0-Z31 are scalable vector registers whose size is implementation defined
> and ranges from 128 bits up to a maximum of 2048 bits; the term "vector
> length" will be used to refer to this quantity.
> P0-P15 are predicate registers whose size is the vector length divided by
> 8; the FFR (First Fault Register) has the same size.
> ZCR_ELx is a register that can control and restrict the maximum vector
> length used by the <x> exception level and all lower exception levels, so
> for example EL3 can restrict the vector length usable by EL3, EL2, EL1 and
> EL0.
> 
> The platform has a maximum implemented vector length, so for every value
> written to the ZCR register, if that value is above the implemented length,
> the lower value will be used. The RDVL instruction can be used to check
> which vector length the hardware is using after setting ZCR.
> 
> For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is no
> need to save them separately: saving Z0-Z31 implicitly also saves V0-V31.
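[The register sizing and ZCR clamping rules described above can be sketched in plain C. The helper names below are illustrative, not part of the series (the real logic lives in the new xen/arch/arm/arm64/sve.c); the sketch only encodes the arithmetic stated in the cover letter.]

```c
#include <assert.h>

/* SVE architectural limits: the vector length (VL) is a multiple of
 * 128 bits, from 128 up to 2048 bits. */
#define SVE_VL_MIN_BITS  128u
#define SVE_VL_MAX_BITS  2048u

/* A Z register holds VL bits, i.e. VL/8 bytes. */
static unsigned int z_reg_bytes(unsigned int vl_bits)
{
    return vl_bits / 8;
}

/* A P register (and the FFR) holds VL/8 bits -- one predicate bit per
 * vector byte -- i.e. VL/64 bytes. */
static unsigned int p_reg_bytes(unsigned int vl_bits)
{
    return vl_bits / 64;
}

/* Requesting a vector length above the implemented maximum via ZCR_ELx
 * results in the lower (implemented) length being used; RDVL reports
 * the length that is actually in effect. */
static unsigned int effective_vl(unsigned int requested_bits,
                                 unsigned int implemented_bits)
{
    return requested_bits < implemented_bits ? requested_bits
                                             : implemented_bits;
}
```

For example, on a platform implementing a 512-bit maximum, writing a 2048-bit request still yields a 512-bit effective vector length, with 64-byte Z registers and 8-byte P registers.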
> 
> SVE usage can be trapped using a flag in CPTR_EL2, hence in this series
> the register is added to the domain state, making it possible to trap only
> the guests that are not allowed to use SVE.
> 
> This series introduces a command line parameter to enable Dom0 to use SVE
> and to set its maximum vector length, which by default is 0, meaning the
> guest is not allowed to use SVE. Values from 128 to 2048 mean the guest
> can use SVE, with the selected value used as the maximum allowed vector
> length (which could be lower if the implemented one is lower).
> For DomUs, an XL parameter used in the same way is introduced, and a
> dom0less DTB binding is created.
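[Going by the patch titles and the documentation files touched below, configuration would look roughly like the following sketch. The option names and values are illustrative assumptions; the exact syntax is defined by the series' xen-command-line.pandoc and xl.cfg documentation patches.]

```
# Xen boot command line: allow Dom0 to use SVE, capped at a 256-bit
# maximum vector length (0, the default, disables SVE for Dom0)
dom0=sve=256

# xl domain configuration: allow this DomU to use SVE, capped at 128 bits
sve=128
```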
> 
> The context switch is the most critical part because there can be big
> registers to save; in this series a simple approach is used and the
> context is saved/restored every time for the guests that are allowed to
> use SVE.
> 
> Luca Fancellu (12):
>  xen/arm: enable SVE extension for Xen
>  xen/arm: add SVE vector length field to the domain
>  xen/arm: Expose SVE feature to the guest
>  xen/arm: add SVE exception class handling
>  arm/sve: save/restore SVE context switch
>  xen/common: add dom0 xen command line argument for Arm
>  xen: enable Dom0 to use SVE feature
>  xen/physinfo: encode Arm SVE vector length in arch_capabilities
>  tools: add physinfo arch_capabilities handling for Arm
>  xen/tools: add sve parameter in XL configuration
>  xen/arm: add sve property for dom0less domUs
>  xen/changelog: Add SVE and "dom0" options to the changelog for Arm
> 
> CHANGELOG.md                                  |   3 +
> SUPPORT.md                                    |   6 +
> docs/man/xl.cfg.5.pod.in                      |  16 ++
> docs/misc/arm/device-tree/booting.txt         |  16 ++
> docs/misc/xen-command-line.pandoc             |  20 +-
> tools/golang/xenlight/helpers.gen.go          |   4 +
> tools/golang/xenlight/types.gen.go            |  24 +++
> tools/include/libxl.h                         |  11 +
> .../include/xen-tools/arm-arch-capabilities.h |  28 +++
> tools/include/xen-tools/common-macros.h       |   2 +
> tools/libs/light/libxl.c                      |   1 +
> tools/libs/light/libxl_arm.c                  |  33 +++
> tools/libs/light/libxl_internal.h             |   1 -
> tools/libs/light/libxl_types.idl              |  23 +++
> tools/ocaml/libs/xc/xenctrl.ml                |   4 +-
> tools/ocaml/libs/xc/xenctrl.mli               |   4 +-
> tools/ocaml/libs/xc/xenctrl_stubs.c           |   8 +-
> tools/python/xen/lowlevel/xc/xc.c             |   8 +-
> tools/xl/xl_info.c                            |   8 +
> tools/xl/xl_parse.c                           |   8 +
> xen/arch/arm/Kconfig                          |  10 +-
> xen/arch/arm/README.LinuxPrimitives           |  11 +
> xen/arch/arm/arm64/Makefile                   |   1 +
> xen/arch/arm/arm64/cpufeature.c               |   7 +-
> xen/arch/arm/arm64/domctl.c                   |   4 +
> xen/arch/arm/arm64/sve-asm.S                  | 195 ++++++++++++++++++
> xen/arch/arm/arm64/sve.c                      | 182 ++++++++++++++++
> xen/arch/arm/arm64/vfp.c                      |  79 ++++---
> xen/arch/arm/arm64/vsysreg.c                  |  41 +++-
> xen/arch/arm/cpufeature.c                     |   6 +-
> xen/arch/arm/domain.c                         |  55 ++++-
> xen/arch/arm/domain_build.c                   |  66 ++++++
> xen/arch/arm/include/asm/arm64/sve.h          |  72 +++++++
> xen/arch/arm/include/asm/arm64/sysregs.h      |   4 +
> xen/arch/arm/include/asm/arm64/vfp.h          |  12 ++
> xen/arch/arm/include/asm/cpufeature.h         |  14 ++
> xen/arch/arm/include/asm/domain.h             |   8 +
> xen/arch/arm/include/asm/processor.h          |   3 +
> xen/arch/arm/setup.c                          |   5 +-
> xen/arch/arm/sysctl.c                         |   4 +
> xen/arch/arm/traps.c                          |  36 +++-
> xen/arch/x86/dom0_build.c                     |  48 ++---
> xen/common/domain.c                           |  23 +++
> xen/common/kernel.c                           |  28 +++
> xen/include/public/arch-arm.h                 |   2 +
> xen/include/public/sysctl.h                   |   4 +
> xen/include/xen/domain.h                      |   1 +
> xen/include/xen/lib.h                         |  10 +
> 48 files changed, 1052 insertions(+), 107 deletions(-)
> create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h
> create mode 100644 xen/arch/arm/arm64/sve-asm.S
> create mode 100644 xen/arch/arm/arm64/sve.c
> create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

Hi All,

I received some R-by tags for this series, dependent on some fixes, so I will
wait until next week for further comments and then push the series with the
fixes and with the tags, to ease the committers' work.


> 
> -- 
> 2.34.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Thu May 25 08:17:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:17:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539354.840138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q269o-0001RC-7G; Thu, 25 May 2023 08:17:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539354.840138; Thu, 25 May 2023 08:17:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q269o-0001R5-4f; Thu, 25 May 2023 08:17:12 +0000
Received: by outflank-mailman (input) for mailman id 539354;
 Thu, 25 May 2023 08:17:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q269m-0001Qe-L3; Thu, 25 May 2023 08:17:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q269m-0007xx-EG; Thu, 25 May 2023 08:17:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q269l-00071M-Ry; Thu, 25 May 2023 08:17:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q269l-00006H-RT; Thu, 25 May 2023 08:17:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IO4x92YZiOECSPxkw0qRRD2YFoqfmvbu8Py96uDMwV8=; b=gTTSvqDRhOPNScgbAfONzop4z1
	JkZZRM4yRKrqtebso9wItONpKJvXiHT3rLmwsImRVgtAA9RTLij7h0lOrBmmBHGeD5LHuDkFhkybp
	NQzGTFGzBjmx89QEmdsHrx18z1JbBl/bx13w/15T1q1ZKXUpJDkriTnNlFXYmmIqJTwg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180934-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180934: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=933174ae28ba72ab8de5b35cb7c98fc211235096
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 08:17:09 +0000

flight 180934 linux-linus real [real]
flight 180940 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180934/
http://logs.test-lab.xenproject.org/osstest/logs/180940/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd12-amd64 21 guest-start/freebsd.repeat fail pass in 180940-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                933174ae28ba72ab8de5b35cb7c98fc211235096
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   38 days
Failing since        180281  2023-04-17 06:24:36 Z   38 days   70 attempts
Testing same since   180934  2023-05-24 22:41:38 Z    0 days    1 attempts

------------------------------------------------------------
2491 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 314227 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 25 08:31:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:31:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539361.840148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26NO-0003u7-Hk; Thu, 25 May 2023 08:31:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539361.840148; Thu, 25 May 2023 08:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26NO-0003u0-Dm; Thu, 25 May 2023 08:31:14 +0000
Received: by outflank-mailman (input) for mailman id 539361;
 Thu, 25 May 2023 08:31:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q26NM-0003tq-MA; Thu, 25 May 2023 08:31:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q26NM-0008DE-DG; Thu, 25 May 2023 08:31:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q26NL-0007dU-V3; Thu, 25 May 2023 08:31:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q26NL-0006bZ-UY; Thu, 25 May 2023 08:31:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gqn+DZ9itr7fUDzo6mXIW4CJeMdsOQVDoeJDK+Xn2kw=; b=awPReUQXqcoGx6FE5EXHpK89Sw
	frwDhKzPyRmOWJeb6iZ8Ky0Y5HfiYFlukMIDmuHGG0WWFmuSr2dUh33CDFFoz30/eTrWowqyhltwz
	l539PRGw+dtgZcgm9pnAug+jT+5YYZ1QEqb6VQnhlmcK3bsSSyopQFUhnf4y1vtvIRG0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180937-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180937: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=b300c134465465385045ab705b68a42699688332
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 08:31:11 +0000

flight 180937 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180937/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                b300c134465465385045ab705b68a42699688332
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    7 days
Failing since        180699  2023-05-18 07:21:24 Z    7 days   29 attempts
Testing same since   180937  2023-05-25 02:03:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 6870 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 25 08:31:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:31:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539364.840158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26NR-00049h-UB; Thu, 25 May 2023 08:31:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539364.840158; Thu, 25 May 2023 08:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26NR-00049a-Qq; Thu, 25 May 2023 08:31:17 +0000
Received: by outflank-mailman (input) for mailman id 539364;
 Thu, 25 May 2023 08:31:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/c8s=BO=citrix.com=prvs=5022a0095=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q26NQ-00049G-Vp
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:31:16 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7f395b56-fad6-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 10:31:11 +0200 (CEST)
Received: from mail-bn8nam04lp2041.outbound.protection.outlook.com (HELO
 NAM04-BN8-obe.outbound.protection.outlook.com) ([104.47.74.41])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 04:31:06 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DS7PR03MB5559.namprd03.prod.outlook.com (2603:10b6:5:2ce::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.14; Thu, 25 May
 2023 08:31:02 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Thu, 25 May 2023
 08:31:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f395b56-fad6-11ed-8611-37d641c3527e
X-IronPort-RemoteIP: 104.47.74.41
X-IronPort-MID: 110222491
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,190,1681185600"; 
   d="scan'208";a="110222491"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [PATCH] vpci/header: cope with devices not having vpci allocated
Date: Thu, 25 May 2023 10:30:51 +0200
Message-Id: <20230525083051.30366-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0044.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::32) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DS7PR03MB5559:EE_
X-MS-Office365-Filtering-Correlation-Id: f7976430-eb58-4fe9-500e-08db5cfa5f79
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f7976430-eb58-4fe9-500e-08db5cfa5f79
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 08:31:01.9019
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Gm2CmLV3RRe2ruM66e+Zx+Qg6RdDH49MhH0R7uSD2VMAcWRnKCkJSNMIHIgZuODZgms2/DFSoeTNWo8rNeGgiA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5559

When traversing the list of PCI devices assigned to a domain, cope with
some of them not having the vpci struct allocated. It's possible for
the hardware domain to have read-only devices assigned that are not
handled by vPCI, or for unprivileged domains to have some devices
handled by an emulator other than vPCI.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/drivers/vpci/header.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index ec2e978a4e6b..3c1fcfb208cf 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -289,6 +289,20 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
      */
     for_each_pdev ( pdev->domain, tmp )
     {
+        if ( !tmp->vpci )
+            /*
+             * For the hardware domain it's possible to have devices assigned
+             * to it that are not handled by vPCI, either because those are
+             * read-only devices, or because vPCI setup has failed.
+             *
+             * For unprivileged domains we aim for passthrough devices to be
+             * capable of being handled by different emulators, and hence a
+             * domain could have some devices handled by vPCI and others by
+             * QEMU for example; the latter won't have pdev->vpci
+             * allocated.
+             */
+            continue;
+
         if ( tmp == pdev )
         {
             /*
-- 
2.40.0



From xen-devel-bounces@lists.xenproject.org Thu May 25 08:34:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:34:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539373.840168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26QI-00054c-D5; Thu, 25 May 2023 08:34:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539373.840168; Thu, 25 May 2023 08:34:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26QI-00054V-8y; Thu, 25 May 2023 08:34:14 +0000
Received: by outflank-mailman (input) for mailman id 539373;
 Thu, 25 May 2023 08:34:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zmzE=BO=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q26QG-00054I-SV
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:34:12 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ec00ff07-fad6-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 10:34:11 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8C5AF1042;
 Thu, 25 May 2023 01:34:55 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4041B3F67D;
 Thu, 25 May 2023 01:34:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec00ff07-fad6-11ed-b230-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 0/3] diff-report.py tool
Date: Thu, 25 May 2023 09:33:58 +0100
Message-Id: <20230525083401.3838462-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

--------------------------------------------------------------------------------
This series depends on the patch below when the cppcheck reports are generated
by xen-analysis.py calling the makefile with an in-tree build, because this
tool (in particular its patching feature) needs paths relative to the
repository root rather than paths relative to xen/xen:
https://patchwork.kernel.org/project/xen-devel/patch/20230519093019.2131896-4-luca.fancellu@arm.com/
--------------------------------------------------------------------------------

Now that we have a tool (xen-analysis.py) that wraps cppcheck to generate
reports, we have a generic overview of how many static analysis issues and
MISRA C non-compliances we have for a given revision of the codebase.

This is great, and ideally the remaining work would just be reducing the
number of findings in the report until we reach zero.

That is only an ideal trend, because in practice we may have issues that
come from existing code (macros, for example) that are not going to be fixed
soon for whatever reason, but we would still like to check how many issues are
introduced by new commits (ideally zero; if any is added and the fault
resides outside the changed code, maintainers might decide to accept it
anyway).

So the idea is to compare two reports of the codebase: one called the
"baseline", which is basically the current codebase, and one called the
"new report", which is the codebase after the changes.
To spot any new finding, we need to look at every finding in the
"new report" that is not listed in the "baseline".
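As an illustration (not the tool's actual data model), the raw diff can be
sketched with findings modelled as (file, line, message) tuples:

```python
def raw_diff(new_report, baseline):
    # Findings present in the "new report" but absent from the "baseline".
    # The real tool uses richer Report objects; tuples are enough here.
    base = set(baseline)
    return [finding for finding in new_report if finding not in base]

baseline = [("xen/common/page_alloc.c", 100, "style: MISRA rule 10.1")]
new_rep = [("xen/common/page_alloc.c", 100, "style: MISRA rule 10.1"),
           ("xen/common/sched/core.c", 50, "style: MISRA rule 8.4")]

print(raw_diff(new_rep, baseline))
# [('xen/common/sched/core.c', 50, 'style: MISRA rule 8.4')]
```

This is exactly why position shifts are a problem: a finding that merely
moved from line 100 to line 103 would show up as "new" in such a raw diff.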

It seems very simple, but what can happen to existing findings after a
commit is applied?
Existing findings can shift in position due to changes to unrelated lines,
or they can be deleted or fixed by changes touching the finding's own line
(Michal was the first to point that out).

So comparing the two reports directly is quite difficult: a naive diff
would show all the new findings plus all the findings that merely changed
position because of the changes applied.

To overcome this, diff-report.py can "patch" the "baseline" report by
looking at the changes applied to the baseline codebase, as described by
git diff.
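The line-number translation this relies on can be sketched as follows. This
is a deliberate simplification (real unified-diff hunks include context
lines, which the series' unified_format_parser has to handle), not the
actual implementation:

```python
def translate_line(line, hunks):
    """Map a baseline line number through a list of git diff hunks.

    Each hunk is (old_start, old_count, new_start, new_count), as in a
    unified-diff "@@ -old_start,old_count +new_start,new_count @@" header.
    Returns the shifted line number, or None when the line falls inside a
    hunk (changed or deleted, so the finding cannot be carried over).
    """
    offset = 0
    for old_start, old_count, new_start, new_count in hunks:
        if line < old_start:
            break                      # hunk is past the finding: no effect
        if line < old_start + old_count:
            return None                # the finding's own lines were touched
        offset += new_count - old_count  # shift by lines added/removed above
    return line + offset

# One hunk growing lines 10-14 into lines 10-17 (3 lines added): a finding
# at line 40 moves to line 43; a finding at line 12 (inside the hunk) is
# dropped from the patched baseline.
hunks = [(10, 5, 10, 8)]
print(translate_line(40, hunks))  # 43
print(translate_line(12, hunks))  # None
```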

This series is organised in two code patches, split so that each is
meaningful on its own: the first patch contains everything needed to
import cppcheck reports and do a "raw" diff between them, which gives
a hint about new findings plus old findings that have changed place.

The second patch adds the "patching" system: a class that parses the
git diff output and "patches" the baseline before doing the comparison.
This option is activated only when the git diff changes are passed to the
tool; everything is described (I hope) in the help.

Some considerations: the tool can translate the coordinates (file, line)
of the findings from the "baseline" to the "new report", using the git
diff output as a translation matrix, so to speak.
It cannot understand the meaning of a finding and recognise it in the new
codebase, so, for example, a finding on a line that is moved to another
part of the file won't be recognised as an "old finding" and will simply
be removed from the patched baseline report; it will then show up in the
new report unless the change also fixes the reported issue.

This means the tool is not really suited to be a gatekeeper for the merge
action; it is more suitable to help maintainers understand when a change
introduces new issues, without having to manually compare two reports of
(nowadays) hundreds of findings.
Eventually we could run it in the CI and make the CI reply to the patchwork
thread with its output!

The tool also has a debug argument that generates extra files which can be
checked against the original ones: for example, the reports are imported
by the tool and the debug code then regenerates them from the imported
data; the two should be identical if everything works.
Another debug check exports the representation of the parsed git diff
output, so that the developer can verify that the parser interpreted the
data correctly.
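For illustration, the round-trip property that those debug files allow
checking can be demonstrated with the same entry regex the cppcheck parser
uses (the helper names here are mine, not the tool's):

```python
import re

# Matches "path/to/file.c(<line>,<col>): <message>", capturing the file
# path and the line number, like the regex in cppcheck_report.py.
ENTRY_RE = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')

def parse(lines):
    # Turn report lines into (file path, line number, raw text) entries.
    entries = []
    for line in lines:
        m = ENTRY_RE.match(line)
        if not m:
            raise ValueError("malformed entry: " + line)
        entries.append((m.group(1), int(m.group(2)), line))
    return entries

def serialise(entries):
    # Regenerate the report from the parsed data, as the debug mode does.
    return [text for _, _, text in entries]

report = ["xen/arch/arm/mm.c(123,5): style: MISRA rule 10.1 violation\n"]
assert serialise(parse(report)) == report  # round trip holds
```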

Future work for this tool might be to also parse Coverity reports and
eventually (I don't know yet if it is possible) ECLAIR text reports.

Luca Fancellu (3):
  xen/misra: add diff-report.py tool
  xen/misra: diff-report.py: add report patching feature
  maintainers: Add Xen MISRA Analysis Tools section

 MAINTAINERS                                   |  10 +
 xen/scripts/diff-report.py                    | 130 ++++++++++
 .../xen_analysis/diff_tool/__init__.py        |   0
 .../xen_analysis/diff_tool/cppcheck_report.py |  44 ++++
 xen/scripts/xen_analysis/diff_tool/debug.py   |  61 +++++
 xen/scripts/xen_analysis/diff_tool/report.py  | 187 ++++++++++++++
 .../diff_tool/unified_format_parser.py        | 232 ++++++++++++++++++
 7 files changed, 664 insertions(+)
 create mode 100755 xen/scripts/diff-report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/unified_format_parser.py

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 08:34:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:34:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539374.840174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26QI-000580-Mu; Thu, 25 May 2023 08:34:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539374.840174; Thu, 25 May 2023 08:34:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26QI-00057c-Gr; Thu, 25 May 2023 08:34:14 +0000
Received: by outflank-mailman (input) for mailman id 539374;
 Thu, 25 May 2023 08:34:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zmzE=BO=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q26QH-00054I-I4
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:34:13 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id eca93009-fad6-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 10:34:12 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 89BA2113E;
 Thu, 25 May 2023 01:34:56 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A8A1C3F67D;
 Thu, 25 May 2023 01:34:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eca93009-fad6-11ed-b230-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 1/3] xen/misra: add diff-report.py tool
Date: Thu, 25 May 2023 09:33:59 +0100
Message-Id: <20230525083401.3838462-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230525083401.3838462-1-luca.fancellu@arm.com>
References: <20230525083401.3838462-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new tool, diff-report.py, that can be used to diff reports
generated by the xen-analysis.py tool.
Currently the tool supports the Xen cppcheck text report format.

The tool prints every finding that is in the report passed with -r
(the check report) but not in the report passed with -b (the baseline).

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from v2:
 - Add A-by, T-by Stefano
Changes from v1:
 - Removed 2 methods from class ReportEntry that landed there by
   mistake on rebase.
 - Made the script also compatible with Python 2
---
 xen/scripts/diff-report.py                    |  80 ++++++++++++++
 .../xen_analysis/diff_tool/__init__.py        |   0
 .../xen_analysis/diff_tool/cppcheck_report.py |  44 ++++++++
 xen/scripts/xen_analysis/diff_tool/debug.py   |  40 +++++++
 xen/scripts/xen_analysis/diff_tool/report.py  | 100 ++++++++++++++++++
 5 files changed, 264 insertions(+)
 create mode 100755 xen/scripts/diff-report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/__init__.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/debug.py
 create mode 100644 xen/scripts/xen_analysis/diff_tool/report.py

diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
new file mode 100755
index 000000000000..f97cb2355cc3
--- /dev/null
+++ b/xen/scripts/diff-report.py
@@ -0,0 +1,80 @@
+#!/usr/bin/env python3
+
+from __future__ import print_function
+import os
+import sys
+from argparse import ArgumentParser
+from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
+from xen_analysis.diff_tool.debug import Debug
+from xen_analysis.diff_tool.report import ReportError
+
+
+def log_info(text, end='\n'):
+    # type: (str, str) -> None
+    global args
+    global file_out
+
+    if (args.verbose):
+        print(text, end=end, file=file_out)
+
+
+def main(argv):
+    # type: (list) -> None
+    global args
+    global file_out
+
+    parser = ArgumentParser(prog="diff-report.py")
+    parser.add_argument("-b", "--baseline", required=True, type=str,
+                        help="Path to the baseline report.")
+    parser.add_argument("--debug", action='store_true',
+                        help="Produce intermediate reports during operations.")
+    parser.add_argument("-o", "--out", default="stdout", type=str,
+                        help="Where to print the tool output. Default is "
+                             "stdout")
+    parser.add_argument("-r", "--report", required=True, type=str,
+                        help="Path to the 'check report', the one checked "
+                             "against the baseline.")
+    parser.add_argument("-v", "--verbose", action='store_true',
+                        help="Print more information during the run.")
+
+    args = parser.parse_args()
+
+    if args.out == "stdout":
+        file_out = sys.stdout
+    else:
+        try:
+            file_out = open(args.out, "wt")
+        except OSError as e:
+            print("ERROR: Issue opening file {}: {}".format(args.out, e))
+            sys.exit(1)
+
+    debug = Debug(args)
+
+    try:
+        baseline_path = os.path.realpath(args.baseline)
+        log_info("Loading baseline report {}".format(baseline_path), "")
+        baseline = CppcheckReport(baseline_path)
+        baseline.parse()
+        debug.debug_print_parsed_report(baseline)
+        log_info(" [OK]")
+        new_rep_path = os.path.realpath(args.report)
+        log_info("Loading check report {}".format(new_rep_path), "")
+        new_rep = CppcheckReport(new_rep_path)
+        new_rep.parse()
+        debug.debug_print_parsed_report(new_rep)
+        log_info(" [OK]")
+    except ReportError as e:
+        print("ERROR: {}".format(e))
+        sys.exit(1)
+
+    output = new_rep - baseline
+    print(output, end="", file=file_out)
+
+    if len(output) > 0:
+        sys.exit(1)
+
+    sys.exit(0)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/xen/scripts/xen_analysis/diff_tool/__init__.py b/xen/scripts/xen_analysis/diff_tool/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
new file mode 100644
index 000000000000..e7e80a9dde84
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/cppcheck_report.py
@@ -0,0 +1,44 @@
+#!/usr/bin/env python3
+
+import re
+from .report import Report, ReportError
+
+
+class CppcheckReport(Report):
+    def __init__(self, report_path):
+        # type: (str) -> None
+        super(CppcheckReport, self).__init__(report_path)
+        # This matches a string like:
+        # path/to/file.c(<line number>,<digits>):<whatever>
+        # and captures file name path and line number
+        # the last capture group is used for text substitution in __str__
+        self.__report_entry_regex = re.compile(r'^(.*)\((\d+)(,\d+\):.*)$')
+
+    def parse(self):
+        # type: () -> None
+        report_path = self.get_report_path()
+        try:
+            with open(report_path, "rt") as infile:
+                report_lines = infile.readlines()
+        except OSError as e:
+            raise ReportError("Issue with reading file {}: {}"
+                              .format(report_path, e))
+        for line in report_lines:
+            entry = self.__report_entry_regex.match(line)
+            if entry and entry.group(1) and entry.group(2):
+                file_path = entry.group(1)
+                line_number = int(entry.group(2))
+                self.add_entry(file_path, line_number, line)
+            else:
+                raise ReportError("Malformed report entry in file {}:\n{}"
+                                  .format(report_path, line))
+
+    def __str__(self):
+        # type: () -> str
+        ret = ""
+        for entry in self.to_list():
+            ret += re.sub(self.__report_entry_regex,
+                          r'{}({}\3'.format(entry.file_path,
+                                            entry.line_number),
+                          entry.text)
+        return ret
diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
new file mode 100644
index 000000000000..65cca2464110
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/debug.py
@@ -0,0 +1,40 @@
+#!/usr/bin/env python3
+
+from __future__ import print_function
+import os
+from .report import Report
+
+
+class Debug:
+    def __init__(self, args):
+        self.args = args
+
+    def __get_debug_out_filename(self, path, type):
+        # type: (str, str) -> str
+        # Take basename
+        file_name = os.path.basename(path)
+        # Split in name and extension
+        file_name = os.path.splitext(file_name)
+        if self.args.out != "stdout":
+            out_folder = os.path.dirname(self.args.out)
+        else:
+            out_folder = "./"
+        dbg_report_path = os.path.join(out_folder, file_name[0] + type + file_name[1])
+
+        return dbg_report_path
+
+    def __debug_print_report(self, report, type):
+        # type: (Report, str) -> None
+        report_name = self.__get_debug_out_filename(report.get_report_path(),
+                                                    type)
+        try:
+            with open(report_name, "wt") as outfile:
+                print(report, end="", file=outfile)
+        except OSError as e:
+            print("ERROR: Issue opening file {}: {}".format(report_name, e))
+
+    def debug_print_parsed_report(self, report):
+        # type: (Report) -> None
+        if not self.args.debug:
+            return
+        self.__debug_print_report(report, ".parsed")
diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
new file mode 100644
index 000000000000..4a303d61b3ea
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/report.py
@@ -0,0 +1,100 @@
+#!/usr/bin/env python3
+
+import os
+
+
+class ReportError(Exception):
+    pass
+
+
+class Report(object):
+    class ReportEntry:
+        def __init__(self, file_path, line_number, entry_text, line_id):
+            # type: (str, int, list, int) -> None
+            if not isinstance(line_number, int) or \
+               not isinstance(line_id, int):
+                raise ReportError("ReportEntry constructor wrong type args")
+            self.file_path = file_path
+            self.line_number = line_number
+            self.text = entry_text
+            self.line_id = line_id
+
+    def __init__(self, report_path):
+        # type: (str) -> None
+        self.__entries = {}
+        self.__path = report_path
+        self.__last_line_order = 0
+
+    def parse(self):
+        # type: () -> None
+        raise ReportError("Please create a specialised class from 'Report'.")
+
+    def get_report_path(self):
+        # type: () -> str
+        return self.__path
+
+    def get_report_entries(self):
+        # type: () -> dict
+        return self.__entries
+
+    def add_entry(self, entry_path, entry_line_number, entry_text):
+        # type: (str, int, str) -> None
+        entry = Report.ReportEntry(entry_path, entry_line_number, entry_text,
+                                   self.__last_line_order)
+        if entry_path in self.__entries.keys():
+            self.__entries[entry_path].append(entry)
+        else:
+            self.__entries[entry_path] = [entry]
+        self.__last_line_order += 1
+
+    def to_list(self):
+        # type: () -> list
+        report_list = []
+        for _, entries in self.__entries.items():
+            for entry in entries:
+                report_list.append(entry)
+
+        report_list.sort(key=lambda x: x.line_id)
+        return report_list
+
+    def __str__(self):
+        # type: () -> str
+        ret = ""
+        for entry in self.to_list():
+            ret += entry.file_path + ":" + str(entry.line_number) + ":" + entry.text
+
+        return ret
+
+    def __len__(self):
+        # type: () -> int
+        return len(self.to_list())
+
+    def __sub__(self, report_b):
+        # type: (Report) -> Report
+        if self.__class__ != report_b.__class__:
+            raise ReportError("Diff of different type of report!")
+
+        filename, file_extension = os.path.splitext(self.__path)
+        diff_report = self.__class__(filename + ".diff" + file_extension)
+        # Put in the diff report only records of this report that are not
+        # present in the report_b.
+        for file_path, entries in self.__entries.items():
+            rep_b_entries = report_b.get_report_entries()
+            if file_path in rep_b_entries.keys():
+                # File path exists in report_b, so check what entries of that
+                # file path doesn't exist in report_b and add them to the diff
+                rep_b_entries_num = [
+                    x.line_number for x in rep_b_entries[file_path]
+                ]
+                for entry in entries:
+                    if entry.line_number not in rep_b_entries_num:
+                        diff_report.add_entry(file_path, entry.line_number,
+                                              entry.text)
+            else:
+                # File path doesn't exist in report_b, so add every entry
+                # of that file path to the diff
+                for entry in entries:
+                    diff_report.add_entry(file_path, entry.line_number,
+                                          entry.text)
+
+        return diff_report
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 08:34:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:34:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539375.840188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26QK-0005ZK-4O; Thu, 25 May 2023 08:34:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539375.840188; Thu, 25 May 2023 08:34:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26QK-0005ZB-1L; Thu, 25 May 2023 08:34:16 +0000
Received: by outflank-mailman (input) for mailman id 539375;
 Thu, 25 May 2023 08:34:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zmzE=BO=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q26QI-00054I-IM
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:34:14 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id ed391c58-fad6-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 10:34:13 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 875D115BF;
 Thu, 25 May 2023 01:34:57 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A70313F67D;
 Thu, 25 May 2023 01:34:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed391c58-fad6-11ed-b230-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 2/3] xen/misra: diff-report.py: add report patching feature
Date: Thu, 25 May 2023 09:34:00 +0100
Message-Id: <20230525083401.3838462-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230525083401.3838462-1-luca.fancellu@arm.com>
References: <20230525083401.3838462-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a feature to the diff-report.py script that improves the comparison
between two analysis reports: one from a baseline codebase and the other
from the codebase with the changes applied.

Comparing reports of different codebases is an issue because entries in
the baseline can shift in position due to the addition or deletion of
unrelated lines, or disappear because the affected line was deleted,
making the comparison between two revisions of the code harder.

Given a baseline report, a report of the changed codebase called the
"new report", and a git diff format file describing the changes applied
to the code since the baseline, this feature works out which entries of
the baseline report were deleted or shifted in position by changes to
unrelated lines, and rewrites them as they will appear in the
"new report".

With the "patched baseline" and the "new report", it is then simple to
diff the two and print only the entries that are new.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from v2:
 - Fix --baseline-rev/--report-rev git command
 - Add A-by, T-by Stefano
Changes from v1:
 - Made the script compatible with python2
---
 xen/scripts/diff-report.py                    |  54 +++-
 xen/scripts/xen_analysis/diff_tool/debug.py   |  21 ++
 xen/scripts/xen_analysis/diff_tool/report.py  |  87 +++++++
 .../diff_tool/unified_format_parser.py        | 232 ++++++++++++++++++
 4 files changed, 392 insertions(+), 2 deletions(-)
 create mode 100644 xen/scripts/xen_analysis/diff_tool/unified_format_parser.py

diff --git a/xen/scripts/diff-report.py b/xen/scripts/diff-report.py
index f97cb2355cc3..636f98f5eebe 100755
--- a/xen/scripts/diff-report.py
+++ b/xen/scripts/diff-report.py
@@ -7,6 +7,10 @@ from argparse import ArgumentParser
 from xen_analysis.diff_tool.cppcheck_report import CppcheckReport
 from xen_analysis.diff_tool.debug import Debug
 from xen_analysis.diff_tool.report import ReportError
+from xen_analysis.diff_tool.unified_format_parser import \
+    (UnifiedFormatParser, UnifiedFormatParseError)
+from xen_analysis.settings import repo_dir
+from xen_analysis.utils import invoke_command
 
 
 def log_info(text, end='\n'):
@@ -36,9 +40,32 @@ def main(argv):
                              "against the baseline.")
     parser.add_argument("-v", "--verbose", action='store_true',
                         help="Print more information during the run.")
+    parser.add_argument("--patch", type=str,
+                        help="The patch file containing the changes to the "
+                             "code, from the baseline analysis result to the "
+                             "'check report' analysis result.\n"
+                             "Do not use with --baseline-rev/--report-rev")
+    parser.add_argument("--baseline-rev", type=str,
+                        help="Revision or SHA of the codebase analysed to "
+                             "create the baseline report.\n"
+                             "Use together with --report-rev")
+    parser.add_argument("--report-rev", type=str,
+                        help="Revision or SHA of the codebase analysed to "
+                             "create the 'check report'.\n"
+                             "Use together with --baseline-rev")
 
     args = parser.parse_args()
 
+    if args.patch and (args.baseline_rev or args.report_rev):
+        print("ERROR: '--patch' argument can't be used with '--baseline-rev'"
+              " or '--report-rev'.")
+        sys.exit(1)
+
+    if bool(args.baseline_rev) != bool(args.report_rev):
+        print("ERROR: '--baseline-rev' must be used together with "
+              "'--report-rev'.")
+        sys.exit(1)
+
     if args.out == "stdout":
         file_out = sys.stdout
     else:
@@ -63,11 +90,34 @@ def main(argv):
         new_rep.parse()
         debug.debug_print_parsed_report(new_rep)
         log_info(" [OK]")
-    except ReportError as e:
+        diff_source = None
+        if args.patch:
+            diff_source = os.path.realpath(args.patch)
+        elif args.baseline_rev:
+            git_diff = invoke_command(
+                "git --git-dir={}/.git diff -C -C {}..{}"
+                .format(repo_dir, args.baseline_rev, args.report_rev),
+                True, "Error occurred invoking:\n{}\n\n{}"
+            )
+            diff_source = git_diff.splitlines(keepends=True)
+        if diff_source:
+            log_info("Parsing changes...", "")
+            diffs = UnifiedFormatParser(diff_source)
+            debug.debug_print_parsed_diff(diffs)
+            log_info(" [OK]")
+    except (ReportError, UnifiedFormatParseError) as e:
         print("ERROR: {}".format(e))
         sys.exit(1)
 
-    output = new_rep - baseline
+    if args.patch or args.baseline_rev:
+        log_info("Patching baseline...", "")
+        baseline_patched = baseline.patch(diffs)
+        debug.debug_print_patched_report(baseline_patched)
+        log_info(" [OK]")
+        output = new_rep - baseline_patched
+    else:
+        output = new_rep - baseline
+
     print(output, end="", file=file_out)
 
     if len(output) > 0:
diff --git a/xen/scripts/xen_analysis/diff_tool/debug.py b/xen/scripts/xen_analysis/diff_tool/debug.py
index 65cca2464110..fcf1d861b5cf 100644
--- a/xen/scripts/xen_analysis/diff_tool/debug.py
+++ b/xen/scripts/xen_analysis/diff_tool/debug.py
@@ -3,6 +3,7 @@
 from __future__ import print_function
 import os
 from .report import Report
+from .unified_format_parser import UnifiedFormatParser
 
 
 class Debug:
@@ -38,3 +39,23 @@ class Debug:
         if not self.args.debug:
             return
         self.__debug_print_report(report, ".parsed")
+
+    def debug_print_patched_report(self, report):
+        # type: (Report) -> None
+        if not self.args.debug:
+            return
+        # The patched report already contains .patched in its name
+        self.__debug_print_report(report, "")
+
+    def debug_print_parsed_diff(self, diff):
+        # type: (UnifiedFormatParser) -> None
+        if not self.args.debug:
+            return
+        diff_filename = diff.get_diff_path()
+        out_pathname = self.__get_debug_out_filename(diff_filename, ".parsed")
+        try:
+            with open(out_pathname, "wt") as outfile:
+                for change_obj in diff.get_change_sets().values():
+                    print(change_obj, end="", file=outfile)
+        except OSError as e:
+            print("ERROR: Issue opening file {}: {}".format(out_pathname, e))
diff --git a/xen/scripts/xen_analysis/diff_tool/report.py b/xen/scripts/xen_analysis/diff_tool/report.py
index 4a303d61b3ea..b80eb31114f0 100644
--- a/xen/scripts/xen_analysis/diff_tool/report.py
+++ b/xen/scripts/xen_analysis/diff_tool/report.py
@@ -1,6 +1,7 @@
 #!/usr/bin/env python3
 
 import os
+from .unified_format_parser import UnifiedFormatParser, ChangeSet
 
 
 class ReportError(Exception):
@@ -47,6 +48,92 @@ class Report(object):
             self.__entries[entry_path] = [entry]
         self.__last_line_order += 1
 
+    def remove_entries(self, entry_file_path):
+        # type: (str) -> None
+        del self.__entries[entry_file_path]
+
+    def remove_entry(self, entry_path, line_id):
+        # type: (str, int) -> None
+        if entry_path in self.__entries.keys():
+            len_entry_path = len(self.__entries[entry_path])
+            if len_entry_path == 1:
+                del self.__entries[entry_path]
+            else:
+                if line_id in self.__entries[entry_path]:
+                    self.__entries[entry_path].remove(line_id)
+
+    def patch(self, diff_obj):
+        # type: (UnifiedFormatParser) -> Report
+        filename, file_extension = os.path.splitext(self.__path)
+        patched_report = self.__class__(filename + ".patched" + file_extension)
+        remove_files = []
+        rename_files = []
+        remove_entry = []
+        ChangeMode = ChangeSet.ChangeMode
+
+        # Copy entries from this report to the report we are going to patch
+        for entries in self.__entries.values():
+            for entry in entries:
+                patched_report.add_entry(entry.file_path, entry.line_number,
+                                         entry.text)
+
+        # Patch the output report
+        patched_rep_entries = patched_report.get_report_entries()
+        for file_diff, change_obj in diff_obj.get_change_sets().items():
+            if change_obj.is_change_mode(ChangeMode.COPY):
+                # Copy the original entries keyed by change_obj.orig_file into
+                # a new key in the patched report named change_obj.dst_file,
+                # which is what the file_diff variable holds here, because
+                # this change_obj was pushed into the change_sets with the
+                # change_obj.dst_file key
+                if change_obj.orig_file in self.__entries.keys():
+                    for entry in self.__entries[change_obj.orig_file]:
+                        patched_report.add_entry(file_diff,
+                                                 entry.line_number,
+                                                 entry.text)
+
+            if file_diff in patched_rep_entries.keys():
+                if change_obj.is_change_mode(ChangeMode.DELETE):
+                    # No need to check changes here, just remember to delete
+                    # the file from the report
+                    remove_files.append(file_diff)
+                    continue
+                elif change_obj.is_change_mode(ChangeMode.RENAME):
+                    # Remember to rename the file entry on this report
+                    rename_files.append(change_obj)
+
+                for line_num, change_type in change_obj.get_change_set():
+                    len_rep = len(patched_rep_entries[file_diff])
+                    for i in range(len_rep):
+                        rep_item = patched_rep_entries[file_diff][i]
+                        if change_type == ChangeSet.ChangeType.REMOVE:
+                            if rep_item.line_number == line_num:
+                                # This line is removed by these changes;
+                                # append it to the list of entries to remove
+                                remove_entry.append(rep_item)
+                            elif rep_item.line_number > line_num:
+                                rep_item.line_number -= 1
+                        elif change_type == ChangeSet.ChangeType.ADD:
+                            if rep_item.line_number >= line_num:
+                                rep_item.line_number += 1
+                    # Remove deleted entries from the list
+                    if len(remove_entry) > 0:
+                        for entry in remove_entry:
+                            patched_report.remove_entry(entry.file_path,
+                                                        entry.line_id)
+                        del remove_entry[:]
+
+        if len(remove_files) > 0:
+            for file_name in remove_files:
+                patched_report.remove_entries(file_name)
+
+        if len(rename_files) > 0:
+            for change_obj in rename_files:
+                patched_rep_entries[change_obj.dst_file] = \
+                    patched_rep_entries.pop(change_obj.orig_file)
+
+        return patched_report
+
     def to_list(self):
         # type: () -> list
         report_list = []
diff --git a/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py b/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
new file mode 100644
index 000000000000..8b3fbc318df7
--- /dev/null
+++ b/xen/scripts/xen_analysis/diff_tool/unified_format_parser.py
@@ -0,0 +1,232 @@
+#!/usr/bin/env python3
+
+import re
+import sys
+
+try:
+    from enum import Enum
+except Exception:
+    if sys.version_info[0] == 2:
+        print("Please install enum34 package when using python 2.")
+    else:
+        print("Please use python version 3.5 or above.")
+    sys.exit(1)
+
+try:
+    from typing import Tuple
+except Exception:
+    if sys.version_info[0] == 2:
+        print("Please install typing package when using python 2.")
+    else:
+        print("Please use python version 3.5 or above.")
+    sys.exit(1)
+
+
+class UnifiedFormatParseError(Exception):
+    pass
+
+
+class ParserState(Enum):
+    FIND_DIFF_HEADER = 0
+    REGISTER_CHANGES = 1
+    FIND_HUNK_OR_DIFF_HEADER = 2
+
+
+class ChangeSet(object):
+    class ChangeType(Enum):
+        REMOVE = 0
+        ADD = 1
+
+    class ChangeMode(Enum):
+        NONE = 0
+        CHANGE = 1
+        RENAME = 2
+        DELETE = 3
+        COPY = 4
+
+    def __init__(self, a_file, b_file):
+        # type: (str, str) -> None
+        self.orig_file = a_file
+        self.dst_file = b_file
+        self.change_mode = ChangeSet.ChangeMode.NONE
+        self.__changes = []
+
+    def __str__(self):
+        # type: () -> str
+        str_out = "{}: {} -> {}:\n{}\n".format(
+            str(self.change_mode), self.orig_file, self.dst_file,
+            str(self.__changes)
+        )
+        return str_out
+
+    def set_change_mode(self, change_mode):
+        # type: (ChangeMode) -> None
+        self.change_mode = change_mode
+
+    def is_change_mode(self, change_mode):
+        # type: (ChangeMode) -> bool
+        return self.change_mode == change_mode
+
+    def add_change(self, line_number, change_type):
+        # type: (int, ChangeType) -> None
+        self.__changes.append((line_number, change_type))
+
+    def get_change_set(self):
+        # type: () -> list
+        return self.__changes
+
+
+class UnifiedFormatParser(object):
+    def __init__(self, args):
+        # type: (str | list) -> None
+        if isinstance(args, str):
+            self.__diff_file = args
+            try:
+                with open(self.__diff_file, "rt") as infile:
+                    self.__diff_lines = infile.readlines()
+            except OSError as e:
+                raise UnifiedFormatParseError(
+                    "Issue with reading file {}: {}"
+                    .format(self.__diff_file, e)
+                )
+        elif isinstance(args, list):
+            self.__diff_file = "git-diff-local.txt"
+            self.__diff_lines = args
+        else:
+            raise UnifiedFormatParseError(
+                "UnifiedFormatParser constructor called with wrong arguments")
+
+        self.__git_diff_header = re.compile(r'^diff --git a/(.*) b/(.*)$')
+        self.__git_hunk_header = \
+            re.compile(r'^@@ -\d+,(\d+) \+(\d+),(\d+) @@.*$')
+        self.__diff_set = {}
+        self.__parse()
+
+    def get_diff_path(self):
+        # type: () -> str
+        return self.__diff_file
+
+    def add_change_set(self, change_set):
+        # type: (ChangeSet) -> None
+        if not change_set.is_change_mode(ChangeSet.ChangeMode.NONE):
+            if change_set.is_change_mode(ChangeSet.ChangeMode.COPY):
+                # Add copy change mode items using the dst_file key, because
+                # there might be other changes for the orig_file in this diff
+                self.__diff_set[change_set.dst_file] = change_set
+            else:
+                self.__diff_set[change_set.orig_file] = change_set
+
+    def __parse(self):
+        # type: () -> None
+        def parse_diff_header(line):
+            # type: (str) -> ChangeSet | None
+            change_item = None
+            diff_head = self.__git_diff_header.match(line)
+            if diff_head and diff_head.group(1) and diff_head.group(2):
+                change_item = ChangeSet(diff_head.group(1), diff_head.group(2))
+
+            return change_item
+
+        def parse_hunk_header(line):
+            # type: (str) -> Tuple[int, int, int]
+            file_linenum = -1
+            hunk_a_linemax = -1
+            hunk_b_linemax = -1
+            hunk_head = self.__git_hunk_header.match(line)
+            if hunk_head and hunk_head.group(1) and hunk_head.group(2) \
+               and hunk_head.group(3):
+                file_linenum = int(hunk_head.group(2))
+                hunk_a_linemax = int(hunk_head.group(1))
+                hunk_b_linemax = int(hunk_head.group(3))
+
+            return (file_linenum, hunk_a_linemax, hunk_b_linemax)
+
+        file_linenum = 0
+        hunk_a_linemax = 0
+        hunk_b_linemax = 0
+        diff_elem = None
+        parse_state = ParserState.FIND_DIFF_HEADER
+        ChangeMode = ChangeSet.ChangeMode
+        ChangeType = ChangeSet.ChangeType
+
+        for line in self.__diff_lines:
+            if parse_state == ParserState.FIND_DIFF_HEADER:
+                diff_elem = parse_diff_header(line)
+                if diff_elem:
+                    # Found the diff header, go to the next stage
+                    parse_state = ParserState.FIND_HUNK_OR_DIFF_HEADER
+            elif parse_state == ParserState.FIND_HUNK_OR_DIFF_HEADER:
+                # Only the following change modes are registered here:
+                # deleted file mode <mode>
+                # rename from <path>
+                # rename to <path>
+                # copy from <path>
+                # copy to <path>
+                #
+                # These will be ignored:
+                # old mode <mode>
+                # new mode <mode>
+                # new file mode <mode>
+                #
+                # This information is also ignored:
+                # similarity index <number>
+                # dissimilarity index <number>
+                # index <hash>..<hash> <mode>
+                if line.startswith("deleted file"):
+                    # If the file is deleted, register it but don't go through
+                    # the changes, which would only be a set of removed lines
+                    diff_elem.set_change_mode(ChangeMode.DELETE)
+                    parse_state = ParserState.FIND_DIFF_HEADER
+                elif line.startswith("new file"):
+                    # If the file is new, skip it, as it doesn't give any
+                    # useful information on the report translation
+                    parse_state = ParserState.FIND_DIFF_HEADER
+                elif line.startswith("rename to"):
+                    # A rename operation can be a pure rename or a rename
+                    # plus a set of changes, so keep looking for the hunk
+                    # header
+                    diff_elem.set_change_mode(ChangeMode.RENAME)
+                elif line.startswith("copy to"):
+                    # This is a copy operation, mark it
+                    diff_elem.set_change_mode(ChangeMode.COPY)
+                else:
+                    # Look for the hunk header
+                    (file_linenum, hunk_a_linemax, hunk_b_linemax) = \
+                        parse_hunk_header(line)
+                    if file_linenum >= 0:
+                        if diff_elem.is_change_mode(ChangeMode.NONE):
+                            # The file has only changes
+                            diff_elem.set_change_mode(ChangeMode.CHANGE)
+                        parse_state = ParserState.REGISTER_CHANGES
+                    else:
+                        # ... or there could be a diff header
+                        new_diff_elem = parse_diff_header(line)
+                        if new_diff_elem:
+                            # Found a diff header, register the last change
+                            # item
+                            self.add_change_set(diff_elem)
+                            diff_elem = new_diff_elem
+            elif parse_state == ParserState.REGISTER_CHANGES:
+                if (hunk_b_linemax > 0) and line.startswith("+"):
+                    diff_elem.add_change(file_linenum, ChangeType.ADD)
+                    hunk_b_linemax -= 1
+                elif (hunk_a_linemax > 0) and line.startswith("-"):
+                    diff_elem.add_change(file_linenum, ChangeType.REMOVE)
+                    hunk_a_linemax -= 1
+                    file_linenum -= 1
+                elif ((hunk_a_linemax + hunk_b_linemax) > 0) and \
+                        line.startswith(" "):
+                    hunk_a_linemax -= 1 if (hunk_a_linemax > 0) else 0
+                    hunk_b_linemax -= 1 if (hunk_b_linemax > 0) else 0
+
+                if (hunk_a_linemax + hunk_b_linemax) <= 0:
+                    parse_state = ParserState.FIND_HUNK_OR_DIFF_HEADER
+
+                file_linenum += 1
+
+        if diff_elem is not None:
+            self.add_change_set(diff_elem)
+
+    def get_change_sets(self):
+        # type: () -> dict
+        return self.__diff_set
-- 
2.34.1
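The hunk-header regex used by UnifiedFormatParser above can be exercised in isolation. A minimal standalone sketch (the sample hunk header here is illustrative, not taken from a real patch):

```python
import re

# Same pattern as UnifiedFormatParser.__git_hunk_header: it captures the
# a-side line count, the b-side start line, and the b-side line count.
# Note it assumes both ",count" parts are present, so single-line hunk
# headers such as "@@ -1 +1 @@" are not matched.
hunk_header = re.compile(r'^@@ -\d+,(\d+) \+(\d+),(\d+) @@.*$')

m = hunk_header.match("@@ -40,7 +44,9 @@ def main(argv):")
a_linemax, file_linenum, b_linemax = (int(g) for g in m.groups())
print(a_linemax, file_linenum, b_linemax)  # 7 44 9
```

This matches how `parse_hunk_header()` in the patch unpacks the groups: group 2 seeds `file_linenum`, while groups 1 and 3 bound how many removed/added lines the REGISTER_CHANGES state still expects.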



From xen-devel-bounces@lists.xenproject.org Thu May 25 08:34:28 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 3/3] maintainers: Add Xen MISRA Analysis Tools section
Date: Thu, 25 May 2023 09:34:01 +0100
Message-Id: <20230525083401.3838462-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230525083401.3838462-1-luca.fancellu@arm.com>
References: <20230525083401.3838462-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a section for the Xen MISRA Analysis Tools.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v2:
 - New patch, suggested by Stefano:
   https://lore.kernel.org/all/alpine.DEB.2.22.394.2305171232440.128889@ubuntu-linux-20-04-desktop/
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index f2f1881b32cc..c5b2dc2b024c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -667,6 +667,16 @@ F:	tools/xentrace/
 F:	xen/common/trace.c
 F:	xen/include/xen/trace.h
 
+XEN MISRA ANALYSIS TOOLS
+M:	Luca Fancellu <luca.fancellu@arm.com>
+S:	Supported
+F:	xen/scripts/xen_analysis/
+F:	xen/scripts/xen-analysis.py
+F:	xen/scripts/diff-report.py
+F:	xen/tools/cppcheck-plat/
+F:	xen/tools/convert_misra_doc.py
+F:	xen/tools/cppcheck-cc.sh
+
 XSM/FLASK
 M:	Daniel P. Smith <dpsmith@apertussolutions.com>
 S:	Supported
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 08:39:52 2023
Date: Thu, 25 May 2023 10:39:26 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
	wei.chen@arm.com, George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Christian Lindig <christian.lindig@cloud.com>
Subject: Re: [PATCH v7 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Message-ID: <ZG8evxN0mF8NDTPS@mail-itl>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-10-luca.fancellu@arm.com>
MIME-Version: 1.0
In-Reply-To: <20230523074326.3035745-10-luca.fancellu@arm.com>



On Tue, May 23, 2023 at 08:43:23AM +0100, Luca Fancellu wrote:
> On Arm, the SVE vector length is encoded in arch_capabilities field
> of struct xen_sysctl_physinfo, make use of this field in the tools
> when building for arm.
> 
> Create header arm-arch-capabilities.h to handle the arch_capabilities
> field of physinfo for Arm.
> 
> Removed include for xen-tools/common-macros.h in
> python/xen/lowlevel/xc/xc.c because it is already included by the
> arm-arch-capabilities.h header.
>=20
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Acked-by: George Dunlap <george.dunlap@citrix.com>
> Acked-by: Christian Lindig <christian.lindig@cloud.com>
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> Changes from v6:
>  - Fix licence header in arm-arch-capabilities.h, add R-by (Anthony)
> Changes from v5:
>  - no changes
> Changes from v4:
>  - Move arm-arch-capabilities.h into xen-tools/, add LIBXL_HAVE_,
>    fixed python return type to I instead of i. (Anthony)
> Changes from v3:
>  - add Ack-by for the Golang bits (George)
>  - add Ack-by for the OCaml tools (Christian)
>  - now xen-tools/libs.h is named xen-tools/common-macros.h
>  - changed commit message to explain why the header modification
>    in python/xen/lowlevel/xc/xc.c
> Changes from v2:
>  - rename arm_arch_capabilities.h in arm-arch-capabilities.h, use
>    MASK_EXTR.
>  - Now arm-arch-capabilities.h needs MASK_EXTR macro, but it is
>    defined in libxl_internal.h, it doesn't feel right to include
>    that header so move MASK_EXTR into xen-tools/libs.h that is also
>    included in libxl_internal.h
> Changes from v1:
>  - now SVE VL is encoded in arch_capabilities on Arm
> Changes from RFC:
>  - new patch
> ---
>  tools/golang/xenlight/helpers.gen.go          |  2 ++
>  tools/golang/xenlight/types.gen.go            |  1 +
>  tools/include/libxl.h                         |  6 ++++
>  .../include/xen-tools/arm-arch-capabilities.h | 28 +++++++++++++++++++
>  tools/include/xen-tools/common-macros.h       |  2 ++
>  tools/libs/light/libxl.c                      |  1 +
>  tools/libs/light/libxl_internal.h             |  1 -
>  tools/libs/light/libxl_types.idl              |  1 +
>  tools/ocaml/libs/xc/xenctrl.ml                |  4 +--
>  tools/ocaml/libs/xc/xenctrl.mli               |  4 +--
>  tools/ocaml/libs/xc/xenctrl_stubs.c           |  8 ++++--
>  tools/python/xen/lowlevel/xc/xc.c             |  8 ++++--
>  tools/xl/xl_info.c                            |  8 ++++++
>  13 files changed, 62 insertions(+), 12 deletions(-)
>  create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h
> 

(...)

> diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
> index 9728b34185ac..b3699fdac58e 100644
> --- a/tools/python/xen/lowlevel/xc/xc.c
> +++ b/tools/python/xen/lowlevel/xc/xc.c
> @@ -22,6 +22,7 @@
>  #include <xen/hvm/hvm_info_table.h>
>  #include <xen/hvm/params.h>
> 
> +#include <xen-tools/arm-arch-capabilities.h>
>  #include <xen-tools/common-macros.h>
> 
>  /* Needed for Python versions earlier than 2.3. */
> @@ -897,7 +898,7 @@ static PyObject *pyxc_physinfo(XcObject *self)
>      if ( p !=3D virt_caps )
>        *(p-1) =3D '\0';
> 
> -    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
> +    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s,s:I}",
>                              "nr_nodes",         pinfo.nr_nodes,
>                              "threads_per_core", pinfo.threads_per_core,
>                              "cores_per_socket", pinfo.cores_per_socket,
> @@ -907,7 +908,10 @@ static PyObject *pyxc_physinfo(XcObject *self)
>                              "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
>                              "cpu_khz",          pinfo.cpu_khz,
>                              "hw_caps",          cpu_cap,
> -                            "virt_caps",        virt_caps);
> +                            "virt_caps",        virt_caps,
> +                            "arm_sve_vl",
> +                              arch_capabilities_arm_sve(pinfo.arch_capab=
ilities)
> +                        );

This should be added only when building for ARM, similar to what is done
for the other bindings.

>  }
> 
>  static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Thu May 25 08:53:19 2023
Message-ID: <3dcbec7c-87d2-31cc-ad0f-d8ff4c8baeae@amd.com>
Date: Thu, 25 May 2023 10:52:43 +0200
Subject: Re: [PATCH v7 11/12] xen/arm: add sve property for dom0less domUs
To: Bertrand Marquis <Bertrand.Marquis@arm.com>, Luca Fancellu
	<Luca.Fancellu@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-12-luca.fancellu@arm.com>
 <D01601D3-8215-46A0-8539-CF126A5FE11F@arm.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <D01601D3-8215-46A0-8539-CF126A5FE11F@arm.com>

Hi Luca,

Sorry for jumping into this, but I just wanted to read the DT binding doc and spotted one thing by accident.

On 24/05/2023 17:20, Bertrand Marquis wrote:
> 
> 
> Hi Luca,
> 
>> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>
>> Add a device tree property in the dom0less domU configuration
>> to enable the guest to use SVE.
>>
>> Update documentation.
>>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

(...)
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 9202a96d9c28..ba4fe9e165ee 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -4008,6 +4008,34 @@ void __init create_domUs(void)
>>             d_cfg.max_maptrack_frames = val;
>>         }
>>
>> +        if ( dt_get_property(node, "sve", &val) )
>> +        {
>> +#ifdef CONFIG_ARM64_SVE
>> +            unsigned int sve_vl_bits;
>> +            bool ret = false;
>> +
>> +            if ( !val )
>> +            {
>> +                /* Property found with no value, means max HW VL supported */
>> +                ret = sve_domctl_vl_param(-1, &sve_vl_bits);
>> +            }
>> +            else
>> +            {
>> +                if ( dt_property_read_u32(node, "sve", &val) )
>> +                    ret = sve_domctl_vl_param(val, &sve_vl_bits);
>> +                else
>> +                    panic("Error reading 'sve' property");
Both here and ...

>> +            }
>> +
>> +            if ( ret )
>> +                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
>> +            else
>> +                panic("SVE vector length error\n");
>> +#else
>> +            panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
here you are missing a \n at the end of the string. If you take a look at the panic()
implementation, the newline char is not added, so in your case it would result in a
badly formatted panic message.

~Michal


From xen-devel-bounces@lists.xenproject.org Thu May 25 08:55:41 2023
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 11/12] xen/arm: add sve property for dom0less domUs
Date: Thu, 25 May 2023 08:55:22 +0000
Message-ID: <D7FF883F-D8DF-48A0-A32D-15F2D0CF7426@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-12-luca.fancellu@arm.com>
 <D01601D3-8215-46A0-8539-CF126A5FE11F@arm.com>
 <3dcbec7c-87d2-31cc-ad0f-d8ff4c8baeae@amd.com>
In-Reply-To: <3dcbec7c-87d2-31cc-ad0f-d8ff4c8baeae@amd.com>



> On 25 May 2023, at 09:52, Michal Orzel <michal.orzel@amd.com> wrote:
> 
> Hi Luca,
> 
> Sorry for jumping into this, but I just wanted to read the DT binding doc and spotted one thing by accident.
> 
> On 24/05/2023 17:20, Bertrand Marquis wrote:
>> 
>> 
>> Hi Luca,
>> 
>>> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>> 
>>> Add a device tree property in the dom0less domU configuration
>>> to enable the guest to use SVE.
>>> 
>>> Update documentation.
>>> 
>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> 
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> (...)
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index 9202a96d9c28..ba4fe9e165ee 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -4008,6 +4008,34 @@ void __init create_domUs(void)
>>>            d_cfg.max_maptrack_frames = val;
>>>        }
>>> 
>>> +        if ( dt_get_property(node, "sve", &val) )
>>> +        {
>>> +#ifdef CONFIG_ARM64_SVE
>>> +            unsigned int sve_vl_bits;
>>> +            bool ret = false;
>>> +
>>> +            if ( !val )
>>> +            {
>>> +                /* Property found with no value, means max HW VL supported */
>>> +                ret = sve_domctl_vl_param(-1, &sve_vl_bits);
>>> +            }
>>> +            else
>>> +            {
>>> +                if ( dt_property_read_u32(node, "sve", &val) )
>>> +                    ret = sve_domctl_vl_param(val, &sve_vl_bits);
>>> +                else
>>> +                    panic("Error reading 'sve' property");
> Both here and ...
> 
>>> +            }
>>> +
>>> +            if ( ret )
>>> +                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
>>> +            else
>>> +                panic("SVE vector length error\n");
>>> +#else
>>> +            panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
> here you are missing a \n at the end of the string. If you take a look at the panic()
> implementation, the newline char is not added, so in your case it would result in a
> badly formatted panic message.

Hi Michal,

Thank you for pointing that out! Indeed there might be some issues, I will fix it in the next push.

@Bertrand, can I retain your R-by with this fix?
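For context, the binding being discussed would be used from a dom0less device tree roughly as below. This is an illustrative sketch based on the patch description only; the node names and the 256-bit value are made up, not taken from the actual documentation:

```dts
chosen {
    domU1 {
        compatible = "xen,domain";
        cpus = <1>;
        memory = <0x0 0x20000>;

        /* Request SVE with a 256-bit vector length for this domU.
         * Per the patch, an empty "sve;" property would instead mean
         * "use the maximum VL supported by the hardware". */
        sve = <256>;
    };
};
```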

> 
> ~Michal



From xen-devel-bounces@lists.xenproject.org Thu May 25 08:56:30 2023
Message-ID: <ac51e967-c0e1-d5a2-5454-6725f1758c49@suse.com>
Date: Thu, 25 May 2023 10:56:17 +0200
Subject: Re: [PATCH v3] iommu/vtd: fix address translation for leaf entries
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <20230525080859.29859-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230525080859.29859-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0072.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4b::23) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8369:EE_
X-MS-Office365-Filtering-Correlation-Id: 4b75b192-0d41-4b37-6d7b-08db5cfde7f2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b75b192-0d41-4b37-6d7b-08db5cfde7f2
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 08:56:19.2580
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hJ3UXI313+aI22DTnXpsniAxkJegaT6SPhEYzXxpQom4MXB+75pA1nftTB8MVO2oNSsc2d4wBxBx0XWMuO7iRg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8369

On 25.05.2023 10:08, Roger Pau Monne wrote:
> Fix two issues related to leaf address lookups in VT-d:
> 
> * When translating an address that falls inside a superpage in the
>   IOMMU page tables, the fetching of the PTE value wasn't masking off
>   the contiguous-related data, which caused the returned value to be
>   corrupt, as it contained bits that the caller would interpret as
>   part of the address.
> 
> * When the requested leaf address wasn't mapped by a superpage the
>   returned value wouldn't have any of the low 12 bits set, thus missing
>   the permission bits expected by the caller.
> 
> Take the opportunity to also adjust the function comment to note that
> when returning the full PTE the bits above PADDR_BITS are removed.
> 
> Fixes: c71e55501a61 ('VT-d: have callers specify the target level for page table walks')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Thu May 25 08:56:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:56:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539409.840245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26ll-0002aU-DJ; Thu, 25 May 2023 08:56:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539409.840245; Thu, 25 May 2023 08:56:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26ll-0002Zj-6E; Thu, 25 May 2023 08:56:25 +0000
Received: by outflank-mailman (input) for mailman id 539409;
 Thu, 25 May 2023 08:56:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JOix=BO=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1q26lk-0002XS-J0
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:56:24 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 05e84676-fada-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 10:56:23 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 443A86441D;
 Thu, 25 May 2023 08:56:22 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9425AC433EF;
 Thu, 25 May 2023 08:56:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05e84676-fada-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685004981;
	bh=T3x6HsAO3RQbrXA8lAmGWzhIEFxQo3e/SuF/zmio4Hs=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=mRtUeUA+TIvgOPBLeDE0LEeluR6Lkzocki1ce4LxvGW3Kh74OsnmFoUFRRmQEOqgY
	 vPbs8UllpBsHYQNETyRPKHNQXbkjzvcZ/qcNMJMO2YSn7oQ9dFwS8Ece6KmReDjGKv
	 UHsfvGg5kn0tsbjmZAqeTwYppOTlzTxEsz6d+ZO+0GcrxaOJ0wF3N2ge6+B4EiqPJP
	 Rr14bkavs3kxuqVnqdy8wSuE2VMYBsI0nPEtl+gq6jIzwj54Hlj3QoMxeOG7OsIhLt
	 Fod2UUbjMqcZpiBjbvghJGsfjbRDVycHZQ/GoUx1cUGXfXrx/0id06caAMVPKwYBaI
	 blTpXwhMXt6qg==
Date: Thu, 25 May 2023 11:55:55 +0300
From: Mike Rapoport <rppt@kernel.org>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 01/34] mm: Add PAGE_TYPE_OP folio functions
Message-ID: <20230525085555.GV4967@kernel.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-2-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230501192829.17086-2-vishal.moola@gmail.com>

Hi,

On Mon, May 01, 2023 at 12:27:56PM -0700, Vishal Moola (Oracle) wrote:
> No folio equivalents for page type operations have been defined, so
> define them for later folio conversions.

Can you please elaborate on why we would need folios for page table descriptors?
 
> Also change the Page##uname macros to take a const struct page *, since
> we only read the memory here.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  include/linux/page-flags.h | 20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 1c68d67b832f..607b495d1b57 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -902,6 +902,8 @@ static inline bool is_page_hwpoison(struct page *page)
>  
>  #define PageType(page, flag)						\
>  	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
> +#define folio_test_type(folio, flag)					\
> +	((folio->page.page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
>  
>  static inline int page_type_has_type(unsigned int page_type)
>  {
> @@ -914,20 +916,34 @@ static inline int page_has_type(struct page *page)
>  }
>  
>  #define PAGE_TYPE_OPS(uname, lname)					\
> -static __always_inline int Page##uname(struct page *page)		\
> +static __always_inline int Page##uname(const struct page *page)		\
>  {									\
>  	return PageType(page, PG_##lname);				\
>  }									\
> +static __always_inline int folio_test_##lname(const struct folio *folio)\
> +{									\
> +	return folio_test_type(folio, PG_##lname);			\
> +}									\
>  static __always_inline void __SetPage##uname(struct page *page)		\
>  {									\
>  	VM_BUG_ON_PAGE(!PageType(page, 0), page);			\
>  	page->page_type &= ~PG_##lname;					\
>  }									\
> +static __always_inline void __folio_set_##lname(struct folio *folio)	\
> +{									\
> +	VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio);		\
> +	folio->page.page_type &= ~PG_##lname;				\
> +}									\
>  static __always_inline void __ClearPage##uname(struct page *page)	\
>  {									\
>  	VM_BUG_ON_PAGE(!Page##uname(page), page);			\
>  	page->page_type |= PG_##lname;					\
> -}
> +}									\
> +static __always_inline void __folio_clear_##lname(struct folio *folio)	\
> +{									\
> +	VM_BUG_ON_FOLIO(!folio_test_##lname(folio), folio);		\
> +	folio->page.page_type |= PG_##lname;				\
> +}									\
>  
>  /*
>   * PageBuddy() indicates that the page is free and in the buddy system
> -- 
> 2.39.2
> 
> 

-- 
Sincerely yours,
Mike.


From xen-devel-bounces@lists.xenproject.org Thu May 25 08:57:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:57:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539418.840258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26mb-0003gk-Nq; Thu, 25 May 2023 08:57:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539418.840258; Thu, 25 May 2023 08:57:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26mb-0003gd-LD; Thu, 25 May 2023 08:57:17 +0000
Received: by outflank-mailman (input) for mailman id 539418;
 Thu, 25 May 2023 08:57:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q26ma-0003gJ-Ab
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:57:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q26mZ-0000RH-V1; Thu, 25 May 2023 08:57:15 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.31.224]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q26mZ-0005Ib-Nx; Thu, 25 May 2023 08:57:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2Cr5YdIAm+OmCFzy0YZ2QOgndjiPq6HghzQKMO5ZyQA=; b=YPAGZkAF+kQf8uFTnN9HteFCTL
	X4JOg9ftqOA5rZnVWZrsN9Uxv40huYVh1ES63T3INKPaMhRn9Kf3UPKiCslUwdG8QV28TG78uygdz
	v7hmukPotSdjAabK+1HHzaBf6CkqYWPUAXHUbRHR3C08qIDZiGdVYmqpcRo8fD8fxsnY=;
Message-ID: <a22d37ae-46f5-6ead-26df-e4b34d23f4ac@xen.org>
Date: Thu, 25 May 2023 09:57:13 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v7 01/12] xen/arm: enable SVE extension for Xen
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-2-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230523074326.3035745-2-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 23/05/2023 08:43, Luca Fancellu wrote:
> Enable Xen to handle the SVE extension: add code in the cpufeature
> module to handle the ZCR SVE register, and disable trapping of the SVE
> feature on system boot only when SVE resources are accessed.
> While there, correct the coding style of the comment on coprocessor
> trapping.
> 
> cptr_el2 is now part of the domain context and will be restored on
> context switch. This is in preparation for saving the SVE context,
> which will be part of the VFP operations, so restore it before the
> call that saves the VFP registers.
> To save an additional isb barrier, restore cptr_el2 before an
> existing isb barrier and move the call that saves the VFP context
> after that barrier. To keep ctxt_switch_to() and ctxt_switch_from()
> (mostly) mirror images of each other, move vfp_save_state() up in
> the function.
> 
> Change the Kconfig entry to make the ARM64_SVE symbol selectable; by
> default it is not selected.
> 
> Create the sve module and sve_asm.S, which contains assembly routines
> for the SVE feature. This code is inspired by Linux and uses
> instruction encoding to stay compatible with compilers that do not
> support SVE; the imported instructions are documented in
> README.LinuxPrimitives.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 25 08:59:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 08:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539424.840278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26oF-0004p7-9y; Thu, 25 May 2023 08:58:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539424.840278; Thu, 25 May 2023 08:58:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26oF-0004p0-6w; Thu, 25 May 2023 08:58:59 +0000
Received: by outflank-mailman (input) for mailman id 539424;
 Thu, 25 May 2023 08:58:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JOix=BO=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1q26oE-0004nX-5n
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 08:58:58 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5f106e3a-fada-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 10:58:53 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E55A9616BE;
 Thu, 25 May 2023 08:58:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E5932C43442;
 Thu, 25 May 2023 08:58:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f106e3a-fada-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685005133;
	bh=oNoesoVEhmHSzqjLPuecf1DZL/ZoWi60ZyKcNVMfFVo=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=hiSzUHy75eUy1mVFCElzz3/neUE7rxHTDy2xRb8QXAlD+6lMoim/IeLDqnbd+2ow/
	 SO/JIK+ySPeuNGcKhVYLYIXFBPVj//3J6rmBEV0G3lN6xc01u+YyUUxRpXHWlPx6Pp
	 eEfQ/WXmV6xY0WSYRJNqg+5epfYbsC3S4xe/XrDch3HDF6usrp0dvIA10d4rEPqzvy
	 /cz0UmYos7f1FMG/9x7hluyjJ0c7dsCs3WwZ6QoPV8vqzEPTZL/cxobm8SySST/t9p
	 HbI0v8j8dZid+K0UklL55Bya63sq5cDfCnqZtoJrypH8E7sdkPNnq3dw+mBZR1DIpF
	 iohe6mj09FGwg==
Date: Thu, 25 May 2023 11:58:19 +0300
From: Mike Rapoport <rppt@kernel.org>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>
Subject: Re: [PATCH v2 02/34] s390: Use _pt_s390_gaddr for gmap address
 tracking
Message-ID: <20230525085819.GW4967@kernel.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-3-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230501192829.17086-3-vishal.moola@gmail.com>

On Mon, May 01, 2023 at 12:27:57PM -0700, Vishal Moola (Oracle) wrote:
> s390 uses page->index to keep track of page tables for the guest address
> space. In an attempt to consolidate the usage of page fields in s390,
> replace _pt_pad_2 with _pt_s390_gaddr, which takes over the role of
> page->index in gmap.
> 
> This will help with the splitting of struct ptdesc from struct page, as
> well as allow s390 to use _pt_frag_refcount for fragmented page table
> tracking.
> 
> Since page->_pt_s390_gaddr aliases with mapping, ensure it is set to
> NULL before freeing the pages as well.

Wouldn't it be easier to use _pt_pad_1, which is aliased with lru and
does not seem to be used by page tables at all?
 
> This also reverts commit 7e25de77bc5ea ("s390/mm: use pmd_pgtable_page()
> helper in __gmap_segment_gaddr()") which had s390 use
> pmd_pgtable_page() to get a gmap page table, as pmd_pgtable_page()
> should be used for more generic process page tables.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  arch/s390/mm/gmap.c      | 56 +++++++++++++++++++++++++++-------------
>  include/linux/mm_types.h |  2 +-
>  2 files changed, 39 insertions(+), 19 deletions(-)
> 
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index dfe905c7bd8e..a9e8b1805894 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -70,7 +70,7 @@ static struct gmap *gmap_alloc(unsigned long limit)
>  	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
>  	if (!page)
>  		goto out_free;
> -	page->index = 0;
> +	page->_pt_s390_gaddr = 0;
>  	list_add(&page->lru, &gmap->crst_list);
>  	table = page_to_virt(page);
>  	crst_table_init(table, etype);
> @@ -187,16 +187,20 @@ static void gmap_free(struct gmap *gmap)
>  	if (!(gmap_is_shadow(gmap) && gmap->removed))
>  		gmap_flush_tlb(gmap);
>  	/* Free all segment & region tables. */
> -	list_for_each_entry_safe(page, next, &gmap->crst_list, lru)
> +	list_for_each_entry_safe(page, next, &gmap->crst_list, lru) {
> +		page->_pt_s390_gaddr = 0;
>  		__free_pages(page, CRST_ALLOC_ORDER);
> +	}
>  	gmap_radix_tree_free(&gmap->guest_to_host);
>  	gmap_radix_tree_free(&gmap->host_to_guest);
>  
>  	/* Free additional data for a shadow gmap */
>  	if (gmap_is_shadow(gmap)) {
>  		/* Free all page tables. */
> -		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
> +		list_for_each_entry_safe(page, next, &gmap->pt_list, lru) {
> +			page->_pt_s390_gaddr = 0;
>  			page_table_free_pgste(page);
> +		}
>  		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
>  		/* Release reference to the parent */
>  		gmap_put(gmap->parent);
> @@ -318,12 +322,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
>  		list_add(&page->lru, &gmap->crst_list);
>  		*table = __pa(new) | _REGION_ENTRY_LENGTH |
>  			(*table & _REGION_ENTRY_TYPE_MASK);
> -		page->index = gaddr;
> +		page->_pt_s390_gaddr = gaddr;
>  		page = NULL;
>  	}
>  	spin_unlock(&gmap->guest_table_lock);
> -	if (page)
> +	if (page) {
> +		page->_pt_s390_gaddr = 0;
>  		__free_pages(page, CRST_ALLOC_ORDER);
> +	}
>  	return 0;
>  }
>  
> @@ -336,12 +342,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
>  static unsigned long __gmap_segment_gaddr(unsigned long *entry)
>  {
>  	struct page *page;
> -	unsigned long offset;
> +	unsigned long offset, mask;
>  
>  	offset = (unsigned long) entry / sizeof(unsigned long);
>  	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
> -	page = pmd_pgtable_page((pmd_t *) entry);
> -	return page->index + offset;
> +	mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
> +	page = virt_to_page((void *)((unsigned long) entry & mask));
> +
> +	return page->_pt_s390_gaddr + offset;
>  }
>  
>  /**
> @@ -1351,6 +1359,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
>  	/* Free page table */
>  	page = phys_to_page(pgt);
>  	list_del(&page->lru);
> +	page->_pt_s390_gaddr = 0;
>  	page_table_free_pgste(page);
>  }
>  
> @@ -1379,6 +1388,7 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
>  		/* Free page table */
>  		page = phys_to_page(pgt);
>  		list_del(&page->lru);
> +		page->_pt_s390_gaddr = 0;
>  		page_table_free_pgste(page);
>  	}
>  }
> @@ -1409,6 +1419,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
>  	/* Free segment table */
>  	page = phys_to_page(sgt);
>  	list_del(&page->lru);
> +	page->_pt_s390_gaddr = 0;
>  	__free_pages(page, CRST_ALLOC_ORDER);
>  }
>  
> @@ -1437,6 +1448,7 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
>  		/* Free segment table */
>  		page = phys_to_page(sgt);
>  		list_del(&page->lru);
> +		page->_pt_s390_gaddr = 0;
>  		__free_pages(page, CRST_ALLOC_ORDER);
>  	}
>  }
> @@ -1467,6 +1479,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
>  	/* Free region 3 table */
>  	page = phys_to_page(r3t);
>  	list_del(&page->lru);
> +	page->_pt_s390_gaddr = 0;
>  	__free_pages(page, CRST_ALLOC_ORDER);
>  }
>  
> @@ -1495,6 +1508,7 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
>  		/* Free region 3 table */
>  		page = phys_to_page(r3t);
>  		list_del(&page->lru);
> +		page->_pt_s390_gaddr = 0;
>  		__free_pages(page, CRST_ALLOC_ORDER);
>  	}
>  }
> @@ -1525,6 +1539,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
>  	/* Free region 2 table */
>  	page = phys_to_page(r2t);
>  	list_del(&page->lru);
> +	page->_pt_s390_gaddr = 0;
>  	__free_pages(page, CRST_ALLOC_ORDER);
>  }
>  
> @@ -1557,6 +1572,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
>  		/* Free region 2 table */
>  		page = phys_to_page(r2t);
>  		list_del(&page->lru);
> +		page->_pt_s390_gaddr = 0;
>  		__free_pages(page, CRST_ALLOC_ORDER);
>  	}
>  }
> @@ -1762,9 +1778,9 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
>  	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
>  	if (!page)
>  		return -ENOMEM;
> -	page->index = r2t & _REGION_ENTRY_ORIGIN;
> +	page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
>  	if (fake)
> -		page->index |= GMAP_SHADOW_FAKE_TABLE;
> +		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
>  	s_r2t = page_to_phys(page);
>  	/* Install shadow region second table */
>  	spin_lock(&sg->guest_table_lock);
> @@ -1814,6 +1830,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
>  	return rc;
>  out_free:
>  	spin_unlock(&sg->guest_table_lock);
> +	page->_pt_s390_gaddr = 0;
>  	__free_pages(page, CRST_ALLOC_ORDER);
>  	return rc;
>  }
> @@ -1846,9 +1863,9 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
>  	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
>  	if (!page)
>  		return -ENOMEM;
> -	page->index = r3t & _REGION_ENTRY_ORIGIN;
> +	page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
>  	if (fake)
> -		page->index |= GMAP_SHADOW_FAKE_TABLE;
> +		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
>  	s_r3t = page_to_phys(page);
>  	/* Install shadow region second table */
>  	spin_lock(&sg->guest_table_lock);
> @@ -1898,6 +1915,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
>  	return rc;
>  out_free:
>  	spin_unlock(&sg->guest_table_lock);
> +	page->_pt_s390_gaddr = 0;
>  	__free_pages(page, CRST_ALLOC_ORDER);
>  	return rc;
>  }
> @@ -1930,9 +1948,9 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
>  	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
>  	if (!page)
>  		return -ENOMEM;
> -	page->index = sgt & _REGION_ENTRY_ORIGIN;
> +	page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
>  	if (fake)
> -		page->index |= GMAP_SHADOW_FAKE_TABLE;
> +		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
>  	s_sgt = page_to_phys(page);
>  	/* Install shadow region second table */
>  	spin_lock(&sg->guest_table_lock);
> @@ -1982,6 +2000,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
>  	return rc;
>  out_free:
>  	spin_unlock(&sg->guest_table_lock);
> +	page->_pt_s390_gaddr = 0;
>  	__free_pages(page, CRST_ALLOC_ORDER);
>  	return rc;
>  }
> @@ -2014,9 +2033,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
>  	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
>  		/* Shadow page tables are full pages (pte+pgste) */
>  		page = pfn_to_page(*table >> PAGE_SHIFT);
> -		*pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE;
> +		*pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
>  		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
> -		*fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE);
> +		*fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
>  		rc = 0;
>  	} else  {
>  		rc = -EAGAIN;
> @@ -2054,9 +2073,9 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
>  	page = page_table_alloc_pgste(sg->mm);
>  	if (!page)
>  		return -ENOMEM;
> -	page->index = pgt & _SEGMENT_ENTRY_ORIGIN;
> +	page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
>  	if (fake)
> -		page->index |= GMAP_SHADOW_FAKE_TABLE;
> +		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
>  	s_pgt = page_to_phys(page);
>  	/* Install shadow page table */
>  	spin_lock(&sg->guest_table_lock);
> @@ -2101,6 +2120,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
>  	return rc;
>  out_free:
>  	spin_unlock(&sg->guest_table_lock);
> +	page->_pt_s390_gaddr = 0;
>  	page_table_free_pgste(page);
>  	return rc;
>  
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 306a3d1a0fa6..6161fe1ae5b8 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -144,7 +144,7 @@ struct page {
>  		struct {	/* Page table pages */
>  			unsigned long _pt_pad_1;	/* compound_head */
>  			pgtable_t pmd_huge_pte; /* protected by page->ptl */
> -			unsigned long _pt_pad_2;	/* mapping */
> +			unsigned long _pt_s390_gaddr;	/* mapping */
>  			union {
>  				struct mm_struct *pt_mm; /* x86 pgds only */
>  				atomic_t pt_frag_refcount; /* powerpc */
> -- 
> 2.39.2
> 
> 

-- 
Sincerely yours,
Mike.


From xen-devel-bounces@lists.xenproject.org Thu May 25 08:59:55 2023
Message-ID: <9dbf2ceb-e39f-3c9d-7790-51524ba67d5d@xen.org>
Date: Thu, 25 May 2023 09:59:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v7 02/12] xen/arm: add SVE vector length field to the
 domain
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-3-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230523074326.3035745-3-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 23/05/2023 08:43, Luca Fancellu wrote:
> Add an sve_vl field to the arch_domain and xen_arch_domainconfig
> structs, so that the domain carries information about the SVE
> feature and the number of SVE register bits allowed for it.
> 
> The sve_vl field is the vector length in bits divided by 128;
> this encoding uses less space in the structures.
> 
> The field is also used to allow or forbid a domain to use SVE:
> a value of zero means the guest is not allowed to use the
> feature.
> 
> Check that the requested vector length is less than or equal to
> the platform supported vector length, otherwise fail on domain
> creation.
> 
> Check that only 64-bit domains have SVE enabled, otherwise fail.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> Changes from v6:
>   - Style fix, have is_sve_domain as static inline instead of macro
>     (Julien)
> Changes from v5:
>   - Update commit message stating the interface ver. bump (Bertrand)
>   - in struct arch_domain, protect sve_vl with CONFIG_ARM64_SVE,
>     given the change, move also is_sve_domain() where it's protected
>     inside sve.h and create a stub when the macro is not defined,
>     protect the usage of sve_vl where needed.
>     (Julien)
>   - Add a check for 32 bit guest running on top of 64 bit host that
>     have sve parameter enabled to stop the domain creation, added in
>     construct_domain() of domain_build.c and subarch_do_domctl of
>     domctl.c. (Julien)
> Changes from v4:
>   - Return 0 in get_sys_vl_len() if sve is not supported, code style fix,
>     removed else if since the conditions can't fallthrough, removed not
>     needed condition checking for VL bits validity because it's already
>     covered, so delete is_vl_valid() function. (Jan)
> Changes from v3:
>   - don't use fixed types when not needed, use the encoded value also
>     in arch_domain, so rename sve_vl_bits to sve_vl. (Jan)
>   - rename domainconfig_decode_vl to sve_decode_vl because it will now
>     be used also to decode from arch_domain value
>   - change sve_vl from uint16_t to uint8_t and move it after "type" field
>     to optimize space.
> Changes from v2:
>   - rename field in xen_arch_domainconfig from "sve_vl_bits" to
>     "sve_vl" and use the implicit padding after gic_version to
>     store it, now this field is the VL/128. (Jan)
>   - Created domainconfig_decode_vl() function to decode the sve_vl
>     field and use it as plain bits value inside arch_domain.
>   - Changed commit message reflecting the changes
> Changes from v1:
>   - no changes
> Changes from RFC:
>   - restore zcr_el2 in sve_restore_state, that will be introduced
>     later in this series, so remove zcr_el2 related code from this
>     patch and move everything to the later patch (Julien)
>   - add explicit padding into struct xen_arch_domainconfig (Julien)
>   - Don't lower down the vector length, just fail to create the
>     domain. (Julien)
> ---
>   xen/arch/arm/arm64/domctl.c          |  4 ++++
>   xen/arch/arm/arm64/sve.c             | 12 +++++++++++
>   xen/arch/arm/domain.c                | 29 ++++++++++++++++++++++++++
>   xen/arch/arm/domain_build.c          |  7 +++++++
>   xen/arch/arm/include/asm/arm64/sve.h | 31 ++++++++++++++++++++++++++++
>   xen/arch/arm/include/asm/domain.h    |  5 +++++
>   xen/include/public/arch-arm.h        |  2 ++
>   7 files changed, 90 insertions(+)
> 
> diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
> index 0de89b42c448..14fc622e9956 100644
> --- a/xen/arch/arm/arm64/domctl.c
> +++ b/xen/arch/arm/arm64/domctl.c
> @@ -10,6 +10,7 @@
>   #include <xen/sched.h>
>   #include <xen/hypercall.h>
>   #include <public/domctl.h>
> +#include <asm/arm64/sve.h>
>   #include <asm/cpufeature.h>
>   
>   static long switch_mode(struct domain *d, enum domain_type type)
> @@ -43,6 +44,9 @@ long subarch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>           case 32:
>               if ( !cpu_has_el1_32 )
>                   return -EINVAL;
> +            /* SVE is not supported for 32 bit domain */
> +            if ( is_sve_domain(d) )
> +                return -EINVAL;
>               return switch_mode(d, DOMAIN_32BIT);
>           case 64:
>               return switch_mode(d, DOMAIN_64BIT);
> diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
> index e05ccc38a896..a9144e48ef6b 100644
> --- a/xen/arch/arm/arm64/sve.c
> +++ b/xen/arch/arm/arm64/sve.c
> @@ -8,6 +8,7 @@
>   #include <xen/types.h>
>   #include <asm/arm64/sve.h>
>   #include <asm/arm64/sysregs.h>
> +#include <asm/cpufeature.h>
>   #include <asm/processor.h>
>   #include <asm/system.h>
>   
> @@ -49,6 +50,17 @@ register_t compute_max_zcr(void)
>       return vl_to_zcr(hw_vl);
>   }
>   
> +/* Get the system sanitized value for VL in bits */
> +unsigned int get_sys_vl_len(void)
> +{
> +    if ( !cpu_has_sve )
> +        return 0;
> +
> +    /* ZCR_ELx len field is ((len + 1) * 128) = vector bits length */
> +    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
> +            SVE_VL_MULTIPLE_VAL;
> +}
> +
>   /*
>    * Local variables:
>    * mode: C
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index d5ab15db46c4..6c22551b0ed2 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -13,6 +13,7 @@
>   #include <xen/wait.h>
>   
>   #include <asm/alternative.h>
> +#include <asm/arm64/sve.h>
>   #include <asm/cpuerrata.h>
>   #include <asm/cpufeature.h>
>   #include <asm/current.h>
> @@ -555,6 +556,8 @@ int arch_vcpu_create(struct vcpu *v)
>       v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
>   
>       v->arch.cptr_el2 = get_default_cptr_flags();
> +    if ( is_sve_domain(v->domain) )
> +        v->arch.cptr_el2 &= ~HCPTR_CP(8);
>   
>       v->arch.hcr_el2 = get_default_hcr_flags();
>   
> @@ -599,6 +602,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>       unsigned int max_vcpus;
>       unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
>       unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
> +    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
>   
>       if ( (config->flags & ~flags_optional) != flags_required )
>       {
> @@ -607,6 +611,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>           return -EINVAL;
>       }
>   
> +    /* Check feature flags */
> +    if ( sve_vl_bits > 0 )
> +    {
> +        unsigned int zcr_max_bits = get_sys_vl_len();
> +
> +        if ( !zcr_max_bits )
> +        {
> +            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
> +            return -EINVAL;
> +        }
> +
> +        if ( sve_vl_bits > zcr_max_bits )
> +        {
> +            dprintk(XENLOG_INFO,
> +                    "Requested SVE vector length (%u) > supported length (%u)\n",
> +                    sve_vl_bits, zcr_max_bits);
> +            return -EINVAL;
> +        }
> +    }
> +
>       /* The P2M table must always be shared between the CPU and the IOMMU */
>       if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
>       {
> @@ -749,6 +773,11 @@ int arch_domain_create(struct domain *d,
>       if ( (rc = domain_vpci_init(d)) != 0 )
>           goto fail;
>   
> +#ifdef CONFIG_ARM64_SVE
> +    /* Copy the encoded vector length sve_vl from the domain configuration */
> +    d->arch.sve_vl = config->arch.sve_vl;
> +#endif
> +
>       return 0;
>   
>   fail:
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 71f307a572e9..9dd1ed5bce44 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -26,6 +26,7 @@
>   #include <asm/platform.h>
>   #include <asm/psci.h>
>   #include <asm/setup.h>
> +#include <asm/arm64/sve.h>
>   #include <asm/cpufeature.h>
>   #include <asm/domain_build.h>
>   #include <xen/event.h>
> @@ -3670,6 +3671,12 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
>           return -EINVAL;
>       }
>   
> +    if ( is_sve_domain(d) && (kinfo->type == DOMAIN_32BIT) )
> +    {
> +        printk("SVE is not available for 32-bit domain\n");
> +        return -EINVAL;
> +    }
> +
>       if ( is_64bit_domain(d) )
>           vcpu_switch_to_aarch64_mode(v);
>   
> diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
> index c0466243c7bc..4b63412727fc 100644
> --- a/xen/arch/arm/include/asm/arm64/sve.h
> +++ b/xen/arch/arm/include/asm/arm64/sve.h
> @@ -8,13 +8,44 @@
>   #ifndef _ARM_ARM64_SVE_H
>   #define _ARM_ARM64_SVE_H
>   
> +#include <xen/sched.h>
> +
>   #define SVE_VL_MAX_BITS 2048U
>   
>   /* Vector length must be multiple of 128 */
>   #define SVE_VL_MULTIPLE_VAL 128U
>   
> +static inline unsigned int sve_decode_vl(unsigned int sve_vl)
> +{
> +    /* SVE vector length is stored as VL/128 in xen_arch_domainconfig */
> +    return sve_vl * SVE_VL_MULTIPLE_VAL;
> +}
> +
>   register_t compute_max_zcr(void);
>   
> +#ifdef CONFIG_ARM64_SVE
> +
> +static inline bool is_sve_domain(const struct domain *d)
> +{
> +    return d->arch.sve_vl > 0;
> +}
> +
> +unsigned int get_sys_vl_len(void);
> +
> +#else /* !CONFIG_ARM64_SVE */
> +
> +static inline bool is_sve_domain(const struct domain *d)
> +{
> +    return false;
> +}
> +
> +static inline unsigned int get_sys_vl_len(void)
> +{
> +    return 0;
> +}
> +
> +#endif /* CONFIG_ARM64_SVE */
> +
>   #endif /* _ARM_ARM64_SVE_H */
>   
>   /*
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index e776ee704b7d..331da0f3bcc3 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -67,6 +67,11 @@ struct arch_domain
>       enum domain_type type;
>   #endif
>   
> +#ifdef CONFIG_ARM64_SVE
> +    /* max SVE encoded vector length */
> +    uint8_t sve_vl;
> +#endif
> +
>       /* Virtual MMU */
>       struct p2m_domain p2m;
>   
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 1528ced5097a..38311f559581 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -300,6 +300,8 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
>   struct xen_arch_domainconfig {
>       /* IN/OUT */
>       uint8_t gic_version;
> +    /* IN - Contains SVE vector length divided by 128 */
> +    uint8_t sve_vl;
>       /* IN */
>       uint16_t tee_type;
>       /* IN */

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:04:08 2023
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 11/12] xen/arm: add sve property for dom0less domUs
Thread-Topic: [PATCH v7 11/12] xen/arm: add sve property for dom0less domUs
Thread-Index: AQHZjUpY6xFVIVbG80+SN4dEj9SmF69pi9WAgAEmH4CAAAC+AIAAAkSA
Date: Thu, 25 May 2023 09:03:40 +0000
Message-ID: <65C07834-FFE3-4158-BFD0-21A7888730A6@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-12-luca.fancellu@arm.com>
 <D01601D3-8215-46A0-8539-CF126A5FE11F@arm.com>
 <3dcbec7c-87d2-31cc-ad0f-d8ff4c8baeae@amd.com>
 <D7FF883F-D8DF-48A0-A32D-15F2D0CF7426@arm.com>
In-Reply-To: <D7FF883F-D8DF-48A0-A32D-15F2D0CF7426@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Luca,

> On 25 May 2023, at 10:55, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>
>
>
>> On 25 May 2023, at 09:52, Michal Orzel <michal.orzel@amd.com> wrote:
>>
>> Hi Luca,
>>
>> Sorry for jumping into this but I just wanted to read the dt binding doc and spotted one thing by accident.
>>
>> On 24/05/2023 17:20, Bertrand Marquis wrote:
>>>
>>>
>>> Hi Luca,
>>>
>>>> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>>>
>>>> Add a device tree property in the dom0less domU configuration
>>>> to enable the guest to use SVE.
>>>>
>>>> Update documentation.
>>>>
>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>
>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> (...)
>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>> index 9202a96d9c28..ba4fe9e165ee 100644
>>>> --- a/xen/arch/arm/domain_build.c
>>>> +++ b/xen/arch/arm/domain_build.c
>>>> @@ -4008,6 +4008,34 @@ void __init create_domUs(void)
>>>>           d_cfg.max_maptrack_frames = val;
>>>>       }
>>>>
>>>> +        if ( dt_get_property(node, "sve", &val) )
>>>> +        {
>>>> +#ifdef CONFIG_ARM64_SVE
>>>> +            unsigned int sve_vl_bits;
>>>> +            bool ret = false;
>>>> +
>>>> +            if ( !val )
>>>> +            {
>>>> +                /* Property found with no value, means max HW VL supported */
>>>> +                ret = sve_domctl_vl_param(-1, &sve_vl_bits);
>>>> +            }
>>>> +            else
>>>> +            {
>>>> +                if ( dt_property_read_u32(node, "sve", &val) )
>>>> +                    ret = sve_domctl_vl_param(val, &sve_vl_bits);
>>>> +                else
>>>> +                    panic("Error reading 'sve' property");
>> Both here and ...
>>
>>>> +            }
>>>> +
>>>> +            if ( ret )
>>>> +                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
>>>> +            else
>>>> +                panic("SVE vector length error\n");
>>>> +#else
>>>> +            panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
>> here you are missing \n at the end of string. If you take a look at panic() implementation,
>> new line char is not added so in your case it would result in an ugly formatted panic message.
>
> Hi Michal,
>
> Thank you for pointing that out! Indeed there might be some issues, I will fix in the next push.
>
> @Bertrand, can I retain your R-by with this fix?

Yes

Bertrand

>
>>
>> ~Michal




From xen-devel-bounces@lists.xenproject.org Thu May 25 09:04:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 09:04:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539438.840308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26t9-0007Cn-Ji; Thu, 25 May 2023 09:04:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539438.840308; Thu, 25 May 2023 09:04:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26t9-0007Ce-FY; Thu, 25 May 2023 09:04:03 +0000
Received: by outflank-mailman (input) for mailman id 539438;
 Thu, 25 May 2023 09:04:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UGVO=BO=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1q26t8-0006wv-1p
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 09:04:02 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e8b::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 15c44897-fadb-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 11:04:00 +0200 (CEST)
Received: from MWH0EPF00056D0E.namprd21.prod.outlook.com
 (2603:10b6:30f:fff2:0:1:0:12) by DM8PR12MB5448.namprd12.prod.outlook.com
 (2603:10b6:8:27::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 09:03:57 +0000
Received: from CO1NAM11FT044.eop-nam11.prod.protection.outlook.com
 (2a01:111:f400:7eab::207) by MWH0EPF00056D0E.outlook.office365.com
 (2603:1036:d20::b) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.6 via Frontend
 Transport; Thu, 25 May 2023 09:03:56 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1NAM11FT044.mail.protection.outlook.com (10.13.175.188) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6433.17 via Frontend Transport; Thu, 25 May 2023 09:03:56 +0000
Received: from SATLEXMB07.amd.com (10.181.41.45) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 25 May
 2023 04:03:55 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB07.amd.com
 (10.181.41.45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 25 May
 2023 02:03:55 -0700
Received: from [10.71.193.39] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 25 May 2023 04:03:54 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15c44897-fadb-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <b4d27103-6b02-d5af-980b-e9be6f41469e@amd.com>
Date: Thu, 25 May 2023 11:03:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v7 11/12] xen/arm: add sve property for dom0less domUs
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
	"Volodymyr Babchuk" <Volodymyr_Babchuk@epam.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-12-luca.fancellu@arm.com>
 <D01601D3-8215-46A0-8539-CF126A5FE11F@arm.com>
 <3dcbec7c-87d2-31cc-ad0f-d8ff4c8baeae@amd.com>
 <D7FF883F-D8DF-48A0-A32D-15F2D0CF7426@arm.com>
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <D7FF883F-D8DF-48A0-A32D-15F2D0CF7426@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1NAM11FT044:EE_|DM8PR12MB5448:EE_
X-MS-Office365-Filtering-Correlation-Id: ac2bba45-a760-43c3-6dc0-08db5cfef885
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 09:03:56.2593
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ac2bba45-a760-43c3-6dc0-08db5cfef885
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1NAM11FT044.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM8PR12MB5448



On 25/05/2023 10:55, Luca Fancellu wrote:
> 
> 
>> On 25 May 2023, at 09:52, Michal Orzel <michal.orzel@amd.com> wrote:
>>
>> Hi Luca,
>>
>> Sorry for jumping into this but I just wanted to read the dt binding doc and spotted one thing by accident.
>>
>> On 24/05/2023 17:20, Bertrand Marquis wrote:
>>>
>>>
>>> Hi Luca,
>>>
>>>> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>>>
>>>> Add a device tree property in the dom0less domU configuration
>>>> to enable the guest to use SVE.
>>>>
>>>> Update documentation.
>>>>
>>>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>>>
>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> (...)
>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>> index 9202a96d9c28..ba4fe9e165ee 100644
>>>> --- a/xen/arch/arm/domain_build.c
>>>> +++ b/xen/arch/arm/domain_build.c
>>>> @@ -4008,6 +4008,34 @@ void __init create_domUs(void)
>>>>            d_cfg.max_maptrack_frames = val;
>>>>        }
>>>>
>>>> +        if ( dt_get_property(node, "sve", &val) )
>>>> +        {
>>>> +#ifdef CONFIG_ARM64_SVE
>>>> +            unsigned int sve_vl_bits;
>>>> +            bool ret = false;
>>>> +
>>>> +            if ( !val )
>>>> +            {
>>>> +                /* Property found with no value, means max HW VL supported */
>>>> +                ret = sve_domctl_vl_param(-1, &sve_vl_bits);
>>>> +            }
>>>> +            else
>>>> +            {
>>>> +                if ( dt_property_read_u32(node, "sve", &val) )
>>>> +                    ret = sve_domctl_vl_param(val, &sve_vl_bits);
>>>> +                else
>>>> +                    panic("Error reading 'sve' property");
>> Both here and ...
>>
>>>> +            }
>>>> +
>>>> +            if ( ret )
>>>> +                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
>>>> +            else
>>>> +                panic("SVE vector length error\n");
>>>> +#else
>>>> +            panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
>> here you are missing \n at the end of string. If you take a look at panic() implementation,
>> new line char is not added so in your case it would result in an ugly formatted panic message.
> 
> Hi Michal,
> 
> Thank you for pointing that out! Indeed there might be some issues, I will fix in the next push.
With that fixed,
Reviewed-by: Michal Orzel <michal.orzel@amd.com>

~Michal



From xen-devel-bounces@lists.xenproject.org Thu May 25 09:06:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 09:06:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539445.840318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26v2-00083o-Ui; Thu, 25 May 2023 09:06:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539445.840318; Thu, 25 May 2023 09:06:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26v2-00083h-RM; Thu, 25 May 2023 09:06:00 +0000
Received: by outflank-mailman (input) for mailman id 539445;
 Thu, 25 May 2023 09:06:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q26v2-00083X-3W
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 09:06:00 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20605.outbound.protection.outlook.com
 [2a01:111:f400:fe13::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5c96a40d-fadb-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 11:05:58 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8622.eurprd04.prod.outlook.com (2603:10a6:102:219::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 09:05:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 09:05:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c96a40d-fadb-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <25a73290-7677-e202-5e91-fdf32ad5c01b@suse.com>
Date: Thu, 25 May 2023 11:05:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] vpci/header: cope with devices not having vpci allocated
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20230525083051.30366-1-roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230525083051.30366-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0130.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8622:EE_
X-MS-Office365-Filtering-Correlation-Id: cd2a1d50-7ec7-4a08-73c5-08db5cff3ea4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cd2a1d50-7ec7-4a08-73c5-08db5cff3ea4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 09:05:54.1766
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Ff3yv9ieeLICbhXZE0F4YHL5Wx+8lYBae/j0CjJ9MMNBUyhYOV80jQJeCR9j6EhbgyBNNQYiPwqp/N88q552Vw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8622

On 25.05.2023 10:30, Roger Pau Monne wrote:
> When traversing the list of pci devices assigned to a domain cope with
> some of them not having the vpci struct allocated. It's possible for
> the hardware domain to have read-only devices assigned that are not
> handled by vPCI, or for unprivileged domains to have some devices
> handled by an emulator different than vPCI.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/drivers/vpci/header.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> index ec2e978a4e6b..3c1fcfb208cf 100644
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -289,6 +289,20 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>       */
>      for_each_pdev ( pdev->domain, tmp )
>      {
> +        if ( !tmp->vpci )
> +            /*
> +             * For the hardware domain it's possible to have devices assigned
> +             * to it that are not handled by vPCI, either because those are
> +             * read-only devices, or because vPCI setup has failed.

So this really is a forward looking comment, becoming true only (aiui)
when my patch also makes it in.

> +             * For unprivileged domains we should aim for passthrough devices
> +             * to be capable of being handled by different emulators, and hence
> +             * a domain could have some devices handled by vPCI and others by
> +             * QEMU for example, and the latter won't have pdev->vpci
> +             * allocated.

This, otoh, I don't understand: Do we really intend to have pass-through
devices handled by qemu or alike, for PVH? Or are you thinking of hybrid
HVM (some vPCI, some qemu)? Plus, when considering hybrid guests, won't
we need to take into account BARs of externally handled devices as well,
to avoid overlaps?

In any event, until the DomU side picture is more clear, I'd suggest we
avoid making statements pinning down expectations. That said, you're the
maintainer, so if you really want it like this, so be it.

Jan

> +             */
> +            continue;
> +
>          if ( tmp == pdev )
>          {
>              /*



From xen-devel-bounces@lists.xenproject.org Thu May 25 09:09:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 09:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539451.840328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26yR-0000IR-GU; Thu, 25 May 2023 09:09:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539451.840328; Thu, 25 May 2023 09:09:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q26yR-0000IK-DZ; Thu, 25 May 2023 09:09:31 +0000
Received: by outflank-mailman (input) for mailman id 539451;
 Thu, 25 May 2023 09:09:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q26yQ-0000ID-DR
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 09:09:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q26yP-0000tN-UJ; Thu, 25 May 2023 09:09:29 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.31.224]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q26yP-0005wh-NZ; Thu, 25 May 2023 09:09:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <5b09d5d9-360f-d0ef-06c4-6efe1b660a90@xen.org>
Date: Thu, 25 May 2023 10:09:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v7 05/12] arm/sve: save/restore SVE context switch
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-6-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230523074326.3035745-6-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Luca,

On 23/05/2023 08:43, Luca Fancellu wrote:
> +int sve_context_init(struct vcpu *v)
> +{
> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
> +                             sve_ffrreg_ctx_size(sve_vl_bits),
> +                             L1_CACHE_BYTES);
> +
> +    if ( !ctx )
> +        return -ENOMEM;
> +
> +    /*
> +     * Point to the end of Z0-Z31 memory, just before FFR memory, to be kept in
> +     * sync with sve_context_free()

Nit: Missing a full stop.

> +     */
> +    v->arch.vfp.sve_zreg_ctx_end = ctx +
> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
> +
> +    v->arch.zcr_el2 = vl_to_zcr(sve_vl_bits);
> +
> +    return 0;
> +}
> +
> +void sve_context_free(struct vcpu *v)
> +{
> +    unsigned int sve_vl_bits;
> +
> +    if ( v->arch.vfp.sve_zreg_ctx_end )
> +        return;
> +
> +    sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
> +
> +    /*
> +    * Point to the end of Z0-Z31 memory, just before FFR memory, to be kept
> +    * in sync with sve_context_init()
> +    */

The spacing looks a bit odd in this comment. Did you miss an extra space?

Also, I notice this comment is exactly the same as the one on top of 
sve_context_init(). I think this is a bit misleading because the logic 
is different. I would suggest the following:

"Currently points to the end of Z0-Z31 memory which is not the start of 
the buffer. To be kept in sync with the sve_context_init()."

Lastly, nit: Missing a full stop.

> +    v->arch.vfp.sve_zreg_ctx_end -=
> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
> +
> +    XFREE(v->arch.vfp.sve_zreg_ctx_end);
> +}
> +

[...]

> diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
> index e6e8c363bc16..4aa371e85d26 100644
> --- a/xen/arch/arm/include/asm/arm64/vfp.h
> +++ b/xen/arch/arm/include/asm/arm64/vfp.h
> @@ -6,7 +6,19 @@
>   
>   struct vfp_state
>   {
> +    /*
> +     * When SVE is enabled for the guest, fpregs memory will be used to
> +     * save/restore P0-P15 registers, otherwise it will be used for the V0-V31
> +     * registers.
> +     */
>       uint64_t fpregs[64] __vfp_aligned;
> +    /*
> +     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
> +     * where Z0-Z31 registers and FFR can be saved/restored, it points at the
> +     * end of the Z0-Z31 space and at the beginning of the FFR space, it's done
> +     * like that to ease the save/restore assembly operations.
> +     */
> +    uint64_t *sve_zreg_ctx_end;

Sorry I only noticed now. But shouldn't this be protected with #ifdef 
CONFIG_SVE? Same...

>       register_t fpcr;
>       register_t fpexc32_el2;
>       register_t fpsr;
> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
> index 331da0f3bcc3..814652d92568 100644
> --- a/xen/arch/arm/include/asm/domain.h
> +++ b/xen/arch/arm/include/asm/domain.h
> @@ -195,6 +195,8 @@ struct arch_vcpu
>       register_t tpidrro_el0;
>   
>       /* HYP configuration */
> +    register_t zcr_el1;
> +    register_t zcr_el2;

... here.

>       register_t cptr_el2;
>       register_t hcr_el2;
>       register_t mdcr_el2;

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:10:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 09:10:38 +0000
Date: Thu, 25 May 2023 12:09:56 +0300
From: Mike Rapoport <rppt@kernel.org>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 05/34] mm: add utility functions for ptdesc
Message-ID: <20230525090956.GX4967@kernel.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-6-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230501192829.17086-6-vishal.moola@gmail.com>

On Mon, May 01, 2023 at 12:28:00PM -0700, Vishal Moola (Oracle) wrote:
> Introduce utility functions setting the foundation for ptdescs. These
> will also assist in the splitting out of ptdesc from struct page.
> 
> ptdesc_alloc() is defined to allocate new ptdesc pages as compound
> pages. This is to standardize ptdescs by allowing for one allocation
> and one free function, in contrast to 2 allocation and 2 free functions.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  include/asm-generic/tlb.h | 11 ++++++++++
>  include/linux/mm.h        | 44 +++++++++++++++++++++++++++++++++++++++
>  include/linux/pgtable.h   | 12 +++++++++++
>  3 files changed, 67 insertions(+)
> 
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index b46617207c93..6bade9e0e799 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
>  	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
>  }
>  
> +static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
> +{
> +	tlb_remove_table(tlb, pt);
> +}
> +
> +/* Like tlb_remove_ptdesc, but for page-like page directories. */
> +static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
> +{
> +	tlb_remove_page(tlb, ptdesc_page(pt));
> +}
> +
>  static inline void tlb_change_page_size(struct mmu_gather *tlb,
>  						     unsigned int page_size)
>  {
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b18848ae7e22..258f3b730359 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2744,6 +2744,45 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
>  }
>  #endif /* CONFIG_MMU */
>  
> +static inline struct ptdesc *virt_to_ptdesc(const void *x)
> +{
> +	return page_ptdesc(virt_to_head_page(x));

Do we ever use compound pages for page tables?

> +}
> +
> +static inline void *ptdesc_to_virt(const struct ptdesc *pt)
> +{
> +	return page_to_virt(ptdesc_page(pt));
> +}
> +
> +static inline void *ptdesc_address(const struct ptdesc *pt)
> +{
> +	return folio_address(ptdesc_folio(pt));
> +}
> +
> +static inline bool ptdesc_is_reserved(struct ptdesc *pt)
> +{
> +	return folio_test_reserved(ptdesc_folio(pt));
> +}
> +
> +static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
> +{
> +	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> +
> +	return page_ptdesc(page);
> +}
> +
> +static inline void ptdesc_free(struct ptdesc *pt)
> +{
> +	struct page *page = ptdesc_page(pt);
> +
> +	__free_pages(page, compound_order(page));
> +}

The ptdesc_{alloc,free} API does not sound right to me. The name
ptdesc_alloc() implies the allocation of the ptdesc itself, rather than
the allocation of a page table page. The same goes for free.

> +
> +static inline void ptdesc_clear(void *x)
> +{
> +	clear_page(x);
> +}
> +
>  #if USE_SPLIT_PTE_PTLOCKS
>  #if ALLOC_SPLIT_PTLOCKS
>  void __init ptlock_cache_init(void);
> @@ -2970,6 +3009,11 @@ static inline void mark_page_reserved(struct page *page)
>  	adjust_managed_page_count(page, -1);
>  }
>  
> +static inline void free_reserved_ptdesc(struct ptdesc *pt)
> +{
> +	free_reserved_page(ptdesc_page(pt));
> +}
> +
>  /*
>   * Default method to free all the __init memory into the buddy system.
>   * The freed pages will be poisoned with pattern "poison" if it's within
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 5e0f51308724..b067ac10f3dd 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1041,6 +1041,18 @@ TABLE_MATCH(ptl, ptl);
>  #undef TABLE_MATCH
>  static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
>  
> +#define ptdesc_page(pt)			(_Generic((pt),			\
> +	const struct ptdesc *:		(const struct page *)(pt),	\
> +	struct ptdesc *:		(struct page *)(pt)))
> +
> +#define ptdesc_folio(pt)		(_Generic((pt),			\
> +	const struct ptdesc *:		(const struct folio *)(pt),	\
> +	struct ptdesc *:		(struct folio *)(pt)))
> +
> +#define page_ptdesc(p)			(_Generic((p),			\
> +	const struct page *:		(const struct ptdesc *)(p),	\
> +	struct page *:			(struct ptdesc *)(p)))
> +
>  /*
>   * No-op macros that just return the current protection value. Defined here
>   * because these macros can be used even if CONFIG_MMU is not defined.
> -- 
> 2.39.2
> 
> 

-- 
Sincerely yours,
Mike.


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:17:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 09:17:33 +0000
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: =?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross
	<jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>, David
 Scott <dave@recoil.org>, Christian Lindig <christian.lindig@cloud.com>
Subject: Re: [PATCH v7 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Topic: [PATCH v7 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Index: AQHZjUpkSkVWSiVT2k+ILuaW20r/+q9qrj4AgAAKbYA=
Date: Thu, 25 May 2023 09:16:56 +0000
Message-ID: <2BCDFCFC-30F0-4D61-AE92-65046CDD5696@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-10-luca.fancellu@arm.com> <ZG8evxN0mF8NDTPS@mail-itl>
In-Reply-To: <ZG8evxN0mF8NDTPS@mail-itl>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <9113EAF7132E7349BC4CD45983F337C7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

DQoNCj4gT24gMjUgTWF5IDIwMjMsIGF0IDA5OjM5LCBNYXJlayBNYXJjenlrb3dza2ktR8OzcmVj
a2kgPG1hcm1hcmVrQGludmlzaWJsZXRoaW5nc2xhYi5jb20+IHdyb3RlOg0KPiANCj4gT24gVHVl
LCBNYXkgMjMsIDIwMjMgYXQgMDg6NDM6MjNBTSArMDEwMCwgTHVjYSBGYW5jZWxsdSB3cm90ZToN
Cj4+IE9uIEFybSwgdGhlIFNWRSB2ZWN0b3IgbGVuZ3RoIGlzIGVuY29kZWQgaW4gYXJjaF9jYXBh
YmlsaXRpZXMgZmllbGQNCj4+IG9mIHN0cnVjdCB4ZW5fc3lzY3RsX3BoeXNpbmZvLCBtYWtlIHVz
ZSBvZiB0aGlzIGZpZWxkIGluIHRoZSB0b29scw0KPj4gd2hlbiBidWlsZGluZyBmb3IgYXJtLg0K
Pj4gDQo+PiBDcmVhdGUgaGVhZGVyIGFybS1hcmNoLWNhcGFiaWxpdGllcy5oIHRvIGhhbmRsZSB0
aGUgYXJjaF9jYXBhYmlsaXRpZXMNCj4+IGZpZWxkIG9mIHBoeXNpbmZvIGZvciBBcm0uDQo+PiAN
Cj4+IFJlbW92ZWQgaW5jbHVkZSBmb3IgeGVuLXRvb2xzL2NvbW1vbi1tYWNyb3MuaCBpbg0KPj4g
cHl0aG9uL3hlbi9sb3dsZXZlbC94Yy94Yy5jIGJlY2F1c2UgaXQgaXMgYWxyZWFkeSBpbmNsdWRl
ZCBieSB0aGUNCj4+IGFybS1hcmNoLWNhcGFiaWxpdGllcy5oIGhlYWRlci4NCj4+IA0KPj4gU2ln
bmVkLW9mZi1ieTogTHVjYSBGYW5jZWxsdSA8bHVjYS5mYW5jZWxsdUBhcm0uY29tPg0KPj4gQWNr
ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4NCj4+IEFja2Vk
LWJ5OiBDaHJpc3RpYW4gTGluZGlnIDxjaHJpc3RpYW4ubGluZGlnQGNsb3VkLmNvbT4NCj4+IFJl
dmlld2VkLWJ5OiBBbnRob255IFBFUkFSRCA8YW50aG9ueS5wZXJhcmRAY2l0cml4LmNvbT4NCj4+
IC0tLQ0KPj4gQ2hhbmdlcyBmcm9tIHY2Og0KPj4gLSBGaXggbGljZW5jZSBoZWFkZXIgaW4gYXJt
LWF0Y2gtY2FwYWJpbGl0aWVzLmgsIGFkZCBSLWJ5IChBbnRob255KQ0KPj4gQ2hhbmdlcyBmcm9t
IHY1Og0KPj4gLSBubyBjaGFuZ2VzDQo+PiBDaGFuZ2VzIGZyb20gdjQ6DQo+PiAtIE1vdmUgYXJt
LWFyY2gtY2FwYWJpbGl0aWVzLmggaW50byB4ZW4tdG9vbHMvLCBhZGQgTElCWExfSEFWRV8sDQo+
PiAgIGZpeGVkIHB5dGhvbiByZXR1cm4gdHlwZSB0byBJIGluc3RlYWQgb2YgaS4gKEFudGhvbnkp
DQo+PiBDaGFuZ2VzIGZyb20gdjM6DQo+PiAtIGFkZCBBY2stYnkgZm9yIHRoZSBHb2xhbmcgYml0
cyAoR2VvcmdlKQ0KPj4gLSBhZGQgQWNrLWJ5IGZvciB0aGUgT0NhbWwgdG9vbHMgKENocmlzdGlh
bikNCj4+IC0gbm93IHhlbi10b29scy9saWJzLmggaXMgbmFtZWQgeGVuLXRvb2xzL2NvbW1vbi1t
YWNyb3MuaA0KPj4gLSBjaGFuZ2VkIGNvbW1pdCBtZXNzYWdlIHRvIGV4cGxhaW4gd2h5IHRoZSBo
ZWFkZXIgbW9kaWZpY2F0aW9uDQo+PiAgIGluIHB5dGhvbi94ZW4vbG93bGV2ZWwveGMveGMuYw0K
Pj4gQ2hhbmdlcyBmcm9tIHYyOg0KPj4gLSByZW5hbWUgYXJtX2FyY2hfY2FwYWJpbGl0aWVzLmgg
aW4gYXJtLWFyY2gtY2FwYWJpbGl0aWVzLmgsIHVzZQ0KPj4gICBNQVNLX0VYVFIuDQo+PiAtIE5v
dyBhcm0tYXJjaC1jYXBhYmlsaXRpZXMuaCBuZWVkcyBNQVNLX0VYVFIgbWFjcm8sIGJ1dCBpdCBp
cw0KPj4gICBkZWZpbmVkIGluIGxpYnhsX2ludGVybmFsLmgsIGl0IGRvZXNuJ3QgZmVlbCByaWdo
dCB0byBpbmNsdWRlDQo+PiAgIHRoYXQgaGVhZGVyIHNvIG1vdmUgTUFTS19FWFRSIGludG8geGVu
LXRvb2xzL2xpYnMuaCB0aGF0IGlzIGFsc28NCj4+ICAgaW5jbHVkZWQgaW4gbGlieGxfaW50ZXJu
YWwuaA0KPj4gQ2hhbmdlcyBmcm9tIHYxOg0KPj4gLSBub3cgU1ZFIFZMIGlzIGVuY29kZWQgaW4g
YXJjaF9jYXBhYmlsaXRpZXMgb24gQXJtDQo+PiBDaGFuZ2VzIGZyb20gUkZDOg0KPj4gLSBuZXcg
cGF0Y2gNCj4+IC0tLQ0KPj4gdG9vbHMvZ29sYW5nL3hlbmxpZ2h0L2hlbHBlcnMuZ2VuLmdvICAg
ICAgICAgIHwgIDIgKysNCj4+IHRvb2xzL2dvbGFuZy94ZW5saWdodC90eXBlcy5nZW4uZ28gICAg
ICAgICAgICB8ICAxICsNCj4+IHRvb2xzL2luY2x1ZGUvbGlieGwuaCAgICAgICAgICAgICAgICAg
ICAgICAgICB8ICA2ICsrKysNCj4+IC4uLi9pbmNsdWRlL3hlbi10b29scy9hcm0tYXJjaC1jYXBh
YmlsaXRpZXMuaCB8IDI4ICsrKysrKysrKysrKysrKysrKysNCj4+IHRvb2xzL2luY2x1ZGUveGVu
LXRvb2xzL2NvbW1vbi1tYWNyb3MuaCAgICAgICB8ICAyICsrDQo+PiB0b29scy9saWJzL2xpZ2h0
L2xpYnhsLmMgICAgICAgICAgICAgICAgICAgICAgfCAgMSArDQo+PiB0b29scy9saWJzL2xpZ2h0
L2xpYnhsX2ludGVybmFsLmggICAgICAgICAgICAgfCAgMSAtDQo+PiB0b29scy9saWJzL2xpZ2h0
L2xpYnhsX3R5cGVzLmlkbCAgICAgICAgICAgICAgfCAgMSArDQo+PiB0b29scy9vY2FtbC9saWJz
L3hjL3hlbmN0cmwubWwgICAgICAgICAgICAgICAgfCAgNCArLS0NCj4+IHRvb2xzL29jYW1sL2xp
YnMveGMveGVuY3RybC5tbGkgICAgICAgICAgICAgICB8ICA0ICstLQ0KPj4gdG9vbHMvb2NhbWwv
bGlicy94Yy94ZW5jdHJsX3N0dWJzLmMgICAgICAgICAgIHwgIDggKysrKy0tDQo+PiB0b29scy9w
eXRob24veGVuL2xvd2xldmVsL3hjL3hjLmMgICAgICAgICAgICAgfCAgOCArKysrLS0NCj4+IHRv
b2xzL3hsL3hsX2luZm8uYyAgICAgICAgICAgICAgICAgICAgICAgICAgICB8ICA4ICsrKysrKw0K
Pj4gMTMgZmlsZXMgY2hhbmdlZCwgNjIgaW5zZXJ0aW9ucygrKSwgMTIgZGVsZXRpb25zKC0pDQo+
PiBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvaW5jbHVkZS94ZW4tdG9vbHMvYXJtLWFyY2gtY2Fw
YWJpbGl0aWVzLmgNCj4+IA0KPiANCj4gKC4uLikNCj4gDQo+PiBkaWZmIC0tZ2l0IGEvdG9vbHMv
cHl0aG9uL3hlbi9sb3dsZXZlbC94Yy94Yy5jIGIvdG9vbHMvcHl0aG9uL3hlbi9sb3dsZXZlbC94
Yy94Yy5jDQo+PiBpbmRleCA5NzI4YjM0MTg1YWMuLmIzNjk5ZmRhYzU4ZSAxMDA2NDQNCj4+IC0t
LSBhL3Rvb2xzL3B5dGhvbi94ZW4vbG93bGV2ZWwveGMveGMuYw0KPj4gKysrIGIvdG9vbHMvcHl0
aG9uL3hlbi9sb3dsZXZlbC94Yy94Yy5jDQo+PiBAQCAtMjIsNiArMjIsNyBAQA0KPj4gI2luY2x1
ZGUgPHhlbi9odm0vaHZtX2luZm9fdGFibGUuaD4NCj4+ICNpbmNsdWRlIDx4ZW4vaHZtL3BhcmFt
cy5oPg0KPj4gDQo+PiArI2luY2x1ZGUgPHhlbi10b29scy9hcm0tYXJjaC1jYXBhYmlsaXRpZXMu
aD4NCj4+ICNpbmNsdWRlIDx4ZW4tdG9vbHMvY29tbW9uLW1hY3Jvcy5oPg0KPj4gDQo+PiAvKiBO
ZWVkZWQgZm9yIFB5dGhvbiB2ZXJzaW9ucyBlYXJsaWVyIHRoYW4gMi4zLiAqLw0KPj4gQEAgLTg5
Nyw3ICs4OTgsNyBAQCBzdGF0aWMgUHlPYmplY3QgKnB5eGNfcGh5c2luZm8oWGNPYmplY3QgKnNl
bGYpDQo+PiAgICAgaWYgKCBwICE9IHZpcnRfY2FwcyApDQo+PiAgICAgICAqKHAtMSkgPSAnXDAn
Ow0KPj4gDQo+PiAtICAgIHJldHVybiBQeV9CdWlsZFZhbHVlKCJ7czppLHM6aSxzOmksczppLHM6
bCxzOmwsczpsLHM6aSxzOnMsczpzfSIsDQo+PiArICAgIHJldHVybiBQeV9CdWlsZFZhbHVlKCJ7
czppLHM6aSxzOmksczppLHM6bCxzOmwsczpsLHM6aSxzOnMsczpzLHM6SX0iLA0KPj4gICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICJucl9ub2RlcyIsICAgICAgICAgcGluZm8ubnJfbm9kZXMs
DQo+PiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgInRocmVhZHNfcGVyX2NvcmUiLCBwaW5m
by50aHJlYWRzX3Blcl9jb3JlLA0KPj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICJjb3Jl
c19wZXJfc29ja2V0IiwgcGluZm8uY29yZXNfcGVyX3NvY2tldCwNCj4+IEBAIC05MDcsNyArOTA4
LDEwIEBAIHN0YXRpYyBQeU9iamVjdCAqcHl4Y19waHlzaW5mbyhYY09iamVjdCAqc2VsZikNCj4+
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAic2NydWJfbWVtb3J5IiwgICAgIHBhZ2VzX3Rv
X2tpYihwaW5mby5zY3J1Yl9wYWdlcyksDQo+PiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ImNwdV9raHoiLCAgICAgICAgICBwaW5mby5jcHVfa2h6LA0KPj4gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICJod19jYXBzIiwgICAgICAgICAgY3B1X2NhcCwNCj4+IC0gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgInZpcnRfY2FwcyIsICAgICAgICB2aXJ0X2NhcHMpOw0KPj4gKyAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAidmlydF9jYXBzIiwgICAgICAgIHZpcnRfY2FwcywNCj4+
ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgImFybV9zdmVfdmwiLA0KPj4gKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIGFyY2hfY2FwYWJpbGl0aWVzX2FybV9zdmUocGluZm8uYXJj
aF9jYXBhYmlsaXRpZXMpDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgKTsNCj4gDQo+IFRo
aXMgc2hvdWxkIGJlIGFkZGVkIG9ubHkgd2hlbiBidWlsZGluZyBmb3IgQVJNLCBzaW1pbGFyIGFz
IGZvciBvdGhlcg0KPiBiaW5kaW5ncy4NCg0KSGkgTWFyZWssDQoNClRoYW5rIHlvdSBmb3IgdGFr
aW5nIHRoZSB0aW1lIHRvIHJldmlldyB0aGlzLCBhcmUgeW91IG9rIGlmIEkgbWFrZSB0aGVzZSBj
aGFuZ2VzIHRvIHRoZSBjb2RlPw0KDQpkaWZmIC0tZ2l0IGEvdG9vbHMvcHl0aG9uL3hlbi9sb3ds
ZXZlbC94Yy94Yy5jIGIvdG9vbHMvcHl0aG9uL3hlbi9sb3dsZXZlbC94Yy94Yy5jDQppbmRleCBi
MzY5OWZkYWM1OGUuLmM3ZjY5MDE4OTc3MCAxMDA2NDQNCi0tLSBhL3Rvb2xzL3B5dGhvbi94ZW4v
bG93bGV2ZWwveGMveGMuYw0KKysrIGIvdG9vbHMvcHl0aG9uL3hlbi9sb3dsZXZlbC94Yy94Yy5j
DQpAQCAtODcyLDYgKzg3Miw4IEBAIHN0YXRpYyBQeU9iamVjdCAqcHl4Y19waHlzaW5mbyhYY09i
amVjdCAqc2VsZikNCiAgICAgY29uc3QgY2hhciAqdmlydGNhcF9uYW1lc1tdID0geyAiaHZtIiwg
InB2IiB9Ow0KICAgICBjb25zdCB1bnNpZ25lZCB2aXJ0Y2Fwc19iaXRzW10gPSB7IFhFTl9TWVND
VExfUEhZU0NBUF9odm0sDQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
WEVOX1NZU0NUTF9QSFlTQ0FQX3B2IH07DQorICAgIFB5T2JqZWN0ICpvYmpyZXQ7DQorICAgIGlu
dCByZXRjb2RlOw0KIA0KICAgICBpZiAoIHhjX3BoeXNpbmZvKHNlbGYtPnhjX2hhbmRsZSwgJnBp
bmZvKSAhPSAwICkNCiAgICAgICAgIHJldHVybiBweXhjX2Vycm9yX3RvX2V4Y2VwdGlvbihzZWxm
LT54Y19oYW5kbGUpOw0KQEAgLTg5OCwyMCArOTAwLDMxIEBAIHN0YXRpYyBQeU9iamVjdCAqcHl4
Y19waHlzaW5mbyhYY09iamVjdCAqc2VsZikNCiAgICAgaWYgKCBwICE9IHZpcnRfY2FwcyApDQog
ICAgICAgKihwLTEpID0gJ1wwJzsNCiANCi0gICAgcmV0dXJuIFB5X0J1aWxkVmFsdWUoIntzOmks
czppLHM6aSxzOmksczpsLHM6bCxzOmwsczppLHM6cyxzOnMsczpJfSIsDQotICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICJucl9ub2RlcyIsICAgICAgICAgcGluZm8ubnJfbm9kZXMsDQotICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICJ0aHJlYWRzX3Blcl9jb3JlIiwgcGluZm8udGhyZWFk
c19wZXJfY29yZSwNCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgImNvcmVzX3Blcl9zb2Nr
ZXQiLCBwaW5mby5jb3Jlc19wZXJfc29ja2V0LA0KLSAgICAgICAgICAgICAgICAgICAgICAgICAg
  "nr_cpus",          pinfo.nr_cpus,
-                            "total_memory",     pages_to_kib(pinfo.total_pages),
-                            "free_memory",      pages_to_kib(pinfo.free_pages),
-                            "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
-                            "cpu_khz",          pinfo.cpu_khz,
-                            "hw_caps",          cpu_cap,
-                            "virt_caps",        virt_caps,
-                            "arm_sve_vl",
-                              arch_capabilities_arm_sve(pinfo.arch_capabilities)
+    objret = Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
+                           "nr_nodes",         pinfo.nr_nodes,
+                           "threads_per_core", pinfo.threads_per_core,
+                           "cores_per_socket", pinfo.cores_per_socket,
+                           "nr_cpus",          pinfo.nr_cpus,
+                           "total_memory",     pages_to_kib(pinfo.total_pages),
+                           "free_memory",      pages_to_kib(pinfo.free_pages),
+                           "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
+                           "cpu_khz",          pinfo.cpu_khz,
+                           "hw_caps",          cpu_cap,
+                           "virt_caps",        virt_caps
                         );
+
+    #if defined(__aarch64__)
+        if (objret) {
+            retcode = PyDict_SetItemString(
+                            objret, "arm_sve_vl",
+                            arch_capabilities_arm_sve(pinfo.arch_capabilities)
+                        );
+            if ( retcode < 0 )
+                return NULL;
+        }
+    #endif
+
+    return objret;
 }
 
 static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)


Please notice that now we can have a path that could return NULL, are you ok with
it, or should I just ignore the return code for PyDict_SetItemString?

Also, do you want me to protect the include to <xen-tools/arm-arch-capabilities.h>
with an ifdef?

> 
>> }
>> 
>> static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)
> 
> -- 
> Best Regards,
> Marek Marczykowski-Górecki
> Invisible Things Lab

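Since the thread above proposes publishing the "arm_sve_vl" key only when the
bindings are built for Arm, Python consumers of xc.physinfo() would have to
treat the key as optional. A minimal sketch of that — the dictionaries below are
illustrative stand-ins, not real physinfo output:

```python
def sve_vector_length(physinfo):
    """Return the SVE vector length from a physinfo dict, or 0 when the
    key is absent (bindings built for non-Arm targets never set it)."""
    return physinfo.get("arm_sve_vl", 0)

# Illustrative stand-ins; a real dict would come from xc.physinfo().
print(sve_vector_length({"nr_cpus": 8}))                     # 0 (key absent)
print(sve_vector_length({"nr_cpus": 8, "arm_sve_vl": 256}))  # 256
```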

From xen-devel-bounces@lists.xenproject.org Thu May 25 09:18:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 09:18:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539463.840358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q276r-0002tS-VH; Thu, 25 May 2023 09:18:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539463.840358; Thu, 25 May 2023 09:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q276r-0002tL-Sk; Thu, 25 May 2023 09:18:13 +0000
Received: by outflank-mailman (input) for mailman id 539463;
 Thu, 25 May 2023 09:18:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q276q-0002tC-V7
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 09:18:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q276q-00013L-CC; Thu, 25 May 2023 09:18:12 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.31.224]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q276q-0006N7-4N; Thu, 25 May 2023 09:18:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <b92aa9fb-440e-d315-92b5-ccc10e1e38f8@xen.org>
Date: Thu, 25 May 2023 10:18:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v7 07/12] xen: enable Dom0 to use SVE feature
Content-Language: en-US
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-8-luca.fancellu@arm.com>
 <04D83C51-E734-465D-BC2D-4F0535C91B77@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <04D83C51-E734-465D-BC2D-4F0535C91B77@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 24/05/2023 11:05, Bertrand Marquis wrote:
> Hi Luca,

Hi,


>> On 23 May 2023, at 09:43, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>
>> Add a command line parameter to allow Dom0 the use of SVE resources,
>> the command line parameter sve=<integer>, sub argument of dom0=,
>> controls the feature on this domain and sets the maximum SVE vector
>> length for Dom0.
>>
>> Add a new function, parse_signed_integer(), to parse an integer
>> command line argument.
>>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> with ...
> 
>> ---
>> Changes from v6:
>> - Fixed case for e==NULL in parse_signed_integer, drop parenthesis
>>    from if conditions, delete inline sve_domctl_vl_param and rely on
>>    DCE from the compiler (Jan)
>> - Drop parenthesis from opt_dom0_sve (Julien)
>> - Do not continue if 'sve' is in command line args but
>>    CONFIG_ARM64_SVE is not selected:
>>    https://lore.kernel.org/all/7614AE25-F59D-430A-9C3E-30B1CE0E1580@arm.com/
>> Changes from v5:
>> - stop the domain if VL error occurs (Julien, Bertrand)
>> - update the documentation
>> - Rename sve_sanitize_vl_param to sve_domctl_vl_param to
>>    mark the fact that we are sanitizing a parameter coming from
>>    the user before encoding it into sve_vl in domctl structure.
>>    (suggestion from Bertrand in a separate discussion)
>> - update comment in parse_signed_integer, return boolean in
>>    sve_domctl_vl_param (Jan).
>> Changes from v4:
>> - Negative values as user param means max supported HW VL (Jan)
>> - update documentation, make use of no_config_param(), rename
>>    parse_integer into parse_signed_integer and take long long *,
>>    also put a comment on the -2 return condition, update
>>    declaration comment to reflect the modifications (Jan)
>> Changes from v3:
>> - Don't use fixed len types when not needed (Jan)
>> - renamed domainconfig_encode_vl to sve_encode_vl
>> - Use a sub argument of dom0= to enable the feature (Jan)
>> - Add parse_integer() function
>> Changes from v2:
>> - xen_domctl_createdomain field has changed into sve_vl and its
>>    value now is the VL / 128, create an helper function for that.
>> Changes from v1:
>> - No changes
>> Changes from RFC:
>> - Changed docs to explain that the domain won't be created if the
>>    requested vector length is above the supported one from the
>>    platform.
>> ---
>> docs/misc/xen-command-line.pandoc    | 20 ++++++++++++++++++--
>> xen/arch/arm/arm64/sve.c             | 20 ++++++++++++++++++++
>> xen/arch/arm/domain_build.c          | 26 ++++++++++++++++++++++++++
>> xen/arch/arm/include/asm/arm64/sve.h | 10 ++++++++++
>> xen/common/kernel.c                  | 28 ++++++++++++++++++++++++++++
>> xen/include/xen/lib.h                | 10 ++++++++++
>> 6 files changed, 112 insertions(+), 2 deletions(-)
>>
>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>> index e0b89b7d3319..47e5b4eb6199 100644
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -777,9 +777,9 @@ Specify the bit width of the DMA heap.
>>
>> ### dom0
>>      = List of [ pv | pvh, shadow=<bool>, verbose=<bool>,
>> -                cpuid-faulting=<bool>, msr-relaxed=<bool> ]
>> +                cpuid-faulting=<bool>, msr-relaxed=<bool> ] (x86)
>>
>> -    Applicability: x86
>> +    = List of [ sve=<integer> ] (Arm)
>>
>> Controls for how dom0 is constructed on x86 systems.
>>
>> @@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
>>
>>      If using this option is necessary to fix an issue, please report a bug.
>>
>> +Enables features on dom0 on Arm systems.
>> +
>> +*   The `sve` integer parameter enables Arm SVE usage for Dom0 domain and sets

NIT: "Domain" is a bit redundant here.

>> +    the maximum SVE vector length, the option is applicable only to AArch64
>> +    guests.
> 
> Here I would just remove "guests"; just AArch64 is enough.
> I am ok if you choose to use "AArch64 Dom0 kernels"

So far we have no uses of AArch64 in our documentation. We have a few uses 
of Arm64 (with varying capitalization).

In the code base, we seem to have a mix of AArch64 and Arm64. At the 
moment, I am not going to ask for consistency in the code. But we should 
aim to not introduce inconsistency in the documentation.

I don't have a strong opinion on whether we should use aarch64 or arm64. My 
only request is to be consistent.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:19:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 09:19:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539469.840367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2787-0003TA-8y; Thu, 25 May 2023 09:19:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539469.840367; Thu, 25 May 2023 09:19:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2787-0003T3-6I; Thu, 25 May 2023 09:19:31 +0000
Received: by outflank-mailman (input) for mailman id 539469;
 Thu, 25 May 2023 09:19:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JOix=BO=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1q2786-0003BH-C3
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 09:19:30 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3f561c39-fadd-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 11:19:28 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 0FA426442A;
 Thu, 25 May 2023 09:19:27 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C3CF9C433EF;
 Thu, 25 May 2023 09:19:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f561c39-fadd-11ed-8611-37d641c3527e
Date: Thu, 25 May 2023 12:19:00 +0300
From: Mike Rapoport <rppt@kernel.org>
To: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 13/34] mm: Create ptdesc equivalents for
 pgtable_{pte,pmd}_page_{ctor,dtor}
Message-ID: <20230525091900.GY4967@kernel.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-14-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230501192829.17086-14-vishal.moola@gmail.com>

On Mon, May 01, 2023 at 12:28:08PM -0700, Vishal Moola (Oracle) wrote:
> Creates ptdesc_pte_ctor(), ptdesc_pmd_ctor(), ptdesc_pte_dtor(), and
> ptdesc_pmd_dtor() and make the original pgtable constructor/destructors
> wrappers.

I think pgtable_pXY_ctor/dtor names would be better.
 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  include/linux/mm.h | 56 ++++++++++++++++++++++++++++++++++------------
>  1 file changed, 42 insertions(+), 14 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 58c911341a33..dc61aeca9077 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2847,20 +2847,34 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
>  static inline void ptlock_free(struct ptdesc *ptdesc) {}
>  #endif /* USE_SPLIT_PTE_PTLOCKS */
>  
> -static inline bool pgtable_pte_page_ctor(struct page *page)
> +static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc)
>  {
> -	if (!ptlock_init(page_ptdesc(page)))
> +	struct folio *folio = ptdesc_folio(ptdesc);
> +
> +	if (!ptlock_init(ptdesc))
>  		return false;
> -	__SetPageTable(page);
> -	inc_lruvec_page_state(page, NR_PAGETABLE);
> +	__folio_set_table(folio);
> +	lruvec_stat_add_folio(folio, NR_PAGETABLE);
>  	return true;
>  }
>  
> +static inline bool pgtable_pte_page_ctor(struct page *page)
> +{
> +	return ptdesc_pte_ctor(page_ptdesc(page));
> +}
> +
> +static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc)
> +{
> +	struct folio *folio = ptdesc_folio(ptdesc);
> +
> +	ptlock_free(ptdesc);
> +	__folio_clear_table(folio);
> +	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
> +}
> +
>  static inline void pgtable_pte_page_dtor(struct page *page)
>  {
> -	ptlock_free(page_ptdesc(page));
> -	__ClearPageTable(page);
> -	dec_lruvec_page_state(page, NR_PAGETABLE);
> +	ptdesc_pte_dtor(page_ptdesc(page));
>  }
>  
>  #define pte_offset_map_lock(mm, pmd, address, ptlp)	\
> @@ -2942,20 +2956,34 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
>  	return ptl;
>  }
>  
> -static inline bool pgtable_pmd_page_ctor(struct page *page)
> +static inline bool ptdesc_pmd_ctor(struct ptdesc *ptdesc)
>  {
> -	if (!pmd_ptlock_init(page_ptdesc(page)))
> +	struct folio *folio = ptdesc_folio(ptdesc);
> +
> +	if (!pmd_ptlock_init(ptdesc))
>  		return false;
> -	__SetPageTable(page);
> -	inc_lruvec_page_state(page, NR_PAGETABLE);
> +	__folio_set_table(folio);
> +	lruvec_stat_add_folio(folio, NR_PAGETABLE);
>  	return true;
>  }
>  
> +static inline bool pgtable_pmd_page_ctor(struct page *page)
> +{
> +	return ptdesc_pmd_ctor(page_ptdesc(page));
> +}
> +
> +static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc)
> +{
> +	struct folio *folio = ptdesc_folio(ptdesc);
> +
> +	pmd_ptlock_free(ptdesc);
> +	__folio_clear_table(folio);
> +	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
> +}
> +
>  static inline void pgtable_pmd_page_dtor(struct page *page)
>  {
> -	pmd_ptlock_free(page_ptdesc(page));
> -	__ClearPageTable(page);
> -	dec_lruvec_page_state(page, NR_PAGETABLE);
> +	ptdesc_pmd_dtor(page_ptdesc(page));
>  }
>  
>  /*
> -- 
> 2.39.2
> 
> 

-- 
Sincerely yours,
Mike.


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:22:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 09:22:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539473.840377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27Ao-0004uN-NN; Thu, 25 May 2023 09:22:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539473.840377; Thu, 25 May 2023 09:22:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27Ao-0004uG-Jb; Thu, 25 May 2023 09:22:18 +0000
Received: by outflank-mailman (input) for mailman id 539473;
 Thu, 25 May 2023 09:22:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zmzE=BO=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q27An-0004uA-HV
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 09:22:17 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20604.outbound.protection.outlook.com
 [2a01:111:f400:7d00::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a2a0617b-fadd-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 11:22:14 +0200 (CEST)
Received: from AM4PR0101CA0070.eurprd01.prod.exchangelabs.com
 (2603:10a6:200:41::38) by DB9PR08MB6457.eurprd08.prod.outlook.com
 (2603:10a6:10:23e::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 09:22:12 +0000
Received: from AM7EUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:200:41:cafe::40) by AM4PR0101CA0070.outlook.office365.com
 (2603:10a6:200:41::38) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17 via Frontend
 Transport; Thu, 25 May 2023 09:22:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT047.mail.protection.outlook.com (100.127.140.69) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.15 via Frontend Transport; Thu, 25 May 2023 09:22:11 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Thu, 25 May 2023 09:22:11 +0000
Received: from ee2f245e56bd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 908875A9-8B14-4813-8600-06692F3F25EA.1; 
 Thu, 25 May 2023 09:22:00 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ee2f245e56bd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 25 May 2023 09:22:00 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM8PR08MB6465.eurprd08.prod.outlook.com (2603:10a6:20b:365::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Thu, 25 May
 2023 09:21:56 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Thu, 25 May 2023
 09:21:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2a0617b-fadd-11ed-8611-37d641c3527e
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: =?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross
	<jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>, David
 Scott <dave@recoil.org>, Christian Lindig <christian.lindig@cloud.com>
Subject: Re: [PATCH v7 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Topic: [PATCH v7 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Index: AQHZjUpkSkVWSiVT2k+ILuaW20r/+q9qrj4AgAAKbYCAAAFlAA==
Date: Thu, 25 May 2023 09:21:55 +0000
Message-ID: <1D2A4448-D89C-458B-A2EB-C0368E6534C1@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-10-luca.fancellu@arm.com> <ZG8evxN0mF8NDTPS@mail-itl>
 <2BCDFCFC-30F0-4D61-AE92-65046CDD5696@arm.com>
In-Reply-To: <2BCDFCFC-30F0-4D61-AE92-65046CDD5696@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <CAB3DD94A2F0E64CA4FCED2116B62BEC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6465
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 09:22:11.4965
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a2b5b7db-cda3-4c87-3582-08db5d01854e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB6457

>> 
>> (...)
>> 
>>> diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
>>> index 9728b34185ac..b3699fdac58e 100644
>>> --- a/tools/python/xen/lowlevel/xc/xc.c
>>> +++ b/tools/python/xen/lowlevel/xc/xc.c
>>> @@ -22,6 +22,7 @@
>>> #include <xen/hvm/hvm_info_table.h>
>>> #include <xen/hvm/params.h>
>>> 
>>> +#include <xen-tools/arm-arch-capabilities.h>
>>> #include <xen-tools/common-macros.h>
>>> 
>>> /* Needed for Python versions earlier than 2.3. */
>>> @@ -897,7 +898,7 @@ static PyObject *pyxc_physinfo(XcObject *self)
>>>    if ( p != virt_caps )
>>>      *(p-1) = '\0';
>>> 
>>> -    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
>>> +    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s,s:I}",
>>>                             "nr_nodes",         pinfo.nr_nodes,
>>>                             "threads_per_core", pinfo.threads_per_core,
>>>                             "cores_per_socket", pinfo.cores_per_socket,
>>> @@ -907,7 +908,10 @@ static PyObject *pyxc_physinfo(XcObject *self)
>>>                             "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
>>>                             "cpu_khz",          pinfo.cpu_khz,
>>>                             "hw_caps",          cpu_cap,
>>> -                            "virt_caps",        virt_caps);
>>> +                            "virt_caps",        virt_caps,
>>> +                            "arm_sve_vl",
>>> +                              arch_capabilities_arm_sve(pinfo.arch_capabilities)
>>> +                        );
>> 
>> This should be added only when building for ARM, similar as for other
>> bindings.
> 
> Hi Marek,
> 
> Thank you for taking the time to review this, are you ok if I make these changes to the code?
> 
> diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
> index b3699fdac58e..c7f690189770 100644
> --- a/tools/python/xen/lowlevel/xc/xc.c
> +++ b/tools/python/xen/lowlevel/xc/xc.c
> @@ -872,6 +872,8 @@ static PyObject *pyxc_physinfo(XcObject *self)
>     const char *virtcap_names[] = { "hvm", "pv" };
>     const unsigned virtcaps_bits[] = { XEN_SYSCTL_PHYSCAP_hvm,
>                                        XEN_SYSCTL_PHYSCAP_pv };
> +    PyObject *objret;
> +    int retcode;
> 
>     if ( xc_physinfo(self->xc_handle, &pinfo) != 0 )
>         return pyxc_error_to_exception(self->xc_handle);
> @@ -898,20 +900,31 @@ static PyObject *pyxc_physinfo(XcObject *self)
>     if ( p != virt_caps )
>       *(p-1) = '\0';
> 
> -    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s,s:I}",
> -                            "nr_nodes",         pinfo.nr_nodes,
> -                            "threads_per_core", pinfo.threads_per_core,
> -                            "cores_per_socket", pinfo.cores_per_socket,
> -                            "nr_cpus",          pinfo.nr_cpus,
> -                            "total_memory",     pages_to_kib(pinfo.total_pages),
> -                            "free_memory",      pages_to_kib(pinfo.free_pages),
> -                            "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
> -                            "cpu_khz",          pinfo.cpu_khz,
> -                            "hw_caps",          cpu_cap,
> -                            "virt_caps",        virt_caps,
> -                            "arm_sve_vl",
> -                              arch_capabilities_arm_sve(pinfo.arch_capabilities)
> +    objret =
IFB5X0J1aWxkVmFsdWUoIntzOmksczppLHM6aSxzOmksczpsLHM6bCxzOmwsczppLHM6cyxzOnN9
IiwNCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICJucl9ub2RlcyIsICAgICAgICAgcGlu
Zm8ubnJfbm9kZXMsDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAidGhyZWFkc19wZXJf
Y29yZSIsIHBpbmZvLnRocmVhZHNfcGVyX2NvcmUsDQo+ICsgICAgICAgICAgICAgICAgICAgICAg
ICAgICAiY29yZXNfcGVyX3NvY2tldCIsIHBpbmZvLmNvcmVzX3Blcl9zb2NrZXQsDQo+ICsgICAg
ICAgICAgICAgICAgICAgICAgICAgICAibnJfY3B1cyIsICAgICAgICAgIHBpbmZvLm5yX2NwdXMs
DQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAidG90YWxfbWVtb3J5IiwgICAgIHBhZ2Vz
X3RvX2tpYihwaW5mby50b3RhbF9wYWdlcyksDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAg
ICAiZnJlZV9tZW1vcnkiLCAgICAgIHBhZ2VzX3RvX2tpYihwaW5mby5mcmVlX3BhZ2VzKSwNCj4g
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICJzY3J1Yl9tZW1vcnkiLCAgICAgcGFnZXNfdG9f
a2liKHBpbmZvLnNjcnViX3BhZ2VzKSwNCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICJj
cHVfa2h6IiwgICAgICAgICAgcGluZm8uY3B1X2toeiwNCj4gKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICJod19jYXBzIiwgICAgICAgICAgY3B1X2NhcCwNCj4gKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICJ2aXJ0X2NhcHMiLCAgICAgICAgdmlydF9jYXBzDQo+ICAgICAgICAgICAgICAg
ICAgICAgICAgICk7DQo+ICsNCj4gKyAgICAjaWYgZGVmaW5lZChfX2FhcmNoNjRfXykNCj4gKyAg
ICAgICAgaWYgKG9ianJldCkgew0KPiArICAgICAgICAgICAgcmV0Y29kZSA9IFB5RGljdF9TZXRJ
dGVtU3RyaW5nKA0KPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgIG9ianJldCwgImFybV9z
dmVfdmwiLA0KPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgIGFyY2hfY2FwYWJpbGl0aWVz
X2FybV9zdmUocGluZm8uYXJjaF9jYXBhYmlsaXRpZXMpDQo+ICsgICAgICAgICAgICAgICAgICAg
ICAgICApOw0KPiArICAgICAgICAgICAgaWYgKCByZXRjb2RlIDwgMCApDQo+ICsgICAgICAgICAg
ICAgICAgcmV0dXJuIE5VTEw7DQo+ICsgICAgICAgIH0NCj4gKyAgICAjZW5kaWYNCj4gKw0KPiAr
ICAgIHJldHVybiBvYmpyZXQ7DQo+IH0NCj4gDQo+IHN0YXRpYyBQeU9iamVjdCAqcHl4Y19nZXRj
cHVpbmZvKFhjT2JqZWN0ICpzZWxmLCBQeU9iamVjdCAqYXJncywgUHlPYmplY3QgKmt3ZHMpDQo+
IA0KPiANCj4gUGxlYXNlIG5vdGljZSB0aGF0IG5vdyB3ZSBjYW4gaGF2ZSBhIHBhdGggdGhhdCBj
b3VsZCByZXR1cm4gTlVMTCwgYXJlIHlvdSBvayBmb3INCj4gSXQgb3Igc2hvdWxkIEkganVzdCBp
Z25vcmUgdGhlIHJldHVybiBjb2RlIGZvciBQeURpY3RfU2V0SXRlbVN0cmluZz8NCj4gDQo+IEFs
c28sIGRvIHlvdSB3YW50IG1lIHRvIHByb3RlY3QgdGhlIGluY2x1ZGUgdG8gPHhlbi10b29scy9h
cm0tYXJjaC1jYXBhYmlsaXRpZXMuaD4NCj4gd2l0aCBpZmRlZj8NCj4gDQoNCkVESVQ6IEkgc2F3
IHRoaXMgZG9lc27igJl0IGV2ZW4gY29tcGlsZSwgSSB3aWxsIGFzayBsYXRlciB3aGVuIEkgd2ls
bCBoYXZlIHNvbWV0aGluZyB3b3JraW5nLA0KSSBzYXcgUHlEaWN0X1NldEl0ZW1TdHJpbmcgaXMg
dXNlZCBzb21ld2hlcmUgZWxzZSBzbyBJIHdpbGwgdXNlIHRoYXQgYXBwcm9hY2ggYmVmb3JlDQpQ
cm9wb3NpbmcgeW91IGEgc29sdXRpb24NCg0KDQoNCj4+IA0KPj4+IH0NCj4+PiANCj4+PiBzdGF0
aWMgUHlPYmplY3QgKnB5eGNfZ2V0Y3B1aW5mbyhYY09iamVjdCAqc2VsZiwgUHlPYmplY3QgKmFy
Z3MsIFB5T2JqZWN0ICprd2RzKQ0KPj4gDQo+PiAtLSANCj4+IEJlc3QgUmVnYXJkcywNCj4+IE1h
cmVrIE1hcmN6eWtvd3NraS1Hw7NyZWNraQ0KPj4gSW52aXNpYmxlIFRoaW5ncyBMYWINCg0KDQo=


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:30:22 2023
Message-ID: <5871ee5a-95b3-bd3b-7440-d40212a4d5e7@xen.org>
Date: Thu, 25 May 2023 10:30:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v7 11/12] xen/arm: add sve property for dom0less domUs
Content-Language: en-US
To: xen-devel@lists.xenproject.org
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-12-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230523074326.3035745-12-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 23/05/2023 08:43, Luca Fancellu wrote:
> Add a device tree property in the dom0less domU configuration
> to enable the guest to use SVE.
> 
> Update documentation.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> Changes from v6:
>   - Use ifdef in create_domUs and fail if 'sve' is used on systems
>     with CONFIG_ARM64_SVE not selected (Bertrand, Julien, Jan)
> Changes from v5:
>   - Stop the domain creation if SVE not supported or SVE VL
>     errors (Julien, Bertrand)
>   - now sve_sanitize_vl_param is renamed to sve_domctl_vl_param
>     and returns a boolean, change the affected code.
>   - Reworded documentation.
> Changes from v4:
>   - Now it is possible to specify the property "sve" for dom0less
>     device tree node without any value, that means the platform
>     supported VL will be used.
> Changes from v3:
>   - Now domainconfig_encode_vl is named sve_encode_vl
> Changes from v2:
>   - xen_domctl_createdomain field name has changed into sve_vl
>     and its value is the VL/128, use domainconfig_encode_vl
>     to encode a plain VL in bits.
> Changes from v1:
>   - No changes
> Changes from RFC:
>   - Changed documentation
> ---
>   docs/misc/arm/device-tree/booting.txt | 16 +++++++++++++++
>   xen/arch/arm/domain_build.c           | 28 +++++++++++++++++++++++++++
>   2 files changed, 44 insertions(+)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 3879340b5e0a..32a0e508c471 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -193,6 +193,22 @@ with the following properties:
>       Optional. Handle to a xen,cpupool device tree node that identifies the
>       cpupool where the guest will be started at boot.
>   
> +- sve
> +
> +    Optional. The `sve` property enables Arm SVE usage for the domain and sets
> +    the maximum SVE vector length, the option is applicable only to AArch64

Depending on the discussion on the other patch, s/aarch64/arm64/. With 
the other comments addressed:

Acked-by: Julien Grall <jgrall@amazon.com>

> +    guests.
> +    A value equal to 0 disables the feature, this is the default value.
> +    Specifying this property with no value, means that the SVE vector length
> +    will be set equal to the maximum vector length supported by the platform.
> +    Values above 0 explicitly set the maximum SVE vector length for the domain,
> +    allowed values are from 128 to maximum 2048, being multiple of 128.
> +    Please note that when the user explicitly specifies the value, if that value
> +    is above the hardware supported maximum SVE vector length, the domain
> +    creation will fail and the system will stop, the same will occur if the
> +    option is provided with a non zero value, but the platform doesn't support
> +    SVE.
> +
>   - xen,enhanced
>   
>       A string property. Possible property values are:
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9202a96d9c28..ba4fe9e165ee 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -4008,6 +4008,34 @@ void __init create_domUs(void)
>               d_cfg.max_maptrack_frames = val;
>           }
>   
> +        if ( dt_get_property(node, "sve", &val) )
> +        {
> +#ifdef CONFIG_ARM64_SVE
> +            unsigned int sve_vl_bits;
> +            bool ret = false;
> +
> +            if ( !val )
> +            {
> +                /* Property found with no value, means max HW VL supported */
> +                ret = sve_domctl_vl_param(-1, &sve_vl_bits);
> +            }
> +            else
> +            {
> +                if ( dt_property_read_u32(node, "sve", &val) )
> +                    ret = sve_domctl_vl_param(val, &sve_vl_bits);
> +                else
> +                    panic("Error reading 'sve' property");
> +            }
> +
> +            if ( ret )
> +                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
> +            else
> +                panic("SVE vector length error\n");
> +#else
> +            panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
> +#endif
> +        }
> +
>           /*
>            * The variable max_init_domid is initialized with zero, so here it's
>            * very important to use the pre-increment operator to call

-- 
Julien Grall
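[Editorial note: for illustration of the `sve` property documented in the patch above, a dom0less domU node might look like the following; the node name and the other property values are placeholders, not taken from the patch.]

```dts
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <1>;
        #size-cells = <1>;
        memory = <0 0x20000>;
        cpus = <1>;

        /* Cap this guest's SVE vector length at 256 bits. Per the
         * documentation text above, a bare "sve;" would instead select
         * the platform's maximum supported vector length, and a value
         * above the hardware maximum stops domain creation. */
        sve = <256>;
    };
};
```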


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:31:43 2023
Message-ID: <6a2e6371-97a5-397d-ca1a-247b610b49d3@suse.com>
Date: Thu, 25 May 2023 11:31:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] iscsi_ibft: Fix finding the iBFT under Xen Dom 0
Content-Language: en-US
To: Ross Lagerwall <ross.lagerwall@citrix.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 x86@kernel.org, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Peter Jones
 <pjones@redhat.com>, Konrad Rzeszutek Wilk <konrad@kernel.org>,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
References: <20230524160558.3686226-1-ross.lagerwall@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230524160558.3686226-1-ross.lagerwall@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.05.2023 18:05, Ross Lagerwall wrote:
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -772,8 +772,14 @@ char * __init xen_memory_setup(void)
>  	 * UNUSABLE regions in domUs are not handled and will need
>  	 * a patch in the future.
>  	 */

I think this comment now wants to move ...

> -	if (xen_initial_domain())
> +	if (xen_initial_domain()) {

... here. And then likely you want a blank line ...

>  		xen_ignore_unusable();

... here.

> +		/* Reserve 0.5 MiB to 1 MiB region so iBFT can be found */
> +		xen_e820_table.entries[xen_e820_table.nr_entries].addr = 0x80000;
> +		xen_e820_table.entries[xen_e820_table.nr_entries].size = 0x80000;
> +		xen_e820_table.entries[xen_e820_table.nr_entries].type = E820_TYPE_RESERVED;
> +		xen_e820_table.nr_entries++;

Surely this can be omitted when !CONFIG_ISCSI_IBFT_FIND?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:41:26 2023
Message-ID: <135b568d-abc4-0406-870f-a37a4c9081d1@xen.org>
Date: Thu, 25 May 2023 10:41:08 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v7 12/12] xen/changelog: Add SVE and "dom0" options to the
 changelog for Arm
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Henry Wang <Henry.Wang@arm.com>,
 Community Manager <community.manager@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-13-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230523074326.3035745-13-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 23/05/2023 08:43, Luca Fancellu wrote:
> Arm now can use the "dom0=" Xen command line option and the support
> for guests running SVE instructions is added, put entries in the
> changelog.
> 
> Mention the "Tech Preview" status and add an entry in SUPPORT.md
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG
> ---
> Changes from v6:
>   - Add Henry's A-by to CHANGELOG
> Changes from v5:
>   - Add Tech Preview status and add entry in SUPPORT.md (Bertrand)
> Changes from v4:
>   - No changes
> Change from v3:
>   - new patch
> ---
>   CHANGELOG.md | 3 +++
>   SUPPORT.md   | 6 ++++++
>   2 files changed, 9 insertions(+)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index 5bfd3aa5c0d5..512b7bdc0fcb 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -11,6 +11,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>      cap toolstack provided values.
>    - Ignore VCPUOP_set_singleshot_timer's VCPU_SSHOTTMR_future flag. The only
>      known user doesn't use it properly, leading to in-guest breakage.
> + - The "dom0" option is now supported on Arm and "sve=" sub-option can be used
> +   to enable dom0 guest to use SVE/SVE2 instructions.
>   
>   ### Added
>    - On x86, support for features new in Intel Sapphire Rapids CPUs:
> @@ -20,6 +22,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>      - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
>        wide impact of a guest misusing atomic instructions.
>    - xl/libxl can customize SMBIOS strings for HVM guests.
> + - On Arm, Xen supports guests running SVE/SVE2 instructions. (Tech Preview)
>   
>   ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
>   
> diff --git a/SUPPORT.md b/SUPPORT.md
> index 6dbed9d5d029..e0fa2246807b 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -99,6 +99,12 @@ Extension to the GICv3 interrupt controller to support MSI.
>   
>       Status: Experimental
>   
> +### ARM Scalable Vector Extension (SVE/SVE2)
> +
> +AArch64 guest can use Scalable Vector Extension (SVE/SVE2).

I think we should cover dom0 here as well. So s/guest/domain/.

Also, we don't use AArch64 in SUPPORT.MD so far. So please use 
ARM64/arm64. At some point we will need to do some renaming for consistency.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:50:08 2023
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RBfuJTSQwwVM98L/5wB8xg9qUWpgSvF1woP3lB96eB4=;
 b=gK5ugxFxjbS96SLN/dKjNlLYmTzbfzx9OADgTxPzSE+tduXKp1CeNTt2En6qO+VKl+UtwuZ1r6Kub0LMjkPaUn+6z6F+kXRpRYT99TieVmJoemN1bR5SKxkAsv0dsbQtGHlWNtnzJsCK83Nx9pk8v5hQUguBS8Dt3ykRP7vO1Ok4YoaHK1bVwvG+TZPWi3AFzZeuhSlj+o3nsT1BNrNphz1mcf++ajaxfloq5Meh5uKRqN8PX1sAokGdzPT7o9g0N4GSeBuJtZs1KGR3F/PfLpKHszC1q5UxC/51j0GecR9Fzr3Gm6utZPA1D7JB7i6vea7G38fGNgvqwsukZaZ68A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <af7e2e9c-1065-da68-d9b4-fd116fe2e2b0@suse.com>
Date: Thu, 25 May 2023 11:49:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: xen | Failed pipeline for staging | 511b9f28
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <646e6ac6d749a_281a7a5e8825bc@gitlab-sidekiq-catchall-v2-596d74f7d4-ntx7r.mail>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <646e6ac6d749a_281a7a5e8825bc@gitlab-sidekiq-catchall-v2-596d74f7d4-ntx7r.mail>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0171.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8517:EE_
X-MS-Office365-Filtering-Correlation-Id: f9c21034-a140-4bf9-b309-08db5d05600d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f9c21034-a140-4bf9-b309-08db5d05600d
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 09:49:47.3313
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5jr6I9OHietLA/hXOPaTaHDSo0Rf+Lwfd/bNa/qhZ0rkQ/BsshGOmE+8FBZo/CSCUSWuf6muA6FBDBRxJaBebQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8517

On 24.05.2023 21:51, GitLab wrote:
> 
> 
> Pipeline #878023438 has failed!
> 
> Project: xen ( https://gitlab.com/xen-project/xen )
> Branch: staging ( https://gitlab.com/xen-project/xen/-/commits/staging )
> 
> Commit: 511b9f28 ( https://gitlab.com/xen-project/xen/-/commit/511b9f286c3dadd041e0d90beeff7d47c9bf3b7a )
> Commit Message: x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS c...
> Commit Author: Andrew Cooper ( https://gitlab.com/andyhhp )
> 
> Pipeline #878023438 ( https://gitlab.com/xen-project/xen/-/pipelines/878023438 ) triggered by Ganis ( https://gitlab.com/ganis )
> had 3 failed jobs.
> 
> Job #4345633611 ( https://gitlab.com/xen-project/xen/-/jobs/4345633611/raw )
> 
> Stage: build
> Name: opensuse-tumbleweed-gcc
> Job #4345633612 ( https://gitlab.com/xen-project/xen/-/jobs/4345633612/raw )
> 
> Stage: build
> Name: opensuse-tumbleweed-gcc-debug

While I understand these continue to be ignored failures, I would have wanted
to look at ...

> Job #4345633615 ( https://gitlab.com/xen-project/xen/-/jobs/4345633615/raw )
> 
> Stage: test
> Name: build-each-commit-gcc

... the failure here. However, the log as shown is truncated at 4MB as usual,
yet, unusually, the job artifacts consist of only config.log. So it is not
clear whether there is anything relevant that warrants looking into further ...
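For reference, a job's raw trace can also be pulled via the GitLab jobs API rather than the web UI, which may avoid the in-browser truncation. A sketch only; the project path and job ID below are taken from the notification above, and the token header is only needed for non-public projects:

```shell
# Sketch: fetch the full raw trace of a CI job via the GitLab REST API
# (GET /projects/:id/jobs/:job_id/trace). Project path and job ID come
# from the pipeline notification; both are placeholders here.
project="xen-project%2Fxen"     # URL-encoded "xen-project/xen"
job=4345633615                  # the build-each-commit-gcc job
url="https://gitlab.com/api/v4/projects/${project}/jobs/${job}/trace"
echo "$url"
# curl --fail -o "job-${job}.log" "$url"   # add -H "PRIVATE-TOKEN: ..." if needed
```

The actual download is left commented out; the point is the endpoint shape, which gives the complete trace rather than the portion the pipeline page renders.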

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 09:59:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 09:59:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539497.840427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27kY-0002KU-PC; Thu, 25 May 2023 09:59:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539497.840427; Thu, 25 May 2023 09:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27kY-0002KN-Ma; Thu, 25 May 2023 09:59:14 +0000
Received: by outflank-mailman (input) for mailman id 539497;
 Thu, 25 May 2023 09:59:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ugP6=BO=citrix.com=prvs=502bf10e6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q27kW-0002KH-Lo
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 09:59:12 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ca0820c3-fae2-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 11:59:09 +0200 (CEST)
Received: from mail-bn7nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 05:59:06 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ0PR03MB5615.namprd03.prod.outlook.com (2603:10b6:a03:27a::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Thu, 25 May
 2023 09:59:04 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.015; Thu, 25 May 2023
 09:59:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca0820c3-fae2-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685008749;
  h=message-id:date:subject:to:references:from:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=f5QF+wu9OCCG1FY++5LMdmi37O131AgZqgYHvdCHpN8=;
  b=heIpgncJvVTkqRockXUaZVE1Vpe5Xm997rQRvQwBlzMSbQjZxtDlNgff
   FoyQpxVDCcIbiXX8exGwyrlgAq4j6zKYp8XHTPP+hr1bZ0l7m2x1kWh+3
   SXPLtb1t4E4KPfK0VTDWt8t/mcpgufP2ORGfOClooocDBrDJd8MnlcilI
   8=;
X-IronPort-RemoteIP: 104.47.70.109
X-IronPort-MID: 110752667
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,190,1681185600"; 
   d="scan'208";a="110752667"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QsVbMc85FB1FUovuoH67QLVw/8XB4dfonznnruAbLY9Tfd7aC2MskDrXCKCHQYDWVcqJS2loEzDLIwiKLRcF5en1U8vNSg7RizH2rcYEum5AcFG5ggQ0+7Ls2cmXgBeimZPqt3rSVoWjHeAWFi34O06HaJdsldj0rccwzSMCnhiUAKuZGiPJ+h826l/fiKsmM0PmROSAapiwt9QYd1RDlGsxz+RJLwCTnLXnxHhXxrtrGO9XiqmnGemIuPFYr/mW2sJA1Ph3JDQsOFRI5bdEJZUDc8QAqOTB1jWS886+7dx567wzmYnZ1dIA/LZObtVdqEfoK/XRrxk7W72Rw2E7tg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7qG8QPeDflbqXoAZnw2M0wkZgXEnjgnNnIHBvERyQmE=;
 b=SUfLhEpCrePVVGm18GW47Sztytui0/ZeNLsqZrzPzG8RIGOLwBC6aAYOdfOenrgheeTMn0k0Ni7O/alTtcA9xXFXDWu79d14IybopqJMkhXBtH3k2uU1edkEAR77adAP/HhPh8fEkeDUbgfxZZC3zvYDhu1ASPLaThLiH3/D5c8hyWlWp6AmvcEnQkapVcDBm/AQ5hCpRUmdbxYMeJp+ohNpqW7t4G4GI5Lglw8tP2RgZFaEz/NF0sQvQo7XkRlQZLW9Vrv0Rasp3NNZOuYUpN7DmlBLUZNQGWqQs0eVv9qau++kN4DH1vo2DqFj4K0loenyrp1ck4RG8w9662GUJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7qG8QPeDflbqXoAZnw2M0wkZgXEnjgnNnIHBvERyQmE=;
 b=JvQvFhMKfxWBZEvk7j7/DinXnzGrkAeZ/IvFJYZsvdEDfabjl+ACc9faKc1jYETBNBAuvxqZw1DrRBEsj50zsoxXAWzmTtAGU2sDXgkZDbk4po8VK4CvuCXNsGDJC+oGSVrxewuoafhdWdLCT5K1t3QWL2XMgufDd3lPfcIJ6jo=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <b43ba6bb-60e1-8567-064c-1452271ec255@citrix.com>
Date: Thu, 25 May 2023 10:58:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: xen | Failed pipeline for staging | 511b9f28
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <646e6ac6d749a_281a7a5e8825bc@gitlab-sidekiq-catchall-v2-596d74f7d4-ntx7r.mail>
 <af7e2e9c-1065-da68-d9b4-fd116fe2e2b0@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <af7e2e9c-1065-da68-d9b4-fd116fe2e2b0@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0333.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18c::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ0PR03MB5615:EE_
X-MS-Office365-Filtering-Correlation-Id: f3b854af-fc59-41ef-fb7b-08db5d06aba3
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f3b854af-fc59-41ef-fb7b-08db5d06aba3
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 09:59:03.7269
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: J34rv+Bo/YF4fQO/gQGd1ZtxADKmhTOR02F01lUbeEnnMrKSQ6jEAr1n31zhtbrzOL8NjxVevB1dZK0ZVj0isnqRnFLdXP5RTnLHcFZKPZM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5615

On 25/05/2023 10:49 am, Jan Beulich wrote:
> On 24.05.2023 21:51, GitLab wrote:
>>
>> Pipeline #878023438 has failed!
>>
>> Project: xen ( https://gitlab.com/xen-project/xen )
>> Branch: staging ( https://gitlab.com/xen-project/xen/-/commits/staging )
>>
>> Commit: 511b9f28 ( https://gitlab.com/xen-project/xen/-/commit/511b9f286c3dadd041e0d90beeff7d47c9bf3b7a )
>> Commit Message: x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS c...
>> Commit Author: Andrew Cooper ( https://gitlab.com/andyhhp )
>>
>> Pipeline #878023438 ( https://gitlab.com/xen-project/xen/-/pipelines/878023438 ) triggered by Ganis ( https://gitlab.com/ganis )
>> had 3 failed jobs.
>>
>> Job #4345633611 ( https://gitlab.com/xen-project/xen/-/jobs/4345633611/raw )
>>
>> Stage: build
>> Name: opensuse-tumbleweed-gcc
>> Job #4345633612 ( https://gitlab.com/xen-project/xen/-/jobs/4345633612/raw )
>>
>> Stage: build
>> Name: opensuse-tumbleweed-gcc-debug
> While I understand these continue to be ignored failures,

It's a Qemu failure.  Olaf has a request out for a backport into the
qemu-xen tree.

>  I would have wanted
> to look at ...
>
>> Job #4345633615 ( https://gitlab.com/xen-project/xen/-/jobs/4345633615/raw )
>>
>> Stage: test
>> Name: build-each-commit-gcc
> ... the failure here. However, the log as shown is truncated at 4MB as usual,
> yet, unusually, the job artifacts consist of only config.log. So it is not
> clear whether there is anything relevant that warrants looking into further ...

This one is weird, and yes - there's nothing at all to go on.

The first thing is to get the build log out reliably, because everything
else is guesswork.

This ended up failing on my pre-push run too, so I think it's something
deterministic about larger pushes. But I'm positive that my series is fully
bisectable, so I suspect it's not a content issue.
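The per-commit check itself is easy to reproduce locally. A minimal sketch of the idea, not the actual CI script: the throw-away demo repo and the grep stand-in for a real `make` invocation below are placeholders.

```shell
# Sketch of a build-each-commit style check: walk every commit of a
# series oldest-first and stop at the first one that fails to "build".
# Everything here (demo repo, grep as the build step) is a placeholder
# for a real tree and a real build command such as "make -j$(nproc)".
set -e
work=$(mktemp -d)
cd "$work"
git init -q demo
cd demo
git config user.email ci@example.invalid
git config user.name "CI sketch"
echo 'int main(void) { return 0; }' > main.c
git add main.c
git commit -qm 'commit 1'
echo '/* follow-up change */' >> main.c
git commit -qam 'commit 2'
branch=$(git rev-parse --abbrev-ref HEAD)
result=ok
for rev in $(git rev-list --reverse HEAD); do
    git checkout -q "$rev"
    # Stand-in for the real per-commit build.
    grep -q 'int main' main.c || { result="broken at $rev"; break; }
done
git checkout -q "$branch"
echo "$result"
```

Running the same loop over a real branch before pushing is one way to confirm a series is bisectable independently of whatever the CI job is doing.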

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 25 10:01:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 10:01:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539501.840438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27mq-0003oH-5H; Thu, 25 May 2023 10:01:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539501.840438; Thu, 25 May 2023 10:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27mq-0003oA-2S; Thu, 25 May 2023 10:01:36 +0000
Received: by outflank-mailman (input) for mailman id 539501;
 Thu, 25 May 2023 10:01:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zmzE=BO=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q27mp-0003o4-9d
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 10:01:35 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2060a.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2026e70d-fae3-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 12:01:32 +0200 (CEST)
Received: from AM5PR0602CA0002.eurprd06.prod.outlook.com
 (2603:10a6:203:a3::12) by AS1PR08MB7402.eurprd08.prod.outlook.com
 (2603:10a6:20b:4c6::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 10:01:28 +0000
Received: from AM7EUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:a3:cafe::67) by AM5PR0602CA0002.outlook.office365.com
 (2603:10a6:203:a3::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15 via Frontend
 Transport; Thu, 25 May 2023 10:01:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT019.mail.protection.outlook.com (100.127.140.245) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.17 via Frontend Transport; Thu, 25 May 2023 10:01:27 +0000
Received: ("Tessian outbound 3570909035da:v136");
 Thu, 25 May 2023 10:01:27 +0000
Received: from cb247d40592a.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3D06F41F-86C9-4C6E-B2C7-3448B5380DA3.1; 
 Thu, 25 May 2023 10:01:21 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cb247d40592a.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 25 May 2023 10:01:21 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB9533.eurprd08.prod.outlook.com (2603:10a6:10:44f::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Thu, 25 May
 2023 10:01:16 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Thu, 25 May 2023
 10:01:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2026e70d-fae3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h3nwhvaNq+a9tWvRHr72FWk4rDmSs/1wG+E0R3lDKQE=;
 b=84QUq1q5O+pGCmmYW/hiqueFa5uhADNOkC+dfXgWvDezIgc5BibeK5/z8x0f9Ho+ABjPZwQnfbHO/evG9fydi8C49hN5NzQRgKyRaXbqPQdNkXZ4WHuhOQxiceUVeF9edUT/Bx1e+RV6P3vErF0QlAAnkPiBL5X90pzuBmR/yOk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2cdbdb56bc15511c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fBdlJMZ/i55y+5oos6o69Q8dETLHkGAlc2BO/3sDMgUhGopsUZDlg2hLYGOmQ1ClzIvhN+FaVEUmjrpUy83o1Jh4iO+vPMmfevtaeoo7+xxEkOZQ9XkYmOurHvFQTwDdlnuXdC+eKBHmljt51xmBGd22KmcwimGjANlq/4E5u93DYRRv0UwioZHhJBTmIVzhoCVb7CZeJ4kht6eu7cyN2BQUD/FLlzhXyp7Q8Bp3P5nl93A97jSYwx/up918q0fNJZDJoxU3Pv5fcvEc95fzycSn3XTP4YzHlcjnpDhGp4pOPkWXW1yw4YCOePTnHA55Rhzk5Lg8LrXuHB2UBzjWHQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=h3nwhvaNq+a9tWvRHr72FWk4rDmSs/1wG+E0R3lDKQE=;
 b=YFgyfrY2g22PnOZYAmCr0nLwh08h+Bxd2LzgJZLuXGYCh6YxDq020sFeatULPD1M+lbbvIMuR0xlU/1ZkAqmWenkkNKFLD4fPHfvOGIHAVvyAUWnBTcLxk+jdLbgDfJVkDlac4oY8nERFwGknIEAj2eD2RX80rt/S7rYoWrVE2dxaD4BJYqLxJm98KXYKY04lLYbnT0ui72JjvEpBrdQU9y/gnLON5HVJtYt5HE8KkRIXnR3DX2rQq8YbjuT7GRZWkS7sGcVZnVGqhWqXDM0bqKV/b5mfoJinEnjNG5TVH4mcilQNy+17KQ8BFuD0frZTQ7J1zt1BG7h5859LZZ9gw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h3nwhvaNq+a9tWvRHr72FWk4rDmSs/1wG+E0R3lDKQE=;
 b=84QUq1q5O+pGCmmYW/hiqueFa5uhADNOkC+dfXgWvDezIgc5BibeK5/z8x0f9Ho+ABjPZwQnfbHO/evG9fydi8C49hN5NzQRgKyRaXbqPQdNkXZ4WHuhOQxiceUVeF9edUT/Bx1e+RV6P3vErF0QlAAnkPiBL5X90pzuBmR/yOk=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v7 05/12] arm/sve: save/restore SVE context switch
Thread-Topic: [PATCH v7 05/12] arm/sve: save/restore SVE context switch
Thread-Index: AQHZjUpcuuLAh7YurESbASUYh0J8dK9qtqGAgAAObYA=
Date: Thu, 25 May 2023 10:01:16 +0000
Message-ID: <8DF528FE-8F09-469C-B8C4-28B34B20A273@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-6-luca.fancellu@arm.com>
 <5b09d5d9-360f-d0ef-06c4-6efe1b660a90@xen.org>
In-Reply-To: <5b09d5d9-360f-d0ef-06c4-6efe1b660a90@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB9533:EE_|AM7EUR03FT019:EE_|AS1PR08MB7402:EE_
X-MS-Office365-Filtering-Correlation-Id: b7ecb8ea-0873-4729-2155-08db5d0701be
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 d6Hrc0lIyTztkZoz3DYYq/D07IPe/P8BCi8zMRDmoiCjMKI3b5j0KrvYRL4734yD6bSitYiFWfMbYsHjOy+UOyi8YNv6Q2cGZ4JX08pw5iTHNWFNL4+mg08snHwmgB1vmfNogYzqpJ2PHgByPWmxzK71TyTJLItqIBXO1zX5ZfxiRFXn1qbmrTxLvMdNvIKTvh1mgxL5DygrHRQQwohFzcJtav6zy7LtCFSJ5EHhyAHZqYX4zEnyw4v4cuH71IAu61RfPYnFEMZtKsiv9kyEXEIlAMTqD3bPtV+2hgJ7TmaZ9NLZxZcwUFSbiQFPXydfRbxVXdo+XH03bqeum6LDgU41qywZcjR3jui+pQawBpOkstdCs+k4xYhImqRu5/ReHHcNfVqG8iWYYLpPqOwlZpxX0OzgXKgLqsYP5eCgqFsf418X4J4NXC/FWA3F0QcH1DeaeeJrgbrznzpUbIOvvfOCkiwvxQ5/obwWBuXUzalRttgt5yCtsANH0WKbHQNltkGydeDlZFMZIhiTZxkJRsSNT7Q8+hqX3hC5jvYgFMh6QXIRudBETYaMd5ZZrzylGAdbAAZQf5hjxCf5GRjMO+OjN0I5UhzS/nzXLEZ9K3H/o8Hz99IOb7NseYVJ4LowkNmwHeoJhnR1uP44BFGK9w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(396003)(376002)(39860400002)(136003)(346002)(451199021)(38070700005)(33656002)(86362001)(36756003)(6486002)(6916009)(54906003)(4326008)(316002)(91956017)(66556008)(66446008)(64756008)(76116006)(66946007)(66476007)(478600001)(71200400001)(8676002)(8936002)(2906002)(38100700002)(122000001)(41300700001)(2616005)(6506007)(26005)(53546011)(6512007)(186003)(83380400001)(5660300002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <A3D43044F7918A4787B7B8264447E173@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9533
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8e255eb1-49c6-404a-33c3-08db5d06fae4
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	6WDt5kJtebhr4PiKkG0sVIXE/+EkUGq8yo/FHUFl3sM4ZYxcR5AENugshXWZCcl6sBCZML+o+7A5NXOiFPrbYhIeJuEHmJSojCnt54CsJW71eUBuxz9vCLoj+w9wazKrd58i4ovDovzOJfrBBAnVYyV26YA827oSi1Yoj+npCH0FrfEf+ZuFRPsMxr13brihes0uuGpU1Ik4a5gBBZ7kMgWq0xqxb+Pas1CcpVxI87Ttoq4eQc7DSjO7V2HGSfQuCUuMW87/yTIhkdNgyDfKvP9xp0wI9iFt2T4eHppCA4Smnzme7f+8IFbWWLOF5LOBlCs8rqfUlZCS2B2TYbDz2JN1DX0k25zqd/vKNbm/QkdfZtLRr+pEgCY0uUF2LtBmPJo5jGp8RWF0jit4nqW3YJUbDUR2XzuJwyGHakGwrTxnzItJgmk7KYrDnm/wkzf2T4qLfIS0RVycbfXYatjeVYZV0hspYUQ8xYhr8Et/vepJ9A+I9XPHglFuyU5SbD6fEyCQzmQsYIqc85ZbdYN15Gks7zEYmm8jQ8ooSgLdCj+cwYQaudxxGZPF8ZU47RUpOSOOD1johmeDa8P/dtcQdu2x2yj9HUUJ1/KfFIaGo55F2uQ7MsEWTSVzasi1fJCnlyE7bbrwmwtDMD5FRE6wihGoeujTuw8Otws+L3H46z9i6mAc2xznY+9dxNrWCTGXy0uA1wK/KxyjOGH/3aBrF88BArIieYDEFCvwXIv68bVVl8WcAB48fySzR9IMvMGg
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(376002)(39860400002)(346002)(396003)(136003)(451199021)(40470700004)(36840700001)(46966006)(4326008)(6506007)(6512007)(26005)(70206006)(316002)(70586007)(107886003)(54906003)(6486002)(41300700001)(478600001)(40480700001)(40460700003)(5660300002)(8936002)(8676002)(6862004)(2616005)(86362001)(2906002)(36756003)(336012)(356005)(81166007)(82740400003)(82310400005)(186003)(36860700001)(47076005)(33656002)(83380400001)(53546011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 10:01:27.7667
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b7ecb8ea-0873-4729-2155-08db5d0701be
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS1PR08MB7402



> On 25 May 2023, at 10:09, Julien Grall <julien@xen.org> wrote:
> 
> Hi Luca,
> 
> On 23/05/2023 08:43, Luca Fancellu wrote:
>> +int sve_context_init(struct vcpu *v)
>> +{
>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
>> +                             sve_ffrreg_ctx_size(sve_vl_bits),
>> +                             L1_CACHE_BYTES);
>> +
>> +    if ( !ctx )
>> +        return -ENOMEM;
>> +
>> +    /*
>> +     * Point to the end of Z0-Z31 memory, just before FFR memory, to be kept in
>> +     * sync with sve_context_free()
> 
> Nit: Missing a full stop.

I’ll fix

> 
>> +     */
>> +    v->arch.vfp.sve_zreg_ctx_end = ctx +
>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>> +
>> +    v->arch.zcr_el2 = vl_to_zcr(sve_vl_bits);
>> +
>> +    return 0;
>> +}
>> +
>> +void sve_context_free(struct vcpu *v)
>> +{
>> +    unsigned int sve_vl_bits;
>> +
>> +    if ( v->arch.vfp.sve_zreg_ctx_end )
>> +        return;
>> +
>> +    sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>> +
>> +    /*
>> +    * Point to the end of Z0-Z31 memory, just before FFR memory, to be kept
>> +    * in sync with sve_context_init()
>> +    */
> 
> The spacing looks a bit odd in this comment. Did you miss an extra space?
> 
> Also, I notice this comment is the exact same as on top as sve_context_init(). I think this is a bit misleading because the logic is different. I would suggest the following:
> 
> "Currently points to the end of Z0-Z31 memory which is not the start of the buffer. To be kept in sync with the sve_context_init()."
> 
> Lastly, nit: Missing a full stop.

Ok I’ll change it

> 
>> +    v->arch.vfp.sve_zreg_ctx_end -=
>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>> +
>> +    XFREE(v->arch.vfp.sve_zreg_ctx_end);
>> +}
>> +
> 
> [...]
> 
>> diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
>> index e6e8c363bc16..4aa371e85d26 100644
>> --- a/xen/arch/arm/include/asm/arm64/vfp.h
>> +++ b/xen/arch/arm/include/asm/arm64/vfp.h
>> @@ -6,7 +6,19 @@
>>    struct vfp_state
>>  {
>> +    /*
>> +     * When SVE is enabled for the guest, fpregs memory will be used to
>> +     * save/restore P0-P15 registers, otherwise it will be used for the V0-V31
>> +     * registers.
>> +     */
>>      uint64_t fpregs[64] __vfp_aligned;
>> +    /*
>> +     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
>> +     * where Z0-Z31 registers and FFR can be saved/restored, it points at the
>> +     * end of the Z0-Z31 space and at the beginning of the FFR space, it's done
>> +     * like that to ease the save/restore assembly operations.
>> +     */
>> +    uint64_t *sve_zreg_ctx_end;
> 
> Sorry I only noticed now. But shouldn't this be protected with #ifdef CONFIG_SVE? Same...
> 
>>      register_t fpcr;
>>      register_t fpexc32_el2;
>>      register_t fpsr;
>> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
>> index 331da0f3bcc3..814652d92568 100644
>> --- a/xen/arch/arm/include/asm/domain.h
>> +++ b/xen/arch/arm/include/asm/domain.h
>> @@ -195,6 +195,8 @@ struct arch_vcpu
>>      register_t tpidrro_el0;
>>        /* HYP configuration */
>> +    register_t zcr_el1;
>> +    register_t zcr_el2;
> 
> ... here.

Sure I can protect them. It was done on purpose before to avoid ifdefs but I think saving space
is better here and also there won’t be any use of them when the config is off.


> 
>>      register_t cptr_el2;
>>      register_t hcr_el2;
>>      register_t mdcr_el2;
> 
> Cheers,
> 
> -- 
> Julien Grall

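[Archive note] The buffer layout reviewed in the patch above (one allocation holding Z0-Z31 followed by FFR, with sve_zreg_ctx_end parked at the boundary between the two) can be sketched in plain C as below. This is an illustrative stand-in, not the Xen code: zreg_bytes(), ffr_bytes(), vl_to_zcr_len() and struct vfp_ctx are hypothetical substitutes for sve_zreg_ctx_size(), sve_ffrreg_ctx_size(), vl_to_zcr() and struct vfp_state, sized on the assumption of 32 Z registers of VL bits each, an FFR predicate of VL/8 bits, and the ZCR_ELx.LEN encoding where the vector length is (LEN + 1) * 128 bits.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-ins for sve_zreg_ctx_size()/sve_ffrreg_ctx_size():
 * 32 Z registers of VL bits each, plus one FFR predicate of VL/8 bits. */
static size_t zreg_bytes(unsigned int vl_bits) { return 32 * (vl_bits / 8); }
static size_t ffr_bytes(unsigned int vl_bits) { return vl_bits / 64; }

/* ZCR_ELx.LEN encodes the vector length as (LEN + 1) * 128 bits. */
static uint64_t vl_to_zcr_len(unsigned int vl_bits) { return vl_bits / 128 - 1; }

/* Hypothetical mirror of the sve_zreg_ctx_end field in struct vfp_state. */
struct vfp_ctx {
    uint64_t *sve_zreg_ctx_end;
};

/* Allocate one buffer covering Z0-Z31 followed by FFR, then keep the saved
 * pointer at the Z/FFR boundary rather than at the base of the buffer. */
static int ctx_init(struct vfp_ctx *v, unsigned int vl_bits)
{
    uint64_t *ctx = calloc(1, zreg_bytes(vl_bits) + ffr_bytes(vl_bits));

    if ( !ctx )
        return -1;
    v->sve_zreg_ctx_end = ctx + zreg_bytes(vl_bits) / sizeof(uint64_t);
    return 0;
}

/* Walk the boundary pointer back to the start of the allocation before
 * freeing; nothing to do when no buffer was ever allocated. */
static void ctx_free(struct vfp_ctx *v, unsigned int vl_bits)
{
    if ( !v->sve_zreg_ctx_end )
        return;
    v->sve_zreg_ctx_end -= zreg_bytes(vl_bits) / sizeof(uint64_t);
    free(v->sve_zreg_ctx_end);
    v->sve_zreg_ctx_end = NULL;
}
```

Keeping the single boundary pointer means, as the quoted comment says, that the save/restore assembly can reach both regions from one base: Z state at negative offsets, FFR at non-negative ones. The cost, visible in the review, is that free() needs the same size arithmetic in reverse to recover the start of the allocation.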


From xen-devel-bounces@lists.xenproject.org Thu May 25 10:01:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 10:01:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539502.840448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27n3-00047t-Hs; Thu, 25 May 2023 10:01:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539502.840448; Thu, 25 May 2023 10:01:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27n3-00047m-EZ; Thu, 25 May 2023 10:01:49 +0000
Received: by outflank-mailman (input) for mailman id 539502;
 Thu, 25 May 2023 10:01:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zmzE=BO=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q27n1-000474-R4
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 10:01:47 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20608.outbound.protection.outlook.com
 [2a01:111:f400:fe13::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 27bf86ec-fae3-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 12:01:45 +0200 (CEST)
Received: from AM6P192CA0104.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:8d::45)
 by DU0PR08MB9201.eurprd08.prod.outlook.com (2603:10a6:10:415::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Thu, 25 May
 2023 10:01:39 +0000
Received: from AM7EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8d:cafe::66) by AM6P192CA0104.outlook.office365.com
 (2603:10a6:209:8d::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16 via Frontend
 Transport; Thu, 25 May 2023 10:01:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT033.mail.protection.outlook.com (100.127.140.129) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6433.16 via Frontend Transport; Thu, 25 May 2023 10:01:38 +0000
Received: ("Tessian outbound 3a01b65b5aad:v136");
 Thu, 25 May 2023 10:01:37 +0000
Received: from 05b95d110bc8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 383CD494-7500-4087-B566-D27A0966BF93.1; 
 Thu, 25 May 2023 10:01:26 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 05b95d110bc8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 25 May 2023 10:01:26 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB9533.eurprd08.prod.outlook.com (2603:10a6:10:44f::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Thu, 25 May
 2023 10:01:22 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::362c:56c7:5ea4:422e%7]) with mapi id 15.20.6433.015; Thu, 25 May 2023
 10:01:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27bf86ec-fae3-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JBMy0hH57+9uBRGM1IIQjFWLs8eB6UAgJ0AIWOyiAAU=;
 b=HWvzGXW0EPrJa0bVjXLpxNFr/On9jB0NPs339JE4V5vZsmrNE8ZI696DzqyyI9bbXriMWwffCBL2NBdK+mvUu/Oyb5AqMfZtlrwVpBiUU4bmQkwwaJgflHEby7CQxLfGUT2zLOl7E78lTiCgqzcoLNobVH3BWl6vpRkoUwCnb+o=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 399fe5bfda899e67
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H7rB4KVwDBoxB49d7PqmXP6PYmj3AcFz0naZx6HVYcRQ2i08zAGd0j4+DzoIBIn/YCMH9w4qAfu0K3XJtMZ+x9dW4Dh5VfEcFIuGXKjHU6aL8a1tvol3oLxOdUJ1IxfCEy10KrEtM3UvxHPxMrZeYfKYpI02EWE8rVmJXHdpCBbVI9GYhsnluczJWzwX2EncXCE6grhJa+IdpfUhzvaJt3+SqVmQczhsCKNz1RyPC0YsACuhpO5NsGxpt9TkW9JViEVNmHqjB8kYwGwTVhR5Whur1HH5azKGQgmUHpOkn47E0LEWsV95zilajQdFbOI4QjKoBluy7GPKnbRgPPoG7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JBMy0hH57+9uBRGM1IIQjFWLs8eB6UAgJ0AIWOyiAAU=;
 b=iT40OFKaxeCL4EVXyX6ELt8gXiKMJwOyJMhCZzCLjp0RznaBfXf6MsIka6U3Of5HzuCyRtoMTtiAJWO/Nx65YsO+AUlw3V56/CVrTqegDHeKK/1cbwowDGK9p/VEIbj+TI1xjKv4rZZFUo/V8AGet+uPIR1iA520b806Kx4mgM0WZ7B03+FMA5LMPkrdx0xn5j9Pz21621tLOs3ykuKfzQFNaClx/Il35XSmucJo8A7PirV7SuGBRXWZgOF5ZlUuc2NhJIfoBNnMkdmzCo0Oi48c9cnCZgQlUPS4aOeFTaZbv5u33Hfvlo8+b/Jih6x2UPJlYPvVxs3H6OHkqq8jww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JBMy0hH57+9uBRGM1IIQjFWLs8eB6UAgJ0AIWOyiAAU=;
 b=HWvzGXW0EPrJa0bVjXLpxNFr/On9jB0NPs339JE4V5vZsmrNE8ZI696DzqyyI9bbXriMWwffCBL2NBdK+mvUu/Oyb5AqMfZtlrwVpBiUU4bmQkwwaJgflHEby7CQxLfGUT2zLOl7E78lTiCgqzcoLNobVH3BWl6vpRkoUwCnb+o=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: =?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>, George Dunlap
	<george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, Juergen Gross
	<jgross@suse.com>, Christian Lindig <christian.lindig@citrix.com>, David
 Scott <dave@recoil.org>, Christian Lindig <christian.lindig@cloud.com>
Subject: Re: [PATCH v7 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Topic: [PATCH v7 09/12] tools: add physinfo arch_capabilities handling
 for Arm
Thread-Index: AQHZjUpkSkVWSiVT2k+ILuaW20r/+q9qrj4AgAAKbYCAAAFlAIAACwWA
Date: Thu, 25 May 2023 10:01:22 +0000
Message-ID: <625F8843-47F5-4E60-8849-637DFFBA2AE7@arm.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-10-luca.fancellu@arm.com> <ZG8evxN0mF8NDTPS@mail-itl>
 <2BCDFCFC-30F0-4D61-AE92-65046CDD5696@arm.com>
 <1D2A4448-D89C-458B-A2EB-C0368E6534C1@arm.com>
In-Reply-To: <1D2A4448-D89C-458B-A2EB-C0368E6534C1@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.500.231)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB9533:EE_|AM7EUR03FT033:EE_|DU0PR08MB9201:EE_
X-MS-Office365-Filtering-Correlation-Id: a82e4857-1372-44b9-68e5-08db5d0707dc
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 31/1z+LOj3ekbDINeJvlS8MbbkGMU2+kI5xqRDIu64dxNBzDG1MRrJbP+NLwPQ/JiAo6UpM0qHpD9uVTvsAmWzQEIcPMBuopaOz4EEe/UPdS7IAgzcCFouJLS/V1xJt9K/m4qq2fNYwXFVZ5mo9HlSTGVATny4Dp/jUqwOLCBDHBo8tYnzmASLr7LXEGwPSzziKI7XYy2QSTr2W7pJ/vUvGa8I3687G5HDLQfQThnYbuiP44cQVlGcB6GIeFzIt12xOBsRo9SZ4blN4p0psgMm550+pFo1+fFeCXSqtCpwZux8HS1V2Qyd7ExLINXNArspnbTSYg0zPCumo1NDGfpQaclCxEZfjPDHOuB9f53veKBCVHC9H8XPJe+LYCXNefQFBWvJUEUa8YorHi/vbarJVgJQFK1+UUOAeg2TQLNjKpQntI9c7/1ZXwmkM+U5n8nu69316GgvgLSQgoIRIxuAwv6ZY5QNHbW+6cn3zN6HfFe8wEq26xoP+T5j1hO1PYCwiGPPM9zw4tedYOEc7Sgi8TrA59GB6UKcOpUnCYDBUtehCElZqlvPGLngmGfOp0gj5ZtxEGQ1J74WkKgL1QaMETar5aD9rZ27Z8IYNe0wxInWAWgPwyPYuVdL+8VRKB/2OcfwmeDJYNLMNZpBwFiA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(396003)(376002)(39860400002)(136003)(346002)(451199021)(38070700005)(33656002)(86362001)(36756003)(6486002)(6916009)(54906003)(4326008)(316002)(91956017)(66556008)(66446008)(64756008)(76116006)(66946007)(66476007)(478600001)(71200400001)(8676002)(8936002)(2906002)(7416002)(38100700002)(122000001)(41300700001)(2616005)(6506007)(26005)(53546011)(6512007)(186003)(66574015)(83380400001)(5660300002)(45980500001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <550BE1B358D49848BABE896EF9E340D1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9533
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b477ee12-acac-40f8-0a32-08db5d06fe99
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tMWNgI1oUND8N6vCpLyJYt6BIMTYlxn57Ygkce6wPWTGLUVLC1h9UC2uCwGTTJVX6YCibPljVs0KEycVFMHOXHRfNnoSsmjaWc8UJxSFTT8qNMF88VU2QW21sY0vTsPbaPWjHUs0DzBQ0HaeQozhjlBZBgFuIuJE5wJGTnmlmTs3dh+y4weEXGS1sbgXmAk/vv4uAMwwdiU7VtlsGjnKjERKj7pIGwdCPTYoTOXKz0IXhPhxgMGbenpbxD2kOuX97DDcWrn+l+KnO6bND6PjCE79qtqCSfXMgO2ZIo6+omrXuZ1F4XxqofDfihttkXDZhuiKLzujIzMY6OswT1ZYE7ZS0OLyQkJRuYEokvZ1u1j2q9bpC2O/LJURHRilpEf7y9+qQdcVJ0SSvag31TAFSNUlLeywpo9V7sCbd7LOGH2TE6bUB1DxqMQwamWHdZkEY9kvLbBVNCaB/016f9iKh6kL4MaFvVjYf7VAfh+7cEKJ6t73Wf3GDvkqskB88o0OJwYlprwumqZLaOQy3OSYOeafWAG+wwIEY3nssczecsn52T2gtIxvCdIWFLKX2/vKX1+R0Viej/eRWE3N9nZ3tW4USCqQaHXBrHuVMkQ82qJNnm+UA6NvpPlv4iJXWMC9BBInM3iUqrxIjSTspVi9XrSvDniF+f7Ft6HcSVOeJMghuNroZOIzaZ9YzPgr7Q28wJkiQHLezCYPUXhGm0Cc5kLcScESi/jWywG1EtU4IFarHOKHEWju4986kuuPfqOh
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(376002)(136003)(39860400002)(451199021)(40470700004)(36840700001)(46966006)(4326008)(33656002)(36756003)(82310400005)(40480700001)(53546011)(6862004)(8676002)(8936002)(6506007)(41300700001)(6512007)(26005)(2616005)(186003)(336012)(107886003)(5660300002)(83380400001)(70206006)(54906003)(70586007)(316002)(478600001)(86362001)(82740400003)(81166007)(356005)(6486002)(36860700001)(66574015)(2906002)(47076005)(40460700003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 10:01:38.0315
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a82e4857-1372-44b9-68e5-08db5d0707dc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9201

DQoNCj4gT24gMjUgTWF5IDIwMjMsIGF0IDEwOjIxLCBMdWNhIEZhbmNlbGx1IDxMdWNhLkZhbmNl
bGx1QGFybS5jb20+IHdyb3RlOg0KPiANCj4+PiANCj4+PiAoLi4uKQ0KPj4+IA0KPj4+PiBkaWZm
IC0tZ2l0IGEvdG9vbHMvcHl0aG9uL3hlbi9sb3dsZXZlbC94Yy94Yy5jIGIvdG9vbHMvcHl0aG9u
L3hlbi9sb3dsZXZlbC94Yy94Yy5jDQo+Pj4+IGluZGV4IDk3MjhiMzQxODVhYy4uYjM2OTlmZGFj
NThlIDEwMDY0NA0KPj4+PiAtLS0gYS90b29scy9weXRob24veGVuL2xvd2xldmVsL3hjL3hjLmMN
Cj4+Pj4gKysrIGIvdG9vbHMvcHl0aG9uL3hlbi9sb3dsZXZlbC94Yy94Yy5jDQo+Pj4+IEBAIC0y
Miw2ICsyMiw3IEBADQo+Pj4+ICNpbmNsdWRlIDx4ZW4vaHZtL2h2bV9pbmZvX3RhYmxlLmg+DQo+
Pj4+ICNpbmNsdWRlIDx4ZW4vaHZtL3BhcmFtcy5oPg0KPj4+PiANCj4+Pj4gKyNpbmNsdWRlIDx4
ZW4tdG9vbHMvYXJtLWFyY2gtY2FwYWJpbGl0aWVzLmg+DQo+Pj4+ICNpbmNsdWRlIDx4ZW4tdG9v
bHMvY29tbW9uLW1hY3Jvcy5oPg0KPj4+PiANCj4+Pj4gLyogTmVlZGVkIGZvciBQeXRob24gdmVy
c2lvbnMgZWFybGllciB0aGFuIDIuMy4gKi8NCj4+Pj4gQEAgLTg5Nyw3ICs4OTgsNyBAQCBzdGF0
aWMgUHlPYmplY3QgKnB5eGNfcGh5c2luZm8oWGNPYmplY3QgKnNlbGYpDQo+Pj4+ICAgaWYgKCBw
ICE9IHZpcnRfY2FwcyApDQo+Pj4+ICAgICAqKHAtMSkgPSAnXDAnOw0KPj4+PiANCj4+Pj4gLSAg
ICByZXR1cm4gUHlfQnVpbGRWYWx1ZSgie3M6aSxzOmksczppLHM6aSxzOmwsczpsLHM6bCxzOmks
czpzLHM6c30iLA0KPj4+PiArICAgIHJldHVybiBQeV9CdWlsZFZhbHVlKCJ7czppLHM6aSxzOmks
czppLHM6bCxzOmwsczpsLHM6aSxzOnMsczpzLHM6SX0iLA0KPj4+PiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICJucl9ub2RlcyIsICAgICAgICAgcGluZm8ubnJfbm9kZXMsDQo+Pj4+ICAgICAg
ICAgICAgICAgICAgICAgICAgICAgInRocmVhZHNfcGVyX2NvcmUiLCBwaW5mby50aHJlYWRzX3Bl
cl9jb3JlLA0KPj4+PiAgICAgICAgICAgICAgICAgICAgICAgICAgICJjb3Jlc19wZXJfc29ja2V0
IiwgcGluZm8uY29yZXNfcGVyX3NvY2tldCwNCj4+Pj4gQEAgLTkwNyw3ICs5MDgsMTAgQEAgc3Rh
dGljIFB5T2JqZWN0ICpweXhjX3BoeXNpbmZvKFhjT2JqZWN0ICpzZWxmKQ0KPj4+PiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICJzY3J1Yl9tZW1vcnkiLCAgICAgcGFnZXNfdG9fa2liKHBpbmZv
LnNjcnViX3BhZ2VzKSwNCj4+Pj4gICAgICAgICAgICAgICAgICAgICAgICAgICAiY3B1X2toeiIs
ICAgICAgICAgIHBpbmZvLmNwdV9raHosDQo+Pj4+ICAgICAgICAgICAgICAgICAgICAgICAgICAg
Imh3X2NhcHMiLCAgICAgICAgICBjcHVfY2FwLA0KPj4+PiAtICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICJ2aXJ0X2NhcHMiLCAgICAgICAgdmlydF9jYXBzKTsNCj4+Pj4gKyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAidmlydF9jYXBzIiwgICAgICAgIHZpcnRfY2FwcywNCj4+Pj4gKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAiYXJtX3N2ZV92bCIsDQo+Pj4+ICsgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBhcmNoX2NhcGFiaWxpdGllc19hcm1fc3ZlKHBpbmZvLmFyY2hf
Y2FwYWJpbGl0aWVzKQ0KPj4+PiArICAgICAgICAgICAgICAgICAgICAgICAgKTsNCj4+PiANCj4+
PiBUaGlzIHNob3VsZCBiZSBhZGRlZCBvbmx5IHdoZW4gYnVpbGRpbmcgZm9yIEFSTSwgc2ltaWxh
ciBhcyBmb3Igb3RoZXINCj4+PiBiaW5kaW5ncy4NCj4+IA0KPj4gSGkgTWFyZWssDQo+PiANCj4+
IFRoYW5rIHlvdSBmb3IgdGFraW5nIHRoZSB0aW1lIHRvIHJldmlldyB0aGlzLCBhcmUgeW91IG9r
IGlmIEkgbWFrZSB0aGVzZSBjaGFuZ2VzIHRvIHRoZSBjb2RlPw0KPj4gDQo+PiBkaWZmIC0tZ2l0
IGEvdG9vbHMvcHl0aG9uL3hlbi9sb3dsZXZlbC94Yy94Yy5jIGIvdG9vbHMvcHl0aG9uL3hlbi9s
b3dsZXZlbC94Yy94Yy5jDQo+PiBpbmRleCBiMzY5OWZkYWM1OGUuLmM3ZjY5MDE4OTc3MCAxMDA2
NDQNCj4+IC0tLSBhL3Rvb2xzL3B5dGhvbi94ZW4vbG93bGV2ZWwveGMveGMuYw0KPj4gKysrIGIv
dG9vbHMvcHl0aG9uL3hlbi9sb3dsZXZlbC94Yy94Yy5jDQo+PiBAQCAtODcyLDYgKzg3Miw4IEBA
IHN0YXRpYyBQeU9iamVjdCAqcHl4Y19waHlzaW5mbyhYY09iamVjdCAqc2VsZikNCj4+ICAgIGNv
bnN0IGNoYXIgKnZpcnRjYXBfbmFtZXNbXSA9IHsgImh2bSIsICJwdiIgfTsNCj4+ICAgIGNvbnN0
IHVuc2lnbmVkIHZpcnRjYXBzX2JpdHNbXSA9IHsgWEVOX1NZU0NUTF9QSFlTQ0FQX2h2bSwNCj4+
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgWEVOX1NZU0NUTF9QSFlTQ0FQ
X3B2IH07DQo+PiArICAgIFB5T2JqZWN0ICpvYmpyZXQ7DQo+PiArICAgIGludCByZXRjb2RlOw0K
Pj4gDQo+PiAgICBpZiAoIHhjX3BoeXNpbmZvKHNlbGYtPnhjX2hhbmRsZSwgJnBpbmZvKSAhPSAw
ICkNCj4+ICAgICAgICByZXR1cm4gcHl4Y19lcnJvcl90b19leGNlcHRpb24oc2VsZi0+eGNfaGFu
ZGxlKTsNCj4+IEBAIC04OTgsMjAgKzkwMCwzMSBAQCBzdGF0aWMgUHlPYmplY3QgKnB5eGNfcGh5
c2luZm8oWGNPYmplY3QgKnNlbGYpDQo+PiAgICBpZiAoIHAgIT0gdmlydF9jYXBzICkNCj4+ICAg
ICAgKihwLTEpID0gJ1wwJzsNCj4+IA0KPj4gLSAgICByZXR1cm4gUHlfQnVpbGRWYWx1ZSgie3M6
aSxzOmksczppLHM6aSxzOmwsczpsLHM6bCxzOmksczpzLHM6cyxzOkl9IiwNCj4+IC0gICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIm5yX25vZGVzIiwgICAgICAgICBwaW5mby5ucl9ub2RlcywN
Cj4+IC0gICAgICAgICAgICAgICAgICAgICAgICAgICAgInRocmVhZHNfcGVyX2NvcmUiLCBwaW5m
by50aHJlYWRzX3Blcl9jb3JlLA0KPj4gLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAiY29y
ZXNfcGVyX3NvY2tldCIsIHBpbmZvLmNvcmVzX3Blcl9zb2NrZXQsDQo+PiAtICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICJucl9jcHVzIiwgICAgICAgICAgcGluZm8ubnJfY3B1cywNCj4+IC0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgInRvdGFsX21lbW9yeSIsICAgICBwYWdlc190b19r
aWIocGluZm8udG90YWxfcGFnZXMpLA0KPj4gLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAi
ZnJlZV9tZW1vcnkiLCAgICAgIHBhZ2VzX3RvX2tpYihwaW5mby5mcmVlX3BhZ2VzKSwNCj4+IC0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgInNjcnViX21lbW9yeSIsICAgICBwYWdlc190b19r
aWIocGluZm8uc2NydWJfcGFnZXMpLA0KPj4gLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAi
Y3B1X2toeiIsICAgICAgICAgIHBpbmZvLmNwdV9raHosDQo+PiAtICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICJod19jYXBzIiwgICAgICAgICAgY3B1X2NhcCwNCj4+IC0gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgInZpcnRfY2FwcyIsICAgICAgICB2aXJ0X2NhcHMsDQo+PiAtICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICJhcm1fc3ZlX3ZsIiwNCj4+IC0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBhcmNoX2NhcGFiaWxpdGllc19hcm1fc3ZlKHBpbmZvLmFyY2hfY2FwYWJp
bGl0aWVzKQ0KPj4gKyAgICBvYmpyZXQgPSBQeV9CdWlsZFZhbHVlKCJ7czppLHM6aSxzOmksczpp
LHM6bCxzOmwsczpsLHM6aSxzOnMsczpzfSIsDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIm5yX25vZGVzIiwgICAgICAgICBwaW5mby5ucl9ub2RlcywNCj4+ICsgICAgICAgICAgICAg
ICAgICAgICAgICAgICAidGhyZWFkc19wZXJfY29yZSIsIHBpbmZvLnRocmVhZHNfcGVyX2NvcmUs
DQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgImNvcmVzX3Blcl9zb2NrZXQiLCBwaW5m
by5jb3Jlc19wZXJfc29ja2V0LA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICJucl9j
cHVzIiwgICAgICAgICAgcGluZm8ubnJfY3B1cywNCj4+ICsgICAgICAgICAgICAgICAgICAgICAg
ICAgICAidG90YWxfbWVtb3J5IiwgICAgIHBhZ2VzX3RvX2tpYihwaW5mby50b3RhbF9wYWdlcyks
DQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgImZyZWVfbWVtb3J5IiwgICAgICBwYWdl
c190b19raWIocGluZm8uZnJlZV9wYWdlcyksDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAg
ICAgInNjcnViX21lbW9yeSIsICAgICBwYWdlc190b19raWIocGluZm8uc2NydWJfcGFnZXMpLA0K
Pj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICJjcHVfa2h6IiwgICAgICAgICAgcGluZm8u
Y3B1X2toeiwNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAiaHdfY2FwcyIsICAgICAg
ICAgIGNwdV9jYXAsDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgInZpcnRfY2FwcyIs
ICAgICAgICB2aXJ0X2NhcHMNCj4+ICAgICAgICAgICAgICAgICAgICAgICAgKTsNCj4+ICsNCj4+
ICsgICAgI2lmIGRlZmluZWQoX19hYXJjaDY0X18pDQo+PiArICAgICAgICBpZiAob2JqcmV0KSB7
DQo+PiArICAgICAgICAgICAgcmV0Y29kZSA9IFB5RGljdF9TZXRJdGVtU3RyaW5nKA0KPj4gKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBvYmpyZXQsICJhcm1fc3ZlX3ZsIiwNCj4+ICsgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgYXJjaF9jYXBhYmlsaXRpZXNfYXJtX3N2ZShwaW5mby5h
cmNoX2NhcGFiaWxpdGllcykNCj4+ICsgICAgICAgICAgICAgICAgICAgICAgICApOw0KPj4gKyAg
ICAgICAgICAgIGlmICggcmV0Y29kZSA8IDAgKQ0KPj4gKyAgICAgICAgICAgICAgICByZXR1cm4g
TlVMTDsNCj4+ICsgICAgICAgIH0NCj4+ICsgICAgI2VuZGlmDQo+PiArDQo+PiArICAgIHJldHVy
biBvYmpyZXQ7DQo+PiB9DQo+PiANCj4+IHN0YXRpYyBQeU9iamVjdCAqcHl4Y19nZXRjcHVpbmZv
KFhjT2JqZWN0ICpzZWxmLCBQeU9iamVjdCAqYXJncywgUHlPYmplY3QgKmt3ZHMpDQo+PiANCj4+
 
>> Please notice that now we can have a path that could return NULL, are you ok for
>> it or should I just ignore the return code for PyDict_SetItemString?
>> 
>> Also, do you want me to protect the include to <xen-tools/arm-arch-capabilities.h>
>> with ifdef?
>> 
> 
> EDIT: I saw this doesn’t even compile, I will ask later when I will have something working,
> I saw PyDict_SetItemString is used somewhere else so I will use that approach before
> proposing you a solution
> 
> 

Ok, so this is my proposed solution:

diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index b3699fdac58e..e52aa88f3c5f 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -872,6 +872,7 @@ static PyObject *pyxc_physinfo(XcObject *self)
     const char *virtcap_names[] = { "hvm", "pv" };
     const unsigned virtcaps_bits[] = { XEN_SYSCTL_PHYSCAP_hvm,
                                        XEN_SYSCTL_PHYSCAP_pv };
+    PyObject *objret;
 
     if ( xc_physinfo(self->xc_handle, &pinfo) != 0 )
         return pyxc_error_to_exception(self->xc_handle);
@@ -898,20 +899,36 @@ static PyObject *pyxc_physinfo(XcObject *self)
     if ( p != virt_caps )
       *(p-1) = '\0';
 
-    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s,s:I}",
-                            "nr_nodes",         pinfo.nr_nodes,
-                            "threads_per_core", pinfo.threads_per_core,
-                            "cores_per_socket", pinfo.cores_per_socket,
-                            "nr_cpus",          pinfo.nr_cpus,
-                            "total_memory",     pages_to_kib(pinfo.total_pages),
-                            "free_memory",      pages_to_kib(pinfo.free_pages),
-                            "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
-                            "cpu_khz",          pinfo.cpu_khz,
-                            "hw_caps",          cpu_cap,
-                            "virt_caps",        virt_caps,
-                            "arm_sve_vl",
-                             arch_capabilities_arm_sve(pinfo.arch_capabilities)
+    objret = Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
+                           "nr_nodes",         pinfo.nr_nodes,
+                           "threads_per_core", pinfo.threads_per_core,
+                           "cores_per_socket", pinfo.cores_per_socket,
+                           "nr_cpus",          pinfo.nr_cpus,
+                           "total_memory",     pages_to_kib(pinfo.total_pages),
+                           "free_memory",      pages_to_kib(pinfo.free_pages),
+                           "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
+                           "cpu_khz",          pinfo.cpu_khz,
+                           "hw_caps",          cpu_cap,
+                           "virt_caps",        virt_caps
                            );
+
+    #if defined(__aarch64__)
+        if ( objret ) {
+            unsigned int sve_vl_bits;
+            PyObject *py_arm_sve_vl;
+
+            sve_vl_bits = arch_capabilities_arm_sve(pinfo.arch_capabilities);
+            py_arm_sve_vl = PyLong_FromUnsignedLong(sve_vl_bits);
+
+            if ( !py_arm_sve_vl )
+                return NULL;
+
+            if( PyDict_SetItemString(objret, "arm_sve_vl", py_arm_sve_vl) )
+                return NULL;
+        }
+    #endif
+
+    return objret;
 }
 
 static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)

Would it work for you?



> 
>>> 
>>>> }
>>>> 
>>>> static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)
>>> 
>>> -- 
>>> Best Regards,
>>> Marek Marczykowski-Górecki
>>> Invisible Things Lab


From xen-devel-bounces@lists.xenproject.org Thu May 25 10:02:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 10:02:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539509.840457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27nZ-0004oh-Qn; Thu, 25 May 2023 10:02:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539509.840457; Thu, 25 May 2023 10:02:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27nZ-0004oa-Nt; Thu, 25 May 2023 10:02:21 +0000
Received: by outflank-mailman (input) for mailman id 539509;
 Thu, 25 May 2023 10:02:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q27nY-0004n3-2u
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 10:02:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q27nX-00021m-HJ; Thu, 25 May 2023 10:02:19 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.31.224]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q27nX-0000U3-99; Thu, 25 May 2023 10:02:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=+ujPelSO20CPR8bWPE/O1k8/qxE3zzgkrkPbPkX6ELg=; b=4GQ7foU/SfmM6f+0WtaKltqSkQ
	326Vdx60tRQvR5xQAXM4Z/izgxobfaS1lG7iUpxpP7WVxdTI/1FssLU0WIc5YslMUuO1SIo2WS0Aj
	QleYGA4ZAF5Tgzkjzi0jxCBtnu0rZ7sxTGJKeGUWmL2fqrOf3tSbCaPZtFphc3+noe7o=;
Message-ID: <af5588fe-6c76-6cb2-2d35-9caf0914de15@xen.org>
Date: Thu, 25 May 2023 11:02:15 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [XEN v7 06/11] xen: dt: Replace u64 with uint64_t as the callback
 function parameters for dt_for_each_range()
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com, michal.orzel@amd.com
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
 <20230518143920.43186-7-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230518143920.43186-7-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 18/05/2023 15:39, Ayan Kumar Halder wrote:
> In the callback functions invoked by dt_for_each_range(), i.e. handle_pci_range()
> and map_range_to_domain(), 'u64' should be replaced with 'uint64_t' as the data
> type for the parameters, since the Xen coding style says that u32/u64 should be
> avoided.
> 
> Also dt_for_each_range() invokes the callback functions with 'uint64_t'
> arguments. Thus, is_bar_valid() needs to change the parameter types accordingly.
> 
> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
> Reviewed-by: Michal Orzel <michal.orzel@amd.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 25 10:07:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 10:07:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539515.840468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27sc-0005at-DY; Thu, 25 May 2023 10:07:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539515.840468; Thu, 25 May 2023 10:07:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q27sc-0005am-A3; Thu, 25 May 2023 10:07:34 +0000
Received: by outflank-mailman (input) for mailman id 539515;
 Thu, 25 May 2023 10:07:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q27sa-0005aT-KZ
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 10:07:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q27sa-00027V-6O; Thu, 25 May 2023 10:07:32 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.31.224]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q27sZ-0000hv-VP; Thu, 25 May 2023 10:07:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=QbrOr9E3GqxP0BG8gU+zcrcR/75yIDTj2OiWleo21xQ=; b=ao59pIgA7oA3AU2Z/MJZ0bpLPe
	O9dl2ViBCU/UZ1o6L6C1XyQ1PuKOovfar2FQNh2ED3sq97/trTL89U+F+WDQFBM7Qeu08xC45N+Ta
	uav4DlrUZTqMwe74ETovEfy3dInnGBXZd29ktN+U0AtLtvZd+tO+p4Pdw+P+ZxJD2dA0=;
Message-ID: <c384a6e7-90a5-7102-3f17-64abeca40c76@xen.org>
Date: Thu, 25 May 2023 11:07:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v7 05/12] arm/sve: save/restore SVE context switch
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230523074326.3035745-1-luca.fancellu@arm.com>
 <20230523074326.3035745-6-luca.fancellu@arm.com>
 <5b09d5d9-360f-d0ef-06c4-6efe1b660a90@xen.org>
 <8DF528FE-8F09-469C-B8C4-28B34B20A273@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <8DF528FE-8F09-469C-B8C4-28B34B20A273@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Luca,

On 25/05/2023 11:01, Luca Fancellu wrote:
>> On 25 May 2023, at 10:09, Julien Grall <julien@xen.org> wrote:
>>> diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
>>> index e6e8c363bc16..4aa371e85d26 100644
>>> --- a/xen/arch/arm/include/asm/arm64/vfp.h
>>> +++ b/xen/arch/arm/include/asm/arm64/vfp.h
>>> @@ -6,7 +6,19 @@
>>>     struct vfp_state
>>>   {
>>> +    /*
>>> +     * When SVE is enabled for the guest, fpregs memory will be used to
>>> +     * save/restore P0-P15 registers, otherwise it will be used for the V0-V31
>>> +     * registers.
>>> +     */
>>>       uint64_t fpregs[64] __vfp_aligned;
>>> +    /*
>>> +     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
>>> +     * where Z0-Z31 registers and FFR can be saved/restored, it points at the
>>> +     * end of the Z0-Z31 space and at the beginning of the FFR space, it's done
>>> +     * like that to ease the save/restore assembly operations.
>>> +     */
>>> +    uint64_t *sve_zreg_ctx_end;
>>
>> Sorry I only noticed now. But shouldn't this be protected with #ifdef CONFIG_SVE? Same...
>>
>>>       register_t fpcr;
>>>       register_t fpexc32_el2;
>>>       register_t fpsr;
>>> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
>>> index 331da0f3bcc3..814652d92568 100644
>>> --- a/xen/arch/arm/include/asm/domain.h
>>> +++ b/xen/arch/arm/include/asm/domain.h
>>> @@ -195,6 +195,8 @@ struct arch_vcpu
>>>       register_t tpidrro_el0;
>>>         /* HYP configuration */
>>> +    register_t zcr_el1;
>>> +    register_t zcr_el2;
>>
>> ... here.
> 
> Sure I can protect them. It was done on purpose before to avoid ifdefs but I think saving space
> is better here and also there won’t be any use of them when the config is off.

I wasn't thinking about saving space. I was more thinking about catching 
any (mis)use of the fields in common code. With the #ifdef, the 
compilation would fail.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 25 10:25:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 10:25:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539521.840478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q289c-00084S-V4; Thu, 25 May 2023 10:25:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539521.840478; Thu, 25 May 2023 10:25:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q289c-00084L-S1; Thu, 25 May 2023 10:25:08 +0000
Received: by outflank-mailman (input) for mailman id 539521;
 Thu, 25 May 2023 10:25:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q289b-00084F-QV
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 10:25:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q289a-0002ZP-Vy; Thu, 25 May 2023 10:25:06 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=[192.168.31.224]) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q289a-0001Su-OT; Thu, 25 May 2023 10:25:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=rOKTM3PhuAeFMtJp9urA7WOmRhwGY1mGPdZZzOFtHR4=; b=aPEb/NczIkzi2P7r7zLYVNu96W
	5sQli5nmir1G7lEvCO/s7l4S8HufAi5UQTzXAj904BmR1BtX6xCTQ/NmxYvGQQKi5HXGFtesvmVTq
	nV1Z3qH/KtZ5V0CmKvEoA8czApsCwMwlrn+qTYsu3h9XdfE7bg7+Y3OUQsQWlfnAjuKA=;
Message-ID: <72b9c858-e414-d0f4-ed9a-b51712e4c310@xen.org>
Date: Thu, 25 May 2023 11:25:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [XEN v7 07/11] xen/arm: p2m: Use the pa_range_info table to
 support ARM_32 and ARM_64
Content-Language: en-US
To: Ayan Kumar Halder <ayankuma@amd.com>, Michal Orzel
 <michal.orzel@amd.com>, Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
 <20230518143920.43186-8-ayan.kumar.halder@amd.com>
 <66f04988-9a4b-39c0-fb17-c508b98e3bdf@amd.com>
 <06e08bf5-ac14-b0ae-743e-60da5e2396e4@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <06e08bf5-ac14-b0ae-743e-60da5e2396e4@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 24/05/2023 15:21, Ayan Kumar Halder wrote:
> 
> On 19/05/2023 09:54, Michal Orzel wrote:
>> Hi Ayan,
> Hi Michal,
>>
>> On 18/05/2023 16:39, Ayan Kumar Halder wrote:
>>> Restructure the code so that one can use pa_range_info[] table for both
>>> ARM_32 as well as ARM_64.
>>>
>>> Also, removed the hardcoding for P2M_ROOT_ORDER and P2M_ROOT_LEVEL as
>>> p2m_root_order can be obtained from the pa_range_info[].root_order and
>>> p2m_root_level can be obtained from pa_range_info[].sl0.
>>>
>>> Refer ARM DDI 0406C.d ID040418, B3-1345,
>>> "Use of concatenated first-level translation tables
>>>
>>> ...However, a 40-bit input address range with a translation 
>>> granularity of 4KB
>>> requires a total of 28 bits of address resolution. Therefore, a stage 2
>>> translation that supports a 40-bit input address range requires two 
>>> concatenated
>>> first-level translation tables,..."
>>>
>>> Thus, root-order is 1 for 40-bit IPA on ARM_32.
>>>
>>> Refer ARM DDI 0406C.d ID040418, B3-1348,
>>>
>>> "Determining the required first lookup level for stage 2 translations
>>>
>>> For a stage 2 translation, the output address range from the stage 1
>>> translations determines the required input address range for the stage 2
>>> translation. The permitted values of VTCR.SL0 are:
>>>
>>> 0b00 Stage 2 translation lookup must start at the second level.
>>> 0b01 Stage 2 translation lookup must start at the first level.
>>>
>>> VTCR.T0SZ must indicate the required input address range. The size of 
>>> the input
>>> address region is 2^(32-T0SZ) bytes."
>>>
>>> Thus VTCR.SL0 = 1 (maximum value) and VTCR.T0SZ = -8 when the size of 
>>> input
>>> address region is 2^40 bytes.
>>>
>>> Thus, pa_range_info[].t0sz = 1 (VTCR.S) | 8 (VTCR.T0SZ) ie 11000b 
>>> which is 24.
>>>
>>> VTCR.T0SZ, is bits [5:0] for ARM_64.
>>> VTCR.T0SZ is bits [3:0] and S(sign extension), bit[4] for ARM_32.
>>>
>>> Thus, VTCR.T0SZ bits are interpreted accordingly for different 
>>> architecture.
>>> For this, we have used union.
>>>
>>> pa_range_info[] is indexed by ID_AA64MMFR0_EL1.PARange which is 
>>> present in Arm64
>>> only. This is the reason we do not specify the indices for ARM_32. 
>>> Also, we
>>> duplicated the entry "{ 40,      24/*24*/,  1,          1 }" between 
>>> ARM_64 and
>>> ARM_32. This is done to avoid introducing extra #if-defs.
>>>
>>> Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
>>> ---
>>> Changes from -
>>>
>>> v3 - 1. New patch introduced in v4.
>>> 2. Restructure the code such that pa_range_info[] is used both by 
>>> ARM_32 as
>>> well as ARM_64.
>>>
>>> v4 - 1. Removed the hardcoded definitions of P2M_ROOT_ORDER and 
>>> P2M_ROOT_LEVEL.
>>> The reason being root_order will not be always 1 (See the next patch).
>>> 2. Updated the commit message to explain t0sz, sl0 and root_order 
>>> values for
>>> 32-bit IPA on Arm32.
>>> 3. Some sanity fixes.
>>>
>>> v5 - pa_range_info is indexed by system_cpuinfo.mm64.pa_range. ie
>>> when PARange is 0, the PA size is 32, 1 -> 36 and so on. So 
>>> pa_range_info[] has
>>> been updated accordingly.
>>> For ARM_32 pa_range_info[0] = 0 and pa_range_info[1] = 0 as we do not 
>>> support
>>> 32-bit, 36-bit physical address range yet.
>>>
>>> v6 - 1. Added pa_range_info[] entries for ARM_32 without indices. 
>>> Some entry
>>> may be duplicated between ARM_64 and ARM_32.
>>> 2. Recalculate p2m_ipa_bits for ARM_32 from T0SZ (similar to ARM_64).
>>> 3. Introduced an union to reinterpret T0SZ bits between ARM_32 and 
>>> ARM_64.
>>>
>>>   xen/arch/arm/include/asm/p2m.h |  6 ------
>>>   xen/arch/arm/p2m.c             | 37 +++++++++++++++++++++++-----------
>>>   2 files changed, 25 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/include/asm/p2m.h 
>>> b/xen/arch/arm/include/asm/p2m.h
>>> index f67e9ddc72..940495d42b 100644
>>> --- a/xen/arch/arm/include/asm/p2m.h
>>> +++ b/xen/arch/arm/include/asm/p2m.h
>>> @@ -14,16 +14,10 @@
>>>   /* Holds the bit size of IPAs in p2m tables.  */
>>>   extern unsigned int p2m_ipa_bits;
>>> -#ifdef CONFIG_ARM_64
>>>   extern unsigned int p2m_root_order;
>>>   extern unsigned int p2m_root_level;
>>>   #define P2M_ROOT_ORDER    p2m_root_order
>>>   #define P2M_ROOT_LEVEL p2m_root_level
>>> -#else
>>> -/* First level P2M is always 2 consecutive pages */
>>> -#define P2M_ROOT_ORDER    1
>>> -#define P2M_ROOT_LEVEL 1
>>> -#endif
>>>   struct domain;
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 418997843d..755cb86c5b 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -19,9 +19,9 @@
>>>   #define INVALID_VMID 0 /* VMID 0 is reserved */
>>> -#ifdef CONFIG_ARM_64
>>>   unsigned int __read_mostly p2m_root_order;
>>>   unsigned int __read_mostly p2m_root_level;
>>> +#ifdef CONFIG_ARM_64
>>>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>>   /* VMID is by default 8 bit width on AArch64 */
>>>   #define MAX_VMID       max_vmid
>>> @@ -2247,16 +2247,6 @@ void __init setup_virt_paging(void)
>>>       /* Setup Stage 2 address translation */
>>>       register_t val = 
>>> VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
>>> -#ifdef CONFIG_ARM_32
>>> -    if ( p2m_ipa_bits < 40 )
>>> -        panic("P2M: Not able to support %u-bit IPA at the moment\n",
>>> -              p2m_ipa_bits);
>>> -
>>> -    printk("P2M: 40-bit IPA\n");
>>> -    p2m_ipa_bits = 40;
>>> -    val |= VTCR_T0SZ(0x18); /* 40 bit IPA */
>>> -    val |= VTCR_SL0(0x1); /* P2M starts at first level */
>>> -#else /* CONFIG_ARM_64 */
>>>       static const struct {
>>>           unsigned int pabits; /* Physical Address Size */
>>>           unsigned int t0sz;   /* Desired T0SZ, minimum in comment */
>>> @@ -2265,6 +2255,7 @@ void __init setup_virt_paging(void)
>>>       } pa_range_info[] __initconst = {
>>>           /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table 
>>> D5-6 */
>>>           /*      PA size, t0sz(min), root-order, sl0(max) */
>>> +#ifdef CONFIG_ARM_64
>>>           [0] = { 32,      32/*32*/,  0,          1 },
>>>           [1] = { 36,      28/*28*/,  0,          1 },
>>>           [2] = { 40,      24/*24*/,  1,          1 },
>>> @@ -2273,11 +2264,22 @@ void __init setup_virt_paging(void)
>>>           [5] = { 48,      16/*16*/,  0,          2 },
>>>           [6] = { 52,      12/*12*/,  4,          2 },
>>>           [7] = { 0 }  /* Invalid */
>>> +#else
>>> +        { 40,      24/*24*/,  1,          1 },
>>> +        { 0 },  /* Invalid */
>> Do we really need this invalid entry?
> 
> Actually I preserved it to be consistent with ARM_64, but we can drop it 
> for ARM_32.
> 
> The only benefit of keeping this entry is that if p2m_ipa_bits is set to 
> 0 in p2m_restrict_ipa_bits(), then ...
> 
> panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", pa_range);
> 
> This will print pa_range as an index pointing to invalid entry.
> 
> 
> Also, this reminds me that I should change the message as 
> ID_AA64MMFR0_EL1 does not exist on ARM_32.
> 
> So this may look better for both ARM_64 and ARM_32 :-
> 
> panic("Unsupported value for p2m_ipa_bits = 0x%x\n", p2m_ipa_bits);
> 
>>
>>> +#endif
>>>       };
>>>       unsigned int i;
>>>       unsigned int pa_range = 0x10; /* Larger than any possible value */
>>> +    /* Typecast pa_range_info[].t0sz into ARM_32 and ARM_64 bit 
>>> variants. */
>> This would want some explanation in the code.
> 
> /*
> 
> VTCR.T0SZ, is bits [5:0] for ARM_64.
> 
> VTCR.T0SZ is bits [3:0] and S(sign extension), bit[4] for ARM_32.
> 
> Thus, VTCR.T0SZ bits are interpreted accordingly for different 
> architecture.
> 
> */
> 
>>
>>> +    union {
>>> +        signed int t0sz_32:5;
>>> +        unsigned int t0sz_64:6;
>>> +    } t0sz;
>>> +
>>> +#ifdef CONFIG_ARM_64
>>>       /*
>>>        * Restrict "p2m_ipa_bits" if needed. As P2M table is always 
>>> configured
>>>        * with IPA bits == PA bits, compare against "pabits".
>>> @@ -2291,6 +2293,7 @@ void __init setup_virt_paging(void)
>>>        */
>>>       if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
>>>           max_vmid = MAX_VMID_16_BIT;
>>> +#endif
>>>       /* Choose suitable "pa_range" according to the resulted 
>>> "p2m_ipa_bits". */
>>>       for ( i = 0; i < ARRAY_SIZE(pa_range_info); i++ )
>>> @@ -2306,24 +2309,34 @@ void __init setup_virt_paging(void)
>>>       if ( pa_range >= ARRAY_SIZE(pa_range_info) || 
>>> !pa_range_info[pa_range].pabits )
>>>           panic("Unknown encoding of ID_AA64MMFR0_EL1.PARange %x\n", 
>>> pa_range);
>>> +#ifdef CONFIG_ARM_64
>>>       val |= VTCR_PS(pa_range);
>>>       val |= VTCR_TG0_4K;
>>>       /* Set the VS bit only if 16 bit VMID is supported. */
>>>       if ( MAX_VMID == MAX_VMID_16_BIT )
>>>           val |= VTCR_VS;
>>> +#endif
>>> +
>>>       val |= VTCR_SL0(pa_range_info[pa_range].sl0);
>>>       val |= VTCR_T0SZ(pa_range_info[pa_range].t0sz);
>>>       p2m_root_order = pa_range_info[pa_range].root_order;
>>>       p2m_root_level = 2 - pa_range_info[pa_range].sl0;
>>> +
>>> +#ifdef CONFIG_ARM_64
>>> +    t0sz.t0sz_64 = pa_range_info[pa_range].t0sz;
>>>       p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
>> This should be:
>> p2m_ipa_bits = 64 - t0sz.t0sz_64;
> Agreed. I should fix this.
>>
>> Another alternative would be to use anonymous unions+struct/union 
>> inside pa_range_info, e.g:
>>          union {
>>              unsigned int t0sz;
>>              struct {
>>                  signed int t0sz_32:5;
>>              };
>>          };
>> so, if t0sz stores 24, t0sz_32 would automatically store -8.
>> This could simplify the code later on, as you could just do:
>>
>> #ifdef CONFIG_ARM_64
>>      p2m_ipa_bits = 64 - pa_range_info[pa_range].t0sz;
>> #else
>>      p2m_ipa_bits = 32 - pa_range_info[pa_range].t0sz_32;
>> #endif
>>
>> However, I think it would require placing extra braces around 
>> initializers, i.e:
>> [0] = { 32,      {32/*32*/},  0,          1 },
> 
> I am not very sure of this alternative approach because of the extra 
> braces.
> 
> However, if you think it is better this way, then I can change this.
> 
> May be Julien/Stefano should also comment on this alternative approach.

I find the solution suggested by Michal not straightforward. One would 
need to remember to add {} in order to get the proper value.

Furthermore, the value would be written with t0sz_32 but read with t0sz 
in one case. The latter is much bigger and it is not clear what would be 
the values of the other (padding) bits.

Regarding Ayan's solution, it is pretty unclear why you need a union, as 
the values are read from/written to the same fields. I also don't think 
we should modify the 64-bit code (at least the reasoning is not clear). 
So I would consider writing:

{
    struct
    {
        signed int val: 5;
    } t0sz_32 = { pa_range ... };

    p2m_ipa_bits = 32 - t0sz_32.val;
}

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 25 10:36:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 10:36:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539525.840488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q28KP-00019G-1n; Thu, 25 May 2023 10:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539525.840488; Thu, 25 May 2023 10:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q28KO-000199-Tl; Thu, 25 May 2023 10:36:16 +0000
Received: by outflank-mailman (input) for mailman id 539525;
 Thu, 25 May 2023 10:36:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ga2A=BO=citrix.com=prvs=5022cd99a=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q28KO-000193-1J
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 10:36:16 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f7c50c54-fae7-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 12:36:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7c50c54-fae7-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685010974;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=mA6akEa2coskAXl2tw+5C0vLkLFfO7xbPc822gVdzDU=;
  b=hWaYs6QXY++ILrynlyoPcnCFBsjvbFmT/9l/8nN6NR6jSqtPn/tSHv9W
   5xD0yMnCGCy4RWCpSfonMmdH0NJI0pg4Y/5UQsAlJf45DxE/3o4xL1lVC
   dTDstivONcoo+ccjuPmfm/p3GxWFPgogDnPKwZuOSOJB/T4/qCXh7QfSU
   g=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 112826491
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,190,1681185600"; 
   d="scan'208";a="112826491"
Date: Thu, 25 May 2023 11:36:05 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH 02/15] build: rework asm-offsets.* build step to use
 kbuild
Message-ID: <6bba5a7f-baca-4d36-b75e-46644fe94759@perard>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-3-anthony.perard@citrix.com>
 <6cf71fcf-789a-e80d-2d9d-407257fe5e3f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <6cf71fcf-789a-e80d-2d9d-407257fe5e3f@suse.com>

On Wed, May 24, 2023 at 04:09:39PM +0200, Jan Beulich wrote:
> On 23.05.2023 18:37, Anthony PERARD wrote:
> > Use the $(if_changed_dep,) macro to generate "asm-offsets.s" and remove
> > the use of $(move-if-changed). This means that "asm-offsets.s" will be
> > rewritten even when its content doesn't change.
> > 
> > But "asm-offsets.s" is only used to generate "asm-offsets.h". So
> > instead of regenerating "asm-offsets.h" every time "asm-offsets.s"
> > changes, we will use "$(filechk,)" to only update the ".h" when the
> > output changes. Also, with "$(filechk,)", the file does get
> > regenerated when the rule changes in the makefile.
> > 
> > This change also results in a cleaner build log.
> > 
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> > ---
> > 
> > Instead of having a special $(cmd_asm-offsets.s) command, we could
> > probably reuse $(cmd_cc_s_c) from Rules.mk, but that would mean that
> > a hypothetical additional "-flto" flag in CFLAGS would no longer be
> > removed; I'm not sure whether that matters here.
> 
> There's no real code being generated there, and what we're after are
> merely the special .ascii directives. So the presence of -flto should
> have no effect there, and hence it would even look more consistent to
> me if we didn't use different options (and even rules) for .c -> .s
> transformations.
> 
> > But then we could write this:
> > 
> > targets += arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s
> > arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s: CFLAGS-y += -g0
> > arch/$(TARGET_ARCH)/include/asm/asm-offsets.h: arch/$(TARGET_ARCH)/$(TARGET_SUBARCH)/asm-offsets.s FORCE
> > 
> > instead of having to write a rule for asm-offsets.s
> 
> Ftaod, I'd be happy to ack the patch as it is, but I would favor if
> you could do the rework / simplification as outlined.

Thanks, I'll do this rework.
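
For readers not familiar with the kbuild idioms discussed in this
thread, here is a simplified sketch of the two-step generation. The
macro names match the ones being discussed, but their definitions
below are illustrative stand-ins, not the real Rules.mk machinery:

```make
# Illustrative only: simplified stand-ins for the Rules.mk macros.

# Step 1: always (re)generate the .s file. if_changed_dep reruns the
# command whenever the command line or any dependency changes, so the
# .s file's timestamp moves on every rebuild of it.
quiet_cmd_offsets_s = CC      $@
      cmd_offsets_s = $(CC) $(XEN_CFLAGS) -g0 -S -o $@ $<

asm-offsets.s: asm-offsets.c FORCE
	$(call if_changed_dep,offsets_s)

# Step 2: filechk writes the generated header to a temporary file and
# only moves it over asm-offsets.h when the content actually differs,
# so dependents of the .h are not rebuilt needlessly.
define filechk_offsets
	sed -n 's:^[[:space:]]*\.ascii[[:space:]]*"\(.*\)".*:\1:p'
endef

asm-offsets.h: asm-offsets.s FORCE
	$(call filechk,offsets)
```

The net effect is the one described in the commit message: the .s
step is cheap and always runs, while the .h step keeps downstream
dependencies stable unless the offsets really changed.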

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 25 11:31:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 11:31:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539529.840498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29BO-0007St-To; Thu, 25 May 2023 11:31:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539529.840498; Thu, 25 May 2023 11:31:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29BO-0007Sm-Qh; Thu, 25 May 2023 11:31:02 +0000
Received: by outflank-mailman (input) for mailman id 539529;
 Thu, 25 May 2023 11:31:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q29BN-0007SZ-FM
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 11:31:01 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20624.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9e97f313-faef-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 13:30:58 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7824.eurprd04.prod.outlook.com (2603:10a6:102:cd::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Thu, 25 May
 2023 11:30:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 11:30:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e97f313-faef-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <6790d5ae-9742-f5f3-bd8c-62602ee9cb1d@suse.com>
Date: Thu, 25 May 2023 13:30:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix
Content-Language: en-US
To: Stefano Stabellini <stefano.stabellini@amd.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 xenia.ragiadakou@amd.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop>
 <ZGzFnE2w/YqYT35c@Air-de-Roger> <ZGzSnu8m/IqjmyHx@Air-de-Roger>
 <alpine.DEB.2.22.394.2305241646520.44000@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305241646520.44000@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0262.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB7824:EE_
X-MS-Office365-Filtering-Correlation-Id: f2740e9f-ae4c-46da-849a-08db5d138106
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f2740e9f-ae4c-46da-849a-08db5d138106
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 11:30:55.4783
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4VRaCZy6L5qMY6ePOACZxSoSxVvdyQzkOp4GjE5HVd5OfB9l1rUAG2esgWLAldDVmIF4OpLUrzvS2wxqPfkU7A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7824

On 25.05.2023 01:51, Stefano Stabellini wrote:
> xen/irq: fix races between send_cleanup_vector and _clear_irq_vector

This title is, I'm afraid, already misleading. No such race can occur
afaict, as both callers of _clear_irq_vector() acquire the IRQ
descriptor lock first, and irq_complete_move() (the sole caller of
send_cleanup_vector()) is only ever invoked as or by an ->ack()
hook, which in turn is only invoked with, again, the descriptor lock
held.

> It is possible that send_cleanup_vector and _clear_irq_vector are
> running at the same time on different CPUs. In that case we have a race
> as both _clear_irq_vector and irq_move_cleanup_interrupt are trying to
> clear old_vector.
> 
> This patch fixes 3 races:
> 
> 1) While irq_move_cleanup_interrupt is running on multiple CPUs at the
> same time, and _clear_irq_vector is also running, it is possible that
> only some of the per_cpu(vector_irq, cpu)[old_vector] entries are
> valid, but not all. So, turn the ASSERT in _clear_irq_vector into an if.

Note again the locking which is in effect.

> 2) It is possible that _clear_irq_vector is running at the same time as
> release_old_vec, called from irq_move_cleanup_interrupt. At the moment,
> it is possible for _clear_irq_vector to read a valid old_cpu_mask but an
> invalid old_vector (because the latter is being set to invalid by
> release_old_vec). To avoid this problem, in release_old_vec move the
> clearing of old_cpu_mask before the setting of old_vector to invalid.
> This way, we know in _clear_irq_vector that if old_vector is invalid,
> old_cpu_mask is also zero, and we don't enter the loop.

All invocations of release_old_vec() are similarly inside suitably
locked regions.

> 3) It is possible that release_old_vec is running twice at the same time
> for the same old_vector. Change the code in release_old_vec to make it
> OK to call it twice, and remove both ASSERTs. With those gone, it should
> now be possible to call release_old_vec twice in a row for the same
> old_vector.

Same here.

Any such issues would surface more frequently and without any suspend /
resume involved. What is still missing is that connection, and only then
it'll (or really: may) become clear what needs adjusting. If you've seen
the issue exactly once, then I'm afraid there's not much we can do unless
someone can come up with a plausible explanation of something being
broken on any of the involved code paths. More information will need to
be gathered out of the next occurrence of this, whenever that's going to
be. One of the things we will want to know, as mentioned before, is the
value that per_cpu(vector_irq, cpu)[old_vector] has when the assertion
triggers. Iirc Roger did suggest another piece of data you'd want to log.

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -112,16 +112,11 @@ static void release_old_vec(struct irq_desc *desc)
>  {
>      unsigned int vector = desc->arch.old_vector;
>  
> -    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
>      cpumask_clear(desc->arch.old_cpu_mask);
> +    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
>  
> -    if ( !valid_irq_vector(vector) )
> -        ASSERT_UNREACHABLE();
> -    else if ( desc->arch.used_vectors )
> -    {
> -        ASSERT(test_bit(vector, desc->arch.used_vectors));
> +    if ( desc->arch.used_vectors )
>          clear_bit(vector, desc->arch.used_vectors);
> -    }
>  }
>  
>  static void _trace_irq_mask(uint32_t event, int irq, int vector,
> @@ -230,9 +225,11 @@ static void _clear_irq_vector(struct irq_desc *desc)
>  
>          for_each_cpu(cpu, tmp_mask)
>          {
> -            ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
> -            TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
> -            per_cpu(vector_irq, cpu)[old_vector] = ~irq;
> +            if ( per_cpu(vector_irq, cpu)[old_vector] == irq )
> +            {
> +                TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
> +                per_cpu(vector_irq, cpu)[old_vector] = ~irq;
> +            }
>          }

As said before, replacing an ASSERT() with a respective if() cannot
really be done without discussing the "else" case in the description.
Except of course in trivial/obvious cases, but I think we agree here
that we don't have such a case.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 11:31:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 11:31:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539530.840503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29BP-0007WJ-9B; Thu, 25 May 2023 11:31:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539530.840503; Thu, 25 May 2023 11:31:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29BP-0007Vc-48; Thu, 25 May 2023 11:31:03 +0000
Received: by outflank-mailman (input) for mailman id 539530;
 Thu, 25 May 2023 11:31:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ga2A=BO=citrix.com=prvs=5022cd99a=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q29BN-0007SY-IR
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 11:31:01 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9deabf42-faef-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 13:30:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9deabf42-faef-11ed-b230-6b7b168915f2
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110762556
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,191,1681185600"; 
   d="scan'208";a="110762556"
Date: Thu, 25 May 2023 12:30:53 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH 03/15] build, x86: clean build log for boot/ targets
Message-ID: <fdd4b7a2-75a6-4552-a332-8407515ca7a0@perard>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-4-anthony.perard@citrix.com>
 <eaaae1c8-fdef-0746-b744-3a3e04933164@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <eaaae1c8-fdef-0746-b744-3a3e04933164@suse.com>

On Wed, May 24, 2023 at 04:16:54PM +0200, Jan Beulich wrote:
> On 23.05.2023 18:37, Anthony PERARD wrote:
> > We add %.lnk to .PRECIOUS because otherwise make treats them as
> > intermediate targets and removes them.
> 
> What's wrong with them getting removed? Note also that's no different from
> today, so it's an orthogonal change in any event (unless I'm overlooking

Indeed, those targets are being removed today. That doesn't cause any
issue because make knows that they are just intermediates, and it
doesn't have to rebuild them if they are merely missing.

But the macro $(if_changed,) doesn't know about intermediates, and if
the target is missing, then it's going to be rebuilt. So with
$(if_changed,) the %.lnk targets are rebuilt on every incremental
build, which causes make to relink "xen" when there's otherwise nothing
to be done.
(I'm using a command like `XEN_BUILD_DATE=today XEN_BUILD_TIME=now make`
to keep "compile.h" from being regenerated every time.)

So, the change isn't orthogonal, but needs a better explanation in the
commit message.

> something). Plus if such behavior was intended, shouldn't $(targets) be
> made a prereq of .PRECIOUS in generic makefile logic?

I think I need to reevaluate what to do here. Maybe .PRECIOUS isn't the
right one to use. But yes, we probably want something generic to tell
make to never delete any $(targets) when they are intermediate.

Linux uses .SECONDARY or .NOTINTERMEDIATE, and .SECONDARY might be
better than .PRECIOUS. .PRECIOUS also prevents make from deleting a
target when make is interrupted or killed, which might not be desired.
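
As a quick reference for the alternatives being weighed (GNU make
semantics; the %.lnk pattern is just the example from this thread):

```make
# .PRECIOUS: keeps matching intermediates, but ALSO keeps partially
# written files when make is interrupted or killed -- the undesired
# side effect mentioned above.
.PRECIOUS: %.lnk

# .SECONDARY with no prerequisites: every target is treated as
# secondary, i.e. never automatically deleted as an intermediate.
.SECONDARY:

# .NOTINTERMEDIATE (GNU make 4.4+) with no prerequisites: no target is
# considered intermediate at all.
.NOTINTERMEDIATE:
```

Unlike .PRECIOUS, the latter two only affect intermediate-file
deletion, not interrupt handling.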

> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> > ---
> >  xen/arch/x86/boot/Makefile | 16 ++++++++++++----
> >  1 file changed, 12 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/arch/x86/boot/Makefile b/xen/arch/x86/boot/Makefile
> > index 03d8ce3a9e..2693b938bd 100644
> > --- a/xen/arch/x86/boot/Makefile
> > +++ b/xen/arch/x86/boot/Makefile
> > @@ -5,6 +5,8 @@ head-bin-objs := cmdline.o reloc.o
> >  nocov-y   += $(head-bin-objs)
> >  noubsan-y += $(head-bin-objs)
> >  targets   += $(head-bin-objs)
> > +targets   += $(head-bin-objs:.o=.bin)
> > +targets   += $(head-bin-objs:.o=.lnk)
> 
> Leaving aside the question of whether .lnk really should be part
> of $(targets), don't these two lines also ...
> 
> > @@ -26,10 +28,16 @@ $(head-bin-objs): XEN_CFLAGS := $(CFLAGS_x86_32) -fpic
> >  LDFLAGS_DIRECT-$(call ld-option,--warn-rwx-segments) := --no-warn-rwx-segments
> >  LDFLAGS_DIRECT += $(LDFLAGS_DIRECT-y)
> >  
> > -%.bin: %.lnk
> > -	$(OBJCOPY) -j .text -O binary $< $@
> > +%.bin: OBJCOPYFLAGS := -j .text -O binary
> > +%.bin: %.lnk FORCE
> > +	$(call if_changed,objcopy)
> >  
> > -%.lnk: %.o $(src)/build32.lds
> > -	$(LD) $(subst x86_64,i386,$(LDFLAGS_DIRECT)) -N -T $(filter %.lds,$^) -o $@ $<
> > +quiet_cmd_ld_lnk_o = LD      $@
> > +cmd_ld_lnk_o = $(LD) $(subst x86_64,i386,$(LDFLAGS_DIRECT)) -N -T $(filter %.lds,$^) -o $@ $<
> > +
> > +%.lnk: %.o $(src)/build32.lds FORCE
> > +	$(call if_changed,ld_lnk_o)
> >  
> >  clean-files := *.lnk *.bin
> 
> ... eliminate the need for this?

Yes, but that doesn't seem to work here.

I think there's a "targets :=" missing in "Makefile.clean". So, a new
patch to fix that, and then I can eliminate the "clean-files".

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 25 11:57:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 11:57:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539539.840517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29aZ-0002Lq-Ev; Thu, 25 May 2023 11:57:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539539.840517; Thu, 25 May 2023 11:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29aZ-0002Lj-CB; Thu, 25 May 2023 11:57:03 +0000
Received: by outflank-mailman (input) for mailman id 539539;
 Thu, 25 May 2023 11:57:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q29aY-0002Lb-3S
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 11:57:02 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0623.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::623])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3fc65663-faf3-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 13:56:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7256.eurprd04.prod.outlook.com (2603:10a6:10:1a3::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17; Thu, 25 May
 2023 11:56:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 11:56:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3fc65663-faf3-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <9528ea84-9f10-d10a-b2cf-798434d48a59@suse.com>
Date: Thu, 25 May 2023 13:56:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 07/15] build: move XEN_HAS_BUILD_ID out of Config.mk
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>, xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-8-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-8-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0027.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:1c::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7256:EE_
X-MS-Office365-Filtering-Correlation-Id: 77b0e497-f8b2-4666-9de7-08db5d1722be
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 77b0e497-f8b2-4666-9de7-08db5d1722be
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 11:56:55.3149
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7256

On 23.05.2023 18:38, Anthony PERARD wrote:
> Whether or not the linker can do build id is only used by the
> hypervisor build, so move that there.
> 
> Rename $(build_id_linker) to $(XEN_LDFLAGS_BUILD_ID) as this is a
> better name to be exported as to use the "XEN_*" namespace.
> 
> Also update XEN_TREEWIDE_CFLAGS so flags can be used for
> arch/x86/boot/ CFLAGS_x86_32
> 
> Beside a reordering of the command line where CFLAGS is used, there
> shouldn't be any other changes.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one nit:

> --- a/xen/scripts/Kbuild.include
> +++ b/xen/scripts/Kbuild.include
> @@ -91,6 +91,9 @@ cc-ifversion = $(shell [ $(CONFIG_GCC_VERSION)0 $(1) $(2)000 ] && echo $(3) || e
>  
>  clang-ifversion = $(shell [ $(CONFIG_CLANG_VERSION)0 $(1) $(2)000 ] && echo $(3) || echo $(4))
>  
> +ld-ver-build-id = $(shell $(1) --build-id 2>&1 | \
> +					grep -q build-id && echo n || echo y)

I realize you only move this line, but I think this is a good occasion
to improve the indentation:

ld-ver-build-id = $(shell $(1) --build-id 2>&1 | \
                          grep -q build-id && echo n || echo y)

I'll be happy to adjust while committing. Which raises the question: Is
there any dependency here on earlier patches in the series? It doesn't
look so to me, but I may easily be overlooking something. (Of course
first further arch maintainer acks would be needed.)
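For reference, the check this macro performs can be sketched in plain
shell (a hedged illustration of the probed behaviour, not Xen's exact
macro): a linker lacking --build-id support echoes the unknown option
back, so the probe answers "n"; otherwise it answers "y".

```shell
# Sketch of the ld-ver-build-id probe (illustrative stand-in):
# run the given linker with --build-id and inspect its output.
# A linker without support complains about "build-id", so grep
# succeeds and we report "n" (no support); silence means "y".
probe_build_id() {
    "$@" --build-id 2>&1 | grep -q 'build-id' && echo n || echo y
}

# Hypothetical stand-ins for an old and a new linker:
old_ld() { echo "old_ld: unrecognised option '--build-id'" >&2; return 1; }
new_ld() { return 0; }

probe_build_id old_ld   # prints "n"
probe_build_id new_ld   # prints "y"
```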

> --- a/xen/test/livepatch/Makefile
> +++ b/xen/test/livepatch/Makefile
> @@ -37,7 +37,7 @@ $(obj)/modinfo.o:
>  
>  #
>  # This target is only accessible if CONFIG_LIVEPATCH is defined, which
> -# depends on $(build_id_linker) being available. Hence we do not
> +# depends on $(XEN_LDFLAGS_BUILD_ID) being available. Hence we do not
>  # need any checks.

As an aside, I'm a little confused by "is only accessible" here. I don't
see how CONFIG_LIVEPATCH controls reachability. At the very least the
parent dir's Makefile doesn't use CONFIG_LIVEPATCH at all.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 12:04:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 12:04:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539545.840527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29hQ-0003xc-B7; Thu, 25 May 2023 12:04:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539545.840527; Thu, 25 May 2023 12:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29hQ-0003xV-8S; Thu, 25 May 2023 12:04:08 +0000
Received: by outflank-mailman (input) for mailman id 539545;
 Thu, 25 May 2023 12:04:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q29hP-0003xO-7m
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 12:04:07 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0629.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3ea811a8-faf4-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 14:04:05 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8520.eurprd04.prod.outlook.com (2603:10a6:10:2d3::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 12:04:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 12:04:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ea811a8-faf4-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ez36WMowM3z4pb39px9vodN1uefUCLSbZd+1KQycbvyQAo3zPc3LEvKPRxRoe+Y/yH2sKHK5R+O4I4n4zMrwdLJCU5Lik1tfg+355+NErdAZAG82C4RHOp2hOvAql3GdMoVM7sUexY8VYCCJibP5Rbrc41LhnhT79IxXQBlYcR2XaY042P8FoxDFfFDwYLV6q6Z2b5q8uawvjRj9Y8dyplWC33AFzxuJeJJNcRawRAbEgZBIROiEgQqynX0a9cimQ8Y88j6rd0XLrVROBbg8L7jn7XmkngTlA2r61dFYNEIwxsSW3f7FIYi0QTlU+RmQZWfr5ZRBGD113I9oaNrdmw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Dwp7XA7BkKPFTSzfgcedWNSXYZUcivQb56k+3yfThZA=;
 b=jFOyQzmuOxDZ/E9w7FVQdfb2tvlOthmHRJn20KTk88U7l+VLZ1zYsl/FBmVqBW1azSiuU47YKpelHrZ4V8GSSbkPFonQeWVWwCWvxhJpBDQ5YpTtC6oNQh/PDhaB48hCsI+Z0UOF/45h2ZZC+9ukGAEGcCuULShvFzFvzsB+pb2RKF3a6zVkWUo06LELYye6NXuCv9tmGTqoYfXFLV9mwDN1k+iKivmAQx9NLayQYTA8ilRrQBH51+e56wKVWIBz8VbuaRbAcd8HfwtkT5Tsqt8eJ8mR31T66SE9ZH+dAzj5gsYdMiUqBibKyaS8svarWRbWWSjeZiKCJbf0nX1UOw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Dwp7XA7BkKPFTSzfgcedWNSXYZUcivQb56k+3yfThZA=;
 b=iFh57fqHJX5pc9keMEsYzoKa8qCbwPflhqTObPXrMlXoRns8wqeS6G5C357npMmpLq7sTgix1uo+cBVzKiwNPuqOKmwUmA5VJjl7atZGE3RD9koYI7ftkoFYEqqoelmVi9q/FIKiyK5da6eg2YwqBu6JFF/VhFTTIhCuowjGXh0Mt6wAVa2E8OFdQbsR9AeHJgu0JOuLwolAYgtaGoD4RMnhMvZ++Ql8pz3pAXK2S9innT7c7ToBTQJhg4X+4IyCJ3Es3SjlQrAMnE6Ac/86jY8UxYYNNqppw0MNmcpQj1NhzkU7pfHhLRQK1Yz7K1wjd1E+2zJtOupZaDeFejmFrg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <b04a433a-0606-e473-cb1e-41b45bd1079f@suse.com>
Date: Thu, 25 May 2023 14:04:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 08/15] build: use $(filechk, ) for all
 compat/.xlat/%.lst
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-9-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-9-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0142.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:95::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8520:EE_
X-MS-Office365-Filtering-Correlation-Id: 1b6f7874-f670-45a7-a22d-08db5d182138
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1b6f7874-f670-45a7-a22d-08db5d182138
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 12:04:02.2635
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8520

On 23.05.2023 18:38, Anthony PERARD wrote:
> Make use of filechk mean that we don't have to use

I think you mean "Making use of filechk means ...", or else it reads as
if you're changing how filechk behaves. (I'd again be happy to adjust
while committing, provided you agree; here it looks pretty clear that
there is no dependency on earlier patches, and there's also no need to
wait for further acks.)

> $(move-if-changed,). It also mean that will have sometime "UPD .." in
> the build output when the target changed, rather than having "GEN ..."
> all the time when "xlat.lst" happen to have a more recent modification
> timestamp.
> 
> While there, replace `grep -v` by `sed '//d'` to avoid an extra
> fork and pipe when building.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
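As a side note on the grep-to-sed change: the two filters below produce
identical output; the win is that a deletion folded into an
already-needed sed invocation saves one fork and one pipe (illustrative
commands, not the Makefile's exact rule).

```shell
# Drop lines matching a pattern - both forms are equivalent:
printf 'keep1\ndrop me\nkeep2\n' | grep -v 'drop'
printf 'keep1\ndrop me\nkeep2\n' | sed '/drop/d'
# each prints:
#   keep1
#   keep2
```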

Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 12:12:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 12:12:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539549.840538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29pf-0005UN-5u; Thu, 25 May 2023 12:12:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539549.840538; Thu, 25 May 2023 12:12:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q29pf-0005UG-2e; Thu, 25 May 2023 12:12:39 +0000
Received: by outflank-mailman (input) for mailman id 539549;
 Thu, 25 May 2023 12:12:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q29pd-0005UA-4S
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 12:12:37 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2042.outbound.protection.outlook.com [40.107.13.42])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6f0514e8-faf5-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 14:12:35 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9577.eurprd04.prod.outlook.com (2603:10a6:10:304::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17; Thu, 25 May
 2023 12:12:06 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 12:12:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f0514e8-faf5-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MpTZywnEYM4zbt8AF+0Y0tkRrcrKUH3RCakldnFud1TSBfgkZAub+J6L1+BEyom1GvSGdzJpo4zwR1ukiGQ15NpqHIgyz/W3KHns0nTpZSPSuLQFKZDUqqSm4Bb/tF08g97/9Jbw4RZC9lZ7E3LEKkS6uDYTrhC9olOH4II7Ov66oHgAXtKtGeix5d/RdbPUu45ITsjkNUxvbdH2HX/yIDS6rDR7rDVJklZWfzjH8Jbtd2knbyFbKsj17w7z1IWkqSELVgb7Z7epB5PGiUlE2TnKBZt+c3fH3A+FMuP4qRVYQYdzmPD2rxGxjcPvIPD/TJnutAILywfXK8BbcwZNOw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3RcKKWEFacIzy8yDKk3DXqhVvBvwGfBilpK60x0keR0=;
 b=ZYQZMT0/pQuldyzIoqkszwNo1hXjK5NCVMakjYRhWkgj/4C3G20bkciNXDxeBVmEgz0fQsXdl9SnyMQCO5P3k74TosyUEFhl2EyWe/+DNGfEd0VCPoWDGNqCD80UqBkbGcTqUdKo/EkLMsaDaxBSnOs/Mx33qEvBwb9au6mf40EQlj6NgMuMOlvIO3BwZVBgKMH+lnUfmUsx0YhKvAsTkH+TYhGOJejPPsW3A8ZQHTYGezup1aOny0NkZY4ZIP7ONP7ioQzGyOSqPXyhJcfzV9sBid4rNZJJSiYdEsLymREZ42FU5wbHDfe4x9ooC88h0OBJisAjpAT8kgj46hqOBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3RcKKWEFacIzy8yDKk3DXqhVvBvwGfBilpK60x0keR0=;
 b=ng7KvKFZPU2jcTa6yLh3kRP4wWWtz1iq1NgR3NH9rUBxlaJ7IRkLkQQit/PyxIciXKlwLG729u6rjMkWp0MghOQtK5vm7qmA47p8rZUgJR0RUTsRq5K0LDf5USfg+BJ+MflGqqYywm17PbrwjqAEhUet0lPPar+T+5b41WkaORZpLd3z3p0GaIfKSqy8REEqLfhlS3SQCTPyRP8yMt4DY3nFquQmINNWKsmZK1j+P3YSvQxTQaaF3I6ls2ZHtk7u7jDXqm/QJEW5npyalJ1JDVG0shuVkAcQ2DEpn/kZdk2ker5OS3dfsWq8P7CAyfYxF10TCWiAUyrg90RyDBMRpA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <3eb3f5f2-e766-ee8b-e0c9-ef769dc0ca74@suse.com>
Date: Thu, 25 May 2023 14:12:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 09/15] build: hide commands run for kconfig
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Doug Goldstein <cardoe@cardoe.com>, xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-10-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-10-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0131.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::14) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9577:EE_
X-MS-Office365-Filtering-Correlation-Id: a15cf555-43c2-4552-d766-08db5d194198
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a15cf555-43c2-4552-d766-08db5d194198
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 12:12:06.1065
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9577

On 23.05.2023 18:38, Anthony PERARD wrote:
> but still show a log entry for syncconfig. We have to use kecho
> instead of $(cmd,) to avoid issue with prompt from kconfig.

Reading this description I was looking for uses of $(cmd ...) that you
replace. I think this wants to be worded differently, e.g. "We have to
use kecho, not $(cmd,), to ..."

> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -382,6 +382,7 @@ $(KCONFIG_CONFIG): tools_fixdep
>  # This exploits the 'multi-target pattern rule' trick.
>  # The syncconfig should be executed only once to make all the targets.
>  include/config/%.conf include/config/%.conf.cmd: $(KCONFIG_CONFIG)
> +	$(Q)$(kecho) "  SYNC    $@"
>  	$(Q)$(MAKE) $(build)=tools/kconfig syncconfig

The latter of the Linux commits you reference also extends the comment,
to keep people from trying to switch to using $(cmd ...). I think we
should follow suit.
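For context, kecho's effect can be sketched as follows (a hedged
stand-in: kbuild's real kecho is a make macro keyed off the quiet/silent
state derived from $(MAKEFLAGS); this just models the idea in shell,
with "silent" as a hypothetical flag).

```shell
# Model of the kecho idea: print the message only when the build
# is not silent ("silent" stands in for kbuild's MAKEFLAGS check).
silent=no
kecho() {
    if [ "$silent" = no ]; then echo "$@"; fi
}

kecho "  SYNC    include/config/auto.conf"   # shown in a normal build
silent=yes
kecho "  SYNC    include/config/auto.conf"   # suppressed under make -s
```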

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 12:17:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 12:17:41 +0000
Message-ID: <e7586363-5762-5154-efd2-543e784bc3b0@suse.com>
Date: Thu, 25 May 2023 14:17:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 10/15] build: rename $(AFLAGS) to $(XEN_AFLAGS)
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Luca Fancellu <Luca.Fancellu@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-11-anthony.perard@citrix.com>
 <17CC7699-2B73-499B-946B-E423F7C9620E@arm.com>
 <BA29A878-86E4-4B5F-A344-C920C3D82076@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BA29A878-86E4-4B5F-A344-C920C3D82076@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.05.2023 10:47, Luca Fancellu wrote:
> 
> 
>> On 24 May 2023, at 09:29, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>
>>
>>
>>> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>>
>>> We don't want the AFLAGS from the environment; they are usually meant
>>> to build user space applications, not the hypervisor.
>>>
>>> Config.mk doesn't provied any $(AFLAGS) so we can start a fresh
> 
> NIT: there is a typo s/provied/provide/

And preferably with that adjustment ...

>>> $(XEN_AFLAGS).
>>>
>>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>>> ---
>>
>> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
>> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
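The environment seeding that the quoted commit message wants to avoid is easy to demonstrate with a throwaway makefile (a minimal sketch assuming GNU make is available; /tmp/aflags_demo.mk is a made-up scratch file):

```shell
# Variables set in the environment seed same-named make variables,
# unless the makefile assigns them itself -- which is exactly what a
# fresh "XEN_AFLAGS =" assignment achieves.
printf 'XEN_AFLAGS =\nall:\n\t@echo "AFLAGS=[$(AFLAGS)] XEN_AFLAGS=[$(XEN_AFLAGS)]"\n' > /tmp/aflags_demo.mk
AFLAGS=-g make -s -f /tmp/aflags_demo.mk
# prints: AFLAGS=[-g] XEN_AFLAGS=[]
```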

Acked-by: Jan Beulich <jbeulich@suse.com>

There is some interaction with the asm-offsets change, but I think the
two patches are still functionally and contextually independent (and
hence the one here could go in ahead of the other one earlier in the
series, with said adjustment made while committing). Please confirm.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 12:29:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 12:29:29 +0000
Message-ID: <85728cb9-f538-e48d-2b57-3abdc793dbc8@suse.com>
Date: Thu, 25 May 2023 14:28:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 11/15] build: rename CFLAGS to XEN_CFLAGS in
 xen/Makefile
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-12-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-12-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.05.2023 18:38, Anthony PERARD wrote:
> This is a preparatory patch. A future patch will not even use
> $(CFLAGS) to seed $(XEN_CFLAGS).
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

I have a question though, albeit not directly related to this patch:

> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -259,6 +259,7 @@ export KBUILD_DEFCONFIG := $(ARCH)_defconfig
>  export XEN_TREEWIDE_CFLAGS := $(CFLAGS)
>  
>  XEN_AFLAGS =
> +XEN_CFLAGS = $(CFLAGS)
>  
>  # CLANG_FLAGS needs to be calculated before calling Kconfig
>  ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
> @@ -284,7 +285,7 @@ CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
>  endif
>  
>  CLANG_FLAGS += -Werror=unknown-warning-option
> -CFLAGS += $(CLANG_FLAGS)
> +XEN_CFLAGS += $(CLANG_FLAGS)
>  export CLANG_FLAGS
>  endif
>  
> @@ -293,7 +294,7 @@ ifeq ($(call ld-ver-build-id,$(LD)),n)
>  XEN_LDFLAGS_BUILD_ID :=
>  XEN_HAS_BUILD_ID := n
>  else
> -CFLAGS += -DBUILD_ID
> +XEN_CFLAGS += -DBUILD_ID
>  XEN_TREEWIDE_CFLAGS += -DBUILD_ID

Is this redundancy necessary? IOW can't XEN_CFLAGS, at an appropriate
place, simply have $(XEN_TREEWIDE_CFLAGS) appended?
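Purely as an illustration of that question, the alternative might look something like this (an untested sketch, not a proposed patch):

```make
# Hypothetical restructuring: set the shared flag only on
# XEN_TREEWIDE_CFLAGS and fold those into XEN_CFLAGS once, instead of
# adding -DBUILD_ID to both variables separately.
XEN_TREEWIDE_CFLAGS += -DBUILD_ID
XEN_CFLAGS += $(XEN_TREEWIDE_CFLAGS)
```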

Apart from this the same process question again: Is this independent
of earlier patches (except the immediately preceding one), and could
hence - provided arch maintainer acks arrive - go in ahead of time?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 12:41:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 12:41:46 +0000
Message-ID: <3586c7b8-38a7-5eee-4a19-8c0ba12d4fa8@suse.com>
Date: Thu, 25 May 2023 14:41:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 12/15] build: avoid Config.mk's CFLAGS
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-13-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-13-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.05.2023 18:38, Anthony PERARD wrote:
> The variable $(CFLAGS) is too often set in the environment,
> especially when building a package for a distribution. Often, those
> CFLAGS are intended to be used to build user-space binaries, not a
> kernel. This means packagers need to take extra steps to build Xen by
> overriding the CFLAGS provided by the package build environment.
> 
> With this patch, we avoid using the variable $(CFLAGS). Also, the
> hypervisor's build system has complete control over which CFLAGS are
> used.

"..., apart from $(EXTRA_CFLAGS_XEN_CORE)", as you say ...

> No change intended to the XEN_CFLAGS used, besides some flags which
> may appear in a different order on the command line.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
> 
> Notes:
>     There's still $(EXTRA_CFLAGS_XEN_CORE), which allows someone
>     building Xen to add more CFLAGS to the hypervisor build.

... only here.

> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -259,7 +259,16 @@ export KBUILD_DEFCONFIG := $(ARCH)_defconfig
>  export XEN_TREEWIDE_CFLAGS := $(CFLAGS)
>  
>  XEN_AFLAGS =
> -XEN_CFLAGS = $(CFLAGS)
> +XEN_CFLAGS =
> +ifeq ($(XEN_OS),SunOS)
> +    XEN_CFLAGS +=  -Wa,--divide -D_POSIX_C_SOURCE=200112L -D__EXTENSIONS__

So this (and the arch.mk additions) duplicate stuff we have in config/*.mk.
Such duplication isn't really nice. Setting AS, CC, etc. also happens there,
and hence I expect you're going to duplicate that as well in a later patch.
Can't we massage (if necessary) the config/*.mk relevant to the hypervisor
build, so they can be included from xen/Makefile? That way all such basic
settings could remain in a central place, which has been well known for
many years.
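For illustration, the environment-CFLAGS leak the patch is addressing can be reproduced with a throwaway Makefile. Everything below is made up for the demo (it is not Xen's actual build logic): make picks up CFLAGS from the environment unless the Makefile assigns the variable explicitly.

```shell
# Demo: an environment CFLAGS leaks into make unless the Makefile
# overrides it. Uses .RECIPEPREFIX (GNU make >= 3.82) to avoid tabs.
workdir=$(mktemp -d)
cat > "$workdir/Makefile" <<'EOF'
.RECIPEPREFIX := >

# No assignment here, so $(CFLAGS) comes from the environment:
inherited:
>@echo "CFLAGS=$(CFLAGS)"

# Explicit assignment shields the build from the environment:
XEN_CFLAGS =
XEN_CFLAGS += -O2 -fno-pie
shielded:
>@echo "XEN_CFLAGS=$(XEN_CFLAGS)"
EOF
out=$(CFLAGS='-O3 -fstack-protector' \
      make -s --no-print-directory -C "$workdir" inherited shielded)
echo "$out"
rm -rf "$workdir"
```

The distro's user-space CFLAGS shows up verbatim in the first rule, while the explicitly assigned variable is unaffected; this is the behaviour the patch relies on when it stops deriving XEN_CFLAGS from $(CFLAGS).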

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 12:45:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 12:45:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539567.840578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2ALY-0002KL-Ff; Thu, 25 May 2023 12:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539567.840578; Thu, 25 May 2023 12:45:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2ALY-0002KE-CR; Thu, 25 May 2023 12:45:36 +0000
Received: by outflank-mailman (input) for mailman id 539567;
 Thu, 25 May 2023 12:45:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iF6a=BO=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q2ALW-0002K6-TM
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 12:45:35 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 09448bea-fafa-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 14:45:32 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id EA2A6218E7;
 Thu, 25 May 2023 12:45:31 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id C08D1134B2;
 Thu, 25 May 2023 12:45:31 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id UrYVLWtYb2SoOwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 25 May 2023 12:45:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09448bea-fafa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685018731; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8/qb4mvGXaF5ziF701Ys6QNe9gutXJXSdzsKEpVabFg=;
	b=OEYiHpuV+bO+5fAq5M8BGxfdSJAxk7EpApMsCntWwUygGyAx0Au79r8Tc0C7sbzf3E2WHP
	JkqO883TPvxwHzVSLyxk7pUxRffT6LZEJ8iJrch6FuweYjXthMWIG+MrgJnlxJ1tNgYinX
	cM5CtVKoXjmR1Ro0mTmj/UNo49YDq9I=
Message-ID: <3e440134-6ab8-b22d-5081-b3926e4742a2@suse.com>
Date: Thu, 25 May 2023 14:45:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Content-Language: en-US
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-6-jgross@suse.com>
 <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
 <9745474f-db84-c8f3-3662-95728d4d5bd3@suse.com>
 <91380959-b63d-7d4f-0920-7d87dc0fc19d@xen.org>
 <de77cc78-e07a-934f-e241-15fe851706df@suse.com>
 <2aaf1cf4-baca-0974-ac0c-80328037ce52@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v5 05/14] tools/xenstore: use accounting buffering for
 node accounting
In-Reply-To: <2aaf1cf4-baca-0974-ac0c-80328037ce52@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------jEsxAE4qVjp09cwkZQ0b88sX"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------jEsxAE4qVjp09cwkZQ0b88sX
Content-Type: multipart/mixed; boundary="------------NALbn84w27EIjcCQSSFKELUC";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <3e440134-6ab8-b22d-5081-b3926e4742a2@suse.com>
Subject: Re: [PATCH v5 05/14] tools/xenstore: use accounting buffering for
 node accounting
References: <20230508114754.31514-1-jgross@suse.com>
 <20230508114754.31514-6-jgross@suse.com>
 <21847835-4f7e-a09a-458e-e68dc59d4268@xen.org>
 <9745474f-db84-c8f3-3662-95728d4d5bd3@suse.com>
 <91380959-b63d-7d4f-0920-7d87dc0fc19d@xen.org>
 <de77cc78-e07a-934f-e241-15fe851706df@suse.com>
 <2aaf1cf4-baca-0974-ac0c-80328037ce52@xen.org>
In-Reply-To: <2aaf1cf4-baca-0974-ac0c-80328037ce52@xen.org>

--------------NALbn84w27EIjcCQSSFKELUC
Content-Type: multipart/mixed; boundary="------------RmjQVFQea5VWowSoqo2mMzxR"

--------------RmjQVFQea5VWowSoqo2mMzxR
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 11.05.23 14:07, Julien Grall wrote:
> Hi Juergen,
> 
> On 11/05/2023 06:25, Juergen Gross wrote:
>> On 10.05.23 23:31, Julien Grall wrote:
>>> On 10/05/2023 13:54, Juergen Gross wrote:
>>>> On 09.05.23 20:46, Julien Grall wrote:
>>>>> Hi Juergen,
>>>>>
>>>>> On 08/05/2023 12:47, Juergen Gross wrote:
>>>>>> Add the node accounting to the accounting information buffering in
>>>>>> order to avoid having to undo it in case of failure.
>>>>>>
>>>>>> This requires to call domain_nbentry_dec() before any changes to the
>>>>>> data base, as it can return an error now.
>>>>>>
>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>> ---
>>>>>> V5:
>>>>>> - add error handling after domain_nbentry_dec() calls (Julien Grall)
>>>>>> ---
>>>>>>   tools/xenstore/xenstored_core.c   | 29 +++++++----------------------
>>>>>>   tools/xenstore/xenstored_domain.h |  4 ++--
>>>>>>   2 files changed, 9 insertions(+), 24 deletions(-)
>>>>>>
>>>>>> diff --git a/tools/xenstore/xenstored_core.c 
>>>>>> b/tools/xenstore/xenstored_core.c
>>>>>> index 8392bdec9b..22da434e2a 100644
>>>>>> --- a/tools/xenstore/xenstored_core.c
>>>>>> +++ b/tools/xenstore/xenstored_core.c
>>>>>> @@ -1454,7 +1454,6 @@ static void destroy_node_rm(struct connection *conn, 
>>>>>> struct node *node)
>>>>>>   static int destroy_node(struct connection *conn, struct node *node)
>>>>>>   {
>>>>>>       destroy_node_rm(conn, node);
>>>>>> -    domain_nbentry_dec(conn, get_node_owner(node));
>>>>>>       /*
>>>>>>        * It is not possible to easily revert the changes in a transaction.
>>>>>> @@ -1645,6 +1644,9 @@ static int delnode_sub(const void *ctx, struct 
>>>>>> connection *conn,
>>>>>>       if (ret > 0)
>>>>>>           return WALK_TREE_SUCCESS_STOP;
>>>>>> +    if (domain_nbentry_dec(conn, get_node_owner(node)))
>>>>>> +        return WALK_TREE_ERROR_STOP;
>>>>>
>>>>> I think there is a potential issue with the buffering here. In case of 
>>>>> failure, the node could have been removed, but the quota would not be 
>>>>> properly accounted.
>>>>
>>>> You mean the case where another node has been deleted and due to accounting
>>>> buffering the corrected accounting data wouldn't be committed?
>>>>
>>>> This is no problem, as an error returned by delnode_sub() will result in
>>>> corrupt() being called, which in turn will correct the accounting data.
>>>
>>> To me corrupt() is a big hammer and it feels wrong to call it when I think we 
>>> have easier/faster way to deal with the issue. Could we instead call 
>>> acc_commit() before returning?
>>
>> You are aware that this is a very problematic situation we are in?
> 
> It is not very clear from the code. And that's why comments are always useful to 
> clarify why corrupt() is the right call.

I agree. I'll add a comment.

> 
>>
>> We couldn't allocate a small amount of memory (around 64 bytes)! 
> 
> So long this is the only reason then...
> 
> Xenstored
>> will probably die within milliseconds. Using the big hammer in such a
>> situation is fine IMO. It will maybe result in solving the problem by
>> freeing of memory (quite unlikely, though), but it won't leave xenstored
>> in a worse state than with your suggestion.
> 
> ... this might be OK. But in the past, we had a place where corrupt() could be 
> reliably triggered by a guest. If you think that's not possible, then it should 
> be properly documented.

Okay, will do so.

> 
>>
>> And calling acc_commit() here wouldn't really help, as accounting data
>> couldn't be recorded, so there are missing updates anyway due to the failed
>> call of domain_nbentry_dec().
> 
> We are removing the node after the accounting is updated. So if the accounting 
> fail, then it should still be correct for anything that was removed before.

Oh, right.

> 
>>
>>>>> Also, I think a comment would be warrant to explain why we are returning 
>>>>> WALK_TREE_ERROR_STOP here when...
>>>>>
>>>>>> +
>>>>>>       /* In case of error stop the walk. */
>>>>>>       if (!ret && do_tdb_delete(conn, &key, &node->acc))
>>>>>>           return WALK_TREE_SUCCESS_STOP;
>>>>>
>>>>> ... this is not the case when do_tdb_delete() fails for some reasons.
>>>>
>>>> The main idea was that the remove is working from the leafs towards the root.
>>>> In case one entry can't be removed, we should just stop.
>>>>
>>>> OTOH returning WALK_TREE_ERROR_STOP might be cleaner, as this would make sure
>>>> that accounting data is repaired afterwards. Even if do_tdb_delete() can't
>>>> really fail in normal configurations, as the only failure reasons are:
>>>>
>>>> - the node isn't found (quite unlikely, as we just read it before entering
>>>>    delnode_sub()), or
>>>> - writing the updated data base failed (in normal configurations it is in
>>>>    already allocated memory, so no way to fail that)
>>>>
>>>> I think I'll switch to return WALK_TREE_ERROR_STOP here.
>>>
>>> See above for a different proposal.
>>
>> Without deleting the node in the data base this would be another accounting
>> data inconsistency, so calling corrupt() is the correct cleanup measure.
> 
> Hmmm... I read this as this is already a bug rather than one introduced by this 
> patch. IIUC, then this should be done in a new commit.

No, previously domain_nbentry_dec() couldn't fail, so this is a new situation.
Or did I misunderstand what you mean?


Juergen
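The ordering under discussion (decrement the buffered quota before touching the data base, and stop the walk on failure so nothing is half-done) can be sketched as a toy model. Everything below is illustrative and heavily simplified; the names are borrowed from the thread but this is not the actual xenstored code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: the per-domain entry count is buffered, its decrement can
 * now fail, so it must be attempted BEFORE the node is removed. */

enum walk_rc { WALK_TREE_OK, WALK_TREE_ERROR_STOP };

static int buffered_entries = 2; /* stand-in for the accounting buffer */
static int nodes_in_db = 2;      /* stand-in for the data base */
static bool alloc_fails;         /* simulate a failed small allocation */

/* Buffered quota decrement; may fail when recording the update needs
 * memory (the ~64-byte allocation mentioned in the thread). */
static int domain_nbentry_dec(void)
{
    if (alloc_fails)
        return -1;
    buffered_entries--;
    return 0;
}

static enum walk_rc delnode_sub(void)
{
    /* Account first: a failure here leaves the data base untouched,
     * so store and accounting still agree for every node removed so
     * far, as Julien points out above. */
    if (domain_nbentry_dec())
        return WALK_TREE_ERROR_STOP;
    nodes_in_db--; /* the actual do_tdb_delete() step */
    return WALK_TREE_OK;
}
```

With this ordering, an accounting failure aborts the walk while store and buffered accounting still match; the sole remaining inconsistency is the uncommitted buffer itself, which is what the corrupt()-versus-acc_commit() debate above is about.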
--------------RmjQVFQea5VWowSoqo2mMzxR
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------RmjQVFQea5VWowSoqo2mMzxR--

--------------NALbn84w27EIjcCQSSFKELUC--

--------------jEsxAE4qVjp09cwkZQ0b88sX
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRvWGsFAwAAAAAACgkQsN6d1ii/Ey/f
SggAjuPmBmkdosMrZMMR3ga/Ukd94KTPukPs4LAXbOOdZ/fYnGUzpwGr0PhDCQ1JuUwkp8h2bF9/
1+GSgbo9+wQCZjod7M1LAKPWjy3jWA91pG0gdH+kPaGHget5JuPsTRcf7ejfbEYpg1aVE/0Ejccu
ovyJ1UvruDx9m2wQFyEDi/s+/D9F+PpSeDLPQ9pdJs9V6yysu4D7fn0G/fNN4a1lGnnkqME5d+No
tBJwZkBPk51rTtpEy7w6AMkH6JPQZROmRWhzafys92kqBIgk95b/AqsSze0991WFji0zyFijnlR1
6jr9pGqaaYYjkERb9mGbC0ctjeuxPQFeTVX2KmdBmA==
=blbb
-----END PGP SIGNATURE-----

--------------jEsxAE4qVjp09cwkZQ0b88sX--


From xen-devel-bounces@lists.xenproject.org Thu May 25 12:49:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 12:49:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539573.840588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2AOj-0002yd-38; Thu, 25 May 2023 12:48:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539573.840588; Thu, 25 May 2023 12:48:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2AOi-0002yW-Vy; Thu, 25 May 2023 12:48:52 +0000
Received: by outflank-mailman (input) for mailman id 539573;
 Thu, 25 May 2023 12:48:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tVPt=BO=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1q2AOg-0002yP-Rr
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 12:48:51 +0000
Received: from sender3-of-o58.zoho.com (sender3-of-o58.zoho.com
 [136.143.184.58]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7d188d59-fafa-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 14:48:48 +0200 (CEST)
Received: from [10.10.1.138] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1685018921909599.6999728037861;
 Thu, 25 May 2023 05:48:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d188d59-fafa-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1685018924; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=OC+iVi66v4ZUQXOXjJZxV/Tc03LQTONsmQ1C349Vu/XXeVKWAFiyhctPyHbWUwRZSEBmnENJcQ8d4AWHqOLuhGZDPpwG/apGCC4pr6XX0rVQvockVUJdSErqeXYmT/xi1Dw8sLbKBzvP3jfgwJtqwC3TT/eGRpDdKWNNcBH2+M0=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1685018924; h=Content-Type:Content-Transfer-Encoding:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=Nrwi7EB3kUI4MvvcpNAWEhE0/2/LrdcYBPX8lCg0UDk=; 
	b=i2d5rnJBVDO1KUnpAVEO0PG1jNzqTeGs7vpodMbv7GSzDtT8+np0cca9+gSRnZnCBkND7pa75xibsp4TORm0/ddjsUQ+ThUVYty13zATEdOkgICKwiAJW18Q8NEgj3gtkv56wBFzPdQPbuqx5UgA5OHbRsfH7WD046OiWaU6ovM=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1685018924;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:Subject:Subject:To:To:References:From:From:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To:Cc;
	bh=Nrwi7EB3kUI4MvvcpNAWEhE0/2/LrdcYBPX8lCg0UDk=;
	b=Zni2iwAzrNCF70ioCahNknfgiThAAFYXUkseqPsNQDVO3oHL8NtdDlwrTjxGc6GD
	lhJT05x1377giKk+Z7LqCekofbigv5s7X2CdLlV6aGDXB6aVVl0DfaLM52cbCC1Fmio
	hTwEV0c18XhacHJc6kbO5HjurdJvibWH+6kmZhpE=
Message-ID: <48aa8d36-7f70-0223-db09-f966aaffdca8@apertussolutions.com>
Date: Thu, 25 May 2023 08:48:40 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.0
Subject: Re: [XEN PATCH 04/15] build: hide policy.bin commands
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-5-anthony.perard@citrix.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
In-Reply-To: <20230523163811.30792-5-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 5/23/23 12:38, Anthony PERARD wrote:
> Instead, show only when "policy.bin" has been updated.
> 
> We still have the full command from the flask/policy Makefile, but we
> can't change that Makefile.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>   xen/xsm/flask/Makefile | 9 +++++++--
>   1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
> index 3fdcf7727e..fc44ad684f 100644
> --- a/xen/xsm/flask/Makefile
> +++ b/xen/xsm/flask/Makefile
> @@ -48,10 +48,15 @@ targets += flask-policy.S
>   FLASK_BUILD_DIR := $(abs_objtree)/$(obj)
>   POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
>   
> +policy_chk = \
> +    $(Q)if ! cmp -s $(POLICY_SRC) $@; then \
> +        $(kecho) '  UPD     $@'; \
> +        cp $(POLICY_SRC) $@; \
> +    fi
>   $(obj)/policy.bin: FORCE
> -	$(MAKE) -f $(XEN_ROOT)/tools/flask/policy/Makefile.common \
> +	$(Q)$(MAKE) -f $(XEN_ROOT)/tools/flask/policy/Makefile.common \
>   	        -C $(XEN_ROOT)/tools/flask/policy \
>   	        FLASK_BUILD_DIR=$(FLASK_BUILD_DIR) POLICY_FILENAME=$(POLICY_SRC)
> -	cmp -s $(POLICY_SRC) $@ || cp $(POLICY_SRC) $@
> +	$(call policy_chk)
>   
>   clean-files := policy.* $(POLICY_SRC)
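The quoted cmp/cp idiom (copy only when the content actually changed, so the target's timestamp is preserved and dependents aren't rebuilt) can be exercised standalone; the file names below are invented for the demo:

```shell
# Standalone sketch of: cmp -s $(POLICY_SRC) $@ || cp $(POLICY_SRC) $@
workdir=$(mktemp -d)
cd "$workdir"
printf 'policy v1\n' > xenpolicy.new

update_policy() {
    # 2>/dev/null hides cmp's complaint when the target doesn't exist yet
    if ! cmp -s xenpolicy.new policy.bin 2>/dev/null; then
        echo '  UPD     policy.bin'   # what the $(kecho) line prints
        cp xenpolicy.new policy.bin
    fi
}

first=$(update_policy)   # target absent: copied, "UPD" is printed
second=$(update_policy)  # unchanged: silent, timestamp untouched
echo "first=[$first] second=[$second]"
```

The second, silent invocation is the point of the rule: repeated builds with an unchanged policy neither print anything nor touch policy.bin.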

Outside the suggestion and nit(s) from Jan, all else looks good to me.

Acked-by: Daniel P. Smith <dpsmith@apertussolutions.com>


From xen-devel-bounces@lists.xenproject.org Thu May 25 12:49:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 12:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539574.840598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2AOy-0003LW-9n; Thu, 25 May 2023 12:49:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539574.840598; Thu, 25 May 2023 12:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2AOy-0003LP-75; Thu, 25 May 2023 12:49:08 +0000
Received: by outflank-mailman (input) for mailman id 539574;
 Thu, 25 May 2023 12:49:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2AOx-0002yP-3o
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 12:49:07 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2079.outbound.protection.outlook.com [40.107.7.79])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 883dd781-fafa-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 14:49:05 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7307.eurprd04.prod.outlook.com (2603:10a6:102:84::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Thu, 25 May
 2023 12:48:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 12:48:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 883dd781-fafa-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oaWTopFuOrzF8y7FA2XjuWV5BlzOC0P9fbohT6b6//I2bmYLuVtZ8hJ/sx7MWbxhOYBVlAhEKTh7ksw8Yq2yk8wurtDgCLSEI9SVm4AjU6+6G5sCgWataJ4NHeA/Zng2U3pJIo3mv56KGbxqKcCyvkM8eySZcINjYd3qc3H+HKgHLof106A9yB2sb66NshqGWvhvNllB/CohVhDwETwF+3DdmCLkaMi1tlAiKNMi5E3GSQdZCqRnTQwxYtgMOwccJlwyi6OqYcTRCu0RfEeBybg4Fq4glCHXItkk3MZsPSnaakHN2Ev46nrbqmuQyoGtys6gAYZbwD23/xarkJZc4w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=l0vGmsTjxn+2fgr6wE3651FgBj6ujZMA+pLxHCcRnUo=;
 b=b8rplEC1HTyAc0y3oFPK80kCQcNJlHWRHuJ03yVQAB2w1BThlRyK8iHgPqTER2MjqN7JN+wvmr5ahfr2644NbLnguu41mqbpSMjT3Y6H/zGCCWLbsCapPOXoesynVCqEHA+EaxglUW4YDvuAQUuftvjimIkGjbAbCOk/slXxixINEPmTpZ84cPXlEzPxz2bpi91i0ifcTy5YMTIzN2sj4GbuHHU895sZJut3cJZgb2/Z2ZnDpGI3S214F8GPzjKeu1VWl4nSIzczB8iblvX1cGWHU70JjpZ5T/Ig9LAtLReNJDdTzAp+P7h1E0mX4AkI9h5L+nqIQA399T2Pd0xG8w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l0vGmsTjxn+2fgr6wE3651FgBj6ujZMA+pLxHCcRnUo=;
 b=qk85TDNnUO6xhwf2rK982tfBHIBdXtfxCWxoH/aJcCt6yB+8aUabEThC57gkhFSpmP349rsOF/Kx0N1jNxY8dKjYEJSJaMddggOerqOvQ5Wq8Gwv+45/NPOlIwiXKygErRTAu5/a4JBlEr/WFIN0vbZHbM5BdfwKH36dA4bIcLWxPMwtEGG1n7KojLOYesMKRrWm9j0pK2ImbLJKxcc6+2HRcHyLe79NnzekiZhhP7hJyazvc4hmmWEcQ3QDwEkIsyA9uAMD59z78T1x1f4PppWXaq5T2kNDcHtkJLpc3TSsg0jQLo8TkMUz8jzVKnQvWBVWaMevbysohW7oR8UdlQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5d36011b-1815-6e02-0c05-b2b7cf35a722@suse.com>
Date: Thu, 25 May 2023 14:48:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 13/15] build: fix compile.h compiler version command
 line
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Anthony PERARD <anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-14-anthony.perard@citrix.com>
 <f04c4e98-84b1-acde-5acd-2f5e18f591d7@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f04c4e98-84b1-acde-5acd-2f5e18f591d7@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0006.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7307:EE_
X-MS-Office365-Filtering-Correlation-Id: fae0504f-b88b-4539-bd67-08db5d1e5afe
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fae0504f-b88b-4539-bd67-08db5d1e5afe
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 12:48:36.1637
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: zt0KfU3uSHb5e3ok2SGpT4v2jnuExqHS01fK/zE8vuTxCe3NpUVej3fA0K8HT+cmFIst5hcthT41JwyOoJMg4w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7307

On 23.05.2023 20:14, Andrew Cooper wrote:
> On 23/05/2023 5:38 pm, Anthony PERARD wrote:
>> CFLAGS is just from Config.mk, instead use the flags used to build
>> Xen.
>>
>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>> ---
>>
>> Notes:
>>     I don't know if CFLAGS is even useful there, just --version without the
>>     flags might produce the same result.
> 
> I can't think of any legitimate reason for CFLAGS to be here.
> 
> Any compiler which does differ its output based on CFLAGS is probably
> one we don't want to be using...

Well, I wouldn't go quite as far in general, but I agree for the --version
case. Actually at least with gcc it's even "better": I've tried a couple
of 32-bit compilers with "-m64 --version", which would normally choke on
the -m64. But that option is ignored altogether when --version is there.
(Which has up- and downsides of course; the command failing might also be
useful, in telling us that the compiler isn't usable in the first place.)
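
The behaviour described above can be sketched as a hypothetical shell
session (flag handling may differ across compilers and versions):

```shell
# Sketch: with --version on the command line, gcc ignores
# code-generation flags such as -m64, so the version banner is printed
# even where an actual compilation with the same flags would fail.
CC=${CC:-cc}
banner=$("$CC" -m64 --version 2>&1 | head -n 1)
echo "$banner"
```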

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 12:50:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 12:50:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539581.840608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2APx-0004ti-Jb; Thu, 25 May 2023 12:50:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539581.840608; Thu, 25 May 2023 12:50:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2APx-0004tb-GO; Thu, 25 May 2023 12:50:09 +0000
Received: by outflank-mailman (input) for mailman id 539581;
 Thu, 25 May 2023 12:50:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2APw-00043X-AN
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 12:50:08 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2062e.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aca77a17-fafa-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 14:50:06 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PR3PR04MB7307.eurprd04.prod.outlook.com (2603:10a6:102:84::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Thu, 25 May
 2023 12:50:04 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 12:50:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aca77a17-fafa-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CtITCzKaIiJJFak2vVHIbwQybhzvpGF/X/kFgP0YdvzBbho+mkt1GwTHiiCJXOQ/jFXbdy4wsZMed5KcBMsmqolov7BDwLB8DBoqC1HGo4ZyBUXM11nmH/Kfki6cOPMwAN4cT2gdFnmADBMmpz9nWncj6P1+avcx4ObFmr/w+B7NpW+yG4F0B0+nlI9XM1zXelfgKPYlVBJ1xikCGDzwyJMby6GPm94Kc1ESL+Zv/5ZXjI59iEI8rIW1Bqgx5p20CpYN87kckaZMq20a46kTgvakC25QNRClgNjNYsVGI7By4ZvdmQ9Gf85cdpcZcwmMNX2SvpOk6TvtH8+oyvdZlQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EUrh6mM0e6631BB3xw5QIAo8gpsPFec7oLGrtfQnYfU=;
 b=HZz1wPDze80WQNEGORDFAm5NP7D1V4ALg1nSyP7eVEgnyokR+ibPqXWFJRFvU41n0pLk79kFv8SNSIQ7UhGlAeIIYahOSz3OS+u3Wdlyuh4IH22FyPm9KKmkBaSCw11z1lc13cN2KruBuQAP9SP3nwyDva0LoNSievg1kMN3LCYcStLbvfi5UDmK73DRp87pGpqbASD2UkdsBvTfwOsaQ7PAlhbNAK0yr4OBHRK84WJz/+wyWbT6eT++HKfIM6K0u2QWbPzWlheGsSCvV1DAO3lOD51v1zRjjg//uT5VE9QotgFHSXbhlFsoFZeBZea98dwIdKGoFUSEY0MebhZCsg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EUrh6mM0e6631BB3xw5QIAo8gpsPFec7oLGrtfQnYfU=;
 b=NTUGlHVObS53LytWR2KsP+AIxDhSLRrNKTOpyTNIRMvLbE9+G77CalXhAcU5DEbyXn4+a18vpQZPWFUbUvlybFiJ+UZ8leYoY0xObpX4S+Kvg+pB+DJvOxUV9B2Mnwx6iCbvsgY+1q95KeiYnvzDlfanHU5s5NtAQE3jz6b7LopjLuMpvhNIvDhrA2leg5PKWTu3JYOWU9Sl+YBADiJlS4ywwCUmWd6dWouOit/WEjk2rFj9qhS0ZfRKg+pOBFqAy7KuycDLuIFnenhwjq1Zt11QxZoKY/Gvgx0CzD4riDGOJNUeD8tTAk96zcF1mIiQ6TMXtzu7kXb+9WTuLY+m0A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <45f240fb-6f07-f132-1071-f6783f8121af@suse.com>
Date: Thu, 25 May 2023 14:50:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 13/15] build: fix compile.h compiler version command
 line
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Luca Fancellu <Luca.Fancellu@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-14-anthony.perard@citrix.com>
 <35D40E55-2D93-431A-8B16-FCFEBBDA25B1@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <35D40E55-2D93-431A-8B16-FCFEBBDA25B1@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0057.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:93::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PR3PR04MB7307:EE_
X-MS-Office365-Filtering-Correlation-Id: c30c2009-d87e-4121-6d27-08db5d1e8fc7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c30c2009-d87e-4121-6d27-08db5d1e8fc7
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 12:50:04.7125
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: V/ESDcVXzQatiP0XuuWn/JkmdOz+smac9B/z1B8IAf2p07mjb+3UsrMS3dywwVb9jX5WrOLU5FWbYfQK7ExGAQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR04MB7307

On 24.05.2023 11:43, Luca Fancellu wrote:
> 
> 
>> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>
>> CFLAGS is just from Config.mk, instead use the flags used to build
>> Xen.
>>
>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>> ---
>>
>> Notes:
>>    I don't know if CFLAGS is even useful there, just --version without the
>>    flags might produce the same result.
>>
>> xen/build.mk | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/build.mk b/xen/build.mk
>> index e2a78aa806..d468bb6e26 100644
>> --- a/xen/build.mk
>> +++ b/xen/build.mk
>> @@ -23,7 +23,7 @@ define cmd_compile.h
>>    -e 's/@@whoami@@/$(XEN_WHOAMI)/g' \
>>    -e 's/@@domain@@/$(XEN_DOMAIN)/g' \
>>    -e 's/@@hostname@@/$(XEN_BUILD_HOST)/g' \
>> -    -e 's!@@compiler@@!$(shell $(CC) $(CFLAGS) --version 2>&1 | head -1)!g' \
>> +    -e 's!@@compiler@@!$(shell $(CC) $(XEN_CFLAGS) --version 2>&1 | head -1)!g' \
>>    -e 's/@@version@@/$(XEN_VERSION)/g' \
>>    -e 's/@@subversion@@/$(XEN_SUBVERSION)/g' \
>>    -e 's/@@extraversion@@/$(XEN_EXTRAVERSION)/g' \
>> -- 
>> Anthony PERARD
>>
>>
> 
> Yes I think Andrew is right, so I guess $(XEN_CFLAGS) can be dropped?
> 
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> I’ve tested this patch with and without the $(XEN_CFLAGS), so if you drop it you can
> retain my r-by if you want.

Acked-by: Jan Beulich <jbeulich@suse.com>
preferably with the $(CFLAGS) dropped, which again I'd be happy to do
while committing.
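
For context, the substitution under discussion can be sketched roughly
like this (a simplified stand-in for the cmd_compile.h sed rule; the
template line used here is illustrative, not the actual compile.h input):

```shell
# Simplified stand-in for the @@compiler@@ substitution in
# cmd_compile.h: embed the first line of the compiler's version banner.
CC=${CC:-cc}
compiler=$("$CC" --version 2>&1 | head -n 1)
line=$(printf '#define XEN_COMPILER "@@compiler@@"\n' |
    sed -e "s!@@compiler@@!$compiler!g")
echo "$line"
```

The `!` delimiter mirrors the build rule: version banners routinely
contain `/`, which would terminate a `s/…/…/` expression prematurely.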

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 13:13:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 13:13:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539585.840618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Am1-0007RG-Ci; Thu, 25 May 2023 13:12:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539585.840618; Thu, 25 May 2023 13:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Am1-0007R9-9p; Thu, 25 May 2023 13:12:57 +0000
Received: by outflank-mailman (input) for mailman id 539585;
 Thu, 25 May 2023 13:12:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iF6a=BO=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q2Am0-0007R3-EE
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 13:12:56 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id db35442c-fafd-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 15:12:53 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A707021CF6;
 Thu, 25 May 2023 13:12:52 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 80B9F13356;
 Thu, 25 May 2023 13:12:52 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id HiGiHdReb2TuRwAAMHmgww
 (envelope-from <jgross@suse.com>); Thu, 25 May 2023 13:12:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db35442c-fafd-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685020372; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=m6DN/3xx1Z2aQkGZaGSC8H8tDPMmppvZicifQJ2ZcP0=;
	b=Y2OJwmGgwsa/wSVCwNRKNp0KsfJ6rDEUe7/cyOX06KvkJCs0ANJ+IYp547Pr86Cd4dBhGj
	lkObV1Z1X3btWpTTF37FzA4h41glq4gjlnvYPCKHk3KxDbseAmez4GQ7xPUwxUknLCmNEP
	KTQQQzjgdIZVQzy9+SZZvvbODsh5U1s=
Message-ID: <33653405-8ffc-1a25-56eb-b86a05be2fd9@suse.com>
Date: Thu, 25 May 2023 15:12:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Subject: Re: [PATCH 1/3] docs: fix complex-and-wrong xenstore-path wording
Content-Language: en-US
To: Yann Dirson <yann.dirson@vates.fr>, xen-devel@lists.xenproject.org
Cc: xihuan.yang@citrix.com, min.li1@citrix.com
References: <20230510142011.1120417-1-yann.dirson@vates.fr>
 <20230510142011.1120417-2-yann.dirson@vates.fr>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230510142011.1120417-2-yann.dirson@vates.fr>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------8etra55gbla2Q3KY6lLtI0XU"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------8etra55gbla2Q3KY6lLtI0XU
Content-Type: multipart/mixed; boundary="------------0V0ATrsKxJ6h9vhFfR0xjohd";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Yann Dirson <yann.dirson@vates.fr>, xen-devel@lists.xenproject.org
Cc: xihuan.yang@citrix.com, min.li1@citrix.com
Message-ID: <33653405-8ffc-1a25-56eb-b86a05be2fd9@suse.com>
Subject: Re: [PATCH 1/3] docs: fix complex-and-wrong xenstore-path wording
References: <20230510142011.1120417-1-yann.dirson@vates.fr>
 <20230510142011.1120417-2-yann.dirson@vates.fr>
In-Reply-To: <20230510142011.1120417-2-yann.dirson@vates.fr>

--------------0V0ATrsKxJ6h9vhFfR0xjohd
Content-Type: multipart/mixed; boundary="------------AWakUPhxGVZTsu5h0Z0y8FAb"

--------------AWakUPhxGVZTsu5h0Z0y8FAb
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 10.05.23 16:20, Yann Dirson wrote:
> "0 or 1 ... to indicate whether it is capable or incapable, respectively"
> is luckily just swapped words.  Making this shorter will
> make the reading easier.
> 
> Signed-off-by: Yann Dirson <yann.dirson@vates.fr>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------AWakUPhxGVZTsu5h0Z0y8FAb--

--------------0V0ATrsKxJ6h9vhFfR0xjohd--

--------------8etra55gbla2Q3KY6lLtI0XU
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRvXtQFAwAAAAAACgkQsN6d1ii/Ey+K
NwgAkHq1hlziw/EC/Z4cyE/CcaCuqwFlpdbXbvzDDkMhAtu4xPhc9fHZ1kAZJ2PdbDLjlxNGP3nl
pqN4frXtZuDNM4GLhWRSWLQVAw5AohCKVAU2BLFvBhHIUbxNgs92D9Cfx2r1BwmgE+CPeqfTZEdb
a7NWqUtg/VmIbhY02RbZzWO7j1XklOj+gR/R5XxcEEK8tg9bkE1NN+LVyVTKD4M/xfy5beIxX9zG
OaUEjmLBZsmYfBcgq+UEAXgNQh8PXK0cZQFxvX/1DVZf12fDTPbpWOux5adjrBprMDoMNgdJ0zmZ
TjaQVv52sxQpEuUQsdJCvCa63KGhZUI3Uf8ARou8mg==
=oytU
-----END PGP SIGNATURE-----

--------------8etra55gbla2Q3KY6lLtI0XU--


From xen-devel-bounces@lists.xenproject.org Thu May 25 13:25:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 13:25:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539591.840628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Axh-0000bQ-JU; Thu, 25 May 2023 13:25:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539591.840628; Thu, 25 May 2023 13:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Axh-0000bJ-Fu; Thu, 25 May 2023 13:25:01 +0000
Received: by outflank-mailman (input) for mailman id 539591;
 Thu, 25 May 2023 13:24:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2Axf-0000bD-SH
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 13:24:59 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20609.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::609])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8a90ff36-faff-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 15:24:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9190.eurprd04.prod.outlook.com (2603:10a6:20b:44d::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 13:24:55 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 13:24:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <4a7aaa7a-92c2-3348-3ae3-40c09c2b1c98@suse.com>
Date: Thu, 25 May 2023 15:24:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4dmJuzNVUE5UIY@Air-de-Roger>
 <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
 <ZG4uUO93Iub36IFp@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZG4uUO93Iub36IFp@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 24.05.2023 17:33, Roger Pau Monné wrote:
> On Wed, May 24, 2023 at 04:44:49PM +0200, Jan Beulich wrote:
>> On 24.05.2023 16:22, Roger Pau Monné wrote:
>>> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
>>>> Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
>>>> console) are associated with DomXEN, not Dom0. This means that while
>>>> looking for overlapping BARs such devices cannot be found on Dom0's list
>>>> of devices; DomXEN's list also needs to be scanned.
>>>>
>>>> Suppress vPCI init altogether for r/o devices (which constitute a subset
>>>> of hidden ones).
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> RFC: The modify_bars() change is intentionally mis-formatted, as the
>>>>      necessary re-indentation would make the diff difficult to read. At
>>>>      this point I'd merely like to gather input towards possible better
>>>>      approaches to solve the issue (not the least because quite possibly
>>>>      there are further places needing changing).
>>>
>>> I think we should also handle the case of !pdev->vpci for the hardware
>>> domain, as it's allowed for the vpci_add_handlers() call in
>>> setup_one_hwdom_device() to fail and the device would still be assigned
>>> to the hardware domain.
>>>
>>> I can submit that as a separate bugfix, as it's already an issue
>>> without taking r/o or hidden devices into account.
>>
>> Yeah, I think that wants dealing with separately. I'm not actually sure
>> though that "is allowed to fail" is proper behavior ...
> 
> One better option would be to mark the device r/o if the
> vpci_add_handlers() call fails in setup_one_hwdom_device(), as that
> would prevent dom0 from accessing native MSI(-X) capabilities.

Perhaps, but again in a separate patch.

>>>> RFC: Whether to actually suppress vPCI init is up for debate; adding the
>>>>      extra logic is following Roger's suggestion (I'm not convinced it is
>>>>      useful to have). If we want to keep that, a 2nd question would be
>>>>      whether to keep it in vpci_add_handlers(): Both of its callers (can)
>>>>      have a struct pci_seg readily available, so checking ->ro_map at the
>>>>      call sites would be easier.
>>>
>>> But that would duplicate the logic into the callers, which doesn't
>>> seem very nice to me, and makes it easier to make mistakes if further
>>> callers are added and r/o is not checked there.
>>
>> Right, hence why I didn't do it the alternative way from the beginning.
>> Both approaches have a pro and a con.
>>
>> But prior to answering the 2nd question, what about the 1st one? Is it
>> really worth having the extra logic?
> 
> Why would we want to do all the vPCI initialization for r/o devices?
> None of the handlers set up will get called, so I see it the other way
> around: not short-circuiting vpci_add_handlers() for r/o devices is a
> waste of time and resources because none of the setup state would be
> used anyway.

Hmm, yes, that's a valid way of looking at (and justifying) it.

>>>  And
>>> hence doing those before or after normal devices will lead to the same
>>> result.  The loop in modify_bars() is there to avoid attempting to map
>>> the same range twice, or to unmap a range while there are devices
>>> still using it, but the unmap is never done during initial device
>>> setup.
>>
>> Okay, so maybe indeed there's no effect on the final result. Yet the
>> anomaly still bothered me when I saw it in the logs after adding a
>> printk() to that new "continue" path (it actually misled me initially
>> in my conclusions as to what was going on). We would skip hidden
>> devices until they get initialized themselves. There would be less
>> skipping if all (there aren't going to be many) DomXEN devices were
>> initialized first.
> 
> I think that just makes the logic more complicated for no reason; the
> only reason you don't see this with devices assigned to dom0 is that
> device addition is interleaved with calls to vpci_add_handlers().
> However, it would also be valid to add all devices to dom0 and then
> call vpci_add_handlers() for each one of them.

Okay, I'll leave that alone then, at least as long as actual behavioral
differences don't give us a reason to change it.
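
The two-list scan under discussion can be illustrated with a self-contained
toy model. This is purely a sketch: the structs below are simplified
stand-ins, not Xen's real struct pci_dev / struct vpci, and
count_overlaps() is an invented helper rather than modify_bars() itself.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins; names and layouts are illustrative only. */
struct vpci { unsigned long bar_start, bar_end; };

struct pci_dev {
    struct vpci *vpci;          /* NULL for r/o or not-yet-set-up devices */
    struct pci_dev *next;
};

/*
 * Count devices whose (single, toy) BAR overlaps [start, end], walking
 * both a Dom0-like and a DomXEN-like list, as the patch under discussion
 * extends modify_bars() to do.  Entries without vPCI state are skipped
 * rather than dereferenced.
 */
static unsigned int count_overlaps(const struct pci_dev *dom0_list,
                                   const struct pci_dev *domxen_list,
                                   unsigned long start, unsigned long end)
{
    const struct pci_dev *lists[] = { dom0_list, domxen_list };
    unsigned int overlaps = 0;

    for ( unsigned int i = 0; i < 2; i++ )
        for ( const struct pci_dev *tmp = lists[i]; tmp; tmp = tmp->next )
        {
            if ( !tmp->vpci )
                continue;       /* hidden/r-o device without vPCI state */
            if ( tmp->vpci->bar_start <= end && start <= tmp->vpci->bar_end )
                overlaps++;
        }

    return overlaps;
}
```

The point of the sketch is only the shape of the loop: one extra outer
iteration for the DomXEN-like list, with vPCI-less entries skipped.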

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 13:25:36 2023
Message-ID: <0b069bc3-0362-d8ec-fc2a-05dd65218c39@digikod.net>
Date: Thu, 25 May 2023 15:25:09 +0200
MIME-Version: 1.0
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
To: Trilok Soni <quic_tsoni@quicinc.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, "H . Peter Anvin"
 <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
 Kees Cook <keescook@chromium.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Sean Christopherson <seanjc@google.com>, Thomas Gleixner
 <tglx@linutronix.de>, Vitaly Kuznetsov <vkuznets@redhat.com>,
 Wanpeng Li <wanpengli@tencent.com>
Cc: Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 James Morris <jamorris@linux.microsoft.com>,
 John Andersen <john.s.andersen@intel.com>, Liran Alon
 <liran.alon@oracle.com>,
 "Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
 Marian Rotariu <marian.c.rotariu@gmail.com>,
 =?UTF-8?Q?Mihai_Don=c8=9bu?= <mdontu@bitdefender.com>,
 =?UTF-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 =?UTF-8?Q?=c8=98tefan_=c8=98icleru?= <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <20230505152046.6575-1-mic@digikod.net>
 <1e10da25-5704-18ee-b0ce-6de704e6f0e1@quicinc.com>
Content-Language: en-US
From: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>
In-Reply-To: <1e10da25-5704-18ee-b0ce-6de704e6f0e1@quicinc.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit


On 24/05/2023 23:04, Trilok Soni wrote:
> On 5/5/2023 8:20 AM, Mickaël Salaün wrote:
>> Hi,
>>
>> This patch series is a proof-of-concept that implements new KVM features
>> (extended page tracking, MBEC support, CR pinning) and defines a new API to
>> protect guest VMs. No VMM (e.g., Qemu) modification is required.
>>
>> The main idea being that kernel self-protection mechanisms should be delegated
>> to a more privileged part of the system, hence the hypervisor. It is still the
>> role of the guest kernel to request such restrictions according to its
> 
> Only for the guest kernel images here? Why not for the host OS kernel?

As explained in the Future work section, protecting the host would be 
useful, but that doesn't really fit with the KVM model. The Protected 
KVM project is a first step to help in this direction [11].

In a nutshell, KVM is close to a type-2 hypervisor, and the host kernel 
is also part of the hypervisor.


> Embedded devices w/ Android you have mentioned below supports the host
> OS as well it seems, right?

What do you mean?


> 
> Do we suggest that all the functionality should be implemented in the
> hypervisor (NS-EL2 for ARM), or even at a Secure EL like Secure-EL1 (ARM)?

KVM runs in EL2. TrustZone is mainly used to enforce DRM, which means 
that we may not control the related code.

This patch series is dedicated to hypervisor-enforced kernel integrity, 
hence KVM.

> 
> I am hoping that whatever interface we suggest here from the guest
> to the hypervisor becomes the ABI, right?

Yes, hypercalls are part of the KVM ABI.
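
For concreteness, CR pinning (one of the features the cover letter lists)
could be enforced hypervisor-side along these lines. This is a hedged toy
sketch: none of the names or structures below come from the RFC; they are
invented for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

#define CR0_WP   (1ull << 16)
#define CR4_SMEP (1ull << 20)

/* Per-vCPU pinning state: which bits are pinned, and to what value. */
struct cr_pin { uint64_t pinned_mask; uint64_t pinned_value; };

/* Guest-requested pinning (e.g. via hypercall): pin the bits in `mask`
 * to their current value `cur`. */
static void pin_bits(struct cr_pin *p, uint64_t cur, uint64_t mask)
{
    p->pinned_mask |= mask;
    p->pinned_value = (p->pinned_value & ~mask) | (cur & mask);
}

/* CR-write intercept: allow only writes that preserve all pinned bits. */
static bool cr_write_allowed(const struct cr_pin *p, uint64_t new_val)
{
    return (new_val & p->pinned_mask) == p->pinned_value;
}
```

Once CR0.WP is pinned set, any attempted write clearing it would be
refused by the intercept, regardless of what compromised guest code asks
for.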

> 
> 
>>
>> # Current limitations
>>
>> The main limitation of this patch series is the statically enforced
>> permissions. This is not an issue for kernels without modules but this needs to
>> be addressed.  Mechanisms that dynamically impact kernel executable memory are
>> not handled for now (e.g., kernel modules, tracepoints, eBPF JIT), and such
>> code will need to be authenticated.  Because the hypervisor is highly
>> privileged and critical to the security of all the VMs, we don't want to
>> implement a code authentication mechanism in the hypervisor itself but delegate
>> this verification to something much less privileged. We are thinking of two
>> ways to solve this: implement this verification in the VMM or spawn a dedicated
>> special VM (similar to Windows's VBS). There are pros and cons to each approach:
>> complexity, verification code ownership (guest's or VMM's), access to guest
>> memory (i.e., confidential computing).
> 
> Do you foresee the performance regressions due to lot of tracking here?

The performance impact of execution prevention should be negligible 
because, once configured, the hypervisor does nothing except catch 
illegitimate access attempts.


> Production kernels do have lot of tracepoints and we use it as feature
> in the GKI kernel for the vendor hooks implementation and in those cases
> every vendor driver is a module.

As explained in this section, dynamic kernel modifications such as 
tracepoints or modules are not currently supported by this patch series. 
Handling tracepoints is possible but requires more work to define and 
check legitimate changes. This proposal is still useful for static 
kernels though.


> A separate VM further fragments this
> design and delegates more of it to proprietary solutions?

What do you mean? KVM is not a proprietary solution.

For dynamic checks, this would require code not run by KVM itself, but 
by either the VMM or a dedicated VM. In this case, the dynamic 
authentication code could come from the guest VM or from the VMM itself. 
In the former case, it is more challenging from a security point of view 
but doesn't rely on an external (proprietary) solution. In the latter 
case, open-source VMMs should implement the specification to provide the 
required service (e.g. checking kernel module signatures).

The goal of the common API layer provided by this RFC is to share code 
as much as possible between different hypervisor backends.
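
A common layer over multiple backends is usually done with an ops table.
The sketch below is purely illustrative; the "heki" naming and every
identifier in it are made up for this example and are not the RFC's actual
API.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Per-hypervisor backend operations, filled in by each backend. */
struct heki_ops {
    const char *name;
    int (*lock_crs)(void);          /* e.g. pin CR0.WP / CR4.SMEP */
    int (*protect_ranges)(unsigned long start, unsigned long end);
};

/* A KVM-flavoured backend; real code would issue hypercalls here. */
static int kvm_lock_crs(void) { return 0; }
static int kvm_protect(unsigned long s, unsigned long e)
{
    return s < e ? 0 : -1;          /* reject ill-formed ranges */
}

static const struct heki_ops kvm_ops = {
    .name = "kvm",
    .lock_crs = kvm_lock_crs,
    .protect_ranges = kvm_protect,
};

/* Common layer: the backend is chosen once at boot; all shared logic
 * (argument checking, policy) lives above the ops table. */
static const struct heki_ops *heki;

static int heki_protect_kernel_text(unsigned long s, unsigned long e)
{
    if ( !heki || !heki->protect_ranges )
        return -1;                  /* no backend: restrictions unavailable */
    return heki->protect_ranges(s, e);
}
```

Another backend (Hyper-V, Xen, ...) would provide its own struct heki_ops
while the common dispatch code stays unchanged.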


> 
> Do you have any performance numbers w/ current RFC?

No, but the only hypervisor performance impact is at boot time and 
should be negligible. I'll try to get some numbers for the 
hardware-enforcement impact, but it should be negligible too.


From xen-devel-bounces@lists.xenproject.org Thu May 25 13:27:29 2023
Date: Thu, 25 May 2023 15:27:11 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [PATCH] vpci/header: cope with devices not having vpci allocated
Message-ID: <ZG9iLwNWyK+I4HLf@Air-de-Roger>
References: <20230525083051.30366-1-roger.pau@citrix.com>
 <25a73290-7677-e202-5e91-fdf32ad5c01b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <25a73290-7677-e202-5e91-fdf32ad5c01b@suse.com>
MIME-Version: 1.0

On Thu, May 25, 2023 at 11:05:52AM +0200, Jan Beulich wrote:
> On 25.05.2023 10:30, Roger Pau Monne wrote:
> > When traversing the list of pci devices assigned to a domain, cope with
> > some of them not having the vpci struct allocated. It's possible for
> > the hardware domain to have read-only devices assigned that are not
> > handled by vPCI, or for unprivileged domains to have some devices
> > handled by an emulator different than vPCI.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> >  xen/drivers/vpci/header.c | 14 ++++++++++++++
> >  1 file changed, 14 insertions(+)
> > 
> > diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> > index ec2e978a4e6b..3c1fcfb208cf 100644
> > --- a/xen/drivers/vpci/header.c
> > +++ b/xen/drivers/vpci/header.c
> > @@ -289,6 +289,20 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >       */
> >      for_each_pdev ( pdev->domain, tmp )
> >      {
> > +        if ( !tmp->vpci )
> > +            /*
> > +             * For the hardware domain it's possible to have devices assigned
> > +             * to it that are not handled by vPCI, either because those are
> > +             * read-only devices, or because vPCI setup has failed.
> 
> So this really is a forward looking comment, becoming true only (aiui)
> when my patch also makes it in.

The r/o part, yes; device setup failing is already possible.

I think it's fine to have the r/o part added already.

> > +             * For unprivileged domains we should aim for passthrough devices
> > +             * to be capable of being handled by different emulators, and hence
> > +             * a domain could have some devices handled by vPCI and others by
> > +             * QEMU for example, and the latter won't have pdev->vpci
> > +             * allocated.
> 
> This, otoh, I don't understand: Do we really intend to have pass-through
> devices handled by qemu or the like, for PVH? Or are you thinking of hybrid
> HVM (some vPCI, some qemu)?

I was thinking about hybrid.

> Plus, when considering hybrid guests, won't
> we need to take into account BARs of externally handled devices as well,
> to avoid overlaps?

In that scenario we would request non-overlapping BARs for things to
work as expected; I think that's already the case for HVM if you mix
QEMU and demu for the same domain.

> In any event, until the DomU side picture is more clear, I'd suggest we
> avoid making statements pinning down expectations. That said, you're the
> maintainer, so if you really want it like this, so be it.

OK, I don't have a strong opinion, so I'm fine with dropping the "For
unprivileged domains ..." paragraph.

Would you like me to resend with that dropped?

I assume you also want the commit message adjusted to not mention
unprivileged domains?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 25 13:31:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 13:31:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539602.840657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2B3G-0003Cl-Q0; Thu, 25 May 2023 13:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539602.840657; Thu, 25 May 2023 13:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2B3G-0003Ce-NI; Thu, 25 May 2023 13:30:46 +0000
Received: by outflank-mailman (input) for mailman id 539602;
 Thu, 25 May 2023 13:30:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2B3F-0003CY-3T
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 13:30:45 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20624.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58d27310-fb00-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 15:30:43 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8462.eurprd04.prod.outlook.com (2603:10a6:10:2cd::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Thu, 25 May
 2023 13:30:41 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 13:30:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58d27310-fb00-11ed-8611-37d641c3527e
Message-ID: <609ce6b2-d546-ba51-0f1b-6439c6361c01@suse.com>
Date: Thu, 25 May 2023 15:30:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] vpci/header: cope with devices not having vpci allocated
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
References: <20230525083051.30366-1-roger.pau@citrix.com>
 <25a73290-7677-e202-5e91-fdf32ad5c01b@suse.com>
 <ZG9iLwNWyK+I4HLf@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZG9iLwNWyK+I4HLf@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 25.05.2023 15:27, Roger Pau Monné wrote:
> On Thu, May 25, 2023 at 11:05:52AM +0200, Jan Beulich wrote:
>> On 25.05.2023 10:30, Roger Pau Monne wrote:
>>> When traversing the list of pci devices assigned to a domain, cope with
>>> some of them not having the vpci struct allocated. It's possible for
>>> the hardware domain to have read-only devices assigned that are not
>>> handled by vPCI, or for unprivileged domains to have some devices
>>> handled by an emulator different than vPCI.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>>  xen/drivers/vpci/header.c | 14 ++++++++++++++
>>>  1 file changed, 14 insertions(+)
>>>
>>> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
>>> index ec2e978a4e6b..3c1fcfb208cf 100644
>>> --- a/xen/drivers/vpci/header.c
>>> +++ b/xen/drivers/vpci/header.c
>>> @@ -289,6 +289,20 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>>>       */
>>>      for_each_pdev ( pdev->domain, tmp )
>>>      {
>>> +        if ( !tmp->vpci )
>>> +            /*
>>> +             * For the hardware domain it's possible to have devices assigned
>>> +             * to it that are not handled by vPCI, either because those are
>>> +             * read-only devices, or because vPCI setup has failed.
>>
>> So this really is a forward looking comment, becoming true only (aiui)
>> when my patch also makes it in.
> 
> The r/o part, yes; device setup failing is already possible.
> 
> I think it's fine to have the r/o part added already.
> 
>>> +             * For unprivileged domains we should aim for passthrough devices
>>> +             * to be capable of being handled by different emulators, and hence
>>> +             * a domain could have some devices handled by vPCI and others by
>>> +             * QEMU for example, and the latter won't have pdev->vpci
>>> +             * allocated.
>>
>> This, otoh, I don't understand: Do we really intend to have pass-through
>> devices handled by qemu or the like, for PVH? Or are you thinking of hybrid
>> HVM (some vPCI, some qemu)?
> 
> I was thinking about hybrid.
> 
>> Plus, when considering hybrid guests, won't
>> we need to take into account BARs of externally handled devices as well,
>> to avoid overlaps?
> 
> In that scenario we would request non-overlapping BARs for things to
> work as expected; I think that's already the case for HVM if you mix
> QEMU and demu for the same domain.
> 
>> In any event, until the DomU side picture is more clear, I'd suggest we
>> avoid making statements pinning down expectations. That said, you're the
>> maintainer, so if you really want it like this, so be it.
> 
> OK, I don't have a strong opinion, so I'm fine with dropping the "For
> unprivileged domains ..." paragraph.
> 
> Would you like me to resend with that dropped?

Yes, please, because ...

> I assume you also want the commit message adjusted to not mention
> unprivileged domains?

... some adjustment will be wanted. Mentioning (vague) plans in the
description is fine with me, if you want to.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 13:34:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 13:34:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539605.840667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2B6d-0003lP-8x; Thu, 25 May 2023 13:34:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539605.840667; Thu, 25 May 2023 13:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2B6d-0003lI-6K; Thu, 25 May 2023 13:34:15 +0000
Received: by outflank-mailman (input) for mailman id 539605;
 Thu, 25 May 2023 13:34:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ga2A=BO=citrix.com=prvs=5022cd99a=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q2B6b-0003lC-Up
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 13:34:13 +0000
Received: from esa1.hc3370-68.iphmx.com (esa1.hc3370-68.iphmx.com
 [216.71.145.142]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d48766e4-fb00-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 15:34:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d48766e4-fb00-11ed-8611-37d641c3527e
Date: Thu, 25 May 2023 14:34:06 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH 04/15] build: hide policy.bin commands
Message-ID: <ce86e239-5cc1-4f7c-ba6b-de6eb9c1442d@perard>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-5-anthony.perard@citrix.com>
 <c75368db-6444-6910-487c-8ac9477a6785@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <c75368db-6444-6910-487c-8ac9477a6785@suse.com>

On Wed, May 24, 2023 at 09:11:10AM +0200, Jan Beulich wrote:
> On 23.05.2023 18:38, Anthony PERARD wrote:
> > --- a/xen/xsm/flask/Makefile
> > +++ b/xen/xsm/flask/Makefile
> > @@ -48,10 +48,15 @@ targets += flask-policy.S
> >  FLASK_BUILD_DIR := $(abs_objtree)/$(obj)
> >  POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
> >  
> > +policy_chk = \
> > +    $(Q)if ! cmp -s $(POLICY_SRC) $@; then \
> > +        $(kecho) '  UPD     $@'; \
> > +        cp $(POLICY_SRC) $@; \
> 
> Wouldn't this better use move-if-changed? Which, if "UPD ..." output is
> desired, would then need overriding from what Config.mk supplies?

I don't like move-if-changed, because it removes the original target. On
an incremental build, make will then keep rebuilding the target even
when nothing changed, so we keep seeing the `checkpolicy` command line
when there's otherwise nothing to do.

I could instead introduce a new generic macro, copy-if-changed, which
would compare and then copy (as policy_chk does here).
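
In plain shell, such a hypothetical copy_if_changed helper (names are
illustrative, not the actual Xen macro) would behave like policy_chk:
only touch the target, and print the UPD line, when the contents
actually differ, so make sees an unchanged timestamp on no-op rebuilds:

```shell
# Hypothetical sketch: copy $1 over $2 only when their contents differ,
# leaving $2 (and its timestamp) alone otherwise so dependent rules
# don't re-run needlessly.
copy_if_changed() {
    if ! cmp -s "$1" "$2"; then
        echo "  UPD     $2"
        cp "$1" "$2"
    fi
}
```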

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 25 13:42:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 13:42:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539609.840678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BEh-0005Ff-3i; Thu, 25 May 2023 13:42:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539609.840678; Thu, 25 May 2023 13:42:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BEh-0005FY-0G; Thu, 25 May 2023 13:42:35 +0000
Received: by outflank-mailman (input) for mailman id 539609;
 Thu, 25 May 2023 13:42:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ga2A=BO=citrix.com=prvs=5022cd99a=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q2BEf-0005FS-0l
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 13:42:33 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fdbc6bbe-fb01-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 15:42:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdbc6bbe-fb01-11ed-8611-37d641c3527e
Date: Thu, 25 May 2023 14:42:23 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH 05/15] build: introduce a generic command for gzip
Message-ID: <0365c706-efdc-48fd-b182-b703b63843a3@perard>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-6-anthony.perard@citrix.com>
 <9e8f44e5-7dbd-5369-2ac5-5cf171908648@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <9e8f44e5-7dbd-5369-2ac5-5cf171908648@suse.com>

On Wed, May 24, 2023 at 09:17:09AM +0200, Jan Beulich wrote:
> On 23.05.2023 18:38, Anthony PERARD wrote:
> > Make the gzip command generic and use -9, which wasn't used for
> > config.gz (xen.gz does use -9).
> 
> You mention xen.gz here, but you don't make its rule use this new
> construct. Is that intentional (and if so, why)? (There we also go
> through $@.new, and being consistent in that regard would imo be
> desirable as well.)

Mentioning xen.gz was mainly a justification to say that -9 is OK,
because we already use it there.

But I can't use it for xen.gz because the generic command is added to
Rules.mk. I can probably add cmd_gzip to Kbuild.include instead and
use it for xen.gz (simply with $(call cmd,) instead of if_changed).
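
As a shell sketch of what such a generic gzip step could do (illustrative
only, not the actual cmd_gzip recipe): compress with -9 into a temporary
file and rename it into place, so an interrupted build never leaves a
truncated target.

```shell
# Illustrative sketch of a generic gzip command: -n omits the embedded
# name/timestamp for reproducible output, -9 picks maximum compression,
# and writing to "$2.new" before renaming means the final target only
# ever appears fully written.
gzip_cmd() {
    gzip -n -9 < "$1" > "$2.new" && mv "$2.new" "$2"
}
```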

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 25 13:47:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 13:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539613.840688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BJe-0005rv-Mc; Thu, 25 May 2023 13:47:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539613.840688; Thu, 25 May 2023 13:47:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BJe-0005ro-J4; Thu, 25 May 2023 13:47:42 +0000
Received: by outflank-mailman (input) for mailman id 539613;
 Thu, 25 May 2023 13:47:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oUrw=BO=linaro.org=linus.walleij@srs-se1.protection.inumbo.net>)
 id 1q2BJe-0005ri-2c
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 13:47:42 +0000
Received: from mail-yw1-x112a.google.com (mail-yw1-x112a.google.com
 [2607:f8b0:4864:20::112a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b703002a-fb02-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 15:47:40 +0200 (CEST)
Received: by mail-yw1-x112a.google.com with SMTP id
 00721157ae682-561e919d355so7691267b3.0
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 06:47:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b703002a-fb02-11ed-b230-6b7b168915f2
MIME-Version: 1.0
References: <20230523140342.2672713-1-linus.walleij@linaro.org>
 <20230524221147.5791ba3a@kernel.org> <20230524221247.1dc731a8@kernel.org>
In-Reply-To: <20230524221247.1dc731a8@kernel.org>
From: Linus Walleij <linus.walleij@linaro.org>
Date: Thu, 25 May 2023 15:47:27 +0200
Message-ID: <CACRpkdbUrEZ1FAqMCq35z+g3NF1gx_9c_0vhQw6ioqkyOwaAnw@mail.gmail.com>
Subject: Re: [PATCH] xen/netback: Pass (void *) to virt_to_page()
To: Jakub Kicinski <kuba@kernel.org>
Cc: Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org, 
	netdev@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 25, 2023 at 7:12 AM Jakub Kicinski <kuba@kernel.org> wrote:
> On Wed, 24 May 2023 22:11:47 -0700 Jakub Kicinski wrote:
> > On Tue, 23 May 2023 16:03:42 +0200 Linus Walleij wrote:
> > > virt_to_page() takes a virtual address as argument but
> > > the driver passes an unsigned long, which works because
> > > the target platform(s) uses polymorphic macros to calculate
> > > the page.
> > >
> > > Since many architectures implement virt_to_pfn() as
> > > a macro, this function becomes polymorphic and accepts both a
> > > (unsigned long) and a (void *).
> > >
> > > Fix this up by an explicit (void *) cast.
> >
> > Paul, Wei, looks like netdev may be the usual path for this patch
> > to flow thru, although I'm never 100% sure with Xen.
> > Please ack or LUK if you prefer to direct the patch elsewhere?
>
> Ugh, Wei already acked this, sorry for the noise.

Don't worry about it, Jakub. It's queued in the asm-generic tree
along with patches that make things give nasty compile messages
if they are not typed right. We try to keep the noise level down
this way: silence the warning while fixing the root cause.

If you prefer to take it into the net tree that works too but no need.

Yours,
Linus Walleij


From xen-devel-bounces@lists.xenproject.org Thu May 25 14:00:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 14:00:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539617.840698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BVP-0007NU-OP; Thu, 25 May 2023 13:59:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539617.840698; Thu, 25 May 2023 13:59:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BVP-0007NN-Le; Thu, 25 May 2023 13:59:51 +0000
Received: by outflank-mailman (input) for mailman id 539617;
 Thu, 25 May 2023 13:59:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Uvvp=BO=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1q2BVO-0007N8-76
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 13:59:50 +0000
Received: from smtp-190f.mail.infomaniak.ch (smtp-190f.mail.infomaniak.ch
 [185.125.25.15]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 689a30aa-fb04-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 15:59:47 +0200 (CEST)
Received: from smtp-3-0001.mail.infomaniak.ch (unknown [10.4.36.108])
 by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QRqS21ymJzMqR7q;
 Thu, 25 May 2023 15:59:46 +0200 (CEST)
Received: from unknown by smtp-3-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QRqRs62HQzMrvhY; Thu, 25 May 2023 15:59:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 689a30aa-fb04-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1685023186;
	bh=dkxhZ1HkwQnup1ntsnpZMoLqXeEGTbS0hUQ4TXONwZM=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=z7dLLPtoGNMkprHSpGt6xiWHcyKmtDBlHRXiVa8nrKOQpQVJaMTM9UlR6ArI194Tz
	 QXVWg9U8wg3ep4cqLdnYovLTDgvCD2x6qehmxK+0nYNikwoJuBw6oD50iugwe2xhqn
	 Gl2TgNvsZgLXncnPn3fvZ8S1jZ3eCH7T1FazAaIE=
Message-ID: <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net>
Date: Thu, 25 May 2023 15:59:35 +0200
MIME-Version: 1.0
User-Agent:
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Content-Language: en-US
To: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
 "Christopherson,, Sean" <seanjc@google.com>, "bp@alien8.de" <bp@alien8.de>,
 "dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
 "keescook@chromium.org" <keescook@chromium.org>,
 "hpa@zytor.com" <hpa@zytor.com>, "mingo@redhat.com" <mingo@redhat.com>,
 "tglx@linutronix.de" <tglx@linutronix.de>,
 "pbonzini@redhat.com" <pbonzini@redhat.com>,
 "wanpengli@tencent.com" <wanpengli@tencent.com>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
 "yuanyu@google.com" <yuanyu@google.com>,
 "jamorris@linux.microsoft.com" <jamorris@linux.microsoft.com>,
 "marian.c.rotariu@gmail.com" <marian.c.rotariu@gmail.com>,
 "Graf, Alexander" <graf@amazon.com>,
 "Andersen, John S" <john.s.andersen@intel.com>,
 "madvenka@linux.microsoft.com" <madvenka@linux.microsoft.com>,
 "liran.alon@oracle.com" <liran.alon@oracle.com>,
 "ssicleru@bitdefender.com" <ssicleru@bitdefender.com>,
 "tgopinath@microsoft.com" <tgopinath@microsoft.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
 "linux-security-module@vger.kernel.org"
 <linux-security-module@vger.kernel.org>, "will@kernel.org"
 <will@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>,
 "mdontu@bitdefender.com" <mdontu@bitdefender.com>,
 "linux-hardening@vger.kernel.org" <linux-hardening@vger.kernel.org>,
 "linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
 "virtualization@lists.linux-foundation.org"
 <virtualization@lists.linux-foundation.org>,
 "nicu.citu@icloud.com" <nicu.citu@icloud.com>,
 "ztarkhani@microsoft.com" <ztarkhani@microsoft.com>,
 "x86@kernel.org" <x86@kernel.org>
References: <20230505152046.6575-1-mic@digikod.net>
 <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
From: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>
In-Reply-To: <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha


On 25/05/2023 00:20, Edgecombe, Rick P wrote:
> On Fri, 2023-05-05 at 17:20 +0200, Mickaël Salaün wrote:
>> # How does it work?
>>
>> This implementation mainly leverages KVM capabilities to control the
>> Second
>> Layer Address Translation (or the Two Dimensional Paging e.g.,
>> Intel's EPT or
>> AMD's RVI/NPT) and Mode Based Execution Control (Intel's MBEC)
>> introduced with
>> the Kaby Lake (7th generation) architecture. This allows setting
>> permissions on
>> memory pages in a complementary way to the guest kernel's managed
>> memory
>> permissions. Once these permissions are set, they are locked and
>> there is no
>> way back.
>>
>> A first KVM_HC_LOCK_MEM_PAGE_RANGES hypercall enables the guest
>> kernel to lock
>> a set of its memory page ranges with either the HEKI_ATTR_MEM_NOWRITE
>> or the
>> HEKI_ATTR_MEM_EXEC attribute. The first one denies write access to a
>> specific
>> set of pages (allow-list approach), and the second only allows kernel
>> execution
>> for a set of pages (deny-list approach).
>>
>> The current implementation sets the whole kernel's .rodata (i.e., any
>> const or
>> __ro_after_init variables, which includes critical security data such
>> as LSM
>> parameters) and .text sections as non-writable, and the .text section
>> is the
>> only one where kernel execution is allowed. This is possible thanks
>> to the new
>> MBEC support also brought by this series (otherwise the vDSO would
>> have to be
>> executable). Thanks to this hardware support (VT-x, EPT and MBEC),
>> the
>> performance impact of such guest protection is negligible.
>>
>> The second KVM_HC_LOCK_CR_UPDATE hypercall enables guests to pin some
>> of its
>> CPU control register flags (e.g., X86_CR0_WP, X86_CR4_SMEP,
>> X86_CR4_SMAP),
>> which is another complementary hardening mechanism.
>>
>> Heki can be enabled with the heki=1 boot command argument.
>>
>>
> 
> Can the guest kernel ask the host VMM's emulated devices to DMA into
> the protected data? It should go through the host userspace mappings I
> think, which don't care about EPT permissions. Or did I miss where you
> are protecting that another way? There are a lot of easy ways to ask
> the host to write to guest memory that don't involve the EPT. You
> probably need to protect the host userspace mappings, and also the
> places in KVM that kmap a GPA provided by the guest.

Good point, I'll check this confused deputy attack. Extended KVM 
protections should indeed handle all ways to map guests' memory. I'm 
wondering if current VMMs would gracefully handle such new restrictions 
though.


> 
> [ snip ]
> 
>>
>> # Current limitations
>>
>> The main limitation of this patch series is the statically enforced
>> permissions. This is not an issue for kernels without modules but this
>> needs to
>> be addressed.  Mechanisms that dynamically impact kernel executable
>> memory are
>> not handled for now (e.g., kernel modules, tracepoints, eBPF JIT),
>> and such
>> code will need to be authenticated.  Because the hypervisor is highly
>> privileged and critical to the security of all the VMs, we don't want
>> to
>> implement a code authentication mechanism in the hypervisor itself
>> but delegate
>> this verification to something much less privileged. We are thinking
>> of two
>> ways to solve this: implement this verification in the VMM or spawn a
>> dedicated
>> special VM (similar to Windows's VBS). There are pros and cons to each
>> approach:
>> complexity, verification code ownership (guest's or VMM's), access to
>> guest
>> memory (i.e., confidential computing).
> 
> The kernel often creates writable aliases in order to write to
> protected data (kernel text, etc). Some of this is done right as text
> is being first written out (alternatives for example), and some happens
> way later (jump labels, etc). So for verification, I wonder what stage
> you would be verifying? If you want to verify the end state, you would
> have to maintain knowledge in the verifier of all the touch-ups the
> kernel does. I think it would get very tricky.

For now, in the static kernel case, all rodata and text GPA is 
restricted, so aliasing such memory in a writable way before or after 
the KVM enforcement would still restrict write access to this memory, 
which could be an issue but not a security one. Do you have such 
examples in mind?


> 
> It also seems it will be a decent ask for the guest kernel to keep
> track of GPA permissions as well as normal virtual memory permissions,
> if this thing is not widely used.

This would indeed be required to properly handle the dynamic cases.


> 
> So I'm wondering if you could go in two directions with this:
> 1. Make this a feature only for super locked down kernels (no modules,
> etc). Forbid any configurations that might modify text. But eBPF is
> used for seccomp, so you might be turning off some security protections
> to get this.

Good idea. For "super locked down kernels" :) , we should disable all 
kernel executable changes with the related kernel build configuration 
(e.g. eBPF JIT, kernel modules, kprobes…) to make sure there is no such 
legitimate access. This looks like an acceptable initial feature.


> 2. Loosen the rules to allow the protections to not be so one-way
> enable. Get less security, but used more widely.

This is our goal. I think both static and dynamic cases are legitimate 
and have value according to the level of security sought. This should be 
a build-time configuration.

> 
> There were similar dilemmas with the PV CR pinning stuff.


From xen-devel-bounces@lists.xenproject.org Thu May 25 14:00:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 14:00:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539621.840707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BW1-0000P0-4d; Thu, 25 May 2023 14:00:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539621.840707; Thu, 25 May 2023 14:00:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BW1-0000Ot-1g; Thu, 25 May 2023 14:00:29 +0000
Received: by outflank-mailman (input) for mailman id 539621;
 Thu, 25 May 2023 14:00:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/c8s=BO=citrix.com=prvs=5022a0095=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q2BVz-0000Lg-Pl
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 14:00:27 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ea16526-fb04-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 16:00:26 +0200 (CEST)
Received: from mail-bn8nam12lp2176.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.176])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 10:00:15 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BLAPR03MB5396.namprd03.prod.outlook.com (2603:10b6:208:29e::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17; Thu, 25 May
 2023 14:00:12 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Thu, 25 May 2023
 14:00:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ea16526-fb04-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685023226;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=UzuLL46ifWJg12q8nFWFBXxUcWdquULEjhSJazPxAZg=;
  b=Ok02U3jqZRlO5mcHum+huM3Ta/5bxyjxo/+oulRoeNfyqCzTTNCU4AcU
   FmbUUtlZA7Y1VnNfU9bJtWHyqJduhFU1HowUFS35fw8FZWrLl2XbMaAre
   eIubsa+dJal/EhuxB/NJodqqPCTS9/qbI0bFbNRiNewRrOcSHtSOO0YrX
   U=;
X-IronPort-RemoteIP: 104.47.55.176
X-IronPort-MID: 112878800
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:dLb+aayoD3zKgwu/Jch6t+cCxyrEfRIJ4+MujC+fZmUNrF6WrkVSz
 WEfDG6HaaqPajP1L9l1boS/80JUu8PdyoNlSgA9rCAxQypGp/SeCIXCJC8cHc8wwu7rFxs7s
 ppEOrEsCOhuExcwcz/0auCJQUFUjP3OHfykTrafYEidfCc8IA85kxVvhuUltYBhhNm9Emult
 Mj75sbSIzdJ4RYtWo4vw/zF8EsHUMja4mtC5QRjP60T5TcyqlFOZH4hDfDpR5fHatE88t6SH
 47r0Ly/92XFyBYhYvvNfmHTKxBirhb6ZGBiu1IOM0SQqkEqSh8ai87XAME0e0ZP4whlqvgqo
 Dl7WT5cfi9yVkHEsLx1vxC1iEiSN4UekFPMCSDXXcB+UyQq2pYjqhljJBheAGEWxgp4KTlSx
 +wdMAgBVBa8mb2c67GdSdFmlMt2eaEHPKtH0p1h5RfwKK98BLrlE+DN79Ie2yosjMdTG/qYf
 9AedTdkcBXHZVtIJ0sTD5U92uyvgxETcRUB8A7T+fVxvjWVlVMvuFTuGIO9ltiiX8Jak1zev
 mvb12/4HgsbJJqUzj/tHneE37aVzXyrCNpKfFG+3uZG2laY4EpUMhBVbFurhKiG0FeaV/sKf
 iT4/QJr98De7neDTNPwQhm5q36spQMHVpxbFOhSwBGAzO/Y7hiUAkAATyVdc5o2uckuXzso2
 1SV2dTzClRHr7m9WX+bsLCOoluaJSkQBX8PY2kDVwRt3jX4iIQ6jxaKRNAzFqew14fxAWupn
 G7MqzUijbIOi8JNz7+84V3MnzOroN7OUxIx4QLUGGmi62uVebKYWmBh0nCDhd4oEWpTZgDpU
 KQs8yRG0N0zMA==
IronPort-HdrOrdr: A9a23:5OIwUqwsfY7DfCiltpS2KrPw6L1zdoMgy1knxilNoHxuH/Bw9v
 re+cjzsCWftN9/Yh4dcLy7VpVoIkmsl6Kdg7NwAV7KZmCP1FdARLsI0WKI+UyCJ8SRzI9gPa
 cLSdkFNDXzZ2IK8PoTNmODYqodKNrsytHWuQ/HpU0dKT2D88tbnn9E4gDwKDwQeCB2QaAXOb
 C7/cR9qz+paR0sH7+G7ilsZZmkmzXT/qiWGCI7Ow==
X-Talos-CUID: 9a23:01yjRG/b11mCTrTOMeaVvxYIKNAgakXR9nHvPRCSImVPS4WfW1DFrQ==
X-Talos-MUID: =?us-ascii?q?9a23=3A44xsBw8N+U9js+7hLjcFXu2Qf99Eua+lLmZKq5h?=
 =?us-ascii?q?FgsmUEH1PYS65gB3iFw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,191,1681185600"; 
   d="scan'208";a="112878800"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oJmhjsPiio2sIzs49AtnoHUHIRSnvFXJbTQbXiB3x9t0+w45foD+C2fJ6kCsBRcTx6EltCtabzBTUj4r+UiFFGgYv/f1/GSc+6mrq/OSVG+/8faGx+lHJ8KStTdJqdFMUNRmnmMv0KNM+TrEFynrmZbdeV4nr2XDNGTXqGLxmrBBJ/BSXu0OtsB/lZkVZOE6xXWQorcAoXslLRxOBZSkJs9vYdRa5LzFzGOm95c+Y/wdrRsa/kaddCS2bpA51kjFhzXB/0d89TDztXhBjMhvKImZJperQdcvhnXtQ8Z4ND+q9N4qZYHxqJJ2hz427j8BUhAOC7yHJ5kb9aCeXWEDeA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=W5p5DJMD7fDdbhJkydkRr8jPzzmZu4eJ1AyYSH/VK+Y=;
 b=FPuCRxPaP8ToefFLfrNU9Y+4xgT9kJDHHUpWlCAb9jRNen31mKcuzhNsFukZtLoEiGyKjMnbXSNELo1w7/kdmm4nvGH7aWY/kd2yefMtxsdsFZ0Tc2el+k+iaH9fHLw45XwmjGeyXClPjGuQ0+cAiAlcRkhI9RTGP4gsllnuFEMGCWBBPxvsq7zNp0/iekY370GMi7Vk2zKRFi6UP23rr3CceHOujiQjRR8g0hf2eN5oRnxl248/EQUrEWjhGv+Te78ILQmYmycvk4xQHdG2ImcMdwn3WDY6rq8JZSfnR4qLWDy5eWP1jsy4YhnL9f74VgIRV58LyTnaq0J0+UHCxQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W5p5DJMD7fDdbhJkydkRr8jPzzmZu4eJ1AyYSH/VK+Y=;
 b=Q0PjpSlJLgUEoeALVZyBxlZYaABtB5Fk9NxtM2ULWpymT364HUy6VwDjv10ygp5aP+3uwBHpqAFTqk4RjgVmO3sGrewzBa05Iw1l16vmgAUTGnV2eOG8Nlv3+Uvwqoec0DaEWBj07NaeN1lB0NAqP+hgfE9Fd7yUbHY/NiOhvKU=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Thu, 25 May 2023 16:00:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH v3 06/12] x86/shadow: restrict OOS allocation to when
 it's really needed
Message-ID: <ZG9p5fhfEw3I5bez@Air-de-Roger>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
 <3c05fb6c-f71e-1b86-6146-96f2b3f3c9ae@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3c05fb6c-f71e-1b86-6146-96f2b3f3c9ae@suse.com>
X-ClientProxiedBy: LO4P123CA0679.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:351::12) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|BLAPR03MB5396:EE_
X-MS-Office365-Filtering-Correlation-Id: 83eaa2f9-aaa6-4bbe-3148-08db5d285b32
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	I3rPpO/g4kD2MnMy3kChqVn/vUFGn4HLjIbOje4nL0FYLNtvCKmg4ygBeIqH4kmcI4bAXw7iuRaqS4Y7GlXzM4GugfYjrpK0GC58vwNMiOlpYJQnSxvMRQ2FqWm3feK7N6MC+lg0iLrLniSJ+2Gp6bozQVwQ26n3ju1ztcsk1GhUXZAuzIW4wScsUQhvputVIAMTDIRG9RVJT68+EH12cTsvKCn4aTVFYBX3ZjhroVQ4MDx9RzWORHY9rY3eVI5xj2pRLy4pKb5ycfbkOtk5SaxrHeayqWUBFIZqwnbUl/0HznV8GtkYbg52aJiCQLzVi3EgQhJJJm2A2AeRoxwngcLblQ/BNp0LFVeeo9MT3waFQetRMk5WCm9EiIVT0gG1EYOuK4jZQG1y4JmGik9T+48aTaB/kUgwL6w0cMnnhviYzJgRfozx+XeLEb3K4XLlsWbrx2BMdu+ov+Hd7eDOYhSzaKm9ezXdjmngWrvtG/O2tGM3QinOczUMTjVnYtx0UyTshzerD14JiD1fWRGAWAqeu885YKdkd+r5aY5XAvF0uXCQ1NddXYY44lB3ML99
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(346002)(376002)(366004)(136003)(396003)(39860400002)(451199021)(478600001)(82960400001)(38100700002)(8676002)(8936002)(5660300002)(6506007)(6512007)(9686003)(26005)(6916009)(85182001)(66946007)(66556008)(66476007)(83380400001)(33716001)(316002)(186003)(4744005)(2906002)(41300700001)(86362001)(6486002)(4326008)(6666004)(54906003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dmp4ay9oN1BwN2NEOWhYeUprUERtUS9obXdKbmV2UUtDOXMxeWd6azdMZXYy?=
 =?utf-8?B?SU50dU9KcERqeE16VWtzUzFGOVNiaXFDNFQvNGdBQ2dESmRWTWdkWUxSZG8y?=
 =?utf-8?B?MkErOFpkOXdhR3JNUmVhaDRndUtScHZqRitRaVVuZUdKYnR4ZGhmbVpCWXBK?=
 =?utf-8?B?YW1ZM1k5b2xYVWhsSlpQVHEyNmVJaDRSc1Z5cTc2cHJkYTJHcHVkRlFwSnM3?=
 =?utf-8?B?R3JiSHhrR3FadEp1Zk1PZVRXdFRpYnBNSUNvd0I5NE9zenJSTlZkMkpMb09w?=
 =?utf-8?B?a3lLc2E2KytpTFdDeDl3MFR3MVUyRzI2amNoL0p6VDJuVEVsUnJqWjNSNEFh?=
 =?utf-8?B?dUFwYlkrbDdkTFd1ZFk2WWtBTW5SOCtRa0NYOVgvWWZDNjZjTDNvemVtbkdx?=
 =?utf-8?B?Z1FLUHJabXZWR2VnM3ZHWE1EMmRseEZMbWlCbjlGMUQrdUpWYzJUbWVVTnhl?=
 =?utf-8?B?MzVJWjNOR0xsQXU3dFNib05VOVk5WnlzOHhjZDMrclRDcVlVc29YdkRxZ21K?=
 =?utf-8?B?cG5KTUw5c1RtbW9UOEFrTVZpVnVOdzUvamZ4T2h1ZkQzV1hXZERocE04NDVO?=
 =?utf-8?B?cnA1WVdqcHFCUStWMW5yNWRJL0VQT29LNjMrTUZQN0RxYnVUcnRLRjVmTVRG?=
 =?utf-8?B?aEV2dWwxMlZHL1V5TVUxemR5cGp3bGh1RTBmYXRzeHF6eDJmdER3RXFGR0hH?=
 =?utf-8?B?WUxnZlBhNHlvdU5iazllS1JtdVlXNEFQeTd3aFdjVHdRelJkZndJR0VCT0Zy?=
 =?utf-8?B?Uk1jc3Y4dFVHa1M1eFZRV09waE5OWkNmTnlOU2NMOU5Hc2dGYzlGTGR2T2dq?=
 =?utf-8?B?ZzRyMnpPaTNpTHpJeFhuN1hVNlBqWVBnR25nb20zcVhFK0JoTW51TUxBbklJ?=
 =?utf-8?B?d3BYZE5lZVhvd2JiNTlrQ3Z0M2xicUg4aDVGNDhrZ3N5K2NaZGl5RmlOWkdn?=
 =?utf-8?B?Z2tqQjV1TjFxYUV4MW9PeGR1Ty8yTHFuLzVualNySHM1TlNqc3M0a1FvSHlh?=
 =?utf-8?B?Q0FzZUtFTGd6MytPL2RpWnZKZUxrZ2hzc3ZOUHRzbEVtc0pkZDZKa0hTQUFT?=
 =?utf-8?B?ZjNjNXMycE02em1wUm1XYzlpNGEzS250OStHQW9OaWpqVHpZYlNxaGZXMDA4?=
 =?utf-8?B?aDdZTXhsaDVOQlVmaGRwbHNqVXpLTDVOS1VsdSs2K3VqSTUrdVJZTnRMc0RK?=
 =?utf-8?B?b2RzUzhVLzVoU2RCSnpNTmRlR2dHdmNnQThONUc4NEZ2SjMzVWVNbEZOUHlP?=
 =?utf-8?B?Tjdlczh4OW00VFIvdHFlWmRuUXExekdlSnZOdWtTUlZsQzh0VlA4ZmU5TjM4?=
 =?utf-8?B?VnZ4bzliOEN2cU4yUTZVVTFTVHN6Y0p1VnozQm9UekFNaTc5ODhENE9vTVFJ?=
 =?utf-8?B?Tm0yL2xSQlFacDdWT2I1VzBsbG5Samg2ZDlkMERBaFFVMDByR0NmK3owRmxi?=
 =?utf-8?B?Y0FIa1dEQ044enJNZEltWjRTVWRaQ2ZaUWJ4N3oyTkFCaXprQTY1Uk13ZndY?=
 =?utf-8?B?L1JCb3JrdjBZQTZJN0xwMEhvT0E2Z04wYnZTTEZNdTZET2ZUZFRhWS96K0Mv?=
 =?utf-8?B?V3FZcGowakpTUStqQ0hyWE9ZRHpwMkIzVzVPL0s1ZTlMWHp6TGZPeVQ4cWxR?=
 =?utf-8?B?N2lZVXZvSHJnVkgxbFhKRkVtQ011UHhMRUd2cXVxcTUreEp5RjNaOXpRRnli?=
 =?utf-8?B?WmhTWEg1RVMrK0tqdklZSzNjcmcyWCtKcXhOOHl1ZDdwaUlVU0laZlRVTUZp?=
 =?utf-8?B?bXhhSkl1bGRyWDdvNnk1VkxCSE15R05HOVlFSlJqWnVNNmMzQkdmaHE5R1ZU?=
 =?utf-8?B?V1RCYzZiWU8rNE1uTlNOV1Y1TlY4dm81TFFxa2ZaL1oxZXNwRUNaRjRpdHJw?=
 =?utf-8?B?ZWdnK1NRUzcrNEs2YWszd3gveHBDaWJWMERsOUNDTzdmVVpkaE5GR2tKWXRI?=
 =?utf-8?B?N2JJR1pBZS9naVhTR25ycmZ5U0JjcDBuN1VsSDg2YlRSSEFLdHpUT0c0N2lx?=
 =?utf-8?B?eDBDOU54d3JCQ2VidEJEb3Z2YTlOU2VmOGR2VVIxOTZaYk5TOFlyOXRMbUov?=
 =?utf-8?B?cndPcVJMd0hyTnlQUW9rUUI2MGk4ayt2WjJrLzFuNWlSaFAzc0Y2MGRONjBU?=
 =?utf-8?B?d3pnTldBcDdaWXZORTZob3doaXIvOS9velh6aVNhanlxdk84NnQ4dHBGeDdk?=
 =?utf-8?B?U0E9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	FU8wEwqmwbsOZdlj6mavlYez2RfJn4Q0nzHNztsVkszq2wcUCQUgEXR6Q1yOCWBzFj7PYg+7g5cMv9183q0i4imoC+k2kICWeIA4Z9UOG8bKOfuH+WWe7q/vz6DspfNgDGAgAThh4g0ESAqOxQv9fWjyeT9gvKyLhhu68Jh0hjkXyezWsiv+R76VPWceH69LhomAOyFDZS8+iTf+nLTjc4KiEcBY/TTgqy+uev9wc+TWcqwxiXLDEvVhARToZSrCigfYbpi9svAYJzop+Gp4vRfdQkuykVrHZCaG4HXDQIhnWfjCHqN4tykymVS02Emc0vdUltdowCZjW6GTlJnJfyzJ/2Jo/I88yq0TC27DHEG7NZ+p9mS5lgsjDm7WKHfCYHJAj1MBwf24pA2zk02iqFFG2JekEfksblXgy43p80AzhxBnQooq098vOw9eDM+uIBKPArUQN4iG6yp1CwliO47CBTH0iBEkZfAwdVbimOoYz7i6v8WXE5QvVFHfhKkWOUERakBsz8b8sVLoCcWjReUtH+7LnHYsB82bGnw88lgqHQGN97AmbNtvFqeopcM9YVAIew6P+er7WeiVxCb5ds6BndOXpVq6OwBus+YekqNZmyfJAhmH/Kne2CB4A6I5cknI875Faxz1QRv2bQHMClRy+o3utvpiGUA20x2bKAKIAsbLowiIcgq8Bxs2I3WAFsA265FV78irUkMXxRloQU+4mNuNTDu01sFCjQOShNod2JZQ/a3HOFmNn34cIbH3zvu4F56xKzxdsodPwVagU6Gijx0Dgqu0/VVFp3cCJP7D/kDpjeJWUltvyyNFQj4w
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 83eaa2f9-aaa6-4bbe-3148-08db5d285b32
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 14:00:11.6462
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: iIZDqaOzGjzJWnXGTkwmXmbIRSVWuyWPClNaVferEhSrSfIHUax15FBnMHEzXXzN7GsBiBg8z3YsmcWySgqf+w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR03MB5396

On Tue, May 16, 2023 at 09:40:22AM +0200, Jan Beulich wrote:
> PV domains won't use it, and even HVM ones won't when OOS is turned off
> for them. There's therefore no point in putting extra pressure on the
> (limited) pool of memory.
> 
> While there also zap the sh_type_to_size[] entry when OOS is disabled
> altogether.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 25 14:01:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 14:01:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539627.840718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BWj-0000xZ-DO; Thu, 25 May 2023 14:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539627.840718; Thu, 25 May 2023 14:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BWj-0000xS-9y; Thu, 25 May 2023 14:01:13 +0000
Received: by outflank-mailman (input) for mailman id 539627;
 Thu, 25 May 2023 14:01:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/c8s=BO=citrix.com=prvs=5022a0095=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q2BWh-0000xB-Od
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 14:01:11 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 98f5d079-fb04-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 16:01:10 +0200 (CEST)
Received: from mail-dm6nam10lp2100.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.100])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 10:01:03 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BN9PR03MB6106.namprd03.prod.outlook.com (2603:10b6:408:11b::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17; Thu, 25 May
 2023 14:01:00 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Thu, 25 May 2023
 14:01:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98f5d079-fb04-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685023270;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Lrt0E0ucrqu2Z7oO5HZFoTaczFhtHcQkEXDGHr248hY=;
  b=Q9qugLWpk/LRncp2GgG5Y5WpCZGkZKkKFLBIri4vYbS4MYfm7t40KYS2
   Twu7+hQx1qyhfMyEncVeUku9TlHIQPsQCi6qMjl6bAVc0HHqiZk16l6HH
   KfHVt2CadXw9ZEgLYq682klkGaYwEK9EM+qcp8OqAS9jvbdPe4jmqP77l
   k=;
X-IronPort-RemoteIP: 104.47.58.100
X-IronPort-MID: 109167145
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
Date: Thu, 25 May 2023 16:00:54 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH v3 07/12] x86/shadow: OOS doesn't track VAs anymore
Message-ID: <ZG9qFg+NoGx7H/c3@Air-de-Roger>
References: <184df995-e668-1cea-6f9f-8e79a1ffcbbd@suse.com>
 <834a38d1-6917-7aa8-c560-0c943abb44c5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <834a38d1-6917-7aa8-c560-0c943abb44c5@suse.com>
MIME-Version: 1.0

On Tue, May 16, 2023 at 09:40:50AM +0200, Jan Beulich wrote:
> The tracking lasted only for about two weeks, but the related comment
> parts were never purged.
> 
> Fixes: 50b74f55e0c0 ("OOS cleanup: Fixup arrays instead of fixup tables")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 25 14:23:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 14:23:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539635.840728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BsG-0003Wh-5z; Thu, 25 May 2023 14:23:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539635.840728; Thu, 25 May 2023 14:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2BsG-0003Wa-2f; Thu, 25 May 2023 14:23:28 +0000
Received: by outflank-mailman (input) for mailman id 539635;
 Thu, 25 May 2023 14:23:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2BsF-0003WT-5A
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 14:23:27 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on2062c.outbound.protection.outlook.com
 [2a01:111:f400:fe13::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b4600bf4-fb07-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 16:23:23 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7310.eurprd04.prod.outlook.com (2603:10a6:800:1a2::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Thu, 25 May
 2023 14:23:20 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 14:23:20 +0000
X-Inumbo-ID: b4600bf4-fb07-11ed-8611-37d641c3527e
Message-ID: <35a85941-1e4e-9361-1da4-9b5b3d8a9b99@suse.com>
Date: Thu, 25 May 2023 16:23:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 04/15] build: hide policy.bin commands
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-5-anthony.perard@citrix.com>
 <c75368db-6444-6910-487c-8ac9477a6785@suse.com>
 <ce86e239-5cc1-4f7c-ba6b-de6eb9c1442d@perard>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ce86e239-5cc1-4f7c-ba6b-de6eb9c1442d@perard>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 25.05.2023 15:34, Anthony PERARD wrote:
> On Wed, May 24, 2023 at 09:11:10AM +0200, Jan Beulich wrote:
>> On 23.05.2023 18:38, Anthony PERARD wrote:
>>> --- a/xen/xsm/flask/Makefile
>>> +++ b/xen/xsm/flask/Makefile
>>> @@ -48,10 +48,15 @@ targets += flask-policy.S
>>>  FLASK_BUILD_DIR := $(abs_objtree)/$(obj)
>>>  POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
>>>  
>>> +policy_chk = \
>>> +    $(Q)if ! cmp -s $(POLICY_SRC) $@; then \
>>> +        $(kecho) '  UPD     $@'; \
>>> +        cp $(POLICY_SRC) $@; \
>>
>> Wouldn't this better use move-if-changed? Which, if "UPD ..." output is
>> desired, would then need overriding from what Config.mk supplies?
> 
> I don't like move-if-changed, because it remove the original target. On
> incremental build, make will keep building the original target even
> when not needed. So we keep seeing the `checkpolicy` command line when
> there's otherwise nothing to do.
> 
> I could introduce a new generic macro instead, copy-if-changed, which
> would do the compare and copy (as policy_chk does here).

Ah, yes, I think I see what you mean. I'd be fine with copy-if-changed,
ideally accompanied by some rule of thumb for when to prefer it over
move-if-changed.
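
For illustration, the compare-and-copy behaviour under discussion can be
sketched as a plain shell helper (a hypothetical `copy_if_changed`, names
assumed; the real macro would of course be written in make):

```shell
#!/bin/sh
# Hypothetical copy_if_changed SRC DST: copy SRC over DST only when the
# contents differ.  On a no-op incremental build the destination's
# timestamp is left untouched, so dependents are not rebuilt and no
# command line is echoed.
copy_if_changed() {
    if ! cmp -s "$1" "$2"; then
        echo "  UPD     $2"
        cp "$1" "$2"
    fi
}

printf 'policy v1\n' > src.bin
printf 'stale\n'     > dst.bin
copy_if_changed src.bin dst.bin   # contents differ: prints UPD, copies
copy_if_changed src.bin dst.bin   # now identical: prints nothing
```

Unlike move-if-changed, the source file survives the copy, so make does
not regenerate it on the next run.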

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 14:32:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 14:32:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539641.840738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2C0O-00054K-69; Thu, 25 May 2023 14:31:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539641.840738; Thu, 25 May 2023 14:31:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2C0O-00054D-2C; Thu, 25 May 2023 14:31:52 +0000
Received: by outflank-mailman (input) for mailman id 539641;
 Thu, 25 May 2023 14:31:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ga2A=BO=citrix.com=prvs=5022cd99a=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q2C0N-000547-2v
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 14:31:51 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e11b3980-fb08-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 16:31:49 +0200 (CEST)
X-Inumbo-ID: e11b3980-fb08-11ed-b230-6b7b168915f2
Date: Thu, 25 May 2023 15:31:41 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, Roger Pau
 =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ross Lagerwall <ross.lagerwall@citrix.com>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH 07/15] build: move XEN_HAS_BUILD_ID out of Config.mk
Message-ID: <87c6aa9f-57cb-497f-934a-ff9506607780@perard>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-8-anthony.perard@citrix.com>
 <9528ea84-9f10-d10a-b2cf-798434d48a59@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <9528ea84-9f10-d10a-b2cf-798434d48a59@suse.com>

On Thu, May 25, 2023 at 01:56:53PM +0200, Jan Beulich wrote:
> On 23.05.2023 18:38, Anthony PERARD wrote:
> > Whether or not the linker can do build id is only used by the
> > hypervisor build, so move that there.
> > 
> > Rename $(build_id_linker) to $(XEN_LDFLAGS_BUILD_ID) as this is a
> > better name to be exported as to use the "XEN_*" namespace.
> > 
> > Also update XEN_TREEWIDE_CFLAGS so flags can be used for
> > arch/x86/boot/ CFLAGS_x86_32
> > 
> > Besides a reordering of the command line where CFLAGS is used, there
> > shouldn't be any other changes.
> > 
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one nit:
> 
> > --- a/xen/scripts/Kbuild.include
> > +++ b/xen/scripts/Kbuild.include
> > @@ -91,6 +91,9 @@ cc-ifversion = $(shell [ $(CONFIG_GCC_VERSION)0 $(1) $(2)000 ] && echo $(3) || e
> >  
> >  clang-ifversion = $(shell [ $(CONFIG_CLANG_VERSION)0 $(1) $(2)000 ] && echo $(3) || echo $(4))
> >  
> > +ld-ver-build-id = $(shell $(1) --build-id 2>&1 | \
> > +					grep -q build-id && echo n || echo y)
> 
> I realize you only move this line, but I think we want indentation here
> to improve at this occasion:
> 
> ld-ver-build-id = $(shell $(1) --build-id 2>&1 | \
>                           grep -q build-id && echo n || echo y)
> 
> I'll be happy to adjust while committing. Which raises the question: Is
> there any dependency here on earlier patches in the series? It doesn't
> look so to me, but I may easily be overlooking something. (Of course
> first further arch maintainer acks would be needed.)

I don't see any dependency on earlier patches, so it looks fine to apply
earlier. Thanks.
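
As a standalone sketch of the quoted ld-ver-build-id probe (with a stub
linker standing in for a toolchain that lacks the option, so the example
does not depend on the host's real ld):

```shell
#!/bin/sh
# Shell rendition of the ld-ver-build-id check: a linker that does not
# know --build-id echoes the unknown option back in its error message,
# so seeing "build-id" in the combined output means the flag is NOT
# supported (answer "n"); otherwise the answer is "y".
probe_build_id() {
    if "$1" --build-id 2>&1 | grep -q build-id; then
        echo n
    else
        echo y
    fi
}

# Stub standing in for an old linker without --build-id support:
cat > oldld <<'EOF'
#!/bin/sh
echo "oldld: unrecognized option '--build-id'" >&2
exit 1
EOF
chmod +x oldld

probe_build_id ./oldld    # prints: n
```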

> > --- a/xen/test/livepatch/Makefile
> > +++ b/xen/test/livepatch/Makefile
> > @@ -37,7 +37,7 @@ $(obj)/modinfo.o:
> >  
> >  #
> >  # This target is only accessible if CONFIG_LIVEPATCH is defined, which
> > -# depends on $(build_id_linker) being available. Hence we do not
> > +# depends on $(XEN_LDFLAGS_BUILD_ID) being available. Hence we do not
> >  # need any checks.
> 
> As an aside, I'm a little confused by "is only accessible" here. I don't
> see how CONFIG_LIVEPATCH controls reachability. At the very least the
> parent dir's Makefile doesn't use CONFIG_LIVEPATCH at all.

Yes. `make tests` works just fine without CONFIG_LIVEPATCH. Even when
that comment was introduced, it didn't seem like there was anything
preventing the target from being run.

The only thing is that `make tests` doesn't build the tests if xen-syms
was built without a build_id, so the status of CONFIG_LIVEPATCH doesn't
seem to matter.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 25 14:37:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 14:37:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539644.840747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2C5D-0005g8-Na; Thu, 25 May 2023 14:36:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539644.840747; Thu, 25 May 2023 14:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2C5D-0005g1-Kq; Thu, 25 May 2023 14:36:51 +0000
Received: by outflank-mailman (input) for mailman id 539644;
 Thu, 25 May 2023 14:36:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ga2A=BO=citrix.com=prvs=5022cd99a=anthony.perard@srs-se1.protection.inumbo.net>)
 id 1q2C5C-0005fv-CP
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 14:36:50 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 93121e8e-fb09-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 16:36:47 +0200 (CEST)
X-Inumbo-ID: 93121e8e-fb09-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685025407;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=WYlecdcfITSHWJwnhw60PwZawoXJ0bmlqJMKFxXgcAc=;
  b=CaOcVg4xhy88sXWDR36Kg52r93lrM2K4SMhvA0JTz+Yq+/AX2W8++osR
   reBUjDWp9FR02wNi4Ze4oi3AjVGB4dXCwyApp9IQJDrO2UdV/vObKkXYf
   /8qkWqhonjA58Usla8iXIyo9P5YuFYE6BFyKhsRBHmId168XdvvMzvItR
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110291947
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-Talos-CUID: 9a23:7OM+92Gvc0/aUrbCqmJ2rmIsBs4hVkbnzXL+fRSSJ1tDdYa8HAo=
X-Talos-MUID: =?us-ascii?q?9a23=3AGWu+hQ2VSJ2BhrkXjG9VQ9h5VDUjpLWXLEAwt5M?=
 =?us-ascii?q?9nJO1N3ZaGDTEzzSXe9py?=
X-IronPort-AV: E=Sophos;i="6.00,191,1681185600"; 
   d="scan'208";a="110291947"
Date: Thu, 25 May 2023 15:36:40 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH 08/15] build: use $(filechk, ) for all
 compat/.xlat/%.lst
Message-ID: <9b27e21a-6a9a-420d-b74f-41e4b7ada2fb@perard>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-9-anthony.perard@citrix.com>
 <b04a433a-0606-e473-cb1e-41b45bd1079f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <b04a433a-0606-e473-cb1e-41b45bd1079f@suse.com>

On Thu, May 25, 2023 at 02:04:00PM +0200, Jan Beulich wrote:
> On 23.05.2023 18:38, Anthony PERARD wrote:
> > Make use of filechk mean that we don't have to use
> 
> I think you mean "Making use of filechk means ...", or else it reads as
> if you're changing how filechk behaves. (I'd again be happy to adjust
> while committing, provided you agree; here it looks pretty clear that
> there is no dependency on earlier patches, and there's also no need to
> wait for further acks.)

The change sounds good. And yes, no dependency.

> > $(move-if-changed,). It also mean that will have sometime "UPD .." in
> > the build output when the target changed, rather than having "GEN ..."
> > all the time when "xlat.lst" happen to have a more recent modification
> > timestamp.
> > 
> > While there, replace `grep -v` by `sed '//d'` to avoid an extra
> > fork and pipe when building.
> > 
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 25 14:40:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 14:40:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539648.840757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2C8i-00076b-6P; Thu, 25 May 2023 14:40:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539648.840757; Thu, 25 May 2023 14:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2C8i-00076U-3d; Thu, 25 May 2023 14:40:28 +0000
Received: by outflank-mailman (input) for mailman id 539648;
 Thu, 25 May 2023 14:40:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2C8g-00076O-DI
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 14:40:26 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2051.outbound.protection.outlook.com [40.107.7.51])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 15cd379c-fb0a-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 16:40:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GVXPR04MB9802.eurprd04.prod.outlook.com (2603:10a6:150:110::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Thu, 25 May
 2023 14:39:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 14:39:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15cd379c-fb0a-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QyvSfGat793pFeYC1aQElEm0tx+434NJKlA3OxBJaMnu2JHukG9jNd5DeIn/ZZmI1gdvgXTyDqyBoyaWV2B3dx39PLipiIobDHRLCGKzu9B78FfKFggV4wfKyTZOprLQM84xWxhzo07ZLHJDgp100S0zlU5ed0DBnBfTvumKy48eBDb+pd5F+Gcibapjm4QdD/1p/+hgVzkzj0iuj1TKt7+sljYlZF/0e0EYHhVN4mJVWND9IRm8EZYgIzMZG2O8yqsTTltxKM2OQCl2raW1ckU0erNAfAv8kruvmYfO1Sla/ACRs6jy7thDlt+4G0D4CLSuwiKZq53t+zQUxztUqw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FYNH3iEaPXt1PGD9HSjTtWX/XIVVJiQGo3QmL6Kn4+s=;
 b=ELC/6bnOts+J0QW22JfheEHQo6Ppt9RAicA8npsszRwNTu1gBEL9SVgoa2TSabiPn6FCRErGrLwzInIXhnA8c7E/b1ViPwumtK3B+lJvNuDhMS1YfhygUFNJXBiVdchiY1RN4tNlA0q0RY+SYwfNNLjpmJt+ZQyuCpvL02LE2VrfdB+DiwBH43sYjaGNvcIBUrL+f20041msrjqpx77o2qjPKi7cxmNhbWLBCRyjWZhg3v8ay8ZXepLcoyfManC74x81kxQEU8L/iyAPOVqWcI9QBw81w7yUlHhZPzSvMxdm7o+/cHIX47ciNPMvMX+UKB4n+xobrkzNk2T5xlGFAw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FYNH3iEaPXt1PGD9HSjTtWX/XIVVJiQGo3QmL6Kn4+s=;
 b=Jbl6HDKQgIB7omRnIr0ovR+SX97lkMFDT21U3vxPEYd4YaluFSxq+iBrMhcfRelUYv9nvVv9bzQV7CcxWbL+fV3pSVJ6qgTc+LSqyCq8G90PVBkcIt3TGx1KRTF9liB4cUuPQB+2Sv21zDLTPU1+3zHc6JFc5Tupx2OKAFqqBPv7SRWQtyj1VdZq3/IE/0tDtLS6SWg1LnXfGjah+CXxua1iO9kHE4nyvjGOg3JVzqOqe2ui3KSPjS8zbDmbRQh38V25a05ZwseQ4lxaQTsR/+eTWYMoDajC8Lryf0kdOAK4ALYcrD8uVp2kqHBLTwbmqUV6f9sYZk94faBqWE17bA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
Date: Thu, 25 May 2023 16:39:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4zx+TvUWTCEMh3@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZG4zx+TvUWTCEMh3@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0200.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ad::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|GVXPR04MB9802:EE_
X-MS-Office365-Filtering-Correlation-Id: bee7947e-441a-411f-e46a-08db5d2de75a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bee7947e-441a-411f-e46a-08db5d2de75a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 14:39:54.1657
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: cgqV7NbURMbHy0gf4YlNo3JNK/Kl5AcCcs8sq58bnOOoSxoxafJTGBz8eyHV8ShJzbhCINfre/f9ptMSF/BpBA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR04MB9802

On 24.05.2023 17:56, Roger Pau Monné wrote:
> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
>> --- a/xen/drivers/vpci/header.c
>> +++ b/xen/drivers/vpci/header.c
>> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>>      struct vpci_header *header = &pdev->vpci->header;
>>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>>      struct pci_dev *tmp, *dev = NULL;
>> +    const struct domain *d;
>>      const struct vpci_msix *msix = pdev->vpci->msix;
>>      unsigned int i;
>>      int rc;
>> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
>>  
>>      /*
>>       * Check for overlaps with other BARs. Note that only BARs that are
>> -     * currently mapped (enabled) are checked for overlaps.
>> +     * currently mapped (enabled) are checked for overlaps. Note also that
>> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
>>       */
>> -    for_each_pdev ( pdev->domain, tmp )
>> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
> 
> Looking at this again, I think this is slightly more complex, as during
> runtime dom0 will get here with pdev->domain == hardware_domain OR
> dom_xen, and hence you also need to account that devices that have
> pdev->domain == dom_xen need to iterate over devices that belong to
> the hardware_domain, ie:
> 
> for ( d = pdev->domain; ;
>       d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )

Right, something along these lines. To keep loop continuation expression
and exit condition simple, I'll probably prefer

for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
      ; d = dom_xen )
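[Editorial illustration, not part of the thread: the two-pass loop header Jan sketches can be exercised in isolation. The types below are hypothetical stand-ins for Xen's `struct domain`, and the `break` on reaching `dom_xen` is an assumption, since the thread only discusses the loop header, not its exit.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for Xen's domain objects. */
struct domain { int id; };

static struct domain hwdom_obj = { 0 }, domxen_obj = { 1 };
static struct domain *hardware_domain = &hwdom_obj;
static struct domain *dom_xen = &domxen_obj;

/*
 * Visit the relevant domain and then dom_xen, once each: start from the
 * hardware domain when the device is owned by dom_xen, and always make
 * dom_xen the second (and final) pass.  Records the visit order in out[]
 * and returns the number of domains visited.
 */
static int visit_domains(struct domain *pdev_domain, struct domain *out[2])
{
    int n = 0;
    struct domain *d;

    for ( d = pdev_domain != dom_xen ? pdev_domain : hardware_domain; ;
          d = dom_xen )
    {
        out[n++] = d;
        if ( d == dom_xen )  /* assumed exit: dom_xen is always last */
            break;
    }

    return n;
}
```

Either starting point (hardware domain or dom_xen-owned device) yields the same two-element visit sequence, which is the property the loop header is meant to guarantee.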

> And we likely want to limit this to devices that belong to the
> hardware_domain or to dom_xen (in preparation for vPCI being used for
> domUs).

I'm afraid I don't understand this remark, though.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 14:54:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 14:54:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539653.840768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CMC-0000Cn-Ch; Thu, 25 May 2023 14:54:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539653.840768; Thu, 25 May 2023 14:54:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CMC-0000Cg-9S; Thu, 25 May 2023 14:54:24 +0000
Received: by outflank-mailman (input) for mailman id 539653;
 Thu, 25 May 2023 14:54:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/c8s=BO=citrix.com=prvs=5022a0095=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q2CMA-0000Ca-Sh
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 14:54:23 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0694db6b-fb0c-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 16:54:20 +0200 (CEST)
Received: from mail-bn7nam10lp2104.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.104])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 10:54:17 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH0PR03MB6770.namprd03.prod.outlook.com (2603:10b6:510:121::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17; Thu, 25 May
 2023 14:54:15 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6411.028; Thu, 25 May 2023
 14:54:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0694db6b-fb0c-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685026460;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=CQcQeL4dM6nru1H9wEarA6jMvS6wNi2oY0CLI5DTXjQ=;
  b=arkBQzfHuxgzMYgu8yjJhirIWYqWUfBrZEjqTA9pq0/Sbg/Eyc82nnMR
   AeOHc1faKf7qL2h4yyy8YUfK5KAAmUwhB7eI/yZ5scq1FpMrXGvU73XXg
   2AWrUD56XykAnj8w+YSgRH5EnmeEQ1Pkz0J57dHnqbqWDfd5VPo7Zxeua
   I=;
X-IronPort-RemoteIP: 104.47.70.104
X-IronPort-MID: 110294510
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:UyA/VGCnU3VPYpn6E3lG00RPQuIuSFmHl1ngMU+kOD80Z7LAHA==
X-Talos-MUID: 9a23:oYShOAiH7KjFAhKeeI9ENcMpKPpl0+eKWEAxqqoZocaaPjNOOGmkpWHi
X-IronPort-AV: E=Sophos;i="6.00,191,1681185600"; 
   d="scan'208";a="110294510"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ahwcnJP1mTEv4Dl7J5kvEQWlU6eIzP6LMQurKpcEz+M663alUno6mZvJpbjiSnTY93EVVd3GmGlObCn0q5qv8GMoLVfxHbYd9BFn+NvDfR0G9DsITNhdNRpw2KJl1fYgbmZkhx75ltZ3wIdrYmxDe91tEqq0PHRYnkU2qrKJetHzxAw45exaPgyvhAX1d/pnF7KrL0ngBWrJAefm4q7GdRkQIvgDlIhIY/1sh+A8P69l0Si+QeWXUPGSlBNulmcuVjSE1XNvZz0aRR6RonmARJ4aisz6bu0XWd3GCNGOgagQO+eyF2WSkWiN7UBvutL6T8IzE2fUne7RGsaPsZ8M4Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+7lYlhK5AwmL/nNaSMu10HnFnCudJ9cJnJ6lzC2Kzwg=;
 b=SCnbPMYJth3NZCKNihCUOdWEBK8NdU5WkzhK6YE8pjykAzgIvx5F3QJktZkjbKiV1PoCjugDQ/KaZie7lmVftCS+aY76cTxfyIhOrGZfHmvUVCQC2pnohumoAnEyPVnZr6OMRSq/cdZGLGDHyP/82JD/tk2v0p/YyeCredZbIciBXQIicyUJ0LdKyLjBvLirHMMaRC3VlxtC6Z6kOy0Pkvy9nQBJe7OP1wiLce8JdC8l9Vd9jGEuNAWvyTm70pU5tiUt5uYDfNa9b7BYSIluNWb6JejuBkQpgVL9wZuS5pKZIrWvZY37iGF6ksJzrhRQ6Qx0HMypuRD1SY5BtIHb9A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+7lYlhK5AwmL/nNaSMu10HnFnCudJ9cJnJ6lzC2Kzwg=;
 b=mc5A5FmfwaKJBgDY9ZSc2OI2AGMjEF+vjexl8H88okrdqRMbaPDJ32d2lffkM9M9tVWqyVBfUJ+PMlFOPtBbFeUxaanfCntP/+j8XHIiQ0adUljEkUs6VrltoQKABYuHo9sWSUUP41Xh4cjhM3mMQ3Suxf1JJp34xE4TbO29kJc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [PATCH v2] vpci/header: cope with devices not having vpci allocated
Date: Thu, 25 May 2023 16:54:05 +0200
Message-Id: <20230525145405.35348-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.40.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO3P265CA0007.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:bb::12) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH0PR03MB6770:EE_
X-MS-Office365-Filtering-Correlation-Id: 22c1ef7a-109d-4854-e667-08db5d2fe8b7
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 22c1ef7a-109d-4854-e667-08db5d2fe8b7
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 14:54:15.3867
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EhHaZjXp+ikUELp7Ztdd08I8mx/osNQAYjvjXeq5FOzL+Hr8BCWFStgvrPByQ2cpkteQPZA3tt8ILPGUoEI9Wg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6770

When traversing the list of pci devices assigned to a domain, cope with
some of them not having the vpci struct allocated. It should be
possible for the hardware domain to have read-only devices assigned
that are not handled by vPCI; such support will be added by further
patches.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Do not mention domU plans.
---
 xen/drivers/vpci/header.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index ec2e978a4e6b..766fd98b2196 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -289,6 +289,14 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
      */
     for_each_pdev ( pdev->domain, tmp )
     {
+        if ( !tmp->vpci )
+            /*
+             * For the hardware domain it's possible to have devices assigned
+             * to it that are not handled by vPCI, either because those are
+             * read-only devices, or because vPCI setup has failed.
+             */
+            continue;
+
         if ( tmp == pdev )
         {
             /*
-- 
2.40.0
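[Editorial illustration, not part of the patch: the guard it introduces is an instance of a general pattern, skipping list entries whose optional sub-structure was never allocated. The types below are hypothetical, simplified stand-ins, not Xen's actual `struct pci_dev`/`struct vpci`.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for Xen's pci_dev/vpci structures. */
struct vpci { int dummy; };
struct pci_dev {
    struct vpci *vpci;       /* NULL when vPCI was never set up */
    struct pci_dev *next;
};

/*
 * Walk a device list and count only entries with vpci state allocated,
 * skipping the rest with `continue` the way the patch does: an entry
 * may lack vpci because it is a read-only device or setup failed.
 */
static int count_vpci_devices(const struct pci_dev *head)
{
    int n = 0;
    const struct pci_dev *tmp;

    for ( tmp = head; tmp; tmp = tmp->next )
    {
        if ( !tmp->vpci )
            continue;        /* tolerate devices without vpci allocated */
        n++;
    }

    return n;
}
```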



From xen-devel-bounces@lists.xenproject.org Thu May 25 15:02:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:02:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Thu, 25 May 2023 17:02:11 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Message-ID: <ZG94c9y4j4udFmsy@Air-de-Roger>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4zx+TvUWTCEMh3@Air-de-Roger>
 <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
MIME-Version: 1.0

On Thu, May 25, 2023 at 04:39:51PM +0200, Jan Beulich wrote:
> On 24.05.2023 17:56, Roger Pau Monné wrote:
> > On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
> >> --- a/xen/drivers/vpci/header.c
> >> +++ b/xen/drivers/vpci/header.c
> >> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
> >>      struct vpci_header *header = &pdev->vpci->header;
> >>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
> >>      struct pci_dev *tmp, *dev = NULL;
> >> +    const struct domain *d;
> >>      const struct vpci_msix *msix = pdev->vpci->msix;
> >>      unsigned int i;
> >>      int rc;
> >> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
> >>  
> >>      /*
> >>       * Check for overlaps with other BARs. Note that only BARs that are
> >> -     * currently mapped (enabled) are checked for overlaps.
> >> +     * currently mapped (enabled) are checked for overlaps. Note also that
> >> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
> >>       */
> >> -    for_each_pdev ( pdev->domain, tmp )
> >> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
> > 
> > Looking at this again, I think this is slightly more complex, as during
> > runtime dom0 will get here with pdev->domain == hardware_domain OR
> > dom_xen, and hence you also need to account that devices that have
> > pdev->domain == dom_xen need to iterate over devices that belong to
> > the hardware_domain, ie:
> > 
> > for ( d = pdev->domain; ;
> >       d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )
> 
> Right, something along these lines. To keep loop continuation expression
> and exit condition simple, I'll probably prefer
> 
> for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
>       ; d = dom_xen )

LGTM.  I would add parentheses around the pdev->domain != dom_xen
condition, but that's just my personal taste.

We might want to add an

ASSERT(pdev->domain == hardware_domain || pdev->domain == dom_xen);

here, just as a reminder that this chunk must be revisited when adding
domU support (though you could argue we haven't done this elsewhere);
I just feel that here it's not so obvious we don't want to do this for
domUs.

> > And we likely want to limit this to devices that belong to the
> > hardware_domain or to dom_xen (in preparation for vPCI being used for
> > domUs).
> 
> I'm afraid I don't understand this remark, though.

This was looking forward to domU support, so that the loop already
caters for pdev->domain being neither hardware_domain nor dom_xen, but
we might want to leave that for later, when domU support is actually
introduced.
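The loop shape being discussed can be sketched in isolation. This is a
hypothetical, self-contained model (plain arrays stand in for domains
and device lists; none of the names are Xen's real API): start from the
device's owning domain, mapping dom_xen back to the hardware domain,
then make a second pass over dom_xen's hidden devices.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_DEVS 8

/* Minimal stand-in for a domain owning a list of device IDs. */
struct domain {
    int devs[MAX_DEVS];
    size_t n;
};

static struct domain hw   = { { 1, 2 }, 2 };  /* hardware domain */
static struct domain dxen = { { 3 },    1 };  /* dom_xen (hidden devices) */

/*
 * Visit the devices of pdev's owning domain followed by dom_xen's,
 * visiting each domain exactly once even when the owner *is* dom_xen.
 * Records visited IDs into out[] and returns the count.
 */
static size_t visit_both(struct domain *pdev_owner, int *out)
{
    struct domain *d;
    size_t n = 0;

    for ( d = (pdev_owner != &dxen) ? pdev_owner : &hw; ; d = &dxen )
    {
        for ( size_t i = 0; i < d->n; i++ )
            out[n++] = d->devs[i];

        if ( d == &dxen )
            break;
    }

    return n;
}
```

Keeping the remap in the loop's initialiser (rather than in the
continuation expression) is what lets the exit condition stay a simple
`d == &dxen` check, matching the form preferred in the thread.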

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 25 15:10:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:10:24 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180938-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180938: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-vhd:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=511b9f286c3dadd041e0d90beeff7d47c9bf3b7a
X-Osstest-Versions-That:
    xen=380c6c170393c48852d4f2b1ea97125a399cfc61
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 15:10:08 +0000

flight 180938 xen-unstable real [real]
flight 180944 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180938/
http://logs.test-lab.xenproject.org/osstest/logs/180944/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 180930

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd        7 xen-install         fail pass in 180944-retest
 test-armhf-armhf-libvirt 18 guest-start/debian.repeat fail pass in 180944-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180930
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180930
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180930
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180930
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180930
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180930
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180930
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180930
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180930
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  511b9f286c3dadd041e0d90beeff7d47c9bf3b7a
baseline version:
 xen                  380c6c170393c48852d4f2b1ea97125a399cfc61

Last test of basis   180930  2023-05-24 17:38:36 Z    0 days
Testing same since   180938  2023-05-25 02:40:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 511b9f286c3dadd041e0d90beeff7d47c9bf3b7a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 19:15:48 2023 +0100

    x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
    
    MSR_ARCH_CAPS data is now included in featureset information.  Replace
    opencoded checks with regular feature ones.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 205a9f970378c31ae3e00b52d59103a2e881b9e0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 19:05:01 2023 +0100

    x86/tsx: Remove opencoded MSR_ARCH_CAPS check
    
    The current cpu_has_tsx_ctrl tristate is serving a double purpose: to signal
    the first pass through tsx_init(), and the availability of MSR_TSX_CTRL.
    
    Drop the variable, replacing it with a once boolean, and altering
    cpu_has_tsx_ctrl to come out of the feature information.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 8f6bc7f9b72eb7cf0c8c5ae5d80498a58ba0b7c3
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 16:59:25 2023 +0100

    x86/vtx: Remove opencoded MSR_ARCH_CAPS check
    
    MSR_ARCH_CAPS data is now included in featureset information.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit a87d131a8c2952e53ba9ed513d5553426cdeac34
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 16 14:07:43 2023 +0100

    x86/cpufeature: Rework {boot_,}cpu_has()
    
    One area where Xen deviates from Linux is that test_bit() forces a volatile
    read.  This leads to poor code generation, because the optimiser cannot merge
    bit operations on the same word.
    
    Drop the use of test_bit(), and write the expressions in regular C.  This
    removes the include of bitops.h (which is a frequent source of header
    tangles), and it offers the optimiser far more flexibility.
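    As an illustration only (a standalone sketch, not Xen's actual code; the
    helper and function names below are hypothetical), the codegen difference
    between a volatile bit test and a plain C read can be seen here: the
    volatile variant forces one load per call, while the plain variant lets
    the optimiser cache the word across both checks.

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Volatile read: the compiler must emit a fresh load for every call,
     * so two checks of the same word cannot share a single load. */
    static bool test_bit_volatile(unsigned int nr, const volatile uint32_t *word)
    {
        return (*word >> nr) & 1;
    }

    /* Plain C read: the optimiser is free to load the word once and test
     * both bits against the cached value. */
    static bool test_bit_plain(unsigned int nr, const uint32_t *word)
    {
        return (*word >> nr) & 1;
    }

    bool caps_both_volatile(const uint32_t *caps)
    {
        return test_bit_volatile(3, caps) && test_bit_volatile(7, caps);
    }

    bool caps_both_plain(const uint32_t *caps)
    {
        return test_bit_plain(3, caps) && test_bit_plain(7, caps);
    }
    ```

    Both functions compute the same result; the difference shows up only in
    the generated code, which is what the bloat-o-meter numbers below measure.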
    
    Bloat-o-meter reports a net change of:
    
      add/remove: 0/0 grow/shrink: 21/87 up/down: 641/-2751 (-2110)
    
    with half of that in x86_emulate() alone.  vmx_ctxt_switch_to() seems to be
    the fastpath with the greatest delta at -24, where the optimiser has
    successfully removed the branch hidden in cpu_has_msr_tsc_aux.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit bbb289f3d5bdd3358af748d7c567343532ac45b5
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 15:53:35 2023 +0100

    x86/boot: Expose MSR_ARCH_CAPS data in guest max policies
    
    We already have common and default feature adjustment helpers.  Introduce one
    for max featuresets too.
    
    Offer MSR_ARCH_CAPS unconditionally in the max policy, and stop clobbering the
    data inherited from the Host policy.  This will be necessary to level a VM
    safely for migration.  Annotate the ARCH_CAPS CPUID bit as special.  Note:
    ARCH_CAPS is still max-only for now, so will not be inherited by the default
    policies.
    
    With this done, the special case for dom0 can be shrunk to just resampling the
    Host policy (as ARCH_CAPS isn't visible by default yet).
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 70553000d6b44dd7c271a35932b0b3e1f22c5532
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 15:37:02 2023 +0100

    x86/boot: Record MSR_ARCH_CAPS for the Raw and Host CPU policy
    
    Extend x86_cpu_policy_fill_native() with a read of ARCH_CAPS based on the
    CPUID information just read, removing the special handling in
    calculate_raw_cpu_policy().
    
    Right now, the only use of x86_cpu_policy_fill_native() outside of Xen is the
    unit tests.  Getting MSR data in this context is left to whoever first
    encounters a genuine need to have it.
    
    Extend generic_identify() to read ARCH_CAPS into x86_capability[], which is
    fed into the Host Policy.  This in turn means there's no need to special case
    arch_caps in calculate_host_policy().
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit ce8c930851a5ca21c4e70f83be7e8b290ce1b519
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 18:50:59 2023 +0100

    x86/cpu-policy: MSR_ARCH_CAPS feature names
    
    Seed the default visibility from the dom0 special case, which for the most
    part just exposes the *_NO bits.  EIBRS is the one non-*_NO bit, which is
    "just" a status bit to the guest indicating a change in implemention of IBRS
    which is already fully supported.
    
    Insert a block dependency from the ARCH_CAPS CPUID bit to the entire content
    of the MSR.  This is because MSRs have no structure information similar to
    CPUID; the dependency is used by x86_cpu_policy_clear_out_of_range_leaves()
    to bulk-clear inaccessible words.
    
    The overall CPUID bit is still max-only, so all of MSR_ARCH_CAPS is hidden in
    the default policies.
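    A minimal sketch of what such a block dependency achieves (the struct
    layout and function name here are hypothetical, not the real
    x86_cpu_policy_clear_out_of_range_leaves() implementation): when the
    gating CPUID bit is clear, the whole dependent MSR word is zeroed in one
    go rather than bit by bit.

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct policy {
        bool arch_caps;         /* gating CPUID bit */
        uint64_t arch_caps_msr; /* dependent MSR content */
    };

    /* If the guest cannot see the ARCH_CAPS CPUID bit, the entire MSR is
     * inaccessible, so bulk-clear it via the block dependency. */
    static void clear_out_of_range(struct policy *p)
    {
        if ( !p->arch_caps )
            p->arch_caps_msr = 0;
    }
    ```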
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit d9fe459ffad8a6eac2f695adb2331aff83c345d1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 17:55:21 2023 +0100

    x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
    
    Bits through 24 are already defined, meaning that we're not far off needing
    the second word.  Put both in right away.
    
    As both halves are present now, the arch_caps field is full width.  Adjust the
    unit test, which notices.
    
    The bool bitfield names in the arch_caps union are unused, and somewhat out of
    date.  They'll shortly be automatically generated.
    
    Add CPUID and MSR prefixes to the ./xen-cpuid verbose output, now that there
    are a mix of the two.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 43912f8dbb1888ffd7f00adb10724c70e71927c4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 14:14:53 2023 +0100

    x86/boot: Adjust MSR_ARCH_CAPS handling for the Host policy
    
    We are about to move MSR_ARCH_CAPS into featureset, but the order of
    operations (copy raw policy, then copy x86_capabilities[] in) will end up
    clobbering the ARCH_CAPS value.
    
    Some toolstacks use this information to handle TSX compatibility across the
    CPUs and microcode versions where support was removed.
    
    To avoid this transient breakage, read from raw_cpu_policy rather than
    modifying it in place.  This logic will be removed entirely in due course.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit ef1987fcb0fdfaa7ee148024037cb5fa335a7b2d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 13:52:39 2023 +0100

    x86/boot: Rework dom0 feature configuration
    
    Right now, dom0's feature configuration is split between the common
    path and a dom0-specific one.  This mostly is by accident, and causes some
    very subtle bugs.
    
    First, start by clearly defining init_dom0_cpuid_policy() to apply to the
    domain that Xen builds automatically.  The late hwdom case is still constructed in a
    mostly normal way, with the control domain having full discretion over the CPU
    policy.
    
    Identifying this highlights a latent bug - the two halves of the MSR_ARCH_CAPS
    bodge are asymmetric with respect to the hardware domain.  This means that
    shim, or a control-only dom0, sees the MSR_ARCH_CAPS CPUID bit but none of the
    MSR content.  This in turn declares the hardware to be retpoline-safe by
    failing to advertise the {R,}RSBA bits appropriately.  Restrict this logic to
    the hardware domain, although the special case will cease to exist shortly.
    
    For the CPUID Faulting adjustment, the comment in ctxt_switch_levelling()
    isn't actually relevant.  Provide a better explanation.
    
    Move the recalculate_cpuid_policy() call outside of the dom0-cpuid= case.
    This is no change for now, but will become necessary shortly.
    
    Finally, place the second half of the MSR_ARCH_CAPS bodge after the
    recalculate_cpuid_policy() call.  This is necessary to avoid transiently
    breaking the hardware domain's view while the handling is cleaned up.  This
    special case will cease to exist shortly.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu May 25 15:23:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:23:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539693.840813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Cnu-00055x-GJ; Thu, 25 May 2023 15:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539693.840813; Thu, 25 May 2023 15:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Cnu-00055q-CW; Thu, 25 May 2023 15:23:02 +0000
Received: by outflank-mailman (input) for mailman id 539693;
 Thu, 25 May 2023 15:23:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2Cnt-00055k-0Z
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:23:01 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20619.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::619])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 07bb0c29-fb10-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 17:22:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9315.eurprd04.prod.outlook.com (2603:10a6:20b:4e6::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Thu, 25 May
 2023 15:22:56 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 15:22:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07bb0c29-fb10-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <56cb3727-5905-1a7a-ceed-cf110bdddad4@suse.com>
Date: Thu, 25 May 2023 17:22:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2] vpci/header: cope with devices not having vpci
 allocated
Content-Language: en-US
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20230525145405.35348-1-roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230525145405.35348-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0181.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9315:EE_
X-MS-Office365-Filtering-Correlation-Id: 9ee730d6-799a-4d0b-2b10-08db5d33ea23
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9ee730d6-799a-4d0b-2b10-08db5d33ea23
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 15:22:55.8576
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9315

On 25.05.2023 16:54, Roger Pau Monne wrote:
> When traversing the list of pci devices assigned to a domain cope with
> some of them not having the vpci struct allocated. It should be
> possible for the hardware domain to have read-only devices assigned
> that are not handled by vPCI, such support will be added by further
> patches.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I was wondering whether to put an assertion in (permitting both DomXEN
and hwdom, i.e. slightly more relaxed than I had it).
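For illustration, the pattern under discussion might look like this minimal
standalone sketch (struct layouts and helpers are mocked here, not Xen's real
is_hardware_domain()/dom_xen): devices without a vpci structure are skipped,
with the relaxed assertion permitting both the hardware domain and DomXEN.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct vpci { int dummy; };

struct pci_dev {
    struct vpci *vpci;      /* may be NULL for unhandled devices */
    struct pci_dev *next;
};

struct domain {
    bool is_hwdom;
    struct pci_dev *pdev_list;
};

static struct domain dom_xen_obj;                   /* mock of DomXEN */
static struct domain *const dom_xen = &dom_xen_obj;

static bool is_hardware_domain(const struct domain *d)
{
    return d->is_hwdom;
}

/* Count devices with vPCI handling, skipping (and sanity-checking)
 * the ones that have no vpci structure allocated. */
static unsigned int count_vpci_devices(const struct domain *d)
{
    unsigned int n = 0;

    for ( const struct pci_dev *pdev = d->pdev_list; pdev; pdev = pdev->next )
    {
        if ( !pdev->vpci )
        {
            /* The relaxed assertion: vpci-less devices are only
             * expected for the hardware domain or DomXEN. */
            assert(is_hardware_domain(d) || d == dom_xen);
            continue;
        }
        n++;
    }

    return n;
}
```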

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 15:25:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:25:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539697.840823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Cpv-0005fa-SV; Thu, 25 May 2023 15:25:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539697.840823; Thu, 25 May 2023 15:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Cpv-0005fT-PS; Thu, 25 May 2023 15:25:07 +0000
Received: by outflank-mailman (input) for mailman id 539697;
 Thu, 25 May 2023 15:25:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Cpu-0005fH-7Q; Thu, 25 May 2023 15:25:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Cpt-0001LA-Tx; Thu, 25 May 2023 15:25:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Cpt-00059b-DT; Thu, 25 May 2023 15:25:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Cpt-0000uR-Cz; Thu, 25 May 2023 15:25:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180943-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180943: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=354be8936d97d4f2cb8cc004bb0296826d89bd8d
X-Osstest-Versions-That:
    xen=cca2361947b3c9851b3ded6e43cc48caf5258eee
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 15:25:05 +0000

flight 180943 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180943/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  354be8936d97d4f2cb8cc004bb0296826d89bd8d
baseline version:
 xen                  cca2361947b3c9851b3ded6e43cc48caf5258eee

Last test of basis   180936  2023-05-25 02:02:55 Z    0 days
Testing same since   180943  2023-05-25 13:01:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cca2361947..354be8936d  354be8936d97d4f2cb8cc004bb0296826d89bd8d -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 25 15:25:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:25:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539702.840833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Cqe-0006Ab-55; Thu, 25 May 2023 15:25:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539702.840833; Thu, 25 May 2023 15:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Cqe-0006AU-29; Thu, 25 May 2023 15:25:52 +0000
Received: by outflank-mailman (input) for mailman id 539702;
 Thu, 25 May 2023 15:25:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5KKD=BO=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q2Cqc-0006AI-GW
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:25:50 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6baca182-fb10-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 17:25:46 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4PFPc4Mb
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 25 May 2023 17:25:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6baca182-fb10-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1685028339; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=quzZBMTmydlZ4adhVV6lCqYcLo9rn550c/nfskkxSU6vtao+zyVPA44GHHZRTFN1QZ
    Rw9wEi1qTiFVNKlhAudkE09Ng7RJYzY5BUwEfGDPvD52o/9gzxQyDoXJTNnFswsbSXBu
    wq8mW1BglPkWNSgO7LxRksP66eJwsytbWiGVoo1mTbM2DOvtedY3Ggq4pq2VSoFc5V7W
    ifpIZld6n01OvCYD+QMixP+gXeIg2zgQxlE/pqsfsiKwe9WXzgZ9a3AmPywcj4ZD8Qpa
    4fijOxcpYnT36OOKDsGNlKoCcd5zTau4SM84ReuijslaRsvUpHHtMnG0qMdZAd4sfCBr
    CoFg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1685028339;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=6nkcLF5uy3qZOAbDqmnHxK6TWE4JxR4ZFTQLpYrVHLQ=;
    b=dWGz7y+8trXENdoxg8d1slysEgSXH5u23ZnEEcGZsjFl4pQMN7QJeTPFP9E5kTTxQn
    Hx5tt+gNReH+DWE+02pHmnQJOcMXxAob7seEGLE/abd4IRDjOWCR+QjWM58agHOw6MFA
    ZzUnXC/N5fnsU47L8ay8uWDbPRwxR+5eYkHFAGd2u9irakvLPZ2OUB8Fy6XHXYryusZp
    m5Myuje1c8tr7CASwE0tOzDmDMgryTV0DSe8vVhZIZjQ/b4eCy2Fpysy+fF4qdHs/5Ub
    7oVQjR/Y9qTUTXnnyWlHznubXOemYYiRc4bSqtywC7Mwepkp/pXoVbkJJkit4jRNj+TX
    egGQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1685028339;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=6nkcLF5uy3qZOAbDqmnHxK6TWE4JxR4ZFTQLpYrVHLQ=;
    b=DB/2IH+5VPsCGZ7hqpA66qWDxsVxUoK9ZOaypGXumBTPHQSYkoc2Lngm41UC4nMR2g
    6chu4rYH0biogtOgQqsOfCFTvpe2J3Kggl0EXeEQeuoEX3MsxF6aCUddfeuQYoc1Pdks
    k2/rKhoAcWc+nrjyfmSJQ6fB0wFb7AoQc2WF0WquS2GRnQ/AseOLj4XlzSxrOGMS8784
    Y4/KgkQmfThGuZCYtVLMfPpSrUBCmQn7C9t99TC4WCHYt6nqTq3X6I9rnQHNX6//gyZa
    Sy/PqNNh/Ro8PzqAoo3Lk4eey/pYNuerO0eh36d8Q3+kr7CqLda0xSx/XBDjBseeFXA1
    RKCw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1685028339;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=6nkcLF5uy3qZOAbDqmnHxK6TWE4JxR4ZFTQLpYrVHLQ=;
    b=XAPlsu7v6M5r/PoyiEpkniX21UvWfqXi4eotn5dFvZcRsMxWAnDiPxuKVdh2NPOYJe
    SoIOKjPMBYizpPixD0Bw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xqFv7EJ0tgRX/vKfT/e8Ig6v0dNw4QAWpzMWrRQ=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] x86/hvm/ioreq: remove empty line after function declaration
Date: Thu, 25 May 2023 17:25:27 +0200
Message-Id: <20230525152527.10281-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

Introduced in commit 6ddfaabceeec3c31bc97a7208c46f581de55f71d
("x86/hvm/ioreq: simplify code and use consistent naming").

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 xen/arch/x86/hvm/ioreq.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 20dbb4c8cf..4eb7a70182 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -169,7 +169,6 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 }
 
 static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
-
 {
     struct domain *d = s->target;
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;


From xen-devel-bounces@lists.xenproject.org Thu May 25 15:28:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:28:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539708.840842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtA-0006of-IQ; Thu, 25 May 2023 15:28:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539708.840842; Thu, 25 May 2023 15:28:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtA-0006oY-FP; Thu, 25 May 2023 15:28:28 +0000
Received: by outflank-mailman (input) for mailman id 539708;
 Thu, 25 May 2023 15:28:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g/QV=BO=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q2Ct8-0006oN-Rs
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:28:26 +0000
Received: from mail-lf1-x12f.google.com (mail-lf1-x12f.google.com
 [2a00:1450:4864:20::12f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ca1db830-fb10-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 17:28:24 +0200 (CEST)
Received: by mail-lf1-x12f.google.com with SMTP id
 2adb3069b0e04-4effb818c37so2557066e87.3
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 08:28:24 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 d22-20020ac244d6000000b004f37c0dfcaasm246853lfm.118.2023.05.25.08.28.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 May 2023 08:28:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca1db830-fb10-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685028504; x=1687620504;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=g6IwEhjuZSm/ewR/A7CFJpYoO/4uY1WhC5D+h1Q06Z4=;
        b=gXN1Zdz+cZ9/Zfw8qiBW855GjUHpGWS8Dc5Kz5HYd+kG8hrOlPfka4B11YNG+bPkn7
         15x4B76EuFl8jfmymW6Xdj8gYgciEFzxo8/P/PrvqvafPDomeQ/c0HyiZTpEWGlF7cpg
         z5iXkBrCOXq0XOKjrZVgkWuvqo+jfSJ+AD6XtPvE9WQgPgeMMKUkkwpSCrOhh2AuSBkJ
         Sle2tJXMyWfkQkJbcSeP8NJVybg2onEA4waPikZY/dpN3VYZfm/+WByG1QM+oZ0RLliB
         ygqJZSdex2YJ5u8GhLgnX8OeFsRnvd1rv34/OF9V4cOx3qDEEz1Zu+Qcex6T9EbrrKZn
         +BYw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685028504; x=1687620504;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=g6IwEhjuZSm/ewR/A7CFJpYoO/4uY1WhC5D+h1Q06Z4=;
        b=HfD9MsoP0CTLofHZ2H5j+7HHg38R10z7saTNybJ7ELcPpmTGaI4sg6cd95xIcZ+rOl
         xc1BUX40Xc1vIoBbIpSZF3TrnPEa9Y96gJhm8u2n+1M86x688C8idwEAnq6+UrngbgLi
         LG/VdpY5NRCygA9E350w7e+ODly2hExMZBPxKtRqFFCeE2TqAzIYZDirEv/Uc8qeh5wn
         zq/nl5PvrBaM8Vi7H49zJncjt546/TR5FKnK5TqyzeJE/C5INffSElgnmY0pEMvGHspJ
         IXbpnj5VrgwWBhyAun5iF40cqvKXxsbpBUec9985jlWd+obeoW92CYSCAvJIYZBajl3x
         VwIw==
X-Gm-Message-State: AC+VfDxYC18e++8sJ4Qcm+Ls+c04Dt6nIg5WV516/6dCFMMkycseC7/U
	BvmogCCLznMXPgHmSocSRsqEngQplvw=
X-Google-Smtp-Source: ACHHUZ4wLZbVn/YLgbo+GfrZD07wWJQJBwpfFiaHDDTJEwwexSrOO/zpwfZ980nkmT9KtZEkge4Xng==
X-Received: by 2002:ac2:457c:0:b0:4f3:f98c:77fc with SMTP id k28-20020ac2457c000000b004f3f98c77fcmr4438415lfm.8.1685028503506;
        Thu, 25 May 2023 08:28:23 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v9 0/5] enable MMU for RISC-V
Date: Thu, 25 May 2023 18:28:13 +0300
Message-Id: <cover.1685027257.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series introduces the following:
1. Functionality to build the page tables for Xen that map the
   link-time location to the physical (load-time) location.
2. A check that Xen is less than the page size.
3. A check that load addresses don't overlap with linker addresses.
4. Preparations for the proper switch to the virtual memory world.
5. Loading of the built page table into the SATP register.
6. Enabling of the MMU.

---
Changes in V9:
  - update comment for VM layout description.
  - update VPN_MASK: remove type cast.
  - Remove double blank lines.
  - Add Reviewed-by for patch #2
---
Changes in V8:
	- Add "#if RV_STAGE1_MODE == SATP_MODE_SV39" instead of "#ifdef SV39"
	  in the comment to the VM layout description.
	- Update the upper bound of direct map area in VM layout description.
	- Add parentheses for lvl in pt_index() macros.
	- introduce macros paddr_to_pfn() and pfn_to_paddr() and use them inside
	  paddr_to_pte()/pte_to_paddr().
	- Remove "__" in sfence_vma() and add blanks inside the parentheses of
	  asm volatile.
	- Parenthesize the two & against the || at the start of setup_initial_mapping()
	  function.
	- Code style fixes.
	- Remove ". = ALIGN(PAGE_SIZE);" before "*(.bss.page_aligned)" in the xen.lds.S
	  file, as .bss.page_aligned specifies the proper alignment itself.
---
Changes in V7:
	- Fix the frametable range in the RV64 layout.
	- Add ifdef SV39 to the RV64 layout comment to make it explicit that the
    description is for SV39 mode.
	- Add missed row in the RV64 layout table.
 	- define CONFIG_PAGING_LEVELS=2 for SATP_MODE_SV32.
 	- update the switch_stack_and_jump() macro: add constraint 'X' for fn,
    a memory clobber, and wrap it into do {} while ( false ).
 	- add noreturn to definition of enable_mmu().
 	- update pt_index() to "(pt_linear_offset(lvl, (va)) & VPN_MASK)".
 	- expand the pte_to_addr()/addr_to_pte() macros in the paddr_to_pte() and
    pte_to_paddr() functions and then drop them.
 	- remove inclusion of <asm/config.h>.
 	- update commit message around definition of PGTBL_INITIAL_COUNT.
 	- remove PGTBL_ENTRY_AMOUNT and use PAGETABLE_ENTRIES instead.
 	- code style fixes
 	- remove permission argument of setup_initial_mapping() function
 	- remove calc_pgtbl_lvls_num() as it's not needed anymore after definition
    of CONFIG_PAGING_LEVELS
 	- introduce sfence_vma()
 	- remove satp_mode argument from check_pgtbl_mode_support() and use
    RV_STAGE1_MODE directly instead.
 	- change .align to .p2align
 	- drop inclusion of <asm/asm-offsets.h> from head.S. This change isn't
    necessary for the current patch series.
 	- create a separate patch for xen.lds.S.
---
Changes in V6:
  - update RV VM layout and related to it things
 	- move PAGE_SHIFT, PADDR_BITS to the top of page-bits.h
 	- cast argument x of the pte_to_addr() macro to paddr_t to avoid risk of overflow for RV32
 	- update type of num_levels from 'unsigned long' to 'unsigned int'
 	- define PGTBL_INITIAL_COUNT as ((CONFIG_PAGING_LEVELS - 1) + 1)
 	- update type of permission arguments: changed from 'unsigned long' to 'unsigned int'
 	- fix code style
 	- switch while 'loop' to 'for' loop
 	- undef HANDLE_PGTBL
 	- clean root page table after MMU is disabled in check_pgtbl_mode_support() function
 	- align __bss_start properly
 	- remove unnecessary const for the paddr_to_pte, pte_to_paddr, pte_is_valid functions
 	- add the switch_stack_and_jump macro and use it inside enable_mmu() before the
 	  jump to the cont_after_mmu_is_enabled() function

---
Changes in V5:
  * rebase the patch series on top of current staging
  * Update cover letter: the info about the patches the MMU patch series
    is based on was removed, as they were merged to staging
  * add new patch with a description of the VM layout for RISC-V64
  * Indent fields of pte_t struct
	* Rename addr_to_pte() and ppn_to_paddr() to match their content
---
Changes in V4:
  * use GB() macros instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove unnecessary 'asm' word at the end of #error
  * encapsulate pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr().
  * change type of paddr argument from const unsigned long to paddr_t
  * pte_to_paddr() update prototype.
  * calculate size of Xen binary based on an amount of page tables
  * use unsigned int instead of uint32_t as its use isn't warranted.
  * remove extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add argument for HANDLE_PGTBL macros instead of curr_lvl_num variable
  * make enable_mmu() noinline to prevent issues under link-time optimization
    because of the nature of enable_mmu()
  * add function to check that SATP_MODE is supported.
  * update the commit message
  * update setup_initial_pagetables to set correct PTE flags in one pass
    instead of calling setup_pte_permissions after setup_initial_pagetables()
    as setup_initial_pagetables() isn't used to change permission flags.
---
Changes in V3:
  * Update the cover letter message: the patch series doesn't depend on
    [ RISC-V basic exception handling implementation ] as it was decided
    to enable the MMU before the implementation of exception handling. Also
    the MMU patch series is based on two other patches which weren't merged,
    [1] and [2]
  - Update the commit message for [ [PATCH v3 1/3]
    xen/riscv: introduce setup_initial_pages ].
  - update definition of pte_t structure to have a proper size of pte_t in case of RV32.
  - update asm/mm.h with new functions and remove unnecessary 'extern'.
  - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
  - update paddr_to_pte() to receive permissions as an argument.
  - add check that map_start & pa_start is properly aligned.
  - move  defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to <asm/page-bits.h>
  - Rename PTE_SHIFT to PTE_PPN_SHIFT
  - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses and
    afterwards set up PTE permissions for sections; update the check that linker and
    load addresses don't overlap.
  - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is necessary.
  - rewrite enable_mmu in C; add the check that map_start and pa_start are aligned on 4k
    boundary.
  - update the comment for the setup_initial_pagetable function
  - Add RV_STAGE1_MODE to support different MMU modes.
  - update the commit message that MMU is also enabled here
  - set XEN_VIRT_START very high to not overlap with load address range
  - align bss section
---
Changes in V2:
  * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK, SIZE,...} and
    introduce XEN_PT_LEVEL_*() and LEVEL_* instead
  * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*()
  * Remove clear_pagetables() functions as pagetables were zeroed during
    .bss initialization
  * Rename _setup_initial_pagetables() to setup_initial_mapping()
  * Make PTE_DEFAULT equal to RX.
  * Update prototype of setup_initial_mapping(..., bool writable) -> 
    setup_initial_mapping(..., UL flags)  
  * Update calls of setup_initial_mapping according to new prototype
  * Remove unnecessary call of:
    _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
  * Define index* in the loop of setup_initial_mapping
  * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
    as we don't have such a section
  * make arguments of paddr_to_pte() and pte_is_valid() as const.
  * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
  * update 'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
  * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
  * set __section(".bss.page_aligned") for page tables arrays
  * fix indentations
  * Change '__attribute__((section(".entry")))' to '__init'
  * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
    setup_initial_mapping() as they should be already aligned.
  * Remove clear_pagetables() as initial pagetables will be
    zeroed during bss initialization
  * Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
    as there is no such section in xen.lds.S
  * Update the argument of pte_is_valid() to "const pte_t *p"
  * Remove patch "[PATCH v1 3/3] automation: update RISC-V smoke test" from the patch series
    as a simplified approach for the RISC-V smoke test was introduced by Andrew Cooper
  * Add patch [xen/riscv: remove dummy_bss variable] as the dummy_bss variable
    no longer makes sense after the introduction of the initial page tables.
---

Oleksii Kurochko (5):
  xen/riscv: add VM space layout
  xen/riscv: introduce setup_initial_pages
  xen/riscv: align __bss_start
  xen/riscv: setup initial pagetables
  xen/riscv: remove dummy_bss variable

 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  50 ++++-
 xen/arch/riscv/include/asm/current.h   |  11 +
 xen/arch/riscv/include/asm/mm.h        |  14 ++
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  61 ++++++
 xen/arch/riscv/include/asm/processor.h |   5 +
 xen/arch/riscv/mm.c                    | 276 +++++++++++++++++++++++++
 xen/arch/riscv/setup.c                 |  22 +-
 xen/arch/riscv/xen.lds.S               |   5 +-
 10 files changed, 445 insertions(+), 10 deletions(-)
 create mode 100644 xen/arch/riscv/include/asm/current.h
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 15:28:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:28:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539709.840850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtB-0006vz-1i; Thu, 25 May 2023 15:28:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539709.840850; Thu, 25 May 2023 15:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtA-0006tL-SN; Thu, 25 May 2023 15:28:28 +0000
Received: by outflank-mailman (input) for mailman id 539709;
 Thu, 25 May 2023 15:28:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g/QV=BO=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q2CtA-0006oN-4v
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:28:28 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cb3e7992-fb10-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 17:28:26 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id
 2adb3069b0e04-4f00c33c3d6so2724974e87.2
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 08:28:26 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 d22-20020ac244d6000000b004f37c0dfcaasm246853lfm.118.2023.05.25.08.28.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 May 2023 08:28:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb3e7992-fb10-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685028506; x=1687620506;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Mm1x9FKPEwGMnotOEN0ZErT2+QXcBOPbrrKH58hZrN4=;
        b=D3CH1CX9YfjUZGnsUIEi7FEnsOrx6s5OYDBwZb7ShKNqlque/r/fWvCOhwXB7WsXxE
         pE6mfffKb3IdJYRSyVAPEGmPCliGcecKh6a+3yCenBLMefva9Z+FKbG53pUHSBEnq49Q
         F0SwOdRC1adfPx4IgUhr22lYIrMkGk8UmIhfefvmY8YQL469i5wGEnr0tLjDTa88thcz
         LFGDzh+0F29TmHRDjcilEWsw9hC57pik85xuesETEG8wQlHrMOtn8WA5Hd8/E0fBlygZ
         /fwGlClt4P4UcwJ0Zzw8YHBtKVy+NEQO8WrpJWvVvK1mPuzgQG5uRZNjH2GKbg0LxrdX
         /G8g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685028506; x=1687620506;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Mm1x9FKPEwGMnotOEN0ZErT2+QXcBOPbrrKH58hZrN4=;
        b=l5tHVABrnCOX2hKjiulaC6SgUFDnipSX1/FKKnoJsGB2QhYMY8n8PS9+8umU6rU3NC
         +K9hsE5Ja5wur2SUIiAwx4MtzIsMeoaSxxNKCft5DmAMkbA7Wuop9b2hdxaiURUBb3e0
         u/7S696pkbZvMxP6dDQgTjydBIGh7Tj70R/xFLCun6okLdPLK+ZKglHlxbm+qc3Y5+QA
         WNfT+h+lsIrsdxSKrYvQ91qgLpnQl0beLvWlyjFlQ66Pjdkh22xe/1jZl2WzHNrlWkf1
         thGg2UySgJNks49QZBZpoWVw3cQntgj4W0Z14nDjGBRQvj06TQWe0MrZAMmbijKHuxBN
         emyg==
X-Gm-Message-State: AC+VfDzYFQWlV2565lgcwTu+Pv1xdSFMOONhprMJlGGFBdgNEv1cEJJN
	avsXoloyUVMDBgg9hG3M/mWzfZaLNDk=
X-Google-Smtp-Source: ACHHUZ5YmwnjCDBxXsrdxE4FYSVhGy9E8zB7sa3ZQT449fJ8dmMhSu+11AOT8E9ycG0jaCoD5Vsvyg==
X-Received: by 2002:ac2:5dfc:0:b0:4f3:aa09:e7e8 with SMTP id z28-20020ac25dfc000000b004f3aa09e7e8mr8090365lfq.44.1685028505609;
        Thu, 25 May 2023 08:28:25 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v9 1/5] xen/riscv: add VM space layout
Date: Thu, 25 May 2023 18:28:14 +0300
Message-Id: <1621fd09987d20b3233132d422e5c9dfe300e3f7.1685027257.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685027257.git.oleksii.kurochko@gmail.com>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Also, an explanation was added about the ignoring of the top VA bits.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V9:
 - Update comment for VM layout description.
---
Changes in V8:
 - Add "#if RV_STAGE1_MODE == SATP_MODE_SV39" instead of "#ifdef SV39"
   in the comment to the VM layout description.
 - Update the upper bound of direct map area in VM layout description.
---
Changes in V7:
 - Fix the frametable range in the RV64 layout.
 - Add ifdef SV39 to the RV64 layout comment to make it explicit that the
   description is for SV39 mode.
 - Add missed row in the RV64 layout table.
---
Changes in V6:
 - update comment above the RISCV-64 layout table
 - add Slot column to the table with RISCV-64 Layout
 - update RV-64 layout table.
---
Changes in V5:
* the patch was introduced in the current patch series.
---
 xen/arch/riscv/include/asm/config.h | 36 +++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 763a922a04..9900d29dab 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -4,6 +4,42 @@
 #include <xen/const.h>
 #include <xen/page-size.h>
 
+/*
+ * RISC-V64 Layout:
+ *
+#if RV_STAGE1_MODE == SATP_MODE_SV39
+ *
+ * From the riscv-privileged doc:
+ *   When mapping between narrower and wider addresses,
+ *   RISC-V zero-extends a narrower physical address to a wider size.
+ *   The mapping between 64-bit virtual addresses and the 39-bit usable
+ *   address space of Sv39 is not based on zero-extension but instead
+ *   follows an entrenched convention that allows an OS to use one or
+ *   a few of the most-significant bits of a full-size (64-bit) virtual
+ *   address to quickly distinguish user and supervisor address regions.
+ *
+ * It means that:
+ *   top VA bits are simply ignored for the purpose of translating to PA.
+ *
+ * ============================================================================
+ *    Start addr    |   End addr        |  Size  | Slot       |area description
+ * ============================================================================
+ * FFFFFFFFC0800000 |  FFFFFFFFFFFFFFFF |1016 MB | L2 511     | Unused
+ * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
+ * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
+ * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
+ *                 ...                  |  1 GB  | L2 510     | Unused
+ * 0000003200000000 |  0000007F80000000 | 309 GB | L2 200-509 | Direct map
+ *                 ...                  |  1 GB  | L2 199     | Unused
+ * 0000003100000000 |  00000031C0000000 |  3 GB  | L2 196-198 | Frametable
+ *                 ...                  |  1 GB  | L2 195     | Unused
+ * 0000003080000000 |  00000030C0000000 |  1 GB  | L2 194     | VMAP
+ *                 ...                  | 194 GB | L2 0 - 193 | Unused
+ * ============================================================================
+ *
+#endif
+ */
+
 #if defined(CONFIG_RISCV_64)
 # define LONG_BYTEORDER 3
 # define ELFSIZE 64
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 15:28:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:28:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539710.840863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtE-0007L0-BN; Thu, 25 May 2023 15:28:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539710.840863; Thu, 25 May 2023 15:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtE-0007Kn-5y; Thu, 25 May 2023 15:28:32 +0000
Received: by outflank-mailman (input) for mailman id 539710;
 Thu, 25 May 2023 15:28:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g/QV=BO=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q2CtC-0007Iw-Hb
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:28:30 +0000
Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com
 [2a00:1450:4864:20::129])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cca46b2b-fb10-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 17:28:29 +0200 (CEST)
Received: by mail-lf1-x129.google.com with SMTP id
 2adb3069b0e04-4f3a9ad31dbso2711671e87.0
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 08:28:29 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 d22-20020ac244d6000000b004f37c0dfcaasm246853lfm.118.2023.05.25.08.28.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 May 2023 08:28:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cca46b2b-fb10-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685028508; x=1687620508;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Ak5/VYvLaKUDJ7Sz+gu41WP7brOGu0vB5O+difHyipI=;
        b=aW1BQ6jXSMeDfCwjRJI1CMhQGizpWZ8hl6D4mBfwJulrTD76+rtAeRwNogiHbjp0c9
         Aba9o7muTtTiFru9ngh9CnDezKr4aq0FaArypsWw4RYA0tmTBryDrwtkCmc0nmGaTZxa
         rrGpdhA1iakE+SxhhfhuK0jLkd1XrlPNmaN+TPYDWuybem8AnNxJHRCgRuBvm23zmtpE
         TqgdSGKU2uHqufE5Kt8tUqmtmt9b62XQjuH5gfceqriy+qFXtEJRGg7N+MVAwQtllnLL
         k94ZP8+fF8Umxor0EQPT3A9WQJZy1typrmRHzS0dKEBUdEC5STYjN7wyh1tgwoThtf0r
         k29w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685028508; x=1687620508;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Ak5/VYvLaKUDJ7Sz+gu41WP7brOGu0vB5O+difHyipI=;
        b=jX72RzCjXThIz2fXBVtp0cpgr8aaUMuGFB9o6dUi/pVVDuBVw5+rriQ/uoqgRW0cuM
         Q0UXKlA9Nf0SCpLyRsW7aErWa3ZBMX9lfS/xqCxdATjsysMO1PrECcfhprRBAzW2yPnT
         EK7ZMQVpIiW0tSjwd3Oi43meSj+E/0gwM0Cvp7L5Bss5wedIH3n1YxzxEOo1IaQZYloD
         O1rcmXMqFmx+dDRC2o+fhIDiXJ6m3Z3nmZi1XpGZ2xWxv/XrkjLWrBy+8ekLpYbbzj+J
         2kkPyRMLaPXaerBayypFUhDwfI0v6ZwlAI0oC+2zubPLXrJJa2VQ//HwV+qb5xE5Z2o8
         oqzw==
X-Gm-Message-State: AC+VfDxwXc3QeBNPoNulx0ZDTzarnZfPYLDcmXQDdqcRZ+16PVw7HQGO
	Sdk4q9e1IbSatNSrh8B0TbOXZh6g7bc=
X-Google-Smtp-Source: ACHHUZ5LIYPq4wEs4Zh9N0tcZuz3kOuUeDJqEHWBua4EctG5O60FNlAGTzTLyZbmAH/vG14Jy0ukDA==
X-Received: by 2002:ac2:46fb:0:b0:4ed:bdac:7a49 with SMTP id q27-20020ac246fb000000b004edbdac7a49mr6415577lfo.54.1685028508062;
        Thu, 25 May 2023 08:28:28 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v9 2/5] xen/riscv: introduce setup_initial_pages
Date: Thu, 25 May 2023 18:28:15 +0300
Message-Id: <8f8fb8849830ad2b249b9af903fe1eca70f7578a.1685027257.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685027257.git.oleksii.kurochko@gmail.com>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The idea was taken from xvisor, but the following changes
were done:
* Use only the minimal part of the code needed to enable the MMU.
* Rename the {_}setup_initial_pagetables() functions.
* Add an argument to setup_initial_mapping() to allow
  setting PTE flags.
* Update setup_initial_pagetables() to map sections
  with the correct PTE flags.
* Rewrite enable_mmu() in C.
* Map the linker address range to the load address range without
  a 1:1 mapping. The mapping is 1:1 only when
  load_start_addr is equal to linker_start_addr.
* Add safety checks such as:
  * Xen's size is less than the page size.
  * The linker address range doesn't overlap the load
    address range.
* Rework the {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK} macros.
* Change PTE_LEAF_DEFAULT to RW instead of RWX.
* Remove phys_offset as it is not used now.
* Remove the alignment ({map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);)
  in setup_initial_mapping() as they should already be aligned.
  Add a check that {map, pa}_start are aligned.
* Remove clear_pagetables() as the initial pagetables will be
  zeroed during bss initialization.
* Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
  as there is no such section in xen.lds.S.
* Update the argument of pte_is_valid() to "const pte_t *p".
* Add a check that Xen's load address is aligned to a 4k boundary.
* Refactor setup_initial_pagetables() so it maps the linker
  address range to the load address range, then sets the needed
  permissions for specific sections (such as .text, .rodata, etc.);
  otherwise RW permissions are set by default.
* Add a function to check that the requested SATP_MODE is supported.

Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V9:
  - Add "Reviewed-by: Jan Beulich <jbeulich@suse.com>" to commit message.
  - Update the VPN_MASK macro.
  - Remove double blank lines.
---
Changes in V8:
	- Add parentheses around lvl in the pt_index() macro.
	- introduce macros paddr_to_pfn() and pfn_to_paddr() and use them inside
	  paddr_to_pte()/pte_to_paddr().
	- Remove "__" in sfence_vma() and add blanks inside the parentheses of
	  asm volatile.
	- Parenthesize the two & against the || at the start of setup_initial_mapping()
	  function.
	- Code style fixes.
---
Changes in V7:
 	- define CONFIG_PAGING_LEVELS=2 for SATP_MODE_SV32.
 	- update the switch_stack_and_jump() macro: add constraint 'X' for fn,
    a memory clobber, and wrap it in do {} while ( false ).
 	- add noreturn to definition of enable_mmu().
 	- update pt_index() to "(pt_linear_offset(lvl, (va)) & VPN_MASK)".
 	- expand the pte_to_addr()/addr_to_pte() macros in the paddr_to_pte() and
    pte_to_paddr() functions and then drop them.
 	- remove inclusion of <asm/config.h>.
 	- update commit message around definition of PGTBL_INITIAL_COUNT.
 	- remove PGTBL_ENTRY_AMOUNT and use PAGETABLE_ENTRIES instead.
 	- code style fixes
 	- remove permission argument of setup_initial_mapping() function
 	- remove calc_pgtbl_lvls_num() as it's not needed anymore after definition
    of CONFIG_PAGING_LEVELS.
 	- introduce sfence_vma().
 	- remove satp_mode argument from check_pgtbl_mode_support() and use
    RV_STAGE1_MODE directly instead.
 	- change .align to .p2align.
 	- drop inclusion of <asm/asm-offsets.h> from head.S. This change isn't
    necessary for the current patch series.
---
Changes in V6:
 	- move PAGE_SHIFT, PADDR_BITS to the top of page-bits.h
 	- cast argument x of the pte_to_addr() macro to paddr_t to avoid risk of overflow for RV32
 	- update type of num_levels from 'unsigned long' to 'unsigned int'
 	- define PGTBL_INITIAL_COUNT as ((CONFIG_PAGING_LEVELS - 1) + 1)
 	- update type of permission arguments: changed it from 'unsigned long' to 'unsigned int'
 	- fix code style
 	- switch 'while' loop to 'for' loop
 	- undef HANDLE_PGTBL
 	- clean root page table after MMU is disabled in check_pgtbl_mode_support() function
 	- align __bss_start properly
 	- remove unnecessary const for paddr_to_pte, pte_to_paddr, pte_is_valid functions
 	- add the switch_stack_and_jump() macro and use it inside enable_mmu() before
 	  jumping to the cont_after_mmu_is_enabled() function
---
Changes in V5:
	* Indent fields of pte_t struct
	* Rename addr_to_pte() and ppn_to_paddr() to match their content
---
Changes in V4:
  * use the GB() macro instead of defining SZ_1G
  * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
  * remove unnecessary 'asm' word at the end of #error
  * encapsulate pte_t definition in a struct
  * rename addr_to_ppn() to ppn_to_paddr().
  * change type of paddr argument from const unsigned long to paddr_t
  * pte_to_paddr() update prototype.
  * calculate size of Xen binary based on the number of page tables
  * use unsigned int instead of uint32_t as
    its use isn't warranted.
  * remove extern of bss_{start,end} as they aren't used in mm.c anymore
  * fix code style
  * add an argument for the HANDLE_PGTBL() macro instead of the curr_lvl_num variable
  * make enable_mmu() noinline to prevent inlining under link-time
    optimization, because of the nature of enable_mmu()
  * add function to check that SATP_MODE is supported.
  * update the commit message
  * update setup_initial_pagetables to set correct PTE flags in one pass
    instead of calling setup_pte_permissions after setup_initial_pagetables()
    as setup_initial_pagetables() isn't used to change permission flags.
---
Changes in V3:
 - update definition of pte_t structure to have a proper size of pte_t
   in case of RV32.
 - update asm/mm.h with new functions and remove unnecessary 'extern'.
 - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
 - update paddr_to_pte() to receive permissions as an argument.
 - add check that map_start & pa_start are properly aligned.
 - move  defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to
   <asm/page-bits.h>
 - Rename PTE_SHIFT to PTE_PPN_SHIFT
 - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses
   and after setup PTEs permission for sections; update check that linker
   and load addresses don't overlap.
 - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is
   necessary.
 - rewrite enable_mmu in C; add the check that map_start and pa_start are
   aligned on 4k boundary.
 - update the comment for the setup_initial_pagetables() function
 - Add RV_STAGE1_MODE to support different MMU modes
 - set XEN_VIRT_START very high to not overlap with load address range
 - align bss section
---
Changes in V2:
 * update the commit message:
 * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK, SIZE,...} and
   introduce instead of them XEN_PT_LEVEL_*() and LEVEL_*
 * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*()
 * Remove clear_pagetables() functions as pagetables were zeroed during
   .bss initialization
 * Rename _setup_initial_pagetables() to setup_initial_mapping()
 * Make PTE_DEFAULT equal to RX.
 * Update prototype of setup_initial_mapping(..., bool writable) -> 
   setup_initial_mapping(..., UL flags)  
 * Update calls of setup_initial_mapping according to new prototype
 * Remove unnecessary call of:
   _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
 * Define index* in the loop of setup_initial_mapping
 * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
   as we don't have such a section
 * make arguments of paddr_to_pte() and pte_is_valid() as const.
 * make xen_second_pagetable static.
 * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
 * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
 * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
 * set __section(".bss.page_aligned") for page tables arrays
 * fix indentations
 * Change '__attribute__((section(".entry")))' to '__init'
 * Remove phys_offset as it isn't used now.
 * Remove the alignment ({map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);) in
   setup_initial_mapping() as they should already be aligned.
 * Remove clear_pagetables() as initial pagetables will be
   zeroed during bss initialization
 * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
   as there is no such section in xen.lds.S
 * Update the argument of pte_is_valid() to "const pte_t *p"
---
 xen/arch/riscv/Makefile                |   1 +
 xen/arch/riscv/include/asm/config.h    |  14 +-
 xen/arch/riscv/include/asm/current.h   |  11 +
 xen/arch/riscv/include/asm/mm.h        |  14 ++
 xen/arch/riscv/include/asm/page-bits.h |  10 +
 xen/arch/riscv/include/asm/page.h      |  61 ++++++
 xen/arch/riscv/include/asm/processor.h |   5 +
 xen/arch/riscv/mm.c                    | 276 +++++++++++++++++++++++++
 xen/arch/riscv/setup.c                 |  11 +
 xen/arch/riscv/xen.lds.S               |   3 +
 10 files changed, 405 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/current.h
 create mode 100644 xen/arch/riscv/include/asm/mm.h
 create mode 100644 xen/arch/riscv/include/asm/page.h
 create mode 100644 xen/arch/riscv/mm.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 443f6bf15f..956ceb02df 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -1,5 +1,6 @@
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += entry.o
+obj-y += mm.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 9900d29dab..38862df0b8 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -75,12 +75,24 @@
   name:
 #endif
 
-#define XEN_VIRT_START  _AT(UL, 0x80200000)
+#ifdef CONFIG_RISCV_64
+#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
+#else
+#error "RV32 isn't supported"
+#endif
 
 #define SMP_CACHE_BYTES (1 << 6)
 
 #define STACK_SIZE PAGE_SIZE
 
+#ifdef CONFIG_RISCV_64
+#define CONFIG_PAGING_LEVELS 3
+#define RV_STAGE1_MODE SATP_MODE_SV39
+#else
+#define CONFIG_PAGING_LEVELS 2
+#define RV_STAGE1_MODE SATP_MODE_SV32
+#endif
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/include/asm/current.h b/xen/arch/riscv/include/asm/current.h
new file mode 100644
index 0000000000..d87e6717e0
--- /dev/null
+++ b/xen/arch/riscv/include/asm/current.h
@@ -0,0 +1,11 @@
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#define switch_stack_and_jump(stack, fn) do {               \
+    asm volatile (                                          \
+            "mv sp, %0\n"                                   \
+            "j " #fn :: "r" (stack), "X" (fn) : "memory" ); \
+    unreachable();                                          \
+} while ( false )
+
+#endif /* __ASM_CURRENT_H */
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
new file mode 100644
index 0000000000..64293eacee
--- /dev/null
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -0,0 +1,14 @@
+#ifndef _ASM_RISCV_MM_H
+#define _ASM_RISCV_MM_H
+
+#include <asm/page-bits.h>
+
+#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
+#define paddr_to_pfn(pa)  ((unsigned long)((pa) >> PAGE_SHIFT))
+
+void setup_initial_pagetables(void);
+
+void enable_mmu(void);
+void cont_after_mmu_is_enabled(void);
+
+#endif /* _ASM_RISCV_MM_H */
diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
index 1801820294..4a3e33589a 100644
--- a/xen/arch/riscv/include/asm/page-bits.h
+++ b/xen/arch/riscv/include/asm/page-bits.h
@@ -4,4 +4,14 @@
 #define PAGE_SHIFT              12 /* 4 KiB Pages */
 #define PADDR_BITS              56 /* 44-bit PPN */
 
+#ifdef CONFIG_RISCV_64
+#define PAGETABLE_ORDER         (9)
+#else /* CONFIG_RISCV_32 */
+#define PAGETABLE_ORDER         (10)
+#endif
+
+#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
+
+#define PTE_PPN_SHIFT           10
+
 #endif /* __RISCV_PAGE_BITS_H__ */
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
new file mode 100644
index 0000000000..a7e2eee964
--- /dev/null
+++ b/xen/arch/riscv/include/asm/page.h
@@ -0,0 +1,61 @@
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <xen/const.h>
+#include <xen/types.h>
+
+#include <asm/mm.h>
+#include <asm/page-bits.h>
+
+#define VPN_MASK                    (PAGETABLE_ENTRIES - 1UL)
+
+#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
+#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
+#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
+#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
+#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
+
+#define PTE_VALID                   BIT(0, UL)
+#define PTE_READABLE                BIT(1, UL)
+#define PTE_WRITABLE                BIT(2, UL)
+#define PTE_EXECUTABLE              BIT(3, UL)
+#define PTE_USER                    BIT(4, UL)
+#define PTE_GLOBAL                  BIT(5, UL)
+#define PTE_ACCESSED                BIT(6, UL)
+#define PTE_DIRTY                   BIT(7, UL)
+#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
+
+#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
+#define PTE_TABLE                   (PTE_VALID)
+
+/* Calculate the offsets into the pagetables for a given VA */
+#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
+
+#define pt_index(lvl, va) (pt_linear_offset((lvl), (va)) & VPN_MASK)
+
+/* Page Table entry */
+typedef struct {
+#ifdef CONFIG_RISCV_64
+    uint64_t pte;
+#else
+    uint32_t pte;
+#endif
+} pte_t;
+
+static inline pte_t paddr_to_pte(paddr_t paddr,
+                                 unsigned int permissions)
+{
+    return (pte_t) { .pte = (paddr_to_pfn(paddr) << PTE_PPN_SHIFT) | permissions };
+}
+
+static inline paddr_t pte_to_paddr(pte_t pte)
+{
+    return pfn_to_paddr(pte.pte >> PTE_PPN_SHIFT);
+}
+
+static inline bool pte_is_valid(pte_t p)
+{
+    return p.pte & PTE_VALID;
+}
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/xen/arch/riscv/include/asm/processor.h b/xen/arch/riscv/include/asm/processor.h
index a71448e02e..6db681d805 100644
--- a/xen/arch/riscv/include/asm/processor.h
+++ b/xen/arch/riscv/include/asm/processor.h
@@ -69,6 +69,11 @@ static inline void die(void)
         wfi();
 }
 
+static inline void sfence_vma(void)
+{
+    asm volatile ( "sfence.vma" ::: "memory" );
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
new file mode 100644
index 0000000000..692ae9cb5e
--- /dev/null
+++ b/xen/arch/riscv/mm.c
@@ -0,0 +1,276 @@
+#include <xen/compiler.h>
+#include <xen/init.h>
+#include <xen/kernel.h>
+#include <xen/pfn.h>
+
+#include <asm/early_printk.h>
+#include <asm/csr.h>
+#include <asm/current.h>
+#include <asm/mm.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+
+struct mmu_desc {
+    unsigned int num_levels;
+    unsigned int pgtbl_count;
+    pte_t *next_pgtbl;
+    pte_t *pgtbl_base;
+};
+
+extern unsigned char cpu0_boot_stack[STACK_SIZE];
+
+#define PHYS_OFFSET ((unsigned long)_start - XEN_VIRT_START)
+#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
+#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
+
+/*
+ * It is expected that Xen won't be more than 2 MB.
+ * The check in xen.lds.S guarantees that.
+ * At least 3 page tables (in the case of Sv39) are needed to cover 2 MB:
+ * one for each page table level, with PAGE_SIZE = 4 KiB.
+ *
+ * One L0 page table can cover 2 MB (512 entries of one page table * PAGE_SIZE).
+ *
+ * One more page table might be needed in case Xen's load address
+ * isn't 2 MB aligned.
+ */
+#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_root[PAGETABLE_ENTRIES];
+
+pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+stage1_pgtbl_nonroot[PGTBL_INITIAL_COUNT * PAGETABLE_ENTRIES];
+
+#define HANDLE_PGTBL(curr_lvl_num)                                          \
+    index = pt_index(curr_lvl_num, page_addr);                              \
+    if ( pte_is_valid(pgtbl[index]) )                                       \
+    {                                                                       \
+        /* Find L{ 0-3 } table */                                           \
+        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);                        \
+    }                                                                       \
+    else                                                                    \
+    {                                                                       \
+        /* Allocate new L{0-3} page table */                                \
+        if ( mmu_desc->pgtbl_count == PGTBL_INITIAL_COUNT )                 \
+        {                                                                   \
+            early_printk("(XEN) No initial table available\n");             \
+            /* panic(), BUG() or ASSERT() aren't ready now. */              \
+            die();                                                          \
+        }                                                                   \
+        mmu_desc->pgtbl_count++;                                            \
+        pgtbl[index] = paddr_to_pte((unsigned long)mmu_desc->next_pgtbl,    \
+                                    PTE_VALID);                             \
+        pgtbl = mmu_desc->next_pgtbl;                                       \
+        mmu_desc->next_pgtbl += PAGETABLE_ENTRIES;                          \
+    }
+
+static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
+                                         unsigned long map_start,
+                                         unsigned long map_end,
+                                         unsigned long pa_start)
+{
+    unsigned int index;
+    pte_t *pgtbl;
+    unsigned long page_addr;
+
+    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
+    {
+        early_printk("(XEN) Xen should be loaded at a 4k boundary\n");
+        die();
+    }
+
+    if ( (map_start & ~XEN_PT_LEVEL_MAP_MASK(0)) ||
+         (pa_start & ~XEN_PT_LEVEL_MAP_MASK(0)) )
+    {
+        early_printk("(XEN) map and pa start addresses should be aligned\n");
+        /* panic(), BUG() or ASSERT() aren't ready now. */
+        die();
+    }
+
+    for ( page_addr = map_start;
+          page_addr < map_end;
+          page_addr += XEN_PT_LEVEL_SIZE(0) )
+    {
+        pgtbl = mmu_desc->pgtbl_base;
+
+        switch ( mmu_desc->num_levels )
+        {
+        case 4: /* Level 3 */
+            HANDLE_PGTBL(3);
+        case 3: /* Level 2 */
+            HANDLE_PGTBL(2);
+        case 2: /* Level 1 */
+            HANDLE_PGTBL(1);
+        case 1: /* Level 0 */
+            {
+                unsigned long paddr = (page_addr - map_start) + pa_start;
+                unsigned int permissions = PTE_LEAF_DEFAULT;
+                pte_t pte_to_be_written;
+
+                index = pt_index(0, page_addr);
+
+                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
+                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
+                    permissions =
+                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
+
+                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
+                    permissions = PTE_READABLE | PTE_VALID;
+
+                pte_to_be_written = paddr_to_pte(paddr, permissions);
+
+                if ( !pte_is_valid(pgtbl[index]) )
+                    pgtbl[index] = pte_to_be_written;
+                else
+                {
+                    if ( (pgtbl[index].pte ^ pte_to_be_written.pte) &
+                         ~(PTE_DIRTY | PTE_ACCESSED) )
+                    {
+                        early_printk("PTE override has occurred\n");
+                        /* panic(), <asm/bug.h> aren't ready now. */
+                        die();
+                    }
+                }
+            }
+        }
+    }
+}
+#undef HANDLE_PGTBL
+
+static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
+                                            unsigned long load_start)
+{
+    bool is_mode_supported = false;
+    unsigned int index;
+    unsigned int page_table_level = (mmu_desc->num_levels - 1);
+    unsigned level_map_mask = XEN_PT_LEVEL_MAP_MASK(page_table_level);
+
+    unsigned long aligned_load_start = load_start & level_map_mask;
+    unsigned long aligned_page_size = XEN_PT_LEVEL_SIZE(page_table_level);
+    unsigned long xen_size = (unsigned long)(_end - start);
+
+    if ( (load_start + xen_size) > (aligned_load_start + aligned_page_size) )
+    {
+        early_printk("Please place Xen within a single superpage, i.e. "
+                     "within XEN_PT_LEVEL_SIZE( {L3 | L2 | L1} ), "
+                     "depending on the expected SATP_MODE\n"
+                     "XEN_PT_LEVEL_SIZE is defined in <asm/page.h>\n");
+        die();
+    }
+
+    index = pt_index(page_table_level, aligned_load_start);
+    stage1_pgtbl_root[index] = paddr_to_pte(aligned_load_start,
+                                            PTE_LEAF_DEFAULT | PTE_EXECUTABLE);
+
+    sfence_vma();
+    csr_write(CSR_SATP,
+              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
+              RV_STAGE1_MODE << SATP_MODE_SHIFT);
+
+    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == RV_STAGE1_MODE )
+        is_mode_supported = true;
+
+    csr_write(CSR_SATP, 0);
+
+    sfence_vma();
+
+    /* Clean MMU root page table */
+    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
+
+    return is_mode_supported;
+}
+
+/*
+ * setup_initial_pagetables:
+ *
+ * Build the initial page tables for Xen as follows:
+ *  1. Calculate the number of page table levels.
+ *  2. Initialize the mmu description structure.
+ *  3. Check that the linker address range doesn't overlap
+ *     the load address range.
+ *  4. Map all linker addresses to load addresses (this isn't
+ *     necessarily a 1:1 mapping; it is 1:1 only when a linker
+ *     address is equal to its load address) with
+ *     RW permissions by default.
+ *  5. Set up the proper PTE permissions for each section.
+ */
+void __init setup_initial_pagetables(void)
+{
+    struct mmu_desc mmu_desc = { CONFIG_PAGING_LEVELS, 0, NULL, NULL };
+
+    /*
+     * Accesses to _start and _end are always PC-relative, so they
+     * yield the load addresses of the start and end of Xen.
+     * To get the linker addresses, LOAD_TO_LINK() must be used.
+     */
+    unsigned long load_start    = (unsigned long)_start;
+    unsigned long load_end      = (unsigned long)_end;
+    unsigned long linker_start  = LOAD_TO_LINK(load_start);
+    unsigned long linker_end    = LOAD_TO_LINK(load_end);
+
+    if ( (linker_start != load_start) &&
+         (linker_start <= load_end) && (load_start <= linker_end) )
+    {
+        early_printk("(XEN) linker and load address ranges overlap\n");
+        die();
+    }
+
+    if ( !check_pgtbl_mode_support(&mmu_desc, load_start) )
+    {
+        early_printk("The requested MMU mode isn't supported by the CPU\n"
+                     "Please choose a different one in <asm/config.h>\n");
+        die();
+    }
+
+    mmu_desc.pgtbl_base = stage1_pgtbl_root;
+    mmu_desc.next_pgtbl = stage1_pgtbl_nonroot;
+
+    setup_initial_mapping(&mmu_desc,
+                          linker_start,
+                          linker_end,
+                          load_start);
+}
+
+void __init noreturn noinline enable_mmu()
+{
+    /*
+     * Calculate the link-time address of the mmu_is_enabled
+     * label and update CSR_STVEC with it.
+     * The MMU is configured so that linker addresses are mapped
+     * onto load addresses, so when linker and load addresses differ,
+     * enabling the MMU causes an exception, which jumps to the
+     * link-time addresses.
+     * Otherwise, if the load addresses equal the linker addresses,
+     * the code after the mmu_is_enabled label runs without an exception.
+     */
+    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
+
+    /* Ensure page table writes precede loading the SATP */
+    sfence_vma();
+
+    /* Enable the MMU and load the new pagetable for Xen */
+    csr_write(CSR_SATP,
+              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
+              RV_STAGE1_MODE << SATP_MODE_SHIFT);
+
+    asm volatile ( ".p2align 2" );
+ mmu_is_enabled:
+    /*
+     * The stack should be re-initialized because:
+     * 1. Right now the stack pointer holds a load-time address,
+     *    which causes an issue when the load start address
+     *    isn't equal to the linker start address.
+     * 2. The addresses on the stack are all load-time relative,
+     *    which can likewise be an issue when the load start address
+     *    isn't equal to the linker start address.
+     *
+     * We can't return to the caller because the stack was reset
+     * and the caller may have stashed some variables on it.
+     * Jump to a brand new function instead.
+     */
+
+    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
+                          cont_after_mmu_is_enabled);
+}
+
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 3786f337e0..315804aa87 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -2,6 +2,7 @@
 #include <xen/init.h>
 
 #include <asm/early_printk.h>
+#include <asm/mm.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -26,3 +27,13 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
     unreachable();
 }
+
+void __init noreturn cont_after_mmu_is_enabled(void)
+{
+    early_printk("All set up\n");
+
+    for ( ;; )
+        asm volatile ("wfi");
+
+    unreachable();
+}
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index 31e0d3576c..fe475d096d 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -172,3 +172,6 @@ ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
 
 ASSERT(!SIZEOF(.got),      ".got non-empty")
 ASSERT(!SIZEOF(.got.plt),  ".got.plt non-empty")
+
+ASSERT(_end - _start <= MB(2), "Xen too large for early-boot assumptions")
+
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 15:28:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:28:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539711.840872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtF-0007bz-MH; Thu, 25 May 2023 15:28:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539711.840872; Thu, 25 May 2023 15:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtF-0007bn-J0; Thu, 25 May 2023 15:28:33 +0000
Received: by outflank-mailman (input) for mailman id 539711;
 Thu, 25 May 2023 15:28:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g/QV=BO=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q2CtD-0007Iw-SB
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:28:31 +0000
Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com
 [2a00:1450:4864:20::136])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce124b03-fb10-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 17:28:31 +0200 (CEST)
Received: by mail-lf1-x136.google.com with SMTP id
 2adb3069b0e04-4f4db9987f8so482761e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 08:28:31 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 d22-20020ac244d6000000b004f37c0dfcaasm246853lfm.118.2023.05.25.08.28.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 May 2023 08:28:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce124b03-fb10-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685028511; x=1687620511;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kekLKeFIbeYDOHCuOxDTya7bEvDS56y6sN78THRHMuo=;
        b=L0CP3/72F4YA3SVThEPqoI7mTlLcCENpjUKA/RQczdmkGSSN/de6tE/mxtfAJdhp/b
         s7Pmr8VVHhdqY4kZV6LfPBeV1Q9uTNKQYVB9czdcwZiT7nKq/20beg2Q/XXpIcIVl3g8
         DnxVTZOcwq56h8WzlnSlSw8SHJFilJqe5ATvyh+KXLiZofdcdDWXrGCoR+KIIf6DDHzO
         vUR7WmdwtwEAwZlfpBm6WgzSi4ztPzBnFxT0FGkrVTQtZxaBsQarYV2ZRSUYcyouLu/J
         PDLHHmPqMS4W1a79PtbJ5g+k/qdARdyheDU2XrzNei7cF3DEiHKdaJEUy4Ymp9Z0W4bq
         sBcw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685028511; x=1687620511;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=kekLKeFIbeYDOHCuOxDTya7bEvDS56y6sN78THRHMuo=;
        b=KnjVyu8Pqntljor17/+E23b85aOFh1jnTtP9ajheDG63U2ZZuuozuj0agIDrf2COnr
         BgHavI9UM5CbnBK6qIftfOE5Skvbd2I+5yoPVdsVVs6AV8nGed7C3umPnhdu3E7CsJSX
         H1ZMg8ieAcX96jMk9x0KkdqlP05fwQ3MBUcxqF3UqaNJnBo3NvDoFtce8Lq9RDXqkW6r
         AxgRPgjC6mMZ5FtYZrmnI6mrGnejPkGV5mAYgiY7iciQ6IJX4xtXtUYPvS60Mx57OOhZ
         DG5rsEw/Sf/RabKt2TP4SUNcjrfNi/M5TN+9Bf2PEZ8m+Qd3mOjtOowUyvaB4iCbe+s8
         QLfg==
X-Gm-Message-State: AC+VfDyp6Yq91aTYGqeASX4kKxGxFrDhDKobJxuGPIKPwq+2wAocMF+O
	7bJ+kK1qV/J9LCRgxzZUcmztRrIIaGg=
X-Google-Smtp-Source: ACHHUZ54mKL4yz2tswCT6bUSDgiMcAVpL9rQyxTaKnt81R9rivSGkf0GpgexQ4aAjwSzkEN/G8cAXg==
X-Received: by 2002:a05:6512:3c86:b0:4eb:412f:9e0e with SMTP id h6-20020a0565123c8600b004eb412f9e0emr1126234lfv.26.1685028510793;
        Thu, 25 May 2023 08:28:30 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v9 4/5] xen/riscv: setup initial pagetables
Date: Thu, 25 May 2023 18:28:17 +0300
Message-Id: <6ea28216df1c7f29ebd88e20adb05cdf75af20fe.1685027257.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685027257.git.oleksii.kurochko@gmail.com>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch does two things:
1. Sets up the initial pagetables.
2. Enables the MMU, after which execution continues in
   cont_after_mmu_is_enabled().

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V9:
 - Nothing changed. Only rebase
---
Changes in V8:
 - Nothing changed. Only rebase
---
Changes in V7:
 - Nothing changed. Only rebase
---
Changes in V6:
 - Nothing changed. Only rebase
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 - update the commit message to mention that the MMU is also enabled here
 - remove early_printk("All set up\n") as it was moved to the
   cont_after_mmu_is_enabled() function, which runs after the MMU is enabled.
---
Changes in V2:
 * Update the commit message
---
 xen/arch/riscv/setup.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 315804aa87..cf5dc5824e 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -21,7 +21,10 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 {
     early_printk("Hello from C env\n");
 
-    early_printk("All set up\n");
+    setup_initial_pagetables();
+
+    enable_mmu();
+
     for ( ;; )
         asm volatile ("wfi");
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 15:28:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539712.840879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtG-0007fr-5i; Thu, 25 May 2023 15:28:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539712.840879; Thu, 25 May 2023 15:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtF-0007eQ-TU; Thu, 25 May 2023 15:28:33 +0000
Received: by outflank-mailman (input) for mailman id 539712;
 Thu, 25 May 2023 15:28:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g/QV=BO=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q2CtE-0006oN-5y
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:28:32 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd7e7230-fb10-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 17:28:30 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id
 2adb3069b0e04-4ec8eca56cfso2741115e87.0
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 08:28:30 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 d22-20020ac244d6000000b004f37c0dfcaasm246853lfm.118.2023.05.25.08.28.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 May 2023 08:28:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd7e7230-fb10-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685028509; x=1687620509;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ObHRDoMQo8U3MMOm/by0OlrwtRpV6knweGLD9K1sUP8=;
        b=drPj5fG4WKbu9d3nAuD6VrAYls8hQO9/NYjfguD0i9Poaj0iRwR+THXvfyzM4FF+P/
         wuaw256sDJlu4mJnX90KkPNnuQubJt3XuiP4eJxmKRkq3MuHTJTYSOkhoT2IL7vQ7ZIr
         qwZT+pDSTKSQMYbaD+pn9SuaKanVAWR7cVh0bo15evTaVWUGbTGuUHt66iRCddyRpeJG
         kZIO1om0Ex8QvSDCo7oWjiofpqiNVm5Dw7vUcNEsoO+hoBhkSgRZQVcSJ1popT4H7X1z
         Of+pRN7ycVD73EkLDzGsGAfedwaITfADfN41qlL03KiLEgLF3rnfsoiUe3qy8oLLaCfF
         I5RQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685028509; x=1687620509;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ObHRDoMQo8U3MMOm/by0OlrwtRpV6knweGLD9K1sUP8=;
        b=TgZEU8TjDSn6VaAY3W3oFqh/3nURh+MTwmPhKx7dLJfRxMNoYqAQM9pBeeafwYXU5r
         N/kL2iMvJLgh79QyorUtPQUZnh1aC2qSQjnk95IyxAj0C9D8cyI7hbCZ68L9GdC2aooi
         Z0voQ4OOhHykR8M/f46sQfm1aJ96eE6/DLjo3lRSkLw8REeqPoyVFMrL5/TVCuirHb28
         x4TjUbqIjjRrr/Q2ZCo04+iGvUt95Qf5zuXQxgkh89VpGF+2S7yZm5dW4MIYG6TWllPA
         shu+CpslSmpeqHm9dZZ3eL9Q9GqVkX9nPFbAhMQuXstEoZwgh6GpxnW2SdWSodNEHyjc
         zw9Q==
X-Gm-Message-State: AC+VfDxl+NkEkHGOktiZHRKSJiqaBQ+waAvNSRNt5ljdFrF9Eubvtmc2
	KKDyGPxb3JEsuQPuMKBy9QRWPP+ootU=
X-Google-Smtp-Source: ACHHUZ6uJoNS31A74ZGm3w3R7FoQGZ4ZyNwMNZs5ONYPsi80/dMr18p3ZBIuGnO4iiD4j0AxztChFg==
X-Received: by 2002:ac2:51ad:0:b0:4f3:a3e0:8502 with SMTP id f13-20020ac251ad000000b004f3a3e08502mr6333998lfk.33.1685028509497;
        Thu, 25 May 2023 08:28:29 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v9 3/5] xen/riscv: align __bss_start
Date: Thu, 25 May 2023 18:28:16 +0300
Message-Id: <1158df1cde660e817c4f6d6e0a46ef22bd92dc04.1685027257.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685027257.git.oleksii.kurochko@gmail.com>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The .bss clearing loop requires proper alignment of __bss_start.

The ALIGN(PAGE_SIZE) before "*(.bss.page_aligned)" in xen.lds.S
was removed, as any contributions to "*(.bss.page_aligned)" have to
specify proper alignment themselves.

Fixes: cfa0409f7cbb ("xen/riscv: initialize .bss section")
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V9:
 * Nothing changed. Only rebase.
---
Changes in V8:
 * Remove ". = ALIGN(PAGE_SIZE);" before "*(.bss.page_aligned)" in the
   xen.lds.S file, as any contributions to .bss.page_aligned have to specify
   proper alignment themselves.
 * Add "Fixes: cfa0409f7cbb ("xen/riscv: initialize .bss section")" to
   the commit message
 * Add "Reviewed-by: Jan Beulich <jbeulich@suse.com>" to the commit message
---
Changes in V7:
 * the patch was introduced in the current patch series.
---
 xen/arch/riscv/xen.lds.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index fe475d096d..df71d31e17 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -137,9 +137,9 @@ SECTIONS
     __init_end = .;
 
     .bss : {                     /* BSS */
+        . = ALIGN(POINTER_ALIGN);
         __bss_start = .;
         *(.bss.stack_aligned)
-        . = ALIGN(PAGE_SIZE);
         *(.bss.page_aligned)
         . = ALIGN(PAGE_SIZE);
         __per_cpu_start = .;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 15:28:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539713.840882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtG-0007kx-Cy; Thu, 25 May 2023 15:28:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539713.840882; Thu, 25 May 2023 15:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2CtG-0007hu-6D; Thu, 25 May 2023 15:28:34 +0000
Received: by outflank-mailman (input) for mailman id 539713;
 Thu, 25 May 2023 15:28:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g/QV=BO=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q2CtF-0007Iw-5E
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:28:33 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cedc8939-fb10-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 17:28:32 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id
 2adb3069b0e04-4f3b9c88af8so2696988e87.2
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 08:28:32 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 d22-20020ac244d6000000b004f37c0dfcaasm246853lfm.118.2023.05.25.08.28.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 25 May 2023 08:28:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cedc8939-fb10-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685028512; x=1687620512;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kwG3AnwYJS+DU9adWejunsOd8v1f8pd4XLzBxkGPaG4=;
        b=JTWkfkif1+qceBtUUaP1P/LT30dp7yhQ3qOblax897UX2yC+BmvpW6wNmjSCGlp1qk
         /wXutc/LMGl4dT6C1hhw6NtyeFs13K/s9Ou8ANw1LtRDwbTaZPNqXoRYfvU2rkyVE7iU
         m45hSv4+/7vWMIt7Ii7ktR1uxVGa64Q5G/GNSxSAeDxev2O+LC+mdaLUnqIozp3LM49Z
         qrD1QZl36yGaFJtwdNOccgzQGJbzQAJA1fWminzaWPrNJCWvAsrSG3E6Blg+PLr5sYCo
         RUxQscAG1Uk2kRqQ080R/w9BhO6+neeG0jt6dPwvW/Gndi907CumessFZFuDxEhOOVDS
         jefw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685028512; x=1687620512;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=kwG3AnwYJS+DU9adWejunsOd8v1f8pd4XLzBxkGPaG4=;
        b=gM45+YUxtUdp6kUGXKmPaATn13FQ/i51j5mR+8sYMmDH9HrRBrCPoqgjB/ika3ba5C
         7KPM83PsKBCow+3rScw3zDo61oafPAfXbSlfobZxxVr3qzOZdDXqBiN+7EKVnDcS8es4
         Eelg8Y9KmiqqLZ0UeA/IKwIOPIzx8zmd9a/UUVECPEkhr48DH1JYVMDGDQ254oiJxr6d
         Xhg67zs2vafvQrDbhA/yV/kYIxgzk9A6g4Dw5yGSZn7go7lEN5D1MRUIpqESBCzKbGGZ
         xgSLgmLI3AZYBkdgtTMb9J4/bsRMzmOm/MP+cD1+Ip80c6l3Ughoz4omNmhOIBJyWbUI
         oUKg==
X-Gm-Message-State: AC+VfDyLSxZjABLmrR9XqrCmaVOgmWlq+TP/DO+7uBL4n+IH3NCXfJaG
	NBkkJU1JgCtxwMru7junSQBpXvY/dhY=
X-Google-Smtp-Source: ACHHUZ4vr7O7ztP9Ch0rL2DTwYQconLtosuUvlnWmCXVHIcHCiwPS10arJ7erkjc7Wm6/ivKbi2nqA==
X-Received: by 2002:ac2:4850:0:b0:4f2:5007:bd7f with SMTP id 16-20020ac24850000000b004f25007bd7fmr6275460lfy.36.1685028512122;
        Thu, 25 May 2023 08:28:32 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v9 5/5] xen/riscv: remove dummy_bss variable
Date: Thu, 25 May 2023 18:28:18 +0300
Message-Id: <03151586ddd34f61a24809d11bcc0be5e847b384.1685027257.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685027257.git.oleksii.kurochko@gmail.com>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

After the introduction of initial pagetables, the dummy_bss variable
no longer serves any purpose, as the .bss section will not be empty anymore.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V9:
 - Nothing changed. Only rebase
---
Changes in V8:
 - Nothing changed. Only rebase
---
Changes in V7:
 - Nothing changed. Only rebase
---
Changes in V6:
 - Nothing changed. Only rebase
---
Changes in V5:
 - Nothing changed. Only rebase
---
Changes in V4:
 - Nothing changed. Only rebase
---
Changes in V3:
 * patch was introduced in the current patch series (v3).
---
Changes in V2:
 * patch was introduced in the current patch series (v2).
---
 xen/arch/riscv/setup.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index cf5dc5824e..845d18d86f 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -8,14 +8,6 @@
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
-/*  
- * To be sure that .bss isn't zero. It will simplify code of
- * .bss initialization.
- * TODO:
- *   To be deleted when the first real .bss user appears
- */
-int dummy_bss __attribute__((unused));
-
 void __init noreturn start_xen(unsigned long bootcpu_id,
                                paddr_t dtb_addr)
 {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 15:31:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:31:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539734.840903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Cw5-0002eB-RO; Thu, 25 May 2023 15:31:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539734.840903; Thu, 25 May 2023 15:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Cw5-0002e4-ON; Thu, 25 May 2023 15:31:29 +0000
Received: by outflank-mailman (input) for mailman id 539734;
 Thu, 25 May 2023 15:31:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2Cw3-0002dx-Tq
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:31:28 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2040.outbound.protection.outlook.com [40.107.7.40])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 35cd2053-fb11-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 17:31:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8835.eurprd04.prod.outlook.com (2603:10a6:20b:42e::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.16; Thu, 25 May
 2023 15:30:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 15:30:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35cd2053-fb11-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JfJspY+BXdwVKmc/eDdF/CVhaKdGflqZtb8mQv2hga13FUsYqL7uWi8i8tD2GJcbfGQpnCxwulbn6TB81+rIBsg9xX3LNE+y+Ms3oTOP4WhmCdxuomusFOduapw5+oY2S+hj8GdCs3HssuzZIhMLIBqlAEVmqe13Sxej94xMmkW18iafBQ7vaE80oOjg9tRIgFXlfj9vGLesV96gyrLzViawV6jc6d0v71DN38DXWZC4iW6e68mwbHBB1HBy+pzNHMcZapjVkiE6n9VliagfH+07DEH2Poviuocg/0ODFVEN2XOxMioA6CWWe8OptQj0Bmxg1Mxi+XbuT2VpLuK1Eg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XthclJhARAbRFWfZZIRRFnBerPteSOKIaVn5ezanCdw=;
 b=Xluegp34WxPDhLuGDKPLuVV9O5eEzUYXubpZzilBnUB4xG9497Txpyk4ZzqItFEYrRzesisu+AJDgD1BbNYB7Zf368Z2dhbUIT28MBkxWzb908vfZVvvtqsGGJ3dAiLyGBFFtkndKWOpqAt3OoyOvOXh6mavscz2mDSaI96fX9Cc1tpM2bLuQ+wdXcyGEUYdbVkR/y2g6v9GGVTqCEYQj8Mhq5AGq77Qqf+4f+c+Fg3DL+Md7+HcmPF3B5hGjnxOtzhtHFuTIZhVXzS4DnaAx5INy5+E/DQJVvWF+EcDkHwHFFhns0H9q+GL3k7gypuXlHAXCYiFAQOEuJxfd+gcCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XthclJhARAbRFWfZZIRRFnBerPteSOKIaVn5ezanCdw=;
 b=C13BKj3JMuqNWeuTYFaMo1rv5rqtyfl+qLPdgwmHalTkZE/TPxZ2yy2v3b0votR8e2RmFhvGzDwU+Ywv9CGjB9rnjKeWJXlRYViZpiZ6k008XwZfMEJa/U7GYp9llYmkGwrEPWoZhkIej0zYnl2382lzjYzuWDHyUH7N2CqLgdGoOXn9EkJLahxkeq/1eP/QxY8/oc9YD1LhPNamykgtYjTOX3NjivWcATzNstMenR8VjCTVVsOaoBTF8MDOYrKMhY1AkrVe5+L5W1MPUllShYjQTUsoGafcy8hLd8RZpefXIL4ZMHKbh7yPj9nhV1MaKeaZoOYibIEcxx1j7KcxKA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <cedbc257-9ad9-56f1-5060-eaf173d45760@suse.com>
Date: Thu, 25 May 2023 17:30:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4zx+TvUWTCEMh3@Air-de-Roger>
 <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
 <ZG94c9y4j4udFmsy@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZG94c9y4j4udFmsy@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0134.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8835:EE_
X-MS-Office365-Filtering-Correlation-Id: 2df1157f-f1f9-444d-757c-08db5d3508eb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1Jjd0WqZN5lOiCiUn4i1b+D/38llGWaMc0JJBAh6TFS2MUsdlCc5afLZeAqKdg2VrZSa4MQgImJYoVeiJYhzuuKybRuTVLcLbBuF1OYk/ogNRRgqO0LwuFcaaHXySgUs01M+4cbH2iF/xrQLnP0c+JNJntsdT0SCJliHLP5GuunRRWOokPvRm84IjerXj880DAJrQcy6X2F6VWgMDuxaDI679pVN2XLeFdMrDnBxxy9AZDDUDFZH3QlolLBxYGrqBxHWU2sfkJupmMVNjPG/ZMaZLaVG3S9HmppA+Gr3pv3sQYSo8pcE9SovGSPq0Dqo4Fjr3c1KkZIMQb6agiOvt8y2EKp07cGGcx3vs8ja5Vg+4LGOTqE48BrmVTbMfptnnxKKqixCSsohEM2WrqP8hxCbC57LELmfcIbQUxre1uxGwYWkAjSfZ19HSwUkV7GFVhAQIBjy/Qv5JCwZUrOreAcWSZteX5i6PF5+thmoeIXIQabph0GRBOGEVlLdkTEjtR+ZpOcy+rPEBMGY167ifKThhksEXl8SHEgDjuP9zEAxUJCzdcZcXXj94ajTQoKZGroem9cZGyH9HyJkIgOGRW8JVkKUKdT1IGP0ajMjR4P4mywqsFdceEpxgLLlOAW2o3NJtFsHvty0VXpy486uHA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(39860400002)(396003)(376002)(366004)(136003)(451199021)(31686004)(6916009)(66946007)(66556008)(66476007)(478600001)(316002)(4326008)(54906003)(86362001)(31696002)(36756003)(6506007)(26005)(186003)(2616005)(53546011)(6512007)(83380400001)(5660300002)(41300700001)(8936002)(15650500001)(8676002)(2906002)(6486002)(6666004)(38100700002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?ZWFsQVZyUG1oTURsVXI4bHhiRCtyL1lTdDlHcEJJRXNSOGNFNVlSY0lhOUY5?=
 =?utf-8?B?bWRGVXBjU1Q4UllwUm04K04wSTdsTnR3L21JT0V6bXhrQTYyR3krcThKL01m?=
 =?utf-8?B?SlRucWZPay93cFQ0OGJJMHhvN3l5QVI5WWtXN3QxbGdldHZkajdUTVJaaUtS?=
 =?utf-8?B?cGZMcUNURG9WbGRWV1VodzNmY1diMUVWcDNmTXpXSWhObmhiamU1a2ZvMnpC?=
 =?utf-8?B?VWVrSGNDR0lyZDFlS2RxU2FlZndkZ2RzQnZkVEJTQ3pIc1B6cTBZTmlocmpH?=
 =?utf-8?B?bFdDNmtoZXdraEkvVkcrUmVyZDgxUWhUek1rcUUzZk4xb1hCNmFHZUh6bXBE?=
 =?utf-8?B?M3FMRW5oUnk1eDlKSHJQNWQxZmNrQ1hlSmxCRDNiQUhtMFpZWWNjenlaUVYy?=
 =?utf-8?B?aW5GSk9jc1Y0bnJjOW8xck43U01jQmtiTFN0NWFMNFJBbUh4RksvaVZONzFq?=
 =?utf-8?B?Q0RhcUV1UlphWjd5TWk4dGdXWjJaNG9VQlJhT2hZWTdUQUgyeUFmN2E5c0Yz?=
 =?utf-8?B?bGo0a0d5SjdkV0pVMUtIbGxUbTBBUVp6Mit3ZkRqc0pJNW85dzgvRmVNYUVU?=
 =?utf-8?B?YUpyc3EvajRQRENlbEU2ZmdKU3liOWZ5dFg4RUFadWs1bDI1TUFqdE5taXlU?=
 =?utf-8?B?UmhraHliT1BXVXpPK2VSUTJsaXdOaTJuVzd4VEkxbzhabzJoT0xYaXBtOXFS?=
 =?utf-8?B?bnphZ0ZjclJHL0loQkQ2SEh6ckFodk51cFNZMHpEWk4wNTU0Ni8xY0pPdnN5?=
 =?utf-8?B?a1haZlhJMSt6UnZGWUlSV211bHJjYkx5N1lIUmdEY3FXNWZxaVFrcDlmbWU2?=
 =?utf-8?B?MjV6bGVGYlNTNWY3LzNob1R3dG9NbUZWTE1xdkQyWEFjWHlqRS9CbEFQR0Uz?=
 =?utf-8?B?RDFjczkrSUxpb1cyKzlZUU9NWng3MFRRcStMMCtLamFyKzl6bkVmZlVVckVz?=
 =?utf-8?B?cVFUVlZDbGlEVllvaTZYUmhxQjN1SStQa1didmRGbDVxZ0VEN2tMNFgrQkdJ?=
 =?utf-8?B?Wko4NGVjNXpCNnpnT3lRYVNITEJHV1VOQ2pxTHBHanpuY3k2c2pVNzYvL1k0?=
 =?utf-8?B?UGVreWl1N3dsbjNybkczK3krNE1ad0hBbU9WTERDNG1jb2FkZU1DWC9UWm1x?=
 =?utf-8?B?OFlhRUdBRlI0R3R0a1J5bVNSVFJzTUVKaUFUdXNHMXQxN1Vsa0hmQjB2L1RG?=
 =?utf-8?B?VFo4QWhuWmpZa0pQbkRJSGFiSkJYRnlDVGxnK1RBSDN4MHRiZk1nNEd5dFpx?=
 =?utf-8?B?ZkN0dWtHY2FYY1U2ZUZBV3dkc09MSjdiMkRVTlY5S1JFN3ExYnY4TVNrdWxq?=
 =?utf-8?B?K1lzTmRZTEZ4cFhqWndvbENrL2VtUjhWRXRuRHpidTAwY3BUc1Vpb015WWVt?=
 =?utf-8?B?OVBhODNuQjJMejVLMWVUcm1naTBZdDZ4K2xndWsrd0pNYlFjY1kwQTFGYTFL?=
 =?utf-8?B?R2ZJb3BtR0VBUm9WVFpHTjhqbVozNDl2NU9CRC9vUDJXZDJ1MitEOXVIbTJQ?=
 =?utf-8?B?OXQxaVFERHpzNlBwQzhCS1E0aDhlRVV0alVVK3lMM0VsdmxRKzVZaGZoRjg3?=
 =?utf-8?B?NVlXSWxtL2hrTzk5K0Nyb0ZiaXIyenlVdlZMZUwvRnZqUFlZNUNpNTZoZ1RL?=
 =?utf-8?B?SmRHZ0pNNXhjNjZGMHFaUnFuek1La3BlNFhTaXEyenBRNEtJd1hFdXBZZG1t?=
 =?utf-8?B?NGJWeGFiTFFDbDU2U3dKK081UjEzdUVlSWZXdy80ZEkzT1I2bE93ZXBrbkpN?=
 =?utf-8?B?dE5lSUNwQkljaEVTUFY3OUJHd2FJcXBNdnppWXVnc0IzT2dEd1F5dXRJb2Fh?=
 =?utf-8?B?SUJkSEZYRFVjNkkvM28zK0I2c2gyZGY3SWZ1K0xNQmRqZXgyMU8wWG1iRkow?=
 =?utf-8?B?SjZlRGc4RitXd1VQcUZVMUorS1RGQ1RNWUlUNFJFWC9Oa1pDbzZLZzdhcEdk?=
 =?utf-8?B?OFJFR051RmpEbGJTeWN5OWZWdE5mTzhmSERYbGFaQllCN2RhZ3NoSUNyL3ZH?=
 =?utf-8?B?VzZqTWFINHZJV09XL0tCOUZxWHQyMWd1VEJLZzNUYm04TVJGZTV5TmhHdEpl?=
 =?utf-8?B?WkRSTmZOZ1FRN2RLK0FSbWExbC8vQ0J5b2RaVEpEaUVuMExTUUNtVmd2a3Va?=
 =?utf-8?Q?OJ1cvOIxQc3TGJyvlfL3gVzIY?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2df1157f-f1f9-444d-757c-08db5d3508eb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 15:30:56.9499
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XkeHH/2W38sCcDyep8WofYcnn/e9SQ1tsMyDjE5K40BJFK8o1PjayJdaBjN6yP8ZCdDdptgeBtor2+P97VsI7Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8835

On 25.05.2023 17:02, Roger Pau Monné wrote:
> On Thu, May 25, 2023 at 04:39:51PM +0200, Jan Beulich wrote:
>> On 24.05.2023 17:56, Roger Pau Monné wrote:
>>> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
>>>> --- a/xen/drivers/vpci/header.c
>>>> +++ b/xen/drivers/vpci/header.c
>>>> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>>>>      struct vpci_header *header = &pdev->vpci->header;
>>>>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>>>>      struct pci_dev *tmp, *dev = NULL;
>>>> +    const struct domain *d;
>>>>      const struct vpci_msix *msix = pdev->vpci->msix;
>>>>      unsigned int i;
>>>>      int rc;
>>>> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
>>>>  
>>>>      /*
>>>>       * Check for overlaps with other BARs. Note that only BARs that are
>>>> -     * currently mapped (enabled) are checked for overlaps.
>>>> +     * currently mapped (enabled) are checked for overlaps. Note also that
>>>> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
>>>>       */
>>>> -    for_each_pdev ( pdev->domain, tmp )
>>>> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
>>>
>>> Looking at this again, I think this is slightly more complex, as during
>>> runtime dom0 will get here with pdev->domain == hardware_domain OR
>>> dom_xen, and hence you also need to account that devices that have
>>> pdev->domain == dom_xen need to iterate over devices that belong to
>>> the hardware_domain, ie:
>>>
>>> for ( d = pdev->domain; ;
>>>       d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )
>>
>> Right, something along these lines. To keep loop continuation expression
>> and exit condition simple, I'll probably prefer
>>
>> for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
>>       ; d = dom_xen )
> 
> LGTM.  I would add parentheses around the pdev->domain != dom_xen
> condition, but that's just my personal taste.
> 
> We might want to add an
> 
> ASSERT(pdev->domain == hardware_domain || pdev->domain == dom_xen);
> 
> here, just to remind that this chunk must be revisited when adding
> domU support (but you can also argue we haven't done this elsewhere),
> I just feel here it's not so obvious we don't want do to this for
> domUs.

I could add such an assertion, if only ...

>>> And we likely want to limit this to devices that belong to the
>>> hardware_domain or to dom_xen (in preparation for vPCI being used for
>>> domUs).
>>
>> I'm afraid I don't understand this remark, though.
> 
> This was looking forward to domU support, so that you already cater
> for pdev->domain not being hardware_domain or dom_xen, but we might
> want to leave that for later, when domU support is actually
> introduced.

... I understood why this checking doesn't apply to DomU-s as well,
in your opinion.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 15:49:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:49:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539740.840913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2DCv-0004Gz-95; Thu, 25 May 2023 15:48:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539740.840913; Thu, 25 May 2023 15:48:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2DCv-0004Gs-6H; Thu, 25 May 2023 15:48:53 +0000
Received: by outflank-mailman (input) for mailman id 539740;
 Thu, 25 May 2023 15:48:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2DCt-0004Gi-Vp; Thu, 25 May 2023 15:48:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2DCt-0001ue-KP; Thu, 25 May 2023 15:48:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2DCt-0005eW-8F; Thu, 25 May 2023 15:48:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2DCt-0006sA-7n; Thu, 25 May 2023 15:48:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180942-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180942: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=b300c134465465385045ab705b68a42699688332
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 15:48:51 +0000

flight 180942 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180942/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                b300c134465465385045ab705b68a42699688332
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    8 days
Failing since        180699  2023-05-18 07:21:24 Z    7 days   30 attempts
Testing same since   180937  2023-05-25 02:03:36 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 6870 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 25 15:49:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:49:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539743.840922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2DDP-0004YT-Lc; Thu, 25 May 2023 15:49:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539743.840922; Thu, 25 May 2023 15:49:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2DDP-0004YM-Is; Thu, 25 May 2023 15:49:23 +0000
Received: by outflank-mailman (input) for mailman id 539743;
 Thu, 25 May 2023 15:49:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xzDE=BO=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2DDO-0004Y9-3d
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:49:22 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on0628.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::628])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b656cf67-fb13-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 17:49:20 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB6776.eurprd04.prod.outlook.com (2603:10a6:20b:103::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.29; Thu, 25 May
 2023 15:49:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 15:49:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b656cf67-fb13-11ed-b230-6b7b168915f2
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <210e2a81-45a9-50e7-b37c-f4b0d80d0e95@suse.com>
Date: Thu, 25 May 2023 17:49:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4zx+TvUWTCEMh3@Air-de-Roger>
 <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
 <ZG94c9y4j4udFmsy@Air-de-Roger>
 <cedbc257-9ad9-56f1-5060-eaf173d45760@suse.com>
In-Reply-To: <cedbc257-9ad9-56f1-5060-eaf173d45760@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0270.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b5::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB6776:EE_
X-MS-Office365-Filtering-Correlation-Id: 1f8bb3a4-c1d7-4fcb-34de-08db5d3798fc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1f8bb3a4-c1d7-4fcb-34de-08db5d3798fc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 15:49:17.5610
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB6776

On 25.05.2023 17:30, Jan Beulich wrote:
> On 25.05.2023 17:02, Roger Pau Monné wrote:
>> On Thu, May 25, 2023 at 04:39:51PM +0200, Jan Beulich wrote:
>>> On 24.05.2023 17:56, Roger Pau Monné wrote:
>>>> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
>>>>> --- a/xen/drivers/vpci/header.c
>>>>> +++ b/xen/drivers/vpci/header.c
>>>>> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>>>>>      struct vpci_header *header = &pdev->vpci->header;
>>>>>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>>>>>      struct pci_dev *tmp, *dev = NULL;
>>>>> +    const struct domain *d;
>>>>>      const struct vpci_msix *msix = pdev->vpci->msix;
>>>>>      unsigned int i;
>>>>>      int rc;
>>>>> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
>>>>>  
>>>>>      /*
>>>>>       * Check for overlaps with other BARs. Note that only BARs that are
>>>>> -     * currently mapped (enabled) are checked for overlaps.
>>>>> +     * currently mapped (enabled) are checked for overlaps. Note also that
>>>>> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
>>>>>       */
>>>>> -    for_each_pdev ( pdev->domain, tmp )
>>>>> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
>>>>
>>>> Looking at this again, I think this is slightly more complex, as during
>>>> runtime dom0 will get here with pdev->domain == hardware_domain OR
>>>> dom_xen, and hence you also need to account that devices that have
>>>> pdev->domain == dom_xen need to iterate over devices that belong to
>>>> the hardware_domain, ie:
>>>>
>>>> for ( d = pdev->domain; ;
>>>>       d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )
>>>
>>> Right, something along these lines. To keep loop continuation expression
>>> and exit condition simple, I'll probably prefer
>>>
>>> for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
>>>       ; d = dom_xen )
>>
>> LGTM.  I would add parentheses around the pdev->domain != dom_xen
>> condition, but that's just my personal taste.
>>
>> We might want to add an
>>
>> ASSERT(pdev->domain == hardware_domain || pdev->domain == dom_xen);
>>
>> here, just to remind that this chunk must be revisited when adding
>> domU support (but you can also argue we haven't done this elsewhere),
>> I just feel here it's not so obvious we don't want to do this for
>> domUs.
> 
> I could add such an assertion, if only ...
> 
>>>> And we likely want to limit this to devices that belong to the
>>>> hardware_domain or to dom_xen (in preparation for vPCI being used for
>>>> domUs).
>>>
>>> I'm afraid I don't understand this remark, though.
>>
>> This was looking forward to domU support, so that you already cater
>> for pdev->domain not being hardware_domain or dom_xen, but we might
>> want to leave that for later, when domU support is actually
>> introduced.
> 
> ... I understood why this checking doesn't apply to DomU-s as well,
> in your opinion.

Or did you mean that to go inside the if() your patch adds (and hence
my patch won't need to add anymore)? I didn't think you did, because
then it would rather be

ASSERT(d == hardware_domain || d == dom_xen)

imo.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 25 15:52:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 15:52:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539752.840933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2DGW-0006IE-3a; Thu, 25 May 2023 15:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539752.840933; Thu, 25 May 2023 15:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2DGW-0006I7-0r; Thu, 25 May 2023 15:52:36 +0000
Received: by outflank-mailman (input) for mailman id 539752;
 Thu, 25 May 2023 15:52:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sqFY=BO=intel.com=rick.p.edgecombe@srs-se1.protection.inumbo.net>)
 id 1q2DGT-0006I1-R7
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 15:52:33 +0000
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 263829f6-fb14-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 17:52:29 +0200 (CEST)
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 May 2023 08:52:26 -0700
Received: from orsmsx602.amr.corp.intel.com ([10.22.229.15])
 by fmsmga006.fm.intel.com with ESMTP; 25 May 2023 08:52:25 -0700
Received: from orsmsx611.amr.corp.intel.com (10.22.229.24) by
 ORSMSX602.amr.corp.intel.com (10.22.229.15) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 25 May 2023 08:52:24 -0700
Received: from orsmsx610.amr.corp.intel.com (10.22.229.23) by
 ORSMSX611.amr.corp.intel.com (10.22.229.24) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 25 May 2023 08:52:23 -0700
Received: from ORSEDG602.ED.cps.intel.com (10.7.248.7) by
 orsmsx610.amr.corp.intel.com (10.22.229.23) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Thu, 25 May 2023 08:52:23 -0700
Received: from NAM12-DM6-obe.outbound.protection.outlook.com (104.47.59.176)
 by edgegateway.intel.com (134.134.137.103) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Thu, 25 May 2023 08:52:23 -0700
Received: from MN0PR11MB5963.namprd11.prod.outlook.com (2603:10b6:208:372::10)
 by MW4PR11MB6618.namprd11.prod.outlook.com (2603:10b6:303:1ec::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.14; Thu, 25 May
 2023 15:52:20 +0000
Received: from MN0PR11MB5963.namprd11.prod.outlook.com
 ([fe80::6984:19a5:fe1c:dfec]) by MN0PR11MB5963.namprd11.prod.outlook.com
 ([fe80::6984:19a5:fe1c:dfec%6]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 15:52:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 263829f6-fb14-11ed-8611-37d641c3527e
From: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
To: "mic@digikod.net" <mic@digikod.net>, "Christopherson,, Sean"
	<seanjc@google.com>, "dave.hansen@linux.intel.com"
	<dave.hansen@linux.intel.com>, "bp@alien8.de" <bp@alien8.de>,
	"keescook@chromium.org" <keescook@chromium.org>, "hpa@zytor.com"
	<hpa@zytor.com>, "mingo@redhat.com" <mingo@redhat.com>, "tglx@linutronix.de"
	<tglx@linutronix.de>, "pbonzini@redhat.com" <pbonzini@redhat.com>,
	"wanpengli@tencent.com" <wanpengli@tencent.com>, "vkuznets@redhat.com"
	<vkuznets@redhat.com>
CC: "kvm@vger.kernel.org" <kvm@vger.kernel.org>, "qemu-devel@nongnu.org"
	<qemu-devel@nongnu.org>, "liran.alon@oracle.com" <liran.alon@oracle.com>,
	"marian.c.rotariu@gmail.com" <marian.c.rotariu@gmail.com>, "Graf, Alexander"
	<graf@amazon.com>, "Andersen, John S" <john.s.andersen@intel.com>,
	"madvenka@linux.microsoft.com" <madvenka@linux.microsoft.com>,
	"ssicleru@bitdefender.com" <ssicleru@bitdefender.com>, "yuanyu@google.com"
	<yuanyu@google.com>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "tgopinath@microsoft.com"
	<tgopinath@microsoft.com>, "jamorris@linux.microsoft.com"
	<jamorris@linux.microsoft.com>, "linux-security-module@vger.kernel.org"
	<linux-security-module@vger.kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "will@kernel.org" <will@kernel.org>,
	"dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>,
	"mdontu@bitdefender.com" <mdontu@bitdefender.com>,
	"linux-hardening@vger.kernel.org" <linux-hardening@vger.kernel.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>, "nicu.citu@icloud.com"
	<nicu.citu@icloud.com>, "ztarkhani@microsoft.com" <ztarkhani@microsoft.com>,
	"x86@kernel.org" <x86@kernel.org>
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Thread-Topic: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Thread-Index: AQHZf2Ve5xw6RDm4tUGLnlOpizAoCa9qHQcAgAEGdICAAB9/gA==
Date: Thu, 25 May 2023 15:52:20 +0000
Message-ID: <7cb6c4c28c077bb9f866c2d795e918610e77d49f.camel@intel.com>
References: <20230505152046.6575-1-mic@digikod.net>
	 <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
	 <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net>
In-Reply-To: <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Evolution 3.44.4-0ubuntu1 
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: MN0PR11MB5963:EE_|MW4PR11MB6618:EE_
x-ms-office365-filtering-correlation-id: ce685577-ebcb-4eba-34ba-08db5d3805fc
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <7EF2B76737B290498C4DD414B74D2308@namprd11.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MN0PR11MB5963.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ce685577-ebcb-4eba-34ba-08db5d3805fc
X-MS-Exchange-CrossTenant-originalarrivaltime: 25 May 2023 15:52:20.2873
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR11MB6618
X-OriginatorOrg: intel.com

On Thu, 2023-05-25 at 15:59 +0200, Mickaël Salaün wrote:
[ snip ]

> > The kernel often creates writable aliases in order to write to
> > protected data (kernel text, etc). Some of this is done right as
> > text is being first written out (alternatives for example), and
> > some happens way later (jump labels, etc). So for verification, I
> > wonder what stage you would be verifying? If you want to verify the
> > end state, you would have to maintain knowledge in the verifier of
> > all the touch-ups the kernel does. I think it would get very tricky.
> 
> For now, in the static kernel case, all rodata and text GPA is
> restricted, so aliasing such memory in a writable way before or after
> the KVM enforcement would still restrict write access to this memory,
> which could be an issue but not a security one. Do you have such
> examples in mind?
> 

On x86, look at all the callers of the text_poke() family. In
arch/x86/include/asm/text-patching.h.

> 
> > 
> > It also seems it will be a decent ask for the guest kernel to keep
> > track of GPA permissions as well as normal virtual memory
> > permissions, if this thing is not widely used.
> 
> This would indeed be required to properly handle the dynamic cases.
> 
> 
> > 
> > So I wonder if you could go in two directions with this:
> > 1. Make this a feature only for super locked down kernels (no
> > modules, etc). Forbid any configurations that might modify text.
> > But eBPF is used for seccomp, so you might be turning off some
> > security protections to get this.
> 
> Good idea. For "super locked down kernels" :), we should disable all
> kernel executable changes with the related kernel build configuration
> (e.g. eBPF JIT, kernel module, kprobes…) to make sure there is no
> such legitimate access. This looks like an acceptable initial feature.

How many users do you think will want this protection but not
protections that would have to be disabled? The main one that came to
mind for me is cBPF seccomp stuff.

But also, the alternative to JITing cBPF is the eBPF interpreter which,
AFAIU, is considered a juicy enough target for speculative attacks that
they created an option to compile it out. And leaving an interpreter in
the kernel means any data could be "executed" in the normal non-
speculative scenario, kind of working around the hypervisor executable
protections. Dropping e/cBPF entirely would be an option, but then I
wonder how many users you have left. Hopefully that is all correct;
it's hard to keep track with the pace of BPF development.

I wonder if it might be a good idea to POC the guest side before
settling on the KVM interface. Then you can also look at the whole
thing and judge how much usage it would get for the different options
of restrictions.

> 
> 
> > 2. Loosen the rules to allow the protections to not be so one-way
> > enable. Get less security, but used more widely.
> 
> This is our goal. I think both static and dynamic cases are
> legitimate and have value according to the level of security sought.
> This should be a build-time configuration.

Yea, the proper way to do this is probably to move all text handling
stuff into a separate domain of some sort, like you mentioned
elsewhere. It would be quite a job.


From xen-devel-bounces@lists.xenproject.org Thu May 25 16:07:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 16:07:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539761.840943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2DUY-00007b-Gx; Thu, 25 May 2023 16:07:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539761.840943; Thu, 25 May 2023 16:07:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2DUY-00007U-Cc; Thu, 25 May 2023 16:07:06 +0000
Received: by outflank-mailman (input) for mailman id 539761;
 Thu, 25 May 2023 16:07:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=trtu=BO=flex--seanjc.bounces.google.com=3pYdvZAYKCZwOA6JF8CKKCHA.8KITAJ-9ARAHHEOPO.TAJLNKFA8P.KNC@srs-se1.protection.inumbo.net>)
 id 1q2DUX-00007M-Fg
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 16:07:05 +0000
Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com
 [2607:f8b0:4864:20::b49])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2fac2890-fb16-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 18:07:03 +0200 (CEST)
Received: by mail-yb1-xb49.google.com with SMTP id
 3f1490d57ef6-ba8cf175f5bso1669442276.0
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 09:07:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fac2890-fb16-11ed-8611-37d641c3527e
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a25:6584:0:b0:ba8:381b:f764 with SMTP id
 z126-20020a256584000000b00ba8381bf764mr2233557ybb.3.1685030821985; Thu, 25
 May 2023 09:07:01 -0700 (PDT)
Date: Thu, 25 May 2023 09:07:00 -0700
In-Reply-To: <7cb6c4c28c077bb9f866c2d795e918610e77d49f.camel@intel.com>
Mime-Version: 1.0
References: <20230505152046.6575-1-mic@digikod.net> <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
 <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net> <7cb6c4c28c077bb9f866c2d795e918610e77d49f.camel@intel.com>
Message-ID: <ZG+HpFjIuSWvyo+B@google.com>
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
From: Sean Christopherson <seanjc@google.com>
To: Rick P Edgecombe <rick.p.edgecombe@intel.com>
Cc: "mic@digikod.net" <mic@digikod.net>, 
	"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>, "bp@alien8.de" <bp@alien8.de>, 
	"keescook@chromium.org" <keescook@chromium.org>, "hpa@zytor.com" <hpa@zytor.com>, 
	"mingo@redhat.com" <mingo@redhat.com>, "tglx@linutronix.de" <tglx@linutronix.de>, 
	"pbonzini@redhat.com" <pbonzini@redhat.com>, "wanpengli@tencent.com" <wanpengli@tencent.com>, 
	"vkuznets@redhat.com" <vkuznets@redhat.com>, "kvm@vger.kernel.org" <kvm@vger.kernel.org>, 
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>, "liran.alon@oracle.com" <liran.alon@oracle.com>, 
	"marian.c.rotariu@gmail.com" <marian.c.rotariu@gmail.com>, Alexander Graf <graf@amazon.com>, 
	John S Andersen <john.s.andersen@intel.com>, 
	"madvenka@linux.microsoft.com" <madvenka@linux.microsoft.com>, 
	"ssicleru@bitdefender.com" <ssicleru@bitdefender.com>, "yuanyu@google.com" <yuanyu@google.com>, 
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, 
	"tgopinath@microsoft.com" <tgopinath@microsoft.com>, 
	"jamorris@linux.microsoft.com" <jamorris@linux.microsoft.com>, 
	"linux-security-module@vger.kernel.org" <linux-security-module@vger.kernel.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "will@kernel.org" <will@kernel.org>, 
	"dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>, 
	"mdontu@bitdefender.com" <mdontu@bitdefender.com>, 
	"linux-hardening@vger.kernel.org" <linux-hardening@vger.kernel.org>, 
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>, 
	"virtualization@lists.linux-foundation.org" <virtualization@lists.linux-foundation.org>, 
	"nicu.citu@icloud.com" <nicu.citu@icloud.com>, "ztarkhani@microsoft.com" <ztarkhani@microsoft.com>, 
	"x86@kernel.org" <x86@kernel.org>
Content-Type: text/plain; charset="us-ascii"

On Thu, May 25, 2023, Rick P Edgecombe wrote:
> I wonder if it might be a good idea to POC the guest side before
> settling on the KVM interface. Then you can also look at the whole
> thing and judge how much usage it would get for the different options
> of restrictions.

As I said earlier[*], IMO the control plane logic needs to live in host userspace.
I think any attempt to have KVM provide anything but the low level plumbing will
suffer the same fate as CR4 pinning and XO memory.  Iterating on an imperfect
solution to incrementally improve security is far, far easier to do in userspace,
and far more likely to get merged.

[*] https://lore.kernel.org/all/ZFUyhPuhtMbYdJ76@google.com


From xen-devel-bounces@lists.xenproject.org Thu May 25 16:22:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 16:22:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539765.840953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Dj9-0002dv-Os; Thu, 25 May 2023 16:22:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539765.840953; Thu, 25 May 2023 16:22:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Dj9-0002do-M0; Thu, 25 May 2023 16:22:11 +0000
Received: by outflank-mailman (input) for mailman id 539765;
 Thu, 25 May 2023 16:22:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Dj8-0002de-PV; Thu, 25 May 2023 16:22:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Dj8-0003Aw-EO; Thu, 25 May 2023 16:22:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Dj8-0006Qt-33; Thu, 25 May 2023 16:22:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Dj8-0001Jh-2T; Thu, 25 May 2023 16:22:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tvPgIpFivIzl2ZPAJlToNDn01ojWLUjJ1aWNJtBkZIE=; b=xr2jLRn4k7ygUksD3qKzOHO/E5
	MlkXNhurQj3GgTm2ZTVIcpM7igM0dZzm+FFf+BPsbg06fDMpsKVFZXc/8BJ8bGar9h9vrefKhZkh/
	L8H1/jWgwl8WfyJ0HVgG2QttGcdh5di804iZvCwkqSmZ0pvg2Rz73Cgt/PvpaVyqx/24=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180939-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180939: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=96c8d39af007000daf3d5dfa845365f66379aaac
X-Osstest-Versions-That:
    libvirt=44a0f2f0c8ff5e78c238013ed297b8fce223ac5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 16:22:10 +0000

flight 180939 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180939/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180924
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180924
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180924
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              96c8d39af007000daf3d5dfa845365f66379aaac
baseline version:
 libvirt              44a0f2f0c8ff5e78c238013ed297b8fce223ac5a

Last test of basis   180924  2023-05-24 04:20:20 Z    1 days
Testing same since   180939  2023-05-25 04:20:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Lin Yang <lin.a.yang@intel.com>
  Tim Wiederhake <twiederh@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   44a0f2f0c8..96c8d39af0  96c8d39af007000daf3d5dfa845365f66379aaac -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 25 16:33:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 16:33:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539771.840962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Dtg-0004BV-Ou; Thu, 25 May 2023 16:33:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539771.840962; Thu, 25 May 2023 16:33:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Dtg-0004BO-MB; Thu, 25 May 2023 16:33:04 +0000
Received: by outflank-mailman (input) for mailman id 539771;
 Thu, 25 May 2023 16:33:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PUrY=BO=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1q2Dtf-0004Ax-1B
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 16:33:03 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cf578314-fb19-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 18:32:59 +0200 (CEST)
Received: by mail-wr1-x433.google.com with SMTP id
 ffacd0b85a97d-30ab87a1897so737285f8f.1
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 09:32:59 -0700 (PDT)
Received: from [10.95.98.145] (54-240-197-226.amazon.com. [54.240.197.226])
 by smtp.gmail.com with ESMTPSA id
 a15-20020a5d508f000000b002ceacff44c7sm2245602wrt.83.2023.05.25.09.32.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 25 May 2023 09:32:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf578314-fb19-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685032378; x=1687624378;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:from:to:cc:subject:date:message-id:reply-to;
        bh=VGXhwShJtq5AojMIg498oGlzsbOIfhMUdgCHR/Nlq2g=;
        b=KZ8H4xV7Q0YczJWdvBXuGZiLYdvqjjHyRV/1tz+UK8vGo05Veam+EPARyikJrD4iSL
         inJqPW3ci9sGwtiCi9A3i9yyA0nOjKEfqxTNjValKA1qqwa/+y4ERbjguyTYPn3xMDwN
         f4DxNmBdVCQ2I2mp1/iZBQyR8vSrcgTcjC9S/IidGKeNweEjKnSZpebpWrpGdtCVbFRh
         t4OaDQhbG2p1s/HNyeDYtcbxzzn9K+yxIiuYQvIU9W8GRZqV30ZTnM47SUdNGOuEEahX
         m3CW8fB0HBUZrrXjYVDvTDW/IuVRY95tjy4qtFtLmPYv5oR9JaC0mW8eLlo0Rw9+gkut
         ytxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685032378; x=1687624378;
        h=content-transfer-encoding:in-reply-to:organization:references:cc:to
         :content-language:subject:reply-to:user-agent:mime-version:date
         :message-id:from:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=VGXhwShJtq5AojMIg498oGlzsbOIfhMUdgCHR/Nlq2g=;
        b=ECe3X06cNUmAyweLfd6vPHRNDCTsLvfMWRdXrWklHs1cIkfq0lYT8h800NzwuH7h92
         zljg6p8+fT0vUK52bnH5gxh0a9QLVf7ZCA+ba9hWWSnRS7/tb3PhnQ1pDLp3ph2pROME
         SY4fecSEwlRdS9m/6Fu6SRK8nUkript8SAAAUyFCdFo45L20oWZxeosIEs88ITql5Q+W
         BvyWltmPQdQdXvrw+QlzNceLxsO9xpJZeSdLHsefm8GZ35hKxT1lyBtxmp/pLXJt1yiV
         kknF17D4dc+iFweHEKRl5zsfXApSgCIrMhnt4+NRRVNPEopvoIhXxB8NgPhUlMT8n4eI
         8Xrg==
X-Gm-Message-State: AC+VfDwkp0uV0zf0P/L8ZohZ0sUeCuf+3awp8aYV4GBCsap6C7r2VUY4
	0eTMIrApviJaa/iUJ1YblHs=
X-Google-Smtp-Source: ACHHUZ6S4mOH9xjAEGvKpNWZjARkRluwpr/syuFuaxhIXRseNcOolvZbC3ZIiCDBpN+uyVMVJ5wm1g==
X-Received: by 2002:a5d:494f:0:b0:309:4988:7f83 with SMTP id r15-20020a5d494f000000b0030949887f83mr2823677wrs.20.1685032378193;
        Thu, 25 May 2023 09:32:58 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <1527f564-1e41-9ea5-4ef1-249988b81c04@xen.org>
Date: Thu, 25 May 2023 17:32:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.10.1
Reply-To: paul@xen.org
Subject: Re: [PATCH v1] x86/hvm/ioreq: remove empty line after function
 declaration
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230525152527.10281-1-olaf@aepfle.de>
Organization: Xen Project
In-Reply-To: <20230525152527.10281-1-olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 25/05/2023 16:25, Olaf Hering wrote:
> Introduced in commit 6ddfaabceeec3c31bc97a7208c46f581de55f71d
> ("x86/hvm/ioreq: simplify code and use consistent naming").
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>   xen/arch/x86/hvm/ioreq.c | 1 -
>   1 file changed, 1 deletion(-)
> 

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Thu May 25 17:00:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 17:00:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539776.840976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2EKM-0007bg-UT; Thu, 25 May 2023 17:00:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539776.840976; Thu, 25 May 2023 17:00:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2EKM-0007bZ-Rh; Thu, 25 May 2023 17:00:38 +0000
Received: by outflank-mailman (input) for mailman id 539776;
 Thu, 25 May 2023 17:00:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JN8z=BO=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q2EKL-0007bT-1P
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 17:00:37 +0000
Received: from mail-yb1-xb32.google.com (mail-yb1-xb32.google.com
 [2607:f8b0:4864:20::b32])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aa906900-fb1d-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 19:00:36 +0200 (CEST)
Received: by mail-yb1-xb32.google.com with SMTP id
 3f1490d57ef6-ba71cd7ce7fso1180553276.1
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 10:00:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa906900-fb1d-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685034035; x=1687626035;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=JHrTsqL0nqeqxBvVw7n3mE3d9UFJpL2j8If6jZFByTc=;
        b=VEXX4D+/ehgygAWm5BmRxKMQVayX+huiD86BmHpUJbQZIavyZk8MqgFgBVJCgltW8G
         DGf1XLm8HhYD7C5Ltu5hncRV4xkFiRLvjnKAKUZAVqhHyZU7uDae6IK3C5qSD3GFgpGy
         OqLSmV63QlFHc9BLqOH+z3/46t/qsNvryv2gOmzRQJTarAzeFF3wkmaawd/u1vYzudf+
         2OoFyE7ArYkSpPwfiPG4wUCBT5egkmxxlXwXAVhzZYomT8xAIzGjfTrChNVAlbK2FaKO
         3t03QrSsNM3MzVGbAgb9jEqdKkZ/GGbY36ZIzPCKupNWpFQuCEQc+t41/FAd5d9jW5uu
         ry0A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685034035; x=1687626035;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=JHrTsqL0nqeqxBvVw7n3mE3d9UFJpL2j8If6jZFByTc=;
        b=dupNkAB+VaN4dGOJexfTrVsAZln0lPKMC7J8DuJLICWsrGgK348UoEcjKeBlGRmEMy
         UP3VpgqVCcBN/k5EP+pyfP2Qsa0fwp88WpgkvwLNIwR2z1CeuybINUrN0mXA76B4FtI5
         yU5qCapWrkapKVENWUDoZ0/MGuYYS1CKFZb9F4gNQgForoC8YlECjnoocHKO2WSwTsP5
         +Su3SKoDe/51Gavzr8oENrtqGOicu+diYAYVjrJIFjE7NOWjnTA+H48NDftwLwu2WhZL
         rvV3OldGTeWTwYGWBMctfSXEbBc0Id5TuUE0BjpGlxocfvyJTZJ7JKsmO4PBvfA26YmK
         uj/Q==
X-Gm-Message-State: AC+VfDyKfMWYCVGCkCxfyQEEuN4ysxpuHXZWGmFY3h/Z/sJZFNCdeyi7
	4dQ1aRCZPqL/mfm498uPLYUg43iJtAJ0nUTbxeA=
X-Google-Smtp-Source: ACHHUZ47ymCJWriUjad8DQ05hIMcCKin/PTMMU7nNv3HHwEyV9e0xB3DhHdcf1Fkkg8cgW7lAUtL7p6ayG4dp+Ur5MI=
X-Received: by 2002:a25:2487:0:b0:ba8:5ded:13f3 with SMTP id
 k129-20020a252487000000b00ba85ded13f3mr4152229ybk.17.1685034034679; Thu, 25
 May 2023 10:00:34 -0700 (PDT)
MIME-Version: 1.0
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-2-vishal.moola@gmail.com> <20230525085555.GV4967@kernel.org>
In-Reply-To: <20230525085555.GV4967@kernel.org>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Thu, 25 May 2023 10:00:23 -0700
Message-ID: <CAOzc2pxx489C26NnS9NHkUQY9PYiagzt-nYK6LnkJ1N3NYQWzg@mail.gmail.com>
Subject: Re: [PATCH v2 01/34] mm: Add PAGE_TYPE_OP folio functions
To: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 25, 2023 at 1:56 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> Hi,
>
> On Mon, May 01, 2023 at 12:27:56PM -0700, Vishal Moola (Oracle) wrote:
> > No folio equivalents for page type operations have been defined, so
> > define them for later folio conversions.
>
> Can you please elaborate why would we need folios for page table descriptors?

Thanks for the review!

These macros are for callers that care about the page type, i.e. Table and
Buddy. Aside from accounting for those cases, the page tables don't use folios.
These are more for the cleanliness of those callers.


From xen-devel-bounces@lists.xenproject.org Thu May 25 17:12:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 17:12:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539780.840986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2EVq-0000ge-W3; Thu, 25 May 2023 17:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539780.840986; Thu, 25 May 2023 17:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2EVq-0000gX-SU; Thu, 25 May 2023 17:12:30 +0000
Received: by outflank-mailman (input) for mailman id 539780;
 Thu, 25 May 2023 17:12:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JN8z=BO=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q2EVp-0000gR-9F
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 17:12:29 +0000
Received: from mail-oa1-x2c.google.com (mail-oa1-x2c.google.com
 [2001:4860:4864:20::2c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5231282a-fb1f-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 19:12:27 +0200 (CEST)
Received: by mail-oa1-x2c.google.com with SMTP id
 586e51a60fabf-19edebe85adso969320fac.2
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 10:12:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5231282a-fb1f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685034745; x=1687626745;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=H8r4ODEq2+fLCDOwkU0I9uMZzQGsmacbN0I7clioZxg=;
        b=hlN2P2+4u+i/qYY499lGWAoyNY/dboRXOhJznjo9Nm7b8P8cZFR+c+0fCkL2rW25CD
         +hZ5J0+Ex4V3ScR/Xe/US44TDdlNVBI/xdsG40Ua1AXHZ442jMGry74GV2BfJq0J5IYV
         u7lelVrP8y0maLYqq75TrtqE6TS9ejRf7OJtZV90tVtzTBQB8gLHvCfhdBCn30nX09L2
         ck5ghpRGKOeV6W36JYgZ3HGXIa/8ueDRuLasr3XHrrQINIR3LY+3Qk5dm5YcGu7+Oweg
         Z/YZFzs6cf7vI3ciL7ceyYyXeS7nwrYpgRm5g9mSuxHdlnBWIXgAoNtf5095aMXhJIqs
         nB3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685034745; x=1687626745;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=H8r4ODEq2+fLCDOwkU0I9uMZzQGsmacbN0I7clioZxg=;
        b=ZWeXtbH6ew++Z41Tf06sSqUJaK28Uv8mHLr8MihTS3g7IyLuxz66X3kN92MPmzLnbD
         0DI4mhYdnueK+lTYrrJvxqCWslaFBsP/wrurAuLtjsHHPrN/lDt21vEhWtbOM7/st8H8
         zcpdMK2XRObL91Hj4tZeOUx7yzaDKMki2nDxwZUOfCh+k/pEyNgygH9YVluwGGMgVrI4
         UkCv/h8UWjizrc7SlREAo4xswxdMNiASUWfOyt2ueTHMETPe0as/0xgNOW9j3BKM6HGu
         yko6TJsaP6bfiPxBQ14k2QG6bPgEbSpmFi3w50mB4sbYAgVZeVYKTNhOpUt3fkHTu5qV
         j9lQ==
X-Gm-Message-State: AC+VfDwRMnjSoQ96sydMr/k5NPP8MLaxr6qi6c3e5Re9aW263A5e0NmX
	74e4mig0j99Dxpy5I1FpX0m2pVFb3PSqH0/QXwE=
X-Google-Smtp-Source: ACHHUZ7vTuIkb3AY2emRZbpynC4n929WANyhiAk9Z8pBe93X++8c36ksUYj3vhKz06//2+LG6F86/hC/ahr56+n93g0=
X-Received: by 2002:a05:6871:505:b0:19a:1694:f03f with SMTP id
 s5-20020a056871050500b0019a1694f03fmr1875889oal.47.1685034745319; Thu, 25 May
 2023 10:12:25 -0700 (PDT)
MIME-Version: 1.0
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-3-vishal.moola@gmail.com> <20230525085819.GW4967@kernel.org>
In-Reply-To: <20230525085819.GW4967@kernel.org>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Thu, 25 May 2023 10:12:14 -0700
Message-ID: <CAOzc2pw63URkr08q4_VP+3wbRDnFjyUE8zxQrvQtnJ5kbtGhFg@mail.gmail.com>
Subject: Re: [PATCH v2 02/34] s390: Use _pt_s390_gaddr for gmap address tracking
To: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org, 
	David Hildenbrand <david@redhat.com>, Claudio Imbrenda <imbrenda@linux.ibm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 25, 2023 at 1:58 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Mon, May 01, 2023 at 12:27:57PM -0700, Vishal Moola (Oracle) wrote:
> > s390 uses page->index to keep track of page tables for the guest address
> > space. In an attempt to consolidate the usage of page fields in s390,
> > replace _pt_pad_2 with _pt_s390_gaddr to replace page->index in gmap.
> >
> > This will help with the splitting of struct ptdesc from struct page, as
> > well as allow s390 to use _pt_frag_refcount for fragmented page table
> > tracking.
> >
> > Since page->_pt_s390_gaddr aliases with mapping, ensure it is set to NULL
> > before freeing the pages as well.
>
> Wouldn't it be easier to use _pt_pad_1 which is aliased with lru and that
> does not seem to be used by page tables at all?

I initially thought the same, but s390 page tables use lru.


From xen-devel-bounces@lists.xenproject.org Thu May 25 17:14:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 17:14:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539786.840996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2EXA-0001Hi-Dc; Thu, 25 May 2023 17:13:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539786.840996; Thu, 25 May 2023 17:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2EXA-0001Hb-Ae; Thu, 25 May 2023 17:13:52 +0000
Received: by outflank-mailman (input) for mailman id 539786;
 Thu, 25 May 2023 17:13:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ3/=BO=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q2EX9-0001HN-Nu
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 17:13:51 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 837ac1f3-fb1f-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 19:13:49 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-219-f84HUxG6PwGfwNYI4AHG2A-1; Thu, 25 May 2023 13:13:44 -0400
Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com
 [10.11.54.9])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D8B86811E78;
 Thu, 25 May 2023 17:13:43 +0000 (UTC)
Received: from localhost (unknown [10.39.192.20])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E46F7492B00;
 Thu, 25 May 2023 17:13:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 837ac1f3-fb1f-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685034828;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SdjpHAWvX8URSv7CTSo6yI4LC/WzXYrrywaCg9aNOmQ=;
	b=NeSZC3olCqtP5SI5BnFAEFAt+8JykoVjmmDYb7hP9YY+2gWunXH6LvuAX4tRkQZlnOTven
	swCkRC0RKyv0u5OzA/GtaxNPXTx6/blsn/R7u2zoxEsIBokrwJRjTqP7Aa6Q/J7i/yoC7H
	MCSWntT4uB9Et5CXlGWIRnw5eqinl4U=
X-MC-Unique: f84HUxG6PwGfwNYI4AHG2A-1
Date: Wed, 24 May 2023 15:36:34 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Philippe Mathieu-Daudé <philmd@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Hanna Reitz <hreitz@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
	xen-devel@lists.xenproject.org, eblake@redhat.com,
	Anthony Perard <anthony.perard@citrix.com>, qemu-block@nongnu.org
Subject: Re: [PATCH v2 5/6] block/linux-aio: convert to blk_io_plug_call() API
Message-ID: <20230524193634.GB17357@fedora>
References: <20230523171300.132347-1-stefanha@redhat.com>
 <20230523171300.132347-6-stefanha@redhat.com>
 <n6hik7dbl26lomhxvfal2kjrq6jhdiknjepb372dvxavuwiw6q@3l3mo4eywoxq>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="GYXeKr/XNUj7Tdkf"
Content-Disposition: inline
In-Reply-To: <n6hik7dbl26lomhxvfal2kjrq6jhdiknjepb372dvxavuwiw6q@3l3mo4eywoxq>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9


--GYXeKr/XNUj7Tdkf
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, May 24, 2023 at 10:52:03AM +0200, Stefano Garzarella wrote:
> On Tue, May 23, 2023 at 01:12:59PM -0400, Stefan Hajnoczi wrote:
> > Stop using the .bdrv_co_io_plug() API because it is not multi-queue
> > block layer friendly. Use the new blk_io_plug_call() API to batch I/O
> > submission instead.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > Reviewed-by: Eric Blake <eblake@redhat.com>
> > ---
> > include/block/raw-aio.h |  7 -------
> > block/file-posix.c      | 28 ----------------------------
> > block/linux-aio.c       | 41 +++++++++++------------------------------
> > 3 files changed, 11 insertions(+), 65 deletions(-)
> >
> > diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
> > index da60ca13ef..0f63c2800c 100644
> > --- a/include/block/raw-aio.h
> > +++ b/include/block/raw-aio.h
> > @@ -62,13 +62,6 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
> >
> > void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
> > void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
> > -
> > -/*
> > - * laio_io_plug/unplug work in the thread's current AioContext, therefore the
> > - * caller must ensure that they are paired in the same IOThread.
> > - */
> > -void laio_io_plug(void);
> > -void laio_io_unplug(uint64_t dev_max_batch);
> > #endif
> > /* io_uring.c - Linux io_uring implementation */
> > #ifdef CONFIG_LINUX_IO_URING
> > diff --git a/block/file-posix.c b/block/file-posix.c
> > index 7baa8491dd..ac1ed54811 100644
> > --- a/block/file-posix.c
> > +++ b/block/file-posix.c
> > @@ -2550,26 +2550,6 @@ static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
> >     return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
> > }
> >
> > -static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
> > -{
> > -    BDRVRawState __attribute__((unused)) *s = bs->opaque;
> > -#ifdef CONFIG_LINUX_AIO
> > -    if (s->use_linux_aio) {
> > -        laio_io_plug();
> > -    }
> > -#endif
> > -}
> > -
> > -static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
> > -{
> > -    BDRVRawState __attribute__((unused)) *s = bs->opaque;
> > -#ifdef CONFIG_LINUX_AIO
> > -    if (s->use_linux_aio) {
> > -        laio_io_unplug(s->aio_max_batch);
> > -    }
> > -#endif
> > -}
> > -
> > static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
> > {
> >     BDRVRawState *s = bs->opaque;
> > @@ -3914,8 +3894,6 @@ BlockDriver bdrv_file = {
> >     .bdrv_co_copy_range_from = raw_co_copy_range_from,
> >     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
> >     .bdrv_refresh_limits = raw_refresh_limits,
> > -    .bdrv_co_io_plug        = raw_co_io_plug,
> > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
> >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> >
> >     .bdrv_co_truncate                   = raw_co_truncate,
> > @@ -4286,8 +4264,6 @@ static BlockDriver bdrv_host_device = {
> >     .bdrv_co_copy_range_from = raw_co_copy_range_from,
> >     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
> >     .bdrv_refresh_limits = raw_refresh_limits,
> > -    .bdrv_co_io_plug        = raw_co_io_plug,
> > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
> >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> >
> >     .bdrv_co_truncate                   = raw_co_truncate,
> > @@ -4424,8 +4400,6 @@ static BlockDriver bdrv_host_cdrom = {
> >     .bdrv_co_pwritev        = raw_co_pwritev,
> >     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
> >     .bdrv_refresh_limits    = cdrom_refresh_limits,
> > -    .bdrv_co_io_plug        = raw_co_io_plug,
> > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
> >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> >
> >     .bdrv_co_truncate                   = raw_co_truncate,
> > @@ -4552,8 +4526,6 @@ static BlockDriver bdrv_host_cdrom = {
> >     .bdrv_co_pwritev        = raw_co_pwritev,
> >     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
> >     .bdrv_refresh_limits    = cdrom_refresh_limits,
> > -    .bdrv_co_io_plug        = raw_co_io_plug,
> > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
> >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> >
> >     .bdrv_co_truncate                   = raw_co_truncate,
> > diff --git a/block/linux-aio.c b/block/linux-aio.c
> > index 442c86209b..5021aed68f 100644
> > --- a/block/linux-aio.c
> > +++ b/block/linux-aio.c
> > @@ -15,6 +15,7 @@
> > #include "qemu/event_notifier.h"
> > #include "qemu/coroutine.h"
> > #include "qapi/error.h"
> > +#include "sysemu/block-backend.h"
> >
> > /* Only used for assertions.  */
> > #include "qemu/coroutine_int.h"
> > @@ -46,7 +47,6 @@ struct qemu_laiocb {
> > };
> >
> > typedef struct {
> > -    int plugged;
> >     unsigned int in_queue;
> >     unsigned int in_flight;
> >     bool blocked;
> > @@ -236,7 +236,7 @@ static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
> > {
> >     qemu_laio_process_completions(s);
> >
> > -    if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
> > +    if (!QSIMPLEQ_EMPTY(&s->io_q.pending)) {
> >         ioq_submit(s);
> >     }
> > }
> > @@ -277,7 +277,6 @@ static void qemu_laio_poll_ready(EventNotifier *opaque)
> > static void ioq_init(LaioQueue *io_q)
> > {
> >     QSIMPLEQ_INIT(&io_q->pending);
> > -    io_q->plugged = 0;
> >     io_q->in_queue = 0;
> >     io_q->in_flight = 0;
> >     io_q->blocked = false;
> > @@ -354,31 +353,11 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
> >     return max_batch;
> > }
> >
> > -void laio_io_plug(void)
> > +static void laio_unplug_fn(void *opaque)
> > {
> > -    AioContext *ctx = qemu_get_current_aio_context();
> > -    LinuxAioState *s = aio_get_linux_aio(ctx);
> > +    LinuxAioState *s = opaque;
> >
> > -    s->io_q.plugged++;
> > -}
> > -
> > -void laio_io_unplug(uint64_t dev_max_batch)
> > -{
> > -    AioContext *ctx = qemu_get_current_aio_context();
> > -    LinuxAioState *s = aio_get_linux_aio(ctx);
> > -
> > -    assert(s->io_q.plugged);
> > -    s->io_q.plugged--;
> > -
> > -    /*
> > -     * Why max batch checking is performed here:
> > -     * Another BDS may have queued requests with a higher dev_max_batch and
> > -     * therefore in_queue could now exceed our dev_max_batch. Re-check the max
> > -     * batch so we can honor our device's dev_max_batch.
> > -     */
> > -    if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch) ||
>
> Why are we removing this condition?
> Could the same situation occur with the new API?

The semantics of unplug_fn() are different from .bdrv_co_unplug():
1. unplug_fn() is only called when the last blk_io_unplug() call occurs,
   not every time blk_io_unplug() is called.
2. unplug_fn() is per-thread, not per-BlockDriverState, so there is no
   way to get per-BlockDriverState fields like dev_max_batch.

Therefore this condition cannot be moved to laio_unplug_fn().

How important is this condition? I believe that dropping it does not
have much of an effect but maybe I missed something.

Also, does it make sense to define per-BlockDriverState batching limits
when the AIO engine (Linux AIO or io_uring) is thread-local and shared
between all BlockDriverStates? I believe the fundamental reason (that we
discovered later) why dev_max_batch is effective is that the Linux
kernel processes 32 I/O request submissions at a time. Anything above 32
adds latency without a batching benefit.

Stefan

--GYXeKr/XNUj7Tdkf
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmRuZ0IACgkQnKSrs4Gr
c8h/zwgAkZBovDywM+nj3pwJJe206j/dS6hWP+DFJEHkWevLQELYi6+0p5Vxt94o
1Ri/jvcDuX+JPYuDiWJ5VVahwfnU4Je4NtU5XX5HDWKRcstewKDalmdQeePchKXX
0T/x2oGpn4GVZGXle/mxqmkm9A1Nf6amsdz/4baA1WDGJQHsRNPN7WBRL4RKj6UQ
ENIqYrhzXma5dlXadstRE9cjKcwgD2Te/4sh3gKB7axxjQYzFlXtxsoWxw9zlrEZ
feHF3vznLdocukGw/BR+sXLAKxd591Hdaw/o8LKrJQclLSNvOnC43d9vAN/PrFwU
E2o8ur8zGvrC5ggwZhrZ/uhfbbLDFg==
=Hpse
-----END PGP SIGNATURE-----

--GYXeKr/XNUj7Tdkf--



From xen-devel-bounces@lists.xenproject.org Thu May 25 17:51:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 17:51:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539790.841005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2F7e-0005fI-8V; Thu, 25 May 2023 17:51:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539790.841005; Thu, 25 May 2023 17:51:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2F7e-0005fB-5x; Thu, 25 May 2023 17:51:34 +0000
Received: by outflank-mailman (input) for mailman id 539790;
 Thu, 25 May 2023 17:51:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x5Sm=BO=amd.com=Stewart.Hildebrand@srs-se1.protection.inumbo.net>)
 id 1q2F7d-0005f5-2t
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 17:51:33 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20628.outbound.protection.outlook.com
 [2a01:111:f400:7e8a::628])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c6fe3548-fb24-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 19:51:30 +0200 (CEST)
Received: from BN8PR04CA0003.namprd04.prod.outlook.com (2603:10b6:408:70::16)
 by CH3PR12MB8186.namprd12.prod.outlook.com (2603:10b6:610:129::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 17:51:27 +0000
Received: from BN8NAM11FT032.eop-nam11.prod.protection.outlook.com
 (2603:10b6:408:70:cafe::3a) by BN8PR04CA0003.outlook.office365.com
 (2603:10b6:408:70::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28 via Frontend
 Transport; Thu, 25 May 2023 17:51:27 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN8NAM11FT032.mail.protection.outlook.com (10.13.177.88) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.6433.18 via Frontend Transport; Thu, 25 May 2023 17:51:26 +0000
Received: from SATLEXMB08.amd.com (10.181.40.132) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 25 May
 2023 12:51:26 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB08.amd.com
 (10.181.40.132) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.34; Thu, 25 May
 2023 10:51:25 -0700
Received: from ubuntu.mshome.net (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2375.34 via Frontend
 Transport; Thu, 25 May 2023 12:51:24 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6fe3548-fb24-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QmwDVvqvLBMq9rvX7/nxvfxoGSbLQPs70Yvy74wHswksUfHiAUO1ekfrMyt5duXjy6ctniy+aR81XAT+dwq2/o0Jk9oH18aVffPSolO0PIG+lQ4Y37nPGaRNZ/8pD9at3adjNazBXZUCcrMBRaGVrkgbqBPxQB+dYtsUuzUgO2TLBqjGDuKVPoINciLYiYtqTZ0AskYkkBz6Lr7wzgZu6oqIhEeWrHilZbuO0mQAjbtYzqXQc6LeWnzi+F/ZRJ0w0PGwOeY1bs2vf50ey0bxeFfgLB4gbLyqTdeYK0h1kHhD714BjTdvFsGKqR8R125hCsr0AU4TCg/GtZqeleDt6A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=4r+bKjQj1My3UE73dkVNZmyr82oDetzB3Ozifh2S6OI=;
 b=h/yS2EYvlry1GKi5WgXNF7RnwQw3xaWxYh3uza1KXfxVlpuS89UAo8J9XrP5rhoQ2NiowAxAu3yUZGHGLLrddqTTwjqU5b+V793LlPSJ9RDXKL4uyKtoVpyske3GATXBvn3oiF5miiVGtQrJmT0u9etssfjwqPbxKP/QW+VibNqGsolUPHLAAIunRVmZ25QVbie4U9lZ4eX//ZvvMDVHn/MRzy2pQhX3vjIiyUvJJABRIAB6HjWU9gJp5yNEriCqTtdrGfyhjpnGaQdV7DKAON9v8oTjG9sy/ucwRISeBMJ8/KZUlGfCWXpBUggw6110uN3SXYmbf5FGfkq5DDzhHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4r+bKjQj1My3UE73dkVNZmyr82oDetzB3Ozifh2S6OI=;
 b=LIwhuUimWdC8zR4ENl6OwMbTBmnMAbqcpEreJnUf7Rku72oZV3zppPNNpwEKcrjn8xMY5KKxdIy4B6dN4bq/grsxJM5PFaNYk/+am4GGwA6swD8nW6dkEwgSpDjiMtldLc1PGxs4J+ehgqPWkNt0i+aN0Pjge/n5V13XZoqMxa0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
Subject: [PATCH] xen/arm: un-break build with clang
Date: Thu, 25 May 2023 13:51:15 -0400
Message-ID: <20230525175115.110606-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN8NAM11FT032:EE_|CH3PR12MB8186:EE_
X-MS-Office365-Filtering-Correlation-Id: 90b7500e-368c-4be0-af7e-08db5d48a98e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	UX+rSzk3SfpsN/vTe2L11jDBcG/JizlQGcpFGpkG3gZzZzzgnAP4ScMgey7l8rUWaSFsT7sYYuR8gD0U4hkmgC4RBw7BXkzp/pjoWh67AyIIlbgOfNBgYglzICVOH0/PhKMrTGP4Ik0aBkMgbzOyTyf5ls5F24xrvYcgnn5f9pm7ntxorIcAGJZhhwa3LB32z1s9yNDuGAQw9vWBzTrmgSZP7dYHMuUSDwd6ohdG5FnpacU3inWsLRcfD+UwHc+yRTzcO0A5lPuRIZ/VnpvuNFah3NPXLBhlxHwzhIlM55JVPAWqROoARp9I8SIwyjvvHCzq16jWoMNmLs5+WsLUFVATPoKaMKZew9RJJpBQNvhs1YuE+RPD46Rvkj28FDNvSbWA7Y6+gOBVh2MFZZWEY0q6vRg7x1mem8ZbHg4RG+LzbX+HwQkJZfyJ0aBwR3EyaBSL0yWtP0KEEJ4HCVOpcZCyyNnJzvlAPWfwinhIcPpGFtgxe+YQD0dqnUIERkjqFA2ggqU+12JL+YJpxp9E0dipX21zeEPnYSk8P5tBvzL9OJlzwHrNBEKMZZ82xIJheVgTuigoGEDBgQBGK0eHqmi8CWUuzdTJZIo7navnZ7SQDEjY3QM3+0GJmTVOJj9GMwA1KubuYQyr1Qo2THmd5g9um5C97YF7i1ky6Sbe7XqHFoEOktF3yV3CXH6/FN5zO1hmTszITIyw+oJeotPPFeL4OyGhlvlqfHjqISkf3c8gIhJCdvSRdNbMgXE9F9GziVhRykPYDQb2DBkaRJ+BDA==
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230028)(4636009)(376002)(396003)(346002)(39860400002)(136003)(451199021)(36840700001)(40470700004)(46966006)(81166007)(356005)(8676002)(8936002)(40460700003)(5660300002)(82740400003)(2906002)(47076005)(40480700001)(1076003)(26005)(186003)(83380400001)(426003)(82310400005)(36756003)(2616005)(336012)(86362001)(44832011)(36860700001)(4326008)(316002)(6916009)(70206006)(70586007)(6666004)(54906003)(478600001)(41300700001)(36900700001);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2023 17:51:26.6317
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 90b7500e-368c-4be0-af7e-08db5d48a98e
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN8NAM11FT032.eop-nam11.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8186

clang doesn't like extern with __attribute__((__used__)):

  ./arch/arm/include/asm/setup.h:171:8: error: 'used' attribute ignored [-Werror,-Wignored-attributes]
  extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
         ^
  ./arch/arm/include/asm/lpae.h:273:29: note: expanded from macro 'DEFINE_BOOT_PAGE_TABLE'
  lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
                              ^
  ./include/xen/compiler.h:71:27: note: expanded from macro '__section'
  #define __section(s)      __used __attribute__((__section__(s)))
                            ^
  ./include/xen/compiler.h:104:39: note: expanded from macro '__used'
  #define __used         __attribute__((__used__))
                                        ^

Introduce a new EXTERN_DEFINE_BOOT_PAGE_TABLE macro without
__attribute__((__used__)) and use this for the relevant declarations.

Fixes: 1c78d76b67e1 ("xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping")
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
I tested with clang 12 and clang 16.

Here is my make command line:
make -j $(nproc) \
    clang=y \
    CC="clang --target=aarch64-none-linux-gnu -march=armv8a+nocrypto" \
    CXX="clang++ --target=aarch64-none-linux-gnu -march=armv8a+nocrypto" \
    HOSTCC=clang \
    HOSTCXX=clang++ \
    XEN_TARGET_ARCH=arm64 \
    CROSS_COMPILE=aarch64-none-linux-gnu- \
    dist-xen
---
 xen/arch/arm/include/asm/lpae.h  | 4 ++++
 xen/arch/arm/include/asm/setup.h | 8 ++++----
 xen/include/xen/compiler.h       | 1 +
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
index 3fdd5d0de28e..294a8aa4bd30 100644
--- a/xen/arch/arm/include/asm/lpae.h
+++ b/xen/arch/arm/include/asm/lpae.h
@@ -273,6 +273,10 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
 lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
     name[XEN_PT_LPAE_ENTRIES]
 
+#define EXTERN_DEFINE_BOOT_PAGE_TABLE(name)                                   \
+extern lpae_t __aligned(PAGE_SIZE) __no_used_section(".data.page_aligned")    \
+    name[XEN_PT_LPAE_ENTRIES]
+
 #define DEFINE_PAGE_TABLES(name, nr)                    \
 lpae_t __aligned(PAGE_SIZE) name[XEN_PT_LPAE_ENTRIES * (nr)]
 
diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 38e2ce255fcf..af53e58a6a07 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -168,13 +168,13 @@ u32 device_tree_get_u32(const void *fdt, int node,
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
-extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
+EXTERN_DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
 
 #ifdef CONFIG_ARM_64
-extern DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+EXTERN_DEFINE_BOOT_PAGE_TABLE(boot_first_id);
 #endif
-extern DEFINE_BOOT_PAGE_TABLE(boot_second_id);
-extern DEFINE_BOOT_PAGE_TABLE(boot_third_id);
+EXTERN_DEFINE_BOOT_PAGE_TABLE(boot_second_id);
+EXTERN_DEFINE_BOOT_PAGE_TABLE(boot_third_id);
 
 /* Find where Xen will be residing at runtime and return a PT entry */
 lpae_t pte_of_xenaddr(vaddr_t);
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index 7d7ae2e5e4d9..70ba563e29c2 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -73,6 +73,7 @@
 #define __section(s)      __attribute__((__section__(s)))
 #endif
 #define __used_section(s) __used __attribute__((__section__(s)))
+#define __no_used_section(s) __attribute__((__section__(s)))
 #define __text_section(s) __attribute__((__section__(s)))
 
 #define __aligned(a) __attribute__((__aligned__(a)))
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 18:04:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 18:04:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539794.841016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2FKO-0007GN-Dy; Thu, 25 May 2023 18:04:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539794.841016; Thu, 25 May 2023 18:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2FKO-0007GG-Ab; Thu, 25 May 2023 18:04:44 +0000
Received: by outflank-mailman (input) for mailman id 539794;
 Thu, 25 May 2023 18:04:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JN8z=BO=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q2FKM-0007GA-R4
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 18:04:42 +0000
Received: from mail-yb1-xb33.google.com (mail-yb1-xb33.google.com
 [2607:f8b0:4864:20::b33])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9e2cf7dc-fb26-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 20:04:40 +0200 (CEST)
Received: by mail-yb1-xb33.google.com with SMTP id
 3f1490d57ef6-ba94605bcd5so1224619276.2
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 11:04:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e2cf7dc-fb26-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685037879; x=1687629879;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=YHz1MnM6TYYio2C+7LZLfrVXy67ugaabkk4Lxt+VEKA=;
        b=NoHSm+P+R3f9P1hTtI0Aa2538QwCPLtSoQPrBLURQdxmp107FQ1F0pBnbEQWwggzlT
         fiTACnc6uzA/OAWqwR1JVWYYICkwBDdnIXj4ERwntlgZc/Oij0dm8rv+0LrV+CltMo3B
         fKyXLHfkdRh4xQKVcMe8OCp5giU3Tnn12LZCznOtQl1pIuAziPLT4fePeWe5Cvh7ZNzP
         bRQ3kdfBju0tbpe4F8kOJL6Oz3PfBgKD+/0uu8kOUyKFxwwoV5gkGDAbnMcHEgSeCLSi
         HSGZ+QgYvXiydUgpNmaTWLnoPbGEK3dVe0T9hw+SN0OStPQTocdixTet+7JhvERZXcH+
         xYiA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685037879; x=1687629879;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=YHz1MnM6TYYio2C+7LZLfrVXy67ugaabkk4Lxt+VEKA=;
        b=AnrnO1yOS7N66wOR7jp6LalM1dBd/FfAu7PZY99d+h3XFYbaZxXGdCt3b6sZmXokM3
         bAr14pg6h231vK36c2EUTV+/CbWsvEVn7qeG7Sx6Nz97kOSlcvPU3FE9pNa9eLE6yHzK
         3WoIGhH1QAnLrT0+PRc+EPx7BIKjKooYLCwoJ5SNIbnlWcmtk4h4NulBWeDHwYkvW7Rn
         S4wqCozZ2GjGB5SLehIcuVTUXZP5rXx1w9gQSSoUVmAlqjAuB/5ZjmuS5uDB96nbriRn
         lemGyvLs8+d/aJTfqMciWgtxrlPA7gdNLyU8041OTl1jlHBjEEP//vEQ5ru3g9KgEBlX
         uAfw==
X-Gm-Message-State: AC+VfDxBmBibcj++VcFjVCDK9G8Of5VjPCj8b2SmyKXJjrz8VgVMQxGV
	dgIdGNzn7HdwDtW2RsrMX1AJUfyAue9jxwsOSwU=
X-Google-Smtp-Source: ACHHUZ7leu3e42UYud7EgWJZoN6PW1XGYdS9PCBG/tq5rzS7YnuUSWzFoklHW8B2kAANtPb/IRvLBp1ufSDbPWAmKj4=
X-Received: by 2002:a25:7493:0:b0:ba8:15a3:f2e4 with SMTP id
 p141-20020a257493000000b00ba815a3f2e4mr4848492ybc.0.1685037879336; Thu, 25
 May 2023 11:04:39 -0700 (PDT)
MIME-Version: 1.0
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-6-vishal.moola@gmail.com> <20230525090956.GX4967@kernel.org>
In-Reply-To: <20230525090956.GX4967@kernel.org>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Thu, 25 May 2023 11:04:28 -0700
Message-ID: <CAOzc2pxSH6GhBnAoSOjvYJk2VdMDFZi3H_1qGC5Cdyp3j4AzPQ@mail.gmail.com>
Subject: Re: [PATCH v2 05/34] mm: add utility functions for ptdesc
To: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 25, 2023 at 2:10 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Mon, May 01, 2023 at 12:28:00PM -0700, Vishal Moola (Oracle) wrote:
> > Introduce utility functions setting the foundation for ptdescs. These
> > will also assist in the splitting out of ptdesc from struct page.
> >
> > ptdesc_alloc() is defined to allocate new ptdesc pages as compound
> > pages. This is to standardize ptdescs by allowing for one allocation
> > and one free function, in contrast to 2 allocation and 2 free functions.
> >
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> > ---
> >  include/asm-generic/tlb.h | 11 ++++++++++
> >  include/linux/mm.h        | 44 +++++++++++++++++++++++++++++++++++++++
> >  include/linux/pgtable.h   | 12 +++++++++++
> >  3 files changed, 67 insertions(+)
> >
> > diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> > index b46617207c93..6bade9e0e799 100644
> > --- a/include/asm-generic/tlb.h
> > +++ b/include/asm-generic/tlb.h
> > @@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
> >       return tlb_remove_page_size(tlb, page, PAGE_SIZE);
> >  }
> >
> > +static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
> > +{
> > +     tlb_remove_table(tlb, pt);
> > +}
> > +
> > +/* Like tlb_remove_ptdesc, but for page-like page directories. */
> > +static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, stru=
ct ptdesc *pt)
> > +{
> > +     tlb_remove_page(tlb, ptdesc_page(pt));
> > +}
> > +
> >  static inline void tlb_change_page_size(struct mmu_gather *tlb,
> >                                                    unsigned int page_size)
> >  {
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index b18848ae7e22..258f3b730359 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2744,6 +2744,45 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
> >  }
> >  #endif /* CONFIG_MMU */
> >
> > +static inline struct ptdesc *virt_to_ptdesc(const void *x)
> > +{
> > +     return page_ptdesc(virt_to_head_page(x));
>
> Do we ever use compound pages for page tables?

Mips and s390 crst tables use multi-order (but not compound) pages.
The ptdesc api *should* change that, but until all the allocation/free paths
are changed it may cause problems.
Thanks for catching that, I'll change it in v3.

> > +}
> > +
> > +static inline void *ptdesc_to_virt(const struct ptdesc *pt)
> > +{
> > +     return page_to_virt(ptdesc_page(pt));
> > +}
> > +
> > +static inline void *ptdesc_address(const struct ptdesc *pt)
> > +{
> > +     return folio_address(ptdesc_folio(pt));
> > +}
> > +
> > +static inline bool ptdesc_is_reserved(struct ptdesc *pt)
> > +{
> > +     return folio_test_reserved(ptdesc_folio(pt));
> > +}
> > +
> > +static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
> > +{
> > +     struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> > +
> > +     return page_ptdesc(page);
> > +}
> > +
> > +static inline void ptdesc_free(struct ptdesc *pt)
> > +{
> > +     struct page *page = ptdesc_page(pt);
> > +
> > +     __free_pages(page, compound_order(page));
> > +}
>
> The ptdesc_{alloc,free} API does not sound right to me. The name
> ptdesc_alloc() implies the allocation of the ptdesc itself, rather than
> allocation of page table page. The same goes for free.

I'm not sure I see the difference. Could you elaborate?

> > +
> > +static inline void ptdesc_clear(void *x)
> > +{
> > +     clear_page(x);
> > +}
> > +
> >  #if USE_SPLIT_PTE_PTLOCKS
> >  #if ALLOC_SPLIT_PTLOCKS
> >  void __init ptlock_cache_init(void);
> > @@ -2970,6 +3009,11 @@ static inline void mark_page_reserved(struct page *page)
> >       adjust_managed_page_count(page, -1);
> >  }
> >
> > +static inline void free_reserved_ptdesc(struct ptdesc *pt)
> > +{
> > +     free_reserved_page(ptdesc_page(pt));
> > +}
> > +
> >  /*
> >   * Default method to free all the __init memory into the buddy system.
> >   * The freed pages will be poisoned with pattern "poison" if it's within
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 5e0f51308724..b067ac10f3dd 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1041,6 +1041,18 @@ TABLE_MATCH(ptl, ptl);
> >  #undef TABLE_MATCH
> >  static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
> >
> > +#define ptdesc_page(pt)                      (_Generic((pt),          \
> > +     const struct ptdesc *:          (const struct page *)(pt),      \
> > +     struct ptdesc *:                (struct page *)(pt)))
> > +
> > +#define ptdesc_folio(pt)             (_Generic((pt),                 \
> > +     const struct ptdesc *:          (const struct folio *)(pt),     \
> > +     struct ptdesc *:                (struct folio *)(pt)))
> > +
> > +#define page_ptdesc(p)                       (_Generic((p),           \
> > +     const struct page *:            (const struct ptdesc *)(p),     \
> > +     struct page *:                  (struct ptdesc *)(p)))
> > +
> >  /*
> >   * No-op macros that just return the current protection value. Defined here
> >   * because these macros can be used even if CONFIG_MMU is not defined.
> > --
> > 2.39.2
> >
> >
>
> --
> Sincerely yours,
> Mike.


From xen-devel-bounces@lists.xenproject.org Thu May 25 18:06:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 18:06:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539799.841026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2FLX-0007sF-R8; Thu, 25 May 2023 18:05:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539799.841026; Thu, 25 May 2023 18:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2FLX-0007s8-OK; Thu, 25 May 2023 18:05:55 +0000
Received: by outflank-mailman (input) for mailman id 539799;
 Thu, 25 May 2023 18:05:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ugP6=BO=citrix.com=prvs=502bf10e6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2FLW-0007rq-1V
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 18:05:54 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c7f5dd30-fb26-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 20:05:51 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 14:05:40 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5356.namprd03.prod.outlook.com (2603:10b6:5:22b::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.15; Thu, 25 May
 2023 18:05:38 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.015; Thu, 25 May 2023
 18:05:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7f5dd30-fb26-11ed-b230-6b7b168915f2
Message-ID: <45621f03-2d3e-f208-1d0c-018479b5e8ef@citrix.com>
Date: Thu, 25 May 2023 19:05:31 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] xen/arm: un-break build with clang
Content-Language: en-GB
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>
References: <20230525175115.110606-1-stewart.hildebrand@amd.com>
In-Reply-To: <20230525175115.110606-1-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0168.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::36) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0

On 25/05/2023 6:51 pm, Stewart Hildebrand wrote:
> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
> index 38e2ce255fcf..af53e58a6a07 100644
> --- a/xen/arch/arm/include/asm/setup.h
> +++ b/xen/arch/arm/include/asm/setup.h
> @@ -168,13 +168,13 @@ u32 device_tree_get_u32(const void *fdt, int node,
>  int map_range_to_domain(const struct dt_device_node *dev,
>                          u64 addr, u64 len, void *data);
>  
> -extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
> +EXTERN_DEFINE_BOOT_PAGE_TABLE(boot_pgtable);

The problem is using DEFINE_$blah() when you mean DECLARE_$blah(). 
They're split everywhere else in Xen for good reason.

But the macro looks like pure obfuscation to start with.  It should just
be a simple

extern lpae_t boot_pgtable[XEN_PT_LPAE_ENTRIES];

The declaration shouldn't have an alignment or section attribute on, and
deleting the macro makes the header easier to read.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 25 18:17:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 18:17:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539804.841036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2FWi-0000ym-Tc; Thu, 25 May 2023 18:17:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539804.841036; Thu, 25 May 2023 18:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2FWi-0000yf-Qj; Thu, 25 May 2023 18:17:28 +0000
Received: by outflank-mailman (input) for mailman id 539804;
 Thu, 25 May 2023 18:17:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JN8z=BO=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q2FWi-0000yZ-4u
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 18:17:28 +0000
Received: from mail-yb1-xb2f.google.com (mail-yb1-xb2f.google.com
 [2607:f8b0:4864:20::b2f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 669b59ea-fb28-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 20:17:26 +0200 (CEST)
Received: by mail-yb1-xb2f.google.com with SMTP id
 3f1490d57ef6-ba1815e12efso763585276.3
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 11:17:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 669b59ea-fb28-11ed-b230-6b7b168915f2
MIME-Version: 1.0
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-14-vishal.moola@gmail.com> <20230525091900.GY4967@kernel.org>
In-Reply-To: <20230525091900.GY4967@kernel.org>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Thu, 25 May 2023 11:17:14 -0700
Message-ID: <CAOzc2pxNRbohxxNnaKtBNOBgOschHMj278-6hWZK9A1oJOgujA@mail.gmail.com>
Subject: Re: [PATCH v2 13/34] mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
To: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 25, 2023 at 2:19 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Mon, May 01, 2023 at 12:28:08PM -0700, Vishal Moola (Oracle) wrote:
> > Creates ptdesc_pte_ctor(), ptdesc_pmd_ctor(), ptdesc_pte_dtor(), and
> > ptdesc_pmd_dtor() and makes the original pgtable constructors/destructors
> > wrappers.
>
> I think pgtable_pXY_ctor/dtor names would be better.

I have it as ptdesc to keep it consistent with the rest of the functions. I
also think it makes more sense as it's initializing stuff tracked by a
ptdesc.

> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> > ---
> >  include/linux/mm.h | 56 ++++++++++++++++++++++++++++++++++------------
> >  1 file changed, 42 insertions(+), 14 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 58c911341a33..dc61aeca9077 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2847,20 +2847,34 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
> >  static inline void ptlock_free(struct ptdesc *ptdesc) {}
> >  #endif /* USE_SPLIT_PTE_PTLOCKS */
> >
> > -static inline bool pgtable_pte_page_ctor(struct page *page)
> > +static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc)
> >  {
> > -     if (!ptlock_init(page_ptdesc(page)))
> > +     struct folio *folio = ptdesc_folio(ptdesc);
> > +
> > +     if (!ptlock_init(ptdesc))
> >               return false;
> > -     __SetPageTable(page);
> > -     inc_lruvec_page_state(page, NR_PAGETABLE);
> > +     __folio_set_table(folio);
> > +     lruvec_stat_add_folio(folio, NR_PAGETABLE);
> >       return true;
> >  }
> >
> > +static inline bool pgtable_pte_page_ctor(struct page *page)
> > +{
> > +     return ptdesc_pte_ctor(page_ptdesc(page));
> > +}
> > +
> > +static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc)
> > +{
> > +     struct folio *folio = ptdesc_folio(ptdesc);
> > +
> > +     ptlock_free(ptdesc);
> > +     __folio_clear_table(folio);
> > +     lruvec_stat_sub_folio(folio, NR_PAGETABLE);
> > +}
> > +
> >  static inline void pgtable_pte_page_dtor(struct page *page)
> >  {
> > -     ptlock_free(page_ptdesc(page));
> > -     __ClearPageTable(page);
> > -     dec_lruvec_page_state(page, NR_PAGETABLE);
> > +     ptdesc_pte_dtor(page_ptdesc(page));
> >  }
> >
> >  #define pte_offset_map_lock(mm, pmd, address, ptlp)  \
> > @@ -2942,20 +2956,34 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
> >       return ptl;
> >  }
> >
> > -static inline bool pgtable_pmd_page_ctor(struct page *page)
> > +static inline bool ptdesc_pmd_ctor(struct ptdesc *ptdesc)
> >  {
> > -     if (!pmd_ptlock_init(page_ptdesc(page)))
> > +     struct folio *folio = ptdesc_folio(ptdesc);
> > +
> > +     if (!pmd_ptlock_init(ptdesc))
> >               return false;
> > -     __SetPageTable(page);
> > -     inc_lruvec_page_state(page, NR_PAGETABLE);
> > +     __folio_set_table(folio);
> > +     lruvec_stat_add_folio(folio, NR_PAGETABLE);
> >       return true;
> >  }
> >
> > +static inline bool pgtable_pmd_page_ctor(struct page *page)
> > +{
> > +     return ptdesc_pmd_ctor(page_ptdesc(page));
> > +}
> > +
> > +static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc)
> > +{
> > +     struct folio *folio = ptdesc_folio(ptdesc);
> > +
> > +     pmd_ptlock_free(ptdesc);
> > +     __folio_clear_table(folio);
> > +     lruvec_stat_sub_folio(folio, NR_PAGETABLE);
> > +}
> > +
> >  static inline void pgtable_pmd_page_dtor(struct page *page)
> >  {
> > -     pmd_ptlock_free(page_ptdesc(page));
> > -     __ClearPageTable(page);
> > -     dec_lruvec_page_state(page, NR_PAGETABLE);
> > +     ptdesc_pmd_dtor(page_ptdesc(page));
> >  }
> >
> >  /*
> > --
> > 2.39.2
> >
> >
>
> --
> Sincerely yours,
> Mike.


From xen-devel-bounces@lists.xenproject.org Thu May 25 18:35:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 18:35:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539808.841046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Fnw-0003Qr-C9; Thu, 25 May 2023 18:35:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539808.841046; Thu, 25 May 2023 18:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Fnw-0003Qj-9F; Thu, 25 May 2023 18:35:16 +0000
Received: by outflank-mailman (input) for mailman id 539808;
 Thu, 25 May 2023 18:35:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2bT+=BO=quicinc.com=quic_tsoni@srs-se1.protection.inumbo.net>)
 id 1q2Fnu-0003Qd-RZ
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 18:35:14 +0000
Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com
 [205.220.168.131]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e1dd1935-fb2a-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 20:35:12 +0200 (CEST)
Received: from pps.filterd (m0279863.ppops.net [127.0.0.1])
 by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 34PFGFcD001882; Thu, 25 May 2023 18:34:31 GMT
Received: from nasanppmta03.qualcomm.com (i-global254.qualcomm.com
 [199.106.103.254])
 by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3qt27n1j61-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 25 May 2023 18:34:31 +0000
Received: from nasanex01a.na.qualcomm.com (nasanex01a.na.qualcomm.com
 [10.52.223.231])
 by NASANPPMTA03.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 34PIYDC6001496
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 25 May 2023 18:34:13 GMT
Received: from [10.110.51.179] (10.80.80.8) by nasanex01a.na.qualcomm.com
 (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.42; Thu, 25 May
 2023 11:34:12 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <e17da8f4-4d5d-adb7-02c9-631ffdfc9037@quicinc.com>
Date: Thu, 25 May 2023 11:34:11 -0700
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.9.0
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Content-Language: en-US
To: Mickaël Salaün <mic@digikod.net>, Borislav Petkov <bp@alien8.de>,
        Dave Hansen <dave.hansen@linux.intel.com>,
        "H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
        Kees Cook <keescook@chromium.org>,
        Paolo Bonzini <pbonzini@redhat.com>,
        Sean Christopherson <seanjc@google.com>,
        Thomas Gleixner <tglx@linutronix.de>,
        Vitaly Kuznetsov <vkuznets@redhat.com>,
        Wanpeng Li <wanpengli@tencent.com>
CC: Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
        James Morris <jamorris@linux.microsoft.com>,
        John Andersen <john.s.andersen@intel.com>,
        Liran Alon <liran.alon@oracle.com>,
        "Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
        Marian Rotariu <marian.c.rotariu@gmail.com>,
        Mihai Donțu <mdontu@bitdefender.com>,
        Nicușor Cîțu <nicu.citu@icloud.com>,
        Rick Edgecombe <rick.p.edgecombe@intel.com>,
        Thara Gopinath <tgopinath@microsoft.com>,
        Will Deacon <will@kernel.org>,
        Zahra Tarkhani <ztarkhani@microsoft.com>,
        Ștefan Șicleru <ssicleru@bitdefender.com>,
        <dev@lists.cloudhypervisor.org>, <kvm@vger.kernel.org>,
        <linux-hardening@vger.kernel.org>, <linux-hyperv@vger.kernel.org>,
        <linux-kernel@vger.kernel.org>,
        <linux-security-module@vger.kernel.org>, <qemu-devel@nongnu.org>,
        <virtualization@lists.linux-foundation.org>, <x86@kernel.org>,
        <xen-devel@lists.xenproject.org>
References: <20230505152046.6575-1-mic@digikod.net>
 <1e10da25-5704-18ee-b0ce-6de704e6f0e1@quicinc.com>
 <0b069bc3-0362-d8ec-fc2a-05dd65218c39@digikod.net>
From: Trilok Soni <quic_tsoni@quicinc.com>
In-Reply-To: <0b069bc3-0362-d8ec-fc2a-05dd65218c39@digikod.net>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit

On 5/25/2023 6:25 AM, Mickaël Salaün wrote:
> 
> On 24/05/2023 23:04, Trilok Soni wrote:
>> On 5/5/2023 8:20 AM, Mickaël Salaün wrote:
>>> Hi,
>>>
>>> This patch series is a proof-of-concept that implements new KVM features
>>> (extended page tracking, MBEC support, CR pinning) and defines a new 
>>> API to
>>> protect guest VMs. No VMM (e.g., Qemu) modification is required.
>>>
>>> The main idea being that kernel self-protection mechanisms should be 
>>> delegated
>>> to a more privileged part of the system, hence the hypervisor. It is 
>>> still the
>>> role of the guest kernel to request such restrictions according to its
>>
>> Only for the guest kernel images here? Why not for the host OS kernel?
> 
> As explained in the Future work section, protecting the host would be 
> useful, but that doesn't really fit with the KVM model. The Protected 
> KVM project is a first step to help in this direction [11].
> 
> In a nutshell, KVM is close to a type-2 hypervisor, and the host kernel 
> is also part of the hypervisor.
> 
> 
>> Embedded devices w/ Android you have mentioned below supports the host
>> OS as well it seems, right?
> 
> What do you mean?

I think you answered this above w/ pKVM; I was also referring to host 
protection w/ Heki. The links/references below seem to refer to the 
Android OS, not to guest VMs.

> 
> 
>>
>> Do we suggest that all the functionality should be implemented in the
>> hypervisor (NS-EL2 for ARM), or even at a Secure EL like Secure-EL1 (ARM)?
> 
> KVM runs in EL2. TrustZone is mainly used to enforce DRM, which means 
> that we may not control the related code.
> 
> This patch series is dedicated to hypervisor-enforced kernel integrity, 
> then KVM.
> 
>>
>> I am hoping that whatever we suggest the interface here from the Guest
>> to the Hypervisor becomes the ABI right?
> 
> Yes, hypercalls are part of the KVM ABI.

Sure. I just hope that they are extensible enough to support other 
hypervisors too. I am not sure whether other hypervisors such as ACRN / 
Xen are represented on this list, and whether the interface fits their 
needs too.

Is there any other hypervisor you plan to test this feature with?

> 
>>
>>
>>>
>>> # Current limitations
>>>
>>> The main limitation of this patch series is the statically enforced
>>> permissions. This is not an issue for kernels without modules, but this 
>>> needs to
>>> be addressed.  Mechanisms that dynamically impact kernel executable 
>>> memory are
>>> not handled for now (e.g., kernel modules, tracepoints, eBPF JIT), 
>>> and such
>>> code will need to be authenticated.  Because the hypervisor is highly
>>> privileged and critical to the security of all the VMs, we don't want to
>>> implement a code authentication mechanism in the hypervisor itself 
>>> but delegate
>>> this verification to something much less privileged. We are thinking 
>>> of two
>>> ways to solve this: implement this verification in the VMM or spawn a 
>>> dedicated
>>> special VM (similar to Windows's VBS). There are pros and cons to each 
>>> approach:
>>> complexity, verification code ownership (guest's or VMM's), access to 
>>> guest
>>> memory (i.e., confidential computing).
>>
>> Do you foresee the performance regressions due to lot of tracking here?
> 
> The performance impact of execution prevention should be negligible 
> because, once configured, the hypervisor does nothing except catch 
> illegitimate access attempts.

Yes, if you are only using a static kernel and not the dynamic patching 
features described above. Those need to be thought through differently 
to reduce their likely impact.

> 
> 
>> Production kernels do have lot of tracepoints and we use it as feature
>> in the GKI kernel for the vendor hooks implementation and in those cases
>> every vendor driver is a module.
> 
> As explained in this section, dynamic kernel modifications such as 
> tracepoints or modules are not currently supported by this patch series. 
> Handling tracepoints is possible but requires more work to define and 
> check legitimate changes. This proposal is still useful for static 
> kernels though.
> 
> 
>> Separate VM further fragments this
>> design and delegates more of it to proprietary solutions?
> 
> What do you mean? KVM is not a proprietary solution.

Ah, I was referring to the Windows VBS VM mentioned in the text above. Is 
it open-source? The reference to a (dedicated) VM didn't say that the 
VM itself would be open-source and running a Linux kernel.

> 
> For dynamic checks, this would require code not run by KVM itself, but 
> either the VMM or a dedicated VM. In this case, the dynamic 
> authentication code could come from the guest VM or from the VMM itself. 
> In the former case, it is more challenging from a security point of view 
> but doesn't rely on external (proprietary) solution. In the latter case, 
> open-source VMMs should implement the specification to provide the 
> required service (e.g. check kernel module signature).
> 
> The goal of the common API layer provided by this RFC is to share code 
> as much as possible between different hypervisor backends.
> 
> 
>>
>> Do you have any performance numbers w/ current RFC?
> 
> No, but the only hypervisor performance impact is at boot time and 
> should be negligible. I'll try to get some numbers for the 
> hardware-enforcement impact, but it should be negligible too.

Thanks. Please share the data once you have it ready.

---Trilok Soni



From xen-devel-bounces@lists.xenproject.org Thu May 25 18:44:22 2023
Message-ID: <c27e63fc-1952-6c90-158e-987274e78d6a@amd.com>
Date: Thu, 25 May 2023 14:43:59 -0400
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] xen/arm: un-break build with clang
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, "Volodymyr
 Babchuk" <Volodymyr_Babchuk@epam.com>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>
References: <20230525175115.110606-1-stewart.hildebrand@amd.com>
 <45621f03-2d3e-f208-1d0c-018479b5e8ef@citrix.com>
Content-Language: en-US
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
In-Reply-To: <45621f03-2d3e-f208-1d0c-018479b5e8ef@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On 5/25/23 14:05, Andrew Cooper wrote:
> On 25/05/2023 6:51 pm, Stewart Hildebrand wrote:
>> diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
>> index 38e2ce255fcf..af53e58a6a07 100644
>> --- a/xen/arch/arm/include/asm/setup.h
>> +++ b/xen/arch/arm/include/asm/setup.h
>> @@ -168,13 +168,13 @@ u32 device_tree_get_u32(const void *fdt, int node,
>>  int map_range_to_domain(const struct dt_device_node *dev,
>>                          u64 addr, u64 len, void *data);
>>
>> -extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
>> +EXTERN_DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
> 
> The problem is using DEFINE_$blah() when you mean DECLARE_$blah().
> They're split everywhere else in Xen for good reason.
> 
> But the macro looks like pure obfuscation to start with.  It should just
> be a simple
> 
> extern lpae_t boot_pgtable[XEN_PT_LPAE_ENTRIES];
> 
> The declaration shouldn't have an alignment or section attribute on, and
> deleting the macro makes the header easier to read.

This indeed makes much more sense. I will send v2 with simplified extern declarations.

To clarify, the definitions in xen/arch/arm/mm.c are to remain unchanged.


From xen-devel-bounces@lists.xenproject.org Thu May 25 19:06:39 2023
From: =?UTF-8?q?Cyril=20R=C3=A9bert?= <slack@rabbit.lu>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Cyril=20R=C3=A9bert?= <slack@rabbit.lu>,
	Yann Dirson <yann.dirson@vates.fr>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [XEN PATCH] tools/xenstore: remove deprecated parameter from xenstore commands help
Date: Thu, 25 May 2023 21:06:04 +0200
Message-Id: <47cbac6bcf8f454b47bc6430c101f064a5623261.1685041564.git.slack@rabbit.lu>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This completes commit c65687e ("tools/xenstore: remove socket-only option from xenstore client").
As the socket-only option (-s) has been removed from the Xenstore access commands (xenstore-*),
also remove it from the commands' help output (xenstore-* -h).

Suggested-by: Yann Dirson <yann.dirson@vates.fr>
Signed-off-by: Cyril Rébert <slack@rabbit.lu>
---
 tools/xenstore/xenstore_client.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/xenstore/xenstore_client.c b/tools/xenstore/xenstore_client.c
index 0628ba275e..de788b3e0a 100644
--- a/tools/xenstore/xenstore_client.c
+++ b/tools/xenstore/xenstore_client.c
@@ -94,25 +94,25 @@ usage(enum mode mode, int incl_mode, const char *progname)
 	errx(1, "Usage: %s <mode> [-h] [...]", progname);
     case MODE_read:
 	mstr = incl_mode ? "read " : "";
-	errx(1, "Usage: %s %s[-h] [-p] [-s] [-R] key [...]", progname, mstr);
+	errx(1, "Usage: %s %s[-h] [-p] [-R] key [...]", progname, mstr);
     case MODE_write:
 	mstr = incl_mode ? "write " : "";
-	errx(1, "Usage: %s %s[-h] [-s] [-R] key value [...]", progname, mstr);
+	errx(1, "Usage: %s %s[-h] [-R] key value [...]", progname, mstr);
     case MODE_rm:
 	mstr = incl_mode ? "rm " : "";
-	errx(1, "Usage: %s %s[-h] [-s] [-t] key [...]", progname, mstr);
+	errx(1, "Usage: %s %s[-h] [-t] key [...]", progname, mstr);
     case MODE_exists:
 	mstr = incl_mode ? "exists " : "";
 	/* fallthrough */
     case MODE_list:
 	mstr = mstr ? : incl_mode ? "list " : "";
-	errx(1, "Usage: %s %s[-h] [-p] [-s] key [...]", progname, mstr);
+	errx(1, "Usage: %s %s[-h] [-p] key [...]", progname, mstr);
     case MODE_ls:
 	mstr = mstr ? : incl_mode ? "ls " : "";
-	errx(1, "Usage: %s %s[-h] [-f] [-p] [-s] [path]", progname, mstr);
+	errx(1, "Usage: %s %s[-h] [-f] [-p] [path]", progname, mstr);
     case MODE_chmod:
 	mstr = incl_mode ? "chmod " : "";
-	errx(1, "Usage: %s %s[-h] [-u] [-r] [-s] key <mode [modes...]>", progname, mstr);
+	errx(1, "Usage: %s %s[-h] [-u] [-r] key <mode [modes...]>", progname, mstr);
     case MODE_watch:
 	mstr = incl_mode ? "watch " : "";
 	errx(1, "Usage: %s %s[-h] [-n NR] key", progname, mstr);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 25 19:16:10 2023
From: Stewart Hildebrand <stewart.hildebrand@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Stewart Hildebrand <stewart.hildebrand@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v2] xen/arm: un-break build with clang
Date: Thu, 25 May 2023 15:15:31 -0400
Message-ID: <20230525191531.120224-1-stewart.hildebrand@amd.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

clang doesn't like extern with __attribute__((__used__)):

  ./arch/arm/include/asm/setup.h:171:8: error: 'used' attribute ignored [-Werror,-Wignored-attributes]
  extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
         ^
  ./arch/arm/include/asm/lpae.h:273:29: note: expanded from macro 'DEFINE_BOOT_PAGE_TABLE'
  lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
                              ^
  ./include/xen/compiler.h:71:27: note: expanded from macro '__section'
  #define __section(s)      __used __attribute__((__section__(s)))
                            ^
  ./include/xen/compiler.h:104:39: note: expanded from macro '__used'
  #define __used         __attribute__((__used__))
                                        ^

Simplify the declarations by getting rid of the macro (and thus the
__aligned/__section/__used attributes) in the header. No functional change
intended as the macro/attributes are present in the respective definitions in
xen/arch/arm/mm.c.

Fixes: 1c78d76b67e1 ("xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping")
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
---
v1->v2:
* simplify by getting rid of the macro per Andrew's suggestion

---
I tested with clang 12 and clang 16

Here is my make command line:
make -j $(nproc) \
    clang=y \
    CC="clang --target=aarch64-none-linux-gnu -march=armv8a+nocrypto" \
    CXX="clang++ --target=aarch64-none-linux-gnu -march=armv8a+nocrypto" \
    HOSTCC=clang \
    HOSTCXX=clang++ \
    XEN_TARGET_ARCH=arm64 \
    CROSS_COMPILE=aarch64-none-linux-gnu- \
    dist-xen
---
 xen/arch/arm/include/asm/setup.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 38e2ce255fcf..1dbf3ced8079 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -168,13 +168,13 @@ u32 device_tree_get_u32(const void *fdt, int node,
 int map_range_to_domain(const struct dt_device_node *dev,
                         u64 addr, u64 len, void *data);
 
-extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
+extern lpae_t boot_pgtable[XEN_PT_LPAE_ENTRIES];
 
 #ifdef CONFIG_ARM_64
-extern DEFINE_BOOT_PAGE_TABLE(boot_first_id);
+extern lpae_t boot_first_id[XEN_PT_LPAE_ENTRIES];
 #endif
-extern DEFINE_BOOT_PAGE_TABLE(boot_second_id);
-extern DEFINE_BOOT_PAGE_TABLE(boot_third_id);
+extern lpae_t boot_second_id[XEN_PT_LPAE_ENTRIES];
+extern lpae_t boot_third_id[XEN_PT_LPAE_ENTRIES];
 
 /* Find where Xen will be residing at runtime and return a PT entry */
 lpae_t pte_of_xenaddr(vaddr_t);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Thu May 25 19:17:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 19:17:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539829.841086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2GSM-00014S-7M; Thu, 25 May 2023 19:17:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539829.841086; Thu, 25 May 2023 19:17:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2GSM-00014J-4Y; Thu, 25 May 2023 19:17:02 +0000
Received: by outflank-mailman (input) for mailman id 539829;
 Thu, 25 May 2023 19:17:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sqFY=BO=intel.com=rick.p.edgecombe@srs-se1.protection.inumbo.net>)
 id 1q2GSL-000149-IJ
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 19:17:01 +0000
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b64a91a6-fb30-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 21:16:58 +0200 (CEST)
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 May 2023 12:16:48 -0700
Received: from orsmsx602.amr.corp.intel.com ([10.22.229.15])
 by FMSMGA003.fm.intel.com with ESMTP; 25 May 2023 12:16:47 -0700
Received: from orsmsx601.amr.corp.intel.com (10.22.229.14) by
 ORSMSX602.amr.corp.intel.com (10.22.229.15) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 25 May 2023 12:16:47 -0700
Received: from ORSEDG602.ED.cps.intel.com (10.7.248.7) by
 orsmsx601.amr.corp.intel.com (10.22.229.14) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Thu, 25 May 2023 12:16:47 -0700
Received: from NAM12-MW2-obe.outbound.protection.outlook.com (104.47.66.46) by
 edgegateway.intel.com (134.134.137.103) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Thu, 25 May 2023 12:16:46 -0700
Received: from MN0PR11MB5963.namprd11.prod.outlook.com (2603:10b6:208:372::10)
 by IA0PR11MB7911.namprd11.prod.outlook.com (2603:10b6:208:40e::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17; Thu, 25 May
 2023 19:16:44 +0000
Received: from MN0PR11MB5963.namprd11.prod.outlook.com
 ([fe80::6984:19a5:fe1c:dfec]) by MN0PR11MB5963.namprd11.prod.outlook.com
 ([fe80::6984:19a5:fe1c:dfec%6]) with mapi id 15.20.6433.016; Thu, 25 May 2023
 19:16:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b64a91a6-fb30-11ed-8611-37d641c3527e
From: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
To: "Christopherson,, Sean" <seanjc@google.com>
CC: "ssicleru@bitdefender.com" <ssicleru@bitdefender.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>, "mic@digikod.net"
	<mic@digikod.net>, "marian.c.rotariu@gmail.com" <marian.c.rotariu@gmail.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>, "pbonzini@redhat.com"
	<pbonzini@redhat.com>, "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"tgopinath@microsoft.com" <tgopinath@microsoft.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"liran.alon@oracle.com" <liran.alon@oracle.com>, "ztarkhani@microsoft.com"
	<ztarkhani@microsoft.com>, "mdontu@bitdefender.com" <mdontu@bitdefender.com>,
	"x86@kernel.org" <x86@kernel.org>, "bp@alien8.de" <bp@alien8.de>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"jamorris@linux.microsoft.com" <jamorris@linux.microsoft.com>,
	"vkuznets@redhat.com" <vkuznets@redhat.com>, "Andersen, John S"
	<john.s.andersen@intel.com>, "nicu.citu@icloud.com" <nicu.citu@icloud.com>,
	"keescook@chromium.org" <keescook@chromium.org>, "Graf, Alexander"
	<graf@amazon.com>, "wanpengli@tencent.com" <wanpengli@tencent.com>,
	"dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>,
	"madvenka@linux.microsoft.com" <madvenka@linux.microsoft.com>,
	"mingo@redhat.com" <mingo@redhat.com>, "will@kernel.org" <will@kernel.org>,
	"linux-security-module@vger.kernel.org"
	<linux-security-module@vger.kernel.org>, "hpa@zytor.com" <hpa@zytor.com>,
	"yuanyu@google.com" <yuanyu@google.com>, "linux-hyperv@vger.kernel.org"
	<linux-hyperv@vger.kernel.org>, "linux-hardening@vger.kernel.org"
	<linux-hardening@vger.kernel.org>, "dave.hansen@linux.intel.com"
	<dave.hansen@linux.intel.com>
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Thread-Topic: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Thread-Index: AQHZf2Ve5xw6RDm4tUGLnlOpizAoCa9qHQcAgAEGdICAAB9/gIAABBoAgAA1AQA=
Date: Thu, 25 May 2023 19:16:43 +0000
Message-ID: <99bea6ec8b3091986008a3a9f2e7c74f0abaea92.camel@intel.com>
References: <20230505152046.6575-1-mic@digikod.net>
	 <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
	 <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net>
	 <7cb6c4c28c077bb9f866c2d795e918610e77d49f.camel@intel.com>
	 <ZG+HpFjIuSWvyo+B@google.com>
In-Reply-To: <ZG+HpFjIuSWvyo+B@google.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Evolution 3.44.4-0ubuntu1 
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <AA10C771A476A6418E0A8249E035B02D@namprd11.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: intel.com

On Thu, 2023-05-25 at 09:07 -0700, Sean Christopherson wrote:
> On Thu, May 25, 2023, Rick P Edgecombe wrote:
> > I wonder if it might be a good idea to POC the guest side before
> > settling on the KVM interface. Then you can also look at the whole
> > thing and judge how much usage it would get for the different
> > options
> > of restrictions.
> 
> As I said earlier[*], IMO the control plane logic needs to live in
> host userspace.
> I think any attempt to have KVM provide anything but the low level
> plumbing will
> suffer the same fate as CR4 pinning and XO memory.  Iterating on an
> imperfect
> solution to incrementally improve security is far, far easier to do in
> userspace,
> and far more likely to get merged.
> 
> [*] https://lore.kernel.org/all/ZFUyhPuhtMbYdJ76@google.com

Sure, I should have put it more generally. I just meant people are not
going to want to maintain host-based features that guests can't
effectively use.

My takeaway from the CR pinning was that the guest kernel integration
was surprisingly tricky due to the one-way nature of the interface. XO
was more flexible than CR pinning in that respect, because the guest
could turn it off (and indeed, in the XO kernel text patches it had to
do this a lot).


From xen-devel-bounces@lists.xenproject.org Thu May 25 19:24:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 19:24:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539836.841095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2GZb-0002d8-2b; Thu, 25 May 2023 19:24:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539836.841095; Thu, 25 May 2023 19:24:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2GZb-0002d1-00; Thu, 25 May 2023 19:24:31 +0000
Received: by outflank-mailman (input) for mailman id 539836;
 Thu, 25 May 2023 19:24:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gGWh=BO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q2GZZ-0002cv-SV
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 19:24:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c38e3f94-fb31-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 21:24:27 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 92EB060F93;
 Thu, 25 May 2023 19:24:26 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 67AFBC4339B;
 Thu, 25 May 2023 19:24:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c38e3f94-fb31-11ed-b230-6b7b168915f2
Date: Thu, 25 May 2023 12:24:23 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
In-Reply-To: <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com> <ZG4dmJuzNVUE5UIY@Air-de-Roger> <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com> <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop> <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 25 May 2023, Jan Beulich wrote:
> On 25.05.2023 01:37, Stefano Stabellini wrote:
> > On Wed, 24 May 2023, Jan Beulich wrote:
> >>>> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
> >>>>      modify_bars() to consistently respect BARs of hidden devices while
> >>>>      setting up "normal" ones (i.e. to avoid as much as possible the
> >>>>      "continue" path introduced here), setting up of the former may want
> >>>>      doing first.
> >>>
> >>> But BARs of hidden devices should be mapped into dom0 physmap?
> >>
> >> Yes.
> > 
> > The BARs would be mapped read-only (not read-write), right? Otherwise we
> > let dom0 access devices that belong to Xen, which doesn't seem like a
> > good idea.
> > 
> > But even if we map the BARs read-only, what is the benefit of mapping
> > them to Dom0? If Dom0 loads a driver for it and the driver wants to
> > initialize the device, the driver will crash because the MMIO region is
> > read-only instead of read-write, right?
> > 
> > How does this device hiding work for dom0? How does dom0 know not to
> > access a device that is present on the PCI bus but is used by Xen?
> 
> None of these are new questions - this has all been this way for PV Dom0,
> and so far we've limped along quite okay. That's not to say that we
> shouldn't improve things if we can, but that first requires ideas as to
> how.

For PV, that was OK because PV requires extensive guest modifications
anyway. We only run Linux and a few BSDs as Dom0. So, making the interface
cleaner and reducing guest changes is nice-to-have but not critical.

For PVH, this is different. One of the top reasons for AMD to work on
PVH is to enable arbitrary non-Linux OSes as Dom0 (when paired with
dom0less/hyperlaunch). It could be anything from Zephyr to a
proprietary RTOS like VxWorks. Minimal guest changes for advanced
features (e.g. Dom0 S3) might be OK but in general I think we should aim
at (almost) zero guest changes. On ARM, this is already the case (with some
non-upstream patches for dom0less PCI).

For this specific patch, which is necessary to enable PVH on AMD x86 in
gitlab-ci, we can do anything we want to make it move faster. But
medium/long term I think we should try to make non-Xen-aware PVH Dom0
possible.

Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Thu May 25 19:30:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 19:30:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539840.841106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2GfP-00046K-No; Thu, 25 May 2023 19:30:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539840.841106; Thu, 25 May 2023 19:30:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2GfP-00046D-K2; Thu, 25 May 2023 19:30:31 +0000
Received: by outflank-mailman (input) for mailman id 539840;
 Thu, 25 May 2023 19:30:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ugP6=BO=citrix.com=prvs=502bf10e6=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2GfO-000467-6K
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 19:30:30 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99360c18-fb32-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 21:30:27 +0200 (CEST)
Received: from mail-dm6nam11lp2174.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.174])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 25 May 2023 15:30:24 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH7PR03MB7486.namprd03.prod.outlook.com (2603:10b6:510:2ec::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Thu, 25 May
 2023 19:30:16 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.015; Thu, 25 May 2023
 19:30:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99360c18-fb32-11ed-8611-37d641c3527e
Message-ID: <f18d29b9-3505-4af7-8fbc-f477d555514a@citrix.com>
Date: Thu, 25 May 2023 20:30:10 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] xen/arm: un-break build with clang
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20230525191531.120224-1-stewart.hildebrand@amd.com>
In-Reply-To: <20230525191531.120224-1-stewart.hildebrand@amd.com>

On 25/05/2023 8:15 pm, Stewart Hildebrand wrote:
> I tested with clang 12 and clang 16
>
> Here is my make command line:
> make -j $(nproc) \
>     clang=y \
>     CC="clang --target=aarch64-none-linux-gnu -march=armv8a+nocrypto" \
>     CXX="clang++ --target=aarch64-none-linux-gnu -march=armv8a+nocrypto" \
>     HOSTCC=clang \
>     HOSTCXX=clang++ \
>     XEN_TARGET_ARCH=arm64 \
>     CROSS_COMPILE=aarch64-none-linux-gnu- \
>     dist-xen

Looking much better now.  But the fact that Gitlab doesn't spot this
suggests that there ought to be some non-GCC ARM build tests.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 25 19:32:16 2023
Date: Thu, 25 May 2023 12:32:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Jan Beulich <jbeulich@suse.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
In-Reply-To: <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2305251226000.44000@ubuntu-linux-20-04-desktop>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com> <ZG4dmJuzNVUE5UIY@Air-de-Roger> <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com> <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop> <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com>
 <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop>

On Thu, 25 May 2023, Stefano Stabellini wrote:
> On Thu, 25 May 2023, Jan Beulich wrote:
> > On 25.05.2023 01:37, Stefano Stabellini wrote:
> > > On Wed, 24 May 2023, Jan Beulich wrote:
> > >>>> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
> > >>>>      modify_bars() to consistently respect BARs of hidden devices while
> > >>>>      setting up "normal" ones (i.e. to avoid as much as possible the
> > >>>>      "continue" path introduced here), setting up of the former may want
> > >>>>      doing first.
> > >>>
> > >>> But BARs of hidden devices should be mapped into dom0 physmap?
> > >>
> > >> Yes.
> > > 
> > > The BARs would be mapped read-only (not read-write), right? Otherwise we
> > > let dom0 access devices that belong to Xen, which doesn't seem like a
> > > good idea.
> > > 
> > > But even if we map the BARs read-only, what is the benefit of mapping
> > > them to Dom0? If Dom0 loads a driver for it and the driver wants to
> > > initialize the device, the driver will crash because the MMIO region is
> > > read-only instead of read-write, right?
> > > 
> > > How does this device hiding work for dom0? How does dom0 know not to
> > > access a device that is present on the PCI bus but is used by Xen?
> > 
> > None of these are new questions - this has all been this way for PV Dom0,
> > and so far we've limped along quite okay. That's not to say that we
> > shouldn't improve things if we can, but that first requires ideas as to
> > how.
> 
> For PV, that was OK because PV requires extensive guest modifications
> anyway. We only run Linux and a few BSDs as Dom0. So, making the interface
> cleaner and reducing guest changes is nice-to-have but not critical.
> 
> For PVH, this is different. One of the top reasons for AMD to work on
> PVH is to enable arbitrary non-Linux OSes as Dom0 (when paired with
> dom0less/hyperlaunch). It could be anything from Zephyr to a
> proprietary RTOS like VxWorks. Minimal guest changes for advanced
> features (e.g. Dom0 S3) might be OK but in general I think we should aim
> at (almost) zero guest changes. On ARM, it is already the case (with some
> non-upstream patches for dom0less PCI.)
> 
> For this specific patch, which is necessary to enable PVH on AMD x86 in
> gitlab-ci, we can do anything we want to make it move faster. But
> medium/long term I think we should try to make non-Xen-aware PVH Dom0
> possible.

Like I wrote, personally I am happy with whatever gets us to have the PVH
test in gitlab-ci faster.

However, on the specific problem of PCI devices used by Xen and how to
deal with them for Dom0 PVH, I think they should be completely hidden.
Hidden in the sense that they don't appear on the Dom0 PCI bus. If the
hidden device is a function of a multi-function PCI device, then the
entire multi-function PCI device should be hidden.

I don't think this case is very important: the devices used by Xen are
timers, IOMMUs, and UARTs, which typically are not multi-function. So it
is OK to be extra careful and remove the entire device from Dom0 in the
odd case that the device is both multi-function and only partially used
by Xen. This is what I would do for Xen on ARM too.
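[Editor's sketch] The policy described above (if Xen owns any one function of a device, hide every function of that device from Dom0) can be modelled in a small stand-alone C snippet. Everything here is illustrative: the `sbdf_t` layout and the `pci_function_hidden()` predicate are assumptions made for the sketch, not Xen's actual types or API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative SBDF encoding: segment:bus:device.function. */
typedef struct {
    uint16_t seg;
    uint8_t bus;
    uint8_t devfn; /* device in bits 7:3, function in bits 2:0 */
} sbdf_t;

/* Hypothetical predicate: is this exact function used by Xen?
 * For the sketch, pretend Xen owns 0000:00:1f.3 (e.g. a UART function). */
static bool pci_function_hidden(sbdf_t sbdf)
{
    return sbdf.seg == 0 && sbdf.bus == 0 &&
           sbdf.devfn == ((0x1f << 3) | 3);
}

/*
 * Policy from the discussion: if any function of a multi-function
 * device is used by Xen, hide the entire device from Dom0.
 */
static bool pci_device_visible_to_dom0(sbdf_t sbdf)
{
    for (unsigned int fn = 0; fn < 8; fn++) {
        sbdf_t probe = sbdf;
        probe.devfn = (uint8_t)((sbdf.devfn & ~7u) | fn);
        if (pci_function_hidden(probe))
            return false; /* a sibling function is owned by Xen */
    }
    return true;
}
```

The loop over all eight possible functions is what makes the "extra careful" behaviour: hiding function 0 of a device whose function 3 belongs to Xen, even though function 0 itself is unused by Xen.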


From xen-devel-bounces@lists.xenproject.org Thu May 25 19:55:09 2023
Date: Thu, 25 May 2023 12:54:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <stefano.stabellini@amd.com>, 
    xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com, 
    xenia.ragiadakou@amd.com, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix
In-Reply-To: <6790d5ae-9742-f5f3-bd8c-62602ee9cb1d@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305251248000.44000@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop> <ZGzFnE2w/YqYT35c@Air-de-Roger> <ZGzSnu8m/IqjmyHx@Air-de-Roger> <alpine.DEB.2.22.394.2305241646520.44000@ubuntu-linux-20-04-desktop>
 <6790d5ae-9742-f5f3-bd8c-62602ee9cb1d@suse.com>

On Thu, 25 May 2023, Jan Beulich wrote:
> On 25.05.2023 01:51, Stefano Stabellini wrote:
> > xen/irq: fix races between send_cleanup_vector and _clear_irq_vector
> 
> This title is, I'm afraid, already misleading. No such race can occur
> afaict, as both callers of _clear_irq_vector() acquire the IRQ
> descriptor lock first, and irq_complete_move() (the sole caller of
> send_cleanup_vector()) is only ever invoked as or by an ->ack()
> hook, which in turn is only invoked with, again, the descriptor lock
> held.

Yes I see that you are right about the locking, and thank you for taking
the time to look into it.

One last question: could it be that a second interrupt arrives while
->ack() is being handled? Is do_IRQ() running with interrupts disabled?


> > It is possible that send_cleanup_vector and _clear_irq_vector are
> > running at the same time on different CPUs. In that case we have a race
> > as both _clear_irq_vector and irq_move_cleanup_interrupt are trying to
> > clear old_vector.
> > 
> > This patch fixes 3 races:
> > 
> > 1) As irq_move_cleanup_interrupt is running on multiple CPUs at the
> > same time, and also _clear_irq_vector is running, it is possible that
> > only some per_cpu(vector_irq, cpu)[old_vector] are valid but not all.
> > So, turn the ASSERT in _clear_irq_vector into an if.
> 
> Note again the locking which is in effect.
> 
> > 2) It is possible that _clear_irq_vector is running at the same time as
> > release_old_vec, called from irq_move_cleanup_interrupt. At the moment,
> > it is possible for _clear_irq_vector to read a valid old_cpu_mask but an
> > invalid old_vector (because it is being set to invalid by
> > release_old_vec). To avoid this problem in release_old_vec move clearing
> > old_cpu_mask before setting old_vector to invalid. This way, we know that
> > in _clear_irq_vector if old_vector is invalid also old_cpu_mask is zero
> > and we don't enter the loop.
> 
> All invocations of release_old_vec() are similarly inside suitably
> locked regions.
> 
> > 3) It is possible that release_old_vec is running twice at the same time
> > for the same old_vector. Change the code in release_old_vec to make it
> > OK to call it twice. Remove both ASSERTs. With those gone, it should be
> > possible now to call release_old_vec twice in a row for the same
> > old_vector.
> 
> Same here.
> 
> Any such issues would surface more frequently and without any suspend /
> resume involved. What is still missing is that connection, and only then
> it'll (or really: may) become clear what needs adjusting. If you've seen
> the issue exactly once, then I'm afraid there's not much we can do unless
> someone can come up with a plausible explanation of something being
> broken on any of the involved code paths. More information will need to
> be gathered out of the next occurrence of this, whenever that's going to
> be. One of the things we will want to know, as mentioned before, is the
> value that per_cpu(vector_irq, cpu)[old_vector] has when the assertion
> triggers. Iirc Roger did suggest another piece of data you'd want to log.

Understood, thanks for the explanation.


> > --- a/xen/arch/x86/irq.c
> > +++ b/xen/arch/x86/irq.c
> > @@ -112,16 +112,11 @@ static void release_old_vec(struct irq_desc *desc)
> >  {
> >      unsigned int vector = desc->arch.old_vector;
> >  
> > -    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> >      cpumask_clear(desc->arch.old_cpu_mask);
> > +    desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> >  
> > -    if ( !valid_irq_vector(vector) )
> > -        ASSERT_UNREACHABLE();
> > -    else if ( desc->arch.used_vectors )
> > -    {
> > -        ASSERT(test_bit(vector, desc->arch.used_vectors));
> > +    if ( desc->arch.used_vectors )
> >          clear_bit(vector, desc->arch.used_vectors);
> > -    }
> >  }
> >  
> >  static void _trace_irq_mask(uint32_t event, int irq, int vector,
> > @@ -230,9 +225,11 @@ static void _clear_irq_vector(struct irq_desc *desc)
> >  
> >          for_each_cpu(cpu, tmp_mask)
> >          {
> > -            ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
> > -            TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
> > -            per_cpu(vector_irq, cpu)[old_vector] = ~irq;
> > +            if ( per_cpu(vector_irq, cpu)[old_vector] == irq )
> > +            {
> > +                TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
> > +                per_cpu(vector_irq, cpu)[old_vector] = ~irq;
> > +            }
> >          }
> 
> As said before - replacing ASSERT() by a respective if() cannot really
> be done without discussing the "else" in the description. Except of
> course in trivial/obvious cases, but I think we agree here we don't
> have such a case.
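[Editor's sketch] The locking invariant Jan describes (every caller of _clear_irq_vector() takes the IRQ descriptor lock first, and irq_complete_move() only ever runs from ->ack() with that same lock held, so the two cleanup paths cannot overlap) can be modelled in a toy stand-alone form. The names mirror Xen's, but this is a simplified model, not the real code:

```c
#include <assert.h>
#include <stdbool.h>

#define IRQ_VECTOR_UNASSIGNED (-1)

/* Toy model of an IRQ descriptor: the lock is a flag we can assert on. */
struct irq_desc {
    bool locked;
    int old_vector;
};

static void desc_lock(struct irq_desc *d)   { assert(!d->locked); d->locked = true; }
static void desc_unlock(struct irq_desc *d) { assert(d->locked);  d->locked = false; }

/* Models release_old_vec(): only legal with the descriptor lock held. */
static void release_old_vec(struct irq_desc *d)
{
    assert(d->locked);
    d->old_vector = IRQ_VECTOR_UNASSIGNED;
}

/* The _clear_irq_vector() path: takes the descriptor lock first. */
static void clear_irq_vector(struct irq_desc *d)
{
    desc_lock(d);
    if (d->old_vector != IRQ_VECTOR_UNASSIGNED)
        release_old_vec(d);
    desc_unlock(d);
}

/* irq_complete_move() runs from ->ack(), which is likewise invoked with
 * the descriptor lock held; here the model takes it explicitly. */
static void irq_complete_move(struct irq_desc *d)
{
    desc_lock(d);
    if (d->old_vector != IRQ_VECTOR_UNASSIGNED)
        release_old_vec(d);
    desc_unlock(d);
}
```

Because both entry points serialize on the same lock, the second path to run observes old_vector already set to IRQ_VECTOR_UNASSIGNED and becomes a no-op, which is why the races described in the patch cannot occur under the current locking.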


From xen-devel-bounces@lists.xenproject.org Thu May 25 20:11:55 2023
Date: Thu, 25 May 2023 13:11:31 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 3/3] maintainers: Add Xen MISRA Analysis Tools
 section
In-Reply-To: <20230525083401.3838462-4-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305251311220.44000@ubuntu-linux-20-04-desktop>
References: <20230525083401.3838462-1-luca.fancellu@arm.com> <20230525083401.3838462-4-luca.fancellu@arm.com>

On Thu, 25 May 2023, Luca Fancellu wrote:
> Add a section for the Xen MISRA Analysis Tools.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes from v2:
>  - New patch, suggested by Stefano:
>    https://lore.kernel.org/all/alpine.DEB.2.22.394.2305171232440.128889@ubuntu-linux-20-04-desktop/
> ---
>  MAINTAINERS | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index f2f1881b32cc..c5b2dc2b024c 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -667,6 +667,16 @@ F:	tools/xentrace/
>  F:	xen/common/trace.c
>  F:	xen/include/xen/trace.h
>  
> +XEN MISRA ANALYSIS TOOLS
> +M:	Luca Fancellu <luca.fancellu@arm.com>
> +S:	Supported
> +F:	xen/scripts/xen_analysis/
> +F:	xen/scripts/xen-analysis.py
> +F:	xen/scripts/diff-report.py
> +F:	xen/tools/cppcheck-plat/
> +F:	xen/tools/convert_misra_doc.py
> +F:	xen/tools/cppcheck-cc.sh
> +
>  XSM/FLASK
>  M:	Daniel P. Smith <dpsmith@apertussolutions.com>
>  S:	Supported
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Thu May 25 20:20:48 2023
Date: Thu, 25 May 2023 23:20:11 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Vishal Moola <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 01/34] mm: Add PAGE_TYPE_OP folio functions
Message-ID: <20230525202011.GZ4967@kernel.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-2-vishal.moola@gmail.com>
 <20230525085555.GV4967@kernel.org>
 <CAOzc2pxx489C26NnS9NHkUQY9PYiagzt-nYK6LnkJ1N3NYQWzg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAOzc2pxx489C26NnS9NHkUQY9PYiagzt-nYK6LnkJ1N3NYQWzg@mail.gmail.com>

On Thu, May 25, 2023 at 10:00:23AM -0700, Vishal Moola wrote:
> On Thu, May 25, 2023 at 1:56 AM Mike Rapoport <rppt@kernel.org> wrote:
> >
> > Hi,
> >
> > On Mon, May 01, 2023 at 12:27:56PM -0700, Vishal Moola (Oracle) wrote:
> > > No folio equivalents for page type operations have been defined, so
> > > define them for later folio conversions.
> >
> > Can you please elaborate on why we would need folios for page table descriptors?
> 
> Thanks for the review!
> 
> These macros are for callers that care about the page type, i.e. Table and
> Buddy. Aside from accounting for those cases, the page tables don't use folios.
> These are more for the cleanliness of those callers.

But why would using folio APIs for PageType be cleaner than using page APIs?
Do you have an example?

-- 
Sincerely yours,
Mike.


From xen-devel-bounces@lists.xenproject.org Thu May 25 20:26:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 20:26:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539866.841156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2HXD-0003Qt-Qf; Thu, 25 May 2023 20:26:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539866.841156; Thu, 25 May 2023 20:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2HXD-0003Qm-Ny; Thu, 25 May 2023 20:26:07 +0000
Received: by outflank-mailman (input) for mailman id 539866;
 Thu, 25 May 2023 20:26:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JOix=BO=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1q2HXC-0003Qg-9E
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 20:26:06 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5eb24711-fb3a-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 22:26:04 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id EBF4A600BE;
 Thu, 25 May 2023 20:26:02 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 185ECC433D2;
 Thu, 25 May 2023 20:25:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5eb24711-fb3a-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685046362;
	bh=q0CSXKuNpY/cOn/THS8/5Wy4aJR83TTmgU6zxWwJSXw=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=SQE8oA8lQRwEus253i9PgXAYxS+1hfEWgxQkG8dNMuK5+C2MzFWHC2gCVa/sCTaIh
	 mINB5DgFECw+pGenAIWbq/ant+MDQxcFaxJeNamZqmWfH2SE50ljnoCt5WDkZMFN/y
	 zSsFGUWBnPo7WzVzFj57OnJKTLZv8v06II8J4E2+FMu/J0lzw0cg8/Um5wKRhTuiBz
	 M1a9kePx9hC6/XnF66pTT3tDmfydVWOZ+PUuN61Pd1nQtUDi8l6wozX8HGFWMhSLig
	 QS2F5MDwxfui88OZl1d4xhMuIeMffwSLM7NnRPkICdlmFaTyYVH5nOaWhDuFI01iqL
	 PEjExdKRm1jxQ==
Date: Thu, 25 May 2023 23:25:37 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Vishal Moola <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 05/34] mm: add utility functions for ptdesc
Message-ID: <20230525202537.GA4967@kernel.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-6-vishal.moola@gmail.com>
 <20230525090956.GX4967@kernel.org>
 <CAOzc2pxSH6GhBnAoSOjvYJk2VdMDFZi3H_1qGC5Cdyp3j4AzPQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAOzc2pxSH6GhBnAoSOjvYJk2VdMDFZi3H_1qGC5Cdyp3j4AzPQ@mail.gmail.com>

On Thu, May 25, 2023 at 11:04:28AM -0700, Vishal Moola wrote:
> On Thu, May 25, 2023 at 2:10 AM Mike Rapoport <rppt@kernel.org> wrote:
> > > +
> > > +static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
> > > +{
> > > +     struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> > > +
> > > +     return page_ptdesc(page);
> > > +}
> > > +
> > > +static inline void ptdesc_free(struct ptdesc *pt)
> > > +{
> > > +     struct page *page = ptdesc_page(pt);
> > > +
> > > +     __free_pages(page, compound_order(page));
> > > +}
> >
> > The ptdesc_{alloc,free} API does not sound right to me. The name
> > ptdesc_alloc() implies the allocation of the ptdesc itself, rather than
> > allocation of page table page. The same goes for free.
> 
> I'm not sure I see the difference. Could you elaborate?

I read ptdesc_alloc() as "allocate a ptdesc" rather than as "allocate a
page for a page table and return a ptdesc pointing to that page". Seems very
confusing to me already, and it will be even more confusing when we start
allocating actual ptdescs.
 
-- 
Sincerely yours,
Mike.


From xen-devel-bounces@lists.xenproject.org Thu May 25 20:39:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 20:39:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539870.841166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Hjo-0004xM-VD; Thu, 25 May 2023 20:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539870.841166; Thu, 25 May 2023 20:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Hjo-0004xF-Se; Thu, 25 May 2023 20:39:08 +0000
Received: by outflank-mailman (input) for mailman id 539870;
 Thu, 25 May 2023 20:39:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JN8z=BO=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q2Hjn-0004x5-CF
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 20:39:07 +0000
Received: from mail-yb1-xb2e.google.com (mail-yb1-xb2e.google.com
 [2607:f8b0:4864:20::b2e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 30f15455-fb3c-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 22:39:06 +0200 (CEST)
Received: by mail-yb1-xb2e.google.com with SMTP id
 3f1490d57ef6-babb985f9c8so207612276.1
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 13:39:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30f15455-fb3c-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685047145; x=1687639145;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=VbD1OYIGibvwFMRrdJMTdc4wGgHOsGEW6NjvtgF3JXU=;
        b=hsCL+sMjCSPEeQwY9ipI1+Ugi88AzJEwmAmsO1IR16vD0JLbTTRj1EWY5Xyhze+GRV
         OIRanM33HOk1xRjR7/GHHUD5lVS6HFpcFxeDfzeE8/I0OcBZz8czUxvwLPGCPl3ZI/jZ
         DVqHoFaNkDaMUMuSYvpfhlO6EV7GylFCaeIZ84F6j+93L973raF6hD3O66C/pZ4eWLz+
         QLbFT/lbaOp/9U1+H051UFvXelrd7lzOtz3vj1HsEWe84ksN1HTRbSJYJedE8uZu9k6u
         F1jI+oM85Ujez7vT22bFzFiXTlvVTKRouPJkd38Xldm/McWaSi1VxP376ACeyocGF45q
         tGcA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685047145; x=1687639145;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=VbD1OYIGibvwFMRrdJMTdc4wGgHOsGEW6NjvtgF3JXU=;
        b=l9Ql/6qVhx2pf5S+5sAahuBZp2ILHAXjziLFR5RdlHQd0Exte2pu4ljDSfv1xMzXvL
         1wke6KnkmCy0BltJwXKnOwpx0WJUgtZ6Z97/d2TATrRlAxAFBSaB3DbCCacNN8e57DSC
         owfh5dyJrKLSXaYAWWW1OonLV3H6J+sM+2mJUW8fPGo5VqzNDrvagXW3E1XaNW2LrCgO
         TEKQCyPYSCBgGJoQwS1mZl5CUlenkEJHeeYQyHGPtaNMuczPibk+nWHwizvHeApkZLJu
         UI3JTx+tSe2O6bIP4yWkCdkvht3bzyfez+uSF6qqClZaQoEg1X7oTI/YFbJGsQrRF4Xm
         hTUw==
X-Gm-Message-State: AC+VfDw5ZSlOHyOohCW6YxOub0iJMJcJFLnzs9Hf2LXbn+4qbINIgJXv
	Ggc4nEerrjXktmU5/i+8KUPsnl2uzQwh50Rau2M=
X-Google-Smtp-Source: ACHHUZ4OcPsUwJDpZZlKfNKbpYbRuRkX6Q4UETTfFRyNyytwfdAtGP02e8IuqJSt1+a/7sd0aU+yePVkYawV73NLE1o=
X-Received: by 2002:a25:d111:0:b0:ba8:4b48:1de0 with SMTP id
 i17-20020a25d111000000b00ba84b481de0mr4810389ybg.47.1685047145065; Thu, 25
 May 2023 13:39:05 -0700 (PDT)
MIME-Version: 1.0
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-2-vishal.moola@gmail.com> <20230525085555.GV4967@kernel.org>
 <CAOzc2pxx489C26NnS9NHkUQY9PYiagzt-nYK6LnkJ1N3NYQWzg@mail.gmail.com> <20230525202011.GZ4967@kernel.org>
In-Reply-To: <20230525202011.GZ4967@kernel.org>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Thu, 25 May 2023 13:38:54 -0700
Message-ID: <CAOzc2pzGPBYL3S=noc1AAEtep04GexRmn2f_T3BPgVFZKaqXTg@mail.gmail.com>
Subject: Re: [PATCH v2 01/34] mm: Add PAGE_TYPE_OP folio functions
To: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 25, 2023 at 1:20 PM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Thu, May 25, 2023 at 10:00:23AM -0700, Vishal Moola wrote:
> > On Thu, May 25, 2023 at 1:56 AM Mike Rapoport <rppt@kernel.org> wrote:
> > >
> > > Hi,
> > >
> > > On Mon, May 01, 2023 at 12:27:56PM -0700, Vishal Moola (Oracle) wrote:
> > > > No folio equivalents for page type operations have been defined, so
> > > > define them for later folio conversions.
> > >
> > > Can you please elaborate on why we would need folios for page table descriptors?
> >
> > Thanks for the review!
> >
> > These macros are for callers that care about the page type, i.e. Table and
> > Buddy. Aside from accounting for those cases, the page tables don't use folios.
> > These are more for the cleanliness of those callers.
>
> But why would using folio APIs for PageType be cleaner than using page APIs?
> Do you have an example?

Ah, for example in mm/memory-failure.c there are a couple of uses of PageTable.
Like the line:
if (folio_test_slab(folio) || PageTable(&folio->page) ||
folio_test_reserved(folio))
where that PageTable(&folio->page) can now be written as folio_test_table(folio)
instead.

Also there are numerous uses of PageBuddy in mm/compaction.c that will
likely need to be converted to folios as well.


From xen-devel-bounces@lists.xenproject.org Thu May 25 20:53:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 20:53:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539874.841176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Hxr-0007Kf-88; Thu, 25 May 2023 20:53:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539874.841176; Thu, 25 May 2023 20:53:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Hxr-0007KY-3j; Thu, 25 May 2023 20:53:39 +0000
Received: by outflank-mailman (input) for mailman id 539874;
 Thu, 25 May 2023 20:53:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JN8z=BO=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q2Hxq-0007KS-6b
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 20:53:38 +0000
Received: from mail-yw1-x1136.google.com (mail-yw1-x1136.google.com
 [2607:f8b0:4864:20::1136])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 377f41ed-fb3e-11ed-8611-37d641c3527e;
 Thu, 25 May 2023 22:53:36 +0200 (CEST)
Received: by mail-yw1-x1136.google.com with SMTP id
 00721157ae682-56187339d6eso2628037b3.2
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 13:53:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 377f41ed-fb3e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685048015; x=1687640015;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=e72vGCwhPoa8gN25X6qmdDXVUsdWWJFQx8MjqDcoGDA=;
        b=PRDb4uH8g3Vv6gZWsZ1gkHY+ANMWnqtSqEQNi+t68NKJyRAPGtGIjXS+o6ABKHZ7yv
         q51K7Vr8Q1brsvSZQkdt4WeLJo6vXr04Grr6RxGrB/PY5E3pp1cBQWvGni0wb9TWYIbM
         5FWOkPlw4M42rE2TxcMBGldDhB3WLqb55j0CqgwQEW0TmVHxxCeHx2Zs7VLCqGumlN8c
         PmQWTAjBOIMu7fHadjtAjQF7IDQQLa3x/OUyFY2aJiBpGADt0K07OCNxrHer3KqiVV69
         GxawL7cJdKwcX3ZUAGYa8EVBTKoLWza/yXtntrwGkP8KyxHL6U40gF5XovIoxt2V2Fr9
         5Wvw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685048015; x=1687640015;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=e72vGCwhPoa8gN25X6qmdDXVUsdWWJFQx8MjqDcoGDA=;
        b=lbsddx1kGoiV/T9/oP/K3/N7mnMYgZ+Gipsd+t2ZC115qZMGn2HksFk6O6YQisL7d9
         7a4E1/csr8PvrJVkbLn5NXorBDjUlho+tV4ZSHdU87VdCEjm92pjwI4sf/T6x92l3tgI
         toM6RwygW38Z6sdqOidabnbMLH8X3tUzygADoTzjCJAh4lmk5edqh2FJE6N83TG+ORks
         cDJFlg6qgwYvryMU8WUIT/0mhpPlkYp+7SCyWcn+STNvx730BrcE3dcdCHaJVqHqj/Zm
         oDfyTEbEKanTmNeZf7gThsdymSFEN7L5gZf5RdkRGe1rQoPahzBAKC19uUV6cIJY1gpU
         H/+w==
X-Gm-Message-State: AC+VfDwfCKr8X1B8axfvZy0vBPQBvee0AH5BOqonHgfGHM0yU24xulKv
	o3qXm7VX4aSnYDudGwYdApGH6IiCL6M815htMBs=
X-Google-Smtp-Source: ACHHUZ6an6zID91jt8f3LQ4QGAvFJ9u++x53xf0fSCLXe0WivFCmGXJlgIcqvJgo7KICituWVYFJf3qgdSw5/O75Zdg=
X-Received: by 2002:a81:4810:0:b0:55a:84c9:e952 with SMTP id
 v16-20020a814810000000b0055a84c9e952mr1013445ywa.17.1685048014974; Thu, 25
 May 2023 13:53:34 -0700 (PDT)
MIME-Version: 1.0
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-6-vishal.moola@gmail.com> <20230525090956.GX4967@kernel.org>
 <CAOzc2pxSH6GhBnAoSOjvYJk2VdMDFZi3H_1qGC5Cdyp3j4AzPQ@mail.gmail.com> <20230525202537.GA4967@kernel.org>
In-Reply-To: <20230525202537.GA4967@kernel.org>
From: Vishal Moola <vishal.moola@gmail.com>
Date: Thu, 25 May 2023 13:53:24 -0700
Message-ID: <CAOzc2pxD21mxisy-M5b_SDUv0MYwNHqaVDJnJpARuDG_HjCbOg@mail.gmail.com>
Subject: Re: [PATCH v2 05/34] mm: add utility functions for ptdesc
To: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org, 
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, 
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, 
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, 
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, 
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org, 
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, May 25, 2023 at 1:26 PM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Thu, May 25, 2023 at 11:04:28AM -0700, Vishal Moola wrote:
> > On Thu, May 25, 2023 at 2:10 AM Mike Rapoport <rppt@kernel.org> wrote:
> > > > +
> > > > +static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
> > > > +{
> > > > +     struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> > > > +
> > > > +     return page_ptdesc(page);
> > > > +}
> > > > +
> > > > +static inline void ptdesc_free(struct ptdesc *pt)
> > > > +{
> > > > +     struct page *page = ptdesc_page(pt);
> > > > +
> > > > +     __free_pages(page, compound_order(page));
> > > > +}
> > >
> > > The ptdesc_{alloc,free} API does not sound right to me. The name
> > > ptdesc_alloc() implies the allocation of the ptdesc itself, rather than
> > > allocation of page table page. The same goes for free.
> >
> > I'm not sure I see the difference. Could you elaborate?
>
> I read ptdesc_alloc() as "allocate a ptdesc" rather than as "allocate a
> page for a page table and return a ptdesc pointing to that page". Seems very
> confusing to me already, and it will be even more confusing when we start
> allocating actual ptdescs.

Hmm, I see what you're saying. I'm envisioning this function evolving into
one that allocates a ptdesc later. I don't see why we would need to have both a
page table page AND a ptdesc at any point, but that may be a lack of knowledge
on my part.

I was thinking later, if necessary, we could make another function
(only to be used internally) to allocate page table pages.


From xen-devel-bounces@lists.xenproject.org Thu May 25 20:57:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 20:57:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539878.841186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2I1P-0007vi-MY; Thu, 25 May 2023 20:57:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539878.841186; Thu, 25 May 2023 20:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2I1P-0007vb-JS; Thu, 25 May 2023 20:57:19 +0000
Received: by outflank-mailman (input) for mailman id 539878;
 Thu, 25 May 2023 20:57:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UjAG=BO=infradead.org=willy@srs-se1.protection.inumbo.net>)
 id 1q2I1N-0007vU-CN
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 20:57:18 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ba03550e-fb3e-11ed-b230-6b7b168915f2;
 Thu, 25 May 2023 22:57:15 +0200 (CEST)
Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1q2I19-00DAXr-9n; Thu, 25 May 2023 20:57:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba03550e-fb3e-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Transfer-Encoding:
	Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=w/nv11jmZL//tzZgnsZQ422wtTnnU+GBTr8mcjZt+5o=; b=jz222m+qmkx46CtsL2SK6imku3
	qq2pJIrpZYwwtoNlBcZF7Fk0xlPqceK3lGx9yfcEP5je3dxHl9hetkE+18QuDijP55CEPQoODBYNI
	uvYKUPiKzBlONmhR187tH+srkvPeqkL343XpLTCXJPsX8gyRZRXys5FShMS4Kw48PvGIyNIrdiJob
	qzWNUEmQ7TjGXaPWTNoBqzaR0fCIqDhz5vX5aceLEg5RLyYCA0NTJNp94Q35H+/kQzQkdBPbuLfU0
	BUQp3kv2YpGE9fMQX8b3xVpJANGw8phzZpM8mq6dy60BHa+pYvxVMkKbTWE80Y8Sdq1m5IiayRsFG
	t/mQy7HA==;
Date: Thu, 25 May 2023 21:57:03 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Vishal Moola <vishal.moola@gmail.com>
Cc: Mike Rapoport <rppt@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 01/34] mm: Add PAGE_TYPE_OP folio functions
Message-ID: <ZG/Ln0Nf/Zx//EQk@casper.infradead.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-2-vishal.moola@gmail.com>
 <20230525085555.GV4967@kernel.org>
 <CAOzc2pxx489C26NnS9NHkUQY9PYiagzt-nYK6LnkJ1N3NYQWzg@mail.gmail.com>
 <20230525202011.GZ4967@kernel.org>
 <CAOzc2pzGPBYL3S=noc1AAEtep04GexRmn2f_T3BPgVFZKaqXTg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAOzc2pzGPBYL3S=noc1AAEtep04GexRmn2f_T3BPgVFZKaqXTg@mail.gmail.com>

On Thu, May 25, 2023 at 01:38:54PM -0700, Vishal Moola wrote:
> On Thu, May 25, 2023 at 1:20 PM Mike Rapoport <rppt@kernel.org> wrote:
> >
> > On Thu, May 25, 2023 at 10:00:23AM -0700, Vishal Moola wrote:
> > > On Thu, May 25, 2023 at 1:56 AM Mike Rapoport <rppt@kernel.org> wrote:
> > > >
> > > > Hi,
> > > >
> > > > On Mon, May 01, 2023 at 12:27:56PM -0700, Vishal Moola (Oracle) wrote:
> > > > > No folio equivalents for page type operations have been defined, so
> > > > > define them for later folio conversions.
> > > >
> > > > Can you please elaborate on why we would need folios for page table descriptors?
> > >
> > > Thanks for the review!
> > >
> > > These macros are for callers that care about the page type, i.e. Table and
> > > Buddy. Aside from accounting for those cases, the page tables don't use folios.
> > > These are more for the cleanliness of those callers.
> >
> > But why would using folio APIs for PageType be cleaner than using page APIs?
> > Do you have an example?
> 
> Ah, for example in mm/memory-failure.c there are a couple of uses of PageTable.
> Like the line:
> if (folio_test_slab(folio) || PageTable(&folio->page) ||
> folio_test_reserved(folio))
> where that PageTable(&folio->page) can now be written as folio_test_table(folio)
> instead.
> 
> Also there are numerous uses of PageBuddy in mm/compaction.c that will
> likely need to be converted to folios as well.

... and you can currently call PageTable() on the second/third/... page
of an allocation and it will return false, regardless of what the
first page is typed as.  For most architectures, this doesn't matter,
but /proc/kpageflags will underreport the amount of memory allocated
as page tables on architectures which use multi-page allocations for
their page tables, as there's currently nothing to indicate
the size of the allocation.

To fix this, we need to use __GFP_COMP.


From xen-devel-bounces@lists.xenproject.org Thu May 25 21:41:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 21:41:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539882.841195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Ii4-0004mY-VX; Thu, 25 May 2023 21:41:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539882.841195; Thu, 25 May 2023 21:41:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Ii4-0004mR-Si; Thu, 25 May 2023 21:41:24 +0000
Received: by outflank-mailman (input) for mailman id 539882;
 Thu, 25 May 2023 21:41:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Ii3-0004mH-4w; Thu, 25 May 2023 21:41:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Ii2-0002kX-T6; Thu, 25 May 2023 21:41:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Ii2-0004U6-I6; Thu, 25 May 2023 21:41:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Ii2-0004yZ-He; Thu, 25 May 2023 21:41:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hKROev7f1wDLP7EuinYyAMYfSQbR1nnNGfM2gGDqhbw=; b=j8lekfTmaIahaccjHrt3Iegz48
	Wtk4yT7c5xhH1AIPGKI83ZNn6Xq0btyUQCQQ86iOqelAi09JZfE6SUpDgPCU8RX7oz3uQnkowlw6t
	g+UZaEeHqbv7ymenoycjg5HBeGxyDp7vIbMLZfwESFhujs+FyowMfbzG7brS/whXZa/Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180947-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180947: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=b300c134465465385045ab705b68a42699688332
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 21:41:22 +0000

flight 180947 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180947/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                b300c134465465385045ab705b68a42699688332
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    8 days
Failing since        180699  2023-05-18 07:21:24 Z    7 days   31 attempts
Testing same since   180937  2023-05-25 02:03:36 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 6870 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 25 21:58:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 21:58:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539890.841206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Iy2-0006Q2-GP; Thu, 25 May 2023 21:57:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539890.841206; Thu, 25 May 2023 21:57:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Iy2-0006Pv-CQ; Thu, 25 May 2023 21:57:54 +0000
Received: by outflank-mailman (input) for mailman id 539890;
 Thu, 25 May 2023 21:57:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Iy0-0006Pf-KX; Thu, 25 May 2023 21:57:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Iy0-00033L-C7; Thu, 25 May 2023 21:57:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Ixz-0004wI-OH; Thu, 25 May 2023 21:57:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Ixz-0001sS-Nm; Thu, 25 May 2023 21:57:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Iy8IL+qaRLKrMbIyIXt1mNUBsutab8B11CE9R+l/WE8=; b=IHbhBfYxMe+EWJt6MC9cUjqlct
	kO2JPgOWwXmjVXUqAduj9Aa5TnzrQ0ei3lHPkod3JkX+Kj8qaXgk8TLdI00aCNttPJIvEmQveJwX0
	SPg7ZnURBFkTIfhbbCjkX8YzoumWP2oNQGMy/y9sAaKWWTPVchJwCho7cpxP2mtbPrYw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180941-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180941: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=933174ae28ba72ab8de5b35cb7c98fc211235096
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 25 May 2023 21:57:51 +0000

flight 180941 linux-linus real [real]
flight 180948 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180941/
http://logs.test-lab.xenproject.org/osstest/logs/180948/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                933174ae28ba72ab8de5b35cb7c98fc211235096
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   39 days
Failing since        180281  2023-04-17 06:24:36 Z   38 days   71 attempts
Testing same since   180934  2023-05-24 22:41:38 Z    0 days    2 attempts

------------------------------------------------------------
2491 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 314227 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 25 22:08:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 22:08:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539896.841216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2J8P-0007y0-G9; Thu, 25 May 2023 22:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539896.841216; Thu, 25 May 2023 22:08:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2J8P-0007xt-Cd; Thu, 25 May 2023 22:08:37 +0000
Received: by outflank-mailman (input) for mailman id 539896;
 Thu, 25 May 2023 22:08:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gGWh=BO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q2J8O-0007xn-B8
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 22:08:36 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b02f94b4-fb48-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 00:08:34 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A460160B2E;
 Thu, 25 May 2023 22:08:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C0E3CC433D2;
 Thu, 25 May 2023 22:08:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b02f94b4-fb48-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685052512;
	bh=5g/HkwMm8XNJI1vp98454/Sli426iE1jv1+3sg/DUu8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GNLIeoSgGabMd4itR0MGgzqTsrXKnM4R4qyH/q99ymFajk7DQMnmFThnAv5z/Iw5k
	 /rnO8/07lsgABKIpBZGTpE+jm5IQmgCwm7g685/EH0thlueNueVMNKavHOB1lwaV8h
	 udYSvE2FFvwO9aMGXiv22geT9lKzAb/cTn/sM28zSCfhcAcMLiJ9hStY0dWAQF0p8T
	 TIT1a4XirtmuE+7AiZXAmF/MDAuWPP0W9eh5LhfTcX/JPEJTHgraxH4796CYFQa4Rn
	 1MruwYMqmKCMS+xOYhB4NscI4QeNr0bGlSo8tZwquXDDCSDQaOQl1Ta+L7FlTTMwwL
	 zgSlHuo2iDNBw==
Date: Thu, 25 May 2023 15:08:29 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] xen/misra: xen-analysis.py: Fix latent bug
In-Reply-To: <20230519093019.2131896-3-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305251508170.44000@ubuntu-linux-20-04-desktop>
References: <20230519093019.2131896-1-luca.fancellu@arm.com> <20230519093019.2131896-3-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 19 May 2023, Luca Fancellu wrote:
> Currently there is a latent bug that is not triggered because
> the function cppcheck_merge_txt_fragments is called with the
> parameter strip_paths holding a list of only one element.
> 
> The bug is that the split call should not be inside the loop
> over strip_paths, but one level up; fix it.
> 
> Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/scripts/xen_analysis/cppcheck_report_utils.py | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> index c5f466aff141..fdc299c7e029 100644
> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> @@ -104,8 +104,8 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>                  for path in strip_paths:
>                      text_report_content[i] = text_report_content[i].replace(
>                                                                  path + "/", "")
> -                    # Split by : separator
> -                    text_report_content[i] = text_report_content[i].split(":")
> +                # Split by : separator
> +                text_report_content[i] = text_report_content[i].split(":")
>  
>              # sort alphabetically for second field (misra rule) and as second
>              # criteria for the first field (file name)
> -- 
> 2.34.1
> 
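The fix above can be illustrated with a minimal standalone sketch (the file and path names here are hypothetical, not taken from a real report):

```python
# Minimal sketch of why the fix moves str.split() out of the
# strip_paths loop: split must run exactly once per report line, after
# every path prefix has been stripped.  With the split inside the loop
# and more than one entry in strip_paths, the second iteration would
# call .replace() on a list and raise AttributeError.
lines = ["/build/a/xen/arch/x86/foo.c(10,2):rule 8.4:message"]
strip_paths = ["/build/a", "/build/b"]

for i in range(len(lines)):
    for path in strip_paths:
        lines[i] = lines[i].replace(path + "/", "")
    # Split by : separator, once, after the loop
    lines[i] = lines[i].split(":")

assert lines[0] == ["xen/arch/x86/foo.c(10,2)", "rule 8.4", "message"]
```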


From xen-devel-bounces@lists.xenproject.org Thu May 25 22:09:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 22:09:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539898.841226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2J8x-0008ST-Nz; Thu, 25 May 2023 22:09:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539898.841226; Thu, 25 May 2023 22:09:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2J8x-0008SM-LH; Thu, 25 May 2023 22:09:11 +0000
Received: by outflank-mailman (input) for mailman id 539898;
 Thu, 25 May 2023 22:09:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gGWh=BO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q2J8w-0008Qr-3K
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 22:09:10 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c4de03bd-fb48-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 00:09:08 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 756FD64899;
 Thu, 25 May 2023 22:09:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 78834C433D2;
 Thu, 25 May 2023 22:09:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4de03bd-fb48-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685052546;
	bh=roZDVn3FHLL+z6CJu27XomIMLio0lYp2gQiMPRyC4Vo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=c2tGmow08pKxFfqWvx/ppvNV+j7JZUw5H5I6IP2PBHG+1m66x8Y+wSLf9G924EkY/
	 Xh2JazEARfEPYLrwVVWD/fk/Om9HEQcmf2XDK3U2kYGdaLLxx0/YI4ZYgje4olE8ZD
	 Yc91h06uuykEsEUMYImvERQ8JFZOJQB/C91k5dyulFi3ftGVrImzbgJ/rJFKQxXgbs
	 n/1oIyyf8AhknAqge+g3eW+C64PnA7tCbx9fkvCmq8c7X1KPxIm01t6Fec/+MhOL5C
	 z27oTroCDHZ2xEWJxBLpE9q6Xx67ArKQMggmJYIH6Tshb3pWPwK+tM17oHVic4jJ4+
	 d4OSGfE7Tt2Ww==
Date: Thu, 25 May 2023 15:09:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>, Michal Orzel <michal.orzel@amd.com>
Subject: Re: [PATCH 3/3] xen/misra: xen-analysis.py: Fix cppcheck report
 relative paths
In-Reply-To: <20230519093019.2131896-4-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.22.394.2305251508440.44000@ubuntu-linux-20-04-desktop>
References: <20230519093019.2131896-1-luca.fancellu@arm.com> <20230519093019.2131896-4-luca.fancellu@arm.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 19 May 2023, Luca Fancellu wrote:
> Fix the generation of the path relative to the repository, for
> cppcheck reports, when the script launches make with an in-tree build.
> 
> Fixes: b046f7e37489 ("xen/misra: xen-analysis.py: use the relative path from the ...")
> Reported-by: Michal Orzel <michal.orzel@amd.com>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Tested-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  .../xen_analysis/cppcheck_report_utils.py     | 25 ++++++++++++++++---
>  1 file changed, 21 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/scripts/xen_analysis/cppcheck_report_utils.py b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> index fdc299c7e029..10100f6c6a57 100644
> --- a/xen/scripts/xen_analysis/cppcheck_report_utils.py
> +++ b/xen/scripts/xen_analysis/cppcheck_report_utils.py
> @@ -1,6 +1,7 @@
>  #!/usr/bin/env python3
>  
> -import os
> +import os, re
> +from . import settings
>  from xml.etree import ElementTree
>  
>  class CppcheckHTMLReportError(Exception):
> @@ -101,12 +102,28 @@ def cppcheck_merge_txt_fragments(fragments_list, out_txt_file, strip_paths):
>              text_report_content = list(text_report_content)
>              # Strip path from report lines
>              for i in list(range(0, len(text_report_content))):
> -                for path in strip_paths:
> -                    text_report_content[i] = text_report_content[i].replace(
> -                                                                path + "/", "")
>                  # Split by : separator
>                  text_report_content[i] = text_report_content[i].split(":")
>  
> +                for path in strip_paths:
> +                    text_report_content[i][0] = \
> +                        text_report_content[i][0].replace(path + "/", "")
> +
> +                # When the compilation is in-tree, the makefile places
> +                # the directory in /xen/xen, making cppcheck produce
> +                # relative path from there, so check if "xen/" is a prefix
> +                # of the path and if it's not, check if it can be added to
> +                # have a relative path from the repository instead of from
> +                # /xen/xen
> +                if not text_report_content[i][0].startswith("xen/"):
> +                    # cppcheck first entry is in this format:
> +                    # path/to/file(line,cols), remove (line,cols)
> +                    cppcheck_file = re.sub(r'\(.*\)', '',
> +                                           text_report_content[i][0])
> +                    if os.path.isfile(settings.xen_dir + "/" + cppcheck_file):
> +                        text_report_content[i][0] = \
> +                            "xen/" + text_report_content[i][0]
> +
>              # sort alphabetically for second field (misra rule) and as second
>              # criteria for the first field (file name)
>              text_report_content.sort(key = lambda x: (x[1], x[0]))
> -- 
> 2.34.1
> 
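The path handling in this patch can be sketched as a small standalone function (the names `normalise`, `xen_dir`, and the example paths are made up for illustration; the real code lives in cppcheck_report_utils.py and uses the `settings` module):

```python
import os
import re

# Hypothetical sketch of the fix: with an in-tree build, cppcheck's
# first field looks like "arch/x86/foo.c(10,2)", relative to xen/.
# Strip the "(line,col)" suffix, and if the remaining path is a file
# under the xen/ directory, prefix "xen/" so the report entry becomes
# relative to the repository root instead.
def normalise(entry, xen_dir, isfile=os.path.isfile):
    if entry.startswith("xen/"):
        return entry
    path = re.sub(r'\(.*\)', '', entry)
    if isfile(xen_dir + "/" + path):
        return "xen/" + entry
    return entry

# Simulate the filesystem check instead of touching real files.
fake_isfile = lambda p: p == "/repo/xen/arch/x86/foo.c"
assert normalise("arch/x86/foo.c(10,2)", "/repo/xen",
                 fake_isfile) == "xen/arch/x86/foo.c(10,2)"
# Already repository-relative entries are left alone.
assert normalise("xen/arch/x86/foo.c(10,2)", "/repo/xen",
                 fake_isfile) == "xen/arch/x86/foo.c(10,2)"
```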


From xen-devel-bounces@lists.xenproject.org Thu May 25 23:19:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 25 May 2023 23:19:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539904.841236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2KEV-0007I4-N9; Thu, 25 May 2023 23:18:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539904.841236; Thu, 25 May 2023 23:18:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2KEV-0007Hx-Jq; Thu, 25 May 2023 23:18:59 +0000
Received: by outflank-mailman (input) for mailman id 539904;
 Thu, 25 May 2023 23:18:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gGWh=BO=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q2KEU-0007Hr-6w
 for xen-devel@lists.xenproject.org; Thu, 25 May 2023 23:18:58 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 84dd478f-fb52-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 01:18:56 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E1A0E60B97;
 Thu, 25 May 2023 23:18:54 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 55C29C433D2;
 Thu, 25 May 2023 23:18:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84dd478f-fb52-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685056734;
	bh=En/PW37VhNrHjZx8SCC8JoA7W2AGgRTtjGtgFDrtiYk=;
	h=From:To:Cc:Subject:Date:From;
	b=hPtQ2y5IDzTlekGESTgzGnfNKZ+ChPKwBC7nztZUi6SK6OfScobvpDtHS5DNB69nP
	 jl15k7FfLaROdDaVGjc39IPNIxrjDo+hFA9anzskY1VlhJbnr8VfB8FAAE284KY3vy
	 9pHScrkzEmVpbtdvTX9MA0A+5PCRBvuLf2M6S8rUmL6CADtlEOw7C3FWwAj8/cQmfI
	 5HMl0RdhX4bieIg7R8Soc9ZK2l0l59yrvlsCXW6Hgw038yvSwFY1JZxSwbhpE2iS6M
	 +luSkV/AjdvrmjcTy16M2puTtCb4IzRI3M7OVvLV9R4nwz2Q3L3dwd5MI64IGBTHc6
	 XsxGfISFmcfpQ==
From: Stefano Stabellini <sstabellini@kernel.org>
To: roger.pau@citrix.com,
	jbeulich@suse.com
Cc: andrew.cooper3@citrix.com,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Xenia.Ragiadakou@amd.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: [PATCH v2] xen/x86/pvh: handle ACPI RSDT table in PVH Dom0 build
Date: Thu, 25 May 2023 16:18:51 -0700
Message-Id: <20230525231851.700750-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@amd.com>

Xen always generates an XSDT table even if the firmware provided only an
RSDT table. Copy the RSDT header from the firmware table when the XSDT
table is missing.

Fixes: 1d74282c455f ('x86: setup PVHv2 Dom0 ACPI tables')
Suggested-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---

Note that this patch is sufficient to get Xen and Dom0 PVH to boot on
QEMU. It turns out that dom0-iommu=none is not needed because QEMU can
actually emulate an AMD IOMMU. I just needed to use the right command
line arguments. Without dom0-iommu=none, the error fixed by the second
patch in v1 doesn't manifest, so I dropped patch 2/2.

FYI this is the QEMU command line to use:

qemu-system-x86_64 \
    -machine q35 \
    -device amd-iommu \
    -m 2G -smp 2 \
    -monitor none -serial stdio \
    -nographic \
    -device virtio-net-pci,netdev=n0 \
    -netdev user,id=n0,tftp=binaries,bootfile=/pxelinux.0

This is pxelinux.0:

kernel xen console=com1 dom0=pvh dom0_mem=1G noreboot
module bzImage console=hvc0
module xen-rootfs.cpio.gz
boot

I'll work on adding a gitlab-ci test for this next.

---
 xen/arch/x86/hvm/dom0_build.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index fd2cbf68bc..e1043e40d2 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -966,7 +966,16 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
         rc = -EINVAL;
         goto out;
     }
-    xsdt_paddr = rsdp->xsdt_physical_address;
+    /*
+     * Note the header is the same for both RSDT and XSDT, so it's fine to
+     * copy the native RSDT header to the Xen crafted XSDT if no native
+     * XSDT is available.
+     */
+    if ( rsdp->revision > 1 && rsdp->xsdt_physical_address )
+        xsdt_paddr = rsdp->xsdt_physical_address;
+    else
+        xsdt_paddr = rsdp->rsdt_physical_address;
+
     acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
     table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
     if ( !table )
-- 
2.25.1
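The selection logic in the hunk above can be modeled in a few lines (this is a hedged Python model of the C code, not the code itself):

```python
# Model of the fallback in the patch: per the ACPI spec, the RSDP
# carries a valid 64-bit XSDT address only when its Revision field is
# >= 2, so fall back to the 32-bit RSDT address when the revision is
# older or the XSDT pointer is zero.  The table header layout is shared
# between RSDT and XSDT, which is why copying the RSDT header into the
# Xen-crafted XSDT is safe.
def pick_root_table(revision, xsdt_paddr, rsdt_paddr):
    if revision > 1 and xsdt_paddr:
        return xsdt_paddr
    return rsdt_paddr

assert pick_root_table(1, 0, 0x1000) == 0x1000       # ACPI 1.0: RSDT only
assert pick_root_table(2, 0x2000, 0x1000) == 0x2000  # ACPI 2.0+: use XSDT
assert pick_root_table(2, 0, 0x1000) == 0x1000       # zero XSDT pointer
```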



From xen-devel-bounces@lists.xenproject.org Fri May 26 01:16:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 01:16:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539910.841246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2M40-0001Qh-I9; Fri, 26 May 2023 01:16:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539910.841246; Fri, 26 May 2023 01:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2M40-0001Qa-F3; Fri, 26 May 2023 01:16:16 +0000
Received: by outflank-mailman (input) for mailman id 539910;
 Fri, 26 May 2023 01:16:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i5vf=BP=gmail.com=alistair23@srs-se1.protection.inumbo.net>)
 id 1q2M3z-0001QU-GZ
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 01:16:15 +0000
Received: from mail-ua1-x933.google.com (mail-ua1-x933.google.com
 [2607:f8b0:4864:20::933])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e72922c5-fb62-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 03:16:13 +0200 (CEST)
Received: by mail-ua1-x933.google.com with SMTP id
 a1e0cc1a2514c-7865ce76c6dso62219241.2
 for <xen-devel@lists.xenproject.org>; Thu, 25 May 2023 18:16:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e72922c5-fb62-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685063771; x=1687655771;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=NB5REcbLpwjs6+Y48dM4uBEElXpnuURkA3t4yUlx7h8=;
        b=CJf+e7RuT6G0XZq3UF4RVieEdm9Eul75geX6129LVYurhkwzadqBbGIhaDj78Adro9
         9/mSRA0JkFWLs0mry+bmzL6OOY6OAxZB6yA9aFb4ywfXDtqhITRFor+b0gtqXKUsI8r9
         pKMkvJ2iD7lths8YzNpaCJYkSeeujqHNyU7FSXQdIE3oh4D4hIokC8rvchFfXztugqea
         la06NFZmhK9BYhANhDwg24J9vlTrJsnHZRIDrlmL7M5JMVcVvlu7yJOuk4mPCOOYqHTX
         FF8ZAODMN2zt70skhIz/kg52ymElEu0oH7lAPpAOtpxVp77yT/drXd72y5FjaqOxEd6H
         O21w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685063772; x=1687655772;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=NB5REcbLpwjs6+Y48dM4uBEElXpnuURkA3t4yUlx7h8=;
        b=SjBWlJp/N9RLZKmPcFLDQwo7jJif8QGl64YWnBZphe+qtS09QnjZFXptLfvKymeQwf
         q1Z09MVdz1vqGaF0OyWNI3aiNEYW+ZYQin9rCpFT8KvzDTDLggeMf03AxDze8dUb5V+Y
         DtQ6pUm5u/Pw/Ih4jjdAfeVZWZL8nz0+QzJ6ja0245vmYKuExfIvQvdZM8PI6ht1F5EL
         dac7Mc1bBFWbZNSiuKnDX1QVL8evtRrh07/sM7C0Hz78qPeOzK5jtJ0VuHKg+sW5vyeG
         upQzwFTyMU9FxqEl+CwYGB0WzEsHoA/kbG6fPUdpM9lYkfH4uJ3Z/Ar15yrazDYE+loA
         R3lA==
X-Gm-Message-State: AC+VfDzg0ulBxRpuuFuiedaMLkfiDeIDuHZNfVUrjn5wQR9A4Nn5Afl+
	R+Dibfo6NBIHAJuD2FrYGGdlEAHDSALNBmcFKYQ=
X-Google-Smtp-Source: ACHHUZ548+vK/qS35LDCn2jIrt2OtiARxrAZizkWeIMagbrka75GyGA7V+WCUgsLM7rI/yq2wlZKwe+Vwfi9QiOVvdY=
X-Received: by 2002:a05:6102:1d2:b0:434:70b4:b356 with SMTP id
 s18-20020a05610201d200b0043470b4b356mr53546vsq.28.1685063771697; Thu, 25 May
 2023 18:16:11 -0700 (PDT)
MIME-Version: 1.0
References: <a261b10b-25e4-3374-d6c4-05b307619d81@suse.com> <aa76774a-0d73-989e-e054-1b30490160e1@suse.com>
In-Reply-To: <aa76774a-0d73-989e-e054-1b30490160e1@suse.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Fri, 26 May 2023 11:15:45 +1000
Message-ID: <CAKmqyKPR===gfi0A0h0qMy-2s=aANGzNvkBWdFecc_xDtsQt3w@mail.gmail.com>
Subject: Re: [PATCH v2 1/2] build: shorten macro references
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	Bobby Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair.francis@wdc.com>, 
	Connor Davis <connojdavis@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, May 8, 2023 at 10:58 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> Presumably by copy-and-paste we've accumulated a number of instances of
> $(@D)/$(@F), which really is nothing else than $@. The split form only
> needs using when we want to e.g. insert a leading . at the beginning of
> the file name portion of the full name.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
> v2: Insert blanks after ">".
>
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -104,9 +104,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
>         $(MAKE) $(build)=3D$(@D) $(@D)/.$(@F).1.o
>         $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
>             $(@D)/.$(@F).1.o -o $@
> -       $(NM) -pa --format=3Dsysv $(@D)/$(@F) \
> +       $(NM) -pa --format=3Dsysv $@ \
>                 | $(objtree)/tools/symbols --all-symbols --xensyms --sysv=
 --sort \
> -               >$(@D)/$(@F).map
> +               > $@.map
>         rm -f $(@D)/.$(@F).[0-9]*
>
>  .PHONY: include
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -10,9 +10,9 @@ $(TARGET): $(TARGET)-syms
>
>  $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
>         $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -=
o $@
> -       $(NM) -pa --format=sysv $(@D)/$(@F) \
> +       $(NM) -pa --format=sysv $@ \
>                 | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> -               >$(@D)/$(@F).map
> +               > $@.map
>
>  $(obj)/xen.lds: $(src)/xen.lds.S FORCE
>         $(call if_changed_dep,cpp_lds_S)
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -150,9 +150,9 @@ $(TARGET)-syms: $(objtree)/prelink.o $(o
>         $(MAKE) $(build)=$(@D) $(@D)/.$(@F).1.o
>         $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
>             $(orphan-handling-y) $(@D)/.$(@F).1.o -o $@
> -       $(NM) -pa --format=sysv $(@D)/$(@F) \
> +       $(NM) -pa --format=sysv $@ \
>                 | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> -               >$(@D)/$(@F).map
> +               > $@.map
>         rm -f $(@D)/.$(@F).[0-9]* $(@D)/..$(@F).[0-9]*
>  ifeq ($(CONFIG_XEN_IBT),y)
>         $(SHELL) $(srctree)/tools/check-endbr.sh $@
> @@ -224,8 +224,9 @@ endif
>         $(MAKE) $(build)=$(@D) .$(@F).1r.o .$(@F).1s.o
>         $(LD) $(call EFI_LDFLAGS,$(VIRT_BASE)) -T $(obj)/efi.lds -N $< \
>               $(@D)/.$(@F).1r.o $(@D)/.$(@F).1s.o $(orphan-handling-y) $(note_file_option) -o $@
> -       $(NM) -pa --format=sysv $(@D)/$(@F) \
> -               | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort >$(@D)/$(@F).map
> +       $(NM) -pa --format=sysv $@ \
> +               | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
> +               > $@.map
>  ifeq ($(CONFIG_DEBUG_INFO),y)
>         $(if $(filter --strip-debug,$(EFI_LDFLAGS)),:$(space))$(OBJCOPY) -O elf64-x86-64 $@ $@.elf
>  endif
>
>

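[Editor's note: the equivalence Jan describes can be sketched with a toy rule. The paths and prerequisites below are hypothetical, not taken from the Xen tree; they only illustrate how GNU Make's automatic variables relate.]

```makefile
# For a target out/xen-syms, GNU Make sets:
#   $@    = out/xen-syms   (the full target name)
#   $(@D) = out            (its directory part)
#   $(@F) = xen-syms       (its file part)
# so $(@D)/$(@F) is just a longer spelling of $@.  The split form is only
# needed when one half must be modified, e.g. to name a dot-prefixed
# intermediate file next to the target.
# (Recipe lines must start with a tab character.)
out/xen-syms: prelink.o
	@echo 'target       = $@'
	@echo 'split form   = $(@D)/$(@F)'
	@echo 'intermediate = $(@D)/.$(@F).1.o'
```

Running `make out/xen-syms` prints `out/xen-syms` for both of the first two lines, and `out/.xen-syms.1.o` for the third, which is the one case where the split form earns its keep.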

From xen-devel-bounces@lists.xenproject.org Fri May 26 02:40:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 02:40:48 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180949-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180949: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=a3cb6d5004ff638aefe686ecd540718a793bd1b1
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 02:40:29 +0000

flight 180949 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180949/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                a3cb6d5004ff638aefe686ecd540718a793bd1b1
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    8 days
Failing since        180699  2023-05-18 07:21:24 Z    7 days   32 attempts
Testing same since   180949  2023-05-25 22:08:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 7534 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 26 02:50:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 02:50:04 +0000
Date: Fri, 26 May 2023 02:49:51 +0000
From: Wei Liu <wei.liu@kernel.org>
To: Jakub Kicinski <kuba@kernel.org>
Cc: Linus Walleij <linus.walleij@linaro.org>, Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org
Subject: Re: [PATCH] xen/netback: Pass (void *) to virt_to_page()
Message-ID: <ZHAeT2m7297SOKfS@liuwe-devbox-debian-v2>
References: <20230523140342.2672713-1-linus.walleij@linaro.org>
 <20230524221147.5791ba3a@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230524221147.5791ba3a@kernel.org>

On Wed, May 24, 2023 at 10:11:47PM -0700, Jakub Kicinski wrote:
> On Tue, 23 May 2023 16:03:42 +0200 Linus Walleij wrote:
> > virt_to_page() takes a virtual address as argument but
> > the driver passes an unsigned long, which works because
> > the target platform(s) uses polymorphic macros to calculate
> > the page.
> > 
> > Since many architectures implement virt_to_pfn() as
> > a macro, this function becomes polymorphic and accepts both a
> > (unsigned long) and a (void *).
> > 
> > Fix this up by an explicit (void *) cast.
> 
> Paul, Wei, looks like netdev may be the usual path for this patch 
> to flow thru, although I'm never 100% sure with Xen.

Yes. Netdev is the right path.

Thanks,
Wei.


From xen-devel-bounces@lists.xenproject.org Fri May 26 02:58:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 02:58:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539929.841276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Nf0-0004rH-DC; Fri, 26 May 2023 02:58:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539929.841276; Fri, 26 May 2023 02:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Nf0-0004rA-AD; Fri, 26 May 2023 02:58:34 +0000
Received: by outflank-mailman (input) for mailman id 539929;
 Fri, 26 May 2023 02:58:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Ney-0004qz-Dl; Fri, 26 May 2023 02:58:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Ney-00013t-1m; Fri, 26 May 2023 02:58:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Nex-0006kM-Fj; Fri, 26 May 2023 02:58:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Nex-00085Y-FD; Fri, 26 May 2023 02:58:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qn7vyZwagSXvNDBOGXRmdEYDGFlTPPFomSefXo5iTPU=; b=Wpm94rzKP8IhgvJNNfgl0Aex2B
	Rbke7gex9Tl0vHfr1LUBdjYCvxeufbyVHAlUJ0Fa5ZsE8izQU4Fk1CGdTjiI2cytdhvgNzr1DZdNS
	8GIs99AW6CJnwlql4quyxA6uzQJFT0q+igGQr2S4kI+gb877GLKeRI0j4D7d/3c3vg70=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180946-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180946: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=354be8936d97d4f2cb8cc004bb0296826d89bd8d
X-Osstest-Versions-That:
    xen=380c6c170393c48852d4f2b1ea97125a399cfc61
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 02:58:31 +0000

flight 180946 xen-unstable real [real]
flight 180951 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180946/
http://logs.test-lab.xenproject.org/osstest/logs/180951/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 180930

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-localmigrate/x10 fail pass in 180951-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180930
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180930
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180930
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180930
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180930
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180930
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180930
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180930
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180930
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  354be8936d97d4f2cb8cc004bb0296826d89bd8d
baseline version:
 xen                  380c6c170393c48852d4f2b1ea97125a399cfc61

Last test of basis   180930  2023-05-24 17:38:36 Z    1 days
Failing since        180938  2023-05-25 02:40:04 Z    1 days    2 attempts
Testing same since   180946  2023-05-25 15:37:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 354be8936d97d4f2cb8cc004bb0296826d89bd8d
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu May 25 14:58:14 2023 +0200

    public: fix comment typo regarding IOREQ Server
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 053ffa783e6e8f402ba6bafd620666a3623c0fc8
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu May 25 14:57:14 2023 +0200

    x86/iommu: adjust type in arch_iommu_hwdom_init()
    
    The 'i' iterator index stores a PDX, not a PFN, and hence the initial
    assignment of start (which stores a PFN) needs a conversion from PFN
    to PDX.
    
    This is harmless currently, as the PDX compression skips the bottom
    MAX_ORDER bits which cover the low 1MB, but still do the conversion
    from PDX to PFN for type correctness.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 56c0063f4e7a71ba8bb91207d7f111e971dcec02
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 25 14:56:55 2023 +0200

    xen/misra: xen-analysis.py: Improve the cppcheck version check
    
    Use tuple comparison to check the cppcheck version.
    
    Take the occasion to harden the regex, escaping the dots so that we
    check for them instead of generic characters.
    
    Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit cca2361947b3c9851b3ded6e43cc48caf5258eee
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Thu May 18 14:24:15 2023 +0200

    automation: Enable parallel build with cppcheck analysis
    
    The limitation was fixed by the commit:
    45bfff651173d538239308648c6a6cd7cbe37172
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 511b9f286c3dadd041e0d90beeff7d47c9bf3b7a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 19:15:48 2023 +0100

    x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
    
    MSR_ARCH_CAPS data is now included in featureset information.  Replace
    opencoded checks with regular feature ones.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 205a9f970378c31ae3e00b52d59103a2e881b9e0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 19:05:01 2023 +0100

    x86/tsx: Remove opencoded MSR_ARCH_CAPS check
    
    The current cpu_has_tsx_ctrl tristate is serving a double purpose: to signal the
    first pass through tsx_init(), and the availability of MSR_TSX_CTRL.
    
    Drop the variable, replacing it with a once boolean, and altering
    cpu_has_tsx_ctrl to come out of the feature information.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 8f6bc7f9b72eb7cf0c8c5ae5d80498a58ba0b7c3
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 16:59:25 2023 +0100

    x86/vtx: Remove opencoded MSR_ARCH_CAPS check
    
    MSR_ARCH_CAPS data is now included in featureset information.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit a87d131a8c2952e53ba9ed513d5553426cdeac34
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 16 14:07:43 2023 +0100

    x86/cpufeature: Rework {boot_,}cpu_has()
    
    One area where Xen deviates from Linux is that test_bit() forces a volatile
    read.  This leads to poor code generation, because the optimiser cannot merge
    bit operations on the same word.
    
    Drop the use of test_bit(), and write the expressions in regular C.  This
    removes the include of bitops.h (which is a frequent source of header
    tangles), and it offers the optimiser far more flexibility.
    
    Bloat-o-meter reports a net change of:
    
      add/remove: 0/0 grow/shrink: 21/87 up/down: 641/-2751 (-2110)
    
    with half of that in x86_emulate() alone.  vmx_ctxt_switch_to() seems to be
    the fastpath with the greatest delta at -24, where the optimiser has
    successfully removed the branch hidden in cpu_has_msr_tsc_aux.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit bbb289f3d5bdd3358af748d7c567343532ac45b5
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 15:53:35 2023 +0100

    x86/boot: Expose MSR_ARCH_CAPS data in guest max policies
    
    We already have common and default feature adjustment helpers.  Introduce one
    for max featuresets too.
    
    Offer MSR_ARCH_CAPS unconditionally in the max policy, and stop clobbering the
    data inherited from the Host policy.  This will be necessary to level a VM
    safely for migration.  Annotate the ARCH_CAPS CPUID bit as special.  Note:
    ARCH_CAPS is still max-only for now, so will not be inhereted by the default
    policies.
    
    With this done, the special case for dom0 can be shrunk to just resampling the
    Host policy (as ARCH_CAPS isn't visible by default yet).
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 70553000d6b44dd7c271a35932b0b3e1f22c5532
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 15:37:02 2023 +0100

    x86/boot: Record MSR_ARCH_CAPS for the Raw and Host CPU policy
    
    Extend x86_cpu_policy_fill_native() with a read of ARCH_CAPS based on the
    CPUID information just read, removing the special handling in
    calculate_raw_cpu_policy().
    
    Right now, the only use of x86_cpu_policy_fill_native() outside of Xen is the
    unit tests.  Getting MSR data in this context is left to whomever first
    encounters a genuine need to have it.
    
    Extend generic_identify() to read ARCH_CAPS into x86_capability[], which is
    fed into the Host Policy.  This in turn means there's no need to special case
    arch_caps in calculate_host_policy().
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit ce8c930851a5ca21c4e70f83be7e8b290ce1b519
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 18:50:59 2023 +0100

    x86/cpu-policy: MSR_ARCH_CAPS feature names
    
    Seed the default visibility from the dom0 special case, which for the most
    part just exposes the *_NO bits.  EIBRS is the one non-*_NO bit, which is
    "just" a status bit to the guest indicating a change in implementation of IBRS
    which is already fully supported.
    
    Insert a block dependency from the ARCH_CAPS CPUID bit to the entire content
    of the MSR.  This is because MSRs have no structure information similar to
    CPUID, and the dependency is used by
    x86_cpu_policy_clear_out_of_range_leaves() to bulk-clear inaccessible words.
    
    The overall CPUID bit is still max-only, so all of MSR_ARCH_CAPS is hidden in
    the default policies.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
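The block dependency described above means "if the enumerating CPUID bit is
clear, every bit of the MSR is out of range and must be cleared".  A minimal
sketch, using hypothetical names rather than Xen's real structures:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical miniature policy; illustrative only. */
struct cpu_policy {
    unsigned int feat_arch_caps;  /* the enumerating CPUID bit */
    uint64_t arch_caps;           /* entire MSR content, dependent on it */
};

/* Because MSRs carry no per-leaf structure of their own, the whole word
 * is bulk-cleared when the bit it depends on is absent. */
static void clear_out_of_range(struct cpu_policy *p)
{
    if (!p->feat_arch_caps)
        p->arch_caps = 0;
}
```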

commit d9fe459ffad8a6eac2f695adb2331aff83c345d1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 17:55:21 2023 +0100

    x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
    
    Bits through 24 are already defined, meaning that we're not far off needing
    the second word.  Put both in right away.
    
    As both halves are present now, the arch_caps field is full width.  Adjust the
    unit test, which notices.
    
    The bool bitfield names in the arch_caps union are unused, and somewhat out of
    date.  They'll shortly be automatically generated.
    
    Add CPUID and MSR prefixes to the ./xen-cpuid verbose output, now that there
    are a mix of the two.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 43912f8dbb1888ffd7f00adb10724c70e71927c4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 14:14:53 2023 +0100

    x86/boot: Adjust MSR_ARCH_CAPS handling for the Host policy
    
    We are about to move MSR_ARCH_CAPS into the featureset, but the order of
    operations (copy raw policy, then copy x86_capability[] in) will end up
    clobbering the ARCH_CAPS value.
    
    Some toolstacks use this information to handle TSX compatibility across the
    CPUs and microcode versions where support was removed.
    
    To avoid this transient breakage, read from raw_cpu_policy rather than
    modifying it in place.  This logic will be removed entirely in due course.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
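The ordering hazard above can be sketched with a toy model.  Everything here
(the struct, copy_featureset_in, host_arch_caps) is an illustrative
stand-in, not Xen's actual code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical miniature of the raw/host policies. */
struct policy {
    uint32_t featureset;  /* stand-in for the featureset words */
    uint64_t arch_caps;   /* stand-in for MSR_ARCH_CAPS */
};

/* Step 2 of the ordering: overwriting the featureset from
 * x86_capability[] rebuilds the word ARCH_CAPS will live in, losing
 * the value copied from the raw policy in step 1. */
static void copy_featureset_in(struct policy *host, uint32_t caps)
{
    host->featureset = caps;
    host->arch_caps = 0;  /* the transient clobber */
}

/* Reading from the raw policy instead of the (clobbered) host copy
 * avoids the transient breakage. */
static uint64_t host_arch_caps(const struct policy *raw)
{
    return raw->arch_caps;
}
```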

commit ef1987fcb0fdfaa7ee148024037cb5fa335a7b2d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 13:52:39 2023 +0100

    x86/boot: Rework dom0 feature configuration
    
    Right now, dom0's feature configuration is split between the common path and
    a dom0-specific one.  This is mostly by accident, and causes some very
    subtle bugs.
    
    First, start by clearly defining init_dom0_cpuid_policy() to apply to the
    domain that Xen builds automatically.  The late hwdom case is still
    constructed in a mostly normal way, with the control domain having full
    discretion over the CPU policy.
    
    Identifying this highlights a latent bug - the two halves of the MSR_ARCH_CAPS
    bodge are asymmetric with respect to the hardware domain.  This means that
    shim, or a control-only dom0, sees the MSR_ARCH_CAPS CPUID bit but none of the
    MSR content.  This in turn declares the hardware to be retpoline-safe by
    failing to advertise the {R,}RSBA bits appropriately.  Restrict this logic to
    the hardware domain, although the special case will cease to exist shortly.
    
    For the CPUID Faulting adjustment, the comment in ctxt_switch_levelling()
    isn't actually relevant.  Provide a better explanation.
    
    Move the recalculate_cpuid_policy() call outside of the dom0-cpuid= case.
    This is no change for now, but will become necessary shortly.
    
    Finally, place the second half of the MSR_ARCH_CAPS bodge after the
    recalculate_cpuid_policy() call.  This is necessary to avoid transiently
    breaking the hardware domain's view while the handling is cleaned up.  This
    special case will cease to exist shortly.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 26 04:03:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 04:03:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539937.841285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Ofi-0003iS-7m; Fri, 26 May 2023 04:03:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539937.841285; Fri, 26 May 2023 04:03:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Ofi-0003iL-57; Fri, 26 May 2023 04:03:22 +0000
Received: by outflank-mailman (input) for mailman id 539937;
 Fri, 26 May 2023 04:03:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Ofg-0003iB-7K; Fri, 26 May 2023 04:03:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Off-0002Vs-Pa; Fri, 26 May 2023 04:03:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Off-00089T-CE; Fri, 26 May 2023 04:03:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Off-0006RI-Bj; Fri, 26 May 2023 04:03:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sJLqXphbZt/C1tiMmfwzGxURl0lwt7ntJTDmivB9TKI=; b=SqhewG1VoZ9jVnlHsP+fW5mL9C
	fAOrkzwdnc6pxpUPWddm8xPsGDQNYwHKYd8Nw6+NaQOMj2Rm4OWqpeaez3AOgLRtENK/zh2l5JBGk
	H33QCL3DRXr37m56TwZsSnOBCFXk2TmVZyVZrX9PTYTIFJlNnNYm9enKBFdiMXVpMTlg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180952-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180952: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=a3cb6d5004ff638aefe686ecd540718a793bd1b1
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 04:03:19 +0000

flight 180952 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180952/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                a3cb6d5004ff638aefe686ecd540718a793bd1b1
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    8 days
Failing since        180699  2023-05-18 07:21:24 Z    7 days   33 attempts
Testing same since   180949  2023-05-25 22:08:34 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 7534 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 26 04:54:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 04:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539943.841296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2PSe-0000ft-04; Fri, 26 May 2023 04:53:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539943.841296; Fri, 26 May 2023 04:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2PSd-0000fm-T8; Fri, 26 May 2023 04:53:55 +0000
Received: by outflank-mailman (input) for mailman id 539943;
 Fri, 26 May 2023 04:53:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r+8a=BP=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1q2PSc-0000fg-6J
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 04:53:54 +0000
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d12878f-fb81-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 06:53:50 +0200 (CEST)
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 May 2023 21:53:47 -0700
Received: from fmsmsx601.amr.corp.intel.com ([10.18.126.81])
 by orsmga004.jf.intel.com with ESMTP; 25 May 2023 21:53:47 -0700
Received: from fmsmsx610.amr.corp.intel.com (10.18.126.90) by
 fmsmsx601.amr.corp.intel.com (10.18.126.81) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 25 May 2023 21:53:46 -0700
Received: from FMSEDG603.ED.cps.intel.com (10.1.192.133) by
 fmsmsx610.amr.corp.intel.com (10.18.126.90) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Thu, 25 May 2023 21:53:46 -0700
Received: from NAM12-BN8-obe.outbound.protection.outlook.com (104.47.55.174)
 by edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Thu, 25 May 2023 21:53:46 -0700
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by IA1PR11MB7812.namprd11.prod.outlook.com (2603:10b6:208:3f7::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.18; Fri, 26 May
 2023 04:53:41 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::dd3:4277:e348:9e00]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::dd3:4277:e348:9e00%3]) with mapi id 15.20.6433.015; Fri, 26 May 2023
 04:53:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d12878f-fb81-11ed-b230-6b7b168915f2
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "Beulich, Jan" <JBeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>, "Pau Monné, Roger" <roger.pau@citrix.com>,
	"Nakajima, Jun" <jun.nakajima@intel.com>
Subject: RE: [PATCH 1/2] VMX/cpu-policy: check availability of RDTSCP and
 INVPCID
Thread-Topic: [PATCH 1/2] VMX/cpu-policy: check availability of RDTSCP and
 INVPCID
Thread-Index: AQHZeD7CR06R3EJga0ikV9LB8IqFva9sK3uQ
Date: Fri, 26 May 2023 04:53:40 +0000
Message-ID: <BN9PR11MB5276D6B7DEB5B1EA91F3B0498C479@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <ea294e55-f543-640e-7b12-777941ac4500@suse.com>
 <cae19837-31f6-9fb3-5c90-37aaf8920594@suse.com>
In-Reply-To: <cae19837-31f6-9fb3-5c90-37aaf8920594@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: intel.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, April 26, 2023 8:58 PM
>
> Both have separate enable bits, which are optional. While on real
> hardware we can perhaps expect these VMX controls to be available if
> (and only if) the base CPU feature is available, when running
> virtualized ourselves this may not be the case.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


From xen-devel-bounces@lists.xenproject.org Fri May 26 04:54:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 04:54:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539944.841306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2PSu-0000zw-C1; Fri, 26 May 2023 04:54:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539944.841306; Fri, 26 May 2023 04:54:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2PSu-0000zh-8g; Fri, 26 May 2023 04:54:12 +0000
Received: by outflank-mailman (input) for mailman id 539944;
 Fri, 26 May 2023 04:54:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r+8a=BP=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1q2PSs-0000yE-VA
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 04:54:10 +0000
Received: from mga17.intel.com (mga17.intel.com [192.55.52.151])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5747a850-fb81-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 06:54:06 +0200 (CEST)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 May 2023 21:54:03 -0700
Received: from fmsmsx602.amr.corp.intel.com ([10.18.126.82])
 by orsmga001.jf.intel.com with ESMTP; 25 May 2023 21:54:03 -0700
Received: from fmsmsx611.amr.corp.intel.com (10.18.126.91) by
 fmsmsx602.amr.corp.intel.com (10.18.126.82) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 25 May 2023 21:54:02 -0700
Received: from fmsmsx610.amr.corp.intel.com (10.18.126.90) by
 fmsmsx611.amr.corp.intel.com (10.18.126.91) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 25 May 2023 21:54:02 -0700
Received: from fmsedg602.ED.cps.intel.com (10.1.192.136) by
 fmsmsx610.amr.corp.intel.com (10.18.126.90) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Thu, 25 May 2023 21:54:02 -0700
Received: from NAM12-BN8-obe.outbound.protection.outlook.com (104.47.55.174)
 by edgegateway.intel.com (192.55.55.71) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Thu, 25 May 2023 21:54:02 -0700
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by IA1PR11MB7812.namprd11.prod.outlook.com (2603:10b6:208:3f7::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.18; Fri, 26 May
 2023 04:54:00 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::dd3:4277:e348:9e00]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::dd3:4277:e348:9e00%3]) with mapi id 15.20.6433.015; Fri, 26 May 2023
 04:54:00 +0000
X-Inumbo-ID: 5747a850-fb81-11ed-8611-37d641c3527e
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "Beulich, Jan" <JBeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>, "Pau Monné, Roger" <roger.pau@citrix.com>,
	"Nakajima, Jun" <jun.nakajima@intel.com>
Subject: RE: [PATCH 2/2] VMX/cpu-policy: disable RDTSCP and INVPCID insns as
 needed
Thread-Topic: [PATCH 2/2] VMX/cpu-policy: disable RDTSCP and INVPCID insns as
 needed
Thread-Index: AQHZeD7g5l4KMzlp30WzRh+c548YVq9sK54g
Date: Fri, 26 May 2023 04:54:00 +0000
Message-ID: <BN9PR11MB5276DC175A1696A92B7CE6F28C479@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <ea294e55-f543-640e-7b12-777941ac4500@suse.com>
 <fa9e9ece-df60-e249-7cc2-ad3af50d26bb@suse.com>
In-Reply-To: <fa9e9ece-df60-e249-7cc2-ad3af50d26bb@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: intel.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, April 26, 2023 8:58 PM
>
> When either feature is available in hardware, but disabled for a guest,
> the respective insn would better cause #UD if attempted to be used.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


From xen-devel-bounces@lists.xenproject.org Fri May 26 04:56:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 04:56:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539953.841317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2PUb-0001qo-QZ; Fri, 26 May 2023 04:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539953.841317; Fri, 26 May 2023 04:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2PUb-0001qh-Lm; Fri, 26 May 2023 04:55:57 +0000
Received: by outflank-mailman (input) for mailman id 539953;
 Fri, 26 May 2023 04:55:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r+8a=BP=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1q2PUa-0001qT-My
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 04:55:56 +0000
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 97532b78-fb81-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 06:55:55 +0200 (CEST)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 May 2023 21:55:51 -0700
Received: from fmsmsx601.amr.corp.intel.com ([10.18.126.81])
 by orsmga006.jf.intel.com with ESMTP; 25 May 2023 21:55:51 -0700
Received: from fmsmsx611.amr.corp.intel.com (10.18.126.91) by
 fmsmsx601.amr.corp.intel.com (10.18.126.81) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 25 May 2023 21:55:50 -0700
Received: from fmsmsx610.amr.corp.intel.com (10.18.126.90) by
 fmsmsx611.amr.corp.intel.com (10.18.126.91) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 25 May 2023 21:55:50 -0700
Received: from fmsedg602.ED.cps.intel.com (10.1.192.136) by
 fmsmsx610.amr.corp.intel.com (10.18.126.90) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Thu, 25 May 2023 21:55:50 -0700
Received: from NAM12-BN8-obe.outbound.protection.outlook.com (104.47.55.176)
 by edgegateway.intel.com (192.55.55.71) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Thu, 25 May 2023 21:55:49 -0700
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by IA1PR11MB7812.namprd11.prod.outlook.com (2603:10b6:208:3f7::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.18; Fri, 26 May
 2023 04:55:46 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::dd3:4277:e348:9e00]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::dd3:4277:e348:9e00%3]) with mapi id 15.20.6433.015; Fri, 26 May 2023
 04:55:45 +0000
X-Inumbo-ID: 97532b78-fb81-11ed-b230-6b7b168915f2
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Beulich, Jan"
	<JBeulich@suse.com>, "Pau Monné, Roger"
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, "Nakajima, Jun"
	<jun.nakajima@intel.com>
Subject: RE: [PATCH] x86: Use printk_once() instead of opencoding it
Thread-Topic: [PATCH] x86: Use printk_once() instead of opencoding it
Thread-Index: AQHZg3Zrj6JMEuxy80yDuy8o0bJaj69sFa4g
Date: Fri, 26 May 2023 04:55:45 +0000
Message-ID: <BN9PR11MB527686FE7AD93C1A4B1991168C479@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20230510193357.12278-1-andrew.cooper3@citrix.com>
In-Reply-To: <20230510193357.12278-1-andrew.cooper3@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BN9PR11MB5276:EE_|IA1PR11MB7812:EE_
x-ms-office365-filtering-correlation-id: db799f96-ab1a-4654-35a8-08db5da57772
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN9PR11MB5276.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: db799f96-ab1a-4654-35a8-08db5da57772
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 May 2023 04:55:45.7425
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: lhc/0F0eRNiREZeNZGcmnUu2+RbvazXgl22tt6HMOTHyMkv4uLV45KZERu0JSgYK/7bs7TNiXsE96V8x+dgrRw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR11MB7812
X-OriginatorOrg: intel.com

> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: Thursday, May 11, 2023 3:34 AM
> 
> Technically our helper post-dates all of these examples, but it's good cleanup
> nevertheless.  None of these examples should be using fully locked
> test_and_set_bool() in the first place.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


From xen-devel-bounces@lists.xenproject.org Fri May 26 04:56:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 04:56:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539956.841326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2PVF-0002N6-2S; Fri, 26 May 2023 04:56:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539956.841326; Fri, 26 May 2023 04:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2PVE-0002Mz-Uu; Fri, 26 May 2023 04:56:36 +0000
Received: by outflank-mailman (input) for mailman id 539956;
 Fri, 26 May 2023 04:56:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r+8a=BP=intel.com=kevin.tian@srs-se1.protection.inumbo.net>)
 id 1q2PVD-0002Mr-RF
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 04:56:35 +0000
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aeaa1923-fb81-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 06:56:33 +0200 (CEST)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 May 2023 21:56:31 -0700
Received: from fmsmsx601.amr.corp.intel.com ([10.18.126.81])
 by orsmga006.jf.intel.com with ESMTP; 25 May 2023 21:56:31 -0700
Received: from fmsmsx610.amr.corp.intel.com (10.18.126.90) by
 fmsmsx601.amr.corp.intel.com (10.18.126.81) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Thu, 25 May 2023 21:56:30 -0700
Received: from fmsedg602.ED.cps.intel.com (10.1.192.136) by
 fmsmsx610.amr.corp.intel.com (10.18.126.90) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Thu, 25 May 2023 21:56:30 -0700
Received: from NAM12-BN8-obe.outbound.protection.outlook.com (104.47.55.170)
 by edgegateway.intel.com (192.55.55.71) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Thu, 25 May 2023 21:56:30 -0700
Received: from BN9PR11MB5276.namprd11.prod.outlook.com (2603:10b6:408:135::18)
 by IA1PR11MB7812.namprd11.prod.outlook.com (2603:10b6:208:3f7::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.18; Fri, 26 May
 2023 04:56:28 +0000
Received: from BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::dd3:4277:e348:9e00]) by BN9PR11MB5276.namprd11.prod.outlook.com
 ([fe80::dd3:4277:e348:9e00%3]) with mapi id 15.20.6433.015; Fri, 26 May 2023
 04:56:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aeaa1923-fb81-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1685076993; x=1716612993;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=xLpSRQs2/RbFKHEh6YqMMtyheTjAvjGhS1osYhd82n0=;
  b=b0Fun9qbSmGi5CHOhPeHg5Exa+EltIJhBKh4ThAXNydX42lRY/zdjYnE
   mMhBU7eDlBCapD0zFRqCsHOtCFi8619U0+QD0QzSA6mid3VeNAFq/AQ8f
   fuZbbgJDgH/uXYk6fQiglQBgEwsdOgVNi1Zgj6X9cpA7cQraGqZkWaez2
   d38+OFG2s6J0KfBP/fmxiZ1iS0AivdG5ACs7C0+k4WIBIsO9uctvhl0Oh
   ES0Bloype70DulzrFjQvDKj9GYko1AtdEDJMvQBATYqJ51B9aftGy31x2
   F2fzLss5vvsVBe9ElKfvbS3rmGmK2Tgf9Ub5KN4wBWGkevMPhOJoxUSsi
   Q==;
X-IronPort-AV: E=McAfee;i="6600,9927,10721"; a="356499480"
X-IronPort-AV: E=Sophos;i="6.00,193,1681196400"; 
   d="scan'208";a="356499480"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10721"; a="682598750"
X-IronPort-AV: E=Sophos;i="6.00,193,1681196400"; 
   d="scan'208";a="682598750"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y/RL2g9MkeKtdvgYB4yw+q42QTODe44bN+El7GIZxlwY/c49F2cYE363KFtskC92+ZrR1zIpY5Vdhph9bgYmwvNQxtL84JZiJXaIZ92H5D8lhp4RRe7KFUEDF9thBZ1W938yLRUyNl5Nq5TVk8dYKNCcCOFmre47LwBbOCA9WiUmw/MKFXgoscHDcSX6H6qwM/D4T/85o1MKPA3dui0Irtl5+UfNeAUn8WhJxMlb2oftPNUuuXFRlyryCTvaK3pFapdUZMvHP5rqI79fmlEZOiVFbu+DqcGm842OKQinPcdchqf08XRySkJjorJfeJaIJ5+7tbbHBeFDSzTzfhNd0A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xLpSRQs2/RbFKHEh6YqMMtyheTjAvjGhS1osYhd82n0=;
 b=gXYnkppxLs61M6PXK744Sv3zrMtViX7uqXDHyb72B6vMCdx7gCD4B72iBL0oPhDlCmC7Tr9PUvU3ndd0cp7RiQULLChvaHOk4ISOFjikGCtp8y1y77cEdSyCVV1mc3mt70VacqBZXfzH2yskmyvJF5+3bLnDTrd1R392xn7IGWJroZt7B1WeYyHK9pSFl+b5Dd2TMAirWAg1/GqluN/9u9bJASkkiWWD2zvXWAbq/T755ek2mma+AMsn45ogjiL+rD/pP7VINJCAQAqJNANgdfU4RbqBXAXyesjd81HiipHGpcdcsbwrTTOHdVqXZlBn3geKar2cXlgqlI9/W7swWQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Beulich, Jan"
	<JBeulich@suse.com>, "Pau Monné, Roger"
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, "Nakajima, Jun"
	<jun.nakajima@intel.com>
Subject: RE: [PATCH 2/4] x86/vtx: Remove opencoded MSR_ARCH_CAPS check
Thread-Topic: [PATCH 2/4] x86/vtx: Remove opencoded MSR_ARCH_CAPS check
Thread-Index: AQHZiAZCK4QzVtVhyE6LhJcXivukV69sDMAA
Date: Fri, 26 May 2023 04:56:28 +0000
Message-ID: <BN9PR11MB52761BEBB12A7D42812EDAA08C479@BN9PR11MB5276.namprd11.prod.outlook.com>
References: <20230516145334.1271347-1-andrew.cooper3@citrix.com>
 <20230516145334.1271347-3-andrew.cooper3@citrix.com>
In-Reply-To: <20230516145334.1271347-3-andrew.cooper3@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BN9PR11MB5276:EE_|IA1PR11MB7812:EE_
x-ms-office365-filtering-correlation-id: 15434de6-885e-4356-7359-08db5da5910d
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BN9PR11MB5276.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 15434de6-885e-4356-7359-08db5da5910d
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 May 2023 04:56:28.7516
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 55qL7+RyMIrxWUK1KExL9yjn93fOXGCFcdOKkGyIq0Dv/3qJmlRe8Ro23SB5MyBgdxrKdiqy0MbXdPBG+SjiHA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR11MB7812
X-OriginatorOrg: intel.com

> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: Tuesday, May 16, 2023 10:54 PM
> 
> MSR_ARCH_CAPS data is now included in featureset information.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


From xen-devel-bounces@lists.xenproject.org Fri May 26 05:48:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 05:48:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539914.841336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2QIm-00089z-R9; Fri, 26 May 2023 05:47:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539914.841336; Fri, 26 May 2023 05:47:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2QIm-00089s-OO; Fri, 26 May 2023 05:47:48 +0000
Received: by outflank-mailman (input) for mailman id 539914;
 Fri, 26 May 2023 02:36:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=muQi=BP=linux.microsoft.com=jamorris@srs-se1.protection.inumbo.net>)
 id 1q2NJU-0001ga-TG
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 02:36:20 +0000
Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 1825963e-fb6e-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 04:36:19 +0200 (CEST)
Received: by linux.microsoft.com (Postfix, from userid 1001)
 id 1932320FBE98; Thu, 25 May 2023 19:36:18 -0700 (PDT)
Received: from localhost (localhost [127.0.0.1])
 by linux.microsoft.com (Postfix) with ESMTP id 1639D30013A9;
 Thu, 25 May 2023 19:36:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1825963e-fb6e-11ed-b230-6b7b168915f2
DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 1932320FBE98
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com;
	s=default; t=1685068578;
	bh=QsI2Tqrh7Dq8rATEw0dJAIe6Y8MEWIZaKj+0i4m1b94=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rc/phama1ktXAiy+vs2NVCSXABwbcfYCYzvoNhWo6b4QznrBTZJyht2XFXGcOWpXT
	 eN3vRQ77IyNVtRNsEt3dUwkwOKfWmDDqOGFtsKRkzHP0DTyYh3PuCH9zyVE+5kTU93
	 rssQ3lVi3y4biGwhIuiBnmIaxdqEKLD8WYOLFyno=
Date: Thu, 25 May 2023 19:36:18 -0700 (PDT)
From: James Morris <jamorris@linux.microsoft.com>
To: Mickaël Salaün <mic@digikod.net>
cc: Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, 
    "H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>, 
    Kees Cook <keescook@chromium.org>, Paolo Bonzini <pbonzini@redhat.com>, 
    Sean Christopherson <seanjc@google.com>, 
    Thomas Gleixner <tglx@linutronix.de>, 
    Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, 
    Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>, 
    John Andersen <john.s.andersen@intel.com>, 
    Liran Alon <liran.alon@oracle.com>, 
    "Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>, 
    Marian Rotariu <marian.c.rotariu@gmail.com>, 
    Mihai Donțu <mdontu@bitdefender.com>, 
    Nicușor Cîțu <nicu.citu@icloud.com>, 
    Rick Edgecombe <rick.p.edgecombe@intel.com>, 
    Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>, 
    Zahra Tarkhani <ztarkhani@microsoft.com>, 
    Ștefan Șicleru <ssicleru@bitdefender.com>, 
    dev@lists.cloudhypervisor.org, kvm@vger.kernel.org, 
    linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org, 
    linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org, 
    qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org, 
    x86@kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
In-Reply-To: <20230505152046.6575-1-mic@digikod.net>
Message-ID: <17f62cb1-a5de-2020-2041-359b8e96b8c0@linux.microsoft.com>
References: <20230505152046.6575-1-mic@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

[Side topic]

Would folks be interested in a Linux Plumbers Conference MC on this 
topic generally, across different hypervisors, VMMs, and architectures?

If so, please let me know who the key folk would be and we can try writing 
up an MC proposal.



-- 
James Morris
<jamorris@linux.microsoft.com>


From xen-devel-bounces@lists.xenproject.org Fri May 26 06:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 06:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539967.841346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2R6o-00057F-LR; Fri, 26 May 2023 06:39:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539967.841346; Fri, 26 May 2023 06:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2R6o-000578-IW; Fri, 26 May 2023 06:39:30 +0000
Received: by outflank-mailman (input) for mailman id 539967;
 Fri, 26 May 2023 06:39:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7mHh=BP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2R6o-000570-6o
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 06:39:30 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on061b.outbound.protection.outlook.com
 [2a01:111:f400:fe02::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e3f2fd0-fb90-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 08:39:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7247.eurprd04.prod.outlook.com (2603:10a6:800:1a2::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17; Fri, 26 May
 2023 06:39:24 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Fri, 26 May 2023
 06:39:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e3f2fd0-fb90-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WIFaFhV+3QKJxn5Mq+71L9sFoQ8u15DtJFRqJY+GjIgYAIyhxIlt7n/ELHLmtPQ6ldBiRruiKzb64DcyJLAuN18PcScOezvBl+H23zDtD3bEAM9IGrBn0ETUtGuD/0pb9erra6fuNNrxfw0eAbsh/l2+rieYcji5O0zWcBYCR/PMBrK679XDYA1soziExnf4R4Li98NW+HhwZEJWBsCOH+n7B/vx4AcTEFqwHcIOeUddnNZwrptxASE+5ZC9GRUN85jtsgcsPncPBpJcx5KGFIVU7FWQ9qtHaNogu3W8nWHQUBvYY/Vaw+CBUhXfBA9HapgQVzRK7ybQYqkXFLC+9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XtSzzzpng+FxZ8tPNm+1f4rXVUhvSlo2x0OWjzc0ykI=;
 b=AytSRLiZxskjmH3onJC1BYkbZxTU4OYi9isfNqv3GyGWB/fi3Oc2yHVWXdAwjgECXQkPzNhtGx9xlqjnAavp32To+qWAgKDbjmGrqA8NI3do4ozVlDkUZz6T/vB3p2jS8PEPug+jZwe1QPfaCS+MhuZuWKRBHE/csyECobagGqI70UH+wiifa7MQA4w3hT3P9rak/AgRriYqiMm+b3p4GDFu8VUF1kq5UV3a2dUTD4D05gLJZXobT8Q+0miMXy6zBmBAhw8byEDkvFq6/1ImhI1z7OQAIj1/4S8PXb6oziZ7lymHK9p2X+pGyzA9F7DzrZp8sUKo5ymXzhs111wEhA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XtSzzzpng+FxZ8tPNm+1f4rXVUhvSlo2x0OWjzc0ykI=;
 b=m+Ejf1/mewFSt6pF5chihC+TgqnFPWGat83xxYeONCmfw29R+fRkTTzx/fsrUIlu2a5iwkGil5Nl55fw8Lpm+iIRRWxaVpg1PSKxDVGNyE3TmoJzG3suAI6Sej693OTZtV5I596DzysnIgHbfTIS5zXkHXgh9Mu4bQO7PYokIotrHEFixHVB/YQLftzZXacL4ygXCGw+Yo6ANZDKzOolntU1KD9kZ+aQGlRZMmRwAKfWD1TlT/FzLm7TwxfUG54HW6BOxCZTZFIdq8Aq6ARxyWk0g82X8Llv3Dol+4f2oigWuzmy5spmVkh+IES/ot+xpBzDCwIqt6Y3DC14xELfgQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <22f1e765-891d-ef2d-01b5-e9dfe6ca895b@suse.com>
Date: Fri, 26 May 2023 08:39:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4dmJuzNVUE5UIY@Air-de-Roger>
 <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
 <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop>
 <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com>
 <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0183.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7247:EE_
X-MS-Office365-Filtering-Correlation-Id: dbcfb854-f985-4a1f-550a-08db5db3f1ce
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;

On 25.05.2023 21:24, Stefano Stabellini wrote:
> On Thu, 25 May 2023, Jan Beulich wrote:
>> On 25.05.2023 01:37, Stefano Stabellini wrote:
>>> On Wed, 24 May 2023, Jan Beulich wrote:
>>>>>> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
>>>>>>      modify_bars() to consistently respect BARs of hidden devices while
>>>>>>      setting up "normal" ones (i.e. to avoid as much as possible the
>>>>>>      "continue" path introduced here), setting up of the former may want
>>>>>>      doing first.
>>>>>
>>>>> But BARs of hidden devices should be mapped into dom0 physmap?
>>>>
>>>> Yes.
>>>
>>> The BARs would be mapped read-only (not read-write), right? Otherwise we
>>> let dom0 access devices that belong to Xen, which doesn't seem like a
>>> good idea.
>>>
>>> But even if we map the BARs read-only, what is the benefit of mapping
>>> them to Dom0? If Dom0 loads a driver for it and the driver wants to
>>> initialize the device, the driver will crash because the MMIO region is
>>> read-only instead of read-write, right?
>>>
>>> How does this device hiding work for dom0? How does dom0 know not to
>>> access a device that is present on the PCI bus but is used by Xen?
>>
>> None of these are new questions - this has all been this way for PV Dom0,
>> and so far we've limped along quite okay. That's not to say that we
>> shouldn't improve things if we can, but that first requires ideas as to
>> how.
> 
> For PV, that was OK because PV requires extensive guest modifications
> anyway. We only run Linux and a few BSDs as Dom0. So, making the interface
> cleaner and reducing guest changes is nice-to-have but not critical.
> 
> For PVH, this is different. One of the top reasons for AMD to work on
> PVH is to enable arbitrary non-Linux OSes as Dom0 (when paired with
> dom0less/hyperlaunch). It could be anything from Zephyr to a
> proprietary RTOS like VxWorks. Minimal guest changes for advanced
> features (e.g. Dom0 S3) might be OK but in general I think we should aim
> at (almost) zero guest changes. On ARM, it is already the case (with some
> non-upstream patches for dom0less PCI).
> 
> For this specific patch, which is necessary to enable PVH on AMD x86 in
> gitlab-ci, we can do anything we want to make it move faster. But
> medium/long term I think we should try to make non-Xen-aware PVH Dom0
> possible.

I don't think Linux could boot as PVH Dom0 without any Xen awareness, so
it's hard to see how other OSes might. What you're after looks more like
an HVM Dom0 to me, where it is unclear where the external emulator would
run (in a stubdom maybe, which might be possible to arrange via the
dom0less way of creating boot-time DomU-s) and how it would obtain any
necessary xenstore-based information.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 26 06:45:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 06:45:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539971.841356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RCH-0006Yx-9X; Fri, 26 May 2023 06:45:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539971.841356; Fri, 26 May 2023 06:45:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RCH-0006Yq-5k; Fri, 26 May 2023 06:45:09 +0000
Received: by outflank-mailman (input) for mailman id 539971;
 Fri, 26 May 2023 06:45:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7mHh=BP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2RCF-0006Yk-Nl
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 06:45:07 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on061a.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::61a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d98bf52d-fb90-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 08:45:06 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7247.eurprd04.prod.outlook.com (2603:10a6:800:1a2::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17; Fri, 26 May
 2023 06:45:03 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Fri, 26 May 2023
 06:45:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d98bf52d-fb90-11ed-b230-6b7b168915f2
Message-ID: <03127618-13a2-872e-82e9-b23ce8095f70@suse.com>
Date: Fri, 26 May 2023 08:45:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4dmJuzNVUE5UIY@Air-de-Roger>
 <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
 <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop>
 <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com>
 <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2305251226000.44000@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305251226000.44000@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0068.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 25.05.2023 21:32, Stefano Stabellini wrote:
> Like I wrote, personally I am happy with whatever gets us to have the PVH
> test in gitlab-ci faster.
> 
> However, on the specific problem of PCI devices used by Xen and how to
> deal with them for Dom0 PVH, I think they should be completely hidden.
> Hidden in the sense that they don't appear on the Dom0 PCI bus. If the
> hidden device is a function of a multi-function PCI device, then the
> entire multi-function PCI device should be hidden.
> 
> I don't think this case is very important because devices used by Xen
> are timers, IOMMUs, UARTs,

... USB debug ports (EHCI, XHCI), ...

> all devices that typically are not multi-function,

except for the ones added. Furthermore see video_endboot() for a case
of also hiding the VGA device, which isn't unlikely to have secondary
functions (sound controllers are not uncommon). Hence ...

> so it is OK to be extra careful and remove the entire
> device from Dom0 in the odd case that the device is both multi-function
> and only partially used by Xen. This is what I would do for Xen on ARM
> too.

... at best I would see this as a non-default mode of operation. Of
course we could also play more funny games with vPCI, like surfacing
a "stub" device in place of a hidden one, so the other functions can
still be found.
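In PCI config-space terms, the multi-function property discussed above is a single bit in the Header Type register; a minimal sketch of the "hide the whole device" policy (register layout per the PCI spec; the helper names are hypothetical, not Xen code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit 7 of the Header Type register (config space offset 0x0e) marks a
 * multi-function PCI device. */
#define PCI_HEADER_TYPE    0x0e
#define PCI_MULTI_FUNCTION 0x80

static bool pci_is_multifunction(uint8_t header_type)
{
    return (header_type & PCI_MULTI_FUNCTION) != 0;
}

/* Per the policy discussed above: if a hidden function belongs to a
 * multi-function device, hide all eight possible functions of the slot,
 * not just the one Xen uses. */
static unsigned int functions_to_hide(uint8_t header_type)
{
    return pci_is_multifunction(header_type) ? 8 : 1;
}
```

The alternative Jan mentions, surfacing a stub in place of the hidden function, would instead leave the sibling functions enumerable.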

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 26 07:04:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 07:04:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539975.841366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RUX-0000cL-Rp; Fri, 26 May 2023 07:04:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539975.841366; Fri, 26 May 2023 07:04:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RUX-0000cE-O6; Fri, 26 May 2023 07:04:01 +0000
Received: by outflank-mailman (input) for mailman id 539975;
 Fri, 26 May 2023 07:04:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7mHh=BP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2RUW-0000c8-IH
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 07:04:00 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20616.outbound.protection.outlook.com
 [2a01:111:f400:fe16::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ce32203-fb93-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 09:03:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB7606.eurprd04.prod.outlook.com (2603:10a6:20b:23e::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.27; Fri, 26 May
 2023 07:03:57 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Fri, 26 May 2023
 07:03:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ce32203-fb93-11ed-b230-6b7b168915f2
Message-ID: <c143dc69-20bd-da87-3d01-d405c358fc67@suse.com>
Date: Fri, 26 May 2023 09:03:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Stefano Stabellini <stefano.stabellini@amd.com>,
 xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 xenia.ragiadakou@amd.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop>
 <ZGzFnE2w/YqYT35c@Air-de-Roger> <ZGzSnu8m/IqjmyHx@Air-de-Roger>
 <alpine.DEB.2.22.394.2305241646520.44000@ubuntu-linux-20-04-desktop>
 <6790d5ae-9742-f5f3-bd8c-62602ee9cb1d@suse.com>
 <alpine.DEB.2.22.394.2305251248000.44000@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305251248000.44000@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0094.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 25.05.2023 21:54, Stefano Stabellini wrote:
> On Thu, 25 May 2023, Jan Beulich wrote:
>> On 25.05.2023 01:51, Stefano Stabellini wrote:
>>> xen/irq: fix races between send_cleanup_vector and _clear_irq_vector
>>
>> This title is, I'm afraid, already misleading. No such race can occur
>> afaict, as both callers of _clear_irq_vector() acquire the IRQ
>> descriptor lock first, and irq_complete_move() (the sole caller of
>> send_cleanup_vector()) is only ever invoked as or by an ->ack()
>> hook, which in turn is only invoked with, again, the descriptor lock
>> held.
> 
> Yes I see that you are right about the locking, and thank you for taking
> the time to look into it.
> 
> One last question: could it be that a second interrupt arrives while
> ->ack() is being handled?  do_IRQ() is running with interrupts disabled?

It is, at least as far as the invocation of ->ack() is concerned. Else
the locking scheme would be broken. You may note that around the
->handler() invocation we enable interrupts.
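The ordering described here can be modelled in a few lines, purely as a sketch of the do_IRQ() flow (the field and function names are illustrative, not actual Xen code): the descriptor lock is held and interrupts stay disabled across ->ack(), while interrupts are re-enabled only around ->handler().

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the do_IRQ() flow discussed above. */
struct irq_desc {
    bool lock_held;                /* models the IRQ descriptor spinlock */
    bool irqs_enabled;             /* models the local interrupt flag */
    bool acked_with_lock;          /* state observed inside ->ack() */
    bool handler_ran_with_irqs_on; /* state observed inside ->handler() */
};

static void ack(struct irq_desc *d)
{
    /* ->ack() always runs with the lock held and interrupts off. */
    d->acked_with_lock = d->lock_held && !d->irqs_enabled;
}

static void handler(struct irq_desc *d)
{
    d->handler_ran_with_irqs_on = d->irqs_enabled;
}

void do_irq_model(struct irq_desc *d)
{
    d->irqs_enabled = false; /* entered with interrupts disabled */
    d->lock_held = true;     /* spin_lock(&desc->lock) */
    ack(d);                  /* desc->handler->ack(desc) */

    d->irqs_enabled = true;  /* interrupts enabled only for the action */
    handler(d);              /* action->handler(...) */
    d->irqs_enabled = false;

    d->lock_held = false;    /* spin_unlock(&desc->lock) */
}
```

This is why a second interrupt cannot race ->ack(): it could only be taken during the ->handler() window, where the cleanup paths all re-acquire the descriptor lock first.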

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 26 07:29:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 07:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539982.841385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RtO-0003UK-6S; Fri, 26 May 2023 07:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539982.841385; Fri, 26 May 2023 07:29:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RtO-0003UD-2r; Fri, 26 May 2023 07:29:42 +0000
Received: by outflank-mailman (input) for mailman id 539982;
 Fri, 26 May 2023 07:29:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xc68=BP=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q2RtM-0003Ci-Nj
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 07:29:40 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.160]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 11cfd26e-fb97-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 09:29:37 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4Q7TV64U
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 26 May 2023 09:29:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11cfd26e-fb97-11ed-8611-37d641c3527e
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 1/3] xentrace: allow xentrace to write to stdout
Date: Fri, 26 May 2023 09:29:14 +0200
Message-Id: <20230526072916.7424-2-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230526072916.7424-1-olaf@aepfle.de>
References: <20230526072916.7424-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

The output file argument is optional. When it is omitted, xentrace is supposed
to write to stdout - unless stdout is a tty, which is checked before using it.
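
A minimal sketch of the intended fallback behavior (this is illustrative only,
not the xentrace code; choose_outfd and its parameters are hypothetical names):

```c
#include <fcntl.h>
#include <unistd.h>

/* Pick the fd to write trace data to.  outfile == NULL models "no output
 * file argument given": fall back to fallback_fd, but refuse if it is a
 * terminal, since dumping binary trace records onto a tty is never useful.
 * Returns an fd, or -1 on refusal. */
static int choose_outfd(const char *outfile, int fallback_fd)
{
    if ( outfile )
        return open(outfile, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if ( isatty(fallback_fd) )
        return -1;              /* refuse: fallback is a tty */

    return fallback_fd;
}
```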

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/xentrace/xentrace.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/tools/xentrace/xentrace.c b/tools/xentrace/xentrace.c
index 864e30d50c..b81abe8a51 100644
--- a/tools/xentrace/xentrace.c
+++ b/tools/xentrace/xentrace.c
@@ -1152,11 +1152,9 @@ static void parse_args(int argc, char **argv)
         }
     }
 
-    /* get outfile (required last argument) */
-    if (optind != (argc-1))
-        usage();
-
-    opts.outfile = argv[optind];
+    /* get outfile (optional last argument) */
+    if (argc > optind)
+        opts.outfile = argv[optind];
 }
 
 /* *BSD has no O_LARGEFILE */


From xen-devel-bounces@lists.xenproject.org Fri May 26 07:29:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 07:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539981.841375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RtD-0003Cv-V2; Fri, 26 May 2023 07:29:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539981.841375; Fri, 26 May 2023 07:29:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RtD-0003Co-SN; Fri, 26 May 2023 07:29:31 +0000
Received: by outflank-mailman (input) for mailman id 539981;
 Fri, 26 May 2023 07:29:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xc68=BP=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q2RtC-0003Ci-Cb
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 07:29:30 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.217]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b9c4b39-fb97-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 09:29:27 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4Q7TQ64S
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Fri, 26 May 2023 09:29:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b9c4b39-fb97-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1685086166; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=Ur1znmznv/irok7EsjxiiCamUSWWrFTLgNz8u5VernbZYGN0Okl9C1gnIrlnkiBvVn
    fdoW2F2efsmHLD4RaFY5sJRhSQ1Q4goxHqTEv/hoo5ANm1rcRU6iziVWNxPKPk269Auc
    5t9SGupCrLNTIDQkBdg2KQqMvjQ6YSME8RkPg81h3jTAzD0DUAu+0PjvTYSdsdFd2zaI
    0FMBegg9rx6tYr8MUS0WckRflEYbChV5MofZMJR5IaSWzId+Qc8sVVSGfkRR9WUC+keO
    BWEVDRvGBg6IAnDc+oiM7ymsQyds2j/zPlc6wcDvyoEfBozsBKCey3XAM5aIigRw4ysz
    2Hqg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1685086166;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:To:From:Cc:Date:From:Subject:Sender;
    bh=3EQ+rt5wqtPkqaK/u6suMbJJzkne6HyzF1i/T5IWaN8=;
    b=tKAYbzVxa/B3dlH8WooR0I+9XgQfd9zk5p4IigHpl0o0LT2DvMJdmIvHWOvj8tbKgf
    JPvViO3N6ZV99qUSiwHhkW5qoUKEbGfUb5lM/jLJ3S9nfHZL4IZ1m3wOFZr3PMnVrDb4
    IHmE7iKTNNrO9JPi84ONBtUI7EIFuQqeRXaasKgHGjtZF/ukEb5vI4sQtq9v8xwIuZaQ
    10MhcnZajeLzk95tKgfVpgH3epxIbxa2dCFo93KdjjEhUT3sz3q5mw198PUb13ib5H8W
    xHVP/npHiNFBDSm8qNDd/kSObSMamQy0/ZYY/1ttDo0Yy2ByVxZ3jX4v+7+AM0rc0ban
    R4ug==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1685086166;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:To:From:Cc:Date:From:Subject:Sender;
    bh=3EQ+rt5wqtPkqaK/u6suMbJJzkne6HyzF1i/T5IWaN8=;
    b=QKj6/ZFxYIRP1AyrInjOzRMyeqi2DyUWRVkTYD5FLny6TFZSILXyDNun12TULb7Utf
    XT2QLc7AN4ftN9nN9+a1M/zbNcqLc1mfPUZypm+C2kXLsrimfcOxL0+2T5dAKz6i7aAG
    B/VyBaVd9LhkLDAPHkmPWXysdEpH9Zbeuy3Ua1IGiHfdfXx6hyuGQJdXUrZ2I6YuXoL2
    O12YEM533cxGqp+1rkhDnJE1GbEQTYU2bYnkmsgIalNT8sZkUQseis0P8ksq3wOMSZUJ
    eGIouxVnw0y2pEAXMPehzcsIiq/YfQ7X/TMGEkehWGs+Jsa6vkyD5QOrH0yTrZ5Xnswi
    TGRg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1685086166;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:To:From:Cc:Date:From:Subject:Sender;
    bh=3EQ+rt5wqtPkqaK/u6suMbJJzkne6HyzF1i/T5IWaN8=;
    b=+R0vIN9palNWZjNGJtfDK42C7uco48jXQmslcFq71DvX0Z7hZwqHOwYnKVQE+ccu5D
    MJWPZezXNrF/oOdkroDA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xqFv7EJ0tgRX/vKfT/e8Ig6v0dNw4QAWpzMWrRQ=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v1 0/3] xentrace changes
Date: Fri, 26 May 2023 09:29:13 +0200
Message-Id: <20230526072916.7424-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

Olaf Hering (3):
  xentrace: allow xentrace to write to stdout
  xentrace: remove return value from monitor_tbufs
  xentrace: close output file in the function which opened it

 tools/xentrace/xentrace.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)



From xen-devel-bounces@lists.xenproject.org Fri May 26 07:29:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 07:29:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539983.841396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RtP-0003kA-Dc; Fri, 26 May 2023 07:29:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539983.841396; Fri, 26 May 2023 07:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RtP-0003k3-AX; Fri, 26 May 2023 07:29:43 +0000
Received: by outflank-mailman (input) for mailman id 539983;
 Fri, 26 May 2023 07:29:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xc68=BP=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q2RtO-0003UM-FD
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 07:29:42 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [85.215.255.52]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 12f7af00-fb97-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 09:29:41 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4Q7TX64X
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 26 May 2023 09:29:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12f7af00-fb97-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1685086173; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=l1i/SpN7IoXyLCxtKbRcE8qxemJPzqzhZiiabaWtd8WLn+qgtkSOJOsO07vC8PyCrH
    0tU5ArybFAiqgIrW3AyIPnYeZBX/ilDR+HtbCuUcphuFrpuYhCsD9TCQf9pmrNHdPyzO
    O4KMPY6v4oJAICXF853449xMDiyRgaQLIvC3Mgl6M7HI2fwAdySVQj+OOkcgEKdUcdLi
    TkPpyp1D+8CiENDUdERdN5tYYTp/kmxico2RCNYFfxKWpPgp6f0iaibJbH2+ZW36+mm9
    wa0TtLEe07LgMAHTZjySnonJU9xwv9RCcGXsL/MFskehM0QxxUTSfITeVNQwxSh72DbG
    VK9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1685086173;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=WpoeBuGWahQjx/3EeADqCoRxafSHjWOb2yRv/RTe6Uo=;
    b=CNJ3dvKAoMGNLpiIxAvXaQHTpNikEjPtwMG9JacnTIOf/iKgd7MdCIEQ1RSANa5M1y
    E8zfs63wZJKlHIhW9WLT5BKZTX+jl34aA4Pvx2AUztlO6y+0YdvBFTztLJOMm6/AQ2Cq
    5ZReVSJHjmKabF/skGUxoTW7TjaAdiDkuc2Gzq7iybW8yP09N4Hw5OyMRFXNxWqhn5Lp
    ILfnRosz5OCSKTLCPjsA7e8nLdOvhwQAbyG9ap/uwFiQfMm2tCz4zyiVVwZc9a90fyOH
    hYQ5CliZGoyip+WCXl8gLRXzMsSRnwXs5ZseL55Bbv1viJWAvDq/Hm9vXmBCBjfP7Mx4
    oL2g==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1685086173;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=WpoeBuGWahQjx/3EeADqCoRxafSHjWOb2yRv/RTe6Uo=;
    b=T5w+qTlNfVPYH50sWMUQyIOgq2tZbX8fDtIAT+Rq9j6GRIDMB6NL0pHmsWW5B2c3N+
    iCFt3SSZYaI6LMvvLK9eK00+S20chnJyigJRQXQZNB/fr4JWpNtuBCyoQzWmwLwewEP0
    5BXurV1dRGRfXu+kKLSR1kiCwo3YbnZm/if2AUwHq4pbYnj6RovPTeDjWLlwsJt/3Yam
    ea4+8selNmD6cfQwzfATMgc9IqXVp5SezkOy0km1OWyg5rDW1sKhCmRkd8Dwxo9ab2bC
    m3/V7M491uiPxP1jgo00pY3dw6tirenWI3NcrtRDHUkGi59kX0X5W8VyKTGF5qs+aTLx
    kJlg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1685086173;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=WpoeBuGWahQjx/3EeADqCoRxafSHjWOb2yRv/RTe6Uo=;
    b=0huKukAolae739l9QzADDZQTc3VDAUFQwHadcUOSh4Vlxh7dRZQHLrz913NCWi2egX
    09Ywh8DrvlUFlCLeVJDQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xqFv7EJ0tgRX/vKfT/e8Ig6v0dNw4QAWpzMWrRQ=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 2/3] xentrace: remove return value from monitor_tbufs
Date: Fri, 26 May 2023 09:29:15 +0200
Message-Id: <20230526072916.7424-3-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230526072916.7424-1-olaf@aepfle.de>
References: <20230526072916.7424-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

The function always returns zero.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/xentrace/xentrace.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/tools/xentrace/xentrace.c b/tools/xentrace/xentrace.c
index b81abe8a51..a073cab26d 100644
--- a/tools/xentrace/xentrace.c
+++ b/tools/xentrace/xentrace.c
@@ -668,7 +668,7 @@ static void wait_for_event_or_timeout(unsigned long milliseconds)
  * monitor_tbufs - monitor the contents of tbufs and output to a file
  * @logfile:       the FILE * representing the file to log to
  */
-static int monitor_tbufs(void)
+static void monitor_tbufs(void)
 {
     int i;
 
@@ -795,8 +795,6 @@ static int monitor_tbufs(void)
     free(data);
     /* don't need to munmap - cleanup is automatic */
     close(outfd);
-
-    return 0;
 }
 
 
@@ -1164,7 +1162,6 @@ static void parse_args(int argc, char **argv)
 
 int main(int argc, char **argv)
 {
-    int ret;
     struct sigaction act;
 
     opts.outfile = 0;
@@ -1226,9 +1223,9 @@ int main(int argc, char **argv)
     sigaction(SIGINT,  &act, NULL);
     sigaction(SIGALRM, &act, NULL);
 
-    ret = monitor_tbufs();
+    monitor_tbufs();
 
-    return ret;
+    return 0;
 }
 
 /*


From xen-devel-bounces@lists.xenproject.org Fri May 26 07:29:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 07:29:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.539984.841402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RtP-0003n6-PZ; Fri, 26 May 2023 07:29:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 539984.841402; Fri, 26 May 2023 07:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2RtP-0003mO-I4; Fri, 26 May 2023 07:29:43 +0000
Received: by outflank-mailman (input) for mailman id 539984;
 Fri, 26 May 2023 07:29:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xc68=BP=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q2RtO-0003Ci-GV
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 07:29:42 +0000
Received: from mo4-p01-ob.smtp.rzone.de (mo4-p01-ob.smtp.rzone.de
 [85.215.255.50]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 13aa6cd4-fb97-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 09:29:40 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4Q7Ta64Y
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 26 May 2023 09:29:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13aa6cd4-fb97-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1685086176; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=RAqu1yHSHxgRyjWc+mh6lftf8lCD1fl4SAElcfkzB2KkjlTPYR1UmDbkHSJbibZKn4
    CSxDWsO2eMx2cmAMEOmbT3svp/tOC+mxaa89Y06r2FOUS3vJvRAYWhEOTzjHaBCFbM59
    xQ8flx0m89L7ZEphGyRlsg26r/7nwmEmJpWjJ4fLTOE55tYrdCPGQgSPmnUFkURKI3Zj
    4RHcbMpsc+h6z3UquEe2k7UCECE/rB9U664p5n6zozLaDUP/45qpKb1lqSLxWXOZUsfR
    OKtkanyZLfXuI/2sCkgYozt3kRIJAdGmMIzB9cZjvB6raihuvzT6TuEPm4458avyXb80
    PtVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1685086176;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=S9fPMSqhE3ipoGZSZM4sNJyNsCfbZYCsqL8IUbcARXw=;
    b=aDOvrs9KzVLHyoDJ3Z+DvNde7Ddzq5Va3a1UbIrTZHuLnM4MzMrsvGQyUsYeJ/jNul
    wIrsTMXKxX2XLQ/uM9ZV/dP5KxiJEW48CjkFU5cQzU25D9n4YFDRoTsz3Q3EZlWu+f8X
    PMxJY/SsqTTXNwlz0ZaQ8UcgSTLnGoDURmAISgI4lQNFR3ZwI4nRXXXyqDZ9mnM98Cu8
    6IqzBa2zq2du2jWpLLAe8qaFeaLuCnRdhh3L5jdOkyCPAEvXtXeZtn81NSmh2xAcS21d
    LdSoljnzprqbMT1KzGgdCHvF9N5fSxdszYqKweq59DxOM4N5rAWDwHn15WbKA/MgLDq2
    2Tzg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1685086176;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=S9fPMSqhE3ipoGZSZM4sNJyNsCfbZYCsqL8IUbcARXw=;
    b=pWnFatpRSi7NIG8u86l6nkWRr1ZTeC1IktYz6Lrb/tHwg4aBAL2ByevDmk3W1lwhQ7
    C8MTIUtirgzlEZrd/oQb/Zn7tKOjqyGeNRwog0TC4Ja6uLaHdyft9aS1NHFuGLymZiCL
    4RMeycPNWCTlTFx1kRIBkXsAFXPs6IwX3AGj9RE7iJmxx5jop2CY3XiVHuxbZpWwsK5t
    jy55ihCzo41wr6gOuUESWAeDNFajU8r/psIlvKfC7ZXuabxrzKfyY7n6xL/thBOpTEJx
    ld78/uoFw4DYow7Hj3pSAzw+NN2gMW1cJEaPSW1YOxiCpwESAAXvfhrWusf1nHk996ra
    Gq8Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1685086176;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Cc:Date:
    From:Subject:Sender;
    bh=S9fPMSqhE3ipoGZSZM4sNJyNsCfbZYCsqL8IUbcARXw=;
    b=hPVQ0e7hJGk5SflODIvVj8uOgvQm0ZnSqnb4EW2uECxsL0FLB2rPLHYhunnkxtAd9l
    KczprFLs/MSbHqociQAw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xqFv7EJ0tgRX/vKfT/e8Ig6v0dNw4QAWpzMWrRQ=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 3/3] xentrace: close output file in the function which opened it
Date: Fri, 26 May 2023 09:29:16 +0200
Message-Id: <20230526072916.7424-4-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230526072916.7424-1-olaf@aepfle.de>
References: <20230526072916.7424-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/xentrace/xentrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xentrace/xentrace.c b/tools/xentrace/xentrace.c
index a073cab26d..3548255123 100644
--- a/tools/xentrace/xentrace.c
+++ b/tools/xentrace/xentrace.c
@@ -794,7 +794,6 @@ static void monitor_tbufs(void)
     free(meta);
     free(data);
     /* don't need to munmap - cleanup is automatic */
-    close(outfd);
 }
 
 
@@ -1225,6 +1224,7 @@ int main(int argc, char **argv)
 
     monitor_tbufs();
 
+    close(outfd);
     return 0;
 }
 


From xen-devel-bounces@lists.xenproject.org Fri May 26 07:35:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 07:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540001.841416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Ryh-0006Kj-Bm; Fri, 26 May 2023 07:35:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540001.841416; Fri, 26 May 2023 07:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Ryh-0006Kc-8r; Fri, 26 May 2023 07:35:11 +0000
Received: by outflank-mailman (input) for mailman id 540001;
 Fri, 26 May 2023 07:35:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7mHh=BP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2Ryf-0006KW-Mf
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 07:35:09 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2061c.outbound.protection.outlook.com
 [2a01:111:f400:fe16::61c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d6d2499d-fb97-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 09:35:08 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8263.eurprd04.prod.outlook.com (2603:10a6:20b:3f9::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.18; Fri, 26 May
 2023 07:35:06 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Fri, 26 May 2023
 07:35:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6d2499d-fb97-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oZ9SCV5eCmdmeyxTonD/7wRfKAAHavs9Q7Go2o0d3E+PlfEx0a67ez8Kiop+m3v8BadvfHPi3apEZKHns70BybA9KX9boYmF+QTFlv35xDT2WyVdAjifDTnoqs7LYhx2RJleF5Syi5amQ3R08J7DzrUehD6brRhfdlIG2hAWvDdPbTtYCX2zlaTkwyfGQ582KauLj0UPngUdd4ZEut1nbjM1KJJu6riLxuhZ35vxajEh/klS5H2upzPDqjE1qlnquYFsdNoV3LGLlAzAGo1ZbWslxcyWbRos0qi08BbilWyfBZzqAz/jhPfPIt/8EXF5uLRGsav5r6FusUKtX+SYXw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fbZyhmXvoa+G2jYdUn7X/SI6Ky0Cna4gbsBdriJhCF8=;
 b=oZP1yYIV67P2mlwiZVa50HsckZ19dQ1Gy/gWcvs+QhEN28tTLDN8rCmowVHapRbl+lJmwy/W7A7S9kAMlqwLwZcaw87iIIORdga65iIb78MOZj0f8oknb+na52UE0xpxydqblqYq/xJVRhWRonw5r3AHI+8Ww1IsheVnX6HD8Nwik25NDfH/iuww4XI1IivUwLZUzwrzbxCaDOo2rsgyM1b/c9IMApPf/SPS1zAq4gQeJtLkTErkURs0hl5TZhg13lhpycioVO2EG3X+Lmc5BXyyZSaPopcv2qWk49pim6fT95lCxl8ZBZNtLcn4MWPik80WgDtH1QsQm+Zo009PSA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fbZyhmXvoa+G2jYdUn7X/SI6Ky0Cna4gbsBdriJhCF8=;
 b=sUHq+VRPgdokegqWER2IhOvNP8ZzAPy+qT6FYuYTdVGyVByHy9ntAKvWlECgE2e30cJS8mDQI6cwaQsccXMxT+w/kN7dLvEp3wer4eBoW46SEsHhMIO8XJ1kiix8vqDf+Xw4YQbybi1bEH8uDdTgR40UGeCzjx2pMZCz1DQfYOj4nJgY545AKbRCw5MO8Er4cb0NBwZueksnMrU6Re47JXZtpsW6yQCII2Gz/tPSO9349Xjd4A2gboF1KKMt7InF1Sxznlj+ndY0o612YhV/UUZTb7Xn0N9Whl2XH6pz2XcoHd0/y0vqBh49ROc7oyooX1INuGVnZeOY628qjDKmQA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5567b45d-d8ee-7f43-526f-7f601c6ddd46@suse.com>
Date: Fri, 26 May 2023 09:35:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/vPIC: register only one ELCR handler instance
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0228.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b2::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8263:EE_
X-MS-Office365-Filtering-Correlation-Id: f60df170-e778-4783-f49c-08db5dbbb9ef
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	V9vNXTxr9AaVO8POA2Wx5VHYH2b3zMmmwwPrSVmtBcmQexeE5qCTWchB5sjayXAAT4mIvbRkhDuqDdGpJa4jfxIhjcub5m6hvDzMjPX9xxouozMQzgosfyInlDPJFfTnFTdWz24q7WQ3pkWFLwBY6GK/QDfSXKF457l62f4Pr2YD4/RmOTJtu/p++BBAW6HJwCFRUHvvmTUSOA4GK/6bNRHYEUn8XebcqBRE3xm/7INn0F8v1gZd7ButODa4u6aVE5sKl31ueth0p2x4n9J3AoBPQjGhY9pjDoLSTkPg8dH1FIYk7cNochBoupt/JBayPpDC0fKCRu+LAz16QEHQIOaCIiVad6CJNceqcICmKACCh2DgLF2YTtQpThAzjceb1rJZyq8V0+J1BZPfne78dSTKdpYAXpFYbzIbVbxMIStS0wnVsUuSU58lb0WzyhMIzwsfnxG6fnND9sPT91l1GlDy5q0p6UQ3vg/vdXhRumDoQhw5+lHUayKm5RdZnt+p2w9h1z8/zzkzKEkIPPRI/xhzoUUibtGAeI/sczpphJ4Phme5isuNRuSs3NU3WcH0EHwl0+qJyBllphvIMF7HL1SpQ8bhsCPJDdBPX5gg7mKvnXFeI4Od0HhgSYaO/GltqetnoxctqaFSKRa2FkZ+8w==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(396003)(376002)(366004)(136003)(39860400002)(451199021)(31686004)(54906003)(86362001)(83380400001)(31696002)(478600001)(2616005)(186003)(2906002)(66946007)(6512007)(6916009)(4326008)(66556008)(66476007)(36756003)(6506007)(316002)(26005)(6486002)(41300700001)(5660300002)(8676002)(8936002)(38100700002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?UE9VZFdJZDh4dld4c3V2amk5bjhFNFJ0TktxYiswekNvSjVMTWJpUzBSb293?=
 =?utf-8?B?RHh2Q0ZvNHI4QzFSb1dMQ2NkQ0IyV1AxaFB5S0dGS1E5WXlWbkRSMGNBeXpv?=
 =?utf-8?B?RzFRRUo2UUwwNXpvaXpZNVY2SG92ZTdITmdPc2EvR1hEYnR1SWRKRjJsWm5h?=
 =?utf-8?B?UDl4SUVoT3FBMVlGK2lBclpwazQwNVQ3czZVNFAyL1pPdTcrQ25MOFVOY1hv?=
 =?utf-8?B?dXhRZXY5ODBqenJhWkZUOEhwc25wYU5uTXRFYkJMcDdrZjgyMUZCcGw0MjJH?=
 =?utf-8?B?ZGp3cHpNN1ZIdDZVQmU0UEo2cUgwR3F0SzE1ZGg3aXJiQTNkZ250NEdqMnFv?=
 =?utf-8?B?K0c2K2w0VjMwdzJqSlJtTEZxMVR4cDFpV0ZGNDczbE1IUVNTUWw0Rml3UHFM?=
 =?utf-8?B?RzkzU3FzcTdsTU5uRllLMDlnY0Vwd29yanpHSCt3TWRmQWFvZ0VIclJmdE5C?=
 =?utf-8?B?ZDM2U29lelhlNFNBOHM3VWsvK09SUUp3NitOL0xEcHFiRHlGQ2dETUhaWngw?=
 =?utf-8?B?NWV2Q2hyTkdNVVAxVHgzMHBCbFlUbENEZDJPaWo2VG5YZmFNQjFYRDBmTTg3?=
 =?utf-8?B?OTBtakUxNFRWSGdHL0ZaQnJ4cytKdUJBN09UVFFEY1l0NzFJNzJmVnhrUDMx?=
 =?utf-8?B?Z3lBbHFwOXhINUQyTVN2VkpMaUZYU3VvYndxbFRwbkxqR3NDMGdZYitnOHls?=
 =?utf-8?B?bFJLZFBTREF0M25QRGYyelBLbXpURnRWSnhpdkVPb04wdEFESzJnTU83aGFl?=
 =?utf-8?B?UGtZdUZqa2lqY3JaMzY4aXkyQ1VFaC9jUlBvamtxNUxnZlVaa2dMT3NSeHhm?=
 =?utf-8?B?MVYxeEZjYmtCcDRpM1prMTRXNGIrZS8yY3lyT2w0SGtuY2JrbzlSRHFXYUpR?=
 =?utf-8?B?ZUovSytaZmxHSmEyaWs0cEpXb3hsb2JUeURETGl1SzZESzZXbk91V0IyV1RO?=
 =?utf-8?B?THd3bFBXZnZPa3hNaG9GYURXclh1ZTR5NWJiYmY0SXp0Q1ErSktXcHd4dXZG?=
 =?utf-8?B?WlNwc1JsQmp6eWpFVWJXeitMRzBYRUF5OEpGamJXdHBVNmZmMXpyRnYyd3dM?=
 =?utf-8?B?ZGE2WFc4ZGFnQ005SXAvaUpxMmFhUWxrcFN3VTAyZTEySlNlWHVDVGZGQzVk?=
 =?utf-8?B?cUpEK1Fsb0tUYytHb1BxZXRFRXcwSnp0aHE2QmdKME9vZHJNUUdJbDBURHZr?=
 =?utf-8?B?ZStsN0NnYUxZYk5udG9lVU5ad3pTY0pDZGxTajJhOUxjOVprckhUKzRZZ0th?=
 =?utf-8?B?VUtvV3lhU0M3ZlJNK2V2ZWFMcEwxdlF6Q0NybE40eUd6QnhNdDFtUkNqVE9S?=
 =?utf-8?B?bkRBNFZyNVowajl4SkVrVFZ5dVUyS0tESkxPZmUydU5nSUZod3pYUkFVTkhJ?=
 =?utf-8?B?ZGwzVVBDVlRNUFFMczIvRGhnemZpbmszZVlLK3NORzJjdGZnNXQ0ZlZDNksw?=
 =?utf-8?B?Q1JKdlhYaTQ0UC9MZ1daNXRSQk9aUWRqUXoyVTBqTGlCdmJxMGp4eDBMZnp3?=
 =?utf-8?B?OGliS29ic2Z4UWdRKzYrL0FwUmxZNG9jelF4cnJXaW5HR2N3KzFVL1h4bG1n?=
 =?utf-8?B?a016QW45TStuSHJNMEhqMnFJbkd3MGdEamgydXA2TzVGR2VGS0QzemVWWktt?=
 =?utf-8?B?SkV1QUNCNG5SRkovZnF5VVV4aWR6OWRzbGlYU3dkeUNOcmRZNFJFRjJPeFRx?=
 =?utf-8?B?S1dvTlpsUTd5T1Y3OU80Y3hoYTRWNFdnei9CQUx6L0MzYmNwOTh1YURHbHQv?=
 =?utf-8?B?VGI3YmxPRFZndDFPLzFNaHc2SS9BaTVQK2JYc1lja2M0Ry9qYU1QYXAzdDFE?=
 =?utf-8?B?dUZ6UXk3Y1AyK0tBOHVWcUIwbHRCZ1FRN0xBbGxqZEI5aU1BV241WVZIZWJp?=
 =?utf-8?B?SitGeXdMd0RhYlJURmFodlRpaHVRT0ZIc0x2MndMd0g4UkZUSzhlVEl2NGRQ?=
 =?utf-8?B?UGJUZTVsRmxXelJuMXovckY5OTY4dFJOZVJJUnhNblpLdU1oSEhHOHM2RW9Y?=
 =?utf-8?B?S3EvcGx0cFhwMlRaSnRZczRFYWYxTWRNRGZCTEpvMHdmVXd2RTZFcVFzeTlh?=
 =?utf-8?B?L1REVy9nRGkzU3pqSFFaNkZkYWVLYjBWdnFqSnZwOGZ3aGw3cmdDNW5EbENM?=
 =?utf-8?Q?KOU3vVVWkg4ZbSA8FXMDcsTgU?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f60df170-e778-4783-f49c-08db5dbbb9ef
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 May 2023 07:35:06.4422
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: yAoX0lRO4H/Ydp8LvAZm8p9lI6Zm3AZkC6zTm3bW21SxnfPZRsE7+sisags4dQBoxQqonLLAUPDmg3OsjCIb+Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8263

There's no point in consuming two port-I/O slots. Even less so considering
that some real hardware permits both ports to be accessed in one go, the
emulation of which requires there to be only a single handler instance.
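
A toy model (not Xen code; elcr_read and the elcr array are illustrative
names) of why one handler spanning both ports is needed: a 16-bit access to
0x4d0 must be decomposed into per-PIC byte operations, so the master ELCR
lands in the low byte and the slave ELCR in the high byte:

```c
#include <assert.h>
#include <stdint.h>

static uint8_t elcr[2];   /* [0] = master (0x4d0), [1] = slave (0x4d1) */

/* Read 'bytes' bytes starting at 'port' (0x4d0 or 0x4d1). */
static uint32_t elcr_read(unsigned int port, unsigned int bytes)
{
    uint32_t val = 0, shift = 0;
    unsigned int idx = port & 1;

    /* Mirrors the patch's BUG_ON(bytes > 2 - (port & 1)): a 2-byte
     * access is only valid when it starts at the even port. */
    assert(bytes <= 2 - idx);

    do {
        val |= (uint32_t)elcr[idx++] << shift;
        shift += 8;
    } while ( --bytes );

    return val;
}
```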

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -377,25 +377,34 @@ static int cf_check vpic_intercept_elcr_
     int dir, unsigned int port, unsigned int bytes, uint32_t *val)
 {
     struct hvm_hw_vpic *vpic;
-    uint32_t data;
+    unsigned int data, shift = 0;
 
-    BUG_ON(bytes != 1);
+    BUG_ON(bytes > 2 - (port & 1));
 
     vpic = &current->domain->arch.hvm.vpic[port & 1];
 
-    if ( dir == IOREQ_WRITE )
-    {
-        /* Some IRs are always edge trig. Slave IR is always level trig. */
-        data = *val & vpic_elcr_mask(vpic);
-        if ( vpic->is_master )
-            data |= 1 << 2;
-        vpic->elcr = data;
-    }
-    else
-    {
-        /* Reader should not see hardcoded level-triggered slave IR. */
-        *val = vpic->elcr & vpic_elcr_mask(vpic);
-    }
+    do {
+        if ( dir == IOREQ_WRITE )
+        {
+            /* Some IRs are always edge trig. Slave IR is always level trig. */
+            data = (*val >> shift) & vpic_elcr_mask(vpic);
+            if ( vpic->is_master )
+                data |= 1 << 2;
+            vpic->elcr = data;
+        }
+        else
+        {
+            /* Reader should not see hardcoded level-triggered slave IR. */
+            data = vpic->elcr & vpic_elcr_mask(vpic);
+            if ( !shift )
+                *val = data;
+            else
+                *val |= data << shift;
+        }
+
+        ++vpic;
+        shift += 8;
+    } while ( --bytes );
 
     return X86EMUL_OKAY;
 }
@@ -470,8 +479,7 @@ void vpic_init(struct domain *d)
     register_portio_handler(d, 0x20, 2, vpic_intercept_pic_io);
     register_portio_handler(d, 0xa0, 2, vpic_intercept_pic_io);
 
-    register_portio_handler(d, 0x4d0, 1, vpic_intercept_elcr_io);
-    register_portio_handler(d, 0x4d1, 1, vpic_intercept_elcr_io);
+    register_portio_handler(d, 0x4d0, 2, vpic_intercept_elcr_io);
 }
 
 void vpic_irq_positive_edge(struct domain *d, int irq)
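
For reference, the merged-handler technique above can be modelled outside
Xen as a small standalone sketch: two 8-bit ELCR registers backing ports
0x4d0/0x4d1, with one handler looping over the accessed bytes and shifting
data into/out of a single 32-bit value. The masks and the hard-wired
master IR2 bit are illustrative stand-ins for vpic_elcr_mask() and the
real vPIC state, not the actual Xen implementation.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the single ELCR handler covering both ports.
 * elcr[0] models the master PIC's register (port 0x4d0),
 * elcr[1] the slave's (port 0x4d1). Masks are assumptions. */
static uint8_t elcr[2];
static const uint8_t mask[2] = { 0xf8, 0xde };  /* illustrative only */

/* dir: 0 = read, 1 = write; port: 0x4d0 or 0x4d1; bytes: 1 or 2. */
static void elcr_io(int dir, unsigned int port, unsigned int bytes,
                    uint32_t *val)
{
    unsigned int idx = port & 1, shift = 0;

    /* A 2-byte access must start at the even port, as in the patch's
     * BUG_ON(bytes > 2 - (port & 1)). */
    assert(bytes <= 2 - idx);

    do {
        if ( dir )
        {
            uint8_t data = (*val >> shift) & mask[idx];

            if ( idx == 0 )        /* master: IR2 always level-triggered */
                data |= 1 << 2;
            elcr[idx] = data;
        }
        else
        {
            uint8_t data = elcr[idx] & mask[idx];

            /* First byte initializes *val; later bytes OR in shifted. */
            if ( !shift )
                *val = data;
            else
                *val |= (uint32_t)data << shift;
        }

        ++idx;
        shift += 8;
    } while ( --bytes );
}
```

A 16-bit write to 0x4d0 then updates both registers in one call, and a
16-bit read returns both masked registers packed into one value, which is
precisely what a single two-port handler registration enables.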


From xen-devel-bounces@lists.xenproject.org Fri May 26 09:15:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 09:15:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540019.841430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2TXf-0000IZ-7f; Fri, 26 May 2023 09:15:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540019.841430; Fri, 26 May 2023 09:15:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2TXf-0000IS-30; Fri, 26 May 2023 09:15:23 +0000
Received: by outflank-mailman (input) for mailman id 540019;
 Fri, 26 May 2023 09:15:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2TXd-0000IH-I6; Fri, 26 May 2023 09:15:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2TXd-0002Mf-Ch; Fri, 26 May 2023 09:15:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2TXc-0006yF-RJ; Fri, 26 May 2023 09:15:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2TXc-0004Op-Qu; Fri, 26 May 2023 09:15:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xxxwvTQrjkgxS6ZWWEHbANC2W7/xnXqSBWMyvt6pnIU=; b=DPwVZik9CTgWqXVzo0xZG8ybXY
	B1DtlPJShvVb9CeFe6CbCfy3i7UODCGNmZUMBcpxPo4cuXgGX5iAsd827yAff6A/SQ1xzCYd7Z2Td
	XdfJJsxG4smMxWs8MjNe36egwFxtGXEp7CCweukbfuEc0yZeksVtFKY98kWPJ1HScxMY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180954-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180954: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=a3cb6d5004ff638aefe686ecd540718a793bd1b1
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 09:15:20 +0000

flight 180954 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180954/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                a3cb6d5004ff638aefe686ecd540718a793bd1b1
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    8 days
Failing since        180699  2023-05-18 07:21:24 Z    8 days   34 attempts
Testing same since   180949  2023-05-25 22:08:34 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 7534 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 26 10:04:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 10:04:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540025.841440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UIe-0005lc-NG; Fri, 26 May 2023 10:03:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540025.841440; Fri, 26 May 2023 10:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UIe-0005lV-K6; Fri, 26 May 2023 10:03:56 +0000
Received: by outflank-mailman (input) for mailman id 540025;
 Fri, 26 May 2023 10:03:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X81t=BP=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1q2UId-0005lP-5E
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 10:03:55 +0000
Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com
 [2a00:1450:4864:20::136])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9e78084a-fbac-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 12:03:53 +0200 (CEST)
Received: by mail-lf1-x136.google.com with SMTP id
 2adb3069b0e04-4f3a9ad31dbso576406e87.0
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 03:03:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e78084a-fbac-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685095432; x=1687687432;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=PKW6t5nq5lV7zUztAetBNtiLiZfpIF4x4clPqehH+eI=;
        b=j7uzly3VQWJsx8ht1fSi4RoY+xYlatMyz/7sXzjTXcaEl1rwxRcP/lL7IpZGjhJeZA
         lS9fUQNB46mxxYta236sQrTzoFIky6UMl6xs3rpbeYl4KuxwXnFwjV3+w9An7WH2jLZy
         AToKm6dsNto6wyMYsafzjg6bsdloKp4jbbRcM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685095432; x=1687687432;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=PKW6t5nq5lV7zUztAetBNtiLiZfpIF4x4clPqehH+eI=;
        b=A7jYV+IguLroVf3vFAcOaExaTnKhrbXrWKe3qL4c7irVxsM0+ESqfCfRbUMMChYD3b
         HSOJOF8EVa2i9hocH4uqII0HBjgcQd23XdO/bjezxdJ6YeXfOGWP8V0EEO16bLZ5tMP8
         /NiewgRvJxJuQn/FBiQHdT7sZJ7KhyjuLVLB2IHYJHYoMcCUairpuv6vSb8YQeg50L03
         0gfnHgJgpNHIsXc3hEyH5zXW9vY7919Q5wXHmpcMvLj9hMw4Wrsc8NWnGCLv+M24GlGA
         jD40CSgYYednQ1UPbBiE60okXQKmJvD71MRggI7OmL+TZHEI41CRx59sLsn95gz/GlNc
         T2dA==
X-Gm-Message-State: AC+VfDxMINotkRC0BNgy0G8UNzwoST7Ib0mamIlmgF4rLkGr0cGtC6SR
	yqNLaFRanVlGx3bz/t2a7PNFdCT56+rjAyxLbwuNig==
X-Google-Smtp-Source: ACHHUZ6QdcG0BJFUGgKCosYtaP+IGY3RtVwHe+eD4t4d7eoqkbBDKZXIllr32kPaFsJF9GKs6ABRxIPUWs34oAdHQ9k=
X-Received: by 2002:ac2:5227:0:b0:4f2:455d:18bd with SMTP id
 i7-20020ac25227000000b004f2455d18bdmr348743lfl.16.1685095432597; Fri, 26 May
 2023 03:03:52 -0700 (PDT)
MIME-Version: 1.0
References: <20230526072916.7424-1-olaf@aepfle.de> <20230526072916.7424-4-olaf@aepfle.de>
In-Reply-To: <20230526072916.7424-4-olaf@aepfle.de>
From: George Dunlap <george.dunlap@cloud.com>
Date: Fri, 26 May 2023 11:03:41 +0100
Message-ID: <CA+zSX=Z+p8CmB_=r6LKJeDjg3wdq4AzMgK0qs2GQLYRLwbe0QQ@mail.gmail.com>
Subject: Re: [PATCH v1 3/3] xentrace: close output file in the function which
 opened it
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>, 
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Content-Type: multipart/alternative; boundary="000000000000ac4c0705fc95dac8"

--000000000000ac4c0705fc95dac8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 26, 2023 at 8:29 AM Olaf Hering <olaf@aepfle.de> wrote:

> Signed-off-by: Olaf Hering <olaf@aepfle.de>
>

Wow, this file is ugly.

Reviewed-by: George Dunlap <george.dunlap@cloud.com>
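As a sketch of the pattern the patch applies (the function that opens a resource is also the one that closes it), consider the following. This is a hypothetical standalone example, not the actual xentrace code; the name `count_bytes` is made up:

```c
#include <stdio.h>

/* Hypothetical sketch: fopen() and fclose() live in the same function,
 * so ownership of the FILE* never leaks across call sites.  Returns the
 * file size in bytes, or -1 on any error. */
static long count_bytes(const char *path)
{
    FILE *f = fopen(path, "rb");
    long size = -1;

    if (!f)
        return -1;

    if (fseek(f, 0, SEEK_END) == 0)
        size = ftell(f);

    fclose(f);          /* closed in the function that opened it */
    return size;
}
```

Keeping open and close paired in one function makes it impossible for a caller to forget the cleanup, which is the readability win the patch is after.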

--000000000000ac4c0705fc95dac8--


From xen-devel-bounces@lists.xenproject.org Fri May 26 10:04:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 10:04:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540030.841449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UJa-0006MD-2X; Fri, 26 May 2023 10:04:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540030.841449; Fri, 26 May 2023 10:04:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UJZ-0006M6-W0; Fri, 26 May 2023 10:04:53 +0000
Received: by outflank-mailman (input) for mailman id 540030;
 Fri, 26 May 2023 10:04:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2UJY-0006KC-Ad; Fri, 26 May 2023 10:04:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2UJY-0003Sn-2v; Fri, 26 May 2023 10:04:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2UJX-0001SU-Dd; Fri, 26 May 2023 10:04:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2UJX-0003Eh-DC; Fri, 26 May 2023 10:04:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P9NXW5yvteeHkxJWo1qdqC9u8HKUsIi3FwN9q7AHZko=; b=MCA0BucuJXAEbUZYvEXNMz1Axr
	7ScG8x0HtaNvVfGvcTYYHao8VKqScFXhC1mzznY6J5t7zvyPdtD6Uksf4zSGvYYp0hxJjvNz4LSDP
	cVFXXcyL+q4bKkUCPi8FhEuowUKDeWBxaC2zdNpULDaTHa8/RmOXF4B3LupU2BwvpgUY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180950-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180950: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-amd64-freebsd12-amd64:guest-saverestore.2:fail:heisenbug
    linux-linus:test-amd64-coresched-amd64-xl:debian-fixup:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9db898594c541444e19b2d20fb8a06262cf8fcd9
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 10:04:51 +0000

flight 180950 linux-linus real [real]
flight 180957 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180950/
http://logs.test-lab.xenproject.org/osstest/logs/180957/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd12-amd64 18 guest-saverestore.2 fail pass in 180957-retest
 test-amd64-coresched-amd64-xl 13 debian-fixup       fail pass in 180957-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                9db898594c541444e19b2d20fb8a06262cf8fcd9
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   39 days
Failing since        180281  2023-04-17 06:24:36 Z   39 days   72 attempts
Testing same since   180950  2023-05-25 22:12:20 Z    0 days    1 attempts

------------------------------------------------------------
2518 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 318588 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 26 10:04:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 10:04:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540034.841459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UJc-0006cH-Ai; Fri, 26 May 2023 10:04:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540034.841459; Fri, 26 May 2023 10:04:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UJc-0006c7-7X; Fri, 26 May 2023 10:04:56 +0000
Received: by outflank-mailman (input) for mailman id 540034;
 Fri, 26 May 2023 10:04:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X81t=BP=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1q2UJb-0006KH-3q
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 10:04:55 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c2705a06-fbac-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 12:04:53 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2af2b74d258so5409661fa.3
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 03:04:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2705a06-fbac-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685095493; x=1687687493;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=rhHiPT1CszJKo1aBZSqdDVYp8Mb+er3glqmeqq9eATg=;
        b=hRcIVYFerwrgJ5fKQE1NlOqPrauqwyPkQrQ2a+yM/z6kZZ05CdXStzzTFad1xtJNbV
         /LNRHmP4wPK96zUTIhX0iKqakFl1oi4QfNxTlZSK5CZJvVKvzX/5H6YZ6JY0sn3lnLW6
         p98j7i2TWgOqV49UojL47qzkBOiNL/NlU1XQw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685095493; x=1687687493;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=rhHiPT1CszJKo1aBZSqdDVYp8Mb+er3glqmeqq9eATg=;
        b=AzoOGpa2u7QAPmk6WSmHeabkDIPkRl+TiMWNAVnv3bRL7FVvd6JGAcQb3tY9INaXwD
         wuSEUVerdRhZ42JoxEI6tUz8/B3IaA8gP0DZAt5EIyfhOgDlo/R41YY/It2WnYuZtYjo
         mf9ZLs6CbYZHsQrJn4FvC+8KEVe6LQlsV1SuEJKm/elJ8Ea3/9bHlDIDwA6d07FzrHey
         C2XTEKZ24qrvKc4otRKiXBh9AqceT7duVdhjdTpptaijYEGTZd2lVP/hkarhxQLHjjMU
         xW4Z0TaWOjYo3AyDJMgKeY2vpFzscJuJUiTuE6pODX+e7jIqpq4HpQykfx1V4rxeVBLA
         AvUw==
X-Gm-Message-State: AC+VfDxVxzoaSta4kv9o4KHMwQ6+HQHvjGchyA2b5bpK0pRuDrev/w7x
	2d/bTyx/vyjJlVx2+h7+Wjf1fcATb0bOWD0AxzmesQ==
X-Google-Smtp-Source: ACHHUZ4lhqvWR9UdWmKFj7cZVhxUQM6UX7dg1VM6yf4OE/MClMEvIrEV3GPQHTc94COuR1/2EezLGCAJ564Rq+agaxE=
X-Received: by 2002:a2e:c49:0:b0:2af:2786:2712 with SMTP id
 o9-20020a2e0c49000000b002af27862712mr525171ljd.25.1685095492954; Fri, 26 May
 2023 03:04:52 -0700 (PDT)
MIME-Version: 1.0
References: <20230526072916.7424-1-olaf@aepfle.de> <20230526072916.7424-2-olaf@aepfle.de>
In-Reply-To: <20230526072916.7424-2-olaf@aepfle.de>
From: George Dunlap <george.dunlap@cloud.com>
Date: Fri, 26 May 2023 11:04:42 +0100
Message-ID: <CA+zSX=ayTmoFY+jyyvnM9ug4pXGGz-NAcXj4F7YtWn1jmmYF8w@mail.gmail.com>
Subject: Re: [PATCH v1 1/3] xentrace: allow xentrace to write to stdout
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>, 
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Content-Type: multipart/alternative; boundary="00000000000045450a05fc95de71"

--00000000000045450a05fc95de71
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 26, 2023 at 8:29 AM Olaf Hering <olaf@aepfle.de> wrote:

> The output file is optional. In case it is missing, xentrace is supposed
> to write to stdout - unless it is a tty, which is checked prior to using it.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>
>

Acked-by: George Dunlap <george.dunlap@cloud.com>
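The behaviour described in the commit message can be sketched roughly as follows. This is a hypothetical illustration, not the actual xentrace code; the helper name `pick_output_fd` is made up:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

/* Hypothetical helper: choose the trace output.  If no file name was
 * given, fall back to stdout -- but refuse a tty, since binary trace
 * data would garble the terminal. */
static int pick_output_fd(const char *outfile)
{
    if (outfile && strcmp(outfile, "") != 0)
        return open(outfile, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (isatty(STDOUT_FILENO)) {
        fprintf(stderr, "refusing to write binary data to a tty\n");
        return -1;
    }
    return STDOUT_FILENO;
}
```

With a shape like this, `xentrace` piped into another program writes to the pipe, while running it bare on a terminal fails early instead of spewing binary data.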

--00000000000045450a05fc95de71--


From xen-devel-bounces@lists.xenproject.org Fri May 26 10:12:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 10:12:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540041.841469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UQd-0008Pm-2N; Fri, 26 May 2023 10:12:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540041.841469; Fri, 26 May 2023 10:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UQc-0008Pf-W3; Fri, 26 May 2023 10:12:10 +0000
Received: by outflank-mailman (input) for mailman id 540041;
 Fri, 26 May 2023 10:12:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X81t=BP=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1q2UQc-0008PZ-8T
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 10:12:10 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c6433396-fbad-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 12:12:09 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2af225e5b4bso5346091fa.3
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 03:12:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6433396-fbad-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685095929; x=1687687929;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=AhfjEJJ7c5rJX0Z0KvcDFdNSqDnBAEJVFDchzhucJrg=;
        b=E7txLwawbdHK6MGzWi8eg+ugipVXzeRo9puMNyrpEF6yi/zF9QLmXM/Y+N2tSU7Hv6
         SP1ij1Y9sxVVm5Bh8Y+t3ea6YaGMWUowsf7Ul7aGmwdkrwVohcs4Zk3vBlZRX6/vhdaJ
         bzSmnFsc1H/Mqt/4cLzXaTpwMZON4kfCwl4ac=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685095929; x=1687687929;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=AhfjEJJ7c5rJX0Z0KvcDFdNSqDnBAEJVFDchzhucJrg=;
        b=L+krsvL/PUYHqnWwTNSlfD9BTPGE7fGYrZpCT/0QR3iXmXvlvrId3KYpq2tyTcG4DO
         BCyDnDa4LrSM99dehSxNHY84L9wcL8SFlLyrmv+YDaYoq9anLc4TyEV1XjN1kwzinQsa
         yqHNPAd9RsU4B7VDVqcgH4KuW0d2fCmbF931zCWzt2s+0vuBsOrltynpK0XBMmuTFsWV
         V4dcqVODGU142KWLJKSAuE0k04K2Q5OoUgs4hL1QIgdO52v6tX3XWVhsk0t5C3AvnhFx
         7L6M4mBgIsHU0S2RQqLQDWLUpc0Ps0ou8T/XOa9xlp9a90WGSwsVCqkWd5zo2JuFmgPh
         ktEA==
X-Gm-Message-State: AC+VfDwjRZuLDXojqNlFhFEwKESORvI7+P5s/uvEz+rUnF1b16tU0wS1
	3P5SRog6NstbWbaq4mNL/U2aHVQSNPiAbNdPdSWy+A==
X-Google-Smtp-Source: ACHHUZ4NukaulJo1fdhHjOKKMY3Npa/RoLZeIraFM4Azlf9Vu5CjA887KiEfG1dZlespbB0Z7UIZSuQWQWKzdiXqyUc=
X-Received: by 2002:a2e:9c9a:0:b0:2a7:7b8e:5888 with SMTP id
 x26-20020a2e9c9a000000b002a77b8e5888mr669610lji.27.1685095928855; Fri, 26 May
 2023 03:12:08 -0700 (PDT)
MIME-Version: 1.0
References: <20230526072916.7424-1-olaf@aepfle.de> <20230526072916.7424-3-olaf@aepfle.de>
In-Reply-To: <20230526072916.7424-3-olaf@aepfle.de>
From: George Dunlap <george.dunlap@cloud.com>
Date: Fri, 26 May 2023 11:11:58 +0100
Message-ID: <CA+zSX=ZNZD2qQ1HGtqauoJdU_g1T45_gLq6XCG2Sn9VJJTNnbg@mail.gmail.com>
Subject: Re: [PATCH v1 2/3] xentrace: remove return value from monitor_tbufs
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>, 
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Content-Type: multipart/alternative; boundary="00000000000040a09f05fc95f87b"

--00000000000040a09f05fc95f87b
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 26, 2023 at 8:29 AM Olaf Hering <olaf@aepfle.de> wrote:

> The function always returns zero.
>

I think a better argument (which I propose should replace the content of
the commit message) would be something like this:

---
The program is structured so that fatal errors cause exit() to be called
directly, rather than being passed up the stack; returning a value here may
mislead people into believing otherwise.
---

With that change:

Reviewed-by: George Dunlap <george.dunlap@cloud.com>

If that sounds OK to you I'll modify it on check-in.

 -George
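The convention being described (fatal errors call exit() directly, so the function is void rather than returning a status) can be illustrated with a small sketch. The names here are hypothetical, not the real monitor_tbufs:

```c
#include <stdio.h>
#include <stdlib.h>

static int batches_done;

/* Stand-in for reading one batch of trace records; 0 = success. */
static int read_batch(void)
{
    batches_done++;
    return 0;
}

/* Fatal errors terminate the process here rather than propagating up
 * the stack, so the function has nothing meaningful to return.
 * Declaring it void documents that fact for the reader. */
static void monitor_loop(int rounds)
{
    for (int i = 0; i < rounds; i++) {
        if (read_batch() != 0) {
            perror("monitor");
            exit(EXIT_FAILURE);   /* fatal path: never returns */
        }
    }
}
```

A non-void return type on such a function suggests an error-propagation path that does not actually exist, which is the misleading signal the revised commit message calls out.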

--00000000000040a09f05fc95f87b--


From xen-devel-bounces@lists.xenproject.org Fri May 26 10:14:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 10:14:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540045.841479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2USo-0000ZA-FL; Fri, 26 May 2023 10:14:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540045.841479; Fri, 26 May 2023 10:14:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2USo-0000Z3-CA; Fri, 26 May 2023 10:14:26 +0000
Received: by outflank-mailman (input) for mailman id 540045;
 Fri, 26 May 2023 10:14:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8JpW=BP=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q2USn-0000Yx-Hk
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 10:14:25 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 15f72935-fbae-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 12:14:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15f72935-fbae-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1685096062;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=l0JCLWaWxAiVqnK2HID86lF3S13SCfgrB4rF/jITy/I=;
	b=ip+IsKji95mPrphWLVAUcdYl0y7pMoo/0rzxXxKZorc2i1ogyDwgLlmNAyb0qqipND//Q9
	mcdcjSKoJ1BmMv2yfve0Ih3RShF4W067n1L36VQOlRFjew11DbOIHP9BY3hPEmKWEMkYtF
	3++Mlj3E1pedAPDLuSJp8O8yrjmKl61cO3sikLlCo0aFQA4sDzjqIug5Y2sHJzxAouulen
	yWchVZzHYCyYvLl2IBhaE7LhZZ90HvEDnnYiIPfmZVnM5oPDhrPgISY7m926weiwqHfWOw
	VMBPX/ASMwnX6UaU0XNMhsxdic6Oppo5GnIK5pHcSkZVNuJmv/nyAhmosdGW7Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1685096062;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=l0JCLWaWxAiVqnK2HID86lF3S13SCfgrB4rF/jITy/I=;
	b=mQPEsm7iwifOjyLdaiqWmBusXWewAsMMsfF7/82drVMMawEZjza+eT9dmICykmFbK9zGX6
	K4OvjM1+WRhVDmDw==
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
In-Reply-To: <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
Date: Fri, 26 May 2023 12:14:21 +0200
Message-ID: <87y1lbl7r6.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Wed, May 24 2023 at 23:48, Kirill A. Shutemov wrote:
> On Mon, May 08, 2023 at 09:44:17PM +0200, Thomas Gleixner wrote:
>>  #ifdef CONFIG_SMP
>> -/**
>> - * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
>> - * @apicid: APIC ID to check
>> - */
>> -bool apic_id_is_primary_thread(unsigned int apicid)
>> +static void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid)
>>  {
>> -	u32 mask;
>> -
>> -	if (smp_num_siblings == 1)
>> -		return true;
>>  	/* Isolate the SMT bit(s) in the APICID and check for 0 */
>> -	mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
>> -	return !(apicid & mask);
>> +	u32 mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
>> +
>> +	if (smp_num_siblings == 1 || !(apicid & mask))
>> +		cpumask_set_cpu(cpu, &__cpu_primary_thread_mask);
>>  }
>> +#else
>> +static inline void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid) { }
>>  #endif
>>  
>>  /*
>
> This patch causes boot regression on TDX guest. The guest crashes on SMP
> bring up.

I'd rather call it a security feature: it makes TDX unbreakably secure.

> The change makes use of smp_num_siblings earlier than before: the mask gets
> constructed in the acpi_boot_init() codepath. Later on, smp_num_siblings gets
> updated in detect_ht().
>
> In my setup with 16 vCPUs, smp_num_siblings is 16 before detect_ht() and
> set to 1 in detect_ht().

  early_init_intel(c)
    if (detect_extended_topology_early(c) < 0)
       detect_ht_early(c);

  acpi_boot_init()
    ....

  identify_boot_cpu(c)
    detect_ht(c);

Aaargh. That whole CPU identification code is a complete horrorshow.

I'll have a look....




From xen-devel-bounces@lists.xenproject.org Fri May 26 10:20:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 10:20:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540051.841490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UYN-00026U-6l; Fri, 26 May 2023 10:20:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540051.841490; Fri, 26 May 2023 10:20:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2UYN-00026N-3e; Fri, 26 May 2023 10:20:11 +0000
Received: by outflank-mailman (input) for mailman id 540051;
 Fri, 26 May 2023 10:20:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2UYM-00026D-04; Fri, 26 May 2023 10:20:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2UYL-0003xR-R4; Fri, 26 May 2023 10:20:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2UYL-0001qe-Ga; Fri, 26 May 2023 10:20:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2UYL-00074D-G7; Fri, 26 May 2023 10:20:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mOIvvqvL5tmgMyzh0Oz/Qv89ROEfIhDo9KgaK/35azE=; b=EgwM1A1pk7DVM6eTZHNyny3OOw
	qd+KO/pVjlaq2KFmgir5G/nn6ZD2EdvXxPBpPVPIWeJIK5DFP2lQp9C1CRdmeUFSnbu8ff/dfCRCB
	fnfk9deZUPB/SZNFF8MU5dKEDJwjBjF+6ojrcQ4Xcbb/cjBT7YmbNjqMdrI50MkUtito=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180956-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180956: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=40cd186bfd15655b7d9d3ee149292c718c208917
X-Osstest-Versions-That:
    xen=354be8936d97d4f2cb8cc004bb0296826d89bd8d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 10:20:09 +0000

flight 180956 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180956/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  40cd186bfd15655b7d9d3ee149292c718c208917
baseline version:
 xen                  354be8936d97d4f2cb8cc004bb0296826d89bd8d

Last test of basis   180943  2023-05-25 13:01:48 Z    0 days
Testing same since   180956  2023-05-26 08:01:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Yann Dirson <yann.dirson@vates.fr>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   354be8936d..40cd186bfd  40cd186bfd15655b7d9d3ee149292c718c208917 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 26 11:07:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 11:07:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540060.841530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIK-0007MS-Kv; Fri, 26 May 2023 11:07:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540060.841530; Fri, 26 May 2023 11:07:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIK-0007Lh-FS; Fri, 26 May 2023 11:07:40 +0000
Received: by outflank-mailman (input) for mailman id 540060;
 Fri, 26 May 2023 11:07:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gDTS=BP=citrix.com=prvs=5031e17c9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2VIJ-0006de-D0
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 11:07:39 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 85dd0dc6-fbb5-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 13:07:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85dd0dc6-fbb5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685099257;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=vEXEzdcx1S5fOqtwj2ao5hamG4txXf0RO67hpizdai4=;
  b=FEAJ+gYBr6LYHULZViVJXq+m/QuHEC+/x629ZJUb2kajNMvkVAs7S8Sw
   xGlZ9Qd3o+z8EQq1a0EGu5m6ajfEpHODhYhsHAMiXkFxbZ2Vo2b6bgzNu
   V5dFj310rlpUSvWBLbKrX0ig6zfRxIyybX1DLyHKsISxSIxN2BduYgjas
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 113007428
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:tJ6uWqtd3OBBXcY/+tRZjNws5ufnVEZeMUV32f8akzHdYApBsoF/q
 tZmKTvUaPzcYWLwc9B2bIu+8U4Bv56HyN5jSQJv/HswRi9A+JbJXdiXEBz9bniYRiHhoOCLz
 O1FM4Wdc5pkJpP4jk3wWlQ0hSAkjclkfpKlVKiffHg3HVQ+IMsYoUoLs/YjhYJ1isSODQqIu
 Nfjy+XSI1bg0DNvWo4uw/vrRChH4rKq4Fv0gnRkPaoQ5AKEySFPZH4iDfrZw0XQE9E88tGSH
 44v/JnhlkvF8hEkDM+Sk7qTWiXmlZaLYGBiIlIPM0STqkAqSh4ai87XB9JFAatjsB2bnsgZ9
 Tl4ncfYpTHFnEH7sL91vxFwS0mSNEDdkVPNCSDXXce7lyUqf5ZwqhnH4Y5f0YAwo45K7W9yG
 fMwdCE/XgnYvOeNkeiXUO9VhtYHFeTmI9ZK0p1g5Wmx4fcORJnCR+PB5MNC3Sd2jcdLdRrcT
 5NHM3w1Nk2GOkARfA5NU/rSn8/x7pX7WxRepEiYuuwc5G/LwRYq+LPsLMDUapqBQsA9ckOw/
 zudpzymXktKXDCZ4WGl1m+plOH2pz7UfolLGrKq0sAwilLGkwT/DzVJDADm8JFVkHWWS99Zb
 kAZ5Ccqhawz71CwCMnwWQWip3yJtQJaXMBfe8UYwgyQzqvf4y6CG3MJCDVGbbQOq8seVTEsk
 FiTkLvU6SdH6ePPDyjHr/HN8G30YHJORYMfWcMaZVcU0en6+7o2tUOVYsxlKa6nquTYFC6ll
 lhmsxMCr7kUiMcK0YCy8lbGny+gq/D1c+Il2unEdjn7t10kPeZJc6TtsAGGtqgYcO51W3Hb5
 BA5d96iAPfi5H1nvAiEW60zEb6g/J5p2xWM0Ac0T/HNG9lAkkNPnLy8AhkkfC+F0e5eI1cFh
 XM/XisPjKK/xFPwMcdKj3uZUqzGN5TIG9X/TezzZdFTeJV3fwLv1HgwNRPIhDGxzBFywPtX1
 XKnnSCEVypy5UNPlWDeegvg+eVzmnBWKZ37GfgXMChLIZLBPSXIGN/pwXOFb/wj7bPsnTg5B
 +13bpPQoz0GCb2WX8Ui2dJLRbz8BSRhVM+eRg0+XrLrHzeK70l7VqeKnet6It0890mX/8+Rl
 kyAtoZj4AKXrRX6xc+iMS0LhG/HNXqnkU8GAA==
IronPort-HdrOrdr: A9a23:wxLCwK0KRd9D0nOJOKu6kAqjBHYkLtp133Aq2lEZdPU0SKGlfq
 GV7ZEmPHrP4gr5N0tOpTntAse9qBDnhPxICOsqXYtKNTOO0AeVxelZhrcKqAeQeBEWmNQ96U
 9hGZIOcuEZDzJB/LvHCN/TKadd/DGFmprY+ts31x1WPGVXgzkL1XYANu6ceHcGIzVuNN4CO7
 e3wNFInDakcWR/VLXBOpFUN9KzweEijfjdEGc7OyI=
X-Talos-CUID: =?us-ascii?q?9a23=3AWWr33mjv9KVZtgSkKcC2qGod5DJuLmzv/G6LGEq?=
 =?us-ascii?q?DWDxFUrusSU20wY5pjJ87?=
X-Talos-MUID: 9a23:r1ZrHwjq1uMW5mdymVQMK8Mpb+dM/qG8Bnk3jLogsJCKOj5UBW2Xg2Hi
X-IronPort-AV: E=Sophos;i="6.00,194,1681185600"; 
   d="scan'208";a="113007428"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 2/4] x86/spec-ctrl: Synthesize RSBA/RRSBA bits with older microcode
Date: Fri, 26 May 2023 12:06:54 +0100
Message-ID: <20230526110656.4018711-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

In order to level a VM safely for migration, the toolstack needs to know the
RSBA/RRSBA properties of the CPU, whether or not they happen to be enumerated.

Synthesize the bits when missing.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/include/asm/cpufeature.h |  1 +
 xen/arch/x86/spec_ctrl.c              | 50 +++++++++++++++++++++++----
 2 files changed, 44 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 50235f098d70..08e3eedd1280 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -192,6 +192,7 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_tsx_ctrl        boot_cpu_has(X86_FEATURE_TSX_CTRL)
 #define cpu_has_taa_no          boot_cpu_has(X86_FEATURE_TAA_NO)
 #define cpu_has_fb_clear        boot_cpu_has(X86_FEATURE_FB_CLEAR)
+#define cpu_has_rrsba           boot_cpu_has(X86_FEATURE_RRSBA)
 
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 0774d40627dd..2647784615cc 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -578,7 +578,10 @@ static bool __init check_smt_enabled(void)
     return false;
 }
 
-/* Calculate whether Retpoline is known-safe on this CPU. */
+/*
+ * Calculate whether Retpoline is known-safe on this CPU.  Synthesize missing
+ * RSBA/RRSBA bits when running with old microcode.
+ */
 static bool __init retpoline_calculations(void)
 {
     unsigned int ucode_rev = this_cpu(cpu_sig).rev;
@@ -592,13 +595,18 @@ static bool __init retpoline_calculations(void)
         return false;
 
     /*
-     * RSBA may be set by a hypervisor to indicate that we may move to a
-     * processor which isn't retpoline-safe.
-     *
      * Processors offering Enhanced IBRS are not guarenteed to be
      * repoline-safe.
      */
-    if ( cpu_has_rsba || cpu_has_eibrs )
+    if ( cpu_has_eibrs )
+        goto unsafe_maybe_fixup_rrsba;
+
+    /*
+     * RSBA is explicitly enumerated in some cases, but may also be set by a
+     * hypervisor to indicate that we may move to a processor which isn't
+     * retpoline-safe.
+     */
+    if ( cpu_has_rsba )
         return false;
 
     switch ( boot_cpu_data.x86_model )
@@ -648,6 +656,8 @@ static bool __init retpoline_calculations(void)
 
         /*
          * Skylake, Kabylake and Cannonlake processors are not retpoline-safe.
+         * Note: the eIBRS-capable steppings are filtered out earlier, so the
+         * remainder here are the ones which suffer only RSBA behaviour.
          */
     case 0x4e: /* Skylake M */
     case 0x55: /* Skylake X */
@@ -656,7 +666,7 @@ static bool __init retpoline_calculations(void)
     case 0x67: /* Cannonlake? */
     case 0x8e: /* Kabylake M */
     case 0x9e: /* Kabylake D */
-        return false;
+        goto unsafe_maybe_fixup_rsba;
 
         /*
          * Atom processors before Goldmont Plus/Gemini Lake are retpoline-safe.
@@ -687,6 +697,32 @@ static bool __init retpoline_calculations(void)
     if ( safe )
         return true;
 
+    /*
+     * The meaning of the RSBA and RRSBA bits has evolved over time.  The
+     * agreed-upon meaning at the time of writing (May 2023) is thus:
+     *
+     * - RSBA (RSB Alternative) means that an RSB may fall back to an
+     *   alternative predictor on underflow.  Skylake uarch and later all have
+     *   this property.  Broadwell too, when running microcode versions prior
+     *   to Jan 2018.
+     *
+     * - All eIBRS-capable processors suffer RSBA, but eIBRS also introduces
+     *   tagging of predictions with the mode in which they were learned.  So
+     *   when eIBRS is active, RSBA becomes RRSBA (Restricted RSBA).
+     *
+     * Some parts (Broadwell) are not expected to ever enumerate this
+     * behaviour directly.  Other parts have differing enumeration with
+     * microcode version.  Fix up Xen's idea, so we can advertise them safely
+     * to guests, and so toolstacks can level a VM safely for migration.
+     */
+ unsafe_maybe_fixup_rrsba:
+    if ( !cpu_has_rrsba )
+        setup_force_cpu_cap(X86_FEATURE_RRSBA);
+
+ unsafe_maybe_fixup_rsba:
+    if ( !cpu_has_rsba )
+        setup_force_cpu_cap(X86_FEATURE_RSBA);
+
     return false;
 }
 
@@ -1146,7 +1182,7 @@ void __init init_speculation_mitigations(void)
             thunk = THUNK_JMP;
     }
 
-    /* Determine if retpoline is safe on this CPU. */
+    /* Determine if retpoline is safe on this CPU.  Fix up RSBA/RRSBA enumerations. */
     retpoline_safe = retpoline_calculations();
 
     /*
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 26 11:07:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 11:07:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540057.841500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIG-0006ds-Pq; Fri, 26 May 2023 11:07:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540057.841500; Fri, 26 May 2023 11:07:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIG-0006dl-N4; Fri, 26 May 2023 11:07:36 +0000
Received: by outflank-mailman (input) for mailman id 540057;
 Fri, 26 May 2023 11:07:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gDTS=BP=citrix.com=prvs=5031e17c9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2VIF-0006de-GA
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 11:07:35 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 822189af-fbb5-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 13:07:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 822189af-fbb5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685099252;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=anpmt4luKppeT0MDr6VWEfx6VCbmSawCR46DmW0f5Hg=;
  b=dvJg3jA4R6QyLT8roDDUsv6TxrgBVbrMVBc30EnK7PTmIqpCh+lhwjf6
   B5IVdmZX71+kzkGxsm1pXYoJu7vgS4plMl/BfB9txlH9NvhjXgAG5tNd3
   B/VSD1dOqWpBndMfPndATRmuyzK+Clnt0SB1Ez9UfcIUXPWr1WHJ84prr
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 113007427
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:cickcK/WINsFFjzDL4PlDrUD836TJUtcMsCJ2f8bNWPcYEJGY0x3x
 msdDG/UOfaKYmP0eo0nO97loExTuZSGx4Q3SAtupS88E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ird7ks31BjOkGlA5AdmOKoV5AW2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDkkS5
 M4oMWs2ayvS3c6sxou7WM1Vq8k8eZyD0IM34hmMzBncBPciB5vCX7/L9ZlT2zJYasJmRKiEI
 ZBDMHw2MUqGOkcUUrsUIMtWcOOAr3/zaTBH7nmSorI6+TP7xw1tyrn9dtHSf7RmQO0Mxx3A/
 j2apTuR7hcyLfvHlhS94HiVp8DQuT/Ad6gCJJr/z6s/6LGU7jNKU0BHPbehmtGph0j7V99BJ
 kg8/is1sbN05EGtVsP6XRCzvDiDpBF0ZjZLO7RkskfXkPOSulvHQDFeFVatdeDKqudqVA4az
 wSymui4XxB1toSVW1ak27qL+Gba1TcuEUcOYioNTA0g6tbloZ0ugh+ncuuPAJJZnfWuR2iun
 mniQDwWwuxK0JVVj/nTEUXv2WrEm3TfcuIiCuw7tEqB5xgxWoOqbpfABbPzvacZd9bxorVsU
 RE5dymiAAImV8nleM+lGr9l8FSVCxGtblXhbaZHRcVJythU0yfLkXpsyD9/Plx1Fc0PZCXkZ
 kTe0SsIus8OZCD7MfMuPdPrYyjP8UQGPY65PhwzRoMUCqWdiSfdpH0+DaJu9zyFfLcQfVEXZ
 s7ALJfE4YcyAqV71jumL9ogPUsQ7nlmnwv7HMmrpylLJJLCPBZ5v59ZagrRBg34hYvYyDjoH
 yF3bZbSkEkHC7SvO0E6M+c7dDg3EJTyPriuw+Q/SwJJClE5cI39I5c9GY8cRrE=
IronPort-HdrOrdr: A9a23:Yx0NpK1fMEVRptAoccEy6AqjBEQkLtp133Aq2lEZdPU0SKGlfg
 6V/MjztCWE7gr5PUtLpTnuAsa9qB/nm6KdpLNhX4tKPzOW31dATrsSjrcKqgeIc0HDH6xmpM
 JdmsBFY+EYZmIK6foSjjPYLz4hquP3j5xBh43lvglQpdcBUdAQ0+97YDzrYnGfXGN9dOME/A
 L33Ls7m9KnE05nFviTNz0+cMXogcbEr57iaQ5uPW9a1OHf5QnYk4ITCnKjr20jbw8=
X-Talos-CUID: 9a23:cGaxMW9suFZKIxMsw72VvxIFKPA8KUbY9yaKD1W+MGZ7bIS4REDFrQ==
X-Talos-MUID: 9a23:FStVZgt3Zg1Ra3R0I82nnRY6a+lQ8a6XKW8StYk9lOOUBQs3AmLI
X-IronPort-AV: E=Sophos;i="6.00,194,1681185600"; 
   d="scan'208";a="113007427"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH 2/4] x86/spec-ctrl: Synthesize missing RSBA/RRSBA bits
Date: Fri, 26 May 2023 12:06:53 +0100
Message-ID: <20230526110656.4018711-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

---
 xen/arch/x86/include/asm/cpufeature.h |  1 +
 xen/arch/x86/spec_ctrl.c              | 50 +++++++++++++++++++++++----
 2 files changed, 44 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 50235f098d70..08e3eedd1280 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -192,6 +192,7 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_tsx_ctrl        boot_cpu_has(X86_FEATURE_TSX_CTRL)
 #define cpu_has_taa_no          boot_cpu_has(X86_FEATURE_TAA_NO)
 #define cpu_has_fb_clear        boot_cpu_has(X86_FEATURE_FB_CLEAR)
+#define cpu_has_rrsba           boot_cpu_has(X86_FEATURE_RRSBA)
 
 /* Synthesized. */
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 0774d40627dd..daf77d77e139 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -578,7 +578,10 @@ static bool __init check_smt_enabled(void)
     return false;
 }
 
-/* Calculate whether Retpoline is known-safe on this CPU. */
+/*
+ * Calculate whether Retpoline is known-safe on this CPU.  Fixes up missing
+ * RSBA/RRSBA enumeration from older microcode versions.
+ */
 static bool __init retpoline_calculations(void)
 {
     unsigned int ucode_rev = this_cpu(cpu_sig).rev;
@@ -592,13 +595,18 @@ static bool __init retpoline_calculations(void)
         return false;
 
     /*
-     * RSBA may be set by a hypervisor to indicate that we may move to a
-     * processor which isn't retpoline-safe.
-     *
      * Processors offering Enhanced IBRS are not guarenteed to be
      * repoline-safe.
      */
-    if ( cpu_has_rsba || cpu_has_eibrs )
+    if ( cpu_has_eibrs )
+        goto unsafe_maybe_fixup_rrsba;
+
+    /*
+     * RSBA is explicitly enumerated in some cases, but may also be set by a
+     * hypervisor to indicate that we may move to a processor which isn't
+     * retpoline-safe.
+     */
+    if ( cpu_has_rsba )
         return false;
 
     switch ( boot_cpu_data.x86_model )
@@ -648,6 +656,8 @@ static bool __init retpoline_calculations(void)
 
         /*
          * Skylake, Kabylake and Cannonlake processors are not retpoline-safe.
+         * Note: the eIBRS-capable steppings are filtered out earlier, so the
+         * remainder here are the ones which suffer only RSBA behaviour.
          */
     case 0x4e: /* Skylake M */
     case 0x55: /* Skylake X */
@@ -656,7 +666,7 @@ static bool __init retpoline_calculations(void)
     case 0x67: /* Cannonlake? */
     case 0x8e: /* Kabylake M */
     case 0x9e: /* Kabylake D */
-        return false;
+        goto unsafe_maybe_fixup_rsba;
 
         /*
          * Atom processors before Goldmont Plus/Gemini Lake are retpoline-safe.
@@ -687,6 +697,32 @@ static bool __init retpoline_calculations(void)
     if ( safe )
         return true;
 
+    /*
+     * The meaning of the RSBA and RRSBA bits has evolved over time.  The
+     * agreed-upon meaning at the time of writing (May 2023) is thus:
+     *
+     * - RSBA (RSB Alternative) means that an RSB may fall back to an
+     *   alternative predictor on underflow.  Skylake uarch and later all have
+     *   this property.  Broadwell too, when running microcode versions prior
+     *   to Jan 2018.
+     *
+     * - All eIBRS-capable processors suffer RSBA, but eIBRS also introduces
+     *   tagging of predictions with the mode in which they were learned.  So
+     *   when eIBRS is active, RSBA becomes RRSBA (Restricted RSBA).
+     *
+     * Some parts (Broadwell) are not expected to ever enumerate this
+     * behaviour directly.  Other parts have differing enumeration with
+     * microcode version.  Fix up Xen's idea, so we can advertise them safely
+     * to guests, and so toolstacks can level a VM safely for migration.
+     */
+ unsafe_maybe_fixup_rrsba:
+    if ( !cpu_has_rrsba )
+        setup_force_cpu_cap(X86_FEATURE_RRSBA);
+
+ unsafe_maybe_fixup_rsba:
+    if ( !cpu_has_rsba )
+        setup_force_cpu_cap(X86_FEATURE_RSBA);
+
     return false;
 }
 
@@ -1146,7 +1182,7 @@ void __init init_speculation_mitigations(void)
             thunk = THUNK_JMP;
     }
 
-    /* Determine if retpoline is safe on this CPU. */
+    /* Determine if retpoline is safe on this CPU.  Fix up RSBA/RRSBA enumerations. */
     retpoline_safe = retpoline_calculations();
 
     /*
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 26 11:07:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 11:07:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540059.841515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIJ-0006wQ-Cf; Fri, 26 May 2023 11:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540059.841515; Fri, 26 May 2023 11:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIJ-0006w8-6A; Fri, 26 May 2023 11:07:39 +0000
Received: by outflank-mailman (input) for mailman id 540059;
 Fri, 26 May 2023 11:07:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gDTS=BP=citrix.com=prvs=5031e17c9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2VII-0006de-F4
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 11:07:38 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 84a6571e-fbb5-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 13:07:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84a6571e-fbb5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685099256;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=wC2o+30LdUKtUUpwusZsSoxUr8yh/q6D9ho4+eihRUc=;
  b=gVfLAo0aa7IwytHWZHndmHdlkXSYTN7/yNj8L4kW70EEzAKVURQSRYaB
   woJZXTbfNfz8H2EAFCtpaRnKiniv5bIKW8O0ngD+6ntuG5v+aUrETWM7R
   dk/webmxF/tKuYJZH7L/z9Tf3pPQU+knZ8517FjeaeUl/ACJ3aKB2ZM7y
   c=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 113007431
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 4/4] x86/cpu-policy: Derive {,R}RSBA for guest policies
Date: Fri, 26 May 2023 12:06:56 +0100
Message-ID: <20230526110656.4018711-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The RSBA bit, "RSB Alternative", means that the RSB may use alternative
predictors when empty.  From a practical point of view, this means "Retpoline
not safe".

Enhanced IBRS (officially IBRS_ALL in Intel's docs, previously IBRS_ATT) is a
statement that IBRS is implemented in hardware (as opposed to the form
retrofitted to existing CPUs in microcode).

The RRSBA bit, "Restricted-RSBA", is a combination of RSBA, and the eIBRS
property that predictions are tagged with the mode in which they were learnt.
Therefore, it means "when eIBRS is active, the RSB may fall back to
alternative predictors but restricted to the current prediction mode".  As
such, it's a stronger statement than RSBA, but still means "Retpoline not safe".

Add feature dependencies for EIBRS and RRSBA.  While technically they're not
linked, absolutely nothing good can come of letting the guest see RRSBA without
EIBRS.  Furthermore, we use this dependency to simplify the max/default
derivation logic.

The max policies get RSBA and RRSBA unconditionally set (with the EIBRS
dependency maybe hiding RRSBA).  We can run any VM, even if it has been told
"somewhere else, Retpoline isn't safe".

The default policies inherit the host settings, because the guest wants to run
with as few (anti)features as it can safely get away with.
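
The max/default derivation described above can be sketched in Python (illustrative only: the bit constants and helper names here are hypothetical, not Xen's actual cpu-policy code, but the masking logic mirrors the patch):

```python
# Hypothetical feature bits standing in for the MSR_ARCH_CAPS featureset word.
RSBA  = 1 << 0   # "RSB Alternative"
RRSBA = 1 << 1   # "Restricted RSBA"
EIBRS = 1 << 2   # Enhanced IBRS

def apply_deps(fs: int) -> int:
    """RRSBA is a dependent feature of EIBRS: hide it when eIBRS is absent."""
    if not fs & EIBRS:
        fs &= ~RRSBA
    return fs

def max_policy(host_caps: int) -> int:
    """Max policy: set RSBA and RRSBA unconditionally, so any VM which was
    told "Retpoline not safe" elsewhere can still run here."""
    return apply_deps(host_caps | RSBA | RRSBA)

def default_policy(max_fs: int, host_caps: int) -> int:
    """Default policy: discard the unconditional max settings and inherit
    the host's actual RSBA/RRSBA values."""
    fs = max_fs & ~(RSBA | RRSBA)
    fs |= host_caps & (RSBA | RRSBA)
    return apply_deps(fs)
```

On an eIBRS host the default policy keeps RRSBA; on a pre-eIBRS host the dependency logic strips it even from the max policy.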

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c | 25 +++++++++++++++++++++++++
 xen/tools/gen-cpuid.py    |  5 ++++-
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index bdbc5660acd4..eb1ecb75f593 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -423,8 +423,14 @@ static void __init guest_common_max_feature_adjustments(uint32_t *fs)
          * Retpoline not safe)", so these need to be visible to a guest in all
          * cases, even when it's only some other server in the pool which
          * suffers the identified behaviour.
+         *
+         * We can always run any VM which has previously (or will
+         * subsequently) run on hardware where Retpoline is not safe.  Note:
+         * The dependency logic may hide RRSBA for other reasons.
          */
         __set_bit(X86_FEATURE_ARCH_CAPS, fs);
+        __set_bit(X86_FEATURE_RSBA, fs);
+        __set_bit(X86_FEATURE_RRSBA, fs);
     }
 }
 
@@ -432,6 +438,25 @@ static void __init guest_common_default_feature_adjustments(uint32_t *fs)
 {
     if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
     {
+        /*
+         * The {,R}RSBA bits under virt mean "you might migrate somewhere
+         * where retpoline is not safe".  In particular, a VM's settings may
+         * not be applicable to the current host.
+         *
+         * Discard the settings inherited from the max policy, and feed in
+         * the host values.  The ideal case for a VM is for neither of these
+         * bits to be set, but toolstacks must accumulate them across anywhere
+         * the VM might migrate to, in case any possible destination happens
+         * to be unsafe.
+         *
+         * Note: The dependency logic might hide RRSBA for other reasons.
+         */
+        fs[FEATURESET_m10Al] &= ~(cpufeat_mask(X86_FEATURE_RSBA) |
+                                  cpufeat_mask(X86_FEATURE_RRSBA));
+        fs[FEATURESET_m10Al] |=
+            host_cpu_policy.arch_caps.lo & (cpufeat_mask(X86_FEATURE_RSBA) |
+                                            cpufeat_mask(X86_FEATURE_RRSBA));
+
         /*
          * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
          * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index f28ff708a2fc..22294a26adc0 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -318,7 +318,7 @@ def crunch_numbers(state):
         # IBRSB/IBRS, and we pass this MSR directly to guests.  Treating them
         # as dependent features simplifies Xen's logic, and prevents the guest
         # from seeing implausible configurations.
-        IBRSB: [STIBP, SSBD, INTEL_PSFD],
+        IBRSB: [STIBP, SSBD, INTEL_PSFD, EIBRS],
         IBRS: [AMD_STIBP, AMD_SSBD, PSFD,
                IBRS_ALWAYS, IBRS_FAST, IBRS_SAME_MODE],
         AMD_STIBP: [STIBP_ALWAYS],
@@ -328,6 +328,9 @@ def crunch_numbers(state):
 
         # The ARCH_CAPS CPUID bit enumerates the availability of the whole register.
         ARCH_CAPS: list(range(RDCL_NO, RDCL_NO + 64)),
+
+        # The behaviour described by RRSBA depends on eIBRS being active.
+        EIBRS: [RRSBA],
     }
 
     deep_features = tuple(sorted(deps.keys()))
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 26 11:07:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 11:07:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540058.841510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIJ-0006so-0U; Fri, 26 May 2023 11:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540058.841510; Fri, 26 May 2023 11:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VII-0006sg-Te; Fri, 26 May 2023 11:07:38 +0000
Received: by outflank-mailman (input) for mailman id 540058;
 Fri, 26 May 2023 11:07:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gDTS=BP=citrix.com=prvs=5031e17c9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2VIH-0006de-6X
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 11:07:37 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 84906ae6-fbb5-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 13:07:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84906ae6-fbb5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685099255;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=j+U/re5HOnLv+lutE3nsZHY4bWNR15bqPcginliGyR4=;
  b=RlwHTCK0HksvNFBJDce4kObAFGvWZzSfxwcdjZuEcPQbkIxaArH/dKFq
   jbdc29yjI8Nk4kkN8GRA3Q3cgivWspW6vs/zQXeNs21dMsexlHrOoTTle
   zNFJ1PVW1l4qi7XRAvOm/BXMHH9z4S08bg/0LZN9FjtmMuSOM1MOY8lKQ
   E=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 113007426
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 0/4] x86: RSBA and RRSBA handling
Date: Fri, 26 May 2023 12:06:51 +0100
Message-ID: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This series deals with the handling of the RSBA and RRSBA bits across all
parts and all mistakes encountered in various microcode versions.

With it in place, here are some examples from various generations of Intel
hardware:

  BDX Raw
  BDX Host
  BDX HVM Max   rsba

  KBL Raw       rsba misc-pkg-ctrl energy-ctrl
  KBL Host      rsba misc-pkg-ctrl energy-ctrl
  KBL HVM Max   rsba

  SKX Raw       rsba misc-pkg-ctrl energy-ctrl
  SKX Host      rsba misc-pkg-ctrl energy-ctrl
  SKX HVM Max   rsba

  CFL Raw       rdcl-no eibrs skip-l1dfl mds-no tsx-ctrl misc-pkg-ctrl energy-ctrl fb-clear
  CFL Host      rdcl-no eibrs rsba skip-l1dfl mds-no tsx-ctrl misc-pkg-ctrl energy-ctrl fb-clear rrsba
  CFL HVM Max   rdcl-no eibrs rsba mds-no fb-clear rrsba

  CLX Raw       rdcl-no eibrs skip-l1dfl mds-no tsx-ctrl misc-pkg-ctrl energy-ctrl sbdr-ssdp-no psdp-no fb-clear rrsba
  CLX Host      rdcl-no eibrs rsba skip-l1dfl mds-no tsx-ctrl misc-pkg-ctrl energy-ctrl sbdr-ssdp-no psdp-no fb-clear rrsba
  CLX HVM Max   rdcl-no eibrs rsba mds-no sbdr-ssdp-no psdp-no fb-clear rrsba

  SPR Raw       rdcl-no eibrs skip-l1dfl mds-no if-pschange-mc-no tsx-ctrl taa-no misc-pkg-ctrl energy-ctrl
  SPR Host      rdcl-no eibrs rsba skip-l1dfl mds-no if-pschange-mc-no tsx-ctrl taa-no misc-pkg-ctrl energy-ctrl rrsba
  SPR HVM Max   rdcl-no eibrs rsba mds-no if-pschange-mc-no taa-no rrsba


Of note:
 * The SPR CPU is pre-release and didn't get the MMIO ucode in the end
   (sbdr-ssdp-no psdp-no fb-clear).
 * SKX/KBL enumerate RSBA following the energy filtering microcode.  Prior to
   that, they don't enumerate MSR_ARCH_CAPS at all.
 * CFL and SPR fail to enumerate both RSBA and RRSBA.  CLX fails to enumerate
   RSBA.  These should be addressed in due course.
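
Under virt, {,R}RSBA mean "you might migrate somewhere retpoline is not safe", so a toolstack has to take the union of these (anti)features across every possible destination host. A minimal sketch of that accumulation (hypothetical helper, not an actual libxl/xl interface):

```python
def pool_unsafe_features(host_featuresets):
    """Union the 'Retpoline not safe' (anti)features across all hosts a VM
    might migrate to: if any possible destination suffers the behaviour,
    the VM must be told about it up front."""
    acc = set()
    for feats in host_featuresets:
        acc |= feats & {"rsba", "rrsba"}
    return acc
```

For example, a pool mixing an eIBRS-only host with a Skylake-era RSBA host yields {"rsba"} for every VM in the pool.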


Andrew Cooper (4):
  x86/spec-ctrl: Rename retpoline_safe() to retpoline_calculations()
  x86/spec-ctrl: Synthesize missing RSBA/RRSBA bits
  x86/cpu-policy: Rearrange guest_common_default_feature_adjustments()
  x86/cpu-policy: Derive {,R}RSBA for guest policies

 xen/arch/x86/cpu-policy.c             | 59 ++++++++++++++------
 xen/arch/x86/include/asm/cpufeature.h |  1 +
 xen/arch/x86/spec_ctrl.c              | 78 +++++++++++++++++++++------
 xen/tools/gen-cpuid.py                |  5 +-
 4 files changed, 111 insertions(+), 32 deletions(-)

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 26 11:07:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 11:07:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540061.841540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIN-0007gb-1E; Fri, 26 May 2023 11:07:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540061.841540; Fri, 26 May 2023 11:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIM-0007gR-TW; Fri, 26 May 2023 11:07:42 +0000
Received: by outflank-mailman (input) for mailman id 540061;
 Fri, 26 May 2023 11:07:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gDTS=BP=citrix.com=prvs=5031e17c9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2VIJ-0006sm-Tl
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 11:07:39 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 84d6f7e6-fbb5-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 13:07:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84d6f7e6-fbb5-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685099257;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=tRC3UNaCj65Aajc92g1f04RU5FXAg5eRZ78lBWu320s=;
  b=LM1gi+wmkEVqhGjUQ7uNSs8gDNE+T9yfECPD10vO/zN6lcPbZhf9wxJ0
   ArK/L+GdX8fwxvzLj6ZVNRqGGxeBvFd1vKjk5CG/KB+f8YliAIaGbqb0S
   Z05gtVJakFiqbVsZ90PJP8/dsciMpTDEr2omoWG9VdRFFQxNcqsBaKYRr
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109294592
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 1/4] x86/spec-ctrl: Rename retpoline_safe() to retpoline_calculations()
Date: Fri, 26 May 2023 12:06:52 +0100
Message-ID: <20230526110656.4018711-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This is prep work, split out to simplify the diff of the following change.

 * Rename to retpoline_calculations(), and call unconditionally.  It is
   shortly going to synthesize missing enumerations required for guest safety.
 * For Broadwell, store the ucode revision calculation in a variable and fall
   out of the bottom of the switch statement.

No functional change.
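
The fall-out-of-the-switch pattern can be sketched as follows (illustrative Python, not the Xen C; the per-model minimum microcode revisions are taken from the patch, the function name is hypothetical):

```python
def retpoline_safe_sketch(model: int, ucode_rev: int) -> bool:
    """Instead of returning from each case, record the result in a flag and
    fall through to one common exit, where the following change can
    synthesize missing RSBA/RRSBA enumerations for every path."""
    min_rev = {
        0x3d: 0x2a,        # Broadwell
        0x47: 0x1d,        # Broadwell H
        0x4f: 0xb000021,   # Broadwell EP/EX
    }
    # Unknown models are conservatively treated as not retpoline-safe.
    safe = ucode_rev >= min_rev.get(model, float("inf"))
    # Common exit point shared by all Broadwell cases.
    return safe
```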

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/spec_ctrl.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 50d467f74cf8..0774d40627dd 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -579,9 +579,10 @@ static bool __init check_smt_enabled(void)
 }
 
 /* Calculate whether Retpoline is known-safe on this CPU. */
-static bool __init retpoline_safe(void)
+static bool __init retpoline_calculations(void)
 {
     unsigned int ucode_rev = this_cpu(cpu_sig).rev;
+    bool safe = false;
 
     if ( boot_cpu_data.x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
         return true;
@@ -626,18 +627,18 @@ static bool __init retpoline_safe(void)
          * versions.
          */
     case 0x3d: /* Broadwell */
-        return ucode_rev >= 0x2a;
+        safe = ucode_rev >= 0x2a;      break;
     case 0x47: /* Broadwell H */
-        return ucode_rev >= 0x1d;
+        safe = ucode_rev >= 0x1d;      break;
     case 0x4f: /* Broadwell EP/EX */
-        return ucode_rev >= 0xb000021;
+        safe = ucode_rev >= 0xb000021; break;
     case 0x56: /* Broadwell D */
         switch ( boot_cpu_data.x86_mask )
         {
-        case 2:  return ucode_rev >= 0x15;
-        case 3:  return ucode_rev >= 0x7000012;
-        case 4:  return ucode_rev >= 0xf000011;
-        case 5:  return ucode_rev >= 0xe000009;
+        case 2:  safe = ucode_rev >= 0x15;      break;
+        case 3:  safe = ucode_rev >= 0x7000012; break;
+        case 4:  safe = ucode_rev >= 0xf000011; break;
+        case 5:  safe = ucode_rev >= 0xe000009; break;
         default:
            printk("Unrecognised CPU stepping %#x - assuming not retpoline safe\n",
                    boot_cpu_data.x86_mask);
@@ -681,6 +682,12 @@ static bool __init retpoline_safe(void)
                boot_cpu_data.x86_model);
         return false;
     }
+
+    /* Only Broadwell gets here. */
+    if ( safe )
+        return true;
+
+    return false;
 }
 
 /*
@@ -1113,7 +1120,7 @@ void __init init_speculation_mitigations(void)
 {
     enum ind_thunk thunk = THUNK_DEFAULT;
     bool has_spec_ctrl, ibrs = false, hw_smt_enabled;
-    bool cpu_has_bug_taa;
+    bool cpu_has_bug_taa, retpoline_safe;
 
     hw_smt_enabled = check_smt_enabled();
 
@@ -1139,6 +1146,9 @@ void __init init_speculation_mitigations(void)
             thunk = THUNK_JMP;
     }
 
+    /* Determine if retpoline is safe on this CPU. */
+    retpoline_safe = retpoline_calculations();
+
     /*
      * Has the user specified any custom BTI mitigations?  If so, follow their
      * instructions exactly and disable all heuristics.
@@ -1160,7 +1170,7 @@ void __init init_speculation_mitigations(void)
              * On all hardware, we'd like to use retpoline in preference to
              * IBRS, but only if it is safe on this hardware.
              */
-            if ( retpoline_safe() )
+            if ( retpoline_safe )
                 thunk = THUNK_RETPOLINE;
             else if ( has_spec_ctrl )
                 ibrs = true;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 26 11:07:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 11:07:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540062.841546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIN-0007ka-F4; Fri, 26 May 2023 11:07:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540062.841546; Fri, 26 May 2023 11:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VIN-0007j7-77; Fri, 26 May 2023 11:07:43 +0000
Received: by outflank-mailman (input) for mailman id 540062;
 Fri, 26 May 2023 11:07:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gDTS=BP=citrix.com=prvs=5031e17c9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2VIK-0006sm-4j
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 11:07:40 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86c738e5-fbb5-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 13:07:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86c738e5-fbb5-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685099258;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=EimvUEFnHc9YAG/yN5Pgzk9rUwYKNv9peudeFTVu77c=;
  b=PvKL+afdQnaNWhiSZ5B9SnzxTg7o+V21zHDXathmm1JiN3DJLKq+t8Pk
   LaKKELoh0eF9Hy0JZaEmLyvkTkvz0kdOF7cECWLpJ0AORxlfxB8Pd6vKe
   0+ZNHWmngystJIpGpggQ9nxaA6GoeOChk0MIKxca+ABvWuKtFqhrppCER
   k=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 109294593
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 3/4] x86/cpu-policy: Rearrange guest_common_default_feature_adjustments()
Date: Fri, 26 May 2023 12:06:55 +0100
Message-ID: <20230526110656.4018711-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This is prep work, split out to simplify the diff of the following change.

 * Split the INTEL check out of the IvyBridge RDRAND check, as the former will
   be reused.
 * Use asm/intel-family.h to remove a raw 0x3a model number.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu-policy.c | 34 +++++++++++++++++++---------------
 1 file changed, 19 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 74266d30b551..bdbc5660acd4 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -10,6 +10,7 @@
 #include <asm/cpu-policy.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/svm/svm.h>
+#include <asm/intel-family.h>
 #include <asm/msr-index.h>
 #include <asm/paging.h>
 #include <asm/setup.h>
@@ -429,21 +430,24 @@ static void __init guest_common_max_feature_adjustments(uint32_t *fs)
 
 static void __init guest_common_default_feature_adjustments(uint32_t *fs)
 {
-    /*
-     * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
-     * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
-     * compensate.
-     *
-     * Mitigate by hiding RDRAND from guests by default, unless explicitly
-     * overridden on the Xen command line (cpuid=rdrand).  Irrespective of the
-     * default setting, guests can use RDRAND if explicitly enabled
-     * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
-     * previously using RDRAND can migrate in.
-     */
-    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
-         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
-         cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
-        __clear_bit(X86_FEATURE_RDRAND, fs);
+    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+    {
+        /*
+         * IvyBridge client parts suffer from leakage of RDRAND data due to SRBDS
+         * (XSA-320 / CVE-2020-0543), and won't be receiving microcode to
+         * compensate.
+         *
+         * Mitigate by hiding RDRAND from guests by default, unless explicitly
+         * overridden on the Xen command line (cpuid=rdrand).  Irrespective of the
+         * default setting, guests can use RDRAND if explicitly enabled
+         * (cpuid="host,rdrand=1") in the VM's config file, and VMs which were
+         * previously using RDRAND can migrate in.
+         */
+        if ( boot_cpu_data.x86 == 6 &&
+             boot_cpu_data.x86_model == INTEL_FAM6_IVYBRIDGE &&
+             cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
+            __clear_bit(X86_FEATURE_RDRAND, fs);
+    }
 
     /*
      * On certain hardware, speculative or errata workarounds can result in
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Fri May 26 11:09:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 11:09:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540079.841560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VJj-0001Se-Qk; Fri, 26 May 2023 11:09:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540079.841560; Fri, 26 May 2023 11:09:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VJj-0001SV-NG; Fri, 26 May 2023 11:09:07 +0000
Received: by outflank-mailman (input) for mailman id 540079;
 Fri, 26 May 2023 11:09:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gDTS=BP=citrix.com=prvs=5031e17c9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2VJi-0001SF-Ie
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 11:09:06 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b95a61b4-fbb5-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 13:09:05 +0200 (CEST)
Received: from mail-mw2nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 26 May 2023 07:08:58 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by PH7PR03MB7297.namprd03.prod.outlook.com (2603:10b6:510:24d::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.27; Fri, 26 May
 2023 11:08:56 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.018; Fri, 26 May 2023
 11:08:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b95a61b4-fbb5-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685099345;
  h=message-id:date:from:subject:to:references:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=jOKHXyI8yUBES1UXZ5ks+cEpqCkC8PveEW4UQ9MmEtI=;
  b=ZxCfU6lVB+H3PuNMvxq5tBnu8WSvGv8om+LLoca9vSljQTNa8fyfWYn3
   kqw35xW9IxLGKN0DDXQCrQ30H7GZXIx/TeH5b3ZdIbCWRhAu3qBo7b1xC
   HV7zcClJD/pLaR4BF3vnL99j/jTd1FYwTOchQhbkJyHZWxCXkGXvfE6fV
   c=;
X-IronPort-RemoteIP: 104.47.55.107
X-IronPort-MID: 109859152
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:ndZfJanIl4mtbo5l9m/0k97o5gyNJ0RdPkR7XQ2eYbSJt1+Wr1Gzt
 xIaXDvUPquIY2TzLtx0bou+p0NQuZHVzdZhSFM9rygwHyMWpZLJC+rCIxarNUt+DCFhoGFPt
 JxCN4aafKjYaleG+39B55C49SEUOZmgH+a6U6icfHgqH2eIcQ954Tp7gek1n4V0ttawBgKJq
 LvartbWfVSowFaYCEpNg064gE0p5KyaVA8w5ARkPqgW5gGGzRH5MbpETU2PByqgKmVrNrbSq
 9brlNmR4m7f9hExPdKp+p6TnpoiG+O60aCm0xK6aoD66vRwjnVaPpUTbZLwXXx/mTSR9+2d/
 f0W3XCGpaXFCYWX8AgVe0Ew/yiTpsSq8pefSZS0mZT7I0Er7xIAahihZa07FdRwxwp5PY1B3
 dFfGAxSLVPbvtK74bvgFbQy3N14EfC+aevzulk4pd3YJdAPZMmbBoD1v5pf1jp2gd1SF/HDY
 cZfcSBocBnLfxxIPBEQFY46m+CrwHL4dlW0qnrM/fZxvzeVkVE3ieCyWDbWUoXiqcF9t0CUv
 G/ZuU/+BQkXLoe3wjuZ6HO8wOTImEsXXapLTOzoq68z3Qf7Kmo7ODAUfGu8o/6C00uRAo5ZA
 HMX/C8qsv1nnKCsZpynN/Gim1aUsxhZV9dOHukS7ACW1rGS8wufHnIDTDNKdJohrsBeeNAx/
 lqAntesACM1trSQECqZ7u3N9Wz0PjUJJ2gfYyNCVREC/9TovIA0iFTIU8pnF6m2yNbyHFkc3
 gy3kcT3vJ1L5eZj6klx1Qmvb+6EznQRcjMI2w==
IronPort-HdrOrdr: A9a23:GrAk6aPVUxXO+cBcTjGjsMiBIKoaSvp037BK7S1MoH1uA6ilfq
 WV9sjzuiWatN98Yh8dcLO7Scy9qBHnhP1ICOAqVN/PYOCBggqVxelZhrcKqAeQeREWmNQ86U
 4aSdkYNDXxZ2IK8foT4mODYqkdKA/sytHXuQ/cpU0dPD2Dc8tbnmFE4p7wKDwNeOFBb6BJba
 a01458iBeLX28YVci/DmltZZm/mzWa/KiWGSLvHnQcmXKzsQ8=
X-Talos-CUID: 9a23:AFKXdW+Lx8zZFOlndvmVv2NTKvx6MT6F9UfdAkbjVjs3EqTOdGbFrQ==
X-Talos-MUID: =?us-ascii?q?9a23=3A4iFksg2VylPSowi+NhZRMNsngjUj04msIXAKk5I?=
 =?us-ascii?q?94syrbHB2EAieijGzXdpy?=
X-IronPort-AV: E=Sophos;i="6.00,194,1681185600"; 
   d="scan'208";a="109859152"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EXircX0PHbMBQulGBrbRzvmQEGVLHVkEfK6hTyxp3fOSM1qbZ3HAE2gShWyKQfKZAMg0aKMy4eOcM5x0nUNKuKgLKzIg0d+70R0fEfdWhSHSBKcws/g+n4oaVMRrV3osx4JP9Ci16dvGZ2NS4KQ3VwG5yxFksi+khjQXZBnjzG2i0UsYqPTYhNRFfVtbZ5lmbZanmJnF15dbd2wOZsjTiozBnAzAvWu9NC0vzWdU1qaY3LGDkVVyp2xD3ouj55tS+lSNkKpi0y+NJlaK9jHqIK/8xAnyGcba/GZQrd/bFt5OmJggNMLqJBV/lnHs+EPkId8BbFNRb4tri46xvRSL+w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YBYY5bMh3hQvbzjPhGt+16NXqWuUmSKHdlWdKj+FKT4=;
 b=L7LHNhJg5yeG51qstQLWGPyCAmBG/AO2g5kTkifyA/LidGC/w5hZddUrf24BGrre7Wd5Se9ny5T9lGQ+h4jVSNoDMDpsI9lZ2WEEZry2L94tj2MVxLmZXdUiMWzoqgqC0KKKqiKtgn3uwi4rVDLGcLGesIit41g2NADHwx6pTbm135M8kF/sBmcbkgE72dkKGzjFISLjDPq8bwcD2yiVzntgWi9+4EvM0QD/48175Ils86ZpazyQQ2nWw+N2S+dhh7oBrUta4fwAgB6qQDs1m1yI9VfDl/JS+kVi7SQkqkjbQxlVjQTWL2wgK/Gxjd7+8n7Y5G0RXoArPCtodmIZlg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YBYY5bMh3hQvbzjPhGt+16NXqWuUmSKHdlWdKj+FKT4=;
 b=knafUeEQPEgkUCLwNW7qxwNNtPzRlBLm7cfNlatCX7OHS5QjVVvL0WLKIEIKqUFFiW7MmqTwcaDHlkTpwyyYgSXEzCnQZDqu4go30tGiU+8qprjQL1n5tTAYvnj2aBNQA9azCiC5c9hfzVvRo+DiwJxCVP04D2r3wWuS2HpklGo=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <bf51f176-4076-8a1c-7cbe-c7ff24c1cf3e@citrix.com>
Date: Fri, 26 May 2023 12:08:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/4] x86/spec-ctrl: Synthesize missing RSBA/RRSBA bits
Content-Language: en-GB
To: xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-3-andrew.cooper3@citrix.com>
In-Reply-To: <20230526110656.4018711-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0595.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:295::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|PH7PR03MB7297:EE_
X-MS-Office365-Filtering-Correlation-Id: 06e19128-45f1-4e25-53af-08db5dd9987f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	u9jAPGNmDr8gNo8mhWlycW6OLkwBwzmzcc/X7gcjX6lghMp7c7kdcIaNAU5LmNiJoYl/zbmSVBcsBNk3D3122VFkUgOh22i00/jF+HVt23CYAmOyhHVu1v4sUPz1wL3zrUbyucgpZe2z+VXlu0vyfM7zNuk/OBNdjS1EgHTzNPObpkEkuhSkVdoIyrTo3Kj7E1XRqHexAZkHQSmkXYwSW8zkpYu2kcHvI8cGAPKrEHafFYw7cIUR7itIpnFhdHPmqHx12l9psoe8POqH5Y0md422rWJnnIgJb039uPGQC2n68ua9nSuxFq/sq8Dq8nzNx0jhpXkmfckMvWOvIt26ttHGDGrbzT0kzj005z6mPw4PCq/R2MPaysCOb5xWJhpF+BROrI0DWi8KmYyy528vpMvf+kEbENz2Ws3IwqshqpIN6JmmGh4BIqeHojmA7abQd+byepXw/VMo7E4pGutJIyCp4FYK+FMWrasVCXsPK30EqpPhchvghtwmky0NNNiOl1sIJp3Cxc5f6Xts4QT83PUo6skzPIgh9ys9q+NBMi4o8HtX/ptGPA397FTZpzUbsx/zbazjV5c1s26qn2ZAO4pI+a0zpEGQFYax+ttBE11dtCwsRCz72HziMApzw5G0x28E2d1Znd1UNVMUqSxDMg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(376002)(136003)(39860400002)(366004)(346002)(451199021)(5660300002)(8676002)(8936002)(2616005)(186003)(53546011)(4744005)(2906002)(86362001)(36756003)(83380400001)(82960400001)(31696002)(38100700002)(6506007)(6512007)(26005)(6916009)(316002)(66476007)(31686004)(66556008)(6666004)(66946007)(508600001)(6486002)(41300700001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?OFFlSUlsdjg0eHBrc2dmbWV1OXYvOERJSkRtbitlRng5S0VaMjY2N002aExo?=
 =?utf-8?B?MmJaTTh1K29qSU9tS0JHZTdIclFGWUtaOXhXd3ZEay9EdHJZb0xEaHdOS1FZ?=
 =?utf-8?B?QnZadjNHZzlRYTdEYUtjK3lyc0kxZ1ViVEdwM2JFUVVWUVBTSG1OcmtTVG9o?=
 =?utf-8?B?V25JQUlNalpLU2Z3MFV0YnFhZG1WSXFwSjhLQVlZMzUxUEZNWnBRZG91R1BG?=
 =?utf-8?B?OS8ySkM2Q0tuRUNpODFmRkhMWlZFVWJXdG5jbE5HZ0Fxc0dKaU82dGNyUWRQ?=
 =?utf-8?B?Y0c5WUFNTWhjTVFKRjF2aEVzbnBkUXVWZXhjSWJiVFVLN2l6RUtwN3BmNkNo?=
 =?utf-8?B?QjZ3Vlo4TEU0aGs2aE5KcVg2SW1tZkdqNU1PV2NubmZBck43bXV2dEF6Sysr?=
 =?utf-8?B?WStMWTYzV0pJczNwMy8yRHNGR2dKVHJSVnVwM2pldnF1eTNYMmhiTjkyQWc3?=
 =?utf-8?B?NDY3bDdBRmVVcEdUTVNQTDllVERaZjlSYk03RUFUM2QxMlltY2c1TDhRUUlP?=
 =?utf-8?B?VEVoS25Qa0lmeWw2OWNnTXcwN0w5bzZJVDd5OTBtTEp2OHZNZzR6dWFHYlR5?=
 =?utf-8?B?Mkx4WkxkaGMzZkZtUkt0anYwaXM2d20xTnk2YitEYmZjMlE0NU01QjlXQy92?=
 =?utf-8?B?UFBKbnd4b1lRd3RaVjdqR3Z6RzF2QzZVWGxURDI2YXBFRUtCWXZTRHluc3dl?=
 =?utf-8?B?eHM5YnFIeVZqY3NUZkdCbEFGV1kzQmRyK3kzK2Y1VWtlYTd3R3dlQVJDZWJu?=
 =?utf-8?B?VzF3bTZRcFN2NytTL3J6T1BhdmxoMFFIUVFXRjFtOUJLV0tmekJwM1RmQ3dv?=
 =?utf-8?B?dlBKZHRVOHdjKzVYTFZ5SjVDdTRsU3g5NnBvdzRCUFRuQlEvZW5ZOStnNTNq?=
 =?utf-8?B?ajZMRkFoT1lvQk91T1ZCWk5HdmNKTm5CZ0Noa2xLd3J2U2w0RWRRQVVVVStu?=
 =?utf-8?B?SHJLUXNFcmpvdllYSXRnY3hGVFZwTlNXekNVcTY4T2tzbUpndDgvK3IyWEJv?=
 =?utf-8?B?WXZRRkRRN0lJV0p1QXh0ak9tQzZOb3NGT3lzZjV5T3FSNGxoWSs1Ly9IbXQ0?=
 =?utf-8?B?Wjd3TTZjdE5aRTNPaFY0TkZTaTRPZVk3aFV2VkVkRnNPWXZmU3RQN0pGcUtq?=
 =?utf-8?B?YlRTTTlnN0tJam1Tem9vMEVVQmloVWtMRUt4RlN3OWkvWUs0Qk5RN3FhbTVr?=
 =?utf-8?B?VEVoakZmTlUzVEhBQjVBbEtCTCtrU0JRcnRZQk1WWktIYlpwdzRLSVBWL0tO?=
 =?utf-8?B?K2k3ajNZQmJPVWcrRCtHUTlvUHR3ckhNVW5iNUdyZjE2U0s4bkdOMlI2S0V6?=
 =?utf-8?B?ZXhkWlJoMFE0N0VweFc0Y0ZOM0xER0ZDM1JWc2pkQm5SREwvWWRQSGFOS2t5?=
 =?utf-8?B?VTJKRmRoS0xTa1Y2TW5ITFBva0tpdEhacTVRUkR4UXhwWThKYlh4a3NIMXJX?=
 =?utf-8?B?YkhmZ3JjSlRyeEJHa21PVHdUdFFpTVlLelV4VGQ5Q09EN1NtTThkOFRjT1Zq?=
 =?utf-8?B?aUlxYzQ4TmVqZU5rN1FYMnY1dnhUbmVUVWZpdW96UnREbTJQd0Q2K2FHc29v?=
 =?utf-8?B?TDlGTFNlanYydm8yK01HWVUvNis3eWdpaHlRdVVsRTlCMWhBM1FBYXBVVUc5?=
 =?utf-8?B?eERaOUpKYmIzc245eGZtd2w4eElhV0tUclh4S2t4bFJ3UW13dFVMcmxkaDlZ?=
 =?utf-8?B?Mkg1QzNPaWtmYnFtenhjWFEvUjNrdHdiK2VMN2JrNDQ0cTdJWWFnZk9oMnZk?=
 =?utf-8?B?dzZlVkFXY2ZVME5KQmVWTzRUQXkzd1ZKNlJUTjlHQjV1Vy8xSVRLTUZDaHdl?=
 =?utf-8?B?bWpRaXc1ZVppOEtVdlVoM2EzbDQzMHZSUVY3YkE3dyt5TWpHQjdSRHpSeEQ3?=
 =?utf-8?B?V2hBYWJUR0N0cHp1alVhNVZBczVtYjZGT0lWZ3dsQ20rTmRTV2xwNnlQbG5u?=
 =?utf-8?B?VXdEblVhVStBYVV4R2FiRjRySCtMSG1PYXdLbUpqMmh0U0doaWwxMEltMXBJ?=
 =?utf-8?B?VkJwOVZIVm5FKzFYWlJHSHQwbjl3YUswQnI5Z1Mwdko4WEU1ZDdZNzJPeUF5?=
 =?utf-8?B?QytTRnQ2RXZ4NFN6Y3diYTFOa2YxMGdKOENBN0VhMkZ5enFOT09aQlVEVXJS?=
 =?utf-8?B?QVk3QlljVHhCSjdmYXBUM2hCb25ROHdMRnNpMFo2SnZUOFZkOE1EK3BVUXRN?=
 =?utf-8?B?WXc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	9F06OL06gmK37Xof6gWn+wesBEYfjB58T8wScBstD3CdzHum8bdWXlsMey9MdIZUEb1r5wukMSvgLGdXoxeWTg41885YfKwNHpp6HxatkixNe3EXAYSF05Y/IJfKazI08mSOjG1uHZNMgEHnrzS1Xptvpq+cvtF+lunv3LvQg0Y4gWU8VWdccsyZLWrRm3w530oKWjjFCT+FujwhEtLRL549ErecDRp5IKmW+nYQW/mZ1ggLEd9TTaoBseirEEZCaicbPQGqqbnm2VujN40pSzaNsQ71isB2a5Xo90l9QQ9XMMxS8O1ya2w8pZdRTTTPtY+JPhGfWsX/vsGggLmj4gKyEfPbkpT3dDEIPSUPNXp4vJ5f6AVLK/SvoDMtm3qtBt1Clfhve2hG1n3BEs45J9/2tOdR9AHcR9Fj7WbaGnIbxgEC2WA7Cs2KDZSTv7e+004WLsElqD/2ATYCtFDY3vzymEtaW+IgDbOFIzMz41DWZ5UNq5DbRlgwwb9gaVb3sy373kxAe8Rtork3oPNinkYUCnpK7mAEHfu4o/x1Or+S6X+tzuNkTkjt2+iB41MsYqlCer/V7yGqd4WKI4o240/N+Nvk/5gwFaFO7JsFL70yvLKZBZ5+TmnnHRwmKrJfDMZePUTJi17FAzBDMiY3BgpgfVaNsqKlN9AHtJkcLw4qH4n7SbD5PuaTSv+dZchaKYwjJMnqgfhPNf3wfCl72i1r7bc/BFcox2CuyZyAe/mSFvVrijbL4mWF891JcGHVhRuRfsR6CKG61XYe5nJ6ZA==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 06e19128-45f1-4e25-53af-08db5dd9987f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 May 2023 11:08:55.4862
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hDzVgYjhzR9fSIJQdCd0blSRX3R9WWTHVKZe4RU6ryf66qSTBjC64b0JOzKu1gQgtPCz6sw27Lw8baoeZ2T+YhIGZeOC+kk34Urkc7c3GJc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB7297

On 26/05/2023 12:06 pm, Andrew Cooper wrote:
> ---
>  xen/arch/x86/include/asm/cpufeature.h |  1 +
>  xen/arch/x86/spec_ctrl.c              | 50 +++++++++++++++++++++++----
>  2 files changed, 44 insertions(+), 7 deletions(-)

Sorry, please ignore this patch and look at the other patch 2, which has
a commit message.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 26 11:13:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 11:13:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540083.841570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VNR-0002yh-B8; Fri, 26 May 2023 11:12:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540083.841570; Fri, 26 May 2023 11:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2VNR-0002ya-7i; Fri, 26 May 2023 11:12:57 +0000
Received: by outflank-mailman (input) for mailman id 540083;
 Fri, 26 May 2023 11:12:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xc68=BP=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q2VNP-0002yU-T5
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 11:12:56 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.220]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 420ae4c3-fbb6-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 13:12:53 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4QBCh7HZ
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 26 May 2023 13:12:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 420ae4c3-fbb6-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1685099564; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=LMcp5cCNYPTzG929FLKrdYC8n0yglRa2GcfiXwApUCt60opSuPk20FcbXGuz9C/0TC
    ESqAzAfAgB2bbp4VbdZsMPoSU7zhW/p042iHcenD0q+sX9P04x9SLW3G5nMTu7PIzgcu
    0VMZ/AsXW9bNXP+y/N0UCpuPrQba7G8bcqPv4VcW2+fFz9sMwO/uGLqg6E3GC3/eaO9S
    J3WlBAOHO/9vi3ZEeS5Z7x7qtXjbeydvOHteT0+BM7KctGUnxE6qBGZ6Xuq1sd3zQ9ma
    mjqDH7r4ZMuNnTau6pp0y72nMXoUru6M7Fz6wnz2eBg/KbjdLljXR3ccLHyl3X4p8xMf
    pRPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1685099564;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=0+fY4qC2+4dPkzJL3ujCsk+0LSKdFXsIx11Nzf82MXA=;
    b=qS44uTZpH+fEPKOe3UUF+C05+tZsGp428kLLAIfatyjBt4Fz2NIhob0i/s3SQ92wG/
    WnTh4PPZzP0OEqBjnLFjAsSsFt2yB2Yq9aU9AI7MwzwxxSeUwOgOM/gX829TiqiZI0OS
    UlpJZ+dHXxzAYDJYVHOob9xd61kTGolmlWNwK4kD4C4zMCmcYJdjT/thxn5UF3Sqbdwh
    qZMZZ2xSR/jTsk3rPsvdzl0sxyH0TGeyODKoBf2CmT1K2XfY/sSfxr8geB7Rj1bAmFRS
    2oWIHMZYto27wMfQuSqKRroKBZCs/KY9lbENDecm2XV8POFi1u7otUZLL3cWG2Ggma3q
    8ISg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1685099564;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=0+fY4qC2+4dPkzJL3ujCsk+0LSKdFXsIx11Nzf82MXA=;
    b=OYitfTlWWb1Iv7d8V4ssTvPDr5XT58k5XlLdtJ50e4y4hLUK1xZH51i8TCDI4gy/Cx
    d4tvZptX/5Mj6jF67ZvOSq+JbA4gSH7pS67CcYL8v1l+K1ztTfQTvwcgZTSTGcUGbbw5
    GsOcIsumuIyfDMEAWaJxb4wWMdByFTuHrNVDxF2HZ07Z5rxwlnNki82jBHb4pM2ZCe8n
    4rlqRLOrJwyG2+aTH9GbxdUGbrB6ct4pnNm/uMWKP4kPmKtaNkoji8LwY6pvfT3crhLS
    h/OFkO0ft9uqL7u9K9ci4dg/B4+B/0lRw8RjIIOhn7VL/pVM3WC6AQZUmGhvuAja7+WS
    le9Q==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1685099563;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=0+fY4qC2+4dPkzJL3ujCsk+0LSKdFXsIx11Nzf82MXA=;
    b=YcXPMSGCdM++hmS8gcBC8sXQxo3BEiaC5N5bgwiGRB6wVZrAV9+3o1X4F0NJim+nSC
    2eed7AQBbr0jMoU8Y7Dg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4BhOIaRvsld/sN75OpaIeOWAiVTRkMz6wPlUdSg=="
Date: Fri, 26 May 2023 13:12:38 +0200
From: Olaf Hering <olaf@aepfle.de>
To: George Dunlap <george.dunlap@cloud.com>
Cc: xen-devel@lists.xenproject.org, George Dunlap
 <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Anthony PERARD
 <anthony.perard@citrix.com>
Subject: Re: [PATCH v1 2/3] xentrace: remove return value from monitor_tbufs
Message-ID: <20230526131238.55609d32.olaf@aepfle.de>
In-Reply-To: <CA+zSX=ZNZD2qQ1HGtqauoJdU_g1T45_gLq6XCG2Sn9VJJTNnbg@mail.gmail.com>
References: <20230526072916.7424-1-olaf@aepfle.de>
	<20230526072916.7424-3-olaf@aepfle.de>
	<CA+zSX=ZNZD2qQ1HGtqauoJdU_g1T45_gLq6XCG2Sn9VJJTNnbg@mail.gmail.com>
X-Mailer: Claws Mail 20230504T161344.b05adb60 hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/0nvFFNb0dz3+e+xzHWCcSdc";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/0nvFFNb0dz3+e+xzHWCcSdc
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Fri, 26 May 2023 11:11:58 +0100 George Dunlap <george.dunlap@cloud.com>:

> With that change:
> Reviewed-by: George Dunlap <george.dunlap@cloud.com>
> If that sounds OK to you I'll modify it on check-in.

Yes, sounds good. Thank you.


Olaf

--Sig_/0nvFFNb0dz3+e+xzHWCcSdc
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmRwlCYACgkQ86SN7mm1
DoDh4g/+MyvZVLW+AS63/qvpavuWKrWtawsOwdtZEejPUF4SuqcRcPDrGOFyRObp
lLdc5YU1LEUzEky60Xr/IuspipsIbxQkpEX0Jm8c4dw1FTUG3CXfZOpW9kX0uRil
ws97EF+Dji6CvhXM73aVxDOSoIY6H/FUo54uSG6QQasCEC9edlG/YqN5ACrcUkHM
4sWNY8WLmweQKOI3JWU8dx/osvzxuUoyabsTCsaqUW0T6i9WsCXQmChqmpmPhg+1
PmRv/17SWlrJKz7SU5RpfGL2Yu7vVk+mhRNYTkSUku4xKUkTnjM6knAlu4SlYY+X
0OP8SxpnB06FmBAZAjfpRVFF09dKouiGEw7qzvIz5YpmyKBC7v71SiIxMJCErP2R
kanrWKBWYFalq61EozmWqRlsd4Rx+YHi5tWIvlJHlt+SoAirJ4poHnOH0kq8Wb/m
1Sgl3+yCYluGRoAsLg9DGtXquVqgZNwAv079w8+mSQAcs2u0kt9AyYlYcj0Katu5
MZAJd2e51Dkf/EF2mzOgFMJGrh0ttLlbEK/GXleC5jjFKJPpn1eDiDkL4e7Izsss
pNK1gBQS8iNAbgTz21x9R2NZ+Iy+yRH34r0c4dqcphjTgO/AtdlemDnkISpDmZt1
WxK8y8NvpycQnMip3PtCnQFPPFjKDUg/KDEkKTZsn5hsqqB7IOk=
=EoS7
-----END PGP SIGNATURE-----

--Sig_/0nvFFNb0dz3+e+xzHWCcSdc--


From xen-devel-bounces@lists.xenproject.org Fri May 26 11:45:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 11:45:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540094.841580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Vsd-0006nj-Se; Fri, 26 May 2023 11:45:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540094.841580; Fri, 26 May 2023 11:45:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Vsd-0006nc-PA; Fri, 26 May 2023 11:45:11 +0000
Received: by outflank-mailman (input) for mailman id 540094;
 Fri, 26 May 2023 11:45:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Vsc-0006nQ-Mz; Fri, 26 May 2023 11:45:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Vsc-0006PA-B1; Fri, 26 May 2023 11:45:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Vsb-0005Ei-RM; Fri, 26 May 2023 11:45:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Vsb-0006mY-Qu; Fri, 26 May 2023 11:45:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Sp8O/TC5qSI8M9yQu91+zEqOYF23a6/2OAH5xHYBezU=; b=foQoor+giAOynn82BBYo+JB9EE
	VY5EKtKcjd6zfupwLH4mDqzyycUCtxM1PMFtzGwjx5tJ5cKrYYeLt+YB1jb4ejuwjxwdp1d++F4F6
	Ivlz6H4XqsusOGKhqVDIss8VN8tqoGKW027UkznYsokblarf6IsTCP02zS+WhtC5jVQA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180958-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180958: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=a3cb6d5004ff638aefe686ecd540718a793bd1b1
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 11:45:09 +0000

flight 180958 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180958/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                a3cb6d5004ff638aefe686ecd540718a793bd1b1
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    9 days
Failing since        180699  2023-05-18 07:21:24 Z    8 days   35 attempts
Testing same since   180949  2023-05-25 22:08:34 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 7534 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 26 12:09:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 12:09:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540104.841590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2WGS-0001Xm-0i; Fri, 26 May 2023 12:09:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540104.841590; Fri, 26 May 2023 12:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2WGR-0001Xf-U0; Fri, 26 May 2023 12:09:47 +0000
Received: by outflank-mailman (input) for mailman id 540104;
 Fri, 26 May 2023 12:09:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q2WGQ-0001XW-AF
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 12:09:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q2WGP-0007h1-JZ; Fri, 26 May 2023 12:09:45 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226] helo=[10.95.96.139])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q2WGP-0002tG-DG; Fri, 26 May 2023 12:09:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=oO6Lm0cwvoASXQTywWMRuAMiRTjNWPRWvHlEXdn1Iig=; b=wr8MQVIoUVJ5LNL9XrL7o5zZDs
	10QoXI+O2wCK4tVaGWzsGanTZ7tV3DVaH7+jdybzSzeVfJv9aN8KH+BmIFQmX9Z2zuKFVlXpGYBtX
	7EoUsD0Ilsg+k5cYJpfKwpcjE4CUTvzNTcWcuBwxpsO3eiGHpPZEVthrJAkbU0EcjkLA=;
Message-ID: <06e6c3a1-e2ab-6e82-1899-fdcd8add31cf@xen.org>
Date: Fri, 26 May 2023 13:09:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [PATCH v2] xen/arm: un-break build with clang
Content-Language: en-US
To: Stewart Hildebrand <stewart.hildebrand@amd.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20230525191531.120224-1-stewart.hildebrand@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230525191531.120224-1-stewart.hildebrand@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 25/05/2023 20:15, Stewart Hildebrand wrote:
> clang doesn't like extern with __attribute__((__used__)):
> 
>    ./arch/arm/include/asm/setup.h:171:8: error: 'used' attribute ignored [-Werror,-Wignored-attributes]
>    extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
>           ^
>    ./arch/arm/include/asm/lpae.h:273:29: note: expanded from macro 'DEFINE_BOOT_PAGE_TABLE'
>    lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \
>                                ^
>    ./include/xen/compiler.h:71:27: note: expanded from macro '__section'
>    #define __section(s)      __used __attribute__((__section__(s)))
>                              ^
>    ./include/xen/compiler.h:104:39: note: expanded from macro '__used'
>    #define __used         __attribute__((__used__))
>                                          ^
> 
> Simplify the declarations by getting rid of the macro (and thus the
> __aligned/__section/__used attributes) in the header. No functional change
> intended as the macro/attributes are present in the respective definitions in
> xen/arch/arm/mm.c.
> 
> Fixes: 1c78d76b67e1 ("xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping")
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 26 12:21:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 12:21:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540108.841599 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2WRI-0004Ar-Vh; Fri, 26 May 2023 12:21:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540108.841599; Fri, 26 May 2023 12:21:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2WRI-0004Ak-T1; Fri, 26 May 2023 12:21:00 +0000
Received: by outflank-mailman (input) for mailman id 540108;
 Fri, 26 May 2023 12:20:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1q2WRH-0004Ab-IT
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 12:20:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q2WRG-0008NE-Sj; Fri, 26 May 2023 12:20:58 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226] helo=[10.95.96.139])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1q2WRG-0003Ex-M2; Fri, 26 May 2023 12:20:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=NNz7FVICZkmfqoG0QWZ3+XFh21dEf6szpZS9+lKViOw=; b=61shpGDvN3JjXrV54fTWweHvie
	O+MsOUPey+5Hm7aePZVYazL5Mcg08BjSioxbR6PVGZfRVzBdmnTlhkvllotB2y5xEzAHu8E46Tr7v
	bq1TAAJvydRYWFcQ1UxWwtEeua92VbeZ9lSAofH+TVUE+ZDXdq3MAqbrAcdQpE+UeGU0=;
Message-ID: <fce78524-f6a0-1906-ae33-b2c8d469a678@xen.org>
Date: Fri, 26 May 2023 13:20:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [XEN v7 00/11] Add support for 32-bit physical address
Content-Language: en-US
To: Ayan Kumar Halder <ayan.kumar.halder@amd.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, stefano.stabellini@amd.com,
 Volodymyr_Babchuk@epam.com, bertrand.marquis@arm.com,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 wl@xen.org, rahul.singh@arm.com, michal.orzel@amd.com
References: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20230518143920.43186-1-ayan.kumar.halder@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Ayan,

On 18/05/2023 15:39, Ayan Kumar Halder wrote:
> Ayan Kumar Halder (11):
>    xen/arm: domain_build: Track unallocated pages using the frame number
>    xen/arm: Typecast the DT values into paddr_t
>    xen/arm: Introduce a wrapper for dt_device_get_address() to handle
>      paddr_t
>    xen/arm: smmu: Use writeq_relaxed_non_atomic() for writing to
>      SMMU_CBn_TTBR0
>    xen/arm: domain_build: Check if the address fits the range of physical
>      address
>    xen: dt: Replace u64 with uint64_t as the callback function parameters
>      for dt_for_each_range()

I have committed the patches up to here.

>    xen/arm: p2m: Use the pa_range_info table to support ARM_32 and ARM_64
>    xen/arm: Introduce choice to enable 64/32 bit physical addressing
>    xen/arm: guest_walk: LPAE specific bits should be enclosed within
>      "ifndef CONFIG_PHYS_ADDR_T_32"
>    xen/arm: Restrict zeroeth_table_offset for ARM_64
>    xen/arm: p2m: Enable support for 32bit IPA for ARM_32

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 26 12:29:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 12:29:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540112.841610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2WZe-000536-QY; Fri, 26 May 2023 12:29:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540112.841610; Fri, 26 May 2023 12:29:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2WZe-00052z-Mj; Fri, 26 May 2023 12:29:38 +0000
Received: by outflank-mailman (input) for mailman id 540112;
 Fri, 26 May 2023 12:29:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xi8L=BP=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q2WZd-00052r-VA
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 12:29:37 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f9e6b288-fbc0-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 14:29:36 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id F10A321B33;
 Fri, 26 May 2023 12:29:35 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id B9FC6134AB;
 Fri, 26 May 2023 12:29:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id zLRpKy+mcGQ1WAAAGKfGzw
 (envelope-from <jgross@suse.com>); Fri, 26 May 2023 12:29:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9e6b288-fbc0-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685104175; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=kwjcmuagLfOHSWG0LiIoiDu9YlO+nsSe2Nz9n6SGCS0=;
	b=rCCOrHUhD58lG93TowQiKzFXAJl9OQOgFzen9QaDterYVrWTcNgBL6TVLBCQDSXGoWXUPv
	HO9M2/VQhjW2d9qhAsD+giDTAHdRqZhH+l1W7lduTGHtoLvlPHpfugRphcyc3XxLC1xCbk
	Tum53QqzTiKHoxe8tXy1jIxTPCNyZb4=
Message-ID: <6e83b70e-a3a5-d11f-c0ea-fb0c5f0c0829@suse.com>
Date: Fri, 26 May 2023 14:29:35 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Juergen Gross <jgross@suse.com>
Subject: Re: [XEN PATCH] tools/xenstore: remove deprecated parameter from
 xenstore commands help
To: =?UTF-8?Q?Cyril_R=c3=a9bert?= <slack@rabbit.lu>,
 xen-devel@lists.xenproject.org
Cc: Yann Dirson <yann.dirson@vates.fr>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <47cbac6bcf8f454b47bc6430c101f064a5623261.1685041564.git.slack@rabbit.lu>
Content-Language: en-US
In-Reply-To: <47cbac6bcf8f454b47bc6430c101f064a5623261.1685041564.git.slack@rabbit.lu>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------Wcf7RH3Pc0cME6RJcl0dOAgc"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------Wcf7RH3Pc0cME6RJcl0dOAgc
Content-Type: multipart/mixed; boundary="------------n5mvIR6liUv90bJsSbK43Q0S";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: =?UTF-8?Q?Cyril_R=c3=a9bert?= <slack@rabbit.lu>,
 xen-devel@lists.xenproject.org
Cc: Yann Dirson <yann.dirson@vates.fr>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <6e83b70e-a3a5-d11f-c0ea-fb0c5f0c0829@suse.com>
Subject: Re: [XEN PATCH] tools/xenstore: remove deprecated parameter from
 xenstore commands help
References: <47cbac6bcf8f454b47bc6430c101f064a5623261.1685041564.git.slack@rabbit.lu>
In-Reply-To: <47cbac6bcf8f454b47bc6430c101f064a5623261.1685041564.git.slack@rabbit.lu>

--------------n5mvIR6liUv90bJsSbK43Q0S
Content-Type: multipart/mixed; boundary="------------KSZlhZ4E2rwQyhmRzNiIE0mB"

--------------KSZlhZ4E2rwQyhmRzNiIE0mB
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 25.05.23 21:06, Cyril Rébert wrote:
> Completing commit c65687e ("tools/xenstore: remove socket-only option from xenstore client").
> As the socket-only option (-s) has been removed from the Xenstore access commands (xenstore-*),
> also remove the parameter from the commands help (xenstore-* -h).
> 
> Suggested-by: Yann Dirson <yann.dirson@vates.fr>
> Signed-off-by: Cyril Rébert <slack@rabbit.lu>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
--------------KSZlhZ4E2rwQyhmRzNiIE0mB
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------KSZlhZ4E2rwQyhmRzNiIE0mB--

--------------n5mvIR6liUv90bJsSbK43Q0S--

--------------Wcf7RH3Pc0cME6RJcl0dOAgc
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmRwpi8FAwAAAAAACgkQsN6d1ii/Ey/o
ZQgAnCXBIohKM9ALfeQfamwssAohKGyC+/u7MV0wsbQQLoQixFRX6cbgk/6vxclSl5OsOUVIFLw1
PgCJo0T8IKwISQCMEYBaMina86T6sQ+2SNlCNsJsBhBwq7kyVQbxFmtbDAUkXYCKbhvNXrPiSd12
w7bh8Y9hiVtOcA6BFnqntmIvJJTjpOhsPu4f1jIKsn/NhU8ce+xz92m0n6uSIH/Ho2cXdpYXmcZY
ER2E3SDe/HHarUkPHcv4HIsgmOdhiM9wm1XjVujUHkrwtCe8GMg7E1G0Bgc4dHv+kKU+YwYNrP1G
FBUFguX6BSVqnzXyCDSdWAXFYU4C0EFT/Y7SJD8l/g==
=Prr3
-----END PGP SIGNATURE-----

--------------Wcf7RH3Pc0cME6RJcl0dOAgc--


From xen-devel-bounces@lists.xenproject.org Fri May 26 12:38:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 12:38:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540116.841619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Wi6-0006k5-KQ; Fri, 26 May 2023 12:38:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540116.841619; Fri, 26 May 2023 12:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Wi6-0006jy-Hg; Fri, 26 May 2023 12:38:22 +0000
Received: by outflank-mailman (input) for mailman id 540116;
 Fri, 26 May 2023 12:38:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Xc68=BP=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q2Wi5-0006jp-AD
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 12:38:21 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.163]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 31f65e63-fbc2-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 14:38:20 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4QCcE7ih
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 26 May 2023 14:38:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31f65e63-fbc2-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; t=1685104694; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=WhKwkL3PZvTC5PFUX4/6PjJyLk2w3VNxaA4P3s2hzqx+fryyZdmQPN6wFImmNN8QSt
    FSlnf+vfShdkZWfOKC+FrGJMdx3QKwwhLE6W+f8h5zK2ZWyBKrv8D0SmYiMKFfOtV9Oe
    vAcPApae6dCHcLm+54nvxSHu3kILit4NrCMpi9aGP4d4NZ7x7pccl6DGuE5rfdI4GIoo
    W7NooGIW8g12iht9JwdsUNoRkhxE19QJE251FH4vn1zSSejcVqwAvh//lhFRnx/Ea6gE
    dJpdaOY4x3fETgraPqCEPGh9GR/imvUwcjr+m0LAO8AskhAplkw8le/ZPoLi69h4fVfx
    tnTA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1685104694;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=TV6XPOqE6SFQtp9w1o26J0iwlJ8FZqSaP4PKDn98ZHc=;
    b=kjUs0VtQMVGLqmD8Of+u5PqcFizlXG2edidpieVCblFrTWLmAL+Zwjwk7Vgut/RQ9n
    8iIwUJrgrEJYMw9lWQgUhhS0cHmcIn7pQccMzFviig1Z9J83WvxAc0mWBj4choJL1b1l
    Xgp5Jw5H3Hja8GRL7OwZlEYvELcEUebuter8KcKqxmdmvtR+XFTRnngq4DjgNaMNLeN2
    tf+ZThknRovrkP8HNyMv4xRuT3fSbuMHeTM0ygtRkiwWHo5TPhOYyswkM9cGPaPrGepg
    C8StWC1s7Mc8eMWF4CpS7bdxoEufDZwR6h1hX0p4Lws2NEod4TYOQMOTWi6IFu9J0xXa
    xhPw==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1685104694;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=TV6XPOqE6SFQtp9w1o26J0iwlJ8FZqSaP4PKDn98ZHc=;
    b=rAws7qeGCU+DW0tyUMYx5DJe59k15Mv0ZPCtskiNSREh2b394MbDT1c6+P5CPd9j1P
    nqwSrypqIMt25pj0zAuXV9P8Lw4DjEvSLuNoPnE32nIneDc2jMawYZFi/eOIT/TeRU3U
    6zQjYlBNwixDxuuwu0lHMSbBI4VL/o5H9ylfnOCnlCXJwl8Lfb+U+bTa1ThbabWYFOPZ
    vrEMY2AXLeeesxD2HHo5dLvEya2XAa1j8F6hYJiJ9t3ra8lKOA1cvjkgCiFR54KgryDT
    nhUGiWxdKPAuWQVnFtH+3AgMl0L8X5EKraLVH+I8n6hE31jfigTAqcKcYCbx49gtSK9M
    W9mA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1685104694;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=TV6XPOqE6SFQtp9w1o26J0iwlJ8FZqSaP4PKDn98ZHc=;
    b=fdF0v2+49ioYdWmmkdyKrpGubJ4ZlCRPNP8Wooiikj7E7tD61I+EBOTNLGRf5nmEr2
    nj+OwfcvDeb1dLQVqeBA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xqFv7EJ0tgRX/vKfT/e8Ig6v0dNw4QAWpzMWrRQ=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1] xentrace: adjust exit code for --help option
Date: Fri, 26 May 2023 14:38:10 +0200
Message-Id: <20230526123810.23210-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

Invoking the --help option of a tool should not exit with an error code
when that tool has a documented and implemented help option.

Adjust the usage() function to take an exit status, so it can exit with
either failure or success. Adjust the existing entry in the option table
to call usage() accordingly.

Also adjust the getopt value for help: getopt returns '?' for unknown
options, so returning 'h' instead makes --help distinguishable from an
invalid option.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/xentrace/xentrace.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/tools/xentrace/xentrace.c b/tools/xentrace/xentrace.c
index 3548255123..be6226f088 100644
--- a/tools/xentrace/xentrace.c
+++ b/tools/xentrace/xentrace.c
@@ -807,7 +807,7 @@ static void monitor_tbufs(void)
 const char *program_version     = "xentrace v1.2";
 const char *program_bug_address = "<mark.a.williamson@intel.com>";
 
-static void usage(void)
+static void usage(int status)
 {
 #define USAGE_STR \
 "Usage: xentrace [OPTION...] [output file]\n" \
@@ -854,7 +854,7 @@ static void usage(void)
     printf(USAGE_STR);
     printf("\nReport bugs to %s\n", program_bug_address);
 
-    exit(EXIT_FAILURE);
+    exit(status);
 }
 
 /* convert the argument string pointed to by arg to a long int representation,
@@ -873,7 +873,7 @@ long sargtol(const char *restrict arg, int base)
     {
         fprintf(stderr, "Invalid option argument: %s\n", arg);
         fprintf(stderr, "Error: %s\n\n", strerror(errno));
-        usage();
+        usage(EXIT_FAILURE);
     }
     else if (endp == arg)
     {
@@ -901,7 +901,7 @@ long sargtol(const char *restrict arg, int base)
 
 invalid:
     fprintf(stderr, "Invalid option argument: %s\n\n", arg);
-    usage();
+    usage(EXIT_FAILURE);
     return 0; /* not actually reached */
 }
 
@@ -917,10 +917,10 @@ static long argtol(const char *restrict arg, int base)
     if (errno != 0) {
         fprintf(stderr, "Invalid option argument: %s\n", arg);
         fprintf(stderr, "Error: %s\n\n", strerror(errno));
-        usage();
+        usage(EXIT_FAILURE);
     } else if (endp == arg || *endp != '\0') {
         fprintf(stderr, "Invalid option argument: %s\n\n", arg);
-        usage();
+        usage(EXIT_FAILURE);
     }
 
     return val;
@@ -1090,7 +1090,7 @@ static void parse_args(int argc, char **argv)
         { "discard-buffers", no_argument,      0, 'D' },
         { "dont-disable-tracing", no_argument, 0, 'x' },
         { "start-disabled", no_argument,       0, 'X' },
-        { "help",           no_argument,       0, '?' },
+        { "help",           no_argument,       0, 'h' },
         { "version",        no_argument,       0, 'V' },
         { 0, 0, 0, 0 }
     };
@@ -1144,8 +1144,12 @@ static void parse_args(int argc, char **argv)
             opts.memory_buffer = sargtol(optarg, 0);
             break;
 
+        case 'h':
+            usage(EXIT_SUCCESS);
+            break;
+
         default:
-            usage();
+            usage(EXIT_FAILURE);
         }
     }
 


From xen-devel-bounces@lists.xenproject.org Fri May 26 12:43:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 12:43:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540121.841629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2WnP-0008Iq-7Y; Fri, 26 May 2023 12:43:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540121.841629; Fri, 26 May 2023 12:43:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2WnP-0008Ij-4y; Fri, 26 May 2023 12:43:51 +0000
Received: by outflank-mailman (input) for mailman id 540121;
 Fri, 26 May 2023 12:43:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7mHh=BP=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q2WnN-0008IZ-G5
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 12:43:49 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20630.outbound.protection.outlook.com
 [2a01:111:f400:7d00::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5a072a1-fbc2-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 14:43:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB9637.eurprd04.prod.outlook.com (2603:10a6:102:272::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.18; Fri, 26 May
 2023 12:43:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.016; Fri, 26 May 2023
 12:43:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5a072a1-fbc2-11ed-b230-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RvMk3VylcAy5f8Nly3tOySrIRrT5xSA1aHhMmZeAHGg5pWXRt+6WigUSLM1vTKNh20tlqRFF2ZPDA/RYadE1L1630ws6VMoQvMxU+LIp/xU/3B0uBv+DPvIRAgmrxWzcwLz+qrhIJHk7D/QV/B0nzKMDBNc8SQ4xBTbUElsAHNgyNNb0fR8zGXQW1qveCmoTj8HUjw+90sycbz6DeNIo11lPIJ3PmV5T7+j7K0EWw6/WrxTxQV9+HRerjLVGdxCbmq5JlX7jRDJYzUKXNjkdTfoHiDvxhxrxC7GNBgZ3vmNFUBCVbIXU15SwHV2S3pxPY0QIJEIiTc8OVK8bovyVhQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LbQjKUhKwl5FaUv+O+ExmUqYGjiB6T0MjPVKu0iLULw=;
 b=Lhi4uaYBB3336tfbjiEfxpduX4ua0lSdK6XeGfQJe1JxUZJR30c/77iAusJg3qt2SJ1pWYf+vplF0dZb/w9FMOt3isbFkHhGvS15CbDa8KGkDMzwRU/KQHhkN/e+sPczYEdp7KyIIJM3BRt/jC/0RPUzfdNEXrQ3e8CuUMSMhCUFrRbjRgGH+d+3PRQqRH8x/BWx9Ov+1acfW27dGoVmU3rJfL8GGsZtT0VVExwk4funphrBM+x2YZmFPt9z0vRTk6Qglvp5QZhDrJJZPRNtYJ7AKDU+CEvrQp3KfO2qAfYJ8U01tBHD6mhmUcvb3j9vKZgdpaSMQKivJgkMy/Ghjg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LbQjKUhKwl5FaUv+O+ExmUqYGjiB6T0MjPVKu0iLULw=;
 b=GxDHCE+CLmlluxP7q4oButl2gCicZsNDmtBkdPQFNUAXHW6DTyR8yWrpmOxZLncJztdPYZptn/FpD8xBYBtfG8uRGqARz1LlPnS4+szgY0SN8a5OMsmh21YsdwXOhmb3IObiECbMDFiIJY4hzzAQz2+SwAH0v06uRZSJy3n0vGcynQMGrzE7A9iFipIu4JsEgsR5AQ2TfNcZYxHMb+VeMmrakZRJPsON1Dltsh3euoJZSL0MQxzSZDeZrxfC4KTaheq1FzvSXQQdLlwKnZNFjNllIcFF8+rhsoMhlselK6dKfEYgEC5PAo37dKRlIh2zb1dzL7P+pK/ryjs1Fqpdjg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8fbe0439-392b-e7a0-1480-878fb923e58f@suse.com>
Date: Fri, 26 May 2023 14:43:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2] xen/x86/pvh: handle ACPI RSDT table in PVH Dom0 build
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: andrew.cooper3@citrix.com, xen-devel@lists.xenproject.org,
 Xenia.Ragiadakou@amd.com, Stefano Stabellini <stefano.stabellini@amd.com>,
 roger.pau@citrix.com
References: <20230525231851.700750-1-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230525231851.700750-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0221.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ac::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PA4PR04MB9637:EE_
X-MS-Office365-Filtering-Correlation-Id: 553ca0e9-4c1e-43c6-1515-08db5de6d7df
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EQrpYqk8augXYLzJ0iiy4juTgZdWZwNX0I7AvSYMrxE5qFhbW5X4zxtgvXsE1RV9rK3mBcPSGfbOjQOyyhecTBp8fNS/bAX/L8XZn2abfwJxvDzSUQQVmb8EK1rlI7aike2RoYwYaYmK4RLmi9fTfK+aRiz/iYbpwghUGCuXfB2M0g43R4fCzhHKQXSbfxRHHXfEsYbdN4u3a5WGmanw/32Kvcoq+aRIE/TT7chIwpGW8dn6SwR9+YkFJplJmLuqGhTXTo82TgfZcH7DJOE4Cw1Pq8UMqcFmz58ZmE3qt3gCMn6Ez8hRg3BemtCGn1FEgxUFoF3wPo0v5S2onPw/7E4UnWOhezApnrYyDadTL5rPDIGZp6kU04BBNXNOmuMteuZCRcep8oXJEPFnXl0wEzp5H3m9SLXMaUiHhgc9bEdIrkia9vOiTWrLBOcRupvYMwwASQZS/KRzk0Ctzxvx3HCJr4G2cGiO3PpqbyfrM4pWo5SHyHsmX5v7a5ttiUB/+6Akjsa+TZEQb/CL+kWhIdD22n9fUD8xNR6Umy+NIvfy5swqeSnVlAKHd7VnkAnjJd5MclxaLv4bEjRO44bzlnixzNhN/a6KG+FN6Z8oRvaNreWwufErxPWrt0g0GqT0yeUKIobxLd+N6U1u/HKsmA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(376002)(39860400002)(136003)(396003)(366004)(451199021)(41300700001)(4326008)(316002)(8676002)(66946007)(6916009)(66556008)(66476007)(2906002)(5660300002)(31686004)(8936002)(6486002)(478600001)(26005)(6506007)(186003)(36756003)(2616005)(6512007)(53546011)(38100700002)(31696002)(86362001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?U0xTcXNKZUZ2ZzhsWld3c084WUQvZDlKNGpWTWJFK1EwbVJibnYxMGkwdU9Z?=
 =?utf-8?B?bEN0MkhnMlgvRE00NzZEZXFvc0Q4M3VGNFJtUUttamhQUDUzT0ZaYmxUQUZD?=
 =?utf-8?B?aEtIYm9MZS9aMXJLQi9VZ2dnclFjWEFrS292MGIwQ29OVnk1em5kQ0p6YndZ?=
 =?utf-8?B?dE1EZDVocU5mOXI5Y3JQamVhRmFBQTRSK01uZnJZS3FlWHNsUmNLekk1Mlc4?=
 =?utf-8?B?VG5aNTRicEZ5Qkw5bDV1Z3l5cEZ3WFM3aGR5RTFkZlRtcjk1WWdhcUZnS1Jm?=
 =?utf-8?B?OGlCMTRPV2E0Z0gyTThVRzZrODNSYVlUbkZGVk0rcU5xVCtab2p0TXU1aDU4?=
 =?utf-8?B?TFdNdGh6UnhOdWhma0xzY3BTakdxVml3RFNEd2RDQ242R3NCbE1LTng3NjBv?=
 =?utf-8?B?djlhUGxJeTZxcFJRU0M3MmY1ZERXTEU0bXFTUjZMejVQMXk4dlh4MFdFUjNT?=
 =?utf-8?B?ZC9QaWs0TjNuSFhJNGNvaG1lR2FlcFZFSFpqenQvUEZlSUowMXU4V0FkV0Vi?=
 =?utf-8?B?SG5nNHYya3E4bHA2eG9TUUxSUTRVQm9ZZm1yMWpCK2xHUE9HbFQrZ0lrVDFX?=
 =?utf-8?B?MkU0VCtxNXhXVEtkL25LelJDeVBNYjJvaDRseWtGRUZYSFV3WkxVQnhCNHZy?=
 =?utf-8?B?bk44QXhpN2p1WmJ5d3ZObkI1dzJpSjhhWWkwbmllbDhtM2l4NE5HS0hqcWlJ?=
 =?utf-8?B?NFlNbE9qY2tzRS9mN1FCTHpMbW94TDV5U2s4alNIS09RRTlkUTFxWFdvejhG?=
 =?utf-8?B?c0R6bzZSNkNzanNvZno1RTBqS3lGc2JaQ0J4UjAvcEtCSFRBM3NsdjQwOEV1?=
 =?utf-8?B?bTZMc3pmY0E1eG4yNjIrR3phRytSekJLM1JCQ0xZd3k2dWRaMEl4NzdVd2lv?=
 =?utf-8?B?elhIRVdzWEJORVlnQS9VSC9USENLU3N5NERXeHZUOUhPNStneGR1ZmovdmNw?=
 =?utf-8?B?SVp1RWkvUEtzcGxBcDdCbXZxU0YrY3Z6Rjc2NFFHa0FoUEZpQllacmZ1ZFNW?=
 =?utf-8?B?TEpEZGdPbEFBRkNzelJFby96WXQrNGN5Rk1vK0tpa2hzVG9pSmNzak9CN21o?=
 =?utf-8?B?L1M1RWRpMlFZNWV1czhwQ1YyYjVxalJ5ekEyKzh5TDlZZ1Z6ZDgzSFR4TkpS?=
 =?utf-8?B?TGRoSVV3Z21MZXVoeE5WYWh0Vy9ySC9JRFBDZG1vU2kzalp5Si92UnNiMDNI?=
 =?utf-8?B?UkJocVQ5bzNNL2l4eE9acnN2cTVOM3lndjRTTlVPU2hCWXZ1Q0wvQ3RmWWJz?=
 =?utf-8?B?SjBYL0dTc2h0U0dKQlZrNndyM0cxY1BtU21GcGtWeEJranZVQ1BybFBxUDMy?=
 =?utf-8?B?R2RwWjVQVVZYa2wyRzdxamRqc3NQS0RGY29SU3liZVNBdExwczFXU25YSUNM?=
 =?utf-8?B?eUlsdW1RM0VpNEFLckIzSFlONWNNVHpHTXRqbjM5Z1E4bFZZM3VrZ1dWdGM3?=
 =?utf-8?B?Z3BqYStLbGdMVUVJLzdhZlBFT0IyRkkzcE1jOE53bGdCQ2d4RmVhY2twYW1q?=
 =?utf-8?B?cnZhcnQyT0R5dk12dmMzTmwrNnRxZXcvSndFOWQ1TWUvUklRd1c4NHl6Mjdp?=
 =?utf-8?B?cEVFaGtGWmlaeHloamY2Y0N4SklaM0hxNDhxa0VuNHRUSE14c0JKNE5RNzYw?=
 =?utf-8?B?cWE0Tk9sTGQwcVRnUDFtZkZVRGlNWFVmVDBITWJzdTVpOXhreis0bWk5WlJI?=
 =?utf-8?B?Z1hRVnhWRlZEQ1YrTVprZEhPNnJMQ3NlVnZOK3N1b0xCS1ppTHNFdlVvRkRI?=
 =?utf-8?B?UitQODltVnk3VVJNV0w0N3QyQTBsWVo1ZlNuYVVaTFVyckZPcUk4Q3l2elg1?=
 =?utf-8?B?aE10MzZmWUpxbHR0V0JVV2RFREN2Z2VBMUlsSzVOeXJ2NUkrS01hWmkrcHI0?=
 =?utf-8?B?VU1hS2ZGOGZlOWJMK204aEM5bFZBSnV5VjdVeDh1SDdIT2tsamJ1OHM1ZXYw?=
 =?utf-8?B?K2tZSzZobXl2VkpqMGQ4ZGxXUlVVbDVzVlpSOXhTYzM3Z2tzOHlVVWtkTnRZ?=
 =?utf-8?B?UVQ1TDVZMTBwOFBicnB4bzlnQjExVXpKTVNOb2dYUU1ITzhRUEprbCtvamhT?=
 =?utf-8?B?Szc4YUkyby9UUDZxSS9qV3VrN1VqQng1Uk90OE5DVENvVGsvSWVTY0lIWVdK?=
 =?utf-8?Q?krE+0OLI4s9hRA0gOqH1Ict/c?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 553ca0e9-4c1e-43c6-1515-08db5de6d7df
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 May 2023 12:43:45.0477
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6KqC426xLfZcoiwI+jWKpZsKCz1sec441aV6/XwpQ75gqa7vabvpA0kTdnhVGbsu2B0e25InRjTjEv6OpF7xvg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB9637

On 26.05.2023 01:18, Stefano Stabellini wrote:
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -966,7 +966,16 @@ static int __init pvh_setup_acpi_xsdt(struct domain *d, paddr_t madt_addr,
>          rc = -EINVAL;
>          goto out;
>      }
> -    xsdt_paddr = rsdp->xsdt_physical_address;
> +    /*
> +     * Note the header is the same for both RSDT and XSDT, so it's fine to
> +     * copy the native RSDT header to the Xen crafted XSDT if no native
> +     * XSDT is available.
> +     */
> +    if ( rsdp->revision > 1 && rsdp->xsdt_physical_address )
> +        xsdt_paddr = rsdp->xsdt_physical_address;
> +    else
> +        xsdt_paddr = rsdp->rsdt_physical_address;
> +
>      acpi_os_unmap_memory(rsdp, sizeof(*rsdp));
>      table = acpi_os_map_memory(xsdt_paddr, sizeof(*table));
>      if ( !table )

To extend the quoted context some:

    {
        printk("Unable to map XSDT\n");
        rc = -EINVAL;
        goto out;
    }
    xsdt->header = *table;
    acpi_os_unmap_memory(table, sizeof(*table));

You're now pointing the guest's XSDT addr at a table that's really an
RSDT. After copying, I think you want to adjust the table signature
('R' -> 'X'). Luckily a revision of 1 is valid for both tables.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 26 15:01:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 15:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540128.841640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Yw0-00010P-0d; Fri, 26 May 2023 15:00:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540128.841640; Fri, 26 May 2023 15:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Yvz-00010I-TX; Fri, 26 May 2023 15:00:51 +0000
Received: by outflank-mailman (input) for mailman id 540128;
 Fri, 26 May 2023 15:00:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EF2/=BP=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q2Yvy-0000zw-TD
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 15:00:50 +0000
Received: from mail-wr1-x436.google.com (mail-wr1-x436.google.com
 [2a00:1450:4864:20::436])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19407304-fbd6-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 17:00:48 +0200 (CEST)
Received: by mail-wr1-x436.google.com with SMTP id
 ffacd0b85a97d-30add74cc30so264575f8f.3
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 08:00:48 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 y9-20020a5d4709000000b00307d58b3da9sm5360033wrq.25.2023.05.26.08.00.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 26 May 2023 08:00:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19407304-fbd6-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685113248; x=1687705248;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=C28wxsizkvytBc8ueJDeuJt4h314LQLVAiDr/nIoKrw=;
        b=YAKsBSL5leCDi1kBLrOfpLfnxJcwXqlq2zFIFA9Zmba9uzXgbpEt1/edQBsKeGY2ow
         jAJKv9jsu3bCh6wCPW4r+CwuZj7GWkDNdndYfs88W59JURSJry7BesL5UWHt5TSuVIaC
         fKmTIPekn6XJnFVSrzpbEupsoIP2A5EqXOByM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685113248; x=1687705248;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=C28wxsizkvytBc8ueJDeuJt4h314LQLVAiDr/nIoKrw=;
        b=Euwy6dgJBCVoFPhtsAoR3c5hUER27FwNoJKdNTdHmxEiXSHq/kXNUmTKLTH1VxKExX
         UkqGZiFP7k/np5VfvfeAZebkChKOBbEYuRnVT84pkyl7Tf3XyjloXi3iJepiZFJHjIcr
         XRzveKt6mAxJMf/BlowbCC/BVGWdXU5ANGvN/by9Vt7ngpGUnVJ5EDDAYNQG5OFwrJxs
         QrLmGKNDIgi4H8mLVx+nP1WglyPHiO+Uqo2yhuZHGtQ5SbNJrkFTRoDpAJaxwnexgZvu
         yXAwZade1QQ7UwSMopTgcyR1UZQwUu47BU3j2WcpwCzAawHSeL0Fw7JbOyq8AORJxIuK
         Uy2w==
X-Gm-Message-State: AC+VfDzp6u5E/GFlFX3+bGNEae1M6LhkAn2/alN5VtAsapqUJlcs9wX2
	2aQmQOa3IOrfdPVLVH4lF4HZDoTa5OPDvJcVM+g=
X-Google-Smtp-Source: ACHHUZ58CsujQt2FRr6ZgH9ICKzc1N7vEfbKrbuhNPCY2+Ia+n86KmeK3JhFkeKt0m5/h4IAQh4+7Q==
X-Received: by 2002:a5d:44c8:0:b0:309:3b8d:16a8 with SMTP id z8-20020a5d44c8000000b003093b8d16a8mr1828847wrr.50.1685113247790;
        Fri, 26 May 2023 08:00:47 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 0/3] Add Automatic IBRS support
Date: Fri, 26 May 2023 16:00:41 +0100
Message-Id: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series adds support for AMD's Automatic IBRS. It's a set-and-forget
feature that prevents lower-privileged execution from affecting the
speculation of higher-privileged execution, so retpolines are not
required. Furthermore, it clears the RSB upon VMEXIT, so we can avoid
doing that ourselves when the feature is present.

Patch 1 adds the relevant bit definitions for CPUID and EFER.

Patch 2 hooks up AutoIBRS to spec_ctrl so it's used when IBRS is picked.
        It also tweaks the heuristics so AutoIBRS is preferred over
        retpolines as BTI mitigation. This is enough to protect Xen.

Patch 3 exposes the feature to HVM guests.

Alejandro Vallejo (3):
  x86: Add bit definitions for Automatic IBRS
  x86: Add support for AMD's Automatic IBRS
  x86: Expose Automatic IBRS to guests

 tools/libs/light/libxl_cpuid.c              |  1 +
 tools/misc/xen-cpuid.c                      |  2 +
 xen/arch/x86/hvm/hvm.c                      |  3 ++
 xen/arch/x86/include/asm/cpufeature.h       |  1 +
 xen/arch/x86/include/asm/msr-index.h        |  4 +-
 xen/arch/x86/pv/emul-priv-op.c              |  4 +-
 xen/arch/x86/setup.c                        |  3 ++
 xen/arch/x86/smpboot.c                      |  3 ++
 xen/arch/x86/spec_ctrl.c                    | 52 +++++++++++++++------
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 10 files changed, 56 insertions(+), 18 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 26 15:01:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 15:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540130.841660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Yw1-0001U1-IH; Fri, 26 May 2023 15:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540130.841660; Fri, 26 May 2023 15:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Yw1-0001Tu-Dy; Fri, 26 May 2023 15:00:53 +0000
Received: by outflank-mailman (input) for mailman id 540130;
 Fri, 26 May 2023 15:00:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EF2/=BP=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q2Yvz-0000zw-NR
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 15:00:51 +0000
Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com
 [2a00:1450:4864:20::42d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1a4891ba-fbd6-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 17:00:50 +0200 (CEST)
Received: by mail-wr1-x42d.google.com with SMTP id
 ffacd0b85a97d-309550d4f73so1702708f8f.1
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 08:00:50 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 y9-20020a5d4709000000b00307d58b3da9sm5360033wrq.25.2023.05.26.08.00.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 26 May 2023 08:00:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a4891ba-fbd6-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685113249; x=1687705249;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=quS4cTSFGRVakDp9QHbKiL2cvkdUMIvwadA2xaR3dOE=;
        b=Q4Q946ylMHUyvYrwPgiKBe5i3wcohM3WYhaSRXDR8DGJIBq3WGAullplwSgyZNiJEb
         cJRzHyCxqr0TmEQhpN9SRYNztmV2e0F9/GuRldBJB+osDeIWnUI243Y4YYBEpGxm++2B
         AztWewjWlY0WnWRTxTWSmAL7BPUE+BJPEbRkM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685113249; x=1687705249;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=quS4cTSFGRVakDp9QHbKiL2cvkdUMIvwadA2xaR3dOE=;
        b=WTGP6x9TBAOyWhgVgI2cdeH8F/aLiMNnPMiH+8WbJMYIEuAgHCvHDSl4ovelPgTg80
         Qdh6qpXZXioe2kZml9B3UPQ6do6YcrS1SGZnH+C1IN60bKeZGrbPqWFW5AWyUT8cjpKd
         ZrAUuR/D11MT8COYo6+c8expYfyS5elUQRip5PeqqhAf9wyhm4Ge35lLG+Z7ORBjIhLD
         Bc9qCJXQlfzYSp/7Edch8KsfeZLNdDbGTO7OY0WrBbxzpbR1qS3//FpuuN1iwijW6PEy
         JPLZSGobBoJst2tjBnylQvdOQ535pQDMO3b4xT2I6gvYBhbnswrQcNdnDM8XPtqnScKg
         sOoQ==
X-Gm-Message-State: AC+VfDznHIT2bOUuI3r8RGi9MU43vzvsqCISszQ7sTq9zSFd8U7FaM9m
	y4Je3r+6O0DqadsJG1+hAtk1ZfzC8v/jojaBvGk=
X-Google-Smtp-Source: ACHHUZ4ReO3uxjh5k5rw9APzBLZv58Icukc7CnHI7cJZNUVC14W3TZanYNZO2aqb9V/L/dppIWKiyw==
X-Received: by 2002:a5d:45cf:0:b0:306:2ff1:5227 with SMTP id b15-20020a5d45cf000000b003062ff15227mr1680848wrs.23.1685113249664;
        Fri, 26 May 2023 08:00:49 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 3/3] x86: Expose Automatic IBRS to guests
Date: Fri, 26 May 2023 16:00:44 +0100
Message-Id: <20230526150044.31553-4-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Expose AutoIBRS to HVM guests, since they can use it directly. Make sure
writes to EFER:AIBRSE are gated on the feature being exposed. Also hide
EFER:AIBRSE from PV guests, as they have no say in the matter.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
 xen/arch/x86/hvm/hvm.c                      | 3 +++
 xen/arch/x86/include/asm/msr-index.h        | 3 ++-
 xen/arch/x86/pv/emul-priv-op.c              | 4 ++--
 xen/include/public/arch-x86/cpufeatureset.h | 2 +-
 4 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d7d31b5393..07f39d5e61 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -936,6 +936,9 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
     if ( (value & EFER_FFXSE) && !p->extd.ffxsr )
         return "FFXSE without feature";
 
+    if ( (value & EFER_AIBRSE) && !p->extd.automatic_ibrs )
+        return "AutoIBRS without feature";
+
     return NULL;
 }
 
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 73d0af2615..49cb334c61 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -178,7 +178,8 @@
 #define  EFER_AIBRSE                        (_AC(1, ULL) << 21) /* Automatic IBRS Enable */
 
 #define EFER_KNOWN_MASK \
-    (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | EFER_SVME | EFER_FFXSE)
+    (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | EFER_SVME | EFER_FFXSE | \
+     EFER_AIBRSE)
 
 #define MSR_STAR                            0xc0000081 /* legacy mode SYSCALL target */
 #define MSR_LSTAR                           0xc0000082 /* long mode SYSCALL target */
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 8a4ef9c35e..142bc4818c 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -853,8 +853,8 @@ static uint64_t guest_efer(const struct domain *d)
 {
     uint64_t val;
 
-    /* Hide unknown bits, and unconditionally hide SVME from guests. */
-    val = read_efer() & EFER_KNOWN_MASK & ~EFER_SVME;
+    /* Hide unknown bits, and unconditionally hide SVME and AIBRSE from guests. */
+    val = read_efer() & EFER_KNOWN_MASK & ~(EFER_SVME | EFER_AIBRSE);
     /*
      * Hide the 64-bit features from 32-bit guests.  SCE has
      * vendor-dependent behaviour.
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index e3952f62bc..42401e9452 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -287,7 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
 XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
-XEN_CPUFEATURE(AUTOMATIC_IBRS,     11*32+ 8) /*   HW can handle IBRS on its own */
+XEN_CPUFEATURE(AUTOMATIC_IBRS,     11*32+ 8) /*S  HW can handle IBRS on its own */
 XEN_CPUFEATURE(CPUID_USER_DIS,     11*32+17) /*   CPUID disable for CPL > 0 software */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ebx, word 12 */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 26 15:01:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 15:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540129.841646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Yw0-00013n-9d; Fri, 26 May 2023 15:00:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540129.841646; Fri, 26 May 2023 15:00:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Yw0-00013G-4g; Fri, 26 May 2023 15:00:52 +0000
Received: by outflank-mailman (input) for mailman id 540129;
 Fri, 26 May 2023 15:00:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EF2/=BP=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q2Yvz-0000zw-51
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 15:00:51 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19fb77f0-fbd6-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 17:00:49 +0200 (CEST)
Received: by mail-wr1-x432.google.com with SMTP id
 ffacd0b85a97d-3094910b150so767740f8f.0
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 08:00:49 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 y9-20020a5d4709000000b00307d58b3da9sm5360033wrq.25.2023.05.26.08.00.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 26 May 2023 08:00:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19fb77f0-fbd6-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685113249; x=1687705249;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Aq+skqTQH86wc6+81w7SMNTxMqLNXZC2DHkkGE1IzhI=;
        b=T6BO4cqPCbGEhZ5PCjpe91ginA4nXrAzyQmSbfVLZOAvp4Zj7rvlROJFprRc/Vn0nf
         JQy0rEWso4koi/6FhMLhhDTRNPJsuaL0qguFMWUgOD+Wrx9WPODK66DK+iOUGU/MFe+o
         jX4+3FGfT93UqOM5G7XvXR/0SH7HaHE5Rr6Ms=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685113249; x=1687705249;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Aq+skqTQH86wc6+81w7SMNTxMqLNXZC2DHkkGE1IzhI=;
        b=cnItFWs/GSiNG6I5uSKA0gMxRyFc/+PV1Qi9rVwJzaCojU7kXIMgTZmgPZJXsermHB
         1DE5CBTa1h1iY3LEEXH7u+SmakVbKt2pFWSBAOql1ck7DdfehHpUZZ1sG/ABmK9kIJlw
         vOBtEX3c7fB1pUnu3cTLYpwzBs6LBcHzj2KJ8VZURSrvzsCQb3i6rgj7AzZ+bRqeQOeC
         cyatyW4XHDmckcp8Ppq7Rn0HzQPeUojE3jNUFKcEmSvSH6WCXI+kBNJ9DWafgD795PVu
         rbF58LCZ8uuch9pIa6iI0fRpEU1hz2ZhaeofhwcqI702qiNkV3V/253/8x5zQb/rNdfL
         nJcA==
X-Gm-Message-State: AC+VfDw68pMlP3kM4G+NJT7xWNATL5wyOhkmlV8dJm0jJKCXsB4GTYmh
	BMDonACzCTZIBkR1HwHFbCUeg+OgJk6N8ZaJ+zY=
X-Google-Smtp-Source: ACHHUZ4/xo+8Sgu4DB3lQdQ43O8WX7nB0TnQXrZXtMOVBIJsH6U4WYusZnccaPILzgVa7xZoMHTUPw==
X-Received: by 2002:a5d:5750:0:b0:307:8800:bbdb with SMTP id q16-20020a5d5750000000b003078800bbdbmr1427885wrw.58.1685113249085;
        Fri, 26 May 2023 08:00:49 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/3] x86: Add support for AMD's Automatic IBRS
Date: Fri, 26 May 2023 16:00:43 +0100
Message-Id: <20230526150044.31553-3-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In cases where AutoIBRS is supported by the host:

* Prefer AutoIBRS to retpolines as the BTI mitigation in the selection
  heuristics.
* Always enable AutoIBRS if IBRS is chosen as a BTI mitigation.
* Avoid stuffing the RAS/RSB on VMEXIT if AutoIBRS is enabled.
* Delay setting AutoIBRS until after dom0 is set up, just like setting
  SPEC_CTRL.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
 xen/arch/x86/setup.c     |  3 +++
 xen/arch/x86/smpboot.c   |  3 +++
 xen/arch/x86/spec_ctrl.c | 52 ++++++++++++++++++++++++++++------------
 3 files changed, 43 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 74e3915a4d..09cfef2676 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -2036,6 +2036,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
         barrier();
         wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
         info->last_spec_ctrl = default_xen_spec_ctrl;
+
+        if ( cpu_has_auto_ibrs && (default_xen_spec_ctrl & SPEC_CTRL_IBRS) )
+            write_efer(read_efer() | EFER_AIBRSE);
     }
 
     /* Copy the cpu info block, and move onto the BSP stack. */
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index cf9bb220f9..1d52c1dd0a 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -376,6 +376,9 @@ void start_secondary(void *unused)
     {
         wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
         info->last_spec_ctrl = default_xen_spec_ctrl;
+
+        if ( cpu_has_auto_ibrs && (default_xen_spec_ctrl & SPEC_CTRL_IBRS) )
+            write_efer(read_efer() | EFER_AIBRSE);
     }
     update_mcu_opt_ctrl();
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 50d467f74c..c887fc3df9 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -390,7 +390,7 @@ custom_param("pv-l1tf", parse_pv_l1tf);
 
 static void __init print_details(enum ind_thunk thunk)
 {
-    unsigned int _7d0 = 0, _7d2 = 0, e8b = 0, max = 0, tmp;
+    unsigned int _7d0 = 0, _7d2 = 0, e8b = 0, e21a = 0, max = 0, tmp;
     uint64_t caps = 0;
 
     /* Collect diagnostics about available mitigations. */
@@ -399,7 +399,10 @@ static void __init print_details(enum ind_thunk thunk)
     if ( max >= 2 )
         cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
     if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
+    {
         cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
+        cpuid(0x80000021, &e21a, &tmp, &tmp, &tmp);
+    }
     if ( cpu_has_arch_caps )
         rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
@@ -430,11 +433,12 @@ static void __init print_details(enum ind_thunk thunk)
            (e8b  & cpufeat_mask(X86_FEATURE_IBPB_RET))       ? " IBPB_RET"       : "");
 
     /* Hardware features which need driving to mitigate issues. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (e8b  & cpufeat_mask(X86_FEATURE_IBPB)) ||
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB))          ? " IBPB"           : "",
            (e8b  & cpufeat_mask(X86_FEATURE_IBRS)) ||
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB))          ? " IBRS"           : "",
+           (e21a & cpufeat_mask(X86_FEATURE_AUTOMATIC_IBRS)) ? " AUTO_IBRS"      : "",
            (e8b  & cpufeat_mask(X86_FEATURE_AMD_STIBP)) ||
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP))          ? " STIBP"          : "",
            (e8b  & cpufeat_mask(X86_FEATURE_AMD_SSBD)) ||
@@ -468,7 +472,9 @@ static void __init print_details(enum ind_thunk thunk)
            thunk == THUNK_JMP       ? "JMP" : "?",
            (!boot_cpu_has(X86_FEATURE_IBRSB) &&
             !boot_cpu_has(X86_FEATURE_IBRS))         ? "No" :
-           (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" :  "IBRS-",
+           (cpu_has_auto_ibrs &&
+            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)) ? "AUTO_IBRS+" :
+            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" : "IBRS-",
            (!boot_cpu_has(X86_FEATURE_STIBP) &&
             !boot_cpu_has(X86_FEATURE_AMD_STIBP))    ? "" :
            (default_xen_spec_ctrl & SPEC_CTRL_STIBP) ? " STIBP+" : " STIBP-",
@@ -1150,15 +1156,20 @@ void __init init_speculation_mitigations(void)
     }
     else
     {
-        /*
-         * Evaluate the safest Branch Target Injection mitigations to use.
-         * First, begin with compiler-aided mitigations.
-         */
-        if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) )
+        /* Evaluate the safest BTI mitigations with lowest overhead */
+        if ( cpu_has_auto_ibrs )
+        {
+            /*
+             * We'd rather use Automatic IBRS if present. It helps in order
+             * to avoid stuffing the RSB manually on every VMEXIT.
+             */
+            ibrs = true;
+        }
+        else if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) )
         {
             /*
-             * On all hardware, we'd like to use retpoline in preference to
-             * IBRS, but only if it is safe on this hardware.
+             * Otherwise, we'd like to use retpoline in preference to
+             * plain IBRS, but only if it is safe on this hardware.
              */
             if ( retpoline_safe() )
                 thunk = THUNK_RETPOLINE;
@@ -1357,7 +1368,9 @@ void __init init_speculation_mitigations(void)
      */
     if ( opt_rsb_hvm )
     {
-        setup_force_cpu_cap(X86_FEATURE_SC_RSB_HVM);
+        /* Automatic IBRS wipes the RSB for us on VMEXIT */
+        if ( !(ibrs && cpu_has_auto_ibrs) )
+            setup_force_cpu_cap(X86_FEATURE_SC_RSB_HVM);
 
         /*
          * For SVM, Xen's RSB safety actions are performed before STGI, so
@@ -1582,17 +1595,26 @@ void __init init_speculation_mitigations(void)
 
         bsp_delay_spec_ctrl = !cpu_has_hypervisor && default_xen_spec_ctrl;
 
-        /*
-         * If delaying MSR_SPEC_CTRL setup, use the same mechanism as
-         * spec_ctrl_enter_idle(), by using a shadow value of zero.
-         */
         if ( bsp_delay_spec_ctrl )
         {
+            /*
+             * If delaying MSR_SPEC_CTRL setup, use the same mechanism as
+             * spec_ctrl_enter_idle(), by using a shadow value of zero.
+             */
             info->shadow_spec_ctrl = 0;
             barrier();
             info->spec_ctrl_flags |= SCF_use_shadow;
             barrier();
         }
+        else if ( ibrs && cpu_has_auto_ibrs )
+        {
+            /*
+             * If we're not delaying setting SPEC_CTRL there's no need to
+             * delay setting Automatic IBRS either. Flip the toggle if
+             * supported and IBRS is expected.
+             */
+            write_efer(read_efer() | EFER_AIBRSE);
+        }
 
         val = bsp_delay_spec_ctrl ? 0 : default_xen_spec_ctrl;
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 26 15:01:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 15:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540131.841670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Yw2-0001is-Qj; Fri, 26 May 2023 15:00:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540131.841670; Fri, 26 May 2023 15:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Yw2-0001il-NA; Fri, 26 May 2023 15:00:54 +0000
Received: by outflank-mailman (input) for mailman id 540131;
 Fri, 26 May 2023 15:00:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EF2/=BP=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q2Yw1-00010A-4A
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 15:00:53 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 19a130fd-fbd6-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 17:00:49 +0200 (CEST)
Received: by mail-wr1-x431.google.com with SMTP id
 ffacd0b85a97d-30adb83aa69so400204f8f.2
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 08:00:49 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 y9-20020a5d4709000000b00307d58b3da9sm5360033wrq.25.2023.05.26.08.00.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 26 May 2023 08:00:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19a130fd-fbd6-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685113248; x=1687705248;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=XDLeIg4hyXVwKDMD+oHA+cHDIiqPkUjqW0gJGqDa3iY=;
        b=lS0i8c6WoI9s0QqxcoAmJ5//1XJoK5J+/1TLCjhECbnjiBguOP+bm16ow1csZ8ABqo
         qmCgYFnUgKqrtw877Z5LpRvAGiWxJqob6DhdG7SpwWmT+cez8P7sPAWeMrPUIwsJjK+i
         eIxNK96f7zjvDjWieUlJcug2+2K8z+MBru3O0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685113248; x=1687705248;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=XDLeIg4hyXVwKDMD+oHA+cHDIiqPkUjqW0gJGqDa3iY=;
        b=PU1ugSmk2CeVQZUk8G8NjrFUHdS1hTlhfJo18wIo4B3vRhJ0EchyXCstaEjLEpO/ze
         x754UTmj4N1MxGpDlWgzkUMv/kSB3i4wTtKZBtlbwfjx9oWC+qWCEBRWhhFoqwaxRTh+
         fjiw3KxDZm9rvwOiRN+NDzP8/XELdDy3cIsYg7/DjCuxBG6vFYe2rqd2JdBkSJwq9XKj
         2L58mh/bE6w2xPi2g9pHs9+yonoG14SshfmA6/VsssRaU1BmyX5qSB2d4S21QX/93Zv8
         BrJJCOUQ7EqzrPMWCMPvbEOUKl25rCCLbLccZWyLY+O4hX5ph0il6Yzyd3jKW8eMioU1
         dT1A==
X-Gm-Message-State: AC+VfDwc2zfjmb3wWEosgu+JFaDAb/pYQMX2aK76E5dbgoFtKyJxfpZn
	medIg+XAlwSdk0P1RnQaEMXskcDU5j29KxoC6rw=
X-Google-Smtp-Source: ACHHUZ6O2SdPZO8PSTTB62xGQM2fmOLCEpBsE31ubWtroVWUNCOUztUGBnQp2k33vsG3N+pYq+inkw==
X-Received: by 2002:adf:dc89:0:b0:307:bd64:f5a4 with SMTP id r9-20020adfdc89000000b00307bd64f5a4mr1613643wrj.52.1685113248440;
        Fri, 26 May 2023 08:00:48 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 1/3] x86: Add bit definitions for Automatic IBRS
Date: Fri, 26 May 2023 16:00:42 +0100
Message-Id: <20230526150044.31553-2-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is AMD's analogue of Intel's Enhanced IBRS. The feature is advertised
in CPUID and enabled via EFER.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
 tools/libs/light/libxl_cpuid.c              | 1 +
 tools/misc/xen-cpuid.c                      | 2 ++
 xen/arch/x86/include/asm/cpufeature.h       | 1 +
 xen/arch/x86/include/asm/msr-index.h        | 1 +
 xen/include/public/arch-x86/cpufeatureset.h | 1 +
 5 files changed, 6 insertions(+)

diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index cca0f19d93..f5ce9f9795 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -317,6 +317,7 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
 
         {"lfence+",      0x80000021, NA, CPUID_REG_EAX,  2,  1},
         {"nscb",         0x80000021, NA, CPUID_REG_EAX,  6,  1},
+        {"auto-ibrs",    0x80000021, NA, CPUID_REG_EAX,  8,  1},
         {"cpuid-user-dis", 0x80000021, NA, CPUID_REG_EAX, 17, 1},
 
         {"maxhvleaf",    0x40000000, NA, CPUID_REG_EAX,  0,  8},
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index 5d0c64a45f..e487885a5c 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -200,6 +200,8 @@ static const char *const str_e21a[32] =
     [ 2] = "lfence+",
     [ 6] = "nscb",
 
+    [ 8] = "auto-ibrs",
+
     /* 16 */                [17] = "cpuid-user-dis",
 };
 
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 50235f098d..d5947a6826 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -161,6 +161,7 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_amd_ssbd        boot_cpu_has(X86_FEATURE_AMD_SSBD)
 #define cpu_has_virt_ssbd       boot_cpu_has(X86_FEATURE_VIRT_SSBD)
 #define cpu_has_ssb_no          boot_cpu_has(X86_FEATURE_SSB_NO)
+#define cpu_has_auto_ibrs       boot_cpu_has(X86_FEATURE_AUTOMATIC_IBRS)
 
 /* CPUID level 0x00000007:0.edx */
 #define cpu_has_avx512_4vnniw   boot_cpu_has(X86_FEATURE_AVX512_4VNNIW)
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 082fb2e0d9..73d0af2615 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -175,6 +175,7 @@
 #define  EFER_NXE                           (_AC(1, ULL) << 11) /* No Execute Enable */
 #define  EFER_SVME                          (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
 #define  EFER_FFXSE                         (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */
+#define  EFER_AIBRSE                        (_AC(1, ULL) << 21) /* Automatic IBRS Enable */
 
 #define EFER_KNOWN_MASK \
     (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | EFER_SVME | EFER_FFXSE)
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 777041425e..e3952f62bc 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -287,6 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
 XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
+XEN_CPUFEATURE(AUTOMATIC_IBRS,     11*32+ 8) /*   HW can handle IBRS on its own */
 XEN_CPUFEATURE(CPUID_USER_DIS,     11*32+17) /*   CPUID disable for CPL > 0 software */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ebx, word 12 */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri May 26 15:22:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 15:22:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540142.841680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2ZGe-0005xo-MF; Fri, 26 May 2023 15:22:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540142.841680; Fri, 26 May 2023 15:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2ZGe-0005xh-JR; Fri, 26 May 2023 15:22:12 +0000
Received: by outflank-mailman (input) for mailman id 540142;
 Fri, 26 May 2023 15:22:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/nsh=BP=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1q2ZGc-0005xW-Tw
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 15:22:11 +0000
Received: from smtp-8fad.mail.infomaniak.ch (smtp-8fad.mail.infomaniak.ch
 [2001:1600:3:17::8fad])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 13f27dda-fbd9-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 17:22:08 +0200 (CEST)
Received: from smtp-2-0001.mail.infomaniak.ch (unknown [10.5.36.108])
 by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QSTDb2m0HzMqJtD;
 Fri, 26 May 2023 17:22:07 +0200 (CEST)
Received: from unknown by smtp-2-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QSTDW5jBkzMpq8r; Fri, 26 May 2023 17:22:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13f27dda-fbd9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1685114527;
	bh=q+ImSjEjmZCszdCMwbCdSpVg4DvQ3poPxVGa8t7f7U4=;
	h=Date:Subject:From:To:Cc:References:In-Reply-To:From;
	b=tQXLni40CMr4EfzplXOAcaiNi5QkukQ3CoKpUm7sPlK7iwEW6fPx8nY8GaaUiadIG
	 M5KO5fQyWE3EVFklsTrG7jSk0ERYB4XC8Vvc3kkq30lc7cfnaZ5NXLcagUfQK3M2xQ
	 HRaKSJVPM1hu3rh60mZzFma3PA+dy4ElTuUEFWyU=
Message-ID: <58a803f6-c3de-3362-673f-767767a43f9c@digikod.net>
Date: Fri, 26 May 2023 17:22:03 +0200
MIME-Version: 1.0
User-Agent:
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Content-Language: en-US
From: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>
To: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
 "Christopherson,, Sean" <seanjc@google.com>, "bp@alien8.de" <bp@alien8.de>,
 "dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
 "keescook@chromium.org" <keescook@chromium.org>,
 "hpa@zytor.com" <hpa@zytor.com>, "mingo@redhat.com" <mingo@redhat.com>,
 "tglx@linutronix.de" <tglx@linutronix.de>,
 "pbonzini@redhat.com" <pbonzini@redhat.com>,
 "wanpengli@tencent.com" <wanpengli@tencent.com>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
 "yuanyu@google.com" <yuanyu@google.com>,
 "jamorris@linux.microsoft.com" <jamorris@linux.microsoft.com>,
 "marian.c.rotariu@gmail.com" <marian.c.rotariu@gmail.com>,
 "Graf, Alexander" <graf@amazon.com>,
 "Andersen, John S" <john.s.andersen@intel.com>,
 "madvenka@linux.microsoft.com" <madvenka@linux.microsoft.com>,
 "liran.alon@oracle.com" <liran.alon@oracle.com>,
 "ssicleru@bitdefender.com" <ssicleru@bitdefender.com>,
 "tgopinath@microsoft.com" <tgopinath@microsoft.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
 "linux-security-module@vger.kernel.org"
 <linux-security-module@vger.kernel.org>, "will@kernel.org"
 <will@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>,
 "mdontu@bitdefender.com" <mdontu@bitdefender.com>,
 "linux-hardening@vger.kernel.org" <linux-hardening@vger.kernel.org>,
 "linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
 "virtualization@lists.linux-foundation.org"
 <virtualization@lists.linux-foundation.org>,
 "nicu.citu@icloud.com" <nicu.citu@icloud.com>,
 "ztarkhani@microsoft.com" <ztarkhani@microsoft.com>,
 "x86@kernel.org" <x86@kernel.org>
References: <20230505152046.6575-1-mic@digikod.net>
 <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
 <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net>
In-Reply-To: <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha


On 25/05/2023 15:59, Mickaël Salaün wrote:
> 
> On 25/05/2023 00:20, Edgecombe, Rick P wrote:
>> On Fri, 2023-05-05 at 17:20 +0200, Mickaël Salaün wrote:
>>> # How does it work?
>>>
>>> This implementation mainly leverages KVM capabilities to control the
>>> Second
>>> Layer Address Translation (or Two Dimensional Paging, e.g.,
>>> Intel's EPT or
>>> AMD's RVI/NPT) and Mode Based Execution Control (Intel's MBEC)
>>> introduced with
>>> the Kaby Lake (7th generation) architecture. This allows setting
>>> permissions on
>>> memory pages in a complementary way to the guest kernel's managed
>>> memory
>>> permissions. Once these permissions are set, they are locked and
>>> there is no
>>> way back.
>>>
>>> The first hypercall, KVM_HC_LOCK_MEM_PAGE_RANGES, enables the guest
>>> kernel to lock
>>> a set of its memory page ranges with either the HEKI_ATTR_MEM_NOWRITE
>>> or the
>>> HEKI_ATTR_MEM_EXEC attribute. The first one denies write access to a
>>> specific
>>> set of pages (allow-list approach), and the second only allows kernel
>>> execution
>>> for a set of pages (deny-list approach).
>>>
>>> The current implementation sets the whole kernel's .rodata (i.e., any
>>> const or
>>> __ro_after_init variables, which includes critical security data such
>>> as LSM
>>> parameters) and .text sections as non-writable, and the .text section
>>> is the
>>> only one where kernel execution is allowed. This is possible thanks
>>> to the new
>>> MBEC support also brought by this series (otherwise the vDSO would
>>> have to be
>>> executable). Thanks to this hardware support (VT-x, EPT and MBEC),
>>> the
>>> performance impact of such guest protection is negligible.
>>>
>>> The second hypercall, KVM_HC_LOCK_CR_UPDATE, enables a guest to pin some
>>> of its
>>> CPU control register flags (e.g., X86_CR0_WP, X86_CR4_SMEP,
>>> X86_CR4_SMAP),
>>> which is another complementary hardening mechanism.
>>>
>>> Heki can be enabled with the heki=1 boot command argument.
>>>
>>>
>>
>> Can the guest kernel ask the host VMM's emulated devices to DMA into
>> the protected data? It should go through the host userspace mappings I
>> think, which don't care about EPT permissions. Or did I miss where you
>> are protecting that another way? There are a lot of easy ways to ask
>> the host to write to guest memory that don't involve the EPT. You
>> probably need to protect the host userspace mappings, and also the
>> places in KVM that kmap a GPA provided by the guest.
> 
> Good point, I'll check this confused deputy attack. Extended KVM
> protections should indeed handle all ways to map guests' memory. I'm
> wondering if current VMMs would gracefully handle such new restrictions
> though.

I guess the host could map arbitrary data to the guest, so that needs to 
be handled, but how could the VMM (not the host kernel) bypass/update 
the EPT initially used for the guest (and potentially later mapped to the host)?


From xen-devel-bounces@lists.xenproject.org Fri May 26 15:36:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 15:36:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540146.841689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2ZTu-0007ZR-Rl; Fri, 26 May 2023 15:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540146.841689; Fri, 26 May 2023 15:35:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2ZTu-0007ZK-PA; Fri, 26 May 2023 15:35:54 +0000
Received: by outflank-mailman (input) for mailman id 540146;
 Fri, 26 May 2023 15:35:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/nsh=BP=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1q2ZTt-0007ZE-Nh
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 15:35:53 +0000
Received: from smtp-bc0c.mail.infomaniak.ch (smtp-bc0c.mail.infomaniak.ch
 [2001:1600:4:17::bc0c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fe589af6-fbda-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 17:35:51 +0200 (CEST)
Received: from smtp-3-0000.mail.infomaniak.ch (unknown [10.4.36.107])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QSTXP6kHdzMqPDr;
 Fri, 26 May 2023 17:35:49 +0200 (CEST)
Received: from unknown by smtp-3-0000.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QSTXL3K3rz1Sjg; Fri, 26 May 2023 17:35:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe589af6-fbda-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1685115349;
	bh=IVLXl6zuTaxu1Zgxrh25RMUzIB63fNH4/OIZK2HDhZ0=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=kVMTGpVqhamFd/35Tv8GgGxW+MQc9ut6C15rUQoLevvrCTeKGaS+bLkhx73UqS/CF
	 F+QUQYt6dTBHCqDv3OJzGYoC8livFVhydjlyIX+RYotxBnPtUyqUMczbhbmQDNz+Be
	 GJfCUWCNzFZiETlN+G79KrUNM0ssQag3MigoVxSk=
Message-ID: <caa8c89c-cae4-5a40-d6a1-f93ba7045d83@digikod.net>
Date: Fri, 26 May 2023 17:35:45 +0200
MIME-Version: 1.0
User-Agent:
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Content-Language: en-US
To: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
 "Christopherson,, Sean" <seanjc@google.com>,
 "dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
 "bp@alien8.de" <bp@alien8.de>, "keescook@chromium.org"
 <keescook@chromium.org>, "hpa@zytor.com" <hpa@zytor.com>,
 "mingo@redhat.com" <mingo@redhat.com>,
 "tglx@linutronix.de" <tglx@linutronix.de>,
 "pbonzini@redhat.com" <pbonzini@redhat.com>,
 "wanpengli@tencent.com" <wanpengli@tencent.com>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
 "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
 "liran.alon@oracle.com" <liran.alon@oracle.com>,
 "marian.c.rotariu@gmail.com" <marian.c.rotariu@gmail.com>,
 "Graf, Alexander" <graf@amazon.com>,
 "Andersen, John S" <john.s.andersen@intel.com>,
 "madvenka@linux.microsoft.com" <madvenka@linux.microsoft.com>,
 "ssicleru@bitdefender.com" <ssicleru@bitdefender.com>,
 "yuanyu@google.com" <yuanyu@google.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "tgopinath@microsoft.com" <tgopinath@microsoft.com>,
 "jamorris@linux.microsoft.com" <jamorris@linux.microsoft.com>,
 "linux-security-module@vger.kernel.org"
 <linux-security-module@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "will@kernel.org" <will@kernel.org>,
 "dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>,
 "mdontu@bitdefender.com" <mdontu@bitdefender.com>,
 "linux-hardening@vger.kernel.org" <linux-hardening@vger.kernel.org>,
 "linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
 "virtualization@lists.linux-foundation.org"
 <virtualization@lists.linux-foundation.org>,
 "nicu.citu@icloud.com" <nicu.citu@icloud.com>,
 "ztarkhani@microsoft.com" <ztarkhani@microsoft.com>,
 "x86@kernel.org" <x86@kernel.org>
References: <20230505152046.6575-1-mic@digikod.net>
 <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
 <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net>
 <7cb6c4c28c077bb9f866c2d795e918610e77d49f.camel@intel.com>
From: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>
In-Reply-To: <7cb6c4c28c077bb9f866c2d795e918610e77d49f.camel@intel.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha


On 25/05/2023 17:52, Edgecombe, Rick P wrote:
> On Thu, 2023-05-25 at 15:59 +0200, Mickaël Salaün wrote:
> [ snip ]
> 
>>> The kernel often creates writable aliases in order to write to
>>> protected data (kernel text, etc). Some of this is done right as
>>> text
>>> is being first written out (alternatives for example), and some
>>> happens
>>> way later (jump labels, etc). So for verification, I wonder what
>>> stage
>>> you would be verifying? If you want to verify the end state, you
>>> would
>>> have to maintain knowledge in the verifier of all the touch-ups the
>>> kernel does. I think it would get very tricky.
>>
>> For now, in the static kernel case, all rodata and text GPAs are
>> restricted, so aliasing such memory in a writable way before or after
>> the KVM enforcement would still restrict write access to this memory,
>> which could be an issue but not a security one. Do you have such
>> examples in mind?
>>
> 
> On x86, look at all the callers of the text_poke() family. In
> arch/x86/include/asm/text-patching.h.

OK, thanks!


> 
>>
>>>
>>> It also seems it will be a decent ask for the guest kernel to keep
>>> track of GPA permissions as well as normal virtual memory
>>> permissions,
>>> if this thing is not widely used.
>>
>> This would indeed be required to properly handle the dynamic cases.
>>
>>
>>>
>>> So I am wondering if you could go in two directions with this:
>>> 1. Make this a feature only for super locked down kernels (no
>>> modules,
>>> etc). Forbid any configurations that might modify text. But eBPF is
>>> used for seccomp, so you might be turning off some security
>>> protections
>>> to get this.
>>
>> Good idea. For "super locked down kernels" :), we should disable all
>> kernel executable changes with the related kernel build configuration
>> (e.g. eBPF JIT, kernel modules, kprobes…) to make sure there is no such
>> legitimate access. This looks like an acceptable initial feature.
> 
> How many users do you think will want this protection but not
> protections that would have to be disabled? The main one that came to
> mind for me is cBPF seccomp stuff.
> 
> But also, the alternative to JITing cBPF is the eBPF interpreter which
> AFAIU is considered a juicy enough target for speculative attacks that
> they created an option to compile it out. And leaving an interpreter in
> the kernel means any data could be "executed" in the normal non-
> speculative scenario, kind of working around the hypervisor executable
> protections. Dropping e/cBPF entirely would be an option, but then I
> wonder how many users you have left. Hopefully that is all correct,
> it's hard to keep track with the pace of BPF development.

seccomp-bpf doesn't rely on JIT, so it is not an issue. For eBPF, JIT is 
optional, but other text changes may be required depending on the eBPF 
program type (e.g. using kprobes).


> 
> I wonder if it might be a good idea to POC the guest side before
> settling on the KVM interface. Then you can also look at the whole
> thing and judge how much usage it would get for the different options
> of restrictions.

The next step is to handle dynamic permissions, but it will be easier to 
first implement that in KVM itself (which already has the required 
authentication code). The current interface may be flexible enough 
though; only new attribute flags should be required (and potentially an 
async mode). Anyway, this will make it possible to look at the whole thing.


> 
>>
>>
>>> 2. Loosen the rules to allow the protections to not be such a one-way
>>> enable. Get less security, but used more widely.
>>
>> This is our goal. I think both static and dynamic cases are
>> legitimate
>> and have value according to the level of security sought. This should
>> be
>> a build-time configuration.
> 
> Yea, the proper way to do this is probably to move all text handling
> stuff into a separate domain of some sort, like you mentioned
> elsewhere. It would be quite a job.

Not necessarily to move this code, but to make sure that the changes are 
legitimate (e.g. text signatures, legitimate addresses). This doesn't 
need to be perfect but it should improve the current state by increasing 
the cost of attacks.


From xen-devel-bounces@lists.xenproject.org Fri May 26 15:44:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 15:44:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540150.841700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Zbq-0000bW-Nu; Fri, 26 May 2023 15:44:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540150.841700; Fri, 26 May 2023 15:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2Zbq-0000bP-KD; Fri, 26 May 2023 15:44:06 +0000
Received: by outflank-mailman (input) for mailman id 540150;
 Fri, 26 May 2023 15:44:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Zbo-0000bF-Uc; Fri, 26 May 2023 15:44:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Zbo-0008ST-Ey; Fri, 26 May 2023 15:44:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Zbn-0002gQ-V9; Fri, 26 May 2023 15:44:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2Zbn-0000b8-UM; Fri, 26 May 2023 15:44:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8DXU7qCCSWM5u+Z2zPNW/GccCZjEBnPL/RfNrFADmg4=; b=Et6nvMUZgebEzwaqw8y+IDX2vA
	BJtFHXQnVKTzsaJ/dEgPV5bLXi4RmQyxV1yZITggwXJTZAkXb9PsU513843Blh2AywkNXSWoyvwaQ
	TStl+KtJcWm2++6RY/LHm1EMym/ghwAH6+EU7qIGjR9CMpoQXw9pVAU4OFKCLuKkE/do=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180953-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180953: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=354be8936d97d4f2cb8cc004bb0296826d89bd8d
X-Osstest-Versions-That:
    xen=380c6c170393c48852d4f2b1ea97125a399cfc61
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 15:44:03 +0000

flight 180953 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180953/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 180930

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-localmigrate/x10 fail in 180946 pass in 180953
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install    fail pass in 180946

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop  fail in 180946 like 180930
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180930
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180930
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180930
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180930
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180930
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180930
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180930
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180930
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  354be8936d97d4f2cb8cc004bb0296826d89bd8d
baseline version:
 xen                  380c6c170393c48852d4f2b1ea97125a399cfc61

Last test of basis   180930  2023-05-24 17:38:36 Z    1 days
Failing since        180938  2023-05-25 02:40:04 Z    1 days    3 attempts
Testing same since   180946  2023-05-25 15:37:15 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 354be8936d97d4f2cb8cc004bb0296826d89bd8d
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu May 25 14:58:14 2023 +0200

    public: fix comment typo regarding IOREQ Server
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 053ffa783e6e8f402ba6bafd620666a3623c0fc8
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu May 25 14:57:14 2023 +0200

    x86/iommu: adjust type in arch_iommu_hwdom_init()
    
    The 'i' iterator index stores a PDX, not a PFN, and hence the initial
    assignment of start (which stores a PFN) needs a conversion from PFN
    to PDX.
    
    This is harmless currently, as the PDX compression skips the bottom
    MAX_ORDER bits which cover the low 1MB, but still do the conversion
    from PFN to PDX for type correctness.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
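
    The PFN/PDX distinction above can be illustrated with a toy model.
    Note the hole position and width below are invented for the sketch,
    and pfn_to_pdx() here is a simplified stand-in, not Xen's actual
    implementation:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical hole parameters, chosen only for illustration:
       PDX compression drops a run of address bits that no RAM bank
       uses, so page indices stay dense. */
    #define PFN_PDX_HOLE_SHIFT  20   /* hole starts at bit 20 */
    #define PFN_PDX_HOLE_BITS   4    /* 4 unused bits removed  */

    static unsigned long pfn_to_pdx(unsigned long pfn)
    {
        unsigned long low  = pfn & ((1UL << PFN_PDX_HOLE_SHIFT) - 1);
        unsigned long high = pfn >> (PFN_PDX_HOLE_SHIFT + PFN_PDX_HOLE_BITS);

        return low | (high << PFN_PDX_HOLE_SHIFT);
    }

    int main(void)
    {
        /* Below the hole, pfn == pdx: this is why mixing up the two
           types was harmless for the low 1MB the commit mentions. */
        assert(pfn_to_pdx(0x100UL) == 0x100UL);

        /* Above the hole, the 4 unused bits are squeezed out. */
        assert(pfn_to_pdx(1UL << 24) == (1UL << 20));

        printf("pdx(0x100) = %#lx\n", pfn_to_pdx(0x100UL));
        return 0;
    }
    ```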

commit 56c0063f4e7a71ba8bb91207d7f111e971dcec02
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Thu May 25 14:56:55 2023 +0200

    xen/misra: xen-analysis.py: Improve the cppcheck version check
    
    Use tuple comparison to check the cppcheck version.
    
    Take the occasion to harden the regex, escaping the dots so that we
    check for them instead of generic characters.
    
    Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit cca2361947b3c9851b3ded6e43cc48caf5258eee
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Thu May 18 14:24:15 2023 +0200

    automation: Enable parallel build with cppcheck analysis
    
    The limitation was fixed by the commit:
    45bfff651173d538239308648c6a6cd7cbe37172
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 511b9f286c3dadd041e0d90beeff7d47c9bf3b7a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 19:15:48 2023 +0100

    x86/spec-ctrl: Remove opencoded MSR_ARCH_CAPS check
    
    MSR_ARCH_CAPS data is now included in featureset information.  Replace
    opencoded checks with regular feature ones.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 205a9f970378c31ae3e00b52d59103a2e881b9e0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 19:05:01 2023 +0100

    x86/tsx: Remove opencoded MSR_ARCH_CAPS check
    
    The current cpu_has_tsx_ctrl tristate serves a double purpose: signalling the
    first pass through tsx_init(), and the availability of MSR_TSX_CTRL.
    
    Drop the variable, replacing it with a once boolean, and altering
    cpu_has_tsx_ctrl to come out of the feature information.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
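
    The refactoring described above, splitting a double-duty tristate into a
    "first pass" boolean plus a feature-derived predicate, can be sketched
    in miniature. All names here are illustrative, not Xen's actual code:

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the featureset bit the commit derives
       cpu_has_tsx_ctrl from. */
    static bool feature_msr_tsx_ctrl;
    #define cpu_has_tsx_ctrl (feature_msr_tsx_ctrl)

    static int init_calls;

    static void tsx_init(void)
    {
        static bool once;

        if (!once) {
            once = true;
            /* First-pass work happens exactly once. */
            init_calls++;
        }

        if (cpu_has_tsx_ctrl) {
            /* Programming of MSR_TSX_CTRL would go here. */
        }
    }

    int main(void)
    {
        feature_msr_tsx_ctrl = true;

        tsx_init();
        tsx_init();

        assert(init_calls == 1);  /* once-boolean gates the first pass   */
        assert(cpu_has_tsx_ctrl); /* availability comes from the feature */
        printf("init_calls=%d\n", init_calls);
        return 0;
    }
    ```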

commit 8f6bc7f9b72eb7cf0c8c5ae5d80498a58ba0b7c3
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 16:59:25 2023 +0100

    x86/vtx: Remove opencoded MSR_ARCH_CAPS check
    
    MSR_ARCH_CAPS data is now included in featureset information.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit a87d131a8c2952e53ba9ed513d5553426cdeac34
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 16 14:07:43 2023 +0100

    x86/cpufeature: Rework {boot_,}cpu_has()
    
    One area where Xen deviates from Linux is that test_bit() forces a volatile
    read.  This leads to poor code generation, because the optimiser cannot merge
    bit operations on the same word.
    
    Drop the use of test_bit(), and write the expressions in regular C.  This
    removes the include of bitops.h (which is a frequent source of header
    tangles), and offers the optimiser far more flexibility.
    
    Bloat-o-meter reports a net change of:
    
      add/remove: 0/0 grow/shrink: 21/87 up/down: 641/-2751 (-2110)
    
    with half of that in x86_emulate() alone.  vmx_ctxt_switch_to() seems to be
    the fastpath with the greatest delta at -24, where the optimiser has
    successfully removed the branch hidden in cpu_has_msr_tsc_aux.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
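
    The codegen point above can be illustrated with a toy pair of helpers.
    These are simplified stand-ins, not Xen's test_bit(): the volatile cast
    forces a fresh load per test, whereas the plain variant lets the compiler
    load the word once and merge adjacent tests into a single mask check:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Volatile-read variant: each call forces its own load, so the
       optimiser cannot merge bit operations on the same word. */
    static int test_bit_volatile(unsigned int nr, const unsigned long *word)
    {
        return (*(const volatile unsigned long *)word >> nr) & 1;
    }

    /* Plain C variant: the optimiser may keep the word in a register
       and combine several tests into one comparison. */
    static int test_bit_plain(unsigned int nr, const unsigned long *word)
    {
        return (*word >> nr) & 1;
    }

    int main(void)
    {
        unsigned long features = (1UL << 3) | (1UL << 7);

        /* Both variants agree on the result; only codegen differs. */
        assert(test_bit_volatile(3, &features) == test_bit_plain(3, &features));
        assert(test_bit_plain(3, &features) == 1);
        assert(test_bit_plain(5, &features) == 0);

        printf("bit 3: %d, bit 5: %d\n",
               test_bit_plain(3, &features), test_bit_plain(5, &features));
        return 0;
    }
    ```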

commit bbb289f3d5bdd3358af748d7c567343532ac45b5
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 15:53:35 2023 +0100

    x86/boot: Expose MSR_ARCH_CAPS data in guest max policies
    
    We already have common and default feature adjustment helpers.  Introduce one
    for max featuresets too.
    
    Offer MSR_ARCH_CAPS unconditionally in the max policy, and stop clobbering the
    data inherited from the Host policy.  This will be necessary to level a VM
    safely for migration.  Annotate the ARCH_CAPS CPUID bit as special.  Note:
    ARCH_CAPS is still max-only for now, so will not be inherited by the default
    policies.
    
    With this done, the special case for dom0 can be shrunk to just resampling the
    Host policy (as ARCH_CAPS isn't visible by default yet).
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 70553000d6b44dd7c271a35932b0b3e1f22c5532
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 15:37:02 2023 +0100

    x86/boot: Record MSR_ARCH_CAPS for the Raw and Host CPU policy
    
    Extend x86_cpu_policy_fill_native() with a read of ARCH_CAPS based on the
    CPUID information just read, removing the special handling in
    calculate_raw_cpu_policy().
    
    Right now, the only use of x86_cpu_policy_fill_native() outside of Xen is the
    unit tests.  Getting MSR data in this context is left to whoever first
    encounters a genuine need to have it.
    
    Extend generic_identify() to read ARCH_CAPS into x86_capability[], which is
    fed into the Host Policy.  This in turn means there's no need to special case
    arch_caps in calculate_host_policy().
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit ce8c930851a5ca21c4e70f83be7e8b290ce1b519
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 18:50:59 2023 +0100

    x86/cpu-policy: MSR_ARCH_CAPS feature names
    
    Seed the default visibility from the dom0 special case, which for the most
    part just exposes the *_NO bits.  EIBRS is the one non-*_NO bit, which is
    "just" a status bit to the guest indicating a change in implementation of IBRS
    which is already fully supported.
    
    Insert a block dependency from the ARCH_CAPS CPUID bit to the entire content
    of the MSR.  This is because MSRs have no structure information similar to
    CPUID; the dependency is used by x86_cpu_policy_clear_out_of_range_leaves()
    in order to bulk-clear inaccessible words.
    
    The overall CPUID bit is still max-only, so all of MSR_ARCH_CAPS is hidden in
    the default policies.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit d9fe459ffad8a6eac2f695adb2331aff83c345d1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 17:55:21 2023 +0100

    x86/cpu-policy: Infrastructure for MSR_ARCH_CAPS
    
    Bits through 24 are already defined, meaning that we're not far off needing
    the second word.  Put both in right away.
    
    As both halves are present now, the arch_caps field is full width.  Adjust the
    unit test, which notices.
    
    The bool bitfield names in the arch_caps union are unused, and somewhat out of
    date.  They'll shortly be automatically generated.
    
    Add CPUID and MSR prefixes to the ./xen-cpuid verbose output, now that there
    are a mix of the two.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
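
    The "both halves present" layout above, a 64-bit MSR carried as two
    32-bit featureset words, can be sketched as a simple fold/unfold.
    Function names here are illustrative, not Xen's:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Fold a 64-bit MSR value into two 32-bit featureset words. */
    static void msr_to_words(uint64_t msr, uint32_t words[2])
    {
        words[0] = (uint32_t)(msr & 0xffffffffu);
        words[1] = (uint32_t)(msr >> 32);
    }

    /* Reassemble the full-width value from the two words. */
    static uint64_t words_to_msr(const uint32_t words[2])
    {
        return (uint64_t)words[0] | ((uint64_t)words[1] << 32);
    }

    int main(void)
    {
        uint64_t arch_caps = 0x000000010100002bULL; /* arbitrary sample */
        uint32_t words[2];

        msr_to_words(arch_caps, words);
        assert(words_to_msr(words) == arch_caps);

        printf("word0=%#x word1=%#x\n", words[0], words[1]);
        return 0;
    }
    ```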

commit 43912f8dbb1888ffd7f00adb10724c70e71927c4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon May 15 14:14:53 2023 +0100

    x86/boot: Adjust MSR_ARCH_CAPS handling for the Host policy
    
    We are about to move MSR_ARCH_CAPS into featureset, but the order of
    operations (copy raw policy, then copy x86_capabilities[] in) will end up
    clobbering the ARCH_CAPS value.
    
    Some toolstacks use this information to handle TSX compatibility across the
    CPUs and microcode versions where support was removed.
    
    To avoid this transient breakage, read from raw_cpu_policy rather than
    modifying it in place.  This logic will be removed entirely in due course.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit ef1987fcb0fdfaa7ee148024037cb5fa335a7b2d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri May 12 13:52:39 2023 +0100

    x86/boot: Rework dom0 feature configuration
    
    Right now, dom0's feature configuration is split between the common
    path and a dom0-specific one.  This is mostly by accident, and causes some
    very subtle bugs.
    
    First, start by clearly defining init_dom0_cpuid_policy() to apply to the
    domain that Xen builds automatically.  The late hwdom case is still
    constructed in a
    mostly normal way, with the control domain having full discretion over the CPU
    policy.
    
    Identifying this highlights a latent bug - the two halves of the MSR_ARCH_CAPS
    bodge are asymmetric with respect to the hardware domain.  This means that
    shim, or a control-only dom0 sees the MSR_ARCH_CAPS CPUID bit but none of the
    MSR content.  This in turn declares the hardware to be retpoline-safe by
    failing to advertise the {R,}RSBA bits appropriately.  Restrict this logic to
    the hardware domain, although the special case will cease to exist shortly.
    
    For the CPUID Faulting adjustment, the comment in ctxt_switch_levelling()
    isn't actually relevant.  Provide a better explanation.
    
    Move the recalculate_cpuid_policy() call outside of the dom0-cpuid= case.
    This is no change for now, but will become necessary shortly.
    
    Finally, place the second half of the MSR_ARCH_CAPS bodge after the
    recalculate_cpuid_policy() call.  This is necessary to avoid transiently
    breaking the hardware domain's view while the handling is cleaned up.  This
    special case will cease to exist shortly.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 26 16:02:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 16:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540158.841709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2ZtX-0003dY-Ai; Fri, 26 May 2023 16:02:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540158.841709; Fri, 26 May 2023 16:02:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2ZtX-0003dR-82; Fri, 26 May 2023 16:02:23 +0000
Received: by outflank-mailman (input) for mailman id 540158;
 Fri, 26 May 2023 16:02:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2ZtW-0003dH-7g; Fri, 26 May 2023 16:02:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2ZtW-0002BK-3b; Fri, 26 May 2023 16:02:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2ZtV-0003IB-KI; Fri, 26 May 2023 16:02:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2ZtV-0007WN-Jr; Fri, 26 May 2023 16:02:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Fcas7WC63mmHio/75IvbwdJmBYLQ1UmVtmo48ZNDjOk=; b=XYRodLSUaSbAMSTwhS43JraoLg
	F4v1gyV+8VoaMU9EjbLylyYN/wLtyTzhLNtF8h5QO/q4FbA9H8l4elG6hc8DEtpoXbdsYwcpJ9ri+
	luI83n2vrqxZnim4xkl/zXwJlRUZ4LZLZaBNdbT1zD8RLH0QRCQjehdiWOHdHsjlOmAI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180963-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 180963: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
X-Osstest-Versions-That:
    xen=40cd186bfd15655b7d9d3ee149292c718c208917
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 16:02:21 +0000

flight 180963 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180963/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
baseline version:
 xen                  40cd186bfd15655b7d9d3ee149292c718c208917

Last test of basis   180956  2023-05-26 08:01:52 Z    0 days
Testing same since   180963  2023-05-26 13:01:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   40cd186bfd..f54dd5b53e  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 26 16:09:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 16:09:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540164.841720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2a0X-0004Hv-0d; Fri, 26 May 2023 16:09:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540164.841720; Fri, 26 May 2023 16:09:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2a0W-0004Ho-UA; Fri, 26 May 2023 16:09:36 +0000
Received: by outflank-mailman (input) for mailman id 540164;
 Fri, 26 May 2023 16:09:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/nsh=BP=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1q2a0V-0004Hi-Kz
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 16:09:35 +0000
Received: from smtp-8fa8.mail.infomaniak.ch (smtp-8fa8.mail.infomaniak.ch
 [2001:1600:4:17::8fa8])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b37b170a-fbdf-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 18:09:33 +0200 (CEST)
Received: from smtp-2-0001.mail.infomaniak.ch (unknown [10.5.36.108])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QSVHH60KKzMqG8D;
 Fri, 26 May 2023 18:09:31 +0200 (CEST)
Received: from unknown by smtp-2-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QSVHD3cfwzMpq8P; Fri, 26 May 2023 18:09:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b37b170a-fbdf-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1685117371;
	bh=GWA0mFQb+8wMidUB9IFkGhraKuBAw7wtqDQzDAqbLj0=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=SHueMjFNo3rqrLqDwx3uLeE8+X5tcivE1oEPiVsupYo6ZiLJ/vTPNLG6eFJuARjXz
	 EChyFftAMsPJ3se0xTmj43Dxi+ekN6Tb9V8ErOqv9uDvNm67z7yhPpfobsNkHwpj/z
	 M1TjsYiSKBpBQcpWBvcaYrXlssdnU8/pqhuOht+c=
Message-ID: <4142c8dc-5385-fb1d-4f8b-2a98bb3f99af@digikod.net>
Date: Fri, 26 May 2023 18:09:14 +0200
MIME-Version: 1.0
User-Agent:
Subject: Re: [ANNOUNCE] KVM Microconference at LPC 2023
To: Paolo Bonzini <pbonzini@redhat.com>,
 James Morris <jamorris@linux.microsoft.com>
Cc: Sean Christopherson <seanjc@google.com>, Marc Zyngier <maz@kernel.org>,
 Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
 Kees Cook <keescook@chromium.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Thomas Gleixner <tglx@linutronix.de>, Vitaly Kuznetsov
 <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 John Andersen <john.s.andersen@intel.com>, Liran Alon
 <liran.alon@oracle.com>,
 "Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
 Marian Rotariu <marian.c.rotariu@gmail.com>,
 =?UTF-8?Q?Mihai_Don=c8=9bu?= <mdontu@bitdefender.com>,
 =?UTF-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 =?UTF-8?Q?=c8=98tefan_=c8=98icleru?= <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <2f19f26e-20e5-8198-294e-27ea665b706f@redhat.com>
Content-Language: en-US
From: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>
In-Reply-To: <2f19f26e-20e5-8198-294e-27ea665b706f@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha

See James Morris's proposal here: 
https://lore.kernel.org/all/17f62cb1-a5de-2020-2041-359b8e96b8c0@linux.microsoft.com/

On 26/05/2023 04:36, James Morris wrote:
 > [Side topic]
 >
 > Would folks be interested in a Linux Plumbers Conference MC on this
 > topic generally, across different hypervisors, VMMs, and architectures?
 >
 > If so, please let me know who the key folk would be and we can try
 > writing up an MC proposal.

The fine-grained memory management proposal from James Gowans looks 
interesting, especially the "side-car" virtual machines: 
https://lore.kernel.org/all/88db2d9cb42e471692ff1feb0b9ca855906a9d95.camel@amazon.com/


On 09/05/2023 11:55, Paolo Bonzini wrote:
> Hi all!
> 
> We are planning on submitting a CFP to host a KVM Microconference at
> Linux Plumbers Conference 2023. To help justify the proposal, we would
> like to gather a list of folks that would likely attend, and crowdsource
> a list of topics to include in the proposal.
> 
> For both this year and future years, the intent is that a KVM
> Microconference will complement KVM Forum, *NOT* supplant it. As you
> probably noticed, KVM Forum is going through a somewhat radical change in
> how it's organized; the conference is now free and (with some help from
> Red Hat) organized directly by the KVM and QEMU communities. Despite the
> unexpected changes and some teething pains, community response to KVM
> Forum continues to be overwhelmingly positive! KVM Forum will remain
> the venue of choice for KVM/userspace collaboration, for educational
> content covering both KVM and userspace, and to discuss new features in
> QEMU and other userspace projects.
> 
> At least on the x86 side, however, the success of KVM Forum led us
> virtualization folks to operate in relative isolation. KVM depends on
> and impacts multiple subsystems (MM, scheduler, perf) in profound ways,
> and recently we’ve seen more and more ideas/features that require
> non-trivial changes outside KVM and buy-in from stakeholders that
> (typically) do not attend KVM Forum. Linux Plumbers Conference is a
> natural place to establish such collaboration within the kernel.
> 
> Therefore, the aim of the KVM Microconference will be:
> * to provide a setting in which to discuss KVM and kernel internals
> * to increase collaboration and reduce friction with other subsystems
> * to discuss system virtualization issues that require coordination with
> other subsystems (such as VFIO, or guest support in arch/)
> 
> Below is a rough draft of the planned CFP submission.
> 
> Thanks!
> 
> Paolo Bonzini (KVM Maintainer)
> Sean Christopherson (KVM x86 Co-Maintainer)
> Marc Zyngier (KVM ARM Co-Maintainer)
> 
> 
> ===================
> KVM Microconference
> ===================
> 
> KVM (Kernel-based Virtual Machine) enables the use of hardware features
> to improve the efficiency, performance, and security of virtual machines
> created and managed by userspace.  KVM was originally developed to host
> and accelerate "full" virtual machines running a traditional kernel and
> operating system, but has long since expanded to cover a wide array of use
> cases, e.g. hosting real time workloads, sandboxing untrusted workloads,
> deprivileging third party code, reducing the trusted computing base of
> security sensitive workloads, etc.  As KVM's use cases have grown, so too
> have the requirements placed on KVM and the interactions between it and
> other kernel subsystems.
> 
> The KVM Microconference will focus on how to evolve KVM and adjacent
> subsystems in order to satisfy new and upcoming requirements: serving
> guest memory that cannot be accessed by host userspace[1], providing
> accurate, feature-rich PMU/perf virtualization in cloud VMs[2], etc.
> 
> 
> Potential Topics:
>     - Serving inaccessible/unmappable memory for KVM guests (protected VMs)
>     - Optimizing mmu_notifiers, e.g. reducing TLB flushes and spurious zapping
>     - Supporting multiple KVM modules (for non-disruptive upgrades)
>     - Improving and hardening KVM+perf interactions
>     - Implementing arch-agnostic abstractions in KVM (e.g. MMU)
>     - Defining KVM requirements for hardware vendors
>     - Utilizing "fault" injection to increase test coverage of edge cases
>     - KVM vs VFIO (e.g. memory types, a rather hot topic on the ARM side)
> 
> 
> Key Attendees:
>     - Paolo Bonzini <pbonzini@redhat.com> (KVM Maintainer)
>     - Sean Christopherson <seanjc@google.com>  (KVM x86 Co-Maintainer)
>     - Your name could be here!
> 
> [1] https://lore.kernel.org/all/20221202061347.1070246-1-chao.p.peng@linux.intel.com
> [2] https://lore.kernel.org/all/CALMp9eRBOmwz=mspp0m5Q093K3rMUeAsF3vEL39MGV5Br9wEQQ@mail.gmail.com
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri May 26 16:29:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 16:29:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540168.841730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2aJC-0006ik-Hy; Fri, 26 May 2023 16:28:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540168.841730; Fri, 26 May 2023 16:28:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2aJC-0006id-Eb; Fri, 26 May 2023 16:28:54 +0000
Received: by outflank-mailman (input) for mailman id 540168;
 Fri, 26 May 2023 16:28:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SRto=BP=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1q2aJB-0006iV-FA
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 16:28:53 +0000
Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com
 [2a00:1450:4864:20::52c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 668a4aa2-fbe2-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 18:28:52 +0200 (CEST)
Received: by mail-ed1-x52c.google.com with SMTP id
 4fb4d7f45d1cf-51458e3af68so1427299a12.2
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 09:28:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 668a4aa2-fbe2-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685118532; x=1687710532;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ti7bS5wJW8UvsSbPQEU2t0yrsJeXnjnzm7h2Il6etAs=;
        b=CuFQttPH807VxteXnOBThYKabFLqRFET9y8FVtIWtSX1DhFCxWoV6woywAE1ZmGhcr
         48cDlK8aXLx3I5Q+n16k8x3pKu9n0TBX58NtlPp87j5pBfS6npg86IUeNS+ebNgPG3GR
         2YvwSnYwTYmkh47o5pHESoG0oo2fHhJBH2lWH95OBxSWtiDjDuhukPmplMtWuC8Wr3oZ
         e+DolZoD26JMk+o400Mk1AolfvtpfFkCltA82ZEVgZtuTduYDwCEg/HlAI0MRR4XcF3z
         i0xRxx4X4X2vgx39KzTaPG446g7mBiCIyPoBialXb/ZFles30OYMuMj9SyjuayTAjenO
         eV4w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685118532; x=1687710532;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ti7bS5wJW8UvsSbPQEU2t0yrsJeXnjnzm7h2Il6etAs=;
        b=ZfO8gAyMClP+1zRTvSB6RIXxPXaO/+zfsvmSdK1qz7ks7KpWZkLn9t3k6dMn4bwpZf
         C1XmTvG9VvCs3tR3KLk2yLis/kAgDzfeecEq/yECGB+GABhnBu1DMD4UIz1LP2JOi8pj
         rkDL7c56y7y2LR1OficmJOqlWRJ059Fbz4OXixr+/CoMXvpAwmucVi5cVtsRU9PGtWB+
         UKPlGT9yPti88UM+/ZQFL8BhvCtSrVP+T7wgeQZJBAzBeFEVb2j/ptmHdER9f/L4In9m
         YAq9luwbC/u2r52ndp0S9AOQOuzq1cSFewu97nFE5EzuwnaTavrfHEP/zoRRWUfkxImT
         BkyA==
X-Gm-Message-State: AC+VfDwdOcJN440lIH+nuqW/bDKll3Ux/nh/yRtBRcoXhp1Yn28Nh6NA
	FkNSrbXrx5UVtS9svTAtO/UDqu+SBPntYapiLDg=
X-Google-Smtp-Source: ACHHUZ4JxyW/MrCdN/raA4Tf2RM8IoZptvNwS6W1fHLbkGynxSQiztoMWgs9gJQ4H+phb/8OXXJjniaYT4frqJCsY80=
X-Received: by 2002:a17:907:3e02:b0:966:2fdf:f66c with SMTP id
 hp2-20020a1709073e0200b009662fdff66cmr2813222ejc.3.1685118531674; Fri, 26 May
 2023 09:28:51 -0700 (PDT)
MIME-Version: 1.0
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com> <20230526150044.31553-3-alejandro.vallejo@cloud.com>
In-Reply-To: <20230526150044.31553-3-alejandro.vallejo@cloud.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 26 May 2023 12:28:39 -0400
Message-ID: <CAKf6xptoZhWyxHUSQ3UiLz_ysRuC7gQ2xSTAXLBp008H5TX40Q@mail.gmail.com>
Subject: Re: [PATCH 2/3] x86: Add support for AMD's Automatic IBRS
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 26, 2023 at 11:01 AM Alejandro Vallejo
<alejandro.vallejo@cloud.com> wrote:
>
> In cases where AutoIBRS is supported by the host:
>
> * Prefer AutoIBRS to retpolines as BTI mitigation in heuristics
>   calculations.
> * Always enable AutoIBRS if IBRS is chosen as a BTI mitigation.
> * Avoid stuffing the RAS/RSB on VMEXIT if AutoIBRS is enabled.
> * Delay setting AutoIBRS until after dom0 is set up, just like setting
>   SPEC_CTRL.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> ---

> --- a/xen/arch/x86/spec_ctrl.c
> +++ b/xen/arch/x86/spec_ctrl.c
> @@ -390,7 +390,7 @@ custom_param("pv-l1tf", parse_pv_l1tf);

> @@ -399,7 +399,10 @@ static void __init print_details(enum ind_thunk thunk)
>      if ( max >= 2 )
>          cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
>      if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
> +    {
>          cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
> +        cpuid(0x80000021, &e21a, &tmp, &tmp, &tmp);
> +    }

Do you need to check boot_cpu_data.extended_cpuid_level >= 0x80000021?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri May 26 16:47:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 16:47:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540172.841740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2abI-0000iE-1h; Fri, 26 May 2023 16:47:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540172.841740; Fri, 26 May 2023 16:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2abH-0000i7-V8; Fri, 26 May 2023 16:47:35 +0000
Received: by outflank-mailman (input) for mailman id 540172;
 Fri, 26 May 2023 16:47:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gDTS=BP=citrix.com=prvs=5031e17c9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2abF-0000i0-Q7
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 16:47:34 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 007e675c-fbe5-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 18:47:31 +0200 (CEST)
Received: from mail-bn1nam02lp2047.outbound.protection.outlook.com (HELO
 NAM02-BN1-obe.outbound.protection.outlook.com) ([104.47.51.47])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 26 May 2023 12:47:06 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ2PR03MB7380.namprd03.prod.outlook.com (2603:10b6:a03:567::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.17; Fri, 26 May
 2023 16:47:00 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.018; Fri, 26 May 2023
 16:47:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 007e675c-fbe5-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685119651;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=DLZyZJs4KbdlkzBKyvf6GyZdY1cwG41VPNi2ch6Zgms=;
  b=Mp37JY4uaqjvWnvfC3gLpPvwxz22WvltQuXvTOMoTR9UXMvFgChwPXdo
   3rzVaiw4rrn+LZ/ohEGQizsTyuBGxCsNpsC9QxlN8W25XI53AIYq5Ke58
   SJpHwdqe9q/Rh4c9f0ZOKX06XkkMBTVTw6Ba459D4HBJC00GGSqbOd+Di
   I=;
X-IronPort-RemoteIP: 104.47.51.47
X-IronPort-MID: 110585841
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,194,1681185600"; 
   d="scan'208";a="110585841"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d1JIEWfiGq5PJT5guKHvoA8bvNrPAWmVgkbKuu2k21ooBJyJSFlHQyq4RL+mMKUy+Qq4ydG8mKiFTwMgxNgjSqDbeZQnEbQb7wji5uh3qb+tbhoM9VHb4ZA6SnC1v0MsKnnHF2ZkyVW/yQ9JMvDWF+T/yA2L7EOfhW7649XX9qHOKg5zwdQClvwyqFojje2HfunOg9w5JTL0B4ClpyhSjnXPRxnK2ARXqGo/bwbrg+OAEtsMsntHToyff1mdib5yBhleRvjTHrIkv7VxNoOaoERf8mynrAcEP89qPxKGaNay09YkfXLlkMuJmpc92Td+tZQSabXYUhfNrtW3Fo57zQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=MEMqJE2mFYOW2UV5EWUK73CYF5A4qYWDRnHheCtco0c=;
 b=gdaJJJbC8OKFYN8M+S5G/J4d9bgrnYzATHrJjfpsgsEPQjiI1rP1dyx+LvO+FMj61Xc7XvPAiVtCvK4AsHlrzseU+DfUj2uZpZ2eCHI9EfpOFfHmpf8qcaBrpY4Z8fqs3kchjWFikjhyv3ZiQoHhj0o/0yDQ6WM+MSw5ZBhLgOXps8XD/v+Y8T5IzXlkTw0KC31cQPYLJlaLi/DMp/qpyX+2p6QmouQiW0wtvuU0/IGFNpWM8c1CiboKP+d8wlwHf7LvrPJo8EdWuTq84+qPajjzYDP69+Bk0VOZ8UEs3QoL4smQMkWj7rq/FhRm7bXHMErteVSiRTEgQDTTgbkp5g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MEMqJE2mFYOW2UV5EWUK73CYF5A4qYWDRnHheCtco0c=;
 b=Wwi0lokETkm2Wb2KFuWAHK+17TXrPKbub1jVOfHb29qfw+BKSBtrWlDdxOYLBeIjg+wZ8H+jP9YejTxsePN3brBwLALUlzSEs1HKno5tzEaRZetKtAId1ZfXSd+OHEmjkRbZUT9WR8qSb5eK/tZ33jB2h9BgVsGO+Dsg30aI1zk=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <d2d13e52-6dd7-6783-f9ba-337afa464f40@citrix.com>
Date: Fri, 26 May 2023 17:46:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/3] x86: Add bit definitions for Automatic IBRS
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
 <20230526150044.31553-2-alejandro.vallejo@cloud.com>
In-Reply-To: <20230526150044.31553-2-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P265CA0047.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2ac::14) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ2PR03MB7380:EE_
X-MS-Office365-Filtering-Correlation-Id: 67417f1f-5706-4ebe-9520-08db5e08d1f3
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 67417f1f-5706-4ebe-9520-08db5e08d1f3
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 May 2023 16:47:00.3552
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: XynSf0Doy34s9tOMKsBoefH0MZg/Dmu0zWGNjNA5AEIneTrD7+r6Q5xMj8KrHF8++yip4lMGr2wUQCb6AXSHLKiWVsx7ofo7E+4vKDC+OUg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR03MB7380

On 26/05/2023 4:00 pm, Alejandro Vallejo wrote:
> This is AMD's version of Intel's Enhanced IBRS. Exposed in CPUID
> and toggled in EFER.

AIBRS and EIBRS are very much not the same, and I argued hard not to
have Linux confuse the two, but alas.

Don't mention EIBRS at all.

Simply "Auto IBRS is a new feature in AMD Zen4 CPUs and later, intended
to reduce the overhead involved with operating IBRS", or something along
these lines.

> diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
> index 5d0c64a45f..e487885a5c 100644
> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -200,6 +200,8 @@ static const char *const str_e21a[32] =
>      [ 2] = "lfence+",
>      [ 6] = "nscb",
>  
> +    [ 8] = "auto-ibrs",
> +

This wants to be:

     [ 6] = "nscb",
+    [ 8] = "auto-ibrs",

as they are adjacent with names in two columns.  Gaps are only for
discontinuities in numbering.

> diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
> index 777041425e..e3952f62bc 100644
> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -287,6 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
>  /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
>  XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
>  XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
> +XEN_CPUFEATURE(AUTOMATIC_IBRS,     11*32+ 8) /*   HW can handle IBRS on its own */

Where possible, we want to use the same names.  AUTO_IBRS is fine here,
and shorter to use throughout Xen.

Furthermore, it must match the cpu_has_* name, and that's already in the
better form.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 26 16:49:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 16:49:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540178.841750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2acz-0001KA-GA; Fri, 26 May 2023 16:49:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540178.841750; Fri, 26 May 2023 16:49:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2acz-0001K3-D9; Fri, 26 May 2023 16:49:21 +0000
Received: by outflank-mailman (input) for mailman id 540178;
 Fri, 26 May 2023 16:49:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/nsh=BP=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1q2acy-0001Jx-U3
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 16:49:20 +0000
Received: from smtp-8fae.mail.infomaniak.ch (smtp-8fae.mail.infomaniak.ch
 [83.166.143.174]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 41a59e6f-fbe5-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 18:49:19 +0200 (CEST)
Received: from smtp-2-0001.mail.infomaniak.ch (unknown [10.5.36.108])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QSW9963HFzMqBk5;
 Fri, 26 May 2023 18:49:17 +0200 (CEST)
Received: from unknown by smtp-2-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QSW954GcnzMpq7x; Fri, 26 May 2023 18:49:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41a59e6f-fbe5-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1685119757;
	bh=aghhZZz/jAHNei4obfycQdpoNYo23x/hMTO/WYvr+D4=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=aNpuEEhr3qfgoQBoABl/c/Soy37k/dWbg29l53qY084NrLE1aBB96NeIw1GJ+t33V
	 pd9trvFbpyUukW/iVPpk0eBX/Jwsza/CoS5PoumlcDlW/vnInB0QV0Wjo7cpI7eeQk
	 00f3/lqql1UIb9Yq02grk2akHYn3y+PAXnI212IM=
Message-ID: <7671b432-569a-d176-315b-d5f66fe205ef@digikod.net>
Date: Fri, 26 May 2023 18:49:12 +0200
MIME-Version: 1.0
User-Agent:
Subject: Re: [PATCH v1 6/9] KVM: x86: Add Heki hypervisor support
Content-Language: en-US
To: Wei Liu <wei.liu@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H . Peter Anvin" <hpa@zytor.com>,
 Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>,
 Paolo Bonzini <pbonzini@redhat.com>, Sean Christopherson
 <seanjc@google.com>, Thomas Gleixner <tglx@linutronix.de>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 James Morris <jamorris@linux.microsoft.com>,
 John Andersen <john.s.andersen@intel.com>, Liran Alon
 <liran.alon@oracle.com>,
 "Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
 Marian Rotariu <marian.c.rotariu@gmail.com>,
 =?UTF-8?Q?Mihai_Don=c8=9bu?= <mdontu@bitdefender.com>,
 =?UTF-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 =?UTF-8?Q?=c8=98tefan_=c8=98icleru?= <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-7-mic@digikod.net>
 <ZFlnJRsJh2fX3IJb@liuwe-devbox-debian-v2>
From: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>
In-Reply-To: <ZFlnJRsJh2fX3IJb@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha


On 08/05/2023 23:18, Wei Liu wrote:
> On Fri, May 05, 2023 at 05:20:43PM +0200, Mickaël Salaün wrote:
>> From: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
>>
>> Each supported hypervisor in x86 implements a struct x86_hyper_init to
>> define the init functions for the hypervisor.  Define a new init_heki()
>> entry point in struct x86_hyper_init.  Hypervisors that support Heki
>> must define this init_heki() function.  Call init_heki() of the chosen
>> hypervisor in init_hypervisor_platform().
>>
>> Create a heki_hypervisor structure that each hypervisor can fill
>> with its data and functions. This will allow the Heki feature to work
>> in a hypervisor agnostic way.
>>
>> Declare and initialize a "heki_hypervisor" structure for KVM so KVM can
>> support Heki.  Define the init_heki() function for KVM.  In init_heki(),
>> set the hypervisor field in the generic "heki" structure to the KVM
>> "heki_hypervisor".  After this point, generic Heki code can access the
>> KVM Heki data and functions.
>>
> [...]
>> +static void kvm_init_heki(void)
>> +{
>> +	long err;
>> +
>> +	if (!kvm_para_available())
>> +		/* Cannot make KVM hypercalls. */
>> +		return;
>> +
>> +	err = kvm_hypercall3(KVM_HC_LOCK_MEM_PAGE_RANGES, -1, -1, -1);
> 
> Why not do a proper version check or capability check here? If the ABI
> or supported features ever change then we have something to rely on?

The attributes will indeed get extended, but I wanted to have a simple
proposal for now.

Do you mean getting the version of this hypercall, e.g. with a dedicated
flag, like the landlock_create_ruleset() syscall's
LANDLOCK_CREATE_RULESET_VERSION flag?


> 
> Thanks,
> Wei.


From xen-devel-bounces@lists.xenproject.org Fri May 26 17:19:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 17:19:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540182.841759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2b5o-0004lL-Qx; Fri, 26 May 2023 17:19:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540182.841759; Fri, 26 May 2023 17:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2b5o-0004lE-O4; Fri, 26 May 2023 17:19:08 +0000
Received: by outflank-mailman (input) for mailman id 540182;
 Fri, 26 May 2023 17:19:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EF2/=BP=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q2b5n-0004l8-5O
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 17:19:07 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 688c5b79-fbe9-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 19:19:04 +0200 (CEST)
Received: by mail-wr1-x431.google.com with SMTP id
 ffacd0b85a97d-3093a6311dcso936100f8f.1
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 10:19:02 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 b5-20020a5d45c5000000b002fda1b12a0bsm5714002wrs.2.2023.05.26.10.19.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 26 May 2023 10:19:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 688c5b79-fbe9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685121541; x=1687713541;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=eqGxSIBES8XmrEgWWbBvNFGZokW+NPftAYdnxRGzyhg=;
        b=KYfwKhLF9AKtwMXTepu4DRRgVgzLuHYBlGPenk9W+lW0kS/WmrKKCuLwFW2Hmn5YlT
         KfSsWQ++xGRWL544zmAMQHFS5UdV3NgE5amL0yh15LlByHi8wwPlj/ipHFEtpGO0COtu
         TrQvQW5M3PcJqEdfcwhmeIo3Lb0lhy9F0XSeI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685121541; x=1687713541;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=eqGxSIBES8XmrEgWWbBvNFGZokW+NPftAYdnxRGzyhg=;
        b=bU86WfWkEPJW0Tl94LAsch1sQ4wrN08KGhazHg+9TWk2ZZreqIhMZlRFK1c/7DG977
         awf6CxPOCdMBbZnmYq3rmaTNTL1rBui1Teero3ZkJpS28PztMv0E9JfhM8bKr/4Lhuxh
         iiKCbE/h04azYvqRCsvA5FM9KUu22+TJ4yBZwIEfnHkWUcYmjd7AoCCbwF3WrVy+Ao8P
         RgxnShsRZOrNEh6AfKlJd+gmc6qWsu3pW8u7JyT/lmsO0qeFvqoGos6AKRXEEDCtloUO
         0sxEIYTjQlTXUNEpONRe8rVq8pladOJ4/yRYqm1LX93vgjbP6HmbUlQMCQuuGFMljINk
         3K4w==
X-Gm-Message-State: AC+VfDzsjMYAVSQMfiH3dXTZQI9Y1HBsNEHd/UycfGfGUMSWGUnM6voa
	H8Ai8wdifgWjWaBg46R0WXRHPQ==
X-Google-Smtp-Source: ACHHUZ5wNVgMhbx7SwRgiA8GEr4oXwAWEHITbixCDJ0Rk0G4CSDh+vmCfAEvcYqx5ztfJU6ZtVW8Fw==
X-Received: by 2002:a5d:594a:0:b0:309:4d12:64e7 with SMTP id e10-20020a5d594a000000b003094d1264e7mr2059711wri.31.1685121541506;
        Fri, 26 May 2023 10:19:01 -0700 (PDT)
Message-ID: <6470ea05.5d0a0220.39cd9.4208@mx.google.com>
X-Google-Original-Message-ID: <ZHDqA0i3vZYyD9G6@EMEAENGAAD19049.>
Date: Fri, 26 May 2023 18:18:59 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] x86: Add support for AMD's Automatic IBRS
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
 <20230526150044.31553-3-alejandro.vallejo@cloud.com>
 <CAKf6xptoZhWyxHUSQ3UiLz_ysRuC7gQ2xSTAXLBp008H5TX40Q@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKf6xptoZhWyxHUSQ3UiLz_ysRuC7gQ2xSTAXLBp008H5TX40Q@mail.gmail.com>

On Fri, May 26, 2023 at 12:28:39PM -0400, Jason Andryuk wrote:
> >      if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
> > +    {
> >          cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
> > +        cpuid(0x80000021, &e21a, &tmp, &tmp, &tmp);
> > +    }
> 
> Do you need to check boot_cpu_data.extended_cpuid_level >= 0x80000021?
> 
> Regards,
> Jason

True that, yes. Will do on v2.

Alejandro


From xen-devel-bounces@lists.xenproject.org Fri May 26 17:25:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 17:25:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540185.841770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2bBr-0006DG-GD; Fri, 26 May 2023 17:25:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540185.841770; Fri, 26 May 2023 17:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2bBr-0006D9-DB; Fri, 26 May 2023 17:25:23 +0000
Received: by outflank-mailman (input) for mailman id 540185;
 Fri, 26 May 2023 17:25:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EF2/=BP=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q2bBp-0006D3-VP
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 17:25:22 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 49e08057-fbea-11ed-8611-37d641c3527e;
 Fri, 26 May 2023 19:25:20 +0200 (CEST)
Received: by mail-wr1-x431.google.com with SMTP id
 ffacd0b85a97d-3095557dd99so963277f8f.1
 for <xen-devel@lists.xenproject.org>; Fri, 26 May 2023 10:25:20 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 q13-20020a7bce8d000000b003f43f82001asm9460677wmj.31.2023.05.26.10.25.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 26 May 2023 10:25:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49e08057-fbea-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685121919; x=1687713919;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id:from:to
         :cc:subject:date:message-id:reply-to;
        bh=tVWBGoKTqzPYFhiehgdZZa3uz8aCOBB1zrZo2N0D55Q=;
        b=AQ6dDJ+LkP78tqjrY8Vvk2shOTpgvsMv8OH9Wh+cpyiZ+k4A5yAfPpOjxMGNBSfgNK
         aYDbiasIwMf7Sv5nHC8J+rM8SokJmpJVA9AOozY8tVcGGBBLh2hLRuaLNSl3i99+UT4j
         F+NqMPjLtO6B+/2Xyj2vMPciutYjSYZQxyOi4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685121919; x=1687713919;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=tVWBGoKTqzPYFhiehgdZZa3uz8aCOBB1zrZo2N0D55Q=;
        b=fBdqbdF3dDCQhx8PQKrbBSjQDfLPOjEuZv0ZDyigztvH4Cuu2N+V7bmLeDqEkT/zSL
         +xfK9CPRnYtKNLWD/uolZq1k48iO4eShfDJgqZCOAh/CB5zPxgsQKIgGS993MhD46Nw6
         kUJKptcE6MZu/jlo+ZCm9wE5e67Qzd1y7reZkDdkAWzJqnztq5ttSHuX+RB6bMDLjdmv
         HJgwOC5AOsQ+H+gHuSUUp316D9UzRH/1ZpGws7M3VL4sylo4zBnZiGR9SjQ+dVJrdFtl
         a5fzdFBnD1zQVdPUd8tdHMSKC9bPpfeBsV6YEpCzmDQa1fBKO8nZJnEDhtOtsK7tE8nu
         PN7A==
X-Gm-Message-State: AC+VfDxtbdPGN6N3RdL/l9ZPh/UtIVjTqYP35nmfLCWqMgLoxQ1lrM01
	Or2x9juZdnLWMuymQUszYSznyQ==
X-Google-Smtp-Source: ACHHUZ7Hlfd20juY4xKjmeGQcTlONIjcfAs4Iuu1GQQKdU/Na9fZihQ1YFRJA2Pw4N2/vTo9hUpOEg==
X-Received: by 2002:adf:e4cf:0:b0:309:4f21:ec7b with SMTP id v15-20020adfe4cf000000b003094f21ec7bmr2014174wrm.30.1685121919510;
        Fri, 26 May 2023 10:25:19 -0700 (PDT)
Message-ID: <6470eb7f.7b0a0220.994d2.c853@mx.google.com>
X-Google-Original-Message-ID: <ZHDrfZpfrKQMjVvT@EMEAENGAAD19049.>
Date: Fri, 26 May 2023 18:25:17 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>, Jan Beulich <jbeulich@suse.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH 1/3] x86: Add bit definitions for Automatic IBRS
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
 <20230526150044.31553-2-alejandro.vallejo@cloud.com>
 <d2d13e52-6dd7-6783-f9ba-337afa464f40@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d2d13e52-6dd7-6783-f9ba-337afa464f40@citrix.com>

On Fri, May 26, 2023 at 05:46:51PM +0100, Andrew Cooper wrote:
> AIBRS and EIBRS are very much not the same, and I argued hard to not
> have Linux confuse the two, but alas.
> 
> Don't mention EIBRS at all.
> 
> Simply "Auto IBRS is a new feature in AMD Zen4 CPUs and later, intended
> to reduce the overhead involved with operating IBRS", or something along
> these lines.
Sure. I did go out of my way to avoid ambiguity in the code, but it's true
the commit message is unfortunate.

> 
> > diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
> > index 5d0c64a45f..e487885a5c 100644
> > --- a/tools/misc/xen-cpuid.c
> > +++ b/tools/misc/xen-cpuid.c
> > @@ -200,6 +200,8 @@ static const char *const str_e21a[32] =
> >      [ 2] = "lfence+",
> >      [ 6] = "nscb",
> >  
> > +    [ 8] = "auto-ibrs",
> > +
> 
> This wants to be:
> 
>  [ 6] = "nscb",
> + [ 8] = "auto-ibrs",
> 
> as they are adjacent with names in two columns. Gaps are only for
> discontinuities in numbering.
>
> [...]
>
> Where possible, we want to use the same names. AUTO_IBRS is fine here,
> and shorter to use throughout Xen.
Sure to both.

Alejandro


From xen-devel-bounces@lists.xenproject.org Fri May 26 17:32:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 17:32:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540188.841779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2bIn-0007eO-7w; Fri, 26 May 2023 17:32:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540188.841779; Fri, 26 May 2023 17:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2bIn-0007eH-5H; Fri, 26 May 2023 17:32:33 +0000
Received: by outflank-mailman (input) for mailman id 540188;
 Fri, 26 May 2023 17:32:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2bIl-0007e7-4P; Fri, 26 May 2023 17:32:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2bIk-0002U7-SH; Fri, 26 May 2023 17:32:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2bIk-0005iu-Ky; Fri, 26 May 2023 17:32:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2bIk-0002Ip-KT; Fri, 26 May 2023 17:32:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kMI66ochNxeLyOw81Fpz/gGZiCtm1wTu3JsQEpvn1y4=; b=QLBQeR7M55wEXI0YqXizSDeS57
	vB/YZ84gXE86W1m+zfd0zE0IYWLNg1waH7D2i43eCt/Vonb8XA1zzm7a6+oBZF4fsn+HHV3cfgaE3
	Uk7AKr8430FLVb6XaeK0tV0QfHsRLE2ICCQReR7hZUjEu12kW45nil4HhTy2/Gk2JGhE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180955-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180955: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=a8c983d0fa1ba915f7a75deeceb20da1c88fd1db
X-Osstest-Versions-That:
    libvirt=96c8d39af007000daf3d5dfa845365f66379aaac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 17:32:30 +0000

flight 180955 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180955/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180939
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180939
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180939
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              a8c983d0fa1ba915f7a75deeceb20da1c88fd1db
baseline version:
 libvirt              96c8d39af007000daf3d5dfa845365f66379aaac

Last test of basis   180939  2023-05-25 04:20:22 Z    1 days
Testing same since   180955  2023-05-26 04:18:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   96c8d39af0..a8c983d0fa  a8c983d0fa1ba915f7a75deeceb20da1c88fd1db -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 26 17:40:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 17:40:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540194.841790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2bPz-0008JM-09; Fri, 26 May 2023 17:39:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540194.841790; Fri, 26 May 2023 17:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2bPy-0008JF-TG; Fri, 26 May 2023 17:39:58 +0000
Received: by outflank-mailman (input) for mailman id 540194;
 Fri, 26 May 2023 17:39:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gDTS=BP=citrix.com=prvs=5031e17c9=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q2bPx-0008J9-Lp
 for xen-devel@lists.xenproject.org; Fri, 26 May 2023 17:39:57 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5338a76e-fbec-11ed-b230-6b7b168915f2;
 Fri, 26 May 2023 19:39:56 +0200 (CEST)
Received: from mail-mw2nam12lp2044.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.44])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 26 May 2023 13:39:53 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM6PR03MB5372.namprd03.prod.outlook.com (2603:10b6:5:24f::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.19; Fri, 26 May
 2023 17:39:46 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.018; Fri, 26 May 2023
 17:39:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5338a76e-fbec-11ed-b230-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685122796;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ExOwuS4NkhFIgCw2A0pWS2ykuSTUwNMkpuCajmdIFJs=;
  b=Wy+KO2oVQamiPG/zhMgkaOmUDlew3jNFWNTPHUi5+tk5rbi0dnWW84Qz
   kFRLXaZHEY/3QvSeyFMEebd8DU1KgbvnLFm8FaOznbes+Rt5AQ+Khrlw6
   si5wkDli0DDKyXMtpbTrxyl7jPovzY4yrODiZmpEonIRJDOjJTGQio9rv
   w=;
X-IronPort-RemoteIP: 104.47.66.44
X-IronPort-MID: 110459821
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,194,1681185600"; 
   d="scan'208";a="110459821"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oBn0EJNV4+Ro4kpzCAxBQ+gmFij3Ci2wKh+/fN5ybYkMG6zMnJYSNres6J97b44uFD32mazN6V7ArswNfXxaUjLmUbvKW+pmSq1rx95ObLxk4r9/sggJa1MeAlKvgWp9RLnFb+Ft4037q/kSDkYxYmpRilpVFFfekTzF5nSC96/9r6ztVktkEPLm1GIqZYyS6+ph2TTt1JFgzNjTQg4rsoDoJDGkYC59IEWuBfBjVl3TkwtIheLzI20pdEv5byjAQue7ZePL4UeqFUz5oUYlJ9FkhCJgyqAbnSUDVJNFTFBA+2ROpgYhC9i0OmKggeE/JiLVDzg301CC1Y0yRuFA0g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ExOwuS4NkhFIgCw2A0pWS2ykuSTUwNMkpuCajmdIFJs=;
 b=HsyevtedgjW2CnGvOO3kFlU+pkzmI1vtMhngrYrOBMkl9K3NTQ65Zi+Yf0Y8B2Aqucz+ve8rW5DhnE5tAhR+agtWtE0bA1ckcbBwl6tNJpYUsIWkNZoQWuXMvZl6EtxUuWjPDeCAaM/rzp9tIEnaheh2dnXlOdjH8ITlKVdLoGlfQCJsphcjw1GLKxdC7x69QzYvo4ZpYKkrTQUzzlXmngBADK8urwnWCobbt0xp3Z1GcWksNcmzo/50KY1LYa/wAWIpm6cERSj3wRiEc1Xt1Yl3ifKFj4WhsoE9T70zQ0orf5CsJ6IQ7go2xRBVlc633xIUMFFbH49UMGbPtSmzgw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ExOwuS4NkhFIgCw2A0pWS2ykuSTUwNMkpuCajmdIFJs=;
 b=Ls01Jlf3yysfHiBu3dFmJ6FrmtABp2pkk7mhrccklRHxNxZgped0/7KWk/rg9I6T7piSjJWqd25bk5IUhwQvwOsi9zKVeB6ctFXxp9MK/3nHUPG//WxZxe+bPv4byJc3g7BoGpFFgDSFbQz6hLrE5/TXpb47JnRZuEC662dVv2g=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <0b35054e-6641-91b8-562e-cd5766300966@citrix.com>
Date: Fri, 26 May 2023 18:39:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/3] x86: Expose Automatic IBRS to guests
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
 <20230526150044.31553-4-alejandro.vallejo@cloud.com>
In-Reply-To: <20230526150044.31553-4-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LNXP265CA0065.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5d::29) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM6PR03MB5372:EE_
X-MS-Office365-Filtering-Correlation-Id: 8680d15b-2611-4bfe-4018-08db5e103196
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8680d15b-2611-4bfe-4018-08db5e103196
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 May 2023 17:39:45.8515
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5372

On 26/05/2023 4:00 pm, Alejandro Vallejo wrote:
> Expose AutoIBRS to HVM guests, because they can just use it. Make sure
> writes to EFER:AIBRSE are gated on the feature being exposed. Also hide
> EFER:AIBRSE from PV guests as they have no say in the matter.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>

It's worth saying "EFER is fully switched by VMRUN, so there's nothing
further for Xen to do in order for HVM guests to use AutoIBRS".

We can in principle support AutoIBRS on PV guests, but it's fine not to
for now.

This patch probably wants reordering to #2, because it is entirely
independent of what Xen is doing with AutoIBRS for spec safety.

It will need a minor rebase over the bit name shortening, but otherwise
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 26 18:31:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 18:31:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540200.841799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2cDn-0006D7-NB; Fri, 26 May 2023 18:31:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540200.841799; Fri, 26 May 2023 18:31:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2cDn-0006D0-KS; Fri, 26 May 2023 18:31:27 +0000
Received: by outflank-mailman (input) for mailman id 540200;
 Fri, 26 May 2023 18:31:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2cDm-0006Cq-3F; Fri, 26 May 2023 18:31:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2cDl-00053M-Pl; Fri, 26 May 2023 18:31:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2cDl-0000wb-83; Fri, 26 May 2023 18:31:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2cDl-0004iB-7Z; Fri, 26 May 2023 18:31:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XZqu31LM3SMSyAPgKe3rFSoCGnH/VkE2MxeDSJ0A9i4=; b=giZRCyhMwg6Em5/5vnexa5uFfD
	xsazmlcd0X3/Dbz+iPRjztRprv0Uk69uN0KxBxO63ic/2PZq2wjchZjSrcBEuBqzvWFIxt46qLtmJ
	KtC3kzgDSa2tV76FeZ/u1TNuf/C7fowFbMqInbnaHuHltt6ne8X5fxKvTJxB7cWM8sD4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180962-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180962: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=a3cb6d5004ff638aefe686ecd540718a793bd1b1
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 18:31:25 +0000

flight 180962 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180962/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                a3cb6d5004ff638aefe686ecd540718a793bd1b1
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    9 days
Failing since        180699  2023-05-18 07:21:24 Z    8 days   36 attempts
Testing same since   180949  2023-05-25 22:08:34 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 7534 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 26 23:01:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 23:01:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540213.841810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2gR1-0007Ho-RV; Fri, 26 May 2023 23:01:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540213.841810; Fri, 26 May 2023 23:01:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2gR1-0007Hh-OF; Fri, 26 May 2023 23:01:23 +0000
Received: by outflank-mailman (input) for mailman id 540213;
 Fri, 26 May 2023 23:01:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2gR0-0007HX-GL; Fri, 26 May 2023 23:01:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2gR0-0003KO-1X; Fri, 26 May 2023 23:01:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2gQz-0003vO-JT; Fri, 26 May 2023 23:01:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2gQz-0002pB-J1; Fri, 26 May 2023 23:01:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3uPn305yzZtqOEPCb4EFIIoWDEmNvejjNCGmfIowl7g=; b=ddGl20srFAYFtqPmG6fBG77Mgf
	ke9Z8ByFYrx6pfyrN0cxTK4FWmpn1U/wMwtf1LJNYhhAwZZffLgcbiJArZE+Y21OuPn3fAl8pfeXc
	v1zs7IGFhbeKfUCq1RtO00Jhs5zCbbNpBMZkT2mFE/OCdjb6tJd000Y6B7Mz27F7d06k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180966-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180966: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=f9bdb3818faae00b950f6a09eda1fa40193ef1f5
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 23:01:21 +0000

flight 180966 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180966/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                f9bdb3818faae00b950f6a09eda1fa40193ef1f5
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    9 days
Failing since        180699  2023-05-18 07:21:24 Z    8 days   37 attempts
Testing same since   180966  2023-05-26 18:39:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 7943 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 26 23:19:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 26 May 2023 23:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540221.841819 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2giA-0000V0-ET; Fri, 26 May 2023 23:19:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540221.841819; Fri, 26 May 2023 23:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2giA-0000Ut-BZ; Fri, 26 May 2023 23:19:06 +0000
Received: by outflank-mailman (input) for mailman id 540221;
 Fri, 26 May 2023 23:19:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2gi8-0000Uj-P2; Fri, 26 May 2023 23:19:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2gi8-0003mQ-H9; Fri, 26 May 2023 23:19:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2gi7-0004QA-Vu; Fri, 26 May 2023 23:19:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2gi7-00077h-VR; Fri, 26 May 2023 23:19:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Yb61s4tC8SI8Ke5C/Prk/teq3bESrfmBp7WMeE0Okww=; b=rjLQqmBl7wnQSqFkuC7Sm09tHG
	RwoiA4m55D+/yN86DSPCryAkVEXz4KlrdFjYXApDIFGsSDnzbW3W95i6Oy5YkIyETNnDm+iahvGhx
	tXgD0PCnD2EKx0JS1rhDarUMBpOSaTd3ImbODQkdWLFWq+pCLWo/K0WhL5TKkZw7F78k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180959-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180959: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-xsm:xen-build:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0d85b27b0cc6b5cf54567c5ad913a247a71583ce
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 26 May 2023 23:19:03 +0000

flight 180959 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180959/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180278
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                0d85b27b0cc6b5cf54567c5ad913a247a71583ce
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   40 days
Failing since        180281  2023-04-17 06:24:36 Z   39 days   73 attempts
Testing same since   180959  2023-05-26 10:09:30 Z    0 days    1 attempts

------------------------------------------------------------
2518 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 319111 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 27 03:58:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 03:58:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540229.841833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2l4g-0001cs-Iu; Sat, 27 May 2023 03:58:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540229.841833; Sat, 27 May 2023 03:58:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2l4g-0001ck-Dp; Sat, 27 May 2023 03:58:38 +0000
Received: by outflank-mailman (input) for mailman id 540229;
 Sat, 27 May 2023 03:58:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2l4e-0001ca-P4; Sat, 27 May 2023 03:58:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2l4e-0000i9-CD; Sat, 27 May 2023 03:58:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2l4d-0000r6-K4; Sat, 27 May 2023 03:58:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2l4d-0003wa-Ik; Sat, 27 May 2023 03:58:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ADAGHSNoWgjRS/C79OqaAPyQorZdlkR0vul3sy2PmIg=; b=dim+4cUHfGbClIezBxSTKOVTBG
	kfZoIJ2CKCXYVvNbtZYAkFDYNf8yvs5i6wGK/ix8jLaR59UWyEd+hdI0ya1fhomPYIF2cSyYTWavQ
	XenGpnzte99nCqKxaWBe623zdsnf3WqFdKU/GOgb3BrJ4k1GdXIDA+YtiFfOlH9Yf/a8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180968-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180968: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=9c9fff18c45b54fd9adf2282323aab1b6f0ec866
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 May 2023 03:58:35 +0000

flight 180968 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180968/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                9c9fff18c45b54fd9adf2282323aab1b6f0ec866
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    9 days
Failing since        180699  2023-05-18 07:21:24 Z    8 days   38 attempts
Testing same since   180968  2023-05-26 23:08:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8089 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 27 05:18:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 05:18:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540241.841843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2mJJ-00022e-HA; Sat, 27 May 2023 05:17:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540241.841843; Sat, 27 May 2023 05:17:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2mJJ-00022X-D0; Sat, 27 May 2023 05:17:49 +0000
Received: by outflank-mailman (input) for mailman id 540241;
 Sat, 27 May 2023 05:17:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mJI-00022N-B3; Sat, 27 May 2023 05:17:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mJI-0002yt-4E; Sat, 27 May 2023 05:17:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mJH-00051g-KT; Sat, 27 May 2023 05:17:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mJH-0004Db-Jz; Sat, 27 May 2023 05:17:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=upyuigXRz0WqxqTz5RQ1++8tiyyYTVZgD+/Fwv211mU=; b=B8sXLn9/4CDbrZ1lmW3gRZ+0gb
	zRAStHXENxRNM8LMh+UQrupZEoKTwyRuTDcmwVOchuw0a8FdbJhYMZWNKu+OVXLkq7srgLbPIq6YW
	i2NiSE0CgdtRxueVVnd0q+J/k7BoNN6OdXFsf49B/4495HvVIADHKYDVzf+jEP95Obm4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180969-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180969: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:build-arm64-pvops:kernel-build:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a92c9ab69f6696b26ef0c1ca3e8b922d1fc82e86
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 May 2023 05:17:47 +0000

flight 180969 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180969/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                a92c9ab69f6696b26ef0c1ca3e8b922d1fc82e86
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   40 days
Failing since        180281  2023-04-17 06:24:36 Z   39 days   74 attempts
Testing same since   180969  2023-05-26 23:41:48 Z    0 days    1 attempts

------------------------------------------------------------
2538 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 320470 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 27 05:24:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 05:24:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540247.841852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2mPw-0003UP-7d; Sat, 27 May 2023 05:24:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540247.841852; Sat, 27 May 2023 05:24:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2mPw-0003UI-4T; Sat, 27 May 2023 05:24:40 +0000
Received: by outflank-mailman (input) for mailman id 540247;
 Sat, 27 May 2023 05:24:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mPu-0003U8-PR; Sat, 27 May 2023 05:24:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mPu-00036t-88; Sat, 27 May 2023 05:24:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mPt-0005Om-PR; Sat, 27 May 2023 05:24:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mPt-0000N1-Of; Sat, 27 May 2023 05:24:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wQMmCmisbGUJno0DxarftENju7qcOBJmC0t2ziOTcDA=; b=Ed7Ykw/qF/PSCWuS509DAJ8iJg
	8PdVo9i1ZoK3vF/+0ZXPBbdHYGY9M3xYW2RoiIUDbVDoaY7ybd4MSzLuORIxSJdFNAPg4SsPN/aKp
	LknWoq3oerAM7ExCt5XsJVxjnWLG9zVpsYsp1SepiGlN9gvCzvY28LrFnzbLjjon2qK8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180972-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180972: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=ac84b57b4d74606f7f83667a0606deef32b2049d
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 May 2023 05:24:37 +0000

flight 180972 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180972/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                ac84b57b4d74606f7f83667a0606deef32b2049d
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z    9 days
Failing since        180699  2023-05-18 07:21:24 Z    8 days   39 attempts
Testing same since   180972  2023-05-27 03:59:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8318 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 27 05:36:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 05:36:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540274.841931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2mai-0005pp-6i; Sat, 27 May 2023 05:35:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540274.841931; Sat, 27 May 2023 05:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2mai-0005pi-2j; Sat, 27 May 2023 05:35:48 +0000
Received: by outflank-mailman (input) for mailman id 540274;
 Sat, 27 May 2023 05:35:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6b3Y=BQ=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q2mag-0005hV-4i
 for xen-devel@lists.xenproject.org; Sat, 27 May 2023 05:35:46 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 53dc798f-fc50-11ed-b231-6b7b168915f2;
 Sat, 27 May 2023 07:35:45 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id F2DAF1F381;
 Sat, 27 May 2023 05:35:44 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id C6564134AB;
 Sat, 27 May 2023 05:35:44 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id bi9+LrCWcWTnVwAAGKfGzw
 (envelope-from <jgross@suse.com>); Sat, 27 May 2023 05:35:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53dc798f-fc50-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685165745; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=oSTN1BPSP9tLeSFNLAlCJe9fSQGY+JofX+1sr1rhixA=;
	b=Ivl0RidbdCzC08rHVPLS4MR+BjtZ+BzVdtqVjHybO8r+ulA5IboiwPUlZ04IM4FzA84dIe
	59yYLh39GkLsYjvNIncm6vW7hLlOfUmkNVaYBP5CAS1XGs/ciXQngBpBqXhLi/heor+AJi
	bkuzGtfS8JdheiVtx9RVgVQ0s2QBjK8=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	sstabellini@kernel.org
Subject: [GIT PULL] xen: branch for v6.4-rc4
Date: Sat, 27 May 2023 07:35:44 +0200
Message-Id: <20230527053544.31822-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-6.4-rc4-tag

xen: branch for v6.4-rc4

It contains 3 fixes:

- a double free fix in the Xen pvcalls backend driver

- a fix for a regression that caused the MSI related sysfs entries not
  to be created in Xen PV guests

- a fix in the Xen blkfront driver to handle insane input data more
  gracefully

Thanks.

Juergen

 arch/x86/pci/xen.c           | 8 +++++---
 drivers/block/xen-blkfront.c | 3 ++-
 drivers/xen/pvcalls-back.c   | 9 ++++-----
 include/linux/msi.h          | 9 ++++++++-
 kernel/irq/msi.c             | 4 ++--
 5 files changed, 21 insertions(+), 12 deletions(-)

Dan Carpenter (1):
      xen/pvcalls-back: fix double frees with pvcalls_new_active_socket()

Maximilian Heyne (1):
      x86/pci/xen: populate MSI sysfs entries

Ross Lagerwall (1):
      xen/blkfront: Only check REQ_FUA for writes


From xen-devel-bounces@lists.xenproject.org Sat May 27 05:37:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 05:37:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540286.841972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2mc0-0006il-V2; Sat, 27 May 2023 05:37:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540286.841972; Sat, 27 May 2023 05:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2mc0-0006ic-Rb; Sat, 27 May 2023 05:37:08 +0000
Received: by outflank-mailman (input) for mailman id 540286;
 Sat, 27 May 2023 05:37:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mbz-0006iD-88; Sat, 27 May 2023 05:37:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mbz-0003MG-5o; Sat, 27 May 2023 05:37:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mby-0005qV-O3; Sat, 27 May 2023 05:37:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2mby-0001Cs-Nh; Sat, 27 May 2023 05:37:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VnMaf3+OtuRspyUhaDXI9ftFJVlY9cz68KbPO6GXpkk=; b=dIIuArQFF86I+8/n3dvuXBQtm6
	kDityczLNdbfJFup8xiYwyZx7uNoTCEYru04wFG7IFq0A4UIl97u6ZWts79DJZHiQZt7TKOlSuoZw
	feCCbEQKP20/lS7tBWxyKESwp+hDjFxWg9wJDyUbHQZUOyR7GwVGXoa8Jd9E4zLnt7sc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180965-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180965: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
X-Osstest-Versions-That:
    xen=380c6c170393c48852d4f2b1ea97125a399cfc61
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 May 2023 05:37:06 +0000

flight 180965 xen-unstable real [real]
flight 180971 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180965/
http://logs.test-lab.xenproject.org/osstest/logs/180971/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2   8 xen-boot            fail pass in 180971-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 180971 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 180971 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180930
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180930
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180930
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180930
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180930
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180930
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180930
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180930
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180930
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180930
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
baseline version:
 xen                  380c6c170393c48852d4f2b1ea97125a399cfc61

Last test of basis   180930  2023-05-24 17:38:36 Z    2 days
Failing since        180938  2023-05-25 02:40:04 Z    2 days    4 attempts
Testing same since   180965  2023-05-26 16:10:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ayan Kumar Halder <ayan.kumar.halder@amd.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Michal Orzel <michal.orzel@amd.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Yann Dirson <yann.dirson@vates.fr>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   380c6c1703..f54dd5b53e  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc -> master


From xen-devel-bounces@lists.xenproject.org Sat May 27 10:42:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 10:42:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540302.841989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2rNH-0004AO-9s; Sat, 27 May 2023 10:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540302.841989; Sat, 27 May 2023 10:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2rNH-0004AH-7H; Sat, 27 May 2023 10:42:15 +0000
Received: by outflank-mailman (input) for mailman id 540302;
 Sat, 27 May 2023 10:42:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tmt/=BQ=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1q2rNF-0004AB-FF
 for xen-devel@lists.xenproject.org; Sat, 27 May 2023 10:42:13 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 21f76c70-fc7b-11ed-b231-6b7b168915f2;
 Sat, 27 May 2023 12:42:10 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9F50861072;
 Sat, 27 May 2023 10:42:09 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EE11AC433EF;
 Sat, 27 May 2023 10:42:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21f76c70-fc7b-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685184129;
	bh=Z4TeIapFpeS1e5WxOMUpOrDg87Rxe8ydQavxJsO0vug=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=qtjpACU3i7FVjLT023Ddg5iX1Sm9mYGyR9BAcAldcZGtkGi1IgPSjR20Gkom5YVFi
	 9GqwWTqSe6aCGE+Jk6r1lk9ASI1BqwfpG0h2HSrQ1BaMlG84/DZwyIoKGVbNuFJs46
	 KM/CX4KgN01bs8ctMAmNzdH3P/7vJMOVtl/spQpp1NeNWLVBfmoG8tubTTHEz78nbM
	 Mi5TzSK4O/LtfJpDtvhI5QLWzSn68VROvh7bVsKVmfCA0L8DXw2EUj7uC78b19+5Rx
	 6iPjAV5tMRrgS7SBBOhhGo75UiSObo7waKqxJ5uuMYjPYcI7MRtTbdg+ba8TB9jNgs
	 b+IpYHQpsYW9A==
Date: Sat, 27 May 2023 13:41:44 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Vishal Moola <vishal.moola@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 05/34] mm: add utility functions for ptdesc
Message-ID: <20230527104144.GH4967@kernel.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-6-vishal.moola@gmail.com>
 <20230525090956.GX4967@kernel.org>
 <CAOzc2pxSH6GhBnAoSOjvYJk2VdMDFZi3H_1qGC5Cdyp3j4AzPQ@mail.gmail.com>
 <20230525202537.GA4967@kernel.org>
 <CAOzc2pxD21mxisy-M5b_SDUv0MYwNHqaVDJnJpARuDG_HjCbOg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAOzc2pxD21mxisy-M5b_SDUv0MYwNHqaVDJnJpARuDG_HjCbOg@mail.gmail.com>

On Thu, May 25, 2023 at 01:53:24PM -0700, Vishal Moola wrote:
> On Thu, May 25, 2023 at 1:26 PM Mike Rapoport <rppt@kernel.org> wrote:
> >
> > On Thu, May 25, 2023 at 11:04:28AM -0700, Vishal Moola wrote:
> > > On Thu, May 25, 2023 at 2:10 AM Mike Rapoport <rppt@kernel.org> wrote:
> > > > > +
> > > > > +static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order)
> > > > > +{
> > > > > +     struct page *page = alloc_pages(gfp | __GFP_COMP, order);
> > > > > +
> > > > > +     return page_ptdesc(page);
> > > > > +}
> > > > > +
> > > > > +static inline void ptdesc_free(struct ptdesc *pt)
> > > > > +{
> > > > > +     struct page *page = ptdesc_page(pt);
> > > > > +
> > > > > +     __free_pages(page, compound_order(page));
> > > > > +}
> > > >
> > > > The ptdesc_{alloc,free} API does not sound right to me. The name
> > > > ptdesc_alloc() implies the allocation of the ptdesc itself, rather than
> > > > allocation of page table page. The same goes for free.
> > >
> > > I'm not sure I see the difference. Could you elaborate?
> >
> > I read ptdesc_alloc() as "allocate a ptdesc" rather than as "allocate a
> > page for a page table and return a ptdesc pointing to that page". That
> > already seems very confusing to me, and it will be even more confusing
> > once we start allocating actual ptdescs.
> 
> Hmm, I see what you're saying. I'm envisioning this function evolving into
> one that allocates a ptdesc later. I don't see why we would need to have both a
> page table page AND a ptdesc at any point, but that may be a lack of knowledge
> on my part.

Sorry if I wasn't clear: by "page table page" I meant the page (or the
memory, for that matter) for the actual page table, rather than the struct
page describing that memory.

So what we allocate here is the actual memory for the page tables and not
the memory for the metadata. That's why I think the name ptdesc_alloc is
confusing.
 
> I was thinking later, if necessary, we could make another function
> (only to be used internally) to allocate page table pages.
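
For readers without the kernel tree at hand, the shape of the API being debated can be reduced to a standalone userspace sketch. Everything here is an illustrative stand-in, not the kernel's definitions: `stub_alloc_pages`, the struct layouts, and the 4096-byte page size are all assumptions. The point it shows is the naming concern above: `ptdesc_alloc()` allocates the backing page-table memory and merely reinterprets the result as a ptdesc; it does not allocate a descriptor object of its own.

```c
#include <stdlib.h>

/* Userspace stand-ins for kernel types (hypothetical, for illustration).
 * In the kernel, struct ptdesc is an overlay of struct page, so the two
 * pointers refer to the same object viewed two ways. */
struct page {
    void        *mem;    /* backing memory for the page table */
    unsigned int order;  /* 2^order pages allocated */
};
struct ptdesc {
    struct page pg;      /* same storage, page-table-specific view */
};

/* Stand-in for alloc_pages(): grab 2^order pages' worth of memory. */
static struct page *stub_alloc_pages(unsigned int order)
{
    struct page *page = malloc(sizeof(*page));
    if (!page)
        return NULL;
    page->order = order;
    page->mem = calloc((size_t)1 << order, 4096);
    if (!page->mem) {
        free(page);
        return NULL;
    }
    return page;
}

/* The contested helper: despite the name, this allocates page-table
 * *memory*, then just casts the page descriptor to the ptdesc view. */
static struct ptdesc *ptdesc_alloc(unsigned int order)
{
    return (struct ptdesc *)stub_alloc_pages(order);
}

static void ptdesc_free(struct ptdesc *pt)
{
    struct page *page = (struct page *)pt;
    free(page->mem);
    free(page);
}
```

A caller would write `struct ptdesc *pt = ptdesc_alloc(0);` followed later by `ptdesc_free(pt);`. Nothing in the body ever constructs a separate ptdesc object, which is why the name reads as misleading once ptdescs become independently allocated structures.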

-- 
Sincerely yours,
Mike.


From xen-devel-bounces@lists.xenproject.org Sat May 27 11:14:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 11:14:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540308.842000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2rsh-0007g0-1k; Sat, 27 May 2023 11:14:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540308.842000; Sat, 27 May 2023 11:14:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2rsg-0007ft-UI; Sat, 27 May 2023 11:14:42 +0000
Received: by outflank-mailman (input) for mailman id 540308;
 Sat, 27 May 2023 11:14:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2rsf-0007fi-Ps; Sat, 27 May 2023 11:14:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2rsf-0003V0-Ig; Sat, 27 May 2023 11:14:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2rsf-00072T-0E; Sat, 27 May 2023 11:14:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2rse-0005Fn-W7; Sat, 27 May 2023 11:14:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=W8y8fozRiXMIZZNE1y7iEtmU6vZSYGaeX+vS1uB2K30=; b=XSoo6XrdiBv18Xu6WFkSs26+Eo
	WjLO+Yk+7RS0BtLDhEdIecpTy4AbXoeXBs/KNpyzcS9a0XoYFXDFHJIOMX5MGkYqNWVBrkPPajzhW
	R7g+kkpw+EbWjSFeopGOWao+FCkjPR1/czfqe54NPIEGpVqCgq8nwez7Djpwd0TcaHo4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180975-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180975: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=ac84b57b4d74606f7f83667a0606deef32b2049d
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 May 2023 11:14:40 +0000

flight 180975 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180975/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 180691
 build-amd64-xsm               6 xen-build                fail REGR. vs. 180691
 build-i386-xsm                6 xen-build                fail REGR. vs. 180691
 build-i386                    6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-vhd        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-raw   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a

version targeted for testing:
 qemuu                ac84b57b4d74606f7f83667a0606deef32b2049d
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   10 days
Failing since        180699  2023-05-18 07:21:24 Z    9 days   40 attempts
Testing same since   180972  2023-05-27 03:59:19 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8318 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 27 13:40:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 13:40:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540316.842010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2u9R-0005gS-A8; Sat, 27 May 2023 13:40:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540316.842010; Sat, 27 May 2023 13:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2u9R-0005gL-75; Sat, 27 May 2023 13:40:09 +0000
Received: by outflank-mailman (input) for mailman id 540316;
 Sat, 27 May 2023 13:40:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cAXd=BQ=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q2u9Q-0005ae-0a
 for xen-devel@lists.xenproject.org; Sat, 27 May 2023 13:40:08 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fc9c4d35-fc93-11ed-8611-37d641c3527e;
 Sat, 27 May 2023 15:40:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc9c4d35-fc93-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1685194803;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uhphG4bdKEhlSrLILx8LQHUnhtq5h2kh7PjylDw7WHk=;
	b=c7tPQEf1tDSaswC+udb6n2rEFtodyCz+0hqKKRaloSAWHscsivpNExWbDw/lNzJYugWAC1
	Jm29ndY/DVZdilpSdABXMmg7zcNPlf2ipCdzoFFpOsDlGdRTaa/cDp4VQIgvIIuUN9JR/Z
	4wJQUUIDzQZJXqPtRi0VknCvN575jHF4GBgn5HyL1/e+ips0AysDA2DuVxTFjRXavrRMCR
	ErEe1ocMuKOicQKxwfjXlIfBvhnKCrT8khka2tjtWb5cZaLE3ZEkUSF51OwhE4SNq1hAJg
	OsmAHNMb9ZxT3Z/Q3tFuCRyz5jSjP+VvrhVlox0l3Z4ceASC3h0ieO88IF9MuA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1685194803;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=uhphG4bdKEhlSrLILx8LQHUnhtq5h2kh7PjylDw7WHk=;
	b=IAhogYOgiDrfm9G9lFn3jmOMGU4kEEQCTcf6iURfWSFeMNN2CJaahVSnX2MzxuXmnO+mLX
	zbrDN0kZP7Q4fvCA==
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom
 Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
In-Reply-To: <87y1lbl7r6.ffs@tglx>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name> <87y1lbl7r6.ffs@tglx>
Date: Sat, 27 May 2023 15:40:02 +0200
Message-ID: <87sfbhlwp9.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Fri, May 26 2023 at 12:14, Thomas Gleixner wrote:
> On Wed, May 24 2023 at 23:48, Kirill A. Shutemov wrote:
>> This patch causes boot regression on TDX guest. The guest crashes on SMP
>> bring up.

The below should fix that. Sigh...

Thanks,

        tglx
----
Subject: x86/smp: Initialize cpu_primary_thread_mask late
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 26 May 2023 21:38:47 +0200

Marking primary threads in the cpumask during early boot is only correct in
certain configurations, but broken e.g. for the legacy hyperthreading
detection.

This is due to the complete mess in the CPUID evaluation code which
initializes smp_num_siblings only half during early init and fixes it up
later when identify_boot_cpu() is invoked.

So using smp_num_siblings before identify_boot_cpu() leads to incorrect
results.

Fixing the early CPU init code to provide the proper data is a larger scale
surgery as the code has dependencies on data structures which are not
initialized during early boot.

Move the initialization of cpu_primary_thread_mask which depends on
smp_num_siblings being correct to an early initcall so that it is set up
correctly before SMP bringup.

Fixes: f54d4434c281 ("x86/apic: Provide cpu_primary_thread mask")
Reported-by: "Kirill A. Shutemov" <kirill@shutemov.name>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/apic/apic.c |   18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2398,6 +2398,21 @@ static void cpu_mark_primary_thread(unsi
 	if (smp_num_siblings == 1 || !(apicid & mask))
 		cpumask_set_cpu(cpu, &__cpu_primary_thread_mask);
 }
+
+/*
+ * Due to the utter mess of CPUID evaluation smp_num_siblings is not valid
+ * during early boot. Initialize the primary thread mask before SMP
+ * bringup.
+ */
+static int __init smp_init_primary_thread_mask(void)
+{
+	unsigned int cpu;
+
+	for (cpu = 0; cpu < nr_logical_cpuids; cpu++)
+		cpu_mark_primary_thread(cpu, cpuid_to_apicid[cpu]);
+	return 0;
+}
+early_initcall(smp_init_primary_thread_mask);
 #else
 static inline void cpu_mark_primary_thread(unsigned int cpu, unsigned int apicid) { }
 #endif
@@ -2544,7 +2559,8 @@ int generic_processor_info(int apicid, i
 	set_cpu_present(cpu, true);
 	num_processors++;
 
-	cpu_mark_primary_thread(cpu, apicid);
+	if (system_state >= SYSTEM_BOOTING)
+		cpu_mark_primary_thread(cpu, apicid);
 
 	return cpu;
 }


From xen-devel-bounces@lists.xenproject.org Sat May 27 14:59:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 14:59:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540320.842020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2vNt-0004az-2Y; Sat, 27 May 2023 14:59:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540320.842020; Sat, 27 May 2023 14:59:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2vNs-0004as-Ua; Sat, 27 May 2023 14:59:08 +0000
Received: by outflank-mailman (input) for mailman id 540320;
 Sat, 27 May 2023 14:59:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2vNr-0004ac-Nd; Sat, 27 May 2023 14:59:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2vNr-0008M0-Aw; Sat, 27 May 2023 14:59:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2vNr-0001E5-0v; Sat, 27 May 2023 14:59:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2vNr-0005Fl-0T; Sat, 27 May 2023 14:59:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MqxDvOb8EemOZXbae171qj7c+Y7jMZUTL9hHdjtVXyA=; b=39rvqLtBSzNcPcBFwQPOwDk0pU
	St4BCqsU21lry/+xZhIv8iJB65QxljOSkebgi1cn09fYtPEiL4I9/2hugP+JajPz+af7axHlbWHxr
	CHWAwlcdYLnbA7ujQnHFfGm5rR5j3UwH2XhW9eN/ZIpkh4BlikI+2pwJEn5xgGfkSVFo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180973-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180973: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=a1bdffdd9638601b17a6d115eb148422b66bcea0
X-Osstest-Versions-That:
    libvirt=a8c983d0fa1ba915f7a75deeceb20da1c88fd1db
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 May 2023 14:59:07 +0000

flight 180973 libvirt real [real]
flight 180978 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180973/
http://logs.test-lab.xenproject.org/osstest/logs/180978/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 180978-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180955
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180955
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180955
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              a1bdffdd9638601b17a6d115eb148422b66bcea0
baseline version:
 libvirt              a8c983d0fa1ba915f7a75deeceb20da1c88fd1db

Last test of basis   180955  2023-05-26 04:18:51 Z    1 days
Testing same since   180973  2023-05-27 04:18:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Jiri Denemark <jdenemar@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Weblate <noreply@weblate.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   a8c983d0fa..a1bdffdd96  a1bdffdd9638601b17a6d115eb148422b66bcea0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 27 15:10:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 15:10:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540326.842029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2vYM-0006Bp-15; Sat, 27 May 2023 15:09:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540326.842029; Sat, 27 May 2023 15:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2vYL-0006Bi-Ui; Sat, 27 May 2023 15:09:57 +0000
Received: by outflank-mailman (input) for mailman id 540326;
 Sat, 27 May 2023 15:09:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PCBS=BQ=infradead.org=willy@srs-se1.protection.inumbo.net>)
 id 1q2vYJ-0006Bc-UP
 for xen-devel@lists.xenproject.org; Sat, 27 May 2023 15:09:56 +0000
Received: from casper.infradead.org (casper.infradead.org
 [2001:8b0:10b:1236::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 88cde50a-fca0-11ed-b231-6b7b168915f2;
 Sat, 27 May 2023 17:09:54 +0200 (CEST)
Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red
 Hat Linux)) id 1q2vXv-003tpc-6G; Sat, 27 May 2023 15:09:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88cde50a-fca0-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=u50hcZSLxjXk+ZnHObIQ29gSZuYnejTLeMWOgyxvfyQ=; b=kvDbU0q2z5jUTh3d5nQNpnIto9
	5xrW767Q1PCPoDvq7cJx36GAZEG2mizda3DrQtiBOGyiytCq2OMTOEXq8TWh2LZ+xzaLqiKD9V9Eb
	EJX8E77j208rlnFcLKHX2rZvcHGssHh261tJ08qNlv2+zYnLCkftBGtSrrF/p39TJSIPJr0ITg4/l
	1g7Vh1u4qG6tN9I49LGc4DG6JUJ0KFWAYdrSkVV8XgiRrI/avMqTujLKkfiEnrTSZ/y+N3Nd9R9D8
	/OJnFeU9SGlsUclnIO5MUMB9v4WCZl1IYBNx4a3hMv/KnLwfYzmktawxgr1m6qrMuAbL8hw04B8Os
	8rYGwubA==;
Date: Sat, 27 May 2023 16:09:31 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Mike Rapoport <rppt@kernel.org>
Cc: Vishal Moola <vishal.moola@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 05/34] mm: add utility functions for ptdesc
Message-ID: <ZHIdK+170XoK2jVe@casper.infradead.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-6-vishal.moola@gmail.com>
 <20230525090956.GX4967@kernel.org>
 <CAOzc2pxSH6GhBnAoSOjvYJk2VdMDFZi3H_1qGC5Cdyp3j4AzPQ@mail.gmail.com>
 <20230525202537.GA4967@kernel.org>
 <CAOzc2pxD21mxisy-M5b_SDUv0MYwNHqaVDJnJpARuDG_HjCbOg@mail.gmail.com>
 <20230527104144.GH4967@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230527104144.GH4967@kernel.org>

On Sat, May 27, 2023 at 01:41:44PM +0300, Mike Rapoport wrote:
> Sorry if I wasn't clear, by "page table page" I meant the page (or memory
> for that matter) for actual page table rather than struct page describing
> that memory.
> 
> So what we allocate here is the actual memory for the page tables and not
> the memory for the metadata. That's why I think the name ptdesc_alloc is
> confusing.

But that's going to be the common pattern in the Glorious Future.
You allocate a folio and that includes both the folio memory descriptor
and the 2^n pages of memory described by that folio.  Similarly for all
the other memory descriptors.


From xen-devel-bounces@lists.xenproject.org Sat May 27 17:23:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 17:23:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540331.842040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2xcp-0003kS-6H; Sat, 27 May 2023 17:22:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540331.842040; Sat, 27 May 2023 17:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2xcp-0003kL-22; Sat, 27 May 2023 17:22:43 +0000
Received: by outflank-mailman (input) for mailman id 540331;
 Sat, 27 May 2023 17:22:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ljfk=BQ=kernel.org=pr-tracker-bot@srs-se1.protection.inumbo.net>)
 id 1q2xco-0003kF-2P
 for xen-devel@lists.xenproject.org; Sat, 27 May 2023 17:22:42 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12b72635-fcb3-11ed-8611-37d641c3527e;
 Sat, 27 May 2023 19:22:38 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id AC0AD60ED0;
 Sat, 27 May 2023 17:22:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPS id 17CC1C433D2;
 Sat, 27 May 2023 17:22:35 +0000 (UTC)
Received: from aws-us-west-2-korg-oddjob-1.ci.codeaurora.org
 (localhost.localdomain [127.0.0.1])
 by aws-us-west-2-korg-oddjob-1.ci.codeaurora.org (Postfix) with ESMTP id
 EBE86C4166F; Sat, 27 May 2023 17:22:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12b72635-fcb3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685208155;
	bh=iLmQvKnR0W+7FxEY1A+v5WguoW1iSCLbA5KWnRCGKPw=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=AHu3Z+8trOd0ClOKxvqkTTzNpWRx1bSts5AHV7HDO6qVSEdYY96pXN3wfcDR3uQVn
	 2lsYHdNlsTMZyhu8tN6IEO9sMjAHbOIIXCk4qYpdEXbXcRTeKgcH6p7BjntXpMRu2p
	 Wik+rvjWLFAZvsJ8m8WmNgxG5EFQ7r+B2G6UIh/HoAOfvNVe8LPCKp2WPfyO7jYsDZ
	 IT9W38B4eZp4U6K2ZkDPh73nvUNbwNzK/ZSu/GXl1E9RoU6admHgkwe4U8UMGq8nwI
	 hyK6Uh48bs5P+jHGfKIjt1+arTJIh+8hWjuT6V96LRBF2Po3U+aVDIhZSJwHsstmbf
	 vfO3lSPFLLstQ==
Subject: Re: [GIT PULL] xen: branch for v6.4-rc4
From: pr-tracker-bot@kernel.org
In-Reply-To: <20230527053544.31822-1-jgross@suse.com>
References: <20230527053544.31822-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20230527053544.31822-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-6.4-rc4-tag
X-PR-Tracked-Commit-Id: 335b4223466dd75f9f3ea4918187afbadd22e5c8
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 4e893b5aa4ac2c8a56a40d18fe87e9d2295e5dcf
Message-Id: <168520815495.27218.6793284360828021541.pr-tracker-bot@kernel.org>
Date: Sat, 27 May 2023 17:22:34 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org

The pull request you sent on Sat, 27 May 2023 07:35:44 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-6.4-rc4-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/4e893b5aa4ac2c8a56a40d18fe87e9d2295e5dcf

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Sat May 27 19:03:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 19:03:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540335.842049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2zBu-0005OV-BN; Sat, 27 May 2023 19:03:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540335.842049; Sat, 27 May 2023 19:03:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q2zBu-0005OO-8P; Sat, 27 May 2023 19:03:02 +0000
Received: by outflank-mailman (input) for mailman id 540335;
 Sat, 27 May 2023 19:03:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2zBt-0005OE-2k; Sat, 27 May 2023 19:03:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2zBs-0006d2-T5; Sat, 27 May 2023 19:03:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q2zBs-0007Sn-BA; Sat, 27 May 2023 19:03:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q2zBs-0007YI-Ag; Sat, 27 May 2023 19:03:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mBK4u6VRB3URDsuiVJOnAf+hNPLFzCO7zVvw5nTEA90=; b=2Q0rutdqh7zfLdD46OAaUavTZN
	EZxkukn4Bys9t+WPjQQTHYzO8H+P5geGFz0d8BcpF3za1WFJvaWpHYilMuufXsUwyh5qsetfzTVaJ
	bZNuEEWyD4wxgRZihrr8yqmz40wzRoMEomXf9sOVcW+7L31u89BKGgNG/HpRKwTtzUtk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180974-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180974: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=49572d5361298711207ab387a6c318407deb963a
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 May 2023 19:03:00 +0000

flight 180974 linux-linus real [real]
flight 180979 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180974/
http://logs.test-lab.xenproject.org/osstest/logs/180979/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                49572d5361298711207ab387a6c318407deb963a
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   40 days
Failing since        180281  2023-04-17 06:24:36 Z   40 days   75 attempts
Testing same since   180974  2023-05-27 05:22:35 Z    0 days    1 attempts

------------------------------------------------------------
2542 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 321366 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 27 22:01:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 22:01:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540344.842060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q31ye-0006Qs-7R; Sat, 27 May 2023 22:01:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540344.842060; Sat, 27 May 2023 22:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q31ye-0006Ql-3c; Sat, 27 May 2023 22:01:32 +0000
Received: by outflank-mailman (input) for mailman id 540344;
 Sat, 27 May 2023 22:01:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q31yc-0006Qb-Bx; Sat, 27 May 2023 22:01:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q31yb-0002C1-VB; Sat, 27 May 2023 22:01:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q31yb-0005JL-Ba; Sat, 27 May 2023 22:01:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q31yb-0004JD-B5; Sat, 27 May 2023 22:01:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=auU6xgaPRCBuoqEko9aAFFAg5mHRDf6MuYBVx4foh2Y=; b=im55DlZfpadMF3QHjlQYHsYWoo
	idtnDDi8fC8ueU3uVxzSnkG19w5FRzS7Y5LXa3k53dQ/8UgYD9HUYhQiXoX+GJFXZbrvTq1PyGhaN
	dgn2YCSn8JKncM1+uUhnt0ND6Z6VbnLyzQlrLa/DygxtYzno/L95847OzmOkMp2yCUBU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180976-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180976: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:host-install:broken:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-coresched-i386-xl:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
X-Osstest-Versions-That:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 May 2023 22:01:29 +0000

flight 180976 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180976/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      5 host-install             broken pass in 180965
 test-armhf-armhf-xl-credit2   8 xen-boot         fail in 180965 pass in 180976
 test-amd64-coresched-i386-xl 22 guest-start/debian.repeat  fail pass in 180965
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180965
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180965
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180965

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180965
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180965
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180965
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180965
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180965
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180965
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180965
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180965
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180965
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180965
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180965
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180965
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
baseline version:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc

Last test of basis   180976  2023-05-27 05:41:04 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     fail    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-step test-amd64-amd64-examine host-install

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat May 27 22:30:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 27 May 2023 22:30:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540350.842070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q32Qk-0001On-EB; Sat, 27 May 2023 22:30:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540350.842070; Sat, 27 May 2023 22:30:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q32Qk-0001Og-Ac; Sat, 27 May 2023 22:30:34 +0000
Received: by outflank-mailman (input) for mailman id 540350;
 Sat, 27 May 2023 22:30:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q32Qj-0001OW-2e; Sat, 27 May 2023 22:30:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q32Qi-0002q4-R6; Sat, 27 May 2023 22:30:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q32Qi-0006Bu-6h; Sat, 27 May 2023 22:30:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q32Qi-0005eV-6E; Sat, 27 May 2023 22:30:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Qjux1F2MbRsXr+Hv8TXB32nO3106obojAGPIx9W55l0=; b=4/4kVYmMaeCc+AADZuHNp/MtcN
	ZjManBCHNvB1IxB98zEVYZtnF0Dq0LChqL5JfM45w/C4rUeAnna2o4YveSekB+MrfFrHXJcmwJAKF
	+vhykseFaLqHanKU5BNLLRu8edXQiqIsFDQQnZc+2VJ4IKbMvx+Y4MUbPtzMamtHyjzo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180977-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180977: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ac84b57b4d74606f7f83667a0606deef32b2049d
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 27 May 2023 22:30:32 +0000

flight 180977 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180977/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ac84b57b4d74606f7f83667a0606deef32b2049d
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   10 days
Failing since        180699  2023-05-18 07:21:24 Z    9 days   41 attempts
Testing same since   180972  2023-05-27 03:59:19 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8318 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 28 03:10:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 May 2023 03:10:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540361.842080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q36nG-0002gT-Ug; Sun, 28 May 2023 03:10:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540361.842080; Sun, 28 May 2023 03:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q36nG-0002g5-Ox; Sun, 28 May 2023 03:10:06 +0000
Received: by outflank-mailman (input) for mailman id 540361;
 Sun, 28 May 2023 03:10:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q36nE-0002TE-UA; Sun, 28 May 2023 03:10:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q36nE-000093-Gx; Sun, 28 May 2023 03:10:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q36nD-0001nb-UF; Sun, 28 May 2023 03:10:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q36nD-0002lB-T7; Sun, 28 May 2023 03:10:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=neat0vY3JBhVp7fU51BblJSwVTeXWggRhJxVxjOX+2A=; b=R0ZVMd1mDkMr3QUjg5jyGbJBUK
	BbgIUJ3jXuncT8DymDKUPn4Qf55/XBwjizPN3vDSxkNy7I0aEsCAvWZx5JXM2pisApn558ysAQJgm
	AMK/joOAn/o9NoMmAnvWDarB8KtjVSkJVy1prIoirxBQOPwyIlxb5k5VyUey5Dbk0zDo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180981-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180981: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:host-build-prep:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ac84b57b4d74606f7f83667a0606deef32b2049d
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 May 2023 03:10:03 +0000

flight 180981 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180981/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                   5 host-build-prep          fail REGR. vs. 180691

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install          fail pass in 180977
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180977

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 180977 like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail in 180977 like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 180977 like 180691
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 180977 never pass
 test-armhf-armhf-xl         15 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 180977 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 180977 never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 180977 never pass
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 180977 never pass
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 180977 never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-xl-vhd     14 migrate-support-check fail in 180977 never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail in 180977 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                ac84b57b4d74606f7f83667a0606deef32b2049d
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   10 days
Failing since        180699  2023-05-18 07:21:24 Z    9 days   42 attempts
Testing same since   180972  2023-05-27 03:59:19 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

(No revision log; it would be 8318 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 28 05:17:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 May 2023 05:17:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540368.842089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q38mC-00074z-2M; Sun, 28 May 2023 05:17:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540368.842089; Sun, 28 May 2023 05:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q38mB-00074s-Vv; Sun, 28 May 2023 05:17:07 +0000
Received: by outflank-mailman (input) for mailman id 540368;
 Sun, 28 May 2023 05:17:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q38mB-00074i-Hc; Sun, 28 May 2023 05:17:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q38mB-0003J8-Ad; Sun, 28 May 2023 05:17:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q38mA-00084T-U2; Sun, 28 May 2023 05:17:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q38mA-0000Ou-TZ; Sun, 28 May 2023 05:17:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mWXPUH5fRQlPU23j3VSl+W1vK9OnJy+c83fosRO+gzA=; b=14P2vrwl82xQFP3EneMlibadaA
	EsqIrJ+flMeyydnnwDP9AD4N8FG+QShvtU1rLfKmgiP6a0C7EspAiifM11aXLWsxDXKaJoXfzkX17
	YfBNemoolo/h808fzgBmLKkaTn1eBpVlCvlqB7Y8D7NQqDFDaHmv8oF9ClKFfSYA9f8s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180980-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180980: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4e893b5aa4ac2c8a56a40d18fe87e9d2295e5dcf
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 May 2023 05:17:06 +0000

flight 180980 linux-linus real [real]
flight 180984 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180980/
http://logs.test-lab.xenproject.org/osstest/logs/180984/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4e893b5aa4ac2c8a56a40d18fe87e9d2295e5dcf
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   41 days
Failing since        180281  2023-04-17 06:24:36 Z   40 days   76 attempts
Testing same since   180980  2023-05-27 19:11:55 Z    0 days    1 attempts

------------------------------------------------------------
2547 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 321807 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 28 05:48:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 May 2023 05:48:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540378.842100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q39GL-00026v-LX; Sun, 28 May 2023 05:48:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540378.842100; Sun, 28 May 2023 05:48:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q39GL-00026o-Ic; Sun, 28 May 2023 05:48:17 +0000
Received: by outflank-mailman (input) for mailman id 540378;
 Sun, 28 May 2023 05:48:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1NL0=BR=kernel.org=rppt@srs-se1.protection.inumbo.net>)
 id 1q39GK-00026i-Oy
 for xen-devel@lists.xenproject.org; Sun, 28 May 2023 05:48:16 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3c1d6599-fd1b-11ed-8611-37d641c3527e;
 Sun, 28 May 2023 07:48:14 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A7A4B60B63;
 Sun, 28 May 2023 05:48:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B2385C433D2;
 Sun, 28 May 2023 05:48:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c1d6599-fd1b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685252892;
	bh=bkuMHroVl9TlOSFo6p59v17X6rAsvGxb3kzfT5vZogM=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=R/fA88QDLAPfI5+3dlnjis0mHISdgNRk033rp1tEp/+Mnh4pdPF6GyqhT2d0f44ZM
	 O9W0HkIZYw4XGrrX1f3tUzE0jNE3oALs8Cn992s+EfmiHsu8hfXhu3DkQX9TcwN2oj
	 7E7N3KWaDItcFkwUQzuLHOiQy27MUurhgxG80g7TkNYefeCpTsWvx1dhgZiE8imgx7
	 3iInnTBsRh7R7es7uW5hUx43VvmKiGSLfYpxMoZFKFBm9CDY8XYAON1aC4gzYndzuk
	 JoH7BurJumWuV7HAw5lKspfn5I/oVTTKuKbhXpKROtOf7xIXSt3aAHLO5P0Ikkt3g7
	 bN8aIuRcKOvMA==
Date: Sun, 28 May 2023 08:47:45 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Matthew Wilcox <willy@infradead.org>
Cc: Vishal Moola <vishal.moola@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org
Subject: Re: [PATCH v2 05/34] mm: add utility functions for ptdesc
Message-ID: <20230528054745.GI4967@kernel.org>
References: <20230501192829.17086-1-vishal.moola@gmail.com>
 <20230501192829.17086-6-vishal.moola@gmail.com>
 <20230525090956.GX4967@kernel.org>
 <CAOzc2pxSH6GhBnAoSOjvYJk2VdMDFZi3H_1qGC5Cdyp3j4AzPQ@mail.gmail.com>
 <20230525202537.GA4967@kernel.org>
 <CAOzc2pxD21mxisy-M5b_SDUv0MYwNHqaVDJnJpARuDG_HjCbOg@mail.gmail.com>
 <20230527104144.GH4967@kernel.org>
 <ZHIdK+170XoK2jVe@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ZHIdK+170XoK2jVe@casper.infradead.org>

On Sat, May 27, 2023 at 04:09:31PM +0100, Matthew Wilcox wrote:
> On Sat, May 27, 2023 at 01:41:44PM +0300, Mike Rapoport wrote:
> > Sorry if I wasn't clear: by "page table page" I meant the page (or memory
> > for that matter) for the actual page table, rather than the struct page
> > describing that memory.
> > 
> > So what we allocate here is the actual memory for the page tables and not
> > the memory for the metadata. That's why I think the name ptdesc_alloc is
> > confusing.
> 
> But that's going to be the common pattern in the Glorious Future.
> You allocate a folio and that includes both the folio memory descriptor
> and the 2^n pages of memory described by that folio.  Similarly for all
> the other memory descriptors.

I'm not arguing with that; I'm just not happy about the naming. IMO, the name
should reflect that we allocate memory for page tables rather than for the
descriptor of that memory, e.g. pgtable_alloc() or page_table_alloc().
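As a toy userspace sketch of the distinction being argued here (this is purely illustrative; `struct ptdesc` in the actual patch series wraps `struct page`, and the hypothetical `pgtable_alloc()`/`pgtable_free()` below are not the kernel API): the caller asks for usable page-table memory, and the descriptor is what the allocator hands back to track it.

```c
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-in for a memory descriptor: metadata about one
 * page-table page, paired with the page itself.  In the kernel the
 * descriptor and the memory it describes are tied together through
 * struct page; here they are paired explicitly for clarity. */
struct ptdesc {
    unsigned long flags;   /* metadata lives in the descriptor */
    void *pt_mem;          /* the actual page-table memory */
};

/* The naming point: what the caller wants is page-table memory, so a
 * name like pgtable_alloc() describes the service provided, even
 * though the returned handle is the descriptor. */
static struct ptdesc *pgtable_alloc(void)
{
    struct ptdesc *desc = malloc(sizeof(*desc));
    if (!desc)
        return NULL;
    desc->flags = 0;
    desc->pt_mem = calloc(1, PAGE_SIZE);  /* zeroed page-table page */
    if (!desc->pt_mem) {
        free(desc);
        return NULL;
    }
    return desc;
}

static void pgtable_free(struct ptdesc *desc)
{
    if (desc) {
        free(desc->pt_mem);
        free(desc);
    }
}
```

Under the folio-style pattern Willy describes, the two allocations would eventually be one: the descriptor and the 2^n pages it describes come from a single allocation, which is why either name can plausibly cover both.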

-- 
Sincerely yours,
Mike.


From xen-devel-bounces@lists.xenproject.org Sun May 28 10:57:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 May 2023 10:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540393.842110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3E5P-0007XT-L1; Sun, 28 May 2023 10:57:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540393.842110; Sun, 28 May 2023 10:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3E5P-0007XM-Ho; Sun, 28 May 2023 10:57:19 +0000
Received: by outflank-mailman (input) for mailman id 540393;
 Sun, 28 May 2023 10:57:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3E5O-0007XB-2f; Sun, 28 May 2023 10:57:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3E5N-0003YE-JQ; Sun, 28 May 2023 10:57:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3E5N-00037U-0f; Sun, 28 May 2023 10:57:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3E5N-00026b-0D; Sun, 28 May 2023 10:57:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z7FlE00XDElMF1R6VIAJatSIuO8clVtceSZYaABnnPE=; b=wVOxsNSnl0l00zsYrp8J+wRhH4
	P2Bgqxh8lchJCj89dHPtP/vQIFMPVlUFsZKi5fLufxLpiVqRDJpBhVzGl9N1/Kacw9IqVrpq0QVkN
	4RL5Ngfh4BioiMFEsSfCzFzCBrmLF7X2pQxOjsSv3ekfasbGSBEzce5In7V6YH2X9+Hs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180982-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 180982: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:host-install:broken:heisenbug
    xen-unstable:test-amd64-coresched-i386-xl:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
X-Osstest-Versions-That:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 May 2023 10:57:17 +0000

flight 180982 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180982/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      5 host-install   broken in 180976 pass in 180982
 test-amd64-coresched-i386-xl 22 guest-start/debian.repeat fail in 180976 pass in 180982
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 180976 pass in 180982
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 180976 pass in 180982
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 18 guest-localmigrate/x10 fail pass in 180976

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180976
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180976
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180976
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180976
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180976
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180976
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat    fail  like 180976
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180976
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180976
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180976
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180976
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180976
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180976
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
baseline version:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc

Last test of basis   180982  2023-05-28 01:53:35 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 28 13:30:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 May 2023 13:30:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540402.842120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3GTL-0006Yf-K6; Sun, 28 May 2023 13:30:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540402.842120; Sun, 28 May 2023 13:30:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3GTL-0006YY-Gu; Sun, 28 May 2023 13:30:11 +0000
Received: by outflank-mailman (input) for mailman id 540402;
 Sun, 28 May 2023 13:30:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3GTJ-0006YO-S1; Sun, 28 May 2023 13:30:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3GTJ-0006qY-GH; Sun, 28 May 2023 13:30:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3GTJ-0006x9-6v; Sun, 28 May 2023 13:30:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3GTJ-0004pq-6R; Sun, 28 May 2023 13:30:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GgQlCazfrq7UOdM4lQWEjqmz22zQ3pZauPrHdzZTMLc=; b=RtmIp/vVRm+HRjD93V3yUJYZlI
	ybIp3abL2/4qs4NGF11Vy6RYPYMQBHsr1TUnA27dOAx32+th/+U5NTP4Dtr2PI46Jq22o6ue08QhX
	cC2d/6irvd/pTGDC4TugjcIJylI98zGyV1FNXLJzBTYnPPqcOeADFuXHzbsyUAYTNtVE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180983-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180983: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:host-build-prep:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ac84b57b4d74606f7f83667a0606deef32b2049d
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 May 2023 13:30:09 +0000

flight 180983 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180983/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken  in 180981
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-armhf                  5 host-build-prep fail in 180981 REGR. vs. 180691

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-install fail in 180981 pass in 180983
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180977
 test-amd64-i386-pair         10 xen-install/src_host       fail pass in 180981

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw  1 build-check(1)           blocked in 180981 n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)           blocked in 180981 n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)           blocked in 180981 n/a
 test-armhf-armhf-xl           1 build-check(1)           blocked in 180981 n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)          blocked in 180981 n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)           blocked in 180981 n/a
 build-armhf-libvirt           1 build-check(1)           blocked in 180981 n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)           blocked in 180981 n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)         blocked in 180981 n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)           blocked in 180981 n/a
 test-armhf-armhf-libvirt      1 build-check(1)           blocked in 180981 n/a
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ac84b57b4d74606f7f83667a0606deef32b2049d
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   11 days
Failing since        180699  2023-05-18 07:21:24 Z   10 days   43 attempts
Testing same since   180972  2023-05-27 03:59:19 Z    1 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

(No revision log; it would be 8318 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 28 14:31:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 May 2023 14:31:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540409.842130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3HQR-0004Oc-SX; Sun, 28 May 2023 14:31:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540409.842130; Sun, 28 May 2023 14:31:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3HQR-0004OV-PF; Sun, 28 May 2023 14:31:15 +0000
Received: by outflank-mailman (input) for mailman id 540409;
 Sun, 28 May 2023 14:31:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3HQQ-0004OL-Gn; Sun, 28 May 2023 14:31:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3HQP-0008Hd-Ux; Sun, 28 May 2023 14:31:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3HQP-0008Ol-Fi; Sun, 28 May 2023 14:31:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3HQP-0002GF-FG; Sun, 28 May 2023 14:31:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vwSlBqSfYgfyh7OMmgZ0RE6i5zeRrIwnhfW25PeCR2I=; b=Og4Ct6qSKwtPGK1ZLrI8GRDTqA
	XSkzJI3LMvjqTAZhgr8LijaesUHHKkLN/OZXJVUE0aIdxtgtQq5w+YTS4MNj+vMDvGYiL92ugB0Ht
	Gl2U7ysYxKUuLfLMFtiUXenDUbyqGHM4meQHI5heyqPt7uDgesMXyE1awRZfrgJxSaaQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180985-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 180985: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=e35b5df3f561ea5678a21aa1b39f14308fc6363c
X-Osstest-Versions-That:
    libvirt=a1bdffdd9638601b17a6d115eb148422b66bcea0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 May 2023 14:31:13 +0000

flight 180985 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180985/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180973
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180973
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180973
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              e35b5df3f561ea5678a21aa1b39f14308fc6363c
baseline version:
 libvirt              a1bdffdd9638601b17a6d115eb148422b66bcea0

Last test of basis   180973  2023-05-27 04:18:59 Z    1 days
Testing same since   180985  2023-05-28 04:18:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Yuri Chornoivan <yurchor@ukr.net>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   a1bdffdd96..e35b5df3f5  e35b5df3f561ea5678a21aa1b39f14308fc6363c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun May 28 19:36:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 May 2023 19:36:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540417.842140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3MBp-0000lb-JU; Sun, 28 May 2023 19:36:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540417.842140; Sun, 28 May 2023 19:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3MBp-0000lU-G3; Sun, 28 May 2023 19:36:29 +0000
Received: by outflank-mailman (input) for mailman id 540417;
 Sun, 28 May 2023 19:36:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3MBo-0000lI-Mp; Sun, 28 May 2023 19:36:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3MBo-0007Ca-EV; Sun, 28 May 2023 19:36:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3MBn-0003f0-SO; Sun, 28 May 2023 19:36:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3MBn-0008D4-Rw; Sun, 28 May 2023 19:36:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YR2sMiGVMDhyN5WVYLcD6pA2LTP6mv0fR09EQCQmzdY=; b=b6gQqUAbY1nAi9gq+y88k54/yH
	BsOng8TI2JQ+dUkDQRmhZRq7ohX7hLhuwuNLgnkJEFoMV9low7KL+LtF4aqS6WUburrZISYnorF2t
	osPLU8wOTLu7wFp35ixaTdByL0BdNqaazW4URT+i82gAKrBnapjPC8EWH5RazqEdWSME=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180986-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180986: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-migrate/dst_host/src_host:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=416839029e3858f61dc7dd346559c03e74ed8380
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 May 2023 19:36:27 +0000

flight 180986 linux-linus real [real]
flight 180988 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180986/
http://logs.test-lab.xenproject.org/osstest/logs/180988/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair 27 guest-migrate/dst_host/src_host fail pass in 180988-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                416839029e3858f61dc7dd346559c03e74ed8380
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   41 days
Failing since        180281  2023-04-17 06:24:36 Z   41 days   77 attempts
Testing same since   180986  2023-05-28 05:20:42 Z    0 days    1 attempts

------------------------------------------------------------
2547 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 321857 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 28 21:43:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 28 May 2023 21:43:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540424.842150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3OAl-0005G9-HN; Sun, 28 May 2023 21:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540424.842150; Sun, 28 May 2023 21:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3OAl-0005G2-ER; Sun, 28 May 2023 21:43:31 +0000
Received: by outflank-mailman (input) for mailman id 540424;
 Sun, 28 May 2023 21:43:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3OAj-0005Fp-Qb; Sun, 28 May 2023 21:43:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3OAj-0001vy-Ie; Sun, 28 May 2023 21:43:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3OAj-00088v-6p; Sun, 28 May 2023 21:43:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3OAj-0005fE-6J; Sun, 28 May 2023 21:43:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=90KaXy6UUbQRvXisn04eATIFV44IBTje0sorfBuXUNo=; b=Hed2xwm5mthssZ0TG+9v0cP6K1
	HIZFzppSvtj1DFIDh3NBUaZuTeF6baFmVlSABstm0B4zKC0ilblCZ66+Ez2Jyhw1WCbOJvTqX/4c9
	vYPs9pQfbPSsMBEeqJJto39KyrkapakbHMW35qKsJFpLCFAytNjeJ7QOE/D02qWxiKZY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180987-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180987: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ac84b57b4d74606f7f83667a0606deef32b2049d
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 28 May 2023 21:43:29 +0000

flight 180987 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180987/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair     10 xen-install/src_host fail in 180983 pass in 180987
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180977
 test-amd64-i386-pair         11 xen-install/dst_host       fail pass in 180983
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 180983

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ac84b57b4d74606f7f83667a0606deef32b2049d
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   11 days
Failing since        180699  2023-05-18 07:21:24 Z   10 days   44 attempts
Testing same since   180972  2023-05-27 03:59:19 Z    1 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8318 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 29 02:19:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:19:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540432.842160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3STw-0005TG-QN; Mon, 29 May 2023 02:19:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540432.842160; Mon, 29 May 2023 02:19:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3STw-0005T8-L7; Mon, 29 May 2023 02:19:36 +0000
Received: by outflank-mailman (input) for mailman id 540432;
 Mon, 29 May 2023 02:19:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3STu-0005T2-T8
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:19:34 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 3f33056c-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:19:32 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A5E50AB6;
 Sun, 28 May 2023 19:20:16 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id C295F3F64C;
 Sun, 28 May 2023 19:19:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f33056c-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <wei.chen@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v5 00/17] Device tree based NUMA support for Arm - Part#3
Date: Mon, 29 May 2023 10:19:04 +0800
Message-Id: <20230529021921.2606623-1-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

(Henry: Following the offline discussion with Wei, I will be
the one to follow up on the upstream comments for this series.)

The preparation work to support NUMA on Arm has been merged
and can be found at [1] and [2]. The initial discussions of
the Arm NUMA support can be found at [3].

- Background of this series:

Xen's memory allocation and scheduler modules are NUMA aware.
However, only x86 has implemented the architecture APIs needed
to support NUMA. Arm was providing a set of fake architecture
APIs to stay compatible with the NUMA-aware memory allocation
and scheduler.

Arm systems worked well as single-node NUMA systems with these
fake APIs, because there were no multi-node NUMA systems on
Arm. But in recent years, more and more Arm devices have
shipped as multi-node NUMA systems.

So now we have a new problem. When Xen runs on these Arm
devices, it still treats them as single-node SMP systems. The
NUMA affinity capability of Xen's memory allocation and
scheduler becomes meaningless, because they rely on input data
that does not reflect the real NUMA layout.

Xen still thinks the access time to all memory is the same for
all CPUs. However, Xen may allocate memory to a VM from
different NUMA nodes with different access speeds. This
difference can be amplified by workloads inside the VM, causing
performance instability and timeouts.

So in this patch series, we implement a set of NUMA APIs that
use the device tree to describe the NUMA layout. We reuse most
of the x86 NUMA code to create and maintain the mapping between
memory and CPUs, and to build the distance matrix between any
two NUMA nodes. Except for ACPI and some x86-specific code, we
have moved the rest to common code. In the next stage, when we
implement ACPI-based NUMA for Arm64, we may move the ACPI NUMA
code to common as well, but for now we keep it x86-only.

This patch series has been tested and boots well on an FVP in
Arm64 mode with NUMA configs in the device tree, and on one HPE
x86 NUMA machine.

[1] https://lists.xenproject.org/archives/html/xen-devel/2022-06/msg00499.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2022-11/msg01043.html
[3] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg01903.html
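
For readers unfamiliar with how a device tree expresses a NUMA
layout: the standard binding attaches a "numa-node-id" property
to CPU and memory nodes and describes inter-node costs in a
"distance-map" node. A minimal hypothetical two-node sketch
(addresses, sizes and distances are illustrative, not taken
from this series) could look like:

```dts
/ {
    /* One memory bank per node, tagged with its node id */
    memory@80000000 {
        device_type = "memory";
        reg = <0x0 0x80000000 0x0 0x80000000>;
        numa-node-id = <0>;
    };

    memory@880000000 {
        device_type = "memory";
        reg = <0x8 0x80000000 0x0 0x80000000>;
        numa-node-id = <1>;
    };

    cpus {
        #address-cells = <1>;
        #size-cells = <0>;

        cpu@0 {
            device_type = "cpu";
            compatible = "arm,armv8";
            reg = <0x0>;
            numa-node-id = <0>;
        };

        cpu@100 {
            device_type = "cpu";
            compatible = "arm,armv8";
            reg = <0x100>;
            numa-node-id = <1>;
        };
    };

    distance-map {
        compatible = "numa-distance-map-v1";
        /* <from to distance> triplets; by convention 10 is
           local access and larger values are remote */
        distance-matrix = <0 0 10>, <0 1 20>,
                          <1 0 20>, <1 1 10>;
    };
};
```

The parsing helpers introduced later in this series consume
exactly this kind of information: node ids from CPU and memory
nodes, and the distance matrix from the distance-map node.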

v4 -> v5:
1. Coding style (extra blank line, label indentation and printk variable
   type) and in-code comment fixes and improvements.
2. Move the from/to range check in numa_set_distance() to the
   caller, and drop the unnecessary check (ensured by the caller)
   in numa_set_distance().
3. Rework the invalid distance check in numa_set_distance() following
   Linux, add more in-code comments about these distance checks.
4. Rename "numa_device_tree.c" to "numa-dt.c".
5. Check the from/to range to avoid the side-effect of the 8-bit
   truncation by numa_set_distance().

v3 -> v4:
1. s/definition/declaration/ in commit message.
2. Add Acked-by tag from Jan for non-Arm parts.
3. Drop unnecessary initializer for node_distance_map. Pre-set the
   distance map to NUMA_NO_DISTANCE.
4. Drop NUMA_DISTANCE_UDF_MIN and its usage.
5. Drop EXPORT_SYMBOL(__node_distance).
6. Rework __node_distance()'s return value logic.
7. The distance map default value is now NUMA_NO_DISTANCE, update
   the logic accordingly and add in-code comment as a note.
8. Add Acked-by tag from Jan for related patches.

Henry Wang (1):
  xen/arm: Set correct per-cpu cpu_core_mask

Wei Chen (16):
  xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
  xen/arm: implement helpers to get and update NUMA status
  xen/arm: implement node distance helpers for Arm
  xen/arm: use arch_get_ram_range to get memory ranges from bootinfo
  xen/arm: build NUMA cpu_to_node map in dt_smp_init_cpus
  xen/arm: Add boot and secondary CPU to NUMA system
  xen/arm: introduce a helper to parse device tree processor node
  xen/arm: introduce a helper to parse device tree memory node
  xen/arm: introduce a helper to parse device tree NUMA distance map
  xen/arm: unified entry to parse all NUMA data from device tree
  xen/arm: keep guest still be NUMA unware
  xen/arm: enable device tree based NUMA in system init
  xen/arm: implement numa_node_to_arch_nid for device tree NUMA
  xen/arm: use CONFIG_NUMA to gate node_online_map in smpboot
  xen/arm: Provide Kconfig options for Arm to enable NUMA
  docs: update numa command line to support Arm

 SUPPORT.md                        |   1 +
 docs/misc/xen-command-line.pandoc |   2 +-
 xen/arch/arm/Kconfig              |  11 ++
 xen/arch/arm/Makefile             |   2 +
 xen/arch/arm/domain_build.c       |   6 +
 xen/arch/arm/include/asm/numa.h   |  91 ++++++++-
 xen/arch/arm/numa-dt.c            | 299 ++++++++++++++++++++++++++++++
 xen/arch/arm/numa.c               | 184 ++++++++++++++++++
 xen/arch/arm/setup.c              |  17 ++
 xen/arch/arm/smpboot.c            |  38 ++++
 xen/arch/x86/include/asm/numa.h   |   1 -
 xen/arch/x86/srat.c               |   2 +-
 xen/include/xen/numa.h            |  10 +
 13 files changed, 660 insertions(+), 4 deletions(-)
 create mode 100644 xen/arch/arm/numa-dt.c
 create mode 100644 xen/arch/arm/numa.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:19:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:19:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540434.842180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SU2-0005xv-7r; Mon, 29 May 2023 02:19:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540434.842180; Mon, 29 May 2023 02:19:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SU2-0005xm-4l; Mon, 29 May 2023 02:19:42 +0000
Received: by outflank-mailman (input) for mailman id 540434;
 Mon, 29 May 2023 02:19:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SU1-0005T2-FR
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:19:41 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 4433fca7-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:19:40 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 10EC0139F;
 Sun, 28 May 2023 19:20:25 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 75E293F64C;
 Sun, 28 May 2023 19:19:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4433fca7-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 02/17] xen/arm: implement helpers to get and update NUMA status
Date: Mon, 29 May 2023 10:19:06 +0800
Message-Id: <20230529021921.2606623-3-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

NUMA has one global switch and one implementation-specific
switch. For the ACPI NUMA implementation, Xen has acpi_numa, so
we introduce device_tree_numa for the device tree NUMA
implementation, and use an enumeration to indicate its init,
off and on states.

arch_numa_disabled returns the device_tree_numa status, but for
arch_numa_setup we have not yet provided any boot arguments to
set up device_tree_numa, so we just return -EINVAL in this
patch.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. Rename the first entry of enum dt_numa_status as DT_NUMA_DEFAULT.
2. Make enum dt_numa_status device_tree_numa as __ro_after_init and
   assign it explicitly to DT_NUMA_DEFAULT.
3. Update the year in copyright to 2023.
4. Don't move the x86 numa_disabled() and make Arm's numa_disabled()
   a static inline function for !CONFIG_NUMA.
v1 -> v2:
1. Use arch_numa_disabled to replace numa_enable_with_firmware.
2. Introduce enumerations for device tree numa status.
3. Use common numa_disabled, drop Arm version numa_disabled.
4. Introduce arch_numa_setup for Arm.
5. Rename bad_srat to numa_bad.
6. Add numa_enable_with_firmware helper.
7. Add numa_disabled helper.
8. Refine commit message.
---
 xen/arch/arm/include/asm/numa.h | 17 +++++++++++
 xen/arch/arm/numa.c             | 50 +++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+)
 create mode 100644 xen/arch/arm/numa.c

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 7d6ae36a19..83f60ad05b 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,6 +22,8 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+extern bool numa_disabled(void);
+
 #else
 
 /* Fake one node for now. See also node_online_map. */
@@ -39,6 +41,21 @@ extern mfn_t first_valid_mfn;
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
 
+static inline bool numa_disabled(void)
+{
+    return true;
+}
+
+static inline bool arch_numa_unavailable(void)
+{
+    return true;
+}
+
+static inline bool arch_numa_broken(void)
+{
+    return true;
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
new file mode 100644
index 0000000000..eb5d0632cb
--- /dev/null
+++ b/xen/arch/arm/numa.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm Architecture support layer for NUMA.
+ *
+ * Copyright (C) 2023 Arm Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#include <xen/init.h>
+#include <xen/numa.h>
+
+enum dt_numa_status {
+    DT_NUMA_DEFAULT,
+    DT_NUMA_ON,
+    DT_NUMA_OFF,
+};
+
+static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
+
+void __init numa_fw_bad(void)
+{
+    printk(KERN_ERR "NUMA: device tree numa info table not used.\n");
+    device_tree_numa = DT_NUMA_OFF;
+}
+
+bool __init arch_numa_unavailable(void)
+{
+    return device_tree_numa != DT_NUMA_ON;
+}
+
+bool arch_numa_disabled(void)
+{
+    return device_tree_numa == DT_NUMA_OFF;
+}
+
+int __init arch_numa_setup(const char *opt)
+{
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:19:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:19:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540433.842169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3STy-0005i5-VX; Mon, 29 May 2023 02:19:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540433.842169; Mon, 29 May 2023 02:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3STy-0005hy-St; Mon, 29 May 2023 02:19:38 +0000
Received: by outflank-mailman (input) for mailman id 540433;
 Mon, 29 May 2023 02:19:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3STx-0005T2-RV
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:19:37 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 422099fa-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:19:37 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9B0FAC14;
 Sun, 28 May 2023 19:20:21 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 02F7F3F64C;
 Sun, 28 May 2023 19:19:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 422099fa-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 01/17] xen/arm: use NR_MEM_BANKS to override default NR_NODE_MEMBLKS
Date: Mon, 29 May 2023 10:19:05 +0800
Message-Id: <20230529021921.2606623-2-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

A memory range described in the device tree cannot be split across
multiple nodes, and if you have more than 64 nodes you will very
likely need a lot more than 2 regions per node. So the default
NR_NODE_MEMBLKS value (MAX_NUMNODES * 2) makes no sense on Arm.

So, for Arm, we just define NR_NODE_MEMBLKS as an alias of
NR_MEM_BANKS. In the future NR_MEM_BANKS will be user-configurable
via Kconfig, but for now leave NR_MEM_BANKS as 128 on Arm. This
avoids having different ways to define the value for NUMA vs non-NUMA.

Further discussions can be found here[1].

[1] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
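The override described above is the standard C preprocessor "default unless the arch already defined it" pattern: the arch header is included first and defines NR_NODE_MEMBLKS, and the common header only supplies its fallback when nothing earlier did. A minimal standalone sketch of that pattern (the constants and the helper function here are illustrative stand-ins, not Xen's real headers):

```c
#include <assert.h>

/* Arch header (asm/numa.h analogue): Arm aliases NR_NODE_MEMBLKS
 * to NR_MEM_BANKS, currently 128. */
#define NR_MEM_BANKS 128
#define NR_NODE_MEMBLKS NR_MEM_BANKS

/* Common header (xen/numa.h analogue): the generic default only
 * takes effect if the arch did not already provide a definition. */
#define MAX_NUMNODES 64
#ifndef NR_NODE_MEMBLKS
#define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
#endif

int nr_node_memblks(void)
{
    return NR_NODE_MEMBLKS;
}
```

Because the arch definition comes first, the generic `(MAX_NUMNODES * 2)` default is skipped entirely, which is exactly why no `#ifdef` is needed at the use sites.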

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. Add Acked-by tag from Jan.
v2 -> v3:
By checking the discussion in [1] and [2]
[1] https://lists.xenproject.org/archives/html/xen-devel/2023-01/msg00595.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
1. No change
v1 -> v2:
1. Add code comments to explain using NR_MEM_BANKS for Arm
2. Refine commit messages.
---
 xen/arch/arm/include/asm/numa.h | 19 ++++++++++++++++++-
 xen/include/xen/numa.h          |  9 +++++++++
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index e2bee2bd82..7d6ae36a19 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -3,9 +3,26 @@
 
 #include <xen/mm.h>
 
+#include <asm/setup.h>
+
 typedef u8 nodeid_t;
 
-#ifndef CONFIG_NUMA
+#ifdef CONFIG_NUMA
+
+/*
+ * It is very likely that if you have more than 64 nodes, you may
+ * need a lot more than 2 regions per node. So, for Arm, we would
+ * just define NR_NODE_MEMBLKS as an alias to NR_MEM_BANKS.
+ * And in the future NR_MEM_BANKS will be bumped for new platforms,
+ * but for now leave NR_MEM_BANKS as it is on Arm. This avoids
+ * having different ways to define the value for NUMA vs non-NUMA.
+ *
+ * Further discussions can be found here:
+ * https://lists.xenproject.org/archives/html/xen-devel/2021-09/msg02322.html
+ */
+#define NR_NODE_MEMBLKS NR_MEM_BANKS
+
+#else
 
 /* Fake one node for now. See also node_online_map. */
 #define cpu_to_node(cpu) 0
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index 29b8c2df89..b86d0851fc 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -13,7 +13,16 @@
 #define MAX_NUMNODES 1
 #endif
 
+/*
+ * Some architectures may have different considerations for the
+ * number of node memory blocks. They can define their own
+ * NR_NODE_MEMBLKS in asm/numa.h to reflect their architectural
+ * implementation. If the arch does not have specific implementation,
+ * the following default NR_NODE_MEMBLKS will be used.
+ */
+#ifndef NR_NODE_MEMBLKS
 #define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
+#endif
 
 #define vcpu_to_node(v) (cpu_to_node((v)->processor))
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:19:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:19:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540435.842190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUA-0006Im-Gy; Mon, 29 May 2023 02:19:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540435.842190; Mon, 29 May 2023 02:19:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUA-0006If-C2; Mon, 29 May 2023 02:19:50 +0000
Received: by outflank-mailman (input) for mailman id 540435;
 Mon, 29 May 2023 02:19:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SU9-0006GL-Cb
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:19:49 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 46f9665a-fdc7-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 04:19:45 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C881BAB6;
 Sun, 28 May 2023 19:20:29 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id E732E3F64C;
 Sun, 28 May 2023 19:19:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46f9665a-fdc7-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 03/17] xen/arm: implement node distance helpers for Arm
Date: Mon, 29 May 2023 10:19:07 +0800
Message-Id: <20230529021921.2606623-4-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

We will parse NUMA node distances from the device tree, so we need
a matrix to record the distances between any two nodes we parse.
Accordingly, this patch provides the numa_set_distance API for the
device tree NUMA code to set the distance between any two nodes.
When NUMA initialization fails, __node_distance will return
NUMA_NO_DISTANCE, which helps us avoid rolling back the distance
matrix on failure.

As both x86 and Arm implement __node_distance, move its declaration
from asm/numa.h to xen/numa.h. At the same time, change x86's
outdated u8 return type to unsigned char.
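The set/lookup pair in this patch can be sketched in a self-contained form as follows. This is a simplified stand-in, not the Xen code itself: MAX_NUMNODES is shrunk, and the `numa_off` flag substitutes for Xen's numa_disabled(); the GNU range-designator initializer (`[0 ... N]`) matches the style the patch uses and requires GCC or Clang.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_NUMNODES          8    /* illustrative, not Xen's value */
#define NUMA_NO_DISTANCE      0xFF
#define NUMA_LOCAL_DISTANCE   10
#define NUMA_DISTANCE_UDF_MAX 9

typedef uint8_t nodeid_t;
static bool numa_off = false;      /* stand-in for numa_disabled() */

/* Pre-set every pair as unreachable, as the patch does. */
static unsigned char node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
    [0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] = NUMA_NO_DISTANCE }
};

static void numa_set_distance(nodeid_t from, nodeid_t to, unsigned int distance)
{
    /* A distance must fit in one byte (as in an ACPI SLIT entry) and
     * values 0-9 are reserved, so such values are silently rejected. */
    if ( (uint8_t)distance != distance || distance <= NUMA_DISTANCE_UDF_MAX )
        return;
    node_distance_map[from][to] = distance;
}

static unsigned char node_distance(nodeid_t from, nodeid_t to)
{
    if ( from == to )
        return NUMA_LOCAL_DISTANCE;
    /* NUMA off or out-of-range nodes: treat the pair as unreachable. */
    if ( numa_off || from >= MAX_NUMNODES || to >= MAX_NUMNODES )
        return NUMA_NO_DISTANCE;
    return node_distance_map[from][to];
}
```

Because the whole map starts out as NUMA_NO_DISTANCE, a failed or aborted device tree parse needs no rollback: any pair never set simply reads back as unreachable.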

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com> # non-Arm parts
---
v4 -> v5:
1. Coding style (extra blank line and printk variable type) and
   in-code comment fixes and improvements.
2. Move the from/to range check in numa_set_distance() to caller,
   Drop the unnecessary check ensured by caller in numa_set_distance().
3. Rework the invalid distance check in numa_set_distance() following
   Linux, add more in-code comments about these distance checks.
v3 -> v4:
1. s/definition/declaration/ in commit message.
2. Add Acked-by tag from Jan for non-Arm parts.
3. Drop unnecessary initializer for node_distance_map. Pre-set the
   distance map to NUMA_NO_DISTANCE.
4. Drop NUMA_DISTANCE_UDF_MIN and its usage.
5. Drop EXPORT_SYMBOL(__node_distance).
6. Rework __node_distance()'s return value logic.
v2 -> v3:
1. Use __ro_after_init for node_distance_map.
2. Correct format of if condition identation in numa_set_distance().
3. Drop the unnecessary change to the year of copyright.
4. Use ARRAY_SIZE() to determine node_distance_map's row, column size.
v1 -> v2:
1. Use unsigned int/char instead of uint32_t/u8.
2. Re-org the commit message.
---
 xen/arch/arm/Makefile           |  1 +
 xen/arch/arm/include/asm/numa.h | 12 ++++++++
 xen/arch/arm/numa.c             | 51 +++++++++++++++++++++++++++++++++
 xen/arch/x86/include/asm/numa.h |  1 -
 xen/arch/x86/srat.c             |  2 +-
 xen/include/xen/numa.h          |  1 +
 6 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index d85fc040df..814c472c4f 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
+obj-$(CONFIG_NUMA) += numa.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += platform.o
diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 83f60ad05b..96c856a9f7 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,7 +22,19 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+/*
+ * In the ACPI spec, 0-9 are reserved values for node distance,
+ * 10 indicates local node distance, and 20 indicates remote node
+ * distance. Setting the node distance map from the device tree
+ * follows the ACPI definition.
+ */
+#define NUMA_DISTANCE_UDF_MAX   9
+#define NUMA_LOCAL_DISTANCE     10
+#define NUMA_REMOTE_DISTANCE    20
+
 extern bool numa_disabled(void);
+extern void numa_set_distance(nodeid_t from, nodeid_t to,
+                              unsigned int distance);
 
 #else
 
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index eb5d0632cb..31332a6ea7 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -28,6 +28,11 @@ enum dt_numa_status {
 
 static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
 
+static unsigned char __ro_after_init
+node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
+    [0 ... MAX_NUMNODES - 1] = { [0 ... MAX_NUMNODES - 1] = NUMA_NO_DISTANCE }
+};
+
 void __init numa_fw_bad(void)
 {
     printk(KERN_ERR "NUMA: device tree numa info table not used.\n");
@@ -48,3 +53,49 @@ int __init arch_numa_setup(const char *opt)
 {
     return -EINVAL;
 }
+
+void __init numa_set_distance(nodeid_t from, nodeid_t to,
+                              unsigned int distance)
+{
+    /*
+     * Since the NUMA device tree binding does not clearly specify the valid
+     * range of node distance, here we keep consistent with the ACPI, whose
+     * SLIT table uses 1 byte to describe the node distance. Hence node
+     * distances that cannot fit in 1 byte are invalid. Also, node
+     * distances 0-9 are undefined values.
+     * Reject all above-mentioned invalid distance values.
+     */
+    if ( (uint8_t)distance != distance || distance <= NUMA_DISTANCE_UDF_MAX )
+    {
+        printk(XENLOG_WARNING
+               "NUMA: invalid distance: from=%"PRIu8" to=%"PRIu8" distance=%u\n",
+               from, to, distance);
+        return;
+    }
+
+    node_distance_map[from][to] = distance;
+}
+
+unsigned char __node_distance(nodeid_t from, nodeid_t to)
+{
+    if ( from == to )
+        return NUMA_LOCAL_DISTANCE;
+
+    /*
+     * When NUMA is off, any distance will be treated as unreachable, so
+     * directly return NUMA_NO_DISTANCE from here as an optimization.
+     */
+    if ( numa_disabled() )
+        return NUMA_NO_DISTANCE;
+
+    /*
+     * Check whether the nodes are in the matrix range.
+     * When either node is out of range (the from == to case has
+     * already been handled above), we treat the pair as unreachable.
+     */
+    if ( from >= ARRAY_SIZE(node_distance_map) ||
+         to >= ARRAY_SIZE(node_distance_map[0]) )
+        return NUMA_NO_DISTANCE;
+
+    return node_distance_map[from][to];
+}
diff --git a/xen/arch/x86/include/asm/numa.h b/xen/arch/x86/include/asm/numa.h
index 7866afa408..45456ac441 100644
--- a/xen/arch/x86/include/asm/numa.h
+++ b/xen/arch/x86/include/asm/numa.h
@@ -22,7 +22,6 @@ extern void init_cpu_to_node(void);
 #define arch_want_default_dmazone() (num_online_nodes() > 1)
 
 void srat_parse_regions(paddr_t addr);
-extern u8 __node_distance(nodeid_t a, nodeid_t b);
 unsigned int arch_get_dma_bitsize(void);
 
 #endif
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index 56749ddca5..50faf5d352 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -328,7 +328,7 @@ unsigned int numa_node_to_arch_nid(nodeid_t n)
 	return 0;
 }
 
-u8 __node_distance(nodeid_t a, nodeid_t b)
+unsigned char __node_distance(nodeid_t a, nodeid_t b)
 {
 	unsigned index;
 	u8 slit_val;
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index b86d0851fc..8356e47b61 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -114,6 +114,7 @@ extern bool numa_memblks_available(void);
 extern bool numa_update_node_memblks(nodeid_t node, unsigned int arch_nid,
                                      paddr_t start, paddr_t size, bool hotplug);
 extern void numa_set_processor_nodes_parsed(nodeid_t node);
+extern unsigned char __node_distance(nodeid_t a, nodeid_t b);
 
 #else
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:19:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:19:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540436.842200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUB-0006Yr-Nc; Mon, 29 May 2023 02:19:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540436.842200; Mon, 29 May 2023 02:19:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUB-0006Yi-Jn; Mon, 29 May 2023 02:19:51 +0000
Received: by outflank-mailman (input) for mailman id 540436;
 Mon, 29 May 2023 02:19:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUA-0006GL-9U
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:19:50 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 490f5575-fdc7-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 04:19:48 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 40520C14;
 Sun, 28 May 2023 19:20:33 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9BFF03F64C;
 Sun, 28 May 2023 19:19:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 490f5575-fdc7-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 04/17] xen/arm: use arch_get_ram_range to get memory ranges from bootinfo
Date: Mon, 29 May 2023 10:19:08 +0800
Message-Id: <20230529021921.2606623-5-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Implement the same helper, arch_get_ram_range, as x86 for the NUMA
code to get memory banks from Arm's bootinfo.
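The iteration contract here is simple: the caller passes an increasing index and stops at -ENOENT. A compilable sketch of that contract, using a hypothetical `banks` array in place of Arm's bootinfo (the addresses are made up for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Hypothetical stand-in for bootinfo.mem on Arm. */
struct membank { paddr_t start, size; };
static const struct membank banks[] = {
    { 0x40000000, 0x10000000 },
    { 0x80000000, 0x20000000 },
};

/* Return the idx-th RAM range; -ENOENT once idx runs past the last
 * bank, which is how the common NUMA code detects the end. */
static int arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
{
    if ( idx >= sizeof(banks) / sizeof(banks[0]) )
        return -ENOENT;

    *start = banks[idx].start;
    *end = *start + banks[idx].size;

    return 0;
}
```

A typical caller loops `for ( idx = 0; arch_get_ram_range(idx, &s, &e) == 0; idx++ )`, so the helper never needs to expose the bank count directly.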

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Use arch_get_ram_range instead of arch_get_memory_map.
---
 xen/arch/arm/numa.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 31332a6ea7..e9b2ec93bc 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -99,3 +99,14 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
 
     return node_distance_map[from][to];
 }
+
+int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
+{
+    if ( idx >= bootinfo.mem.nr_banks )
+        return -ENOENT;
+
+    *start = bootinfo.mem.bank[idx].start;
+    *end = *start + bootinfo.mem.bank[idx].size;
+
+    return 0;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:19:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:19:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540439.842210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUF-0006uc-4a; Mon, 29 May 2023 02:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540439.842210; Mon, 29 May 2023 02:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUF-0006uB-19; Mon, 29 May 2023 02:19:55 +0000
Received: by outflank-mailman (input) for mailman id 540439;
 Mon, 29 May 2023 02:19:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUD-0006GL-ON
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:19:53 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 4af984a0-fdc7-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 04:19:51 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8E789C14;
 Sun, 28 May 2023 19:20:36 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id F38EB3F64C;
 Sun, 28 May 2023 19:19:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4af984a0-fdc7-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 05/17] xen/arm: build NUMA cpu_to_node map in dt_smp_init_cpus
Date: Mon, 29 May 2023 10:19:09 +0800
Message-Id: <20230529021921.2606623-6-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

The NUMA implementation has a cpu_to_node array to store the CPU to
node map. Xen uses CPU logical IDs in runtime components, so we use
the CPU logical ID as the CPU index in cpu_to_node.

In the device tree case, cpu_logical_map is created in
dt_smp_init_cpus. So, when NUMA is enabled, dt_smp_init_cpus will
fetch the CPU's NUMA node id at the same time for cpu_to_node.
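The fallback behavior the patch implements when `numa-node-id` is missing or out of range can be isolated into one small function. A sketch under simplified constants; `record_cpu_node` is a hypothetical name for illustration, not a Xen symbol:

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS      8      /* illustrative */
#define MAX_NUMNODES 4      /* illustrative */
#define NUMA_NO_NODE 0xFF

typedef uint8_t nodeid_t;
static nodeid_t cpu_to_node[NR_CPUS];

/* Record the node a CPU belongs to. Anything unparsed (NUMA_NO_NODE)
 * or out of range falls back to node 0, mirroring the clamping done
 * when dt_smp_init_cpus populates cpu_to_node. */
static void record_cpu_node(unsigned int cpu, nodeid_t nid)
{
    if ( nid == NUMA_NO_NODE || nid >= MAX_NUMNODES )
        nid = 0;
    cpu_to_node[cpu] = nid;
}
```

This keeps cpu_to_node always valid: later consumers can index it without re-checking for NUMA_NO_NODE.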

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Use static inline functions instead of macros to perform
   function parameter type checking.
2. Add numa_disabled to gate the numa-node-id check for
   CONFIG_NUMA on but numa disabled user case.
3. Use macro instead of static inline function to stub
   numa_set_node.
---
 xen/arch/arm/include/asm/numa.h |  4 ++++
 xen/arch/arm/smpboot.c          | 36 +++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 96c856a9f7..97d4a67dea 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -68,6 +68,10 @@ static inline bool arch_numa_broken(void)
     return true;
 }
 
+static inline void numa_set_node(unsigned int cpu, nodeid_t node)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index e107b86b7b..7506085540 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -118,7 +118,12 @@ static void __init dt_smp_init_cpus(void)
     {
         [0 ... NR_CPUS - 1] = MPIDR_INVALID
     };
+    static nodeid_t node_map[NR_CPUS] __initdata =
+    {
+        [0 ... NR_CPUS - 1] = NUMA_NO_NODE
+    };
     bool bootcpu_valid = false;
+    unsigned int nid = 0;
     int rc;
 
     mpidr = system_cpuinfo.mpidr.bits & MPIDR_HWID_MASK;
@@ -169,6 +174,28 @@ static void __init dt_smp_init_cpus(void)
             continue;
         }
 
+        if ( IS_ENABLED(CONFIG_NUMA) )
+        {
+            /*
+             * When CONFIG_NUMA is set, try to fetch the NUMA information
+             * from the CPU DT node; otherwise the nid is always 0.
+             */
+            if ( !dt_property_read_u32(cpu, "numa-node-id", &nid) )
+            {
+                printk(XENLOG_WARNING
+                       "cpu[%d] dts path: %s: doesn't have numa information!\n",
+                       cpuidx, dt_node_full_name(cpu));
+                /*
+                 * During the early stage of NUMA initialization, if Xen
+                 * finds any CPU DT node without numa-node-id info, NUMA
+                 * is treated as off and all CPUs are placed in a fake
+                 * node 0. So if reading numa-node-id fails here, set
+                 * nid to 0.
+                 */
+                nid = 0;
+            }
+        }
+
         /*
          * 8 MSBs must be set to 0 in the DT since the reg property
          * defines the MPIDR[23:0]
@@ -228,9 +255,13 @@ static void __init dt_smp_init_cpus(void)
         {
             printk("cpu%d init failed (hwid %"PRIregister"): %d\n", i, hwid, rc);
             tmp_map[i] = MPIDR_INVALID;
+            node_map[i] = NUMA_NO_NODE;
         }
         else
+        {
             tmp_map[i] = hwid;
+            node_map[i] = nid;
+        }
     }
 
     if ( !bootcpu_valid )
@@ -246,6 +277,11 @@ static void __init dt_smp_init_cpus(void)
             continue;
         cpumask_set_cpu(i, &cpu_possible_map);
         cpu_logical_map(i) = tmp_map[i];
+
+        nid = node_map[i];
+        if ( nid >= MAX_NUMNODES )
+            nid = 0;
+        numa_set_node(i, nid);
     }
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:19:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:19:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540441.842220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUH-0007Fg-Dn; Mon, 29 May 2023 02:19:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540441.842220; Mon, 29 May 2023 02:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUH-0007FX-Ac; Mon, 29 May 2023 02:19:57 +0000
Received: by outflank-mailman (input) for mailman id 540441;
 Mon, 29 May 2023 02:19:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUF-0005T2-Ts
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:19:55 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 4d035c6a-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:19:55 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DAD58C14;
 Sun, 28 May 2023 19:20:39 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4BE1B3F64C;
 Sun, 28 May 2023 19:19:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d035c6a-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 06/17] xen/arm: Add boot and secondary CPU to NUMA system
Date: Mon, 29 May 2023 10:19:10 +0800
Message-Id: <20230529021921.2606623-7-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

In this patch, we bring the NUMA node online and add each CPU to
its NUMA node. This gives NUMA-aware components the NUMA
affinity data they need to do their work.

To keep mostly the same behavior as x86, we use
numa_detect_cpu_node to online the node. The difference is that
cpu_to_node has already been prepared in dt_smp_init_cpus, so we
don't need to set up cpu_to_node in numa_detect_cpu_node.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Use unsigned int instead of int for cpu id.
2. Use static inline for stub to do type check.
---
 xen/arch/arm/include/asm/numa.h |  9 +++++++++
 xen/arch/arm/numa.c             | 10 ++++++++++
 xen/arch/arm/setup.c            |  5 +++++
 3 files changed, 24 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 97d4a67dea..b04ace26db 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -35,6 +35,7 @@ typedef u8 nodeid_t;
 extern bool numa_disabled(void);
 extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
+extern void numa_detect_cpu_node(unsigned int cpu);
 
 #else
 
@@ -72,6 +73,14 @@ static inline void numa_set_node(unsigned int cpu, nodeid_t node)
 {
 }
 
+static inline void numa_add_cpu(unsigned int cpu)
+{
+}
+
+static inline void numa_detect_cpu_node(unsigned int cpu)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index e9b2ec93bc..b5a87531f7 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -76,6 +76,16 @@ void __init numa_set_distance(nodeid_t from, nodeid_t to,
     node_distance_map[from][to] = distance;
 }
 
+void numa_detect_cpu_node(unsigned int cpu)
+{
+    nodeid_t node = cpu_to_node[cpu];
+
+    if ( node == NUMA_NO_NODE )
+        node = 0;
+
+    node_set_online(node);
+}
+
 unsigned char __node_distance(nodeid_t from, nodeid_t to)
 {
     if ( from == to )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 74b40e527f..ab9eb6fb80 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1205,6 +1205,11 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     for_each_present_cpu ( i )
     {
+        /* Detect and online node based on cpu_to_node[]. */
+        numa_detect_cpu_node(i);
+        /* Set up node_to_cpumask based on cpu_to_node[]. */
+        numa_add_cpu(i);
+
         if ( (num_online_cpus() < nr_cpu_ids) && !cpu_online(i) )
         {
             int ret = cpu_up(i);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:20:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:20:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540443.842230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUL-0007ep-Oq; Mon, 29 May 2023 02:20:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540443.842230; Mon, 29 May 2023 02:20:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUL-0007ee-Ke; Mon, 29 May 2023 02:20:01 +0000
Received: by outflank-mailman (input) for mailman id 540443;
 Mon, 29 May 2023 02:20:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUK-0005T2-5U
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:00 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 4ee597bc-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:19:58 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 330B4AB6;
 Sun, 28 May 2023 19:20:43 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 984EA3F64C;
 Sun, 28 May 2023 19:19:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ee597bc-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 07/17] xen/arm: introduce a helper to parse device tree processor node
Date: Mon, 29 May 2023 10:19:11 +0800
Message-Id: <20230529021921.2606623-8-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Processor NUMA ID information is stored in the device tree's
processor nodes as "numa-node-id". We need a new helper to parse
this ID from a processor node. Even when an ID is read
successfully, its validity still needs to be checked: once we get
an invalid NUMA ID from any processor node, the device tree is
marked as having invalid NUMA information.

Since the new helpers need to know the NUMA status, move the
enum dt_numa_status to the Arm NUMA header.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. Rename "numa_device_tree.c" to "numa-dt.c".
v3 -> v4:
1. No change.
v2 -> v3:
1. Move the enum dt_numa_status to the Arm NUMA header.
2. Update the year in copyright to 2023.
v1 -> v2:
1. Move numa_disabled from fdt_numa_processor_affinity_init
   to fdt_parse_numa_cpu_node.
2. Move invalid NUMA id check to fdt_parse_numa_cpu_node.
3. Return ENODATA for normal dtb without NUMA info.
4. Use NUMA status helpers instead of SRAT functions.
---
 xen/arch/arm/Makefile           |  1 +
 xen/arch/arm/include/asm/numa.h |  8 +++++
 xen/arch/arm/numa-dt.c          | 64 +++++++++++++++++++++++++++++++++
 xen/arch/arm/numa.c             |  8 +----
 4 files changed, 74 insertions(+), 7 deletions(-)
 create mode 100644 xen/arch/arm/numa-dt.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 814c472c4f..d4cf2f7752 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -39,6 +39,7 @@ obj-y += mem_access.o
 obj-y += mm.o
 obj-y += monitor.o
 obj-$(CONFIG_NUMA) += numa.o
+obj-$(CONFIG_DEVICE_TREE_NUMA) += numa-dt.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += platform.o
diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index b04ace26db..2987158d16 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -22,6 +22,14 @@ typedef u8 nodeid_t;
  */
 #define NR_NODE_MEMBLKS NR_MEM_BANKS
 
+enum dt_numa_status {
+    DT_NUMA_DEFAULT,
+    DT_NUMA_ON,
+    DT_NUMA_OFF,
+};
+
+extern enum dt_numa_status device_tree_numa;
+
 /*
  * In ACPI spec, 0-9 are the reserved values for node distance,
  * 10 indicates local node distance, 20 indicates remote node
diff --git a/xen/arch/arm/numa-dt.c b/xen/arch/arm/numa-dt.c
new file mode 100644
index 0000000000..83601c83e7
--- /dev/null
+++ b/xen/arch/arm/numa-dt.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Arm Architecture support layer for device tree NUMA.
+ *
+ * Copyright (C) 2023 Arm Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#include <xen/init.h>
+#include <xen/nodemask.h>
+#include <xen/numa.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/device_tree.h>
+
+/* Callback for device tree processor affinity */
+static int __init fdt_numa_processor_affinity_init(nodeid_t node)
+{
+    numa_set_processor_nodes_parsed(node);
+    device_tree_numa = DT_NUMA_ON;
+
+    printk(KERN_INFO "DT: NUMA node %"PRIu8" processor parsed\n", node);
+
+    return 0;
+}
+
+/* Parse CPU NUMA node info */
+static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
+{
+    unsigned int nid;
+
+    if ( numa_disabled() )
+        return -EINVAL;
+
+    /*
+     * device_tree_get_u32 will return NUMA_NO_NODE when this CPU
+     * DT node doesn't have numa-node-id. This can help us to
+     * distinguish a bad DTB and a normal DTB without NUMA info.
+     */
+    nid = device_tree_get_u32(fdt, node, "numa-node-id", NUMA_NO_NODE);
+    if ( nid == NUMA_NO_NODE )
+    {
+        numa_fw_bad();
+        return -ENODATA;
+    }
+    else if ( nid >= MAX_NUMNODES )
+    {
+        printk(XENLOG_ERR "DT: CPU NUMA node id %u is invalid\n", nid);
+        numa_fw_bad();
+        return -EINVAL;
+    }
+
+    return fdt_numa_processor_affinity_init(nid);
+}
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index b5a87531f7..08e15ebbb0 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -20,13 +20,7 @@
 #include <xen/init.h>
 #include <xen/numa.h>
 
-enum dt_numa_status {
-    DT_NUMA_DEFAULT,
-    DT_NUMA_ON,
-    DT_NUMA_OFF,
-};
-
-static enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
+enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
 
 static unsigned char __ro_after_init
 node_distance_map[MAX_NUMNODES][MAX_NUMNODES] = {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:20:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:20:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540444.842240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUO-0008Ha-9P; Mon, 29 May 2023 02:20:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540444.842240; Mon, 29 May 2023 02:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUO-0008Gm-2l; Mon, 29 May 2023 02:20:04 +0000
Received: by outflank-mailman (input) for mailman id 540444;
 Mon, 29 May 2023 02:20:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUM-0005T2-Mj
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:02 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 50f275af-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:20:01 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7813DAB6;
 Sun, 28 May 2023 19:20:46 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id DD1783F64C;
 Sun, 28 May 2023 19:19:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50f275af-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 08/17] xen/arm: introduce a helper to parse device tree memory node
Date: Mon, 29 May 2023 10:19:12 +0800
Message-Id: <20230529021921.2606623-9-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Memory blocks' NUMA ID information is stored in the device tree's
memory nodes as "numa-node-id". We need a new helper to parse
and verify this ID from memory nodes.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. Fix coding style: printk variable type and label indented.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Move numa_disabled check to fdt_parse_numa_memory_node.
2. Use numa_bad to replace bad_srat.
3. Replace tabs by spaces.
4. Align parameters.
5. return ENODATA for a normal dtb without numa info.
6. Un-addressed comment:
   "Why not parse numa-node-id and call fdt_numa_memory_affinity_init
   from xen/arch/arm/bootfdt.c:device_tree_get_meminfo. Is it because
   device_tree_get_meminfo is called too early?"
   I checked the device_tree_get_meminfo code and I think the answer
   is similar to my reply on the RFC: I prefer a unified NUMA
   initialization entry point, and don't want to scatter the NUMA
   parsing code across different places.
7. Use node id as dummy PXM for numa_update_node_memblks.
---
 xen/arch/arm/numa-dt.c | 89 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)

diff --git a/xen/arch/arm/numa-dt.c b/xen/arch/arm/numa-dt.c
index 83601c83e7..cebc7e4300 100644
--- a/xen/arch/arm/numa-dt.c
+++ b/xen/arch/arm/numa-dt.c
@@ -34,6 +34,26 @@ static int __init fdt_numa_processor_affinity_init(nodeid_t node)
     return 0;
 }
 
+/* Callback for parsing of the memory regions affinity */
+static int __init fdt_numa_memory_affinity_init(nodeid_t node,
+                                                paddr_t start, paddr_t size)
+{
+    if ( !numa_memblks_available() )
+    {
+        dprintk(XENLOG_WARNING,
+                "Too many NUMA entries, try bigger NR_NODE_MEMBLKS\n");
+        return -EINVAL;
+    }
+
+    numa_fw_nid_name = "numa-node-id";
+    if ( !numa_update_node_memblks(node, node, start, size, false) )
+        return -EINVAL;
+
+    device_tree_numa = DT_NUMA_ON;
+
+    return 0;
+}
+
 /* Parse CPU NUMA node info */
 static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 {
@@ -62,3 +82,72 @@ static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 
     return fdt_numa_processor_affinity_init(nid);
 }
+
+/* Parse memory node NUMA info */
+static int __init fdt_parse_numa_memory_node(const void *fdt, int node,
+                                             const char *name,
+                                             unsigned int addr_cells,
+                                             unsigned int size_cells)
+{
+    unsigned int nid;
+    int ret = 0, len;
+    paddr_t addr, size;
+    const struct fdt_property *prop;
+    unsigned int idx, ranges;
+    const __be32 *addresses;
+
+    if ( numa_disabled() )
+        return -EINVAL;
+
+    /*
+     * device_tree_get_u32 will return NUMA_NO_NODE when this memory
+     * DT node doesn't have numa-node-id. This can help us to
+     * distinguish a bad DTB and a normal DTB without NUMA info.
+     */
+    nid = device_tree_get_u32(fdt, node, "numa-node-id", NUMA_NO_NODE);
+    if ( nid == NUMA_NO_NODE )
+    {
+        numa_fw_bad();
+        return -ENODATA;
+    }
+    else if ( nid >= MAX_NUMNODES )
+    {
+        printk(XENLOG_WARNING "Node id %u exceeds maximum value\n", nid);
+        goto invalid_data;
+    }
+
+    prop = fdt_get_property(fdt, node, "reg", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_WARNING
+               "fdt: node `%s': missing `reg' property\n", name);
+        goto invalid_data;
+    }
+
+    addresses = (const __be32 *)prop->data;
+    ranges = len / (sizeof(__be32) * (addr_cells + size_cells));
+    for ( idx = 0; idx < ranges; idx++ )
+    {
+        device_tree_get_reg(&addresses, addr_cells, size_cells, &addr, &size);
+        /* Skip zero size ranges */
+        if ( !size )
+            continue;
+
+        ret = fdt_numa_memory_affinity_init(nid, addr, size);
+        if ( ret )
+            goto invalid_data;
+    }
+
+    if ( idx == 0 )
+    {
+        printk(XENLOG_ERR
+               "bad property in memory node, idx=%u ret=%d\n", idx, ret);
+        goto invalid_data;
+    }
+
+    return 0;
+
+ invalid_data:
+    numa_fw_bad();
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:20:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:20:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540447.842250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUS-0000pS-Jf; Mon, 29 May 2023 02:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540447.842250; Mon, 29 May 2023 02:20:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUS-0000oA-Fx; Mon, 29 May 2023 02:20:08 +0000
Received: by outflank-mailman (input) for mailman id 540447;
 Mon, 29 May 2023 02:20:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUR-0006GL-2j
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:07 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 52d3afae-fdc7-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 04:20:05 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C3BB9AB6;
 Sun, 28 May 2023 19:20:49 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 34C853F64C;
 Sun, 28 May 2023 19:20:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52d3afae-fdc7-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 09/17] xen/arm: introduce a helper to parse device tree NUMA distance map
Date: Mon, 29 May 2023 10:19:13 +0800
Message-Id: <20230529021921.2606623-10-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

A NUMA-aware device tree provides a "distance-map" node to
describe the distance between any two nodes. This patch introduces
a new helper to parse this distance map.

Note that since the NUMA device tree binding does not explicitly
specify the range of valid node distances, rather than rejecting
distance values >= 0xff we saturate the distance at 0xfe, keeping
0xff for NUMA_NO_DISTANCE, so that overall things stay consistent
with ACPI.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. Fix coding style (printk variable type and label indented) and
   in-code comment.
2. Check the from/to range to avoid the side-effect of the 8-bit
   truncation by numa_set_distance().
v3 -> v4:
1. The distance map default value is now NUMA_NO_DISTANCE, update
   the logic accordingly and add in-code comment as a note.
v2 -> v3:
1. No change.
v1 -> v2:
1. Get rid of useless braces.
2. Use new NUMA status helper.
3. Use PRIu32 to replace u in print messages.
4. Fix opposite = __node_distance(to, from).
5. disable dtb numa info table when we find an invalid data
   in dtb.
---
 xen/arch/arm/numa-dt.c | 116 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 116 insertions(+)

diff --git a/xen/arch/arm/numa-dt.c b/xen/arch/arm/numa-dt.c
index cebc7e4300..2fb6663e08 100644
--- a/xen/arch/arm/numa-dt.c
+++ b/xen/arch/arm/numa-dt.c
@@ -151,3 +151,119 @@ static int __init fdt_parse_numa_memory_node(const void *fdt, int node,
     numa_fw_bad();
     return -EINVAL;
 }
+
+/* Parse NUMA distance map v1 */
+static int __init fdt_parse_numa_distance_map_v1(const void *fdt, int node)
+{
+    const struct fdt_property *prop;
+    const __be32 *matrix;
+    unsigned int i, entry_count;
+    int len;
+
+    printk(XENLOG_INFO "NUMA: parsing numa-distance-map\n");
+
+    prop = fdt_get_property(fdt, node, "distance-matrix", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_WARNING
+               "NUMA: No distance-matrix property in distance-map\n");
+        goto invalid_data;
+    }
+
+    if ( len % sizeof(__be32) != 0 )
+    {
+        printk(XENLOG_WARNING
+               "distance-matrix in node is not a multiple of u32\n");
+        goto invalid_data;
+    }
+
+    entry_count = len / sizeof(__be32);
+    if ( entry_count == 0 )
+    {
+        printk(XENLOG_WARNING "NUMA: Invalid distance-matrix\n");
+        goto invalid_data;
+    }
+
+    matrix = (const __be32 *)prop->data;
+    for ( i = 0; i + 2 < entry_count; i += 3 )
+    {
+        unsigned int from, to, distance, opposite;
+
+        from = dt_next_cell(1, &matrix);
+        to = dt_next_cell(1, &matrix);
+        distance = dt_next_cell(1, &matrix);
+
+        if ( from >= MAX_NUMNODES || to >= MAX_NUMNODES )
+        {
+            printk(XENLOG_WARNING "NUMA: invalid nodes: from=%u to=%u MAX=%u\n",
+                   from, to, MAX_NUMNODES);
+            goto invalid_data;
+        }
+
+        if ( (from == to && distance != NUMA_LOCAL_DISTANCE) ||
+             (from != to && distance <= NUMA_LOCAL_DISTANCE) )
+        {
+            printk(XENLOG_WARNING
+                   "NUMA: Invalid distance: NODE#%u->NODE#%u:%u\n",
+                   from, to, distance);
+            goto invalid_data;
+        }
+
+        printk(XENLOG_INFO "NUMA: distance: NODE#%u->NODE#%u:%u\n",
+               from, to, distance);
+
+        /* Get opposite way distance */
+        opposite = __node_distance(to, from);
+        /* The default value in node_distance_map is NUMA_NO_DISTANCE */
+        if ( opposite == NUMA_NO_DISTANCE )
+        {
+            /* Bi-directions are not set, set both */
+            numa_set_distance(from, to, distance);
+            numa_set_distance(to, from, distance);
+        }
+        else
+        {
+            /*
+             * Opposite way distance has been set to a different value.
+             * It may be a firmware device tree bug?
+             */
+            if ( opposite != distance )
+            {
+                /*
+                 * In device tree NUMA distance-matrix binding:
+                 * https://www.kernel.org/doc/Documentation/devicetree/bindings/numa.txt
+                 * there is a note that says:
+                 * "Each entry represents distance from first node to
+                 *  second node. The distances are equal in either
+                 *  direction."
+                 *
+                 * That means device tree doesn't permit this case.
+                 * But the ACPI spec specifically permits this
+                 * case:
+                 * "Except for the relative distance from a System Locality
+                 *  to itself, each relative distance is stored twice in the
+                 *  matrix. This provides the capability to describe the
+                 *  scenario where the relative distances for the two
+                 *  directions between System Localities is different."
+                 *
+                 * That means a real machine may have such a NUMA
+                 * configuration. So, print a WARNING here to let system
+                 * administrators know, in case they have tweaked the
+                 * device tree to support such a rare machine.
+                 */
+                printk(XENLOG_WARNING
+                       "Mismatched bi-directional distances! NODE#%u->NODE#%u:%u, NODE#%u->NODE#%u:%u\n",
+                       from, to, distance, to, from, opposite);
+            }
+
+            /* Opposite way distance was set before, just set this way */
+            numa_set_distance(from, to, distance);
+        }
+    }
+
+    return 0;
+
+ invalid_data:
+    numa_fw_bad();
+    return -EINVAL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:20:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:20:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540452.842260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUZ-0001in-6K; Mon, 29 May 2023 02:20:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540452.842260; Mon, 29 May 2023 02:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3SUZ-0001iZ-0t; Mon, 29 May 2023 02:20:15 +0000
Received: by outflank-mailman (input) for mailman id 540452;
 Mon, 29 May 2023 02:20:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUX-0006GL-G5
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:13 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 56df9c79-fdc7-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 04:20:11 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8C396AB6;
 Sun, 28 May 2023 19:20:56 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id F06273F64C;
 Sun, 28 May 2023 19:20:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56df9c79-fdc7-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 11/17] xen/arm: keep guests NUMA-unaware
Date: Mon, 29 May 2023 10:19:15 +0800
Message-Id: <20230529021921.2606623-12-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

The NUMA information provided in the host device tree is only
for Xen. For dom0, we want to hide it, as the layout dom0 sees
may be different (for now, dom0 is still not NUMA-aware). The
CPU and memory nodes are recreated from scratch for the domain,
so we already skip the "numa-node-id" property for these two
types of nodes.

However, some devices such as PCIe controllers may have a
"numa-node-id" property too. We have to skip it as well.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Add Rb
---
 xen/arch/arm/domain_build.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 3f4558ade6..bf2aa3d56c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1185,6 +1185,10 @@ static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
                 continue;
         }
 
+        /* Dom0 is currently NUMA unaware */
+        if ( dt_property_name_is_equal(prop, "numa-node-id") )
+            continue;
+
         res = fdt_property(kinfo->fdt, prop->name, prop_data, prop_len);
 
         if ( res )
@@ -2584,6 +2588,8 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
         DT_MATCH_TYPE("memory"),
         /* The memory mapped timer is not supported by Xen. */
         DT_MATCH_COMPATIBLE("arm,armv7-timer-mem"),
+        /* NUMA info doesn't need to be exposed to Domain-0 */
+        DT_MATCH_COMPATIBLE("numa-distance-map-v1"),
         { /* sentinel */ },
     };
     static const struct dt_device_match timer_matches[] __initconst =
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:27:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540476.842273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbj-00049f-9m; Mon, 29 May 2023 02:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540476.842273; Mon, 29 May 2023 02:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbj-00049G-46; Mon, 29 May 2023 02:27:39 +0000
Received: by outflank-mailman (input) for mailman id 540476;
 Mon, 29 May 2023 02:27:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUm-0006GL-6F
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:28 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 5ebb7b14-fdc7-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 04:20:25 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9FD6BAB6;
 Sun, 28 May 2023 19:21:09 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 4FD823F64C;
 Sun, 28 May 2023 19:20:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ebb7b14-fdc7-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Henry Wang <Henry.Wang@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v5 15/17] xen/arm: Set correct per-cpu cpu_core_mask
Date: Mon, 29 May 2023 10:19:19 +0800
Message-Id: <20230529021921.2606623-16-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the common sysctl command XEN_SYSCTL_physinfo, cores_per_socket
is calculated based on the cpu_core_mask of CPU0. Currently on Arm
this is a fixed value of 1 (which can be checked via `xl info`),
and that is not correct. This is because, during the Arm CPU online
process, set_cpu_sibling_map() only sets the per-cpu cpu_core_mask
for the CPU itself.

cores_per_socket refers to the number of cores that belong to the same
socket (NUMA node). Therefore, this commit introduces a helper function,
numa_set_cpu_core_mask(cpu), which sets the per-cpu cpu_core_mask to
the CPUs in the same NUMA node as cpu. Calling this function at boot
time ensures a correct cpu_core_mask, so that XEN_SYSCTL_physinfo
returns the correct cores_per_socket.

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. New patch
---
 xen/arch/arm/include/asm/numa.h |  7 +++++++
 xen/arch/arm/numa.c             | 11 +++++++++++
 xen/arch/arm/setup.c            |  5 +++++
 3 files changed, 23 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 71b95a9a62..d4c89909d0 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -46,6 +46,7 @@ extern void numa_set_distance(nodeid_t from, nodeid_t to,
 extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
 extern void numa_init(void);
+extern void numa_set_cpu_core_mask(int cpu);
 
 /*
  * Device tree NUMA doesn't have architectural node id.
@@ -62,6 +63,12 @@ static inline unsigned int numa_node_to_arch_nid(nodeid_t n)
 #define cpu_to_node(cpu) 0
 #define node_to_cpumask(node)   (cpu_online_map)
 
+static inline void numa_set_cpu_core_mask(int cpu)
+{
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &cpu_possible_map);
+}
+
 /*
  * TODO: make first_valid_mfn static when NUMA is supported on Arm, this
  * is required because the dummy helpers are using it.
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 13a167fc4f..1ac2df37fc 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -52,6 +52,17 @@ int __init arch_numa_setup(const char *opt)
     return -EINVAL;
 }
 
+void numa_set_cpu_core_mask(int cpu)
+{
+    nodeid_t node = cpu_to_node[cpu];
+
+    if ( node == NUMA_NO_NODE )
+        node = 0;
+
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &node_to_cpumask(node));
+}
+
 void __init numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index bfcb0c7b6b..9b586605a6 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1226,6 +1226,11 @@ void __init start_xen(unsigned long boot_phys_offset,
     }
 
     printk("Brought up %ld CPUs\n", (long)num_online_cpus());
+
+    /* Set per-cpu cpu_core_mask to cpus that belong to the same NUMA node. */
+    for_each_online_cpu ( i )
+        numa_set_cpu_core_mask(i);
+
     /* TODO: smp_cpus_done(); */
 
     /* This should be done in a vpmu driver but we do not have one yet. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:27:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540488.842290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbm-0004g1-Ei; Mon, 29 May 2023 02:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540488.842290; Mon, 29 May 2023 02:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbm-0004fu-Bf; Mon, 29 May 2023 02:27:42 +0000
Received: by outflank-mailman (input) for mailman id 540488;
 Mon, 29 May 2023 02:27:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUd-0005T2-7U
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:19 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 5aecbb5a-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:20:18 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 55B2FAB6;
 Sun, 28 May 2023 19:21:03 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id B1E103F64C;
 Sun, 28 May 2023 19:20:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5aecbb5a-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 13/17] xen/arm: implement numa_node_to_arch_nid for device tree NUMA
Date: Mon, 29 May 2023 10:19:17 +0800
Message-Id: <20230529021921.2606623-14-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Device tree based NUMA doesn't have a proximity domain concept
like ACPI, so we can return the node id directly as the arch nid.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Use numa_node_to_arch_nid instead of dummy node_to_pxm.
---
 xen/arch/arm/include/asm/numa.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 55ac4665db..71b95a9a62 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -47,6 +47,15 @@ extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
 extern void numa_init(void);
 
+/*
+ * Device tree NUMA doesn't have architectural node id.
+ * So we can just return node id as arch nid.
+ */
+static inline unsigned int numa_node_to_arch_nid(nodeid_t n)
+{
+    return n;
+}
+
 #else
 
 /* Fake one node for now. See also node_online_map. */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:27:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540492.842294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbm-0004jd-Ns; Mon, 29 May 2023 02:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540492.842294; Mon, 29 May 2023 02:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbm-0004jI-K1; Mon, 29 May 2023 02:27:42 +0000
Received: by outflank-mailman (input) for mailman id 540492;
 Mon, 29 May 2023 02:27:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUc-0006GL-4Y
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:18 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 58e20ee3-fdc7-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 04:20:15 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D6395C14;
 Sun, 28 May 2023 19:20:59 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 476F93F64C;
 Sun, 28 May 2023 19:20:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58e20ee3-fdc7-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 12/17] xen/arm: enable device tree based NUMA in system init
Date: Mon, 29 May 2023 10:19:16 +0800
Message-Id: <20230529021921.2606623-13-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

With this patch, Xen can start to build a NUMA system based
on the device tree.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. Fix coding style: label indented by 1 space.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Replace ~0 by INVALID_PADDR.
2. Only print error messages for invalid dtb data.
3. Remove unnecessary return.
4. Remove the parameter of numa_init.
---
 xen/arch/arm/include/asm/numa.h |  5 +++
 xen/arch/arm/numa.c             | 57 +++++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c            |  7 ++++
 3 files changed, 69 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 15308f5a36..55ac4665db 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -45,6 +45,7 @@ extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
 extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
+extern void numa_init(void);
 
 #else
 
@@ -90,6 +91,10 @@ static inline void numa_detect_cpu_node(unsigned int cpu)
 {
 }
 
+static inline void numa_init(void)
+{
+}
+
 #endif
 
 #define arch_want_default_dmazone() (false)
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index 08e15ebbb0..13a167fc4f 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -18,7 +18,11 @@
  *
  */
 #include <xen/init.h>
+#include <xen/device_tree.h>
+#include <xen/nodemask.h>
 #include <xen/numa.h>
+#include <xen/pfn.h>
+#include <xen/acpi.h>
 
 enum dt_numa_status __ro_after_init device_tree_numa = DT_NUMA_DEFAULT;
 
@@ -104,6 +108,59 @@ unsigned char __node_distance(nodeid_t from, nodeid_t to)
     return node_distance_map[from][to];
 }
 
+void __init numa_init(void)
+{
+    unsigned int idx;
+    paddr_t ram_start = INVALID_PADDR;
+    paddr_t ram_size = 0;
+    paddr_t ram_end = 0;
+
+    /* NUMA has been turned off through Xen parameters */
+    if ( numa_off )
+        goto mem_init;
+
+    /* Initialize NUMA from device tree when system is not ACPI booted */
+    if ( acpi_disabled )
+    {
+        int ret = numa_device_tree_init(device_tree_flattened);
+        if ( ret )
+        {
+            numa_off = true;
+            if ( ret == -EINVAL )
+                printk(XENLOG_WARNING
+                       "Init NUMA from device tree failed, ret=%d\n", ret);
+        }
+    }
+    else
+    {
+        /* We don't support NUMA for ACPI boot currently */
+        printk(XENLOG_WARNING
+               "ACPI NUMA has not been supported yet, NUMA off!\n");
+        numa_off = true;
+    }
+
+ mem_init:
+    /*
+     * Find the minimum and maximum addresses of RAM; NUMA will
+     * build a memory to node mapping table for the whole range.
+     */
+    ram_start = bootinfo.mem.bank[0].start;
+    ram_size  = bootinfo.mem.bank[0].size;
+    ram_end   = ram_start + ram_size;
+    for ( idx = 1 ; idx < bootinfo.mem.nr_banks; idx++ )
+    {
+        paddr_t bank_start = bootinfo.mem.bank[idx].start;
+        paddr_t bank_size = bootinfo.mem.bank[idx].size;
+        paddr_t bank_end = bank_start + bank_size;
+
+        ram_size  = ram_size + bank_size;
+        ram_start = min(ram_start, bank_start);
+        ram_end   = max(ram_end, bank_end);
+    }
+
+    numa_initmem_init(PFN_UP(ram_start), PFN_DOWN(ram_end));
+}
+
 int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
 {
     if ( idx >= bootinfo.mem.nr_banks )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index ab9eb6fb80..bfcb0c7b6b 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1121,6 +1121,13 @@ void __init start_xen(unsigned long boot_phys_offset,
     /* Parse the ACPI tables for possible boot-time configuration */
     acpi_boot_table_init();
 
+    /*
+     * Try to initialize the NUMA system. If this fails, the
+     * system will fall back to a uniform system, which means
+     * the system has only 1 NUMA node.
+     */
+    numa_init();
+
     end_boot_allocator();
 
     /*
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:27:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540475.842270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbj-00048p-1t; Mon, 29 May 2023 02:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540475.842270; Mon, 29 May 2023 02:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbi-00048a-TO; Mon, 29 May 2023 02:27:38 +0000
Received: by outflank-mailman (input) for mailman id 540475;
 Mon, 29 May 2023 02:27:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUo-0005T2-Gz
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:30 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 60a4ea8c-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:20:28 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E72D6AB6;
 Sun, 28 May 2023 19:21:12 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 585163F64C;
 Sun, 28 May 2023 19:20:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60a4ea8c-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 16/17] xen/arm: Provide Kconfig options for Arm to enable NUMA
Date: Mon, 29 May 2023 10:19:20 +0800
Message-Id: <20230529021921.2606623-17-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

Arm platforms support both ACPI and device tree. We don't
want users to have to select device tree NUMA or ACPI NUMA
manually. Instead, users should simply enable NUMA for Arm,
and device tree NUMA or ACPI NUMA will be selected
automatically depending on the device tree and ACPI feature
status. This way, both kinds of NUMA support code can
coexist in one Xen binary, and Xen can check the feature
flags to decide whether to use device tree or ACPI as the
NUMA firmware source.

So in this patch, we introduce a generic option,
CONFIG_ARM_NUMA, for users to enable NUMA for Arm, plus a
CONFIG_DEVICE_TREE_NUMA option for ARM_NUMA to select when
the HAS_DEVICE_TREE option is enabled. Once ACPI NUMA for
Arm is supported, ACPI_NUMA can be selected here too.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Remove the condition of selecting DEVICE_TREE_NUMA.
---
 xen/arch/arm/Kconfig | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c..e751ad50d1 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -39,6 +39,17 @@ config ACPI
 config ARM_EFI
 	bool
 
+config ARM_NUMA
+	bool "Arm NUMA (Non-Uniform Memory Access) Support (UNSUPPORTED)" if UNSUPPORTED
+	depends on HAS_DEVICE_TREE
+	select DEVICE_TREE_NUMA
+	help
+	  Enable Non-Uniform Memory Access (NUMA) for Arm architectures
+
+config DEVICE_TREE_NUMA
+	bool
+	select NUMA
+
 config GICV3
 	bool "GICv3 driver"
 	depends on !NEW_VGIC
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:27:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:27:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540493.842302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbn-0004mP-5I; Mon, 29 May 2023 02:27:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540493.842302; Mon, 29 May 2023 02:27:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbm-0004l6-Qy; Mon, 29 May 2023 02:27:42 +0000
Received: by outflank-mailman (input) for mailman id 540493;
 Mon, 29 May 2023 02:27:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUh-0006GL-IW
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:23 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 5ce5dd70-fdc7-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 04:20:22 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A3C84AB6;
 Sun, 28 May 2023 19:21:06 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 0BE463F64C;
 Sun, 28 May 2023 19:20:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ce5dd70-fdc7-11ed-8611-37d641c3527e
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 14/17] xen/arm: use CONFIG_NUMA to gate node_online_map in smpboot
Date: Mon, 29 May 2023 10:19:18 +0800
Message-Id: <20230529021921.2606623-15-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

node_online_map in smpboot is still needed for Arm when NUMA is
turned off by Kconfig.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. No change.
---
 xen/arch/arm/smpboot.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 7506085540..eedbb57291 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -41,8 +41,10 @@ integer_param("maxcpus", max_cpus);
 /* CPU logical map: map xen cpuid to an MPIDR */
 register_t __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = MPIDR_INVALID };
 
+#ifndef CONFIG_NUMA
 /* Fake one node for now. See also asm/numa.h */
 nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
+#endif
 
 /* Xen stack for bringing up the first CPU. */
 static unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:27:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:27:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540495.842305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbn-0004uM-G3; Mon, 29 May 2023 02:27:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540495.842305; Mon, 29 May 2023 02:27:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbn-0004sM-7t; Mon, 29 May 2023 02:27:43 +0000
Received: by outflank-mailman (input) for mailman id 540495;
 Mon, 29 May 2023 02:27:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUq-0005T2-Px
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:32 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 62f45eda-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:20:32 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B8BF5AB6;
 Sun, 28 May 2023 19:21:16 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9FD643F64C;
 Sun, 28 May 2023 19:20:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62f45eda-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 17/17] docs: update numa command line to support Arm
Date: Mon, 29 May 2023 10:19:21 +0800
Message-Id: <20230529021921.2606623-18-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

The numa command line option is currently documented as x86
only. Remove the x86 arch limitation from its documentation
in this patch.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. Add the Acked-by tag from Jan.
v1 -> v2:
1. Update Arm NUMA status in SUPPORT.md to "Tech Preview".
---
 SUPPORT.md                        | 1 +
 docs/misc/xen-command-line.pandoc | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index 6dbed9d5d0..8ab8b94afe 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -411,6 +411,7 @@ on embedded platforms and the x86 PV shim.
 Enables NUMA aware scheduling in Xen
 
     Status, x86: Supported
+    Status, Arm: Tech Preview
 
 ## Scalability
 
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d33..2fea22dd70 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1890,7 +1890,7 @@ i.e. a limit on the number of guests it is possible to start each having
 assigned a device sharing a common interrupt line.  Accepts values between
 1 and 255.
 
-### numa (x86)
+### numa
 > `= on | off | fake=<integer> | noacpi`
 
 > Default: `on`
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:27:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540496.842314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbo-00053n-0m; Mon, 29 May 2023 02:27:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540496.842314; Mon, 29 May 2023 02:27:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Sbn-0004zi-KZ; Mon, 29 May 2023 02:27:43 +0000
Received: by outflank-mailman (input) for mailman id 540496;
 Mon, 29 May 2023 02:27:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x08u=BS=arm.com=Henry.Wang@srs-se1.protection.inumbo.net>)
 id 1q3SUT-0005T2-6T
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:20:09 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 54da51f7-fdc7-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 04:20:08 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2BF55AB6;
 Sun, 28 May 2023 19:20:53 -0700 (PDT)
Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com
 [10.169.190.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 914E93F64C;
 Sun, 28 May 2023 19:20:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54da51f7-fdc7-11ed-b231-6b7b168915f2
From: Henry Wang <Henry.Wang@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Wei Chen <wei.chen@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Henry Wang <Henry.Wang@arm.com>
Subject: [PATCH v5 10/17] xen/arm: unified entry to parse all NUMA data from device tree
Date: Mon, 29 May 2023 10:19:14 +0800
Message-Id: <20230529021921.2606623-11-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230529021921.2606623-1-Henry.Wang@arm.com>
References: <20230529021921.2606623-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Wei Chen <wei.chen@arm.com>

In this function, we scan the whole device tree to parse the CPU node
ids, memory node ids and the distance-map. Although early_scan_node
invokes a handler to process memory nodes, parsing the memory node ids
in that handler would mean embedding the NUMA parsing code there, and
we would still need to scan the whole device tree to find the CPU NUMA
ids and the distance-map. So we include the memory NUMA id parsing in
this function as well. Another benefit is that this gives us a single
entry point for parsing device tree NUMA data.

Signed-off-by: Wei Chen <wei.chen@arm.com>
Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
v4 -> v5:
1. No change.
v3 -> v4:
1. No change.
v2 -> v3:
1. No change.
v1 -> v2:
1. Fix typos in commit message.
2. Fix code style and align parameters.
3. Use strncmp to replace memcmp.
---
 xen/arch/arm/include/asm/numa.h |  1 +
 xen/arch/arm/numa-dt.c          | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index 2987158d16..15308f5a36 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -44,6 +44,7 @@ extern bool numa_disabled(void);
 extern void numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance);
 extern void numa_detect_cpu_node(unsigned int cpu);
+extern int numa_device_tree_init(const void *fdt);
 
 #else
 
diff --git a/xen/arch/arm/numa-dt.c b/xen/arch/arm/numa-dt.c
index 2fb6663e08..8198a0da2e 100644
--- a/xen/arch/arm/numa-dt.c
+++ b/xen/arch/arm/numa-dt.c
@@ -267,3 +267,33 @@ static int __init fdt_parse_numa_distance_map_v1(const void *fdt, int node)
     numa_fw_bad();
     return -EINVAL;
 }
+
+static int __init fdt_scan_numa_nodes(const void *fdt, int node,
+                                      const char *uname, int depth,
+                                      unsigned int address_cells,
+                                      unsigned int size_cells, void *data)
+{
+    int len, ret = 0;
+    const void *prop;
+
+    prop = fdt_getprop(fdt, node, "device_type", &len);
+    if ( prop )
+    {
+        if ( strncmp(prop, "cpu", len) == 0 )
+            ret = fdt_parse_numa_cpu_node(fdt, node);
+        else if ( strncmp(prop, "memory", len) == 0 )
+            ret = fdt_parse_numa_memory_node(fdt, node, uname,
+                                             address_cells, size_cells);
+    }
+    else if ( fdt_node_check_compatible(fdt, node,
+                                        "numa-distance-map-v1") == 0 )
+        ret = fdt_parse_numa_distance_map_v1(fdt, node);
+
+    return ret;
+}
+
+/* Initialize NUMA from device tree */
+int __init numa_device_tree_init(const void *fdt)
+{
+    return device_tree_for_each_node(fdt, 0, fdt_scan_numa_nodes, NULL);
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 02:40:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 02:40:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540528.842340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Snb-0000eb-7k; Mon, 29 May 2023 02:39:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540528.842340; Mon, 29 May 2023 02:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Snb-0000eU-3r; Mon, 29 May 2023 02:39:55 +0000
Received: by outflank-mailman (input) for mailman id 540528;
 Mon, 29 May 2023 02:39:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mgog=BS=shutemov.name=kirill@srs-se1.protection.inumbo.net>)
 id 1q3Sna-0000eO-9t
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 02:39:54 +0000
Received: from wout4-smtp.messagingengine.com (wout4-smtp.messagingengine.com
 [64.147.123.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 14eb82a5-fdca-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 04:39:51 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.west.internal (Postfix) with ESMTP id 68EBC320083A;
 Sun, 28 May 2023 22:39:45 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Sun, 28 May 2023 22:39:48 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 28 May 2023 22:39:42 -0400 (EDT)
Received: by box.shutemov.name (Postfix, from userid 1000)
 id DC398109530; Mon, 29 May 2023 05:39:39 +0300 (+03)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14eb82a5-fdca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov.name;
	 h=cc:cc:content-type:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1685327985; x=
	1685414385; bh=CkLoqHnVGlymW7d8k4EFDgQjQ/Ry/XUh7kKzLI6+vlE=; b=W
	7EccITdMZC0pSl/b1cfDKDyxZJGjSM57ZL2lxSNLgeqB4IerV8LFdI9zEG9Uv3G5
	XelSpBSDuEiMFswqvgC+uqMSU41P+0pHnc0CfVm+/om40YXV8TBgy4FQ3uAnJ0sd
	0leGMjl2uPikmG/RgzzVFUtdTyGqRV9PoTLcM/5fX1Q+VN3mISeO/SGfktGELjZZ
	NdxDajeL23HVze7H5M+ZON2UH8rVcjxVZPc4eeORlBj44HD0strS02gomytli9s7
	PBU0KbXi2fQ1wTum70GIBqTrZdwfVGFyVfYmU4ShoSq7JbcP7lXhTS7MnuWZ/MSP
	q530uy7ak3gazTiZQa03w==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1685327985; x=1685414385; bh=CkLoqHnVGlymW
	7d8k4EFDgQjQ/Ry/XUh7kKzLI6+vlE=; b=Ym18twRp4YDc99fHB93LquZJ0Z4zp
	embehcemE/kKIw1lZWuqx1KLQozamaAeoJM8goXmgoaZW3VB8Dgu+XxcQIKA7hdr
	ii4iswSCwWVizYzEQhscnjWq5UJr55hgxRG1klgsxQfGEpvQcFkllQ/IlrtZeuNd
	IkgtrkaFdbgTtMVHe/1BaJAZgG5u4VOq3VGMrtOS9mrpBphl1Sv9YkU1wdeQ//ZD
	7Iq1V0iH3LUN1FJ4cnVRQ7ifNB0s2ek3SVY9vbksOnEwoqh4M6UrCmTyI4hRZ4zX
	H05f0ian4GSUghIm00k8vxuIOxyQfYMTfS1T0c/t3HVr9UH3nfIjwNYjw==
X-ME-Sender: <xms:bhB0ZISkSebDoY7ePFrAHMDJbiQqzCSLtGQEC7GihGdfbUW05RzW4w>
    <xme:bhB0ZFzAz6AVefd2szJo-ws8Uh2hNyL_zT_y1XDQB_wTeqEm_eppvzBm55tfHAdwY
    NArb6jCfmWTz_8OPNI>
X-ME-Received: <xmr:bhB0ZF3ndzPOosUupfAXA2uJY_qxBkSmJjfdBatmCRw5_Peevlwyk7NCarF_NFDe7DIgoQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekgedgheelucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesthdttddttddtvdenucfhrhhomhepfdfmihhr
    ihhllhcutedrucfuhhhuthgvmhhovhdfuceokhhirhhilhhlsehshhhuthgvmhhovhdrnh
    grmhgvqeenucggtffrrghtthgvrhhnpefhieeghfdtfeehtdeftdehgfehuddtvdeuheet
    tddtheejueekjeegueeivdektdenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmh
    epmhgrihhlfhhrohhmpehkihhrihhllhesshhhuhhtvghmohhvrdhnrghmvg
X-ME-Proxy: <xmx:bhB0ZMB8itjXVeTPWF_K8Nmn0VUSqgIS-owqp_lG18bg5bU4EsSh7g>
    <xmx:bhB0ZBhiT1zEB1GcRZ1uvkLgyLjitMFL8pFJUil6ufu20rCWUASJ9g>
    <xmx:bhB0ZIrw9F7S8Wzi-Fm-LbfqbFWndpEyT87JM97uPNmv25HvJJw6bQ>
    <xmx:cRB0ZEAbi-FwJdt75RNpPlGa_l7-Cjrsterd_z2tGu_lje6NiEeMZQ>
Feedback-ID: ie3994620:Fastmail
Date: Mon, 29 May 2023 05:39:39 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,	Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,	Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
Message-ID: <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
 <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87sfbhlwp9.ffs@tglx>

On Sat, May 27, 2023 at 03:40:02PM +0200, Thomas Gleixner wrote:
> On Fri, May 26 2023 at 12:14, Thomas Gleixner wrote:
> > On Wed, May 24 2023 at 23:48, Kirill A. Shutemov wrote:
> >> This patch causes boot regression on TDX guest. The guest crashes on SMP
> >> bring up.
> 
> The below should fix that. Sigh...

Okay, this fixes the boot for TDX guests:

Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

But it gets broken again on "x86/smpboot: Implement a bit spinlock to
protect the realmode stack" with

[    0.554079] .... node  #0, CPUs:        #1  #2
[    0.738071] Callback from call_rcu_tasks() invoked.
[   10.562065] CPU2 failed to report alive state
[   10.566337]   #3
[   20.570066] CPU3 failed to report alive state
[   20.574268]   #4
...

Notably, CPU1 is missing from the "failed to report" list. So CPU1 takes
the lock fine, but seemingly never unlocks it.

Maybe trampoline_lock(%rip) in head_64.S is somehow not the same as
&tr_lock in trampoline_64.S. I don't know.

I haven't found the root cause yet, but bypassing the locking in
LOAD_REALMODE_ESP makes the issue go away.

I will look more into it.
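For reference, the bit-spinlock idea behind that patch is roughly the
following (a minimal C11 sketch with made-up names; the real lock lives
in x86 assembly in head_64.S/trampoline_64.S):

```c
#include <stdatomic.h>
#include <assert.h>

/* Illustrative bit spinlock: the low bit of a word acts as the lock,
 * leaving the remaining bits free for other data. These names are
 * hypothetical, not the actual kernel implementation. */

#define LOCK_BIT 0x1u

static void bit_spin_lock(atomic_uint *word)
{
    /* Spin until our fetch_or is the one that sets the lock bit. */
    while (atomic_fetch_or_explicit(word, LOCK_BIT,
                                    memory_order_acquire) & LOCK_BIT)
        ; /* busy-wait; real code would add a pause hint */
}

static void bit_spin_unlock(atomic_uint *word)
{
    /* Clear only the lock bit, preserving the data bits. If this
     * release never hits the same word the lock was taken on, every
     * later CPU spins in bit_spin_lock() forever. */
    atomic_fetch_and_explicit(word, ~LOCK_BIT, memory_order_release);
}
```

That would match the symptom above: CPU1's unlock targeting the wrong
address (or never running) leaves the bit set, and CPU2 onwards spin
until the 10s alive-state timeout fires.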

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Mon May 29 03:46:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 03:46:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540532.842350 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3TpI-0008Ge-P7; Mon, 29 May 2023 03:45:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540532.842350; Mon, 29 May 2023 03:45:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3TpI-0008GX-MF; Mon, 29 May 2023 03:45:44 +0000
Received: by outflank-mailman (input) for mailman id 540532;
 Mon, 29 May 2023 03:45:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3TpG-0008GK-QM; Mon, 29 May 2023 03:45:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3TpG-0001cS-Jr; Mon, 29 May 2023 03:45:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3TpG-0007ta-1Z; Mon, 29 May 2023 03:45:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3TpG-0007G4-0o; Mon, 29 May 2023 03:45:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4hA5eqgjmgr4woVLtlEMn/v34BYhMfR1RcOUJex29PI=; b=ruv1t81nKbbGj7834JbS2ok2dY
	M7xbsRtYkezqfVS67DPlFOQIj/5iYY8t1zH+Vv9G6fg8OkVJO8UqYQHYIgFU/a9SqJAvFovtmlbiG
	XXCwM5+zP8DQi1xijSl2jIYIQd5uYFEMAkYheNy9iKNARY1uLdevtH6lbh8uH/z/3dQ0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180991-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180991: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0f8323b44b20c982f303cc01ccf7146556bc3d4d
X-Osstest-Versions-That:
    ovmf=ba91d0292e593df8528b66f99c1b0b14fadc8e16
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 03:45:42 +0000

flight 180991 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180991/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0f8323b44b20c982f303cc01ccf7146556bc3d4d
baseline version:
 ovmf                 ba91d0292e593df8528b66f99c1b0b14fadc8e16

Last test of basis   180928  2023-05-24 13:10:44 Z    4 days
Testing same since   180991  2023-05-29 01:42:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ba91d0292e..0f8323b44b  0f8323b44b20c982f303cc01ccf7146556bc3d4d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 29 03:55:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 03:55:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540538.842361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Tyn-0001KT-Mv; Mon, 29 May 2023 03:55:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540538.842361; Mon, 29 May 2023 03:55:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3Tyn-0001KM-Hp; Mon, 29 May 2023 03:55:33 +0000
Received: by outflank-mailman (input) for mailman id 540538;
 Mon, 29 May 2023 03:55:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3Tym-0001KC-M0; Mon, 29 May 2023 03:55:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3Tym-0001m5-BM; Mon, 29 May 2023 03:55:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3Tyl-0000AH-T3; Mon, 29 May 2023 03:55:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3Tyl-0005xb-SS; Mon, 29 May 2023 03:55:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JgYx3wbq5FVXAetuGyWkTlyIL+eA89I+pSZvvHaUWhc=; b=CMQUhNKRm09Y4CTT5eQxPME22E
	o9ctWgtG+YocoJsHghZFyMUnzUOK2+S5uPwflGP5G+7VFr3yaQEYCbjCrHSxooi/Fqj7PYwVWopiG
	76dXl52jx8SO8zyxAWrYiywY1s65dqxl4O0wUa+k6kqzj2napvaZjmkQsgZu4dY1qrMA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180989-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180989: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-qcow2:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7877cb91f1081754a1487c144d85dc0d2e2e7fc4
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 03:55:31 +0000

flight 180989 linux-linus real [real]
flight 180993 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180989/
http://logs.test-lab.xenproject.org/osstest/logs/180993/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qcow2 19 guest-start/debian.repeat fail pass in 180993-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                7877cb91f1081754a1487c144d85dc0d2e2e7fc4
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   42 days
Failing since        180281  2023-04-17 06:24:36 Z   41 days   78 attempts
Testing same since   180989  2023-05-28 19:41:46 Z    0 days    1 attempts

------------------------------------------------------------
2551 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 322503 lines long.)
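The fixed-width "jobs:" table in these reports (job name, then a status of pass/fail/blocked) is simple to process mechanically. The sketch below is illustrative only, not part of osstest; `parse_jobs` is a hypothetical helper name.

```python
import re

def parse_jobs(report_text):
    """Return {job_name: status} for lines shaped like the jobs table."""
    jobs = {}
    for line in report_text.splitlines():
        # A job line is a single token followed by one of the three statuses,
        # padded with spaces (the reports pad each row to a fixed width).
        m = re.match(r"\s*([A-Za-z0-9_.-]+)\s+(pass|fail|blocked)\s*$", line)
        if m:
            jobs[m.group(1)] = m.group(2)
    return jobs

sample = """\
 build-amd64-xsm                                              pass    
 test-armhf-armhf-xl                                          fail    
 build-arm64-libvirt                                          blocked 
"""
print(parse_jobs(sample))
```

Filtering the returned dict for `"fail"` values gives a quick summary of which jobs need attention in a given flight.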


From xen-devel-bounces@lists.xenproject.org Mon May 29 06:05:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 06:05:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540546.842370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3W0B-0006T0-Ni; Mon, 29 May 2023 06:05:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540546.842370; Mon, 29 May 2023 06:05:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3W0B-0006St-KL; Mon, 29 May 2023 06:05:07 +0000
Received: by outflank-mailman (input) for mailman id 540546;
 Mon, 29 May 2023 06:05:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3W0A-0006Sj-2r; Mon, 29 May 2023 06:05:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3W09-0005EB-RA; Mon, 29 May 2023 06:05:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3W09-0007kc-DB; Mon, 29 May 2023 06:05:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3W09-0001iq-CN; Mon, 29 May 2023 06:05:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/WzEiwJeL+Tag3OuUoTBXTdkCOGBY+iB35veaqarqbs=; b=1u2u+x/MggVf/e8wi7XZ+tkz2T
	P2KD+dBg+1qan/PFrNg3wKoVBru8/6+Lyt5yvGGqwzptvKeLd+X79FwI6bzNDzTpPn0ZUEV+KiwR/
	0YFhet6QhYGQIBXD43Nu2QZvKMti2udTAPwb1mMYCOCMa4ep1Qp1W2VJdsybUrmpsT4s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180990-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180990: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ac84b57b4d74606f7f83667a0606deef32b2049d
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 06:05:05 +0000

flight 180990 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180990/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair     11 xen-install/dst_host fail in 180987 pass in 180990
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 180987 pass in 180990
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180977
 test-amd64-i386-pair         10 xen-install/src_host       fail pass in 180987
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180987

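The "failing intermittently" section above records steps that fail in one flight but pass in another of the same version (e.g. "fail in 180987 pass in 180990"); the X-Osstest-Failures header tags these "heisenbug" rather than "regression". A minimal sketch of that distinction, under the assumption that classification only looks at the set of outcomes across flights (`classify` is a hypothetical helper, not actual osstest code):

```python
def classify(results):
    """results: list of (flight, outcome) pairs for one test step.

    Consistent failure suggests a real regression; mixed pass/fail
    across flights of the same version suggests an intermittent
    ("heisenbug") failure.
    """
    outcomes = {outcome for _flight, outcome in results}
    if outcomes == {"fail"}:
        return "regression-candidate"
    if "fail" in outcomes and "pass" in outcomes:
        return "intermittent"
    return "pass"

print(classify([(180987, "fail"), (180990, "pass")]))  # intermittent
```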
Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                ac84b57b4d74606f7f83667a0606deef32b2049d
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   11 days
Failing since        180699  2023-05-18 07:21:24 Z   10 days   45 attempts
Testing same since   180972  2023-05-27 03:59:19 Z    2 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8318 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 29 06:05:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 06:05:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540550.842381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3W0y-0006vT-38; Mon, 29 May 2023 06:05:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540550.842381; Mon, 29 May 2023 06:05:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3W0x-0006vM-Um; Mon, 29 May 2023 06:05:55 +0000
Received: by outflank-mailman (input) for mailman id 540550;
 Mon, 29 May 2023 06:05:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3W0x-0006vA-54; Mon, 29 May 2023 06:05:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3W0x-0005Ea-47; Mon, 29 May 2023 06:05:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3W0w-0007ns-PA; Mon, 29 May 2023 06:05:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3W0w-0002to-Oe; Mon, 29 May 2023 06:05:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ylr7UD94GA/kEkG+Vgu7Kr0Z5rUcxARD/g6TLjITZ9c=; b=Kw76xx4fOjoXl6rfwn9YzUK48Q
	YTUWoM8DdoSwFOV9wWaCPpTECA9sBublW5B0toggUQ31ArdWvdaE6qq/u78IBHYFLgpOP1JpUl4Ct
	wLqJRcZWcAlIyR/5wMlBYaigIjrI3tPIiWxTT74eUu66kvdKDgkykkUG4w3bB4VrqwD0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180994-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180994: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5258c4186f3a41c14a1d4cd40217810882ccd222
X-Osstest-Versions-That:
    ovmf=0f8323b44b20c982f303cc01ccf7146556bc3d4d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 06:05:54 +0000

flight 180994 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180994/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 ovmf                 5258c4186f3a41c14a1d4cd40217810882ccd222
baseline version:
 ovmf                 0f8323b44b20c982f303cc01ccf7146556bc3d4d

Last test of basis   180991  2023-05-29 01:42:18 Z    0 days
Testing same since   180994  2023-05-29 03:46:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gua Guo <gua.guo@intel.com>
  Nickle Wang <nicklew@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0f8323b44b..5258c4186f  5258c4186f3a41c14a1d4cd40217810882ccd222 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 29 08:09:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 08:09:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540564.842390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3XwA-0002nW-EQ; Mon, 29 May 2023 08:09:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540564.842390; Mon, 29 May 2023 08:09:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3XwA-0002nP-Aq; Mon, 29 May 2023 08:09:06 +0000
Received: by outflank-mailman (input) for mailman id 540564;
 Mon, 29 May 2023 08:09:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FxNl=BS=citrix.com=prvs=506ffa617=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q3Xw8-0002nJ-BK
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 08:09:04 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 112ccb2a-fdf8-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 10:09:01 +0200 (CEST)
Received: from mail-bn7nam10lp2101.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.101])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 May 2023 04:08:56 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH8PR03MB7270.namprd03.prod.outlook.com (2603:10b6:510:253::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.28; Mon, 29 May
 2023 08:08:51 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.020; Mon, 29 May 2023
 08:08:50 +0000
Date: Mon, 29 May 2023 10:08:44 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Message-ID: <ZHRdjCKSVtWVkX96@Air-de-Roger>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4zx+TvUWTCEMh3@Air-de-Roger>
 <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
 <ZG94c9y4j4udFmsy@Air-de-Roger>
 <cedbc257-9ad9-56f1-5060-eaf173d45760@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cedbc257-9ad9-56f1-5060-eaf173d45760@suse.com>
MIME-Version: 1.0

On Thu, May 25, 2023 at 05:30:54PM +0200, Jan Beulich wrote:
> On 25.05.2023 17:02, Roger Pau Monné wrote:
> > On Thu, May 25, 2023 at 04:39:51PM +0200, Jan Beulich wrote:
> >> On 24.05.2023 17:56, Roger Pau Monné wrote:
> >>> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
> >>>> --- a/xen/drivers/vpci/header.c
> >>>> +++ b/xen/drivers/vpci/header.c
> >>>> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
> >>>>      struct vpci_header *header = &pdev->vpci->header;
> >>>>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
> >>>>      struct pci_dev *tmp, *dev = NULL;
> >>>> +    const struct domain *d;
> >>>>      const struct vpci_msix *msix = pdev->vpci->msix;
> >>>>      unsigned int i;
> >>>>      int rc;
> >>>> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
> >>>>  
> >>>>      /*
> >>>>       * Check for overlaps with other BARs. Note that only BARs that are
> >>>> -     * currently mapped (enabled) are checked for overlaps.
> >>>> +     * currently mapped (enabled) are checked for overlaps. Note also that
> >>>> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
> >>>>       */
> >>>> -    for_each_pdev ( pdev->domain, tmp )
> >>>> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
> >>>
> >>> Looking at this again, I think this is slightly more complex, as during
> >>> runtime dom0 will get here with pdev->domain == hardware_domain OR
> >>> dom_xen, and hence you also need to account that devices that have
> >>> pdev->domain == dom_xen need to iterate over devices that belong to
> >>> the hardware_domain, ie:
> >>>
> >>> for ( d = pdev->domain; ;
> >>>       d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )
> >>
> >> Right, something along these lines. To keep loop continuation expression
> >> and exit condition simple, I'll probably prefer
> >>
> >> for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
> >>       ; d = dom_xen )
> > 
> > LGTM.  I would add parentheses around the pdev->domain != dom_xen
> > condition, but that's just my personal taste.
> > 
> > We might want to add an
> > 
> > ASSERT(pdev->domain == hardware_domain || pdev->domain == dom_xen);
> > 
> > here, just to remind that this chunk must be revisited when adding
> > domU support (but you can also argue we haven't done this elsewhere),
> > I just feel here it's not so obvious we don't want to do this for
> > domUs.
> 
> I could add such an assertion, if only ...
> 
> >>> And we likely want to limit this to devices that belong to the
> >>> hardware_domain or to dom_xen (in preparation for vPCI being used for
> >>> domUs).
> >>
> >> I'm afraid I don't understand this remark, though.
> > 
> > This was looking forward to domU support, so that you already cater
> > for pdev->domain not being hardware_domain or dom_xen, but we might
> > want to leave that for later, when domU support is actually
> > introduced.
> 
> ... I understood why this checking doesn't apply to DomU-s as well,
> in your opinion.

It's my understanding that domUs can never get hidden or read-only
devices assigned, and hence there is no need to check for overlaps with
devices assigned to dom_xen, as those cannot have any BARs mapped in a
domU's physmap.

So for domUs the overlap check only needs to be performed against
devices assigned to pdev->domain.
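As a standalone illustration of the control flow being converged on above, the sketch below models which device lists the overlap check would walk. All names here (struct domain, domains_checked(), the hwdom/domxen/domu variables) are simplified stand-ins for illustration, not Xen's real types or helpers.

```c
#include <assert.h>

struct domain { const char *name; };

static struct domain hwdom  = { "hardware_domain" };
static struct domain domxen = { "dom_xen" };
static struct domain domu   = { "domU" };

/* Record which domains' device lists would be scanned for a device
 * owned by 'owner', following the loop shape from the thread. */
static int domains_checked(const struct domain *owner,
                           const struct domain *visited[2])
{
    int n = 0;
    const struct domain *d;

    if ( owner != &hwdom && owner != &domxen )
    {
        /* domU case: only the device's own domain needs checking. */
        visited[n++] = owner;
        return n;
    }

    /* Dom0/dom_xen case: start from hwdom, then continue with dom_xen. */
    for ( d = (owner != &domxen) ? owner : &hwdom; ; d = &domxen )
    {
        visited[n++] = d;
        /* ... overlap check against each pdev of d would run here ... */
        if ( d == &domxen )
            break;
    }
    return n;
}
```

With this shape, a device owned by either the hardware domain or dom_xen visits hwdom first and dom_xen second, while any other owner visits only its own domain, matching the behaviour described above.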

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 29 08:39:37 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 08:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540572.842400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3YPb-0006MG-Tt; Mon, 29 May 2023 08:39:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540572.842400; Mon, 29 May 2023 08:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3YPb-0006M9-Pk; Mon, 29 May 2023 08:39:31 +0000
Received: by outflank-mailman (input) for mailman id 540572;
 Mon, 29 May 2023 08:39:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FxNl=BS=citrix.com=prvs=506ffa617=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q3YPZ-0006M3-Su
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 08:39:29 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51b17eed-fdfc-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 10:39:27 +0200 (CEST)
Received: from mail-dm3nam02lp2044.outbound.protection.outlook.com (HELO
 NAM02-DM3-obe.outbound.protection.outlook.com) ([104.47.56.44])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 May 2023 04:39:25 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BY1PR03MB7190.namprd03.prod.outlook.com (2603:10b6:a03:52d::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22; Mon, 29 May
 2023 08:39:20 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.020; Mon, 29 May 2023
 08:39:20 +0000
Date: Mon, 29 May 2023 10:39:14 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/vPIC: register only one ELCR handler instance
Message-ID: <ZHRkstB6UKWAadVZ@Air-de-Roger>
References: <5567b45d-d8ee-7f43-526f-7f601c6ddd46@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <5567b45d-d8ee-7f43-526f-7f601c6ddd46@suse.com>
MIME-Version: 1.0

On Fri, May 26, 2023 at 09:35:04AM +0200, Jan Beulich wrote:
> There's no point consuming two port-I/O slots. Even less so considering
> that some real hardware permits both ports to be accessed in one go,
> emulation of which requires there to be only a single instance.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/hvm/vpic.c
> +++ b/xen/arch/x86/hvm/vpic.c
> @@ -377,25 +377,34 @@ static int cf_check vpic_intercept_elcr_
>      int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>  {
>      struct hvm_hw_vpic *vpic;
> -    uint32_t data;
> +    unsigned int data, shift = 0;
>  
> -    BUG_ON(bytes != 1);
> +    BUG_ON(bytes > 2 - (port & 1));
>  
>      vpic = &current->domain->arch.hvm.vpic[port & 1];
>  
> -    if ( dir == IOREQ_WRITE )
> -    {
> -        /* Some IRs are always edge trig. Slave IR is always level trig. */
> -        data = *val & vpic_elcr_mask(vpic);
> -        if ( vpic->is_master )
> -            data |= 1 << 2;
> -        vpic->elcr = data;
> -    }
> -    else
> -    {
> -        /* Reader should not see hardcoded level-triggered slave IR. */
> -        *val = vpic->elcr & vpic_elcr_mask(vpic);
> -    }
> +    do {
> +        if ( dir == IOREQ_WRITE )
> +        {
> +            /* Some IRs are always edge trig. Slave IR is always level trig. */
> +            data = (*val >> shift) & vpic_elcr_mask(vpic);
> +            if ( vpic->is_master )
> +                data |= 1 << 2;

Not that you added this, but I'm confused.  The spec I'm reading
explicitly states that bits 0:2 are reserved and must be 0.

Is this some quirk of the specific chipset we aim to emulate?
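Modelled in isolation, the masking behaviour of the quoted hunk looks like the sketch below. The 0xf8/0xde masks are the conventional PIIX-style edge-only masks and stand in for vpic_elcr_mask(); elcr_write()/elcr_read() are hypothetical helper names for illustration, not Xen functions.

```c
#include <assert.h>
#include <stdint.h>

/* Edge-only IRs a guest may not flip to level-triggered: IRQ0-2 on the
 * master ELCR, IRQ8 and IRQ13 (bits 0 and 5) on the slave.  These are
 * illustrative stand-ins for vpic_elcr_mask(). */
#define ELCR_MASK_MASTER 0xf8
#define ELCR_MASK_SLAVE  0xde

/* Value stored on a guest write: writable bits kept, and (per the quoted
 * code) the master's cascade bit 2 forced to level-triggered. */
static uint8_t elcr_write(uint8_t val, int is_master)
{
    uint8_t data = val & (is_master ? ELCR_MASK_MASTER : ELCR_MASK_SLAVE);

    if ( is_master )
        data |= 1 << 2;
    return data;
}

/* Value a guest read returns: the hardcoded bit is hidden again. */
static uint8_t elcr_read(uint8_t stored, int is_master)
{
    return stored & (is_master ? ELCR_MASK_MASTER : ELCR_MASK_SLAVE);
}
```

So the hardcoded cascade bit is present in the stored value but never visible to the guest, which is why the question about the reserved low bits matters for the write side only.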

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 29 08:51:07 2023
Date: Mon, 29 May 2023 10:50:34 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Julia Suvorova <jusual@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Philippe Mathieu-Daudé <philmd@linaro.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, Hanna Reitz <hreitz@redhat.com>, 
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>, xen-devel@lists.xenproject.org, 
	eblake@redhat.com, Anthony Perard <anthony.perard@citrix.com>, 
	qemu-block@nongnu.org
Subject: Re: [PATCH v2 5/6] block/linux-aio: convert to blk_io_plug_call() API
Message-ID: <63lutuyufibun4jscbjjlshbqqw6otetzfi67rfnfrxacwutnj@igewwxh4uwys>
References: <20230523171300.132347-1-stefanha@redhat.com>
 <20230523171300.132347-6-stefanha@redhat.com>
 <n6hik7dbl26lomhxvfal2kjrq6jhdiknjepb372dvxavuwiw6q@3l3mo4eywoxq>
 <20230524193634.GB17357@fedora>
MIME-Version: 1.0
In-Reply-To: <20230524193634.GB17357@fedora>

On Wed, May 24, 2023 at 03:36:34PM -0400, Stefan Hajnoczi wrote:
>On Wed, May 24, 2023 at 10:52:03AM +0200, Stefano Garzarella wrote:
>> On Tue, May 23, 2023 at 01:12:59PM -0400, Stefan Hajnoczi wrote:
>> > Stop using the .bdrv_co_io_plug() API because it is not multi-queue
>> > block layer friendly. Use the new blk_io_plug_call() API to batch I/O
>> > submission instead.
>> >
>> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> > Reviewed-by: Eric Blake <eblake@redhat.com>
>> > ---
>> > include/block/raw-aio.h |  7 -------
>> > block/file-posix.c      | 28 ----------------------------
>> > block/linux-aio.c       | 41 +++++++++++------------------------------
>> > 3 files changed, 11 insertions(+), 65 deletions(-)
>> >
>> > diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
>> > index da60ca13ef..0f63c2800c 100644
>> > --- a/include/block/raw-aio.h
>> > +++ b/include/block/raw-aio.h
>> > @@ -62,13 +62,6 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
>> >
>> > void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
>> > void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
>> > -
>> > -/*
>> > - * laio_io_plug/unplug work in the thread's current AioContext, therefore the
>> > - * caller must ensure that they are paired in the same IOThread.
>> > - */
>> > -void laio_io_plug(void);
>> > -void laio_io_unplug(uint64_t dev_max_batch);
>> > #endif
>> > /* io_uring.c - Linux io_uring implementation */
>> > #ifdef CONFIG_LINUX_IO_URING
>> > diff --git a/block/file-posix.c b/block/file-posix.c
>> > index 7baa8491dd..ac1ed54811 100644
>> > --- a/block/file-posix.c
>> > +++ b/block/file-posix.c
>> > @@ -2550,26 +2550,6 @@ static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
>> >     return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
>> > }
>> >
>> > -static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
>> > -{
>> > -    BDRVRawState __attribute__((unused)) *s = bs->opaque;
>> > -#ifdef CONFIG_LINUX_AIO
>> > -    if (s->use_linux_aio) {
>> > -        laio_io_plug();
>> > -    }
>> > -#endif
>> > -}
>> > -
>> > -static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
>> > -{
>> > -    BDRVRawState __attribute__((unused)) *s = bs->opaque;
>> > -#ifdef CONFIG_LINUX_AIO
>> > -    if (s->use_linux_aio) {
>> > -        laio_io_unplug(s->aio_max_batch);
>> > -    }
>> > -#endif
>> > -}
>> > -
>> > static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
>> > {
>> >     BDRVRawState *s = bs->opaque;
>> > @@ -3914,8 +3894,6 @@ BlockDriver bdrv_file = {
>> >     .bdrv_co_copy_range_from = raw_co_copy_range_from,
>> >     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
>> >     .bdrv_refresh_limits = raw_refresh_limits,
>> > -    .bdrv_co_io_plug        = raw_co_io_plug,
>> > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
>> >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
>> >
>> >     .bdrv_co_truncate                   = raw_co_truncate,
>> > @@ -4286,8 +4264,6 @@ static BlockDriver bdrv_host_device = {
>> >     .bdrv_co_copy_range_from = raw_co_copy_range_from,
>> >     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
>> >     .bdrv_refresh_limits = raw_refresh_limits,
>> > -    .bdrv_co_io_plug        = raw_co_io_plug,
>> > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
>> >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
>> >
>> >     .bdrv_co_truncate                   = raw_co_truncate,
>> > @@ -4424,8 +4400,6 @@ static BlockDriver bdrv_host_cdrom = {
>> >     .bdrv_co_pwritev        = raw_co_pwritev,
>> >     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
>> >     .bdrv_refresh_limits    = cdrom_refresh_limits,
>> > -    .bdrv_co_io_plug        = raw_co_io_plug,
>> > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
>> >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
>> >
>> >     .bdrv_co_truncate                   = raw_co_truncate,
>> > @@ -4552,8 +4526,6 @@ static BlockDriver bdrv_host_cdrom = {
>> >     .bdrv_co_pwritev        = raw_co_pwritev,
>> >     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
>> >     .bdrv_refresh_limits    = cdrom_refresh_limits,
>> > -    .bdrv_co_io_plug        = raw_co_io_plug,
>> > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
>> >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
>> >
>> >     .bdrv_co_truncate                   = raw_co_truncate,
>> > diff --git a/block/linux-aio.c b/block/linux-aio.c
>> > index 442c86209b..5021aed68f 100644
>> > --- a/block/linux-aio.c
>> > +++ b/block/linux-aio.c
>> > @@ -15,6 +15,7 @@
>> > #include "qemu/event_notifier.h"
>> > #include "qemu/coroutine.h"
>> > #include "qapi/error.h"
>> > +#include "sysemu/block-backend.h"
>> >
>> > /* Only used for assertions.  */
>> > #include "qemu/coroutine_int.h"
>> > @@ -46,7 +47,6 @@ struct qemu_laiocb {
>> > };
>> >
>> > typedef struct {
>> > -    int plugged;
>> >     unsigned int in_queue;
>> >     unsigned int in_flight;
>> >     bool blocked;
>> > @@ -236,7 +236,7 @@ static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
>> > {
>> >     qemu_laio_process_completions(s);
>> >
>> > -    if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
>> > +    if (!QSIMPLEQ_EMPTY(&s->io_q.pending)) {
>> >         ioq_submit(s);
>> >     }
>> > }
>> > @@ -277,7 +277,6 @@ static void qemu_laio_poll_ready(EventNotifier *opaque)
>> > static void ioq_init(LaioQueue *io_q)
>> > {
>> >     QSIMPLEQ_INIT(&io_q->pending);
>> > -    io_q->plugged = 0;
>> >     io_q->in_queue = 0;
>> >     io_q->in_flight = 0;
>> >     io_q->blocked = false;
>> > @@ -354,31 +353,11 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
>> >     return max_batch;
>> > }
>> >
>> > -void laio_io_plug(void)
>> > +static void laio_unplug_fn(void *opaque)
>> > {
>> > -    AioContext *ctx = qemu_get_current_aio_context();
>> > -    LinuxAioState *s = aio_get_linux_aio(ctx);
>> > +    LinuxAioState *s = opaque;
>> >
>> > -    s->io_q.plugged++;
>> > -}
>> > -
>> > -void laio_io_unplug(uint64_t dev_max_batch)
>> > -{
>> > -    AioContext *ctx = qemu_get_current_aio_context();
>> > -    LinuxAioState *s = aio_get_linux_aio(ctx);
>> > -
>> > -    assert(s->io_q.plugged);
>> > -    s->io_q.plugged--;
>> > -
>> > -    /*
>> > -     * Why max batch checking is performed here:
>> > -     * Another BDS may have queued requests with a higher dev_max_batch and
>> > -     * therefore in_queue could now exceed our dev_max_batch. Re-check the max
>> > -     * batch so we can honor our device's dev_max_batch.
>> > -     */
>> > -    if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch) ||
>>
>> Why are we removing this condition?
>> Could the same situation occur with the new API?
>
>The semantics of unplug_fn() are different from .bdrv_co_unplug():
>1. unplug_fn() is only called when the last blk_io_unplug() call occurs,
>   not every time blk_io_unplug() is called.
>2. unplug_fn() is per-thread, not per-BlockDriverState, so there is no
>   way to get per-BlockDriverState fields like dev_max_batch.
>
>Therefore this condition cannot be moved to laio_unplug_fn().

I see now.
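The counting semantics Stefan describes can be sketched like this (illustrative C, not the actual QEMU blk_io_plug_call() implementation; the names are made up, and the counter is per-thread in the real API):

```c
#include <assert.h>

static unsigned int plug_count; /* thread-local in the real API */
static unsigned int flushes;    /* times the deferred unplug_fn ran */

static void unplug_fn(void *opaque)
{
    flushes++;                  /* batched submission would happen here */
}

static void io_plug(void)
{
    plug_count++;               /* nesting is allowed */
}

static void io_unplug(void)
{
    assert(plug_count > 0);
    if (--plug_count == 0)
        unplug_fn(0);           /* only the outermost unplug triggers it */
}
```

Since unplug_fn() fires once per thread at the outermost unplug, there is indeed no BlockDriverState in scope from which to read dev_max_batch.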

>
>How important is this condition? I believe that dropping it does not
>have much of an effect but maybe I missed something.

Kevin and I agreed to add it to avoid extra latency on some devices,
but we didn't do much testing on this.

IIRC what solved the performance degradation was the check in
laio_do_submit() that we still have after these changes.

So it may not have much effect, but maybe it's worth mentioning in
the commit description.

>
>Also, does it make sense to define per-BlockDriverState batching limits
>when the AIO engine (Linux AIO or io_uring) is thread-local and shared
>between all BlockDriverStates? I believe the fundamental reason (that we
>discovered later) why dev_max_batch is effective is because the Linux
>kernel processes 32 I/O request submissions at a time. Anything above 32
>adds latency without a batching benefit.

This is a good point, though we should probably confirm it with some tests.
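If the 32-request behaviour holds, the effective limit reduces to a simple clamp, roughly like this (illustrative C mirroring the shape of laio_max_batch(); the constant and names are assumptions, not measured values):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed granularity at which the kernel processes io_submit() entries. */
#define KERNEL_SUBMIT_CHUNK 32u

/* 0 means "no limit configured" at either level. */
static uint64_t max_batch(uint64_t ctx_max_batch, uint64_t dev_max_batch)
{
    uint64_t batch = ctx_max_batch ? ctx_max_batch : KERNEL_SUBMIT_CHUNK;

    /* A per-device limit can only tighten, never loosen, the batch. */
    if (dev_max_batch && dev_max_batch < batch)
        batch = dev_max_batch;

    return batch;
}
```

Under that model, any configured value above the kernel's chunk size only delays submission without improving batching, which would argue for a single thread-local limit rather than per-BlockDriverState ones.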

Thanks,
Stefano



From xen-devel-bounces@lists.xenproject.org Mon May 29 08:53:18 2023
Date: Mon, 29 May 2023 10:52:52 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2] vpci/header: cope with devices not having vpci
 allocated
Message-ID: <ZHRn5DxPzP6sqv19@Air-de-Roger>
References: <20230525145405.35348-1-roger.pau@citrix.com>
 <56cb3727-5905-1a7a-ceed-cf110bdddad4@suse.com>
In-Reply-To: <56cb3727-5905-1a7a-ceed-cf110bdddad4@suse.com>

On Thu, May 25, 2023 at 05:22:53PM +0200, Jan Beulich wrote:
> On 25.05.2023 16:54, Roger Pau Monne wrote:
> > When traversing the list of pci devices assigned to a domain cope with
> > some of them not having the vpci struct allocated. It should be
> > possible for the hardware domain to have read-only devices assigned
> > that are not handled by vPCI, such support will be added by further
> > patches.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> I was wondering whether to put an assertion in (permitting both DomXEN
> and hwdom, i.e. slightly more relaxed than I had it).

I also wondered about that, but didn't think it was very helpful; that
was back when I had the comment about how I envisioned this working
for domUs.

Let me know if you want me to add it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 29 09:42:20 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180992-mainreport@xen.org>
Subject: [xen-unstable test] 180992: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-i386-examine-bios:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
X-Osstest-Versions-That:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 09:42:05 +0000

flight 180992 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180992/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 18 guest-localmigrate/x10 fail in 180982 pass in 180992
 test-amd64-i386-examine-bios  6 xen-install                fail pass in 180982
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 180982
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 180982

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180982 like 180976
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 180976
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180982
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180982
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180982
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180982
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180982
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180982
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180982
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180982
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180982
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180982
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180982
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180982
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
baseline version:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc

Last test of basis   180992  2023-05-29 01:52:05 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 29 10:23:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 10:23:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540595.842440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3a1m-0002Rr-02; Mon, 29 May 2023 10:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540595.842440; Mon, 29 May 2023 10:23:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3a1l-0002Rk-TO; Mon, 29 May 2023 10:23:01 +0000
Received: by outflank-mailman (input) for mailman id 540595;
 Mon, 29 May 2023 10:23:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3a1j-0002RH-Uj; Mon, 29 May 2023 10:22:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3a1j-0003Es-M7; Mon, 29 May 2023 10:22:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3a1j-000625-3a; Mon, 29 May 2023 10:22:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3a1j-00064Q-34; Mon, 29 May 2023 10:22:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/GI3aawWKEfVfsQQtH9wT/NGtqc+dZw/Z/z8cHtgBu4=; b=dIYEvvvfysd7vXj6T+bzeqtnSh
	MyZZwZaBoRFnaDlOXs6lii3JoP1oACcoywPQPNjm5w2c+V9QVSndMxqK1E79oYl3u3RrODCJob0j3
	w2fhkuWvwhnLcYuCcRmHGoy8C+GTOXhuBHuFnsFy8zzfXw/zU0wVv9eJCRIIdXnlIN0U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180997-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180997: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e1f5c6249af08c1df2c6257e4bb6abbf6134318c
X-Osstest-Versions-That:
    ovmf=5258c4186f3a41c14a1d4cd40217810882ccd222
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 10:22:59 +0000

flight 180997 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180997/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e1f5c6249af08c1df2c6257e4bb6abbf6134318c
baseline version:
 ovmf                 5258c4186f3a41c14a1d4cd40217810882ccd222

Last test of basis   180994  2023-05-29 03:46:14 Z    0 days
Testing same since   180997  2023-05-29 06:11:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Giri Mudusuru <girim@apple.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5258c4186f..e1f5c6249a  e1f5c6249af08c1df2c6257e4bb6abbf6134318c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 29 12:13:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 12:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540609.842510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bki-0006iB-S0; Mon, 29 May 2023 12:13:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540609.842510; Mon, 29 May 2023 12:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bki-0006hQ-Mi; Mon, 29 May 2023 12:13:32 +0000
Received: by outflank-mailman (input) for mailman id 540609;
 Mon, 29 May 2023 12:13:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yi1m=BS=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q3bkh-0005DI-6A
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 12:13:31 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 388e07b9-fe1a-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 14:13:29 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2af2451b3f1so33742581fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 05:13:29 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 z14-20020a2e964e000000b002ac87c15fd4sm2427762ljh.95.2023.05.29.05.13.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 05:13:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 388e07b9-fe1a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685362408; x=1687954408;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0YD5229mDIwkWy00dCe850+V9AEq/Mk9/bgvE344cjU=;
        b=TpzCdOknut6VDyJFWFVTwZjazt6GcrLBoZXZ77wvjfK4aVJ5kE1OhaT/+bJRKu25lt
         bdw4Nb3uqgPY83iObryh3p/gGP4Os9mAGLnnl5ervnCbr6/qJRUG1t/N4yc8itMRwU4Q
         Lj7wRdGPYIG1TBcvsjxpiZz1Z5WZ/SLmVfiiBtgEls4/1P3+aMC3fTp9bRw28egIj0Jz
         9V5iEcUY8G1zFevbcjIOqg98oPWPndMJwFpb0wNe+Mchw1ozKddBN4xW+3T25DkjaWkT
         s7C3vlqEE3n5b9YpR85vs4yrfFLXk39ReBBckDHC51g05h3khQjs6sB1mrr9BFhte+Ut
         PIgw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685362408; x=1687954408;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=0YD5229mDIwkWy00dCe850+V9AEq/Mk9/bgvE344cjU=;
        b=dm7sjlYwyF25vm7D/+SlDze8+NkcSGpZNNwZXxlWnx5USuA7gaDfS9dCrxHzArw4CJ
         3qlbz4rCNoPbibzaRRyOswgnzr9VomVzj1Ft/qAB6cnBO25nUNzq4NEmLSnd1RALu8aW
         gBvmSMLPb5xYqRo2Kpfu4r1+vLppb62WB51W/ZWRGWfyeGYxW4gOP3RWHgQDcU/+2o9k
         kiyB60B7Aq7cf/bHjN6l+0wzzkOQ9b6qJtQUKdBbAO2CUxB2QLi19RsrMkh0lgkVinor
         2qQJsRA8DCtD+CGSRmHvNVpTjn2C88sda37bHFCiHZ8q+b3zu7Yg2HAsjHlbj7aVQbd6
         h3Kg==
X-Gm-Message-State: AC+VfDyocQaQ8DLuFWSIusjSK0PjD6XUQ8pyZllhC0lfTAlvx9bYkua7
	hYOL/S40GvPbBeqlF3/q1jqqT2KdIhE=
X-Google-Smtp-Source: ACHHUZ7qWlQ0izumLBrxWGN4a1dMw2W/W9ZTX1VBQsZPtZd6SyEXM+BC9wFxToGEYSGFFfxny5MvVA==
X-Received: by 2002:a2e:2416:0:b0:2a8:ad36:f8ca with SMTP id k22-20020a2e2416000000b002a8ad36f8camr4249022ljk.14.1685362408351;
        Mon, 29 May 2023 05:13:28 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 6/6] xen/riscv: test basic handling stuff
Date: Mon, 29 May 2023 15:13:19 +0300
Message-Id: <90e4f333ded1a18e5c0e0b389580ed5d25e0272a.1685359848.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685359848.git.oleksii.kurochko@gmail.com>
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V6:
  - Nothing changed
---
Changes in V5:
  - Nothing changed
---
Changes in V4:
  - Add Acked-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V3:
  - Nothing changed
---
Changes in V2:
  - Nothing changed
---
 xen/arch/riscv/setup.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 1cae0e5ccc..481d88249d 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,3 +1,4 @@
+#include <xen/bug.h>
 #include <xen/compile.h>
 #include <xen/init.h>
 
@@ -9,6 +10,20 @@
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
 
+static void test_run_in_exception(struct cpu_user_regs *regs)
+{
+    early_printk("If you see this message, ");
+    early_printk("run_in_exception_handler is most likely working\n");
+}
+
+static void test_macros_from_bug_h(void)
+{
+    run_in_exception_handler(test_run_in_exception);
+    WARN();
+    early_printk("If you see this message, ");
+    early_printk("WARN is most likely working\n");
+}
+
 void __init noreturn start_xen(unsigned long bootcpu_id,
                                paddr_t dtb_addr)
 {
@@ -28,6 +43,8 @@ void __init noreturn cont_after_mmu_is_enabled(void)
 {
     trap_init();
 
+    test_macros_from_bug_h();
+
     early_printk("All set up\n");
 
     for ( ;; )
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 12:13:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 12:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540604.842459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkf-0005SU-3e; Mon, 29 May 2023 12:13:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540604.842459; Mon, 29 May 2023 12:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkf-0005SN-0w; Mon, 29 May 2023 12:13:29 +0000
Received: by outflank-mailman (input) for mailman id 540604;
 Mon, 29 May 2023 12:13:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yi1m=BS=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q3bkd-0005DI-Uh
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 12:13:27 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 357cbe60-fe1a-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 14:13:23 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2afb2874e83so34087091fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 05:13:23 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 z14-20020a2e964e000000b002ac87c15fd4sm2427762ljh.95.2023.05.29.05.13.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 05:13:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 357cbe60-fe1a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685362403; x=1687954403;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=2wS4sHj5fpuxy/8ZdBzZWymM7lvOzvF/cgGKwz35zSs=;
        b=oFhjRCZKzrYwF+owkbb3RKWL6/jm79aylv+J2iwZ2QOMkVysM58yvLz7OifkhRv6LI
         nMvskOlsnJnV+oK4UR0HMdOo7pczmnSCt7GC2fNrcnII+Z3lACFOiEJjqBrJeYFnZIo5
         2miIrnv1V5yLrx+ttlgqpCh710HwaZsDSNrobvR9osjpiUPWlXtLeQ+0ndVDBNBGysuu
         fh/5T6vAAaNTH0G8PqN82doOTH2U6FrW/WsUAxYtOvDA8xi1sAISDK8PoRoWQFUy+ZDN
         5nr0gchXb68mRd8UqpNShytU5hG4xo0OlAa6WzgYgx7YzcY5CJDaRebbEp4+/tMhNv6I
         oLFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685362403; x=1687954403;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=2wS4sHj5fpuxy/8ZdBzZWymM7lvOzvF/cgGKwz35zSs=;
        b=Sx8rwMwkYWsuLeiNEB6GwxsOk0NyUXwzWQH9HZh45rP16q0MsSLy2Qx+kDIm48aEtx
         BQfUlf8QBjY8XRWEs6Wjn+1NirKcyov0bl1PVHWe+xKxRkbYv58eQPjBrArK5DBceDZ4
         AyPD+fUQAl0QvOu+G+3UJs+TPHBYubdiJBXqqnsWdIGanTVklZMA09P7Zm+zLflrlkzI
         zHhJj+e8WjOgjXLqiJwQozX8IDfWFkmJr7t5+ri39F/+7jrFYbPYeaXXbt19UKdm7RUQ
         Lek0EmQAlbOsJYVyqMIHdsasARK3pol0E/nlYWN+QVpnEr/ShcO796jx8b0kszOgUyw8
         rqeA==
X-Gm-Message-State: AC+VfDwhjr9qe4KwuEVIFbaNbgIHNfeCs6y3axRCRD45KToqUHZVbk33
	CWOdvsdtfgTSajtXKxFF6KFFRvRXOY4=
X-Google-Smtp-Source: ACHHUZ6g1/rL6bvLFZMs6yUzD4ZO31HbVGegA9JXfMzmJTfDMwfha26Jy6fwh3bmxnXm5MeZxT6JJg==
X-Received: by 2002:a2e:9049:0:b0:2ac:7852:4096 with SMTP id n9-20020a2e9049000000b002ac78524096mr3709634ljg.24.1685362402851;
        Mon, 29 May 2023 05:13:22 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 0/6] RISC-V basic exception handling implementation
Date: Mon, 29 May 2023 15:13:13 +0300
Message-Id: <cover.1685359848.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch series is based on:
    enable MMU for RISC-V [1]
which hasn't been committed yet.

The patch series provides a basic implementation of exception handling.
It can only do basic things such as decode the cause of an exception,
save/restore registers, and execute the "wfi" instruction if an exception
cannot be handled.

To verify that exception handling works, the macros from <asm/bug.h>
such as BUG/WARN/run_in_exception/assert_failed were implemented.
The implementation of the macros uses the "ebreak" instruction and sets
up bug frame tables for each type of macro.
Register save/restore was also implemented so that execution can return
and continue after WARN/run_in_exception.
Not all functionality of the macros was implemented, as some of it
requires hard-panicking the system, which is not available now. Instead
of a hard panic the 'wfi' instruction is used, but this should definitely
be changed in the near future.
show_execution_state() and stack trace discovery were not implemented
as they aren't necessary now.

[1] https://lore.kernel.org/xen-devel/cover.1685027257.git.oleksii.kurochko@gmail.com/T/#t

---
Changes in V6:
 - Update the cover letter message: the patch set is based on MMU patch series.
 - Introduce a new patch with temporary printk functionality. (It will be
   removed when the Xen common code is ready.)
 - Change early_printk() to printk().
 - Remove usage of LINK_TO_LOAD() due to the MMU being enabled first.
 - Add additional explanatory comments.
 - Remove patch "xen/riscv: initialize boot_info structure" from the patch
   series.
---
Changes in V5:
 - Rebase on top of [1] and [2]
 - Add a new patch which introduces a stub for <asm/bug.h> to keep Xen
   compilable, as the patch [xen/riscv: introduce decode_cause() stuff] uses
   the header <xen/lib.h> which requires <asm/bug.h>.
 - Remove <xen/error.h> from riscv/traps.c as nothing requires its
   inclusion.
 - decode_reserved_interrupt_cause(), decode_interrupt_cause(),
   decode_cause() and do_unexpected_trap() were made static as they are
   expected to be used only in traps.c.
 - Remove "#include <xen/types.h>" from <asm/bug.h> as there is no need for it anymore.
 - Update the macro GET_INSN_LENGTH: remove UL and 'unsigned int len;' from it.
 - Remove "#include <xen/bug.h>" from riscv/setup.c as it is not needed in the
   current version of the patch.
 - Change an argument type from vaddr_t to uint32_t for is_valid_bugaddr() and
   introduce read_instr() to read an instruction properly, as the length of an
   instruction can be either 32 or 16 bits.
 - Code style fixes.
 - Update the comments before do_bug_frame() in riscv/traps.c.
 - [[PATCH v4 5/5] automation: modify RISC-V smoke test] was dropped as a
   simpler solution was provided by Andrew: CI: Simplify RISCV smoke testing.
 - Refactor the is_valid_bugaddr() function.
 - 2 new patches ([PATCH v5 {1-2}/7]) were introduced, the goal of which is to
   recalculate addresses used in traps.c, which can be link-time relative. This
   is needed as we don't have the MMU enabled yet.
---
Changes in V4:
  - Rebase the patch series on top of new version of [introduce generic
    implementation of macros from bug.h] patch series.
  - Update the cover letter message as 'Early printk' was merged and
    the current patch series is based only on [introduce generic
    implementation of macros from bug.h] which hasn't been committed yet.
  - The following patches of the patch series were merged to staging:
      [PATCH v3 01/14] xen/riscv: change ISA to r64G
      [PATCH v3 02/14] xen/riscv: add <asm/asm.h> header
      [PATCH v3 03/14] xen/riscv: add <asm/riscv_encoding.h header
      [PATCH v3 04/14] xen/riscv: add <asm/csr.h> header
      [PATCH v3 05/14] xen/riscv: introduce empty <asm/string.h>
      [PATCH v3 06/14] xen/riscv: introduce empty <asm/cache.h>
      [PATCH v3 07/14] xen/riscv: introduce exception context
      [PATCH v3 08/14] xen/riscv: introduce exception handlers implementation
      [PATCH v3 10/14] xen/riscv: mask all interrupts
  - Address comments from the xen-devel mailing list.

---
Changes in V3:
  - Change the name of config RISCV_ISA_RV64IMA to RISCV_ISA_RV64G
    as instructions from the Zicsr and Zifencei extensions are no longer
    part of the I extension.
  - Rebase the patch "xen/riscv: introduce an implementation of macros
    from <asm/bug.h>" on top of patch series [introduce generic implementation
    of macros from bug.h]
  - Update commit messages
---
Changes in V2:
  - take the latest riscv_encoding.h from OpenSBI, update it with Xen
    related changes, and update the commit message itself, adding an
    "Origin:" tag.
  - add an "Origin:" tag to the commit message of the patch
    [xen/riscv: add <asm/csr.h> header].
  - Remove the patch [xen/riscv: add early_printk_hnum() function] as the
    functionality provided by the patch isn't used now.
  - Refactor process.h: move structure offset defines to asm-offsets.c,
    change register_t to unsigned long.
  - Refactor entry.S to use the offsets defined in asm-offsets.c.
  - Rename {__,}handle_exception to handle_trap() and do_trap() to be more
    consistent with the RISC-V spec.
  - Merge the patch which introduces do_unexpected_trap() with the patch
    [xen/riscv: introduce exception handlers implementation].
  - Rename setup_trap_handler() to trap_init() and update correspondingly
    the patches in the patch series.
  - Refactor bug.h, remove bug_instr_t type from it.
  - Refactor decode_trap_cause() function to be more optimization-friendly.
  - Add two new empty headers: <cache.h> and <string.h> as they are needed to
    include <xen/lib.h> which provides ARRAY_SIZE and other macros.
  - Code style fixes.
---

Oleksii Kurochko (6):
  xen/riscv: introduce temporary printk stuff
  xen/riscv: introduce dummy <asm/bug.h>
  xen/riscv: introduce decode_cause() stuff
  xen/riscv: introduce trap_init()
  xen/riscv: introduce an implementation of macros from <asm/bug.h>
  xen/riscv: test basic handling stuff

 xen/arch/riscv/early_printk.c      | 168 ++++++++++++++++++++
 xen/arch/riscv/include/asm/bug.h   |  38 +++++
 xen/arch/riscv/include/asm/traps.h |   1 +
 xen/arch/riscv/setup.c             |  20 +++
 xen/arch/riscv/traps.c             | 238 ++++++++++++++++++++++++++++-
 xen/arch/riscv/xen.lds.S           |  10 ++
 6 files changed, 474 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/bug.h

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 12:13:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 12:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540605.842466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkf-0005Vz-Fx; Mon, 29 May 2023 12:13:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540605.842466; Mon, 29 May 2023 12:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkf-0005VS-9Y; Mon, 29 May 2023 12:13:29 +0000
Received: by outflank-mailman (input) for mailman id 540605;
 Mon, 29 May 2023 12:13:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yi1m=BS=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q3bke-0005DI-5h
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 12:13:28 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 35be092a-fe1a-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 14:13:24 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2af189d323fso48962231fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 05:13:24 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 z14-20020a2e964e000000b002ac87c15fd4sm2427762ljh.95.2023.05.29.05.13.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 05:13:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35be092a-fe1a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685362403; x=1687954403;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/C052OVgKf2/+kQyjQaALA5nb1G75JwZbX4aVlgaqOA=;
        b=VNtgcoWpnSU3L62BC1zjb8x+g1k92ZREVQ2sFWV0gKoTLZkqrSmvRezX6zNPMdy89n
         P/KxOIkgrwgdu606XaKgCeDWNNHro4WpxMJsPOH+nw0DLFxOiBSgK4MME7eafqAtubYF
         LIu4ySxhet2MWQrU8I9DpuRQuDM66qgl4WrYR1w8ajBnYP6dNp1ZZhlkbkL40pGBoIKa
         oiwIx/dYzIdA/p9rDDAJMvH48rlhZ0wMpqQeVY/cl1HA4dPEc8PAQDtCpxpTZXrfNfYk
         fS0tYDLDOfD9fCIdl6T2FGhybdO6eVoF4cgHCfkl7NpoqvTMz32KbN5Yi86Sb4wrPOY+
         jCPQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685362403; x=1687954403;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/C052OVgKf2/+kQyjQaALA5nb1G75JwZbX4aVlgaqOA=;
        b=aHtuGAUjwJWGy95Q7u3juqRWBFfqYz1YNAsIdwysYlXwS69NiccELaTwSxYtc2lWf1
         IUzxKsPcOGB2v/rRyU6cC6IfgwEYKC3yj0vjrifoc2x+Hk+XwypwDo2BXuL133rIr3Hn
         1CGMEZkKgXMAfyBjwmY3F3L6OOGbsQ9pbvnspx2v3J2nFuLh/MoYvGcLAPV8wGaOamsC
         EA8c3NJwGXzDW2oqxrcRWQg2b5RBlTeD1f9xvUXcrvxJw3C8w2Ep3M1nObUEZ1FERU3t
         KSS2AcNUpidtVQB17Xm9flN5iwoDbTTe+xZXUnan8asHUh/lX5YySEZuwx5obY9ITx4C
         nzWQ==
X-Gm-Message-State: AC+VfDzU38Vd0Lg/m9oUfStx2GxDUSFPpAIl8zaRWTlcqLdNn9tBAJf2
	g9ybcKbe/lBV5LzfrioInLXZAgU6wJo=
X-Google-Smtp-Source: ACHHUZ4KpIiE+T3YC+yLia9p7gQBArv3+/5DXIUwucKoZGsfb2YMWCRkXG+rsinXEzmUEgPso8B6ug==
X-Received: by 2002:a05:6512:10cc:b0:4f2:47ea:2f32 with SMTP id k12-20020a05651210cc00b004f247ea2f32mr2439587lfg.23.1685362403464;
        Mon, 29 May 2023 05:13:23 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 1/6] xen/riscv: introduce temporary printk stuff
Date: Mon, 29 May 2023 15:13:14 +0300
Message-Id: <2a43fd49b34ff2e3e8e3c35e9cf0bdd72ad63f91.1685359848.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685359848.git.oleksii.kurochko@gmail.com>
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces printk-related stuff which should be deleted
once the Xen common code is available.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V6:
 - the patch was introduced in the current patch series (V6)
---
 xen/arch/riscv/early_printk.c | 168 ++++++++++++++++++++++++++++++++++
 1 file changed, 168 insertions(+)

diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
index 610c814f54..7d24932a35 100644
--- a/xen/arch/riscv/early_printk.c
+++ b/xen/arch/riscv/early_printk.c
@@ -40,3 +40,171 @@ void early_printk(const char *str)
         str++;
     }
 }
+
+/*
+ * The following #if 1 ... #endif should be removed after printk
+ * and related stuff are ready.
+ */
+#if 1
+
+#include <xen/stdarg.h>
+#include <xen/string.h>
+
+/**
+ * strlen - Find the length of a string
+ * @s: The string to be sized
+ */
+size_t (strlen)(const char * s)
+{
+    const char *sc;
+
+    for (sc = s; *sc != '\0'; ++sc)
+        /* nothing */;
+    return sc - s;
+}
+
+/**
+ * memcpy - Copy one area of memory to another
+ * @dest: Where to copy to
+ * @src: Where to copy from
+ * @count: The size of the area.
+ *
+ * You should not use this function to access IO space, use memcpy_toio()
+ * or memcpy_fromio() instead.
+ */
+void *(memcpy)(void *dest, const void *src, size_t count)
+{
+    char *tmp = dest;
+    const char *s = src;
+
+    while (count--)
+        *tmp++ = *s++;
+
+    return dest;
+}
+
+int vsnprintf(char* str, size_t size, const char* format, va_list args)
+{
+    size_t i = 0; /* Current position in the format string */
+    size_t written = 0; /* Total number of characters written */
+    char* dest = str;
+
+    /* Guard against size == 0: 'size - 1' below would wrap around. */
+    if ( size == 0 )
+        return 0;
+
+    while ( format[i] != '\0' && written < size - 1 )
+    {
+        if ( format[i] == '%' )
+        {
+            i++;
+
+            if ( format[i] == '\0' )
+                break;
+
+            if ( format[i] == '%' )
+            {
+                if ( written < size - 1 )
+                {
+                    dest[written] = '%';
+                    written++;
+                }
+                i++;
+                continue;
+            }
+
+            /*
+             * Handle format specifiers.
+             * For simplicity, only %s and %d are implemented here.
+             */
+
+            if ( format[i] == 's' )
+            {
+                char* arg = va_arg(args, char*);
+                size_t arglen = strlen(arg);
+
+                size_t remaining = size - written - 1;
+
+                if ( arglen > remaining )
+                    arglen = remaining;
+
+                memcpy(dest + written, arg, arglen);
+
+                written += arglen;
+                i++;
+            }
+            else if ( format[i] == 'd' )
+            {
+                int arg = va_arg(args, int);
+
+                /* Convert the integer to its string representation */
+                char numstr[32]; /* Assumes a maximum of 32 digits */
+                int numlen = 0;
+                long long num = arg; /* widen so that negating INT_MIN is safe */
+                size_t remaining;
+
+                if ( num < 0 )
+                {
+                    if ( written < size - 1 )
+                    {
+                        dest[written] = '-';
+                        written++;
+                    }
+
+                    num = -num;
+                }
+
+                do
+                {
+                    numstr[numlen] = '0' + num % 10;
+                    num = num / 10;
+                    numlen++;
+                } while ( num > 0 );
+
+                /* Reverse the string */
+                for (int j = 0; j < numlen / 2; j++)
+                {
+                    char tmp = numstr[j];
+                    numstr[j] = numstr[numlen - 1 - j];
+                    numstr[numlen - 1 - j] = tmp;
+                }
+
+                remaining = size - written - 1;
+
+                if ( numlen > remaining )
+                    numlen = remaining;
+
+                memcpy(dest + written, numstr, numlen);
+
+                written += numlen;
+                i++;
+            }
+        }
+        else
+        {
+            if ( written < size - 1 )
+            {
+                dest[written] = format[i];
+                written++;
+            }
+            i++;
+        }
+    }
+
+    if ( size > 0 )
+        dest[written] = '\0';
+
+    return written;
+}
+
+void printk(const char *format, ...)
+{
+    static char buf[1024];
+
+    va_list args;
+    va_start(args, format);
+
+    (void)vsnprintf(buf, sizeof(buf), format, args);
+
+    early_printk(buf);
+
+    va_end(args);
+}
+
+#endif
+
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 12:13:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 12:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540603.842450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkd-0005DX-Sm; Mon, 29 May 2023 12:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540603.842450; Mon, 29 May 2023 12:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkd-0005DQ-Pw; Mon, 29 May 2023 12:13:27 +0000
Received: by outflank-mailman (input) for mailman id 540603;
 Mon, 29 May 2023 12:13:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yi1m=BS=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q3bkd-0005DK-2F
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 12:13:27 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 36b8513a-fe1a-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 14:13:25 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2af7081c9ebso33984231fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 05:13:26 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 z14-20020a2e964e000000b002ac87c15fd4sm2427762ljh.95.2023.05.29.05.13.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 05:13:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36b8513a-fe1a-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685362405; x=1687954405;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wkQ4zHDkTr4O9GodmGbD3ibtCyQrQEbN6NGk6h/UX0A=;
        b=d58sQl2RjRtSnDVfLMcJjRKVv1upnmb9HuOWNe/xYJmPy3KUgpdZOWqOGyHyY1UiN1
         c1qldE4Id8rQqM/qwM/Y6+tMhWjabRmrOBH/TzzsaYqTZ1FKuB6lBQpJmBCQL5tejuzz
         uxc6QNunuKRvGTS2RMV3KuXMQAfbkQR9sJpV+Nzly3CskFCeMDq1/ZH1dkEJAI3fUsr2
         lvyo8oseMqvIsh0TVDdNBasMESmZzrJOwg4NLM1u8wx7wcwahPcQ35WLShm4+fqQmyDN
         5OYTKGZI/CEwikX5y/LyhTX9AsqngumqHBM8ii6vekCz9IMHnKJtkE4afavoiJSrrUnn
         vfbQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685362405; x=1687954405;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=wkQ4zHDkTr4O9GodmGbD3ibtCyQrQEbN6NGk6h/UX0A=;
        b=N3LseP5VmksNXALGPd6BW08UaoPk1MgfGeJEfmE13gc3I0W9wZ870aN8ZQXJQkeIWH
         tHskXeggc1FCTQXelAXm0C71T1igC5Mk3Ld9ww4sMvoYEVUWgCNxy2QmxjK28bJGK3dm
         gtYN0QYDCqNxu59Af1lgFsyJKEiNZAFaqnUxgryGwGIt5GBhUpEyHpxiK8P9bCzfaf64
         DzHLp9+LceyeiDXvtUEZ4++nT8tbPoP5lWNUivO9Rd+zgvYPQe4lbT5sWwMURVCqM4eR
         yIW5cC2LxoxcaQCS2rIdg4qm7jYSMfKfhEEa43QzEqZaYMOLs5VFAtcqiDPfoATEHqQy
         zvRg==
X-Gm-Message-State: AC+VfDzIuMWbtG70ZGYW5UOrRCstG1M6+Zwo757PAXHQLujr4fXKBa80
	HXISeD7qmUMKHLRa2caIcAlVGgFZnNg=
X-Google-Smtp-Source: ACHHUZ4Ha9S4qBbRlL0G/+/O2i7GhUTN8dMSJOG9tXm+dMCi/JovQUoEALmoa6jvQUrT/H5v3za5aw==
X-Received: by 2002:a2e:9417:0:b0:2ad:8a4b:6a9e with SMTP id i23-20020a2e9417000000b002ad8a4b6a9emr3593392ljh.26.1685362405214;
        Mon, 29 May 2023 05:13:25 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 3/6] xen/riscv: introduce decode_cause() stuff
Date: Mon, 29 May 2023 15:13:16 +0300
Message-Id: <c1f6afd064126f652b9d63159840ea8eba2b95ea.1685359848.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685359848.git.oleksii.kurochko@gmail.com>
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the stuff needed to decode the cause of an
exception.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V6:
 - Remove usage of LINK_TO_LOAD() due to the MMU being enabled first.
 - Change early_printk() to printk()
---
Changes in V5:
  - Remove <xen/error.h> from riscv/traps.c as nothing requires its
    inclusion.
  - decode_reserved_interrupt_cause(), decode_interrupt_cause(), decode_cause()
    and do_unexpected_trap() were made static as they are expected to be used only in traps.c.
  - Use LINK_TO_LOAD() for addresses which can be link-time relative.
---
Changes in V4:
  - fix string in decode_reserved_interrupt_cause()
---
Changes in V3:
  - Nothing changed
---
Changes in V2:
  - Make decode_trap_cause() more optimization-friendly.
  - Merge the patch which introduces do_unexpected_trap() into the current one.
---
 xen/arch/riscv/traps.c | 84 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 83 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index ccd3593f5a..ea1012e83e 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -4,10 +4,92 @@
  *
  * RISC-V Trap handlers
  */
+
+#include <xen/lib.h>
+
+#include <asm/csr.h>
+#include <asm/early_printk.h>
 #include <asm/processor.h>
 #include <asm/traps.h>
 
-void do_trap(struct cpu_user_regs *cpu_regs)
+static const char *decode_trap_cause(unsigned long cause)
+{
+    static const char *const trap_causes[] = {
+        [CAUSE_MISALIGNED_FETCH] = "Instruction Address Misaligned",
+        [CAUSE_FETCH_ACCESS] = "Instruction Access Fault",
+        [CAUSE_ILLEGAL_INSTRUCTION] = "Illegal Instruction",
+        [CAUSE_BREAKPOINT] = "Breakpoint",
+        [CAUSE_MISALIGNED_LOAD] = "Load Address Misaligned",
+        [CAUSE_LOAD_ACCESS] = "Load Access Fault",
+        [CAUSE_MISALIGNED_STORE] = "Store/AMO Address Misaligned",
+        [CAUSE_STORE_ACCESS] = "Store/AMO Access Fault",
+        [CAUSE_USER_ECALL] = "Environment Call from U-Mode",
+        [CAUSE_SUPERVISOR_ECALL] = "Environment Call from S-Mode",
+        [CAUSE_MACHINE_ECALL] = "Environment Call from M-Mode",
+        [CAUSE_FETCH_PAGE_FAULT] = "Instruction Page Fault",
+        [CAUSE_LOAD_PAGE_FAULT] = "Load Page Fault",
+        [CAUSE_STORE_PAGE_FAULT] = "Store/AMO Page Fault",
+        [CAUSE_FETCH_GUEST_PAGE_FAULT] = "Instruction Guest Page Fault",
+        [CAUSE_LOAD_GUEST_PAGE_FAULT] = "Load Guest Page Fault",
+        [CAUSE_VIRTUAL_INST_FAULT] = "Virtualized Instruction Fault",
+        [CAUSE_STORE_GUEST_PAGE_FAULT] = "Guest Store/AMO Page Fault",
+    };
+
+    if ( cause < ARRAY_SIZE(trap_causes) && trap_causes[cause] )
+        return trap_causes[cause];
+    return "UNKNOWN";
+}
+
+static const char *decode_reserved_interrupt_cause(unsigned long irq_cause)
+{
+    switch ( irq_cause )
+    {
+    case IRQ_M_SOFT:
+        return "M-mode Software Interrupt";
+    case IRQ_M_TIMER:
+        return "M-mode TIMER Interrupt";
+    case IRQ_M_EXT:
+        return "M-mode External Interrupt";
+    default:
+        return "UNKNOWN IRQ type";
+    }
+}
+
+static const char *decode_interrupt_cause(unsigned long cause)
+{
+    unsigned long irq_cause = cause & ~CAUSE_IRQ_FLAG;
+
+    switch ( irq_cause )
+    {
+    case IRQ_S_SOFT:
+        return "Supervisor Software Interrupt";
+    case IRQ_S_TIMER:
+        return "Supervisor Timer Interrupt";
+    case IRQ_S_EXT:
+        return "Supervisor External Interrupt";
+    default:
+        return decode_reserved_interrupt_cause(irq_cause);
+    }
+}
+
+static const char *decode_cause(unsigned long cause)
+{
+    if ( cause & CAUSE_IRQ_FLAG )
+        return decode_interrupt_cause(cause);
+
+    return decode_trap_cause(cause);
+}
+
+static void do_unexpected_trap(const struct cpu_user_regs *regs)
 {
+    unsigned long cause = csr_read(CSR_SCAUSE);
+
+    printk("Unhandled exception: %s\n", decode_cause(cause));
+
     die();
 }
+
+void do_trap(struct cpu_user_regs *cpu_regs)
+{
+    do_unexpected_trap(cpu_regs);
+}
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 12:13:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 12:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540608.842498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkh-0006L2-Hp; Mon, 29 May 2023 12:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540608.842498; Mon, 29 May 2023 12:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkh-0006Ju-CV; Mon, 29 May 2023 12:13:31 +0000
Received: by outflank-mailman (input) for mailman id 540608;
 Mon, 29 May 2023 12:13:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yi1m=BS=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q3bkg-0005DI-95
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 12:13:30 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 38235dff-fe1a-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 14:13:28 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2af2e1725bdso39959611fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 05:13:28 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 z14-20020a2e964e000000b002ac87c15fd4sm2427762ljh.95.2023.05.29.05.13.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 05:13:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38235dff-fe1a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685362407; x=1687954407;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=IGRAwgV1/X9tXgdqrBZ2V9h6YU+/IZLXtXO6lnBKNgc=;
        b=BhOu4Ae/aJWmaV/4Fkq+tg/eZ+FemcZMvELH4qD4sK+/f2TTmCvN+iKQoyDYXyn7Z4
         2CVo8KZNghTDITtZePCZfONohTwxFzo11Q2I/samt/c/WRbwcAptcFfVSXSam50/Po16
         Svt0qtGXhcq9zh0FxML00zdW2vQ5AewvqrQmLNmiud9J5ASW6G619kng2eI1FtP7+qhh
         fO5nHeGpy5o/q7sXMANx+1jLrLPONUyKTcItWAfItQwDX4z9gsMnlBHLyR1C6lfgyPi8
         T+BWqcIFb3RziKoFTyaTn/+C/t9DXdTq5ccyqMIPLtztxGzUBXEkYJdxToCbj7A62L9u
         XVaw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685362407; x=1687954407;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=IGRAwgV1/X9tXgdqrBZ2V9h6YU+/IZLXtXO6lnBKNgc=;
        b=GKHTaygzvNhYjm85Lhoocl2Od8LptQObhkboTrBsmXPjaZ/pa4z/Y/p9dz2fKwPQMw
         3HzjSM/WKxgyKHNgXe4DqF703KrTqqd0GdAoazbRNRIgdUTSBNZuXd8ktlN58xafNQNo
         M7kY+ZaSs+1inkSflMOxmgh7Y4e/s2LLZOjzwmYWo8F688zAss5XtKMf0DqePFZVqX/i
         v6C/8XSOC76/H5n/TK1MrAiFYq9RU70UUpbTucHUHt2wMDLlTCeIW9pBJzZHxIiwgWSb
         FCywEzjnwy9Kmnm5wNjO3TXETuhKm5hIzaUlVVbn2Na/1zL6/DsJcTvdOEMnb8fKys7e
         d0mw==
X-Gm-Message-State: AC+VfDwqySRO2SEyl8z630gNRiazV+H/50SF3dxe0LhFFHMPiyhsfI4R
	OxdqN3JhphU//Bw5fMZvjBMcP/FTqjQ=
X-Google-Smtp-Source: ACHHUZ4X24KzltwWEPBY4ezedLUOWadfqfw7Wb+VGCa/0rc3MsIxgXlvFdTfbAFC+17nv9jiuAbTFw==
X-Received: by 2002:a05:651c:c2:b0:2a7:7493:9966 with SMTP id 2-20020a05651c00c200b002a774939966mr3195466ljr.24.1685362407086;
        Mon, 29 May 2023 05:13:27 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 5/6] xen/riscv: introduce an implementation of macros from <asm/bug.h>
Date: Mon, 29 May 2023 15:13:18 +0300
Message-Id: <bd2dd42c778714f25e7e98f74ff5e98eee1cd0a5.1685359848.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685359848.git.oleksii.kurochko@gmail.com>
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch introduces the macros BUG(), WARN(), run_in_exception_handler()
and assert_failed().
To be precise, the macros from the generic bug implementation (<xen/bug.h>)
will be used.

The implementation uses the "ebreak" instruction in combination with
different bug frame tables (one for each type) which contain the
associated information.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V6:
  - Avoid LINK_TO_LOAD() as the bug.h functionality is expected to be used
    after the MMU is enabled.
  - Change early_printk() to printk()
---
Changes in V5:
  - Remove "#include <xen/types.h>" from <asm/bug.h> as it is no longer needed
  - Update the GET_INSN_LENGTH macro: remove UL and 'unsigned int len;' from it
  - Remove "#include <xen/bug.h>" from riscv/setup.c; it is not needed in the
    current version of the patch
  - Change the argument type from vaddr_t to uint32_t for is_valid_bugaddr() and
    introduce read_instr() to read the instruction properly, as its length can be
    either 32 or 16 bits
  - Code style fixes
  - Update the comment before do_bug_frame() in riscv/traps.c
  - Refactor the is_valid_bugaddr() function
  - Introduce the macro cast_to_bug_frame(addr) to hide casts
  - Use LINK_TO_LOAD() for addresses which are link-time relative
---
Changes in V4:
  - Updates in RISC-V's <asm/bug.h>:
    * Add an explanatory comment about why definitions exist only for 32-bit
      length instructions, and about the 16/32-bit BUG_INSN_{16,32}.
    * Change 'unsigned long' to 'unsigned int' inside GET_INSN_LENGTH().
    * Update the declaration of is_valid_bugaddr(): switch the return type from
      int to bool and the argument from 'unsigned int' to 'vaddr_t'.
  - Updates in RISC-V's traps.c:
    * replace xen/ and asm/ includes
    * update the definition of is_valid_bugaddr(): switch the return type from
      int to bool and the argument from 'unsigned int' to 'vaddr_t'. The code
      style inside the function was updated too.
    * do_bug_frame() refactoring:
      * the local variables start and bug became 'const struct bug_frame *'
      * the bug_frames[] array became 'static const struct bug_frame[] = ...'
      * remove all casts
      * remove unneeded comments and add an explanatory comment that
        do_bug_frame() will be switched to a generic one.
    * do_trap() refactoring:
      * read a 16-bit value instead of a 32-bit one, as a compressed instruction
        can be used and it might happen that only 16 bits are accessible.
      * code style updates
      * re-use the instr variable instead of re-reading the instruction.
  - Updates in setup.c:
    * add blank line between xen/ and asm/ includes.
---
Changes in V3:
  - Rebase the patch "xen/riscv: introduce an implementation of macros
    from <asm/bug.h>" on top of patch series [introduce generic implementation
    of macros from bug.h]
---
Changes in V2:
  - Remove __ in define namings
  - Update run_in_exception_handler() with
    register void *fn_ asm(__stringify(BUG_FN_REG)) = (fn);
  - Remove bug_instr_t type and change it's usage to uint32_t
---
 xen/arch/riscv/include/asm/bug.h |  28 +++++++
 xen/arch/riscv/traps.c           | 129 +++++++++++++++++++++++++++++++
 xen/arch/riscv/xen.lds.S         |  10 +++
 3 files changed, 167 insertions(+)

diff --git a/xen/arch/riscv/include/asm/bug.h b/xen/arch/riscv/include/asm/bug.h
index e8b1e40823..bf3194443f 100644
--- a/xen/arch/riscv/include/asm/bug.h
+++ b/xen/arch/riscv/include/asm/bug.h
@@ -7,4 +7,32 @@
 #ifndef _ASM_RISCV_BUG_H
 #define _ASM_RISCV_BUG_H
 
+#ifndef __ASSEMBLY__
+
+#define BUG_INSTR "ebreak"
+
+/*
+ * The base instruction set has a fixed length of 32-bit, naturally aligned
+ * instructions.
+ *
+ * There are extensions of variable length, where each instruction can be
+ * any number of 16-bit parcels in length, but they aren't used in Xen
+ * or in the Linux kernel (where these definitions were taken from).
+ *
+ * The compressed ISA, however, is in use: with it the instruction length
+ * can be 16 bit, so the 'ebreak' instruction can be either 16 or 32 bit
+ * long depending on whether the compressed encoding is used.
+ */
+#define INSN_LENGTH_MASK        _UL(0x3)
+#define INSN_LENGTH_32          _UL(0x3)
+
+#define BUG_INSN_32             _UL(0x00100073) /* ebreak */
+#define BUG_INSN_16             _UL(0x9002)     /* c.ebreak */
+#define COMPRESSED_INSN_MASK    _UL(0xffff)
+
+#define GET_INSN_LENGTH(insn)                               \
+    (((insn) & INSN_LENGTH_MASK) == INSN_LENGTH_32 ? 4 : 2)
+
+#endif /* !__ASSEMBLY__ */
+
 #endif /* _ASM_RISCV_BUG_H */
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index 48c1059954..535fb058e1 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -5,6 +5,8 @@
  * RISC-V Trap handlers
  */
 
+#include <xen/bug.h>
+#include <xen/errno.h>
 #include <xen/lib.h>
 
 #include <asm/csr.h>
@@ -114,7 +116,134 @@ static void do_unexpected_trap(const struct cpu_user_regs *regs)
     die();
 }
 
+void show_execution_state(const struct cpu_user_regs *regs)
+{
+    printk("implement show_execution_state(regs)\n");
+}
+
+/*
+ * TODO: switch early_printk() to a variant taking a format string
+ *       once s(n)printf() is available.
+ *
+ * This TODO will probably become moot: a generic do_bug_frame()
+ * has been introduced, and the current implementation will be
+ * replaced with the generic one once panic(), printk() and
+ * find_text_region() (virtual memory?) are ready/merged.
+ */
+int do_bug_frame(const struct cpu_user_regs *regs, vaddr_t pc)
+{
+    const struct bug_frame *start, *end;
+    const struct bug_frame *bug = NULL;
+    unsigned int id = 0;
+    const char *filename, *predicate;
+    int lineno;
+
+    static const struct bug_frame* bug_frames[] = {
+        &__start_bug_frames[0],
+        &__stop_bug_frames_0[0],
+        &__stop_bug_frames_1[0],
+        &__stop_bug_frames_2[0],
+        &__stop_bug_frames_3[0],
+    };
+
+    for ( id = 0; id < BUGFRAME_NR; id++ )
+    {
+        start = cast_to_bug_frame(bug_frames[id]);
+        end   = cast_to_bug_frame(bug_frames[id + 1]);
+
+        while ( start != end )
+        {
+            if ( (vaddr_t)bug_loc(start) == pc )
+            {
+                bug = start;
+                goto found;
+            }
+
+            start++;
+        }
+    }
+
+ found:
+    if ( bug == NULL )
+        return -ENOENT;
+
+    if ( id == BUGFRAME_run_fn )
+    {
+        void (*fn)(const struct cpu_user_regs *) = bug_ptr(bug);
+
+        fn(regs);
+
+        goto end;
+    }
+
+    /* WARN, BUG or ASSERT: decode the filename pointer and line number. */
+    filename = bug_ptr(bug);
+    lineno = bug_line(bug);
+
+    switch ( id )
+    {
+    case BUGFRAME_warn:
+        printk("Xen WARN at %s:%d\n", filename, lineno);
+
+        show_execution_state(regs);
+
+        goto end;
+
+    case BUGFRAME_bug:
+        printk("Xen BUG at %s:%d\n", filename, lineno);
+
+        show_execution_state(regs);
+
+        printk("change wait_for_interrupt to panic() when common is available\n");
+        die();
+
+    case BUGFRAME_assert:
+        /* ASSERT: decode the predicate string pointer. */
+        predicate = bug_msg(bug);
+
+        printk("Assertion %s failed at %s:%d\n", predicate, filename, lineno);
+
+        show_execution_state(regs);
+
+        printk("change wait_for_interrupt to panic() when common is available\n");
+        die();
+    }
+
+    return -EINVAL;
+
+ end:
+    return 0;
+}
+
+static bool is_valid_bugaddr(uint32_t insn)
+{
+    return insn == BUG_INSN_32 ||
+           (insn & COMPRESSED_INSN_MASK) == BUG_INSN_16;
+}
+
+static uint32_t read_instr(unsigned long pc)
+{
+    uint16_t instr16 = *(uint16_t *)pc;
+
+    if ( GET_INSN_LENGTH(instr16) == 2 )
+        return (uint32_t)instr16;
+    else
+        return *(uint32_t *)pc;
+}
+
 void do_trap(struct cpu_user_regs *cpu_regs)
 {
+    register_t pc = cpu_regs->sepc;
+    uint32_t instr = read_instr(pc);
+
+    if ( is_valid_bugaddr(instr) )
+    {
+        if ( !do_bug_frame(cpu_regs, pc) )
+        {
+            cpu_regs->sepc += GET_INSN_LENGTH(instr);
+            return;
+        }
+    }
+
     do_unexpected_trap(cpu_regs);
 }
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index df71d31e17..0412493911 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -40,6 +40,16 @@ SECTIONS
     . = ALIGN(PAGE_SIZE);
     .rodata : {
         _srodata = .;          /* Read-only data */
+        /* Bug frames table */
+        __start_bug_frames = .;
+        *(.bug_frames.0)
+        __stop_bug_frames_0 = .;
+        *(.bug_frames.1)
+        __stop_bug_frames_1 = .;
+        *(.bug_frames.2)
+        __stop_bug_frames_2 = .;
+        *(.bug_frames.3)
+        __stop_bug_frames_3 = .;
         *(.rodata)
         *(.rodata.*)
         *(.data.rel.ro)
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 12:13:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 12:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540606.842472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkf-0005d4-Nz; Mon, 29 May 2023 12:13:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540606.842472; Mon, 29 May 2023 12:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkf-0005cG-Kh; Mon, 29 May 2023 12:13:29 +0000
Received: by outflank-mailman (input) for mailman id 540606;
 Mon, 29 May 2023 12:13:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yi1m=BS=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q3bke-0005DI-Ci
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 12:13:28 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 363edad5-fe1a-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 14:13:25 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2af1c884b08so39784941fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 05:13:25 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 z14-20020a2e964e000000b002ac87c15fd4sm2427762ljh.95.2023.05.29.05.13.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 05:13:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 363edad5-fe1a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685362404; x=1687954404;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=HHQHWourHZby5JiDzW3SkVKPGSqK79/gkhN0T0QJ+Ak=;
        b=oR7cLdVsdieoIpYv82dgRiF9/zLzzAid/pFmpbkcUioNpbkh3/qHGt1TYm/rVl5DJx
         soBQV7eVkPDC0RIPuDv5yucnEkEcnYfnRudMEh+t68+EuLv+mttGoX1YiqDjHUsHrhEH
         lZ3sVbDL7CVDCrCl9dkg9P4qewvgQSaEh3q7feQ15eHAH3lhz/MAmDWqpl5MstytXF+J
         4TDFF2YLMwYSt4MQIeQZko4sCdaOIfmFXNa0HodxhDc48UdutIvupJGlIGz45tGNjFi5
         +iQnR4E2Fc3RAkjutsdeAKvcIUAdApps6M+YGBJlGLaYrRbNbsVKvoO6/ymWvaAxb/L+
         2OsA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685362404; x=1687954404;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=HHQHWourHZby5JiDzW3SkVKPGSqK79/gkhN0T0QJ+Ak=;
        b=YHxJV0JNnM7jSoKAd6xXe0fSkENQ8GrbSLUeVHXDYRW6W74GTmPTajuc09MS/siQFf
         GMJv5cIksu2mwJWhE8MsgzzRNUMam6PL+butp29skEvxrP0/W0XbtCmzjbWNhdfRDIzn
         wdRguiiZwnu3gC1EumeLmK6shD7lmBR7U2khR6D1svF1RQ48vCbEzvOK/H0TL5umiPWN
         J0AbjT+7WveMyFNwnxEw6NhjlsVUR4OsItffS/1IBiVFXUelTU82c/yDibFbNF6ELGG2
         KvzTdAxRwA2ZaTXLDYnv3mluFGoLV15bKTGgT6pGTXtZLsvWvAbe0dRR/nSrzH4ERSld
         gR9A==
X-Gm-Message-State: AC+VfDxrdW2ObdctrgE7LVoBoTIU8nsLkKB9T5ML+/+sdEsHxS0MDf43
	+4PX1CBoZ9Z7TI9bZFQrySC4yG03FYw=
X-Google-Smtp-Source: ACHHUZ43cpZFa9TVNywdOJd3rfXwSv/GypAR6BFC3btMjVEMX79QTrjh+aBZWQaeBunLSyl/B6FQUA==
X-Received: by 2002:a2e:9b0d:0:b0:2b0:248c:bc4e with SMTP id u13-20020a2e9b0d000000b002b0248cbc4emr2678913lji.19.1685362404357;
        Mon, 29 May 2023 05:13:24 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 2/6] xen/riscv: introduce dummy <asm/bug.h>
Date: Mon, 29 May 2023 15:13:15 +0300
Message-Id: <268eb55c4f6ff4eb9e1a2524eae7645ffc79568c.1685359848.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685359848.git.oleksii.kurochko@gmail.com>
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

<xen/lib.h> will be used in the patch "xen/riscv: introduce
decode_cause() stuff" and requires <asm/bug.h>.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V6:
 - Nothing changed. Only rebase.
---
Changes in V5:
 - The patch was introduced in the current patch series (V5)
---
 xen/arch/riscv/include/asm/bug.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/bug.h

diff --git a/xen/arch/riscv/include/asm/bug.h b/xen/arch/riscv/include/asm/bug.h
new file mode 100644
index 0000000000..e8b1e40823
--- /dev/null
+++ b/xen/arch/riscv/include/asm/bug.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2021-2023 Vates
+ *
+ */
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#endif /* _ASM_RISCV_BUG_H */
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 12:13:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 12:13:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540607.842480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkg-0005oO-8T; Mon, 29 May 2023 12:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540607.842480; Mon, 29 May 2023 12:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3bkg-0005n0-3U; Mon, 29 May 2023 12:13:30 +0000
Received: by outflank-mailman (input) for mailman id 540607;
 Mon, 29 May 2023 12:13:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yi1m=BS=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q3bkf-0005DI-5h
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 12:13:29 +0000
Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com
 [2a00:1450:4864:20::22e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 373c73be-fe1a-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 14:13:26 +0200 (CEST)
Received: by mail-lj1-x22e.google.com with SMTP id
 38308e7fff4ca-2af2b74d258so33764371fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 05:13:26 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 z14-20020a2e964e000000b002ac87c15fd4sm2427762ljh.95.2023.05.29.05.13.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 05:13:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 373c73be-fe1a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685362406; x=1687954406;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GfEaiOQUBCZEc9m9TcpbthmlviGr6gh0qCBn/DEmm+4=;
        b=T2UzzMCjI4y6qWUGyeniQt8VMvQpgboaSIZdZw0x1K5eOf1UZLucx1O6DAJg2vRMVd
         3+MH40pASrruXzh+CDny1F81oCfKe/f9L8HzA9tL5jJl1ODC6DwGNL79QXiTjpyhCSAj
         38wnNLObZ4aVUvQMuY5JytvfKRl0FqanO2nvTRZf4D/21Qcvg9R+T8IRyQSjlb2oTA8K
         hx7CWP1xtHp4AhT6FDVcxyrKqrljjCIot8geRQk6L8Ykl+dpHmcF1+oGAunc09D6BxWR
         OAGTscoHZTDmOnLcCLMtUO6VL5gBDu+aHhC/P1uZh8wUx32fCOKIFMT6dW8/UJbv+xPJ
         d7sg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685362406; x=1687954406;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=GfEaiOQUBCZEc9m9TcpbthmlviGr6gh0qCBn/DEmm+4=;
        b=h4KtRcyFTiTZqT6a7I554or4cIcJIfwsP+Ny+9iPKDx7SnnnVThZcpG4gnmVjTwUs3
         +jXC6PPRzQaamX1G8zJsex5o9hVqYBnS2p8XluN6mZ7M6GUUiRlq9LOy7fyeuRFx4Nq6
         ry441QoVoO0eKYlNnU+g/3T8ziOeGLdzLylqGf4lO+gu69ViLgpVZ6J0OBA1KsL+nYOx
         UvkdM8wq6MA2mR3CJW0dgtPNsaxSindNpaQT31mBw8TK3RTks1uQlWd2nV7gXnAWLllw
         WC1P1I07pOnexJta+KRGxmcw46hv+CcXRMBEEm4MKLTu4obea01z6l9sl7dk/UnGDSHq
         Ea5Q==
X-Gm-Message-State: AC+VfDy/Nf65NALe3gPSVIEUCdyWPrnyxoD7lrJoIAvYyCesrao3sN7d
	eKZrev/1hc9ZcOI6RZw6MO/bK6qEBNA=
X-Google-Smtp-Source: ACHHUZ64pZrBI6bynejBHjG+p4XjvJ1GwyCqiLiS7raB5U4Qd6ZRbytGgFAO1fA2A6GAzNeTb1SnEQ==
X-Received: by 2002:a2e:b706:0:b0:2b0:497a:2029 with SMTP id j6-20020a2eb706000000b002b0497a2029mr4504061ljo.23.1685362406101;
        Mon, 29 May 2023 05:13:26 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH v6 4/6] xen/riscv: introduce trap_init()
Date: Mon, 29 May 2023 15:13:17 +0300
Message-Id: <f4c4b711106283e26536105105892b93bb39ea3e.1685359848.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <cover.1685359848.git.oleksii.kurochko@gmail.com>
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
Changes in V6:
 - trap_init() is now called after enabling the MMU.
 - Add additional explanatory comments.
---
Changes in V5:
  - Nothing changed
---
Changes in V4:
  - Nothing changed
---
Changes in V3:
  - Nothing changed
---
Changes in V2:
  - Rename setup_trap_handler() to trap_init().
  - Add Reviewed-by to the commit message.
---
 xen/arch/riscv/include/asm/traps.h |  1 +
 xen/arch/riscv/setup.c             |  3 +++
 xen/arch/riscv/traps.c             | 25 +++++++++++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/xen/arch/riscv/include/asm/traps.h b/xen/arch/riscv/include/asm/traps.h
index f3fb6b25d1..f1879294ef 100644
--- a/xen/arch/riscv/include/asm/traps.h
+++ b/xen/arch/riscv/include/asm/traps.h
@@ -7,6 +7,7 @@
 
 void do_trap(struct cpu_user_regs *cpu_regs);
 void handle_trap(void);
+void trap_init(void);
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 845d18d86f..1cae0e5ccc 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -3,6 +3,7 @@
 
 #include <asm/early_printk.h>
 #include <asm/mm.h>
+#include <asm/traps.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -25,6 +26,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
 void __init noreturn cont_after_mmu_is_enabled(void)
 {
+    trap_init();
+
     early_printk("All set up\n");
 
     for ( ;; )
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index ea1012e83e..48c1059954 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -12,6 +12,31 @@
 #include <asm/processor.h>
 #include <asm/traps.h>
 
+#define cast_to_bug_frame(addr) \
+    (const struct bug_frame *)(addr)
+
+/*
+ * Initialize the trap handling.
+ *
+ * The function is called after the MMU is enabled.
+ */
+void trap_init(void)
+{
+    /*
+     * When the MMU is off, the addr variable will be a physical address;
+     * otherwise it will be a virtual address.
+     *
+     * Either way it will work fine because:
+     *  - the access to addr is PC-relative;
+     *  - -nopie is used. -nopie suppresses the compiler emitting code
+     *    going through .got (which would indeed mean using absolute
+     *    addresses).
+     */
+    unsigned long addr = (unsigned long)&handle_trap;
+
+    csr_write(CSR_STVEC, addr);
+}
+
 static const char *decode_trap_cause(unsigned long cause)
 {
     static const char *const trap_causes[] = {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Mon May 29 12:46:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 12:46:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540634.842520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3cG3-0003jh-An; Mon, 29 May 2023 12:45:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540634.842520; Mon, 29 May 2023 12:45:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3cG3-0003ja-7c; Mon, 29 May 2023 12:45:55 +0000
Received: by outflank-mailman (input) for mailman id 540634;
 Mon, 29 May 2023 12:45:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3cG1-0003jQ-IH; Mon, 29 May 2023 12:45:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3cG1-0006PV-CN; Mon, 29 May 2023 12:45:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3cG0-0001Ux-Sm; Mon, 29 May 2023 12:45:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3cG0-0006gf-SS; Mon, 29 May 2023 12:45:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DWUm5/hBJxxlXbFjv/4oJKKsk7oKljhbMVDxbJiv+B4=; b=d/5/SfVvaM6vLbWTbn8cLvf1LV
	2YJ5pLHNkrPu8NOOFUQTxTXaW5h1mKRDJxIscnStn7gOc71m/w5LevQ6IO/0zDIYpBkp/WebRSCdl
	6dBZ2+h/tSr9w2WZHgpgScwh9491NTlmqbpc46VF44tpcs9PcslHwxk0uWgmlz+QOESg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180998-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180998: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c0bce66068c76b5d46f901027daf1074316031ac
X-Osstest-Versions-That:
    ovmf=e1f5c6249af08c1df2c6257e4bb6abbf6134318c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 12:45:52 +0000

flight 180998 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180998/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c0bce66068c76b5d46f901027daf1074316031ac
baseline version:
 ovmf                 e1f5c6249af08c1df2c6257e4bb6abbf6134318c

Last test of basis   180997  2023-05-29 06:11:40 Z    0 days
Testing same since   180998  2023-05-29 10:43:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e1f5c6249a..c0bce66068  c0bce66068c76b5d46f901027daf1074316031ac -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 29 13:35:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 13:35:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540642.842530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3d23-0000eS-2x; Mon, 29 May 2023 13:35:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540642.842530; Mon, 29 May 2023 13:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3d23-0000eL-04; Mon, 29 May 2023 13:35:31 +0000
Received: by outflank-mailman (input) for mailman id 540642;
 Mon, 29 May 2023 13:35:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FxNl=BS=citrix.com=prvs=506ffa617=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q3d21-0000eF-GE
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 13:35:29 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a916561a-fe25-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 15:35:24 +0200 (CEST)
Received: from mail-bn7nam10lp2102.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.102])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 May 2023 09:34:58 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA3PR03MB7489.namprd03.prod.outlook.com (2603:10b6:806:39a::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22; Mon, 29 May
 2023 13:34:55 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.020; Mon, 29 May 2023
 13:34:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a916561a-fe25-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685367323;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=mm/mjZDXWXp0viiSqxFreJYp6rstE9w8o1HVuhdlklk=;
  b=BOLnh11bBQkzxfYyMnqcsRb7wWni/rF28y/CzsjudXOh46Gtl50N1hTe
   lzaYICbwMazD0USu2cfkue6pa6zUyMXZj5EaPQ07lRXtfiX92UxaA76vM
   ONDOCLrDGYGhb04k3zTMo8YqXUjJgBNXO00hr8G3v8QT9RDLdaqSVv8VI
   8=;
X-IronPort-RemoteIP: 104.47.70.102
X-IronPort-MID: 110135326
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:uzFzQapynw7bYPuLgevraDRgQCheBmJWZRIvgKrLsJaIsI4StFCzt
 garIBmOMqqMZ2SjLd9xPIuzpkwB7cPRn4RgTlQ5qCsxFX8V8ZuZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKq04GtwUmAWP6gR5weDzShNVfrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXAHdUTQHTjbyo++iqdMNe2/scLujLAJxK7xmMzRmBZRonabbqZvySoPV+g3I3jM0IGuvCb
 c0EbzYpdA7HfxBEJlYQDtQ5gfusgX78NTZfrTp5p4JuuzSVkFM3j+CraYKPEjCJbZw9ckKwv
 GXJ8n6/GhgHHNee1SCE4jSngeqncSbTAdtKTeblraQ06LGV7k8WFBgoWVKdndOgikmkevREF
 ksUwzV7+MDe82TuFLERRSaQonSJoxodUNp4CPAh5UeGza+8yxaUAC0IQyBMbPQitdQqXno62
 1mRhdTrCDdz9rqPRhq16bO8vT60fy8PIgcqZzIATAYDy8nupsc0lB2nZs14DKe/g9nxGDfx6
 zOHti4zg/MUl8Fj/7u8+VfLkje9vK/DRwQ+5hjUdm+95wY/b4mgD6Si5ELH9/9GIMCcR0OYo
 Xkfs8GE6aYFCpTlvCaKSu8cEaqp4/uAOTv0jltmHp1n/DOok1aqeYFL/Dh/PgFnKM8Ccj7yS
 FDfskVa45o7FHCta6lwYY64FcUx5aflHNXhEPvTa7JzjoNZcQaG+GRkYxGW1mW0yEw0y/hnY
 9GcbNqmCmscBeJ/1j2qSuwB0LgtgCcj2WfUQpO9xBOiuVaDWEOopX4+GAPmRogEAGms+m05L
 /432xO29ihi
IronPort-HdrOrdr: A9a23:iHu4XK69+28YcPFwIgPXwA7XdLJyesId70hD6qkQc3Fom62j5q
 STdZEgvyMc5wx/ZJhNo7690cq7MBbhHPxOkOos1N6ZNWGLhILPFuBfBOPZqAEIcBeOlNK1u5
 0BT0B/YueAcGRSvILBzySTV/wb57C8gceVbeW19QYQcem9AZsQkDuQCWygYzNLrBEtP+teKH
 IFjPA33QZJfx4sH72G7ilsZZm6mzXT/qiWGiI7Ow==
X-Talos-CUID: =?us-ascii?q?9a23=3AaOWWdWmXyU9Bk69CwUYzh4mQRwPXOTqE72aAKRG?=
 =?us-ascii?q?mMjx4dpenFGCv+oJEr9U7zg=3D=3D?=
X-Talos-MUID: 9a23:P1/CQATVGst8rWoMRXTPgS1nCsthxZiEEUAKi7QgoPSDNRVvbmI=
X-IronPort-AV: E=Sophos;i="6.00,201,1681185600"; 
   d="scan'208";a="110135326"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dmrBsyTCWfv2BJcHEb7XxRLtev5fOgYo3Mjyh3n0czsukyV03ykrUJ77/Fikq6TtR9Mg5ChICoXjpO5PUA+uKkj/FkbjA6lMvMTZQ6v7EZRar+uoz0+8P2ARdhfQdC2NREXdJFyaRHeWuo0jzWfaXlZNkk0pMZ8Rcy/Rbf2AARVCHXH8FPTwkyHDVQ15KTnrCritx6RgnwGTJ6lTMlFs/IN8gqoQtYHXTQuHu9Eo8SNwYKK70LUhCukZijNXotBEt4nyA6xRgHyq5J04PszV49oUTOeKowmv/gfnP/z+Jxf86xraNNlkUPKuLxQBpq5Kc50QDfSwwiaOYKVojGS3ug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=p5fh7T91ZW40vbvUofnxfdRuyhsnnORmYCbrJDWPat8=;
 b=OohwWHc7ZWK7uE3xgD4ta00zT/5FJEdBQC/LXSQh5C4Zwyu+O/rw77lZGW5rPDnQ01FmoQ/lxbNHF570/fmzuqR7aaSdxG/FyFMKLLp2pYlMV10k6hCskYqvQmWXPsQXosbIn1XktCkEFp+9mdBPhlQdpBv1jaLQasUjbjW2X2Usr041Jub3fMf8spHNnO8FSmpTjDLKu7KzFj4zcqIlvqfnGBCGS6jwTEFl8KLKsbDQrut2iKyLismbco1dDd0AXz5o882yZvRDDwwMJXxDMs1brPXjDVEm+bAgY73f21tQ9s9qgRLRyLKmVuZB3ed3WWtKUgZjDTftPc7lIR2w/w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p5fh7T91ZW40vbvUofnxfdRuyhsnnORmYCbrJDWPat8=;
 b=h612c2fxWBkE8GVBp9xIzOhoCECePE/x6R5WCI55QaHViOtPmbdXU3WeiDOSQZoJuq7JzMAgYAzhyMgtCABp+IWbYF05K1ptgb6T19U0ThjIBkBUwcOUAc3aCZ0FVokLFMAP7o+nvx72I9VxN9a8+e8OqBD3vSYysQ/s2/HOae4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 29 May 2023 15:34:47 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v2 1/2] x86: annotate entry points with type and size
Message-ID: <ZHSp9+ouRrXFEY4R@Air-de-Roger>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
 <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
 <fd492a4a-11ba-b63a-daf4-99697db0db0e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <fd492a4a-11ba-b63a-daf4-99697db0db0e@suse.com>
X-ClientProxiedBy: LO2P265CA0145.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::13) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA3PR03MB7489:EE_
X-MS-Office365-Filtering-Correlation-Id: 143f69fc-1569-48c3-e2da-08db60497c74
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/A1qIgfCzGrEjr/7X0RHOupG41dBuvio1l+D1kt+IZOQxyPcjvvcAp9i+Om+deFS2cSqm3OOyGcVfPQGSdMuwnVNHNjxKYq5hLIaQ4G9VPZWKAIdefyx+gyFkZJrOo43MZPUGrc0r3+NnwvP/9FQ5omksPt3qI1d+UdW9l5o6NN8cWBsbeSIbwijPc9E7Kg1oHfTuSqXAcX07rYTt1aMbXz6+Qa1Q6xAUksFfvrw/P4QcXSaapDTqmuvcPzEBnq//vXE5LEsHtsevRsY6AMZzDpqwFIKlQ1Qk9ky5w8I2vPKgd4icfn2ROLpCDaXt5LGXfyfXesFofx4a3U6SOCXpmUbpdjR0vuNekVHbsmecPEBJEBTs4HgVn6yT7/NzcvZWHz82ZvLoZg4BIxEWjzq9fFm6+8W62KyEhKomL/lYo0l+wtWKNzAaX2e+U/ReSfWhY+2YIuR4fHtMox/pwxz+UQQKTfWPC49gd+tt6KX7NjqPFk5n2wJ0zctrY6Fr5OzAc8SSiZZlQScRF0aACQUOeIGpGimR/WDcUXK/kbgyt0dOjG7H4TMsCucQxMD7QNsDJ6E/nrEsaDzWusco655zbWgBiai3fEq2eVLCce5M6k=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(39860400002)(346002)(366004)(396003)(136003)(376002)(451199021)(54906003)(478600001)(8936002)(8676002)(5660300002)(7416002)(85182001)(2906002)(33716001)(86362001)(66556008)(66946007)(66476007)(6916009)(4326008)(82960400001)(316002)(38100700002)(41300700001)(186003)(9686003)(26005)(6506007)(6512007)(6486002)(6666004)(966005)(83380400001)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NXZBWjFJNU11d08zTGxsT2hlbGN5b2hPeFdyN0g0ZU9IVXdrcnYzS3V4SE9S?=
 =?utf-8?B?Q1ZQVUovUUtCUGtIOWJwRThnZVZQb1lXakc2VmxQOExjYnM3VXpnejZLNlYw?=
 =?utf-8?B?MEZXWWxXWVZnM3YyUE5QbzZIQ2U4WThXK1k0MC9PT0NFTGRlSnFXSEZ6TEhk?=
 =?utf-8?B?dDRNTm0wRkkvd2dTVnNjZGNwQlhWMTlBL3ltQzJpZmJSQmlmYVdsc2VlK0c0?=
 =?utf-8?B?bm1na3VEZVhhc0R4cFFUdkNaUS9KLzJySzMxU1NQeWdmVEczR3FJcGRLYmlT?=
 =?utf-8?B?eitIM2UxYi9OODQ4UkpQWEJMdktnL2MvOFdTd0NaRDFlQWpPakE3SG5saGRQ?=
 =?utf-8?B?TjAwQ0Fubm1ocHA2cGh6MGF6Qk1jaEk4SzFyVmFiMnhXL0gwcXo1ZlFiMlA2?=
 =?utf-8?B?TFh1cG1vTmhlWmw5ejdIYWp3dk9UUEU5aGFxZzRTMmRzOURXOVJsSDh6WWdM?=
 =?utf-8?B?MUM0aG5KUXpZUklSUkZyS1p3UGdhYWdub1EzSUFsSEVodnlDSXZpZDR4bEZL?=
 =?utf-8?B?OUJZWGFkNUdqbGZMNDBjK1Z5SjVNTThhZ2dQVXJIa0h0cVpYdXdsWkdLOE9W?=
 =?utf-8?B?OWJNeHFHbXdiUEFITjNGVlJnejFLZk9nRzVFRHF5QkNzRzluQitYQlRHbXIv?=
 =?utf-8?B?eURGd3ZjQndlSmM1cnJFK2RFNWdxVVAwaGZzaitIaFVVcVpOQ044SVlNVVNV?=
 =?utf-8?B?WXI0NndYaHlVUnE4UERpUnVjTXZPOWxGQjF3ekUxT2NVbEhVMkVSaGJoNVRj?=
 =?utf-8?B?MW5yQ29UcXFNQTkxNDlOdnoyRzFUcitnd0s2Wm1COENUZXkwNGlxYy9jdlF0?=
 =?utf-8?B?MGRCaTNwOC96MU55MlM4M1hhSXhvQjlpem5vMGNsWVRMamRDVHUvcExmK2Rp?=
 =?utf-8?B?U3Zlc29GK3MzZEFwZjBjc0ZxK1htSXRSSFRVWHJ4MElUQVFDZHNXekJ6MnZP?=
 =?utf-8?B?MVZoMmp1NWhhanVjQ0p2aWZGZk5sN1U5ZEpRSzE5akpxVzRZdGNQSy9ydkRX?=
 =?utf-8?B?bEJBN2hOZXVCclJRb0xrczBqRU5FY2hEb29EUTRNaWRmVDg5UnJlYkpnaElS?=
 =?utf-8?B?ODBLQVpibEk4UjZ6aU1EUDlQN2QxV09nelgvbWhaMWtEMEE1bWFmZzhjKy90?=
 =?utf-8?B?OWtYL3htQXJYVUxUbENXT21vbVhOdE9mMXd6UThqUlFtQXFNREhLSXhNZHNz?=
 =?utf-8?B?a3I1c3VZZHBvSTJYODlUWjhJTnVVOU1SdGRPcWUxL1hCVlhFS2pOa1J4clRE?=
 =?utf-8?B?cjUyL2RNM0ZwSGlxejdzWVJ0Z3RCNmpISWcweG56WVg2NmF2RTkzRll6eFlr?=
 =?utf-8?B?MjRtTWd6SnpzWFNrcWY5Nzl6STcwakRYc2dRdkJPMklBZHplTEpFWkh1aExy?=
 =?utf-8?B?dU5tM0V4SFgrcnlvc2FvYWYvMmZLT0FzT1B4MDVFNSt0WWd0L2dJNjdldzlU?=
 =?utf-8?B?ZWMyRDBDQkM4eGxidXpxTW5YNUwxakRLZ0pHYy8zM3VOdXF4R2RYS0xZbFJZ?=
 =?utf-8?B?WWJtK0ViUWk1UGNxYktPS2plMEdpUUhQWGNXcG5Vb0ZDUUVKWUxCbForMHQ5?=
 =?utf-8?B?a1dOdzRQdDdYVGZCeGsySzVENk9JcysxYWhxd3BhUG8zNnVSa2N2YUZXQUwv?=
 =?utf-8?B?YXU2L2ZQaEZtSld2TmpMSFErMzh1c2VzSDQ1akNxbGhsbHhOYytpUVMxbXJD?=
 =?utf-8?B?UFUvTXNXRi9GOXlocW9YTE9WODZXdGg4YysxQ1NyWmZ3WHVRdG4xcWw2Ty9R?=
 =?utf-8?B?R0YwelBxand5NjhSZHErVzZOY3orM2JTYjhId0VFMmZtTTdjNEVMUGpVK0ly?=
 =?utf-8?B?NytBZVZiTFRIeUhPY0pGKzNJWGJVeWd5c2Erb3VSdG5QLzFRMkRPM0JGNnUr?=
 =?utf-8?B?Rnl0V1VXM0dPNDJ4MEJUMjhQcWk0QUhvMkNxTmkwNVpGWXBTTERFcG5KT1N5?=
 =?utf-8?B?R2tIWjc3QmR3Q0Vib0NnR290a05YR3M5eHJSSENVdXB4WFF5dnA3Z05rVnhD?=
 =?utf-8?B?OUNJLzdpejJhWDZFcUtxQlRmRVZwUk5vbGdQNk9TTGRFZHN1UlRFVHhrZ2Jv?=
 =?utf-8?B?OURWS3V1NEJwV1V5LzRaYk5QQk5MOGMvSExIWkFjM0JKdTdGL1RyVGgwenZK?=
 =?utf-8?Q?kDOvsCNyQge2HaSGyLeDPyHKU?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	=?utf-8?B?Y2xteUkrU3YzbHZFMm1vSWt5MnRsN1phc002L21DS08vemw4MEEzbFF1TFRX?=
 =?utf-8?B?UG1hbStsbmd0ekRNR3V1a1pYMXhhUzFrYW0weGVJbkE5RWFRMzhieThjU1Js?=
 =?utf-8?B?eUF4WXVHNFM1NUFtV3dVbzArSTVDVWJtVzZWQ1p2YzAzZUNjUEZabTN2RG1s?=
 =?utf-8?B?SHc2YytPd1hjM01rWlFxL2gvdjdJWHk5Ukh0M05FUE5UVWZ1Y29yZkZTTS9D?=
 =?utf-8?B?V0pEdTFkRnBkc2d5Ym41MkU1eXREN1ZSU0UveUkybFV5bGNmTGZYK0tEQ094?=
 =?utf-8?B?eldFeWVwbkpEbHdqbWVhd2ZkK3k2Qkp3dW5mOUpYbXdoMXZHK245TlQ4VUdx?=
 =?utf-8?B?ODU4S1lGZTh1ejhYdEJGNE93Y0ZNbGVaTXROMjJPaEZyNmRCbWlDbklQYlJN?=
 =?utf-8?B?elRtVU5GMkRiMGNLdUdtZ3BsN1NWL080WnV5cjhabkt6d1pTRTU5dlBpNmZk?=
 =?utf-8?B?d1ZzdGt1QzdEaGh4NGl4Rm5QeG00YW5sZExDa0Rid0JIV21OeEVtZHdvaC9E?=
 =?utf-8?B?cW1OVFViOXZ5NUZDVkdGNWswOTlTMmhkY3hqOU5IRzd3a0E4TDhiNXB0b0o1?=
 =?utf-8?B?N0RJMmRyemUzUzhOaU5yQ1ZlbURmbTlPZENScTJtdTJDdmZ4ODVHaG9rLytx?=
 =?utf-8?B?YTRxRTVQbktCa01vZUdNTGhQbm1rNWNROC9GUzQrU3I1dTZlUVZkeDhETWVR?=
 =?utf-8?B?MEtmNUp5NEQ2UmEzb05TcmM3R0FZdkg0RVVZOThGQWtKUmRCNWM1cmdBMmtL?=
 =?utf-8?B?bkZkbDRVN1c2ZHl3U0ZRb1J5cng3TXVVNVI2M3ZCUk9SM1VwVXZ4cS9Gd3N0?=
 =?utf-8?B?c1pNVEp3THR5eTZZYXE5T2FNS3hyNHBOZVp6NGtEMjZRSDNIRFdSQ0ExQTY1?=
 =?utf-8?B?NkpmK3p0WDhEdlJ6aHZlbyt6R0V3OFAxcm4vYU5BSnYyWjBFTW5HZVRjV2VU?=
 =?utf-8?B?TmZJTFN3K1RpeFo4WWhYTFZpeDBFSmNFakZ6Qkt6dE1Yc1lsK3RsRE9jZE5L?=
 =?utf-8?B?cElLazE1OXBOU0Z2UUZVNmRad2RGQTI2ZTNZNUhQZ0svdzJWbUVYUTJYVzBT?=
 =?utf-8?B?dFU1b01hSHk1eDdiY3dPbG1vQlhoNWltcXZ5OE1XZ0pzSjdUcHdyY1dMckUv?=
 =?utf-8?B?ZnlUVkRtcjNlaHBLbHE1U3BrWXBOd1U2YVVyL1B6cjhTaEhEaUc3UFpGWWxU?=
 =?utf-8?B?S01ObVlaenZudVh1aXpkeG0zb1Q3Vi9pc1VBZ0RKL0Z0VlRyeStBUmxSSUh2?=
 =?utf-8?Q?gsUQARB9OIeAUWZ?=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 143f69fc-1569-48c3-e2da-08db60497c74
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 May 2023 13:34:54.4174
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: U8a62VdDbJ38jB5y4XZDdB8GAN//JUnYUyEYEyCMvPxl4PUZ/JiSZ9H+LnUqJv0Ixzj8JP2gw+GXDrCgX6azGA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR03MB7489

On Tue, May 23, 2023 at 01:30:51PM +0200, Jan Beulich wrote:
> Recent gas versions generate minimalistic Dwarf debug info for items
> annotated as functions and having their sizes specified [1]. "Borrow"
> Arm's END() and (remotely) derive other annotation infrastructure from
> Linux'es.
> 
> For switch_to_kernel() and restore_all_guest() so far implicit alignment
> (from being first in their respective sections) is being made explicit
> (as in: using FUNC() without 2nd argument). Whereas for
> {,compat}create_bounce_frame() and autogen_entrypoints[] alignment is
> newly arranged for.
> 
> Except for the added alignment padding (including their knock-on
> effects) no change in generated code/data.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> [1] https://sourceware.org/git?p=binutils-gdb.git;a=commitdiff;h=591cc9fbbfd6d51131c0f1d4a92e7893edcc7a28
> ---
> v2: Full rework.
> ---
> Only two of the assembly files are being converted for now. More could
> be done right here or as follow-on in separate patches.
> 
> In principle the framework should be possible to use by other
> architectures as well. If we want this, the main questions are going to
> be:
> - What header file name? (I don't really like Linux'es linkage.h, so I'd
>   prefer e.g. asm-defns.h or asm_defns.h as we already have in x86.)
> - How much per-arch customization do we want to permit up front (i.e.
>   without knowing how much of it is going to be needed)? Initially I'd
>   expect only the default function alignment (and padding) to require
>   per-arch definitions.
> 
> Note that the FB-label in autogen_stubs() cannot be converted just yet:
> Such labels cannot be used with .type. We could further diverge from
> Linux'es model and avoid setting STT_NOTYPE explicitly (that's the type
> labels get by default anyway).
> 
> Note that we can't use ALIGN() (in place of SYM_ALIGN()) as long as we
> still have ALIGN.

FWIW, as I'm looking into using the newly added macros to add
annotations suitable for live-patching, I would need to convert some
of the LABEL usages into their own functions, as it's not possible to
livepatch a function that has labels jumped to from code paths
outside of the function.

> --- a/xen/arch/x86/include/asm/asm_defns.h
> +++ b/xen/arch/x86/include/asm/asm_defns.h
> @@ -81,6 +81,45 @@ register unsigned long current_stack_poi
>  
>  #ifdef __ASSEMBLY__
>  
> +#define SYM_ALIGN(algn...) .balign algn
> +
> +#define SYM_L_GLOBAL(name) .globl name
> +#define SYM_L_WEAK(name)   .weak name

Wouldn't this be better added when required?  I can't spot any weak
symbols in assembly ATM, and you don't introduce any _WEAK macro
variants below.

> +#define SYM_L_LOCAL(name)  /* nothing */
> +
> +#define SYM_T_FUNC         STT_FUNC
> +#define SYM_T_DATA         STT_OBJECT
> +#define SYM_T_NONE         STT_NOTYPE
> +
> +#define SYM(name, typ, linkage, algn...)          \
> +        .type name, SYM_T_ ## typ;                \
> +        SYM_L_ ## linkage(name);                  \
> +        SYM_ALIGN(algn);                          \
> +        name:
> +
> +#define END(name) .size name, . - name
> +
> +#define ARG1_(x, y...) (x)
> +#define ARG2_(x, y...) ARG1_(y)
> +
> +#define LAST__(nr) ARG ## nr ## _
> +#define LAST_(nr)  LAST__(nr)
> +#define LAST(x, y...) LAST_(count_args(x, ## y))(x, ## y)

I find LAST not very descriptive; wouldn't it be better named OPTIONAL()
or similar? (and maybe placed in lib.h?)

> +
> +#define FUNC(name, algn...) \
> +        SYM(name, FUNC, GLOBAL, LAST(16, ## algn), 0x90)

A rant: should the alignment of functions use a different padding
(ie: ret or ud2?), in order to prevent stray jumps landing in the
padding and falling through into the next function?  That would also
prevent the implicit fall-through used in some places.
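
A hypothetical variant of the FUNC() definition from this patch,
changing only the fill byte so that padding between functions traps
(0xcc, INT3) instead of executing as NOPs; purely illustrative, not
something the patch does:

```
/* Hypothetical: make stray jumps into inter-function padding trap
 * instead of sliding through NOPs into the next function. */
#define FUNC(name, algn...) \
        SYM(name, FUNC, GLOBAL, LAST(16, ## algn), 0xcc)
```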

> +#define LABEL(name, algn...) \
> +        SYM(name, NONE, GLOBAL, LAST(16, ## algn), 0x90)
> +#define DATA(name, algn...) \
> +        SYM(name, DATA, GLOBAL, LAST(0, ## algn), 0xff)
> +
> +#define FUNC_LOCAL(name, algn...) \
> +        SYM(name, FUNC, LOCAL, LAST(16, ## algn), 0x90)
> +#define LABEL_LOCAL(name, algn...) \
> +        SYM(name, NONE, LOCAL, LAST(16, ## algn), 0x90)

Is there much value in adding local labels to the symbol table?

AFAICT the main purpose of this macro is to declare aligned labels,
avoiding the ALIGN + label name pair, but it could likely drop the
.type directive?

> +#define DATA_LOCAL(name, algn...) \
> +        SYM(name, DATA, LOCAL, LAST(0, ## algn), 0xff)
> +
>  #ifdef HAVE_AS_QUOTED_SYM
>  #define SUBSECTION_LBL(tag)                        \
>          .ifndef .L.tag;                            \
> --- a/xen/arch/x86/x86_64/compat/entry.S
> +++ b/xen/arch/x86/x86_64/compat/entry.S
> @@ -8,10 +8,11 @@
>  #include <asm/page.h>
>  #include <asm/processor.h>
>  #include <asm/desc.h>
> +#include <xen/lib.h>

Shouldn't the inclusion of lib.h be in asm_defns.h, as that's where the
usage of count_args() resides? (I assume that's why lib.h is added
here.)

>  #include <public/xen.h>
>  #include <irq_vectors.h>
>  
> -ENTRY(entry_int82)
> +FUNC(entry_int82)
>          ENDBR64
>          ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
>          pushq $0
> @@ -27,9 +28,10 @@ ENTRY(entry_int82)
>  
>          mov   %rsp, %rdi
>          call  do_entry_int82
> +END(entry_int82)
>  
>  /* %rbx: struct vcpu */
> -ENTRY(compat_test_all_events)
> +FUNC(compat_test_all_events)
>          ASSERT_NOT_IN_ATOMIC
>          cli                             # tests must not race interrupts
>  /*compat_test_softirqs:*/
> @@ -66,24 +68,21 @@ compat_test_guest_events:
>          call  compat_create_bounce_frame
>          jmp   compat_test_all_events
>  
> -        ALIGN
>  /* %rbx: struct vcpu */
> -compat_process_softirqs:
> +LABEL_LOCAL(compat_process_softirqs)

Shouldn't this be a local function rather than a local label?  It's
fully isolated.  I guess it would create issues with
compat_process_trap, as we would then require a jump from the
preceding compat_process_nmi.

>          sti
>          call  do_softirq
>          jmp   compat_test_all_events
>  
> -        ALIGN
>  /* %rbx: struct vcpu, %rdx: struct trap_bounce */
> -.Lcompat_process_trapbounce:
> +LABEL_LOCAL(.Lcompat_process_trapbounce)

It's my understanding that here the '.L' prefix is pointless, since
LABEL_LOCAL() will forcibly create a symbol table entry for the label
due to its use of .type?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 29 13:39:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 13:39:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540649.842540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3d5c-0001Hr-Ny; Mon, 29 May 2023 13:39:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540649.842540; Mon, 29 May 2023 13:39:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3d5c-0001Hk-Jn; Mon, 29 May 2023 13:39:12 +0000
Received: by outflank-mailman (input) for mailman id 540649;
 Mon, 29 May 2023 13:39:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FxNl=BS=citrix.com=prvs=506ffa617=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q3d5b-0001He-Iq
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 13:39:11 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2f94d852-fe26-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 15:39:09 +0200 (CEST)
Received: from mail-dm6nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 29 May 2023 09:39:06 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by PH7PR03MB7091.namprd03.prod.outlook.com (2603:10b6:510:2a4::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.21; Mon, 29 May
 2023 13:39:01 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.020; Mon, 29 May 2023
 13:39:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f94d852-fe26-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685367549;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=V6kbflP2uyH2jf1gCs1e0iBxaHajIoKSjgqTEclgZ7M=;
  b=VmxQ0MeTcc3QsMpYm019ShUeKcjL9n/+18Br+i06XQxOHDAV84TuyaFh
   FkHqjzm7jP3oGJWKYWfK/bd0ANxb3bbgQScc06qmmG749wObIForsJeA3
   NY/9d36AXKHiTnpESAMGZU/tccVZV7zQGi6JtwzRJQdOYMSao2OLOHAji
   Y=;
X-IronPort-RemoteIP: 104.47.59.169
X-IronPort-MID: 110692140
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:r+6Z668I1Czb64k+PDtmDrUDzn6TJUtcMsCJ2f8bNWPcYEJGY0x3x
 jFLWmGDafaJYWWhfdh/Pomz/UwH78TQyd5lQVc6pSk8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ird7ks31BjOkGlA5AdmOKoa5Aa2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklJ3
 qBEKS8mLSu82cKa6YCwVrZMmZ4KeZyD0IM34hmMzBn/JNN/G9XvZvuP4tVVmjAtmspJAPDSI
 dIDbiZiZwjBZBsJPUoLDJU5n6GjgXyXnz9w8QrJ4/ZopTWCilUvgdABM/KMEjCObd9SkUuC4
 HrP4kzyAw0ANczZwj2Amp6prraWxX2qAttOTNVU8NZ2j3u52C9PWSY4SEa/v6LptQ2MYtZ2f
 hl8Fi0G6PJaGFaQZtv3UgC8oXWElgUBQNcWGOo/gCmdx6yR7wuHC2wsSj9adMdgpMIwXSYt1
 FKCg5XuHzMHmKKRYWKQ8PGTtzzaESoIKW4PYwcUQA1D5MPsyKkolQ7GRNtnFK+zj/X2FCv2z
 jTMqzIx750NisoM27S+7ErwiTumrZjUTSY4/gzSGGmi62tRboO/e5ah71Sd6P9aNZuYVXGIp
 n1CkM+bhMgECpuHhSGWQOEAGbivz/mAOTzYx1VoGvEJ/jCs4GKqfJoW7it3IkxoKe4bdTSva
 0jW0Sta45lVO3mmZLF2eKq+Ds0rye7rEtGNaxzPRt9HY5w0eArZ+ihrPBSUxzq0zhlqlrwjM
 5CGd8rqFWwdFals0DuxQaEazKMvwSc9g2jUQPgX0iia7FZXX1bNIZ9tDbdERrpRAH+syOkNz
 +tiCg==
IronPort-HdrOrdr: A9a23:Ezt7/KyS3EgtOD5m8B3MKrPwHL1zdoMgy1knxilNoHtuHvBw9v
 rAoB1/73TJYVkqNk3I9ergBED4ewKkyXcW2/hyAV7SZmnbUQKTRekJgLcKqAeQeBEWmNQtsp
 tIQuxTD8DxEEg/reuS2njfLz/4+qjjzEl/v5a780tQ
X-Talos-CUID: 9a23:8DeAfW6yhZ0myS5WDdss82VIRsE6WT7m8ynwD2aKAno2aKLPYArF
X-Talos-MUID: =?us-ascii?q?9a23=3AP+imrAx5C16T0+fUW63VZ+05ZwSaqLihLmoT0sl?=
 =?us-ascii?q?XgOeZKChuYGeizwaORqZyfw=3D=3D?=
X-IronPort-AV: E=Sophos;i="6.00,201,1681185600"; 
   d="scan'208";a="110692140"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nIujZPqEPQiLU5QR/sDM3xj/7Mb3JLn9uM0cP0KkbcNqq9djeQ06cIxy1bRNZ+KRd0olIeakarWv41zQHnnWa0ZsFViSLLf3vJ4k7P1gLLUSap6xvhzOGZwPb4w0QdSwHq2kbqS8oea+AxmLXA0a4lnC1BSlwjRWEwQ8WCO284S03Cuw14kAyArzer08CUoaBCAm/TFPYyFRRy0yzBi79buQFUAFutmmTo4DIr9FV+/SdDAFiCEhZQ/ghJXpce65PtE0yNobb0xMWd67XofNiIzvkqMrmHYpe+G+cd0/S+TBWiC1uadZIh/DaveOpHbP6/lrXVkOZ+VGL4njIeietw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=CGDcpZLIe32txpHdFxFnu4res4G7PYnJowBw6+3JgUs=;
 b=NFsY6Q8tuO764DFyd7ZooCoe4K70dVNpcbHMZI61eZOqwxxUwLWSRYQrzJlU+IlKff98oTqo72YTwblsl2ZNNQrBboU+HkC6np6UNoODOCKKAPPH1nV2N4DtM6tgA4t+lxeIe2ZlRxpmuY1JV/nA+iyYgbnKB+4U87C0UbEk51V7xRR+DFF3Q8SyGoiLLc1ZU5XMzEdO4S9SX2CRwjIivtS615KFcmbhvCFCqtqZZOf7OMshqvTO7Q1bQK839iErsN70bwKK4k4fDgwRBYG9zxemrt5fI13gmCy3+zy97mvq1UEaAFiXbB7R63Hd2pfvs0dqQzrqmmozM4hUKZ7eFw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CGDcpZLIe32txpHdFxFnu4res4G7PYnJowBw6+3JgUs=;
 b=PorGPeUL2S2vTa5JasEipwLWqLVsJMa/8TxJBl6SekPoVx0NCqvza7HOMndhAOLJ9rdEK4hNhTnibpzdoqthVZCp6ukaE/rio1CH0ECw9lHBvQ2L54ZR845clFoUwWk5Hc9bLPadYoQ3FPUf0YhzcciXYdF8MOTglBp9eoQ/vno=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Mon, 29 May 2023 15:38:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v2 2/2] x86: also mark assembler globals hidden
Message-ID: <ZHSq73d8EKfvG9Ib@Air-de-Roger>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
 <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
 <52f884cf-b88f-6fd9-e4d9-79da2bfde422@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <52f884cf-b88f-6fd9-e4d9-79da2bfde422@suse.com>
X-ClientProxiedBy: LO4P265CA0133.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c4::8) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|PH7PR03MB7091:EE_
X-MS-Office365-Filtering-Correlation-Id: f6cc048d-4e05-48c0-f390-08db604a0fab
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	AtOJZAlDCa4nLI6i24orfhXdESs63InNGcpL1ynfNPlGZe8ghdJV8KZRYTwxibbwqB+C1OAPHdWk8rV9oZ5ao3ut1T18TyvtPhr2f35QMeKFbO72PeOc0Q613MVoE7Oj6lQnm/z+D0aYOo0iRl+tG7KK2kBjNZhx7WiI1WJze9Bqz8HiP3Fx2uOt0+NzAmP/R1jX7LrSJNTtuQFBAjMhhSSbwwmHwrMHBUjeu0p+507ZaA7OlqNU5SlAQRRn94Vb7GyyIuFeYRLg3ClSueAuyiFwQB0XJ/xW03fhO7uU/rN8+2mxsSHRWT7gVcjQzmkc+iAX7bVi2+N+CvJBJJKof+in1hSlXUDvqQEXeQpdR7ArUT2ScVzIW64Y//t6qzpefPgDNriGZwkyJxn/GOtOrAnRyC704OcnsWRNKrV7GtaxVbzUwEd6tugfOYHbZ5y0EY2vZaWZTcZ0rf6f3IXXPlFDeEtN4LhJB88/TYD/kQvRrveWnKcJNwCaRCpB6jdmju/ZPJP7PQcsKZctFZ19BXIcnARsaRZ6J6k7yH3O53eMohw/5+IrdPJ8QS4AQdsT
X-Forefront-Antispam-Report:
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f6cc048d-4e05-48c0-f390-08db604a0fab
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 May 2023 13:39:01.1932
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dJd50uZCQj4MxKRE0vDZKrE/mBjNFJTOKCrFU4fSytFPtpjd+JAaOCbxP5LN0eS8xwavqHKtipnMSQAR9dBIBQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR03MB7091

On Tue, May 23, 2023 at 01:31:16PM +0200, Jan Beulich wrote:
> Let's have assembler symbols be consistent with C ones. In principle
> there are (a few) cases where gas can produce smaller code this way,
> just that for now there's a gas bug causing smaller code to be emitted
> even when that shouldn't be the case.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 29 15:02:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 15:02:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540655.842550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3eNj-0001xJ-Ik; Mon, 29 May 2023 15:01:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540655.842550; Mon, 29 May 2023 15:01:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3eNj-0001xC-F1; Mon, 29 May 2023 15:01:59 +0000
Received: by outflank-mailman (input) for mailman id 540655;
 Mon, 29 May 2023 15:01:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3eNi-0001x2-BQ; Mon, 29 May 2023 15:01:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3eNi-00010O-1o; Mon, 29 May 2023 15:01:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3eNh-00057u-Jl; Mon, 29 May 2023 15:01:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3eNh-0003aw-JL; Mon, 29 May 2023 15:01:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Q+T0mB5I8re5shnjqiZfkRoR/1maJslybSCzFGsjpGg=; b=0UBIRSaZDinFpyj49cJjPN5GTs
	u8FXzNVacT7Npp2JcUmf4nRZ+eErLsuajgid7bdThMkHxngOD7mpcdD8Oq9c1uoINaLuDksyYatKX
	ng+0gVEzRS0fepg2deJuAEI3CBvhcI360xYp4E3qXS/fYTjIesH4aDpSQJ9GLAyupLGs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180999-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 180999: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=04c5b3023e49c35d291f41d2c39b4d12a62b8f9c
X-Osstest-Versions-That:
    ovmf=c0bce66068c76b5d46f901027daf1074316031ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 15:01:57 +0000

flight 180999 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180999/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 04c5b3023e49c35d291f41d2c39b4d12a62b8f9c
baseline version:
 ovmf                 c0bce66068c76b5d46f901027daf1074316031ac

Last test of basis   180998  2023-05-29 10:43:39 Z    0 days
Testing same since   180999  2023-05-29 13:12:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c0bce66068..04c5b3023e  04c5b3023e49c35d291f41d2c39b4d12a62b8f9c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 29 16:03:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 16:03:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540661.842560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3fLT-0000M7-11; Mon, 29 May 2023 16:03:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540661.842560; Mon, 29 May 2023 16:03:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3fLS-0000M0-UT; Mon, 29 May 2023 16:03:42 +0000
Received: by outflank-mailman (input) for mailman id 540661;
 Mon, 29 May 2023 16:03:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zYLD=BS=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1q3fLR-0000Lu-5e
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 16:03:41 +0000
Received: from smtp-bc09.mail.infomaniak.ch (smtp-bc09.mail.infomaniak.ch
 [2001:1600:3:17::bc09])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e2aa35c-fe3a-11ed-8611-37d641c3527e;
 Mon, 29 May 2023 18:03:36 +0200 (CEST)
Received: from smtp-3-0000.mail.infomaniak.ch (unknown [10.4.36.107])
 by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QVL131MNPzMqYfG;
 Mon, 29 May 2023 18:03:35 +0200 (CEST)
Received: from unknown by smtp-3-0000.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QVL0w4znNz3vb2; Mon, 29 May 2023 18:03:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e2aa35c-fe3a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1685376215;
	bh=n6vDyeLcnXuJ2e1Kx1oy6dfLDQBfFmEzNL/ERVZDqQU=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=kQUNssi6VBi8KE2R5ANm9fpqaLV9V+BO/pl/pZOb+r07FAQvClXzKkx8C9t2Hytji
	 /4n7ba89FjY/Zfl8t2rhFg1o9pE4iawj/J1DIhc82vGXeZvlW/Z9zID2JwoJwYww3/
	 p8LCe3PI5P3tfO/+Mau3UxTlSDY6CLpnXs/pesiA=
Message-ID: <90ca1173-4ec1-099f-5744-3d6dc29d919d@digikod.net>
Date: Mon, 29 May 2023 18:03:28 +0200
MIME-Version: 1.0
User-Agent:
Subject: Re: [PATCH v1 3/9] virt: Implement Heki common code
Content-Language: en-US
To: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>,
 Wei Liu <wei.liu@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H . Peter Anvin" <hpa@zytor.com>,
 Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>,
 Paolo Bonzini <pbonzini@redhat.com>, Sean Christopherson
 <seanjc@google.com>, Thomas Gleixner <tglx@linutronix.de>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 James Morris <jamorris@linux.microsoft.com>,
 John Andersen <john.s.andersen@intel.com>,
 Marian Rotariu <marian.c.rotariu@gmail.com>,
 =?UTF-8?Q?Mihai_Don=c8=9bu?= <mdontu@bitdefender.com>,
 =?UTF-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 =?UTF-8?Q?=c8=98tefan_=c8=98icleru?= <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-4-mic@digikod.net>
 <ZFkxhWhjyIzrPkt8@liuwe-devbox-debian-v2>
 <e8fcc1b8-6c0f-9556-a110-bd994d3fe3c6@linux.microsoft.com>
From: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>
In-Reply-To: <e8fcc1b8-6c0f-9556-a110-bd994d3fe3c6@linux.microsoft.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha


On 17/05/2023 14:47, Madhavan T. Venkataraman wrote:
> Sorry for the delay. See inline...
> 
> On 5/8/23 12:29, Wei Liu wrote:
>> On Fri, May 05, 2023 at 05:20:40PM +0200, Mickaël Salaün wrote:
>>> From: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
>>>
>>> Hypervisor Enforced Kernel Integrity (Heki) is a feature that will use
>>> the hypervisor to enhance guest virtual machine security.
>>>
>>> Configuration
>>> =============
>>>
>>> Define the config variables for the feature. This feature depends on
>>> support from the architecture as well as the hypervisor.
>>>
>>> Enabling HEKI
>>> =============
>>>
>>> Define a kernel command line parameter "heki" to turn the feature on or
>>> off. By default, Heki is on.
>>
>> For such a newfangled feature can we have it off by default? Especially
>> when there are unsolved issues around dynamically loaded code.
>>
> 
> Yes. We can certainly do that.

By default the Kconfig option should definitely be off. We also need to 
change the Kconfig option so that it can only be set if kernel modules, 
JIT, kprobes and other dynamic text patching features are disabled at 
build time (see discussion with Sean).

With this new Kconfig option for the static case, I think the boot 
option should be on by default, because otherwise it would not really 
be possible to switch it back on later without risking silently 
breaking users' machines. However, we should rename this option to 
something like "heki_static" to be in line with the new Kconfig option.

The goal of Heki is to improve and complement kernel self-protection 
mechanisms (which don't have boot time options), and to make it 
available to everyone; see 
https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings
In practice, requiring a boot option to enable Heki (rather than one 
to disable it) would defeat that goal.


> 
>>>
>> [...]
>>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>>> index 3604074a878b..5cf5a7a97811 100644
>>> --- a/arch/x86/Kconfig
>>> +++ b/arch/x86/Kconfig
>>> @@ -297,6 +297,7 @@ config X86
>>>   	select FUNCTION_ALIGNMENT_4B
>>>   	imply IMA_SECURE_AND_OR_TRUSTED_BOOT    if EFI
>>>   	select HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
>>> +	select ARCH_SUPPORTS_HEKI		if X86_64
>>
>> Why is there a restriction on X86_64?
>>
> 
> We want to get the PoC working and reviewed on X64 first. We have tested this only on X64 so far.

X86_64 includes Intel CPUs, which can support EPT and MBEC, which are 
requirements for Heki. ARM might have similar features, but we're 
focused on x86 for now.

As a side note, I only have access to an Intel machine, which means that 
I cannot work on AMD support. However, I'll be pleased to implement such 
support if I get access to a machine with a recent AMD CPU.


> 
>>>   
>>>   config INSTRUCTION_DECODER
>>>   	def_bool y
>>> diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/asm/sections.h
>>> index a6e8373a5170..42ef1e33b8a5 100644
>>> --- a/arch/x86/include/asm/sections.h
>>> +++ b/arch/x86/include/asm/sections.h
>> [...]
>>>   
>>> +#ifdef CONFIG_HEKI
>>> +
>>> +/*
>>> + * Gather all of the statically defined sections so heki_late_init() can
>>> + * protect these sections in the host page table.
>>> + *
>>> + * The sections are defined under "SECTIONS" in vmlinux.lds.S
>>> + * Keep this array in sync with SECTIONS.
>>> + */
>>
>> This seems a bit fragile, because it requires constant attention from
>> people who care about this functionality. Can this table be
>> automatically generated?
>>
> 
> We realize that. But I don't know of a way this table can be automatically generated. Also, the permissions
> for each section are specific to the use of that section. The developer who introduces a new section is the
> one who knows what the permissions should be.
> 
> If anyone has ideas on how we could generate this table automatically, or even just add a build-time
> check of some sort, please let us know.

One clean solution might be to parse the vmlinux.lds.S file, extract 
the sections and their permissions, and fill that into an 
automatically generated header file.

Another way to do it would be to extract the sections and associated 
permissions with objdump, but that could be an issue because of the 
longer build time.

A better solution might be to extract such sections and their 
associated permissions at boot time; I believe the kernel already has 
helpers for this that are used during early boot.


From xen-devel-bounces@lists.xenproject.org Mon May 29 16:05:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 16:05:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540665.842569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3fMn-0000tb-BP; Mon, 29 May 2023 16:05:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540665.842569; Mon, 29 May 2023 16:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3fMn-0000tU-8M; Mon, 29 May 2023 16:05:05 +0000
Received: by outflank-mailman (input) for mailman id 540665;
 Mon, 29 May 2023 16:05:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3fMm-0000sz-02; Mon, 29 May 2023 16:05:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3fMl-0002tY-FN; Mon, 29 May 2023 16:05:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3fMk-0006Yr-Tk; Mon, 29 May 2023 16:05:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3fMk-0003w7-TH; Mon, 29 May 2023 16:05:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HDJP1QPbkaO/NVwHEFNipyx5RrxiXpeH2OBRuiGwT1w=; b=Vp+2thI0FuGvOlwARC8mqV9Tfd
	4ybIn7hGrFCPYBC1vI5KkbtmM5O7ZnnV1qO8sfJkzxV0mBWGX/AVqQcZIxTKbSlOeAv+IfthSw+/b
	wpv6amVXEZwATJhP3zvEUspscABGsz8iG+I3pg5kPyGRNZyqo8nicTtMCTEc9aTZ4vsU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180995-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 180995: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-freebsd11-amd64:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:leak-check/check:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e338142b39cf40155054f95daa28d210d2ee1b2d
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 16:05:02 +0000

flight 180995 linux-linus real [real]
flight 181000 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/180995/
http://logs.test-lab.xenproject.org/osstest/logs/181000/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-freebsd11-amd64 19 guest-localmigrate/x10 fail pass in 181000-retest
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 23 leak-check/check fail pass in 181000-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                e338142b39cf40155054f95daa28d210d2ee1b2d
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   42 days
Failing since        180281  2023-04-17 06:24:36 Z   42 days   79 attempts
Testing same since   180995  2023-05-29 03:59:06 Z    0 days    1 attempts

------------------------------------------------------------
2552 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             fail    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 322787 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 29 16:43:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 16:43:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540673.842580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3fy1-0005Ix-Et; Mon, 29 May 2023 16:43:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540673.842580; Mon, 29 May 2023 16:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3fy1-0005Iq-C7; Mon, 29 May 2023 16:43:33 +0000
Received: by outflank-mailman (input) for mailman id 540673;
 Mon, 29 May 2023 16:43:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3fxz-0005Ig-Q1; Mon, 29 May 2023 16:43:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3fxz-0003vM-Ir; Mon, 29 May 2023 16:43:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3fxz-0007SU-58; Mon, 29 May 2023 16:43:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3fxz-0005qX-4i; Mon, 29 May 2023 16:43:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rq5LJMXiN5dDdOB9S223qxRXVUTdaBvZOBp4Himqhe8=; b=lEdOWg7aB9Ii0AGNdDqNrANUXZ
	mWIPuAL6KztNLc+RBxFkmsgfmlKjgDt8KKfsIrvyw0Pe/YNruQ3MRSJ4Gq2TICwqslnO+iGGpAC6T
	L6qA8yW3ntrpZniCKuigzqSCQaku85MWi59sOL1YMglrBvQ2Rad2R0yqZKm9It54EPnU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-180996-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 180996: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ac84b57b4d74606f7f83667a0606deef32b2049d
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 16:43:31 +0000

flight 180996 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/180996/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair     10 xen-install/src_host fail in 180990 pass in 180996
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail in 180990 pass in 180996
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat  fail pass in 180977
 test-amd64-i386-pair         11 xen-install/dst_host       fail pass in 180990

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                ac84b57b4d74606f7f83667a0606deef32b2049d
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   12 days
Failing since        180699  2023-05-18 07:21:24 Z   11 days   46 attempts
Testing same since   180972  2023-05-27 03:59:19 Z    2 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8318 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 29 16:48:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 29 May 2023 16:48:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540679.842590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3g2W-0005vo-0I; Mon, 29 May 2023 16:48:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540679.842590; Mon, 29 May 2023 16:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3g2V-0005vh-Td; Mon, 29 May 2023 16:48:11 +0000
Received: by outflank-mailman (input) for mailman id 540679;
 Mon, 29 May 2023 16:48:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zYLD=BS=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1q3g2U-0005vZ-6e
 for xen-devel@lists.xenproject.org; Mon, 29 May 2023 16:48:10 +0000
Received: from smtp-42af.mail.infomaniak.ch (smtp-42af.mail.infomaniak.ch
 [84.16.66.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 971b3151-fe40-11ed-b231-6b7b168915f2;
 Mon, 29 May 2023 18:48:08 +0200 (CEST)
Received: from smtp-2-0001.mail.infomaniak.ch (unknown [10.5.36.108])
 by smtp-2-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4QVM0S0h90zMqS8N;
 Mon, 29 May 2023 18:48:08 +0200 (CEST)
Received: from unknown by smtp-2-0001.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4QVM0N4qpgzMpvXY; Mon, 29 May 2023 18:48:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 971b3151-fe40-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=digikod.net;
	s=20191114; t=1685378887;
	bh=L+AoM15vKXrvn30NczrqA+aX9dwRdpATxmicKsa0b54=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=NIV0lHsv0wTD7BCC4ZXt7w9oowUL9SV7UxKoOT7S1KqEP+2aSUrieQxP9LJP59+mO
	 ZNd3IkCScExVnFAtc3C902qJRT2LHk41z0nJeTFW/IpjysBJPb3bghczvpI3mYK0CS
	 5fhux6+oZzWtW4Q9BILMrxW1qJ1honAjKd3plkns=
Message-ID: <901ff104-215c-8e81-fbae-5ecd8fa94449@digikod.net>
Date: Mon, 29 May 2023 18:48:03 +0200
MIME-Version: 1.0
User-Agent:
Subject: Re: [PATCH v1 5/9] KVM: x86: Add new hypercall to lock control
 registers
Content-Language: en-US
To: Wei Liu <wei.liu@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>, Dave Hansen
 <dave.hansen@linux.intel.com>, "H . Peter Anvin" <hpa@zytor.com>,
 Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>,
 Paolo Bonzini <pbonzini@redhat.com>, Sean Christopherson
 <seanjc@google.com>, Thomas Gleixner <tglx@linutronix.de>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 James Morris <jamorris@linux.microsoft.com>,
 John Andersen <john.s.andersen@intel.com>,
 "Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
 Marian Rotariu <marian.c.rotariu@gmail.com>,
 =?UTF-8?Q?Mihai_Don=c8=9bu?= <mdontu@bitdefender.com>,
 =?UTF-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 =?UTF-8?Q?=c8=98tefan_=c8=98icleru?= <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-6-mic@digikod.net>
 <ZFlllHjntehpthma@liuwe-devbox-debian-v2>
From: =?UTF-8?Q?Micka=c3=abl_Sala=c3=bcn?= <mic@digikod.net>
In-Reply-To: <ZFlllHjntehpthma@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Infomaniak-Routing: alpha


On 08/05/2023 23:11, Wei Liu wrote:
> On Fri, May 05, 2023 at 05:20:42PM +0200, Mickaël Salaün wrote:
>> This enables guests to lock their CR0 and CR4 registers with a subset of
>> X86_CR0_WP, X86_CR4_SMEP, X86_CR4_SMAP, X86_CR4_UMIP, X86_CR4_FSGSBASE
>> and X86_CR4_CET flags.
>>
>> The new KVM_HC_LOCK_CR_UPDATE hypercall takes two arguments.  The first
>> is to identify the control register, and the second is a bit mask to
>> pin (i.e. mark as read-only).
>>
>> These register flags should already be pinned by Linux guests, but once
>> compromised, this self-protection mechanism could be disabled, which is
>> not the case with this dedicated hypercall.
>>
>> Cc: Borislav Petkov <bp@alien8.de>
>> Cc: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: H. Peter Anvin <hpa@zytor.com>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Sean Christopherson <seanjc@google.com>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
>> Cc: Wanpeng Li <wanpengli@tencent.com>
>> Signed-off-by: Mickaël Salaün <mic@digikod.net>
>> Link: https://lore.kernel.org/r/20230505152046.6575-6-mic@digikod.net
> [...]
>>   	hw_cr4 = (cr4_read_shadow() & X86_CR4_MCE) | (cr4 & ~X86_CR4_MCE);
>>   	if (is_unrestricted_guest(vcpu))
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index ffab64d08de3..a529455359ac 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -7927,11 +7927,77 @@ static unsigned long emulator_get_cr(struct x86_emulate_ctxt *ctxt, int cr)
>>   	return value;
>>   }
>>   
>> +#ifdef CONFIG_HEKI
>> +
>> +extern unsigned long cr4_pinned_mask;
>> +
> 
> Can this be moved to a header file?

Yep, but I'm not sure which one. Any preference Kees?


> 
>> +static int heki_lock_cr(struct kvm *const kvm, const unsigned long cr,
>> +			unsigned long pin)
>> +{
>> +	if (!pin)
>> +		return -KVM_EINVAL;
>> +
>> +	switch (cr) {
>> +	case 0:
>> +		/* Cf. arch/x86/kernel/cpu/common.c */
>> +		if (!(pin & X86_CR0_WP))
>> +			return -KVM_EINVAL;
>> +
>> +		if ((read_cr0() & pin) != pin)
>> +			return -KVM_EINVAL;
>> +
>> +		atomic_long_or(pin, &kvm->heki_pinned_cr0);
>> +		return 0;
>> +	case 4:
>> +		/* Checks for irrelevant bits. */
>> +		if ((pin & cr4_pinned_mask) != pin)
>> +			return -KVM_EINVAL;
>> +
> 
> It is enforcing the host mask on the guest, right? If the guest's set is a
> superset of the host's, then it will get rejected.
> 
> 
>> +		/* Ignores bits not present in host. */
>> +		pin &= __read_cr4();
>> +		atomic_long_or(pin, &kvm->heki_pinned_cr4);

We assume that the host's mask is a superset of the guest's mask. We 
should probably check against the absolute set of supported bits 
instead, even if it would be unusual for the host not to support these 
bits.
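A minimal sketch of that alternative check, outside KVM; `HEKI_HOST_SUPPORTED_CR4` is a hypothetical stand-in for a mask of bits the host can actually enforce, not an existing kernel symbol:

```c
#include <assert.h>

/* Illustrative CR4 bit definitions (values match arch/x86). */
#define X86_CR4_SMEP (1UL << 20)
#define X86_CR4_SMAP (1UL << 21)

/* Hypothetical mask of CR4 bits the host can actually enforce. */
#define HEKI_HOST_SUPPORTED_CR4 (X86_CR4_SMEP | X86_CR4_SMAP)

/*
 * Reject a pin request containing bits the host cannot enforce,
 * instead of silently dropping them with `pin &= __read_cr4()`.
 */
int heki_validate_cr4_pin(unsigned long pin)
{
	if (pin & ~HEKI_HOST_SUPPORTED_CR4)
		return -1; /* -KVM_EINVAL in the real code */
	return 0;
}
```

With this shape a guest asking to pin a bit the host lacks gets an explicit error rather than a partially honored request.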


>> +		return 0;
>> +	}
>> +	return -KVM_EINVAL;
>> +}
>> +
>> +int heki_check_cr(const struct kvm *const kvm, const unsigned long cr,
>> +		  const unsigned long val)
>> +{
>> +	unsigned long pinned;
>> +
>> +	switch (cr) {
>> +	case 0:
>> +		pinned = atomic_long_read(&kvm->heki_pinned_cr0);
>> +		if ((val & pinned) != pinned) {
>> +			pr_warn_ratelimited(
>> +				"heki-kvm: Blocked CR0 update: 0x%lx\n", val);
> 
> I think if the message contains the VM and VCPU identifier it will
> become more useful.

Indeed, and this should be the case for all log messages, but I'd left 
that for future work. ;) I'll update the logs for the next series with a 
new kvm_warn_ratelimited() helper using the VCPU's PID.
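For reference, the invariant that heki_check_cr() enforces in the quoted patch reduces to a pinned-bits check; a standalone model of it (plain C, not KVM code):

```c
#include <assert.h>

/*
 * Model of the heki_check_cr() invariant: an update to a pinned
 * control register is allowed only if every pinned bit stays set
 * in the new value.
 */
int cr_update_allowed(unsigned long pinned, unsigned long new_val)
{
	return (new_val & pinned) == pinned;
}
```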


From xen-devel-bounces@lists.xenproject.org Mon May 29 18:01:28 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181001-mainreport@xen.org>
Subject: [ovmf test] 181001: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8e934ab9562a33191af21ce3babf1ad37a3cdc34
X-Osstest-Versions-That:
    ovmf=04c5b3023e49c35d291f41d2c39b4d12a62b8f9c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 18:01:15 +0000

flight 181001 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181001/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8e934ab9562a33191af21ce3babf1ad37a3cdc34
baseline version:
 ovmf                 04c5b3023e49c35d291f41d2c39b4d12a62b8f9c

Last test of basis   180999  2023-05-29 13:12:08 Z    0 days
Testing same since   181001  2023-05-29 15:42:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Oliver Smith-Denny <osde@linux.microsoft.com>
  Sami Mujawar <sami.mujawar@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   04c5b3023e..8e934ab956  8e934ab9562a33191af21ce3babf1ad37a3cdc34 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 29 19:27:33 2023
From: Thomas Gleixner <tglx@linutronix.de>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
In-Reply-To: <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name> <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx> <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
Date: Mon, 29 May 2023 21:27:13 +0200
Message-ID: <87bki3kkfi.ffs@tglx>

On Mon, May 29 2023 at 05:39, Kirill A. Shutemov wrote:
> On Sat, May 27, 2023 at 03:40:02PM +0200, Thomas Gleixner wrote:
> But it gets broken again on "x86/smpboot: Implement a bit spinlock to
> protect the realmode stack" with
>
> [    0.554079] .... node  #0, CPUs:        #1  #2
> [    0.738071] Callback from call_rcu_tasks() invoked.
> [   10.562065] CPU2 failed to report alive state
> [   10.566337]   #3
> [   20.570066] CPU3 failed to report alive state
> [   20.574268]   #4
> ...
>
> Notably, CPU1 is missing from the "failed to report" list. So CPU1 takes the
> lock fine but seemingly never unlocks it.
>
> Maybe trampoline_lock(%rip) in head_64.S somehow is not the same as
> &tr_lock in trampoline_64.S. I donno.

It's definitely the same in the regular startup (16bit mode), but TDX
starts up via:

trampoline_start64
  trampoline_compat
    LOAD_REALMODE_ESP <- lock

That path cannot work with the LOAD_REALMODE_ESP macro as is. The untested
patch below should cure it.
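The `lock btsl`/`pause` sequence in LOAD_REALMODE_ESP is a plain bit spinlock; a rough C analogue using compiler atomics (illustration only, not the kernel's implementation):

```c
#include <assert.h>

unsigned long tr_lock; /* stands in for the lock word in the trampoline */

/* Spin until we flip bit 0 from 0 to 1 (the "lock btsl; jnc" loop). */
void realmode_lock(unsigned long *lock)
{
	while (__atomic_fetch_or(lock, 1UL, __ATOMIC_ACQUIRE) & 1UL)
		; /* contended: pause and retry */
}

/* Clear bit 0 so the next CPU can take the realmode stack. */
void realmode_unlock(unsigned long *lock)
{
	__atomic_fetch_and(lock, ~1UL, __ATOMIC_RELEASE);
}
```

If a CPU locks but never unlocks, as in the failure log above, every later CPU spins in that loop forever, which matches the "failed to report alive state" symptom.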

Thanks,

        tglx
---
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -37,12 +37,12 @@
 	.text
 	.code16
 
-.macro LOAD_REALMODE_ESP
+.macro LOAD_REALMODE_ESP lock:req
 	/*
 	 * Make sure only one CPU fiddles with the realmode stack
 	 */
 .Llock_rm\@:
-        lock btsl       $0, tr_lock
+        lock btsl       $0, \lock
         jnc             2f
         pause
         jmp             .Llock_rm\@
@@ -63,7 +63,7 @@ SYM_CODE_START(trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	LOAD_REALMODE_ESP
+	LOAD_REALMODE_ESP tr_lock
 
 	call	verify_cpu		# Verify the cpu supports long mode
 	testl   %eax, %eax		# Check for return code
@@ -106,7 +106,7 @@ SYM_CODE_START(sev_es_trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	LOAD_REALMODE_ESP
+	LOAD_REALMODE_ESP tr_lock
 
 	jmp	.Lswitch_to_protected
 SYM_CODE_END(sev_es_trampoline_start)
@@ -189,7 +189,7 @@ SYM_CODE_START(pa_trampoline_compat)
 	 * In compatibility mode.  Prep ESP and DX for startup_32, then disable
 	 * paging and complete the switch to legacy 32-bit mode.
 	 */
-	LOAD_REALMODE_ESP
+	LOAD_REALMODE_ESP pa_tr_lock
 	movw	$__KERNEL_DS, %dx
 
 	movl	$(CR0_STATE & ~X86_CR0_PG), %eax



From xen-devel-bounces@lists.xenproject.org Mon May 29 20:32:12 2023
Date: Mon, 29 May 2023 23:31:29 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,	Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,	Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
Message-ID: <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
 <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx>
In-Reply-To: <87bki3kkfi.ffs@tglx>

On Mon, May 29, 2023 at 09:27:13PM +0200, Thomas Gleixner wrote:
> On Mon, May 29 2023 at 05:39, Kirill A. Shutemov wrote:
> > On Sat, May 27, 2023 at 03:40:02PM +0200, Thomas Gleixner wrote:
> > But it gets broken again on "x86/smpboot: Implement a bit spinlock to
> > protect the realmode stack" with
> >
> > [    0.554079] .... node  #0, CPUs:        #1  #2
> > [    0.738071] Callback from call_rcu_tasks() invoked.
> > [   10.562065] CPU2 failed to report alive state
> > [   10.566337]   #3
> > [   20.570066] CPU3 failed to report alive state
> > [   20.574268]   #4
> > ...
> >
> > Notably, CPU1 is missing from the "failed to report" list. So CPU1 takes the
> > lock fine but seemingly never unlocks it.
> >
> > Maybe trampoline_lock(%rip) in head_64.S somehow is not the same as
> > &tr_lock in trampoline_64.S. I donno.
> 
> It's definitely the same in the regular startup (16bit mode), but TDX
> starts up via:
> 
> trampoline_start64
>   trampoline_compat
>     LOAD_REALMODE_ESP <- lock
> 
> That path cannot work with the LOAD_REALMODE_ESP macro as is. The untested
> patch below should cure it.

Yep, works for me.

Aaand the next patch that breaks TDX boot is... <drum roll>

	x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it

Disabling parallel bringup helps. I haven't looked closer yet. If you
have an idea, let me know.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Mon May 29 20:55:08 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181004-mainreport@xen.org>
Subject: [ovmf test] 181004: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1034d223f8cc6bf8b9b86c57e564753cdad46f88
X-Osstest-Versions-That:
    ovmf=8e934ab9562a33191af21ce3babf1ad37a3cdc34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 20:54:59 +0000

flight 181004 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181004/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1034d223f8cc6bf8b9b86c57e564753cdad46f88
baseline version:
 ovmf                 8e934ab9562a33191af21ce3babf1ad37a3cdc34

Last test of basis   181001  2023-05-29 15:42:05 Z    0 days
Testing same since   181004  2023-05-29 18:15:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8e934ab956..1034d223f8  1034d223f8cc6bf8b9b86c57e564753cdad46f88 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 29 22:51:41 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181002-mainreport@xen.org>
Subject: [linux-linus test] 181002: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-arndale:host-install(5):broken:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-vhd:guest-start/debian.repeat:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8b817fded42d8fe3a0eb47b1149d907851a3c942
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 29 May 2023 22:51:21 +0000

flight 181002 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181002/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-arndale   5 host-install(5)        broken REGR. vs. 180278
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278
 test-arm64-arm64-xl-vhd     17 guest-start/debian.repeat fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8b817fded42d8fe3a0eb47b1149d907851a3c942
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   43 days
Failing since        180281  2023-04-17 06:24:36 Z   42 days   80 attempts
Testing same since   181002  2023-05-29 16:11:58 Z    0 days    1 attempts

------------------------------------------------------------
2553 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-arndale broken
broken-step test-armhf-armhf-xl-arndale host-install(5)

Not pushing.

(No revision log; it would be 323291 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 30 00:54:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 00:54:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540713.842650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ndI-000662-Lr; Tue, 30 May 2023 00:54:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540713.842650; Tue, 30 May 2023 00:54:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ndI-00065v-Ii; Tue, 30 May 2023 00:54:40 +0000
Received: by outflank-mailman (input) for mailman id 540713;
 Tue, 30 May 2023 00:54:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m7B1=BT=shutemov.name=kirill@srs-se1.protection.inumbo.net>)
 id 1q3ndG-00065l-AR
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 00:54:38 +0000
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8ab3eaeb-fe84-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 02:54:35 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id 2516C5C0115;
 Mon, 29 May 2023 20:54:33 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute6.internal (MEProxy); Mon, 29 May 2023 20:54:33 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 29 May 2023 20:54:30 -0400 (EDT)
Received: by box.shutemov.name (Postfix, from userid 1000)
 id 50794102633; Tue, 30 May 2023 03:54:28 +0300 (+03)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ab3eaeb-fe84-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov.name;
	 h=cc:cc:content-type:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1685408073; x=
	1685494473; bh=X0c3nPbso4/0oqcOAoLD0+cBIPIras9+w1Pgn+nO65Q=; b=T
	CpJAvctGQu5S8VNM66C4FtRjgGq+q2MMUeaaYMu/HRHh4mEh7tcuyRuKGV4Mu1TM
	bPKX+5J4fl5PcZjZe+BDqQjvRSOpFrIlZq/fUwOeVF0Vkn6L+7z7bbDkp7/3HLSy
	8Q7A7P3l3sK33G79IiUGI6gyrN6Ssgfy4fLfvvdjLaVkwb9jwoY1Da7YDny6eGQC
	d5fy28JHVhf2h28hjqoG/eQCTsoJBmLwqyIlfi8Lh+zezV/dOvyXi3qbvN6ojnnR
	AUajs9bVEMhhU+D7YYhU7KMj/a6bfChoCakItQcYs2XGwVOAV85jszU0b44fYZ6F
	XgI7XkLmbTxGpFuiSO0gw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1685408073; x=1685494473; bh=X0c3nPbso4/0o
	qcOAoLD0+cBIPIras9+w1Pgn+nO65Q=; b=IdTnfWW5E7gVhvTEsHohP2Bwmn0WD
	YLKITfgHaYIgSlaFBWykuQ8uaRJvMhCZjh72YNyUSnLpF34Q6eS5tYsDlXlgI4Tm
	Fl41MrtIZNPFaKU1w7vcs+XW2Jwb0qk+JlrCyo2Hx8xp06hAbbxdoZ/iIp73cMh6
	jPpRlwmBDQq7eel++fmaTcxDtMTZ4MNWyWDLm7CyV2Sq6sDfwgiCdmTojDQN8bV8
	Ao3L2aIodNehScXAsUA4Nu0WBS/5hNXLD5mcBurMlbGDqvKYjlL6UQ+NVdauwRUG
	K75NcrCzRC06cuvgrhoJu4tTrDFE1tA7dMZ4ujnKquiYWErzK8nW0e83g==
X-ME-Sender: <xms:R0l1ZPnuRgrRKbHAQnFn6Dk1UOlq8SI_pxyLVuaTKE2xNVAoJeF1tg>
    <xme:R0l1ZC2SaZ60xaaLY0cIcVdRN9oYuSd84RMlMDFIFmMkiMxQj4eU0yEE5qZexcsp8
    6TVt5dhxqNigjSw30Y>
X-ME-Received: <xmr:R0l1ZFoDJAvHxTvcP5H3fy5tpsgojfUK2OFM7oQXBZY6PT83cLKCZ5UqZI-OEFCAqO6oNg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekiedgfeekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesthdttddttddtvdenucfhrhhomhepfdfmihhr
    ihhllhcutedrucfuhhhuthgvmhhovhdfuceokhhirhhilhhlsehshhhuthgvmhhovhdrnh
    grmhgvqeenucggtffrrghtthgvrhhnpefhieeghfdtfeehtdeftdehgfehuddtvdeuheet
    tddtheejueekjeegueeivdektdenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmh
    epmhgrihhlfhhrohhmpehkihhrihhllhesshhhuhhtvghmohhvrdhnrghmvg
X-ME-Proxy: <xmx:R0l1ZHm1M4GFUYoy1T_HKVif9sWKE7Vx-Gdei9gW_GugdY6yIE5Y2A>
    <xmx:R0l1ZN2-3u2974MjnGYlBlFfYuQD_eWmtDcZBpWF4dboPyqwekjw0Q>
    <xmx:R0l1ZGtu9IFogJNQH3LK2d2jfxh4Wy4Q41z9_vyhI4V_YDaJJ4sitQ>
    <xmx:SUl1ZC1BVfuKc-tubEg0R0ZWBkvATsLR5IPsd-2lVdd02_aMwMmTZQ>
Feedback-ID: ie3994620:Fastmail
Date: Tue, 30 May 2023 03:54:28 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,	Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,	Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
Message-ID: <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
 <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>

On Mon, May 29, 2023 at 11:31:29PM +0300, Kirill A. Shutemov wrote:
> On Mon, May 29, 2023 at 09:27:13PM +0200, Thomas Gleixner wrote:
> > On Mon, May 29 2023 at 05:39, Kirill A. Shutemov wrote:
> > > On Sat, May 27, 2023 at 03:40:02PM +0200, Thomas Gleixner wrote:
> > > But it gets broken again on "x86/smpboot: Implement a bit spinlock to
> > > protect the realmode stack" with
> > >
> > > [    0.554079] .... node  #0, CPUs:        #1  #2
> > > [    0.738071] Callback from call_rcu_tasks() invoked.
> > > [   10.562065] CPU2 failed to report alive state
> > > [   10.566337]   #3
> > > [   20.570066] CPU3 failed to report alive state
> > > [   20.574268]   #4
> > > ...
> > >
> > > Notably CPU1 is missing from "failed to report" list. So CPU1 takes the
> > > lock fine, but seems never unlocks it.
> > >
> > > Maybe trampoline_lock(%rip) in head_64.S somehow is not the same as
> > > &tr_lock in trampoline_64.S. I donno.
> > 
> > It's definitely the same in the regular startup (16bit mode), but TDX
> > starts up via:
> > 
> > trampoline_start64
> >   trampoline_compat
> >     LOAD_REALMODE_ESP <- lock
> > 
> > That place cannot work with that LOAD_REALMODE_ESP macro. The untested
> > below should cure it.
> 
> Yep, works for me.
> 
> Aaand the next patch that breaks TDX boot is... <drum roll>
> 
> 	x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it
> 
> Disabling parallel bringup helps. I didn't look closer yet. If you have
> an idea let me know.

Okay, it crashes around .Lread_apicid due to touching MSRs that trigger #VE.

Looks like the patch had no intention to enable parallel bringup on TDX.

+        * Intel-TDX has a secure RDMSR hypercall, but that needs to be
+        * implemented seperately in the low level startup ASM code.

But CC_ATTR_GUEST_STATE_ENCRYPT, which used to filter it out, is an
SEV-ES-specific attribute and doesn't cover TDX. I don't think we have
an attribute that fits nicely here.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Tue May 30 00:59:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 00:59:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540717.842660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3nhV-0006ji-7q; Tue, 30 May 2023 00:59:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540717.842660; Tue, 30 May 2023 00:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3nhV-0006jb-3R; Tue, 30 May 2023 00:59:01 +0000
Received: by outflank-mailman (input) for mailman id 540717;
 Tue, 30 May 2023 00:58:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3nhT-0006jQ-Pr; Tue, 30 May 2023 00:58:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3nhT-0007dx-GN; Tue, 30 May 2023 00:58:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3nhS-0004oD-Ve; Tue, 30 May 2023 00:58:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3nhS-0007hZ-VF; Tue, 30 May 2023 00:58:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=grQOQ9s9w5Jm2Rg2BPQboCNSGS5XHpKZgNMS1vFpxBg=; b=2I1HPgvO6UeuV+1QRO+jH1nuSQ
	cBDvvrYIPOgYUrvPlp7YbVbpqRcZZjBgHp9gvY+Nyhr1+UnKxUres2RQLQCcjP7AWyIiRCkX0zSum
	sdQ4tTHQFNK2211fcpQfNtZDcWKI+9tcOXMNHu0cl+L79X2CT9qg119jOWPmRkrh1NmQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181003-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 181003: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:xen-install/dst_host:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:xen-install/src_host:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start.2:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=ac84b57b4d74606f7f83667a0606deef32b2049d
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 00:58:58 +0000

flight 181003 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181003/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-vhd 21 guest-start/debian.repeat fail in 180996 pass in 181003
 test-amd64-i386-pair         11 xen-install/dst_host       fail pass in 180990
 test-amd64-i386-pair         10 xen-install/src_host       fail pass in 180996
 test-amd64-amd64-xl-qcow2    22 guest-start.2              fail pass in 180996
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 180996

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                ac84b57b4d74606f7f83667a0606deef32b2049d
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   12 days
Failing since        180699  2023-05-18 07:21:24 Z   11 days   47 attempts
Testing same since   180972  2023-05-27 03:59:19 Z    2 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  Bernhard Beschow <shentey@gmail.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8318 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 30 05:14:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 05:14:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540723.842670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3rgO-0006I6-AD; Tue, 30 May 2023 05:14:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540723.842670; Tue, 30 May 2023 05:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3rgO-0006Hy-5f; Tue, 30 May 2023 05:14:08 +0000
Received: by outflank-mailman (input) for mailman id 540723;
 Tue, 30 May 2023 02:43:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OULz=BT=daynix.com=akihiko.odaki@srs-se1.protection.inumbo.net>)
 id 1q3pKd-00089x-T4
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 02:43:32 +0000
Received: from mail-pf1-x42d.google.com (mail-pf1-x42d.google.com
 [2607:f8b0:4864:20::42d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bfd13341-fe93-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 04:43:28 +0200 (CEST)
Received: by mail-pf1-x42d.google.com with SMTP id
 d2e1a72fcca58-64d3578c25bso4579357b3a.3
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 19:43:26 -0700 (PDT)
Received: from alarm.. ([157.82.204.253]) by smtp.gmail.com with ESMTPSA id
 63-20020a630542000000b0051baf3f1b3esm7801785pgf.76.2023.05.29.19.43.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 19:43:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfd13341-fe93-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=daynix-com.20221208.gappssmtp.com; s=20221208; t=1685414604; x=1688006604;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=swwLnIkzsF+teWY2PyMri5LA/E3FuwMRZkjgfo8Sf88=;
        b=Mk7dJszWz2jAD6FWi0cao85HuTooPQab6qm7QHzPUgs/YznOaSVoUH9MQcXHmnerZP
         hR1JtUquNXQLzoQle2nym9+2/UrkKMlqwAOJAlHXH8JOu8vd6du751KrQDltRrPAe9C5
         i1VvizG0Bl3Nf1w+iysV9cthnw/1aAdUpjl9a1u7cX6ZAtNQwV7EGM9toZpV9T192QsT
         o0h6jomXg0eaMr7lSBclIWMo17OT0dP1IZg4szXD2PIilLzDOI3PuhuK1TGfGk/kvaVl
         KVHS+BTO5NgxEjKNvlVKL3EKGDw6qnxJICKErDGA00YBshaZ+29p1wqkgd3bzox0LFNV
         lBww==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685414604; x=1688006604;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=swwLnIkzsF+teWY2PyMri5LA/E3FuwMRZkjgfo8Sf88=;
        b=LkRiddBMga5RiglUZRUnPQ/4ACIeZUFDKE6KRQVxPf1Gtdrc4bI2Of7o+OlTyHPYO8
         U/psgCWic12kL1lnCRvxraJ7jMyk3vLIJBVEXwGaAXj1IYAlEEHkL1A4o7/4794v9oG0
         kmj8v/om4EQw2PcT0cC0dwX9R7WRWf8ipXvbj7RApPOq9ia80EwrZXK1G17Di42D2QxC
         eJRPAlnfExdR0hkAjmBgm6rL1M/nWDJA8bRHwxf5ub8yKOR/QIvZBzwYSFKzSfDjE0Tv
         rsEqxr3+A/R49f8mpgqbxSK+4yJoFP7ZcKYuGucEbHe2OfkjupVWxWXIyxepOE95bSeS
         WP4A==
X-Gm-Message-State: AC+VfDxWprrod1daLLOq3FXytiU1pDj/cjmrQoK5xYFI3iEUhU/iu17j
	wakNhQQ9+B1uQqW+MSMlgugbrQ==
X-Google-Smtp-Source: ACHHUZ5YsC9cr+QFUOl5hhAO7DM8g16IxDu9WjEWSsTOketuhwKTj1vai2Ctzeoq8pHPAgrmQUbTCw==
X-Received: by 2002:a05:6a20:ce4f:b0:10e:de4f:3437 with SMTP id id15-20020a056a20ce4f00b0010ede4f3437mr869413pzb.39.1685414604413;
        Mon, 29 May 2023 19:43:24 -0700 (PDT)
From: Akihiko Odaki <akihiko.odaki@daynix.com>
To: 
Cc: Mauro Matteo Cascella <mcascell@redhat.com>,
	P J P <pj.pandit@yahoo.co.in>,
	Alexander Bulekov <alxndr@bu.edu>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	Beniamino Galvani <b.galvani@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Strahinja Jankovic <strahinja.p.jankovic@gmail.com>,
	Jason Wang <jasowang@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	Alistair Francis <alistair@alistair23.me>,
	Stefan Weil <sw@weilnetz.de>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	Andrew Jeffery <andrew@aj.id.au>,
	Joel Stanley <joel@jms.id.au>,
	Richard Henderson <richard.henderson@linaro.org>,
	Helge Deller <deller@gmx.de>,
	Sriram Yagnaraman <sriram.yagnaraman@est.tech>,
	Thomas Huth <huth@tuxfamily.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Subbaraya Sundeep <sundeep.lkml@gmail.com>,
	Jan Kiszka <jan.kiszka@web.de>,
	Tyrone Ting <kfting@nuvoton.com>,
	Hao Wu <wuhaotsh@google.com>,
	Max Filippov <jcmvbkbc@gmail.com>,
	Jiri Pirko <jiri@resnulli.us>,
	Daniel Henrique Barboza <danielhb413@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Greg Kurz <groug@kaod.org>,
	Harsh Prateek Bora <harshpb@linux.ibm.com>,
	Sven Schnelle <svens@stackframe.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Rob Herring <robh@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	qemu-arm@nongnu.org,
	qemu-devel@nongnu.org,
	qemu-ppc@nongnu.org,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@daynix.com>
Subject: [PATCH 0/2] net: Update MemReentrancyGuard for NIC
Date: Tue, 30 May 2023 11:43:00 +0900
Message-Id: <20230530024302.14215-1-akihiko.odaki@daynix.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Recently MemReentrancyGuard was added to DeviceState to record that the
device is engaging in I/O. The network device backend needs to update it
when delivering a packet to a device.

This implementation follows what the bottom half code does, but it does not
add a tracepoint for the case where the network device backend starts
delivering a packet to a device that is already engaging in I/O. This
is because such reentrancy happens frequently with
qemu_flush_queued_packets() and is insignificant.

This series consists of two patches. The first patch makes a bulk change to
add a new parameter to qemu_new_nic() and contains no behavioral changes.
The second patch actually implements the MemReentrancyGuard update.

Akihiko Odaki (2):
  net: Provide MemReentrancyGuard * to qemu_new_nic()
  net: Update MemReentrancyGuard for NIC

 include/net/net.h             |  2 ++
 hw/net/allwinner-sun8i-emac.c |  3 ++-
 hw/net/allwinner_emac.c       |  3 ++-
 hw/net/cadence_gem.c          |  3 ++-
 hw/net/dp8393x.c              |  3 ++-
 hw/net/e1000.c                |  3 ++-
 hw/net/e1000e.c               |  2 +-
 hw/net/eepro100.c             |  4 +++-
 hw/net/etraxfs_eth.c          |  3 ++-
 hw/net/fsl_etsec/etsec.c      |  3 ++-
 hw/net/ftgmac100.c            |  3 ++-
 hw/net/i82596.c               |  2 +-
 hw/net/igb.c                  |  2 +-
 hw/net/imx_fec.c              |  2 +-
 hw/net/lan9118.c              |  3 ++-
 hw/net/mcf_fec.c              |  3 ++-
 hw/net/mipsnet.c              |  3 ++-
 hw/net/msf2-emac.c            |  3 ++-
 hw/net/mv88w8618_eth.c        |  3 ++-
 hw/net/ne2000-isa.c           |  3 ++-
 hw/net/ne2000-pci.c           |  3 ++-
 hw/net/npcm7xx_emc.c          |  3 ++-
 hw/net/opencores_eth.c        |  3 ++-
 hw/net/pcnet.c                |  3 ++-
 hw/net/rocker/rocker_fp.c     |  4 ++--
 hw/net/rtl8139.c              |  3 ++-
 hw/net/smc91c111.c            |  3 ++-
 hw/net/spapr_llan.c           |  3 ++-
 hw/net/stellaris_enet.c       |  3 ++-
 hw/net/sungem.c               |  2 +-
 hw/net/sunhme.c               |  3 ++-
 hw/net/tulip.c                |  3 ++-
 hw/net/virtio-net.c           |  6 ++++--
 hw/net/vmxnet3.c              |  2 +-
 hw/net/xen_nic.c              |  4 ++--
 hw/net/xgmac.c                |  3 ++-
 hw/net/xilinx_axienet.c       |  3 ++-
 hw/net/xilinx_ethlite.c       |  3 ++-
 hw/usb/dev-network.c          |  3 ++-
 net/net.c                     | 15 +++++++++++++++
 40 files changed, 90 insertions(+), 41 deletions(-)

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 05:14:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 05:14:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540725.842672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3rgO-0006Ji-H4; Tue, 30 May 2023 05:14:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540725.842672; Tue, 30 May 2023 05:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3rgO-0006JZ-Dg; Tue, 30 May 2023 05:14:08 +0000
Received: by outflank-mailman (input) for mailman id 540725;
 Tue, 30 May 2023 02:43:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OULz=BT=daynix.com=akihiko.odaki@srs-se1.protection.inumbo.net>)
 id 1q3pKi-0008AP-G7
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 02:43:36 +0000
Received: from mail-pg1-x532.google.com (mail-pg1-x532.google.com
 [2607:f8b0:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c47ade56-fe93-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 04:43:33 +0200 (CEST)
Received: by mail-pg1-x532.google.com with SMTP id
 41be03b00d2f7-53202149ae2so2376395a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 19:43:33 -0700 (PDT)
Received: from alarm.. ([157.82.204.253]) by smtp.gmail.com with ESMTPSA id
 63-20020a630542000000b0051baf3f1b3esm7801785pgf.76.2023.05.29.19.43.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 19:43:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c47ade56-fe93-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=daynix-com.20221208.gappssmtp.com; s=20221208; t=1685414612; x=1688006612;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=DUSBfeomTiau/0Y+TQ+XPKgzi9WAbYuw9lFLHs7E/DQ=;
        b=w6Z20Rgo4xXfTYdtH29au26JsCc0AdguJDZCe9tpOlJ/HNKSUDkh1thz3zb240lx51
         vhN4SuNqyJPdv3kTgH5zw+25+0fHFfFR4b+s4TsIrgvAdKKd/8pXQAMiZv5j5smaBhm6
         VsefE8tNCEi1frSiquFiZ+TSnismCsNsedKxw7SpwP4P0f2RJxOJOi1Pv+1ZPc4ejb12
         pCNFmbl01dHfpWqTlqePVRqhiBS25887MtmbDjor2q4IevAvTK4ZX03NRktd/p6x7Rby
         aNtPfkJ78rsMuv4W0Op91DAdU9/XKDIzhucWk0OmLTAGwvmhAa5VpaujI920NxGFCyzs
         4j6Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685414612; x=1688006612;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=DUSBfeomTiau/0Y+TQ+XPKgzi9WAbYuw9lFLHs7E/DQ=;
        b=IarMHVKXqDsdYd50hktTR3gwzv6BhklIr0MNj3FAlnC3odhC2otNAzXZ2KHDv1iJVj
         RfQLKxc4o9/BzqMgUu5s7wtiiqpORj3oG82fpVL7WefiJ32P6wVgbSmTnvscDgw+oBSE
         mIODwynXmsLMHSr/p6J+U37rmIQzEGaQZaggCKVe8crxy1fYNU6eEgXEADoQ6tlBn1v6
         uElMKlh9galHSperzMUtnpvvF8+aL+wbxr37pZktTkzeBaw1NHMuvRMPdecBO94G2L0Z
         8NypMKgla4hHt0S7V3gGCCqzMIAO/PUraN3LmDO9ksPjfIR8fNMsHO+6/Phu3g2O1wim
         HGxQ==
X-Gm-Message-State: AC+VfDzNP8CgSYkjnaprjvzVikdHvGaPLsXmSiaOEdJ+dmg3pdamMoj/
	+2dCboRVfp46BxuECpo8zVi38g==
X-Google-Smtp-Source: ACHHUZ4/tmzoprzvDSp1+GkGS0w0qgBMQNWlO3/kfqD0rkZYdMgOG7YxabsrsxdA7VJUy2aQVS84Jw==
X-Received: by 2002:a05:6a20:e18b:b0:ff:a017:2b07 with SMTP id ks11-20020a056a20e18b00b000ffa0172b07mr906955pzb.20.1685414612047;
        Mon, 29 May 2023 19:43:32 -0700 (PDT)
From: Akihiko Odaki <akihiko.odaki@daynix.com>
To: 
Cc: Mauro Matteo Cascella <mcascell@redhat.com>,
	P J P <pj.pandit@yahoo.co.in>,
	Alexander Bulekov <alxndr@bu.edu>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	Beniamino Galvani <b.galvani@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Strahinja Jankovic <strahinja.p.jankovic@gmail.com>,
	Jason Wang <jasowang@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	Alistair Francis <alistair@alistair23.me>,
	Stefan Weil <sw@weilnetz.de>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	Andrew Jeffery <andrew@aj.id.au>,
	Joel Stanley <joel@jms.id.au>,
	Richard Henderson <richard.henderson@linaro.org>,
	Helge Deller <deller@gmx.de>,
	Sriram Yagnaraman <sriram.yagnaraman@est.tech>,
	Thomas Huth <huth@tuxfamily.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Subbaraya Sundeep <sundeep.lkml@gmail.com>,
	Jan Kiszka <jan.kiszka@web.de>,
	Tyrone Ting <kfting@nuvoton.com>,
	Hao Wu <wuhaotsh@google.com>,
	Max Filippov <jcmvbkbc@gmail.com>,
	Jiri Pirko <jiri@resnulli.us>,
	Daniel Henrique Barboza <danielhb413@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Greg Kurz <groug@kaod.org>,
	Harsh Prateek Bora <harshpb@linux.ibm.com>,
	Sven Schnelle <svens@stackframe.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Rob Herring <robh@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	qemu-arm@nongnu.org,
	qemu-devel@nongnu.org,
	qemu-ppc@nongnu.org,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@daynix.com>
Subject: [PATCH 1/2] net: Provide MemReentrancyGuard * to qemu_new_nic()
Date: Tue, 30 May 2023 11:43:01 +0900
Message-Id: <20230530024302.14215-2-akihiko.odaki@daynix.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530024302.14215-1-akihiko.odaki@daynix.com>
References: <20230530024302.14215-1-akihiko.odaki@daynix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Recently MemReentrancyGuard was added to DeviceState to record that the
device is engaging in I/O. The network device backend needs to update it
when delivering a packet to a device.

In preparation for such a change, add MemReentrancyGuard * as a
parameter of qemu_new_nic().

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
 include/net/net.h             | 1 +
 hw/net/allwinner-sun8i-emac.c | 3 ++-
 hw/net/allwinner_emac.c       | 3 ++-
 hw/net/cadence_gem.c          | 3 ++-
 hw/net/dp8393x.c              | 3 ++-
 hw/net/e1000.c                | 3 ++-
 hw/net/e1000e.c               | 2 +-
 hw/net/eepro100.c             | 4 +++-
 hw/net/etraxfs_eth.c          | 3 ++-
 hw/net/fsl_etsec/etsec.c      | 3 ++-
 hw/net/ftgmac100.c            | 3 ++-
 hw/net/i82596.c               | 2 +-
 hw/net/igb.c                  | 2 +-
 hw/net/imx_fec.c              | 2 +-
 hw/net/lan9118.c              | 3 ++-
 hw/net/mcf_fec.c              | 3 ++-
 hw/net/mipsnet.c              | 3 ++-
 hw/net/msf2-emac.c            | 3 ++-
 hw/net/mv88w8618_eth.c        | 3 ++-
 hw/net/ne2000-isa.c           | 3 ++-
 hw/net/ne2000-pci.c           | 3 ++-
 hw/net/npcm7xx_emc.c          | 3 ++-
 hw/net/opencores_eth.c        | 3 ++-
 hw/net/pcnet.c                | 3 ++-
 hw/net/rocker/rocker_fp.c     | 4 ++--
 hw/net/rtl8139.c              | 3 ++-
 hw/net/smc91c111.c            | 3 ++-
 hw/net/spapr_llan.c           | 3 ++-
 hw/net/stellaris_enet.c       | 3 ++-
 hw/net/sungem.c               | 2 +-
 hw/net/sunhme.c               | 3 ++-
 hw/net/tulip.c                | 3 ++-
 hw/net/virtio-net.c           | 6 ++++--
 hw/net/vmxnet3.c              | 2 +-
 hw/net/xen_nic.c              | 4 ++--
 hw/net/xgmac.c                | 3 ++-
 hw/net/xilinx_axienet.c       | 3 ++-
 hw/net/xilinx_ethlite.c       | 3 ++-
 hw/usb/dev-network.c          | 3 ++-
 net/net.c                     | 1 +
 40 files changed, 75 insertions(+), 41 deletions(-)

diff --git a/include/net/net.h b/include/net/net.h
index 1448d00afb..a7d8deaccb 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -157,6 +157,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
                        NICConf *conf,
                        const char *model,
                        const char *name,
+                       MemReentrancyGuard *reentrancy_guard,
                        void *opaque);
 void qemu_del_nic(NICState *nic);
 NetClientState *qemu_get_subqueue(NICState *nic, int queue_index);
diff --git a/hw/net/allwinner-sun8i-emac.c b/hw/net/allwinner-sun8i-emac.c
index fac4405f45..cc350d40e5 100644
--- a/hw/net/allwinner-sun8i-emac.c
+++ b/hw/net/allwinner-sun8i-emac.c
@@ -824,7 +824,8 @@ static void allwinner_sun8i_emac_realize(DeviceState *dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_allwinner_sun8i_emac_info, &s->conf,
-                           object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/net/allwinner_emac.c b/hw/net/allwinner_emac.c
index 372e5b66da..e10965de14 100644
--- a/hw/net/allwinner_emac.c
+++ b/hw/net/allwinner_emac.c
@@ -453,7 +453,8 @@ static void aw_emac_realize(DeviceState *dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_aw_emac_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 
     fifo8_create(&s->rx_fifo, RX_FIFO_SIZE);
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index 42ea2411a2..a7bce1c120 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -1633,7 +1633,8 @@ static void gem_realize(DeviceState *dev, Error **errp)
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
 
     s->nic = qemu_new_nic(&net_gem_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
 
     if (s->jumbo_max_len > MAX_FRAME_SIZE) {
         error_setg(errp, "jumbo-max-len is greater than %d",
diff --git a/hw/net/dp8393x.c b/hw/net/dp8393x.c
index 45b954e46c..abfcc6f69f 100644
--- a/hw/net/dp8393x.c
+++ b/hw/net/dp8393x.c
@@ -943,7 +943,8 @@ static void dp8393x_realize(DeviceState *dev, Error **errp)
                           "dp8393x-regs", SONIC_REG_COUNT << s->it_shift);
 
     s->nic = qemu_new_nic(&net_dp83932_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 
     s->watchdog = timer_new_ns(QEMU_CLOCK_VIRTUAL, dp8393x_watchdog, s);
diff --git a/hw/net/e1000.c b/hw/net/e1000.c
index 4dc4dd7704..7581378226 100644
--- a/hw/net/e1000.c
+++ b/hw/net/e1000.c
@@ -1696,7 +1696,8 @@ static void pci_e1000_realize(PCIDevice *pci_dev, Error **errp)
                                macaddr);
 
     d->nic = qemu_new_nic(&net_e1000_info, &d->conf,
-                          object_get_typename(OBJECT(d)), dev->id, d);
+                          object_get_typename(OBJECT(d)), dev->id,
+                          &dev->mem_reentrancy_guard, d);
 
     qemu_format_nic_info_str(qemu_get_queue(d->nic), macaddr);
 
diff --git a/hw/net/e1000e.c b/hw/net/e1000e.c
index c3848797b8..e41a6c1038 100644
--- a/hw/net/e1000e.c
+++ b/hw/net/e1000e.c
@@ -320,7 +320,7 @@ e1000e_init_net_peer(E1000EState *s, PCIDevice *pci_dev, uint8_t *macaddr)
     int i;
 
     s->nic = qemu_new_nic(&net_e1000e_info, &s->conf,
-        object_get_typename(OBJECT(s)), dev->id, s);
+        object_get_typename(OBJECT(s)), dev->id, &dev->mem_reentrancy_guard, s);
 
     s->core.max_queue_num = s->conf.peers.queues ? s->conf.peers.queues - 1 : 0;
 
diff --git a/hw/net/eepro100.c b/hw/net/eepro100.c
index dc07984ae9..e2b03b787d 100644
--- a/hw/net/eepro100.c
+++ b/hw/net/eepro100.c
@@ -1874,7 +1874,9 @@ static void e100_nic_realize(PCIDevice *pci_dev, Error **errp)
     nic_reset(s);
 
     s->nic = qemu_new_nic(&net_eepro100_info, &s->conf,
-                          object_get_typename(OBJECT(pci_dev)), pci_dev->qdev.id, s);
+                          object_get_typename(OBJECT(pci_dev)),
+                          pci_dev->qdev.id,
+                          &pci_dev->qdev.mem_reentrancy_guard, s);
 
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
     TRACE(OTHER, logout("%s\n", qemu_get_queue(s->nic)->info_str));
diff --git a/hw/net/etraxfs_eth.c b/hw/net/etraxfs_eth.c
index 1b82aec794..ba57a978d1 100644
--- a/hw/net/etraxfs_eth.c
+++ b/hw/net/etraxfs_eth.c
@@ -618,7 +618,8 @@ static void etraxfs_eth_realize(DeviceState *dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_etraxfs_info, &s->conf,
-                          object_get_typename(OBJECT(s)), dev->id, s);
+                          object_get_typename(OBJECT(s)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 
     s->phy.read = tdk_read;
diff --git a/hw/net/fsl_etsec/etsec.c b/hw/net/fsl_etsec/etsec.c
index 798ea33d08..00315f305d 100644
--- a/hw/net/fsl_etsec/etsec.c
+++ b/hw/net/fsl_etsec/etsec.c
@@ -391,7 +391,8 @@ static void etsec_realize(DeviceState *dev, Error **errp)
     eTSEC        *etsec = ETSEC_COMMON(dev);
 
     etsec->nic = qemu_new_nic(&net_etsec_info, &etsec->conf,
-                              object_get_typename(OBJECT(dev)), dev->id, etsec);
+                              object_get_typename(OBJECT(dev)), dev->id,
+                              &dev->mem_reentrancy_guard, etsec);
     qemu_format_nic_info_str(qemu_get_queue(etsec->nic), etsec->conf.macaddr.a);
 
     etsec->ptimer = ptimer_init(etsec_timer_hit, etsec, PTIMER_POLICY_LEGACY);
diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
index d3bf14be53..be2cf63c08 100644
--- a/hw/net/ftgmac100.c
+++ b/hw/net/ftgmac100.c
@@ -1118,7 +1118,8 @@ static void ftgmac100_realize(DeviceState *dev, Error **errp)
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
 
     s->nic = qemu_new_nic(&net_ftgmac100_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/net/i82596.c b/hw/net/i82596.c
index ec21e2699a..dc64246f75 100644
--- a/hw/net/i82596.c
+++ b/hw/net/i82596.c
@@ -743,7 +743,7 @@ void i82596_common_init(DeviceState *dev, I82596State *s, NetClientInfo *info)
         qemu_macaddr_default_if_unset(&s->conf.macaddr);
     }
     s->nic = qemu_new_nic(info, &s->conf, object_get_typename(OBJECT(dev)),
-                dev->id, s);
+                dev->id, &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 
     if (USE_TIMER) {
diff --git a/hw/net/igb.c b/hw/net/igb.c
index 1c989d7677..141bc56d79 100644
--- a/hw/net/igb.c
+++ b/hw/net/igb.c
@@ -315,7 +315,7 @@ igb_init_net_peer(IGBState *s, PCIDevice *pci_dev, uint8_t *macaddr)
     int i;
 
     s->nic = qemu_new_nic(&net_igb_info, &s->conf,
-        object_get_typename(OBJECT(s)), dev->id, s);
+        object_get_typename(OBJECT(s)), dev->id, &dev->mem_reentrancy_guard, s);
 
     s->core.max_queue_num = s->conf.peers.queues ? s->conf.peers.queues - 1 : 0;
 
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
index 5d1f1f104c..6881e3e4f0 100644
--- a/hw/net/imx_fec.c
+++ b/hw/net/imx_fec.c
@@ -1334,7 +1334,7 @@ static void imx_eth_realize(DeviceState *dev, Error **errp)
 
     s->nic = qemu_new_nic(&imx_eth_net_info, &s->conf,
                           object_get_typename(OBJECT(dev)),
-                          dev->id, s);
+                          dev->id, &dev->mem_reentrancy_guard, s);
 
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
index e5c4af182d..cf7b8c897a 100644
--- a/hw/net/lan9118.c
+++ b/hw/net/lan9118.c
@@ -1361,7 +1361,8 @@ static void lan9118_realize(DeviceState *dev, Error **errp)
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
 
     s->nic = qemu_new_nic(&net_lan9118_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
     s->eeprom[0] = 0xa5;
     for (i = 0; i < 6; i++) {
diff --git a/hw/net/mcf_fec.c b/hw/net/mcf_fec.c
index 8aa27bd322..57dd49abea 100644
--- a/hw/net/mcf_fec.c
+++ b/hw/net/mcf_fec.c
@@ -643,7 +643,8 @@ static void mcf_fec_realize(DeviceState *dev, Error **errp)
     mcf_fec_state *s = MCF_FEC_NET(dev);
 
     s->nic = qemu_new_nic(&net_mcf_fec_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/net/mipsnet.c b/hw/net/mipsnet.c
index 2ade72dea0..8e925de867 100644
--- a/hw/net/mipsnet.c
+++ b/hw/net/mipsnet.c
@@ -255,7 +255,8 @@ static void mipsnet_realize(DeviceState *dev, Error **errp)
     sysbus_init_irq(sbd, &s->irq);
 
     s->nic = qemu_new_nic(&net_mipsnet_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/net/msf2-emac.c b/hw/net/msf2-emac.c
index db3a04deb1..145a5e46ab 100644
--- a/hw/net/msf2-emac.c
+++ b/hw/net/msf2-emac.c
@@ -530,7 +530,8 @@ static void msf2_emac_realize(DeviceState *dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_msf2_emac_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/net/mv88w8618_eth.c b/hw/net/mv88w8618_eth.c
index ef30b0d4a6..2185f1131a 100644
--- a/hw/net/mv88w8618_eth.c
+++ b/hw/net/mv88w8618_eth.c
@@ -350,7 +350,8 @@ static void mv88w8618_eth_realize(DeviceState *dev, Error **errp)
 
     address_space_init(&s->dma_as, s->dma_mr, "emac-dma");
     s->nic = qemu_new_nic(&net_mv88w8618_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
 }
 
 static const VMStateDescription mv88w8618_eth_vmsd = {
diff --git a/hw/net/ne2000-isa.c b/hw/net/ne2000-isa.c
index 6ced6775ff..a79f7fad1f 100644
--- a/hw/net/ne2000-isa.c
+++ b/hw/net/ne2000-isa.c
@@ -74,7 +74,8 @@ static void isa_ne2000_realizefn(DeviceState *dev, Error **errp)
     ne2000_reset(s);
 
     s->nic = qemu_new_nic(&net_ne2000_isa_info, &s->c,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->c.macaddr.a);
 }
 
diff --git a/hw/net/ne2000-pci.c b/hw/net/ne2000-pci.c
index edc6689d33..fee93c6ec0 100644
--- a/hw/net/ne2000-pci.c
+++ b/hw/net/ne2000-pci.c
@@ -71,7 +71,8 @@ static void pci_ne2000_realize(PCIDevice *pci_dev, Error **errp)
 
     s->nic = qemu_new_nic(&net_ne2000_info, &s->c,
                           object_get_typename(OBJECT(pci_dev)),
-                          pci_dev->qdev.id, s);
+                          pci_dev->qdev.id,
+                          &pci_dev->qdev.mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->c.macaddr.a);
 }
 
diff --git a/hw/net/npcm7xx_emc.c b/hw/net/npcm7xx_emc.c
index 8156f701b0..1d4e8f59f3 100644
--- a/hw/net/npcm7xx_emc.c
+++ b/hw/net/npcm7xx_emc.c
@@ -821,7 +821,8 @@ static void npcm7xx_emc_realize(DeviceState *dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&emc->conf.macaddr);
     emc->nic = qemu_new_nic(&net_npcm7xx_emc_info, &emc->conf,
-                            object_get_typename(OBJECT(dev)), dev->id, emc);
+                            object_get_typename(OBJECT(dev)), dev->id,
+                            &dev->mem_reentrancy_guard, emc);
     qemu_format_nic_info_str(qemu_get_queue(emc->nic), emc->conf.macaddr.a);
 }
 
diff --git a/hw/net/opencores_eth.c b/hw/net/opencores_eth.c
index 0b3dc3146e..f96d6ea2cc 100644
--- a/hw/net/opencores_eth.c
+++ b/hw/net/opencores_eth.c
@@ -732,7 +732,8 @@ static void sysbus_open_eth_realize(DeviceState *dev, Error **errp)
     sysbus_init_irq(sbd, &s->irq);
 
     s->nic = qemu_new_nic(&net_open_eth_info, &s->conf,
-                          object_get_typename(OBJECT(s)), dev->id, s);
+                          object_get_typename(OBJECT(s)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
 }
 
 static void qdev_open_eth_reset(DeviceState *dev)
diff --git a/hw/net/pcnet.c b/hw/net/pcnet.c
index d456094575..1627f6939c 100644
--- a/hw/net/pcnet.c
+++ b/hw/net/pcnet.c
@@ -1718,7 +1718,8 @@ void pcnet_common_init(DeviceState *dev, PCNetState *s, NetClientInfo *info)
     s->poll_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, pcnet_poll_timer, s);
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
-    s->nic = qemu_new_nic(info, &s->conf, object_get_typename(OBJECT(dev)), dev->id, s);
+    s->nic = qemu_new_nic(info, &s->conf, object_get_typename(OBJECT(dev)),
+                          dev->id, &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 
     /* Initialize the PROM */
diff --git a/hw/net/rocker/rocker_fp.c b/hw/net/rocker/rocker_fp.c
index cbeed65bd5..0d21948ada 100644
--- a/hw/net/rocker/rocker_fp.c
+++ b/hw/net/rocker/rocker_fp.c
@@ -241,8 +241,8 @@ FpPort *fp_port_alloc(Rocker *r, char *sw_name,
     port->conf.bootindex = -1;
     port->conf.peers = *peers;
 
-    port->nic = qemu_new_nic(&fp_port_info, &port->conf,
-                             sw_name, NULL, port);
+    port->nic = qemu_new_nic(&fp_port_info, &port->conf, sw_name, NULL,
+                             &DEVICE(r)->mem_reentrancy_guard, port);
     qemu_format_nic_info_str(qemu_get_queue(port->nic),
                              port->conf.macaddr.a);
 
diff --git a/hw/net/rtl8139.c b/hw/net/rtl8139.c
index 5a5aaf868d..f4df03af71 100644
--- a/hw/net/rtl8139.c
+++ b/hw/net/rtl8139.c
@@ -3397,7 +3397,8 @@ static void pci_rtl8139_realize(PCIDevice *dev, Error **errp)
     s->eeprom.contents[9] = s->conf.macaddr.a[4] | s->conf.macaddr.a[5] << 8;
 
     s->nic = qemu_new_nic(&net_rtl8139_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), d->id, s);
+                          object_get_typename(OBJECT(dev)), d->id,
+                          &d->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 
     s->cplus_txbuffer = NULL;
diff --git a/hw/net/smc91c111.c b/hw/net/smc91c111.c
index ad778cd8fc..4eda971ef3 100644
--- a/hw/net/smc91c111.c
+++ b/hw/net/smc91c111.c
@@ -783,7 +783,8 @@ static void smc91c111_realize(DeviceState *dev, Error **errp)
     sysbus_init_irq(sbd, &s->irq);
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_smc91c111_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
     /* ??? Save/restore.  */
 }
diff --git a/hw/net/spapr_llan.c b/hw/net/spapr_llan.c
index a6876a936d..475d5f3a34 100644
--- a/hw/net/spapr_llan.c
+++ b/hw/net/spapr_llan.c
@@ -325,7 +325,8 @@ static void spapr_vlan_realize(SpaprVioDevice *sdev, Error **errp)
     memcpy(&dev->perm_mac.a, &dev->nicconf.macaddr.a, sizeof(dev->perm_mac.a));
 
     dev->nic = qemu_new_nic(&net_spapr_vlan_info, &dev->nicconf,
-                            object_get_typename(OBJECT(sdev)), sdev->qdev.id, dev);
+                            object_get_typename(OBJECT(sdev)), sdev->qdev.id,
+                            &sdev->qdev.mem_reentrancy_guard, dev);
     qemu_format_nic_info_str(qemu_get_queue(dev->nic), dev->nicconf.macaddr.a);
 
     dev->rxp_timer = timer_new_us(QEMU_CLOCK_VIRTUAL, spapr_vlan_flush_rx_queue,
diff --git a/hw/net/stellaris_enet.c b/hw/net/stellaris_enet.c
index 8dd60783d8..6768a6912f 100644
--- a/hw/net/stellaris_enet.c
+++ b/hw/net/stellaris_enet.c
@@ -492,7 +492,8 @@ static void stellaris_enet_realize(DeviceState *dev, Error **errp)
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
 
     s->nic = qemu_new_nic(&net_stellaris_enet_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/net/sungem.c b/hw/net/sungem.c
index eb01520790..347ccdd19d 100644
--- a/hw/net/sungem.c
+++ b/hw/net/sungem.c
@@ -1361,7 +1361,7 @@ static void sungem_realize(PCIDevice *pci_dev, Error **errp)
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_sungem_info, &s->conf,
                           object_get_typename(OBJECT(dev)),
-                          dev->id, s);
+                          dev->id, &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic),
                              s->conf.macaddr.a);
 }
diff --git a/hw/net/sunhme.c b/hw/net/sunhme.c
index 1f3d8011ae..82e38a428a 100644
--- a/hw/net/sunhme.c
+++ b/hw/net/sunhme.c
@@ -892,7 +892,8 @@ static void sunhme_realize(PCIDevice *pci_dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_sunhme_info, &s->conf,
-                          object_get_typename(OBJECT(d)), d->id, s);
+                          object_get_typename(OBJECT(d)), d->id,
+                          &d->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/net/tulip.c b/hw/net/tulip.c
index 915e5fb595..1f8e45de5e 100644
--- a/hw/net/tulip.c
+++ b/hw/net/tulip.c
@@ -983,7 +983,8 @@ static void pci_tulip_realize(PCIDevice *pci_dev, Error **errp)
 
     s->nic = qemu_new_nic(&net_tulip_info, &s->c,
                           object_get_typename(OBJECT(pci_dev)),
-                          pci_dev->qdev.id, s);
+                          pci_dev->qdev.id,
+                          &pci_dev->qdev.mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->c.macaddr.a);
 }
 
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index ed9f240bfd..9728158b72 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3696,10 +3696,12 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
          * Happen when virtio_net_set_netclient_name has been called.
          */
         n->nic = qemu_new_nic(&net_virtio_info, &n->nic_conf,
-                              n->netclient_type, n->netclient_name, n);
+                              n->netclient_type, n->netclient_name,
+                              &dev->mem_reentrancy_guard, n);
     } else {
         n->nic = qemu_new_nic(&net_virtio_info, &n->nic_conf,
-                              object_get_typename(OBJECT(dev)), dev->id, n);
+                              object_get_typename(OBJECT(dev)), dev->id,
+                              &dev->mem_reentrancy_guard, n);
     }
 
     for (i = 0; i < n->max_queue_pairs; i++) {
diff --git a/hw/net/vmxnet3.c b/hw/net/vmxnet3.c
index 18b9edfdb2..5051818fe4 100644
--- a/hw/net/vmxnet3.c
+++ b/hw/net/vmxnet3.c
@@ -2083,7 +2083,7 @@ static void vmxnet3_net_init(VMXNET3State *s)
 
     s->nic = qemu_new_nic(&net_vmxnet3_info, &s->conf,
                           object_get_typename(OBJECT(s)),
-                          d->id, s);
+                          d->id, &d->mem_reentrancy_guard, s);
 
     s->peer_has_vhdr = vmxnet3_peer_has_vnet_hdr(s);
     s->tx_sop = true;
diff --git a/hw/net/xen_nic.c b/hw/net/xen_nic.c
index 9bbf6599fc..e735f79c5b 100644
--- a/hw/net/xen_nic.c
+++ b/hw/net/xen_nic.c
@@ -293,8 +293,8 @@ static int net_init(struct XenLegacyDevice *xendev)
         return -1;
     }
 
-    netdev->nic = qemu_new_nic(&net_xen_info, &netdev->conf,
-                               "xen", NULL, netdev);
+    netdev->nic = qemu_new_nic(&net_xen_info, &netdev->conf, "xen", NULL,
+                               &xendev->qdev.mem_reentrancy_guard, netdev);
 
     qemu_set_info_str(qemu_get_queue(netdev->nic),
                       "nic: xenbus vif macaddr=%s", netdev->mac);
diff --git a/hw/net/xgmac.c b/hw/net/xgmac.c
index 0ab6ae91aa..1f4f277d84 100644
--- a/hw/net/xgmac.c
+++ b/hw/net/xgmac.c
@@ -402,7 +402,8 @@ static void xgmac_enet_realize(DeviceState *dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_xgmac_enet_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 
     s->regs[XGMAC_ADDR_HIGH(0)] = (s->conf.macaddr.a[5] << 8) |
diff --git a/hw/net/xilinx_axienet.c b/hw/net/xilinx_axienet.c
index 5b19a01eaa..7d1fd37b4a 100644
--- a/hw/net/xilinx_axienet.c
+++ b/hw/net/xilinx_axienet.c
@@ -967,7 +967,8 @@ static void xilinx_enet_realize(DeviceState *dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_xilinx_enet_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 
     tdk_init(&s->TEMAC.phy);
diff --git a/hw/net/xilinx_ethlite.c b/hw/net/xilinx_ethlite.c
index 89f4f3b254..989afaf037 100644
--- a/hw/net/xilinx_ethlite.c
+++ b/hw/net/xilinx_ethlite.c
@@ -235,7 +235,8 @@ static void xilinx_ethlite_realize(DeviceState *dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_xilinx_ethlite_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), dev->id, s);
+                          object_get_typename(OBJECT(dev)), dev->id,
+                          &dev->mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/usb/dev-network.c b/hw/usb/dev-network.c
index 5fff487ee5..2c33e36cad 100644
--- a/hw/usb/dev-network.c
+++ b/hw/usb/dev-network.c
@@ -1386,7 +1386,8 @@ static void usb_net_realize(USBDevice *dev, Error **errp)
 
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
     s->nic = qemu_new_nic(&net_usbnet_info, &s->conf,
-                          object_get_typename(OBJECT(s)), s->dev.qdev.id, s);
+                          object_get_typename(OBJECT(s)), s->dev.qdev.id,
+                          &s->dev.qdev.mem_reentrancy_guard, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
     snprintf(s->usbstring_mac, sizeof(s->usbstring_mac),
              "%02x%02x%02x%02x%02x%02x",
diff --git a/net/net.c b/net/net.c
index 6492ad530e..982df2479f 100644
--- a/net/net.c
+++ b/net/net.c
@@ -319,6 +319,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
                        NICConf *conf,
                        const char *model,
                        const char *name,
+                       MemReentrancyGuard *reentrancy_guard,
                        void *opaque)
 {
     NetClientState **peers = conf->peers.ncs;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 05:14:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 05:14:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540727.842679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3rgO-0006Q0-SK; Tue, 30 May 2023 05:14:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540727.842679; Tue, 30 May 2023 05:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3rgO-0006N1-Oh; Tue, 30 May 2023 05:14:08 +0000
Received: by outflank-mailman (input) for mailman id 540727;
 Tue, 30 May 2023 02:43:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OULz=BT=daynix.com=akihiko.odaki@srs-se1.protection.inumbo.net>)
 id 1q3pKn-0008AP-Lq
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 02:43:41 +0000
Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com
 [2607:f8b0:4864:20::436])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c8cf90dd-fe93-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 04:43:41 +0200 (CEST)
Received: by mail-pf1-x436.google.com with SMTP id
 d2e1a72fcca58-64d30ab1f89so2605307b3a.3
 for <xen-devel@lists.xenproject.org>; Mon, 29 May 2023 19:43:40 -0700 (PDT)
Received: from alarm.. ([157.82.204.253]) by smtp.gmail.com with ESMTPSA id
 63-20020a630542000000b0051baf3f1b3esm7801785pgf.76.2023.05.29.19.43.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 29 May 2023 19:43:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8cf90dd-fe93-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=daynix-com.20221208.gappssmtp.com; s=20221208; t=1685414619; x=1688006619;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ejrIrGDi4fwDc42huLgpjGNxWrqatNvW4qPUqySFFzs=;
        b=bA03kVN0LRRXIzKN4XRu7rfTWzkdAzIu5vrroA41QIqCpQkLzqpjwbAKDmNVrQc619
         sel8WanjG9YLk5mNX33VW98hchhdEjHAjMcVWMOs98OW41wGMxEr94iO1KxTFBIMgiPA
         mjJzoZXp3qVL1Firr5Vy8may4ZNLv7dUH3ERx5Uvq1sGdqZ3uKieNa3iZvE0Le9gfN1m
         VutCkV1KEbIvbRwPj4wqIy2kZTcaHUeJZ6aEezuK2d86xDfoF2+Kgn4zuSX2aTRnEtPW
         bA1j++wF+++Qk/bOAfW4Gd9EqnZ4kQYwpQoN9StGcROzUAjdbV6aRYnloFE3Wx4G3OBc
         oGgg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685414619; x=1688006619;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ejrIrGDi4fwDc42huLgpjGNxWrqatNvW4qPUqySFFzs=;
        b=KOEyw78IubrG08U2suEvL57ljeXumZvlXzon2re3np37pQBr2b/CL0knziWrszUI3v
         ioGb4+FxSeqLTvme4+k8PxcZ362QJGyenPCQBOL+rz397QyZkhh7dwYNCFfbSxdPpfE+
         BOUdB1ZhuobiYVcY5MNPBpv1GY/uHOIWr+YMVFTBZgQ6tpnY+qcno2sVDznkht/AJaSh
         SZCTxFUIs/y1ERbQ5E9uS3T70M8R0FBrCc1nkn7P9y3Pd3jmjlB65GBpr08dX1P9TMuB
         bWrzrJeFEyJkuNyTTif3oNP98ugaiXsI0ZPQl+RCDGY+OanyrQBHJ7dSkvHTCTRvqTPT
         W8mQ==
X-Gm-Message-State: AC+VfDytSCE32wdAYCPXKiVvfncWDiVZidj/R2aioLRRTKvbEsmAOF2S
	pEAW88KVU3OgdMNkKxCEqJlByA==
X-Google-Smtp-Source: ACHHUZ6aiCOJhP/F1eiuxvqFrg32PvLvuYZpAKh2r88y5LDB6dQkwW+lr4Qrw3EN74hLySTW1nk6pA==
X-Received: by 2002:a05:6a20:d805:b0:10b:78d7:502 with SMTP id iv5-20020a056a20d80500b0010b78d70502mr906865pzb.36.1685414619631;
        Mon, 29 May 2023 19:43:39 -0700 (PDT)
From: Akihiko Odaki <akihiko.odaki@daynix.com>
To: 
Cc: Mauro Matteo Cascella <mcascell@redhat.com>,
	P J P <pj.pandit@yahoo.co.in>,
	Alexander Bulekov <alxndr@bu.edu>,
	Dmitry Fleytman <dmitry.fleytman@gmail.com>,
	Beniamino Galvani <b.galvani@gmail.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	Strahinja Jankovic <strahinja.p.jankovic@gmail.com>,
	Jason Wang <jasowang@redhat.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	Alistair Francis <alistair@alistair23.me>,
	Stefan Weil <sw@weilnetz.de>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	Andrew Jeffery <andrew@aj.id.au>,
	Joel Stanley <joel@jms.id.au>,
	Richard Henderson <richard.henderson@linaro.org>,
	Helge Deller <deller@gmx.de>,
	Sriram Yagnaraman <sriram.yagnaraman@est.tech>,
	Thomas Huth <huth@tuxfamily.org>,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Subbaraya Sundeep <sundeep.lkml@gmail.com>,
	Jan Kiszka <jan.kiszka@web.de>,
	Tyrone Ting <kfting@nuvoton.com>,
	Hao Wu <wuhaotsh@google.com>,
	Max Filippov <jcmvbkbc@gmail.com>,
	Jiri Pirko <jiri@resnulli.us>,
	Daniel Henrique Barboza <danielhb413@gmail.com>,
	David Gibson <david@gibson.dropbear.id.au>,
	Greg Kurz <groug@kaod.org>,
	Harsh Prateek Bora <harshpb@linux.ibm.com>,
	Sven Schnelle <svens@stackframe.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Rob Herring <robh@kernel.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	qemu-arm@nongnu.org,
	qemu-devel@nongnu.org,
	qemu-ppc@nongnu.org,
	xen-devel@lists.xenproject.org,
	Akihiko Odaki <akihiko.odaki@daynix.com>
Subject: [PATCH 2/2] net: Update MemReentrancyGuard for NIC
Date: Tue, 30 May 2023 11:43:02 +0900
Message-Id: <20230530024302.14215-3-akihiko.odaki@daynix.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530024302.14215-1-akihiko.odaki@daynix.com>
References: <20230530024302.14215-1-akihiko.odaki@daynix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Recently MemReentrancyGuard was added to DeviceState to record that the
device is engaging in I/O. The network device backend needs to update it
when delivering a packet to a device.

This implementation follows what the bottom half code does, but it does not
add a tracepoint for the case where the network device backend starts
delivering a packet to a device that is already engaged in I/O. This is
because such reentrancy happens frequently with
qemu_flush_queued_packets() and is insignificant.

Reported-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
 include/net/net.h |  1 +
 net/net.c         | 14 ++++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index a7d8deaccb..685ec58318 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -124,6 +124,7 @@ typedef QTAILQ_HEAD(NetClientStateList, NetClientState) NetClientStateList;
 typedef struct NICState {
     NetClientState *ncs;
     NICConf *conf;
+    MemReentrancyGuard *reentrancy_guard;
     void *opaque;
     bool peer_deleted;
 } NICState;
diff --git a/net/net.c b/net/net.c
index 982df2479f..3523cceafc 100644
--- a/net/net.c
+++ b/net/net.c
@@ -332,6 +332,7 @@ NICState *qemu_new_nic(NetClientInfo *info,
     nic = g_malloc0(info->size + sizeof(NetClientState) * queues);
     nic->ncs = (void *)nic + info->size;
     nic->conf = conf;
+    nic->reentrancy_guard = reentrancy_guard;
     nic->opaque = opaque;
 
     for (i = 0; i < queues; i++) {
@@ -805,6 +806,7 @@ static ssize_t qemu_deliver_packet_iov(NetClientState *sender,
                                        int iovcnt,
                                        void *opaque)
 {
+    MemReentrancyGuard *owned_reentrancy_guard;
     NetClientState *nc = opaque;
     int ret;
 
@@ -817,12 +819,24 @@ static ssize_t qemu_deliver_packet_iov(NetClientState *sender,
         return 0;
     }
 
+    if (nc->info->type != NET_CLIENT_DRIVER_NIC ||
+        qemu_get_nic(nc)->reentrancy_guard->engaged_in_io) {
+        owned_reentrancy_guard = NULL;
+    } else {
+        owned_reentrancy_guard = qemu_get_nic(nc)->reentrancy_guard;
+        owned_reentrancy_guard->engaged_in_io = true;
+    }
+
     if (nc->info->receive_iov && !(flags & QEMU_NET_PACKET_FLAG_RAW)) {
         ret = nc->info->receive_iov(nc, iov, iovcnt);
     } else {
         ret = nc_sendv_compat(nc, iov, iovcnt, flags);
     }
 
+    if (owned_reentrancy_guard) {
+        owned_reentrancy_guard->engaged_in_io = false;
+    }
+
     if (ret == 0) {
         nc->receive_disabled = 1;
     }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 07:47:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 07:47:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540752.842700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3u4P-00069K-QD; Tue, 30 May 2023 07:47:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540752.842700; Tue, 30 May 2023 07:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3u4P-00069D-MW; Tue, 30 May 2023 07:47:05 +0000
Received: by outflank-mailman (input) for mailman id 540752;
 Tue, 30 May 2023 07:47:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IZK1=BT=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q3u4O-000697-S7
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 07:47:05 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.221]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29de84d1-febe-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 09:47:02 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4U7l1GRk
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Tue, 30 May 2023 09:47:01 +0200 (CEST)
Date: Tue, 30 May 2023 09:46:54 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: HVM domU not created anymore in staging
Message-ID: <20230530094654.372003a0.olaf@aepfle.de>
X-Mailer: Claws Mail 20230504T161344.b05adb60 hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/a1m.CKEqRsSk3sn4wKm2ua/";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/a1m.CKEqRsSk3sn4wKm2ua/
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

For some reason the staging branch fails to create HVM domUs for me.
It seems some breakage was introduced between 20230522T161155.c7908869
and 20230526T091957.40cd186b. I was hoping that osstest would spot this
failure and do the bisect for me, but apparently there is no report yet.
Did osstest actually spot any HVM error?
To me it looks like flight 180992 shows many failures, but nothing fatal.

The symptom is that xl create just hangs; a domU id is apparently not
even claimed. In case osstest is not busy with another bisect, I will
try to bisect it in my environment.

Olaf

# Xen cmdline:
conring_size=1M smt=1 loglvl=all guest_loglvl=all crashkernel=192M,below=4G
console=com1 com1=115200 dom0_mem=8G dom0_max_vcpus=4 dom0_vcpus_pin
maxcpus=8 console_timestamps=datems tbuf_size=-1

# domU.cfg:
name="X"
serial='pty'
vcpus='2'
cpus="4-5"
memory='2222'
boot='dc'
disk=[
'vdev=hda, backendtype=qdisk, format=raw, access=rw, target=/N/disk0.raw',
]
vif=[ 'bridge=br0,mac=00:08:01:18:17:40,type=ioemu' ]
builder="hvm"
device_model_override="/usr/lib64/qemu-8.0/bin/qemu-system-i386"
device_model_version="qemu-xen"
usbdevice=['tablet']



From xen-devel-bounces@lists.xenproject.org Tue May 30 07:59:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 07:59:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540757.842709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uG7-0007fQ-Ro; Tue, 30 May 2023 07:59:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540757.842709; Tue, 30 May 2023 07:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uG7-0007fJ-P2; Tue, 30 May 2023 07:59:11 +0000
Received: by outflank-mailman (input) for mailman id 540757;
 Tue, 30 May 2023 07:59:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IZK1=BT=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q3uG6-0007fD-Fl
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 07:59:10 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.22]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id da9028b2-febf-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 09:59:08 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4U7x7GWG
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Tue, 30 May 2023 09:59:07 +0200 (CEST)
Date: Tue, 30 May 2023 09:58:59 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: xentrace buffer size, maxcpus and online cpus
Message-ID: <20230530095859.60a3e4ea.olaf@aepfle.de>
X-Mailer: Claws Mail 20230504T161344.b05adb60 hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/NA6tvDi3aD7wV1LPUj6STap";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/NA6tvDi3aD7wV1LPUj6STap
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

While looking through calculate_tbuf_size again after a very long time,
I was wondering why the code uses nr_cpu_ids instead of num_online_cpus.
In case Xen was booted with maxcpus=N, would it be safe to use N as the
upper limit? I think this would increase the per-cpu buffer size for
each active pcpu, and as a result more events could be captured.


Olaf



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:07:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:07:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540767.842720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uNh-0001Hr-1F; Tue, 30 May 2023 08:07:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540767.842720; Tue, 30 May 2023 08:07:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uNg-0001Hk-Um; Tue, 30 May 2023 08:07:00 +0000
Received: by outflank-mailman (input) for mailman id 540767;
 Tue, 30 May 2023 08:06:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3uNe-0001He-SG
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:06:59 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2051.outbound.protection.outlook.com [40.107.7.51])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f2629950-fec0-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:06:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB8PR04MB7100.eurprd04.prod.outlook.com (2603:10a6:10:127::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 08:06:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 08:06:28 +0000
Message-ID: <bba057a2-0a68-bf05-9a92-59546b52c73c@suse.com>
Date: Tue, 30 May 2023 10:06:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 1/2] x86: annotate entry points with type and size
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
 <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
 <fd492a4a-11ba-b63a-daf4-99697db0db0e@suse.com>
 <ZHSp9+ouRrXFEY4R@Air-de-Roger>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHSp9+ouRrXFEY4R@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 29.05.2023 15:34, Roger Pau Monné wrote:
> On Tue, May 23, 2023 at 01:30:51PM +0200, Jan Beulich wrote:
>> Note that the FB-label in autogen_stubs() cannot be converted just yet:
>> Such labels cannot be used with .type. We could further diverge from
>> Linux'es model and avoid setting STT_NOTYPE explicitly (that's the type
>> labels get by default anyway).
>>
>> Note that we can't use ALIGN() (in place of SYM_ALIGN()) as long as we
>> still have ALIGN.
> 
> FWIW, as I'm looking into using the newly added macros in order to add
> annotations suitable for live-patching, I would need to switch some of
> the LABEL usages into their own functions, as it's not possible to
> livepatch a function that has labels jumped into from code paths
> outside of the function.

Hmm, I'm not sure what the best way is to overcome that restriction. I'm
not convinced we want to arbitrarily name things "functions".

>> --- a/xen/arch/x86/include/asm/asm_defns.h
>> +++ b/xen/arch/x86/include/asm/asm_defns.h
>> @@ -81,6 +81,45 @@ register unsigned long current_stack_poi
>>  
>>  #ifdef __ASSEMBLY__
>>  
>> +#define SYM_ALIGN(algn...) .balign algn
>> +
>> +#define SYM_L_GLOBAL(name) .globl name
>> +#define SYM_L_WEAK(name)   .weak name
> 
> Wouldn't this better be added when required?  I can't spot any weak
> symbols in assembly ATM, and you don't introduce any _WEAK macro
> variants below.

Well, Andrew specifically mentioned the desire to also have Linux'es
support for weak symbols. Hence I decided to add it here despite it
being unused (for now). I can certainly drop that again, but in
particular if we wanted to use the scheme globally, I think we may
want to make it "complete".

>> +#define SYM_L_LOCAL(name)  /* nothing */
>> +
>> +#define SYM_T_FUNC         STT_FUNC
>> +#define SYM_T_DATA         STT_OBJECT
>> +#define SYM_T_NONE         STT_NOTYPE
>> +
>> +#define SYM(name, typ, linkage, algn...)          \
>> +        .type name, SYM_T_ ## typ;                \
>> +        SYM_L_ ## linkage(name);                  \
>> +        SYM_ALIGN(algn);                          \
>> +        name:
>> +
>> +#define END(name) .size name, . - name
>> +
>> +#define ARG1_(x, y...) (x)
>> +#define ARG2_(x, y...) ARG1_(y)
>> +
>> +#define LAST__(nr) ARG ## nr ## _
>> +#define LAST_(nr)  LAST__(nr)
>> +#define LAST(x, y...) LAST_(count_args(x, ## y))(x, ## y)
> 
> I find LAST not very descriptive; wouldn't it be better named
> OPTIONAL() or similar? (and maybe placed in lib.h?)

I don't think OPTIONAL describes the purpose. I truly mean "last" here.
As to placing in lib.h - perhaps, but then we may want to have forms
with more than 2 arguments right away (and it would be a little unclear
how far up to go).

>> +
>> +#define FUNC(name, algn...) \
>> +        SYM(name, FUNC, GLOBAL, LAST(16, ## algn), 0x90)
> 
> A rant: should the alignment of functions use a different padding?
> (i.e. ret or ud2?) In order to prevent stray jumps landing in the
> padding and falling through into the next function.  That would also
> prevent the implicit fall-through used in some places.

Yes, but that's a separate topic (for which IIRC patches are pending
as well, just of course not integrated with the work here). There's
a slight risk of overlooking some "fall-through" case ...

>> +#define LABEL(name, algn...) \
>> +        SYM(name, NONE, GLOBAL, LAST(16, ## algn), 0x90)
>> +#define DATA(name, algn...) \
>> +        SYM(name, DATA, GLOBAL, LAST(0, ## algn), 0xff)
>> +
>> +#define FUNC_LOCAL(name, algn...) \
>> +        SYM(name, FUNC, LOCAL, LAST(16, ## algn), 0x90)
>> +#define LABEL_LOCAL(name, algn...) \
>> +        SYM(name, NONE, LOCAL, LAST(16, ## algn), 0x90)
> 
> Is there much value in adding local labels to the symbol table?
> 
> AFAICT the main purpose of this macro is to be used to declare aligned
> labels, and here avoid the ALIGN + label name pair, but could likely
> drop the .type directive?

Right, .type ... NOTYPE is kind of redundant, but it fits the model
better here.

>> --- a/xen/arch/x86/x86_64/compat/entry.S
>> +++ b/xen/arch/x86/x86_64/compat/entry.S
>> @@ -8,10 +8,11 @@
>>  #include <asm/page.h>
>>  #include <asm/processor.h>
>>  #include <asm/desc.h>
>> +#include <xen/lib.h>
> 
> Shouldn't the inclusion of lib.h be in asm_defns.h, as that's where the
> usage of count_args() resides? (I assume that's why lib.h is added
> here).

When the uses are in macros I'm always largely undecided, and I slightly
tend towards the (in general, perhaps not overly relevant here) "fewer
dependencies" solution. As in: source files not using the macros which
use count_args() then also don't need lib.h.

>> @@ -66,24 +68,21 @@ compat_test_guest_events:
>>          call  compat_create_bounce_frame
>>          jmp   compat_test_all_events
>>  
>> -        ALIGN
>>  /* %rbx: struct vcpu */
>> -compat_process_softirqs:
>> +LABEL_LOCAL(compat_process_softirqs)
> 
> Shouldn't this be a local function rather than a local label?  It's
> fully isolated.  I guess it would create issues with
> compat_process_trap, as we would then require a jump from the
> preceding compat_process_nmi.

Alternatives are possible, but right now I consider this an inner label
of compat_test_all_events.

>>          sti
>>          call  do_softirq
>>          jmp   compat_test_all_events
>>  
>> -        ALIGN
>>  /* %rbx: struct vcpu, %rdx: struct trap_bounce */
>> -.Lcompat_process_trapbounce:
>> +LABEL_LOCAL(.Lcompat_process_trapbounce)
> 
> It's my understanding that here the '.L' prefix is pointless, since
> LABEL_LOCAL() will forcefully create a symbol for the label due to the
> usage of .type?

I don't think .type has this effect. There's certainly no such label in
the symbol table of the object file I have as a result.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 08:24:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:24:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540774.842741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ueh-0003zi-US; Tue, 30 May 2023 08:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540774.842741; Tue, 30 May 2023 08:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ueh-0003zZ-PQ; Tue, 30 May 2023 08:24:35 +0000
Received: by outflank-mailman (input) for mailman id 540774;
 Tue, 30 May 2023 08:24:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3ueg-0003jy-8X
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:24:34 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66f67c8f-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:24:32 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 480CA1F8B9;
 Tue, 30 May 2023 08:24:32 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 14B9B1342F;
 Tue, 30 May 2023 08:24:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id otS+A8CydWQQEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:24:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66f67c8f-fec3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435072; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jR2CaVJDZ2qBizxTmbgLfYMIkR5bcQwiTfYf2yndaVw=;
	b=ndCpbH2mN9HH6RaiDM5jQgtMcVmlkGwIzhpf4zOY3Th+tfmISRAkyh7oilwuq+O5Zui46K
	pzBlInsfyAkE81KtMFKzHCXqVxxQcSWaFcJVfpzE3kv0Wh0LGBi62esnFugxS4sbEpaDLr
	ZWXTCDs4sDNYpOgDvvazwKgW8+Qp2uY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 01/14] tools/xenstore: take transaction internal nodes into account for quota
Date: Tue, 30 May 2023 10:24:11 +0200
Message-Id: <20230530082424.32126-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The accounting for the number of nodes of a domain in an active
transaction is not working correctly, as the node quota is checked
only against the number of nodes outside the transaction.

This can result in the transaction ultimately failing, as the node
quota is checked again at the end of the transaction.

On the other hand, even in a transaction deleting many nodes it might
not be possible to create new nodes, in case the node quota was
already reached at the start of the transaction.
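
The intended behaviour can be sketched as follows. This is an
illustrative C fragment, not the actual xenstored code, and the
function name is made up:

```c
#include <assert.h>

/*
 * Illustrative sketch only (not xenstored code): at any point inside a
 * transaction the quota check has to consider the node count outside
 * the transaction plus the delta accumulated within the transaction,
 * rather than the outside count alone.
 */
static int node_quota_exceeded(unsigned int nodes_outside,
			       int transaction_delta,
			       unsigned int quota)
{
	/* Effective node count as it would be seen at commit time. */
	int effective = (int)nodes_outside + transaction_delta;

	return effective > (int)quota;
}
```

With this a transaction that deleted many nodes (negative delta) can
again create nodes even when the outside count is at the quota.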

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V3:
- rewrite of commit message (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f62be2245c..dbbf97accc 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1116,9 +1116,8 @@ int domain_nbentry_fix(unsigned int domid, int num, bool update)
 
 int domain_nbentry(struct connection *conn)
 {
-	return (domain_is_unprivileged(conn))
-		? conn->domain->nbentry
-		: 0;
+	return domain_is_unprivileged(conn)
+	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:24:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:24:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540773.842730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uec-0003kB-Kr; Tue, 30 May 2023 08:24:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540773.842730; Tue, 30 May 2023 08:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uec-0003k4-IF; Tue, 30 May 2023 08:24:30 +0000
Received: by outflank-mailman (input) for mailman id 540773;
 Tue, 30 May 2023 08:24:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3ueb-0003jy-Ko
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:24:29 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 63c3b5fe-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:24:26 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A7BCC21A59;
 Tue, 30 May 2023 08:24:26 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 64CC91342F;
 Tue, 30 May 2023 08:24:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id W1Y4F7qydWQAEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:24:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63c3b5fe-fec3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435066; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=OJ89+jr0EtFeuiyfFiZWQHjzc5XdEw3eDryL9bo7Zkg=;
	b=s6Pt85Vd8/ZQxOAeLDFjZLpJkBzV1uNBB70sY81G2N785oa7HHv3yJk32WSfVXqb/jD9gT
	WRSLf/LFL7cFhwdT/mjzvTj2kUKBuVKfDPitp4wR42KTnYR28VmUtzunutctvNXy5JBepk
	mOzd54SHmdzkgtgSKcknpPuqjNRlm5A=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v6 00/14] tools/xenstore: rework internal accounting
Date: Tue, 30 May 2023 10:24:10 +0200
Message-Id: <20230530082424.32126-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series reworks the Xenstore internal accounting to use a uniform
generic framework. It adds some useful diagnostic information, such as
an accounting trace and the maximum per-domain and global quota values
seen.

Changes in V2:
- added patch 1 (leftover from previous series)
- rebase

Changes in V3:
- addressed comments

Changes in V4:
- fixed patch 3

Changes in V5:
- addressed comments

Changes in V6:
- addressed comments

Juergen Gross (14):
  tools/xenstore: take transaction internal nodes into account for quota
  tools/xenstore: manage per-transaction domain accounting data in an
    array
  tools/xenstore: introduce accounting data array for per-domain values
  tools/xenstore: add framework to commit accounting data on success
    only
  tools/xenstore: use accounting buffering for node accounting
  tools/xenstore: add current connection to domain_memory_add()
    parameters
  tools/xenstore: use accounting data array for per-domain values
  tools/xenstore: add accounting trace support
  tools/xenstore: add TDB access trace support
  tools/xenstore: switch transaction accounting to generic accounting
  tools/xenstore: remember global and per domain max accounting values
  tools/xenstore: use generic accounting for remaining quotas
  tools/xenstore: switch get_optval_int() to get_optval_uint()
  tools/xenstore: switch quota management to be table based

 docs/misc/xenstore.txt                 |   5 +-
 tools/xenstore/xenstored_control.c     |  65 ++--
 tools/xenstore/xenstored_core.c        | 185 ++++++-----
 tools/xenstore/xenstored_core.h        |  24 +-
 tools/xenstore/xenstored_domain.c      | 434 ++++++++++++++++++-------
 tools/xenstore/xenstored_domain.h      |  59 +++-
 tools/xenstore/xenstored_transaction.c |  25 +-
 tools/xenstore/xenstored_watch.c       |  15 +-
 8 files changed, 532 insertions(+), 280 deletions(-)

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:24:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:24:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540775.842750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uen-0004ID-4J; Tue, 30 May 2023 08:24:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540775.842750; Tue, 30 May 2023 08:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uen-0004I4-1G; Tue, 30 May 2023 08:24:41 +0000
Received: by outflank-mailman (input) for mailman id 540775;
 Tue, 30 May 2023 08:24:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3uem-0003jy-2I
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:24:40 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a52283f-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:24:37 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E4B991F8B9;
 Tue, 30 May 2023 08:24:37 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id A91A71342F;
 Tue, 30 May 2023 08:24:37 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id Cw/8J8WydWQWEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:24:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a52283f-fec3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435077; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JlFoqnoU3I4JMyJapnFZtevlraEzWmolnTL1zQ4kqcs=;
	b=pAjU0NzvYb3hu5IjT4euugnFDqmO9DY7Mdg+wAd5qK+8Fh9plpFTbET8xJO/DU3qFQM+mH
	GNNq5G6XgMHgofsImvKPEZ1A1uUkV0QMRWjPjDB0aXYY4NDEzXKKeAxZGBJKrYbXPl6ewG
	ZSm2/fz3g8EejT7wt79SmoufknW3sT0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 02/14] tools/xenstore: manage per-transaction domain accounting data in an array
Date: Tue, 30 May 2023 10:24:12 +0200
Message-Id: <20230530082424.32126-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to prepare for keeping accounting data in an array instead of
in independent fields, switch the struct changed_domain accounting
data to that scheme, for now using an array with only one element.

In order to be able to extend this scheme later, add the needed
indexing enum to xenstored_domain.h.
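
The array-plus-enum scheme can be illustrated with a minimal
standalone sketch (simplified relative to the patch; the list and
hashtable handling of changed domains is omitted here):

```c
#include <assert.h>

/*
 * Minimal sketch of the array-based accounting scheme: an enum indexes
 * a per-entity array, so adding a new accounting item only needs a new
 * enum value before the _N terminator.
 */
enum accitem {
	ACC_NODES,
	ACC_TR_N,	/* Number of elements per transaction. */
};

struct changed_domain {
	unsigned int domid;
	int acc[ACC_TR_N];
};

/* Adjust one accounting item and return its new value. */
static int acc_add(struct changed_domain *cd, enum accitem what, int val)
{
	assert(what < ACC_TR_N);
	cd->acc[what] += val;
	return cd->acc[what];
}
```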

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- make "what" parameter of acc_add_changed_dom() an enum type, and
  assert() that it won't exceed the accounting array (Julien Grall)
---
 tools/xenstore/xenstored_domain.c | 19 +++++++++++--------
 tools/xenstore/xenstored_domain.h | 10 ++++++++++
 2 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index dbbf97accc..609a9a13ab 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -99,8 +99,8 @@ struct changed_domain
 	/* Identifier of the changed domain. */
 	unsigned int domid;
 
-	/* Amount by which this domain's nbentry field has changed. */
-	int nbentry;
+	/* Accounting data. */
+	int acc[ACC_TR_N];
 };
 
 static struct hashtable *domhash;
@@ -550,7 +550,7 @@ int acc_fix_domains(struct list_head *head, bool chk_quota, bool update)
 	int cnt;
 
 	list_for_each_entry(cd, head, list) {
-		cnt = domain_nbentry_fix(cd->domid, cd->nbentry, update);
+		cnt = domain_nbentry_fix(cd->domid, cd->acc[ACC_NODES], update);
 		if (!update) {
 			if (chk_quota && cnt >= quota_nb_entry_per_domain)
 				return ENOSPC;
@@ -595,19 +595,21 @@ static struct changed_domain *acc_get_changed_domain(const void *ctx,
 	return cd;
 }
 
-static int acc_add_dom_nbentry(const void *ctx, struct list_head *head, int val,
-			       unsigned int domid)
+static int acc_add_changed_dom(const void *ctx, struct list_head *head,
+			       enum accitem what, int val, unsigned int domid)
 {
 	struct changed_domain *cd;
 
+	assert(what < ARRAY_SIZE(cd->acc));
+
 	cd = acc_get_changed_domain(ctx, head, domid);
 	if (!cd)
 		return 0;
 
 	errno = 0;
-	cd->nbentry += val;
+	cd->acc[what] += val;
 
-	return cd->nbentry;
+	return cd->acc[what];
 }
 
 static void domain_conn_reset(struct domain *domain)
@@ -1071,7 +1073,8 @@ static int domain_nbentry_add(struct connection *conn, unsigned int domid,
 
 	if (conn && conn->transaction) {
 		head = transaction_get_changed_domains(conn->transaction);
-		ret = acc_add_dom_nbentry(conn->transaction, head, add, domid);
+		ret = acc_add_changed_dom(conn->transaction, head, ACC_NODES,
+					  add, domid);
 		if (errno) {
 			fail_transaction(conn->transaction);
 			return -1;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 279cccb3ad..40803574f6 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -19,6 +19,16 @@
 #ifndef _XENSTORED_DOMAIN_H
 #define _XENSTORED_DOMAIN_H
 
+/*
+ * All accounting data is stored in a per-domain array.
+ * Depending on the account item there might be other scopes as well, like e.g.
+ * a per transaction array.
+ */
+enum accitem {
+	ACC_NODES,
+	ACC_TR_N,		/* Number of elements per transaction. */
+};
+
 void handle_event(void);
 
 void check_domains(void);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:24:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:24:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540776.842760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uet-0004f3-CK; Tue, 30 May 2023 08:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540776.842760; Tue, 30 May 2023 08:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uet-0004eu-9C; Tue, 30 May 2023 08:24:47 +0000
Received: by outflank-mailman (input) for mailman id 540776;
 Tue, 30 May 2023 08:24:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3uer-0004aU-J0
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:24:45 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6df9e9d5-fec3-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:24:43 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 7A6591F8B9;
 Tue, 30 May 2023 08:24:43 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 488CF1342F;
 Tue, 30 May 2023 08:24:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id fkBOEMuydWQgEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:24:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6df9e9d5-fec3-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435083; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=no4Yd3rxrRD6C77O0adarS1Qkx7TupRDK427VBM6jfA=;
	b=SGYV6yJVNlc9BQ+ThmqMiH64egg6OHGvi9IzKF3jPQLImMY0iQBzWwO/1vspst89NZLWuz
	OxQTvPBkDHm7E6bDqyBiyFCgl0Yn/nBA6xkTJcsvGiRYLZRB/C3PlN2ceg2MuoQLFechNJ
	+k1HsqAm+5JQmVEKdfw1XDctaqrokPk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 03/14] tools/xenstore: introduce accounting data array for per-domain values
Date: Tue, 30 May 2023 10:24:13 +0200
Message-Id: <20230530082424.32126-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce the scheme of an accounting data array for per-domain
accounting data and use it initially for the number of nodes owned by
a domain.

Make the accounting data type unsigned int, as no value is allowed to
be negative at any time.
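
The clamping this requires can be shown in a simplified standalone
form, mirroring the patch's domain_acc_add_valid() (this is a sketch,
not the exact xenstored function):

```c
#include <assert.h>
#include <limits.h>

/*
 * Simplified version of the clamping done by domain_acc_add_valid():
 * with unsigned accounting values, a negative adjustment larger than
 * the current value clamps to 0, while an addition that would exceed
 * INT_MAX clamps to INT_MAX. Assumes cur <= INT_MAX and add > INT_MIN.
 */
static unsigned int acc_add_clamped(unsigned int cur, int add)
{
	if ((add < 0 && (unsigned int)-add > cur) ||
	    (add > 0 && (unsigned int)INT_MAX - cur < (unsigned int)add))
		return (add < 0) ? 0u : (unsigned int)INT_MAX;

	return cur + (unsigned int)add;
}
```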

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V3:
- remove domid parameter from domain_acc_add_chk() (Julien Grall)
- rename domain_acc_add_chk() (Julien Grall)
- modify overflow check (Julien Grall)
V4:
- fix overflow check
---
 tools/xenstore/xenstored_domain.c | 70 ++++++++++++++++++-------------
 tools/xenstore/xenstored_domain.h |  3 +-
 2 files changed, 43 insertions(+), 30 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 609a9a13ab..30fb9acec6 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -69,8 +69,8 @@ struct domain
 	/* Has domain been officially introduced? */
 	bool introduced;
 
-	/* number of entry from this domain in the store */
-	int nbentry;
+	/* Accounting data for this domain. */
+	unsigned int acc[ACC_N];
 
 	/* Amount of memory allocated for this domain. */
 	int memory;
@@ -246,7 +246,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 
 	if (keep_orphans) {
 		set_tdb_key(node->name, &key);
-		domain->nbentry--;
+		domain_nbentry_dec(NULL, domain->domid);
 		node->perms.p[0].id = priv_domid;
 		node->acc.memory = 0;
 		domain_nbentry_inc(NULL, priv_domid);
@@ -270,7 +270,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		ret = WALK_TREE_SKIP_CHILDREN;
 	}
 
-	return domain->nbentry > 0 ? ret : WALK_TREE_SUCCESS_STOP;
+	return domain->acc[ACC_NODES] ? ret : WALK_TREE_SUCCESS_STOP;
 }
 
 static void domain_tree_remove(struct domain *domain)
@@ -278,7 +278,7 @@ static void domain_tree_remove(struct domain *domain)
 	int ret;
 	struct walk_funcs walkfuncs = { .enter = domain_tree_remove_sub };
 
-	if (domain->nbentry > 0) {
+	if (domain->acc[ACC_NODES]) {
 		ret = walk_node_tree(domain, NULL, "/", &walkfuncs, domain);
 		if (ret == WALK_TREE_ERROR_STOP)
 			syslog(LOG_ERR,
@@ -437,7 +437,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	resp = talloc_asprintf_append(resp, "%-16s: %8d\n", #t, e); \
 	if (!resp) return ENOMEM
 
-	ent(nodes, d->nbentry);
+	ent(nodes, d->acc[ACC_NODES]);
 	ent(watches, d->nbwatch);
 	ent(transactions, ta);
 	ent(outstanding, d->nboutstanding);
@@ -1047,8 +1047,27 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
-static int domain_nbentry_add(struct connection *conn, unsigned int domid,
-			      int add, bool no_dom_alloc)
+static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
+{
+	assert(what < ARRAY_SIZE(d->acc));
+
+	if ((add < 0 && -add > d->acc[what]) ||
+	    (add > 0 && (INT_MAX - d->acc[what]) < add)) {
+		/*
+		 * In a transaction when a node is being added/removed AND the
+		 * same node has been added/removed outside the transaction in
+		 * parallel, the resulting value will be wrong. This is no
+		 * problem, as the transaction will fail due to the resulting
+		 * conflict.
+		 */
+		return (add < 0) ? 0 : INT_MAX;
+	}
+
+	return d->acc[what] + add;
+}
+
+static int domain_acc_add(struct connection *conn, unsigned int domid,
+			  enum accitem what, int add, bool no_dom_alloc)
 {
 	struct domain *d;
 	struct list_head *head;
@@ -1071,56 +1090,49 @@ static int domain_nbentry_add(struct connection *conn, unsigned int domid,
 		}
 	}
 
-	if (conn && conn->transaction) {
+	if (conn && conn->transaction && what < ACC_TR_N) {
 		head = transaction_get_changed_domains(conn->transaction);
-		ret = acc_add_changed_dom(conn->transaction, head, ACC_NODES,
+		ret = acc_add_changed_dom(conn->transaction, head, what,
 					  add, domid);
 		if (errno) {
 			fail_transaction(conn->transaction);
 			return -1;
 		}
-		/*
-		 * In a transaction when a node is being added/removed AND the
-		 * same node has been added/removed outside the transaction in
-		 * parallel, the resulting number of nodes will be wrong. This
-		 * is no problem, as the transaction will fail due to the
-		 * resulting conflict.
-		 * In the node remove case the resulting number can be even
-		 * negative, which should be avoided.
-		 */
-		return max(d->nbentry + ret, 0);
+		return domain_acc_add_valid(d, what, ret);
 	}
 
-	d->nbentry += add;
+	d->acc[what] = domain_acc_add_valid(d, what, add);
 
-	return d->nbentry;
+	return d->acc[what];
 }
 
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
-	return (domain_nbentry_add(conn, domid, 1, false) < 0) ? errno : 0;
+	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
+	       ? errno : 0;
 }
 
 int domain_nbentry_dec(struct connection *conn, unsigned int domid)
 {
-	return (domain_nbentry_add(conn, domid, -1, true) < 0) ? errno : 0;
+	return (domain_acc_add(conn, domid, ACC_NODES, -1, true) < 0)
+	       ? errno : 0;
 }
 
 int domain_nbentry_fix(unsigned int domid, int num, bool update)
 {
 	int ret;
 
-	ret = domain_nbentry_add(NULL, domid, update ? num : 0, update);
+	ret = domain_acc_add(NULL, domid, ACC_NODES, update ? num : 0, update);
 	if (ret < 0 || update)
 		return ret;
 
 	return domid_is_unprivileged(domid) ? ret + num : 0;
 }
 
-int domain_nbentry(struct connection *conn)
+unsigned int domain_nbentry(struct connection *conn)
 {
 	return domain_is_unprivileged(conn)
-	       ? domain_nbentry_add(conn, conn->id, 0, true) : 0;
+	       ? domain_acc_add(conn, conn->id, ACC_NODES, 0, true) : 0;
 }
 
 static bool domain_chk_quota(struct domain *domain, int mem)
@@ -1597,7 +1609,7 @@ static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
 	 * If everything is correct incrementing the value for each node will
 	 * result in dom->nodes being 0 at the end.
 	 */
-	dom->nodes = -d->nbentry;
+	dom->nodes = -d->acc[ACC_NODES];
 
 	if (!hashtable_insert(domains, &dom->domid, dom)) {
 		talloc_free(dom);
@@ -1652,7 +1664,7 @@ static int domain_check_acc_cb(const void *k, void *v, void *arg)
 	if (!d)
 		return 0;
 
-	d->nbentry += dom->nodes;
+	d->acc[ACC_NODES] += dom->nodes;
 
 	return 0;
 }
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 40803574f6..9d05eb01da 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -27,6 +27,7 @@
 enum accitem {
 	ACC_NODES,
 	ACC_TR_N,		/* Number of elements per transaction. */
+	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
 };
 
 void handle_event(void);
@@ -77,7 +78,7 @@ int domain_alloc_permrefs(struct node_perms *perms);
 int domain_nbentry_inc(struct connection *conn, unsigned int domid);
 int domain_nbentry_dec(struct connection *conn, unsigned int domid);
 int domain_nbentry_fix(unsigned int domid, int num, bool update);
-int domain_nbentry(struct connection *conn);
+unsigned int domain_nbentry(struct connection *conn);
 int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
 
 /*
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:24:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:24:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540777.842770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uex-00051b-Lb; Tue, 30 May 2023 08:24:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540777.842770; Tue, 30 May 2023 08:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uex-00051S-HX; Tue, 30 May 2023 08:24:51 +0000
Received: by outflank-mailman (input) for mailman id 540777;
 Tue, 30 May 2023 08:24:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3uex-0003jy-04
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:24:51 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70f8e3b8-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:24:48 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 174831FD68;
 Tue, 30 May 2023 08:24:49 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id DAB1F1342F;
 Tue, 30 May 2023 08:24:48 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id CEUUNNCydWQnEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:24:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70f8e3b8-fec3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435089; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=olCuVho6OZH6MpjXcVAcQn5iIBiU/Kk5eG2G0pVcAeA=;
	b=G0umNJ+zmNmEU76MUF+iuOMqUyQq9MJmcnZGGUdrH0Dz2XTiErbyqhnV+hVqojmoNMq18r
	zc+3jPa1NzLb3BCGaIBoc71fy3oRZGguQ9b/BRVk1DSoK0skxvzIuRZqBOEQJfK46KIiit
	3TGk9E75ar0fk99/8E9KhFUqnzp6T58=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 04/14] tools/xenstore: add framework to commit accounting data on success only
Date: Tue, 30 May 2023 10:24:14 +0200
Message-Id: <20230530082424.32126-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of modifying accounting data and undoing those modifications
in case of an error during further processing, add a framework for
collecting the needed changes and committing them only when the whole
operation has succeeded.

This scheme can reuse large parts of the per-transaction accounting.
The changed_domain handling can be reused, but the two use cases must
be able to use different sizes for the accounting data array.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V3:
- call acc_commit() earlier (Julien Grall)
- add assert() to acc_commit()
- use fixed sized acc array in struct changed_domain (Julien Grall)
V5:
- set conn->in to NULL only locally in acc_commit() (Julien Grall)
- define ACC_CHD_N in enum (Julien Grall)
---
 tools/xenstore/xenstored_core.c   |  7 ++++
 tools/xenstore/xenstored_core.h   |  3 ++
 tools/xenstore/xenstored_domain.c | 54 ++++++++++++++++++++++++++++++-
 tools/xenstore/xenstored_domain.h |  6 +++-
 4 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 3ca68681e3..8392bdec9b 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1023,6 +1023,9 @@ static void send_error(struct connection *conn, int error)
 			break;
 		}
 	}
+
+	acc_drop(conn);
+
 	send_reply(conn, XS_ERROR, xsd_errors[i].errstring,
 			  strlen(xsd_errors[i].errstring) + 1);
 }
@@ -1034,6 +1037,9 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 
 	assert(type != XS_WATCH_EVENT);
 
+	/* Commit accounting now, as later errors won't undo any changes. */
+	acc_commit(conn);
+
 	if ( len > XENSTORE_PAYLOAD_MAX ) {
 		send_error(conn, E2BIG);
 		return;
@@ -2195,6 +2201,7 @@ struct connection *new_connection(const struct interface_funcs *funcs)
 	new->is_stalled = false;
 	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
+	INIT_LIST_HEAD(&new->acc_list);
 	INIT_LIST_HEAD(&new->ref_list);
 	INIT_LIST_HEAD(&new->watches);
 	INIT_LIST_HEAD(&new->transaction_list);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index c59b06551f..1f811f38cb 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -139,6 +139,9 @@ struct connection
 	struct list_head out_list;
 	uint64_t timeout_msec;
 
+	/* Not yet committed accounting data (valid if in != NULL). */
+	struct list_head acc_list;
+
 	/* Referenced requests no longer pending. */
 	struct list_head ref_list;
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 30fb9acec6..e59e40088e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -100,7 +100,7 @@ struct changed_domain
 	unsigned int domid;
 
 	/* Accounting data. */
-	int acc[ACC_TR_N];
+	int acc[ACC_CHD_N];
 };
 
 static struct hashtable *domhash;
@@ -1070,6 +1070,7 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 			  enum accitem what, int add, bool no_dom_alloc)
 {
 	struct domain *d;
+	struct changed_domain *cd;
 	struct list_head *head;
 	int ret;
 
@@ -1090,6 +1091,22 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 		}
 	}
 
+	/* Temporary accounting data until final commit? */
+	if (conn && conn->in && what < ACC_REQ_N) {
+		/* Consider transaction local data. */
+		ret = 0;
+		if (conn->transaction && what < ACC_TR_N) {
+			head = transaction_get_changed_domains(
+				conn->transaction);
+			cd = acc_find_changed_domain(head, domid);
+			if (cd)
+				ret = cd->acc[what];
+		}
+		ret += acc_add_changed_dom(conn->in, &conn->acc_list, what,
+					   add, domid);
+		return errno ? -1 : domain_acc_add_valid(d, what, ret);
+	}
+
 	if (conn && conn->transaction && what < ACC_TR_N) {
 		head = transaction_get_changed_domains(conn->transaction);
 		ret = acc_add_changed_dom(conn->transaction, head, what,
@@ -1106,6 +1123,41 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 	return d->acc[what];
 }
 
+void acc_drop(struct connection *conn)
+{
+	struct changed_domain *cd;
+
+	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
+		list_del(&cd->list);
+		talloc_free(cd);
+	}
+}
+
+void acc_commit(struct connection *conn)
+{
+	struct changed_domain *cd;
+	enum accitem what;
+	struct buffered_data *in = conn->in;
+
+	/*
+	 * Make sure domain_acc_add() below can't add additional data to
+	 * the to-be-committed accounting records.
+	 */
+	conn->in = NULL;
+
+	while ((cd = list_top(&conn->acc_list, struct changed_domain, list))) {
+		list_del(&cd->list);
+		for (what = 0; what < ACC_REQ_N; what++)
+			if (cd->acc[what])
+				domain_acc_add(conn, cd->domid, what,
+					       cd->acc[what], true);
+
+		talloc_free(cd);
+	}
+
+	conn->in = in;
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 9d05eb01da..e40657216b 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -25,8 +25,10 @@
  * a per transaction array.
  */
 enum accitem {
-	ACC_NODES,
+	ACC_REQ_N,		/* Number of elements per request. */
+	ACC_NODES = ACC_REQ_N,
 	ACC_TR_N,		/* Number of elements per transaction. */
+	ACC_CHD_N = ACC_TR_N,	/* max(ACC_REQ_N, ACC_TR_N), for changed dom. */
 	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
 };
 
@@ -113,6 +115,8 @@ int domain_get_quota(const void *ctx, struct connection *conn,
  * If "update" is true, "chk_quota" is ignored.
  */
 int acc_fix_domains(struct list_head *head, bool chk_quota, bool update);
+void acc_drop(struct connection *conn);
+void acc_commit(struct connection *conn);
 
 /* Write rate limiting */
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:24:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:24:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540778.842780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uf4-0005Yu-30; Tue, 30 May 2023 08:24:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540778.842780; Tue, 30 May 2023 08:24:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uf3-0005Yl-Vh; Tue, 30 May 2023 08:24:57 +0000
Received: by outflank-mailman (input) for mailman id 540778;
 Tue, 30 May 2023 08:24:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3uf2-0003jy-Fu
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:24:56 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 74484629-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:24:54 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 9EB0E1F8D9;
 Tue, 30 May 2023 08:24:54 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 7610F1342F;
 Tue, 30 May 2023 08:24:54 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id LMRWG9aydWQuEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:24:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74484629-fec3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435094; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PVy5LUjaBCB6xfueR1GIwgjHHdb2Ubavw8Y/RlsOUdw=;
	b=pDN+UtzvxhTkDhn4RWMABc4WHN3yCJTogfv1v1jNdCS8M7wkGfMkjMEOpTTmIt8ZpXkojK
	k9D+RPARhXycTlu3NIdnrODx0W7WA0nvqPuo4Qm5eTrVp28PcTNKiju7+TYPlMBZkcktTp
	+EBCPyBji0n/Jd0QfXNotqCMkO/j9wY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 05/14] tools/xenstore: use accounting buffering for node accounting
Date: Tue, 30 May 2023 10:24:15 +0200
Message-Id: <20230530082424.32126-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the node accounting to the accounting information buffering in
order to avoid having to undo it in case of failure.

This requires calling domain_nbentry_dec() before any changes to the
database, as it can now return an error.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V5:
- add error handling after domain_nbentry_dec() calls (Julien Grall)
V6:
- return WALK_TREE_ERROR_STOP after failed do_tdb_delete()
- add comment why calling corrupt() is fine (Julien Grall)
---
 tools/xenstore/xenstored_core.c   | 37 ++++++++++++-------------------
 tools/xenstore/xenstored_domain.h |  4 ++--
 2 files changed, 16 insertions(+), 25 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 8392bdec9b..0a9c88ca67 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1454,7 +1454,6 @@ static void destroy_node_rm(struct connection *conn, struct node *node)
 static int destroy_node(struct connection *conn, struct node *node)
 {
 	destroy_node_rm(conn, node);
-	domain_nbentry_dec(conn, get_node_owner(node));
 
 	/*
 	 * It is not possible to easily revert the changes in a transaction.
@@ -1645,9 +1644,12 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 	if (ret > 0)
 		return WALK_TREE_SUCCESS_STOP;
 
+	if (domain_nbentry_dec(conn, get_node_owner(node)))
+		return WALK_TREE_ERROR_STOP;
+
 	/* In case of error stop the walk. */
 	if (!ret && do_tdb_delete(conn, &key, &node->acc))
-		return WALK_TREE_SUCCESS_STOP;
+		return WALK_TREE_ERROR_STOP;
 
 	/*
 	 * Fire the watches now, when we can still see the node permissions.
@@ -1657,8 +1659,6 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 	watch_exact = strcmp(root, node->name);
 	fire_watches(conn, ctx, node->name, node, watch_exact, NULL);
 
-	domain_nbentry_dec(conn, get_node_owner(node));
-
 	return WALK_TREE_RM_CHILDENTRY;
 }
 
@@ -1679,6 +1679,12 @@ int rm_node(struct connection *conn, const void *ctx, const char *name)
 	ret = walk_node_tree(ctx, conn, name, &walkfuncs, (void *)name);
 	if (ret < 0) {
 		if (ret == WALK_TREE_ERROR_STOP) {
+			/*
+			 * This can't be triggered by an unprivileged guest,
+			 * so calling corrupt() is fine here.
+			 * In fact it is needed in order to fix a potential
+			 * accounting inconsistency.
+			 */
 			corrupt(conn, "error when deleting sub-nodes of %s\n",
 				name);
 			errno = EIO;
@@ -1797,29 +1803,14 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EPERM;
 
 	old_perms = node->perms;
-	domain_nbentry_dec(conn, get_node_owner(node));
+	if (domain_nbentry_dec(conn, get_node_owner(node)))
+		return ENOMEM;
 	node->perms = perms;
-	if (domain_nbentry_inc(conn, get_node_owner(node))) {
-		node->perms = old_perms;
-		/*
-		 * This should never fail because we had a reference on the
-		 * domain before and Xenstored is single-threaded.
-		 */
-		domain_nbentry_inc(conn, get_node_owner(node));
+	if (domain_nbentry_inc(conn, get_node_owner(node)))
 		return ENOMEM;
-	}
 
-	if (write_node(conn, node, false)) {
-		int saved_errno = errno;
-
-		domain_nbentry_dec(conn, get_node_owner(node));
-		node->perms = old_perms;
-		/* No failure possible as above. */
-		domain_nbentry_inc(conn, get_node_owner(node));
-
-		errno = saved_errno;
+	if (write_node(conn, node, false))
 		return errno;
-	}
 
 	fire_watches(conn, ctx, name, node, false, &old_perms);
 	send_ack(conn, XS_SET_PERMS);
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index e40657216b..466549709f 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -25,9 +25,9 @@
  * a per transaction array.
  */
 enum accitem {
+	ACC_NODES,
 	ACC_REQ_N,		/* Number of elements per request. */
-	ACC_NODES = ACC_REQ_N,
-	ACC_TR_N,		/* Number of elements per transaction. */
+	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
 	ACC_CHD_N = ACC_TR_N,	/* max(ACC_REQ_N, ACC_TR_N), for changed dom. */
 	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
 };
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:25:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:25:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540780.842790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ufA-00067J-Bm; Tue, 30 May 2023 08:25:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540780.842790; Tue, 30 May 2023 08:25:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ufA-000678-8D; Tue, 30 May 2023 08:25:04 +0000
Received: by outflank-mailman (input) for mailman id 540780;
 Tue, 30 May 2023 08:25:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3uf8-0003jy-KJ
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:25:02 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77a45502-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:24:59 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 4450D21A59;
 Tue, 30 May 2023 08:25:00 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 14CFD1342F;
 Tue, 30 May 2023 08:25:00 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id 8THRA9yydWQ5EAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:25:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77a45502-fec3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435100; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7OuYOTisLhx5B5y8mrKkHc+fXyiK/HdjNpO3kbF28Fo=;
	b=Jc4/PTJUXoRQ1dIO1ufK/Dy4y8X58j0cJtjLgPzRWky/xTMGy/2MXALCkTAQxEb6BTqjYp
	pj4a+c3DKXpr6CdjBP4grNlwothY6QbrTNzbcQR6AB2CN2Mau83C0KIoIs8igOvV+KlkHF
	z1BrhW61IYQDNuR3JM3jDsisqt/LP+w=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 06/14] tools/xenstore: add current connection to domain_memory_add() parameters
Date: Tue, 30 May 2023 10:24:16 +0200
Message-Id: <20230530082424.32126-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to enable switching memory accounting to the generic
array-based accounting, add the current connection to the parameters of
domain_memory_add().

This requires adding the connection to some other functions, too.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c   | 28 ++++++++++++++++------------
 tools/xenstore/xenstored_domain.c |  3 ++-
 tools/xenstore/xenstored_domain.h | 14 +++++++++-----
 tools/xenstore/xenstored_watch.c  | 11 ++++++-----
 4 files changed, 33 insertions(+), 23 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 0a9c88ca67..006ad9224c 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -246,7 +246,8 @@ static void free_buffered_data(struct buffered_data *out,
 		}
 	}
 
-	domain_memory_add_nochk(conn->id, -out->hdr.msg.len - sizeof(out->hdr));
+	domain_memory_add_nochk(conn, conn->id,
+				-out->hdr.msg.len - sizeof(out->hdr));
 
 	if (out->hdr.msg.type == XS_WATCH_EVENT) {
 		req = out->pend.req;
@@ -631,24 +632,25 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 	 * nodes to new owners.
 	 */
 	if (old_acc.memory)
-		domain_memory_add_nochk(old_domid,
+		domain_memory_add_nochk(conn, old_domid,
 					-old_acc.memory - key->dsize);
-	ret = domain_memory_add(new_domid, data->dsize + key->dsize,
-				no_quota_check);
+	ret = domain_memory_add(conn, new_domid,
+				data->dsize + key->dsize, no_quota_check);
 	if (ret) {
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
-			domain_memory_add_nochk(old_domid,
+			domain_memory_add_nochk(conn, old_domid,
 						old_acc.memory + key->dsize);
 		return ret;
 	}
 
 	/* TDB should set errno, but doesn't even set ecode AFAICT. */
 	if (tdb_store(tdb_ctx, *key, *data, TDB_REPLACE) != 0) {
-		domain_memory_add_nochk(new_domid, -data->dsize - key->dsize);
+		domain_memory_add_nochk(conn, new_domid,
+					-data->dsize - key->dsize);
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
-			domain_memory_add_nochk(old_domid,
+			domain_memory_add_nochk(conn, old_domid,
 						old_acc.memory + key->dsize);
 		errno = EIO;
 		return errno;
@@ -683,7 +685,7 @@ int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 
 	if (acc->memory) {
 		domid = get_acc_domid(conn, key, acc->domid);
-		domain_memory_add_nochk(domid, -acc->memory - key->dsize);
+		domain_memory_add_nochk(conn, domid, -acc->memory - key->dsize);
 	}
 
 	return 0;
@@ -1055,11 +1057,13 @@ void send_reply(struct connection *conn, enum xsd_sockmsg_type type,
 	if (len <= DEFAULT_BUFFER_SIZE) {
 		bdata->buffer = bdata->default_buffer;
 		/* Don't check quota, path might be used for returning error. */
-		domain_memory_add_nochk(conn->id, len + sizeof(bdata->hdr));
+		domain_memory_add_nochk(conn, conn->id,
+					len + sizeof(bdata->hdr));
 	} else {
 		bdata->buffer = talloc_array(bdata, char, len);
 		if (!bdata->buffer ||
-		    domain_memory_add_chk(conn->id, len + sizeof(bdata->hdr))) {
+		    domain_memory_add_chk(conn, conn->id,
+					  len + sizeof(bdata->hdr))) {
 			send_error(conn, ENOMEM);
 			return;
 		}
@@ -1124,7 +1128,7 @@ void send_event(struct buffered_data *req, struct connection *conn,
 		}
 	}
 
-	if (domain_memory_add_chk(conn->id, len + sizeof(bdata->hdr))) {
+	if (domain_memory_add_chk(conn, conn->id, len + sizeof(bdata->hdr))) {
 		talloc_free(bdata);
 		return;
 	}
@@ -3332,7 +3336,7 @@ static void add_buffered_data(struct buffered_data *bdata,
 	 * be smaller. So ignore it. The limit will be applied for any resource
 	 * after the state has been fully restored.
 	 */
-	domain_memory_add_nochk(conn->id, len + sizeof(bdata->hdr));
+	domain_memory_add_nochk(conn, conn->id, len + sizeof(bdata->hdr));
 }
 
 void read_state_buffered_data(const void *ctx, struct connection *conn,
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index e59e40088e..7770c4f395 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1236,7 +1236,8 @@ static bool domain_chk_quota(struct domain *domain, int mem)
 	return false;
 }
 
-int domain_memory_add(unsigned int domid, int mem, bool no_quota_check)
+int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
+		      bool no_quota_check)
 {
 	struct domain *domain;
 
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 466549709f..b94548fd7d 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -81,25 +81,29 @@ int domain_nbentry_inc(struct connection *conn, unsigned int domid);
 int domain_nbentry_dec(struct connection *conn, unsigned int domid);
 int domain_nbentry_fix(unsigned int domid, int num, bool update);
 unsigned int domain_nbentry(struct connection *conn);
-int domain_memory_add(unsigned int domid, int mem, bool no_quota_check);
+int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
+		      bool no_quota_check);
 
 /*
  * domain_memory_add_chk(): to be used when memory quota should be checked.
  * Not to be used when specifying a negative mem value, as lowering the used
  * memory should always be allowed.
  */
-static inline int domain_memory_add_chk(unsigned int domid, int mem)
+static inline int domain_memory_add_chk(struct connection *conn,
+					unsigned int domid, int mem)
 {
-	return domain_memory_add(domid, mem, false);
+	return domain_memory_add(conn, domid, mem, false);
 }
+
 /*
  * domain_memory_add_nochk(): to be used when memory quota should not be
  * checked, e.g. when lowering memory usage, or in an error case for undoing
  * a previous memory adjustment.
  */
-static inline void domain_memory_add_nochk(unsigned int domid, int mem)
+static inline void domain_memory_add_nochk(struct connection *conn,
+					   unsigned int domid, int mem)
 {
-	domain_memory_add(domid, mem, true);
+	domain_memory_add(conn, domid, mem, true);
 }
 void domain_watch_inc(struct connection *conn);
 void domain_watch_dec(struct connection *conn);
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 8ad0229df6..e30cd89be3 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -199,7 +199,7 @@ static struct watch *add_watch(struct connection *conn, char *path, char *token,
 	watch->token = talloc_strdup(watch, token);
 	if (!watch->node || !watch->token)
 		goto nomem;
-	if (domain_memory_add(conn->id, strlen(path) + strlen(token),
+	if (domain_memory_add(conn, conn->id, strlen(path) + strlen(token),
 			      no_quota_check))
 		goto nomem;
 
@@ -274,8 +274,9 @@ int do_unwatch(const void *ctx, struct connection *conn,
 	list_for_each_entry(watch, &conn->watches, list) {
 		if (streq(watch->node, node) && streq(watch->token, vec[1])) {
 			list_del(&watch->list);
-			domain_memory_add_nochk(conn->id, -strlen(watch->node) -
-							  strlen(watch->token));
+			domain_memory_add_nochk(conn, conn->id,
+						-strlen(watch->node) -
+						strlen(watch->token));
 			talloc_free(watch);
 			domain_watch_dec(conn);
 			send_ack(conn, XS_UNWATCH);
@@ -291,8 +292,8 @@ void conn_delete_all_watches(struct connection *conn)
 
 	while ((watch = list_top(&conn->watches, struct watch, list))) {
 		list_del(&watch->list);
-		domain_memory_add_nochk(conn->id, -strlen(watch->node) -
-						  strlen(watch->token));
+		domain_memory_add_nochk(conn, conn->id, -strlen(watch->node) -
+							strlen(watch->token));
 		talloc_free(watch);
 		domain_watch_dec(conn);
 	}
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:25:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:25:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540785.842799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ufF-0006al-Lx; Tue, 30 May 2023 08:25:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540785.842799; Tue, 30 May 2023 08:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ufF-0006aa-JB; Tue, 30 May 2023 08:25:09 +0000
Received: by outflank-mailman (input) for mailman id 540785;
 Tue, 30 May 2023 08:25:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3ufD-0003jy-T5
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:25:07 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7afecc01-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:25:05 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id DF5731F889;
 Tue, 30 May 2023 08:25:05 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id A84081342F;
 Tue, 30 May 2023 08:25:05 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id GXMbJ+GydWRBEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:25:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7afecc01-fec3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435105; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DHl2BsepDMEj6T8xhPpCjPhpm9Zzmks3cBUz9DoXzj8=;
	b=MmgtHG00ykKI59EX2xkRoRrp2QTQIqMGs5coLdIqfa5e8T8GyblJwTj8d/1Ja1CehCIK8D
	UZ17FiweudVIJRT3IY/tW96AjrHHPHBkKc6Lhv7lPIwS9geMI2oGliYfSQi5qGQkefacOW
	EYbMqoMC/t51MCRruMs+DSkbbDEtFaA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 07/14] tools/xenstore: use accounting data array for per-domain values
Date: Tue, 30 May 2023 10:24:17 +0200
Message-Id: <20230530082424.32126-8-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the accounting of per-domain usage of Xenstore memory, watches, and
outstanding requests to the array-based mechanism.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V5:
- drop domid parameter from domain_outstanding_inc() (Julien Grall)
---
 tools/xenstore/xenstored_core.c   |   4 +-
 tools/xenstore/xenstored_domain.c | 109 +++++++++++-------------------
 tools/xenstore/xenstored_domain.h |   8 ++-
 3 files changed, 48 insertions(+), 73 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 006ad9224c..43b8772cb3 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -255,7 +255,7 @@ static void free_buffered_data(struct buffered_data *out,
 			req->pend.ref.event_cnt--;
 			if (!req->pend.ref.event_cnt && !req->on_out_list) {
 				if (req->on_ref_list) {
-					domain_outstanding_domid_dec(
+					domain_outstanding_dec(conn,
 						req->pend.ref.domid);
 					list_del(&req->list);
 				}
@@ -271,7 +271,7 @@ static void free_buffered_data(struct buffered_data *out,
 		out->on_ref_list = true;
 		return;
 	} else
-		domain_outstanding_dec(conn);
+		domain_outstanding_dec(conn, conn->id);
 
 	talloc_free(out);
 }
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 7770c4f395..a35ed97fd7 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -72,19 +72,12 @@ struct domain
 	/* Accounting data for this domain. */
 	unsigned int acc[ACC_N];
 
-	/* Amount of memory allocated for this domain. */
-	int memory;
+	/* Memory quota data for this domain. */
 	bool soft_quota_reported;
 	bool hard_quota_reported;
 	time_t mem_last_msg;
 #define MEM_WARN_MINTIME_SEC 10
 
-	/* number of watch for this domain */
-	int nbwatch;
-
-	/* Number of outstanding requests. */
-	int nboutstanding;
-
 	/* write rate limit */
 	wrl_creditt wrl_credit; /* [ -wrl_config_writecost, +_dburst ] */
 	struct wrl_timestampt wrl_timestamp;
@@ -200,14 +193,15 @@ static bool domain_can_write(struct connection *conn)
 
 static bool domain_can_read(struct connection *conn)
 {
-	struct xenstore_domain_interface *intf = conn->domain->interface;
+	struct domain *domain = conn->domain;
+	struct xenstore_domain_interface *intf = domain->interface;
 
 	if (domain_is_unprivileged(conn)) {
-		if (conn->domain->wrl_credit < 0)
+		if (domain->wrl_credit < 0)
 			return false;
-		if (conn->domain->nboutstanding >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST] >= quota_req_outstanding)
 			return false;
-		if (conn->domain->memory >= quota_memory_per_domain_hard &&
+		if (domain->acc[ACC_MEM] >= quota_memory_per_domain_hard &&
 		    quota_memory_per_domain_hard)
 			return false;
 	}
@@ -438,10 +432,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	if (!resp) return ENOMEM
 
 	ent(nodes, d->acc[ACC_NODES]);
-	ent(watches, d->nbwatch);
+	ent(watches, d->acc[ACC_WATCH]);
 	ent(transactions, ta);
-	ent(outstanding, d->nboutstanding);
-	ent(memory, d->memory);
+	ent(outstanding, d->acc[ACC_OUTST]);
+	ent(memory, d->acc[ACC_MEM]);
 
 #undef ent
 
@@ -1187,14 +1181,16 @@ unsigned int domain_nbentry(struct connection *conn)
 	       ? domain_acc_add(conn, conn->id, ACC_NODES, 0, true) : 0;
 }
 
-static bool domain_chk_quota(struct domain *domain, int mem)
+static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 {
 	time_t now;
+	struct domain *domain;
 
-	if (!domain || !domid_is_unprivileged(domain->domid) ||
-	    (domain->conn && domain->conn->is_ignored))
+	if (!conn || !domid_is_unprivileged(conn->id) ||
+	    conn->is_ignored)
 		return false;
 
+	domain = conn->domain;
 	now = time(NULL);
 
 	if (mem >= quota_memory_per_domain_hard &&
@@ -1239,80 +1235,57 @@ static bool domain_chk_quota(struct domain *domain, int mem)
 int domain_memory_add(struct connection *conn, unsigned int domid, int mem,
 		      bool no_quota_check)
 {
-	struct domain *domain;
+	int ret;
 
-	domain = find_domain_struct(domid);
-	if (domain) {
-		/*
-		 * domain_chk_quota() will print warning and also store whether
-		 * the soft/hard quota has been hit. So check no_quota_check
-		 * *after*.
-		 */
-		if (domain_chk_quota(domain, domain->memory + mem) &&
-		    !no_quota_check)
-			return ENOMEM;
-		domain->memory += mem;
-	} else {
-		/*
-		 * The domain the memory is to be accounted for should always
-		 * exist, as accounting is done either for a domain related to
-		 * the current connection, or for the domain owning a node
-		 * (which is always existing, as the owner of the node is
-		 * tested to exist and deleted or replaced by domid 0 if not).
-		 * So not finding the related domain MUST be an error in the
-		 * data base.
-		 */
-		errno = ENOENT;
-		corrupt(NULL, "Accounting called for non-existing domain %u\n",
-			domid);
-		return ENOENT;
-	}
+	ret = domain_acc_add(conn, domid, ACC_MEM, 0, true);
+	if (ret < 0)
+		return -ret;
+
+	/*
+	 * domain_chk_quota() will print warning and also store whether the
+	 * soft/hard quota has been hit. So check no_quota_check *after*.
+	 */
+	if (domain_chk_quota(conn, ret + mem) && !no_quota_check)
+		return ENOMEM;
+
+	/*
+	 * The domain the memory is to be accounted for should always exist,
+	 * as accounting is done either for a domain related to the current
+	 * connection, or for the domain owning a node (which always
+	 * exists, as the owner of the node is tested for existence and
+	 * deleted or replaced by domid 0 if not).
+	 * So not finding the related domain MUST be an error in the database.
+	 */
+	domain_acc_add(conn, domid, ACC_MEM, mem, true);
 
 	return 0;
 }
 
 void domain_watch_inc(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nbwatch++;
+	domain_acc_add(conn, conn->id, ACC_WATCH, 1, true);
 }
 
 void domain_watch_dec(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	if (conn->domain->nbwatch)
-		conn->domain->nbwatch--;
+	domain_acc_add(conn, conn->id, ACC_WATCH, -1, true);
 }
 
 int domain_watch(struct connection *conn)
 {
 	return (domain_is_unprivileged(conn))
-		? conn->domain->nbwatch
+		? domain_acc_add(conn, conn->id, ACC_WATCH, 0, true)
 		: 0;
 }
 
 void domain_outstanding_inc(struct connection *conn)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nboutstanding++;
+	domain_acc_add(conn, conn->id, ACC_OUTST, 1, true);
 }
 
-void domain_outstanding_dec(struct connection *conn)
+void domain_outstanding_dec(struct connection *conn, unsigned int domid)
 {
-	if (!conn || !conn->domain)
-		return;
-	conn->domain->nboutstanding--;
-}
-
-void domain_outstanding_domid_dec(unsigned int domid)
-{
-	struct domain *d = find_domain_by_domid(domid);
-
-	if (d)
-		d->nboutstanding--;
+	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
 }
 
 static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index b94548fd7d..086133407b 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -29,7 +29,10 @@ enum accitem {
 	ACC_REQ_N,		/* Number of elements per request. */
 	ACC_TR_N = ACC_REQ_N,	/* Number of elements per transaction. */
 	ACC_CHD_N = ACC_TR_N,	/* max(ACC_REQ_N, ACC_TR_N), for changed dom. */
-	ACC_N = ACC_TR_N,	/* Number of elements per domain. */
+	ACC_WATCH = ACC_TR_N,
+	ACC_OUTST,
+	ACC_MEM,
+	ACC_N,			/* Number of elements per domain. */
 };
 
 void handle_event(void);
@@ -109,8 +112,7 @@ void domain_watch_inc(struct connection *conn);
 void domain_watch_dec(struct connection *conn);
 int domain_watch(struct connection *conn);
 void domain_outstanding_inc(struct connection *conn);
-void domain_outstanding_dec(struct connection *conn);
-void domain_outstanding_domid_dec(unsigned int domid);
+void domain_outstanding_dec(struct connection *conn, unsigned int domid);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:25:13 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 08/14] tools/xenstore: add accounting trace support
Date: Tue, 30 May 2023 10:24:18 +0200
Message-Id: <20230530082424.32126-9-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new trace switch "acc" and the related trace calls.

The "acc" switch is off by default.
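A minimal sketch of the trace-switch mechanism (the lookup helper below is hypothetical; the patch only extends the existing tables): each name in `trace_switches[]` maps to the bit `1u << index` in `trace_flags`, and `trace_acc()` emits output only when the `TRACE_ACC` bit is set:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define TRACE_OBJ 0x1
#define TRACE_IO  0x2
#define TRACE_WRL 0x4
#define TRACE_ACC 0x8

static unsigned int trace_flags;
/* Sorted by bit values of TRACE_* flags, as in the patch. */
static const char *const trace_switches[] = { "obj", "io", "wrl", "acc", NULL };

/* Hypothetical lookup helper: set the flag bit for a switch name. */
static int set_trace_switch(const char *arg)
{
	unsigned int idx;

	for (idx = 0; trace_switches[idx]; idx++) {
		if (!strcmp(arg, trace_switches[idx])) {
			trace_flags |= 1u << idx;
			return 0;
		}
	}
	return -1;	/* unknown switch */
}

/* Conditional trace macro, mirroring trace_acc() from the patch. */
#define trace_acc(...)				\
do {						\
	if (trace_flags & TRACE_ACC)		\
		printf("acc: " __VA_ARGS__);	\
} while (0)
```

The `do { ... } while (0)` wrapper lets the macro be used like an ordinary statement, including in unbraced `if`/`else` bodies.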

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c   |  2 +-
 tools/xenstore/xenstored_core.h   |  1 +
 tools/xenstore/xenstored_domain.c | 10 ++++++++++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 43b8772cb3..eb916b0647 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2756,7 +2756,7 @@ static void set_quota(const char *arg, bool soft)
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
 const char *const trace_switches[] = {
-	"obj", "io", "wrl",
+	"obj", "io", "wrl", "acc",
 	NULL
 };
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 1f811f38cb..3e0734a6c6 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -302,6 +302,7 @@ extern unsigned int trace_flags;
 #define TRACE_OBJ	0x00000001
 #define TRACE_IO	0x00000002
 #define TRACE_WRL	0x00000004
+#define TRACE_ACC	0x00000008
 extern const char *const trace_switches[];
 int set_trace_switch(const char *arg);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index a35ed97fd7..03825ca24b 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -538,6 +538,12 @@ static struct domain *find_domain_by_domid(unsigned int domid)
 	return (d && d->introduced) ? d : NULL;
 }
 
+#define trace_acc(...)				\
+do {						\
+	if (trace_flags & TRACE_ACC)		\
+		trace("acc: " __VA_ARGS__);	\
+} while (0)
+
 int acc_fix_domains(struct list_head *head, bool chk_quota, bool update)
 {
 	struct changed_domain *cd;
@@ -601,6 +607,8 @@ static int acc_add_changed_dom(const void *ctx, struct list_head *head,
 		return 0;
 
 	errno = 0;
+	trace_acc("local change domid %u: what=%u %d add %d\n", domid, what,
+		  cd->acc[what], val);
 	cd->acc[what] += val;
 
 	return cd->acc[what];
@@ -1112,6 +1120,8 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 		return domain_acc_add_valid(d, what, ret);
 	}
 
+	trace_acc("global change domid %u: what=%u %u add %d\n", domid, what,
+		  d->acc[what], add);
 	d->acc[what] = domain_acc_add_valid(d, what, add);
 
 	return d->acc[what];
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:25:19 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 09/14] tools/xenstore: add TDB access trace support
Date: Tue, 30 May 2023 10:24:19 +0200
Message-Id: <20230530082424.32126-10-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new trace switch "tdb" and the related trace calls.

The "tdb" switch is off by default.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c        | 8 +++++++-
 tools/xenstore/xenstored_core.h        | 7 +++++++
 tools/xenstore/xenstored_transaction.c | 7 ++++++-
 3 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index eb916b0647..069c03d4b0 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -589,6 +589,8 @@ static void get_acc_data(TDB_DATA *key, struct node_account_data *acc)
 		if (old_data.dptr == NULL) {
 			acc->memory = 0;
 		} else {
+			trace_tdb("read %s size %zu\n", key->dptr,
+				  old_data.dsize + key->dsize);
 			hdr = (void *)old_data.dptr;
 			acc->memory = old_data.dsize;
 			acc->domid = hdr->perms[0].id;
@@ -655,6 +657,7 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 		errno = EIO;
 		return errno;
 	}
+	trace_tdb("store %s size %zu\n", key->dptr, data->dsize + key->dsize);
 
 	if (acc) {
 		/* Don't use new_domid, as it might be a transaction node. */
@@ -682,6 +685,7 @@ int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 		errno = EIO;
 		return errno;
 	}
+	trace_tdb("delete %s\n", key->dptr);
 
 	if (acc->memory) {
 		domid = get_acc_domid(conn, key, acc->domid);
@@ -731,6 +735,8 @@ struct node *read_node(struct connection *conn, const void *ctx,
 		goto error;
 	}
 
+	trace_tdb("read %s size %zu\n", key.dptr, data.dsize + key.dsize);
+
 	node->parent = NULL;
 	talloc_steal(node, data.dptr);
 
@@ -2756,7 +2762,7 @@ static void set_quota(const char *arg, bool soft)
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
 const char *const trace_switches[] = {
-	"obj", "io", "wrl", "acc",
+	"obj", "io", "wrl", "acc", "tdb",
 	NULL
 };
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 3e0734a6c6..5a11dc1231 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -303,9 +303,16 @@ extern unsigned int trace_flags;
 #define TRACE_IO	0x00000002
 #define TRACE_WRL	0x00000004
 #define TRACE_ACC	0x00000008
+#define TRACE_TDB	0x00000010
 extern const char *const trace_switches[];
 int set_trace_switch(const char *arg);
 
+#define trace_tdb(...)				\
+do {						\
+	if (trace_flags & TRACE_TDB)		\
+		trace("tdb: " __VA_ARGS__);	\
+} while (0)
+
 extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
 extern int dom0_event;
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 2b15506953..11c8bcec84 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -374,8 +374,11 @@ static int finalize_transaction(struct connection *conn,
 				if (tdb_error(tdb_ctx) != TDB_ERR_NOEXIST)
 					return EIO;
 				gen = NO_GENERATION;
-			} else
+			} else {
+				trace_tdb("read %s size %zu\n", key.dptr,
+					  key.dsize + data.dsize);
 				gen = hdr->generation;
+			}
 			talloc_free(data.dptr);
 			if (i->generation != gen)
 				return EAGAIN;
@@ -399,6 +402,8 @@ static int finalize_transaction(struct connection *conn,
 			set_tdb_key(i->trans_name, &ta_key);
 			data = tdb_fetch(tdb_ctx, ta_key);
 			if (data.dptr) {
+				trace_tdb("read %s size %zu\n", ta_key.dptr,
+					  ta_key.dsize + data.dsize);
 				hdr = (void *)data.dptr;
 				hdr->generation = ++generation;
 				*is_corrupt |= do_tdb_write(conn, &key, &data,
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:25:26 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 10/14] tools/xenstore: switch transaction accounting to generic accounting
Date: Tue, 30 May 2023 10:24:20 +0200
Message-Id: <20230530082424.32126-11-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As transaction accounting is active for unprivileged domains only, it
can easily be added to the generic per-domain accounting.
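A hypothetical, minimal model of the resulting quota check (simplified from the patch; `struct connection` and the helper are stripped down to the essentials): transactions are counted in the same per-domain `acc[]` array, and privileged connections always report 0 so they are never limited by the quota:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

enum accitem { ACC_TRANS, ACC_N };

struct connection {
	bool unprivileged;
	int acc[ACC_N];		/* per-domain accounting slots */
};

static const int quota_max_transaction = 10;

/* Privileged domains are exempt from the transaction quota. */
static unsigned int domain_transaction_get(struct connection *conn)
{
	return conn->unprivileged ? conn->acc[ACC_TRANS] : 0;
}

static int do_transaction_start(struct connection *conn)
{
	if (domain_transaction_get(conn) > quota_max_transaction)
		return ENOSPC;
	conn->acc[ACC_TRANS]++;	/* domain_transaction_inc() */
	return 0;
}
```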

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V5:
- use list_empty(&conn->transaction_list) for detection of "no
  transaction active" (Julien Grall)
V6:
- move comment (Julien Grall)
---
 tools/xenstore/xenstored_core.c        |  3 +--
 tools/xenstore/xenstored_core.h        |  1 -
 tools/xenstore/xenstored_domain.c      | 21 ++++++++++++++++++---
 tools/xenstore/xenstored_domain.h      |  4 ++++
 tools/xenstore/xenstored_transaction.c | 15 +++++++--------
 5 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 069c03d4b0..3343f939b4 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2093,7 +2093,7 @@ static void consider_message(struct connection *conn)
 	 * stalled. This will ignore new requests until Live-Update happened
 	 * or it was aborted.
 	 */
-	if (lu_is_pending() && conn->transaction_started == 0 &&
+	if (lu_is_pending() && list_empty(&conn->transaction_list) &&
 	    conn->in->hdr.msg.type == XS_TRANSACTION_START) {
 		trace("Delaying transaction start for connection %p req_id %u\n",
 		      conn, conn->in->hdr.msg.req_id);
@@ -2200,7 +2200,6 @@ struct connection *new_connection(const struct interface_funcs *funcs)
 	new->funcs = funcs;
 	new->is_ignored = false;
 	new->is_stalled = false;
-	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
 	INIT_LIST_HEAD(&new->acc_list);
 	INIT_LIST_HEAD(&new->ref_list);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 5a11dc1231..3564d85d7d 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -151,7 +151,6 @@ struct connection
 	/* List of in-progress transactions. */
 	struct list_head transaction_list;
 	uint32_t next_transaction_id;
-	unsigned int transaction_started;
 	time_t ta_start_time;
 
 	/* List of delayed requests. */
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 03825ca24b..25c6d20036 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -417,12 +417,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 {
 	struct domain *d = find_domain_struct(domid);
 	char *resp;
-	int ta;
 
 	if (!d)
 		return ENOENT;
 
-	ta = d->conn ? d->conn->transaction_started : 0;
 	resp = talloc_asprintf(ctx, "Domain %u:\n", domid);
 	if (!resp)
 		return ENOMEM;
@@ -433,7 +431,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 
 	ent(nodes, d->acc[ACC_NODES]);
 	ent(watches, d->acc[ACC_WATCH]);
-	ent(transactions, ta);
+	ent(transactions, d->acc[ACC_TRANS]);
 	ent(outstanding, d->acc[ACC_OUTST]);
 	ent(memory, d->acc[ACC_MEM]);
 
@@ -1298,6 +1296,23 @@ void domain_outstanding_dec(struct connection *conn, unsigned int domid)
 	domain_acc_add(conn, domid, ACC_OUTST, -1, true);
 }
 
+void domain_transaction_inc(struct connection *conn)
+{
+	domain_acc_add(conn, conn->id, ACC_TRANS, 1, true);
+}
+
+void domain_transaction_dec(struct connection *conn)
+{
+	domain_acc_add(conn, conn->id, ACC_TRANS, -1, true);
+}
+
+unsigned int domain_transaction_get(struct connection *conn)
+{
+	return (domain_is_unprivileged(conn))
+		? domain_acc_add(conn, conn->id, ACC_TRANS, 0, true)
+		: 0;
+}
+
 static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
 static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
 static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 086133407b..01b6f1861b 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -32,6 +32,7 @@ enum accitem {
 	ACC_WATCH = ACC_TR_N,
 	ACC_OUTST,
 	ACC_MEM,
+	ACC_TRANS,
 	ACC_N,			/* Number of elements per domain. */
 };
 
@@ -113,6 +114,9 @@ void domain_watch_dec(struct connection *conn);
 int domain_watch(struct connection *conn);
 void domain_outstanding_inc(struct connection *conn);
 void domain_outstanding_dec(struct connection *conn, unsigned int domid);
+void domain_transaction_inc(struct connection *conn);
+void domain_transaction_dec(struct connection *conn);
+unsigned int domain_transaction_get(struct connection *conn);
 int domain_get_quota(const void *ctx, struct connection *conn,
 		     unsigned int domid);
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 11c8bcec84..9cfb0017c8 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -479,8 +479,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (conn->transaction)
 		return EBUSY;
 
-	if (domain_is_unprivileged(conn) &&
-	    conn->transaction_started > quota_max_transaction)
+	if (domain_transaction_get(conn) > quota_max_transaction)
 		return ENOSPC;
 
 	/* Attach transaction to ctx for autofree until it's complete */
@@ -501,13 +500,14 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 		exists = transaction_lookup(conn, conn->next_transaction_id++);
 	} while (!IS_ERR(exists));
 
+	if (list_empty(&conn->transaction_list))
+		conn->ta_start_time = time(NULL);
+
 	/* Now we own it. */
 	list_add_tail(&trans->list, &conn->transaction_list);
 	talloc_steal(conn, trans);
 	talloc_set_destructor(trans, destroy_transaction);
-	if (!conn->transaction_started)
-		conn->ta_start_time = time(NULL);
-	conn->transaction_started++;
+	domain_transaction_inc(conn);
 	wrl_ntransactions++;
 
 	snprintf(id_str, sizeof(id_str), "%u", trans->id);
@@ -533,8 +533,8 @@ int do_transaction_end(const void *ctx, struct connection *conn,
 
 	conn->transaction = NULL;
 	list_del(&trans->list);
-	conn->transaction_started--;
-	if (!conn->transaction_started)
+	domain_transaction_dec(conn);
+	if (list_empty(&conn->transaction_list))
 		conn->ta_start_time = 0;
 
 	chk_quota = trans->node_created && domain_is_unprivileged(conn);
@@ -588,7 +588,6 @@ void conn_delete_all_transactions(struct connection *conn)
 
 	assert(conn->transaction == NULL);
 
-	conn->transaction_started = 0;
 	conn->ta_start_time = 0;
 }
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:25:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:25:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540797.842840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ufa-0000HR-0b; Tue, 30 May 2023 08:25:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540797.842840; Tue, 30 May 2023 08:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ufZ-0000GV-Rg; Tue, 30 May 2023 08:25:29 +0000
Received: by outflank-mailman (input) for mailman id 540797;
 Tue, 30 May 2023 08:25:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3ufZ-0004aU-Cy
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:25:29 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 88c648e3-fec3-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:25:28 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6EA42219FE;
 Tue, 30 May 2023 08:25:28 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 25B181342F;
 Tue, 30 May 2023 08:25:28 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id NDPPB/iydWR7EAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:25:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88c648e3-fec3-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435128; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ia066ss6DTXiD5PD3QIOik+sQxxMkgbmr/cEVOihzyk=;
	b=evEyQlHIjl/Ewq3mKgq5PhhYfh0V/g3t7Dmfb4m4TObb+r6mzLKlg+hbUwObTz5Th3vdKm
	Nj/yprAgVJ+b1RPOngdZAEVAQnxXtA8h7Dm4ll6+rpEJirP+4i871wU8OmyJIaoFt5ap/i
	CLvjnyMx9olMLg2F4YhO+W3z0AYEsFo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 11/14] tools/xenstore: remember global and per domain max accounting values
Date: Tue, 30 May 2023 10:24:21 +0200
Message-Id: <20230530082424.32126-12-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Record the maximum values of the different accounting data seen per
domain and (for unprivileged domains) globally, and print those
values via the xenstore-control quota command. Add a sub-command for
resetting the global maximum values seen.

This should help when deciding how to set the related quotas.
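
The accounting scheme introduced here can be illustrated with a minimal,
self-contained sketch (names like acc_add() and acc_reset_max() are
illustrative, not the patch's actual identifiers): each counter carries its
current value plus the peak ever seen, and a reset re-bases the maximum on
the current value, as domain_reset_global_acc_sub() does in the patch.

```c
#include <assert.h>

/* Hypothetical miniature of the per-item accounting in this patch:
 * each counter keeps its current value and the maximum ever seen. */
struct acc {
	unsigned int val;
	unsigned int max;
};

/* Apply a signed delta and update the running maximum. */
static unsigned int acc_add(struct acc *a, int add)
{
	a->val += add;
	if (a->val > a->max)
		a->max = a->val;
	return a->val;
}

/* Resetting does not zero the maximum: it re-bases it on the current
 * value, so a still-active peak is not forgotten. */
static void acc_reset_max(struct acc *a)
{
	a->max = a->val;
}
```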

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 docs/misc/xenstore.txt             |   5 +-
 tools/xenstore/xenstored_control.c |  22 ++++++-
 tools/xenstore/xenstored_domain.c  | 100 +++++++++++++++++++++++------
 tools/xenstore/xenstored_domain.h  |   2 +
 4 files changed, 108 insertions(+), 21 deletions(-)

diff --git a/docs/misc/xenstore.txt b/docs/misc/xenstore.txt
index d807ef0709..38015835b1 100644
--- a/docs/misc/xenstore.txt
+++ b/docs/misc/xenstore.txt
@@ -426,7 +426,7 @@ CONTROL			<command>|[<parameters>|]
 	print|<string>
 		print <string> to syslog (xenstore runs as daemon) or
 		to console (xenstore runs as stubdom)
-	quota|[set <name> <val>|<domid>]
+	quota|[set <name> <val>|<domid>|max [-r]]
 		without parameters: print the current quota settings
 		with "set <name> <val>": set the quota <name> to new value
 		<val> (The admin should make sure all the domain usage is
@@ -435,6 +435,9 @@ CONTROL			<command>|[<parameters>|]
 		violating the new quota setting isn't increased further)
 		with "<domid>": print quota related accounting data for
 		the domain <domid>
+		with "max [-r]": show global per-domain maximum values of all
+		unprivileged domains, optionally reset the values by adding
+		"-r"
 	quota-soft|[set <name> <val>]
 		like the "quota" command, but for soft-quota.
 	help			<supported-commands>
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 403295788a..620a7da997 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -306,6 +306,22 @@ static int quota_get(const void *ctx, struct connection *conn,
 	return domain_get_quota(ctx, conn, atoi(vec[0]));
 }
 
+static int quota_max(const void *ctx, struct connection *conn,
+		     char **vec, int num)
+{
+	if (num > 1)
+		return EINVAL;
+
+	if (num == 1) {
+		if (!strcmp(vec[0], "-r"))
+			domain_reset_global_acc();
+		else
+			return EINVAL;
+	}
+
+	return domain_max_global_acc(ctx, conn);
+}
+
 static int do_control_quota(const void *ctx, struct connection *conn,
 			    char **vec, int num)
 {
@@ -315,6 +331,9 @@ static int do_control_quota(const void *ctx, struct connection *conn,
 	if (!strcmp(vec[0], "set"))
 		return quota_set(ctx, conn, vec + 1, num - 1, hard_quotas);
 
+	if (!strcmp(vec[0], "max"))
+		return quota_max(ctx, conn, vec + 1, num - 1);
+
 	return quota_get(ctx, conn, vec, num);
 }
 
@@ -978,7 +997,8 @@ static struct cmd_s cmds[] = {
 	{ "memreport", do_control_memreport, "[<file>]" },
 #endif
 	{ "print", do_control_print, "<string>" },
-	{ "quota", do_control_quota, "[set <name> <val>|<domid>]" },
+	{ "quota", do_control_quota,
+		"[set <name> <val>|<domid>|max [-r]]" },
 	{ "quota-soft", do_control_quota_s, "[set <name> <val>]" },
 	{ "help", do_control_help, "" },
 };
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 25c6d20036..6f3a27765a 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,6 +43,8 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
+static unsigned int acc_global_max[ACC_N];
+
 struct domain
 {
 	/* The id of this domain */
@@ -70,7 +72,10 @@ struct domain
 	bool introduced;
 
 	/* Accounting data for this domain. */
-	unsigned int acc[ACC_N];
+	struct acc {
+		unsigned int val;
+		unsigned int max;
+	} acc[ACC_N];
 
 	/* Memory quota data for this domain. */
 	bool soft_quota_reported;
@@ -199,9 +204,9 @@ static bool domain_can_read(struct connection *conn)
 	if (domain_is_unprivileged(conn)) {
 		if (domain->wrl_credit < 0)
 			return false;
-		if (domain->acc[ACC_OUTST] >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST].val >= quota_req_outstanding)
 			return false;
-		if (domain->acc[ACC_MEM] >= quota_memory_per_domain_hard &&
+		if (domain->acc[ACC_MEM].val >= quota_memory_per_domain_hard &&
 		    quota_memory_per_domain_hard)
 			return false;
 	}
@@ -264,7 +269,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		ret = WALK_TREE_SKIP_CHILDREN;
 	}
 
-	return domain->acc[ACC_NODES] ? ret : WALK_TREE_SUCCESS_STOP;
+	return domain->acc[ACC_NODES].val ? ret : WALK_TREE_SUCCESS_STOP;
 }
 
 static void domain_tree_remove(struct domain *domain)
@@ -272,7 +277,7 @@ static void domain_tree_remove(struct domain *domain)
 	int ret;
 	struct walk_funcs walkfuncs = { .enter = domain_tree_remove_sub };
 
-	if (domain->acc[ACC_NODES]) {
+	if (domain->acc[ACC_NODES].val) {
 		ret = walk_node_tree(domain, NULL, "/", &walkfuncs, domain);
 		if (ret == WALK_TREE_ERROR_STOP)
 			syslog(LOG_ERR,
@@ -426,14 +431,41 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8d\n", #t, e); \
+	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u\n", #t, \
+				      d->acc[e].val, d->acc[e].max); \
+	if (!resp) return ENOMEM
+
+	ent(nodes, ACC_NODES);
+	ent(watches, ACC_WATCH);
+	ent(transactions, ACC_TRANS);
+	ent(outstanding, ACC_OUTST);
+	ent(memory, ACC_MEM);
+
+#undef ent
+
+	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
+
+	return 0;
+}
+
+int domain_max_global_acc(const void *ctx, struct connection *conn)
+{
+	char *resp;
+
+	resp = talloc_asprintf(ctx, "Max. seen accounting values:\n");
+	if (!resp)
+		return ENOMEM;
+
+#define ent(t, e) \
+	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
+				      acc_global_max[e]);         \
 	if (!resp) return ENOMEM
 
-	ent(nodes, d->acc[ACC_NODES]);
-	ent(watches, d->acc[ACC_WATCH]);
-	ent(transactions, d->acc[ACC_TRANS]);
-	ent(outstanding, d->acc[ACC_OUTST]);
-	ent(memory, d->acc[ACC_MEM]);
+	ent(nodes, ACC_NODES);
+	ent(watches, ACC_WATCH);
+	ent(transactions, ACC_TRANS);
+	ent(outstanding, ACC_OUTST);
+	ent(memory, ACC_MEM);
 
 #undef ent
 
@@ -1049,10 +1081,12 @@ int domain_adjust_node_perms(struct node *node)
 
 static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 {
+	unsigned int val;
+
 	assert(what < ARRAY_SIZE(d->acc));
 
-	if ((add < 0 && -add > d->acc[what]) ||
-	    (add > 0 && (INT_MAX - d->acc[what]) < add)) {
+	if ((add < 0 && -add > d->acc[what].val) ||
+	    (add > 0 && (INT_MAX - d->acc[what].val) < add)) {
 		/*
 		 * In a transaction when a node is being added/removed AND the
 		 * same node has been added/removed outside the transaction in
@@ -1063,7 +1097,13 @@ static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 		return (add < 0) ? 0 : INT_MAX;
 	}
 
-	return d->acc[what] + add;
+	val = d->acc[what].val + add;
+	if (val > d->acc[what].max)
+		d->acc[what].max = val;
+	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
+		acc_global_max[what] = val;
+
+	return val;
 }
 
 static int domain_acc_add(struct connection *conn, unsigned int domid,
@@ -1119,10 +1159,10 @@ static int domain_acc_add(struct connection *conn, unsigned int domid,
 	}
 
 	trace_acc("global change domid %u: what=%u %u add %d\n", domid, what,
-		  d->acc[what], add);
-	d->acc[what] = domain_acc_add_valid(d, what, add);
+		  d->acc[what].val, add);
+	d->acc[what].val = domain_acc_add_valid(d, what, add);
 
-	return d->acc[what];
+	return d->acc[what].val;
 }
 
 void acc_drop(struct connection *conn)
@@ -1160,6 +1200,28 @@ void acc_commit(struct connection *conn)
 	conn->in = in;
 }
 
+static int domain_reset_global_acc_sub(const void *k, void *v, void *arg)
+{
+	struct domain *d = v;
+	unsigned int i;
+
+	for (i = 0; i < ACC_N; i++)
+		d->acc[i].max = d->acc[i].val;
+
+	return 0;
+}
+
+void domain_reset_global_acc(void)
+{
+	unsigned int i;
+
+	for (i = 0; i < ACC_N; i++)
+		acc_global_max[i] = 0;
+
+	/* Set current max values seen. */
+	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
@@ -1660,7 +1722,7 @@ static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
 	 * If everything is correct incrementing the value for each node will
 	 * result in dom->nodes being 0 at the end.
 	 */
-	dom->nodes = -d->acc[ACC_NODES];
+	dom->nodes = -d->acc[ACC_NODES].val;
 
 	if (!hashtable_insert(domains, &dom->domid, dom)) {
 		talloc_free(dom);
@@ -1715,7 +1777,7 @@ static int domain_check_acc_cb(const void *k, void *v, void *arg)
 	if (!d)
 		return 0;
 
-	d->acc[ACC_NODES] += dom->nodes;
+	d->acc[ACC_NODES].val += dom->nodes;
 
 	return 0;
 }
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 01b6f1861b..416df25cb2 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -127,6 +127,8 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 int acc_fix_domains(struct list_head *head, bool chk_quota, bool update);
 void acc_drop(struct connection *conn);
 void acc_commit(struct connection *conn);
+int domain_max_global_acc(const void *ctx, struct connection *conn);
+void domain_reset_global_acc(void);
 
 /* Write rate limiting */
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:27:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:27:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540815.842859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uhe-0002IS-Mt; Tue, 30 May 2023 08:27:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540815.842859; Tue, 30 May 2023 08:27:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uhe-0002IL-Jd; Tue, 30 May 2023 08:27:38 +0000
Received: by outflank-mailman (input) for mailman id 540815;
 Tue, 30 May 2023 08:27:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3ufg-0003jy-3b
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:25:36 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8bc0677c-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:25:33 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 0554A1F889;
 Tue, 30 May 2023 08:25:34 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id CD2B91342F;
 Tue, 30 May 2023 08:25:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id oRLLMP2ydWSFEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:25:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8bc0677c-fec3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435134; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BZljV9afnd20nS/p60KJmGZ4MLKXj7xRbzIHp0/RuAw=;
	b=miIs/Od5U+2MBfR4RPN/pKmvsjjzq/OOk61Vp0h7EZQ7p1L8jbB1o7ue593EAHW6GJf3bv
	S33YumdNuacLhTkgwTqqNJ9m3WI+/ujHNHZ4vRo2yBm46j1klcntsUId2RyNT5eBNLbFbL
	s0Td736xRCQzLOlGtGn1FtNv6qrL+yg=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v6 12/14] tools/xenstore: use generic accounting for remaining quotas
Date: Tue, 30 May 2023 10:24:22 +0200
Message-Id: <20230530082424.32126-13-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The maxrequests, node size, number of node permissions, and path length
quotas are a little special, as they are either active in
transactions only (maxrequests), or they are per-item limits instead of
count values. Nevertheless, knowing the maximum of those quota related
values seen per domain would be beneficial, so add them to the generic
accounting.

The per-domain value will never show current numbers other than zero,
but the maximum number seen can be gathered the same way as the number
of nodes during a transaction.

To be able to use the const qualifier for a new function, switch
domain_is_unprivileged() to take a const pointer, too.

For printing the quota/max values, adapt the print format string to
the longest quota name (now 17 characters long).
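
The domain_max_chk() idea added below can be sketched as follows (a
simplified stand-in, with hypothetical names; the real function takes a
struct connection and records into the per-domain and global maxima): the
check always records the peak value for values within quota, and only
signals failure when an unprivileged caller exceeds the quota.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-item record keeping only the largest value seen. */
struct item_acc {
	unsigned int max;
};

/* Returns true if the request should fail (unprivileged and over
 * quota); otherwise records the value as a candidate new maximum. */
static bool max_chk(struct item_acc *a, bool unprivileged,
		    unsigned int val, unsigned int quota)
{
	if (unprivileged && val > quota)
		return true;	/* caller should return ENOSPC */

	if (val > a->max)
		a->max = val;	/* within quota, or privileged caller */

	return false;
}
```

Note that rejected values are deliberately not recorded, while a privileged
caller exceeding the quota still updates the maximum.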

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V5:
- add comment (Julien Grall)
- add missing quota printing (Julien Grall)
V6:
- keep assert() (Julien Grall)
---
 tools/xenstore/xenstored_core.c        | 15 ++++-----
 tools/xenstore/xenstored_core.h        |  2 +-
 tools/xenstore/xenstored_domain.c      | 43 ++++++++++++++++++++++----
 tools/xenstore/xenstored_domain.h      |  6 ++++
 tools/xenstore/xenstored_transaction.c |  4 +--
 tools/xenstore/xenstored_watch.c       |  2 +-
 6 files changed, 55 insertions(+), 17 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 3343f939b4..dd00f74cb6 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -799,8 +799,9 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 		+ node->perms.num * sizeof(node->perms.p[0])
 		+ node->datalen + node->childlen;
 
-	if (!no_quota_check && domain_is_unprivileged(conn) &&
-	    data.dsize >= quota_max_entry_size) {
+	/* Call domain_max_chk() in any case in order to record max values. */
+	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
+	    && !no_quota_check) {
 		errno = ENOSPC;
 		return errno;
 	}
@@ -1170,7 +1171,7 @@ static bool valid_chars(const char *node)
 		       "0123456789-/_@") == strlen(node));
 }
 
-bool is_valid_nodename(const char *node)
+bool is_valid_nodename(const struct connection *conn, const char *node)
 {
 	int local_off = 0;
 	unsigned int domid;
@@ -1190,7 +1191,8 @@ bool is_valid_nodename(const char *node)
 	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
 		local_off = 0;
 
-	if (strlen(node) > local_off + quota_max_path_len)
+	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
+			   quota_max_path_len))
 		return false;
 
 	return valid_chars(node);
@@ -1252,7 +1254,7 @@ static struct node *get_node_canonicalized(struct connection *conn,
 	*canonical_name = canonicalize(conn, ctx, name);
 	if (!*canonical_name)
 		return NULL;
-	if (!is_valid_nodename(*canonical_name)) {
+	if (!is_valid_nodename(conn, *canonical_name)) {
 		errno = EINVAL;
 		return NULL;
 	}
@@ -1784,8 +1786,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EINVAL;
 
 	perms.num--;
-	if (domain_is_unprivileged(conn) &&
-	    perms.num > quota_nb_perms_per_node)
+	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
 		return ENOSPC;
 
 	permstr = in->buffer + strlen(in->buffer) + 1;
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 3564d85d7d..9339820156 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -258,7 +258,7 @@ void check_store(void);
 void corrupt(struct connection *conn, const char *fmt, ...);
 
 /* Is this a valid node name? */
-bool is_valid_nodename(const char *node);
+bool is_valid_nodename(const struct connection *conn, const char *node);
 
 /* Get name of parent node. */
 char *get_parent(const void *ctx, const char *node);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 6f3a27765a..57a573e5dd 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -431,7 +431,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8u (max: %8u\n", #t, \
+	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u\n", #t, \
 				      d->acc[e].val, d->acc[e].max); \
 	if (!resp) return ENOMEM
 
@@ -440,6 +440,10 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	ent(transactions, ACC_TRANS);
 	ent(outstanding, ACC_OUTST);
 	ent(memory, ACC_MEM);
+	ent(transaction-nodes, ACC_TRANSNODES);
+	ent(node-permissions, ACC_NPERM);
+	ent(path-length, ACC_PATHLEN);
+	ent(node-size, ACC_NODESZ);
 
 #undef ent
 
@@ -457,7 +461,7 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
 		return ENOMEM;
 
 #define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-16s: %8u\n", #t,   \
+	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
 				      acc_global_max[e]);         \
 	if (!resp) return ENOMEM
 
@@ -466,6 +470,10 @@ int domain_max_global_acc(const void *ctx, struct connection *conn)
 	ent(transactions, ACC_TRANS);
 	ent(outstanding, ACC_OUTST);
 	ent(memory, ACC_MEM);
+	ent(transaction-nodes, ACC_TRANSNODES);
+	ent(node-permissions, ACC_NPERM);
+	ent(path-length, ACC_PATHLEN);
+	ent(node-size, ACC_NODESZ);
 
 #undef ent
 
@@ -1079,6 +1087,18 @@ int domain_adjust_node_perms(struct node *node)
 	return 0;
 }
 
+static void domain_acc_valid_max(struct domain *d, enum accitem what,
+				 unsigned int val)
+{
+	assert(what < ARRAY_SIZE(d->acc));
+	assert(what < ARRAY_SIZE(acc_global_max));
+
+	if (val > d->acc[what].max)
+		d->acc[what].max = val;
+	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
+		acc_global_max[what] = val;
+}
+
 static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 {
 	unsigned int val;
@@ -1098,10 +1118,7 @@ static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
 	}
 
 	val = d->acc[what].val + add;
-	if (val > d->acc[what].max)
-		d->acc[what].max = val;
-	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
-		acc_global_max[what] = val;
+	domain_acc_valid_max(d, what, val);
 
 	return val;
 }
@@ -1222,6 +1239,20 @@ void domain_reset_global_acc(void)
 	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
 }
 
+bool domain_max_chk(const struct connection *conn, enum accitem what,
+		    unsigned int val, unsigned int quota)
+{
+	if (!conn || !conn->domain)
+		return false;
+
+	if (domain_is_unprivileged(conn) && val > quota)
+		return true;
+
+	domain_acc_valid_max(conn->domain, what, val);
+
+	return false;
+}
+
 int domain_nbentry_inc(struct connection *conn, unsigned int domid)
 {
 	return (domain_acc_add(conn, domid, ACC_NODES, 1, false) < 0)
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 416df25cb2..78ca434531 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -33,6 +33,10 @@ enum accitem {
 	ACC_OUTST,
 	ACC_MEM,
 	ACC_TRANS,
+	ACC_TRANSNODES,
+	ACC_NPERM,
+	ACC_PATHLEN,
+	ACC_NODESZ,
 	ACC_N,			/* Number of elements per domain. */
 };
 
@@ -129,6 +133,8 @@ void acc_drop(struct connection *conn);
 void acc_commit(struct connection *conn);
 int domain_max_global_acc(const void *ctx, struct connection *conn);
 void domain_reset_global_acc(void);
+bool domain_max_chk(const struct connection *conn, unsigned int what,
+		    unsigned int val, unsigned int quota);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 9cfb0017c8..580d7bd090 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -252,8 +252,8 @@ int access_node(struct connection *conn, struct node *node,
 
 	i = find_accessed_node(trans, node->name);
 	if (!i) {
-		if (trans->nodes >= quota_trans_nodes &&
-		    domain_is_unprivileged(conn)) {
+		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1,
+				   quota_trans_nodes)) {
 			ret = ENOSPC;
 			goto err;
 		}
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index e30cd89be3..61b1e3421e 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -176,7 +176,7 @@ static int check_watch_path(struct connection *conn, const void *ctx,
 		*path = canonicalize(conn, ctx, *path);
 		if (!*path)
 			return errno;
-		if (!is_valid_nodename(*path))
+		if (!is_valid_nodename(conn, *path))
 			goto inval;
 	}
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:27:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:27:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540812.842850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uhd-00022W-DA; Tue, 30 May 2023 08:27:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540812.842850; Tue, 30 May 2023 08:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uhd-00022K-9B; Tue, 30 May 2023 08:27:37 +0000
Received: by outflank-mailman (input) for mailman id 540812;
 Tue, 30 May 2023 08:27:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3ufr-0003jy-Pf
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:25:47 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 92732d03-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:25:44 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 3E70521A59;
 Tue, 30 May 2023 08:25:45 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 063AB1342F;
 Tue, 30 May 2023 08:25:45 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id zt8OAAmzdWSUEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:25:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92732d03-fec3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685435145; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dc5oUhlqBewK3kuc1O++rsIE5PWvk3UZM/4iWlb7004=;
	b=gp0SW9SOYyf+c0vfM3dxU0wbcaBLupU4cEV3ZjP5GsXF2ooVqkZFVWLstgxyxWsax9wpi4
	NuflrKSS9GIRt9f6deRZH28e65bIfZkpAypAgzp7gNEL7IdhRUxd2Pph80GRiSoD7AiH1T
	plu+CMVy3zBretLwttNE4duHT49NswY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 14/14] tools/xenstore: switch quota management to be table based
Date: Tue, 30 May 2023 10:24:24 +0200
Message-Id: <20230530082424.32126-15-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of having individual quota variables, switch to a table based
approach like the one used for the generic accounting. Include all the
related data in the same table and add accessor functions.

This enables using the command line --quota parameter for setting all
possible quota values, while keeping the previous parameters for
compatibility.
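
The table based approach can be sketched like this (entry names and values
are illustrative, and quota_find() is a hypothetical accessor, not the
patch's actual interface): one table row per quota, found by name, so
generic code such as the "quota set" command handler needs no per-quota
special casing.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical miniature of the quota table: one entry per quota,
 * replacing a separate global variable for each one. */
struct quota {
	const char *name;
	unsigned int val;
	const char *descr;
};

static struct quota quotas[] = {
	{ "nodes",   1000, "Nodes per domain" },
	{ "watches",  128, "Watches per domain" },
};

/* Accessor: look a quota up by name, NULL if unknown. */
static struct quota *quota_find(const char *name)
{
	unsigned int i;

	for (i = 0; i < sizeof(quotas) / sizeof(quotas[0]); i++)
		if (!strcmp(quotas[i].name, name))
			return &quotas[i];
	return NULL;
}
```

A "set" command then reduces to quota_find(name) plus an assignment,
instead of a chain of string comparisons against individual variables.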

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V2:
- new patch
One further remark: it would be rather easy to add soft quotas for all
the other quotas (similar to the memory one). These could be used as
an early warning of the need to raise a global quota.
V5:
- fix what_matches() (Julien Grall)
---
 tools/xenstore/xenstored_control.c     |  43 ++------
 tools/xenstore/xenstored_core.c        |  81 ++++++++-------
 tools/xenstore/xenstored_core.h        |  10 --
 tools/xenstore/xenstored_domain.c      | 138 ++++++++++++++++---------
 tools/xenstore/xenstored_domain.h      |  12 ++-
 tools/xenstore/xenstored_transaction.c |   5 +-
 tools/xenstore/xenstored_watch.c       |   2 +-
 7 files changed, 153 insertions(+), 138 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 620a7da997..ed80924ee4 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -221,35 +221,6 @@ static int do_control_log(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-struct quota {
-	const char *name;
-	int *quota;
-	const char *descr;
-};
-
-static const struct quota hard_quotas[] = {
-	{ "nodes", &quota_nb_entry_per_domain, "Nodes per domain" },
-	{ "watches", &quota_nb_watch_per_domain, "Watches per domain" },
-	{ "transactions", &quota_max_transaction, "Transactions per domain" },
-	{ "outstanding", &quota_req_outstanding,
-		"Outstanding requests per domain" },
-	{ "transaction-nodes", &quota_trans_nodes,
-		"Max. number of accessed nodes per transaction" },
-	{ "memory", &quota_memory_per_domain_hard,
-		"Total Xenstore memory per domain (error level)" },
-	{ "node-size", &quota_max_entry_size, "Max. size of a node" },
-	{ "path-max", &quota_max_path_len, "Max. length of a node path" },
-	{ "permissions", &quota_nb_perms_per_node,
-		"Max. number of permissions per node" },
-	{ NULL, NULL, NULL }
-};
-
-static const struct quota soft_quotas[] = {
-	{ "memory", &quota_memory_per_domain_soft,
-		"Total Xenstore memory per domain (warning level)" },
-	{ NULL, NULL, NULL }
-};
-
 static int quota_show_current(const void *ctx, struct connection *conn,
 			      const struct quota *quotas)
 {
@@ -260,9 +231,11 @@ static int quota_show_current(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 
-	for (i = 0; quotas[i].quota; i++) {
+	for (i = 0; i < ACC_N; i++) {
+		if (!quotas[i].name)
+			continue;
 		resp = talloc_asprintf_append(resp, "%-17s: %8d %s\n",
-					      quotas[i].name, *quotas[i].quota,
+					      quotas[i].name, quotas[i].val,
 					      quotas[i].descr);
 		if (!resp)
 			return ENOMEM;
@@ -274,7 +247,7 @@ static int quota_show_current(const void *ctx, struct connection *conn,
 }
 
 static int quota_set(const void *ctx, struct connection *conn,
-		     char **vec, int num, const struct quota *quotas)
+		     char **vec, int num, struct quota *quotas)
 {
 	unsigned int i;
 	int val;
@@ -286,9 +259,9 @@ static int quota_set(const void *ctx, struct connection *conn,
 	if (val < 1)
 		return EINVAL;
 
-	for (i = 0; quotas[i].quota; i++) {
-		if (!strcmp(vec[0], quotas[i].name)) {
-			*quotas[i].quota = val;
+	for (i = 0; i < ACC_N; i++) {
+		if (quotas[i].name && !strcmp(vec[0], quotas[i].name)) {
+			quotas[i].val = val;
 			send_ack(conn, XS_CONTROL);
 			return 0;
 		}
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 0a350dd6f8..6f03f6687a 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -89,17 +89,6 @@ unsigned int trace_flags = TRACE_OBJ | TRACE_IO;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
 
-int quota_nb_entry_per_domain = 1000;
-int quota_nb_watch_per_domain = 128;
-int quota_max_entry_size = 2048; /* 2K */
-int quota_max_transaction = 10;
-int quota_nb_perms_per_node = 5;
-int quota_trans_nodes = 1024;
-int quota_max_path_len = XENSTORE_REL_PATH_MAX;
-int quota_req_outstanding = 20;
-int quota_memory_per_domain_soft = 2 * 1024 * 1024; /* 2 MB */
-int quota_memory_per_domain_hard = 2 * 1024 * 1024 + 512 * 1024; /* 2.5 MB */
-
 unsigned int timeout_watch_event_msec = 20000;
 
 void trace(const char *fmt, ...)
@@ -800,8 +789,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 		+ node->datalen + node->childlen;
 
 	/* Call domain_max_chk() in any case in order to record max values. */
-	if (domain_max_chk(conn, ACC_NODESZ, data.dsize, quota_max_entry_size)
-	    && !no_quota_check) {
+	if (domain_max_chk(conn, ACC_NODESZ, data.dsize) && !no_quota_check) {
 		errno = ENOSPC;
 		return errno;
 	}
@@ -1191,8 +1179,7 @@ bool is_valid_nodename(const struct connection *conn, const char *node)
 	if (sscanf(node, "/local/domain/%5u/%n", &domid, &local_off) != 1)
 		local_off = 0;
 
-	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off,
-			   quota_max_path_len))
+	if (domain_max_chk(conn, ACC_PATHLEN, strlen(node) - local_off))
 		return false;
 
 	return valid_chars(node);
@@ -1504,7 +1491,7 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 	for (i = node; i; i = i->parent) {
 		/* i->parent is set for each new node, so check quota. */
 		if (i->parent &&
-		    domain_nbentry(conn) >= quota_nb_entry_per_domain) {
+		    domain_nbentry(conn) >= hard_quotas[ACC_NODES].val) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -1786,7 +1773,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 		return EINVAL;
 
 	perms.num--;
-	if (domain_max_chk(conn, ACC_NPERM, perms.num, quota_nb_perms_per_node))
+	if (domain_max_chk(conn, ACC_NPERM, perms.num))
 		return ENOSPC;
 
 	permstr = in->buffer + strlen(in->buffer) + 1;
@@ -2655,7 +2642,16 @@ static void usage(void)
 "                          memory: total used memory per domain for nodes,\n"
 "                                  transactions, watches and requests, above\n"
 "                                  which Xenstore will stop talking to domain\n"
+"                          nodes: number nodes owned by a domain\n"
+"                          node-permissions: number of access permissions per\n"
+"                                            node\n"
+"                          node-size: total size of a node (permissions +\n"
+"                                     children names + content)\n"
 "                          outstanding: number of outstanding requests\n"
+"                          path-length: length of a node path\n"
+"                          transactions: number of concurrent transactions\n"
+"                                        per domain\n"
+"                          watches: number of watches per domain"
 "  -q, --quota-soft <what>=<nb> set a soft quota <what> to the value <nb>,\n"
 "                          causing a warning to be issued via syslog() if the\n"
 "                          limit is violated, allowed quotas are:\n"
@@ -2720,7 +2716,12 @@ static unsigned int get_optval_uint(const char *arg)
 
 static bool what_matches(const char *arg, const char *what)
 {
-	unsigned int what_len = strlen(what);
+	unsigned int what_len;
+
+	if (!what)
+		return false;
+
+	what_len = strlen(what);
 
 	return !strncmp(arg, what, what_len) && arg[what_len] == '=';
 }
@@ -2728,7 +2729,7 @@ static bool what_matches(const char *arg, const char *what)
 static void set_timeout(const char *arg)
 {
 	const char *eq = strchr(arg, '=');
-	int val;
+	unsigned int val;
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<seconds>\n");
@@ -2742,22 +2743,22 @@ static void set_timeout(const char *arg)
 static void set_quota(const char *arg, bool soft)
 {
 	const char *eq = strchr(arg, '=');
-	int val;
+	struct quota *q = soft ? soft_quotas : hard_quotas;
+	unsigned int val;
+	unsigned int i;
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<nb>\n");
 	val = get_optval_uint(eq + 1);
-	if (what_matches(arg, "outstanding") && !soft)
-		quota_req_outstanding = val;
-	else if (what_matches(arg, "transaction-nodes") && !soft)
-		quota_trans_nodes = val;
-	else if (what_matches(arg, "memory")) {
-		if (soft)
-			quota_memory_per_domain_soft = val;
-		else
-			quota_memory_per_domain_hard = val;
-	} else
-		barf("unknown quota \"%s\"\n", arg);
+
+	for (i = 0; i < ACC_N; i++) {
+		if (what_matches(arg, q[i].name)) {
+			q[i].val = val;
+			return;
+		}
+	}
+
+	barf("unknown quota \"%s\"\n", arg);
 }
 
 /* Sorted by bit values of TRACE_* flags. Flag is (1u << index). */
@@ -2819,7 +2820,7 @@ int main(int argc, char *argv[])
 			no_domain_init = true;
 			break;
 		case 'E':
-			quota_nb_entry_per_domain = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NODES].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'F':
 			pidfile = optarg;
@@ -2837,10 +2838,10 @@ int main(int argc, char *argv[])
 			recovery = false;
 			break;
 		case 'S':
-			quota_max_entry_size = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NODESZ].val = strtoul(optarg, NULL, 10);
 			break;
 		case 't':
-			quota_max_transaction = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_TRANS].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'T':
 			tracefile = optarg;
@@ -2860,15 +2861,17 @@ int main(int argc, char *argv[])
 			verbose = true;
 			break;
 		case 'W':
-			quota_nb_watch_per_domain = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_WATCH].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'A':
-			quota_nb_perms_per_node = strtol(optarg, NULL, 10);
+			hard_quotas[ACC_NPERM].val = strtoul(optarg, NULL, 10);
 			break;
 		case 'M':
-			quota_max_path_len = strtol(optarg, NULL, 10);
-			quota_max_path_len = min(XENSTORE_REL_PATH_MAX,
-						 quota_max_path_len);
+			hard_quotas[ACC_PATHLEN].val =
+				strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_PATHLEN].val =
+				 min((unsigned int)XENSTORE_REL_PATH_MAX,
+				     hard_quotas[ACC_PATHLEN].val);
 			break;
 		case 'Q':
 			set_quota(optarg, false);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 9339820156..92d5b50f3c 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -316,16 +316,6 @@ extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
 extern int dom0_event;
 extern int priv_domid;
-extern int quota_nb_watch_per_domain;
-extern int quota_max_transaction;
-extern int quota_max_entry_size;
-extern int quota_nb_perms_per_node;
-extern int quota_max_path_len;
-extern int quota_nb_entry_per_domain;
-extern int quota_req_outstanding;
-extern int quota_trans_nodes;
-extern int quota_memory_per_domain_soft;
-extern int quota_memory_per_domain_hard;
 extern bool keep_orphans;
 
 extern unsigned int timeout_watch_event_msec;
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 57a573e5dd..a641f633a5 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -43,7 +43,61 @@ static evtchn_port_t virq_port;
 
 xenevtchn_handle *xce_handle = NULL;
 
-static unsigned int acc_global_max[ACC_N];
+struct quota hard_quotas[ACC_N] = {
+	[ACC_NODES] = {
+		.name = "nodes",
+		.descr = "Nodes per domain",
+		.val = 1000,
+	},
+	[ACC_WATCH] = {
+		.name = "watches",
+		.descr = "Watches per domain",
+		.val = 128,
+	},
+	[ACC_OUTST] = {
+		.name = "outstanding",
+		.descr = "Outstanding requests per domain",
+		.val = 20,
+	},
+	[ACC_MEM] = {
+		.name = "memory",
+		.descr = "Total Xenstore memory per domain (error level)",
+		.val = 2 * 1024 * 1024 + 512 * 1024,	/* 2.5 MB */
+	},
+	[ACC_TRANS] = {
+		.name = "transactions",
+		.descr = "Active transactions per domain",
+		.val = 10,
+	},
+	[ACC_TRANSNODES] = {
+		.name = "transaction-nodes",
+		.descr = "Max. number of accessed nodes per transaction",
+		.val = 1024,
+	},
+	[ACC_NPERM] = {
+		.name = "node-permissions",
+		.descr = "Max. number of permissions per node",
+		.val = 5,
+	},
+	[ACC_PATHLEN] = {
+		.name = "path-max",
+		.descr = "Max. length of a node path",
+		.val = XENSTORE_REL_PATH_MAX,
+	},
+	[ACC_NODESZ] = {
+		.name = "node-size",
+		.descr = "Max. size of a node",
+		.val = 2048,
+	},
+};
+
+struct quota soft_quotas[ACC_N] = {
+	[ACC_MEM] = {
+		.name = "memory",
+		.descr = "Total Xenstore memory per domain (warning level)",
+		.val = 2 * 1024 * 1024,			/* 2.0 MB */
+	},
+};
 
 struct domain
 {
@@ -204,10 +258,10 @@ static bool domain_can_read(struct connection *conn)
 	if (domain_is_unprivileged(conn)) {
 		if (domain->wrl_credit < 0)
 			return false;
-		if (domain->acc[ACC_OUTST].val >= quota_req_outstanding)
+		if (domain->acc[ACC_OUTST].val >= hard_quotas[ACC_OUTST].val)
 			return false;
-		if (domain->acc[ACC_MEM].val >= quota_memory_per_domain_hard &&
-		    quota_memory_per_domain_hard)
+		if (domain->acc[ACC_MEM].val >= hard_quotas[ACC_MEM].val &&
+		    hard_quotas[ACC_MEM].val)
 			return false;
 	}
 
@@ -422,6 +476,7 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 {
 	struct domain *d = find_domain_struct(domid);
 	char *resp;
+	unsigned int i;
 
 	if (!d)
 		return ENOENT;
@@ -430,22 +485,15 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 	if (!resp)
 		return ENOMEM;
 
-#define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-17s: %8u (max: %8u\n", #t, \
-				      d->acc[e].val, d->acc[e].max); \
-	if (!resp) return ENOMEM
-
-	ent(nodes, ACC_NODES);
-	ent(watches, ACC_WATCH);
-	ent(transactions, ACC_TRANS);
-	ent(outstanding, ACC_OUTST);
-	ent(memory, ACC_MEM);
-	ent(transaction-nodes, ACC_TRANSNODES);
-	ent(node-permissions, ACC_NPERM);
-	ent(path-length, ACC_PATHLEN);
-	ent(node-size, ACC_NODESZ);
-
-#undef ent
+	for (i = 0; i < ACC_N; i++) {
+		if (!hard_quotas[i].name)
+			continue;
+		resp = talloc_asprintf_append(resp, "%-17s: %8u (max %8u)\n",
+					      hard_quotas[i].name,
+					      d->acc[i].val, d->acc[i].max);
+		if (!resp)
+			return ENOMEM;
+	}
 
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 
@@ -455,27 +503,21 @@ int domain_get_quota(const void *ctx, struct connection *conn,
 int domain_max_global_acc(const void *ctx, struct connection *conn)
 {
 	char *resp;
+	unsigned int i;
 
 	resp = talloc_asprintf(ctx, "Max. seen accounting values:\n");
 	if (!resp)
 		return ENOMEM;
 
-#define ent(t, e) \
-	resp = talloc_asprintf_append(resp, "%-17s: %8u\n", #t,   \
-				      acc_global_max[e]);         \
-	if (!resp) return ENOMEM
-
-	ent(nodes, ACC_NODES);
-	ent(watches, ACC_WATCH);
-	ent(transactions, ACC_TRANS);
-	ent(outstanding, ACC_OUTST);
-	ent(memory, ACC_MEM);
-	ent(transaction-nodes, ACC_TRANSNODES);
-	ent(node-permissions, ACC_NPERM);
-	ent(path-length, ACC_PATHLEN);
-	ent(node-size, ACC_NODESZ);
-
-#undef ent
+	for (i = 0; i < ACC_N; i++) {
+		if (!hard_quotas[i].name)
+			continue;
+		resp = talloc_asprintf_append(resp, "%-17s: %8u\n",
+					      hard_quotas[i].name,
+					      hard_quotas[i].max);
+		if (!resp)
+			return ENOMEM;
+	}
 
 	send_reply(conn, XS_CONTROL, resp, strlen(resp) + 1);
 
@@ -590,7 +632,7 @@ int acc_fix_domains(struct list_head *head, bool chk_quota, bool update)
 	list_for_each_entry(cd, head, list) {
 		cnt = domain_nbentry_fix(cd->domid, cd->acc[ACC_NODES], update);
 		if (!update) {
-			if (chk_quota && cnt >= quota_nb_entry_per_domain)
+			if (chk_quota && cnt >= hard_quotas[ACC_NODES].val)
 				return ENOSPC;
 			if (cnt < 0)
 				return ENOMEM;
@@ -1091,12 +1133,12 @@ static void domain_acc_valid_max(struct domain *d, enum accitem what,
 				 unsigned int val)
 {
 	assert(what < ARRAY_SIZE(d->acc));
-	assert(what < ARRAY_SIZE(acc_global_max));
+	assert(what < ARRAY_SIZE(hard_quotas));
 
 	if (val > d->acc[what].max)
 		d->acc[what].max = val;
-	if (val > acc_global_max[what] && domid_is_unprivileged(d->domid))
-		acc_global_max[what] = val;
+	if (val > hard_quotas[what].max && domid_is_unprivileged(d->domid))
+		hard_quotas[what].max = val;
 }
 
 static int domain_acc_add_valid(struct domain *d, enum accitem what, int add)
@@ -1233,19 +1275,19 @@ void domain_reset_global_acc(void)
 	unsigned int i;
 
 	for (i = 0; i < ACC_N; i++)
-		acc_global_max[i] = 0;
+		hard_quotas[i].max = 0;
 
 	/* Set current max values seen. */
 	hashtable_iterate(domhash, domain_reset_global_acc_sub, NULL);
 }
 
 bool domain_max_chk(const struct connection *conn, enum accitem what,
-		    unsigned int val, unsigned int quota)
+		    unsigned int val)
 {
 	if (!conn || !conn->domain)
 		return false;
 
-	if (domain_is_unprivileged(conn) && val > quota)
+	if (domain_is_unprivileged(conn) && val > hard_quotas[what].val)
 		return true;
 
 	domain_acc_valid_max(conn->domain, what, val);
@@ -1294,8 +1336,7 @@ static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 	domain = conn->domain;
 	now = time(NULL);
 
-	if (mem >= quota_memory_per_domain_hard &&
-	    quota_memory_per_domain_hard) {
+	if (mem >= hard_quotas[ACC_MEM].val && hard_quotas[ACC_MEM].val) {
 		if (domain->hard_quota_reported)
 			return true;
 		syslog(LOG_ERR, "Domain %u exceeds hard memory quota, Xenstore interface to domain stalled\n",
@@ -1312,15 +1353,14 @@ static bool domain_chk_quota(struct connection *conn, unsigned int mem)
 			syslog(LOG_INFO, "Domain %u below hard memory quota again\n",
 			       domain->domid);
 		}
-		if (mem >= quota_memory_per_domain_soft &&
-		    quota_memory_per_domain_soft &&
-		    !domain->soft_quota_reported) {
+		if (mem >= soft_quotas[ACC_MEM].val &&
+		    soft_quotas[ACC_MEM].val && !domain->soft_quota_reported) {
 			domain->mem_last_msg = now;
 			domain->soft_quota_reported = true;
 			syslog(LOG_WARNING, "Domain %u exceeds soft memory quota\n",
 			       domain->domid);
 		}
-		if (mem < quota_memory_per_domain_soft &&
+		if (mem < soft_quotas[ACC_MEM].val &&
 		    domain->soft_quota_reported) {
 			domain->mem_last_msg = now;
 			domain->soft_quota_reported = false;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 78ca434531..5123453f6c 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -40,6 +40,16 @@ enum accitem {
 	ACC_N,			/* Number of elements per domain. */
 };
 
+struct quota {
+	const char *name;
+	const char *descr;
+	unsigned int val;
+	unsigned int max;
+};
+
+extern struct quota hard_quotas[ACC_N];
+extern struct quota soft_quotas[ACC_N];
+
 void handle_event(void);
 
 void check_domains(void);
@@ -134,7 +144,7 @@ void acc_commit(struct connection *conn);
 int domain_max_global_acc(const void *ctx, struct connection *conn);
 void domain_reset_global_acc(void);
 bool domain_max_chk(const struct connection *conn, unsigned int what,
-		    unsigned int val, unsigned int quota);
+		    unsigned int val);
 
 /* Write rate limiting */
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 580d7bd090..db06d0e7f1 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -252,8 +252,7 @@ int access_node(struct connection *conn, struct node *node,
 
 	i = find_accessed_node(trans, node->name);
 	if (!i) {
-		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1,
-				   quota_trans_nodes)) {
+		if (domain_max_chk(conn, ACC_TRANSNODES, trans->nodes + 1)) {
 			ret = ENOSPC;
 			goto err;
 		}
@@ -479,7 +478,7 @@ int do_transaction_start(const void *ctx, struct connection *conn,
 	if (conn->transaction)
 		return EBUSY;
 
-	if (domain_transaction_get(conn) > quota_max_transaction)
+	if (domain_transaction_get(conn) > hard_quotas[ACC_TRANS].val)
 		return ENOSPC;
 
 	/* Attach transaction to ctx for autofree until it's complete */
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index 61b1e3421e..e8eb35de02 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -239,7 +239,7 @@ int do_watch(const void *ctx, struct connection *conn, struct buffered_data *in)
 			return EEXIST;
 	}
 
-	if (domain_watch(conn) > quota_nb_watch_per_domain)
+	if (domain_watch(conn) > hard_quotas[ACC_WATCH].val)
 		return E2BIG;
 
 	watch = add_watch(conn, vec[0], vec[1], relative, false);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:27:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540821.842875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uhg-0002bG-CB; Tue, 30 May 2023 08:27:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540821.842875; Tue, 30 May 2023 08:27:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uhg-0002ae-4A; Tue, 30 May 2023 08:27:40 +0000
Received: by outflank-mailman (input) for mailman id 540821;
 Tue, 30 May 2023 08:27:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3ufl-0003jy-EJ
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:25:41 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8f145bb4-fec3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:25:39 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 94E5621A59;
 Tue, 30 May 2023 08:25:39 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 669251342F;
 Tue, 30 May 2023 08:25:39 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id Wv2vFwOzdWSLEAAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:25:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f145bb4-fec3-11ed-8611-37d641c3527e
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v6 13/14] tools/xenstore: switch get_optval_int() to get_optval_uint()
Date: Tue, 30 May 2023 10:24:23 +0200
Message-Id: <20230530082424.32126-14-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530082424.32126-1-jgross@suse.com>
References: <20230530082424.32126-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Let get_optval_int() return an unsigned value and rename it
accordingly.
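
A standalone sketch of the renamed helper's behaviour follows. The real
get_optval_uint() calls barf() on bad input; this illustrative variant
(parse_optval_uint is a hypothetical name) returns -1 instead so the
parsing rules can be exercised directly:

```c
#include <limits.h>
#include <stdlib.h>

/*
 * Parse a non-negative decimal option value, rejecting empty input,
 * trailing junk and values above INT_MAX, mirroring get_optval_uint().
 */
static long parse_optval_uint(const char *arg)
{
	char *end;
	unsigned long val;

	val = strtoul(arg, &end, 10);
	if (!*arg || *end || val > INT_MAX)
		return -1;	/* the real helper barf()s here */

	return (long)val;
}
```

Note that strtoul() silently accepts a leading '-' by wrapping the value,
but the resulting huge unsigned value then fails the INT_MAX check, so
negative inputs are still rejected.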

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V5:
- new patch, carved out from next patch in series (Julien Grall)
---
 tools/xenstore/xenstored_core.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index dd00f74cb6..0a350dd6f8 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2706,13 +2706,13 @@ int dom0_domid = 0;
 int dom0_event = 0;
 int priv_domid = 0;
 
-static int get_optval_int(const char *arg)
+static unsigned int get_optval_uint(const char *arg)
 {
 	char *end;
-	long val;
+	unsigned long val;
 
-	val = strtol(arg, &end, 10);
-	if (!*arg || *end || val < 0 || val > INT_MAX)
+	val = strtoul(arg, &end, 10);
+	if (!*arg || *end || val > INT_MAX)
 		barf("invalid parameter value \"%s\"\n", arg);
 
 	return val;
@@ -2732,7 +2732,7 @@ static void set_timeout(const char *arg)
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<seconds>\n");
-	val = get_optval_int(eq + 1);
+	val = get_optval_uint(eq + 1);
 	if (what_matches(arg, "watch-event"))
 		timeout_watch_event_msec = val * 1000;
 	else
@@ -2746,7 +2746,7 @@ static void set_quota(const char *arg, bool soft)
 
 	if (!eq)
 		barf("quotas must be specified via <what>=<nb>\n");
-	val = get_optval_int(eq + 1);
+	val = get_optval_uint(eq + 1);
 	if (what_matches(arg, "outstanding") && !soft)
 		quota_req_outstanding = val;
 	else if (what_matches(arg, "transaction-nodes") && !soft)
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:27:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540820.842870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uhg-0002Yl-05; Tue, 30 May 2023 08:27:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540820.842870; Tue, 30 May 2023 08:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uhf-0002Ye-Sj; Tue, 30 May 2023 08:27:39 +0000
Received: by outflank-mailman (input) for mailman id 540820;
 Tue, 30 May 2023 08:27:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3ugC-0004aU-5u
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:26:08 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01on2043.outbound.protection.outlook.com [40.107.13.43])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9eb08c37-fec3-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:26:05 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBAPR04MB7446.eurprd04.prod.outlook.com (2603:10a6:10:1aa::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 08:25:36 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 08:25:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9eb08c37-fec3-11ed-b231-6b7b168915f2
Message-ID: <64820a10-622e-dc10-988b-613542349ec9@suse.com>
Date: Tue, 30 May 2023 10:25:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 2/3] x86: Add support for AMD's Automatic IBRS
Content-Language: en-US
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
 <20230526150044.31553-3-alejandro.vallejo@cloud.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230526150044.31553-3-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0042.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DBAPR04MB7446:EE_
X-MS-Office365-Filtering-Correlation-Id: 3a38aeb6-99e2-4b30-77f5-08db60e77161
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3a38aeb6-99e2-4b30-77f5-08db60e77161
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 08:25:36.0288
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ALvGtT/GBkH14bAUV2hg0AFmL3OY2wcIsT5y2QSA4j82SFVA41gS2qLIqo761x540z9Hq0Wr4BR0YVNNW7rWgg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7446

On 26.05.2023 17:00, Alejandro Vallejo wrote:
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -376,6 +376,9 @@ void start_secondary(void *unused)
>      {
>          wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
>          info->last_spec_ctrl = default_xen_spec_ctrl;
> +
> +        if ( cpu_has_auto_ibrs && (default_xen_spec_ctrl & SPEC_CTRL_IBRS) )
> +            write_efer(read_efer() | EFER_AIBRSE);
>      }

Did you consider using trampoline_efer instead, which would then also take
care of the S3 resume path (which otherwise I think you'd also need to
fiddle with)?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 08:28:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:28:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540840.842889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uip-0004GN-JQ; Tue, 30 May 2023 08:28:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540840.842889; Tue, 30 May 2023 08:28:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uip-0004GG-Gc; Tue, 30 May 2023 08:28:51 +0000
Received: by outflank-mailman (input) for mailman id 540840;
 Tue, 30 May 2023 08:28:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3uio-0004G0-FX; Tue, 30 May 2023 08:28:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3uio-0000wn-2r; Tue, 30 May 2023 08:28:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3uin-0002Fr-Ku; Tue, 30 May 2023 08:28:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3uin-0006iQ-KT; Tue, 30 May 2023 08:28:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iaNPB2YqhCwCWyNr52wK3huKBePSGh2H4Xvoxhn1OzQ=; b=Um5BmEvclRb27KxFEbLvtY2Edi
	/TLjqhk+NkQmJzQr299RF/521WjGJU0fz5kRuEEaHM64Q84vU+ZRqqN309+qyaGuSgGPL5p9sxl6u
	2GeqmP3N1qMGYyKr6U4yyljB0w62Pgjsq7B93ogb/3GyDSDIFHxNxnQnqwLebFUBv9U8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181008-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 181008: tolerable FAIL - PUSHED
X-Osstest-Failures:
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-start/debianhvm.repeat:fail:heisenbug
X-Osstest-Versions-This:
    ovmf=9d9761af50e538d983e00b1cb2d0ffcee261e552
X-Osstest-Versions-That:
    ovmf=1034d223f8cc6bf8b9b86c57e564753cdad46f88
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 08:28:49 +0000

flight 181008 ovmf real [real]
flight 181010 ovmf real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181008/
http://logs.test-lab.xenproject.org/osstest/logs/181010/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 20 guest-start/debianhvm.repeat fail pass in 181010-retest

version targeted for testing:
 ovmf                 9d9761af50e538d983e00b1cb2d0ffcee261e552
baseline version:
 ovmf                 1034d223f8cc6bf8b9b86c57e564753cdad46f88

Last test of basis   181004  2023-05-29 18:15:27 Z    0 days
Testing same since   181008  2023-05-30 03:42:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nickle Wang <nicklew@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1034d223f8..9d9761af50  9d9761af50e538d983e00b1cb2d0ffcee261e552 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 30 08:41:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:41:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540847.842900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uun-0006jL-P9; Tue, 30 May 2023 08:41:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540847.842900; Tue, 30 May 2023 08:41:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uun-0006jE-MS; Tue, 30 May 2023 08:41:13 +0000
Received: by outflank-mailman (input) for mailman id 540847;
 Tue, 30 May 2023 08:41:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3uum-0006j8-QK
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:41:12 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20626.outbound.protection.outlook.com
 [2a01:111:f400:fe13::626])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b9d9a2c5-fec5-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:41:10 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9275.eurprd04.prod.outlook.com (2603:10a6:10:356::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22; Tue, 30 May
 2023 08:41:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 08:41:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9d9a2c5-fec5-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mn5j6fbedzDhAuwKeggBHM9dkUQeSn1z3bfBfPLhwXl6wuAyOA4WHndpqtPxbC1SenSmQfOvenWGTV89jkzqXrvMZjpnbkWB2TUB9BkpaCAP2gbeEE4fwYqO+2X55h4pjd3ZEfQFupTSy6M2zLJ08yvfha3nLUlD6zbKgFAPQVtjPOSuMyzrhfzMIl+jFjnO1qbNXFiHkqfsjH1Omz42mXCYkLU0aCStvns8YVwOyaGCtthnyU7NydbYshfzX2P//OU5OMsk4Lc06w1RjOhtUKFaOKW4OB60rrSZvd0lUPudE9JuD2sOcxN5rbF7YVjgqc0nFHTm0ni16p98aAWubA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cRa9BPYrmmiPhwNLlH5ynl6mhJUAmPhIm+HoXb8+MWU=;
 b=T/f1C5wTGbmtUOUsXJq0LzH4zOMGfHJUmCz07j30+vZI9Q4N6YpACkSFub0XXckpAI5Ikbc9QNODW1BJDMOrrz58AWfg+widKEHWXIxpJrYkx79K+x+Il3Ipfx8vFNFDYEEBxb0/ZpUb8N8dthafQbwi7CIhAx+ubYOt0qEO4YVXPVK5MTebxbJsdAXYjanNC3lUbARwzeRVg9kvad8YXU23Ju5Vi51EjE72PyUd9kVLGQod5GMgC4B83KvpTc9+iQhqtyokVKLYW8NKJZ3KLXaxnH1Z8sPlKHOLApPgmo7Bn+FCW2JujrpAYm39Pe2iZANDc4voWODRcfeXWUT6BQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cRa9BPYrmmiPhwNLlH5ynl6mhJUAmPhIm+HoXb8+MWU=;
 b=1tZXNLP31BvYEiC3WZcoAQhYcuOLOGax51KDgKxCYZatPuDRzU3Eekj60dZqMP6nFcVFB7i31F8iCdGXueEIdSOzotKlG44mfZ6zlIY6Oko2bichm0kNRoyF7SuNHtmOAWS2NFOop/UEf9XdJt+xOtiCGisEXdk8I1uB1qFvxA76xrcnEDGXLXKYzb4exO5GD1ua9k/PGMq1qSB3RHIaNblzK4RRkV0AQXYkRvpFDo396jd7MKuFMGy86m8RemFQOqhUrcnNxP4IOFoaQW88VPg4A6bucRMtfiybmE3g5ZrWwLdUFWgyAjlBuvThk5yl+frNOnHikh4XfdexNwCZOw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <578d341d-0c54-de64-73e7-1dfc7e5d7584@suse.com>
Date: Tue, 30 May 2023 10:41:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: xentrace buffer size, maxcpus and online cpus
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>
References: <20230530095859.60a3e4ea.olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230530095859.60a3e4ea.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0185.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a4::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9275:EE_
X-MS-Office365-Filtering-Correlation-Id: 2d639733-0585-4d55-a07c-08db60e99c7a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d639733-0585-4d55-a07c-08db60e99c7a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 08:41:07.5839
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uT55UjZ7T1Dv82OUKHF97h8EcnTE9OeLEx+ClI2t38gYTVWe6ggNMUfbg2AoJgDeU0W9Tuthse6KrcIbTYT/TA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9275

On 30.05.2023 09:58, Olaf Hering wrote:
> While looking again through calculate_tbuf_size after a very long time,
> I was wondering why the code uses nr_cpu_ids instead of num_online_cpus.
> In case Xen was booted with maxcpus=N, would it be safe to use N as
> upper limit? I think this would increase the per-cpu buffer size for
> each active pcpu, and as a result more events could be captured.

Using this N would be correct afaict, but that N isn't num_online_cpus().
CPUs may have been offlined by the time trace buffers are initialized, so
without looking too closely I think it would be num_present_cpus() that
you're after.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 08:45:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:45:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540852.842910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uyf-0007Kg-9o; Tue, 30 May 2023 08:45:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540852.842910; Tue, 30 May 2023 08:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3uyf-0007KZ-6b; Tue, 30 May 2023 08:45:13 +0000
Received: by outflank-mailman (input) for mailman id 540852;
 Tue, 30 May 2023 08:45:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3uye-0007KT-3a
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:45:12 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on20625.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::625])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4940fd9f-fec6-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:45:11 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7140.eurprd04.prod.outlook.com (2603:10a6:208:192::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 08:45:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 08:45:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4940fd9f-fec6-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f8zQ6/dI58QMuBvxhefJn5BHNsFI8ZnRKSJoARCIS99XfG9mivg9tPdpAvVfKFAzapf5WeihXbvGHRk6l8I/UmlMlt77vNW+MydAUHW4yk0fgPO2rO/gJCT/rayCeUGJwupJty2YY/RChEjUEa8KpJsCUKC/QNTRL+iQQZr4nVxf1auQr7TenZW4r9PvjJe1agwRZ3dvaNY8OwjffP+dFaO9iDSyzgoYb51RO1WQ3RbHRFKlY5iyXbcykDmRYX8BX/V42o3fCS87zHP5gwya9zMbhcCx5CEBoMJfpxSeP9WUTrQdcokwkdS3sLbEzzM5V+Le74UpJgelabmqE8mOtg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zu47yJ2xj9MjD0zlql6promCfCIozmdXgsSqEFHlAvE=;
 b=L0ihrSf1BzpJ0gcjmkQlbwbGoXF8+W1G80dJPW+4XhZJK01Knlec2xkQG9g2VubhR3K25WMEoTqTlTs4tK7ZbOfu2vf7FJ0SntcVJ23uZS7UWh14IUdXiECyOBdHmVX4YHea6UuAV1+clY8YQN/ltt6DHKxYehDYF3hHKPMaZSZrfW/3c3oK1use0lkmm0IAJtWCPBBnKjkJgmR4nGrjnA4bPhXrfqD/DVax9GgGTHquMDZ6u6i+9yDelegfpY8z5y1aIpYoeXKw1L4OToacdydJkfDurIHKQbY0AqSTysuuVJx4HrckAtXoWydvrCj/6GxaWXtEv1RB8aHSx9WdEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zu47yJ2xj9MjD0zlql6promCfCIozmdXgsSqEFHlAvE=;
 b=LdkG6kFTyWnump+7FRP3QDiShFo3CaRwK9aWNiZkpGGraXgxkiG5sGDjtgxYqj/6cYq1hdz8zEdFBQ+ji6pVYKcFv6kzTwVTzNmr29azv43e1Hjq0RAgR+wy4Xv6I9Q12jdfquRJQDtbpSwWumRUylU6LHHR+pkBEg3QSTghKUndeEuaPdkJ1qneJv10RwC42rKq1+TBNqPQpLk44I1KnTGlhSb4pvYbUo04ipCvFTux28SF2KuhL8MLuoynOV75RSOcA/BbGP1SWUYx+lHb43wQ0q/0QVTZpfepzo9IHVs3+sNB0C1x+JRmRb0bK2PSuHhbNFhlnuTYQ9o8yYYgvg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <25663dac-6023-a9a7-a495-c995762191d8@suse.com>
Date: Tue, 30 May 2023 10:45:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4zx+TvUWTCEMh3@Air-de-Roger>
 <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
 <ZG94c9y4j4udFmsy@Air-de-Roger>
 <cedbc257-9ad9-56f1-5060-eaf173d45760@suse.com>
 <ZHRdjCKSVtWVkX96@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHRdjCKSVtWVkX96@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0130.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:94::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB7140:EE_
X-MS-Office365-Filtering-Correlation-Id: 9cc2c8b1-c472-4668-095b-08db60ea2c6f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9cc2c8b1-c472-4668-095b-08db60ea2c6f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 08:45:08.8994
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ytAHlm03kXwHyuEbx43D/TlJoX6MA7UclLbYBcvBGfDlYnz6uB4O3Hgxw8FXZTRBUyio2Dew+xor8+5Wd7Tfmw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB7140

On 29.05.2023 10:08, Roger Pau Monné wrote:
> On Thu, May 25, 2023 at 05:30:54PM +0200, Jan Beulich wrote:
>> On 25.05.2023 17:02, Roger Pau Monné wrote:
>>> On Thu, May 25, 2023 at 04:39:51PM +0200, Jan Beulich wrote:
>>>> On 24.05.2023 17:56, Roger Pau Monné wrote:
>>>>> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
>>>>>> --- a/xen/drivers/vpci/header.c
>>>>>> +++ b/xen/drivers/vpci/header.c
>>>>>> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>>>>>>      struct vpci_header *header = &pdev->vpci->header;
>>>>>>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>>>>>>      struct pci_dev *tmp, *dev = NULL;
>>>>>> +    const struct domain *d;
>>>>>>      const struct vpci_msix *msix = pdev->vpci->msix;
>>>>>>      unsigned int i;
>>>>>>      int rc;
>>>>>> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
>>>>>>  
>>>>>>      /*
>>>>>>       * Check for overlaps with other BARs. Note that only BARs that are
>>>>>> -     * currently mapped (enabled) are checked for overlaps.
>>>>>> +     * currently mapped (enabled) are checked for overlaps. Note also that
>>>>>> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
>>>>>>       */
>>>>>> -    for_each_pdev ( pdev->domain, tmp )
>>>>>> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
>>>>>
>>>>> Looking at this again, I think this is slightly more complex, as during
>>>>> runtime dom0 will get here with pdev->domain == hardware_domain OR
>>>>> dom_xen, and hence you also need to account for the fact that
>>>>> devices with pdev->domain == dom_xen also need to iterate over
>>>>> devices that belong to the hardware_domain, ie:
>>>>>
>>>>> for ( d = pdev->domain; ;
>>>>>       d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )
>>>>
>>>> Right, something along these lines. To keep the loop continuation
>>>> expression and exit condition simple, I'll probably prefer
>>>>
>>>> for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
>>>>       ; d = dom_xen )
>>>
>>> LGTM.  I would add parentheses around the pdev->domain != dom_xen
>>> condition, but that's just my personal taste.
>>>
>>> We might want to add an
>>>
>>> ASSERT(pdev->domain == hardware_domain || pdev->domain == dom_xen);
>>>
>>> here, just to remind that this chunk must be revisited when adding
>>> domU support (but you can also argue we haven't done this elsewhere),
>>> I just feel here it's not so obvious that we don't want to do this
>>> for domUs.
>>
>> I could add such an assertion, if only ...
>>
>>>>> And we likely want to limit this to devices that belong to the
>>>>> hardware_domain or to dom_xen (in preparation for vPCI being used for
>>>>> domUs).
>>>>
>>>> I'm afraid I don't understand this remark, though.
>>>
>>> This was looking forward to domU support, so that you already cater
>>> for pdev->domain not being hardware_domain or dom_xen, but we might
>>> want to leave that for later, when domU support is actually
>>> introduced.
>>
>> ... I understood why this checking doesn't apply to DomU-s as well,
>> in your opinion.
> 
> It's my understanding that domUs can never get hidden or read-only
> devices assigned, and hence there is no need to check for overlap with
> devices assigned to dom_xen, as those cannot have any BARs mapped in
> a domU physmap.
> 
> So for domUs the overlap check only needs to be performed against
> devices assigned to pdev->domain.

I fully agree, but the assertion you suggested doesn't express that. Or
maybe I'm misunderstanding what you did suggest, and there was an
implication of some further if() around it.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 08:48:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540857.842920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v1U-0007ug-NG; Tue, 30 May 2023 08:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540857.842920; Tue, 30 May 2023 08:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v1U-0007uZ-Jx; Tue, 30 May 2023 08:48:08 +0000
Received: by outflank-mailman (input) for mailman id 540857;
 Tue, 30 May 2023 08:48:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3v1T-0007uT-Dg
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:48:07 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 (mail-am0eur02on20609.outbound.protection.outlook.com
 [2a01:111:f400:fe13::609])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b0ae076c-fec6-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:48:04 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9275.eurprd04.prod.outlook.com (2603:10a6:10:356::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22; Tue, 30 May
 2023 08:48:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 08:48:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0ae076c-fec6-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dX5PfQYFXAH5khAZDP5pnVUg/sTMTBLRy7bccOa5q8pvVygLI3fYCU3DJ+yocfeUSpABHQpBm6gR0SFg55ToYsVO6x8L/5urPcbkGxZEzdTBU6nqATHUnVErvc9RW77k8d4cu5Tnw4QWNLALyfsNn1sM2yTlvB+eHALuGO0zUeKqbcbsa82lK+EYjg/8Ip7pSiYVUZjMPNFMKKFEYefNIFRaon5PWYR6ba9mrQVpjKLZ3O3CxWJej3vyGIhEAQPO09x+8ISY9t0PQv5NdQiXhOVavIW6z08UzDHBCLxoyqXVj4dKPEwvugDuKt4Qkr2JjAss4Ky4qeOpIabBDl59Fg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fyntQ9/Hwrs8GyNAUspHduLZgouzXOk3CPacRdLAasM=;
 b=lvKsL/xeYvoB9WC2Ij4fLuuQkijtVwoJY0NK58eQ2OcOMv8jwjh9Vlnj7tbOgNF9zc9qpgF6Tz7ofwgAATlRuHcttv6pGUQIm/Q9OEnkwluEFIT1dsVG5LJwUAzhbpEHk4GJodzD8kbiNY3yx+KHIx+gorAb/fSfS/THOGxcPVRaELrIc9So8o5FXR8oRxg5FZ0mO5KBDgQHHs6dtIay9c94UKevwiZZKJLmapQPaHjRKPATbkdwvup4P25R6+G90jbDpPnt8RcHZ9lNxHiAV659CVbMMCG6a4vtUV6CLiMBnyU5T2TNrZ9EAfCF+3C1zAuTMSZ5WjqUG5xVAZAknA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fyntQ9/Hwrs8GyNAUspHduLZgouzXOk3CPacRdLAasM=;
 b=GnGHuN8LlHdr0/Z7uxnYYBuAq+nnCaeRd9g7mWG1Uk3egkLeOC/TVLu8nskSStP0OVfIA1HILVY6Xd3Fns7ZavG6b7X42FDOrJY5cgzrFUWVBh7hP9EZs5GS/P63feqGDlLnXpIAoM+d78Aid6WTsPyBaFHrwJ/hlJBTWgYFAATRd1qCX9ag4sfa03mXxZ9KjjqBIJl0Ww3Coykm2R/RkVR4CynSYgyLnSJO0hiwV3gQDsQ8CPTbQXY2aIsTKoRO92U5pCPn/X/3gxTi0lGb+RCM/3kNXP3uCc9GWy73tjOgD7oGC0CvI15u+XsQJjjY2/dSvo6C2txh/0VHLAI+1A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <5f04bdd7-2337-812b-cf9d-985fe34d0f5d@suse.com>
Date: Tue, 30 May 2023 10:48:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] x86/vPIC: register only one ELCR handler instance
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <5567b45d-d8ee-7f43-526f-7f601c6ddd46@suse.com>
 <ZHRkstB6UKWAadVZ@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHRkstB6UKWAadVZ@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0126.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9275:EE_
X-MS-Office365-Filtering-Correlation-Id: 68643f1c-c424-4851-a63b-08db60ea93e5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 68643f1c-c424-4851-a63b-08db60ea93e5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 08:48:02.4298
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jXBFjqEHQAIeShsZvgsLCzU0z9xG+g0pbjEhyEdK6qnD1hgf5kfvi/I/GfItPBqkQtxro9A8um609lHyYWkf7A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9275

On 29.05.2023 10:39, Roger Pau Monné wrote:
> On Fri, May 26, 2023 at 09:35:04AM +0200, Jan Beulich wrote:
>> There's no point consuming two port-I/O slots. Even less so considering
>> that some real hardware permits both ports to be accessed in one go,
>> the emulation of which requires there to be only a single instance.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/hvm/vpic.c
>> +++ b/xen/arch/x86/hvm/vpic.c
>> @@ -377,25 +377,34 @@ static int cf_check vpic_intercept_elcr_
>>      int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>>  {
>>      struct hvm_hw_vpic *vpic;
>> -    uint32_t data;
>> +    unsigned int data, shift = 0;
>>  
>> -    BUG_ON(bytes != 1);
>> +    BUG_ON(bytes > 2 - (port & 1));
>>  
>>      vpic = &current->domain->arch.hvm.vpic[port & 1];
>>  
>> -    if ( dir == IOREQ_WRITE )
>> -    {
>> -        /* Some IRs are always edge trig. Slave IR is always level trig. */
>> -        data = *val & vpic_elcr_mask(vpic);
>> -        if ( vpic->is_master )
>> -            data |= 1 << 2;
>> -        vpic->elcr = data;
>> -    }
>> -    else
>> -    {
>> -        /* Reader should not see hardcoded level-triggered slave IR. */
>> -        *val = vpic->elcr & vpic_elcr_mask(vpic);
>> -    }
>> +    do {
>> +        if ( dir == IOREQ_WRITE )
>> +        {
>> +            /* Some IRs are always edge trig. Slave IR is always level trig. */
>> +            data = (*val >> shift) & vpic_elcr_mask(vpic);
>> +            if ( vpic->is_master )
>> +                data |= 1 << 2;
> 
> Not that you added this, but I'm confused.  The spec I'm reading
> explicitly states that bits 0:2 are reserved and must be 0.
> 
> Is this some quirk of the specific chipset we aim to emulate?

I don't think so. Note that upon reads the bit is masked out again.
Adding back further context, there's even a comment to this effect:

+        else
+        {
+            /* Reader should not see hardcoded level-triggered slave IR. */
+            data = vpic->elcr & vpic_elcr_mask(vpic);

The setting of the bit is solely for internal handling purposes,
aiui.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 08:51:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:51:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540863.842930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v4D-0000xR-97; Tue, 30 May 2023 08:50:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540863.842930; Tue, 30 May 2023 08:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v4D-0000xK-5v; Tue, 30 May 2023 08:50:57 +0000
Received: by outflank-mailman (input) for mailman id 540863;
 Tue, 30 May 2023 08:50:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3v4B-0000xA-71; Tue, 30 May 2023 08:50:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3v4B-0001VU-00; Tue, 30 May 2023 08:50:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3v4A-0003Ly-JY; Tue, 30 May 2023 08:50:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3v4A-0002yQ-J6; Tue, 30 May 2023 08:50:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Xes63AIhHAcVZsX/paJLKPAkOj0zN8deUj04j3vYPIg=; b=ksAtOWgt0HqTlEbELNP/a4HHd+
	ipAPvgi7JXHPoxYXpDC0BKly6OFSXyj3AJhi5CmL7G3gyuavm6KyyoTHIrrApadXP0j7Rvg916eI6
	bYxwNyjwD90jvAyrmnnS9jC87aluc69HEUHuskuT6w0M23XpYmw5LBCgbxXxcbNNUaVw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181005-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 181005: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8b817fded42d8fe3a0eb47b1149d907851a3c942
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 08:50:54 +0000

flight 181005 linux-linus real [real]
flight 181009 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181005/
http://logs.test-lab.xenproject.org/osstest/logs/181009/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8b817fded42d8fe3a0eb47b1149d907851a3c942
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   43 days
Failing since        180281  2023-04-17 06:24:36 Z   43 days   81 attempts
Testing same since   181002  2023-05-29 16:11:58 Z    0 days    2 attempts

------------------------------------------------------------
2553 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 323291 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 30 08:54:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:54:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540870.842949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7d-0001nu-VS; Tue, 30 May 2023 08:54:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540870.842949; Tue, 30 May 2023 08:54:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7d-0001nl-SV; Tue, 30 May 2023 08:54:29 +0000
Received: by outflank-mailman (input) for mailman id 540870;
 Tue, 30 May 2023 08:54:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v7b-0001Xf-SA
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:54:27 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 94f9d21f-fec7-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:54:27 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id E137421AA0;
 Tue, 30 May 2023 08:54:26 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id AFFF11341B;
 Tue, 30 May 2023 08:54:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id XFOhKcK5dWQ3GwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:54:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94f9d21f-fec7-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436866; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A4prpz1ycS+3ar6IMyNb023Z80jGKOwvg7NsWxkIJDI=;
	b=CyEx1QkqV0aW6vYFlgEvz3YRsqbzt0vsDcu4cWuHuoBYYLQiMn/ZKdDPxxpX6Py0AQe2wE
	dUqQoS9c0SyuVbtd2OkyjsZO994Kq/6E0IkZOK7zvKYJ7g9sdI91NNOb5ewcLjM17TUbr2
	WgA03+R3+yykhFFC9sBNVj6ej82YKVI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 01/16] tools/xenstore: verify command line parameters better
Date: Tue, 30 May 2023 10:54:03 +0200
Message-Id: <20230530085418.5417-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add some more verification of command line parameters.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 6f03f6687a..22df395aac 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2820,7 +2820,7 @@ int main(int argc, char *argv[])
 			no_domain_init = true;
 			break;
 		case 'E':
-			hard_quotas[ACC_NODES].val = strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_NODES].val = get_optval_uint(optarg);
 			break;
 		case 'F':
 			pidfile = optarg;
@@ -2838,10 +2838,10 @@ int main(int argc, char *argv[])
 			recovery = false;
 			break;
 		case 'S':
-			hard_quotas[ACC_NODESZ].val = strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_NODESZ].val = get_optval_uint(optarg);
 			break;
 		case 't':
-			hard_quotas[ACC_TRANS].val = strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_TRANS].val = get_optval_uint(optarg);
 			break;
 		case 'T':
 			tracefile = optarg;
@@ -2861,14 +2861,13 @@ int main(int argc, char *argv[])
 			verbose = true;
 			break;
 		case 'W':
-			hard_quotas[ACC_WATCH].val = strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_WATCH].val = get_optval_uint(optarg);
 			break;
 		case 'A':
-			hard_quotas[ACC_NPERM].val = strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_NPERM].val = get_optval_uint(optarg);
 			break;
 		case 'M':
-			hard_quotas[ACC_PATHLEN].val =
-				strtoul(optarg, NULL, 10);
+			hard_quotas[ACC_PATHLEN].val = get_optval_uint(optarg);
 			hard_quotas[ACC_PATHLEN].val =
 				 min((unsigned int)XENSTORE_REL_PATH_MAX,
 				     hard_quotas[ACC_PATHLEN].val);
@@ -2883,13 +2882,13 @@ int main(int argc, char *argv[])
 			set_timeout(optarg);
 			break;
 		case 'e':
-			dom0_event = strtol(optarg, NULL, 10);
+			dom0_event = get_optval_uint(optarg);
 			break;
 		case 'm':
-			dom0_domid = strtol(optarg, NULL, 10);
+			dom0_domid = get_optval_uint(optarg);
 			break;
 		case 'p':
-			priv_domid = strtol(optarg, NULL, 10);
+			priv_domid = get_optval_uint(optarg);
 			break;
 #ifndef NO_LIVE_UPDATE
 		case 'U':
-- 
2.35.3
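
[Editor's note] The patch above replaces bare strtoul()/strtol() calls with get_optval_uint() to reject malformed option values. The helper itself is introduced elsewhere in the series and is not shown in this mail; the following is only a minimal sketch of what such a validating parser typically looks like (the name parse_optval_uint and the error handling are assumptions, not the series' actual code):

```c
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stricter replacement for a bare strtoul(optarg, NULL, 10):
 * unlike the bare call, it rejects empty strings, trailing garbage and
 * out-of-range values instead of silently returning a partial result. */
static unsigned int parse_optval_uint(const char *optarg)
{
    char *end;
    unsigned long val;

    errno = 0;
    val = strtoul(optarg, &end, 10);
    if (end == optarg || *end != '\0' || errno == ERANGE || val > UINT_MAX) {
        fprintf(stderr, "invalid numeric parameter \"%s\"\n", optarg);
        exit(2);
    }
    return (unsigned int)val;
}
```

With the old code, "-E 100abc" would quietly set the quota to 100; a checker along these lines aborts instead.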



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:54:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:54:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540869.842940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7Y-0001Xs-OL; Tue, 30 May 2023 08:54:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540869.842940; Tue, 30 May 2023 08:54:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7Y-0001Xl-L6; Tue, 30 May 2023 08:54:24 +0000
Received: by outflank-mailman (input) for mailman id 540869;
 Tue, 30 May 2023 08:54:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v7W-0001Xf-Qs
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:54:22 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 91c9383c-fec7-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:54:21 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 4D15B1F8D9;
 Tue, 30 May 2023 08:54:21 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 0876A1341B;
 Tue, 30 May 2023 08:54:21 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id tILCAL25dWQsGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:54:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91c9383c-fec7-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436861; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=BEQ4pJzSljiedlksFG62VfASc5LgFXWw4ihzUYYNrDM=;
	b=QCL/dYFaWd3Q8iy9jsljmWcODukXCN5z4yTKT7GOCpUELIS17R9JtP5EunuOUWPa3Z40j5
	qwgXBHCGDJAEJAMrwCyyFE+3tQmMCEzqocU+VnmMq2B3CEhIPlr97uNfkZ9fb6G93ytP7d
	Nc0uxerEBv7k6Hov/P1gri6blVX19Js=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 00/16] tools/xenstore: more cleanups
Date: Tue, 30 May 2023 10:54:02 +0200
Message-Id: <20230530085418.5417-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Some more cleanups of Xenstore.

Based on top of the previous Xenstore series "tools/xenstore: rework
internal accounting".

Changes in V2:
- rebase
- one small modification of patch 10
- added patches 11-13

Changes in V3:
- rebase
- modified patch 4
- added patches 10, 11 and 13

Juergen Gross (16):
  tools/xenstore: verify command line parameters better
  tools/xenstore: do some cleanup of hashtable.c
  tools/xenstore: modify interface of create_hashtable()
  tools/xenstore: rename hashtable_insert() and let it return 0 on
    success
  tools/xenstore: make some write limit functions static
  tools/xenstore: switch write limiting to use millisecond time base
  tools/xenstore: remove stale TODO file
  tools/xenstore: remove unused events list
  tools/xenstore: remove support of file backed data base
  tools/libs/store: use xen_list.h instead of xenstore/list.h
  tools/libs/store: make libxenstore independent of utils.h
  tools/xenstore: remove no longer needed functions from xs_lib.c
  tools/xenstore: replace xs_lib.c with a header
  tools/xenstore: split out environment specific live update code
  tools/xenstore: split out rest of live update control code
  tools/xenstore: remove unused stuff from list.h

 .gitignore                                |   1 -
 MAINTAINERS                               |   1 +
 tools/include/xen-tools/xenstore-common.h | 128 +++++
 tools/include/xenstore.h                  |   3 +
 tools/include/xenstore_lib.h              |   3 -
 tools/libs/store/Makefile                 |   4 -
 tools/libs/store/xs.c                     | 131 +++--
 tools/xenstore/COPYING                    | 514 -----------------
 tools/xenstore/Makefile                   |   7 +-
 tools/xenstore/Makefile.common            |  12 +-
 tools/xenstore/TODO                       |  10 -
 tools/xenstore/hashtable.c                |  98 ++--
 tools/xenstore/hashtable.h                |  22 +-
 tools/xenstore/list.h                     | 227 --------
 tools/xenstore/xenstore_client.c          | 133 ++++-
 tools/xenstore/xenstored_control.c        | 661 +---------------------
 tools/xenstore/xenstored_control.h        |   8 -
 tools/xenstore/xenstored_core.c           |  72 +--
 tools/xenstore/xenstored_core.h           |   7 +-
 tools/xenstore/xenstored_domain.c         | 458 ++++++++-------
 tools/xenstore/xenstored_domain.h         |  24 +-
 tools/xenstore/xenstored_lu.c             | 400 +++++++++++++
 tools/xenstore/xenstored_lu.h             |  81 +++
 tools/xenstore/xenstored_lu_daemon.c      | 133 +++++
 tools/xenstore/xenstored_lu_minios.c      | 121 ++++
 tools/xenstore/xenstored_transaction.c    |   4 +-
 tools/xenstore/xenstored_watch.c          |   5 -
 tools/xenstore/xs_lib.c                   | 292 ----------
 tools/xenstore/xs_lib.h                   |  50 --
 tools/xenstore/xs_tdb_dump.c              |  86 ---
 30 files changed, 1409 insertions(+), 2287 deletions(-)
 create mode 100644 tools/include/xen-tools/xenstore-common.h
 delete mode 100644 tools/xenstore/COPYING
 delete mode 100644 tools/xenstore/TODO
 create mode 100644 tools/xenstore/xenstored_lu.c
 create mode 100644 tools/xenstore/xenstored_lu.h
 create mode 100644 tools/xenstore/xenstored_lu_daemon.c
 create mode 100644 tools/xenstore/xenstored_lu_minios.c
 delete mode 100644 tools/xenstore/xs_lib.c
 delete mode 100644 tools/xenstore/xs_lib.h
 delete mode 100644 tools/xenstore/xs_tdb_dump.c

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:54:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540871.842960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7k-00027W-7z; Tue, 30 May 2023 08:54:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540871.842960; Tue, 30 May 2023 08:54:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7k-00027L-4U; Tue, 30 May 2023 08:54:36 +0000
Received: by outflank-mailman (input) for mailman id 540871;
 Tue, 30 May 2023 08:54:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v7j-00026J-3w
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:54:35 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9854e8a4-fec7-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:54:32 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 8289E1F889;
 Tue, 30 May 2023 08:54:32 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 502901341B;
 Tue, 30 May 2023 08:54:32 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id EAdEEsi5dWRHGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:54:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9854e8a4-fec7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436872; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+rmDgbWyQvHCkeJMLRkTXiOnv1Jhx5fDWkp0U5diuzo=;
	b=B28C07bKo9E41KCRfualFAQuI0wrsve02hIL/WxIpbA7ouCbA43guc6NrW6ooGeBQT4yPB
	YsvgdGPR7NDUeV8vS6edlt6GtxlC/psakHlXACEvIw0phw1z4EomFbRpO46ksm0Dp9IRLt
	HCo1aiuY++qWTWgVMItXpvklCn+ludU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 02/16] tools/xenstore: do some cleanup of hashtable.c
Date: Tue, 30 May 2023 10:54:04 +0200
Message-Id: <20230530085418.5417-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Do the following cleanups:
- hashtable_count() isn't used at all, so remove it
- replace prime_table_length and max_load_factor with macros
- make hash() static
- add a loadlimit() helper function
- remove the /***/ lines between functions
- do some style corrections

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V3:
- more style corrections (Julien Grall)
---
 tools/xenstore/hashtable.c | 76 +++++++++++++++-----------------------
 tools/xenstore/hashtable.h | 10 -----
 2 files changed, 30 insertions(+), 56 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index 3d4466b597..dc209158fa 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -40,22 +40,25 @@ static const unsigned int primes[] = {
 50331653, 100663319, 201326611, 402653189,
 805306457, 1610612741
 };
-const unsigned int prime_table_length = sizeof(primes)/sizeof(primes[0]);
-const unsigned int max_load_factor = 65; /* percentage */
 
-/*****************************************************************************/
-/* indexFor */
-static inline unsigned int
-indexFor(unsigned int tablelength, unsigned int hashvalue) {
+#define PRIME_TABLE_LEN   ARRAY_SIZE(primes)
+#define MAX_LOAD_PERCENT  65
+
+static inline unsigned int indexFor(unsigned int tablelength,
+                                    unsigned int hashvalue)
+{
     return (hashvalue % tablelength);
 }
 
-/*****************************************************************************/
-struct hashtable *
-create_hashtable(const void *ctx, unsigned int minsize,
-                 unsigned int (*hashf) (const void *),
-                 int (*eqf) (const void *, const void *),
-                 unsigned int flags)
+static unsigned int loadlimit(unsigned int pindex)
+{
+    return ((uint64_t)primes[pindex] * MAX_LOAD_PERCENT) / 100;
+}
+
+struct hashtable *create_hashtable(const void *ctx, unsigned int minsize,
+                                   unsigned int (*hashf) (const void *),
+                                   int (*eqf) (const void *, const void *),
+                                   unsigned int flags)
 {
     struct hashtable *h;
     unsigned int pindex, size = primes[0];
@@ -64,8 +67,11 @@ create_hashtable(const void *ctx, unsigned int minsize,
     if (minsize > (1u << 30)) return NULL;
 
     /* Enforce size as prime */
-    for (pindex=0; pindex < prime_table_length; pindex++) {
-        if (primes[pindex] > minsize) { size = primes[pindex]; break; }
+    for (pindex = 0; pindex < PRIME_TABLE_LEN; pindex++) {
+        if (primes[pindex] > minsize) {
+            size = primes[pindex];
+            break;
+        }
     }
 
     h = talloc_zero(ctx, struct hashtable);
@@ -81,7 +87,7 @@ create_hashtable(const void *ctx, unsigned int minsize,
     h->entrycount   = 0;
     h->hashfn       = hashf;
     h->eqfn         = eqf;
-    h->loadlimit    = (unsigned int)(((uint64_t)size * max_load_factor) / 100);
+    h->loadlimit    = loadlimit(pindex);
     return h;
 
 err1:
@@ -90,9 +96,7 @@ err0:
    return NULL;
 }
 
-/*****************************************************************************/
-unsigned int
-hash(const struct hashtable *h, const void *k)
+static unsigned int hash(const struct hashtable *h, const void *k)
 {
     /* Aim to protect against poor hash functions by adding logic here
      * - logic taken from java 1.4 hashtable source */
@@ -104,9 +108,7 @@ hash(const struct hashtable *h, const void *k)
     return i;
 }
 
-/*****************************************************************************/
-static int
-hashtable_expand(struct hashtable *h)
+static int hashtable_expand(struct hashtable *h)
 {
     /* Double the size of the table to accomodate more entries */
     struct entry **newtable;
@@ -114,7 +116,7 @@ hashtable_expand(struct hashtable *h)
     struct entry **pE;
     unsigned int newsize, i, index;
     /* Check we're not hitting max capacity */
-    if (h->primeindex == (prime_table_length - 1)) return 0;
+    if (h->primeindex == (PRIME_TABLE_LEN - 1)) return 0;
     newsize = primes[++(h->primeindex)];
 
     newtable = talloc_realloc(h, h->table, struct entry *, newsize);
@@ -144,21 +146,11 @@ hashtable_expand(struct hashtable *h)
     }
 
     h->tablelength = newsize;
-    h->loadlimit   = (unsigned int)
-        (((uint64_t)newsize * max_load_factor) / 100);
+    h->loadlimit   = loadlimit(h->primeindex);
     return -1;
 }
 
-/*****************************************************************************/
-unsigned int
-hashtable_count(const struct hashtable *h)
-{
-    return h->entrycount;
-}
-
-/*****************************************************************************/
-int
-hashtable_insert(struct hashtable *h, void *k, void *v)
+int hashtable_insert(struct hashtable *h, void *k, void *v)
 {
     /* This method allows duplicate keys - but they shouldn't be used */
     unsigned int index;
@@ -186,9 +178,7 @@ hashtable_insert(struct hashtable *h, void *k, void *v)
     return -1;
 }
 
-/*****************************************************************************/
-void * /* returns value associated with key */
-hashtable_search(const struct hashtable *h, const void *k)
+void *hashtable_search(const struct hashtable *h, const void *k)
 {
     struct entry *e;
     unsigned int hashvalue, index;
@@ -204,7 +194,6 @@ hashtable_search(const struct hashtable *h, const void *k)
     return NULL;
 }
 
-/*****************************************************************************/
 void
 hashtable_remove(struct hashtable *h, const void *k)
 {
@@ -234,10 +223,8 @@ hashtable_remove(struct hashtable *h, const void *k)
     }
 }
 
-/*****************************************************************************/
-int
-hashtable_iterate(struct hashtable *h,
-                  int (*func)(const void *k, void *v, void *arg), void *arg)
+int hashtable_iterate(struct hashtable *h,
+                      int (*func)(const void *k, void *v, void *arg), void *arg)
 {
     int ret;
     unsigned int i;
@@ -260,10 +247,7 @@ hashtable_iterate(struct hashtable *h,
     return 0;
 }
 
-/*****************************************************************************/
-/* destroy */
-void
-hashtable_destroy(struct hashtable *h)
+void hashtable_destroy(struct hashtable *h)
 {
     talloc_free(h);
 }
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index cc0090f133..04310783b6 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -74,16 +74,6 @@ hashtable_search(const struct hashtable *h, const void *k);
 void
 hashtable_remove(struct hashtable *h, const void *k);
 
-/*****************************************************************************
- * hashtable_count
-   
- * @name        hashtable_count
- * @param   h   the hashtable
- * @return      the number of items stored in the hashtable
- */
-unsigned int
-hashtable_count(const struct hashtable *h);
-
 /*****************************************************************************
  * hashtable_iterate
 
-- 
2.35.3
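
[Editor's note] The loadlimit() helper added above centralises the resize-threshold computation that was previously duplicated in create_hashtable() and hashtable_expand(). A standalone illustration of the arithmetic — the first few table sizes below are from the conventional prime sequence used by this hashtable (only the tail of primes[] is visible in the diff), and the 65% factor matches MAX_LOAD_PERCENT:

```c
#include <stdint.h>

/* Small leading table sizes (assumed; only the large tail appears in the
 * quoted diff) and the 65% load cap from MAX_LOAD_PERCENT. */
static const unsigned int primes[] = { 53, 97, 193, 389, 769 };
#define MAX_LOAD_PERCENT 65

/* The table is expanded once entrycount exceeds this fraction of the
 * current size.  The uint64_t cast prevents overflow of the intermediate
 * product for the largest primes (up to ~1.6e9 * 65). */
static unsigned int loadlimit(unsigned int pindex)
{
    return ((uint64_t)primes[pindex] * MAX_LOAD_PERCENT) / 100;
}
```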



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:54:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:54:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540872.842970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7p-0002X4-Jl; Tue, 30 May 2023 08:54:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540872.842970; Tue, 30 May 2023 08:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7p-0002Wx-Gy; Tue, 30 May 2023 08:54:41 +0000
Received: by outflank-mailman (input) for mailman id 540872;
 Tue, 30 May 2023 08:54:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v7o-00026J-1b
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:54:40 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9bacd9f4-fec7-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:54:38 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2218621A40;
 Tue, 30 May 2023 08:54:38 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id E8A4D1341B;
 Tue, 30 May 2023 08:54:37 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id g7lhN825dWRUGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:54:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bacd9f4-fec7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436878; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Cue6nfogHSgqt5dCa9QFdMWvkgXw8ctS0Aa/J6WXU5g=;
	b=qqjsGhpBHowB5V3YueMcW3BlkgUPW5ojafkELlWa5hJ9NhvwQ2JKtqqNZNGXR6LezdjgGV
	5Wix1+G/kh7I1KgYJPPp1iAZ5fwlVXoVnUqM1FPVl6ooGEFB371Mbyv4tuv9mbhvHo/rBP
	+p2PCO4IN9PikCRhGelVsmq5PwgmuzQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 03/16] tools/xenstore: modify interface of create_hashtable()
Date: Tue, 30 May 2023 10:54:05 +0200
Message-Id: <20230530085418.5417-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The minsize parameter of create_hashtable() doesn't have any real use
case for Xenstore, so drop it.

For better talloc_report_full() diagnostic output, add a name parameter
to create_hashtable().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- make code more readable (Julien Grall)
---
 tools/xenstore/hashtable.c        | 23 ++++++-----------------
 tools/xenstore/hashtable.h        |  4 ++--
 tools/xenstore/xenstored_core.c   |  2 +-
 tools/xenstore/xenstored_domain.c |  4 ++--
 4 files changed, 11 insertions(+), 22 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index dc209158fa..3d2d3a0e22 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -55,39 +55,28 @@ static unsigned int loadlimit(unsigned int pindex)
     return ((uint64_t)primes[pindex] * MAX_LOAD_PERCENT) / 100;
 }
 
-struct hashtable *create_hashtable(const void *ctx, unsigned int minsize,
+struct hashtable *create_hashtable(const void *ctx, const char *name,
                                    unsigned int (*hashf) (const void *),
                                    int (*eqf) (const void *, const void *),
                                    unsigned int flags)
 {
     struct hashtable *h;
-    unsigned int pindex, size = primes[0];
-
-    /* Check requested hashtable isn't too large */
-    if (minsize > (1u << 30)) return NULL;
-
-    /* Enforce size as prime */
-    for (pindex = 0; pindex < PRIME_TABLE_LEN; pindex++) {
-        if (primes[pindex] > minsize) {
-            size = primes[pindex];
-            break;
-        }
-    }
 
     h = talloc_zero(ctx, struct hashtable);
     if (NULL == h)
         goto err0;
-    h->table = talloc_zero_array(h, struct entry *, size);
+    talloc_set_name_const(h, name);
+    h->table = talloc_zero_array(h, struct entry *, primes[0]);
     if (NULL == h->table)
         goto err1;
 
-    h->tablelength  = size;
+    h->primeindex   = 0;
+    h->tablelength  = primes[h->primeindex];
     h->flags        = flags;
-    h->primeindex   = pindex;
     h->entrycount   = 0;
     h->hashfn       = hashf;
     h->eqfn         = eqf;
-    h->loadlimit    = loadlimit(pindex);
+    h->loadlimit    = loadlimit(h->primeindex);
     return h;
 
 err1:
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index 04310783b6..0e1a6f61c2 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -10,7 +10,7 @@ struct hashtable;
    
  * @name                    create_hashtable
  * @param   ctx             talloc context to use for allocations
- * @param   minsize         minimum initial size of hashtable
+ * @param   name            talloc name of the hashtable
  * @param   hashfunction    function for hashing keys
  * @param   key_eq_fn       function for determining key equality
  * @param   flags           flags HASHTABLE_*
@@ -23,7 +23,7 @@ struct hashtable;
 #define HASHTABLE_FREE_KEY   (1U << 1)
 
 struct hashtable *
-create_hashtable(const void *ctx, unsigned int minsize,
+create_hashtable(const void *ctx, const char *name,
                  unsigned int (*hashfunction) (const void *),
                  int (*key_eq_fn) (const void *, const void *),
                  unsigned int flags
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 22df395aac..418790d8d7 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2521,7 +2521,7 @@ void check_store(void)
 	struct check_store_data data;
 
 	/* Don't free values (they are all void *1) */
-	data.reachable = create_hashtable(NULL, 16, hash_from_key_fn,
+	data.reachable = create_hashtable(NULL, "checkstore", hash_from_key_fn,
 					  keys_equal_fn, HASHTABLE_FREE_KEY);
 	if (!data.reachable) {
 		log("check_store: ENOMEM");
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index a641f633a5..a1c91ef3f3 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1015,7 +1015,7 @@ void domain_init(int evtfd)
 	int rc;
 
 	/* Start with a random rather low domain count for the hashtable. */
-	domhash = create_hashtable(NULL, 8, domhash_fn, domeq_fn, 0);
+	domhash = create_hashtable(NULL, "domains", domhash_fn, domeq_fn, 0);
 	if (!domhash)
 		barf_perror("Failed to allocate domain hashtable");
 
@@ -1807,7 +1807,7 @@ struct hashtable *domain_check_acc_init(void)
 {
 	struct hashtable *domains;
 
-	domains = create_hashtable(NULL, 8, domhash_fn, domeq_fn,
+	domains = create_hashtable(NULL, "domain_check", domhash_fn, domeq_fn,
 				   HASHTABLE_FREE_VALUE);
 	if (!domains)
 		return NULL;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:54:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:54:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540873.842980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7u-000338-U8; Tue, 30 May 2023 08:54:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540873.842980; Tue, 30 May 2023 08:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v7u-00032z-QE; Tue, 30 May 2023 08:54:46 +0000
Received: by outflank-mailman (input) for mailman id 540873;
 Tue, 30 May 2023 08:54:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v7t-00026J-OZ
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:54:45 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f0b082e-fec7-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:54:44 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C10101F889;
 Tue, 30 May 2023 08:54:43 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 9748E1341B;
 Tue, 30 May 2023 08:54:43 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id 7bKQI9O5dWRcGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:54:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f0b082e-fec7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436883; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vPoeTUcM9T4+NNSVnWYJP6NUt2CPkIcsiQEBOk5fFTE=;
	b=NswNjplESYHuW86lZ30UhtlPK3LGYRd+y9vt/onZ9rsOpu7+iTK908uDCmMltQoOxQ+HtN
	VjVCvbzycNhVl4trNzRWbkos1R6QjDlgB9u6N5ML9kw6esQbnNe6wJSKZUMmMqb+JjooJW
	WRzohHLqbLXEbLB/pVyLbS81BrRBBcM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 04/16] tools/xenstore: rename hashtable_insert() and let it return 0 on success
Date: Tue, 30 May 2023 10:54:06 +0200
Message-Id: <20230530085418.5417-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today hashtable_insert() returns 0 in case of an error. Change it to
return an errno value on error and 0 on success. To avoid missed return
value checks and related future backporting errors, rename
hashtable_insert() to hashtable_add().

Even though it is not used today, switch the return value of
hashtable_expand() in the same way.
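
For clarity, the new calling convention can be illustrated with a minimal
stand-alone sketch. hashtable_add_stub() and its slot argument are
hypothetical stand-ins for illustration only, not the xenstored
implementation:

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-in for the new convention: return 0 on success and
 * an errno value on error (the real hashtable_add() lives in
 * tools/xenstore/hashtable.c and operates on struct hashtable). */
static int hashtable_add_stub(void **slot, void *k, void *v)
{
    (void)k;             /* key unused in this simplified sketch */
    if (!slot)
        return EINVAL;   /* the error path now carries a reason */
    if (*slot)
        return ENOSPC;   /* pretend the table cannot grow further */
    *slot = v;
    return 0;            /* success is now 0, not non-zero */
}
```

Callers then treat any non-zero return as an errno value, e.g.
`if (hashtable_add_stub(&slot, k, v)) goto err;`, which matches the
inverted checks in the hunks below.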

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- rename to hashtable_add() (triggered by Julien Grall)
---
 tools/xenstore/hashtable.c             | 17 +++++++++++------
 tools/xenstore/hashtable.h             |  8 ++++----
 tools/xenstore/xenstored_core.c        |  6 +++---
 tools/xenstore/xenstored_domain.c      |  4 ++--
 tools/xenstore/xenstored_transaction.c |  4 ++--
 5 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index 3d2d3a0e22..11f6bf8f15 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -105,14 +105,15 @@ static int hashtable_expand(struct hashtable *h)
     struct entry **pE;
     unsigned int newsize, i, index;
     /* Check we're not hitting max capacity */
-    if (h->primeindex == (PRIME_TABLE_LEN - 1)) return 0;
+    if (h->primeindex == (PRIME_TABLE_LEN - 1))
+        return ENOSPC;
     newsize = primes[++(h->primeindex)];
 
     newtable = talloc_realloc(h, h->table, struct entry *, newsize);
     if (!newtable)
     {
         h->primeindex--;
-        return 0;
+        return ENOMEM;
     }
 
     h->table = newtable;
@@ -136,10 +137,10 @@ static int hashtable_expand(struct hashtable *h)
 
     h->tablelength = newsize;
     h->loadlimit   = loadlimit(h->primeindex);
-    return -1;
+    return 0;
 }
 
-int hashtable_insert(struct hashtable *h, void *k, void *v)
+int hashtable_add(struct hashtable *h, void *k, void *v)
 {
     /* This method allows duplicate keys - but they shouldn't be used */
     unsigned int index;
@@ -153,7 +154,11 @@ int hashtable_insert(struct hashtable *h, void *k, void *v)
         hashtable_expand(h);
     }
     e = talloc_zero(h, struct entry);
-    if (NULL == e) { --(h->entrycount); return 0; } /*oom*/
+    if (NULL == e)
+    {
+        --h->entrycount;
+        return ENOMEM;
+    }
     e->h = hash(h,k);
     index = indexFor(h->tablelength,e->h);
     e->k = k;
@@ -164,7 +169,7 @@ int hashtable_insert(struct hashtable *h, void *k, void *v)
         talloc_steal(e, v);
     e->next = h->table[index];
     h->table[index] = e;
-    return -1;
+    return 0;
 }
 
 void *hashtable_search(const struct hashtable *h, const void *k)
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index 0e1a6f61c2..5a2cc4a4be 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -30,13 +30,13 @@ create_hashtable(const void *ctx, const char *name,
 );
 
 /*****************************************************************************
- * hashtable_insert
+ * hashtable_add
    
- * @name        hashtable_insert
+ * @name        hashtable_add
  * @param   h   the hashtable to insert into
  * @param   k   the key - hashtable claims ownership and will free on removal
  * @param   v   the value - does not claim ownership
- * @return      non-zero for successful insertion
+ * @return      zero for successful insertion, errno value on error
  *
  * This function will cause the table to expand if the insertion would take
  * the ratio of entries to table size over the maximum load factor.
@@ -49,7 +49,7 @@ create_hashtable(const void *ctx, const char *name,
  */
 
 int 
-hashtable_insert(struct hashtable *h, void *k, void *v);
+hashtable_add(struct hashtable *h, void *k, void *v);
 
 /*****************************************************************************
  * hashtable_search
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 418790d8d7..c467a704a1 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2404,8 +2404,8 @@ int remember_string(struct hashtable *hash, const char *str)
 	char *k = talloc_strdup(NULL, str);
 
 	if (!k)
-		return 0;
-	return hashtable_insert(hash, k, (void *)1);
+		return ENOMEM;
+	return hashtable_add(hash, k, (void *)1);
 }
 
 /**
@@ -2438,7 +2438,7 @@ static int check_store_step(const void *ctx, struct connection *conn,
 				: WALK_TREE_SKIP_CHILDREN;
 	}
 
-	if (!remember_string(data->reachable, node->name))
+	if (remember_string(data->reachable, node->name))
 		return WALK_TREE_ERROR_STOP;
 
 	domain_check_acc_add(node, data->domains);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index a1c91ef3f3..815f15cd1d 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -538,7 +538,7 @@ static struct domain *alloc_domain(const void *context, unsigned int domid)
 	domain->generation = generation;
 	domain->introduced = false;
 
-	if (!hashtable_insert(domhash, &domain->domid, domain)) {
+	if (hashtable_add(domhash, &domain->domid, domain)) {
 		talloc_free(domain);
 		errno = ENOMEM;
 		return NULL;
@@ -1795,7 +1795,7 @@ static int domain_check_acc_init_sub(const void *k, void *v, void *arg)
 	 */
 	dom->nodes = -d->acc[ACC_NODES].val;
 
-	if (!hashtable_insert(domains, &dom->domid, dom)) {
+	if (hashtable_add(domains, &dom->domid, dom)) {
 		talloc_free(dom);
 		return -1;
 	}
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index db06d0e7f1..334f1609f1 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -601,13 +601,13 @@ int check_transactions(struct hashtable *hash)
 		list_for_each_entry(trans, &conn->transaction_list, list) {
 			tname = talloc_asprintf(trans, "%"PRIu64,
 						trans->generation);
-			if (!tname || !remember_string(hash, tname))
+			if (!tname || remember_string(hash, tname))
 				goto nomem;
 
 			list_for_each_entry(i, &trans->accessed, list) {
 				if (!i->ta_node)
 					continue;
-				if (!remember_string(hash, i->trans_name))
+				if (remember_string(hash, i->trans_name))
 					goto nomem;
 			}
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:54:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:54:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540876.842990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v80-0003TY-5t; Tue, 30 May 2023 08:54:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540876.842990; Tue, 30 May 2023 08:54:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v80-0003TO-2k; Tue, 30 May 2023 08:54:52 +0000
Received: by outflank-mailman (input) for mailman id 540876;
 Tue, 30 May 2023 08:54:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v7y-0001Xf-LH
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:54:50 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a26dfdfb-fec7-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:54:49 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 5D46721A40;
 Tue, 30 May 2023 08:54:49 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 310B61341B;
 Tue, 30 May 2023 08:54:49 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id 6Ne2Ctm5dWRnGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:54:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a26dfdfb-fec7-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436889; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mgrLaJJsHMjrdEFqr21l8LMXJwY+KaUuWrUY/uXgM0o=;
	b=u+ijtaE+RrDD0MS7PulMDnQ/laO0pERHqv3QbqW9HjAKQ2YFzUTjkoAlLK3aZoRryrXEXD
	Qc5oiOycsVBKZS7xuUWooDb8YEZNBaqHfS9Va7LifVA5j/4SGAhCdnoM36Kxa1n/V/ENCW
	ozJc5M4z6vKjHsABJxDcYbUV9ODKnQY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 05/16] tools/xenstore: make some write limit functions static
Date: Tue, 30 May 2023 10:54:07 +0200
Message-Id: <20230530085418.5417-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Some wrl_*() functions are only used in xenstored_domain.c, so make
them static. To avoid the need for forward declarations, move the
whole function block to the start of the file.
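
The motivation for the move can be shown with a generic sketch (the
helper names below are hypothetical, not the wrl code itself): a static
function defined before its first caller needs no forward declaration,
whereas the reverse order would require one.

```c
/* Defining the static helper before its first caller means no separate
 * forward declaration is needed; if apply_debit() came first in the
 * file, a "static int credit_after_debit(int, int);" line would be
 * required for the code to compile. */
static int credit_after_debit(int credit, int cost)
{
    return credit - cost;
}

static int apply_debit(int credit)
{
    return credit_after_debit(credit, 1); /* already visible here */
}
```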

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_domain.c | 456 +++++++++++++++---------------
 tools/xenstore/xenstored_domain.h |   3 -
 2 files changed, 228 insertions(+), 231 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 815f15cd1d..b21bdf6194 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -157,6 +157,234 @@ struct changed_domain
 
 static struct hashtable *domhash;
 
+static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
+static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
+static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
+static wrl_creditt wrl_config_gburst         = WRL_GBURST * WRL_FACTOR;
+static wrl_creditt wrl_config_newdoms_dburst =
+	                         WRL_DBURST * WRL_NEWDOMS * WRL_FACTOR;
+
+long wrl_ntransactions;
+
+static long wrl_ndomains;
+static wrl_creditt wrl_reserve; /* [-wrl_config_newdoms_dburst, +_gburst ] */
+static time_t wrl_log_last_warning; /* 0: no previous warning */
+
+#define trace_wrl(...)				\
+do {						\
+	if (trace_flags & TRACE_WRL)		\
+		trace("wrl: " __VA_ARGS__);	\
+} while (0)
+
+void wrl_gettime_now(struct wrl_timestampt *now_wt)
+{
+	struct timespec now_ts;
+	int r;
+
+	r = clock_gettime(CLOCK_MONOTONIC, &now_ts);
+	if (r)
+		barf_perror("Could not find time (clock_gettime failed)");
+
+	now_wt->sec = now_ts.tv_sec;
+	now_wt->msec = now_ts.tv_nsec / 1000000;
+}
+
+static void wrl_xfer_credit(wrl_creditt *debit,  wrl_creditt debit_floor,
+			    wrl_creditt *credit, wrl_creditt credit_ceil)
+	/*
+	 * Transfers zero or more credit from "debit" to "credit".
+	 * Transfers as much as possible while maintaining
+	 * debit >= debit_floor and credit <= credit_ceil.
+	 * (If that's violated already, does nothing.)
+	 *
+	 * Sufficient conditions to avoid overflow, either of:
+	 *  |every argument| <= 0x3fffffff
+	 *  |every argument| <= 1E9
+	 *  |every argument| <= WRL_CREDIT_MAX
+	 * (And this condition is preserved.)
+	 */
+{
+	wrl_creditt xfer = MIN( *debit      - debit_floor,
+			        credit_ceil - *credit      );
+	if (xfer > 0) {
+		*debit -= xfer;
+		*credit += xfer;
+	}
+}
+
+static void wrl_domain_new(struct domain *domain)
+{
+	domain->wrl_credit = 0;
+	wrl_gettime_now(&domain->wrl_timestamp);
+	wrl_ndomains++;
+	/* Steal up to DBURST from the reserve */
+	wrl_xfer_credit(&wrl_reserve, -wrl_config_newdoms_dburst,
+			&domain->wrl_credit, wrl_config_dburst);
+}
+
+static void wrl_domain_destroy(struct domain *domain)
+{
+	wrl_ndomains--;
+	/*
+	 * Don't bother recalculating domain's credit - this just
+	 * means we don't give the reserve the ending domain's credit
+	 * for time elapsed since last update.
+	 */
+	wrl_xfer_credit(&domain->wrl_credit, 0,
+			&wrl_reserve, wrl_config_dburst);
+}
+
+static void wrl_credit_update(struct domain *domain, struct wrl_timestampt now)
+{
+	/*
+	 * We want to calculate
+	 *    credit += (now - timestamp) * RATE / ndoms;
+	 * But we want it to saturate, and to avoid floating point.
+	 * To avoid rounding errors from constantly adding small
+	 * amounts of credit, we only add credit for whole milliseconds.
+	 */
+	long seconds      = now.sec -  domain->wrl_timestamp.sec;
+	long milliseconds = now.msec - domain->wrl_timestamp.msec;
+	long msec;
+	int64_t denom, num;
+	wrl_creditt surplus;
+
+	seconds = MIN(seconds, 1000*1000); /* arbitrary, prevents overflow */
+	msec = seconds * 1000 + milliseconds;
+
+	if (msec < 0)
+                /* shouldn't happen with CLOCK_MONOTONIC */
+		msec = 0;
+
+	/* 32x32 -> 64 cannot overflow */
+	denom = (int64_t)msec * wrl_config_rate;
+	num  =  (int64_t)wrl_ndomains * 1000;
+	/* denom / num <= 1E6 * wrl_config_rate, so with
+	   reasonable wrl_config_rate, denom / num << 2^64 */
+
+	/* at last! */
+	domain->wrl_credit = MIN( (int64_t)domain->wrl_credit + denom / num,
+				  WRL_CREDIT_MAX );
+	/* (maybe briefly violating the DBURST cap on wrl_credit) */
+
+	/* maybe take from the reserve to make us nonnegative */
+	wrl_xfer_credit(&wrl_reserve,        0,
+			&domain->wrl_credit, 0);
+
+	/* return any surplus (over DBURST) to the reserve */
+	surplus = 0;
+	wrl_xfer_credit(&domain->wrl_credit, wrl_config_dburst,
+			&surplus,            WRL_CREDIT_MAX);
+	wrl_xfer_credit(&surplus,     0,
+			&wrl_reserve, wrl_config_gburst);
+	/* surplus is now implicitly discarded */
+
+	domain->wrl_timestamp = now;
+
+	trace_wrl("dom %4d %6ld msec %9ld credit %9ld reserve %9ld discard\n",
+		  domain->domid, msec, (long)domain->wrl_credit,
+		  (long)wrl_reserve, (long)surplus);
+}
+
+void wrl_check_timeout(struct domain *domain,
+		       struct wrl_timestampt now,
+		       int *ptimeout)
+{
+	uint64_t num, denom;
+	int wakeup;
+
+	wrl_credit_update(domain, now);
+
+	if (domain->wrl_credit >= 0)
+		/* not blocked */
+		return;
+
+	if (!*ptimeout)
+		/* already decided on immediate wakeup,
+		   so no need to calculate our timeout */
+		return;
+
+	/* calculate  wakeup = now + -credit / (RATE / ndoms); */
+
+	/* credit cannot go more -ve than one transaction,
+	 * so the first multiplication cannot overflow even 32-bit */
+	num   = (uint64_t)(-domain->wrl_credit * 1000) * wrl_ndomains;
+	denom = wrl_config_rate;
+
+	wakeup = MIN( num / denom /* uint64_t */, INT_MAX );
+	if (*ptimeout==-1 || wakeup < *ptimeout)
+		*ptimeout = wakeup;
+
+	trace_wrl("domain %u credit=%ld (reserve=%ld) SLEEPING for %d\n",
+		  domain->domid, (long)domain->wrl_credit, (long)wrl_reserve,
+		  wakeup);
+}
+
+#define WRL_LOG(now, ...) \
+	(syslog(LOG_WARNING, "write rate limit: " __VA_ARGS__))
+
+void wrl_apply_debit_actual(struct domain *domain)
+{
+	struct wrl_timestampt now;
+
+	if (!domain || !domain_is_unprivileged(domain->conn))
+		/* sockets and privileged domain escape the write rate limit */
+		return;
+
+	wrl_gettime_now(&now);
+	wrl_credit_update(domain, now);
+
+	domain->wrl_credit -= wrl_config_writecost;
+	trace_wrl("domain %u credit=%ld (reserve=%ld)\n", domain->domid,
+		  (long)domain->wrl_credit, (long)wrl_reserve);
+
+	if (domain->wrl_credit < 0) {
+		if (!domain->wrl_delay_logged) {
+			domain->wrl_delay_logged = true;
+			WRL_LOG(now, "domain %ld is affected\n",
+				(long)domain->domid);
+		} else if (!wrl_log_last_warning) {
+			WRL_LOG(now, "rate limiting restarts\n");
+		}
+		wrl_log_last_warning = now.sec;
+	}
+}
+
+void wrl_log_periodic(struct wrl_timestampt now)
+{
+	if (wrl_log_last_warning &&
+	    (now.sec - wrl_log_last_warning) > WRL_LOGEVERY) {
+		WRL_LOG(now, "not in force recently\n");
+		wrl_log_last_warning = 0;
+	}
+}
+
+void wrl_apply_debit_direct(struct connection *conn)
+{
+	if (!conn)
+		/* some writes are generated internally */
+		return;
+
+	if (conn->transaction)
+		/* these are accounted for when the transaction ends */
+		return;
+
+	if (!wrl_ntransactions)
+		/* we don't conflict with anyone */
+		return;
+
+	wrl_apply_debit_actual(conn->domain);
+}
+
+void wrl_apply_debit_trans_commit(struct connection *conn)
+{
+	if (wrl_ntransactions <= 1)
+		/* our own transaction appears in the counter */
+		return;
+
+	wrl_apply_debit_actual(conn->domain);
+}
+
 static bool check_indexes(XENSTORE_RING_IDX cons, XENSTORE_RING_IDX prod)
 {
 	return ((prod - cons) <= XENSTORE_RING_SIZE);
@@ -1446,234 +1674,6 @@ unsigned int domain_transaction_get(struct connection *conn)
 		: 0;
 }
 
-static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
-static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
-static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
-static wrl_creditt wrl_config_gburst         = WRL_GBURST * WRL_FACTOR;
-static wrl_creditt wrl_config_newdoms_dburst =
-	                         WRL_DBURST * WRL_NEWDOMS * WRL_FACTOR;
-
-long wrl_ntransactions;
-
-static long wrl_ndomains;
-static wrl_creditt wrl_reserve; /* [-wrl_config_newdoms_dburst, +_gburst ] */
-static time_t wrl_log_last_warning; /* 0: no previous warning */
-
-#define trace_wrl(...)				\
-do {						\
-	if (trace_flags & TRACE_WRL)		\
-		trace("wrl: " __VA_ARGS__);	\
-} while (0)
-
-void wrl_gettime_now(struct wrl_timestampt *now_wt)
-{
-	struct timespec now_ts;
-	int r;
-
-	r = clock_gettime(CLOCK_MONOTONIC, &now_ts);
-	if (r)
-		barf_perror("Could not find time (clock_gettime failed)");
-
-	now_wt->sec = now_ts.tv_sec;
-	now_wt->msec = now_ts.tv_nsec / 1000000;
-}
-
-static void wrl_xfer_credit(wrl_creditt *debit,  wrl_creditt debit_floor,
-			    wrl_creditt *credit, wrl_creditt credit_ceil)
-	/*
-	 * Transfers zero or more credit from "debit" to "credit".
-	 * Transfers as much as possible while maintaining
-	 * debit >= debit_floor and credit <= credit_ceil.
-	 * (If that's violated already, does nothing.)
-	 *
-	 * Sufficient conditions to avoid overflow, either of:
-	 *  |every argument| <= 0x3fffffff
-	 *  |every argument| <= 1E9
-	 *  |every argument| <= WRL_CREDIT_MAX
-	 * (And this condition is preserved.)
-	 */
-{
-	wrl_creditt xfer = MIN( *debit      - debit_floor,
-			        credit_ceil - *credit      );
-	if (xfer > 0) {
-		*debit -= xfer;
-		*credit += xfer;
-	}
-}
-
-void wrl_domain_new(struct domain *domain)
-{
-	domain->wrl_credit = 0;
-	wrl_gettime_now(&domain->wrl_timestamp);
-	wrl_ndomains++;
-	/* Steal up to DBURST from the reserve */
-	wrl_xfer_credit(&wrl_reserve, -wrl_config_newdoms_dburst,
-			&domain->wrl_credit, wrl_config_dburst);
-}
-
-void wrl_domain_destroy(struct domain *domain)
-{
-	wrl_ndomains--;
-	/*
-	 * Don't bother recalculating domain's credit - this just
-	 * means we don't give the reserve the ending domain's credit
-	 * for time elapsed since last update.
-	 */
-	wrl_xfer_credit(&domain->wrl_credit, 0,
-			&wrl_reserve, wrl_config_dburst);
-}
-
-void wrl_credit_update(struct domain *domain, struct wrl_timestampt now)
-{
-	/*
-	 * We want to calculate
-	 *    credit += (now - timestamp) * RATE / ndoms;
-	 * But we want it to saturate, and to avoid floating point.
-	 * To avoid rounding errors from constantly adding small
-	 * amounts of credit, we only add credit for whole milliseconds.
-	 */
-	long seconds      = now.sec -  domain->wrl_timestamp.sec;
-	long milliseconds = now.msec - domain->wrl_timestamp.msec;
-	long msec;
-	int64_t denom, num;
-	wrl_creditt surplus;
-
-	seconds = MIN(seconds, 1000*1000); /* arbitrary, prevents overflow */
-	msec = seconds * 1000 + milliseconds;
-
-	if (msec < 0)
-                /* shouldn't happen with CLOCK_MONOTONIC */
-		msec = 0;
-
-	/* 32x32 -> 64 cannot overflow */
-	denom = (int64_t)msec * wrl_config_rate;
-	num  =  (int64_t)wrl_ndomains * 1000;
-	/* denom / num <= 1E6 * wrl_config_rate, so with
-	   reasonable wrl_config_rate, denom / num << 2^64 */
-
-	/* at last! */
-	domain->wrl_credit = MIN( (int64_t)domain->wrl_credit + denom / num,
-				  WRL_CREDIT_MAX );
-	/* (maybe briefly violating the DBURST cap on wrl_credit) */
-
-	/* maybe take from the reserve to make us nonnegative */
-	wrl_xfer_credit(&wrl_reserve,        0,
-			&domain->wrl_credit, 0);
-
-	/* return any surplus (over DBURST) to the reserve */
-	surplus = 0;
-	wrl_xfer_credit(&domain->wrl_credit, wrl_config_dburst,
-			&surplus,            WRL_CREDIT_MAX);
-	wrl_xfer_credit(&surplus,     0,
-			&wrl_reserve, wrl_config_gburst);
-	/* surplus is now implicitly discarded */
-
-	domain->wrl_timestamp = now;
-
-	trace_wrl("dom %4d %6ld msec %9ld credit %9ld reserve %9ld discard\n",
-		  domain->domid, msec, (long)domain->wrl_credit,
-		  (long)wrl_reserve, (long)surplus);
-}
-
-void wrl_check_timeout(struct domain *domain,
-		       struct wrl_timestampt now,
-		       int *ptimeout)
-{
-	uint64_t num, denom;
-	int wakeup;
-
-	wrl_credit_update(domain, now);
-
-	if (domain->wrl_credit >= 0)
-		/* not blocked */
-		return;
-
-	if (!*ptimeout)
-		/* already decided on immediate wakeup,
-		   so no need to calculate our timeout */
-		return;
-
-	/* calculate  wakeup = now + -credit / (RATE / ndoms); */
-
-	/* credit cannot go more -ve than one transaction,
-	 * so the first multiplication cannot overflow even 32-bit */
-	num   = (uint64_t)(-domain->wrl_credit * 1000) * wrl_ndomains;
-	denom = wrl_config_rate;
-
-	wakeup = MIN( num / denom /* uint64_t */, INT_MAX );
-	if (*ptimeout==-1 || wakeup < *ptimeout)
-		*ptimeout = wakeup;
-
-	trace_wrl("domain %u credit=%ld (reserve=%ld) SLEEPING for %d\n",
-		  domain->domid, (long)domain->wrl_credit, (long)wrl_reserve,
-		  wakeup);
-}
-
-#define WRL_LOG(now, ...) \
-	(syslog(LOG_WARNING, "write rate limit: " __VA_ARGS__))
-
-void wrl_apply_debit_actual(struct domain *domain)
-{
-	struct wrl_timestampt now;
-
-	if (!domain || !domid_is_unprivileged(domain->domid))
-		/* sockets and privileged domain escape the write rate limit */
-		return;
-
-	wrl_gettime_now(&now);
-	wrl_credit_update(domain, now);
-
-	domain->wrl_credit -= wrl_config_writecost;
-	trace_wrl("domain %u credit=%ld (reserve=%ld)\n", domain->domid,
-		  (long)domain->wrl_credit, (long)wrl_reserve);
-
-	if (domain->wrl_credit < 0) {
-		if (!domain->wrl_delay_logged) {
-			domain->wrl_delay_logged = true;
-			WRL_LOG(now, "domain %ld is affected\n",
-				(long)domain->domid);
-		} else if (!wrl_log_last_warning) {
-			WRL_LOG(now, "rate limiting restarts\n");
-		}
-		wrl_log_last_warning = now.sec;
-	}
-}
-
-void wrl_log_periodic(struct wrl_timestampt now)
-{
-	if (wrl_log_last_warning &&
-	    (now.sec - wrl_log_last_warning) > WRL_LOGEVERY) {
-		WRL_LOG(now, "not in force recently\n");
-		wrl_log_last_warning = 0;
-	}
-}
-
-void wrl_apply_debit_direct(struct connection *conn)
-{
-	if (!conn)
-		/* some writes are generated internally */
-		return;
-
-	if (conn->transaction)
-		/* these are accounted for when the transaction ends */
-		return;
-
-	if (!wrl_ntransactions)
-		/* we don't conflict with anyone */
-		return;
-
-	wrl_apply_debit_actual(conn->domain);
-}
-
-void wrl_apply_debit_trans_commit(struct connection *conn)
-{
-	if (wrl_ntransactions <= 1)
-		/* our own transaction appears in the counter */
-		return;
-
-	wrl_apply_debit_actual(conn->domain);
-}
-
 const char *dump_state_connections(FILE *fp)
 {
 	const char *ret = NULL;
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 5123453f6c..89be643de4 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -163,9 +163,6 @@ struct wrl_timestampt {
 extern long wrl_ntransactions;
 
 void wrl_gettime_now(struct wrl_timestampt *now_ts);
-void wrl_domain_new(struct domain *domain);
-void wrl_domain_destroy(struct domain *domain);
-void wrl_credit_update(struct domain *domain, struct wrl_timestampt now);
 void wrl_check_timeout(struct domain *domain,
                        struct wrl_timestampt now,
                        int *ptimeout);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:54:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:54:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540881.843000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v85-00042G-Kj; Tue, 30 May 2023 08:54:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540881.843000; Tue, 30 May 2023 08:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v85-000427-GZ; Tue, 30 May 2023 08:54:57 +0000
Received: by outflank-mailman (input) for mailman id 540881;
 Tue, 30 May 2023 08:54:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v84-0001Xf-7N
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:54:56 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a5b21a0b-fec7-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:54:55 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id EB24D1F889;
 Tue, 30 May 2023 08:54:54 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id BF4151341B;
 Tue, 30 May 2023 08:54:54 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id FzdRLd65dWRyGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:54:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5b21a0b-fec7-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436894; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=V3h05quqAbrf3bWoKchuxvTz+b9VKvGHcNHNZTs9vBM=;
	b=KDH9xbEiYbUUGDJ3HuOa8yTt8ETH1QBzfEreJbmbvV1JPt17u0JFjCfaa6wL/TX5/wIfSk
	gKvC23z7CVd6hIracf1j2rcEABO1md8Cw5j3mTC0BljlvP99V+xMI638d3sT2toIXocnz1
	IBVvzM7FOyV9vrEWom4CgS9ce6WDxE4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 06/16] tools/xenstore: switch write limiting to use millisecond time base
Date: Tue, 30 May 2023 10:54:08 +0200
Message-Id: <20230530085418.5417-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is no need to keep struct wrl_timestampt, as it serves the same
purpose as the simpler time base provided by get_now_msec().

Move some more definitions from xenstored_domain.h into
xenstored_domain.c, as they are used nowhere else.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   |  8 ++---
 tools/xenstore/xenstored_core.h   |  7 ++--
 tools/xenstore/xenstored_domain.c | 56 +++++++++++++------------------
 tools/xenstore/xenstored_domain.h | 21 ++----------
 4 files changed, 32 insertions(+), 60 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index c467a704a1..e8f46495de 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -192,7 +192,7 @@ void reopen_log(void)
 	}
 }
 
-static uint64_t get_now_msec(void)
+uint64_t get_now_msec(void)
 {
 	struct timespec now_ts;
 
@@ -510,7 +510,6 @@ fail:
 static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
 {
 	struct connection *conn;
-	struct wrl_timestampt now;
 	uint64_t msecs;
 
 	if (fds)
@@ -530,13 +529,12 @@ static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
 		xce_pollfd_idx = set_fd(xenevtchn_fd(xce_handle),
 					POLLIN|POLLPRI);
 
-	wrl_gettime_now(&now);
-	wrl_log_periodic(now);
 	msecs = get_now_msec();
+	wrl_log_periodic(msecs);
 
 	list_for_each_entry(conn, &connections, list) {
 		if (conn->domain) {
-			wrl_check_timeout(conn->domain, now, ptimeout);
+			wrl_check_timeout(conn->domain, msecs, ptimeout);
 			check_event_timeout(conn, msecs, ptimeout);
 			if (conn_can_read(conn) ||
 			    (conn_can_write(conn) &&
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 92d5b50f3c..84a611cbb5 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -47,10 +47,6 @@
 /* DEFAULT_BUFFER_SIZE should be large enough for each errno string. */
 #define DEFAULT_BUFFER_SIZE 16
 
-typedef int32_t wrl_creditt;
-#define WRL_CREDIT_MAX (1000*1000*1000)
-/* ^ satisfies non-overflow condition for wrl_xfer_credit */
-
 struct xs_state_connection;
 
 struct buffered_data
@@ -320,6 +316,9 @@ extern bool keep_orphans;
 
 extern unsigned int timeout_watch_event_msec;
 
+/* Get internal time in milliseconds. */
+uint64_t get_now_msec(void);
+
 /* Map the kernel's xenstore page. */
 void *xenbus_map(void);
 void unmap_xenbus(void *interface);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index b21bdf6194..60d3aa1ddb 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -99,6 +99,8 @@ struct quota soft_quotas[ACC_N] = {
 	},
 };
 
+typedef int32_t wrl_creditt;
+
 struct domain
 {
 	/* The id of this domain */
@@ -139,7 +141,7 @@ struct domain
 
 	/* write rate limit */
 	wrl_creditt wrl_credit; /* [ -wrl_config_writecost, +_dburst ] */
-	struct wrl_timestampt wrl_timestamp;
+	uint64_t wrl_timestamp;
 	bool wrl_delay_logged;
 };
 
@@ -157,6 +159,17 @@ struct changed_domain
 
 static struct hashtable *domhash;
 
+/* Write rate limiting */
+
+/* Satisfies non-overflow condition for wrl_xfer_credit. */
+#define WRL_CREDIT_MAX (1000*1000*1000)
+#define WRL_FACTOR     1000 /* for fixed-point arithmetic */
+#define WRL_RATE        200
+#define WRL_DBURST       10
+#define WRL_GBURST     1000
+#define WRL_NEWDOMS       5
+#define WRL_LOGEVERY    120 /* seconds */
+
 static wrl_creditt wrl_config_writecost      = WRL_FACTOR;
 static wrl_creditt wrl_config_rate           = WRL_RATE   * WRL_FACTOR;
 static wrl_creditt wrl_config_dburst         = WRL_DBURST * WRL_FACTOR;
@@ -176,19 +189,6 @@ do {						\
 		trace("wrl: " __VA_ARGS__);	\
 } while (0)
 
-void wrl_gettime_now(struct wrl_timestampt *now_wt)
-{
-	struct timespec now_ts;
-	int r;
-
-	r = clock_gettime(CLOCK_MONOTONIC, &now_ts);
-	if (r)
-		barf_perror("Could not find time (clock_gettime failed)");
-
-	now_wt->sec = now_ts.tv_sec;
-	now_wt->msec = now_ts.tv_nsec / 1000000;
-}
-
 static void wrl_xfer_credit(wrl_creditt *debit,  wrl_creditt debit_floor,
 			    wrl_creditt *credit, wrl_creditt credit_ceil)
 	/*
@@ -215,7 +215,7 @@ static void wrl_xfer_credit(wrl_creditt *debit,  wrl_creditt debit_floor,
 static void wrl_domain_new(struct domain *domain)
 {
 	domain->wrl_credit = 0;
-	wrl_gettime_now(&domain->wrl_timestamp);
+	domain->wrl_timestamp = get_now_msec();
 	wrl_ndomains++;
 	/* Steal up to DBURST from the reserve */
 	wrl_xfer_credit(&wrl_reserve, -wrl_config_newdoms_dburst,
@@ -234,7 +234,7 @@ static void wrl_domain_destroy(struct domain *domain)
 			&wrl_reserve, wrl_config_dburst);
 }
 
-static void wrl_credit_update(struct domain *domain, struct wrl_timestampt now)
+static void wrl_credit_update(struct domain *domain, uint64_t now)
 {
 	/*
 	 * We want to calculate
@@ -243,18 +243,12 @@ static void wrl_credit_update(struct domain *domain, struct wrl_timestampt now)
 	 * To avoid rounding errors from constantly adding small
 	 * amounts of credit, we only add credit for whole milliseconds.
 	 */
-	long seconds      = now.sec -  domain->wrl_timestamp.sec;
-	long milliseconds = now.msec - domain->wrl_timestamp.msec;
 	long msec;
 	int64_t denom, num;
 	wrl_creditt surplus;
 
-	seconds = MIN(seconds, 1000*1000); /* arbitrary, prevents overflow */
-	msec = seconds * 1000 + milliseconds;
-
-	if (msec < 0)
-                /* shouldn't happen with CLOCK_MONOTONIC */
-		msec = 0;
+	/* Prevent overflow by limiting to 32 bits. */
+	msec = MIN(now - domain->wrl_timestamp, 1000 * 1000 * 1000);
 
 	/* 32x32 -> 64 cannot overflow */
 	denom = (int64_t)msec * wrl_config_rate;
@@ -286,9 +280,7 @@ static void wrl_credit_update(struct domain *domain, struct wrl_timestampt now)
 		  (long)wrl_reserve, (long)surplus);
 }
 
-void wrl_check_timeout(struct domain *domain,
-		       struct wrl_timestampt now,
-		       int *ptimeout)
+void wrl_check_timeout(struct domain *domain, uint64_t now, int *ptimeout)
 {
 	uint64_t num, denom;
 	int wakeup;
@@ -325,13 +317,13 @@ void wrl_check_timeout(struct domain *domain,
 
 void wrl_apply_debit_actual(struct domain *domain)
 {
-	struct wrl_timestampt now;
+	uint64_t now;
 
 	if (!domain || !domain_is_unprivileged(domain->conn))
 		/* sockets and privileged domain escape the write rate limit */
 		return;
 
-	wrl_gettime_now(&now);
+	now = get_now_msec();
 	wrl_credit_update(domain, now);
 
 	domain->wrl_credit -= wrl_config_writecost;
@@ -346,14 +338,14 @@ void wrl_apply_debit_actual(struct domain *domain)
 		} else if (!wrl_log_last_warning) {
 			WRL_LOG(now, "rate limiting restarts\n");
 		}
-		wrl_log_last_warning = now.sec;
+		wrl_log_last_warning = now / 1000;
 	}
 }
 
-void wrl_log_periodic(struct wrl_timestampt now)
+void wrl_log_periodic(uint64_t now)
 {
 	if (wrl_log_last_warning &&
-	    (now.sec - wrl_log_last_warning) > WRL_LOGEVERY) {
+	    (now / 1000 - wrl_log_last_warning) > WRL_LOGEVERY) {
 		WRL_LOG(now, "not in force recently\n");
 		wrl_log_last_warning = 0;
 	}
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index 89be643de4..bf63f3fcc6 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -146,27 +146,10 @@ void domain_reset_global_acc(void);
 bool domain_max_chk(const struct connection *conn, unsigned int what,
 		    unsigned int val);
 
-/* Write rate limiting */
-
-#define WRL_FACTOR   1000 /* for fixed-point arithmetic */
-#define WRL_RATE      200
-#define WRL_DBURST     10
-#define WRL_GBURST   1000
-#define WRL_NEWDOMS     5
-#define WRL_LOGEVERY  120 /* seconds */
-
-struct wrl_timestampt {
-	time_t sec;
-	int msec;
-};
-
 extern long wrl_ntransactions;
 
-void wrl_gettime_now(struct wrl_timestampt *now_ts);
-void wrl_check_timeout(struct domain *domain,
-                       struct wrl_timestampt now,
-                       int *ptimeout);
-void wrl_log_periodic(struct wrl_timestampt now);
+void wrl_check_timeout(struct domain *domain, uint64_t now, int *ptimeout);
+void wrl_log_periodic(uint64_t now);
 void wrl_apply_debit_direct(struct connection *conn);
 void wrl_apply_debit_trans_commit(struct connection *conn);
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:55:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:55:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540886.843010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v8A-0004WS-Sv; Tue, 30 May 2023 08:55:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540886.843010; Tue, 30 May 2023 08:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v8A-0004WJ-Ph; Tue, 30 May 2023 08:55:02 +0000
Received: by outflank-mailman (input) for mailman id 540886;
 Tue, 30 May 2023 08:55:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v89-0001Xf-Rv
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:55:01 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a909bf9e-fec7-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:55:00 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 8C8AE1F889;
 Tue, 30 May 2023 08:55:00 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 5AB231341B;
 Tue, 30 May 2023 08:55:00 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id xBDPFOS5dWR9GwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:55:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a909bf9e-fec7-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436900; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VpYOE55y7dDkYrLF82bs2KE6V7NluVg/KzLUtjOisvA=;
	b=iMJzxlH9siBLYShrQFYqlH5CQYszTzmAOFp3HhomdYCbBt8B/Kc4C5X6Dv4RlLILzzMPfz
	z1INeqWlNT0XJ/Q50/xR2EtE+Wd57/qpIVwuJhaRggpPqXG6zymUsn/cuZ8Rq4xWI+49Rx
	t9XVoZxKFJzB0Jn69ZicAfzsJSNMqGE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 07/16] tools/xenstore: remove stale TODO file
Date: Tue, 30 May 2023 10:54:09 +0200
Message-Id: <20230530085418.5417-8-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The TODO file is no longer really helpful. It contains only entries
which either no longer apply or whose purpose is unknown
("Dynamic/supply nodes", "Remove assumption that rename doesn't
fail").

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/TODO | 10 ----------
 1 file changed, 10 deletions(-)
 delete mode 100644 tools/xenstore/TODO

diff --git a/tools/xenstore/TODO b/tools/xenstore/TODO
deleted file mode 100644
index 71d5bbbf50..0000000000
--- a/tools/xenstore/TODO
+++ /dev/null
@@ -1,10 +0,0 @@
-TODO in no particular order.  Some of these will never be done.  There
-are omissions of important but necessary things.  It is up to the
-reader to fill in the blanks.
-
-- Timeout failed watch responses
-- Dynamic/supply nodes
-- Persistant storage of introductions, watches and transactions, so daemon can restart
-- Remove assumption that rename doesn't fail
-- Multi-root transactions, for setting up front and back ends at same time.
-
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:55:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:55:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540888.843020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v8G-00053n-6t; Tue, 30 May 2023 08:55:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540888.843020; Tue, 30 May 2023 08:55:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v8G-00053I-3U; Tue, 30 May 2023 08:55:08 +0000
Received: by outflank-mailman (input) for mailman id 540888;
 Tue, 30 May 2023 08:55:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v8E-0001Xf-RM
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:55:06 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ac67b290-fec7-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:55:06 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 3798D1F889;
 Tue, 30 May 2023 08:55:06 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 074B31341B;
 Tue, 30 May 2023 08:55:06 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id DXN3AOq5dWSIGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:55:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac67b290-fec7-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436906; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l2FBUE2T4MH30TEu8Jlu1UNyg7s+XGZ/EyB1vpKoEVo=;
	b=rVU9GKPTrUebAF1qgXiONfJxixh4JihMiavSlJhFXZkfAKrbYpPykvpwZ6zAfDpiMsZZf5
	bYyNH5k03CIyHg48Kk2Z1omFwaMBi+tkAk4/D6joyNh0AtXaGlHbUIoyIdF5J2xq7o1Tsh
	lK08Pq0iQywMso6/4/1KuixPQ8phuIQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 08/16] tools/xenstore: remove unused events list
Date: Tue, 30 May 2023 10:54:10 +0200
Message-Id: <20230530085418.5417-9-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

struct watch contains an unused struct list_head events. Remove it.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_watch.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index e8eb35de02..4195c59e17 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -36,9 +36,6 @@ struct watch
 	/* Watches on this connection */
 	struct list_head list;
 
-	/* Current outstanding events applying to this watch. */
-	struct list_head events;
-
 	/* Offset into path for skipping prefix (used for relative paths). */
 	unsigned int prefix_len;
 
@@ -205,8 +202,6 @@ static struct watch *add_watch(struct connection *conn, char *path, char *token,
 
 	watch->prefix_len = relative ? strlen(get_implicit_path(conn)) + 1 : 0;
 
-	INIT_LIST_HEAD(&watch->events);
-
 	domain_watch_inc(conn);
 	list_add_tail(&watch->list, &conn->watches);
 	talloc_set_destructor(watch, destroy_watch);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:55:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:55:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540893.843030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v8S-0005vu-Gs; Tue, 30 May 2023 08:55:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540893.843030; Tue, 30 May 2023 08:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3v8S-0005vf-D6; Tue, 30 May 2023 08:55:20 +0000
Received: by outflank-mailman (input) for mailman id 540893;
 Tue, 30 May 2023 08:55:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v8R-00026J-Hq
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:55:19 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b323688d-fec7-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:55:17 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 7B8D61F889;
 Tue, 30 May 2023 08:55:17 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 4F13C1341B;
 Tue, 30 May 2023 08:55:17 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id sl/qEfW5dWSlGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:55:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b323688d-fec7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436917; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=C/Ox51cVL9p1eHyc6sg+qCU7lAujcwGHF0wI/e2QVOQ=;
	b=W7vChUNhEyba3TIvObcpqaw4ipjlkF/Oxa17NKQJqIUpgk1XuMwL/Zx4C2YI/1XW9/FGQ3
	gfEFXx+1prc+c5Wa4TN71oPmfD/4/H6EI1VEHG25T3Qv9KVchysjnhqybNr/NRs0N3tjY1
	tUGt1Tb+POeEiwXtMmpdPYLWMJvEgL4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 10/16] tools/libs/store: use xen_list.h instead of xenstore/list.h
Date: Tue, 30 May 2023 10:54:12 +0200
Message-Id: <20230530085418.5417-11-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Replace the usage of the xenstore private list.h header with the
common xen_list.h one.
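
The XEN_TAILQ_* macros follow the BSD <sys/queue.h> TAILQ pattern. As an
illustration only (using the standard TAILQ macros rather than the Xen
ones, and a simplified stand-in for struct xs_stored_msg), the reply and
watch list handling in this patch maps onto code like:

```c
#include <stdlib.h>
#include <sys/queue.h>

/* Simplified stand-in for struct xs_stored_msg. */
struct msg {
	TAILQ_ENTRY(msg) list;	/* ~ XEN_TAILQ_ENTRY(struct xs_stored_msg) */
	int id;
};

TAILQ_HEAD(msg_list, msg);	/* ~ XEN_TAILQ_HEAD(, struct xs_stored_msg) */

/* Pop the first entry, as read_reply() does via XEN_TAILQ_FIRST/REMOVE. */
struct msg *pop_first(struct msg_list *h)
{
	struct msg *m = TAILQ_FIRST(h);

	if (m)
		TAILQ_REMOVE(h, m, list);
	return m;
}
```

Unlike the open-coded list.h macros, the TAILQ-style head carries the
element type, so removal and traversal take the head explicitly, which is
why XEN_TAILQ_REMOVE() needs the list head as an argument where
list_del() did not.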

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 tools/libs/store/xs.c | 56 +++++++++++++++++++++----------------------
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index 7a9a8b1656..3813b69ae2 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -35,13 +35,13 @@
 #include <errno.h>
 #include "xenstore.h"
 #include "xs_lib.h"
-#include "list.h"
 #include "utils.h"
 
 #include <xentoolcore_internal.h>
+#include <xen_list.h>
 
 struct xs_stored_msg {
-	struct list_head list;
+	XEN_TAILQ_ENTRY(struct xs_stored_msg) list;
 	struct xsd_sockmsg hdr;
 	char *body;
 };
@@ -70,7 +70,7 @@ struct xs_handle {
          * A list of fired watch messages, protected by a mutex. Users can
          * wait on the conditional variable until a watch is pending.
          */
-	struct list_head watch_list;
+	XEN_TAILQ_HEAD(, struct xs_stored_msg) watch_list;
 	pthread_mutex_t watch_mutex;
 	pthread_cond_t watch_condvar;
 
@@ -84,7 +84,7 @@ struct xs_handle {
          * because we serialise requests. The requester can wait on the
          * conditional variable for its response.
          */
-	struct list_head reply_list;
+	XEN_TAILQ_HEAD(, struct xs_stored_msg) reply_list;
 	pthread_mutex_t reply_mutex;
 	pthread_cond_t reply_condvar;
 
@@ -133,8 +133,8 @@ static void *read_thread(void *arg);
 struct xs_handle {
 	int fd;
 	Xentoolcore__Active_Handle tc_ah; /* for restrict */
-	struct list_head reply_list;
-	struct list_head watch_list;
+	XEN_TAILQ_HEAD(, struct xs_stored_msg) reply_list;
+	XEN_TAILQ_HEAD(, struct xs_stored_msg) watch_list;
 	/* Clients can select() on this pipe to wait for a watch to fire. */
 	int watch_pipe[2];
 	/* Filtering watch event in unwatch function? */
@@ -180,7 +180,7 @@ int xs_fileno(struct xs_handle *h)
 
 	if ((h->watch_pipe[0] == -1) && (pipe(h->watch_pipe) != -1)) {
 		/* Kick things off if the watch list is already non-empty. */
-		if (!list_empty(&h->watch_list))
+		if (!XEN_TAILQ_EMPTY(&h->watch_list))
 			while (write(h->watch_pipe[1], &c, 1) != 1)
 				continue;
 	}
@@ -262,8 +262,8 @@ static struct xs_handle *get_handle(const char *connect_to)
 	if (h->fd == -1)
 		goto err;
 
-	INIT_LIST_HEAD(&h->reply_list);
-	INIT_LIST_HEAD(&h->watch_list);
+	XEN_TAILQ_INIT(&h->reply_list);
+	XEN_TAILQ_INIT(&h->watch_list);
 
 	/* Watch pipe is allocated on demand in xs_fileno(). */
 	h->watch_pipe[0] = h->watch_pipe[1] = -1;
@@ -329,12 +329,12 @@ struct xs_handle *xs_open(unsigned long flags)
 static void close_free_msgs(struct xs_handle *h) {
 	struct xs_stored_msg *msg, *tmsg;
 
-	list_for_each_entry_safe(msg, tmsg, &h->reply_list, list) {
+	XEN_TAILQ_FOREACH_SAFE(msg, &h->reply_list, list, tmsg) {
 		free(msg->body);
 		free(msg);
 	}
 
-	list_for_each_entry_safe(msg, tmsg, &h->watch_list, list) {
+	XEN_TAILQ_FOREACH_SAFE(msg, &h->watch_list, list, tmsg) {
 		free(msg->body);
 		free(msg);
 	}
@@ -459,17 +459,17 @@ static void *read_reply(
 
 	mutex_lock(&h->reply_mutex);
 #ifdef USE_PTHREAD
-	while (list_empty(&h->reply_list) && read_from_thread && h->fd != -1)
+	while (XEN_TAILQ_EMPTY(&h->reply_list) && read_from_thread && h->fd != -1)
 		condvar_wait(&h->reply_condvar, &h->reply_mutex);
 #endif
-	if (list_empty(&h->reply_list)) {
+	if (XEN_TAILQ_EMPTY(&h->reply_list)) {
 		mutex_unlock(&h->reply_mutex);
 		errno = EINVAL;
 		return NULL;
 	}
-	msg = list_top(&h->reply_list, struct xs_stored_msg, list);
-	list_del(&msg->list);
-	assert(list_empty(&h->reply_list));
+	msg = XEN_TAILQ_FIRST(&h->reply_list);
+	XEN_TAILQ_REMOVE(&h->reply_list, msg, list);
+	assert(XEN_TAILQ_EMPTY(&h->reply_list));
 	mutex_unlock(&h->reply_mutex);
 
 	*type = msg->hdr.type;
@@ -883,7 +883,7 @@ static void xs_maybe_clear_watch_pipe(struct xs_handle *h)
 {
 	char c;
 
-	if (list_empty(&h->watch_list) && (h->watch_pipe[0] != -1))
+	if (XEN_TAILQ_EMPTY(&h->watch_list) && (h->watch_pipe[0] != -1))
 		while (read(h->watch_pipe[0], &c, 1) != 1)
 			continue;
 }
@@ -907,7 +907,7 @@ static char **read_watch_internal(struct xs_handle *h, unsigned int *num,
 	 * we haven't called xs_watch.	Presumably the application
 	 * will do so later; in the meantime we just block.
 	 */
-	while (list_empty(&h->watch_list) && h->fd != -1) {
+	while (XEN_TAILQ_EMPTY(&h->watch_list) && h->fd != -1) {
 		if (nonblocking) {
 			mutex_unlock(&h->watch_mutex);
 			errno = EAGAIN;
@@ -925,13 +925,13 @@ static char **read_watch_internal(struct xs_handle *h, unsigned int *num,
 
 #endif /* !defined(USE_PTHREAD) */
 
-	if (list_empty(&h->watch_list)) {
+	if (XEN_TAILQ_EMPTY(&h->watch_list)) {
 		mutex_unlock(&h->watch_mutex);
 		errno = EINVAL;
 		return NULL;
 	}
-	msg = list_top(&h->watch_list, struct xs_stored_msg, list);
-	list_del(&msg->list);
+	msg = XEN_TAILQ_FIRST(&h->watch_list);
+	XEN_TAILQ_REMOVE(&h->watch_list, msg, list);
 
 	xs_maybe_clear_watch_pipe(h);
 	mutex_unlock(&h->watch_mutex);
@@ -1007,12 +1007,12 @@ bool xs_unwatch(struct xs_handle *h, const char *path, const char *token)
 	/* Filter the watch list to remove potential message */
 	mutex_lock(&h->watch_mutex);
 
-	if (list_empty(&h->watch_list)) {
+	if (XEN_TAILQ_EMPTY(&h->watch_list)) {
 		mutex_unlock(&h->watch_mutex);
 		return res;
 	}
 
-	list_for_each_entry_safe(msg, tmsg, &h->watch_list, list) {
+	XEN_TAILQ_FOREACH_SAFE(msg, &h->watch_list, list, tmsg) {
 		assert(msg->hdr.type == XS_WATCH_EVENT);
 
 		s = msg->body;
@@ -1034,7 +1034,7 @@ bool xs_unwatch(struct xs_handle *h, const char *path, const char *token)
 
 		if (l_token && !strcmp(token, l_token) &&
 		    l_path && xs_path_is_subpath(path, l_path)) {
-			list_del(&msg->list);
+			XEN_TAILQ_REMOVE(&h->watch_list, msg, list);
 			free(msg);
 		}
 	}
@@ -1290,12 +1290,12 @@ static int read_message(struct xs_handle *h, int nonblocking)
 		cleanup_push(pthread_mutex_unlock, &h->watch_mutex);
 
 		/* Kick users out of their select() loop. */
-		if (list_empty(&h->watch_list) &&
+		if (XEN_TAILQ_EMPTY(&h->watch_list) &&
 		    (h->watch_pipe[1] != -1))
 			while (write(h->watch_pipe[1], body, 1) != 1) /* Cancellation point */
 				continue;
 
-		list_add_tail(&msg->list, &h->watch_list);
+		XEN_TAILQ_INSERT_TAIL(&h->watch_list, msg, list);
 
 		condvar_signal(&h->watch_condvar);
 
@@ -1304,13 +1304,13 @@ static int read_message(struct xs_handle *h, int nonblocking)
 		mutex_lock(&h->reply_mutex);
 
 		/* There should only ever be one response pending! */
-		if (!list_empty(&h->reply_list)) {
+		if (!XEN_TAILQ_EMPTY(&h->reply_list)) {
 			mutex_unlock(&h->reply_mutex);
 			saved_errno = EEXIST;
 			goto error_freebody;
 		}
 
-		list_add_tail(&msg->list, &h->reply_list);
+		XEN_TAILQ_INSERT_TAIL(&h->reply_list, msg, list);
 		condvar_signal(&h->reply_condvar);
 
 		mutex_unlock(&h->reply_mutex);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:55:25 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 11/16] tools/libs/store: make libxenstore independent of utils.h
Date: Tue, 30 May 2023 10:54:13 +0200
Message-Id: <20230530085418.5417-12-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There is no real need for including tools/xenstore/utils.h from
libxenstore, as only streq() and ARRAY_SIZE() are obtained via that
header.

streq() is just !strcmp(), and ARRAY_SIZE() is brought in via
xen-tools/common-macros.h.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 tools/libs/store/xs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index 3813b69ae2..76ffb1be45 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -33,9 +33,9 @@
 #include <signal.h>
 #include <stdint.h>
 #include <errno.h>
+#include <xen-tools/common-macros.h>
 #include "xenstore.h"
 #include "xs_lib.h"
-#include "utils.h"
 
 #include <xentoolcore_internal.h>
 #include <xen_list.h>
@@ -437,7 +437,7 @@ static int get_error(const char *errorstring)
 {
 	unsigned int i;
 
-	for (i = 0; !streq(errorstring, xsd_errors[i].errstring); i++)
+	for (i = 0; strcmp(errorstring, xsd_errors[i].errstring); i++)
 		if (i == ARRAY_SIZE(xsd_errors) - 1)
 			return EINVAL;
 	return xsd_errors[i].errnum;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:57:45 2023
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 13/16] tools/xenstore: replace xs_lib.c with a header
Date: Tue, 30 May 2023 10:54:15 +0200
Message-Id: <20230530085418.5417-14-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of compiling the same small C source file into multiple binaries
from two source directories, use a header file with inline functions as
a replacement.

As some of the functions are exported by libxenstore, rename the inline
functions from xs_*() to xenstore_*() and add xs_*() wrappers to
libxenstore.

With that, no sources required to build libxenstore are left in
tools/xenstore, so the file COPYING can be removed.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 MAINTAINERS                               |   1 +
 tools/include/xen-tools/xenstore-common.h | 128 ++++++
 tools/libs/store/Makefile                 |   4 -
 tools/libs/store/xs.c                     |  28 +-
 tools/xenstore/COPYING                    | 514 ----------------------
 tools/xenstore/Makefile                   |   2 +-
 tools/xenstore/Makefile.common            |   2 +-
 tools/xenstore/xenstore_client.c          |   4 +-
 tools/xenstore/xenstored_control.c        |   9 +-
 tools/xenstore/xenstored_core.c           |  15 +-
 tools/xenstore/xs_lib.c                   | 126 ------
 tools/xenstore/xs_lib.h                   |  31 --
 12 files changed, 168 insertions(+), 696 deletions(-)
 create mode 100644 tools/include/xen-tools/xenstore-common.h
 delete mode 100644 tools/xenstore/COPYING
 delete mode 100644 tools/xenstore/xs_lib.c
 delete mode 100644 tools/xenstore/xs_lib.h

diff --git a/MAINTAINERS b/MAINTAINERS
index f2f1881b32..751703373d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -657,6 +657,7 @@ F:	tools/helpers/init-xenstore-domain.c
 F:	tools/include/xenstore-compat/
 F:	tools/include/xenstore.h
 F:	tools/include/xenstore_lib.h
+F:	tools/include/xen-tools/xenstore-common.h
 F:	tools/libs/store/
 F:	tools/xenstore/
 
diff --git a/tools/include/xen-tools/xenstore-common.h b/tools/include/xen-tools/xenstore-common.h
new file mode 100644
index 0000000000..8cb097a8fc
--- /dev/null
+++ b/tools/include/xen-tools/xenstore-common.h
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: LGPL-2.1-or-later */
+/* Common functions for libxenstore, xenstored and xenstore-clients. */
+
+#ifndef __XEN_TOOLS_XENSTORE_COMMON__
+#define __XEN_TOOLS_XENSTORE_COMMON__
+
+#include <limits.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <xenstore_lib.h>
+
+static inline const char *xenstore_daemon_rundir(void)
+{
+    char *s = getenv("XENSTORED_RUNDIR");
+
+    return s ? s : XEN_RUN_STORED;
+}
+
+static inline const char *xenstore_daemon_path(void)
+{
+    static char buf[PATH_MAX];
+    char *s = getenv("XENSTORED_PATH");
+
+    if ( s )
+        return s;
+
+    if ( snprintf(buf, sizeof(buf), "%s/socket", xenstore_daemon_rundir()) >=
+         PATH_MAX )
+        return NULL;
+
+    return buf;
+}
+
+/* Convert strings to permissions.  False if a problem. */
+static inline bool xenstore_strings_to_perms(struct xs_permissions *perms,
+                                             unsigned int num,
+                                             const char *strings)
+{
+    const char *p;
+    char *end;
+    unsigned int i;
+
+    for ( p = strings, i = 0; i < num; i++ )
+    {
+        /* "r", "w", or "b" for both. */
+        switch ( *p )
+        {
+        case 'r':
+            perms[i].perms = XS_PERM_READ;
+            break;
+
+        case 'w':
+            perms[i].perms = XS_PERM_WRITE;
+            break;
+
+        case 'b':
+            perms[i].perms = XS_PERM_READ|XS_PERM_WRITE;
+            break;
+
+        case 'n':
+            perms[i].perms = XS_PERM_NONE;
+            break;
+
+        default:
+            errno = EINVAL;
+            return false;
+        }
+
+        p++;
+        perms[i].id = strtol(p, &end, 0);
+        if ( *end || !*p )
+        {
+            errno = EINVAL;
+            return false;
+        }
+
+        p = end + 1;
+    }
+
+    return true;
+}
+
+/* Convert permissions to a string (up to len MAX_STRLEN(unsigned int)+1). */
+static inline bool xenstore_perm_to_string(const struct xs_permissions *perm,
+                                           char *buffer, size_t buf_len)
+{
+    switch ( (int)perm->perms & ~XS_PERM_IGNORE )
+    {
+    case XS_PERM_WRITE:
+        *buffer = 'w';
+        break;
+
+    case XS_PERM_READ:
+        *buffer = 'r';
+        break;
+
+    case XS_PERM_READ|XS_PERM_WRITE:
+        *buffer = 'b';
+        break;
+
+    case XS_PERM_NONE:
+        *buffer = 'n';
+        break;
+
+    default:
+        errno = EINVAL;
+        return false;
+    }
+
+    snprintf(buffer + 1, buf_len - 1, "%i", (int)perm->id);
+
+    return true;
+}
+
+/* Given a string and a length, count how many strings (nul terms). */
+static inline unsigned int xenstore_count_strings(const char *strings,
+                                                  unsigned int len)
+{
+    unsigned int num;
+    const char *p;
+
+    for ( p = strings, num = 0; p < strings + len; p++ )
+        if ( *p == '\0' )
+            num++;
+
+    return num;
+}
+#endif /* __XEN_TOOLS_XENSTORE_COMMON__ */
diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
index daed9d148f..c1a1508713 100644
--- a/tools/libs/store/Makefile
+++ b/tools/libs/store/Makefile
@@ -9,7 +9,6 @@ ifeq ($(CONFIG_Linux),y)
 LDLIBS += -ldl
 endif
 
-OBJS-y   += xs_lib.o
 OBJS-y   += xs.o
 
 LIBHEADER = xenstore.h xenstore_lib.h
@@ -21,9 +20,6 @@ CFLAGS += -include $(XEN_ROOT)/tools/config.h
 CFLAGS += $(CFLAGS_libxentoolcore)
 CFLAGS += -DXEN_RUN_STORED="\"$(XEN_RUN_STORED)\""
 
-vpath xs_lib.c $(XEN_ROOT)/tools/xenstore
-CFLAGS += -iquote $(XEN_ROOT)/tools/xenstore
-
 xs.opic: CFLAGS += -DUSE_PTHREAD
 ifeq ($(CONFIG_Linux),y)
 xs.opic: CFLAGS += -DUSE_DLSYM
diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index bb93246bfb..140b9a2839 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -34,8 +34,8 @@
 #include <stdint.h>
 #include <errno.h>
 #include <xen-tools/common-macros.h>
+#include <xen-tools/xenstore-common.h>
 #include "xenstore.h"
-#include "xs_lib.h"
 
 #include <xentoolcore_internal.h>
 #include <xen_list.h>
@@ -627,7 +627,7 @@ static char **xs_directory_common(char *strings, unsigned int len,
 	char *p, **ret;
 
 	/* Count the strings. */
-	*num = xs_count_strings(strings, len);
+	*num = xenstore_count_strings(strings, len);
 
 	/* Transfer to one big alloc for easy freeing. */
 	ret = malloc(*num * sizeof(char *) + len);
@@ -775,7 +775,7 @@ struct xs_permissions *xs_get_permissions(struct xs_handle *h,
 		return NULL;
 
 	/* Count the strings: each one perms then domid. */
-	*num = xs_count_strings(strings, len);
+	*num = xenstore_count_strings(strings, len);
 
 	/* Transfer to one big alloc for easy freeing. */
 	ret = malloc(*num * sizeof(struct xs_permissions));
@@ -784,7 +784,7 @@ struct xs_permissions *xs_get_permissions(struct xs_handle *h,
 		return NULL;
 	}
 
-	if (!xs_strings_to_perms(ret, *num, strings)) {
+	if (!xenstore_strings_to_perms(ret, *num, strings)) {
 		free_no_errno(ret);
 		ret = NULL;
 	}
@@ -811,7 +811,7 @@ bool xs_set_permissions(struct xs_handle *h,
 	for (i = 0; i < num_perms; i++) {
 		char buffer[MAX_STRLEN(unsigned int)+1];
 
-		if (!xs_perm_to_string(&perms[i], buffer, sizeof(buffer)))
+		if (!xenstore_perm_to_string(&perms[i], buffer, sizeof(buffer)))
 			goto unwind;
 
 		iov[i+1].iov_base = strdup(buffer);
@@ -977,7 +977,7 @@ static char **read_watch_internal(struct xs_handle *h, unsigned int *num,
 	assert(msg->hdr.type == XS_WATCH_EVENT);
 
 	strings     = msg->body;
-	num_strings = xs_count_strings(strings, msg->hdr.len);
+	num_strings = xenstore_count_strings(strings, msg->hdr.len);
 
 	ret = malloc(sizeof(char*) * num_strings + msg->hdr.len);
 	if (!ret) {
@@ -1366,11 +1366,27 @@ error:
 	return ret;
 }
 
+const char *xs_daemon_socket(void)
+{
+	return xenstore_daemon_path();
+}
+
 const char *xs_daemon_socket_ro(void)
 {
 	return xs_daemon_socket();
 }
 
+const char *xs_daemon_rundir(void)
+{
+	return xenstore_daemon_rundir();
+}
+
+bool xs_strings_to_perms(struct xs_permissions *perms, unsigned int num,
+			 const char *strings)
+{
+	return xenstore_strings_to_perms(perms, num, strings);
+}
+
 #ifdef USE_PTHREAD
 static void *read_thread(void *arg)
 {
diff --git a/tools/xenstore/COPYING b/tools/xenstore/COPYING
deleted file mode 100644
index c764b2e329..0000000000
--- a/tools/xenstore/COPYING
+++ /dev/null
@@ -1,514 +0,0 @@
-This license (LGPL) applies to the xenstore library which interfaces
-with the xenstore daemon (as stated in xs.c, xenstore.h, xs_lib.c and
-xenstore_lib.h).  The remaining files in the directory are licensed as
-stated in the comments (as of this writing, GPL, see ../../COPYING).
-
-
-                  GNU LESSER GENERAL PUBLIC LICENSE
-                       Version 2.1, February 1999
-
- Copyright (C) 1991, 1999 Free Software Foundation, Inc.
-	51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
-[This is the first released version of the Lesser GPL.  It also counts
- as the successor of the GNU Library Public License, version 2, hence
- the version number 2.1.]
-
-                            Preamble
-
-  The licenses for most software are designed to take away your
-freedom to share and change it.  By contrast, the GNU General Public
-Licenses are intended to guarantee your freedom to share and change
-free software--to make sure the software is free for all its users.
-
-  This license, the Lesser General Public License, applies to some
-specially designated software packages--typically libraries--of the
-Free Software Foundation and other authors who decide to use it.  You
-can use it too, but we suggest you first think carefully about whether
-this license or the ordinary General Public License is the better
-strategy to use in any particular case, based on the explanations
-below.
-
-  When we speak of free software, we are referring to freedom of use,
-not price.  Our General Public Licenses are designed to make sure that
-you have the freedom to distribute copies of free software (and charge
-for this service if you wish); that you receive source code or can get
-it if you want it; that you can change the software and use pieces of
-it in new free programs; and that you are informed that you can do
-these things.
-
-  To protect your rights, we need to make restrictions that forbid
-distributors to deny you these rights or to ask you to surrender these
-rights.  These restrictions translate to certain responsibilities for
-you if you distribute copies of the library or if you modify it.
-
-  For example, if you distribute copies of the library, whether gratis
-or for a fee, you must give the recipients all the rights that we gave
-you.  You must make sure that they, too, receive or can get the source
-code.  If you link other code with the library, you must provide
-complete object files to the recipients, so that they can relink them
-with the library after making changes to the library and recompiling
-it.  And you must show them these terms so they know their rights.
-
-  We protect your rights with a two-step method: (1) we copyright the
-library, and (2) we offer you this license, which gives you legal
-permission to copy, distribute and/or modify the library.
-
-  To protect each distributor, we want to make it very clear that
-there is no warranty for the free library.  Also, if the library is
-modified by someone else and passed on, the recipients should know
-that what they have is not the original version, so that the original
-author's reputation will not be affected by problems that might be
-introduced by others.
-
-  Finally, software patents pose a constant threat to the existence of
-any free program.  We wish to make sure that a company cannot
-effectively restrict the users of a free program by obtaining a
-restrictive license from a patent holder.  Therefore, we insist that
-any patent license obtained for a version of the library must be
-consistent with the full freedom of use specified in this license.
-
-  Most GNU software, including some libraries, is covered by the
-ordinary GNU General Public License.  This license, the GNU Lesser
-General Public License, applies to certain designated libraries, and
-is quite different from the ordinary General Public License.  We use
-this license for certain libraries in order to permit linking those
-libraries into non-free programs.
-
-  When a program is linked with a library, whether statically or using
-a shared library, the combination of the two is legally speaking a
-combined work, a derivative of the original library.  The ordinary
-General Public License therefore permits such linking only if the
-entire combination fits its criteria of freedom.  The Lesser General
-Public License permits more lax criteria for linking other code with
-the library.
-
-  We call this license the "Lesser" General Public License because it
-does Less to protect the user's freedom than the ordinary General
-Public License.  It also provides other free software developers Less
-of an advantage over competing non-free programs.  These disadvantages
-are the reason we use the ordinary General Public License for many
-libraries.  However, the Lesser license provides advantages in certain
-special circumstances.
-
-  For example, on rare occasions, there may be a special need to
-encourage the widest possible use of a certain library, so that it
-becomes a de-facto standard.  To achieve this, non-free programs must
-be allowed to use the library.  A more frequent case is that a free
-library does the same job as widely used non-free libraries.  In this
-case, there is little to gain by limiting the free library to free
-software only, so we use the Lesser General Public License.
-
-  In other cases, permission to use a particular library in non-free
-programs enables a greater number of people to use a large body of
-free software.  For example, permission to use the GNU C Library in
-non-free programs enables many more people to use the whole GNU
-operating system, as well as its variant, the GNU/Linux operating
-system.
-
-  Although the Lesser General Public License is Less protective of the
-users' freedom, it does ensure that the user of a program that is
-linked with the Library has the freedom and the wherewithal to run
-that program using a modified version of the Library.
-
-  The precise terms and conditions for copying, distribution and
-modification follow.  Pay close attention to the difference between a
-"work based on the library" and a "work that uses the library".  The
-former contains code derived from the library, whereas the latter must
-be combined with the library in order to run.
-
-                  GNU LESSER GENERAL PUBLIC LICENSE
-   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
-
-  0. This License Agreement applies to any software library or other
-program which contains a notice placed by the copyright holder or
-other authorized party saying it may be distributed under the terms of
-this Lesser General Public License (also called "this License").
-Each licensee is addressed as "you".
-
-  A "library" means a collection of software functions and/or data
-prepared so as to be conveniently linked with application programs
-(which use some of those functions and data) to form executables.
-
-  The "Library", below, refers to any such software library or work
-which has been distributed under these terms.  A "work based on the
-Library" means either the Library or any derivative work under
-copyright law: that is to say, a work containing the Library or a
-portion of it, either verbatim or with modifications and/or translated
-straightforwardly into another language.  (Hereinafter, translation is
-included without limitation in the term "modification".)
-
-  "Source code" for a work means the preferred form of the work for
-making modifications to it.  For a library, complete source code means
-all the source code for all modules it contains, plus any associated
-interface definition files, plus the scripts used to control
-compilation and installation of the library.
-
-  Activities other than copying, distribution and modification are not
-covered by this License; they are outside its scope.  The act of
-running a program using the Library is not restricted, and output from
-such a program is covered only if its contents constitute a work based
-on the Library (independent of the use of the Library in a tool for
-writing it).  Whether that is true depends on what the Library does
-and what the program that uses the Library does.
-
-  1. You may copy and distribute verbatim copies of the Library's
-complete source code as you receive it, in any medium, provided that
-you conspicuously and appropriately publish on each copy an
-appropriate copyright notice and disclaimer of warranty; keep intact
-all the notices that refer to this License and to the absence of any
-warranty; and distribute a copy of this License along with the
-Library.
-
-  You may charge a fee for the physical act of transferring a copy,
-and you may at your option offer warranty protection in exchange for a
-fee.
-
-  2. You may modify your copy or copies of the Library or any portion
-of it, thus forming a work based on the Library, and copy and
-distribute such modifications or work under the terms of Section 1
-above, provided that you also meet all of these conditions:
-
-    a) The modified work must itself be a software library.
-
-    b) You must cause the files modified to carry prominent notices
-    stating that you changed the files and the date of any change.
-
-    c) You must cause the whole of the work to be licensed at no
-    charge to all third parties under the terms of this License.
-
-    d) If a facility in the modified Library refers to a function or a
-    table of data to be supplied by an application program that uses
-    the facility, other than as an argument passed when the facility
-    is invoked, then you must make a good faith effort to ensure that,
-    in the event an application does not supply such function or
-    table, the facility still operates, and performs whatever part of
-    its purpose remains meaningful.
-
-    (For example, a function in a library to compute square roots has
-    a purpose that is entirely well-defined independent of the
-    application.  Therefore, Subsection 2d requires that any
-    application-supplied function or table used by this function must
-    be optional: if the application does not supply it, the square
-    root function must still compute square roots.)
-
-These requirements apply to the modified work as a whole.  If
-identifiable sections of that work are not derived from the Library,
-and can be reasonably considered independent and separate works in
-themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works.  But when you
-distribute the same sections as part of a whole which is a work based
-on the Library, the distribution of the whole must be on the terms of
-this License, whose permissions for other licensees extend to the
-entire whole, and thus to each and every part regardless of who wrote
-it.
-
-Thus, it is not the intent of this section to claim rights or contest
-your rights to work written entirely by you; rather, the intent is to
-exercise the right to control the distribution of derivative or
-collective works based on the Library.
-
-In addition, mere aggregation of another work not based on the Library
-with the Library (or with a work based on the Library) on a volume of
-a storage or distribution medium does not bring the other work under
-the scope of this License.
-
-  3. You may opt to apply the terms of the ordinary GNU General Public
-License instead of this License to a given copy of the Library.  To do
-this, you must alter all the notices that refer to this License, so
-that they refer to the ordinary GNU General Public License, version 2,
-instead of to this License.  (If a newer version than version 2 of the
-ordinary GNU General Public License has appeared, then you can specify
-that version instead if you wish.)  Do not make any other change in
-these notices.
-
-  Once this change is made in a given copy, it is irreversible for
-that copy, so the ordinary GNU General Public License applies to all
-subsequent copies and derivative works made from that copy.
-
-  This option is useful when you wish to copy part of the code of
-the Library into a program that is not a library.
-
-  4. You may copy and distribute the Library (or a portion or
-derivative of it, under Section 2) in object code or executable form
-under the terms of Sections 1 and 2 above provided that you accompany
-it with the complete corresponding machine-readable source code, which
-must be distributed under the terms of Sections 1 and 2 above on a
-medium customarily used for software interchange.
-
-  If distribution of object code is made by offering access to copy
-from a designated place, then offering equivalent access to copy the
-source code from the same place satisfies the requirement to
-distribute the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
-  5. A program that contains no derivative of any portion of the
-Library, but is designed to work with the Library by being compiled or
-linked with it, is called a "work that uses the Library".  Such a
-work, in isolation, is not a derivative work of the Library, and
-therefore falls outside the scope of this License.
-
-  However, linking a "work that uses the Library" with the Library
-creates an executable that is a derivative of the Library (because it
-contains portions of the Library), rather than a "work that uses the
-library".  The executable is therefore covered by this License.
-Section 6 states terms for distribution of such executables.
-
-  When a "work that uses the Library" uses material from a header file
-that is part of the Library, the object code for the work may be a
-derivative work of the Library even though the source code is not.
-Whether this is true is especially significant if the work can be
-linked without the Library, or if the work is itself a library.  The
-threshold for this to be true is not precisely defined by law.
-
-  If such an object file uses only numerical parameters, data
-structure layouts and accessors, and small macros and small inline
-functions (ten lines or less in length), then the use of the object
-file is unrestricted, regardless of whether it is legally a derivative
-work.  (Executables containing this object code plus portions of the
-Library will still fall under Section 6.)
-
-  Otherwise, if the work is a derivative of the Library, you may
-distribute the object code for the work under the terms of Section 6.
-Any executables containing that work also fall under Section 6,
-whether or not they are linked directly with the Library itself.
-
-  6. As an exception to the Sections above, you may also combine or
-link a "work that uses the Library" with the Library to produce a
-work containing portions of the Library, and distribute that work
-under terms of your choice, provided that the terms permit
-modification of the work for the customer's own use and reverse
-engineering for debugging such modifications.
-
-  You must give prominent notice with each copy of the work that the
-Library is used in it and that the Library and its use are covered by
-this License.  You must supply a copy of this License.  If the work
-during execution displays copyright notices, you must include the
-copyright notice for the Library among them, as well as a reference
-directing the user to the copy of this License.  Also, you must do one
-of these things:
-
-    a) Accompany the work with the complete corresponding
-    machine-readable source code for the Library including whatever
-    changes were used in the work (which must be distributed under
-    Sections 1 and 2 above); and, if the work is an executable linked
-    with the Library, with the complete machine-readable "work that
-    uses the Library", as object code and/or source code, so that the
-    user can modify the Library and then relink to produce a modified
-    executable containing the modified Library.  (It is understood
-    that the user who changes the contents of definitions files in the
-    Library will not necessarily be able to recompile the application
-    to use the modified definitions.)
-
-    b) Use a suitable shared library mechanism for linking with the
-    Library.  A suitable mechanism is one that (1) uses at run time a
-    copy of the library already present on the user's computer system,
-    rather than copying library functions into the executable, and (2)
-    will operate properly with a modified version of the library, if
-    the user installs one, as long as the modified version is
-    interface-compatible with the version that the work was made with.
-
-    c) Accompany the work with a written offer, valid for at least
-    three years, to give the same user the materials specified in
-    Subsection 6a, above, for a charge no more than the cost of
-    performing this distribution.
-
-    d) If distribution of the work is made by offering access to copy
-    from a designated place, offer equivalent access to copy the above
-    specified materials from the same place.
-
-    e) Verify that the user has already received a copy of these
-    materials or that you have already sent this user a copy.
-
-  For an executable, the required form of the "work that uses the
-Library" must include any data and utility programs needed for
-reproducing the executable from it.  However, as a special exception,
-the materials to be distributed need not include anything that is
-normally distributed (in either source or binary form) with the major
-components (compiler, kernel, and so on) of the operating system on
-which the executable runs, unless that component itself accompanies
-the executable.
-
-  It may happen that this requirement contradicts the license
-restrictions of other proprietary libraries that do not normally
-accompany the operating system.  Such a contradiction means you cannot
-use both them and the Library together in an executable that you
-distribute.
-
-  7. You may place library facilities that are a work based on the
-Library side-by-side in a single library together with other library
-facilities not covered by this License, and distribute such a combined
-library, provided that the separate distribution of the work based on
-the Library and of the other library facilities is otherwise
-permitted, and provided that you do these two things:
-
-    a) Accompany the combined library with a copy of the same work
-    based on the Library, uncombined with any other library
-    facilities.  This must be distributed under the terms of the
-    Sections above.
-
-    b) Give prominent notice with the combined library of the fact
-    that part of it is a work based on the Library, and explaining
-    where to find the accompanying uncombined form of the same work.
-
-  8. You may not copy, modify, sublicense, link with, or distribute
-the Library except as expressly provided under this License.  Any
-attempt otherwise to copy, modify, sublicense, link with, or
-distribute the Library is void, and will automatically terminate your
-rights under this License.  However, parties who have received copies,
-or rights, from you under this License will not have their licenses
-terminated so long as such parties remain in full compliance.
-
-  9. You are not required to accept this License, since you have not
-signed it.  However, nothing else grants you permission to modify or
-distribute the Library or its derivative works.  These actions are
-prohibited by law if you do not accept this License.  Therefore, by
-modifying or distributing the Library (or any work based on the
-Library), you indicate your acceptance of this License to do so, and
-all its terms and conditions for copying, distributing or modifying
-the Library or works based on it.
-
-  10. Each time you redistribute the Library (or any work based on the
-Library), the recipient automatically receives a license from the
-original licensor to copy, distribute, link with or modify the Library
-subject to these terms and conditions.  You may not impose any further
-restrictions on the recipients' exercise of the rights granted herein.
-You are not responsible for enforcing compliance by third parties with
-this License.
-
-  11. If, as a consequence of a court judgment or allegation of patent
-infringement or for any other reason (not limited to patent issues),
-conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License.  If you cannot
-distribute so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you
-may not distribute the Library at all.  For example, if a patent
-license would not permit royalty-free redistribution of the Library by
-all those who receive copies directly or indirectly through you, then
-the only way you could satisfy both it and this License would be to
-refrain entirely from distribution of the Library.
-
-If any portion of this section is held invalid or unenforceable under
-any particular circumstance, the balance of the section is intended to
-apply, and the section as a whole is intended to apply in other
-circumstances.
-
-It is not the purpose of this section to induce you to infringe any
-patents or other property right claims or to contest validity of any
-such claims; this section has the sole purpose of protecting the
-integrity of the free software distribution system which is
-implemented by public license practices.  Many people have made
-generous contributions to the wide range of software distributed
-through that system in reliance on consistent application of that
-system; it is up to the author/donor to decide if he or she is willing
-to distribute software through any other system and a licensee cannot
-impose that choice.
-
-This section is intended to make thoroughly clear what is believed to
-be a consequence of the rest of this License.
-
-  12. If the distribution and/or use of the Library is restricted in
-certain countries either by patents or by copyrighted interfaces, the
-original copyright holder who places the Library under this License
-may add an explicit geographical distribution limitation excluding those
-countries, so that distribution is permitted only in or among
-countries not thus excluded.  In such case, this License incorporates
-the limitation as if written in the body of this License.
-
-  13. The Free Software Foundation may publish revised and/or new
-versions of the Lesser General Public License from time to time.
-Such new versions will be similar in spirit to the present version,
-but may differ in detail to address new problems or concerns.
-
-Each version is given a distinguishing version number.  If the Library
-specifies a version number of this License which applies to it and
-"any later version", you have the option of following the terms and
-conditions either of that version or of any later version published by
-the Free Software Foundation.  If the Library does not specify a
-license version number, you may choose any version ever published by
-the Free Software Foundation.
-
-  14. If you wish to incorporate parts of the Library into other free
-programs whose distribution conditions are incompatible with these,
-write to the author to ask for permission.  For software which is
-copyrighted by the Free Software Foundation, write to the Free
-Software Foundation; we sometimes make exceptions for this.  Our
-decision will be guided by the two goals of preserving the free status
-of all derivatives of our free software and of promoting the sharing
-and reuse of software generally.
-
-                            NO WARRANTY
-
-  15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
-WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
-EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
-OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
-KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
-IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
-LIBRARY IS WITH YOU.  SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
-THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
-
-  16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
-WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
-AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
-FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
-CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
-LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
-RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
-FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
-SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
-DAMAGES.
-
-                     END OF TERMS AND CONDITIONS
-
-           How to Apply These Terms to Your New Libraries
-
-  If you develop a new library, and you want it to be of the greatest
-possible use to the public, we recommend making it free software that
-everyone can redistribute and change.  You can do so by permitting
-redistribution under these terms (or, alternatively, under the terms
-of the ordinary General Public License).
-
-  To apply these terms, attach the following notices to the library.
-It is safest to attach them to the start of each source file to most
-effectively convey the exclusion of warranty; and each file should
-have at least the "copyright" line and a pointer to where the full
-notice is found.
-
-
-    <one line to give the library's name and a brief idea of what it does.>
-    Copyright (C) <year>  <name of author>
-
-    This library is free software; you can redistribute it and/or
-    modify it under the terms of the GNU Lesser General Public
-    License as published by the Free Software Foundation; either
-    version 2.1 of the License, or (at your option) any later version.
-
-    This library is distributed in the hope that it will be useful,
-    but WITHOUT ANY WARRANTY; without even the implied warranty of
-    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-    Lesser General Public License for more details.
-
-    You should have received a copy of the GNU Lesser General Public
-    License along with this library; If not, see <http://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
-You should also get your employer (if you work as a programmer) or
-your school, if any, to sign a "copyright disclaimer" for the library,
-if necessary.  Here is a sample; alter the names:
-
-  Yoyodyne, Inc., hereby disclaims all copyright interest in the
-  library `Frob' (a library for tweaking knobs) written by James
-  Random Hacker.
-
-  <signature of Ty Coon>, 1 April 1990
-  Ty Coon, President of Vice
-
-That's all there is to it!
-
-
diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
index 56723139a1..dc39b6cb31 100644
--- a/tools/xenstore/Makefile
+++ b/tools/xenstore/Makefile
@@ -44,7 +44,7 @@ xenstored: $(XENSTORED_OBJS-y)
 $(CLIENTS): xenstore
 	ln -f xenstore $@
 
-xenstore: xenstore_client.o xs_lib.o
+xenstore: xenstore_client.o
 	$(CC) $(LDFLAGS) $^ $(LDLIBS) -o $@ $(APPEND_LDFLAGS)
 
 xenstore-control: xenstore_control.o
diff --git a/tools/xenstore/Makefile.common b/tools/xenstore/Makefile.common
index b18f95c103..f71c9bfd55 100644
--- a/tools/xenstore/Makefile.common
+++ b/tools/xenstore/Makefile.common
@@ -2,7 +2,7 @@
 
 XENSTORED_OBJS-y := xenstored_core.o xenstored_watch.o xenstored_domain.o
 XENSTORED_OBJS-y += xenstored_transaction.o xenstored_control.o
-XENSTORED_OBJS-y += xs_lib.o talloc.o utils.o tdb.o hashtable.o
+XENSTORED_OBJS-y += talloc.o utils.o tdb.o hashtable.o
 
 XENSTORED_OBJS-$(CONFIG_Linux) += xenstored_posix.o
 XENSTORED_OBJS-$(CONFIG_NetBSD) += xenstored_posix.o
diff --git a/tools/xenstore/xenstore_client.c b/tools/xenstore/xenstore_client.c
index 8ff8abf12a..96f97ac77e 100644
--- a/tools/xenstore/xenstore_client.c
+++ b/tools/xenstore/xenstore_client.c
@@ -23,7 +23,7 @@
 
 #include <sys/ioctl.h>
 
-#include "xs_lib.h"
+#include <xen-tools/xenstore-common.h>
 
 #define PATH_SEP '/'
 #define MAX_PATH_LEN 256
@@ -361,7 +361,7 @@ static void do_ls(struct xs_handle *h, char *path, int cur_depth, int show_perms
                 for (i = 0; i < nperms; i++) {
                     if (i)
                         putchar(',');
-                    xs_perm_to_string(perms+i, buf, sizeof(buf));
+                    xenstore_perm_to_string(perms+i, buf, sizeof(buf));
                     fputs(buf, stdout);
                 }
                 putchar(')');
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 6ef800ff64..0d131e2ebc 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -31,10 +31,10 @@
 #include <fcntl.h>
 #include <unistd.h>
 #include <xenctrl.h>
+#include <xen-tools/xenstore-common.h>
 
 #include "utils.h"
 #include "talloc.h"
-#include "xs_lib.h"
 #include "xenstored_core.h"
 #include "xenstored_control.h"
 #include "xenstored_domain.h"
@@ -560,7 +560,8 @@ static FILE *lu_dump_open(const void *ctx)
 	char *filename;
 	int fd;
 
-	filename = talloc_asprintf(ctx, "%s/state_dump", xs_daemon_rundir());
+	filename = talloc_asprintf(ctx, "%s/state_dump",
+				   xenstore_daemon_rundir());
 	if (!filename)
 		return NULL;
 
@@ -583,7 +584,7 @@ static void lu_get_dump_state(struct lu_dump_state *state)
 	state->size = 0;
 
 	state->filename = talloc_asprintf(NULL, "%s/state_dump",
-					  xs_daemon_rundir());
+					  xenstore_daemon_rundir());
 	if (!state->filename)
 		barf("Allocation failure");
 
@@ -1017,7 +1018,7 @@ int do_control(const void *ctx, struct connection *conn,
 	if (cmd == ARRAY_SIZE(cmds))
 		return EINVAL;
 
-	num = xs_count_strings(in->buffer, in->used);
+	num = xenstore_count_strings(in->buffer, in->used);
 	if (cmds[cmd].max_pars)
 		num = min(num, cmds[cmd].max_pars);
 	vec = talloc_array(ctx, char *, num);
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index a78cbf1116..62deee9cb9 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -42,11 +42,11 @@
 #include <setjmp.h>
 
 #include <xenevtchn.h>
+#include <xen-tools/xenstore-common.h>
 
 #include "utils.h"
 #include "list.h"
 #include "talloc.h"
-#include "xs_lib.h"
 #include "xenstored_core.h"
 #include "xenstored_watch.h"
 #include "xenstored_transaction.h"
@@ -1201,7 +1201,8 @@ static char *perms_to_strings(const void *ctx, const struct node_perms *perms,
 	char buffer[MAX_STRLEN(unsigned int) + 1];
 
 	for (*len = 0, i = 0; i < perms->num; i++) {
-		if (!xs_perm_to_string(&perms->p[i], buffer, sizeof(buffer)))
+		if (!xenstore_perm_to_string(&perms->p[i], buffer,
+					     sizeof(buffer)))
 			return NULL;
 
 		strings = talloc_realloc(ctx, strings, char,
@@ -1279,7 +1280,7 @@ static int send_directory_part(const void *ctx, struct connection *conn,
 	struct node *node;
 	char gen[24];
 
-	if (xs_count_strings(in->buffer, in->used) != 2)
+	if (xenstore_count_strings(in->buffer, in->used) != 2)
 		return EINVAL;
 
 	/* First arg is node name. */
@@ -1766,7 +1767,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 	char *name, *permstr;
 	struct node *node;
 
-	perms.num = xs_count_strings(in->buffer, in->used);
+	perms.num = xenstore_count_strings(in->buffer, in->used);
 	if (perms.num < 2)
 		return EINVAL;
 
@@ -1779,7 +1780,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 	perms.p = talloc_array(ctx, struct xs_permissions, perms.num);
 	if (!perms.p)
 		return ENOMEM;
-	if (!xs_strings_to_perms(perms.p, perms.num, permstr))
+	if (!xenstore_strings_to_perms(perms.p, perms.num, permstr))
 		return errno;
 
 	if (domain_alloc_permrefs(&perms) < 0)
@@ -2575,7 +2576,7 @@ static void destroy_fds(void)
 static void init_sockets(void)
 {
 	struct sockaddr_un addr;
-	const char *soc_str = xs_daemon_socket();
+	const char *soc_str = xenstore_daemon_path();
 
 	if (!soc_str)
 		barf_perror("Failed to obtain xs domain socket");
@@ -2890,7 +2891,7 @@ int main(int argc, char *argv[])
 
 	/* Make sure xenstored directory exists. */
 	/* Errors ignored here, will be reported when we open files */
-	mkdir(xs_daemon_rundir(), 0755);
+	mkdir(xenstore_daemon_rundir(), 0755);
 
 	if (dofork) {
 		openlog("xenstored", 0, LOG_DAEMON);
diff --git a/tools/xenstore/xs_lib.c b/tools/xenstore/xs_lib.c
deleted file mode 100644
index 826fa7b5a8..0000000000
--- a/tools/xenstore/xs_lib.c
+++ /dev/null
@@ -1,126 +0,0 @@
-/* 
-    Common routines between Xen store user library and daemon.
-    Copyright (C) 2005 Rusty Russell IBM Corporation
-
-    This library is free software; you can redistribute it and/or
-    modify it under the terms of the GNU Lesser General Public
-    License as published by the Free Software Foundation; either
-    version 2.1 of the License, or (at your option) any later version.
-
-    This library is distributed in the hope that it will be useful,
-    but WITHOUT ANY WARRANTY; without even the implied warranty of
-    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-    Lesser General Public License for more details.
-
-    You should have received a copy of the GNU Lesser General Public
-    License along with this library; If not, see <http://www.gnu.org/licenses/>.
-*/
-
-#include <assert.h>
-#include <unistd.h>
-#include <stdio.h>
-#include <string.h>
-#include <stdlib.h>
-#include <errno.h>
-#include "xs_lib.h"
-
-/* Common routines for the Xen store daemon and client library. */
-
-const char *xs_daemon_rundir(void)
-{
-	char *s = getenv("XENSTORED_RUNDIR");
-	return (s ? s : XEN_RUN_STORED);
-}
-
-static const char *xs_daemon_path(void)
-{
-	static char buf[PATH_MAX];
-	char *s = getenv("XENSTORED_PATH");
-	if (s)
-		return s;
-	if (snprintf(buf, sizeof(buf), "%s/socket",
-		     xs_daemon_rundir()) >= PATH_MAX)
-		return NULL;
-	return buf;
-}
-
-const char *xs_daemon_socket(void)
-{
-	return xs_daemon_path();
-}
-
-/* Convert strings to permissions.  False if a problem. */
-bool xs_strings_to_perms(struct xs_permissions *perms, unsigned int num,
-			 const char *strings)
-{
-	const char *p;
-	char *end;
-	unsigned int i;
-
-	for (p = strings, i = 0; i < num; i++) {
-		/* "r", "w", or "b" for both. */
-		switch (*p) {
-		case 'r':
-			perms[i].perms = XS_PERM_READ;
-			break;
-		case 'w':
-			perms[i].perms = XS_PERM_WRITE;
-			break;
-		case 'b':
-			perms[i].perms = XS_PERM_READ|XS_PERM_WRITE;
-			break;
-		case 'n':
-			perms[i].perms = XS_PERM_NONE;
-			break;
-		default:
-			errno = EINVAL;
-			return false;
-		} 
-		p++;
-		perms[i].id = strtol(p, &end, 0);
-		if (*end || !*p) {
-			errno = EINVAL;
-			return false;
-		}
-		p = end + 1;
-	}
-	return true;
-}
-
-/* Convert permissions to a string (up to len MAX_STRLEN(unsigned int)+1). */
-bool xs_perm_to_string(const struct xs_permissions *perm,
-                       char *buffer, size_t buf_len)
-{
-	switch ((int)perm->perms & ~XS_PERM_IGNORE) {
-	case XS_PERM_WRITE:
-		*buffer = 'w';
-		break;
-	case XS_PERM_READ:
-		*buffer = 'r';
-		break;
-	case XS_PERM_READ|XS_PERM_WRITE:
-		*buffer = 'b';
-		break;
-	case XS_PERM_NONE:
-		*buffer = 'n';
-		break;
-	default:
-		errno = EINVAL;
-		return false;
-	}
-	snprintf(buffer+1, buf_len-1, "%i", (int)perm->id);
-	return true;
-}
-
-/* Given a string and a length, count how many strings (nul terms). */
-unsigned int xs_count_strings(const char *strings, unsigned int len)
-{
-	unsigned int num;
-	const char *p;
-
-	for (p = strings, num = 0; p < strings + len; p++)
-		if (*p == '\0')
-			num++;
-
-	return num;
-}
diff --git a/tools/xenstore/xs_lib.h b/tools/xenstore/xs_lib.h
deleted file mode 100644
index 5b9874d741..0000000000
--- a/tools/xenstore/xs_lib.h
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
-    Common routines between Xen store user library and daemon.
-    Copyright (C) 2005 Rusty Russell IBM Corporation
-
-    This library is free software; you can redistribute it and/or
-    modify it under the terms of the GNU Lesser General Public
-    License as published by the Free Software Foundation; either
-    version 2.1 of the License, or (at your option) any later version.
-
-    This library is distributed in the hope that it will be useful,
-    but WITHOUT ANY WARRANTY; without even the implied warranty of
-    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-    Lesser General Public License for more details.
-
-    You should have received a copy of the GNU Lesser General Public
-    License along with this library; If not, see <http://www.gnu.org/licenses/>.
-*/
-
-#ifndef XS_LIB_H
-#define XS_LIB_H
-
-#include "xenstore_lib.h"
-
-/* Convert permissions to a string (up to len MAX_STRLEN(unsigned int)+1). */
-bool xs_perm_to_string(const struct xs_permissions *perm,
-		       char *buffer, size_t buf_len);
-
-/* Given a string and a length, count how many strings (nul terms). */
-unsigned int xs_count_strings(const char *strings, unsigned int len);
-
-#endif /* XS_LIB_H */
-- 
2.35.3
From xen-devel-bounces@lists.xenproject.org Tue May 30 08:57:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:57:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540902.843060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAd-00081J-Li; Tue, 30 May 2023 08:57:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540902.843060; Tue, 30 May 2023 08:57:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAd-00081A-Im; Tue, 30 May 2023 08:57:35 +0000
Received: by outflank-mailman (input) for mailman id 540902;
 Tue, 30 May 2023 08:57:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v8c-00026J-Vx
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:55:31 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b9cffd27-fec7-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:55:28 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A9A4A21AC5;
 Tue, 30 May 2023 08:55:28 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 7C90D1341B;
 Tue, 30 May 2023 08:55:28 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id nqsZHQC6dWTJGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:55:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9cffd27-fec7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436928; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2wtj5eHWgzZr7krRLiuMYraBKXJGaUFEyS88A4AEvqM=;
	b=Nwsry05T1Rngm5DXTspGyKAjGth+ONvkFoM5gfgoS+npzFYP5E9MdvZM28EDIol2CTQwV9
	mV+usJ5MKeRYA6gDqpTD3dD2GQ1MpOJoxyHyYa23b07dQkLAia78mYFbXuAYLBWUV7erGo
	9u2nAqvZF3lAXjgifBk47gZO+8Bpais=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 12/16] tools/xenstore: remove no longer needed functions from xs_lib.c
Date: Tue, 30 May 2023 10:54:14 +0200
Message-Id: <20230530085418.5417-13-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xs_daemon_tdb() in xs_lib.c is no longer used at all, so it can be
removed. xs_domain_dev() and xs_write_all() are not used by xenstored,
so they can be moved to tools/libs/store/xs.c.

xs_daemon_rootdir() is used by xenstored only and it only calls
xs_daemon_rundir(), so replace its use cases with xs_daemon_rundir()
and remove it from xs_lib.c.

xs_daemon_socket_ro() is needed in libxenstore only, so move it to
tools/libs/store/xs.c.

Move functions used by xenstore-client only to xenstore_client.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- move xs_write_all(), too
V3:
- remove xs_daemon_rootdir(), move xs_daemon_socket_ro()
---
 tools/include/xenstore.h           |   3 +
 tools/include/xenstore_lib.h       |   3 -
 tools/libs/store/xs.c              |  43 ++++++++
 tools/xenstore/xenstore_client.c   | 129 ++++++++++++++++++++++
 tools/xenstore/xenstored_control.c |   4 +-
 tools/xenstore/xenstored_core.c    |   3 +-
 tools/xenstore/xs_lib.c            | 166 -----------------------------
 tools/xenstore/xs_lib.h            |  19 ----
 8 files changed, 178 insertions(+), 192 deletions(-)

diff --git a/tools/include/xenstore.h b/tools/include/xenstore.h
index 2b3f69fb61..a442252849 100644
--- a/tools/include/xenstore.h
+++ b/tools/include/xenstore.h
@@ -113,6 +113,9 @@ void *xs_read(struct xs_handle *h, xs_transaction_t t,
 bool xs_write(struct xs_handle *h, xs_transaction_t t,
 	      const char *path, const void *data, unsigned int len);
 
+/* Simple write function: loops for you. */
+bool xs_write_all(int fd, const void *data, unsigned int len);
+
 /* Create a new directory.
  * Returns false on failure, or success if it already exists.
  */
diff --git a/tools/include/xenstore_lib.h b/tools/include/xenstore_lib.h
index 2266009ec1..43eec87379 100644
--- a/tools/include/xenstore_lib.h
+++ b/tools/include/xenstore_lib.h
@@ -47,9 +47,6 @@ const char *xs_daemon_rundir(void);
 const char *xs_daemon_socket(void);
 const char *xs_daemon_socket_ro(void);
 
-/* Simple write function: loops for you. */
-bool xs_write_all(int fd, const void *data, unsigned int len);
-
 /* Convert strings to permissions.  False if a problem. */
 bool xs_strings_to_perms(struct xs_permissions *perms, unsigned int num,
 			 const char *strings);
diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index 76ffb1be45..bb93246bfb 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -311,6 +311,26 @@ struct xs_handle *xs_domain_open(void)
 	return xs_open(0);
 }
 
+static const char *xs_domain_dev(void)
+{
+	char *s = getenv("XENSTORED_PATH");
+	if (s)
+		return s;
+#if defined(__RUMPUSER_XEN__) || defined(__RUMPRUN__)
+	return "/dev/xen/xenbus";
+#elif defined(__linux__)
+	if (access("/dev/xen/xenbus", F_OK) == 0)
+		return "/dev/xen/xenbus";
+	return "/proc/xen/xenbus";
+#elif defined(__NetBSD__)
+	return "/kern/xen/xenbus";
+#elif defined(__FreeBSD__)
+	return "/dev/xen/xenstore";
+#else
+	return "/dev/xen/xenbus";
+#endif
+}
+
 struct xs_handle *xs_open(unsigned long flags)
 {
 	struct xs_handle *xsh = NULL;
@@ -431,6 +451,24 @@ out_false:
 #ifdef XSTEST
 #define read_all read_all_choice
 #define xs_write_all write_all_choice
+#else
+/* Simple routine for writing to sockets, etc. */
+bool xs_write_all(int fd, const void *data, unsigned int len)
+{
+	while (len) {
+		int done;
+
+		done = write(fd, data, len);
+		if (done < 0 && errno == EINTR)
+			continue;
+		if (done <= 0)
+			return false;
+		data += done;
+		len -= done;
+	}
+
+	return true;
+}
 #endif
 
 static int get_error(const char *errorstring)
@@ -1328,6 +1366,11 @@ error:
 	return ret;
 }
 
+const char *xs_daemon_socket_ro(void)
+{
+	return xs_daemon_socket();
+}
+
 #ifdef USE_PTHREAD
 static void *read_thread(void *arg)
 {
diff --git a/tools/xenstore/xenstore_client.c b/tools/xenstore/xenstore_client.c
index 0628ba275e..8ff8abf12a 100644
--- a/tools/xenstore/xenstore_client.c
+++ b/tools/xenstore/xenstore_client.c
@@ -8,6 +8,7 @@
  *
  */
 
+#include <assert.h>
 #include <err.h>
 #include <errno.h>
 #include <fcntl.h>
@@ -40,12 +41,140 @@ enum mode {
     MODE_watch,
 };
 
+/* Sanitising (quoting) possibly-binary strings. */
+struct expanding_buffer {
+    char *buf;
+    int avail;
+};
+
 static char *output_buf = NULL;
 static int output_pos = 0;
 static struct expanding_buffer ebuf;
 
 static int output_size = 0;
 
+/* Ensure that given expanding buffer has at least min_avail characters. */
+static char *expanding_buffer_ensure(struct expanding_buffer *ebuf,
+                                     int min_avail)
+{
+    int want;
+    char *got;
+
+    if ( ebuf->avail >= min_avail )
+        return ebuf->buf;
+
+    if ( min_avail >= INT_MAX/3 )
+        return 0;
+
+    want = ebuf->avail + min_avail + 10;
+    got = realloc(ebuf->buf, want);
+    if ( !got )
+        return 0;
+
+    ebuf->buf = got;
+    ebuf->avail = want;
+    return ebuf->buf;
+}
+
+/* sanitise_value() may return NULL if malloc fails. */
+static char *sanitise_value(struct expanding_buffer *ebuf,
+                            const char *val, unsigned len)
+{
+    int used, remain, c;
+    unsigned char *ip;
+
+#define ADD(c) (ebuf->buf[used++] = (c))
+#define ADDF(f,c) (used += sprintf(ebuf->buf+used, (f), (c)))
+
+    assert(len < INT_MAX/5);
+
+    ip = (unsigned char *)val;
+    used = 0;
+    remain = len;
+
+    if ( !expanding_buffer_ensure(ebuf, remain + 1) )
+        return NULL;
+
+    while ( remain-- > 0 )
+    {
+        c= *ip++;
+
+        if ( c >= ' ' && c <= '~' && c != '\\' )
+        {
+            ADD(c);
+            continue;
+        }
+
+        if ( !expanding_buffer_ensure(ebuf, used + remain + 5) )
+            /* for "<used>\\nnn<remain>\0" */
+            return 0;
+
+        ADD('\\');
+        switch (c)
+        {
+        case '\t':  ADD('t');   break;
+        case '\n':  ADD('n');   break;
+        case '\r':  ADD('r');   break;
+        case '\\':  ADD('\\');  break;
+        default:
+            if ( c < 010 ) ADDF("%03o", c);
+            else           ADDF("x%02x", c);
+        }
+    }
+
+    ADD(0);
+    assert(used <= ebuf->avail);
+    return ebuf->buf;
+
+#undef ADD
+#undef ADDF
+}
+
+/* *out_len_r on entry is ignored; out must be at least strlen(in)+1 bytes. */
+static void unsanitise_value(char *out, unsigned *out_len_r, const char *in)
+{
+    const char *ip;
+    char *op;
+    unsigned c;
+    int n;
+
+    for ( ip = in, op = out; (c = *ip++); *op++ = c )
+    {
+        if ( c == '\\' )
+        {
+            c = *ip++;
+
+#define GETF(f) do                   \
+{                                    \
+     n = 0;                          \
+     sscanf(ip, f "%n", &c, &n);     \
+     ip += n;                        \
+} while ( 0 )
+
+            switch ( c )
+            {
+            case 't':           c= '\t';           break;
+            case 'n':           c= '\n';           break;
+            case 'r':           c= '\r';           break;
+            case '\\':          c= '\\';           break;
+            case 'x':           GETF("%2x");       break;
+            case '0': case '4':
+            case '1': case '5':
+            case '2': case '6':
+            case '3': case '7': --ip; GETF("%3o"); break;
+            case 0:             --ip;              break;
+            default:;
+            }
+#undef GETF
+        }
+    }
+
+    *op = 0;
+
+    if ( out_len_r )
+        *out_len_r = op - out;
+}
+
 /* make sure there is at least 'len' more space in output_buf */
 static void expand_buffer(size_t len)
 {
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index ed80924ee4..6ef800ff64 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -560,7 +560,7 @@ static FILE *lu_dump_open(const void *ctx)
 	char *filename;
 	int fd;
 
-	filename = talloc_asprintf(ctx, "%s/state_dump", xs_daemon_rootdir());
+	filename = talloc_asprintf(ctx, "%s/state_dump", xs_daemon_rundir());
 	if (!filename)
 		return NULL;
 
@@ -583,7 +583,7 @@ static void lu_get_dump_state(struct lu_dump_state *state)
 	state->size = 0;
 
 	state->filename = talloc_asprintf(NULL, "%s/state_dump",
-					  xs_daemon_rootdir());
+					  xs_daemon_rundir());
 	if (!state->filename)
 		barf("Allocation failure");
 
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 07fbac55ac..a78cbf1116 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2888,10 +2888,9 @@ int main(int argc, char *argv[])
 
 	reopen_log();
 
-	/* make sure xenstored directories exist */
+	/* Make sure xenstored directory exists. */
 	/* Errors ignored here, will be reported when we open files */
 	mkdir(xs_daemon_rundir(), 0755);
-	mkdir(xs_daemon_rootdir(), 0755);
 
 	if (dofork) {
 		openlog("xenstored", 0, LOG_DAEMON);
diff --git a/tools/xenstore/xs_lib.c b/tools/xenstore/xs_lib.c
index b9941c567c..826fa7b5a8 100644
--- a/tools/xenstore/xs_lib.c
+++ b/tools/xenstore/xs_lib.c
@@ -32,11 +32,6 @@ const char *xs_daemon_rundir(void)
 	return (s ? s : XEN_RUN_STORED);
 }
 
-const char *xs_daemon_rootdir(void)
-{
-	return xs_daemon_rundir();
-}
-
 static const char *xs_daemon_path(void)
 {
 	static char buf[PATH_MAX];
@@ -49,61 +44,11 @@ static const char *xs_daemon_path(void)
 	return buf;
 }
 
-const char *xs_daemon_tdb(void)
-{
-	static char buf[PATH_MAX];
-	snprintf(buf, sizeof(buf), "%s/tdb", xs_daemon_rootdir());
-	return buf;
-}
-
 const char *xs_daemon_socket(void)
 {
 	return xs_daemon_path();
 }
 
-const char *xs_daemon_socket_ro(void)
-{
-	return xs_daemon_path();
-}
-
-const char *xs_domain_dev(void)
-{
-	char *s = getenv("XENSTORED_PATH");
-	if (s)
-		return s;
-#if defined(__RUMPUSER_XEN__) || defined(__RUMPRUN__)
-	return "/dev/xen/xenbus";
-#elif defined(__linux__)
-	if (access("/dev/xen/xenbus", F_OK) == 0)
-		return "/dev/xen/xenbus";
-	return "/proc/xen/xenbus";
-#elif defined(__NetBSD__)
-	return "/kern/xen/xenbus";
-#elif defined(__FreeBSD__)
-	return "/dev/xen/xenstore";
-#else
-	return "/dev/xen/xenbus";
-#endif
-}
-
-/* Simple routines for writing to sockets, etc. */
-bool xs_write_all(int fd, const void *data, unsigned int len)
-{
-	while (len) {
-		int done;
-
-		done = write(fd, data, len);
-		if (done < 0 && errno == EINTR)
-			continue;
-		if (done <= 0)
-			return false;
-		data += done;
-		len -= done;
-	}
-
-	return true;
-}
-
 /* Convert strings to permissions.  False if a problem. */
 bool xs_strings_to_perms(struct xs_permissions *perms, unsigned int num,
 			 const char *strings)
@@ -179,114 +124,3 @@ unsigned int xs_count_strings(const char *strings, unsigned int len)
 
 	return num;
 }
-
-char *expanding_buffer_ensure(struct expanding_buffer *ebuf, int min_avail)
-{
-	int want;
-	char *got;
-
-	if (ebuf->avail >= min_avail)
-		return ebuf->buf;
-
-	if (min_avail >= INT_MAX/3)
-		return 0;
-
-	want = ebuf->avail + min_avail + 10;
-	got = realloc(ebuf->buf, want);
-	if (!got)
-		return 0;
-
-	ebuf->buf = got;
-	ebuf->avail = want;
-	return ebuf->buf;
-}
-
-char *sanitise_value(struct expanding_buffer *ebuf,
-		     const char *val, unsigned len)
-{
-	int used, remain, c;
-	unsigned char *ip;
-
-#define ADD(c) (ebuf->buf[used++] = (c))
-#define ADDF(f,c) (used += sprintf(ebuf->buf+used, (f), (c)))
-
-	assert(len < INT_MAX/5);
-
-	ip = (unsigned char *)val;
-	used = 0;
-	remain = len;
-
-	if (!expanding_buffer_ensure(ebuf, remain + 1))
-		return NULL;
-
-	while (remain-- > 0) {
-		c= *ip++;
-
-		if (c >= ' ' && c <= '~' && c != '\\') {
-			ADD(c);
-			continue;
-		}
-
-		if (!expanding_buffer_ensure(ebuf, used + remain + 5))
-			/* for "<used>\\nnn<remain>\0" */
-			return 0;
-
-		ADD('\\');
-		switch (c) {
-		case '\t':  ADD('t');   break;
-		case '\n':  ADD('n');   break;
-		case '\r':  ADD('r');   break;
-		case '\\':  ADD('\\');  break;
-		default:
-			if (c < 010) ADDF("%03o", c);
-			else         ADDF("x%02x", c);
-		}
-	}
-
-	ADD(0);
-	assert(used <= ebuf->avail);
-	return ebuf->buf;
-
-#undef ADD
-#undef ADDF
-}
-
-void unsanitise_value(char *out, unsigned *out_len_r, const char *in)
-{
-	const char *ip;
-	char *op;
-	unsigned c;
-	int n;
-
-	for (ip = in, op = out; (c = *ip++); *op++ = c) {
-		if (c == '\\') {
-			c = *ip++;
-
-#define GETF(f) do {					\
-			n = 0;				\
-			sscanf(ip, f "%n", &c, &n);	\
-			ip += n;			\
-		} while (0)
-
-			switch (c) {
-			case 't':		c= '\t';		break;
-			case 'n':		c= '\n';		break;
-			case 'r':		c= '\r';		break;
-			case '\\':		c= '\\';		break;
-			case 'x':		GETF("%2x");		break;
-			case '0': case '4':
-			case '1': case '5':
-			case '2': case '6':
-			case '3': case '7':	--ip; GETF("%3o");	break;
-			case 0:			--ip;			break;
-			default:;
-			}
-#undef GETF
-		}
-	}
-
-	*op = 0;
-
-	if (out_len_r)
-		*out_len_r = op - out;
-}
diff --git a/tools/xenstore/xs_lib.h b/tools/xenstore/xs_lib.h
index efa05997d6..5b9874d741 100644
--- a/tools/xenstore/xs_lib.h
+++ b/tools/xenstore/xs_lib.h
@@ -21,10 +21,6 @@
 
 #include "xenstore_lib.h"
 
-const char *xs_daemon_rootdir(void);
-const char *xs_domain_dev(void);
-const char *xs_daemon_tdb(void);
-
 /* Convert permissions to a string (up to len MAX_STRLEN(unsigned int)+1). */
 bool xs_perm_to_string(const struct xs_permissions *perm,
 		       char *buffer, size_t buf_len);
@@ -32,19 +28,4 @@ bool xs_perm_to_string(const struct xs_permissions *perm,
 /* Given a string and a length, count how many strings (nul terms). */
 unsigned int xs_count_strings(const char *strings, unsigned int len);
 
-/* Sanitising (quoting) possibly-binary strings. */
-struct expanding_buffer {
-	char *buf;
-	int avail;
-};
-
-/* Ensure that given expanding buffer has at least min_avail characters. */
-char *expanding_buffer_ensure(struct expanding_buffer *, int min_avail);
-
-/* sanitise_value() may return NULL if malloc fails. */
-char *sanitise_value(struct expanding_buffer *, const char *val, unsigned len);
-
-/* *out_len_r on entry is ignored; out must be at least strlen(in)+1 bytes. */
-void unsanitise_value(char *out, unsigned *out_len_r, const char *in);
-
 #endif /* XS_LIB_H */
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:57:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:57:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540904.843065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAd-00083Q-Vt; Tue, 30 May 2023 08:57:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540904.843065; Tue, 30 May 2023 08:57:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAd-00082x-QL; Tue, 30 May 2023 08:57:35 +0000
Received: by outflank-mailman (input) for mailman id 540904;
 Tue, 30 May 2023 08:57:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v8o-00026J-V6
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:55:42 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c0906cae-fec7-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:55:40 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 05B7B21AC5;
 Tue, 30 May 2023 08:55:40 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id CA1B81341B;
 Tue, 30 May 2023 08:55:39 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id tywAMAu6dWTbGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:55:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0906cae-fec7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436940; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6QVtp2PIRxf5AN+FHVfWcGFIWFulivYo2Ib1F7uyB48=;
	b=tuy0YI0hm/MsRZ9BBc6soUarJuhvVwBttFmcSUbXjBGy8iwDtoveyaLk1cvJ8ncjBHE452
	O0UelDeGG1i23qKGeDeq0mEdXRIatsKFBVY8G28w7obbpYdhRPDt9WCwX+fcJM2YTBL9fV
	Eoylvpq38d0njbYpIYyzTEQw6u9x8rc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 14/16] tools/xenstore: split out environment specific live update code
Date: Tue, 30 May 2023 10:54:16 +0200
Message-Id: <20230530085418.5417-15-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using #ifdef in xenstored_control.c, split the environment
specific (daemon or Mini-OS) live update code out into dedicated source
files.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/Makefile.common       |   8 +-
 tools/xenstore/xenstored_control.c   | 253 +--------------------------
 tools/xenstore/xenstored_lu.h        |  56 ++++++
 tools/xenstore/xenstored_lu_daemon.c | 133 ++++++++++++++
 tools/xenstore/xenstored_lu_minios.c | 121 +++++++++++++
 5 files changed, 317 insertions(+), 254 deletions(-)
 create mode 100644 tools/xenstore/xenstored_lu.h
 create mode 100644 tools/xenstore/xenstored_lu_daemon.c
 create mode 100644 tools/xenstore/xenstored_lu_minios.c

diff --git a/tools/xenstore/Makefile.common b/tools/xenstore/Makefile.common
index f71c9bfd55..c42796fe34 100644
--- a/tools/xenstore/Makefile.common
+++ b/tools/xenstore/Makefile.common
@@ -4,10 +4,10 @@ XENSTORED_OBJS-y := xenstored_core.o xenstored_watch.o xenstored_domain.o
 XENSTORED_OBJS-y += xenstored_transaction.o xenstored_control.o
 XENSTORED_OBJS-y += talloc.o utils.o tdb.o hashtable.o
 
-XENSTORED_OBJS-$(CONFIG_Linux) += xenstored_posix.o
-XENSTORED_OBJS-$(CONFIG_NetBSD) += xenstored_posix.o
-XENSTORED_OBJS-$(CONFIG_FreeBSD) += xenstored_posix.o
-XENSTORED_OBJS-$(CONFIG_MiniOS) += xenstored_minios.o
+XENSTORED_OBJS-$(CONFIG_Linux) += xenstored_posix.o xenstored_lu_daemon.o
+XENSTORED_OBJS-$(CONFIG_NetBSD) += xenstored_posix.o xenstored_lu_daemon.o
+XENSTORED_OBJS-$(CONFIG_FreeBSD) += xenstored_posix.o xenstored_lu_daemon.o
+XENSTORED_OBJS-$(CONFIG_MiniOS) += xenstored_minios.o xenstored_lu_minios.o
 
 # Include configure output (config.h)
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 0d131e2ebc..b61b41f16c 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -38,63 +38,13 @@
 #include "xenstored_core.h"
 #include "xenstored_control.h"
 #include "xenstored_domain.h"
+#include "xenstored_lu.h"
 #include "xenstored_watch.h"
 
-/* Mini-OS only knows about MAP_ANON. */
-#ifndef MAP_ANONYMOUS
-#define MAP_ANONYMOUS MAP_ANON
-#endif
-
 #ifndef NO_LIVE_UPDATE
-struct live_update {
-	/* For verification the correct connection is acting. */
-	struct connection *conn;
-
-	/* Pointer to the command used to request LU */
-	struct buffered_data *in;
+struct live_update *lu_status;
 
-#ifdef __MINIOS__
-	void *kernel;
-	unsigned int kernel_size;
-	unsigned int kernel_off;
-
-	void *dump_state;
-	unsigned long dump_size;
-#else
-	char *filename;
-#endif
-
-	char *cmdline;
-
-	/* Start parameters. */
-	bool force;
-	unsigned int timeout;
-	time_t started_at;
-};
-
-static struct live_update *lu_status;
-
-struct lu_dump_state {
-	void *buf;
-	unsigned int size;
-#ifndef __MINIOS__
-	int fd;
-	char *filename;
-#endif
-};
-
-static int lu_destroy(void *data)
-{
-#ifdef __MINIOS__
-	if (lu_status->dump_state)
-		munmap(lu_status->dump_state, lu_status->dump_size);
-#endif
-	lu_status = NULL;
-
-	return 0;
-}
-
-static const char *lu_begin(struct connection *conn)
+const char *lu_begin(struct connection *conn)
 {
 	if (lu_status)
 		return "live-update session already active.";
@@ -431,203 +381,6 @@ static const char *lu_cmdline(const void *ctx, struct connection *conn,
 	return NULL;
 }
 
-#ifdef __MINIOS__
-static const char *lu_binary_alloc(const void *ctx, struct connection *conn,
-				   unsigned long size)
-{
-	const char *ret;
-
-	syslog(LOG_INFO, "live-update: binary size %lu\n", size);
-
-	ret = lu_begin(conn);
-	if (ret)
-		return ret;
-
-	lu_status->kernel = talloc_size(lu_status, size);
-	if (!lu_status->kernel)
-		return "Allocation failure.";
-
-	lu_status->kernel_size = size;
-	lu_status->kernel_off = 0;
-
-	errno = 0;
-	return NULL;
-}
-
-static const char *lu_binary_save(const void *ctx, struct connection *conn,
-				  unsigned int size, const char *data)
-{
-	if (!lu_status || lu_status->conn != conn)
-		return "Not in live-update session.";
-
-	if (lu_status->kernel_off + size > lu_status->kernel_size)
-		return "Too much kernel data.";
-
-	memcpy(lu_status->kernel + lu_status->kernel_off, data, size);
-	lu_status->kernel_off += size;
-
-	errno = 0;
-	return NULL;
-}
-
-static const char *lu_arch(const void *ctx, struct connection *conn,
-			   char **vec, int num)
-{
-	if (num == 2 && !strcmp(vec[0], "-b"))
-		return lu_binary_alloc(ctx, conn, atol(vec[1]));
-	if (num > 2 && !strcmp(vec[0], "-d"))
-		return lu_binary_save(ctx, conn, atoi(vec[1]), vec[2]);
-
-	errno = EINVAL;
-	return NULL;
-}
-
-static FILE *lu_dump_open(const void *ctx)
-{
-	lu_status->dump_size = ROUNDUP(talloc_total_size(NULL) * 2,
-				       XC_PAGE_SHIFT);
-	lu_status->dump_state = mmap(NULL, lu_status->dump_size,
-				     PROT_READ | PROT_WRITE,
-				     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
-	if (lu_status->dump_state == MAP_FAILED)
-		return NULL;
-
-	return fmemopen(lu_status->dump_state, lu_status->dump_size, "w");
-}
-
-static void lu_dump_close(FILE *fp)
-{
-	size_t size;
-
-	size = ftell(fp);
-	size = ROUNDUP(size, XC_PAGE_SHIFT);
-	munmap(lu_status->dump_state + size, lu_status->dump_size - size);
-	lu_status->dump_size = size;
-
-	fclose(fp);
-}
-
-static void lu_get_dump_state(struct lu_dump_state *state)
-{
-}
-
-static void lu_close_dump_state(struct lu_dump_state *state)
-{
-}
-
-static char *lu_exec(const void *ctx, int argc, char **argv)
-{
-	return "NYI";
-}
-#else
-static const char *lu_binary(const void *ctx, struct connection *conn,
-			     const char *filename)
-{
-	const char *ret;
-	struct stat statbuf;
-
-	syslog(LOG_INFO, "live-update: binary %s\n", filename);
-
-	if (stat(filename, &statbuf))
-		return "File not accessible.";
-	if (!(statbuf.st_mode & (S_IXOTH | S_IXGRP | S_IXUSR)))
-		return "File not executable.";
-
-	ret = lu_begin(conn);
-	if (ret)
-		return ret;
-
-	lu_status->filename = talloc_strdup(lu_status, filename);
-	if (!lu_status->filename)
-		return "Allocation failure.";
-
-	errno = 0;
-	return NULL;
-}
-
-static const char *lu_arch(const void *ctx, struct connection *conn,
-			   char **vec, int num)
-{
-	if (num == 2 && !strcmp(vec[0], "-f"))
-		return lu_binary(ctx, conn, vec[1]);
-
-	errno = EINVAL;
-	return NULL;
-}
-
-static FILE *lu_dump_open(const void *ctx)
-{
-	char *filename;
-	int fd;
-
-	filename = talloc_asprintf(ctx, "%s/state_dump",
-				   xenstore_daemon_rundir());
-	if (!filename)
-		return NULL;
-
-	fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
-	if (fd < 0)
-		return NULL;
-
-	return fdopen(fd, "w");
-}
-
-static void lu_dump_close(FILE *fp)
-{
-	fclose(fp);
-}
-
-static void lu_get_dump_state(struct lu_dump_state *state)
-{
-	struct stat statbuf;
-
-	state->size = 0;
-
-	state->filename = talloc_asprintf(NULL, "%s/state_dump",
-					  xenstore_daemon_rundir());
-	if (!state->filename)
-		barf("Allocation failure");
-
-	state->fd = open(state->filename, O_RDONLY);
-	if (state->fd < 0)
-		return;
-	if (fstat(state->fd, &statbuf) != 0)
-		goto out_close;
-	state->size = statbuf.st_size;
-
-	state->buf = mmap(NULL, state->size, PROT_READ, MAP_PRIVATE,
-			  state->fd, 0);
-	if (state->buf == MAP_FAILED) {
-		state->size = 0;
-		goto out_close;
-	}
-
-	return;
-
- out_close:
-	close(state->fd);
-}
-
-static void lu_close_dump_state(struct lu_dump_state *state)
-{
-	assert(state->filename != NULL);
-
-	munmap(state->buf, state->size);
-	close(state->fd);
-
-	unlink(state->filename);
-	talloc_free(state->filename);
-}
-
-static char *lu_exec(const void *ctx, int argc, char **argv)
-{
-	argv[0] = lu_status->filename;
-	execvp(argv[0], argv);
-
-	return "Error activating new binary.";
-}
-#endif
-
 static bool lu_check_lu_allowed(void)
 {
 	struct connection *conn;
diff --git a/tools/xenstore/xenstored_lu.h b/tools/xenstore/xenstored_lu.h
new file mode 100644
index 0000000000..d2f8e4e57c
--- /dev/null
+++ b/tools/xenstore/xenstored_lu.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: MIT */
+
+/*
+ * Live Update interfaces for Xen Store Daemon.
+ * Copyright (C) 2022 Juergen Gross, SUSE LLC
+ */
+
+#ifndef NO_LIVE_UPDATE
+struct live_update {
+	/* For verification the correct connection is acting. */
+	struct connection *conn;
+
+	/* Pointer to the command used to request LU */
+	struct buffered_data *in;
+
+#ifdef __MINIOS__
+	void *kernel;
+	unsigned int kernel_size;
+	unsigned int kernel_off;
+
+	void *dump_state;
+	unsigned long dump_size;
+#else
+	char *filename;
+#endif
+
+	char *cmdline;
+
+	/* Start parameters. */
+	bool force;
+	unsigned int timeout;
+	time_t started_at;
+};
+
+struct lu_dump_state {
+	void *buf;
+	unsigned int size;
+#ifndef __MINIOS__
+	int fd;
+	char *filename;
+#endif
+};
+
+extern struct live_update *lu_status;
+
+/* Live update private interfaces. */
+void lu_get_dump_state(struct lu_dump_state *state);
+void lu_close_dump_state(struct lu_dump_state *state);
+FILE *lu_dump_open(const void *ctx);
+void lu_dump_close(FILE *fp);
+char *lu_exec(const void *ctx, int argc, char **argv);
+const char *lu_arch(const void *ctx, struct connection *conn, char **vec,
+		    int num);
+const char *lu_begin(struct connection *conn);
+int lu_destroy(void *data);
+#endif
diff --git a/tools/xenstore/xenstored_lu_daemon.c b/tools/xenstore/xenstored_lu_daemon.c
new file mode 100644
index 0000000000..952d9f0b25
--- /dev/null
+++ b/tools/xenstore/xenstored_lu_daemon.c
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+/*
+ * Live Update for Xen Store Daemon.
+ * Copyright (C) 2022 Juergen Gross, SUSE LLC
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <syslog.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <xen-tools/xenstore-common.h>
+
+#include "talloc.h"
+#include "xenstored_core.h"
+#include "xenstored_lu.h"
+
+#ifndef NO_LIVE_UPDATE
+void lu_get_dump_state(struct lu_dump_state *state)
+{
+	struct stat statbuf;
+
+	state->size = 0;
+
+	state->filename = talloc_asprintf(NULL, "%s/state_dump",
+					  xenstore_daemon_rundir());
+	if (!state->filename)
+		barf("Allocation failure");
+
+	state->fd = open(state->filename, O_RDONLY);
+	if (state->fd < 0)
+		return;
+	if (fstat(state->fd, &statbuf) != 0)
+		goto out_close;
+	state->size = statbuf.st_size;
+
+	state->buf = mmap(NULL, state->size, PROT_READ, MAP_PRIVATE,
+			  state->fd, 0);
+	if (state->buf == MAP_FAILED) {
+		state->size = 0;
+		goto out_close;
+	}
+
+	return;
+
+ out_close:
+	close(state->fd);
+}
+
+void lu_close_dump_state(struct lu_dump_state *state)
+{
+	assert(state->filename != NULL);
+
+	munmap(state->buf, state->size);
+	close(state->fd);
+
+	unlink(state->filename);
+	talloc_free(state->filename);
+}
+
+FILE *lu_dump_open(const void *ctx)
+{
+	char *filename;
+	int fd;
+
+	filename = talloc_asprintf(ctx, "%s/state_dump",
+				   xenstore_daemon_rundir());
+	if (!filename)
+		return NULL;
+
+	fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
+	if (fd < 0)
+		return NULL;
+
+	return fdopen(fd, "w");
+}
+
+void lu_dump_close(FILE *fp)
+{
+	fclose(fp);
+}
+
+char *lu_exec(const void *ctx, int argc, char **argv)
+{
+	argv[0] = lu_status->filename;
+	execvp(argv[0], argv);
+
+	return "Error activating new binary.";
+}
+
+int lu_destroy(void *data)
+{
+	lu_status = NULL;
+
+	return 0;
+}
+
+static const char *lu_binary(const void *ctx, struct connection *conn,
+			     const char *filename)
+{
+	const char *ret;
+	struct stat statbuf;
+
+	syslog(LOG_INFO, "live-update: binary %s\n", filename);
+
+	if (stat(filename, &statbuf))
+		return "File not accessible.";
+	if (!(statbuf.st_mode & (S_IXOTH | S_IXGRP | S_IXUSR)))
+		return "File not executable.";
+
+	ret = lu_begin(conn);
+	if (ret)
+		return ret;
+
+	lu_status->filename = talloc_strdup(lu_status, filename);
+	if (!lu_status->filename)
+		return "Allocation failure.";
+
+	errno = 0;
+	return NULL;
+}
+
+const char *lu_arch(const void *ctx, struct connection *conn, char **vec,
+		    int num)
+{
+	if (num == 2 && !strcmp(vec[0], "-f"))
+		return lu_binary(ctx, conn, vec[1]);
+
+	errno = EINVAL;
+	return NULL;
+}
+#endif
diff --git a/tools/xenstore/xenstored_lu_minios.c b/tools/xenstore/xenstored_lu_minios.c
new file mode 100644
index 0000000000..0bec8a0037
--- /dev/null
+++ b/tools/xenstore/xenstored_lu_minios.c
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+/*
+ * Live Update for Xen Store Daemon.
+ * Copyright (C) 2022 Juergen Gross, SUSE LLC
+ */
+
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <syslog.h>
+#include <sys/mman.h>
+#include <xenctrl.h>
+#include <xen-tools/common-macros.h>
+
+#include "talloc.h"
+#include "xenstored_lu.h"
+
+/* Mini-OS only knows about MAP_ANON. */
+#ifndef MAP_ANONYMOUS
+#define MAP_ANONYMOUS MAP_ANON
+#endif
+
+#ifndef NO_LIVE_UPDATE
+void lu_get_dump_state(struct lu_dump_state *state)
+{
+}
+
+void lu_close_dump_state(struct lu_dump_state *state)
+{
+}
+
+FILE *lu_dump_open(const void *ctx)
+{
+	lu_status->dump_size = ROUNDUP(talloc_total_size(NULL) * 2,
+				       XC_PAGE_SHIFT);
+	lu_status->dump_state = mmap(NULL, lu_status->dump_size,
+				     PROT_READ | PROT_WRITE,
+				     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	if (lu_status->dump_state == MAP_FAILED)
+		return NULL;
+
+	return fmemopen(lu_status->dump_state, lu_status->dump_size, "w");
+}
+
+void lu_dump_close(FILE *fp)
+{
+	size_t size;
+
+	size = ftell(fp);
+	size = ROUNDUP(size, XC_PAGE_SHIFT);
+	munmap(lu_status->dump_state + size, lu_status->dump_size - size);
+	lu_status->dump_size = size;
+
+	fclose(fp);
+}
+
+char *lu_exec(const void *ctx, int argc, char **argv)
+{
+	return "NYI";
+}
+
+int lu_destroy(void *data)
+{
+	if (lu_status->dump_state)
+		munmap(lu_status->dump_state, lu_status->dump_size);
+	lu_status = NULL;
+
+	return 0;
+}
+
+static const char *lu_binary_alloc(const void *ctx, struct connection *conn,
+				   unsigned long size)
+{
+	const char *ret;
+
+	syslog(LOG_INFO, "live-update: binary size %lu\n", size);
+
+	ret = lu_begin(conn);
+	if (ret)
+		return ret;
+
+	lu_status->kernel = talloc_size(lu_status, size);
+	if (!lu_status->kernel)
+		return "Allocation failure.";
+
+	lu_status->kernel_size = size;
+	lu_status->kernel_off = 0;
+
+	errno = 0;
+	return NULL;
+}
+
+static const char *lu_binary_save(const void *ctx, struct connection *conn,
+				  unsigned int size, const char *data)
+{
+	if (!lu_status || lu_status->conn != conn)
+		return "Not in live-update session.";
+
+	if (lu_status->kernel_off + size > lu_status->kernel_size)
+		return "Too much kernel data.";
+
+	memcpy(lu_status->kernel + lu_status->kernel_off, data, size);
+	lu_status->kernel_off += size;
+
+	errno = 0;
+	return NULL;
+}
+
+const char *lu_arch(const void *ctx, struct connection *conn, char **vec,
+		    int num)
+{
+	if (num == 2 && !strcmp(vec[0], "-b"))
+		return lu_binary_alloc(ctx, conn, atol(vec[1]));
+	if (num > 2 && !strcmp(vec[0], "-d"))
+		return lu_binary_save(ctx, conn, atoi(vec[1]), vec[2]);
+
+	errno = EINVAL;
+	return NULL;
+}
+#endif
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:57:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:57:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540909.843070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAe-00087N-AV; Tue, 30 May 2023 08:57:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540909.843070; Tue, 30 May 2023 08:57:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAe-00085t-3i; Tue, 30 May 2023 08:57:36 +0000
Received: by outflank-mailman (input) for mailman id 540909;
 Tue, 30 May 2023 08:57:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v8K-0001Xf-NR
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:55:12 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id afcd7abb-fec7-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 10:55:12 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E13551F889;
 Tue, 30 May 2023 08:55:11 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 9C4641341B;
 Tue, 30 May 2023 08:55:11 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id nYboJO+5dWSbGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:55:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afcd7abb-fec7-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436911; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TaRskO1LG2nxAfCeJXWFN3XQHUGJk1h0ZfPSHDmx7q4=;
	b=qZdkvmmgsoHJaX8DKw66YyAIkNJEtvN0fNMRnBIIhceKr7ODWV5TY2PN866cWKpjGOc9Om
	KL4kEipeiB3UgLkPUjc4bRb/PjtxXwgwvPMQ6YD1AO0qldnGCppdAKrNtEn8+SwbIhCJ3d
	Rl9eXVwnGcaQmxKvYIqKrCvAgicbMrk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 09/16] tools/xenstore: remove support of file backed data base
Date: Tue, 30 May 2023 10:54:11 +0200
Message-Id: <20230530085418.5417-10-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to prepare for replacing TDB with directly accessible in-memory
nodes, remove support for a file-backed database.

This also allows removing xs_tdb_dump.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                      |  1 -
 tools/xenstore/Makefile         |  5 +-
 tools/xenstore/xenstored_core.c | 18 ++-----
 tools/xenstore/xs_tdb_dump.c    | 86 ---------------------------------
 4 files changed, 4 insertions(+), 106 deletions(-)
 delete mode 100644 tools/xenstore/xs_tdb_dump.c

diff --git a/.gitignore b/.gitignore
index beac034784..26f532e77c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -248,7 +248,6 @@ tools/xenstore/xenstore-rm
 tools/xenstore/xenstore-watch
 tools/xenstore/xenstore-write
 tools/xenstore/xenstored
-tools/xenstore/xs_tdb_dump
 tools/xentop/xentop
 tools/xentrace/xentrace_setsize
 tools/xentrace/tbctl
diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
index ce7a68178f..56723139a1 100644
--- a/tools/xenstore/Makefile
+++ b/tools/xenstore/Makefile
@@ -29,7 +29,7 @@ CLIENTS += xenstore-write xenstore-ls xenstore-watch
 
 TARGETS := xenstore $(CLIENTS) xenstore-control
 ifeq ($(XENSTORE_XENSTORED),y)
-TARGETS += xs_tdb_dump xenstored
+TARGETS += xenstored
 endif
 
 .PHONY: all
@@ -50,9 +50,6 @@ xenstore: xenstore_client.o xs_lib.o
 xenstore-control: xenstore_control.o
 	$(CC) $(LDFLAGS) $^ $(LDLIBS) -o $@ $(APPEND_LDFLAGS)
 
-xs_tdb_dump: xs_tdb_dump.o utils.o tdb.o talloc.o
-	$(CC) $(LDFLAGS) $^ -o $@ $(APPEND_LDFLAGS)
-
 .PHONY: clean
 clean::
 	$(RM) $(TARGETS) $(DEPS_RM)
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index e8f46495de..07fbac55ac 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2301,8 +2301,6 @@ static void accept_connection(int sock)
 }
 #endif
 
-static int tdb_flags = TDB_INTERNAL | TDB_NOLOCK;
-
 /* We create initial nodes manually. */
 static void manual_node(const char *name, const char *child)
 {
@@ -2354,14 +2352,11 @@ void setup_structure(bool live_update)
 {
 	char *tdbname;
 
-	tdbname = talloc_strdup(talloc_autofree_context(), xs_daemon_tdb());
+	tdbname = talloc_strdup(talloc_autofree_context(), "/dev/mem");
 	if (!tdbname)
 		barf_perror("Could not create tdbname");
 
-	if (!(tdb_flags & TDB_INTERNAL))
-		unlink(tdbname);
-
-	tdb_ctx = tdb_open_ex(tdbname, 7919, tdb_flags,
+	tdb_ctx = tdb_open_ex(tdbname, 7919, TDB_INTERNAL | TDB_NOLOCK,
 			      O_RDWR | O_CREAT | O_EXCL | O_CLOEXEC,
 			      0640, &tdb_logger, NULL);
 	if (!tdb_ctx)
@@ -2659,8 +2654,6 @@ static void usage(void)
 "                          watch-event: time a watch-event is kept pending\n"
 "  -R, --no-recovery       to request that no recovery should be attempted when\n"
 "                          the store is corrupted (debug only),\n"
-"  -I, --internal-db [on|off] store database in memory, not on disk, default is\n"
-"                          memory, with \"--internal-db off\" it is on disk\n"
 "  -K, --keep-orphans      don't delete nodes owned by a domain when the\n"
 "                          domain is deleted (this is a security risk!)\n"
 "  -V, --verbose           to request verbose execution.\n");
@@ -2687,7 +2680,6 @@ static struct option options[] = {
 	{ "quota-soft", 1, NULL, 'q' },
 	{ "timeout", 1, NULL, 'w' },
 	{ "no-recovery", 0, NULL, 'R' },
-	{ "internal-db", 2, NULL, 'I' },
 	{ "keep-orphans", 0, NULL, 'K' },
 	{ "verbose", 0, NULL, 'V' },
 	{ "watch-nb", 1, NULL, 'W' },
@@ -2811,7 +2803,7 @@ int main(int argc, char *argv[])
 	orig_argv = argv;
 
 	while ((opt = getopt_long(argc, argv,
-				  "DE:F:HI::KNPS:t:A:M:Q:q:T:RVW:w:U",
+				  "DE:F:H::KNPS:t:A:M:Q:q:T:RVW:w:U",
 				  options, NULL)) != -1) {
 		switch (opt) {
 		case 'D':
@@ -2848,10 +2840,6 @@ int main(int argc, char *argv[])
 			if (set_trace_switch(optarg))
 				barf("Illegal trace switch \"%s\"\n", optarg);
 			break;
-		case 'I':
-			if (optarg && !strcmp(optarg, "off"))
-				tdb_flags = 0;
-			break;
 		case 'K':
 			keep_orphans = true;
 			break;
diff --git a/tools/xenstore/xs_tdb_dump.c b/tools/xenstore/xs_tdb_dump.c
deleted file mode 100644
index 5d2db392b4..0000000000
--- a/tools/xenstore/xs_tdb_dump.c
+++ /dev/null
@@ -1,86 +0,0 @@
-/* Simple program to dump out all records of TDB */
-#include <stdint.h>
-#include <stdlib.h>
-#include <fcntl.h>
-#include <stdio.h>
-#include <stdarg.h>
-#include <string.h>
-#include <sys/types.h>
-#include "xenstore_lib.h"
-#include "tdb.h"
-#include "talloc.h"
-#include "utils.h"
-
-static uint32_t total_size(struct xs_tdb_record_hdr *hdr)
-{
-	return sizeof(*hdr) + hdr->num_perms * sizeof(struct xs_permissions) 
-		+ hdr->datalen + hdr->childlen;
-}
-
-static char perm_to_char(unsigned int perm)
-{
-	return perm == XS_PERM_READ ? 'r' :
-		perm == XS_PERM_WRITE ? 'w' :
-		perm == XS_PERM_NONE ? '-' :
-		perm == (XS_PERM_READ|XS_PERM_WRITE) ? 'b' :
-		'?';
-}
-
-static void tdb_logger(TDB_CONTEXT *tdb, int level, const char * fmt, ...)
-{
-	va_list ap;
-
-	va_start(ap, fmt);
-	vfprintf(stderr, fmt, ap);
-	va_end(ap);
-}
-
-int main(int argc, char *argv[])
-{
-	TDB_DATA key;
-	TDB_CONTEXT *tdb;
-
-	if (argc != 2)
-		barf("Usage: xs_tdb_dump <tdbfile>");
-
-	tdb = tdb_open_ex(talloc_strdup(NULL, argv[1]), 0, 0, O_RDONLY, 0,
-			  &tdb_logger, NULL);
-	if (!tdb)
-		barf_perror("Could not open %s", argv[1]);
-
-	key = tdb_firstkey(tdb);
-	while (key.dptr) {
-		TDB_DATA data;
-		struct xs_tdb_record_hdr *hdr;
-
-		data = tdb_fetch(tdb, key);
-		hdr = (void *)data.dptr;
-		if (data.dsize < sizeof(*hdr))
-			fprintf(stderr, "%.*s: BAD truncated\n",
-				(int)key.dsize, key.dptr);
-		else if (data.dsize != total_size(hdr))
-			fprintf(stderr, "%.*s: BAD length %zu for %u/%u/%u (%u)\n",
-				(int)key.dsize, key.dptr, data.dsize,
-				hdr->num_perms, hdr->datalen,
-				hdr->childlen, total_size(hdr));
-		else {
-			unsigned int i;
-			char *p;
-
-			printf("%.*s: ", (int)key.dsize, key.dptr);
-			for (i = 0; i < hdr->num_perms; i++)
-				printf("%s%c%u",
-				       i == 0 ? "" : ",",
-				       perm_to_char(hdr->perms[i].perms),
-				       hdr->perms[i].id);
-			p = (void *)&hdr->perms[hdr->num_perms];
-			printf(" %.*s\n", hdr->datalen, p);
-			p += hdr->datalen;
-			for (i = 0; i < hdr->childlen; i += strlen(p+i)+1)
-				printf("\t-> %s\n", p+i);
-		}
-		key = tdb_nextkey(tdb, key);
-	}
-	return 0;
-}
-
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:57:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:57:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540913.843080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAf-0008LM-2N; Tue, 30 May 2023 08:57:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540913.843080; Tue, 30 May 2023 08:57:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAe-0008Ik-OK; Tue, 30 May 2023 08:57:36 +0000
Received: by outflank-mailman (input) for mailman id 540913;
 Tue, 30 May 2023 08:57:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v8z-00026J-CP
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:55:53 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c745943a-fec7-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:55:51 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 44D3321AC5;
 Tue, 30 May 2023 08:55:51 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 1569E1341B;
 Tue, 30 May 2023 08:55:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id DpfgAxe6dWT+GwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:55:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c745943a-fec7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436951; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FW0C2bAkKkIHGt/B2e/wyvut7P4v8g7aX6Xek0VYr58=;
	b=bESitT7N/VP0RcKobx2Ra1Ydei7bdiX/KL8hFEx7S6bIBeTaaEXceb5h+DXATke7XSr5jB
	/LeXVjTJIlME7gWKnYtS+44gsAxFYed9DbzxcTMo75zeW49esYQ2FypWirLv3vWiZ/RInF
	habeMLuctjcg7YXClZa/VgCeKa5A0lw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 16/16] tools/xenstore: remove unused stuff from list.h
Date: Tue, 30 May 2023 10:54:18 +0200
Message-Id: <20230530085418.5417-17-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the hlist defines/functions and the RCU-related functions from
tools/xenstore/list.h, as they are unused.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/list.h | 227 ------------------------------------------
 1 file changed, 227 deletions(-)

diff --git a/tools/xenstore/list.h b/tools/xenstore/list.h
index a464a38b61..d722a91220 100644
--- a/tools/xenstore/list.h
+++ b/tools/xenstore/list.h
@@ -88,48 +88,6 @@ static inline void list_add_tail(struct list_head *new, struct list_head *head)
 	__list_add(new, head->prev, head);
 }
 
-/*
- * Insert a new entry between two known consecutive entries. 
- *
- * This is only for internal list manipulation where we know
- * the prev/next entries already!
- */
-static __inline__ void __list_add_rcu(struct list_head * new,
-	struct list_head * prev,
-	struct list_head * next)
-{
-	new->next = next;
-	new->prev = prev;
-	next->prev = new;
-	prev->next = new;
-}
-
-/**
- * list_add_rcu - add a new entry to rcu-protected list
- * @new: new entry to be added
- * @head: list head to add it after
- *
- * Insert a new entry after the specified head.
- * This is good for implementing stacks.
- */
-static __inline__ void list_add_rcu(struct list_head *new, struct list_head *head)
-{
-	__list_add_rcu(new, head, head->next);
-}
-
-/**
- * list_add_tail_rcu - add a new entry to rcu-protected list
- * @new: new entry to be added
- * @head: list head to add it before
- *
- * Insert a new entry before the specified head.
- * This is useful for implementing queues.
- */
-static __inline__ void list_add_tail_rcu(struct list_head *new, struct list_head *head)
-{
-	__list_add_rcu(new, head->prev, head);
-}
-
 /*
  * Delete a list entry by making the prev/next entries
  * point to each other.
@@ -156,23 +114,6 @@ static inline void list_del(struct list_head *entry)
 	entry->prev = LIST_POISON2;
 }
 
-/**
- * list_del_rcu - deletes entry from list without re-initialization
- * @entry: the element to delete from the list.
- *
- * Note: list_empty on entry does not return true after this, 
- * the entry is in an undefined state. It is useful for RCU based
- * lockfree traversal.
- *
- * In particular, it means that we can not poison the forward 
- * pointers that may still be used for walking the list.
- */
-static inline void list_del_rcu(struct list_head *entry)
-{
-	__list_del(entry->prev, entry->next);
-	entry->prev = LIST_POISON2;
-}
-
 /**
  * list_del_init - deletes entry from list and reinitialize it.
  * @entry: the element to delete from the list.
@@ -339,172 +280,4 @@ static inline void list_splice_init(struct list_head *list,
 	     &pos->member != (head); 					\
 	     pos = n, n = list_entry(n->member.next, typeof(*n), member))
 
-
-/* 
- * Double linked lists with a single pointer list head. 
- * Mostly useful for hash tables where the two pointer list head is 
- * too wasteful.
- * You lose the ability to access the tail in O(1).
- */ 
-
-struct hlist_head { 
-	struct hlist_node *first; 
-}; 
-
-struct hlist_node { 
-	struct hlist_node *next, **pprev; 
-}; 
-
-#define HLIST_HEAD_INIT { .first = NULL } 
-#define HLIST_HEAD(name) struct hlist_head name = {  .first = NULL }
-#define INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL) 
-#define INIT_HLIST_NODE(ptr) ((ptr)->next = NULL, (ptr)->pprev = NULL)
-
-static __inline__ int hlist_unhashed(struct hlist_node *h) 
-{ 
-	return !h->pprev;
-} 
-
-static __inline__ int hlist_empty(struct hlist_head *h) 
-{ 
-	return !h->first;
-} 
-
-static __inline__ void __hlist_del(struct hlist_node *n) 
-{
-	struct hlist_node *next = n->next;
-	struct hlist_node **pprev = n->pprev;
-	*pprev = next;  
-	if (next) 
-		next->pprev = pprev;
-}  
-
-static __inline__ void hlist_del(struct hlist_node *n)
-{
-	__hlist_del(n);
-	n->next = LIST_POISON1;
-	n->pprev = LIST_POISON2;
-}
-
-/**
- * hlist_del_rcu - deletes entry from hash list without re-initialization
- * @entry: the element to delete from the hash list.
- *
- * Note: list_unhashed() on entry does not return true after this, 
- * the entry is in an undefined state. It is useful for RCU based
- * lockfree traversal.
- *
- * In particular, it means that we can not poison the forward
- * pointers that may still be used for walking the hash list.
- */
-static inline void hlist_del_rcu(struct hlist_node *n)
-{
-	__hlist_del(n);
-	n->pprev = LIST_POISON2;
-}
-
-static __inline__ void hlist_del_init(struct hlist_node *n) 
-{
-	if (n->pprev)  {
-		__hlist_del(n);
-		INIT_HLIST_NODE(n);
-	}
-}  
-
-#define hlist_del_rcu_init hlist_del_init
-
-static __inline__ void hlist_add_head(struct hlist_node *n, struct hlist_head *h) 
-{ 
-	struct hlist_node *first = h->first;
-	n->next = first; 
-	if (first) 
-		first->pprev = &n->next;
-	h->first = n; 
-	n->pprev = &h->first; 
-} 
-
-static __inline__ void hlist_add_head_rcu(struct hlist_node *n, struct hlist_head *h) 
-{ 
-	struct hlist_node *first = h->first;
-	n->next = first;
-	n->pprev = &h->first; 
-	if (first) 
-		first->pprev = &n->next;
-	h->first = n; 
-} 
-
-/* next must be != NULL */
-static __inline__ void hlist_add_before(struct hlist_node *n, struct hlist_node *next)
-{
-	n->pprev = next->pprev;
-	n->next = next; 
-	next->pprev = &n->next; 
-	*(n->pprev) = n;
-}
-
-static __inline__ void hlist_add_after(struct hlist_node *n,
-				       struct hlist_node *next)
-{
-	next->next	= n->next;
-	*(next->pprev)	= n;
-	n->next		= next;
-}
-
-#define hlist_entry(ptr, type, member) container_of(ptr,type,member)
-
-/* Cannot easily do prefetch unfortunately */
-#define hlist_for_each(pos, head) \
-	for (pos = (head)->first; pos; pos = pos->next) 
-
-#define hlist_for_each_safe(pos, n, head) \
-	for (pos = (head)->first; n = pos ? pos->next : 0, pos; \
-	     pos = n)
-
-/**
- * hlist_for_each_entry	- iterate over list of given type
- * @tpos:	the type * to use as a loop counter.
- * @pos:	the &struct hlist_node to use as a loop counter.
- * @head:	the head for your list.
- * @member:	the name of the hlist_node within the struct.
- */
-#define hlist_for_each_entry(tpos, pos, head, member)			 \
-	for (pos = (head)->first;					 \
-	     pos && ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;}); \
-	     pos = pos->next)
-
-/**
- * hlist_for_each_entry_continue - iterate over a hlist continuing after existing point
- * @tpos:	the type * to use as a loop counter.
- * @pos:	the &struct hlist_node to use as a loop counter.
- * @member:	the name of the hlist_node within the struct.
- */
-#define hlist_for_each_entry_continue(tpos, pos, member)		 \
-	for (pos = (pos)->next;						 \
-	     pos && ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;}); \
-	     pos = pos->next)
-
-/**
- * hlist_for_each_entry_from - iterate over a hlist continuing from existing point
- * @tpos:	the type * to use as a loop counter.
- * @pos:	the &struct hlist_node to use as a loop counter.
- * @member:	the name of the hlist_node within the struct.
- */
-#define hlist_for_each_entry_from(tpos, pos, member)			 \
-	for (; pos && ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;}); \
-	     pos = pos->next)
-
-/**
- * hlist_for_each_entry_safe - iterate over list of given type safe against removal of list entry
- * @tpos:	the type * to use as a loop counter.
- * @pos:	the &struct hlist_node to use as a loop counter.
- * @n:		another &struct hlist_node to use as temporary storage
- * @head:	the head for your list.
- * @member:	the name of the hlist_node within the struct.
- */
-#define hlist_for_each_entry_safe(tpos, pos, n, head, member) 		 \
-	for (pos = (head)->first;					 \
-	     pos && ({ n = pos->next; 1; }) && 				 \
-		({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;}); \
-	     pos = n)
-
 #endif
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 08:57:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 08:57:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540915.843086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAf-0008VH-Ih; Tue, 30 May 2023 08:57:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540915.843086; Tue, 30 May 2023 08:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vAf-0008S5-7C; Tue, 30 May 2023 08:57:37 +0000
Received: by outflank-mailman (input) for mailman id 540915;
 Tue, 30 May 2023 08:57:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3v8u-00026J-A0
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 08:55:48 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3f28ee2-fec7-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 10:55:45 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id ABFC41FE5D;
 Tue, 30 May 2023 08:55:45 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 7491B1341B;
 Tue, 30 May 2023 08:55:45 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id bdshGxG6dWTrGwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 08:55:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3f28ee2-fec7-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685436945; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=a7ufPEpw6m49aWGpLHSNYNAa4RLn86mQbyAgO3QVecY=;
	b=mvoAcCdbnBixFH4Vk0BI8S5maRbr9vJBNvfkgAjoDnMLkxpWY48jczwi1eA4e9pdeSDr/+
	EE3CEUeuwnsA6MgpGJT/L3+xzB1Xfm5m518GwoN3ZzVyIUmowuPtCwbmfOWSIHISW6M4ZM
	yvsQGOwS4GNmez8tM3OedZWhHvsom3k=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v3 15/16] tools/xenstore: split out rest of live update control code
Date: Tue, 30 May 2023 10:54:17 +0200
Message-Id: <20230530085418.5417-16-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530085418.5417-1-jgross@suse.com>
References: <20230530085418.5417-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the rest of the live-update-related code from xenstored_control.c to
a dedicated new source file.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/Makefile.common     |   2 +-
 tools/xenstore/xenstored_control.c | 409 -----------------------------
 tools/xenstore/xenstored_control.h |   8 -
 tools/xenstore/xenstored_core.c    |   1 +
 tools/xenstore/xenstored_lu.c      | 400 ++++++++++++++++++++++++++++
 tools/xenstore/xenstored_lu.h      |  25 ++
 6 files changed, 427 insertions(+), 418 deletions(-)
 create mode 100644 tools/xenstore/xenstored_lu.c

diff --git a/tools/xenstore/Makefile.common b/tools/xenstore/Makefile.common
index c42796fe34..657a16849e 100644
--- a/tools/xenstore/Makefile.common
+++ b/tools/xenstore/Makefile.common
@@ -1,7 +1,7 @@
 # Makefile shared with stubdom
 
 XENSTORED_OBJS-y := xenstored_core.o xenstored_watch.o xenstored_domain.o
-XENSTORED_OBJS-y += xenstored_transaction.o xenstored_control.o
+XENSTORED_OBJS-y += xenstored_transaction.o xenstored_control.o xenstored_lu.o
 XENSTORED_OBJS-y += talloc.o utils.o tdb.o hashtable.o
 
 XENSTORED_OBJS-$(CONFIG_Linux) += xenstored_posix.o xenstored_lu_daemon.o
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index b61b41f16c..145a0e5aff 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -16,21 +16,13 @@
     along with this program; If not, see <http://www.gnu.org/licenses/>.
 */
 
-#include <assert.h>
-#include <ctype.h>
 #include <errno.h>
 #include <stdarg.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
-#include <syslog.h>
-#include <time.h>
 #include <sys/types.h>
-#include <sys/stat.h>
-#include <sys/mman.h>
-#include <fcntl.h>
 #include <unistd.h>
-#include <xenctrl.h>
 #include <xen-tools/xenstore-common.h>
 
 #include "utils.h"
@@ -39,69 +31,6 @@
 #include "xenstored_control.h"
 #include "xenstored_domain.h"
 #include "xenstored_lu.h"
-#include "xenstored_watch.h"
-
-#ifndef NO_LIVE_UPDATE
-struct live_update *lu_status;
-
-const char *lu_begin(struct connection *conn)
-{
-	if (lu_status)
-		return "live-update session already active.";
-
-	lu_status = talloc_zero(conn, struct live_update);
-	if (!lu_status)
-		return "Allocation failure.";
-	lu_status->conn = conn;
-	talloc_set_destructor(lu_status, lu_destroy);
-
-	return NULL;
-}
-
-struct connection *lu_get_connection(void)
-{
-	return lu_status ? lu_status->conn : NULL;
-}
-
-unsigned int lu_write_response(FILE *fp)
-{
-	struct xsd_sockmsg msg;
-
-	assert(lu_status);
-
-	msg = lu_status->in->hdr.msg;
-
-	msg.len = sizeof("OK");
-	if (fp && fwrite(&msg, sizeof(msg), 1, fp) != 1)
-		return 0;
-	if (fp && fwrite("OK", msg.len, 1, fp) != 1)
-		return 0;
-
-	return sizeof(msg) + msg.len;
-}
-
-bool lu_is_pending(void)
-{
-	return lu_status != NULL;
-}
-
-#else
-struct connection *lu_get_connection(void)
-{
-	return NULL;
-}
-
-unsigned int lu_write_response(FILE *fp)
-{
-	/* Unsupported */
-	return 0;
-}
-
-bool lu_is_pending(void)
-{
-	return false;
-}
-#endif
 
 struct cmd_s {
 	char *cmd;
@@ -352,344 +281,6 @@ static int do_control_print(const void *ctx, struct connection *conn,
 	return 0;
 }
 
-#ifndef NO_LIVE_UPDATE
-static const char *lu_abort(const void *ctx, struct connection *conn)
-{
-	syslog(LOG_INFO, "live-update: abort\n");
-
-	if (!lu_status)
-		return "No live-update session active.";
-
-	/* Destructor will do the real abort handling. */
-	talloc_free(lu_status);
-
-	return NULL;
-}
-
-static const char *lu_cmdline(const void *ctx, struct connection *conn,
-			      const char *cmdline)
-{
-	syslog(LOG_INFO, "live-update: cmdline %s\n", cmdline);
-
-	if (!lu_status || lu_status->conn != conn)
-		return "Not in live-update session.";
-
-	lu_status->cmdline = talloc_strdup(lu_status, cmdline);
-	if (!lu_status->cmdline)
-		return "Allocation failure.";
-
-	return NULL;
-}
-
-static bool lu_check_lu_allowed(void)
-{
-	struct connection *conn;
-	time_t now = time(NULL);
-	unsigned int ta_total = 0, ta_long = 0;
-
-	list_for_each_entry(conn, &connections, list) {
-		if (conn->ta_start_time) {
-			ta_total++;
-			if (now - conn->ta_start_time >= lu_status->timeout)
-				ta_long++;
-		}
-	}
-
-	/*
-	 * Allow LiveUpdate if one of the following conditions is met:
-	 *	- There is no active transactions
-	 *	- All transactions are long running (e.g. they have been
-	 *	active for more than lu_status->timeout sec) and the admin as
-	 *	requested to force the operation.
-	 */
-	return ta_total ? (lu_status->force && ta_long == ta_total) : true;
-}
-
-static const char *lu_reject_reason(const void *ctx)
-{
-	char *ret = NULL;
-	struct connection *conn;
-	time_t now = time(NULL);
-
-	list_for_each_entry(conn, &connections, list) {
-		unsigned long tdiff = now - conn->ta_start_time;
-
-		if (conn->ta_start_time && (tdiff >= lu_status->timeout)) {
-			ret = talloc_asprintf(ctx, "%s\nDomain %u: %ld s",
-					      ret ? : "Domains with long running transactions:",
-					      conn->id, tdiff);
-		}
-	}
-
-	return ret ? (const char *)ret : "Overlapping transactions";
-}
-
-static const char *lu_dump_state(const void *ctx, struct connection *conn)
-{
-	FILE *fp;
-	const char *ret;
-	struct xs_state_record_header end;
-	struct xs_state_preamble pre;
-
-	fp = lu_dump_open(ctx);
-	if (!fp)
-		return "Dump state open error";
-
-	memcpy(pre.ident, XS_STATE_IDENT, sizeof(pre.ident));
-	pre.version = htobe32(XS_STATE_VERSION);
-	pre.flags = XS_STATE_FLAGS;
-	if (fwrite(&pre, sizeof(pre), 1, fp) != 1) {
-		ret = "Dump write error";
-		goto out;
-	}
-
-	ret = dump_state_global(fp);
-	if (ret)
-		goto out;
-	ret = dump_state_connections(fp);
-	if (ret)
-		goto out;
-	ret = dump_state_nodes(fp, ctx);
-	if (ret)
-		goto out;
-
-	end.type = XS_STATE_TYPE_END;
-	end.length = 0;
-	if (fwrite(&end, sizeof(end), 1, fp) != 1)
-		ret = "Dump write error";
-
- out:
-	lu_dump_close(fp);
-
-	return ret;
-}
-
-void lu_read_state(void)
-{
-	struct lu_dump_state state = {};
-	struct xs_state_record_header *head;
-	void *ctx = talloc_new(NULL); /* Work context for subfunctions. */
-	struct xs_state_preamble *pre;
-
-	syslog(LOG_INFO, "live-update: read state\n");
-	lu_get_dump_state(&state);
-	if (state.size == 0)
-		barf_perror("No state found after live-update");
-
-	pre = state.buf;
-	if (memcmp(pre->ident, XS_STATE_IDENT, sizeof(pre->ident)) ||
-	    pre->version != htobe32(XS_STATE_VERSION) ||
-	    pre->flags != XS_STATE_FLAGS)
-		barf("Unknown record identifier");
-	for (head = state.buf + sizeof(*pre);
-	     head->type != XS_STATE_TYPE_END &&
-		(void *)head - state.buf < state.size;
-	     head = (void *)head + sizeof(*head) + head->length) {
-		switch (head->type) {
-		case XS_STATE_TYPE_GLOBAL:
-			read_state_global(ctx, head + 1);
-			break;
-		case XS_STATE_TYPE_CONN:
-			read_state_connection(ctx, head + 1);
-			break;
-		case XS_STATE_TYPE_WATCH:
-			read_state_watch(ctx, head + 1);
-			break;
-		case XS_STATE_TYPE_TA:
-			xprintf("live-update: ignore transaction record\n");
-			break;
-		case XS_STATE_TYPE_NODE:
-			read_state_node(ctx, head + 1);
-			break;
-		default:
-			xprintf("live-update: unknown state record %08x\n",
-				head->type);
-			break;
-		}
-	}
-
-	lu_close_dump_state(&state);
-
-	talloc_free(ctx);
-
-	/*
-	 * We may have missed the VIRQ_DOM_EXC notification and a domain may
-	 * have died while we were live-updating. So check all the domains are
-	 * still alive.
-	 */
-	check_domains();
-}
-
-static const char *lu_activate_binary(const void *ctx)
-{
-	int argc;
-	char **argv;
-	unsigned int i;
-
-	if (lu_status->cmdline) {
-		argc = 4;   /* At least one arg + progname + "-U" + NULL. */
-		for (i = 0; lu_status->cmdline[i]; i++)
-			if (isspace(lu_status->cmdline[i]))
-				argc++;
-		argv = talloc_array(ctx, char *, argc);
-		if (!argv)
-			return "Allocation failure.";
-
-		i = 0;
-		argc = 1;
-		argv[1] = strtok(lu_status->cmdline, " \t");
-		while (argv[argc]) {
-			if (!strcmp(argv[argc], "-U"))
-				i = 1;
-			argc++;
-			argv[argc] = strtok(NULL, " \t");
-		}
-
-		if (!i) {
-			argv[argc++] = "-U";
-			argv[argc] = NULL;
-		}
-	} else {
-		for (i = 0; i < orig_argc; i++)
-			if (!strcmp(orig_argv[i], "-U"))
-				break;
-
-		argc = orig_argc;
-		argv = talloc_array(ctx, char *, orig_argc + 2);
-		if (!argv)
-			return "Allocation failure.";
-
-		memcpy(argv, orig_argv, orig_argc * sizeof(*argv));
-		if (i == orig_argc)
-			argv[argc++] = "-U";
-		argv[argc] = NULL;
-	}
-
-	domain_deinit();
-
-	return lu_exec(ctx, argc, argv);
-}
-
-static bool do_lu_start(struct delayed_request *req)
-{
-	time_t now = time(NULL);
-	const char *ret;
-	struct buffered_data *saved_in;
-	struct connection *conn = req->data;
-
-	/*
-	 * Cancellation may have been requested asynchronously. In this
-	 * case, lu_status will be NULL.
-	 */
-	if (!lu_status) {
-		ret = "Cancellation was requested";
-		goto out;
-	}
-
-	assert(lu_status->conn == conn);
-
-	if (!lu_check_lu_allowed()) {
-		if (now < lu_status->started_at + lu_status->timeout)
-			return false;
-		if (!lu_status->force) {
-			ret = lu_reject_reason(req);
-			goto out;
-		}
-	}
-
-	assert(req->in == lu_status->in);
-	/* Dump out internal state, including "OK" for live update. */
-	ret = lu_dump_state(req->in, conn);
-	if (!ret) {
-		/* Perform the activation of new binary. */
-		ret = lu_activate_binary(req->in);
-	}
-
-	/* We will reach this point only in case of failure. */
- out:
-	/*
-	 * send_reply() will send the response for conn->in. Save the current
-	 * conn->in and restore it afterwards.
-	 */
-	saved_in = conn->in;
-	conn->in = req->in;
-	send_reply(conn, XS_CONTROL, ret, strlen(ret) + 1);
-	conn->in = saved_in;
-	talloc_free(lu_status);
-
-	return true;
-}
-
-static const char *lu_start(const void *ctx, struct connection *conn,
-			    bool force, unsigned int to)
-{
-	syslog(LOG_INFO, "live-update: start, force=%d, to=%u\n", force, to);
-
-	if (!lu_status || lu_status->conn != conn)
-		return "Not in live-update session.";
-
-#ifdef __MINIOS__
-	if (lu_status->kernel_size != lu_status->kernel_off)
-		return "Kernel not complete.";
-#endif
-
-	lu_status->force = force;
-	lu_status->timeout = to;
-	lu_status->started_at = time(NULL);
-	lu_status->in = conn->in;
-
-	errno = delay_request(conn, conn->in, do_lu_start, conn, false);
-
-	return NULL;
-}
-
-static int do_control_lu(const void *ctx, struct connection *conn,
-			 char **vec, int num)
-{
-	const char *ret = NULL;
-	unsigned int i;
-	bool force = false;
-	unsigned int to = 0;
-
-	if (num < 1)
-		return EINVAL;
-
-	if (!strcmp(vec[0], "-a")) {
-		if (num == 1)
-			ret = lu_abort(ctx, conn);
-		else
-			return EINVAL;
-	} else if (!strcmp(vec[0], "-c")) {
-		if (num == 2)
-			ret = lu_cmdline(ctx, conn, vec[1]);
-		else
-			return EINVAL;
-	} else if (!strcmp(vec[0], "-s")) {
-		for (i = 1; i < num; i++) {
-			if (!strcmp(vec[i], "-F"))
-				force = true;
-			else if (!strcmp(vec[i], "-t") && i < num - 1) {
-				i++;
-				to = atoi(vec[i]);
-			} else
-				return EINVAL;
-		}
-		ret = lu_start(ctx, conn, force, to);
-		if (!ret)
-			return errno;
-	} else {
-		ret = lu_arch(ctx, conn, vec, num);
-		if (!ret && errno)
-			return errno;
-	}
-
-	if (!ret)
-		ret = "OK";
-	send_reply(conn, XS_CONTROL, ret, strlen(ret) + 1);
-	return 0;
-}
-#endif
-
 static int do_control_help(const void *, struct connection *, char **, int);
 
 static struct cmd_s cmds[] = {
diff --git a/tools/xenstore/xenstored_control.h b/tools/xenstore/xenstored_control.h
index a8cb76559b..faa955968d 100644
--- a/tools/xenstore/xenstored_control.h
+++ b/tools/xenstore/xenstored_control.h
@@ -18,11 +18,3 @@
 
 int do_control(const void *ctx, struct connection *conn,
 	       struct buffered_data *in);
-void lu_read_state(void);
-
-struct connection *lu_get_connection(void);
-
-/* Write the "OK" response for the live-update command */
-unsigned int lu_write_response(FILE *fp);
-
-bool lu_is_pending(void);
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 62deee9cb9..31a862b715 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -52,6 +52,7 @@
 #include "xenstored_transaction.h"
 #include "xenstored_domain.h"
 #include "xenstored_control.h"
+#include "xenstored_lu.h"
 #include "tdb.h"
 
 #ifndef NO_SOCKETS
diff --git a/tools/xenstore/xenstored_lu.c b/tools/xenstore/xenstored_lu.c
new file mode 100644
index 0000000000..783bfd6456
--- /dev/null
+++ b/tools/xenstore/xenstored_lu.c
@@ -0,0 +1,400 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+/*
+ * Live Update interfaces for Xen Store Daemon.
+ * Copyright (C) 2022 Juergen Gross, SUSE LLC
+ */
+
+#include <assert.h>
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <syslog.h>
+#include <time.h>
+
+#include "talloc.h"
+#include "xenstored_core.h"
+#include "xenstored_domain.h"
+#include "xenstored_lu.h"
+#include "xenstored_watch.h"
+
+#ifndef NO_LIVE_UPDATE
+struct live_update *lu_status;
+
+const char *lu_begin(struct connection *conn)
+{
+	if (lu_status)
+		return "live-update session already active.";
+
+	lu_status = talloc_zero(conn, struct live_update);
+	if (!lu_status)
+		return "Allocation failure.";
+	lu_status->conn = conn;
+	talloc_set_destructor(lu_status, lu_destroy);
+
+	return NULL;
+}
+
+struct connection *lu_get_connection(void)
+{
+	return lu_status ? lu_status->conn : NULL;
+}
+
+unsigned int lu_write_response(FILE *fp)
+{
+	struct xsd_sockmsg msg;
+
+	assert(lu_status);
+
+	msg = lu_status->in->hdr.msg;
+
+	msg.len = sizeof("OK");
+	if (fp && fwrite(&msg, sizeof(msg), 1, fp) != 1)
+		return 0;
+	if (fp && fwrite("OK", msg.len, 1, fp) != 1)
+		return 0;
+
+	return sizeof(msg) + msg.len;
+}
+
+bool lu_is_pending(void)
+{
+	return lu_status != NULL;
+}
+
+void lu_read_state(void)
+{
+	struct lu_dump_state state = {};
+	struct xs_state_record_header *head;
+	void *ctx = talloc_new(NULL); /* Work context for subfunctions. */
+	struct xs_state_preamble *pre;
+
+	syslog(LOG_INFO, "live-update: read state\n");
+	lu_get_dump_state(&state);
+	if (state.size == 0)
+		barf_perror("No state found after live-update");
+
+	pre = state.buf;
+	if (memcmp(pre->ident, XS_STATE_IDENT, sizeof(pre->ident)) ||
+	    pre->version != htobe32(XS_STATE_VERSION) ||
+	    pre->flags != XS_STATE_FLAGS)
+		barf("Unknown record identifier");
+	for (head = state.buf + sizeof(*pre);
+	     head->type != XS_STATE_TYPE_END &&
+		(void *)head - state.buf < state.size;
+	     head = (void *)head + sizeof(*head) + head->length) {
+		switch (head->type) {
+		case XS_STATE_TYPE_GLOBAL:
+			read_state_global(ctx, head + 1);
+			break;
+		case XS_STATE_TYPE_CONN:
+			read_state_connection(ctx, head + 1);
+			break;
+		case XS_STATE_TYPE_WATCH:
+			read_state_watch(ctx, head + 1);
+			break;
+		case XS_STATE_TYPE_TA:
+			xprintf("live-update: ignore transaction record\n");
+			break;
+		case XS_STATE_TYPE_NODE:
+			read_state_node(ctx, head + 1);
+			break;
+		default:
+			xprintf("live-update: unknown state record %08x\n",
+				head->type);
+			break;
+		}
+	}
+
+	lu_close_dump_state(&state);
+
+	talloc_free(ctx);
+
+	/*
+	 * We may have missed the VIRQ_DOM_EXC notification and a domain may
+	 * have died while we were live-updating. So check all the domains are
+	 * still alive.
+	 */
+	check_domains();
+}
+
+static const char *lu_abort(const void *ctx, struct connection *conn)
+{
+	syslog(LOG_INFO, "live-update: abort\n");
+
+	if (!lu_status)
+		return "No live-update session active.";
+
+	/* Destructor will do the real abort handling. */
+	talloc_free(lu_status);
+
+	return NULL;
+}
+
+static const char *lu_cmdline(const void *ctx, struct connection *conn,
+			      const char *cmdline)
+{
+	syslog(LOG_INFO, "live-update: cmdline %s\n", cmdline);
+
+	if (!lu_status || lu_status->conn != conn)
+		return "Not in live-update session.";
+
+	lu_status->cmdline = talloc_strdup(lu_status, cmdline);
+	if (!lu_status->cmdline)
+		return "Allocation failure.";
+
+	return NULL;
+}
+
+static bool lu_check_lu_allowed(void)
+{
+	struct connection *conn;
+	time_t now = time(NULL);
+	unsigned int ta_total = 0, ta_long = 0;
+
+	list_for_each_entry(conn, &connections, list) {
+		if (conn->ta_start_time) {
+			ta_total++;
+			if (now - conn->ta_start_time >= lu_status->timeout)
+				ta_long++;
+		}
+	}
+
+	/*
+	 * Allow LiveUpdate if one of the following conditions is met:
+	 *	- There are no active transactions
+	 *	- All transactions are long running (i.e. they have been
+	 *	  active for more than lu_status->timeout seconds) and the
+	 *	  admin has requested to force the operation.
+	 */
+	return ta_total ? (lu_status->force && ta_long == ta_total) : true;
+}
+
+static const char *lu_reject_reason(const void *ctx)
+{
+	char *ret = NULL;
+	struct connection *conn;
+	time_t now = time(NULL);
+
+	list_for_each_entry(conn, &connections, list) {
+		unsigned long tdiff = now - conn->ta_start_time;
+
+		if (conn->ta_start_time && (tdiff >= lu_status->timeout)) {
+			ret = talloc_asprintf(ctx, "%s\nDomain %u: %ld s",
+					      ret ? : "Domains with long running transactions:",
+					      conn->id, tdiff);
+		}
+	}
+
+	return ret ? (const char *)ret : "Overlapping transactions";
+}
+
+static const char *lu_dump_state(const void *ctx, struct connection *conn)
+{
+	FILE *fp;
+	const char *ret;
+	struct xs_state_record_header end;
+	struct xs_state_preamble pre;
+
+	fp = lu_dump_open(ctx);
+	if (!fp)
+		return "Dump state open error";
+
+	memcpy(pre.ident, XS_STATE_IDENT, sizeof(pre.ident));
+	pre.version = htobe32(XS_STATE_VERSION);
+	pre.flags = XS_STATE_FLAGS;
+	if (fwrite(&pre, sizeof(pre), 1, fp) != 1) {
+		ret = "Dump write error";
+		goto out;
+	}
+
+	ret = dump_state_global(fp);
+	if (ret)
+		goto out;
+	ret = dump_state_connections(fp);
+	if (ret)
+		goto out;
+	ret = dump_state_nodes(fp, ctx);
+	if (ret)
+		goto out;
+
+	end.type = XS_STATE_TYPE_END;
+	end.length = 0;
+	if (fwrite(&end, sizeof(end), 1, fp) != 1)
+		ret = "Dump write error";
+
+ out:
+	lu_dump_close(fp);
+
+	return ret;
+}
+
+static const char *lu_activate_binary(const void *ctx)
+{
+	int argc;
+	char **argv;
+	unsigned int i;
+
+	if (lu_status->cmdline) {
+		argc = 4;   /* At least one arg + progname + "-U" + NULL. */
+		for (i = 0; lu_status->cmdline[i]; i++)
+			if (isspace(lu_status->cmdline[i]))
+				argc++;
+		argv = talloc_array(ctx, char *, argc);
+		if (!argv)
+			return "Allocation failure.";
+
+		i = 0;
+		argc = 1;
+		argv[1] = strtok(lu_status->cmdline, " \t");
+		while (argv[argc]) {
+			if (!strcmp(argv[argc], "-U"))
+				i = 1;
+			argc++;
+			argv[argc] = strtok(NULL, " \t");
+		}
+
+		if (!i) {
+			argv[argc++] = "-U";
+			argv[argc] = NULL;
+		}
+	} else {
+		for (i = 0; i < orig_argc; i++)
+			if (!strcmp(orig_argv[i], "-U"))
+				break;
+
+		argc = orig_argc;
+		argv = talloc_array(ctx, char *, orig_argc + 2);
+		if (!argv)
+			return "Allocation failure.";
+
+		memcpy(argv, orig_argv, orig_argc * sizeof(*argv));
+		if (i == orig_argc)
+			argv[argc++] = "-U";
+		argv[argc] = NULL;
+	}
+
+	domain_deinit();
+
+	return lu_exec(ctx, argc, argv);
+}
+
+static bool do_lu_start(struct delayed_request *req)
+{
+	time_t now = time(NULL);
+	const char *ret;
+	struct buffered_data *saved_in;
+	struct connection *conn = req->data;
+
+	/*
+	 * Cancellation may have been requested asynchronously. In this
+	 * case, lu_status will be NULL.
+	 */
+	if (!lu_status) {
+		ret = "Cancellation was requested";
+		goto out;
+	}
+
+	assert(lu_status->conn == conn);
+
+	if (!lu_check_lu_allowed()) {
+		if (now < lu_status->started_at + lu_status->timeout)
+			return false;
+		if (!lu_status->force) {
+			ret = lu_reject_reason(req);
+			goto out;
+		}
+	}
+
+	assert(req->in == lu_status->in);
+	/* Dump out internal state, including "OK" for live update. */
+	ret = lu_dump_state(req->in, conn);
+	if (!ret) {
+		/* Perform the activation of new binary. */
+		ret = lu_activate_binary(req->in);
+	}
+
+	/* We will reach this point only in case of failure. */
+ out:
+	/*
+	 * send_reply() will send the response for conn->in. Save the current
+	 * conn->in and restore it afterwards.
+	 */
+	saved_in = conn->in;
+	conn->in = req->in;
+	send_reply(conn, XS_CONTROL, ret, strlen(ret) + 1);
+	conn->in = saved_in;
+	talloc_free(lu_status);
+
+	return true;
+}
+
+static const char *lu_start(const void *ctx, struct connection *conn,
+			    bool force, unsigned int to)
+{
+	syslog(LOG_INFO, "live-update: start, force=%d, to=%u\n", force, to);
+
+	if (!lu_status || lu_status->conn != conn)
+		return "Not in live-update session.";
+
+#ifdef __MINIOS__
+	if (lu_status->kernel_size != lu_status->kernel_off)
+		return "Kernel not complete.";
+#endif
+
+	lu_status->force = force;
+	lu_status->timeout = to;
+	lu_status->started_at = time(NULL);
+	lu_status->in = conn->in;
+
+	errno = delay_request(conn, conn->in, do_lu_start, conn, false);
+
+	return NULL;
+}
+
+int do_control_lu(const void *ctx, struct connection *conn, char **vec,
+		  int num)
+{
+	const char *ret = NULL;
+	unsigned int i;
+	bool force = false;
+	unsigned int to = 0;
+
+	if (num < 1)
+		return EINVAL;
+
+	if (!strcmp(vec[0], "-a")) {
+		if (num == 1)
+			ret = lu_abort(ctx, conn);
+		else
+			return EINVAL;
+	} else if (!strcmp(vec[0], "-c")) {
+		if (num == 2)
+			ret = lu_cmdline(ctx, conn, vec[1]);
+		else
+			return EINVAL;
+	} else if (!strcmp(vec[0], "-s")) {
+		for (i = 1; i < num; i++) {
+			if (!strcmp(vec[i], "-F"))
+				force = true;
+			else if (!strcmp(vec[i], "-t") && i < num - 1) {
+				i++;
+				to = atoi(vec[i]);
+			} else
+				return EINVAL;
+		}
+		ret = lu_start(ctx, conn, force, to);
+		if (!ret)
+			return errno;
+	} else {
+		ret = lu_arch(ctx, conn, vec, num);
+		if (!ret && errno)
+			return errno;
+	}
+
+	if (!ret)
+		ret = "OK";
+	send_reply(conn, XS_CONTROL, ret, strlen(ret) + 1);
+	return 0;
+}
+#endif
diff --git a/tools/xenstore/xenstored_lu.h b/tools/xenstore/xenstored_lu.h
index d2f8e4e57c..7aa524a07e 100644
--- a/tools/xenstore/xenstored_lu.h
+++ b/tools/xenstore/xenstored_lu.h
@@ -43,6 +43,16 @@ struct lu_dump_state {
 
 extern struct live_update *lu_status;
 
+struct connection *lu_get_connection(void);
+bool lu_is_pending(void);
+void lu_read_state(void);
+
+/* Write the "OK" response for the live-update command */
+unsigned int lu_write_response(FILE *fp);
+
+int do_control_lu(const void *ctx, struct connection *conn, char **vec,
+		  int num);
+
 /* Live update private interfaces. */
 void lu_get_dump_state(struct lu_dump_state *state);
 void lu_close_dump_state(struct lu_dump_state *state);
@@ -53,4 +63,19 @@ const char *lu_arch(const void *ctx, struct connection *conn, char **vec,
 		    int num);
 const char *lu_begin(struct connection *conn);
 int lu_destroy(void *data);
+#else
+static inline struct connection *lu_get_connection(void)
+{
+	return NULL;
+}
+
+static inline unsigned int lu_write_response(FILE *fp)
+{
+	return 0;
+}
+
+static inline bool lu_is_pending(void)
+{
+	return false;
+}
 #endif
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:05:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:05:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540942.843110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vI5-0003md-LM; Tue, 30 May 2023 09:05:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540942.843110; Tue, 30 May 2023 09:05:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vI5-0003mW-Im; Tue, 30 May 2023 09:05:17 +0000
Received: by outflank-mailman (input) for mailman id 540942;
 Tue, 30 May 2023 09:05:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3vI4-0003mM-At; Tue, 30 May 2023 09:05:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3vI4-0001pD-2x; Tue, 30 May 2023 09:05:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q3vI3-0003oz-LI; Tue, 30 May 2023 09:05:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q3vI3-0002MP-Kv; Tue, 30 May 2023 09:05:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KfgMzgw0fBGiLNO+L6khCcBfuAR1KqmoiEL6tAaYnQg=; b=50GSESkSg+Eh3Umb5TKCG69CMw
	pp+14cEawXDXsL/YJhlWfJn0AXPtIDUn19aATRRl6/vb3LEyruijxX9eofgNHhufU3JgZXNh4RVtW
	dn7OKz9Y3RF5bAPS0H180QAxa/rWKSLAOKPs5kIyEaad3paAzWJHkMg0b3UpgLM16x5E=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181006-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 181006: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=aa9bbd865502ed517624ab6fe7d4b5d89ca95e43
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 09:05:15 +0000

flight 181006 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181006/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                aa9bbd865502ed517624ab6fe7d4b5d89ca95e43
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   12 days
Failing since        180699  2023-05-18 07:21:24 Z   12 days   48 attempts
Testing same since   181006  2023-05-30 01:10:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bernhard Beschow <shentey@gmail.com>
  Bin Meng <bin.meng@windriver.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Cédric Le Goater <clg@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Erico Nunes <ernunes@redhat.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicholas Piggin <npiggin@gmail.com>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Rene Engel <ReneEngel80@emailn.de>
  Richard Henderson <richard.henderson@linaro.org>
  Richard Purdie <richard.purdie@linuxfoundation.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sergio Lopez <slp@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vivek Kasireddy <vivek.kasireddy@intel.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8961 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:07:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:07:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540948.843120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vK7-0004Kh-1o; Tue, 30 May 2023 09:07:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540948.843120; Tue, 30 May 2023 09:07:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vK6-0004Ka-VO; Tue, 30 May 2023 09:07:22 +0000
Received: by outflank-mailman (input) for mailman id 540948;
 Tue, 30 May 2023 09:07:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3vK5-0004KU-2I
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:07:21 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20604.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::604])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6162a4eb-fec9-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:07:19 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS5PR04MB9972.eurprd04.prod.outlook.com (2603:10a6:20b:682::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22; Tue, 30 May
 2023 09:07:17 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 09:07:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6162a4eb-fec9-11ed-b231-6b7b168915f2
Message-ID: <859c2409-f0ee-8fc7-5348-fd1678e91b4e@suse.com>
Date: Tue, 30 May 2023 11:07:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 1/4] x86/spec-ctrl: Rename retpoline_safe() to
 retpoline_calculations()
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230526110656.4018711-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0131.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:97::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 26.05.2023 13:06, Andrew Cooper wrote:
> This is prep work, split out to simplify the diff on the following change.
> 
>  * Rename to retpoline_calculations(), and call unconditionally.  It is
>    shortly going to synthesize missing enumerations required for guest safety.
>  * For Broadwell, store the ucode revision calculation in a variable and fall
>    out of the bottom of the switch statement.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I guess subsequent patches will teach me why ...

> @@ -681,6 +682,12 @@ static bool __init retpoline_safe(void)
>                 boot_cpu_data.x86_model);
>          return false;
>      }
> +
> +    /* Only Broadwell gets here. */
> +    if ( safe )
> +        return true;
> +
> +    return false;

... this isn't just "return safe;".

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:08:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:08:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540953.843130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vL6-0004yb-G8; Tue, 30 May 2023 09:08:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540953.843130; Tue, 30 May 2023 09:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vL6-0004yU-Cv; Tue, 30 May 2023 09:08:24 +0000
Received: by outflank-mailman (input) for mailman id 540953;
 Tue, 30 May 2023 09:08:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1tCH=BT=citrix.com=prvs=5074c9224=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q3vL5-0004y5-6g
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:08:23 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 84ccddf3-fec9-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:08:20 +0200 (CEST)
Received: from mail-dm6nam12lp2171.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.171])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 May 2023 05:08:17 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by LV3PR03MB7359.namprd03.prod.outlook.com (2603:10b6:408:1a2::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 09:08:13 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.020; Tue, 30 May 2023
 09:08:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84ccddf3-fec9-11ed-8611-37d641c3527e
Date: Tue, 30 May 2023 11:08:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/vPIC: register only one ELCR handler instance
Message-ID: <ZHW897liNQKSbB0F@Air-de-Roger>
References: <5567b45d-d8ee-7f43-526f-7f601c6ddd46@suse.com>
 <ZHRkstB6UKWAadVZ@Air-de-Roger>
 <5f04bdd7-2337-812b-cf9d-985fe34d0f5d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5f04bdd7-2337-812b-cf9d-985fe34d0f5d@suse.com>
X-ClientProxiedBy: LO4P265CA0156.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2c7::18) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0

On Tue, May 30, 2023 at 10:48:02AM +0200, Jan Beulich wrote:
> On 29.05.2023 10:39, Roger Pau Monné wrote:
> > On Fri, May 26, 2023 at 09:35:04AM +0200, Jan Beulich wrote:
> >> There's no point consuming two port-I/O slots. Even less so considering
> >> that some real hardware permits both ports to be accessed in one go,
> >> the emulation of which requires there to be only a single instance.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>
> >> --- a/xen/arch/x86/hvm/vpic.c
> >> +++ b/xen/arch/x86/hvm/vpic.c
> >> @@ -377,25 +377,34 @@ static int cf_check vpic_intercept_elcr_
> >>      int dir, unsigned int port, unsigned int bytes, uint32_t *val)
> >>  {
> >>      struct hvm_hw_vpic *vpic;
> >> -    uint32_t data;
> >> +    unsigned int data, shift = 0;
> >>  
> >> -    BUG_ON(bytes != 1);
> >> +    BUG_ON(bytes > 2 - (port & 1));
> >>  
> >>      vpic = &current->domain->arch.hvm.vpic[port & 1];
> >>  
> >> -    if ( dir == IOREQ_WRITE )
> >> -    {
> >> -        /* Some IRs are always edge trig. Slave IR is always level trig. */
> >> -        data = *val & vpic_elcr_mask(vpic);
> >> -        if ( vpic->is_master )
> >> -            data |= 1 << 2;
> >> -        vpic->elcr = data;
> >> -    }
> >> -    else
> >> -    {
> >> -        /* Reader should not see hardcoded level-triggered slave IR. */
> >> -        *val = vpic->elcr & vpic_elcr_mask(vpic);
> >> -    }
> >> +    do {
> >> +        if ( dir == IOREQ_WRITE )
> >> +        {
> >> +            /* Some IRs are always edge trig. Slave IR is always level trig. */
> >> +            data = (*val >> shift) & vpic_elcr_mask(vpic);
> >> +            if ( vpic->is_master )
> >> +                data |= 1 << 2;
> > 
> > Not that you added this, but I'm confused.  The spec I'm reading
> > explicitly states that bits 0:2 are reserved and must be 0.
> > 
> > Is this some quirk of the specific chipset we aim to emulate?
> 
> I don't think so. Note that upon reads the bit is masked out again.
> Adding back further context, there's even a comment to this effect:
> 
> +        else
> +        {
> +            /* Reader should not see hardcoded level-triggered slave IR. */
> +            data = vpic->elcr & vpic_elcr_mask(vpic);
> 
> The setting of the bit is solely for internal handling purposes,
> aiui.

Oh, I see, I should have paid more attention to the "Slave IR is
always level trig." comment; it might have been helpful had this been
noted as an internal implementation detail.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:13:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540957.843140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vPq-0006QO-1y; Tue, 30 May 2023 09:13:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540957.843140; Tue, 30 May 2023 09:13:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vPp-0006QH-VB; Tue, 30 May 2023 09:13:17 +0000
Received: by outflank-mailman (input) for mailman id 540957;
 Tue, 30 May 2023 09:13:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1tCH=BT=citrix.com=prvs=5074c9224=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q3vPp-0006QB-8S
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:13:17 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 338527f8-feca-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:13:13 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 May 2023 05:13:04 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by DM4PR03MB6208.namprd03.prod.outlook.com (2603:10b6:5:39c::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 09:13:00 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.020; Tue, 30 May 2023
 09:13:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 338527f8-feca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685437993;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=tDtkzfK9yt2H4e5dpSN4bE2A3kcZ+fQvyTNKwVcqm9E=;
  b=H8fvFrMVhPROwdTV1pU+UyeOM6BIcl7d1VjbPQJ66Hw1/SNMDk4galnX
   laL4vTMnPfVjgqOtb3cizRKy9BGE4NURf0iZH9BdnXjf5pKp65OgDc+zN
   1EHsnV0lto8VZA+D0RgHyQGvUXWN5vxjda9XBW9G6sE6JgSFzXvPcrcKm
   w=;
X-IronPort-RemoteIP: 104.47.58.168
X-IronPort-MID: 110224105
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iU728gA+ZXThMxQb9mO2Gvjrh3cLiEzfEq7dauG5PQA=;
 b=qgN+wkHvfPznuiCc4bGiMuKB78Npkez1iiuDF9cKBQ/r9XGo9nm1+OJmBnteyiPtn13y7K5sWn8JWVWOeZV8gnhYhVVFKEBUFJrI6nd/og8CJpgW5HUgy2qbN43vMYdXDmra0yPrYfrWI0gcVQd5Dw7F3KcNVbM9Cr80V+Dc8QM=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 30 May 2023 11:12:54 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Message-ID: <ZHW+Fu99ZGHPgMj+@Air-de-Roger>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4zx+TvUWTCEMh3@Air-de-Roger>
 <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
 <ZG94c9y4j4udFmsy@Air-de-Roger>
 <cedbc257-9ad9-56f1-5060-eaf173d45760@suse.com>
 <ZHRdjCKSVtWVkX96@Air-de-Roger>
 <25663dac-6023-a9a7-a495-c995762191d8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <25663dac-6023-a9a7-a495-c995762191d8@suse.com>
X-ClientProxiedBy: LO2P265CA0466.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a2::22) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|DM4PR03MB6208:EE_
X-MS-Office365-Filtering-Correlation-Id: 1578ef06-6bd4-40e4-5b5a-08db60ee1090
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1578ef06-6bd4-40e4-5b5a-08db60ee1090
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 09:13:00.2097
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gfWP83/qkaIsQELQvHrNFQaGZAkCX/n5TrluicEeRyt5vzoJHahZ9zH9AgonM6NdoCOybQoEq5twNPkHHaPZPw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6208

On Tue, May 30, 2023 at 10:45:09AM +0200, Jan Beulich wrote:
> On 29.05.2023 10:08, Roger Pau Monné wrote:
> > On Thu, May 25, 2023 at 05:30:54PM +0200, Jan Beulich wrote:
> >> On 25.05.2023 17:02, Roger Pau Monné wrote:
> >>> On Thu, May 25, 2023 at 04:39:51PM +0200, Jan Beulich wrote:
> >>>> On 24.05.2023 17:56, Roger Pau Monné wrote:
> >>>>> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
> >>>>>> --- a/xen/drivers/vpci/header.c
> >>>>>> +++ b/xen/drivers/vpci/header.c
> >>>>>> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
> >>>>>>      struct vpci_header *header = &pdev->vpci->header;
> >>>>>>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
> >>>>>>      struct pci_dev *tmp, *dev = NULL;
> >>>>>> +    const struct domain *d;
> >>>>>>      const struct vpci_msix *msix = pdev->vpci->msix;
> >>>>>>      unsigned int i;
> >>>>>>      int rc;
> >>>>>> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
> >>>>>>  
> >>>>>>      /*
> >>>>>>       * Check for overlaps with other BARs. Note that only BARs that are
> >>>>>> -     * currently mapped (enabled) are checked for overlaps.
> >>>>>> +     * currently mapped (enabled) are checked for overlaps. Note also that
> >>>>>> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
> >>>>>>       */
> >>>>>> -    for_each_pdev ( pdev->domain, tmp )
> >>>>>> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
> >>>>>
> >>>>> Looking at this again, I think this is slightly more complex, as during
> >>>>> runtime dom0 will get here with pdev->domain == hardware_domain OR
> >>>>> dom_xen, and hence you also need to account that devices that have
> >>>>> pdev->domain == dom_xen need to iterate over devices that belong to
> >>>>> the hardware_domain, ie:
> >>>>>
> >>>>> for ( d = pdev->domain; ;
> >>>>>       d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )
> >>>>
> >>>> Right, something along these lines. To keep loop continuation expression
> >>>> and exit condition simple, I'll probably prefer
> >>>>
> >>>> for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
> >>>>       ; d = dom_xen )
> >>>
> >>> LGTM.  I would add parentheses around the pdev->domain != dom_xen
> >>> condition, but that's just my personal taste.
> >>>
> >>> We might want to add an
> >>>
> >>> ASSERT(pdev->domain == hardware_domain || pdev->domain == dom_xen);
> >>>
> >>> here, just to remind that this chunk must be revisited when adding
> >>> domU support (but you can also argue we haven't done this elsewhere),
> >>> I just feel here it's not so obvious we don't want do to this for
> >>> domUs.
> >>
> >> I could add such an assertion, if only ...
> >>
> >>>>> And we likely want to limit this to devices that belong to the
> >>>>> hardware_domain or to dom_xen (in preparation for vPCI being used for
> >>>>> domUs).
> >>>>
> >>>> I'm afraid I don't understand this remark, though.
> >>>
> >>> This was looking forward to domU support, so that you already cater
> >>> for pdev->domain not being hardware_domain or dom_xen, but we might
> >>> want to leave that for later, when domU support is actually
> >>> introduced.
> >>
> >> ... I understood why this checking doesn't apply to DomU-s as well,
> >> in your opinion.
> > 
> > It's my understanding that domUs can never get hidden or read-only
> > devices assigned, and hence there no need to check for overlap with
> > devices assigned to dom_xen, as those cannot have any BARs mapped in
> > a domU physmap.
> > 
> > So for domUs the overlap check only needs to be performed against
> > devices assigned to pdev->domain.
> 
> I fully agree, but the assertion you suggested doesn't express that. Or
> maybe I'm misunderstanding what you did suggest, and there was an
> implication of some further if() around it.

Maybe I'm getting myself confused, but if you add something like:

for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
      ; d = dom_xen )

Such a loop would need to be avoided for domUs, so my suggestion was to
add the assert as a reminder that the loop needs adjusting if we ever
add domU support.  But maybe you already had plans to restrict the
loop to dom0 only.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:13:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:13:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540958.843150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQA-0006rr-CW; Tue, 30 May 2023 09:13:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540958.843150; Tue, 30 May 2023 09:13:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQA-0006rk-9h; Tue, 30 May 2023 09:13:38 +0000
Received: by outflank-mailman (input) for mailman id 540958;
 Tue, 30 May 2023 09:13:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQ9-0006nv-Ea
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:13:37 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 41f66420-feca-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:13:36 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 014D221AC5;
 Tue, 30 May 2023 09:13:36 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id C5D131341B;
 Tue, 30 May 2023 09:13:35 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id 1hfbLj++dWS+IgAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:13:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41f66420-feca-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438016; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=mn65uEelHtTfjq/hMbF7r3S9j3iWMNwjFkqnoEoOaTA=;
	b=oqZQt4tfc2gqo1/66UrJp6T5y06BuWYUZ5t+70WtONLk69XC4wBswFwdi8uFrgvIfYJ1hg
	OmW1HZEKxVc1Mkff6QCnbosJJywVTJobniazxphvRAqeesSklxovpYVV/kuSb96+vgdkm2
	7ENx2tFsMxej6gCdvnGX9pLcUapuw8c=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 00/11] tools/xenstore: drop TDB
Date: Tue, 30 May 2023 11:13:22 +0200
Message-Id: <20230530091333.7678-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Using TDB for storing the Xenstore nodes adds more complexity than it
removes. With the data now kept in memory only, the main reason for
using TDB has disappeared.

This series replaces TDB with a hashlist directly referencing
individually allocated Xenstore nodes.

The series is based on:
- V6 of my "tools/xenstore: rework internal accounting" series
- V3 of my "tools/xenstore: more cleanups" series

Juergen Gross (11):
  tools/xenstore: explicitly specify create or modify for tdb_store()
  tools/xenstore: replace key in struct node with data base name
  tools/xenstore: let transaction_prepend() return the name for access
  tools/xenstore: rename do_tdb_delete() and change parameter type
  tools/xenstore: rename do_tdb_write() and change parameter type
  tools/xenstore: switch get_acc_data() to use name instead of key
  tools/xenstore: add wrapper for tdb_fetch()
  tools/xenstore: make hashtable key and value parameters const
  tools/xenstore: add hashtable_replace() function
  tools/xenstore: drop use of tdb
  tools/xenstore: remove tdb code

 tools/xenstore/Makefile.common         |    2 +-
 tools/xenstore/hashtable.c             |   57 +-
 tools/xenstore/hashtable.h             |   20 +-
 tools/xenstore/tdb.c                   | 1748 ------------------------
 tools/xenstore/tdb.h                   |  132 --
 tools/xenstore/xenstored_core.c        |  274 ++--
 tools/xenstore/xenstored_core.h        |   28 +-
 tools/xenstore/xenstored_domain.c      |    4 +-
 tools/xenstore/xenstored_transaction.c |   80 +-
 tools/xenstore/xenstored_transaction.h |    5 +-
 10 files changed, 250 insertions(+), 2100 deletions(-)
 delete mode 100644 tools/xenstore/tdb.c
 delete mode 100644 tools/xenstore/tdb.h

-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:13:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:13:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540960.843161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQG-0007Eo-LU; Tue, 30 May 2023 09:13:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540960.843161; Tue, 30 May 2023 09:13:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQG-0007Eh-Ga; Tue, 30 May 2023 09:13:44 +0000
Received: by outflank-mailman (input) for mailman id 540960;
 Tue, 30 May 2023 09:13:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQF-0006QB-JI
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:13:43 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 453695ae-feca-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:13:41 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 895B51FD7C;
 Tue, 30 May 2023 09:13:41 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 5FD0D1341B;
 Tue, 30 May 2023 09:13:41 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id ITIkFkW+dWTQIgAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:13:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 453695ae-feca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438021; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IKNSjaTn7WbCWwMDq2fO8Wq3Ke/9ONYC9bEL6rJ5Ge0=;
	b=iXhDEYXNm+bDS7KIBFmBvDi1PcqStCSx/QzGLMrf2qb2Wk6YynCjoQnn1B2bvfmkQfQyQx
	6jzqU0i40BUxHxLJDolaKiko/WLf8y9z5nMqxIENJGo4hZNqy5QIdXZJDkoOaMZNGsRMxy
	TB/LcP1VQ0m3skbgAnkvrw+wKVHuZXA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 01/11] tools/xenstore: explicitly specify create or modify for tdb_store()
Date: Tue, 30 May 2023 11:13:23 +0200
Message-Id: <20230530091333.7678-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using TDB_REPLACE for both creating and modifying a TDB
entry, use either TDB_INSERT or TDB_MODIFY when calling tdb_store().

At higher function levels use the abstract flag values NODE_CREATE
and NODE_MODIFY.

This is in preparation for getting rid of TDB.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 29 ++++++++++++++------------
 tools/xenstore/xenstored_core.h        |  6 ++++--
 tools/xenstore/xenstored_domain.c      |  2 +-
 tools/xenstore/xenstored_transaction.c |  8 +++++--
 4 files changed, 27 insertions(+), 18 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 31a862b715..90c0bc7423 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -601,7 +601,7 @@ static unsigned int get_acc_domid(struct connection *conn, TDB_DATA *key,
 }
 
 int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
-		 struct node_account_data *acc, bool no_quota_check)
+		 struct node_account_data *acc, int flag, bool no_quota_check)
 {
 	struct xs_tdb_record_hdr *hdr = (void *)data->dptr;
 	struct node_account_data old_acc = {};
@@ -635,7 +635,8 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 	}
 
 	/* TDB should set errno, but doesn't even set ecode AFAICT. */
-	if (tdb_store(tdb_ctx, *key, *data, TDB_REPLACE) != 0) {
+	if (tdb_store(tdb_ctx, *key, *data,
+		      (flag == NODE_CREATE) ? TDB_INSERT : TDB_MODIFY) != 0) {
 		domain_memory_add_nochk(conn, new_domid,
 					-data->dsize - key->dsize);
 		/* Error path, so no quota check. */
@@ -774,7 +775,7 @@ static bool read_node_can_propagate_errno(void)
 }
 
 int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
-		   bool no_quota_check)
+		   int flag, bool no_quota_check)
 {
 	TDB_DATA data;
 	void *p;
@@ -812,7 +813,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 	p += node->datalen;
 	memcpy(p, node->children, node->childlen);
 
-	if (do_tdb_write(conn, key, &data, &node->acc, no_quota_check))
+	if (do_tdb_write(conn, key, &data, &node->acc, flag, no_quota_check))
 		return EIO;
 
 	return 0;
@@ -823,14 +824,14 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
  * node->key. This can later be used if the change needs to be reverted.
  */
 static int write_node(struct connection *conn, struct node *node,
-		      bool no_quota_check)
+		      int flag, bool no_quota_check)
 {
 	int ret;
 
 	if (access_node(conn, node, NODE_ACCESS_WRITE, &node->key))
 		return errno;
 
-	ret = write_node_raw(conn, &node->key, node, no_quota_check);
+	ret = write_node_raw(conn, &node->key, node, flag, no_quota_check);
 	if (ret && conn && conn->transaction) {
 		/*
 		 * Reverting access_node() is hard, so just fail the
@@ -1496,7 +1497,8 @@ static struct node *create_node(struct connection *conn, const void *ctx,
 			goto err;
 		}
 
-		ret = write_node(conn, i, false);
+		ret = write_node(conn, i, i->parent ? NODE_CREATE : NODE_MODIFY,
+				 false);
 		if (ret)
 			goto err;
 
@@ -1560,7 +1562,7 @@ static int do_write(const void *ctx, struct connection *conn,
 	} else {
 		node->data = in->buffer + offset;
 		node->datalen = datalen;
-		if (write_node(conn, node, false))
+		if (write_node(conn, node, NODE_MODIFY, false))
 			return errno;
 	}
 
@@ -1610,7 +1612,7 @@ static int remove_child_entry(struct connection *conn, struct node *node,
 	memdel(node->children, offset, childlen + 1, node->childlen);
 	node->childlen -= childlen + 1;
 
-	return write_node(conn, node, true);
+	return write_node(conn, node, NODE_MODIFY, true);
 }
 
 static int delete_child(struct connection *conn,
@@ -1807,7 +1809,7 @@ static int do_set_perms(const void *ctx, struct connection *conn,
 	if (domain_nbentry_inc(conn, get_node_owner(node)))
 		return ENOMEM;
 
-	if (write_node(conn, node, false))
+	if (write_node(conn, node, NODE_MODIFY, false))
 		return errno;
 
 	fire_watches(conn, ctx, name, node, false, &old_perms);
@@ -2321,7 +2323,7 @@ static void manual_node(const char *name, const char *child)
 	if (child)
 		node->childlen = strlen(child) + 1;
 
-	if (write_node(NULL, node, false))
+	if (write_node(NULL, node, NODE_CREATE, false))
 		barf_perror("Could not create initial node %s", name);
 	talloc_free(node);
 }
@@ -3469,12 +3471,13 @@ void read_state_node(const void *ctx, const void *state)
 			barf("allocation error restoring node");
 
 		set_tdb_key(parentname, &key);
-		if (write_node_raw(NULL, &key, parent, true))
+		if (write_node_raw(NULL, &key, parent, NODE_MODIFY, true))
 			barf("write parent error restoring node");
 	}
 
 	set_tdb_key(name, &key);
-	if (write_node_raw(NULL, &key, node, true))
+	if (write_node_raw(NULL, &key, node,
+			   strcmp(name, "/") ? NODE_CREATE : NODE_MODIFY, true))
 		barf("write node error restoring node");
 
 	if (domain_nbentry_inc(&conn, get_node_owner(node)))
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 84a611cbb5..9291efec17 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -238,7 +238,9 @@ static inline unsigned int get_node_owner(const struct node *node)
 
 /* Write a node to the tdb data base. */
 int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
-		   bool no_quota_check);
+		   int flag, bool no_quota_check);
+#define NODE_CREATE 0
+#define NODE_MODIFY 1
 
 /* Get a node from the tdb data base. */
 struct node *read_node(struct connection *conn, const void *ctx,
@@ -358,7 +360,7 @@ int remember_string(struct hashtable *hash, const char *str);
 
 void set_tdb_key(const char *name, TDB_DATA *key);
 int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
-		 struct node_account_data *acc, bool no_quota_check);
+		 struct node_account_data *acc, int flag, bool no_quota_check);
 int do_tdb_delete(struct connection *conn, TDB_DATA *key,
 		  struct node_account_data *acc);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 60d3aa1ddb..7bc49ec97c 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -523,7 +523,7 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 		node->perms.p[0].id = priv_domid;
 		node->acc.memory = 0;
 		domain_nbentry_inc(NULL, priv_domid);
-		if (write_node_raw(NULL, &key, node, true)) {
+		if (write_node_raw(NULL, &key, node, NODE_MODIFY, true)) {
 			/* That's unfortunate. We only can try to continue. */
 			syslog(LOG_ERR,
 			       "error when moving orphaned node %s to dom0\n",
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 334f1609f1..0655073de7 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -290,7 +290,8 @@ int access_node(struct connection *conn, struct node *node,
 			i->check_gen = true;
 			if (node->generation != NO_GENERATION) {
 				set_tdb_key(i->trans_name, &local_key);
-				ret = write_node_raw(conn, &local_key, node, true);
+				ret = write_node_raw(conn, &local_key, node,
+						     NODE_CREATE, true);
 				if (ret)
 					goto err;
 				i->ta_node = true;
@@ -363,6 +364,7 @@ static int finalize_transaction(struct connection *conn,
 	TDB_DATA key, ta_key, data;
 	struct xs_tdb_record_hdr *hdr;
 	uint64_t gen;
+	int flag;
 
 	list_for_each_entry_safe(i, n, &trans->accessed, list) {
 		if (i->check_gen) {
@@ -405,8 +407,10 @@ static int finalize_transaction(struct connection *conn,
 					  ta_key.dsize + data.dsize);
 				hdr = (void *)data.dptr;
 				hdr->generation = ++generation;
+				flag = (i->generation == NO_GENERATION)
+				       ? NODE_CREATE : NODE_MODIFY;
 				*is_corrupt |= do_tdb_write(conn, &key, &data,
-							    NULL, true);
+							    NULL, flag, true);
 				talloc_free(data.dptr);
 				if (do_tdb_delete(conn, &ta_key, NULL))
 					*is_corrupt = true;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:14:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540963.843169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQM-0007bT-Rx; Tue, 30 May 2023 09:13:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540963.843169; Tue, 30 May 2023 09:13:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQM-0007bK-PG; Tue, 30 May 2023 09:13:50 +0000
Received: by outflank-mailman (input) for mailman id 540963;
 Tue, 30 May 2023 09:13:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQL-0006QB-7d
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:13:49 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 48905308-feca-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:13:47 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2CB1621AC5;
 Tue, 30 May 2023 09:13:47 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 016C01341B;
 Tue, 30 May 2023 09:13:46 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id mh57Okq+dWTcIgAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:13:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48905308-feca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438027; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+ClC9PU2wIUP/OJxhDjXBevd2pA0sV7fWT1Nek6XR94=;
	b=C3KZ57ybgGsu8Z0V+4N2Of16LDwlYaRYUWTtM/LtKwReuOqT2Z7sr1SI4lg4Ap3xxCae3x
	Yp70pj7tmoeIMr8BbgTfolyUTD2IOZjEIMICY1NOE5i3ZJfa7Qgc8rIL/4iatb/UbsIHlf
	EzCjgW7j3lbvdYv+3kCxJujgyFds42I=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 02/11] tools/xenstore: replace key in struct node with data base name
Date: Tue, 30 May 2023 11:13:24 +0200
Message-Id: <20230530091333.7678-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of storing the TDB key in struct node, store only the name of
the node used to access it in the database.

Along with that change, replace the key parameter of access_node()
with the equivalent db_name.

This is in preparation for replacing TDB with a simpler data store.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 19 +++++++++++++------
 tools/xenstore/xenstored_core.h        |  4 ++--
 tools/xenstore/xenstored_transaction.c | 10 +++++-----
 tools/xenstore/xenstored_transaction.h |  2 +-
 4 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 90c0bc7423..a1d5d4a419 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -820,18 +820,20 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 }
 
 /*
- * Write the node. If the node is written, caller can find the key used in
- * node->key. This can later be used if the change needs to be reverted.
+ * Write the node. If the node is written, caller can find the DB name used in
+ * node->db_name. This can later be used if the change needs to be reverted.
  */
 static int write_node(struct connection *conn, struct node *node,
 		      int flag, bool no_quota_check)
 {
 	int ret;
+	TDB_DATA key;
 
-	if (access_node(conn, node, NODE_ACCESS_WRITE, &node->key))
+	if (access_node(conn, node, NODE_ACCESS_WRITE, &node->db_name))
 		return errno;
 
-	ret = write_node_raw(conn, &node->key, node, flag, no_quota_check);
+	set_tdb_key(node->db_name, &key);
+	ret = write_node_raw(conn, &key, node, flag, no_quota_check);
 	if (ret && conn && conn->transaction) {
 		/*
 		 * Reverting access_node() is hard, so just fail the
@@ -1445,10 +1447,13 @@ nomem:
 
 static void destroy_node_rm(struct connection *conn, struct node *node)
 {
+	TDB_DATA key;
+
 	if (streq(node->name, "/"))
 		corrupt(NULL, "Destroying root node!");
 
-	do_tdb_delete(conn, &node->key, &node->acc);
+	set_tdb_key(node->db_name, &key);
+	do_tdb_delete(conn, &key, &node->acc);
 }
 
 static int destroy_node(struct connection *conn, struct node *node)
@@ -1638,10 +1643,11 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 	const char *root = arg;
 	bool watch_exact;
 	int ret;
+	const char *db_name;
 	TDB_DATA key;
 
 	/* Any error here will probably be repeated for all following calls. */
-	ret = access_node(conn, node, NODE_ACCESS_DELETE, &key);
+	ret = access_node(conn, node, NODE_ACCESS_DELETE, &db_name);
 	if (ret > 0)
 		return WALK_TREE_SUCCESS_STOP;
 
@@ -1649,6 +1655,7 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 		return WALK_TREE_ERROR_STOP;
 
 	/* In case of error stop the walk. */
+	set_tdb_key(db_name, &key);
 	if (!ret && do_tdb_delete(conn, &key, &node->acc))
 		return WALK_TREE_ERROR_STOP;
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 9291efec17..f7cb035f26 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -181,8 +181,8 @@ struct node_account_data {
 
 struct node {
 	const char *name;
-	/* Key used to update TDB */
-	TDB_DATA key;
+	/* Name used to access data base. */
+	const char *db_name;
 
 	/* Parent (optional) */
 	struct node *parent;
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 0655073de7..9dab0cd165 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -227,7 +227,7 @@ void transaction_prepend(struct connection *conn, const char *name,
  * to be accessed in the data base.
  */
 int access_node(struct connection *conn, struct node *node,
-		enum node_access_type type, TDB_DATA *key)
+		enum node_access_type type, const char **db_name)
 {
 	struct accessed_node *i = NULL;
 	struct transaction *trans;
@@ -243,8 +243,8 @@ int access_node(struct connection *conn, struct node *node,
 
 	if (!conn || !conn->transaction) {
 		/* They're changing the global database. */
-		if (key)
-			set_tdb_key(node->name, key);
+		if (db_name)
+			*db_name = node->name;
 		return 0;
 	}
 
@@ -308,8 +308,8 @@ int access_node(struct connection *conn, struct node *node,
 		/* Nothing to delete. */
 		return -1;
 
-	if (key) {
-		set_tdb_key(i->trans_name, key);
+	if (db_name) {
+		*db_name = i->trans_name;
 		if (type == NODE_ACCESS_WRITE)
 			i->ta_node = true;
 		if (type == NODE_ACCESS_DELETE)
diff --git a/tools/xenstore/xenstored_transaction.h b/tools/xenstore/xenstored_transaction.h
index 883145163f..f6a2e2f7f5 100644
--- a/tools/xenstore/xenstored_transaction.h
+++ b/tools/xenstore/xenstored_transaction.h
@@ -41,7 +41,7 @@ void ta_node_created(struct transaction *trans);
 
 /* This node was accessed. */
 int __must_check access_node(struct connection *conn, struct node *node,
-                             enum node_access_type type, TDB_DATA *key);
+                             enum node_access_type type, const char **db_name);
 
 /* Queue watches for a modified node. */
 void queue_watches(struct connection *conn, const char *name, bool watch_exact);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:14:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:14:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540964.843180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQQ-0007v0-53; Tue, 30 May 2023 09:13:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540964.843180; Tue, 30 May 2023 09:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQQ-0007up-1T; Tue, 30 May 2023 09:13:54 +0000
Received: by outflank-mailman (input) for mailman id 540964;
 Tue, 30 May 2023 09:13:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQP-0006nv-Gl
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:13:53 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4be8ab9d-feca-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:13:53 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B862121AC5;
 Tue, 30 May 2023 09:13:52 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 89F981341B;
 Tue, 30 May 2023 09:13:52 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id 56orIFC+dWTiIgAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:13:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4be8ab9d-feca-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438032; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LaigVqKga5DdOUggEAXiuJzbZ/VBvKJTFm/Qf2fJTXM=;
	b=suJmawvgPL32cthBV2CL4Jm3HF+dPY+NmhUQT7AcVgmXFdxKlAMvQbpYmlXtb0HIZ49TVu
	B+guJMw4FLRODOtsKtw0KdKXhSzg9sOGbW4nTXkydWTFxlDj8t+GqEynI3P+oyiBZdyyHP
	l1J0m368p8h7hUqkDij2ZmZwcXNiKro=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 03/11] tools/xenstore: let transaction_prepend() return the name for access
Date: Tue, 30 May 2023 11:13:25 +0200
Message-Id: <20230530091333.7678-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of setting the TDB key for accessing the node in the database,
let transaction_prepend() return the associated node name.

This is in preparation for replacing TDB with a simpler data store.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        |  4 +++-
 tools/xenstore/xenstored_transaction.c | 11 ++++-------
 tools/xenstore/xenstored_transaction.h |  3 +--
 3 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index a1d5d4a419..239f8c6bc4 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -694,6 +694,7 @@ struct node *read_node(struct connection *conn, const void *ctx,
 	TDB_DATA key, data;
 	struct xs_tdb_record_hdr *hdr;
 	struct node *node;
+	const char *db_name;
 	int err;
 
 	node = talloc(ctx, struct node);
@@ -708,7 +709,8 @@ struct node *read_node(struct connection *conn, const void *ctx,
 		return NULL;
 	}
 
-	transaction_prepend(conn, name, &key);
+	db_name = transaction_prepend(conn, name);
+	set_tdb_key(db_name, &key);
 
 	data = tdb_fetch(tdb_ctx, key);
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 9dab0cd165..1646c07040 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -196,20 +196,17 @@ static char *transaction_get_node_name(void *ctx, struct transaction *trans,
  * Prepend the transaction to name if node has been modified in the current
  * transaction.
  */
-void transaction_prepend(struct connection *conn, const char *name,
-			 TDB_DATA *key)
+const char *transaction_prepend(struct connection *conn, const char *name)
 {
 	struct accessed_node *i;
 
 	if (conn && conn->transaction) {
 		i = find_accessed_node(conn->transaction, name);
-		if (i) {
-			set_tdb_key(i->trans_name, key);
-			return;
-		}
+		if (i)
+			return i->trans_name;
 	}
 
-	set_tdb_key(name, key);
+	return name;
 }
 
 /*
diff --git a/tools/xenstore/xenstored_transaction.h b/tools/xenstore/xenstored_transaction.h
index f6a2e2f7f5..b196b1ab07 100644
--- a/tools/xenstore/xenstored_transaction.h
+++ b/tools/xenstore/xenstored_transaction.h
@@ -47,8 +47,7 @@ int __must_check access_node(struct connection *conn, struct node *node,
 void queue_watches(struct connection *conn, const char *name, bool watch_exact);
 
 /* Prepend the transaction to name if appropriate. */
-void transaction_prepend(struct connection *conn, const char *name,
-                         TDB_DATA *key);
+const char *transaction_prepend(struct connection *conn, const char *name);
 
 /* Mark the transaction as failed. This will prevent it to be committed. */
 void fail_transaction(struct transaction *trans);
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:14:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:14:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540965.843190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQX-0008Rr-Ib; Tue, 30 May 2023 09:14:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540965.843190; Tue, 30 May 2023 09:14:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQX-0008Qt-Em; Tue, 30 May 2023 09:14:01 +0000
Received: by outflank-mailman (input) for mailman id 540965;
 Tue, 30 May 2023 09:14:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQW-0006QB-9G
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:14:00 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4f390beb-feca-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:13:58 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 55F191FD68;
 Tue, 30 May 2023 09:13:58 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 28F691341B;
 Tue, 30 May 2023 09:13:58 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id PqSlCFa+dWTpIgAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:13:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f390beb-feca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438038; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ddci7abEBVlfL/pNul+H+iyRqLJOJhYkViCfX8p+eww=;
	b=pxxH4+oyCrMS287AX/MDVfgu7yb8HwHOOmqajtEaY9ZQ9NCdWbawDaEh7XoQs8uKm3bb+T
	sfmNjQepUoqnNKpoTkUe0Dx8hBcSj7A+nl/414isPNxY4m6koGqVAGVSN6Os6bhoy/+8+4
	ijJI79LQ3j/DFAaJKlulCT30M2Sx/3U=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 04/11] tools/xenstore: rename do_tdb_delete() and change parameter type
Date: Tue, 30 May 2023 11:13:26 +0200
Message-Id: <20230530091333.7678-5-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rename do_tdb_delete() to db_delete() and replace the key parameter
with db_name, the name of the node in the database.

This is in preparation for replacing TDB with a simpler data store.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 31 ++++++++++++--------------
 tools/xenstore/xenstored_core.h        |  5 +++--
 tools/xenstore/xenstored_transaction.c | 18 ++++++---------
 3 files changed, 24 insertions(+), 30 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 239f8c6bc4..a2454ad24d 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -657,28 +657,31 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 	return 0;
 }
 
-int do_tdb_delete(struct connection *conn, TDB_DATA *key,
-		  struct node_account_data *acc)
+int db_delete(struct connection *conn, const char *name,
+	      struct node_account_data *acc)
 {
 	struct node_account_data tmp_acc;
 	unsigned int domid;
+	TDB_DATA key;
+
+	set_tdb_key(name, &key);
 
 	if (!acc) {
 		acc = &tmp_acc;
 		acc->memory = -1;
 	}
 
-	get_acc_data(key, acc);
+	get_acc_data(&key, acc);
 
-	if (tdb_delete(tdb_ctx, *key)) {
+	if (tdb_delete(tdb_ctx, key)) {
 		errno = EIO;
 		return errno;
 	}
-	trace_tdb("delete %s\n", key->dptr);
+	trace_tdb("delete %s\n", name);
 
 	if (acc->memory) {
-		domid = get_acc_domid(conn, key, acc->domid);
-		domain_memory_add_nochk(conn, domid, -acc->memory - key->dsize);
+		domid = get_acc_domid(conn, &key, acc->domid);
+		domain_memory_add_nochk(conn, domid, -acc->memory - key.dsize);
 	}
 
 	return 0;
@@ -1449,13 +1452,10 @@ nomem:
 
 static void destroy_node_rm(struct connection *conn, struct node *node)
 {
-	TDB_DATA key;
-
 	if (streq(node->name, "/"))
 		corrupt(NULL, "Destroying root node!");
 
-	set_tdb_key(node->db_name, &key);
-	do_tdb_delete(conn, &key, &node->acc);
+	db_delete(conn, node->db_name, &node->acc);
 }
 
 static int destroy_node(struct connection *conn, struct node *node)
@@ -1646,7 +1646,6 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 	bool watch_exact;
 	int ret;
 	const char *db_name;
-	TDB_DATA key;
 
 	/* Any error here will probably be repeated for all following calls. */
 	ret = access_node(conn, node, NODE_ACCESS_DELETE, &db_name);
@@ -1657,8 +1656,7 @@ static int delnode_sub(const void *ctx, struct connection *conn,
 		return WALK_TREE_ERROR_STOP;
 
 	/* In case of error stop the walk. */
-	set_tdb_key(db_name, &key);
-	if (!ret && do_tdb_delete(conn, &key, &node->acc))
+	if (!ret && db_delete(conn, db_name, &node->acc))
 		return WALK_TREE_ERROR_STOP;
 
 	/*
@@ -2483,9 +2481,8 @@ static int clean_store_(TDB_CONTEXT *tdb, TDB_DATA key, TDB_DATA val,
 	}
 	if (!hashtable_search(reachable, name)) {
 		log("clean_store: '%s' is orphaned!", name);
-		if (recovery) {
-			do_tdb_delete(NULL, &key, NULL);
-		}
+		if (recovery)
+			db_delete(NULL, name, NULL);
 	}
 
 	talloc_free(name);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index f7cb035f26..7fc6d73e5a 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -358,11 +358,12 @@ extern xengnttab_handle **xgt_handle;
 
 int remember_string(struct hashtable *hash, const char *str);
 
+/* Data base access functions. */
 void set_tdb_key(const char *name, TDB_DATA *key);
 int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 		 struct node_account_data *acc, int flag, bool no_quota_check);
-int do_tdb_delete(struct connection *conn, TDB_DATA *key,
-		  struct node_account_data *acc);
+int db_delete(struct connection *conn, const char *name,
+	      struct node_account_data *acc);
 
 void conn_free_buffered_data(struct connection *conn);
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 1646c07040..bf173f3d1d 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -385,8 +385,7 @@ static int finalize_transaction(struct connection *conn,
 		/* Entries for unmodified nodes can be removed early. */
 		if (!i->modified) {
 			if (i->ta_node) {
-				set_tdb_key(i->trans_name, &ta_key);
-				if (do_tdb_delete(conn, &ta_key, NULL))
+				if (db_delete(conn, i->trans_name, NULL))
 					return EIO;
 			}
 			list_del(&i->list);
@@ -395,21 +394,21 @@ static int finalize_transaction(struct connection *conn,
 	}
 
 	while ((i = list_top(&trans->accessed, struct accessed_node, list))) {
-		set_tdb_key(i->node, &key);
 		if (i->ta_node) {
 			set_tdb_key(i->trans_name, &ta_key);
 			data = tdb_fetch(tdb_ctx, ta_key);
 			if (data.dptr) {
-				trace_tdb("read %s size %zu\n", ta_key.dptr,
+				trace_tdb("read %s size %zu\n", i->trans_name,
 					  ta_key.dsize + data.dsize);
 				hdr = (void *)data.dptr;
 				hdr->generation = ++generation;
 				flag = (i->generation == NO_GENERATION)
 				       ? NODE_CREATE : NODE_MODIFY;
+				set_tdb_key(i->node, &key);
 				*is_corrupt |= do_tdb_write(conn, &key, &data,
 							    NULL, flag, true);
 				talloc_free(data.dptr);
-				if (do_tdb_delete(conn, &ta_key, NULL))
+				if (db_delete(conn, i->trans_name, NULL))
 					*is_corrupt = true;
 			} else {
 				*is_corrupt = true;
@@ -422,7 +421,7 @@ static int finalize_transaction(struct connection *conn,
 			 */
 			*is_corrupt |= (i->generation == NO_GENERATION)
 				       ? false
-				       : do_tdb_delete(conn, &key, NULL);
+				       : db_delete(conn, i->node, NULL);
 		}
 		if (i->fire_watch)
 			fire_watches(conn, trans, i->node, NULL, i->watch_exact,
@@ -439,15 +438,12 @@ static int destroy_transaction(void *_transaction)
 {
 	struct transaction *trans = _transaction;
 	struct accessed_node *i;
-	TDB_DATA key;
 
 	wrl_ntransactions--;
 	trace_destroy(trans, "transaction");
 	while ((i = list_top(&trans->accessed, struct accessed_node, list))) {
-		if (i->ta_node) {
-			set_tdb_key(i->trans_name, &key);
-			do_tdb_delete(trans->conn, &key, NULL);
-		}
+		if (i->ta_node)
+			db_delete(trans->conn, i->trans_name, NULL);
 		list_del(&i->list);
 		talloc_free(i);
 	}
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:14:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:14:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540970.843200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQc-0000Xp-Rf; Tue, 30 May 2023 09:14:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540970.843200; Tue, 30 May 2023 09:14:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQc-0000Xh-OE; Tue, 30 May 2023 09:14:06 +0000
Received: by outflank-mailman (input) for mailman id 540970;
 Tue, 30 May 2023 09:14:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQb-0006nv-3u
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:14:05 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 52899e5e-feca-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:14:04 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E51281FD68;
 Tue, 30 May 2023 09:14:03 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id B80581341B;
 Tue, 30 May 2023 09:14:03 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id hqiBK1u+dWTvIgAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:14:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52899e5e-feca-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438043; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yvze7UvbZmnFIXQucfaUCARD9SmDS6V0ehB8xJuM3Kk=;
	b=HEzaLiVLJ8hT9VjON0PqE8qdC8Q5droOX0FZbiLslhJ6HUpFhx70Tof9dSsnwQ681W4R7v
	U/F4O7E/F/5C/1JEV6mDdYKtoM+s70O6l0h12hmr21lP4/nGzRf3KWPC+1J+gF1zHrzy2G
	u6NCMehIlXWxRe8jdsEX8uMe4zvROGs=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 05/11] tools/xenstore: rename do_tdb_write() and change parameter type
Date: Tue, 30 May 2023 11:13:27 +0200
Message-Id: <20230530091333.7678-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rename do_tdb_write() to db_write(), replacing the key parameter with
db_name specifying the name of the node in the database, and the data
parameter with a plain data pointer plus a length.

Apply the same key parameter type change to write_node_raw(), too.

This is in preparation for replacing TDB with a simpler data storage.
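The shape of the interface change can be sketched as follows. This is a
minimal illustration only: struct kv_store and the array-backed storage
are hypothetical stand-ins, not the real xenstored TDB code; the point is
that callers now pass the node name and a data pointer plus length
directly, instead of packing them into TDB_DATA first.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical toy backing store, standing in for TDB. */
struct kv_store {
    char name[64];
    char data[64];
    size_t size;
    int used;
};

static struct kv_store store[8];

/* New-style interface as described above: name + pointer + length. */
static int db_write(struct kv_store *db, const char *db_name,
                    const void *data, size_t size)
{
    for (int i = 0; i < 8; i++) {
        if (!db[i].used) {
            strncpy(db[i].name, db_name, sizeof(db[i].name) - 1);
            memcpy(db[i].data, data, size);
            db[i].size = size;
            db[i].used = 1;
            return 0;
        }
    }
    return -1;
}

static int db_delete(struct kv_store *db, const char *db_name)
{
    for (int i = 0; i < 8; i++) {
        if (db[i].used && !strcmp(db[i].name, db_name)) {
            db[i].used = 0;
            return 0;
        }
    }
    return -1;
}
```

With this calling convention the set_tdb_key() conversion moves inside
the database layer, which is what lets the later patches in the series
drop TDB_DATA from the callers entirely.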

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 67 +++++++++++++-------------
 tools/xenstore/xenstored_core.h        |  9 ++--
 tools/xenstore/xenstored_domain.c      |  4 +-
 tools/xenstore/xenstored_transaction.c | 20 ++++----
 4 files changed, 49 insertions(+), 51 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index a2454ad24d..8fbf686331 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -600,22 +600,27 @@ static unsigned int get_acc_domid(struct connection *conn, TDB_DATA *key,
 	       ? domid : conn->id;
 }
 
-int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
-		 struct node_account_data *acc, int flag, bool no_quota_check)
+int db_write(struct connection *conn, const char *db_name, void *data,
+	     size_t size, struct node_account_data *acc, int flag,
+	     bool no_quota_check)
 {
-	struct xs_tdb_record_hdr *hdr = (void *)data->dptr;
+	struct xs_tdb_record_hdr *hdr = data;
 	struct node_account_data old_acc = {};
 	unsigned int old_domid, new_domid;
 	int ret;
+	TDB_DATA key, dat;
 
+	set_tdb_key(db_name, &key);
+	dat.dptr = data;
+	dat.dsize = size;
 	if (!acc)
 		old_acc.memory = -1;
 	else
 		old_acc = *acc;
 
-	get_acc_data(key, &old_acc);
-	old_domid = get_acc_domid(conn, key, old_acc.domid);
-	new_domid = get_acc_domid(conn, key, hdr->perms[0].id);
+	get_acc_data(&key, &old_acc);
+	old_domid = get_acc_domid(conn, &key, old_acc.domid);
+	new_domid = get_acc_domid(conn, &key, hdr->perms[0].id);
 
 	/*
 	 * Don't check for ENOENT, as we want to be able to switch orphaned
@@ -623,35 +628,34 @@ int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
 	 */
 	if (old_acc.memory)
 		domain_memory_add_nochk(conn, old_domid,
-					-old_acc.memory - key->dsize);
-	ret = domain_memory_add(conn, new_domid,
-				data->dsize + key->dsize, no_quota_check);
+					-old_acc.memory - key.dsize);
+	ret = domain_memory_add(conn, new_domid, size + key.dsize,
+				no_quota_check);
 	if (ret) {
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
 			domain_memory_add_nochk(conn, old_domid,
-						old_acc.memory + key->dsize);
+						old_acc.memory + key.dsize);
 		return ret;
 	}
 
 	/* TDB should set errno, but doesn't even set ecode AFAICT. */
-	if (tdb_store(tdb_ctx, *key, *data,
+	if (tdb_store(tdb_ctx, key, dat,
 		      (flag == NODE_CREATE) ? TDB_INSERT : TDB_MODIFY) != 0) {
-		domain_memory_add_nochk(conn, new_domid,
-					-data->dsize - key->dsize);
+		domain_memory_add_nochk(conn, new_domid, -size - key.dsize);
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
 			domain_memory_add_nochk(conn, old_domid,
-						old_acc.memory + key->dsize);
+						old_acc.memory + key.dsize);
 		errno = EIO;
 		return errno;
 	}
-	trace_tdb("store %s size %zu\n", key->dptr, data->dsize + key->dsize);
+	trace_tdb("store %s size %zu\n", db_name, size + key.dsize);
 
 	if (acc) {
 		/* Don't use new_domid, as it might be a transaction node. */
 		acc->domid = hdr->perms[0].id;
-		acc->memory = data->dsize;
+		acc->memory = size;
 	}
 
 	return 0;
@@ -779,33 +783,34 @@ static bool read_node_can_propagate_errno(void)
 	return errno == ENOMEM || errno == ENOSPC;
 }
 
-int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
-		   int flag, bool no_quota_check)
+int write_node_raw(struct connection *conn, const char *db_name,
+		   struct node *node, int flag, bool no_quota_check)
 {
-	TDB_DATA data;
+	void *data;
+	size_t size;
 	void *p;
 	struct xs_tdb_record_hdr *hdr;
 
 	if (domain_adjust_node_perms(node))
 		return errno;
 
-	data.dsize = sizeof(*hdr)
+	size = sizeof(*hdr)
 		+ node->perms.num * sizeof(node->perms.p[0])
 		+ node->datalen + node->childlen;
 
 	/* Call domain_max_chk() in any case in order to record max values. */
-	if (domain_max_chk(conn, ACC_NODESZ, data.dsize) && !no_quota_check) {
+	if (domain_max_chk(conn, ACC_NODESZ, size) && !no_quota_check) {
 		errno = ENOSPC;
 		return errno;
 	}
 
-	data.dptr = talloc_size(node, data.dsize);
-	if (!data.dptr) {
+	data = talloc_size(node, size);
+	if (!data) {
 		errno = ENOMEM;
 		return errno;
 	}
 
-	hdr = (void *)data.dptr;
+	hdr = data;
 	hdr->generation = node->generation;
 	hdr->num_perms = node->perms.num;
 	hdr->datalen = node->datalen;
@@ -818,7 +823,8 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 	p += node->datalen;
 	memcpy(p, node->children, node->childlen);
 
-	if (do_tdb_write(conn, key, &data, &node->acc, flag, no_quota_check))
+	if (db_write(conn, db_name, data, size, &node->acc, flag,
+		     no_quota_check))
 		return EIO;
 
 	return 0;
@@ -832,13 +838,11 @@ static int write_node(struct connection *conn, struct node *node,
 		      int flag, bool no_quota_check)
 {
 	int ret;
-	TDB_DATA key;
 
 	if (access_node(conn, node, NODE_ACCESS_WRITE, &node->db_name))
 		return errno;
 
-	set_tdb_key(node->db_name, &key);
-	ret = write_node_raw(conn, &key, node, flag, no_quota_check);
+	ret = write_node_raw(conn, node->db_name, node, flag, no_quota_check);
 	if (ret && conn && conn->transaction) {
 		/*
 		 * Reverting access_node() is hard, so just fail the
@@ -3423,7 +3427,6 @@ void read_state_node(const void *ctx, const void *state)
 {
 	const struct xs_state_node *sn = state;
 	struct node *node, *parent;
-	TDB_DATA key;
 	char *name, *parentname;
 	unsigned int i;
 	struct connection conn = { .id = priv_domid };
@@ -3476,13 +3479,11 @@ void read_state_node(const void *ctx, const void *state)
 		if (add_child(node, parent, name))
 			barf("allocation error restoring node");
 
-		set_tdb_key(parentname, &key);
-		if (write_node_raw(NULL, &key, parent, NODE_MODIFY, true))
+		if (write_node_raw(NULL, parentname, parent, NODE_MODIFY, true))
 			barf("write parent error restoring node");
 	}
 
-	set_tdb_key(name, &key);
-	if (write_node_raw(NULL, &key, node,
+	if (write_node_raw(NULL, name, node,
 			   strcmp(name, "/") ? NODE_CREATE : NODE_MODIFY, true))
 		barf("write node error restoring node");
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 7fc6d73e5a..c4a995f745 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -237,8 +237,8 @@ static inline unsigned int get_node_owner(const struct node *node)
 }
 
 /* Write a node to the tdb data base. */
-int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
-		   int flag, bool no_quota_check);
+int write_node_raw(struct connection *conn, const char *db_name,
+		   struct node *node, int flag, bool no_quota_check);
 #define NODE_CREATE 0
 #define NODE_MODIFY 1
 
@@ -360,8 +360,9 @@ int remember_string(struct hashtable *hash, const char *str);
 
 /* Data base access functions. */
 void set_tdb_key(const char *name, TDB_DATA *key);
-int do_tdb_write(struct connection *conn, TDB_DATA *key, TDB_DATA *data,
-		 struct node_account_data *acc, int flag, bool no_quota_check);
+int db_write(struct connection *conn, const char *db_name, void *data,
+	     size_t size, struct node_account_data *acc, int flag,
+	     bool no_quota_check);
 int db_delete(struct connection *conn, const char *name,
 	      struct node_account_data *acc);
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 7bc49ec97c..e405ee31d3 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -511,19 +511,17 @@ static int domain_tree_remove_sub(const void *ctx, struct connection *conn,
 				  struct node *node, void *arg)
 {
 	struct domain *domain = arg;
-	TDB_DATA key;
 	int ret = WALK_TREE_OK;
 
 	if (node->perms.p[0].id != domain->domid)
 		return WALK_TREE_OK;
 
 	if (keep_orphans) {
-		set_tdb_key(node->name, &key);
 		domain_nbentry_dec(NULL, domain->domid);
 		node->perms.p[0].id = priv_domid;
 		node->acc.memory = 0;
 		domain_nbentry_inc(NULL, priv_domid);
-		if (write_node_raw(NULL, &key, node, NODE_MODIFY, true)) {
+		if (write_node_raw(NULL, node->name, node, NODE_MODIFY, true)) {
 			/* That's unfortunate. We only can try to continue. */
 			syslog(LOG_ERR,
 			       "error when moving orphaned node %s to dom0\n",
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index bf173f3d1d..029819e90c 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -228,7 +228,6 @@ int access_node(struct connection *conn, struct node *node,
 {
 	struct accessed_node *i = NULL;
 	struct transaction *trans;
-	TDB_DATA local_key;
 	int ret;
 	bool introduce = false;
 
@@ -286,8 +285,7 @@ int access_node(struct connection *conn, struct node *node,
 			i->generation = node->generation;
 			i->check_gen = true;
 			if (node->generation != NO_GENERATION) {
-				set_tdb_key(i->trans_name, &local_key);
-				ret = write_node_raw(conn, &local_key, node,
+				ret = write_node_raw(conn, i->trans_name, node,
 						     NODE_CREATE, true);
 				if (ret)
 					goto err;
@@ -358,7 +356,7 @@ static int finalize_transaction(struct connection *conn,
 				struct transaction *trans, bool *is_corrupt)
 {
 	struct accessed_node *i, *n;
-	TDB_DATA key, ta_key, data;
+	TDB_DATA key, data;
 	struct xs_tdb_record_hdr *hdr;
 	uint64_t gen;
 	int flag;
@@ -373,7 +371,7 @@ static int finalize_transaction(struct connection *conn,
 					return EIO;
 				gen = NO_GENERATION;
 			} else {
-				trace_tdb("read %s size %zu\n", key.dptr,
+				trace_tdb("read %s size %zu\n", i->node,
 					  key.dsize + data.dsize);
 				gen = hdr->generation;
 			}
@@ -395,18 +393,18 @@ static int finalize_transaction(struct connection *conn,
 
 	while ((i = list_top(&trans->accessed, struct accessed_node, list))) {
 		if (i->ta_node) {
-			set_tdb_key(i->trans_name, &ta_key);
-			data = tdb_fetch(tdb_ctx, ta_key);
+			set_tdb_key(i->trans_name, &key);
+			data = tdb_fetch(tdb_ctx, key);
 			if (data.dptr) {
 				trace_tdb("read %s size %zu\n", i->trans_name,
-					  ta_key.dsize + data.dsize);
+					  key.dsize + data.dsize);
 				hdr = (void *)data.dptr;
 				hdr->generation = ++generation;
 				flag = (i->generation == NO_GENERATION)
 				       ? NODE_CREATE : NODE_MODIFY;
-				set_tdb_key(i->node, &key);
-				*is_corrupt |= do_tdb_write(conn, &key, &data,
-							    NULL, flag, true);
+				*is_corrupt |= db_write(conn, i->node,
+							data.dptr, data.dsize,
+							NULL, flag, true);
 				talloc_free(data.dptr);
 				if (db_delete(conn, i->trans_name, NULL))
 					*is_corrupt = true;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:14:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:14:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540972.843209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQh-00011C-6k; Tue, 30 May 2023 09:14:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540972.843209; Tue, 30 May 2023 09:14:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQh-00010D-3F; Tue, 30 May 2023 09:14:11 +0000
Received: by outflank-mailman (input) for mailman id 540972;
 Tue, 30 May 2023 09:14:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQg-0006nv-Bf
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:14:10 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.220.28])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 55e4061e-feca-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:14:09 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8B26B21A78;
 Tue, 30 May 2023 09:14:09 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 61E351341B;
 Tue, 30 May 2023 09:14:09 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id MrCaFmG+dWT8IgAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:14:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55e4061e-feca-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438049; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2qS4AOkAcsCiQqUZmpFoLkz4Si4Jg5kN7awE7TAatN8=;
	b=G9Ai5i1flNhm3A3LcvbIXQk6brAg2AMTkkjRCmSvAElodZ4CWvFqAX1ztlU+5jTnscqsHQ
	FSaYfw2QQEw9ZBvTbvlyoJED30dvHjDYuSYcuFJWkR2qHwEWZHZZddu9FR4RAoyJwoNeid
	tt7i1/bo+HEf42H/BG/IU/79q5y0AZc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 06/11] tools/xenstore: switch get_acc_data() to use name instead of key
Date: Tue, 30 May 2023 11:13:28 +0200
Message-Id: <20230530091333.7678-7-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Eliminate further TDB_DATA usage by switching get_acc_data() and
get_acc_domid() from taking a TDB key to taking the name of the node in
the database as a parameter.

This is in preparation for replacing TDB with a simpler data storage.
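The name-based ownership check introduced by this patch can be sketched
in isolation. The struct connection below is a simplified stand-in, but
the logic mirrors the patched get_acc_domid(): node names starting with
'/' or '@' keep their recorded owner domid, while transaction-local names
(prefixed with a transaction count, e.g. "123/local/...") are accounted
to the connection's domain.

```c
#include <assert.h>

/* Simplified stand-in for the real xenstored connection struct. */
struct connection {
    unsigned int id;
};

static unsigned int get_acc_domid(const struct connection *conn,
                                  const char *name, unsigned int domid)
{
    /* Regular nodes ('/...' or '@...') belong to the recorded domid;
     * transaction copies are charged to the connection's domain. */
    return (!conn || name[0] == '/' || name[0] == '@') ? domid : conn->id;
}
```

Testing the first character of the name is sufficient here because only
transaction nodes carry a numeric prefix, as the comment in the patched
code explains.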

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 8fbf686331..522b2bbf5f 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -566,19 +566,20 @@ void set_tdb_key(const char *name, TDB_DATA *key)
 	key->dsize = strlen(name);
 }
 
-static void get_acc_data(TDB_DATA *key, struct node_account_data *acc)
+static void get_acc_data(const char *name, struct node_account_data *acc)
 {
-	TDB_DATA old_data;
+	TDB_DATA key, old_data;
 	struct xs_tdb_record_hdr *hdr;
 
 	if (acc->memory < 0) {
-		old_data = tdb_fetch(tdb_ctx, *key);
+		set_tdb_key(name, &key);
+		old_data = tdb_fetch(tdb_ctx, key);
 		/* No check for error, as the node might not exist. */
 		if (old_data.dptr == NULL) {
 			acc->memory = 0;
 		} else {
-			trace_tdb("read %s size %zu\n", key->dptr,
-				  old_data.dsize + key->dsize);
+			trace_tdb("read %s size %zu\n", name,
+				  old_data.dsize + key.dsize);
 			hdr = (void *)old_data.dptr;
 			acc->memory = old_data.dsize;
 			acc->domid = hdr->perms[0].id;
@@ -593,11 +594,10 @@ static void get_acc_data(TDB_DATA *key, struct node_account_data *acc)
  * count prepended (e.g. 123/local/domain/...). So testing for the node's
  * key not to start with "/" or "@" is sufficient.
  */
-static unsigned int get_acc_domid(struct connection *conn, TDB_DATA *key,
+static unsigned int get_acc_domid(struct connection *conn, const char *name,
 				  unsigned int domid)
 {
-	return (!conn || key->dptr[0] == '/' || key->dptr[0] == '@')
-	       ? domid : conn->id;
+	return (!conn || name[0] == '/' || name[0] == '@') ? domid : conn->id;
 }
 
 int db_write(struct connection *conn, const char *db_name, void *data,
@@ -618,9 +618,9 @@ int db_write(struct connection *conn, const char *db_name, void *data,
 	else
 		old_acc = *acc;
 
-	get_acc_data(&key, &old_acc);
-	old_domid = get_acc_domid(conn, &key, old_acc.domid);
-	new_domid = get_acc_domid(conn, &key, hdr->perms[0].id);
+	get_acc_data(db_name, &old_acc);
+	old_domid = get_acc_domid(conn, db_name, old_acc.domid);
+	new_domid = get_acc_domid(conn, db_name, hdr->perms[0].id);
 
 	/*
 	 * Don't check for ENOENT, as we want to be able to switch orphaned
@@ -675,7 +675,7 @@ int db_delete(struct connection *conn, const char *name,
 		acc->memory = -1;
 	}
 
-	get_acc_data(&key, acc);
+	get_acc_data(name, acc);
 
 	if (tdb_delete(tdb_ctx, key)) {
 		errno = EIO;
@@ -684,7 +684,7 @@ int db_delete(struct connection *conn, const char *name,
 	trace_tdb("delete %s\n", name);
 
 	if (acc->memory) {
-		domid = get_acc_domid(conn, &key, acc->domid);
+		domid = get_acc_domid(conn, name, acc->domid);
 		domain_memory_add_nochk(conn, domid, -acc->memory - key.dsize);
 	}
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:14:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:14:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540978.843220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQs-0001mg-JN; Tue, 30 May 2023 09:14:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540978.843220; Tue, 30 May 2023 09:14:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vQs-0001lo-Em; Tue, 30 May 2023 09:14:22 +0000
Received: by outflank-mailman (input) for mailman id 540978;
 Tue, 30 May 2023 09:14:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQr-0006nv-Cq
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:14:21 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5c8f7838-feca-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:14:20 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B92371FD68;
 Tue, 30 May 2023 09:14:20 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 8CABA1341B;
 Tue, 30 May 2023 09:14:20 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id jE0BIWy+dWQXIwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:14:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c8f7838-feca-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438060; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pjORuoarGwKPikziqKKH/AVmLzsVU2PLqxuLJ4KIqno=;
	b=eyzcjnB5RKUIttvDnaGpYy2j7xiggCSmWo8L6UwkBD3P+ZAQI4yt/NzO41yrot2UNjVKsF
	2HksuslRvKaueUwAKL19RRk6LKSozXfNBF27+f/b655fqpotLsUhekZKD51uiZSXW/SFdo
	nTBa22MPJyH00p+1z5Mf0JxQQ/Vl5Dw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 08/11] tools/xenstore: make hashtable key and value parameters const
Date: Tue, 30 May 2023 11:13:30 +0200
Message-Id: <20230530091333.7678-9-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The key is never modified by the hashtable code, so it should be marked
as const.
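The effect of the const-correctness change can be sketched with a toy
hashtable. The fixed-size linear table below is illustrative only (the
real xenstore hashtable hashes and chains entries); what it shows is that
once the stored key is "const void *", callers may pass read-only data
such as string literals without a cast.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy entry: the key is only compared, never written, hence const. */
struct toy_entry {
    const void *k;
    void *v;
};

static struct toy_entry table[4];
static int count;

static int hashtable_add(const void *k, void *v)
{
    if (count >= 4)
        return 1;             /* table full */
    table[count].k = k;
    table[count].v = v;
    count++;
    return 0;
}

static void *hashtable_search(const void *k)
{
    /* Keys are assumed to be C strings in this sketch. */
    for (int i = 0; i < count; i++)
        if (!strcmp(table[i].k, k))
            return table[i].v;
    return NULL;
}
```

Without the const qualifier, an add of a string-literal key would need a
cast that silently discards const, hiding potential misuse.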

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/hashtable.c | 5 +++--
 tools/xenstore/hashtable.h | 4 ++--
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index 11f6bf8f15..9daddd9782 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -11,7 +11,8 @@
 
 struct entry
 {
-    void *k, *v;
+    const void *k;
+    void *v;
     unsigned int h;
     struct entry *next;
 };
@@ -140,7 +141,7 @@ static int hashtable_expand(struct hashtable *h)
     return 0;
 }
 
-int hashtable_add(struct hashtable *h, void *k, void *v)
+int hashtable_add(struct hashtable *h, const void *k, void *v)
 {
     /* This method allows duplicate keys - but they shouldn't be used */
     unsigned int index;
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index 5a2cc4a4be..792f6cda7b 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -48,8 +48,8 @@ create_hashtable(const void *ctx, const char *name,
  * If in doubt, remove before insert.
  */
 
-int 
-hashtable_add(struct hashtable *h, void *k, void *v);
+int
+hashtable_add(struct hashtable *h, const void *k, void *v);
 
 /*****************************************************************************
  * hashtable_search
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:15:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:15:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540989.843230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vS3-00032e-1U; Tue, 30 May 2023 09:15:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540989.843230; Tue, 30 May 2023 09:15:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vS2-00032V-Uh; Tue, 30 May 2023 09:15:34 +0000
Received: by outflank-mailman (input) for mailman id 540989;
 Tue, 30 May 2023 09:15:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8xzU=BT=citrix.com=prvs=507ffd061=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q3vS1-00030f-D5
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:15:33 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8684e649-feca-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:15:32 +0200 (CEST)
Received: from mail-mw2nam10lp2107.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.107])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 May 2023 05:15:30 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by SJ2PR03MB7473.namprd03.prod.outlook.com (2603:10b6:a03:554::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 09:15:28 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 09:15:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8684e649-feca-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685438132;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=SAtsdJw7sX2UpLHWLe/N6LYJHFMEN5KhYZ+jyVzbeN4=;
  b=CbxpENWGDa0s3BwK5sx23qKxagH84M9KS52j6Nj7eMOomdI7iMAFZzxH
   vyCLheUnzCt4LCaa5hlhn3tXlF7ybWhjibm7yzW3w8SPTwH72KIRiiaZW
   G7JnmN5HTp/mFtXgd0sziv66I8YA6HlXeWuk2D1A3eGcIjzTN2o0F07LK
   c=;
X-IronPort-RemoteIP: 104.47.55.107
X-IronPort-MID: 110781381
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:apVXM6qd7D3DRSnMEQIXK6eaO/VeBmI1ZBIvgKrLsJaIsI4StFCzt
 garIBmGPKrcZjDzKYh1aIji/BwPu5bWy9ZqSAVkrSpmRHhG9puZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKq04GtwUmAWP6gR5weDzCFNVfrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXABBUYiGxiqWX/LOcVuly3dp5B5fuDrpK7xmMzRmBZRonabbqZvySoPpnhnI3jM0IGuvCb
 c0EbzYpdA7HfxBEJlYQDtQ5gfusgX78NTZfrTp5p4JuuzSVkFM3jeWraYKKEjCJbZw9ckKwj
 2TK5WnmRDodM8SS02Gt+XOwnO7f2yj8Xer+EZXhr6Y10ATIljV75Bs+D3KprvagtUuEe/1ND
 kgT1CBy/YoZ6xn+JjX6d1jiyJKehTYeUddNF+wx6CmW17HZpQ2eAwAsUTppeNEg8sgsSlQCx
 lKP2t/kGzFrmLmUUm6GsKeZqyuoPioYJnNEYjULJTbp+PHmqYA3yxfQFNBqFfftisWvQGmvh
 TeXsCI5mrMfy9YR0Lm29kzGhDTqoYXVSgky5UPcWWfNAh5FWbNJrreAsTDzhcus5q7AJrVdl
 BDoQ/Sj0d0=
IronPort-HdrOrdr: A9a23:1CeqgqG984fIPvu+pLqEwceALOsnbusQ8zAXPiFKOH5om6mj/f
 xG88536faZslossQgb6La90cq7MBDhHPxOgLX5VI3KNDUO3lHGEGgI1+vfKlPbdREWwdQtsJ
 uII5IUNDQpNykDsS8h2njeLz/8+qjizEl1v5am856yd3AQV51d
X-Talos-CUID: 9a23:uEVrQ2OMp7MU5+5DeHBrxXwZXeEZc1rBzGmLPRWTV2NbcejA
X-Talos-MUID: 9a23:3ZuIZwRlDWOtn0h5RXS2rSNtO95X5p6OJ1IolZIosZXUFwdvbmI=
X-IronPort-AV: E=Sophos;i="6.00,203,1681185600"; 
   d="scan'208";a="110781381"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Bq/i3zi1iVB9iFkT2T3rezxP33svVrUAJxN3o0dVavA=;
 b=FJLTQlv84NdAXqjjKdIu9gwyYVpDCtoWUPfOlUOwP2VZsU/dwQLiuSOVOH1pEG1uFszvpCm6QivBCowNWN/ZFdqeMHimRKvEzX3IirPLfPXYDzjwWAQtuxQA4dUZ81QWgKmv6E6xj9gl2kgzKKorpvzdJM2a1R4CWX/Tx66LIm4=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <2a378301-d261-3b31-0561-c3f00e47efbb@citrix.com>
Date: Tue, 30 May 2023 10:15:21 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/4] x86/spec-ctrl: Rename retpoline_safe() to
 retpoline_calculations()
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-2-andrew.cooper3@citrix.com>
 <859c2409-f0ee-8fc7-5348-fd1678e91b4e@suse.com>
In-Reply-To: <859c2409-f0ee-8fc7-5348-fd1678e91b4e@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P265CA0061.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:2af::7) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|SJ2PR03MB7473:EE_
X-MS-Office365-Filtering-Correlation-Id: ad9d9ed9-a0ce-4912-4af3-08db60ee6872
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad9d9ed9-a0ce-4912-4af3-08db60ee6872
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 09:15:27.7162
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: X3xJGnmXIH3OLpBwDBWJ1lXyANLTP37YjY/d3GRThARGMAm4ALwP/pO2GVm9FLoxm1XoYmf2iO/T/yL1cZLowbo3OjZyLZEwIJqz6356Y0A=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR03MB7473

On 30/05/2023 10:07 am, Jan Beulich wrote:
> On 26.05.2023 13:06, Andrew Cooper wrote:
>> This is prep work, split out to simplify the diff on the following change.
>>
>>  * Rename to retpoline_calculations(), and call unconditionally.  It is
>>    shortly going to synthesize missing enumerations required for guest safety.
>>  * For Broadwell, store the ucode revision calculation in a variable and fall
>>    out of the bottom of the switch statement.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
> I guess subsequent patches will teach me why ...
>
>> @@ -681,6 +682,12 @@ static bool __init retpoline_safe(void)
>>                 boot_cpu_data.x86_model);
>>          return false;
>>      }
>> +
>> +    /* Only Broadwell gets here. */
>> +    if ( safe )
>> +        return true;
>> +
>> +    return false;
> ... this isn't just "return safe;".

Indeed they will.

~Andrew
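
For readers following the review, here is a minimal toy model of the shape under
discussion (a sketch only, not the actual Xen code): the quoted hunk deliberately
ends with an explicit if/return pair rather than "return safe;", because a later
patch is expected to insert more work between the calculation and the return.

```c
#include <stdbool.h>

/*
 * Toy stand-in for the tail of retpoline_calculations(): the expanded
 * form is behaviourally identical to "return safe;" today, but leaves
 * room for extra logic to be inserted before the final return.
 */
static bool retpoline_tail(bool safe)
{
    /* Only Broadwell gets here. */
    if (safe)
        return true;

    return false;
}
```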


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:17:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:17:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540994.843240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vUA-0003hq-Dc; Tue, 30 May 2023 09:17:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540994.843240; Tue, 30 May 2023 09:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vUA-0003hj-AZ; Tue, 30 May 2023 09:17:46 +0000
Received: by outflank-mailman (input) for mailman id 540994;
 Tue, 30 May 2023 09:17:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQn-0006QB-8W
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:14:17 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 593d96f0-feca-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:14:15 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 27F531FD85;
 Tue, 30 May 2023 09:14:15 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id EF9EE1341B;
 Tue, 30 May 2023 09:14:14 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id qDIuOWa+dWQKIwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:14:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 593d96f0-feca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438055; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vR1hVHAjF1Inn8SNEDI2a5KzsJccXEFhtgtfIPwvc80=;
	b=CU+qGmFXQnb8N863IkBKFvZ/j9B3dsYmV6uk3b3CPw0QaolDHtrl9wCL74Xc1xEFKDMm96
	UKwlgdtwqBjZVtXL9cOyfe0lXHGz9C4LotYyt/ewG0QwHHZUURKV2INkGpc4ecFnQORhCP
	cOCNGOLEUOp5d0dbU0Pr4lq8NtT64Dc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 07/11] tools/xenstore: add wrapper for tdb_fetch()
Date: Tue, 30 May 2023 11:13:29 +0200
Message-Id: <20230530091333.7678-8-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a wrapper function for tdb_fetch() taking the name of the node in
the database as a parameter. Let it return a data pointer and the
length of the data via a length pointer provided as an additional
parameter.

This enables making set_tdb_key() static again.

This is in preparation for replacing TDB with a simpler data store.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 55 ++++++++++++++++----------
 tools/xenstore/xenstored_core.h        |  3 +-
 tools/xenstore/xenstored_transaction.c | 34 ++++++++--------
 3 files changed, 51 insertions(+), 41 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 522b2bbf5f..12c584f09b 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -85,7 +85,7 @@ bool keep_orphans = false;
 static int reopen_log_pipe[2];
 static int reopen_log_pipe0_pollfd_idx = -1;
 char *tracefile = NULL;
-TDB_CONTEXT *tdb_ctx = NULL;
+static TDB_CONTEXT *tdb_ctx = NULL;
 unsigned int trace_flags = TRACE_OBJ | TRACE_IO;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
@@ -556,7 +556,7 @@ static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
 	}
 }
 
-void set_tdb_key(const char *name, TDB_DATA *key)
+static void set_tdb_key(const char *name, TDB_DATA *key)
 {
 	/*
 	 * Dropping const is fine here, as the key will never be modified
@@ -566,25 +566,39 @@ void set_tdb_key(const char *name, TDB_DATA *key)
 	key->dsize = strlen(name);
 }
 
+void *db_fetch(const char *db_name, size_t *size)
+{
+	TDB_DATA key, data;
+
+	set_tdb_key(db_name, &key);
+	data = tdb_fetch(tdb_ctx, key);
+	if (!data.dptr)
+		errno = (tdb_error(tdb_ctx) == TDB_ERR_NOEXIST) ? ENOENT : EIO;
+	else
+		*size = data.dsize;
+
+	return data.dptr;
+}
+
 static void get_acc_data(const char *name, struct node_account_data *acc)
 {
-	TDB_DATA key, old_data;
+	void *old_data;
+	size_t size;
 	struct xs_tdb_record_hdr *hdr;
 
 	if (acc->memory < 0) {
-		set_tdb_key(name, &key);
-		old_data = tdb_fetch(tdb_ctx, key);
+		old_data = db_fetch(name, &size);
 		/* No check for error, as the node might not exist. */
-		if (old_data.dptr == NULL) {
+		if (old_data == NULL) {
 			acc->memory = 0;
 		} else {
 			trace_tdb("read %s size %zu\n", name,
-				  old_data.dsize + key.dsize);
-			hdr = (void *)old_data.dptr;
-			acc->memory = old_data.dsize;
+				  size + strlen(name));
+			hdr = old_data;
+			acc->memory = size;
 			acc->domid = hdr->perms[0].id;
 		}
-		talloc_free(old_data.dptr);
+		talloc_free(old_data);
 	}
 }
 
@@ -698,7 +712,8 @@ int db_delete(struct connection *conn, const char *name,
 struct node *read_node(struct connection *conn, const void *ctx,
 		       const char *name)
 {
-	TDB_DATA key, data;
+	void *data;
+	size_t size;
 	struct xs_tdb_record_hdr *hdr;
 	struct node *node;
 	const char *db_name;
@@ -717,29 +732,27 @@ struct node *read_node(struct connection *conn, const void *ctx,
 	}
 
 	db_name = transaction_prepend(conn, name);
-	set_tdb_key(db_name, &key);
+	data = db_fetch(db_name, &size);
 
-	data = tdb_fetch(tdb_ctx, key);
-
-	if (data.dptr == NULL) {
-		if (tdb_error(tdb_ctx) == TDB_ERR_NOEXIST) {
+	if (data == NULL) {
+		if (errno == ENOENT) {
 			node->generation = NO_GENERATION;
 			err = access_node(conn, node, NODE_ACCESS_READ, NULL);
 			errno = err ? : ENOENT;
 		} else {
-			log("TDB error on read: %s", tdb_errorstr(tdb_ctx));
+			log("DB error on read: %s", strerror(errno));
 			errno = EIO;
 		}
 		goto error;
 	}
 
-	trace_tdb("read %s size %zu\n", key.dptr, data.dsize + key.dsize);
+	trace_tdb("read %s size %zu\n", db_name, size + strlen(db_name));
 
 	node->parent = NULL;
-	talloc_steal(node, data.dptr);
+	talloc_steal(node, data);
 
 	/* Datalen, childlen, number of permissions */
-	hdr = (void *)data.dptr;
+	hdr = data;
 	node->generation = hdr->generation;
 	node->perms.num = hdr->num_perms;
 	node->datalen = hdr->datalen;
@@ -748,7 +761,7 @@ struct node *read_node(struct connection *conn, const void *ctx,
 	/* Permissions are struct xs_permissions. */
 	node->perms.p = hdr->perms;
 	node->acc.domid = get_node_owner(node);
-	node->acc.memory = data.dsize;
+	node->acc.memory = size;
 	if (domain_adjust_node_perms(node))
 		goto error;
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index c4a995f745..e922dce775 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -310,7 +310,6 @@ do {						\
 		trace("tdb: " __VA_ARGS__);	\
 } while (0)
 
-extern TDB_CONTEXT *tdb_ctx;
 extern int dom0_domid;
 extern int dom0_event;
 extern int priv_domid;
@@ -359,7 +358,7 @@ extern xengnttab_handle **xgt_handle;
 int remember_string(struct hashtable *hash, const char *str);
 
 /* Data base access functions. */
-void set_tdb_key(const char *name, TDB_DATA *key);
+void *db_fetch(const char *db_name, size_t *size);
 int db_write(struct connection *conn, const char *db_name, void *data,
 	     size_t size, struct node_account_data *acc, int flag,
 	     bool no_quota_check);
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index 029819e90c..c51edf432f 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -356,26 +356,26 @@ static int finalize_transaction(struct connection *conn,
 				struct transaction *trans, bool *is_corrupt)
 {
 	struct accessed_node *i, *n;
-	TDB_DATA key, data;
+	void *data;
+	size_t size;
 	struct xs_tdb_record_hdr *hdr;
 	uint64_t gen;
 	int flag;
 
 	list_for_each_entry_safe(i, n, &trans->accessed, list) {
 		if (i->check_gen) {
-			set_tdb_key(i->node, &key);
-			data = tdb_fetch(tdb_ctx, key);
-			hdr = (void *)data.dptr;
-			if (!data.dptr) {
-				if (tdb_error(tdb_ctx) != TDB_ERR_NOEXIST)
-					return EIO;
+			data = db_fetch(i->node, &size);
+			hdr = data;
+			if (!data) {
+				if (errno != ENOENT)
+					return errno;
 				gen = NO_GENERATION;
 			} else {
 				trace_tdb("read %s size %zu\n", i->node,
-					  key.dsize + data.dsize);
+					  strlen(i->node) + size);
 				gen = hdr->generation;
 			}
-			talloc_free(data.dptr);
+			talloc_free(data);
 			if (i->generation != gen)
 				return EAGAIN;
 		}
@@ -393,19 +393,17 @@ static int finalize_transaction(struct connection *conn,
 
 	while ((i = list_top(&trans->accessed, struct accessed_node, list))) {
 		if (i->ta_node) {
-			set_tdb_key(i->trans_name, &key);
-			data = tdb_fetch(tdb_ctx, key);
-			if (data.dptr) {
+			data = db_fetch(i->trans_name, &size);
+			if (data) {
 				trace_tdb("read %s size %zu\n", i->trans_name,
-					  key.dsize + data.dsize);
-				hdr = (void *)data.dptr;
+					  strlen(i->trans_name) + size);
+				hdr = data;
 				hdr->generation = ++generation;
 				flag = (i->generation == NO_GENERATION)
 				       ? NODE_CREATE : NODE_MODIFY;
-				*is_corrupt |= db_write(conn, i->node,
-							data.dptr, data.dsize,
-							NULL, flag, true);
-				talloc_free(data.dptr);
+				*is_corrupt |= db_write(conn, i->node, data,
+							size, NULL, flag, true);
+				talloc_free(data);
 				if (db_delete(conn, i->trans_name, NULL))
 					*is_corrupt = true;
 			} else {
-- 
2.35.3

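The wrapper pattern this patch introduces can be sketched in isolation: hide the
backend's key/value pair type behind a (name, &size) interface, and map the
backend's "no such record" error onto errno == ENOENT (anything else becomes
EIO). The names below mirror the patch; the toy backend and its node name are
invented for illustration and are not the xenstored code.

```c
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for tdb_fetch()/tdb_error(): one fixed record. */
static void *backend_fetch(const char *name, size_t *size, int *noexist)
{
    static const char val[] = "value";

    *noexist = 0;
    if (strcmp(name, "local/domain/0") != 0) {
        *noexist = 1;
        return NULL;
    }

    *size = sizeof(val);
    void *p = malloc(sizeof(val));
    memcpy(p, val, sizeof(val));
    return p;
}

/* Shape of the wrapper: NULL return plus errno, like the patch's db_fetch(). */
void *db_fetch(const char *db_name, size_t *size)
{
    int noexist;
    void *data = backend_fetch(db_name, size, &noexist);

    if (!data)
        errno = noexist ? ENOENT : EIO;

    return data;
}
```

Callers then test only the pointer and errno, with no backend types leaking out,
which is what lets set_tdb_key() become a private detail of the storage layer.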


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:18:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.540999.843249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vUi-0004Df-Le; Tue, 30 May 2023 09:18:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 540999.843249; Tue, 30 May 2023 09:18:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vUi-0004DY-Iu; Tue, 30 May 2023 09:18:20 +0000
Received: by outflank-mailman (input) for mailman id 540999;
 Tue, 30 May 2023 09:18:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vQy-0006QB-6Z
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:14:28 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5fe6ce42-feca-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:14:26 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 52AE11FD68;
 Tue, 30 May 2023 09:14:26 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 255AB1341B;
 Tue, 30 May 2023 09:14:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id Wky5B3K+dWQgIwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:14:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5fe6ce42-feca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438066; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Nn+RzqKwoWELmwzFwHD2wqJWR2WvrgIjhm6QTn0HxkY=;
	b=XR7ELocM9rR1j1jqXlfZ/vmeXZBCtZKD9ux/KqGv0tgb1PX5OkqsxSO/nIk1+WEXmR8OJL
	R4oPf6R8j6kh5cZQU9AdXitfloUxNJan7syXnfzmmaB+yS0T40ywiYMBzwWBHTLMC2rNpM
	JWz2l1YSdXWbOr7ZYh5RWuNg/IMDmGI=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 09/11] tools/xenstore: add hashtable_replace() function
Date: Tue, 30 May 2023 11:13:31 +0200
Message-Id: <20230530091333.7678-10-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For an effective way to replace a hashtable entry, add a new function
hashtable_replace().

While at it, let hashtable_add() fail if an entry with the specified
key already exists.

This is in preparation for replacing TDB with a simpler data store.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/hashtable.c | 52 ++++++++++++++++++++++++++++++--------
 tools/xenstore/hashtable.h | 16 ++++++++++++
 2 files changed, 58 insertions(+), 10 deletions(-)

diff --git a/tools/xenstore/hashtable.c b/tools/xenstore/hashtable.c
index 9daddd9782..f358bec5ae 100644
--- a/tools/xenstore/hashtable.c
+++ b/tools/xenstore/hashtable.c
@@ -141,11 +141,32 @@ static int hashtable_expand(struct hashtable *h)
     return 0;
 }
 
+static struct entry *hashtable_search_entry(const struct hashtable *h,
+					    const void *k)
+{
+    struct entry *e;
+    unsigned int hashvalue, index;
+
+    hashvalue = hash(h, k);
+    index = indexFor(h->tablelength,hashvalue);
+    e = h->table[index];
+    while (NULL != e)
+    {
+        /* Check hash value to short circuit heavier comparison */
+        if ((hashvalue == e->h) && (h->eqfn(k, e->k))) return e;
+        e = e->next;
+    }
+    return NULL;
+}
+
 int hashtable_add(struct hashtable *h, const void *k, void *v)
 {
-    /* This method allows duplicate keys - but they shouldn't be used */
     unsigned int index;
     struct entry *e;
+
+    if (hashtable_search_entry(h, k))
+        return EEXIST;
+
     if (++(h->entrycount) > h->loadlimit)
     {
         /* Ignore the return value. If expand fails, we should
@@ -176,17 +197,28 @@ int hashtable_add(struct hashtable *h, const void *k, void *v)
 void *hashtable_search(const struct hashtable *h, const void *k)
 {
     struct entry *e;
-    unsigned int hashvalue, index;
-    hashvalue = hash(h,k);
-    index = indexFor(h->tablelength,hashvalue);
-    e = h->table[index];
-    while (NULL != e)
+
+    e = hashtable_search_entry(h, k);
+    return e ? e->v : NULL;
+}
+
+int hashtable_replace(struct hashtable *h, const void *k, void *v)
+{
+    struct entry *e;
+
+    e = hashtable_search_entry(h, k);
+    if (!e)
+        return ENOENT;
+
+    if (h->flags & HASHTABLE_FREE_VALUE)
     {
-        /* Check hash value to short circuit heavier comparison */
-        if ((hashvalue == e->h) && (h->eqfn(k, e->k))) return e->v;
-        e = e->next;
+        talloc_free(e->v);
+        talloc_steal(e, v);
     }
-    return NULL;
+
+    e->v = v;
+
+    return 0;
 }
 
 void
diff --git a/tools/xenstore/hashtable.h b/tools/xenstore/hashtable.h
index 792f6cda7b..214aea1b3d 100644
--- a/tools/xenstore/hashtable.h
+++ b/tools/xenstore/hashtable.h
@@ -51,6 +51,22 @@ create_hashtable(const void *ctx, const char *name,
 int
 hashtable_add(struct hashtable *h, const void *k, void *v);
 
+/*****************************************************************************
+ * hashtable_replace
+
+ * @name        hashtable_replace
+ * @param   h   the hashtable to insert into
+ * @param   k   the key - hashtable claims ownership and will free on removal
+ * @param   v   the value - does not claim ownership
+ * @return      zero for successful insertion
+ *
+ * This function checks that an entry with the given key is present
+ * before replacing its value.
+ */
+
+int
+hashtable_replace(struct hashtable *h, const void *k, void *v);
+
 /*****************************************************************************
  * hashtable_search
    
-- 
2.35.3

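The new add/replace split can be demonstrated with a toy table (an illustrative
sketch, not the xenstored hashtable): after this patch, hashtable_add() refuses
duplicate keys with EEXIST, while hashtable_replace() only succeeds when the key
already exists and returns ENOENT otherwise, so the two operations no longer
overlap.

```c
#include <errno.h>
#include <string.h>

#define CAP 8

/* Tiny array-backed "table" standing in for the real hashtable. */
struct toy_table {
    const char *k[CAP];
    int v[CAP];
    int n;
};

static int toy_find(const struct toy_table *h, const char *key)
{
    for (int i = 0; i < h->n; i++)
        if (!strcmp(h->k[i], key))
            return i;
    return -1;
}

/* Mirrors the patched hashtable_add(): duplicates are now rejected. */
int toy_add(struct toy_table *h, const char *key, int val)
{
    if (toy_find(h, key) >= 0)
        return EEXIST;
    h->k[h->n] = key;
    h->v[h->n++] = val;
    return 0;
}

/* Mirrors hashtable_replace(): requires an existing entry. */
int toy_replace(struct toy_table *h, const char *key, int val)
{
    int i = toy_find(h, key);

    if (i < 0)
        return ENOENT;
    h->v[i] = val;
    return 0;
}
```

Keeping the two cases disjoint is what makes the later in-memory node store able
to distinguish "create" from "modify" without a prior lookup at each call site.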


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:18:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541000.843255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vUj-0004IJ-0H; Tue, 30 May 2023 09:18:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541000.843255; Tue, 30 May 2023 09:18:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vUi-0004HO-Rq; Tue, 30 May 2023 09:18:20 +0000
Received: by outflank-mailman (input) for mailman id 541000;
 Tue, 30 May 2023 09:18:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vR5-0006QB-1o
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:14:35 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [2001:67c:2178:6::1c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 634ac66b-feca-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:14:32 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 052C021A78;
 Tue, 30 May 2023 09:14:32 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id C1CA21341B;
 Tue, 30 May 2023 09:14:31 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id lCScLXe+dWQvIwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:14:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 634ac66b-feca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438072; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jGBTv27wqYZxcCghWenxih8jO9ehPYJw0cIhib+j0Oc=;
	b=mSZ1z3JEaFcs4msBCAWI/Aiw2gq/fY+swh5rs+4zk0IhAWi/TntiN+GH0zLttJhewPO5Mw
	VQyVJOYyAal4Sab6zcXEuLP/ewFpJwgoYI4ixKgchkoP/q/UL4Sfv3OQ28MBUC/mj6n0jy
	jSQv0W05KooLMdXhjvlc+5NLebo2skE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 10/11] tools/xenstore: drop use of tdb
Date: Tue, 30 May 2023 11:13:32 +0200
Message-Id: <20230530091333.7678-11-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today all Xenstore nodes are stored in a TDB database, which has
several disadvantages:

- it uses a fixed-size hash table, resulting in high memory overhead
  for small installations with only very few VMs, and a rather large
  performance hit due to many collisions on systems with lots of VMs

- Xenstore is single-threaded, while TDB is designed for
  multi-threaded use cases, resulting in much higher code complexity
  than needed

- some special Xenstore use cases cannot be implemented effectively
  with TDB, while a database tailored to Xenstore could simplify some
  handling (e.g. transactions) a lot

So drop TDB and store the nodes directly in memory, making them
easily accessible. Use a hash table for fast lookup of nodes by their
full path.

For now only replace TDB, keeping the current access functions.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c        | 153 ++++++++++---------------
 tools/xenstore/xenstored_core.h        |   5 +-
 tools/xenstore/xenstored_transaction.c |   1 -
 3 files changed, 62 insertions(+), 97 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 12c584f09b..9b44de9d31 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -53,7 +53,6 @@
 #include "xenstored_domain.h"
 #include "xenstored_control.h"
 #include "xenstored_lu.h"
-#include "tdb.h"
 
 #ifndef NO_SOCKETS
 #if defined(HAVE_SYSTEMD)
@@ -85,7 +84,7 @@ bool keep_orphans = false;
 static int reopen_log_pipe[2];
 static int reopen_log_pipe0_pollfd_idx = -1;
 char *tracefile = NULL;
-static TDB_CONTEXT *tdb_ctx = NULL;
+static struct hashtable *nodes;
 unsigned int trace_flags = TRACE_OBJ | TRACE_IO;
 
 static const char *sockmsg_string(enum xsd_sockmsg_type type);
@@ -556,28 +555,29 @@ static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
 	}
 }
 
-static void set_tdb_key(const char *name, TDB_DATA *key)
-{
-	/*
-	 * Dropping const is fine here, as the key will never be modified
-	 * by TDB.
-	 */
-	key->dptr = (char *)name;
-	key->dsize = strlen(name);
-}
-
 void *db_fetch(const char *db_name, size_t *size)
 {
-	TDB_DATA key, data;
+	struct xs_tdb_record_hdr *hdr;
+	void *p;
 
-	set_tdb_key(db_name, &key);
-	data = tdb_fetch(tdb_ctx, key);
-	if (!data.dptr)
-		errno = (tdb_error(tdb_ctx) == TDB_ERR_NOEXIST) ? ENOENT : EIO;
-	else
-		*size = data.dsize;
+	hdr = hashtable_search(nodes, db_name);
+	if (!hdr) {
+		errno = ENOENT;
+		return NULL;
+	}
+
+	*size = sizeof(*hdr) + hdr->num_perms * sizeof(hdr->perms[0]) +
+		hdr->datalen + hdr->childlen;
+
+	p = talloc_size(NULL, *size);
+	if (!p) {
+		errno = ENOMEM;
+		return NULL;
+	}
 
-	return data.dptr;
+	memcpy(p, hdr, *size);
+
+	return p;
 }
 
 static void get_acc_data(const char *name, struct node_account_data *acc)
@@ -621,12 +621,10 @@ int db_write(struct connection *conn, const char *db_name, void *data,
 	struct xs_tdb_record_hdr *hdr = data;
 	struct node_account_data old_acc = {};
 	unsigned int old_domid, new_domid;
+	size_t name_len = strlen(db_name);
+	const char *name;
 	int ret;
-	TDB_DATA key, dat;
 
-	set_tdb_key(db_name, &key);
-	dat.dptr = data;
-	dat.dsize = size;
 	if (!acc)
 		old_acc.memory = -1;
 	else
@@ -642,29 +640,36 @@ int db_write(struct connection *conn, const char *db_name, void *data,
 	 */
 	if (old_acc.memory)
 		domain_memory_add_nochk(conn, old_domid,
-					-old_acc.memory - key.dsize);
-	ret = domain_memory_add(conn, new_domid, size + key.dsize,
+					-old_acc.memory - name_len);
+	ret = domain_memory_add(conn, new_domid, size + name_len,
 				no_quota_check);
 	if (ret) {
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
 			domain_memory_add_nochk(conn, old_domid,
-						old_acc.memory + key.dsize);
+						old_acc.memory + name_len);
 		return ret;
 	}
 
-	/* TDB should set errno, but doesn't even set ecode AFAICT. */
-	if (tdb_store(tdb_ctx, key, dat,
-		      (flag == NODE_CREATE) ? TDB_INSERT : TDB_MODIFY) != 0) {
-		domain_memory_add_nochk(conn, new_domid, -size - key.dsize);
+	if (flag == NODE_CREATE) {
+		/* db_name could be modified later, so allocate a copy. */
+		name = talloc_strdup(data, db_name);
+		ret = name ? hashtable_add(nodes, name, data) : ENOMEM;
+	} else
+		ret = hashtable_replace(nodes, db_name, data);
+
+	if (ret) {
+		/* Free data, as it isn't owned by hashtable now. */
+		talloc_free(data);
+		domain_memory_add_nochk(conn, new_domid, -size - name_len);
 		/* Error path, so no quota check. */
 		if (old_acc.memory)
 			domain_memory_add_nochk(conn, old_domid,
-						old_acc.memory + key.dsize);
-		errno = EIO;
+						old_acc.memory + name_len);
+		errno = ret;
 		return errno;
 	}
-	trace_tdb("store %s size %zu\n", db_name, size + key.dsize);
+	trace_tdb("store %s size %zu\n", db_name, size + name_len);
 
 	if (acc) {
 		/* Don't use new_domid, as it might be a transaction node. */
@@ -680,9 +685,6 @@ int db_delete(struct connection *conn, const char *name,
 {
 	struct node_account_data tmp_acc;
 	unsigned int domid;
-	TDB_DATA key;
-
-	set_tdb_key(name, &key);
 
 	if (!acc) {
 		acc = &tmp_acc;
@@ -691,15 +693,13 @@ int db_delete(struct connection *conn, const char *name,
 
 	get_acc_data(name, acc);
 
-	if (tdb_delete(tdb_ctx, key)) {
-		errno = EIO;
-		return errno;
-	}
+	hashtable_remove(nodes, name);
 	trace_tdb("delete %s\n", name);
 
 	if (acc->memory) {
 		domid = get_acc_domid(conn, name, acc->domid);
-		domain_memory_add_nochk(conn, domid, -acc->memory - key.dsize);
+		domain_memory_add_nochk(conn, domid,
+					-acc->memory - strlen(name));
 	}
 
 	return 0;
@@ -2352,43 +2352,29 @@ static void manual_node(const char *name, const char *child)
 	talloc_free(node);
 }
 
-static void tdb_logger(TDB_CONTEXT *tdb, int level, const char * fmt, ...)
+static unsigned int hash_from_key_fn(const void *k)
 {
-	va_list ap;
-	char *s;
-	int saved_errno = errno;
+	const char *str = k;
+	unsigned int hash = 5381;
+	char c;
 
-	va_start(ap, fmt);
-	s = talloc_vasprintf(NULL, fmt, ap);
-	va_end(ap);
+	while ((c = *str++))
+		hash = ((hash << 5) + hash) + (unsigned int)c;
 
-	if (s) {
-		trace("TDB: %s\n", s);
-		syslog(LOG_ERR, "TDB: %s",  s);
-		if (verbose)
-			xprintf("TDB: %s", s);
-		talloc_free(s);
-	} else {
-		trace("talloc failure during logging\n");
-		syslog(LOG_ERR, "talloc failure during logging\n");
-	}
+	return hash;
+}
 
-	errno = saved_errno;
+static int keys_equal_fn(const void *key1, const void *key2)
+{
+	return 0 == strcmp(key1, key2);
 }
 
 void setup_structure(bool live_update)
 {
-	char *tdbname;
-
-	tdbname = talloc_strdup(talloc_autofree_context(), "/dev/mem");
-	if (!tdbname)
-		barf_perror("Could not create tdbname");
-
-	tdb_ctx = tdb_open_ex(tdbname, 7919, TDB_INTERNAL | TDB_NOLOCK,
-			      O_RDWR | O_CREAT | O_EXCL | O_CLOEXEC,
-			      0640, &tdb_logger, NULL);
-	if (!tdb_ctx)
-		barf_perror("Could not create tdb file %s", tdbname);
+	nodes = create_hashtable(NULL, "nodes", hash_from_key_fn, keys_equal_fn,
+				 HASHTABLE_FREE_KEY | HASHTABLE_FREE_VALUE);
+	if (!nodes)
+		barf_perror("Could not create nodes hashtable");
 
 	if (live_update)
 		manual_node("/", NULL);
@@ -2402,24 +2388,6 @@ void setup_structure(bool live_update)
 	}
 }
 
-static unsigned int hash_from_key_fn(const void *k)
-{
-	const char *str = k;
-	unsigned int hash = 5381;
-	char c;
-
-	while ((c = *str++))
-		hash = ((hash << 5) + hash) + (unsigned int)c;
-
-	return hash;
-}
-
-
-static int keys_equal_fn(const void *key1, const void *key2)
-{
-	return 0 == strcmp(key1, key2);
-}
-
 int remember_string(struct hashtable *hash, const char *str)
 {
 	char *k = talloc_strdup(NULL, str);
@@ -2479,12 +2447,11 @@ static int check_store_enoent(const void *ctx, struct connection *conn,
 /**
  * Helper to clean_store below.
  */
-static int clean_store_(TDB_CONTEXT *tdb, TDB_DATA key, TDB_DATA val,
-			void *private)
+static int clean_store_(const void *key, void *val, void *private)
 {
 	struct hashtable *reachable = private;
 	char *slash;
-	char * name = talloc_strndup(NULL, key.dptr, key.dsize);
+	char *name = talloc_strdup(NULL, key);
 
 	if (!name) {
 		log("clean_store: ENOMEM");
@@ -2514,7 +2481,7 @@ static int clean_store_(TDB_CONTEXT *tdb, TDB_DATA key, TDB_DATA val,
  */
 static void clean_store(struct check_store_data *data)
 {
-	tdb_traverse(tdb_ctx, &clean_store_, data->reachable);
+	hashtable_iterate(nodes, clean_store_, data->reachable);
 	domain_check_acc(data->domains);
 }
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index e922dce775..63c2110135 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -33,7 +33,6 @@
 #include "xenstore_lib.h"
 #include "xenstore_state.h"
 #include "list.h"
-#include "tdb.h"
 #include "hashtable.h"
 
 #ifndef O_CLOEXEC
@@ -236,13 +235,13 @@ static inline unsigned int get_node_owner(const struct node *node)
 	return node->perms.p[0].id;
 }
 
-/* Write a node to the tdb data base. */
+/* Write a node to the database. */
 int write_node_raw(struct connection *conn, const char *db_name,
 		   struct node *node, int flag, bool no_quota_check);
 #define NODE_CREATE 0
 #define NODE_MODIFY 1
 
-/* Get a node from the tdb data base. */
+/* Get a node from the database. */
 struct node *read_node(struct connection *conn, const void *ctx,
 		       const char *name);
 
diff --git a/tools/xenstore/xenstored_transaction.c b/tools/xenstore/xenstored_transaction.c
index c51edf432f..21700c2e84 100644
--- a/tools/xenstore/xenstored_transaction.c
+++ b/tools/xenstore/xenstored_transaction.c
@@ -403,7 +403,6 @@ static int finalize_transaction(struct connection *conn,
 				       ? NODE_CREATE : NODE_MODIFY;
 				*is_corrupt |= db_write(conn, i->node, data,
 							size, NULL, flag, true);
-				talloc_free(data);
 				if (db_delete(conn, i->trans_name, NULL))
 					*is_corrupt = true;
 			} else {
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:18:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:18:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541010.843270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vVB-0005Ju-An; Tue, 30 May 2023 09:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541010.843270; Tue, 30 May 2023 09:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vVB-0005Jn-7K; Tue, 30 May 2023 09:18:49 +0000
Received: by outflank-mailman (input) for mailman id 541010;
 Tue, 30 May 2023 09:18:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3vRC-0006QB-8J
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:14:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66ac050e-feca-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:14:37 +0200 (CEST)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A7AB41FD68;
 Tue, 30 May 2023 09:14:37 +0000 (UTC)
Received: from imap1.suse-dmz.suse.de (imap1.suse-dmz.suse.de [192.168.254.73])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap1.suse-dmz.suse.de (Postfix) with ESMTPS id 6BD831341B;
 Tue, 30 May 2023 09:14:37 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap1.suse-dmz.suse.de with ESMTPSA id dLv8GH2+dWQzIwAAGKfGzw
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 09:14:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66ac050e-feca-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685438077; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=aSLXfU/AoPiTGzsb+RuZdDli8vNIfyTTVNUUuWSxHFY=;
	b=biqWcJIlFsLu8hU7Vvr2crtHmjVSONPR0njNfPOpYnmJDSGLCubhaApiyIdFkLxdYCuZZV
	bjxqUYE2pgIfquy8lMTKorkG4c11BPWyJL6oF1WD77IXbwgMjis/ln+Bhuod/1Br7VcdSD
	0V2M8pdnJwTTwLVpvFflohLDoIlj7tE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 11/11] tools/xenstore: remove tdb code
Date: Tue, 30 May 2023 11:13:33 +0200
Message-Id: <20230530091333.7678-12-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230530091333.7678-1-jgross@suse.com>
References: <20230530091333.7678-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Now that TDB isn't used anymore, remove it.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/Makefile.common |    2 +-
 tools/xenstore/tdb.c           | 1748 --------------------------------
 tools/xenstore/tdb.h           |  132 ---
 3 files changed, 1 insertion(+), 1881 deletions(-)
 delete mode 100644 tools/xenstore/tdb.c
 delete mode 100644 tools/xenstore/tdb.h

diff --git a/tools/xenstore/Makefile.common b/tools/xenstore/Makefile.common
index 657a16849e..3259ab51e6 100644
--- a/tools/xenstore/Makefile.common
+++ b/tools/xenstore/Makefile.common
@@ -2,7 +2,7 @@
 
 XENSTORED_OBJS-y := xenstored_core.o xenstored_watch.o xenstored_domain.o
 XENSTORED_OBJS-y += xenstored_transaction.o xenstored_control.o xenstored_lu.o
-XENSTORED_OBJS-y += talloc.o utils.o tdb.o hashtable.o
+XENSTORED_OBJS-y += talloc.o utils.o hashtable.o
 
 XENSTORED_OBJS-$(CONFIG_Linux) += xenstored_posix.o xenstored_lu_daemon.o
 XENSTORED_OBJS-$(CONFIG_NetBSD) += xenstored_posix.o xenstored_lu_daemon.o
diff --git a/tools/xenstore/tdb.c b/tools/xenstore/tdb.c
deleted file mode 100644
index 29593b76c3..0000000000
--- a/tools/xenstore/tdb.c
+++ /dev/null
@@ -1,1748 +0,0 @@
- /* 
-   Unix SMB/CIFS implementation.
-
-   trivial database library
-
-   Copyright (C) Andrew Tridgell              1999-2004
-   Copyright (C) Paul `Rusty' Russell		   2000
-   Copyright (C) Jeremy Allison			   2000-2003
-   
-     ** NOTE! The following LGPL license applies to the tdb
-     ** library. This does NOT imply that all of Samba is released
-     ** under the LGPL
-   
-   This library is free software; you can redistribute it and/or
-   modify it under the terms of the GNU Lesser General Public
-   License as published by the Free Software Foundation; either
-   version 2 of the License, or (at your option) any later version.
-
-   This library is distributed in the hope that it will be useful,
-   but WITHOUT ANY WARRANTY; without even the implied warranty of
-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   Lesser General Public License for more details.
-
-   You should have received a copy of the GNU Lesser General Public
-   License along with this library; If not, see <http://www.gnu.org/licenses/>.
-*/
-
-
-#ifndef _SAMBA_BUILD_
-#ifdef HAVE_CONFIG_H
-#include <config.h>
-#endif
-
-#include <stdlib.h>
-#include <stdio.h>
-#include <stdint.h>
-#include <fcntl.h>
-#include <unistd.h>
-#include <string.h>
-#include <fcntl.h>
-#include <errno.h>
-#include <sys/mman.h>
-#include <sys/stat.h>
-#include "tdb.h"
-#include <stdarg.h>
-#include "talloc.h"
-#undef HAVE_MMAP
-#else
-#include "includes.h"
-#include "lib/tdb/include/tdb.h"
-#include "system/time.h"
-#include "system/shmem.h"
-#include "system/filesys.h"
-#endif
-
-#define TDB_MAGIC_FOOD "TDB file\n"
-#define TDB_VERSION (0x26011967 + 7)
-#define TDB_MAGIC (0x26011999U)
-#define TDB_FREE_MAGIC (~TDB_MAGIC)
-#define TDB_DEAD_MAGIC (0xFEE1DEAD)
-#define TDB_ALIGNMENT 4
-#define MIN_REC_SIZE (2*sizeof(struct list_struct) + TDB_ALIGNMENT)
-#define DEFAULT_HASH_SIZE 131
-#define TDB_PAGE_SIZE 0x2000
-#define FREELIST_TOP (sizeof(struct tdb_header))
-#define TDB_ALIGN(x,a) (((x) + (a)-1) & ~((a)-1))
-#define TDB_BYTEREV(x) (((((x)&0xff)<<24)|((x)&0xFF00)<<8)|(((x)>>8)&0xFF00)|((x)>>24))
-#define TDB_DEAD(r) ((r)->magic == TDB_DEAD_MAGIC)
-#define TDB_BAD_MAGIC(r) ((r)->magic != TDB_MAGIC && !TDB_DEAD(r))
-#define TDB_HASH_TOP(hash) (FREELIST_TOP + (BUCKET(hash)+1)*sizeof(tdb_off))
-#define TDB_DATA_START(hash_size) (TDB_HASH_TOP(hash_size-1))
-
-
-/* NB assumes there is a local variable called "tdb" that is the
- * current context, also takes doubly-parenthesized print-style
- * argument. */
-#define TDB_LOG(x) tdb->log_fn x
-
-/* lock offsets */
-#define GLOBAL_LOCK 0
-#define ACTIVE_LOCK 4
-
-#ifndef MAP_FILE
-#define MAP_FILE 0
-#endif
-
-#ifndef MAP_FAILED
-#define MAP_FAILED ((void *)-1)
-#endif
-
-#ifndef discard_const_p
-# if defined(__intptr_t_defined) || defined(HAVE_INTPTR_T)
-#  define discard_const(ptr) ((void *)((intptr_t)(ptr)))
-# else
-#  define discard_const(ptr) ((void *)(ptr))
-# endif
-# define discard_const_p(type, ptr) ((type *)discard_const(ptr))
-#endif
-
-/* free memory if the pointer is valid and zero the pointer */
-#ifndef SAFE_FREE
-#define SAFE_FREE(x) do { if ((x) != NULL) {talloc_free(discard_const_p(void *, (x))); (x)=NULL;} } while(0)
-#endif
-
-#define BUCKET(hash) ((hash) % tdb->header.hash_size)
-static TDB_DATA tdb_null;
-
-/* all contexts, to ensure no double-opens (fcntl locks don't nest!) */
-static TDB_CONTEXT *tdbs = NULL;
-
-static int tdb_munmap(TDB_CONTEXT *tdb)
-{
-	if (tdb->flags & TDB_INTERNAL)
-		return 0;
-
-#ifdef HAVE_MMAP
-	if (tdb->map_ptr) {
-		int ret = munmap(tdb->map_ptr, tdb->map_size);
-		if (ret != 0)
-			return ret;
-	}
-#endif
-	tdb->map_ptr = NULL;
-	return 0;
-}
-
-static void tdb_mmap(TDB_CONTEXT *tdb)
-{
-	if (tdb->flags & TDB_INTERNAL)
-		return;
-
-#ifdef HAVE_MMAP
-	if (!(tdb->flags & TDB_NOMMAP)) {
-		tdb->map_ptr = mmap(NULL, tdb->map_size, 
-				    PROT_READ|(tdb->read_only? 0:PROT_WRITE), 
-				    MAP_SHARED|MAP_FILE, tdb->fd, 0);
-
-		/*
-		 * NB. When mmap fails it returns MAP_FAILED *NOT* NULL !!!!
-		 */
-
-		if (tdb->map_ptr == MAP_FAILED) {
-			tdb->map_ptr = NULL;
-			TDB_LOG((tdb, 2, "tdb_mmap failed for size %d (%s)\n", 
-				 tdb->map_size, strerror(errno)));
-		}
-	} else {
-		tdb->map_ptr = NULL;
-	}
-#else
-	tdb->map_ptr = NULL;
-#endif
-}
-
-/* Endian conversion: we only ever deal with 4 byte quantities */
-static void *convert(void *buf, uint32_t size)
-{
-	uint32_t i, *p = buf;
-	for (i = 0; i < size / 4; i++)
-		p[i] = TDB_BYTEREV(p[i]);
-	return buf;
-}
-#define DOCONV() (tdb->flags & TDB_CONVERT)
-#define CONVERT(x) (DOCONV() ? convert(&x, sizeof(x)) : &x)
-
-/* the body of the database is made of one list_struct for the free space
-   plus a separate data list for each hash value */
-struct list_struct {
-	tdb_off next; /* offset of the next record in the list */
-	tdb_len rec_len; /* total byte length of record */
-	tdb_len key_len; /* byte length of key */
-	tdb_len data_len; /* byte length of data */
-	uint32_t full_hash; /* the full 32 bit hash of the key */
-	uint32_t magic;   /* try to catch errors */
-	/* the following union is implied:
-		union {
-			char record[rec_len];
-			struct {
-				char key[key_len];
-				char data[data_len];
-			}
-			uint32_t totalsize; (tailer)
-		}
-	*/
-};
-
-/* a byte range locking function - return 0 on success
-   this functions locks/unlocks 1 byte at the specified offset.
-
-   On error, errno is also set so that errors are passed back properly
-   through tdb_open(). */
-static int tdb_brlock(TDB_CONTEXT *tdb, tdb_off offset, 
-		      int rw_type, int lck_type, int probe)
-{
-	struct flock fl;
-	int ret;
-
-	if (tdb->flags & TDB_NOLOCK)
-		return 0;
-	if ((rw_type == F_WRLCK) && (tdb->read_only)) {
-		errno = EACCES;
-		return -1;
-	}
-
-	fl.l_type = rw_type;
-	fl.l_whence = SEEK_SET;
-	fl.l_start = offset;
-	fl.l_len = 1;
-	fl.l_pid = 0;
-
-	do {
-		ret = fcntl(tdb->fd,lck_type,&fl);
-	} while (ret == -1 && errno == EINTR);
-
-	if (ret == -1) {
-		if (!probe && lck_type != F_SETLK) {
-			/* Ensure error code is set for log fun to examine. */
-			tdb->ecode = TDB_ERR_LOCK;
-			TDB_LOG((tdb, 5,"tdb_brlock failed (fd=%d) at offset %d rw_type=%d lck_type=%d\n", 
-				 tdb->fd, offset, rw_type, lck_type));
-		}
-		/* Generic lock error. errno set by fcntl.
-		 * EAGAIN is an expected return from non-blocking
-		 * locks. */
-		if (errno != EAGAIN) {
-		TDB_LOG((tdb, 5, "tdb_brlock failed (fd=%d) at offset %d rw_type=%d lck_type=%d: %s\n", 
-				 tdb->fd, offset, rw_type, lck_type, 
-				 strerror(errno)));
-		}
-		return TDB_ERRCODE(TDB_ERR_LOCK, -1);
-	}
-	return 0;
-}
-
-/* lock a list in the database. list -1 is the alloc list */
-static int tdb_lock(TDB_CONTEXT *tdb, int list, int ltype)
-{
-	if (list < -1 || list >= (int)tdb->header.hash_size) {
-		TDB_LOG((tdb, 0,"tdb_lock: invalid list %d for ltype=%d\n", 
-			   list, ltype));
-		return -1;
-	}
-	if (tdb->flags & TDB_NOLOCK)
-		return 0;
-
-	/* Since fcntl locks don't nest, we do a lock for the first one,
-	   and simply bump the count for future ones */
-	if (tdb->locked[list+1].count == 0) {
-		if (tdb_brlock(tdb,FREELIST_TOP+4*list,ltype,F_SETLKW, 0)) {
-			TDB_LOG((tdb, 0,"tdb_lock failed on list %d ltype=%d (%s)\n", 
-					   list, ltype, strerror(errno)));
-			return -1;
-		}
-		tdb->locked[list+1].ltype = ltype;
-	}
-	tdb->locked[list+1].count++;
-	return 0;
-}
-
-/* unlock the database: returns void because it's too late for errors. */
-	/* changed to return int it may be interesting to know there
-	   has been an error  --simo */
-static int tdb_unlock(TDB_CONTEXT *tdb, int list,
-		      int ltype __attribute__((unused)))
-{
-	int ret = -1;
-
-	if (tdb->flags & TDB_NOLOCK)
-		return 0;
-
-	/* Sanity checks */
-	if (list < -1 || list >= (int)tdb->header.hash_size) {
-		TDB_LOG((tdb, 0, "tdb_unlock: list %d invalid (%d)\n", list, tdb->header.hash_size));
-		return ret;
-	}
-
-	if (tdb->locked[list+1].count==0) {
-		TDB_LOG((tdb, 0, "tdb_unlock: count is 0\n"));
-		return ret;
-	}
-
-	if (tdb->locked[list+1].count == 1) {
-		/* Down to last nested lock: unlock underneath */
-		ret = tdb_brlock(tdb, FREELIST_TOP+4*list, F_UNLCK, F_SETLKW, 0);
-	} else {
-		ret = 0;
-	}
-	tdb->locked[list+1].count--;
-
-	if (ret)
-		TDB_LOG((tdb, 0,"tdb_unlock: An error occurred unlocking!\n")); 
-	return ret;
-}
-
-/* This is based on the hash algorithm from gdbm */
-static uint32_t default_tdb_hash(TDB_DATA *key)
-{
-	uint32_t value;	/* Used to compute the hash value.  */
-	uint32_t   i;	/* Used to cycle through random values. */
-
-	/* Set the initial value from the key size. */
-	for (value = 0x238F13AF * key->dsize, i=0; i < key->dsize; i++)
-		value = (value + (key->dptr[i] << (i*5 % 24)));
-
-	return (1103515243 * value + 12345);  
-}
-
-/* check for an out of bounds access - if it is out of bounds then
-   see if the database has been expanded by someone else and expand
-   if necessary 
-   note that "len" is the minimum length needed for the db
-*/
-static int tdb_oob(TDB_CONTEXT *tdb, tdb_off len, int probe)
-{
-	struct stat st;
-	if (len <= tdb->map_size)
-		return 0;
-	if (tdb->flags & TDB_INTERNAL) {
-		if (!probe) {
-			/* Ensure ecode is set for log fn. */
-			tdb->ecode = TDB_ERR_IO;
-			TDB_LOG((tdb, 0,"tdb_oob len %d beyond internal malloc size %d\n",
-				 (int)len, (int)tdb->map_size));
-		}
-		return TDB_ERRCODE(TDB_ERR_IO, -1);
-	}
-
-	if (fstat(tdb->fd, &st) == -1)
-		return TDB_ERRCODE(TDB_ERR_IO, -1);
-
-	if (st.st_size < (off_t)len) {
-		if (!probe) {
-			/* Ensure ecode is set for log fn. */
-			tdb->ecode = TDB_ERR_IO;
-			TDB_LOG((tdb, 0,"tdb_oob len %d beyond eof at %d\n",
-				 (int)len, (int)st.st_size));
-		}
-		return TDB_ERRCODE(TDB_ERR_IO, -1);
-	}
-
-	/* Unmap, update size, remap */
-	if (tdb_munmap(tdb) == -1)
-		return TDB_ERRCODE(TDB_ERR_IO, -1);
-	tdb->map_size = st.st_size;
-	tdb_mmap(tdb);
-	return 0;
-}
-
-/* write a lump of data at a specified offset */
-static int tdb_write(TDB_CONTEXT *tdb, tdb_off off, void *buf, tdb_len len)
-{
-	if (tdb_oob(tdb, off + len, 0) != 0)
-		return -1;
-
-	if (tdb->map_ptr)
-		memcpy(off + (char *)tdb->map_ptr, buf, len);
-#ifdef HAVE_PWRITE
-	else if (pwrite(tdb->fd, buf, len, off) != (ssize_t)len) {
-#else
-	else if (lseek(tdb->fd, off, SEEK_SET) != (off_t)off
-		 || write(tdb->fd, buf, len) != (off_t)len) {
-#endif
-		/* Ensure ecode is set for log fn. */
-		tdb->ecode = TDB_ERR_IO;
-		TDB_LOG((tdb, 0,"tdb_write failed at %d len=%d (%s)\n",
-			   off, len, strerror(errno)));
-		return TDB_ERRCODE(TDB_ERR_IO, -1);
-	}
-	return 0;
-}
-
-/* read a lump of data at a specified offset, maybe convert */
-static int tdb_read(TDB_CONTEXT *tdb,tdb_off off,void *buf,tdb_len len,int cv)
-{
-	if (tdb_oob(tdb, off + len, 0) != 0)
-		return -1;
-
-	if (tdb->map_ptr)
-		memcpy(buf, off + (char *)tdb->map_ptr, len);
-#ifdef HAVE_PREAD
-	else if (pread(tdb->fd, buf, len, off) != (off_t)len) {
-#else
-	else if (lseek(tdb->fd, off, SEEK_SET) != (off_t)off
-		 || read(tdb->fd, buf, len) != (off_t)len) {
-#endif
-		/* Ensure ecode is set for log fn. */
-		tdb->ecode = TDB_ERR_IO;
-		TDB_LOG((tdb, 0,"tdb_read failed at %d len=%d (%s)\n",
-			   off, len, strerror(errno)));
-		return TDB_ERRCODE(TDB_ERR_IO, -1);
-	}
-	if (cv)
-		convert(buf, len);
-	return 0;
-}
-
-/* don't allocate memory: used in tdb_delete path. */
-static int tdb_key_eq(TDB_CONTEXT *tdb, tdb_off off, TDB_DATA key)
-{
-	char buf[64];
-	uint32_t len;
-
-	if (tdb_oob(tdb, off + key.dsize, 0) != 0)
-		return -1;
-
-	if (tdb->map_ptr)
-		return !memcmp(off + (char*)tdb->map_ptr, key.dptr, key.dsize);
-
-	while (key.dsize) {
-		len = key.dsize;
-		if (len > sizeof(buf))
-			len = sizeof(buf);
-		if (tdb_read(tdb, off, buf, len, 0) != 0)
-			return -1;
-		if (memcmp(buf, key.dptr, len) != 0)
-			return 0;
-		key.dptr += len;
-		key.dsize -= len;
-		off += len;
-	}
-	return 1;
-}
-
-/* read a lump of data, allocating the space for it */
-static char *tdb_alloc_read(TDB_CONTEXT *tdb, tdb_off offset, tdb_len len)
-{
-	char *buf;
-
-	if (!(buf = talloc_size(tdb, len))) {
-		/* Ensure ecode is set for log fn. */
-		tdb->ecode = TDB_ERR_OOM;
-		TDB_LOG((tdb, 0,"tdb_alloc_read malloc failed len=%d (%s)\n",
-			   len, strerror(errno)));
-		return TDB_ERRCODE(TDB_ERR_OOM, buf);
-	}
-	if (tdb_read(tdb, offset, buf, len, 0) == -1) {
-		SAFE_FREE(buf);
-		return NULL;
-	}
-	return buf;
-}
-
-/* read/write a tdb_off */
-static int ofs_read(TDB_CONTEXT *tdb, tdb_off offset, tdb_off *d)
-{
-	return tdb_read(tdb, offset, (char*)d, sizeof(*d), DOCONV());
-}
-static int ofs_write(TDB_CONTEXT *tdb, tdb_off offset, tdb_off *d)
-{
-	tdb_off off = *d;
-	return tdb_write(tdb, offset, CONVERT(off), sizeof(*d));
-}
-
-/* read/write a record */
-static int rec_read(TDB_CONTEXT *tdb, tdb_off offset, struct list_struct *rec)
-{
-	if (tdb_read(tdb, offset, rec, sizeof(*rec),DOCONV()) == -1)
-		return -1;
-	if (TDB_BAD_MAGIC(rec)) {
-		/* Ensure ecode is set for log fn. */
-		tdb->ecode = TDB_ERR_CORRUPT;
-		TDB_LOG((tdb, 0,"rec_read bad magic 0x%x at offset=%d\n", rec->magic, offset));
-		return TDB_ERRCODE(TDB_ERR_CORRUPT, -1);
-	}
-	return tdb_oob(tdb, rec->next+sizeof(*rec), 0);
-}
-static int rec_write(TDB_CONTEXT *tdb, tdb_off offset, struct list_struct *rec)
-{
-	struct list_struct r = *rec;
-	return tdb_write(tdb, offset, CONVERT(r), sizeof(r));
-}
-
-/* read a freelist record and check for simple errors */
-static int rec_free_read(TDB_CONTEXT *tdb, tdb_off off, struct list_struct *rec)
-{
-	if (tdb_read(tdb, off, rec, sizeof(*rec),DOCONV()) == -1)
-		return -1;
-
-	if (rec->magic == TDB_MAGIC) {
-		/* this happens when a app is showdown while deleting a record - we should
-		   not completely fail when this happens */
-		TDB_LOG((tdb, 0,"rec_free_read non-free magic 0x%x at offset=%d - fixing\n", 
-			 rec->magic, off));
-		rec->magic = TDB_FREE_MAGIC;
-		if (tdb_write(tdb, off, rec, sizeof(*rec)) == -1)
-			return -1;
-	}
-
-	if (rec->magic != TDB_FREE_MAGIC) {
-		/* Ensure ecode is set for log fn. */
-		tdb->ecode = TDB_ERR_CORRUPT;
-		TDB_LOG((tdb, 0,"rec_free_read bad magic 0x%x at offset=%d\n", 
-			   rec->magic, off));
-		return TDB_ERRCODE(TDB_ERR_CORRUPT, -1);
-	}
-	if (tdb_oob(tdb, rec->next+sizeof(*rec), 0) != 0)
-		return -1;
-	return 0;
-}
-
-/* update a record tailer (must hold allocation lock) */
-static int update_tailer(TDB_CONTEXT *tdb, tdb_off offset,
-			 const struct list_struct *rec)
-{
-	tdb_off totalsize;
-
-	/* Offset of tailer from record header */
-	totalsize = sizeof(*rec) + rec->rec_len;
-	return ofs_write(tdb, offset + totalsize - sizeof(tdb_off),
-			 &totalsize);
-}
-
-/* Remove an element from the freelist.  Must have alloc lock. */
-static int remove_from_freelist(TDB_CONTEXT *tdb, tdb_off off, tdb_off next)
-{
-	tdb_off last_ptr, i;
-
-	/* read in the freelist top */
-	last_ptr = FREELIST_TOP;
-	while (ofs_read(tdb, last_ptr, &i) != -1 && i != 0) {
-		if (i == off) {
-			/* We've found it! */
-			return ofs_write(tdb, last_ptr, &next);
-		}
-		/* Follow chain (next offset is at start of record) */
-		last_ptr = i;
-	}
-	TDB_LOG((tdb, 0,"remove_from_freelist: not on list at off=%d\n", off));
-	return TDB_ERRCODE(TDB_ERR_CORRUPT, -1);
-}
-
-/* Add an element into the freelist. Merge adjacent records if
-   neccessary. */
-static int tdb_free(TDB_CONTEXT *tdb, tdb_off offset, struct list_struct *rec)
-{
-	tdb_off right, left;
-
-	/* Allocation and tailer lock */
-	if (tdb_lock(tdb, -1, F_WRLCK) != 0)
-		return -1;
-
-	/* set an initial tailer, so if we fail we don't leave a bogus record */
-	if (update_tailer(tdb, offset, rec) != 0) {
-		TDB_LOG((tdb, 0, "tdb_free: upfate_tailer failed!\n"));
-		goto fail;
-	}
-
-	/* Look right first (I'm an Australian, dammit) */
-	right = offset + sizeof(*rec) + rec->rec_len;
-	if (right + sizeof(*rec) <= tdb->map_size) {
-		struct list_struct r;
-
-		if (tdb_read(tdb, right, &r, sizeof(r), DOCONV()) == -1) {
-			TDB_LOG((tdb, 0, "tdb_free: right read failed at %u\n", right));
-			goto left;
-		}
-
-		/* If it's free, expand to include it. */
-		if (r.magic == TDB_FREE_MAGIC) {
-			if (remove_from_freelist(tdb, right, r.next) == -1) {
-				TDB_LOG((tdb, 0, "tdb_free: right free failed at %u\n", right));
-				goto left;
-			}
-			rec->rec_len += sizeof(r) + r.rec_len;
-		}
-	}
-
-left:
-	/* Look left */
-	left = offset - sizeof(tdb_off);
-	if (left > TDB_DATA_START(tdb->header.hash_size)) {
-		struct list_struct l;
-		tdb_off leftsize;
-		
-		/* Read in tailer and jump back to header */
-		if (ofs_read(tdb, left, &leftsize) == -1) {
-			TDB_LOG((tdb, 0, "tdb_free: left offset read failed at %u\n", left));
-			goto update;
-		}
-		left = offset - leftsize;
-
-		/* Now read in record */
-		if (tdb_read(tdb, left, &l, sizeof(l), DOCONV()) == -1) {
-			TDB_LOG((tdb, 0, "tdb_free: left read failed at %u (%u)\n", left, leftsize));
-			goto update;
-		}
-
-		/* If it's free, expand to include it. */
-		if (l.magic == TDB_FREE_MAGIC) {
-			if (remove_from_freelist(tdb, left, l.next) == -1) {
-				TDB_LOG((tdb, 0, "tdb_free: left free failed at %u\n", left));
-				goto update;
-			} else {
-				offset = left;
-				rec->rec_len += leftsize;
-			}
-		}
-	}
-
-update:
-	if (update_tailer(tdb, offset, rec) == -1) {
-		TDB_LOG((tdb, 0, "tdb_free: update_tailer failed at %u\n", offset));
-		goto fail;
-	}
-
-	/* Now, prepend to free list */
-	rec->magic = TDB_FREE_MAGIC;
-
-	if (ofs_read(tdb, FREELIST_TOP, &rec->next) == -1 ||
-	    rec_write(tdb, offset, rec) == -1 ||
-	    ofs_write(tdb, FREELIST_TOP, &offset) == -1) {
-		TDB_LOG((tdb, 0, "tdb_free record write failed at offset=%d\n", offset));
-		goto fail;
-	}
-
-	/* And we're done. */
-	tdb_unlock(tdb, -1, F_WRLCK);
-	return 0;
-
- fail:
-	tdb_unlock(tdb, -1, F_WRLCK);
-	return -1;
-}
-
-
-/* expand a file.  we prefer to use ftruncate, as that is what posix
-  says to use for mmap expansion */
-static int expand_file(TDB_CONTEXT *tdb, tdb_off size, tdb_off addition)
-{
-	char buf[1024];
-#ifdef HAVE_FTRUNCATE_EXTEND
-	if (ftruncate(tdb->fd, size+addition) != 0) {
-		TDB_LOG((tdb, 0, "expand_file ftruncate to %d failed (%s)\n", 
-			   size+addition, strerror(errno)));
-		return -1;
-	}
-#else
-	char b = 0;
-
-#ifdef HAVE_PWRITE
-	if (pwrite(tdb->fd,  &b, 1, (size+addition) - 1) != 1) {
-#else
-	if (lseek(tdb->fd, (size+addition) - 1, SEEK_SET) != (off_t)(size+addition) - 1 || 
-	    write(tdb->fd, &b, 1) != 1) {
-#endif
-		TDB_LOG((tdb, 0, "expand_file to %d failed (%s)\n", 
-			   size+addition, strerror(errno)));
-		return -1;
-	}
-#endif
-
-	/* now fill the file with something. This ensures that the file isn't sparse, which would be
-	   very bad if we ran out of disk. This must be done with write, not via mmap */
-	memset(buf, 0x42, sizeof(buf));
-	while (addition) {
-		int n = addition>sizeof(buf)?sizeof(buf):addition;
-#ifdef HAVE_PWRITE
-		int ret = pwrite(tdb->fd, buf, n, size);
-#else
-		int ret;
-		if (lseek(tdb->fd, size, SEEK_SET) != (off_t)size)
-			return -1;
-		ret = write(tdb->fd, buf, n);
-#endif
-		if (ret != n) {
-			TDB_LOG((tdb, 0, "expand_file write of %d failed (%s)\n", 
-				   n, strerror(errno)));
-			return -1;
-		}
-		addition -= n;
-		size += n;
-	}
-	return 0;
-}
-
-
-/* expand the database at least size bytes by expanding the underlying
-   file and doing the mmap again if necessary */
-static int tdb_expand(TDB_CONTEXT *tdb, tdb_off size)
-{
-	struct list_struct rec;
-	tdb_off offset;
-
-	if (tdb_lock(tdb, -1, F_WRLCK) == -1) {
-		TDB_LOG((tdb, 0, "lock failed in tdb_expand\n"));
-		return -1;
-	}
-
-	/* must know about any previous expansions by another process */
-	tdb_oob(tdb, tdb->map_size + 1, 1);
-
-	/* always make room for at least 10 more records, and round
-           the database up to a multiple of TDB_PAGE_SIZE */
-	size = TDB_ALIGN(tdb->map_size + size*10, TDB_PAGE_SIZE) - tdb->map_size;
-
-	if (!(tdb->flags & TDB_INTERNAL))
-		tdb_munmap(tdb);
-
-	/*
-	 * We must ensure the file is unmapped before doing this
-	 * to ensure consistency with systems like OpenBSD where
-	 * writes and mmaps are not consistent.
-	 */
-
-	/* expand the file itself */
-	if (!(tdb->flags & TDB_INTERNAL)) {
-		if (expand_file(tdb, tdb->map_size, size) != 0)
-			goto fail;
-	}
-
-	tdb->map_size += size;
-
-	if (tdb->flags & TDB_INTERNAL) {
-		char *new_map_ptr = talloc_realloc_size(tdb, tdb->map_ptr,
-							tdb->map_size);
-		if (!new_map_ptr) {
-			tdb->map_size -= size;
-			goto fail;
-		}
-		tdb->map_ptr = new_map_ptr;
-	} else {
-		/*
-		 * We must ensure the file is remapped before adding the space
-		 * to ensure consistency with systems like OpenBSD where
-		 * writes and mmaps are not consistent.
-		 */
-
-		/* We're ok if the mmap fails as we'll fallback to read/write */
-		tdb_mmap(tdb);
-	}
-
-	/* form a new freelist record */
-	memset(&rec,'\0',sizeof(rec));
-	rec.rec_len = size - sizeof(rec);
-
-	/* link it into the free list */
-	offset = tdb->map_size - size;
-	if (tdb_free(tdb, offset, &rec) == -1)
-		goto fail;
-
-	tdb_unlock(tdb, -1, F_WRLCK);
-	return 0;
- fail:
-	tdb_unlock(tdb, -1, F_WRLCK);
-	return -1;
-}
-
-
-/* 
-   the core of tdb_allocate - called when we have decided which
-   free list entry to use
- */
-static tdb_off tdb_allocate_ofs(TDB_CONTEXT *tdb, tdb_len length, tdb_off rec_ptr,
-				struct list_struct *rec, tdb_off last_ptr)
-{
-	struct list_struct newrec;
-	tdb_off newrec_ptr;
-
-	memset(&newrec, '\0', sizeof(newrec));
-
-	/* found it - now possibly split it up  */
-	if (rec->rec_len > length + MIN_REC_SIZE) {
-		/* Length of left piece */
-		length = TDB_ALIGN(length, TDB_ALIGNMENT);
-		
-		/* Right piece to go on free list */
-		newrec.rec_len = rec->rec_len - (sizeof(*rec) + length);
-		newrec_ptr = rec_ptr + sizeof(*rec) + length;
-		
-		/* And left record is shortened */
-		rec->rec_len = length;
-	} else {
-		newrec_ptr = 0;
-	}
-	
-	/* Remove allocated record from the free list */
-	if (ofs_write(tdb, last_ptr, &rec->next) == -1) {
-		return 0;
-	}
-	
-	/* Update header: do this before we drop alloc
-	   lock, otherwise tdb_free() might try to
-	   merge with us, thinking we're free.
-	   (Thanks Jeremy Allison). */
-	rec->magic = TDB_MAGIC;
-	if (rec_write(tdb, rec_ptr, rec) == -1) {
-		return 0;
-	}
-	
-	/* Did we create new block? */
-	if (newrec_ptr) {
-		/* Update allocated record tailer (we
-		   shortened it). */
-		if (update_tailer(tdb, rec_ptr, rec) == -1) {
-			return 0;
-		}
-		
-		/* Free new record */
-		if (tdb_free(tdb, newrec_ptr, &newrec) == -1) {
-			return 0;
-		}
-	}
-	
-	/* all done - return the new record offset */
-	return rec_ptr;
-}
-
-/* allocate some space from the free list. The offset returned points
-   to an unconnected list_struct within the database with room for at
-   least length bytes of total data
-
-   0 is returned if the space could not be allocated
- */
-static tdb_off tdb_allocate(TDB_CONTEXT *tdb, tdb_len length,
-			    struct list_struct *rec)
-{
-	tdb_off rec_ptr, last_ptr, newrec_ptr;
-	struct {
-		tdb_off rec_ptr, last_ptr;
-		tdb_len rec_len;
-	} bestfit = { 0, 0, 0 };
-
-	if (tdb_lock(tdb, -1, F_WRLCK) == -1)
-		return 0;
-
-	/* Extra bytes required for tailer */
-	length += sizeof(tdb_off);
-
- again:
-	last_ptr = FREELIST_TOP;
-
-	/* read in the freelist top */
-	if (ofs_read(tdb, FREELIST_TOP, &rec_ptr) == -1)
-		goto fail;
-
-	bestfit.rec_ptr = 0;
-
-	/* 
-	   this is a best fit allocation strategy. Originally we used
-	   a first fit strategy, but it suffered from massive fragmentation
-	   issues when faced with a slowly increasing record size.
-	 */
-	while (rec_ptr) {
-		if (rec_free_read(tdb, rec_ptr, rec) == -1) {
-			goto fail;
-		}
-
-		if (rec->rec_len >= length) {
-			if (bestfit.rec_ptr == 0 ||
-			    rec->rec_len < bestfit.rec_len) {
-				bestfit.rec_len = rec->rec_len;
-				bestfit.rec_ptr = rec_ptr;
-				bestfit.last_ptr = last_ptr;
-				/* consider a fit to be good enough if we aren't wasting more than half the space */
-				if (bestfit.rec_len < 2*length) {
-					break;
-				}
-			}
-		}
-
-		/* move to the next record */
-		last_ptr = rec_ptr;
-		rec_ptr = rec->next;
-	}
-
-	if (bestfit.rec_ptr != 0) {
-		if (rec_free_read(tdb, bestfit.rec_ptr, rec) == -1) {
-			goto fail;
-		}
-
-		newrec_ptr = tdb_allocate_ofs(tdb, length, bestfit.rec_ptr, rec, bestfit.last_ptr);
-		tdb_unlock(tdb, -1, F_WRLCK);
-		return newrec_ptr;
-	}
-
-	/* we didn't find enough space. See if we can expand the
-	   database and if we can then try again */
-	if (tdb_expand(tdb, length + sizeof(*rec)) == 0)
-		goto again;
- fail:
-	tdb_unlock(tdb, -1, F_WRLCK);
-	return 0;
-}
-
-/* initialise a new database with a specified hash size */
-static int tdb_new_database(TDB_CONTEXT *tdb, int hash_size)
-{
-	struct tdb_header *newdb;
-	int size, ret = -1;
-
-	/* We make it up in memory, then write it out if not internal */
-	size = sizeof(struct tdb_header) + (hash_size+1)*sizeof(tdb_off);
-	if (!(newdb = talloc_zero_size(tdb, size)))
-		return TDB_ERRCODE(TDB_ERR_OOM, -1);
-
-	/* Fill in the header */
-	newdb->version = TDB_VERSION;
-	newdb->hash_size = hash_size;
-	if (tdb->flags & TDB_INTERNAL) {
-		tdb->map_size = size;
-		tdb->map_ptr = (char *)newdb;
-		memcpy(&tdb->header, newdb, sizeof(tdb->header));
-		/* Convert the `ondisk' version if asked. */
-		CONVERT(*newdb);
-		return 0;
-	}
-	if (lseek(tdb->fd, 0, SEEK_SET) == -1)
-		goto fail;
-
-	if (ftruncate(tdb->fd, 0) == -1)
-		goto fail;
-
-	/* This creates an endian-converted header, as if read from disk */
-	CONVERT(*newdb);
-	memcpy(&tdb->header, newdb, sizeof(tdb->header));
-	/* Don't endian-convert the magic food! */
-	memcpy(newdb->magic_food, TDB_MAGIC_FOOD, strlen(TDB_MAGIC_FOOD)+1);
-	if (write(tdb->fd, newdb, size) != size)
-		ret = -1;
-	else
-		ret = 0;
-
-  fail:
-	SAFE_FREE(newdb);
-	return ret;
-}
-
-/* Returns 0 on fail.  On success, return offset of record, and fills
-   in rec */
-static tdb_off tdb_find(TDB_CONTEXT *tdb, TDB_DATA key, uint32_t hash,
-			struct list_struct *r)
-{
-	tdb_off rec_ptr;
-	
-	/* read in the hash top */
-	if (ofs_read(tdb, TDB_HASH_TOP(hash), &rec_ptr) == -1)
-		return 0;
-
-	/* keep looking until we find the right record */
-	while (rec_ptr) {
-		if (rec_read(tdb, rec_ptr, r) == -1)
-			return 0;
-
-		if (!TDB_DEAD(r) && hash==r->full_hash && key.dsize==r->key_len) {
-			/* a very likely hit - read the key */
-			int cmp = tdb_key_eq(tdb, rec_ptr + sizeof(*r), key);
-			if (cmp < 0)
-				return 0;
-			else if (cmp > 0)
-				return rec_ptr;
-		}
-		rec_ptr = r->next;
-	}
-	return TDB_ERRCODE(TDB_ERR_NOEXIST, 0);
-}
-
-/* As tdb_find, but if you succeed, keep the lock */
-static tdb_off tdb_find_lock_hash(TDB_CONTEXT *tdb, TDB_DATA key, uint32_t hash, int locktype,
-			     struct list_struct *rec)
-{
-	uint32_t rec_ptr;
-
-	if (tdb_lock(tdb, BUCKET(hash), locktype) == -1)
-		return 0;
-	if (!(rec_ptr = tdb_find(tdb, key, hash, rec)))
-		tdb_unlock(tdb, BUCKET(hash), locktype);
-	return rec_ptr;
-}
-
-enum TDB_ERROR tdb_error(TDB_CONTEXT *tdb)
-{
-	return tdb->ecode;
-}
-
-static struct tdb_errname {
-	enum TDB_ERROR ecode; const char *estring;
-} emap[] = { {TDB_SUCCESS, "Success"},
-	     {TDB_ERR_CORRUPT, "Corrupt database"},
-	     {TDB_ERR_IO, "IO Error"},
-	     {TDB_ERR_LOCK, "Locking error"},
-	     {TDB_ERR_OOM, "Out of memory"},
-	     {TDB_ERR_EXISTS, "Record exists"},
-	     {TDB_ERR_NOLOCK, "Lock exists on other keys"},
-	     {TDB_ERR_NOEXIST, "Record does not exist"} };
-
-/* Error string for the last tdb error */
-const char *tdb_errorstr(TDB_CONTEXT *tdb)
-{
-	uint32_t i;
-	for (i = 0; i < sizeof(emap) / sizeof(struct tdb_errname); i++)
-		if (tdb->ecode == emap[i].ecode)
-			return emap[i].estring;
-	return "Invalid error code";
-}
-
-/* update an entry in place - this only works if the new data size
-   is <= the old data size and the key exists.
-   On failure return -1.
-*/
-
-static int tdb_update_hash(TDB_CONTEXT *tdb, TDB_DATA key, uint32_t hash, TDB_DATA dbuf)
-{
-	struct list_struct rec;
-	tdb_off rec_ptr;
-
-	/* find entry */
-	if (!(rec_ptr = tdb_find(tdb, key, hash, &rec)))
-		return -1;
-
-	/* must be long enough key, data and tailer */
-	if (rec.rec_len < key.dsize + dbuf.dsize + sizeof(tdb_off)) {
-		tdb->ecode = TDB_SUCCESS; /* Not really an error */
-		return -1;
-	}
-
-	if (tdb_write(tdb, rec_ptr + sizeof(rec) + rec.key_len,
-		      dbuf.dptr, dbuf.dsize) == -1)
-		return -1;
-
-	if (dbuf.dsize != rec.data_len) {
-		/* update size */
-		rec.data_len = dbuf.dsize;
-		return rec_write(tdb, rec_ptr, &rec);
-	}
- 
-	return 0;
-}
-
-/* find an entry in the database given a key */
-/* If an entry doesn't exist tdb_err will be set to
- * TDB_ERR_NOEXIST. If a key has no data attached
- * then the TDB_DATA will have zero length but
- * a non-zero pointer
- */
-
-TDB_DATA tdb_fetch(TDB_CONTEXT *tdb, TDB_DATA key)
-{
-	tdb_off rec_ptr;
-	struct list_struct rec;
-	TDB_DATA ret;
-	uint32_t hash;
-
-	/* find which hash bucket it is in */
-	hash = tdb->hash_fn(&key);
-	if (!(rec_ptr = tdb_find_lock_hash(tdb,key,hash,F_RDLCK,&rec)))
-		return tdb_null;
-
-	ret.dptr = tdb_alloc_read(tdb, rec_ptr + sizeof(rec) + rec.key_len,
-				  rec.data_len);
-	ret.dsize = rec.data_len;
-	tdb_unlock(tdb, BUCKET(rec.full_hash), F_RDLCK);
-	return ret;
-}
-
-/* check if an entry in the database exists
-
-   note that 1 is returned if the key is found and 0 is returned if not found.
-   This doesn't match the conventions in the rest of this module, but is
-   compatible with gdbm.
-*/
-static int tdb_exists_hash(TDB_CONTEXT *tdb, TDB_DATA key, uint32_t hash)
-{
-	struct list_struct rec;
-	
-	if (tdb_find_lock_hash(tdb, key, hash, F_RDLCK, &rec) == 0)
-		return 0;
-	tdb_unlock(tdb, BUCKET(rec.full_hash), F_RDLCK);
-	return 1;
-}
-
-/* record lock stops delete underneath */
-static int lock_record(TDB_CONTEXT *tdb, tdb_off off)
-{
-	return off ? tdb_brlock(tdb, off, F_RDLCK, F_SETLKW, 0) : 0;
-}
-/*
-  Write locks override our own fcntl readlocks, so check it here.
-  Note this is meant to be F_SETLK, *not* F_SETLKW, as it's not
-  an error to fail to get the lock here.
-*/
- 
-static int write_lock_record(TDB_CONTEXT *tdb, tdb_off off)
-{
-	struct tdb_traverse_lock *i;
-	for (i = &tdb->travlocks; i; i = i->next)
-		if (i->off == off)
-			return -1;
-	return tdb_brlock(tdb, off, F_WRLCK, F_SETLK, 1);
-}
-
-/*
-  Note this is meant to be F_SETLK, *not* F_SETLKW, as it's not
-  an error to fail to get the lock here.
-*/
-
-static int write_unlock_record(TDB_CONTEXT *tdb, tdb_off off)
-{
-	return tdb_brlock(tdb, off, F_UNLCK, F_SETLK, 0);
-}
-/* fcntl locks don't stack: avoid unlocking someone else's */
-static int unlock_record(TDB_CONTEXT *tdb, tdb_off off)
-{
-	struct tdb_traverse_lock *i;
-	uint32_t count = 0;
-
-	if (off == 0)
-		return 0;
-	for (i = &tdb->travlocks; i; i = i->next)
-		if (i->off == off)
-			count++;
-	return (count == 1 ? tdb_brlock(tdb, off, F_UNLCK, F_SETLKW, 0) : 0);
-}
-
-/* actually delete an entry in the database given the offset */
-static int do_delete(TDB_CONTEXT *tdb, tdb_off rec_ptr, struct list_struct*rec)
-{
-	tdb_off last_ptr, i;
-	struct list_struct lastrec;
-
-	if (tdb->read_only) return -1;
-
-	if (write_lock_record(tdb, rec_ptr) == -1) {
-		/* Someone traversing here: mark it as dead */
-		rec->magic = TDB_DEAD_MAGIC;
-		return rec_write(tdb, rec_ptr, rec);
-	}
-	if (write_unlock_record(tdb, rec_ptr) != 0)
-		return -1;
-
-	/* find previous record in hash chain */
-	if (ofs_read(tdb, TDB_HASH_TOP(rec->full_hash), &i) == -1)
-		return -1;
-	for (last_ptr = 0; i != rec_ptr; last_ptr = i, i = lastrec.next)
-		if (rec_read(tdb, i, &lastrec) == -1)
-			return -1;
-
-	/* unlink it: next ptr is at start of record. */
-	if (last_ptr == 0)
-		last_ptr = TDB_HASH_TOP(rec->full_hash);
-	if (ofs_write(tdb, last_ptr, &rec->next) == -1)
-		return -1;
-
-	/* recover the space */
-	if (tdb_free(tdb, rec_ptr, rec) == -1)
-		return -1;
-	return 0;
-}
-
-/* Uses traverse lock: 0 = finish, -1 = error, other = record offset */
-static int tdb_next_lock(TDB_CONTEXT *tdb, struct tdb_traverse_lock *tlock,
-			 struct list_struct *rec)
-{
-	int want_next = (tlock->off != 0);
-
-	/* Lock each chain from the start one. */
-	for (; tlock->hash < tdb->header.hash_size; tlock->hash++) {
-
-		/* this is an optimisation for the common case where
-		   the hash chain is empty, which is particularly
-		   common for the use of tdb with ldb, where large
-		   hashes are used. In that case we spend most of our
-		   time in tdb_brlock(), locking empty hash chains.
-
-		   To avoid this, we do an unlocked pre-check to see
-		   if the hash chain is empty before starting to look
-		   inside it. If it is empty then we can avoid that
-		   hash chain. If it isn't empty then we can't believe
-		   the value we get back, as we read it without a
-		   lock, so instead we get the lock and re-fetch the
-		   value below.
-
-		   Notice that not doing this optimisation on the
-		   first hash chain is critical. We must guarantee
-		   that we have done at least one fcntl lock at the
-		   start of a search to guarantee that memory is
-		   coherent on SMP systems. If records are added by
-		   others during the search then that's OK, and we
-		   could possibly miss those with this trick, but we
-		   could miss them anyway without this trick, so the
-		   semantics don't change.
-
-		   With a non-indexed ldb search this trick gains us a
-		   factor of around 80 in speed on a linux 2.6.x
-		   system (testing using ldbtest).
-		 */
-		if (!tlock->off && tlock->hash != 0) {
-			uint32_t off;
-			if (tdb->map_ptr) {
-				for (;tlock->hash < tdb->header.hash_size;tlock->hash++) {
-					if (0 != *(uint32_t *)(TDB_HASH_TOP(tlock->hash) + (unsigned char *)tdb->map_ptr)) {
-						break;
-					}
-				}
-				if (tlock->hash == tdb->header.hash_size) {
-					continue;
-				}
-			} else {
-				if (ofs_read(tdb, TDB_HASH_TOP(tlock->hash), &off) == 0 &&
-				    off == 0) {
-					continue;
-				}
-			}
-		}
-
-		if (tdb_lock(tdb, tlock->hash, F_WRLCK) == -1)
-			return -1;
-
-		/* No previous record?  Start at top of chain. */
-		if (!tlock->off) {
-			if (ofs_read(tdb, TDB_HASH_TOP(tlock->hash),
-				     &tlock->off) == -1)
-				goto fail;
-		} else {
-			/* Otherwise unlock the previous record. */
-			if (unlock_record(tdb, tlock->off) != 0)
-				goto fail;
-		}
-
-		if (want_next) {
-			/* We have offset of old record: grab next */
-			if (rec_read(tdb, tlock->off, rec) == -1)
-				goto fail;
-			tlock->off = rec->next;
-		}
-
-		/* Iterate through chain */
-		while( tlock->off) {
-			tdb_off current;
-			if (rec_read(tdb, tlock->off, rec) == -1)
-				goto fail;
-
-			/* Detect infinite loops. From "Shlomi Yaakobovich" <Shlomi@exanet.com>. */
-			if (tlock->off == rec->next) {
-				TDB_LOG((tdb, 0, "tdb_next_lock: loop detected.\n"));
-				goto fail;
-			}
-
-			if (!TDB_DEAD(rec)) {
-				/* Woohoo: we found one! */
-				if (lock_record(tdb, tlock->off) != 0)
-					goto fail;
-				return tlock->off;
-			}
-
-			/* Try to clean dead ones from old traverses */
-			current = tlock->off;
-			tlock->off = rec->next;
-			if (!tdb->read_only && 
-			    do_delete(tdb, current, rec) != 0)
-				goto fail;
-		}
-		tdb_unlock(tdb, tlock->hash, F_WRLCK);
-		want_next = 0;
-	}
-	/* We finished iteration without finding anything */
-	return TDB_ERRCODE(TDB_SUCCESS, 0);
-
- fail:
-	tlock->off = 0;
-	if (tdb_unlock(tdb, tlock->hash, F_WRLCK) != 0)
-		TDB_LOG((tdb, 0, "tdb_next_lock: On error unlock failed!\n"));
-	return -1;
-}
-
-/* traverse the entire database - calling fn(tdb, key, data) on each element.
-   return -1 on error or the record count traversed.
-   if fn is NULL then it is not called.
-   a non-zero return value from fn() indicates that the traversal should stop.
-  */
-int tdb_traverse(TDB_CONTEXT *tdb, tdb_traverse_func fn, void *private)
-{
-	TDB_DATA key, dbuf;
-	struct list_struct rec;
-	struct tdb_traverse_lock tl = { NULL, 0, 0 };
-	int ret, count = 0;
-
-	/* This was in the initialization, above, but the IRIX compiler
-	 * did not like it.  crh
-	 */
-	tl.next = tdb->travlocks.next;
-
-	/* fcntl locks don't stack: beware traverse inside traverse */
-	tdb->travlocks.next = &tl;
-
-	/* tdb_next_lock places locks on the record returned, and its chain */
-	while ((ret = tdb_next_lock(tdb, &tl, &rec)) > 0) {
-		count++;
-		/* now read the full record */
-		key.dptr = tdb_alloc_read(tdb, tl.off + sizeof(rec), 
-					  rec.key_len + rec.data_len);
-		if (!key.dptr) {
-			ret = -1;
-			if (tdb_unlock(tdb, tl.hash, F_WRLCK) != 0)
-				goto out;
-			if (unlock_record(tdb, tl.off) != 0)
-				TDB_LOG((tdb, 0, "tdb_traverse: key.dptr == NULL and unlock_record failed!\n"));
-			goto out;
-		}
-		key.dsize = rec.key_len;
-		dbuf.dptr = key.dptr + rec.key_len;
-		dbuf.dsize = rec.data_len;
-
-		/* Drop chain lock, call out */
-		if (tdb_unlock(tdb, tl.hash, F_WRLCK) != 0) {
-			ret = -1;
-			goto out;
-		}
-		if (fn && fn(tdb, key, dbuf, private)) {
-			/* They want us to terminate traversal */
-			ret = count;
-			if (unlock_record(tdb, tl.off) != 0) {
-				TDB_LOG((tdb, 0, "tdb_traverse: unlock_record failed!\n"));
-				ret = -1;
-			}
-			tdb->travlocks.next = tl.next;
-			SAFE_FREE(key.dptr);
-			return count;
-		}
-		SAFE_FREE(key.dptr);
-	}
-out:
-	tdb->travlocks.next = tl.next;
-	if (ret < 0)
-		return -1;
-	else
-		return count;
-}
-
-/* find the first entry in the database and return its key */
-TDB_DATA tdb_firstkey(TDB_CONTEXT *tdb)
-{
-	TDB_DATA key;
-	struct list_struct rec;
-
-	/* release any old lock */
-	if (unlock_record(tdb, tdb->travlocks.off) != 0)
-		return tdb_null;
-	tdb->travlocks.off = tdb->travlocks.hash = 0;
-
-	if (tdb_next_lock(tdb, &tdb->travlocks, &rec) <= 0)
-		return tdb_null;
-	/* now read the key */
-	key.dsize = rec.key_len;
-	key.dptr =tdb_alloc_read(tdb,tdb->travlocks.off+sizeof(rec),key.dsize);
-	if (tdb_unlock(tdb, BUCKET(tdb->travlocks.hash), F_WRLCK) != 0)
-		TDB_LOG((tdb, 0, "tdb_firstkey: error occurred while tdb_unlocking!\n"));
-	return key;
-}
-
-/* find the next entry in the database, returning its key */
-TDB_DATA tdb_nextkey(TDB_CONTEXT *tdb, TDB_DATA oldkey)
-{
-	uint32_t oldhash;
-	TDB_DATA key = tdb_null;
-	struct list_struct rec;
-	char *k = NULL;
-
-	/* Is locked key the old key?  If so, traverse will be reliable. */
-	if (tdb->travlocks.off) {
-		if (tdb_lock(tdb,tdb->travlocks.hash,F_WRLCK))
-			return tdb_null;
-		if (rec_read(tdb, tdb->travlocks.off, &rec) == -1
-		    || !(k = tdb_alloc_read(tdb,tdb->travlocks.off+sizeof(rec),
-					    rec.key_len))
-		    || memcmp(k, oldkey.dptr, oldkey.dsize) != 0) {
-			/* No, it wasn't: unlock it and start from scratch */
-			if (unlock_record(tdb, tdb->travlocks.off) != 0)
-				return tdb_null;
-			if (tdb_unlock(tdb, tdb->travlocks.hash, F_WRLCK) != 0)
-				return tdb_null;
-			tdb->travlocks.off = 0;
-		}
-
-		SAFE_FREE(k);
-	}
-
-	if (!tdb->travlocks.off) {
-		/* No previous element: do normal find, and lock record */
-		tdb->travlocks.off = tdb_find_lock_hash(tdb, oldkey, tdb->hash_fn(&oldkey), F_WRLCK, &rec);
-		if (!tdb->travlocks.off)
-			return tdb_null;
-		tdb->travlocks.hash = BUCKET(rec.full_hash);
-		if (lock_record(tdb, tdb->travlocks.off) != 0) {
-			TDB_LOG((tdb, 0, "tdb_nextkey: lock_record failed (%s)!\n", strerror(errno)));
-			return tdb_null;
-		}
-	}
-	oldhash = tdb->travlocks.hash;
-
-	/* Grab next record: locks chain and returned record,
-	   unlocks old record */
-	if (tdb_next_lock(tdb, &tdb->travlocks, &rec) > 0) {
-		key.dsize = rec.key_len;
-		key.dptr = tdb_alloc_read(tdb, tdb->travlocks.off+sizeof(rec),
-					  key.dsize);
-		/* Unlock the chain of this new record */
-		if (tdb_unlock(tdb, tdb->travlocks.hash, F_WRLCK) != 0)
-			TDB_LOG((tdb, 0, "tdb_nextkey: WARNING tdb_unlock failed!\n"));
-	}
-	/* Unlock the chain of old record */
-	if (tdb_unlock(tdb, BUCKET(oldhash), F_WRLCK) != 0)
-		TDB_LOG((tdb, 0, "tdb_nextkey: WARNING tdb_unlock failed!\n"));
-	return key;
-}
-
-/* delete an entry in the database given a key */
-static int tdb_delete_hash(TDB_CONTEXT *tdb, TDB_DATA key, uint32_t hash)
-{
-	tdb_off rec_ptr;
-	struct list_struct rec;
-	int ret;
-
-	if (!(rec_ptr = tdb_find_lock_hash(tdb, key, hash, F_WRLCK, &rec)))
-		return -1;
-	ret = do_delete(tdb, rec_ptr, &rec);
-	if (tdb_unlock(tdb, BUCKET(rec.full_hash), F_WRLCK) != 0)
-		TDB_LOG((tdb, 0, "tdb_delete: WARNING tdb_unlock failed!\n"));
-	return ret;
-}
-
-int tdb_delete(TDB_CONTEXT *tdb, TDB_DATA key)
-{
-	uint32_t hash = tdb->hash_fn(&key);
-	return tdb_delete_hash(tdb, key, hash);
-}
-
-/* store an element in the database, replacing any existing element
-   with the same key 
-
-   return 0 on success, -1 on failure
-*/
-int tdb_store(TDB_CONTEXT *tdb, TDB_DATA key, TDB_DATA dbuf, int flag)
-{
-	struct list_struct rec;
-	uint32_t hash;
-	tdb_off rec_ptr;
-	char *p = NULL;
-	int ret = 0;
-
-	/* find which hash bucket it is in */
-	hash = tdb->hash_fn(&key);
-	if (tdb_lock(tdb, BUCKET(hash), F_WRLCK) == -1)
-		return -1;
-
-	/* check for it existing, on insert. */
-	if (flag == TDB_INSERT) {
-		if (tdb_exists_hash(tdb, key, hash)) {
-			tdb->ecode = TDB_ERR_EXISTS;
-			goto fail;
-		}
-	} else {
-		/* first try in-place update, on modify or replace. */
-		if (tdb_update_hash(tdb, key, hash, dbuf) == 0)
-			goto out;
-		if (tdb->ecode == TDB_ERR_NOEXIST &&
-		    flag == TDB_MODIFY) {
-			/* if the record doesn't exist and we are in TDB_MODIFY mode then
-			 we should fail the store */
-			goto fail;
-		}
-	}
-	/* reset the error code potentially set by the tdb_update() */
-	tdb->ecode = TDB_SUCCESS;
-
-	/* delete any existing record - if it doesn't exist we don't
-           care.  Doing this first reduces fragmentation, and avoids
-           coalescing with `allocated' block before it's updated. */
-	if (flag != TDB_INSERT)
-		tdb_delete_hash(tdb, key, hash);
-
-	/* Copy key+value *before* allocating free space in case malloc
-	   fails and we are left with a dead spot in the tdb. */
-
-	if (!(p = (char *)talloc_size(tdb, key.dsize + dbuf.dsize))) {
-		tdb->ecode = TDB_ERR_OOM;
-		goto fail;
-	}
-
-	memcpy(p, key.dptr, key.dsize);
-	if (dbuf.dsize)
-		memcpy(p+key.dsize, dbuf.dptr, dbuf.dsize);
-
-	/* we have to allocate some space */
-	if (!(rec_ptr = tdb_allocate(tdb, key.dsize + dbuf.dsize, &rec)))
-		goto fail;
-
-	/* Read hash top into next ptr */
-	if (ofs_read(tdb, TDB_HASH_TOP(hash), &rec.next) == -1)
-		goto fail;
-
-	rec.key_len = key.dsize;
-	rec.data_len = dbuf.dsize;
-	rec.full_hash = hash;
-	rec.magic = TDB_MAGIC;
-
-	/* write out and point the top of the hash chain at it */
-	if (rec_write(tdb, rec_ptr, &rec) == -1
-	    || tdb_write(tdb, rec_ptr+sizeof(rec), p, key.dsize+dbuf.dsize)==-1
-	    || ofs_write(tdb, TDB_HASH_TOP(hash), &rec_ptr) == -1) {
-		/* Need to tdb_unallocate() here */
-		goto fail;
-	}
- out:
-	SAFE_FREE(p); 
-	tdb_unlock(tdb, BUCKET(hash), F_WRLCK);
-	return ret;
-fail:
-	ret = -1;
-	goto out;
-}
-
-static int tdb_already_open(dev_t device,
-			    ino_t ino)
-{
-	TDB_CONTEXT *i;
-	
-	for (i = tdbs; i; i = i->next) {
-		if (i->device == device && i->inode == ino) {
-			return 1;
-		}
-	}
-
-	return 0;
-}
-
-/* a default logging function */
-static void null_log_fn(TDB_CONTEXT *tdb __attribute__((unused)),
-			int level __attribute__((unused)),
-			const char *fmt __attribute__((unused)), ...)
-{
-}
-
-
-TDB_CONTEXT *tdb_open_ex(const char *name, int hash_size, int tdb_flags,
-			 int open_flags, mode_t mode,
-			 tdb_log_func log_fn,
-			 tdb_hash_func hash_fn)
-{
-	TDB_CONTEXT *tdb;
-	struct stat st;
-	int rev = 0, locked = 0;
-	uint8_t *vp;
-	uint32_t vertest;
-
-	if (!(tdb = talloc_zero(name, TDB_CONTEXT))) {
-		/* Can't log this */
-		errno = ENOMEM;
-		goto fail;
-	}
-	tdb->fd = -1;
-	tdb->name = NULL;
-	tdb->map_ptr = NULL;
-	tdb->flags = tdb_flags;
-	tdb->open_flags = open_flags;
-	tdb->log_fn = log_fn?log_fn:null_log_fn;
-	tdb->hash_fn = hash_fn ? hash_fn : default_tdb_hash;
-
-	if ((open_flags & O_ACCMODE) == O_WRONLY) {
-		TDB_LOG((tdb, 0, "tdb_open_ex: can't open tdb %s write-only\n",
-			 name));
-		errno = EINVAL;
-		goto fail;
-	}
-	
-	if (hash_size == 0)
-		hash_size = DEFAULT_HASH_SIZE;
-	if ((open_flags & O_ACCMODE) == O_RDONLY) {
-		tdb->read_only = 1;
-		/* read only databases don't do locking or clear if first */
-		tdb->flags |= TDB_NOLOCK;
-		tdb->flags &= ~TDB_CLEAR_IF_FIRST;
-	}
-
-	/* internal databases don't mmap or lock, and start off cleared */
-	if (tdb->flags & TDB_INTERNAL) {
-		tdb->flags |= (TDB_NOLOCK | TDB_NOMMAP);
-		tdb->flags &= ~TDB_CLEAR_IF_FIRST;
-		if (tdb_new_database(tdb, hash_size) != 0) {
-			TDB_LOG((tdb, 0, "tdb_open_ex: tdb_new_database failed!"));
-			goto fail;
-		}
-		goto internal;
-	}
-
-	if ((tdb->fd = open(name, open_flags, mode)) == -1) {
-		TDB_LOG((tdb, 5, "tdb_open_ex: could not open file %s: %s\n",
-			 name, strerror(errno)));
-		goto fail;	/* errno set by open(2) */
-	}
-
-	/* ensure there is only one process initialising at once */
-	if (tdb_brlock(tdb, GLOBAL_LOCK, F_WRLCK, F_SETLKW, 0) == -1) {
-		TDB_LOG((tdb, 0, "tdb_open_ex: failed to get global lock on %s: %s\n",
-			 name, strerror(errno)));
-		goto fail;	/* errno set by tdb_brlock */
-	}
-
-	/* we need to zero database if we are the only one with it open */
-	if ((tdb_flags & TDB_CLEAR_IF_FIRST) &&
-		(locked = (tdb_brlock(tdb, ACTIVE_LOCK, F_WRLCK, F_SETLK, 0) == 0))) {
-		open_flags |= O_CREAT;
-		if (ftruncate(tdb->fd, 0) == -1) {
-			TDB_LOG((tdb, 0, "tdb_open_ex: "
-				 "failed to truncate %s: %s\n",
-				 name, strerror(errno)));
-			goto fail; /* errno set by ftruncate */
-		}
-	}
-
-	if (read(tdb->fd, &tdb->header, sizeof(tdb->header)) != sizeof(tdb->header)
-	    || strcmp(tdb->header.magic_food, TDB_MAGIC_FOOD) != 0
-	    || (tdb->header.version != TDB_VERSION
-		&& !(rev = (tdb->header.version==TDB_BYTEREV(TDB_VERSION))))) {
-		/* it's not a valid database - possibly initialise it */
-		if (!(open_flags & O_CREAT) || tdb_new_database(tdb, hash_size) == -1) {
-			errno = EIO; /* ie bad format or something */
-			goto fail;
-		}
-		rev = (tdb->flags & TDB_CONVERT);
-	}
-	vp = (uint8_t *)&tdb->header.version;
-	vertest = (((uint32_t)vp[0]) << 24) | (((uint32_t)vp[1]) << 16) |
-		  (((uint32_t)vp[2]) << 8) | (uint32_t)vp[3];
-	tdb->flags |= (vertest==TDB_VERSION) ? TDB_BIGENDIAN : 0;
-	if (!rev)
-		tdb->flags &= ~TDB_CONVERT;
-	else {
-		tdb->flags |= TDB_CONVERT;
-		convert(&tdb->header, sizeof(tdb->header));
-	}
-	if (fstat(tdb->fd, &st) == -1)
-		goto fail;
-
-	/* Is it already in the open list?  If so, fail. */
-	if (tdb_already_open(st.st_dev, st.st_ino)) {
-		TDB_LOG((tdb, 2, "tdb_open_ex: "
-			 "%s (%d,%d) is already open in this process\n",
-			 name, (int)st.st_dev, (int)st.st_ino));
-		errno = EBUSY;
-		goto fail;
-	}
-
-	if (!(tdb->name = (char *)talloc_strdup(tdb, name))) {
-		errno = ENOMEM;
-		goto fail;
-	}
-
-	tdb->map_size = st.st_size;
-	tdb->device = st.st_dev;
-	tdb->inode = st.st_ino;
-	tdb->locked = talloc_zero_array(tdb, struct tdb_lock_type,
-					tdb->header.hash_size+1);
-	if (!tdb->locked) {
-		TDB_LOG((tdb, 2, "tdb_open_ex: "
-			 "failed to allocate lock structure for %s\n",
-			 name));
-		errno = ENOMEM;
-		goto fail;
-	}
-	tdb_mmap(tdb);
-	if (locked) {
-		if (tdb_brlock(tdb, ACTIVE_LOCK, F_UNLCK, F_SETLK, 0) == -1) {
-			TDB_LOG((tdb, 0, "tdb_open_ex: "
-				 "failed to release ACTIVE_LOCK on %s: %s\n",
-				 name, strerror(errno)));
-			goto fail;
-		}
-
-	}
-
-	/* We always need to do this if the CLEAR_IF_FIRST flag is set, even if
-	   we didn't get the initial exclusive lock as we need to let all other
-	   users know we're using it. */
-
-	if (tdb_flags & TDB_CLEAR_IF_FIRST) {
-	/* leave this lock in place to indicate it's in use */
-	if (tdb_brlock(tdb, ACTIVE_LOCK, F_RDLCK, F_SETLKW, 0) == -1)
-		goto fail;
-	}
-
-
- internal:
-	/* Internal (memory-only) databases skip all the code above to
-	 * do with disk files, and resume here by releasing their
-	 * global lock and hooking into the active list. */
-	if (tdb_brlock(tdb, GLOBAL_LOCK, F_UNLCK, F_SETLKW, 0) == -1)
-		goto fail;
-	tdb->next = tdbs;
-	tdbs = tdb;
-	return tdb;
-
- fail:
-	{ int save_errno = errno;
-
-	if (!tdb)
-		return NULL;
-	
-	if (tdb->map_ptr) {
-		if (tdb->flags & TDB_INTERNAL)
-			SAFE_FREE(tdb->map_ptr);
-		else
-			tdb_munmap(tdb);
-	}
-	SAFE_FREE(tdb->name);
-	if (tdb->fd != -1)
-		if (close(tdb->fd) != 0)
-			TDB_LOG((tdb, 5, "tdb_open_ex: failed to close tdb->fd on error!\n"));
-	SAFE_FREE(tdb->locked);
-	SAFE_FREE(tdb);
-	errno = save_errno;
-	return NULL;
-	}
-}
-
-/**
- * Close a database.
- *
- * @returns -1 for error; 0 for success.
- **/
-int tdb_close(TDB_CONTEXT *tdb)
-{
-	TDB_CONTEXT **i;
-	int ret = 0;
-
-	if (tdb->map_ptr) {
-		if (tdb->flags & TDB_INTERNAL)
-			SAFE_FREE(tdb->map_ptr);
-		else
-			tdb_munmap(tdb);
-	}
-	SAFE_FREE(tdb->name);
-	if (tdb->fd != -1)
-		ret = close(tdb->fd);
-	SAFE_FREE(tdb->locked);
-
-	/* Remove from contexts list */
-	for (i = &tdbs; *i; i = &(*i)->next) {
-		if (*i == tdb) {
-			*i = tdb->next;
-			break;
-		}
-	}
-
-	memset(tdb, 0, sizeof(*tdb));
-	SAFE_FREE(tdb);
-
-	return ret;
-}
diff --git a/tools/xenstore/tdb.h b/tools/xenstore/tdb.h
deleted file mode 100644
index ce3c7339f8..0000000000
--- a/tools/xenstore/tdb.h
+++ /dev/null
@@ -1,132 +0,0 @@
-#ifndef __TDB_H__
-#define __TDB_H__
-
-#include "utils.h"
-
-/* 
-   Unix SMB/CIFS implementation.
-
-   trivial database library
-
-   Copyright (C) Andrew Tridgell 1999-2004
-   
-     ** NOTE! The following LGPL license applies to the tdb
-     ** library. This does NOT imply that all of Samba is released
-     ** under the LGPL
-   
-   This library is free software; you can redistribute it and/or
-   modify it under the terms of the GNU Lesser General Public
-   License as published by the Free Software Foundation; either
-   version 2 of the License, or (at your option) any later version.
-
-   This library is distributed in the hope that it will be useful,
-   but WITHOUT ANY WARRANTY; without even the implied warranty of
-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   Lesser General Public License for more details.
-
-   You should have received a copy of the GNU Lesser General Public
-   License along with this library; If not, see <http://www.gnu.org/licenses/>.
-*/
-
-#ifdef  __cplusplus
-extern "C" {
-#endif
-
-
-/* flags to tdb_store() */
-#define TDB_REPLACE 1
-#define TDB_INSERT 2
-#define TDB_MODIFY 3
-
-/* flags for tdb_open() */
-#define TDB_DEFAULT 0 /* just a readability place holder */
-#define TDB_CLEAR_IF_FIRST 1
-#define TDB_INTERNAL 2 /* don't store on disk */
-#define TDB_NOLOCK   4 /* don't do any locking */
-#define TDB_NOMMAP   8 /* don't use mmap */
-#define TDB_CONVERT 16 /* convert endian (internal use) */
-#define TDB_BIGENDIAN 32 /* header is big-endian (internal use) */
-
-#define TDB_ERRCODE(code, ret) ((tdb->ecode = (code)), ret)
-
-/* error codes */
-enum TDB_ERROR {TDB_SUCCESS=0, TDB_ERR_CORRUPT, TDB_ERR_IO, TDB_ERR_LOCK, 
-		TDB_ERR_OOM, TDB_ERR_EXISTS, TDB_ERR_NOLOCK, TDB_ERR_LOCK_TIMEOUT,
-		TDB_ERR_NOEXIST};
-
-#ifndef uint32_t
-#define uint32_t unsigned
-#endif
-
-typedef struct TDB_DATA {
-	char *dptr;
-	size_t dsize;
-} TDB_DATA;
-
-typedef uint32_t tdb_len;
-typedef uint32_t tdb_off;
-
-/* this is stored at the front of every database */
-struct tdb_header {
-	char magic_food[32]; /* for /etc/magic */
-	uint32_t version; /* version of the code */
-	uint32_t hash_size; /* number of hash entries */
-	tdb_off rwlocks;
-	tdb_off reserved[31];
-};
-
-struct tdb_lock_type {
-	uint32_t count;
-	uint32_t ltype;
-};
-
-struct tdb_traverse_lock {
-	struct tdb_traverse_lock *next;
-	uint32_t off;
-	uint32_t hash;
-};
-
-/* this is the context structure that is returned from a db open */
-typedef struct tdb_context {
-	char *name; /* the name of the database */
-	void *map_ptr; /* where it is currently mapped */
-	int fd; /* open file descriptor for the database */
-	tdb_len map_size; /* how much space has been mapped */
-	int read_only; /* opened read-only */
-	struct tdb_lock_type *locked; /* array of chain locks */
-	enum TDB_ERROR ecode; /* error code for last tdb error */
-	struct tdb_header header; /* a cached copy of the header */
-	uint32_t flags; /* the flags passed to tdb_open */
-	struct tdb_traverse_lock travlocks; /* current traversal locks */
-	struct tdb_context *next; /* all tdbs to avoid multiple opens */
-	dev_t device;	/* uniquely identifies this tdb */
-	ino_t inode;	/* uniquely identifies this tdb */
-	void (*log_fn)(struct tdb_context *tdb, int level, const char *, ...) PRINTF_ATTRIBUTE(3,4); /* logging function */
-	uint32_t (*hash_fn)(TDB_DATA *key);
-	int open_flags; /* flags used in the open - needed by reopen */
-} TDB_CONTEXT;
-
-typedef int (*tdb_traverse_func)(TDB_CONTEXT *, TDB_DATA, TDB_DATA, void *);
-typedef void (*tdb_log_func)(TDB_CONTEXT *, int , const char *, ...);
-typedef uint32_t (*tdb_hash_func)(TDB_DATA *key);
-
-TDB_CONTEXT *tdb_open_ex(const char *name, int hash_size, int tdb_flags,
-			 int open_flags, mode_t mode,
-			 tdb_log_func log_fn,
-			 tdb_hash_func hash_fn);
-
-enum TDB_ERROR tdb_error(TDB_CONTEXT *tdb);
-const char *tdb_errorstr(TDB_CONTEXT *tdb);
-TDB_DATA tdb_fetch(TDB_CONTEXT *tdb, TDB_DATA key);
-int tdb_delete(TDB_CONTEXT *tdb, TDB_DATA key);
-int tdb_store(TDB_CONTEXT *tdb, TDB_DATA key, TDB_DATA dbuf, int flag);
-int tdb_close(TDB_CONTEXT *tdb);
-TDB_DATA tdb_firstkey(TDB_CONTEXT *tdb);
-TDB_DATA tdb_nextkey(TDB_CONTEXT *tdb, TDB_DATA key);
-int tdb_traverse(TDB_CONTEXT *tdb, tdb_traverse_func fn, void *);
-
-#ifdef  __cplusplus
-}
-#endif
-
-#endif /* tdb.h */
-- 
2.35.3
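[Editorial note: the tdb_close() hunk in the removed tdb.c walks the global context list with a pointer-to-pointer loop (`for (i = &tdbs; *i; i = &(*i)->next)`), an idiom worth calling out because it removes an element from a singly linked list without special-casing the head. A minimal standalone sketch of that idiom, with illustrative names rather than the actual tdb ones:]

```c
#include <assert.h>
#include <stddef.h>

struct node {
    int id;
    struct node *next;
};

/* Remove n from the list headed at *head.  Walking a pointer to the
 * "next" slot (rather than the nodes themselves) means unlinking is the
 * same single assignment whether n is the head or an interior element --
 * the same shape as the contexts-list removal in tdb_close(). */
static void list_remove(struct node **head, struct node *n)
{
    struct node **i;

    for (i = head; *i; i = &(*i)->next) {
        if (*i == n) {
            *i = n->next;
            break;
        }
    }
}
```

[In tdb_close() the equivalent assignment is `*i = tdb->next`, which splices the context out of the global `tdbs` list regardless of its position.]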



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:19:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:19:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541012.843279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vVM-0005gG-QG; Tue, 30 May 2023 09:19:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541012.843279; Tue, 30 May 2023 09:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vVM-0005g9-NF; Tue, 30 May 2023 09:19:00 +0000
Received: by outflank-mailman (input) for mailman id 541012;
 Tue, 30 May 2023 09:19:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3vVM-0004Cg-53
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:19:00 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2073.outbound.protection.outlook.com [40.107.7.73])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 026bd110-fecb-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:18:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB6851.eurprd04.prod.outlook.com (2603:10a6:208:182::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 09:18:29 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 09:18:29 +0000
X-Inumbo-ID: 026bd110-fecb-11ed-b231-6b7b168915f2
Message-ID: <3f38d4c1-dac7-611a-1882-a5e6de16d4f9@suse.com>
Date: Tue, 30 May 2023 11:18:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 2/4] x86/spec-ctrl: Synthesize RSBA/RRSBA bits with older
 microcode
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230526110656.4018711-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 26.05.2023 13:06, Andrew Cooper wrote:
> @@ -687,6 +697,32 @@ static bool __init retpoline_calculations(void)
>      if ( safe )
>          return true;
>  
> +    /*
> +     * The meaning of the RSBA and RRSBA bits have evolved over time.  The
> +     * agreed upon meaning at the time of writing (May 2023) is thus:
> +     *
> +     * - RSBA (RSB Alternative) means that an RSB may fall back to an
> +     *   alternative predictor on underflow.  Skylake uarch and later all have
> +     *   this property.  Broadwell too, when running microcode versions prior
> +     *   to Jan 2018.
> +     *
> +     * - All eIBRS-capable processors suffer RSBA, but eIBRS also introduces
> +     *   tagging of predictions with the mode in which they were learned.  So
> +     *   when eIBRS is active, RSBA becomes RRSBA (Restricted RSBA).
> +     *
> +     * Some parts (Broadwell) are not expected to ever enumerate this
> +     * behaviour directly.  Other parts have differing enumeration with
> +     * microcode version.  Fix up Xen's idea, so we can advertise them safely
> +     * to guests, and so toolstacks can level a VM safely for migration.
> +     */

If the difference between the two is whether eIBRS is active (as you
worded more explicitly in e.g. [1]), then ...

> + unsafe_maybe_fixup_rrsba:
> +    if ( !cpu_has_rrsba )
> +        setup_force_cpu_cap(X86_FEATURE_RRSBA);
> +
> + unsafe_maybe_fixup_rsba:
> +    if ( !cpu_has_rsba )
> +        setup_force_cpu_cap(X86_FEATURE_RSBA);
> +
>      return false;
>  }

... can both actually be active at the same time? IOW is there a "return
false" missing ahead of the 2nd label?

Not having looked at the further patches yet, it also strikes me as odd
that each of the two labels is used exactly once. Leaving the shared
comment aside, imo this would then better avoid "goto".

Finally, what use are the two if()s? There's nothing wrong with forcing
a feature which is already available.

Jan

[1] https://lore.kernel.org/lkml/f43c3c33-f8b9-e764-709d-b3864d2bd9f8@citrix.com/
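[Editorial note: the "return false missing ahead of the 2nd label" question is about plain C label fall-through. A minimal standalone sketch (hypothetical names and flag values, not the actual Xen code) showing why, without a return between the two labels, entering at the first label forces both synthetic bits:]

```c
#include <stdbool.h>

#define F_RSBA  0x1
#define F_RRSBA 0x2

static unsigned forced;            /* stands in for the forced-caps set */

static void setup_force(unsigned bit)
{
    forced |= bit;
}

static bool demo(bool enter_at_rrsba)
{
    forced = 0;

    if (enter_at_rrsba)
        goto unsafe_maybe_fixup_rrsba;
    goto unsafe_maybe_fixup_rsba;

 unsafe_maybe_fixup_rrsba:
    setup_force(F_RRSBA);
    /* No "return false" here, so control falls through ... */
 unsafe_maybe_fixup_rsba:
    setup_force(F_RSBA);
    return false;
}
```

[demo(true) ends with both F_RRSBA and F_RSBA forced, demo(false) with only F_RSBA; hence the reviewer's question whether forcing both simultaneously is intended, or whether a return should separate the labels.]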


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:24:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:24:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541021.843290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vaP-0007VE-DW; Tue, 30 May 2023 09:24:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541021.843290; Tue, 30 May 2023 09:24:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vaP-0007V7-Aq; Tue, 30 May 2023 09:24:13 +0000
Received: by outflank-mailman (input) for mailman id 541021;
 Tue, 30 May 2023 09:24:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3vaO-0007Uz-Bf
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:24:12 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061f.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb9ac3e8-fecb-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:24:10 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8914.eurprd04.prod.outlook.com (2603:10a6:20b:42d::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 09:24:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 09:24:08 +0000
X-Inumbo-ID: bb9ac3e8-fecb-11ed-8611-37d641c3527e
Message-ID: <524d0127-e0af-a2d3-d475-f462395bdbf7@suse.com>
Date: Tue, 30 May 2023 11:24:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 3/4] x86/cpu-policy: Rearrange
 guest_common_default_feature_adjustments()
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230526110656.4018711-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 26.05.2023 13:06, Andrew Cooper wrote:
> This is prep work, split out to simplify the diff on the following change.
> 
>  * Split the INTEL check out of the IvyBridge RDRAND check, as the former will
>    be reused.
>  * Use asm/intel-family.h to remove a raw 0x3a model number.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue May 30 09:26:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:26:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541025.843299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vcR-00085K-QR; Tue, 30 May 2023 09:26:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541025.843299; Tue, 30 May 2023 09:26:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vcR-00085D-Nw; Tue, 30 May 2023 09:26:19 +0000
Received: by outflank-mailman (input) for mailman id 541025;
 Tue, 30 May 2023 09:26:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8r7=BT=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q3vcQ-000857-JK
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:26:18 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0657eb9b-fecc-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:26:15 +0200 (CEST)
X-Inumbo-ID: 0657eb9b-fecc-11ed-b231-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
In-Reply-To: <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name> <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx> <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx> <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
Date: Tue, 30 May 2023 11:26:03 +0200
Message-ID: <87pm6ijhlg.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Mon, May 29 2023 at 23:31, Kirill A. Shutemov wrote:
> Aaand the next patch that breaks TDX boot is... <drum roll>
>
> 	x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it
>
> Disabling parallel bringup helps. I didn't look closer yet. If you have
> an idea let me know.

So how does TDX end up with actual parallel bringup?

	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
		return false;
	}

It should take that path, no?
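For reference, the quoted gate can be modelled as a minimal standalone sketch. The `cc_platform_has()` stub and the `guest_state_encrypted` flag here are hypothetical stand-ins for the kernel's real per-platform attribute plumbing in `linux/cc_platform.h`; only the control flow mirrors the quoted code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the kernel's enum cc_attr; SEV-ES sets this attribute,
 * and the question in the thread is whether TDX does too. */
enum cc_attr { CC_ATTR_GUEST_STATE_ENCRYPT };

/* Stand-in platform state, normally derived from CPUID/MSR probing. */
static bool guest_state_encrypted;

static bool cc_platform_has(enum cc_attr attr)
{
    return attr == CC_ATTR_GUEST_STATE_ENCRYPT && guest_state_encrypted;
}

/* Mirrors the quoted gate: refuse parallel bringup when the guest
 * register state is encrypted, since the startup path cannot read the
 * APIC ID via plain MSR accesses there. */
static bool parallel_bringup_allowed(void)
{
    if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
        printf("Parallel CPU startup disabled due to guest state encryption\n");
        return false;
    }
    return true;
}
```

If TDX does not set the attribute (as established later in the thread), the gate is skipped and parallel bringup proceeds, which is exactly the reported breakage.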

Thanks,

        tglx



From xen-devel-bounces@lists.xenproject.org Tue May 30 09:27:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:27:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541030.843310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vdC-0000Al-4T; Tue, 30 May 2023 09:27:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541030.843310; Tue, 30 May 2023 09:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vdC-0000Ae-0y; Tue, 30 May 2023 09:27:06 +0000
Received: by outflank-mailman (input) for mailman id 541030;
 Tue, 30 May 2023 09:27:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8r7=BT=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q3vdA-0000AD-O0
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:27:04 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 23103748-fecc-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 11:27:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23103748-fecc-11ed-b231-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1685438823;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=1NMbaaXeWKdobW1XvAcSDK/dQsnYJNRF0z8NSxF7l9g=;
	b=3WlfyRf4MaHC/vnZ5XtmVOPXOLK+ZAlD6lhDlx3nxtYIx2aAF4vsyl4Fynj3non8rbgMrN
	sNrp+9JQLFYFbN/XqXCv9byGDKlv0RHWDCZwas9ku3sVJVgKghH6SR2F8V6+xNDQOkLcyD
	5qqJAyewlvw6f3Z/VcU7uuPM8NAolzMZEkhAvRU0VhntTSbYmIp8ge4zTpPWzrBil5oYlf
	DgeSLbc5m4g4ajmFKq5V5ym/tqWGbgOPaXXQPMUx1NtL/3KwVT8w9aL6e8LA+TS2nYZbmJ
	Q7LfBFEK9KhVtex48u51MPzAYtklC43xcJIdIGSYeuPXzECaBk3uOeXt5h/Wqg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1685438823;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=1NMbaaXeWKdobW1XvAcSDK/dQsnYJNRF0z8NSxF7l9g=;
	b=GRvc8oucVrh5HH/FHNK9eWoRJUeKKRlfRG0kkGtIqetJQ9TTAPtqn+q5w5PfQy3yr3tFYY
	syTijd9671JLttAA==
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
In-Reply-To: <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name> <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx> <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx> <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name>
Date: Tue, 30 May 2023 11:26:52 +0200
Message-ID: <87mt1mjhk3.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 30 2023 at 03:54, Kirill A. Shutemov wrote:
> On Mon, May 29, 2023 at 11:31:29PM +0300, Kirill A. Shutemov wrote:
>> Disabling parallel bringup helps. I didn't look closer yet. If you have
>> an idea let me know.
>
> Okay, it crashes around .Lread_apicid due to touching MSRs that trigger #VE.
>
> Looks like the patch had no intention to enable parallel bringup on TDX.
>
> +        * Intel-TDX has a secure RDMSR hypercall, but that needs to be
> +        * implemented separately in the low level startup ASM code.
>
> But CC_ATTR_GUEST_STATE_ENCRYPT, which used to filter it out, is an
> SEV-ES-specific attribute and doesn't cover TDX. I don't think we have an
> attribute that fits nicely here.

Bah. That sucks.


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:40:27 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:40:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541035.843320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vps-0002kT-Dj; Tue, 30 May 2023 09:40:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541035.843320; Tue, 30 May 2023 09:40:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vps-0002kM-9r; Tue, 30 May 2023 09:40:12 +0000
Received: by outflank-mailman (input) for mailman id 541035;
 Tue, 30 May 2023 09:40:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3vpr-0002kG-UC
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:40:11 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20619.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::619])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f7b3aac5-fecd-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:40:09 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM7PR04MB7062.eurprd04.prod.outlook.com (2603:10a6:20b:122::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 09:40:08 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 09:40:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7b3aac5-fecd-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=efMjdv7Y7y35+6rF8/j3SdstbkG7NBHek+zW/FeNOi0h7h91vzv53hw5aKjvMUerpnvdJwqAw+z5d/LAwX92iyczNoV3Kcj24YM3V8puoDO3hOCUjOv0/9eIXEtCXZPw73v/twR/wGPtU0L3vWFXzYO7RPHuBopHZjoezzuO38vpc6gFGEb9kD0kYiweg6A3oYBiS4Hlc8rtmm/5pIgPJvxQdyzCSJcDXPPq+yG84O+64GOzBN/zX/w1wgN19ardeVUwwe5n9XGzDWPD8edQaxKCVtUDRsPAZSBtr0nMNJNypqEJghaRU24pVmkPrkAyoIZRZiXgn3F41041ulHlGw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QPlRds1aKkS9Jt2WZWKGp/RmajI79kjJd7AQIbdsjcQ=;
 b=chUIVMNroHsgzIP5EEU7rwK9pB0/fuZisTVuDv5EAtTkOpZkagkLprOBJYKyxwLb0S58NqSXNMlm9+lyVDq9rR/ofckYPejF1jD93zJoB6YLpV1ITaul6S9FcjesITl0i8SfZovo4yIVOk8c2vEnNIw2zE+PVECA2kzqalajHHs8Xi/6h2s1vhKgq0gJ24LIij9JuAH4819KypLqyz5pJZvVyOqCLdF/s8+KI+3LDbm1J3oD/B8lH72zRzYtDDvDYjNoyqaXvDJ4vZGJ+guTO/AgzZZXzLtVcHCmblC431kFm8+5zsg/k1PpC7rjSIobVR55X5UJxOsh3CI6wur2Dw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QPlRds1aKkS9Jt2WZWKGp/RmajI79kjJd7AQIbdsjcQ=;
 b=mdGB6BX4FZu4w/CwqSxTM1jR+S0rqKt2mgl+n/OCZPGCH7OGaDwF/DS+hpzvoJ+81eyPLw1LkpfdXKzwEwS0fM24eR3tQ4SmRFnLGpI5dM4MGOFr6RNclynTFjIosNp6LUDYAqAuZx2oS/d9aaO2t4q1Xot1f/RoQH2StokP5SfXAOkBzStVLYls7IN9Amn7f7u5r3T5ZBOWJxjzfqmkTK5o/vFoXKXSbGYpM6wek5HEs+wD/yh++FS36k6Sv/RHbtWyvR5P8gHqrYifiiagMWwr2GqUU9L6TfC5v97Q/StgV/z0Cx9Bye80Z4OCelXbM4D6/W4IfZ8iW7gdkg/a3Q==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <8d31e4a3-3f3c-f7a2-8d7d-0b0febf17f65@suse.com>
Date: Tue, 30 May 2023 11:40:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 4/4] x86/cpu-policy: Derive {,R}RSBA for guest policies
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-6-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230526110656.4018711-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0113.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a8::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM7PR04MB7062:EE_
X-MS-Office365-Filtering-Correlation-Id: ce34007f-377a-4ec4-9950-08db60f1dad5
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ce34007f-377a-4ec4-9950-08db60f1dad5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 09:40:07.9366
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: a8CWCGthdpAopX2NRvVSHNXyITFmiZjrrWZhc/dEqOS9LijPgZQDN8l3v5tP5SyC4A4B6/ANGRgSSkmk/g1p1A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR04MB7062

On 26.05.2023 13:06, Andrew Cooper wrote:
> The RSBA bit, "RSB Alternative", means that the RSB may use alternative
> predictors when empty.  From a practical point of view, this means "Retpoline
> not safe".
> 
> Enhanced IBRS (officially IBRS_ALL in Intel's docs, previously IBRS_ATT) is a
> statement that IBRS is implemented in hardware (as opposed to the form
> retrofitted to existing CPUs in microcode).
> 
> The RRSBA bit, "Restricted-RSBA", is a combination of RSBA, and the eIBRS
> property that predictions are tagged with the mode in which they were learnt.
> Therefore, it means "when eIBRS is active, the RSB may fall back to
> alternative predictors but restricted to the current prediction mode".  As
> such, it's a stronger statement than RSBA, but still means "Retpoline not safe".

Just for my own understanding: whether retpoline is safe under RRSBA
depends on the level of control a less privileged entity has over a more
privileged entity's alternative predictor state, doesn't it? If so, maybe
add "probably" to the quoted statement?

> --- a/xen/arch/x86/cpu-policy.c
> +++ b/xen/arch/x86/cpu-policy.c
> @@ -423,8 +423,14 @@ static void __init guest_common_max_feature_adjustments(uint32_t *fs)
>           * Retpoline not safe)", so these need to be visible to a guest in all
>           * cases, even when it's only some other server in the pool which
>           * suffers the identified behaviour.
> +         *
> +         * We can always run any VM which has previously (or will
> +         * subsequently) run on hardware where Retpoline is not safe.  Note:
> +         * The dependency logic may hide RRSBA for other reasons.
>           */
>          __set_bit(X86_FEATURE_ARCH_CAPS, fs);
> +        __set_bit(X86_FEATURE_RSBA, fs);
> +        __set_bit(X86_FEATURE_RRSBA, fs);
>      }
>  }

Similar question to what I raised before: Can't this lead to both bits being
set, which according to descriptions earlier in the series and elsewhere
ought to not be possible?

> --- a/xen/tools/gen-cpuid.py
> +++ b/xen/tools/gen-cpuid.py
> @@ -318,7 +318,7 @@ def crunch_numbers(state):
>          # IBRSB/IBRS, and we pass this MSR directly to guests.  Treating them
>          # as dependent features simplifies Xen's logic, and prevents the guest
>          # from seeing implausible configurations.
> -        IBRSB: [STIBP, SSBD, INTEL_PSFD],
> +        IBRSB: [STIBP, SSBD, INTEL_PSFD, EIBRS],

Is this really an architecturally established dependency? From an abstract
point of view, having just eIBRS ought to be enough of an indicator, and
hence it would be wrong to hide it just because IBRSB isn't also set. Plus,
as I understand it, ...

> @@ -328,6 +328,9 @@ def crunch_numbers(state):
>  
>          # The ARCH_CAPS CPUID bit enumerates the availability of the whole register.
>          ARCH_CAPS: list(range(RDCL_NO, RDCL_NO + 64)),
> +
> +        # The behaviour described by RRSBA depends on eIBRS being active.
> +        EIBRS: [RRSBA],

... for the purpose of the change here this dependency is all you need.
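The transitive clearing the review is talking about can be illustrated with a toy model. The feature names below match the hunks above, but the dependency table and the clearing routine are an illustrative C sketch, not Xen's actual gen-cpuid.py implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative feature numbers, not the real X86_FEATURE_* values. */
enum { IBRSB, STIBP, SSBD, INTEL_PSFD, EIBRS, RRSBA, NR_FEATURES };

/* Direct dependents, per the hunks under review:
 * IBRSB gates STIBP/SSBD/INTEL_PSFD and (with the patch) EIBRS;
 * EIBRS gates RRSBA. */
static const int deps[NR_FEATURES][4] = {
    [IBRSB] = { STIBP, SSBD, INTEL_PSFD, EIBRS },
    [EIBRS] = { RRSBA },
};
static const int ndeps[NR_FEATURES] = {
    [IBRSB] = 4,
    [EIBRS] = 1,
};

/* Clearing a feature clears everything reachable from it, which is why
 * making EIBRS depend on IBRSB also hides RRSBA when IBRSB is clear. */
static void clear_feature(bool fs[NR_FEATURES], int f)
{
    if (!fs[f])
        return;
    fs[f] = false;
    for (int i = 0; i < ndeps[f]; i++)
        clear_feature(fs, deps[f][i]);
}
```

Under this model, the EIBRS -> RRSBA edge alone is enough for the RRSBA hunk, while the extra IBRSB -> EIBRS edge is what the question above challenges.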

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:45:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 09:45:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541039.843330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vuW-0003Lz-VL; Tue, 30 May 2023 09:45:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541039.843330; Tue, 30 May 2023 09:45:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3vuW-0003Ls-Rg; Tue, 30 May 2023 09:45:00 +0000
Received: by outflank-mailman (input) for mailman id 541039;
 Tue, 30 May 2023 09:45:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3vuW-0003Lm-93
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 09:45:00 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a37a97bd-fece-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 11:44:58 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7825.eurprd04.prod.outlook.com (2603:10a6:20b:24f::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 09:44:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 09:44:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a37a97bd-fece-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NnXpTpWk1rbmVRrHDcGIhCCmbLm/w2iQ8NfrD64/kRoSUIo/WgPF847lH3JUhRMOmZHQvJO1TsySXaNBb4JX+/nWv4QV+eWEWkp5ocScGL3bvqq+5BZsNLXrrEMiaXy7di1xlGEwKytWOoPQ0mQJkvb9eayhRb9YWvvM+imktX+IPC2zaZDugf3zSBpuMgGoLGdaaJ5xdWD9YsEKp7v64DqlNGS0CMkDTNgNlc6r7xSelGpi0LheCaT7SZORnW8Xbl2iZR5E0fRHLZEtfrw+WbNRiwp9ulz3y0/RKGENX181KD1dA7JInZgrtt8YlMPbG/JI1RoP+iNWK991bsKv7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=EtV/UBaCqfDObDGTLdONMnijnY3rZTsyyJjrQuedfww=;
 b=amXOQenF+qFT704DiqSyodO2JjIwRxsKukTCsdy+RSisoN1U47BK9sZRXz9w9mwzwoKVlOLuaWSsAiTyTMf2QPuMDBo31mXNDP6vXxFe9f+mBex7/3uihiK2zmZKYxrgbfmTmFptLPpcC7y6wRtdRliYLepp2HAqrn5adKCemOS8xiXuvd2pLsVGcW6u+fKSo4zjCzNDRmBVqpswYv+Pm4cORY7f/nGL1nbBxJUrdDR/++HEx1PhldYTfVKgjfARh8KY41eIyj3OgccZ+549w6Nvr0FIdDZy7fo0r56XIU8/L+QiffKuV7EJE46blKs/1K2O0gTqMLNCprTYB2BZRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EtV/UBaCqfDObDGTLdONMnijnY3rZTsyyJjrQuedfww=;
 b=plQ9bugyp0ElbDHwLbEZa3wwkn7ePC98jOCHM2Qhg2wv56hSNr7sC/TYKXf4YscH6EKwyJ6wo6hHXFBRwM8dujULi2OoRMghU1O2dbyXqrt85Cn3JJGsixF9M6wz0NYGSCYBOtKunDvDiyOxoIqc/4l7VSkSBVY3ajGmeITlll1jz2Ak5TTbs7tFh9A/vOIyxd8v+NDrM8M0UaV9mNkEctPymGnSQBkToiXHY86+Bm5s6PmU3WfvZbHZB36QM1V6kcu45yz3s4rAuIwUKxXFLK2xoZAvJ50bXTh+1sFxf3nt9EZiV09VBiBWsr6Q4OUCAhqWPILBsfBqnbctR8ya9A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <830fb3cc-6ade-f10e-2a0a-95b676c63d87@suse.com>
Date: Tue, 30 May 2023 11:44:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4zx+TvUWTCEMh3@Air-de-Roger>
 <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
 <ZG94c9y4j4udFmsy@Air-de-Roger>
 <cedbc257-9ad9-56f1-5060-eaf173d45760@suse.com>
 <ZHRdjCKSVtWVkX96@Air-de-Roger>
 <25663dac-6023-a9a7-a495-c995762191d8@suse.com>
 <ZHW+Fu99ZGHPgMj+@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHW+Fu99ZGHPgMj+@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0093.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7825:EE_
X-MS-Office365-Filtering-Correlation-Id: 57d4156f-9764-45d0-b06a-08db60f285ac
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 57d4156f-9764-45d0-b06a-08db60f285ac
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 09:44:54.5566
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9BBlK0ixlkoZqhyX/P7laCfVVXoUqHEoGgLqmIZmd7c6ApEKu0dMzAlEw8utjDBOmHP4QFLihX/Dwq0ncdIABw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7825

On 30.05.2023 11:12, Roger Pau Monné wrote:
> On Tue, May 30, 2023 at 10:45:09AM +0200, Jan Beulich wrote:
>> On 29.05.2023 10:08, Roger Pau Monné wrote:
>>> On Thu, May 25, 2023 at 05:30:54PM +0200, Jan Beulich wrote:
>>>> On 25.05.2023 17:02, Roger Pau Monné wrote:
>>>>> On Thu, May 25, 2023 at 04:39:51PM +0200, Jan Beulich wrote:
>>>>>> On 24.05.2023 17:56, Roger Pau Monné wrote:
>>>>>>> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
>>>>>>>> --- a/xen/drivers/vpci/header.c
>>>>>>>> +++ b/xen/drivers/vpci/header.c
>>>>>>>> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>>>>>>>>      struct vpci_header *header = &pdev->vpci->header;
>>>>>>>>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>>>>>>>>      struct pci_dev *tmp, *dev = NULL;
>>>>>>>> +    const struct domain *d;
>>>>>>>>      const struct vpci_msix *msix = pdev->vpci->msix;
>>>>>>>>      unsigned int i;
>>>>>>>>      int rc;
>>>>>>>> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
>>>>>>>>  
>>>>>>>>      /*
>>>>>>>>       * Check for overlaps with other BARs. Note that only BARs that are
>>>>>>>> -     * currently mapped (enabled) are checked for overlaps.
>>>>>>>> +     * currently mapped (enabled) are checked for overlaps. Note also that
>>>>>>>> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
>>>>>>>>       */
>>>>>>>> -    for_each_pdev ( pdev->domain, tmp )
>>>>>>>> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
>>>>>>>
>>>>>>> Looking at this again, I think this is slightly more complex, as during
>>>>>>> runtime dom0 will get here with pdev->domain == hardware_domain OR
>>>>>>> dom_xen, and hence you also need to account that devices that have
>>>>>>> pdev->domain == dom_xen need to iterate over devices that belong to
>>>>>>> the hardware_domain, ie:
>>>>>>>
>>>>>>> for ( d = pdev->domain; ;
>>>>>>>       d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )
>>>>>>
>>>>>> Right, something along these lines. To keep loop continuation expression
>>>>>> and exit condition simple, I'll probably prefer
>>>>>>
>>>>>> for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
>>>>>>       ; d = dom_xen )
>>>>>
>>>>> LGTM.  I would add parentheses around the pdev->domain != dom_xen
>>>>> condition, but that's just my personal taste.
>>>>>
>>>>> We might want to add an
>>>>>
>>>>> ASSERT(pdev->domain == hardware_domain || pdev->domain == dom_xen);
>>>>>
>>>>> here, just to remind that this chunk must be revisited when adding
>>>>> domU support (but you can also argue we haven't done this elsewhere),
>>>>> I just feel here it's not so obvious we don't want to do this for
>>>>> domUs.
>>>>
>>>> I could add such an assertion, if only ...
>>>>
>>>>>>> And we likely want to limit this to devices that belong to the
>>>>>>> hardware_domain or to dom_xen (in preparation for vPCI being used for
>>>>>>> domUs).
>>>>>>
>>>>>> I'm afraid I don't understand this remark, though.
>>>>>
>>>>> This was looking forward to domU support, so that you already cater
>>>>> for pdev->domain not being hardware_domain or dom_xen, but we might
>>>>> want to leave that for later, when domU support is actually
>>>>> introduced.
>>>>
>>>> ... I understood why this checking doesn't apply to DomU-s as well,
>>>> in your opinion.
>>>
>>> It's my understanding that domUs can never get hidden or read-only
>>> devices assigned, and hence there is no need to check for overlap with
>>> devices assigned to dom_xen, as those cannot have any BARs mapped in
>>> a domU physmap.
>>>
>>> So for domUs the overlap check only needs to be performed against
>>> devices assigned to pdev->domain.
>>
>> I fully agree, but the assertion you suggested doesn't express that. Or
>> maybe I'm misunderstanding what you did suggest, and there was an
>> implication of some further if() around it.
> 
> Maybe I'm getting myself confused, but if you add something like:
> 
> for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
>       ; d = dom_xen )
> 
> Such a loop would need to be avoided for domUs, so my suggestion was to
> add the assert in order to remind us that the loop would need
> adjusting if we ever add domU support.  But maybe you already had
> plans to restrict the loop to dom0 only.

Not really, no, but at the bottom of the loop I also have

        if ( !is_hardware_domain(d) )
            break;
    }

(still mis-formatted in the v2 patch). I.e. restricting to Dom0 goes
only as far as the 2nd loop iteration.
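Putting the pieces of this subthread together, the loop shape under discussion can be sketched as a standalone simulation. The string pointers below are hypothetical stand-ins for Xen's struct domain pointers (dom_xen, hardware_domain), not the real types; the real loop body would run the for_each_pdev() overlap checks.

```c
/* Hypothetical stand-ins for Xen's domain pointers. */
static const char *dom_xen = "dom_xen";
static const char *hardware_domain = "dom0";

static int is_hardware_domain(const char *d)
{
    return d == hardware_domain;
}

/*
 * Record, in order, the domains whose devices modify_bars() would scan
 * for BAR overlaps, per the loop shape discussed above.  Returns the
 * number of iterations taken.
 */
static int visit_domains(const char *pdev_domain, const char *out[2])
{
    const char *d;
    int n = 0;

    for ( d = pdev_domain != dom_xen ? pdev_domain : hardware_domain;
          ; d = dom_xen )
    {
        out[n++] = d;
        /* ... for_each_pdev(d, tmp) overlap checks would go here ... */
        if ( !is_hardware_domain(d) )
            break;
    }
    return n;
}
```

With this shape, a Dom0-owned or hidden (DomXEN-owned) device scans Dom0's devices first and DomXEN's second, while a hypothetical domU-owned device scans only its own domain's devices, because the bottom-of-loop check breaks out after the first iteration.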

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:55:09 2023
Message-ID: <5a5454a1-9ef2-1100-aefd-bbb0267f5339@digikod.net>
Date: Tue, 30 May 2023 11:54:47 +0200
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
To: Trilok Soni <quic_tsoni@quicinc.com>, Borislav Petkov <bp@alien8.de>,
 Dave Hansen <dave.hansen@linux.intel.com>, "H . Peter Anvin"
 <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
 Kees Cook <keescook@chromium.org>, Paolo Bonzini <pbonzini@redhat.com>,
 Sean Christopherson <seanjc@google.com>, Thomas Gleixner
 <tglx@linutronix.de>, Vitaly Kuznetsov <vkuznets@redhat.com>,
 Wanpeng Li <wanpengli@tencent.com>
Cc: Alexander Graf <graf@amazon.com>, Forrest Yuan Yu <yuanyu@google.com>,
 James Morris <jamorris@linux.microsoft.com>,
 John Andersen <john.s.andersen@intel.com>,
 "Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
 Marian Rotariu <marian.c.rotariu@gmail.com>,
 Mihai Donțu <mdontu@bitdefender.com>,
 Nicușor Cîțu <nicu.citu@icloud.com>,
 Rick Edgecombe <rick.p.edgecombe@intel.com>,
 Thara Gopinath <tgopinath@microsoft.com>, Will Deacon <will@kernel.org>,
 Zahra Tarkhani <ztarkhani@microsoft.com>,
 Ștefan Șicleru <ssicleru@bitdefender.com>,
 dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
 linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
 qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
 x86@kernel.org, xen-devel@lists.xenproject.org
References: <20230505152046.6575-1-mic@digikod.net>
 <1e10da25-5704-18ee-b0ce-6de704e6f0e1@quicinc.com>
 <0b069bc3-0362-d8ec-fc2a-05dd65218c39@digikod.net>
 <e17da8f4-4d5d-adb7-02c9-631ffdfc9037@quicinc.com>
From: Mickaël Salaün <mic@digikod.net>
In-Reply-To: <e17da8f4-4d5d-adb7-02c9-631ffdfc9037@quicinc.com>


On 25/05/2023 20:34, Trilok Soni wrote:
> On 5/25/2023 6:25 AM, Mickaël Salaün wrote:
>>
>> On 24/05/2023 23:04, Trilok Soni wrote:
>>> On 5/5/2023 8:20 AM, Mickaël Salaün wrote:
>>>> Hi,
>>>>
>>>> This patch series is a proof-of-concept that implements new KVM features
>>>> (extended page tracking, MBEC support, CR pinning) and defines a new
>>>> API to
>>>> protect guest VMs. No VMM (e.g., Qemu) modification is required.
>>>>
>>>> The main idea being that kernel self-protection mechanisms should be
>>>> delegated
>>>> to a more privileged part of the system, hence the hypervisor. It is
>>>> still the
>>>> role of the guest kernel to request such restrictions according to its
>>>
>>> Only for the guest kernel images here? Why not for the host OS kernel?
>>
>> As explained in the Future work section, protecting the host would be
>> useful, but that doesn't really fit with the KVM model. The Protected
>> KVM project is a first step to help in this direction [11].
>>
>> In a nutshell, KVM is close to a type-2 hypervisor, and the host kernel
>> is also part of the hypervisor.
>>
>>
>>> Embedded devices w/ Android you have mentioned below support the host
>>> OS as well it seems, right?
>>
>> What do you mean?
> 
> I think you have answered this above w/ pKVM and I was referring to the
> host protection as well w/ Heki. The links/references below refer to the
> Android OS it seems and not the guest VM.
> 
>>
>>
>>>
>>> Do we suggest that all the functionalities should be implemented in the
>>> Hypervisor (NS-EL2 for ARM) or even at Secure EL like Secure-EL1 (ARM).
>>
>> KVM runs in EL2. TrustZone is mainly used to enforce DRM, which means
>> that we may not control the related code.
>>
>> This patch series is dedicated to hypervisor-enforced kernel integrity,
>> then KVM.
>>
>>>
>>> I am hoping that whatever we suggest the interface here from the Guest
>>> to the Hypervisor becomes the ABI right?
>>
>> Yes, hypercalls are part of the KVM ABI.
> 
> Sure. I just hope that they are extensible enough to support other
> Hypervisors too. I am not sure if others like ACRN / Xen are on this
> list to see if it fits their needs too.

KVM, Hyper-V and Xen mailing lists are CCed. The KVM hypercalls are
specific to KVM, but this patch series also includes a common guest API
intended to be used with all hypervisors.
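To make the "common guest API" idea concrete, here is a purely hypothetical sketch of how a guest-side abstraction could dispatch a protection request to a hypervisor-specific backend. None of these names (heki_hypervisor, heki_protect, the HEKI_ATTR_* flags) are taken from the patch series; they are illustrative only.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical protection attributes a guest might request. */
#define HEKI_ATTR_EXEC_ONLY   (1u << 0)
#define HEKI_ATTR_READ_ONLY   (1u << 1)

/*
 * Hypervisor-specific backend; each hypervisor (KVM, Hyper-V, Xen, ...)
 * would supply its own hypercall implementation behind this interface.
 */
struct heki_hypervisor {
    int (*protect_range)(uint64_t gpa, size_t len, unsigned int attrs);
};

static const struct heki_hypervisor *heki_backend;

/* Common entry point shared by all backends. */
static int heki_protect(uint64_t gpa, size_t len, unsigned int attrs)
{
    if ( !heki_backend || !heki_backend->protect_range )
        return -1;              /* no hypervisor support detected */
    return heki_backend->protect_range(gpa, len, attrs);
}

/* A dummy backend that accepts every request, for demonstration. */
static int null_protect(uint64_t gpa, size_t len, unsigned int attrs)
{
    (void)gpa; (void)len; (void)attrs;
    return 0;
}

static const struct heki_hypervisor demo_backend = {
    .protect_range = null_protect,
};
```

The point of such a layer is exactly what is said above: the guest code stays hypervisor-agnostic, and only the backend behind the function pointer changes per hypervisor.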


> 
> Is there any other Hypervisor you plan to test this feature as well?

We're also working on Hyper-V.

> 
>>
>>>
>>>
>>>>
>>>> # Current limitations
>>>>
>>>> The main limitation of this patch series is the statically enforced
>>>> permissions. This is not an issue for kernels without modules but this
>>>> needs to
>>>> be addressed.  Mechanisms that dynamically impact kernel executable
>>>> memory are
>>>> not handled for now (e.g., kernel modules, tracepoints, eBPF JIT),
>>>> and such
>>>> code will need to be authenticated.  Because the hypervisor is highly
>>>> privileged and critical to the security of all the VMs, we don't want to
>>>> implement a code authentication mechanism in the hypervisor itself
>>>> but delegate
>>>> this verification to something much less privileged. We are thinking
>>>> of two
>>>> ways to solve this: implement this verification in the VMM or spawn a
>>>> dedicated
>>>> special VM (similar to Windows's VBS). There are pros and cons to each
>>>> approach:
>>>> complexity, verification code ownership (guest's or VMM's), access to
>>>> guest
>>>> memory (i.e., confidential computing).
>>>
>>> Do you foresee the performance regressions due to lot of tracking here?
>>
>> The performance impact of execution prevention should be negligible
>> because once configured the hypervisor does nothing except catch
>> illegitimate access attempts.
> 
> Yes, if you are using the static kernel only and not considering the
> other dynamic patching features as explained. They need to be thought
> about differently to reduce the likely impact.

What do you mean? We plan to support dynamic code, and performance is of 
course part of the requirement.


> 
>>
>>
>>> Production kernels do have lot of tracepoints and we use it as feature
>>> in the GKI kernel for the vendor hooks implementation and in those cases
>>> every vendor driver is a module.
>>
>> As explained in this section, dynamic kernel modifications such as
>> tracepoints or modules are not currently supported by this patch series.
>> Handling tracepoints is possible but requires more work to define and
>> check legitimate changes. This proposal is still useful for static
>> kernels though.
>>
>>
>>> Separate VM further fragments this
>>> design and delegates more of it to proprietary solutions?
>>
>> What do you mean? KVM is not a proprietary solution.
> 
> Ah, I was referring to the VBS Windows VM mentioned in the above text. Is
> it open-source? The reference to a VM (or dedicated VM) didn't mention
> that the VM itself will be open-source, running a Linux kernel.

This patch series is dedicated to KVM. Windows VBS was only mentioned as
a comparable (but much more advanced) set of features. Everything
required to use these new KVM features is and will be open-source. There
is nothing to worry about regarding licensing; the goal is to make them
widely and freely available to protect users.


> 
>>
>> For dynamic checks, this would require code not run by KVM itself, but
>> either the VMM or a dedicated VM. In this case, the dynamic
>> authentication code could come from the guest VM or from the VMM itself.
>> In the former case, it is more challenging from a security point of view
>> but doesn't rely on an external (proprietary) solution. In the latter case,
>> open-source VMMs should implement the specification to provide the
>> required service (e.g. check kernel module signature).
>>
>> The goal of the common API layer provided by this RFC is to share code
>> as much as possible between different hypervisor backends.
>>
>>
>>>
>>> Do you have any performance numbers w/ current RFC?
>>
>> No, but the only hypervisor performance impact is at boot time and
>> should be negligible. I'll try to get some numbers for the
>> hardware-enforcement impact, but it should be negligible too.
> 
> Thanks. Please share the data once you have it ready.

It's on my todo list, but again, that should not be an issue and I even
doubt the difference will be measurable.


From xen-devel-bounces@lists.xenproject.org Tue May 30 09:57:31 2023
Message-ID: <6475c87b.5d0a0220.132eb.a83f@mx.google.com>
Date: Tue, 30 May 2023 10:57:13 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/3] x86: Add support for AMD's Automatic IBRS
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
 <20230526150044.31553-3-alejandro.vallejo@cloud.com>
 <64820a10-622e-dc10-988b-613542349ec9@suse.com>
In-Reply-To: <64820a10-622e-dc10-988b-613542349ec9@suse.com>

On Tue, May 30, 2023 at 10:25:36AM +0200, Jan Beulich wrote:
> On 26.05.2023 17:00, Alejandro Vallejo wrote:
> > --- a/xen/arch/x86/smpboot.c
> > +++ b/xen/arch/x86/smpboot.c
> > @@ -376,6 +376,9 @@ void start_secondary(void *unused)
> >      {
> >          wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
> >          info->last_spec_ctrl = default_xen_spec_ctrl;
> > +
> > +        if ( cpu_has_auto_ibrs && (default_xen_spec_ctrl & SPEC_CTRL_IBRS) )
> > +            write_efer(read_efer() | EFER_AIBRSE);
> >      }
> 
> Did you consider using trampoline_efer instead, which would then also take
> care of the S3 resume path (which otherwise I think you'd also need to
> fiddle with)?
> 
> Jan
I didn't because I didn't know about it. Good call though, it's indeed a
better place for it. Will do on v2.

Alejandro
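As a rough standalone sketch of what the trampoline_efer suggestion amounts to: fold the bit in once at mitigation-selection time, so every AP bring-up and the S3 resume path pick it up from the trampoline, instead of writing EFER separately in start_secondary(). The EFER/SPEC_CTRL bit positions below match the architectural definitions, but the function name and initial trampoline_efer value are hypothetical, not the real Xen code.

```c
#include <stdint.h>

/* Architectural bit positions. */
#define EFER_LME        (1u << 8)
#define EFER_NXE        (1u << 11)
#define EFER_AIBRSE     (1u << 21)   /* AMD Automatic IBRS Enable */

#define SPEC_CTRL_IBRS  (1u << 0)

/*
 * Hypothetical stand-in for Xen's trampoline_efer: the EFER value loaded
 * by the AP/S3-resume trampoline, so bits folded in here apply to every
 * secondary CPU bring-up and to resume, with no extra per-CPU code.
 */
static uint64_t trampoline_efer = EFER_LME | EFER_NXE;

static void setup_auto_ibrs(int cpu_has_auto_ibrs,
                            uint64_t default_xen_spec_ctrl)
{
    if ( cpu_has_auto_ibrs && (default_xen_spec_ctrl & SPEC_CTRL_IBRS) )
        trampoline_efer |= EFER_AIBRSE;
}
```

Folding the bit in once means the S3 resume path needs no separate handling, which is the advantage Jan points out above.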


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:01:21 2023
Message-ID: <aedc61dd-7c9a-f1dd-6430-fc3c56624a39@citrix.com>
Date: Tue, 30 May 2023 11:00:54 +0100
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/4] x86/spec-ctrl: Synthesize RSBA/RRSBA bits with older
 microcode
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-4-andrew.cooper3@citrix.com>
 <3f38d4c1-dac7-611a-1882-a5e6de16d4f9@suse.com>
In-Reply-To: <3f38d4c1-dac7-611a-1882-a5e6de16d4f9@suse.com>
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	p6vL1Sex/B7Z06c8cxfUMx3ixq3Izw82aWSj2t+0bWo7/NR96x6rm2PTaxLvryQdp3cVk7P3r6CzlNDquMcZS5HhboiUW3/i/CzbOphR1FPuGBpNuM2vAmhsXd5VMrZdJ5LQMMIF6fQN67OB77pGTVPVDhzaiAUXJV8gIBA4hfYtAenJA7PLcOp1ZxsZoyhVJPG0Gcbg1G0XNGqLJ+iIr/On4VITI32tKrKnLF0mnSZ43Bk+UGdDKBTgRhylGzoRLoCfIT78ro2mReZ91RGJo7ycwk77bNdAIMHZz0Pq+0BsbrjkNrcRt5VGMJshVhZNa8j58vTiZ84yNLBpA8xiDJGa3g557Gp09cnip5rIFlqmqekCLKfD0Cqw0gmxwptKgVOByJoEesVbPuETNH8P2WY/s5cfBh36cy5EeEGzxXcjpL3qk7VNgttNWpZCzgqBleLMaXm/Ihm7uEMEBKRqbhtRZUbfUlXDRXjTVoKGTna2xU18NtFBRK1fN2jliNGI7eUMty2F5HGEpAQgQ4pEpdMTebJUKeb1ZYMrbPh/rLJsbqJxGD4ATuZdXwdt51jwW+h2YpfT1aQsiV7nAGB/VNi6KEu9R8DTq2SZBKQNC8h+DCnwS06PUqVeDo02vJRZwTpzG337rYiDn2rymOEC6d1r+91356JrewszgABFwursQ1HBZQt1fGID4eYFDNQrTkKRcO+b2aoxGklJ4H3hEdPP1xJucSmKX72Enz1Lciw3lO2gsOm7GUaJafR2FdLqrPyV5m4FoAB3+ZcBAi4SPFn2ebPzXyeoTDupkGP0xXT42dwxFGDauifRTWWt6wGQ8sfxQL1fcmCDZ4WTYdHT2Q==
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c1249ccf-6cfd-4ac3-9e5a-08db60f4c563
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 10:01:00.6987
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1pVNuyTqz75ZJZnFa3uon9rPmC20CBz6IW7f8wKYTOpKEbn2288TAXkaVdgaPcHRHP9wTPxYwebgDXtJa5xiTfzjthO0vO8kplpaTq+Dc/8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR03MB7369

On 30/05/2023 10:18 am, Jan Beulich wrote:
> On 26.05.2023 13:06, Andrew Cooper wrote:
>> @@ -687,6 +697,32 @@ static bool __init retpoline_calculations(void)
>>      if ( safe )
>>          return true;
>>  
>> +    /*
>> +     * The meaning of the RSBA and RRSBA bits has evolved over time.  The
>> +     * agreed-upon meaning at the time of writing (May 2023) is as follows:
>> +     *
>> +     * - RSBA (RSB Alternative) means that an RSB may fall back to an
>> +     *   alternative predictor on underflow.  Skylake uarch and later all have
>> +     *   this property.  Broadwell too, when running microcode versions prior
>> +     *   to Jan 2018.
>> +     *
>> +     * - All eIBRS-capable processors suffer RSBA, but eIBRS also introduces
>> +     *   tagging of predictions with the mode in which they were learned.  So
>> +     *   when eIBRS is active, RSBA becomes RRSBA (Restricted RSBA).
>> +     *
>> +     * Some parts (Broadwell) are not expected to ever enumerate this
>> +     * behaviour directly.  Other parts have differing enumeration with
>> +     * microcode version.  Fix up Xen's idea, so we can advertise them safely
>> +     * to guests, and so toolstacks can level a VM safely for migration.
>> +     */
> If the difference between the two is whether eIBRS is active (as you did
> word it yet more explicitly in e.g. [1]), then ...
>
>> + unsafe_maybe_fixup_rrsba:
>> +    if ( !cpu_has_rrsba )
>> +        setup_force_cpu_cap(X86_FEATURE_RRSBA);
>> +
>> + unsafe_maybe_fixup_rsba:
>> +    if ( !cpu_has_rsba )
>> +        setup_force_cpu_cap(X86_FEATURE_RSBA);
>> +
>>      return false;
>>  }
> ... can both actually be active at the same time? IOW is there a "return
> false" missing ahead of the 2nd label?

I've already got a question out to Intel to this effect.  (I didn't say
the enumeration made much sense...)

> Not having looked at further patches yet it also strikes me as odd that
> each of the two labels is used exactly once only. Leaving the shared
> comment aside, imo this would then better avoid "goto".

They're both used twice, not once.  You asked why it wasn't "return
safe;" in the previous patch?  Well this is why.

> Finally, what use are the two if()s? There's nothing wrong with forcing
> a feature which is already available.

It breaks is_forced_cpu_cap().

Also, I considered having a printk() here.  I've still got it around in
a debug patch, but I decided against it.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:15:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 10:15:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541055.843369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wNd-00007V-3v; Tue, 30 May 2023 10:15:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541055.843369; Tue, 30 May 2023 10:15:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wNd-00007O-16; Tue, 30 May 2023 10:15:05 +0000
Received: by outflank-mailman (input) for mailman id 541055;
 Tue, 30 May 2023 10:15:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3wNc-00007I-9D
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 10:15:04 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20621.outbound.protection.outlook.com
 [2a01:111:f400:fe16::621])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d6b74719-fed2-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 12:15:02 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9205.eurprd04.prod.outlook.com (2603:10a6:20b:44c::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 10:14:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 10:14:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6b74719-fed2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KyTuVu0rt16kQEHnRPl7nOkAOD/MAMjh2S/q/qpp5tZvac8c45JKIdsSx3s4WujO5FkCzMNAbI8yKYR5cWIY8Hq44Tcx5HTNbQgFEvXfrHbtQmzc7fPke1GIbqB3Fhkc1K7ILOB1bpsuBPGpCA55tDABYUAIya24di3Xule9hCqn1zbnMJdn8j67cWzirhQ6bJtaBpu8yss6fQyoSB6H71IWU2h5TQeXzPaMyD8zSTVwgzkxWyUkb4qdE3bNhq/vkCSw0oMH/Kc2fKF6ewli8U9rp5RMECoQQoebKSBX1AwCDm9L7frlbSTQaunD9k086DJzqh1RliQhIIYcbgJ6pQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oRsg96p14A5KH/hpQROyQkDHRPC8rXmb/qsvqJP9Rnk=;
 b=jU8aHXBKy7SEqLtHVorcwxia0APUH0IEJwPR5RJbGQ+KsADhgBksx1W6siYHwixmlamyCEZpIhYiydZNE8ilo80jWCEeXRyXUHY++fx3sxUUkZkqQ7BUNhox1zIDU6QQv9i5ujycJW0VUqmedi8cE6BOQoYmzqaVUfEPkfk06/qwrfcUCboNWFOeHigt+BQrKCH74F1b5QfXDp686HJYvoLTuNkA80x4QlPaE4SsssG24zkxi8QYUkGbID1MbJY20pt+RmzhSFaz4PVrh5SckrUH0EOMPgS4olQbXXLZnMbEdAGkbI/JeDcwpvUqhUQC8QXLlHiuxZrs/VO/5aPVVQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oRsg96p14A5KH/hpQROyQkDHRPC8rXmb/qsvqJP9Rnk=;
 b=TpMfr73OzxPKQTG+UuvJktcF/OkVCeF1bGMVWwJ172gHbkbbWTAztMdeYQeeQoJaoGtvWbv5Ob4QS6eaDRVU6FqOlhkXvxvEDog+B24hz/i595jP+HiSQ/gqWegvoeHbpMVPHxKDkwBGBdGghtOPC+KjAF/N+cZbDoKLALbUuFMRy3TBhUF2lNGkj80yqueBayU7yJlXww09r9pHLGWVzNKyl1F391yu0wd0Vs5e29l7da3H4bCAWqAN2RR1HCLQgvHeUaMEQV9SIeLA+Q863EDy+ssdtq+guBBDmm2WW/EMFNUUY2Br3qNFu8sM50vA72YAej7MhzolDkSrVroLbA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <62d42514-25d3-57f1-f061-0bde197a995b@suse.com>
Date: Tue, 30 May 2023 12:14:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 13/15] build: fix compile.h compiler version command
 line
Content-Language: en-US
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-14-anthony.perard@citrix.com>
 <35D40E55-2D93-431A-8B16-FCFEBBDA25B1@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <35D40E55-2D93-431A-8B16-FCFEBBDA25B1@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0090.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::8) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB9205:EE_
X-MS-Office365-Filtering-Correlation-Id: 2eb2888d-ed5e-46ee-f64a-08db60f6b895
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2eb2888d-ed5e-46ee-f64a-08db60f6b895
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 10:14:58.0102
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 01Ex+1FUXswdZARukih4OkARaWWnlSFsnQlPCJyJoAFeRG+pPUIobyijSbRvmI6uqMb6X+EUDBUVfTcrqKlP2g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB9205

On 24.05.2023 11:43, Luca Fancellu wrote:
> 
> 
>> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>
>> CFLAGS is just from Config.mk; instead, use the flags used to build
>> Xen.
>>
>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>> ---
>>
>> Notes:
>>    I don't know if CFLAGS is even useful there; just --version without the
>>    flags might produce the same result.
>>
>> xen/build.mk | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/build.mk b/xen/build.mk
>> index e2a78aa806..d468bb6e26 100644
>> --- a/xen/build.mk
>> +++ b/xen/build.mk
>> @@ -23,7 +23,7 @@ define cmd_compile.h
>>    -e 's/@@whoami@@/$(XEN_WHOAMI)/g' \
>>    -e 's/@@domain@@/$(XEN_DOMAIN)/g' \
>>    -e 's/@@hostname@@/$(XEN_BUILD_HOST)/g' \
>> -    -e 's!@@compiler@@!$(shell $(CC) $(CFLAGS) --version 2>&1 | head -1)!g' \
>> +    -e 's!@@compiler@@!$(shell $(CC) $(XEN_CFLAGS) --version 2>&1 | head -1)!g' \
>>    -e 's/@@version@@/$(XEN_VERSION)/g' \
>>    -e 's/@@subversion@@/$(XEN_SUBVERSION)/g' \
>>    -e 's/@@extraversion@@/$(XEN_EXTRAVERSION)/g' \
>> -- 
>> Anthony PERARD
>>
>>
> 
> Yes I think Andrew is right, so I guess $(XEN_CFLAGS) can be dropped?
> 
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
> 
> I’ve tested this patch with and without the $(XEN_CFLAGS), so if you drop it you can
> retain my r-by if you want.
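
The note above can be sanity-checked quickly (a sketch; it assumes a GCC-like compiler is installed as gcc, where the first banner line is normally unaffected by ordinary compile flags):

```shell
# Compare the first banner line with and without typical flags;
# if they match, the flags can likely be dropped from the sed command.
with_flags=$(gcc -O2 -Wall --version 2>&1 | head -1)
without=$(gcc --version 2>&1 | head -1)
[ "$with_flags" = "$without" ] && echo "identical"
```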

I'm sorry, I didn't look back here and spot this extra sentence before
committing the edited patch, which as a result I've now put in without
your tags.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:17:26 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 10:17:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541060.843380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wPt-0000l3-J7; Tue, 30 May 2023 10:17:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541060.843380; Tue, 30 May 2023 10:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wPt-0000kw-GW; Tue, 30 May 2023 10:17:25 +0000
Received: by outflank-mailman (input) for mailman id 541060;
 Tue, 30 May 2023 10:17:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4Ox2=BT=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q3wPs-0000kl-JT
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 10:17:24 +0000
Received: from mail-wm1-x336.google.com (mail-wm1-x336.google.com
 [2a00:1450:4864:20::336])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a8f393d-fed3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 12:17:22 +0200 (CEST)
Received: by mail-wm1-x336.google.com with SMTP id
 5b1f17b1804b1-3f6e13940daso45204065e9.0
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 03:17:22 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 l9-20020adfe589000000b00307c8d6b4a0sm2797762wrm.26.2023.05.30.03.17.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 30 May 2023 03:17:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a8f393d-fed3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685441841; x=1688033841;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=bIVTLqmO7KKzS9DWVhRdiRP4b6om65+r3YY+mErbdJE=;
        b=Vbn5wEsq+ZgNLVS2MDIKvLyrCNzMKbD79XHAqvmiyd8cqnHZMw71jr7MIGEEt7YVQS
         d8pRoONkA4ExhQdYh5i2c4v8TLQ5FWH7oPeV/TeUPDzQl0i7q45cLdI6sR03Mi46DMA4
         J8jF7Qbeh3dB0X4vgQtaVT2UugLy64tsRQCEc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685441841; x=1688033841;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bIVTLqmO7KKzS9DWVhRdiRP4b6om65+r3YY+mErbdJE=;
        b=IQ4QE6rXl/ryo6KbeThPHN0w9jP4eDhVxm4TfGOKTDCh4KLsHw4s5P9B87fHrWZQOA
         +PgPrikYNIAIwY75HmcZumy4F/bkot+RQGpW/8O2G5FDDx90ktM3zgieLrAwIylPfgZ6
         mG57AbZdXFX9VlOiPZ6X2uz9+Z8VtilqarLkJQuX4RCP7xyyI7dPsP/MOK9RV0aRgtCr
         Iy94B647ph7inzymwhFAeTo8Fgdzrcf/VZwoLIx+puFhZvL1w2taSBjkiL/eatwI07w1
         8nHkXwzRvcI4EltfXRKQGWUmlbrTU8jEyA53z2m2etyyeRRRvWQJ3jfLxTlhgp5xBEtG
         EBmw==
X-Gm-Message-State: AC+VfDypiCe4wm+bxV2eq/WfkGrPYNW+Z5Bqmi6GMrB6HTeVcr7RFJo3
	gTiRi4Hj19y8pBzRK2zBlSJCy2Cg9KJA1+TXGUk=
X-Google-Smtp-Source: ACHHUZ4zEbxDixjeBxJed2ybv/xVyJumc98wd8grgHFyQNRHYLYPgoNWIQLp5kYn6+1kJNuTt1VveA==
X-Received: by 2002:a7b:c853:0:b0:3f6:45d:28a1 with SMTP id c19-20020a7bc853000000b003f6045d28a1mr1243411wml.14.1685441841378;
        Tue, 30 May 2023 03:17:21 -0700 (PDT)
Message-ID: <6475cd31.df0a0220.7b8e8.ab11@mx.google.com>
X-Google-Original-Message-ID: <ZHXNL0Hz7dnkALMC@EMEAENGAAD19049.>
Date: Tue, 30 May 2023 11:17:19 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] x86: Add support for AMD's Automatic IBRS
References: <20230526150044.31553-1-alejandro.vallejo@cloud.com>
 <20230526150044.31553-3-alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230526150044.31553-3-alejandro.vallejo@cloud.com>

On Fri, May 26, 2023 at 04:00:43PM +0100, Alejandro Vallejo wrote:
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 74e3915a4d..09cfef2676 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -2036,6 +2036,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>          barrier();
>          wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
>          info->last_spec_ctrl = default_xen_spec_ctrl;
> +
> +        if ( cpu_has_auto_ibrs && (default_xen_spec_ctrl & SPEC_CTRL_IBRS) )
> +            write_efer(read_efer() | EFER_AIBRSE);
>      }
After thinking things through, I think I'll get rid of this "delay
AutoIBRS" setting. I initially thought there might have been some
handshake between the newly created dom0 and Xen on this path, but that
doesn't seem to be the case. Given that, I can remove some of this
disjoint logic by setting AIBRSE in the local EFER and trampoline_efer
during init_speculation_mitigations. Then the BSP will have the correct
setting, the APs will pick it up on boot, and S3 wakeups will do the
right thing too.

I'm assuming then bsp_delay_spec_ctrl is there mostly to delay STIBP for as
long as possible?

Alejandro


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:20:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 10:20:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541063.843390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wSD-0001M8-Vj; Tue, 30 May 2023 10:19:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541063.843390; Tue, 30 May 2023 10:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wSD-0001M1-Sn; Tue, 30 May 2023 10:19:49 +0000
Received: by outflank-mailman (input) for mailman id 541063;
 Tue, 30 May 2023 10:19:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3wSC-0001Lv-Re
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 10:19:48 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20601.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8086eb5b-fed3-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 12:19:47 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8514.eurprd04.prod.outlook.com (2603:10a6:20b:341::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 10:19:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 10:19:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8086eb5b-fed3-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mIy4fJ+xmZuxgdfOaQuSQEQGja6ZWlgEmeSXM5VfAQWJUnjjsUMNH1XyCGY5Cxs+gttYyAHkS0Q9MfAZVUH82KT4fd9uvoKN/r4ZxxsbE3Dcqf2SHekaHconemcqx3GJYlFWZ5Eu0ZCC5bjSetHKzpBhZe7PLi5u9Ax6/504zTPdysOLB4FSuahrGou2iFUjj50vKHYHHIR69a+T2IzPtmtDgjvQxSUGbrLa2TLt48hBVepzVKmkg9/sbkrrSkrPDRGH5eAiGfvwRFFrUFPJqEvHkZuyXTuyuc9LVS4wH07V011ZsxpLxVCgD1YvJi/ZJBkiOcXBZAaRiOyUmCbIsg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Vvf5QyvswxG8whsCsjN4fxdDd7c+mulTeP7S3kU1ISs=;
 b=YZI/v5vTEj2zGS6KO/YPvt78FsIH4zlsiBNfFmkfHsV0EZarl7wX6A2wTPNDniBe4mAf6k/4ghMP9FU3AmH/eaEGoBZZK9vrPs9oB8LdZ9cCbkgn4e0WKgtxNh9fhd6fpmHZxM2p15+DQ1QmHLow4Qk44NM1EsaCQEz01+NZYcz5N2/eMcutUBdk2kIuOliPluSZmopxCV609AstkCwYRfpDrsZ9sKQwSCIWKbqz7gPA+mlELE1wVgfb4myXx3KlHdivxcj/zC9cRIzXgsUotKI65bfFSI4kStfljWZpjvOI6x/JpSwonc2FGEEUO7DjtMUvEem95YhC/eQvFsHSHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Vvf5QyvswxG8whsCsjN4fxdDd7c+mulTeP7S3kU1ISs=;
 b=VUfSZ0FeiqqoddbsR7drTKW8by7EdSNxaSiPZdoQXIKufMJ+0gNkkONrADmN6gts6A/1MLXJ2NgkDzG+j0xKHubN02hUozpW1F7ciU7YlO+BxnvW5HLtQqLUfaGqYDbPtPqdipyL0a7yr4eVAbTRoMudKN0wl36uGmNyh5O7qZvZl0NFZl0kgFNHVSi2O7tyMhSouB0QUwDgAuBnsIH0/ZDZGWjWeHKJNcMxZ6s/Rdj0xOdPi1S6MXo+zW3MlKRsKYyEorOuTBKbH+UskgNHEXTkpCvNKj9euuI+GWCmCvs4DVvzMR9Vo8B9inUwLpJ9OYzJ4ksqDj2A7IvUGa6sEA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <938f1e39-147c-471d-964f-7fad0fdba4dd@suse.com>
Date: Tue, 30 May 2023 12:19:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 2/4] x86/spec-ctrl: Synthesize RSBA/RRSBA bits with older
 microcode
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-4-andrew.cooper3@citrix.com>
 <3f38d4c1-dac7-611a-1882-a5e6de16d4f9@suse.com>
 <aedc61dd-7c9a-f1dd-6430-fc3c56624a39@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <aedc61dd-7c9a-f1dd-6430-fc3c56624a39@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0202.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ad::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8514:EE_
X-MS-Office365-Filtering-Correlation-Id: 07a75d28-d226-46a8-1550-08db60f7632f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 07a75d28-d226-46a8-1550-08db60f7632f
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 10:19:44.2239
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: s/DTPcxHazvQbnY9x76k9D2sKKUdMyjyQmSm/y91oAu4aZFnn5iHAoMf/entpCioZVCeoca4MRRp8IfI4lNq5Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8514

On 30.05.2023 12:00, Andrew Cooper wrote:
> On 30/05/2023 10:18 am, Jan Beulich wrote:
>> On 26.05.2023 13:06, Andrew Cooper wrote:
>>> @@ -687,6 +697,32 @@ static bool __init retpoline_calculations(void)
>>>      if ( safe )
>>>          return true;
>>>  
>>> +    /*
>>> +     * The meaning of the RSBA and RRSBA bits have evolved over time.  The
>>> +     * agreed upon meaning at the time of writing (May 2023) is thus:
>>> +     *
>>> +     * - RSBA (RSB Alternative) means that an RSB may fall back to an
>>> +     *   alternative predictor on underflow.  Skylake uarch and later all have
>>> +     *   this property.  Broadwell too, when running microcode versions prior
>>> +     *   to Jan 2018.
>>> +     *
>>> +     * - All eIBRS-capable processors suffer RSBA, but eIBRS also introduces
>>> +     *   tagging of predictions with the mode in which they were learned.  So
>>> +     *   when eIBRS is active, RSBA becomes RRSBA (Restricted RSBA).
>>> +     *
>>> +     * Some parts (Broadwell) are not expected to ever enumerate this
>>> +     * behaviour directly.  Other parts have differing enumeration with
>>> +     * microcode version.  Fix up Xen's idea, so we can advertise them safely
>>> +     * to guests, and so toolstacks can level a VM safely for migration.
>>> +     */
>> If the difference between the two is whether eIBRS is active (as you did
>> word it yet more explicitly in e.g. [1]), then ...
>>
>>> + unsafe_maybe_fixup_rrsba:
>>> +    if ( !cpu_has_rrsba )
>>> +        setup_force_cpu_cap(X86_FEATURE_RRSBA);
>>> +
>>> + unsafe_maybe_fixup_rsba:
>>> +    if ( !cpu_has_rsba )
>>> +        setup_force_cpu_cap(X86_FEATURE_RSBA);
>>> +
>>>      return false;
>>>  }
>> ... can both actually be active at the same time? IOW is there a "return
>> false" missing ahead of the 2nd label?
> 
> I've already got a question out to Intel to this effect.  (I didn't say
> the enumeration made much sense...)
> 
>> Not having looked at further patches yet it also strikes me as odd that
>> each of the two labels is used exactly once only. Leaving the shared
>> comment aside, imo this would then better avoid "goto".
> 
> They're both used twice, not once.  You asked why it wasn't "return
> safe;" in the previous patch?  Well this is why.

Ouch, yes. The labels themselves are used just once each, but there's
important fall-through into them from above.

>> Finally, what use are the two if()s? There's nothing wrong with forcing
>> a feature which is already available.
> 
> It breaks is_forced_cpu_cap().

Hmm, yes, but is that important here? (If you decide to keep the if()s,
which I'm not opposed to, would you mind adding half a sentence to the
description or maybe a brief code comment?)
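The effect of the if() guards on is_forced_cpu_cap() can be sketched with a toy model (a hypothetical illustration, not Xen's actual implementation; the set-based bookkeeping and function bodies here are invented for the sketch): forcing a cap records it in a separate "forced" set, so forcing a feature the hardware already enumerates would make it misreport as synthetic.

```python
# Toy model of the guarded fixup from the quoted hunk (names borrowed from
# the patch; internals are assumptions for illustration only).

enumerated = set()   # features the hardware reports
forced = set()       # features forced on top by the fixup logic


def cpu_has(feat):
    return feat in enumerated


def setup_force_cpu_cap(feat):
    # Forcing both enumerates the feature and records it as forced.
    enumerated.add(feat)
    forced.add(feat)


def is_forced_cpu_cap(feat):
    return feat in forced


def fixup_rsba_rrsba():
    # Mirrors the quoted labels: force RRSBA if not enumerated, then
    # fall through and force RSBA if not enumerated.
    if not cpu_has("RRSBA"):
        setup_force_cpu_cap("RRSBA")
    if not cpu_has("RSBA"):
        setup_force_cpu_cap("RSBA")
```

Without the guards, a part that already enumerates RSBA would have RSBA land in the "forced" set too, which is the breakage of is_forced_cpu_cap() referred to above.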

Jan

> Also, I considered having a printk() here.  I've still got it around in
> a debug patch, but I decided against it.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Tue May 30 10:23:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 10:23:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541067.843401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wVZ-0002n0-G9; Tue, 30 May 2023 10:23:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541067.843401; Tue, 30 May 2023 10:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wVZ-0002mt-Bc; Tue, 30 May 2023 10:23:17 +0000
Received: by outflank-mailman (input) for mailman id 541067;
 Tue, 30 May 2023 10:23:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3wVX-0002mS-T7
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 10:23:15 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on060b.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::60b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fc8c42e2-fed3-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 12:23:15 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB9402.eurprd04.prod.outlook.com (2603:10a6:10:36a::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 10:23:12 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 10:23:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc8c42e2-fed3-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gS1C/lEgDFNcHFd7Vfi5yxn5XvihyXIJUk+yUDqxnCS2iBGn25LdVAoJZKn9HmdCWcFeuh9uIlSHnISuAUDMoXVrdaAnD9vPO+UMc1uVJRIh/6+gztGnKJXKrP10h65uB+PR4sarPquipcCXDDeYZjKeKbA3IG1fzePCk9cQ2nX8lQbfwDd/Sy3xMNtoeAK1UAENbOMP9Q+O4MpN4k3GR//AjGMysyJCuAq6qeEK4rqLmrvSl0mum+HZUEY4CjbNZPHhxxhI1bhmVFDmnNOlPL/WZYJzc0aYnGAO+Q90JsYn4FbJ0MQc54caDKEWBrVUXO6NrdcGk59h/2iLX/mrew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=3vA+YNPVm2RUugOapgDwDHHEnURVq5x8/ZJkv83pTv4=;
 b=WWsNVqk8t+O5fKuB2hY72qNkupWa0aTlge8+khizkxgbHs4wwrWINCYpIePJoCad5Axretc8VhucwhiBHEqr7s0tlP1vR8xLzt6BA6pt1cAvSRKMrmSXuZyWDOOxT5aJwwJDayBEM6aWNwJyRxdc33dqWhSq9pAzX3vz9KLJndx37LT77LH34jaUA6y1DM0C72kg7bAI9C2iLX3YV2KEyXdqXC9JPPwpTrG8O0wqfco1i/Og5LEWbslTMFKBd+mOtOz7ZvxGFMoIuP+ZKynbx4fz5I9KnFAhSIbtw85oPig3mYyi8y0TWDcuNLezXoRX01DOoEksHpg+ywPnR9Pv6Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3vA+YNPVm2RUugOapgDwDHHEnURVq5x8/ZJkv83pTv4=;
 b=DY3y1wPcNN6hwpr7TBAT39akzhjUS9guywjxp36E1beD8ig9VBo+3M2HnFBCgy4M/t8XxBxktleU7xn6LNH2XKlm/yxlBOh787rWi+FBZgVvoe3yrcdYwt2pp42nyn4s1LyICUSosR6IDSSYJFfOt5A4NlykmV4d9MkjQCnTKiqFjAjmcsdUjFLG3/5PkgCLo9vqy4YarQzk6CvSed/hKolAcLeUK/cz83K3fc3q46QW5zvqCliOTxTwOkFvIMYB0lqamrAlljVxeTU/xVxHyd1hP/wOfa+AMsKoi4OAoOjXowjwOF9nVyYjQLNxmdVR2sEuCR2A1Kpq5Ly3K1vNEw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <4a3ecfc7-45e8-842a-3e1e-24c7f7d6af15@suse.com>
Date: Tue, 30 May 2023 12:23:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v9 0/5] enable MMU for RISC-V
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cover.1685027257.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0212.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ac::11) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DB9PR04MB9402:EE_
X-MS-Office365-Filtering-Correlation-Id: b31d4fe9-f1cd-4739-918c-08db60f7df2e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b31d4fe9-f1cd-4739-918c-08db60f7df2e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 10:23:12.3022
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: sak7m2uFf6cSkWhFePKbXQaLW1uS7owv2OaPZ8F/BI4PNKs641KjzxRGczdm4JuKUha46KLupGgx3h7pDBlDrA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR04MB9402

On 25.05.2023 17:28, Oleksii Kurochko wrote:
> Oleksii Kurochko (5):
>   xen/riscv: add VM space layout
>   xen/riscv: introduce setup_initial_pages
>   xen/riscv: align __bss_start
>   xen/riscv: setup initial pagetables
>   xen/riscv: remove dummy_bss variable

While the series is now okay from my perspective, it'll need maintainer
acks. I thought I'd remind you of the respective process aspect: It is
on you to chase them.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:32:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 10:32:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541073.843409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3weX-0004If-Dc; Tue, 30 May 2023 10:32:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541073.843409; Tue, 30 May 2023 10:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3weX-0004IY-Ay; Tue, 30 May 2023 10:32:33 +0000
Received: by outflank-mailman (input) for mailman id 541073;
 Tue, 30 May 2023 10:32:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3weW-0004IS-HV
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 10:32:32 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20605.outbound.protection.outlook.com
 [2a01:111:f400:7d00::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 47a310ba-fed5-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 12:32:30 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8247.eurprd04.prod.outlook.com (2603:10a6:20b:3f2::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 10:32:27 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 10:32:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47a310ba-fed5-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=i9i9US0zK9xH5w2cnI7k0L738tvs9uH0q+UnxtieeSvwHlPZBsrxMwWEv96/eGzDrwq+S6CQwnajiklBQLmDhM5X6eXEOjGmJ23DtEJGopyX4n0zz+0YnD9chqSmrxqpJymoe/nAZe7gDTKYdfPpjYFj1olUbBqFrbpb6LgJFwginMicNCHy4eL6SmRNmFF03u+CaCUSPH6kgQB+sV+Um2/8f8R04sq2TlRrauFKHcAwcBOBY1LCdUYN2KBIL3/pVP+f05jDo879NYzv2fMOrVexb4ittdVnQsR7EHPX+l3YE8g1ulTEBKERsIu+o6zM2aCHtLnniDeachVpf0ZcnQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ndw49eniBQQNb30Vo4vgHdAVfOcrEHVcwDEyZm9FUFA=;
 b=VF680DZJ7fl79LqMr5QkgFPCyXS5v5oi44iRUeGVT6PjiEUMu0JFsQzAOvu4NZg6zia4pSRmIcmc3AzgnMkDPkQ2fuHxH0PuO2lWde74aqYeYE9W/hqgpnoMbkvjtBXjUeDkDA+WkIYEArXxty5seLJ/e2hS7Yq1jhtsbsvme1XTvpzyomC5FxTyGUbb491p1riMXLpRn9pfpULfpkTEqprxbhJt9cFxiy8ES6kiooKieG4MB1Cab2BuuzeOn9wKKHiJdarWMYMW1APECdEaLpLHYopHSmrVi4eE9iY9ST/vljTSzJZp7qQn2hYek0hRmLI1UOQPGrUDsJ1n/xgs0g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ndw49eniBQQNb30Vo4vgHdAVfOcrEHVcwDEyZm9FUFA=;
 b=N4ytH5Oisdp9Azqjv6HsP9CDDzn3IWqXeXeIYUs/aAlsru/AO7aDTQs+nuVi02gwf2rjZHrpA6PXuvivelIIeCpFY2bJR68uJ/JIwXecp/Ev+lO4QfbwOjQxxJyRgHn2cY4mlHwa4I+8plsk81xNQN/XCzMXPHLpGJ6ASmwJ67lmxJ6oG+T+OMXw++HDC9OHBG0JyoGrzIOOW2yk5N4GneS3g6osuBIyrXHQ2tt80Uwb2Qq9bP8Kqis0duxlJnSbbpIqlhYiAy4cKzrnL32PWbgbjLhMVWOck1fgaFQUCClYPJwKTIfiRCcTzNWZWjALRCkHI6f6GuvY0DVZSKv6lw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <62332bdc-696c-264b-47d7-2d9bcc6af734@suse.com>
Date: Tue, 30 May 2023 12:32:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 2/3] xen/misra: xen-analysis.py: Fix latent bug
Content-Language: en-US
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230519093019.2131896-1-luca.fancellu@arm.com>
 <20230519093019.2131896-3-luca.fancellu@arm.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230519093019.2131896-3-luca.fancellu@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0171.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8247:EE_
X-MS-Office365-Filtering-Correlation-Id: 147a89b5-c7ad-402f-efd9-08db60f92a0c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 147a89b5-c7ad-402f-efd9-08db60f92a0c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 10:32:27.3691
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: j8Vkm/fSPl5g0dDAZnkK+lKj3MP+KwMcOQG4PfumyzSALrRUamhlj47E7mU5uJvzhvKLMW+bv4BDJSs43g05Bw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8247

On 19.05.2023 11:30, Luca Fancellu wrote:
> Currently there is a latent bug that is not triggered because
> the function cppcheck_merge_txt_fragments is called with the
> parameter strip_paths having a list of only one element.
> 
> The bug is that the split function should not be in the
> loop for strip_paths, but one level before, fix it.
> 
> Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
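The latent bug pattern described in the commit message can be sketched as follows (a hypothetical minimal model, not the actual cppcheck_merge_txt_fragments code; the function and variable names are invented for illustration): splitting inside the loop over strip_paths recomputes the lines on every iteration, discarding the stripping done for all but the last path — which is invisible while the list has only one element.

```python
def merge_fragments_buggy(text, strip_paths):
    for path in strip_paths:
        # Bug: the split is redone each iteration, so only the last
        # path in strip_paths ends up stripped from the result.
        lines = [ln.replace(path, "") for ln in text.splitlines()]
    return lines


def merge_fragments_fixed(text, strip_paths):
    # Fix: split once, one level above the loop, then strip each path.
    lines = text.splitlines()
    for path in strip_paths:
        lines = [ln.replace(path, "") for ln in lines]
    return lines
```

With a single-element strip_paths both versions agree, which is why the bug stayed latent; the difference only shows once a second path is passed.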

Just wanted to mention it: I've committed the patch as is, but in
the future could you please try to find a slightly more specific
title for such a change? The way it is now, someone going over
just the titles in the log will have no clue at all what kind of
bug was addressed here.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:34:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 10:34:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541078.843420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wgk-0004uV-Tr; Tue, 30 May 2023 10:34:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541078.843420; Tue, 30 May 2023 10:34:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wgk-0004uN-R5; Tue, 30 May 2023 10:34:50 +0000
Received: by outflank-mailman (input) for mailman id 541078;
 Tue, 30 May 2023 10:34:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8r7=BT=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q3wgj-0004uH-9L
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 10:34:49 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 993faedf-fed5-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 12:34:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 993faedf-fed5-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1685442886;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=N5ajidpHHWOd1M+zXooDoN2CL1Doa07SpiUyV9RulE8=;
	b=uwnjy/mtKr2+9HiMkSppEBJvhSBQeJYZy8y9yBLg+rfzmVqY1K2fbkQ0BK6atYsbdlb/kZ
	OX0X95CEbPOmVVjWUm+w4pOuAY3qcbvl7jw9Z0BenrltybggdqR4c9qAM8x64i9YH37qey
	O9JBKI7y/oKr72QRWkDpq+hYUst4bIaCugFeqfZmWg1LhT6voGsk3R6H6krFIw1PwzllcV
	ucldO4NpvXDnORVDB9AFUwXdgJ89s7Ov7ICB6gREcPwIDmnuEeCUVFSvurBgxsiF9roVck
	ep3RsDjejQbfVOSuPv04nD7vklCwo/Y4EbsS5SCA7EfqWM3aQeTimx2OFQGCyA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1685442886;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=N5ajidpHHWOd1M+zXooDoN2CL1Doa07SpiUyV9RulE8=;
	b=AxdWUb0SeD9XPAfvorylY2NPRS35OXUO70xmZChpaLP+3HhDz+9CZ0RcptudTwiisrOZW2
	YPXQzLGGwMCVv4AQ==
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom
 Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
In-Reply-To: <87mt1mjhk3.ffs@tglx>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name> <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx> <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx> <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
Date: Tue, 30 May 2023 12:34:45 +0200
Message-ID: <87jzwqjeey.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 30 2023 at 11:26, Thomas Gleixner wrote:
> On Tue, May 30 2023 at 03:54, Kirill A. Shutemov wrote:
>> On Mon, May 29, 2023 at 11:31:29PM +0300, Kirill A. Shutemov wrote:
>>> Disabling parallel bringup helps. I didn't look closer yet. If you have
>>> an idea let me know.
>>
>> Okay, it crashes around .Lread_apicid due to touching MSRs that trigger #VE.
>>
>> Looks like the patch had no intention to enable parallel bringup on TDX.
>>
>> +        * Intel-TDX has a secure RDMSR hypercall, but that needs to be
>> +        * implemented separately in the low level startup ASM code.
>>
>> But CC_ATTR_GUEST_STATE_ENCRYPT, which used to filter it out, is an
>> SEV-ES-specific thing and doesn't cover TDX. I don't think we have an
>> attribute that fits nicely here.
>
> Bah. That sucks.

Can we have something consistent in this CC space, or does everything
need to be extra magic per CC variant?


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:44:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 10:44:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541082.843430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wpS-0006QH-QH; Tue, 30 May 2023 10:43:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541082.843430; Tue, 30 May 2023 10:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wpS-0006QA-N6; Tue, 30 May 2023 10:43:50 +0000
Received: by outflank-mailman (input) for mailman id 541082;
 Tue, 30 May 2023 10:43:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3wpR-0006Q4-Cx
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 10:43:49 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on0605.outbound.protection.outlook.com
 [2a01:111:f400:fe0e::605])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id db72b455-fed6-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 12:43:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9249.eurprd04.prod.outlook.com (2603:10a6:10:350::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22; Tue, 30 May
 2023 10:43:44 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 10:43:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db72b455-fed6-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AHg0ntIg/W4U6JvX+xPMDl/UtGky6BA8cvhZ3Ul7uE6OiBnsAKvQ6PuJnNBd5zb8nB7TpclmHyjvg8XkmgyRrnYJwgx2HHydEfRykKHl9WOgKF/7MZkwJ2his+ycdWpKjCmK8iaA8TqxuW101UZAaLKxhpv0FF809aEIAn7jGxww9MxbmS3jdEBhxjizkz/4WrMHLMDQrHr9c8gMhaKWgbR7Fyi1MKezs9ZXJpQiTA7G9Pglf2PKC2kn3C+jLyYyQM3uNYt7gzauHj8c64S+ISNqo0HTRw04tncWNYvRWo99WOluCHy3k8Z+G9izdsVSyUUOWpxGTkBdyuKwqSuvjA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=u/Gk5s24qJpyUc563MFjEoO9z5tiyu0lNPjN1g6jPiI=;
 b=StMY/eeu5QUrOkmcNXORJ4uiDZmJ4tMq+W52yaIKzYl8d1UjxKxgA/9Hc699RhKZN51G2QR4W1eEVRa4CAkKSyqi1+Ss/lDWeqX2jIBATuaCKCVnxIxmotJTEegPwY1HVe9UEDl4TBAOEc0dnN3/flzTKhavfcQ18SsrVW+ec4LGttnrVwtIFka88FbApD9pLvgpVUcrIssBY3P5RSiIeiT2rjVQm/4r8tQj1sqZi/5D/xjiOTsf1q3pb9LJEtlp607sCD99c0VjBPPEasNnDuwGE+2yYxWO3M8ZQuxAw0eM60o+zVJjaCu5jiqkOVoeXcFx0F4XQNoBV/IfGYLKkw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u/Gk5s24qJpyUc563MFjEoO9z5tiyu0lNPjN1g6jPiI=;
 b=5KEVb508RIjSN30twRuwtOsP8oLqI7bcbR6jPnaE3YsUbW+/0kAsiAPFfl8CSZ1RB0A4/A7N7VHyeNPCEu9cXppyrqTRINU6eNDGaTFUUfcOjZ/hmlqq/wH6wV1LBHKx8lEVP7xgWwy4KujgNeUWDCDPO+s8eme05tZaGg/30FmA/4edXPhGjYrPzNJi9RZ6Zv50gG5iETtLE5ItAtqXIC18+wTKapkYOS8ra4VNJHz1aFIwIWob60M6QjKKkNcoZPP1sE1db4E8S2C/qnFWIK+T1FPBCINY+aNmfSR8D9YoS3JIlXcFGZDFmPx5Mj8dG/I7bYpTvJyjr+0aqzQJlg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <0fc6d179-e59e-3315-f483-a5c3de96c678@suse.com>
Date: Tue, 30 May 2023 12:43:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 14/15] Config.mk: move $(cc-option, ) to
 config/compiler-testing.mk
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-15-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-15-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0126.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::20) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9249:EE_
X-MS-Office365-Filtering-Correlation-Id: 31b0d85b-267d-49ff-4c5b-08db60fabda4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JVxfEjEguH1Gol3OvBnqtMjg3piYDacG9ulzpx5IrJQJkAGxQk6gBUoNx+Z4Bxk92ULtRnVaS44uDfY0jGsJJFSAMbgCYVs1W4Z+1SSnX/Pc9tLqrsRgiDiwY5nMZ7YtpKS2HowIBmi/pWMhToe2a+v+1YllpD9fra+gu1OVIxCHHKDWhgrTTulMoNeUOIVCag1ps0yuvpWEcGtWa3R2dyV9hx/9V8bUGyiQVcxniVlUv5qDkmWeVeSXl8RQEZUQfEiW1cNmZn9p88DgYmwJwCzx2Jsqe7rvrbODdj56U9lq1s9stNZBq7x4b+1LFNvsRgGaCHNgy2i3SgaqyZfSOUlH3aWqBm/cWFn3zOn22GrJFCQTIhwaDrBxMnvImo5dc/MXjtaPOXKBPcuhEMtnwKKZPcb1vMfTT9Zt/uG5c2W789H943BVWKgylbtvn/y05E4VK+hn/u+XvJTeQTAtu8+ss8QOgK15S2PQanFCkjpYVs1Q99ONgAbRKVGCaTa/DRviS/wJoJIjC5aA9GLC2mM8wSuycMPPS1MTDwVVsaV3T+JbSPbcFus9vumiyiFCSKT3djnFG6/xoku1xUg3BcK70ZJeW5GRXL0VhnURIVAcM8jpyzLIHdkFtodNZvuMiKqsOc+B+oU5fiMOb7A1iw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(136003)(39860400002)(396003)(346002)(366004)(451199021)(31696002)(6486002)(86362001)(41300700001)(4326008)(6916009)(66946007)(316002)(66476007)(66556008)(36756003)(5660300002)(186003)(2906002)(31686004)(53546011)(6512007)(26005)(6506007)(2616005)(83380400001)(38100700002)(54906003)(478600001)(8676002)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?WmRubG5NejZaSy9lV1VPMzRhazZpYXQ0aExJTjdQZkZpNGdiRW44Qi9lSHQx?=
 =?utf-8?B?WG1PZnkwVlNzbElqdVpXSk5NTEloZ09EeitqU254UU1hOGF1dmgyejRrNkdF?=
 =?utf-8?B?RE1yYUo1ZGlkb0x2YktDcGd2SUFRSVo4S3pNSnNqdStQS1I0MFl6NHlXVXNj?=
 =?utf-8?B?WnkwWklEN2wrb3hrWllnbCtLM0h5TjBjaEtSbG5pVUwwMjNqY3U1NXRjM1JV?=
 =?utf-8?B?bG9KbC93RWp3RC9zSTJEditqQlhidzNaTHhJS0MzektWREhCdTVvZFBFZ3NZ?=
 =?utf-8?B?MWFCOXBoaTdqaUF5K2MvWUFnVmQ1U3hvelUyTGx1WkVPUUhoY1BLekJxaGM3?=
 =?utf-8?B?V0FJNllqZ3NIbGNjR21sTk5pNHFVQnNYajlvT3pYVUNvR0trU0tkTnljUnQ5?=
 =?utf-8?B?S3EyVCtTVnB3TXk3cGc1WGVpeUZjMC9RcW5wNit3Rm9ud1lLbzJPZnFaNFl5?=
 =?utf-8?B?b2t5bkVXdGpHRmxXRGxzejlMdHdrYjI2NHQ0WGVFc2xzbHQ3clB1YjJFYWVG?=
 =?utf-8?B?UWhZK1lVOGhTVk91VHpLSGZlaG11aVdtcUNTWmF6NUJwc0VlMFNJTm12dFZO?=
 =?utf-8?B?MlhQWTR3N042czBydlV5UG9YQ25YMWE3dE1hLytZMTV3ODhabkl3bU1QaEFy?=
 =?utf-8?B?QnZDOHdVTHUweURZeXIvU05UWTJEVTliSHIrS1U4NXdOc2NUcUxwWTIzV0Ey?=
 =?utf-8?B?c1ZMcXhqU3VNOXJqYzlPL3dxL0dZNkNxS2tRZnBnNy83S2NZYi9xRjM2UW5X?=
 =?utf-8?B?Wm1xVTdCa0NCSVpHZnoyYzd6dWlXaEtPNzVscU10MGVMc0taUkN5Vk9sNjli?=
 =?utf-8?B?T2RmS25hWGRhV2RRUUhHQzl6RVhtQkFPOUdCMkQvVEthQWU0eFMxTklCQk1q?=
 =?utf-8?B?dFRVRldIQlJncWFVdE9KNkhKeGdPQ05GK3dPSWc1YWxtQ2lvNStKNURwRU80?=
 =?utf-8?B?ZzBYZ2l6anZWU3VUdVpZUnN3aWNmWFBJMzgxcFJ2TXhqVHA3UDBFdlQ0QkVy?=
 =?utf-8?B?d3FrT1BFRy9qb3BTeDdzNU5qb1pwRzNuajdWSDlxeEhLV29qOWg3akJoVjVp?=
 =?utf-8?B?WGFmMEZzU2VMZ3FHcUtENzJqWnJhMFZVNXViZUhkd1doNzdkYWtYTk96bFpa?=
 =?utf-8?B?dmJPTlJiOXd3M0RNWUZWdU0waTRGU3hYcGxObW9JQTF3OTltUFRsYVhWRkpG?=
 =?utf-8?B?ZmVtbVJhVnY0OXhVYkVGZVZzSjdaTGpYQ2hhWWwvcnNGMHkzN0ZLbWhyVWNm?=
 =?utf-8?B?ZDR2VndFRDBRdkd6TVhQVmpjTGtVTVNPb0tlWWJhUzRwRXp1RHBOTHRHOE01?=
 =?utf-8?B?c01FQWpzUjd6Z3hLL3NPNzAwUTJIdzFWR0hnVktyQmNwdFZMd2xOY203cFR2?=
 =?utf-8?B?UUh4aGY4bDgzSFYvUWhSanZDUFZ4azJNb2UxSTkvSXFwdWJtSUlXTlJDeXJ3?=
 =?utf-8?B?TFVHTDNRZWNrUDU0V2w3VWpXNlh2dVdnQWJ2WXpFaVBIc2J0MFRaQ1diYVZ1?=
 =?utf-8?B?OXlRcndJV1ZvYnBMeXlLeDNXZzVqVlE3ck54VUtwTGRhRUNsWlFRREJqOWNn?=
 =?utf-8?B?clk1aDRrcnk5QlROb1lGcklZZFdISGVjanBJR2hDaVozL3liVEFnUGV1NzlX?=
 =?utf-8?B?TTRRUXZOa25sRDh5aGIrU2xIdEo3N01WMDVkZmxPT1V0c295WGl0Vi9GQk5h?=
 =?utf-8?B?bG8rVWhJeWg1aVY5dlRTaVhVa3BSaGZaTlFwUEsxMk8rSXIveEJza2dzM1JT?=
 =?utf-8?B?K0x1NUZyZzBvMXFwNFdjQWF0Q3d1N3gvcHloVy9rWWV6dm1DZmdrZ2xES2FI?=
 =?utf-8?B?dmx6WjN2bVFvdUZNMmdDS2VSLzl5Sk9xVmFZbW1nZWpYQzNMZHZUZVVzaGsx?=
 =?utf-8?B?TkFURXpUblpFcnQvRWdzQm1ZNU5oVHNwakJ1M1N3RHpmajRVYWpoQWZDMGY0?=
 =?utf-8?B?THB2cVdscEpVY2FrdW9pK2s3VUVYMmkzdHlMaE1ibUNPVEhqYzd4WG5FdnZD?=
 =?utf-8?B?WTZyRW5lN1l3bHhCVEFDcTVwVWRXdzJGUGdJRVBMek1SNC81OGxQVUplUFBO?=
 =?utf-8?B?YnhZeit6eUlzTWdYSEYyRGloMFhlNWZSUDNEMEIzTkdwbUwwK0NwQUlNVXJ3?=
 =?utf-8?Q?rT9iGd7Cgt3+Y4iH/Uxb8IV6K?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 31b0d85b-267d-49ff-4c5b-08db60fabda4
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 10:43:44.4314
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vArMV/8WM96ebvW8z+fwyT6IO1MWCJmUkVVSnWZ9R5KB4Y2UKhnJn0Ucaj7i+LK2A0BVlaS9CtCizTe9V5CD5w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9249

On 23.05.2023 18:38, Anthony PERARD wrote:
> In xen/, it isn't necessary to include Config.mk in every Makefile in
> subdirectories as nearly all necessary variables should be calculated
> in xen/Makefile. But some Makefiles make use of the macro $(cc-option,)
> that is only available in Config.mk.
> 
> Extract $(cc-option,) from Config.mk so we can use it without
> including Config.mk and thus without having to recalculate some CFLAGS
> which would be ignored.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

When I saw the title and in particular the new file's name which is
already mentioned there, I did expect something entirely different,
perhaps related to the testing of Xen. May I suggest e.g.
compiler-probing.mk or maybe even simply compiler.mk? (I'm also
uncertain about "compiler", tbh - I could e.g. see Kbuild.include's
as-option-add also being useful outside of xen/, and hence at some
point wanting to move to some common file.)
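
For reference, $(cc-option,) is a compile-probe: it feeds the compiler a
trivial input with the candidate flag and keeps the flag only if the
compiler accepts it. A minimal shell sketch of the idea (illustrative
names and flags, not the exact Config.mk text):

```shell
# cc_option CC FLAG: print FLAG if CC accepts it, else print nothing.
# Probes by compiling an empty translation unit to /dev/null; -Werror
# turns "unknown option" warnings into hard failures.
cc_option() {
    cc="$1"; flag="$2"
    if "$cc" "$flag" -Werror -S -o /dev/null -x c /dev/null >/dev/null 2>&1; then
        printf '%s' "$flag"
    fi
}
```

In Make terms this corresponds to something like
CFLAGS += $(call cc-option,$(CC),-Wsome-flag), where the flag silently
drops out on compilers that reject it.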

> --- a/Config.mk
> +++ b/Config.mk
> @@ -18,6 +18,7 @@ realpath = $(wildcard $(foreach file,$(1),$(shell cd -P $(dir $(file)) && echo "
>  or       = $(if $(strip $(1)),$(1),$(if $(strip $(2)),$(2),$(if $(strip $(3)),$(3),$(if $(strip $(4)),$(4)))))
>  
>  -include $(XEN_ROOT)/.config
> +include $(XEN_ROOT)/config/compiler-testing.mk

Possibly being one of the few users of the top-level .config, I wonder
about the ordering of these includes. This isn't to say that I consider
wrong the order in which you have it now, but I could see the opposite
order as potentially useful, too.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:46:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 10:46:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541086.843439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ws1-00070M-64; Tue, 30 May 2023 10:46:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541086.843439; Tue, 30 May 2023 10:46:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ws1-00070F-3T; Tue, 30 May 2023 10:46:29 +0000
Received: by outflank-mailman (input) for mailman id 541086;
 Tue, 30 May 2023 10:46:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8r7=BT=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q3wrz-000709-DO
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 10:46:27 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3977eee4-fed7-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 12:46:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3977eee4-fed7-11ed-b231-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1685443583;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=aczuPZRXM/3T3zeRyckgFz17tXnOXBoIRzMvbCKmwP4=;
	b=jtpE+gCbIAd5N53RxGc6sgaT2si80ubO5trdzq0YZPJgj0u0uGxnEtLhxrdtPRPKOaDE18
	ztu5b4k5XEjvK/fjpfaluMv16JvaD5E4Z3L+jqW7tLBl5pCr13VXjUoqvtgyKLd/Y/xqu/
	84SV1UbOvwmhndn5fFxZSGrnlbePrrHSpEQPGnffMATrvpt3ZAkTXG6ylWpzmAYhS4dUem
	FTBfKKPOVhev4++C/nR20YRFt6jyNNtCz9uhzfOoHg4olSWAigaKti8fg/vEQYtVI0MLbc
	gP2h3BhbdhTtPHPbM7xx+svbc8m0rHw4UqvtNs/L/faaOGodmpnGf4MovXV/Gg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1685443583;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=aczuPZRXM/3T3zeRyckgFz17tXnOXBoIRzMvbCKmwP4=;
	b=Hfhdo2/Q/3z4khc8VnhVtRu45XjAYTmDIx3ynzUXbEbUuJzXZUdsmlA0kYxl32RnfB9UKw
	25lgiYCSG2Kzr+Dw==
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: [patch] x86/realmode: Make stack lock work in trampoline_compat()
In-Reply-To: <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name> <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx> <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx> <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
Date: Tue, 30 May 2023 12:46:22 +0200
Message-ID: <87h6rujdvl.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

The stack locking and stack assignment macro LOAD_REALMODE_ESP fails to
work when invoked from the 64bit trampoline entry point:

trampoline_start64
  trampoline_compat
    LOAD_REALMODE_ESP <- lock

Accessing tr_lock is only possible from 16bit mode. For the compat entry
point this needs to be pa_tr_lock so that the required relocation entry is
generated. Otherwise it locks the non-relocated address, which, aside
from being wrong, is never cleared in secondary_startup_64(), causing
all but the first CPU to get stuck on the lock.

Make the macro take an argument lock_pa which defaults to 0 and rename it
to LOCK_AND_LOAD_REALMODE_ESP to make it clear what this is about.

Fixes: f6f1ae9128d2 ("x86/smpboot: Implement a bit spinlock to protect the realmode stack")
Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/realmode/rm/trampoline_64.S |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -37,12 +37,16 @@
 	.text
 	.code16
 
-.macro LOAD_REALMODE_ESP
+.macro LOCK_AND_LOAD_REALMODE_ESP lock_pa=0
 	/*
 	 * Make sure only one CPU fiddles with the realmode stack
 	 */
 .Llock_rm\@:
+	.if \lock_pa
+        lock btsl       $0, pa_tr_lock
+	.else
         lock btsl       $0, tr_lock
+	.endif
         jnc             2f
         pause
         jmp             .Llock_rm\@
@@ -63,7 +67,7 @@ SYM_CODE_START(trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	LOAD_REALMODE_ESP
+	LOCK_AND_LOAD_REALMODE_ESP
 
 	call	verify_cpu		# Verify the cpu supports long mode
 	testl   %eax, %eax		# Check for return code
@@ -106,7 +110,7 @@ SYM_CODE_START(sev_es_trampoline_start)
 	mov	%ax, %es
 	mov	%ax, %ss
 
-	LOAD_REALMODE_ESP
+	LOCK_AND_LOAD_REALMODE_ESP
 
 	jmp	.Lswitch_to_protected
 SYM_CODE_END(sev_es_trampoline_start)
@@ -189,7 +193,7 @@ SYM_CODE_START(pa_trampoline_compat)
 	 * In compatibility mode.  Prep ESP and DX for startup_32, then disable
 	 * paging and complete the switch to legacy 32-bit mode.
 	 */
-	LOAD_REALMODE_ESP
+	LOCK_AND_LOAD_REALMODE_ESP lock_pa=1
 	movw	$__KERNEL_DS, %dx
 
 	movl	$(CR0_STATE & ~X86_CR0_PG), %eax
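
For readers less fluent in real-mode asm: the bit-spinlock pattern the
macro implements with "lock btsl $0, tr_lock" / "pause" can be sketched
in C11 atomics roughly as below (illustrative names; the real lock word
lives in the trampoline and is only released later, in
secondary_startup_64()):

```c
#include <stdatomic.h>
#include <assert.h>

/* The lock word; bit 0 set means "a CPU owns the realmode stack". */
static atomic_uint tr_lock;

static void rm_stack_lock(void)
{
	/* fetch_or returns the old value: bit 0 already set = contended,
	 * keep spinning (the asm inserts "pause" in this loop). */
	while (atomic_fetch_or_explicit(&tr_lock, 1u,
					memory_order_acquire) & 1u)
		;
}

static void rm_stack_unlock(void)
{
	/* The asm equivalent happens in secondary_startup_64(). */
	atomic_store_explicit(&tr_lock, 0u, memory_order_release);
}
```

The bug above is then simply that the compat path set bit 0 of the wrong
(non-relocated) lock word, so the word that the unlock path clears was
never the one being spun on.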


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:51:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 10:51:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541090.843450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wwL-0008Sc-Q5; Tue, 30 May 2023 10:50:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541090.843450; Tue, 30 May 2023 10:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3wwL-0008SV-LQ; Tue, 30 May 2023 10:50:57 +0000
Received: by outflank-mailman (input) for mailman id 541090;
 Tue, 30 May 2023 10:50:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3wwL-0008SP-5c
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 10:50:57 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20616.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::616])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id da53ff02-fed7-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 12:50:56 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB9208.eurprd04.prod.outlook.com (2603:10a6:20b:44f::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 10:50:53 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 10:50:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da53ff02-fed7-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wz7VPzYse55yDRSIc65PzCR0DPlG2sD3JBE809PEEVjQOl7tD7I9FlgvY+ayPXWAwInUpctZ/GrSnjXa1lZ71RK3zcK0wY2TIiEN1HaItz1ulrRvi928SC5rpBeOoXKzCBNDB1RG/lFbwiWKf9X37qIZJTm/5JnuZIMSB6Qh6hn7p44FoFu6EtlJrWrTM/QWkDwKs1Rn6zwLr/4cc2mLwrqxu9xo9GoT1CcdSprdBKcPZrqhQFQRVolb3+Ur2p8M7Ao9u2kONUMXLZqdtuPA73ctyO67DzWVmxh1AI2pdwGFIphXTv/ErBQ1KspksQRgDQBxLRRjiP/7Qz/8jOtp6w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=CYVJoW2V+TtYrmw6ENM+NmRnA5VEBxjnlnPzOjcoZIc=;
 b=h6TvcC5dCnJSro5vVGADrqQd7wEAM7KWRfKGYIN2LOUOVXNE8UYGICbYp3WJiKn0+tRdC5ZBfzsYpiuJo8+RtK3T9IMhW5xQWWtJJfkx7d2/h6oj5PXEissO5dvQOo2VR47hMEvnqqI6MpWyHwRPP9VQ0sHUwAksZHGHgwY7NJKjIxE3ZKQMgFSflDP0987GA0GlUF/JGFhl51UfAQKWir72fi0cP7Gp/fCMieqtfDblGIzb6cAk8TdgqQJCkJAvW3bcZJDcz5a06kn2Ow2UphsYZuk8beVWLEDzQD6F5cMtSirLTlIQoZX60hwkeQD2o0y1ED1iwwe2cmeezF5uUQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CYVJoW2V+TtYrmw6ENM+NmRnA5VEBxjnlnPzOjcoZIc=;
 b=cxdF+TmiLS+sNBkdmfze1FrGT4BTPrmGBz2lSmH2NpS9+vztN5xiiD7g8VaFdCONURnTiGU/AriPG9khjs6Zw5y0dFUAz9cFy08nNfrpSxRyRO5RHnglM8cQcqAiWyO6K1ccDCdE1bWRuxT7a1yJvZLutSjX08PG6xnKvKaIsldJE4IYruEFQea/fMfRHAS2l1XLNvMem/t4XMm05woPJNR+Vbglvplyp7WGoo15Dbxi/mJewA/xwTi5q8jtTyCyHBSjAa+P0zOttAXDJqk5zW2aZgV7ROTfXI/48I34yHVIwGZfkhJiN3WndwbHSVWQoVvmdOwJkeTeIL44Telc4w==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <a9c391ab-ce82-f368-3802-6776c269668b@suse.com>
Date: Tue, 30 May 2023 12:50:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [XEN PATCH 15/15] build: remove Config.mk include from Rules.mk
Content-Language: en-US
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-16-anthony.perard@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230523163811.30792-16-anthony.perard@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FRYP281CA0013.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10::23)
 To VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB9208:EE_
X-MS-Office365-Filtering-Correlation-Id: 22c57d62-6e26-4e3f-b77f-08db60fbbd56
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	FGBzEZZoD5hSzwCmlgzIp6D5ZFjL9BeMaEb8yHFpOMydlRs1EXZy9Pf3XzYIBhZtwcMK8JHu4a4M/z4EJVvNZsmSOju/IgC+GxoPehBpbSA3KD/RzqQCzivdIBgKgKCosWc+MveYsuqh7kwpZCoVae3QcmavO+N8YaZMyg5oFzLji72glQS64tWBK2ErgN89Gft5sygTGC1cA/mtus3m0PJbK7CYw3qsWRjNbNsrkZ8BFX9yaSoqQX3vJ7R6E10VGbu+alL+Tb6DRzq8swAXdF07Cv1ZzvTybB3dDBQT9blSB2FFWH+IsidWPiWLHJYwEWwTNoktQTWsuhrQwPFN70ujfTnUE4Wi5wd+dkF9PH5TXYxZSzziabQbBAWRd+1fOXS6e+z5NspU/fTk1lr1OOnJl6vmf2pW13+Z1cUOf0ZNGqlYfkeZMf+Qj1+2uG1JwowwkCMqcWHdTZh6XSiaw/HDYvvL4cIOzQgWfFiiRcL2b8t9/6/h8DMTuGjR6hSIi/8NRWQJ6kItbPZn9QWPn7HIihcMYtiVLBSJUrRPpk8nnf2lRP7JpKbM/yKtrMKNOWsjsnHp3GxUQPHQ7/S3aWrwVQzuCRziZDAnc4FmGmpGuCFNTbUGVEnqatCDRLCaoG91qfQPb7IEWWfBh+ao4Q==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(376002)(396003)(39860400002)(366004)(136003)(346002)(451199021)(478600001)(54906003)(8676002)(8936002)(5660300002)(2906002)(36756003)(31696002)(86362001)(66556008)(66946007)(4326008)(6916009)(66476007)(316002)(38100700002)(41300700001)(26005)(83380400001)(31686004)(6506007)(186003)(53546011)(6512007)(2616005)(6486002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?VFA5Nlc5d0tNOXBnZnIwTndIdXRHM01oRVhmZ2pSQmJDM3hJd1ZwNkFWVGx6?=
 =?utf-8?B?aEgrQU1jdU54YXYwNVhhTEQrUzZPNU5GY2pBdllOYkJDZS9WMVRWL0p6M2oy?=
 =?utf-8?B?K1c4SldYdlBPNXFBWGJQN0dxbXhaR0dMWVRXUEtsUjBpK0dqYkJUTTNJbmxF?=
 =?utf-8?B?ZGJyMUEwb2RUQ1F1dXlVQXQ4VkFhZjUvV2luZUp0blVDNjNZaGg0aHdJOU05?=
 =?utf-8?B?ZHd4aVExeWVSVGpYc3JhK2xDQ3lvdE9MeDJuQWRrVjljVmo4L0NsQlpHeGlF?=

On 23.05.2023 18:38, Anthony PERARD wrote:
> Everything needed to build the hypervisor should already be configured
> by "xen/Makefile", thus Config.mk shouldn't be needed.

"... by xen/Rules.mk." (Or else it sounds as if you're removing its use
altogether.)

> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -246,10 +246,14 @@ export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
>                              sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
>                                  -e s'/riscv.*/riscv/g')
>  
> +export XEN_COMPILE_ARCH XEN_TARGET_ARCH
>  export CONFIG_SHELL := $(SHELL)
>  export CC CXX LD NM OBJCOPY OBJDUMP ADDR2LINE
> +export CPP AR

For these two, could I talk you into editing the earlier line instead
of adding a new one?

> --- a/xen/scripts/Kbuild.include
> +++ b/xen/scripts/Kbuild.include
> @@ -8,6 +8,13 @@ empty   :=
>  space   := $(empty) $(empty)
>  space_escape := _-_SPACE_-_
>  pound   := \#
> +comma   := ,
> +open    := (
> +close   := )
> +
> +# fallbacks for GNU Make older than 3.81
> +realpath = $(wildcard $(foreach file,$(1),$(shell cd -P $(dir $(file)) && echo "$$PWD/$(notdir $(file))")))
> +or       = $(if $(strip $(1)),$(1),$(if $(strip $(2)),$(2),$(if $(strip $(3)),$(3),$(if $(strip $(4)),$(4)))))
>  
>  ###
>  # Name of target with a '.' as filename prefix. foo/bar.o => foo/.bar.o

As long as they're the same, the collision with Config.mk's will be
benign (for xen/Makefile), but I wonder whether, along the lines of
the earlier patch, these wouldn't better be extracted into e.g.
config/fallbacks.mk. (Whether the single-character macros are also
extracted into somewhere is of less importance to me, at least right
now.)
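
For the archive: a quick self-check (the temporary file and variable names
below are mine, not part of the patch) that the proposed `or` fallback
behaves like GNU Make's built-in `$(or ...)`, i.e. yields the first
non-blank argument:

```shell
# Write a throwaway makefile defining the fallback, then compare it
# against the built-in $(or ...) in the same recipe.
mk=$(mktemp)
{
  printf '%s\n' 'or = $(if $(strip $(1)),$(1),$(if $(strip $(2)),$(2),$(if $(strip $(3)),$(3),$(if $(strip $(4)),$(4)))))'
  printf 'all:\n'
  printf '\t@echo "fallback=$(call or,,second,third) builtin=$(or ,second,third)"\n'
} > "$mk"
out=$(make -f "$mk")
echo "$out"
rm -f "$mk"
```

Both forms should print "second", since the first argument is blank.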

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 10:51:05 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181011-mainreport@xen.org>
Subject: [ovmf test] 181011: all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 10:51:00 +0000

flight 181011 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181011/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0f9283429dd487deeeb264ee5670551d596fc208
baseline version:
 ovmf                 9d9761af50e538d983e00b1cb2d0ffcee261e552

Last test of basis   181008  2023-05-30 03:42:25 Z    0 days
Testing same since   181011  2023-05-30 08:42:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Liu, Zhiguang <Zhiguang.Liu@intel.com>
  Ming Tan <ming.tan@intel.com>
  Pedro Falcato <pedro.falcato@gmail.com>
  Ranbir Singh <rsingh@ventanamicro.com>
  Tan, Ming <ming.tan@intel.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9d9761af50..0f9283429d  0f9283429dd487deeeb264ee5670551d596fc208 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 30 11:04:51 2023
Date: Tue, 30 May 2023 13:04:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Message-ID: <ZHXYMPn4mR0O4rfR@Air-de-Roger>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4zx+TvUWTCEMh3@Air-de-Roger>
 <2b1b1744-2bc3-c7c0-a2d8-6aa6996d4af9@suse.com>
 <ZG94c9y4j4udFmsy@Air-de-Roger>
 <cedbc257-9ad9-56f1-5060-eaf173d45760@suse.com>
 <ZHRdjCKSVtWVkX96@Air-de-Roger>
 <25663dac-6023-a9a7-a495-c995762191d8@suse.com>
 <ZHW+Fu99ZGHPgMj+@Air-de-Roger>
 <830fb3cc-6ade-f10e-2a0a-95b676c63d87@suse.com>
In-Reply-To: <830fb3cc-6ade-f10e-2a0a-95b676c63d87@suse.com>

On Tue, May 30, 2023 at 11:44:52AM +0200, Jan Beulich wrote:
> On 30.05.2023 11:12, Roger Pau Monné wrote:
> > On Tue, May 30, 2023 at 10:45:09AM +0200, Jan Beulich wrote:
> >> On 29.05.2023 10:08, Roger Pau Monné wrote:
> >>> On Thu, May 25, 2023 at 05:30:54PM +0200, Jan Beulich wrote:
> >>>> On 25.05.2023 17:02, Roger Pau Monné wrote:
> >>>>> On Thu, May 25, 2023 at 04:39:51PM +0200, Jan Beulich wrote:
> >>>>>> On 24.05.2023 17:56, Roger Pau Monné wrote:
> >>>>>>> On Wed, May 24, 2023 at 03:45:58PM +0200, Jan Beulich wrote:
> >>>>>>>> --- a/xen/drivers/vpci/header.c
> >>>>>>>> +++ b/xen/drivers/vpci/header.c
> >>>>>>>> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
> >>>>>>>>      struct vpci_header *header = &pdev->vpci->header;
> >>>>>>>>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
> >>>>>>>>      struct pci_dev *tmp, *dev = NULL;
> >>>>>>>> +    const struct domain *d;
> >>>>>>>>      const struct vpci_msix *msix = pdev->vpci->msix;
> >>>>>>>>      unsigned int i;
> >>>>>>>>      int rc;
> >>>>>>>> @@ -285,9 +286,11 @@ static int modify_bars(const struct pci_
> >>>>>>>>  
> >>>>>>>>      /*
> >>>>>>>>       * Check for overlaps with other BARs. Note that only BARs that are
> >>>>>>>> -     * currently mapped (enabled) are checked for overlaps.
> >>>>>>>> +     * currently mapped (enabled) are checked for overlaps. Note also that
> >>>>>>>> +     * for Dom0 we also need to include hidden, i.e. DomXEN's, devices.
> >>>>>>>>       */
> >>>>>>>> -    for_each_pdev ( pdev->domain, tmp )
> >>>>>>>> +for ( d = pdev->domain; ; d = dom_xen ) {//todo
> >>>>>>>
> >>>>>>> Looking at this again, I think this is slightly more complex, as during
> >>>>>>> runtime dom0 will get here with pdev->domain == hardware_domain OR
> >>>>>>> dom_xen, and hence you also need to account that devices that have
> >>>>>>> pdev->domain == dom_xen need to iterate over devices that belong to
> >>>>>>> the hardware_domain, ie:
> >>>>>>>
> >>>>>>> for ( d = pdev->domain; ;
> >>>>>>>       d = (pdev->domain == dom_xen) ? hardware_domain : dom_xen )
> >>>>>>
> >>>>>> Right, something along these lines. To keep loop continuation expression
> >>>>>> and exit condition simple, I'll probably prefer
> >>>>>>
> >>>>>> for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
> >>>>>>       ; d = dom_xen )
> >>>>>
> >>>>> LGTM.  I would add parentheses around the pdev->domain != dom_xen
> >>>>> condition, but that's just my personal taste.
> >>>>>
> >>>>> We might want to add an
> >>>>>
> >>>>> ASSERT(pdev->domain == hardware_domain || pdev->domain == dom_xen);
> >>>>>
> >>>>> here, just to remind that this chunk must be revisited when adding
> >>>>> domU support (but you can also argue we haven't done this elsewhere),
> >>>>> I just feel here it's not so obvious we don't want to do this for
> >>>>> domUs.
> >>>>
> >>>> I could add such an assertion, if only ...
> >>>>
> >>>>>>> And we likely want to limit this to devices that belong to the
> >>>>>>> hardware_domain or to dom_xen (in preparation for vPCI being used for
> >>>>>>> domUs).
> >>>>>>
> >>>>>> I'm afraid I don't understand this remark, though.
> >>>>>
> >>>>> This was looking forward to domU support, so that you already cater
> >>>>> for pdev->domain not being hardware_domain or dom_xen, but we might
> >>>>> want to leave that for later, when domU support is actually
> >>>>> introduced.
> >>>>
> >>>> ... I understood why this checking doesn't apply to DomU-s as well,
> >>>> in your opinion.
> >>>
> >>> It's my understanding that domUs can never get hidden or read-only
> >>> devices assigned, and hence there is no need to check for overlap with
> >>> devices assigned to dom_xen, as those cannot have any BARs mapped in
> >>> a domU physmap.
> >>>
> >>> So for domUs the overlap check only needs to be performed against
> >>> devices assigned to pdev->domain.
> >>
> >> I fully agree, but the assertion you suggested doesn't express that. Or
> >> maybe I'm misunderstanding what you did suggest, and there was an
> >> implication of some further if() around it.
> > 
> > Maybe I'm getting myself confused, but if you add something like:
> > 
> > for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain;
> >       ; d = dom_xen )
> > 
> > Such loop would need to be avoided for domUs, so my suggestion was to
> > add the assert in order to remind us that the loop would need
> > adjusting if we ever add domU support.  But maybe you had already
> > plans to restrict the loop to dom0 only.
> 
> Not really, no, but at the bottom of the loop I also have
> 
>         if ( !is_hardware_domain(d) )
>             break;
>     }
> 
> (still mis-formatted in the v2 patch). I.e. restricting to Dom0 goes
> only as far as the 2nd loop iteration.

Oh, right, and that would also exit the loop on the first iteration if
the device is assigned to a domU, so it's all fine.  Sorry for the
noise then.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 30 11:13:10 2023
Date: Tue, 30 May 2023 14:12:48 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,	Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,	Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch] x86/realmode: Make stack lock work in trampoline_compat()
Message-ID: <20230530111248.lzt77sydi7x3wau7@box.shutemov.name>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
 <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <87h6rujdvl.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87h6rujdvl.ffs@tglx>

On Tue, May 30, 2023 at 12:46:22PM +0200, Thomas Gleixner wrote:
> The stack locking and stack assignment macro LOAD_REALMODE_ESP fails to
> work when invoked from the 64bit trampoline entry point:
> 
> trampoline_start64
>   trampoline_compat
>     LOAD_REALMODE_ESP <- lock
> 
> Accessing tr_lock is only possible from 16bit mode. For the compat entry
> point this needs to be pa_tr_lock so that the required relocation entry is
> generated. Otherwise it locks the non-relocated address, which, aside from
> being wrong, is never cleared in secondary_startup_64(), causing all but
> the first CPU to get stuck on the lock.
> 
> Make the macro take an argument lock_pa which defaults to 0 and rename it
> to LOCK_AND_LOAD_REALMODE_ESP to make it clear what this is about.
> 
> Fixes: f6f1ae9128d2 ("x86/smpboot: Implement a bit spinlock to protect the realmode stack")
> Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Tue May 30 11:18:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 11:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541110.843489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xMt-0004Qu-R7; Tue, 30 May 2023 11:18:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541110.843489; Tue, 30 May 2023 11:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xMt-0004Qn-OR; Tue, 30 May 2023 11:18:23 +0000
Received: by outflank-mailman (input) for mailman id 541110;
 Tue, 30 May 2023 11:18:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IZK1=BT=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q3xMs-0004Qh-3Y
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 11:18:22 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.20]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae7d2964-fedb-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 13:18:19 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4UBIAHle
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 30 May 2023 13:18:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae7d2964-fedb-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1685445490; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=M3OzKoE3OnXMMuuYeGoewCaszTSTGUvss3H/ly1QRAnB21XWdgXLoZ9ogbh/GLgK7A
    TJQ/bLC1fuUPgYHKCvKHmV0u5ey/7LMVMXrI/PYUKRLomFuru93Rlt467cdmE0hXSU5o
    4OoIejqoTVJt96G7C9u6gpzs5HgZjBd0XH787730mhKxTM4qGGoGJ3hAGVKvNLFOsqES
    eX22HofL1vlpT9/8WBMz8QIApQnJtmdYYPFSad6/0s53ojn8mN0KAn7O4pnoQebWSVgu
    pL0fI7O1VvSv72xnsobXdbo+DvyxWl4VAYEzXJvkHB688oh96M69e2qd5t8kVLMhTq7C
    UslQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1685445490;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=xpfZUYcz31NZPlJ7xvrja8Wj8g2sszCQGizz7J+CVKA=;
    b=tWjjzxWF5a+nOA4rwjagKQiarNCa0rHEGz4OyJTKljhdvVvGa59wurx9yMPoEESroO
    yEfN+oz9jelx73uriFWg7PBSsyIp4ion2SbM9a3nTxLvwNImzAv301IfP/Kr7CfnT9cR
    +9Uk3wx/qSITfJsk2AjIHfGCWMmS9ibUHuzjCxJUQQBRAnMpFlpQmpqRKd/1oNMbsRO2
    KIO3l3iwZ2/8ltPcZe0XQjBUsy3RkmyfAzZOsg4tErRnLs+viurdeTzzxcyOPzzEpo8r
    40VhixsLGNmRu6kSL532LJp3e9hJYmw3bRKpGr52iqTDSkErhoqRlkl/36OaoCP+dFly
    ZZtQ==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1685445490;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=xpfZUYcz31NZPlJ7xvrja8Wj8g2sszCQGizz7J+CVKA=;
    b=GKaChHciI6VSbV7AdiDEE7gU+fM6WZpFia2Hu0TyPP1KzjAHZ3w5XkiPfhY78+gxgI
    teRpaZMT8dNEwaXS0knva5KWaYFn2p+RgZYt2YP0Mc6Bb2Nytkfqj8yQNQdsNFcqxkod
    IX0P7b6HJys0WkC1IIvuezz881P64O2xmkjRIcCPV8LELhMrzRrbixCsafaIhtZb5PkV
    KUzdVptEsocphIxyQQTSVJoQeM9BnU3VZWrPiZVvlScKDY89pVF6GkivNCx6wbhe6ruC
    gL9jxdXHlD5zdqazX+6+mwOvuPHu0LUff0B2+0D7cB2wB4JB6lmH9mp55Gq8wN7q1kYO
    w/eg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1685445490;
    s=strato-dkim-0003; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=xpfZUYcz31NZPlJ7xvrja8Wj8g2sszCQGizz7J+CVKA=;
    b=ScHnd5IQDKhgMZr0ZHfLiY0koOReG7+Xk5biE44O7sUPC+2U5is+WW5PUNBPEThKCh
    G0BrICBCa+JXfH00iaAA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAg4xqFv7EJ0tgRX/vKfT/e8Ig6v0dNw4QAWpzMWrRQ=="
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1] tools: fix make rpmball
Date: Tue, 30 May 2023 13:18:07 +0200
Message-Id: <20230530111807.6521-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="us-ascii"

Commit 438c5ffa44e99cceb574c0f9946aacacdedd2952 ("rpmball: Adjust to
new rpm, do not require --force") attempted to handle stricter
directory permissions in newer distributions.

This introduced a few issues:
- /boot used to be a constant prior to commit
  6475d700055fa952f7671cee982a23de2f5e4a7c ("use BOOT_DIR as xen.gz
  install location"); since that commit the location has to be
  referenced via ${BOOT_DIR}
- it assumed that the prefix and the various configurable paths match
  the glob pattern /*/*/*

Adjust the code to build a filelist on demand, filtering out directories
already owned by an installed filesystem rpm.

Take the opportunity to replace the usage of $RPM_BUILD_ROOT with
%buildroot, and use pushd/popd pairs.

Fixes: 438c5ffa4 ("rpmball: Adjust to new rpm, do not require --force")

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/misc/mkrpm | 33 ++++++++++++++++++++++++---------
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/tools/misc/mkrpm b/tools/misc/mkrpm
index 74f6761bb0..a7bf854197 100644
--- a/tools/misc/mkrpm
+++ b/tools/misc/mkrpm
@@ -50,20 +50,35 @@ uninstall.
 %build
 
 %install
-rm -rf \$RPM_BUILD_ROOT
-mkdir -p \$RPM_BUILD_ROOT
-cd %{_xenroot}
-dist/install.sh \$RPM_BUILD_ROOT/
+rm -rf %buildroot
+mkdir -p %buildroot
+pushd %_xenroot
+dist/install.sh %buildroot
+
+pushd %buildroot
+popd
+rm -f dist/filesystem.txt
+rm -f dist/directories.txt
+rm -f dist/files.txt
+find %buildroot -type d | sed 's|^%buildroot||' | sort > dist/directories.txt
+find %buildroot -type f | sed 's|^%buildroot||' | sort > dist/files.txt
+find %buildroot -type l | sed 's|^%buildroot||' | sort >> dist/files.txt
+if rpm -ql filesystem > dist/filesystem.txt
+then
+  while read
+  do
+    sed -i "s|^\${REPLY}$||" dist/directories.txt
+  done < dist/filesystem.txt
+fi
+sed 's@^@%%dir @' dist/directories.txt >> dist/files.txt
 
-cd \$RPM_BUILD_ROOT
+popd
 
 %clean
-rm -rf \$RPM_BUILD_ROOT
+rm -rf %buildroot
 
-%files
+%files -f %_xenroot/dist/files.txt
 %defattr(-,root,root,-)
-/*/*/*
-/boot/*
 
 %post
 EOF
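The filelist generation in the hunk above can be exercised outside the spec
file. A minimal sketch, with one assumption: the `rpm -ql filesystem` lookup
is replaced by a hand-written exclusion list, since the filesystem package
may not be installed where this runs:

```shell
#!/bin/sh
# Sketch of the filelist logic from the patch above: enumerate everything
# below a build root, then drop the directories already owned by the
# distro's filesystem package so rpmbuild does not claim them twice.
set -e

buildroot=$(mktemp -d)
mkdir -p "$buildroot/usr/lib/xen" "$buildroot/boot"
touch "$buildroot/usr/lib/xen/libxl.so" "$buildroot/boot/xen.gz"

# Stand-in for `rpm -ql filesystem`: directories the base system owns.
printf '%s\n' /boot /usr /usr/lib > filesystem.txt

find "$buildroot" -type d | sed "s|^$buildroot||" | sort > directories.txt
{ find "$buildroot" -type f; find "$buildroot" -type l; } \
    | sed "s|^$buildroot||" | sort > files.txt

# Blank out every directory the filesystem package already owns.
while read -r d; do
    sed -i "s|^${d}\$||" directories.txt
done < filesystem.txt

# Remaining directories get a %dir entry, as in the spec file.
sed '/^$/d; s|^|%dir |' directories.txt >> files.txt

RESULT=$(cat files.txt)
echo "$RESULT"
rm -rf "$buildroot" filesystem.txt directories.txt files.txt
```

With the sample tree above this leaves /boot and /usr owned by the base
package, and claims only /usr/lib/xen with %dir plus the installed files.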


From xen-devel-bounces@lists.xenproject.org Tue May 30 11:34:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 11:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541115.843499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xbq-0006qV-W2; Tue, 30 May 2023 11:33:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541115.843499; Tue, 30 May 2023 11:33:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xbq-0006qO-Sr; Tue, 30 May 2023 11:33:50 +0000
Received: by outflank-mailman (input) for mailman id 541115;
 Tue, 30 May 2023 11:33:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=92F4=BT=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q3xbp-0006qI-79
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 11:33:49 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d7bfd15a-fedd-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 13:33:47 +0200 (CEST)
Received: by mail-lf1-x135.google.com with SMTP id
 2adb3069b0e04-4effb818c37so4667432e87.3
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 04:33:47 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 c12-20020a19760c000000b004edc6067affsm316790lff.8.2023.05.30.04.33.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 30 May 2023 04:33:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7bfd15a-fedd-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685446427; x=1688038427;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=O5s4uh8sDVmiJ4w7F3gfTmaWTiit5kb00LOcnxB2mCg=;
        b=Dk9x1ENcn8fxA149gt299gBx6F0Q3FT7HVra+wE+M9LKoC4SsmdNHBTQ+SuuG2HoJC
         kBW0paoYMTnRirbLqJQR1mNwJfwYUF2jVcnN0ZqZadTMorCJ7DwBWDakJ0W9XsliuuVs
         C85cfXyBHJLaqet6cE7+YEq74Rcq+NkJoHQeLybJMtkYzi9U/WS1GCaj6tdxB+U5qV41
         OFOu3HbwemBU+ewZR0KHTibr5weK4nsjPtiyJygnzKfuonYIMom99PFACOFS9J58M7xG
         QFaFpgHhdJRVEnQ4m8Q+K1Ct+ohxE16h98LLxIOdlPcAp+WRzngweWUjuCJAYPhat/vi
         GvJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685446427; x=1688038427;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=O5s4uh8sDVmiJ4w7F3gfTmaWTiit5kb00LOcnxB2mCg=;
        b=DnvfsEPJwwb5gtWGmvLBFzDrd20X224Ql37YmAYjgunNal4R6An0jB8Qlav+8Bgj7Q
         COL1mJYYRjU+VL6I2wiSNBke7sT0kFbRtrrSdXX767bDgu3kiYJevQK2Tc/ElF4aIQ+9
         3dbnHsH9S/pBGAozUPhkFSxhLPdrur5A4hl6KW60+BEjY7ePeDeicoTqVuYjX2fDauaX
         qT4CsAyORSHQFe+cYAoN2HJJYoPqk2MKKlfuUskM/bmBxrrv/9f1w0x9EyPwaMcu2Bao
         PSkarskiASwazOXnFnVXwScsA1paWsGv16w7wrgsYcjaZaeO2y8LsKTIr+z8/e7yee0N
         CFtg==
X-Gm-Message-State: AC+VfDy6w4c8PoUBjGTkBuMOXYvuNItziD5JbMT37ZYCXixzI1w+ryWF
	cndjrqebyQef0KWiuhnMbDc=
X-Google-Smtp-Source: ACHHUZ6PxfbFxMRqub82rITG0s3omp2+UeC0MUHLGK766MjEZjM1gQ0pImGHX1+gCVyIX5MZgYTxeQ==
X-Received: by 2002:ac2:4829:0:b0:4f3:a99f:1ea3 with SMTP id 9-20020ac24829000000b004f3a99f1ea3mr543442lft.32.1685446427192;
        Tue, 30 May 2023 04:33:47 -0700 (PDT)
Message-ID: <ee19fafb469490291127c8497251935849af61af.camel@gmail.com>
Subject: Re: [PATCH v9 0/5] enable MMU for RISC-V
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Gianluca Guida <gianluca@rivosinc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>,  Connor Davis <connojdavis@gmail.com>,
 xen-devel@lists.xenproject.org
Date: Tue, 30 May 2023 14:33:45 +0300
In-Reply-To: <4a3ecfc7-45e8-842a-3e1e-24c7f7d6af15@suse.com>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
	 <4a3ecfc7-45e8-842a-3e1e-24c7f7d6af15@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.2 (3.48.2-1.fc38) 
MIME-Version: 1.0

On Tue, 2023-05-30 at 12:23 +0200, Jan Beulich wrote:
> On 25.05.2023 17:28, Oleksii Kurochko wrote:
> > Oleksii Kurochko (5):
> >   xen/riscv: add VM space layout
> >   xen/riscv: introduce setup_initial_pages
> >   xen/riscv: align __bss_start
> >   xen/riscv: setup initial pagetables
> >   xen/riscv: remove dummy_bss variable
>
> While the series is now okay from my perspective, it'll need maintainer
> acks. I thought I'd remind you of the respective process aspect: It is
> on you to chase them.
>
I appreciate the reminder.

If I don't get an ACK from maintainers in a week, I'll send an
additional e-mail.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue May 30 11:37:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 11:37:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541119.843510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xfh-0007RJ-FS; Tue, 30 May 2023 11:37:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541119.843510; Tue, 30 May 2023 11:37:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xfh-0007RC-C1; Tue, 30 May 2023 11:37:49 +0000
Received: by outflank-mailman (input) for mailman id 541119;
 Tue, 30 May 2023 11:37:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m7B1=BT=shutemov.name=kirill@srs-se1.protection.inumbo.net>)
 id 1q3xff-0007R4-9k
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 11:37:47 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 649a52d8-fede-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 13:37:44 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id E3DF45C017D;
 Tue, 30 May 2023 07:37:43 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 30 May 2023 07:37:43 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 07:37:42 -0400 (EDT)
Received: by box.shutemov.name (Postfix, from userid 1000)
 id 177A51098DC; Tue, 30 May 2023 14:37:40 +0300 (+03)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 649a52d8-fede-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov.name;
	 h=cc:cc:content-type:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1685446663; x=
	1685533063; bh=O6qShh+asEKKf+CBp8F6HZvUgA+aseK9FX7RcXKq70E=; b=T
	PxfL1LB8q81/rUHg2XvMW3/4dxt+oSWRPKcAA/VfwYJOvqRMkuXbP9hfoQUPlVSO
	3soZ2+X/mMLCpkpE36IZk2R8xohb+JFPwPG226Er1VdHwSTvTSzW4L1L4BfMdTAl
	cyvBNn8WObkCH3dHEG0vdM9jEvph3Kqx1tVbGsUe3WZ44vXaE8v5maJ2RJe+YrSC
	GX5joEHfNuuIsxFWMg4QoCYDUUuSy4hBfDCbYL5QS75GdWcak7oLD0I/A7g5ySLZ
	97gbvYvfHV4/mUcIJk6wxGA0aOki4z2l3aOHQ/J9CG6pEcuQByuQm5NukOQOPZXq
	vPq3NVhuEDvhWnhHty/kQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1685446663; x=1685533063; bh=O6qShh+asEKKf
	+CBp8F6HZvUgA+aseK9FX7RcXKq70E=; b=l0ULuSdSAZxz4Id4NCjNDvSuSOjMi
	QqSMBgGatnuAwLmz3fq+W4g/oun3yPwfK//TkFcZ6Qq+DMdNuFKSZkR/czNaanZ3
	gznyzrlaJqmjlULYXIN7dHt3y127wCGCkXO0rr+NHuFgCc4N3xN5GwyOadZlstUV
	a3o1mniuANA9XqsFbG+jwBgNe/OGqk/sN44MUk+0RSsbIyYvHHYSTkvoRnOn6SeF
	rv4hWOEyn4GZwmVgc+SFsnrI01HOSbZcYCrd6IFT0pqXMDUW4Vmh2gZ04tO5eISK
	EoKkp0YR75uKBoU43kRStRyPv6JkVt6WDRCdPoFTo8MuPbZ66elW67kDQ==
X-ME-Sender: <xms:BuB1ZOy_Q46awEmN2n8G6kx8lsnsXgrIxRUOpUzTmTdMa1-A14ViIw>
    <xme:BuB1ZKTbzE4D__kHwieS6qeKhQFl4uQ6IxouuMedEdB1YUPeLzq8gSfMAkpVKHfMg
    QLPoAR-6feJERNRsXc>
X-ME-Received: <xmr:BuB1ZAU1H0WyeO0sWr9ctDyadG9xs8jMn7SxVF4JQokcsMi1YeZCk5CAfCZUAhfjSHSbbg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekjedggedtucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesthdttddttddtvdenucfhrhhomhepfdfmihhr
    ihhllhcutedrucfuhhhuthgvmhhovhdfuceokhhirhhilhhlsehshhhuthgvmhhovhdrnh
    grmhgvqeenucggtffrrghtthgvrhhnpefhieeghfdtfeehtdeftdehgfehuddtvdeuheet
    tddtheejueekjeegueeivdektdenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmh
    epmhgrihhlfhhrohhmpehkihhrihhllhesshhhuhhtvghmohhvrdhnrghmvg
X-ME-Proxy: <xmx:BuB1ZEh9a_Nx2KnEl0bxE_6wFFsAAHJjtQ7tVuZLckQsjbx72eMOOw>
    <xmx:BuB1ZADM9mF7NSxqOPQgVmbPy_3aGLmt_VDiuZpBghOrn8S6GDODfQ>
    <xmx:BuB1ZFJ4QwIW-3zwUMfEScRRvHHBGJkOJozaJp0tgTSGdiBfitkTXw>
    <xmx:B-B1ZGgs54qOEGkAg2SG46XGOwz7zUTLzQh2fKV_ve3nUjRgpDEsOg>
Feedback-ID: ie3994620:Fastmail
Date: Tue, 30 May 2023 14:37:40 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,	Paul McKenney <paulmck@kernel.org>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,	Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,	Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch v3 31/36] x86/apic: Provide cpu_primary_thread mask
Message-ID: <20230530113740.lbvg4to747xo32a7@box.shutemov.name>
References: <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
 <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name>
 <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87jzwqjeey.ffs@tglx>

On Tue, May 30, 2023 at 12:34:45PM +0200, Thomas Gleixner wrote:
> On Tue, May 30 2023 at 11:26, Thomas Gleixner wrote:
> > On Tue, May 30 2023 at 03:54, Kirill A. Shutemov wrote:
> >> On Mon, May 29, 2023 at 11:31:29PM +0300, Kirill A. Shutemov wrote:
> >>> Disabling parallel bringup helps. I didn't look closer yet. If you have
> >>> an idea let me know.
> >>
> >> Okay, it crashes around .Lread_apicid due to touching MSRs that trigger #VE.
> >>
> >> Looks like the patch had no intention to enable parallel bringup on TDX.
> >>
> >> +        * Intel-TDX has a secure RDMSR hypercall, but that needs to be
> >> +        * implemented separately in the low level startup ASM code.
> >>
> >> But CC_ATTR_GUEST_STATE_ENCRYPT, which used to filter it out, is an
> >> SEV-ES-specific thingy and doesn't cover TDX. I don't think we have an
> >> attribute that fits nicely here.
> >
> > Bah. That sucks.
> 
> Can we have something consistent in this CC space, or does everything
> need to be extra magic per CC variant?

IIUC, CC_ATTR_GUEST_MEM_ENCRYPT should cover all AMD SEV flavours and
Intel TDX. But the name is confusing in this context: memory encryption
has nothing to do with the APIC.
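The distinction can be written down as a toy lookup. A sketch only:
cc_attr_matches is a made-up helper, not the kernel's cc_platform_has(),
and the mapping is my reading of the above:

```shell
#!/bin/sh
# Toy model (NOT the kernel implementation) of which confidential-computing
# attribute matches which guest type, per the discussion above:
# GUEST_STATE_ENCRYPT means encrypted register state (SEV-ES/SEV-SNP only),
# GUEST_MEM_ENCRYPT means encrypted memory (all SEV flavours plus TDX).
cc_attr_matches() {
    platform=$1 attr=$2
    case "$attr" in
        CC_ATTR_GUEST_STATE_ENCRYPT)
            case "$platform" in sev-es|sev-snp) return 0 ;; esac ;;
        CC_ATTR_GUEST_MEM_ENCRYPT)
            case "$platform" in sev|sev-es|sev-snp|tdx) return 0 ;; esac ;;
    esac
    return 1
}

covered=""
for p in sev sev-es sev-snp tdx; do
    if cc_attr_matches "$p" CC_ATTR_GUEST_MEM_ENCRYPT; then
        covered="$covered $p"
    fi
done
echo "GUEST_MEM_ENCRYPT covers:$covered"
```

So keying the parallel-bringup filter on CC_ATTR_GUEST_MEM_ENCRYPT would
catch TDX as well, at the cost of the misleading name.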

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Tue May 30 11:48:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 11:48:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541123.843520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xps-0000WZ-Dx; Tue, 30 May 2023 11:48:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541123.843520; Tue, 30 May 2023 11:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xps-0000WS-Aw; Tue, 30 May 2023 11:48:20 +0000
Received: by outflank-mailman (input) for mailman id 541123;
 Tue, 30 May 2023 11:48:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2tj/=BT=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q3xpr-0000WM-P6
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 11:48:19 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ddfe51f0-fedf-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 13:48:17 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 0B8AF1F8D9;
 Tue, 30 May 2023 11:48:17 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id B2A7113597;
 Tue, 30 May 2023 11:48:16 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id dww+KoDidWSbBgAAMHmgww
 (envelope-from <jgross@suse.com>); Tue, 30 May 2023 11:48:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddfe51f0-fedf-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685447297; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=5og6JTPEvFEXVd4ZjHNiLOHRV6jX+qn1ll+b3g6Psaw=;
	b=T8hcyKbTTJ9UQm4ATIms445u8ebF8bCPB3AB1J1+zTpfl31lJ24ZB0SWtFfNh1e5qWiadD
	3WRqh1bpB22/JwlLNwicjRFGnmNk6D7dvi3KSvTPa1AKtl96tUGaJMl47FcwcH7YK+Ew3z
	LQemV/+YPxk8lu3h/PTEhXmGk336zWk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] xen/include/public: fix 9pfs xenstore path description
Date: Tue, 30 May 2023 13:48:15 +0200
Message-Id: <20230530114815.18362-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In xen/include/public/io/9pfs.h the name of the Xenstore backend node
"security-model" should be "security_model", as this is how the Xen
tools create it and how qemu reads it.

Fixes: ad58142e73a9 ("xen/public: move xenstore related doc into 9pfs.h")
Fixes: cf1d2d22fdfd ("docs/misc: Xen transport for 9pfs")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/include/public/io/9pfs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/public/io/9pfs.h b/xen/include/public/io/9pfs.h
index a0ce82d0a8..9ad2773082 100644
--- a/xen/include/public/io/9pfs.h
+++ b/xen/include/public/io/9pfs.h
@@ -64,7 +64,7 @@
  *
  *         Host filesystem path to share.
  *
- *    security-model
+ *    security_model
  *         Values:         "none"
  *
  *         *none*: files are stored using the same credentials as they are
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue May 30 11:56:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 11:56:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541127.843530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xy0-00020e-5w; Tue, 30 May 2023 11:56:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541127.843530; Tue, 30 May 2023 11:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3xy0-00020X-2n; Tue, 30 May 2023 11:56:44 +0000
Received: by outflank-mailman (input) for mailman id 541127;
 Tue, 30 May 2023 11:56:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3xxy-00020R-CV
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 11:56:42 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20625.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0939f819-fee1-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 13:56:39 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU2PR04MB8984.eurprd04.prod.outlook.com (2603:10a6:10:2e3::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22; Tue, 30 May
 2023 11:56:37 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 11:56:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0939f819-fee1-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JbYWkEB4C+fxvBElOK4WezzAvrsKWJVuZAXuJCvxIkl7CZ2XxHd4FISi6Ae24hoKojg/rmD5PQL23Hk46XX4wQsAFRRXWGHTGyakQsRDfHmLvxt+spDnlAPZrkpoLn1cb0j9AiT9J2YfuCGC9yw2zMOtApjUD+gz5EXPdhVoG9tXJqQGWd8Z0bcUfYsGIYV67gmgWY0YyDjIXxxXpffpuvazJMIJrWKsD9AWmlnf3S4gmyvzKe09+BFsCy672uAgqVtozxSmr4ggZjicTKOOAwc8KOIn6TrUFKvYtxDQxm3kIbDUl58WzUIbcjrrbsytJ5REQxfvkOC+C10CuCRgIQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vU40Dezc96F761Mz9OWPNYoAWgUp4gZJNgrxzDcONpE=;
 b=XSEYLqEYmDrSjcJgdQGJJTA37Z3tziIB4BHJtM/z6B7BN1/qzb6w7R9KJJV+yFS0zvRvPmh9gi/iWDeEWFRa3ofSXs6RNMM/RujHzRu116NjqClzLDbxxkNT+xNzg+bFCuXUyWjriBhAWCSZ7tfuLD0LJeIRVT3i/3OyH24kAJxjC57090OFxWgEqojYkCm//QCp+eI2Og2JOQiyRYbnF/GlpvxaISFohvmmxkobfXR/uGy8QlGh9M40WtobDB64i+LklAOxt3LHzFw4Xjene2cNmknzfUPrZ7Rlc1Pr386wwo2Cba4nhP7cs3/Jf9PQZIGFwJ/y9511VPVzLU0+tg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vU40Dezc96F761Mz9OWPNYoAWgUp4gZJNgrxzDcONpE=;
 b=MrlyJVj2Nan42afJ2tqBAtzIBsbcAF1FSQ16MXBzkeKAmy+sYfbMDT2SMqYCgRjHuxGuIEvgqQF/zIylx+tgUTvM55u2mt27ptL7evUPUZer9ptlmbdIEfVLmp23Uj2fZMVPKil8GPKL6Ovo4MXCKnpGiWaF402DyW40VBJnA0Fr/xrFJ4KDTbNYUHGQ2D3LVm2Z8pBFwjZdXL89L5va/Amm4f6UA2r9BbO4SBxMKqjehlkpe5QtfNgdgKMlCzkqEiHrUviiuRmR70gC/aLPTLbn6ny+rHxCg/aG0R3axpB3f0eplOzgr52Prtm7+ji9JnAeQXNP5xWgw09JxQh5cw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f3531050-fb31-2d9e-f3dd-2d310dc7c5ec@suse.com>
Date: Tue, 30 May 2023 13:56:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
 <def382a6481a9d1bcc106200b971cd5b0f3d19c1.1683321183.git-series.marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <def382a6481a9d1bcc106200b971cd5b0f3d19c1.1683321183.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0093.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9b::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU2PR04MB8984:EE_
X-MS-Office365-Filtering-Correlation-Id: 714aa6d1-2536-4db3-a382-08db6104ebbc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 714aa6d1-2536-4db3-a382-08db6104ebbc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 11:56:36.7724
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NrvFyTe+Ium2DnUNDr6Q8l/rzXitX4a9/cDmSIVyZeqCsQD/qJdRrSb2rh4VAtlT4F9PrMlYKcLw/Uta+7HFvQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU2PR04MB8984

On 05.05.2023 23:25, Marek Marczykowski-Górecki wrote:
> In some cases, only a few registers on a page need to be write-protected.
> Examples include the USB3 console (64 bytes worth of registers) or MSI-X's
> PBA table (which doesn't need to span the whole table either), although
> in the latter case the spec forbids placing other registers on the same
> page. The current API allows only marking whole pages read-only, which
> sometimes may cover other registers that the guest may need to
> write into.
> 
> Currently, when a guest tries to write to an MMIO page on the
> mmio_ro_ranges, it's either immediately crashed on EPT violation - if
> that's HVM, or if PV, it gets #PF. In case of Linux PV, if access was
> from userspace (e.g. via /dev/mem), it will try to fixup by updating page
> tables (that Xen again will force to read-only) and will hit that #PF
> again (looping endlessly). Both behaviors are undesirable if the guest
> could actually be allowed the write.
> 
> Introduce an API that allows marking part of a page read-only. Since
> sub-page permissions are not a thing in page tables (they are in EPT,
> but not granular enough), do this via emulation (or simply page fault
> handler for PV) that handles writes that are supposed to be allowed.
> The new subpage_mmio_ro_add() takes a start physical address and the
> region size in bytes. Both start address and the size need to be 8-byte

8-byte (aka qword) here, but ...

> aligned, as a practical simplification (allows using smaller bitmask,
> and a smaller granularity isn't really necessary right now).
> It will internally add relevant pages to mmio_ro_ranges, but if either
> start or end address is not page-aligned, it additionally adds that page
> to a list for sub-page R/O handling. The list holds a bitmask which
> dwords are supposed to be read-only and an address where page is mapped

... dwords here?

> for write emulation - this mapping is done only on the first access. A
> plain list is used instead of more efficient structure, because there
> isn't supposed to be many pages needing this precise r/o control.
> 
> The mechanism this API is plugged in is slightly different for PV and
> HVM. For both paths, it's plugged into mmio_ro_emulated_write(). For PV,
> it's already called for #PF on read-only MMIO page. For HVM however, EPT
> violation on p2m_mmio_direct page results in a direct domain_crash().
> To reach mmio_ro_emulated_write(), change how write violations for
> p2m_mmio_direct are handled - specifically, check if they relate to such
> partially protected page via subpage_mmio_write_accept() and if so, call
> hvm_emulate_one_mmio() for them too. This decodes what the guest is trying
> to write and finally calls mmio_ro_emulated_write(). Note that hitting EPT
> write violation for p2m_mmio_direct page can only happen if the page was
> on mmio_ro_ranges (see ept_p2m_type_to_flags()), so there is no need for
> checking that again.

Yet that's then putting us at a certain risk wrt potential errata.

You also specifically talk about "guests", i.e. more than just hwdom.
Adding another easy access to the emulator (for HVM) comes with a
certain risk of future XSAs, too.

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1990,6 +1990,14 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>          goto out_put_gfn;
>      }
>  
> +    if ( (p2mt == p2m_mmio_direct) && npfec.write_access && npfec.present &&
> +         subpage_mmio_write_accept(mfn, gla) &&
> +         (hvm_emulate_one_mmio(mfn_x(mfn), gla) == X86EMUL_OKAY) )
> +    {
> +        rc = 1;
> +        goto out_put_gfn;
> +    }

But npfec.write_access set doesn't mean it was a write permission
violation, does it? May I ask that this be accompanied by a comment
discussing the correctness/safety?

> --- a/xen/arch/x86/include/asm/mm.h
> +++ b/xen/arch/x86/include/asm/mm.h
> @@ -522,9 +522,24 @@ extern struct rangeset *mmio_ro_ranges;
>  void memguard_guard_stack(void *p);
>  void memguard_unguard_stack(void *p);
>  
> +/*
> + * Add more precise r/o marking for a MMIO page. Bytes range specified here
> + * will still be R/O, but the rest of the page (not marked as R/O via another
> + * call) will have writes passed through.
> + * The start address and the size must be aligned to SUBPAGE_MMIO_RO_ALIGN.

With this alignment constraint, the earlier sentence can be read as
contradictory. How about "Byte-granular ranges ..." or "Ranges (using
byte granularity) ..."? I admit even that doesn't resolve the issue
fully, though.

> @@ -4882,6 +4895,243 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>      return 0;
>  }
>  
> +/* This needs subpage_ro_lock already taken */
> +static int __init subpage_mmio_ro_add_page(
> +    mfn_t mfn, unsigned int offset_s, unsigned int offset_e)
> +{
> +    struct subpage_ro_range *entry = NULL, *iter;
> +    int i;

unsigned int please (as almost always for induction variables).

> +    list_for_each_entry(iter, &subpage_ro_ranges, list)
> +    {
> +        if ( mfn_eq(iter->mfn, mfn) )
> +        {
> +            entry = iter;
> +            break;
> +        }
> +    }
> +    if ( !entry )
> +    {
> +        /* iter==NULL marks it was a newly allocated entry */

Nit: Even in a comment I think it would be nice if style rules were
followed, and hence == was surrounded by blanks.

> +        iter = NULL;
> +        entry = xzalloc(struct subpage_ro_range);
> +        if ( !entry )
> +            return -ENOMEM;
> +        entry->mfn = mfn;
> +    }
> +
> +    for ( i = offset_s; i <= offset_e; i += SUBPAGE_MMIO_RO_ALIGN )
> +        set_bit(i / SUBPAGE_MMIO_RO_ALIGN, entry->ro_qwords);

You're holding a spin lock, so won't __set_bit() suffice here? And
then __clear_bit() below?

> +    if ( !iter )
> +        list_add_rcu(&entry->list, &subpage_ro_ranges);
> +
> +    return 0;
> +}

Since you mark the qwords which are to be protected, how is one to set
up safely two discontiguous ranges on the same page? I think I had
asked for v1 already why you don't do things the other way around:
Initially the entire page is protected, and then writable regions are
carved out.

I guess I shouldn't further ask about overlapping r/o ranges and their
cleaning up. But at least a comment towards the restriction would be
nice. Perhaps even check upon registration that no part of the range
is already marked r/o.

> +static void __init subpage_mmio_ro_free(struct rcu_head *rcu)
> +{
> +    struct subpage_ro_range *entry = container_of(
> +        rcu, struct subpage_ro_range, rcu);
> +
> +    ASSERT(bitmap_empty(entry->ro_qwords, PAGE_SIZE / SUBPAGE_MMIO_RO_ALIGN));
> +
> +    if ( entry->mapped )
> +        iounmap(entry->mapped);
> +    xfree(entry);
> +}
> +
> +/* This needs subpage_ro_lock already taken */
> +static int __init subpage_mmio_ro_remove_page(
> +    mfn_t mfn,
> +    int offset_s,
> +    int offset_e)
> +{
> +    struct subpage_ro_range *entry = NULL, *iter;
> +    int rc, i;
> +
> +    list_for_each_entry_rcu(iter, &subpage_ro_ranges, list)
> +    {
> +        if ( mfn_eq(iter->mfn, mfn) )
> +        {
> +            entry = iter;
> +            break;
> +        }
> +    }
> +    if ( !entry )
> +        return -ENOENT;

Yet the sole caller doesn't care at all, not even by an assertion.

> +    for ( i = offset_s; i <= offset_e; i += SUBPAGE_MMIO_RO_ALIGN )
> +        clear_bit(i / SUBPAGE_MMIO_RO_ALIGN, entry->ro_qwords);
> +
> +    if ( !bitmap_empty(entry->ro_qwords, PAGE_SIZE / SUBPAGE_MMIO_RO_ALIGN) )
> +        return 0;
> +
> +    list_del_rcu(&entry->list);
> +    call_rcu(&entry->rcu, subpage_mmio_ro_free);

This being an __init function, what is the RCU-ness intended to guard
against?

> +    return rc;

DYM "return 0" here, or maybe even invert the if()'s condition to have
just a single return? "rc" was never written afaics, and the compiler
not spotting it is likely because the caller doesn't inspect the return
value.

> +}
> +
> +

Nit: No double blanks lines please.

> +int __init subpage_mmio_ro_add(
> +    paddr_t start,
> +    size_t size)
> +{
> +    mfn_t mfn_start = maddr_to_mfn(start);
> +    paddr_t end = start + size - 1;
> +    mfn_t mfn_end = maddr_to_mfn(end);
> +    int offset_end = 0;

unsigned int again, afaics. Also this can be declared in the more narrow
scope it's used in.

> +    int rc;
> +
> +    ASSERT(IS_ALIGNED(start, SUBPAGE_MMIO_RO_ALIGN));
> +    ASSERT(IS_ALIGNED(size, SUBPAGE_MMIO_RO_ALIGN));

Not meeting the first assertion's condition (thinking of a release build)
is kind of okay, as too large a range will be protected. But for the 2nd
too small a range would be covered aiui, so this may want dealing with in
a release-build-safe way.

> +    if ( !size )
> +        return 0;
> +
> +    rc = rangeset_add_range(mmio_ro_ranges, mfn_x(mfn_start), mfn_x(mfn_end));
> +    if ( rc )
> +        return rc;
> +
> +    spin_lock(&subpage_ro_lock);
> +
> +    if ( PAGE_OFFSET(start) ||
> +         (mfn_eq(mfn_start, mfn_end) && PAGE_OFFSET(end) != PAGE_SIZE - 1) )
> +    {
> +        offset_end = mfn_eq(mfn_start, mfn_end) ?
> +                     PAGE_OFFSET(end) :
> +                     (PAGE_SIZE - 1);
> +        rc = subpage_mmio_ro_add_page(mfn_start,
> +                                      PAGE_OFFSET(start),
> +                                      offset_end);
> +        if ( rc )
> +            goto err_unlock;
> +    }
> +
> +    if ( !mfn_eq(mfn_start, mfn_end) && PAGE_OFFSET(end) != PAGE_SIZE - 1 )
> +    {
> +        rc = subpage_mmio_ro_add_page(mfn_end, 0, PAGE_OFFSET(end));
> +        if ( rc )
> +            goto err_unlock_remove;
> +    }
> +
> +    spin_unlock(&subpage_ro_lock);
> +
> +    return 0;
> +
> + err_unlock_remove:
> +    if ( offset_end )
> +        subpage_mmio_ro_remove_page(mfn_start, PAGE_OFFSET(start), offset_end);
> +
> + err_unlock:
> +    spin_unlock(&subpage_ro_lock);
> +    if ( rangeset_remove_range(mmio_ro_ranges, mfn_x(mfn_start), mfn_x(mfn_end)) )
> +        printk(XENLOG_ERR "Failed to cleanup on failed subpage_mmio_ro_add()\n");
> +    return rc;
> +}

None of the failures here is particularly likely, so perhaps all is fine as
you have it. But there would be an alternative of retaining the
mmio_ro_ranges entry/entries, allowing the caller to "ignore" the error.

> +static void __iomem *subpage_mmio_get_page(struct subpage_ro_range *entry)
> +{
> +    if ( entry->mapped )
> +        return entry->mapped;
> +
> +    spin_lock(&subpage_ro_lock);
> +    /* Re-check under the lock */
> +    if ( entry->mapped )
> +        goto out_unlock;
> +
> +    entry->mapped = ioremap(mfn_x(entry->mfn) << PAGE_SHIFT, PAGE_SIZE);
> +
> + out_unlock:
> +    spin_unlock(&subpage_ro_lock);
> +    return entry->mapped;
> +}

This is easy to deal with without any "goto".

I'm further inclined to request that the ioremap() occur without the lock
held, followed by an iounmap() (after dropping the lock) if in fact the
mapping wasn't needed (anymore).

> +static void subpage_mmio_write_emulate(
> +    mfn_t mfn,
> +    unsigned int offset,
> +    const void *data,
> +    unsigned int len)
> +{
> +    struct subpage_ro_range *entry;
> +    void __iomem *addr;
> +
> +    rcu_read_lock(&subpage_ro_rcu);
> +
> +    list_for_each_entry_rcu(entry, &subpage_ro_ranges, list)
> +    {
> +        if ( mfn_eq(entry->mfn, mfn) )
> +        {
> +            if ( test_bit(offset / SUBPAGE_MMIO_RO_ALIGN, entry->ro_qwords) )
> +                goto write_ignored;

I think you can get away with just a single "goto" by putting the gprintk()
(and its label) here.

> +            addr = subpage_mmio_get_page(entry);
> +            if ( !addr )
> +            {
> +                gprintk(XENLOG_ERR,
> +                        "Failed to map page for MMIO write at 0x%"PRI_mfn"%03x\n",
> +                        mfn_x(mfn), offset);
> +                goto out_unlock;
> +            }
> +
> +            switch ( len )
> +            {
> +            case 1:
> +                writeb(*(uint8_t*)data, addr);
> +                break;
> +            case 2:
> +                writew(*(uint16_t*)data, addr);
> +                break;
> +            case 4:
> +                writel(*(uint32_t*)data, addr);
> +                break;
> +            case 8:
> +                writeq(*(uint64_t*)data, addr);
> +                break;

Please avoid casting away const-ness.

> +            default:
> +                /* mmio_ro_emulated_write() already validated the size */
> +                ASSERT_UNREACHABLE();
> +                goto write_ignored;
> +            }
> +            goto out_unlock;
> +        }
> +    }
> +    /* Do not print message for pages without any writable parts. */
> +    goto out_unlock;
> +
> + write_ignored:
> +    gprintk(XENLOG_WARNING,
> +             "ignoring write to R/O MMIO 0x%"PRI_mfn"%03x len %u\n",
> +             mfn_x(mfn), offset, len);

Nit: Indentation.

> + out_unlock:
> +    rcu_read_unlock(&subpage_ro_rcu);
> +}
> +
> +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla)
> +{
> +    unsigned int offset = PAGE_OFFSET(gla);
> +    const struct subpage_ro_range *entry;
> +
> +    rcu_read_lock(&subpage_ro_rcu);
> +
> +    list_for_each_entry_rcu(entry, &subpage_ro_ranges, list)
> +        if ( mfn_eq(entry->mfn, mfn) &&
> +             !test_bit(offset / SUBPAGE_MMIO_RO_ALIGN, entry->ro_qwords) )
> +        {
> +            /*
> +             * We don't know the write seize at this point yet, so it could be

Nit: "size" I assume.

> +             * an unalligned write, but accept it here anyway and deal with it
> +             * later.
> +             */

Have I overlooked where unaligned writes are dealt with?

Also nit: "unaligned".

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 12:04:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 12:04:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541132.843540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3y5Z-0003Xi-1e; Tue, 30 May 2023 12:04:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541132.843540; Tue, 30 May 2023 12:04:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3y5Y-0003Xb-Uy; Tue, 30 May 2023 12:04:32 +0000
Received: by outflank-mailman (input) for mailman id 541132;
 Tue, 30 May 2023 12:04:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3y5X-0003XV-Q5
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 12:04:31 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20609.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::609])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 21e92c44-fee2-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 14:04:30 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DUZPR04MB10037.eurprd04.prod.outlook.com (2603:10a6:10:4d8::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 12:04:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 12:04:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21e92c44-fee2-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MHa3t8GSuxt9bmWFt2doW+2vY8EB/Vnk3Uq0WGiYp4Yv8O6CNuB25oinK9q183T9Xb82vMzcViyTavn1o5CxJv5J9bIzDHyc1Of2fJxkdVAzkD5cSbrhZuZbKef3SYpAQ0FAoxa9KD9bKiCS7pPX347qUz3crwxIetzOVOa3quGeif6Wf4n+LWE4gY2IOtkyB9c3JVNFCrK/xXXsu579hsQJ1W29kwgwHJgN3f6uXdGe71a2lHueIQOnS7ex0L96fMGXm7tu4lVFFboxdbAKsooVX/KGpMpUMN9kK32nf+Fp4hR0SgkK8lzVv3ft0ht5a6ED1+5IlCh5z7YcssJKeg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=z5M6j9/zLZaXFCotqRCFUrGKX1avJFkRfkyDX0EUj/E=;
 b=n5rO/cTpMPDaOiFvfuExxVgi2Wd6tqjrxzIykXy5SKTdqYQch9y3gdlaivzmjsyc5nnlpcnH30qb21WfKssMsHCc3zwYmb7Uaedfz8Swlw6lCQSXnSrTwNXcobcq5nrmP1hbdlu/l3h3ZqrRp2SEli2GERDYh04RziZhuWIhaSw5q/rLarPhXw5yMTcfUXw2p4HnP7ffPweCwXSINQH1QL3wxYlNZTEmIM66BWoVsQqBpF26Zxj3iAxOynIdQQjQARH5QnexRHRnUcWMWU2qXRErl47sLyWYCLwyFxJ5vDJsk71Tx98/wyK0N9910d0lmR/JB5GRD46oGz1DUQPIeA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z5M6j9/zLZaXFCotqRCFUrGKX1avJFkRfkyDX0EUj/E=;
 b=ReCeJi3kPOCQn9jooFMpaVUI0OQgDFRmyTfDX+wt/Kgq7CkOLQdKoczF711/WrNTs0XHFzbdgzwPr2X3amATFhp6CloJ3WDQ7O6Go2tcMOQ05+I4Ho5iD7HmS7y62Lx6pOma9OrYJYIf8bGhzZvRu+7U2w/ZQk/t8/DFFt3JuVotkR90D0GNy4XnufRVENg4tvwYeZqwb5/YqZjufZDlJhI3trvgfQZiJY9Wqn/kKbSVbSkt4nIfquq6Td1HF3pDKAEbqMZF+PPtcqF91LIu3T7s5ZEcYTU6v6GYCDs4tjVoGliI3U9cZ0U3sm1OEoStxShhMisuRd1hP4k2ibedyA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f168a753-45d2-7d66-8ec7-ad06e6cd42eb@suse.com>
Date: Tue, 30 May 2023 14:04:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 2/2] drivers/char: Use sub-page ro API to make just
 xhci dbc cap RO
Content-Language: en-US
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.dd82aca339854e90ffe12e7bc4298254a6caaf0d.1683321183.git-series.marmarek@invisiblethingslab.com>
 <1f9909dacfd7822a1c7d30ba03bbec93fa2ff6fd.1683321183.git-series.marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <1f9909dacfd7822a1c7d30ba03bbec93fa2ff6fd.1683321183.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0174.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a0::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DUZPR04MB10037:EE_
X-MS-Office365-Filtering-Correlation-Id: 3f854806-b019-4b2f-d8dd-08db610604ca
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f854806-b019-4b2f-d8dd-08db610604ca
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 12:04:28.3507
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Z9hSJE1YEutaTmGe2J5RPPo2U9a8ryxXJ9z+yTx3AqPLUgKZOvGZVzVzvgl6FiGCF/WMOCB5N/2lWZDU6cB/Dg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DUZPR04MB10037

On 05.05.2023 23:25, Marek Marczykowski-Górecki wrote:
> Not the whole page, which may contain other registers too. In fact,
> on Tiger Lake and newer (at least), this page does contain other registers
> that Linux tries to use.

Can you please clarify whether this is within spec or an erratum? I ask
not least because I continue to wonder whether we really want/need
the non-negligible amount of new code added by patch 1.

> And with share=yes, a domU would use them too.

And gain yet more access to the emulator, as mentioned in patch 1. The
security implications may (will?) want mentioning.

> --- a/xen/drivers/char/xhci-dbc.c
> +++ b/xen/drivers/char/xhci-dbc.c
> @@ -1221,14 +1221,12 @@ static void __init cf_check dbc_uart_init_postirq(struct serial_port *port)
>       * Linux's XHCI driver (as of 5.18) works without writing to the whole
>       * page, so keep it simple.
>       */
> -    if ( rangeset_add_range(mmio_ro_ranges,
> -                PFN_DOWN((uart->dbc.bar_val & PCI_BASE_ADDRESS_MEM_MASK) +
> -                         uart->dbc.xhc_dbc_offset),
> -                PFN_UP((uart->dbc.bar_val & PCI_BASE_ADDRESS_MEM_MASK) +
> -                       uart->dbc.xhc_dbc_offset +
> -                sizeof(*uart->dbc.dbc_reg)) - 1) )
> -        printk(XENLOG_INFO
> -               "Error while adding MMIO range of device to mmio_ro_ranges\n");
> +    if ( subpage_mmio_ro_add(
> +            (uart->dbc.bar_val & PCI_BASE_ADDRESS_MEM_MASK) +
> +             uart->dbc.xhc_dbc_offset,
> +            sizeof(*uart->dbc.dbc_reg)) )
> +        printk(XENLOG_WARNING
> +               "Error while marking MMIO range of XHCI console as R/O\n");

So how about falling back to just rangeset_add_range(mmio_ro_ranges, ...)
in this failure case? (I did mention an alternative to doing it here in
the comments on patch 1.)

Also, doesn't the comment ahead of the construct become stale?

Finally, I think the indentation of the function call arguments is off by one.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 12:08:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 12:08:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541136.843549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3y9e-000494-IE; Tue, 30 May 2023 12:08:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541136.843549; Tue, 30 May 2023 12:08:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3y9e-00048x-F2; Tue, 30 May 2023 12:08:46 +0000
Received: by outflank-mailman (input) for mailman id 541136;
 Tue, 30 May 2023 12:08:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Duec=BT=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1q3y9c-00048r-QR
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 12:08:44 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b89ae734-fee2-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 14:08:43 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-5149e65c218so3762905a12.2
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 05:08:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b89ae734-fee2-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685448523; x=1688040523;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/9gGh5adh4R3lzm9nLcS4ofe3OQvKVrPA4gwvKJA/JE=;
        b=lxZnXVzZLqHtk9VH+64kFp0NfAU76fyK99wuVQMi2Z4EO/c5oTJ80RD2us9eBiU98Q
         6BQaBXjpylecpvLXGHOgYbLIcEhH/FMzVtJxuuttsfOJCPD6XQVxD8kqqhdcx3n7H3O7
         B4jfOJbYwyYWUi5YnbBKu1SMxdTtTx4SjnxretTy+Mh9yhtwrvBcnzvop3+NO2fKqfoJ
         Pm7/5yU6KeaoYWF9C0ixLXsef64ZUa/6sdiIHkPHhEtxjefuSpLD8jf9Dn2TfHMqjcmZ
         TccsKsJl5nIq/pCVsfrEEHiX3eO68kpJCbbgO9BnmB85ezTOAu3YC2gZdOst2pi9s86g
         eDJA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685448523; x=1688040523;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/9gGh5adh4R3lzm9nLcS4ofe3OQvKVrPA4gwvKJA/JE=;
        b=kSUPI7+QmpqdzCkqjsBqNLkA3RXiLShLe1wSMzBhOVv+1kmrzU74yhL1KcH2alfWc+
         nYTXgjle235fX7UJId+yzsSKC+zCCcjU4XMrhenolENYYZqf1Q5hxe0qgkV80vPfeHtT
         Fo32gNXfTolkwiNyDBwcRE7LnyEQ2IJHcdwpqPbhezWqu0xKp13NN2yoAxlIBJLiuLhd
         OYRPVzCFPm2WP+VM77n2amT8OXskOJOCGzVA8qqlGFOf4r8f4KgwhoN4Cmv+mCOV8Kyy
         fymOf1SvdDWkjV/N+2OLkJDHe4cYvhcZRVr2hw/udLJ7C0V6U9eLFTphs6tqdoZlngFf
         zS6A==
X-Gm-Message-State: AC+VfDyyaVnYMOpJ3Cmfz39hicUPxD7+COutnIL2x9RazlB1rn5k9t5X
	vryGL0vIZPcAT9rUtJJGSmoxmDPC72Wn414E+ojXJKvj
X-Google-Smtp-Source: ACHHUZ4s4vKtexGPTzJW2BecxoS2Gt+4oiYM7q7UOREg8Pkh8B7g0sQoDoqNWIv8kmRK43oiBy1PoCLm434KKPcNx3M=
X-Received: by 2002:a17:907:6291:b0:96f:32ae:a7e1 with SMTP id
 nd17-20020a170907629100b0096f32aea7e1mr2308613ejc.63.1685448522559; Tue, 30
 May 2023 05:08:42 -0700 (PDT)
MIME-Version: 1.0
References: <20230530114815.18362-1-jgross@suse.com>
In-Reply-To: <20230530114815.18362-1-jgross@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 30 May 2023 08:08:30 -0400
Message-ID: <CAKf6xptn2USt_USj+_KUq9BZHEY5HQGbkmF-n407nDot7P4K-g@mail.gmail.com>
Subject: Re: [PATCH] xen/include/public: fix 9pfs xenstore path description
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>, 
	Julien Grall <julien@xen.org>, Anthony PERARD <anthony.perard@citrix.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Jan Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, May 30, 2023 at 7:48 AM Juergen Gross <jgross@suse.com> wrote:
>
> In xen/include/public/io/9pfs.h the name of the Xenstore backend node
> "security-model" should be "security_model", as this is how the Xen
> tools are creating it and qemu is reading it.
>
> Fixes: ad58142e73a9 ("xen/public: move xenstore related doc into 9pfs.h")
> Fixes: cf1d2d22fdfd ("docs/misc: Xen transport for 9pfs")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue May 30 12:09:21 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 12:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541139.843559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3yAC-0004da-QW; Tue, 30 May 2023 12:09:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541139.843559; Tue, 30 May 2023 12:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3yAC-0004dT-Ns; Tue, 30 May 2023 12:09:20 +0000
Received: by outflank-mailman (input) for mailman id 541139;
 Tue, 30 May 2023 12:09:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8r7=BT=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q3yAB-0004dJ-P2
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 12:09:19 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cd7a40b1-fee2-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 14:09:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd7a40b1-fee2-11ed-b231-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1685448557;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=iZKRAQ6PZ8GK4gaM6K1HebRWTpJfBGLswDCJGgLN5o0=;
	b=m4WMewK162ZNFwFaX5aCd5h4XR9fVfFKlcOKr0XvrKt28u52sp90XWujVFM8gK3h4ofwRI
	SOK74XcgObHGkHLKgrJtnZmuimwWXh/wXN5q9j6Dv8gCxwb+AXijmXtlT1aYPYK1KlmCia
	BRY4IEuplIYlboTaZlXuZ97GEPZdzEDaxJWuCfBZTFHFXe1n4ZZe6munDRdPajOfnef4JY
	B2aRHT8SvnCVu5Z0nVzDGLL0t8nspaFR5wLfetwgGrG6VTVTYbuigWgnfZFxZNY8e+e9jz
	fk5DZ/0bhEUcm9JiTR/8v3EDg6m0oY8ot+sN837vZzmjbxzjbsdTnvzWX7grnA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1685448557;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=iZKRAQ6PZ8GK4gaM6K1HebRWTpJfBGLswDCJGgLN5o0=;
	b=ePubrlCxuRaMVoLtUqukHirTYLds6qKP6f4ORS+5dYI6s8wS4PLvyw0YznjLBK3j2Pn/GB
	8pWP8hu7tKt0KJDA==
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, Paolo
 Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, Tom
 Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>, Oleksandr Natalenko <oleksandr@natalenko.name>, Paul
 Menzel <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli"
 <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama
 Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
In-Reply-To: <87jzwqjeey.ffs@tglx>
References: <20230508181633.089804905@linutronix.de>
 <20230508185218.962208640@linutronix.de>
 <20230524204818.3tjlwah2euncxzmh@box.shutemov.name> <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx> <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx> <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx>
Date: Tue, 30 May 2023 14:09:17 +0200
Message-ID: <87cz2ija1e.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

The decision to allow parallel bringup of secondary CPUs checks
CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
parallel bootup because accessing the local APIC is intercepted and raises
a #VC or #VE, which cannot be handled at that point.

The check works correctly, but only for AMD encrypted guests. TDX does not
set that flag.

Check for cc_vendor != CC_VENDOR_NONE instead. That might be overbroad, but
definitely works for both AMD and Intel.

Fixes: 0c7ffa32dbd6 ("x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it")
Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/smpboot.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1282,7 +1282,7 @@ bool __init arch_cpuhp_init_parallel_bri
 	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
 	 * implemented separately in the low level startup ASM code.
 	 */
-	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
+	if (cc_get_vendor() != CC_VENDOR_NONE) {
 		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
 		return false;
 	}


From xen-devel-bounces@lists.xenproject.org Tue May 30 12:15:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 12:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541144.843569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3yFg-00068m-Cb; Tue, 30 May 2023 12:15:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541144.843569; Tue, 30 May 2023 12:15:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3yFg-00068f-A5; Tue, 30 May 2023 12:15:00 +0000
Received: by outflank-mailman (input) for mailman id 541144;
 Tue, 30 May 2023 12:14:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=478C=BT=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q3yFf-00068Z-NQ
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 12:14:59 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on0620.outbound.protection.outlook.com
 [2a01:111:f400:fe02::620])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 973ad878-fee3-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 14:14:57 +0200 (CEST)
Received: from AS9PR06CA0309.eurprd06.prod.outlook.com (2603:10a6:20b:45b::11)
 by AS8PR08MB9600.eurprd08.prod.outlook.com (2603:10a6:20b:618::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 12:14:54 +0000
Received: from AM7EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:45b:cafe::f3) by AS9PR06CA0309.outlook.office365.com
 (2603:10a6:20b:45b::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23 via Frontend
 Transport; Tue, 30 May 2023 12:14:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM7EUR03FT027.mail.protection.outlook.com (100.127.140.124) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6455.21 via Frontend Transport; Tue, 30 May 2023 12:14:53 +0000
Received: ("Tessian outbound e13c2446394c:v136");
 Tue, 30 May 2023 12:14:53 +0000
Received: from cbeacdde2175.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9532A511-F636-4E27-9A33-0CC8AEF52799.1; 
 Tue, 30 May 2023 12:14:47 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cbeacdde2175.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 30 May 2023 12:14:47 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by DU0PR08MB9933.eurprd08.prod.outlook.com (2603:10a6:10:413::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22; Tue, 30 May
 2023 12:14:44 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::52ec:40fa:1d66:7a1b]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::52ec:40fa:1d66:7a1b%7]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 12:14:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 973ad878-fee3-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xBhMHAkZErx/GU6fN/DbXKC2nygE+iNDHwbGjaN4MDU=;
 b=TF6jbILyHAYe0lXoGEsPsRTbQFue9gKAfBvyuWC2h6cKPwcPa5x7+/h10K+URpzAogv1g3suVtmZWu+yoSL3CZQXo9EkCcBrzVfVh6Cr7iANAhY0uU1JHthwWMZ9kvf62jHaWpaCzoibQaYozBGMbZXZ/q1ATvtiegtFWiBbH/8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 477e7520f8b8c882
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VBmsWR7NcsvJK5cB7z/XKHIlp6oh0kZRl+ZwmyBUgRDBbg5eFgJ6RuFGDVoATjrfZHu9Lu7rLlF32+n81H68V/EhxQPvkzeJni8+SKLPcBc4teu7RWMoS5NCowqvmVkOKbVn8tUbhY/sO/Lb/cXRvw+DKfO9peaC3Ee5iEleu0r3TdKGozB18k8DksHgTDwY7tRhxvlKw8pClvBWW32N9k/LJhHzJhKaXSbqc9mVSf1kbgSPJqpmX9zG5SdoMcwued4ZuGx4oniLW0FUdzaqyUODsFnXUY28Byu6QxTgkEiwHVOhZTKMTK3wmDJWvtkhDcDufW+popJUV1x4DIRz5w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xBhMHAkZErx/GU6fN/DbXKC2nygE+iNDHwbGjaN4MDU=;
 b=VqJLJSzUVT0mU3efFUoc81k35IcdZejBuFEt+3wQ+sSaZvoLegyjj24kCJw7XsrRzqZAw+p8KKCNeQH/WtDHjK9vC8yA2/+XTAwZqz1O6eHlVeCv9ZxkDvLwTKe09r2BWsSEN49pNqYbWXD+y40QOZZzWQ7QbiSomxU2tKX/A3j/lAxj+1GsOE5pBqBb+hUkwuTLL3GaGKBRoXjh6OTlplzpoWrBKEVaARacVIjnhLOD0Uyg5rD1o+9lAHDM1apAL9AQu5YwyPDMRW0r4igHzAAKy8+L4KR8+U6AGeadhPF37qUU8C7UKlk4vjZRNCrVbPl3b9WvQcW5VyHbpagR/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xBhMHAkZErx/GU6fN/DbXKC2nygE+iNDHwbGjaN4MDU=;
 b=TF6jbILyHAYe0lXoGEsPsRTbQFue9gKAfBvyuWC2h6cKPwcPa5x7+/h10K+URpzAogv1g3suVtmZWu+yoSL3CZQXo9EkCcBrzVfVh6Cr7iANAhY0uU1JHthwWMZ9kvf62jHaWpaCzoibQaYozBGMbZXZ/q1ATvtiegtFWiBbH/8=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/3] xen/misra: xen-analysis.py: Fix latent bug
Thread-Topic: [PATCH 2/3] xen/misra: xen-analysis.py: Fix latent bug
Thread-Index: AQHZijSdR4YxdS8bIkewpmkDG4LQH69yr6OAgAAciAA=
Date: Tue, 30 May 2023 12:14:43 +0000
Message-ID: <E7BE8D97-599A-4741-9DE0-F40ECDA5095D@arm.com>
References: <20230519093019.2131896-1-luca.fancellu@arm.com>
 <20230519093019.2131896-3-luca.fancellu@arm.com>
 <62332bdc-696c-264b-47d7-2d9bcc6af734@suse.com>
In-Reply-To: <62332bdc-696c-264b-47d7-2d9bcc6af734@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.600.7)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|DU0PR08MB9933:EE_|AM7EUR03FT027:EE_|AS8PR08MB9600:EE_
X-MS-Office365-Filtering-Correlation-Id: 954bc4c3-4adb-4d41-fef8-08db610779d5
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1C8E9E82E172924F84CFA979CC4A00A6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9933
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM7EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1974ca31-76ff-4809-d930-08db610773be
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 12:14:53.9165
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 954bc4c3-4adb-4d41-fef8-08db610779d5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM7EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB9600



> On 30 May 2023, at 11:32, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 19.05.2023 11:30, Luca Fancellu wrote:
>> Currently there is a latent bug that is not triggered because
>> the function cppcheck_merge_txt_fragments is called with the
>> parameter strip_paths holding a list of only one element.
>>
>> The bug is that the split function should not be inside the
>> loop over strip_paths, but one level up; fix it.
>>
>> Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>
> Just wanted to mention it: I've committed the patch as is, but in
> the future could you please try to find a slightly more specific
> title for such a change? The way it is now, someone going over
> just the titles in the log will have no clue at all what kind of
> bug was addressed here.

Sure, I will be more specific in the future, thank you.

>
> Jan



From xen-devel-bounces@lists.xenproject.org Tue May 30 12:15:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 12:15:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541145.843579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3yFz-0006WF-OS; Tue, 30 May 2023 12:15:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541145.843579; Tue, 30 May 2023 12:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3yFz-0006W8-Lq; Tue, 30 May 2023 12:15:19 +0000
Received: by outflank-mailman (input) for mailman id 541145;
 Tue, 30 May 2023 12:15:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=478C=BT=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1q3yFy-0006UI-12
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 12:15:18 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on20605.outbound.protection.outlook.com
 [2a01:111:f400:fe16::605])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a2750e74-fee3-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 14:15:15 +0200 (CEST)
Received: from DB6P18901CA0020.EURP189.PROD.OUTLOOK.COM (2603:10a6:4:16::30)
 by PR3PR08MB5722.eurprd08.prod.outlook.com (2603:10a6:102:8f::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 12:15:09 +0000
Received: from DBAEUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:16:cafe::d0) by DB6P18901CA0020.outlook.office365.com
 (2603:10a6:4:16::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22 via Frontend
 Transport; Tue, 30 May 2023 12:15:09 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DBAEUR03FT033.mail.protection.outlook.com (100.127.142.251) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6455.21 via Frontend Transport; Tue, 30 May 2023 12:15:09 +0000
Received: ("Tessian outbound 8b05220b4215:v136");
 Tue, 30 May 2023 12:15:09 +0000
Received: from cacd00489d64.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E7176F53-09E0-4023-B09D-99E8606F94A0.1; 
 Tue, 30 May 2023 12:15:02 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cacd00489d64.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 30 May 2023 12:15:02 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AM0PR08MB5506.eurprd08.prod.outlook.com (2603:10a6:208:17e::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.21; Tue, 30 May
 2023 12:14:56 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::52ec:40fa:1d66:7a1b]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::52ec:40fa:1d66:7a1b%7]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 12:14:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2750e74-fee3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=khKxWV6/1WPbI++MA1oOZPouL9bJ2V5LBAoo5rhtt+Y=;
 b=9sf9yPG6czyscv8diU75KeoglAIdggDTcyJyswzWGpTO5GeDGwtkYQPiF7Mc7nClYA7+iBLEl9/t1YKVjDNJC6r16DibMsPwbJ+u5d49sBAR2+Z7+yNsrP0ITdNYyWZdRTT6DNhaan3GQcLx1VKrj/epwnUxjsQ3J3qZ2fC8B2w=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 23d7990802d54ccb
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T+8CeIlpyn0JlKF3M8ScF62PFjnwWJKHKIA9YbOTBepKSmNSDiVYZBQXePtPqjcPCWTomXOkS7TUthTxjl7gbvBaCWBxdSUByF1vz3+ZNPPrdPvQpfG8Lnvkc9C8YLNzKspEQV3LpgbBUYMpFn5/dMrCtjrJ1i9oKb48GjugGO6PktQdLqN8atFuYQgbSDygZMtPJLlR2lTQiWLSAtPeC0dtxSGnITYF1FkrTPD2h4UNUjgCs1XL3Cyrv/A9jrqT/5nUaZpfWUnRQ52so6nfjpgJhHF+I03n7HfsIzz710R1eaunuWa2v/9evU1adB4REtkG8kOv308jxYFdbu8HFg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=khKxWV6/1WPbI++MA1oOZPouL9bJ2V5LBAoo5rhtt+Y=;
 b=FooT5WbCQzsf0IyeuaEOFSw7kmQIokBKRrKXfff8ItB6VZPcqmgs5McPFTMJ5gUMJQdK3DPFdFnhyoF75GlxiHCyfopJZ7MLc5bO/Ljcak7MMBf+NEruQ335I830ZL+4sijGgcVACl46csg0jY27yUsCHBt4XkIuGGfc/H1pxMpoI42QUYWd/hlORsCuXVSk11oGxhTp8GgGv1iVKpVlcOAwct4/AnNk/a9vBvk83IQZllvDlo+je6d1EQO0OfwuXJdtLCDKrZqN78yTsBPs874cYM2/kTqFg1P017GU3SwA9UcKD294WuYOXAQHaDdcsgTg90UXOKOwTWSDPGpfUQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Julien
 Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [XEN PATCH 13/15] build: fix compile.h compiler version command
 line
Thread-Topic: [XEN PATCH 13/15] build: fix compile.h compiler version command
 line
Thread-Index: AQHZjZUfS+Hth9ncdEKi1WC1jlJ0EK9pLTuAgAl2xACAACF6gA==
Date: Tue, 30 May 2023 12:14:56 +0000
Message-ID: <8ED9FBA6-81E8-48F6-A641-3C19E3E870A2@arm.com>
References: <20230523163811.30792-1-anthony.perard@citrix.com>
 <20230523163811.30792-14-anthony.perard@citrix.com>
 <35D40E55-2D93-431A-8B16-FCFEBBDA25B1@arm.com>
 <62d42514-25d3-57f1-f061-0bde197a995b@suse.com>
In-Reply-To: <62d42514-25d3-57f1-f061-0bde197a995b@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3731.600.7)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AM0PR08MB5506:EE_|DBAEUR03FT033:EE_|PR3PR08MB5722:EE_
X-MS-Office365-Filtering-Correlation-Id: fb7789f9-e75c-460c-1d74-08db610782ef
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <F2AD4F1AC6862A4E85A4FEAB05D64283@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5506
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DBAEUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	9cc55084-f79f-4213-6c21-08db61077b91
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 12:15:09.2274
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fb7789f9-e75c-460c-1d74-08db610782ef
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DBAEUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5722



> On 30 May 2023, at 11:14, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 24.05.2023 11:43, Luca Fancellu wrote:
>> 
>> 
>>> On 23 May 2023, at 17:38, Anthony PERARD <anthony.perard@citrix.com> wrote:
>>> 
>>> CFLAGS is just from Config.mk, instead use the flags used to build
>>> Xen.
>>> 
>>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>>> ---
>>> 
>>> Notes:
>>>   I don't know if CFLAGS is even useful there, just --version without the
>>>   flags might produce the same result.
>>> 
>>> xen/build.mk | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>> 
>>> diff --git a/xen/build.mk b/xen/build.mk
>>> index e2a78aa806..d468bb6e26 100644
>>> --- a/xen/build.mk
>>> +++ b/xen/build.mk
>>> @@ -23,7 +23,7 @@ define cmd_compile.h
>>>   -e 's/@@whoami@@/$(XEN_WHOAMI)/g' \
>>>   -e 's/@@domain@@/$(XEN_DOMAIN)/g' \
>>>   -e 's/@@hostname@@/$(XEN_BUILD_HOST)/g' \
>>> -    -e 's!@@compiler@@!$(shell $(CC) $(CFLAGS) --version 2>&1 | head -1)!g' \
>>> +    -e 's!@@compiler@@!$(shell $(CC) $(XEN_CFLAGS) --version 2>&1 | head -1)!g' \
>>>   -e 's/@@version@@/$(XEN_VERSION)/g' \
>>>   -e 's/@@subversion@@/$(XEN_SUBVERSION)/g' \
>>>   -e 's/@@extraversion@@/$(XEN_EXTRAVERSION)/g' \
>>> -- 
>>> Anthony PERARD
>>> 
>>> 
>> 
>> Yes I think Andrew is right, so I guess $(XEN_CFLAGS) can be dropped?
>> 
>> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
>> Tested-by: Luca Fancellu <luca.fancellu@arm.com>
>> 
>> I’ve tested this patch with and without the $(XEN_CFLAGS), so if you drop it you can
>> retain my r-by if you want.
> 
> I'm sorry, I didn't look back here to spot this extra sentence before
> committing the edited patch, which as a result I've now put in without
> your tags.
> 

No problem!

> Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 12:30:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 12:30:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541153.843590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3yUG-0000IZ-35; Tue, 30 May 2023 12:30:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541153.843590; Tue, 30 May 2023 12:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3yUF-0000Hs-Vo; Tue, 30 May 2023 12:30:03 +0000
Received: by outflank-mailman (input) for mailman id 541153;
 Tue, 30 May 2023 12:30:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m7B1=BT=shutemov.name=kirill@srs-se1.protection.inumbo.net>)
 id 1q3yUD-0008Mk-RC
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 12:30:02 +0000
Received: from out1-smtp.messagingengine.com (out1-smtp.messagingengine.com
 [66.111.4.25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id af9a0e08-fee5-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 14:29:59 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 0F49D5C0187;
 Tue, 30 May 2023 08:29:56 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Tue, 30 May 2023 08:29:56 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 08:29:54 -0400 (EDT)
Received: by box.shutemov.name (Postfix, from userid 1000)
 id EB5E1104992; Tue, 30 May 2023 15:29:51 +0300 (+03)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af9a0e08-fee5-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov.name;
	 h=cc:cc:content-type:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1685449796; x=
	1685536196; bh=EEiwGlX0Rwp4JD9gZJQ2+HoBg+CT8hF4DrYL0banGb4=; b=J
	hSWsjxiEcHNkhsyc3AMDUebr4x6d+5Z/IPmQcP3r3HEa2PV4iCNRm3rpXjqMalMw
	/OE0m3uI6TqVPy5TXs2laPgrna5SS91VjYoT0EXhAxxqYtnfxHMo7QJ6Jnmck2VP
	qm3TSO2CTWhnzg3XOjo+mkHnuk/QAm3oPsYysD8xP3lNHGWahS0Xzz2Vdy00xqfC
	yAn6/IaCpooPjW/r5dFkd/NaFIY0S16nWFqTsAWzyPFXWDFJvzH1dM+PTmRZ6g7j
	2o0X6GJUAtrKbuNWm0yrLAHjaZxrzujAqghJbB5Pgmh3apFHgN+WZcMp8TtzR5PN
	UmtTiyLIs/YDkMAzAHUpw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1685449796; x=1685536196; bh=EEiwGlX0Rwp4J
	D9gZJQ2+HoBg+CT8hF4DrYL0banGb4=; b=O8gpcmbIa+lNB390LgDoxjyCM0VpN
	DeUDIRCCkjdQ+WpdGdzcgnse0TwvjzKgs5YriXR9QomUEOgfslSoIJc3vkGujs74
	VPkfaqeBhzNmKSfdheBPc28aFjtzgiElyb8XMQ0DCHEgvEKGVEHWluQY5NUAZeJE
	0lhHFHNpkdZmPZZBdDmvBiTp7eiVYb6eCW3ZjMWdk153TSS6tRrEHYat++3ogA0G
	Z8/iPnkXzgVKZDv3CCQVLAT94m9xNd1Y4sHfuu5+Nx2HBWBDzJVK/E2qPoDde2j0
	6E+pDcEqROIFws3Einak7j/Pec39SrUOUvx1BCXKNueY+QbbjiTx3jhIA==
Feedback-ID: ie3994620:Fastmail
Date: Tue, 30 May 2023 15:29:51 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Thomas Gleixner <tglx@linutronix.de>,
	Tom Lendacky <thomas.lendacky@amd.com>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,	Paul McKenney <paulmck@kernel.org>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,	Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,	Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
Message-ID: <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name>
References: <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
 <87y1lbl7r6.ffs@tglx>
 <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name>
 <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx>
 <87cz2ija1e.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87cz2ija1e.ffs@tglx>

On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
> The decision to allow parallel bringup of secondary CPUs checks
> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
> parallel bootup because accessing the local APIC is intercepted and raises
> a #VC or #VE, which cannot be handled at that point.
> 
> The check works correctly, but only for AMD encrypted guests. TDX does not
> set that flag.
> 
> Check for cc_vendor != CC_VENDOR_NONE instead. That might be overbroad, but
> definitely works for both AMD and Intel.

It boots fine with TDX, but I think it is wrong. cc_get_vendor() will
report CC_VENDOR_AMD even on bare metal if SME is enabled. I don't think
we want it.
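
[Editorial sketch] To make the disagreement concrete, the two candidate gates can be modelled in a small self-contained C snippet (the types and helper names are invented for illustration; this is not kernel code). The attribute-style check lets TDX guests through because TDX does not set the flag, while the vendor-based check also fires on bare metal with SME, where guest state is not encrypted:

```c
/* Hypothetical model of the two gates for parallel CPU bringup. */
#include <assert.h>
#include <stdbool.h>

enum cc_vendor { CC_VENDOR_NONE, CC_VENDOR_AMD, CC_VENDOR_INTEL };

struct cc_state {
    enum cc_vendor vendor;     /* what cc_get_vendor() would report */
    bool guest_state_encrypt;  /* CC_ATTR_GUEST_STATE_ENCRYPT analogue */
};

/* Gate proposed in the patch: any CoCo vendor disables parallel bringup. */
static bool parallel_ok_by_vendor(const struct cc_state *s)
{
    return s->vendor == CC_VENDOR_NONE;
}

/* Original gate: only encrypted guest state disables it. */
static bool parallel_ok_by_attr(const struct cc_state *s)
{
    return !s->guest_state_encrypt;
}
```

Under this model, a bare-metal SME host reports CC_VENDOR_AMD with no encrypted guest state, so the vendor-based gate needlessly disables parallel bringup there, which is the objection above; a TDX guest reports CC_VENDOR_INTEL but does not set the attribute, which is the gap the patch was fixing.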

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Tue May 30 12:39:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 12:39:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541157.843600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ycz-0001QH-Rb; Tue, 30 May 2023 12:39:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541157.843600; Tue, 30 May 2023 12:39:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3ycz-0001QA-Nw; Tue, 30 May 2023 12:39:05 +0000
Received: by outflank-mailman (input) for mailman id 541157;
 Tue, 30 May 2023 12:39:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q3ycx-0001Q4-NF
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 12:39:03 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20617.outbound.protection.outlook.com
 [2a01:111:f400:7d00::617])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f4b5edef-fee6-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 14:39:02 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8276.eurprd04.prod.outlook.com (2603:10a6:20b:3e7::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 12:38:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 12:38:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4b5edef-fee6-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UvAGqtyn56T87B0hutB68vkVzkAc6aQkwNEdsWAuJjmwXJa9os8ISyNsZDLF4S6cXfoEg7+pYwsG3bg9BoJzlCDROUVlytctUCfwZYDyHFIPug+S/LYiGyedj+ZQTxRF3oG7N8cwlmCYrzBBzwTBW38GXDymj5h6cidByFr1sC7F1QUCVt14mWWsonMYgZ7boo3W5PFG1sEvWIF018Y5eC2Hh76viJPwqEbhX82Zh6W9fRC15TzxOcLiJz3jkBovmzc73PJzmyiOal+A7GygHn87TihBQRwE6a7V3CTXPo8C1LspWlKtqHwQwqVyY/4oaCgsjT2Ztl4TzcQVQ1RdyQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9sdhBVPmtL1e1+Kh9sGgFXp6ChoVMN6PauBpRwmz8hg=;
 b=Opw8qOrBox5dvxlHv6gkUsV+dwwjCO0rNwV2i3e1DePZNn+cY/KkFlX2q5paTNCLw+pq4/Gxgxn/GLN5s2oAocdzmkhhIbb4hoihYvBZv+Jv0L4nX0sI5C240QJb4cPYyAG7b6LMnoCEDIgX9DqZMtJKUSLzMBElA9fWfAPJ8nB7NFsDrzMwCryc5BdrTs9mDdX5MiQEOI2qzJXM6hwYbJijUuc013jj/hyFYpAZ8meWeHBS8tzISs5aS1SdT1fhUupL57978iBVswiwxt+cFmFi4LssZ5CAu9zA2232t5p9H8E0l6sSavLFcb9+Hke94/oDaXNC/xBh5tuCgipZzQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9sdhBVPmtL1e1+Kh9sGgFXp6ChoVMN6PauBpRwmz8hg=;
 b=nsl/fpf5S/L3e+fwUumWP4TvDdaIidpHGVY2WhiJOn4znTJCjtgOH4hum4IWrCEccE+/Jzk7bqUO0h5/FuTaI4jBguh8Re//lTB+ffNTaUsZSPRFCjw+4gaJ5IEVqvXMa8U43288+pA3AKibDKxuxjq/Y9wyNI206/Uhj5DkLdTkF0/MQmvR51QU49XQv6pDf/8NAXW6dFTrwD44q79TWyQyaYmoiMZGThVrhC3wmvFoCLDbk/5+7SlLq3KOfjFMOLzBOZrIhF27XhJs0rPXsu+v60bRVL9O+0b9jDm+Sw0BC3ROyFG8ny66I7tgSHcqYSVqHM8Y58jFTSLRHCw0bg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <e1c6e297-0046-73f6-981d-af776b271f24@suse.com>
Date: Tue, 30 May 2023 14:38:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3] vPCI: account for hidden devices
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0053.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:48::17) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8276:EE_
X-MS-Office365-Filtering-Correlation-Id: 4da559de-489f-411c-5cfa-08db610ad672
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4da559de-489f-411c-5cfa-08db610ad672
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 12:38:58.2352
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JD8DoXlL+yDTdYIZ5EQK+WYzXSVWkFfhORv//XYfWPyPJBzFwJIw1ps8MtbcIUDVIG2fuuEsP7nt6enG+qMRPg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8276

Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
console) are associated with DomXEN, not Dom0. This means that while
looking for overlapping BARs such devices cannot be found on Dom0's list
of devices; DomXEN's list also needs to be scanned.

Suppress vPCI init altogether for r/o devices (which constitute a subset
of hidden ones).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Also consider pdev being DomXEN's in modify_bars(). Also consult
    DomXEN in vpci_{read,write}(). Move vpci_write()'s check of the r/o
    map out of mainline code. Re-base over the standalone addition of
    the loop continuation in modify_bars(), and finally make the code
    change there well-formed.
v2: Extend existing comment. Relax assertion. Don't initialize vPCI for
    r/o devices.
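
[Editorial sketch] The structural change in modify_bars() is easier to see in isolation: a single device-list walk becomes a two-pass walk that continues onto DomXEN's list. A minimal model under invented types (hypothetical struct pdev/struct domain; the real code iterates with for_each_pdev and collects ranges in a rangeset rather than returning a bool):

```c
/* Sketch of the patch's loop shape: after scanning the hardware
 * domain's device list for overlapping BARs, fall through to a second
 * pass over dom_xen's list so hidden devices are also considered. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct pdev {
    unsigned long bar_start, bar_end;  /* inclusive page range */
    struct pdev *next;
};

struct domain {
    struct pdev *pdevs;  /* singly-linked device list */
};

static bool bars_overlap(const struct domain *hwdom,
                         const struct domain *dom_xen,
                         unsigned long start, unsigned long end)
{
    const struct domain *d = hwdom;

    for ( ; ; )
    {
        for ( const struct pdev *p = d->pdevs; p; p = p->next )
            if ( start <= p->bar_end && p->bar_start <= end )
                return true;

        if ( d == dom_xen )  /* both lists scanned */
            break;
        d = dom_xen;         /* second pass: hidden devices */
    }

    return false;
}
```

The same sentinel-less `for ( ...; ; )` with an explicit switch-over to the second domain is the pattern visible in the diff below.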

--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
     struct vpci_header *header = &pdev->vpci->header;
     struct rangeset *mem = rangeset_new(NULL, NULL, 0);
     struct pci_dev *tmp, *dev = NULL;
+    const struct domain *d;
     const struct vpci_msix *msix = pdev->vpci->msix;
     unsigned int i;
     int rc;
@@ -285,58 +286,69 @@ static int modify_bars(const struct pci_
 
     /*
      * Check for overlaps with other BARs. Note that only BARs that are
-     * currently mapped (enabled) are checked for overlaps.
+     * currently mapped (enabled) are checked for overlaps. Note also that
+     * for hwdom we also need to include hidden, i.e. DomXEN's, devices.
      */
-    for_each_pdev ( pdev->domain, tmp )
+    for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain; ; )
     {
-        if ( !tmp->vpci )
-            /*
-             * For the hardware domain it's possible to have devices assigned
-             * to it that are not handled by vPCI, either because those are
-             * read-only devices, or because vPCI setup has failed.
-             */
-            continue;
-
-        if ( tmp == pdev )
+        for_each_pdev ( d, tmp )
         {
-            /*
-             * Need to store the device so it's not constified and defer_map
-             * can modify it in case of error.
-             */
-            dev = tmp;
-            if ( !rom_only )
+            if ( !tmp->vpci )
                 /*
-                 * If memory decoding is toggled avoid checking against the
-                 * same device, or else all regions will be removed from the
-                 * memory map in the unmap case.
+                 * For the hardware domain it's possible to have devices
+                 * assigned to it that are not handled by vPCI, either because
+                 * those are read-only devices, or because vPCI setup has
+                 * failed.
                  */
                 continue;
-        }
 
-        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
-        {
-            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
-            unsigned long start = PFN_DOWN(bar->addr);
-            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
-
-            if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
-                 /*
-                  * If only the ROM enable bit is toggled check against other
-                  * BARs in the same device for overlaps, but not against the
-                  * same ROM BAR.
-                  */
-                 (rom_only && tmp == pdev && bar->type == VPCI_BAR_ROM) )
-                continue;
+            if ( tmp == pdev )
+            {
+                /*
+                 * Need to store the device so it's not constified and defer_map
+                 * can modify it in case of error.
+                 */
+                dev = tmp;
+                if ( !rom_only )
+                    /*
+                     * If memory decoding is toggled avoid checking against the
+                     * same device, or else all regions will be removed from the
+                     * memory map in the unmap case.
+                     */
+                    continue;
+            }
 
-            rc = rangeset_remove_range(mem, start, end);
-            if ( rc )
+            for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
             {
-                printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
-                       start, end, rc);
-                rangeset_destroy(mem);
-                return rc;
+                const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
+                unsigned long start = PFN_DOWN(bar->addr);
+                unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
+
+                if ( !bar->enabled ||
+                     !rangeset_overlaps_range(mem, start, end) ||
+                     /*
+                      * If only the ROM enable bit is toggled check against
+                      * other BARs in the same device for overlaps, but not
+                      * against the same ROM BAR.
+                      */
+                     (rom_only && tmp == pdev && bar->type == VPCI_BAR_ROM) )
+                    continue;
+
+                rc = rangeset_remove_range(mem, start, end);
+                if ( rc )
+                {
+                    printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
+                           start, end, rc);
+                    rangeset_destroy(mem);
+                    return rc;
+                }
             }
         }
+
+        if ( !is_hardware_domain(d) )
+            break;
+
+        d = dom_xen;
     }
 
     ASSERT(dev);
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -70,6 +70,7 @@ void vpci_remove_device(struct pci_dev *
 int vpci_add_handlers(struct pci_dev *pdev)
 {
     unsigned int i;
+    const unsigned long *ro_map;
     int rc = 0;
 
     if ( !has_vpci(pdev->domain) )
@@ -78,6 +79,11 @@ int vpci_add_handlers(struct pci_dev *pd
     /* We should not get here twice for the same device. */
     ASSERT(!pdev->vpci);
 
+    /* No vPCI for r/o devices. */
+    ro_map = pci_get_ro_map(pdev->sbdf.seg);
+    if ( ro_map && test_bit(pdev->sbdf.bdf, ro_map) )
+        return 0;
+
     pdev->vpci = xzalloc(struct vpci);
     if ( !pdev->vpci )
         return -ENOMEM;
@@ -332,8 +338,13 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsi
         return data;
     }
 
-    /* Find the PCI dev matching the address. */
+    /*
+     * Find the PCI dev matching the address, which for hwdom also requires
+     * consulting DomXEN.  Passthrough everything that's not trapped.
+     */
     pdev = pci_get_pdev(d, sbdf);
+    if ( !pdev && is_hardware_domain(d) )
+        pdev = pci_get_pdev(dom_xen, sbdf);
     if ( !pdev || !pdev->vpci )
         return vpci_read_hw(sbdf, reg, size);
 
@@ -427,7 +438,6 @@ void vpci_write(pci_sbdf_t sbdf, unsigne
     const struct pci_dev *pdev;
     const struct vpci_register *r;
     unsigned int data_offset = 0;
-    const unsigned long *ro_map = pci_get_ro_map(sbdf.seg);
 
     if ( !size )
     {
@@ -435,18 +445,20 @@ void vpci_write(pci_sbdf_t sbdf, unsigne
         return;
     }
 
-    if ( ro_map && test_bit(sbdf.bdf, ro_map) )
-        /* Ignore writes to read-only devices. */
-        return;
-
     /*
-     * Find the PCI dev matching the address.
-     * Passthrough everything that's not trapped.
+     * Find the PCI dev matching the address, which for hwdom also requires
+     * consulting DomXEN.  Passthrough everything that's not trapped.
      */
     pdev = pci_get_pdev(d, sbdf);
+    if ( !pdev && is_hardware_domain(d) )
+        pdev = pci_get_pdev(dom_xen, sbdf);
     if ( !pdev || !pdev->vpci )
     {
-        vpci_write_hw(sbdf, reg, size, data);
+        /* Ignore writes to read-only devices, which have no ->vpci. */
+        const unsigned long *ro_map = pci_get_ro_map(sbdf.seg);
+
+        if ( !ro_map || !test_bit(sbdf.bdf, ro_map) )
+            vpci_write_hw(sbdf, reg, size, data);
         return;
     }
 


From xen-devel-bounces@lists.xenproject.org Tue May 30 13:21:35 2023
Date: Tue, 30 May 2023 15:21:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v2 1/2] x86: annotate entry points with type and size
Message-ID: <ZHX4PR56MQZQCVUX@Air-de-Roger>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
 <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
 <fd492a4a-11ba-b63a-daf4-99697db0db0e@suse.com>
 <ZHSp9+ouRrXFEY4R@Air-de-Roger>
 <bba057a2-0a68-bf05-9a92-59546b52c73c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <bba057a2-0a68-bf05-9a92-59546b52c73c@suse.com>

On Tue, May 30, 2023 at 10:06:27AM +0200, Jan Beulich wrote:
> On 29.05.2023 15:34, Roger Pau Monné wrote:
> > On Tue, May 23, 2023 at 01:30:51PM +0200, Jan Beulich wrote:
> >> Note that the FB-label in autogen_stubs() cannot be converted just yet:
> >> Such labels cannot be used with .type. We could further diverge from
> >> Linux's model and avoid setting STT_NOTYPE explicitly (that's the type
> >> labels get by default anyway).
> >>
> >> Note that we can't use ALIGN() (in place of SYM_ALIGN()) as long as we
> >> still have ALIGN.
> > 
> > FWIW, as I'm looking into using the newly added macros in order to add
> > annotations suitable for live-patching, I would need to switch some of
> the LABEL usages into their own functions, as it's not possible to
> > livepatch a function that has labels jumped into from code paths
> > outside of the function.
> 
> Hmm, I'm not sure what the best way is to overcome that restriction. I'm
> not convinced we want to arbitrarily name things "functions".

Any external entry point in the middle of a function-like block will
prevent it from being live patched.

If you want I can try to do a pass on top of your patch and see how
that would end up looking.  I'm trying to think of other solutions,
but every alternative seems quite horrible.

> >> --- a/xen/arch/x86/include/asm/asm_defns.h
> >> +++ b/xen/arch/x86/include/asm/asm_defns.h
> >> @@ -81,6 +81,45 @@ register unsigned long current_stack_poi
> >>  
> >>  #ifdef __ASSEMBLY__
> >>  
> >> +#define SYM_ALIGN(algn...) .balign algn
> >> +
> >> +#define SYM_L_GLOBAL(name) .globl name
> >> +#define SYM_L_WEAK(name)   .weak name
> > 
> > Won't this better be added when required?  I can't spot any weak
> > symbols in assembly ATM, and you don't introduce any _WEAK macro
> > variants below.
> 
> Well, Andrew specifically mentioned the desire to also have Linux's
> support for weak symbols. Hence I decided to add it here despite
> (for now) being unused. I can certainly drop that again, but in
> particular if we wanted to use the scheme globally, I think we may
> want to make it "complete".

OK, as long as we know it's unused.

> >> +#define SYM_L_LOCAL(name)  /* nothing */
> >> +
> >> +#define SYM_T_FUNC         STT_FUNC
> >> +#define SYM_T_DATA         STT_OBJECT
> >> +#define SYM_T_NONE         STT_NOTYPE
> >> +
> >> +#define SYM(name, typ, linkage, algn...)          \
> >> +        .type name, SYM_T_ ## typ;                \
> >> +        SYM_L_ ## linkage(name);                  \
> >> +        SYM_ALIGN(algn);                          \
> >> +        name:
> >> +
> >> +#define END(name) .size name, . - name
> >> +
> >> +#define ARG1_(x, y...) (x)
> >> +#define ARG2_(x, y...) ARG1_(y)
> >> +
> >> +#define LAST__(nr) ARG ## nr ## _
> >> +#define LAST_(nr)  LAST__(nr)
> >> +#define LAST(x, y...) LAST_(count_args(x, ## y))(x, ## y)
> > 
> > I find LAST not very descriptive, won't it better be named OPTIONAL()
> > or similar? (and maybe placed in lib.h?)
> 
> I don't think OPTIONAL describes the purpose. I truly mean "last" here.
> As to placing in lib.h - perhaps, but then we may want to have forms
> with more than 2 arguments right away (and it would be a little unclear
> how far up to go).

Hm, I would be fine with adding that version with just 2 arguments, as
it's better to have the helper in a generic place IMO.

> >> +
> >> +#define FUNC(name, algn...) \
> >> +        SYM(name, FUNC, GLOBAL, LAST(16, ## algn), 0x90)
> > 
> > A rant: should the alignment of functions use a different padding
> > (i.e. ret or ud2?), in order to prevent stray jumps landing in the
> > padding and falling through into the next function?  That would also
> > prevent the implicit fall-through used in some places.
> 
> Yes, but that's a separate topic (for which iirc patches are pending
> as well, just of course not integrated with the work here). There's
> the slight risk of overlooking some "fall-through" case ...

Oh, OK, I wasn't aware patches are already floating around for this; I
just came across it while reviewing.

> >> --- a/xen/arch/x86/x86_64/compat/entry.S
> >> +++ b/xen/arch/x86/x86_64/compat/entry.S
> >> @@ -8,10 +8,11 @@
> >>  #include <asm/page.h>
> >>  #include <asm/processor.h>
> >>  #include <asm/desc.h>
> >> +#include <xen/lib.h>
> > 
> > Shouldn't the inclusion of lib.h be in asm_defs.h, as that's where the
> > usage of count_args() resides? (I assume that's why lib.h is added
> > here).
> 
> When the uses are in macros I'm always largely undecided, and I slightly
> tend towards the (in general, perhaps not overly relevant here) "less
> dependencies" solution. As in: Source files not using the macros which
> use count_args() also don't need lib.h then.

I tend to prefer headers to be self contained, as it overall leads to
a clearer set of includes in source files.  It's not obvious why
entry.S needs lib.h unless the asm_macros.h usage is taken into
account.

> >>          sti
> >>          call  do_softirq
> >>          jmp   compat_test_all_events
> >>  
> >> -        ALIGN
> >>  /* %rbx: struct vcpu, %rdx: struct trap_bounce */
> >> -.Lcompat_process_trapbounce:
> >> +LABEL_LOCAL(.Lcompat_process_trapbounce)
> > 
> > It's my understanding that here the '.L' prefix is pointless, since
> > LABEL_LOCAL() will forcefully create a symbol for the label due to the
> > usage of .type?
> 
> I don't think .type has this effect. There's certainly no such label in
> the symbol table of the object file I have as a result.

I was expecting .type to force the creation of a symbol, so the '.L'
prefix does prevent the symbol from being created even if .type is
specified.

Shouldn't the assembler complain that we are attempting to set a type
for a symbol that isn't present?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 30 13:22:52 2023
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181007-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 181007: tolerable FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 13:22:39 +0000

flight 181007 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181007/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-examine-bios  6 xen-install      fail in 180992 pass in 181007
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 180992 pass in 181007
 test-amd64-amd64-xl-qcow2 21 guest-start/debian.repeat fail in 180992 pass in 181007
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install    fail pass in 180992

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180992
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180992
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180992
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180992
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180992
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180992
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180992
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180992
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180992
 test-amd64-i386-xl-vhd       21 guest-start/debian.repeat    fail  like 180992
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180992
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180992
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180992
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
baseline version:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc

Last test of basis   181007  2023-05-30 01:53:56 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue May 30 13:26:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 13:26:08 +0000
Message-ID: <db0a6132-4ee0-51d2-08b9-88a286bc4b14@citrix.com>
Date: Tue, 30 May 2023 14:25:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/4] x86/cpu-policy: Derive {,R}RSBA for guest policies
Content-Language: en-GB
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-6-andrew.cooper3@citrix.com>
 <8d31e4a3-3f3c-f7a2-8d7d-0b0febf17f65@suse.com>
In-Reply-To: <8d31e4a3-3f3c-f7a2-8d7d-0b0febf17f65@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 30/05/2023 10:40 am, Jan Beulich wrote:
> On 26.05.2023 13:06, Andrew Cooper wrote:
>> The RSBA bit, "RSB Alternative", means that the RSB may use alternative
>> predictors when empty.  From a practical point of view, this means "Retpoline
>> not safe".
>>
>> Enhanced IBRS (officially IBRS_ALL in Intel's docs, previously IBRS_ATT) is a
>> statement that IBRS is implemented in hardware (as opposed to the form
>> retrofitted to existing CPUs in microcode).
>>
>> The RRSBA bit, "Restricted-RSBA", is a combination of RSBA, and the eIBRS
>> property that predictions are tagged with the mode in which they were learnt.
>> Therefore, it means "when eIBRS is active, the RSB may fall back to
>> alternative predictors but restricted to the current prediction mode".  As
>> such, it's a stronger statement than RSBA, but still means "Retpoline not safe".
> Just for my own understanding: Whether retpoline is safe with RRSBA does
> depend on the level of control a less privileged entity has over a more
> privileged entity's alternative predictor state?

Correct, but...

> If so, maybe add "probably" to the quoted statement?

... Spectre-BHI proved it was exploitable and could leak data.

"Don't do JIT in the kernel" was a very unsatisfactory resolution, and
in particular I think there is room to replicate the exploit with
array-style sys/hypercalls.

As far as I'm concerned it's a matter of when, not if, a researcher
breaks this boundary again.

Concern in this area is why Intel added the
MSR_SPEC_CTRL.{RRSBA,BHI}_DIS_{U,S} controls in ADL/SPR, which hobble
this behaviour.  (And yes, we need to support these in guests too, but
that involves rearranging Xen's MSR_SPEC_CTRL handling which is why I
haven't gotten around to it yet.)
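
For illustration, the way these enumerations feed a "is retpoline safe?"
decision can be sketched roughly as follows.  This is a simplified model,
not Xen's actual spec_ctrl logic; the bit positions follow Intel's
IA32_ARCH_CAPABILITIES layout:

```python
# Simplified model of the retpoline-safety deduction discussed above.
# Bit positions are per MSR_ARCH_CAPABILITIES; illustrative only.
IBRS_ALL = 1 << 1   # eIBRS: IBRS implemented in hardware
RSBA     = 1 << 2   # empty RSB may fall back to alternative predictors
RRSBA    = 1 << 19  # ... but restricted to the current prediction mode

def retpoline_safe(arch_caps: int) -> bool:
    # Either flavour of RSB-alternative behaviour means an empty RSB can
    # consume other predictor state: "Retpoline not safe".
    return not (arch_caps & (RSBA | RRSBA))

assert retpoline_safe(IBRS_ALL)              # eIBRS alone: RSB well-behaved
assert not retpoline_safe(RSBA)              # legacy parts
assert not retpoline_safe(IBRS_ALL | RRSBA)  # eIBRS parts with RRSBA
```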

>> --- a/xen/arch/x86/cpu-policy.c
>> +++ b/xen/arch/x86/cpu-policy.c
>> @@ -423,8 +423,14 @@ static void __init guest_common_max_feature_adjustments(uint32_t *fs)
>>           * Retpoline not safe)", so these need to be visible to a guest in all
>>           * cases, even when it's only some other server in the pool which
>>           * suffers the identified behaviour.
>> +         *
>> +         * We can always run any VM which has previously (or will
>> +         * subsequently) run on hardware where Retpoline is not safe.  Note:
>> +         * The dependency logic may hide RRSBA for other reasons.
>>           */
>>          __set_bit(X86_FEATURE_ARCH_CAPS, fs);
>> +        __set_bit(X86_FEATURE_RSBA, fs);
>> +        __set_bit(X86_FEATURE_RRSBA, fs);
>>      }
>>  }
> Similar question to what I raised before: Can't this lead to both bits being
> set, which according to descriptions earlier in the series and elsewhere
> ought to not be possible?

In this case, no, because the max values are fully discarded when
establishing the default policy.

Remember this value is used to decide whether an incoming VM can be
accepted.  It doesn't reflect a sensible configuration to run.

Whether or not both values ought to be visible is the subject of the
outstanding question.

>
>> --- a/xen/tools/gen-cpuid.py
>> +++ b/xen/tools/gen-cpuid.py
>> @@ -318,7 +318,7 @@ def crunch_numbers(state):
>>          # IBRSB/IBRS, and we pass this MSR directly to guests.  Treating them
>>          # as dependent features simplifies Xen's logic, and prevents the guest
>>          # from seeing implausible configurations.
>> -        IBRSB: [STIBP, SSBD, INTEL_PSFD],
>> +        IBRSB: [STIBP, SSBD, INTEL_PSFD, EIBRS],
> Is this really an architecturally established dependency? From an abstract
> pov having just eIBRS ought to be enough of an indicator?

This is the same as asking "can we hide AVX512F but expose AVX512_*"...

> And hence it would
> be wrong to hide it just because IBRSB isn't also set.

EIBRS means "you should set MSR_SPEC_CTRL.IBRS once at the start of day
and leave it set", which to me firmly states a dependency.
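
The consequence of such a dependency can be sketched in gen-cpuid.py
style: hiding a feature must also hide everything that transitively
depends on it.  The helper below is a hypothetical illustration, not the
actual gen-cpuid.py code; the feature names mirror the hunk above:

```python
# Illustrative sketch of dependency handling: clearing a feature also
# clears its (transitive) dependents.
deps = {
    "IBRSB": ["STIBP", "SSBD", "INTEL_PSFD", "EIBRS"],
    "EIBRS": ["RRSBA"],
}

def clear_feature(fs: set, feature: str) -> None:
    fs.discard(feature)
    for dep in deps.get(feature, []):
        clear_feature(fs, dep)

fs = {"IBRSB", "STIBP", "EIBRS", "RRSBA"}
clear_feature(fs, "IBRSB")   # hiding IBRSB also hides EIBRS, hence RRSBA
assert fs == set()
```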


>  Plus aiui ...
>
>> @@ -328,6 +328,9 @@ def crunch_numbers(state):
>>  
>>          # The ARCH_CAPS CPUID bit enumerates the availability of the whole register.
>>          ARCH_CAPS: list(range(RDCL_NO, RDCL_NO + 64)),
>> +
>> +        # The behaviour described by RRSBA depends on eIBRS being active.
>> +        EIBRS: [RRSBA],
> ... for the purpose of the change here this dependency is all you need.

This change is to make the values sane for a guest, which includes "you
don't get RRSBA or EIBRS if you have to level IBRS out".

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 30 13:37:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 13:37:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541180.843640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3zWu-00013S-2p; Tue, 30 May 2023 13:36:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541180.843640; Tue, 30 May 2023 13:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3zWt-00013L-Vz; Tue, 30 May 2023 13:36:51 +0000
Received: by outflank-mailman (input) for mailman id 541180;
 Tue, 30 May 2023 13:36:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1tCH=BT=citrix.com=prvs=5074c9224=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q3zWs-00013E-30
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 13:36:50 +0000
Received: from esa2.hc3370-68.iphmx.com (esa2.hc3370-68.iphmx.com
 [216.71.145.153]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 049f573b-feef-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 15:36:46 +0200 (CEST)
Received: from mail-mw2nam12lp2046.outbound.protection.outlook.com (HELO
 NAM12-MW2-obe.outbound.protection.outlook.com) ([104.47.66.46])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 May 2023 09:36:43 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA1PR03MB6641.namprd03.prod.outlook.com (2603:10b6:806:1cc::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 13:36:41 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.020; Tue, 30 May 2023
 13:36:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 049f573b-feef-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685453806;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=j2ySiuXPSywQRZCLfQ0KLFYGTRq2VSX8MDl5MMnT7xE=;
  b=eOvMxXZ27TjXWMU08qxGOA0mbXc2WaBBEcdFpQk5o0v8g9QqbnlAgAZG
   Hr2xsbiNaYE3fzzq2xOObcaaDmXuJ+1OZcKHthJFfQEQM5naWw5JTDjZf
   1KFnvcey5f9h8qhNjMHJ4z0kCbT1EaIpk168G0mjsWG114i77M4E58Zkt
   E=;
X-IronPort-RemoteIP: 104.47.66.46
X-IronPort-MID: 110811061
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-Talos-CUID: 9a23:ig+07WMr2XG2IO5DQQZorXNINOceIz7wwnXbKlCJUzpSYejA
X-Talos-MUID: 9a23:1h895AvG1r4r4/mpvM2nhR59aoRM26SVEGMDjLoBifeAKHV9NGLI
X-IronPort-AV: E=Sophos;i="6.00,204,1681185600"; 
   d="scan'208";a="110811061"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=illSC23pHVn2leBo7sZZYWDDo+aVVes5tUWJXEzbDrJsZt32zrdbANo5hiwOthesV8iaQCvfJgHH3XmsLM/u8J9ZjT2CUfXaRoMwoTsOUwC1T4pYdVKoHB5WLdXYOT6xHwFmVFhlye4c6F+dRXM8oJjOTCZghGWzYKoLHoMtpKmpdQ3iKrKCYInBEfBT3Fuhsmj7AZNXrXiJpNldbqJd6XHIeSVZ1EbrtBa8hMR3PNmO9FLj7AmO7MjNbNVV1ymKxQlbgKehyPKrXODkWVwam6GwV5LBy6cyTLldnmQ/s7Cu/8uy5e3b0V7xVOp3iwdcei7lJNBnB9lpBoIHo/+hWg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=96weyViAV/4YcbOJcV3izndEXIx065gHdjHm5RRyJYM=;
 b=HJ60lqOmnT2dZN+4JtihZ8yP8FdGYmQQQ3jWr/NH1bV0o73ghk1zKRTOV0KpAc3wdtpUH14EteczAQcGPCwfbEt6fqld4d8y3brWIBlgkpQBLfzP+5mUFBCpF8wkt4SurxD4x9t6fQgtuyoIkdyPuCDhHCKUhtK8z2Qj/uIcQ/DNav7UzWukgAIX5oR9HlJadKgXuRjZdAywVgDWkfja6yu7iRPHVudylS0D30SjZT9rw3NsqTwCf94QPkS/XBy1seEQzICTQhRMehYpefpK5IeJONA5ovy4wUnxHK8Q+S6xprp+W/MajKtqYeiDuAy4UthoWrX3n3k9NVDTvQVjiw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=96weyViAV/4YcbOJcV3izndEXIx065gHdjHm5RRyJYM=;
 b=fiBqIxJuqdDd3Pvgq3W3KLxX4I277wpFXijQga/8ibpJTqQ8rBH42VojAPD8IagQaRvgl6QR+QhhU89mlUMWQl7nKMxIvSdOQmA+1xqjAGLi3ESCiaMS0WyUvmM5C/SJD94XI+Mh9PngekqQiNeIczHibbbGo/fWNU5VsG+izbE=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Tue, 30 May 2023 15:36:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v3] vPCI: account for hidden devices
Message-ID: <ZHX746v6VZAehZsg@Air-de-Roger>
References: <e1c6e297-0046-73f6-981d-af776b271f24@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e1c6e297-0046-73f6-981d-af776b271f24@suse.com>
X-ClientProxiedBy: LO4P123CA0214.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a5::21) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SA1PR03MB6641:EE_
X-MS-Office365-Filtering-Correlation-Id: da7daae1-8d75-4947-684f-08db6112e676
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: da7daae1-8d75-4947-684f-08db6112e676
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 13:36:40.9463
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: cCH2mbZC96AziRRky7VKARBFmG5AGgRfVC19L9kF2sgXhRh9vSfRnv7x+La2LxdsTUgipUnybyf6JSX0aGr8GQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR03MB6641

On Tue, May 30, 2023 at 02:38:56PM +0200, Jan Beulich wrote:
> Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
> console) are associated with DomXEN, not Dom0. This means that while
> looking for overlapping BARs such devices cannot be found on Dom0's list
> of devices; DomXEN's list also needs to be scanned.
> 
> Suppress vPCI init altogether for r/o devices (which constitute a subset
> of hidden ones).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Just one nit below.

> ---
> v3: Also consider pdev being DomXEN's in modify_bars(). Also consult
>     DomXEN in vpci_{read,write}(). Move vpci_write()'s check of the r/o
>     map out of mainline code. Re-base over the standalone addition of
>     the loop continuation in modify_bars(), and finally make the code
>     change there well-formed.
> v2: Extend existing comment. Relax assertion. Don't initialize vPCI for
>     r/o devices.
> 
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>      struct vpci_header *header = &pdev->vpci->header;
>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>      struct pci_dev *tmp, *dev = NULL;
> +    const struct domain *d;
>      const struct vpci_msix *msix = pdev->vpci->msix;
>      unsigned int i;
>      int rc;
> @@ -285,58 +286,69 @@ static int modify_bars(const struct pci_
>  
>      /*
>       * Check for overlaps with other BARs. Note that only BARs that are
> -     * currently mapped (enabled) are checked for overlaps.
> +     * currently mapped (enabled) are checked for overlaps. Note also that
> +     * for hwdom we also need to include hidden, i.e. DomXEN's, devices.
>       */
> -    for_each_pdev ( pdev->domain, tmp )
> +    for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain; ; )
>      {
> -        if ( !tmp->vpci )
> -            /*
> -             * For the hardware domain it's possible to have devices assigned
> -             * to it that are not handled by vPCI, either because those are
> -             * read-only devices, or because vPCI setup has failed.
> -             */
> -            continue;
> -
> -        if ( tmp == pdev )
> +        for_each_pdev ( d, tmp )
>          {
> -            /*
> -             * Need to store the device so it's not constified and defer_map
> -             * can modify it in case of error.
> -             */
> -            dev = tmp;
> -            if ( !rom_only )
> +            if ( !tmp->vpci )
>                  /*
> -                 * If memory decoding is toggled avoid checking against the
> -                 * same device, or else all regions will be removed from the
> -                 * memory map in the unmap case.
> +                 * For the hardware domain it's possible to have devices
> +                 * assigned to it that are not handled by vPCI, either because
> +                 * those are read-only devices, or because vPCI setup has
> +                 * failed.
>                   */
>                  continue;
> -        }
>  
> -        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
> -        {
> -            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
> -            unsigned long start = PFN_DOWN(bar->addr);
> -            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
> -
> -            if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
> -                 /*
> -                  * If only the ROM enable bit is toggled check against other
> -                  * BARs in the same device for overlaps, but not against the
> -                  * same ROM BAR.
> -                  */
> -                 (rom_only && tmp == pdev && bar->type == VPCI_BAR_ROM) )
> -                continue;
> +            if ( tmp == pdev )
> +            {
> +                /*
> +                 * Need to store the device so it's not constified and defer_map
> +                 * can modify it in case of error.
> +                 */
> +                dev = tmp;
> +                if ( !rom_only )
> +                    /*
> +                     * If memory decoding is toggled avoid checking against the
> +                     * same device, or else all regions will be removed from the
> +                     * memory map in the unmap case.
> +                     */
> +                    continue;
> +            }
>  
> -            rc = rangeset_remove_range(mem, start, end);
> -            if ( rc )
> +            for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
>              {
> -                printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
> -                       start, end, rc);
> -                rangeset_destroy(mem);
> -                return rc;
> +                const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
> +                unsigned long start = PFN_DOWN(bar->addr);
> +                unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
> +
> +                if ( !bar->enabled ||
> +                     !rangeset_overlaps_range(mem, start, end) ||
> +                     /*
> +                      * If only the ROM enable bit is toggled check against
> +                      * other BARs in the same device for overlaps, but not
> +                      * against the same ROM BAR.
> +                      */
> +                     (rom_only && tmp == pdev && bar->type == VPCI_BAR_ROM) )
> +                    continue;
> +
> +                rc = rangeset_remove_range(mem, start, end);
> +                if ( rc )
> +                {
> +                    printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
> +                           start, end, rc);
> +                    rangeset_destroy(mem);
> +                    return rc;
> +                }
>              }
>          }
> +
> +        if ( !is_hardware_domain(d) )
> +            break;
> +
> +        d = dom_xen;

Nit: don't you want to do this in the advancement to the next
iteration?
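
To illustrate the shape under discussion, here's a rough Python model of
the two-pass scan (domain objects are hypothetical stand-ins for the
actual Xen structures; this is not the C code from the patch):

```python
# Rough model of the loop in modify_bars(): scan the device's own domain
# (substituting hwdom when the device is DomXEN's), and for hwdom also
# scan DomXEN's hidden devices on a second pass.
def domains_to_scan(pdev_domain, hardware_domain="hwdom", dom_xen="domxen"):
    d = pdev_domain if pdev_domain != dom_xen else hardware_domain
    yield d
    if d == hardware_domain:  # the nit: fold this into the loop advancement
        yield dom_xen

assert list(domains_to_scan("hwdom")) == ["hwdom", "domxen"]
assert list(domains_to_scan("domU")) == ["domU"]
assert list(domains_to_scan("domxen")) == ["hwdom", "domxen"]
```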

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 30 13:59:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 13:59:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541185.843649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3zsL-0003dR-VA; Tue, 30 May 2023 13:59:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541185.843649; Tue, 30 May 2023 13:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q3zsL-0003dK-S8; Tue, 30 May 2023 13:59:01 +0000
Received: by outflank-mailman (input) for mailman id 541185;
 Tue, 30 May 2023 13:59:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4Ox2=BT=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q3zsK-0003cy-04
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 13:59:00 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1f7e19cc-fef2-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 15:58:58 +0200 (CEST)
Received: by mail-wr1-x430.google.com with SMTP id
 ffacd0b85a97d-30ae95c4e75so2550047f8f.2
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 06:58:58 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 h4-20020adfe984000000b002fe96f0b3acsm3442008wrm.63.2023.05.30.06.58.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 30 May 2023 06:58:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f7e19cc-fef2-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685455137; x=1688047137;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=KdYBBTE6aup7a7P5Imkc8FJlVgU6ce1DM4IkbamFAwc=;
        b=E71rr4N12y40l1ZlDvelC/iIjU5eY3/0EGKlLF+ialltzdrmStDdRTDf7MIxMW9V0X
         Uuk51XFe2RJzHalHbLG1Y59q3qdL1RA/VTj+LHb5DnPFTTLjRTexUYjr5rQ94W6F+dpg
         fQFqObmk8VYVxWB1QeGoeKQ1KNg5MzM34HJHw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685455137; x=1688047137;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=KdYBBTE6aup7a7P5Imkc8FJlVgU6ce1DM4IkbamFAwc=;
        b=JsIT134KS84sYgQur2u9jHnXoSxuYv6152KWnD6uLeu7rQoc+Pn+rlnuan906mjoQ0
         wubB0JXS/EB/a50NghTUIDHYVIRPKVpQrJvebjmfI9FxqOUOctiijYUjBESoTzGYfa44
         IdyqRqio7ogGUervrWozZbHYZwR3cNxisHk+Uv2RFlBdPBh32pta3DTjYag39GEGSivn
         Klh1YQs9esNilQnHbKZgxmlu24jE++Sq+dJ38qo3fnJ7MDu8RgT8Js4oIfCFq/YW5SfP
         MbmYPp65omliBZd5GUTMU7jg38P3BXAk7igTeydtMuZJLL4OG1nnN7n2HT2YGviETydw
         3Ccw==
X-Gm-Message-State: AC+VfDzpOJnm9fbmH6k8diiQW6BFD10lXbGMFslDcOlvRZHxnHUlCNoa
	vrIzgoNg74Ns6Vr+Yz1tgjxgjEPtiZkz4IlbtdA=
X-Google-Smtp-Source: ACHHUZ5DanzzcMNbP8IBYA8wnwOMx6StTtfculEVRukpl7meM/CCnJOhgcfiwICHMJbNCc32Hcaf4A==
X-Received: by 2002:adf:dd84:0:b0:306:4273:9efc with SMTP id x4-20020adfdd84000000b0030642739efcmr1686022wrl.40.1685455137583;
        Tue, 30 May 2023 06:58:57 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 0/3] Add Automatic IBRS support
Date: Tue, 30 May 2023 14:58:51 +0100
Message-Id: <20230530135854.1517-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

v2:
  * Renamed AUTOMATIC to AUTO
  * Style change in xen-cpuid.c
  * Swapped patches 2 and 3
  * Modified trampoline_efer from the BSP so APs use it during boot and S3
    wakeups pick it up.
  * Avoid the delay when setting AutoIBRS

Adds support for AMD's Automatic IBRS. It's a set-and-forget feature that
prevents lower privileged executions from affecting speculations of higher
privileged executions, so retpolines are not required. Furthermore, it
clears the RSB upon VMEXIT, so we can avoid doing it if the feature is
present.
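
The "set-and-forget" nature can be sketched as follows.  Per AMD's APM,
Automatic IBRS is enumerated in CPUID Fn8000_0021 EAX and enabled via an
EFER bit (AIBRSE); the model below only illustrates the decision and is
not the code from this series:

```python
# Illustrative model of enabling Automatic IBRS at boot.
CPUID_AUTO_IBRS = 1 << 8    # CPUID Fn8000_0021.EAX[8] (AutomaticIBRS)
EFER_AIBRSE     = 1 << 21   # EFER.AIBRSE

def boot_efer(cpuid_8000_0021_eax: int, efer: int) -> int:
    if cpuid_8000_0021_eax & CPUID_AUTO_IBRS:
        efer |= EFER_AIBRSE  # once set, no per-entry SPEC_CTRL writes needed
    return efer

assert boot_efer(CPUID_AUTO_IBRS, 0) == EFER_AIBRSE
assert boot_efer(0, 0) == 0
```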

Patch 1 adds the relevant bit definitions for CPUID and EFER.

Patch 2 exposes the feature to HVM guests.

Patch 3 hooks up AutoIBRS to spec_ctrl so it's used when IBRS is picked.
        It also tweaks the heuristics so AutoIBRS is preferred over
        retpolines as the BTI mitigation. This is enough to protect Xen.

Alejandro Vallejo (3):
  x86: Add bit definitions for Automatic IBRS
  x86: Expose Automatic IBRS to guests
  x86: Add support for AMD's Automatic IBRS

 tools/libs/light/libxl_cpuid.c              |  1 +
 tools/misc/xen-cpuid.c                      |  1 +
 xen/arch/x86/hvm/hvm.c                      |  3 ++
 xen/arch/x86/include/asm/cpufeature.h       |  1 +
 xen/arch/x86/include/asm/msr-index.h        |  4 +-
 xen/arch/x86/pv/emul-priv-op.c              |  4 +-
 xen/arch/x86/spec_ctrl.c                    | 45 ++++++++++++++++-----
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 8 files changed, 46 insertions(+), 14 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 13:59:15 2023
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 1/3] x86: Add bit definitions for Automatic IBRS
Date: Tue, 30 May 2023 14:58:52 +0100
Message-Id: <20230530135854.1517-2-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230530135854.1517-1-alejandro.vallejo@cloud.com>
References: <20230530135854.1517-1-alejandro.vallejo@cloud.com>

Automatic IBRS is an AMD feature that reduces the overhead of IBRS handling.
Once enabled, code running at CPL0 is automatically IBRS-protected even if
SPEC_CTRL.IBRS is not set. Furthermore, the RAS/RSB is cleared on VMEXIT.

The feature is exposed in CPUID and toggled in EFER.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v2:
  * Renamed AUTOMATIC -> AUTO
  * Newline removal in xen-cpuid.c
---
 tools/libs/light/libxl_cpuid.c              | 1 +
 tools/misc/xen-cpuid.c                      | 1 +
 xen/arch/x86/include/asm/cpufeature.h       | 1 +
 xen/arch/x86/include/asm/msr-index.h        | 1 +
 xen/include/public/arch-x86/cpufeatureset.h | 1 +
 5 files changed, 5 insertions(+)

diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index cca0f19d93..f5ce9f9795 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -317,6 +317,7 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
 
         {"lfence+",      0x80000021, NA, CPUID_REG_EAX,  2,  1},
         {"nscb",         0x80000021, NA, CPUID_REG_EAX,  6,  1},
+        {"auto-ibrs",    0x80000021, NA, CPUID_REG_EAX,  8,  1},
         {"cpuid-user-dis", 0x80000021, NA, CPUID_REG_EAX, 17, 1},
 
         {"maxhvleaf",    0x40000000, NA, CPUID_REG_EAX,  0,  8},
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index 5d0c64a45f..c65d9e01bf 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -199,6 +199,7 @@ static const char *const str_e21a[32] =
 {
     [ 2] = "lfence+",
     [ 6] = "nscb",
+    [ 8] = "auto-ibrs",
 
     /* 16 */                [17] = "cpuid-user-dis",
 };
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index 50235f098d..ace31e3b1f 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -161,6 +161,7 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_amd_ssbd        boot_cpu_has(X86_FEATURE_AMD_SSBD)
 #define cpu_has_virt_ssbd       boot_cpu_has(X86_FEATURE_VIRT_SSBD)
 #define cpu_has_ssb_no          boot_cpu_has(X86_FEATURE_SSB_NO)
+#define cpu_has_auto_ibrs       boot_cpu_has(X86_FEATURE_AUTO_IBRS)
 
 /* CPUID level 0x00000007:0.edx */
 #define cpu_has_avx512_4vnniw   boot_cpu_has(X86_FEATURE_AVX512_4VNNIW)
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 082fb2e0d9..73d0af2615 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -175,6 +175,7 @@
 #define  EFER_NXE                           (_AC(1, ULL) << 11) /* No Execute Enable */
 #define  EFER_SVME                          (_AC(1, ULL) << 12) /* Secure Virtual Machine Enable */
 #define  EFER_FFXSE                         (_AC(1, ULL) << 14) /* Fast FXSAVE/FXRSTOR */
+#define  EFER_AIBRSE                        (_AC(1, ULL) << 21) /* Automatic IBRS Enable */
 
 #define EFER_KNOWN_MASK \
     (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | EFER_SVME | EFER_FFXSE)
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 777041425e..3ac144100e 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -287,6 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
 XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
+XEN_CPUFEATURE(AUTO_IBRS,          11*32+ 8) /*   HW can handle IBRS on its own */
 XEN_CPUFEATURE(CPUID_USER_DIS,     11*32+17) /*   CPUID disable for CPL > 0 software */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ebx, word 12 */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 13:59:15 2023
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 3/3] x86: Add support for AMD's Automatic IBRS
Date: Tue, 30 May 2023 14:58:54 +0100
Message-Id: <20230530135854.1517-4-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230530135854.1517-1-alejandro.vallejo@cloud.com>
References: <20230530135854.1517-1-alejandro.vallejo@cloud.com>

In cases where AutoIBRS is supported by the host:

* Prefer AutoIBRS to retpolines in the BTI mitigation selection
  heuristics.
* Always enable AutoIBRS if IBRS is chosen as a BTI mitigation.
* Avoid stuffing the RAS/RSB on VMEXIT if AutoIBRS is enabled.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v2:
  * Gated the CPUID read of e21a on the presence of the leaf
  * Add auto-ibrs to trampoline_efer if chosen
  * Remove smpboot.c modifications, as they are not needed after
    trampoline_efer is modified
  * Avoid the AutoIBRS delay as it doesn't provide any benefit.
---
 xen/arch/x86/spec_ctrl.c | 45 ++++++++++++++++++++++++++++++----------
 1 file changed, 34 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 50d467f74c..36231e65fb 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -390,7 +390,7 @@ custom_param("pv-l1tf", parse_pv_l1tf);
 
 static void __init print_details(enum ind_thunk thunk)
 {
-    unsigned int _7d0 = 0, _7d2 = 0, e8b = 0, max = 0, tmp;
+    unsigned int _7d0 = 0, _7d2 = 0, e8b = 0, e21a = 0, max = 0, tmp;
     uint64_t caps = 0;
 
     /* Collect diagnostics about available mitigations. */
@@ -400,6 +400,8 @@ static void __init print_details(enum ind_thunk thunk)
         cpuid_count(7, 2, &tmp, &tmp, &tmp, &_7d2);
     if ( boot_cpu_data.extended_cpuid_level >= 0x80000008 )
         cpuid(0x80000008, &tmp, &e8b, &tmp, &tmp);
+    if ( boot_cpu_data.extended_cpuid_level >= 0x80000021 )
+        cpuid(0x80000021, &e21a, &tmp, &tmp, &tmp);
     if ( cpu_has_arch_caps )
         rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
@@ -430,11 +432,12 @@ static void __init print_details(enum ind_thunk thunk)
            (e8b  & cpufeat_mask(X86_FEATURE_IBPB_RET))       ? " IBPB_RET"       : "");
 
     /* Hardware features which need driving to mitigate issues. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (e8b  & cpufeat_mask(X86_FEATURE_IBPB)) ||
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB))          ? " IBPB"           : "",
            (e8b  & cpufeat_mask(X86_FEATURE_IBRS)) ||
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB))          ? " IBRS"           : "",
+           (e21a & cpufeat_mask(X86_FEATURE_AUTO_IBRS))      ? " AUTO_IBRS"      : "",
            (e8b  & cpufeat_mask(X86_FEATURE_AMD_STIBP)) ||
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP))          ? " STIBP"          : "",
            (e8b  & cpufeat_mask(X86_FEATURE_AMD_SSBD)) ||
@@ -468,7 +471,9 @@ static void __init print_details(enum ind_thunk thunk)
            thunk == THUNK_JMP       ? "JMP" : "?",
            (!boot_cpu_has(X86_FEATURE_IBRSB) &&
             !boot_cpu_has(X86_FEATURE_IBRS))         ? "No" :
-           (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" :  "IBRS-",
+           (cpu_has_auto_ibrs &&
+            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)) ? "AUTO_IBRS+" :
+            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" : "IBRS-",
            (!boot_cpu_has(X86_FEATURE_STIBP) &&
             !boot_cpu_has(X86_FEATURE_AMD_STIBP))    ? "" :
            (default_xen_spec_ctrl & SPEC_CTRL_STIBP) ? " STIBP+" : " STIBP-",
@@ -1150,15 +1155,20 @@ void __init init_speculation_mitigations(void)
     }
     else
     {
-        /*
-         * Evaluate the safest Branch Target Injection mitigations to use.
-         * First, begin with compiler-aided mitigations.
-         */
-        if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) )
+        /* Evaluate the safest BTI mitigations with lowest overhead */
+        if ( cpu_has_auto_ibrs )
+        {
+            /*
+             * Prefer Automatic IBRS when present, as it also avoids
+             * stuffing the RSB manually on every VMEXIT.
+             */
+            ibrs = true;
+        }
+        else if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) )
         {
             /*
-             * On all hardware, we'd like to use retpoline in preference to
-             * IBRS, but only if it is safe on this hardware.
+             * Otherwise, we'd like to use retpoline in preference to
+             * plain IBRS, but only if it is safe on this hardware.
              */
             if ( retpoline_safe() )
                 thunk = THUNK_RETPOLINE;
@@ -1357,7 +1367,9 @@ void __init init_speculation_mitigations(void)
      */
     if ( opt_rsb_hvm )
     {
-        setup_force_cpu_cap(X86_FEATURE_SC_RSB_HVM);
+        /* Automatic IBRS wipes the RSB for us on VMEXIT */
+        if ( !(ibrs && cpu_has_auto_ibrs) )
+            setup_force_cpu_cap(X86_FEATURE_SC_RSB_HVM);
 
         /*
          * For SVM, Xen's RSB safety actions are performed before STGI, so
@@ -1594,6 +1606,17 @@ void __init init_speculation_mitigations(void)
             barrier();
         }
 
+        /*
+         * If we're to use AutoIBRS, then set it now for the BSP and mark
+         * it in trampoline_efer so it's picked up by the wakeup code. It
+         * will be used while starting up the APs and during S3 wakeups.
+         */
+        if ( ibrs && cpu_has_auto_ibrs )
+        {
+            write_efer(read_efer() | EFER_AIBRSE);
+            bootsym(trampoline_efer) |= EFER_AIBRSE;
+        }
+
         val = bsp_delay_spec_ctrl ? 0 : default_xen_spec_ctrl;
 
         wrmsrl(MSR_SPEC_CTRL, val);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 13:59:16 2023
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/3] x86: Expose Automatic IBRS to guests
Date: Tue, 30 May 2023 14:58:53 +0100
Message-Id: <20230530135854.1517-3-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230530135854.1517-1-alejandro.vallejo@cloud.com>
References: <20230530135854.1517-1-alejandro.vallejo@cloud.com>

Expose AutoIBRS to HVM guests. EFER is swapped by VMRUN, so Xen only has to
make sure writes to EFER.AIBRSE are gated on the feature being exposed.

Also hide EFER.AIBRSE from PV guests as they have no say in the matter.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v2:
  * Moved to patch2 from v1/patch3
---
 xen/arch/x86/hvm/hvm.c                      | 3 +++
 xen/arch/x86/include/asm/msr-index.h        | 3 ++-
 xen/arch/x86/pv/emul-priv-op.c              | 4 ++--
 xen/include/public/arch-x86/cpufeatureset.h | 2 +-
 4 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d7d31b5393..2d6e4bb9c6 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -936,6 +936,9 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
     if ( (value & EFER_FFXSE) && !p->extd.ffxsr )
         return "FFXSE without feature";
 
+    if ( (value & EFER_AIBRSE) && !p->extd.auto_ibrs )
+        return "AutoIBRS without feature";
+
     return NULL;
 }
 
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 73d0af2615..49cb334c61 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -178,7 +178,8 @@
 #define  EFER_AIBRSE                        (_AC(1, ULL) << 21) /* Automatic IBRS Enable */
 
 #define EFER_KNOWN_MASK \
-    (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | EFER_SVME | EFER_FFXSE)
+    (EFER_SCE | EFER_LME | EFER_LMA | EFER_NXE | EFER_SVME | EFER_FFXSE | \
+     EFER_AIBRSE)
 
 #define MSR_STAR                            0xc0000081 /* legacy mode SYSCALL target */
 #define MSR_LSTAR                           0xc0000082 /* long mode SYSCALL target */
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 8a4ef9c35e..142bc4818c 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -853,8 +853,8 @@ static uint64_t guest_efer(const struct domain *d)
 {
     uint64_t val;
 
-    /* Hide unknown bits, and unconditionally hide SVME from guests. */
-    val = read_efer() & EFER_KNOWN_MASK & ~EFER_SVME;
+    /* Hide unknown bits, and unconditionally hide SVME and AIBRSE from guests. */
+    val = read_efer() & EFER_KNOWN_MASK & ~(EFER_SVME | EFER_AIBRSE);
     /*
      * Hide the 64-bit features from 32-bit guests.  SCE has
      * vendor-dependent behaviour.
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 3ac144100e..51d737a125 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -287,7 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
 /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
 XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
 XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
-XEN_CPUFEATURE(AUTO_IBRS,          11*32+ 8) /*   HW can handle IBRS on its own */
+XEN_CPUFEATURE(AUTO_IBRS,          11*32+ 8) /*S  HW can handle IBRS on its own */
 XEN_CPUFEATURE(CPUID_USER_DIS,     11*32+17) /*   CPUID disable for CPL > 0 software */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ebx, word 12 */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 14:13:37 2023
Message-ID: <527d819a-44f0-72c4-13ae-403df43f0b9f@suse.com>
Date: Tue, 30 May 2023 16:13:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v3] vPCI: account for hidden devices
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <e1c6e297-0046-73f6-981d-af776b271f24@suse.com>
 <ZHX746v6VZAehZsg@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHX746v6VZAehZsg@Air-de-Roger>
X-ClientProxiedBy: FR3P281CA0093.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a1::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 30.05.2023 15:36, Roger Pau Monné wrote:
> On Tue, May 30, 2023 at 02:38:56PM +0200, Jan Beulich wrote:
>> Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
>> console) are associated with DomXEN, not Dom0. This means that while
>> looking for overlapping BARs such devices cannot be found on Dom0's list
>> of devices; DomXEN's list also needs to be scanned.
>>
>> Suppress vPCI init altogether for r/o devices (which constitute a subset
>> of hidden ones).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> @@ -285,58 +286,69 @@ static int modify_bars(const struct pci_
>>  
>>      /*
>>       * Check for overlaps with other BARs. Note that only BARs that are
>> -     * currently mapped (enabled) are checked for overlaps.
>> +     * currently mapped (enabled) are checked for overlaps. Note also that
>> +     * for hwdom we also need to include hidden, i.e. DomXEN's, devices.
>>       */
>> -    for_each_pdev ( pdev->domain, tmp )
>> +    for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain; ; )
>>      {
>> -        if ( !tmp->vpci )
>> -            /*
>> -             * For the hardware domain it's possible to have devices assigned
>> -             * to it that are not handled by vPCI, either because those are
>> -             * read-only devices, or because vPCI setup has failed.
>> -             */
>> -            continue;
>> -
>> -        if ( tmp == pdev )
>> +        for_each_pdev ( d, tmp )
>>          {
>> -            /*
>> -             * Need to store the device so it's not constified and defer_map
>> -             * can modify it in case of error.
>> -             */
>> -            dev = tmp;
>> -            if ( !rom_only )
>> +            if ( !tmp->vpci )
>>                  /*
>> -                 * If memory decoding is toggled avoid checking against the
>> -                 * same device, or else all regions will be removed from the
>> -                 * memory map in the unmap case.
>> +                 * For the hardware domain it's possible to have devices
>> +                 * assigned to it that are not handled by vPCI, either because
>> +                 * those are read-only devices, or because vPCI setup has
>> +                 * failed.
>>                   */
>>                  continue;
>> -        }
>>  
>> -        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
>> -        {
>> -            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
>> -            unsigned long start = PFN_DOWN(bar->addr);
>> -            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
>> -
>> -            if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
>> -                 /*
>> -                  * If only the ROM enable bit is toggled check against other
>> -                  * BARs in the same device for overlaps, but not against the
>> -                  * same ROM BAR.
>> -                  */
>> -                 (rom_only && tmp == pdev && bar->type == VPCI_BAR_ROM) )
>> -                continue;
>> +            if ( tmp == pdev )
>> +            {
>> +                /*
>> +                 * Need to store the device so it's not constified and defer_map
>> +                 * can modify it in case of error.
>> +                 */
>> +                dev = tmp;
>> +                if ( !rom_only )
>> +                    /*
>> +                     * If memory decoding is toggled avoid checking against the
>> +                     * same device, or else all regions will be removed from the
>> +                     * memory map in the unmap case.
>> +                     */
>> +                    continue;
>> +            }
>>  
>> -            rc = rangeset_remove_range(mem, start, end);
>> -            if ( rc )
>> +            for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
>>              {
>> -                printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
>> -                       start, end, rc);
>> -                rangeset_destroy(mem);
>> -                return rc;
>> +                const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
>> +                unsigned long start = PFN_DOWN(bar->addr);
>> +                unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
>> +
>> +                if ( !bar->enabled ||
>> +                     !rangeset_overlaps_range(mem, start, end) ||
>> +                     /*
>> +                      * If only the ROM enable bit is toggled check against
>> +                      * other BARs in the same device for overlaps, but not
>> +                      * against the same ROM BAR.
>> +                      */
>> +                     (rom_only && tmp == pdev && bar->type == VPCI_BAR_ROM) )
>> +                    continue;
>> +
>> +                rc = rangeset_remove_range(mem, start, end);
>> +                if ( rc )
>> +                {
>> +                    printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
>> +                           start, end, rc);
>> +                    rangeset_destroy(mem);
>> +                    return rc;
>> +                }
>>              }
>>          }
>> +
>> +        if ( !is_hardware_domain(d) )
>> +            break;
>> +
>> +        d = dom_xen;
> 
> Nit: don't you want to do this in the advancement to the next
> iteration?

Well, I had it that way first, but I didn't like the need to wrap the
line there. Hence I moved it here, which is functionally identical as
long as no "continue" appears in this (now) outer loop.
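The control flow being discussed can be sketched in plain C: the outer loop runs once over the device owner's list and, for the hardware domain, a second time over DomXEN's list, with the domain switch at the loop tail exactly as in the patch. The structures and the scan() helper below are simplified, hypothetical stand-ins for illustration, not the actual Xen types:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the Xen structures (hypothetical). */
struct pdev { const char *name; struct pdev *next; };
struct domain { bool is_hwdom; struct pdev *devices; };

/*
 * Visit the owner's device list and, when the owner is the hardware
 * domain, dom_xen's list as well -- mirroring the two-pass scan in
 * the patched modify_bars().
 */
static int scan(struct domain *owner, struct domain *dom_xen)
{
    int seen = 0;

    for ( struct domain *d = owner; ; )
    {
        for ( struct pdev *tmp = d->devices; tmp; tmp = tmp->next )
            ++seen;

        if ( !d->is_hwdom )
            break;

        /*
         * Switching domains at the loop tail is functionally identical
         * to doing it in the for() advancement, as long as no
         * "continue" appears in this outer loop.
         */
        d = dom_xen;
    }

    return seen;
}
```

With two devices on the hardware domain and one hidden device on dom_xen, scan() visits all three; for a non-hardware domain it stops after the first pass.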

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 14:23:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 14:23:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541202.843700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q40G3-00010Z-86; Tue, 30 May 2023 14:23:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541202.843700; Tue, 30 May 2023 14:23:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q40G3-00010S-58; Tue, 30 May 2023 14:23:31 +0000
Received: by outflank-mailman (input) for mailman id 541202;
 Tue, 30 May 2023 14:23:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q40G2-00010J-Bn
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 14:23:30 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2060d.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::60d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 89b7d8b7-fef5-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 16:23:25 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DB9PR04MB8362.eurprd04.prod.outlook.com (2603:10a6:10:241::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 14:23:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 14:23:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89b7d8b7-fef5-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f87cf1cd-61ba-aaf1-dd81-f2352acf4273@suse.com>
Date: Tue, 30 May 2023 16:23:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 1/2] x86: annotate entry points with type and size
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
 <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
 <fd492a4a-11ba-b63a-daf4-99697db0db0e@suse.com>
 <ZHSp9+ouRrXFEY4R@Air-de-Roger>
 <bba057a2-0a68-bf05-9a92-59546b52c73c@suse.com>
 <ZHX4PR56MQZQCVUX@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHX4PR56MQZQCVUX@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0167.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:99::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0

On 30.05.2023 15:21, Roger Pau Monné wrote:
> On Tue, May 30, 2023 at 10:06:27AM +0200, Jan Beulich wrote:
>> On 29.05.2023 15:34, Roger Pau Monné wrote:
>>> On Tue, May 23, 2023 at 01:30:51PM +0200, Jan Beulich wrote:
>>>> Note that the FB-label in autogen_stubs() cannot be converted just yet:
>>>> Such labels cannot be used with .type. We could further diverge from
>>>> Linux's model and avoid setting STT_NOTYPE explicitly (that's the type
>>>> labels get by default anyway).
>>>>
>>>> Note that we can't use ALIGN() (in place of SYM_ALIGN()) as long as we
>>>> still have ALIGN.
>>>
>>> FWIW, as I'm looking into using the newly added macros in order to add
>>> annotations suitable for live-patching, I would need to switch some of
>>> the LABEL usages into their own functions, as it's not possible to
>>> livepatch a function that has labels jumped into from code paths
>>> outside of the function.
>>
>> Hmm, I'm not sure what the best way is to overcome that restriction. I'm
>> not convinced we want to arbitrarily name things "functions".
> 
> Any external entry point in the middle of a function-like block will
> prevent it from being live patched.

Is there actually any particular reason for this restriction? As long
as old and new code have the same external entry points, redirecting
all old ones to their new counterparts would seem feasible.

> If you want I can try to do a pass on top of your patch and see how
> that would end up looking.  I'm attempting to think about other
> solutions, but every other solution seems quite horrible.

Right, but splitting functions into piecemeal fragments isn't going
to be very nice either.

>>>> --- a/xen/arch/x86/include/asm/asm_defns.h
>>>> +++ b/xen/arch/x86/include/asm/asm_defns.h
>>>> @@ -81,6 +81,45 @@ register unsigned long current_stack_poi
>>>>  
>>>>  #ifdef __ASSEMBLY__
>>>>  
>>>> +#define SYM_ALIGN(algn...) .balign algn
>>>> +
>>>> +#define SYM_L_GLOBAL(name) .globl name
>>>> +#define SYM_L_WEAK(name)   .weak name
>>>
>>> Wouldn't this be better added when required?  I can't spot any weak
>>> symbols in assembly ATM, and you don't introduce any _WEAK macro
>>> variants below.
>>
>> Well, Andrew specifically mentioned the desire to also have Linux's
>> support for weak symbols. Hence I decided to add it here despite it
>> being (for now) unused. I can certainly drop that again, but in
>> particular if we wanted to use the scheme globally, I think we may
>> want to make it "complete".
> 
> OK, as long as we know it's unused.

I've added a sentence to this effect to the description.
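For context on what .weak buys: a weak reference may legitimately remain unresolved, in which case it compares as NULL, and a strong definition elsewhere in the link silently overrides a weak one. A minimal C-level sketch of the first property (GCC/Clang on ELF; optional_hook is a made-up name, not an existing Xen symbol):

```c
#include <stddef.h>

/*
 * A weak reference with no strong definition anywhere in the link
 * resolves to NULL (GCC/Clang on ELF targets), so callers can probe
 * for it at runtime -- the same linkage that .weak / SYM_L_WEAK()
 * would give an assembly-level symbol.
 */
__attribute__((weak)) extern int optional_hook(void);

int call_hook_or_default(void)
{
    /* Falls back to -1 when nothing in the link defines the hook. */
    return optional_hook ? optional_hook() : -1;
}
```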

>>>> +#define SYM_L_LOCAL(name)  /* nothing */
>>>> +
>>>> +#define SYM_T_FUNC         STT_FUNC
>>>> +#define SYM_T_DATA         STT_OBJECT
>>>> +#define SYM_T_NONE         STT_NOTYPE
>>>> +
>>>> +#define SYM(name, typ, linkage, algn...)          \
>>>> +        .type name, SYM_T_ ## typ;                \
>>>> +        SYM_L_ ## linkage(name);                  \
>>>> +        SYM_ALIGN(algn);                          \
>>>> +        name:
>>>> +
>>>> +#define END(name) .size name, . - name
>>>> +
>>>> +#define ARG1_(x, y...) (x)
>>>> +#define ARG2_(x, y...) ARG1_(y)
>>>> +
>>>> +#define LAST__(nr) ARG ## nr ## _
>>>> +#define LAST_(nr)  LAST__(nr)
>>>> +#define LAST(x, y...) LAST_(count_args(x, ## y))(x, ## y)
>>>
>>> I find LAST not very descriptive; wouldn't it be better named OPTIONAL()
>>> or similar? (and maybe placed in lib.h?)
>>
>> I don't think OPTIONAL describes the purpose. I truly mean "last" here.
>> As to placing in lib.h - perhaps, but then we may want to have forms
>> with more than 2 arguments right away (and it would be a little unclear
>> how far up to go).
> 
> Hm, I would be fine with adding that version with just 2 arguments, as
> it's better to have the helper in a generic place IMO.

I'll think about this some more.
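To make the naming discussion concrete, here is a standalone sketch of the LAST() selection logic from the quoted hunk. The count_args() here is a simplified, hypothetical reimplementation for up to two arguments (Xen has its own in a header); the ARG*_/LAST* macros match the patch, and like the original this relies on the GNU named-variadic and ", ##" extensions:

```c
/*
 * count_args(): simplified, hypothetical stand-in (Xen provides its
 * own); yields 1 or 2 for one or two arguments respectively.
 */
#define COUNT_ARGS_(a, b, n, ...) n
#define count_args(...) COUNT_ARGS_(__VA_ARGS__, 2, 1, 0)

/* As in the quoted patch: select the last of up to two arguments. */
#define ARG1_(x, y...) (x)
#define ARG2_(x, y...) ARG1_(y)

#define LAST__(nr) ARG ## nr ## _
#define LAST_(nr)  LAST__(nr)
#define LAST(x, y...) LAST_(count_args(x, ## y))(x, ## y)

/*
 * So FUNC(name) ends up aligning to LAST(16) == 16 (the default),
 * while FUNC(name, 32) picks the caller-supplied LAST(16, 32) == 32.
 */
```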

>>>> +
>>>> +#define FUNC(name, algn...) \
>>>> +        SYM(name, FUNC, GLOBAL, LAST(16, ## algn), 0x90)
>>>
>>> A rant: should the alignment of functions use a different padding
>>> (i.e. ret or ud2?), in order to prevent stray jumps landing in the
>>> padding and falling through into the next function?  That would also
>>> prevent the implicit fall-through used in some places.
>>
>> Yes, but that's a separate topic (for which iirc patches are pending
>> as well, just of course not integrated with the work here). There's
>> the slight risk of overlooking some "fall-through" case ...
> 
> Oh, OK, I wasn't aware patches were already floating around for this; I
> just came across it while reviewing.

Well, those don't cover padding yet, but they deal with straight-line
speculation past RET or JMP.

>>>>          sti
>>>>          call  do_softirq
>>>>          jmp   compat_test_all_events
>>>>  
>>>> -        ALIGN
>>>>  /* %rbx: struct vcpu, %rdx: struct trap_bounce */
>>>> -.Lcompat_process_trapbounce:
>>>> +LABEL_LOCAL(.Lcompat_process_trapbounce)
>>>
>>> It's my understanding that here the '.L' prefix is pointless, since
>>> LABEL_LOCAL() will forcefully create a symbol for the label due to the
>>> usage of .type?
>>
>> I don't think .type has this effect. There's certainly no such label in
>> the symbol table of the object file I have as a result.
> 
> I was expecting .type to force the creation of a symbol, so the '.L'
> prefix does prevent the symbol from being created even if .type is
> specified.
> 
> Shouldn't the assembler complain that we are attempting to set a type
> for a not present symbol?

But .L symbols are still normal symbols to gas, just that it knows to not
emit them to the symbol table (unless there's a need, e.g. through a use
in a relocation that cannot be expressed as a section-relative one). It
could flag the pointless use, but then it may get this wrong if in the
end the symbol does need emitting.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 14:34:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 14:34:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541206.843710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q40Qs-0002YY-7t; Tue, 30 May 2023 14:34:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541206.843710; Tue, 30 May 2023 14:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q40Qs-0002YR-4b; Tue, 30 May 2023 14:34:42 +0000
Received: by outflank-mailman (input) for mailman id 541206;
 Tue, 30 May 2023 14:34:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q40Qr-0002YJ-Cb
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 14:34:41 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on061c.outbound.protection.outlook.com
 [2a01:111:f400:fe1f::61c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1adc86f1-fef7-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 16:34:38 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by GVXPR04MB9849.eurprd04.prod.outlook.com (2603:10a6:150:112::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 14:34:34 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 14:34:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1adc86f1-fef7-11ed-8611-37d641c3527e
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <36eaea12-066a-c470-6a22-bc3c50db741e@suse.com>
Date: Tue, 30 May 2023 16:34:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 4/4] x86/cpu-policy: Derive {,R}RSBA for guest policies
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230526110656.4018711-1-andrew.cooper3@citrix.com>
 <20230526110656.4018711-6-andrew.cooper3@citrix.com>
 <8d31e4a3-3f3c-f7a2-8d7d-0b0febf17f65@suse.com>
 <db0a6132-4ee0-51d2-08b9-88a286bc4b14@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <db0a6132-4ee0-51d2-08b9-88a286bc4b14@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR3P281CA0197.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a5::7) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?bjdHcHVqRG15ZVVnTGcyaEx2eFhUT1VNdkhHNnVtMjJoWSttWVBFRDRxdUZP?=
 =?utf-8?B?azEwWnowaEpWS3BGM08xMzlsUkVLNGVFd0pXUzg5REJ1VE9VRzd6NWRjR2cv?=
 =?utf-8?B?SWVhOHN4dHdNcVNQOHhIU2lKS2lxdUV2TEFTa1A2Uk1yQmxJbVRjSmQ0ODh0?=
 =?utf-8?B?c294U3FHNWtMWHFySjUyeEhCa2twRkU0eGQxeXhQSnQ5cFdJN0hNaS9PVWp6?=
 =?utf-8?B?ejk3cmFXSnFqQTJDR2E1QVVUSWdqbjIrV1dkc0dkd2RtNDEwV1lJMDJ4TEN0?=
 =?utf-8?B?WkI4SFErZ2pKem9KZG81aUpUazZad1VBazRuRDliWGlkNmEvZ2dKZUFocHlB?=
 =?utf-8?B?QW8rSXJpa0JVREdzUktzVEs5dHBFUGxlZURMaE1DRzhueVdSRFMyc1B1RStm?=
 =?utf-8?B?VThsN1FVeVNnK0FsSkk1SklYRzRhdWJvek5GdFBta1FaalYxbmpwTG9CZzZI?=
 =?utf-8?B?YmNSREJ2cWhoRzFuSnl3aFBhZkhuKzUvWHFHeEtQckdSNzJWWnQzUCtMKzhZ?=
 =?utf-8?B?NkY0RHNoS0FQUXNZYUI2U3Z5TWlEOEhBTHZxRGRUeGhIWnhvUzRvVEFiWkFE?=
 =?utf-8?B?NU9PVXpjLzNWY0FJUVQ3V3BsZXJxRUVHa2E5Sk13a2w1b0RyV0JXOWdYZU0y?=
 =?utf-8?B?MGk3K25LNlU0L0lzOFFjSHM0amxPTk1NL09EZEx5YmpoZGF1aElHWGZtaGc2?=
 =?utf-8?B?N2hnMmI0b2ZmWVd5RlQ3MTBvK3VHUWZnQWErZXQ5N3V0YlY0d2k5R01JeHgr?=
 =?utf-8?B?RTBqN1FFNlc4cFBWWEtJbDFvVWg1eDV2b2VzZVQvSktWSERpZHZHZk9mTVRh?=
 =?utf-8?B?QUM5SThGZytoRVlhaXBmN1Y1bFAwZ1dTRjZxTytnckM2WE0xS0doUUlJMVpH?=
 =?utf-8?B?VStyc0oxUzdXa3VKV0RKem5LakE0S2FNOTJic1AvQlFNVTY4Y1lRSU5lbUNU?=
 =?utf-8?B?R0hOL0czem15bEk0Zm9SdzRrK01WM2ZPaGV4bWs1N2xqRGh2NndqV0pCS0t1?=
 =?utf-8?B?ak1xMUZMaU1ETS9zaEsyai9kUGtVRFVZUlp4bWpDQUZVSHFFZzVEZkFZdWwv?=
 =?utf-8?B?K1dWVnpVT3Avem5qZ1Y3WFgyZnliV09ObnF3MFJRMmNNTEx5QysxemhaN0NW?=
 =?utf-8?B?czJya2YxTnVmeTNzZHFjZDNwUzBXWVd5MVdoUG9qd1RQekhhem85dXBTRmtC?=
 =?utf-8?B?OVZvQTBhYXEwTFFLZWNwNkF4T0VqVHlPZGMvSW9jb25DUWh3cGplMWhZZldP?=
 =?utf-8?B?Szk4T3ZBVWZ1STc1STZCbi9hQ2NrUkp2dDM2cGVGeFkyQUdvM1Z3NWFEM21k?=
 =?utf-8?B?TXNrWnV5RXdlRGR5NkMrTVZtMGMvb0sxRjBlZjZQOUpKUkJaaC83aWpOTWxK?=
 =?utf-8?B?NjFabkk0Z3QrcWdlemlGNTd4ckVjcXRYVUx6cS9mN2JHRzZ3cUtETGZHRjc3?=
 =?utf-8?B?V0IzVURHM2hqWEhMbC9hTktmOHFvR0tMVElQTDEvTUFLZ2FuNXBnN3UwcWNz?=
 =?utf-8?B?UGZjUXBoekJYbDVMa3RSSzluRXp6b01XWERNRzM1UkM0czNHcnpreEhGRzRV?=
 =?utf-8?B?Q0ZJK3IrRmdNbWpnZ2E5YjJwOTNrMmRYWldzdjY1RDMydExjMm54cHpDRTg1?=
 =?utf-8?B?TWJUaGZPRHNBSVNWZmp6aU1ndnVZQlBqc2l6M1NmQy94anZUc255eHBmNWVv?=
 =?utf-8?B?U3U0YS9RNTdpSU10cEJNSzUvSVpyRVRRVmtlWUwzUDZ4RjNhNXNEZDNtQzQv?=
 =?utf-8?B?ZnlDaWxFMkdpR1hlV25ESkhKb3I4UUZqQjN1TmlKWmllU1FKQzlieDlWQVVn?=
 =?utf-8?B?TjlGZk9jbGM4VjlITUFQWUVpZkFMSlNmb01OeG5acUVHeC9MTlJKaHp0NzFM?=
 =?utf-8?B?dUd0emNub1NBRnBja1M2VTNvKy9vOU1LR1FiNHpTTFUwcnR3RVQyNmJUQlc5?=
 =?utf-8?B?MFQ2blptRUx2Nm9Td2dIV2VDUW9NOEErSGNCUnFYWWJOS1VKSTUxczRiMitZ?=
 =?utf-8?B?djVHUmt2R2lOaTJNMkFpdGJ1TFFxcVduT0hhOEpIeGczUFVkcDlpdUoyM1ZW?=
 =?utf-8?B?VUNoSTNBV0taU0hCWGJZTVBSUDh4SUlBVmk5UXpZYTN4dTNhOUFrQWZYcUFK?=
 =?utf-8?Q?4po8CGs4GpH/+fdXdUJVdYmYW?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d513cd20-5b7e-402c-27e4-08db611afcb5
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 14:34:34.1350
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qRYihnEJh09KBRxBzkpD+xFgLdXoB7Xjh1vCo5eF91n4nFG8UhoCjpFwJMitAoeXb00j2vHHtlmeFf25ZBaZkQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR04MB9849

On 30.05.2023 15:25, Andrew Cooper wrote:
> On 30/05/2023 10:40 am, Jan Beulich wrote:
>> On 26.05.2023 13:06, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/cpu-policy.c
>>> +++ b/xen/arch/x86/cpu-policy.c
>>> @@ -423,8 +423,14 @@ static void __init guest_common_max_feature_adjustments(uint32_t *fs)
>>>           * Retpoline not safe)", so these need to be visible to a guest in all
>>>           * cases, even when it's only some other server in the pool which
>>>           * suffers the identified behaviour.
>>> +         *
>>> +         * We can always run any VM which has previously (or will
>>> +         * subsequently) run on hardware where Retpoline is not safe.  Note:
>>> +         * The dependency logic may hide RRSBA for other reasons.
>>>           */
>>>          __set_bit(X86_FEATURE_ARCH_CAPS, fs);
>>> +        __set_bit(X86_FEATURE_RSBA, fs);
>>> +        __set_bit(X86_FEATURE_RRSBA, fs);
>>>      }
>>>  }
>> Similar question to what I raised before: Can't this lead to both bits being
>> set, which according to descriptions earlier in the series and elsewhere
>> ought to not be possible?
> 
> In this case, no, because the max values are fully discarded when
> establishing the default policy.
> 
> Remember this value is used to decide whether an incoming VM can be
> accepted.  It doesn't reflect a sensible configuration to run.

Right. I should have dropped the question when seeing the "default"
counterpart's behavior.

> Whether or not both values ought to be visible is the subject of the
> outstanding question.

Pending the answer there (and whatever easy adjustment it may require)
Reviewed-by: Jan Beulich <jbeulich@suse.com>

>>> --- a/xen/tools/gen-cpuid.py
>>> +++ b/xen/tools/gen-cpuid.py
>>> @@ -318,7 +318,7 @@ def crunch_numbers(state):
>>>          # IBRSB/IBRS, and we pass this MSR directly to guests.  Treating them
>>>          # as dependent features simplifies Xen's logic, and prevents the guest
>>>          # from seeing implausible configurations.
>>> -        IBRSB: [STIBP, SSBD, INTEL_PSFD],
>>> +        IBRSB: [STIBP, SSBD, INTEL_PSFD, EIBRS],
>> Is this really an architecturally established dependency? From an abstract
>> pov having just eIBRS ought to be enough of an indicator?
> 
> This is the same as asking "can we hide AVX512F but expose AVX512_*"...
> 
>> And hence it would
>> be wrong to hide it just because IBRSB isn't also set.
> 
> EIBRS means "you should set MSR_SPEC_CTRL.IBRS once at the start of day
> and leave it set", which to me firmly states a dependency.

Hmm, yes, now that you put it that way, I agree.

Jan
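[Editorial sketch: the dependency handling agreed on above can be modeled as a transitive closure over a feature-dependency map. This is a hypothetical minimal model in the spirit of xen/tools/gen-cpuid.py, not Xen's actual code; the DEPS entries are illustrative (the IBRSB row mirrors the quoted hunk, the EIBRS -> RRSBA row is an assumption based on the discussion).]

```python
# Hypothetical model of dependency-driven feature masking; the DEPS map
# is illustrative only -- Xen's real table lives in xen/tools/gen-cpuid.py.
DEPS = {
    "IBRSB": ["STIBP", "SSBD", "INTEL_PSFD", "EIBRS"],  # per the quoted hunk
    "EIBRS": ["RRSBA"],  # assumption: RRSBA only meaningful with eIBRS
}

def mask_dependents(visible, deps):
    """Drop any feature whose parent is hidden, repeating until stable,
    so a guest never sees an implausible configuration."""
    result = set(visible)
    changed = True
    while changed:
        changed = False
        for parent, children in deps.items():
            if parent in result:
                continue
            for child in children:
                if child in result:
                    result.discard(child)
                    changed = True
    return result
```

With IBRSB hidden, STIBP/SSBD/EIBRS are masked, and RRSBA then follows EIBRS, which is the cascading behaviour the "dependent features" comment describes.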


From xen-devel-bounces@lists.xenproject.org Tue May 30 14:56:16 2023
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>,
	"x86@kernel.org" <x86@kernel.org>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Peter Jones <pjones@redhat.com>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] iscsi_ibft: Fix finding the iBFT under Xen Dom 0
Date: Tue, 30 May 2023 14:55:49 +0000
Message-ID:
 <DM6PR03MB53725BBA63A682EB14050FC9F04B9@DM6PR03MB5372.namprd03.prod.outlook.com>
References: <20230524160558.3686226-1-ross.lagerwall@citrix.com>
 <6a2e6371-97a5-397d-ca1a-247b610b49d3@suse.com>
In-Reply-To: <6a2e6371-97a5-397d-ca1a-247b610b49d3@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Thursday, May 25, 2023 10:31 AM
> To: Ross Lagerwall <ross.lagerwall@citrix.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Ingo Molnar <mingo@redhat.com>; Borislav Petkov <bp@alien8.de>; Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org <x86@kernel.org>; Juergen Gross <jgross@suse.com>; Boris Ostrovsky <boris.ostrovsky@oracle.com>; Peter Jones <pjones@redhat.com>; Konrad Rzeszutek Wilk <konrad@kernel.org>; linux-kernel@vger.kernel.org <linux-kernel@vger.kernel.org>; xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH] iscsi_ibft: Fix finding the iBFT under Xen Dom 0
>
> On 24.05.2023 18:05, Ross Lagerwall wrote:
> > --- a/arch/x86/xen/setup.c
> > +++ b/arch/x86/xen/setup.c
> > @@ -772,8 +772,14 @@ char * __init xen_memory_setup(void)
> >	 * UNUSABLE regions in domUs are not handled and will need
> >	 * a patch in the future.
> >	 */
>
> I think this comment now wants to move ...
>
> > -	if (xen_initial_domain())
> > +	if (xen_initial_domain()) {
>
> ... here. And then likely you want a blank line ...
>
> >		xen_ignore_unusable();
>
> ... here.

OK

>
> > +		/* Reserve 0.5 MiB to 1 MiB region so iBFT can be found */
> > +		xen_e820_table.entries[xen_e820_table.nr_entries].addr = 0x80000;
> > +		xen_e820_table.entries[xen_e820_table.nr_entries].size = 0x80000;
> > +		xen_e820_table.entries[xen_e820_table.nr_entries].type = E820_TYPE_RESERVED;
> > +		xen_e820_table.nr_entries++;
>
> Surely this can be omitted when !CONFIG_ISCSI_IBFT_FIND?
>

Yes, good point. I will fix that.

Thanks,
Ross
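[Editorial sketch: the change Ross agrees to make above can be modeled in a few lines. This is a toy Python model of the e820 table append; the dict layout and the boolean standing in for the CONFIG_ISCSI_IBFT_FIND Kconfig option are invented for illustration, not kernel code.]

```python
E820_TYPE_RESERVED = 2          # matches the kernel's E820 type numbering
CONFIG_ISCSI_IBFT_FIND = True   # stand-in for the compile-time Kconfig option

def reserve_ibft_window(entries):
    """Append a reserved entry covering 0x80000-0xFFFFF (0.5 MiB to 1 MiB)
    so the iBFT search window gets identity-mapped, but only when the
    iBFT-finding code is built in."""
    if CONFIG_ISCSI_IBFT_FIND:
        entries.append({"addr": 0x80000, "size": 0x80000,
                        "type": E820_TYPE_RESERVED})
    return entries
```

The entry spans exactly the half-megabyte window below 1 MiB where the iBFT may live, and skipping it when the option is off avoids wasting a reserved region nobody will scan.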


From xen-devel-bounces@lists.xenproject.org Tue May 30 15:01:33 2023
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: <linux-kernel@vger.kernel.org>, <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen
	<dave.hansen@linux.intel.com>, <x86@kernel.org>, Juergen Gross
	<jgross@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Peter Jones
	<pjones@redhat.com>, Konrad Rzeszutek Wilk <konrad@kernel.org>, "Ross
 Lagerwall" <ross.lagerwall@citrix.com>
Subject: [PATCH v2] iscsi_ibft: Fix finding the iBFT under Xen Dom 0
Date: Tue, 30 May 2023 16:01:06 +0100
Message-ID: <20230530150106.2703849-1-ross.lagerwall@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Since firmware doesn't indicate the iBFT in the E820, add a reserved
region so that it gets identity mapped when running as Dom 0 so that it
is possible to search for it. Move the call to reserve_ibft_region()
later so that it is called after the Xen identity mapping adjustments
are applied.

Finally, instead of using isa_bus_to_virt() which doesn't do the right
thing under Xen, use early_memremap() like the dmi_scan code does.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
---

In v2:
* Fix style issue
* Make added e820 entry conditional on ISCSI_IBFT_FIND

 arch/x86/kernel/setup.c            |  2 +-
 arch/x86/xen/setup.c               | 27 ++++++++++++++++++---------
 drivers/firmware/iscsi_ibft_find.c | 24 +++++++++++++++++-------
 3 files changed, 36 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 16babff771bd..616b80507abd 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -796,7 +796,6 @@ static void __init early_reserve_memory(void)
 
 	memblock_x86_reserve_range_setup_data();
 
-	reserve_ibft_region();
 	reserve_bios_regions();
 	trim_snb_memory();
 }
@@ -1032,6 +1031,7 @@ void __init setup_arch(char **cmdline_p)
 	if (efi_enabled(EFI_BOOT))
 		efi_init();
 
+	reserve_ibft_region();
 	dmi_setup();
 
 	/*
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index c2be3efb2ba0..07c7039bdd66 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -764,17 +764,26 @@ char * __init xen_memory_setup(void)
 	BUG_ON(memmap.nr_entries == 0);
 	xen_e820_table.nr_entries = memmap.nr_entries;
 
-	/*
-	 * Xen won't allow a 1:1 mapping to be created to UNUSABLE
-	 * regions, so if we're using the machine memory map leave the
-	 * region as RAM as it is in the pseudo-physical map.
-	 *
-	 * UNUSABLE regions in domUs are not handled and will need
-	 * a patch in the future.
-	 */
-	if (xen_initial_domain())
+	if (xen_initial_domain()) {
+		/*
+		 * Xen won't allow a 1:1 mapping to be created to UNUSABLE
+		 * regions, so if we're using the machine memory map leave the
+		 * region as RAM as it is in the pseudo-physical map.
+		 *
+		 * UNUSABLE regions in domUs are not handled and will need
+		 * a patch in the future.
+		 */
 		xen_ignore_unusable();
 
+#ifdef CONFIG_ISCSI_IBFT_FIND
+		/* Reserve 0.5 MiB to 1 MiB region so iBFT can be found */
+		xen_e820_table.entries[xen_e820_table.nr_entries].addr = 0x80000;
+		xen_e820_table.entries[xen_e820_table.nr_entries].size = 0x80000;
+		xen_e820_table.entries[xen_e820_table.nr_entries].type = E820_TYPE_RESERVED;
+		xen_e820_table.nr_entries++;
+#endif
+	}
+
 	/* Make sure the Xen-supplied memory map is well-ordered. */
 	e820__update_table(&xen_e820_table);
 
diff --git a/drivers/firmware/iscsi_ibft_find.c b/drivers/firmware/iscsi_ibft_find.c
index 94b49ccd23ac..e3c1449987dd 100644
--- a/drivers/firmware/iscsi_ibft_find.c
+++ b/drivers/firmware/iscsi_ibft_find.c
@@ -52,9 +52,9 @@ static const struct {
  */
 void __init reserve_ibft_region(void)
 {
-	unsigned long pos;
+	unsigned long pos, virt_pos = 0;
 	unsigned int len = 0;
-	void *virt;
+	void *virt = NULL;
 	int i;
 
 	ibft_phys_addr = 0;
@@ -70,13 +70,20 @@ void __init reserve_ibft_region(void)
 		 * so skip that area */
 		if (pos == VGA_MEM)
 			pos += VGA_SIZE;
-		virt = isa_bus_to_virt(pos);
+
+		/* Map page by page */
+		if (offset_in_page(pos) == 0) {
+			if (virt)
+				early_memunmap(virt, PAGE_SIZE);
+			virt = early_memremap_ro(pos, PAGE_SIZE);
+			virt_pos = pos;
+		}
 
 		for (i = 0; i < ARRAY_SIZE(ibft_signs); i++) {
-			if (memcmp(virt, ibft_signs[i].sign, IBFT_SIGN_LEN) ==
-			    0) {
+			if (memcmp(virt + (pos - virt_pos), ibft_signs[i].sign,
+				   IBFT_SIGN_LEN) == 0) {
 				unsigned long *addr =
-				    (unsigned long *)isa_bus_to_virt(pos + 4);
+				    (unsigned long *)(virt + pos - virt_pos + 4);
 				len = *addr;
 				/* if the length of the table extends past 1M,
 				 * the table cannot be valid. */
@@ -84,9 +91,12 @@ void __init reserve_ibft_region(void)
 					ibft_phys_addr = pos;
 					memblock_reserve(ibft_phys_addr, PAGE_ALIGN(len));
 					pr_info("iBFT found at %pa.\n", &ibft_phys_addr);
-					return;
+					goto out;
 				}
 			}
 		}
 	}
+
+out:
+	early_memunmap(virt, PAGE_SIZE);
 }
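For readers following the hunk above: the scan walks low memory in 16-byte steps, skips the legacy VGA hole, and probes for a table signature at each position. A minimal, self-contained Python sketch of that search logic (a synthetic byte buffer stands in for physical memory below 1 MiB; this is an illustration, not the kernel code):

```python
# Illustrative sketch of the reserve_ibft_region() scan: walk 512K..1M in
# 16-byte steps, skip the VGA range, and probe for an iBFT signature.
import struct

IBFT_START = 0x80000               # 512 KiB
IBFT_END = 0x100000                # 1 MiB
VGA_MEM = 0xA0000
VGA_SIZE = 0x20000
IBFT_SIGNS = (b"iBFT", b"BIFT")    # "BIFT" is the Broadcom variant

def find_ibft(mem: bytes):
    """Return (phys_addr, length) of the first valid-looking table, or None."""
    pos = IBFT_START
    while pos < IBFT_END:
        if pos == VGA_MEM:         # legacy VGA range is not scanned
            pos += VGA_SIZE
        for sign in IBFT_SIGNS:
            if mem[pos:pos + 4] == sign:
                # 32-bit length field follows the 4-byte signature
                length = struct.unpack_from("<I", mem, pos + 4)[0]
                if pos + length <= IBFT_END:   # table must not cross 1 MiB
                    return pos, length
        pos += 16                  # the table is 16-byte aligned
    return None

# Plant a fake 64-byte table and find it again.
mem = bytearray(IBFT_END)
mem[0x90000:0x90004] = b"iBFT"
struct.pack_into("<I", mem, 0x90004, 64)
print(find_ibft(bytes(mem)))       # prints (589824, 64), i.e. 0x90000
```

The page-by-page `early_memremap_ro()`/`early_memunmap()` dance in the real patch exists because early boot cannot map the whole 512 KiB window at once; the search order itself is the same.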
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 15:17:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 15:17:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541222.843739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q415s-00089J-0t; Tue, 30 May 2023 15:17:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541222.843739; Tue, 30 May 2023 15:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q415r-00089C-Sg; Tue, 30 May 2023 15:17:03 +0000
Received: by outflank-mailman (input) for mailman id 541222;
 Tue, 30 May 2023 15:17:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1tCH=BT=citrix.com=prvs=5074c9224=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q415q-000896-9r
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 15:17:02 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 044c3dfd-fefd-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 17:16:59 +0200 (CEST)
Received: from mail-bn8nam11lp2168.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.168])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 May 2023 11:16:00 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SA1PR03MB6529.namprd03.prod.outlook.com (2603:10b6:806:1c4::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.18; Tue, 30 May
 2023 15:15:54 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.020; Tue, 30 May 2023
 15:15:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 044c3dfd-fefd-11ed-8611-37d641c3527e
Date: Tue, 30 May 2023 17:15:48 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v2 1/2] x86: annotate entry points with type and size
Message-ID: <ZHYTJEndEaOj8gh8@Air-de-Roger>
References: <e4bf47ca-2ae6-1fd4-56a6-e4e777150b64@suse.com>
 <db10bc3d-962e-72a7-b53d-93a7ddd7f3ef@suse.com>
 <fd492a4a-11ba-b63a-daf4-99697db0db0e@suse.com>
 <ZHSp9+ouRrXFEY4R@Air-de-Roger>
 <bba057a2-0a68-bf05-9a92-59546b52c73c@suse.com>
 <ZHX4PR56MQZQCVUX@Air-de-Roger>
 <f87cf1cd-61ba-aaf1-dd81-f2352acf4273@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f87cf1cd-61ba-aaf1-dd81-f2352acf4273@suse.com>
X-ClientProxiedBy: LO4P123CA0406.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::15) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0

On Tue, May 30, 2023 at 04:23:21PM +0200, Jan Beulich wrote:
> On 30.05.2023 15:21, Roger Pau Monné wrote:
> > On Tue, May 30, 2023 at 10:06:27AM +0200, Jan Beulich wrote:
> >> On 29.05.2023 15:34, Roger Pau Monné wrote:
> >>> On Tue, May 23, 2023 at 01:30:51PM +0200, Jan Beulich wrote:
> >>>> Note that the FB-label in autogen_stubs() cannot be converted just yet:
> >>>> Such labels cannot be used with .type. We could further diverge from
> >>>> Linux'es model and avoid setting STT_NOTYPE explicitly (that's the type
> >>>> labels get by default anyway).
> >>>>
> >>>> Note that we can't use ALIGN() (in place of SYM_ALIGN()) as long as we
> >>>> still have ALIGN.
> >>>
> >>> FWIW, as I'm looking into using the newly added macros in order to add
> >>> annotations suitable for live-patching, I would need to switch some of
> >>> the LABEL usages into it's own functions, as it's not possible to
> >>> livepatch a function that has labels jumped into from code paths
> >>> outside of the function.
> >>
> >> Hmm, I'm not sure what the best way is to overcome that restriction. I'm
> >> not convinced we want to arbitrarily name things "functions".
> > 
> > Any external entry point in the middle of a function-like block will
> > prevent it from being live patched.
> 
> Is there actually any particular reason for this restriction? As long
> as old and new code has the same external entry points, redirecting
> all old ones to their new counterparts would seem feasible.

Yes, that was another option: we could force asm patching to always be
done with a jump (instead of in-place) and then add jumps at the old
entry point addresses in order to redirect to the new addresses.

Or assert that the addresses of any symbols inside the function are not
changed in order to do in-place replacement of code.

> > If you want I can try to do a pass on top of your patch and see how
> > that would end up looking.  I'm attempting to think about other
> > solutions, but every other solution seems quite horrible.
> 
> Right, but splitting functions into piecemeal fragments isn't going
> to be very nice either.

I'm not sure how much splitting would be required TBH.

> >>>> +
> >>>> +#define FUNC(name, algn...) \
> >>>> +        SYM(name, FUNC, GLOBAL, LAST(16, ## algn), 0x90)
> >>>
> >>> A rant: should the alignment of functions use a different padding
> >>> (i.e. ret or ud2?), in order to prevent stray jumps landing in the
> >>> padding and falling through into the next function?  That would also
> >>> prevent the implicit fall-through used in some places.
> >>
> >> Yes, but that's a separate topic (for which iirc patches are pending
> >> as well, just of course not integrated with the work here).  There's
> >> the slight risk of overlooking some "fall-through" case ...
> > 
> > Oh, OK, I wasn't aware patches were already floating for this; I just
> > came across it while reviewing.
> 
> Well, those don't cover padding yet, but they deal with straight-line
> speculation past RET or JMP.

Introducing the helpers does make it easy to convert the padding for
all the existing users at least.

> >>>>          sti
> >>>>          call  do_softirq
> >>>>          jmp   compat_test_all_events
> >>>>  
> >>>> -        ALIGN
> >>>>  /* %rbx: struct vcpu, %rdx: struct trap_bounce */
> >>>> -.Lcompat_process_trapbounce:
> >>>> +LABEL_LOCAL(.Lcompat_process_trapbounce)
> >>>
> >>> It's my understanding that here the '.L' prefix is pointless, since
> >>> LABEL_LOCAL() will forcefully create a symbol for the label due to the
> >>> usage of .type?
> >>
> >> I don't think .type has this effect. There's certainly no such label in
> >> the symbol table of the object file I have as a result.
> > 
> > I was expecting .type to force the creation of a symbol, so the '.L'
> > prefix does prevent the symbol from being created even if .type is
> > specified.
> > 
> > Shouldn't the assembler complain that we are attempting to set a type
> > for a symbol that isn't present?
> 
> But .L symbols are still normal symbols to gas, just that it knows to not
> emit them to the symbol table (unless there's a need, e.g. through a use
> in a relocation that cannot be expressed as section-relative one). It
> could flag the pointless use, but then it may get this wrong if in the
> end the symbol does need emitting.

Thanks for the explanation.

Roger.
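Jan's point about `.L`-prefixed symbols is easy to verify with the toolchain directly. A minimal demonstration (assumes binutils/gcc are installed on an x86 ELF target; the file and label names are made up for the example): a `.L`-prefixed label gets no symbol-table entry even when `.type` is applied to it, while a plain label does.

```shell
# Assemble two labels, one plain and one .L-prefixed, both with .type set.
cat > labels.S <<'EOF'
    .text
    .type visible_label, @function
visible_label:
    ret
    .type .Lhidden_label, @function
.Lhidden_label:
    ret
EOF
cc -c labels.S -o labels.o
nm labels.o    # lists visible_label only; .Lhidden_label is absent
```

This matches the explanation above: gas treats `.L` names as ordinary symbols internally and simply omits them from the emitted symbol table unless a relocation forces them out.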


From xen-devel-bounces@lists.xenproject.org Tue May 30 15:28:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 15:28:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541227.843751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41H4-0001Ga-54; Tue, 30 May 2023 15:28:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541227.843751; Tue, 30 May 2023 15:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41H4-0001GT-05; Tue, 30 May 2023 15:28:38 +0000
Received: by outflank-mailman (input) for mailman id 541227;
 Tue, 30 May 2023 15:28:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xdt7=BT=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1q41H2-0001GN-3v
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 15:28:36 +0000
Received: from mail.skyhub.de (mail.skyhub.de [2a01:4f8:190:11c2::b:1457])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a1bf72f8-fefe-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 17:28:31 +0200 (CEST)
Received: from zn.tnic (pd9530d32.dip0.t-ipconnect.de [217.83.13.50])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 17E8F1EC0411;
 Tue, 30 May 2023 17:28:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1bf72f8-fefe-11ed-8611-37d641c3527e
Date: Tue, 30 May 2023 17:28:25 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
	mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
 <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>

On Mon, May 22, 2023 at 04:17:50PM +0200, Juergen Gross wrote:
> The attached diff is for patch 13.

Merged and pushed out into same branch.

Next issue. Diffing /proc/mtrr shows:

--- proc-mtrr.6.3	2023-05-30 17:00:13.215999483 +0200
+++ proc-mtrr.after	2023-05-30 16:01:38.281997816 +0200
@@ -1,8 +1,8 @@
 reg00: base=0x000000000 (    0MB), size= 2048MB, count=1: write-back
-reg01: base=0x080000000 ( 2048MB), size=  512MB, count=1: write-back
+reg01: base=0x080000000 ( 2048MB), size= 1024MB, count=1: write-back
 reg02: base=0x0a0000000 ( 2560MB), size=  256MB, count=1: write-back
 reg03: base=0x0ae000000 ( 2784MB), size=   32MB, count=1: uncachable
-reg04: base=0x100000000 ( 4096MB), size= 4096MB, count=1: write-back
+reg04: base=0x100000000 ( 4096MB), size=  256MB, count=1: write-back
 reg05: base=0x200000000 ( 8192MB), size= 8192MB, count=1: write-back
 reg06: base=0x400000000 (16384MB), size= 1024MB, count=1: write-back
 reg07: base=0x440000000 (17408MB), size=  256MB, count=1: write-back
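When comparing such dumps by script rather than by eye, the `/proc/mtrr` line format quoted above is straightforward to parse. A small illustrative Python helper (not part of the patch series; the regex is written against exactly the lines shown here):

```python
# Parse /proc/mtrr lines like those diffed above into
# (reg, base, size_bytes, type) tuples for scripted before/after comparison.
import re

MTRR_RE = re.compile(
    r"reg(?P<reg>\d+): base=(?P<base>0x[0-9a-f]+) \(\s*\d+MB\), "
    r"size=\s*(?P<size>\d+)MB, count=\d+: (?P<type>[\w-]+)"
)

def parse_mtrr(text: str):
    out = []
    for line in text.splitlines():
        m = MTRR_RE.match(line.strip())
        if m:
            out.append((int(m["reg"]), int(m["base"], 16),
                        int(m["size"]) * (1 << 20), m["type"]))
    return out

sample = """\
reg00: base=0x000000000 (    0MB), size= 2048MB, count=1: write-back
reg04: base=0x100000000 ( 4096MB), size= 4096MB, count=1: write-back
"""
for reg, base, size, mtype in parse_mtrr(sample):
    print(reg, hex(base), size >> 20, mtype)
```

Diffing the parsed tuples (rather than raw text) makes a size regression like reg04 shrinking from 4096MB to 256MB stand out regardless of whitespace changes.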

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Tue May 30 15:29:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 15:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541231.843760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41He-0001oT-FZ; Tue, 30 May 2023 15:29:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541231.843760; Tue, 30 May 2023 15:29:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41He-0001oM-Bg; Tue, 30 May 2023 15:29:14 +0000
Received: by outflank-mailman (input) for mailman id 541231;
 Tue, 30 May 2023 15:29:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q41Hd-0001YI-Gu
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 15:29:13 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0629.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ba59954d-fefe-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 17:29:12 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8830.eurprd04.prod.outlook.com (2603:10a6:102:20d::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 15:29:09 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 15:29:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba59954d-fefe-11ed-b231-6b7b168915f2
Message-ID: <fd113adc-6d66-5a4f-78ee-766c197ced93@suse.com>
Date: Tue, 30 May 2023 17:29:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] x86/vPIT: account for "counter stopped" time
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0157.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8830:EE_
X-MS-Office365-Filtering-Correlation-Id: 71d4e9db-af78-4222-db70-08db61229c51
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 71d4e9db-af78-4222-db70-08db61229c51
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 15:29:08.4444
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Xm5xXZBBWNfIFHexlwj/Y1hzfIrZe1EhDDoPHVgIJQ54tEvE4HYAMiMrjJ7ucQmG2njqF8TBdOI+o1IJgW0Xcg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8830

This addresses an observation made while putting together "x86: detect
PIT aliasing on ports other than 0x4[0-3]".

1: re-order functions
2: account for "counter stopped" time

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 15:30:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 15:30:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541236.843770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41IW-0003DK-PJ; Tue, 30 May 2023 15:30:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541236.843770; Tue, 30 May 2023 15:30:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41IW-0003DD-Lt; Tue, 30 May 2023 15:30:08 +0000
Received: by outflank-mailman (input) for mailman id 541236;
 Tue, 30 May 2023 15:30:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q41IW-00038g-07
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 15:30:08 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on060a.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::60a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id da4be508-fefe-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 17:30:06 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PAXPR04MB8830.eurprd04.prod.outlook.com (2603:10a6:102:20d::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 15:30:04 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 15:30:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da4be508-fefe-11ed-8611-37d641c3527e
Message-ID: <b6cbf871-53a6-15ee-99d5-0ad2ab9c8b80@suse.com>
Date: Tue, 30 May 2023 17:30:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: [PATCH 1/2] x86/vPIT: re-order functions
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <fd113adc-6d66-5a4f-78ee-766c197ced93@suse.com>
In-Reply-To: <fd113adc-6d66-5a4f-78ee-766c197ced93@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0160.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a2::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|PAXPR04MB8830:EE_
X-MS-Office365-Filtering-Correlation-Id: 1036e4d9-9588-4803-ddcf-08db6122bd90
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1036e4d9-9588-4803-ddcf-08db6122bd90
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 15:30:04.1710
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: D740qdPzNjVufyk4Q7plvA6Pk0Ptp8tltmIr8v+dK+aPxIXnzZ8/61L7ZXG5KUYvRGRMSYQKkKOeWNfhkq2ISA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR04MB8830

To avoid the need for a forward declaration of pit_load_count() in a
subsequent change, move it earlier in the file (along with its helper
callback).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
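
The motivation can be illustrated with a minimal, hypothetical C fragment
(names are illustrative only, not from the Xen tree): a static function
called before its definition needs a forward declaration, which moving
the definition earlier in the file makes unnecessary.

```c
/* Hypothetical sketch: why definition order matters for static functions. */

/* Without this forward declaration, use_helper() below would not compile,
 * since helper() is defined only later in the file. Moving helper()'s
 * definition above use_helper() would make the declaration unnecessary --
 * the point of this kind of re-ordering. */
static int helper(int x);

static int use_helper(int x)
{
    return helper(x) + 1;
}

static int helper(int x)
{
    return x * 2;
}
```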

--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -87,6 +87,57 @@ static int pit_get_count(PITState *pit,
     return counter;
 }
 
+static void cf_check pit_time_fired(struct vcpu *v, void *priv)
+{
+    uint64_t *count_load_time = priv;
+    TRACE_0D(TRC_HVM_EMUL_PIT_TIMER_CB);
+    *count_load_time = get_guest_time(v);
+}
+
+static void pit_load_count(PITState *pit, int channel, int val)
+{
+    u32 period;
+    struct hvm_hw_pit_channel *s = &pit->hw.channels[channel];
+    struct vcpu *v = vpit_vcpu(pit);
+
+    ASSERT(spin_is_locked(&pit->lock));
+
+    if ( val == 0 )
+        val = 0x10000;
+
+    if ( v == NULL )
+        pit->count_load_time[channel] = 0;
+    else
+        pit->count_load_time[channel] = get_guest_time(v);
+    s->count = val;
+    period = DIV_ROUND(val * SYSTEM_TIME_HZ, PIT_FREQ);
+
+    if ( (v == NULL) || !is_hvm_vcpu(v) || (channel != 0) )
+        return;
+
+    switch ( s->mode )
+    {
+    case 2:
+    case 3:
+        /* Periodic timer. */
+        TRACE_2D(TRC_HVM_EMUL_PIT_START_TIMER, period, period);
+        create_periodic_time(v, &pit->pt0, period, period, 0, pit_time_fired,
+                             &pit->count_load_time[channel], false);
+        break;
+    case 1:
+    case 4:
+        /* One-shot timer. */
+        TRACE_2D(TRC_HVM_EMUL_PIT_START_TIMER, period, 0);
+        create_periodic_time(v, &pit->pt0, period, 0, 0, pit_time_fired,
+                             &pit->count_load_time[channel], false);
+        break;
+    default:
+        TRACE_0D(TRC_HVM_EMUL_PIT_STOP_TIMER);
+        destroy_periodic_time(&pit->pt0);
+        break;
+    }
+}
+
 static int pit_get_out(PITState *pit, int channel)
 {
     struct hvm_hw_pit_channel *s = &pit->hw.channels[channel];
@@ -156,57 +207,6 @@ static int pit_get_gate(PITState *pit, i
     return pit->hw.channels[channel].gate;
 }
 
-static void cf_check pit_time_fired(struct vcpu *v, void *priv)
-{
-    uint64_t *count_load_time = priv;
-    TRACE_0D(TRC_HVM_EMUL_PIT_TIMER_CB);
-    *count_load_time = get_guest_time(v);
-}
-
-static void pit_load_count(PITState *pit, int channel, int val)
-{
-    u32 period;
-    struct hvm_hw_pit_channel *s = &pit->hw.channels[channel];
-    struct vcpu *v = vpit_vcpu(pit);
-
-    ASSERT(spin_is_locked(&pit->lock));
-
-    if ( val == 0 )
-        val = 0x10000;
-
-    if ( v == NULL )
-        pit->count_load_time[channel] = 0;
-    else
-        pit->count_load_time[channel] = get_guest_time(v);
-    s->count = val;
-    period = DIV_ROUND(val * SYSTEM_TIME_HZ, PIT_FREQ);
-
-    if ( (v == NULL) || !is_hvm_vcpu(v) || (channel != 0) )
-        return;
-
-    switch ( s->mode )
-    {
-    case 2:
-    case 3:
-        /* Periodic timer. */
-        TRACE_2D(TRC_HVM_EMUL_PIT_START_TIMER, period, period);
-        create_periodic_time(v, &pit->pt0, period, period, 0, pit_time_fired, 
-                             &pit->count_load_time[channel], false);
-        break;
-    case 1:
-    case 4:
-        /* One-shot timer. */
-        TRACE_2D(TRC_HVM_EMUL_PIT_START_TIMER, period, 0);
-        create_periodic_time(v, &pit->pt0, period, 0, 0, pit_time_fired,
-                             &pit->count_load_time[channel], false);
-        break;
-    default:
-        TRACE_0D(TRC_HVM_EMUL_PIT_STOP_TIMER);
-        destroy_periodic_time(&pit->pt0);
-        break;
-    }
-}
-
 static void pit_latch_count(PITState *pit, int channel)
 {
     struct hvm_hw_pit_channel *c = &pit->hw.channels[channel];



From xen-devel-bounces@lists.xenproject.org Tue May 30 15:30:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 15:30:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541239.843780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41JB-0003jA-15; Tue, 30 May 2023 15:30:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541239.843780; Tue, 30 May 2023 15:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41JA-0003j3-UI; Tue, 30 May 2023 15:30:48 +0000
Received: by outflank-mailman (input) for mailman id 541239;
 Tue, 30 May 2023 15:30:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q41J9-0003i8-Uv
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 15:30:47 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on061f.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::61f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f251c4a2-fefe-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 17:30:46 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS4PR04MB9507.eurprd04.prod.outlook.com (2603:10a6:20b:4ca::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 15:30:42 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 15:30:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f251c4a2-fefe-11ed-8611-37d641c3527e
Message-ID: <355c5379-ea9e-582c-0131-816204eb3ace@suse.com>
Date: Tue, 30 May 2023 17:30:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: [PATCH 2/2] x86/vPIT: account for "counter stopped" time
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <fd113adc-6d66-5a4f-78ee-766c197ced93@suse.com>
In-Reply-To: <fd113adc-6d66-5a4f-78ee-766c197ced93@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0104.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a9::15) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS4PR04MB9507:EE_
X-MS-Office365-Filtering-Correlation-Id: 723ce213-4386-4277-ebca-08db6122d42a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 723ce213-4386-4277-ebca-08db6122d42a
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 15:30:42.0772
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: JW10tIzEMhVCB2CSklc+v8fl+bENzBULQELbEmDiq06F0gI2nwJiGCHiswlvR38qEh9GwUtSv0ZDI+qEJTUKpA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR04MB9507

For an approach like that used in "x86: detect PIT aliasing on ports
other than 0x4[0-3]" [1] to work, channel 2 may not (appear to) continue
counting when "gate" is low. Record the time when "gate" goes low, and
adjust pit_get_{count,out}() accordingly. Additionally, for most of the
modes a rising edge of "gate" doesn't mean just "resume counting" but
"initiate counting", i.e. specifically the reloading of the counter with
its initial value.

No special handling for state save/load: See the comment near the end of
pit_load().

[1] https://lists.xen.org/archives/html/xen-devel/2023-05/msg00898.html

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: "gate" can only ever be low for chan2 (with "x86/vPIT: check/bound
     values loaded from state save record" [2] in place), so in
     principle we could get away without a new pair of arrays, using
     just two individual fields instead, at the expense of more special
     casing in the code.

TBD: Should we deal with other aspects of "gate low" in pit_get_out()
     here as well, right away? I was hoping to get away without ...
     (Note how the two functions also disagree in their placement of the
     "default" labels, even if that's largely benign when taking into
     account that modes 6 and 7 are transformed to 2 and 3 respectively
     by pit_load(). A difference would occur only before the guest first
     sets the mode, as pit_reset() sets it to 7.)

Other observations:
- Loading of new counts occurs too early in some of the modes (2/3: at
  end of current sequence or when gate goes high; 1/5: only when gate
  goes high).
- BCD counting doesn't appear to be properly supported either (at least
  that's mentioned in the public header).

[2] https://lists.xen.org/archives/html/xen-devel/2023-05/msg00887.html
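
As a rough model of the accounting introduced here (a simplified sketch,
not the patch itself -- chan_t, elapsed() and set_gate() are invented
names, and guest time is reduced to a plain tick count), the idea is to
freeze the observable elapsed time while "gate" is low and to accumulate
the stopped duration so it can be subtracted once counting resumes:

```c
#include <stdint.h>

/* Hypothetical, simplified model of the "counter stopped" accounting.
 * Times are abstract ticks; in the hypervisor they would be guest time. */
typedef struct {
    uint64_t load_time;    /* when the count was last (re)loaded */
    uint64_t stop_time;    /* when "gate" last went low */
    uint64_t stopped_time; /* total time accumulated with "gate" low */
    int gate;
} chan_t;

/* Elapsed counting time: frozen at stop_time while the gate is low,
 * with previously accumulated stopped time subtracted. */
static uint64_t elapsed(const chan_t *c, uint64_t now)
{
    uint64_t d = c->gate ? now : c->stop_time;

    return d - c->load_time - c->stopped_time;
}

static void set_gate(chan_t *c, int val, uint64_t now)
{
    if ( c->gate > val )          /* falling edge: stop counting */
        c->stop_time = now;
    if ( c->gate < val )          /* rising edge: resume counting */
        c->stopped_time += now - c->stop_time;
    c->gate = val;
}
```

Note that the actual change re-initiates counting on the rising edge for
most modes (via pit_load_count()); the accumulate-and-subtract path above
corresponds only to the modes where the gate merely gates counting.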

--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -65,7 +65,10 @@ static int pit_get_count(PITState *pit,
 
     ASSERT(spin_is_locked(&pit->lock));
 
-    d = muldiv64(get_guest_time(v) - pit->count_load_time[channel],
+    d = pit->hw.channels[channel].gate || (c->mode & 3) == 1
+        ? get_guest_time(v) - pit->count_load_time[channel]
+        : pit->count_stop_time[channel];
+    d = muldiv64(d - pit->stopped_time[channel],
                  PIT_FREQ, SYSTEM_TIME_HZ);
 
     switch ( c->mode )
@@ -109,6 +112,7 @@ static void pit_load_count(PITState *pit
         pit->count_load_time[channel] = 0;
     else
         pit->count_load_time[channel] = get_guest_time(v);
+    pit->stopped_time[channel] = 0;
     s->count = val;
     period = DIV_ROUND(val * SYSTEM_TIME_HZ, PIT_FREQ);
 
@@ -147,7 +151,10 @@ static int pit_get_out(PITState *pit, in
 
     ASSERT(spin_is_locked(&pit->lock));
 
-    d = muldiv64(get_guest_time(v) - pit->count_load_time[channel], 
+    d = pit->hw.channels[channel].gate || (s->mode & 3) == 1
+        ? get_guest_time(v) - pit->count_load_time[channel]
+        : pit->count_stop_time[channel];
+    d = muldiv64(d - pit->stopped_time[channel],
                  PIT_FREQ, SYSTEM_TIME_HZ);
 
     switch ( s->mode )
@@ -181,22 +188,39 @@ static void pit_set_gate(PITState *pit,
 
     ASSERT(spin_is_locked(&pit->lock));
 
-    switch ( s->mode )
-    {
-    default:
-    case 0:
-    case 4:
-        /* XXX: just disable/enable counting */
-        break;
-    case 1:
-    case 5:
-    case 2:
-    case 3:
-        /* Restart counting on rising edge. */
-        if ( s->gate < val )
-            pit->count_load_time[channel] = get_guest_time(v);
-        break;
-    }
+    if ( s->gate > val )
+        switch ( s->mode )
+        {
+        case 0:
+        case 2:
+        case 3:
+        case 4:
+            /* Disable counting. */
+            if ( !channel )
+                destroy_periodic_time(&pit->pt0);
+            pit->count_stop_time[channel] = get_guest_time(v);
+            break;
+        }
+
+    if ( s->gate < val )
+        switch ( s->mode )
+        {
+        default:
+        case 0:
+        case 4:
+            /* Enable counting. */
+            pit->stopped_time[channel] += get_guest_time(v) -
+                                          pit->count_stop_time[channel];
+            break;
+
+        case 1:
+        case 5:
+        case 2:
+        case 3:
+            /* Initiate counting on rising edge. */
+            pit_load_count(pit, channel, pit->hw.channels[channel].count);
+            break;
+        }
 
     s->gate = val;
 }
--- a/xen/arch/x86/include/asm/hvm/vpt.h
+++ b/xen/arch/x86/include/asm/hvm/vpt.h
@@ -48,8 +48,14 @@ struct periodic_time {
 typedef struct PITState {
     /* Hardware state */
     struct hvm_hw_pit hw;
+
     /* Last time the counters read zero, for calculating counter reads */
     int64_t count_load_time[3];
+    /* Last time the counters were stopped, for calculating counter reads */
+    int64_t count_stop_time[3];
+    /* Accumulated "stopped" time since the last counter write/reload. */
+    uint64_t stopped_time[3];
+
     /* Channel 0 IRQ handling. */
     struct periodic_time pt0;
     spinlock_t lock;



From xen-devel-bounces@lists.xenproject.org Tue May 30 15:35:19 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 15:35:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541245.843789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41NO-0004UV-K7; Tue, 30 May 2023 15:35:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541245.843789; Tue, 30 May 2023 15:35:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41NO-0004UO-HW; Tue, 30 May 2023 15:35:10 +0000
Received: by outflank-mailman (input) for mailman id 541245;
 Tue, 30 May 2023 15:35:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8xzU=BT=citrix.com=prvs=507ffd061=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q41NN-0004UG-H9
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 15:35:09 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8dd2d7ad-feff-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 17:35:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8dd2d7ad-feff-11ed-b231-6b7b168915f2
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110275639
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,204,1681185600"; 
   d="scan'208";a="110275639"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/spec-ctrl: Update hardware hints
Date: Tue, 30 May 2023 16:34:52 +0100
Message-ID: <20230530153452.1123823-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

 * Rename IBRS_ALL to EIBRS.  EIBRS is the term that everyone knows, and this
   makes ARCH_CAPS_EIBRS match the X86_FEATURE_EIBRS form.
 * Print RRSBA too, which is also a hint about behaviour.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/include/asm/msr-index.h | 2 +-
 xen/arch/x86/spec_ctrl.c             | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 082fb2e0d9ae..12aeb1a93909 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -67,7 +67,7 @@
 
 #define MSR_ARCH_CAPABILITIES               0x0000010a
 #define  ARCH_CAPS_RDCL_NO                  (_AC(1, ULL) <<  0)
-#define  ARCH_CAPS_IBRS_ALL                 (_AC(1, ULL) <<  1)
+#define  ARCH_CAPS_EIBRS                    (_AC(1, ULL) <<  1)
 #define  ARCH_CAPS_RSBA                     (_AC(1, ULL) <<  2)
 #define  ARCH_CAPS_SKIP_L1DFL               (_AC(1, ULL) <<  3)
 #define  ARCH_CAPS_SSB_NO                   (_AC(1, ULL) <<  4)
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 50d467f74cf8..cd5ea6aa52d9 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -409,10 +409,11 @@ static void __init print_details(enum ind_thunk thunk)
      * Hardware read-only information, stating immunity to certain issues, or
      * suggestions of which mitigation to use.
      */
-    printk("  Hardware hints:%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware hints:%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (caps & ARCH_CAPS_RDCL_NO)                        ? " RDCL_NO"        : "",
-           (caps & ARCH_CAPS_IBRS_ALL)                       ? " IBRS_ALL"       : "",
+           (caps & ARCH_CAPS_EIBRS)                          ? " EIBRS"          : "",
            (caps & ARCH_CAPS_RSBA)                           ? " RSBA"           : "",
+           (caps & ARCH_CAPS_RRSBA)                          ? " RRSBA"          : "",
            (caps & ARCH_CAPS_SKIP_L1DFL)                     ? " SKIP_L1DFL"     : "",
            (e8b  & cpufeat_mask(X86_FEATURE_SSB_NO)) ||
            (caps & ARCH_CAPS_SSB_NO)                         ? " SSB_NO"         : "",
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 30 15:36:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 15:36:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541249.843799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41Oi-000524-U9; Tue, 30 May 2023 15:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541249.843799; Tue, 30 May 2023 15:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41Oi-00051x-Qx; Tue, 30 May 2023 15:36:32 +0000
Received: by outflank-mailman (input) for mailman id 541249;
 Tue, 30 May 2023 15:36:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q41Oh-00051p-FB
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 15:36:31 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20629.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bfaa6603-feff-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 17:36:30 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8149.eurprd04.prod.outlook.com (2603:10a6:20b:3fd::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 15:36:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 15:36:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfaa6603-feff-11ed-b231-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <d35664b5-a5fb-672f-e240-21fc1b5d83e9@suse.com>
Date: Tue, 30 May 2023 17:36:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH] x86/spec-ctrl: Update hardware hints
Content-Language: en-US
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20230530153452.1123823-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230530153452.1123823-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR3P281CA0053.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:4a::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8149:EE_
X-MS-Office365-Filtering-Correlation-Id: e499fd43-3f7f-4328-a624-08db6123a2d0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e499fd43-3f7f-4328-a624-08db6123a2d0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 15:36:28.8176
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: S6O1t+kWGmN/LioccPxyPs1sMPL5CWgA1P54pQlfeuZX+ZnJrmzhfsdjgS71nZsJPGGHVRTIH38yHWkgjTrriw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8149

On 30.05.2023 17:34, Andrew Cooper wrote:
>  * Rename IBRS_ALL to EIBRS.  EIBRS is the term that everyone knows, and this
>    makes ARCH_CAPS_EIBRS match the X86_FEATURE_EIBRS form.
>  * Print RRSBA too, which is also a hint about behaviour.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue May 30 15:44:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 15:44:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541254.843810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41WE-0006W1-NB; Tue, 30 May 2023 15:44:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541254.843810; Tue, 30 May 2023 15:44:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q41WE-0006Vu-Jn; Tue, 30 May 2023 15:44:18 +0000
Received: by outflank-mailman (input) for mailman id 541254;
 Tue, 30 May 2023 15:44:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8LP9=BT=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q41WE-0006Vo-99
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 15:44:18 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20606.outbound.protection.outlook.com
 [2a01:111:f400:7eaf::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d58da1c5-ff00-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 17:44:16 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AS8PR04MB8149.eurprd04.prod.outlook.com (2603:10a6:20b:3fd::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 15:44:15 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 15:44:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d58da1c5-ff00-11ed-b231-6b7b168915f2
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <86dc868a-eda9-9de6-0430-26da6f5ad465@suse.com>
Date: Tue, 30 May 2023 17:44:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v6 4/6] xen/riscv: introduce trap_init()
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
 <f4c4b711106283e26536105105892b93bb39ea3e.1685359848.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f4c4b711106283e26536105105892b93bb39ea3e.1685359848.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0008.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:15::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AS8PR04MB8149:EE_
X-MS-Office365-Filtering-Correlation-Id: f7830e53-0ad5-463f-853c-08db6124b7d6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f7830e53-0ad5-463f-853c-08db6124b7d6
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 15:44:13.6468
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: V3l7SWV7WZgseWiNdhtJ+1U/FXUtA4eyYdIOqS1223JdhOCgtALY3fp9F+0euSz9LqU3hoJgCYqWLSHUR7dKwg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR04MB8149

On 29.05.2023 14:13, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/traps.c
> +++ b/xen/arch/riscv/traps.c
> @@ -12,6 +12,31 @@
>  #include <asm/processor.h>
>  #include <asm/traps.h>
>  
> +#define cast_to_bug_frame(addr) \
> +    (const struct bug_frame *)(addr)

I can't find a use for this; should it be dropped or moved to some
later patch? In any event, if it's intended to survive, it needs yet
another pair of parentheses.

> +/*
> + * Initialize the trap handling.
> + *
> + * The function is called after MMU is enabled.
> + */
> +void trap_init(void)

Is this going to be used for secondary processors as well? If not,
it will want to be __init.

> +{
> +    /*
> +     * When the MMU is off, the addr variable will be a physical address otherwise
> +     * it would be a virtual address.
> +     *
> +     * It will work fine as:
> +     *  - access to addr is PC-relative.
> +     *  - -nopie is used. -nopie really suppresses the compiler emitting
> +     *    code going through .got (which then indeed would mean using absolute
> +     *    addresses).
> +     */

Is all of this comment still relevant now that you're running with
the MMU already enabled?

Jan

> +    unsigned long addr = (unsigned long)&handle_trap;
> +
> +    csr_write(CSR_STVEC, addr);
> +}
> +
>  static const char *decode_trap_cause(unsigned long cause)
>  {
>      static const char *const trap_causes[] = {



From xen-devel-bounces@lists.xenproject.org Tue May 30 16:00:51 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <92580a6f-e97a-c4a9-435c-bd95a84d4306@suse.com>
Date: Tue, 30 May 2023 18:00:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v6 5/6] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
 <bd2dd42c778714f25e7e98f74ff5e98eee1cd0a5.1685359848.git.oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <bd2dd42c778714f25e7e98f74ff5e98eee1cd0a5.1685359848.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 29.05.2023 14:13, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/bug.h
> +++ b/xen/arch/riscv/include/asm/bug.h
> @@ -7,4 +7,32 @@
>  #ifndef _ASM_RISCV_BUG_H
>  #define _ASM_RISCV_BUG_H
>  
> +#ifndef __ASSEMBLY__
> +
> +#define BUG_INSTR "ebreak"
> +
> +/*
> + * The base instruction set has a fixed length of 32-bit naturally aligned
> + * instructions.
> + *
> + * There are extensions of variable length ( where each instruction can be
> + * any number of 16-bit parcels in length ) but they aren't used in Xen
> + * and Linux kernel ( where these definitions were taken from ).

This, at least to some degree, looks to contradict ...

> + * Compressed ISA is used now where the instruction length is 16 bit  and
> + * 'ebreak' instruction, in this case, can be either 16 or 32 bit (
> + * depending on if compressed ISA is used or not )

... this. Plus there already is CONFIG_RISCV_ISA_C, so compressed insns
can very well be used in Xen.

> @@ -114,7 +116,134 @@ static void do_unexpected_trap(const struct cpu_user_regs *regs)
>      die();
>  }
>  
> +void show_execution_state(const struct cpu_user_regs *regs)
> +{
> +    printk("implement show_execution_state(regs)\n");
> +}
> +
> +/*
> + * TODO: change early_printk's function to early_printk with format
> + *       when s(n)printf() will be added.

What is this comment about? I don't think I understand what it says
needs doing.

> + * Probably the TODO won't be needed as generic do_bug_frame()
> + * has been introduced and current implementation will be replaced
> + * with generic one when panic(), printk() and find_text_region()
> + * (virtual memory?) will be ready/merged
> + */
> +int do_bug_frame(const struct cpu_user_regs *regs, vaddr_t pc)

While it's going to be the maintainers to judge, I continue to be
unconvinced that introducing copies of common functions (also in
patch 1) is a good idea.

> +{
> +    const struct bug_frame *start, *end;
> +    const struct bug_frame *bug = NULL;
> +    unsigned int id = 0;
> +    const char *filename, *predicate;
> +    int lineno;
> +
> +    static const struct bug_frame* bug_frames[] = {

Nit: * and blank want to swap places. I would also expect another
"const".

> +static uint32_t read_instr(unsigned long pc)
> +{
> +    uint16_t instr16 = *(uint16_t *)pc;
> +
> +    if ( GET_INSN_LENGTH(instr16) == 2 )
> +        return (uint32_t)instr16;
> +    else
> +        return *(uint32_t *)pc;
> +}

As long as this function is only used on Xen code, it's kind of okay.
There you/we control whether code can change behind our backs. But as
soon as you might use this on guest code, the double read is going to
be a problem (I think; I wonder how hardware is supposed to deal with
the situation: Maybe they indeed fetch in 16-bit quantities?).

> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -40,6 +40,16 @@ SECTIONS
>      . = ALIGN(PAGE_SIZE);
>      .rodata : {
>          _srodata = .;          /* Read-only data */
> +        /* Bug frames table */
> +       __start_bug_frames = .;
> +       *(.bug_frames.0)
> +       __stop_bug_frames_0 = .;
> +       *(.bug_frames.1)
> +       __stop_bug_frames_1 = .;
> +       *(.bug_frames.2)
> +       __stop_bug_frames_2 = .;
> +       *(.bug_frames.3)
> +       __stop_bug_frames_3 = .;
>          *(.rodata)
>          *(.rodata.*)
>          *(.data.rel.ro)

Nit: There looks to be an off-by-one in how you indent your addition
(except for the comment).

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 30 16:01:00 2023
From: Thomas Gleixner <tglx@linutronix.de>
To: "Kirill A. Shutemov" <kirill@shutemov.name>, Tom Lendacky
 <thomas.lendacky@amd.com>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Sean Christopherson <seanjc@google.com>, Oleksandr Natalenko
 <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org, Russell King
 <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@HansenPartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
In-Reply-To: <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name>
References: <20230524204818.3tjlwah2euncxzmh@box.shutemov.name>
 <87y1lbl7r6.ffs@tglx> <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name> <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx> <87cz2ija1e.ffs@tglx>
 <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name>
Date: Tue, 30 May 2023 18:00:46 +0200
Message-ID: <87wn0pizbl.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 30 2023 at 15:29, Kirill A. Shutemov wrote:
> On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
>> The decision to allow parallel bringup of secondary CPUs checks
>> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
>> parallel bootup because accessing the local APIC is intercepted and raises
>> a #VC or #VE, which cannot be handled at that point.
>> 
>> The check works correctly, but only for AMD encrypted guests. TDX does not
>> set that flag.
>> 
>> Check for cc_vendor != CC_VENDOR_NONE instead. That might be overbroad, but
>> definitely works for both AMD and Intel.
>
> It boots fine with TDX, but I think it is wrong. cc_get_vendor() will
> report CC_VENDOR_AMD even on bare metal if SME is enabled. I don't think
> we want it.

Right. Did not think about that.

But CC_ATTR_GUEST_MEM_ENCRYPT is overbroad for AMD in the same way. Only
SEV-ES traps RDMSR, if I'm understanding that maze correctly.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Tue May 30 16:03:30 2023
Date: Tue, 30 May 2023 18:02:32 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 2/3] multiboot2: parse console= and vga= options when
 setting GOP mode
Message-ID: <ZHYeGOFpAtLnoQf2@Air-de-Roger>
References: <20230331095946.45024-1-roger.pau@citrix.com>
 <20230331095946.45024-3-roger.pau@citrix.com>
 <2ee8a4b4-c0ac-8950-297a-e1fe97d2c494@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2ee8a4b4-c0ac-8950-297a-e1fe97d2c494@suse.com>
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: aba8b923-b9eb-4f56-16a4-08db61274a08
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 16:02:37.9748
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: f9+rKlOdSzXX5VJgUIsWtKoh50M3JXMpfpARJc6Ii1E/W9ITwgyUdB4tAqZBON3MRePa4jxblltsi07SYgvQ3w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR03MB6177

On Wed, Apr 05, 2023 at 12:15:26PM +0200, Jan Beulich wrote:
> On 31.03.2023 11:59, Roger Pau Monne wrote:
> > Only set the GOP mode if vga is selected in the console option,
> 
> This particular aspect of the behavior is inconsistent with legacy
> boot behavior: There "vga=" isn't qualified by what "console=" has.

Hm, I find this very odd: why would we fiddle with the VGA (or the
GOP here) if we don't intend to use it for output?

> > otherwise just fetch the information from the current mode in order to
> > make it available to dom0.
> > 
> > Introduce support for passing the command line to the efi_multiboot2()
> > helper, and parse the console= and vga= options if present.
> > 
> > Add support for the 'gfx' and 'current' vga options, ignore the 'keep'
> > option, and print a warning message about other options not being
> > currently implemented.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >[...] 
> > --- a/xen/arch/x86/efi/efi-boot.h
> > +++ b/xen/arch/x86/efi/efi-boot.h
> > @@ -786,7 +786,30 @@ static bool __init efi_arch_use_config_file(EFI_SYSTEM_TABLE *SystemTable)
> >  
> >  static void __init efi_arch_flush_dcache_area(const void *vaddr, UINTN size) { }
> >  
> > -void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
> > +/* Return the next occurrence of opt in cmd. */
> > +static const char __init *get_option(const char *cmd, const char *opt)
> > +{
> > +    const char *s = cmd, *o = NULL;
> > +
> > +    if ( !cmd || !opt )
> 
> I can see why you need to check "cmd", but there's no need to check "opt"
> I would say.

Given this is executed without a page-fault handler in place, I
thought it was best to be safe rather than sorry, and avoid
dereferencing possibly NULL pointers.

> > @@ -807,7 +830,60 @@ void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable
> >  
> >      if ( gop )
> >      {
> > -        gop_mode = efi_find_gop_mode(gop, 0, 0, 0);
> > +        const char *opt = NULL, *last = cmdline;
> > +        /* Default console selection is "com1,vga". */
> > +        bool vga = true;
> > +
> > +        /* For the console option the last occurrence is the enforced one. */
> > +        while ( (last = get_option(last, "console=")) != NULL )
> > +            opt = last;
> > +
> > +        if ( opt )
> > +        {
> > +            const char *s = strstr(opt, "vga");
> > +
> > +            if ( !s || s > strpbrk(opt, " ") )
> 
> Why strpbrk() and not the simpler strchr()? Or did you mean to also look
> for tabs, but then didn't include \t here (and in get_option())? (Legacy
> boot code also takes \r and \n as separators, btw, but I'm unconvinced
> of the need.)

I was originally checking for more characters here and didn't switch
when removing those.  I will add \t.

> Also aiui this is UB when the function returns NULL, as relational operators
> (excluding equality ones) may only be applied when both addresses refer to
> the same object (or to the end of an involved array).

Hm, I see, thanks for spotting. So I would need to do:

s > (strpbrk(opt, " ") ?: s)

so that we don't compare against NULL.

Also the original code was wrong AFAICT, as strpbrk() returning NULL
should result in vga=true (as it would imply console= is the last
option on the command line).
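FWIW, the fixed check can be sketched in plain C.  This is a hedged
reconstruction of the logic being discussed, not the actual Xen patch:
console_selects_vga() is a hypothetical helper name, and the portable
"sep && s > sep" test is equivalent to the GNU ?: form while also
yielding vga=true when no separator follows console=:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical helper (not the Xen code): does the console= option value
 * starting at opt select "vga"?  Comparing s > strpbrk(opt, " ") directly
 * is undefined behavior when strpbrk() returns NULL, since relational
 * comparisons require both pointers to address the same object.  Checking
 * the separator for NULL first keeps the comparison defined, and a missing
 * separator (console= is the last option) leaves vga selected.
 */
static int console_selects_vga(const char *opt)
{
    const char *s = strstr(opt, "vga");
    const char *sep = strpbrk(opt, " \t");

    if ( !s )
        return 0;            /* no "vga" anywhere after console= */

    if ( sep && s > sep )
        return 0;            /* "vga" belongs to a later option */

    return 1;                /* same result as !(s > (sep ?: s)) */
}
```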

> > +                vga = false;
> > +        }
> > +
> > +        if ( vga )
> > +        {
> > +            unsigned int width = 0, height = 0, depth = 0;
> > +            bool keep_current = false;
> > +
> > +            last = cmdline;
> > +            while ( (last = get_option(last, "vga=")) != NULL )
> 
> It's yet different for "vga=", I'm afraid: Early boot code (boot/cmdline.c)
> finds the first instance only. Normal command line handling respects the
> last instance only. So while "vga=gfx-... vga=keep" will have the expected
> effect, "vga=keep vga=gfx-..." won't (I think). It is certainly fine to be
> less inflexible here, but I think this then wants accompanying by an update
> to the command line doc, no matter that presently it doesn't really
> describe these peculiarities.

But if we then describe this behavior in the documentation, people
could rely on it.  Right now this is just an implementation detail (or
a bug, I would say), which would justify fixing boot/cmdline.c to also
respect the last instance only.

> Otoh it would end up being slightly cheaper
> to only look for the first instance here as well. In particular ...
> 
> > +            {
> > +                if ( !strncmp(last, "gfx-", 4) )
> > +                {
> > +                    width = simple_strtoul(last + 4, &last, 10);
> > +                    if ( *last == 'x' )
> > +                        height = simple_strtoul(last + 1, &last, 10);
> > +                    if ( *last == 'x' )
> > +                        depth = simple_strtoul(last + 1, &last, 10);
> > +                    /* Allow depth to be 0 or unset. */
> > +                    if ( !width || !height )
> > +                        width = height = depth = 0;
> > +                    keep_current = false;
> > +                }
> > +                else if ( !strncmp(last, "current", 7) )
> > +                    keep_current = true;
> > +                else if ( !strncmp(last, "keep", 4) )
> > +                {
> > +                    /* Ignore. */
> > +                }
> > +                else
> > +                {
> > +                    /* Fallback to defaults if unimplemented. */
> > +                    width = height = depth = 0;
> > +                    keep_current = false;
> 
> ... this zapping of what was successfully parsed before would then not be
> needed in any event (else I would question why this is necessary).

Hm, I don't have a strong opinion.  The behavior I expect from command
line options is that the last occurrence is the one that takes effect,
but it would simplify the change if we only cared about the first one,
albeit that would be odd behavior.

My preference would be to leave the code here respecting the last
instance only, and attempt to fix boot/cmdline.c so it does the same.
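For illustration, the last-instance-wins scan can be sketched like
this.  It is a simplified stand-in, not the get_option() helper from
the patch; last_option() is a hypothetical name and the use of strstr()
with only space/tab as token separators is an assumption:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Simplified sketch of "the last occurrence wins" option scanning, in the
 * spirit of looping get_option() until it returns NULL.  Returns a pointer
 * to the value of the final occurrence of opt in cmd, or NULL if absent.
 */
static const char *last_option(const char *cmd, const char *opt)
{
    const char *found = NULL, *s = cmd;
    size_t len;

    if ( !cmd || !opt )
        return NULL;

    len = strlen(opt);
    while ( (s = strstr(s, opt)) != NULL )
    {
        /* Accept matches only at the start of a whitespace-separated token. */
        if ( s == cmd || s[-1] == ' ' || s[-1] == '\t' )
            found = s + len;      /* remember this instance's value */
        s += len;
    }

    return found;
}
```

With this, "vga=keep vga=gfx-..." resolves to the gfx- value, matching
the expectation that the rightmost option takes effect.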

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 30 16:23:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 16:23:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541271.843850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q428R-0004hb-JV; Tue, 30 May 2023 16:23:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541271.843850; Tue, 30 May 2023 16:23:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q428R-0004hU-F7; Tue, 30 May 2023 16:23:47 +0000
Received: by outflank-mailman (input) for mailman id 541271;
 Tue, 30 May 2023 16:23:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OJSS=BT=intel.com=rick.p.edgecombe@srs-se1.protection.inumbo.net>)
 id 1q428Q-0004hO-2x
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 16:23:46 +0000
Received: from mga05.intel.com (mga05.intel.com [192.55.52.43])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 568a1c16-ff06-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 18:23:42 +0200 (CEST)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 30 May 2023 09:23:29 -0700
Received: from orsmsx603.amr.corp.intel.com ([10.22.229.16])
 by fmsmga001.fm.intel.com with ESMTP; 30 May 2023 09:23:28 -0700
Received: from orsmsx611.amr.corp.intel.com (10.22.229.24) by
 ORSMSX603.amr.corp.intel.com (10.22.229.16) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Tue, 30 May 2023 09:23:28 -0700
Received: from orsmsx610.amr.corp.intel.com (10.22.229.23) by
 ORSMSX611.amr.corp.intel.com (10.22.229.24) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23; Tue, 30 May 2023 09:23:27 -0700
Received: from ORSEDG601.ED.cps.intel.com (10.7.248.6) by
 orsmsx610.amr.corp.intel.com (10.22.229.23) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.23 via Frontend Transport; Tue, 30 May 2023 09:23:27 -0700
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (104.47.55.104)
 by edgegateway.intel.com (134.134.137.102) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.23; Tue, 30 May 2023 09:23:25 -0700
Received: from MN0PR11MB5963.namprd11.prod.outlook.com (2603:10b6:208:372::10)
 by CY8PR11MB7172.namprd11.prod.outlook.com (2603:10b6:930:93::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 16:23:22 +0000
Received: from MN0PR11MB5963.namprd11.prod.outlook.com
 ([fe80::6984:19a5:fe1c:dfec]) by MN0PR11MB5963.namprd11.prod.outlook.com
 ([fe80::6984:19a5:fe1c:dfec%7]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 16:23:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 568a1c16-ff06-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1685463822; x=1716999822;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=x9MXvI5HktxoStyqb+CwpnhiQFlP6fmCYRPlaNDXwq0=;
  b=gY4ugTpJbxZCwLkmZmlQ1XpQypb/sB/TyMG+7xVB/hD/c5R7f4O0ngCr
   meN+++K8udZt8lROzSXq5zReVYHBsGf9jmDBpz1px9/7q4ynYsKbascng
   5zD8FzGsocIlnpRO/r8TgYv5VmKWGYumrLevwknfsaCqm89R42/T7YM6D
   Z3k5zb9DOfb2G9vOfq2qY2EfICwS5m2+XMctPYgiZ6vUuYS/mZ+fFqxFf
   d3s6YHG+ph9vfZalRyLZzNgTBNv5irtAcwCIekoGSkNTHRbR58veEz2KK
   aPk3n/UKljljLpy33u1zc1BjTt5Zt9o7TjXS60p0San2hakbAWt3+Ac0t
   A==;
X-IronPort-AV: E=McAfee;i="6600,9927,10726"; a="441328380"
X-IronPort-AV: E=Sophos;i="6.00,204,1681196400"; 
   d="scan'208";a="441328380"
X-ExtLoop1: 1
X-IronPort-AV: E=McAfee;i="6600,9927,10726"; a="850848040"
X-IronPort-AV: E=Sophos;i="6.00,204,1681196400"; 
   d="scan'208";a="850848040"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
From: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
To: "mic@digikod.net" <mic@digikod.net>, "Christopherson,, Sean"
	<seanjc@google.com>, "dave.hansen@linux.intel.com"
	<dave.hansen@linux.intel.com>, "bp@alien8.de" <bp@alien8.de>,
	"keescook@chromium.org" <keescook@chromium.org>, "hpa@zytor.com"
	<hpa@zytor.com>, "mingo@redhat.com" <mingo@redhat.com>, "tglx@linutronix.de"
	<tglx@linutronix.de>, "pbonzini@redhat.com" <pbonzini@redhat.com>,
	"wanpengli@tencent.com" <wanpengli@tencent.com>, "vkuznets@redhat.com"
	<vkuznets@redhat.com>
CC: "kvm@vger.kernel.org" <kvm@vger.kernel.org>, "qemu-devel@nongnu.org"
	<qemu-devel@nongnu.org>, "liran.alon@oracle.com" <liran.alon@oracle.com>,
	"marian.c.rotariu@gmail.com" <marian.c.rotariu@gmail.com>, "Graf, Alexander"
	<graf@amazon.com>, "Andersen, John S" <john.s.andersen@intel.com>,
	"madvenka@linux.microsoft.com" <madvenka@linux.microsoft.com>,
	"ssicleru@bitdefender.com" <ssicleru@bitdefender.com>, "yuanyu@google.com"
	<yuanyu@google.com>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "tgopinath@microsoft.com"
	<tgopinath@microsoft.com>, "jamorris@linux.microsoft.com"
	<jamorris@linux.microsoft.com>, "linux-security-module@vger.kernel.org"
	<linux-security-module@vger.kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "will@kernel.org" <will@kernel.org>,
	"dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>,
	"mdontu@bitdefender.com" <mdontu@bitdefender.com>,
	"linux-hardening@vger.kernel.org" <linux-hardening@vger.kernel.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>, "nicu.citu@icloud.com"
	<nicu.citu@icloud.com>, "ztarkhani@microsoft.com" <ztarkhani@microsoft.com>,
	"x86@kernel.org" <x86@kernel.org>
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Thread-Topic: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
Thread-Index: AQHZf2Ve5xw6RDm4tUGLnlOpizAoCa9qHQcAgAEGdICAAalfgIAGWnSA
Date: Tue, 30 May 2023 16:23:22 +0000
Message-ID: <fd1dd8bcc172093ad20243ac1e7bb8fce45b38ef.camel@intel.com>
References: <20230505152046.6575-1-mic@digikod.net>
	 <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
	 <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net>
	 <58a803f6-c3de-3362-673f-767767a43f9c@digikod.net>
In-Reply-To: <58a803f6-c3de-3362-673f-767767a43f9c@digikod.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Evolution 3.44.4-0ubuntu1 
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: MN0PR11MB5963:EE_|CY8PR11MB7172:EE_
x-ms-office365-filtering-correlation-id: 8f4b5d35-bee6-4765-25fc-08db612a2ff9
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <5A1BBC9128CF3A4A9AFCB7DF898647AD@namprd11.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MN0PR11MB5963.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f4b5d35-bee6-4765-25fc-08db612a2ff9
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 May 2023 16:23:22.4146
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: wt9+Xdtquva4HMsuEq9CJzIi1L1ExpeP/Ljhz+DlY9QclYD7odzMJUX9PibUrW/obiBsmUGVoIG9glyjrTDAe18olIYWtMPIb9ij2hRUMZY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR11MB7172
X-OriginatorOrg: intel.com

On Fri, 2023-05-26 at 17:22 +0200, Mickaël Salaün wrote:
> > > Can the guest kernel ask the host VMM's emulated devices to DMA
> > > into
> > > the protected data? It should go through the host userspace
> > > mappings I
> > > think, which don't care about EPT permissions. Or did I miss
> > > where you
> > > are protecting that another way? There are a lot of easy ways to
> > > ask
> > > the host to write to guest memory that don't involve the EPT. You
> > > probably need to protect the host userspace mappings, and also
> > > the
> > > places in KVM that kmap a GPA provided by the guest.
> > 
> > Good point, I'll check this confused deputy attack. Extended KVM
> > protections should indeed handle all ways to map guests' memory.
> > I'm
> > wondering if current VMMs would gracefully handle such new
> > restrictions
> > though.
> 
> I guess the host could map arbitrary data to the guest, so that need
> to 
> be handled, but how could the VMM (not the host kernel) bypass/update
> EPT initially used for the guest (and potentially later mapped to the
> host)?

Well traditionally both QEMU and KVM accessed guest memory via host
mappings instead of the EPT. So I'm wondering what is stopping the
guest from passing a protected gfn when setting up the DMA, and QEMU
being enticed to write to it? The emulator as well would use these host
userspace mappings and not consult the EPT IIRC.

I think Sean was suggesting host userspace should be more involved in
this process, so perhaps it could protect its own alias of the
protected memory, for example mprotect() it as read-only.

There is (was?) some KVM PV features that accessed guest memory via the
host direct map as well. I would think mprotect() should protect this
at the get_user_pages() stage, but it looks like the details have
changed since I last understood it.


From xen-devel-bounces@lists.xenproject.org Tue May 30 16:24:44 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 16:24:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541275.843859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q429L-0005Gh-Vk; Tue, 30 May 2023 16:24:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541275.843859; Tue, 30 May 2023 16:24:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q429L-0005Ga-TE; Tue, 30 May 2023 16:24:43 +0000
Received: by outflank-mailman (input) for mailman id 541275;
 Tue, 30 May 2023 16:24:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=slLs=BT=redhat.com=kwolf@srs-se1.protection.inumbo.net>)
 id 1q429K-0005GS-IK
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 16:24:42 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 79968668-ff06-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 18:24:40 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-83-ahITsgYMPYCTa1zF2WpWBg-1; Tue, 30 May 2023 12:24:33 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0662F87282F;
 Tue, 30 May 2023 16:24:19 +0000 (UTC)
Received: from redhat.com (unknown [10.39.194.4])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id E24581121330;
 Tue, 30 May 2023 16:24:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79968668-ff06-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685463879;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=+PO7OF1UJIEyhl34/BsyVvrG83vEPKJb7h1Om/b9M+g=;
	b=I8E3iMoNG5YaWka4+qG8LG5aFBBfaCDR7nTWfJ7ILDL1JscwttZXLXX2VaU2R+aLymQXoK
	yh1zgP8BbfD7Xgi1CsF95/L1vTfueqliLFhpIiHefhyv5LCJ8ElwFn7Vg0v5thGIQExS9a
	t44OHSNQ+7PlNJ4RXywOIODKWQvEFuA=
X-MC-Unique: ahITsgYMPYCTa1zF2WpWBg-1
Date: Tue, 30 May 2023 18:24:13 +0200
From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Coiby Xu <Coiby.Xu@gmail.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Richard W.M. Jones" <rjones@redhat.com>,
	Peter Xu <peterx@redhat.com>, xen-devel@lists.xenproject.org,
	Ronnie Sahlberg <ronniesahlberg@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Richard Henderson <richard.henderson@linaro.org>,
	Julia Suvorova <jusual@redhat.com>, Hanna Reitz <hreitz@redhat.com>,
	Leonardo Bras <leobras@redhat.com>, eesposit@redhat.com,
	Fam Zheng <fam@euphon.net>, Aarushi Mehta <mehta.aaru20@gmail.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Xie Yongji <xieyongji@bytedance.com>,
	Stefano Garzarella <sgarzare@redhat.com>, qemu-block@nongnu.org,
	Eduardo Habkost <eduardo@habkost.net>, Paul Durrant <paul@xen.org>,
	Stefan Weil <sw@weilnetz.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
	Peter Lieven <pl@kamp.de>, Paolo Bonzini <pbonzini@redhat.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Juan Quintela <quintela@redhat.com>
Subject: Re: [PATCH v6 00/20] block: remove aio_disable_external() API
Message-ID: <ZHYjLQ3pb+GBav6i@redhat.com>
References: <20230516190238.8401-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230516190238.8401-1-stefanha@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3

Am 16.05.2023 um 21:02 hat Stefan Hajnoczi geschrieben:
> The aio_disable_external() API temporarily suspends file descriptor monitoring
> in the event loop. The block layer uses this to prevent new I/O requests being
> submitted from the guest and elsewhere between bdrv_drained_begin() and
> bdrv_drained_end().
> 
> While the block layer still needs to prevent new I/O requests in drained
> sections, the aio_disable_external() API can be replaced with
> .drained_begin/end/poll() callbacks that have been added to BdrvChildClass and
> BlockDevOps.
> 
> This newer .drained_begin/end/poll() approach is attractive because it works
> without specifying a specific AioContext. The block layer is moving towards
> multi-queue and that means multiple AioContexts may be processing I/O
> simultaneously.
> 
> The aio_disable_external() was always somewhat hacky. It suspends all file
> descriptors that were registered with is_external=true, even if they have
> nothing to do with the BlockDriverState graph nodes that are being drained.
> It's better to solve a block layer problem in the block layer than to have an
> odd event loop API solution.
> 
> The approach in this patch series is to implement BlockDevOps
> .drained_begin/end() callbacks that temporarily stop file descriptor handlers.
> This ensures that new I/O requests are not submitted in drained sections.

Thanks, applied to the block branch.

Kevin
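The .drained_begin/end() mechanism summarized in the cover letter above can be sketched as a small toy model. This is illustrative only: it is not QEMU's actual BlockDevOps API, and all names here are made up for the example.

```python
class DrainedDevice:
    """Toy model of a device whose fd handlers are paused in drained sections."""

    def __init__(self):
        self.fd_handlers_active = True
        self.submitted = []

    # BlockDevOps-style callbacks (names illustrative)
    def drained_begin(self):
        self.fd_handlers_active = False  # stop fd handlers: no new I/O

    def drained_end(self):
        self.fd_handlers_active = True   # resume fd handlers

    def fd_readable(self, req):
        # An fd handler only submits I/O while handlers are active.
        if self.fd_handlers_active:
            self.submitted.append(req)


class drained_section:
    """Model of the bdrv_drained_begin()/bdrv_drained_end() bracket."""

    def __init__(self, dev):
        self.dev = dev

    def __enter__(self):
        self.dev.drained_begin()

    def __exit__(self, *exc):
        self.dev.drained_end()


dev = DrainedDevice()
dev.fd_readable("req1")          # accepted
with drained_section(dev):
    dev.fd_readable("req2")      # suppressed: drained section
dev.fd_readable("req3")          # accepted again
print(dev.submitted)             # ['req1', 'req3']
```

Note how the suppression is per-device rather than per-AioContext, which is the property the series needs for multi-queue.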



From xen-devel-bounces@lists.xenproject.org Tue May 30 16:24:54 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 16:24:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541277.843870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q429W-0005bl-78; Tue, 30 May 2023 16:24:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541277.843870; Tue, 30 May 2023 16:24:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q429W-0005be-47; Tue, 30 May 2023 16:24:54 +0000
Received: by outflank-mailman (input) for mailman id 541277;
 Tue, 30 May 2023 16:24:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q429U-0005ZI-Im; Tue, 30 May 2023 16:24:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q429U-0004cH-EV; Tue, 30 May 2023 16:24:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q429U-0001v0-0y; Tue, 30 May 2023 16:24:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q429U-0005Uj-0W; Tue, 30 May 2023 16:24:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9i62zlCseA50x7E0mxAViWkBfsOahj2xaYLsj5R5gxk=; b=5NlgHo0I9MFya4s9lUQeBHukc8
	RAVs54Fg5Aue760o3NLyP1DGP90xvwayfLMz9Xq2mIha63V96XVX4jrAp0ZzCY5FoZkdwHkgGK16U
	fIAggarbjGAHwFyqSRLBY+hL4gcMIa0M1eFl8TUv7rCVMBN3jpjEsEyYPJuC8XZTgBEI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181014-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 181014: trouble: broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<job status>:broken:regression
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:host-install(5):broken:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=05422d276b56f2ebc2309a84a66fc5722c45ad74
X-Osstest-Versions-That:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 16:24:52 +0000

flight 181014 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181014/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <job status>               broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 5 host-install(5) broken REGR. vs. 180963

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  05422d276b56f2ebc2309a84a66fc5722c45ad74
baseline version:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc

Last test of basis   180963  2023-05-26 13:01:58 Z    4 days
Testing same since   181014  2023-05-30 11:02:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Cyril Rébert <slack@rabbit.lu>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    broken  
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
broken-step test-amd64-amd64-xl-qemuu-debianhvm-amd64 host-install(5)

Not pushing.

------------------------------------------------------------
commit 05422d276b56f2ebc2309a84a66fc5722c45ad74
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Tue May 30 12:12:59 2023 +0200

    build: adjust compile.h compiler version command line
    
    CFLAGS here comes just from Config.mk, so drop its use. Don't even
    bother to use the flags used to build Xen instead.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 352c917acfe1dd6afc2eee44aa4ab7c50d4bc48a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue May 30 12:00:34 2023 +0200

    x86/vPIC: register only one ELCR handler instance
    
    There's no point consuming two port-I/O slots. Even less so considering
    that some real hardware permits both ports to be accessed in one go,
    the emulation of which requires there to be only a single instance.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
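The point of the commit above — one handler instance can service a multi-byte access spanning both ELCR ports, while two separate one-port instances cannot — can be sketched with a toy port-I/O dispatcher. This is illustrative only and not Xen's actual HVM port-I/O interface; the port numbers 0x4d0/0x4d1 are the conventional ELCR ports.

```python
# Toy port-I/O table: each handler instance covers a (base, size) port range.
handlers = []

def register_portio(base, size, fn):
    handlers.append((base, size, fn))

def portio_read(port, nbytes):
    # A single lookup must find one handler covering the *whole* access;
    # an access spanning two separate instances could not be dispatched.
    for base, size, fn in handlers:
        if base <= port and port + nbytes <= base + size:
            return fn(port - base, nbytes)
    raise LookupError("access spans handler boundaries or is unclaimed")

elcr = [0x0C, 0x0A]  # contents of ELCR1 (port 0x4d0) and ELCR2 (port 0x4d1)

def elcr_read(offset, nbytes):
    value = 0
    for i in range(nbytes):
        value |= elcr[offset + i] << (8 * i)
    return value

# One instance covering both ports: a 16-bit read at 0x4d0 just works.
register_portio(0x4D0, 2, elcr_read)
print(hex(portio_read(0x4D0, 2)))  # 0xa0c
```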

commit 647377ea06b86d7356f5975e4780b9a6a81c188e
Author: Stewart Hildebrand <stewart.hildebrand@amd.com>
Date:   Tue May 30 11:59:33 2023 +0200

    xen/arm: un-break build with clang
    
    clang doesn't like extern with __attribute__((__used__)):
    
      ./arch/arm/include/asm/setup.h:171:8: error: 'used' attribute ignored [-Werror,-Wignored-attributes]
      extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
             ^
      ./arch/arm/include/asm/lpae.h:273:29: note: expanded from macro 'DEFINE_BOOT_PAGE_TABLE'
      lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \$
                                  ^
      ./include/xen/compiler.h:71:27: note: expanded from macro '__section'
      #define __section(s)      __used __attribute__((__section__(s)))
                                ^
      ./include/xen/compiler.h:104:39: note: expanded from macro '__used'
      #define __used         __attribute__((__used__))
                                            ^
    
    Simplify the declarations by getting rid of the macro (and thus the
    __aligned/__section/__used attributes) in the header. No functional change
    intended as the macro/attributes are present in the respective definitions in
    xen/arch/arm/mm.c.
    
    Fixes: 1c78d76b67e1 ("xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping")
    Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 2a8a1681505d67dae5d3964f98cc1b1daf8e43f3
Author: Cyril Rébert <slack@rabbit.lu>
Date:   Tue May 30 11:57:42 2023 +0200

    tools/xenstore: remove deprecated parameter from xenstore commands help
    
    Completing commit c65687e ("tools/xenstore: remove socket-only option from xenstore client").
    As the socket-only option (-s) has been removed from the Xenstore access commands (xenstore-*),
    also remove the parameter from the commands' help output (xenstore-* -h).
    
    Suggested-by: Yann Dirson <yann.dirson@vates.fr>
    Signed-off-by: Cyril Rébert <slack@rabbit.lu>
    Reviewed-by: Juergen Gross <jgross@suse.com>

commit ca045140d90c7892ec0664cdb2ef3e16c97eb0b6
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Tue May 30 11:57:17 2023 +0200

    xen/misra: xen-analysis.py: Fix cppcheck report relative paths
    
    Fix the generation of the relative path from the repo for cppcheck
    reports when the script launches make with an in-tree build.
    
    Fixes: b046f7e37489 ("xen/misra: xen-analysis.py: use the relative path from the ...")
    Reported-by: Michal Orzel <michal.orzel@amd.com>
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>

commit 8bd504290bc3e5fb4d04150f96a36783407661b4
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Tue May 30 11:57:02 2023 +0200

    xen/misra: xen-analysis.py: Fix latent bug
    
    Currently there is a latent bug that is not triggered because
    the function cppcheck_merge_txt_fragments is called with the
    parameter strip_paths holding a list of only one element.
    
    The bug is that the split call should not be inside the loop
    over strip_paths, but one level above; fix it.
    
    Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
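The fix described in the commit message above — hoisting the split out of the strip_paths loop — can be illustrated with a simplified sketch. The function name and data below are made up for the example; this is not the actual xen-analysis.py code.

```python
def strip_report_paths(lines, strip_paths):
    """Strip any known build-path prefix from cppcheck report lines.

    The split happens once per line, one level above the strip_paths
    loop; performing it inside the loop only behaves correctly when
    strip_paths holds a single element.
    """
    out = []
    for line in lines:
        path, sep, rest = line.partition(":")  # split once, before the loop
        for prefix in strip_paths:
            if path.startswith(prefix):
                path = path[len(prefix):].lstrip("/")
                break
        out.append(path + sep + rest)
    return out


report = ["/build/xen/arch/arm/mm.c:42: style: example finding"]
print(strip_report_paths(report, ["/src", "/build/xen"]))
# ['arch/arm/mm.c:42: style: example finding']
```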

commit e56f2106b6727223bd7de03e20fedd1f94da655d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue May 30 11:56:22 2023 +0200

    VMX/cpu-policy: disable RDTSCP and INVPCID insns as needed
    
    When either feature is available in hardware but disabled for a guest,
    the respective insn would better cause #UD if an attempt is made to use it.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit 233a8f20cfbe999505c7b07b359f03fc04111008
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue May 30 11:54:55 2023 +0200

    VMX/cpu-policy: check availability of RDTSCP and INVPCID
    
    Both have separate enable bits, which are optional. While on real
    hardware we can perhaps expect these VMX controls to be available if
    (and only if) the base CPU feature is available, when running
    virtualized ourselves this may not be the case.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 30 16:57:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 16:57:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541286.843879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42eJ-0000vq-EM; Tue, 30 May 2023 16:56:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541286.843879; Tue, 30 May 2023 16:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42eJ-0000vj-Bb; Tue, 30 May 2023 16:56:43 +0000
Received: by outflank-mailman (input) for mailman id 541286;
 Tue, 30 May 2023 16:56:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zKAq=BT=flex--seanjc.bounces.google.com=3wyp2ZAYKCRwK62FB48GG8D6.4GEP6F-56N6DDAKLK.P6FHJGB64L.GJ8@srs-se1.protection.inumbo.net>)
 id 1q42eH-0000vd-U8
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 16:56:41 +0000
Received: from mail-pg1-x549.google.com (mail-pg1-x549.google.com
 [2607:f8b0:4864:20::549])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f0568309-ff0a-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 18:56:37 +0200 (CEST)
Received: by mail-pg1-x549.google.com with SMTP id
 41be03b00d2f7-53f44c2566dso2594688a12.2
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 09:56:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0568309-ff0a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20221208; t=1685465796; x=1688057796;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:from:to:cc:subject:date:message-id:reply-to;
        bh=zUIF7qczhJfL0tzqN+Wk4Y8s0fyjf+/encaeJ8i8zo8=;
        b=RbEfFAAf2kcBOjvt74s5UVZKgWrUGFDUy+sd7fobbo+fCQD0px8ov63e8OeZXSJ8fh
         HG4hHegtTrDVP700YjLnXteCxE6guE54Vbd8zH/t2rCli4QSiy5tmFSofJYIZgEMn1zj
         Qsp4LrQqCDZwVqc25d9x6Ot8kXQSMfXIZ9gqaurkMyf4wf+UYa4w1aAbMwazKXlrtuiS
         Kk616ydOwBRsdeEiUal2WEabr5ltpvlk9LlkPPxhELMLDRXHyhZJZ3Til0KRYe81otj+
         tS/p/kiytx6/XARdF4p/h7eKZckIQCkqvbC4w6UMN19z2FxazUFpiFD2Hr1ZsYeU6F4t
         XCEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685465796; x=1688057796;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=zUIF7qczhJfL0tzqN+Wk4Y8s0fyjf+/encaeJ8i8zo8=;
        b=RHEdGEght3UPOQv0PilZND+qGeiHXiUJXHQWL1gDZsR3eCtdFXX1V7a/lUjwa133Jk
         7RVWt9zCjbhrCQ444p35hiHtPaSEqJQ/Y/ByW37mNTPY7+e/Fsg/1cAwLKui13+jbVXt
         /BzC47Jkv9KDkcnNppU3SKb+PaQ2jwtaPcFCyyHdCi+hTzNwoc1voRyvv6oXhTOrAt6p
         m+EMCVUFGgMUbq+g0rcpFzckQIbWsCoTBWoR7/YC8+8Q2H0VCft0RQOZ5VXtU88kRhkv
         OHn9ldyw0QOo0lrUQzDjqB/ZLX02jJ0k1C4C3YRRqjBkx/N4jZb9FfCFf+HHuNdortNa
         h0bg==
X-Gm-Message-State: AC+VfDykoNhAe5MvL1ApXQrlBrldix6qF28c1sCByi11kMUrVtG62oei
	tqzybxiHjD6Z6NA+anVe0zVvSLSsX6o=
X-Google-Smtp-Source: ACHHUZ6elIb9Xawt5kHr4iARsASD1eAmXyJrU7yMOnucjGsQDVEv6w8ENbA+wCa+rqHlmqnxd6WixqOcZs8=
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a63:512:0:b0:52c:9e55:61ee with SMTP id
 18-20020a630512000000b0052c9e5561eemr545525pgf.3.1685465795799; Tue, 30 May
 2023 09:56:35 -0700 (PDT)
Date: Tue, 30 May 2023 09:56:34 -0700
In-Reply-To: <87wn0pizbl.ffs@tglx>
Mime-Version: 1.0
References: <87sfbhlwp9.ffs@tglx> <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx> <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx> <87cz2ija1e.ffs@tglx> <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name>
 <87wn0pizbl.ffs@tglx>
Message-ID: <ZHYqwsCURnrFdsVm@google.com>
Subject: Re: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
From: Sean Christopherson <seanjc@google.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>, Tom Lendacky <thomas.lendacky@amd.com>, 
	LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, 
	David Woodhouse <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Brian Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, 
	Oleksandr Natalenko <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>, 
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, 
	Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org, 
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>, 
	linux-arm-kernel@lists.infradead.org, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, 
	linux-csky@vger.kernel.org, Thomas Bogendoerfer <tsbogend@alpha.franken.de>, 
	linux-mips@vger.kernel.org, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	linux-parisc@vger.kernel.org, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, linux-riscv@lists.infradead.org, 
	Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>, 
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Content-Type: text/plain; charset="us-ascii"

On Tue, May 30, 2023, Thomas Gleixner wrote:
> On Tue, May 30 2023 at 15:29, Kirill A. Shutemov wrote:
> > On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
> >> The decision to allow parallel bringup of secondary CPUs checks
> >> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
> >> parallel bootup because accessing the local APIC is intercepted and raises
> >> a #VC or #VE, which cannot be handled at that point.
> >> 
> >> The check works correctly, but only for AMD encrypted guests. TDX does not
> >> set that flag.
> >> 
> >> Check for cc_vendor != CC_VENDOR_NONE instead. That might be overbroad, but
> >> definitely works for both AMD and Intel.
> >
> > It boots fine with TDX, but I think it is wrong. cc_get_vendor() will
> > report CC_VENDOR_AMD even on bare metal if SME is enabled. I don't think
> > we want it.
> 
> Right. Did not think about that.
> 
> But the same way is CC_ATTR_GUEST_MEM_ENCRYPT overbroad for AMD. Only
> SEV-ES traps RDMSR if I'm understanding that maze correctly.

Ya, regular SEV doesn't encrypt register state.
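The trade-off discussed in this thread can be summarized as a small decision sketch. The constants and platform table below are illustrative stand-ins, not the kernel's real cc_platform API.

```python
# Illustrative model of the two candidate checks gating parallel
# secondary-CPU bringup (constants made up for this sketch).
CC_VENDOR_NONE, CC_VENDOR_AMD, CC_VENDOR_INTEL = range(3)

platforms = {
    # name: (cc_vendor, CC_ATTR_GUEST_STATE_ENCRYPT set?)
    "bare metal":       (CC_VENDOR_NONE,  False),
    "bare metal + SME": (CC_VENDOR_AMD,   False),  # vendor reads AMD on the host too
    "SEV-ES guest":     (CC_VENDOR_AMD,   True),
    "TDX guest":        (CC_VENDOR_INTEL, False),  # flag unset, yet APIC access raises #VE
}

def parallel_ok_attr_check(vendor, state_encrypted):
    # Original check: correct for SEV-ES, but misses TDX,
    # which does not set the guest-state-encrypted attribute.
    return not state_encrypted

def parallel_ok_vendor_check(vendor, state_encrypted):
    # Proposed check: catches TDX as well, but is overbroad --
    # it also disables parallel bringup on bare metal with SME.
    return vendor == CC_VENDOR_NONE

for name, (vendor, enc) in platforms.items():
    print(f"{name:18} attr={parallel_ok_attr_check(vendor, enc)} "
          f"vendor={parallel_ok_vendor_check(vendor, enc)}")
```

The TDX row is the bug being fixed (attr check wrongly allows parallel bringup); the SME row is Kirill's objection (vendor check wrongly forbids it).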


From xen-devel-bounces@lists.xenproject.org Tue May 30 16:57:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 16:57:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541290.843890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42fN-0001Rb-Nq; Tue, 30 May 2023 16:57:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541290.843890; Tue, 30 May 2023 16:57:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42fN-0001RU-LB; Tue, 30 May 2023 16:57:49 +0000
Received: by outflank-mailman (input) for mailman id 541290;
 Tue, 30 May 2023 16:57:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wXld=BT=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1q42fM-0001RD-E2
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 16:57:48 +0000
Received: from mail-pf1-x433.google.com (mail-pf1-x433.google.com
 [2607:f8b0:4864:20::433])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19aed878-ff0b-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 18:57:46 +0200 (CEST)
Received: by mail-pf1-x433.google.com with SMTP id
 d2e1a72fcca58-64d2f99c8c3so3439286b3a.0
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 09:57:46 -0700 (PDT)
Received: from localhost (ec2-52-9-159-93.us-west-1.compute.amazonaws.com.
 [52.9.159.93]) by smtp.gmail.com with ESMTPSA id
 k8-20020a635a48000000b0052cbd854927sm8830281pgm.18.2023.05.30.09.57.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 30 May 2023 09:57:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19aed878-ff0b-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685465865; x=1688057865;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=iuWqQbNxc4T0f/DD5bEBy0I3LDWfGOw8QZp3us3pmgE=;
        b=A9AvHKYUdzFNAqzJpECrFSlpog12ItecR62EC7N/160mH8oO/LuQof1MztDRk5ZlGP
         EvMiKoOoJhcTjrbStdshlQZXsAFeTamcUtvVxhNzFdKHGMuD0FNnQA0cgqXo//viZVA4
         mdrGfsDayef8NCSQVHWe5KjAR15R5qCAB494JPe9Lohd37WbSOZSQcoB7UoULy9SwrWN
         GWUJUGFLpHokW1d2XTye9RkbcjONc40tSbck382uhdiXBSSfdOkogXIdtOjRu6gv8KvW
         bTv5kWAeM338pA0ggXAoo3LHh+6mGoAxZZmkzNuCcmFDRvtkmodWW7n2DRYcEcMxCO0R
         t3wQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685465865; x=1688057865;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iuWqQbNxc4T0f/DD5bEBy0I3LDWfGOw8QZp3us3pmgE=;
        b=dcMYtrWqylYxOpetn/lxRztKPicI0we+jsFlVlyuB/eXw4yDseHax/4wAXqzboe3GA
         yxodwCIl8HaX5o4DCUrbfcJifyaZNsoTLRNnQ9OaB5FVcL5NlyqWPKIisydjeH6PWWW1
         id+VVAZNuFs8cwwj15vLDSA5u5Stn0zZfIQxEQnxp/rDhl4+5+bEkn2RI7LjSJYbsKNx
         KCbXzbRX+KDSRqyxiQYkiZ6boKPV4G67pkfkSOKOmWhg3ma/H7BVgkCz3qtcifPPCgrI
         s00m0X13RRK5m0flM7yu+OjqbmmQOfu792Jg6aNOFfFyXmqRxrV/8/1ssyQp5yqLRBII
         qJ5A==
X-Gm-Message-State: AC+VfDxVr+cnZX1Pa5vEr6TRV4n+8IdhR/zb1ju/TSFBbZa0Zf7Sxwc/
	ZNQUXFlDI33zZsGss64OQoY=
X-Google-Smtp-Source: ACHHUZ4M9TQT1zLBpaeWEjnKr3UQ82+8kna3FkFH5W/yTFjYxZnzHn7aNfvFkFOw6DtihRA3RxdRBw==
X-Received: by 2002:a05:6a20:8e1d:b0:10e:96b5:45fa with SMTP id y29-20020a056a208e1d00b0010e96b545famr2942711pzj.43.1685465865016;
        Tue, 30 May 2023 09:57:45 -0700 (PDT)
Date: Tue, 30 May 2023 09:57:40 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v9 1/5] xen/riscv: add VM space layout
Message-ID: <ZHXIlG3bS437Dpow@bullseye>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
 <1621fd09987d20b3233132d422e5c9dfe300e3f7.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1621fd09987d20b3233132d422e5c9dfe300e3f7.1685027257.git.oleksii.kurochko@gmail.com>

On Thu, May 25, 2023 at 06:28:14PM +0300, Oleksii Kurochko wrote:
> Also, an explanation was added about why the top VA bits are ignored
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V9:
>  - Update comment for VM layout description.
> ---
> Changes in V8:
>  - Add "#ifdef RV_STAGE1_MODE == SATP_MODE_SV39" instead of "#ifdef SV39"
>    in the comment to VM layout description.
>  - Update the upper bound of direct map area in VM layout description.
> ---
> Changes in V7:
>  - Fix range of frametable range in RV64 layout.
>  - Add ifdef SV39 to the RV64 layout comment to make it explicit that
>    description is for SV39 mode.
>  - Add missed row in the RV64 layout table.
> ---
> Changes in V6:
>  - update comment above the RISCV-64 layout table
>  - add Slot column to the table with RISCV-64 Layout
>  - update RV-64 layout table.
> ---
> Changes in V5:
> * the patch was introduced in the current patch series.
> ---
>  xen/arch/riscv/include/asm/config.h | 36 +++++++++++++++++++++++++++++
>  1 file changed, 36 insertions(+)
> 
> diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
> index 763a922a04..9900d29dab 100644
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -4,6 +4,42 @@
>  #include <xen/const.h>
>  #include <xen/page-size.h>
>  
> +/*
> + * RISC-V64 Layout:
> + *
> +#if RV_STAGE1_MODE == SATP_MODE_SV39
> + *
> + * From the riscv-privileged doc:
> + *   When mapping between narrower and wider addresses,
> + *   RISC-V zero-extends a narrower physical address to a wider size.
> + *   The mapping between 64-bit virtual addresses and the 39-bit usable
> + *   address space of Sv39 is not based on zero-extension but instead
> + *   follows an entrenched convention that allows an OS to use one or
> + *   a few of the most-significant bits of a full-size (64-bit) virtual
> + *   address to quickly distinguish user and supervisor address regions.
> + *
> + * It means that:
> + *   top VA bits are simply ignored for the purpose of translating to PA.
> + *
> + * ============================================================================
> + *    Start addr    |   End addr        |  Size  | Slot       |area description
> + * ============================================================================
> + * FFFFFFFFC0800000 |  FFFFFFFFFFFFFFFF |1016 MB | L2 511     | Unused
> + * FFFFFFFFC0600000 |  FFFFFFFFC0800000 |  2 MB  | L2 511     | Fixmap
> + * FFFFFFFFC0200000 |  FFFFFFFFC0600000 |  4 MB  | L2 511     | FDT
> + * FFFFFFFFC0000000 |  FFFFFFFFC0200000 |  2 MB  | L2 511     | Xen
> + *                 ...                  |  1 GB  | L2 510     | Unused
> + * 0000003200000000 |  0000007F80000000 | 309 GB | L2 200-509 | Direct map
> + *                 ...                  |  1 GB  | L2 199     | Unused
> + * 0000003100000000 |  00000031C0000000 |  3 GB  | L2 196-198 | Frametable
> + *                 ...                  |  1 GB  | L2 195     | Unused
> + * 0000003080000000 |  00000030C0000000 |  1 GB  | L2 194     | VMAP
> + *                 ...                  | 194 GB | L2 0 - 193 | Unused
> + * ============================================================================
> + *
> +#endif
> + */
> +
>  #if defined(CONFIG_RISCV_64)
>  # define LONG_BYTEORDER 3
>  # define ELFSIZE 64
> -- 
> 2.40.1
> 
> 

Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>
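The "top VA bits are simply ignored for the purpose of translating to PA" point in the patch comment can be checked with a short sketch of the Sv39 convention. The addresses come from the layout table in the patch; the helper names are made up for the example.

```python
SV39_VA_BITS = 39
SV39_MASK = (1 << SV39_VA_BITS) - 1

def sv39_translation_va(va):
    """Sv39 translation only consumes the low 39 VA bits; bits 63..39
    are ignored for the purpose of forming the physical address."""
    return va & SV39_MASK

# Xen text slot from the layout table above: the 0xFFFFFFFF... prefix is
# just the conventional replication of bit 38 into the upper bits, used
# to distinguish supervisor from user regions -- not extra address space.
xen_base = 0xFFFFFFFFC0000000
print(hex(sv39_translation_va(xen_base)))  # 0x7fc0000000

# Two VAs differing only in bits 63..39 translate identically:
assert sv39_translation_va(xen_base) == sv39_translation_va(0x0000007FC0000000)
```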


From xen-devel-bounces@lists.xenproject.org Tue May 30 16:58:42 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 16:58:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541295.843900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42g4-00022S-4t; Tue, 30 May 2023 16:58:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541295.843900; Tue, 30 May 2023 16:58:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42g4-00022L-2C; Tue, 30 May 2023 16:58:32 +0000
Received: by outflank-mailman (input) for mailman id 541295;
 Tue, 30 May 2023 16:58:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wXld=BT=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1q42g2-000226-KF
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 16:58:30 +0000
Received: from mail-pf1-x42c.google.com (mail-pf1-x42c.google.com
 [2607:f8b0:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32723d64-ff0b-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 18:58:28 +0200 (CEST)
Received: by mail-pf1-x42c.google.com with SMTP id
 d2e1a72fcca58-64d293746e0so5412173b3a.2
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 09:58:28 -0700 (PDT)
Received: from localhost (ec2-52-9-159-93.us-west-1.compute.amazonaws.com.
 [52.9.159.93]) by smtp.gmail.com with ESMTPSA id
 q15-20020a62ae0f000000b006352a6d56ebsm1852253pff.119.2023.05.30.09.58.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 30 May 2023 09:58:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32723d64-ff0b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685465907; x=1688057907;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=MY8Y/d9lsGbkXeH0mbAldIGXI9dMfLGW5DQ5y2Q/oKA=;
        b=oYh++T9MEfPGcesW0TYAX3AjygxWHfXNqu6JsTMK20RJMuSC4xL0vV1KrX1t2GUCcD
         Y8J4IF/RzIQlQrWykzki74dVEveC6C99cjhu8Xo88nuH/qAjIKO0MBkA4OIL+4XFEqut
         nVK2W+THAUj8c6PB2XtzmJhUQG+GEn4TNlZolJgNmAht3jeJoyV7HLFUt6/vwQZw/5vj
         AArrC0g8XhAlGn7Cj/g7Ki0Y13SFvcTNQRLsqLk7Zl3ADmr9NYXiDfV0Sr+Ksgpr8qaH
         fLyLadwadLC+qr3Ix9x7bm2HtMDLzkLXPQTsUB9M34CV9+3i6MVlu78hFYqLWPU/W8WF
         jVEQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685465907; x=1688057907;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=MY8Y/d9lsGbkXeH0mbAldIGXI9dMfLGW5DQ5y2Q/oKA=;
        b=QG1DLljZoF10Jl2GRcsCFN/l2EQJ5mzVTF/yC43CXHiZP8P/qYUNFSPSAXSMUMlnTv
         hC6CCrEToH8N6aLLxkbLc5aWXJKwDxbIeVxjDc3qA1+uhJm5809oGmD6RUm/EKO7eOVx
         XXNnCbtNrvwA3nLYI8DjRn9CUsVznQXn01h7C1fDlEPLMzuDHuo7m+EYRRra+9+ajpcO
         tr90UJ5jiEFaNXYSf5nObWHaLQq7VsMk1RaN4rPRrgJYN94H7gLinbzpQXSEtRb0UowJ
         zVkplGWgxGEBz9c6h/IbSRN5jkMYLeVL+XvH7U2aK0ahIfSG6u1M5SNfECu4HQeyl7Co
         R7jw==
X-Gm-Message-State: AC+VfDzF9X4DdNMzBxgP8FFTOwCDV4EquasGMaMKr9uRzVeaMW12Dmrj
	B5r/gQei7kwnSlVB/n+wGEY=
X-Google-Smtp-Source: ACHHUZ6Z+arnZdewCNuJNBQwtNC5tAXevRO31iw+YolgNPvkEBcu1HefFHsW0StH7Z1dGupLU9DEog==
X-Received: by 2002:a05:6a00:3901:b0:64a:f730:154b with SMTP id fh1-20020a056a00390100b0064af730154bmr3849151pfb.5.1685465906813;
        Tue, 30 May 2023 09:58:26 -0700 (PDT)
Date: Tue, 30 May 2023 09:58:26 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v9 4/5] xen/riscv: setup initial pagetables
Message-ID: <ZHXIwmivNrgKiGeH@bullseye>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
 <6ea28216df1c7f29ebd88e20adb05cdf75af20fe.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <6ea28216df1c7f29ebd88e20adb05cdf75af20fe.1685027257.git.oleksii.kurochko@gmail.com>

On Thu, May 25, 2023 at 06:28:17PM +0300, Oleksii Kurochko wrote:
> The patch does two things:
> 1. Set up initial pagetables.
> 2. Enable the MMU, which ends up with code in
>    cont_after_mmu_is_enabled()
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V9:
>  - Nothing changed. Only rebase
> ---
> Changes in V8:
>  - Nothing changed. Only rebase
> ---
> Changes in V7:
>  - Nothing changed. Only rebase
> ---
> Changes in V6:
>  - Nothing changed. Only rebase
> ---
> Changes in V5:
>  - Nothing changed. Only rebase
> ---
> Changes in V4:
>  - Nothing changed. Only rebase
> ---
> Changes in V3:
>  - update the commit message to mention that the MMU is also enabled here
>  - remove early_printk("All set up\n") as it was moved to
>    cont_after_mmu_is_enabled() function after MMU is enabled.
> ---
> Changes in V2:
>  * Update the commit message
> ---
>  xen/arch/riscv/setup.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index 315804aa87..cf5dc5824e 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -21,7 +21,10 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>  {
>      early_printk("Hello from C env\n");
>  
> -    early_printk("All set up\n");
> +    setup_initial_pagetables();
> +
> +    enable_mmu();
> +
>      for ( ;; )
>          asm volatile ("wfi");
>  
> -- 
> 2.40.1
> 
> 

Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>


From xen-devel-bounces@lists.xenproject.org Tue May 30 16:59:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 16:59:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541299.843909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42gl-0002ag-DW; Tue, 30 May 2023 16:59:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541299.843909; Tue, 30 May 2023 16:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42gl-0002aZ-B2; Tue, 30 May 2023 16:59:15 +0000
Received: by outflank-mailman (input) for mailman id 541299;
 Tue, 30 May 2023 16:59:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wXld=BT=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1q42gk-0001RD-CG
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 16:59:14 +0000
Received: from mail-pf1-x432.google.com (mail-pf1-x432.google.com
 [2607:f8b0:4864:20::432])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d996c34-ff0b-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 18:59:13 +0200 (CEST)
Received: by mail-pf1-x432.google.com with SMTP id
 d2e1a72fcca58-64d2a613ec4so3447367b3a.1
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 09:59:13 -0700 (PDT)
Received: from localhost (ec2-52-9-159-93.us-west-1.compute.amazonaws.com.
 [52.9.159.93]) by smtp.gmail.com with ESMTPSA id
 g16-20020a62e310000000b0064d74808738sm1802929pfh.214.2023.05.30.09.59.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 30 May 2023 09:59:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d996c34-ff0b-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685465952; x=1688057952;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=LxaOtFfyQxSP90cwtzAnyMXl6U0rfymyfizE43djznw=;
        b=V2yt9sTyIPblISEiz+C8LF8cLyHIhxJK7OmFpU8Ep64dQH4d1+wJQW4/JUpNCpVUBG
         Uucq92Zm6BiS4tHpcFgts7ZJKRiKv+bww7NUaFKfA+b43HnXuW3+Yf/Cp1dS2oBJODxA
         FtQO+yVS45JdW2EU55FmKqLXoNhX+SoRqS25PVI0PngiG4rsYa3YZJPmYh5RZ63Pioq4
         0Irln+0nAX0hI6+wMtVy6YqDazMK4KfgGf9InaU1HtjU+aHpynx+5Felsc37BBFyI56U
         FpL8x1zRno5A7wK138xtdz4/GM+3G6rL6wz7yVM5FeOmO4sNNkLXa/iR3koTs6C+t8ne
         fkJw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685465952; x=1688057952;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LxaOtFfyQxSP90cwtzAnyMXl6U0rfymyfizE43djznw=;
        b=ksOZ5j/HGQ/vo/+2jPHeinrsLjcbc8xc/PVWacJg1ym4SmpMJocQkPbaLrm0VJ0HMB
         PosBX+odbRggmGOjbZt2NZfZ/mR9gDNLsnW3IOkrS9mxmZHsbBpsi0nKPGjFIIJaZaLt
         SICI70mD7ELgMWaxfP0qs6WYi9fUPZlwToFWbH2/Y0qw2BwCLInwq18yox48HlcQk3D/
         hZBFoXzYhXxg/p1OBbIJ/qruuj9Nrpa6NXMrtyGDfTh2IWVR6mlU5yOZa6baA8Gd8JmT
         3Gc5mO0QHxJRMO44xsK4G/zSHVQTR0sY6wKFsx/14AU8MTUhvUAueCdJXQP08D3YrXyR
         dLlg==
X-Gm-Message-State: AC+VfDyXYGnRFhBf4MZLMZV7OAQIV3xLouf9Lb478LfWKb5GkHPo+VL9
	vrXXuSUGyYJ09vHa4RBTkX9Bt13ODUbJ2gem
X-Google-Smtp-Source: ACHHUZ6YPzRF8ZyrWGjM4QgmvXHdoHIA3OtNf8Rdiz6lNZpZSNfAMi33LK5XppLPYrOBdhHMX5bq/A==
X-Received: by 2002:a05:6a20:2447:b0:111:ee3b:59b1 with SMTP id t7-20020a056a20244700b00111ee3b59b1mr3878681pzc.2.1685465952344;
        Tue, 30 May 2023 09:59:12 -0700 (PDT)
Date: Tue, 30 May 2023 09:59:15 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v9 3/5] xen/riscv: align __bss_start
Message-ID: <ZHXI8/86ibhq8FRl@bullseye>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
 <1158df1cde660e817c4f6d6e0a46ef22bd92dc04.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1158df1cde660e817c4f6d6e0a46ef22bd92dc04.1685027257.git.oleksii.kurochko@gmail.com>

On Thu, May 25, 2023 at 06:28:16PM +0300, Oleksii Kurochko wrote:
> The bss clear cycle requires proper alignment of __bss_start.
> 
> ALIGN(PAGE_SIZE) before "*(.bss.page_aligned)" in xen.lds.S
> was removed, as any contribution to "*(.bss.page_aligned)" has to
> specify proper alignment itself.
> 
> Fixes: cfa0409f7cbb ("xen/riscv: initialize .bss section")
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> ---
> Changes in V9:
>  * Nothing changed. Only rebase.
> ---
> Changes in V8:
>  * Remove ". = ALIGN(PAGE_SIZE);" before "*(.bss.page_aligned)" in
>    the xen.lds.S file, as any contribution to .bss.page_aligned has to
>    specify proper alignment itself.
>  * Add "Fixes: cfa0409f7cbb ("xen/riscv: initialize .bss section")" to
>    the commit message
>  * Add "Reviewed-by: Jan Beulich <jbeulich@suse.com>" to the commit message
> ---
> Changes in V7:
>  * the patch was introduced in the current patch series.
> ---
>  xen/arch/riscv/xen.lds.S | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
> index fe475d096d..df71d31e17 100644
> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -137,9 +137,9 @@ SECTIONS
>      __init_end = .;
>  
>      .bss : {                     /* BSS */
> +        . = ALIGN(POINTER_ALIGN);
>          __bss_start = .;
>          *(.bss.stack_aligned)
> -        . = ALIGN(PAGE_SIZE);
>          *(.bss.page_aligned)
>          . = ALIGN(PAGE_SIZE);
>          __per_cpu_start = .;
> -- 
> 2.40.1
> 
> 

Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>


From xen-devel-bounces@lists.xenproject.org Tue May 30 17:02:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 17:02:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541303.843920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42jp-00046R-Sg; Tue, 30 May 2023 17:02:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541303.843920; Tue, 30 May 2023 17:02:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42jp-00046K-Q2; Tue, 30 May 2023 17:02:25 +0000
Received: by outflank-mailman (input) for mailman id 541303;
 Tue, 30 May 2023 17:02:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m7B1=BT=shutemov.name=kirill@srs-se1.protection.inumbo.net>)
 id 1q42jo-00046B-DJ
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 17:02:24 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bc2e756d-ff0b-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 19:02:20 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.nyi.internal (Postfix) with ESMTP id 0EA885C01FE;
 Tue, 30 May 2023 13:02:18 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 30 May 2023 13:02:18 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 13:02:13 -0400 (EDT)
Received: by box.shutemov.name (Postfix, from userid 1000)
 id 228F410BD95; Tue, 30 May 2023 20:02:10 +0300 (+03)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc2e756d-ff0b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov.name;
	 h=cc:cc:content-type:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1685466137; x=
	1685552537; bh=LP3nfKYA60CmRb8usg09zLfC4LK0DKsE4DbPaZWKkC8=; b=s
	zV6WskACfFVEqKzRzZ6P7qAJe6qrFMSVHpBH+79Dp07d15X23DMKWcX0I0KFcXyT
	tzvjPL6PVqxKmWl9OiZAluQTsrsRgXXSkeoGZmBjg2AqbpZoCv4E9eLQoG+5nJyS
	HkVjFdxFuVE08S4971YSmjn/wkp41pa+PrkMvbhmYb5DpFzY5XaDud34Hh+Zhoz6
	GurqkOPG4rEcEB8AJi91jlxm1UuPkwewI+Rc+Y0yep2GSLgqXZvuHbrcM3AwGmnK
	nhvhwVE6CvdzeY2SmUcbH7u12o9rJdx3GhIBQURci3ZBnL3EtUUXUYCE5IO0j3Oy
	+ve3Is8IMm7NEhKD6NgDA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1685466137; x=1685552537; bh=LP3nfKYA60CmR
	b8usg09zLfC4LK0DKsE4DbPaZWKkC8=; b=RrRPOp3HKdgjex3qNFuRyIK+z7hQt
	HJxtGK/mchY8kBsvhzWBNaZITNLfVOjgxQFvAOTOf7PWbRIH/fhJZ3M68cpZl3Kn
	HDUVS/n7Z+vZxwnvikIMopbIJPPmy57YeU8FBhAPUN2sSq8d9kvTt2KyQQkK8gs4
	No0ajlXD9loe4/8muuMVBSwc6QJe1Lj9066iqw3R3OBSLwPg9Nby7zf3HGR8BFrS
	dYSVo6XMqyuUOZCcoeZ3/3G5JLGm2OhrRi2RCgQxZ5fd/w1BJX5y/1Ihd/UcUUuz
	E57DkJY8AxBbAzIrP2YlwBcSBfaZjdfjJciTnAVEgX/CKECNh69XtWHjA==
X-ME-Sender: <xms:FSx2ZM4Mq2KAXSzW7xxZWvfpDiasjaZIBeacE7HlH1FClmBTxkzytg>
    <xme:FSx2ZN6jM2ofnqd2UQ2vPVOctwBgLs5l_3lkVcIXxAiK_wk7w0xis6VpeGHxeSfMQ
    MfGz3AihHCrPbPLMOY>
X-ME-Received: <xmr:FSx2ZLeRPinJ2C5f9UrPoRvDeLWSbWpsQP0zjqFZOi8_cOsPLLOLFBLd6J4hz_zV5iLd1Q>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekjedguddthecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehttddttddttddvnecuhfhrohhmpedfmfhi
    rhhilhhlucetrdcuufhhuhhtvghmohhvfdcuoehkihhrihhllhesshhhuhhtvghmohhvrd
    hnrghmvgeqnecuggftrfgrthhtvghrnhephfeigefhtdefhedtfedthefghedutddvueeh
    tedttdehjeeukeejgeeuiedvkedtnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrg
    hmpehmrghilhhfrhhomhepkhhirhhilhhlsehshhhuthgvmhhovhdrnhgrmhgv
X-ME-Proxy: <xmx:Fix2ZBLhJRJiOQvWIoDyOqk9SL9n8lDqxeC32Ul4urdF3eutnKxrvw>
    <xmx:Fix2ZALFMhddyYQL6OktSRbWxIz8cLG6QYGTV0okFsA0NNYe2oPMJA>
    <xmx:Fix2ZCxKXFgl20Rki5Rb1K8_TT3BALB_9tnh9UHkN9QeaU67GWhF0A>
    <xmx:GSx2ZHr72m1Fde2ANoy9Cc4IuCDUtJgIl9vC2QblR6MMwhrZPaAhaw>
Feedback-ID: ie3994620:Fastmail
Date: Tue, 30 May 2023 20:02:10 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Thomas Gleixner <tglx@linutronix.de>,
	Tom Lendacky <thomas.lendacky@amd.com>
Cc: LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,	Paul McKenney <paulmck@kernel.org>,
	Sean Christopherson <seanjc@google.com>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,	Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,	Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
Message-ID: <20230530170210.ujkv737uyjfvdoay@box.shutemov.name>
References: <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name>
 <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx>
 <87cz2ija1e.ffs@tglx>
 <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name>
 <87wn0pizbl.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87wn0pizbl.ffs@tglx>

On Tue, May 30, 2023 at 06:00:46PM +0200, Thomas Gleixner wrote:
> On Tue, May 30 2023 at 15:29, Kirill A. Shutemov wrote:
> > On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
> >> The decision to allow parallel bringup of secondary CPUs checks
> >> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
> >> parallel bootup because accessing the local APIC is intercepted and raises
> >> a #VC or #VE, which cannot be handled at that point.
> >> 
> >> The check works correctly, but only for AMD encrypted guests. TDX does not
> >> set that flag.
> >> 
> >> Check for cc_vendor != CC_VENDOR_NONE instead. That might be overbroad, but
> >> definitely works for both AMD and Intel.
> >
> > It boots fine with TDX, but I think it is wrong. cc_get_vendor() will
> > report CC_VENDOR_AMD even on bare metal if SME is enabled. I don't think
> > we want it.
> 
> Right. Did not think about that.
> 
> But in the same way CC_ATTR_GUEST_MEM_ENCRYPT is overbroad for AMD. Only
> SEV-ES traps RDMSR, if I'm understanding that maze correctly.

I don't know the differences between the SEV flavours that well.

I see that on SEV-SNP, access to the x2APIC MSR range (MSR 0x800-0x8FF)
is intercepted regardless of whether the MSR_AMD64_SNP_ALT_INJ feature is
present. But I'm not sure what the state is on SEV or SEV-ES.

Tom?

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Tue May 30 17:02:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 17:02:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541305.843930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42kJ-0004Vt-5i; Tue, 30 May 2023 17:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541305.843930; Tue, 30 May 2023 17:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42kJ-0004Vm-2P; Tue, 30 May 2023 17:02:55 +0000
Received: by outflank-mailman (input) for mailman id 541305;
 Tue, 30 May 2023 17:02:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wXld=BT=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1q42kI-00046B-2j
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 17:02:54 +0000
Received: from mail-pf1-x431.google.com (mail-pf1-x431.google.com
 [2607:f8b0:4864:20::431])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cf48c86f-ff0b-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 19:02:51 +0200 (CEST)
Received: by mail-pf1-x431.google.com with SMTP id
 d2e1a72fcca58-64d3bc502ddso5495911b3a.0
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 10:02:51 -0700 (PDT)
Received: from localhost (ec2-52-9-159-93.us-west-1.compute.amazonaws.com.
 [52.9.159.93]) by smtp.gmail.com with ESMTPSA id
 j15-20020a62b60f000000b00642f1e03dc1sm1880080pff.174.2023.05.30.10.02.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 30 May 2023 10:02:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf48c86f-ff0b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685466170; x=1688058170;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=jsPWlGB9G80j0bYRIRRLuVHXHYuIrm4pB1sYRL6DYoU=;
        b=ldBodAAk0OoE5Ob7R1ti3RsZwINj3Xm4gVOcx8UhFvNrNEO1rX//su5lpECG2cJp+b
         crF1AFhOHey9x62cgW4abi9v1l7AjZ1P0LmDQS8fSD1yeAEYJOQoCmKC/vRrFZmnIExt
         UE2Dsxz6Z/ylfEkdp1ekaPldk7xfdW0jTYhGVnNzncXFnAfILRVfPXirxzKKBWyvBd9i
         26BeGoIesO0UyXuWXQL5eHsNRgvQ8zYywLrZV+JXpdXREh/cvg/6pr7aqhygPBNbVLZr
         kd4ARAp3Ndj0QDcZqBFY9WG3Lu1TcS0HSqJ9XTL+mIHLVvM0KseKLRLpTi/0LGKkLhpV
         gE8A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685466170; x=1688058170;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jsPWlGB9G80j0bYRIRRLuVHXHYuIrm4pB1sYRL6DYoU=;
        b=IWUTFK5Mea5Dfg5v+RtVwKQfEH1WG/fHxh3CjIk5TS5cELSqACLqofnzPjTWq8xQ8a
         SxAb/OfJQTJehV3cl/zW5rBSbYYYa9um6+LHffVBjFulWPM+ZrKRzYgwH2cRIwTBC8Dk
         8ZuZlbEyJBtJYcQZELYxiczsCxmQHr05jUnki0t//sI7oumm8ABSlTz4UmOHBLIyCSXc
         BcjVgoYsXh3vi0TopSJ+2iJBYLdUK4zf8xA7iRR9znngnZJFujedI4ZtoewGW3i+vmIj
         Yg0Kfb438yBkup8A1L5ZG5OJolmgO3xeb57OsWcwirt/QlionME5tgt8rynOY842XFRH
         IgZQ==
X-Gm-Message-State: AC+VfDwD5gbzR8lc497uEYk1jBsQdiZbDwmET0cozJB8Qc1SjXaaC2+d
	T9ydptOgP7rdY7FTsQFlv1Q=
X-Google-Smtp-Source: ACHHUZ5wNdzF5QClQvmj509dl/s4QiNgY8fhvMBIqporww4PCyoPV++qfLuMSKsWc1YdEPswMbf55A==
X-Received: by 2002:a05:6a00:2296:b0:648:c1be:496 with SMTP id f22-20020a056a00229600b00648c1be0496mr3374958pfe.22.1685466169511;
        Tue, 30 May 2023 10:02:49 -0700 (PDT)
Date: Tue, 30 May 2023 10:03:10 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v9 2/5] xen/riscv: introduce setup_initial_pages
Message-ID: <ZHXJ3lfZ7W8Jc5xm@bullseye>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
 <8f8fb8849830ad2b249b9af903fe1eca70f7578a.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8f8fb8849830ad2b249b9af903fe1eca70f7578a.1685027257.git.oleksii.kurochko@gmail.com>

On Thu, May 25, 2023 at 06:28:15PM +0300, Oleksii Kurochko wrote:
> The idea was taken from xvisor but the following changes
> were done:
> * Use only a minimal part of the code enough to enable MMU
> * rename {_}setup_initial_pagetables functions
> * add an argument for setup_initial_mapping to have
>   an opportunity to set PTE flags.
> * update setup_initial_pagetables function to map sections
>   with correct PTE flags.
> * Rewrite enable_mmu() to C.
> * map the linker address range to the load address range without
>   a 1:1 mapping. It will be 1:1 only in the case when
>   load_start_addr is equal to linker_start_addr.
> * add safety checks such as:
>   * Xen size is less than page size
>   * linker addresses range doesn't overlap load addresses
>     range
> * Rework macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}
> * change PTE_LEAF_DEFAULT to RW instead of RWX.
> * Remove phys_offset as it is not used now
> * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);
>   in setup_initial_mapping() as they should be already aligned.
>   Make a check that {map_pa}_start are aligned.
> * Remove clear_pagetables() as initial pagetables will be
>   zeroed during bss initialization
> * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
>   as there is no such section in xen.lds.S
> * Update the argument of pte_is_valid() to "const pte_t *p"
> * Add check that Xen's load address is aligned at 4k boundary
> * Refactor setup_initial_pagetables() so it maps the linker
>   address range to the load address range, and afterwards sets
>   the needed permissions for specific sections (such as .text,
>   .rodata, etc); otherwise RW permissions are set by default.
> * Add function to check that requested SATP_MODE is supported
> 
> Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> ---
> Changes in V9:
>   - Add "Reviewed-by: Jan Beulich <jbeulich@suse.com>" to commit message.
>   - Update macros VPN_MASK.
>   - Remove double blank lines.
> ---
> Changes in V8:
> 	- Add parentheses for lvl in pt_index() macros.
> 	- introduce macros paddr_to_pfn() and pfn_to_paddr() and use them inside
> 	  paddr_to_pte()/pte_to_paddr().
> 	- Remove "__" in sfence_vma() and add blanks inside the parentheses of
> 	  asm volatile.
> 	- Parenthesize the two & against the || at the start of setup_initial_mapping()
> 	  function.
> 	- Code style fixes.
> ---
> Changes in V7:
>  	- define CONFIG_PAGING_LEVELS=2 for SATP_MODE_SV32.
>  	- update the switch_stack_and_jump() macro: add constraint 'X' for fn,
>     memory clobber, and wrap into do {} while ( false ).
>  	- add noreturn to definition of enable_mmu().
>  	- update pt_index() to "(pt_linear_offset(lvl, (va)) & VPN_MASK)".
>  	- expand macros pte_to_addr()/addr_to_pte() in paddr_to_pte() and
>     pte_to_paddr() functions and after drop them.
>  	- remove inclusion of <asm/config.h>.
>  	- update commit message around definition of PGTBL_INITIAL_COUNT.
>  	- remove PGTBL_ENTRY_AMOUNT and use PAGETABLE_ENTRIES instead.
>  	- code style fixes
>  	- remove permission argument of setup_initial_mapping() function
>  	- remove calc_pgtbl_lvls_num() as it's not needed anymore after definition
>     of CONFIG_PAGING_LEVELS.
>  	- introduce sfence_vma().
>  	- remove satp_mode argument from check_pgtbl_mode_support() and use
>     RV_STAGE1_MODE directly instead.
>  	- change .align to .p2align.
>  	- drop inclusion of <asm/asm-offsets.h> from head.S. This change isn't
>     necessary for the current patch series.
> ---
> Changes in V6:
>  	- move PAGE_SHIFT, PADDR_BITS to the top of page-bits.h
>  	- cast argument x of pte_to_addr() macros to paddr_t to avoid risk of overflow for RV32
>  	- update type of num_levels from 'unsigned long' to 'unsigned int'
>  	- define PGTBL_INITIAL_COUNT as ((CONFIG_PAGING_LEVELS - 1) + 1)
>  	- update type of permission arguments. Changed it from 'unsigned long' to 'unsigned int'
>  	- fix code style
>  	- switch 'while' loop to 'for' loop
>  	- undef HANDLE_PGTBL
>  	- clean root page table after MMU is disabled in check_pgtbl_mode_support() function
>  	- align __bss_start properly
>  	- remove unnecessary const for paddr_to_pte, pte_to_paddr, pte_is_valid functions
>  	- add switch_stack_and_jump macros and use it inside enable_mmu() before jump to
>  	  cont_after_mmu_is_enabled() function
> ---
> Changes in V5:
> 	* Indent fields of pte_t struct
> 	* Rename addr_to_pte() and ppn_to_paddr() to match their content
> ---
> Changes in V4:
>   * use GB() macros instead of defining SZ_1G
>   * hardcode XEN_VIRT_START and add comment (ADDRESS_SPACE_END + 1 - GB(1))
>   * remove unnecessary 'asm' word at the end of #error
>   * encapsulate pte_t definition in a struct
>   * rename addr_to_ppn() to ppn_to_paddr().
>   * change type of paddr argument from const unsigned long to paddr_t
>   * pte_to_paddr() update prototype.
>   * calculate size of Xen binary based on an amount of page tables
>   * use unsigned int instead of uint32_t, as its use isn't warranted.
>   * remove extern of bss_{start,end} as they aren't used in mm.c anymore
>   * fix code style
>   * add argument for HANDLE_PGTBL macros instead of curr_lvl_num variable
>   * make enable_mmu() noinline to prevent problems under link-time
>     optimization, because of the nature of enable_mmu()
>   * add function to check that SATP_MODE is supported.
>   * update the commit message
>   * update setup_initial_pagetables to set correct PTE flags in one pass
>     instead of calling setup_pte_permissions after setup_initial_pagetables()
>     as setup_initial_pagetables() isn't used to change permission flags.
> ---
> Changes in V3:
>  - update definition of pte_t structure to have a proper size of pte_t
>    in case of RV32.
>  - update asm/mm.h with new functions and remove unnecessary 'extern'.
>  - remove LEVEL_* macros as only XEN_PT_LEVEL_* are enough.
>  - update paddr_to_pte() to receive permissions as an argument.
>  - add check that map_start & pa_start is properly aligned.
>  - move  defines PAGETABLE_ORDER, PAGETABLE_ENTRIES, PTE_PPN_SHIFT to
>    <asm/page-bits.h>
>  - Rename PTE_SHIFT to PTE_PPN_SHIFT
>  - refactor setup_initial_pagetables: map all LINK addresses to LOAD addresses
>    and after setup PTEs permission for sections; update check that linker
>    and load addresses don't overlap.
>  - refactor setup_initial_mapping: allocate pagetable 'dynamically' if it is
>    necessary.
>  - rewrite enable_mmu in C; add the check that map_start and pa_start are
>    aligned on 4k boundary.
>  - update the comment for the setup_initial_pagetables() function
>  - Add RV_STAGE1_MODE to support different MMU modes
>  - set XEN_VIRT_START very high to not overlap with load address range
>  - align bss section
> ---
> Changes in V2:
>  * update the commit message:
>  * Remove {ZEROETH,FIRST,...}_{SHIFT,MASK,SIZE,...} and introduce
>    XEN_PT_LEVEL_*() and LEVEL_* instead
>  * Rework pt_linear_offset() and pt_index() based on XEN_PT_LEVEL_*()
>  * Remove clear_pagetables() functions as pagetables were zeroed during
>    .bss initialization
>  * Rename _setup_initial_pagetables() to setup_initial_mapping()
>  * Make PTE_DEFAULT equal to RX.
>  * Update prototype of setup_initial_mapping(..., bool writable) -> 
>    setup_initial_mapping(..., UL flags)  
>  * Update calls of setup_initial_mapping according to new prototype
>  * Remove unnecessary call of:
>    _setup_initial_pagetables(..., load_addr_start, load_addr_end, load_addr_start, ...)
>  * Define index* in the loop of setup_initial_mapping
>  * Remove attribute "__attribute__((section(".entry")))" for setup_initial_pagetables()
>    as we don't have such section
>  * make arguments of paddr_to_pte() and pte_is_valid() as const.
>  * make xen_second_pagetable static.
>  * use <xen/kernel.h> instead of declaring extern unsigned long _stext, _etext, _srodata, _erodata
>  * update  'extern unsigned long __init_begin' to 'extern unsigned long __init_begin[]'
>  * use aligned() instead of "__attribute__((__aligned__(PAGE_SIZE)))"
>  * set __section(".bss.page_aligned") for page tables arrays
>  * fix indentation
>  * Change '__attribute__((section(".entry")))' to '__init'
>  * Remove phys_offset as it isn't used now.
>  * Remove the alignment {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0); in
>    setup_initial_mapping() as they should already be aligned.
>  * Remove clear_pagetables() as initial pagetables will be
>    zeroed during bss initialization
>  * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
>    as there is no such section in xen.lds.S
>  * Update the argument of pte_is_valid() to "const pte_t *p"
> ---
>  xen/arch/riscv/Makefile                |   1 +
>  xen/arch/riscv/include/asm/config.h    |  14 +-
>  xen/arch/riscv/include/asm/current.h   |  11 +
>  xen/arch/riscv/include/asm/mm.h        |  14 ++
>  xen/arch/riscv/include/asm/page-bits.h |  10 +
>  xen/arch/riscv/include/asm/page.h      |  61 ++++++
>  xen/arch/riscv/include/asm/processor.h |   5 +
>  xen/arch/riscv/mm.c                    | 276 +++++++++++++++++++++++++
>  xen/arch/riscv/setup.c                 |  11 +
>  xen/arch/riscv/xen.lds.S               |   3 +
>  10 files changed, 405 insertions(+), 1 deletion(-)
>  create mode 100644 xen/arch/riscv/include/asm/current.h
>  create mode 100644 xen/arch/riscv/include/asm/mm.h
>  create mode 100644 xen/arch/riscv/include/asm/page.h
>  create mode 100644 xen/arch/riscv/mm.c
> 
> diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
> index 443f6bf15f..956ceb02df 100644
> --- a/xen/arch/riscv/Makefile
> +++ b/xen/arch/riscv/Makefile
> @@ -1,5 +1,6 @@
>  obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
>  obj-y += entry.o
> +obj-y += mm.o
>  obj-$(CONFIG_RISCV_64) += riscv64/
>  obj-y += sbi.o
>  obj-y += setup.o
> diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
> index 9900d29dab..38862df0b8 100644
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -75,12 +75,24 @@
>    name:
>  #endif
>  
> -#define XEN_VIRT_START  _AT(UL, 0x80200000)
> +#ifdef CONFIG_RISCV_64
> +#define XEN_VIRT_START 0xFFFFFFFFC0000000 /* (_AC(-1, UL) + 1 - GB(1)) */
> +#else
> +#error "RV32 isn't supported"
> +#endif
>  
>  #define SMP_CACHE_BYTES (1 << 6)
>  
>  #define STACK_SIZE PAGE_SIZE
>  
> +#ifdef CONFIG_RISCV_64
> +#define CONFIG_PAGING_LEVELS 3
> +#define RV_STAGE1_MODE SATP_MODE_SV39
> +#else
> +#define CONFIG_PAGING_LEVELS 2
> +#define RV_STAGE1_MODE SATP_MODE_SV32
> +#endif
> +
>  #endif /* __RISCV_CONFIG_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/arch/riscv/include/asm/current.h b/xen/arch/riscv/include/asm/current.h
> new file mode 100644
> index 0000000000..d87e6717e0
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/current.h
> @@ -0,0 +1,11 @@
> +#ifndef __ASM_CURRENT_H
> +#define __ASM_CURRENT_H
> +
> +#define switch_stack_and_jump(stack, fn) do {               \
> +    asm volatile (                                          \
> +            "mv sp, %0\n"                                   \
> +            "j " #fn :: "r" (stack), "X" (fn) : "memory" ); \
> +    unreachable();                                          \
> +} while ( false )
> +
> +#endif /* __ASM_CURRENT_H */
> diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
> new file mode 100644
> index 0000000000..64293eacee
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/mm.h
> @@ -0,0 +1,14 @@
> +#ifndef _ASM_RISCV_MM_H
> +#define _ASM_RISCV_MM_H
> +
> +#include <asm/page-bits.h>
> +
> +#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
> +#define paddr_to_pfn(pa)  ((unsigned long)((pa) >> PAGE_SHIFT))
> +
> +void setup_initial_pagetables(void);
> +
> +void enable_mmu(void);
> +void cont_after_mmu_is_enabled(void);
> +
> +#endif /* _ASM_RISCV_MM_H */
> diff --git a/xen/arch/riscv/include/asm/page-bits.h b/xen/arch/riscv/include/asm/page-bits.h
> index 1801820294..4a3e33589a 100644
> --- a/xen/arch/riscv/include/asm/page-bits.h
> +++ b/xen/arch/riscv/include/asm/page-bits.h
> @@ -4,4 +4,14 @@
>  #define PAGE_SHIFT              12 /* 4 KiB Pages */
>  #define PADDR_BITS              56 /* 44-bit PPN */
>  
> +#ifdef CONFIG_RISCV_64
> +#define PAGETABLE_ORDER         (9)
> +#else /* CONFIG_RISCV_32 */
> +#define PAGETABLE_ORDER         (10)
> +#endif
> +
> +#define PAGETABLE_ENTRIES       (1 << PAGETABLE_ORDER)
> +
> +#define PTE_PPN_SHIFT           10
> +
>  #endif /* __RISCV_PAGE_BITS_H__ */
> diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
> new file mode 100644
> index 0000000000..a7e2eee964
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/page.h
> @@ -0,0 +1,61 @@
> +#ifndef _ASM_RISCV_PAGE_H
> +#define _ASM_RISCV_PAGE_H
> +
> +#include <xen/const.h>
> +#include <xen/types.h>
> +
> +#include <asm/mm.h>
> +#include <asm/page-bits.h>
> +
> +#define VPN_MASK                    (PAGETABLE_ENTRIES - 1UL)
> +
> +#define XEN_PT_LEVEL_ORDER(lvl)     ((lvl) * PAGETABLE_ORDER)
> +#define XEN_PT_LEVEL_SHIFT(lvl)     (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)
> +#define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
> +#define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
> +#define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define PTE_VALID                   BIT(0, UL)
> +#define PTE_READABLE                BIT(1, UL)
> +#define PTE_WRITABLE                BIT(2, UL)
> +#define PTE_EXECUTABLE              BIT(3, UL)
> +#define PTE_USER                    BIT(4, UL)
> +#define PTE_GLOBAL                  BIT(5, UL)
> +#define PTE_ACCESSED                BIT(6, UL)
> +#define PTE_DIRTY                   BIT(7, UL)
> +#define PTE_RSW                     (BIT(8, UL) | BIT(9, UL))
> +
> +#define PTE_LEAF_DEFAULT            (PTE_VALID | PTE_READABLE | PTE_WRITABLE)
> +#define PTE_TABLE                   (PTE_VALID)
> +
> +/* Calculate the offsets into the pagetables for a given VA */
> +#define pt_linear_offset(lvl, va)   ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
> +
> +#define pt_index(lvl, va) (pt_linear_offset((lvl), (va)) & VPN_MASK)
> +
> +/* Page Table entry */
> +typedef struct {
> +#ifdef CONFIG_RISCV_64
> +    uint64_t pte;
> +#else
> +    uint32_t pte;
> +#endif
> +} pte_t;
> +
> +static inline pte_t paddr_to_pte(paddr_t paddr,
> +                                 unsigned int permissions)
> +{
> +    return (pte_t) { .pte = (paddr_to_pfn(paddr) << PTE_PPN_SHIFT) | permissions };
> +}
> +
> +static inline paddr_t pte_to_paddr(pte_t pte)
> +{
> +    return pfn_to_paddr(pte.pte >> PTE_PPN_SHIFT);
> +}
> +
> +static inline bool pte_is_valid(pte_t p)
> +{
> +    return p.pte & PTE_VALID;
> +}
> +
> +#endif /* _ASM_RISCV_PAGE_H */
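
The level macros and PTE helpers above can be sanity-checked host-side; the sketch below mirrors them with the Sv39 constants assumed by this series (PAGE_SHIFT = 12, PAGETABLE_ORDER = 9, PTE_PPN_SHIFT = 10). It is an illustration of the encoding only, not the Xen code itself:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT        12
#define PAGETABLE_ORDER   9                      /* RV64: 512 entries/table */
#define PAGETABLE_ENTRIES (1UL << PAGETABLE_ORDER)
#define VPN_MASK          (PAGETABLE_ENTRIES - 1UL)
#define PTE_PPN_SHIFT     10

#define XEN_PT_LEVEL_ORDER(lvl) ((lvl) * PAGETABLE_ORDER)
#define XEN_PT_LEVEL_SHIFT(lvl) (XEN_PT_LEVEL_ORDER(lvl) + PAGE_SHIFT)

#define pt_linear_offset(lvl, va) ((va) >> XEN_PT_LEVEL_SHIFT(lvl))
#define pt_index(lvl, va) (pt_linear_offset((lvl), (va)) & VPN_MASK)

/* Mirror paddr_to_pte()/pte_to_paddr(): the PPN sits above the 10 flag bits. */
static inline uint64_t mk_pte(uint64_t paddr, uint64_t perm)
{
    return ((paddr >> PAGE_SHIFT) << PTE_PPN_SHIFT) | perm;
}

static inline uint64_t pte_paddr(uint64_t pte)
{
    return (pte >> PTE_PPN_SHIFT) << PAGE_SHIFT;
}
```

With XEN_VIRT_START = 0xFFFFFFFFC0000000, pt_index(2, XEN_VIRT_START) is 511, i.e. the last slot of the Sv39 root table.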
> diff --git a/xen/arch/riscv/include/asm/processor.h b/xen/arch/riscv/include/asm/processor.h
> index a71448e02e..6db681d805 100644
> --- a/xen/arch/riscv/include/asm/processor.h
> +++ b/xen/arch/riscv/include/asm/processor.h
> @@ -69,6 +69,11 @@ static inline void die(void)
>          wfi();
>  }
>  
> +static inline void sfence_vma(void)
> +{
> +    asm volatile ( "sfence.vma" ::: "memory" );
> +}
> +
>  #endif /* __ASSEMBLY__ */
>  
>  #endif /* _ASM_RISCV_PROCESSOR_H */
> diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
> new file mode 100644
> index 0000000000..692ae9cb5e
> --- /dev/null
> +++ b/xen/arch/riscv/mm.c
> @@ -0,0 +1,276 @@
> +#include <xen/compiler.h>
> +#include <xen/init.h>
> +#include <xen/kernel.h>
> +#include <xen/pfn.h>
> +
> +#include <asm/early_printk.h>
> +#include <asm/csr.h>
> +#include <asm/current.h>
> +#include <asm/mm.h>
> +#include <asm/page.h>
> +#include <asm/processor.h>
> +
> +struct mmu_desc {
> +    unsigned int num_levels;
> +    unsigned int pgtbl_count;
> +    pte_t *next_pgtbl;
> +    pte_t *pgtbl_base;
> +};
> +
> +extern unsigned char cpu0_boot_stack[STACK_SIZE];
> +
> +#define PHYS_OFFSET ((unsigned long)_start - XEN_VIRT_START)
> +#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
> +#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
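
As an aside for readers: with unsigned arithmetic the two macros above are exact inverses of each other, even though PHYS_OFFSET wraps modulo 2^64 when the load address is below the link address. A host-side sketch with an assumed load address of 0x80200000 (illustrative only, not taken from a real boot):

```c
#include <assert.h>
#include <stdint.h>

#define XEN_VIRT_START 0xFFFFFFFFC0000000UL      /* link address (RV64) */

/* Assumed load address; the real one comes from the PC-relative _start. */
static const uint64_t load_start = 0x80200000UL;

#define PHYS_OFFSET        (load_start - XEN_VIRT_START) /* wraps mod 2^64 */
#define LOAD_TO_LINK(addr) ((addr) - PHYS_OFFSET)
#define LINK_TO_LOAD(addr) ((addr) + PHYS_OFFSET)
```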
> +
> +/*
> + * Xen is expected to be no bigger than 2 MB.
> + * The check in xen.lds.S guarantees that.
> + * At least 3 page tables (in case of Sv39) are needed to cover 2 MB:
> + * one for each page table level, with PAGE_SIZE = 4 KB.
> + *
> + * One L0 page table can cover 2 MB (512 entries per table * PAGE_SIZE).
> + *
> + * One more page table may be needed in case the Xen load address
> + * isn't 2 MB aligned.
> + */
> +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
> +
> +pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
> +stage1_pgtbl_root[PAGETABLE_ENTRIES];
> +
> +pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
> +stage1_pgtbl_nonroot[PGTBL_INITIAL_COUNT * PAGETABLE_ENTRIES];
> +
> +#define HANDLE_PGTBL(curr_lvl_num)                                          \
> +    index = pt_index(curr_lvl_num, page_addr);                              \
> +    if ( pte_is_valid(pgtbl[index]) )                                       \
> +    {                                                                       \
> +        /* Find L{ 0-3 } table */                                           \
> +        pgtbl = (pte_t *)pte_to_paddr(pgtbl[index]);                        \
> +    }                                                                       \
> +    else                                                                    \
> +    {                                                                       \
> +        /* Allocate new L{0-3} page table */                                \
> +        if ( mmu_desc->pgtbl_count == PGTBL_INITIAL_COUNT )                 \
> +        {                                                                   \
> +            early_printk("(XEN) No initial table available\n");             \
> +            /* panic(), BUG() or ASSERT() aren't ready now. */              \
> +            die();                                                          \
> +        }                                                                   \
> +        mmu_desc->pgtbl_count++;                                            \
> +        pgtbl[index] = paddr_to_pte((unsigned long)mmu_desc->next_pgtbl,    \
> +                                    PTE_VALID);                             \
> +        pgtbl = mmu_desc->next_pgtbl;                                       \
> +        mmu_desc->next_pgtbl += PAGETABLE_ENTRIES;                          \
> +    }
> +
> +static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
> +                                         unsigned long map_start,
> +                                         unsigned long map_end,
> +                                         unsigned long pa_start)
> +{
> +    unsigned int index;
> +    pte_t *pgtbl;
> +    unsigned long page_addr;
> +
> +    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
> +    {
> +        early_printk("(XEN) Xen should be loaded at 4k boundary\n");
> +        die();
> +    }
> +
> +    if ( (map_start & ~XEN_PT_LEVEL_MAP_MASK(0)) ||
> +         (pa_start & ~XEN_PT_LEVEL_MAP_MASK(0)) )
> +    {
> +        early_printk("(XEN) map and pa start addresses should be aligned\n");
> +        /* panic(), BUG() or ASSERT() aren't ready now. */
> +        die();
> +    }
> +
> +    for ( page_addr = map_start;
> +          page_addr < map_end;
> +          page_addr += XEN_PT_LEVEL_SIZE(0) )
> +    {
> +        pgtbl = mmu_desc->pgtbl_base;
> +
> +        switch ( mmu_desc->num_levels )
> +        {
> +        case 4: /* Level 3 */
> +            HANDLE_PGTBL(3);
> +        case 3: /* Level 2 */
> +            HANDLE_PGTBL(2);
> +        case 2: /* Level 1 */
> +            HANDLE_PGTBL(1);
> +        case 1: /* Level 0 */
> +            {
> +                unsigned long paddr = (page_addr - map_start) + pa_start;
> +                unsigned int permissions = PTE_LEAF_DEFAULT;
> +                pte_t pte_to_be_written;
> +
> +                index = pt_index(0, page_addr);
> +
> +                if ( is_kernel_text(LINK_TO_LOAD(page_addr)) ||
> +                     is_kernel_inittext(LINK_TO_LOAD(page_addr)) )
> +                    permissions =
> +                        PTE_EXECUTABLE | PTE_READABLE | PTE_VALID;
> +
> +                if ( is_kernel_rodata(LINK_TO_LOAD(page_addr)) )
> +                    permissions = PTE_READABLE | PTE_VALID;
> +
> +                pte_to_be_written = paddr_to_pte(paddr, permissions);
> +
> +                if ( !pte_is_valid(pgtbl[index]) )
> +                    pgtbl[index] = pte_to_be_written;
> +                else
> +                {
> +                    if ( (pgtbl[index].pte ^ pte_to_be_written.pte) &
> +                         ~(PTE_DIRTY | PTE_ACCESSED) )
> +                    {
> +                        early_printk("PTE overridden has occurred\n");
> +                        /* panic(), <asm/bug.h> aren't ready now. */
> +                        die();
> +                    }
> +                }
> +            }
> +        }
> +    }
> +}
> +#undef HANDLE_PGTBL
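
Stripped of the pool bookkeeping and error handling, the switch/fall-through above is a standard top-down table walk. A toy host-side model of the same logic (PTEs here store a pool index rather than a PPN, and index 0 doubles as "invalid" since the root table is never a child — this is a sketch, not the Xen code):

```c
#include <assert.h>
#include <stdint.h>

#define ORDER    9
#define ENTRIES  (1UL << ORDER)
#define SHIFT(l) (12 + (l) * ORDER)

static uint64_t pool[4][ENTRIES];   /* slot 0 is the root table */
static unsigned int used = 1;

/* Walk from the root to level 0, allocating missing intermediate
 * tables on the way, then install the leaf entry (Sv39: levels 2..0). */
static void map_page(uint64_t va, uint64_t leaf)
{
    uint64_t *tbl = pool[0];

    for ( int lvl = 2; lvl > 0; lvl-- )
    {
        unsigned int idx = (va >> SHIFT(lvl)) & (ENTRIES - 1);

        if ( !tbl[idx] )            /* empty slot: grab the next free table */
            tbl[idx] = used++;
        tbl = pool[tbl[idx]];
    }
    tbl[(va >> SHIFT(0)) & (ENTRIES - 1)] = leaf;
}

static uint64_t lookup(uint64_t va)
{
    uint64_t *tbl = pool[0];

    for ( int lvl = 2; lvl > 0; lvl-- )
        tbl = pool[tbl[(va >> SHIFT(lvl)) & (ENTRIES - 1)]];
    return tbl[(va >> SHIFT(0)) & (ENTRIES - 1)];
}
```

Mapping one 4 KB page from a cold start touches exactly one table per level, which is why PGTBL_INITIAL_COUNT is derived from CONFIG_PAGING_LEVELS.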
> +
> +static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
> +                                            unsigned long load_start)
> +{
> +    bool is_mode_supported = false;
> +    unsigned int index;
> +    unsigned int page_table_level = (mmu_desc->num_levels - 1);
> +    unsigned level_map_mask = XEN_PT_LEVEL_MAP_MASK(page_table_level);
> +
> +    unsigned long aligned_load_start = load_start & level_map_mask;
> +    unsigned long aligned_page_size = XEN_PT_LEVEL_SIZE(page_table_level);
> +    unsigned long xen_size = (unsigned long)(_end - start);
> +
> +    if ( (load_start + xen_size) > (aligned_load_start + aligned_page_size) )
> +    {
> +        early_printk("please place Xen so that it fits within one "
> +                     "superpage of size XEN_PT_LEVEL_SIZE( {L3 | L2 | L1} ) "
> +                     "depending on the configured SATP_MODE\n"
> +                     "XEN_PT_LEVEL_SIZE is defined in <asm/page.h>\n");
> +        die();
> +    }
> +
> +    index = pt_index(page_table_level, aligned_load_start);
> +    stage1_pgtbl_root[index] = paddr_to_pte(aligned_load_start,
> +                                            PTE_LEAF_DEFAULT | PTE_EXECUTABLE);
> +
> +    sfence_vma();
> +    csr_write(CSR_SATP,
> +              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
> +              RV_STAGE1_MODE << SATP_MODE_SHIFT);
> +
> +    if ( (csr_read(CSR_SATP) >> SATP_MODE_SHIFT) == RV_STAGE1_MODE )
> +        is_mode_supported = true;
> +
> +    csr_write(CSR_SATP, 0);
> +
> +    sfence_vma();
> +
> +    /* Clean MMU root page table */
> +    stage1_pgtbl_root[index] = paddr_to_pte(0x0, 0x0);
> +
> +    return is_mode_supported;
> +}
> +
> +/*
> + * setup_initial_pagetables:
> + *
> + * Build the initial page tables for Xen:
> + *  1. Calculate the number of page table levels.
> + *  2. Initialize the MMU description structure.
> + *  3. Check that the linker address range doesn't overlap
> + *     with the load address range.
> + *  4. Map all linker addresses to load addresses (this is not
> + *     necessarily a 1:1 mapping; it is 1:1 only when the
> + *     linker address is equal to the load address) with
> + *     RW permissions by default.
> + *  5. Set up the proper PTE permissions for each section.
> + */
> +void __init setup_initial_pagetables(void)
> +{
> +    struct mmu_desc mmu_desc = { CONFIG_PAGING_LEVELS, 0, NULL, NULL };
> +
> +    /*
> +     * Accesses to _start and _end are always PC-relative, so reading
> +     * them yields the load addresses of the start and end of Xen.
> +     * To get the linker addresses, LOAD_TO_LINK() must be used.
> +     */
> +    unsigned long load_start    = (unsigned long)_start;
> +    unsigned long load_end      = (unsigned long)_end;
> +    unsigned long linker_start  = LOAD_TO_LINK(load_start);
> +    unsigned long linker_end    = LOAD_TO_LINK(load_end);
> +
> +    if ( (linker_start != load_start) &&
> +         (linker_start <= load_end) && (load_start <= linker_end) )
> +    {
> +        early_printk("(XEN) linker and load address ranges overlap\n");
> +        die();
> +    }
> +
> +    if ( !check_pgtbl_mode_support(&mmu_desc, load_start) )
> +    {
> +        early_printk("requested MMU mode isn't supported by the CPU\n"
> +                     "Please choose a different one in <asm/config.h>\n");
> +        die();
> +    }
> +
> +    mmu_desc.pgtbl_base = stage1_pgtbl_root;
> +    mmu_desc.next_pgtbl = stage1_pgtbl_nonroot;
> +
> +    setup_initial_mapping(&mmu_desc,
> +                          linker_start,
> +                          linker_end,
> +                          load_start);
> +}
> +
> +void __init noreturn noinline enable_mmu()
> +{
> +    /*
> +     * Calculate the link-time address of the mmu_is_enabled
> +     * label and write it to CSR_STVEC.
> +     * The MMU is configured so that linker addresses are mapped
> +     * onto load addresses, so when the linker addresses are not
> +     * equal to the load addresses, enabling the MMU raises an
> +     * exception that vectors to the link-time address in stvec.
> +     * Otherwise, if the load addresses equal the linker addresses,
> +     * the code after the mmu_is_enabled label runs without an exception.
> +     */
> +    csr_write(CSR_STVEC, LOAD_TO_LINK((unsigned long)&&mmu_is_enabled));
> +
> +    /* Ensure page table writes precede loading the SATP */
> +    sfence_vma();
> +
> +    /* Enable the MMU and load the new pagetable for Xen */
> +    csr_write(CSR_SATP,
> +              PFN_DOWN((unsigned long)stage1_pgtbl_root) |
> +              RV_STAGE1_MODE << SATP_MODE_SHIFT);
> +
> +    asm volatile ( ".p2align 2" );
> + mmu_is_enabled:
> +    /*
> +     * The stack should be re-initialized because:
> +     * 1. The current stack pointer is a load-time address, which is a
> +     *    problem whenever the load start address differs from the
> +     *    linker start address.
> +     * 2. The addresses stored on the stack are all load-time relative,
> +     *    which is a problem for the same reason: the load start address
> +     *    may differ from the linker start address.
> +     *
> +     * We can't return to the caller because the stack was reset
> +     * and the caller may have stashed variables on it.
> +     * Jump to a brand new function instead.
> +     */
> +
> +    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
> +                          cont_after_mmu_is_enabled);
> +}
> +
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index 3786f337e0..315804aa87 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -2,6 +2,7 @@
>  #include <xen/init.h>
>  
>  #include <asm/early_printk.h>
> +#include <asm/mm.h>
>  
>  /* Xen stack for bringing up the first CPU. */
>  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
> @@ -26,3 +27,13 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>  
>      unreachable();
>  }
> +
> +void __init noreturn cont_after_mmu_is_enabled(void)
> +{
> +    early_printk("All set up\n");
> +
> +    for ( ;; )
> +        asm volatile ("wfi");
> +
> +    unreachable();
> +}
> diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
> index 31e0d3576c..fe475d096d 100644
> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -172,3 +172,6 @@ ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
>  
>  ASSERT(!SIZEOF(.got),      ".got non-empty")
>  ASSERT(!SIZEOF(.got.plt),  ".got.plt non-empty")
> +
> +ASSERT(_end - _start <= MB(2), "Xen too large for early-boot assumptions")
> +
> -- 
> 2.40.1
> 
> 

LGTM. Nice code to boot!
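
One detail worth spelling out for readers is the satp layout used by enable_mmu() and check_pgtbl_mode_support(): the register packs the root-table PPN in the low bits with the mode in the top field. A sketch using the RV64 constants from the privileged spec (Sv39 mode = 8, mode field at bit 60 — assumed here, not quoted from the patch):

```c
#include <assert.h>
#include <stdint.h>

#define SATP_MODE_SV39  8UL
#define SATP_MODE_SHIFT 60

/* PFN_DOWN(root) | mode << shift, as written to CSR_SATP above. */
static inline uint64_t make_satp(uint64_t root_pa)
{
    return (root_pa >> 12) | (SATP_MODE_SV39 << SATP_MODE_SHIFT);
}
```

Because satp is a WARL register, check_pgtbl_mode_support() can write such a value and read it back: if the mode field doesn't stick, the CPU doesn't implement the requested translation mode.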

Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>


From xen-devel-bounces@lists.xenproject.org Tue May 30 17:03:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 17:03:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541308.843940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42kn-00059K-Jt; Tue, 30 May 2023 17:03:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541308.843940; Tue, 30 May 2023 17:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42kn-00059D-HH; Tue, 30 May 2023 17:03:25 +0000
Received: by outflank-mailman (input) for mailman id 541308;
 Tue, 30 May 2023 17:03:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wXld=BT=gmail.com=bobbyeshleman@srs-se1.protection.inumbo.net>)
 id 1q42km-00046B-32
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 17:03:24 +0000
Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com
 [2607:f8b0:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e1ca63af-ff0b-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 19:03:22 +0200 (CEST)
Received: by mail-pl1-x636.google.com with SMTP id
 d9443c01a7336-1b04706c85fso19090605ad.0
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 10:03:22 -0700 (PDT)
Received: from localhost (ec2-52-9-159-93.us-west-1.compute.amazonaws.com.
 [52.9.159.93]) by smtp.gmail.com with ESMTPSA id
 c7-20020a170902d48700b001ab016e7916sm10523636plg.234.2023.05.30.10.03.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 30 May 2023 10:03:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1ca63af-ff0b-11ed-8611-37d641c3527e
X-Gm-Message-State: AC+VfDyiTK9Ol+HVEjMcULh4nWRQaqsENISc/4lvtOLpYqeRAjnQ3SpP
	P0Z3j5/sm8oiDF3PEplYYnA=
X-Google-Smtp-Source: ACHHUZ7QJJXZloHwPyKOcrZtfG1Ro2AJkghZ8uwS7lcvMtpdNL1Eblp0pM70cXhI8Qls4v7fEwcvDQ==
X-Received: by 2002:a17:903:249:b0:1a6:82ac:f277 with SMTP id j9-20020a170903024900b001a682acf277mr3640748plh.14.1685466200997;
        Tue, 30 May 2023 10:03:20 -0700 (PDT)
Date: Tue, 30 May 2023 10:03:45 +0000
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Gianluca Guida <gianluca@rivosinc.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: Re: [PATCH v9 5/5] xen/riscv: remove dummy_bss variable
Message-ID: <ZHXKAVgBwZX9jnud@bullseye>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
 <03151586ddd34f61a24809d11bcc0be5e847b384.1685027257.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <03151586ddd34f61a24809d11bcc0be5e847b384.1685027257.git.oleksii.kurochko@gmail.com>

On Thu, May 25, 2023 at 06:28:18PM +0300, Oleksii Kurochko wrote:
> After the introduction of the initial page tables there is no longer
> any need for the dummy_bss variable, as the .bss section will not be
> empty anymore.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V9:
>  - Nothing changed. Only rebase
> ---
> Changes in V8:
>  - Nothing changed. Only rebase
> ---
> Changes in V7:
>  - Nothing changed. Only rebase
> ---
> Changes in V6:
>  - Nothing changed. Only rebase
> ---
> Changes in V5:
>  - Nothing changed. Only rebase
> ---
> Changes in V4:
>  - Nothing changed. Only rebase
> ---
> Changes in V3:
> * the patch was introduced in this patch series (v3).
> ---
> Changes in V2:
> * the patch was introduced in this patch series (v2).
> ---
>  xen/arch/riscv/setup.c | 8 --------
>  1 file changed, 8 deletions(-)
> 
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index cf5dc5824e..845d18d86f 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -8,14 +8,6 @@
>  unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
>      __aligned(STACK_SIZE);
>  
> -/*  
> - * To be sure that .bss isn't zero. It will simplify code of
> - * .bss initialization.
> - * TODO:
> - *   To be deleted when the first real .bss user appears
> - */
> -int dummy_bss __attribute__((unused));
> -
>  void __init noreturn start_xen(unsigned long bootcpu_id,
>                                 paddr_t dtb_addr)
>  {
> -- 
> 2.40.1
> 
> 

Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>


From xen-devel-bounces@lists.xenproject.org Tue May 30 17:12:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 17:12:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541316.843949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42t4-0006mP-EH; Tue, 30 May 2023 17:11:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541316.843949; Tue, 30 May 2023 17:11:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q42t4-0006mI-BW; Tue, 30 May 2023 17:11:58 +0000
Received: by outflank-mailman (input) for mailman id 541316;
 Tue, 30 May 2023 17:11:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BJNG=BT=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q42t3-0006mC-CV
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 17:11:57 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1362bc0f-ff0d-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 19:11:55 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-631-dPfcO2s9MyOy4_j1vs0myg-1; Tue, 30 May 2023 13:11:50 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8340C38025FF;
 Tue, 30 May 2023 17:11:49 +0000 (UTC)
Received: from localhost (unknown [10.39.192.201])
 by smtp.corp.redhat.com (Postfix) with ESMTP id F367FC154D2;
 Tue, 30 May 2023 17:11:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1362bc0f-ff0d-11ed-8611-37d641c3527e
X-MC-Unique: dPfcO2s9MyOy4_j1vs0myg-1
Date: Tue, 30 May 2023 13:11:47 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>, Hanna Reitz <hreitz@redhat.com>,
	Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
	xen-devel@lists.xenproject.org, eblake@redhat.com,
	Anthony Perard <anthony.perard@citrix.com>, qemu-block@nongnu.org
Subject: Re: [PATCH v2 5/6] block/linux-aio: convert to blk_io_plug_call() API
Message-ID: <20230530171147.GA991054@fedora>
References: <20230523171300.132347-1-stefanha@redhat.com>
 <20230523171300.132347-6-stefanha@redhat.com>
 <n6hik7dbl26lomhxvfal2kjrq6jhdiknjepb372dvxavuwiw6q@3l3mo4eywoxq>
 <20230524193634.GB17357@fedora>
 <63lutuyufibun4jscbjjlshbqqw6otetzfi67rfnfrxacwutnj@igewwxh4uwys>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="E6/uC6M7CDHnrLPP"
Content-Disposition: inline
In-Reply-To: <63lutuyufibun4jscbjjlshbqqw6otetzfi67rfnfrxacwutnj@igewwxh4uwys>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8


--E6/uC6M7CDHnrLPP
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Mon, May 29, 2023 at 10:50:34AM +0200, Stefano Garzarella wrote:
> On Wed, May 24, 2023 at 03:36:34PM -0400, Stefan Hajnoczi wrote:
> > On Wed, May 24, 2023 at 10:52:03AM +0200, Stefano Garzarella wrote:
> > > On Tue, May 23, 2023 at 01:12:59PM -0400, Stefan Hajnoczi wrote:
> > > > Stop using the .bdrv_co_io_plug() API because it is not multi-queue
> > > > block layer friendly. Use the new blk_io_plug_call() API to batch I/O
> > > > submission instead.
> > > >
> > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > Reviewed-by: Eric Blake <eblake@redhat.com>
> > > > ---
> > > > include/block/raw-aio.h |  7 -------
> > > > block/file-posix.c      | 28 ----------------------------
> > > > block/linux-aio.c       | 41 +++++++++++------------------------------
> > > > 3 files changed, 11 insertions(+), 65 deletions(-)
> > > >
> > > > diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
> > > > index da60ca13ef..0f63c2800c 100644
> > > > --- a/include/block/raw-aio.h
> > > > +++ b/include/block/raw-aio.h
> > > > @@ -62,13 +62,6 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
> > > >
> > > > void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
> > > > void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
> > > > -
> > > > -/*
> > > > - * laio_io_plug/unplug work in the thread's current AioContext, therefore the
> > > > - * caller must ensure that they are paired in the same IOThread.
> > > > - */
> > > > -void laio_io_plug(void);
> > > > -void laio_io_unplug(uint64_t dev_max_batch);
> > > > #endif
> > > > /* io_uring.c - Linux io_uring implementation */
> > > > #ifdef CONFIG_LINUX_IO_URING
> > > > diff --git a/block/file-posix.c b/block/file-posix.c
> > > > index 7baa8491dd..ac1ed54811 100644
> > > > --- a/block/file-posix.c
> > > > +++ b/block/file-posix.c
> > > > @@ -2550,26 +2550,6 @@ static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
> > > >     return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
> > > > }
> > > >
> > > > -static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
> > > > -{
> > > > -    BDRVRawState __attribute__((unused)) *s = bs->opaque;
> > > > -#ifdef CONFIG_LINUX_AIO
> > > > -    if (s->use_linux_aio) {
> > > > -        laio_io_plug();
> > > > -    }
> > > > -#endif
> > > > -}
> > > > -
> > > > -static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
> > > > -{
> > > > -    BDRVRawState __attribute__((unused)) *s = bs->opaque;
> > > > -#ifdef CONFIG_LINUX_AIO
> > > > -    if (s->use_linux_aio) {
> > > > -        laio_io_unplug(s->aio_max_batch);
> > > > -    }
> > > > -#endif
> > > > -}
> > > > -
> > > > static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
> > > > {
> > > >     BDRVRawState *s = bs->opaque;
> > > > @@ -3914,8 +3894,6 @@ BlockDriver bdrv_file = {
> > > >     .bdrv_co_copy_range_from = raw_co_copy_range_from,
> > > >     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
> > > >     .bdrv_refresh_limits = raw_refresh_limits,
> > > > -    .bdrv_co_io_plug        = raw_co_io_plug,
> > > > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
> > > >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> > > >
> > > >     .bdrv_co_truncate                   = raw_co_truncate,
> > > > @@ -4286,8 +4264,6 @@ static BlockDriver bdrv_host_device = {
> > > >     .bdrv_co_copy_range_from = raw_co_copy_range_from,
> > > >     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
> > > >     .bdrv_refresh_limits = raw_refresh_limits,
> > > > -    .bdrv_co_io_plug        = raw_co_io_plug,
> > > > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
> > > >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> > > >
> > > >     .bdrv_co_truncate                   = raw_co_truncate,
> > > > @@ -4424,8 +4400,6 @@ static BlockDriver bdrv_host_cdrom = {
> > > >     .bdrv_co_pwritev        = raw_co_pwritev,
> > > >     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
> > > >     .bdrv_refresh_limits    = cdrom_refresh_limits,
> > > > -    .bdrv_co_io_plug        = raw_co_io_plug,
> > > > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
> > > >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> > > >
> > > >     .bdrv_co_truncate                   = raw_co_truncate,
> > > > @@ -4552,8 +4526,6 @@ static BlockDriver bdrv_host_cdrom = {
> > > >     .bdrv_co_pwritev        = raw_co_pwritev,
> > > >     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
> > > >     .bdrv_refresh_limits    = cdrom_refresh_limits,
> > > > -    .bdrv_co_io_plug        = raw_co_io_plug,
> > > > -    .bdrv_co_io_unplug      = raw_co_io_unplug,
> > > >     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
> > > >
> > > >     .bdrv_co_truncate                   = raw_co_truncate,
> > > > diff --git a/block/linux-aio.c b/block/linux-aio.c
> > > > index 442c86209b..5021aed68f 100644
> > > > --- a/block/linux-aio.c
> > > > +++ b/block/linux-aio.c
> > > > @@ -15,6 +15,7 @@
> > > > #include "qemu/event_notifier.h"
> > > > #include "qemu/coroutine.h"
> > > > #include "qapi/error.h"
> > > > +#include "sysemu/block-backend.h"
> > > >
> > > > /* Only used for assertions.  */
> > > > #include "qemu/coroutine_int.h"
> > > > @@ -46,7 +47,6 @@ struct qemu_laiocb {
> > > > };
> > > >
> > > > typedef struct {
> > > > -    int plugged;
> > > >     unsigned int in_queue;
> > > >     unsigned int in_flight;
> > > >     bool blocked;
> > > > @@ -236,7 +236,7 @@ static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
> > > > {
> > > >     qemu_laio_process_completions(s);
> > > >
> > > > -    if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
> > > > +    if (!QSIMPLEQ_EMPTY(&s->io_q.pending)) {
> > > >         ioq_submit(s);
> > > >     }
> > > > }
> > > > @@ -277,7 +277,6 @@ static void qemu_laio_poll_ready(EventNotifier *opaque)
> > > > static void ioq_init(LaioQueue *io_q)
> > > > {
> > > >     QSIMPLEQ_INIT(&io_q->pending);
> > > > -    io_q->plugged = 0;
> > > >     io_q->in_queue = 0;
> > > >     io_q->in_flight = 0;
> > > >     io_q->blocked = false;
> > > > @@ -354,31 +353,11 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
> > > >     return max_batch;
> > > > }
> > > >
> > > > -void laio_io_plug(void)
> > > > +static void laio_unplug_fn(void *opaque)
> > > > {
> > > > -    AioContext *ctx = qemu_get_current_aio_context();
> > > > -    LinuxAioState *s = aio_get_linux_aio(ctx);
> > > > +    LinuxAioState *s = opaque;
> > > >
> > > > -    s->io_q.plugged++;
> > > > -}
> > > > -
> > > > -void laio_io_unplug(uint64_t dev_max_batch)
> > > > -{
> > > > -    AioContext *ctx = qemu_get_current_aio_context();
> > > > -    LinuxAioState *s = aio_get_linux_aio(ctx);
> > > > -
> > > > -    assert(s->io_q.plugged);
> > > > -    s->io_q.plugged--;
> > > > -
> > > > -    /*
> > > > -     * Why max batch checking is performed here:
> > > > -     * Another BDS may have queued requests with a higher dev_max_batch and
> > > > -     * therefore in_queue could now exceed our dev_max_batch. Re-check the max
> > > > -     * batch so we can honor our device's dev_max_batch.
> > > > -     */
> > > > -    if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch) ||
> > >
> > > Why are we removing this condition?
> > > Could the same situation occur with the new API?
> >
> > The semantics of unplug_fn() are different from .bdrv_co_unplug():
> > 1. unplug_fn() is only called when the last blk_io_unplug() call occurs,
> >   not every time blk_io_unplug() is called.
> > 2. unplug_fn() is per-thread, not per-BlockDriverState, so there is no
> >   way to get per-BlockDriverState fields like dev_max_batch.
> >
> > Therefore this condition cannot be moved to laio_unplug_fn().
>
> I see now.
>
> >
> > How important is this condition? I believe that dropping it does not
> > have much of an effect but maybe I missed something.
>
> With Kevin we agreed to add it to avoid extra latency in some devices,
> but we didn't do much testing on this.
>
> IIRC what solved the performance degradation was the check in
> laio_do_submit() that we still have after these changes.
>
> So it may not have much effect, but maybe it's worth mentioning in
> the commit description.

I'll update the commit description.
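For anyone following along, the two semantic points above (callbacks deferred until the outermost unplug, tracked per thread) can be sketched roughly as below. This is a simplified, illustrative model, not QEMU's actual block/plug.c; the function names mirror the API but the internals are assumptions:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified per-thread plug state: a nesting counter plus a list of
 * deferred callbacks.  Layout and MAX_CALLS are illustrative only. */
#define MAX_CALLS 16

typedef void (*UnplugFn)(void *opaque);

static __thread struct {
    int depth;                   /* nesting level of blk_io_plug() */
    UnplugFn fns[MAX_CALLS];     /* deferred submission callbacks */
    void *opaques[MAX_CALLS];
    size_t ncalls;
} plug_state;

void blk_io_plug(void)
{
    plug_state.depth++;
}

/* Defer fn(opaque) until the outermost unplug; coalesce duplicates so a
 * driver's submission function runs at most once per flush. */
void blk_io_plug_call(UnplugFn fn, void *opaque)
{
    if (plug_state.depth == 0) {
        fn(opaque);              /* not plugged: submit immediately */
        return;
    }
    for (size_t i = 0; i < plug_state.ncalls; i++) {
        if (plug_state.fns[i] == fn && plug_state.opaques[i] == opaque) {
            return;              /* already queued for this flush */
        }
    }
    assert(plug_state.ncalls < MAX_CALLS);
    plug_state.fns[plug_state.ncalls] = fn;
    plug_state.opaques[plug_state.ncalls] = opaque;
    plug_state.ncalls++;
}

void blk_io_unplug(void)
{
    assert(plug_state.depth > 0);
    if (--plug_state.depth == 0) {
        /* Outermost unplug: run each deferred callback exactly once. */
        for (size_t i = 0; i < plug_state.ncalls; i++) {
            plug_state.fns[i](plug_state.opaques[i]);
        }
        plug_state.ncalls = 0;
    }
}
```

In this model the callback only ever sees the opaque pointer it registered (a per-thread LinuxAioState here), which is exactly why per-BlockDriverState fields like dev_max_batch are unreachable from unplug_fn().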

> >
> > Also, does it make sense to define per-BlockDriverState batching limits
> > when the AIO engine (Linux AIO or io_uring) is thread-local and shared
> > between all BlockDriverStates? I believe the fundamental reason (that we
> > discovered later) why dev_max_batch is effective is because the Linux
> > kernel processes 32 I/O request submissions at a time. Anything above 32
> > adds latency without a batching benefit.
>
> This is a good point, maybe we should confirm it with some tests though.

Yes, I would benchmark it. Also, switching to per-thread max_batch
involves a command-line interface change and we still need to keep
aio-max-batch for compatibility for some time, so it's not urgent.
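On the 32-request point, the clamping that laio_do_submit() still applies via laio_max_batch() boils down to something like the sketch below. This is a standalone rendition for illustration only; the constants and the helper's signature are assumptions (in QEMU the values come from the AioContext and LinuxAioState):

```c
#include <stdint.h>

/* Illustrative constants; the real values live in block/linux-aio.c. */
#define MAX_EVENTS        128  /* io_setup() queue depth (assumption) */
#define DEFAULT_MAX_BATCH  32  /* matches the kernel's per-submit limit */

/*
 * Effective batch limit: the engine default (or the user's aio-max-batch
 * if set), further capped by the device's dev_max_batch (0 = unlimited)
 * and by the number of free completion slots.
 */
static uint64_t laio_max_batch(uint64_t aio_max_batch, uint64_t dev_max_batch,
                               unsigned int in_flight)
{
    uint64_t max_batch = aio_max_batch ? aio_max_batch : DEFAULT_MAX_BATCH;

    if (dev_max_batch && dev_max_batch < max_batch) {
        max_batch = dev_max_batch;
    }
    if (MAX_EVENTS - in_flight < max_batch) {
        max_batch = MAX_EVENTS - in_flight;   /* don't overrun the ring */
    }
    return max_batch;
}
```

With the unplug-time check gone, this per-submission clamp is the remaining place where dev_max_batch takes effect.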

Stefan

--E6/uC6M7CDHnrLPP
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmR2LlMACgkQnKSrs4Gr
c8h/Ggf7B7c65EZOrTNGL/839KEM85T6s0FUc1sNOCzsZju0RUmP/RLa23pmgTLW
RftvkboiCIe370CmfeiDz98h7g9BE2QFqNIrztwxbq2SK3AC8N48lUCS0Ssn0ZgA
Xtt6Qr9hiMBGBYmucYTp3SX7bw+eity223jOvhru/HRBUb9bvFq60fSwE8q5bTwT
rWIAfW8HKeC/z9Kqb8hgtgpIQc3hxRP/B9LpwTRnAWy/0JPFgY0eHf0E7wtiAbKp
9InSWGn55VQeAZDIqhgrDf3dSdcPYRMRq4UF9gU3hMvjVnCGZ/5KJk9VyZASenVS
wzOd5uBnaXVA42cN7HLszt7UgwGr0A==
=+o/P
-----END PGP SIGNATURE-----

--E6/uC6M7CDHnrLPP--



From xen-devel-bounces@lists.xenproject.org Tue May 30 17:29:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 17:29:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541320.843961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43AA-0008NC-Ui; Tue, 30 May 2023 17:29:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541320.843961; Tue, 30 May 2023 17:29:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43AA-0008N5-Pk; Tue, 30 May 2023 17:29:38 +0000
Received: by outflank-mailman (input) for mailman id 541320;
 Tue, 30 May 2023 17:29:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8xzU=BT=citrix.com=prvs=507ffd061=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q43A9-0008Mz-AF
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 17:29:37 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8a36eb98-ff0f-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 19:29:34 +0200 (CEST)
Received: from mail-dm6nam10lp2108.outbound.protection.outlook.com (HELO
 NAM10-DM6-obe.outbound.protection.outlook.com) ([104.47.58.108])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 May 2023 13:29:23 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DS7PR03MB5656.namprd03.prod.outlook.com (2603:10b6:5:2d0::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 17:29:20 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 17:29:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a36eb98-ff0f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685467774;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=eGtQwqVhUh0OMGOjDWvkDnQgyxy1nP8PoWKXSd1uzaA=;
  b=dp42jGsWWtey1NPrv06d1WYe0U2CqSKhknn3fVM7FM3rsgFcwF717LC4
   OCMiS2MhkrN6I1ra1NqWzRZQbek3nYd1cVOrzSuGKyhpZ0cA4P2YUoFE+
   okXZith4gslVtqb3hhGBcU42UrJDSwidtEPDssUXn5u76AvScwNCToOhR
   c=;
X-IronPort-RemoteIP: 104.47.58.108
X-IronPort-MID: 110290678
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,204,1681185600"; 
   d="scan'208";a="110290678"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aUaUi3XnpdTmEKw5ZUWzRLrKkjQGHBMe0dXsnehs/fOi2yIp0o0c0ARGlHbIz9lnU+8qvMIA+MjBGTXhYGUwWmLQM1WEWjdcDAfmltqBN7T8+bdiyoDfwXcs2QRY1xI074KJSPnpONFZnZofbRtwkvvwoXpE8kiRe1iT6EsmeQs6JrnirYGLJYN3jmOr+36Ply4nOnnSCWBxFlkEGBlJbEG05Midwu3sKSWNTOgeEgypphirQUDTG+BJdJmRLgfEleGmiThHW26YZLKmAQf9JhPt/IA2/FjMd0+MmfRYS8ZMc4dsoLhOOew8CaZ/9dxol1BfySD8jy0jziCBXHi/zA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Ru31ZqSFBUGrduaSVqofp/33VcjaBYZ/bFaZiCXx+NI=;
 b=Mqv7wqwSqHf2PeisLVxd/67aXeWAZKZ+Xw5YxTrRlW1s+of8A1yOYQ/7BYG6+vva9sgS4GcAgNlw0fa5D9Mp7yfBLwChB2+uv9dREW1Sn0tWm0vpHxnWY+wg+ClltBSvkrAZ8T0DHZAO4GdchRgofCPy8mv/vbbf8FKv5H6iRv5FicMEsnnv1en/jwI7nk0Qba3djiKV0sT+ZuY90rwXWb70JdR8X2JLuBBdWW/QOIpnGfPOrAZ8uuMbLN2YtJceBLUO3MmMom1v6ILgRnhJX72+UItJOtnsECgQHRFGHchOUtTt24ZIWj8iwebXZtyr/xWJ8w2DvsJzVq/PGJm+Ug==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ru31ZqSFBUGrduaSVqofp/33VcjaBYZ/bFaZiCXx+NI=;
 b=HjRJmhcxLgyRtP2q7WxnsKquILMrIr9YM5U76yJgojzFc9nVF7Df1S8jJrCHXuN0BWwo7ZlrwXHpBM9Ay+ppILYijDkDGAtC3TyrZ/3baBzXB5BAitoaVRNAoXlVhgM1QMS7IsJa/sE57u86w2dWThw+TkAL7LeQ8PJF1NGne2o=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <bc209116-75ac-06d6-e4bb-eb77b10ac5bd@citrix.com>
Date: Tue, 30 May 2023 18:29:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 1/3] x86: Add bit definitions for Automatic IBRS
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
 Juergen Gross <jgross@suse.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20230530135854.1517-1-alejandro.vallejo@cloud.com>
 <20230530135854.1517-2-alejandro.vallejo@cloud.com>
In-Reply-To: <20230530135854.1517-2-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0218.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a6::7) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DS7PR03MB5656:EE_
X-MS-Office365-Filtering-Correlation-Id: be414ac7-a7ea-4b87-1c6b-08db613366ca
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: be414ac7-a7ea-4b87-1c6b-08db613366ca
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 17:29:20.4437
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6nPQ2KGIoNFTiKmcA+ikUdoa3IhNjcH0FN7w5J/DEmVJrqWtwUox/jERMMUfNyEQRW+KSndPBCft7H/31wxv2+0zNXksuVgGufpp0TOdTUk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5656

On 30/05/2023 2:58 pm, Alejandro Vallejo wrote:
> This is an AMD feature to reduce the IBRS handling overhead. Once enabled,
> processes running at CPL=0 are automatically IBRS-protected even if
> SPEC_CTRL.IBRS is not set. Furthermore, the RAS/RSB is cleared on VMEXIT.
>
> The feature is exposed in CPUID and toggled in EFER.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>, but...

> diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
> index 777041425e..3ac144100e 100644
> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -287,6 +287,7 @@ XEN_CPUFEATURE(AVX_IFMA,     10*32+23) /*A  AVX-IFMA Instructions */
>  /* AMD-defined CPU features, CPUID level 0x80000021.eax, word 11 */
>  XEN_CPUFEATURE(LFENCE_DISPATCH,    11*32+ 2) /*A  LFENCE always serializing */
>  XEN_CPUFEATURE(NSCB,               11*32+ 6) /*A  Null Selector Clears Base (and limit too) */
> +XEN_CPUFEATURE(AUTO_IBRS,          11*32+ 8) /*   HW can handle IBRS on its own */

... I've changed this on commit to just "Automatic IBRS".  The behaviour
is far more complicated than this, and anyone who wants to know needs to
read the manual extra carefully.

For one, there's a behaviour which depends on whether SEV-SNP was
enabled in firmware...
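For reference, the plumbing involved is small; a sketch of the relevant bit positions as the AMD APM gives them (the macro names and helper are mine for illustration, not Xen's actual definitions):

```c
#include <stdint.h>

/* Automatic IBRS, per the AMD APM (names are illustrative, not Xen's). */
#define CPUID_AMD_EXT_FEAT_LEAF  0x80000021u
#define AUTO_IBRS_CPUID_BIT      8   /* CPUID.80000021H:EAX[8]          */
#define EFER_AIBRSE_BIT          21  /* EFER.AIBRSE enables the feature */

/*
 * Xen packs CPUID feature flags into 32-bit words; word 11 is
 * 0x80000021.eax, so AUTO_IBRS is spelled 11*32+ 8 in cpufeatureset.h.
 */
static unsigned int xen_feature_number(unsigned int word, unsigned int bit)
{
    return word * 32 + bit;
}
```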

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 30 17:31:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 17:31:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541324.843970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43Bs-0001Nt-CP; Tue, 30 May 2023 17:31:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541324.843970; Tue, 30 May 2023 17:31:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43Bs-0001Nm-9N; Tue, 30 May 2023 17:31:24 +0000
Received: by outflank-mailman (input) for mailman id 541324;
 Tue, 30 May 2023 17:31:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8xzU=BT=citrix.com=prvs=507ffd061=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q43Bq-0001Na-Fu
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 17:31:22 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ca0b6a32-ff0f-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 19:31:21 +0200 (CEST)
Received: from mail-co1nam11lp2177.outbound.protection.outlook.com (HELO
 NAM11-CO1-obe.outbound.protection.outlook.com) ([104.47.56.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 30 May 2023 13:31:11 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6055.namprd03.prod.outlook.com (2603:10b6:208:31b::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.22; Tue, 30 May
 2023 17:31:07 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.022; Tue, 30 May 2023
 17:31:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca0b6a32-ff0f-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685467880;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=xPupXtiOvjibfucvNUqXFdPuf+q3cQrntE2/+FNJ+Eo=;
  b=PzVsBulfd0BpRGtfWRkxfkI+MdJ8JakXinhBCgjC4MTkApEQjkd54ILg
   wXK8GMzyPCV2dWC8KXb9ykgTQX68ONQkFvtypftVVtR1J6KXGRecJcxio
   DSYJ9jHfqsZI0xPg+IdpoPeb7/627gH5qjHJk3vT6lVhI7Vuot0uzWTIT
   8=;
X-IronPort-RemoteIP: 104.47.56.177
X-IronPort-MID: 113443194
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,204,1681185600"; 
   d="scan'208";a="113443194"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=doBwt3EtG+e2cThbT3n8dDguWQFUNep4vTQAEa9CW/thLZRSG1P1DwA8c3UCTzGZLmJgl6x1UDh+xqxp42MYxTSTsimKyIu77uxpsdfQ/t1Ba2eg14NRUrgVfcTrMDvPGcZgOtwzmkotXJmDFhYDegQ2QVTbzB0i0ci5n8bzEqDXcLc7ItufrnOHsXwkuZfiYJPMAVd/gQy7lG640moi+LnSQdcPotbYD12sLZIpdcUYugIRl4vub5mh72WDemqnCjfNPavCIsb6fU+Q/YBbpCO8GXO+WLOiGoh1v3/XQUt42xZ0zAhlOhdgwAszeRTpHAqVx429cle6pT7iU8+KUw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xPupXtiOvjibfucvNUqXFdPuf+q3cQrntE2/+FNJ+Eo=;
 b=gSunSclFZYfLmWuDlXEgCPTGj08cH2FsT2jDN8Tnhpt8hw1wWDbEmQQSB93WuZjkOvTqkE9Gk+MriQg2fjk45qQTQrWPIHRz6k4y3yK/ykiMWek+xpLulS1QOH31EVaw62sCeRWHgZ2QwQDwZPQfzP7tWLK3qKuo5o/9b+AUvDochU6hlHamAsepKcuL9XX3hCcTY4wyRIl0gWmTITz3hg6rud+SMzOdwQL9OoZ/c6CDfzsexVkHVFaWOZweoN88xcuHforcLeEP/xlP/aHRisbMccJhD9gQt2uWtv6o3641KrbjGyPYjmTAJpo5i+2hBUadu+uVybq7TzHF7lFgAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xPupXtiOvjibfucvNUqXFdPuf+q3cQrntE2/+FNJ+Eo=;
 b=Xl56Ppr46BS3ll0+34b+7XXCQO6H7I027mN8V9QXQ04LOkflTEzz1NSb5Y8jPZ+83SypTphYDZIswgolBPRjPNdlIoTeke5Dasc7sX7kQaSO9/EKIP7G5v2jk5TCASuRwCku5xJ7ifF/yLw6A1cxpFCQMokotrFlEWxB4aEdi8A=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <2e1bea58-9f6f-08c0-ce00-148f79ba12ff@citrix.com>
Date: Tue, 30 May 2023 18:31:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 2/3] x86: Expose Automatic IBRS to guests
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230530135854.1517-1-alejandro.vallejo@cloud.com>
 <20230530135854.1517-3-alejandro.vallejo@cloud.com>
In-Reply-To: <20230530135854.1517-3-alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO4P123CA0218.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a6::7) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BL1PR03MB6055:EE_
X-MS-Office365-Filtering-Correlation-Id: 822de12b-ee3c-4fa2-82ad-08db6133a6bf
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 822de12b-ee3c-4fa2-82ad-08db6133a6bf
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 17:31:07.5153
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0byAj8hmLN28hSCGWNRC1Y9QXSAy+soUZMlJ2tz8IT8d8AFEUVn3HMg/nQ5E5i+qxaX7x1cH6WXmqHDu7WWbjzQkahtF0WgSiVnQTkdpXhU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6055

On 30/05/2023 2:58 pm, Alejandro Vallejo wrote:
> Expose AutoIBRS to HVM guests. EFER is swapped by VMRUN, so Xen only has to
> make sure writes to EFER.AIBRSE are gated on the feature being exposed.
>
> Also hide EFER.AIBRSE from PV guests as they have no say in the matter.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

I've committed this, but made two tweaks to the commit message.  First,
I added "x86/hvm" to the subject because it's important context at a glance.

Second, I've adjusted the bit about PV guests.  The reason we can't
expose it to them yet is that Xen doesn't currently context switch EFER
between PV guests.
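The gating can be sketched roughly as follows. This is a minimal model with invented names (`cpu_policy`, `guest_wrmsr_efer`), not Xen's actual wrmsr path; the bit position is EFER[21] per the AMD APM:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch only: invented names, not Xen's actual code.
 * EFER.AIBRSE is bit 21 per the AMD APM. */
#define EFER_AIBRSE (1UL << 21)

struct cpu_policy {
    bool auto_ibrs;   /* is AutoIBRS visible in the guest's CPUID policy? */
};

/* Return 0 on success, -1 to signal that #GP should be injected. */
int guest_wrmsr_efer(const struct cpu_policy *p,
                     unsigned long val, unsigned long *efer)
{
    /* Gate the write on the feature actually being exposed */
    if ((val & EFER_AIBRSE) && !p->auto_ibrs)
        return -1;
    *efer = val;
    return 0;
}
```

With this shape, a guest whose policy lacks AutoIBRS takes a #GP on the write, while an HVM guest with the feature exposed can set the bit (EFER itself being swapped by VMRUN, as the commit message notes).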

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 30 17:32:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 17:32:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541326.843980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43CN-0001rY-L4; Tue, 30 May 2023 17:31:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541326.843980; Tue, 30 May 2023 17:31:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43CN-0001rQ-Hr; Tue, 30 May 2023 17:31:55 +0000
Received: by outflank-mailman (input) for mailman id 541326;
 Tue, 30 May 2023 17:31:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zZuJ=BT=flex--seanjc.bounces.google.com=3BzN2ZAYKCXAgSObXQUccUZS.QcalSb-RSjSZZWghg.lSbdfcXSQh.cfU@srs-se1.protection.inumbo.net>)
 id 1q43CM-0001Na-6K
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 17:31:54 +0000
Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com
 [2607:f8b0:4864:20::54a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dd93b758-ff0f-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 19:31:53 +0200 (CEST)
Received: by mail-pg1-x54a.google.com with SMTP id
 41be03b00d2f7-53445255181so2605152a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 10:31:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd93b758-ff0f-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20221208; t=1685467912; x=1688059912;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:from:to:cc:subject:date:message-id:reply-to;
        bh=5dIo9ozu/tftAp1UmpoAUrczqs99lWKXi5qA1UpHNYE=;
        b=fcMt7E9oftWvnXylT0q0LSi2ovxYU6GROaZO/YpP+hWVROA5CGkN858ssUM/OEUsq+
         cy2yKP5XxthrBWOhhRWmnMfaInbcFln+rmGuOYxtdLasZqdrf9i2ZKdgt/TEdCrBBjG7
         ZT9ootqRsxe/NChBz/85WsRSsuU1KSVsGHKC2qIH/mmVq1mmccPpcHFqUeIQGRkih0S3
         aGvDJJEkz3g8ZQ99QNMUDAP/dYZioxFkRygbUSwiPYm9+GJu5Jyv2dh2A01JO3qxQH41
         6ykaek6RLxvtQMhW8/hOelD76pSAAtY75YP51exJDFZC+VMuMAXLJkFisg7dDXEPnO0i
         fv+Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685467912; x=1688059912;
        h=cc:to:from:subject:message-id:references:mime-version:in-reply-to
         :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=5dIo9ozu/tftAp1UmpoAUrczqs99lWKXi5qA1UpHNYE=;
        b=GqQeARTJAIy0OCMnou0Vt/JPinTj7KJPa3u3g5owP1aXSvRugCBWy5UT3TO7eFis/I
         4i0DflcahSN1fWJpq30oLlezVcCKXOwb7P062DaI5um3qhTjoFgt1C7yWz1vu09FDXLe
         juRMvfEQJZ8HcTYO+3O9aXBhgdx64aLoFGTTuwlfKH4z1RKcYtiEbe6P5Td6jEvpycll
         +KkstALS2X4ixpj2bkKtNDZe1jtyPJ9K7XSg3MjF3KaQ0uWRYu8z6x8vgpF7OJuYm+kP
         uZiTvnwRlu8XZqJwjbw4swHjRinRZxs01dJLGS7CIdRatbD3FmIAcGFayILvyejVMcYW
         gucA==
X-Gm-Message-State: AC+VfDxU5zZhESBcIm0mGUnobpSyxLNVlTBPhKqKvuQRx8bZH/4xyRti
	uQgzWBxtKo9mj3aZwVQc9mWb30Hllq0=
X-Google-Smtp-Source: ACHHUZ5mA8belaIiWlXNNvjxlntxz+3HEXuwwih113OoUIVv+U+qOnHhtSEzZI4lNu6RqPJOqokzHLfKYhE=
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a63:510:0:b0:52c:9996:c1f8 with SMTP id
 16-20020a630510000000b0052c9996c1f8mr555070pgf.10.1685467911952; Tue, 30 May
 2023 10:31:51 -0700 (PDT)
Date: Tue, 30 May 2023 10:31:50 -0700
In-Reply-To: <20230530170210.ujkv737uyjfvdoay@box.shutemov.name>
Mime-Version: 1.0
References: <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name>
 <87bki3kkfi.ffs@tglx> <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx> <87cz2ija1e.ffs@tglx> <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name>
 <87wn0pizbl.ffs@tglx> <20230530170210.ujkv737uyjfvdoay@box.shutemov.name>
Message-ID: <ZHYzBrLfT6DIKBw4@google.com>
Subject: Re: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
From: Sean Christopherson <seanjc@google.com>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Thomas Gleixner <tglx@linutronix.de>, Tom Lendacky <thomas.lendacky@amd.com>, 
	LKML <linux-kernel@vger.kernel.org>, x86@kernel.org, 
	David Woodhouse <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Brian Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>, 
	Oleksandr Natalenko <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>, 
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski <lucjan.lucjanov@gmail.com>, 
	Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org, 
	Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>, 
	linux-arm-kernel@lists.infradead.org, 
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren <guoren@kernel.org>, 
	linux-csky@vger.kernel.org, Thomas Bogendoerfer <tsbogend@alpha.franken.de>, 
	linux-mips@vger.kernel.org, 
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller <deller@gmx.de>, 
	linux-parisc@vger.kernel.org, Paul Walmsley <paul.walmsley@sifive.com>, 
	Palmer Dabbelt <palmer@dabbelt.com>, linux-riscv@lists.infradead.org, 
	Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>, 
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Content-Type: text/plain; charset="us-ascii"

On Tue, May 30, 2023, Kirill A. Shutemov wrote:
> On Tue, May 30, 2023 at 06:00:46PM +0200, Thomas Gleixner wrote:
> > On Tue, May 30 2023 at 15:29, Kirill A. Shutemov wrote:
> > > On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
> > >> The decision to allow parallel bringup of secondary CPUs checks
> > >> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
> > >> parallel bootup because accessing the local APIC is intercepted and raises
> > >> a #VC or #VE, which cannot be handled at that point.
> > >> 
> > >> The check works correctly, but only for AMD encrypted guests. TDX does not
> > >> set that flag.
> > >> 
> > >> Check for cc_vendor != CC_VENDOR_NONE instead. That might be overbroad, but
> > >> definitely works for both AMD and Intel.
> > >
> > > It boots fine with TDX, but I think it is wrong. cc_get_vendor() will
> > > report CC_VENDOR_AMD even on bare metal if SME is enabled. I don't think
> > > we want it.
> > 
> > Right. Did not think about that.
> > 
> > But CC_ATTR_GUEST_MEM_ENCRYPT is overbroad for AMD in the same way. Only
> > SEV-ES traps RDMSR, if I'm understanding that maze correctly.
> 
> I don't know the difference between SEV flavours that well.
> 
> I see that on SEV-SNP, access to the x2APIC MSR range (MSR 0x800-0x8FF)
> is intercepted regardless of whether the MSR_AMD64_SNP_ALT_INJ feature is
> present. But I'm not sure what the state is on SEV or SEV-ES.

With SEV-ES, if the hypervisor intercepts an MSR access, the VM-Exit is instead
morphed to a #VC (except for EFER).  The guest needs to do an explicit VMGEXIT
(i.e. a hypercall) to request MSR emulation (this *can* be done in the #VC
handler, but the guest can also do VMGEXIT directly, e.g. in lieu of a RDMSR).

With regular SEV, VM-Exits aren't reflected into the guest.  Register state isn't
encrypted so the hypervisor can emulate MSR accesses (and other instructions)
without needing an explicit hypercall from the guest.
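The contrast above can be captured in a toy model. Everything here is invented for illustration (no real SVM or GHCB code): under plain SEV the hypervisor can decode the request from unencrypted register state, while under SEV-ES it only sees what the guest explicitly publishes in the shared GHCB before VMGEXIT:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the SEV vs SEV-ES difference; all names are invented. */
struct guest {
    bool regs_encrypted;      /* SEV-ES: register state is encrypted */
    unsigned long rcx;        /* MSR index the guest wants to read */
    bool ghcb_valid;          /* guest copied the request into the GHCB */
    unsigned long ghcb_msr;   /* shared, unencrypted request */
};

/* Hypervisor side: emulate an intercepted RDMSR if it can see the request. */
bool hv_handle_rdmsr(const struct guest *g, unsigned long *index)
{
    if (!g->regs_encrypted) {     /* plain SEV: decode from registers */
        *index = g->rcx;
        return true;
    }
    if (g->ghcb_valid) {          /* SEV-ES: only via an explicit VMGEXIT */
        *index = g->ghcb_msr;
        return true;
    }
    return false;                 /* nothing visible to emulate */
}

/* Guest side under SEV-ES: the #VC handler publishes the request,
 * then issues VMGEXIT (modeled here by the caller re-entering the HV). */
void guest_vc_rdmsr(struct guest *g)
{
    g->ghcb_msr = g->rcx;
    g->ghcb_valid = true;
}
```

The point of the model is only the visibility asymmetry: without the guest's cooperation, an SEV-ES hypervisor has nothing to emulate, which is why the #VC handler (or a direct VMGEXIT) is required.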


From xen-devel-bounces@lists.xenproject.org Tue May 30 18:10:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 18:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541336.844015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43nr-0006vG-Ix; Tue, 30 May 2023 18:10:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541336.844015; Tue, 30 May 2023 18:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43nr-0006tY-Dk; Tue, 30 May 2023 18:10:39 +0000
Received: by outflank-mailman (input) for mailman id 541336;
 Tue, 30 May 2023 18:10:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BJNG=BT=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q43np-0006Ma-T3
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 18:10:37 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 46a5f28c-ff15-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 20:10:37 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-567-RcpP8Yb5O6-KCdzFrDVW0w-1; Tue, 30 May 2023 14:10:32 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id EB7E5185A78F;
 Tue, 30 May 2023 18:10:31 +0000 (UTC)
Received: from localhost (unknown [10.39.192.97])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5A43B40CFD45;
 Tue, 30 May 2023 18:10:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46a5f28c-ff15-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685470236;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=zspNQDN9x15zA+5+5xrrc00UJFsGuayW0uokDFqy9AA=;
	b=XYJHL/P/KkDEEsC3eMla+VJrQ9/wsycqctSeYSkgrjLhbvoivv+mIXuqkS56XGA8AZoV5z
	2nQGTswtB+0pS9NKMGkNoPneDSyQhK6ORRoAULjDWOfxqPUXuo+u9tHZTHYgIejRrFOC/T
	O2JgVR4X3NOK1s8tiHy7o0ndgvije8Y=
X-MC-Unique: RcpP8Yb5O6-KCdzFrDVW0w-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	eblake@redhat.com,
	Hanna Reitz <hreitz@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	sgarzare@redhat.com,
	qemu-block@nongnu.org,
	xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>
Subject: [PATCH v3 0/6] block: add blk_io_plug_call() API
Date: Tue, 30 May 2023 14:09:53 -0400
Message-Id: <20230530180959.1108766-1-stefanha@redhat.com>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1

v3
- Patch 5: Mention why dev_max_batch condition was dropped [Stefano]
v2
- Patch 1: "is not be freed" -> "is not freed" [Eric]
- Patch 2: Remove unused nvme_process_completion_queue_plugged trace event
  [Stefano]
- Patch 3: Add missing #include and fix blkio_unplug_fn() prototype [Stefano]
- Patch 4: Removed whitespace hunk [Eric]

The existing blk_io_plug() API is not block layer multi-queue friendly because
the plug state is per-BlockDriverState.

Change blk_io_plug()'s implementation so it is thread-local. This is done by
introducing the blk_io_plug_call() function that block drivers use to batch
calls while plugged. It is relatively easy to convert block drivers from
.bdrv_co_io_plug() to blk_io_plug_call().

Random read 4KB performance with virtio-blk on a host NVMe block device:

iodepth   iops   change vs today
1        45612   -4%
2        87967   +2%
4       129872   +0%
8       171096   -3%
16      194508   -4%
32      208947   -1%
64      217647   +0%
128     229629   +0%

The results are within the noise for these benchmarks. This is to be expected:
the plugging behavior for a single thread hasn't changed in this patch series;
the only change is that the state is now thread-local.

The following graph compares several approaches:
https://vmsplice.net/~stefan/blk_io_plug-thread-local.png
- v7.2.0: before most of the multi-queue block layer changes landed.
- with-blk_io_plug: today's post-8.0.0 QEMU.
- blk_io_plug-thread-local: this patch series.
- no-blk_io_plug: what happens when we simply remove plugging?
- call-after-dispatch: what if we integrate plugging into the event loop? I
  decided against this approach in the end because it's more likely to
  introduce performance regressions since I/O submission is deferred until the
  end of the event loop iteration.

Aside from the no-blk_io_plug case, which bottlenecks much earlier than the
others, we see that all plugging approaches are more or less equivalent in this
benchmark. It is also clear that QEMU 8.0.0 has lower performance than 7.2.0.

The Ansible playbook, fio results, and a Jupyter notebook are available here:
https://github.com/stefanha/qemu-perf/tree/remove-blk_io_plug

Stefan Hajnoczi (6):
  block: add blk_io_plug_call() API
  block/nvme: convert to blk_io_plug_call() API
  block/blkio: convert to blk_io_plug_call() API
  block/io_uring: convert to blk_io_plug_call() API
  block/linux-aio: convert to blk_io_plug_call() API
  block: remove bdrv_co_io_plug() API

 MAINTAINERS                       |   1 +
 include/block/block-io.h          |   3 -
 include/block/block_int-common.h  |  11 ---
 include/block/raw-aio.h           |  14 ---
 include/sysemu/block-backend-io.h |  13 +--
 block/blkio.c                     |  43 ++++----
 block/block-backend.c             |  22 -----
 block/file-posix.c                |  38 -------
 block/io.c                        |  37 -------
 block/io_uring.c                  |  44 ++++-----
 block/linux-aio.c                 |  41 +++-----
 block/nvme.c                      |  44 +++------
 block/plug.c                      | 159 ++++++++++++++++++++++++++++++
 hw/block/dataplane/xen-block.c    |   8 +-
 hw/block/virtio-blk.c             |   4 +-
 hw/scsi/virtio-scsi.c             |   6 +-
 block/meson.build                 |   1 +
 block/trace-events                |   6 +-
 18 files changed, 239 insertions(+), 256 deletions(-)
 create mode 100644 block/plug.c

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 18:10:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 18:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541337.844030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43ns-0007LR-PY; Tue, 30 May 2023 18:10:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541337.844030; Tue, 30 May 2023 18:10:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43ns-0007KT-Kw; Tue, 30 May 2023 18:10:40 +0000
Received: by outflank-mailman (input) for mailman id 541337;
 Tue, 30 May 2023 18:10:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BJNG=BT=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q43nr-0006Ma-78
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 18:10:39 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 47743aa9-ff15-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 20:10:38 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-154-3boQAr7ZMlW1O_l3e_kMEA-1; Tue, 30 May 2023 14:10:35 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6EAC1101A55C;
 Tue, 30 May 2023 18:10:34 +0000 (UTC)
Received: from localhost (unknown [10.39.192.97])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C7AF4C15612;
 Tue, 30 May 2023 18:10:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47743aa9-ff15-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685470237;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lITSxMKIy5BCBhtpkqwgAwb4vAOKPBcBqjiX2PUq5bE=;
	b=RnAmxD1MtvqMR58HKZQyTVrQGfwyGY45QlH+sBrLF38sUBqIqH95UuA7VYNAwDl8cXsfJG
	T9VaqMfph8MtpYBmmt9bXZhjrPYFG3eLCA1t9jKyZAufjtuCxX0SUsqvwwpqOk7PUogeuD
	A7OxhxzYU+cpPhzTcuro6rHvhD/8GGY=
X-MC-Unique: 3boQAr7ZMlW1O_l3e_kMEA-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	eblake@redhat.com,
	Hanna Reitz <hreitz@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	sgarzare@redhat.com,
	qemu-block@nongnu.org,
	xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>
Subject: [PATCH v3 6/6] block: remove bdrv_co_io_plug() API
Date: Tue, 30 May 2023 14:09:59 -0400
Message-Id: <20230530180959.1108766-7-stefanha@redhat.com>
In-Reply-To: <20230530180959.1108766-1-stefanha@redhat.com>
References: <20230530180959.1108766-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

No block driver implements .bdrv_co_io_plug() anymore. Get rid of the
function pointers.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
 include/block/block-io.h         |  3 ---
 include/block/block_int-common.h | 11 ----------
 block/io.c                       | 37 --------------------------------
 3 files changed, 51 deletions(-)

diff --git a/include/block/block-io.h b/include/block/block-io.h
index a27e471a87..43af816d75 100644
--- a/include/block/block-io.h
+++ b/include/block/block-io.h
@@ -259,9 +259,6 @@ void coroutine_fn bdrv_co_leave(BlockDriverState *bs, AioContext *old_ctx);
 
 AioContext *child_of_bds_get_parent_aio_context(BdrvChild *c);
 
-void coroutine_fn GRAPH_RDLOCK bdrv_co_io_plug(BlockDriverState *bs);
-void coroutine_fn GRAPH_RDLOCK bdrv_co_io_unplug(BlockDriverState *bs);
-
 bool coroutine_fn GRAPH_RDLOCK
 bdrv_co_can_store_new_dirty_bitmap(BlockDriverState *bs, const char *name,
                                    uint32_t granularity, Error **errp);
diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
index 6492a1e538..958962aa3a 100644
--- a/include/block/block_int-common.h
+++ b/include/block/block_int-common.h
@@ -753,11 +753,6 @@ struct BlockDriver {
     void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_debug_event)(
         BlockDriverState *bs, BlkdebugEvent event);
 
-    /* io queue for linux-aio */
-    void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_io_plug)(BlockDriverState *bs);
-    void coroutine_fn GRAPH_RDLOCK_PTR (*bdrv_co_io_unplug)(
-        BlockDriverState *bs);
-
     /**
      * bdrv_drain_begin is called if implemented in the beginning of a
      * drain operation to drain and stop any internal sources of requests in
@@ -1227,12 +1222,6 @@ struct BlockDriverState {
     unsigned int in_flight;
     unsigned int serialising_in_flight;
 
-    /*
-     * counter for nested bdrv_io_plug.
-     * Accessed with atomic ops.
-     */
-    unsigned io_plugged;
-
     /* do we need to tell the quest if we have a volatile write cache? */
     int enable_write_cache;
 
diff --git a/block/io.c b/block/io.c
index 4d54fda593..56b0c1ce6c 100644
--- a/block/io.c
+++ b/block/io.c
@@ -3219,43 +3219,6 @@ void *qemu_try_blockalign0(BlockDriverState *bs, size_t size)
     return mem;
 }
 
-void coroutine_fn bdrv_co_io_plug(BlockDriverState *bs)
-{
-    BdrvChild *child;
-    IO_CODE();
-    assert_bdrv_graph_readable();
-
-    QLIST_FOREACH(child, &bs->children, next) {
-        bdrv_co_io_plug(child->bs);
-    }
-
-    if (qatomic_fetch_inc(&bs->io_plugged) == 0) {
-        BlockDriver *drv = bs->drv;
-        if (drv && drv->bdrv_co_io_plug) {
-            drv->bdrv_co_io_plug(bs);
-        }
-    }
-}
-
-void coroutine_fn bdrv_co_io_unplug(BlockDriverState *bs)
-{
-    BdrvChild *child;
-    IO_CODE();
-    assert_bdrv_graph_readable();
-
-    assert(bs->io_plugged);
-    if (qatomic_fetch_dec(&bs->io_plugged) == 1) {
-        BlockDriver *drv = bs->drv;
-        if (drv && drv->bdrv_co_io_unplug) {
-            drv->bdrv_co_io_unplug(bs);
-        }
-    }
-
-    QLIST_FOREACH(child, &bs->children, next) {
-        bdrv_co_io_unplug(child->bs);
-    }
-}
-
 /* Helper that undoes bdrv_register_buf() when it fails partway through */
 static void GRAPH_RDLOCK
 bdrv_register_buf_rollback(BlockDriverState *bs, void *host, size_t size,
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 18:10:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 18:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541335.844010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43nr-0006rL-7W; Tue, 30 May 2023 18:10:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541335.844010; Tue, 30 May 2023 18:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43nr-0006rC-4j; Tue, 30 May 2023 18:10:39 +0000
Received: by outflank-mailman (input) for mailman id 541335;
 Tue, 30 May 2023 18:10:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BJNG=BT=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q43np-0006QH-Co
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 18:10:37 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4421aadc-ff15-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 20:10:33 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-138-pcP8SZG9Nf-vL8mTnvfyUw-1; Tue, 30 May 2023 14:10:30 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B73403828886;
 Tue, 30 May 2023 18:10:29 +0000 (UTC)
Received: from localhost (unknown [10.39.192.97])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1CBCDC154D1;
 Tue, 30 May 2023 18:10:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4421aadc-ff15-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685470231;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=k4GDn3dw5uXG8hqCvm2tijloNjCA2iqGGwZfUOTUEY8=;
	b=VR44y5oVrmALl9Rgl67tksj8U66c418Xg9UxMETHrY+NHse/hn1fhTxbL+UjaUtZX+zeg1
	iZCC/fs1Z2vT5oLD0AIKQDM6PZ8xyYm3XsHil7Ux6MoKC/NFrHEFQalh6FdEES18Rf2SRc
	Vgqu34CxgzX9WEhkk/aOqd0xtiRvxFo=
X-MC-Unique: pcP8SZG9Nf-vL8mTnvfyUw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	eblake@redhat.com,
	Hanna Reitz <hreitz@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	sgarzare@redhat.com,
	qemu-block@nongnu.org,
	xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>
Subject: [PATCH v3 4/6] block/io_uring: convert to blk_io_plug_call() API
Date: Tue, 30 May 2023 14:09:57 -0400
Message-Id: <20230530180959.1108766-5-stefanha@redhat.com>
In-Reply-To: <20230530180959.1108766-1-stefanha@redhat.com>
References: <20230530180959.1108766-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
v2
- Removed whitespace hunk [Eric]
---
 include/block/raw-aio.h |  7 -------
 block/file-posix.c      | 10 ----------
 block/io_uring.c        | 44 ++++++++++++++++-------------------------
 block/trace-events      |  5 ++---
 4 files changed, 19 insertions(+), 47 deletions(-)

diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index 0fe85ade77..da60ca13ef 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -81,13 +81,6 @@ int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
                                   QEMUIOVector *qiov, int type);
 void luring_detach_aio_context(LuringState *s, AioContext *old_context);
 void luring_attach_aio_context(LuringState *s, AioContext *new_context);
-
-/*
- * luring_io_plug/unplug work in the thread's current AioContext, therefore the
- * caller must ensure that they are paired in the same IOThread.
- */
-void luring_io_plug(void);
-void luring_io_unplug(void);
 #endif
 
 #ifdef _WIN32
diff --git a/block/file-posix.c b/block/file-posix.c
index 0ab158efba..7baa8491dd 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2558,11 +2558,6 @@ static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
         laio_io_plug();
     }
 #endif
-#ifdef CONFIG_LINUX_IO_URING
-    if (s->use_linux_io_uring) {
-        luring_io_plug();
-    }
-#endif
 }
 
 static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
@@ -2573,11 +2568,6 @@ static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
         laio_io_unplug(s->aio_max_batch);
     }
 #endif
-#ifdef CONFIG_LINUX_IO_URING
-    if (s->use_linux_io_uring) {
-        luring_io_unplug();
-    }
-#endif
 }
 
 static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
diff --git a/block/io_uring.c b/block/io_uring.c
index 82cab6a5bd..4e25b56c9c 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -16,6 +16,7 @@
 #include "block/raw-aio.h"
 #include "qemu/coroutine.h"
 #include "qapi/error.h"
+#include "sysemu/block-backend.h"
 #include "trace.h"
 
 /* Only used for assertions.  */
@@ -41,7 +42,6 @@ typedef struct LuringAIOCB {
 } LuringAIOCB;
 
 typedef struct LuringQueue {
-    int plugged;
     unsigned int in_queue;
     unsigned int in_flight;
     bool blocked;
@@ -267,7 +267,7 @@ static void luring_process_completions_and_submit(LuringState *s)
 {
     luring_process_completions(s);
 
-    if (!s->io_q.plugged && s->io_q.in_queue > 0) {
+    if (s->io_q.in_queue > 0) {
         ioq_submit(s);
     }
 }
@@ -301,29 +301,17 @@ static void qemu_luring_poll_ready(void *opaque)
 static void ioq_init(LuringQueue *io_q)
 {
     QSIMPLEQ_INIT(&io_q->submit_queue);
-    io_q->plugged = 0;
     io_q->in_queue = 0;
     io_q->in_flight = 0;
     io_q->blocked = false;
 }
 
-void luring_io_plug(void)
+static void luring_unplug_fn(void *opaque)
 {
-    AioContext *ctx = qemu_get_current_aio_context();
-    LuringState *s = aio_get_linux_io_uring(ctx);
-    trace_luring_io_plug(s);
-    s->io_q.plugged++;
-}
-
-void luring_io_unplug(void)
-{
-    AioContext *ctx = qemu_get_current_aio_context();
-    LuringState *s = aio_get_linux_io_uring(ctx);
-    assert(s->io_q.plugged);
-    trace_luring_io_unplug(s, s->io_q.blocked, s->io_q.plugged,
-                           s->io_q.in_queue, s->io_q.in_flight);
-    if (--s->io_q.plugged == 0 &&
-        !s->io_q.blocked && s->io_q.in_queue > 0) {
+    LuringState *s = opaque;
+    trace_luring_unplug_fn(s, s->io_q.blocked, s->io_q.in_queue,
+                           s->io_q.in_flight);
+    if (!s->io_q.blocked && s->io_q.in_queue > 0) {
         ioq_submit(s);
     }
 }
@@ -370,14 +358,16 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
 
     QSIMPLEQ_INSERT_TAIL(&s->io_q.submit_queue, luringcb, next);
     s->io_q.in_queue++;
-    trace_luring_do_submit(s, s->io_q.blocked, s->io_q.plugged,
-                           s->io_q.in_queue, s->io_q.in_flight);
-    if (!s->io_q.blocked &&
-        (!s->io_q.plugged ||
-         s->io_q.in_flight + s->io_q.in_queue >= MAX_ENTRIES)) {
-        ret = ioq_submit(s);
-        trace_luring_do_submit_done(s, ret);
-        return ret;
+    trace_luring_do_submit(s, s->io_q.blocked, s->io_q.in_queue,
+                           s->io_q.in_flight);
+    if (!s->io_q.blocked) {
+        if (s->io_q.in_flight + s->io_q.in_queue >= MAX_ENTRIES) {
+            ret = ioq_submit(s);
+            trace_luring_do_submit_done(s, ret);
+            return ret;
+        }
+
+        blk_io_plug_call(luring_unplug_fn, s);
     }
     return 0;
 }
diff --git a/block/trace-events b/block/trace-events
index 048ad27519..6f121b7636 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -64,9 +64,8 @@ file_paio_submit(void *acb, void *opaque, int64_t offset, int count, int type) "
 # io_uring.c
 luring_init_state(void *s, size_t size) "s %p size %zu"
 luring_cleanup_state(void *s) "%p freed"
-luring_io_plug(void *s) "LuringState %p plug"
-luring_io_unplug(void *s, int blocked, int plugged, int queued, int inflight) "LuringState %p blocked %d plugged %d queued %d inflight %d"
-luring_do_submit(void *s, int blocked, int plugged, int queued, int inflight) "LuringState %p blocked %d plugged %d queued %d inflight %d"
+luring_unplug_fn(void *s, int blocked, int queued, int inflight) "LuringState %p blocked %d queued %d inflight %d"
+luring_do_submit(void *s, int blocked, int queued, int inflight) "LuringState %p blocked %d queued %d inflight %d"
 luring_do_submit_done(void *s, int ret) "LuringState %p submitted to kernel %d"
 luring_co_submit(void *bs, void *s, void *luringcb, int fd, uint64_t offset, size_t nbytes, int type) "bs %p s %p luringcb %p fd %d offset %" PRId64 " nbytes %zd type %d"
 luring_process_completion(void *s, void *aiocb, int ret) "LuringState %p luringcb %p ret %d"
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 18:10:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 18:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541334.843995 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43nn-0006QF-SU; Tue, 30 May 2023 18:10:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541334.843995; Tue, 30 May 2023 18:10:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43nn-0006Q0-MT; Tue, 30 May 2023 18:10:35 +0000
Received: by outflank-mailman (input) for mailman id 541334;
 Tue, 30 May 2023 18:10:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BJNG=BT=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q43nm-0006Ma-4I
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 18:10:34 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4362de48-ff15-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 20:10:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-634-ALgcGzpOPgWpn7IJbP1-mw-1; Tue, 30 May 2023 14:10:27 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 5FB593C0F223;
 Tue, 30 May 2023 18:10:27 +0000 (UTC)
Received: from localhost (unknown [10.39.192.97])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B895E492B0A;
 Tue, 30 May 2023 18:10:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4362de48-ff15-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685470230;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lkj3uc2uid1yXWgYT2SgHzDWoawNVZpv4Y6BTe1vm/w=;
	b=PQO0RVPbm2dLfEVQVBEHvGYDRoguuc9fxSsUNHHL6Y9dx85zO9Tv+ck2REcvrZCoO9Vsho
	Gvs1j6q+SEe6P1It7dlFx6JBX8esUosoiDor7FvRBSLpzn/DnsIV8J7FHahrGTE1WKMoxh
	4Jp56dlBwBLglWVTs3081UedjBjA4dg=
X-MC-Unique: ALgcGzpOPgWpn7IJbP1-mw-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	eblake@redhat.com,
	Hanna Reitz <hreitz@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	sgarzare@redhat.com,
	qemu-block@nongnu.org,
	xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>
Subject: [PATCH v3 3/6] block/blkio: convert to blk_io_plug_call() API
Date: Tue, 30 May 2023 14:09:56 -0400
Message-Id: <20230530180959.1108766-4-stefanha@redhat.com>
In-Reply-To: <20230530180959.1108766-1-stefanha@redhat.com>
References: <20230530180959.1108766-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
v2
- Add missing #include and fix blkio_unplug_fn() prototype [Stefano]
---
 block/blkio.c | 43 ++++++++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 19 deletions(-)

diff --git a/block/blkio.c b/block/blkio.c
index 0cdc99a729..93c6d20d39 100644
--- a/block/blkio.c
+++ b/block/blkio.c
@@ -17,6 +17,7 @@
 #include "qemu/error-report.h"
 #include "qapi/qmp/qdict.h"
 #include "qemu/module.h"
+#include "sysemu/block-backend.h"
 #include "exec/memory.h" /* for ram_block_discard_disable() */
 
 #include "block/block-io.h"
@@ -325,16 +326,30 @@ static void blkio_detach_aio_context(BlockDriverState *bs)
                        false, NULL, NULL, NULL, NULL, NULL);
 }
 
-/* Call with s->blkio_lock held to submit I/O after enqueuing a new request */
-static void blkio_submit_io(BlockDriverState *bs)
+/*
+ * Called by blk_io_unplug() or immediately if not plugged. Called without
+ * blkio_lock.
+ */
+static void blkio_unplug_fn(void *opaque)
 {
-    if (qatomic_read(&bs->io_plugged) == 0) {
-        BDRVBlkioState *s = bs->opaque;
+    BDRVBlkioState *s = opaque;
 
+    WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_do_io(s->blkioq, NULL, 0, 0, NULL);
     }
 }
 
+/*
+ * Schedule I/O submission after enqueuing a new request. Called without
+ * blkio_lock.
+ */
+static void blkio_submit_io(BlockDriverState *bs)
+{
+    BDRVBlkioState *s = bs->opaque;
+
+    blk_io_plug_call(blkio_unplug_fn, s);
+}
+
 static int coroutine_fn
 blkio_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 {
@@ -345,9 +360,9 @@ blkio_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_discard(s->blkioq, offset, bytes, &cod, 0);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
     return cod.ret;
 }
@@ -378,9 +393,9 @@ blkio_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_readv(s->blkioq, offset, iov, iovcnt, &cod, 0);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
 
     if (use_bounce_buffer) {
@@ -423,9 +438,9 @@ static int coroutine_fn blkio_co_pwritev(BlockDriverState *bs, int64_t offset,
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_writev(s->blkioq, offset, iov, iovcnt, &cod, blkio_flags);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
 
     if (use_bounce_buffer) {
@@ -444,9 +459,9 @@ static int coroutine_fn blkio_co_flush(BlockDriverState *bs)
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_flush(s->blkioq, &cod, 0);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
     return cod.ret;
 }
@@ -472,22 +487,13 @@ static int coroutine_fn blkio_co_pwrite_zeroes(BlockDriverState *bs,
 
     WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
         blkioq_write_zeroes(s->blkioq, offset, bytes, &cod, blkio_flags);
-        blkio_submit_io(bs);
     }
 
+    blkio_submit_io(bs);
     qemu_coroutine_yield();
     return cod.ret;
 }
 
-static void coroutine_fn blkio_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVBlkioState *s = bs->opaque;
-
-    WITH_QEMU_LOCK_GUARD(&s->blkio_lock) {
-        blkio_submit_io(bs);
-    }
-}
-
 typedef enum {
     BMRR_OK,
     BMRR_SKIP,
@@ -1009,7 +1015,6 @@ static void blkio_refresh_limits(BlockDriverState *bs, Error **errp)
         .bdrv_co_pwritev         = blkio_co_pwritev, \
         .bdrv_co_flush_to_disk   = blkio_co_flush, \
         .bdrv_co_pwrite_zeroes   = blkio_co_pwrite_zeroes, \
-        .bdrv_co_io_unplug       = blkio_co_io_unplug, \
         .bdrv_refresh_limits     = blkio_refresh_limits, \
         .bdrv_register_buf       = blkio_register_buf, \
         .bdrv_unregister_buf     = blkio_unregister_buf, \
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 18:10:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 18:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541333.843990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43nn-0006Ms-IX; Tue, 30 May 2023 18:10:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541333.843990; Tue, 30 May 2023 18:10:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43nn-0006Ml-FC; Tue, 30 May 2023 18:10:35 +0000
Received: by outflank-mailman (input) for mailman id 541333;
 Tue, 30 May 2023 18:10:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BJNG=BT=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q43nl-0006Ma-Mt
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 18:10:33 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 435be645-ff15-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 20:10:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-395-OtHoLbc0M_edCvjK-HiKEg-1; Tue, 30 May 2023 14:10:25 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 987AA858EEC;
 Tue, 30 May 2023 18:10:24 +0000 (UTC)
Received: from localhost (unknown [10.39.192.97])
 by smtp.corp.redhat.com (Postfix) with ESMTP id DAFE02166B26;
 Tue, 30 May 2023 18:10:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 435be645-ff15-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685470230;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rzt+VbbYwkajdc2OCuRRgTByjydJmxUvTTyb3ffD+dw=;
	b=eB1YZNSUDrCz4hXC/54aO2OVS36xXt22vA4IWwY59e8lB2K2e+tUhkRX8qRs0Etz6yBbwD
	e9YA3sDYOngPLROF8FwHvb/Z/uZYlkE49zHdRa1Y2OmEbV88byZDsxsCxX4YPgxcN5m/F9
	sBhSE1ow4QExejNYvmKJY0SheeDMcb4=
X-MC-Unique: OtHoLbc0M_edCvjK-HiKEg-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	eblake@redhat.com,
	Hanna Reitz <hreitz@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	sgarzare@redhat.com,
	qemu-block@nongnu.org,
	xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>
Subject: [PATCH v3 2/6] block/nvme: convert to blk_io_plug_call() API
Date: Tue, 30 May 2023 14:09:55 -0400
Message-Id: <20230530180959.1108766-3-stefanha@redhat.com>
In-Reply-To: <20230530180959.1108766-1-stefanha@redhat.com>
References: <20230530180959.1108766-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
v2
- Remove unused nvme_process_completion_queue_plugged trace event
  [Stefano]
---
 block/nvme.c       | 44 ++++++++++++--------------------------------
 block/trace-events |  1 -
 2 files changed, 12 insertions(+), 33 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..100b38b592 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -25,6 +25,7 @@
 #include "qemu/vfio-helpers.h"
 #include "block/block-io.h"
 #include "block/block_int.h"
+#include "sysemu/block-backend.h"
 #include "sysemu/replay.h"
 #include "trace.h"
 
@@ -119,7 +120,6 @@ struct BDRVNVMeState {
     int blkshift;
 
     uint64_t max_transfer;
-    bool plugged;
 
     bool supports_write_zeroes;
     bool supports_discard;
@@ -282,7 +282,7 @@ static void nvme_kick(NVMeQueuePair *q)
 {
     BDRVNVMeState *s = q->s;
 
-    if (s->plugged || !q->need_kick) {
+    if (!q->need_kick) {
         return;
     }
     trace_nvme_kick(s, q->index);
@@ -387,10 +387,6 @@ static bool nvme_process_completion(NVMeQueuePair *q)
     NvmeCqe *c;
 
     trace_nvme_process_completion(s, q->index, q->inflight);
-    if (s->plugged) {
-        trace_nvme_process_completion_queue_plugged(s, q->index);
-        return false;
-    }
 
     /*
      * Support re-entrancy when a request cb() function invokes aio_poll().
@@ -480,6 +476,15 @@ static void nvme_trace_command(const NvmeCmd *cmd)
     }
 }
 
+static void nvme_unplug_fn(void *opaque)
+{
+    NVMeQueuePair *q = opaque;
+
+    QEMU_LOCK_GUARD(&q->lock);
+    nvme_kick(q);
+    nvme_process_completion(q);
+}
+
 static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
                                 NvmeCmd *cmd, BlockCompletionFunc cb,
                                 void *opaque)
@@ -496,8 +501,7 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
            q->sq.tail * NVME_SQ_ENTRY_BYTES, cmd, sizeof(*cmd));
     q->sq.tail = (q->sq.tail + 1) % NVME_QUEUE_SIZE;
     q->need_kick++;
-    nvme_kick(q);
-    nvme_process_completion(q);
+    blk_io_plug_call(nvme_unplug_fn, q);
     qemu_mutex_unlock(&q->lock);
 }
 
@@ -1567,27 +1571,6 @@ static void nvme_attach_aio_context(BlockDriverState *bs,
     }
 }
 
-static void coroutine_fn nvme_co_io_plug(BlockDriverState *bs)
-{
-    BDRVNVMeState *s = bs->opaque;
-    assert(!s->plugged);
-    s->plugged = true;
-}
-
-static void coroutine_fn nvme_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVNVMeState *s = bs->opaque;
-    assert(s->plugged);
-    s->plugged = false;
-    for (unsigned i = INDEX_IO(0); i < s->queue_count; i++) {
-        NVMeQueuePair *q = s->queues[i];
-        qemu_mutex_lock(&q->lock);
-        nvme_kick(q);
-        nvme_process_completion(q);
-        qemu_mutex_unlock(&q->lock);
-    }
-}
-
 static bool nvme_register_buf(BlockDriverState *bs, void *host, size_t size,
                               Error **errp)
 {
@@ -1664,9 +1647,6 @@ static BlockDriver bdrv_nvme = {
     .bdrv_detach_aio_context  = nvme_detach_aio_context,
     .bdrv_attach_aio_context  = nvme_attach_aio_context,
 
-    .bdrv_co_io_plug          = nvme_co_io_plug,
-    .bdrv_co_io_unplug        = nvme_co_io_unplug,
-
     .bdrv_register_buf        = nvme_register_buf,
     .bdrv_unregister_buf      = nvme_unregister_buf,
 };
diff --git a/block/trace-events b/block/trace-events
index 32665158d6..048ad27519 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -141,7 +141,6 @@ nvme_kick(void *s, unsigned q_index) "s %p q #%u"
 nvme_dma_flush_queue_wait(void *s) "s %p"
 nvme_error(int cmd_specific, int sq_head, int sqid, int cid, int status) "cmd_specific %d sq_head %d sqid %d cid %d status 0x%x"
 nvme_process_completion(void *s, unsigned q_index, int inflight) "s %p q #%u inflight %d"
-nvme_process_completion_queue_plugged(void *s, unsigned q_index) "s %p q #%u"
 nvme_complete_command(void *s, unsigned q_index, int cid) "s %p q #%u cid %d"
 nvme_submit_command(void *s, unsigned q_index, int cid) "s %p q #%u cid %d"
 nvme_submit_command_raw(int c0, int c1, int c2, int c3, int c4, int c5, int c6, int c7) "%02x %02x %02x %02x %02x %02x %02x %02x"
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 18:11:10 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 18:11:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541339.844040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43oC-0008Oy-1o; Tue, 30 May 2023 18:11:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541339.844040; Tue, 30 May 2023 18:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43oB-0008On-Ul; Tue, 30 May 2023 18:10:59 +0000
Received: by outflank-mailman (input) for mailman id 541339;
 Tue, 30 May 2023 18:10:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BJNG=BT=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q43oB-0006QH-66
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 18:10:59 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 52766cf1-ff15-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 20:10:57 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com
 [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-37-fVm_N7GDM0KF6oi8pyVfPQ-1; Tue, 30 May 2023 14:10:52 -0400
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.rdu2.redhat.com
 [10.11.54.7])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 28A3C29DD987;
 Tue, 30 May 2023 18:10:22 +0000 (UTC)
Received: from localhost (unknown [10.39.192.97])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1494E140EBB8;
 Tue, 30 May 2023 18:10:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52766cf1-ff15-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685470255;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rDWo/TbkkuGu5WvfIjHRTys/qCO/SxODVgMveMDyooM=;
	b=d1jx+LzHGoNTRstFwIxnIsRw7kp0GoRGNUf8QErBUxtIrKcUk+IAahi+c9Jhw+Q3WrxJ0e
	eKhekgzCkTlXtEiahBh45B3WzO4c+YW5FitzHZx+/7sxNWusX+KAohy7lJ4ZOO+EDOyBg3
	GK5VQOmOKusEeP5yDESW3pehDsEqHMQ=
X-MC-Unique: fVm_N7GDM0KF6oi8pyVfPQ-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	eblake@redhat.com,
	Hanna Reitz <hreitz@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	sgarzare@redhat.com,
	qemu-block@nongnu.org,
	xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>
Subject: [PATCH v3 1/6] block: add blk_io_plug_call() API
Date: Tue, 30 May 2023 14:09:54 -0400
Message-Id: <20230530180959.1108766-2-stefanha@redhat.com>
In-Reply-To: <20230530180959.1108766-1-stefanha@redhat.com>
References: <20230530180959.1108766-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.7

Introduce a new API for thread-local blk_io_plug() that does not
traverse the block graph. The goal is to make blk_io_plug() multi-queue
friendly.

Instead of having block drivers track whether or not we're in a plugged
section, provide an API that allows them to defer a function call until
we're unplugged: blk_io_plug_call(fn, opaque). If blk_io_plug_call() is
called multiple times with the same fn/opaque pair, then fn() is only
called once when the section is unplugged - resulting in batching.

This patch introduces the API and changes blk_io_plug()/blk_io_unplug().
blk_io_plug()/blk_io_unplug() no longer require a BlockBackend argument
because the plug state is now thread-local.

Later patches convert block drivers to blk_io_plug_call(); once they
have all been converted, .bdrv_co_io_plug() can finally be removed.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
v2
- "is not be freed" -> "is not freed" [Eric]
---
 MAINTAINERS                       |   1 +
 include/sysemu/block-backend-io.h |  13 +--
 block/block-backend.c             |  22 -----
 block/plug.c                      | 159 ++++++++++++++++++++++++++++++
 hw/block/dataplane/xen-block.c    |   8 +-
 hw/block/virtio-blk.c             |   4 +-
 hw/scsi/virtio-scsi.c             |   6 +-
 block/meson.build                 |   1 +
 8 files changed, 173 insertions(+), 41 deletions(-)
 create mode 100644 block/plug.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 4b025a7b63..89f274f85e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2650,6 +2650,7 @@ F: util/aio-*.c
 F: util/aio-*.h
 F: util/fdmon-*.c
 F: block/io.c
+F: block/plug.c
 F: migration/block*
 F: include/block/aio.h
 F: include/block/aio-wait.h
diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
index d62a7ee773..be4dcef59d 100644
--- a/include/sysemu/block-backend-io.h
+++ b/include/sysemu/block-backend-io.h
@@ -100,16 +100,9 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
 int blk_get_max_iov(BlockBackend *blk);
 int blk_get_max_hw_iov(BlockBackend *blk);
 
-/*
- * blk_io_plug/unplug are thread-local operations. This means that multiple
- * IOThreads can simultaneously call plug/unplug, but the caller must ensure
- * that each unplug() is called in the same IOThread of the matching plug().
- */
-void coroutine_fn blk_co_io_plug(BlockBackend *blk);
-void co_wrapper blk_io_plug(BlockBackend *blk);
-
-void coroutine_fn blk_co_io_unplug(BlockBackend *blk);
-void co_wrapper blk_io_unplug(BlockBackend *blk);
+void blk_io_plug(void);
+void blk_io_unplug(void);
+void blk_io_plug_call(void (*fn)(void *), void *opaque);
 
 AioContext *blk_get_aio_context(BlockBackend *blk);
 BlockAcctStats *blk_get_stats(BlockBackend *blk);
diff --git a/block/block-backend.c b/block/block-backend.c
index ca537cd0ad..1f1d226ba6 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -2568,28 +2568,6 @@ void blk_add_insert_bs_notifier(BlockBackend *blk, Notifier *notify)
     notifier_list_add(&blk->insert_bs_notifiers, notify);
 }
 
-void coroutine_fn blk_co_io_plug(BlockBackend *blk)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    IO_CODE();
-    GRAPH_RDLOCK_GUARD();
-
-    if (bs) {
-        bdrv_co_io_plug(bs);
-    }
-}
-
-void coroutine_fn blk_co_io_unplug(BlockBackend *blk)
-{
-    BlockDriverState *bs = blk_bs(blk);
-    IO_CODE();
-    GRAPH_RDLOCK_GUARD();
-
-    if (bs) {
-        bdrv_co_io_unplug(bs);
-    }
-}
-
 BlockAcctStats *blk_get_stats(BlockBackend *blk)
 {
     IO_CODE();
diff --git a/block/plug.c b/block/plug.c
new file mode 100644
index 0000000000..98a155d2f4
--- /dev/null
+++ b/block/plug.c
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Block I/O plugging
+ *
+ * Copyright Red Hat.
+ *
+ * This API defers a function call within a blk_io_plug()/blk_io_unplug()
+ * section, allowing multiple calls to batch up. This is a performance
+ * optimization that is used in the block layer to submit several I/O requests
+ * at once instead of individually:
+ *
+ *   blk_io_plug(); <-- start of plugged region
+ *   ...
+ *   blk_io_plug_call(my_func, my_obj); <-- deferred my_func(my_obj) call
+ *   blk_io_plug_call(my_func, my_obj); <-- another
+ *   blk_io_plug_call(my_func, my_obj); <-- another
+ *   ...
+ *   blk_io_unplug(); <-- end of plugged region, my_func(my_obj) is called once
+ *
+ * This code is actually generic and not tied to the block layer. If another
+ * subsystem needs this functionality, it could be renamed.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/coroutine-tls.h"
+#include "qemu/notify.h"
+#include "qemu/thread.h"
+#include "sysemu/block-backend.h"
+
+/* A function call that has been deferred until unplug() */
+typedef struct {
+    void (*fn)(void *);
+    void *opaque;
+} UnplugFn;
+
+/* Per-thread state */
+typedef struct {
+    unsigned count;       /* how many times has plug() been called? */
+    GArray *unplug_fns;   /* functions to call at unplug time */
+} Plug;
+
+/* Use get_ptr_plug() to fetch this thread-local value */
+QEMU_DEFINE_STATIC_CO_TLS(Plug, plug);
+
+/* Called at thread cleanup time */
+static void blk_io_plug_atexit(Notifier *n, void *value)
+{
+    Plug *plug = get_ptr_plug();
+    g_array_free(plug->unplug_fns, TRUE);
+}
+
+/* This won't involve coroutines, so use __thread */
+static __thread Notifier blk_io_plug_atexit_notifier;
+
+/**
+ * blk_io_plug_call:
+ * @fn: a function pointer to be invoked
+ * @opaque: a user-defined argument to @fn()
+ *
+ * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
+ * section.
+ *
+ * Otherwise defer the call until the end of the outermost
+ * blk_io_plug()/blk_io_unplug() section in this thread. If the same
+ * @fn/@opaque pair has already been deferred, it will only be called once upon
+ * blk_io_unplug() so that accumulated calls are batched into a single call.
+ *
+ * The caller must ensure that @opaque is not freed before @fn() is invoked.
+ */
+void blk_io_plug_call(void (*fn)(void *), void *opaque)
+{
+    Plug *plug = get_ptr_plug();
+
+    /* Call immediately if we're not plugged */
+    if (plug->count == 0) {
+        fn(opaque);
+        return;
+    }
+
+    GArray *array = plug->unplug_fns;
+    if (!array) {
+        array = g_array_new(FALSE, FALSE, sizeof(UnplugFn));
+        plug->unplug_fns = array;
+        blk_io_plug_atexit_notifier.notify = blk_io_plug_atexit;
+        qemu_thread_atexit_add(&blk_io_plug_atexit_notifier);
+    }
+
+    UnplugFn *fns = (UnplugFn *)array->data;
+    UnplugFn new_fn = {
+        .fn = fn,
+        .opaque = opaque,
+    };
+
+    /*
+     * There won't be many, so do a linear search. If this becomes a bottleneck
+     * then a binary search (glib 2.62+) or different data structure could be
+     * used.
+     */
+    for (guint i = 0; i < array->len; i++) {
+        if (memcmp(&fns[i], &new_fn, sizeof(new_fn)) == 0) {
+            return; /* already exists */
+        }
+    }
+
+    g_array_append_val(array, new_fn);
+}
+
+/**
+ * blk_io_plug: Defer blk_io_plug_call() functions until blk_io_unplug()
+ *
+ * blk_io_plug/unplug are thread-local operations. This means that multiple
+ * threads can simultaneously call plug/unplug, but the caller must ensure that
+ * each unplug() is called in the same thread of the matching plug().
+ *
+ * Nesting is supported. blk_io_plug_call() functions are only called at the
+ * outermost blk_io_unplug().
+ */
+void blk_io_plug(void)
+{
+    Plug *plug = get_ptr_plug();
+
+    assert(plug->count < UINT32_MAX);
+
+    plug->count++;
+}
+
+/**
+ * blk_io_unplug: Run any pending blk_io_plug_call() functions
+ *
+ * There must have been a matching blk_io_plug() call in the same thread prior
+ * to this blk_io_unplug() call.
+ */
+void blk_io_unplug(void)
+{
+    Plug *plug = get_ptr_plug();
+
+    assert(plug->count > 0);
+
+    if (--plug->count > 0) {
+        return;
+    }
+
+    GArray *array = plug->unplug_fns;
+    if (!array) {
+        return;
+    }
+
+    UnplugFn *fns = (UnplugFn *)array->data;
+
+    for (guint i = 0; i < array->len; i++) {
+        fns[i].fn(fns[i].opaque);
+    }
+
+    /*
+     * This resets the array without freeing memory so that appending is cheap
+     * in the future.
+     */
+    g_array_set_size(array, 0);
+}
diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index d8bc39d359..e49c24f63d 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -537,7 +537,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
      * is below us.
      */
     if (inflight_atstart > IO_PLUG_THRESHOLD) {
-        blk_io_plug(dataplane->blk);
+        blk_io_plug();
     }
     while (rc != rp) {
         /* pull request from ring */
@@ -577,12 +577,12 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
 
         if (inflight_atstart > IO_PLUG_THRESHOLD &&
             batched >= inflight_atstart) {
-            blk_io_unplug(dataplane->blk);
+            blk_io_unplug();
         }
         xen_block_do_aio(request);
         if (inflight_atstart > IO_PLUG_THRESHOLD) {
             if (batched >= inflight_atstart) {
-                blk_io_plug(dataplane->blk);
+                blk_io_plug();
                 batched = 0;
             } else {
                 batched++;
@@ -590,7 +590,7 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
         }
     }
     if (inflight_atstart > IO_PLUG_THRESHOLD) {
-        blk_io_unplug(dataplane->blk);
+        blk_io_unplug();
     }
 
     return done_something;
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 8f65ea4659..b4286424c1 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1134,7 +1134,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
     bool suppress_notifications = virtio_queue_get_notification(vq);
 
     aio_context_acquire(blk_get_aio_context(s->blk));
-    blk_io_plug(s->blk);
+    blk_io_plug();
 
     do {
         if (suppress_notifications) {
@@ -1158,7 +1158,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
         virtio_blk_submit_multireq(s, &mrb);
     }
 
-    blk_io_unplug(s->blk);
+    blk_io_unplug();
     aio_context_release(blk_get_aio_context(s->blk));
 }
 
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 612c525d9d..534a44ee07 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -799,7 +799,7 @@ static int virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
         return -ENOBUFS;
     }
     scsi_req_ref(req->sreq);
-    blk_io_plug(d->conf.blk);
+    blk_io_plug();
     object_unref(OBJECT(d));
     return 0;
 }
@@ -810,7 +810,7 @@ static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
     if (scsi_req_enqueue(sreq)) {
         scsi_req_continue(sreq);
     }
-    blk_io_unplug(sreq->dev->conf.blk);
+    blk_io_unplug();
     scsi_req_unref(sreq);
 }
 
@@ -836,7 +836,7 @@ static void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
                 while (!QTAILQ_EMPTY(&reqs)) {
                     req = QTAILQ_FIRST(&reqs);
                     QTAILQ_REMOVE(&reqs, req, next);
-                    blk_io_unplug(req->sreq->dev->conf.blk);
+                    blk_io_unplug();
                     scsi_req_unref(req->sreq);
                     virtqueue_detach_element(req->vq, &req->elem, 0);
                     virtio_scsi_free_req(req);
diff --git a/block/meson.build b/block/meson.build
index 486dda8b85..fb4332bd66 100644
--- a/block/meson.build
+++ b/block/meson.build
@@ -23,6 +23,7 @@ block_ss.add(files(
   'mirror.c',
   'nbd.c',
   'null.c',
+  'plug.c',
   'qapi.c',
   'qcow2-bitmap.c',
   'qcow2-cache.c',
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 18:11:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 18:11:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541350.844050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43oM-0000iG-FJ; Tue, 30 May 2023 18:11:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541350.844050; Tue, 30 May 2023 18:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q43oM-0000i8-BN; Tue, 30 May 2023 18:11:10 +0000
Received: by outflank-mailman (input) for mailman id 541350;
 Tue, 30 May 2023 18:11:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BJNG=BT=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q43oL-0006QH-37
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 18:11:09 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58188cb8-ff15-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 20:11:06 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-141-uIvKrrFFNuyGlrGvyulc_w-1; Tue, 30 May 2023 14:11:03 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B1D3F8030D5;
 Tue, 30 May 2023 18:11:02 +0000 (UTC)
Received: from localhost (unknown [10.39.192.97])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5D95620296C6;
 Tue, 30 May 2023 18:10:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58188cb8-ff15-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685470265;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XV97eP38CRr/0Son/iV5qHHD0cfAwlv15JJUdp2ioGU=;
	b=FfP2CICNND/N3031cVSBNTsn0/fcpexVtQ6wthl9L7KQCsM8d5Yx33SczV+4MxZmYl66Fk
	ohZ1H01cxvSTDngEGRGANq4UV1jxHUD+QCy3cdlHqLWuSOr4UUq8yGyEr440ZeSfFamvJe
	MrTETaduyboLzYubaRlbf9W5seolqhA=
X-MC-Unique: uIvKrrFFNuyGlrGvyulc_w-1
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	eblake@redhat.com,
	Hanna Reitz <hreitz@redhat.com>,
	Fam Zheng <fam@euphon.net>,
	sgarzare@redhat.com,
	qemu-block@nongnu.org,
	xen-devel@lists.xenproject.org,
	Julia Suvorova <jusual@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>
Subject: [PATCH v3 5/6] block/linux-aio: convert to blk_io_plug_call() API
Date: Tue, 30 May 2023 14:09:58 -0400
Message-Id: <20230530180959.1108766-6-stefanha@redhat.com>
In-Reply-To: <20230530180959.1108766-1-stefanha@redhat.com>
References: <20230530180959.1108766-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4

Stop using the .bdrv_co_io_plug() API because it is not multi-queue
block layer friendly. Use the new blk_io_plug_call() API to batch I/O
submission instead.

Note that a dev_max_batch check is dropped in laio_io_unplug() because
the semantics of unplug_fn() are different from .bdrv_co_unplug():
1. unplug_fn() is only called when the last blk_io_unplug() call occurs,
   not every time blk_io_unplug() is called.
2. unplug_fn() is per-thread, not per-BlockDriverState, so there is no
   way to get per-BlockDriverState fields like dev_max_batch.

Therefore this check cannot simply be moved into laio_unplug_fn(). It
is not obvious that it affects performance in practice, so I am removing
it rather than devising a more complex mechanism to preserve it.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 include/block/raw-aio.h |  7 -------
 block/file-posix.c      | 28 ----------------------------
 block/linux-aio.c       | 41 +++++++++++------------------------------
 3 files changed, 11 insertions(+), 65 deletions(-)

diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index da60ca13ef..0f63c2800c 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -62,13 +62,6 @@ int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
 
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
 void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
-
-/*
- * laio_io_plug/unplug work in the thread's current AioContext, therefore the
- * caller must ensure that they are paired in the same IOThread.
- */
-void laio_io_plug(void);
-void laio_io_unplug(uint64_t dev_max_batch);
 #endif
 /* io_uring.c - Linux io_uring implementation */
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/block/file-posix.c b/block/file-posix.c
index 7baa8491dd..ac1ed54811 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2550,26 +2550,6 @@ static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
     return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
 }
 
-static void coroutine_fn raw_co_io_plug(BlockDriverState *bs)
-{
-    BDRVRawState __attribute__((unused)) *s = bs->opaque;
-#ifdef CONFIG_LINUX_AIO
-    if (s->use_linux_aio) {
-        laio_io_plug();
-    }
-#endif
-}
-
-static void coroutine_fn raw_co_io_unplug(BlockDriverState *bs)
-{
-    BDRVRawState __attribute__((unused)) *s = bs->opaque;
-#ifdef CONFIG_LINUX_AIO
-    if (s->use_linux_aio) {
-        laio_io_unplug(s->aio_max_batch);
-    }
-#endif
-}
-
 static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
 {
     BDRVRawState *s = bs->opaque;
@@ -3914,8 +3894,6 @@ BlockDriver bdrv_file = {
     .bdrv_co_copy_range_from = raw_co_copy_range_from,
     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
     .bdrv_refresh_limits = raw_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
@@ -4286,8 +4264,6 @@ static BlockDriver bdrv_host_device = {
     .bdrv_co_copy_range_from = raw_co_copy_range_from,
     .bdrv_co_copy_range_to  = raw_co_copy_range_to,
     .bdrv_refresh_limits = raw_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
@@ -4424,8 +4400,6 @@ static BlockDriver bdrv_host_cdrom = {
     .bdrv_co_pwritev        = raw_co_pwritev,
     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
     .bdrv_refresh_limits    = cdrom_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
@@ -4552,8 +4526,6 @@ static BlockDriver bdrv_host_cdrom = {
     .bdrv_co_pwritev        = raw_co_pwritev,
     .bdrv_co_flush_to_disk  = raw_co_flush_to_disk,
     .bdrv_refresh_limits    = cdrom_refresh_limits,
-    .bdrv_co_io_plug        = raw_co_io_plug,
-    .bdrv_co_io_unplug      = raw_co_io_unplug,
     .bdrv_attach_aio_context = raw_aio_attach_aio_context,
 
     .bdrv_co_truncate                   = raw_co_truncate,
diff --git a/block/linux-aio.c b/block/linux-aio.c
index 442c86209b..5021aed68f 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -15,6 +15,7 @@
 #include "qemu/event_notifier.h"
 #include "qemu/coroutine.h"
 #include "qapi/error.h"
+#include "sysemu/block-backend.h"
 
 /* Only used for assertions.  */
 #include "qemu/coroutine_int.h"
@@ -46,7 +47,6 @@ struct qemu_laiocb {
 };
 
 typedef struct {
-    int plugged;
     unsigned int in_queue;
     unsigned int in_flight;
     bool blocked;
@@ -236,7 +236,7 @@ static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
 {
     qemu_laio_process_completions(s);
 
-    if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
+    if (!QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
 }
@@ -277,7 +277,6 @@ static void qemu_laio_poll_ready(EventNotifier *opaque)
 static void ioq_init(LaioQueue *io_q)
 {
     QSIMPLEQ_INIT(&io_q->pending);
-    io_q->plugged = 0;
     io_q->in_queue = 0;
     io_q->in_flight = 0;
     io_q->blocked = false;
@@ -354,31 +353,11 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
     return max_batch;
 }
 
-void laio_io_plug(void)
+static void laio_unplug_fn(void *opaque)
 {
-    AioContext *ctx = qemu_get_current_aio_context();
-    LinuxAioState *s = aio_get_linux_aio(ctx);
+    LinuxAioState *s = opaque;
 
-    s->io_q.plugged++;
-}
-
-void laio_io_unplug(uint64_t dev_max_batch)
-{
-    AioContext *ctx = qemu_get_current_aio_context();
-    LinuxAioState *s = aio_get_linux_aio(ctx);
-
-    assert(s->io_q.plugged);
-    s->io_q.plugged--;
-
-    /*
-     * Why max batch checking is performed here:
-     * Another BDS may have queued requests with a higher dev_max_batch and
-     * therefore in_queue could now exceed our dev_max_batch. Re-check the max
-     * batch so we can honor our device's dev_max_batch.
-     */
-    if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch) ||
-        (!s->io_q.plugged &&
-         !s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending))) {
+    if (!s->io_q.blocked && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
 }
@@ -410,10 +389,12 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
 
     QSIMPLEQ_INSERT_TAIL(&s->io_q.pending, laiocb, next);
     s->io_q.in_queue++;
-    if (!s->io_q.blocked &&
-        (!s->io_q.plugged ||
-         s->io_q.in_queue >= laio_max_batch(s, dev_max_batch))) {
-        ioq_submit(s);
+    if (!s->io_q.blocked) {
+        if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch)) {
+            ioq_submit(s);
+        } else {
+            blk_io_plug_call(laio_unplug_fn, s);
+        }
     }
 
     return 0;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Tue May 30 19:37:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 19:37:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541363.844061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45A0-00016Q-IN; Tue, 30 May 2023 19:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541363.844061; Tue, 30 May 2023 19:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45A0-00016J-Dt; Tue, 30 May 2023 19:37:36 +0000
Received: by outflank-mailman (input) for mailman id 541363;
 Tue, 30 May 2023 19:37:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q459z-000168-6y; Tue, 30 May 2023 19:37:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q459y-0000Z1-Rs; Tue, 30 May 2023 19:37:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q459y-0008Ub-BD; Tue, 30 May 2023 19:37:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q459y-0003Ss-Ah; Tue, 30 May 2023 19:37:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Jow5PfeYiPPmwCfb1Fwni5odj0NYl/OGGuFLAKfREos=; b=S4M0mh+UNuZD4OD/aFKCs8hD4s
	dhseEM3ii4VVX1c2DR5IVEvyzbUs4Rpg3S+Xe7NJBwbXCbqeJIlMym8yZkX3CzLMiZgyJoWARlmbg
	LiA8rXCt51Gu/ae+rHF3Wjpallh7LnPoQGdJiQ3R0hSENIzCvN9uEMqcWogq6BFKWb2A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181016-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 181016: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=05422d276b56f2ebc2309a84a66fc5722c45ad74
X-Osstest-Versions-That:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 19:37:34 +0000

flight 181016 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181016/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  05422d276b56f2ebc2309a84a66fc5722c45ad74
baseline version:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc

Last test of basis   180963  2023-05-26 13:01:58 Z    4 days
Testing same since   181014  2023-05-30 11:02:30 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Cyril Rébert <slack@rabbit.lu>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f54dd5b53e..05422d276b  05422d276b56f2ebc2309a84a66fc5722c45ad74 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 30 19:52:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 19:52:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541369.844070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45O1-0003TL-Q6; Tue, 30 May 2023 19:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541369.844070; Tue, 30 May 2023 19:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45O1-0003TE-LI; Tue, 30 May 2023 19:52:05 +0000
Received: by outflank-mailman (input) for mailman id 541369;
 Tue, 30 May 2023 19:52:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8r7=BT=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q45O0-0003T8-2U
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 19:52:04 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7191c9f9-ff23-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 21:52:01 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7191c9f9-ff23-11ed-8611-37d641c3527e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1685476320;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LJ3pwe5VO2kqAY2WBljz3EqXfTvvu+3J346IL0PSsq8=;
	b=xpL4T8aPYWldl3m+QrLxN+At2fc+ZjAC6pEU4MuUjTSaZuj6IDFe6UakfjWnmmW8hKClix
	t7yS6D3ihlSEIf30gGCLXhg08YwOKfQvUQI3IgIl2aHuedpFPvjibaFkiCGXlhLxeGBGkW
	dGEwWvRoq2mQEzRs8oh56PiUJi2/s5QyH4qWIPL9JI/tQVBvtD0plLFK4LpxUmoH0fO1+H
	nGl/daQybL4Usq3PzahflEHPTKXjDjpOAZxUsLIwJWvToU8rpo8vLXAXsklySZIyOPhZMU
	EowGzZyfscIPTe6Qwpi0Biu3nomQmhobdOQI00py4ZRFq1Ph0F82SiNDS5umEg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1685476320;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=LJ3pwe5VO2kqAY2WBljz3EqXfTvvu+3J346IL0PSsq8=;
	b=lIHbo7YKz61HDwNtVfFSrVm0Wqi8uCKfx3OWvRqPS+VwOS/NPYknS3d/X/ObIVAcsa60Eq
	QcFAHQnns6at8LBA==
To: Sean Christopherson <seanjc@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>, Tom Lendacky
 <thomas.lendacky@amd.com>, LKML <linux-kernel@vger.kernel.org>,
 x86@kernel.org, David Woodhouse <dwmw2@infradead.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>, Arjan van de
 Veen <arjan@linux.intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Paul
 McKenney <paulmck@kernel.org>, Oleksandr Natalenko
 <oleksandr@natalenko.name>, Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>, Piotr Gorski
 <lucjan.lucjanov@gmail.com>, Usama Arif <usama.arif@bytedance.com>,
 Juergen Gross <jgross@suse.com>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org, Russell King
 <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
In-Reply-To: <ZHYqwsCURnrFdsVm@google.com>
References: <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name> <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx> <87cz2ija1e.ffs@tglx>
 <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name> <87wn0pizbl.ffs@tglx>
 <ZHYqwsCURnrFdsVm@google.com>
Date: Tue, 30 May 2023 21:51:59 +0200
Message-ID: <87leh5iom8.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 30 2023 at 09:56, Sean Christopherson wrote:
> On Tue, May 30, 2023, Thomas Gleixner wrote:
>> On Tue, May 30 2023 at 15:29, Kirill A. Shutemov wrote:
>> > On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
>> >> The decision to allow parallel bringup of secondary CPUs checks
>> >> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
>> >> parallel bootup because accessing the local APIC is intercepted and raises
>> >> a #VC or #VE, which cannot be handled at that point.
>> >> 
>> >> The check works correctly, but only for AMD encrypted guests. TDX does not
>> >> set that flag.
>> >> 
>> >> Check for cc_vendor != CC_VENDOR_NONE instead. That might be overbroad, but
>> >> definitely works for both AMD and Intel.
>> >
>> > It boots fine with TDX, but I think it is wrong. cc_get_vendor() will
>> > report CC_VENDOR_AMD even on bare metal if SME is enabled. I don't think
>> > we want it.
>> 
>> Right. Did not think about that.
>> 
>> But the same way is CC_ATTR_GUEST_MEM_ENCRYPT overbroad for AMD. Only
>> SEV-ES traps RDMSR if I'm understanding that maze correctly.
>
> Ya, regular SEV doesn't encrypt register state.

That aside. From a semantic POV, making this decision about parallel
bootup based on some magic CC encryption attribute is questionable.

I'm tending to just do the below and make this CC agnostic (except that
I couldn't find the right spot for SEV-ES to clear that flag.)

Thanks,

        tglx
---
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -871,5 +871,7 @@ void __init tdx_early_init(void)
 	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
 	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
 
+	x86_cpuinit.parallel_bringup = false;
+
 	pr_info("Guest detected\n");
 }
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_X86_PLATFORM_H
 #define _ASM_X86_PLATFORM_H
 
+#include <linux/bits.h>
 #include <asm/bootparam.h>
 
 struct ghcb;
@@ -177,11 +178,14 @@ struct x86_init_ops {
  * struct x86_cpuinit_ops - platform specific cpu hotplug setups
  * @setup_percpu_clockev:	set up the per cpu clock event device
  * @early_percpu_clock_init:	early init of the per cpu clock event device
+ * @fixup_cpu_id:		fixup function for cpuinfo_x86::phys_proc_id
+ * @parallel_bringup:		Parallel bringup control
  */
 struct x86_cpuinit_ops {
 	void (*setup_percpu_clockev)(void);
 	void (*early_percpu_clock_init)(void);
 	void (*fixup_cpu_id)(struct cpuinfo_x86 *c, int node);
+	bool parallel_bringup;
 };
 
 struct timespec64;
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1287,6 +1287,11 @@ bool __init arch_cpuhp_init_parallel_bri
 		return false;
 	}
 
+	if (!x86_cpuinit.parallel_bringup) {
+		pr_info("Parallel CPU startup disabled by the platform\n");
+		return false;
+	}
+
 	smpboot_control = STARTUP_READ_APICID;
 	pr_debug("Parallel CPU startup enabled: 0x%08x\n", smpboot_control);
 	return true;
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -126,6 +126,7 @@ struct x86_init_ops x86_init __initdata
 struct x86_cpuinit_ops x86_cpuinit = {
 	.early_percpu_clock_init	= x86_init_noop,
 	.setup_percpu_clockev		= setup_secondary_APIC_clock,
+	.parallel_bringup		= true,
 };
 
 static void default_nmi_init(void) { };
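
The core of the patch above is a platform opt-out flag consulted at SMP boot, rather than a CC-attribute check. A minimal user-space sketch of that pattern (the names mirror the patch; the stand-alone harness around them is invented for illustration and is not kernel code):

```c
#include <stdbool.h>
#include <stdio.h>

/* Mirrors the patch: the flag defaults to true, and platforms that cannot
 * take the parallel bootup path (e.g. TDX, where early APIC access raises
 * a #VE) clear it during early init. */
struct x86_cpuinit_ops {
    bool parallel_bringup;
};

static struct x86_cpuinit_ops x86_cpuinit = {
    .parallel_bringup = true,
};

/* Stand-in for tdx_early_init(): the platform opts out. */
static void tdx_early_init(void)
{
    x86_cpuinit.parallel_bringup = false;
}

/* Stand-in for arch_cpuhp_init_parallel_bringup(): honor the opt-out. */
static bool arch_cpuhp_init_parallel_bringup(void)
{
    if (!x86_cpuinit.parallel_bringup) {
        printf("Parallel CPU startup disabled by the platform\n");
        return false;
    }
    return true;
}
```

On the default (bare-metal) configuration the decision function returns true; once the TDX early-init stand-in has run, it returns false, which is exactly the decision flow the patch adds.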


From xen-devel-bounces@lists.xenproject.org Tue May 30 20:04:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:04:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541373.844080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45ZP-00054O-R4; Tue, 30 May 2023 20:03:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541373.844080; Tue, 30 May 2023 20:03:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45ZP-00054H-NN; Tue, 30 May 2023 20:03:51 +0000
Received: by outflank-mailman (input) for mailman id 541373;
 Tue, 30 May 2023 20:03:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDiB=BT=amd.com=Thomas.Lendacky@srs-se1.protection.inumbo.net>)
 id 1q45ZO-00054A-HN
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:03:50 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20629.outbound.protection.outlook.com
 [2a01:111:f400:7eaa::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 165c5a23-ff25-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 22:03:48 +0200 (CEST)
Received: from DM4PR12MB5229.namprd12.prod.outlook.com (2603:10b6:5:398::12)
 by BL1PR12MB5875.namprd12.prod.outlook.com (2603:10b6:208:397::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.22; Tue, 30 May
 2023 20:03:44 +0000
Received: from DM4PR12MB5229.namprd12.prod.outlook.com
 ([fe80::61f6:a95e:c41e:bb25]) by DM4PR12MB5229.namprd12.prod.outlook.com
 ([fe80::61f6:a95e:c41e:bb25%3]) with mapi id 15.20.6455.020; Tue, 30 May 2023
 20:03:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 165c5a23-ff25-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L4hPXa1YSSIBa5fuNUv0Fb9MsU3P88rj1MtibQvafk1fFfTJRvnnlLd0+wpGgpf2kuQLm9A1wP2FmhpW3AslmOmwGJaViV+82+I3RgREJaM0w2QB0oixOP3cleR3gtzf3P4jPPvRzHIwlG1NjlsyAIqYF9cbFo9xJ04ZE5YPdU4yH2t/ntCcKuSpo39ZBA+jbhDJkY1X/kXFKeRWyVj6N7rnB0Di+rx5NSUFASFKUAG8p2WswYBi33ZBqUVBaefkvgukoaEX8GKrU+3PeGOQ36150G//LgRAPbXrapOhlo4GsIADDm6da7wBFICe6/evxxEfDYPeNWAUrLrbb1UXaA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Nl+OYljQD4C9LS/PzvACvwX/9iw88kYN01a9458Wsa0=;
 b=PoGap6j+woQx2OBpHhbBKLtN+0MWqqbMct5QmnApQafM7oOp6swFNqGsAEz56zK8VQ7MrTuBaZXmPk3fGkjzdv1HaMAP1gI5LPBi97wLCFJQJe7wpiKQUNUU99RkYNRUHR2za8HRbs87tDkrGomaDqO0ciDyfHL9YATM0qpqkgw88cl8DCA9303O9s6RRnbEfee1JbV9VfhyJb0NNor+s9b2mvJWwRc8rqQIZkApDXD2lzADKC5WmqDCQ6IdVzr3Fv5abgepluyCn+Jmxn/CdAk5q6TXHNJCIKxZ5IXilTl/XtNyiA2GKvP5nOlzLVj7JXIDKveNPLzxFc/mjY3rew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Nl+OYljQD4C9LS/PzvACvwX/9iw88kYN01a9458Wsa0=;
 b=Kl+ZJkKSWJ7x4s4+l/VFfZ154Z7NK6RFcrwrBEywZAiMAtYBBu9b1oil6q5JPckk6YYq+jz4HtXwz6MzXD7MWQuMYOuCZfvwnjnj1fSArKVIqsGvAfBbIF28GFFFpxXWNmgwNEGXDwjb/VcP7OoT77LACTZjOD0xlX5ISEWJWzQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <8751e955-e975-c6d4-630c-02912b9ef9da@amd.com>
Date: Tue, 30 May 2023 15:03:40 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>,
 Sean Christopherson <seanjc@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>,
 LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Dave Hansen <dave.hansen@linux.intel.com>
References: <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name> <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx> <87cz2ija1e.ffs@tglx>
 <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name> <87wn0pizbl.ffs@tglx>
 <ZHYqwsCURnrFdsVm@google.com> <87leh5iom8.ffs@tglx>
From: Tom Lendacky <thomas.lendacky@amd.com>
In-Reply-To: <87leh5iom8.ffs@tglx>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SA1P222CA0067.NAMP222.PROD.OUTLOOK.COM
 (2603:10b6:806:2c1::17) To DM4PR12MB5229.namprd12.prod.outlook.com
 (2603:10b6:5:398::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM4PR12MB5229:EE_|BL1PR12MB5875:EE_
X-MS-Office365-Filtering-Correlation-Id: b1a28d12-c18a-4b27-6fb2-08db6148f884
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nkuw3U47JZGOMBiso4u2EISSINBIxZTeKi1Cf92rI/a1L2X6rtvDckWexW34yz0JJqpHgIkov9Kxte6+A5O9bjt28aduBFwBLA9R7TQdtTVivc3GGOEjINC1goae2531T3dUep2vcJwUXhLo7ApMEJ0i5dv7Jlot2+3ePuD4shV6g5tNz85FrR99yAiXXKjy7w5nyZYFbvMIfvi3ocK4mjtK+z5jG/68IIp8CtKhFjSuLLnhe0etGRW9H6aHbQHf4u40osE45j+8MihASescGzhC2x9w/rrJbzy3X08IxkaKE5IscQMkEMty5RCbXUBT0zuy9ktU/bxGvrmXUCGI18mIzG3+TZIVf6ECP+xtTXBLAo6n8Mh958ih8CepKrmNx4Xm+LhDGUw+GpfN7Ztaq3sxS+6GdeOIZHchqJXzET/E9obNM6Z3/zE+ltU34L4hEa+qU+/h6G2wKh/M3+1hGirqcwSVEql7XAWAezCR4ldGfeJ+TckNPNLy2+BhXY+gD060HB10yBxsngQFiYh7nhN9p+cxboPRw+eV1KcB8Ku7O1f3K4Yv0fI7a0sdP05Hm/RQ52GgjZk3HXxcmZs43jRugKAsHdEoVwrrQgWxYck63IrP0td3al2stp+nwlCFh0/QJbLSHi0rpNkO4rWRyw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM4PR12MB5229.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(396003)(346002)(136003)(366004)(39860400002)(376002)(451199021)(316002)(6666004)(38100700002)(41300700001)(6486002)(36756003)(478600001)(7406005)(7416002)(86362001)(2906002)(31696002)(2616005)(186003)(83380400001)(31686004)(6512007)(6506007)(110136005)(5660300002)(4326008)(26005)(53546011)(54906003)(66556008)(66476007)(8676002)(66946007)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NU8rNG9wNUN5YzZDZFhPTWRZN2xpSHJYYkNYblM5elp5RXU1OGdVOUkzYks1?=
 =?utf-8?B?S3pkcWR6TDhWWHQwR3ZVUWxuM2ZpNFNnUU82SnRua05YTDdLcGNsc25vK3J4?=
 =?utf-8?B?WFZvNmt5c3g5THZRbDBtWnFzeGREWlV1dldCSDV4VTdVQmxOT0k1L2RaZVZY?=
 =?utf-8?B?L3dkUnE3cU9vRkdMcXQ1QXlOdmM5VjdNWHJnZnhyUmdveGJYZWZMK2JDZzAv?=
 =?utf-8?B?OHJhbVB6OWY1SGpSQ1NXbDZkY2NJRGpHQWJVVnJLc1M4eXBtR3lBMTE2YmxU?=
 =?utf-8?B?MExXV2ViVHd5cGpaMExDQnlrc0NLVFNtQjlvN0JYbVIyTWJVbmZKRmVGQ1Uz?=
 =?utf-8?B?Nm5vVTV4WGx5MU5Udjc1ZGN3WEdyaEx5UVcwNWhlTmwranArUHBJU3ZyUWZZ?=
 =?utf-8?B?WGRkQ3Q5NGhHYS9tVC9iQ3RHamU1b3pWN2dHRjJUY2k2VlEvSTd4cjhQc2JT?=
 =?utf-8?B?Z2l2QXl5SitmZE4yajh4aWxOU2tkQVFSOVRHTlN6UnZjWkxUaHlvbWtjSUlV?=
 =?utf-8?B?MHJqMThlRVRRUjEveXI0N1hXY3JtZmJpY2FuTEhOSjlONmJ0c1NCaUtwT251?=
 =?utf-8?B?eEhIdGprTUJzaVRMNDNReVR5V1BtRUllOVdManR2bnNLcGtleVZDTzJwODN1?=
 =?utf-8?B?SS9XTGYyb3lXZVRXank5V2sxZEdmTzVzcSs0TkZGL1Y0T1pTT3pvazN2NFJl?=
 =?utf-8?B?WGh3Wk5aY2VjOUZTYWZucEJaK0ZST1NHY3FpQnNsQ250Zkc4Vk1nQy9HaUUv?=
 =?utf-8?B?Q1dYNlhyMDRWR1NTUTl0cVl4bmgreFFUckRvRVAvMW1LWU5IakRORFhOV0Ex?=
 =?utf-8?B?SEgweGt0V2kxdXV0Q25FcWE0aU1YVXpsVGN1K2k2ZnM0ZzZNeE5QNlRKdUlQ?=
 =?utf-8?B?dkxRUnY2ZFRDb2diVlVLQmIzRVJZN0NDNnZkUXN1U01nOWRUS01FWjZGZ0I0?=
 =?utf-8?B?YWVLbXorZjJtS294cG1XeVVpb0w2Q1Y3bitSTUhzbU91cnNob3RrRWNyaXpm?=
 =?utf-8?B?OXJzeVpldmd1NWt0TW5FejloV1E2WWNqcFUzYTlQUnBiQURoc1FxZnlLNjVL?=
 =?utf-8?B?aktsT21pU3g5Rk8wQ0pCUTF2S0twVHgzaWRVRStRM0tMTDlRdngraE1NTy9Y?=
 =?utf-8?B?eS9tajZFaFZPSDZMTUVhY085cGF4SG83dlpSN1lLdXhoaDVBQWN5NEY1dk16?=
 =?utf-8?B?Q1pva21ZRVVWRzdBVGd5YWNqR2RZYlc3bXBqQzY3VCtUVVhsVnQzeWlTYU1w?=
 =?utf-8?B?RE9CRDNyMWdoeFQ3S1VkWnR1UDRqV2oyclcyWGxJclhoZHY3dlZmbDVDalFa?=
 =?utf-8?B?bk11eVdlQnFDQWlDbzdvSDNxLzVqR1Y1eFlVYS95ZHhrN241V2JOK1BveHQ0?=
 =?utf-8?B?SmllNm5xVHhXblBaeUZUYVIxcTFiamhLem1XaHdycWRCbW5WeFpEQ25waHAw?=
 =?utf-8?B?Y2R2YXdMYnd3c1FBU3Q0S1FqOFowQVVpVWtMNDZzNm0yb2pxSGs1d0NGZWtt?=
 =?utf-8?B?ek80eHJqVkFzZzExYWNVek9VY3VUWm0weWt2MXBCb0dzWXFlY2hISmh4ZXVY?=
 =?utf-8?B?RjlqZFJ4ekNBZ3RlUnJhaWZmL3RoaGh5eXQzRW5hdDlteVM0bWRyTHBtSzQv?=
 =?utf-8?B?RkN5Z3E5bDFMKzN0bXpYSDhDSUFGVnhYVXgzSHRrdzlqT28ydFZ2R0NweTMw?=
 =?utf-8?B?RFI0dHJXTUdBL0ZjQVdkSmoyeGdsYXNlSVA3dnFUVVArR0pMSk1RcVhNVitM?=
 =?utf-8?B?L1gxOUZHNmxUQ2xrVEsvZE1vYy90ZFhqWkExVVgvbVN0WVQyQXQwQnJucEZ2?=
 =?utf-8?B?bHlZbGdKOGorOFZMaVIxMS9xRy9Ec2lTRTNNeGFFM3FDejF0ditJbHRFdGVo?=
 =?utf-8?B?aVFuMldSU3lqeWdLbFRIYkhQc3JiY2p1dWxMajdIWU9MUVo2SUhGdHpLRFho?=
 =?utf-8?B?M2ZsODhpWURTRWZmZWszWjM3czdrUXE4eG5uckt3bDlXTHlRazgxKzZvZGZH?=
 =?utf-8?B?VjM2bUxTUE1RUFRybFhDQ0lFbTJjdVgzK2YxaEJIT0hnQ051V05TbXo4ZVFJ?=
 =?utf-8?B?eDdvVE10THdkWlc1blpLRkpkL2lzam1OWm82blo5cGI3MmhwKzlGZ0dZb095?=
 =?utf-8?Q?qQoP6+yBoDjs0UKQbfGHQcD0C?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b1a28d12-c18a-4b27-6fb2-08db6148f884
X-MS-Exchange-CrossTenant-AuthSource: DM4PR12MB5229.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 May 2023 20:03:44.0087
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BgYQExJGs99BJ7SAy++R+HNsumPeMSNB7pUuIBtjQITEK/DyzCxa7KE4G8CnxYJqvrym+6sNkJedUrrwcXbSkQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR12MB5875

On 5/30/23 14:51, Thomas Gleixner wrote:
> On Tue, May 30 2023 at 09:56, Sean Christopherson wrote:
>> On Tue, May 30, 2023, Thomas Gleixner wrote:
>>> On Tue, May 30 2023 at 15:29, Kirill A. Shutemov wrote:
>>>> On Tue, May 30, 2023 at 02:09:17PM +0200, Thomas Gleixner wrote:
>>>>> The decision to allow parallel bringup of secondary CPUs checks
>>>>> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
>>>>> parallel bootup because accessing the local APIC is intercepted and raises
>>>>> a #VC or #VE, which cannot be handled at that point.
>>>>>
>>>>> The check works correctly, but only for AMD encrypted guests. TDX does not
>>>>> set that flag.
>>>>>
>>>>> Check for cc_vendor != CC_VENDOR_NONE instead. That might be overbroad, but
>>>>> definitely works for both AMD and Intel.
>>>>
>>>> It boots fine with TDX, but I think it is wrong. cc_get_vendor() will
>>>> report CC_VENDOR_AMD even on bare metal if SME is enabled. I don't think
>>>> we want it.
>>>
>>> Right. Did not think about that.
>>>
>>> But the same way is CC_ATTR_GUEST_MEM_ENCRYPT overbroad for AMD. Only
>>> SEV-ES traps RDMSR if I'm understanding that maze correctly.
>>
>> Ya, regular SEV doesn't encrypt register state.
> 
> That aside. From a semantic POV, making this decision about parallel
> bootup based on some magic CC encryption attribute is questionable.
> 
> I'm tending to just do the below and make this CC agnostic (except that
> I couldn't find the right spot for SEV-ES to clear that flag.)

Maybe in sme_sev_setup_real_mode() in arch/x86/realmode/init.c? You could 
clear the flag within the CC_ATTR_GUEST_STATE_ENCRYPT check.

Thanks,
Tom

> 
> Thanks,
> 
>          tglx
> ---
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -871,5 +871,7 @@ void __init tdx_early_init(void)
>   	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
>   	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
>   
> +	x86_cpuinit.parallel_bringup = false;
> +
>   	pr_info("Guest detected\n");
>   }
> --- a/arch/x86/include/asm/x86_init.h
> +++ b/arch/x86/include/asm/x86_init.h
> @@ -2,6 +2,7 @@
>   #ifndef _ASM_X86_PLATFORM_H
>   #define _ASM_X86_PLATFORM_H
>   
> +#include <linux/bits.h>
>   #include <asm/bootparam.h>
>   
>   struct ghcb;
> @@ -177,11 +178,14 @@ struct x86_init_ops {
>    * struct x86_cpuinit_ops - platform specific cpu hotplug setups
>    * @setup_percpu_clockev:	set up the per cpu clock event device
>    * @early_percpu_clock_init:	early init of the per cpu clock event device
> + * @fixup_cpu_id:		fixup function for cpuinfo_x86::phys_proc_id
> + * @parallel_bringup:		Parallel bringup control
>    */
>   struct x86_cpuinit_ops {
>   	void (*setup_percpu_clockev)(void);
>   	void (*early_percpu_clock_init)(void);
>   	void (*fixup_cpu_id)(struct cpuinfo_x86 *c, int node);
> +	bool parallel_bringup;
>   };
>   
>   struct timespec64;
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1287,6 +1287,11 @@ bool __init arch_cpuhp_init_parallel_bri
>   		return false;
>   	}
>   
> +	if (!x86_cpuinit.parallel_bringup) {
> +		pr_info("Parallel CPU startup disabled by the platform\n");
> +		return false;
> +	}
> +
>   	smpboot_control = STARTUP_READ_APICID;
>   	pr_debug("Parallel CPU startup enabled: 0x%08x\n", smpboot_control);
>   	return true;
> --- a/arch/x86/kernel/x86_init.c
> +++ b/arch/x86/kernel/x86_init.c
> @@ -126,6 +126,7 @@ struct x86_init_ops x86_init __initdata
>   struct x86_cpuinit_ops x86_cpuinit = {
>   	.early_percpu_clock_init	= x86_init_noop,
>   	.setup_percpu_clockev		= setup_secondary_APIC_clock,
> +	.parallel_bringup		= true,
>   };
>   
>   static void default_nmi_init(void) { };


From xen-devel-bounces@lists.xenproject.org Tue May 30 20:06:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:06:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541378.844090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45bv-0005hs-BD; Tue, 30 May 2023 20:06:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541378.844090; Tue, 30 May 2023 20:06:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45bv-0005hl-7r; Tue, 30 May 2023 20:06:27 +0000
Received: by outflank-mailman (input) for mailman id 541378;
 Tue, 30 May 2023 20:06:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IZK1=BT=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q45bt-0005hd-8r
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:06:25 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [85.215.255.21]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 72d224ac-ff25-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 22:06:22 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4UK6LJxH
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 30 May 2023 22:06:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72d224ac-ff25-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1685477181; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=gPMvZ02d8pgWLJXgf0m0QKwKMlGMZYEFBgWC8P5RXNRn591fXKRmJMx8OwdsQm8zY3
    9LzQgXWf51Osa6KmhZWrpggVBiqjlfOfBbTKL8QLhvjAHXkFpTLxYFyBI/jdJRB/g00U
    F5KKKCs/PntPZyi1pNYRgIgpeAdO7Ft09uPLnQzwI/wPlN/o00bBEMaTQ0M+Rx+MM7tn
    Ua8fsk1DwWR+wzgEKQh2CywATO5wqi6F7+wkDLYCF3RF8hSEbDZQ4g1OUwfhRMHb/0nr
    tm88QggUsLgFnkUs+jdP6qOOa2NE+CkMxz+VgNa8c3GOJ8UvWv1hREhuBOcXf8jcaN2M
    USLA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1685477181;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=uYfGoVSOyGoPL5PshkzxCPvn4zASsscH3kmekFXLH/I=;
    b=QhRFOeeCj0rQgY7qv+kix7AJPB+uyqyLr2NLKrYF+YzlO30ShpC/pS+DJgiCXwjmpx
    JeC7GOkaDBhxU2V7pc5Ge7BeifUoBWxYvF9bTyXxHFAkgRaf3mdT9XFdk8cc6teZXnt9
    nxb5PSONfnUtjZSjLdwUNZoAYN7DWVQu47z+CofIltxjU3t14MraRGuAXGhzFNy2c1dc
    eP+8hMuwkNU/Sot/w+zuiFFu0Srsl+3idQng5TbQVl0G5MXa0M8Ha/CZ5yPY0vui5NkZ
    QnBN1GGLOrKDpaB1iikjNy1voPynQ1BEpmBE1xYYsDfKOwuCWmNq1HSMhwC7KC0jjA63
    yLlg==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1685477181;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=uYfGoVSOyGoPL5PshkzxCPvn4zASsscH3kmekFXLH/I=;
    b=CPJp8AMmAV/UosZqAkzq6KWPm9Jil21sz/W3vjqC2TP7rt5TZjZ09P3txew96H9J/f
    qxg1gkQ84F2p/KIlhlMEnxOApqHJec9hMvCzYhquGo7j1EngB5rxLjmVBxkUH8Hq+O7q
    GswIZyZ5jxLdi2qcL/d1Rp6F+73MR7J4m4OqpcB+WQaMsQNaLn/ZBp/dkEiRfF9PF9E+
    D4KB6Urfyp7L45YgpszjYyMJQu3o37/T/h5pOwvc7RbzPFNVUpVPG+sFUieW9ybQiojT
    CM2x/DgCg4ZUW7qW3rPMESVHy6BZ7lR/31tl/YiqwGYGVlIWxngZf5YyzFtb87lKwgrc
    xveA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1685477181;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=uYfGoVSOyGoPL5PshkzxCPvn4zASsscH3kmekFXLH/I=;
    b=gzHUix8uIxiBd7RsZ/XhwMYzyfLNmGRBzXnCEfpWwPca2g+A6dhyRr0kxhQsAnRd3b
    olybCPMNRueS/rk0+EAQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4BhOIaRvsld/sN75OpaIeOWAiVTRkMz6wPlUdSg=="
Date: Tue, 30 May 2023 22:06:13 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: xentrace buffer size, maxcpus and online cpus
Message-ID: <20230530220613.4c4da5cc.olaf@aepfle.de>
In-Reply-To: <578d341d-0c54-de64-73e7-1dfc7e5d7584@suse.com>
References: <20230530095859.60a3e4ea.olaf@aepfle.de>
	<578d341d-0c54-de64-73e7-1dfc7e5d7584@suse.com>
X-Mailer: Claws Mail 20230504T161344.b05adb60 hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/VbbszYJd+QIJaS=1chhlVSo";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/VbbszYJd+QIJaS=1chhlVSo
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 30 May 2023 10:41:07 +0200 Jan Beulich <jbeulich@suse.com>:

> Using this N would be correct afaict, but that N isn't num_online_cpus().
> CPUs may have been offlined by the time trace buffers are initialized, so
> without looking too closely I think it would be num_present_cpus() that
> you're after.

In my testing num_online_cpus returns N, while num_present_cpus returns
all available pcpus. There is also num_possible_cpus, but this appears to
be an ARM thing.

If Xen is booted with maxcpus=, is there a way to use the remaining cpus?
In case this is possible, the code needs adjustment to reinitialize the
trace buffers. This is not an easy change. But if the remaining cpus
will remain offline, then something like this may work:

+++ b/xen/common/trace.c
@@ -110,7 +110,8 @@ static int calculate_tbuf_size(unsigned int pages, uint16_t t_info_first_offset)
     struct t_info dummy_pages;
     typeof(dummy_pages.tbuf_size) max_pages;
     typeof(dummy_pages.mfn_offset[0]) max_mfn_offset;
-    unsigned int max_cpus = nr_cpu_ids;
+    unsigned int nr_cpus = num_online_cpus();
+    unsigned int max_cpus = nr_cpus;
     unsigned int t_info_words;
 
     /* force maximum value for an unsigned type */
@@ -148,11 +149,11 @@ static int calculate_tbuf_size(unsigned int pages, uint16_t t_info_first_offset)
      * NB this calculation is correct, because t_info_first_offset is
      * in words, not bytes
      */
-    t_info_words = nr_cpu_ids * pages + t_info_first_offset;
+    t_info_words = nr_cpus * pages + t_info_first_offset;
     t_info_pages = PFN_UP(t_info_words * sizeof(uint32_t));
     printk(XENLOG_INFO "xentrace: requesting %u t_info pages "
            "for %u trace pages on %u cpus\n",
-           t_info_pages, pages, nr_cpu_ids);
+           t_info_pages, pages, nr_cpus);
     return pages;
 }
 
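
The sizing arithmetic the hunk changes can be checked in isolation. A small stand-alone sketch (PAGE_SIZE and the example CPU counts and offset are assumptions chosen for illustration; as the comment in the hunk notes, t_info_first_offset is in words):

```c
#include <stdint.h>

#define PAGE_SIZE 4096u
#define PFN_UP(x) (((x) + PAGE_SIZE - 1) / PAGE_SIZE)

/* t_info page count as computed in calculate_tbuf_size(): one word per
 * trace page per CPU, plus the first-offset words, rounded up to whole
 * pages after converting words to bytes. */
static unsigned int t_info_pages_for(unsigned int cpus, unsigned int pages,
                                     unsigned int t_info_first_offset)
{
    unsigned int t_info_words = cpus * pages + t_info_first_offset;
    return PFN_UP(t_info_words * sizeof(uint32_t));
}
```

With 32 trace pages per CPU and an offset of 4 words, sizing for nr_cpu_ids = 64 requests 3 t_info pages, while sizing for 4 online CPUs needs only 1, which is the saving the change is after.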


Olaf

--Sig_/VbbszYJd+QIJaS=1chhlVSo
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmR2VzUACgkQ86SN7mm1
DoCAhg//WFjBXPZT6Z/4yq4WxlJF0+i2nBvetK+oMxV+FtZuAsR0rS8CZWKKF1u5
MwAGwjqDXH+xgYCZiYrXRXKPj72tjkQh7e+GHmwxz/q1DsjDryJxFFB5asbS4lIw
Wpq2zZ78J9IQBsCfodY9Bi/mYKIPHVNA/uiWIJRaXzt7j4+Usbrh/biebiPsNOu9
qKPYfWAhDFX4eTPoDQgSqDkeKaUQ0bN/d1h1fLPs8CdLVBaEZJfBtJisWIQ0CRd1
Np359ttizbjjC/PR/NmZmmFGwqlKoL13GghHn2/91PGSTiAThKWM+b0pQg6NH4gI
+MDQ/KN+O4o4cRClbu2O/vSohHMR/aDXbHdmRm/OF1+JEg47MSXp1Dl2f/PFJYER
HHhVGGeSzi+NMKn8jjFD4zvc91T0Bsg8DkR/ibPmv0K2eM0daRqNLEThhoKjXFsb
kiq0pNpHn+DJ07vYLX8f+6usnH+1k4GprClUuV7uMpZskaUoGuuJHCak53xstJ4m
hvGIDYqcEJmzQ4JhHi7rojxvAk0QgZyjOTzTHxwZWNluS3hUXRO/8liCopfjSo3C
xn84GeDF5kHoRB69LKH8FlvDpogag0gGftMQkWHjupex1Zth9x6jpGf0pW527RdI
A2zGycAdoXDR3gzEtwLm4LiAHG7ZV/TiLN579SQbADRXw9lMdms=
=fqHp
-----END PGP SIGNATURE-----

--Sig_/VbbszYJd+QIJaS=1chhlVSo--


From xen-devel-bounces@lists.xenproject.org Tue May 30 20:27:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:27:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541383.844100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45vx-00089O-1u; Tue, 30 May 2023 20:27:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541383.844100; Tue, 30 May 2023 20:27:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45vw-00089H-U2; Tue, 30 May 2023 20:27:08 +0000
Received: by outflank-mailman (input) for mailman id 541383;
 Tue, 30 May 2023 20:27:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IZK1=BT=aepfle.de=olaf@srs-se1.protection.inumbo.net>)
 id 1q45vv-000899-Cu
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:27:07 +0000
Received: from mo4-p00-ob.smtp.rzone.de (mo4-p00-ob.smtp.rzone.de
 [81.169.146.162]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 57787e44-ff28-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 22:27:05 +0200 (CEST)
Received: from sender by smtp.strato.de (RZmta 49.4.0 AUTH)
 with ESMTPSA id x6987cz4UKR4Jzg
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Tue, 30 May 2023 22:27:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57787e44-ff28-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; t=1685478424; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=VnJEUutl4jkMShBSlAMojyxUABKr49OJDLGqYKlf2+zFQD7cFMfWUcQEuXo/BIDRZ7
    Yjg6PCbY5Bo9zzyQnPYUMOJgsrkjs7q5pihBNZcXBDA06GI4psVWQHNGyEConKpX6NEa
    bFgBJxO0BB3VWL9FsSsmq8VT0wxYOM8AKnMNUZiA1JO7T9GQ7iZyQYStrHMbEUxcFiHk
    P0DJBKledmCiFBA1uY7Iqg0++qORJBr/QfB0m0o12uLq/Zozw1TjbcSSbcTVm1vlwRr6
    WuEw+BrFZ7qdO3Unfgk4u1067pOIlhyR7IIRrKWuSBJw+xhpsUzVGnyWpbv7bGBgW+QU
    M4hQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1685478424;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:To:From:Date:Cc:Date:From:
    Subject:Sender;
    bh=s2plSWGis1tu1gHRH55vXU6XuYyXsLmkSD2I2Znt0Lo=;
    b=sroE3XSrUPUHg4MgF8n5rkUgownQW0v4JwcKPGPlSAWW0aLfQhHhPExudm49LKjK92
    qNaPiD/bPOvu8ZtHCmyDv1mb8ZV0SWOv1hXG3qBjuQZz6QoBrpQPAEbVjgORPvbCMTg1
    J6vGgiqGxXd2fM0KSgZfeOI/DRCcVLQ2XFvhUFHGSpS2nUz+Iz0PtCqtGezoBbIYaOws
    dtG9n9OqmNVX2NqWAXiWPtleFEztZCad4yMle9wN9aAvihv0DF6YBJSSkr5dvDTtYVnN
    ftRPhBljXut5H8tYpecgA4BKlyfVlIwQo9DlsvPJ/VsyH+M8Eo6p4cIRUNCZ7S3huBs1
    iRTA==
ARC-Authentication-Results: i=1; strato.com;
    arc=none;
    dkim=none
X-RZG-CLASS-ID: mo00
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1685478424;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:To:From:Date:Cc:Date:From:
    Subject:Sender;
    bh=s2plSWGis1tu1gHRH55vXU6XuYyXsLmkSD2I2Znt0Lo=;
    b=HbJTI4P029QJ2tyWb+HHMeyYokWyzEU9vegdkD8Aivnz7bweLUDdRBh1xJ3Iv0Oe+t
    L6hdMU6GPTCIGJepZL6uxkKTmnlEX34UOAUTOcDr+EKEtswi/4nFy9Ryo5Z5WMDHx3qe
    fIr4P6LZmygjaW8FLw2Fpa4nb2ihIraGL0xRl9LHhl5rYGYMZawPd6UzyA9jDvtztLcl
    UMzLSxEsELk0BSY/67GtgklTttL7NXg50VXuIA54qX1zGM7JJvDn2dmN6wHq2EYq1u/4
    xoQFoFHssqfaehStQPRrwT6Ba44DM6bgISkhqnmUfPmfHzrO2y836upOAOHmpFpHiN5r
    F2Ng==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; t=1685478424;
    s=strato-dkim-0003; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:To:From:Date:Cc:Date:From:
    Subject:Sender;
    bh=s2plSWGis1tu1gHRH55vXU6XuYyXsLmkSD2I2Znt0Lo=;
    b=mmwf29V9qBoLS00pIPCnj94GJVuSsM9ISky1xi1fNvMvguqdIfF/O45FH7JBXslldn
    jgWRxtAUNMjolaM9+UBg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisR4BhOIaRvsld/sN75OpaIeOWAiVTRkMz6wPlUdSg=="
Date: Tue, 30 May 2023 22:26:56 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: Re: HVM domU not created anymore in staging
Message-ID: <20230530222656.30169b97.olaf@aepfle.de>
In-Reply-To: <20230530094654.372003a0.olaf@aepfle.de>
References: <20230530094654.372003a0.olaf@aepfle.de>
X-Mailer: Claws Mail 20230504T161344.b05adb60 hat ein Softwareproblem, kann man nichts machen.
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/vc1KSlax09WF4JCx0mJkAOd";
 protocol="application/pgp-signature"; micalg=pgp-sha256
Content-Transfer-Encoding: 7bit

--Sig_/vc1KSlax09WF4JCx0mJkAOd
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Tue, 30 May 2023 09:46:54 +0200 Olaf Hering <olaf@aepfle.de>:

> Did osstest actually spot any HVM error?
> To me it looks like 180992 shows many failures, but nothing fatal.

I think osstest does not run Xen with xentrace enabled,
so it cannot spot the failure I'm seeing.

Olaf

--Sig_/vc1KSlax09WF4JCx0mJkAOd
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmR2XBAACgkQ86SN7mm1
DoBqMxAAlBcAIulIXu2AQW8w9Bz+9PIMUiL4/Ac1lPmPTBr2eDv7SVZjDcjrPfQf
0lFJdYzxbX9quh3qFkSEZ/3Q2ZH+hFT3WlwXYJObSPq189QvEaQLZZR/q3THO+VK
va2q91CgQ4cRLLuAvh3PX1UbssLa0JeTrBTzNiav7KflBUltM35r4PSxrcOslwwp
nNKIu1hYTSQah9cTpR9Hi5tLZuN/K3qwfP6lMu9rZPwA8MbIjaD+eBlBSa5RAqpG
k1VZbKGc149A9V2nl3tNB5zG6Dh6JGKJ4Olt4gxOM4y/2Jc6+K3natRKwnKnbnLl
LG1sc3gADh99+CSfDszOshE1NhRxnSf+yW/Hw+c+QSTeC95hkb+i+QZkklNlAXku
eDA0uzMWtVYzzsOIFsTOONsefZ0JsjSRPrxp/NsZW2Jd033Q/NmnxIYAQE/eruQ2
/LzQAU7tjv8iukH6dT4vZVgP5PwIWWu0GFoyrzl1hwyg4HFemzFSJGLIOR+6BnBS
Xkzntx3Za3bjYcryh7kvikKNHGhc5rAmh0dcV3wUHPy+maAR4QpCOw3Sq1sB5srN
00tKBzZr91+nBoDJW46NmmufNUpi0pnXNfCqad7gVEZg1EHHxEB4SaYQ71lhH83Z
ZbB77YOkhWjppfDaPWbWcTMTd/f9VsySBXtlfZJ8C2jDTQqmx7k=
=geh2
-----END PGP SIGNATURE-----

--Sig_/vc1KSlax09WF4JCx0mJkAOd--


From xen-devel-bounces@lists.xenproject.org Tue May 30 20:30:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:30:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541388.844110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45yg-0000KZ-DD; Tue, 30 May 2023 20:29:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541388.844110; Tue, 30 May 2023 20:29:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q45yg-0000KS-AD; Tue, 30 May 2023 20:29:58 +0000
Received: by outflank-mailman (input) for mailman id 541388;
 Tue, 30 May 2023 20:29:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q45yf-0000KG-3h; Tue, 30 May 2023 20:29:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q45ye-0001vP-TA; Tue, 30 May 2023 20:29:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q45ye-0001K0-FX; Tue, 30 May 2023 20:29:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q45ye-0007s2-FA; Tue, 30 May 2023 20:29:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MK0pldEa71V3Vam6st4VmYGQJNpOPCeczFoc7/Ct3sM=; b=PuOQXC5M661H+sRzpLRManIHc/
	MHydSvPSRCixuUO6kNGlDxIaa7lo3bk+5vDWtVUS1dDGtOIUy27BMS7lsEe+jZmnu2t6EVKR9QLc/
	mW4+TKOtnoXCJToFUvIikcdm948tP1jk+BGt94PVmolggnIxhtxffmozZ7s/VjtmCJyU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181012-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 181012: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8b817fded42d8fe3a0eb47b1149d907851a3c942
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 20:29:56 +0000

flight 181012 linux-linus real [real]
flight 181017 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181012/
http://logs.test-lab.xenproject.org/osstest/logs/181017/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8b817fded42d8fe3a0eb47b1149d907851a3c942
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   44 days
Failing since        180281  2023-04-17 06:24:36 Z   43 days   82 attempts
Testing same since   181002  2023-05-29 16:11:58 Z    1 days    3 attempts

------------------------------------------------------------
2553 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 323291 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 30 20:31:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:31:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541394.844120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460L-0001im-OQ; Tue, 30 May 2023 20:31:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541394.844120; Tue, 30 May 2023 20:31:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460L-0001id-LO; Tue, 30 May 2023 20:31:41 +0000
Received: by outflank-mailman (input) for mailman id 541394;
 Tue, 30 May 2023 20:31:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q460K-0001iX-K7
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:31:40 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8fa3fb6-ff28-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 22:31:37 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.west.internal (Postfix) with ESMTP id E6A0D3200906;
 Tue, 30 May 2023 16:31:33 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Tue, 30 May 2023 16:31:34 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:31:32 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8fa3fb6-ff28-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:message-id:mime-version:reply-to:sender:subject:subject:to:to;
	 s=fm1; t=1685478693; x=1685565093; bh=mBn4BtTIuaSR4k6kR5OhxVvo4
	bxwgD7eI3h6KSrS77Y=; b=OV5XCQmm52GKorJgytjmRhmDfFno0a/Pgi7UWpDrL
	hmac3z3F+t+j3QvIAwMR4SXWsjE00Dk8J2pl4/bzWVkMLVwTCqNUhcwwpYQQyRID
	prcGg+R9vE6c37cpOzefITiocc0xsV6Klz2vTQHMOb7Bq4dZxG0KjFsbkZlmP2KN
	4UGnkgg5UwPtIm1WL0u1FQq0kfl3kpp3phUtHwN0psepWye8YFq97zYVHkIfuPNZ
	XwdZ5Q15Y7S6lVoiCRL7QUWcNMt4WbaKMKTLoIV+vAoQTKxR0jmJ6p2DjVSE/Nvp
	eo3yhydz4SncBJZ1C9N20dkMonTC3o23ffZYDb/MYwasA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:message-id:mime-version:reply-to:sender
	:subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm1; t=1685478693; x=1685565093; bh=m
	Bn4BtTIuaSR4k6kR5OhxVvo4bxwgD7eI3h6KSrS77Y=; b=AsrK9fySZ3AZJANui
	7+TxVT0aD6Mmt+OOYUCS1xBbGC40JB4t9Zr57FAhjpU9BbocgsFPZv3WNzEHoyiw
	T9iw4+MxlfR5NR9KNQD77hE9wDHDsXxa0IM+szCQBqgjx9Jzd+6H0dn8mz8XkxbN
	LRLDdd58fW+Zs20WpFjccSO76SGJy2Ryqfg87kJHl9jRUt5dPpNasxKX+RAc4T1L
	bXo0Gg/hA2d6xc6Bcd5Hr+l3xrVftdl7twbjEh52cjWu0wYiR2RsWqfKFx7vZ+4X
	8esMxX3l4CZV0yTt6g7boPGvPu+uu0wNvJUK9FRrodoQFrKkOkYS+Z9UWiOdk64K
	9dIMg==
X-ME-Sender: <xms:JV12ZKw_sleDe0CYhkOy1rfxDsKjpyELd-fxjymHbgcgq5lekwsKZA>
    <xme:JV12ZGTAn57_8e1GFIsXL0ysKh0FpixwJKQ-TddPSbdxYp9ycPOiTVwaNOrltt2ad
    ug8Bk31vxxSZPs>
X-ME-Received: <xmr:JV12ZMUabxwRmIasWNcUfFJi1r8ayOF9Gz99yU6V6s-nhGY0G9ciErqgnlwdmTNxI5uZ2MoKV6c>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekjedgudeglecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffogggtgfesthekredtredtjeenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhephfeggfeiiedtieejgedutdekgfet
    geehheegteekvefhfefgudehtdevleegueegnecuvehluhhsthgvrhfuihiivgeptdenuc
    frrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhnghhs
    lhgrsgdrtghomh
X-ME-Proxy: <xmx:JV12ZAjQ2rV9YCzr3z2bw2d41_MyNZJp_kvJGEi-trj6PoDKHYaXqQ>
    <xmx:JV12ZMCHtfgQyZp2PZhX45i91mJDIhD_ys-ao3poTncETHRblZjImQ>
    <xmx:JV12ZBLpJyOl-upuH58Tw1NYpbK7gxAD5Bx9jCgXSTOlNIObWkK6QA>
    <xmx:JV12ZKAryiWCZk5crUODpJpFTwj0sB-WzC-8b4F9y9v0UR85ql4CiA>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 00/16] Diskseq support in loop, device-mapper, and blkback
Date: Tue, 30 May 2023 16:31:00 -0400
Message-Id: <20230530203116.2008-1-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This work aims to allow userspace to create and destroy block devices
in a race-free way, and to allow them to be exposed to other Xen VMs via
blkback without races.

Changes since v1:

- Several device-mapper fixes added.
- The diskseq is now a separate Xenstore node, rather than being part of
  physical-device.
- Potentially backwards-incompatible changes to device-mapper now
  require userspace opt-in.
- The code has been tested: I have a block script written in C that uses
  these changes to successfully boot a Xen VM.
- The core block layer is almost completely untouched.  Instead of
  exposing a block device inode directly to userspace, device-mapper
  ioctls that create a block device now return that device's diskseq.
  Userspace can then use that diskseq to safely open the device.
  Furthermore, ioctls that operate on an existing device-mapper device
  now accept a diskseq parameter, which can be used to prevent races.

There are a few changes that make device-mapper's table validation
stricter.  Two of them are clear-cut fixes for memory safety bugs: one
prevents a misaligned pointer dereference in the kernel, and the other
prevents pointer arithmetic overflow that could cause the kernel to
dereference a userspace pointer, especially on 32-bit systems.  One
fixes a double-fetch bug that happens to be harmless right now, but
would make distribution backports of subsequent patches unsafe.  The
remaining fixes prevent totally nonsensical tables from being accepted.
This includes parameter strings that overlap the subsequent target spec,
and target specs that overlap the 'struct dm_ioctl' or each other.  I
doubt any extant userspace generates such tables.
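
The overflow-safe pattern behind these validation fixes can be sketched as follows (names and layout are illustrative, not the actual dm-ioctl code): instead of computing `spec_off + next` and comparing, which can wrap and make an out-of-bounds spec look in-bounds, compare against the buffer length before doing any addition:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative check: 'next' is the offset of the following target
 * spec relative to the current one at 'spec_off' inside a buffer of
 * 'buf_len' bytes.  Returns 1 if spec_off + next stays in bounds,
 * 0 otherwise, without ever performing an addition that could wrap. */
int spec_in_bounds(size_t buf_len, size_t spec_off, uint32_t next)
{
    if (next > buf_len)             /* reject before any arithmetic */
        return 0;
    if (spec_off > buf_len - next)  /* i.e. spec_off + next > buf_len */
        return 0;
    return 1;
}
```

A naive `if (spec_off + next > buf_len)` fails exactly in the case the cover letter describes: with a large `next` the sum wraps, the check passes, and the kernel walks past the buffer, potentially into userspace addresses on 32-bit systems.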

Finally, one patch forbids device-mapper devices to be named ".", "..",
or "control".  Since device-mapper devices are often accessed via
/dev/mapper/NAME, such names would likely greatly confuse userspace.  I
consider this to be an extension of the existing check that prohibits
device mapper names or UUIDs from containing '/'.

Demi Marie Obenour (16):
  device-mapper: Check that target specs are sufficiently aligned
  device-mapper: Avoid pointer arithmetic overflow
  device-mapper: do not allow targets to overlap 'struct dm_ioctl'
  device-mapper: Better error message for too-short target spec
  device-mapper: Target parameters must not overlap next target spec
  device-mapper: Avoid double-fetch of version
  device-mapper: Allow userspace to opt-in to strict parameter checks
  device-mapper: Allow userspace to provide expected diskseq
  device-mapper: Allow userspace to suppress uevent generation
  device-mapper: Refuse to create device named "control"
  device-mapper: "." and ".." are not valid symlink names
  device-mapper: inform caller about already-existing device
  xen-blkback: Implement diskseq checks
  block, loop: Increment diskseq when releasing a loop device
  xen-blkback: Minor cleanups
  xen-blkback: Inform userspace that device has been opened

 block/genhd.c                       |   1 +
 drivers/block/loop.c                |   6 +
 drivers/block/xen-blkback/blkback.c |   8 +-
 drivers/block/xen-blkback/xenbus.c  | 147 ++++++++--
 drivers/md/dm-core.h                |   2 +
 drivers/md/dm-ioctl.c               | 432 ++++++++++++++++++++++------
 drivers/md/dm.c                     |   5 +-
 include/linux/device-mapper.h       |   2 +-
 include/uapi/linux/dm-ioctl.h       |  67 ++++-
 9 files changed, 551 insertions(+), 119 deletions(-)

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:31:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:31:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541395.844129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460Q-0001zR-3Y; Tue, 30 May 2023 20:31:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541395.844129; Tue, 30 May 2023 20:31:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460Q-0001zH-0Z; Tue, 30 May 2023 20:31:46 +0000
Received: by outflank-mailman (input) for mailman id 541395;
 Tue, 30 May 2023 20:31:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q460P-0001yj-1Q
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:31:45 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fc04f68a-ff28-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 22:31:42 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.west.internal (Postfix) with ESMTP id 26C5032005C1;
 Tue, 30 May 2023 16:31:40 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute6.internal (MEProxy); Tue, 30 May 2023 16:31:40 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:31:38 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fc04f68a-ff28-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1685478699; x=1685565099; bh=aSjkzE25S8
	BxmyJfHmxx2kjK812O0BGlmJS9UYp63Y4=; b=iK+uxXhV6X4xM7zdOsRLRNl7Rl
	GeCaXb9nofcQLJMH5PYa8rM5mPH1np/a+6qzHLKH/xgr4kR4IXAg30xdIqri75uf
	f/HA4WZkCSFcOo7cNqUyy8XM/mAA5BKtkkdZ1PCLM6dnhelU1vC8pavl7AWFfugt
	4sRiOO97mcVcpXj5MGvFR7i96rhwtfyMeXrQMUlsQduztFxlYcDLQG/EDvZSk3wq
	x92Oroq+S25UkVnqRiLhrdY1NyoS9E0KaGS1k185wNunE+FkLZv0bi0cekflq7QK
	YcsOdasoRS8hfC8B5I4G2Y5doOhecSPGwC5fQwkbDtySEHXq8JzjTicIZO5g==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=1685478699; x=
	1685565099; bh=aSjkzE25S8BxmyJfHmxx2kjK812O0BGlmJS9UYp63Y4=; b=f
	Zri7GXfSSveOOtYOhvOUEtsANsXmWURASgkB/lbtLP/WL+xtmXsMnGdWCIJvHxk9
	3P4owz460lQ0tIW2GadrRZrHamxcLGygApj4txw8+S26mwTJy70Xa5MgAmR7OD28
	3BQxHlhnew0PgxDnyK6jr8OsOqZ89aJi5/efdD/8P7nMYdHdPt9KGKs8Fpq9Gt2K
	0fZnUTtGA/5zoxj5fzQ1MjgpSQTy7yTSwtJplgo44KMf2SM3jJ6mFQ4hQbuicwEG
	2NH0Z9/cMmeK3OtsVQ7bqYjG2KfdqSm8iVBn9slACvUrWPUr7xWilidlX88F519A
	zhCVORCXIU4SPKqCtRt2A==
X-ME-Sender: <xms:K112ZNLdk7wSJASVb91a04jG4oflO_XcOFaiSvkBTykdbHCVNggo6w>
    <xme:K112ZJI6zgUEzXAHeDV5KaRENmzmZJKK0SSdpBSaQmgoSdJ5yLXrxeq7Hsavz2EHD
    3MPAtXfus4r32E>
X-ME-Received: <xmr:K112ZFt0ZlOK8gkZQh9dbasyotJ1vU7m9y9SUbnCFXx7UUbjsnujiJQOQn0pH30DleIzc3IrecY>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekjedgudeglecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeejffejgffgueegudevvdejkefg
    hefghffhffejteekleeufeffteffhfdtudehteenucevlhhushhtvghrufhiiigvpedtne
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:K112ZOa48AE4ZVOlNQI--xrDOF3nPXzQfahOf1ELK_GIz23fXPAnHA>
    <xmx:K112ZEYi6T0TChdOca6-6colnmqEJiG9Bi3CTwZBPyClLrhgw-QmZw>
    <xmx:K112ZCAYfZ5lmE6T25N3tVSR69c_BbGJ6sC2B9IpBSRuupCOGSXAdQ>
    <xmx:K112ZKMp1wowIzo5DcMQ7w40nQAFGFn6_yoXvmSQVFYk44IdGB8nAA>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	stable@vger.kernel.org
Subject: [PATCH v2 02/16] device-mapper: Avoid pointer arithmetic overflow
Date: Tue, 30 May 2023 16:31:02 -0400
Message-Id: <20230530203116.2008-3-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Especially on 32-bit systems, it is possible for the pointer arithmetic
to overflow and cause a userspace pointer to be dereferenced in the
kernel.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable@vger.kernel.org
---
 drivers/md/dm-ioctl.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 34fa74c6a70db8aa67aaba3f6a2fc4f38ef736bc..64e8f16d344c47057de5e2d29e3d63202197dca0 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1396,6 +1396,25 @@ static int next_target(struct dm_target_spec *last, uint32_t next, void *end,
 {
 	static_assert(_Alignof(struct dm_target_spec) <= 8,
 		      "struct dm_target_spec has excessive alignment requirements");
+	static_assert(offsetof(struct dm_ioctl, data) >= sizeof(struct dm_target_spec),
+		      "struct dm_target_spec too big");
+
+	/*
+	 * Number of bytes remaining, starting with last. This is always
+	 * sizeof(struct dm_target_spec) or more, as otherwise *last was
+	 * out of bounds already.
+	 */
+	size_t remaining = (char *)end - (char *)last;
+
+	/*
+	 * There must be room for both the next target spec and the
+	 * NUL-terminator of the target itself.
+	 */
+	if (remaining - sizeof(struct dm_target_spec) <= next) {
+		DMERR("Target spec extends beyond end of parameters");
+		return -EINVAL;
+	}
+
 	if (next % 8) {
 		DMERR("Next target spec (offset %u) is not 8-byte aligned", next);
 		return -EINVAL;
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:31:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:31:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541396.844140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460R-0002FK-CB; Tue, 30 May 2023 20:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541396.844140; Tue, 30 May 2023 20:31:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460R-0002FD-7s; Tue, 30 May 2023 20:31:47 +0000
Received: by outflank-mailman (input) for mailman id 541396;
 Tue, 30 May 2023 20:31:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q460P-0001yj-NG
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:31:45 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa180845-ff28-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 22:31:42 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id E111C3200564;
 Tue, 30 May 2023 16:31:36 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 30 May 2023 16:31:37 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:31:35 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa180845-ff28-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1685478696; x=1685565096; bh=YCzACzClxG
	a/ktYTHwJaaNWjaN7phPISbw1cPfpWZUo=; b=A2bqaqsodYaS3trUBnAvHbPHd3
	LVnzv94OZsAab6NwgPLjBRfIiWlcKcR9Am7w7rrKWKhPylPRgC/Z2VcNnFU62eUL
	QTY6dlRFGDe23eZCW+dp0yNNmIRdMnyt2GgAGoiZhtgYvFZOA0VagWHCdhatNzfh
	TUgJDh+kiXR5cwVFR2KVVnJ7q4TZ5ZwBzCrj9Ax6A1mxFH11UB3jAjTrIxT82ler
	iv2JQNpLTLFvHis5VI4rWrAU32RDGijA49HX+5RYgB9Kc80xrRJg46IUUWg09Whp
	OfnQ4sOEVlnEOjG350bX8Xu2+XWR1rAuxgIrN4SPyci767A9fPU8B1J2jDPA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=1685478696; x=
	1685565096; bh=YCzACzClxGa/ktYTHwJaaNWjaN7phPISbw1cPfpWZUo=; b=W
	WCIJ4Q1thNeJ9imoVk0+dlu1mJom8VM4vHfRKeLxMXjDqr0PwWHi8Pq8EqycYuWd
	pRyrl8aldz+pjfYnAgy+Kh35P1CyNuQUI3mrEIQOCMgmmSGUUCfVBQT72wavTR/V
	4z3iS7j2NEdfQcwwDSR32YarwaCCyQuGbfUgprJEbygrw8xplcCYQEFQbiREHXnq
	Bl8hQ68TiF84tjWjP2sbeAf3ceMf3sytqQt110go28ao3jtHtBvxsYJ0Fby4sCtP
	d3nB1fS6fQM0vzfbFfklzcPFwD3Fcw3Mj0ihhxf+Z72Q9IW+KDqdO2d8eoW5rmDb
	GSIQ01eTKLxX0YWgJcCqA==
X-ME-Sender: <xms:KF12ZIcWTu9BgYWVnhEkNCgyldRjiIIivqliwszOsduSmFm4IdqoPQ>
    <xme:KF12ZKP95io6VvF4anqN-RsGY3JIHPM2H_NNrf4hAiPKulV0tnnZybCWntLhxeqO4
    -Lik-Rhg1tlrDQ>
X-ME-Received: <xmr:KF12ZJjI3u9l3-RJvdt8Vo8A2EOjQ1fCbipCpT13Yne0YnV48t4V2u40avU4A8Q9pgTwxkx9Nw4>
X-ME-Proxy: <xmx:KF12ZN9kF0sGBlydds97tAta0NfieJK6SAd2g6wQaosp7uuxUJxIOg>
    <xmx:KF12ZEsRbplVPcJ71hKhLybUZMBxmbbFJybIQOln5uajLbXnbU2EaQ>
    <xmx:KF12ZEFbcbNdqbMxc5DAHolQq6sk_b8v92Q6aZJ8BV3hV8qPPuHfEQ>
    <xmx:KF12ZCAHscFGlI7DgBCVB9vwCGwd2SVURCYci4wh5OyMxMxUjfbZtA>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	stable@vger.kernel.org
Subject: [PATCH v2 01/16] device-mapper: Check that target specs are sufficiently aligned
Date: Tue, 30 May 2023 16:31:01 -0400
Message-Id: <20230530203116.2008-2-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Otherwise subsequent code will dereference a misaligned
`struct dm_target_spec *`, which is undefined behavior.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable@vger.kernel.org
---
 drivers/md/dm-ioctl.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index cc77cf3d410921432eb0c62cdede7d55b9aa674a..34fa74c6a70db8aa67aaba3f6a2fc4f38ef736bc 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1394,6 +1394,13 @@ static inline fmode_t get_mode(struct dm_ioctl *param)
 static int next_target(struct dm_target_spec *last, uint32_t next, void *end,
 		       struct dm_target_spec **spec, char **target_params)
 {
+	static_assert(_Alignof(struct dm_target_spec) <= 8,
+		      "struct dm_target_spec has excessive alignment requirements");
+	if (next % 8) {
+		DMERR("Next target spec (offset %u) is not 8-byte aligned", next);
+		return -EINVAL;
+	}
+
 	*spec = (struct dm_target_spec *) ((unsigned char *) last + next);
 	*target_params = (char *) (*spec + 1);
 
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:31:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:31:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541397.844150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460S-0002VK-Im; Tue, 30 May 2023 20:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541397.844150; Tue, 30 May 2023 20:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460S-0002VD-FV; Tue, 30 May 2023 20:31:48 +0000
Received: by outflank-mailman (input) for mailman id 541397;
 Tue, 30 May 2023 20:31:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q460Q-0001iX-Su
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:31:46 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fde95d2e-ff28-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 22:31:45 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 07F803200914;
 Tue, 30 May 2023 16:31:42 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Tue, 30 May 2023 16:31:43 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:31:41 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fde95d2e-ff28-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1685478702; x=1685565102; bh=pmcUjAGIZx
	Pp5PqEOgEqhVPemyqe9EdThZVOp4My7W0=; b=dxLeUTcWcURIKF3tIdl1etnJR5
	N5UmHb7gYi+h6qFr1h6Z9SC0wxIykswHISO17mCPtNq0NeJ2R7ZcfGGhLXwnTney
	w8vxVkRbqSYZPBXyoXLHByjIvnG7lVS1jupBNpog/rJiveBJkSUPozGoVJ7YSQ2s
	5WwQwq4OgSFQTe9eiKn7KoyWGFsUiGRonLvgMZjB7AU41/HyAj9TrO8UFizlMc2Z
	kjC/rqLsVAqk9UrzpS9OE5eKXjHUQXPwX7RzgO3B8+rrUSBY/uIyHCeXOIFrFBUx
	973IopAKjgCl01cB9/nuRm3U4VY/cNn+Itf/ju/WYCERZTrBTExuvORyanOw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=1685478702; x=
	1685565102; bh=pmcUjAGIZxPp5PqEOgEqhVPemyqe9EdThZVOp4My7W0=; b=R
	UawBqS8r1YDIM4po3y2qsUXbp4zg/tVMtIWDUkXk2Kttv3pfjWUl31EWtaAo+jxK
	/kXxCsfVJzLogYbQFIxM2poNZdJNrkxjSdHCH5xBtj6GqDUUr/oIGSviHONsPWSr
	5v6LrhEM6HNy5GTorYKFgU7SqRGmewjInn2Q1ucsg36zNhg2F569jrlQRLrzGISB
	VIPuAEgV8JZufoZHXDa5LvBF0VsBt0uhRJzkJEAj0jc5U4TnTzHUo0/sRvcLbXSc
	m5CjwzswZE+AcQvkKG85UgCo8pJ17KM+Ph8DKWcmh0kLvVw8cL/Rv4b9TOLUJLWc
	mOiGUxDYt8n0bB7FAA8WA==
X-ME-Sender: <xms:Ll12ZG3ypkz4lUPkIh0DuuQJncBHG735YKqtwU_ktkPdRq3r_9RvgA>
    <xme:Ll12ZJFCVeuAe8Vsr8iAvdOZolH1VlPf7VqJi3_telOOU7kNKuGYsc6DpT14VVJ1H
    FGXcz41JK2iJiQ>
X-ME-Received: <xmr:Ll12ZO71J-CEsAY4_Mr_pTBBQiwUFmjqPIVJ-vTjQf3Y3pRLiM1GadCnVubwZ2flnJFCr-gRBvM>
X-ME-Proxy: <xmx:Ll12ZH32Qri6hjgGiJ0FllRPCLo39f-5qy7T3uCDJ8LU5VX4h79TIg>
    <xmx:Ll12ZJEFd3tCtAnkRbIsBqGjfGNkEmSNFU7LjPpUMlqH7CTGrgKV0Q>
    <xmx:Ll12ZA9BQOHMzZIe_5G8lnEVFKz0d1wcyoVCA08ic3b5UkqDI5J22w>
    <xmx:Ll12ZPY1mCG9yLlRFWakggEPeJhZj6eEHLmC5CNrACfoZ9SfSMTPtg>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	stable@vger.kernel.org
Subject: [PATCH v2 03/16] device-mapper: do not allow targets to overlap 'struct dm_ioctl'
Date: Tue, 30 May 2023 16:31:03 -0400
Message-Id: <20230530203116.2008-4-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This prevents dm_split_args() from corrupting this struct.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable@vger.kernel.org
---
 drivers/md/dm-ioctl.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 64e8f16d344c47057de5e2d29e3d63202197dca0..a1d5fe64e1d0d9d3dcb06924249b89fe661944ab 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1444,6 +1444,12 @@ static int populate_table(struct dm_table *table,
 		return -EINVAL;
 	}
 
+	if (next < sizeof(struct dm_ioctl)) {
+		DMERR("%s: first target spec (offset %u) overlaps 'struct dm_ioctl'",
+		      __func__, next);
+		return -EINVAL;
+	}
+
 	for (i = 0; i < param->target_count; i++) {
 
 		r = next_target(spec, next, end, &spec, &target_params);
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:31:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:31:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541398.844160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460T-0002lt-Sb; Tue, 30 May 2023 20:31:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541398.844160; Tue, 30 May 2023 20:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460T-0002le-Nq; Tue, 30 May 2023 20:31:49 +0000
Received: by outflank-mailman (input) for mailman id 541398;
 Tue, 30 May 2023 20:31:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q460T-0001yj-19
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:31:49 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ff8aeea6-ff28-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 22:31:48 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.west.internal (Postfix) with ESMTP id 210F1320093A;
 Tue, 30 May 2023 16:31:46 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Tue, 30 May 2023 16:31:46 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:31:44 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff8aeea6-ff28-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1685478705; x=1685565105; bh=LxAfFq0335
	m1ZlqqDrmxnqhMzXdNPPiOcKKMsIZRgGo=; b=aBkNicTIxTqaQMJAexZulqIRdm
	sIjZXtSOPgxSMACSgvF+9fjyKzTAG1i01eFAX1OK0NcsCEBDoXpuJyaQV6d/I0VA
	q/z3N4LwMEW1FIuYTzFbKbGtY+U3bZe1LREkgWbnGVo31RGnMs1i+RkUSauMA528
	b4f5/xVkZzCMR/eOmXM/no/o50kRvZdfkpjBHP+rZNTkawHhelZmjQubT6/xZFhM
	sB3+I0E3oDcdU9YB1PUvv3WTY3Cgz2c4tnkoPPKHzeYbYbxqStm6MaHhEQnNl0DE
	JrlecR96jAUXIBg4rTOGs1nHiRk+cCArx0I66C52cFKOY4AV+RksbuC93x9Q==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=1685478705; x=
	1685565105; bh=LxAfFq0335m1ZlqqDrmxnqhMzXdNPPiOcKKMsIZRgGo=; b=d
	sVyDy4dBBVGtlMpWhrLCib5j+MpYeJykog270K0270985oiQ+CD+XqCmJf5w8okS
	3WHwdWZsn/VRIjpx5J+h2FUQSjGYLcZmnx9dLpvjIHy8M4HDT4rnwvAyUMLNSXUX
	EgawqZuIFSn4KyYS3JrTDlQIPAUT/0+Gk/yu0UXMiPqrGZPZY/fjoU9AbdzThyf4
	L26yfyQlhx8iDEUWKiIYjUH1A3/OL9QGVvyrDGN/OqpWq3Tw25tgEbIpzAMPdc+h
	5peIq/Yrn5aH9irvh3og9e7PEFL6Qs5Di/lIAk/sYO/pP/cGvtLIZYFIXPWUwGqY
	ZbbD9fIEV7MXFyjcopT5A==
X-ME-Sender: <xms:MV12ZNrs4Z9KmViO-dI3XIzBAS1TrFl0swpM5VQmw0lxIXMYMcYH8g>
    <xme:MV12ZPrQ67qj8n8Ym7JcnhoeIbBOcqe5AUzEdZaRULY3Y2ljd_av-OWHIU7FzbFEt
    QjUStib5j_KJew>
X-ME-Received: <xmr:MV12ZKPuAN6Wkyrp02gS2tS2qthWhdXqYyKQGY2nnqs8ppDaU7tay_iYZjR-1GawLtG2ma21xI8>
X-ME-Proxy: <xmx:MV12ZI558_m5ZAeIugOGhFh509dG0GfvI0meCVPsv0GzuuTVlEP55Q>
    <xmx:MV12ZM4ZDJsH_r6GgjxCGg9stcnTa1DX0UyGw4N5CSip8_xY_eOlIw>
    <xmx:MV12ZAitJNIqv97FdHQsw70B7O5NYVmGarm35_SET05V6VqDlcbwrg>
    <xmx:MV12ZLaNM7ETgMhYzUUA-gFXBJlTSFaEx8TwyLr2GLFSH0w_bxJTiw>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 04/16] device-mapper: Better error message for too-short target spec
Date: Tue, 30 May 2023 16:31:04 -0400
Message-Id: <20230530203116.2008-5-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Previously the error was "unable to find target", which is not helpful.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/md/dm-ioctl.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index a1d5fe64e1d0d9d3dcb06924249b89fe661944ab..9f505abba3dc22bffc6acb335c0bf29fec288fd5 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1423,9 +1423,6 @@ static int next_target(struct dm_target_spec *last, uint32_t next, void *end,
 	*spec = (struct dm_target_spec *) ((unsigned char *) last + next);
 	*target_params = (char *) (*spec + 1);
 
-	if (*spec < (last + 1))
-		return -EINVAL;
-
 	return invalid_str(*target_params, end);
 }
 
@@ -1451,6 +1448,11 @@ static int populate_table(struct dm_table *table,
 	}
 
 	for (i = 0; i < param->target_count; i++) {
+		if (next < sizeof(*spec)) {
+			DMERR("%s: next target spec (offset %u) overlaps 'struct dm_target_spec'",
+			      __func__, next);
+			return -EINVAL;
+		}
 
 		r = next_target(spec, next, end, &spec, &target_params);
 		if (r) {
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:32:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541399.844170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460Y-0003AT-4s; Tue, 30 May 2023 20:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541399.844170; Tue, 30 May 2023 20:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460Y-0003AG-16; Tue, 30 May 2023 20:31:54 +0000
Received: by outflank-mailman (input) for mailman id 541399;
 Tue, 30 May 2023 20:31:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q460W-0001iX-Sz
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:31:52 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 016dd7dd-ff29-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 22:31:51 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id E7B28320091C;
 Tue, 30 May 2023 16:31:48 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Tue, 30 May 2023 16:31:49 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:31:47 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 016dd7dd-ff29-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1685478708; x=1685565108; bh=9dA8w08Lf6
	yLxBKaYsZib4B6okJI6Rf5TV9rO9Rq5kQ=; b=RqK1KJ1X1SLFXUPUiLR1ErkxSG
	wtKH1zEfrkJGt+cTam8om+jkJorj31HumCzC/S/NBs42VK9J2wPDcvRJtfbOY/PF
	LzxWIsfwLXWGb7AxPlJ+e9IZ0X7jZ/z50ke/BXkU2TVtU0G1j5omrVQ2OdS1YxCW
	t/51b1V8oIA+9ZyeONf7TZzODRQdx/JUayG7fSa+zO2QmmQE8w63GnLlKq+2l5bP
	J5JFEIp5z0YHGwPY8OdfIQlO1/OuUlcVKYbTHF8/ZPUYhtPBuBVIxIylNt97ox2P
	MzGhApOEsQ6j1W/JMW5aBKxVbzfonneR3x5JGYO5PGF86JEMjtyBJwlxTCUA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=1685478708; x=
	1685565108; bh=9dA8w08Lf6yLxBKaYsZib4B6okJI6Rf5TV9rO9Rq5kQ=; b=D
	Msesq2aD/zsowges6BdJEmb3SXxsFD50z2JBIaZKcphdbkZ8bDmxUs+DpndiTUmB
	28v8uLd5dcZv2la0SPjq1IwvKXdC+gIuyMcnYmia7XsERzw2uH7FI8XCSafdXsu5
	EwMa0FI0jMTfRMekX9dRzZzq/aXRflH8qoHSxBLO1INLOvf1SoFV0H7bKQGsjupU
	NaniLmIVlRC/FaEd2D59ZYclkFgzLdscGM91ChlBj2LB5LbIQ847VIEasOMlHpNj
	nXWlcHyU433Z0l3bCdc9lS+C0bcphxZM4Q8ER2dJu2pRsL2pebnHxAxThnCbO/IS
	edfGYluS8mDU8v5EtX9dg==
X-ME-Sender: <xms:NF12ZPsxzLb6qiNMX1P2rmKAhz4tbssdhxqb5b2W4EpVXzuVeLIYuA>
    <xme:NF12ZAfmMk9oJ8u-mhK6Ui9Ulx5bvQ75B1qWdW7mwmhy1E8VWdhGqMqT5i45iBhrL
    5_Kp9tH1jr-geA>
X-ME-Received: <xmr:NF12ZCyTKoszpzg74E1caX7SPMUAsM7ZbJgaQdN_fT3vsPw-0CdsBuS4E3sGSm5CMXcpcrCEQj8>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekjedgudegkecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeejffejgffgueegudevvdejkefg
    hefghffhffejteekleeufeffteffhfdtudehteenucevlhhushhtvghrufhiiigvpedune
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:NF12ZOOdzZrmzTTxQPerP2vAcrpzO6xwvMZsOM5HHfWIHjdkHwKsOw>
    <xmx:NF12ZP_v7UgW9QNz5GrWbg7QsOI10Y-VgTtA-VukT93FIMsnJVueuQ>
    <xmx:NF12ZOXl6LQXxBP_VPj-MHje4cHa370IULNKDufIrLuGd68vCXJiiw>
    <xmx:NF12ZLSelBIStPRHT1mOrFfztO0x3Inl8EpbIXnSumWL1iS8_dkzOg>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	stable@vger.kernel.org
Subject: [PATCH v2 05/16] device-mapper: Target parameters must not overlap next target spec
Date: Tue, 30 May 2023 16:31:05 -0400
Message-Id: <20230530203116.2008-6-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The NUL terminator for each target parameter string must precede the
following 'struct dm_target_spec'.  Otherwise, dm_split_args() might
corrupt this struct.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable@vger.kernel.org
---
 drivers/md/dm-ioctl.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 9f505abba3dc22bffc6acb335c0bf29fec288fd5..491ef55b9e8662c3b02a2162b8c93ee086c078a1 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1391,7 +1391,7 @@ static inline fmode_t get_mode(struct dm_ioctl *param)
 	return mode;
 }
 
-static int next_target(struct dm_target_spec *last, uint32_t next, void *end,
+static int next_target(struct dm_target_spec *last, uint32_t next, const char *end,
 		       struct dm_target_spec **spec, char **target_params)
 {
 	static_assert(_Alignof(struct dm_target_spec) <= 8,
@@ -1404,7 +1404,7 @@ static int next_target(struct dm_target_spec *last, uint32_t next, void *end,
 	 * sizeof(struct dm_target_spec) or more, as otherwise *last was
 	 * out of bounds already.
 	 */
-	size_t remaining = (char *)end - (char *)last;
+	size_t remaining = end - (char *)last;
 
 	/*
 	 * There must be room for both the next target spec and the
@@ -1423,7 +1423,7 @@ static int next_target(struct dm_target_spec *last, uint32_t next, void *end,
 	*spec = (struct dm_target_spec *) ((unsigned char *) last + next);
 	*target_params = (char *) (*spec + 1);
 
-	return invalid_str(*target_params, end);
+	return 0;
 }
 
 static int populate_table(struct dm_table *table,
@@ -1433,24 +1433,21 @@ static int populate_table(struct dm_table *table,
 	unsigned int i = 0;
 	struct dm_target_spec *spec = (struct dm_target_spec *) param;
 	uint32_t next = param->data_start;
-	void *end = (void *) param + param_size;
+	const char *const end = (const char *) param + param_size;
 	char *target_params;
+	size_t min_size = sizeof(struct dm_ioctl);
 
 	if (!param->target_count) {
 		DMERR("%s: no targets specified", __func__);
 		return -EINVAL;
 	}
 
-	if (next < sizeof(struct dm_ioctl)) {
-		DMERR("%s: first target spec (offset %u) overlaps 'struct dm_ioctl'",
-		      __func__, next);
-		return -EINVAL;
-	}
-
 	for (i = 0; i < param->target_count; i++) {
-		if (next < sizeof(*spec)) {
-			DMERR("%s: next target spec (offset %u) overlaps 'struct dm_target_spec'",
-			      __func__, next);
+		const char *nul_terminator;
+
+		if (next < min_size) {
+			DMERR("%s: next target spec (offset %u) overlaps %s",
+			      __func__, next, i ? "previous target" : "'struct dm_ioctl'");
 			return -EINVAL;
 		}
 
@@ -1460,6 +1457,15 @@ static int populate_table(struct dm_table *table,
 			return r;
 		}
 
+		nul_terminator = memchr(target_params, 0, (size_t)(end - target_params));
+		if (nul_terminator == NULL) {
+			DMERR("%s: target parameters not NUL-terminated", __func__);
+			return -EINVAL;
+		}
+
+		/* Add 1 for NUL terminator */
+		min_size = (nul_terminator - (const char *)spec) + 1;
+
 		r = dm_table_add_target(table, spec->target_type,
 					(sector_t) spec->sector_start,
 					(sector_t) spec->length,
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:32:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:32:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541400.844180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460b-0003Yg-JB; Tue, 30 May 2023 20:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541400.844180; Tue, 30 May 2023 20:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460b-0003Xs-GA; Tue, 30 May 2023 20:31:57 +0000
Received: by outflank-mailman (input) for mailman id 541400;
 Tue, 30 May 2023 20:31:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q460Z-0001iX-SU
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:31:55 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 032840c1-ff29-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 22:31:54 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.west.internal (Postfix) with ESMTP id CC58F3200907;
 Tue, 30 May 2023 16:31:51 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Tue, 30 May 2023 16:31:52 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:31:50 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 032840c1-ff29-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1685478711; x=1685565111; bh=+cZsXk5aGi
	IWIF6vp9UgbBTuPJSW9vQMqM9Fi3GWErs=; b=b1X2ZZBno2R9/Fq0Y4byMWClkM
	yztjlEXBXhErAHMyu5pxVYfR1cLCApT4g/uUiO8kteIoPOYfwcM0QS5x9QWviai0
	+l213muvhmIojTILm+7p2eZtNtybubHmo4N4zNfbU1BjaH8lPNaOphi2xkYZuv4e
	h9L6gqPCJK3eOF1vIMloUMmjbsguyKqi2f1W5Tt/wuZ0+GYvxCApjlVAavetYN/A
	7tGd5rz/o5D0PfB0ZRMW60nhh19bpt9llkSd4Y6Kat2KL+JrOgTCciUHA3vZPkBb
	IVcmnbvx3CmJqx5G5vutH15atOF/5DmD2t9dwwAa4lNLI929ZeF3n8ClPHNw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=1685478711; x=
	1685565111; bh=+cZsXk5aGiIWIF6vp9UgbBTuPJSW9vQMqM9Fi3GWErs=; b=U
	P15hdGQ/gq4QBXzqjvd7c1poQOzI9V99fFI4wL2BV4l6rOOg1UHF/IH0Rvn8DD2t
	8inNtSYO4AkLeCdh5u8DSWVX/Y3zi6mmFEbSB6KWnmKzE/gaGSfbS0/UhxzMo5IL
	2kibkVKxrhJBYHcxY0e9g8+oJElSF3YfB39B1l6n3IKWdY+LTIfj2T0Ioc7C6O44
	lgjwvbG10GpFJ0/Ji6FLUTiSnJaiSTElzXZUfNjXE4ABmx+5vaVNfCETg0Sz7mcl
	+DhnaD34G3OifmWX9w/p4yj414+40b2No5vBXQ/3zcAauZ+scwTIQY4dPuM5DQKT
	JWusrvBDFegrRj+Ts3bOw==
X-ME-Sender: <xms:N112ZEkhJpIZ7sLxFuxFFSwbSmLhk75Rnsa-ScNsVUmuvj8FVpJtZA>
    <xme:N112ZD3eZcFAVDADdKfjGL7hNa5Hcl_AmD1vrCoS3wziIIqaageEay8l6FlPFORfH
    Ky8LRm1oDRdNq8>
X-ME-Received: <xmr:N112ZCrGppu6Q7U999Ru6xZ8dvBvBdlKGA_a67mCNAqJcyLUqBe4Tn9Tbh1ALocpktA-UwH8SxU>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekjedgudeglecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeejffejgffgueegudevvdejkefg
    hefghffhffejteekleeufeffteffhfdtudehteenucevlhhushhtvghrufhiiigvpedune
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:N112ZAmIp3tGSyM63XF_iXHtC5OoUY3xfa3aSgMOpNWLJu57t7p0uw>
    <xmx:N112ZC3GHJ0_WhL1IkAhFBNahfeTS_YrY6vMkb918mL-qpm2nE-7eA>
    <xmx:N112ZHtrr_ccteqCN72Ytr8N4S4W1zp2eWJbXm3c0ikWA6uzYDIlyg>
    <xmx:N112ZDKNm5Vd0MyFHjWUee7VcjpX3kbLQNgUdSlw4MDRL5cW7QQH6Q>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	stable@vger.kernel.org
Subject: [PATCH v2 06/16] device-mapper: Avoid double-fetch of version
Date: Tue, 30 May 2023 16:31:06 -0400
Message-Id: <20230530203116.2008-7-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The version is fetched once in check_version(), which validates it and
then overwrites the version in userspace with the API version supported
by the kernel.  copy_params() then fetches the version from userspace
*again*, and this time no validation is done.  The result is that the
kernel's copy of the version number is completely controllable by
userspace, provided that userspace can win a race condition.

Fix this flaw by not copying the version from userspace a second time.
This is not currently exploitable, as the version is not used further in
the kernel.  However, it could become a problem if future patches start
relying on the version field.

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable@vger.kernel.org
Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/md/dm-ioctl.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 491ef55b9e8662c3b02a2162b8c93ee086c078a1..20f452b6c61c1c4d20259fd0fc5443977e4454a0 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1873,12 +1873,13 @@ static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags)
  * As well as checking the version compatibility this always
  * copies the kernel interface version out.
  */
-static int check_version(unsigned int cmd, struct dm_ioctl __user *user)
+static int check_version(unsigned int cmd, struct dm_ioctl __user *user,
+			 struct dm_ioctl *kernel_params)
 {
-	uint32_t version[3];
 	int r = 0;
+	uint32_t *version = kernel_params->version;
 
-	if (copy_from_user(version, user->version, sizeof(version)))
+	if (copy_from_user(version, user->version, sizeof(user->version)))
 		return -EFAULT;
 
 	if ((version[0] != DM_VERSION_MAJOR) ||
@@ -1922,7 +1923,10 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
 	const size_t minimum_data_size = offsetof(struct dm_ioctl, data);
 	unsigned int noio_flag;
 
-	if (copy_from_user(param_kernel, user, minimum_data_size))
+	/* Version has been copied from userspace already, avoid TOCTOU */
+	if (copy_from_user((char *)param_kernel + sizeof(param_kernel->version),
+			   (char __user *)user + sizeof(param_kernel->version),
+			   minimum_data_size - sizeof(param_kernel->version)))
 		return -EFAULT;
 
 	if (param_kernel->data_size < minimum_data_size) {
@@ -2034,7 +2038,7 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 	 * Check the interface version passed in.  This also
 	 * writes out the kernel's interface version.
 	 */
-	r = check_version(cmd, user);
+	r = check_version(cmd, user, &param_kernel);
 	if (r)
 		return r;
 
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:32:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:32:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541404.844200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460h-0004Vi-EI; Tue, 30 May 2023 20:32:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541404.844200; Tue, 30 May 2023 20:32:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460h-0004VX-9q; Tue, 30 May 2023 20:32:03 +0000
Received: by outflank-mailman (input) for mailman id 541404;
 Tue, 30 May 2023 20:32:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q460g-0001iX-DY
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:32:02 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06b2bc50-ff29-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 22:32:00 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id CFA993200344;
 Tue, 30 May 2023 16:31:57 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 30 May 2023 16:31:58 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:31:56 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06b2bc50-ff29-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1685478717; x=1685565117; bh=BECTYLtMds
	TyiT4uJYQfU+Mj8zJcihG/8pwIemyERcA=; b=BykYK4wdtoBrIVjSpxxaRPRuTr
	MGlY8mwYSF1nHQ3142m3NqrwVH/EijYHPymM56mLBuX227aHbQ5F2PW5DWZe4u7U
	lg8XpJpFzf5qQ9+uusq9GdBCE1en27c6fJjqiCBAmOvo5OlNdjA8tBsW+mrUjfCA
	i1VUPE3DVLVEEMqpNIzK/oylyXHY1ECWck487vgg36SRTL69g96Tmj2WP4WDhRDh
	YNZvRk872QnPn4w1V/NYR4fwDxLVOwskMoaUGoK62UfRSFaRcP0Nn2L5GVVeNhGI
	Mpu4eTYXCdD6V6NuJeZKduMF5daf+4pRdw20W2896KNxKazcdsRNJffWFNqA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=1685478717; x=
	1685565117; bh=BECTYLtMdsTyiT4uJYQfU+Mj8zJcihG/8pwIemyERcA=; b=s
	1U5rNqdd3Uckhl+Jago2iM9Gf3UuVkhxSSUEaHhOfU7d6pLz10aattbKXvAzKLeQ
	p1y5pQ1q5M00Eb5JYCPaur978/NUoipGyMmasqpMjNPg+sH8guBmsx8gQC0lwOro
	MY4VFHjl8LbuDVnyVN2fMFbfJceUfRG9ptJUJYBiPSq9qQUzYzkfHdW5LQwKwknv
	96fbB4tjlsJ8Yaw7xsE64N4Kn6Ygh34s6DgbFGBnQuD99EyXiIgJjQHQLcg36Bvw
	KrOCQFa/X5wdDAAOODfQIkmoW8hRj7E40BSZhiPyo5FnrEg0N/s3wQacji9NgkkI
	38DoAfhisfRspgBJoSDkw==
X-ME-Sender: <xms:PV12ZIVF4XPa7vIfh95cZrTywjn8VZD4NyR1CTBdML7zBZS2OQ0OCg>
    <xme:PV12ZMkz8m_rPSIhxv_rMuxKobm9-zYx_FK6lU8br5aJB3NAksWPvPvzYNZSVPKrL
    8tt4fS9m_JgihU>
X-ME-Received: <xmr:PV12ZMb6UkdCD8vCp-byAOWnuf9MDLOmpVrsZt8uOG51rWcoEfGitxi_avKN2K4JLBHhcokC1KE>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekjedgudeglecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeejffejgffgueegudevvdejkefg
    hefghffhffejteekleeufeffteffhfdtudehteenucevlhhushhtvghrufhiiigvpedvne
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:PV12ZHUWDhP7wugXqpeBlzXBDQksonnz5X2q5g8xJmd9-TvmhVV3cQ>
    <xmx:PV12ZClyTCts34PN4tBkcQghm4JynLn6RCdKMJJB878uR0m85eus7w>
    <xmx:PV12ZMcN0GoE-Y5NGUx_8gB2vH4zuAqBUwFoamms2hdSBsmZ6pnCrA>
    <xmx:PV12ZEVCZBpjvTGgpOo1SkHD80xPv9iLaEtVk4dz0Sca4ECSJjxkrA>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 08/16] device-mapper: Allow userspace to provide expected diskseq
Date: Tue, 30 May 2023 16:31:08 -0400
Message-Id: <20230530203116.2008-9-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This can be used to avoid race conditions in which a device is destroyed
and recreated with the same major/minor, name, or UUID.  Diskseqs are
honored only if strict parameter checking is on, to avoid any risk of
breaking old userspace.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/md/dm-ioctl.c         | 48 ++++++++++++++++++++++++++++-------
 include/uapi/linux/dm-ioctl.h | 33 +++++++++++++++++++++---
 2 files changed, 69 insertions(+), 12 deletions(-)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index cf752e72ef6a2d8f8230e5bd6d1a6dc817a4f597..01cdf57bcafbf7f3e1b8304eec28792c6b24642d 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -871,6 +871,9 @@ static void __dev_status(struct mapped_device *md, struct dm_ioctl *param)
 		}
 		dm_put_live_table(md, srcu_idx);
 	}
+
+	if (param->version[0] >= DM_VERSION_MAJOR_STRICT)
+		dm_set_diskseq(param, disk->diskseq);
 }
 
 static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_size)
@@ -889,6 +892,8 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
 	if (r)
 		return r;
 
+	param->flags &= ~DM_INACTIVE_PRESENT_FLAG;
+
 	r = dm_hash_insert(param->name, *param->uuid ? param->uuid : NULL, md);
 	if (r) {
 		dm_put(md);
@@ -909,6 +914,7 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
 static struct hash_cell *__find_device_hash_cell(struct dm_ioctl *param)
 {
 	struct hash_cell *hc = NULL;
+	static_assert(offsetof(struct dm_ioctl, diskseq_high) == offsetof(struct dm_ioctl, data) + 3);
 
 	if (*param->uuid) {
 		if (*param->name || param->dev) {
@@ -937,6 +943,27 @@ static struct hash_cell *__find_device_hash_cell(struct dm_ioctl *param)
 	} else
 		return NULL;
 
+	if (param->version[0] >= DM_VERSION_MAJOR_STRICT) {
+		u64 expected_diskseq = dm_get_diskseq(param);
+		u64 diskseq;
+		struct mapped_device *md = hc->md;
+
+		if (WARN_ON_ONCE(md->disk == NULL))
+			return NULL;
+		diskseq = md->disk->diskseq;
+		if (WARN_ON_ONCE(diskseq == 0))
+			return NULL;
+		if (expected_diskseq != 0) {
+			if (expected_diskseq != diskseq) {
+				DMERR("Diskseq mismatch: expected %llu actual %llu",
+				      expected_diskseq, diskseq);
+				return NULL;
+			}
+		} else {
+			dm_set_diskseq(param, diskseq);
+		}
+	}
+
 	/*
 	 * Sneakily write in both the name and the uuid
 	 * while we have the cell.
@@ -2088,7 +2115,6 @@ static int validate_params(uint cmd, struct dm_ioctl *param,
 			   uint32_t ioctl_flags, uint32_t supported_flags)
 {
 	static_assert(__same_type(param->flags, supported_flags));
-	u64 zero = 0;
 
 	if (cmd == DM_DEV_CREATE_CMD) {
 		if (!*param->name) {
@@ -2112,14 +2138,24 @@ static int validate_params(uint cmd, struct dm_ioctl *param,
 		return 0;
 	}
 
+	if (param->data_size < sizeof(struct dm_ioctl)) {
+		DMERR("Entire struct dm_ioctl (size %zu) must be valid, but only %u was valid",
+		      sizeof(struct dm_ioctl), param->data_size);
+		return -EINVAL;
+	}
+
 	/* Check that strings are terminated */
 	if (!no_non_nul_after_nul(param->name, DM_NAME_LEN, cmd, "Name") ||
 	    !no_non_nul_after_nul(param->uuid, DM_UUID_LEN, cmd, "UUID")) {
 		return -EINVAL;
 	}
 
-	if (memcmp(param->data, &zero, sizeof(param->data)) != 0) {
-		DMERR("second padding field not zeroed in strict mode (cmd %u)", cmd);
+	/*
+	 * This also reads the NUL terminator of the UUID, but that has already been
+	 * checked to be zero by no_non_nul_after_nul().
+	 */
+	if (*(const u32 *)((const char *)param + sizeof(struct dm_ioctl) - 8) != 0) {
+		DMERR("padding field not zeroed in strict mode (cmd %u)", cmd);
 		return -EINVAL;
 	}
 
@@ -2129,12 +2165,6 @@ static int validate_params(uint cmd, struct dm_ioctl *param,
 		return -EINVAL;
 	}
 
-	if (param->padding) {
-		DMERR("padding not zeroed in strict mode (got %u, cmd %u)",
-		      param->padding, cmd);
-		return -EINVAL;
-	}
-
 	if (param->open_count != 0) {
 		DMERR("open_count not zeroed in strict mode (got %d, cmd %u)",
 		      param->open_count, cmd);
diff --git a/include/uapi/linux/dm-ioctl.h b/include/uapi/linux/dm-ioctl.h
index 81103e1dcdac3015204e9c05d73037191e965d59..5647b218f24b626f5c1cefe8bec18dc04373c3d0 100644
--- a/include/uapi/linux/dm-ioctl.h
+++ b/include/uapi/linux/dm-ioctl.h
@@ -136,16 +136,43 @@ struct dm_ioctl {
 	 * For output, the ioctls return the event number, not the cookie.
 	 */
 	__u32 event_nr;      	/* in/out */
-	__u32 padding;
+
+	union {
+		/* valid if DM_VERSION_MAJOR is used */
+		__u32 padding;		/* padding */
+		/* valid if DM_VERSION_MAJOR_STRICT is used */
+		__u32 diskseq_low;	/* in/out: low 4 bytes of the diskseq */
+	};
 
 	__u64 dev;		/* in/out */
 
 	char name[DM_NAME_LEN];	/* device name */
 	char uuid[DM_UUID_LEN];	/* unique identifier for
 				 * the block device */
-	char data[7];		/* padding or data */
+	union {
+		/* valid if DM_VERSION_MAJOR is used */
+		char data[7];	/* padding or data */
+		/* valid if DM_VERSION_MAJOR_STRICT is used */
+		struct {
+			char _padding[3];	/* padding */
+			__u32 diskseq_high;	/* in/out: high 4 bytes of the diskseq */
+		} __attribute__((packed));
+	};
 };
 
+__attribute__((always_inline)) static inline __u64
+dm_get_diskseq(const struct dm_ioctl *_i)
+{
+	return (__u64)_i->diskseq_high << 32 | (__u64)_i->diskseq_low;
+}
+
+__attribute__((always_inline)) static inline void
+dm_set_diskseq(struct dm_ioctl *_i, __u64 _diskseq)
+{
+	_i->diskseq_low = (__u32)(_diskseq & 0xFFFFFFFFU);
+	_i->diskseq_high = (__u32)(_diskseq >> 32);
+}
+
 /*
  * Used to specify tables.  These structures appear after the
  * dm_ioctl.
@@ -402,6 +429,6 @@ enum {
  * If DM_VERSION_MAJOR_STRICT is used, these flags are reserved and
  * must be zeroed.
  */
-#define DM_STRICT_ONLY_FLAGS ((__u32)0xFFF00004)
+#define DM_STRICT_ONLY_FLAGS ((__u32)(~((1UL << 19) - 1) | 1 << 9 | 1 << 7))
 
 #endif				/* _LINUX_DM_IOCTL_H */
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:32:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:32:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541402.844190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460e-00044H-UZ; Tue, 30 May 2023 20:32:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541402.844190; Tue, 30 May 2023 20:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q460e-000445-Ox; Tue, 30 May 2023 20:32:00 +0000
Received: by outflank-mailman (input) for mailman id 541402;
 Tue, 30 May 2023 20:31:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q460c-0001yj-Jx
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:31:58 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04d32778-ff29-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 22:31:56 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailout.west.internal (Postfix) with ESMTP id AB9E13200920;
 Tue, 30 May 2023 16:31:54 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Tue, 30 May 2023 16:31:55 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:31:53 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04d32778-ff29-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to; s=fm1; t=1685478714; x=1685565114; bh=G/PZMKR5fM
	icHWYQT0GxcmUHbUsQVYWUtsZvrUHCJuU=; b=Fx+b8PAh3kQful+76g25YMRpI5
	OX2hyNg04HQUYaN44eHLXL/rFm2SALtFR0112VOAhcyu7Q2sJwY1i66PmyE2SnTT
	GQ8zTaVT5pxBlG08qTCZ+FS3117iMs4IgwF+YZnRQsIjYuCt9Q9EcokEF1/w2QFv
	Twymq+8VxvVG/76A/z0fBAsNeDIatL11QxQzfwr7rrvR/Yezfnoab4FX2SdDIO8+
	3Z09WdrcZcDTQYfG4wiLLpb2iXwlJVjKAN2ocz+agmZSrrgePipjWehEJVX2cKu6
	l8BXqqbyb62oQpkjHAq0YFkp+2SFAkTxICtIqKbr7Q7NzkLJZ8C4XaiyGSzQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:date:date:feedback-id:feedback-id:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=1685478714; x=
	1685565114; bh=G/PZMKR5fMicHWYQT0GxcmUHbUsQVYWUtsZvrUHCJuU=; b=s
	iYr85F64Hkrtm1e26WXimQt3qTnsJz10WHrpr4PvU8EdZ5UO6Qfqj67YBBt2BbFO
	PUipDPJEr6g34YqFLy8Hwa01eC3ltU7TdIquoCIcGBTWPS2otA5rs1uHUbwazjjK
	Ww7dGsxlG99lt+yj4HNnXShCkklC9RuO2GIiKVVdrv7njlCf5lZcH+cU8FlYF758
	DVOQ5dzfsR+WfY5BDpMPF7W7vjmlnSsmy/B0Ll5i1cVddn1sbTTYzNIoUN9Ql6Ib
	XM5XK8sKDq8W6Feu9Fioasb6HgQXeG3lzMjwIN3YEC91VleFfGumhwAmWhtxq+H3
	0DWvG8HfoOsaY0EOEduGg==
X-ME-Sender: <xms:Ol12ZHST1-c4k5KKrHyghT_SmLOvnhcOlxPM6knJUcdFSSMnKQtFgw>
    <xme:Ol12ZIwcKjQD2dhY7Tm1JfUHCs4wW7eWePj7sclDGhOohdK0Bpc9HoCUKopY4HuW6
    Omi1tU7shxtXi8>
X-ME-Received: <xmr:Ol12ZM1HBLxQQ-mpsHM-gJJjxe3fPxbQ4hzuZhukt82LYO3FlpWjCV3MlxhC1oc0-HaUvadSyZ4>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvhedrfeekjedgudeglecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpefhvfevufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeejffejgffgueegudevvdejkefg
    hefghffhffejteekleeufeffteffhfdtudehteenucevlhhushhtvghrufhiiigvpedune
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:Ol12ZHAsNCJ4xwHw3TFK0t2zyMBNNsPy9Dr81PQ22kZ7mCjbE41uQg>
    <xmx:Ol12ZAhQp8O9m53jnJ2eMxtNDDK9k7jf5ZgN-p6xpX_aauv4Y_HIWA>
    <xmx:Ol12ZLrfatiJMsqo7x237vzd8RsIphoVGB0E4NWrLk38Q__DBUChJA>
    <xmx:Ol12ZMjhw_S6m1JhqMfd8QC_X-9xO459WsXWCWlW770znOftyAZ8TA>
Feedback-ID: iac594737:Fastmail
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 07/16] device-mapper: Allow userspace to opt-in to strict parameter checks
Date: Tue, 30 May 2023 16:31:07 -0400
Message-Id: <20230530203116.2008-8-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, device-mapper ioctls ignore unknown flags.  This makes
adding new flags to a given ioctl risky, as it could potentially break
old userspace.

To solve this problem, allow userspace to pass 5 as the major version to
any ioctl.  This causes the kernel to reject any flags that are not
supported by the ioctl, as well as nonzero padding and names or UUIDs
that are not NUL-terminated.  New flags are only recognized when major
version 5 is used.  Kernels without this patch return -EINVAL if the
major version is 5, so this is backwards compatible.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/md/dm-ioctl.c         | 301 ++++++++++++++++++++++++++--------
 include/uapi/linux/dm-ioctl.h |  30 +++-
 2 files changed, 260 insertions(+), 71 deletions(-)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 20f452b6c61c1c4d20259fd0fc5443977e4454a0..cf752e72ef6a2d8f8230e5bd6d1a6dc817a4f597 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -64,7 +64,8 @@ struct vers_iter {
 static struct rb_root name_rb_tree = RB_ROOT;
 static struct rb_root uuid_rb_tree = RB_ROOT;
 
-static void dm_hash_remove_all(bool keep_open_devices, bool mark_deferred, bool only_deferred);
+static void dm_hash_remove_all(bool keep_open_devices, bool mark_deferred, bool only_deferred,
+			       struct dm_ioctl *param);
 
 /*
  * Guards access to both hash tables.
@@ -78,7 +79,7 @@ static DEFINE_MUTEX(dm_hash_cells_mutex);
 
 static void dm_hash_exit(void)
 {
-	dm_hash_remove_all(false, false, false);
+	dm_hash_remove_all(false, false, false, NULL);
 }
 
 /*
@@ -333,7 +334,8 @@ static struct dm_table *__hash_remove(struct hash_cell *hc)
 	return table;
 }
 
-static void dm_hash_remove_all(bool keep_open_devices, bool mark_deferred, bool only_deferred)
+static void dm_hash_remove_all(bool keep_open_devices, bool mark_deferred, bool only_deferred,
+			       struct dm_ioctl *param)
 {
 	int dev_skipped;
 	struct rb_node *n;
@@ -367,6 +369,8 @@ static void dm_hash_remove_all(bool keep_open_devices, bool mark_deferred, bool
 			dm_table_destroy(t);
 		}
 		dm_ima_measure_on_device_remove(md, true);
+		if (param != NULL && !dm_kobject_uevent(md, KOBJ_REMOVE, param->event_nr, false))
+			param->flags |= DM_UEVENT_GENERATED_FLAG;
 		dm_put(md);
 		if (likely(keep_open_devices))
 			dm_destroy(md);
@@ -513,7 +517,7 @@ static struct mapped_device *dm_hash_rename(struct dm_ioctl *param,
 
 void dm_deferred_remove(void)
 {
-	dm_hash_remove_all(true, false, true);
+	dm_hash_remove_all(true, false, true, NULL);
 }
 
 /*
@@ -529,7 +533,7 @@ typedef int (*ioctl_fn)(struct file *filp, struct dm_ioctl *param, size_t param_
 
 static int remove_all(struct file *filp, struct dm_ioctl *param, size_t param_size)
 {
-	dm_hash_remove_all(true, !!(param->flags & DM_DEFERRED_REMOVE), false);
+	dm_hash_remove_all(true, !!(param->flags & DM_DEFERRED_REMOVE), false, param);
 	param->data_size = 0;
 	return 0;
 }
@@ -892,8 +896,6 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
 		return r;
 	}
 
-	param->flags &= ~DM_INACTIVE_PRESENT_FLAG;
-
 	__dev_status(md, param);
 
 	dm_put(md);
@@ -947,8 +949,6 @@ static struct hash_cell *__find_device_hash_cell(struct dm_ioctl *param)
 
 	if (hc->new_map)
 		param->flags |= DM_INACTIVE_PRESENT_FLAG;
-	else
-		param->flags &= ~DM_INACTIVE_PRESENT_FLAG;
 
 	return hc;
 }
@@ -1161,7 +1161,6 @@ static int do_resume(struct dm_ioctl *param)
 
 	new_map = hc->new_map;
 	hc->new_map = NULL;
-	param->flags &= ~DM_INACTIVE_PRESENT_FLAG;
 
 	up_write(&_hash_lock);
 
@@ -1426,6 +1425,32 @@ static int next_target(struct dm_target_spec *last, uint32_t next, const char *e
 	return 0;
 }
 
+static inline bool sloppy_checks(const struct dm_ioctl *param)
+{
+	return param->version[0] < DM_VERSION_MAJOR_STRICT;
+}
+
+static bool no_non_nul_after_nul(const char *untrusted_str, size_t size,
+				 unsigned int cmd, const char *msg)
+{
+	const char *cursor;
+	const char *endp = untrusted_str + size;
+	const char *nul_terminator = memchr(untrusted_str, '\0', size);
+
+	if (nul_terminator == NULL) {
+		DMERR("%s not NUL-terminated, cmd(%u)", msg, cmd);
+		return false;
+	}
+	for (cursor = nul_terminator; cursor < endp; cursor++) {
+		if (*cursor != 0) {
+			DMERR("%s has non-NUL byte at %zd after NUL byte at %zd, cmd(%u)",
+			      msg, cursor - untrusted_str, nul_terminator - untrusted_str, cmd);
+			return false;
+		}
+	}
+	return true;
+}
+
 static int populate_table(struct dm_table *table,
 			  struct dm_ioctl *param, size_t param_size)
 {
@@ -1436,12 +1461,19 @@ static int populate_table(struct dm_table *table,
 	const char *const end = (const char *) param + param_size;
 	char *target_params;
 	size_t min_size = sizeof(struct dm_ioctl);
+	bool const strict = !sloppy_checks(param);
 
 	if (!param->target_count) {
 		DMERR("%s: no targets specified", __func__);
 		return -EINVAL;
 	}
 
+	if (strict && param_size % 8 != 0) {
+		DMERR("%s: parameter size %zu not multiple of 8",
+		      __func__, param_size);
+		return -EINVAL;
+	}
+
 	for (i = 0; i < param->target_count; i++) {
 		const char *nul_terminator;
 
@@ -1466,6 +1498,18 @@ static int populate_table(struct dm_table *table,
 		/* Add 1 for NUL terminator */
 		min_size = (nul_terminator - (const char *)spec) + 1;
 
+		if (strict) {
+			if (!no_non_nul_after_nul(spec->target_type, sizeof(spec->target_type),
+						  DM_TABLE_LOAD_CMD, "target type"))
+				return -EINVAL;
+
+			if (spec->status) {
+				DMERR("%s: status in target spec must be zero, not %u",
+				      __func__, spec->status);
+				return -EINVAL;
+			}
+		}
+
 		r = dm_table_add_target(table, spec->target_type,
 					(sector_t) spec->sector_start,
 					(sector_t) spec->length,
@@ -1476,6 +1520,32 @@ static int populate_table(struct dm_table *table,
 		}
 
 		next = spec->next;
+
+		if (strict) {
+			uint64_t zero = 0;
+			/*
+			 * param_size is a multiple of 8 so this is still in
+			 * bounds (or 1 past the end).
+			 */
+			size_t expected_next = round_up(min_size, 8);
+
+			if (expected_next != next) {
+				DMERR("%s: in strict mode, expected next to be %zu but it was %u",
+				      __func__, expected_next, next);
+				return -EINVAL;
+			}
+
+			if (memcmp(&zero, nul_terminator, next - min_size + 1) != 0) {
+				DMERR("%s: in strict mode, padding must be zeroed", __func__);
+				return -EINVAL;
+			}
+		}
+	}
+
+	if (strict && next != (size_t)(end - (const char *)spec)) {
+		DMERR("%s: last target size is %u, but %zd bytes remaining in target spec",
+		      __func__, next, end - (const char *)spec);
+		return -EINVAL;
 	}
 
 	return dm_table_complete(table);
@@ -1823,48 +1893,67 @@ static int target_message(struct file *filp, struct dm_ioctl *param, size_t para
  * the ioctl.
  */
 #define IOCTL_FLAGS_NO_PARAMS		1
-#define IOCTL_FLAGS_ISSUE_GLOBAL_EVENT	2
+#define IOCTL_FLAGS_TAKES_EVENT_NR      2
+#define IOCTL_FLAGS_ISSUE_GLOBAL_EVENT	(IOCTL_FLAGS_TAKES_EVENT_NR | 4)
 
 /*
  *---------------------------------------------------------------
  * Implementation of open/close/ioctl on the special char device.
  *---------------------------------------------------------------
  */
-static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags)
+static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags, uint32_t *supported_flags)
 {
 	static const struct {
 		int cmd;
 		int flags;
 		ioctl_fn fn;
+		uint32_t supported_flags;
 	} _ioctls[] = {
-		{DM_VERSION_CMD, 0, NULL}, /* version is dealt with elsewhere */
-		{DM_REMOVE_ALL_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, remove_all},
-		{DM_LIST_DEVICES_CMD, 0, list_devices},
+		/* Macro to make the structure initializers somewhat readable */
+#define I(cmd, flags, fn, supported_flags) {							\
+	(cmd),											\
+	(flags),										\
+	(fn),											\
+	/*											\
+	 * Supported flags in sloppy mode must not include anything in DM_STRICT_ONLY_FLAGS.	\
+	 * Use BUILD_BUG_ON_ZERO to check for that.						\
+	 */											\
+	(supported_flags) | BUILD_BUG_ON_ZERO((supported_flags) & DM_STRICT_ONLY_FLAGS),	\
+}
+		I(DM_VERSION_CMD, 0, NULL, 0), /* version is dealt with elsewhere */
+		I(DM_REMOVE_ALL_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, remove_all,
+		 DM_DEFERRED_REMOVE),
+		I(DM_LIST_DEVICES_CMD, 0, list_devices, DM_UUID_FLAG),
+		I(DM_DEV_CREATE_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_create,
+		 DM_PERSISTENT_DEV_FLAG),
+		I(DM_DEV_REMOVE_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_remove,
+		 DM_DEFERRED_REMOVE),
+		I(DM_DEV_RENAME_CMD, IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_rename,
+		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_UUID_FLAG),
+		I(DM_DEV_SUSPEND_CMD, IOCTL_FLAGS_NO_PARAMS, dev_suspend,
+		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_SUSPEND_FLAG | DM_SKIP_LOCKFS_FLAG | DM_NOFLUSH_FLAG),
+		I(DM_DEV_STATUS_CMD, IOCTL_FLAGS_NO_PARAMS, dev_status, DM_QUERY_INACTIVE_TABLE_FLAG),
+		I(DM_DEV_WAIT_CMD, IOCTL_FLAGS_TAKES_EVENT_NR, dev_wait,
+		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_STATUS_TABLE_FLAG | DM_NOFLUSH_FLAG),
+		I(DM_TABLE_LOAD_CMD, 0, table_load, DM_QUERY_INACTIVE_TABLE_FLAG | DM_READONLY_FLAG),
+		I(DM_TABLE_CLEAR_CMD, IOCTL_FLAGS_NO_PARAMS, table_clear, DM_QUERY_INACTIVE_TABLE_FLAG),
+		I(DM_TABLE_DEPS_CMD, 0, table_deps, DM_QUERY_INACTIVE_TABLE_FLAG),
+		I(DM_TABLE_STATUS_CMD, 0, table_status,
+		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_STATUS_TABLE_FLAG | DM_NOFLUSH_FLAG),
 
-		{DM_DEV_CREATE_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_create},
-		{DM_DEV_REMOVE_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_remove},
-		{DM_DEV_RENAME_CMD, IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_rename},
-		{DM_DEV_SUSPEND_CMD, IOCTL_FLAGS_NO_PARAMS, dev_suspend},
-		{DM_DEV_STATUS_CMD, IOCTL_FLAGS_NO_PARAMS, dev_status},
-		{DM_DEV_WAIT_CMD, 0, dev_wait},
+		I(DM_LIST_VERSIONS_CMD, 0, list_versions, 0),
 
-		{DM_TABLE_LOAD_CMD, 0, table_load},
-		{DM_TABLE_CLEAR_CMD, IOCTL_FLAGS_NO_PARAMS, table_clear},
-		{DM_TABLE_DEPS_CMD, 0, table_deps},
-		{DM_TABLE_STATUS_CMD, 0, table_status},
-
-		{DM_LIST_VERSIONS_CMD, 0, list_versions},
-
-		{DM_TARGET_MSG_CMD, 0, target_message},
-		{DM_DEV_SET_GEOMETRY_CMD, 0, dev_set_geometry},
-		{DM_DEV_ARM_POLL_CMD, IOCTL_FLAGS_NO_PARAMS, dev_arm_poll},
-		{DM_GET_TARGET_VERSION_CMD, 0, get_target_version},
+		I(DM_TARGET_MSG_CMD, 0, target_message, DM_QUERY_INACTIVE_TABLE_FLAG),
+		I(DM_DEV_SET_GEOMETRY_CMD, 0, dev_set_geometry, 0),
+		I(DM_DEV_ARM_POLL_CMD, IOCTL_FLAGS_NO_PARAMS, dev_arm_poll, 0),
+		I(DM_GET_TARGET_VERSION_CMD, 0, get_target_version, 0),
 	};
 
 	if (unlikely(cmd >= ARRAY_SIZE(_ioctls)))
 		return NULL;
 
 	cmd = array_index_nospec(cmd, ARRAY_SIZE(_ioctls));
+	*supported_flags = _ioctls[cmd].supported_flags;
 	*ioctl_flags = _ioctls[cmd].flags;
 	return _ioctls[cmd].fn;
 }
@@ -1877,27 +1966,34 @@ static int check_version(unsigned int cmd, struct dm_ioctl __user *user,
 			 struct dm_ioctl *kernel_params)
 {
 	int r = 0;
-	uint32_t *version = kernel_params->version;
+	uint32_t expected_major_version = DM_VERSION_MAJOR;
 
-	if (copy_from_user(version, user->version, sizeof(user->version)))
+	if (copy_from_user(kernel_params->version, user->version, sizeof(kernel_params->version)))
 		return -EFAULT;
 
-	if ((version[0] != DM_VERSION_MAJOR) ||
-	    (version[1] > DM_VERSION_MINOR)) {
+	if (kernel_params->version[0] >= DM_VERSION_MAJOR_STRICT)
+		expected_major_version = DM_VERSION_MAJOR_STRICT;
+
+	if ((kernel_params->version[0] != expected_major_version) ||
+	    (kernel_params->version[1] > DM_VERSION_MINOR)) {
 		DMERR("ioctl interface mismatch: kernel(%u.%u.%u), user(%u.%u.%u), cmd(%d)",
-		      DM_VERSION_MAJOR, DM_VERSION_MINOR,
+		      expected_major_version,
+		      DM_VERSION_MINOR,
 		      DM_VERSION_PATCHLEVEL,
-		      version[0], version[1], version[2], cmd);
+		      kernel_params->version[0],
+		      kernel_params->version[1],
+		      kernel_params->version[2],
+		      cmd);
 		r = -EINVAL;
 	}
 
 	/*
 	 * Fill in the kernel version.
 	 */
-	version[0] = DM_VERSION_MAJOR;
-	version[1] = DM_VERSION_MINOR;
-	version[2] = DM_VERSION_PATCHLEVEL;
-	if (copy_to_user(user->version, version, sizeof(version)))
+	kernel_params->version[0] = expected_major_version;
+	kernel_params->version[1] = DM_VERSION_MINOR;
+	kernel_params->version[2] = DM_VERSION_PATCHLEVEL;
+	if (copy_to_user(user->version, kernel_params->version, sizeof(kernel_params->version)))
 		return -EFAULT;
 
 	return r;
@@ -1920,9 +2016,12 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
 {
 	struct dm_ioctl *dmi;
 	int secure_data;
-	const size_t minimum_data_size = offsetof(struct dm_ioctl, data);
+	const size_t minimum_data_size = sloppy_checks(param_kernel) ?
+		offsetof(struct dm_ioctl, data) : sizeof(struct dm_ioctl);
 	unsigned int noio_flag;
 
+	static_assert(offsetof(struct dm_ioctl, data_size) == sizeof(param_kernel->version));
+	static_assert(offsetof(struct dm_ioctl, data_size) == 12);
 	/* Version has been copied from userspace already, avoid TOCTOU */
 	if (copy_from_user((char *)param_kernel + sizeof(param_kernel->version),
 			   (char __user *)user + sizeof(param_kernel->version),
@@ -1930,12 +2029,13 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
 		return -EFAULT;
 
 	if (param_kernel->data_size < minimum_data_size) {
-		DMERR("Invalid data size in the ioctl structure: %u",
-		      param_kernel->data_size);
+		DMERR("Invalid data size in the ioctl structure: %u (minimum %zu)",
+		      param_kernel->data_size, minimum_data_size);
 		return -EINVAL;
 	}
 
 	secure_data = param_kernel->flags & DM_SECURE_DATA_FLAG;
+	param_kernel->flags &= ~DM_SECURE_DATA_FLAG;
 
 	*param_flags = secure_data ? DM_WIPE_BUFFER : 0;
 
@@ -1966,7 +2066,8 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
 	/* Copy from param_kernel (which was already copied from user) */
 	memcpy(dmi, param_kernel, minimum_data_size);
 
-	if (copy_from_user(&dmi->data, (char __user *)user + minimum_data_size,
+	if (copy_from_user((char *)dmi + minimum_data_size,
+			   (char __user *)user + minimum_data_size,
 			   param_kernel->data_size - minimum_data_size))
 		goto bad;
 data_copied:
@@ -1983,33 +2084,86 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
 	return -EFAULT;
 }
 
-static int validate_params(uint cmd, struct dm_ioctl *param)
+static int validate_params(uint cmd, struct dm_ioctl *param,
+			   uint32_t ioctl_flags, uint32_t supported_flags)
 {
-	/* Always clear this flag */
-	param->flags &= ~DM_BUFFER_FULL_FLAG;
-	param->flags &= ~DM_UEVENT_GENERATED_FLAG;
-	param->flags &= ~DM_SECURE_DATA_FLAG;
-	param->flags &= ~DM_DATA_OUT_FLAG;
-
-	/* Ignores parameters */
-	if (cmd == DM_REMOVE_ALL_CMD ||
-	    cmd == DM_LIST_DEVICES_CMD ||
-	    cmd == DM_LIST_VERSIONS_CMD)
-		return 0;
+	static_assert(__same_type(param->flags, supported_flags));
+	u64 zero = 0;
 
 	if (cmd == DM_DEV_CREATE_CMD) {
 		if (!*param->name) {
 			DMERR("name not supplied when creating device");
 			return -EINVAL;
 		}
-	} else if (*param->uuid && *param->name) {
-		DMERR("only supply one of name or uuid, cmd(%u)", cmd);
+	} else {
+		if (*param->uuid && *param->name) {
+			DMERR("only supply one of name or uuid, cmd(%u)", cmd);
+			return -EINVAL;
+		}
+	}
+
+	if (sloppy_checks(param)) {
+		/* Ensure strings are terminated */
+		param->name[DM_NAME_LEN - 1] = '\0';
+		param->uuid[DM_UUID_LEN - 1] = '\0';
+		/* Mask off bits that could confuse other code */
+		param->flags &= ~DM_STRICT_ONLY_FLAGS;
+		/* Skip strict checks */
+		return 0;
+	}
+
+	/* Check that strings are terminated */
+	if (!no_non_nul_after_nul(param->name, DM_NAME_LEN, cmd, "Name") ||
+	    !no_non_nul_after_nul(param->uuid, DM_UUID_LEN, cmd, "UUID")) {
 		return -EINVAL;
 	}
 
-	/* Ensure strings are terminated */
-	param->name[DM_NAME_LEN - 1] = '\0';
-	param->uuid[DM_UUID_LEN - 1] = '\0';
+	if (memcmp(param->data, &zero, sizeof(param->data)) != 0) {
+		DMERR("second padding field not zeroed in strict mode (cmd %u)", cmd);
+		return -EINVAL;
+	}
+
+	if (param->flags & ~supported_flags) {
+		DMERR("unsupported flags 0x%x specified, cmd(%u)",
+		      param->flags & ~supported_flags, cmd);
+		return -EINVAL;
+	}
+
+	if (param->padding) {
+		DMERR("padding not zeroed in strict mode (got %u, cmd %u)",
+		      param->padding, cmd);
+		return -EINVAL;
+	}
+
+	if (param->open_count != 0) {
+		DMERR("open_count not zeroed in strict mode (got %d, cmd %u)",
+		      param->open_count, cmd);
+		return -EINVAL;
+	}
+
+	if (param->event_nr != 0 && (ioctl_flags & IOCTL_FLAGS_TAKES_EVENT_NR) == 0) {
+		DMERR("Event number not zeroed for command that does not take one (got %u, cmd %u)",
+		      param->event_nr, cmd);
+		return -EINVAL;
+	}
+
+	if (ioctl_flags & IOCTL_FLAGS_NO_PARAMS) {
+		/* Ignores parameters */
+		if (param->data_size != sizeof(struct dm_ioctl)) {
+			DMERR("command %u must not have parameters", cmd);
+			return -EINVAL;
+		}
+
+		if (param->target_count != 0) {
+			DMERR("command %u must have zero target_count", cmd);
+			return -EINVAL;
+		}
+
+		if (param->data_start) {
+			DMERR("command %u must have zero data_start", cmd);
+			return -EINVAL;
+		}
+	}
 
 	return 0;
 }
@@ -2024,6 +2178,7 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 	ioctl_fn fn = NULL;
 	size_t input_param_size;
 	struct dm_ioctl param_kernel;
+	uint32_t supported_flags, old_flags;
 
 	/* only root can play with this */
 	if (!capable(CAP_SYS_ADMIN))
@@ -2039,7 +2194,7 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 	 * writes out the kernel's interface version.
 	 */
 	r = check_version(cmd, user, &param_kernel);
-	if (r)
+	if (r != 0)
 		return r;
 
 	/*
@@ -2048,7 +2203,7 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 	if (cmd == DM_VERSION_CMD)
 		return 0;
 
-	fn = lookup_ioctl(cmd, &ioctl_flags);
+	fn = lookup_ioctl(cmd, &ioctl_flags, &supported_flags);
 	if (!fn) {
 		DMERR("dm_ctl_ioctl: unknown command 0x%x", command);
 		return -ENOTTY;
@@ -2063,11 +2218,20 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 		return r;
 
 	input_param_size = param->data_size;
-	r = validate_params(cmd, param);
+
+	/*
+	 * In sloppy mode, validate_params will clear some
+	 * flags to ensure other code does not get confused.
+	 * Save the original flags here.
+	 */
+	old_flags = param->flags;
+	r = validate_params(cmd, param, ioctl_flags, supported_flags);
 	if (r)
 		goto out;
+	/* This XOR keeps only the flags validate_params has changed. */
+	old_flags ^= param->flags;
 
-	param->data_size = offsetof(struct dm_ioctl, data);
+	param->data_size = sloppy_checks(param) ? offsetof(struct dm_ioctl, data) : sizeof(struct dm_ioctl);
 	r = fn(file, param, input_param_size);
 
 	if (unlikely(param->flags & DM_BUFFER_FULL_FLAG) &&
@@ -2077,6 +2241,9 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 	if (!r && ioctl_flags & IOCTL_FLAGS_ISSUE_GLOBAL_EVENT)
 		dm_issue_global_event();
 
+	/* Restore the flags that validate_params cleared */
+	param->flags |= old_flags;
+
 	/*
 	 * Copy the results back to userland.
 	 */
diff --git a/include/uapi/linux/dm-ioctl.h b/include/uapi/linux/dm-ioctl.h
index 1990b5700f6948243def314cec22f380926aca2e..81103e1dcdac3015204e9c05d73037191e965d59 100644
--- a/include/uapi/linux/dm-ioctl.h
+++ b/include/uapi/linux/dm-ioctl.h
@@ -171,8 +171,11 @@ struct dm_target_spec {
 
 	/*
 	 * Parameter string starts immediately after this object.
-	 * Be careful to add padding after string to ensure correct
-	 * alignment of subsequent dm_target_spec.
+	 * Be careful to add padding after string to ensure 8-byte
+	 * alignment of subsequent dm_target_spec.  If the major version
+	 * is DM_VERSION_MAJOR_STRICT, the padding must be at most 7 bytes
+	 * (not including the terminating NUL that ends the string) and
+	 * must be zeroed.
 	 */
 };
 
@@ -285,14 +288,25 @@ enum {
 #define DM_TARGET_MSG	 _IOWR(DM_IOCTL, DM_TARGET_MSG_CMD, struct dm_ioctl)
 #define DM_DEV_SET_GEOMETRY	_IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
 
+/* Legacy major version */
 #define DM_VERSION_MAJOR	4
-#define DM_VERSION_MINOR	48
+/*
+ * New major version.  Enforces strict parameter checks and is required for
+ * using some new features, such as new flags.  Should be used by all new code.
+ *
+ * If one uses DM_VERSION_MAJOR_STRICT, it is possible for the behavior of
+ * ioctls to depend on the minor version passed by userspace.  Userspace must
+ * not pass a minor version greater than the version it was designed for.
+ */
+#define DM_VERSION_MAJOR_STRICT 5
+#define DM_VERSION_MINOR	49
 #define DM_VERSION_PATCHLEVEL	0
 #define DM_VERSION_EXTRA	"-ioctl (2023-03-01)"
 
 /* Status bits */
 #define DM_READONLY_FLAG	(1 << 0) /* In/Out */
 #define DM_SUSPEND_FLAG		(1 << 1) /* In/Out */
+#define DM_EXISTS_FLAG		(1 << 2) /* Not used by kernel, reserved for libdevmapper in userland */
 #define DM_PERSISTENT_DEV_FLAG	(1 << 3) /* In */
 
 /*
@@ -315,7 +329,8 @@ enum {
 #define DM_BUFFER_FULL_FLAG	(1 << 8) /* Out */
 
 /*
- * This flag is now ignored.
+ * This flag is now ignored if DM_VERSION_MAJOR is used, and causes
+ * -EINVAL if DM_VERSION_MAJOR_STRICT is used.
  */
 #define DM_SKIP_BDGET_FLAG	(1 << 9) /* In */
 
@@ -382,4 +397,11 @@ enum {
  */
 #define DM_IMA_MEASUREMENT_FLAG	(1 << 19) /* In */
 
+/*
+ * If DM_VERSION_MAJOR is used, these flags are ignored by the kernel.
+ * If DM_VERSION_MAJOR_STRICT is used, these flags are reserved and
+ * must be zeroed.
+ */
+#define DM_STRICT_ONLY_FLAGS ((__u32)0xFFF00004)
+
 #endif				/* _LINUX_DM_IOCTL_H */
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:32:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:32:08 +0000
X-Inumbo-ID: 087f76e2-ff29-11ed-b231-6b7b168915f2
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 09/16] device-mapper: Allow userspace to suppress uevent generation
Date: Tue, 30 May 2023 16:31:09 -0400
Message-Id: <20230530203116.2008-10-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Userspace can use this to avoid spamming udev with events that udev
should ignore.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/md/dm-core.h          |  2 +
 drivers/md/dm-ioctl.c         | 78 ++++++++++++++++++-----------------
 drivers/md/dm.c               |  5 ++-
 include/linux/device-mapper.h |  2 +-
 include/uapi/linux/dm-ioctl.h | 14 +++++--
 5 files changed, 57 insertions(+), 44 deletions(-)

diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index aecab0c0720f77ae2a0ab048304ea3d1023f9959..a033f85d1a9d9b3d8ec893efd6552fb48d2b3541 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -115,6 +115,8 @@ struct mapped_device {
 
 	/* for blk-mq request-based DM support */
 	bool init_tio_pdu:1;
+	/* If set, do not emit any uevents. */
+	bool disable_uevents:1;
 	struct blk_mq_tag_set *tag_set;
 
 	struct dm_stats stats;
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 01cdf57bcafbf7f3e1b8304eec28792c6b24642d..52aa5505d23b2f3d9c0faf6e8a91b74cd7845581 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -814,6 +814,11 @@ static struct dm_table *dm_get_live_or_inactive_table(struct mapped_device *md,
 		dm_get_inactive_table(md, srcu_idx) : dm_get_live_table(md, srcu_idx);
 }
 
+static inline bool sloppy_checks(const struct dm_ioctl *param)
+{
+	return param->version[0] < DM_VERSION_MAJOR_STRICT;
+}
+
 /*
  * Fills in a dm_ioctl structure, ready for sending back to
  * userland.
@@ -872,7 +877,7 @@ static void __dev_status(struct mapped_device *md, struct dm_ioctl *param)
 		dm_put_live_table(md, srcu_idx);
 	}
 
-	if (param->version[0] >= DM_VERSION_MAJOR_STRICT)
+	if (!sloppy_checks(param))
 		dm_set_diskseq(param, disk->diskseq);
 }
 
@@ -888,7 +893,7 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
 	if (param->flags & DM_PERSISTENT_DEV_FLAG)
 		m = MINOR(huge_decode_dev(param->dev));
 
-	r = dm_create(m, &md);
+	r = dm_create(m, &md, param->flags & DM_DISABLE_UEVENTS_FLAG);
 	if (r)
 		return r;
 
@@ -1452,11 +1457,6 @@ static int next_target(struct dm_target_spec *last, uint32_t next, const char *e
 	return 0;
 }
 
-static inline bool sloppy_checks(const struct dm_ioctl *param)
-{
-	return param->version[0] < DM_VERSION_MAJOR_STRICT;
-}
-
 static bool no_non_nul_after_nul(const char *untrusted_str, size_t size,
 				 unsigned int cmd, const char *msg)
 {
@@ -1928,59 +1928,61 @@ static int target_message(struct file *filp, struct dm_ioctl *param, size_t para
  * Implementation of open/close/ioctl on the special char device.
  *---------------------------------------------------------------
  */
-static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags, uint32_t *supported_flags)
+static ioctl_fn lookup_ioctl(unsigned int cmd, bool strict, int *ioctl_flags, uint32_t *supported_flags)
 {
 	static const struct {
 		int cmd;
 		int flags;
 		ioctl_fn fn;
 		uint32_t supported_flags;
+		uint32_t strict_flags;
 	} _ioctls[] = {
 		/* Macro to make the structure initializers somewhat readable */
-#define I(cmd, flags, fn, supported_flags) {							\
-	(cmd),											\
-	(flags),										\
-	(fn),											\
-	/*											\
-	 * Supported flags in sloppy mode must not include anything in DM_STRICT_ONLY_FLAGS.	\
-	 * Use BUILD_BUG_ON_ZERO to check for that.						\
-	 */											\
-	(supported_flags) | BUILD_BUG_ON_ZERO((supported_flags) & DM_STRICT_ONLY_FLAGS),	\
+#define I(cmd, flags, fn, supported_flags, strict_flags) {						\
+	(cmd),												\
+	(flags),											\
+	(fn),												\
+	/*												\
+	 * Supported flags in sloppy mode must not include anything in DM_STRICT_ONLY_FLAGS.		\
+	 * Use BUILD_BUG_ON_ZERO to check for that.							\
+	 */												\
+	(supported_flags) | BUILD_BUG_ON_ZERO((supported_flags) & DM_STRICT_ONLY_FLAGS),		\
+	(strict_flags) | (supported_flags) | BUILD_BUG_ON_ZERO((supported_flags) & (strict_flags)),	\
 }
-		I(DM_VERSION_CMD, 0, NULL, 0), /* version is dealt with elsewhere */
+		I(DM_VERSION_CMD, 0, NULL, 0, 0), /* version is dealt with elsewhere */
 		I(DM_REMOVE_ALL_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, remove_all,
-		 DM_DEFERRED_REMOVE),
-		I(DM_LIST_DEVICES_CMD, 0, list_devices, DM_UUID_FLAG),
+		 DM_DEFERRED_REMOVE, 0),
+		I(DM_LIST_DEVICES_CMD, 0, list_devices, DM_UUID_FLAG, 0),
 		I(DM_DEV_CREATE_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_create,
-		 DM_PERSISTENT_DEV_FLAG),
+		 DM_PERSISTENT_DEV_FLAG, DM_DISABLE_UEVENTS_FLAG),
 		I(DM_DEV_REMOVE_CMD, IOCTL_FLAGS_NO_PARAMS | IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_remove,
-		 DM_DEFERRED_REMOVE),
+		 DM_DEFERRED_REMOVE, 0),
 		I(DM_DEV_RENAME_CMD, IOCTL_FLAGS_ISSUE_GLOBAL_EVENT, dev_rename,
-		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_UUID_FLAG),
+		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_UUID_FLAG, 0),
 		I(DM_DEV_SUSPEND_CMD, IOCTL_FLAGS_NO_PARAMS, dev_suspend,
-		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_SUSPEND_FLAG | DM_SKIP_LOCKFS_FLAG | DM_NOFLUSH_FLAG),
-		I(DM_DEV_STATUS_CMD, IOCTL_FLAGS_NO_PARAMS, dev_status, DM_QUERY_INACTIVE_TABLE_FLAG),
+		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_SUSPEND_FLAG | DM_SKIP_LOCKFS_FLAG | DM_NOFLUSH_FLAG, 0),
+		I(DM_DEV_STATUS_CMD, IOCTL_FLAGS_NO_PARAMS, dev_status, DM_QUERY_INACTIVE_TABLE_FLAG, 0),
 		I(DM_DEV_WAIT_CMD, IOCTL_FLAGS_TAKES_EVENT_NR, dev_wait,
-		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_STATUS_TABLE_FLAG | DM_NOFLUSH_FLAG),
-		I(DM_TABLE_LOAD_CMD, 0, table_load, DM_QUERY_INACTIVE_TABLE_FLAG | DM_READONLY_FLAG),
-		I(DM_TABLE_CLEAR_CMD, IOCTL_FLAGS_NO_PARAMS, table_clear, DM_QUERY_INACTIVE_TABLE_FLAG),
-		I(DM_TABLE_DEPS_CMD, 0, table_deps, DM_QUERY_INACTIVE_TABLE_FLAG),
+		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_STATUS_TABLE_FLAG | DM_NOFLUSH_FLAG, 0),
+		I(DM_TABLE_LOAD_CMD, 0, table_load, DM_QUERY_INACTIVE_TABLE_FLAG | DM_READONLY_FLAG, 0),
+		I(DM_TABLE_CLEAR_CMD, IOCTL_FLAGS_NO_PARAMS, table_clear, DM_QUERY_INACTIVE_TABLE_FLAG, 0),
+		I(DM_TABLE_DEPS_CMD, 0, table_deps, DM_QUERY_INACTIVE_TABLE_FLAG, 0),
 		I(DM_TABLE_STATUS_CMD, 0, table_status,
-		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_STATUS_TABLE_FLAG | DM_NOFLUSH_FLAG),
+		 DM_QUERY_INACTIVE_TABLE_FLAG | DM_STATUS_TABLE_FLAG | DM_NOFLUSH_FLAG, 0),
 
-		I(DM_LIST_VERSIONS_CMD, 0, list_versions, 0),
+		I(DM_LIST_VERSIONS_CMD, 0, list_versions, 0, 0),
 
-		I(DM_TARGET_MSG_CMD, 0, target_message, DM_QUERY_INACTIVE_TABLE_FLAG),
-		I(DM_DEV_SET_GEOMETRY_CMD, 0, dev_set_geometry, 0),
-		I(DM_DEV_ARM_POLL_CMD, IOCTL_FLAGS_NO_PARAMS, dev_arm_poll, 0),
-		I(DM_GET_TARGET_VERSION_CMD, 0, get_target_version, 0),
+		I(DM_TARGET_MSG_CMD, 0, target_message, DM_QUERY_INACTIVE_TABLE_FLAG, 0),
+		I(DM_DEV_SET_GEOMETRY_CMD, 0, dev_set_geometry, 0, 0),
+		I(DM_DEV_ARM_POLL_CMD, IOCTL_FLAGS_NO_PARAMS, dev_arm_poll, 0, 0),
+		I(DM_GET_TARGET_VERSION_CMD, 0, get_target_version, 0, 0),
 	};
 
 	if (unlikely(cmd >= ARRAY_SIZE(_ioctls)))
 		return NULL;
 
 	cmd = array_index_nospec(cmd, ARRAY_SIZE(_ioctls));
-	*supported_flags = _ioctls[cmd].supported_flags;
+	*supported_flags = strict ? _ioctls[cmd].strict_flags : _ioctls[cmd].supported_flags;
 	*ioctl_flags = _ioctls[cmd].flags;
 	return _ioctls[cmd].fn;
 }
@@ -2233,7 +2235,7 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 	if (cmd == DM_VERSION_CMD)
 		return 0;
 
-	fn = lookup_ioctl(cmd, &ioctl_flags, &supported_flags);
+	fn = lookup_ioctl(cmd, !sloppy_checks(&param_kernel), &ioctl_flags, &supported_flags);
 	if (!fn) {
 		DMERR("dm_ctl_ioctl: unknown command 0x%x", command);
 		return -ENOTTY;
@@ -2451,7 +2453,7 @@ int __init dm_early_create(struct dm_ioctl *dmi,
 		m = MINOR(huge_decode_dev(dmi->dev));
 
 	/* alloc dm device */
-	r = dm_create(m, &md);
+	r = dm_create(m, &md, false);
 	if (r)
 		return r;
 
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 3b694ba3a106e68d4c0d5e64cd9136cf7abce237..efdf70a331cb681a88490f45d26259c29ddac850 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2276,13 +2276,14 @@ static struct dm_table *__unbind(struct mapped_device *md)
 /*
  * Constructor for a new device.
  */
-int dm_create(int minor, struct mapped_device **result)
+int dm_create(int minor, struct mapped_device **result, bool disable_uevents)
 {
 	struct mapped_device *md;
 
 	md = alloc_dev(minor);
 	if (!md)
 		return -ENXIO;
+	md->disable_uevents = disable_uevents;
 
 	dm_ima_reset_data(md);
 
@@ -2999,6 +3000,8 @@ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
 	char udev_cookie[DM_COOKIE_LENGTH];
 	char *envp[3] = { NULL, NULL, NULL };
 	char **envpp = envp;
+	if (md->disable_uevents)
+		return 0;
 	if (cookie) {
 		snprintf(udev_cookie, DM_COOKIE_LENGTH, "%s=%u",
 			 DM_COOKIE_ENV_VAR_NAME, cookie);
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index a52d2b9a68460ac7951ad6ebe76d9a1cfccf7afb..7c8d7a7e8798d20e517e2264c06772ecd8b41ef3 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -463,7 +463,7 @@ void dm_consume_args(struct dm_arg_set *as, unsigned int num_args);
  * DM_ANY_MINOR chooses the next available minor number.
  */
 #define DM_ANY_MINOR (-1)
-int dm_create(int minor, struct mapped_device **md);
+int dm_create(int minor, struct mapped_device **md, bool disable_uevents);
 
 /*
  * Reference counting for md.
diff --git a/include/uapi/linux/dm-ioctl.h b/include/uapi/linux/dm-ioctl.h
index 5647b218f24b626f5c1cefe8bec18dc04373c3d0..07cc5bbb6944ebaa42ddfec6fd5e0413c535e7ff 100644
--- a/include/uapi/linux/dm-ioctl.h
+++ b/include/uapi/linux/dm-ioctl.h
@@ -356,8 +356,16 @@ enum {
 #define DM_BUFFER_FULL_FLAG	(1 << 8) /* Out */
 
 /*
- * This flag is now ignored if DM_VERSION_MAJOR is used, and causes
- * -EINVAL if DM_VERSION_MAJOR_STRICT is used.
+ * This flag is only recognized when DM_VERSION_MAJOR_STRICT is used.
+ * It tells the kernel not to generate any uevents for the newly-created
+ * device.  Using it outside of DM_DEV_CREATE results in -EINVAL.  When
+ * DM_VERSION_MAJOR is used, this flag is ignored.
+ */
+#define DM_DISABLE_UEVENTS_FLAG	(1 << 9) /* In */
+
+/*
+ * This flag is now ignored if DM_VERSION_MAJOR is used.  When
+ * DM_VERSION_MAJOR_STRICT is used, it is an alias for DM_DISABLE_UEVENTS_FLAG.
  */
 #define DM_SKIP_BDGET_FLAG	(1 << 9) /* In */
 
@@ -426,8 +434,6 @@ enum {
 
 /*
  * If DM_VERSION_MAJOR is used, these flags are ignored by the kernel.
- * If DM_VERSION_MAJOR_STRICT is used, these flags are reserved and
- * must be zeroed.
  */
 #define DM_STRICT_ONLY_FLAGS ((__u32)(~((1UL << 19) - 1) | 1 << 9 | 1 << 7))
 
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:32:10 2023
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 10/16] device-mapper: Refuse to create device named "control"
Date: Tue, 30 May 2023 16:31:10 -0400
Message-Id: <20230530203116.2008-11-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Typical userspace setups create a symlink under /dev/mapper with the
name of the device, but /dev/mapper/control is reserved for the control
device.  Therefore, trying to create such a device is almost certain to
be a userspace bug.
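
The check this patch adds can be sketched in plain C. `check_name_sketch` is an illustrative name, not the kernel function; DM_CONTROL_NODE is the reserved name "control", and -EINVAL matches the kernel's return value:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define DM_CONTROL_NODE "control"  /* reserved for /dev/mapper/control */

/* Reject names containing '/' (they would escape /dev/mapper) and the
 * reserved control-node name (its device node already exists). */
static int check_name_sketch(const char *name)
{
	if (strchr(name, '/'))
		return -EINVAL;
	if (strcmp(name, DM_CONTROL_NODE) == 0)
		return -EINVAL;
	return 0;
}
```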

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/md/dm-ioctl.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 52aa5505d23b2f3d9c0faf6e8a91b74cd7845581..9ae00e3c1a72c19575814cf473774835b364320b 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -771,7 +771,12 @@ static int get_target_version(struct file *filp, struct dm_ioctl *param, size_t
 static int check_name(const char *name)
 {
 	if (strchr(name, '/')) {
-		DMERR("invalid device name");
+		DMERR("device name cannot contain '/'");
+		return -EINVAL;
+	}
+
+	if (strcmp(name, DM_CONTROL_NODE) == 0) {
+		DMERR("device name cannot be \"%s\"", DM_CONTROL_NODE);
 		return -EINVAL;
 	}
 
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:32:12 2023
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 11/16] device-mapper: "." and ".." are not valid symlink names
Date: Tue, 30 May 2023 16:31:11 -0400
Message-Id: <20230530203116.2008-12-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Either of these names will greatly confuse userspace: they are not valid
symlink names, so creating the usual /dev/mapper/NAME symlink will not be
possible.  Since creating a device with either name is almost certainly a
userspace bug, just error out.
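
The extra rejection this patch adds can be sketched as follows, with -EINVAL as in the kernel (`valid_dm_name` is an illustrative name, not the kernel function):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* "." and ".." already exist in every directory, so no
 * /dev/mapper symlink could ever be created for them. */
static int valid_dm_name(const char *name)
{
	if (strcmp(name, ".") == 0 || strcmp(name, "..") == 0)
		return -EINVAL;
	return 0;
}
```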

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/md/dm-ioctl.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 9ae00e3c1a72c19575814cf473774835b364320b..17ece816d490b6c40d019da131ade44c9a201dab 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -775,8 +775,10 @@ static int check_name(const char *name)
 		return -EINVAL;
 	}
 
-	if (strcmp(name, DM_CONTROL_NODE) == 0) {
-		DMERR("device name cannot be \"%s\"", DM_CONTROL_NODE);
+	if (strcmp(name, DM_CONTROL_NODE) == 0 ||
+	    strcmp(name, ".") == 0 ||
+	    strcmp(name, "..") == 0) {
+		DMERR("device name cannot be \"%s\", \".\", or \"..\"", DM_CONTROL_NODE);
 		return -EINVAL;
 	}
 
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:37:44 2023
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 12/16] device-mapper: inform caller about already-existing device
Date: Tue, 30 May 2023 16:31:12 -0400
Message-Id: <20230530203116.2008-13-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Not only is this helpful for debugging, it also saves the caller an
ioctl in the common case where a device should be used if it already
exists and created otherwise.  To ensure existing userspace is not
broken, this feature is only enabled in strict mode.
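
The resulting "use it if it exists, create it otherwise" pattern can be sketched like this. `do_create` stands in for the real ioctl(fd, DM_DEV_CREATE, ...) call, which this sketch stubs out; with strict checks, -EBUSY now comes back with the existing device's status filled in, so no second DM_DEV_STATUS ioctl is needed:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Create the device, or accept an existing one.  On -EBUSY the kernel
 * (in strict mode) has already returned the existing device's status,
 * so both outcomes are a success for the caller. */
static int create_or_get(int (*do_create)(const char *, int *),
			 const char *name, int *minor)
{
	int r = do_create(name, minor);
	if (r == 0 || r == -EBUSY)
		return 0;
	return r;  /* genuine failure */
}

/* Test stub standing in for the ioctl: pretend "root" already exists. */
static int fake_create(const char *name, int *minor)
{
	if (strcmp(name, "root") == 0) {
		*minor = 7;        /* status of the existing device */
		return -EBUSY;
	}
	*minor = 42;               /* newly created */
	return 0;
}
```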

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/md/dm-ioctl.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 17ece816d490b6c40d019da131ade44c9a201dab..44425093d3b908abf80e05e1fc99a26b17e18a42 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -256,11 +256,13 @@ static void free_cell(struct hash_cell *hc)
 	}
 }
 
+static void __dev_status(struct mapped_device *md, struct dm_ioctl *param);
+
 /*
  * The kdev_t and uuid of a device can never change once it is
  * initially inserted.
  */
-static int dm_hash_insert(const char *name, const char *uuid, struct mapped_device *md)
+static int dm_hash_insert(const char *name, const char *uuid, struct mapped_device *md, struct dm_ioctl *param)
 {
 	struct hash_cell *cell, *hc;
 
@@ -277,6 +279,8 @@ static int dm_hash_insert(const char *name, const char *uuid, struct mapped_devi
 	down_write(&_hash_lock);
 	hc = __get_name_cell(name);
 	if (hc) {
+		if (param)
+			__dev_status(hc->md, param);
 		dm_put(hc->md);
 		goto bad;
 	}
@@ -287,6 +291,8 @@ static int dm_hash_insert(const char *name, const char *uuid, struct mapped_devi
 		hc = __get_uuid_cell(uuid);
 		if (hc) {
 			__unlink_name(cell);
+			if (param)
+				__dev_status(hc->md, param);
 			dm_put(hc->md);
 			goto bad;
 		}
@@ -901,12 +907,14 @@ static int dev_create(struct file *filp, struct dm_ioctl *param, size_t param_si
 		m = MINOR(huge_decode_dev(param->dev));
 
 	r = dm_create(m, &md, param->flags & DM_DISABLE_UEVENTS_FLAG);
-	if (r)
+	if (r) {
+		DMERR("Could not create device-mapper device");
 		return r;
+	}
 
 	param->flags &= ~DM_INACTIVE_PRESENT_FLAG;
 
-	r = dm_hash_insert(param->name, *param->uuid ? param->uuid : NULL, md);
+	r = dm_hash_insert(param->name, *param->uuid ? param->uuid : NULL, md, param);
 	if (r) {
 		dm_put(md);
 		dm_destroy(md);
@@ -2269,7 +2277,6 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 		goto out;
 	/* This XOR keeps only the flags validate_params has changed. */
 	old_flags ^= param->flags;
-
 	param->data_size = sloppy_checks(param) ? offsetof(struct dm_ioctl, data) : sizeof(struct dm_ioctl);
 	r = fn(file, param, input_param_size);
 
@@ -2284,9 +2291,14 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
 	param->flags |= old_flags;
 
 	/*
-	 * Copy the results back to userland.
+	 * Copy the results back to userland if either:
+	 *
+	 * - The ioctl succeeded.
+	 * - The ioctl is DM_DEV_CREATE, the return value is -EBUSY,
+	 *   and strict parameter checking is enabled.
 	 */
-	if (!r && copy_to_user(user, param, param->data_size))
+	if ((!r || (!sloppy_checks(param) && cmd == DM_DEV_CREATE_CMD && r == -EBUSY)) &&
+	    copy_to_user(user, param, param->data_size))
 		r = -EFAULT;
 
 out:
@@ -2465,7 +2477,7 @@ int __init dm_early_create(struct dm_ioctl *dmi,
 		return r;
 
 	/* hash insert */
-	r = dm_hash_insert(dmi->name, *dmi->uuid ? dmi->uuid : NULL, md);
+	r = dm_hash_insert(dmi->name, *dmi->uuid ? dmi->uuid : NULL, md, NULL);
 	if (r)
 		goto err_destroy_dm;
 
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:37:44 2023
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 14/16] block, loop: Increment diskseq when releasing a loop device
Date: Tue, 30 May 2023 16:31:14 -0400
Message-Id: <20230530203116.2008-15-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The previous patch for checking diskseq in blkback is not enough to
prevent the following race:

1. Program X opens a loop device
2. Program X gets the diskseq of the loop device.
3. Program X associates a file with the loop device.
4. Program X passes the loop device major, minor, and diskseq to
   something, such as Xen blkback.
5. Program X exits.
6. Program Y detaches the file from the loop device.
7. Program Y attaches a different file to the loop device.
8. Xen blkback finally gets around to opening the loop device and
   checks that the diskseq matches the value it was given in step 4.
   The check passes, yet blkback ends up accessing the wrong file.

To prevent this race condition, increment the diskseq of a loop device
when it is detached from its backing file.  This causes blkback (or
any other program, for that matter) to fail at step 8.  Export the
inc_diskseq() function to make this possible.
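
The fix can be illustrated with a small userspace model.  This is purely
illustrative: the struct and function names below are invented for the
sketch and are not the kernel's, but the diskseq-bump-on-detach behavior
matches what this patch implements.

```c
#include <assert.h>
#include <stdint.h>

/* Toy in-memory model of a loop device: a monotonically increasing
 * sequence number plus a stand-in for the attached backing file
 * (-1 means "no file attached"). */
struct loop_dev {
	uint64_t diskseq;
	int backing_file;
};

/* Attach a file; attaching alone does not change diskseq in this model. */
static void loop_attach(struct loop_dev *lo, int file)
{
	lo->backing_file = file;
}

/* Detach: with this patch, the diskseq is bumped, invalidating any
 * (major, minor, diskseq) triple that was handed out earlier. */
static void loop_detach(struct loop_dev *lo)
{
	lo->backing_file = -1;
	lo->diskseq++;		/* inc_diskseq() analogue */
}

/* Consumer-side check, as blkback performs in step 8 above.
 * Returns 1 if the device still has the expected identity. */
static int open_checked(const struct loop_dev *lo, uint64_t expected_seq)
{
	return lo->diskseq == expected_seq;
}
```

Replaying the race against this model, the stale consumer's check now
fails at step 8 instead of silently succeeding against the wrong file.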

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
I considered destroying the loop device altogether instead of bumping
its diskseq, but was not able to accomplish that.  Suggestions welcome.
---
 block/genhd.c        | 1 +
 drivers/block/loop.c | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/block/genhd.c b/block/genhd.c
index 1cb489b927d50ab06a84a4bfd6913ca8ba7318d4..c0ca2c387732171321555cd57565fbc606768505 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -1502,3 +1502,4 @@ void inc_diskseq(struct gendisk *disk)
 {
 	disk->diskseq = atomic64_inc_return(&diskseq);
 }
+EXPORT_SYMBOL(inc_diskseq);
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index bc31bb7072a2cb7294d32066f5d0aa14130349b4..05ea5fb41508b4106f184dd6b4c37942716bdcac 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1205,6 +1205,12 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
 	if (!part_shift)
 		set_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
 	mutex_lock(&lo->lo_mutex);
+
+	/*
+	 * Increment the disk sequence number, so that userspace knows this
+	 * device now points to something else.
+	 */
+	inc_diskseq(lo->lo_disk);
 	lo->lo_state = Lo_unbound;
 	mutex_unlock(&lo->lo_mutex);
 
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:37:52 2023
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 15/16] xen-blkback: Minor cleanups
Date: Tue, 30 May 2023 16:31:15 -0400
Message-Id: <20230530203116.2008-16-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This adds a couple of BUILD_BUG_ON()s and moves some arithmetic after
the validation code that checks its preconditions.  The previous code
was correct, since the out-of-order result was discarded on the failure
path, but the unchecked subtraction could trip sanitizers that check
for unsigned integer wraparound.
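
The reordering amounts to the usual validate-before-compute pattern.
Here is a standalone sketch of the post-patch per-segment logic; the
struct and helper names are invented for illustration, but the bounds
checks mirror the ones in dispatch_rw_block_io().

```c
#include <stdint.h>

#define XEN_PAGE_SIZE 4096

/* Illustrative segment, mirroring req->u.rw.seg[i]: first and last
 * 512-byte sector covered within one page. */
struct seg {
	uint8_t first_sect;
	uint8_t last_sect;
};

/* Returns the segment's sector count, or 0 if the segment is invalid
 * (the fail_response path).  The subtraction is only performed after
 * its preconditions hold, so last_sect - first_sect can never wrap. */
static unsigned nsec_checked(struct seg s)
{
	if (s.last_sect >= (XEN_PAGE_SIZE >> 9) ||
	    s.last_sect < s.first_sect)
		return 0;
	return s.last_sect - s.first_sect + 1;
}
```

With `XEN_PAGE_SIZE >> 9` equal to 8, valid sectors are 0 through 7, and
a segment with `last_sect < first_sect` is rejected before any
arithmetic happens.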

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/block/xen-blkback/blkback.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index c362f4ad80ab07bfb58caff0ed7da37dc1484fc5..ac760a08d559085ab875784f1c58cdf2ead95a43 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -1342,6 +1342,8 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 	nseg = req->operation == BLKIF_OP_INDIRECT ?
 	       req->u.indirect.nr_segments : req->u.rw.nr_segments;
 
+	BUILD_BUG_ON(offsetof(struct blkif_request, u.rw.id) != 8);
+	BUILD_BUG_ON(offsetof(struct blkif_request, u.indirect.id) != 8);
 	if (unlikely(nseg == 0 && operation_flags != REQ_PREFLUSH) ||
 	    unlikely((req->operation != BLKIF_OP_INDIRECT) &&
 		     (nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) ||
@@ -1365,13 +1367,13 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 		preq.sector_number     = req->u.rw.sector_number;
 		for (i = 0; i < nseg; i++) {
 			pages[i]->gref = req->u.rw.seg[i].gref;
-			seg[i].nsec = req->u.rw.seg[i].last_sect -
-				req->u.rw.seg[i].first_sect + 1;
-			seg[i].offset = (req->u.rw.seg[i].first_sect << 9);
 			if ((req->u.rw.seg[i].last_sect >= (XEN_PAGE_SIZE >> 9)) ||
 			    (req->u.rw.seg[i].last_sect <
 			     req->u.rw.seg[i].first_sect))
 				goto fail_response;
+			seg[i].nsec = req->u.rw.seg[i].last_sect -
+				req->u.rw.seg[i].first_sect + 1;
+			seg[i].offset = (req->u.rw.seg[i].first_sect << 9);
 			preq.nr_sects += seg[i].nsec;
 		}
 	} else {
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:37:52 2023
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 13/16] xen-blkback: Implement diskseq checks
Date: Tue, 30 May 2023 16:31:13 -0400
Message-Id: <20230530203116.2008-14-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This allows specifying a disk sequence number in XenStore.  If it does
not match the disk sequence number of the underlying device, the device
will not be exported and a warning will be logged.  Userspace can use
this to eliminate race conditions due to major/minor number reuse.
Old kernels do not support the new syntax, but a later patch will allow
userspace to discover that the new syntax is supported.
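
The validation the backend applies to the XenStore "diskseq" node can
be mirrored in a standalone helper.  This is a userspace sketch:
parse_diskseq and its exact error-code choices follow the checks in
this patch (non-empty, at most 16 hex digits, no leading '0' so the
encoding is canonical, fully consumed by the parser), but the helper
itself is hypothetical.

```c
#include <ctype.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Parse a diskseq string as written by the toolstack.  Returns 0 and
 * stores the value on success, or a negative errno mirroring the
 * xenbus_dev_fatal() calls in the backend. */
static int parse_diskseq(const char *s, uint64_t *out)
{
	size_t len = strlen(s);
	char *end;

	if (len == 0)
		return -EFAULT;		/* "diskseq must not be empty" */
	if (len > 16)
		return -ERANGE;		/* more than 64 bits of hex digits */
	if (s[0] == '0')
		return -ERANGE;		/* leading zeros (and 0) rejected */
	for (size_t i = 0; i < len; i++)
		if (!isxdigit((unsigned char)s[i]))
			return -EINVAL;	/* signs, spaces, stray bytes */
	*out = strtoull(s, &end, 16);
	if (end != s + len)
		return -EINVAL;		/* parser must consume everything */
	return 0;
}
```

The explicit hex-digit scan is needed in userspace because strtoull(),
unlike the kernel's simple_strtoull(), accepts leading whitespace and a
sign character.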

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/block/xen-blkback/xenbus.c | 112 +++++++++++++++++++++++------
 1 file changed, 89 insertions(+), 23 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 4807af1d58059394d7a992335dabaf2bc3901721..9c3eb148fbd802c74e626c3d7bcd69dcb09bd921 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -24,6 +24,7 @@ struct backend_info {
 	struct xenbus_watch	backend_watch;
 	unsigned		major;
 	unsigned		minor;
+	unsigned long long	diskseq;
 	char			*mode;
 };
 
@@ -479,7 +480,7 @@ static void xen_vbd_free(struct xen_vbd *vbd)
 
 static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
 			  unsigned major, unsigned minor, int readonly,
-			  int cdrom)
+			  bool cdrom, u64 diskseq)
 {
 	struct xen_vbd *vbd;
 	struct block_device *bdev;
@@ -507,6 +508,26 @@ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
 		xen_vbd_free(vbd);
 		return -ENOENT;
 	}
+
+	if (diskseq) {
+		struct gendisk *disk = bdev->bd_disk;
+
+		if (unlikely(disk == NULL)) {
+			pr_err("%s: device %08x has no gendisk\n",
+			       __func__, vbd->pdevice);
+			xen_vbd_free(vbd);
+			return -EFAULT;
+		}
+
+		if (unlikely(disk->diskseq != diskseq)) {
+			pr_warn("%s: device %08x has incorrect sequence "
+				"number 0x%llx (expected 0x%llx)\n",
+				__func__, vbd->pdevice, disk->diskseq, diskseq);
+			xen_vbd_free(vbd);
+			return -ENODEV;
+		}
+	}
+
 	vbd->size = vbd_sz(vbd);
 
 	if (cdrom || disk_to_cdi(vbd->bdev->bd_disk))
@@ -707,6 +728,9 @@ static void backend_changed(struct xenbus_watch *watch,
 	int cdrom = 0;
 	unsigned long handle;
 	char *device_type;
+	char *diskseq_str = NULL;
+	int diskseq_len;
+	unsigned long long diskseq;
 
 	pr_debug("%s %p %d\n", __func__, dev, dev->otherend_id);
 
@@ -725,10 +749,46 @@ static void backend_changed(struct xenbus_watch *watch,
 		return;
 	}
 
-	if (be->major | be->minor) {
-		if (be->major != major || be->minor != minor)
-			pr_warn("changing physical device (from %x:%x to %x:%x) not supported.\n",
-				be->major, be->minor, major, minor);
+	diskseq_str = xenbus_read(XBT_NIL, dev->nodename, "diskseq", &diskseq_len);
+	if (IS_ERR(diskseq_str)) {
+		int err = PTR_ERR(diskseq_str);
+		diskseq_str = NULL;
+
+		/*
+		 * If this does not exist, it means legacy userspace that does not
+		 * support diskseq.
+		 */
+		if (unlikely(!XENBUS_EXIST_ERR(err))) {
+			xenbus_dev_fatal(dev, err, "reading diskseq");
+			return;
+		}
+		diskseq = 0;
+	} else if (diskseq_len <= 0) {
+		xenbus_dev_fatal(dev, -EFAULT, "diskseq must not be empty");
+		goto fail;
+	} else if (diskseq_len > 16) {
+		xenbus_dev_fatal(dev, -ERANGE, "diskseq too long: got %d but limit is 16",
+				 diskseq_len);
+		goto fail;
+	} else if (diskseq_str[0] == '0') {
+		xenbus_dev_fatal(dev, -ERANGE, "diskseq must not start with '0'");
+		goto fail;
+	} else {
+		char *diskseq_end;
+		diskseq = simple_strtoull(diskseq_str, &diskseq_end, 16);
+		if (diskseq_end != diskseq_str + diskseq_len) {
+			xenbus_dev_fatal(dev, -EINVAL, "invalid diskseq");
+			goto fail;
+		}
+		kfree(diskseq_str);
+		diskseq_str = NULL;
+	}
+
+	if (be->major | be->minor | be->diskseq) {
+		if (be->major != major || be->minor != minor || be->diskseq != diskseq)
+			pr_warn("changing physical device (from %x:%x:%llx to %x:%x:%llx)"
+				" not supported.\n",
+				be->major, be->minor, be->diskseq, major, minor, diskseq);
 		return;
 	}
 
@@ -756,29 +816,35 @@ static void backend_changed(struct xenbus_watch *watch,
 
 	be->major = major;
 	be->minor = minor;
+	be->diskseq = diskseq;
 
 	err = xen_vbd_create(be->blkif, handle, major, minor,
-			     !strchr(be->mode, 'w'), cdrom);
-
-	if (err)
-		xenbus_dev_fatal(dev, err, "creating vbd structure");
-	else {
-		err = xenvbd_sysfs_addif(dev);
-		if (err) {
-			xen_vbd_free(&be->blkif->vbd);
-			xenbus_dev_fatal(dev, err, "creating sysfs entries");
-		}
-	}
+			     !strchr(be->mode, 'w'), cdrom, diskseq);
 
 	if (err) {
-		kfree(be->mode);
-		be->mode = NULL;
-		be->major = 0;
-		be->minor = 0;
-	} else {
-		/* We're potentially connected now */
-		xen_update_blkif_status(be->blkif);
+		xenbus_dev_fatal(dev, err, "creating vbd structure");
+		goto fail;
 	}
+
+	err = xenvbd_sysfs_addif(dev);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "creating sysfs entries");
+		goto free_vbd;
+	}
+
+	/* We're potentially connected now */
+	xen_update_blkif_status(be->blkif);
+	return;
+
+free_vbd:
+	xen_vbd_free(&be->blkif->vbd);
+fail:
+	kfree(diskseq_str);
+	kfree(be->mode);
+	be->mode = NULL;
+	be->major = 0;
+	be->minor = 0;
+	be->diskseq = 0;
 }
 
 /*
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:37:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:37:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541457.844270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q466H-00017z-6U; Tue, 30 May 2023 20:37:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541457.844270; Tue, 30 May 2023 20:37:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q466H-00017n-2g; Tue, 30 May 2023 20:37:49 +0000
Received: by outflank-mailman (input) for mailman id 541457;
 Tue, 30 May 2023 20:37:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8QB4=BT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1q4612-0001iX-U5
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:32:24 +0000
Received: from wout5-smtp.messagingengine.com (wout5-smtp.messagingengine.com
 [64.147.123.21]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1486eb29-ff29-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 22:32:23 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.west.internal (Postfix) with ESMTP id 0D022320096E;
 Tue, 30 May 2023 16:32:20 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Tue, 30 May 2023 16:32:21 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 30 May 2023 16:32:19 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1486eb29-ff29-11ed-8611-37d641c3527e
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jens Axboe <axboe@kernel.dk>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 16/16] xen-blkback: Inform userspace that device has been opened
Date: Tue, 30 May 2023 16:31:16 -0400
Message-Id: <20230530203116.2008-17-demi@invisiblethingslab.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Set "opened" to "0" before the hotplug script is called.  Once the
device node has been opened, set "opened" to "1".

"opened" is used exclusively by userspace.  It serves two purposes:

1. It tells userspace that the diskseq Xenstore entry is supported.

2. It tells userspace that it can wait for "opened" to be set to 1.
   Once "opened" is 1, blkback has a reference to the device, so
   userspace doesn't need to keep one.

Together, these changes allow userspace to use block devices with
delete-on-close behavior, such as loop devices with the autoclear flag
set or device-mapper devices with the deferred-remove flag set.
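As a sketch of how userspace might consume this protocol (the callback type and helper below are illustrative, not part of any real toolstack API; a real toolstack would wrap a Xenstore read such as xs_read() or the xenstore-read utility, and would typically block on a Xenstore watch rather than poll):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for reading the backend's "opened" Xenstore node; returns the
 * node's current value, or NULL if the node does not exist yet. */
typedef const char *(*read_opened_fn)(void *ctx);

/* Wait until "opened" reads "1".  Once it does, blkback holds its own
 * reference to the block device, so userspace may close it or mark it
 * delete-on-close (e.g. a loop device with the autoclear flag). */
static bool wait_for_opened(read_opened_fn read_opened, void *ctx,
                            int max_polls)
{
    for (int i = 0; i < max_polls; i++) {
        const char *v = read_opened(ctx);
        if (v != NULL && strcmp(v, "1") == 0)
            return true;
        /* real code: sleep briefly here, or block on a xenstore watch */
    }
    return false;
}
```

The key point of the design is the ordering: userspace writes "physical-device", waits for "opened" to become "1", and only then drops its own reference to the device.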

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
 drivers/block/xen-blkback/xenbus.c | 35 ++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 9c3eb148fbd802c74e626c3d7bcd69dcb09bd921..519a78aa9073d1faa1dce5c1b36e95ae58da534b 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -3,6 +3,20 @@
     Copyright (C) 2005 Rusty Russell <rusty@rustcorp.com.au>
     Copyright (C) 2005 XenSource Ltd
 
+In addition to the Xenstore nodes required by the Xen block device
+specification, this implementation of blkback uses a new Xenstore
+node: "opened".  blkback sets "opened" to "0" before the hotplug script
+is called.  Once the device node has been opened, blkback sets "opened"
+to "1".
+
+"opened" is read exclusively by userspace.  It serves two purposes:
+
+1. It tells userspace that diskseq@major:minor syntax for "physical-device" is
+   supported.
+
+2. It tells userspace that it can wait for "opened" to be set to 1 after writing
+   "physical-device".  Once "opened" is 1, blkback has a reference to the
+   device, so userspace doesn't need to keep one.
 
 */
 
@@ -699,6 +713,14 @@ static int xen_blkbk_probe(struct xenbus_device *dev,
 	if (err)
 		pr_warn("%s write out 'max-ring-page-order' failed\n", __func__);
 
+	/*
+	 * This informs userspace that the "opened" node will be set to "1" when
+	 * the device has been opened successfully.
+	 */
+	err = xenbus_write(XBT_NIL, dev->nodename, "opened", "0");
+	if (err)
+		goto fail;
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -826,6 +848,19 @@ static void backend_changed(struct xenbus_watch *watch,
 		goto fail;
 	}
 
+	/*
+	 * Tell userspace that the device has been opened and that blkback has a
+	 * reference to it.  Userspace can then close the device or mark it as
+	 * delete-on-close, knowing that blkback will keep the device open as
+	 * long as necessary.
+	 */
+	err = xenbus_write(XBT_NIL, dev->nodename, "opened", "1");
+	if (err) {
+		xenbus_dev_fatal(dev, err, "%s: notifying userspace device has been opened",
+				 dev->nodename);
+		goto free_vbd;
+	}
+
 	err = xenvbd_sysfs_addif(dev);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "creating sysfs entries");
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue May 30 20:47:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 20:47:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541484.844290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q46Fk-00040b-FA; Tue, 30 May 2023 20:47:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541484.844290; Tue, 30 May 2023 20:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q46Fk-00040U-CR; Tue, 30 May 2023 20:47:36 +0000
Received: by outflank-mailman (input) for mailman id 541484;
 Tue, 30 May 2023 20:47:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8r7=BT=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q468B-00009z-1m
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 20:39:47 +0000
Received: from galois.linutronix.de (galois.linutronix.de [193.142.43.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1c9a7cc1-ff2a-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 22:39:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c9a7cc1-ff2a-11ed-b231-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
To: Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>, LKML
 <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Oleksandr Natalenko <oleksandr@natalenko.name>, Paul Menzel
 <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama Arif
 <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
In-Reply-To: <8751e955-e975-c6d4-630c-02912b9ef9da@amd.com>
References: <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name> <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx> <87cz2ija1e.ffs@tglx>
 <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name> <87wn0pizbl.ffs@tglx>
 <ZHYqwsCURnrFdsVm@google.com> <87leh5iom8.ffs@tglx>
 <8751e955-e975-c6d4-630c-02912b9ef9da@amd.com>
Date: Tue, 30 May 2023 22:39:44 +0200
Message-ID: <871qiximen.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, May 30 2023 at 15:03, Tom Lendacky wrote:
> On 5/30/23 14:51, Thomas Gleixner wrote:
>> That aside. From a semantical POV making this decision about parallel
>> bootup based on some magic CC encryption attribute is questionable.
>> 
>> I'm tending to just do the below and make this CC agnostic (except that
>> I couldn't find the right spot for SEV-ES to clear that flag.)
>
> Maybe in sme_sev_setup_real_mode() in arch/x86/realmode/init.c? You could 
> clear the flag within the CC_ATTR_GUEST_STATE_ENCRYPT check.

Eeew.

Can we please have an AMD SEV-ES init specific place and not hijack some
random code which has to check CC_ATTR_GUEST_STATE_ENCRYPT?

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Tue May 30 21:14:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 21:14:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541488.844299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q46fO-0007RP-H8; Tue, 30 May 2023 21:14:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541488.844299; Tue, 30 May 2023 21:14:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q46fO-0007RI-EF; Tue, 30 May 2023 21:14:06 +0000
Received: by outflank-mailman (input) for mailman id 541488;
 Tue, 30 May 2023 21:14:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDiB=BT=amd.com=Thomas.Lendacky@srs-se1.protection.inumbo.net>)
 id 1q46fN-0007RC-0N
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 21:14:05 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20620.outbound.protection.outlook.com
 [2a01:111:f400:fe5a::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e55e1da4-ff2e-11ed-8611-37d641c3527e;
 Tue, 30 May 2023 23:14:01 +0200 (CEST)
Received: from DM4PR12MB5229.namprd12.prod.outlook.com (2603:10b6:5:398::12)
 by PH7PR12MB6539.namprd12.prod.outlook.com (2603:10b6:510:1f0::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.23; Tue, 30 May
 2023 21:13:57 +0000
Received: from DM4PR12MB5229.namprd12.prod.outlook.com
 ([fe80::61f6:a95e:c41e:bb25]) by DM4PR12MB5229.namprd12.prod.outlook.com
 ([fe80::61f6:a95e:c41e:bb25%3]) with mapi id 15.20.6455.020; Tue, 30 May 2023
 21:13:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e55e1da4-ff2e-11ed-8611-37d641c3527e
Message-ID: <b6323987-059e-5396-20b9-8b6a1687e289@amd.com>
Date: Tue, 30 May 2023 16:13:52 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [patch] x86/smpboot: Disable parallel bootup if cc_vendor != NONE
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>,
 Sean Christopherson <seanjc@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>,
 LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Dave Hansen <dave.hansen@linux.intel.com>
References: <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name> <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx> <87cz2ija1e.ffs@tglx>
 <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name> <87wn0pizbl.ffs@tglx>
 <ZHYqwsCURnrFdsVm@google.com> <87leh5iom8.ffs@tglx>
 <8751e955-e975-c6d4-630c-02912b9ef9da@amd.com> <871qiximen.ffs@tglx>
From: Tom Lendacky <thomas.lendacky@amd.com>
In-Reply-To: <871qiximen.ffs@tglx>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 5/30/23 15:39, Thomas Gleixner wrote:
> On Tue, May 30 2023 at 15:03, Tom Lendacky wrote:
>> On 5/30/23 14:51, Thomas Gleixner wrote:
>>> That aside. From a semantical POV making this decision about parallel
>>> bootup based on some magic CC encryption attribute is questionable.
>>>
>>> I'm tending to just do the below and make this CC agnostic (except that
>>> I couldn't find the right spot for SEV-ES to clear that flag.)
>>
>> Maybe in sme_sev_setup_real_mode() in arch/x86/realmode/init.c? You could
>> clear the flag within the CC_ATTR_GUEST_STATE_ENCRYPT check.
> 
> Eeew.
> 
> Can we please have an AMD SEV-ES init specific place and not hijack some
> random code which has to check CC_ATTR_GUEST_STATE_ENCRYPT?

As long as it's not too early, you could try sme_early_init() in 
arch/x86/mm/mem_encrypt_amd.c. Add a check for sev_status & 
MSR_AMD64_SEV_ES_ENABLED and clear the flag.
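A minimal standalone sketch of the check Tom is describing. The flag name `smp_parallel_bringup` is invented here purely for illustration (the actual flag under discussion lives in the parallel-bringup series), and the MSR bit value is mirrored locally so the sketch is self-contained:

```c
#include <stdbool.h>
#include <stdint.h>

/* SEV-ES is reported in bit 1 of the SEV status MSR (bit 0 is plain
 * SEV); value mirrored here for the sketch. */
#define MSR_AMD64_SEV_ES_ENABLED (1ULL << 1)

static uint64_t sev_status;               /* filled from the SEV status MSR at early boot */
static bool smp_parallel_bringup = true;  /* hypothetical name for the flag in question */

/* Sketch: an SEV-specific early-init hook disables parallel AP bringup
 * for SEV-ES guests, instead of burying the decision in generic code
 * that happens to check CC_ATTR_GUEST_STATE_ENCRYPT. */
static void sme_early_init_sketch(void)
{
    if (sev_status & MSR_AMD64_SEV_ES_ENABLED)
        smp_parallel_bringup = false;
}
```

The design point, matching Thomas's request, is that the clearing happens in an AMD/SEV init path, keeping CC-vendor-specific policy out of shared code.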

Thanks,
Tom

> 
> Thanks,
> 
>          tglx


From xen-devel-bounces@lists.xenproject.org Tue May 30 21:24:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 21:24:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541493.844310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q46p8-0000aB-JD; Tue, 30 May 2023 21:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541493.844310; Tue, 30 May 2023 21:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q46p8-0000a4-G7; Tue, 30 May 2023 21:24:10 +0000
Received: by outflank-mailman (input) for mailman id 541493;
 Tue, 30 May 2023 21:24:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iU77=BT=kernel.org=helgaas@srs-se1.protection.inumbo.net>)
 id 1q46p6-0000Zu-DF
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 21:24:08 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e2c599b-ff30-11ed-b231-6b7b168915f2;
 Tue, 30 May 2023 23:24:06 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B52D263375;
 Tue, 30 May 2023 21:24:04 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CB01EC4339B;
 Tue, 30 May 2023 21:24:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e2c599b-ff30-11ed-b231-6b7b168915f2
Date: Tue, 30 May 2023 16:24:02 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	Rich Felker <dalias@libc.org>, linux-sh@vger.kernel.org,
	linux-pci@vger.kernel.org,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	linux-kernel@vger.kernel.org,
	=?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Andrew Lunn <andrew@lunn.ch>, sparclinux@vger.kernel.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Gregory Clement <gregory.clement@bootlin.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Russell King <linux@armlinux.org.uk>, linux-acpi@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>, xen-devel@lists.xenproject.org,
	Matt Turner <mattst88@gmail.com>,
	Anatolij Gustschin <agust@denx.de>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	linux-arm-kernel@lists.infradead.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	linuxppc-dev@lists.ozlabs.org, Randy Dunlap <rdunlap@infradead.org>,
	linux-mips@vger.kernel.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	linux-alpha@vger.kernel.org,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>
Subject: Re: [PATCH v8 0/7] Add pci_dev_for_each_resource() helper and update
 users
Message-ID: <ZHZpcli2UmdzHgme@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ZF6YIezraETr9iNM@bhelgaas>

On Fri, May 12, 2023 at 02:48:51PM -0500, Bjorn Helgaas wrote:
> On Fri, May 12, 2023 at 01:56:29PM +0300, Andy Shevchenko wrote:
> > On Tue, May 09, 2023 at 01:21:22PM -0500, Bjorn Helgaas wrote:
> > > On Tue, Apr 04, 2023 at 11:11:01AM -0500, Bjorn Helgaas wrote:
> > > > On Thu, Mar 30, 2023 at 07:24:27PM +0300, Andy Shevchenko wrote:
> > > > > Provide two new helper macros to iterate over PCI device resources and
> > > > > convert users.
> > > 
> > > > Applied 2-7 to pci/resource for v6.4, thanks, I really like this!
> > > 
> > > This is 09cc90063240 ("PCI: Introduce pci_dev_for_each_resource()")
> > > upstream now.
> > > 
> > > Coverity complains about each use,
> > 
> > It needs more clarification here. Use of reduced variant of the
> > macro or all of them? If the former one, then I can speculate that
> > Coverity (famous for false positives) simply doesn't understand `for
> > (type var; var ...)` code.
> 
> True, Coverity finds false positives.  It flagged every use in
> drivers/pci and drivers/pnp.  It didn't mention the arch/alpha, arm,
> mips, powerpc, sh, or sparc uses, but I think it just didn't look at
> those.
> 
> It flagged both:
> 
>   pbus_size_io    pci_dev_for_each_resource(dev, r)
>   pbus_size_mem   pci_dev_for_each_resource(dev, r, i)
> 
> Here's a spreadsheet with a few more details (unfortunately I don't
> know how to make it dump the actual line numbers or analysis like I
> pasted below, so "pci_dev_for_each_resource" doesn't appear).  These
> are mostly in the "Drivers-PCI" component.
> 
> https://docs.google.com/spreadsheets/d/1ohOJwxqXXoDUA0gwopgk-z-6ArLvhN7AZn4mIlDkHhQ/edit?usp=sharing
> 
> These particular reports are in the "High Impact Outstanding" tab.

Where are we at?  Are we going to ignore this because some Coverity
reports are false positives?

Bjorn
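[For readers following the thread: the "reduced variant" is the two-argument form, which declares the loop variable inside the `for` statement itself -- the `for (type var; var ...)` shape Andy suspects Coverity mis-models. A simplified standalone sketch of that macro shape; the types and `NUM_RESOURCES` here are stubs, not the real `struct pci_dev` or `PCI_NUM_RESOURCES`:]

```c
#include <stddef.h>

#define NUM_RESOURCES 6  /* stand-in for PCI_NUM_RESOURCES */

struct resource { unsigned long start, end; };
struct fake_pci_dev { struct resource resource[NUM_RESOURCES]; };

/* Reduced form: the iterator is declared inside the for statement, so
 * it is scoped to the loop body.  Valid C99; some static analyzers
 * model the declaration-in-for pattern poorly and flag each use. */
#define dev_for_each_resource(dev, res)                              \
    for (struct resource *res = (dev)->resource;                     \
         res < (dev)->resource + NUM_RESOURCES; res++)

static int count_resources(struct fake_pci_dev *dev)
{
    int n = 0;

    dev_for_each_resource(dev, r)
        n++;
    return n;
}
```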


From xen-devel-bounces@lists.xenproject.org Tue May 30 22:05:50 2023
Date: Tue, 30 May 2023 15:05:23 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Juergen Gross <jgross@suse.com>
cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
    Julien Grall <julien@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] xen/include/public: fix 9pfs xenstore path description
In-Reply-To: <20230530114815.18362-1-jgross@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305301505180.44000@ubuntu-linux-20-04-desktop>
References: <20230530114815.18362-1-jgross@suse.com>

On Tue, 30 May 2023, Juergen Gross wrote:
> In xen/include/public/io/9pfs.h the name of the Xenstore backend node
> "security-model" should be "security_model", as this is how the Xen
> tools create it and how qemu reads it.
> 
> Fixes: ad58142e73a9 ("xen/public: move xenstore related doc into 9pfs.h")
> Fixes: cf1d2d22fdfd ("docs/misc: Xen transport for 9pfs")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/include/public/io/9pfs.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/include/public/io/9pfs.h b/xen/include/public/io/9pfs.h
> index a0ce82d0a8..9ad2773082 100644
> --- a/xen/include/public/io/9pfs.h
> +++ b/xen/include/public/io/9pfs.h
> @@ -64,7 +64,7 @@
>   *
>   *         Host filesystem path to share.
>   *
> - *    security-model
> + *    security_model
>   *         Values:         "none"
>   *
>   *         *none*: files are stored using the same credentials as they are
> -- 
> 2.35.3
> 
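For reference, the fix lines the header's documentation up with what actually lands in xenstore. A sketch of the backend's xenstore layout (the `<backend-id>`, `<frontend-id>`, `<devid>` placeholders and the host path are illustrative, not values from the patch):

```
/local/domain/<backend-id>/backend/9pfs/<frontend-id>/<devid>/path           = "/some/host/dir"
/local/domain/<backend-id>/backend/9pfs/<frontend-id>/<devid>/security_model = "none"
```

Note the underscore in `security_model`: that is the spelling the Xen tools write and qemu looks up, which the patch above makes the header's comment match.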


From xen-devel-bounces@lists.xenproject.org Tue May 30 22:06:04 2023
Subject: [qemu-mainline test] 181013: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 30 May 2023 22:06:02 +0000
Message-ID: <osstest-181013-mainreport@xen.org>

flight 181013 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181013/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host       fail pass in 181006
 test-amd64-amd64-xl-qcow2    21 guest-start/debian.repeat  fail pass in 181006

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                aa9bbd865502ed517624ab6fe7d4b5d89ca95e43
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   13 days
Failing since        180699  2023-05-18 07:21:24 Z   12 days   49 attempts
Testing same since   181006  2023-05-30 01:10:27 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bernhard Beschow <shentey@gmail.com>
  Bin Meng <bin.meng@windriver.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Cédric Le Goater <clg@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Erico Nunes <ernunes@redhat.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Nicholas Piggin <npiggin@gmail.com>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Rene Engel <ReneEngel80@emailn.de>
  Richard Henderson <richard.henderson@linaro.org>
  Richard Purdie <richard.purdie@linuxfoundation.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sergio Lopez <slp@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vivek Kasireddy <vivek.kasireddy@intel.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 8961 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 30 22:39:19 2023
Date: Tue, 30 May 2023 15:38:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>,
    Roger Pau Monné <roger.pau@citrix.com>,
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
In-Reply-To: <22f1e765-891d-ef2d-01b5-e9dfe6ca895b@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305301529090.44000@ubuntu-linux-20-04-desktop>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com> <ZG4dmJuzNVUE5UIY@Air-de-Roger> <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com> <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop> <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com> <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop> <22f1e765-891d-ef2d-01b5-e9dfe6ca895b@suse.com>

On Fri, 26 May 2023, Jan Beulich wrote:
> On 25.05.2023 21:24, Stefano Stabellini wrote:
> > On Thu, 25 May 2023, Jan Beulich wrote:
> >> On 25.05.2023 01:37, Stefano Stabellini wrote:
> >>> On Wed, 24 May 2023, Jan Beulich wrote:
> >>>>>> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
> >>>>>>      modify_bars() to consistently respect BARs of hidden devices while
> >>>>>>      setting up "normal" ones (i.e. to avoid as much as possible the
> >>>>>>      "continue" path introduced here), setting up of the former may want
> >>>>>>      doing first.
> >>>>>
> >>>>> But BARs of hidden devices should be mapped into dom0 physmap?
> >>>>
> >>>> Yes.
> >>>
> >>> The BARs would be mapped read-only (not read-write), right? Otherwise we
> >>> let dom0 access devices that belong to Xen, which doesn't seem like a
> >>> good idea.
> >>>
> >>> But even if we map the BARs read-only, what is the benefit of mapping
> >>> them to Dom0? If Dom0 loads a driver for it and the driver wants to
> >>> initialize the device, the driver will crash because the MMIO region is
> >>> read-only instead of read-write, right?
> >>>
> >>> How does this device hiding work for dom0? How does dom0 know not to
> >>> access a device that is present on the PCI bus but is used by Xen?
> >>
> >> None of these are new questions - this has all been this way for PV Dom0,
> >> and so far we've limped along quite okay. That's not to say that we
> >> shouldn't improve things if we can, but that first requires ideas as to
> >> how.
> > 
> > For PV, that was OK because PV requires extensive guest modifications
> > anyway. We only run Linux and few BSDs as Dom0. So, making the interface
> > cleaner and reducing guest changes is nice-to-have but not critical.
> > 
> > For PVH, this is different. One of the top reasons for AMD to work on
> > PVH is to enable arbitrary non-Linux OSes as Dom0 (when paired with
> > dom0less/hyperlaunch). It could be anything from Zephyr to a
> > proprietary RTOS like VxWorks. Minimal guest changes for advanced
> > features (e.g. Dom0 S3) might be OK but in general I think we should aim
> > at (almost) zero guest changes. On ARM, it is already the case (with some
> > non-upstream patches for dom0less PCI.)
> > 
> > For this specific patch, which is necessary to enable PVH on AMD x86 in
> > gitlab-ci, we can do anything we want to make it move faster. But
> > medium/long term I think we should try to make non-Xen-aware PVH Dom0
> > possible.
> 
> I don't think Linux could boot as PVH Dom0 without any awareness. Hence
> I guess it's not easy to see how other OSes might. What you're after
> looks rather like a HVM Dom0 to me, with it being unclear where the
> external emulator then would run (in a stubdom maybe, which might be
> possible to arrange for via the dom0less way of creating boot time
> DomU-s) and how it would get any necessary xenstore based information.

I know that Linux has lots of Xen awareness scattered everywhere so it
is difficult to tell what's what. Leaving the PVH entry point aside for
this discussion, what else is really needed for a Linux without
CONFIG_XEN to boot as PVH Dom0?

Same question from a different angle: let's say that we boot Zephyr or
another RTOS as HVM Dom0, what is really required for the emulator to
emulate? I am hoping that the answer is "nothing" except for maybe a
UART.

It comes down to how much legacy stuff the guest OS expects to find.
Legacy stuff that would normally be emulated by QEMU. I am counting on
the fact that a modern OS doesn't expect any of the legacy stuff (e.g.
PIIX3/Q35/E1000) if it is not advertised in the firmware tables. If
there is no need for QEMU, I don't know if I would call it PVH or HVM
but either way we are good. 

Same for xenstore: there should be no need for xenstore without
CONFIG_XEN.


From xen-devel-bounces@lists.xenproject.org Tue May 30 22:42:47 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 22:42:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541512.844349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q483C-00021I-4w; Tue, 30 May 2023 22:42:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541512.844349; Tue, 30 May 2023 22:42:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q483C-00021B-2M; Tue, 30 May 2023 22:42:46 +0000
Received: by outflank-mailman (input) for mailman id 541512;
 Tue, 30 May 2023 22:42:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FU5e=BT=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q483A-000215-JS
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 22:42:44 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 49503ac8-ff3b-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 00:42:42 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 679B262A61;
 Tue, 30 May 2023 22:42:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 38F78C433D2;
 Tue, 30 May 2023 22:42:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49503ac8-ff3b-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685486560;
	bh=N4oTt86dadx0ez5kyAZNyZZ2y/GRALdDROiBl6Ky8mo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=pqhcoHwv/ofw+vAEPajoEV6o0quBc5+Vu2gt1HteBD9WcjMKYow6Xykh9bgPqInRX
	 9y+CONEx7ZyhxjRogSSGCG29BFbIlslWKgoSxrd+jJgK3ymMjzZzi7KJpV4WCzINOp
	 4qmLa3JQZyyYMSjsL5lc0fZM8frD1Ih+cax9z2+FgjwLF4QShHJZj1niSRsPEdbZY7
	 Z9K/k+zdh64QTHazg59Z78VvyNnZl2NpawtTyfAvNjU4//IdK9GwpMulmN8m/1cVP5
	 Z6184ye49g94b3c+C9XNfavTgqQDYMGchsX6MzJ2KqJ3mb0HHC9Zq8YMUC6G9oWI0d
	 ahygfzfVzodlg==
Date: Tue, 30 May 2023 15:42:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
In-Reply-To: <03127618-13a2-872e-82e9-b23ce8095f70@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305301539180.44000@ubuntu-linux-20-04-desktop>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com> <ZG4dmJuzNVUE5UIY@Air-de-Roger> <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com> <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop> <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com>
 <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop> <alpine.DEB.2.22.394.2305251226000.44000@ubuntu-linux-20-04-desktop> <03127618-13a2-872e-82e9-b23ce8095f70@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 26 May 2023, Jan Beulich wrote:
> On 25.05.2023 21:32, Stefano Stabellini wrote:
> > Like I wrote, personally I am happy with whatever gets us to have the PVH
> > test in gitlab-ci faster.
> > 
> > However, on the specific problem of PCI devices used by Xen and how to
> > deal with them for Dom0 PVH, I think they should be completely hidden.
> > Hidden in the sense that they don't appear on the Dom0 PCI bus. If the
> > hidden device is a function of a multi-function PCI device, then the
> > entire multi-function PCI device should be hidden.
> > 
> > I don't think this case is very important because devices used by Xen
> > are timers, IOMMUs, UARTs,
> 
> ... USB debug ports (EHCI, XHCI), ...
> 
> > all devices that typically are not multi-function,
> 
> except for the ones added. Furthermore see video_endboot() for a case
> of also hiding the VGA device, which isn't unlikely to have secondary
> functions (sound controllers are not uncommon). Hence ...
> 
> > so it is OK to be extra careful and remove the entire
> > device from Dom0 in the odd case that the device is both multi-function
> > and only partially used by Xen. This is what I would do for Xen on ARM
> > too.
> 
> ... at best I would see this as a non-default mode of operation. Of
> course we could also play more funny games with vPCI, like surfacing
> a "stub" device in place of a hidden one, so the other functions can
> still be found.

>From our use-case point of view, non-default is OK. The PCI stub idea is
a cool trick that might work but hopefully we won't need it at least
initially. But it is something to consider if it turns out there is an
important multi-function device with one of the devices hidden.
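
If it does come to that, the stub could in principle amount to trapping
config-space reads for the hidden function: surface just enough of the
header that enumeration steps over it, and mask the rest. A toy model of
the idea (our sketch, not actual Xen vPCI code; all names are ours):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_VENDOR_ID 0x00
#define PCI_COMMAND   0x04

/* How a given PCI function is surfaced to Dom0 in this model. */
enum vis { VISIBLE, HIDDEN, STUB };

/*
 * Simplified config-space read for one function.  A fully hidden
 * function reads as all-ones (i.e. it looks absent); a stub keeps the
 * ID registers readable so the other functions of a multi-function
 * device can still be found, but masks everything else.
 */
static uint32_t cfg_read(enum vis v, unsigned int reg, uint32_t hw_val)
{
    switch ( v )
    {
    case VISIBLE:
        return hw_val;                 /* pass through to hardware */
    case HIDDEN:
        return 0xffffffffu;            /* function appears absent */
    case STUB:
        return reg == PCI_VENDOR_ID ? hw_val : 0;
    }
    return 0xffffffffu;
}
```

A real implementation would also have to get the multi-function bit in
the header type register right; the sketch only illustrates the
visibility trade-off between HIDDEN and STUB.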


From xen-devel-bounces@lists.xenproject.org Tue May 30 22:58:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 22:58:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541516.844359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q48IH-0003aI-EC; Tue, 30 May 2023 22:58:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541516.844359; Tue, 30 May 2023 22:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q48IH-0003aB-BI; Tue, 30 May 2023 22:58:21 +0000
Received: by outflank-mailman (input) for mailman id 541516;
 Tue, 30 May 2023 22:58:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q48IG-0003a1-4F; Tue, 30 May 2023 22:58:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q48IF-0005BK-Ra; Tue, 30 May 2023 22:58:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q48IF-0000rl-Cp; Tue, 30 May 2023 22:58:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q48IF-0004wD-CN; Tue, 30 May 2023 22:58:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XATu1It7YOFkt033gy/vMaFSps5aqk4roS1WGiEHj+s=; b=yKyzKmRD+VfRzeGGCzDBMjp7jf
	ohWvKMeF/JSUndMBGT/iAcaaSUy2QPBwh5q6O8JzZpI5iEvfdzLJensZJwZgMZEG74samMkhfiT7p
	W+AaXDzcK3YQZUVQSZmgvkpQ6wRbK76j+Ra/vbK0z35YiyVfcfEjHNtcrj/Rt66vGb2o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181018-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 181018: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=94200e1bae07e725cc07238c11569c5cab7befb7
X-Osstest-Versions-That:
    xen=05422d276b56f2ebc2309a84a66fc5722c45ad74
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 30 May 2023 22:58:19 +0000

flight 181018 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181018/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  94200e1bae07e725cc07238c11569c5cab7befb7
baseline version:
 xen                  05422d276b56f2ebc2309a84a66fc5722c45ad74

Last test of basis   181016  2023-05-30 17:03:26 Z    0 days
Testing same since   181018  2023-05-30 20:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   05422d276b..94200e1bae  94200e1bae07e725cc07238c11569c5cab7befb7 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 30 23:05:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 23:05:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541522.844369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q48PQ-00056C-5x; Tue, 30 May 2023 23:05:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541522.844369; Tue, 30 May 2023 23:05:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q48PQ-000565-3A; Tue, 30 May 2023 23:05:44 +0000
Received: by outflank-mailman (input) for mailman id 541522;
 Tue, 30 May 2023 23:05:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FU5e=BT=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q48PP-00055z-AQ
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 23:05:43 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7d761fa4-ff3e-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 01:05:38 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id E379661336;
 Tue, 30 May 2023 23:05:37 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B3140C433D2;
 Tue, 30 May 2023 23:05:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d761fa4-ff3e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685487937;
	bh=W4qs1C3fqrpYJfUFup6WGc9s7xQH+F9LR2w3DLMz8gI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=CV5Svw/G1dVZFoLzHmRi/CkqKaK8eYofGD2cg0fTDhwFtPFCVH0YFMymrLQ1p9pc8
	 yAR6Yh1gPpwJ11Usgg+oF+zQuV9+dTFwcXAbepLWmX2ioKbU1WkSxI0QgyKA3kxMR5
	 G5oti36qyguGLHEq4TF3q1QEbnt832T3yh5pKxLwT+cNN+gvQD9UdSt6m+lwZs5EZp
	 T26ev6ymsTVfUEy/DibhFV6Vi1tZm+FiDo/SFu0wueagoj/8QEcXidWS5zQj6CZ6r8
	 X1GmOVCNjYJKiMMfF93TUMWP1j8eOBL1DAWEDqo5NAeFrKn0WZUaJ4OgSf5tbRbr6O
	 WERjhjiYv6g2w==
Date: Tue, 30 May 2023 16:05:35 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v3] vPCI: account for hidden devices
In-Reply-To: <e1c6e297-0046-73f6-981d-af776b271f24@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305301604530.44000@ubuntu-linux-20-04-desktop>
References: <e1c6e297-0046-73f6-981d-af776b271f24@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 30 May 2023, Jan Beulich wrote:
> Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
> console) are associated with DomXEN, not Dom0. This means that while
> looking for overlapping BARs such devices cannot be found on Dom0's list
> of devices; DomXEN's list also needs to be scanned.
> 
> Suppress vPCI init altogether for r/o devices (which constitute a subset
> of hidden ones).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Tested-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> v3: Also consider pdev being DomXEN's in modify_bars(). Also consult
>     DomXEN in vpci_{read,write}(). Move vpci_write()'s check of the r/o
>     map out of mainline code. Re-base over the standalone addition of
>     the loop continuation in modify_bars(), and finally make the code
>     change there well-formed.
> v2: Extend existing comment. Relax assertion. Don't initialize vPCI for
>     r/o devices.
> 
> --- a/xen/drivers/vpci/header.c
> +++ b/xen/drivers/vpci/header.c
> @@ -218,6 +218,7 @@ static int modify_bars(const struct pci_
>      struct vpci_header *header = &pdev->vpci->header;
>      struct rangeset *mem = rangeset_new(NULL, NULL, 0);
>      struct pci_dev *tmp, *dev = NULL;
> +    const struct domain *d;
>      const struct vpci_msix *msix = pdev->vpci->msix;
>      unsigned int i;
>      int rc;
> @@ -285,58 +286,69 @@ static int modify_bars(const struct pci_
>  
>      /*
>       * Check for overlaps with other BARs. Note that only BARs that are
> -     * currently mapped (enabled) are checked for overlaps.
> +     * currently mapped (enabled) are checked for overlaps. Note also that
> +     * for hwdom we also need to include hidden, i.e. DomXEN's, devices.
>       */
> -    for_each_pdev ( pdev->domain, tmp )
> +    for ( d = pdev->domain != dom_xen ? pdev->domain : hardware_domain; ; )
>      {
> -        if ( !tmp->vpci )
> -            /*
> -             * For the hardware domain it's possible to have devices assigned
> -             * to it that are not handled by vPCI, either because those are
> -             * read-only devices, or because vPCI setup has failed.
> -             */
> -            continue;
> -
> -        if ( tmp == pdev )
> +        for_each_pdev ( d, tmp )
>          {
> -            /*
> -             * Need to store the device so it's not constified and defer_map
> -             * can modify it in case of error.
> -             */
> -            dev = tmp;
> -            if ( !rom_only )
> +            if ( !tmp->vpci )
>                  /*
> -                 * If memory decoding is toggled avoid checking against the
> -                 * same device, or else all regions will be removed from the
> -                 * memory map in the unmap case.
> +                 * For the hardware domain it's possible to have devices
> +                 * assigned to it that are not handled by vPCI, either because
> +                 * those are read-only devices, or because vPCI setup has
> +                 * failed.
>                   */
>                  continue;
> -        }
>  
> -        for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
> -        {
> -            const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
> -            unsigned long start = PFN_DOWN(bar->addr);
> -            unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
> -
> -            if ( !bar->enabled || !rangeset_overlaps_range(mem, start, end) ||
> -                 /*
> -                  * If only the ROM enable bit is toggled check against other
> -                  * BARs in the same device for overlaps, but not against the
> -                  * same ROM BAR.
> -                  */
> -                 (rom_only && tmp == pdev && bar->type == VPCI_BAR_ROM) )
> -                continue;
> +            if ( tmp == pdev )
> +            {
> +                /*
> +                 * Need to store the device so it's not constified and defer_map
> +                 * can modify it in case of error.
> +                 */
> +                dev = tmp;
> +                if ( !rom_only )
> +                    /*
> +                     * If memory decoding is toggled avoid checking against the
> +                     * same device, or else all regions will be removed from the
> +                     * memory map in the unmap case.
> +                     */
> +                    continue;
> +            }
>  
> -            rc = rangeset_remove_range(mem, start, end);
> -            if ( rc )
> +            for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
>              {
> -                printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
> -                       start, end, rc);
> -                rangeset_destroy(mem);
> -                return rc;
> +                const struct vpci_bar *bar = &tmp->vpci->header.bars[i];
> +                unsigned long start = PFN_DOWN(bar->addr);
> +                unsigned long end = PFN_DOWN(bar->addr + bar->size - 1);
> +
> +                if ( !bar->enabled ||
> +                     !rangeset_overlaps_range(mem, start, end) ||
> +                     /*
> +                      * If only the ROM enable bit is toggled check against
> +                      * other BARs in the same device for overlaps, but not
> +                      * against the same ROM BAR.
> +                      */
> +                     (rom_only && tmp == pdev && bar->type == VPCI_BAR_ROM) )
> +                    continue;
> +
> +                rc = rangeset_remove_range(mem, start, end);
> +                if ( rc )
> +                {
> +                    printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
> +                           start, end, rc);
> +                    rangeset_destroy(mem);
> +                    return rc;
> +                }
>              }
>          }
> +
> +        if ( !is_hardware_domain(d) )
> +            break;
> +
> +        d = dom_xen;
>      }
>  
>      ASSERT(dev);
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -70,6 +70,7 @@ void vpci_remove_device(struct pci_dev *
>  int vpci_add_handlers(struct pci_dev *pdev)
>  {
>      unsigned int i;
> +    const unsigned long *ro_map;
>      int rc = 0;
>  
>      if ( !has_vpci(pdev->domain) )
> @@ -78,6 +79,11 @@ int vpci_add_handlers(struct pci_dev *pd
>      /* We should not get here twice for the same device. */
>      ASSERT(!pdev->vpci);
>  
> +    /* No vPCI for r/o devices. */
> +    ro_map = pci_get_ro_map(pdev->sbdf.seg);
> +    if ( ro_map && test_bit(pdev->sbdf.bdf, ro_map) )
> +        return 0;
> +
>      pdev->vpci = xzalloc(struct vpci);
>      if ( !pdev->vpci )
>          return -ENOMEM;
> @@ -332,8 +338,13 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsi
>          return data;
>      }
>  
> -    /* Find the PCI dev matching the address. */
> +    /*
> +     * Find the PCI dev matching the address, which for hwdom also requires
> +     * consulting DomXEN.  Passthrough everything that's not trapped.
> +     */
>      pdev = pci_get_pdev(d, sbdf);
> +    if ( !pdev && is_hardware_domain(d) )
> +        pdev = pci_get_pdev(dom_xen, sbdf);
>      if ( !pdev || !pdev->vpci )
>          return vpci_read_hw(sbdf, reg, size);
>  
> @@ -427,7 +438,6 @@ void vpci_write(pci_sbdf_t sbdf, unsigne
>      const struct pci_dev *pdev;
>      const struct vpci_register *r;
>      unsigned int data_offset = 0;
> -    const unsigned long *ro_map = pci_get_ro_map(sbdf.seg);
>  
>      if ( !size )
>      {
> @@ -435,18 +445,20 @@ void vpci_write(pci_sbdf_t sbdf, unsigne
>          return;
>      }
>  
> -    if ( ro_map && test_bit(sbdf.bdf, ro_map) )
> -        /* Ignore writes to read-only devices. */
> -        return;
> -
>      /*
> -     * Find the PCI dev matching the address.
> -     * Passthrough everything that's not trapped.
> +     * Find the PCI dev matching the address, which for hwdom also requires
> +     * consulting DomXEN.  Passthrough everything that's not trapped.
>       */
>      pdev = pci_get_pdev(d, sbdf);
> +    if ( !pdev && is_hardware_domain(d) )
> +        pdev = pci_get_pdev(dom_xen, sbdf);
>      if ( !pdev || !pdev->vpci )
>      {
> -        vpci_write_hw(sbdf, reg, size, data);
> +        /* Ignore writes to read-only devices, which have no ->vpci. */
> +        const unsigned long *ro_map = pci_get_ro_map(sbdf.seg);
> +
> +        if ( !ro_map || !test_bit(sbdf.bdf, ro_map) )
> +            vpci_write_hw(sbdf, reg, size, data);
>          return;
>      }
>  
> 
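
To make the effect of the change concrete, here is a minimal model of
the two-pass scan (our simplification, using plain interval checks in
place of Xen's rangesets and device lists). Scanning Dom0's own devices
misses the BARs of hidden devices, which sit on DomXEN's list; for
hwdom the second pass catches them:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct bar { unsigned long start, end; };   /* inclusive frame range */

/* Does [s, e] overlap any BAR in the list? */
static bool overlaps(const struct bar *list, size_t n,
                     unsigned long s, unsigned long e)
{
    for ( size_t i = 0; i < n; i++ )
        if ( s <= list[i].end && list[i].start <= e )
            return true;
    return false;
}

/*
 * Mirrors the patch's loop: check Dom0's own devices first, then,
 * since hidden devices are associated with DomXEN, check those too.
 */
static bool hwdom_overlap(const struct bar *dom0, size_t n0,
                          const struct bar *domxen, size_t nx,
                          unsigned long s, unsigned long e)
{
    return overlaps(dom0, n0, s, e) || overlaps(domxen, nx, s, e);
}
```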


From xen-devel-bounces@lists.xenproject.org Tue May 30 23:16:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 30 May 2023 23:16:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541526.844379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q48Zd-0006ay-3c; Tue, 30 May 2023 23:16:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541526.844379; Tue, 30 May 2023 23:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q48Zd-0006ar-16; Tue, 30 May 2023 23:16:17 +0000
Received: by outflank-mailman (input) for mailman id 541526;
 Tue, 30 May 2023 23:16:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aK1z=BT=chromium.org=keescook@srs-se1.protection.inumbo.net>)
 id 1q48Zb-0006al-7V
 for xen-devel@lists.xenproject.org; Tue, 30 May 2023 23:16:15 +0000
Received: from mail-pf1-x42d.google.com (mail-pf1-x42d.google.com
 [2607:f8b0:4864:20::42d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f6d73150-ff3f-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 01:16:11 +0200 (CEST)
Received: by mail-pf1-x42d.google.com with SMTP id
 d2e1a72fcca58-64d30ab1ef2so3838540b3a.2
 for <xen-devel@lists.xenproject.org>; Tue, 30 May 2023 16:16:12 -0700 (PDT)
Received: from www.outflux.net (198-0-35-241-static.hfc.comcastbusiness.net.
 [198.0.35.241]) by smtp.gmail.com with ESMTPSA id
 v22-20020aa78516000000b0063d3d776910sm2123773pfn.138.2023.05.30.16.16.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 30 May 2023 16:16:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6d73150-ff3f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google; t=1685488571; x=1688080571;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=q6wOdCsvPcfL1AGuKhq7si1i+l1tB9VGHRhgBuI4hFQ=;
        b=A6P9MRUqWxltKJ9Zgwok7DSDRoOxoww2FWlF6WQO1tYgoM5+Hi0Ez2oEaQQjsD1m8R
         d4fRiNfemEoHsLbKtPfU643qi1eTaIaRbZTrXG8PIKxDT7zOE/fxQqSHWBsVd7zxgsD9
         9LmT/yPeVeEdIlXyQ0jganlgblNgnS7PESvrk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685488571; x=1688080571;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=q6wOdCsvPcfL1AGuKhq7si1i+l1tB9VGHRhgBuI4hFQ=;
        b=VapaaLoMBbAC9ofdaYuqscy2CC1i1hkgCU+GxcH6e9ARnQMnWuU3hD9ji6iTEyTHUe
         UGHqjOnsgyTXuU7mY/ansOGcIXZOE/xy+df9+uSVtW7Ep1ltSRm7JtvsxSVu2GJ5lX/4
         OS64EYxtQkaPwiFGiUA9kwlVHjwImIkXZ5VcDbhdAfMINoJwXqwz+8up/o248RSysgJ6
         7+R5uOzzlipuGRUdZEPWU99yAAL6vCZDU43pWciFn2THx9wZXFTZz+lEbxmiTjjndotI
         xBEl4nHMIENSAgkHoy8a2cwjNdzJ7ahbZnqqZhOLu4Pk8HxUXlLKkuHeKvQ/0z/kDy53
         6qFw==
X-Gm-Message-State: AC+VfDydEI/EGaNWqSepnH3xEJILCS69bxhkcCqwx4t4kvDsbip1kGof
	e6kSwflQZP/UHtlXFI9S/7DM4g==
X-Google-Smtp-Source: ACHHUZ5vUXK2I/pfuTav0TBfWy3DDrvFJsd+7uESTSUanu+Vl50RByVZpLds9PiU8F5GGbjiErQYOQ==
X-Received: by 2002:a05:6a00:10c4:b0:646:663a:9d60 with SMTP id d4-20020a056a0010c400b00646663a9d60mr4308038pfu.10.1685488570875;
        Tue, 30 May 2023 16:16:10 -0700 (PDT)
Date: Tue, 30 May 2023 16:16:09 -0700
From: Kees Cook <keescook@chromium.org>
To: =?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>
Cc: Wei Liu <wei.liu@kernel.org>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Alexander Graf <graf@amazon.com>,
	Forrest Yuan Yu <yuanyu@google.com>,
	James Morris <jamorris@linux.microsoft.com>,
	John Andersen <john.s.andersen@intel.com>,
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>,
	Marian Rotariu <marian.c.rotariu@gmail.com>,
	Mihai =?utf-8?B?RG9uyJt1?= <mdontu@bitdefender.com>,
	=?utf-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Thara Gopinath <tgopinath@microsoft.com>,
	Will Deacon <will@kernel.org>,
	Zahra Tarkhani <ztarkhani@microsoft.com>,
	=?utf-8?Q?=C8=98tefan_=C8=98icleru?= <ssicleru@bitdefender.com>,
	dev@lists.cloudhypervisor.org, kvm@vger.kernel.org,
	linux-hardening@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org,
	qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
	x86@kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v1 5/9] KVM: x86: Add new hypercall to lock control
 registers
Message-ID: <202305301614.BF8D80D3D5@keescook>
References: <20230505152046.6575-1-mic@digikod.net>
 <20230505152046.6575-6-mic@digikod.net>
 <ZFlllHjntehpthma@liuwe-devbox-debian-v2>
 <901ff104-215c-8e81-fbae-5ecd8fa94449@digikod.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <901ff104-215c-8e81-fbae-5ecd8fa94449@digikod.net>

On Mon, May 29, 2023 at 06:48:03PM +0200, Mickaël Salaün wrote:
> 
> On 08/05/2023 23:11, Wei Liu wrote:
> > On Fri, May 05, 2023 at 05:20:42PM +0200, Mickaël Salaün wrote:
> > > This enables guests to lock their CR0 and CR4 registers with a subset of
> > > X86_CR0_WP, X86_CR4_SMEP, X86_CR4_SMAP, X86_CR4_UMIP, X86_CR4_FSGSBASE
> > > and X86_CR4_CET flags.
> > > 
> > > The new KVM_HC_LOCK_CR_UPDATE hypercall takes two arguments.  The first
> > > is to identify the control register, and the second is a bit mask to
> > > pin (i.e. mark as read-only).
> > > 
> > > These register flags should already be pinned by Linux guests, but once
> > > compromised, this self-protection mechanism could be disabled, which is
> > > not the case with this dedicated hypercall.
> > > 
> > > Cc: Borislav Petkov <bp@alien8.de>
> > > Cc: Dave Hansen <dave.hansen@linux.intel.com>
> > > Cc: H. Peter Anvin <hpa@zytor.com>
> > > Cc: Ingo Molnar <mingo@redhat.com>
> > > Cc: Kees Cook <keescook@chromium.org>
> > > Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> > > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > > Cc: Sean Christopherson <seanjc@google.com>
> > > Cc: Thomas Gleixner <tglx@linutronix.de>
> > > Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> > > Cc: Wanpeng Li <wanpengli@tencent.com>
> > > Signed-off-by: Mickaël Salaün <mic@digikod.net>
> > > Link: https://lore.kernel.org/r/20230505152046.6575-6-mic@digikod.net
> > [...]
> > >   	hw_cr4 = (cr4_read_shadow() & X86_CR4_MCE) | (cr4 & ~X86_CR4_MCE);
> > >   	if (is_unrestricted_guest(vcpu))
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index ffab64d08de3..a529455359ac 100644
> > > --- a/arch/x86/kvm/x86.c
> > > +++ b/arch/x86/kvm/x86.c
> > > @@ -7927,11 +7927,77 @@ static unsigned long emulator_get_cr(struct x86_emulate_ctxt *ctxt, int cr)
> > >   	return value;
> > >   }
> > > +#ifdef CONFIG_HEKI
> > > +
> > > +extern unsigned long cr4_pinned_mask;
> > > +
> > 
> > Can this be moved to a header file?
> 
> Yep, but I'm not sure which one. Any preference Kees?

Uh, er, I was never expecting that mask to be non-static. ;) To that
end, how about putting it in arch/x86/kvm/x86.h ?
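Concretely, the suggestion amounts to replacing the local extern with a shared declaration, something like (a sketch of the proposed placement, not the final patch):

```c
/* arch/x86/kvm/x86.h (sketch) */
extern unsigned long cr4_pinned_mask;
```

with arch/x86/kvm/x86.c then dropping its #ifdef'd extern in favor of the header.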

-- 
Kees Cook


From xen-devel-bounces@lists.xenproject.org Wed May 31 00:33:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 00:33:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541531.844389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q49lb-0006wO-Mu; Wed, 31 May 2023 00:32:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541531.844389; Wed, 31 May 2023 00:32:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q49lb-0006wH-KM; Wed, 31 May 2023 00:32:43 +0000
Received: by outflank-mailman (input) for mailman id 541531;
 Wed, 31 May 2023 00:32:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RcVj=BU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q49la-0006wB-An
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 00:32:42 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5161961-ff4a-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 02:32:38 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id B7E2E63519;
 Wed, 31 May 2023 00:32:37 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2EA04C433D2;
 Wed, 31 May 2023 00:32:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5161961-ff4a-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685493157;
	bh=qGXhzhHGIOxUqjuFhB8ssq9HbdjoYQKfzSCesG0rW3I=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=hpWZhbV6Hq0Z51S8SgS+pvig8nCrrpnVuBqRKTZkb2h7cUeIcgMXCJ5volTyAuznr
	 2lGXDyyEMNnKdke0Lh3lnCqgTGmslcAkcz0CK2gPoLARjB/CkMT5W07pUQxfo6jEsU
	 GBSmTrr7Bdr58cXebHx7uIj0IqgWXyvshblVaZkDbkVtI/aZBx58dK2xRjrmrncjmt
	 hFCQG6a8+bp0AaDRiUgmnvPfAHx7qJjEDKnJzz8jAK0yA679TXHQnYcKCjAWUgrCsr
	 IaWsRrYFU0lswdor3tM1wNud8JyL6PEjzJSExoSeJRG6rjDFA1rIYboJ7OwhmEWlqO
	 Qi3D+wMsq/MGw==
Date: Tue, 30 May 2023 17:32:34 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Stefano Stabellini <stefano.stabellini@amd.com>, 
    xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com, 
    xenia.ragiadakou@amd.com, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [RFC] Xen crashes on ASSERT on suspend/resume, suggested fix
In-Reply-To: <c143dc69-20bd-da87-3d01-d405c358fc67@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305301729230.44000@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2305181638390.128889@ubuntu-linux-20-04-desktop> <ZGzFnE2w/YqYT35c@Air-de-Roger> <ZGzSnu8m/IqjmyHx@Air-de-Roger> <alpine.DEB.2.22.394.2305241646520.44000@ubuntu-linux-20-04-desktop> <6790d5ae-9742-f5f3-bd8c-62602ee9cb1d@suse.com>
 <alpine.DEB.2.22.394.2305251248000.44000@ubuntu-linux-20-04-desktop> <c143dc69-20bd-da87-3d01-d405c358fc67@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 26 May 2023, Jan Beulich wrote:
> On 25.05.2023 21:54, Stefano Stabellini wrote:
> > On Thu, 25 May 2023, Jan Beulich wrote:
> >> On 25.05.2023 01:51, Stefano Stabellini wrote:
> >>> xen/irq: fix races between send_cleanup_vector and _clear_irq_vector
> >>
> >> This title is, I'm afraid, already misleading. No such race can occur
> >> afaict, as both callers of _clear_irq_vector() acquire the IRQ
> >> descriptor lock first, and irq_complete_move() (the sole caller of
> >> send_cleanup_vector()) is only ever invoked as or by an ->ack()
> >> hook, which in turn is only invoked with, again, the descriptor lock
> >> held.
> > 
> > Yes I see that you are right about the locking, and thank you for taking
> > the time to look into it.
> > 
> > One last question: could it be that a second interrupt arrives while
> > ->ack() is being handled?  do_IRQ() is running with interrupts disabled?
> 
> It is, at least as far as the invocation of ->ack() is concerned. Else
> the locking scheme would be broken. You may note that around ->handler()
> invocation we enable interrupts.

OK. FYI, we were able to reproduce a problem after 250+ suspend/resume
Dom0 cycles with my patch applied. Unfortunately there is no extra
information, as my patch removes the ASSERTs.

However, I can tell you that the symptom is the hung-task report below.
I am not sure if it tells you anything, but FYI. So clearly my patch
makes the problem harder to reproduce but doesn't fix it.


May 23 22:47:31 amd-saravana-crater kernel: [17881.744986] INFO: task kworker/u8:1:45 blocked for more than 120 seconds.
May 23 22:47:31 amd-saravana-crater kernel: [17881.745048]       Not tainted 6.1.0-rtc-s3 #1
May 23 22:47:31 amd-saravana-crater kernel: [17881.745089] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 23 22:47:31 amd-saravana-crater kernel: [17881.745144] task:kworker/u8:1    state:D stack:0     pid:45    ppid:2      flags:0x00004000
May 23 22:47:31 amd-saravana-crater kernel: [17881.745154] Workqueue: writeback wb_workfn (flush-259:0)
May 23 22:47:31 amd-saravana-crater kernel: [17881.745170] Call Trace:
May 23 22:47:31 amd-saravana-crater kernel: [17881.745174]  <TASK>
May 23 22:47:31 amd-saravana-crater kernel: [17881.745182]  __schedule+0x2d5/0x920
May 23 22:47:31 amd-saravana-crater kernel: [17881.745192]  ? preempt_count_add+0x7c/0xc0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745202]  schedule+0x63/0xd0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745208]  __bio_queue_enter+0xeb/0x230
May 23 22:47:31 amd-saravana-crater kernel: [17881.745217]  ? prepare_to_wait_event+0x130/0x130
May 23 22:47:31 amd-saravana-crater kernel: [17881.745226]  blk_mq_submit_bio+0x358/0x570
May 23 22:47:31 amd-saravana-crater kernel: [17881.745237]  __submit_bio+0xfa/0x170
May 23 22:47:31 amd-saravana-crater kernel: [17881.745243]  submit_bio_noacct_nocheck+0x229/0x2b0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745250]  ? ktime_get+0x47/0xb0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745256]  submit_bio_noacct+0x1e4/0x5a0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745261]  ? submit_bio_noacct+0x1e4/0x5a0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745268]  submit_bio+0x47/0x80
May 23 22:47:31 amd-saravana-crater kernel: [17881.745273]  ext4_io_submit+0x24/0x40
May 23 22:47:31 amd-saravana-crater kernel: [17881.745282]  ext4_writepages+0x57f/0xdd0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745288]  ? _raw_read_lock_bh+0x20/0x40
May 23 22:47:31 amd-saravana-crater kernel: [17881.745296]  ? update_sd_lb_stats.constprop.148+0x11e/0x960
May 23 22:47:31 amd-saravana-crater kernel: [17881.745308]  do_writepages+0xbf/0x1a0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745314]  ? __enqueue_entity+0x6c/0x80
May 23 22:47:31 amd-saravana-crater kernel: [17881.745321]  ? enqueue_entity+0x1a9/0x370
May 23 22:47:31 amd-saravana-crater kernel: [17881.745327]  __writeback_single_inode+0x44/0x360
May 23 22:47:31 amd-saravana-crater kernel: [17881.745332]  ? _raw_spin_unlock+0x19/0x40
May 23 22:47:31 amd-saravana-crater kernel: [17881.745339]  writeback_sb_inodes+0x203/0x4e0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745350]  __writeback_inodes_wb+0x66/0xd0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745358]  wb_writeback+0x23d/0x2d0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745366]  wb_workfn+0x20b/0x490
May 23 22:47:31 amd-saravana-crater kernel: [17881.745372]  ? _raw_spin_unlock+0x19/0x40
May 23 22:47:31 amd-saravana-crater kernel: [17881.745381]  process_one_work+0x227/0x440
May 23 22:47:31 amd-saravana-crater kernel: [17881.745389]  worker_thread+0x31/0x3e0
May 23 22:47:31 amd-saravana-crater kernel: [17881.745395]  ? process_one_work+0x440/0x440
May 23 22:47:31 amd-saravana-crater kernel: [17881.745400]  kthread+0xfe/0x130
May 23 22:47:31 amd-saravana-crater kernel: [17881.745406]  ? kthread_complete_and_exit+0x20/0x20
May 23 22:47:31 amd-saravana-crater kernel: [17881.745413]  ret_from_fork+0x22/0x30
May 23 22:47:31 amd-saravana-crater kernel: [17881.745425]  </TASK>


From xen-devel-bounces@lists.xenproject.org Wed May 31 03:41:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 03:41:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541535.844400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Chw-0007Nr-5L; Wed, 31 May 2023 03:41:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541535.844400; Wed, 31 May 2023 03:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Chw-0007Nj-0H; Wed, 31 May 2023 03:41:08 +0000
Received: by outflank-mailman (input) for mailman id 541535;
 Wed, 31 May 2023 03:41:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Chu-0007NZ-Fo; Wed, 31 May 2023 03:41:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Chu-0002jE-3y; Wed, 31 May 2023 03:41:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Cht-0001n4-Mc; Wed, 31 May 2023 03:41:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Cht-0004nt-GF; Wed, 31 May 2023 03:41:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MOdkgapNm0mPQe3pp6eUi9wo1uR84S4fSAJJ9OrtXCc=; b=Y1ljRWAzAJ6EfHtEM+6PXPe4wK
	WaWci5PsRJl1CQtXFjxM7mriVxxRVys2k9X9MXtzvDVwD84Bh+AMZ2wv5iHCmWMDJvN7wX77g21Ch
	QLnb1jMC+SKa/hpuqqNnzg2waaXMd9xVKHvqhWVWKT6ny/oUiHKidA2dM/DTxWppweEg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181015-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 181015: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-libvirt-xsm:xen-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-multivcpu:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-raw:xen-install:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=51d0ac4577c20b22cb659b16d73d37c05ce2fbde
X-Osstest-Versions-That:
    linux=f53660ec669f60c772fdf7d75d1c24d288547cee
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 03:41:05 +0000

flight 181015 linux-5.4 real [real]
flight 181022 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181015/
http://logs.test-lab.xenproject.org/osstest/logs/181022/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm  7 xen-install         fail pass in 181022-retest
 test-armhf-armhf-xl-multivcpu 18 guest-start/debian.repeat fail pass in 181022-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 181022 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 181022 never pass
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host         fail  like 180690
 test-amd64-i386-libvirt-raw   7 xen-install                  fail  like 180690
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 180690
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180690
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180690
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180690
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180690
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180690
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180690
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180690
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 180690
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180690
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180690
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180690
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                51d0ac4577c20b22cb659b16d73d37c05ce2fbde
baseline version:
 linux                f53660ec669f60c772fdf7d75d1c24d288547cee

Last test of basis   180690  2023-05-17 09:46:05 Z   13 days
Testing same since   181015  2023-05-30 12:14:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "t.feng" <fengtao40@huawei.com>
  Aaron Armstrong Skomra <aaron.skomra@wacom.com>
  Adam Stylinski <kungfujesus06@gmail.com>
  Ai Chao <aichao@kylinos.cn>
  Alain Volmat <avolmat@me.com>
  Alan Stern <stern@rowland.harvard.edu>
  Aleksandr Loktionov <aleksandr.loktionov@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Alexei Starovoitov <ast@kernel.org>
  Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Badhri Jagan Sridharan <badhri@google.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Bastien Nocera <hadess@hadess.net>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Bin Li <bin.li@canonical.com>
  Bob Moore <robert.moore@intel.com>
  Cezary Rojewski <cezary.rojewski@intel.com>
  Chao Yu <chao@kernel.org>
  Christian Brauner <brauner@kernel.org>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Cong Wang <cong.wang@bytedance.com>
  Daisuke Nojiri <dnojiri@chromium.org>
  Dan Carpenter <dan.carpenter@linaro.org>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Gabay <daniel.gabay@intel.com>
  Daniel Sneddon <daniel.sneddon@linux.intel.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Hansen <dave.hansen@linux.intel.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Dmitry Bogdanov <d.bogdanov@yadro.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dong Chenchen <dongchenchen2@huawei.com>
  Duoming Zhou <duoming@zju.edu.cn>
  Eli Cohen <elic@nvidia.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@nbd.name>
  Filipe Manana <fdmanana@suse.com>
  Finn Thain <fthain@linux-m68k.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Frank Wang <frank.wang@rock-chips.com>
  Gavrilov Ilia <Ilia.Gavrilov@infotecs.ru>
  Geert Uytterhoeven <geert@linux-m68k.org>
  George Kennedy <george.kennedy@oracle.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory Greenman <gregory.greenman@intel.com>
  Gregory Oakes <gregory.oakes@amd.com>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Hao Lan <lanhao@huawei.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hardik Garg <hargar@linux.microsoft.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hector Martin <marcan@marcan.st>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Heiko Carstens <hca@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hyunwoo Kim <imv4bel@gmail.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ilya Leoshkevich <iii@linux.ibm.com>
  Inki Dae <inki.dae@samsung.com>
  Ioana Ciornei <ioana.ciornei@nxp.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jan Kara <jack@suse.cz>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Javier Rodriguez <josejavier.rodriguez@duagon.com>
  Jerry Snitselaar <jsnitsel@redhat.com>
  Jie Wang <wangjie125@huawei.com>
  Jijie Shao <shaojijie@huawei.com>
  Jimmy Assarsson <extja@kvaser.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Slaby <jslaby@suse.cz>
  Johan Hovold <johan+linaro@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Thumshirn <jth@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Jorge Sanjuan Garcia <jorge.sanjuangarcia@duagon.com>
  Josef Bacik <josef@toxicpanda.com>
  Josh Poimboeuf <jpoimboe@kernel.org>
  Juergen Gross <jgross@suse.com>
  Justin Tee <justin.tee@broadcom.com>
  Kalle Valo <kvalo@kernel.org>
  Ke Zhang <m202171830@hust.edu.cn>
  Kemeng Shi <shikemeng@huaweicloud.com>
  Kevin Groeneveld <kgroeneveld@lenbrook.com>
  Konrad Gräfe <k.graefe@gateware.de>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Lee Jones <lee@kernel.org>
  Leon Romanovsky <leon@kernel.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Luke D. Jones <luke@ljones.dev>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Kepplinger <martin.kepplinger@puri.sm>
  Mauro Carvalho Chehab <mchehab@kernel.org>
  Maxime Bizon <mbizon@freebox.fr>
  Maxime Ripard <maxime@cerno.tech>
  Menglong Dong <dong.menglong@zte.com.cn>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Schmitz <schmitzmic@gmail.com>
  Mike Christie <michael.christie@oracle.com>
  Min Li <lm0963hack@gmail.com>
  Nathan Chancellor <nathan@kernel.org>
  Nick Child <nnac123@linux.ibm.com>
  Nikhil Mahale <nmahale@nvidia.com>
  Nikolay Borisov <nborisov@suse.com>
  Nur Hussein <hussein@unixcat.org>
  Ojaswin Mujoo <ojaswin@linux.ibm.com>
  Oleg Nesterov <oleg@redhat.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Hartkopp <socketcan@hartkopp.net>
  Ovidiu Panait <ovidiu.panait@windriver.com>
  Pablo Greco <pgreco@centosproject.org>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peilin Ye <peilin.ye@bytedance.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Philipp Hortmann <philipp.g.hortmann@gmail.com>
  Pierre Gondois <pierre.gondois@arm.com>
  Ping Cheng <ping.cheng@wacom.com>
  Ping Cheng <pinglinux@gmail.com>
  Po-Hsu Lin <po-hsu.lin@canonical.com>
  Pratyush Yadav <ptyadav@amazon.de>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qiang Ning <qning0106@126.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Rodríguez Barbarin, José Javier <JoseJavier.Rodriguez@duagon.com>
  Roi Dayan <roid@nvidia.com>
  Roy Novich <royno@nvidia.com>
  Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Saravana Kannan <saravanak@google.com>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shay Drory <shayd@nvidia.com>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steffen Klassert <steffen.klassert@secunet.com>
  Stephen Boyd <sboyd@kernel.org>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Suzuki K Poulose <suzuki.poulose@arm.com>
  t.feng <fengtao40@huawei.com>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tamir Duberstein <tamird@google.com>
  Tariq Toukan <tariqt@nvidia.com>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tobias Brunner <tobias@strongswan.org>
  Tomas Krcka <krckatom@amazon.de>
  Tony Lindgren <tony@atomide.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Udipto Goswami <quic_ugoswami@quicinc.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadim Pasternak <vadimp@mellanox.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vernon Lovejoy <vlovejoy@redhat.com>
  Vicki Pfau <vi@endrift.com>
  Vinod Koul <vkoul@kernel.org>
  Vitaliy Tomin <tomin@iszf.irk.ru>
  void0red <30990023+void0red@users.noreply.github.com>
  Weitao Wang <WeitaoWang-oc@zhaoxin.com>
  Will Deacon <will@kernel.org>
  William Tu <u9012063@gmail.com>
  Wim Van Sebroeck <wim@linux-watchdog.org>
  Wyes Karny <wyes.karny@amd.com>
  Xin Long <lucien.xin@gmail.com>
  Xiubo Li <xiubli@redhat.com>
  Yonghong Song <yhs@fb.com>
  Zev Weiss <zev@bewilderbeest.net>
  Zhang Rui <rui.zhang@intel.com>
  Zheng Wang <zyytlz.wz@163.com>
  Zhu Yanjun <zyjzyj2000@gmail.com>
  Zhuang Shengen <zhuangshengen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   f53660ec669f..51d0ac4577c2  51d0ac4577c20b22cb659b16d73d37c05ce2fbde -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Wed May 31 07:20:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541545.844410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4G7j-0003gI-Sj; Wed, 31 May 2023 07:19:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541545.844410; Wed, 31 May 2023 07:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4G7j-0003gB-Ps; Wed, 31 May 2023 07:19:59 +0000
Received: by outflank-mailman (input) for mailman id 541545;
 Wed, 31 May 2023 07:19:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4G7i-0003g1-LS; Wed, 31 May 2023 07:19:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4G7i-0005uE-GD; Wed, 31 May 2023 07:19:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4G7i-0008J8-29; Wed, 31 May 2023 07:19:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4G7i-0003jN-1h; Wed, 31 May 2023 07:19:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5gLU0rKs/8wDw3R3LxOXT1LHa2Fu1D9F758wFoVa/m4=; b=azIxCIO+gZu0L1sXyEsLRyT9Z5
	rXo7VYJZNRTlIM7hTcN4UUpT0TPTFb2hHPqUwjfTHWBjYh1f6jopWoDyuNTpsgCkhZKdh5xWTWdM8
	AGy69IfZWWW2ThnssaLHLEwL+y7E+Q/RxrNTnE+TRNhmncsPvPVfjsFiK5UHhcp79KOk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181024-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 181024: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d8e5d35ede7158ccbb9abf600e65b9aa6e043f74
X-Osstest-Versions-That:
    ovmf=0f9283429dd487deeeb264ee5670551d596fc208
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 07:19:58 +0000

flight 181024 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181024/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d8e5d35ede7158ccbb9abf600e65b9aa6e043f74
baseline version:
 ovmf                 0f9283429dd487deeeb264ee5670551d596fc208

Last test of basis   181011  2023-05-30 08:42:06 Z    0 days
Testing same since   181024  2023-05-31 05:10:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiaxin Wu <jiaxin.wu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0f9283429d..d8e5d35ede  d8e5d35ede7158ccbb9abf600e65b9aa6e043f74 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541551.844419 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC4-00053h-E3; Wed, 31 May 2023 07:24:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541551.844419; Wed, 31 May 2023 07:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC4-00053a-BM; Wed, 31 May 2023 07:24:28 +0000
Received: by outflank-mailman (input) for mailman id 541551;
 Wed, 31 May 2023 07:24:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GC3-00053N-7D
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:27 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 2b389400-ff84-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 09:24:24 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 44D6C1042;
 Wed, 31 May 2023 00:25:09 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9BCD03F663;
 Wed, 31 May 2023 00:24:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b389400-ff84-11ed-b231-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH v8 00/12] SVE feature for arm guests
Date: Wed, 31 May 2023 08:24:01 +0100
Message-Id: <20230531072413.868673-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series introduces the possibility for Dom0 and DomU guests to use
SVE/SVE2 instructions.

The SVE feature introduces new instructions and registers to improve the
performance of floating point operations.

The SVE feature is advertised via the SVE field of the ID_AA64PFR0_EL1
register; when available, the ID_AA64ZFR0_EL1 register provides additional
information about the implemented version and other SVE features.

New registers added by the SVE feature are Z0-Z31, P0-P15, FFR, ZCR_ELx.

Z0-Z31 are scalable vector registers whose size is implementation defined,
ranging from 128 bits up to a maximum of 2048 bits; the term vector length
is used to refer to this quantity.
P0-P15 are predicate registers whose size is the vector length divided by 8;
the FFR (First Fault Register) has the same size.
ZCR_ELx is a register that can control and restrict the maximum vector length
used at exception level <x> and all lower exception levels; for example, EL3
can restrict the vector length usable by EL3, EL2, EL1 and EL0.

The platform has a maximum implemented vector length, so if a value written
to a ZCR register is above the implemented length, the lower (implemented)
value is used instead. The RDVL instruction can be used to check which vector
length the hardware is using after setting ZCR.
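The LEN-field encoding and the clamping behaviour described above can be
sketched in C. This is an illustrative sketch only (not Xen code); the helper
names and the SVE_VL_GRANULE macro are hypothetical:

```c
/* SVE vector lengths come in multiples of 128 bits. */
#define SVE_VL_GRANULE 128U

/* The ZCR_ELx LEN field encodes (VL / 128) - 1, i.e. LEN + 1 granules. */
static unsigned int zcr_len_to_vl(unsigned int len)
{
    return (len + 1U) * SVE_VL_GRANULE;
}

/*
 * Writing a vector length above the implemented maximum results in the
 * lower, implemented value being used (what RDVL would then report).
 */
static unsigned int effective_vl(unsigned int requested_vl,
                                 unsigned int hw_max_vl)
{
    return requested_vl < hw_max_vl ? requested_vl : hw_max_vl;
}
```

For instance, with a hardware maximum of 512 bits, requesting 2048 bits
leaves the effective length at 512 bits, and a LEN field of 3 corresponds
to 512 bits.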

For an SVE guest, the V0-V31 registers are part of Z0-Z31, so there is no
need to save them separately; saving Z0-Z31 implicitly also saves V0-V31.

SVE usage can be trapped using a flag in CPTR_EL2, hence in this series the
register is added to the domain state, so that only the guests that are not
allowed to use SVE are trapped.

This series introduces a command line parameter to enable Dom0 to use SVE
and to set its maximum vector length, which defaults to 0, meaning the guest
is not allowed to use SVE. Values from 128 to 2048 mean the guest can use
SVE, with the selected value as the maximum allowed vector length (the
effective one could be lower if the implemented one is lower).
For DomUs, an XL parameter used in the same way is introduced, and a dom0less
DTB binding is created.

The context switch is the most critical part because large registers may
need to be saved; in this series a simple approach is used, and the context
is saved/restored on every switch for the guests that are allowed to use SVE.

Luca Fancellu (12):
  xen/arm: enable SVE extension for Xen
  xen/arm: add SVE vector length field to the domain
  xen/arm: Expose SVE feature to the guest
  xen/arm: add SVE exception class handling
  arm/sve: save/restore SVE context switch
  xen/common: add dom0 xen command line argument for Arm
  xen: enable Dom0 to use SVE feature
  xen/physinfo: encode Arm SVE vector length in arch_capabilities
  tools: add physinfo arch_capabilities handling for Arm
  xen/tools: add sve parameter in XL configuration
  xen/arm: add sve property for dom0less domUs
  xen/changelog: Add SVE and "dom0" options to the changelog for Arm

 CHANGELOG.md                                  |   3 +
 SUPPORT.md                                    |   6 +
 docs/man/xl.cfg.5.pod.in                      |  16 ++
 docs/misc/arm/device-tree/booting.txt         |  16 ++
 docs/misc/xen-command-line.pandoc             |  20 +-
 tools/golang/xenlight/helpers.gen.go          |   4 +
 tools/golang/xenlight/types.gen.go            |  24 +++
 tools/include/libxl.h                         |  11 +
 .../include/xen-tools/arm-arch-capabilities.h |  28 +++
 tools/include/xen-tools/common-macros.h       |   2 +
 tools/libs/light/libxl.c                      |   1 +
 tools/libs/light/libxl_arm.c                  |  33 +++
 tools/libs/light/libxl_internal.h             |   1 -
 tools/libs/light/libxl_types.idl              |  23 +++
 tools/ocaml/libs/xc/xenctrl.ml                |   4 +-
 tools/ocaml/libs/xc/xenctrl.mli               |   4 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c           |   8 +-
 tools/python/xen/lowlevel/xc/xc.c             |  42 +++-
 tools/xl/xl_info.c                            |   8 +
 tools/xl/xl_parse.c                           |   8 +
 xen/arch/arm/Kconfig                          |  10 +-
 xen/arch/arm/README.LinuxPrimitives           |  11 +
 xen/arch/arm/arm64/Makefile                   |   1 +
 xen/arch/arm/arm64/cpufeature.c               |   7 +-
 xen/arch/arm/arm64/domctl.c                   |   4 +
 xen/arch/arm/arm64/sve-asm.S                  | 195 ++++++++++++++++++
 xen/arch/arm/arm64/sve.c                      | 182 ++++++++++++++++
 xen/arch/arm/arm64/vfp.c                      |  79 ++++---
 xen/arch/arm/arm64/vsysreg.c                  |  41 +++-
 xen/arch/arm/cpufeature.c                     |   6 +-
 xen/arch/arm/domain.c                         |  55 ++++-
 xen/arch/arm/domain_build.c                   |  66 ++++++
 xen/arch/arm/include/asm/arm64/sve.h          |  72 +++++++
 xen/arch/arm/include/asm/arm64/sysregs.h      |   4 +
 xen/arch/arm/include/asm/arm64/vfp.h          |  16 ++
 xen/arch/arm/include/asm/cpufeature.h         |  14 ++
 xen/arch/arm/include/asm/domain.h             |  11 +
 xen/arch/arm/include/asm/processor.h          |   3 +
 xen/arch/arm/setup.c                          |   5 +-
 xen/arch/arm/sysctl.c                         |   4 +
 xen/arch/arm/traps.c                          |  36 +++-
 xen/arch/x86/dom0_build.c                     |  48 ++---
 xen/common/domain.c                           |  23 +++
 xen/common/kernel.c                           |  28 +++
 xen/include/public/arch-arm.h                 |   2 +
 xen/include/public/sysctl.h                   |   4 +
 xen/include/xen/domain.h                      |   1 +
 xen/include/xen/lib.h                         |  10 +
 48 files changed, 1084 insertions(+), 116 deletions(-)
 create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h
 create mode 100644 xen/arch/arm/arm64/sve-asm.S
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541553.844436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC6-0005PA-3z; Wed, 31 May 2023 07:24:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541553.844436; Wed, 31 May 2023 07:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC5-0005NI-WE; Wed, 31 May 2023 07:24:30 +0000
Received: by outflank-mailman (input) for mailman id 541553;
 Wed, 31 May 2023 07:24:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GC4-00053N-TG
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:28 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 2cad721d-ff84-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 09:24:27 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C5FF915BF;
 Wed, 31 May 2023 00:25:11 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 752683F663;
 Wed, 31 May 2023 00:24:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2cad721d-ff84-11ed-b231-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v8 02/12] xen/arm: add SVE vector length field to the domain
Date: Wed, 31 May 2023 08:24:03 +0100
Message-Id: <20230531072413.868673-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add an sve_vl field to the arch_domain and xen_arch_domainconfig structs,
so that the domain carries information about the SVE feature and the
number of SVE register bits allowed for this domain.

The sve_vl field is the vector length in bits divided by 128; this encoding
uses less space in the structures.

The field is also used to allow or forbid a domain to use SVE: a value
of zero means the guest is not allowed to use the feature.
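The VL/128 encoding and the zero-means-forbidden convention can be sketched
as follows; these are illustrative helpers with hypothetical names, loosely
mirroring the sve_decode_vl()/is_sve_domain() helpers the patch introduces:

```c
/* Encode a vector length in bits into the compact sve_vl form (VL/128). */
static unsigned int sve_encode_vl(unsigned int vl_bits)
{
    return vl_bits / 128U;
}

/* Decode the compact form back into a vector length in bits. */
static unsigned int sve_decode_vl(unsigned int sve_vl)
{
    return sve_vl * 128U;
}

/* A zero sve_vl doubles as "SVE not allowed for this domain". */
static int domain_may_use_sve(unsigned int sve_vl)
{
    return sve_vl != 0U;
}
```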

Check that the requested vector length is lower than or equal to the
platform supported vector length, otherwise fail on domain creation.

Check that only 64-bit domains have SVE enabled, otherwise fail.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes from v7:
 - add r-by Bertrand
 - add r-by Julien
Changes from v6:
 - Style fix, have is_sve_domain as static inline instead of macro
   (Julien)
Changes from v5:
 - Update commit message stating the interface ver. bump (Bertrand)
 - in struct arch_domain, protect sve_vl with CONFIG_ARM64_SVE,
   given the change, move also is_sve_domain() where it's protected
   inside sve.h and create a stub when the macro is not defined,
   protect the usage of sve_vl where needed.
   (Julien)
 - Add a check for a 32-bit guest running on top of a 64-bit host with the
   sve parameter enabled, to stop domain creation; added in
   construct_domain() of domain_build.c and subarch_do_domctl of
   domctl.c. (Julien)
Changes from v4:
 - Return 0 in get_sys_vl_len() if sve is not supported, code style fix,
   removed else if since the conditions can't fallthrough, removed not
   needed condition checking for VL bits validity because it's already
   covered, so delete is_vl_valid() function. (Jan)
Changes from v3:
 - don't use fixed types when not needed, use encoded value also in
   arch_domain so rename sve_vl_bits in sve_vl. (Jan)
 - rename domainconfig_decode_vl to sve_decode_vl because it will now
   be used also to decode from arch_domain value
 - change sve_vl from uint16_t to uint8_t and move it after "type" field
   to optimize space.
Changes from v2:
 - rename field in xen_arch_domainconfig from "sve_vl_bits" to
   "sve_vl" and use the implicit padding after gic_version to
   store it, now this field is the VL/128. (Jan)
 - Created domainconfig_decode_vl() function to decode the sve_vl
   field and use it as plain bits value inside arch_domain.
 - Changed commit message reflecting the changes
Changes from v1:
 - no changes
Changes from RFC:
 - restore zcr_el2 in sve_restore_state, which will be introduced
   later in this series, so remove zcr_el2 related code from this
   patch and move everything to the later patch (Julien)
 - add explicit padding into struct xen_arch_domainconfig (Julien)
 - Don't lower down the vector length, just fail to create the
   domain. (Julien)
---
 xen/arch/arm/arm64/domctl.c          |  4 ++++
 xen/arch/arm/arm64/sve.c             | 12 +++++++++++
 xen/arch/arm/domain.c                | 29 ++++++++++++++++++++++++++
 xen/arch/arm/domain_build.c          |  7 +++++++
 xen/arch/arm/include/asm/arm64/sve.h | 31 ++++++++++++++++++++++++++++
 xen/arch/arm/include/asm/domain.h    |  5 +++++
 xen/include/public/arch-arm.h        |  2 ++
 7 files changed, 90 insertions(+)

diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
index 0de89b42c448..14fc622e9956 100644
--- a/xen/arch/arm/arm64/domctl.c
+++ b/xen/arch/arm/arm64/domctl.c
@@ -10,6 +10,7 @@
 #include <xen/sched.h>
 #include <xen/hypercall.h>
 #include <public/domctl.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 
 static long switch_mode(struct domain *d, enum domain_type type)
@@ -43,6 +44,9 @@ long subarch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         case 32:
             if ( !cpu_has_el1_32 )
                 return -EINVAL;
+            /* SVE is not supported for 32-bit domains */
+            if ( is_sve_domain(d) )
+                return -EINVAL;
             return switch_mode(d, DOMAIN_32BIT);
         case 64:
             return switch_mode(d, DOMAIN_64BIT);
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index e05ccc38a896..a9144e48ef6b 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -8,6 +8,7 @@
 #include <xen/types.h>
 #include <asm/arm64/sve.h>
 #include <asm/arm64/sysregs.h>
+#include <asm/cpufeature.h>
 #include <asm/processor.h>
 #include <asm/system.h>
 
@@ -49,6 +50,17 @@ register_t compute_max_zcr(void)
     return vl_to_zcr(hw_vl);
 }
 
+/* Get the system sanitized value for VL in bits */
+unsigned int get_sys_vl_len(void)
+{
+    if ( !cpu_has_sve )
+        return 0;
+
+    /* The ZCR_ELx LEN field encodes the vector length as ((LEN + 1) * 128) bits */
+    return ((system_cpuinfo.zcr64.bits[0] & ZCR_ELx_LEN_MASK) + 1U) *
+            SVE_VL_MULTIPLE_VAL;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index d5ab15db46c4..6c22551b0ed2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -13,6 +13,7 @@
 #include <xen/wait.h>
 
 #include <asm/alternative.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpuerrata.h>
 #include <asm/cpufeature.h>
 #include <asm/current.h>
@@ -555,6 +556,8 @@ int arch_vcpu_create(struct vcpu *v)
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
     v->arch.cptr_el2 = get_default_cptr_flags();
+    if ( is_sve_domain(v->domain) )
+        v->arch.cptr_el2 &= ~HCPTR_CP(8);
 
     v->arch.hcr_el2 = get_default_hcr_flags();
 
@@ -599,6 +602,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     unsigned int max_vcpus;
     unsigned int flags_required = (XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap);
     unsigned int flags_optional = (XEN_DOMCTL_CDF_iommu | XEN_DOMCTL_CDF_vpmu);
+    unsigned int sve_vl_bits = sve_decode_vl(config->arch.sve_vl);
 
     if ( (config->flags & ~flags_optional) != flags_required )
     {
@@ -607,6 +611,26 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    /* Check feature flags */
+    if ( sve_vl_bits > 0 )
+    {
+        unsigned int zcr_max_bits = get_sys_vl_len();
+
+        if ( !zcr_max_bits )
+        {
+            dprintk(XENLOG_INFO, "SVE is unsupported on this machine.\n");
+            return -EINVAL;
+        }
+
+        if ( sve_vl_bits > zcr_max_bits )
+        {
+            dprintk(XENLOG_INFO,
+                    "Requested SVE vector length (%u) > supported length (%u)\n",
+                    sve_vl_bits, zcr_max_bits);
+            return -EINVAL;
+        }
+    }
+
     /* The P2M table must always be shared between the CPU and the IOMMU */
     if ( config->iommu_opts & XEN_DOMCTL_IOMMU_no_sharept )
     {
@@ -749,6 +773,11 @@ int arch_domain_create(struct domain *d,
     if ( (rc = domain_vpci_init(d)) != 0 )
         goto fail;
 
+#ifdef CONFIG_ARM64_SVE
+    /* Copy the encoded vector length sve_vl from the domain configuration */
+    d->arch.sve_vl = config->arch.sve_vl;
+#endif
+
     return 0;
 
 fail:
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 3f4558ade67f..c46f19208dcb 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -26,6 +26,7 @@
 #include <asm/platform.h>
 #include <asm/psci.h>
 #include <asm/setup.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 #include <asm/domain_build.h>
 #include <xen/event.h>
@@ -3691,6 +3692,12 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
         return -EINVAL;
     }
 
+    if ( is_sve_domain(d) && (kinfo->type == DOMAIN_32BIT) )
+    {
+        printk("SVE is not available for 32-bit domain\n");
+        return -EINVAL;
+    }
+
     if ( is_64bit_domain(d) )
         vcpu_switch_to_aarch64_mode(v);
 
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index c0466243c7bc..4b63412727fc 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -8,13 +8,44 @@
 #ifndef _ARM_ARM64_SVE_H
 #define _ARM_ARM64_SVE_H
 
+#include <xen/sched.h>
+
 #define SVE_VL_MAX_BITS 2048U
 
 /* Vector length must be multiple of 128 */
 #define SVE_VL_MULTIPLE_VAL 128U
 
+static inline unsigned int sve_decode_vl(unsigned int sve_vl)
+{
+    /* SVE vector length is stored as VL/128 in xen_arch_domainconfig */
+    return sve_vl * SVE_VL_MULTIPLE_VAL;
+}
+
 register_t compute_max_zcr(void);
 
+#ifdef CONFIG_ARM64_SVE
+
+static inline bool is_sve_domain(const struct domain *d)
+{
+    return d->arch.sve_vl > 0;
+}
+
+unsigned int get_sys_vl_len(void);
+
+#else /* !CONFIG_ARM64_SVE */
+
+static inline bool is_sve_domain(const struct domain *d)
+{
+    return false;
+}
+
+static inline unsigned int get_sys_vl_len(void)
+{
+    return 0;
+}
+
+#endif /* CONFIG_ARM64_SVE */
+
 #endif /* _ARM_ARM64_SVE_H */
 
 /*
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index e776ee704b7d..331da0f3bcc3 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -67,6 +67,11 @@ struct arch_domain
     enum domain_type type;
 #endif
 
+#ifdef CONFIG_ARM64_SVE
+    /* max SVE encoded vector length */
+    uint8_t sve_vl;
+#endif
+
     /* Virtual MMU */
     struct p2m_domain p2m;
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 1528ced5097a..38311f559581 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -300,6 +300,8 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
 struct xen_arch_domainconfig {
     /* IN/OUT */
     uint8_t gic_version;
+    /* IN - Contains SVE vector length divided by 128 */
+    uint8_t sve_vl;
     /* IN */
     uint16_t tee_type;
     /* IN */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:30 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541552.844430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC5-0005Ik-M5; Wed, 31 May 2023 07:24:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541552.844430; Wed, 31 May 2023 07:24:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC5-0005Ib-Ik; Wed, 31 May 2023 07:24:29 +0000
Received: by outflank-mailman (input) for mailman id 541552;
 Wed, 31 May 2023 07:24:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GC3-00053N-T7
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:28 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 2bf35299-ff84-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 09:24:25 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 928B41063;
 Wed, 31 May 2023 00:25:10 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 270C93F663;
 Wed, 31 May 2023 00:24:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2bf35299-ff84-11ed-b231-6b7b168915f2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v8 01/12] xen/arm: enable SVE extension for Xen
Date: Wed, 31 May 2023 08:24:02 +0100
Message-Id: <20230531072413.868673-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Enable Xen to handle the SVE extension: add code in the cpufeature
module to handle the ZCR SVE register, and disable trapping of the SVE
feature on system boot only while SVE resources are accessed.
While there, correct the coding style of the comment on coprocessor
trapping.

cptr_el2 is now part of the domain context and will be restored on
context switch. This is in preparation for saving the SVE context,
which will be part of the VFP operations, so restore it before the
call that saves the VFP registers.
To save an additional isb barrier, restore cptr_el2 before an existing
isb barrier and move the call that saves the VFP context after that
barrier. To keep ctxt_switch_to() and ctxt_switch_from() (mostly)
symmetrical, move vfp_save_state() up in the function.

Change the Kconfig entry to make the ARM64_SVE symbol selectable; it
is not selected by default.

Create the sve module and sve-asm.S, which contains assembly routines
for the SVE feature. This code is inspired by Linux and uses manual
instruction encoding to stay compatible with assemblers that do not
support SVE; the imported instructions are documented in
README.LinuxPrimitives.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes from v7:
 - modified comment in cpufeature.c, add r-by Bertrand
 - Add A-by Julien
Changes from v6:
 - modified licence, add emacs block, move vfp_save_state up in the
   function, add comments to CPTR_EL2 and vfp_restore_state, add
   ASSERT_UNREACHABLE to compute_max_zcr when SVE is not enabled in
   KConfig, don't use variable in init_traps(), code style fixes,
   add entries to README.LinuxPrimitives (Julien)
 - vl_to_zcr() is moved into the sve.c module, as changes to the
   series led to its usage only there; remove the stub for
   compute_max_zcr() and rely on compiler dead-code elimination.
Changes from v5:
 - Add R-by Bertrand
Changes from v4:
 - don't use fixed-width types in vl_to_zcr; I forgot to address that
   in v3 and by mistake changed it in patch 2, fixing it now (Jan)
Changes from v3:
 - no changes
Changes from v2:
 - renamed sve_asm.S to sve-asm.S, new files should not contain
   underscores in the name (Jan)
Changes from v1:
 - Add an assert to vl_to_zcr; it is never called with vl == 0, but
   the assert guards against that changing in the future.
Changes from RFC:
 - Moved restoring of cptr before an existing barrier (Julien)
 - Marked the feature as unsupported for now (Julien)
 - Trap and un-trap only when using SVE resources in
   compute_max_zcr() (Julien)
---
 xen/arch/arm/Kconfig                     | 10 ++--
 xen/arch/arm/README.LinuxPrimitives      |  9 ++++
 xen/arch/arm/arm64/Makefile              |  1 +
 xen/arch/arm/arm64/cpufeature.c          |  7 ++-
 xen/arch/arm/arm64/sve-asm.S             | 48 +++++++++++++++++++
 xen/arch/arm/arm64/sve.c                 | 59 ++++++++++++++++++++++++
 xen/arch/arm/cpufeature.c                |  6 ++-
 xen/arch/arm/domain.c                    | 20 +++++---
 xen/arch/arm/include/asm/arm64/sve.h     | 27 +++++++++++
 xen/arch/arm/include/asm/arm64/sysregs.h |  1 +
 xen/arch/arm/include/asm/cpufeature.h    | 14 ++++++
 xen/arch/arm/include/asm/domain.h        |  1 +
 xen/arch/arm/include/asm/processor.h     |  2 +
 xen/arch/arm/setup.c                     |  5 +-
 xen/arch/arm/traps.c                     | 27 ++++++-----
 15 files changed, 210 insertions(+), 27 deletions(-)
 create mode 100644 xen/arch/arm/arm64/sve-asm.S
 create mode 100644 xen/arch/arm/arm64/sve.c
 create mode 100644 xen/arch/arm/include/asm/arm64/sve.h

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 239d3aed3c7f..41f45d8d1203 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -112,11 +112,15 @@ config ARM64_PTR_AUTH
 	  This feature is not supported in Xen.
 
 config ARM64_SVE
-	def_bool n
+	bool "Enable Scalar Vector Extension support (UNSUPPORTED)" if UNSUPPORTED
 	depends on ARM_64
 	help
-	  Scalar Vector Extension support.
-	  This feature is not supported in Xen.
+	  Scalar Vector Extension (SVE/SVE2) support for guests.
+
+	  Please be aware that currently, enabling this feature adds latency
+	  to VM context switches that involve an SVE-enabled guest (whether
+	  switching between two SVE-enabled guests or between an SVE-enabled
+	  and a non-SVE guest), compared to switches between non-SVE guests.
 
 config ARM64_MTE
 	def_bool n
diff --git a/xen/arch/arm/README.LinuxPrimitives b/xen/arch/arm/README.LinuxPrimitives
index 1d53e6a898da..76c8df29e416 100644
--- a/xen/arch/arm/README.LinuxPrimitives
+++ b/xen/arch/arm/README.LinuxPrimitives
@@ -62,6 +62,15 @@ done
 linux/arch/arm64/lib/clear_page.S       xen/arch/arm/arm64/lib/clear_page.S
 linux/arch/arm64/lib/copy_page.S        unused in Xen
 
+---------------------------------------------------------------------
+
+SVE assembly macro: last sync @ v6.3.0 (last commit: 457391b03803)
+
+linux/arch/arm64/include/asm/fpsimdmacros.h   xen/arch/arm/arm64/sve-asm.S
+
+The following macros were taken from Linux:
+    _check_general_reg, _check_num, _sve_rdvl
+
 =====================================================================
 arm32
 =====================================================================
diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 28481393e98f..54ad55c75cda 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -13,6 +13,7 @@ obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += mm.o
 obj-y += smc.o
 obj-y += smpboot.o
+obj-$(CONFIG_ARM64_SVE) += sve.o sve-asm.o
 obj-y += traps.o
 obj-y += vfp.o
 obj-y += vsysreg.o
diff --git a/xen/arch/arm/arm64/cpufeature.c b/xen/arch/arm/arm64/cpufeature.c
index d9039d37b2d1..b4656ff4d80f 100644
--- a/xen/arch/arm/arm64/cpufeature.c
+++ b/xen/arch/arm/arm64/cpufeature.c
@@ -455,15 +455,11 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
 	ARM64_FTR_END,
 };
 
-#if 0
-/* TODO: use this to sanitize SVE once we support it */
-
 static const struct arm64_ftr_bits ftr_zcr[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
 		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
 	ARM64_FTR_END,
 };
-#endif
 
 /*
  * Common ftr bits for a 32bit register with all hidden, strict
@@ -603,6 +599,9 @@ void update_system_features(const struct cpuinfo_arm *new)
 
 	SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
 
+	if ( cpu_has_sve )
+		SANITIZE_REG(zcr64, 0, zcr);
+
 	/*
 	 * Comment from Linux:
 	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
new file mode 100644
index 000000000000..4d1549344733
--- /dev/null
+++ b/xen/arch/arm/arm64/sve-asm.S
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE assembly routines
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ *
+ * Some macros and instruction encoding in this file are taken from linux 6.1.1,
+ * file arch/arm64/include/asm/fpsimdmacros.h, some of them are a modified
+ * version.
+ */
+
+/* Sanity-check macros to help avoid encoding garbage instructions */
+
+.macro _check_general_reg nr
+    .if (\nr) < 0 || (\nr) > 30
+        .error "Bad register number \nr."
+    .endif
+.endm
+
+.macro _check_num n, min, max
+    .if (\n) < (\min) || (\n) > (\max)
+        .error "Number \n out of range [\min,\max]"
+    .endif
+.endm
+
+/* SVE instruction encodings for non-SVE-capable assemblers */
+/* (pre binutils 2.28, all kernel capable clang versions support SVE) */
+
+/* RDVL X\nx, #\imm */
+.macro _sve_rdvl nx, imm
+    _check_general_reg \nx
+    _check_num (\imm), -0x20, 0x1f
+    .inst 0x04bf5000                \
+        | (\nx)                     \
+        | (((\imm) & 0x3f) << 5)
+.endm
+
+/* Gets the current vector register size in bytes */
+GLOBAL(sve_get_hw_vl)
+    _sve_rdvl 0, 1
+    ret
+
+/*
+ * Local variables:
+ * mode: ASM
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
new file mode 100644
index 000000000000..e05ccc38a896
--- /dev/null
+++ b/xen/arch/arm/arm64/sve.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#include <xen/types.h>
+#include <asm/arm64/sve.h>
+#include <asm/arm64/sysregs.h>
+#include <asm/processor.h>
+#include <asm/system.h>
+
+extern unsigned int sve_get_hw_vl(void);
+
+/* Takes a vector length in bits and returns the ZCR_ELx encoding */
+static inline register_t vl_to_zcr(unsigned int vl)
+{
+    ASSERT(vl > 0);
+    return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
+}
+
+register_t compute_max_zcr(void)
+{
+    register_t cptr_bits = get_default_cptr_flags();
+    register_t zcr = vl_to_zcr(SVE_VL_MAX_BITS);
+    unsigned int hw_vl;
+
+    /* Remove trap for SVE resources */
+    WRITE_SYSREG(cptr_bits & ~HCPTR_CP(8), CPTR_EL2);
+    isb();
+
+    /*
+     * Set the maximum SVE vector length; this way, a subsequent call to
+     * sve_get_hw_vl() reveals the VL actually supported by the platform
+     */
+    WRITE_SYSREG(zcr, ZCR_EL2);
+
+    /*
+     * Read back the maximum VL, which could be lower than what we imposed;
+     * hw_vl contains the VL in bytes, so multiply it by 8 for vl_to_zcr()
+     */
+    hw_vl = sve_get_hw_vl() * 8U;
+
+    /* Restore CPTR_EL2 */
+    WRITE_SYSREG(cptr_bits, CPTR_EL2);
+    isb();
+
+    return vl_to_zcr(hw_vl);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/cpufeature.c b/xen/arch/arm/cpufeature.c
index c4ec38bb2554..b53e1a977601 100644
--- a/xen/arch/arm/cpufeature.c
+++ b/xen/arch/arm/cpufeature.c
@@ -9,6 +9,7 @@
 #include <xen/init.h>
 #include <xen/smp.h>
 #include <xen/stop_machine.h>
+#include <asm/arm64/sve.h>
 #include <asm/cpufeature.h>
 
 DECLARE_BITMAP(cpu_hwcaps, ARM_NCAPS);
@@ -143,6 +144,9 @@ void identify_cpu(struct cpuinfo_arm *c)
 
     c->zfr64.bits[0] = READ_SYSREG(ID_AA64ZFR0_EL1);
 
+    if ( cpu_has_sve )
+        c->zcr64.bits[0] = compute_max_zcr();
+
     c->dczid.bits[0] = READ_SYSREG(DCZID_EL0);
 
     c->ctr.bits[0] = READ_SYSREG(CTR_EL0);
@@ -199,7 +203,7 @@ static int __init create_guest_cpuinfo(void)
     guest_cpuinfo.pfr64.mpam = 0;
     guest_cpuinfo.pfr64.mpam_frac = 0;
 
-    /* Hide SVE as Xen does not support it */
+    /* Hide SVE by default */
     guest_cpuinfo.pfr64.sve = 0;
     guest_cpuinfo.zfr64.bits[0] = 0;
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index d8ef6501ff8e..d5ab15db46c4 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -95,6 +95,9 @@ static void ctxt_switch_from(struct vcpu *p)
     /* CP 15 */
     p->arch.csselr = READ_SYSREG(CSSELR_EL1);
 
+    /* VFP */
+    vfp_save_state(p);
+
     /* Control Registers */
     p->arch.cpacr = READ_SYSREG(CPACR_EL1);
 
@@ -155,9 +158,6 @@ static void ctxt_switch_from(struct vcpu *p)
 
     /* XXX MPU */
 
-    /* VFP */
-    vfp_save_state(p);
-
     /* VGIC */
     gic_save_state(p);
 
@@ -181,9 +181,6 @@ static void ctxt_switch_to(struct vcpu *n)
     /* VGIC */
     gic_restore_state(n);
 
-    /* VFP */
-    vfp_restore_state(n);
-
     /* XXX MPU */
 
     /* Fault Status */
@@ -256,8 +253,17 @@ static void ctxt_switch_to(struct vcpu *n)
     WRITE_CP32(n->arch.joscr, JOSCR);
     WRITE_CP32(n->arch.jmcr, JMCR);
 #endif
+
+    /*
+     * CPTR_EL2 needs to be written before calling vfp_restore_state, and a
+     * synchronization instruction (isb) is expected after the write
+     */
+    WRITE_SYSREG(n->arch.cptr_el2, CPTR_EL2);
     isb();
 
+    /* VFP - call vfp_restore_state after writing CPTR_EL2 + isb */
+    vfp_restore_state(n);
+
     /* CP 15 */
     WRITE_SYSREG(n->arch.csselr, CSSELR_EL1);
 
@@ -548,6 +554,8 @@ int arch_vcpu_create(struct vcpu *v)
 
     v->arch.vmpidr = MPIDR_SMP | vcpuid_to_vaffinity(v->vcpu_id);
 
+    v->arch.cptr_el2 = get_default_cptr_flags();
+
     v->arch.hcr_el2 = get_default_hcr_flags();
 
     v->arch.mdcr_el2 = HDCR_TDRA | HDCR_TDOSA | HDCR_TDA;
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
new file mode 100644
index 000000000000..c0466243c7bc
--- /dev/null
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Arm SVE feature code
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#ifndef _ARM_ARM64_SVE_H
+#define _ARM_ARM64_SVE_H
+
+#define SVE_VL_MAX_BITS 2048U
+
+/* Vector length must be multiple of 128 */
+#define SVE_VL_MULTIPLE_VAL 128U
+
+register_t compute_max_zcr(void);
+
+#endif /* _ARM_ARM64_SVE_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 463899951414..4cabb9eb4d5e 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -24,6 +24,7 @@
 #define ICH_EISR_EL2              S3_4_C12_C11_3
 #define ICH_ELSR_EL2              S3_4_C12_C11_5
 #define ICH_VMCR_EL2              S3_4_C12_C11_7
+#define ZCR_EL2                   S3_4_C1_C2_0
 
 #define __LR0_EL2(x)              S3_4_C12_C12_ ## x
 #define __LR8_EL2(x)              S3_4_C12_C13_ ## x
diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
index c62cf6293fd6..03fe684b4d36 100644
--- a/xen/arch/arm/include/asm/cpufeature.h
+++ b/xen/arch/arm/include/asm/cpufeature.h
@@ -32,6 +32,12 @@
 #define cpu_has_thumbee   (boot_cpu_feature32(thumbee) == 1)
 #define cpu_has_aarch32   (cpu_has_arm || cpu_has_thumb)
 
+#ifdef CONFIG_ARM64_SVE
+#define cpu_has_sve       (boot_cpu_feature64(sve) == 1)
+#else
+#define cpu_has_sve       0
+#endif
+
 #ifdef CONFIG_ARM_32
 #define cpu_has_gicv3     (boot_cpu_feature32(gic) >= 1)
 #define cpu_has_gentimer  (boot_cpu_feature32(gentimer) == 1)
@@ -323,6 +329,14 @@ struct cpuinfo_arm {
         };
     } isa64;
 
+    union {
+        register_t bits[1];
+        struct {
+            unsigned long len:4;
+            unsigned long __res0:60;
+        };
+    } zcr64;
+
     struct {
         register_t bits[1];
     } zfr64;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 2a51f0ca688e..e776ee704b7d 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -190,6 +190,7 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+    register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
 
diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index 54f253087718..bc683334125c 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -582,6 +582,8 @@ void do_trap_guest_serror(struct cpu_user_regs *regs);
 
 register_t get_default_hcr_flags(void);
 
+register_t get_default_cptr_flags(void);
+
 /*
  * Synchronize SError unless the feature is selected.
  * This is relying on the SErrors are currently unmasked.
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 74b40e527f99..bbf72b69aae6 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -135,10 +135,11 @@ static void __init processor_id(void)
            cpu_has_el2_32 ? "64+32" : cpu_has_el2_64 ? "64" : "No",
            cpu_has_el1_32 ? "64+32" : cpu_has_el1_64 ? "64" : "No",
            cpu_has_el0_32 ? "64+32" : cpu_has_el0_64 ? "64" : "No");
-    printk("    Extensions:%s%s%s\n",
+    printk("    Extensions:%s%s%s%s\n",
            cpu_has_fp ? " FloatingPoint" : "",
            cpu_has_simd ? " AdvancedSIMD" : "",
-           cpu_has_gicv3 ? " GICv3-SysReg" : "");
+           cpu_has_gicv3 ? " GICv3-SysReg" : "",
+           cpu_has_sve ? " SVE" : "");
 
     /* Warn user if we find unknown floating-point features */
     if ( cpu_has_fp && (boot_cpu_feature64(fp) >= 2) )
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index d40c331a4e9c..3393e10b52e6 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -93,6 +93,21 @@ register_t get_default_hcr_flags(void)
              HCR_TID3|HCR_TSC|HCR_TAC|HCR_SWIO|HCR_TIDCP|HCR_FB|HCR_TSW);
 }
 
+register_t get_default_cptr_flags(void)
+{
+    /*
+     * Trap all coprocessor registers (0-13) except cp10 and
+     * cp11 for VFP.
+     *
+     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
+     *
+     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
+     * RES1, i.e. they would trap whether we did this write or not.
+     */
+    return  ((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
+             HCPTR_TTA | HCPTR_TAM);
+}
+
 static enum {
     SERRORS_DIVERSE,
     SERRORS_PANIC,
@@ -135,17 +150,7 @@ void init_traps(void)
     /* Trap CP15 c15 used for implementation defined registers */
     WRITE_SYSREG(HSTR_T(15), HSTR_EL2);
 
-    /* Trap all coprocessor registers (0-13) except cp10 and
-     * cp11 for VFP.
-     *
-     * /!\ All coprocessors except cp10 and cp11 cannot be used in Xen.
-     *
-     * On ARM64 the TCPx bits which we set here (0..9,12,13) are all
-     * RES1, i.e. they would trap whether we did this write or not.
-     */
-    WRITE_SYSREG((HCPTR_CP_MASK & ~(HCPTR_CP(10) | HCPTR_CP(11))) |
-                 HCPTR_TTA | HCPTR_TAM,
-                 CPTR_EL2);
+    WRITE_SYSREG(get_default_cptr_flags(), CPTR_EL2);
 
     /*
      * Configure HCR_EL2 with the bare minimum to run Xen until a guest
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541554.844450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC9-0005rQ-9z; Wed, 31 May 2023 07:24:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541554.844450; Wed, 31 May 2023 07:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC9-0005rF-71; Wed, 31 May 2023 07:24:33 +0000
Received: by outflank-mailman (input) for mailman id 541554;
 Wed, 31 May 2023 07:24:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GC7-0005os-P8
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:31 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 2d2f60eb-ff84-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 09:24:28 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0834015DB;
 Wed, 31 May 2023 00:25:13 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A85513F663;
 Wed, 31 May 2023 00:24:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2d2f60eb-ff84-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v8 03/12] xen/arm: Expose SVE feature to the guest
Date: Wed, 31 May 2023 08:24:04 +0100
Message-Id: <20230531072413.868673-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a guest is allowed to use SVE, expose the SVE features through
the identification registers.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v7:
 - add r-by Bertrand
Changes from v6:
 - code style fix, add A-by Julien
Changes from v5:
 - given the move of is_sve_domain() into asm/arm64/sve.h, add the
   header to vsysreg.c
 - dropping Bertrand's R-by because of the change
Changes from v4:
 - no changes
Changes from v3:
 - no changes
Changes from v2:
 - no changes
Changes from v1:
 - No changes
Changes from RFC:
 - No changes
---
 xen/arch/arm/arm64/vsysreg.c | 41 ++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 758750983c11..fe31f7b3827f 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -18,6 +18,8 @@
 
 #include <xen/sched.h>
 
+#include <asm/arm64/cpufeature.h>
+#include <asm/arm64/sve.h>
 #include <asm/current.h>
 #include <asm/regs.h>
 #include <asm/traps.h>
@@ -295,7 +297,28 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(MVFR0_EL1, mvfr, 0)
     GENERATE_TID3_INFO(MVFR1_EL1, mvfr, 1)
     GENERATE_TID3_INFO(MVFR2_EL1, mvfr, 2)
-    GENERATE_TID3_INFO(ID_AA64PFR0_EL1, pfr64, 0)
+
+    case HSR_SYSREG_ID_AA64PFR0_EL1:
+    {
+        register_t guest_reg_value = guest_cpuinfo.pfr64.bits[0];
+
+        if ( is_sve_domain(v->domain) )
+        {
+            /* 4 is the SVE field width in id_aa64pfr0_el1 */
+            uint64_t mask = GENMASK(ID_AA64PFR0_SVE_SHIFT + 4 - 1,
+                                    ID_AA64PFR0_SVE_SHIFT);
+            /* sysval is the sve field on the system */
+            uint64_t sysval = cpuid_feature_extract_unsigned_field_width(
+                                system_cpuinfo.pfr64.bits[0],
+                                ID_AA64PFR0_SVE_SHIFT, 4);
+            guest_reg_value &= ~mask;
+            guest_reg_value |= (sysval << ID_AA64PFR0_SVE_SHIFT) & mask;
+        }
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
+
     GENERATE_TID3_INFO(ID_AA64PFR1_EL1, pfr64, 1)
     GENERATE_TID3_INFO(ID_AA64DFR0_EL1, dbg64, 0)
     GENERATE_TID3_INFO(ID_AA64DFR1_EL1, dbg64, 1)
@@ -306,7 +329,21 @@ void do_sysreg(struct cpu_user_regs *regs,
     GENERATE_TID3_INFO(ID_AA64MMFR2_EL1, mm64, 2)
     GENERATE_TID3_INFO(ID_AA64AFR0_EL1, aux64, 0)
     GENERATE_TID3_INFO(ID_AA64AFR1_EL1, aux64, 1)
-    GENERATE_TID3_INFO(ID_AA64ZFR0_EL1, zfr64, 0)
+
+    case HSR_SYSREG_ID_AA64ZFR0_EL1:
+    {
+        /*
+         * When the guest has the SVE feature enabled, the whole id_aa64zfr0_el1
+         * needs to be exposed.
+         */
+        register_t guest_reg_value = guest_cpuinfo.zfr64.bits[0];
+
+        if ( is_sve_domain(v->domain) )
+            guest_reg_value = system_cpuinfo.zfr64.bits[0];
+
+        return handle_ro_read_val(regs, regidx, hsr.sysreg.read, hsr, 1,
+                                  guest_reg_value);
+    }
 
     /*
      * Those cases are catching all Reserved registers trapped by TID3 which
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541555.844456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC9-0005vO-PS; Wed, 31 May 2023 07:24:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541555.844456; Wed, 31 May 2023 07:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GC9-0005un-GE; Wed, 31 May 2023 07:24:33 +0000
Received: by outflank-mailman (input) for mailman id 541555;
 Wed, 31 May 2023 07:24:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GC8-0005os-Ey
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:32 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 2de467be-ff84-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 09:24:29 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 39258165C;
 Wed, 31 May 2023 00:25:14 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DBF0F3F663;
 Wed, 31 May 2023 00:24:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2de467be-ff84-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v8 04/12] xen/arm: add SVE exception class handling
Date: Wed, 31 May 2023 08:24:05 +0100
Message-Id: <20230531072413.868673-5-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

SVE introduces a new exception class with code 0x19; add the new code and
handle the exception.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
Changes from v6:
 - Add R-by Julien
Changes from v5:
 - modified error messages (Julien)
 - add R-by Bertrand
Changes from v4:
 - No changes
Changes from v3:
 - No changes
Changes from v2:
 - No changes
Changes from v1:
 - No changes
Changes from RFC:
 - No changes
---
 xen/arch/arm/include/asm/processor.h | 1 +
 xen/arch/arm/traps.c                 | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/xen/arch/arm/include/asm/processor.h b/xen/arch/arm/include/asm/processor.h
index bc683334125c..7e42ff8811fc 100644
--- a/xen/arch/arm/include/asm/processor.h
+++ b/xen/arch/arm/include/asm/processor.h
@@ -426,6 +426,7 @@
 #define HSR_EC_HVC64                0x16
 #define HSR_EC_SMC64                0x17
 #define HSR_EC_SYSREG               0x18
+#define HSR_EC_SVE                  0x19
 #endif
 #define HSR_EC_INSTR_ABORT_LOWER_EL 0x20
 #define HSR_EC_INSTR_ABORT_CURR_EL  0x21
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 3393e10b52e6..f6437f6aa9c9 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2172,6 +2172,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
         perfc_incr(trap_sysreg);
         do_sysreg(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        GUEST_BUG_ON(regs_mode_is_32bit(regs));
+        gprintk(XENLOG_WARNING, "Domain tried to use SVE while not allowed\n");
+        inject_undef_exception(regs, hsr);
+        break;
 #endif
 
     case HSR_EC_INSTR_ABORT_LOWER_EL:
@@ -2201,6 +2206,10 @@ void do_trap_hyp_sync(struct cpu_user_regs *regs)
     case HSR_EC_BRK:
         do_trap_brk(regs, hsr);
         break;
+    case HSR_EC_SVE:
+        /* An SVE exception is a bug somewhere in hypervisor code */
+        do_unexpected_trap("SVE trap at EL2", regs);
+        break;
 #endif
     case HSR_EC_DATA_ABORT_CURR_EL:
     case HSR_EC_INSTR_ABORT_CURR_EL:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541556.844470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCB-0006MN-4u; Wed, 31 May 2023 07:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541556.844470; Wed, 31 May 2023 07:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCA-0006Ke-Vf; Wed, 31 May 2023 07:24:34 +0000
Received: by outflank-mailman (input) for mailman id 541556;
 Wed, 31 May 2023 07:24:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GC9-0005os-Ey
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:33 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 2e9ccefd-ff84-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 09:24:30 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6D2FF16F2;
 Wed, 31 May 2023 00:25:15 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1BEBA3F663;
 Wed, 31 May 2023 00:24:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e9ccefd-ff84-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v8 05/12] arm/sve: save/restore SVE context switch
Date: Wed, 31 May 2023 08:24:06 +0100
Message-Id: <20230531072413.868673-6-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Save/restore the SVE context on context switch: allocate memory to hold
the Z0-Z31 registers, whose length is at most 2048 bits each, and FFR,
which can be at most 256 bits. The amount of memory allocated depends on
the vector length configured for the domain and on the maximum vector
length supported by the platform.

Save P0-P15, whose length is at most 256 bits each, into the fpregs
field of struct vfp_state: since V0-V31 overlap the low bits of Z0-Z31,
that space would otherwise be unused for an SVE domain.

Create zcr_el{1,2} fields in arch_vcpu: initialise zcr_el2 on vcpu
creation from the requested vector length and restore it on context
switch; save/restore the ZCR_EL1 value as well.

List the macros imported from Linux in README.LinuxPrimitives.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
Changes from v7:
 - Fixed comments for sve_context_init and sve_context_free,
   protect arch.vfp.sve_zreg_ctx_end, arch.zcr_el1, arch.zcr_el2
   with ifdefs. (Julien)
 - Given the changes, dropped Bertrand's R-by
Changes from v6:
 - Add comment for explain why sve_save/sve_load are different from
   Linux, add macros in xen/arch/arm/README.LinuxPrimitives (Julien)
 - Add comments in sve_context_init and sve_context_free, handle the
   case where sve_zreg_ctx_end is NULL, move setting of v->arch.zcr_el2
   in sve_context_init (Julien)
 - remove stubs for sve_context_* and sve_save_* and rely on compiler
   DCE (Jan)
 - Add comments for sve_save_ctx/sve_load_ctx (Julien)
Changes from v5:
 - use XFREE instead of xfree, keep the headers (Julien)
 - Avoid math computation for every save/restore, store the computation
   in struct vfp_state once (Bertrand)
 - protect access to v->domain->arch.sve_vl inside arch_vcpu_create now
   that sve_vl is available only on arm64
Changes from v4:
 - No changes
Changes from v3:
 - don't use fixed len types when not needed (Jan)
 - now VL is an encoded value, decode it before using.
Changes from v2:
 - No changes
Changes from v1:
 - No changes
Changes from RFC:
 - Moved zcr_el2 field introduction in this patch, restore its
   content inside sve_restore_state function. (Julien)
---
 xen/arch/arm/README.LinuxPrimitives      |   4 +-
 xen/arch/arm/arm64/sve-asm.S             | 147 +++++++++++++++++++++++
 xen/arch/arm/arm64/sve.c                 |  91 ++++++++++++++
 xen/arch/arm/arm64/vfp.c                 |  79 ++++++------
 xen/arch/arm/domain.c                    |   6 +
 xen/arch/arm/include/asm/arm64/sve.h     |   4 +
 xen/arch/arm/include/asm/arm64/sysregs.h |   3 +
 xen/arch/arm/include/asm/arm64/vfp.h     |  16 +++
 xen/arch/arm/include/asm/domain.h        |   5 +
 9 files changed, 320 insertions(+), 35 deletions(-)

diff --git a/xen/arch/arm/README.LinuxPrimitives b/xen/arch/arm/README.LinuxPrimitives
index 76c8df29e416..301c0271bbe4 100644
--- a/xen/arch/arm/README.LinuxPrimitives
+++ b/xen/arch/arm/README.LinuxPrimitives
@@ -69,7 +69,9 @@ SVE assembly macro: last sync @ v6.3.0 (last commit: 457391b03803)
 linux/arch/arm64/include/asm/fpsimdmacros.h   xen/arch/arm/include/asm/arm64/sve-asm.S
 
 The following macros were taken from Linux:
-    _check_general_reg, _check_num, _sve_rdvl
+    _check_general_reg, _check_num, _sve_rdvl, __for, _for, _sve_check_zreg,
+    _sve_check_preg, _sve_str_v, _sve_ldr_v, _sve_str_p, _sve_ldr_p, _sve_rdffr,
+    _sve_wrffr
 
 =====================================================================
 arm32
diff --git a/xen/arch/arm/arm64/sve-asm.S b/xen/arch/arm/arm64/sve-asm.S
index 4d1549344733..59dbefbbb252 100644
--- a/xen/arch/arm/arm64/sve-asm.S
+++ b/xen/arch/arm/arm64/sve-asm.S
@@ -17,6 +17,18 @@
     .endif
 .endm
 
+.macro _sve_check_zreg znr
+    .if (\znr) < 0 || (\znr) > 31
+        .error "Bad Scalable Vector Extension vector register number \znr."
+    .endif
+.endm
+
+.macro _sve_check_preg pnr
+    .if (\pnr) < 0 || (\pnr) > 15
+        .error "Bad Scalable Vector Extension predicate register number \pnr."
+    .endif
+.endm
+
 .macro _check_num n, min, max
     .if (\n) < (\min) || (\n) > (\max)
         .error "Number \n out of range [\min,\max]"
@@ -26,6 +38,54 @@
 /* SVE instruction encodings for non-SVE-capable assemblers */
 /* (pre binutils 2.28, all kernel capable clang versions support SVE) */
 
+/* STR (vector): STR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (vector): LDR Z\nz, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_v nz, nxbase, offset=0
+    _sve_check_zreg \nz
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85804000                \
+        | (\nz)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* STR (predicate): STR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_str_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0xe5800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
+/* LDR (predicate): LDR P\np, [X\nxbase, #\offset, MUL VL] */
+.macro _sve_ldr_p np, nxbase, offset=0
+    _sve_check_preg \np
+    _check_general_reg \nxbase
+    _check_num (\offset), -0x100, 0xff
+    .inst 0x85800000                \
+        | (\np)                     \
+        | ((\nxbase) << 5)          \
+        | (((\offset) & 7) << 10)   \
+        | (((\offset) & 0x1f8) << 13)
+.endm
+
 /* RDVL X\nx, #\imm */
 .macro _sve_rdvl nx, imm
     _check_general_reg \nx
@@ -35,11 +95,98 @@
         | (((\imm) & 0x3f) << 5)
 .endm
 
+/* RDFFR (unpredicated): RDFFR P\np.B */
+.macro _sve_rdffr np
+    _sve_check_preg \np
+    .inst 0x2519f000                \
+        | (\np)
+.endm
+
+/* WRFFR P\np.B */
+.macro _sve_wrffr np
+    _sve_check_preg \np
+    .inst 0x25289000                \
+        | ((\np) << 5)
+.endm
+
+.macro __for from:req, to:req
+    .if (\from) == (\to)
+        _for__body %\from
+    .else
+        __for %\from, %((\from) + ((\to) - (\from)) / 2)
+        __for %((\from) + ((\to) - (\from)) / 2 + 1), %\to
+    .endif
+.endm
+
+.macro _for var:req, from:req, to:req, insn:vararg
+    .macro _for__body \var:req
+        .noaltmacro
+        \insn
+        .altmacro
+    .endm
+
+    .altmacro
+    __for \from, \to
+    .noaltmacro
+
+    .purgem _for__body
+.endm
+
+/*
+ * sve_save and sve_load differ from the Linux versions because Xen lays out
+ * its context buffers differently and, for example, Linux uses these macros
+ * to also save/restore fpsr and fpcr, while Xen does that in C.
+ */
+
+.macro sve_save nxzffrctx, nxpctx, save_ffr
+    _for n, 0, 31, _sve_str_v \n, \nxzffrctx, \n - 32
+    _for n, 0, 15, _sve_str_p \n, \nxpctx, \n
+        cbz \save_ffr, 1f
+        _sve_rdffr 0
+        _sve_str_p 0, \nxzffrctx
+        _sve_ldr_p 0, \nxpctx
+        b 2f
+1:
+        str xzr, [x\nxzffrctx]      // Zero out FFR
+2:
+.endm
+
+.macro sve_load nxzffrctx, nxpctx, restore_ffr
+    _for n, 0, 31, _sve_ldr_v \n, \nxzffrctx, \n - 32
+        cbz \restore_ffr, 1f
+        _sve_ldr_p 0, \nxzffrctx
+        _sve_wrffr 0
+1:
+    _for n, 0, 15, _sve_ldr_p \n, \nxpctx, \n
+.endm
+
 /* Gets the current vector register size in bytes */
 GLOBAL(sve_get_hw_vl)
     _sve_rdvl 0, 1
     ret
 
+/*
+ * Save the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Save FFR if non-zero
+ */
+GLOBAL(sve_save_ctx)
+    sve_save 0, 1, x2
+    ret
+
+/*
+ * Load the SVE context
+ *
+ * x0 - pointer to buffer for Z0-31 + FFR
+ * x1 - pointer to buffer for P0-15
+ * x2 - Restore FFR if non-zero
+ */
+GLOBAL(sve_load_ctx)
+    sve_load 0, 1, x2
+    ret
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index a9144e48ef6b..56d8f27ea26a 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -5,6 +5,7 @@
  * Copyright (C) 2022 ARM Ltd.
  */
 
+#include <xen/sizes.h>
 #include <xen/types.h>
 #include <asm/arm64/sve.h>
 #include <asm/arm64/sysregs.h>
@@ -14,6 +15,25 @@
 
 extern unsigned int sve_get_hw_vl(void);
 
+/*
+ * Save the SVE context
+ *
+ * sve_ctx - pointer to buffer for Z0-31 + FFR
+ * pregs - pointer to buffer for P0-15
+ * save_ffr - Save FFR if non-zero
+ */
+extern void sve_save_ctx(uint64_t *sve_ctx, uint64_t *pregs, int save_ffr);
+
+/*
+ * Load the SVE context
+ *
+ * sve_ctx - pointer to buffer for Z0-31 + FFR
+ * pregs - pointer to buffer for P0-15
+ * restore_ffr - Restore FFR if non-zero
+ */
+extern void sve_load_ctx(uint64_t const *sve_ctx, uint64_t const *pregs,
+                         int restore_ffr);
+
 /* Takes a vector length in bits and returns the ZCR_ELx encoding */
 static inline register_t vl_to_zcr(unsigned int vl)
 {
@@ -21,6 +41,21 @@ static inline register_t vl_to_zcr(unsigned int vl)
     return ((vl / SVE_VL_MULTIPLE_VAL) - 1U) & ZCR_ELx_LEN_MASK;
 }
 
+static inline unsigned int sve_zreg_ctx_size(unsigned int vl)
+{
+    /*
+     * The Z0-Z31 save area size in bytes is computed from VL, which is in
+     * bits: each of the 32 registers is VL/8 bytes.
+     */
+    return (vl / 8U) * 32U;
+}
+
+static inline unsigned int sve_ffrreg_ctx_size(unsigned int vl)
+{
+    /* FFR register size is VL/8 bits, i.e. (VL/8)/8 = VL/64 bytes */
+    return (vl / 64U);
+}
+
 register_t compute_max_zcr(void)
 {
     register_t cptr_bits = get_default_cptr_flags();
@@ -61,6 +96,62 @@ unsigned int get_sys_vl_len(void)
             SVE_VL_MULTIPLE_VAL;
 }
 
+int sve_context_init(struct vcpu *v)
+{
+    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
+                             sve_ffrreg_ctx_size(sve_vl_bits),
+                             L1_CACHE_BYTES);
+
+    if ( !ctx )
+        return -ENOMEM;
+
+    /*
+     * Points to the end of Z0-Z31 memory, just before FFR memory, to be kept in
+     * sync with sve_context_free().
+     */
+    v->arch.vfp.sve_zreg_ctx_end = ctx +
+        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
+
+    v->arch.zcr_el2 = vl_to_zcr(sve_vl_bits);
+
+    return 0;
+}
+
+void sve_context_free(struct vcpu *v)
+{
+    unsigned int sve_vl_bits;
+
+    if ( !v->arch.vfp.sve_zreg_ctx_end )
+        return;
+
+    sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
+
+    /*
+     * Currently points to the end of Z0-Z31 memory, which is not the start of
+     * the buffer. To be kept in sync with the sve_context_init().
+     */
+    v->arch.vfp.sve_zreg_ctx_end -=
+        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
+
+    XFREE(v->arch.vfp.sve_zreg_ctx_end);
+}
+
+void sve_save_state(struct vcpu *v)
+{
+    v->arch.zcr_el1 = READ_SYSREG(ZCR_EL1);
+
+    sve_save_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
+}
+
+void sve_restore_state(struct vcpu *v)
+{
+    WRITE_SYSREG(v->arch.zcr_el1, ZCR_EL1);
+    WRITE_SYSREG(v->arch.zcr_el2, ZCR_EL2);
+
+    sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index 47885e76baae..2d0d7c2e6ddb 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -2,29 +2,35 @@
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
 #include <asm/vfp.h>
+#include <asm/arm64/sve.h>
 
 void vfp_save_state(struct vcpu *v)
 {
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
-                 "stp q2, q3, [%1, #16 * 2]\n\t"
-                 "stp q4, q5, [%1, #16 * 4]\n\t"
-                 "stp q6, q7, [%1, #16 * 6]\n\t"
-                 "stp q8, q9, [%1, #16 * 8]\n\t"
-                 "stp q10, q11, [%1, #16 * 10]\n\t"
-                 "stp q12, q13, [%1, #16 * 12]\n\t"
-                 "stp q14, q15, [%1, #16 * 14]\n\t"
-                 "stp q16, q17, [%1, #16 * 16]\n\t"
-                 "stp q18, q19, [%1, #16 * 18]\n\t"
-                 "stp q20, q21, [%1, #16 * 20]\n\t"
-                 "stp q22, q23, [%1, #16 * 22]\n\t"
-                 "stp q24, q25, [%1, #16 * 24]\n\t"
-                 "stp q26, q27, [%1, #16 * 26]\n\t"
-                 "stp q28, q29, [%1, #16 * 28]\n\t"
-                 "stp q30, q31, [%1, #16 * 30]\n\t"
-                 : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_save_state(v);
+    else
+    {
+        asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
+                     "stp q2, q3, [%1, #16 * 2]\n\t"
+                     "stp q4, q5, [%1, #16 * 4]\n\t"
+                     "stp q6, q7, [%1, #16 * 6]\n\t"
+                     "stp q8, q9, [%1, #16 * 8]\n\t"
+                     "stp q10, q11, [%1, #16 * 10]\n\t"
+                     "stp q12, q13, [%1, #16 * 12]\n\t"
+                     "stp q14, q15, [%1, #16 * 14]\n\t"
+                     "stp q16, q17, [%1, #16 * 16]\n\t"
+                     "stp q18, q19, [%1, #16 * 18]\n\t"
+                     "stp q20, q21, [%1, #16 * 20]\n\t"
+                     "stp q22, q23, [%1, #16 * 22]\n\t"
+                     "stp q24, q25, [%1, #16 * 24]\n\t"
+                     "stp q26, q27, [%1, #16 * 26]\n\t"
+                     "stp q28, q29, [%1, #16 * 28]\n\t"
+                     "stp q30, q31, [%1, #16 * 30]\n\t"
+                     : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
+    }
 
     v->arch.vfp.fpsr = READ_SYSREG(FPSR);
     v->arch.vfp.fpcr = READ_SYSREG(FPCR);
@@ -37,23 +43,28 @@ void vfp_restore_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
-                 "ldp q2, q3, [%1, #16 * 2]\n\t"
-                 "ldp q4, q5, [%1, #16 * 4]\n\t"
-                 "ldp q6, q7, [%1, #16 * 6]\n\t"
-                 "ldp q8, q9, [%1, #16 * 8]\n\t"
-                 "ldp q10, q11, [%1, #16 * 10]\n\t"
-                 "ldp q12, q13, [%1, #16 * 12]\n\t"
-                 "ldp q14, q15, [%1, #16 * 14]\n\t"
-                 "ldp q16, q17, [%1, #16 * 16]\n\t"
-                 "ldp q18, q19, [%1, #16 * 18]\n\t"
-                 "ldp q20, q21, [%1, #16 * 20]\n\t"
-                 "ldp q22, q23, [%1, #16 * 22]\n\t"
-                 "ldp q24, q25, [%1, #16 * 24]\n\t"
-                 "ldp q26, q27, [%1, #16 * 26]\n\t"
-                 "ldp q28, q29, [%1, #16 * 28]\n\t"
-                 "ldp q30, q31, [%1, #16 * 30]\n\t"
-                 : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    if ( is_sve_domain(v->domain) )
+        sve_restore_state(v);
+    else
+    {
+        asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
+                     "ldp q2, q3, [%1, #16 * 2]\n\t"
+                     "ldp q4, q5, [%1, #16 * 4]\n\t"
+                     "ldp q6, q7, [%1, #16 * 6]\n\t"
+                     "ldp q8, q9, [%1, #16 * 8]\n\t"
+                     "ldp q10, q11, [%1, #16 * 10]\n\t"
+                     "ldp q12, q13, [%1, #16 * 12]\n\t"
+                     "ldp q14, q15, [%1, #16 * 14]\n\t"
+                     "ldp q16, q17, [%1, #16 * 16]\n\t"
+                     "ldp q18, q19, [%1, #16 * 18]\n\t"
+                     "ldp q20, q21, [%1, #16 * 20]\n\t"
+                     "ldp q22, q23, [%1, #16 * 22]\n\t"
+                     "ldp q24, q25, [%1, #16 * 24]\n\t"
+                     "ldp q26, q27, [%1, #16 * 26]\n\t"
+                     "ldp q28, q29, [%1, #16 * 28]\n\t"
+                     "ldp q30, q31, [%1, #16 * 30]\n\t"
+                     : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
+    }
 
     WRITE_SYSREG(v->arch.vfp.fpsr, FPSR);
     WRITE_SYSREG(v->arch.vfp.fpcr, FPCR);
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 6c22551b0ed2..add9929b7943 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -557,7 +557,11 @@ int arch_vcpu_create(struct vcpu *v)
 
     v->arch.cptr_el2 = get_default_cptr_flags();
     if ( is_sve_domain(v->domain) )
+    {
+        if ( (rc = sve_context_init(v)) != 0 )
+            goto fail;
         v->arch.cptr_el2 &= ~HCPTR_CP(8);
+    }
 
     v->arch.hcr_el2 = get_default_hcr_flags();
 
@@ -587,6 +591,8 @@ fail:
 
 void arch_vcpu_destroy(struct vcpu *v)
 {
+    if ( is_sve_domain(v->domain) )
+        sve_context_free(v);
     vcpu_timer_destroy(v);
     vcpu_vgic_free(v);
     free_xenheap_pages(v->arch.stack, STACK_ORDER);
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 4b63412727fc..65b46685d263 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -22,6 +22,10 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
 }
 
 register_t compute_max_zcr(void);
+int sve_context_init(struct vcpu *v);
+void sve_context_free(struct vcpu *v);
+void sve_save_state(struct vcpu *v);
+void sve_restore_state(struct vcpu *v);
 
 #ifdef CONFIG_ARM64_SVE
 
diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
index 4cabb9eb4d5e..3fdeb9d8cdef 100644
--- a/xen/arch/arm/include/asm/arm64/sysregs.h
+++ b/xen/arch/arm/include/asm/arm64/sysregs.h
@@ -88,6 +88,9 @@
 #ifndef ID_AA64ISAR2_EL1
 #define ID_AA64ISAR2_EL1            S3_0_C0_C6_2
 #endif
+#ifndef ZCR_EL1
+#define ZCR_EL1                     S3_0_C1_C2_0
+#endif
 
 /* ID registers (imported from arm64/include/asm/sysreg.h in Linux) */
 
diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
index e6e8c363bc16..4b399ccbfb31 100644
--- a/xen/arch/arm/include/asm/arm64/vfp.h
+++ b/xen/arch/arm/include/asm/arm64/vfp.h
@@ -6,7 +6,23 @@
 
 struct vfp_state
 {
+    /*
+     * When SVE is enabled for the guest, fpregs memory will be used to
+     * save/restore P0-P15 registers, otherwise it will be used for the V0-V31
+     * registers.
+     */
     uint64_t fpregs[64] __vfp_aligned;
+
+#ifdef CONFIG_ARM64_SVE
+    /*
+     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
+     * where Z0-Z31 registers and FFR can be saved/restored, it points at the
+     * end of the Z0-Z31 space and at the beginning of the FFR space, it's done
+     * like that to ease the save/restore assembly operations.
+     */
+    uint64_t *sve_zreg_ctx_end;
+#endif
+
     register_t fpcr;
     register_t fpexc32_el2;
     register_t fpsr;
diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 331da0f3bcc3..99e798ffff68 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -195,6 +195,11 @@ struct arch_vcpu
     register_t tpidrro_el0;
 
     /* HYP configuration */
+#ifdef CONFIG_ARM64_SVE
+    register_t zcr_el1;
+    register_t zcr_el2;
+#endif
+
     register_t cptr_el2;
     register_t hcr_el2;
     register_t mdcr_el2;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541557.844476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCB-0006SH-K6; Wed, 31 May 2023 07:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541557.844476; Wed, 31 May 2023 07:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCB-0006Ra-A0; Wed, 31 May 2023 07:24:35 +0000
Received: by outflank-mailman (input) for mailman id 541557;
 Wed, 31 May 2023 07:24:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GCA-0005os-F7
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:34 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 2f7d01f9-ff84-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 09:24:31 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 175C21042;
 Wed, 31 May 2023 00:25:17 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 508E03F663;
 Wed, 31 May 2023 00:24:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f7d01f9-ff84-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v8 06/12] xen/common: add dom0 xen command line argument for Arm
Date: Wed, 31 May 2023 08:24:07 +0100
Message-Id: <20230531072413.868673-7-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, x86 defines a Xen command line argument dom0=<list> through
which dom0 controlling sub-options can be specified. To make it usable
on Arm as well, move the code that loops through the list of arguments
from x86 to common code and, from there, call architecture specific
functions to handle the comma separated sub-options.

No functional changes are intended.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v5:
 - Add Bertrand R-by
Changes from v4:
 - return EINVAL in Arm implementation of parse_arch_dom0_param,
   shorten the variable names in the function from str_begin, str_end to
   s, e. Removed variable rc from x86 parse_arch_dom0_param
   implementation. (Jan)
 - Add R-By Jan
Changes from v3:
 - new patch
---
 xen/arch/arm/domain_build.c |  5 ++++
 xen/arch/x86/dom0_build.c   | 48 ++++++++++++++-----------------------
 xen/common/domain.c         | 23 ++++++++++++++++++
 xen/include/xen/domain.h    |  1 +
 4 files changed, 47 insertions(+), 30 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c46f19208dcb..e2f063991423 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -60,6 +60,11 @@ static int __init parse_dom0_mem(const char *s)
 }
 custom_param("dom0_mem", parse_dom0_mem);
 
+int __init parse_arch_dom0_param(const char *s, const char *e)
+{
+    return -EINVAL;
+}
+
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 79234f18ff01..9f5300a3efbb 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -266,42 +266,30 @@ bool __initdata opt_dom0_pvh = !IS_ENABLED(CONFIG_PV);
 bool __initdata opt_dom0_verbose = IS_ENABLED(CONFIG_VERBOSE_DEBUG);
 bool __initdata opt_dom0_msr_relaxed;
 
-static int __init cf_check parse_dom0_param(const char *s)
+int __init parse_arch_dom0_param(const char *s, const char *e)
 {
-    const char *ss;
-    int rc = 0;
+    int val;
 
-    do {
-        int val;
-
-        ss = strchr(s, ',');
-        if ( !ss )
-            ss = strchr(s, '\0');
-
-        if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
-            opt_dom0_pvh = false;
-        else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
-            opt_dom0_pvh = true;
+    if ( IS_ENABLED(CONFIG_PV) && !cmdline_strcmp(s, "pv") )
+        opt_dom0_pvh = false;
+    else if ( IS_ENABLED(CONFIG_HVM) && !cmdline_strcmp(s, "pvh") )
+        opt_dom0_pvh = true;
 #ifdef CONFIG_SHADOW_PAGING
-        else if ( (val = parse_boolean("shadow", s, ss)) >= 0 )
-            opt_dom0_shadow = val;
+    else if ( (val = parse_boolean("shadow", s, e)) >= 0 )
+        opt_dom0_shadow = val;
 #endif
-        else if ( (val = parse_boolean("verbose", s, ss)) >= 0 )
-            opt_dom0_verbose = val;
-        else if ( IS_ENABLED(CONFIG_PV) &&
-                  (val = parse_boolean("cpuid-faulting", s, ss)) >= 0 )
-            opt_dom0_cpuid_faulting = val;
-        else if ( (val = parse_boolean("msr-relaxed", s, ss)) >= 0 )
-            opt_dom0_msr_relaxed = val;
-        else
-            rc = -EINVAL;
-
-        s = ss + 1;
-    } while ( *ss );
+    else if ( (val = parse_boolean("verbose", s, e)) >= 0 )
+        opt_dom0_verbose = val;
+    else if ( IS_ENABLED(CONFIG_PV) &&
+              (val = parse_boolean("cpuid-faulting", s, e)) >= 0 )
+        opt_dom0_cpuid_faulting = val;
+    else if ( (val = parse_boolean("msr-relaxed", s, e)) >= 0 )
+        opt_dom0_msr_relaxed = val;
+    else
+        return -EINVAL;
 
-    return rc;
+    return 0;
 }
-custom_param("dom0", parse_dom0_param);
 
 static char __initdata opt_dom0_ioports_disable[200] = "";
 string_param("dom0_ioports_disable", opt_dom0_ioports_disable);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 6a440590fe2a..caaa40263792 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -364,6 +364,29 @@ static int __init cf_check parse_extra_guest_irqs(const char *s)
 }
 custom_param("extra_guest_irqs", parse_extra_guest_irqs);
 
+static int __init cf_check parse_dom0_param(const char *s)
+{
+    const char *ss;
+    int rc = 0;
+
+    do {
+        int ret;
+
+        ss = strchr(s, ',');
+        if ( !ss )
+            ss = strchr(s, '\0');
+
+        ret = parse_arch_dom0_param(s, ss);
+        if ( ret && !rc )
+            rc = ret;
+
+        s = ss + 1;
+    } while ( *ss );
+
+    return rc;
+}
+custom_param("dom0", parse_dom0_param);
+
 /*
  * Release resources held by a domain.  There may or may not be live
  * references to the domain, and it may or may not be fully constructed.
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 26f9c4f6dd5b..1df8f933d076 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -16,6 +16,7 @@ typedef union {
 struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id);
 
 unsigned int dom0_max_vcpus(void);
+int parse_arch_dom0_param(const char *s, const char *e);
 struct vcpu *alloc_dom0_vcpu0(struct domain *dom0);
 
 int vcpu_reset(struct vcpu *);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541558.844489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCE-0006z7-1S; Wed, 31 May 2023 07:24:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541558.844489; Wed, 31 May 2023 07:24:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCD-0006yy-U5; Wed, 31 May 2023 07:24:37 +0000
Received: by outflank-mailman (input) for mailman id 541558;
 Wed, 31 May 2023 07:24:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GCB-0005os-R5
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:35 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 306acae4-ff84-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 09:24:33 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9B9871063;
 Wed, 31 May 2023 00:25:18 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id EF2373F663;
 Wed, 31 May 2023 00:24:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 306acae4-ff84-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v8 07/12] xen: enable Dom0 to use SVE feature
Date: Wed, 31 May 2023 08:24:08 +0100
Message-Id: <20230531072413.868673-8-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a command line parameter to allow Dom0 the use of SVE resources:
the command line parameter sve=<integer>, a sub-option of dom0=,
controls the feature on this domain and sets the maximum SVE vector
length for Dom0.

Add a new function, parse_signed_integer(), to parse an integer
command line argument.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com> # !arm
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v7:
 - Fixed documentation, add R-by Jan and Bertrand
Changes from v6:
 - Fixed case for e==NULL in parse_signed_integer, drop parenthesis
   from if conditions, delete inline sve_domctl_vl_param and rely on
   DCE from the compiler (Jan)
 - Drop parenthesis from opt_dom0_sve (Julien)
 - Do not continue if 'sve' is in command line args but
   CONFIG_ARM64_SVE is not selected:
   https://lore.kernel.org/all/7614AE25-F59D-430A-9C3E-30B1CE0E1580@arm.com/
Changes from v5:
 - stop the domain if VL error occurs (Julien, Bertrand)
 - update the documentation
 - Rename sve_sanitize_vl_param to sve_domctl_vl_param to
   mark the fact that we are sanitizing a parameter coming from
   the user before encoding it into sve_vl in domctl structure.
   (suggestion from Bertrand in a separate discussion)
 - update comment in parse_signed_integer, return boolean in
   sve_domctl_vl_param (Jan).
Changes from v4:
 - Negative values as user param means max supported HW VL (Jan)
 - update documentation, make use of no_config_param(), rename
   parse_integer into parse_signed_integer and take long long *,
   also put a comment on the -2 return condition, update
   declaration comment to reflect the modifications (Jan)
Changes from v3:
 - Don't use fixed len types when not needed (Jan)
 - renamed domainconfig_encode_vl to sve_encode_vl
 - Use a sub argument of dom0= to enable the feature (Jan)
 - Add parse_integer() function
Changes from v2:
 - xen_domctl_createdomain field has changed into sve_vl and its
   value now is the VL / 128; create a helper function for that.
Changes from v1:
 - No changes
Changes from RFC:
 - Changed docs to explain that the domain won't be created if the
   requested vector length is above the supported one from the
   platform.
---
 docs/misc/xen-command-line.pandoc    | 20 ++++++++++++++++++--
 xen/arch/arm/arm64/sve.c             | 20 ++++++++++++++++++++
 xen/arch/arm/domain_build.c          | 26 ++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm64/sve.h | 10 ++++++++++
 xen/common/kernel.c                  | 28 ++++++++++++++++++++++++++++
 xen/include/xen/lib.h                | 10 ++++++++++
 6 files changed, 112 insertions(+), 2 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e0b89b7d3319..4060ebdc5d76 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -777,9 +777,9 @@ Specify the bit width of the DMA heap.
 
 ### dom0
     = List of [ pv | pvh, shadow=<bool>, verbose=<bool>,
-                cpuid-faulting=<bool>, msr-relaxed=<bool> ]
+                cpuid-faulting=<bool>, msr-relaxed=<bool> ] (x86)
 
-    Applicability: x86
+    = List of [ sve=<integer> ] (Arm64)
 
 Controls for how dom0 is constructed on x86 systems.
 
@@ -838,6 +838,22 @@ Controls for how dom0 is constructed on x86 systems.
 
     If using this option is necessary to fix an issue, please report a bug.
 
+Enables features on dom0 on Arm systems.
+
+*   The `sve` integer parameter enables Arm SVE usage for Dom0 and sets the
+    maximum SVE vector length; the option is applicable only to Arm64 Dom0
+    kernels.
+    A value equal to 0 disables the feature; this is the default.
+    A value below 0 makes the feature use the maximum SVE vector length
+    supported by the hardware, if SVE is supported.
+    A value above 0 explicitly sets the maximum SVE vector length for Dom0;
+    allowed values are multiples of 128, from 128 up to a maximum of 2048.
+    Please note that if the user explicitly specifies a value above the
+    maximum SVE vector length supported by the hardware, domain creation
+    will fail and the system will stop; the same occurs if the option is
+    provided with a positive non-zero value but the platform doesn't
+    support SVE.
+
 ### dom0-cpuid
     = List of comma separated booleans
 
diff --git a/xen/arch/arm/arm64/sve.c b/xen/arch/arm/arm64/sve.c
index 56d8f27ea26a..23a9d0ba661c 100644
--- a/xen/arch/arm/arm64/sve.c
+++ b/xen/arch/arm/arm64/sve.c
@@ -13,6 +13,9 @@
 #include <asm/processor.h>
 #include <asm/system.h>
 
+/* opt_dom0_sve: allow Dom0 to use SVE and set maximum vector length. */
+int __initdata opt_dom0_sve;
+
 extern unsigned int sve_get_hw_vl(void);
 
 /*
@@ -152,6 +155,23 @@ void sve_restore_state(struct vcpu *v)
     sve_load_ctx(v->arch.vfp.sve_zreg_ctx_end, v->arch.vfp.fpregs, 1);
 }
 
+bool __init sve_domctl_vl_param(int val, unsigned int *out)
+{
+    /*
+     * A negative SVE parameter value means to use the maximum supported
+     * vector length; otherwise, if a positive value is provided, check
+     * that the vector length is a multiple of 128.
+     */
+    if ( val < 0 )
+        *out = get_sys_vl_len();
+    else if ( (val % SVE_VL_MULTIPLE_VAL) == 0 )
+        *out = val;
+    else
+        return false;
+
+    return true;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e2f063991423..14b42120a9b1 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -62,6 +62,22 @@ custom_param("dom0_mem", parse_dom0_mem);
 
 int __init parse_arch_dom0_param(const char *s, const char *e)
 {
+    long long val;
+
+    if ( !parse_signed_integer("sve", s, e, &val) )
+    {
+#ifdef CONFIG_ARM64_SVE
+        if ( (val >= INT_MIN) && (val <= INT_MAX) )
+            opt_dom0_sve = val;
+        else
+            printk(XENLOG_INFO "'sve=%lld' value out of range!\n", val);
+
+        return 0;
+#else
+        panic("'sve' property found, but CONFIG_ARM64_SVE not selected");
+#endif
+    }
+
     return -EINVAL;
 }
 
@@ -4134,6 +4150,16 @@ void __init create_dom0(void)
     if ( iommu_enabled )
         dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
 
+    if ( opt_dom0_sve )
+    {
+        unsigned int vl;
+
+        if ( sve_domctl_vl_param(opt_dom0_sve, &vl) )
+            dom0_cfg.arch.sve_vl = sve_encode_vl(vl);
+        else
+            panic("SVE vector length error\n");
+    }
+
     dom0 = domain_create(0, &dom0_cfg, CDF_privileged | CDF_directmap);
     if ( IS_ERR(dom0) )
         panic("Error creating domain 0 (rc = %ld)\n", PTR_ERR(dom0));
diff --git a/xen/arch/arm/include/asm/arm64/sve.h b/xen/arch/arm/include/asm/arm64/sve.h
index 65b46685d263..a71d6a295dcc 100644
--- a/xen/arch/arm/include/asm/arm64/sve.h
+++ b/xen/arch/arm/include/asm/arm64/sve.h
@@ -21,14 +21,22 @@ static inline unsigned int sve_decode_vl(unsigned int sve_vl)
     return sve_vl * SVE_VL_MULTIPLE_VAL;
 }
 
+static inline unsigned int sve_encode_vl(unsigned int sve_vl_bits)
+{
+    return sve_vl_bits / SVE_VL_MULTIPLE_VAL;
+}
+
 register_t compute_max_zcr(void);
 int sve_context_init(struct vcpu *v);
 void sve_context_free(struct vcpu *v);
 void sve_save_state(struct vcpu *v);
 void sve_restore_state(struct vcpu *v);
+bool sve_domctl_vl_param(int val, unsigned int *out);
 
 #ifdef CONFIG_ARM64_SVE
 
+extern int opt_dom0_sve;
+
 static inline bool is_sve_domain(const struct domain *d)
 {
     return d->arch.sve_vl > 0;
@@ -38,6 +46,8 @@ unsigned int get_sys_vl_len(void);
 
 #else /* !CONFIG_ARM64_SVE */
 
+#define opt_dom0_sve     0
+
 static inline bool is_sve_domain(const struct domain *d)
 {
     return false;
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index f7b1f65f373c..7cd00a4c999a 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -314,6 +314,34 @@ int parse_boolean(const char *name, const char *s, const char *e)
     return -1;
 }
 
+int __init parse_signed_integer(const char *name, const char *s, const char *e,
+                                long long *val)
+{
+    size_t slen, nlen;
+    const char *str;
+    long long pval;
+
+    slen = e ? ({ ASSERT(e >= s); e - s; }) : strlen(s);
+    nlen = strlen(name);
+
+    if ( !e )
+        e = s + slen;
+
+    /* Check that this is the name we're looking for and a value was provided */
+    if ( slen <= nlen || strncmp(s, name, nlen) || s[nlen] != '=' )
+        return -1;
+
+    pval = simple_strtoll(&s[nlen + 1], &str, 10);
+
+    /* Number not recognised */
+    if ( str != e )
+        return -2;
+
+    *val = pval;
+
+    return 0;
+}
+
 int cmdline_strcmp(const char *frag, const char *name)
 {
     for ( ; ; frag++, name++ )
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index e914ccade095..5343ee7a944a 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -94,6 +94,16 @@ int parse_bool(const char *s, const char *e);
  */
 int parse_boolean(const char *name, const char *s, const char *e);
 
+/**
+ * Given a specific name, parses a string of the form:
+ *   $NAME=<integer number>
+ * returning 0 and a value in val, for a recognised integer.
+ * Returns -1 if the name is not found (or on other errors), or -2 if the
+ * name is found but the number is not recognised.
+ */
+int parse_signed_integer(const char *name, const char *s, const char *e,
+                         long long *val);
+
 /**
  * Very similar to strcmp(), but will declare a match if the NUL in 'name'
  * lines up with comma, colon, semicolon or equals in 'frag'.  Designed for
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:38 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541559.844495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCE-00073p-HW; Wed, 31 May 2023 07:24:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541559.844495; Wed, 31 May 2023 07:24:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCE-00073A-AP; Wed, 31 May 2023 07:24:38 +0000
Received: by outflank-mailman (input) for mailman id 541559;
 Wed, 31 May 2023 07:24:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GCC-0005os-PF
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:36 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 31518958-ff84-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 09:24:34 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2AF7415BF;
 Wed, 31 May 2023 00:25:20 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7E90B3F663;
 Wed, 31 May 2023 00:24:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31518958-ff84-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v8 08/12] xen/physinfo: encode Arm SVE vector length in arch_capabilities
Date: Wed, 31 May 2023 08:24:09 +0100
Message-Id: <20230531072413.868673-9-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the Arm platform supports SVE, advertise the feature in the
arch_capabilities field of struct xen_sysctl_physinfo by encoding
the SVE vector length in it.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v5:
 - Add R-by from Bertrand
Changes from v4:
 - Write arch_capabilities from arch_do_physinfo instead of using
   stub functions (Jan)
Changes from v3:
 - domainconfig_encode_vl is now named sve_encode_vl
Changes from v2:
 - Remove XEN_SYSCTL_PHYSCAP_ARM_SVE_SHFT, use MASK_INSR and
   protect with ifdef XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK (Jan)
 - Use the helper function sve_arch_cap_physinfo to encode
   the VL into physinfo arch_capabilities field.
Changes from v1:
 - Use only arch_capabilities and some defines to encode SVE VL
   (Bertrand, Stefano, Jan)
Changes from RFC:
 - new patch
---
 xen/arch/arm/sysctl.c       | 4 ++++
 xen/include/public/sysctl.h | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/xen/arch/arm/sysctl.c b/xen/arch/arm/sysctl.c
index b0a78a8b10d0..e9a0661146e4 100644
--- a/xen/arch/arm/sysctl.c
+++ b/xen/arch/arm/sysctl.c
@@ -11,11 +11,15 @@
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/hypercall.h>
+#include <asm/arm64/sve.h>
 #include <public/sysctl.h>
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
 {
     pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm | XEN_SYSCTL_PHYSCAP_hap;
+
+    pi->arch_capabilities |= MASK_INSR(sve_encode_vl(get_sys_vl_len()),
+                                       XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK);
 }
 
 long arch_do_sysctl(struct xen_sysctl *sysctl,
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 2b24d6bfd00e..9d06e92d0f6a 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -94,6 +94,10 @@ struct xen_sysctl_tbuf_op {
 /* Max XEN_SYSCTL_PHYSCAP_* constant.  Used for ABI checking. */
 #define XEN_SYSCTL_PHYSCAP_MAX XEN_SYSCTL_PHYSCAP_gnttab_v2
 
+#if defined(__arm__) || defined(__aarch64__)
+#define XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK  (0x1FU)
+#endif
+
 struct xen_sysctl_physinfo {
     uint32_t threads_per_core;
     uint32_t cores_per_socket;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:24:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541560.844510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCG-0007bT-NZ; Wed, 31 May 2023 07:24:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541560.844510; Wed, 31 May 2023 07:24:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GCG-0007bM-Jz; Wed, 31 May 2023 07:24:40 +0000
Received: by outflank-mailman (input) for mailman id 541560;
 Wed, 31 May 2023 07:24:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iKzP=BU=arm.com=luca.fancellu@srs-se1.protection.inumbo.net>)
 id 1q4GCF-0005os-3R
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:24:39 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id 3261c531-ff84-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 09:24:36 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E52B21042;
 Wed, 31 May 2023 00:25:21 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.arm.com [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0E9F63F663;
 Wed, 31 May 2023 00:24:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3261c531-ff84-11ed-8611-37d641c3527e
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Christian Lindig <christian.lindig@cloud.com>
Subject: [PATCH v8 09/12] tools: add physinfo arch_capabilities handling for Arm
Date: Wed, 31 May 2023 08:24:10 +0100
Message-Id: <20230531072413.868673-10-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On Arm, the SVE vector length is encoded in the arch_capabilities field
of struct xen_sysctl_physinfo; make use of this field in the tools when
building for Arm.

Create header arm-arch-capabilities.h to handle the arch_capabilities
field of physinfo for Arm.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
Acked-by: Christian Lindig <christian.lindig@cloud.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
Changes from v7:
 - in xc.c, add "arm_sve_vl" entry to the python dict only on arm64.
   (Marek)
Changes from v6:
 - Fix licence header in arm-arch-capabilities.h, add R-by (Anthony)
Changes from v5:
 - no changes
Changes from v4:
 - Move arm-arch-capabilities.h into xen-tools/, add LIBXL_HAVE_,
   fixed python return type to I instead of i. (Anthony)
Changes from v3:
 - add Ack-by for the Golang bits (George)
 - add Ack-by for the OCaml tools (Christian)
 - now xen-tools/libs.h is named xen-tools/common-macros.h
 - changed commit message to explain why the header modification
   in python/xen/lowlevel/xc/xc.c
Changes from v2:
 - rename arm_arch_capabilities.h in arm-arch-capabilities.h, use
   MASK_EXTR.
 - Now arm-arch-capabilities.h needs MASK_EXTR macro, but it is
   defined in libxl_internal.h, it doesn't feel right to include
   that header so move MASK_EXTR into xen-tools/libs.h that is also
   included in libxl_internal.h
Changes from v1:
 - now SVE VL is encoded in arch_capabilities on Arm
Changes from RFC:
 - new patch
---
 tools/golang/xenlight/helpers.gen.go          |  2 +
 tools/golang/xenlight/types.gen.go            |  1 +
 tools/include/libxl.h                         |  6 +++
 .../include/xen-tools/arm-arch-capabilities.h | 28 +++++++++++++
 tools/include/xen-tools/common-macros.h       |  2 +
 tools/libs/light/libxl.c                      |  1 +
 tools/libs/light/libxl_internal.h             |  1 -
 tools/libs/light/libxl_types.idl              |  1 +
 tools/ocaml/libs/xc/xenctrl.ml                |  4 +-
 tools/ocaml/libs/xc/xenctrl.mli               |  4 +-
 tools/ocaml/libs/xc/xenctrl_stubs.c           |  8 ++--
 tools/python/xen/lowlevel/xc/xc.c             | 42 ++++++++++++++-----
 tools/xl/xl_info.c                            |  8 ++++
 13 files changed, 87 insertions(+), 21 deletions(-)
 create mode 100644 tools/include/xen-tools/arm-arch-capabilities.h

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 0a203d22321f..35397be2f9e2 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -3506,6 +3506,7 @@ x.CapVmtrace = bool(xc.cap_vmtrace)
 x.CapVpmu = bool(xc.cap_vpmu)
 x.CapGnttabV1 = bool(xc.cap_gnttab_v1)
 x.CapGnttabV2 = bool(xc.cap_gnttab_v2)
+x.ArchCapabilities = uint32(xc.arch_capabilities)
 
  return nil}
 
@@ -3540,6 +3541,7 @@ xc.cap_vmtrace = C.bool(x.CapVmtrace)
 xc.cap_vpmu = C.bool(x.CapVpmu)
 xc.cap_gnttab_v1 = C.bool(x.CapGnttabV1)
 xc.cap_gnttab_v2 = C.bool(x.CapGnttabV2)
+xc.arch_capabilities = C.uint32_t(x.ArchCapabilities)
 
  return nil
  }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index a7c17699f80e..3d968a496744 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -1079,6 +1079,7 @@ CapVmtrace bool
 CapVpmu bool
 CapGnttabV1 bool
 CapGnttabV2 bool
+ArchCapabilities uint32
 }
 
 type Connectorinfo struct {
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index cfa1a191318c..4fa09ff7635a 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -525,6 +525,12 @@
  */
 #define LIBXL_HAVE_PHYSINFO_CAP_GNTTAB 1
 
+/*
+ * LIBXL_HAVE_PHYSINFO_ARCH_CAPABILITIES indicates that libxl_physinfo has a
+ * arch_capabilities field.
+ */
+#define LIBXL_HAVE_PHYSINFO_ARCH_CAPABILITIES 1
+
 /*
  * LIBXL_HAVE_MAX_GRANT_VERSION indicates libxl_domain_build_info has a
  * max_grant_version field for setting the max grant table version per
diff --git a/tools/include/xen-tools/arm-arch-capabilities.h b/tools/include/xen-tools/arm-arch-capabilities.h
new file mode 100644
index 000000000000..3849e897925d
--- /dev/null
+++ b/tools/include/xen-tools/arm-arch-capabilities.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: LGPL-2.1-only */
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#ifndef ARM_ARCH_CAPABILITIES_H
+#define ARM_ARCH_CAPABILITIES_H
+
+#include <stdint.h>
+#include <xen/sysctl.h>
+
+#include <xen-tools/common-macros.h>
+
+static inline
+unsigned int arch_capabilities_arm_sve(unsigned int arch_capabilities)
+{
+#if defined(__aarch64__)
+    unsigned int sve_vl = MASK_EXTR(arch_capabilities,
+                                    XEN_SYSCTL_PHYSCAP_ARM_SVE_MASK);
+
+    /* Vector length is divided by 128 before storing it in arch_capabilities */
+    return sve_vl * 128U;
+#else
+    return 0;
+#endif
+}
+
+#endif /* ARM_ARCH_CAPABILITIES_H */
diff --git a/tools/include/xen-tools/common-macros.h b/tools/include/xen-tools/common-macros.h
index 76b55bf62085..d53b88182560 100644
--- a/tools/include/xen-tools/common-macros.h
+++ b/tools/include/xen-tools/common-macros.h
@@ -72,6 +72,8 @@
 #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 #endif
 
+#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
+
 #ifndef __must_check
 #define __must_check __attribute__((__warn_unused_result__))
 #endif
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index a0bf7d186f69..175d6dde0b80 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -409,6 +409,7 @@ int libxl_get_physinfo(libxl_ctx *ctx, libxl_physinfo *physinfo)
         !!(xcphysinfo.capabilities & XEN_SYSCTL_PHYSCAP_gnttab_v1);
     physinfo->cap_gnttab_v2 =
         !!(xcphysinfo.capabilities & XEN_SYSCTL_PHYSCAP_gnttab_v2);
+    physinfo->arch_capabilities = xcphysinfo.arch_capabilities;
 
     GC_FREE;
     return 0;
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 5244fde6239a..8aba3e138909 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -132,7 +132,6 @@
 
 #define DIV_ROUNDUP(n, d) (((n) + (d) - 1) / (d))
 
-#define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
 #define MASK_INSR(v, m) (((v) * ((m) & -(m))) & (m))
 
 #define LIBXL__LOGGING_ENABLED
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index c10292e0d7e3..fd31dacf7d5a 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -1133,6 +1133,7 @@ libxl_physinfo = Struct("physinfo", [
     ("cap_vpmu", bool),
     ("cap_gnttab_v1", bool),
     ("cap_gnttab_v2", bool),
+    ("arch_capabilities", uint32),
     ], dir=DIR_OUT)
 
 libxl_connectorinfo = Struct("connectorinfo", [
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index e4096bf92c1d..bf23ca50bb15 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -128,12 +128,10 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type arm_physinfo_cap_flag
-
 type x86_physinfo_cap_flag
 
 type arch_physinfo_cap_flags =
-  | ARM of arm_physinfo_cap_flag list
+  | ARM of int
   | X86 of x86_physinfo_cap_flag list
 
 type physinfo =
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index ef2254537430..ed1e28ea30a0 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -113,12 +113,10 @@ type physinfo_cap_flag =
   | CAP_Gnttab_v1
   | CAP_Gnttab_v2
 
-type arm_physinfo_cap_flag
-
 type x86_physinfo_cap_flag
 
 type arch_physinfo_cap_flags =
-  | ARM of arm_physinfo_cap_flag list
+  | ARM of int
   | X86 of x86_physinfo_cap_flag list
 
 type physinfo = {
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index f686db3124ee..a03da31f6f2c 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -851,13 +851,15 @@ CAMLprim value stub_xc_physinfo(value xch_val)
 	arch_cap_list = Tag_cons;
 
 	arch_cap_flags_tag = 1; /* tag x86 */
-#else
-	caml_failwith("Unhandled architecture");
-#endif
 
 	arch_cap_flags = caml_alloc_small(1, arch_cap_flags_tag);
 	Store_field(arch_cap_flags, 0, arch_cap_list);
 	Store_field(physinfo, 10, arch_cap_flags);
+#elif defined(__aarch64__)
+	Store_field(physinfo, 10, Val_int(c_physinfo.arch_capabilities));
+#else
+	caml_failwith("Unhandled architecture");
+#endif
 
 	CAMLreturn(physinfo);
 }
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 9728b34185ac..491e88977fd3 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -22,6 +22,7 @@
 #include <xen/hvm/hvm_info_table.h>
 #include <xen/hvm/params.h>
 
+#include <xen-tools/arm-arch-capabilities.h>
 #include <xen-tools/common-macros.h>
 
 /* Needed for Python versions earlier than 2.3. */
@@ -871,6 +872,7 @@ static PyObject *pyxc_physinfo(XcObject *self)
     const char *virtcap_names[] = { "hvm", "pv" };
     const unsigned virtcaps_bits[] = { XEN_SYSCTL_PHYSCAP_hvm,
                                        XEN_SYSCTL_PHYSCAP_pv };
+    PyObject *objret;
 
     if ( xc_physinfo(self->xc_handle, &pinfo) != 0 )
         return pyxc_error_to_exception(self->xc_handle);
@@ -897,17 +899,35 @@ static PyObject *pyxc_physinfo(XcObject *self)
     if ( p != virt_caps )
       *(p-1) = '\0';
 
-    return Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
-                            "nr_nodes",         pinfo.nr_nodes,
-                            "threads_per_core", pinfo.threads_per_core,
-                            "cores_per_socket", pinfo.cores_per_socket,
-                            "nr_cpus",          pinfo.nr_cpus,
-                            "total_memory",     pages_to_kib(pinfo.total_pages),
-                            "free_memory",      pages_to_kib(pinfo.free_pages),
-                            "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
-                            "cpu_khz",          pinfo.cpu_khz,
-                            "hw_caps",          cpu_cap,
-                            "virt_caps",        virt_caps);
+    objret = Py_BuildValue("{s:i,s:i,s:i,s:i,s:l,s:l,s:l,s:i,s:s,s:s}",
+                           "nr_nodes",         pinfo.nr_nodes,
+                           "threads_per_core", pinfo.threads_per_core,
+                           "cores_per_socket", pinfo.cores_per_socket,
+                           "nr_cpus",          pinfo.nr_cpus,
+                           "total_memory",     pages_to_kib(pinfo.total_pages),
+                           "free_memory",      pages_to_kib(pinfo.free_pages),
+                           "scrub_memory",     pages_to_kib(pinfo.scrub_pages),
+                           "cpu_khz",          pinfo.cpu_khz,
+                           "hw_caps",          cpu_cap,
+                           "virt_caps",        virt_caps);
+
+#if defined(__aarch64__)
+    if ( objret ) {
+        unsigned int sve_vl_bits;
+        PyObject *py_arm_sve_vl;
+
+        sve_vl_bits = arch_capabilities_arm_sve(pinfo.arch_capabilities);
+        py_arm_sve_vl = PyLong_FromUnsignedLong(sve_vl_bits);
+
+        if ( !py_arm_sve_vl )
+            return NULL;
+
+        if ( PyDict_SetItemString(objret, "arm_sve_vl", py_arm_sve_vl) )
+            return NULL;
+    }
+#endif
+
+    return objret;
 }
 
 static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)
diff --git a/tools/xl/xl_info.c b/tools/xl/xl_info.c
index 712b7638b013..ddc42f96b979 100644
--- a/tools/xl/xl_info.c
+++ b/tools/xl/xl_info.c
@@ -27,6 +27,7 @@
 #include <libxl_json.h>
 #include <libxl_utils.h>
 #include <libxlutil.h>
+#include <xen-tools/arm-arch-capabilities.h>
 
 #include "xl.h"
 #include "xl_utils.h"
@@ -224,6 +225,13 @@ static void output_physinfo(void)
          info.cap_gnttab_v2 ? " gnttab-v2" : ""
         );
 
+    /* Print arm SVE vector length only on ARM platforms */
+#if defined(__aarch64__)
+    maybe_printf("arm_sve_vector_length  : %u\n",
+         arch_capabilities_arm_sve(info.arch_capabilities)
+        );
+#endif
+
     vinfo = libxl_get_version_info(ctx);
     if (vinfo) {
         i = (1 << 20) / vinfo->pagesize;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:42 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v8 10/12] xen/tools: add sve parameter in XL configuration
Date: Wed, 31 May 2023 08:24:11 +0100
Message-Id: <20230531072413.868673-11-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the sve parameter to the XL configuration to allow guests to use
the SVE feature.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
Changes from v7:
 - add R-by Anthony
Changes from v6:
 - Add check for sve_vl be multiple of 128 (Anthony)
Changes from v5:
 - Update documentation
 - re-generated golang files
Changes from v4:
 - Rename sve field to sve_vl (Anthony), changed type to
   libxl_sve_type
 - Sanity check of sve field in libxl instead of xl, update docs
   (Anthony)
 - drop Ack-by from George because of the changes in the Golang bits
Changes from v3:
 - no changes
Changes from v2:
 - domain configuration field name has changed to sve_vl,
   also its value now is VL/128.
 - Add Ack-by George for the Golang bits
Changes from v1:
 - updated to use arch_capabilities field for vector length
Changes from RFC:
 - changed libxl_types.idl sve field to uint16
 - now toolstack uses info from physinfo to check against the
   sve XL value
 - Changed documentation
---
 docs/man/xl.cfg.5.pod.in             | 16 ++++++++++++++
 tools/golang/xenlight/helpers.gen.go |  2 ++
 tools/golang/xenlight/types.gen.go   | 23 +++++++++++++++++++
 tools/include/libxl.h                |  5 +++++
 tools/libs/light/libxl_arm.c         | 33 ++++++++++++++++++++++++++++
 tools/libs/light/libxl_types.idl     | 22 +++++++++++++++++++
 tools/xl/xl_parse.c                  |  8 +++++++
 7 files changed, 109 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 24ac92718288..1b4e13ab647b 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2955,6 +2955,22 @@ Currently, only the "sbsa_uart" model is supported for ARM.
 
 =back
 
+=item B<sve="vl">
+
+The `sve` parameter enables Arm Scalable Vector Extension (SVE) usage for the
+guest and sets the maximum SVE vector length. The option is applicable only to
+AArch64 guests.
+The value "disabled" disables the feature; this is the default.
+Allowed values are "disabled", "128", "256", "384", "512", "640", "768", "896",
+"1024", "1152", "1280", "1408", "1536", "1664", "1792", "1920", "2048", "hw".
+Specifying "hw" means that the maximum vector length supported by the platform
+will be used.
+Please be aware that if a specific vector length is passed and its value is
+above the maximum vector length supported by the platform, an error will be
+raised.
+
+=back
+
 =head3 x86
 
 =over 4
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 35397be2f9e2..cd1a16e32eac 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1149,6 +1149,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
 x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
+x.ArchArm.SveVl = SveType(xc.arch_arm.sve_vl)
 if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
@@ -1653,6 +1654,7 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
 xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
+xc.arch_arm.sve_vl = C.libxl_sve_type(x.ArchArm.SveVl)
 if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 3d968a496744..b131a7eedc9d 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -490,6 +490,28 @@ TeeTypeNone TeeType = 0
 TeeTypeOptee TeeType = 1
 )
 
+type SveType int
+const(
+SveTypeHw SveType = -1
+SveTypeDisabled SveType = 0
+SveType128 SveType = 128
+SveType256 SveType = 256
+SveType384 SveType = 384
+SveType512 SveType = 512
+SveType640 SveType = 640
+SveType768 SveType = 768
+SveType896 SveType = 896
+SveType1024 SveType = 1024
+SveType1152 SveType = 1152
+SveType1280 SveType = 1280
+SveType1408 SveType = 1408
+SveType1536 SveType = 1536
+SveType1664 SveType = 1664
+SveType1792 SveType = 1792
+SveType1920 SveType = 1920
+SveType2048 SveType = 2048
+)
+
 type RdmReserve struct {
 Strategy RdmReserveStrategy
 Policy RdmReservePolicy
@@ -564,6 +586,7 @@ TypeUnion DomainBuildInfoTypeUnion
 ArchArm struct {
 GicVersion GicVersion
 Vuart VuartType
+SveVl SveType
 }
 ArchX86 struct {
 MsrRelaxed Defbool
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 4fa09ff7635a..cac641a7eba2 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -283,6 +283,11 @@
  */
 #define LIBXL_HAVE_BUILDINFO_ARCH_ARM_TEE 1
 
+/*
+ * libxl_domain_build_info has the arch_arm.sve_vl field.
+ */
+#define LIBXL_HAVE_BUILDINFO_ARCH_ARM_SVE_VL 1
+
 /*
  * LIBXL_HAVE_SOFT_RESET indicates that libxl supports performing
  * 'soft reset' for domains and there is 'soft_reset' shutdown reason
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 97c80d7ed0fa..35f76dfc21e4 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -3,6 +3,8 @@
 #include "libxl_libfdt_compat.h"
 #include "libxl_arm.h"
 
+#include <xen-tools/arm-arch-capabilities.h>
+
 #include <stdbool.h>
 #include <libfdt.h>
 #include <assert.h>
@@ -211,6 +213,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
+    /* Parameter is sanitised in libxl__arch_domain_build_info_setdefault */
+    if (d_config->b_info.arch_arm.sve_vl) {
+        /* Vector length is divided by 128 in struct xen_domctl_createdomain */
+        config->arch.sve_vl = d_config->b_info.arch_arm.sve_vl / 128U;
+    }
+
     return 0;
 }
 
@@ -1685,6 +1693,31 @@ int libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
 
+    /* Sanitise SVE parameter */
+    if (b_info->arch_arm.sve_vl) {
+        unsigned int max_sve_vl =
+            arch_capabilities_arm_sve(physinfo->arch_capabilities);
+
+        if (!max_sve_vl) {
+            LOG(ERROR, "SVE is unsupported on this machine.");
+            return ERROR_FAIL;
+        }
+
+        if (LIBXL_SVE_TYPE_HW == b_info->arch_arm.sve_vl) {
+            b_info->arch_arm.sve_vl = max_sve_vl;
+        } else if (b_info->arch_arm.sve_vl > max_sve_vl) {
+            LOG(ERROR,
+                "Invalid sve value: %d. Platform supports up to %u bits",
+                b_info->arch_arm.sve_vl, max_sve_vl);
+            return ERROR_FAIL;
+        } else if (b_info->arch_arm.sve_vl % 128) {
+            LOG(ERROR,
+                "Invalid sve value: %d. It must be multiple of 128",
+                b_info->arch_arm.sve_vl);
+            return ERROR_FAIL;
+        }
+    }
+
     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
         return 0;
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index fd31dacf7d5a..9e48bb772646 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -523,6 +523,27 @@ libxl_tee_type = Enumeration("tee_type", [
     (1, "optee")
     ], init_val = "LIBXL_TEE_TYPE_NONE")
 
+libxl_sve_type = Enumeration("sve_type", [
+    (-1, "hw"),
+    (0, "disabled"),
+    (128, "128"),
+    (256, "256"),
+    (384, "384"),
+    (512, "512"),
+    (640, "640"),
+    (768, "768"),
+    (896, "896"),
+    (1024, "1024"),
+    (1152, "1152"),
+    (1280, "1280"),
+    (1408, "1408"),
+    (1536, "1536"),
+    (1664, "1664"),
+    (1792, "1792"),
+    (1920, "1920"),
+    (2048, "2048")
+    ], init_val = "LIBXL_SVE_TYPE_DISABLED")
+
 libxl_rdm_reserve = Struct("rdm_reserve", [
     ("strategy",    libxl_rdm_reserve_strategy),
     ("policy",      libxl_rdm_reserve_policy),
@@ -690,6 +711,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
                                ("vuart", libxl_vuart_type),
+                               ("sve_vl", libxl_sve_type),
                               ])),
     ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool),
                               ])),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 1f6f47daf4e1..f036e56fc239 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2887,6 +2887,14 @@ skip_usbdev:
         }
     }
 
+    if (!xlu_cfg_get_string (config, "sve", &buf, 1)) {
+        e = libxl_sve_type_from_string(buf, &b_info->arch_arm.sve_vl);
+        if (e) {
+            fprintf(stderr, "Unknown sve \"%s\" specified\n", buf);
+            exit(EXIT_FAILURE);
+        }
+    }
+
     parse_vkb_list(config, d_config);
 
     d_config->virtios = NULL;
-- 
2.34.1
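To illustrate the new knob, a guest configuration enabling SVE might look like the following sketch (every setting other than `sve` is a generic example, not taken from this series):

```
# Arm64 guest capped at a 256-bit SVE vector length
name   = "sve-guest"
memory = 1024
vcpus  = 2
sve    = "256"   # "hw" uses the platform maximum; "disabled" (default) turns it off
```

Per the sanitisation in libxl__arch_domain_build_info_setdefault above, a value above the platform maximum, or one that is not a multiple of 128, is rejected at domain creation.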



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:43 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v8 11/12] xen/arm: add sve property for dom0less domUs
Date: Wed, 31 May 2023 08:24:12 +0100
Message-Id: <20230531072413.868673-12-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a device tree property in the dom0less domU configuration
to enable the guest to use SVE.

Update documentation.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
Changes from v7:
 - Add r-by Bertrand and Michal
 - Fixed some panic() messages with newline char at the end (Michal)
 - Use Arm64 instead of AArch64 in the documentation, add A-by (Julien)
Changes from v6:
 - Use ifdef in create_domUs and fail if 'sve' is used on systems
   with CONFIG_ARM64_SVE not selected (Bertrand, Julien, Jan)
Changes from v5:
 - Stop the domain creation if SVE not supported or SVE VL
   errors (Julien, Bertrand)
 - now sve_sanitize_vl_param is renamed to sve_domctl_vl_param
   and returns a boolean, change the affected code.
 - Reworded documentation.
Changes from v4:
 - Now it is possible to specify the property "sve" for dom0less
   device tree node without any value, that means the platform
   supported VL will be used.
Changes from v3:
 - Now domainconfig_encode_vl is named sve_encode_vl
Changes from v2:
 - xen_domctl_createdomain field name has changed into sve_vl
   and its value is the VL/128, use domainconfig_encode_vl
   to encode a plain VL in bits.
Changes from v1:
 - No changes
Changes from RFC:
 - Changed documentation
---
 docs/misc/arm/device-tree/booting.txt | 16 +++++++++++++++
 xen/arch/arm/domain_build.c           | 28 +++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 3879340b5e0a..bbd955e9c2f6 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -193,6 +193,22 @@ with the following properties:
     Optional. Handle to a xen,cpupool device tree node that identifies the
     cpupool where the guest will be started at boot.
 
+- sve
+
+    Optional. The `sve` property enables Arm SVE usage for the domain and sets
+    the maximum SVE vector length. The option is applicable only to Arm64
+    guests.
+    A value of 0 disables the feature; this is the default.
+    Specifying this property with no value means that the SVE vector length
+    will be set to the maximum vector length supported by the platform.
+    Values above 0 explicitly set the maximum SVE vector length for the domain;
+    allowed values range from 128 to 2048 and must be a multiple of 128.
+    Please note that if an explicitly specified value is above the maximum SVE
+    vector length supported by the hardware, domain creation will fail and the
+    system will stop. The same occurs if the option is given a non-zero value
+    but the platform doesn't support SVE.
+
 - xen,enhanced
 
     A string property. Possible property values are:
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 14b42120a9b1..579bc8194fed 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -4029,6 +4029,34 @@ void __init create_domUs(void)
             d_cfg.max_maptrack_frames = val;
         }
 
+        if ( dt_get_property(node, "sve", &val) )
+        {
+#ifdef CONFIG_ARM64_SVE
+            unsigned int sve_vl_bits;
+            bool ret = false;
+
+            if ( !val )
+            {
+                /* Property found with no value, means max HW VL supported */
+                ret = sve_domctl_vl_param(-1, &sve_vl_bits);
+            }
+            else
+            {
+                if ( dt_property_read_u32(node, "sve", &val) )
+                    ret = sve_domctl_vl_param(val, &sve_vl_bits);
+                else
+                    panic("Error reading 'sve' property\n");
+            }
+
+            if ( ret )
+                d_cfg.arch.sve_vl = sve_encode_vl(sve_vl_bits);
+            else
+                panic("SVE vector length error\n");
+#else
+            panic("'sve' property found, but CONFIG_ARM64_SVE not selected\n");
+#endif
+        }
+
         /*
          * The variable max_init_domid is initialized with zero, so here it's
          * very important to use the pre-increment operator to call
-- 
2.34.1
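As a sketch of the new property, a dom0less domU node might look like this (addresses, sizes and the node name are illustrative only; just `sve` comes from this patch):

```
domU1 {
    compatible = "xen,domain";
    #address-cells = <1>;
    #size-cells = <1>;
    memory = <0x0 0x20000>;
    cpus = <1>;
    sve = <512>;    /* cap the maximum SVE vector length at 512 bits */
    /* or a bare "sve;" to request the platform-supported maximum */
};
```

On a hypervisor built without CONFIG_ARM64_SVE, the mere presence of the property makes create_domUs() panic, as shown in the hunk above.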



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:24:45 2023
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Henry Wang <Henry.Wang@arm.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v8 12/12] xen/changelog: Add SVE and "dom0" options to the changelog for Arm
Date: Wed, 31 May 2023 08:24:13 +0100
Message-Id: <20230531072413.868673-13-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230531072413.868673-1-luca.fancellu@arm.com>
References: <20230531072413.868673-1-luca.fancellu@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Arm can now use the "dom0=" Xen command line option, and support for
guests running SVE instructions has been added; record both in the
changelog.

Mention the "Tech Preview" status and add an entry in SUPPORT.md

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Henry Wang <Henry.Wang@arm.com> # CHANGELOG
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes from v7:
 - Add r-by Bertrand
 - Use 'Arm64 domains' instead of 'AArch64 guest' in SUPPORT.md
   (Julien)
Changes from v6:
 - Add Henry's A-by to CHANGELOG
Changes from v5:
 - Add Tech Preview status and add entry in SUPPORT.md (Bertrand)
Changes from v4:
 - No changes
Change from v3:
 - new patch
---
 CHANGELOG.md | 3 +++
 SUPPORT.md   | 6 ++++++
 2 files changed, 9 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5bfd3aa5c0d5..512b7bdc0fcb 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -11,6 +11,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    cap toolstack provided values.
  - Ignore VCPUOP_set_singleshot_timer's VCPU_SSHOTTMR_future flag. The only
    known user doesn't use it properly, leading to in-guest breakage.
+ - The "dom0=" option is now supported on Arm, and its "sve=" sub-option can
+   be used to enable dom0 to use SVE/SVE2 instructions.
 
 ### Added
  - On x86, support for features new in Intel Sapphire Rapids CPUs:
@@ -20,6 +22,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    - Bus-lock detection, used by Xen to mitigate (by rate-limiting) the system
      wide impact of a guest misusing atomic instructions.
  - xl/libxl can customize SMBIOS strings for HVM guests.
+ - On Arm, Xen supports guests running SVE/SVE2 instructions. (Tech Preview)
 
 ## [4.17.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.17.0) - 2022-12-12
 
diff --git a/SUPPORT.md b/SUPPORT.md
index 6dbed9d5d029..c0aeb1f3f5e0 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -99,6 +99,12 @@ Extension to the GICv3 interrupt controller to support MSI.
 
     Status: Experimental
 
+### ARM Scalable Vector Extension (SVE/SVE2)
+
+Arm64 domains can use Scalable Vector Extension (SVE/SVE2).
+
+    Status: Tech Preview
+
 ## Guest Type
 
 ### x86/PV
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:29:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:29:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541616.844550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GGU-00044X-GC; Wed, 31 May 2023 07:29:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541616.844550; Wed, 31 May 2023 07:29:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GGU-00044Q-DP; Wed, 31 May 2023 07:29:02 +0000
Received: by outflank-mailman (input) for mailman id 541616;
 Wed, 31 May 2023 07:29:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0BFG=BU=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q4GGS-00044K-Pv
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:29:00 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [2001:67c:2178:6::1d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cefe389d-ff84-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 09:28:59 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C92851FD5E;
 Wed, 31 May 2023 07:28:58 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 43E7F138E8;
 Wed, 31 May 2023 07:28:58 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id k3shDzr3dmSNPAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 31 May 2023 07:28:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cefe389d-ff84-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685518138; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=idW4QU2kWkEZ0Jrv3EzvM9qxTCF40PCEp7KIJ87jyNY=;
	b=OueEr3zKYEVkgIw1z8tt2xV5yzk85Kurm4nhTWPWFSbhsXTY+fkRxt+z2I+xzWM0N01ClB
	LdySQNHYL+PZBnIrlcXYz0xcaz27B6tgFgh7tj0rWvWM+8CwoA6NUuWH0px7YxEssdyGpQ
	o+KxruUCtu8q3TtnfHuSpuOT8/7d+dM=
Message-ID: <888f860d-4307-54eb-01da-11f9adf65559@suse.com>
Date: Wed, 31 May 2023 09:28:57 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Content-Language: en-US
To: Borislav Petkov <bp@alien8.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
 mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
 Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
 <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
 <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------zhKw5yvbvWkpWa8p6pNVDnxp"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------zhKw5yvbvWkpWa8p6pNVDnxp
Content-Type: multipart/mixed; boundary="------------l2MouIcQ3q3ayPok03ThE0wp";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Borislav Petkov <bp@alien8.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
 mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
 Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
Message-ID: <888f860d-4307-54eb-01da-11f9adf65559@suse.com>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
 <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
 <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
In-Reply-To: <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>

--------------l2MouIcQ3q3ayPok03ThE0wp
Content-Type: multipart/mixed; boundary="------------2fH4jB2fLfQ4dxU1rma4JPUh"

--------------2fH4jB2fLfQ4dxU1rma4JPUh
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMzAuMDUuMjMgMTc6MjgsIEJvcmlzbGF2IFBldGtvdiB3cm90ZToNCj4gT24gTW9uLCBN
YXkgMjIsIDIwMjMgYXQgMDQ6MTc6NTBQTSArMDIwMCwgSnVlcmdlbiBHcm9zcyB3cm90ZToN
Cj4+IFRoZSBhdHRhY2hlZCBkaWZmIGlzIGZvciBwYXRjaCAxMy4NCj4gDQo+IE1lcmdlZCBh
bmQgcHVzaGVkIG91dCBpbnRvIHNhbWUgYnJhbmNoLg0KPiANCj4gTmV4dCBpc3N1ZS4gRGlm
ZmluZyAvcHJvYy9tdHJyIHNob3dzOg0KPiANCj4gLS0tIHByb2MtbXRyci42LjMJMjAyMy0w
NS0zMCAxNzowMDoxMy4yMTU5OTk0ODMgKzAyMDANCj4gKysrIHByb2MtbXRyci5hZnRlcgky
MDIzLTA1LTMwIDE2OjAxOjM4LjI4MTk5NzgxNiArMDIwMA0KPiBAQCAtMSw4ICsxLDggQEAN
Cj4gICByZWcwMDogYmFzZT0weDAwMDAwMDAwMCAoICAgIDBNQiksIHNpemU9IDIwNDhNQiwg
Y291bnQ9MTogd3JpdGUtYmFjaw0KPiAtcmVnMDE6IGJhc2U9MHgwODAwMDAwMDAgKCAyMDQ4
TUIpLCBzaXplPSAgNTEyTUIsIGNvdW50PTE6IHdyaXRlLWJhY2sNCj4gK3JlZzAxOiBiYXNl
PTB4MDgwMDAwMDAwICggMjA0OE1CKSwgc2l6ZT0gMTAyNE1CLCBjb3VudD0xOiB3cml0ZS1i
YWNrDQo+ICAgcmVnMDI6IGJhc2U9MHgwYTAwMDAwMDAgKCAyNTYwTUIpLCBzaXplPSAgMjU2
TUIsIGNvdW50PTE6IHdyaXRlLWJhY2sNCj4gICByZWcwMzogYmFzZT0weDBhZTAwMDAwMCAo
IDI3ODRNQiksIHNpemU9ICAgMzJNQiwgY291bnQ9MTogdW5jYWNoYWJsZQ0KPiAtcmVnMDQ6
IGJhc2U9MHgxMDAwMDAwMDAgKCA0MDk2TUIpLCBzaXplPSA0MDk2TUIsIGNvdW50PTE6IHdy
aXRlLWJhY2sNCj4gK3JlZzA0OiBiYXNlPTB4MTAwMDAwMDAwICggNDA5Nk1CKSwgc2l6ZT0g
IDI1Nk1CLCBjb3VudD0xOiB3cml0ZS1iYWNrDQo+ICAgcmVnMDU6IGJhc2U9MHgyMDAwMDAw
MDAgKCA4MTkyTUIpLCBzaXplPSA4MTkyTUIsIGNvdW50PTE6IHdyaXRlLWJhY2sNCj4gICBy
ZWcwNjogYmFzZT0weDQwMDAwMDAwMCAoMTYzODRNQiksIHNpemU9IDEwMjRNQiwgY291bnQ9
MTogd3JpdGUtYmFjaw0KPiAgIHJlZzA3OiBiYXNlPTB4NDQwMDAwMDAwICgxNzQwOE1CKSwg
c2l6ZT0gIDI1Nk1CLCBjb3VudD0xOiB3cml0ZS1iYWNrDQo+IA0KDQpXZWlyZC4NCg0KQ2Fu
IHlvdSBwbGVhc2UgYm9vdCB0aGUgc3lzdGVtIHdpdGggdGhlIE1UUlIgcGF0Y2hlcyBhbmQg
c3BlY2lmeSAibXRycj1kZWJ1ZyINCm9uIHRoZSBjb21tYW5kIGxpbmU/IEknZCBiZSBpbnRl
cmVzdGVkIGluIHRoZSByYXcgcmVnaXN0ZXIgdmFsdWVzIGJlaW5nIHJlYWQNCmFuZCB0aGUg
cmVzdWx0aW5nIG1lbW9yeSB0eXBlIG1hcC4NCg0KDQpKdWVyZ2VuDQo=
--------------2fH4jB2fLfQ4dxU1rma4JPUh
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------2fH4jB2fLfQ4dxU1rma4JPUh--

--------------l2MouIcQ3q3ayPok03ThE0wp--

--------------zhKw5yvbvWkpWa8p6pNVDnxp
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmR29zkFAwAAAAAACgkQsN6d1ii/Ey/D
eQf/ThGqCy4WXGZTN+DrkW1qO3DxndwaGDgqfB97PAWRZa2Fdm/bQPCxbQfAtDJskHbz0dgSaoOq
qi8fxA0Meqsuzin5B04UFXPDZY48P32IyDUF48pN15XFLgZF01uDKavtE5gFLQ2jBNx0ZkjK6sPw
d9z9e3Lf3HBUYspPa1JRXM3qmzB0/rIE0dqr7b8QOLVE/Hv1M9GpSxwGnxY30sCPB0om5TqxMgmw
g6r8jw99xzLTP7hxrhzwsUHNtZjbsRjciaV/EEGFuO0Q3RyevQR/T++Om99+gk2OPqx1sJRB22+d
WLipU6se2j1skJrZujRoPBswW4DwMY06CPv31wcvOA==
=bLto
-----END PGP SIGNATURE-----

--------------zhKw5yvbvWkpWa8p6pNVDnxp--


From xen-devel-bounces@lists.xenproject.org Wed May 31 07:33:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:33:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541621.844559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GLC-0005Y7-7d; Wed, 31 May 2023 07:33:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541621.844559; Wed, 31 May 2023 07:33:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GLC-0005Y0-4W; Wed, 31 May 2023 07:33:54 +0000
Received: by outflank-mailman (input) for mailman id 541621;
 Wed, 31 May 2023 07:33:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rqU/=BU=redhat.com=sgarzare@srs-se1.protection.inumbo.net>)
 id 1q4GLA-0005Xu-Mp
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:33:52 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7c30baf7-ff85-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 09:33:50 +0200 (CEST)
Received: from mail-lj1-f200.google.com (mail-lj1-f200.google.com
 [209.85.208.200]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-99-JBw-ZGEjOcihmvaln_Xh7w-1; Wed, 31 May 2023 03:33:45 -0400
Received: by mail-lj1-f200.google.com with SMTP id
 38308e7fff4ca-2b05714a774so29388341fa.1
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 00:33:45 -0700 (PDT)
Received: from sgarzare-redhat (host-87-12-25-16.business.telecomitalia.it.
 [87.12.25.16]) by smtp.gmail.com with ESMTPSA id
 i13-20020a17090685cd00b0096f83b16ab1sm8499433ejy.136.2023.05.31.00.33.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 00:33:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c30baf7-ff85-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685518429;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8jvYQlhKuHyKtL4mNj/znMLADcf3tcavTDhI4smE8dw=;
	b=XsNxPhP/8MBxxQyXBXzhG681qCuVcjMEiE5HzTG4zJ3zQVnHZfq4wr4G3Y5EpRi/4kHdlO
	z8iT9E/Dh9LpWsn2YTjU5Fg54tydZBrvg43qGtE/bk7u5xjhdZSEHYgFa3hrPIMNgv+Akh
	y2byYSKQPMeew0R+A/UJlI9r3ikeq4U=
X-MC-Unique: JBw-ZGEjOcihmvaln_Xh7w-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685518424; x=1688110424;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8jvYQlhKuHyKtL4mNj/znMLADcf3tcavTDhI4smE8dw=;
        b=f/uPZ7zrZSJRo99Qi2PrInts5OAANRzfMo8WH0UuzBu6k2jYBNae+Rn5G6JaXaeUX6
         chbCkr+0vRaTbTRCWWyYV5r+HpHpJ0A6KnCAWVOOcea1Zp0wLou9iIowKRtoQmzVqDDv
         xPS6gPwJsSLKK4kEhGE1h6Oa0XU6Id8jwM2TTeKjMRGU/TRixP3M8rY81X8IpW6vPh6x
         mdMC3BPMEScRSwoBEKN5UcFVYZ2NqXd54QrtEgFykmHRXD+2OnWGxE8xzznM1NTxR4b9
         Rld+RXYT3F1dr1jGl+KPr5NDKJ9QGLQo6ry3Fs4G7A07nf/IwT5LvQSTHINtnNpXtCjw
         XOww==
X-Gm-Message-State: AC+VfDzl9Xp4NX9Qzlm6UIesj8N45IxgLBvfBdUZ09JF3bKRXy2xBaYz
	LgMfJ9QmJR3P0k0+jY476NCIgOWEciz8f/vOFrcAsCt3fQjmDpN6HXC2ddIszHfDOGjxPgtFYsT
	sIYPaAn4lFFD2S+Qm0IQn3A7MBFs=
X-Received: by 2002:a2e:9e47:0:b0:2af:a696:3691 with SMTP id g7-20020a2e9e47000000b002afa6963691mr2125459ljk.40.1685518424311;
        Wed, 31 May 2023 00:33:44 -0700 (PDT)
X-Google-Smtp-Source: ACHHUZ551RIeF9bnzkTrz/lia8+/hteNl9cJtKk1XRyhXQesUA8vv9D1oOPKoklfR2VDrpxdBxuWZw==
X-Received: by 2002:a2e:9e47:0:b0:2af:a696:3691 with SMTP id g7-20020a2e9e47000000b002afa6963691mr2125438ljk.40.1685518423940;
        Wed, 31 May 2023 00:33:43 -0700 (PDT)
Date: Wed, 31 May 2023 09:33:41 +0200
From: Stefano Garzarella <sgarzare@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, 
	eblake@redhat.com, Hanna Reitz <hreitz@redhat.com>, Fam Zheng <fam@euphon.net>, 
	qemu-block@nongnu.org, xen-devel@lists.xenproject.org, 
	Julia Suvorova <jusual@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@linaro.org>, Paul Durrant <paul@xen.org>, Kevin Wolf <kwolf@redhat.com>, 
	Anthony Perard <anthony.perard@citrix.com>, "Michael S. Tsirkin" <mst@redhat.com>, 
	Aarushi Mehta <mehta.aaru20@gmail.com>
Subject: Re: [PATCH v3 5/6] block/linux-aio: convert to blk_io_plug_call() API
Message-ID: <olpmomsccllt6s5yuzzznwoaf6mpx3vmcex5bt477uviettgra@owpdleplwg36>
References: <20230530180959.1108766-1-stefanha@redhat.com>
 <20230530180959.1108766-6-stefanha@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20230530180959.1108766-6-stefanha@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline

On Tue, May 30, 2023 at 02:09:58PM -0400, Stefan Hajnoczi wrote:
>Stop using the .bdrv_co_io_plug() API because it is not multi-queue
>block layer friendly. Use the new blk_io_plug_call() API to batch I/O
>submission instead.
>
>Note that a dev_max_batch check is dropped in laio_io_unplug() because
>the semantics of unplug_fn() are different from .bdrv_co_unplug():
>1. unplug_fn() is only called when the last blk_io_unplug() call occurs,
>   not every time blk_io_unplug() is called.
>2. unplug_fn() is per-thread, not per-BlockDriverState, so there is no
>   way to get per-BlockDriverState fields like dev_max_batch.
>
>Therefore this condition cannot be moved to laio_unplug_fn(). It is not
>obvious that this condition affects performance in practice, so I am
>removing it instead of trying to come up with a more complex mechanism
>to preserve the condition.
>
>Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>Reviewed-by: Eric Blake <eblake@redhat.com>
>---
> include/block/raw-aio.h |  7 -------
> block/file-posix.c      | 28 ----------------------------
> block/linux-aio.c       | 41 +++++++++++------------------------------
> 3 files changed, 11 insertions(+), 65 deletions(-)

LGTM!

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>



From xen-devel-bounces@lists.xenproject.org Wed May 31 07:44:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541625.844570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GVT-00075e-5i; Wed, 31 May 2023 07:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541625.844570; Wed, 31 May 2023 07:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4GVT-00075X-22; Wed, 31 May 2023 07:44:31 +0000
Received: by outflank-mailman (input) for mailman id 541625;
 Wed, 31 May 2023 07:44:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qQHT=BU=linutronix.de=tglx@srs-se1.protection.inumbo.net>)
 id 1q4GVR-00075B-TQ
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:44:29 +0000
Received: from galois.linutronix.de (galois.linutronix.de
 [2a0a:51c0:0:12e:550::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f8822e8d-ff86-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 09:44:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8822e8d-ff86-11ed-b231-6b7b168915f2
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1685519067;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=w+JYerxWJSKupTV1uEaRy4HAvltDS00WO6xGerR48rM=;
	b=a5vi7FP4K2/0WepNfGqeR/1wGpNma9maeRFF/S+Ow1vyJUxlrhPjlqpp0Gt/tzTbtGb9+h
	l3LwhAd8UbuY25f1yxIvmEJg9IiycQoNmE2Jw46Svp/LlN0TDRsCc1rPrDCmSd7QhGP+2X
	bpOZCtZBEicRixezH5KsGlLKe9iAgpeZd8Acr1epWp4OHRKWxl6sahQA7Ect0KPEWRsl8P
	Y5mx+YIQEij+GUPZWZZnXzdfGCoGo0Dylkir7fmS/sA84o97ZsT75+6BgHuS4kekwgCwF2
	0PAfMiN4GOmyR3diQuwLhVAfANGV+kugCPs4bHz8Idk3MexeM+liZaUuTG2wZA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1685519067;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=w+JYerxWJSKupTV1uEaRy4HAvltDS00WO6xGerR48rM=;
	b=hiUajp6HjMrpej9Plw6pV1er+5xqm8AlaocdPjp6g94On4ClItoSp7WWjhUGtjQr4KH1MY
	2ShCuAlv5qPs3qDQ==
To: Tom Lendacky <thomas.lendacky@amd.com>, Sean Christopherson
 <seanjc@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>, LKML
 <linux-kernel@vger.kernel.org>, x86@kernel.org, David Woodhouse
 <dwmw2@infradead.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Brian
 Gerst <brgerst@gmail.com>, Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Oleksandr Natalenko <oleksandr@natalenko.name>, Paul Menzel
 <pmenzel@molgen.mpg.de>, "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>, Usama Arif
 <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xenproject.org,
 Russell King <linux@armlinux.org.uk>, Arnd Bergmann <arnd@arndb.de>,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas
 <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>, Guo Ren
 <guoren@kernel.org>, linux-csky@vger.kernel.org, Thomas Bogendoerfer
 <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org, "James E.J.
 Bottomley" <James.Bottomley@hansenpartnership.com>, Helge Deller
 <deller@gmx.de>, linux-parisc@vger.kernel.org, Paul Walmsley
 <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
 linux-riscv@lists.infradead.org, Mark Rutland <mark.rutland@arm.com>,
 Sabin Rapan <sabrapan@amazon.com>, "Michael Kelley (LINUX)"
 <mikelley@microsoft.com>, Dave Hansen <dave.hansen@linux.intel.com>
Subject: [patch] x86/smpboot: Fix the parallel bringup decision
In-Reply-To: <b6323987-059e-5396-20b9-8b6a1687e289@amd.com>
References: <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name> <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx> <87cz2ija1e.ffs@tglx>
 <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name> <87wn0pizbl.ffs@tglx>
 <ZHYqwsCURnrFdsVm@google.com> <87leh5iom8.ffs@tglx>
 <8751e955-e975-c6d4-630c-02912b9ef9da@amd.com> <871qiximen.ffs@tglx>
 <b6323987-059e-5396-20b9-8b6a1687e289@amd.com>
Date: Wed, 31 May 2023 09:44:26 +0200
Message-ID: <87ilc9gd2d.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain

The decision to allow parallel bringup of secondary CPUs checks
CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
parallel bringup because accessing the local APIC is intercepted and raises
a #VC or #VE, which cannot be handled at that point.

The check works correctly, but only for AMD encrypted guests. TDX does not
set that flag.

As there is no real connection between CC attributes and the inability to
support parallel bringup, replace this with a generic control flag in
x86_cpuinit and let SEV-ES and TDX init code disable it.

Fixes: 0c7ffa32dbd6 ("x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it")
Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/coco/tdx/tdx.c         |   11 +++++++++++
 arch/x86/include/asm/x86_init.h |    3 +++
 arch/x86/kernel/smpboot.c       |   19 ++-----------------
 arch/x86/kernel/x86_init.c      |    1 +
 arch/x86/mm/mem_encrypt_amd.c   |   15 +++++++++++++++
 5 files changed, 32 insertions(+), 17 deletions(-)

--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -871,5 +871,16 @@ void __init tdx_early_init(void)
 	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
 	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
 
+	/*
+	 * TDX intercepts the RDMSR to read the X2APIC ID in the parallel
+	 * bringup low level code. That raises #VE which cannot be handled
+	 * there.
+	 *
+	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
+	 * implemented separately in the low level startup ASM code.
+	 * Until that is in place, disable parallel bringup for TDX.
+	 */
+	x86_cpuinit.parallel_bringup = false;
+
 	pr_info("Guest detected\n");
 }
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -177,11 +177,14 @@ struct x86_init_ops {
  * struct x86_cpuinit_ops - platform specific cpu hotplug setups
  * @setup_percpu_clockev:	set up the per cpu clock event device
  * @early_percpu_clock_init:	early init of the per cpu clock event device
+ * @fixup_cpu_id:		fixup function for cpuinfo_x86::phys_proc_id
+ * @parallel_bringup:		Parallel bringup control
  */
 struct x86_cpuinit_ops {
 	void (*setup_percpu_clockev)(void);
 	void (*early_percpu_clock_init)(void);
 	void (*fixup_cpu_id)(struct cpuinfo_x86 *c, int node);
+	bool parallel_bringup;
 };
 
 struct timespec64;
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1267,23 +1267,8 @@ void __init smp_prepare_cpus_common(void
 /* Establish whether parallel bringup can be supported. */
 bool __init arch_cpuhp_init_parallel_bringup(void)
 {
-	/*
-	 * Encrypted guests require special handling. They enforce X2APIC
-	 * mode but the RDMSR to read the APIC ID is intercepted and raises
-	 * #VC or #VE which cannot be handled in the early startup code.
-	 *
-	 * AMD-SEV does not provide a RDMSR GHCB protocol so the early
-	 * startup code cannot directly communicate with the secure
-	 * firmware. The alternative solution to retrieve the APIC ID via
-	 * CPUID(0xb), which is covered by the GHCB protocol, is not viable
-	 * either because there is no enforcement of the CPUID(0xb)
-	 * provided "initial" APIC ID to be the same as the real APIC ID.
-	 *
-	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
-	 * implemented seperately in the low level startup ASM code.
-	 */
-	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
-		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
+	if (!x86_cpuinit.parallel_bringup) {
+		pr_info("Parallel CPU startup disabled by the platform\n");
 		return false;
 	}
 
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -126,6 +126,7 @@ struct x86_init_ops x86_init __initdata
 struct x86_cpuinit_ops x86_cpuinit = {
 	.early_percpu_clock_init	= x86_init_noop,
 	.setup_percpu_clockev		= setup_secondary_APIC_clock,
+	.parallel_bringup		= true,
 };
 
 static void default_nmi_init(void) { };
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -501,6 +501,21 @@ void __init sme_early_init(void)
 	x86_platform.guest.enc_status_change_finish  = amd_enc_status_change_finish;
 	x86_platform.guest.enc_tlb_flush_required    = amd_enc_tlb_flush_required;
 	x86_platform.guest.enc_cache_flush_required  = amd_enc_cache_flush_required;
+
+	/*
+	 * AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the
+	 * parallel bringup low level code. That raises #VC which cannot be
+	 * handled there.
+	 * It does not provide a RDMSR GHCB protocol so the early startup
+	 * code cannot directly communicate with the secure firmware. The
+	 * alternative solution to retrieve the APIC ID via CPUID(0xb),
+	 * which is covered by the GHCB protocol, is not viable either
+	 * because there is no enforcement of the CPUID(0xb) provided
+	 * "initial" APIC ID to be the same as the real APIC ID.
+	 * Disable parallel bootup.
+	 */
+	if (sev_status & MSR_AMD64_SEV_ES_ENABLED)
+		x86_cpuinit.parallel_bringup = false;
 }
 
 void __init mem_encrypt_free_decrypted_mem(void)


From xen-devel-bounces@lists.xenproject.org Wed May 31 07:49:59 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 07:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541629.844580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Gaa-0007jJ-Oe; Wed, 31 May 2023 07:49:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541629.844580; Wed, 31 May 2023 07:49:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Gaa-0007jC-Lj; Wed, 31 May 2023 07:49:48 +0000
Received: by outflank-mailman (input) for mailman id 541629;
 Wed, 31 May 2023 07:49:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RxVg=BU=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1q4GaY-0007j6-HU
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 07:49:47 +0000
Received: from wout1-smtp.messagingengine.com (wout1-smtp.messagingengine.com
 [64.147.123.24]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b16dae06-ff87-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 09:49:39 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.west.internal (Postfix) with ESMTP id 9BB06320097C
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 03:49:36 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute6.internal (MEProxy); Wed, 31 May 2023 03:49:36 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA for
 <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 03:49:34 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b16dae06-ff87-11ed-8611-37d641c3527e
Date: Wed, 31 May 2023 09:49:32 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: HVM performance once again
Message-ID: <ZHb8DBbRuklAXhCE@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="0/lXUo8O37WGBCUc"
Content-Disposition: inline


--0/lXUo8O37WGBCUc
Content-Type: multipart/mixed; protected-headers=v1;
	boundary="T1KEJXkE9CKFKvx+"
Content-Disposition: inline
Date: Wed, 31 May 2023 09:49:32 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: HVM performance once again


--T1KEJXkE9CKFKvx+
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Hi,

I returned to HVM performance once again, this time looking at the impact
of PCI passthrough on network throughput.
The setup:
 - Xen 4.17
 - Linux 6.3.2 in all domUs
 - iperf -c running in a PVH (call it "client")
 - iperf -s running in an HVM (call it "server")
 - client's netfront has a backend directly in server
 - frontend's "trusted" is set to 0
 - the HVM has qemu in a stubdomain in all cases
 - no intentional differences in the HVM configuration besides the presence
   of a PCI device (it is a network card, but it was not involved in the
   traffic)

And now the results:
 - server is a plain HVM: ~6Gbps
 - server is an HVM with PCI passthrough: ~3Gbps

Any idea why there is such a huge difference?

One difference I see when comparing the logs is a 64MB swiotlb initialized
in the no-PCI case, but I'm not sure whether that's really relevant...
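A quick sanity check on that 64MB figure (my own sketch, assuming the stock
swiotlb defaults in this kernel: 2 KiB slabs via IO_TLB_SHIFT = 11 and 32768
slabs; the debugfs path requires CONFIG_DEBUG_FS):

```shell
# Where the 64MB comes from: swiotlb's default pool is IO_TLB_DEFAULT_SIZE
# = 64 MiB, carved into 2 KiB slabs (IO_TLB_SHIFT = 11).
slab_size=$((1 << 11))   # 2 KiB per slab
nslabs=$((1 << 15))      # 32768 slabs by default
echo "$((slab_size * nslabs / 1024 / 1024)) MiB"   # prints "64 MiB"

# Whether traffic actually bounces through swiotlb at runtime can be
# checked via debugfs (path as of recent kernels):
#   cat /sys/kernel/debug/swiotlb/io_tlb_used
```

If io_tlb_used stays near zero during the iperf run, the swiotlb difference
is probably a red herring.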

Both dmesg attached.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--T1KEJXkE9CKFKvx+
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="hvm-pci.log"
Content-Transfer-Encoding: quoted-printable

[    0.000000] Linux version 6.3.2-1.qubes.fc37.x86_64 (mockbuild@0c424897d=
4764da983d21c08af6d60ea) (gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4), GNU=
 ld version 2.38-27.fc37) #1 SMP PREEMPT_DYNAMIC Thu May 11 22:08:07 GMT 20=
23
[    0.000000] Command line: root=3D/dev/mapper/dmroot ro nomodeset console=
=3Dhvc0 rd_NO_PLYMOUTH rd.plymouth.enable=3D0 plymouth.enable=3D0 clocksour=
ce=3Dtsc xen_scrub_pages=3D0  selinux=3D1 security=3Dselinux
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point=
 registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys Us=
er registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
[    0.000000] x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
[    0.000000] x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
[    0.000000] x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]:    8
[    0.000000] x86/fpu: Enabled xstate features 0x2e7, context size is 2440=
 bytes, using 'compacted' format.
[    0.000000] signal: max sigframe size: 3632
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reser=
ved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reser=
ved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000003effefff] usable
[    0.000000] BIOS-e820: [mem 0x000000003efff000-0x000000003effffff] reser=
ved
[    0.000000] BIOS-e820: [mem 0x00000000fc000000-0x00000000fc00afff] ACPI =
NVS
[    0.000000] BIOS-e820: [mem 0x00000000fc00b000-0x00000000ffffffff] reser=
ved
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.4 present.
[    0.000000] DMI: Xen HVM domU, BIOS 4.17.0 04/25/2023
[    0.000000] Hypervisor detected: Xen HVM
[    0.000000] Xen version 4.17.
[    0.000000] platform_pci_unplug: Netfront and the Xen platform PCI drive=
r have been compiled for this kernel: unplug emulated NICs.
[    0.000000] platform_pci_unplug: Blkfront and the Xen platform PCI drive=
r have been compiled for this kernel: unplug emulated disks.
               You might have to change the root device
               from /dev/hd[a-d] to /dev/xvd[a-d]
               in your root=3D kernel command line option
[    0.000022] HVMOP_pagetable_dying not supported
[    0.018916] tsc: Fast TSC calibration using PIT
[    0.018919] tsc: Detected 2803.078 MHz processor
[    0.018920] tsc: Detected 2803.186 MHz TSC
[    0.019389] e820: update [mem 0x00000000-0x00000fff] usable =3D=3D> rese=
rved
[    0.019392] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.019396] last_pfn =3D 0x3efff max_arch_pfn =3D 0x400000000
[    0.019435] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT=
 =20
[    0.027602] found SMP MP-table at [mem 0x000f5a80-0x000f5a8f]
[    0.027631] Using GB pages for direct mapping
[    0.027759] RAMDISK: [mem 0x3e8d5000-0x3efeffff]
[    0.027769] ACPI: Early table checksum verification disabled
[    0.027777] ACPI: RSDP 0x00000000000F59D0 000024 (v02 Xen   )
[    0.027786] ACPI: XSDT 0x00000000FC00A650 000054 (v01 Xen    HVM      00=
000000 HVML 00000000)
[    0.027790] ACPI: FACP 0x00000000FC00A370 0000F4 (v04 Xen    HVM      00=
000000 HVML 00000000)
[    0.027795] ACPI: DSDT 0x00000000FC001040 0092A3 (v02 Xen    HVM      00=
000000 INTL 20220331)
[    0.027798] ACPI: FACS 0x00000000FC001000 000040
[    0.027800] ACPI: FACS 0x00000000FC001000 000040
[    0.027802] ACPI: APIC 0x00000000FC00A470 000070 (v02 Xen    HVM      00=
000000 HVML 00000000)
[    0.027804] ACPI: HPET 0x00000000FC00A560 000038 (v01 Xen    HVM      00=
000000 HVML 00000000)
[    0.027807] ACPI: WAET 0x00000000FC00A5A0 000028 (v01 Xen    HVM      00=
000000 HVML 00000000)
[    0.027809] ACPI: SSDT 0x00000000FC00A5D0 000031 (v02 Xen    HVM      00=
000000 INTL 20220331)
[    0.027811] ACPI: SSDT 0x00000000FC00A610 000031 (v02 Xen    HVM      00=
000000 INTL 20220331)
[    0.027813] ACPI: Reserving FACP table memory at [mem 0xfc00a370-0xfc00a=
463]
[    0.027814] ACPI: Reserving DSDT table memory at [mem 0xfc001040-0xfc00a=
2e2]
[    0.027815] ACPI: Reserving FACS table memory at [mem 0xfc001000-0xfc001=
03f]
[    0.027815] ACPI: Reserving FACS table memory at [mem 0xfc001000-0xfc001=
03f]
[    0.027816] ACPI: Reserving APIC table memory at [mem 0xfc00a470-0xfc00a=
4df]
[    0.027817] ACPI: Reserving HPET table memory at [mem 0xfc00a560-0xfc00a=
597]
[    0.027817] ACPI: Reserving WAET table memory at [mem 0xfc00a5a0-0xfc00a=
5c7]
[    0.027818] ACPI: Reserving SSDT table memory at [mem 0xfc00a5d0-0xfc00a=
600]
[    0.027818] ACPI: Reserving SSDT table memory at [mem 0xfc00a610-0xfc00a=
640]
[    0.028833] No NUMA configuration found
[    0.028835] Faking a node at [mem 0x0000000000000000-0x000000003effefff]
[    0.028843] NODE_DATA(0) allocated [mem 0x3e8aa000-0x3e8d4fff]
[    0.030170] Zone ranges:
[    0.030171]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.030173]   DMA32    [mem 0x0000000001000000-0x000000003effefff]
[    0.030174]   Normal   empty
[    0.030175]   Device   empty
[    0.030175] Movable zone start for each node
[    0.030178] Early memory node ranges
[    0.030178]   node   0: [mem 0x0000000000001000-0x000000000009efff]
[    0.030179]   node   0: [mem 0x0000000000100000-0x000000003effefff]
[    0.030181] Initmem setup node 0 [mem 0x0000000000001000-0x000000003effe=
fff]
[    0.030184] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.030198] On node 0, zone DMA: 97 pages in unavailable ranges
[    0.031209] On node 0, zone DMA32: 4097 pages in unavailable ranges
[    0.033405] ACPI: PM-Timer IO Port: 0xb008
[    0.033449] IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-=
47
[    0.033451] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.033453] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 low level)
[    0.033453] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 low level)
[    0.033454] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 low level)
[    0.033457] ACPI: Using ACPI (MADT) for SMP configuration information
[    0.033458] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.033461] TSC deadline timer available
[    0.033461] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.033468] [mem 0x3f000000-0xfbffffff] available for PCI devices
[    0.033469] Booting paravirtualized kernel on Xen HVM
[    0.033471] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0=
xffffffff, max_idle_ns: 1910969940391419 ns
[    0.038575] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:2 nr_cpu_ids:2 nr=
_node_ids:1
[    0.038767] percpu: Embedded 63 pages/cpu s221184 r8192 d28672 u1048576
[    0.038771] pcpu-alloc: s221184 r8192 d28672 u1048576 alloc=3D1*2097152
[    0.038773] pcpu-alloc: [0] 0 1=20
[    0.038790] xen: PV spinlocks enabled
[    0.038792] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, =
linear)
[    0.038795] Fallback order for Node 0: 0=20
[    0.038797] Built 1 zonelists, mobility grouping on.  Total pages: 253759
[    0.038798] Policy zone: DMA32
[    0.038799] Kernel command line: root=3D/dev/mapper/dmroot ro nomodeset =
console=3Dhvc0 rd_NO_PLYMOUTH rd.plymouth.enable=3D0 plymouth.enable=3D0 cl=
ocksource=3Dtsc xen_scrub_pages=3D0  selinux=3D1 security=3Dselinux
[    0.038824] Booted with the nomodeset parameter. Only the system framebu=
ffer will be available
[    0.038883] Unknown kernel command line parameters "rd_NO_PLYMOUTH", wil=
l be passed to user space.
[    0.038944] random: crng init done
[    0.039025] Dentry cache hash table entries: 131072 (order: 8, 1048576 b=
ytes, linear)
[    0.039079] Inode-cache hash table entries: 65536 (order: 7, 524288 byte=
s, linear)
[    0.039287] mem auto-init: stack:all(zero), heap alloc:on, heap free:on
[    0.039288] mem auto-init: clearing system memory may take some time...
[    0.103114] Memory: 941396K/1031796K available (18432K kernel code, 3208=
K rwdata, 8692K rodata, 4992K init, 18632K bss, 90140K reserved, 0K cma-res=
erved)
[    0.103189] SLUB: HWalign=3D64, Order=3D0-3, MinObjects=3D0, CPUs=3D2, N=
odes=3D1
[    0.103197] Kernel/User page tables isolation: enabled
[    0.103217] ftrace: allocating 54208 entries in 212 pages
[    0.109785] ftrace: allocated 212 pages with 4 groups
[    0.110437] Dynamic Preempt: voluntary
[    0.110453] rcu: Preemptible hierarchical RCU implementation.
[    0.110454] rcu: 	RCU restricting CPUs from NR_CPUS=3D8192 to nr_cpu_ids=
=3D2.
[    0.110454] 	Trampoline variant of Tasks RCU enabled.
[    0.110455] 	Rude variant of Tasks RCU enabled.
[    0.110455] 	Tracing variant of Tasks RCU enabled.
[    0.110455] rcu: RCU calculated value of scheduler-enlistment delay is 1=
00 jiffies.
[    0.110456] rcu: Adjusting geometry for rcu_fanout_leaf=3D16, nr_cpu_ids=
=3D2
[    0.112571] NR_IRQS: 524544, nr_irqs: 512, preallocated irqs: 16
[    0.112596] xen:events: Using FIFO-based ABI
[    0.112611] xen:events: Xen HVM callback vector for event delivery is en=
abled
[    0.112774] rcu: srcu_init: Setting srcu_struct sizes based on contentio=
n.
[    0.112867] kfence: initialized - using 2097152 bytes for 255 objects at=
 0x(____ptrval____)-0x(____ptrval____)
[    0.175986] Console: colour VGA+ 80x25
[    0.176004] printk: console [hvc0] enabled
[    0.177192] ACPI: Core revision 20221020
[    0.177290] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, =
max_idle_ns: 30580167144 ns
[    0.177370] APIC: Switch to symmetric I/O mode setup
[    0.177906] x2apic enabled
[    0.178418] Switched APIC routing to physical x2apic.
[    0.180149] ..TIMER: vector=3D0x30 apic1=3D0 pin1=3D2 apic2=3D0 pin2=3D0
[    0.184902] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles:=
 0x2868027b22e, max_idle_ns: 440795325881 ns
[    0.184924] Calibrating delay loop (skipped), value calculated using tim=
er frequency.. 5606.37 BogoMIPS (lpj=3D2803186)
[    0.184941] pid_max: default: 32768 minimum: 301
[    0.184968] LSM: initializing lsm=3Dlockdown,capability,yama,integrity,s=
elinux
[    0.184987] Yama: becoming mindful.
[    0.184998] SELinux:  Initializing.
[    0.185031] Mount-cache hash table entries: 2048 (order: 2, 16384 bytes,=
 linear)
[    0.185043] Mountpoint-cache hash table entries: 2048 (order: 2, 16384 b=
ytes, linear)
[    0.185274] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[    0.185338] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[    0.185347] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[    0.185360] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user=
 pointer sanitization
[    0.185374] Spectre V2 : Mitigation: Retpolines
[    0.185382] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB=
 on context switch
[    0.185393] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
[    0.185402] Spectre V2 : Enabling Restricted Speculation for firmware ca=
lls
[    0.185413] Spectre V2 : mitigation: Enabling conditional Indirect Branc=
h Prediction Barrier
[    0.185426] Spectre V2 : User space: Mitigation: STIBP via prctl
[    0.185437] Speculative Store Bypass: Mitigation: Speculative Store Bypa=
ss disabled via prctl
[    0.185452] MDS: Mitigation: Clear CPU buffers
[    0.198292] Freeing SMP alternatives memory: 48K
[    0.198355] clocksource: xen: mask: 0xffffffffffffffff max_cycles: 0x1cd=
42e4dffb, max_idle_ns: 881590591483 ns
[    0.198374] Xen: using vcpuop timer interface
[    0.198380] installing Xen timer for CPU 0
[    0.198441] smpboot: CPU0: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GH=
z (family: 0x6, model: 0x8c, stepping: 0x1)
[    0.198475] cpu 0 spinlock event irq 52
[    0.198573] cblist_init_generic: Setting adjustable number of callback q=
ueues.
[    0.198585] cblist_init_generic: Setting shift to 1 and lim to 1.
[    0.198604] cblist_init_generic: Setting shift to 1 and lim to 1.
[    0.198621] cblist_init_generic: Setting shift to 1 and lim to 1.
[    0.198638] Performance Events: unsupported p6 CPU model 140 no PMU driv=
er, software events only.
[    0.198671] rcu: Hierarchical SRCU implementation.
[    0.198679] rcu: 	Max phase no-delay instances is 400.
[    0.198900] NMI watchdog: Perf NMI watchdog permanently disabled
[    0.198921] smp: Bringing up secondary CPUs ...
[    0.198921] installing Xen timer for CPU 1
[    0.198945] x86: Booting SMP configuration:
[    0.198952] .... node  #0, CPUs:      #1
[    0.065145] APIC: Stale IRR: 00080000,00000000,00000000,00000000,0000000=
0,00000000,00000000,00000000 ISR: 00000000,00000000,00000000,00000000,00000=
000,00000000,00000000,00000000
[    0.201461] cpu 1 spinlock event irq 57
[    0.201964] smp: Brought up 1 node, 2 CPUs
[    0.201973] smpboot: Max logical packages: 1
[    0.201982] smpboot: Total of 2 processors activated (11212.74 BogoMIPS)
[    0.203069] devtmpfs: initialized
[    0.203069] x86/mm: Memory block size: 128MB
[    0.203137] ACPI: PM: Registering ACPI NVS region [mem 0xfc000000-0xfc00=
afff] (45056 bytes)
[    0.203137] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xfffffff=
f, max_idle_ns: 1911260446275000 ns
[    0.203936] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
[    0.204006] pinctrl core: initialized pinctrl subsystem
[    0.204123] PM: RTC time: 07:34:45, date: 2023-05-31
[    0.204353] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.204460] DMA: preallocated 128 KiB GFP_KERNEL pool for atomic allocat=
ions
[    0.204476] DMA: preallocated 128 KiB GFP_KERNEL|GFP_DMA pool for atomic=
 allocations
[    0.204489] DMA: preallocated 128 KiB GFP_KERNEL|GFP_DMA32 pool for atom=
ic allocations
[    0.204510] audit: initializing netlink subsys (disabled)
[    0.204927] audit: type=3D2000 audit(1685518485.303:1): state=3Dinitiali=
zed audit_enabled=3D0 res=3D1
[    0.205011] thermal_sys: Registered thermal governor 'fair_share'
[    0.205037] thermal_sys: Registered thermal governor 'bang_bang'
[    0.205049] thermal_sys: Registered thermal governor 'step_wise'
[    0.205059] thermal_sys: Registered thermal governor 'user_space'
[    0.205079] cpuidle: using governor menu
[    0.205963] PCI: Using configuration type 1 for base access
[    0.206083] kprobes: kprobe jump-optimization is enabled. All kprobes ar=
e optimized if possible.
[    0.244027] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[    0.244027] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
[    0.244027] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[    0.244027] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
[    0.244960] cryptd: max_cpu_qlen set to 1000
[    0.244970] raid6: skipped pq benchmark and selected avx512x4
[    0.244970] raid6: using avx512x2 recovery algorithm
[    0.245004] ACPI: Added _OSI(Module Device)
[    0.245012] ACPI: Added _OSI(Processor Device)
[    0.245020] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.245028] ACPI: Added _OSI(Processor Aggregator Device)
[    0.250484] ACPI: 3 ACPI AML tables successfully acquired and loaded
[    0.251325] xen: --> pirq=3D16 -> irq=3D9 (gsi=3D9)
[    0.253462] ACPI: Interpreter enabled
[    0.253477] ACPI: PM: (supports S0 S3 S5)
[    0.253486] ACPI: Using IOAPIC for interrupt routing
[    0.254157] PCI: Using host bridge windows from ACPI; if necessary, use =
"pci=3Dnocrs" and report a bug
[    0.254172] PCI: Ignoring E820 reservations for host bridge windows
[    0.254431] ACPI: Enabled 2 GPEs in block 00 to 0F
[    0.262927] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.262943] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MS=
I EDR HPX-Type3]
[    0.262957] acpi PNP0A03:00: _OSC: not requesting OS control; OS require=
s [ExtendedConfig ASPM ClockPM MSI]
[    0.262977] acpi PNP0A03:00: fail to add MMCONFIG information, can't acc=
ess extended configuration space under this bridge
[    0.263236] PCI host bridge to bus 0000:00
[    0.263243] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.263254] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.263265] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bfff=
f window]
[    0.263277] pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfbfffff=
f window]
[    0.263289] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.263482] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[    0.267278] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[    0.271370] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[    0.273924] pci 0000:00:01.1: reg 0x20: [io  0xc200-0xc20f]
[    0.274967] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x=
01f7]
[    0.274981] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.274991] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x=
0177]
[    0.275003] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    0.276018] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[    0.279145] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX=
4 ACPI
[    0.279203] pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX=
4 SMB
[    0.280529] pci 0000:00:02.0: [5853:0001] type 00 class 0xff8000
[    0.281924] pci 0000:00:02.0: reg 0x10: [io  0xc000-0xc0ff]
[    0.282453] pci 0000:00:02.0: reg 0x14: [mem 0xf0000000-0xf0ffffff pref]
[    0.288336] pci 0000:00:04.0: [1234:1111] type 00 class 0x030000
[    0.289497] pci 0000:00:04.0: reg 0x10: [mem 0xf1000000-0xf1ffffff pref]
[    0.291280] pci 0000:00:04.0: reg 0x18: [mem 0xf2016000-0xf2016fff]
[    0.294277] pci 0000:00:04.0: reg 0x30: [mem 0xf2000000-0xf200ffff pref]
[    0.294521] pci 0000:00:04.0: Video device with shadowed ROM at [mem 0x0=
00c0000-0x000dffff]
[    0.295539] pci 0000:00:05.0: [8086:24cd] type 00 class 0x0c0320
[    0.296924] pci 0000:00:05.0: reg 0x10: [mem 0xf2017000-0xf2017fff]
[    0.305422] pci 0000:00:06.0: [8086:2725] type 00 class 0x028000
[    0.314490] pci 0000:00:06.0: reg 0x10: [mem 0xf2010000-0xf2013fff 64bit]
[    0.349073] ACPI: PCI: Interrupt link LNKA configured for IRQ 5
[    0.349355] ACPI: PCI: Interrupt link LNKB configured for IRQ 10
[    0.349980] ACPI: PCI: Interrupt link LNKC configured for IRQ 11
[    0.350250] ACPI: PCI: Interrupt link LNKD configured for IRQ 5
[    0.353021] xen:balloon: Initialising balloon driver
[    0.353048] iommu: Default domain type: Translated=20
[    0.353048] iommu: DMA domain TLB invalidation policy: lazy mode=20
[    0.353048] SCSI subsystem initialized
[    0.353048] libata version 3.00 loaded.
[    0.353048] ACPI: bus type USB registered
[    0.353048] usbcore: registered new interface driver usbfs
[    0.353048] usbcore: registered new interface driver hub
[    0.353048] usbcore: registered new device driver usb
[    0.353048] pps_core: LinuxPPS API ver. 1 registered
[    0.353048] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo =
Giometti <giometti@linux.it>
[    0.353048] PTP clock support registered
[    0.353048] EDAC MC: Ver: 3.0.0
[    0.355174] NetLabel: Initializing
[    0.355174] NetLabel:  domain hash size =3D 128
[    0.355174] NetLabel:  protocols =3D UNLABELED CIPSOv4 CALIPSO
[    0.355174] NetLabel:  unlabeled traffic allowed by default
[    0.355174] mctp: management component transport protocol core
[    0.355174] NET: Registered PF_MCTP protocol family
[    0.355174] PCI: Using ACPI for IRQ routing
[    0.355174] PCI: pci_cache_line_size set to 64 bytes
[    0.357429] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
[    0.357431] e820: reserve RAM buffer [mem 0x3efff000-0x3fffffff]
[    0.357455] pci 0000:00:04.0: vgaarb: setting as boot VGA device
[    0.357455] pci 0000:00:04.0: vgaarb: bridge control possible
[    0.357455] pci 0000:00:04.0: vgaarb: VGA device added: decodes=3Dio+mem=
,owns=3Dio+mem,locks=3Dnone
[    0.357925] vgaarb: loaded
[    0.357983] hpet: 3 channels of 0 reserved for per-cpu timers
[    0.358003] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    0.358014] hpet0: 3 comparators, 64-bit 62.500000 MHz counter
[    0.360958] clocksource: Switched to clocksource xen
[    0.369190] VFS: Disk quotas dquot_6.6.0
[    0.369205] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 byte=
s)
[    0.369249] pnp: PnP ACPI init
[    0.369297] system 00:00: [mem 0x00000000-0x0009ffff] could not be reser=
ved
[    0.369351] system 00:01: [io  0x08a0-0x08a3] has been reserved
[    0.369363] system 00:01: [io  0x0cc0-0x0ccf] has been reserved
[    0.369374] system 00:01: [io  0x04d0-0x04d1] has been reserved
[    0.369402] xen: --> pirq=3D18 -> irq=3D8 (gsi=3D8)
[    0.369421] xen: --> pirq=3D19 -> irq=3D12 (gsi=3D12)
[    0.369442] xen: --> pirq=3D20 -> irq=3D1 (gsi=3D1)
[    0.369460] xen: --> pirq=3D21 -> irq=3D6 (gsi=3D6)
[    0.369461] pnp 00:05: [dma 2]
[    0.369499] system 00:06: [io  0xae00-0xae0f] has been reserved
[    0.369511] system 00:06: [io  0xb044-0xb047] has been reserved
[    0.370633] pnp: PnP ACPI: found 7 devices
[    0.377652] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, m=
ax_idle_ns: 2085701024 ns
[    0.377698] NET: Registered PF_INET protocol family
[    0.377727] IP idents hash table entries: 16384 (order: 5, 131072 bytes,=
 linear)
[    0.377918] tcp_listen_portaddr_hash hash table entries: 512 (order: 1, =
8192 bytes, linear)
[    0.377933] Table-perturb hash table entries: 65536 (order: 6, 262144 by=
tes, linear)
[    0.377947] TCP established hash table entries: 8192 (order: 4, 65536 by=
tes, linear)
[    0.377965] TCP bind hash table entries: 8192 (order: 6, 262144 bytes, l=
inear)
[    0.377997] TCP: Hash tables configured (established 8192 bind 8192)
[    0.378027] MPTCP token hash table entries: 1024 (order: 2, 24576 bytes,=
 linear)
[    0.378046] UDP hash table entries: 512 (order: 2, 16384 bytes, linear)
[    0.378059] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes, lin=
ear)
[    0.378087] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    0.378102] NET: Registered PF_XDP protocol family
[    0.378115] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
[    0.378126] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
[    0.378137] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff windo=
w]
[    0.378147] pci_bus 0000:00: resource 7 [mem 0xf0000000-0xfbffffff windo=
w]
[    0.378257] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    0.517324] pci 0000:00:00.0: quirk_passive_release+0x0/0xb0 took 135884=
 usecs
[    0.517345] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.517880] xen: --> pirq=3D22 -> irq=3D39 (gsi=3D39)
[    0.519007] PCI: CLS 0 bytes, default 64
[    0.519082] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x286=
8027b22e, max_idle_ns: 440795325881 ns
[    0.519086] Trying to unpack rootfs image as initramfs...
[    0.519163] clocksource: Switched to clocksource tsc
[    0.519751] Initialise system trusted keyrings
[    0.519769] Key type blacklist registered
[    0.519895] workingset: timestamp_bits=3D36 max_order=3D18 bucket_order=
=3D0
[    0.519921] zbud: loaded
[    0.520409] integrity: Platform Keyring initialized
[    0.520419] integrity: Machine keyring initialized
[    0.527409] NET: Registered PF_ALG protocol family
[    0.527422] xor: automatically using best checksumming function   avx   =
   =20
[    0.527435] Key type asymmetric registered
[    0.527442] Asymmetric key parser 'x509' registered
[    0.592548] Freeing initrd memory: 7276K
[    0.594089] Block layer SCSI generic (bsg) driver version 0.4 loaded (ma=
jor 245)
[    0.594169] io scheduler mq-deadline registered
[    0.594179] io scheduler kyber registered
[    0.594191] io scheduler bfq registered
[    0.595314] atomic64_test: passed for x86-64 platform with CX8 and with =
SSE
[    0.595586] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/inpu=
t/input0
[    0.595620] ACPI: button: Power Button [PWRF]
[    0.595653] input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/inpu=
t/input1
[    0.595675] ACPI: button: Sleep Button [SLPF]
[    0.619696] xen: --> pirq=3D23 -> irq=3D24 (gsi=3D24)
[    0.620050] xen:grant_table: Grant tables using version 1 layout
[    0.620137] Grant table initialized
[    0.620347] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    0.621375] Non-volatile memory driver v1.3
[    0.621407] Linux agpgart interface v0.103
[    0.621640] ACPI: bus type drm_connector registered
[    0.630212] ata_piix 0000:00:01.1: version 2.13
[    0.630265] ata_piix 0000:00:01.1: enabling device (0000 -> 0001)
[    0.631280] scsi host0: ata_piix
[    0.631371] scsi host1: ata_piix
[    0.631392] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc200 irq 14
[    0.631403] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc208 irq 15
[    0.631569] usbcore: registered new interface driver usbserial_generic
[    0.631583] usbserial: USB Serial support registered for generic
[    0.631613] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f13:PS2M] at 0x60,0x64 irq 1,12
[    0.634445] serio: i8042 KBD port at 0x60,0x64 irq 1
[    0.634459] serio: i8042 AUX port at 0x60,0x64 irq 12
[    0.634530] mousedev: PS/2 mouse device common for all mice
[    0.635781] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
[    0.636593] rtc_cmos 00:02: registered as rtc0
[    0.636631] rtc_cmos 00:02: setting system clock to 2023-05-31T07:34:45 UTC (1685518485)
[    0.636664] rtc_cmos 00:02: alarms up to one day, 114 bytes nvram
[    0.636686] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[    0.636707] device-mapper: uevent: version 1.0.3
[    0.642574] device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
[    0.643434] intel_pstate: CPU model not supported
[    0.643499] hid: raw HID events driver (C) Jiri Kosina
[    0.643525] usbcore: registered new interface driver usbhid
[    0.643534] usbhid: USB HID core driver
[    0.643611] drop_monitor: Initializing network drop monitor service
[    0.643673] Initializing XFRM netlink socket
[    0.643693] NET: Registered PF_INET6 protocol family
[    0.644937] Segment Routing with IPv6
[    0.644946] RPL Segment Routing with IPv6
[    0.644957] In-situ OAM (IOAM) with IPv6
[    0.644978] mip6: Mobile IPv6
[    0.644987] NET: Registered PF_PACKET protocol family
[    0.645179] IPI shorthand broadcast: enabled
[    0.717748] AVX2 version of gcm_enc/dec engaged.
[    0.717839] AES CTR mode by8 optimization enabled
[    0.719003] sched_clock: Marking stable (654403398, 64145092)->(861997163, -143448673)
[    0.719159] registered taskstats version 1
[    0.719246] Loading compiled-in X.509 certificates
[    0.725713] Loaded X.509 cert 'Build time autogenerated kernel key: 71b12e3caf24d55eb39a8d585ceb01a5ebb6ae3a'
[    0.725826] zswap: loaded using pool lzo/zbud
[    0.727372] page_owner is disabled
[    0.727468] Key type .fscrypt registered
[    0.727476] Key type fscrypt-provisioning registered
[    0.727703] Btrfs loaded, crc32c=crc32c-generic, zoned=yes, fsverity=yes
[    0.727727] Key type big_key registered
[    0.728980] Key type encrypted registered
[    0.728996] ima: No TPM chip found, activating TPM-bypass!
[    0.729007] Loading compiled-in module X.509 certificates
[    0.729379] Loaded X.509 cert 'Build time autogenerated kernel key: 71b12e3caf24d55eb39a8d585ceb01a5ebb6ae3a'
[    0.729399] ima: Allocated hash algorithm: sha256
[    0.729415] ima: No architecture policies found
[    0.729430] evm: Initialising EVM extended attributes:
[    0.729438] evm: security.selinux
[    0.729445] evm: security.SMACK64 (disabled)
[    0.729453] evm: security.SMACK64EXEC (disabled)
[    0.729461] evm: security.SMACK64TRANSMUTE (disabled)
[    0.729469] evm: security.SMACK64MMAP (disabled)
[    0.729477] evm: security.apparmor
[    0.729483] evm: security.ima
[    0.729489] evm: security.capability
[    0.729495] evm: HMAC attrs: 0x1
[    0.749002] alg: No test for 842 (842-scomp)
[    0.749038] alg: No test for 842 (842-generic)
[    0.827965] xenbus_probe_frontend: Device with no driver: device/vbd/51712
[    0.827981] xenbus_probe_frontend: Device with no driver: device/vbd/51728
[    0.828003] xenbus_probe_frontend: Device with no driver: device/vbd/51744
[    0.828014] xenbus_probe_frontend: Device with no driver: device/vbd/51760
[    0.828070] PM:   Magic number: 7:95:573
[    0.828147] RAS: Correctable Errors collector initialized.
[    0.833775] Freeing unused decrypted memory: 2036K
[    0.834286] Freeing unused kernel image (initmem) memory: 4992K
[    0.835596] Write protecting the kernel read-only data: 28672k
[    0.835880] Freeing unused kernel image (rodata/data gap) memory: 1548K
[    0.835920] rodata_test: all tests were successful
[    0.835933] Run /init as init process
[    0.835940]   with arguments:
[    0.835941]     /init
[    0.835942]     rd_NO_PLYMOUTH
[    0.835942]   with environment:
[    0.835943]     HOME=/
[    0.835943]     TERM=linux
[    0.838682] Invalid max_queues (4), will use default max: 2.
[    0.846432] blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[    0.862006]  xvda: xvda1 xvda2 xvda3
[    0.863374] blkfront: xvdb: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[    0.864929] blkfront: xvdc: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[    0.866067] blkfront: xvdd: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[    1.235755]  xvdc: xvdc1 xvdc3
[    1.260299] EXT4-fs (xvda3): mounted filesystem f76da228-62b7-4543-835d-9ab0d57bbc45 with ordered data mode. Quota mode: none.
[    1.265182] /dev/xvdd: Can't open blockdev
[    1.266137] EXT4-fs (xvdd): mounting ext3 file system using the ext4 subsystem
[    1.269835] EXT4-fs (xvdd): mounted filesystem 63438b4b-f5d8-462a-82fe-9d52951a6722 with ordered data mode. Quota mode: none.
[    1.349081] audit: type=1404 audit(1685518486.211:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
[    1.373517] SELinux:  Class user_namespace not defined in policy.
[    1.373537] SELinux: the above unknown classes and permissions will be allowed
[    1.375284] SELinux:  policy capability network_peer_controls=1
[    1.375297] SELinux:  policy capability open_perms=1
[    1.375306] SELinux:  policy capability extended_socket_class=1
[    1.375316] SELinux:  policy capability always_check_network=0
[    1.375326] SELinux:  policy capability cgroup_seclabel=1
[    1.375334] SELinux:  policy capability nnp_nosuid_transition=1
[    1.375344] SELinux:  policy capability genfs_seclabel_symlinks=1
[    1.375357] SELinux:  policy capability ioctl_skip_cloexec=0
[    1.406404] audit: type=1403 audit(1685518486.268:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
[    1.408271] systemd[1]: Successfully loaded SELinux policy in 59.906ms.
[    1.433151] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.663ms.
[    1.439705] systemd[1]: systemd 251.14-2.fc37 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[    1.439761] systemd[1]: Detected virtualization xen.
[    1.439773] systemd[1]: Detected architecture x86-64.
[    1.441169] systemd[1]: No hostname configured, using default hostname.
[    1.441214] systemd[1]: Hostname set to <fedora>.
[    1.489698] systemd[1]: bpf-lsm: BPF LSM hook not enabled in the kernel, BPF LSM not supported
[    1.490215] memfd_create() without MFD_EXEC nor MFD_NOEXEC_SEAL, pid=1 'systemd'
[    1.525856] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
[    1.648265] block xvda: the capability attribute has been deprecated.
[    1.648419] systemd-gpt-auto-generator[226]: Failed to dissect: Permission denied
[    1.648724] (sd-execut[218]: /usr/lib/systemd/system-generators/systemd-gpt-auto-generator failed with exit status 1.
[    1.725139] systemd[1]: /usr/lib/systemd/system/qubes-gui-agent.service:15: Standard output type syslog is obsolete, automatically updating to journal. Please update your unit file, and consider removing the setting altogether.
[    1.787876] systemd[1]: Queued start job for default target multi-user.target.
[    1.799152] systemd[1]: Created slice system-getty.slice - Slice /system/getty.
[    1.800100] systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
[    1.800859] systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
[    1.801470] systemd[1]: Created slice user.slice - User and Session Slice.
[    1.801830] systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
[    1.802132] systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
[    1.802835] systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
[    1.803025] systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
[    1.803158] systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
[    1.803306] systemd[1]: Reached target remote-fs.target - Remote File Systems.
[    1.803420] systemd[1]: Reached target slices.target - Slice Units.
[    1.803538] systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
[    1.806744] systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
[    1.806996] systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe.
[    1.807505] systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
[    1.808159] systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
[    1.808722] systemd[1]: Listening on systemd-journald.socket - Journal Socket.
[    1.809438] systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
[    1.810802] systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
[    1.811123] systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
[    1.811464] systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
[    1.849651] systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
[    1.850451] systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
[    1.851217] systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
[    1.851966] systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
[    1.852744] systemd[1]: Starting dev-xvdc1-swap.service - Enable swap on /dev/xvdc1 early...
[    1.854825] systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
[    1.855630] systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
[    1.856409] systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
[    1.857186] systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
[    1.857968] systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
[    1.858758] systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
[    1.860236] systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
[    1.860598] Adding 1048572k swap on /dev/xvdc1.  Priority:-2 extents:1 across:1048572k SSFS
[    1.861141] systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
[    1.861981] systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
[    1.862820] systemd[1]: Condition check resulted in dev-xvdc1.swap - /dev/xvdc1 being skipped.
[    1.862857] systemd[1]: Unnecessary job was removed for dev-xvdc1.device - /dev/xvdc1.
[    1.863125] systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
[    1.863320] systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
[    1.863644] systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
[    1.863771] systemd[1]: Reached target swap.target - Swaps.
[    1.865021] systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
[    1.866192] systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
[    1.866450] systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
[    1.868354] systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
[    1.869924] systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
[    1.874343] systemd[1]: dev-xvdc1-swap.service: Deactivated successfully.
[    1.874460] systemd[1]: Finished dev-xvdc1-swap.service - Enable swap on /dev/xvdc1 early.
[    2.049357] systemd[1]: modprobe@configfs.service: Deactivated successfully.
[    2.049489] systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
[    2.049748] systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
[    2.049840] systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
[    2.050025] systemd[1]: modprobe@drm.service: Deactivated successfully.
[    2.050112] systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
[    2.052726] systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
[    2.052812] systemd[1]: systemd-fsck-root.service - File System Check on Root Device was skipped because of a failed condition check (ConditionPathIsReadWrite=!/).
[    2.053631] systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
[    2.055367] systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
[    2.063069] EXT4-fs (xvda3): re-mounted f76da228-62b7-4543-835d-9ab0d57bbc45. Quota mode: none.
[    2.080095] fuse: init (API version 7.38)
[    2.081139] systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
[    2.081310] systemd[1]: ostree-remount.service - OSTree Remount OS/ Bind Mounts was skipped because of a failed condition check (ConditionKernelCommandLine=ostree).
[    2.081378] systemd[1]: systemd-firstboot.service - First Boot Wizard was skipped because of a failed condition check (ConditionFirstBoot=yes).
[    2.081954] systemd[1]: systemd-hwdb-update.service - Rebuild Hardware Database was skipped because of a failed condition check (ConditionNeedsUpdate=/etc).
[    2.082010] systemd[1]: systemd-sysusers.service - Create System Users was skipped because of a failed condition check (ConditionNeedsUpdate=/etc).
[    2.084015] xen:xen_evtchn: Event-channel device installed
[    2.084787] loop: module loaded
[    2.085654] systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
[    2.086004] systemd[1]: modprobe@fuse.service: Deactivated successfully.
[    2.086126] systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
[    2.086352] systemd[1]: modprobe@loop.service: Deactivated successfully.
[    2.086450] systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
[    2.086682] systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because all trigger condition checks failed.
[    2.115478] systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
[    2.119529] systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
[    2.248928] systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
[    2.252718] systemd[1]: Started haveged.service - Entropy Daemon based on the HAVEGE algorithm.
[    2.254066] systemd[1]: Starting systemd-journald.service - Journal Service...
[    2.254889] systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
[    2.280631] systemd[1]: Started systemd-journald.service - Journal Service.
[    2.294713] Rounding down aligned max_sectors from 4294967295 to 4294967288
[    2.294764] db_root: cannot open: /etc/target
[    2.303696] audit: type=1130 audit(1685518487.143:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    2.438891] audit: type=1130 audit(1685518487.301:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    2.443290] systemd-journald[264]: Received client request to flush runtime journal.
[    2.445625] audit: type=1130 audit(1685518487.308:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    2.515666] audit: type=1130 audit(1685518487.378:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    2.515702] audit: type=1131 audit(1685518487.378:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    2.515932] audit: type=1130 audit(1685518487.378:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    2.520762] memmap_init_zone_device initialised 32768 pages in 0ms
[    2.522921] audit: type=1130 audit(1685518487.385:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=qubes-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    2.557791] FDC 0 is a S82078B
[    2.563377] piix4_smbus 0000:00:01.3: SMBus Host Controller not enabled!
[    2.590058] input: PC Speaker as /devices/platform/pcspkr/input/input5
[    2.603567] ehci-pci 0000:00:05.0: EHCI Host Controller
[    2.603686] ehci-pci 0000:00:05.0: new USB bus registered, assigned bus number 1
[    2.605505] ehci-pci 0000:00:05.0: irq 39, io mem 0xf2017000
[    2.613259] ehci-pci 0000:00:05.0: USB 2.0 started, EHCI 1.00
[    2.613372] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.03
[    2.613386] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    2.613398] usb usb1: Product: EHCI Host Controller
[    2.613406] usb usb1: Manufacturer: Linux 6.3.2-1.qubes.fc37.x86_64 ehci_hcd
[    2.613418] usb usb1: SerialNumber: 0000:00:05.0
[    2.613514] hub 1-0:1.0: USB hub found
[    2.613525] hub 1-0:1.0: 6 ports detected
[    2.732879] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[    2.733203] Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[    2.754151] EXT4-fs (xvdb): mounted filesystem 746fe74c-2e8d-4d06-9216-b4fd74ea9071 with ordered data mode. Quota mode: none.
[    2.853580] usb 1-1: new high-speed USB device number 2 using ehci-pci
[    2.922926] Intel(R) Wireless WiFi driver for Linux
[    2.924098] xen: --> pirq=17 -> irq=40 (gsi=40)
[    2.974268] iwlwifi 0000:00:06.0: api flags index 2 larger than supported by driver
[    2.974299] iwlwifi 0000:00:06.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 0.0.2.36
[    2.974976] iwlwifi 0000:00:06.0: loaded firmware version 74.fe17486e.0 ty-a0-gf-a0-74.ucode op_mode iwlmvm
[    2.989117] intel_rapl_msr: PL4 support detected.
[    3.012135] usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
[    3.012156] usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
[    3.012170] usb 1-1: Product: QEMU USB Tablet
[    3.012178] usb 1-1: Manufacturer: QEMU
[    3.012210] usb 1-1: SerialNumber: 42
[    3.032337] input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:05.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input6
[    3.032466] hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:05.0-1/input0
[    3.073319] iwlwifi 0000:00:06.0: Detected Intel(R) Wi-Fi 6 AX210 160MHz, REV=0x420
[    3.073377] thermal thermal_zone0: failed to read out thermal zone (-61)
[    3.083582] iwlwifi 0000:00:06.0: WRT: Invalid buffer destination
[    3.246951] iwlwifi 0000:00:06.0: WFPM_UMAC_PD_NOTIFICATION: 0x1f
[    3.247176] iwlwifi 0000:00:06.0: WFPM_LMAC2_PD_NOTIFICATION: 0x1f
[    3.247315] iwlwifi 0000:00:06.0: WFPM_AUTH_KEY_0: 0x80
[    3.247425] iwlwifi 0000:00:06.0: CNVI_SCU_SEQ_DATA_DW9: 0x0
[    3.249710] iwlwifi 0000:00:06.0: loaded PNVM version e4a49534
[    3.264991] iwlwifi 0000:00:06.0: Detected RF GF, rfid=0x10d000
[    3.340162] iwlwifi 0000:00:06.0: base HW address: 00:93:37:95:ca:3b
[    3.545603] iwlwifi 0000:00:06.0: firmware didn't ACK the reset - continue anyway
[    3.546216] iwlwifi 0000:00:06.0: Start IWL Error Log Dump:
[    3.546227] iwlwifi 0000:00:06.0: Transport status: 0x0000004A, valid: 6
[    3.546238] iwlwifi 0000:00:06.0: Loaded firmware version: 74.fe17486e.0 ty-a0-gf-a0-74.ucode
[    3.546253] iwlwifi 0000:00:06.0: 0x00000084 | NMI_INTERRUPT_UNKNOWN
[    3.546264] iwlwifi 0000:00:06.0: 0x00A082F0 | trm_hw_status0
[    3.546275] iwlwifi 0000:00:06.0: 0x00000000 | trm_hw_status1
[    3.546285] iwlwifi 0000:00:06.0: 0x004DBE68 | branchlink2
[    3.546294] iwlwifi 0000:00:06.0: 0x004BFC9C | interruptlink1
[    3.546304] iwlwifi 0000:00:06.0: 0x004BFC9C | interruptlink2
[    3.546314] iwlwifi 0000:00:06.0: 0x00016F32 | data1
[    3.546322] iwlwifi 0000:00:06.0: 0x01000000 | data2
[    3.546331] iwlwifi 0000:00:06.0: 0x00000000 | data3
[    3.546339] iwlwifi 0000:00:06.0: 0x00000000 | beacon time
[    3.546350] iwlwifi 0000:00:06.0: 0x0002A565 | tsf low
[    3.546358] iwlwifi 0000:00:06.0: 0x00000000 | tsf hi
[    3.546366] iwlwifi 0000:00:06.0: 0x00000000 | time gp1
[    3.546375] iwlwifi 0000:00:06.0: 0x0003F04E | time gp2
[    3.546383] iwlwifi 0000:00:06.0: 0x00000001 | uCode revision type
[    3.546393] iwlwifi 0000:00:06.0: 0x0000004A | uCode version major
[    3.546404] iwlwifi 0000:00:06.0: 0xFE17486E | uCode version minor
[    3.546414] iwlwifi 0000:00:06.0: 0x00000420 | hw version
[    3.546422] iwlwifi 0000:00:06.0: 0x18C80002 | board version
[    3.546433] iwlwifi 0000:00:06.0: 0x8071FF00 | hcmd
[    3.546441] iwlwifi 0000:00:06.0: 0x20020000 | isr0
[    3.546449] iwlwifi 0000:00:06.0: 0x00000000 | isr1
[    3.546458] iwlwifi 0000:00:06.0: 0x48F00002 | isr2
[    3.546466] iwlwifi 0000:00:06.0: 0x00C0000C | isr3
[    3.546474] iwlwifi 0000:00:06.0: 0x00000000 | isr4
[    3.546483] iwlwifi 0000:00:06.0: 0x00000000 | last cmd Id
[    3.546491] iwlwifi 0000:00:06.0: 0x00016F32 | wait_event
[    3.546499] iwlwifi 0000:00:06.0: 0x00000000 | l2p_control
[    3.546508] iwlwifi 0000:00:06.0: 0x00000000 | l2p_duration
[    3.546516] iwlwifi 0000:00:06.0: 0x00000000 | l2p_mhvalid
[    3.546787] iwlwifi 0000:00:06.0: 0x00000000 | l2p_addr_match
[    3.546798] iwlwifi 0000:00:06.0: 0x00000009 | lmpm_pmg_sel
[    3.546806] iwlwifi 0000:00:06.0: 0x00000000 | timestamp
[    3.546815] iwlwifi 0000:00:06.0: 0x00000028 | flow_handler
[    3.547039] iwlwifi 0000:00:06.0: Start IWL Error Log Dump:
[    3.547049] iwlwifi 0000:00:06.0: Transport status: 0x0000004A, valid: 7
[    3.547059] iwlwifi 0000:00:06.0: 0x20000074 | ADVANCED_SYSASSERT
[    3.547070] iwlwifi 0000:00:06.0: 0x00000000 | umac branchlink1
[    3.547080] iwlwifi 0000:00:06.0: 0x8045F470 | umac branchlink2
[    3.547090] iwlwifi 0000:00:06.0: 0x8047EE8E | umac interruptlink1
[    3.547100] iwlwifi 0000:00:06.0: 0x8047EE8E | umac interruptlink2
[    3.547111] iwlwifi 0000:00:06.0: 0x01000000 | umac data1
[    3.547119] iwlwifi 0000:00:06.0: 0x8047EE8E | umac data2
[    3.547127] iwlwifi 0000:00:06.0: 0x00000000 | umac data3
[    3.547136] iwlwifi 0000:00:06.0: 0x0000004A | umac major
[    3.547144] iwlwifi 0000:00:06.0: 0xFE17486E | umac minor
[    3.547152] iwlwifi 0000:00:06.0: 0x0003F159 | frame pointer
[    3.547163] iwlwifi 0000:00:06.0: 0xC0886258 | stack pointer
[    3.547173] iwlwifi 0000:00:06.0: 0x00040F07 | last host cmd
[    3.547183] iwlwifi 0000:00:06.0: 0x00000404 | isr status reg
[    3.547824] iwlwifi 0000:00:06.0: IML/ROM dump:
[    3.547834] iwlwifi 0000:00:06.0: 0x00000B03 | IML/ROM error/state
[    3.548037] iwlwifi 0000:00:06.0: 0x00008572 | IML/ROM data1
[    3.548247] iwlwifi 0000:00:06.0: 0x00000080 | IML/ROM WFPM_AUTH_KEY_0
[    3.548521] iwlwifi 0000:00:06.0: Fseq Registers:
[    3.548679] iwlwifi 0000:00:06.0: 0x60000100 | FSEQ_ERROR_CODE
[    3.548828] iwlwifi 0000:00:06.0: 0x00440007 | FSEQ_TOP_INIT_VERSION
[    3.548978] iwlwifi 0000:00:06.0: 0x00080009 | FSEQ_CNVIO_INIT_VERSION
[    3.549128] iwlwifi 0000:00:06.0: 0x0000A652 | FSEQ_OTP_VERSION
[    3.549277] iwlwifi 0000:00:06.0: 0x00000002 | FSEQ_TOP_CONTENT_VERSION
[    3.549427] iwlwifi 0000:00:06.0: 0x4552414E | FSEQ_ALIVE_TOKEN
[    3.549582] iwlwifi 0000:00:06.0: 0x00400410 | FSEQ_CNVI_ID
[    3.549731] iwlwifi 0000:00:06.0: 0x00400410 | FSEQ_CNVR_ID
[    3.549878] iwlwifi 0000:00:06.0: 0x00400410 | CNVI_AUX_MISC_CHIP
[    3.550029] iwlwifi 0000:00:06.0: 0x00400410 | CNVR_AUX_MISC_CHIP
[    3.550181] iwlwifi 0000:00:06.0: 0x00009061 | CNVR_SCU_SD_REGS_SD_REG_DIG_DCDC_VTRIM
[    3.550334] iwlwifi 0000:00:06.0: 0x00000061 | CNVR_SCU_SD_REGS_SD_REG_ACTIVE_VDIG_MIRROR
[    3.550487] iwlwifi 0000:00:06.0: WRT: Collecting data: ini trigger 4 fired (delay=0ms).
[    4.508930] iwlwifi 0000:00:06.0 wls6: renamed from wlan0
[    4.739155] NET: Registered PF_QIPCRTR protocol family
[    4.752757] iwlwifi 0000:00:06.0: WRT: Invalid buffer destination
[    4.916099] iwlwifi 0000:00:06.0: WFPM_UMAC_PD_NOTIFICATION: 0x1f
[    4.916143] iwlwifi 0000:00:06.0: WFPM_LMAC2_PD_NOTIFICATION: 0x1f
[    4.916179] iwlwifi 0000:00:06.0: WFPM_AUTH_KEY_0: 0x80
[    4.916217] iwlwifi 0000:00:06.0: CNVI_SCU_SEQ_DATA_DW9: 0x0
[    5.241640] iwlwifi 0000:00:06.0: firmware didn't ACK the reset - continue anyway
[    5.241977] iwlwifi 0000:00:06.0: Start IWL Error Log Dump:
[    5.241987] iwlwifi 0000:00:06.0: Transport status: 0x0000004A, valid: 6
[    5.241997] iwlwifi 0000:00:06.0: Loaded firmware version: 74.fe17486e.0 ty-a0-gf-a0-74.ucode
[    5.242012] iwlwifi 0000:00:06.0: 0x00000084 | NMI_INTERRUPT_UNKNOWN
[    5.242023] iwlwifi 0000:00:06.0: 0x000002F0 | trm_hw_status0
[    5.242033] iwlwifi 0000:00:06.0: 0x00000000 | trm_hw_status1
[    5.242043] iwlwifi 0000:00:06.0: 0x004DBE68 | branchlink2
[    5.242052] iwlwifi 0000:00:06.0: 0x004D1DEA | interruptlink1
[    5.242062] iwlwifi 0000:00:06.0: 0x004D1DEA | interruptlink2
[    5.242072] iwlwifi 0000:00:06.0: 0x00016F32 | data1
[    5.242081] iwlwifi 0000:00:06.0: 0x01000000 | data2
[    5.242089] iwlwifi 0000:00:06.0: 0x00000000 | data3
[    5.242098] iwlwifi 0000:00:06.0: 0x003D1E73 | beacon time
[    5.242106] iwlwifi 0000:00:06.0: 0x000318FA | tsf low
[    5.242114] iwlwifi 0000:00:06.0: 0x00000000 | tsf hi
[    5.242122] iwlwifi 0000:00:06.0: 0x00000000 | time gp1
[    5.242131] iwlwifi 0000:00:06.0: 0x0004637B | time gp2
[    5.242139] iwlwifi 0000:00:06.0: 0x00000001 | uCode revision type
[    5.242150] iwlwifi 0000:00:06.0: 0x0000004A | uCode version major
[    5.242160] iwlwifi 0000:00:06.0: 0xFE17486E | uCode version minor
[    5.242170] iwlwifi 0000:00:06.0: 0x00000420 | hw version
[    5.242179] iwlwifi 0000:00:06.0: 0x18C80002 | board version
[    5.242189] iwlwifi 0000:00:06.0: 0x80B3FF00 | hcmd
[    5.242197] iwlwifi 0000:00:06.0: 0x00020000 | isr0
[    5.242205] iwlwifi 0000:00:06.0: 0x00000000 | isr1
[    5.242214] iwlwifi 0000:00:06.0: 0x48F04802 | isr2
[    5.242222] iwlwifi 0000:00:06.0: 0x04C3000C | isr3
[    5.242230] iwlwifi 0000:00:06.0: 0x00000000 | isr4
[    5.242238] iwlwifi 0000:00:06.0: 0x001B0148 | last cmd Id
[    5.242247] iwlwifi 0000:00:06.0: 0x00016F32 | wait_event
[    5.242255] iwlwifi 0000:00:06.0: 0x00000000 | l2p_control
[    5.242263] iwlwifi 0000:00:06.0: 0x00000000 | l2p_duration
[    5.242271] iwlwifi 0000:00:06.0: 0x00000000 | l2p_mhvalid
[    5.242280] iwlwifi 0000:00:06.0: 0x00000000 | l2p_addr_match
[    5.242290] iwlwifi 0000:00:06.0: 0x00000018 | lmpm_pmg_sel
[    5.242298] iwlwifi 0000:00:06.0: 0x00000000 | timestamp
[    5.242306] iwlwifi 0000:00:06.0: 0x00001840 | flow_handler
[    5.242531] iwlwifi 0000:00:06.0: Start IWL Error Log Dump:
[    5.242540] iwlwifi 0000:00:06.0: Transport status: 0x0000004A, valid: 7
[    5.242563] iwlwifi 0000:00:06.0: 0x20000074 | ADVANCED_SYSASSERT
[    5.242574] iwlwifi 0000:00:06.0: 0x00000000 | umac branchlink1
[    5.242584] iwlwifi 0000:00:06.0: 0x8045F470 | umac branchlink2
[    5.242594] iwlwifi 0000:00:06.0: 0xC00818A8 | umac interruptlink1
[    5.242604] iwlwifi 0000:00:06.0: 0x8047EE8E | umac interruptlink2
[    5.242614] iwlwifi 0000:00:06.0: 0x01000000 | umac data1
[    5.242622] iwlwifi 0000:00:06.0: 0x8047EE8E | umac data2
[    5.242631] iwlwifi 0000:00:06.0: 0x00000000 | umac data3
[    5.242639] iwlwifi 0000:00:06.0: 0x0000004A | umac major
[    5.242647] iwlwifi 0000:00:06.0: 0xFE17486E | umac minor
[    5.242655] iwlwifi 0000:00:06.0: 0x00046486 | frame pointer
[    5.242666] iwlwifi 0000:00:06.0: 0xC0886258 | stack pointer
[    5.242676] iwlwifi 0000:00:06.0: 0x001C0F07 | last host cmd
[    5.242685] iwlwifi 0000:00:06.0: 0x00000409 | isr status reg
[    5.242887] iwlwifi 0000:00:06.0: IML/ROM dump:
[    5.242896] iwlwifi 0000:00:06.0: 0x00000B03 | IML/ROM error/state
[    5.243099] iwlwifi 0000:00:06.0: 0x0000853F | IML/ROM data1
[    5.243301] iwlwifi 0000:00:06.0: 0x00000080 | IML/ROM WFPM_AUTH_KEY_0
[    5.243500] iwlwifi 0000:00:06.0: Fseq Registers:
[    5.243656] iwlwifi 0000:00:06.0: 0x60000100 | FSEQ_ERROR_CODE
[    5.243806] iwlwifi 0000:00:06.0: 0x00440007 | FSEQ_TOP_INIT_VERSION
[    5.243956] iwlwifi 0000:00:06.0: 0x00080009 | FSEQ_CNVIO_INIT_VERSION
[    5.244106] iwlwifi 0000:00:06.0: 0x0000A652 | FSEQ_OTP_VERSION
[    5.244248] iwlwifi 0000:00:06.0: 0x00000002 | FSEQ_TOP_CONTENT_VERSION
[    5.244397] iwlwifi 0000:00:06.0: 0x4552414E | FSEQ_ALIVE_TOKEN
[    5.244552] iwlwifi 0000:00:06.0: 0x00400410 | FSEQ_CNVI_ID
[    5.244700] iwlwifi 0000:00:06.0: 0x00400410 | FSEQ_CNVR_ID
[    5.244848] iwlwifi 0000:00:06.0: 0x00400410 | CNVI_AUX_MISC_CHIP
[    5.245000] iwlwifi 0000:00:06.0: 0x00400410 | CNVR_AUX_MISC_CHIP
[    5.245151] iwlwifi 0000:00:06.0: 0x00009061 | CNVR_SCU_SD_REGS_SD_REG_DIG_DCDC_VTRIM
[    5.245304] iwlwifi 0000:00:06.0: 0x00000061 | CNVR_SCU_SD_REGS_SD_REG_ACTIVE_VDIG_MIRROR
[    5.245453] iwlwifi 0000:00:06.0: WRT: Collecting data: ini trigger 4 fired (delay=0ms).
[    6.215274] kauditd_printk_skb: 97 callbacks suppressed
[    6.215276] audit: type=1130 audit(1685518491.077:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=qubes-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.226596] iwlwifi 0000:00:06.0: WRT: Invalid buffer destination
[    6.241701] audit: type=1130 audit(1685518491.104:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-user-sessions comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.245580] audit: type=1130 audit(1685518491.108:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=getty@tty1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.249889] audit: type=1130 audit(1685518491.111:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=serial-getty@hvc0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.259120] audit: type=1130 audit(1685518491.122:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=qubes-qrexec-agent comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.270305] audit: type=1130 audit(1685518491.132:82): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=qubes-gui-agent comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    6.318867] audit: type=1325 audit(1685518491.181:83): table=filter:17 family=2 entries=1 op=nft_register_rule pid=609 subj=system_u:system_r:iptables_t:s0 comm="iptables"
[    6.318896] audit: type=1300 audit(1685518491.181:83): arch=c000003e syscall=46 success=yes exit=548 a0=3 a1=7ffd2a598be0 a2=0 a3=7ffd2a598bcc items=0 ppid=608 pid=609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:iptables_t:s0 key=(null)
[    6.318941] audit: type=1327 audit(1685518491.181:83): proctitle=69707461626C6573002D4900494E505554002D7000746370002D73003139322E33302E3235322E302F3232002D2D64706F7274003830002D6A00414343455054
[    6.320441] audit: type=1325 audit(1685518491.182:84): table=filter:18 family=2 entries=1 op=nft_register_rule pid=610 subj=system_u:system_r:iptables_t:s0 comm="iptables"
[    6.389513] iwlwifi 0000:00:06.0: WFPM_UMAC_PD_NOTIFICATION: 0x1f
[    6.389571] iwlwifi 0000:00:06.0: WFPM_LMAC2_PD_NOTIFICATION: 0x1f
[    6.389611] iwlwifi 0000:00:06.0: WFPM_AUTH_KEY_0: 0x80
[    6.389666] iwlwifi 0000:00:06.0: CNVI_SCU_SEQ_DATA_DW9: 0x0
[    6.392923] systemd-gpt-auto-generator[627]: Failed to dissect: Permission denied
[    6.536558] vif vif-4-0 vif4.0: Guest Rx ready
[    6.550036] vif vif-52-0 vif52.0: Guest Rx ready
[    6.588456] vif vif-31-0 vif31.0: Guest Rx ready
[    7.249610] IPv6: ADDRCONF(NETDEV_CHANGE): vif4.0: link becomes ready
[    7.249673] IPv6: ADDRCONF(NETDEV_CHANGE): vif52.0: link becomes ready
[    7.249695] IPv6: ADDRCONF(NETDEV_CHANGE): vif31.0: link becomes ready
[    9.980278] wls6: authenticate with 0a:b0:4c:5d:7a:c2
[    9.980300] wls6: 80 MHz not supported, disabling VHT
[    9.993987] wls6: send auth to 0a:b0:4c:5d:7a:c2 (try 1/3)
[   10.108954] wls6: authenticated
[   10.109676] wls6: associate with 0a:b0:4c:5d:7a:c2 (try 1/3)
[   10.117897] wls6: RX AssocResp from 0a:b0:4c:5d:7a:c2 (capab=0x1431 status=0 aid=3)
[   10.141777] wls6: associated
[   10.208383] IPv6: ADDRCONF(NETDEV_CHANGE): wls6: link becomes ready
[   11.249613] kauditd_printk_skb: 112 callbacks suppressed
[   11.249617] audit: type=1334 audit(1685518496.111:148): prog-id=36 op=UNLOAD
[   11.249680] audit: type=1334 audit(1685518496.111:149): prog-id=35 op=UNLOAD
[   11.249715] audit: type=1334 audit(1685518496.111:150): prog-id=34 op=UNLOAD
[   11.249748] audit: type=1334 audit(1685518496.111:151): prog-id=32 op=UNLOAD
[   11.249781] audit: type=1334 audit(1685518496.111:152): prog-id=31 op=UNLOAD
[   11.249814] audit: type=1334 audit(1685518496.111:153): prog-id=30 op=UNLOAD
[   11.249849] audit: type=1334 audit(1685518496.111:154): prog-id=29 op=UNLOAD
[   11.249883] audit: type=1334 audit(1685518496.111:155): prog-id=28 op=UNLOAD
[   11.249916] audit: type=1334 audit(1685518496.111:156): prog-id=27 op=UNLOAD
[   11.249949] audit: type=1334 audit(1685518496.111:157): prog-id=26 op=UNLOAD
[   16.480856] kauditd_printk_skb: 26 callbacks suppressed
[   16.480859] audit: type=1101 audit(1685518501.343:180): pid=1271 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   16.481321] audit: type=1123 audit(1685518501.343:181): pid=1271 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/user" cmd=646E66202D4320696E7374616C6C202E2F5175626573496E636F6D696E672F68766D746573742F69706572662D322E312E392D312E666333372E7838365F36342E72706D exe="/usr/bin/sudo" terminal=pts/0 res=success'
[   16.482265] audit: type=1110 audit(1685518501.344:182): pid=1271 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   16.484389] audit: type=1105 audit(1685518501.346:183): pid=1271 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   16.484897] audit: type=2300 audit(1685518501.347:184): pid=1271 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='newrole: old-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 new-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   19.842504] audit: type=1130 audit(1685518504.704:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=run-ra1e2a338fc754e09ae3cd0bb1cdf7ab6 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   20.225507] audit: type=1130 audit(1685518505.087:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=man-db-cache-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   20.225747] audit: type=1131 audit(1685518505.088:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=man-db-cache-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   20.242337] audit: type=1131 audit(1685518505.105:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=run-ra1e2a338fc754e09ae3cd0bb1cdf7ab6 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   22.346783] audit: type=1130 audit(1685518507.209:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=unbound-anchor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   22.346821] audit: type=1131 audit(1685518507.209:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=unbound-anchor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   22.608401] audit: type=1131 audit(1685518507.470:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   22.686381] audit: type=1106 audit(1685518507.549:192): pid=1271 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   22.687026] audit: type=1104 audit(1685518507.549:193): pid=1271 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   40.386923] audit: type=1131 audit(1685518524.537:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   40.442095] audit: type=1334 audit(1685518524.593:195): prog-id=39 op=UNLOAD
[   40.442113] audit: type=1334 audit(1685518524.593:196): prog-id=38 op=UNLOAD
[   40.442126] audit: type=1334 audit(1685518524.593:197): prog-id=37 op=UNLOAD
[   78.053908] audit: type=1101 audit(1685518562.201:198): pid=1383 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   78.054205] audit: type=1123 audit(1685518562.201:199): pid=1383 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/user" cmd="dmesg" exe="/usr/bin/sudo" terminal=pts/0 res=success'
[   78.054722] audit: type=1110 audit(1685518562.201:200): pid=1383 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   78.057405] audit: type=1105 audit(1685518562.205:201): pid=1383 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   78.057701] audit: type=2300 audit(1685518562.205:202): pid=1383 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='newrole: old-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 new-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'

--T1KEJXkE9CKFKvx+
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="hvm.log"
Content-Transfer-Encoding: quoted-printable

[    0.000000] Linux version 6.3.2-1.qubes.fc37.x86_64 (mockbuild@0c424897d4764da983d21c08af6d60ea) (gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4), GNU ld version 2.38-27.fc37) #1 SMP PREEMPT_DYNAMIC Thu May 11 22:08:07 GMT 2023
[    0.000000] Command line: root=/dev/mapper/dmroot ro nomodeset console=hvc0 rd_NO_PLYMOUTH rd.plymouth.enable=0 plymouth.enable=0 clocksource=tsc xen_scrub_pages=0  selinux=1 security=selinux
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
[    0.000000] x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
[    0.000000] x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
[    0.000000] x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]:    8
[    0.000000] x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
[    0.000000] signal: max sigframe size: 3632
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000efffefff] usable
[    0.000000] BIOS-e820: [mem 0x00000000effff000-0x00000000efffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fc000000-0x00000000fc00afff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x00000000fc00b000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000108ffffff] usable
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.4 present.
[    0.000000] DMI: Xen HVM domU, BIOS 4.17.0 04/25/2023
[    0.000000] Hypervisor detected: Xen HVM
[    0.000000] Xen version 4.17.
[    0.000000] platform_pci_unplug: Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
[    0.000000] platform_pci_unplug: Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
               You might have to change the root device
               from /dev/hd[a-d] to /dev/xvd[a-d]
               in your root= kernel command line option
[    0.000024] HVMOP_pagetable_dying not supported
[    0.018276] tsc: Fast TSC calibration using PIT
[    0.018277] tsc: Detected 2803.524 MHz processor
[    0.018278] tsc: Detected 2803.186 MHz TSC
[    0.018688] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.018690] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.018693] last_pfn = 0x109000 max_arch_pfn = 0x400000000
[    0.018728] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT
[    0.018774] last_pfn = 0xeffff max_arch_pfn = 0x400000000
[    0.025698] found SMP MP-table at [mem 0x000f5a80-0x000f5a8f]
[    0.025721] Using GB pages for direct mapping
[    0.025847] RAMDISK: [mem 0xef8d5000-0xeffeffff]
[    0.025851] ACPI: Early table checksum verification disabled
[    0.025859] ACPI: RSDP 0x00000000000F59D0 000024 (v02 Xen   )
[    0.025862] ACPI: XSDT 0x00000000FC00A650 000054 (v01 Xen    HVM      00000000 HVML 00000000)
[    0.025867] ACPI: FACP 0x00000000FC00A370 0000F4 (v04 Xen    HVM      00000000 HVML 00000000)
[    0.025871] ACPI: DSDT 0x00000000FC001040 0092A3 (v02 Xen    HVM      00000000 INTL 20220331)
[    0.025874] ACPI: FACS 0x00000000FC001000 000040
[    0.025876] ACPI: FACS 0x00000000FC001000 000040
[    0.025877] ACPI: APIC 0x00000000FC00A470 000070 (v02 Xen    HVM      00000000 HVML 00000000)
[    0.025880] ACPI: HPET 0x00000000FC00A560 000038 (v01 Xen    HVM      00000000 HVML 00000000)
[    0.025882] ACPI: WAET 0x00000000FC00A5A0 000028 (v01 Xen    HVM      00000000 HVML 00000000)
[    0.025884] ACPI: SSDT 0x00000000FC00A5D0 000031 (v02 Xen    HVM      00000000 INTL 20220331)
[    0.025886] ACPI: SSDT 0x00000000FC00A610 000031 (v02 Xen    HVM      00000000 INTL 20220331)
[    0.025888] ACPI: Reserving FACP table memory at [mem 0xfc00a370-0xfc00a463]
[    0.025889] ACPI: Reserving DSDT table memory at [mem 0xfc001040-0xfc00a2e2]
[    0.025889] ACPI: Reserving FACS table memory at [mem 0xfc001000-0xfc00103f]
[    0.025890] ACPI: Reserving FACS table memory at [mem 0xfc001000-0xfc00103f]
[    0.025890] ACPI: Reserving APIC table memory at [mem 0xfc00a470-0xfc00a4df]
[    0.025891] ACPI: Reserving HPET table memory at [mem 0xfc00a560-0xfc00a597]
[    0.025891] ACPI: Reserving WAET table memory at [mem 0xfc00a5a0-0xfc00a5c7]
[    0.025892] ACPI: Reserving SSDT table memory at [mem 0xfc00a5d0-0xfc00a600]
[    0.025892] ACPI: Reserving SSDT table memory at [mem 0xfc00a610-0xfc00a640]
[    0.027144] No NUMA configuration found
[    0.027147] Faking a node at [mem 0x0000000000000000-0x0000000108ffffff]
[    0.027155] NODE_DATA(0) allocated [mem 0x108fd5000-0x108ffffff]
[    0.032431] Zone ranges:
[    0.032466]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.032468]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.032469]   Normal   [mem 0x0000000100000000-0x0000000108ffffff]
[    0.032470]   Device   empty
[    0.032470] Movable zone start for each node
[    0.032473] Early memory node ranges
[    0.032473]   node   0: [mem 0x0000000000001000-0x000000000009efff]
[    0.032509]   node   0: [mem 0x0000000000100000-0x00000000efffefff]
[    0.032510]   node   0: [mem 0x0000000100000000-0x0000000108ffffff]
[    0.032546] Initmem setup node 0 [mem 0x0000000000001000-0x0000000108ffffff]
[    0.032650] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.032666] On node 0, zone DMA: 97 pages in unavailable ranges
[    0.036704] On node 0, zone Normal: 1 pages in unavailable ranges
[    0.036896] On node 0, zone Normal: 28672 pages in unavailable ranges
[    0.038237] ACPI: PM-Timer IO Port: 0xb008
[    0.038317] IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-47
[    0.038320] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.038322] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 low level)
[    0.038322] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 low level)
[    0.038323] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 low level)
[    0.038327] ACPI: Using ACPI (MADT) for SMP configuration information
[    0.038327] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.038330] TSC deadline timer available
[    0.038365] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.038372] [mem 0xf0000000-0xfbffffff] available for PCI devices
[    0.038406] Booting paravirtualized kernel on Xen HVM
[    0.038442] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[    0.043500] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
[    0.044401] percpu: Embedded 63 pages/cpu s221184 r8192 d28672 u1048576
[    0.044406] pcpu-alloc: s221184 r8192 d28672 u1048576 alloc=1*2097152
[    0.044408] pcpu-alloc: [0] 0 1
[    0.044427] xen: PV spinlocks enabled
[    0.044429] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
[    0.044439] Fallback order for Node 0: 0
[    0.044441] Built 1 zonelists, mobility grouping on.  Total pages: 1003711
[    0.044444] Policy zone: Normal
[    0.044445] Kernel command line: root=/dev/mapper/dmroot ro nomodeset console=hvc0 rd_NO_PLYMOUTH rd.plymouth.enable=0 plymouth.enable=0 clocksource=tsc xen_scrub_pages=0  selinux=1 security=selinux
[    0.044469] Booted with the nomodeset parameter. Only the system framebuffer will be available
[    0.044529] Unknown kernel command line parameters "rd_NO_PLYMOUTH", will be passed to user space.
[    0.044580] random: crng init done
[    0.044844] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
[    0.044976] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    0.045177] mem auto-init: stack:all(zero), heap alloc:on, heap free:on
[    0.045178] mem auto-init: clearing system memory may take some time...
[    0.045185] software IO TLB: area num 2.
[    0.814885] Memory: 3868756K/4079220K available (18432K kernel code, 3208K rwdata, 8692K rodata, 4992K init, 18632K bss, 210204K reserved, 0K cma-reserved)
[    0.814975] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.814983] Kernel/User page tables isolation: enabled
[    0.815003] ftrace: allocating 54208 entries in 212 pages
[    0.821692] ftrace: allocated 212 pages with 4 groups
[    0.822378] Dynamic Preempt: voluntary
[    0.822399] rcu: Preemptible hierarchical RCU implementation.
[    0.822403] rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=2.
[    0.822404] 	Trampoline variant of Tasks RCU enabled.
[    0.822404] 	Rude variant of Tasks RCU enabled.
[    0.822404] 	Tracing variant of Tasks RCU enabled.
[    0.822405] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
[    0.822405] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[    0.824533] NR_IRQS: 524544, nr_irqs: 512, preallocated irqs: 16
[    0.824634] xen:events: Using FIFO-based ABI
[    0.824689] xen:events: Xen HVM callback vector for event delivery is enabled
[    0.824891] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[    0.826068] kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
[    0.889781] Console: colour VGA+ 80x25
[    0.889803] printk: console [hvc0] enabled
[    0.891060] ACPI: Core revision 20221020
[    0.891167] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
[    0.891251] APIC: Switch to symmetric I/O mode setup
[    0.891852] x2apic enabled
[    0.892365] Switched APIC routing to physical x2apic.
[    0.894103] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=0 pin2=0
[    0.898731] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2868027b22e, max_idle_ns: 440795325881 ns
[    0.898755] Calibrating delay loop (skipped), value calculated using timer frequency.. 5606.37 BogoMIPS (lpj=2803186)
[    0.898771] pid_max: default: 32768 minimum: 301
[    0.898815] LSM: initializing lsm=lockdown,capability,yama,integrity,selinux
[    0.898834] Yama: becoming mindful.
[    0.898847] SELinux:  Initializing.
[    0.898902] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
[    0.898915] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
[    0.899255] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[    0.899413] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[    0.899421] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[    0.899434] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    0.899449] Spectre V2 : Mitigation: Retpolines
[    0.899457] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    0.899469] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
[    0.899478] Spectre V2 : Enabling Restricted Speculation for firmware calls
[    0.899489] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[    0.899502] Spectre V2 : User space: Mitigation: STIBP via prctl
[    0.899512] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
[    0.899527] MDS: Mitigation: Clear CPU buffers
[    0.912328] Freeing SMP alternatives memory: 48K
[    0.912476] clocksource: xen: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[    0.912589] Xen: using vcpuop timer interface
[    0.912595] installing Xen timer for CPU 0
[    0.912705] smpboot: CPU0: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (family: 0x6, model: 0x8c, stepping: 0x1)
[    0.912741] cpu 0 spinlock event irq 52
[    0.912751] cblist_init_generic: Setting adjustable number of callback queues.
[    0.912760] cblist_init_generic: Setting shift to 1 and lim to 1.
[    0.912832] cblist_init_generic: Setting shift to 1 and lim to 1.
[    0.912865] cblist_init_generic: Setting shift to 1 and lim to 1.
[    0.912897] Performance Events: unsupported p6 CPU model 140 no PMU driver, software events only.
[    0.912961] rcu: Hierarchical SRCU implementation.
[    0.912970] rcu: 	Max phase no-delay instances is 400.
[    0.914253] NMI watchdog: Perf NMI watchdog permanently disabled
[    0.914836] smp: Bringing up secondary CPUs ...
[    0.915089] installing Xen timer for CPU 1
[    0.915168] x86: Booting SMP configuration:
[    0.915174] .... node  #0, CPUs:      #1
[    0.066858] APIC: Stale IRR: 00080000,00000000,00000000,00000000,00000000,00000000,00000000,00000000 ISR: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000
[    0.916819] cpu 1 spinlock event irq 57
[    0.916863] smp: Brought up 1 node, 2 CPUs
[    0.916872] smpboot: Max logical packages: 1
[    0.916881] smpboot: Total of 2 processors activated (11212.74 BogoMIPS)
[    0.917853] devtmpfs: initialized
[    0.917853] x86/mm: Memory block size: 128MB
[    0.919043] ACPI: PM: Registering ACPI NVS region [mem 0xfc000000-0xfc00afff] (45056 bytes)
[    0.919043] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[    0.919043] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
[    0.919043] pinctrl core: initialized pinctrl subsystem
[    0.919812] PM: RTC time: 07:39:11, date: 2023-05-31
[    0.921973] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    0.922091] DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
[    0.922108] DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[    0.922122] DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[    0.922156] audit: initializing netlink subsys (disabled)
[    0.922230] audit: type=2000 audit(1685518751.876:1): state=initialized audit_enabled=0 res=1
[    0.922230] thermal_sys: Registered thermal governor 'fair_share'
[    0.922230] thermal_sys: Registered thermal governor 'bang_bang'
[    0.922757] thermal_sys: Registered thermal governor 'step_wise'
[    0.922767] thermal_sys: Registered thermal governor 'user_space'
[    0.922784] cpuidle: using governor menu
[    0.924119] PCI: Using configuration type 1 for base access
[    0.924243] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
[    0.951785] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[    0.951799] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
[    0.951809] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[    0.951819] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
[    0.964866] cryptd: max_cpu_qlen set to 1000
[    0.965803] raid6: skipped pq benchmark and selected avx512x4
[    0.965869] raid6: using avx512x2 recovery algorithm
[    0.965914] ACPI: Added _OSI(Module Device)
[    0.965914] ACPI: Added _OSI(Processor Device)
[    0.965914] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.965914] ACPI: Added _OSI(Processor Aggregator Device)
[    0.972784] ACPI: 3 ACPI AML tables successfully acquired and loaded
[    0.973313] xen: --> pirq=16 -> irq=9 (gsi=9)
[    0.975965] ACPI: Interpreter enabled
[    0.975979] ACPI: PM: (supports S0 S3 S5)
[    0.975986] ACPI: Using IOAPIC for interrupt routing
[    0.976281] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.976295] PCI: Ignoring E820 reservations for host bridge windows
[    0.976980] ACPI: Enabled 2 GPEs in block 00 to 0F
[    0.985929] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.985945] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI EDR HPX-Type3]
[    0.985959] acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
[    0.986790] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
[    0.987113] PCI host bridge to bus 0000:00
[    0.987122] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.987149] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.987160] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    0.987174] pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfbffffff window]
[    0.987187] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.987807] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
[    0.991334] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
[    0.995845] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
[    0.998160] pci 0000:00:01.1: reg 0x20: [io  0xc300-0xc30f]
[    0.999153] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
[    0.999167] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.999177] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
[    0.999189] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    1.000229] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
[    1.003823] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
[    1.003892] pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
[    1.005755] pci 0000:00:02.0: [5853:0001] type 00 class 0xff8000
[    1.006944] pci 0000:00:02.0: reg 0x10: [io  0xc000-0xc0ff]
[    1.007754] pci 0000:00:02.0: reg 0x14: [mem 0xf0000000-0xf0ffffff pref]
[    1.014755] pci 0000:00:04.0: [1234:1111] type 00 class 0x030000
[    1.015955] pci 0000:00:04.0: reg 0x10: [mem 0xf1000000-0xf1ffffff pref]
[    1.017755] pci 0000:00:04.0: reg 0x18: [mem 0xf201a000-0xf201afff]
[    1.021107] pci 0000:00:04.0: reg 0x30: [mem 0xf2000000-0xf200ffff pref]
[    1.021337] pci 0000:00:04.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    1.022799] pci 0000:00:05.0: [8086:24cd] type 00 class 0x0c0320
[    1.023934] pci 0000:00:05.0: reg 0x10: [mem 0xf201b000-0xf201bfff]
[    1.030066] pci 0000:00:06.0: [8086:2668] type 00 class 0x040300
[    1.031333] pci 0000:00:06.0: reg 0x10: [mem 0xf2010000-0xf2013fff]
[    1.038755] pci 0000:00:08.0: [1033:0194] type 00 class 0x0c0330
[    1.040149] pci 0000:00:08.0: reg 0x10: [mem 0xf2014000-0xf2017fff 64bit]
[    1.047939] ACPI: PCI: Interrupt link LNKA configured for IRQ 5
[    1.048220] ACPI: PCI: Interrupt link LNKB configured for IRQ 10
[    1.048859] ACPI: PCI: Interrupt link LNKC configured for IRQ 11
[    1.049151] ACPI: PCI: Interrupt link LNKD configured for IRQ 5
[    1.052083] xen:balloon: Initialising balloon driver
[    1.052106] iommu: Default domain type: Translated
[    1.052106] iommu: DMA domain TLB invalidation policy: lazy mode
[    1.052841] SCSI subsystem initialized
[    1.052895] libata version 3.00 loaded.
[    1.052895] ACPI: bus type USB registered
[    1.052895] usbcore: registered new interface driver usbfs
[    1.052895] usbcore: registered new interface driver hub
[    1.052895] usbcore: registered new device driver usb
[    1.052895] pps_core: LinuxPPS API ver. 1 registered
[    1.052895] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    1.052895] PTP clock support registered
[    1.052895] EDAC MC: Ver: 3.0.0
[    1.054991] NetLabel: Initializing
[    1.055056] NetLabel:  domain hash size = 128
[    1.055064] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
[    1.055089] NetLabel:  unlabeled traffic allowed by default
[    1.055101] mctp: management component transport protocol core
[    1.055111] NET: Registered PF_MCTP protocol family
[    1.055126] PCI: Using ACPI for IRQ routing
[    1.055133] PCI: pci_cache_line_size set to 64 bytes
[    1.057205] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
[    1.057206] e820: reserve RAM buffer [mem 0xeffff000-0xefffffff]
[    1.057207] e820: reserve RAM buffer [mem 0x109000000-0x10bffffff]
[    1.057220] pci 0000:00:04.0: vgaarb: setting as boot VGA device
[    1.057220] pci 0000:00:04.0: vgaarb: bridge control possible
[    1.057220] pci 0000:00:04.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[    1.057755] vgaarb: loaded
[    1.057821] hpet: 3 channels of 0 reserved for per-cpu timers
[    1.057846] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    1.057857] hpet0: 3 comparators, 64-bit 62.500000 MHz counter
[    1.060830] clocksource: Switched to clocksource xen
[    1.080067] VFS: Disk quotas dquot_6.6.0
[    1.080132] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    1.213195] pnp: PnP ACPI init
[    1.213312] system 00:00: [mem 0x00000000-0x0009ffff] could not be reserved
[    1.213388] system 00:01: [io  0x08a0-0x08a3] has been reserved
[    1.213401] system 00:01: [io  0x0cc0-0x0ccf] has been reserved
[    1.213412] system 00:01: [io  0x04d0-0x04d1] has been reserved
[    1.213441] xen: --> pirq=17 -> irq=8 (gsi=8)
[    1.213461] xen: --> pirq=18 -> irq=12 (gsi=12)
[    1.213480] xen: --> pirq=19 -> irq=1 (gsi=1)
[    1.213498] xen: --> pirq=20 -> irq=6 (gsi=6)
[    1.213500] pnp 00:05: [dma 2]
[    1.213540] system 00:06: [io  0xae00-0xae0f] has been reserved
[    1.213552] system 00:06: [io  0xb044-0xb047] has been reserved
[    1.214591] pnp: PnP ACPI: found 7 devices
[    1.222007] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    1.222061] NET: Registered PF_INET protocol family
[    1.222090] IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
[    1.222799] tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
[    1.222819] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[    1.222835] TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    1.222873] TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
[    1.223570] TCP: Hash tables configured (established 32768 bind 32768)
[    1.223642] MPTCP token hash table entries: 4096 (order: 4, 98304 bytes, linear)
[    1.224120] UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
[    1.224247] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
[    1.224417] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    1.224498] NET: Registered PF_XDP protocol family
[    1.224514] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
[    1.224526] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
[    1.224537] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[    1.224620] pci_bus 0000:00: resource 7 [mem 0xf0000000-0xfbffffff window]
[    1.224719] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    1.224779] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    1.225376] xen: --> pirq=21 -> irq=39 (gsi=39)
[    1.226379] xen: --> pirq=22 -> irq=17 (gsi=17)
[    1.226991] PCI: CLS 0 bytes, default 64
[    1.227004] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    1.227015] software IO TLB: mapped [mem 0x00000000eb8d5000-0x00000000ef8d5000] (64MB)
[    1.227077] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2868027b22e, max_idle_ns: 440795325881 ns
[    1.227147] clocksource: Switched to clocksource tsc
[    1.227315] Trying to unpack rootfs image as initramfs...
[    1.310942] Freeing initrd memory: 7276K
[    1.311625] Initialise system trusted keyrings
[    1.311714] Key type blacklist registered
[    1.311791] workingset: timestamp_bits=36 max_order=20 bucket_order=0
[    1.311818] zbud: loaded
[    1.312205] integrity: Platform Keyring initialized
[    1.312283] integrity: Machine keyring initialized
[    1.319152] NET: Registered PF_ALG protocol family
[    1.319164] xor: automatically using best checksumming function   avx
[    1.319177] Key type asymmetric registered
[    1.319184] Asymmetric key parser 'x509' registered
[    1.322754] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 245)
[    1.322971] io scheduler mq-deadline registered
[    1.322983] io scheduler kyber registered
[    1.322996] io scheduler bfq registered
[    1.325026] atomic64_test: passed for x86-64 platform with CX8 and with SSE
[    1.325502] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[    1.325612] ACPI: button: Power Button [PWRF]
[    1.325658] input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
[    1.325736] ACPI: button: Sleep Button [SLPF]
[    1.424702] xen: --> pirq=23 -> irq=24 (gsi=24)
[    1.424844] xen:grant_table: Grant tables using version 1 layout
[    1.424879] Grant table initialized
[    1.429485] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    1.430565] Non-volatile memory driver v1.3
[    1.430579] Linux agpgart interface v0.103
[    1.430764] ACPI: bus type drm_connector registered
[    1.438665] ata_piix 0000:00:01.1: version 2.13
[    1.438759] ata_piix 0000:00:01.1: enabling device (0000 -> 0001)
[    1.439967] scsi host0: ata_piix
[    1.440133] scsi host1: ata_piix
[    1.440157] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc300 irq 14
[    1.440171] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc308 irq 15
[    1.444472] usbcore: registered new interface driver usbserial_generic
[    1.444606] usbserial: USB Serial support registered for generic
[    1.444702] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f13:PS2M] at 0x60,0x64 irq 1,12
[    1.447355] serio: i8042 KBD port at 0x60,0x64 irq 1
[    1.447379] serio: i8042 AUX port at 0x60,0x64 irq 12
[    1.447442] mousedev: PS/2 mouse device common for all mice
[    1.448594] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
[    1.456287] rtc_cmos 00:02: registered as rtc0
[    1.456326] rtc_cmos 00:02: setting system clock to 2023-05-31T07:39:12 UTC (1685518752)
[    1.456366] rtc_cmos 00:02: alarms up to one day, 114 bytes nvram
[    1.456389] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[    1.456411] device-mapper: uevent: version 1.0.3
[    1.456443] device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
[    1.456498] intel_pstate: CPU model not supported
[    1.456559] hid: raw HID events driver (C) Jiri Kosina
[    1.456580] usbcore: registered new interface driver usbhid
[    1.456592] usbhid: USB HID core driver
[    1.456677] drop_monitor: Initializing network drop monitor service
[    1.456740] Initializing XFRM netlink socket
[    1.456760] NET: Registered PF_INET6 protocol family
[    1.467899] Segment Routing with IPv6
[    1.467979] RPL Segment Routing with IPv6
[    1.467999] In-situ OAM (IOAM) with IPv6
[    1.468053] mip6: Mobile IPv6
[    1.468104] NET: Registered PF_PACKET protocol family
[    1.542125] IPI shorthand broadcast: enabled
[    1.542238] AVX2 version of gcm_enc/dec engaged.
[    1.542401] AES CTR mode by8 optimization enabled
[    1.543831] sched_clock: Marking stable (1477485310, 65858244)->(1758243692, -214900138)
[    1.543987] registered taskstats version 1
[    1.685444] Loading compiled-in X.509 certificates
[    1.692037] Loaded X.509 cert 'Build time autogenerated kernel key: 71b12e3caf24d55eb39a8d585ceb01a5ebb6ae3a'
[    1.692822] zswap: loaded using pool lzo/zbud
[    1.695143] page_owner is disabled
[    1.695380] Key type .fscrypt registered
[    1.695389] Key type fscrypt-provisioning registered
[    1.697151] Btrfs loaded, crc32c=crc32c-generic, zoned=yes, fsverity=yes
[    1.697273] Key type big_key registered
[    1.699197] Key type encrypted registered
[    1.700129] ima: No TPM chip found, activating TPM-bypass!
[    1.700209] Loading compiled-in module X.509 certificates
[    1.700607] Loaded X.509 cert 'Build time autogenerated kernel key: 71b12e3caf24d55eb39a8d585ceb01a5ebb6ae3a'
[    1.700782] ima: Allocated hash algorithm: sha256
[    1.700808] ima: No architecture policies found
[    1.700840] evm: Initialising EVM extended attributes:
[    1.700914] evm: security.selinux
[    1.700923] evm: security.SMACK64 (disabled)
[    1.700934] evm: security.SMACK64EXEC (disabled)
[    1.700944] evm: security.SMACK64TRANSMUTE (disabled)
[    1.700956] evm: security.SMACK64MMAP (disabled)
[    1.700966] evm: security.apparmor
[    1.700974] evm: security.ima
[    1.700982] evm: security.capability
[    1.700989] evm: HMAC attrs: 0x1
[    1.723581] alg: No test for 842 (842-scomp)
[    1.723617] alg: No test for 842 (842-generic)
[    1.805993] xenbus_probe_frontend: Device with no driver: device/vbd/51712
[    1.806009] xenbus_probe_frontend: Device with no driver: device/vbd/51728
[    1.806024] xenbus_probe_frontend: Device with no driver: device/vbd/51744
[    1.806067] xenbus_probe_frontend: Device with no driver: device/vbd/51760
[    1.806082] xenbus_probe_frontend: Device with no driver: device/vif/0
[    1.806110] PM:   Magic number: 7:198:674
[    1.806215] RAS: Correctable Errors collector initialized.
[    1.806240] xen:balloon: Waiting for initial ballooning down having finished.
[    2.323465] xen:balloon: Initial ballooning down finished.
[    2.332180] Freeing unused decrypted memory: 2036K
[    2.332959] Freeing unused kernel image (initmem) memory: 4992K
[    2.333038] Write protecting the kernel read-only data: 28672k
[    2.333308] Freeing unused kernel image (rodata/data gap) memory: 1548K
[    2.333371] rodata_test: all tests were successful
[    2.333386] Run /init as init process
[    2.333393]   with arguments:
[    2.333394]     /init
[    2.333394]     rd_NO_PLYMOUTH
[    2.333395]   with environment:
[    2.333395]     HOME=3D/
[    2.333396]     TERM=3Dlinux
[    2.340292] Invalid max_queues (4), will use default max: 2.
[    2.340675] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
[    2.349573] blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[    2.352338]  xvda: xvda1 xvda2 xvda3
[    2.354693] blkfront: xvdb: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[    2.356756] blkfront: xvdc: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[    2.358326] blkfront: xvdd: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled; bounce buffer: enabled
[    2.739164]  xvdc: xvdc1 xvdc3
[    2.756099] EXT4-fs (xvda3): mounted filesystem f76da228-62b7-4543-835d-9ab0d57bbc45 with ordered data mode. Quota mode: none.
[    2.760621] /dev/xvdd: Can't open blockdev
[    2.761510] EXT4-fs (xvdd): mounting ext3 file system using the ext4 subsystem
[    2.764870] EXT4-fs (xvdd): mounted filesystem 63438b4b-f5d8-462a-82fe-9d52951a6722 with ordered data mode. Quota mode: none.
[    2.867437] audit: type=1404 audit(1685518753.910:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
[    2.895748] SELinux:  Class user_namespace not defined in policy.
[    2.895766] SELinux: the above unknown classes and permissions will be allowed
[    2.897736] SELinux:  policy capability network_peer_controls=1
[    2.897751] SELinux:  policy capability open_perms=1
[    2.897759] SELinux:  policy capability extended_socket_class=1
[    2.897769] SELinux:  policy capability always_check_network=0
[    2.897779] SELinux:  policy capability cgroup_seclabel=1
[    2.897788] SELinux:  policy capability nnp_nosuid_transition=1
[    2.897798] SELinux:  policy capability genfs_seclabel_symlinks=1
[    2.897808] SELinux:  policy capability ioctl_skip_cloexec=0
[    2.926873] audit: type=1403 audit(1685518753.970:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
[    2.928375] systemd[1]: Successfully loaded SELinux policy in 61.539ms.
[    2.941021] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.013ms.
[    2.946938] systemd[1]: systemd 251.14-2.fc37 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[    2.946991] systemd[1]: Detected virtualization xen.
[    2.947003] systemd[1]: Detected architecture x86-64.
[    2.948412] systemd[1]: No hostname configured, using default hostname.
[    2.948466] systemd[1]: Hostname set to <fedora>.
[    3.008356] systemd[1]: bpf-lsm: BPF LSM hook not enabled in the kernel, BPF LSM not supported
[    3.008900] memfd_create() without MFD_EXEC nor MFD_NOEXEC_SEAL, pid=1 'systemd'
[    3.152910] block xvda: the capability attribute has been deprecated.
[    3.153079] systemd-gpt-auto-generator[226]: Failed to dissect: Permission denied
[    3.153403] (sd-execut[218]: /usr/lib/systemd/system-generators/systemd-gpt-auto-generator failed with exit status 1.
[    3.210596] systemd[1]: /usr/lib/systemd/system/qubes-u2fproxy@.service:8: Standard output type syslog is obsolete, automatically updating to journal. Please update your unit file, and consider removing the setting altogether.
[    3.216491] systemd[1]: /usr/lib/systemd/system/qubes-gui-agent.service:15: Standard output type syslog is obsolete, automatically updating to journal. Please update your unit file, and consider removing the setting altogether.
[    3.257751] systemd[1]: Queued start job for default target multi-user.target.
[    3.265441] systemd[1]: Created slice system-getty.slice - Slice /system/getty.
[    3.266226] systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
[    3.266712] systemd[1]: Created slice system-qubes\x2du2fproxy.slice - Slice /system/qubes-u2fproxy.
[    3.267158] systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
[    3.267563] systemd[1]: Created slice user.slice - User and Session Slice.
[    3.267770] systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
[    3.267951] systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
[    3.268413] systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
[    3.268543] systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
[    3.268619] systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
[    3.268709] systemd[1]: Reached target remote-fs.target - Remote File Systems.
[    3.268776] systemd[1]: Reached target slices.target - Slice Units.
[    3.268854] systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
[    3.271426] systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
[    3.271662] systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe.
[    3.272120] systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
[    3.272695] systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
[    3.273188] systemd[1]: Listening on systemd-journald.socket - Journal Socket.
[    3.354088] systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
[    3.355404] systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
[    3.355660] systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
[    3.355913] systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
[    3.356834] systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
[    3.357725] systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
[    3.358558] systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
[    3.359475] systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
[    3.360335] systemd[1]: Starting dev-xvdc1-swap.service - Enable swap on /dev/xvdc1 early...
[    3.363070] systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
[    3.364011] systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
[    3.364036] Adding 1048572k swap on /dev/xvdc1.  Priority:-2 extents:1 across:1048572k SSFS
[    3.364928] systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
[    3.365713] systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
[    3.366569] systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
[    3.367398] systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
[    3.368323] systemd[1]: Starting qubes-remount-lib-modules.service - Remount /lib/modules for SELinux...
[    3.369283] systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
[    3.370160] systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
[    3.371170] systemd[1]: Condition check resulted in dev-xvdc1.swap - /dev/xvdc1 being skipped.
[    3.371208] systemd[1]: Unnecessary job was removed for dev-xvdc1.device - /dev/xvdc1.
[    3.371580] systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
[    3.371693] systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
[    3.555164] systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
[    3.555273] systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
[    3.555590] systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
[    3.556003] systemd[1]: dev-xvdc1-swap.service: Deactivated successfully.
[    3.556106] systemd[1]: Finished dev-xvdc1-swap.service - Enable swap on /dev/xvdc1 early.
[    3.556256] systemd[1]: Reached target swap.target - Swaps.
[    3.564167] systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
[    3.564261] systemd[1]: systemd-fsck-root.service - File System Check on Root Device was skipped because of a failed condition check (ConditionPathIsReadWrite=!/).
[    3.564988] systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
[    3.568660] systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
[    3.568812] systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
[    3.571804] systemd[1]: usr-lib-modules.mount: Deactivated successfully.
[    3.574969] systemd[1]: modprobe@configfs.service: Deactivated successfully.
[    3.575082] systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
[    3.575591] systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
[    3.575692] systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
[    3.575886] systemd[1]: modprobe@drm.service: Deactivated successfully.
[    3.575975] systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
[    3.576151] systemd[1]: modprobe@fuse.service: Deactivated successfully.
[    3.576244] systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
[    3.576650] systemd[1]: sys-fs-fuse-connections.mount - FUSE Control File System was skipped because of a failed condition check (ConditionPathExists=/sys/fs/fuse/connections).
[    3.614422] EXT4-fs (xvda3): re-mounted f76da228-62b7-4543-835d-9ab0d57bbc45. Quota mode: none.
[    3.626171] systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
[    3.626975] systemd[1]: modprobe@loop.service: Deactivated successfully.
[    3.627088] systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
[    3.627558] systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
[    3.627768] systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
[    3.754198] systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
[    3.754405] systemd[1]: ostree-remount.service - OSTree Remount OS/ Bind Mounts was skipped because of a failed condition check (ConditionKernelCommandLine=ostree).
[    3.754457] systemd[1]: systemd-firstboot.service - First Boot Wizard was skipped because of a failed condition check (ConditionFirstBoot=yes).
[    3.756427] EXT4-fs (xvdd): unmounting filesystem 63438b4b-f5d8-462a-82fe-9d52951a6722.
[    3.765956] systemd[1]: systemd-hwdb-update.service - Rebuild Hardware Database was skipped because of a failed condition check (ConditionNeedsUpdate=/etc).
[    3.766037] systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because all trigger condition checks failed.
[    3.766078] systemd[1]: systemd-sysusers.service - Create System Users was skipped because of a failed condition check (ConditionNeedsUpdate=/etc).
[    3.769559] EXT4-fs (xvdd): mounting ext3 file system using the ext4 subsystem
[    3.771401] EXT4-fs (xvdd): mounted filesystem 63438b4b-f5d8-462a-82fe-9d52951a6722 with ordered data mode. Quota mode: none.
[    3.771773] systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
[    3.775186] systemd[1]: Finished qubes-remount-lib-modules.service - Remount /lib/modules for SELinux.
[    3.777475] systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
[    3.789728] audit: type=1400 audit(1685518754.833:4): avc:  denied  { module_request } for  pid=268 comm="systemd-modules" kmod="crypto-pkcs1pad(rsa,sha256)" scontext=system_u:system_r:systemd_modules_load_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=system permissive=0
[    3.789775] audit: type=1400 audit(1685518754.833:5): avc:  denied  { module_request } for  pid=268 comm="systemd-modules" kmod="crypto-pkcs1pad(rsa,sha256)-all" scontext=system_u:system_r:systemd_modules_load_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=system permissive=0
[    3.796562] systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
[    3.796841] systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
[    3.798739] xen:xen_evtchn: Event-channel device installed
[    3.802563] systemd[1]: Started haveged.service - Entropy Daemon based on the HAVEGE algorithm.
[    3.803851] systemd[1]: Starting systemd-journald.service - Journal Service...
[    3.804839] systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
[    3.856516] systemd[1]: Started systemd-journald.service - Journal Service.
[    3.868507] systemd-journald[272]: Received client request to flush runtime journal.
[    3.871492] audit: type=1130 audit(1685518754.900:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    3.886355] audit: type=1130 audit(1685518754.929:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    3.975968] audit: type=1130 audit(1685518755.019:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    4.000021] Rounding down aligned max_sectors from 4294967295 to 4294967288
[    4.000112] db_root: cannot open: /etc/target
[    4.027237] audit: type=1130 audit(1685518755.070:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    4.043206] audit: type=1130 audit(1685518755.086:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[    4.044606] memmap_init_zone_device initialised 32768 pages in 0ms
[    4.058258] piix4_smbus 0000:00:01.3: SMBus Host Controller not enabled!
[    4.130834] FDC 0 is a S82078B
[    4.156363] input: PC Speaker as /devices/platform/pcspkr/input/input5
[    4.170433] ehci-pci 0000:00:05.0: EHCI Host Controller
[    4.170543] ehci-pci 0000:00:05.0: new USB bus registered, assigned bus number 1
[    4.171513] ehci-pci 0000:00:05.0: irq 39, io mem 0xf201b000
[    4.178752] EXT4-fs (xvdb): mounted filesystem 44c92d29-b8e7-4836-a951-f71caa9c065d with ordered data mode. Quota mode: none.
[    4.179586] ehci-pci 0000:00:05.0: USB 2.0 started, EHCI 1.00
[    4.179701] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.03
[    4.179716] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    4.179729] usb usb1: Product: EHCI Host Controller
[    4.179737] usb usb1: Manufacturer: Linux 6.3.2-1.qubes.fc37.x86_64 ehci_hcd
[    4.179750] usb usb1: SerialNumber: 0000:00:05.0
[    4.180934] hub 1-0:1.0: USB hub found
[    4.180948] hub 1-0:1.0: 6 ports detected
[    4.218910] xen_netfront: Initialising Xen virtual ethernet driver
[    4.221051] xen_netfront: backend supports XDP headroom
[    4.221067] vif vif-0: bouncing transmitted data to zeroed pages
[    4.234422] xhci_hcd 0000:00:08.0: xHCI Host Controller
[    4.237466] xhci_hcd 0000:00:08.0: new USB bus registered, assigned bus number 2
[    4.239507] xhci_hcd 0000:00:08.0: hcc params 0x00080001 hci version 0x100 quirks 0x0000000000000014
[    4.243069] xhci_hcd 0000:00:08.0: xHCI Host Controller
[    4.243119] xhci_hcd 0000:00:08.0: new USB bus registered, assigned bus number 3
[    4.243133] xhci_hcd 0000:00:08.0: Host supports USB 3.0 SuperSpeed
[    4.243899] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.03
[    4.243915] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    4.243928] usb usb2: Product: xHCI Host Controller
[    4.243936] usb usb2: Manufacturer: Linux 6.3.2-1.qubes.fc37.x86_64 xhci-hcd
[    4.243948] usb usb2: SerialNumber: 0000:00:08.0
[    4.244029] hub 2-0:1.0: USB hub found
[    4.244870] hub 2-0:1.0: 4 ports detected
[    4.245594] usb usb3: We don't know the algorithms for LPM for this host, disabling LPM.
[    4.245623] usb usb3: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 6.03
[    4.245636] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    4.245648] usb usb3: Product: xHCI Host Controller
[    4.245656] usb usb3: Manufacturer: Linux 6.3.2-1.qubes.fc37.x86_64 xhci-hcd
[    4.245668] usb usb3: SerialNumber: 0000:00:08.0
[    4.245816] hub 3-0:1.0: USB hub found
[    4.245946] hub 3-0:1.0: 4 ports detected
[    4.379401] xen: --> pirq=24 -> irq=40 (gsi=40)
[    4.397461] intel_rapl_msr: PL4 support detected.
[    4.400429] snd_hda_codec_generic hdaudioC0D0: autoconfig for Generic: line_outs=1 (0x3/0x0/0x0/0x0/0x0) type:line
[    4.400432] snd_hda_codec_generic hdaudioC0D0:    speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
[    4.400433] snd_hda_codec_generic hdaudioC0D0:    hp_outs=0 (0x0/0x0/0x0/0x0/0x0)
[    4.400434] snd_hda_codec_generic hdaudioC0D0:    mono: mono_out=0x0
[    4.400435] snd_hda_codec_generic hdaudioC0D0:    inputs:
[    4.400435] snd_hda_codec_generic hdaudioC0D0:      Line=0x5
[    4.423367] usb 1-1: new high-speed USB device number 2 using ehci-pci
[    4.571898] usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
[    4.571917] usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
[    4.571930] usb 1-1: Product: QEMU USB Tablet
[    4.571939] usb 1-1: Manufacturer: QEMU
[    4.571946] usb 1-1: SerialNumber: 42
[    4.579643] input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:05.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input6
[    4.580049] hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:05.0-1/input0
[    4.589770] loop: module loaded
[    4.597653] fuse: init (API version 7.38)
[    6.737828] kauditd_printk_skb: 114 callbacks suppressed
[    6.737830] audit: type=1106 audit(1685518757.781:103): pid=617 uid=0 auid=1000 ses=2 subj=system_u:system_r:local_login_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_selinux,pam_loginuid,pam_selinux,pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_umask,pam_lastlog acct="user" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
[    6.737890] audit: type=1104 audit(1685518757.781:104): pid=617 uid=0 auid=1000 ses=2 subj=system_u:system_r:local_login_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_rootok acct="user" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
[    7.285478] audit: type=1106 audit(1685518758.328:105): pid=612 uid=0 auid=1000 ses=1 subj=system_u:system_r:local_login_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_selinux,pam_loginuid,pam_selinux,pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_umask,pam_lastlog acct="user" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
[    7.285536] audit: type=1104 audit(1685518758.328:106): pid=612 uid=0 auid=1000 ses=1 subj=system_u:system_r:local_login_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_rootok acct="user" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
[   10.588394] systemd-journald[272]: Time jumped backwards, rotating.
[   10.593625] audit: type=1130 audit(1685518760.006:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=qubes-sync-time comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   10.593664] audit: type=1131 audit(1685518760.006:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=qubes-sync-time comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   20.771405] audit: type=1101 audit(1685518770.184:109): pid=1028 uid=1000 auid=1000 ses=4 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   20.771533] audit: type=1123 audit(1685518770.184:110): pid=1028 uid=1000 auid=1000 ses=4 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/user" cmd=646E6620696E7374616C6C202D43202E2F69706572662D322E312E392D312E666333372E7838365F36342E72706D exe="/usr/bin/sudo" terminal=pts/0 res=success'
[   20.772368] audit: type=1110 audit(1685518770.185:111): pid=1028 uid=1000 auid=1000 ses=4 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   20.774435] audit: type=1105 audit(1685518770.187:112): pid=1028 uid=1000 auid=1000 ses=4 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   20.774999] audit: type=2300 audit(1685518770.187:113): pid=1028 uid=1000 auid=1000 ses=4 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='newrole: old-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 new-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[   22.548766] audit: type=1130 audit(1685518771.961:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=unbound-anchor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   22.548831] audit: type=1131 audit(1685518771.961:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=unbound-anchor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   23.992181] audit: type=1130 audit(1685518773.404:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=run-r98added9c3b340adae9435487780b67a comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[   24.241420] audit: type=3D1130 audit(1685518773.653:117): pid=3D1 uid=3D=
0 auid=3D4294967295 ses=3D4294967295 subj=3Dsystem_u:system_r:init_t:s0 msg=
=3D'unit=3Dman-db-cache-update comm=3D"systemd" exe=3D"/usr/lib/systemd/sys=
temd" hostname=3D? addr=3D? terminal=3D? res=3Dsuccess'
[   24.241459] audit: type=3D1131 audit(1685518773.653:118): pid=3D1 uid=3D=
0 auid=3D4294967295 ses=3D4294967295 subj=3Dsystem_u:system_r:init_t:s0 msg=
=3D'unit=3Dman-db-cache-update comm=3D"systemd" exe=3D"/usr/lib/systemd/sys=
temd" hostname=3D? addr=3D? terminal=3D? res=3Dsuccess'
[   26.439229] kauditd_printk_skb: 1 callbacks suppressed
[   26.439232] audit: type=3D1106 audit(1685518775.851:120): pid=3D1028 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'op=3DPAM:session_close grantors=3Dpam_keyinit,pam_limi=
ts,pam_keyinit,pam_limits,pam_systemd,pam_unix acct=3D"root" exe=3D"/usr/bi=
n/sudo" hostname=3D? addr=3D? terminal=3D/dev/pts/0 res=3Dsuccess'
[   26.439292] audit: type=3D1104 audit(1685518775.851:121): pid=3D1028 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'op=3DPAM:setcred grantors=3Dpam_env,pam_localuser,pam_=
unix acct=3D"root" exe=3D"/usr/bin/sudo" hostname=3D? addr=3D? terminal=3D/=
dev/pts/0 res=3Dsuccess'
[   28.629938] audit: type=3D1101 audit(1685518778.042:122): pid=3D1139 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'op=3DPAM:accounting grantors=3Dpam_unix,pam_localuser =
acct=3D"user" exe=3D"/usr/bin/sudo" hostname=3D? addr=3D? terminal=3D/dev/p=
ts/0 res=3Dsuccess'
[   28.630129] audit: type=3D1123 audit(1685518778.042:123): pid=3D1139 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'cwd=3D"/home/user" cmd=3D6E6674206164642072756C6520717=
562657320637573746F6D2D696E70757420616363657074 exe=3D"/usr/bin/sudo" termi=
nal=3Dpts/0 res=3Dsuccess'
[   28.630624] audit: type=3D1110 audit(1685518778.043:124): pid=3D1139 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'op=3DPAM:setcred grantors=3Dpam_env,pam_localuser,pam_=
unix acct=3D"root" exe=3D"/usr/bin/sudo" hostname=3D? addr=3D? terminal=3D/=
dev/pts/0 res=3Dsuccess'
[   28.632479] audit: type=3D1105 audit(1685518778.045:125): pid=3D1139 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'op=3DPAM:session_open grantors=3Dpam_keyinit,pam_limit=
s,pam_keyinit,pam_limits,pam_systemd,pam_unix acct=3D"root" exe=3D"/usr/bin=
/sudo" hostname=3D? addr=3D? terminal=3D/dev/pts/0 res=3Dsuccess'
[   28.632730] audit: type=3D2300 audit(1685518778.045:126): pid=3D1139 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'newrole: old-context=3Dunconfined_u:unconfined_r:uncon=
fined_t:s0-s0:c0.c1023 new-context=3Dunconfined_u:unconfined_r:unconfined_t=
:s0-s0:c0.c1023 exe=3D"/usr/bin/sudo" hostname=3D? addr=3D? terminal=3D/dev=
/pts/0 res=3Dsuccess'
[   28.660272] audit: type=3D1325 audit(1685518778.072:127): table=3Dqubes:=
8 family=3D2 entries=3D1 op=3Dnft_register_rule pid=3D1140 subj=3Dunconfine=
d_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 comm=3D"nft"
[   28.660644] audit: type=3D1300 audit(1685518778.072:127): arch=3Dc000003=
e syscall=3D46 success=3Dyes exit=3D144 a0=3D3 a1=3D7fff17f303a0 a2=3D0 a3=
=3D726ae8d67bc4 items=3D0 ppid=3D1139 pid=3D1140 auid=3D1000 uid=3D0 gid=3D=
0 euid=3D0 suid=3D0 fsuid=3D0 egid=3D0 sgid=3D0 fsgid=3D0 tty=3Dpts0 ses=3D=
4 comm=3D"nft" exe=3D"/usr/sbin/nft" subj=3Dunconfined_u:unconfined_r:uncon=
fined_t:s0-s0:c0.c1023 key=3D(null)
[   28.661081] audit: type=3D1327 audit(1685518778.072:127): proctitle=3D2F=
7573722F7362696E2F6E6674006164640072756C6500717562657300637573746F6D2D696E7=
0757400616363657074
[   37.352843] kauditd_printk_skb: 2 callbacks suppressed
[   37.352847] audit: type=3D1131 audit(1685518786.765:130): pid=3D1 uid=3D=
0 auid=3D4294967295 ses=3D4294967295 subj=3Dsystem_u:system_r:init_t:s0 msg=
=3D'unit=3Dsystemd-hostnamed comm=3D"systemd" exe=3D"/usr/lib/systemd/syste=
md" hostname=3D? addr=3D? terminal=3D? res=3Dsuccess'
[   37.377516] audit: type=3D1334 audit(1685518786.790:131): prog-id=3D35 o=
p=3DUNLOAD
[   37.377562] audit: type=3D1334 audit(1685518786.790:132): prog-id=3D34 o=
p=3DUNLOAD
[   37.377599] audit: type=3D1334 audit(1685518786.790:133): prog-id=3D33 o=
p=3DUNLOAD
[   44.946770] audit: type=3D1325 audit(1685518794.359:134): table=3Dqubes-=
firewall:9 family=3D2 entries=3D4 op=3Dnft_register_chain pid=3D1181 subj=
=3Dsystem_u:system_r:unconfined_service_t:s0 comm=3D"nft"
[   44.946810] audit: type=3D1300 audit(1685518794.359:134): arch=3Dc000003=
e syscall=3D46 success=3Dyes exit=3D880 a0=3D3 a1=3D7ffed54d4080 a2=3D0 a3=
=3D7a6b97eecbc4 items=3D0 ppid=3D529 pid=3D1181 auid=3D4294967295 uid=3D0 g=
id=3D0 euid=3D0 suid=3D0 fsuid=3D0 egid=3D0 sgid=3D0 fsgid=3D0 tty=3D(none)=
 ses=3D4294967295 comm=3D"nft" exe=3D"/usr/sbin/nft" subj=3Dsystem_u:system=
_r:unconfined_service_t:s0 key=3D(null)
[   44.946860] audit: type=3D1327 audit(1685518794.359:134): proctitle=3D6E=
6674002D66002F6465762F737464696E
[   44.978424] audit: type=3D1325 audit(1685518794.391:135): table=3Dqubes-=
firewall:10 family=3D2 entries=3D3 op=3Dnft_register_chain pid=3D1185 subj=
=3Dsystem_u:system_r:unconfined_service_t:s0 comm=3D"nft"
[   44.978456] audit: type=3D1300 audit(1685518794.391:135): arch=3Dc000003=
e syscall=3D46 success=3Dyes exit=3D420 a0=3D3 a1=3D7ffc8c7388f0 a2=3D0 a3=
=3D794d3bbbdbc4 items=3D0 ppid=3D529 pid=3D1185 auid=3D4294967295 uid=3D0 g=
id=3D0 euid=3D0 suid=3D0 fsuid=3D0 egid=3D0 sgid=3D0 fsgid=3D0 tty=3D(none)=
 ses=3D4294967295 comm=3D"nft" exe=3D"/usr/sbin/nft" subj=3Dsystem_u:system=
_r:unconfined_service_t:s0 key=3D(null)
[   44.978501] audit: type=3D1327 audit(1685518794.391:135): proctitle=3D6E=
6674002D66002F6465762F737464696E
[   44.984834] audit: type=3D1325 audit(1685518794.396:136): table=3Dqubes-=
firewall:11 family=3D2 entries=3D3 op=3Dnft_register_chain pid=3D1186 subj=
=3Dsystem_u:system_r:unconfined_service_t:s0 comm=3D"nft"
[   44.984864] audit: type=3D1300 audit(1685518794.396:136): arch=3Dc000003=
e syscall=3D46 success=3Dyes exit=3D420 a0=3D3 a1=3D7ffc97f36780 a2=3D0 a3=
=3D7deb403c0bc4 items=3D0 ppid=3D529 pid=3D1186 auid=3D4294967295 uid=3D0 g=
id=3D0 euid=3D0 suid=3D0 fsuid=3D0 egid=3D0 sgid=3D0 fsgid=3D0 tty=3D(none)=
 ses=3D4294967295 comm=3D"nft" exe=3D"/usr/sbin/nft" subj=3Dsystem_u:system=
_r:unconfined_service_t:s0 key=3D(null)
[   44.984909] audit: type=3D1327 audit(1685518794.396:136): proctitle=3D6E=
6674002D66002F6465762F737464696E
[   45.004767] vif vif-4-0 vif4.0: Guest Rx ready
[   45.062429] audit: type=3D1325 audit(1685518794.440:137): table=3Dqubes-=
nat-accel:12 family=3D1 entries=3D8 op=3Dnft_register_chain pid=3D1223 subj=
=3Dsystem_u:system_r:iptables_t:s0-s0:c0.c1023 comm=3D"nft"
[  106.608006] kauditd_printk_skb: 22 callbacks suppressed
[  106.608009] audit: type=3D1101 audit(1685518856.020:143): pid=3D1273 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'op=3DPAM:accounting grantors=3Dpam_unix,pam_localuser =
acct=3D"user" exe=3D"/usr/bin/sudo" hostname=3D? addr=3D? terminal=3D/dev/p=
ts/1 res=3Dsuccess'
[  106.608814] audit: type=3D1123 audit(1685518856.021:144): pid=3D1273 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'cwd=3D"/home/user" cmd=3D"dmesg" exe=3D"/usr/bin/sudo"=
 terminal=3Dpts/1 res=3Dsuccess'
[  106.609939] audit: type=3D1110 audit(1685518856.022:145): pid=3D1273 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'op=3DPAM:setcred grantors=3Dpam_env,pam_localuser,pam_=
unix acct=3D"root" exe=3D"/usr/bin/sudo" hostname=3D? addr=3D? terminal=3D/=
dev/pts/1 res=3Dsuccess'
[  106.614575] audit: type=3D1105 audit(1685518856.027:146): pid=3D1273 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'op=3DPAM:session_open grantors=3Dpam_keyinit,pam_limit=
s,pam_keyinit,pam_limits,pam_systemd,pam_unix acct=3D"root" exe=3D"/usr/bin=
/sudo" hostname=3D? addr=3D? terminal=3D/dev/pts/1 res=3Dsuccess'
[  106.615633] audit: type=3D2300 audit(1685518856.028:147): pid=3D1273 uid=
=3D1000 auid=3D1000 ses=3D4 subj=3Dunconfined_u:unconfined_r:unconfined_t:s=
0-s0:c0.c1023 msg=3D'newrole: old-context=3Dunconfined_u:unconfined_r:uncon=
fined_t:s0-s0:c0.c1023 new-context=3Dunconfined_u:unconfined_r:unconfined_t=
:s0-s0:c0.c1023 exe=3D"/usr/bin/sudo" hostname=3D? addr=3D? terminal=3D/dev=
/pts/1 res=3Dsuccess'

--T1KEJXkE9CKFKvx+--

--0/lXUo8O37WGBCUc
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmR2/AwACgkQ24/THMrX
1yz2FAf/cB8UxOTbWkoiAYa/X68YlwfRJftdWs3szfpAjmQPu4KcXpqhUhQNOyCM
8cz5ktJyiMz0GPcD6qb0xcW2boTxoTVgymQWQ6ZlgerveJkqvR2RjzSnu2pMDMSY
+BFC1IzqDni3McTm1c1llAvDUyLuHVRpCDkWdQzXhabzmyv7jNfui7rlq5Ri7ZXM
8N9818eR3E610yACzULp/o9f3LmUqarim/uejRXdzne7MHapEwF0+kLMRLvKxZtO
KEHdVsZQ1VrYRS0j9rbyCT7WV/91o+TFew94CV7Z2mhwW6KstvpTMCP3bXbeTWKp
KzsyQmucfM5B2v/2zI3fPPZYc6Pv/A==
=kxs7
-----END PGP SIGNATURE-----

--0/lXUo8O37WGBCUc--


From xen-devel-bounces@lists.xenproject.org Wed May 31 08:25:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 08:25:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541641.844590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4H8g-0004FF-2Z; Wed, 31 May 2023 08:25:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541641.844590; Wed, 31 May 2023 08:25:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4H8f-0004F8-Vx; Wed, 31 May 2023 08:25:01 +0000
Received: by outflank-mailman (input) for mailman id 541641;
 Wed, 31 May 2023 08:25:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EXUc=BU=citrix.com=prvs=508c705fb=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q4H8f-0004F2-Aj
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 08:25:01 +0000
Received: from esa3.hc3370-68.iphmx.com (esa3.hc3370-68.iphmx.com
 [216.71.145.155]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9eb59896-ff8c-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 10:24:55 +0200 (CEST)
Received: from mail-bn8nam11lp2169.outbound.protection.outlook.com (HELO
 NAM11-BN8-obe.outbound.protection.outlook.com) ([104.47.58.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 31 May 2023 04:24:53 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by BLAPR03MB5636.namprd03.prod.outlook.com (2603:10b6:208:297::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.22; Wed, 31 May
 2023 08:24:52 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 08:24:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9eb59896-ff8c-11ed-8611-37d641c3527e
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 31 May 2023 10:24:44 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: HVM performance once again
Message-ID: <ZHcETJZThCdr22MW@Air-de-Roger>
References: <ZHb8DBbRuklAXhCE@mail-itl>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZHb8DBbRuklAXhCE@mail-itl>
X-ClientProxiedBy: LO2P265CA0219.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:b::15) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0

On Wed, May 31, 2023 at 09:49:32AM +0200, Marek Marczykowski-Górecki wrote:
> Hi,
> 
> I have returned to HVM performance once again, this time looking at the
> impact of PCI passthrough on network throughput.
> The setup:
>  - Xen 4.17
>  - Linux 6.3.2 in all domUs
>  - iperf -c running in a PVH (call it "client")
>  - iperf -s running in an HVM (call it "server")
>  - client's netfront has a backend directly in server
>  - frontend's "trusted" is set to 0
>  - the HVM has qemu in a stubdomain in all cases
>  - no intentional differences in the HVM configuration besides the
>    presence of a PCI device (it is a network card, but it was not
>    involved in the traffic)
> 
> And now the results:
>  - server is a plain HVM: ~6Gbps
>  - server is an HVM with a PCI device passed through: ~3Gbps
> 
> Any idea why there is such a huge difference?

Just a wild guess: when a domain has a PCI device assigned, the memory
cache types requested by the guest are enforced; otherwise it all
defaults to write-back (see epte_get_entry_emt()).

If you are not using the PCI device, you might want to experiment with
epte_get_entry_emt() and see whether that makes a difference.

Do you see the same performance regression when testing on AMD?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 31 08:28:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 08:28:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541645.844600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4HC2-0004om-HN; Wed, 31 May 2023 08:28:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541645.844600; Wed, 31 May 2023 08:28:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4HC2-0004of-EV; Wed, 31 May 2023 08:28:30 +0000
Received: by outflank-mailman (input) for mailman id 541645;
 Wed, 31 May 2023 08:28:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4HC1-0004oV-UI; Wed, 31 May 2023 08:28:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4HC1-0004Q7-Op; Wed, 31 May 2023 08:28:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4HC1-0001SX-8j; Wed, 31 May 2023 08:28:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4HC1-0000Bk-8G; Wed, 31 May 2023 08:28:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181019-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 181019: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    xen-unstable:test-amd64-i386-xl-shadow:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=05422d276b56f2ebc2309a84a66fc5722c45ad74
X-Osstest-Versions-That:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 08:28:29 +0000

flight 181019 xen-unstable real [real]
flight 181025 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181019/
http://logs.test-lab.xenproject.org/osstest/logs/181025/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 181007

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-shadow     7 xen-install         fail pass in 181025-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 181007
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 181007
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 181007
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 181007
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 181007
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 181007
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 181007
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 181007
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 181007
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 181007
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 181007
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 181007
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  05422d276b56f2ebc2309a84a66fc5722c45ad74
baseline version:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc

Last test of basis   181007  2023-05-30 01:53:56 Z    1 days
Testing same since   181019  2023-05-30 20:07:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Cyril Rébert <slack@rabbit.lu>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 05422d276b56f2ebc2309a84a66fc5722c45ad74
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Tue May 30 12:12:59 2023 +0200

    build: adjust compile.h compiler version command line
    
    CFLAGS here comes only from Config.mk, so drop its use. Don't bother
    substituting the flags actually used to build Xen either.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 352c917acfe1dd6afc2eee44aa4ab7c50d4bc48a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue May 30 12:00:34 2023 +0200

    x86/vPIC: register only one ELCR handler instance
    
    There's no point consuming two port-I/O slots. Even less so considering
    that some real hardware permits both ports to be accessed in one go,
    emulating which requires there to be only a single instance.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 647377ea06b86d7356f5975e4780b9a6a81c188e
Author: Stewart Hildebrand <stewart.hildebrand@amd.com>
Date:   Tue May 30 11:59:33 2023 +0200

    xen/arm: un-break build with clang
    
    clang doesn't like extern with __attribute__((__used__)):
    
      ./arch/arm/include/asm/setup.h:171:8: error: 'used' attribute ignored [-Werror,-Wignored-attributes]
      extern DEFINE_BOOT_PAGE_TABLE(boot_pgtable);
             ^
      ./arch/arm/include/asm/lpae.h:273:29: note: expanded from macro 'DEFINE_BOOT_PAGE_TABLE'
      lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")                   \$
                                  ^
      ./include/xen/compiler.h:71:27: note: expanded from macro '__section'
      #define __section(s)      __used __attribute__((__section__(s)))
                                ^
      ./include/xen/compiler.h:104:39: note: expanded from macro '__used'
      #define __used         __attribute__((__used__))
                                            ^
    
    Simplify the declarations by getting rid of the macro (and thus the
    __aligned/__section/__used attributes) in the header. No functional change
    intended as the macro/attributes are present in the respective definitions in
    xen/arch/arm/mm.c.
    
    Fixes: 1c78d76b67e1 ("xen/arm64: mm: Introduce helpers to prepare/enable/disable the identity mapping")
    Signed-off-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 2a8a1681505d67dae5d3964f98cc1b1daf8e43f3
Author: Cyril Rébert <slack@rabbit.lu>
Date:   Tue May 30 11:57:42 2023 +0200

    tools/xenstore: remove deprecated parameter from xenstore commands help
    
    Completing commit c65687e ("tools/xenstore: remove socket-only option from xenstore client").
    As the socket-only option (-s) has been removed from the Xenstore access commands (xenstore-*),
    also remove it from the commands' help output (xenstore-* -h).
    
    Suggested-by: Yann Dirson <yann.dirson@vates.fr>
    Signed-off-by: Cyril Rébert <slack@rabbit.lu>
    Reviewed-by: Juergen Gross <jgross@suse.com>

commit ca045140d90c7892ec0664cdb2ef3e16c97eb0b6
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Tue May 30 11:57:17 2023 +0200

    xen/misra: xen-analysis.py: Fix cppcheck report relative paths
    
    Fix the generation of the relative path from the repo for cppcheck
    reports when the script launches make with an in-tree build.
    
    Fixes: b046f7e37489 ("xen/misra: xen-analysis.py: use the relative path from the ...")
    Reported-by: Michal Orzel <michal.orzel@amd.com>
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>

commit 8bd504290bc3e5fb4d04150f96a36783407661b4
Author: Luca Fancellu <luca.fancellu@arm.com>
Date:   Tue May 30 11:57:02 2023 +0200

    xen/misra: xen-analysis.py: Fix latent bug
    
    Currently there is a latent bug that is not triggered because the
    function cppcheck_merge_txt_fragments is called with a strip_paths
    parameter containing only one element.
    
    The bug is that the split call should not be inside the loop over
    strip_paths, but one level up; fix it.
    
    Fixes: 02b26c02c7c4 ("xen/scripts: add cppcheck tool to the xen-analysis.py script")
    Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
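
The pattern the message describes can be sketched in a few lines of hypothetical Python (`strip_prefix` below is illustrative, not the actual xen-analysis.py code): the report line must be split once, before the loop over `strip_paths`; splitting inside the loop redoes the split on every iteration and only happens to work when the list has a single entry.

```python
def strip_prefix(report_line, strip_paths):
    """Strip any known build-tree prefix from the path field of a
    "path:message" report line (illustrative only)."""
    # Split once, *before* the loop over strip_paths -- the fix the
    # commit message describes.
    path, sep, rest = report_line.partition(":")
    for prefix in strip_paths:
        if path.startswith(prefix):
            path = path[len(prefix):]
    return path + sep + rest

print(strip_prefix("/build/xen/arch/arm/mm.c:style warning",
                   ["/build/", "/src/"]))
# -> xen/arch/arm/mm.c:style warning
```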

commit e56f2106b6727223bd7de03e20fedd1f94da655d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue May 30 11:56:22 2023 +0200

    VMX/cpu-policy: disable RDTSCP and INVPCID insns as needed
    
    When either feature is available in hardware, but disabled for a guest,
    the respective insn should cause #UD when its use is attempted.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>

commit 233a8f20cfbe999505c7b07b359f03fc04111008
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue May 30 11:54:55 2023 +0200

    VMX/cpu-policy: check availability of RDTSCP and INVPCID
    
    Both have separate enable bits, which are optional. While on real
    hardware we can perhaps expect these VMX controls to be available if
    (and only if) the base CPU feature is available, when running
    virtualized ourselves this may not be the case.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 31 08:35:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 08:35:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541652.844610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4HId-0006LJ-C2; Wed, 31 May 2023 08:35:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541652.844610; Wed, 31 May 2023 08:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4HId-0006LC-8V; Wed, 31 May 2023 08:35:19 +0000
Received: by outflank-mailman (input) for mailman id 541652;
 Wed, 31 May 2023 08:35:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5Ad7=BU=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1q4HIb-0006L6-J0
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 08:35:18 +0000
Received: from mail.skyhub.de (mail.skyhub.de [5.9.137.197])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 102ad9ef-ff8e-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 10:35:14 +0200 (CEST)
Received: from zn.tnic (pd9530d32.dip0.t-ipconnect.de [217.83.13.50])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 1C1511EC0523;
 Wed, 31 May 2023 10:35:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 102ad9ef-ff8e-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1685522113;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=4w2+XBAyUg1x2j+KVLx3U9OB1kNUW8vIMipBpcOS6zM=;
	b=nWIkXbbIcdmWcTh/LgL6nKUHNbCw91oDms3oSSuwKA5bu/rGzR7YiGHA9zKf5ZlVg5bOYS
	yAsweFoFutFAOzbcZ3PTfOMgZLWnhwZnSe/AFgT+heE+1+uSZTBvTJRMVIBeDRnka/p8Ii
	25F3MsaI0w4pUJIvisq6sRpkv74y6fI=
Date: Wed, 31 May 2023 10:35:08 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
	mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230531083508.GAZHcGvB68PUAH7f+a@fat_crate.local>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
 <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
 <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
 <888f860d-4307-54eb-01da-11f9adf65559@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <888f860d-4307-54eb-01da-11f9adf65559@suse.com>

On Wed, May 31, 2023 at 09:28:57AM +0200, Juergen Gross wrote:
> Can you please boot the system with the MTRR patches and specify "mtrr=debug"
> on the command line? I'd be interested in the raw register values being read
> and the resulting memory type map.

This is exactly why I wanted this option. And you're already putting it
to good use. :-P

Full dmesg below.

[    0.000000] microcode: updated early: 0x710 -> 0x718, date = 2019-05-21
[    0.000000] Linux version 6.4.0-rc1+ (boris@zn) (gcc (Debian 12.2.0-9) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Tue May 30 15:54:17 CEST 2023
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.4.0-rc1+ root=/dev/sda7 ro earlyprintk=ttyS0,115200 console=ttyS0,115200 console=tty0 ras=cec_disable root=/dev/sda7 log_buf_len=10M resume=/dev/sda5 no_console_suspend ignore_loglevel mtrr=debug
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
[    0.000000] signal: max sigframe size: 1776
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000018ebafff] usable
[    0.000000] BIOS-e820: [mem 0x0000000018ebb000-0x0000000018fe7fff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000018fe8000-0x0000000018fe8fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000018fe9000-0x0000000018ffffff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000019000000-0x000000001dffcfff] usable
[    0.000000] BIOS-e820: [mem 0x000000001dffd000-0x000000001dffffff] ACPI data
[    0.000000] BIOS-e820: [mem 0x000000001e000000-0x00000000ac77cfff] usable
[    0.000000] BIOS-e820: [mem 0x00000000ac77d000-0x00000000ac77ffff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac780000-0x00000000ac780fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac781000-0x00000000ac782fff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac783000-0x00000000ac7d9fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac7da000-0x00000000ac7dafff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac7db000-0x00000000ac7dcfff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac7dd000-0x00000000ac7e7fff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac7e8000-0x00000000ac7f1fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac7f2000-0x00000000ac7f5fff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac7f6000-0x00000000ac7f9fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac7fa000-0x00000000ac7fafff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac7fb000-0x00000000ac803fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac804000-0x00000000ac810fff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac811000-0x00000000ac813fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac814000-0x00000000ad7fffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000b0000000-0x00000000b3ffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fed20000-0x00000000fed3ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fed50000-0x00000000fed8ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ffa00000-0x00000000ffa3ffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000044fffffff] usable
[    0.000000] printk: bootconsole [earlyser0] enabled
[    0.000000] printk: debug: ignoring loglevel setting.
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] efi: EFI v2.0 by American Megatrends
[    0.000000] efi: ACPI 2.0=0x1dffff98 SMBIOS=0xac811018 
[    0.000000] efi: Remove mem57: MMIO range=[0xb0000000-0xb3ffffff] (64MB) from e820 map
[    0.000000] e820: remove [mem 0xb0000000-0xb3ffffff] reserved
[    0.000000] efi: Not removing mem58: MMIO range=[0xfed20000-0xfed3ffff] (128KB) from e820 map
[    0.000000] efi: Remove mem59: MMIO range=[0xfed50000-0xfed8ffff] (0MB) from e820 map
[    0.000000] e820: remove [mem 0xfed50000-0xfed8ffff] reserved
[    0.000000] efi: Remove mem60: MMIO range=[0xffa00000-0xffa3ffff] (0MB) from e820 map
[    0.000000] e820: remove [mem 0xffa00000-0xffa3ffff] reserved
[    0.000000] SMBIOS 2.6 present.
[    0.000000] DMI: Dell Inc. Precision T3600/0PTTT9, BIOS A13 05/11/2014
[    0.000000] tsc: Fast TSC calibration using PIT
[    0.000000] tsc: Detected 3591.377 MHz processor
[    0.000767] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.007307] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.012878] last_pfn = 0x450000 max_arch_pfn = 0x400000000
[    0.018357] MTRR default type: uncachable
[    0.022347] MTRR fixed ranges enabled:
[    0.026085]   00000-9FFFF write-back
[    0.029650]   A0000-BFFFF uncachable
[    0.033214]   C0000-FFFFF write-protect
[    0.037039] MTRR variable ranges enabled:
[    0.041038]   0 base 000000000000000 mask 0003FFC00000000 write-back
[    0.047383]   1 base 000000400000000 mask 0003FFFC0000000 write-back
[    0.053730]   2 base 000000440000000 mask 0003FFFF0000000 write-back
[    0.060076]   3 base 0000000AE000000 mask 0003FFFFE000000 uncachable
[    0.066421]   4 base 0000000B0000000 mask 0003FFFF0000000 uncachable
[    0.072768]   5 base 0000000C0000000 mask 0003FFFC0000000 uncachable
[    0.079114]   6 disabled
[    0.081635]   7 disabled
[    0.084156]   8 disabled
[    0.086677]   9 disabled
[    0.089203] total RAM covered: 16352M
[    0.093023] Found optimal setting for mtrr clean up
[    0.097734]  gran_size: 64K 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G
[    0.104864] MTRR map: 6 entries (3 fixed + 3 variable; max 23), built from 10 variable MTRRs
[    0.113294]   0: 0000000000000000-000000000009ffff write-back
[    0.119033]   1: 00000000000a0000-00000000000bffff uncachable
[    0.124771]   2: 00000000000c0000-00000000000fffff write-protect
[    0.130769]   3: 0000000000100000-00000000adffffff write-back
[    0.136508]   4: 00000000ae000000-00000000afffffff uncachable
[    0.142246]   5: 0000000100000000-000000044fffffff write-back
[    0.147992] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
[    0.155122] e820: update [mem 0xae000000-0xafffffff] usable ==> reserved
[    0.161663] e820: update [mem 0xc0000000-0xffffffff] usable ==> reserved
[    0.168358] e820: update [mem 0x110000000-0x1ffffffff] usable ==> reserved
[    0.175227] WARNING: BIOS bug: CPU MTRRs don't cover all of memory, losing 3840MB of RAM.
[    0.183397] update e820 for mtrr
[    0.186621] modified physical RAM map:
[    0.190351] modified: [mem 0x0000000000000000-0x0000000000000fff] reserved
[    0.197219] modified: [mem 0x0000000000001000-0x000000000009ffff] usable
[    0.203914] modified: [mem 0x0000000000100000-0x0000000018ebafff] usable
[    0.210608] modified: [mem 0x0000000018ebb000-0x0000000018fe7fff] ACPI NVS
[    0.217475] modified: [mem 0x0000000018fe8000-0x0000000018fe8fff] usable
[    0.224170] modified: [mem 0x0000000018fe9000-0x0000000018ffffff] ACPI NVS
[    0.231037] modified: [mem 0x0000000019000000-0x000000001dffcfff] usable
[    0.237732] modified: [mem 0x000000001dffd000-0x000000001dffffff] ACPI data
[    0.244687] modified: [mem 0x000000001e000000-0x00000000ac77cfff] usable
[    0.251381] modified: [mem 0x00000000ac77d000-0x00000000ac77ffff] type 20
[    0.258162] modified: [mem 0x00000000ac780000-0x00000000ac780fff] reserved
[    0.265031] modified: [mem 0x00000000ac781000-0x00000000ac782fff] type 20
[    0.271812] modified: [mem 0x00000000ac783000-0x00000000ac7d9fff] reserved
[    0.278679] modified: [mem 0x00000000ac7da000-0x00000000ac7dafff] type 20
[    0.285460] modified: [mem 0x00000000ac7db000-0x00000000ac7dcfff] reserved
[    0.292329] modified: [mem 0x00000000ac7dd000-0x00000000ac7e7fff] type 20
[    0.299109] modified: [mem 0x00000000ac7e8000-0x00000000ac7f1fff] reserved
[    0.305977] modified: [mem 0x00000000ac7f2000-0x00000000ac7f5fff] type 20
[    0.312757] modified: [mem 0x00000000ac7f6000-0x00000000ac7f9fff] reserved
[    0.319627] modified: [mem 0x00000000ac7fa000-0x00000000ac7fafff] type 20
[    0.326408] modified: [mem 0x00000000ac7fb000-0x00000000ac803fff] reserved
[    0.333275] modified: [mem 0x00000000ac804000-0x00000000ac810fff] type 20
[    0.340058] modified: [mem 0x00000000ac811000-0x00000000ac813fff] reserved
[    0.346927] modified: [mem 0x00000000ac814000-0x00000000ad7fffff] usable
[    0.353620] modified: [mem 0x00000000fed20000-0x00000000fed3ffff] reserved
[    0.360489] modified: [mem 0x0000000100000000-0x000000010fffffff] usable
[    0.367183] modified: [mem 0x0000000110000000-0x00000001ffffffff] reserved
[    0.374051] modified: [mem 0x0000000200000000-0x000000044fffffff] usable
[    0.380745] last_pfn = 0x450000 max_arch_pfn = 0x400000000
[    0.386223] last_pfn = 0xad800 max_arch_pfn = 0x400000000
[    0.393245] found SMP MP-table at [mem 0x000f1dd0-0x000f1ddf]
[    0.398838] Using GB pages for direct mapping
[    0.415353] printk: log_buf_len: 16777216 bytes
[    0.419724] printk: early log buf free: 253832(96%)
[    0.424592] Secure boot could not be determined
[    0.429112] RAMDISK: [mem 0x372c7000-0x3795afff]
[    0.433723] ACPI: Early table checksum verification disabled
[    0.439377] ACPI: RSDP 0x000000001DFFFF98 000024 (v02 DELL  )
[    0.445112] ACPI: XSDT 0x000000001DFFEE18 00006C (v01 DELL   CBX3     06222004 MSFT 00010013)
[    0.453632] ACPI: FACP 0x0000000018FF0C18 0000F4 (v04 DELL   CBX3     06222004 MSFT 00010013)
[    0.462153] ACPI: DSDT 0x0000000018FA9018 006373 (v01 DELL   CBX3     00000000 INTL 20091112)
[    0.470671] ACPI: FACS 0x0000000018FFDF40 000040
[    0.475278] ACPI: FACS 0x0000000018FF1F40 000040
[    0.479887] ACPI: APIC 0x000000001DFFDC18 000158 (v02 DELL   CBX3     06222004 MSFT 00010013)
[    0.488406] ACPI: MCFG 0x0000000018FFED18 00003C (v01 A M I  OEMMCFG. 06222004 MSFT 00000097)
[    0.496927] ACPI: TCPA 0x0000000018FFEC98 000032 (v02                 00000000      00000000)
[    0.505447] ACPI: SSDT 0x0000000018FEFA98 000306 (v01 DELLTP TPM      00003000 INTL 20091112)
[    0.513967] ACPI: HPET 0x0000000018FFEC18 000038 (v01 A M I   PCHHPET 06222004 AMI. 00000003)
[    0.522487] ACPI: BOOT 0x0000000018FFEB98 000028 (v01 DELL   CBX3     06222004 AMI  00010013)
[    0.531008] ACPI: SSDT 0x0000000018FB0018 037106 (v02 INTEL  CpuPm    00004000 INTL 20091112)
[    0.539526] ACPI: SLIC 0x0000000018FEEC18 000176 (v03 DELL   CBX3     06222004 MSFT 00010013)
[    0.548046] ACPI: Reserving FACP table memory at [mem 0x18ff0c18-0x18ff0d0b]
[    0.555088] ACPI: Reserving DSDT table memory at [mem 0x18fa9018-0x18faf38a]
[    0.562130] ACPI: Reserving FACS table memory at [mem 0x18ffdf40-0x18ffdf7f]
[    0.569172] ACPI: Reserving FACS table memory at [mem 0x18ff1f40-0x18ff1f7f]
[    0.576213] ACPI: Reserving APIC table memory at [mem 0x1dffdc18-0x1dffdd6f]
[    0.583254] ACPI: Reserving MCFG table memory at [mem 0x18ffed18-0x18ffed53]
[    0.590295] ACPI: Reserving TCPA table memory at [mem 0x18ffec98-0x18ffecc9]
[    0.597336] ACPI: Reserving SSDT table memory at [mem 0x18fefa98-0x18fefd9d]
[    0.604378] ACPI: Reserving HPET table memory at [mem 0x18ffec18-0x18ffec4f]
[    0.611418] ACPI: Reserving BOOT table memory at [mem 0x18ffeb98-0x18ffebbf]
[    0.618665] ACPI: Reserving SSDT table memory at [mem 0x18fb0018-0x18fe711d]
[    0.625706] ACPI: Reserving SLIC table memory at [mem 0x18feec18-0x18feed8d]
[    0.632792] No NUMA configuration found
[    0.636572] Faking a node at [mem 0x0000000000000000-0x000000044fffffff]
[    0.643268] NODE_DATA(0) allocated [mem 0x44b7f8000-0x44b7fbfff]
[    0.649305] Zone ranges:
[    0.651786]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.657959]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.664132]   Normal   [mem 0x0000000100000000-0x000000044fffffff]
[    0.670304] Movable zone start for each node
[    0.674564] Early memory node ranges
[    0.678128]   node   0: [mem 0x0000000000001000-0x000000000009ffff]
[    0.684388]   node   0: [mem 0x0000000000100000-0x0000000018ebafff]
[    0.690647]   node   0: [mem 0x0000000018fe8000-0x0000000018fe8fff]
[    0.696908]   node   0: [mem 0x0000000019000000-0x000000001dffcfff]
[    0.703168]   node   0: [mem 0x000000001e000000-0x00000000ac77cfff]
[    0.709427]   node   0: [mem 0x00000000ac814000-0x00000000ad7fffff]
[    0.715686]   node   0: [mem 0x0000000100000000-0x000000010fffffff]
[    0.721946]   node   0: [mem 0x0000000200000000-0x000000044fffffff]
[    0.728206] Initmem setup node 0 [mem 0x0000000000001000-0x000000044fffffff]
[    0.735250] On node 0, zone DMA: 1 pages in unavailable ranges
[    0.735274] On node 0, zone DMA: 96 pages in unavailable ranges
[    0.741596] On node 0, zone DMA32: 301 pages in unavailable ranges
[    0.747459] On node 0, zone DMA32: 23 pages in unavailable ranges
[    0.756635] On node 0, zone DMA32: 3 pages in unavailable ranges
[    0.762596] On node 0, zone DMA32: 151 pages in unavailable ranges
[    0.768986] On node 0, zone Normal: 10240 pages in unavailable ranges
[    0.788009] ACPI: PM-Timer IO Port: 0x408
[    0.798317] IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
[    0.805164] IOAPIC[1]: apic_id 2, version 32, address 0xfec3f000, GSI 24-47
[    0.812116] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.818461] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.825071] ACPI: Using ACPI (MADT) for SMP configuration information
[    0.831501] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[    0.836633] TSC deadline timer available
[    0.840544] smpboot: Allowing 32 CPUs, 24 hotplug CPUs
[    0.845700] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
[    0.853236] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000fffff]
[    0.860800] PM: hibernation: Registered nosave memory: [mem 0x18ebb000-0x18fe7fff]
[    0.868363] PM: hibernation: Registered nosave memory: [mem 0x18fe9000-0x18ffffff]
[    0.875926] PM: hibernation: Registered nosave memory: [mem 0x1dffd000-0x1dffffff]
[    0.883490] PM: hibernation: Registered nosave memory: [mem 0xac77d000-0xac77ffff]
[    0.891051] PM: hibernation: Registered nosave memory: [mem 0xac780000-0xac780fff]
[    0.898614] PM: hibernation: Registered nosave memory: [mem 0xac781000-0xac782fff]
[    0.906178] PM: hibernation: Registered nosave memory: [mem 0xac783000-0xac7d9fff]
[    0.913742] PM: hibernation: Registered nosave memory: [mem 0xac7da000-0xac7dafff]
[    0.921306] PM: hibernation: Registered nosave memory: [mem 0xac7db000-0xac7dcfff]
[    0.928869] PM: hibernation: Registered nosave memory: [mem 0xac7dd000-0xac7e7fff]
[    0.936433] PM: hibernation: Registered nosave memory: [mem 0xac7e8000-0xac7f1fff]
[    0.943996] PM: hibernation: Registered nosave memory: [mem 0xac7f2000-0xac7f5fff]
[    0.951559] PM: hibernation: Registered nosave memory: [mem 0xac7f6000-0xac7f9fff]
[    0.959122] PM: hibernation: Registered nosave memory: [mem 0xac7fa000-0xac7fafff]
[    0.966686] PM: hibernation: Registered nosave memory: [mem 0xac7fb000-0xac803fff]
[    0.974248] PM: hibernation: Registered nosave memory: [mem 0xac804000-0xac810fff]
[    0.981809] PM: hibernation: Registered nosave memory: [mem 0xac811000-0xac813fff]
[    0.989372] PM: hibernation: Registered nosave memory: [mem 0xad800000-0xfed1ffff]
[    0.996933] PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfed3ffff]
[    1.004495] PM: hibernation: Registered nosave memory: [mem 0xfed40000-0xffffffff]
[    1.012060] PM: hibernation: Registered nosave memory: [mem 0x110000000-0x1ffffffff]
[    1.019797] [mem 0xad800000-0xfed1ffff] available for PCI devices
[    1.025880] Booting paravirtualized kernel on bare hardware
[    1.031444] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[    1.046295] setup_percpu: NR_CPUS:256 nr_cpumask_bits:32 nr_cpu_ids:32 nr_node_ids:1
[    1.056271] percpu: Embedded 78 pages/cpu s282624 r8192 d28672 u524288
[    1.062650] pcpu-alloc: s282624 r8192 d28672 u524288 alloc=1*2097152
[    1.068988] pcpu-alloc: [0] 00 01 02 03 [0] 04 05 06 07 
[    1.074292] pcpu-alloc: [0] 08 09 10 11 [0] 12 13 14 15 
[    1.079594] pcpu-alloc: [0] 16 17 18 19 [0] 20 21 22 23 
[    1.084897] pcpu-alloc: [0] 24 25 26 27 [0] 28 29 30 31 
[    1.090219] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.4.0-rc1+ root=/dev/sda7 ro earlyprintk=ttyS0,115200 console=ttyS0,115200 console=tty0 ras=cec_disable root=/dev/sda7 log_buf_len=10M resume=/dev/sda5 no_console_suspend ignore_loglevel mtrr=debug
[    1.112884] Unknown kernel command line parameters "BOOT_IMAGE=/boot/vmlinuz-6.4.0-rc1+", will be passed to user space.
[    1.125058] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
[    1.133712] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[    1.141770] Fallback order for Node 0: 0 
[    1.141776] Built 1 zonelists, mobility grouping on.  Total pages: 3150281
[    1.152485] Policy zone: Normal
[    1.155624] mem auto-init: stack:off, heap alloc:off, heap free:off
[    1.161881] software IO TLB: area num 32.
[    1.201697] Memory: 12308624K/12801796K available (14336K kernel code, 2459K rwdata, 5712K rodata, 3044K init, 14704K bss, 492916K reserved, 0K cma-reserved)
[    1.215760] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=32, Nodes=1
[    1.222260] Kernel/User page tables isolation: enabled
[    1.227461] ftrace: allocating 40092 entries in 157 pages
[    1.238827] ftrace: allocated 157 pages with 5 groups
[    1.243844] Dynamic Preempt: full
[    1.247217] Running RCU self tests
[    1.250451] Running RCU synchronous self tests
[    1.254895] rcu: Preemptible hierarchical RCU implementation.
[    1.260621] rcu: 	RCU lockdep checking is enabled.
[    1.265402] rcu: 	RCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=32.
[    1.272184] 	Trampoline variant of Tasks RCU enabled.
[    1.277225] 	Rude variant of Tasks RCU enabled.
[    1.281746] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
[    1.289395] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=32
[    1.296244] Running RCU synchronous self tests
[    1.303525] NR_IRQS: 16640, nr_irqs: 1088, preallocated irqs: 16
[    1.309579] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[    1.316463] Console: colour dummy device 80x25
[    1.320753] printk: console [tty0] enabled
[    1.324829] printk: bootconsole [earlyser0] disabled
[    1.329828] printk: console [ttyS0] enabled
[    2.969459] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[    2.977231] ... MAX_LOCKDEP_SUBCLASSES:  8
[    2.981340] ... MAX_LOCK_DEPTH:          48
[    2.985536] ... MAX_LOCKDEP_KEYS:        8192
[    2.989908] ... CLASSHASH_SIZE:          4096
[    2.994279] ... MAX_LOCKDEP_ENTRIES:     32768
[    2.998738] ... MAX_LOCKDEP_CHAINS:      65536
[    3.003195] ... CHAINHASH_SIZE:          32768
[    3.007653]  memory used by lock dependency info: 6365 kB
[    3.013071]  memory used for stack traces: 4224 kB
[    3.017878]  per task-struct memory footprint: 1920 bytes
[    3.023329] ACPI: Core revision 20230331
[    3.027502] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
[    3.036706] APIC: Switch to symmetric I/O mode setup
[    3.041907] x2apic: IRQ remapping doesn't support X2APIC mode
[    3.047731] Switched APIC routing to physical flat.
[    3.053202] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    3.063699] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x33c4821c4fd, max_idle_ns: 440795387422 ns
[    3.074284] Calibrating delay loop (skipped), value calculated using timer frequency.. 7182.75 BogoMIPS (lpj=3591377)
[    3.075272] pid_max: default: 32768 minimum: 301
[    3.083375] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    3.084297] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[    3.086699] CPU0: Thermal monitoring enabled (TM1)
[    3.087321] process: using mwait in idle threads
[    3.088279] Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8
[    3.089272] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32, 1GB 0
[    3.090276] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[    3.091273] Spectre V2 : Mitigation: Retpolines
[    3.092272] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[    3.093271] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
[    3.094271] Spectre V2 : Enabling Restricted Speculation for firmware calls
[    3.095274] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[    3.096272] Spectre V2 : User space: Mitigation: STIBP via prctl
[    3.097273] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
[    3.098275] MDS: Mitigation: Clear CPU buffers
[    3.099272] MMIO Stale Data: Unknown: No mitigations
[    3.113896] Freeing SMP alternatives memory: 36K
[    3.114674] Running RCU synchronous self tests
[    3.115277] Running RCU synchronous self tests
[    3.116462] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60GHz (family: 0x6, model: 0x2d, stepping: 0x7)
[    3.117725] cblist_init_generic: Setting adjustable number of callback queues.
[    3.118272] cblist_init_generic: Setting shift to 5 and lim to 1.
[    3.119344] cblist_init_generic: Setting shift to 5 and lim to 1.
[    3.120328] Running RCU-tasks wait API self tests
[    3.232390] Performance Events: PEBS fmt1+, SandyBridge events, 16-deep LBR, full-width counters, Intel PMU driver.
[    3.233292] ... version:                3
[    3.234272] ... bit width:              48
[    3.235276] ... generic registers:      4
[    3.236280] ... value mask:             0000ffffffffffff
[    3.237279] ... max period:             00007fffffffffff
[    3.238277] ... fixed-purpose events:   3
[    3.239277] ... event mask:             000000070000000f
[    3.240594] Estimated ratio of average max frequency by base frequency (times 1024): 1052
[    3.241365] rcu: Hierarchical SRCU implementation.
[    3.242273] rcu: 	Max phase no-delay instances is 400.
[    3.245869] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
[    3.247411] smp: Bringing up secondary CPUs ...
[    3.248655] x86: Booting SMP configuration:
[    3.249274] .... node  #0, CPUs:        #1  #2  #3  #4
[    3.260747] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[    3.262583]   #5  #6  #7
[    3.268545] smp: Brought up 1 node, 8 CPUs
[    3.270285] smpboot: Max logical packages: 4
[    3.271315] smpboot: Total of 8 processors activated (57462.03 BogoMIPS)
[    3.274775] devtmpfs: initialized
[    3.276302] ACPI: PM: Registering ACPI NVS region [mem 0x18ebb000-0x18fe7fff] (1232896 bytes)
[    3.277409] ACPI: PM: Registering ACPI NVS region [mem 0x18fe9000-0x18ffffff] (94208 bytes)
[    3.278460] Running RCU synchronous self tests
[    3.279304] Running RCU synchronous self tests
[    3.280402] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[    3.281300] futex hash table entries: 8192 (order: 8, 1048576 bytes, linear)
[    3.283791] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[    3.284713] DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
[    3.285286] DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[    3.286284] DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[    3.287578] thermal_sys: Registered thermal governor 'step_wise'
[    3.287581] thermal_sys: Registered thermal governor 'user_space'
[    3.288347] cpuidle: using governor ladder
[    3.290314] cpuidle: using governor menu
[    3.291323] Simple Boot Flag at 0xf3 set to 0x1
[    3.292331] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[    3.293435] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xb0000000-0xb3ffffff] (base 0xb0000000)
[    3.294276] PCI: not using MMCONFIG
[    3.295279] PCI: Using configuration type 1 for base access
[    3.296327] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[    3.298427] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
[    3.300295] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[    3.301276] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
[    3.302276] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[    3.303279] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
[    3.305532] ACPI: Added _OSI(Module Device)
[    3.306274] ACPI: Added _OSI(Processor Device)
[    3.307273] ACPI: Added _OSI(3.0 _SCP Extensions)
[    3.308273] ACPI: Added _OSI(Processor Aggregator Device)
[    3.344417] Callback from call_rcu_tasks_rude() invoked.
[    3.439738] ACPI: 3 ACPI AML tables successfully acquired and loaded
[    3.468367] ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
[    3.475332] ACPI: Interpreter enabled
[    3.476350] ACPI: PM: (supports S0 S1 S3 S4 S5)
[    3.477289] ACPI: Using IOAPIC for interrupt routing
[    3.478339] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xb0000000-0xb3ffffff] (base 0xb0000000)
[    3.482798] [Firmware Info]: PCI: MMCONFIG at [mem 0xb0000000-0xb3ffffff] not reserved in ACPI motherboard resources
[    3.483287] PCI: MMCONFIG at [mem 0xb0000000-0xb3ffffff] reserved as EfiMemoryMappedIO
[    3.484290] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    3.485272] PCI: Using E820 reservations for host bridge windows
[    3.486876] ACPI: Enabled 7 GPEs in block 00 to 3F
[    3.526310] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f])
[    3.527292] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
[    3.528506] acpi PNP0A08:00: _OSC: platform does not support [AER PCIeCapability LTR]
[    3.529483] acpi PNP0A08:00: _OSC: not requesting control; platform does not support [PCIeCapability]
[    3.530280] acpi PNP0A08:00: _OSC: OS requested [PME AER PCIeCapability LTR]
[    3.531279] acpi PNP0A08:00: _OSC: platform willing to grant [PME]
[    3.532279] acpi PNP0A08:00: _OSC: platform retains control of PCIe features (AE_SUPPORT)
[    3.534553] PCI host bridge to bus 0000:00
[    3.535281] pci_bus 0000:00: root bus resource [io  0x0000-0x03af window]
[    3.536273] pci_bus 0000:00: root bus resource [io  0x03e0-0x0cf7 window]
[    3.537273] pci_bus 0000:00: root bus resource [io  0x03b0-0x03df window]
[    3.538273] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    3.539277] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000dffff window]
[    3.540279] pci_bus 0000:00: root bus resource [mem 0xb0000000-0xfbffffff window]
[    3.541285] pci_bus 0000:00: root bus resource [bus 00-1f]
[    3.542375] pci 0000:00:00.0: [8086:3c00] type 00 class 0x060000
[    3.543395] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold
[    3.544527] pci 0000:00:01.0: [8086:3c02] type 01 class 0x060400
[    3.545399] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[    3.546742] pci 0000:00:01.1: [8086:3c03] type 01 class 0x060400
[    3.547387] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
[    3.548643] pci 0000:00:02.0: [8086:3c04] type 01 class 0x060400
[    3.549387] pci 0000:00:02.0: PME# supported from D0 D3hot D3cold
[    3.550636] pci 0000:00:03.0: [8086:3c08] type 01 class 0x060400
[    3.551318] pci 0000:00:03.0: enabling Extended Tags
[    3.552355] pci 0000:00:03.0: PME# supported from D0 D3hot D3cold
[    3.553603] pci 0000:00:05.0: [8086:3c28] type 00 class 0x088000
[    3.554490] pci 0000:00:05.2: [8086:3c2a] type 00 class 0x088000
[    3.555481] pci 0000:00:05.4: [8086:3c2c] type 00 class 0x080020
[    3.556287] pci 0000:00:05.4: reg 0x10: [mem 0xf332d000-0xf332dfff]
[    3.557488] pci 0000:00:11.0: [8086:1d3e] type 01 class 0x060400
[    3.558410] pci 0000:00:11.0: PME# supported from D0 D3hot D3cold
[    3.559592] pci 0000:00:16.0: [8086:1d3a] type 00 class 0x078000
[    3.560300] pci 0000:00:16.0: reg 0x10: [mem 0xf332c000-0xf332c00f 64bit]
[    3.561367] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
[    3.562423] pci 0000:00:16.3: [8086:1d3d] type 00 class 0x070002
[    3.563293] pci 0000:00:16.3: reg 0x10: [io  0xf0a0-0xf0a7]
[    3.564279] pci 0000:00:16.3: reg 0x14: [mem 0xf332a000-0xf332afff]
[    3.565486] pci 0000:00:19.0: [8086:1502] type 00 class 0x020000
[    3.566287] pci 0000:00:19.0: reg 0x10: [mem 0xf3300000-0xf331ffff]
[    3.567280] pci 0000:00:19.0: reg 0x14: [mem 0xf3329000-0xf3329fff]
[    3.568280] pci 0000:00:19.0: reg 0x18: [io  0xf040-0xf05f]
[    3.569344] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[    3.570532] pci 0000:00:1a.0: [8086:1d2d] type 00 class 0x0c0320
[    3.571295] pci 0000:00:1a.0: reg 0x10: [mem 0xf3328000-0xf33283ff]
[    3.572371] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[    3.573569] pci 0000:00:1b.0: [8086:1d20] type 00 class 0x040300
[    3.574298] pci 0000:00:1b.0: reg 0x10: [mem 0xf3320000-0xf3323fff 64bit]
[    3.575381] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[    3.576621] pci 0000:00:1c.0: [8086:1d16] type 01 class 0x060400
[    3.577387] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[    3.578560] pci 0000:00:1c.2: [8086:1d14] type 01 class 0x060400
[    3.579387] pci 0000:00:1c.2: PME# supported from D0 D3hot D3cold
[    3.580565] pci 0000:00:1d.0: [8086:1d26] type 00 class 0x0c0320
[    3.581295] pci 0000:00:1d.0: reg 0x10: [mem 0xf3327000-0xf33273ff]
[    3.582378] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[    3.583567] pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401
[    3.584597] pci 0000:00:1f.0: [8086:1d41] type 00 class 0x060100
[    3.585702] pci 0000:00:1f.2: [8086:1d02] type 00 class 0x010601
[    3.586292] pci 0000:00:1f.2: reg 0x10: [io  0xf090-0xf097]
[    3.587279] pci 0000:00:1f.2: reg 0x14: [io  0xf080-0xf083]
[    3.588282] pci 0000:00:1f.2: reg 0x18: [io  0xf070-0xf077]
[    3.589279] pci 0000:00:1f.2: reg 0x1c: [io  0xf060-0xf063]
[    3.590279] pci 0000:00:1f.2: reg 0x20: [io  0xf020-0xf03f]
[    3.591283] pci 0000:00:1f.2: reg 0x24: [mem 0xf3326000-0xf33267ff]
[    3.592323] pci 0000:00:1f.2: PME# supported from D3hot
[    3.593521] pci 0000:00:1f.3: [8086:1d22] type 00 class 0x0c0500
[    3.594295] pci 0000:00:1f.3: reg 0x10: [mem 0xf3325000-0xf33250ff 64bit]
[    3.595288] pci 0000:00:1f.3: reg 0x20: [io  0xf000-0xf01f]
[    3.596568] pci 0000:00:01.0: PCI bridge to [bus 01]
[    3.597374] pci 0000:00:01.1: PCI bridge to [bus 02]
[    3.598375] pci 0000:03:00.0: [10de:10d8] type 00 class 0x030000
[    3.599283] pci 0000:03:00.0: reg 0x10: [mem 0xf2000000-0xf2ffffff]
[    3.600281] pci 0000:03:00.0: reg 0x14: [mem 0xf4000000-0xf7ffffff 64bit pref]
[    3.601281] pci 0000:03:00.0: reg 0x1c: [mem 0xf8000000-0xf9ffffff 64bit pref]
[    3.602283] pci 0000:03:00.0: reg 0x24: [io  0xe000-0xe07f]
[    3.603282] pci 0000:03:00.0: reg 0x30: [mem 0xf3000000-0xf307ffff pref]
[    3.604288] pci 0000:03:00.0: enabling Extended Tags
[    3.605310] pci 0000:03:00.0: BAR 3: assigned to efifb
[    3.606290] pci 0000:03:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    3.607460] pci 0000:03:00.1: [10de:0be3] type 00 class 0x040300
[    3.608281] pci 0000:03:00.1: reg 0x10: [mem 0xf3080000-0xf3083fff]
[    3.609300] pci 0000:03:00.1: enabling Extended Tags
[    3.610484] pci 0000:00:02.0: PCI bridge to [bus 03]
[    3.611281] pci 0000:00:02.0:   bridge window [io  0xe000-0xefff]
[    3.612274] pci 0000:00:02.0:   bridge window [mem 0xf2000000-0xf30fffff]
[    3.613276] pci 0000:00:02.0:   bridge window [mem 0xf4000000-0xf9ffffff 64bit pref]
[    3.614357] pci 0000:00:03.0: PCI bridge to [bus 04]
[    3.615394] pci 0000:05:00.0: [8086:1d6b] type 00 class 0x010700
[    3.616296] pci 0000:05:00.0: reg 0x10: [mem 0xfa800000-0xfa803fff 64bit pref]
[    3.617287] pci 0000:05:00.0: reg 0x18: [mem 0xfa400000-0xfa7fffff 64bit pref]
[    3.618282] pci 0000:05:00.0: reg 0x20: [io  0xd000-0xd0ff]
[    3.619304] pci 0000:05:00.0: enabling Extended Tags
[    3.620400] pci 0000:05:00.0: reg 0x164: [mem 0x00000000-0x00003fff 64bit pref]
[    3.621273] pci 0000:05:00.0: VF(n) BAR0 space: [mem 0x00000000-0x0007bfff 64bit pref] (contains BAR0 for 31 VFs)
[    3.622629] pci 0000:00:11.0: PCI bridge to [bus 05]
[    3.623276] pci 0000:00:11.0:   bridge window [io  0xd000-0xdfff]
[    3.624275] pci 0000:00:11.0:   bridge window [mem 0xf3200000-0xf32fffff]
[    3.625283] pci 0000:00:11.0:   bridge window [mem 0xfa400000-0xfa8fffff 64bit pref]
[    3.626365] pci 0000:00:1c.0: PCI bridge to [bus 06]
[    3.627396] pci 0000:07:00.0: [1033:0194] type 00 class 0x0c0330
[    3.628301] pci 0000:07:00.0: reg 0x10: [mem 0xf3100000-0xf3101fff 64bit]
[    3.629439] pci 0000:07:00.0: PME# supported from D0 D3hot D3cold
[    3.630630] pci 0000:00:1c.2: PCI bridge to [bus 07]
[    3.631284] pci 0000:00:1c.2:   bridge window [mem 0xf3100000-0xf31fffff]
[    3.632304] pci_bus 0000:08: extended config space not accessible
[    3.633369] pci 0000:00:1e.0: PCI bridge to [bus 08] (subtractive decode)
[    3.634288] pci 0000:00:1e.0:   bridge window [io  0x0000-0x03af window] (subtractive decode)
[    3.635273] pci 0000:00:1e.0:   bridge window [io  0x03e0-0x0cf7 window] (subtractive decode)
[    3.636279] pci 0000:00:1e.0:   bridge window [io  0x03b0-0x03df window] (subtractive decode)
[    3.637273] pci 0000:00:1e.0:   bridge window [io  0x0d00-0xffff window] (subtractive decode)
[    3.638280] pci 0000:00:1e.0:   bridge window [mem 0x000a0000-0x000dffff window] (subtractive decode)
[    3.639278] pci 0000:00:1e.0:   bridge window [mem 0xb0000000-0xfbffffff window] (subtractive decode)
[    3.641847] ACPI: PCI: Interrupt link LNKA configured for IRQ 11
[    3.642428] ACPI: PCI: Interrupt link LNKB configured for IRQ 10
[    3.643427] ACPI: PCI: Interrupt link LNKC configured for IRQ 5
[    3.644416] ACPI: PCI: Interrupt link LNKD configured for IRQ 10
[    3.645416] ACPI: PCI: Interrupt link LNKE configured for IRQ 3
[    3.646415] ACPI: PCI: Interrupt link LNKF configured for IRQ 0
[    3.647280] ACPI: PCI: Interrupt link LNKF disabled
[    3.648415] ACPI: PCI: Interrupt link LNKG configured for IRQ 11
[    3.649433] ACPI: PCI: Interrupt link LNKH configured for IRQ 0
[    3.650273] ACPI: PCI: Interrupt link LNKH disabled
[    3.651665] ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-ff])
[    3.652276] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
[    3.653490] acpi PNP0A08:01: _OSC: platform does not support [AER PCIeCapability LTR]
[    3.654475] acpi PNP0A08:01: _OSC: not requesting control; platform does not support [PCIeCapability]
[    3.655273] acpi PNP0A08:01: _OSC: OS requested [PME AER PCIeCapability LTR]
[    3.656272] acpi PNP0A08:01: _OSC: platform willing to grant [PME]
[    3.657272] acpi PNP0A08:01: _OSC: platform retains control of PCIe features (AE_SUPPORT)
[    3.658290] acpi PNP0A08:01: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-3f] only partially covers this bridge
[    3.659754] PCI host bridge to bus 0000:20
[    3.660274] pci_bus 0000:20: root bus resource [io  0x03b0-0x03df window]
[    3.661273] pci_bus 0000:20: root bus resource [mem 0x000a0000-0x000bffff window]
[    3.662273] pci_bus 0000:20: root bus resource [bus 20-ff]
[    3.663806] iommu: Default domain type: Translated 
[    3.664273] iommu: DMA domain TLB invalidation policy: lazy mode 
[    3.665578] SCSI subsystem initialized
[    3.666344] libata version 3.00 loaded.
[    3.667293] ACPI: bus type USB registered
[    3.668322] usbcore: registered new interface driver usbfs
[    3.669311] usbcore: registered new interface driver hub
[    3.670327] usbcore: registered new device driver usb
[    3.671325] pps_core: LinuxPPS API ver. 1 registered
[    3.672272] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    3.673290] PTP clock support registered
[    3.675286] efivars: Registered efivars operations
[    3.676804] PCI: Using ACPI for IRQ routing
[    3.679485] PCI: Discovered peer bus 3f
[    3.680280] PCI: root bus 3f: using default resources
[    3.681273] PCI: Probing PCI hardware (bus 3f)
[    3.682350] PCI host bridge to bus 0000:3f
[    3.683273] pci_bus 0000:3f: root bus resource [io  0x0000-0xffff]
[    3.684273] pci_bus 0000:3f: root bus resource [mem 0x00000000-0x3fffffffffff]
[    3.685287] pci_bus 0000:3f: No busn resource found for root bus, will use [bus 3f-ff]
[    3.686280] pci_bus 0000:3f: busn_res: can not insert [bus 3f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 20-ff])
[    3.687305] pci 0000:3f:08.0: [8086:3c80] type 00 class 0x088000
[    3.688417] pci 0000:3f:08.3: [8086:3c83] type 00 class 0x088000
[    3.689415] pci 0000:3f:08.4: [8086:3c84] type 00 class 0x088000
[    3.690437] pci 0000:3f:09.0: [8086:3c90] type 00 class 0x088000
[    3.691403] pci 0000:3f:09.3: [8086:3c93] type 00 class 0x088000
[    3.692412] pci 0000:3f:09.4: [8086:3c94] type 00 class 0x088000
[    3.693418] pci 0000:3f:0a.0: [8086:3cc0] type 00 class 0x088000
[    3.694377] pci 0000:3f:0a.1: [8086:3cc1] type 00 class 0x088000
[    3.695382] pci 0000:3f:0a.2: [8086:3cc2] type 00 class 0x088000
[    3.696376] pci 0000:3f:0a.3: [8086:3cd0] type 00 class 0x088000
[    3.697386] pci 0000:3f:0b.0: [8086:3ce0] type 00 class 0x088000
[    3.698383] pci 0000:3f:0b.3: [8086:3ce3] type 00 class 0x088000
[    3.699376] pci 0000:3f:0c.0: [8086:3ce8] type 00 class 0x088000
[    3.700374] pci 0000:3f:0c.1: [8086:3ce8] type 00 class 0x088000
[    3.701381] pci 0000:3f:0c.6: [8086:3cf4] type 00 class 0x088000
[    3.702376] pci 0000:3f:0c.7: [8086:3cf6] type 00 class 0x088000
[    3.703374] pci 0000:3f:0d.0: [8086:3ce8] type 00 class 0x088000
[    3.704385] pci 0000:3f:0d.1: [8086:3ce8] type 00 class 0x088000
[    3.705383] pci 0000:3f:0d.6: [8086:3cf5] type 00 class 0x088000
[    3.706382] pci 0000:3f:0e.0: [8086:3ca0] type 00 class 0x088000
[    3.707388] pci 0000:3f:0e.1: [8086:3c46] type 00 class 0x110100
[    3.708398] pci 0000:3f:0f.0: [8086:3ca8] type 00 class 0x088000
[    3.709417] pci 0000:3f:0f.1: [8086:3c71] type 00 class 0x088000
[    3.710410] pci 0000:3f:0f.2: [8086:3caa] type 00 class 0x088000
[    3.711409] pci 0000:3f:0f.3: [8086:3cab] type 00 class 0x088000
[    3.712410] pci 0000:3f:0f.4: [8086:3cac] type 00 class 0x088000
[    3.713409] pci 0000:3f:0f.5: [8086:3cad] type 00 class 0x088000
[    3.714411] pci 0000:3f:0f.6: [8086:3cae] type 00 class 0x088000
[    3.715383] pci 0000:3f:10.0: [8086:3cb0] type 00 class 0x088000
[    3.716410] pci 0000:3f:10.1: [8086:3cb1] type 00 class 0x088000
[    3.717411] pci 0000:3f:10.2: [8086:3cb2] type 00 class 0x088000
[    3.718409] pci 0000:3f:10.3: [8086:3cb3] type 00 class 0x088000
[    3.719414] pci 0000:3f:10.4: [8086:3cb4] type 00 class 0x088000
[    3.720411] pci 0000:3f:10.5: [8086:3cb5] type 00 class 0x088000
[    3.721409] pci 0000:3f:10.6: [8086:3cb6] type 00 class 0x088000
[    3.722411] pci 0000:3f:10.7: [8086:3cb7] type 00 class 0x088000
[    3.723406] pci 0000:3f:11.0: [8086:3cb8] type 00 class 0x088000
[    3.724394] pci 0000:3f:13.0: [8086:3ce4] type 00 class 0x088000
[    3.725382] pci 0000:3f:13.1: [8086:3c43] type 00 class 0x110100
[    3.726393] pci 0000:3f:13.4: [8086:3ce6] type 00 class 0x110100
[    3.727381] pci 0000:3f:13.5: [8086:3c44] type 00 class 0x110100
[    3.728383] pci 0000:3f:13.6: [8086:3c45] type 00 class 0x088000
[    3.729389] pci_bus 0000:3f: busn_res: [bus 3f-ff] end is updated to 3f
[    3.730276] pci_bus 0000:3f: busn_res: can not insert [bus 3f] under domain [bus 00-ff] (conflicts with (null) [bus 20-ff])
[    3.731350] PCI: pci_cache_line_size set to 64 bytes
[    3.732395] e820: reserve RAM buffer [mem 0x18ebb000-0x1bffffff]
[    3.733287] e820: reserve RAM buffer [mem 0x18fe9000-0x1bffffff]
[    3.734272] e820: reserve RAM buffer [mem 0x1dffd000-0x1fffffff]
[    3.735278] e820: reserve RAM buffer [mem 0xac77d000-0xafffffff]
[    3.736287] e820: reserve RAM buffer [mem 0xad800000-0xafffffff]
[    3.737589] pci 0000:03:00.0: vgaarb: setting as boot VGA device
[    3.738270] pci 0000:03:00.0: vgaarb: bridge control possible
[    3.738270] pci 0000:03:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[    3.738301] vgaarb: loaded
[    3.739432] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
[    3.740276] hpet0: 8 comparators, 64-bit 14.318180 MHz counter
[    3.743436] clocksource: Switched to clocksource tsc-early
[    3.749474] pnp: PnP ACPI init
[    3.752794] system 00:00: [mem 0xfc000000-0xfcffffff] has been reserved
[    3.759457] system 00:00: [mem 0xfd000000-0xfdffffff] has been reserved
[    3.766110] system 00:00: [mem 0xfe000000-0xfeafffff] has been reserved
[    3.772764] system 00:00: [mem 0xfeb00000-0xfebfffff] has been reserved
[    3.779416] system 00:00: [mem 0xfed00400-0xfed3ffff] could not be reserved
[    3.786420] system 00:00: [mem 0xfed45000-0xfedfffff] has been reserved
[    3.793487] system 00:01: [io  0x0680-0x069f] has been reserved
[    3.799454] system 00:01: [io  0x0800-0x080f] has been reserved
[    3.799467] Callback from call_rcu_tasks() invoked.
[    3.799478] system 00:01: [io  0xffff] has been reserved
[    3.815677] system 00:01: [io  0xffff] has been reserved
[    3.821028] system 00:01: [io  0x0400-0x0453] has been reserved
[    3.826980] system 00:01: [io  0x0458-0x047f] has been reserved
[    3.832934] system 00:01: [io  0x0500-0x057f] has been reserved
[    3.838897] system 00:01: [io  0x164e-0x164f] has been reserved
[    3.845146] system 00:03: [io  0x0454-0x0457] has been reserved
[    3.852337] pnp: PnP ACPI: found 8 devices
[    3.867387] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    3.876377] NET: Registered PF_INET protocol family
[    3.881552] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[    3.892583] tcp_listen_portaddr_hash hash table entries: 8192 (order: 7, 589824 bytes, linear)
[    3.901469] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[    3.909262] TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[    3.917531] TCP bind hash table entries: 65536 (order: 11, 9437184 bytes, vmalloc hugepage)
[    3.928450] TCP: Hash tables configured (established 131072 bind 65536)
[    3.935325] UDP hash table entries: 8192 (order: 8, 1310720 bytes, linear)
[    3.942619] UDP-Lite hash table entries: 8192 (order: 8, 1310720 bytes, linear)
[    3.950426] NET: Registered PF_UNIX/PF_LOCAL protocol family
[    3.956166] pci 0000:00:01.0: PCI bridge to [bus 01]
[    3.961170] pci 0000:00:01.1: PCI bridge to [bus 02]
[    3.966168] pci 0000:00:02.0: PCI bridge to [bus 03]
[    3.971157] pci 0000:00:02.0:   bridge window [io  0xe000-0xefff]
[    3.977286] pci 0000:00:02.0:   bridge window [mem 0xf2000000-0xf30fffff]
[    3.984123] pci 0000:00:02.0:   bridge window [mem 0xf4000000-0xf9ffffff 64bit pref]
[    3.991912] pci 0000:00:03.0: PCI bridge to [bus 04]
[    3.996914] pci 0000:05:00.0: BAR 7: assigned [mem 0xfa804000-0xfa87ffff 64bit pref]
[    4.004701] pci 0000:00:11.0: PCI bridge to [bus 05]
[    4.009696] pci 0000:00:11.0:   bridge window [io  0xd000-0xdfff]
[    4.015824] pci 0000:00:11.0:   bridge window [mem 0xf3200000-0xf32fffff]
[    4.022656] pci 0000:00:11.0:   bridge window [mem 0xfa400000-0xfa8fffff 64bit pref]
[    4.030448] pci 0000:00:1c.0: PCI bridge to [bus 06]
[    4.035468] pci 0000:00:1c.2: PCI bridge to [bus 07]
[    4.040459] pci 0000:00:1c.2:   bridge window [mem 0xf3100000-0xf31fffff]
[    4.047287] pci 0000:00:1e.0: PCI bridge to [bus 08]
[    4.052297] pci_bus 0000:00: resource 4 [io  0x0000-0x03af window]
[    4.058507] pci_bus 0000:00: resource 5 [io  0x03e0-0x0cf7 window]
[    4.064715] pci_bus 0000:00: resource 6 [io  0x03b0-0x03df window]
[    4.070942] pci_bus 0000:00: resource 7 [io  0x0d00-0xffff window]
[    4.077158] pci_bus 0000:00: resource 8 [mem 0x000a0000-0x000dffff window]
[    4.084066] pci_bus 0000:00: resource 9 [mem 0xb0000000-0xfbffffff window]
[    4.090976] pci_bus 0000:03: resource 0 [io  0xe000-0xefff]
[    4.096579] pci_bus 0000:03: resource 1 [mem 0xf2000000-0xf30fffff]
[    4.102879] pci_bus 0000:03: resource 2 [mem 0xf4000000-0xf9ffffff 64bit pref]
[    4.110143] pci_bus 0000:05: resource 0 [io  0xd000-0xdfff]
[    4.115744] pci_bus 0000:05: resource 1 [mem 0xf3200000-0xf32fffff]
[    4.122045] pci_bus 0000:05: resource 2 [mem 0xfa400000-0xfa8fffff 64bit pref]
[    4.129313] pci_bus 0000:07: resource 1 [mem 0xf3100000-0xf31fffff]
[    4.135611] pci_bus 0000:08: resource 4 [io  0x0000-0x03af window]
[    4.141821] pci_bus 0000:08: resource 5 [io  0x03e0-0x0cf7 window]
[    4.148030] pci_bus 0000:08: resource 6 [io  0x03b0-0x03df window]
[    4.154249] pci_bus 0000:08: resource 7 [io  0x0d00-0xffff window]
[    4.160462] pci_bus 0000:08: resource 8 [mem 0x000a0000-0x000dffff window]
[    4.167369] pci_bus 0000:08: resource 9 [mem 0xb0000000-0xfbffffff window]
[    4.174436] pci_bus 0000:20: resource 4 [io  0x03b0-0x03df window]
[    4.180661] pci_bus 0000:20: resource 5 [mem 0x000a0000-0x000bffff window]
[    4.187630] pci_bus 0000:3f: resource 4 [io  0x0000-0xffff]
[    4.193232] pci_bus 0000:3f: resource 5 [mem 0x00000000-0x3fffffffffff]
[    4.199918] pci 0000:00:05.0: disabled boot interrupts on device [8086:3c28]
[    4.208571] pci 0000:03:00.1: extending delay after power-on from D3hot to 20 msec
[    4.216333] pci 0000:03:00.1: D0 power state depends on 0000:03:00.0
[    4.223191] pci 0000:07:00.0: enabling device (0000 -> 0002)
[    4.229117] PCI: CLS 64 bytes, default 64
[    4.233195] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    4.233433] Unpacking initramfs...
[    4.239672] software IO TLB: mapped [mem 0x00000000a877d000-0x00000000ac77d000] (64MB)
[    4.252983] Initialise system trusted keyrings
[    4.257679] workingset: timestamp_bits=56 max_order=22 bucket_order=0
[    4.264459] ntfs: driver 2.1.32 [Flags: R/W].
[    4.268854] fuse: init (API version 7.38)
[    4.273139] 9p: Installing v9fs 9p2000 file system support
[    4.278836] Key type asymmetric registered
[    4.282980] Asymmetric key parser 'x509' registered
[    4.287925] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[    4.316498] ACPI: \_PR_.CP00: Found 4 idle states
[    4.322022] ACPI: \_PR_.CP01: Found 4 idle states
[    4.326883] ACPI: \_PR_.CP02: Found 4 idle states
[    4.331741] ACPI: \_PR_.CP03: Found 4 idle states
[    4.336594] ACPI: \_PR_.CP04: Found 4 idle states
[    4.341447] ACPI: \_PR_.CP05: Found 4 idle states
[    4.346299] ACPI: \_PR_.CP06: Found 4 idle states
[    4.351151] ACPI: \_PR_.CP07: Found 4 idle states
[    4.358021] Freeing initrd memory: 6736K
[    4.500661] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    4.507336] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    4.517352] serial 0000:00:16.3: enabling device (0000 -> 0003)
[    4.525626] 0000:00:16.3: ttyS1 at I/O 0xf0a0 (irq = 17, base_baud = 115200) is a 16550A
[    4.534736] Linux agpgart interface v0.103
[    4.539182] ACPI: bus type drm_connector registered
[    4.545067] nouveau 0000:03:00.0: vgaarb: deactivate vga console
[    4.551292] nouveau 0000:03:00.0: NVIDIA GT218 (0a8c00b1)
[    4.673972] nouveau 0000:03:00.0: bios: version 70.18.83.00.08
[    4.684664] nouveau 0000:03:00.0: fb: 512 MiB DDR3
[    4.889076] tsc: Refined TSC clocksource calibration: 3591.345 MHz
[    4.895354] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x33c4635c383, max_idle_ns: 440795314831 ns
[    4.905501] clocksource: Switched to clocksource tsc
[    4.999816] nouveau 0000:03:00.0: DRM: VRAM: 512 MiB
[    5.004901] nouveau 0000:03:00.0: DRM: GART: 1048576 MiB
[    5.010322] nouveau 0000:03:00.0: DRM: TMDS table version 2.0
[    5.016157] nouveau 0000:03:00.0: DRM: DCB version 4.0
[    5.021383] nouveau 0000:03:00.0: DRM: DCB outp 00: 02000360 00000000
[    5.027926] nouveau 0000:03:00.0: DRM: DCB outp 01: 02000362 00020010
[    5.034476] nouveau 0000:03:00.0: DRM: DCB outp 02: 028003a6 0f220010
[    5.041016] nouveau 0000:03:00.0: DRM: DCB outp 03: 01011380 00000000
[    5.047544] nouveau 0000:03:00.0: DRM: DCB outp 04: 08011382 00020010
[    5.054071] nouveau 0000:03:00.0: DRM: DCB outp 05: 088113c6 0f220010
[    5.060599] nouveau 0000:03:00.0: DRM: DCB conn 00: 00101064
[    5.066341] nouveau 0000:03:00.0: DRM: DCB conn 01: 00202165
[    5.077323] nouveau 0000:03:00.0: DRM: MM: using COPY for buffer copies
[    5.084095] stackdepot: allocating hash table of 1048576 entries via kvcalloc
[    5.100189] [drm] Initialized nouveau 1.3.1 20120801 for 0000:03:00.0 on minor 0
[    5.155486] fbcon: nouveaudrmfb (fb0) is primary device
[    5.299223] Console: switching to colour frame buffer device 210x65
[    5.318062] nouveau 0000:03:00.0: [drm] fb0: nouveaudrmfb frame buffer device
[    5.361998] megasas: 07.725.01.00-rc1
[    5.366287] st: Version 20160209, fixed bufsize 32768, s/g segs 256
[    5.373365] ahci 0000:00:1f.2: version 3.0
[    5.379420] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x3 impl SATA mode
[    5.387923] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ems apst 
[    5.404110] scsi host0: ahci
[    5.408666] scsi host1: ahci
[    5.413155] scsi host2: ahci
[    5.417474] scsi host3: ahci
[    5.421656] scsi host4: ahci
[    5.425773] scsi host5: ahci
[    5.429388] ata1: SATA max UDMA/133 abar m2048@0xf3326000 port 0xf3326100 irq 32
[    5.437202] ata2: SATA max UDMA/133 abar m2048@0xf3326000 port 0xf3326180 irq 32
[    5.445018] ata3: DUMMY
[    5.447726] ata4: DUMMY
[    5.450467] ata5: DUMMY
[    5.453194] ata6: DUMMY
[    5.456324] e1000e: Intel(R) PRO/1000 Network Driver
[    5.461640] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[    5.469230] e1000e 0000:00:19.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[    5.563007] e1000e 0000:00:19.0 0000:00:19.0 (uninitialized): registered PHC clock
[    5.656045] e1000e 0000:00:19.0 eth0: (PCI Express:2.5GT/s:Width x1) 90:b1:1c:7b:da:e7
[    5.664463] e1000e 0000:00:19.0 eth0: Intel(R) PRO/1000 Network Connection
[    5.671760] e1000e 0000:00:19.0 eth0: MAC: 10, PHY: 11, PBA No: 7041FF-0FF
[    5.679792] xhci_hcd 0000:07:00.0: xHCI Host Controller
[    5.685750] xhci_hcd 0000:07:00.0: new USB bus registered, assigned bus number 1
[    5.693916] xhci_hcd 0000:07:00.0: hcc params 0x014042cb hci version 0x96 quirks 0x0000000000000004
[    5.704373] xhci_hcd 0000:07:00.0: xHCI Host Controller
[    5.704410] ehci-pci 0000:00:1a.0: EHCI Host Controller
[    5.709698] xhci_hcd 0000:07:00.0: new USB bus registered, assigned bus number 2
[    5.709775] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 3
[    5.715032] xhci_hcd 0000:07:00.0: Host supports USB 3.0 SuperSpeed
[    5.715758] ehci-pci 0000:00:1a.0: debug port 2
[    5.723137] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.04
[    5.734874] ehci-pci 0000:00:1a.0: irq 16, io mem 0xf3328000
[    5.736537] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    5.744079] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[    5.749532] usb usb1: Product: xHCI Host Controller
[    5.766884] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[    5.769664] usb usb1: Manufacturer: Linux 6.4.0-rc1+ xhci-hcd
[    5.769674] usb usb1: SerialNumber: 0000:07:00.0
[    5.771453] hub 1-0:1.0: USB hub found
[    5.776217] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    5.777062] hub 1-0:1.0: 2 ports detected
[    5.778266] ata1.00: ATA-8: ST2000DM001-1CH164, CC24, max UDMA/133
[    5.779735] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
[    5.785413] ata2.00: ATAPI: PLDS DVD+/-RW DS-8A9SH, ED11, max UDMA/100
[    5.787281] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 6.04
[    5.792950] ata1.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 32), AA
[    5.794080] ata1.00: configured for UDMA/133
[    5.794996] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    5.800420] scsi 0:0:0:0: Direct-Access     ATA      ST2000DM001-1CH1 CC24 PQ: 0 ANSI: 5
[    5.802303] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    5.802517] sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[    5.802524] sd 0:0:0:0: [sda] 4096-byte physical blocks
[    5.802565] sd 0:0:0:0: [sda] Write Protect is off
[    5.802585] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    5.802639] ata2.00: configured for UDMA/100
[    5.802735] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    5.802909] sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
[    5.804515] usb usb2: Product: xHCI Host Controller
[    5.810877] scsi 1:0:0:0: CD-ROM            PLDS     DVD+-RW DS-8A9SH ED11 PQ: 0 ANSI: 5
[    5.813110] usb usb2: Manufacturer: Linux 6.4.0-rc1+ xhci-hcd
[    5.813118] usb usb2: SerialNumber: 0000:07:00.0
[    5.814218] hub 2-0:1.0: USB hub found
[    5.930498]  sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 sda8 sda9 sda10 sda11 sda12 sda13 sda14 sda15 >
[    5.937639] hub 2-0:1.0: 2 ports detected
[    5.947027] sd 0:0:0:0: [sda] Attached SCSI disk
[    5.952925] usbcore: registered new interface driver usb-storage
[    5.953320] usb usb3: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.04
[    5.953346] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    5.953358] usb usb3: Product: EHCI Host Controller
[    5.953368] usb usb3: Manufacturer: Linux 6.4.0-rc1+ ehci_hcd
[    5.953377] usb usb3: SerialNumber: 0000:00:1a.0
[    5.955657] hub 3-0:1.0: USB hub found
[    5.963290] usbcore: registered new interface driver usbserial_generic
[    5.966351] hub 3-0:1.0: 3 ports detected
[    5.968987] ehci-pci 0000:00:1d.0: EHCI Host Controller
[    5.972243] usbserial: USB Serial support registered for generic
[    5.975991] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 4
[    5.978016] i8042: PNP: PS/2 Controller [PNP0303:PS2K] at 0x60,0x64 irq 1
[    5.978781] ehci-pci 0000:00:1d.0: debug port 2
[    5.979252] i8042: PNP: PS/2 appears to have AUX port disabled, if this is incorrect please boot with i8042.nopnp
[    5.979563] i8042: Warning: Keylock active
[    5.984165] ehci-pci 0000:00:1d.0: irq 17, io mem 0xf3327000
[    5.984694] serio: i8042 KBD port at 0x60,0x64 irq 1
[    5.991990] ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00
[    5.996780] mousedev: PS/2 mouse device common for all mice
[    5.997064] usb usb4: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.04
[    5.997764] usbcore: registered new interface driver synaptics_usb
[    5.998371] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[    5.999033] input: PC Speaker as /devices/platform/pcspkr/input/input1
[    5.999865] usb usb4: Product: EHCI Host Controller
[    6.000732] rtc_cmos 00:02: RTC can wake from S4
[    6.001380] usb usb4: Manufacturer: Linux 6.4.0-rc1+ ehci_hcd
[    6.002438] sr 1:0:0:0: [sr0] scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray
[    6.002682] rtc_cmos 00:02: registered as rtc0
[    6.002754] rtc_cmos 00:02: setting system clock to 2023-05-31T08:13:04 UTC (1685520784)
[    6.002869] rtc_cmos 00:02: alarms up to one year, y3k, 242 bytes nvram, hpet irqs
[    6.002890] usb usb4: SerialNumber: 0000:00:1d.0
[    6.002897] fail to initialize ptp_kvm
[    6.003504] cdrom: Uniform CD-ROM driver Revision: 3.20
[    6.005377] hub 4-0:1.0: USB hub found
[    6.009843] intel_pstate: Intel P-state driver initializing
[    6.020196] hub 4-0:1.0: 3 ports detected
[    6.038068] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    6.042372] hid: raw HID events driver (C) Jiri Kosina
[    6.256352] NET: Registered PF_INET6 protocol family
[    6.263200] Segment Routing with IPv6
[    6.268012] In-situ OAM (IOAM) with IPv6
[    6.268465] sr 1:0:0:0: Attached scsi CD-ROM sr0
[    6.268573] mip6: Mobile IPv6
[    6.269349] sr 1:0:0:0: Attached scsi generic sg1 type 5
[    6.269870] NET: Registered PF_PACKET protocol family
[    6.282986] usb 4-1: new high-speed USB device number 2 using ehci-pci
[    6.288314] 9pnet: Installing 9P2000 support
[    6.309283] microcode: Microcode Update Driver: v2.2.
[    6.309291] IPI shorthand broadcast: enabled
[    6.327011] sched_clock: Marking stable (4579003425, 1747946202)->(7838879238, -1511929611)
[    6.337407] registered taskstats version 1
[    6.342047] Loading compiled-in X.509 certificates
[    6.368624] printk: console [netcon0] enabled
[    6.373971] netconsole: network logging started
[    6.375966] usb 3-1: new high-speed USB device number 2 using ehci-pci
[    6.380413] clk: Disabling unused clocks
[    6.395067] Freeing unused decrypted memory: 2036K
[    6.401430] Freeing unused kernel image (initmem) memory: 3044K
[    6.414063] Write protecting the kernel read-only data: 20480k
[    6.417477] usb 4-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00
[    6.429238] usb 4-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    6.437113] Freeing unused kernel image (rodata/data gap) memory: 432K
[    6.442848] hub 4-1:1.0: USB hub found
[    6.448521] Run /init as init process
[    6.452699]   with arguments:
[    6.456182]     /init
[    6.458958]   with environment:
[    6.460973] hub 4-1:1.0: 8 ports detected
[    6.462586]     HOME=/
[    6.470011]     TERM=linux
[    6.473991]     BOOT_IMAGE=/boot/vmlinuz-6.4.0-rc1+
[    6.520754] usb 3-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00
[    6.529566] usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[    6.543836] hub 3-1:1.0: USB hub found
[    6.551163] hub 3-1:1.0: 6 ports detected
[    7.716407] random: crng init done
[    8.320622] process '/usr/bin/fstype' started with executable stack
[    8.678656] EXT4-fs (sda7): mounted filesystem 6aef0462-c7e4-45ca-9a68-4e435300595e with ordered data mode. Quota mode: disabled.
[   10.589450] acpi-cpufreq: probe of acpi-cpufreq failed with error -17
[   10.591550] i801_smbus 0000:00:1f.3: enabling device (0000 -> 0003)
[   10.592674] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input2
[   10.593618] ACPI: button: Power Button [PWRB]
[   10.593840] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
[   10.599604] ACPI: button: Power Button [PWRF]
[   10.605121] i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
[   10.647504] i2c i2c-14: 4/4 memory slots populated (from DMI)
[   10.684712] iTCO_vendor_support: vendor-support=0
[   10.715708] RAPL PMU: API unit is 2^-32 Joules, 2 fixed counters, 163840 ms ovfl timer
[   10.724357] RAPL PMU: hw unit of domain pp0-core 2^-16 Joules
[   10.730682] RAPL PMU: hw unit of domain package 2^-16 Joules
[   10.731713] iTCO_wdt iTCO_wdt.1.auto: Found a Patsburg TCO device (Version=2, TCOBASE=0x0460)
[   10.747468] iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
[   10.866628] cryptd: max_cpu_qlen set to 1000
[   10.970050] AVX version of gcm_enc/dec engaged.
[   10.975735] AES CTR mode by8 optimization enabled
[   12.063715] EDAC MC: Ver: 3.0.0
[   12.069807] EDAC DEBUG: edac_mc_sysfs_init: device mc created
[   12.129012] EDAC DEBUG: sbridge_init: 
[   12.134337] EDAC sbridge: Seeking for: PCI ID 8086:3ca0
[   12.142034] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3ca0
[   12.149709] EDAC sbridge: Seeking for: PCI ID 8086:3ca0
[   12.156866] EDAC sbridge: Seeking for: PCI ID 8086:3ca8
[   12.163530] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3ca8
[   12.171528] EDAC sbridge: Seeking for: PCI ID 8086:3ca8
[   12.178310] EDAC sbridge: Seeking for: PCI ID 8086:3c71
[   12.185032] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3c71
[   12.191963] EDAC sbridge: Seeking for: PCI ID 8086:3c71
[   12.197977] EDAC sbridge: Seeking for: PCI ID 8086:3caa
[   12.203914] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3caa
[   12.210999] EDAC sbridge: Seeking for: PCI ID 8086:3caa
[   12.218000] EDAC sbridge: Seeking for: PCI ID 8086:3cab
[   12.224070] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cab
[   12.230697] EDAC sbridge: Seeking for: PCI ID 8086:3cab
[   12.233955] raid6: sse2x4   gen() 14825 MB/s
[   12.236475] EDAC sbridge: Seeking for: PCI ID 8086:3cac
[   12.247017] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cac
[   12.253817] EDAC sbridge: Seeking for: PCI ID 8086:3cac
[   12.253955] raid6: sse2x2   gen() 17600 MB/s
[   12.254629] EDAC sbridge: Seeking for: PCI ID 8086:3cad
[   12.270653] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cad
[   12.271953] raid6: sse2x1   gen() 13406 MB/s
[   12.277273] EDAC sbridge: Seeking for: PCI ID 8086:3cad
[   12.277736] raid6: using algorithm sse2x2 gen() 17600 MB/s
[   12.278521] EDAC sbridge: Seeking for: PCI ID 8086:3cb8
[   12.295956] raid6: .... xor() 9295 MB/s, rmw enabled
[   12.300040] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cb8
[   12.300970] raid6: using ssse3x2 recovery algorithm
[   12.301022] EDAC sbridge: Seeking for: PCI ID 8086:3cb8
[   12.323539] EDAC sbridge: Seeking for: PCI ID 8086:3cf4
[   12.329993] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cf4
[   12.336849] EDAC sbridge: Seeking for: PCI ID 8086:3cf4
[   12.342617] EDAC sbridge: Seeking for: PCI ID 8086:3cf6
[   12.348749] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cf6
[   12.355352] EDAC sbridge: Seeking for: PCI ID 8086:3cf6
[   12.361335] EDAC sbridge: Seeking for: PCI ID 8086:3cf5
[   12.367035] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cf5
[   12.373588] EDAC sbridge: Seeking for: PCI ID 8086:3cf5
[   12.379462] EDAC DEBUG: sbridge_probe: Registering MC#0 (1 of 1)
[   12.386249] EDAC DEBUG: sbridge_register_mci: MC: mci = 000000009a9d559b, dev = 00000000fe166939
[   12.396975] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3ca0, bus 63 with dev = 0000000083b9c14f
[   12.407225] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3ca8, bus 63 with dev = 000000007daf49c8
[   12.417462] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3c71, bus 63 with dev = 0000000095b74c5a
[   12.427993] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3caa, bus 63 with dev = 00000000ed519f97
[   12.428196] xor: automatically using best checksumming function   avx       
[   12.428973] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cab, bus 63 with dev = 000000001a05e474
[   12.456556] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cac, bus 63 with dev = 0000000052d1f1e6
[   12.466979] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cad, bus 63 with dev = 00000000a771d86a
[   12.477455] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cb8, bus 63 with dev = 0000000045fb76d7
[   12.487676] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cf4, bus 63 with dev = 00000000f39a9d0f
[   12.498143] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cf6, bus 63 with dev = 00000000046233b4
[   12.508373] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cf5, bus 63 with dev = 00000000cc484fec
[   12.518869] EDAC DEBUG: get_dimm_config: mc#0: Node ID: 0, source ID: 0
[   12.525991] EDAC DEBUG: get_dimm_config: Memory mirroring is disabled
[   12.532939] EDAC DEBUG: get_dimm_config: Lockstep is disabled
[   12.539965] EDAC DEBUG: get_dimm_config: address map is on open page mode
[   12.547182] EDAC DEBUG: __populate_dimms: Memory is registered
[   12.553568] EDAC DEBUG: __populate_dimms: mc#0: ha 0 channel 0, dimm 0, 4096 MiB (1048576 pages) bank: 8, rank: 2, row: 0x8000, col: 0x400
[   12.566753] EDAC DEBUG: __populate_dimms: mc#0: ha 0 channel 1, dimm 0, 4096 MiB (1048576 pages) bank: 8, rank: 2, row: 0x8000, col: 0x400
[   12.581000] EDAC DEBUG: __populate_dimms: mc#0: ha 0 channel 2, dimm 0, 4096 MiB (1048576 pages) bank: 8, rank: 2, row: 0x8000, col: 0x400
[   12.593896] EDAC DEBUG: __populate_dimms: mc#0: ha 0 channel 3, dimm 0, 4096 MiB (1048576 pages) bank: 8, rank: 2, row: 0x8000, col: 0x400
[   12.606793] EDAC DEBUG: get_memory_layout: TOLM: 2.812 GB (0x00000000b3ffffff)
[   12.614478] EDAC DEBUG: get_memory_layout: TOHM: 17.312 GB (0x0000000453ffffff)
[   12.622828] EDAC DEBUG: get_memory_layout: SAD#0 DRAM up to 17.250 GB (0x0000000450000000) Interleave: [8:6] reg=0x000044c3
[   12.634683] EDAC DEBUG: get_memory_layout: SAD#0, interleave #0: 0
[   12.641935] EDAC DEBUG: get_memory_layout: TAD#0: up to 2.750 GB (0x00000000b0000000), socket interleave 1, memory interleave 4, TGT: 0, 1, 2, 3, reg=0x0002b3e4
[   12.656832] EDAC DEBUG: get_memory_layout: TAD#1: up to 17.250 GB (0x0000000450000000), socket interleave 1, memory interleave 4, TGT: 0, 1, 2, 3, reg=0x001133e4
[   12.671799] EDAC DEBUG: get_memory_layout: TAD CH#0, offset #0: 0.000 GB (0x0000000000000000), reg=0x00000000
[   12.682510] EDAC DEBUG: get_memory_layout: TAD CH#0, offset #1: 1.250 GB (0x0000000050000000), reg=0x00000500
[   12.692916] EDAC DEBUG: get_memory_layout: TAD CH#1, offset #0: 0.000 GB (0x0000000000000000), reg=0x00000000
[   12.703355] EDAC DEBUG: get_memory_layout: TAD CH#1, offset #1: 1.250 GB (0x0000000050000000), reg=0x00000500
[   12.713775] EDAC DEBUG: get_memory_layout: TAD CH#2, offset #0: 0.000 GB (0x0000000000000000), reg=0x00000000
[   12.724532] EDAC DEBUG: get_memory_layout: TAD CH#2, offset #1: 1.250 GB (0x0000000050000000), reg=0x00000500
[   12.735918] EDAC DEBUG: get_memory_layout: TAD CH#3, offset #0: 0.000 GB (0x0000000000000000), reg=0x00000000
[   12.746340] EDAC DEBUG: get_memory_layout: TAD CH#3, offset #1: 1.250 GB (0x0000000050000000), reg=0x00000500
[   12.756756] EDAC DEBUG: get_memory_layout: CH#0 RIR#0, limit: 3.999 GB (0x00000000fff00000), way: 2, reg=0x9000000e
[   12.767958] EDAC DEBUG: get_memory_layout: CH#0 RIR#0 INTL#0, offset 0.000 GB (0x0000000000000000), tgt: 0, reg=0x00000000
[   12.780009] EDAC DEBUG: get_memory_layout: CH#0 RIR#0 INTL#1, offset 0.000 GB (0x0000000000000000), tgt: 1, reg=0x00010000
[   12.791549] EDAC DEBUG: get_memory_layout: CH#1 RIR#0, limit: 3.999 GB (0x00000000fff00000), way: 2, reg=0x9000000e
[   12.802491] EDAC DEBUG: get_memory_layout: CH#1 RIR#0 INTL#0, offset 0.000 GB (0x0000000000000000), tgt: 0, reg=0x00000000
[   12.814420] EDAC DEBUG: get_memory_layout: CH#1 RIR#0 INTL#1, offset 0.000 GB (0x0000000000000000), tgt: 1, reg=0x00010000
[   12.826866] EDAC DEBUG: get_memory_layout: CH#2 RIR#0, limit: 3.999 GB (0x00000000fff00000), way: 2, reg=0x9000000e
[   12.837904] EDAC DEBUG: get_memory_layout: CH#2 RIR#0 INTL#0, offset 0.000 GB (0x0000000000000000), tgt: 0, reg=0x00000000
[   12.849484] EDAC DEBUG: get_memory_layout: CH#2 RIR#0 INTL#1, offset 0.000 GB (0x0000000000000000), tgt: 1, reg=0x00010000
[   12.861348] EDAC DEBUG: get_memory_layout: CH#3 RIR#0, limit: 3.999 GB (0x00000000fff00000), way: 2, reg=0x9000000e
[   12.872320] EDAC DEBUG: get_memory_layout: CH#3 RIR#0 INTL#0, offset 0.000 GB (0x0000000000000000), tgt: 0, reg=0x00000000
[   12.883887] EDAC DEBUG: get_memory_layout: CH#3 RIR#0 INTL#1, offset 0.000 GB (0x0000000000000000), tgt: 1, reg=0x00010000
[   12.895731] EDAC DEBUG: edac_mc_add_mc_with_groups: 
[   12.901723] EDAC DEBUG: edac_create_sysfs_mci_device: device mc0 created
[   12.910049] EDAC DEBUG: edac_create_dimm_object: device dimm0 created at location channel 0 slot 0 
[   12.919929] EDAC DEBUG: edac_create_dimm_object: device dimm3 created at location channel 1 slot 0 
[   12.931048] EDAC DEBUG: edac_create_dimm_object: device dimm6 created at location channel 2 slot 0 
[   12.940648] EDAC DEBUG: edac_create_dimm_object: device dimm9 created at location channel 3 slot 0 
[   12.952077] EDAC DEBUG: edac_create_csrow_object: device csrow0 created
[   12.959596] EDAC MC0: Giving out device to module sb_edac controller Sandy Bridge SrcID#0_Ha#0: DEV 0000:3f:0e.0 (INTERRUPT)
[   12.972975] EDAC sbridge:  Ver: 1.1.2 
[   14.009312] Adding 33554428k swap on /dev/sda5.  Priority:-2 extents:1 across:33554428k 
[   14.123544] EXT4-fs (sda7): re-mounted 6aef0462-c7e4-45ca-9a68-4e435300595e. Quota mode: disabled.
[   14.842033] e1000e 0000:00:19.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[   14.855626] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   15.284697] EXT4-fs (sda6): mounting ext3 file system using the ext4 subsystem
[   15.571525] EXT4-fs (sda6): mounted filesystem 6b02369b-7362-4920-b703-8ba36125139f with ordered data mode. Quota mode: disabled.
[   15.650183] FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
[   15.679894] BTRFS info (device sda9): using crc32c (crc32c-intel) checksum algorithm
[   15.689918] BTRFS info (device sda9): disk space caching is enabled
[   16.222303] SGI XFS with ACLs, security attributes, quota, no debug enabled
[   16.260915] XFS (sda10): Deprecated V4 format (crc=0) will not be supported after September 2030.
[   16.281460] XFS (sda10): Mounting V4 Filesystem b62c870e-d204-498e-999b-5a0ea7c560cd
[   16.415636] XFS (sda10): Ending clean mount
[   16.423369] xfs filesystem being mounted at /mnt/kernel supports timestamps until 2038-01-19 (0x7fffffff)
[   16.623738] EXT4-fs (sda11): mounted filesystem a1428eb4-29da-4a1f-bbde-e9dc1081fb27 with ordered data mode. Quota mode: disabled.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed May 31 08:42:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 08:42:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541659.844623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4HPP-0007qX-5v; Wed, 31 May 2023 08:42:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541659.844623; Wed, 31 May 2023 08:42:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4HPP-0007qQ-36; Wed, 31 May 2023 08:42:19 +0000
Received: by outflank-mailman (input) for mailman id 541659;
 Wed, 31 May 2023 08:42:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOFA=BU=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q4HPN-0007qK-OB
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 08:42:17 +0000
Received: from mail-lf1-x12b.google.com (mail-lf1-x12b.google.com
 [2a00:1450:4864:20::12b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0acbb42c-ff8f-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 10:42:14 +0200 (CEST)
Received: by mail-lf1-x12b.google.com with SMTP id
 2adb3069b0e04-4f4b384c09fso6483110e87.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 01:42:15 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 bg22-20020a05600c3c9600b003f4283f5c1bsm2549802wmb.2.2023.05.31.01.42.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 01:42:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0acbb42c-ff8f-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685522535; x=1688114535;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id:from:to
         :cc:subject:date:message-id:reply-to;
        bh=Osa6hVxTZdMNXuW1v1cwzqZsai20sLwtKYlmBbJRRqU=;
        b=M1ROsHsJwnjZ89apSgITOKrlhXXAt8aM4NwhpsZ9q3BYqQH93jP11khg+auJeK8a5f
         gue9HSh56ehbqjVeHF1XjPH/kme6rnBnEMBY4ZTyCKXFFRyC4bijwZw5yBNWpq36tTg6
         msh6tnEUCrIADgpJdLdekWKD2BvJ5CIYlyz54=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685522535; x=1688114535;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Osa6hVxTZdMNXuW1v1cwzqZsai20sLwtKYlmBbJRRqU=;
        b=R055RyI+Dhn2E38jW90fkht7p6zzMFKRsvU2us7MnQDumSirLyA9lMcJdlFbt/KKnq
         Dl+s1SovbyKZR0pavv1HLAtPudrTC12aCoWtEc/izYwV++gm/muw88buS2ZyX+PAsrZL
         XNJ4s8lPQaFBCFy4DyAMf8hxb5i8ej3Ok6K+/B4sUo00AjCOgFyJetfxabPmyOu0QP3H
         DNlmrtb4GoA8jp+D232i5THoJu1cCpOHaKckxrw/yA+7Ms7jLGmGi00/6wMwtaF88uhW
         XaUu+NWvapGeLBmXCdk312YV0mz27MmWy3AeQFZOEAgjW2Lk/n2SLVU8fkF8hcmv02ED
         2P/g==
X-Gm-Message-State: AC+VfDyZVfgKDscGocOfDKaxSsVyAgt3hzAQv8YUDcZZtdrrHPmd78fy
	vKTmWghgbRi08QUZcILUWAXS33MVKQlYEBjVzFQ=
X-Google-Smtp-Source: ACHHUZ6+f7F6JUKyaOyaNOXYrEv5GBvVb05c20J98HURKyeck1hZ9prak3v5hnBpfX4Fo6VahfXLXg==
X-Received: by 2002:a05:6512:72:b0:4ef:f09c:c505 with SMTP id i18-20020a056512007200b004eff09cc505mr2063845lfo.37.1685522534956;
        Wed, 31 May 2023 01:42:14 -0700 (PDT)
Message-ID: <64770866.050a0220.c32a7.5357@mx.google.com>
X-Google-Original-Message-ID: <ZHcIZRjx5LzVxi+D@EMEAENGAAD19049.>
Date: Wed, 31 May 2023 09:42:13 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Juergen Gross <jgross@suse.com>, Jan Beulich <jbeulich@suse.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH v2 1/3] x86: Add bit definitions for Automatic IBRS
References: <20230530135854.1517-1-alejandro.vallejo@cloud.com>
 <20230530135854.1517-2-alejandro.vallejo@cloud.com>
 <bc209116-75ac-06d6-e4bb-eb77b10ac5bd@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <bc209116-75ac-06d6-e4bb-eb77b10ac5bd@citrix.com>

On Tue, May 30, 2023 at 06:29:14PM +0100, Andrew Cooper wrote:
> ... I've changed this on commit to just "Automatic IBRS". The behaviour
> is far more complicated than this, and anyone who wants to know needs to
> read the manual extra carefully.
> 
> For one, there's a behaviour which depends on whether SEV-SNP was
> enabled in firmware...
> 
> ~Andrew
Ack.

Alejandro


From xen-devel-bounces@lists.xenproject.org Wed May 31 09:01:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 09:01:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541673.844633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Hhw-0001wZ-0q; Wed, 31 May 2023 09:01:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541673.844633; Wed, 31 May 2023 09:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Hhv-0001wS-UZ; Wed, 31 May 2023 09:01:27 +0000
Received: by outflank-mailman (input) for mailman id 541673;
 Wed, 31 May 2023 09:01:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOFA=BU=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q4Hhu-0001wM-Dp
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 09:01:26 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b78d825c-ff91-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 11:01:23 +0200 (CEST)
Received: by mail-wr1-x432.google.com with SMTP id
 ffacd0b85a97d-3063433fa66so4048457f8f.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 02:01:24 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 20-20020a05600c22d400b003f180d5b145sm19957669wmg.40.2023.05.31.02.01.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 02:01:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b78d825c-ff91-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685523683; x=1688115683;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id:from:to
         :cc:subject:date:message-id:reply-to;
        bh=qQXWIfDuVmsIWvJfCCkpoHNdEpUjjhn2AgEZ3ftf/xQ=;
        b=Fl8AxKrsYXzncYC4ObhJR0K2XfnieKzs46piHcvq5aXtYii3Gf7zvI92S2dkVVnuMD
         hkbCq+UpE/75w+VYzUn5Es/LVpjbYqoArbGZqmMhzI9ivwhOGKiOD2h0kUD8JQPnqxyv
         M/BfFnL4zCpI5JXntkq7dD/Vi/FvCoxcbqh4I=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685523683; x=1688115683;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=qQXWIfDuVmsIWvJfCCkpoHNdEpUjjhn2AgEZ3ftf/xQ=;
        b=eLIVHHC8VpU1Eha7/nBn/0Hh8BocUe0i7OftTbxCMt5truimYR9pLHjM61evx3DXy2
         n/0XZt7N8lwCbu0zJyN0WbKSHIPYRRjzBI9bnMY6ZC+9Q8k2gzcx5DgPRxV8yA+bJkwd
         uxP5UHdndoTq7QQQ0yWLCpibwFhebJccYQqJZCxO6MaZo7aAMUvgbrRYlLTMjanaPNX3
         /N22mBjzKGZ1pyd+p6rqjAl2r8EgGe8V27k2uYTNhg8GDeo0fnY4IOQ10C0eoK+kylRw
         J3kFnIhwL5I20ErW1m6wzzpaNo+HCSMw3NM9Komy4SzhDgKRrGDKhYHFk36Fab7SLO8a
         p36Q==
X-Gm-Message-State: AC+VfDwFXzcOteqfAIviQeIpvT8WbIA7lb8TqZW4he0qjqQeNiADacjA
	5Y9k69fF8k5lgSHfvOST4uqp/A==
X-Google-Smtp-Source: ACHHUZ4eujNhAr1KbosS0Y71g2yECH5jD8CRv4UNl7qvMDEtgA1r6DkaMn/uODmiUKmlUfBufsGX2g==
X-Received: by 2002:a05:600c:20c:b0:3f4:253b:92b3 with SMTP id 12-20020a05600c020c00b003f4253b92b3mr3695636wmi.18.1685523683707;
        Wed, 31 May 2023 02:01:23 -0700 (PDT)
Message-ID: <64770ce3.050a0220.fb998.7ff6@mx.google.com>
X-Google-Original-Message-ID: <ZHcM4WIfJ8+Tu1PB@EMEAENGAAD19049.>
Date: Wed, 31 May 2023 10:01:21 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/3] x86: Expose Automatic IBRS to guests
References: <20230530135854.1517-1-alejandro.vallejo@cloud.com>
 <20230530135854.1517-3-alejandro.vallejo@cloud.com>
 <2e1bea58-9f6f-08c0-ce00-148f79ba12ff@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2e1bea58-9f6f-08c0-ce00-148f79ba12ff@citrix.com>

On Tue, May 30, 2023 at 06:31:03PM +0100, Andrew Cooper wrote:
> I've committed this, but made two tweaks to the commit message. First,
> "x86/hvm" in the subject because it's important context at a glance.
Sure, that makes sense.

> Second, I've adjusted the bit about PV guests. The reason we can't
> expose it yet is that Xen doesn't currently context switch EFER
> between PV guests.
> 
> ~Andrew
We could of course context switch EFER sensibly, but what would that mean
for Automatic IBRS? It can't trivially be used for domain-to-domain
isolation, because every domain runs at a co-equal protection level. Is
there a non-obvious advantage that exposing some interface to it would
give PV? The only useful case I can think of is PVH, and that seems to be
subsumed by HVM.

Alejandro


From xen-devel-bounces@lists.xenproject.org Wed May 31 09:06:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 09:06:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541676.844643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4HmJ-0002Yz-Kh; Wed, 31 May 2023 09:05:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541676.844643; Wed, 31 May 2023 09:05:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4HmJ-0002Ys-Gg; Wed, 31 May 2023 09:05:59 +0000
Received: by outflank-mailman (input) for mailman id 541676;
 Wed, 31 May 2023 09:05:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VTAn=BU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q4HmJ-0002Ym-31
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 09:05:59 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on2060c.outbound.protection.outlook.com
 [2a01:111:f400:7e1b::60c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5adeae30-ff92-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 11:05:57 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by VE1PR04MB7487.eurprd04.prod.outlook.com (2603:10a6:800:1a2::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.24; Wed, 31 May
 2023 09:05:54 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 09:05:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5adeae30-ff92-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Bj01/iCT5uuD/U57r0OY+natuJiAofVCf3MKx9xUTA5saa1xbgxvASfzLAWqVSjUQT8632/NV0rKmnaJ/7Dl9oxNC6ygv94xqoEa9tm8I2RZyqruusB87JWuZj3tLKUxngEsQaq7+ZK78ThJXZY0CHj7Q8+HOipUbXOZh5WmLLhTjkfezb+VQVAeQZkhe5ahdd2JIGiZj4kKvfpfgURgm5w6lEgg0zI470GhB24be3lvEIbdipjEJ8gic7RjckJi5EX96ot+NTYu61nkecB4XOsnc5N7HRBNUltCvN2xaMh1x/bReLSSnq58lCnJUyA4n1SQiLrk8zEqZm3zKXWjNA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=TEoluasBZv6hl4jzKFrVbuO0CfRLMLayK1d2uZWJ680=;
 b=BAPFNYnh1/5dLLtVpTkHqpwYCfkW3RCnqwJyvCudZBEuL4bEHRd6q23oj6AFUbeY6cAXtuxkIUuRtZUgMzdEjNgc6BjhDr2wQuUHJSl4d5R9XHBuLoWB6hm+blI/6Dr+7WiYC36UsMMfZYdo8AMxCbpGlbu8jwiWVTzEaQWSPZgog52PQnr1VZpTAiXA9nn1KpdtAgQYFyxfx4LsQQ5H4NGdAtXe+FyPN1VSJCZdXJnMFGhrGWbCLumfym57pWGXKqgxS0UqiN2cMf7h8Dm8iUUCq7doEwFa6FkJ/04sxHrAJw5ZnmvUCpzI/vtgoOIsL4NyWHXjmTgvzqFFQWe2dQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TEoluasBZv6hl4jzKFrVbuO0CfRLMLayK1d2uZWJ680=;
 b=eqzHJUPK1xZz2FnwQb22Oxz1NOS4pDvmi9Tui60IiJHlMk96sH+6u+JweGDOMsfajIGob5xT877tRPdEtSSR6wSyjOb2C2V/POTioLcWKFprnfQeIBaUWggKrHvv2hHj1Dk0vRV29AV8JyeKmj9zmtHsSI13Yg8PGOAwxexLssSareMPwgWYy7N87z6DwxTtTl1S4SmcN9XnjWqeBdWA0YigY58s6FvO/AxvR9pRFPAFWeB5M4xmEjNe6Py/E+9Y21vvnjf1tbsvV/IbKVj02RUdNRLzIx0zdQG8wHxunWDDaahZ96GIz27fzqxrAtb35kazVQPgdKHWEOiCnf0aig==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <1b99e58d-338c-02af-eeab-00d691337d00@suse.com>
Date: Wed, 31 May 2023 11:05:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: xentrace buffer size, maxcpus and online cpus
Content-Language: en-US
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
References: <20230530095859.60a3e4ea.olaf@aepfle.de>
 <578d341d-0c54-de64-73e7-1dfc7e5d7584@suse.com>
 <20230530220613.4c4da5cc.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20230530220613.4c4da5cc.olaf@aepfle.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0123.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9d::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|VE1PR04MB7487:EE_
X-MS-Office365-Filtering-Correlation-Id: 690e67b9-d755-4aee-4395-08db61b63d2e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	P4XmZziatOWOqnMdGJoy+D2JCP/L//gfoYv1UXBiaPy41nHFbZTSQA3UrXNbPIZ2K2SCd86rwFZ4NfzHPIVRjRvp6C6SwuIv5BqLW0SCo4fosGpJ2kWtUX+sKhI4RK2xnANAEMOIFLByWvhy/DF3ySo3dWNMzJl5ErYvSbRwZoV5bIVHltCMAaVuRB1i8aEUmHlraSMcT8mcj4qNf0tP44b7Ypcj/V83s34+x3JKAEkl6CcvNz4vY2+bDG/I5h/giMXtsbpvWJ4ZAFjeGuBo77kvCEL5nTwF8pnplTvgIAwECIkDrNDlMEKEl0M6OEuTzq39Kx7PqXafMdeZ+ZJ1CEMPv+Ft97niGrrMwmNUqKvYiDT2uW2QbVGqGP7xbyzgRPiE9nPliYvUGVOtS7M5tr0KE8k9WnanHZzmyPJAnBlZJxzMuhBl25u7sX6lvm74jXHNLRD45U4i9+ohcnOdREVPsNdNIzmcwrewtp0hp/8P8QKOGGxWqh2sY1ybI2ZMO44T8a6mnwtXuVYocMFsSjyFrg007vEoNAqzF9iFS2eeFNZH+7Wubz4o2oAP7+8jwJPPBMCF6XijMejOLidKrKNWsNeBAkXzBzfY1MIAFjev1DRrFtQkNYxPUyfKjSMK4qsaZror72tnJHYDpoxGsw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(346002)(376002)(396003)(366004)(39860400002)(136003)(451199021)(66556008)(66476007)(5660300002)(66946007)(316002)(6916009)(4326008)(31686004)(8676002)(8936002)(53546011)(478600001)(41300700001)(6486002)(6512007)(186003)(6506007)(26005)(2616005)(2906002)(83380400001)(38100700002)(31696002)(86362001)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?dHFZS2RiTkM0enFaTUZoeW92Z0krZGVJK05BNTJ0cENveE9BOUZ4UFpxc0pX?=
 =?utf-8?B?VTh4emZBTzFGUklnSDlrQkRBY25iS2FTL2pEdHAwVUNZdXBTUFZmSjF6UllG?=
 =?utf-8?B?VWxwVG5id0dqVDFmN0xZSVB2Z0VwajVRWGxtcllXZ1l4N2VpaVJERGlROWtq?=
 =?utf-8?B?dUxibDFGSTNvREdSQ3BJTi94TzJkbVZ2SXFCc1ZxdHpiY1VLcFpJcXorSmdT?=
 =?utf-8?B?Tk1NbkdVYi8yZ3QwNHZMNWJaWm9OckhIejNPRW03aDlQL1dNTHdwK04wZHhm?=
 =?utf-8?B?YzBmcDJnd3cvSFBja1EwOHZQeXdOVE5FTXZvR1NBUC9tdGZjWjlRV1IwQnZs?=
 =?utf-8?B?QUxhVzNWOEFWT3lVTE1acit3SXJDWWNKSTdEQWFRbC9oNWJ5YWRjRmdpckRH?=
 =?utf-8?B?a1ZKNzhHU05KNTBQU3JrVWMxQUszc0pndkhvcWdDWEdPamxQZkI1STZlcUZ1?=
 =?utf-8?B?NTdBQnBVVUQzL0hybjMxTEYrSWQzSm0rUVRYM2FIRSsyWlBKSVNPNzRnSzRI?=
 =?utf-8?B?YUhreEVvQnVZQUs0cUZCRkphM0dzMS9id0JYK1dLQVZOV2IxNWI2M3BvUmho?=
 =?utf-8?B?dVZpcURSbHNpZG00ZVhEamQvcEFVcWlFckV4OFdmdnE0T1k2UkFmTUxldHJ4?=
 =?utf-8?B?ZjJ3WkpWS2d1cUJLTmY1UFQrWHRaeFdzMlVrNE1tS1l6aEU5b3dwcTRRUkxI?=
 =?utf-8?B?MnFRemVYeHBsdUhqTkdMT3AyTlFzelpXVHo1OTM5MHZJMndRUndtaCt4NGd6?=
 =?utf-8?B?SS9FTWxMS2VsNVluLzBpU2g0eXd1b1lVRklJcERkQk9GTCtCUTNKZFBWdEZr?=
 =?utf-8?B?bU9BWE5pdllDd0VKcnVhSThWWnJpTEFCN3MvQmRLTDMzakdqN2hRbEk2ck1x?=
 =?utf-8?B?UWhMbHRvWjlEcy9KUWhHQ3VpVGhCOVQxcWZRWm9HQno0OVh2WkhyRlphSnNG?=
 =?utf-8?B?M3lKYUdHZFZrdGlPZTN6YzdGdVU5QlNhL0EzSksySXpUUWJHSWo4a25HRmFN?=
 =?utf-8?B?KzhML3M4Mk14N002NHBSbE9yNjl3cTVaaWFKTjR3ZnY2VzZFWjhpNnh6RFdl?=
 =?utf-8?B?Q3NYc05EeFkzSjZwekdFVFBncHRBQlFnc0hONENoUkltV2l3RmhrV3FOWFlC?=
 =?utf-8?B?R1Ntb1VtdjdJWFYvZmR2bXdNVnZTb3kzTWJoN05PVk4yb0JiTkR5UjdrcFFj?=
 =?utf-8?B?VHhkVHdGR094NzdLVXJ2TFU0SWVOSi90OVNMWUNQREp0WmlrSDZKNmIvRndR?=
 =?utf-8?B?Z25aUnkzT0JOY2FIOFI4cVpoM3ZkbnYycGsvN3M2aTdjckM2YllBMUszcFE4?=
 =?utf-8?B?ZjM3Ri91VmRVaTkvczdTbHhkWWpLK2pwa0dNMHVyTTB3eDJhWG8rK0dQODF6?=
 =?utf-8?B?S3FTdCtaMWlyWFRwNkU2SlhwZzd0Y2NhRTMxaUVtWTJVaFFxUmtDRmtIQWxr?=
 =?utf-8?B?RlZJdzhkMGtjUEdhWEhVS2ZNTEF4by96WWU5aGNCdklzZzRjd2FOOFUzbndq?=
 =?utf-8?B?QVdlSWpiazZkbDZKaFozU1d4L0c1U0k0VGNYdHoxcyt0MGd0M1ZjNmlDWnZQ?=
 =?utf-8?B?L00rZThTTnFyN3RrQ3Q1cE5uL1A5RU9LS2RJTG8rT2NEclVIYnRlcThib0ox?=
 =?utf-8?B?TUJUZVNKYnU2N2VUaUcxOU0rSVV2ZXRQRTkrcGxQN3lZOWhmTmMvUnFPTktn?=
 =?utf-8?B?dXprWk9KQ01kSCtFLzUzWnd1b05YZUdMbW9IQUJCUFNqbGFnTFo0bU0vbXMy?=
 =?utf-8?B?cEo0a25SVW9vRnRvbHN2eEczcEdHUS93REcwYjRaZzN6VUV4SDBiaFdPZmEr?=
 =?utf-8?B?R0EybWQ0ZXEwenBiZTBXQlc0ZS9rT053VFhwSFJVMkJSUEEyNzVPUDVDcFh4?=
 =?utf-8?B?TXRubTVMczhsUmZWWXEzSTVNd3Yra2sxTVJpOE9vWGk0OFc2K1llUEZaZzA2?=
 =?utf-8?B?YzhlRExRcXdSMElrMXpWK0J1TkN0VUhFcXAwWXpwT0tJc3NSdkNqUXJ5RUhx?=
 =?utf-8?B?bmlCQ1VoZFR6ZnpWSDNYUG12TE0ycHlHSUpaY1NvdU5hMGJnRzlPVkUwNlZk?=
 =?utf-8?B?UmFlN3k1aVZBb3NYdDRtNDFjQjFIODlRR09aOW9vZkR4SllsUlQrdE9LdHpy?=
 =?utf-8?Q?ZtjycmeBr58LEUpjO5GpnWED0?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 690e67b9-d755-4aee-4395-08db61b63d2e
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 09:05:54.2800
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ikz8UIUMqZZteWQYa1zGBdiz6RmqqmLWZyvzx1l7SoXEqxdZs97JgC7ch3XZdT1vySkXNPU2ZnSS2CJxPEjqlw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB7487

On 30.05.2023 22:06, Olaf Hering wrote:
> Tue, 30 May 2023 10:41:07 +0200 Jan Beulich <jbeulich@suse.com>:
> 
>> Using this N would be correct afaict, but that N isn't num_online_cpus().
>> CPUs may have been offlined by the time trace buffers are initialized, so
>> without looking too closely I think it would be num_present_cpus() that
>> you're after.
> 
> In my testing num_online_cpus returns N, while num_present_cpus returns
> all available pcpus. There is also num_possible_cpus, but this appears to
> be an ARM thing.
> 
> If Xen is booted with maxcpus=, is there a way to use the remaining cpus?

In general no, because then nr_cpu_ids will be too constrained. But
note that CPU parking also comes into play here, leading to nr_cpu_ids
being set to all possible (present + hotplug) CPUs. Iirc parked CPUs
can be brought online even beyond what "maxcpus=" says (albeit I think
that's more a side effect of the parking implementation than an
intended goal).

> In case this is possible, the code needs adjustment to reinitialize the
> trace buffers. This is not an easy change. But if the remaining cpus
> will remain offline, then something like this may work:
> 
> +++ b/xen/common/trace.c
> @@ -110,7 +110,8 @@ static int calculate_tbuf_size(unsigned int pages, uint16_t t_info_first_offset)
>      struct t_info dummy_pages;
>      typeof(dummy_pages.tbuf_size) max_pages;
>      typeof(dummy_pages.mfn_offset[0]) max_mfn_offset;
> -    unsigned int max_cpus = nr_cpu_ids;
> +    unsigned int nr_cpus = num_online_cpus();
> +    unsigned int max_cpus = nr_cpus;

As said before, num_online_cpus() will under-report for the purpose
here, as CPUs may have been brought offline, and may be brought online
again later (independent of the use of "maxcpus=").

Re-initializing trace buffers when more CPUs come online might of
course be an option, but it would need doing in a race free manner.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 31 09:16:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 09:16:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541680.844652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Hvq-000488-Ga; Wed, 31 May 2023 09:15:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541680.844652; Wed, 31 May 2023 09:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Hvq-000481-Dl; Wed, 31 May 2023 09:15:50 +0000
Received: by outflank-mailman (input) for mailman id 541680;
 Wed, 31 May 2023 09:15:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VTAn=BU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q4Hvp-00047v-DU
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 09:15:49 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20613.outbound.protection.outlook.com
 [2a01:111:f400:7d00::613])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id badc4362-ff93-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 11:15:48 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DBBPR04MB7899.eurprd04.prod.outlook.com (2603:10a6:10:1e1::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.24; Wed, 31 May
 2023 09:15:45 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 09:15:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: badc4362-ff93-11ed-b231-6b7b168915f2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l/htu+xCNmgrRxE9k3b7PwwC55u5whTUqj7w3ApouiY8EVaOyUko5o1qg/N3eL6LoFEPPfenHyaXg50qH6vVTF6sTv2dbXk24Thxi5sdHfPjbm2BTGHEswY1gmT+b0NADrj147sRo61vTiY2WS2qvfhv4T0Pftq65nd9k7568H0YOjFh7PVNEJecHYt3PU7hvTmBjbkCG1GY8+2OZaMhnI/7WLQjsXSAAaQw09ne6eMCGRw6C4RR5n3XRLxmdpUL4Nh0+kRnC95RODIGbJeElcBN8Q1EoY3x4z8SILFUHXDJqj+wlk94SOIWZVj2CQcx63h192N4IazxfQqvosOtPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=nwSoEqNkQBhLrenw8fGigrS6OudYI3l6kjib7iW7vLA=;
 b=XyUxTsOOQJdbdG2qoDUenyQJSmX5mfdf3D4jedymHYXpor+s6yuiHlTMj6s7tkYa3so86CzbngQhJzSQh3r11bznvitXHtnQ5KqqDS1IecX5hZp4Zlc+tOwagXafAhTvcEdf5le9GGdBJfeIQb241chYwZC/qYNPGznT8RXjFawyV7TLkGZqYza2ArBavzCKRA9siBtrJLPN7HTvQlQbUzNMFqo7cWa0hqu5GrTKJQ9byUR6qmkcqi4atq0cm3nZGu2i9ij2tqTbiMjXzizk6DE6BYoBs6TuOLVYhr41hwpZUDcYLdReS+7uuICnQySZuRyUxdybeuLzzviSBg0rUg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nwSoEqNkQBhLrenw8fGigrS6OudYI3l6kjib7iW7vLA=;
 b=evd7MX8ILfiRwwtPYrWb3ry/L9hLcW1UseJlgmmzVhczMQ1X7MufeGky5emG7i8FGZGi0qqzBS9BmWv7Wfe+5zz8FZqTZYnZNRKbgkJXX798hC/0dH6kWqDOelcwwSuxA4XNOlhkrNzr0PA7P0fppLk+TLH5W8xiy4lu57cA5cSR7WWhautZG84zzAPvfs7B52a4S90RwgRyXxV+JoyxZ7eLKSbWWCnhdjPpgknrmCc6n7q+xQJnsSGTcVx1KOHtpxuHty7JqH5NF69ktr8ggfYPDfSPSvJhpO/gdMIGzPwFpi/58gkk4yl8RI/M6KQnITg1UUc8HuoKWU8Gdxzt6A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <96ade9a5-37c9-dd38-cb04-ed0f2c0bbd97@suse.com>
Date: Wed, 31 May 2023 11:15:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 2/3] multiboot2: parse console= and vga= options when
 setting GOP mode
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230331095946.45024-1-roger.pau@citrix.com>
 <20230331095946.45024-3-roger.pau@citrix.com>
 <2ee8a4b4-c0ac-8950-297a-e1fe97d2c494@suse.com>
 <ZHYeGOFpAtLnoQf2@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHYeGOFpAtLnoQf2@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 30.05.2023 18:02, Roger Pau Monné wrote:
> On Wed, Apr 05, 2023 at 12:15:26PM +0200, Jan Beulich wrote:
>> On 31.03.2023 11:59, Roger Pau Monne wrote:
>>> Only set the GOP mode if vga is selected in the console option,
>>
>> This particular aspect of the behavior is inconsistent with legacy
>> boot behavior: There "vga=" isn't qualified by what "console=" has.
> 
> Hm, I find this very odd: why would we fiddle with the VGA (or the GOP
> here) if we don't intend to use it for output?

Because we also need to arrange for what Dom0 possibly wants to use.
It has no way of setting the mode the low-level (BIOS or EFI) way.

>>> otherwise just fetch the information from the current mode in order to
>>> make it available to dom0.
>>>
>>> Introduce support for passing the command line to the efi_multiboot2()
>>> helper, and parse the console= and vga= options if present.
>>>
>>> Add support for the 'gfx' and 'current' vga options, ignore the 'keep'
>>> option, and print a warning message about other options not being
>>> currently implemented.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> [...] 
>>> --- a/xen/arch/x86/efi/efi-boot.h
>>> +++ b/xen/arch/x86/efi/efi-boot.h
>>> @@ -786,7 +786,30 @@ static bool __init efi_arch_use_config_file(EFI_SYSTEM_TABLE *SystemTable)
>>>  
>>>  static void __init efi_arch_flush_dcache_area(const void *vaddr, UINTN size) { }
>>>  
>>> -void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
>>> +/* Return the next occurrence of opt in cmd. */
>>> +static const char __init *get_option(const char *cmd, const char *opt)
>>> +{
>>> +    const char *s = cmd, *o = NULL;
>>> +
>>> +    if ( !cmd || !opt )
>>
>> I can see why you need to check "cmd", but there's no need to check "opt"
>> I would say.
> 
> Given this is executed without a page-fault handler in place I thought
> it was best to be safe rather than sorry, and avoid any pointer
> dereferences.

Hmm, I see. We don't do so elsewhere, though, I think.
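
As an aside for readers following the thread, a minimal standalone sketch of
what a get_option()-style helper with the semantics discussed here could look
like is below (plain C; the name and exact token-matching rules are
assumptions for illustration, not Xen's actual implementation):

```c
#include <stddef.h>
#include <string.h>

/*
 * Return a pointer just past the next occurrence of opt in cmd, matching
 * only at the start of the string or of a space/tab separated token;
 * return NULL when there is no further occurrence.
 */
static const char *get_option(const char *cmd, const char *opt)
{
    const char *s;

    if (!cmd || !opt)
        return NULL;

    for (s = cmd; (s = strstr(s, opt)) != NULL; s++) {
        if (s == cmd || s[-1] == ' ' || s[-1] == '\t')
            return s + strlen(opt);
    }

    return NULL;
}
```

With such a helper, the "last occurrence wins" handling in the hunk reduces
to calling get_option() in a loop and remembering the previous result.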

>>> @@ -807,7 +830,60 @@ void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable
>>>  
>>>      if ( gop )
>>>      {
>>> -        gop_mode = efi_find_gop_mode(gop, 0, 0, 0);
>>> +        const char *opt = NULL, *last = cmdline;
>>> +        /* Default console selection is "com1,vga". */
>>> +        bool vga = true;
>>> +
>>> +        /* For the console option the last occurrence is the enforced one. */
>>> +        while ( (last = get_option(last, "console=")) != NULL )
>>> +            opt = last;
>>> +
>>> +        if ( opt )
>>> +        {
>>> +            const char *s = strstr(opt, "vga");
>>> +
>>> +            if ( !s || s > strpbrk(opt, " ") )
>>
>> Why strpbrk() and not the simpler strchr()? Or did you mean to also look
>> for tabs, but then didn't include \t here (and in get_option())? (Legacy
>> boot code also takes \r and \n as separators, btw, but I'm unconvinced
>> of the need.)
> 
> I was originally checking for more characters here and didn't switch
> when removing those.  I will add \t.
> 
>> Also aiui this is UB when the function returns NULL, as relational operators
>> (excluding equality ones) may only be applied when both addresses refer to
>> the same object (or to the end of an involved array).
> 
> Hm, I see, thanks for spotting. So I would need to do:
> 
> s > (strpbrk(opt, " ") ?: s)
> 
> So that we don't compare against NULL.
> 
> Also the original code was wrong AFAICT, as strpbrk() returning NULL
> should result in vga=true (as it would imply console= is the last
> option on the command line).

I'm afraid I'm unsure what "original code" you're referring to here.
Iirc you really only add code to the function. And boot/cmdline.c has no
use of strpbrk() afaics.
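
Concretely, the check can be written so that a NULL strpbrk() result never
reaches the relational comparison, equivalently to the GNU ?: form suggested
above but without the extension (a sketch; vga_selected() is a hypothetical
helper name, and the exact separator set follows the \t discussion above):

```c
#include <stdbool.h>
#include <string.h>

/*
 * Does the console= option value starting at opt select "vga"?  Only
 * compare pointers when strpbrk() actually found a separator; if it did
 * not, the option is the last thing on the command line and any "vga"
 * found belongs to it.
 */
static bool vga_selected(const char *opt)
{
    const char *s = strstr(opt, "vga");
    const char *end = strpbrk(opt, " \t");   /* may be NULL */

    return s != NULL && (end == NULL || s < end);
}
```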

>>> +                vga = false;
>>> +        }
>>> +
>>> +        if ( vga )
>>> +        {
>>> +            unsigned int width = 0, height = 0, depth = 0;
>>> +            bool keep_current = false;
>>> +
>>> +            last = cmdline;
>>> +            while ( (last = get_option(last, "vga=")) != NULL )
>>
>> It's yet different for "vga=", I'm afraid: Early boot code (boot/cmdline.c)
>> finds the first instance only. Normal command line handling respects the
>> last instance only. So while "vga=gfx-... vga=keep" will have the expected
>> effect, "vga=keep vga=gfx-..." won't (I think). It is certainly fine to be
>> more flexible here, but I think this then wants to be accompanied by an update
>> to the command line doc, no matter that presently it doesn't really
>> describe these peculiarities.
> 
> But if we then describe this behavior in the documentation people
> could rely on it.  Right now this is just an implementation detail (or
> a bug I would say), and that would justify fixing boot/cmdline.c to
> also respect the last instance only.

Yes, fixing the non-EFI code is certainly an option (and then describing
the hopefully consistent result in the doc).

Jan

>> Otoh it would end up being slightly cheaper
>> to only look for the first instance here as well. In particular ...
>>
>>> +            {
>>> +                if ( !strncmp(last, "gfx-", 4) )
>>> +                {
>>> +                    width = simple_strtoul(last + 4, &last, 10);
>>> +                    if ( *last == 'x' )
>>> +                        height = simple_strtoul(last + 1, &last, 10);
>>> +                    if ( *last == 'x' )
>>> +                        depth = simple_strtoul(last + 1, &last, 10);
>>> +                    /* Allow depth to be 0 or unset. */
>>> +                    if ( !width || !height )
>>> +                        width = height = depth = 0;
>>> +                    keep_current = false;
>>> +                }
>>> +                else if ( !strncmp(last, "current", 7) )
>>> +                    keep_current = true;
>>> +                else if ( !strncmp(last, "keep", 4) )
>>> +                {
>>> +                    /* Ignore. */
>>> +                }
>>> +                else
>>> +                {
>>> +                    /* Fallback to defaults if unimplemented. */
>>> +                    width = height = depth = 0;
>>> +                    keep_current = false;
>>
>> ... this zapping of what was successfully parsed before would then not be
>> needed in any event (else I would question why this is necessary).
> 
> Hm, I don't have a strong opinion.  My expectation for command line
> options is that the last one is the one that takes effect, but it
> would simplify the change if we only cared about the first one,
> albeit that's odd behavior.
> 
> My preference would be to leave the code here respecting the last
> instance only, and attempt to fix boot/cmdline.c so it does the same.
> 
> Thanks, Roger.
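
For reference, the "gfx-<width>x<height>[x<depth>]" parsing quoted in the
hunk above can be tried standalone as follows (plain C, using the standard
strtoul() in place of Xen's simple_strtoul(); parse_gfx() is a hypothetical
wrapper added here for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Parse "gfx-<width>x<height>[x<depth>]"; on malformed input (or a
 * missing width/height) fall back to all-zero defaults, mirroring the
 * patch's behavior.  Depth alone may legitimately be 0 or unset. */
static void parse_gfx(const char *s, unsigned int *width,
                      unsigned int *height, unsigned int *depth)
{
    char *end;

    *width = *height = *depth = 0;
    if (strncmp(s, "gfx-", 4))
        return;

    *width = strtoul(s + 4, &end, 10);
    if (*end == 'x')
        *height = strtoul(end + 1, &end, 10);
    if (*end == 'x')
        *depth = strtoul(end + 1, &end, 10);

    if (!*width || !*height)
        *width = *height = *depth = 0;
}
```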



From xen-devel-bounces@lists.xenproject.org Wed May 31 09:23:40 2023
Message-ID: <a97829cc-727a-aa27-b00b-d92afb0f8863@suse.com>
Date: Wed, 31 May 2023 11:23:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
Content-Language: en-US
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com>
 <ZG4dmJuzNVUE5UIY@Air-de-Roger>
 <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com>
 <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop>
 <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com>
 <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop>
 <22f1e765-891d-ef2d-01b5-e9dfe6ca895b@suse.com>
 <alpine.DEB.2.22.394.2305301529090.44000@ubuntu-linux-20-04-desktop>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2305301529090.44000@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 31.05.2023 00:38, Stefano Stabellini wrote:
> On Fri, 26 May 2023, Jan Beulich wrote:
>> On 25.05.2023 21:24, Stefano Stabellini wrote:
>>> On Thu, 25 May 2023, Jan Beulich wrote:
>>>> On 25.05.2023 01:37, Stefano Stabellini wrote:
>>>>> On Wed, 24 May 2023, Jan Beulich wrote:
>>>>>>>> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
>>>>>>>>      modify_bars() to consistently respect BARs of hidden devices while
>>>>>>>>      setting up "normal" ones (i.e. to avoid as much as possible the
>>>>>>>>      "continue" path introduced here), setting up of the former may want
>>>>>>>>      doing first.
>>>>>>>
>>>>>>> But BARs of hidden devices should be mapped into dom0 physmap?
>>>>>>
>>>>>> Yes.
>>>>>
>>>>> The BARs would be mapped read-only (not read-write), right? Otherwise we
>>>>> let dom0 access devices that belong to Xen, which doesn't seem like a
>>>>> good idea.
>>>>>
>>>>> But even if we map the BARs read-only, what is the benefit of mapping
>>>>> them to Dom0? If Dom0 loads a driver for it and the driver wants to
>>>>> initialize the device, the driver will crash because the MMIO region is
>>>>> read-only instead of read-write, right?
>>>>>
>>>>> How does this device hiding work for dom0? How does dom0 know not to
>>>>> access a device that is present on the PCI bus but is used by Xen?
>>>>
>>>> None of these are new questions - this has all been this way for PV Dom0,
>>>> and so far we've limped along quite okay. That's not to say that we
>>>> shouldn't improve things if we can, but that first requires ideas as to
>>>> how.
>>>
>>> For PV, that was OK because PV requires extensive guest modifications
>>> anyway. We only run Linux and a few BSDs as Dom0. So, making the interface
>>> cleaner and reducing guest changes is nice-to-have but not critical.
>>>
>>> For PVH, this is different. One of the top reasons for AMD to work on
>>> PVH is to enable arbitrary non-Linux OSes as Dom0 (when paired with
>>> dom0less/hyperlaunch). It could be anything from Zephyr to a
>>> proprietary RTOS like VxWorks. Minimal guest changes for advanced
>>> features (e.g. Dom0 S3) might be OK but in general I think we should aim
>>> at (almost) zero guest changes. On ARM, it is already the case (with some
>>> non-upstream patches for dom0less PCI).
>>>
>>> For this specific patch, which is necessary to enable PVH on AMD x86 in
>>> gitlab-ci, we can do anything we want to make it move faster. But
>>> medium/long term I think we should try to make non-Xen-aware PVH Dom0
>>> possible.
>>
>> I don't think Linux could boot as PVH Dom0 without any awareness. Hence
>> I guess it's not easy to see how other OSes might. What you're after
>> looks rather like a HVM Dom0 to me, with it being unclear where the
>> external emulator then would run (in a stubdom maybe, which might be
>> possible to arrange for via the dom0less way of creating boot time
>> DomU-s) and how it would get any necessary xenstore based information.
> 
> I know that Linux has lots of Xen awareness scattered everywhere so it
> is difficult to tell what's what. Leaving the PVH entry point aside for
> this discussion, what else is really needed for a Linux without
> CONFIG_XEN to boot as PVH Dom0?
> 
> Same question from a different angle: let's say that we boot Zephyr or
> another RTOS as HVM Dom0, what is really required for the emulator to
> emulate? I am hoping that the answer is "nothing" except for maybe a
> UART.
> 
> It comes down to how much legacy stuff the guest OS expects to find.
> Legacy stuff that would normally be emulated by QEMU. I am counting on
> the fact that a modern OS doesn't expect any of the legacy stuff (e.g.
> PIIX3/Q35/E1000) if it is not advertised in the firmware tables.

And that's where I expect the problems to start: We don't really alter
things like the DSDT and SSDTs, and we also don't parse them. So we
won't know what firmware describes there. Hence we have to expect that
any legacy device might be present in the underlying platform, and
hence would also need offering either by passing through or by
emulation. Yet we can't sensibly emulate everything in Xen itself.

> If
> there is no need for QEMU, I don't know if I would call it PVH or HVM
> but either way we are good. 
> 
> Same for xenstore: there should be no need for xenstore without
> CONFIG_XEN.

Right, it may be possible to get away without xenstore.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 31 09:30:40 2023
Date: Wed, 31 May 2023 11:30:07 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 2/3] multiboot2: parse console= and vga= options when
 setting GOP mode
Message-ID: <ZHcTn9dV0EPGTg6t@Air-de-Roger>
References: <20230331095946.45024-1-roger.pau@citrix.com>
 <20230331095946.45024-3-roger.pau@citrix.com>
 <2ee8a4b4-c0ac-8950-297a-e1fe97d2c494@suse.com>
 <ZHYeGOFpAtLnoQf2@Air-de-Roger>
 <96ade9a5-37c9-dd38-cb04-ed0f2c0bbd97@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <96ade9a5-37c9-dd38-cb04-ed0f2c0bbd97@suse.com>
X-ClientProxiedBy: LO2P123CA0084.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:138::17) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SJ0PR03MB6549:EE_
X-MS-Office365-Filtering-Correlation-Id: cfd07882-7f83-4b27-5a61-08db61b9a23b
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cfd07882-7f83-4b27-5a61-08db61b9a23b
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB6423.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 09:30:12.3842
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lFGHC/XdeEicNyfogu0idZy6ExEssGdxH28S1yPDTAKowslQ7tAc0mbPUwOMECVNXIN4W9CNYXWZILYdTFw77g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6549

On Wed, May 31, 2023 at 11:15:44AM +0200, Jan Beulich wrote:
> On 30.05.2023 18:02, Roger Pau Monné wrote:
> > On Wed, Apr 05, 2023 at 12:15:26PM +0200, Jan Beulich wrote:
> >> On 31.03.2023 11:59, Roger Pau Monne wrote:
> >>> Only set the GOP mode if vga is selected in the console option,
> >>
> >> This particular aspect of the behavior is inconsistent with legacy
> >> boot behavior: There "vga=" isn't qualified by what "console=" has.
> > 
> > Hm, I find this very odd, why would we fiddle with the VGA (or the GOP
> > here) if we don't intend to use it for output?
> 
> Because we also need to arrange for what Dom0 possibly wants to use.
> It has no way of setting the mode the low-level (BIOS or EFI) way.

I understand this might be needed when Xen is booted as an EFI
application, but otherwise it should be the bootloader that takes care
of setting such a mode, as (most?) OSes are normally loaded with boot
services already exited.

I've removed the parsing of the console= option and now parse vga=
unconditionally.  We can always adjust later.

> >>> otherwise just fetch the information from the current mode in order to
> >>> make it available to dom0.
> >>>
> >>> Introduce support for passing the command line to the efi_multiboot2()
> >>> helper, and parse the console= and vga= options if present.
> >>>
> >>> Add support for the 'gfx' and 'current' vga options, ignore the 'keep'
> >>> option, and print a warning message about other options not being
> >>> currently implemented.
> >>>
> >>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >>> [...] 
> >>> --- a/xen/arch/x86/efi/efi-boot.h
> >>> +++ b/xen/arch/x86/efi/efi-boot.h
> >>> @@ -786,7 +786,30 @@ static bool __init efi_arch_use_config_file(EFI_SYSTEM_TABLE *SystemTable)
> >>>  
> >>>  static void __init efi_arch_flush_dcache_area(const void *vaddr, UINTN size) { }
> >>>  
> >>> -void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
> >>> +/* Return the next occurrence of opt in cmd. */
> >>> +static const char __init *get_option(const char *cmd, const char *opt)
> >>> +{
> >>> +    const char *s = cmd, *o = NULL;
> >>> +
> >>> +    if ( !cmd || !opt )
> >>
> >> I can see why you need to check "cmd", but there's no need to check "opt"
> >> I would say.
> > 
> > Given this is executed without a page-fault handler in place I thought
> > it was best to be safe rather than sorry, and avoid any pointer
> > dereferences.
> 
> Hmm, I see. We don't do so elsewhere, though, I think.

If you insist I can remove it; otherwise I will just leave it as-is.
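For reference, the option scanner under discussion can be sketched
along these lines (a minimal, self-contained approximation of the
patch's get_option(), not the exact Xen code; the defensive NULL
checks on both arguments are kept as in the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Return a pointer just past the next occurrence of "opt" in "cmd",
 * where a match must start the string or follow a separator, so that
 * e.g. "xconsole=" does not match "console=".  Returns NULL when no
 * further occurrence exists.  The NULL checks on both arguments
 * mirror the defensive style discussed above.
 */
static const char *get_option(const char *cmd, const char *opt)
{
    const char *s;

    if ( !cmd || !opt )
        return NULL;

    for ( s = strstr(cmd, opt); s; s = strstr(s + 1, opt) )
        /* Only accept matches at the start or after a separator. */
        if ( s == cmd || s[-1] == ' ' || s[-1] == '\t' )
            return s + strlen(opt);

    return NULL;
}
```

Callers can feed the returned pointer back in as the new "cmd" to walk
all occurrences, which is how the while loop in the patch uses it.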

> 
> >>> @@ -807,7 +830,60 @@ void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable
> >>>  
> >>>      if ( gop )
> >>>      {
> >>> -        gop_mode = efi_find_gop_mode(gop, 0, 0, 0);
> >>> +        const char *opt = NULL, *last = cmdline;
> >>> +        /* Default console selection is "com1,vga". */
> >>> +        bool vga = true;
> >>> +
> >>> +        /* For the console option the last occurrence is the enforced one. */
> >>> +        while ( (last = get_option(last, "console=")) != NULL )
> >>> +            opt = last;
> >>> +
> >>> +        if ( opt )
> >>> +        {
> >>> +            const char *s = strstr(opt, "vga");
> >>> +
> >>> +            if ( !s || s > strpbrk(opt, " ") )
> >>
> >> Why strpbrk() and not the simpler strchr()? Or did you mean to also look
> >> for tabs, but then didn't include \t here (and in get_option())? (Legacy
> >> boot code also takes \r and \n as separators, btw, but I'm unconvinced
> >> of the need.)
> > 
> > I was originally checking for more characters here and didn't switch
> > when removing those.  I will add \t.
> > 
> >> Also aiui this is UB when the function returns NULL, as relational operators
> >> (excluding equality ones) may only be applied when both addresses refer to
> >> the same object (or to the end of an involved array).
> > 
> > Hm, I see, thanks for spotting. So I would need to do:
> > 
> > s > (strpbrk(opt, " ") ?: s)
> > 
> > So that we don't compare against NULL.
> > 
> > Also the original code was wrong AFAICT, as strpbrk() returning NULL
> > should result in vga=true (as it would imply console= is the last
> > option on the command line).
> 
> I'm afraid I'm unsure what "original code" you're referring to here.
> Iirc you really only add code to the function. And boot/cmdline.c has no
> use of strpbrk() afaics.

I meant the original code in the patch; anyway, this is now gone
because I no longer parse console=.
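The NULL-comparison pitfall flagged above can also be avoided without
the GNU "?:" extension.  A minimal sketch (hypothetical helper name,
mirroring the now-dropped console= logic, not the final patch): decide
whether "vga" appears within the console= value, i.e. before the next
separator, where strpbrk() may legitimately return NULL when console=
is the last option on the command line.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical helper: does "vga" appear within this console= value,
 * i.e. before the next separator?  strpbrk() returns NULL when no
 * separator follows, and applying a relational operator to that NULL
 * would be undefined behaviour, so the NULL case is handled
 * explicitly instead of being fed into the comparison.
 */
static bool console_has_vga(const char *opt)
{
    const char *s = strstr(opt, "vga");
    const char *end = strpbrk(opt, " \t");

    if ( !s )
        return false;

    /* No separator: the value runs to the end of the command line. */
    return !end || s < end;
}
```

This also gives the behaviour Roger notes the original hunk got wrong:
a missing separator (console= being the last option) leaves vga
selected rather than deselected.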

> >>> +                vga = false;
> >>> +        }
> >>> +
> >>> +        if ( vga )
> >>> +        {
> >>> +            unsigned int width = 0, height = 0, depth = 0;
> >>> +            bool keep_current = false;
> >>> +
> >>> +            last = cmdline;
> >>> +            while ( (last = get_option(last, "vga=")) != NULL )
> >>
> >> It's yet different for "vga=", I'm afraid: Early boot code (boot/cmdline.c)
> >> finds the first instance only. Normal command line handling respects the
> >> last instance only. So while "vga=gfx-... vga=keep" will have the expected
> >> effect, "vga=keep vga=gfx-..." won't (I think). It is certainly fine to be
> >> less inflexible here, but I think this then wants accompanying by an update
> >> to the command line doc, no matter that presently it doesn't really
> >> describe these peculiarities.
> > 
> > But if we then describe this behavior in the documentation people
> > could rely on it.  Right now this is just an implementation detail (or
> > a bug I would say), and that would justify fixing boot/cmdline.c to
> > also respect the last instance only.
> 
> Yes, fixing the non-EFI code is certainly an option (and then describing
> the hopefully consistent result in the doc).

OK, let me take a look, I expect that should be fairly trivial.
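The "last instance wins" behaviour being converged on can be sketched
as follows (a simplified, self-contained illustration, not the
boot/cmdline.c code; matching here is a plain strstr() scan without
the separator checks a real parser would want):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Scan the whole command line and return the value of the LAST
 * occurrence of "opt", so "vga=keep vga=gfx-..." resolves to the
 * gfx-... setting, matching normal command line handling rather than
 * the first-instance behaviour of the early boot code.  The returned
 * pointer runs to the end of the command line; a real parser would
 * additionally cut the value off at the next separator.
 */
static const char *last_option(const char *cmd, const char *opt)
{
    const char *found = NULL, *s;

    if ( !cmd || !opt )
        return NULL;

    for ( s = strstr(cmd, opt); s; s = strstr(s + 1, opt) )
        found = s + strlen(opt);   /* remember the most recent match */

    return found;
}
```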

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 31 09:31:41 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 09:31:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541695.844686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4IBA-0007iS-IU; Wed, 31 May 2023 09:31:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541695.844686; Wed, 31 May 2023 09:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4IBA-0007iL-Es; Wed, 31 May 2023 09:31:40 +0000
Received: by outflank-mailman (input) for mailman id 541695;
 Wed, 31 May 2023 09:31:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0BFG=BU=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q4IBA-0007iB-0p
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 09:31:40 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f183e148-ff95-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 11:31:38 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 29C301FD60;
 Wed, 31 May 2023 09:31:38 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 83FD9138E8;
 Wed, 31 May 2023 09:31:37 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id VFmeHvkTd2TEAgAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 31 May 2023 09:31:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f183e148-ff95-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685525498; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mox8b8vT26odzKQfKkw1K7+Wm+UZ8eqz7llUI5QSg8k=;
	b=XHf9+pOGBRfbmMpO/UW1ImDja3XOHYMcBAGANh+plWQZYcP+04FA04kjxV8Cs9OwHXCFMJ
	F/UbrXGweXVe5Ldge+07IP1xWhvNIN32962WuzFPB9TlC1Z3H8g8jBhabC6NJghxTLEi7N
	8qiT0i0qhCHsTnxycZ/QtSwx8vM/Y8Y=
Message-ID: <7e824a95-6676-9553-4158-d434f617fcbb@suse.com>
Date: Wed, 31 May 2023 11:31:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Content-Language: en-US
To: Borislav Petkov <bp@alien8.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
 mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
 Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
 <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
 <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
 <888f860d-4307-54eb-01da-11f9adf65559@suse.com>
 <20230531083508.GAZHcGvB68PUAH7f+a@fat_crate.local>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
In-Reply-To: <20230531083508.GAZHcGvB68PUAH7f+a@fat_crate.local>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------24GFxpKYKP5BZzRRH60QZsvv"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------24GFxpKYKP5BZzRRH60QZsvv
Content-Type: multipart/mixed; boundary="------------fNJfB2Tgxipl1Nkv6U8O5K3b";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Borislav Petkov <bp@alien8.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
 mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
 Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
Message-ID: <7e824a95-6676-9553-4158-d434f617fcbb@suse.com>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
 <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
 <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
 <888f860d-4307-54eb-01da-11f9adf65559@suse.com>
 <20230531083508.GAZHcGvB68PUAH7f+a@fat_crate.local>
In-Reply-To: <20230531083508.GAZHcGvB68PUAH7f+a@fat_crate.local>

--------------fNJfB2Tgxipl1Nkv6U8O5K3b
Content-Type: multipart/mixed; boundary="------------FtQlGK5DGcgDrf3eg8wUr71y"

--------------FtQlGK5DGcgDrf3eg8wUr71y
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMzEuMDUuMjMgMTA6MzUsIEJvcmlzbGF2IFBldGtvdiB3cm90ZToNCj4gT24gV2VkLCBN
YXkgMzEsIDIwMjMgYXQgMDk6Mjg6NTdBTSArMDIwMCwgSnVlcmdlbiBHcm9zcyB3cm90ZToN
Cj4+IENhbiB5b3UgcGxlYXNlIGJvb3QgdGhlIHN5c3RlbSB3aXRoIHRoZSBNVFJSIHBhdGNo
ZXMgYW5kIHNwZWNpZnkgIm10cnI9ZGVidWciDQo+PiBvbiB0aGUgY29tbWFuZCBsaW5lPyBJ
J2QgYmUgaW50ZXJlc3RlZCBpbiB0aGUgcmF3IHJlZ2lzdGVyIHZhbHVlcyBiZWluZyByZWFk
DQo+PiBhbmQgdGhlIHJlc3VsdGluZyBtZW1vcnkgdHlwZSBtYXAuDQo+IA0KPiBUaGlzIGlz
IGV4YWN0bHkgd2h5IEkgd2FudGVkIHRoaXMgb3B0aW9uLiBBbmQgeW91J3JlIGFscmVhZHkg
cHV0dGluZyBpdA0KPiB0byBnb29kIHVzZS4gOi1QDQo+IA0KPiBGdWxsIGRtZXNnIGJlbG93
Lg0KPiANCj4gWyAgICAwLjAxMjg3OF0gbGFzdF9wZm4gPSAweDQ1MDAwMCBtYXhfYXJjaF9w
Zm4gPSAweDQwMDAwMDAwMA0KPiBbICAgIDAuMDE4MzU3XSBNVFJSIGRlZmF1bHQgdHlwZTog
dW5jYWNoYWJsZQ0KPiBbICAgIDAuMDIyMzQ3XSBNVFJSIGZpeGVkIHJhbmdlcyBlbmFibGVk
Og0KPiBbICAgIDAuMDI2MDg1XSAgIDAwMDAwLTlGRkZGIHdyaXRlLWJhY2sNCj4gWyAgICAw
LjAyOTY1MF0gICBBMDAwMC1CRkZGRiB1bmNhY2hhYmxlDQo+IFsgICAgMC4wMzMyMTRdICAg
QzAwMDAtRkZGRkYgd3JpdGUtcHJvdGVjdA0KPiBbICAgIDAuMDM3MDM5XSBNVFJSIHZhcmlh
YmxlIHJhbmdlcyBlbmFibGVkOg0KPiBbICAgIDAuMDQxMDM4XSAgIDAgYmFzZSAwMDAwMDAw
MDAwMDAwMDAgbWFzayAwMDAzRkZDMDAwMDAwMDAgd3JpdGUtYmFjaw0KDQoxNiBHQiBXQiBh
dCBhZGRyZXNzIDAuDQoNCj4gWyAgICAwLjA0NzM4M10gICAxIGJhc2UgMDAwMDAwNDAwMDAw
MDAwIG1hc2sgMDAwM0ZGRkMwMDAwMDAwIHdyaXRlLWJhY2sNCg0KMSBHQiBXQiBhdCBhZGRy
ZXNzIDE2R0IuDQoNCj4gWyAgICAwLjA1MzczMF0gICAyIGJhc2UgMDAwMDAwNDQwMDAwMDAw
IG1hc2sgMDAwM0ZGRkYwMDAwMDAwIHdyaXRlLWJhY2sNCg0KMjU2TUIgV0IgYXQgYWRkcmVz
cyAxN0dCLg0KDQpUaGlzIG1lYW5zIHBlciBkZWZhdWx0IDAtNDRmZmZmZmZmIGFyZSBXQi4N
Cg0KPiBbICAgIDAuMDYwMDc2XSAgIDMgYmFzZSAwMDAwMDAwQUUwMDAwMDAgbWFzayAwMDAz
RkZGRkUwMDAwMDAgdW5jYWNoYWJsZQ0KDQozMk1CIFVDIGF0IEFFMDAwMDAwDQoNCj4gWyAg
ICAwLjA2NjQyMV0gICA0IGJhc2UgMDAwMDAwMEIwMDAwMDAwIG1hc2sgMDAwM0ZGRkYwMDAw
MDAwIHVuY2FjaGFibGUNCg0KMjU2TUIgVUMgYXQgQjAwMDAwMDANCg0KPiBbICAgIDAuMDcy
NzY4XSAgIDUgYmFzZSAwMDAwMDAwQzAwMDAwMDAgbWFzayAwMDAzRkZGQzAwMDAwMDAgdW5j
YWNoYWJsZQ0KDQo1MTJNQiBVQyBhdCBDMDAwMDAwMA0KDQpTbyBhbiBVQyBob2xlIGF0IEFF
MDAwMDAwLUZGRkZGRkZGLg0KDQo+IFsgICAgMC4wNzkxMTRdICAgNiBkaXNhYmxlZA0KPiBb
ICAgIDAuMDgxNjM1XSAgIDcgZGlzYWJsZWQNCj4gWyAgICAwLjA4NDE1Nl0gICA4IGRpc2Fi
bGVkDQo+IFsgICAgMC4wODY2NzddICAgOSBkaXNhYmxlZA0KPiBbICAgIDAuMDg5MjAzXSB0
b3RhbCBSQU0gY292ZXJlZDogMTYzNTJNDQo+IFsgICAgMC4wOTMwMjNdIEZvdW5kIG9wdGlt
YWwgc2V0dGluZyBmb3IgbXRyciBjbGVhbiB1cA0KDQpJdCBzZWVtcyBhcyBpZiBtdHJyX2Ns
ZWFudXAoKSBkaWQgY2hhbmdlIHRoZSBNVFJSIHNldHRpbmdzLg0KDQpXaGF0IGl0IGRpZCB3
b3VsZCBoYXZlIGJlZW4gcHJpbnRlZCBpZiBwcl9kZWJ1ZygpIHdvdWxkIGhhdmUgYmVlbg0K
YWN0aXZlLiA6LSgNCg0KPiBbICAgIDAuMDk3NzM0XSAgZ3Jhbl9zaXplOiA2NEsgCWNodW5r
X3NpemU6IDY0TSAJbnVtX3JlZzogOCAgCWxvc2UgY292ZXIgUkFNOiAwRw0KPiBbICAgIDAu
MTA0ODY0XSBNVFJSIG1hcDogNiBlbnRyaWVzICgzIGZpeGVkICsgMyB2YXJpYWJsZTsgbWF4
IDIzKSwgYnVpbHQgZnJvbSAxMCB2YXJpYWJsZSBNVFJScw0KPiBbICAgIDAuMTEzMjk0XSAg
IDA6IDAwMDAwMDAwMDAwMDAwMDAtMDAwMDAwMDAwMDA5ZmZmZiB3cml0ZS1iYWNrDQo+IFsg
ICAgMC4xMTkwMzNdICAgMTogMDAwMDAwMDAwMDBhMDAwMC0wMDAwMDAwMDAwMGJmZmZmIHVu
Y2FjaGFibGUNCj4gWyAgICAwLjEyNDc3MV0gICAyOiAwMDAwMDAwMDAwMGMwMDAwLTAwMDAw
MDAwMDAwZmZmZmYgd3JpdGUtcHJvdGVjdA0KPiBbICAgIDAuMTMwNzY5XSAgIDM6IDAwMDAw
MDAwMDAxMDAwMDAtMDAwMDAwMDBhZGZmZmZmZiB3cml0ZS1iYWNrDQo+IFsgICAgMC4xMzY1
MDhdICAgNDogMDAwMDAwMDBhZTAwMDAwMC0wMDAwMDAwMGFmZmZmZmZmIHVuY2FjaGFibGUN
Cj4gWyAgICAwLjE0MjI0Nl0gICA1OiAwMDAwMDAwMTAwMDAwMDAwLTAwMDAwMDA0NGZmZmZm
ZmYgd3JpdGUtYmFjaw0KDQpUaGUgTVRSUiBtYXAgc2VlbXMgdG8gYmUgZmluZSBhc3N1bWlu
ZyB0aGUgTVRSUiB2YWx1ZXMgYmVmb3JlIHRoZSAiY2xlYW4gdXAiLg0KDQo+IFsgICAgMC4x
NDc5OTJdIHg4Ni9QQVQ6IENvbmZpZ3VyYXRpb24gWzAtN106IFdCICBXQyAgVUMtIFVDICBX
QiAgV1AgIFVDLSBXVA0KID4gWyAgICAwLjE1NTEyMl0gZTgyMDogdXBkYXRlIFttZW0gMHhh
ZTAwMDAwMC0weGFmZmZmZmZmXSB1c2FibGUgPT0+IHJlc2VydmVkDQogPiBbICAgIDAuMTYx
NjYzXSBlODIwOiB1cGRhdGUgW21lbSAweGMwMDAwMDAwLTB4ZmZmZmZmZmZdIHVzYWJsZSA9
PT4gcmVzZXJ2ZWQNCiA+IFsgICAgMC4xNjgzNThdIGU4MjA6IHVwZGF0ZSBbbWVtIDB4MTEw
MDAwMDAwLTB4MWZmZmZmZmZmXSB1c2FibGUgPT0+IHJlc2VydmVkDQogPiBbICAgIDAuMTc1
MjI3XSBXQVJOSU5HOiBCSU9TIGJ1ZzogQ1BVIE1UUlJzIGRvbid0IGNvdmVyIGFsbCBvZiBt
ZW1vcnksIGxvc2luZyANCjM4NDBNQiBvZiBSQU0uDQoNCkNsZWFuIHVwIG1lc3NlZCB3aXRo
IHRoZSBzZXR0aW5ncywgcmVzdWx0aW5nIGluIGxvc3Mgb2YgUkFNLg0KDQpEaWQgeW91IGNo
ZWNrIHdoZXRoZXIgQ09ORklHX01UUlJfU0FOSVRJWkVSX0VOQUJMRV9ERUZBVUxUIHdhcyB0
aGUgc2FtZSBpbiBib3RoDQprZXJuZWxzIHlvdSd2ZSB0ZXN0ZWQ/DQoNCg0KSnVlcmdlbg0K

--------------FtQlGK5DGcgDrf3eg8wUr71y
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------FtQlGK5DGcgDrf3eg8wUr71y--

--------------fNJfB2Tgxipl1Nkv6U8O5K3b--

--------------24GFxpKYKP5BZzRRH60QZsvv
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmR3E/kFAwAAAAAACgkQsN6d1ii/Ey/M
zggAisVIrCMiJz4WLPmjaKUQvAvIDD1xoFhCVf0VE6jtX7FC8FZVkAcm0QuoOjxlMI1ST5zjxMqR
AW2F05GHDq6rL7ufLu0j/vprfRU6qdIZhQKbOGM1AKi//VpxNpES3zfp+M7szAOgy/HclR1DI6Hb
SWDV7ze+q5bcRAb8tO0Fwl5ZqG/t4LJMFxJP5o1sPoeFZwPId9R4F5ydUBzdZxQDzQCCsKSexZmY
Fw7ubqFWUEr9QGecs6CcjtyaYiwAHxsfG+nihd0+WZT8L/4usDxVdgEOkMZZHk5pzVq31HBReTAx
zoAPOHex5tpy+obQWQwz3OdZwG02MhbI9fw0eN3Eag==
=Y4Ws
-----END PGP SIGNATURE-----

--------------24GFxpKYKP5BZzRRH60QZsvv--


From xen-devel-bounces@lists.xenproject.org Wed May 31 09:48:18 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 09:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541700.844698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4IR6-0000wd-1H; Wed, 31 May 2023 09:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541700.844698; Wed, 31 May 2023 09:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4IR5-0000wW-Uw; Wed, 31 May 2023 09:48:07 +0000
Received: by outflank-mailman (input) for mailman id 541700;
 Wed, 31 May 2023 09:48:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VTAn=BU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q4IR4-0000wQ-Hq
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 09:48:06 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on2061d.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::61d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3ce9b9f6-ff98-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 11:48:04 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by DU0PR04MB9466.eurprd04.prod.outlook.com (2603:10a6:10:35a::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.22; Wed, 31 May
 2023 09:48:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 09:48:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ce9b9f6-ff98-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Yci7bKCQWmAoF6zFdykAacaOXHxlUcltk6UFcu2SZo52yVSFbigVgD+kxxbK4W4jg5i7xuISiOpv3Oj1MGUz2ZOk+sumpqRQYEy8Uz3k7MuK9M9BtA6rrP3iSXy/LVJY2UFUSo9qj5MTleEf058ilBPAm+CvJfvXyfB2/Vb+BzQKHFXD71z+6xgfbkT9xHbm1VYrYcVugGbpYKQ0enNsZLmcidJyUCvDbfSOfFFqOhj/pDyHm0egiQeqQFFoN+6OJkbCVKOdVjhGSoqNOu/AvK0+ML0BURUQ2ycpfwOapq9GNMPvuAQ5pMmlSdq2Vsu7xdfBu98SmBq/eE07RRfeEw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yUtLSOGwQiXhd9W3H1x3+onUMYzxJI0bwxTp9FPObkI=;
 b=cYhEsI/kk0OdRcWzHifQyot3oTTpkoFIxf6zJDMScAa+QGfr80VWXrcrCUOlHIUIOS1y30Joo8bMjrCvLBD+3U7dUDjbkdEajkXkXQOhMvTq8/6nbRTp973p0pMzSSWcjEQ5cXbpAsCv8ZCb/u2l7+I4XLOJ25R82py1qxNS5boCBrUYEF3zDXYNjEKHcLNu8Saxtbmayr2hAz1thm11UkBJmKFwB1yBG1Gtn6/W6j1PV6hQy+YGNkva/tw/zrdt0I3sqKle1eY9gBaHtXV9sPU0vFRAsne5Ugz/DHCLJbXOJqni/XQrBh53jQiaj+m9b/EQZfJmQRBHa3zW7YkWYA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yUtLSOGwQiXhd9W3H1x3+onUMYzxJI0bwxTp9FPObkI=;
 b=vau8+Jjdgp4wreTKMX7RNMk4XsNFh/SgVG069Xpr0a1oMiAA+V/PRdGj1odo6cKtNu2y7U37nxS18fUuS0N+lJhNmKOwVhgpODwQjxCxq0dqSYbbC3esTC5k97Ra/Ohu9nQ7vvXsp89QpGK4xPmNrvpYl6IbNtYj0VLc7LQQZpMQ/k/BbOmCsabzLdJv1+xqc9pJd9hzL0yJPz/dZgAVU1VR6iFFgjVGFEnr2BWntxr4A0lC9T3a0oQwmtCtjRx0Y+HE1tlHKZDQLthaJqKY3863tXoFbeIOVSL3+18GMuJ6qZcR8EM9qZ6D3kFIjwu7XiKyA/VMUMDkPHzLdbACWQ==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <f742cc5c-f6de-acb2-34c3-22055f3c9b5d@suse.com>
Date: Wed, 31 May 2023 11:47:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 2/3] multiboot2: parse console= and vga= options when
 setting GOP mode
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230331095946.45024-1-roger.pau@citrix.com>
 <20230331095946.45024-3-roger.pau@citrix.com>
 <2ee8a4b4-c0ac-8950-297a-e1fe97d2c494@suse.com>
 <ZHYeGOFpAtLnoQf2@Air-de-Roger>
 <96ade9a5-37c9-dd38-cb04-ed0f2c0bbd97@suse.com>
 <ZHcTn9dV0EPGTg6t@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHcTn9dV0EPGTg6t@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0137.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9e::12) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|DU0PR04MB9466:EE_
X-MS-Office365-Filtering-Correlation-Id: be85093f-0535-4e6f-bc23-08db61bc1f9c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	0h9eGyXa8Lcq6s4xmGypJ564hpTQx8zyuMgwAj6DhmBkXwypNBRG1F5TJbXa0GGsSIEUUxSd/p8RXdqJ08OCZd9NGrQx5mazD/UPB46irAvwMRbrdEk4k0OGNCd8j3ZF2NR3b2Tfcsmj+AdBsskyxLKsZm3FtHzH3HkrYDLBYaX5Go4FZ9OJjA7LtpODiIGpSLQxTHWFdew2JsGAvkDkOy1mFoSRIqMzIxdNPGXv6UizyvTjTAZf2FlwMdc0n95FIy9w3mHVlsKVqoopH0muIZuidGjnRzF92p1KkpJSg+UEaklZcgu+0P0svvJ4aJ2ZjpVELV7mzbmtlJUEO3DIM4vaYc5KGWbvS2X5+gmzBS4fR6g8cEQwDGlubRvLsTHHwG9b4crqr43Et60EIIkjcFU2p4i/CKIW9QxZJnJjWaPXSWl795CbsBf1fkLeWKEipTueg7uI0MD9OmGm/6OMPLOAsu5lJjseRYhf9XYBy07g5qGy/dOJ4XuIigH8bPYCsg7c7NmCngWAkbPlr1WotBbF3/mdRwijWxDqIdIGdc3iEl62FAHTdUvXWG0yO6Az+7eRYg299jStabzXa0gffnwq2wgOScm9J3MeW/QyhGzvDtHbOi+uw/H7t0p8VLfpzu1qCjApCDCPCaaf54j2ag==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR04MB6560.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(366004)(346002)(39860400002)(396003)(376002)(136003)(451199021)(5660300002)(31686004)(54906003)(8676002)(8936002)(41300700001)(478600001)(6486002)(6916009)(6512007)(53546011)(186003)(66556008)(4326008)(66946007)(316002)(66476007)(26005)(6506007)(2616005)(2906002)(38100700002)(31696002)(86362001)(36756003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?RU50SGk1YWFMcDhSUVppUGF1WmNiR0pSdWduSit3ZTRmSDhxazFXTGlzUUJG?=
 =?utf-8?B?SHlrWjRiU0srRXc1R0xTNUxJMitwZ0lMbmZPdEhLTktDWi9TM1N2WHhad2o0?=
 =?utf-8?B?UFBpU2ZDOUlRRThkV0JSd1RoTHY2RGVzSjFzT2VCNkxyNlArS3VobXlETFNl?=
 =?utf-8?B?REgzV2FMSnZnMCtuUFkvZ3h3Q3MwSlB5SEorbUxrT2JNWkc2a3lCLzg4c2Rj?=
 =?utf-8?B?V2pMMk9SMzVEV29sWENPbTAzRE9kTnU5YkswTVVZejZ2SERrbXl4NFVIaElP?=
 =?utf-8?B?V3ZGbjNBbVR1OC9EYkR2TkVPckR0eDFOWWlTZVJkbWYzYWdGcU5hNm51a3ds?=
 =?utf-8?B?K1QySXg0TTZrcjZGTHhPazhBb3ZOUDZIQ3ZUbEcyb0lhU20yOGprYTF3Yk56?=
 =?utf-8?B?WkRLRUZHNGVSVXo4U2tJRWdLY3YzaGJ0WW9WaDRVTlc4SzFQTDNiQWM1bWN4?=
 =?utf-8?B?aE5jaXlnOVg0bVRIMVRYcmpWV2poMGV2Ny9DL1NOMGpEdm4xZHRiS3FTbVhU?=
 =?utf-8?B?bHY4VjZ4VFFxNHhzbXQ1WDhtUmRzUzVuWGh1RnZBQ3U1N2w4Zmtubm54S2xw?=
 =?utf-8?B?OWNPdnE3dldPV3F5SGdPZEY4TCsyWWdMUWJ3aWgrNlJnT1EwNWM5M1ZYVkdE?=
 =?utf-8?B?WEZLUXhyMGtEa1RwMUhuelAyYVN5dG1jQTZGWSt4eVAyL0JKWktYOUZGaWZy?=
 =?utf-8?B?dTZwWUtXTjhRQVY0LzRTN2VLWGwyNFBKZ21oTitXNmlRVnRYYzJzY0pOTjVi?=
 =?utf-8?B?eU5MdjRuUk1DN2R3QWZGRWJVbGxTQXJMTG40djRKbE1tN3lZaC9DdzV4MDM3?=
 =?utf-8?B?RzlxTE55cldTZUxUeVg5UU5OM2RGUXBvZlF6cVcvUjY3Z3R0anhPQ2FQc1Yy?=
 =?utf-8?B?cHJyY1FCK2tjbzBKdTN4N0dNVVRwTG93QWt3dEpJeW1zNis1VFdTTFh0Q0pL?=
 =?utf-8?B?QUFRblpJZmN0QVJPWjdWdU1xeEFYNFBQSEZqVnAwc2NrQW1yVVhaWGhSZFRu?=
 =?utf-8?B?OHZqVXZHeHBrbUp0Ykp4bTJEVGNMVGFEaStxaGJicGVJM0ZWZS9WOHBTa1Nv?=
 =?utf-8?B?dktEWlVXakJKejVOZnkvc2h5ZzQxTElOK0c5TUxXUjk5eUVOTHRRMkhHYmpL?=
 =?utf-8?B?UENic1Z1bDlhaWxPM3FBUDdpMysraTdaL1FCVm5EcTg5dFBtMGVqR0V0Nkln?=
 =?utf-8?B?R0RRc0VCS0pqbmdTdDhvcm5IcTQzOXErSk9vaVkxZmU5TTVMNkc3K1J5SnB0?=
 =?utf-8?B?S21VQ25LRGlTclZCcENwbCs4UVpLbWdSSW9wN09CdXVML0dMb25NazcyNkNQ?=
 =?utf-8?B?OGpMTk5lNUVydWNndGlDVkRZc1lXblUxVjhJYnZhdUpaM2hoVGZIOVBFMlBM?=
 =?utf-8?B?dDFDSWsyVXRGMGxwV0lJTkpzV1NQWUgrSGVxUEV0QXpSZi9scENheHlvODBq?=
 =?utf-8?B?OUhubyt2RThMbmxkbDY0YkdVejVrODYwZkRlaitqeVFTUW1ydEdLYkl1NmZy?=
 =?utf-8?B?ZE11K2xUSFBzY0U4MnpwN21VZ0lrYkNiYmRqSEExemEvRlVzam1SUm9XSFFt?=
 =?utf-8?B?cFhhbWZ3SUlIcnFOVlhCcFpsNEdrYkFWSzFUeUZXWUwvckNQZjJyU0ZOdXps?=
 =?utf-8?B?UFZPQmk3dDV3RlB2blR0akc5TEs5b0NVaTZtbngwaUk1SDBEWEVaRU56ejJY?=
 =?utf-8?B?VnVWMzBKMjdSLzhkT09BZjJQbHZQMktLOFFmYU01V0x0UE9GSkdWTXhQN0VE?=
 =?utf-8?B?UnVaYWN0OERldGVPbDBoU2M2RzYxYzlwVnFQK0MwbkZydDB0TmdOcGlMMGhJ?=
 =?utf-8?B?Ujc4YmtWMUwxK29ZQXNUQ1lRM0IvaXRGSnlLcXJtb3pqamFLcWxUdTFZWTlj?=
 =?utf-8?B?VTJzOU9RUXJxOE8weTZ1RlNJVHBhbWtjUFFJbUtSYndSZGt6aWhzZ0RwOWNt?=
 =?utf-8?B?TmdnTlZLeCsxdDY4VFZ3UTlBL01vMkFySEFIU0tCRUNyUkV6a1hmUlY1RGIx?=
 =?utf-8?B?NFNRNFlDU3NOd01ydlhpbXdrUGMyRHZRekRWc2pncUR6blZDZm9ra1pjQzZ4?=
 =?utf-8?B?WFVtaUlWV1VydGVPVnh2cXppMU1xbWVJc1Ivd3RuZVhRZ1pUZ3V1R3VlaW5G?=
 =?utf-8?Q?ApVN1k9hAm0LAI6lQ5d2BCNcM?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: be85093f-0535-4e6f-bc23-08db61bc1f9c
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 09:48:01.6341
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0A0dPq34tJf9H6FUrqb/2aioGw5QH8kB1GibkEor8EI4MlqBykrJM/y/Y9oelTROar2pzI+ZflTUVPA1cQkcSQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR04MB9466

On 31.05.2023 11:30, Roger Pau Monné wrote:
> On Wed, May 31, 2023 at 11:15:44AM +0200, Jan Beulich wrote:
>> On 30.05.2023 18:02, Roger Pau Monné wrote:
>>> On Wed, Apr 05, 2023 at 12:15:26PM +0200, Jan Beulich wrote:
>>>> On 31.03.2023 11:59, Roger Pau Monne wrote:
>>>>> Only set the GOP mode if vga is selected in the console option,
>>>>
>>>> This particular aspect of the behavior is inconsistent with legacy
>>>> boot behavior: There "vga=" isn't qualified by what "console=" has.
>>>
>>> Hm, I find this very odd, why would we fiddle with the VGA (or the GOP
>>> here) if we don't intend to use it for output?
>>
>> Because we also need to arrange for what Dom0 possibly wants to use.
>> It has no way of setting the mode the low-level (BIOS or EFI) way.
> 
> I understand this might be needed when Xen is booted as an EFI
> application, but otherwise it should be the bootloader that takes care
> of setting such mode, as (most?) OSes are normally loaded with boot
> services already exited.

The bootloader doing this is a quirk imo. In the Linux case this implies
knowing the inner structure of the binary to be booted, to communicate
the necessary information, plus peeking into the kernel's command line.

Furthermore I wasn't referring to the EFI-with-bootloader case, but to
the legacy BIOS one plus the (mentioned by you) EFI-application one.
Even the MB2 protocol allows the bootloader to hand over with boot
services not exited yet, iirc, so even in that case Xen would be in a
position to use boot service functions (from efi_multiboot2()).

Jan
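
For illustration only, a minimal sketch (names hypothetical, not the
actual Xen code) of the kind of "console=" check being discussed, i.e.
only acting on the GOP/VGA mode when "vga" appears among the selected
consoles:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Return true if the comma-separated console= option list names "vga".
 * Hypothetical helper mirroring the behaviour under discussion. */
static bool console_selects_vga(const char *console_opt)
{
    const char *p = console_opt;

    while ( p && *p )
    {
        size_t len = strcspn(p, ",");

        if ( len == 3 && strncmp(p, "vga", 3) == 0 )
            return true;
        p += len;
        if ( *p == ',' )
            p++;
    }
    return false;
}
```

Under this sketch, "console=com1,vga" would set the mode while plain
"console=com1" would not - which is exactly the behavioural difference
from legacy "vga=" handling that the thread is debating.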


From xen-devel-bounces@lists.xenproject.org Wed May 31 09:58:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 09:58:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541704.844708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Ib7-0002Sc-Ve; Wed, 31 May 2023 09:58:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541704.844708; Wed, 31 May 2023 09:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Ib7-0002SV-ST; Wed, 31 May 2023 09:58:29 +0000
Received: by outflank-mailman (input) for mailman id 541704;
 Wed, 31 May 2023 09:58:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5Ad7=BU=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1q4Ib6-0002SP-KH
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 09:58:28 +0000
Received: from mail.skyhub.de (mail.skyhub.de [5.9.137.197])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id afc9d9dd-ff99-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 11:58:26 +0200 (CEST)
Received: from zn.tnic (pd9530d32.dip0.t-ipconnect.de [217.83.13.50])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id A190C1EC04CB;
 Wed, 31 May 2023 11:58:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afc9d9dd-ff99-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1685527105;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=hV5YBOIQSuwVXJ+8Bq//nWeBhs0OtlO6df6O2vn8m1U=;
	b=Z5dl/Esl28XcJGYLQMWLB7mDNT4HnJmuKuH7Q7rBtdBM4dZsUgVpe+otHybYon2eveTgiP
	AsTaFAFSoetQQLdUoW1I/5GajpeRc7w7d27DQ5wtBU0egue/Hk38/BctN3oEqzR61+V4pC
	P8u8HorX55f1Ee47CpUpGwCZiVcGwmM=
Date: Wed, 31 May 2023 11:58:21 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
	mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230531095821.GBZHcaPUvp8jo/IwV7@fat_crate.local>
References: <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
 <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
 <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
 <888f860d-4307-54eb-01da-11f9adf65559@suse.com>
 <20230531083508.GAZHcGvB68PUAH7f+a@fat_crate.local>
 <7e824a95-6676-9553-4158-d434f617fcbb@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <7e824a95-6676-9553-4158-d434f617fcbb@suse.com>

On Wed, May 31, 2023 at 11:31:37AM +0200, Juergen Gross wrote:
> What it did would have been printed if pr_debug() had been
> active. :-(

Lemme turn those into pr_info(). pr_debug() is nuts.

> Did you check whether CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT was the same in both
> kernels you've tested?

Yes, it is enabled.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed May 31 10:16:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 10:16:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541709.844719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Is1-0004zY-Ff; Wed, 31 May 2023 10:15:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541709.844719; Wed, 31 May 2023 10:15:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Is1-0004zR-Cn; Wed, 31 May 2023 10:15:57 +0000
Received: by outflank-mailman (input) for mailman id 541709;
 Wed, 31 May 2023 10:15:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X43w=BU=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q4Irz-0004zL-VN
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 10:15:56 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 202efdb9-ff9c-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 12:15:53 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2af30a12e84so60066441fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 03:15:53 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 x8-20020a2e8808000000b002a9f022e8bcsm3163798ljh.65.2023.05.31.03.15.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 03:15:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 202efdb9-ff9c-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685528153; x=1688120153;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=yEGi2e5xZC0+UnMTftABv8dWt3TREaq1MMTHsonbF2c=;
        b=bu21Y4KLu0PE07UKpLyopbkfRpqLHOU4JfqUjHQ0KD9EeW2onKP/xHudW+FusxNMit
         pBRs4/mBD1r4dvjUHaxkpQ+bPQifuCPUfdzCW9RD5iK79BzWPtOpk/T1uJCKgzrYJXNn
         QEH8PnBT2lbTWzu7K66QkTlrx5lZaK3Z4CFRxPRtFQptm7QfXuaF38ZoZx5WZEIhp3wC
         revtv175ywg+G02YooTDz0RSfkny69NoMqGkMUI/iVgxuOBZyD/HPzQzVyuCHCiz/10D
         vuH9t+mHvIAUQRTzshxehHoIA2W5yti3+tznqGMuTvG/fB6u+TvtWhr4jYpYKPwhssTJ
         bWew==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685528153; x=1688120153;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=yEGi2e5xZC0+UnMTftABv8dWt3TREaq1MMTHsonbF2c=;
        b=kghPtzHB6Fl8GCxDtF1lDCg5DFtRA9Ab7RH4Y1MPYvDj6CPS+oAfvvHAKE6au6ys/E
         6+TQFd5/feT9Ua0b3t91LPXYHmvGoWPFa8OWUrzSTpJN6KtV7w+5DHZrkJ10pbfsKwS8
         dAKVLbmQ6QF3tOZw3VbPxMDut3h8qihactpn7HqGmhSLpne4y377B9nMhHBAUDvNI3R/
         59GuzvJzByNBy/QlWWWuetZzHYOHwz9hrRJxbs1USL8D7qtEzE+kB5bUhvJvYWj2A61Q
         70OzG1GXDFLyiFgrV8vat4NS/EwOQIXCgIuuhIUKAqDJppyjejFOexkBhkl1DTlbQB3+
         6l3g==
X-Gm-Message-State: AC+VfDwvD05uz26sSVzrF6+JdMLByQyvJr5IBfNDp2fBBG01QT41/rZF
	UgnbOzBxbYL57LhaDCw9xHg=
X-Google-Smtp-Source: ACHHUZ6ayb9I12LWn4FFY0stVngGqku4LfZItIQLrsnKo+VrEaiGPynitSyJFAuxQgVVUnDIHl22oQ==
X-Received: by 2002:a2e:8245:0:b0:2ab:365b:dc7d with SMTP id j5-20020a2e8245000000b002ab365bdc7dmr2488048ljh.27.1685528153075;
        Wed, 31 May 2023 03:15:53 -0700 (PDT)
Message-ID: <0816b0790f85652ec688a21476bbb4bba05b5735.camel@gmail.com>
Subject: Re: [PATCH v6 4/6] xen/riscv: introduce trap_init()
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, 
 xen-devel@lists.xenproject.org
Date: Wed, 31 May 2023 13:15:51 +0300
In-Reply-To: <86dc868a-eda9-9de6-0430-26da6f5ad465@suse.com>
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
	 <f4c4b711106283e26536105105892b93bb39ea3e.1685359848.git.oleksii.kurochko@gmail.com>
	 <86dc868a-eda9-9de6-0430-26da6f5ad465@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.2 (3.48.2-1.fc38) 
MIME-Version: 1.0

On Tue, 2023-05-30 at 17:44 +0200, Jan Beulich wrote:
> On 29.05.2023 14:13, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/traps.c
> > +++ b/xen/arch/riscv/traps.c
> > @@ -12,6 +12,31 @@
> >  #include <asm/processor.h>
> >  #include <asm/traps.h>
> > 
> > +#define cast_to_bug_frame(addr) \
> > +    (const struct bug_frame *)(addr)
> 
> I can't find a use for this; should it be dropped or moved to some
> later patch? In any event, if it's intended to survive, it needs yet
> another pair of parentheses.
You are right. It should be a part of the next patch.
Thanks.
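
A minimal sketch of why the extra pair of parentheses matters (struct
layout hypothetical, only the macro shape is taken from the quoted
patch): without the outer parentheses, a postfix operator applied at
the expansion site binds to the macro argument rather than to the
cast's result.

```c
#include <assert.h>

/* Dummy stand-in for the real structure. */
struct bug_frame { int id; };

/* As posted: (const struct bug_frame *)(addr)
 * Fully parenthesized, as requested in the review: */
#define cast_to_bug_frame(addr) ((const struct bug_frame *)(addr))

int frame_id(const void *p)
{
    /* With the unparenthesized variant, cast_to_bug_frame(p)->id would
     * expand to (const struct bug_frame *)((p)->id), i.e. member access
     * on a void pointer -- a compile error.  The fully parenthesized
     * form expands to ((const struct bug_frame *)(p))->id, as intended. */
    return cast_to_bug_frame(p)->id;
}
```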

> 
> > +/*
> > + * Initialize the trap handling.
> > + *
> > + * The function is called after MMU is enabled.
> > + */
> > +void trap_init(void)
> 
> Is this going to be used for secondary processors as well? If not,
> it will want to be __init.
I think I'll use it for secondary processors.

> 
> > +{
> > +    /*
> > +     * When the MMU is off, the addr variable will be a physical
> > +     * address; otherwise it will be a virtual address.
> > +     *
> > +     * It will work fine as:
> > +     *  - access to addr is PC-relative.
> > +     *  - -nopie is used. -nopie really suppresses the compiler
> > +     *    emitting code going through .got (which then indeed would
> > +     *    mean using absolute addresses).
> > +     */
> 
> Is all of this comment still relevant now that you're running with
> the MMU already enabled?
Not really. I think the comment above trap_init() will be enough.
I'll remove this comment.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Wed May 31 10:35:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 10:35:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541715.844739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4JAj-0007ko-Dw; Wed, 31 May 2023 10:35:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541715.844739; Wed, 31 May 2023 10:35:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4JAj-0007kd-BK; Wed, 31 May 2023 10:35:17 +0000
Received: by outflank-mailman (input) for mailman id 541715;
 Wed, 31 May 2023 10:35:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fDuq=BU=bounce.vates.fr=bounce-md_30504962.647722de.v1-5c5b0ecf977c4fcaa1f18deb54eb672d@srs-se1.protection.inumbo.net>)
 id 1q4JAi-0007jd-3z
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 10:35:16 +0000
Received: from mail135-21.atl141.mandrillapp.com
 (mail135-21.atl141.mandrillapp.com [198.2.135.21])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d226c6ef-ff9e-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 12:35:12 +0200 (CEST)
Received: from pmta14.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail135-21.atl141.mandrillapp.com (Mailchimp) with ESMTP id
 4QWQdB6hWpz1XLF77
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 10:35:10 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 5c5b0ecf977c4fcaa1f18deb54eb672d; Wed, 31 May 2023 10:35:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d226c6ef-ff9e-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1685529310; x=1685789810; i=yann.dirson@vates.fr;
	bh=wnbQ9exsbh9Kc2yqTPXKL1rlSkUhURrZSz4zCQzH+Vo=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=Tx/M6eHwprd474aqgqQ9ume86sYe6l78PoOyZ4I/XUGcP3LdPfGVuLDvcicLbZhlP
	 /YU4rJ+Ggt64IAkk9eYfTxz2NnXIXDzsBxOGywAMuSq3GxdqUrCmtxwJAHpHjfflHR
	 T3hW654L+Y4CvMBUB/d4IaYLt/8l9r2Hktre4abg=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1685529310; h=From : 
 Subject : To : Cc : Message-Id : In-Reply-To : References : Date : 
 MIME-Version : Content-Type : Content-Transfer-Encoding : From : 
 Subject : Date : X-Mandrill-User : List-Unsubscribe; 
 bh=wnbQ9exsbh9Kc2yqTPXKL1rlSkUhURrZSz4zCQzH+Vo=; 
 b=aRQ1xCYORgdfXbH8v1OUaaHcsVqG8PygZghA15Ia+qR2S1MJghTdiSo9bOv3ron896AXVQ
 D9d5MjTGpZ94DfUT7F3/J/G5hbfsH5qLk1l8LZ+RWQ9npDMbXn0jFkk7PAQIqstvx+JBPbGq
 nBWFFOR/7i3jKiaR6SOJQ1tR4UOZk=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?[PATCH=201/1]=20doc:=20clarify=20intended=20usage=20of=20~/control/=20xenstore=20path?=
X-Mailer: git-send-email 2.30.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9f954236-9dc1-4070-9d34-807dea7ccea1
X-Bm-Transport-Timestamp: 1685529309459
To: xen-devel@lists.xenproject.org
Cc: Yann Dirson <yann.dirson@vates.fr>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-Id: <20230531103427.1551719-2-yann.dirson@vates.fr>
In-Reply-To: <20230531103427.1551719-1-yann.dirson@vates.fr>
References: <20230531103427.1551719-1-yann.dirson@vates.fr>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.5c5b0ecf977c4fcaa1f18deb54eb672d?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230531:md
Date: Wed, 31 May 2023 10:35:10 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Signed-off-by: Yann Dirson <yann.dirson@vates.fr>
---
 docs/misc/xenstore-paths.pandoc | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index f07ef90f63..5501033893 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -432,6 +432,35 @@ by udev ("0") or will be run by the toolstack directly ("1").
 
 ### Platform Feature and Control Paths
 
+#### ~/control = "" []
+
+Directory to hold feature and control paths.  This directory is not
+guest-writable; only the toolstack is allowed to create new child
+nodes under it.
+
+Children of this node can have one of several types:
+
+* platform features: using name pattern `platform-feature-*`, they may
+  be set by the toolstack to inform the guest, and are not writable by
+  the guest.
+
+* guest features: using name pattern `feature-*`, they may be created
+  by the toolstack with an empty value (`""`) and should be made
+  writable by the guest, which can then advertise to the toolstack its
+  non-usage or usage of the feature with values `"0"` and `"1"`
+  respectively.  The lack of an update by the guest can be interpreted
+  by the toolstack as the lack of supporting software (PV driver,
+  guest agent, ...) in the guest.
+
+* control nodes: using any name not matching the above patterns, they
+  are used by the toolstack or by the guest to signal a specific
+  condition to the other end, which is expected to watch the node and
+  react to changes.
+
+Note: the presence of a control node in itself advertises the
+underlying toolstack feature; it is not necessary to add an extra
+platform-feature node for such cases.
+
 #### ~/control/sysrq = (""|COMMAND) [w]
 
 This is the PV SysRq control node. A toolstack can write a single character
-- 
2.30.2



Yann Dirson | Vates Platform Developer

XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com
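
As a usage sketch of the convention the patch documents, assuming the
standard `xenstore-write`/`xenstore-chmod`/`xenstore-read` CLI tools;
the domain id (12) and feature name (`feature-example`) are
hypothetical:

```shell
#!/bin/sh
# Toolstack side: create a guest-feature node with an empty value,
# then make it writable by the guest (domain 12).
xenstore-write /local/domain/12/control/feature-example ""
xenstore-chmod /local/domain/12/control/feature-example n0 b12

# Guest side: advertise that the feature is supported and in use.
xenstore-write control/feature-example "1"

# Toolstack side: read back the advertisement; an unchanged ""
# suggests no supporting software (PV driver, agent) in the guest.
xenstore-read /local/domain/12/control/feature-example
```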


From xen-devel-bounces@lists.xenproject.org Wed May 31 10:35:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 10:35:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541714.844729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4JAc-0007Um-7v; Wed, 31 May 2023 10:35:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541714.844729; Wed, 31 May 2023 10:35:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4JAc-0007Uf-4U; Wed, 31 May 2023 10:35:10 +0000
Received: by outflank-mailman (input) for mailman id 541714;
 Wed, 31 May 2023 10:35:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZHTS=BU=bounce.vates.fr=bounce-md_30504962.647722d9.v1-5d095d79e9f04458b3251c8f8a0efadc@srs-se1.protection.inumbo.net>)
 id 1q4JAb-0007UZ-Bw
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 10:35:09 +0000
Received: from mail5.us4.mandrillapp.com (mail5.us4.mandrillapp.com
 [205.201.136.5]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cf0de643-ff9e-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 12:35:07 +0200 (CEST)
Received: from pmta15.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail5.us4.mandrillapp.com (Mailchimp) with ESMTP id 4QWQd54qYczDRYW5k
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 10:35:05 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 5d095d79e9f04458b3251c8f8a0efadc; Wed, 31 May 2023 10:35:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf0de643-ff9e-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.fr;
	s=mandrill; t=1685529305; x=1685789805; i=yann.dirson@vates.fr;
	bh=6esGehIgFwWl9LvAG7GFoLkEcUZ39SrurMvnn+v971U=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=BcgoHN3CcLVgwmHQqLVpyZZY5G1VHfCz0PTPQNEIDM10oRevdP73a4KVLxo1xPfkG
	 JiIvGoGbFiJUMunxvD8vPFbc0IZ4dV0sgqLl3D3+H3ZvFWOddtsoZhGVk08deMkCT7
	 w20n84Cs+aJh9HISOy94+oSuDAAJpgrZHaHUYkmQ=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com; 
 i=@mandrillapp.com; q=dns/txt; s=mandrill; t=1685529305; h=From : 
 Subject : To : Cc : Message-Id : Date : MIME-Version : Content-Type : 
 Content-Transfer-Encoding : From : Subject : Date : X-Mandrill-User : 
 List-Unsubscribe; bh=6esGehIgFwWl9LvAG7GFoLkEcUZ39SrurMvnn+v971U=; 
 b=TOY2Gf0rGNuiV6pm5lTIHRqL36lasWRT1NC5JXC6Ni9iV+de2Kq67WDwjlee0K3q95k5Hq
 7kUp6sYFL2629fzxl+eEcLZlC0LgByP8GJ1t1cZVIzY9Fm4jl0hRNTCsU7kNrwZ8Kd5JmxNA
 3X1eYo2Np69aSTpyfV3sCdtoU3KRI=
From: Yann Dirson <yann.dirson@vates.fr>
Subject: =?utf-8?Q?[PATCH=200/1]=20RFC:=20clarify=20intended=20usage=20of=20~/control/=20xenstore=20path?=
X-Mailer: git-send-email 2.30.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 9f954236-9dc1-4070-9d34-807dea7ccea1
X-Bm-Transport-Timestamp: 1685529304177
To: xen-devel@lists.xenproject.org
Cc: Yann Dirson <yann.dirson@vates.fr>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Message-Id: <20230531103427.1551719-1-yann.dirson@vates.fr>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.5d095d79e9f04458b3251c8f8a0efadc?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20230531:md
Date: Wed, 31 May 2023 10:35:05 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

This proposal, spurred by a discrepancy in how different toolstacks
handle the control nodes, tries to summarize what I understand to be
the spirit of ~/control/, from its children already described in the
xenstore-paths document and from the libxl behaviour.

Yann Dirson (1):
  doc: clarify intended usage of ~/control/ xenstore path

 docs/misc/xenstore-paths.pandoc | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

-- 
2.30.2



Yann Dirson | Vates Platform Developer

XCP-ng & Xen Orchestra - Vates solutions
w: vates.fr | xcp-ng.org | xen-orchestra.com


From xen-devel-bounces@lists.xenproject.org Wed May 31 10:40:51 2023
Message-ID: <f3bf3a483f7282eb365cf04f27e1c7a4e84f5aae.camel@gmail.com>
Subject: Re: [PATCH v6 5/6] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, 
 xen-devel@lists.xenproject.org
Date: Wed, 31 May 2023 13:40:42 +0300
In-Reply-To: <92580a6f-e97a-c4a9-435c-bd95a84d4306@suse.com>
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
	 <bd2dd42c778714f25e7e98f74ff5e98eee1cd0a5.1685359848.git.oleksii.kurochko@gmail.com>
	 <92580a6f-e97a-c4a9-435c-bd95a84d4306@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.2 (3.48.2-1.fc38) 
MIME-Version: 1.0

On Tue, 2023-05-30 at 18:00 +0200, Jan Beulich wrote:
> On 29.05.2023 14:13, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/include/asm/bug.h
> > +++ b/xen/arch/riscv/include/asm/bug.h
> > @@ -7,4 +7,32 @@
> >  #ifndef _ASM_RISCV_BUG_H
> >  #define _ASM_RISCV_BUG_H
> >
> > +#ifndef __ASSEMBLY__
> > +
> > +#define BUG_INSTR "ebreak"
> > +
> > +/*
> > + * The base instruction set has a fixed length of 32-bit naturally
> > + * aligned instructions.
> > + *
> > + * There are extensions of variable length (where each instruction
> > + * can be any number of 16-bit parcels in length) but they aren't
> > + * used in Xen and the Linux kernel (where these definitions were
> > + * taken from).
>
> This, at least to some degree, looks to contradict ...
>
> > + * Compressed ISA is used now, where the instruction length is 16 bit
> > + * and the 'ebreak' instruction, in this case, can be either 16 or
> > + * 32 bit (depending on whether compressed ISA is used or not).
>
> ... this. Plus there already is CONFIG_RISCV_ISA_C, so compressed insns
> can very well be used in Xen.
Thanks. You are right. The comment should be updated.

>
> > @@ -114,7 +116,134 @@ static void do_unexpected_trap(const struct cpu_user_regs *regs)
> >      die();
> >  }
> >
> > +void show_execution_state(const struct cpu_user_regs *regs)
> > +{
> > +    printk("implement show_execution_state(regs)\n");
> > +}
> > +
> > +/*
> > + * TODO: change early_printk's function to early_printk with format
> > + *       when s(n)printf() will be added.
>
> What is this comment about? I don't think I understand what it says
> needs doing.
I meant that it would be nice to introduce a second version of the
early_printk() function that takes a 'format' argument, as printk() does.

However, the comment no longer makes sense, because all early_printk()
calls in do_bug_frame() have been changed to printk().

I will therefore update the comment.

>
> > + * Probably the TODO won't be needed as generic do_bug_frame()
> > + * has been introduced and current implementation will be replaced
> > + * with generic one when panic(), printk() and find_text_region()
> > + * (virtual memory?) will be ready/merged
> > + */
> > +int do_bug_frame(const struct cpu_user_regs *regs, vaddr_t pc)
>
> While it's going to be the maintainers to judge, I continue to be
> unconvinced that introducing copies of common functions (also in
> patch 1) is a good idea.
Generally I agree with you, but as I mentioned before and in the comment
above do_bug_frame(), the reason not to use the generic implementation of
do_bug_frame() now is that it would require enabling compilation of the
whole of Xen's common code (there is no way to enable only the parts
needed by this one function).

I think that after this patch series I'll enable compilation of Xen's
common code, and once that is merged this do_bug_frame() can be removed.

>
> > +{
> > +    const struct bug_frame *start, *end;
> > +    const struct bug_frame *bug = NULL;
> > +    unsigned int id = 0;
> > +    const char *filename, *predicate;
> > +    int lineno;
> > +
> > +    static const struct bug_frame* bug_frames[] = {
>
> Nit: * and blank want to swap places. I would also expect another
> "const".
Thanks. I'll update that.

>
> > +static uint32_t read_instr(unsigned long pc)
> > +{
> > +    uint16_t instr16 = *(uint16_t *)pc;
> > +
> > +    if ( GET_INSN_LENGTH(instr16) == 2 )
> > +        return (uint32_t)instr16;
> > +    else
> > +        return *(uint32_t *)pc;
> > +}
>
> As long as this function is only used on Xen code, it's kind of okay.
> There you/we control whether code can change behind our backs. But as
> soon as you might use this on guest code, the double read is going to
> be a problem (I think; I wonder how hardware is supposed to deal with
> the situation: Maybe they indeed fetch in 16-bit quantities?).
I'll check how the hardware fetches instructions.

I am trying to figure out why the double read can be a problem. It
looks pretty safe to read 16 bits (they are available for any
instruction length, given that the minimal instruction length is 16
bits), then check the length of the instruction and, if it is a
32-bit instruction, read it again as a uint32_t.
>
> > --- a/xen/arch/riscv/xen.lds.S
> > +++ b/xen/arch/riscv/xen.lds.S
> > @@ -40,6 +40,16 @@ SECTIONS
> >      . = ALIGN(PAGE_SIZE);
> >      .rodata : {
> >          _srodata = .;          /* Read-only data */
> > +        /* Bug frames table */
> > +       __start_bug_frames = .;
> > +       *(.bug_frames.0)
> > +       __stop_bug_frames_0 = .;
> > +       *(.bug_frames.1)
> > +       __stop_bug_frames_1 = .;
> > +       *(.bug_frames.2)
> > +       __stop_bug_frames_2 = .;
> > +       *(.bug_frames.3)
> > +       __stop_bug_frames_3 = .;
> >          *(.rodata)
> >          *(.rodata.*)
> >          *(.data.rel.ro)
>
> Nit: There looks to be an off-by-one in how you indent your addition
> (except for the comment).
Thanks. One space is indeed missing...

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Wed May 31 10:57:49 2023
Date: Wed, 31 May 2023 12:57:13 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 3/3] multiboot2: do not set StdOut mode unconditionally
Message-ID: <ZHcoCcd5nugmWURI@Air-de-Roger>
References: <20230331095946.45024-1-roger.pau@citrix.com>
 <20230331095946.45024-4-roger.pau@citrix.com>
 <b9bd819d-93ad-d511-4602-8e3f4f515546@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <b9bd819d-93ad-d511-4602-8e3f4f515546@suse.com>

On Wed, Apr 05, 2023 at 12:36:55PM +0200, Jan Beulich wrote:
> On 31.03.2023 11:59, Roger Pau Monne wrote:
> > @@ -887,6 +881,15 @@ void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable
> >  
> >          efi_arch_edid(gop_handle);
> >      }
> > +    else
> > +    {
> > +        /* If no GOP, init ConOut (StdOut) to the max supported size. */
> > +        efi_console_set_mode();
> > +
> > +        if ( StdOut->QueryMode(StdOut, StdOut->Mode->Mode,
> > +                               &cols, &rows) == EFI_SUCCESS )
> > +            efi_arch_console_init(cols, rows);
> > +    }
> 
> Instead of making this an "else", wouldn't you better check that a
> valid gop_mode was found? efi_find_gop_mode() can return ~0 after all.

When using vga=current, gop_mode would also be ~0, so that
efi_set_gop_mode() doesn't change the current mode; I was trying to
avoid exposing a keep_current or similar extra variable to signal this.

> Furthermore, what if the active mode doesn't support text output? (I
> consider the spec unclear in regard to whether this is possible, but
> maybe I simply didn't find the right place stating it.)
> 
> Finally I think efi_arch_console_init() wants calling nevertheless.
> 
> So altogether maybe
> 
>     if ( gop_mode == ~0 ||
>          StdOut->QueryMode(StdOut, StdOut->Mode->Mode,
>                            &cols, &rows) != EFI_SUCCESS )

I think it would make more sense to call efi_console_set_mode() only
if the current StdOut mode is not valid, as anything different from
vga=current will already force a GOP mode change.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 31 10:59:23 2023
From: Oleg Nikitenko <oleshiiwood@gmail.com>
Date: Wed, 31 May 2023 14:04:42 +0300
Message-ID: <CA+SAi2tnribWcyORs3P-8za0aJx3t5NcOXCo1g917Csgowpb9g@mail.gmail.com>
Subject: xen with colors issue again
To: xen-devel@lists.xenproject.org

Hello,

I built the xlnx_rebase_4.17 branch and ran it in our environment with
cache coloring enabled.
I ran into the following issue: it looks like some device was getting
stuck. It may come up immediately on start, or 20-30 minutes later, even
with no DomUs.
The Xen command line is:
"console=dtuart dtuart=serial0 dom0_mem=1800M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 llc-coloring=on llc-way-size=64K xen-llc-colors=0-1 dom0-llc-colors=2-8"

This is the first one that comes up:

(XEN) d0v0 Forwarding AES operation: 3254779951
Assert occurred from file xcsudma.c at line 143

I found out that this assert is inside a DMA handler in the FSBL code:

void XCsuDma_Transfer(XCsuDma *InstancePtr, XCsuDma_Channel Channel,
                      u64 Addr, u32 Size, u8 EnDataLast)
{
        ...
        Xil_AssertVoid(Size <= (u32)(XCSUDMA_SIZE_MAX));
        ...
}

This is the second one:

[  188.737910] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: ERROR: Gcm
Tag mismatch
(XEN) d0v0 Forwarding AES operation: 3254779951
[  188.748496] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: ERROR : Non
word aligned data
(XEN) d0v0 Forwarding AES operation: 3254779951
[  198.826279] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: ERROR : Non
word aligned data
(XEN) d0v0 Forwarding AES operation: 3254779951
[  198.837363] zynqmp_aes firmware:zynqmp-firmware:zynqmp-aes: ERROR:
Invalid
(XEN) d0v0 Forwarding AES operation: 3254779951
Received exception
MSR: 0x200, EAR: 0x181, EDR: 0x0, ESR: 0x861
[  229.916284] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[  229.916667] (detected by 1, t=5252 jiffies, g=1953, q=101)
[  229.922286] rcu: All QSes seen, last rcu_sched kthread activity 5252
(4294949715-4294944463), jiffies_till_next_fqs=1, root ->qsmask 0x0
[  229.934569] rcu: rcu_sched kthread timer wakeup didn't happen for 5251
jiffies! g1953 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
[  229.945727] rcu: Possible timer handling issue on cpu=0
timer-softirq=3481
[  229.952734] rcu: rcu_sched kthread starved for 5252 jiffies! g1953 f0x2
RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=0
[  229.962940] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[  229.971936] rcu: RCU grace-period kthread stack dump:
[  229.977041] task:rcu_sched       state:R stack:    0 pid:   12 ppid:
2 flags:0x00000008
[  229.985433] Call trace:
[  229.987939]  __switch_to+0xf4/0x13c
[  229.991486]  __schedule+0x2f0/0x690
[  229.995032]  schedule+0x5c/0xc4
[  229.998232]  schedule_timeout+0x80/0xf0
[  230.002125]  rcu_gp_fqs_loop+0xf0/0x2b4
[  230.006017]  rcu_gp_kthread+0xe8/0x100
[  230.009824]  kthread+0x120/0x130
[  230.013111]  ret_from_fork+0x10/0x20
[  230.016744] rcu: Stack dump where RCU GP kthread last ran:
[  230.022279] Task dump for CPU 0:
[  230.025567] task:tokio-runtime-w state:R  running task     stack:    0
pid:  795 ppid:   408 flags:0x00000000
[  230.035514] Call trace:
[  230.038022]  __switch_to+0xf4/0x13c
[  230.041569]  0xffffffae44320a00
[  292.936283] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[  292.936646] (detected by 1, t=21007 jiffies, g=1953, q=113)
[  292.942354] rcu: All QSes seen, last rcu_sched kthread activity 21007
(4294965470-4294944463), jiffies_till_next_fqs=1, root ->qsmask 0x0
[  292.954723] rcu: rcu_sched kthread timer wakeup didn't happen for 21006
jiffies! g1953 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
[  292.965968] rcu: Possible timer handling issue on cpu=0
timer-softirq=3481
[  292.972975] rcu: rcu_sched kthread starved for 21007 jiffies! g1953 f0x2
RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=0
[  292.983268] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM
is now expected behavior.
[  292.992264] rcu: RCU grace-period kthread stack dump:
[  292.997368] task:rcu_sched       state:R stack:    0 pid:   12 ppid:
2 flags:0x00000008
[  293.005759] Call trace:
[  293.008266]  __switch_to+0xf4/0x13c
[  293.011814]  __schedule+0x2f0/0x690
[  293.015359]  schedule+0x5c/0xc4
[  293.018560]  schedule_timeout+0x80/0xf0
[  293.022453]  rcu_gp_fqs_loop+0xf0/0x2b4
[  293.026345]  rcu_gp_kthread+0xe8/0x100
[  293.030151]  kthread+0x120/0x130
[  293.033438]  ret_from_fork+0x10/0x20
[  293.037071] rcu: Stack dump where RCU GP kthread last ran:
[  293.042607] Task dump for CPU 0:
[  293.045895] task:tokio-runtime-w state:R  running task     stack:    0
pid:  795 ppid:   408 flags:0x00000000
[  293.055842] Call trace:
[  293.058350]  __switch_to+0xf4/0x13c
[  293.061897]  0xffffffae44320a00

Maybe someone can give me some hints on where I should look?

Regards,
Oleg

--0000000000008baf2705fcfb35c8--


From xen-devel-bounces@lists.xenproject.org Wed May 31 11:08:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 11:08:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541735.844779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4JgX-0004pu-RR; Wed, 31 May 2023 11:08:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541735.844779; Wed, 31 May 2023 11:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4JgX-0004pn-NB; Wed, 31 May 2023 11:08:09 +0000
Received: by outflank-mailman (input) for mailman id 541735;
 Wed, 31 May 2023 11:08:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7qvw=BU=shutemov.name=kirill@srs-se1.protection.inumbo.net>)
 id 1q4JgW-0004ph-Oj
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 11:08:09 +0000
Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com
 [66.111.4.26]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6a5f8c89-ffa3-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 13:08:06 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id 4D0625C00F1;
 Wed, 31 May 2023 07:08:04 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute6.internal (MEProxy); Wed, 31 May 2023 07:08:04 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Wed,
 31 May 2023 07:08:00 -0400 (EDT)
Received: by box.shutemov.name (Postfix, from userid 1000)
 id D69A310BD95; Wed, 31 May 2023 14:07:56 +0300 (+03)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a5f8c89-ffa3-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov.name;
	 h=cc:cc:content-type:content-type:date:date:from:from
	:in-reply-to:in-reply-to:message-id:mime-version:references
	:reply-to:sender:subject:subject:to:to; s=fm2; t=1685531284; x=
	1685617684; bh=aeL29fTibpX7OH1ns9iCpikawL6pyGgQIxFzUFIJ5mE=; b=x
	UDCGtesaL634GL9eXdFKoKL1s1Tz9Ll4uk1Af1BMnvQGH4j9+1PhPANaEaVE379R
	CdP85tpEg9YcUW2h5dFDnyNMAmMCkYWCYlWHtfBYAhuz7I36GZhVasYeXJhtrGKQ
	9Lrd5hHztASe4grn656Fm1LxjoNjOG1JM3Ad9/lCi0sHtpjyf+ptypYz5yGO613d
	qh8+Kv3Y5LpM1E1ielOLRn5Ok/JsogZMAHdTruPsKoySspbI4+TYmoxqI/VIiBEP
	9NSI/2t5My7odxwCZX0DKZDp9MOjZtjLzlE2tfb83C3rAyCOX0y3uoPoI2tB10/o
	jG1KAHIWcun/pGj4MxlRQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:sender:subject
	:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender
	:x-sasl-enc; s=fm1; t=1685531284; x=1685617684; bh=aeL29fTibpX7O
	H1ns9iCpikawL6pyGgQIxFzUFIJ5mE=; b=CKpieIWYKmOkyyCDiuZP8D+PUVECt
	T2ErndCGQJp98RFqZNoUkT/fLhMhJ/86fqOlgcQFwHL1AieUk9bVGqVQwn4r+E2W
	f0BdsMG0bUxMm93eQ9DkRhxUnn/i/uxmRKJyngDMg7NgPssfHdpcqc4Ji8JdwWKu
	KcmycyZC8ziSkBHUUx0EKF81JTIo9p1nky54jRBkScT+6/Vtg8YxhCTisfVZ1PtQ
	DxAb3nZ6wlHPyxA4NCng2CaUSohgQo62Msc4qWAgBMoDuT6fhd6tjX3xJipyPlE8
	6WncsmBPKzOXGkMG97UzW0dluJcZkHMNBSKLi7gH8TFXTIx7+dO3jVsvQ==
X-ME-Sender: <xms:kCp3ZGhTVDV0DVu7dmcXkuzxkpXwEA9IwCIFbDRZWqgyIv8wnEJkFQ>
    <xme:kCp3ZHAPrdO9QnDydIYpePwNPqTvQeT4rDLRRhSWppJUEyVQkKccYZjgigzReTWrC
    K0XSDatK5L73o__kNY>
X-ME-Received: <xmr:kCp3ZOGKLu-mTkNtzCsjRFXSA0mPL0O0V-GjwUeK0cpeJiwxiYZ1sNPmsq0FoVcs2JuCqQ>
X-ME-Proxy: <xmx:kCp3ZPSz5hGKt8pyGe8Ue_MUxG7I7-LMRL0S8Maei_bY-CtDzDeDqg>
    <xmx:kCp3ZDybG56imlj3lotFSk424EAyO4S3ZalWSFbkTcroWC2CEkq-8g>
    <xmx:kCp3ZN6cb8NiInnM6MmJJINWLXlxFmXCECS2-MdbyVDlBUBQ7WTXsw>
    <xmx:lCp3ZIQZa_9Yhyap2wMrdMHayAKX3jfVOiYwSdewsO8ergavPjSuDQ>
Feedback-ID: ie3994620:Fastmail
Date: Wed, 31 May 2023 14:07:56 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
	David Woodhouse <dwmw2@infradead.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Brian Gerst <brgerst@gmail.com>,
	Arjan van de Veen <arjan@linux.intel.com>,
	Paolo Bonzini <pbonzini@redhat.com>,	Paul McKenney <paulmck@kernel.org>,
	Oleksandr Natalenko <oleksandr@natalenko.name>,
	Paul Menzel <pmenzel@molgen.mpg.de>,
	"Guilherme G. Piccoli" <gpiccoli@igalia.com>,
	Piotr Gorski <lucjan.lucjanov@gmail.com>,
	Usama Arif <usama.arif@bytedance.com>,	Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org,	Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>,	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,	Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>,	linux-csky@vger.kernel.org,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	linux-mips@vger.kernel.org,
	"James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
	Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,	linux-riscv@lists.infradead.org,
	Mark Rutland <mark.rutland@arm.com>,	Sabin Rapan <sabrapan@amazon.com>,
	"Michael Kelley (LINUX)" <mikelley@microsoft.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: Re: [patch] x86/smpboot: Fix the parallel bringup decision
Message-ID: <20230531110756.g4cz2tjnc7ypskre@box.shutemov.name>
References: <87jzwqjeey.ffs@tglx>
 <87cz2ija1e.ffs@tglx>
 <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name>
 <87wn0pizbl.ffs@tglx>
 <ZHYqwsCURnrFdsVm@google.com>
 <87leh5iom8.ffs@tglx>
 <8751e955-e975-c6d4-630c-02912b9ef9da@amd.com>
 <871qiximen.ffs@tglx>
 <b6323987-059e-5396-20b9-8b6a1687e289@amd.com>
 <87ilc9gd2d.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87ilc9gd2d.ffs@tglx>

On Wed, May 31, 2023 at 09:44:26AM +0200, Thomas Gleixner wrote:
> The decision to allow parallel bringup of secondary CPUs checks
> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
> parallel bootup because accessing the local APIC is intercepted and raises
> a #VC or #VE, which cannot be handled at that point.
> 
> The check works correctly, but only for AMD encrypted guests. TDX does not
> set that flag.
> 
> As there is no real connection between CC attributes and the inability to
> support parallel bringup, replace this with a generic control flag in
> x86_cpuinit and let SEV-ES and TDX init code disable it.
> 
> Fixes: 0c7ffa32dbd6 ("x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it")
> Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Tested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
  Kiryl Shutsemau / Kirill A. Shutemov


From xen-devel-bounces@lists.xenproject.org Wed May 31 11:46:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 11:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541740.844792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4KHQ-0000rM-LA; Wed, 31 May 2023 11:46:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541740.844792; Wed, 31 May 2023 11:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4KHQ-0000rF-I5; Wed, 31 May 2023 11:46:16 +0000
Received: by outflank-mailman (input) for mailman id 541740;
 Wed, 31 May 2023 11:46:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sC/f=BU=citrix.com=prvs=508b7ea43=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q4KHP-0000r5-7y
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 11:46:15 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bbd22a80-ffa8-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 13:46:10 +0200 (CEST)
Received: from mail-bn8nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.169])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 31 May 2023 07:46:01 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BL1PR03MB6167.namprd03.prod.outlook.com (2603:10b6:208:315::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.22; Wed, 31 May
 2023 11:45:59 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.022; Wed, 31 May 2023
 11:45:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbd22a80-ffa8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685533570;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=nKurpDXEWdM0jPg9FXmRMj2pdd8+YhmxFD1vE5fBxro=;
  b=QuLTYNaYZQYABWMFaKFVdxOa/1x3xtkyLy45KwvPxo/2C8sl1Rppwutm
   KKsN3oLxLM1UiQTqbB2Akok9i6Y2YNxnnhOjewQO30IRZXdyfdGz3Ug5G
   KzkDqVcOn4rQN6oWgWerK5CWDgp/owxMYXiJrzt+vmR7RE2ClI4pRO/IN
   U=;
X-IronPort-RemoteIP: 104.47.55.169
X-IronPort-MID: 109821829
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
X-IronPort-AV: E=Sophos;i="6.00,207,1681185600"; 
   d="scan'208";a="109821829"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NOSkObuHN/Vijex/ZsHTfVc9bDYUUicAoIF2Dv5kfpbSFh5aOKxSNOXNY/toJhO+1Kgp2NqZdMDam+BjZHGFlTKJEGCdRCpVMeaxjiQ4Y4qcUPt3cmTj8wqnPFJQtHnv2M4vqRpMaS+0AhaTFWn9e+yNUcevRiQVRZXLxBCq9PAAob3T5Fz4AknK+yjYVCK9RomgttbEDynZkWTNFFxYijuAvQA/MTPdHopnp+3WvQ/T5jyEu7H0MwzbxTN1dI9S5n80b6QJeSw2eohRY9qncX0bw0rMgrIQTGe7LHwI+Krx4xEUC/6fkX0TWNLrDh9b8RHoR9tRo/FO1geubEYbgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=/VbPZ4Ysg5oOPxI5dTCJYqTy0gM7MCGNZ/t2NDvd/Dc=;
 b=LTOtyGhOMiXF5prH03AhistIaKiFG9kiFVcozkZaGLRinLAn2G9X6lklW5rjuSuSRLSnnbD2K6oO70x++gMX9GeDs3fuEA/XXYuk8g9EhDvxbYDhh/Lj0+aLOybI7sxx+SaXEoZSF+3zhhwVNuN9rgAo51XQqrKJ8TRt/3du5nHmBNwpi6anxCudfqd2igixl+gQ38Rfv/W/Lq39CIoHlfCDnZReYInL/lIuPh+15KIlxuKAYPKPqeKYu7LGihNHp+CIN8QpoJ4sj4AhcjkFIrVyCD7msV6L6yjQPfSfc6q9wfEEDgIfsxCeXVUd1CAc+BWZXGmsESPnCBG6sj0R3A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/VbPZ4Ysg5oOPxI5dTCJYqTy0gM7MCGNZ/t2NDvd/Dc=;
 b=rOwYvvU7RlwiXlCBGrneyv0umFxEvx1+IKL4Tk4nadOl+3rV1lUpqSh0Qwn8LlhM5XaS1PU+JP3k0r4zl4yFC1GW/4v7jDFD0vBLNj5Rdd3Tsm175YrrXnHBwge1Nw0nwzLy8lmO5eAS4SF/6uIMzZck5cjbMdQQP2yvZrqwAxc=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <4e336121-fc0c-b007-bf7b-430352563d55@citrix.com>
Date: Wed, 31 May 2023 12:45:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [BROKEN] Re: [PATCH v9 0/5] enable MMU for RISC-V
Content-Language: en-GB
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Gianluca Guida <gianluca@rivosinc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>
References: <cover.1685027257.git.oleksii.kurochko@gmail.com>
In-Reply-To: <cover.1685027257.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0200.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9e::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BL1PR03MB6167:EE_
X-MS-Office365-Filtering-Correlation-Id: a454d74d-6098-4b5d-3dd1-08db61cc99f8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a454d74d-6098-4b5d-3dd1-08db61cc99f8
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 11:45:59.1524
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: m2EjgFjP8pK9vbfApxjXNSKaUcQ8yyQz2bhNeV/5MIR4UNdYdbLG9gsKVq/YWItXVxSLvJvV55EXwoU/L8w6S8whZZmSG5685/sCp+xLyIU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR03MB6167

On 25/05/2023 4:28 pm, Oleksii Kurochko wrote:
> Oleksii Kurochko (5):
>   xen/riscv: add VM space layout
>   xen/riscv: introduce setup_initial_pages
>   xen/riscv: align __bss_start
>   xen/riscv: setup initial pagetables
>   xen/riscv: remove dummy_bss variable

These have just been committed.

But as I fed back on early drafts of this series, patch 2 is
sufficiently fragile and unwise as to be unacceptable in this form.

enable_mmu() is unsafe in multiple ways: the compiler is free to
reorder statements around it (the continuation label needs to be inside
the asm statement for this to work correctly), and it *depends* on
hooking all exceptions and taking a pagefault.

Any exception other than a pagefault, or not taking the pagefault at
all, causes it to malfunction, which means booting will fail depending
on where Xen was loaded into memory.  It may not explode inside QEMU
right now, but it will not function reliably in the general case.

Furthermore, the combination of patches 2 and 4 breaks the CI
integration which looks for "All set up" at the end of start_xen().
It's not ok, from a code quality point of view, to defer 99% of
start_xen()'s functionality into an unrelated function.


Please do not do anything else until you've addressed these issues.
enable_mmu() needs to return normally, cont_after_mmu_is_enabled() needs
deleting entirely, and there needs to be an identity-mapped page for Xen
to land on so it isn't jumping into the void and praying not to explode.

Other minor issues include page.h not having __ASSEMBLY__ guards, mm.c
locally externing cpu0_boot_stack[] from setup.c when the declaration
needs to be in a header file somewhere, and missing SPDX tags.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 31 12:18:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 12:18:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541745.844802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Km8-0004Ou-7s; Wed, 31 May 2023 12:18:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541745.844802; Wed, 31 May 2023 12:18:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Km8-0004On-4s; Wed, 31 May 2023 12:18:00 +0000
Received: by outflank-mailman (input) for mailman id 541745;
 Wed, 31 May 2023 12:17:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Km6-0004Od-7d; Wed, 31 May 2023 12:17:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Km5-0003d4-Qw; Wed, 31 May 2023 12:17:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Km5-0002WY-De; Wed, 31 May 2023 12:17:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Km5-0007g6-DF; Wed, 31 May 2023 12:17:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Co71gBVoGY2v3cBxqJmb1OHF7NfCkX4DZthBNIivWu4=; b=RCyknKd5d5dVL2vdrrAXRzRHPT
	gxXcnohi41p3qCAAYxOdItfNKuETlP1SK08hGwh71tE5FN2opfrfV9baawV3cBwwWHSFbhayTujSs
	9Twudsbz878QO6oalxaR1B3Uyr3R+RUCMLyNDHuMeFybvnXPg5zqFl1nCLOvqWYjmDYo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181031-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 181031: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=465217b0f872602b4084a1b0fa2ef75377cb3589
X-Osstest-Versions-That:
    xen=94200e1bae07e725cc07238c11569c5cab7befb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 12:17:57 +0000

flight 181031 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181031/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 181018
 build-arm64-xsm               6 xen-build                fail REGR. vs. 181018
 build-armhf                   6 xen-build                fail REGR. vs. 181018

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  465217b0f872602b4084a1b0fa2ef75377cb3589
baseline version:
 xen                  94200e1bae07e725cc07238c11569c5cab7befb7

Last test of basis   181018  2023-05-30 20:00:24 Z    0 days
Testing same since   181031  2023-05-31 11:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 465217b0f872602b4084a1b0fa2ef75377cb3589
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed May 31 12:01:11 2023 +0200

    vPCI: account for hidden devices
    
    Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
    console) are associated with DomXEN, not Dom0. This means that while
    looking for overlapping BARs such devices cannot be found on Dom0's list
    of devices; DomXEN's list also needs to be scanned.
    
    Suppress vPCI init altogether for r/o devices (which constitute a subset
    of hidden ones).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>

commit 445fdc641e304ff41a544f8f5926a13b604c08ad
Author: Juergen Gross <jgross@suse.com>
Date:   Wed May 31 12:00:40 2023 +0200

    xen/include/public: fix 9pfs xenstore path description
    
    In xen/include/public/io/9pfs.h the name of the Xenstore backend node
    "security-model" should be "security_model", as this is how the Xen
    tools are creating it and qemu is reading it.
    
    Fixes: ad58142e73a9 ("xen/public: move xenstore related doc into 9pfs.h")
    Fixes: cf1d2d22fdfd ("docs/misc: Xen transport for 9pfs")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 0f80a46ffa6bfd5d111fc2e64ee5983513627e4d
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 12:00:13 2023 +0200

    xen/riscv: remove dummy_bss variable
    
    After the introduction of initial pagetables there is no longer any
    need for the dummy_bss variable, as the .bss section will no longer
    be empty.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>

commit 0d74fc2b2f85586ceb5672aedc79c666e529381d
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 12:00:05 2023 +0200

    xen/riscv: setup initial pagetables
    
    The patch does two things:
    1. Set up initial pagetables.
    2. Enable the MMU, which ends up executing code in
       cont_after_mmu_is_enabled()
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>

commit ec337ce2e972b70619f5a076b20910a2ff4fea7a
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 11:59:53 2023 +0200

    xen/riscv: align __bss_start
    
    The bss clearing loop requires proper alignment of __bss_start.
    
    ALIGN(PAGE_SIZE) before "*(.bss.page_aligned)" in xen.lds.S
    was removed, as any contribution to "*(.bss.page_aligned)" has to
    specify proper alignment itself.
    
    Fixes: cfa0409f7cbb ("xen/riscv: initialize .bss section")
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>

commit e66003e7be1996c9dd8daca54ba34ad5bb58d668
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 11:55:58 2023 +0200

    xen/riscv: introduce setup_initial_pages
    
    The idea was taken from xvisor, but the following changes
    were made:
    * Use only a minimal part of the code, enough to enable the MMU
    * rename {_}setup_initial_pagetables functions
    * add an argument to setup_initial_mapping to have
      an opportunity to set PTE flags.
    * update the setup_initial_pagetables function to map sections
      with correct PTE flags.
    * Rewrite enable_mmu() in C.
    * map the linker address range to the load address range without
      a 1:1 mapping. It will be 1:1 only in the case where
      load_start_addr is equal to linker_start_addr.
    * add safety checks such as:
      * Xen size is less than page size
      * the linker address range doesn't overlap the load address
        range
    * Rework macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}
    * change PTE_LEAF_DEFAULT to RW instead of RWX.
    * Remove phys_offset as it is not used now
    * Remove the alignment of {map,pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);
      in setup_initial_mapping() as they should already be aligned.
      Add a check that {map,pa}_start are aligned.
    * Remove clear_pagetables() as the initial pagetables will be
      zeroed during bss initialization
    * Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
      as there is no such section in xen.lds.S
    * Update the argument of pte_is_valid() to "const pte_t *p"
    * Add a check that Xen's load address is aligned at a 4k boundary
    * Refactor setup_initial_pagetables() so it maps the linker
      address range to the load address range, then sets up the needed
      permissions for specific sections (such as .text, .rodata, etc);
      otherwise RW permissions are set by default.
    * Add a function to check that the requested SATP_MODE is supported
    
    Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>

commit efadb18dd58abaa0c6102e04f1c25ac94c273853
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 11:55:46 2023 +0200

    xen/riscv: add VM space layout
    
    An explanation about why the top VA bits are ignored was also added.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 31 12:23:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 12:23:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541751.844812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Kr0-0005oy-SO; Wed, 31 May 2023 12:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541751.844812; Wed, 31 May 2023 12:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Kr0-0005or-OT; Wed, 31 May 2023 12:23:02 +0000
Received: by outflank-mailman (input) for mailman id 541751;
 Wed, 31 May 2023 12:23:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sC/f=BU=citrix.com=prvs=508b7ea43=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q4Kqy-0005ol-L0
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 12:23:00 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e0ae5ad2-ffad-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 14:22:59 +0200 (CEST)
Received: from mail-dm6nam12lp2177.outbound.protection.outlook.com (HELO
 NAM12-DM6-obe.outbound.protection.outlook.com) ([104.47.59.177])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 31 May 2023 08:22:52 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by BN9PR03MB6092.namprd03.prod.outlook.com (2603:10b6:408:11d::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.22; Wed, 31 May
 2023 12:22:49 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.022; Wed, 31 May 2023
 12:22:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0ae5ad2-ffad-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685535779;
  h=message-id:date:subject:to:references:from:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=3oN8eKsUv8TWadx+qFeAxeLJMVAsVB8AkMJRgyT8Lsw=;
  b=Rer9IgQQXseRJP3eFdhLw5HVulgulks3IYNgiVpryCkKNMF6vRxXIniO
   uN50qvEpeOCDa7rheCit1teADfWmzTypZsZBdE54E67MLWeC83UUTbtkk
   5YXKKnUpcVTjzBxMAe95u1hL7t4erZAa28tRpyUItfh1ApVEnkWnfstHx
   w=;
X-IronPort-RemoteIP: 104.47.59.177
X-IronPort-MID: 113551806
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lD8QN1adqklqzZg7yiGzTZCSGHdMSnWfpMFh/YbM5FfMpVjDeZ8mdI0sIZjn4ntpL6JrFfwt+vfqvy42mlIszjiSkMSBob8CAxssAhWmr9rU6M9Ed7ZEuwr44fmUVXkn/h3XLmdgBAfAadgmdgOwnhNPzXCSzxfr4xcqCiFBV7u40nps0LT/pRSXp7m8Kn4MUxZbTCOuiCN0hK/DHpVO0+BkcL3gga4/rzh5s7PG5dDqJ9hGAMPAD/2yKd94vI2679+7SyZOKysXMHBjxKb7I6O8xJe8FSPF6zkpYPQfr7LxkXu8F4wIBsqEzHOOXjVHPSYAtmGs3wvBMRcptMKIHA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IjGU/eG2hZQz1UJfukO3WcbkhsrtZoh6xYPE0qbvEKk=;
 b=S0MYQU9UXj6NGohUqf5eug9Fq80/6whLnjP9bPD8TxT0/CT6iuOOskHmwR56NW9V+hmJ+42ZlgeJR9tE6eJcyxhQWj5nn/5brqCWLqRvj7KyoCoTQ9yRcF+o/ITwpqX/WmZd3KiDmI4ENnF0vJXdmQf9JO7tgG8mWP0hcID4BYRAB+KdtHbPo6zLyClfrXuYvKTJAZ+l4gMeB9h4aDLMLvLy1i+h2L3BrcJgP4wLB7KKsacdxI96gdn/G9oLWsBGcJ/ieaQpfAsl5PttZGrwb41LumxlcbdbvQ0Xe18ZbsJQj01LkGh6uG+21xliyvCkvtIrdNmAFWkc+2GfmxUXiQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IjGU/eG2hZQz1UJfukO3WcbkhsrtZoh6xYPE0qbvEKk=;
 b=Xt0i0bWQ3WpCL4KO6xDk+SVgMZtHJ6Dey7sXXlmBhTn/L7XEmUIquXqumQ87QJzdU9+CMKhbBrKm0rQV+SERFqd1n0I9wWylIO5CsbE6UJ/CIxJ3DCBXLkLxmfbro9p36GL8IqXhC5J73A1+5gh6Vuoo4HFSwCqclHp9AE7xnEQ=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <b9417407-453f-d13d-6bd6-debe1fd3a9ac@citrix.com>
Date: Wed, 31 May 2023 13:22:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [xen-unstable-smoke test] 181031: regressions - trouble:
 blocked/fail
Content-Language: en-GB
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <osstest-181031-mainreport@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <osstest-181031-mainreport@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: LO4P123CA0650.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:296::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|BN9PR03MB6092:EE_
X-MS-Office365-Filtering-Correlation-Id: ef3768a3-cea1-4263-1e44-08db61d1be70
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ef3768a3-cea1-4263-1e44-08db61d1be70
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 12:22:47.7857
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uBvy2c8XYuxd3IJ/wAB7jUPb1I+/WmjRZm1/ytdHFdUbuiOJwChrtr30VbINKzL0O5oUuRgucMPNyMBr35q1HdIOMpJHvm7zB/cr9aAAbt4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR03MB6092

On 31/05/2023 1:17 pm, osstest service owner wrote:
> flight 181031 xen-unstable-smoke real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/181031/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-amd64                   6 xen-build                fail REGR. vs. 181018
>  build-arm64-xsm               6 xen-build                fail REGR. vs. 181018
>  build-armhf                   6 xen-build                fail REGR. vs. 181018

Real failure, caused by the vPCI change.

http://logs.test-lab.xenproject.org/osstest/logs/181031/build-arm64-xsm/6.ts-xen-build.log

The userspace test needs is_hardware_domain().

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 31 12:39:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 12:39:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541756.844825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4L6j-0007PM-7Y; Wed, 31 May 2023 12:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541756.844825; Wed, 31 May 2023 12:39:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4L6j-0007PF-4r; Wed, 31 May 2023 12:39:17 +0000
Received: by outflank-mailman (input) for mailman id 541756;
 Wed, 31 May 2023 12:39:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sC/f=BU=citrix.com=prvs=508b7ea43=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q4L6h-0007P9-Rf
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 12:39:15 +0000
Received: from esa5.hc3370-68.iphmx.com (esa5.hc3370-68.iphmx.com
 [216.71.155.168]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 24b33fff-ffb0-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 14:39:13 +0200 (CEST)
Received: from mail-bn7nam10lp2109.outbound.protection.outlook.com (HELO
 NAM10-BN7-obe.outbound.protection.outlook.com) ([104.47.70.109])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 31 May 2023 08:39:04 -0400
Received: from BYAPR03MB3623.namprd03.prod.outlook.com (2603:10b6:a02:aa::12)
 by DM4PR03MB6934.namprd03.prod.outlook.com (2603:10b6:8:46::5) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.6411.28; Wed, 31 May 2023 12:39:02 +0000
Received: from BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b]) by BYAPR03MB3623.namprd03.prod.outlook.com
 ([fe80::81d5:6cc1:5b52:3e0b%3]) with mapi id 15.20.6433.022; Wed, 31 May 2023
 12:39:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24b33fff-ffb0-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685536752;
  h=message-id:date:from:subject:to:cc:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=8OYiF6SPBjT7zP/qAx7QHPipcP5zT6XDLyGJG4Qf7+E=;
  b=eYgn3KkLkaHZ18sG4XlvZAgrp42CCccAHBoEb2td1GE8tDpjDQ6zgaWp
   /p91CiItJlu63ot+WW4sb8D1fi+6YM/c9g81XmxWFQPgFd5J/tamqe7Qz
   6FJrtVTZvT0QKmjScRXxJkxOPuhBZPRPOvnzKVndIQXqy+LCQnDGUzYTe
   E=;
X-IronPort-RemoteIP: 104.47.70.109
X-IronPort-MID: 109827932
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:lhib/K9pzdaDbNpPfe3FDrUDQH+TJUtcMsCJ2f8bNWPcYEJGY0x3x
 2FOUTjTMqncZWr1ctwkPomy/ElXuMWAyYRqTVQ4/Ho8E34SpcT7XtnIdU2Y0wF+jCHgZBk+s
 5hBMImowOQcFCK0SsKFa+C5xZVE/fjUAOG6UKicYXoZqTZMEE8JkQhkl/MynrlmiN24BxLlk
 d7pqojUNUTNNwRcawr40Ird7ks31BjOkGlA5AdmOKsS5AW2e0Q9V/rzG4ngdxMUfaEMdgKKb
 76r5K20+Grf4yAsBruN+losWhRXKlJ6FVHmZkt+A8BOsDAbzsAB+v9T2M4nQVVWk120c+VZk
 72hg3ASpTABZcUgkMxFO/VR/roX0aduoNcrKlDn2SCfItGvn9IBDJyCAWlvVbD09NqbDklj/
 80Tcz4CQyyN3f2Z24uke8poiMMseZyD0IM34hmMzBn/JNN/GdXpZfqP4tVVmjAtmspJAPDSI
 dIDbiZiZwjBZBsJPUoLDJU5n6GjgXyXnz9w8QrJ4/ZopTeLilUpgdABM/KMEjCObexTklyVu
 STt+GPhDwtBHNee1SCE4jSngeqncSbTAdpDROfnp6Uw6LGV7m8KNloQfGHjm+u0sGKwf9lYJ
 Q8L6wN7+MDe82TuFLERRSaQqXqJvBcaV8BXVfMz7AWAyK386AKeG2RCRTlEAPQ2uclzSTE02
 1uhm9LyGScpoLCTUWia9LqfsXW1Iyd9BW0IaDIATAAFy8L+u4x1hRXKJv58FIalg9uzHiv/q
 w1mtwA7jrQXyMQNiKOy+Amfhyr2/8CYCAko+g/QQ2SpqBtjY5KobJCp7l6d6utcKIGeTR+Ku
 31sd9Wi0d3ixKqlzESlKNjh1pnwjxpZGFUwWWJSIqQ=
IronPort-HdrOrdr: A9a23:0gOv064GrLe6vD30JQPXwEHXdLJyesId70hD6qkRc3Bom6mj/P
 xG88566faZslcssTQb6Km90YO7MBThHP1OjrX5Q43SOjUO0VHAROsO0WKh+UyZJ8SZzJ8n6U
 4KScZD4bPLfCRHpPe/zA6kE8sxhPmrmZrY+ts2Fk0dNz2CvZsQkjtRO0KgHkpqXxkDIJw2Gp
 aGj/A3xQaISDAsYsOnHWlAeu7MqdHR0LfrfhICbiRXjTWmvHeT5LnmCAjd5wwZUD9E3N4ZgA
 v4uj283KmlruqqjiTRzmrCq6lR8eGRrud+OA==
X-Talos-CUID: 9a23:i2Y7u2He0yCY45zMqmJK/n8fQc4qX0HTlmj/OHanOEVtRrqaHAo=
X-Talos-MUID: 9a23:5r3/JwYGD+BfXeBTtCbSwzdLDp9S8amUJE4pmpII+NaVHHkl
X-IronPort-AV: E=Sophos;i="6.00,207,1681185600"; 
   d="scan'208";a="109827932"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UPoIFmHJALVL8yqSHPlMjOSl3eT8OrLAuxpksr7GJEGj8+acmRrLv+8wdh3eXEjjjdK2Ip/apJ2n9OfJsbM6m1V1qmHgUZSzO9SzdJ+KYdzYGlL1490rDyPpoaeSD/2nqdwVvH0cYKbqfgqOf27VSgHqZ3LiDoL0dtAQEWpDlrsQk4HFt+eDyzdUREyf+47aBQcQo5iojty+cZqL7JAlySsG0I4bbr6Vjcskgag8spNENMumPw4Yy4EZdPW2BHDn+h4CzU0+NHojkK1APwScI5PgUKRn7T4dGNAo9bkf/fAKHa1QqIj5a1Z2Mewfyokb9773RPF2hBwWyBisi6KWJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8OYiF6SPBjT7zP/qAx7QHPipcP5zT6XDLyGJG4Qf7+E=;
 b=V7D40nYqx84JN6vrjDgak3O4b1FOEZgBoirABZqVSbzgSsPLiXTKaoyT7l38gTI/B2Q/b3N3+fropGyz43bYOxKXSR47pVq6xRxdVMJ4snGOWXeKXSDDuc0Pa4BacVxn5MB1FuGyYyXmZqR1SQ0RA63tuxC628gYIP7coEwn9r6Fxss7qAJ2zOXIXQF00jejrkQ7iEBPiD35TiQQGjfCZA7VoYseVZ4Ug2cYKIMnjWEZ1ngbZ/3UWBYYP+3rb943LYO853adEJaD0+HLwKz702zx5fvf3iXPYXWP5pL6Sd7YS8OWuUMb18q5q6ms/Yc38krPhF5mh6IBvknLEFRTxw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8OYiF6SPBjT7zP/qAx7QHPipcP5zT6XDLyGJG4Qf7+E=;
 b=vprGT8adL4uSGkuT0fBZPcqtpj1LSJKg/4rlnY4sR7xy2l4zDUaY61oRaQ/6dykPhW9/CxSTGYvzceNuzveqOqgoentCDUlU94cslquUXzHG531W1k4m3jKjSrJoN2rlkpGyvkixvTtvOi4FudYp6G+UtjtBXdM/xS6z352KPQw=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Message-ID: <3f009cf5-852a-abb8-814b-a63f05e37a16@citrix.com>
Date: Wed, 31 May 2023 13:38:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 2/3] x86: Expose Automatic IBRS to guests
Content-Language: en-GB
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20230530135854.1517-1-alejandro.vallejo@cloud.com>
 <20230530135854.1517-3-alejandro.vallejo@cloud.com>
 <2e1bea58-9f6f-08c0-ce00-148f79ba12ff@citrix.com>
 <64770ce3.050a0220.fb998.7ff6@mx.google.com>
In-Reply-To: <64770ce3.050a0220.fb998.7ff6@mx.google.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P123CA0088.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:138::21) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BYAPR03MB3623:EE_|DM4PR03MB6934:EE_
X-MS-Office365-Filtering-Correlation-Id: 8740b73b-7013-4dd6-bbef-08db61d40347
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nBJIwCHR0j6KEjsnwG9uNDtdQhgD3VKMBN6RslJnwqmtlsc7Hs8gx/KgMqE2LYAH2VNgZci6ubcZKKMI+XYHSIP3or8lopwFdtMDQHWknWMyfReBVo/8fB3wmqQDn85BjCxkYwkoSrxmU1oLhseHE/jQqbF5dAVt+KHjfzyMGbXecFpJvo3usaxoAl3BtlVb9XWlSQRQgO85pU/I5eSVLBlZKKgP+s2xJd9iIJF31Jxle1bHDRg+95xr14By+cIzz1FD0UK5Ha+37ZeW/oDPkJFQwMvkh4h2dNVDf5ZhMD0jbK+2EgZcrF8lg9n2fy+1iKySRdxWAVJniiXN7iQen48ajBiF+dMHoVa42lDBHd9TKh6XaRQ9iPeyYdeUxs2yVDRGOZnYZmPsB14y7Igpe+ICsvgqITJU7N16ZutyyScqLevJlYPcMZ5v/5I80cQ3gkYSUa6eV0Lw7b/rMTvax2cfrdbKFzA9PWljNyTNhiTVK+XZjntLKE7KXy6N+gKgGqcCPMQ6qpeOQhV9UNwAnGT4gLqpsJB7Z0Ta2bP1Ud9qSwXQe63HSEZyZqrgs9zApjgz16ToQKNoaDewA+v4LyRxTWMWLFKtqsXHlHoFWLh4JQ3GluHpvghzKEBd6A9IQcAbykwd2/w3CMP72TM61A==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(39860400002)(366004)(396003)(346002)(376002)(136003)(451199021)(6506007)(6512007)(186003)(31686004)(2616005)(2906002)(53546011)(54906003)(478600001)(26005)(83380400001)(86362001)(31696002)(8676002)(6486002)(41300700001)(38100700002)(8936002)(82960400001)(66556008)(66476007)(66946007)(316002)(5660300002)(6666004)(36756003)(6916009)(4326008)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?eDNXSDhhZGMrbGdpUzVrdVFSR1hLa2RGUkNJQmpHWnRiYXo5aENmK1RXNXFU?=
 =?utf-8?B?ODcxaU5GUS96dmtsSTVEbnA4SWw4VG0zMVBSR2hTRk03LzRieXhRS0g1akRl?=
 =?utf-8?B?YVNOMy84My9rTkZGbnRWOXdCZ2JNVzFFdmg5dzUwd0Z5c1pqZWo2UHByeWVz?=
 =?utf-8?B?Q1EyTE0vcEtzVVZyMHNWK3lueVBSRGZ4NUdvSkR3TEJvWXpoSGpjdHZrTm1w?=
 =?utf-8?B?Z095dE5sOEZGeGpMUVJIa29RVnFvY052NlFud0RyZ3Z5N3RiMWI2VTNnVVBh?=
 =?utf-8?B?d3VqUW5RNThpMkxudzVLVE5XK1o2TnVKK0thT29jUlFrV0xrL1BZK0dlcDl0?=
 =?utf-8?B?TEU5QUtQRDhIdU5vWTQ0bldSSXhUQTJ0OStwYjZUemFLcVUxMG9hNTcweXMz?=
 =?utf-8?B?NDJlNVVaUkk2Rm9kT0hrSHg2TmU2bnlIMlZLS2YyVnliUm5QZHgvblZlSDVK?=
 =?utf-8?B?SXdqRmdUcEdseWFUb2lHcEl4ckd6b0hBNmhqZytFQ0tUY2h4aU5YT1lEenFJ?=
 =?utf-8?B?bTFqa0NtNVN2T1Y1SlgrK21DTHRSanUxT2VqSWxHSmRhaDluSWFLNnkxRzlw?=
 =?utf-8?B?djhMNC9IMC9HK0VwMEJ4MHo0MVRCZXhxbHlzSkpHQ0JjN1BFeVZqLzBBVEgr?=
 =?utf-8?B?UTh0RWt5MXVIbGJTVGhjSU84aHdKbHd2ODhwZHVueWpxM0pNZk50TGJoVXB2?=
 =?utf-8?B?dUpnVTcrRXJWdHV1aFhBblp2YWxRRS9KZDVjZ2w0NFlIeUxNUHE4NmpHSzNF?=
 =?utf-8?B?VTM2NTA5dnMwd3QwSkJFWERGTTZnS1JyKzdmRzV1YUZhcW9IR3dqcFdxWVFp?=
 =?utf-8?B?MHZCMkVMaWtlZ3RqZ0FXcEJGTTlRQXp3MWlZbVhPcXB6OFFudnpaQzBaVDZY?=
 =?utf-8?B?YUM4MUpCbXE2SjhoSVNxY2dvRmwwMEM5ZEFIbUpIY0RBZk9ZUTREV2xTQ2lL?=
 =?utf-8?B?QVBEbnNDRFEvYnJyZWJaamlWOVVHVzZqZnlpRnBnYUQ5Q0hZNWZ0SmRaY1JL?=
 =?utf-8?B?ajJBRVlKUU1FdzBEclZKaTNsWGJ3SWh3QVdZbDFZbnJxUnJMK0JiRXVqSllM?=
 =?utf-8?B?QVArK2tNUlJvZ3hoZ3VmTGFvS1dWMUJQajZ4L20rQVVNV1JlQTZYeFIyK1h4?=
 =?utf-8?B?b0VxbGlrbHdCUVJTclY1ZXlZR3BpSW1ETlVYQ1JoNk94S2ZWWUNRVkZWTVQy?=
 =?utf-8?B?U0pENk1QVXpFNVpEQ0hGZHppaE9YS0t6U2xEeVFxeHFvV3p6RUxTV1lCQ1py?=
 =?utf-8?B?bWFmQ0M1MVh4UHpaRjFhbEdUbUw3b3ZmOTlhNXdTVjlBSGxKRG5BVEFPYVlp?=
 =?utf-8?B?QVRITElCM3hTSmtMclYzZk82a2c5OFJLcW9ET1NnSFJuRnBEOVRtaFlwcUFZ?=
 =?utf-8?B?TGhBVUozcXNMV0JHWHNLOW9FM2M4UXpvNEpmQ2xWRDlsRGZHUC9HWHhKY096?=
 =?utf-8?B?aDJYM2gvaVE0V1R3aEFvNzBYcjhNWFUyOEg2TGlsRGI2azJrMWRwdDBYTnRF?=
 =?utf-8?B?L3djTTBBRzVOYVhtSDJvOFZvQVVDcDIreW1WLzVpMXhrdjVjck1YUHI5MXNn?=
 =?utf-8?B?dlBOYzNuTXJTZjZPR1NFMlhkOERDd05wNVpDTUNCV2xOQkhqTlVCTXBFUmlh?=
 =?utf-8?B?N3pSbVFSQVhrcU11bVZ0enBVcHZNUzVWM1RtSHlZbEsxdGlRTzRMRUlJUXlQ?=
 =?utf-8?B?VHlndkNoZ29aZ2tndUxIekFVMWR3dENDT3hNLzJBbU9PQk9tb0JvMmJyckZT?=
 =?utf-8?B?Z2hjNkFOalc5WGZwcWxGbzNIYlMvZ2pyVTVRS3lPMi9IRERZcVg4ZVV0bndt?=
 =?utf-8?B?U0VybldmdUI2b0ZJZHhiYUZvRStjR1l0NU5BZ0V5K1YydTFVM0RCdFcyUGcw?=
 =?utf-8?B?S3owOVZrT2oxU0Z1MERlWUo1cDZoU3k3Q2VLNjBpUHVUSkM4TXNhd0UxTGdZ?=
 =?utf-8?B?d1ZQSTl4NS9MYVFWRVRtSWNRZlFLVmtiWk1pUm5LbDdKK0RvdXRUK05mbGNi?=
 =?utf-8?B?NW9ZWU5IWDJTQ1BNTCtEZkt0NksweXJQbjdDVE1kWVZUdWx0M3d3RzFZejdm?=
 =?utf-8?B?cTVnZXhJcVI5K3JtdG95d0N0WTNTWVMxSW1YaTdjMU0rbWVXbHdMallCU2Z5?=
 =?utf-8?B?SzdSODZBN3hBNExtWCtFekk5MVVIMFlERHRQaS9pZG8ra1crYkw4cFd4NVhj?=
 =?utf-8?B?NHc9PQ==?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0:
	HEUb4SR14LfdoIH5CchpMQHK2WSff0x/8yhRfqX0zRpNZZTGBaMm/94so7dbmzcScJiJaoEBvIi3nCXevWN0ELr//+WnJ8i9z+yPWpYoN9ahpPfLvdmdusHeFHucGlm75i15fG0QMWD3gN1oqnWISV1XaDqUhIPY2po72CuvCzztaN3GJ4ozyyJ50Ru8I1SfEUGdWV+EkObdSWcC4sS3bJODKxDIQ7aKUQAi4ZT5F/UdtNOKyI3SggvmAcvsGrOwNhwsHsPXk+YIxwiQXgUWlOyYHFKQ3v8ylnbG2eKkUnSUV2HCgZ5BbeeOSP7a0zLvmJC3ooNVm9nXaRDiZdK7K7NhxNDFGrnov9MIXTfU7vI2yVJzCOb97bhnxUkxkgS24jIfPNhMcLUOCIgU0NbjllqEnMpW7ut4OMYkPsl92d6bOvSx0lk7MbxfzxZQO1tqPxyNO8lrvSGNawlqjX/4MHyDQj14rN8pefP+e8ejdcJGE31MABYpMMvjHkf1JcLJKN67MmhLYY7UTdbm7ek7pBB2X6TajShenaq/UfsEmrqiWHGWuutefFUSTuGoLgAfgy2gor1xHtMVNWOel2Qc93vYOMb9ln6wMgTAsJYquiPGUTOJ+neJkdB2DjeW/A7jhMlEK+pwFlKGQlnu5PshhKNQpDcamVKNWAXb+4cppEAgDY6nlJHULnrxnttKeamJDGDhrf40mdhPebIU61Uh/2U8tGV2p1gd2rEw4AmlrotR4Ds0Y8bhQJll4hLWmnEm8KaUkGYUxXmW3HG53kLKLTGEKCsnpWwO6dTjbM0De2UsbuMM/7FTdRK76BA7TT5vkNilu981uz8Gh/37anpkqunqefIZxxyU1h+2+/4swUA=
X-OriginatorOrg: citrix.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8740b73b-7013-4dd6-bbef-08db61d40347
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 12:39:02.2883
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CZhs9tJI53oNOaKzW5uTnWvOhDDvHeXLEW5Pon00X/v69gbHe4xoWB8Cn68zbrHZBzVydGCTYzHeVg/x777KFAR2NOX4hQXWfWZFj2UoNp8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6934

On 31/05/2023 10:01 am, Alejandro Vallejo wrote:
> On Tue, May 30, 2023 at 06:31:03PM +0100, Andrew Cooper wrote:
>> I've committed this, but made two tweaks to the commit message.  First,
>> "x86/hvm" in the subject because it's important context at a glance.
> Sure, that makes sense.
>
>> Second, I've adjusted the bit about PV guests.  The reason why we can't
>> expose it yet is because Xen doesn't currently context switch EFER
>> between PV guests.
>>
>> ~Andrew
> We could of course context switch EFER sensibly, but what would that mean
> for Automatic IBRS? It can't be trivially used for domain-to-domain
> isolation because every domain is in a co-equal protection level. Is there
> a non-obvious edge that exposing some interface to it gives for PV? The
> only useful case I can think of is PVH, and that seems to be subsumed by
> HVM.

Hence it's fine not to worry about PV for now.

Right now, when we decide to use IBRS on AMD, we set it unilaterally.
This turns out to give better performance than flipping it on privilege
changes (whether that's non-Xen <-> Xen, or guest user <-> kernel).

PV guests are obscure corner cases these days, and fall outside of
anything the hardware vendors care about when it comes to prediction
mode.  The only sane option is to have Xen explicitly tell the PV
guest what Xen is doing, and let the guest decide if it wants to do
anything further in terms of protections.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 31 12:42:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 12:42:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541761.844835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4L9h-0000VO-Q8; Wed, 31 May 2023 12:42:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541761.844835; Wed, 31 May 2023 12:42:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4L9h-0000VH-Ll; Wed, 31 May 2023 12:42:21 +0000
Received: by outflank-mailman (input) for mailman id 541761;
 Wed, 31 May 2023 12:42:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4L9h-0000V4-1C; Wed, 31 May 2023 12:42:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4L9g-0004B1-TO; Wed, 31 May 2023 12:42:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4L9g-0003Kc-Cz; Wed, 31 May 2023 12:42:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4L9g-0000uX-CQ; Wed, 31 May 2023 12:42:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wzEKngLlHJmEuslgDf4QgND7X7NwfjM5W3+1EQhm5+A=; b=QPxdGei8F0y/HH3ohG9Hs6DJ7H
	zcBVHy3yFPwTpE6T/QFfLElbnAsPhGvItxXoElZg4NSGYy/TiR7F0lXRBU81sHBlN0jhL2ZkFb6gj
	uro075AlUBEYxueSEuQh6UNOVIvfC1XfvEObHtV0bP12eGrjOC0pysPKUQbPUnN9wEyw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181020-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 181020: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8b817fded42d8fe3a0eb47b1149d907851a3c942
X-Osstest-Versions-That:
    linux=6c538e1adbfc696ac4747fb10d63e704344f763d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 12:42:20 +0000

flight 181020 linux-linus real [real]
flight 181030 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181020/
http://logs.test-lab.xenproject.org/osstest/logs/181030/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 180278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-examine      8 reboot                       fail  like 180278
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 180278
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180278
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180278
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 180278
 test-armhf-armhf-libvirt-raw  8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 180278
 test-armhf-armhf-xl           8 xen-boot                     fail  like 180278
 test-armhf-armhf-libvirt-qcow2  8 xen-boot                    fail like 180278
 test-armhf-armhf-xl-vhd       8 xen-boot                     fail  like 180278
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 180278
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8b817fded42d8fe3a0eb47b1149d907851a3c942
baseline version:
 linux                6c538e1adbfc696ac4747fb10d63e704344f763d

Last test of basis   180278  2023-04-16 19:41:46 Z   44 days
Failing since        180281  2023-04-17 06:24:36 Z   44 days   83 attempts
Testing same since   181002  2023-05-29 16:11:58 Z    1 days    4 attempts

------------------------------------------------------------
2553 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-freebsd11-amd64                             pass    
 test-amd64-amd64-freebsd12-amd64                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 323291 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 31 12:51:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 12:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541769.844852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4LIE-00026K-Ny; Wed, 31 May 2023 12:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541769.844852; Wed, 31 May 2023 12:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4LIE-00026D-I9; Wed, 31 May 2023 12:51:10 +0000
Received: by outflank-mailman (input) for mailman id 541769;
 Wed, 31 May 2023 12:51:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4LID-000263-TO; Wed, 31 May 2023 12:51:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4LID-0004Km-IK; Wed, 31 May 2023 12:51:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4LID-0003jt-9q; Wed, 31 May 2023 12:51:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4LID-0004Oi-9N; Wed, 31 May 2023 12:51:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EZ2x4Vr+TCLjzX7/4ayPNReZpyAsTnoFCjj8VvxUCWE=; b=iOOzpDKVcBYUF7YZJDVF36ICPc
	sT+CYpC/Q7WSPqk8KWuXyIjHpOE1qVlOOmcXjxZI36tSWs45pUmOI03u5YEJEJvRqMkut7Jxtp7Tv
	ZsPVpknOzKTKje57EmVKGRyavPa0hy3PM/ol46zIU9VdFZW2gWEDWp3o2FRexbXOg6jM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181028-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 181028: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d15d2667d58d40c0748919ac4b5771b875c0780b
X-Osstest-Versions-That:
    ovmf=d8e5d35ede7158ccbb9abf600e65b9aa6e043f74
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 12:51:09 +0000

flight 181028 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181028/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d15d2667d58d40c0748919ac4b5771b875c0780b
baseline version:
 ovmf                 d8e5d35ede7158ccbb9abf600e65b9aa6e043f74

Last test of basis   181024  2023-05-31 05:10:47 Z    0 days
Testing same since   181028  2023-05-31 09:12:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d8e5d35ede..d15d2667d5  d15d2667d58d40c0748919ac4b5771b875c0780b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 31 12:56:45 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 12:56:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541775.844861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4LNU-0002iU-8R; Wed, 31 May 2023 12:56:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541775.844861; Wed, 31 May 2023 12:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4LNU-0002iN-5Z; Wed, 31 May 2023 12:56:36 +0000
Received: by outflank-mailman (input) for mailman id 541775;
 Wed, 31 May 2023 12:56:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VTAn=BU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q4LNS-0002iH-Rf
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 12:56:34 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2047.outbound.protection.outlook.com [40.107.7.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9123c723-ffb2-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 14:56:32 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7457.eurprd04.prod.outlook.com (2603:10a6:20b:1d9::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.24; Wed, 31 May
 2023 12:56:02 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 12:56:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9123c723-ffb2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TZIfooylHcdMNY/Ys6N0MOoDVyJ+Pi3zIlsbgwWz0fVI4eDdzvXofGbIz2L/NqpV/qw3quySZkGoJyaJZL+CQVjunUpzShRBX+IlciLEEhTM/my/MJrjAI62EP+t9Wwis9BgMKjqg39Ic4Reizjsr+eKdML/7s1r00azsRkreTWHRz5MtYo8W1Idb5ZWpZVc049iiEGZRmXDAQekiVGUpbjHy72h89DzhM0zgRq97+3cDLraJAxx5jE2q/9XY7HTAgIKZLTo3edQEGRRR5erFAVZViXHVohW1pnpgm9iftYQ2oMnsnSXrdot9/TkddYsIEgfPbJcJFK9Uzo3Pumg9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eibG4ODoHhSIit3N/ADLUkJ1CVVZUjtrmS789JHSSV8=;
 b=cg34yxr/PLZQJZlxreICD6Y5U1Jc3cNpHX850BHgcmrQOK4y1Dn68JRdfu9SoZE5mM1MBLvpkJePr6ZP7EjD366u239blryB4EUAP8ZijKCMDBlNFitMdGg7Jh6SvmAs2jp0ou50ozL4f0jkDNCWmXS9sGkGrQlgJqTLVPwZo6d9R6kLQNeFvhJifqfQ/4yWNNAig2Js58LJ2cr5Ob7BYEJ0w2tqqap8cR6NRY7rBUkU7MubjcgnrMvUWIln+YPVaCkOpVsTieNhR+QoeLufW19VTXFPV6SknJLZ2xHeowYGMix+2ANOtjkSN+EzXVU8TTgt2qwe8DMIWjnoMFFyUA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eibG4ODoHhSIit3N/ADLUkJ1CVVZUjtrmS789JHSSV8=;
 b=Y90B/SNiCqSS31llvMPvOnmLlIVXi3tWI+MCyZXWvKObPjoTCdg7n2Dpe3TBCY3jKAleYYqyB5qiapAg3JgV8W98qRR+kpZv0mZj2kg+hI9lMDmnCwlpm2Itkt3uRQLuPc3kTJQhBULrOHBnY3K48iJj473TqgS18uzmTJHsMLubFlsMyByFfyl7y63KmjRkDheX7R5VEpiDHIkW4im+Qq5OZyoF/WX3VqNhTdnl8A4t2oFh1CldUuEItYg+EyL1e64KGZqZd5RQNOnjpL0W8JYyjLyeNQtS3yYyJMA3Ov6CVS5Eob8HYd9vqQW0Zl6ZCsWgLm65WZdEUQBsVxIrBA==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <46f2bed8-f592-acf7-4506-dd1558433716@suse.com>
Date: Wed, 31 May 2023 14:55:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v6 5/6] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
Content-Language: en-US
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
 <bd2dd42c778714f25e7e98f74ff5e98eee1cd0a5.1685359848.git.oleksii.kurochko@gmail.com>
 <92580a6f-e97a-c4a9-435c-bd95a84d4306@suse.com>
 <f3bf3a483f7282eb365cf04f27e1c7a4e84f5aae.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f3bf3a483f7282eb365cf04f27e1c7a4e84f5aae.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0049.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM8PR04MB7457:EE_
X-MS-Office365-Filtering-Correlation-Id: a2e0de26-075e-410e-3737-08db61d662dc
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a2e0de26-075e-410e-3737-08db61d662dc
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 12:56:01.5696
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2dEyZqsDKimhB0tuylRrtcMSzx2WGt1Ct+X/1cgnU5SGWu002SExb7IfaoluwjI9Jr4GDM2Ug5VJWCupVlMq4w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7457

On 31.05.2023 12:40, Oleksii wrote:
> On Tue, 2023-05-30 at 18:00 +0200, Jan Beulich wrote:
>> On 29.05.2023 14:13, Oleksii Kurochko wrote:
>>> +static uint32_t read_instr(unsigned long pc)
>>> +{
>>> +    uint16_t instr16 = *(uint16_t *)pc;
>>> +
>>> +    if ( GET_INSN_LENGTH(instr16) == 2 )
>>> +        return (uint32_t)instr16;
>>> +    else
>>> +        return *(uint32_t *)pc;
>>> +}
>>
>> As long as this function is only used on Xen code, it's kind of okay.
>> There you/we control whether code can change behind our backs. But as
>> soon as you might use this on guest code, the double read is going to
>> be a problem (I think; I wonder how hardware is supposed to deal with
>> the situation: Maybe they indeed fetch in 16-bit quantities?).
> I'll check how the hardware fetches instructions.
> 
> I am trying to figure out why the double read can be a problem. It
> looks pretty safe to read 16 bits (they will be available for any
> instruction length, on the assumption that the minimal instruction
> length is 16 bits), then check the length of the instruction, and if
> it is a 32-bit instruction, read it as uint32_t.

Simply consider what happens if a buggy or malicious entity changes the
code between the two reads. And that's a concern not just for the
detection of "break" that you use this for here.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 31 13:06:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:06:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541780.844871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4LWy-0004IT-8X; Wed, 31 May 2023 13:06:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541780.844871; Wed, 31 May 2023 13:06:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4LWy-0004IM-5Z; Wed, 31 May 2023 13:06:24 +0000
Received: by outflank-mailman (input) for mailman id 541780;
 Wed, 31 May 2023 13:06:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibWb=BU=bombadil.srs.infradead.org=BATV+b81a8c9a6d22e8bb2302+7220+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1q4LWw-0004IG-LV
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:06:23 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ee7fcd44-ffb3-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 15:06:20 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.96 #2 (Red Hat
 Linux)) id 1q4LWl-00HUSK-1J; Wed, 31 May 2023 13:06:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee7fcd44-ffb3-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=U9SykjTbl/OmSzNnS/VGmSZL4PA7bSh1J2szdt4a2Pw=; b=LbVnJ7tohLEudtorXqpRDpDZN3
	KXS1escUUdY/DOXE7glxtWC7fv9LCCpxim1GaC10rWst+5gsMqchki8IHRmidw7fP3Yp+eFn94KqR
	Sk3dPh8FTOHJx2C/NuNLtFTFLLg+Fd8C4Yr7YSG/vtsfYoF9ppSAaq2sz008A8aZIJk5eS/zyxQ8v
	XHhHoV4dijEof0Mg945LcJZFOINy4hrdPsD4A7V6Rvu5BcvrgpgHFOemHsIeOS9qELyCYqpg8Dm/9
	/VlSeRJ7BRB3iK335gAVE7bN5LCpjoCYZ6HS0gO8oj60ZKSJ/+BciNQmRVyAgUc4NgSQJYtOl37ba
	/yKTbOYw==;
Date: Wed, 31 May 2023 06:06:11 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Jens Axboe <axboe@kernel.dk>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	dm-devel@redhat.com, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [dm-devel] [PATCH v2 00/16] Diskseq support in loop,
 device-mapper, and blkback
Message-ID: <ZHdGQz/hZJhiShH3@infradead.org>
References: <20230530203116.2008-1-demi@invisiblethingslab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230530203116.2008-1-demi@invisiblethingslab.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Tue, May 30, 2023 at 04:31:00PM -0400, Demi Marie Obenour wrote:
> This work aims to allow userspace to create and destroy block devices
> in a race-free way, and to allow them to be exposed to other Xen VMs via
> blkback without races.
> 
> Changes since v1:
> 
> - Several device-mapper fixes added.

Let's get these reviewed by the DM maintainers independently.  This
series is mixing up way too many things.


From xen-devel-bounces@lists.xenproject.org Wed May 31 13:19:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:19:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541785.844882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Lj6-0005oF-CN; Wed, 31 May 2023 13:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541785.844882; Wed, 31 May 2023 13:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Lj6-0005o8-83; Wed, 31 May 2023 13:18:56 +0000
Received: by outflank-mailman (input) for mailman id 541785;
 Wed, 31 May 2023 13:18:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VTAn=BU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q4Lj4-0005o2-Pt
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:18:54 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2075.outbound.protection.outlook.com [40.107.7.75])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id afd8f024-ffb5-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 15:18:52 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM0PR04MB7043.eurprd04.prod.outlook.com (2603:10a6:208:19b::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.22; Wed, 31 May
 2023 13:18:23 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 13:18:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afd8f024-ffb5-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=K7FJIET0IufhnY1eojwGZl3m7qBvDsP7Kk5ulnIFFLTDyvdW3MF4nQjoG3HRLiwP/CCbNk/D0CBeCg9IL16KJbNO3TZsnFlmOKF9beReicKjUDFop6ikJPLdTb8Ah7x8GdgypZDL2Dx4GmyysIKtK8K3tlPBJ3w6SZszknTNC0vh2jeJ/ZMQTNgC6c+CVuiqBA1Klw0RE7C4+zl0QMs78qvhw0d1qbAG1ilx+IlQ4U31tUJtb0tZ+Mp1LaMfwwWss6oz6KCLkL4DDHh+JKXKq9STWdiS+hqRD/AlZ7RoFNY36o5HLRuUZEsvtKjl+TjrCPFSqt4rNIHE5GJT2xdfNA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XmIPgAhbc3o7ZTiso542VUldPhVMLg1sQpqZfeRXPX8=;
 b=Vn3nBm0fEsRYXHKxWEfL5X5pjV/a5O8afemFbisXkOsLs8LuccrfkeLYnsHSjRHzAELf4boIs/S01skQyQK/TBwNAqEMakdk66CBB+iCJCv1U4VL5fTyZZGtL4DayCJdo+/nE9tEQzfbEIwcSMIaQ5+Md1tZc7zHnZ9Y7P1igLblYkNSz5TGFGa9Gqg6LovIKeWSVAhIw/AOfhL+cGiR0eO6qRv9QZog5Npn3H4Qo7wsb5leefpGzur57HVkbHV2qIzSHT/Pj6YR6bmXaHjIbkmYzub0k0EDZlD0fPTJgI9A4hVEtiHSX+CAKCauKYpRD2lGGL/PAF79bljRRaYJvw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XmIPgAhbc3o7ZTiso542VUldPhVMLg1sQpqZfeRXPX8=;
 b=oAm6wpdqouEv61zOgC59DY40DDeKkgFjpEd/SVnU8y2MUGjzV+CddwmUGvSABa3gVXpH3PS6JE0azm/vv/Iygq18MWxFtUWlajxRQZs7xqjNr6wBdoWAzNw/S0ogI7zLQodHxd+IzIDGAWKrRRrKPLQG8Ma6Kv3/qbcdYkRvfUNt+257jk/evcXkGFC7dtGuGGvuhhYkzjSjxHA7iU9vfGElVPSXJzx45X5P+G6Aa8FV6Z+HuarcSkbce5kxB2N0Ljw+Ge+ZWItloY1XQ0rRDXNKfd+Zq9hkbF4Br0ivQ/OS5pxFtRDMPerkDtvevTqwNm/hMxMe2HtzAS72x0wFBg==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <bb1bfe6c-eef6-5552-78e4-bdec8fd43561@suse.com>
Date: Wed, 31 May 2023 15:18:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] vPCI: test harness adjustments
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR2P281CA0043.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:92::16) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM0PR04MB7043:EE_
X-MS-Office365-Filtering-Correlation-Id: e2ecec7c-285f-4e03-bd96-08db61d982d0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e2ecec7c-285f-4e03-bd96-08db61d982d0
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 13:18:23.5655
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZeyVSNVXOvSyXn3gBn+q2Q7dx3Zv9vPbmJedW1gsjCue9e0cB9bV0K+UU1+z4xCCmg1sH/Ntowy034HHJOOENg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB7043

1: add test harness entry to ./MAINTAINERS
2: fix test harness build

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 31 13:19:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:19:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541788.844892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Ljn-0006J3-Lb; Wed, 31 May 2023 13:19:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541788.844892; Wed, 31 May 2023 13:19:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Ljn-0006Iu-GY; Wed, 31 May 2023 13:19:39 +0000
Received: by outflank-mailman (input) for mailman id 541788;
 Wed, 31 May 2023 13:19:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VTAn=BU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q4Ljm-0006Ii-AX
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:19:38 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2061d.outbound.protection.outlook.com
 [2a01:111:f400:fe16::61d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9e97818-ffb5-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 15:19:36 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8971.eurprd04.prod.outlook.com (2603:10a6:20b:40a::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.22; Wed, 31 May
 2023 13:19:33 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 13:19:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9e97818-ffb5-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Douo68h64jgb8n+BkE9pcV8D+uS9X4Scg/4d3LjOeljv1jcKOvcowATvxJtlq8AOqYqJ78KJ4A3UNrBqd/kC4WaVMroRwGg6O9h3VCSN37VHdmJXxXGcclwOemfeU65G1fmdW8NBliCtO0FG80gCxP1M1dyp+oEx5f9hDI7N+6y6aWPxC8SvweTT10Jz6oVbqnHEVeywFB5kI6ubNaNuY2yG2u2+ZRJ5wnMQMZ40J6ldzWN0E2Z4azsNCotiH787elbfRzkDS0tyT9bbCfQwkQ09SFcwfB9PWhgEkM15iZagPu7/NctpwdQ38S/8d74+MxruKO3bEReCAf1KFJeFIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JP8kQ7odwWV8WY5Lpq9Bg80UQanGlkB+D89ROMQnm3s=;
 b=gD/CL3jsUhFFI6/78TJnbWf7eQQq74lI/gfYnrbpFhZIrW5mG+kzEDL1Ahnn7I5GvlvcrBe42iIqtZzPNSfsl8nbuXGJZxYEwtkIIWbkVsyjW/fIBCcJGk5lisTZlYxKBLbnHP4F4pgGgWADVgpMbwudkLxTlugLkiVdRijvyNH/l6dmD5Rt7xaS5ZeXFUDq8C3teL+T07ZlWJX/RWXqjw5pMqiJlG4jJEAoOwrIM3qZMUxu+fUCUTyZSpBRmSPHJC/Gw9aq9gwpDEofkofs5oUhcJiXI10jGfWdD854swXMRZV5YnqNENUoNwYJebd7iNoA8NyLNI8HeCIp0xob8w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JP8kQ7odwWV8WY5Lpq9Bg80UQanGlkB+D89ROMQnm3s=;
 b=NIKoKO40KMTksEEnZODKiMmqitCUgN+aWSvqN9I0SMLZ9as2ULQ1fJ8OG79dIGBQQu6D20quTnRwXvglLylGy629BcoO4hOLW5CexcvCA5wbqdkQmlokcIkC2NDnnLdt4J1HsTHcPnIuBf/qyBXQqmwSqkho9bUPHW1T1y38C/HjjDIg8Xv+8Vr2AyA4Nr7faQWNSRLDpRHRKIr5l10zE6xAK7VMx9xHzDJ+ZXD5BzN+lYEAlA3v/KVSBeH0bt9Mqb3idUO0E9tA84EYBl1u/jqIlKmGBirBjw1RVxDntgL4N95sLzMRV0+0qmI76tfAEaskcSw7kRrGVEU0NJAV3A==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <87171ea8-16b0-6284-c4c0-bf0d74fcfc9b@suse.com>
Date: Wed, 31 May 2023 15:19:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: [PATCH 1/2] vPCI: add test harness entry to ./MAINTAINERS
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb1bfe6c-eef6-5552-78e4-bdec8fd43561@suse.com>
In-Reply-To: <bb1bfe6c-eef6-5552-78e4-bdec8fd43561@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0194.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::18) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8971:EE_
X-MS-Office365-Filtering-Correlation-Id: 0e12ade7-cbf3-4b0d-be76-08db61d9ac54
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0e12ade7-cbf3-4b0d-be76-08db61d9ac54
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 13:19:33.2232
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0xyWDxDDzckHU2K7JxKnqCsX+926SGlWFx4mV8JetraIe9kZ36ONnXvZWl6ruqOMzRpFr4XBazC3danLG1gmSQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8971

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -568,6 +568,7 @@
 VPCI
 M:	Roger Pau Monné <roger.pau@citrix.com>
 S:	Supported
+F:	tools/tests/vpci/
 F:	xen/drivers/vpci/
 F:	xen/include/xen/vpci.h
 



From xen-devel-bounces@lists.xenproject.org Wed May 31 13:20:04 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:20:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541791.844901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4LkB-000767-Th; Wed, 31 May 2023 13:20:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541791.844901; Wed, 31 May 2023 13:20:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4LkB-00075O-P3; Wed, 31 May 2023 13:20:03 +0000
Received: by outflank-mailman (input) for mailman id 541791;
 Wed, 31 May 2023 13:20:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VTAn=BU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q4LkA-0006Ii-9M
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:20:02 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur02on2062b.outbound.protection.outlook.com
 [2a01:111:f400:fe16::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d7e423fd-ffb5-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 15:19:59 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM9PR04MB8971.eurprd04.prod.outlook.com (2603:10a6:20b:40a::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.22; Wed, 31 May
 2023 13:19:58 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 13:19:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7e423fd-ffb5-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VOEkwWIkl5LJlW2bZEPwYu2vTAAokFWLf+PRGCcrAKn+JOZJWy61zj/wYZQrGb733F3XMIuep1QDRbRNoqAMtUWz7fZIbO4IEIY+SnMlC45QttrQzGAih++4WEuaACj6/Fy8SI69Jbnif4F3NspdIR6RIF2OPgCmGoP6wJDIyMwOukrGM5rw8XVUSCmWdk/1ffb/K4wFeCC1zq9tcBYPuqJ3yDwJnEtF4okfhsMMx2uyosBdQBfJ3jc4piXZrRRCVtIKyDbTRol8o3OJa5zN8f2r5ekx23M5OtYyTi80Nij6Mu7R3stPoi2vteE+S3eoX9qU6qc4RCkDSCzfrIm7nw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=wSlSXBqwCEB3zRwV6lgVr1HsRYwYeKUKlwNXh+OiKo0=;
 b=Q4lBs4RuflhHm4j3SiyBPesSAzEjNnSwy46UwIfgY9+Buimzfvs1ZgxMg6oDcK/S3KtQBcGbKNuz+du30E8u+RHBXRyMsSZA9HSndOUSTC/2WctV1ozBemOkiDnCTJCgPeHhJNtMGytqd18VZGD20C2k1iiRO3GFPBFHY3cIDoWNLcUxQ4QNXUyKFVaw/nrHaRamDNOTHUsQpF2OXcM0D9Ft3JnlUSD17NkXjZgJ3VSEx3mc/cD2Th5Pow1Cl2r8V1KlJmkvE989KHtKUPUFJwM0pyY4SrLiV9rPNmjQzUEyAN980cuXLRHaQrTk6UJ3cXp0mRDhtYM1axxT1GjWSw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wSlSXBqwCEB3zRwV6lgVr1HsRYwYeKUKlwNXh+OiKo0=;
 b=3ZWgdIa2O7AyQpZjhUhmOsrB7ajoP1LAQz8s96Qx0ptp00f1Z2BNyxFt2e9vEfiz1X+13MFJx9c6aSxEHRwfy72zO2dyMbIuS24Yt5H6Mf1dxlZQT3WjhUEp0G7CaqDJqC5eqcYfSTtS2rdvMmU7Fx95tTdhl3PIlK+pk+QTgt0CB2XeaR6RfZ+30Ly6oW1cGrXZJuUvLR95yBEsmHUPsX6NFsENif2vlEi30lk0LHUIJP704koHUbqpTcJBPcK65P4qhnyMknM/dtYuLtu0O4ExQ7WBcjxnZ/tBS/z4a3kuR5nFlpr3xMd+najfU2UebWqS8x+HAwMYtYVUZeyuIw==
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=suse.com;
Message-ID: <838c09e5-97ce-2d23-9d42-645a8925217d@suse.com>
Date: Wed, 31 May 2023 15:19:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: [PATCH 2/2] vPCI: fix test harness build
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb1bfe6c-eef6-5552-78e4-bdec8fd43561@suse.com>
In-Reply-To: <bb1bfe6c-eef6-5552-78e4-bdec8fd43561@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: FR0P281CA0188.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:ab::9) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: VE1PR04MB6560:EE_|AM9PR04MB8971:EE_
X-MS-Office365-Filtering-Correlation-Id: 0fc676a5-8249-4eee-09e5-08db61d9bb34
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0fc676a5-8249-4eee-09e5-08db61d9bb34
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 13:19:58.1505
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aO9EdjvQqEZcu5DU2u0xbTlLnY45gMg7jlQ+/4hm0y/4XMn4+HyIvHma6CGOgWehG4GtLm4tmvxnq8WWv3lDwA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR04MB8971

The earlier commit introduced two uses of is_hardware_domain(), which the
test harness environment doesn't define. Provide a suitable stub so the
harness builds again.

Fixes: 465217b0f872 ("vPCI: account for hidden devices")
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/vpci/emul.h
+++ b/tools/tests/vpci/emul.h
@@ -82,6 +82,8 @@ typedef union {
 
 #define __hwdom_init
 
+#define is_hardware_domain(d) ((void)(d), false)
+
 #define has_vpci(d) true
 
 #define xzalloc(type) ((type *)calloc(1, sizeof(type)))



From xen-devel-bounces@lists.xenproject.org Wed May 31 13:42:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:42:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541800.844917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M5E-0001pV-TE; Wed, 31 May 2023 13:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541800.844917; Wed, 31 May 2023 13:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M5E-0001pO-Pi; Wed, 31 May 2023 13:41:48 +0000
Received: by outflank-mailman (input) for mailman id 541800;
 Wed, 31 May 2023 13:41:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MZqJ=BU=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1q4M5E-0001pH-43
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:41:48 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2b5f65f-ffb8-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 15:41:46 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 9C7FE63B30;
 Wed, 31 May 2023 13:41:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 12D34C4339E;
 Wed, 31 May 2023 13:41:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2b5f65f-ffb8-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685540505;
	bh=HlwpPDD095eWKjo4oiJTfkTblpFKG89ZfJEGh62GyPQ=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=I/AHXEi4gffcFw4H66NjLeAgQNUYA65+rfeyDw9g9EI9otB/vWnYXMywDCAPJM6mC
	 5tZlSW4po/mbyAXem2lAlTLjPpUODxdi1rvb2HIJ9ceHBoIQMao9wimysTy4If9eQO
	 O4JS4pa+Upf8wMKAibsOJiitfK++uSS6s6ZbcS8l9+6CdnKx2r0o3+6/c9/s8XR+1r
	 /WxTGS0PKR3e3Q5uJ57QQpWrStjhjeXT41+URM/KxRmAmg8QGbGJajaTF6O93pT4EQ
	 jSHyodjTI48Rr2cR1Z6pBOtecZf13LElNc5IsWyF7Rdne/II5CXCweCYO3cC0dlsIC
	 GBAamB3O5GgPA==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	sstabellini@kernel.org,
	roger.pau@citrix.com,
	axboe@kernel.dk,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org
Subject: [PATCH AUTOSEL 6.3 34/37] xen/blkfront: Only check REQ_FUA for writes
Date: Wed, 31 May 2023 09:40:16 -0400
Message-Id: <20230531134020.3383253-34-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230531134020.3383253-1-sashal@kernel.org>
References: <20230531134020.3383253-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Ross Lagerwall <ross.lagerwall@citrix.com>

[ Upstream commit b6ebaa8100090092aa602530d7e8316816d0c98d ]

The existing code silently converts read operations with the
REQ_FUA bit set into write-barrier operations. This results in data
loss as the backend scribbles zeroes over the data instead of returning
it.

While the REQ_FUA bit doesn't make sense on a read operation, at least
one well-known out-of-tree kernel module does set it and since it
results in data loss, let's be safe here and only look at REQ_FUA for
writes.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20230426164005.2213139-1-ross.lagerwall@citrix.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/xen-blkfront.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 23ed258b57f0e..c1890c8a9f6e7 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		ring_req->u.rw.handle = info->handle;
 		ring_req->operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
+		if (req_op(req) == REQ_OP_FLUSH ||
+		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
 			/*
 			 * Ideally we can do an unordered flush-to-disk.
 			 * In case the backend onlysupports barriers, use that.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed May 31 13:43:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541804.844927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M6Y-0002Lw-7S; Wed, 31 May 2023 13:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541804.844927; Wed, 31 May 2023 13:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M6Y-0002Lp-3I; Wed, 31 May 2023 13:43:10 +0000
Received: by outflank-mailman (input) for mailman id 541804;
 Wed, 31 May 2023 13:43:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MZqJ=BU=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1q4M6X-0002IL-1A
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:43:09 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1347f73a-ffb9-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 15:43:08 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 4D7C0616B5;
 Wed, 31 May 2023 13:43:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C044DC4339B;
 Wed, 31 May 2023 13:43:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1347f73a-ffb9-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685540586;
	bh=MhTyEwAVoWuct1qjPySfzk9ClI6MHygYq9CIndOlE+M=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=sB3Z2VmHxTHfRb+7rhuGyz5ngaAA3wTgwq7Owimr/hkuUc1g80DE30+G+GpdDlXdn
	 SGqoJrATvoseMYRoAgMikPdJQ9XrVtqh5QmgM0TktKt4v7NdHyzpBQ8McCp4kxBa0h
	 kr+qS8CHAc+J/AkCVN2a08BH8/ucriiqQtPADS3f8eyECV+PIYG+F8hiOiUYAIlA9/
	 pC+ZXUYL+ree36oOQenumrj0dA4wd6+GwsvsV0NT2a5Yjd15TxrYx+FW/HcdKJ5b+U
	 27wrZsT0Rk+XerAiE3T9mR+OU2O3b+SqecZlUEFXj++DoVlWzPCKgqsxPz4S63m1X5
	 sDrg07kTBTVug==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	axboe@kernel.dk,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org
Subject: [PATCH AUTOSEL 6.1 30/33] xen/blkfront: Only check REQ_FUA for writes
Date: Wed, 31 May 2023 09:41:56 -0400
Message-Id: <20230531134159.3383703-30-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230531134159.3383703-1-sashal@kernel.org>
References: <20230531134159.3383703-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Ross Lagerwall <ross.lagerwall@citrix.com>

[ Upstream commit b6ebaa8100090092aa602530d7e8316816d0c98d ]

The existing code silently converts read operations with the
REQ_FUA bit set into write-barrier operations. This results in data
loss as the backend scribbles zeroes over the data instead of returning
it.

While the REQ_FUA bit doesn't make sense on a read operation, at least
one well-known out-of-tree kernel module does set it and since it
results in data loss, let's be safe here and only look at REQ_FUA for
writes.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20230426164005.2213139-1-ross.lagerwall@citrix.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/xen-blkfront.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 35b9bcad9db90..5ddf393aa390f 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		ring_req->u.rw.handle = info->handle;
 		ring_req->operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
+		if (req_op(req) == REQ_OP_FLUSH ||
+		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
 			/*
 			 * Ideally we can do an unordered flush-to-disk.
 			 * In case the backend onlysupports barriers, use that.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed May 31 13:44:09 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541808.844937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M7V-0002tH-G9; Wed, 31 May 2023 13:44:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541808.844937; Wed, 31 May 2023 13:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M7V-0002tA-DL; Wed, 31 May 2023 13:44:09 +0000
Received: by outflank-mailman (input) for mailman id 541808;
 Wed, 31 May 2023 13:44:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MZqJ=BU=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1q4M7T-0002sy-WF
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:44:08 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3558fa9a-ffb9-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 15:44:05 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 7BDE863B55;
 Wed, 31 May 2023 13:44:04 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 02F46C4339E;
 Wed, 31 May 2023 13:44:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3558fa9a-ffb9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685540643;
	bh=pV9uChkc0H4x4MSq/JUNMWPSxLJQXSvHsCVIU8XyKYU=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=pSk/ef7XqtDyHJtvLmw8nsfj/JHJjeyb7mPNF6k+Jn1hVBBULmCBBJ2lPI+EFcc+G
	 J05a9DDS5jjrBAobce5HDuKOFyJfXsaUNFDmHrWNJbMV2aJdBsDscCHWQ4QR77T4X8
	 gXCV6JqYWzUCqzUi6J7Va39+wJLvCkxcL45I79JYNbDoBu+XvdIdFLcgOd+MckXGLd
	 R06ks5TDmGwCHmQw6oxYd7mPWAPFpFllFSf6DCucd4Vfqm+rmN8PuKTeVY+OskW7PB
	 zeEiZ1kDsG5Pv/29qO7CLwVOjIzIOX2iLPARjJAbxV2g/JzIxJRBiHtrMa3m/wmrcw
	 BgiQjbtxRcigg==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	sstabellini@kernel.org,
	roger.pau@citrix.com,
	axboe@kernel.dk,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org
Subject: [PATCH AUTOSEL 5.15 22/24] xen/blkfront: Only check REQ_FUA for writes
Date: Wed, 31 May 2023 09:43:18 -0400
Message-Id: <20230531134320.3384102-22-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230531134320.3384102-1-sashal@kernel.org>
References: <20230531134320.3384102-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Ross Lagerwall <ross.lagerwall@citrix.com>

[ Upstream commit b6ebaa8100090092aa602530d7e8316816d0c98d ]

The existing code silently converts read operations with the
REQ_FUA bit set into write-barrier operations. This results in data
loss as the backend scribbles zeroes over the data instead of returning
it.

While the REQ_FUA bit doesn't make sense on a read operation, at least
one well-known out-of-tree kernel module does set it and since it
results in data loss, let's be safe here and only look at REQ_FUA for
writes.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20230426164005.2213139-1-ross.lagerwall@citrix.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/xen-blkfront.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 24a86d829f92a..831747ba8113c 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		ring_req->u.rw.handle = info->handle;
 		ring_req->operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
+		if (req_op(req) == REQ_OP_FLUSH ||
+		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
 			/*
 			 * Ideally we can do an unordered flush-to-disk.
 			 * In case the backend onlysupports barriers, use that.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed May 31 13:45:06 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:45:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541811.844947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M8F-0003RW-Q2; Wed, 31 May 2023 13:44:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541811.844947; Wed, 31 May 2023 13:44:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M8F-0003RP-M7; Wed, 31 May 2023 13:44:55 +0000
Received: by outflank-mailman (input) for mailman id 541811;
 Wed, 31 May 2023 13:44:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MZqJ=BU=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1q4M8E-0003R8-2j
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:44:54 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51ad0365-ffb9-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 15:44:53 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id EE96563B92;
 Wed, 31 May 2023 13:44:51 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 603B9C433D2;
 Wed, 31 May 2023 13:44:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51ad0365-ffb9-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685540691;
	bh=ngj77xP5JuQ0WSEzUbr2XD51zkljVFU1XP7Ct4Ddi8c=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=Yy5RftwDQ3NkP/xoC9eVMtK2lbZE9FLRfkq5CQ4nOpcylejny9HXufhFdetyLfHcw
	 Av7py+s9/OtZZjWqbLq3GnTh5Ut28WRmlYRhjlwH66vCv+lMiU2Y2ipjhGA75Vq2g1
	 AD/VR0OHQfoXXLDn7Cf9alnb6nl2aI6d7F1jHLp/y/PI08lyjcw7mT3URjcNDHYT8X
	 HGN50BcGjsVIXqwQJCu7szdlStWkgogWWpead94G3OqMxHI8XTibHJnEeJqLiNYkY5
	 VYsuobv00SC/3Kwh36rXCNyqvGd2bzSdDBq31snRUfaBdW1CYRCtptLi7YlGs0Cx7e
	 bJIFTdlbic0gQ==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	sstabellini@kernel.org,
	roger.pau@citrix.com,
	axboe@kernel.dk,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org
Subject: [PATCH AUTOSEL 5.10 20/21] xen/blkfront: Only check REQ_FUA for writes
Date: Wed, 31 May 2023 09:44:13 -0400
Message-Id: <20230531134415.3384458-20-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230531134415.3384458-1-sashal@kernel.org>
References: <20230531134415.3384458-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Ross Lagerwall <ross.lagerwall@citrix.com>

[ Upstream commit b6ebaa8100090092aa602530d7e8316816d0c98d ]

The existing code silently converts read operations with the
REQ_FUA bit set into write-barrier operations. This results in data
loss as the backend scribbles zeroes over the data instead of returning
it.

While the REQ_FUA bit doesn't make sense on a read operation, at least
one well-known out-of-tree kernel module does set it and since it
results in data loss, let's be safe here and only look at REQ_FUA for
writes.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20230426164005.2213139-1-ross.lagerwall@citrix.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/xen-blkfront.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 6f33d62331b1f..d68a8ca2161fb 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -792,7 +792,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		ring_req->u.rw.handle = info->handle;
 		ring_req->operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
+		if (req_op(req) == REQ_OP_FLUSH ||
+		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
 			/*
 			 * Ideally we can do an unordered flush-to-disk.
 			 * In case the backend onlysupports barriers, use that.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed May 31 13:45:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:45:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541816.844956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M8u-0003ww-1K; Wed, 31 May 2023 13:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541816.844956; Wed, 31 May 2023 13:45:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M8t-0003wp-V2; Wed, 31 May 2023 13:45:35 +0000
Received: by outflank-mailman (input) for mailman id 541816;
 Wed, 31 May 2023 13:45:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MZqJ=BU=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1q4M8t-0003ut-4z
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:45:35 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69ba1038-ffb9-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 15:45:33 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 5E04163B8A;
 Wed, 31 May 2023 13:45:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C77B2C4339B;
 Wed, 31 May 2023 13:45:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69ba1038-ffb9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685540731;
	bh=p/IgxmZZFrxBdPYVbroxVgHz0FMQYuzCL/FQBFsDPeg=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=gsP/PpgZHs3JyUBeO4WmELP+CM81s9Hiotlsj3drW6kgRnoApYPP4gwPRoGlxiToN
	 2HQ/VxH88YGyziOO7f5vP2d3jGDWpc9ZyHqXnGridTKFGsiwI3RmL1oUSPhIniFVOB
	 XW0CE3vz1QVXahSOtTRh98XWhpLruunBjXDrQytm1i/+XefEIXcnuojAxs951Cuqw3
	 GZJupS/PLGEAZhy4/UdY1OIQkLxxGMQYdUB+3tLgaCqyEd1aw9aK1C8mX6F3dtsEkT
	 WpbY+g9tk43wKb7AxxI9YjQDJonXqT9opxC7z+dRHFQ2ST34LgtY43QUVpRqzATJ3L
	 Ns6uxmx64+BRw==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	axboe@kernel.dk,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org
Subject: [PATCH AUTOSEL 5.4 16/17] xen/blkfront: Only check REQ_FUA for writes
Date: Wed, 31 May 2023 09:45:00 -0400
Message-Id: <20230531134502.3384828-16-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230531134502.3384828-1-sashal@kernel.org>
References: <20230531134502.3384828-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Ross Lagerwall <ross.lagerwall@citrix.com>

[ Upstream commit b6ebaa8100090092aa602530d7e8316816d0c98d ]

The existing code silently converts read operations with the
REQ_FUA bit set into write-barrier operations. This results in data
loss as the backend scribbles zeroes over the data instead of returning
it.

While the REQ_FUA bit doesn't make sense on a read operation, at least
one well-known out-of-tree kernel module does set it and since it
results in data loss, let's be safe here and only look at REQ_FUA for
writes.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20230426164005.2213139-1-ross.lagerwall@citrix.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/xen-blkfront.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index d0538c03f0332..da67621ebc212 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -779,7 +779,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		ring_req->u.rw.handle = info->handle;
 		ring_req->operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
+		if (req_op(req) == REQ_OP_FLUSH ||
+		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
 			/*
 			 * Ideally we can do an unordered flush-to-disk.
 			 * In case the backend onlysupports barriers, use that.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed May 31 13:46:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541819.844966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M9S-0004TY-9T; Wed, 31 May 2023 13:46:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541819.844966; Wed, 31 May 2023 13:46:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M9S-0004TR-6p; Wed, 31 May 2023 13:46:10 +0000
Received: by outflank-mailman (input) for mailman id 541819;
 Wed, 31 May 2023 13:46:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MZqJ=BU=kernel.org=sashal@srs-se1.protection.inumbo.net>)
 id 1q4M9R-0004TE-8f
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:46:09 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7de440b0-ffb9-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 15:46:07 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 2B3FC63AEF;
 Wed, 31 May 2023 13:46:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9EE64C4339C;
 Wed, 31 May 2023 13:46:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7de440b0-ffb9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685540765;
	bh=qe51Uh+xie1MnPHX/zdQ1DrgBZaG52fL+rEGpEKdbQQ=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=HKv9cA9JPJAJwhfbTm6jHu0QkID9eCk4IbY3AqU8Kh464GkUNJ31RD0E/0RIOcECc
	 j5BLUQuZEDkFKalDrd01JXDO8orqQ2gaK8HjL7c2+P/B0ImJi9rpbqXGGW/Wd3rl2Q
	 ZXJAoPnE5dsBMHHLOwaheR4mMJ7Csj17uWi7vvMvaqiJfzOQV/SlzW86zzQO6L0Ktd
	 87w/ck4WsEw+5N2B/CBhDzKfDyKBKWWE7NsXA/5lMIpNP4KfXqw6qHA0HU1DDIeKDc
	 7hW9A2GyHVLWPSTB1G/G/yKqdbPckCRN4MVG4zqT8A1fZM0Usl/TvAudp9nxuktwG2
	 PRmPLu8fbQFkw==
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	axboe@kernel.dk,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org
Subject: [PATCH AUTOSEL 4.19 13/13] xen/blkfront: Only check REQ_FUA for writes
Date: Wed, 31 May 2023 09:45:41 -0400
Message-Id: <20230531134541.3385043-13-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230531134541.3385043-1-sashal@kernel.org>
References: <20230531134541.3385043-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Ross Lagerwall <ross.lagerwall@citrix.com>

[ Upstream commit b6ebaa8100090092aa602530d7e8316816d0c98d ]

The existing code silently converts read operations with the
REQ_FUA bit set into write-barrier operations. This results in data
loss as the backend scribbles zeroes over the data instead of returning
it.

While the REQ_FUA bit doesn't make sense on a read operation, at least
one well-known out-of-tree kernel module does set it and since it
results in data loss, let's be safe here and only look at REQ_FUA for
writes.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20230426164005.2213139-1-ross.lagerwall@citrix.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/xen-blkfront.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 7ee618ab1567b..b4807d12ef29c 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -779,7 +779,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		ring_req->u.rw.handle = info->handle;
 		ring_req->operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
+		if (req_op(req) == REQ_OP_FLUSH ||
+		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
 			/*
 			 * Ideally we can do an unordered flush-to-disk.
 			 * In case the backend onlysupports barriers, use that.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed May 31 13:46:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 13:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541821.844977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M9a-0004oo-GZ; Wed, 31 May 2023 13:46:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541821.844977; Wed, 31 May 2023 13:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4M9a-0004oh-Da; Wed, 31 May 2023 13:46:18 +0000
Received: by outflank-mailman (input) for mailman id 541821;
 Wed, 31 May 2023 13:46:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EXUc=BU=citrix.com=prvs=508c705fb=roger.pau@srs-se1.protection.inumbo.net>)
 id 1q4M9Y-0004TE-V8
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 13:46:17 +0000
Received: from esa4.hc3370-68.iphmx.com (esa4.hc3370-68.iphmx.com
 [216.71.155.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 81c56718-ffb9-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 15:46:14 +0200 (CEST)
Received: from mail-dm6nam11lp2170.outbound.protection.outlook.com (HELO
 NAM11-DM6-obe.outbound.protection.outlook.com) ([104.47.57.170])
 by ob1.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES128-GCM-SHA256;
 31 May 2023 09:46:05 -0400
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com (2603:10b6:a03:38d::21)
 by SJ0PR03MB7051.namprd03.prod.outlook.com (2603:10b6:a03:4d7::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.19; Wed, 31 May
 2023 13:46:03 +0000
Received: from SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd]) by SJ0PR03MB6423.namprd03.prod.outlook.com
 ([fe80::192:6bdf:b105:64dd%3]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 13:46:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81c56718-ffb9-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685540774;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=UsOz6eHwm9u5f1bygqhZApqE9ooH5hTMjSD4zWvwnco=;
  b=VksHwadWTYB5vqGRYkmhC3lXo87zw/w2V+Qwf1a6aNUpetEo3pIfZmJt
   c/EB4tPG2SgTi5YARGU3kZfK7MMPlOh5M9NN/7q4/W3JOn7xaLn6Cy5BB
   3EzBe6qt+FHGivOCKKZi0tcMwp8QvtL8KKLdp+SiBazWXieVcMMAUluoT
   s=;
X-IronPort-RemoteIP: 104.47.57.170
X-IronPort-MID: 113564348
X-IronPort-Reputation: None
X-IronPort-Listener: OutboundMail
X-IronPort-SenderGroup: RELAY_O365
X-IronPort-MailFlowPolicy: $RELAYED
IronPort-Data: A9a23:E3YX06OQ2GqI00DvrR1rlsFynXyQoLVcMsEvi/4bfWQNrUokg2NUm
 GFJDD2FOPiIM2v2edB/YYvgoRsC756EzdVlSgto+SlhQUwRpJueD7x1DKtS0wC6dZSfER09v
 63yTvGacajYm1eF/k/F3oDJ9CU6jufQAOKnUoYoAwgpLSd8UiAtlBl/rOAwh49skLCRDhiE/
 Nj/uKUzAnf8s9JPGjxSs/rrRC9H5qyo42tF5QVmP5ingXeF/5UrJMNHTU2OByOQrrl8RoaSW
 +vFxbelyWLVlz9F5gSNy+uTnuUiG9Y+DCDW4pZkc/HKbitq/0Te5p0TJvsEAXq7vh3S9zxHJ
 HehgrTrIeshFvWkdO3wyHC0GQkmVUFN0OevzXRSLaV/ZqAJGpfh66wGMa04AWEX0sZIUWxn9
 NkHFDAQNhW9jMCI37SRUeY506zPLOGzVG8ekldJ6GiASNoDH9XESaiM4sJE1jAtgMwIBezZe
 8cSdTtoalLHfgFLPVAUTpk5mY9EhFGmK2Ee9A3T+PVxujeLpOBy+OGF3N79YNuFSN8Thk+Fj
 mnH4374ElcRM9n3JT+tqyv327GTxn+rMG4UPJ2a++5jvHiu/2EsCw80agXludmEuFHrDrqzL
 GRRoELCt5Ma9kamU938VB2Qu2Ofs1gXXN84O/037kSBx7TZ5y6dB3MYVXhRZdo+rsg0SDc2k
 FiTkLvBByFrsbCTYWKQ8PGTtzzaETMOMWYIaCsATA0Ey9ruuoc+ilTIVNkLOLGxps34H3f32
 T/ikcQlr7AajMpO26Dl+1nC2muovsKQEVZz4RjLVGW46A8/fJSie4Gj9Vnc67BHMZqdSV6C+
 nMDnqBy8dwzMH1ErwTVKM1lIV1jz6zt3OH06bK3I6Qcyg==
IronPort-HdrOrdr: A9a23:s98FPq0TkkzbMicZ62L5qQqjBLwkLtp133Aq2lEZdPU1SKClfq
 WV98jzuiWatN98Yh8dcLK7WJVoMEm8yXcd2+B4V9qftWLdyQiVxe9ZnO7f6gylNyri9vNMkY
 dMGpIObOEY1GIK7/rH3A==
X-Talos-CUID: 9a23:E3HsbmwfEUGcBvE3HC+sBgU0JZAoW3z400vQPhe9N21LdqKTY0SfrfY=
X-Talos-MUID: 9a23:OM/6UAsm6qPCPq1Lt82npBNvJM5UxpmXBX8Og7c55ZecFRFCJGLI
X-IronPort-AV: E=Sophos;i="6.00,207,1681185600"; 
   d="scan'208";a="113564348"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QmVV+SjZt+v6kp5fTFA7PuOxCaX34A7MMYaoJfL/yzzxQX+9cj2mbPMKSlqi5tT1TPqg9AmgHINxHGOy0VjsXOPkjIVqYvQtzRGAqH02xH58PyVDagZppMlR20KkbUEFQDluHyzxq6YDUbbrkysr+wpN8aDaanO7a8SqKA/4lyYFUBmOc74P/1fo3UzJcRvJD0Hb0XXsvHI7hdY+YLXKGgGfMMuD6aKYw0Wb8xNhK+hZzO+49c18yz//u7CAhxCZgAWOANwl+Y4GCQ0bBLYNrZtpL0776lwqFA7ZIRC3fMobqId8+2+jqxVua2vj91bzUJPuSEET5NxyZPaa6aQEhw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=UsOz6eHwm9u5f1bygqhZApqE9ooH5hTMjSD4zWvwnco=;
 b=gpYyXSszSzpHWQUC4U++zz5upCUxD9OXdgoW3OluDF1RRyo23vJS5aQyTRu6rE1EWCW5/fRJ6gY/yGyoUWm7JqUt/QKXjgI/N23XjT58834PW3A/3NKLeCFnll+JjbbxC5AzEv4SplH9KRIKxyj6Z3lrolYtkeU7puzJzrOrgoeFMAnQqRsBODiA4V+hYd6w3mVj/J6p4AyHP4FrSuhzaa4uoN7P4KL/nDZo+ktDckrCxn0NJr0NZ16hrqx3HFtyyaDhmk15PvwR7acMXj2nr5nbRse5J34yzTQ76bHa+GiuZhdRpHh6EM58qN9di8CBCDmcJlDkX/0uVfkz3THZ9Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UsOz6eHwm9u5f1bygqhZApqE9ooH5hTMjSD4zWvwnco=;
 b=tcgIszeGC9SXK/moHFtKQZYnTAimap5YXUQJnVunleFTErMokbEPeqmUnVnIHx77RFPLY2/J5iQGnCgkKK/+uDzJS6SEvKTpRW3NvGrpCUYymfXc+5EjvG3VEv7wBl461X8+/y2/SbIWkvSzNf0M5bUHg8APl2IqhrzfQy7mrVg=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=citrix.com;
Date: Wed, 31 May 2023 15:45:56 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH 1/2] vPCI: add test harness entry to ./MAINTAINERS
Message-ID: <ZHdPlJYzH2KX1ljG@Air-de-Roger>
References: <bb1bfe6c-eef6-5552-78e4-bdec8fd43561@suse.com>
 <87171ea8-16b0-6284-c4c0-bf0d74fcfc9b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <87171ea8-16b0-6284-c4c0-bf0d74fcfc9b@suse.com>
X-ClientProxiedBy: LO2P265CA0366.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a3::18) To SJ0PR03MB6423.namprd03.prod.outlook.com
 (2603:10b6:a03:38d::21)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6423:EE_|SJ0PR03MB7051:EE_
X-MS-Office365-Filtering-Correlation-Id: 505edf4c-2df3-4b38-0a79-08db61dd5ffe
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr,ExtFwd
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	riDxb4l/DbF+aeqvU2UJOKEI40pYPuFcBuP0i08Fk1CEsSoRM65G0vSVSe6OTslY1+GFAtDPlRxyNSnCll55qqp6p1tLfRakI2abNzadg3+RzOAuTj1QRMHjfE/O/6l7EUnF0/j/QDOavRaKMGs0a8RwZponViwsOfHfeseydN9WBy6vgY1QT8PrSGWfS1shcB2f6+aR0mgjdl5QB6JpydBjqpIs/LCwi1F9LYamZZ37FV2vX9msLkoMh/q37tkGzPFtITrgYzTngSwE309YEXn5mhIBA/5jbHJeoI0Gb7heyH8mFxEm3khaAyu5M8/T4IuoCX2z5Z5GCXqVCNKe/6fQH/EfxSmAxc1i7XoOyAMeBuZiqaKPTEPctta4bmnUN846KpE0mN26CstnDSRfcBckNXsBqBFZ1bLSCy1HVFAH4cbaVWM+6txpMl4xrLSa5c2uSZphtGrzYRvg3bOBBG31fdiMOisEhz3nGoqJHzbpvGu4R/vcgJGtziIv5XWi/yuJXYUA7UPUGVmwKxB1O7DIQezk6JGrsdTrmkWTH5O3apRetn4jGrXNBe8Gncly
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB6423.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(7916004)(4636009)(396003)(39860400002)(136003)(376002)(346002)(366004)(451199021)(6506007)(6666004)(6512007)(26005)(316002)(9686003)(82960400001)(66556008)(66476007)(66946007)(6916009)(4326008)(38100700002)(558084003)(5660300002)(8676002)(6486002)(33716001)(41300700001)(8936002)(107886003)(86362001)(478600001)(54906003)(186003)(2906002)(85182001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?SDE4d1g5eUh1ZHhFYzVhS3paUmsvYTZ6bzFzWmVnM3lHTHM1Sk0rbXNSVmJa?=
 =?utf-8?B?d1I2ZWJOenFldjlwaWVPUVBkY04yVnRUeHcyNEJjTnQyV1E2T080Y0xUdWhh?=
 =?utf-8?B?TU9SYm5CdGNkcTZPY21JcnMzalo3WklkekpKREh5dTQrNzRMOHUzbEh1cDNl?=
 =?utf-8?B?eDN1M0dUSnZjWHM5cW5Xc3Z4RTJvTS9uYm0zc21SazZXcTh2d2ZhQitxK2d6?=
 =?utf-8?B?d3pta0hFc21pUDVnRzM0RkVTcVhBWU04Y1U0NzNjRW5kN2ZPbUdIeHcxMTVh?=
 =?utf-8?B?VjB6L1daOGxuVXZkYUFLeFhiVldaWm5VYUptZlRRajRoanNXL284VUhyRzFF?=
 =?utf-8?B?NTRYYXdkQXl4Q3JUL2U5SUpaMSt2cVErc05UalJSdy9DVHB4L3FsUVNFOHYz?=
 =?utf-8?B?QXJBdUhvV2swbTE5cTIxb1RnWnZXZHd5RHQrWjFnbUdTdnNBV2NXUE9aWnJH?=
 =?utf-8?B?bmdkWS81alNhSnY1Tk82Q2VkRE5UcGFERjY0d1cvRUhKSndEQ0E5blVQVHpO?=
 =?utf-8?B?TFhpdFZ2Q0ppMDhabG9NaTVCTU5BZCtFcDd5N2lmTi9oZ2MrYjJyam1pb0t4?=
 =?utf-8?B?ckhvWVdPZ3RsLzN3M2JjdkpaTTFXSE9ST1lCWk9jeXpONk41cFF0MjE5MFJn?=
 =?utf-8?B?L05FTWd4RzZ0YmN2dTZYWFRlSU9vYkxMSUszdWtYT0hlS2RSYnhzN2pPTEhI?=
 =?utf-8?B?WDA3OHZ1M1p4U2pyMnNPWnE3NEZsaGQ4eWhyL0lnZUhtSU5IakZMYVhIVFlX?=
 =?utf-8?B?MVRMc29RY3FIYU9IWVlqcWxUV2dJaEU4cmpOZlhkdXNYSksvN3BjSCtRNktv?=
 =?utf-8?B?Q0FnV2pOMXRCS1U2ZHFIdlgwY2N6K1BDY0lISjZlMGxMU3FUclZHUExWbTlZ?=
 =?utf-8?B?bWVhK2d4aXFxV1ZraVAybVExT0I3Y240VTFSWUJVUUx5bXNDVVlaeFJDL01F?=
 =?utf-8?B?a0cvSDFyaVBoVjlXeU9Pb201a1JjdkRIa3ljWUEyNFdHV3JOQ0Zud3hrTFI4?=
 =?utf-8?B?N2xhRktaemJYdE9JLzZyTVVuNi9TMkVPMkV5R3QwcEhXQ2xyOTZoN3RuVm9a?=
 =?utf-8?B?akVvL2xBTnhQK1VYOFhWZjhGUGZkZkNXVGc3ekd4b1VJdk44bWZaeUZZejVC?=
 =?utf-8?B?bWp6UW84Z09JTW00NENBZnhiUkUvYVdiWDJBU3BRSWl6N3VoWXZFdTFJRkRP?=
 =?utf-8?B?S00vSlUwWFdTMWdsK1F0YUtxOUxwNVR4NjhEVXluSzJ6M3F2bWliZ0IreTVs?=

On Wed, May 31, 2023 at 03:19:31PM +0200, Jan Beulich wrote:
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 31 13:46:31 2023
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	axboe@kernel.dk,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org
Subject: [PATCH AUTOSEL 4.14 10/10] xen/blkfront: Only check REQ_FUA for writes
Date: Wed, 31 May 2023 09:46:06 -0400
Message-Id: <20230531134606.3385210-10-sashal@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230531134606.3385210-1-sashal@kernel.org>
References: <20230531134606.3385210-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Ross Lagerwall <ross.lagerwall@citrix.com>

[ Upstream commit b6ebaa8100090092aa602530d7e8316816d0c98d ]

The existing code silently converts read operations with the
REQ_FUA bit set into write-barrier operations. This results in data
loss as the backend scribbles zeroes over the data instead of returning
it.

While the REQ_FUA bit doesn't make sense on a read operation, at least
one well-known out-of-tree kernel module does set it and since it
results in data loss, let's be safe here and only look at REQ_FUA for
writes.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20230426164005.2213139-1-ross.lagerwall@citrix.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/block/xen-blkfront.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index cd58f582c50c1..b649f1a68b417 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		ring_req->u.rw.handle = info->handle;
 		ring_req->operation = rq_data_dir(req) ?
 			BLKIF_OP_WRITE : BLKIF_OP_READ;
-		if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
+		if (req_op(req) == REQ_OP_FLUSH ||
+		    (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
 			/*
 			 * Ideally we can do an unordered flush-to-disk.
 			 * In case the backend onlysupports barriers, use that.
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed May 31 13:52:07 2023
Date: Wed, 31 May 2023 15:51:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: Re: [PATCH 2/2] vPCI: fix test harness build
Message-ID: <ZHdQ6Qw8vQIZ+s3c@Air-de-Roger>
References: <bb1bfe6c-eef6-5552-78e4-bdec8fd43561@suse.com>
 <838c09e5-97ce-2d23-9d42-645a8925217d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <838c09e5-97ce-2d23-9d42-645a8925217d@suse.com>
MIME-Version: 1.0

On Wed, May 31, 2023 at 03:19:56PM +0200, Jan Beulich wrote:
> The earlier commit introduced two uses of is_hardware_domain().
> 
> Fixes: 465217b0f872 ("vPCI: account for hidden devices")
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

We do rely on the compiler always removing the call to
pci_get_pdev(dom_xen, ...); otherwise, AFAICT, that would also trigger
an error, as there's no definition of dom_xen in this scope.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 31 13:59:12 2023
Message-ID: <c4e8d060-deb5-bce9-cb65-cd0dc9ed7735@amd.com>
Date: Wed, 31 May 2023 08:58:43 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [patch] x86/smpboot: Fix the parallel bringup decision
Content-Language: en-US
To: Thomas Gleixner <tglx@linutronix.de>,
 Sean Christopherson <seanjc@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>,
 LKML <linux-kernel@vger.kernel.org>, x86@kernel.org,
 David Woodhouse <dwmw2@infradead.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Brian Gerst <brgerst@gmail.com>,
 Arjan van de Veen <arjan@linux.intel.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Paul McKenney <paulmck@kernel.org>,
 Oleksandr Natalenko <oleksandr@natalenko.name>,
 Paul Menzel <pmenzel@molgen.mpg.de>,
 "Guilherme G. Piccoli" <gpiccoli@igalia.com>,
 Piotr Gorski <lucjan.lucjanov@gmail.com>,
 Usama Arif <usama.arif@bytedance.com>, Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Russell King <linux@armlinux.org.uk>,
 Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
 Guo Ren <guoren@kernel.org>, linux-csky@vger.kernel.org,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>, linux-mips@vger.kernel.org,
 "James E.J. Bottomley" <James.Bottomley@hansenpartnership.com>,
 Helge Deller <deller@gmx.de>, linux-parisc@vger.kernel.org,
 Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt
 <palmer@dabbelt.com>, linux-riscv@lists.infradead.org,
 Mark Rutland <mark.rutland@arm.com>, Sabin Rapan <sabrapan@amazon.com>,
 "Michael Kelley (LINUX)" <mikelley@microsoft.com>,
 Dave Hansen <dave.hansen@linux.intel.com>
References: <87sfbhlwp9.ffs@tglx>
 <20230529023939.mc2akptpxcg3eh2f@box.shutemov.name> <87bki3kkfi.ffs@tglx>
 <20230529203129.sthnhzgds7ynddxd@box.shutemov.name>
 <20230530005428.jyrc2ezx5raohlrt@box.shutemov.name> <87mt1mjhk3.ffs@tglx>
 <87jzwqjeey.ffs@tglx> <87cz2ija1e.ffs@tglx>
 <20230530122951.2wu5rwcu26ofov6f@box.shutemov.name> <87wn0pizbl.ffs@tglx>
 <ZHYqwsCURnrFdsVm@google.com> <87leh5iom8.ffs@tglx>
 <8751e955-e975-c6d4-630c-02912b9ef9da@amd.com> <871qiximen.ffs@tglx>
 <b6323987-059e-5396-20b9-8b6a1687e289@amd.com> <87ilc9gd2d.ffs@tglx>
From: Tom Lendacky <thomas.lendacky@amd.com>
In-Reply-To: <87ilc9gd2d.ffs@tglx>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SN6PR08CA0033.namprd08.prod.outlook.com
 (2603:10b6:805:66::46) To DM4PR12MB5229.namprd12.prod.outlook.com
 (2603:10b6:5:398::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: DM4PR12MB5229:EE_|CH3PR12MB9171:EE_
X-MS-Office365-Filtering-Correlation-Id: 7e9cb5d1-b8b6-422e-33eb-08db61df2755
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fzAmz9SFY2eJh6vMKmZxjFl0l64ppKu3/HM7wuk9+Rgg4pPEP1A8sYGCrSE6+jBmzowRfVDiClRSxCiYLN/cdMAGsCT0s6X+08EPYRm6bETMwmfYT4amLYITyeb0wpxgwBiLILYk+FMBkfeF8gb+OowQ91NNqiaXNscwPt3u9beogaKyxu5WXPJe0dWCKRNc5hHWZc3TnHWqpkeeyEWKx2v6CdRxagfTp7xtRdCxAiHo8LolmWL9E7G/WY5GiLszoEszQH+Wfh2bg9F4kqFDYQdqpcTpRAooePWwBni5ce/0YbXNYTjf2ozVQo/x+efpjaSd+8CAvt1kOYQz8oEwob8JAXc8Ro7c/pTKcVGZZI3mXPv8Ed4oWUzUCKu8/Duu/voPeQL3ydtE3aFg2IxdCIwWpVSCQ6Du/L1oM9uVAKbC7iICVq5H6nZuALVyBqQrJQHRcyKNhbmWiSnrVDK6C2REIUfsDLBv6+C3AlQ94v5+pK4wiWJrm7tIVTSivagLb3qIiXNZYK6Nd7VL2lgSrUtZWtvMllAAqkqVIuuKJCcteqqjqxBDjFAJ0xxCZCdQZoR/aVddFQsuSMjo9PRSU115U1lJnUjyiju0BMlRw9/H1Hhq7kQQ+IDQbFQizQxnPKIZa/U5RsrjeTbvZtOeXA==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM4PR12MB5229.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(366004)(376002)(396003)(346002)(136003)(39860400002)(451199021)(6486002)(6666004)(478600001)(186003)(36756003)(53546011)(26005)(6506007)(6512007)(83380400001)(2616005)(38100700002)(86362001)(31696002)(4326008)(316002)(66556008)(66476007)(66946007)(41300700001)(2906002)(7406005)(7416002)(31686004)(5660300002)(110136005)(8936002)(8676002)(54906003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?TGNoenZtcWtSa0Fnd3R6U3ExQkRaNzF5LzRFeCs5aFllTy9rUmVNRVJRbG5G?=
 =?utf-8?B?VG41TGlhd2hDNDB5OWNXVE9WTGFwVTNJaVp4NmJXTGZwMnhmV1VpeG9vR1VZ?=
 =?utf-8?B?RXVvMkNDRHZkNUFvelJzZ1ZlYmZaM2dsbzZVb3gwK2toNDF1NEh2UFRRQ0lm?=
 =?utf-8?B?TFZvWUFUU1FlNC93U2hWUVdvNjZpdENqYmxjZW41NGFqSFpWQ0NyME5relNN?=
 =?utf-8?B?RVpPbmh6M215V2FRNUk5c2EyTE9RcUZpUE5tSkowaElPTkxQcHNQZ0d1Zjg5?=
 =?utf-8?B?S2VXRmloYlVheFg2aVVkeXIrNGNnbjRlUFdibDZiUVFweGFjRU1uYVk5eFBO?=
 =?utf-8?B?SUdJakVsb0wyK2VGVFhicDgwRWRKa2tjUUZlSmRKQm9GTjhHSm0zOWhtNDZP?=
 =?utf-8?B?eEdLdXdKNVB4RU42dEhLOGJ4dmZaaDZZR3ZSbjhyU2lUNXBhSkVCMGlGM3d3?=
 =?utf-8?B?anpwTkJ5ZkVocWV1RVQ3a2xtMyt1eS9RZ2ZnR2xoWTJZWTMvMW1EUkJmVGdw?=
 =?utf-8?B?UGxXVjlKKzdjVHUzZ3VHTTdpYXFqMmt3ZXlDaUVBczIvSkMzRDJBWUNBTDE0?=
 =?utf-8?B?dWVaZGRwcmhad0VJUXJzZ29WaWhhTkhwdFdVakVLZ0ZvUmNIOU1pNmJ3aStC?=
 =?utf-8?B?Y3NyRU9sUXN1TnhDbWpBUXBnUkhiSWdGUXBucEUzdkhROGliTnlzZ3ZHQnNS?=
 =?utf-8?B?cEhrR0lyV2QydmczVkd2UzY5VmFkMjhKbFVlS1pFaS91TmNORkFrZ1g3MDlm?=
 =?utf-8?B?cWVZbVVGK053dDhLVTZVbVJwUFVOQnlHOXJxZHRyQ2lhTnh1REpzU2pqWUpz?=
 =?utf-8?B?bUZQQkNRRnpucjVuZE4wS1phM09IejlUVllrcUFrckRVMG11RjBSTVhMZmsv?=
 =?utf-8?B?RjJMbHhYY0ZRMldpeXREZEo3Tm5zWjhYZjlmTGZKNXNkOTFmUkpoYm03Ykxh?=
 =?utf-8?B?aXN1U3BWQnhJd09CZEkxNDlQZTB1ZW5TSElaaCtPVk80T0tiUktEY2gxbXUw?=
 =?utf-8?B?ZVMrSm9DRWFjSE9xeG5nZk1XVHplUjRtU1lkaUhPSGtCUEpwY1BRcXdYRTJ5?=
 =?utf-8?B?V3kvZXdmRG5lZTI4V0hjd1lzMTJZbUxWa3lRSGpnbnNsNXhncFVnRnJrSnRk?=
 =?utf-8?B?VC9LUW9Pb0NyTW1ZNWtVS1JNRnJRWkFiZUhKenBIQXJvQVB5SGpYWUlOckR6?=
 =?utf-8?B?aVFOV2N5ek84RVphNXI2aTU2MFZRL2p6QUpjSG1iajM2RjFPcHlSdktsRjN5?=
 =?utf-8?B?azlndWkreDlFUkpkSFZaWlp3UlFFOWhYV1l2ZUpUR24vMUlPb2tOZ01HUU40?=
 =?utf-8?B?akptYjh2RkRBOXVKazU5dWpIM1lTTjRmU1lPUG9jcmpWdVpKbitiMG1PY3Ir?=
 =?utf-8?B?TzFMWExnS2tJT3dBdk5xN0YzMjVSd3VNdGx2VnVuS2lhM0IzdkpSWkhib2VI?=
 =?utf-8?B?cUFpcDBXK3pNY0NEd0tLeTQ5SmJRcU5JeHR4WjNxT1RhaUZib3JzZ3FQdGtv?=
 =?utf-8?B?Wis4VUVCVTlIOStqRUxna1JtcVplMTRHZm9PeU0yRmI1OHY0cDR6Vk9OSXpE?=
 =?utf-8?B?U0FlcE5yWHpabEtnSVRhRHJuMUF5RTlVMXJaazBZS081SjVKb0hxWVkrVjNa?=
 =?utf-8?B?KzRYcFV3aHoxMnM3WUtYb0ticTM1UVM1SGFwQVN3M2N1S1pkMzZCQXU3Wmpu?=
 =?utf-8?B?V3E0V0paZmxRd2dWa05xOEZWVDMzUGJDOXZ6ZjNIdnFmQzBranY2VWU1bnZU?=
 =?utf-8?B?cVA3d0NxQVZLU0tRUEMzbzArQUxXL0UyWlFldnArTktmRi9iYmVwdjB5ZENR?=
 =?utf-8?B?QWlaWDVFbEFNekhZWVBJcE9XZER2VmpudUR5UkQ3VkdFeEJHdk5CbTdDbFRL?=
 =?utf-8?B?L0VKMFh1b05ObkRqNWNvYUpxN1V5SkZPSVhFN2Z0bzVXeDRsUXVqcmJsNWk3?=
 =?utf-8?B?K0ZHWlZOUVRtdStHOFJBMDhSUGU1VUdnZ1FWQURRM09ocVJKNXFMRVBlUk1J?=
 =?utf-8?B?dWpXSmFEaWRqYnBkbkp6d0J6d0VrRzRobzBWQnpoTGJOQ1VxYndYYWNsRlNx?=
 =?utf-8?B?ZHJhbmExVFNoVXhJUEhqUlpZYUswMjd2dTB3MDlhV25OWDdnQUlnMUlGTHdY?=
 =?utf-8?Q?N80BHVCoJMzH+lm69lWrGA3GD?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e9cb5d1-b8b6-422e-33eb-08db61df2755
X-MS-Exchange-CrossTenant-AuthSource: DM4PR12MB5229.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 13:58:47.9896
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB9171

On 5/31/23 02:44, Thomas Gleixner wrote:
> The decision to allow parallel bringup of secondary CPUs checks
> CC_ATTR_GUEST_STATE_ENCRYPT to detect encrypted guests. Those cannot use
> parallel bootup because accessing the local APIC is intercepted and raises
> a #VC or #VE, which cannot be handled at that point.
> 
> The check works correctly, but only for AMD encrypted guests. TDX does not
> set that flag.
> 
> As there is no real connection between CC attributes and the inability to
> support parallel bringup, replace this with a generic control flag in
> x86_cpuinit and let SEV-ES and TDX init code disable it.
> 
> Fixes: 0c7ffa32dbd6 ("x86/smpboot/64: Implement arch_cpuhp_init_parallel_bringup() and enable it")
> Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Still works for SEV-ES/SEV-SNP with parallel boot properly disabled.

Tested-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
>   arch/x86/coco/tdx/tdx.c         |   11 +++++++++++
>   arch/x86/include/asm/x86_init.h |    3 +++
>   arch/x86/kernel/smpboot.c       |   19 ++-----------------
>   arch/x86/kernel/x86_init.c      |    1 +
>   arch/x86/mm/mem_encrypt_amd.c   |   15 +++++++++++++++
>   5 files changed, 32 insertions(+), 17 deletions(-)
> 
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -871,5 +871,16 @@ void __init tdx_early_init(void)
>   	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
>   	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
>   
> +	/*
> +	 * TDX intercepts the RDMSR to read the X2APIC ID in the parallel
> +	 * bringup low level code. That raises #VE which cannot be handled
> +	 * there.
> +	 *
> +	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
> +	 * implemented separately in the low level startup ASM code.
> +	 * Until that is in place, disable parallel bringup for TDX.
> +	 */
> +	x86_cpuinit.parallel_bringup = false;
> +
>   	pr_info("Guest detected\n");
>   }
> --- a/arch/x86/include/asm/x86_init.h
> +++ b/arch/x86/include/asm/x86_init.h
> @@ -177,11 +177,14 @@ struct x86_init_ops {
>    * struct x86_cpuinit_ops - platform specific cpu hotplug setups
>    * @setup_percpu_clockev:	set up the per cpu clock event device
>    * @early_percpu_clock_init:	early init of the per cpu clock event device
> + * @fixup_cpu_id:		fixup function for cpuinfo_x86::phys_proc_id
> + * @parallel_bringup:		Parallel bringup control
>    */
>   struct x86_cpuinit_ops {
>   	void (*setup_percpu_clockev)(void);
>   	void (*early_percpu_clock_init)(void);
>   	void (*fixup_cpu_id)(struct cpuinfo_x86 *c, int node);
> +	bool parallel_bringup;
>   };
>   
>   struct timespec64;
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1267,23 +1267,8 @@ void __init smp_prepare_cpus_common(void
>   /* Establish whether parallel bringup can be supported. */
>   bool __init arch_cpuhp_init_parallel_bringup(void)
>   {
> -	/*
> -	 * Encrypted guests require special handling. They enforce X2APIC
> -	 * mode but the RDMSR to read the APIC ID is intercepted and raises
> -	 * #VC or #VE which cannot be handled in the early startup code.
> -	 *
> -	 * AMD-SEV does not provide a RDMSR GHCB protocol so the early
> -	 * startup code cannot directly communicate with the secure
> -	 * firmware. The alternative solution to retrieve the APIC ID via
> -	 * CPUID(0xb), which is covered by the GHCB protocol, is not viable
> -	 * either because there is no enforcement of the CPUID(0xb)
> -	 * provided "initial" APIC ID to be the same as the real APIC ID.
> -	 *
> -	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
> -	 * implemented seperately in the low level startup ASM code.
> -	 */
> -	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
> -		pr_info("Parallel CPU startup disabled due to guest state encryption\n");
> +	if (!x86_cpuinit.parallel_bringup) {
> +		pr_info("Parallel CPU startup disabled by the platform\n");
>   		return false;
>   	}
>   
> --- a/arch/x86/kernel/x86_init.c
> +++ b/arch/x86/kernel/x86_init.c
> @@ -126,6 +126,7 @@ struct x86_init_ops x86_init __initdata
>   struct x86_cpuinit_ops x86_cpuinit = {
>   	.early_percpu_clock_init	= x86_init_noop,
>   	.setup_percpu_clockev		= setup_secondary_APIC_clock,
> +	.parallel_bringup		= true,
>   };
>   
>   static void default_nmi_init(void) { };
> --- a/arch/x86/mm/mem_encrypt_amd.c
> +++ b/arch/x86/mm/mem_encrypt_amd.c
> @@ -501,6 +501,21 @@ void __init sme_early_init(void)
>   	x86_platform.guest.enc_status_change_finish  = amd_enc_status_change_finish;
>   	x86_platform.guest.enc_tlb_flush_required    = amd_enc_tlb_flush_required;
>   	x86_platform.guest.enc_cache_flush_required  = amd_enc_cache_flush_required;
> +
> +	/*
> +	 * AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the
> +	 * parallel bringup low level code. That raises #VC which cannot be
> +	 * handled there.
> +	 * It does not provide a RDMSR GHCB protocol so the early startup
> +	 * code cannot directly communicate with the secure firmware. The
> +	 * alternative solution to retrieve the APIC ID via CPUID(0xb),
> +	 * which is covered by the GHCB protocol, is not viable either
> +	 * because there is no enforcement of the CPUID(0xb) provided
> +	 * "initial" APIC ID to be the same as the real APIC ID.
> +	 * Disable parallel bootup.
> +	 */
> +	if (sev_status & MSR_AMD64_SEV_ES_ENABLED)
> +		x86_cpuinit.parallel_bringup = false;
>   }
>   
>   void __init mem_encrypt_free_decrypted_mem(void)
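
The pattern the patch introduces — a capability flag in a platform-ops struct that defaults to enabled and is cleared by platform init code — can be sketched in plain C. All names below are illustrative stand-ins for the kernel's x86_cpuinit machinery, not the actual definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for struct x86_cpuinit_ops: capabilities
 * default to enabled, and platform-specific init code opts out. */
struct cpuinit_ops {
    bool parallel_bringup;
};

static struct cpuinit_ops cpuinit = {
    .parallel_bringup = true,   /* default: parallel bringup allowed */
};

/* Platform init path (e.g. a TDX or SEV-ES detection hook) clears the
 * flag; generic code never needs to know the platform-specific reason. */
static void encrypted_guest_early_init(void)
{
    cpuinit.parallel_bringup = false;
}

/* Generic code only consults the flag, mirroring the reworked
 * arch_cpuhp_init_parallel_bringup(). */
static bool init_parallel_bringup(void)
{
    if (!cpuinit.parallel_bringup) {
        printf("Parallel CPU startup disabled by the platform\n");
        return false;
    }
    return true;
}
```

The design choice is the same one the commit message argues for: generic code tests a generic control knob, and the knowledge of *why* parallel bringup is unsafe stays in the SEV-ES/TDX init paths that own it.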


From xen-devel-bounces@lists.xenproject.org Wed May 31 14:03:02 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 14:03:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541842.845017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4MPf-0000yw-SB; Wed, 31 May 2023 14:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541842.845017; Wed, 31 May 2023 14:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4MPf-0000yp-Om; Wed, 31 May 2023 14:02:55 +0000
Received: by outflank-mailman (input) for mailman id 541842;
 Wed, 31 May 2023 14:02:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VTAn=BU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q4MPe-0000yd-7X
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 14:02:54 +0000
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2061b.outbound.protection.outlook.com
 [2a01:111:f400:fe1a::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d4dd29a4-ffbb-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 16:02:51 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by AM8PR04MB7953.eurprd04.prod.outlook.com (2603:10a6:20b:246::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.24; Wed, 31 May
 2023 14:02:48 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 14:02:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4dd29a4-ffbb-11ed-8611-37d641c3527e
Message-ID: <d9dcc7a9-066a-58ed-0404-c787aea68325@suse.com>
Date: Wed, 31 May 2023 16:02:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH 2/2] vPCI: fix test harness build
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>
References: <bb1bfe6c-eef6-5552-78e4-bdec8fd43561@suse.com>
 <838c09e5-97ce-2d23-9d42-645a8925217d@suse.com>
 <ZHdQ6Qw8vQIZ+s3c@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHdQ6Qw8vQIZ+s3c@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR2P281CA0177.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:9f::13) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ee9f9e6d-e83b-4554-792f-08db61dfb6da
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 14:02:47.8890
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7953

On 31.05.2023 15:51, Roger Pau Monné wrote:
> On Wed, May 31, 2023 at 03:19:56PM +0200, Jan Beulich wrote:
>> The earlier commit introduced two uses of is_hardware_domain().
>>
>> Fixes: 465217b0f872 ("vPCI: account for hidden devices")
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> We do rely on the compiler always removing the call to
> pci_get_pdev(dom_xen, ...) or AFAICT that would also trigger an error
> as there's no definition of dom_xen in this scope.

Not really, no. The stub macro itself discards all its arguments:

#define pci_get_pdev(...) (&test_pdev)
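
A self-contained sketch of why this compiles (hypothetical stand-in types, not the harness's real ones): because macro expansion happens before semantic analysis, an argument passed to a variadic macro that discards its parameters never has to name a declared identifier.

```c
#include <assert.h>

/* Stand-in for the test harness's stubbed device. */
struct pdev { int seg; };
static struct pdev test_pdev = { .seg = 1 };

/* The stub macro swallows all of its arguments. */
#define pci_get_pdev(...) (&test_pdev)

static int lookup_seg(void)
{
    /* "dom_xen" is never declared anywhere in this file; the
     * preprocessor discards it before the compiler proper sees it. */
    return pci_get_pdev(dom_xen, 0, 0)->seg;
}
```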

Jan



From xen-devel-bounces@lists.xenproject.org Wed May 31 14:16:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 14:16:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541846.845027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Mcr-0002WK-3b; Wed, 31 May 2023 14:16:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541846.845027; Wed, 31 May 2023 14:16:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Mcq-0002WD-Vc; Wed, 31 May 2023 14:16:32 +0000
Received: by outflank-mailman (input) for mailman id 541846;
 Wed, 31 May 2023 14:16:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VTAn=BU=suse.com=JBeulich@srs-se1.protection.inumbo.net>)
 id 1q4Mcp-0002W5-HC
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 14:16:31 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 (mail-db3eur04on0626.outbound.protection.outlook.com
 [2a01:111:f400:fe0c::626])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bccec82e-ffbd-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 16:16:30 +0200 (CEST)
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com (2603:10a6:803:122::25)
 by PA4PR04MB7901.eurprd04.prod.outlook.com (2603:10a6:102:ca::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.24; Wed, 31 May
 2023 14:16:28 +0000
Received: from VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c]) by VE1PR04MB6560.eurprd04.prod.outlook.com
 ([fe80::e442:306f:7711:e24c%5]) with mapi id 15.20.6433.024; Wed, 31 May 2023
 14:16:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bccec82e-ffbd-11ed-b231-6b7b168915f2
Message-ID: <699303a3-34a2-8a65-a33e-fa9c09a385d8@suse.com>
Date: Wed, 31 May 2023 16:16:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Subject: Re: [PATCH v2 3/3] multiboot2: do not set StdOut mode unconditionally
Content-Language: en-US
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20230331095946.45024-1-roger.pau@citrix.com>
 <20230331095946.45024-4-roger.pau@citrix.com>
 <b9bd819d-93ad-d511-4602-8e3f4f515546@suse.com>
 <ZHcoCcd5nugmWURI@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZHcoCcd5nugmWURI@Air-de-Roger>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0178.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:b4::6) To VE1PR04MB6560.eurprd04.prod.outlook.com
 (2603:10a6:803:122::25)
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: af398142-78e1-4f42-2010-08db61e19ffb
X-MS-Exchange-CrossTenant-AuthSource: VE1PR04MB6560.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 14:16:28.4399
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR04MB7901

On 31.05.2023 12:57, Roger Pau Monné wrote:
> On Wed, Apr 05, 2023 at 12:36:55PM +0200, Jan Beulich wrote:
>> On 31.03.2023 11:59, Roger Pau Monne wrote:
>>> @@ -887,6 +881,15 @@ void __init efi_multiboot2(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable
>>>  
>>>          efi_arch_edid(gop_handle);
>>>      }
>>> +    else
>>> +    {
>>> +        /* If no GOP, init ConOut (StdOut) to the max supported size. */
>>> +        efi_console_set_mode();
>>> +
>>> +        if ( StdOut->QueryMode(StdOut, StdOut->Mode->Mode,
>>> +                               &cols, &rows) == EFI_SUCCESS )
>>> +            efi_arch_console_init(cols, rows);
>>> +    }
>>
>> Instead of making this an "else", wouldn't you better check that a
>> valid gop_mode was found? efi_find_gop_mode() can return ~0 after all.
> 
> When using vga=current gop_mode would also be ~0, in order for
> efi_set_gop_mode() to not change the current mode,

And then we'd skip efi_console_set_mode() here as well, which I think
is what we want with "vga=current"?

> I was trying to
> avoid exposing keep_current or similar extra variable to signal this.
> 
>> Furthermore, what if the active mode doesn't support text output? (I
>> consider the spec unclear in regard to whether this is possible, but
>> maybe I simply didn't find the right place stating it.)
>>
>> Finally I think efi_arch_console_init() wants calling nevertheless.
>>
>> So altogether maybe
>>
>>     if ( gop_mode == ~0 ||
>>          StdOut->QueryMode(StdOut, StdOut->Mode->Mode,
>>                            &cols, &rows) != EFI_SUCCESS )
> 
> I think it would make more sense to call efi_console_set_mode() only
> if the current StdOut mode is not valid, as anything different from
> vga=current will already force a GOP mode change.

Hmm, this may also make sense. I guess I'd like to see the combined
result to be better able to judge.

Jan
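
The control flow being debated can be sketched with stubbed-out types (UINTN, the status variable, and the function names here are stand-ins, not the real UEFI or Xen interfaces; ~0 is the sentinel gop_mode meaning "no mode chosen" or vga=current):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned long UINTN;
#define EFI_SUCCESS 0

static UINTN gop_mode = ~(UINTN)0;  /* set by a hypothetical efi_find_gop_mode() */
static int query_mode_status;       /* stub for the StdOut->QueryMode() result */
static bool console_mode_was_set;

static void efi_console_set_mode(void) { console_mode_was_set = true; }

/* Shape of Jan's suggestion: only force a console mode when no GOP
 * mode was chosen, or when the current StdOut mode cannot be queried. */
static void console_init(void)
{
    console_mode_was_set = false;
    if (gop_mode == ~(UINTN)0 || query_mode_status != EFI_SUCCESS)
        efi_console_set_mode();
}
```

Roger's counter-proposal would instead key only on whether the current StdOut mode is valid, since any vga= setting other than "current" already forces a GOP mode change.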


From xen-devel-bounces@lists.xenproject.org Wed May 31 14:20:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 14:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541852.845042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4MgQ-0003yP-Ix; Wed, 31 May 2023 14:20:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541852.845042; Wed, 31 May 2023 14:20:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4MgQ-0003yI-Fx; Wed, 31 May 2023 14:20:14 +0000
Received: by outflank-mailman (input) for mailman id 541852;
 Wed, 31 May 2023 14:20:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0BFG=BU=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1q4MgO-0003yC-JT
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 14:20:12 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.220.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 401d08ec-ffbe-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 16:20:10 +0200 (CEST)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id EFBC51FD72;
 Wed, 31 May 2023 14:20:09 +0000 (UTC)
Received: from imap2.suse-dmz.suse.de (imap2.suse-dmz.suse.de [192.168.254.74])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature ECDSA (P-521) server-digest SHA512)
 (No client certificate requested)
 by imap2.suse-dmz.suse.de (Postfix) with ESMTPS id 60097138E8;
 Wed, 31 May 2023 14:20:09 +0000 (UTC)
Received: from dovecot-director2.suse.de ([192.168.254.65])
 by imap2.suse-dmz.suse.de with ESMTPSA id W1DIFZlXd2TPJAAAMHmgww
 (envelope-from <jgross@suse.com>); Wed, 31 May 2023 14:20:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 401d08ec-ffbe-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1685542809; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=+v7YPVGfZvtKAdTq6nXJd4vhbr1CCQZIq7rqtmuqxmI=;
	b=I4YbHP2puHb/OuSciSSLm7fdzNJSh2DgLYy5x1HvCYGy3h7SPbwdswoFPjO4OUYVeWco4M
	w/Oo5uNjF19U2Qq1I4MR+RyDxRe10GdteGuAFDfYZXVJ7eyh/9BqzjsnjzegqbmuzF710d
	GABiNaaKpgQQ3qsMKLd/zP4tzxiEcMs=
Message-ID: <efe79c9e-1e31-adb9-8f93-962249bf01bb@suse.com>
Date: Wed, 31 May 2023 16:20:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.11.0
Content-Language: en-US
To: Borislav Petkov <bp@alien8.de>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
 linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
 mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Dave Hansen <dave.hansen@linux.intel.com>,
 "H. Peter Anvin" <hpa@zytor.com>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
 Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
References: <20230502120931.20719-1-jgross@suse.com>
 <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
 <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
 <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
 <888f860d-4307-54eb-01da-11f9adf65559@suse.com>
 <20230531083508.GAZHcGvB68PUAH7f+a@fat_crate.local>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
In-Reply-To: <20230531083508.GAZHcGvB68PUAH7f+a@fat_crate.local>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------xd0kCNFJ7htiPbZrxqZVh5R5"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------xd0kCNFJ7htiPbZrxqZVh5R5
Content-Type: multipart/mixed; boundary="------------uMCsnvuhx47XwqUu0VjOI6Uj";
 protected-headers="v1"

--------------uMCsnvuhx47XwqUu0VjOI6Uj
Content-Type: multipart/mixed; boundary="------------A7l2ew5FBk47frMXXX4FPWJr"

--------------A7l2ew5FBk47frMXXX4FPWJr
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 31.05.23 10:35, Borislav Petkov wrote:
> [    0.018357] MTRR default type: uncachable
> [    0.022347] MTRR fixed ranges enabled:
> [    0.026085]   00000-9FFFF write-back
> [    0.029650]   A0000-BFFFF uncachable
> [    0.033214]   C0000-FFFFF write-protect
> [    0.037039] MTRR variable ranges enabled:
> [    0.041038]   0 base 000000000000000 mask 0003FFC00000000 write-back
> [    0.047383]   1 base 000000400000000 mask 0003FFFC0000000 write-back
> [    0.053730]   2 base 000000440000000 mask 0003FFFF0000000 write-back
> [    0.060076]   3 base 0000000AE000000 mask 0003FFFFE000000 uncachable
> [    0.066421]   4 base 0000000B0000000 mask 0003FFFF0000000 uncachable
> [    0.072768]   5 base 0000000C0000000 mask 0003FFFC0000000 uncachable
> [    0.079114]   6 disabled
> [    0.081635]   7 disabled
> [    0.084156]   8 disabled
> [    0.086677]   9 disabled
> [    0.089203] total RAM covered: 16352M
> [    0.093023] Found optimal setting for mtrr clean up
> [    0.097734]  gran_size: 64K 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

One other note: why does mtrr_cleanup() think that using 8 instead of 6
variable MTRRs would be an "optimal setting"?

IMO it should replace the original setup only in case it is using _less_
MTRRs than before.

Additionally I believe mtrr_cleanup() would make much more sense if it
wouldn't be __init, but being usable when trying to add additional MTRRs
in the running system in case we run out of MTRRs.

It should probably be based on the new MTRR map anyway...


Juergen
--------------A7l2ew5FBk47frMXXX4FPWJr--

--------------uMCsnvuhx47XwqUu0VjOI6Uj--

--------------xd0kCNFJ7htiPbZrxqZVh5R5--


From xen-devel-bounces@lists.xenproject.org Wed May 31 14:57:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 14:57:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541859.845059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4NFv-0007Ve-Ju; Wed, 31 May 2023 14:56:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541859.845059; Wed, 31 May 2023 14:56:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4NFv-0007VX-Gx; Wed, 31 May 2023 14:56:55 +0000
Received: by outflank-mailman (input) for mailman id 541859;
 Wed, 31 May 2023 14:56:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wPyO=BU=tibco.com=gdunlap@srs-se1.protection.inumbo.net>)
 id 1q4NFu-0007VR-2v
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 14:56:54 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 60be9bd9-ffc3-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 16:56:52 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2af2d092d7aso64140131fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 07:56:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60be9bd9-ffc3-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685545012; x=1688137012;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=POFEumE0VQ1zyBVsKvGoI13EBzkAX3ZgfTRWEw89yEI=;
        b=iKNI1pNxcu4qdAXBymnj4nI24pNf1JRAAeExZ6wXhLfClJhU21trw6ss+ZZQkLojwA
         vRNho8ZXK7Li20CQlFCO3zjt8UNQ99qjGNYF4TEGLkY3i/7ahC3tvBOrgcz+OL3gjycT
         GrrkoaAi5ADK1/lxceXMi4peyr2gWChwj7QqU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685545012; x=1688137012;
        h=to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=POFEumE0VQ1zyBVsKvGoI13EBzkAX3ZgfTRWEw89yEI=;
        b=c+wV6FGbPBGEsXoJmaCGQY1bw9nbJ6hWXbgoJ7UuvwxqMWsIbjjfh4rp3sNO9RJ71d
         fyV2SH9zFfG8q9xKqINJl6ZyLxgi2vOnFEvk0t9mSuYptf+DoK38W1h2py6Ru7tf1JLK
         uQPd1X0ZPeFFfokVt5Ta1lyIyr7VHT+AKRkXT38lpevZOQq1hynAExeb6w2+bu6FR644
         zHYv0R1OE9Z1iSLzpFUufE3sbZeBn9kLb+jJ9nXnBFISf9hlRX3zWebMJGV3JrAtJW0f
         9Ke5sl6wulRkDOeLEIOT5Fz0h2cVhVNdRInq6b4CPt95DXFGYY8HKmVzFEwADAbxEPbG
         kROw==
X-Gm-Message-State: AC+VfDzqjHpnxP9R9SbzybSW+qL2XJ+1a7K+JRrfvVlE1sVp3PF02sKk
	D5khVF51Km6dh5gURxMMlE3JT0Qt93Jx4oXzXBvV76W/eWPkNePNPU4=
X-Google-Smtp-Source: ACHHUZ5CrQbwENypz44WtPVIBycoZHiTOgCc2xGLsiG1PNGa0wrroloOk6ypQf8qDYwuFU17RcIr712Hw6WqRr+11k4=
X-Received: by 2002:a2e:9c92:0:b0:2af:2d77:9be1 with SMTP id
 x18-20020a2e9c92000000b002af2d779be1mr2719522lji.6.1685545011813; Wed, 31 May
 2023 07:56:51 -0700 (PDT)
MIME-Version: 1.0
From: George Dunlap <george.dunlap@cloud.com>
Date: Wed, 31 May 2023 15:56:41 +0100
Message-ID: <CA+zSX=Z==wy0_zD61q5+OACLA+UToB5K83UGqU003-t97rhW_A@mail.gmail.com>
Subject: [ANNOUNCE] Call for agenda items for 1 June Community Call @ 1500 UTC
To: Xen-devel <xen-devel@lists.xenproject.org>, 
	Tamas K Lengyel <tamas.k.lengyel@gmail.com>, "intel-xen@intel.com" <intel-xen@intel.com>, 
	"daniel.kiper@oracle.com" <daniel.kiper@oracle.com>, Roger Pau Monne <roger.pau@citrix.com>, 
	Sergey Dyasli <sergey.dyasli@citrix.com>, 
	Christopher Clark <christopher.w.clark@gmail.com>, Rich Persaud <persaur@gmail.com>, 
	Kevin Pearson <kevin.pearson@ortmanconsulting.com>, Juergen Gross <jgross@suse.com>, 
	Paul Durrant <pdurrant@amazon.com>, "Ji, John" <john.ji@intel.com>, 
	"edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, 
	"robin.randhawa@arm.com" <robin.randhawa@arm.com>, Artem Mygaiev <Artem_Mygaiev@epam.com>, 
	Matt Spencer <Matt.Spencer@arm.com>, Stewart Hildebrand <Stewart.Hildebrand@amd.com>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Rian Quinn <rianquinn@gmail.com>, "Daniel P. Smith" <dpsmith@apertussolutions.com>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLRG91ZyBHb2xkc3RlaW4=?= <cardoe@cardoe.com>, 
	George Dunlap <george.dunlap@citrix.com>, David Woodhouse <dwmw@amazon.co.uk>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQW1pdCBTaGFo?= <amit@infradead.org>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLVmFyYWQgR2F1dGFt?= <varadgautam@gmail.com>, 
	Brian Woods <brian.woods@xilinx.com>, Robert Townley <rob.townley@gmail.com>, 
	Bobby Eshleman <bobby.eshleman@gmail.com>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQ29yZXkgTWlueWFyZA==?= <cminyard@mvista.com>, 
	Olivier Lambert <olivier.lambert@vates.fr>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Ash Wilding <ash.j.wilding@gmail.com>, Rahul Singh <Rahul.Singh@arm.com>, 
	=?UTF-8?Q?Piotr_Kr=C3=B3l?= <piotr.krol@3mdeb.com>, 
	Brendan Kerrigan <brendank310@gmail.com>, insurgo@riseup.net, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Scott Davis <scottwd@gmail.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Michal Orzel <michal.orzel@amd.com>, 
	Marc Ungeschikts <marc.ungeschikts@vates.fr>, Zhiming Shen <zshen@exotanium.io>, 
	Xenia Ragiadakou <burzalodowa@gmail.com>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLSGVucnkgV2FuZw==?= <Henry.Wang@arm.com>, 
	Per Bilse <per.bilse@citrix.com>, Samuel Verschelde <stormi-xcp@ylix.fr>, 
	Andrei Semenov <andrei.semenov@vates.fr>, Yann Dirson <yann.dirson@vates.fr>, 
	Bernhard Kaindl <bernhard.kaindl@cloud.com>, 
	=?UTF-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLTHVjYSBGYW5jZWxsdQ==?= <luca.fancellu@arm.com>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>
Content-Type: multipart/alternative; boundary="000000000000aeb0a905fcfe8780"

--000000000000aeb0a905fcfe8780
Content-Type: text/plain; charset="UTF-8"

Hi all,

Sorry for the late notice on this -- somehow my reminder didn't trigger.  I
believe at the last call we discussed holding this meeting, and skipping
the one in July.


The proposed agenda is in
https://cryptpad.fr/pad/#/2/pad/edit/eddrScu3DyYdHZ6f0ReVGazZ/ and you can
edit to add items.  Alternatively, you can reply to this mail directly.

Agenda items are appreciated a few days before the call; please put your
name beside items if you edit the document.

Note the following administrative conventions for the call:
* Unless agreed otherwise in the previous meeting, the call is on the 1st
Thursday of each month at 1600 British Time (either GMT or BST)
* I usually send out a meeting reminder a few days before with a
provisional agenda

* To allow time to switch between meetings, we'll plan on starting the
agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time
to sort out technical difficulties &c

* If you want to be CC'ed please add or remove yourself from the
sign-up-sheet at
https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/

Best Regards
George


== Dial-in Information ==
## Meeting time
16:00 - 17:00 British time
Further International meeting times:
https://www.timeanddate.com/worldclock/meetingdetails.html?year=2023&month=6&day=1&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179


## Dial in details
Web: https://meet.jit.si/XenProjectCommunityCall

Dial-in info and pin can be found here:

https://meet.jit.si/static/dialInInfo.html?room=XenProjectCommunityCall

--000000000000aeb0a905fcfe8780--


From xen-devel-bounces@lists.xenproject.org Wed May 31 15:01:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 15:01:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541865.845075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4NKb-0000Yf-8z; Wed, 31 May 2023 15:01:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541865.845075; Wed, 31 May 2023 15:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4NKb-0000YY-63; Wed, 31 May 2023 15:01:45 +0000
Received: by outflank-mailman (input) for mailman id 541865;
 Wed, 31 May 2023 15:01:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4NKZ-0000YO-FV; Wed, 31 May 2023 15:01:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4NKZ-0007H8-D9; Wed, 31 May 2023 15:01:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4NKY-000329-Ve; Wed, 31 May 2023 15:01:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4NKY-0003in-VA; Wed, 31 May 2023 15:01:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ko8d8mKBa2YM5nB8JbUw0Lu1DFmisxHdXheLVSK2zKw=; b=0rKWKpQ/afJ+drs8rcymxsI5Ou
	nd9/qGEzAM0fZveul96Ij+xVmMoU4sa6E/272KSESzIPHvHP4koOrPRXJFY2RL8i6iF5QMjFj0J55
	nNzpgVwUsc2wmkjD/Njef21zHGnEF9Um91BwB9UljEFWKNRlMlECnmue6Ti8zEUxkAt0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181021-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 181021: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-libvirt-pair:leak-check/check/src_host:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:leak-check/check/dst_host:fail:regression
    qemu-mainline:test-amd64-amd64-xl-xsm:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:leak-check/check:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:leak-check/check:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:leak-check/check:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:leak-check/check:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:leak-check/check/src_host:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:leak-check/check/dst_host:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:leak-check/check:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:leak-check/check:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:leak-check/check:fail:regression
    qemu-mainline:test-amd64-i386-xl-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:leak-check/check:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-raw:leak-check/check:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:leak-check/check:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:leak-check/check:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:leak-check/check:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:leak-check/check:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f89f54d52bf8fdc6de1c90367f9bdd65e40fa382
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 15:01:42 +0000

flight 181021 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181021/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 30 leak-check/check/src_host fail REGR. vs. 180691
 test-amd64-amd64-libvirt-pair 31 leak-check/check/dst_host fail REGR. vs. 180691
 test-amd64-amd64-xl-xsm     22 guest-start/debian.repeat fail REGR. vs. 180691
 test-amd64-i386-libvirt      23 leak-check/check         fail REGR. vs. 180691
 test-amd64-amd64-libvirt-xsm 23 leak-check/check         fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 test-amd64-amd64-libvirt     23 leak-check/check         fail REGR. vs. 180691
 test-amd64-i386-libvirt-xsm  23 leak-check/check         fail REGR. vs. 180691
 test-amd64-i386-libvirt-pair 30 leak-check/check/src_host fail REGR. vs. 180691
 test-amd64-i386-libvirt-pair 31 leak-check/check/dst_host fail REGR. vs. 180691
 test-amd64-amd64-xl-qcow2    24 leak-check/check         fail REGR. vs. 180691
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 21 leak-check/check fail REGR. vs. 180691
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 180691
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 21 leak-check/check fail REGR. vs. 180691
 test-amd64-i386-xl-vhd      21 guest-start/debian.repeat fail REGR. vs. 180691
 test-amd64-amd64-libvirt-vhd 22 leak-check/check         fail REGR. vs. 180691
 test-amd64-i386-libvirt-raw  22 leak-check/check         fail REGR. vs. 180691
 test-armhf-armhf-libvirt     21 leak-check/check         fail REGR. vs. 180691
 test-armhf-armhf-libvirt-qcow2 20 leak-check/check       fail REGR. vs. 180691
 test-armhf-armhf-xl-vhd      20 leak-check/check         fail REGR. vs. 180691
 test-armhf-armhf-libvirt-raw 20 leak-check/check         fail REGR. vs. 180691
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180691
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                f89f54d52bf8fdc6de1c90367f9bdd65e40fa382
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   14 days
Failing since        180699  2023-05-18 07:21:24 Z   13 days   50 attempts
Testing same since   181021  2023-05-30 22:10:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bernhard Beschow <shentey@gmail.com>
  Bin Meng <bin.meng@windriver.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Clément Chigot <chigot@adacore.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Cédric Le Goater <clg@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Enze Li <lienze@kylinos.cn>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Erico Nunes <ernunes@redhat.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Francisco Iglesias <frasse.iglesias@gmail.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Mostafa Saleh <smostafa@google.com>
  Nicholas Piggin <npiggin@gmail.com>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Rene Engel <ReneEngel80@emailn.de>
  Richard Henderson <richard.henderson@linaro.org>
  Richard Purdie <richard.purdie@linuxfoundation.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sergio Lopez <slp@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Tommy Wu <tommy.wu@sifive.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vitaly Cheptsov <cheptsov@ispras.ru>
  Vivek Kasireddy <vivek.kasireddy@intel.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      fail    
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 10362 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 31 15:28:00 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 15:28:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541875.845093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Njj-0003BO-HT; Wed, 31 May 2023 15:27:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541875.845093; Wed, 31 May 2023 15:27:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Njj-0003BH-Eg; Wed, 31 May 2023 15:27:43 +0000
Received: by outflank-mailman (input) for mailman id 541875;
 Wed, 31 May 2023 15:27:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Njh-0003B7-Nk; Wed, 31 May 2023 15:27:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Njh-0007qW-95; Wed, 31 May 2023 15:27:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Njg-0004sx-T1; Wed, 31 May 2023 15:27:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4Njg-0000OX-Sa; Wed, 31 May 2023 15:27:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LJ4RyU6G/dD1k1H5+5LCGBby/PvXSQOdbW7DRFezXzU=; b=uQ4ZJ/N3kRdHq8M1dU076dsTPh
	65i/MWokQKiHrnb5DXb0oTYcSK+4hOpylLBXV4p3aYUjAupaj8oF9rmyFth0sXVdXwWRAv5lIT+wM
	VoQZfu9fEY2T8Pml2MWGGnZvwBvAUKBxXVxUiTvdJ4dgfy77bWK93ExTfHyrehFHvksM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181035-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 181035: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=465217b0f872602b4084a1b0fa2ef75377cb3589
X-Osstest-Versions-That:
    xen=94200e1bae07e725cc07238c11569c5cab7befb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 15:27:40 +0000

flight 181035 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181035/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 181018
 build-arm64-xsm               6 xen-build                fail REGR. vs. 181018
 build-armhf                   6 xen-build                fail REGR. vs. 181018

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  465217b0f872602b4084a1b0fa2ef75377cb3589
baseline version:
 xen                  94200e1bae07e725cc07238c11569c5cab7befb7

Last test of basis   181018  2023-05-30 20:00:24 Z    0 days
Testing same since   181031  2023-05-31 11:00:27 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 465217b0f872602b4084a1b0fa2ef75377cb3589
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed May 31 12:01:11 2023 +0200

    vPCI: account for hidden devices
    
    Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
    console) are associated with DomXEN, not Dom0. This means that while
    looking for overlapping BARs such devices cannot be found on Dom0's list
    of devices; DomXEN's list also needs to be scanned.
    
    Suppress vPCI init altogether for r/o devices (which constitute a subset
    of hidden ones).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>
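
The two-list scan described in the commit message can be sketched as follows. This is an illustrative C sketch only, not Xen's actual vPCI code: the structure names, fields, and the two-list walk are invented for this example.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative types only -- not Xen's real vPCI structures. */
struct bar { unsigned long start, end; };
struct dev { struct bar bar; struct dev *next; };

static bool overlaps(const struct bar *a, const struct bar *b)
{
    return a->start <= b->end && b->start <= a->end;
}

/* Check a candidate BAR against both Dom0's and DomXEN's device
 * lists: hidden devices only appear on DomXEN's list, so scanning
 * Dom0's list alone would miss them. */
static bool bar_overlaps_any(const struct bar *cand,
                             struct dev *dom0, struct dev *domxen)
{
    struct dev *lists[2] = { dom0, domxen };

    for (size_t i = 0; i < 2; i++)
        for (struct dev *d = lists[i]; d; d = d->next)
            if (overlaps(cand, &d->bar))
                return true;

    return false;
}
```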

commit 445fdc641e304ff41a544f8f5926a13b604c08ad
Author: Juergen Gross <jgross@suse.com>
Date:   Wed May 31 12:00:40 2023 +0200

    xen/include/public: fix 9pfs xenstore path description
    
    In xen/include/public/io/9pfs.h the name of the Xenstore backend node
    "security-model" should be "security_model", as this is how the Xen
    tools create it and how qemu reads it.
    
    Fixes: ad58142e73a9 ("xen/public: move xenstore related doc into 9pfs.h")
    Fixes: cf1d2d22fdfd ("docs/misc: Xen transport for 9pfs")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 0f80a46ffa6bfd5d111fc2e64ee5983513627e4d
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 12:00:13 2023 +0200

    xen/riscv: remove dummy_bss variable
    
    After the introduction of initial pagetables there is no longer any
    need for the dummy_bss variable, as the bss section will not be
    empty anymore.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>

commit 0d74fc2b2f85586ceb5672aedc79c666e529381d
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 12:00:05 2023 +0200

    xen/riscv: setup initial pagetables
    
    The patch does two things:
    1. Set up initial pagetables.
    2. Enable the MMU, which ends up with code in
       cont_after_mmu_is_enabled()
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>

commit ec337ce2e972b70619f5a076b20910a2ff4fea7a
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 11:59:53 2023 +0200

    xen/riscv: align __bss_start
    
    The bss clear cycle requires proper alignment of __bss_start.
    
    ALIGN(PAGE_SIZE) before "*(.bss.page_aligned)" in xen.lds.S
    was removed, as any contribution to "*(.bss.page_aligned)" has to
    specify proper alignment itself.
    
    Fixes: cfa0409f7cbb ("xen/riscv: initialize .bss section")
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>
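
As an illustration of why the alignment matters, here is a minimal word-wise clear loop of the kind a bss clear cycle uses. The symbol names and the stand-in array are invented for this sketch; Xen's real code differs.

```c
#include <stdint.h>

/* Stand-in for the linker-provided bss region; in a real kernel,
 * __bss_start/__bss_end come from the linker script. */
static uint64_t fake_bss[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

/* Clears [start, end) in uint64_t strides.  Because the loop
 * writes whole 64-bit words, it is only correct when the start
 * address is suitably aligned -- hence the requirement that
 * __bss_start be properly aligned in xen.lds.S. */
static void clear_bss(uint64_t *start, uint64_t *end)
{
    while (start < end)
        *start++ = 0;
}
```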

commit e66003e7be1996c9dd8daca54ba34ad5bb58d668
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 11:55:58 2023 +0200

    xen/riscv: introduce setup_initial_pages
    
    The idea was taken from xvisor but the following changes
    were done:
    * Use only a minimal part of the code, enough to enable the MMU
    * rename {_}setup_initial_pagetables functions
    * add an argument for setup_initial_mapping to have
      an opportunity to set PTE flags.
    * update setup_initial_pagetables function to map sections
      with correct PTE flags.
    * Rewrite enable_mmu() to C.
    * map the linker address range to the load address range without
      a 1:1 mapping. It will be 1:1 only when load_start_addr is
      equal to linker_start_addr.
    * add safety checks such as:
      * Xen size is less than page size
      * linker addresses range doesn't overlap load addresses
        range
    * Rework macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}
    * change PTE_LEAF_DEFAULT to RW instead of RWX.
    * Remove phys_offset as it is not used now
    * Remove alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);
      in setup_initial_mapping() as they should already be aligned.
      Make a check that {map, pa}_start are aligned.
    * Remove clear_pagetables() as initial pagetables will be
      zeroed during bss initialization
    * Remove __attribute__((section(".entry")) for setup_initial_pagetables()
      as there is no such section in xen.lds.S
    * Update the argument of pte_is_valid() to "const pte_t *p"
    * Add check that Xen's load address is aligned at 4k boundary
    * Refactor setup_initial_pagetables() so it maps the linker
      address range to the load address range. Afterwards, set the
      needed permissions for specific sections (such as .text, .rodata,
      etc.); otherwise RW permission will be set by default.
    * Add function to check that requested SATP_MODE is supported
    
    Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>
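
The PTE-related items above (the "const pte_t *p" signature for pte_is_valid() and the RW rather than RWX PTE_LEAF_DEFAULT) can be illustrated with a short sketch. The bit positions follow the RISC-V Sv39 PTE layout, but the macro and type definitions here are illustrative, not Xen's actual ones.

```c
#include <stdint.h>
#include <stdbool.h>

/* RISC-V PTE permission bits (Sv39/Sv48 share this layout). */
#define PTE_VALID (1UL << 0)
#define PTE_READ  (1UL << 1)
#define PTE_WRITE (1UL << 2)
#define PTE_EXEC  (1UL << 3)

typedef struct { uint64_t pte; } pte_t;

/* Takes a pointer-to-const, mirroring the commit's change of
 * pte_is_valid()'s argument to "const pte_t *p". */
static bool pte_is_valid(const pte_t *p)
{
    return p->pte & PTE_VALID;
}

/* Default leaf mapping is RW, not RWX, matching the change of
 * PTE_LEAF_DEFAULT described above. */
#define PTE_LEAF_DEFAULT (PTE_VALID | PTE_READ | PTE_WRITE)
```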

commit efadb18dd58abaa0c6102e04f1c25ac94c273853
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 11:55:46 2023 +0200

    xen/riscv: add VM space layout
    
    An explanation about the ignoring of the top VA bits was also added.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 31 15:31:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 15:31:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541881.845103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4NnC-0004c3-0b; Wed, 31 May 2023 15:31:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541881.845103; Wed, 31 May 2023 15:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4NnB-0004bw-U5; Wed, 31 May 2023 15:31:17 +0000
Received: by outflank-mailman (input) for mailman id 541881;
 Wed, 31 May 2023 15:31:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sC/f=BU=citrix.com=prvs=508b7ea43=Andrew.Cooper3@srs-se1.protection.inumbo.net>)
 id 1q4NnA-0004bo-N1
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 15:31:16 +0000
Received: from esa6.hc3370-68.iphmx.com (esa6.hc3370-68.iphmx.com
 [216.71.155.175]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2aa24a06-ffc8-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 17:31:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2aa24a06-ffc8-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1685547074;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=x/kGhRmaCQ9yLV1zhMt3H3YzU8lRSSys04lMtxP7CR0=;
  b=MDCQ5YB7YYZ5nKqX/bLgxFnuxqRO3wZRu7jgH/L5weHT/Mbpo2YmNpsl
   gnzxFbQ+m1/Skg9GXZIzpyFpd5V8WLFN7O4zjPTVEgt2riJMe+8OlZd/o
   HHR2cJcfliC5xJdGm6ucYdILlylpjcMbHz0NATaHlRS8QwYOVcQQFKrGT
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 4.0
X-MesageID: 110425760
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.156.123
X-Policy: $RELAYED
IronPort-Data: A9a23:3HH0/qrhadyCLsKT06VJus3+Ks5eBmIEZRIvgKrLsJaIsI4StFCzt
 garIBnQaP3ZZDGkedsna4++9kIGvJDVmIVhTQtqqS1gQi0SpZuZCYyVIHmrMnLJJKUvbq7FA
 +Y2MYCccZ9uHhcwgj/3b9ANeFEljfngqoLUUbKCYWYpA1c/Ek/NsDo788YhmIlknNOlNA2Ev
 NL2sqX3NUSsnjV5KQr40YrawP9UlKq04GtwUmAWP6gR5weDzCBNVvrzGInqR5fGatgMdgKFb
 76rIIGRpgvx4xorA9W5pbf3GmVirmn6ZFXmZtJ+AsBOszAazsAA+v9T2Mk0MC+7vw6hjdFpo
 OihgLTrIesf0g8gr8xGO/VQO3kW0aSrY9YrK1Dn2SCY5xWun3cBX5yCpaz5VGEV0r8fPI1Ay
 RAXABUNfDmsquOM/K3hRrBQndl+HZbnPLpK7xmMzRmBZRonaZXKQqGM7t5ExjYgwMtJGJ4yZ
 eJAN2ApNk6ZJUQSZBFOUslWcOSA3xETdxVxrl6PqLVxyG/U1AFri5DmMcbPe8zMTsJQ9qqdj
 jufpzijX05EbLRzzxK50SywpNPLsRrXXaYSM5Li5M9JkHOqkzl75Bo+CgLg/KjRZlSFc8JSL
 QkY9zQjqYA29Ve3VZ/tUhugunmGsxUAHd1KHIUSyiuA167V6AaxHXUfQ3hKb9lOnNAybSwn0
 BmOhdyBONB0mOTLEzTHrO7S9G7sf3FPdgfueBPoUyMg48vKj6wytC7QFNo6LLWo0+yqFTHJl
 mXiQDcFu50fissC1qOe9F/Bgi6xqpWhcjPZ9jk7TUr+sFonOdfNi5iArAGCsK0edNrxokyp5
 iBspiSI0AwZ4XhhfgSpSf5FIrym7u3t3Nb00Q82RMlJG9hAFheekWFsDNNWfh8B3iUsI2WBj
 KrvVeR5uvdu0IOCN/MfXm5II51CIVLcPdrkTOvISdFFf4J8cgSKlAk3Ox7Ngz+0zhB9wf1mU
 Xt+TSpKJSxCYUiA5GPnL9rxLJdxnnxurY8tbcyTI+ubPUq2OyfOFOZt3KqmZeEl9qKUyDg5A
 P4GX/ZmPy53CbWkCgGOqN57ELz/BSRjbXwAg5ANJ7Hrz8sPMD1JNsI9Npt6K9Y4xvQEx72Vl
 px/M2cBoGfCabT8AV3iQhhehHnHDf6TcVpT0fQQAGuV
IronPort-HdrOrdr: A9a23:iZU0f6lv4XMEuUeBMPg8K+DERLzpDfIX3DAbv31ZSRFFG/Fw9v
 re/sjzsCWetN9/YhwdcK+7SdC9qB/nmaKdmLNhWotKBTOW3ldAT7sSjrcKoQeAJ8SWzIc0v5
 uIFZIQNDSaNzhHZKjBjjVRZ70bsb26GNLCv5a680tQ
X-Talos-CUID: =?us-ascii?q?9a23=3AMXpNu2iuvJENBkb0g15kLc13azJuV33P5Vj9InW?=
 =?us-ascii?q?EFjxlEYGOVAKhp69kqp87?=
X-Talos-MUID: =?us-ascii?q?9a23=3AGEsqZQ3jemC9lejYOmhstU7jJzUj34fwFGIxlbk?=
 =?us-ascii?q?/l8ygNQtfYSWsrySZe9py?=
X-IronPort-AV: E=Sophos;i="6.00,207,1681185600"; 
   d="scan'208";a="110425760"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Alejandro Vallejo
	<alejandro.vallejo@cloud.com>
Subject: [PATCH] xen/cpu-policy: Add an IBRS -> AUTO_IBRS dependency
Date: Wed, 31 May 2023 16:30:28 +0100
Message-ID: <20230531153028.1224147-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

AUTO_IBRS is an extension over regular (AMD) IBRS, and needs hiding if IBRS is
levelled out for any reason.
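The mechanism behind the fix can be sketched in a few lines: gen-cpuid.py's
crunch_numbers() records, per feature, the features that depend on it, and
levelling a feature out must also hide the transitive closure of its
dependents. The snippet below is a hypothetical, heavily simplified model of
that behaviour (names are illustrative; the real script works on CPUID bit
numbers and a much larger map), showing why a missing IBRS -> AUTO_IBRS edge
would let AUTO_IBRS leak through:

```python
# Hypothetical, simplified model of gen-cpuid.py's dependency handling.
# Feature names stand in for CPUID bit positions.
deps = {
    "IBRS": ["AMD_STIBP", "AMD_SSBD", "PSFD", "AUTO_IBRS",
             "IBRS_ALWAYS", "IBRS_FAST", "IBRS_SAME_MODE"],
    "AMD_STIBP": ["STIBP_ALWAYS"],
}

def deep_deps(feat):
    """Transitive closure of features that depend on `feat`."""
    seen = set()
    stack = [feat]
    while stack:
        for child in deps.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def level_out(visible, feat):
    """Hide `feat` and everything depending on it from a feature set."""
    return visible - {feat} - deep_deps(feat)

visible = {"IBRS", "AUTO_IBRS", "AMD_STIBP", "STIBP_ALWAYS"}
# With the AUTO_IBRS edge present, levelling out IBRS hides AUTO_IBRS too.
assert "AUTO_IBRS" not in level_out(visible, "IBRS")
```

Without "AUTO_IBRS" in the IBRS entry, deep_deps("IBRS") would not contain
it, and a guest could see AUTO_IBRS on a host where IBRS itself had been
levelled away, which is the implausible configuration the patch closes off.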

Fixes: defaf651631a ("x86/hvm: Expose Automatic IBRS to guests")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Alejandro Vallejo <alejandro.vallejo@cloud.com>

This was an oversight of mine when reviewing the aforementioned patch.
---
 xen/tools/gen-cpuid.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index f28ff708a2fc..973fcc1c64e8 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -319,7 +319,7 @@ def crunch_numbers(state):
         # as dependent features simplifies Xen's logic, and prevents the guest
         # from seeing implausible configurations.
         IBRSB: [STIBP, SSBD, INTEL_PSFD],
-        IBRS: [AMD_STIBP, AMD_SSBD, PSFD,
+        IBRS: [AMD_STIBP, AMD_SSBD, PSFD, AUTO_IBRS,
                IBRS_ALWAYS, IBRS_FAST, IBRS_SAME_MODE],
         AMD_STIBP: [STIBP_ALWAYS],
 

base-commit: 465217b0f872602b4084a1b0fa2ef75377cb3589
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 31 15:56:43 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 15:56:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541885.845114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4OBb-00077O-W7; Wed, 31 May 2023 15:56:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541885.845114; Wed, 31 May 2023 15:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4OBb-00077H-TO; Wed, 31 May 2023 15:56:31 +0000
Received: by outflank-mailman (input) for mailman id 541885;
 Wed, 31 May 2023 15:56:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4OBa-000777-JT; Wed, 31 May 2023 15:56:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4OBa-0008UG-8J; Wed, 31 May 2023 15:56:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4OBZ-0006Ex-PK; Wed, 31 May 2023 15:56:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4OBZ-0002Vq-Os; Wed, 31 May 2023 15:56:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=n/V8tbR3zDs1g0bC/KCjSJqBVmBKlL51dyPXPruZtf8=; b=aUJUf95slN+lJxAETMfGOMEHn2
	JiVr9UgrQMiofsrUVUEGodwGq97uREOWVUPahGKZw66pvbqUkH0Ch7faCp6kv2vNdP2XNyoseunR5
	kywqgJDWbO7QSuJsRjRaV8eMzDTXNtlhMt6XMV6uOQyp0BoGE1jiCF5ppx29OIFMpZ6Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181036-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 181036: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=9f12d6b6ecf8ffe9cd4d93fe0976fdbaf2ded4f0
X-Osstest-Versions-That:
    ovmf=d15d2667d58d40c0748919ac4b5771b875c0780b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 15:56:29 +0000

flight 181036 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181036/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9f12d6b6ecf8ffe9cd4d93fe0976fdbaf2ded4f0
baseline version:
 ovmf                 d15d2667d58d40c0748919ac4b5771b875c0780b

Last test of basis   181028  2023-05-31 09:12:10 Z    0 days
Testing same since   181036  2023-05-31 13:10:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Zhihao Li <zhihao.li@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d15d2667d5..9f12d6b6ec  9f12d6b6ecf8ffe9cd4d93fe0976fdbaf2ded4f0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 31 16:02:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 16:02:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541891.845124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4OGw-0000fS-K3; Wed, 31 May 2023 16:02:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541891.845124; Wed, 31 May 2023 16:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4OGw-0000fL-Gt; Wed, 31 May 2023 16:02:02 +0000
Received: by outflank-mailman (input) for mailman id 541891;
 Wed, 31 May 2023 16:02:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOFA=BU=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q4OGu-0000fF-QB
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 16:02:00 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 795243f8-ffcc-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 18:01:59 +0200 (CEST)
Received: by mail-wm1-x32d.google.com with SMTP id
 5b1f17b1804b1-3f6077660c6so44853495e9.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 09:01:59 -0700 (PDT)
Received: from EMEAENGAAD19049. (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 x2-20020adff0c2000000b0030af20aaa3fsm7330260wro.71.2023.05.31.09.01.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 09:01:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 795243f8-ffcc-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685548919; x=1688140919;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=rSNxuF1FaMsejw/Nfe/KRC/8shyYVQNtl+Xbzvh5/58=;
        b=eQ0AfemjkLXxcbEamcsZ2VSGMe0m1rkPvVL4kWZBZRRU+s1tkhmFlFw1naPpRryfWF
         nkjjpeBGY6eLkbzEmo1gFxEJUVC31xL2qgoJg1bsQ3bbDwkNMfJbbMyA6Tz2F+Giqmsi
         We/MuOwEOQdiMbZqm3fIxZeaPwIgaiEFKLIZw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685548919; x=1688140919;
        h=in-reply-to:content-disposition:mime-version:references:subject:cc
         :to:from:date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=rSNxuF1FaMsejw/Nfe/KRC/8shyYVQNtl+Xbzvh5/58=;
        b=HQNeSOxrSbUHwD1CIdozxxSo3segCAMEE74i4+T5Xz5lv5lAN2GBmMGIYi+cYDFgw6
         lrEPg3Dl4FEqtmnH4LETfFyoSfGWL3U6mev3Ik5VQ/LE5p5ef+8vg0tfxfVVmAYA7OVb
         hOMwY9wm555gg0Bl4hX3O4EFrr3rzvRKAkYAoHTtqJXnAVJ79UDAoUjF93I1qxejAjQB
         1A4w67+nDv2iW/VoTt4hCE3Coem2q4kW/zJUq9gRmPdl1HGh8nPnSyc+io47d/hGvCNw
         igI/uhmfD1mAW2CQoVqX7XQb9DcWNwMzcgjYks61pRzsDj/hmjrtadXsbuCgRQV+67cI
         Ieig==
X-Gm-Message-State: AC+VfDwVeK+8zZ489PY78ehhjYKFaFOgK+E2oCOBT0vWxwDUp2a8Zzs9
	YH6KqVSGuCyz6ZKt2EpvnjNRqA==
X-Google-Smtp-Source: ACHHUZ5RryIwBjPcyFi943X8mwdoKpxVWwwB5XQiQqY2Qkbae05rwFNl607OJ+Pm4RhxtOMxYKhURg==
X-Received: by 2002:adf:f685:0:b0:306:30e8:eb34 with SMTP id v5-20020adff685000000b0030630e8eb34mr4155725wrp.48.1685548918830;
        Wed, 31 May 2023 09:01:58 -0700 (PDT)
Message-ID: <64776f76.df0a0220.3fe7d.ce8b@mx.google.com>
X-Google-Original-Message-ID: <ZHdvdPF2PXZpEBBa@EMEAENGAAD19049.>
Date: Wed, 31 May 2023 17:01:56 +0100
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] xen/cpu-policy: Add an IBRS -> AUTO_IBRS dependency
References: <20230531153028.1224147-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20230531153028.1224147-1-andrew.cooper3@citrix.com>

On Wed, May 31, 2023 at 04:30:28PM +0100, Andrew Cooper wrote:
> AUTO_IBRS is an extension over regular (AMD) IBRS, and needs hiding if IBRS is
> levelled out for any reason.
True that. My bad.

> ---
>  xen/tools/gen-cpuid.py | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
> index f28ff708a2fc..973fcc1c64e8 100755
> --- a/xen/tools/gen-cpuid.py
> +++ b/xen/tools/gen-cpuid.py
> @@ -319,7 +319,7 @@ def crunch_numbers(state):
>          # as dependent features simplifies Xen's logic, and prevents the guest
>          # from seeing implausible configurations.
>          IBRSB: [STIBP, SSBD, INTEL_PSFD],
> -        IBRS: [AMD_STIBP, AMD_SSBD, PSFD,
> +        IBRS: [AMD_STIBP, AMD_SSBD, PSFD, AUTO_IBRS,
>                 IBRS_ALWAYS, IBRS_FAST, IBRS_SAME_MODE],
>          AMD_STIBP: [STIBP_ALWAYS],
>  
> 
> base-commit: 465217b0f872602b4084a1b0fa2ef75377cb3589
> -- 
> 2.30.2
> 

LGTM

Alejandro


From xen-devel-bounces@lists.xenproject.org Wed May 31 17:47:40 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 17:47:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541904.845164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Put-0002U4-I1; Wed, 31 May 2023 17:47:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541904.845164; Wed, 31 May 2023 17:47:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Put-0002Tx-E2; Wed, 31 May 2023 17:47:23 +0000
Received: by outflank-mailman (input) for mailman id 541904;
 Wed, 31 May 2023 17:47:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RcVj=BU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q4Pur-0002Tr-OY
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 17:47:21 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2fce7e88-ffdb-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 19:47:19 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id D40E660ADB;
 Wed, 31 May 2023 17:47:17 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A293EC433EF;
 Wed, 31 May 2023 17:47:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fce7e88-ffdb-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685555237;
	bh=7swq4b5yVBSwi4TGIAY0pCzJl1Qy9yKYR+uafk/48v8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QsJfbF71GcjWpJR2B6CgrfD7z8DsQHJruSdwnn3gH8wx0ZHMzYLWKPIabCgXmW4gW
	 2gVm6IfiCiS8V3bhmDt62gXirz6MrYwION6W3bs3Hg/PHd17jXu7y+83tga7hFkX7o
	 tRmqJvUtjhE6iQtKClRgFNEwtel3RvANyw6S/x8nvtKauxtr/bkZe02PXwBUXTXSg2
	 fvnVbIeEv9d6qzeGDF2VgqI9Uv10sDUY1JvX94zncOjshC+XgfBLc3czrSB0U95WHn
	 mKdxypPNjn8FKj+htMeISjAkgC9QIukzEoapyDSEaaU7qK/HHextozElMWX7lvqY34
	 iRcBoC157bnug==
Date: Wed, 31 May 2023 10:47:15 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RFC v2] vPCI: account for hidden devices
In-Reply-To: <a97829cc-727a-aa27-b00b-d92afb0f8863@suse.com>
Message-ID: <alpine.DEB.2.22.394.2305311042470.44000@ubuntu-linux-20-04-desktop>
References: <7294a70c-0089-e375-bb5a-bf9544d4f251@suse.com> <ZG4dmJuzNVUE5UIY@Air-de-Roger> <614c5bf4-b273-7439-caf7-f6df0d90bf87@suse.com> <alpine.DEB.2.22.394.2305241627290.44000@ubuntu-linux-20-04-desktop> <8956af09-9ba4-11bf-a272-25f508bbbb3c@suse.com>
 <alpine.DEB.2.22.394.2305251224070.44000@ubuntu-linux-20-04-desktop> <22f1e765-891d-ef2d-01b5-e9dfe6ca895b@suse.com> <alpine.DEB.2.22.394.2305301529090.44000@ubuntu-linux-20-04-desktop> <a97829cc-727a-aa27-b00b-d92afb0f8863@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 31 May 2023, Jan Beulich wrote:
> On 31.05.2023 00:38, Stefano Stabellini wrote:
> > On Fri, 26 May 2023, Jan Beulich wrote:
> >> On 25.05.2023 21:24, Stefano Stabellini wrote:
> >>> On Thu, 25 May 2023, Jan Beulich wrote:
> >>>> On 25.05.2023 01:37, Stefano Stabellini wrote:
> >>>>> On Wed, 24 May 2023, Jan Beulich wrote:
> >>>>>>>> RFC: _setup_hwdom_pci_devices()' loop may want splitting: For
> >>>>>>>>      modify_bars() to consistently respect BARs of hidden devices while
> >>>>>>>>      setting up "normal" ones (i.e. to avoid as much as possible the
> >>>>>>>>      "continue" path introduced here), setting up of the former may want
> >>>>>>>>      doing first.
> >>>>>>>
> >>>>>>> But BARs of hidden devices should be mapped into dom0 physmap?
> >>>>>>
> >>>>>> Yes.
> >>>>>
> >>>>> The BARs would be mapped read-only (not read-write), right? Otherwise we
> >>>>> let dom0 access devices that belong to Xen, which doesn't seem like a
> >>>>> good idea.
> >>>>>
> >>>>> But even if we map the BARs read-only, what is the benefit of mapping
> >>>>> them to Dom0? If Dom0 loads a driver for it and the driver wants to
> >>>>> initialize the device, the driver will crash because the MMIO region is
> >>>>> read-only instead of read-write, right?
> >>>>>
> >>>>> How does this device hiding work for dom0? How does dom0 know not to
> >>>>> access a device that is present on the PCI bus but is used by Xen?
> >>>>
> >>>> None of these are new questions - this has all been this way for PV Dom0,
> >>>> and so far we've limped along quite okay. That's not to say that we
> >>>> shouldn't improve things if we can, but that first requires ideas as to
> >>>> how.
> >>>
> >>> For PV, that was OK because PV requires extensive guest modifications
> >>> anyway. We only run Linux and few BSDs as Dom0. So, making the interface
> >>> cleaner and reducing guest changes is nice-to-have but not critical.
> >>>
> >>> For PVH, this is different. One of the top reasons for AMD to work on
> >>> PVH is to enable arbitrary non-Linux OSes as Dom0 (when paired with
> >>> dom0less/hyperlaunch). It could be anything from Zephyr to a
> >>> proprietary RTOS like VxWorks. Minimal guest changes for advanced
> >>> features (e.g. Dom0 S3) might be OK but in general I think we should aim
> >>> at (almost) zero guest changes. On ARM, it is already the case (with some
> >>> non-upstream patches for dom0less PCI.)
> >>>
> >>> For this specific patch, which is necessary to enable PVH on AMD x86 in
> >>> gitlab-ci, we can do anything we want to make it move faster. But
> >>> medium/long term I think we should try to make non-Xen-aware PVH Dom0
> >>> possible.
> >>
> >> I don't think Linux could boot as PVH Dom0 without any awareness. Hence
> >> I guess it's not easy to see how other OSes might. What you're after
> >> looks rather like a HVM Dom0 to me, with it being unclear where the
> >> external emulator then would run (in a stubdom maybe, which might be
> >> possible to arrange for via the dom0less way of creating boot time
> >> DomU-s) and how it would get any necessary xenstore based information.
> > 
> > I know that Linux has lots of Xen awareness scattered everywhere so it
> > is difficult to tell what's what. Leaving the PVH entry point aside for
> > this discussion, what else is really needed for a Linux without
> > CONFIG_XEN to boot as PVH Dom0?
> > 
> > Same question from a different angle: let's say that we boot Zephyr or
> > another RTOS as HVM Dom0, what is really required for the emulator to
> > emulate? I am hoping that the answer is "nothing" except for maybe a
> > UART.
> > 
> > It comes down to how much legacy stuff the guest OS expects to find.
> > Legacy stuff that would normally be emulated by QEMU. I am counting on
> > the fact that a modern OS doesn't expect any of the legacy stuff (e.g.
> > PIIX3/Q35/E1000) if it is not advertised in the firmware tables.
> 
> And that's where I expect the problems start: We don't really alter
> things like the DSDT and SSDTs, and we also don't parse them. So we
> won't know what firmware describes there. Hence we have to expect that
> any legacy device might be present in the underlying platform, and
> hence would also need offering either by passing through or by
> emulation. Yet we can't sensibly emulate everything in Xen itself.

I see your point, thanks for the explanation. I can see it might require
some work in that area, either by removing those devices from the
firmware tables (which we currently don't even parse) or by passing
through those devices when possible. FYI there is also an ACPI SOT table
[1] that could maybe be used for this, but nobody has ever used it so far.

[1] https://wiki.xenproject.org/images/0/02/Status-override-table.pdf


From xen-devel-bounces@lists.xenproject.org Wed May 31 17:49:13 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 17:49:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541910.845174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Pwe-000360-3a; Wed, 31 May 2023 17:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541910.845174; Wed, 31 May 2023 17:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Pwe-00035t-07; Wed, 31 May 2023 17:49:12 +0000
Received: by outflank-mailman (input) for mailman id 541910;
 Wed, 31 May 2023 17:49:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5Ad7=BU=alien8.de=bp@srs-se1.protection.inumbo.net>)
 id 1q4Pwa-00035f-Ow
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 17:49:10 +0000
Received: from mail.skyhub.de (mail.skyhub.de [2a01:4f8:190:11c2::b:1457])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6eded52b-ffdb-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 19:49:04 +0200 (CEST)
Received: from zn.tnic (pd9530d32.dip0.t-ipconnect.de [217.83.13.50])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id A25531EC042D;
 Wed, 31 May 2023 19:49:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6eded52b-ffdb-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1685555343;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=4awF7ihropMIeuORHFj3RY30SkzPrI4h/Pp4BMpmeAM=;
	b=K9yTaQIvSiqB2B4WfU+Jnili6hYPgH7rfjnOVXpv22taysYxHx7NknW6idh/cKrzOEESRm
	honBMfa3yHiNwpH1DYMfqa/pm75xafqAeoWWnLiBkKjxzPW54I1xQLxuNntaI1yRNYsM4P
	ezsvY7SMHGs6rokaZ8atyMo6m83ReoA=
Date: Wed, 31 May 2023 19:48:57 +0200
From: Borislav Petkov <bp@alien8.de>
To: Juergen Gross <jgross@suse.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	linux-hyperv@vger.kernel.org, linux-doc@vger.kernel.org,
	mikelley@microsoft.com, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Jonathan Corbet <corbet@lwn.net>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v6 00/16] x86/mtrr: fix handling with PAT but without MTRR
Message-ID: <20230531174857.GDZHeIib57h5lT5Vh1@fat_crate.local>
References: <20230509201437.GFZFqprc6otRejDPUt@fat_crate.local>
 <20230509233641.GGZFrZCTDH7VwUMp5R@fat_crate.local>
 <20230510133024.GBZFuccC1FxIZNKL+8@fat_crate.local>
 <4c47a11c-0565-678d-3467-e01c5ec16600@suse.com>
 <20230511163208.GDZF0YiOfxQhSo4RDm@fat_crate.local>
 <0cd3899b-cf3b-61c1-14ae-60b6b49d14ab@suse.com>
 <20230530152825.GAZHYWGXAp8PHgN/w0@fat_crate.local>
 <888f860d-4307-54eb-01da-11f9adf65559@suse.com>
 <20230531083508.GAZHcGvB68PUAH7f+a@fat_crate.local>
 <efe79c9e-1e31-adb9-8f93-962249bf01bb@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <efe79c9e-1e31-adb9-8f93-962249bf01bb@suse.com>

On Wed, May 31, 2023 at 04:20:08PM +0200, Juergen Gross wrote:
> One other note: why does mtrr_cleanup() think that using 8 instead of 6
> variable MTRRs would be an "optimal setting"?

Maybe the more extensive debug output below would help answer that...

> IMO it should replace the original setup only in case it is using _less_
> MTRRs than before.

Right.

> Additionally I believe mtrr_cleanup() would make much more sense if it
> wouldn't be __init, but being usable when trying to add additional MTRRs
> in the running system in case we run out of MTRRs.
> 
> It should probably be based on the new MTRR map anyway...

So I'm not really sure we really care about adding additional MTRRs.
There probably is a use case which does that but I haven't seen one yet
- MTRRs are all legacy crap to me.

Btw, one more patch ontop:

---
From: "Borislav Petkov (AMD)" <bp@alien8.de>
Date: Wed, 31 May 2023 19:23:34 +0200
Subject: [PATCH] x86/mtrr: Unify debugging printing

Put all the debugging output behind "mtrr=debug" and get rid of
"mtrr_cleanup_debug" which wasn't even documented anywhere.

No functional changes.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
---
 arch/x86/kernel/cpu/mtrr/cleanup.c | 59 ++++++++++++------------------
 arch/x86/kernel/cpu/mtrr/generic.c |  2 +-
 arch/x86/kernel/cpu/mtrr/mtrr.c    |  5 +--
 arch/x86/kernel/cpu/mtrr/mtrr.h    |  3 ++
 4 files changed, 29 insertions(+), 40 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/cleanup.c b/arch/x86/kernel/cpu/mtrr/cleanup.c
index ed5f84c20ac2..18cf79d6e2c5 100644
--- a/arch/x86/kernel/cpu/mtrr/cleanup.c
+++ b/arch/x86/kernel/cpu/mtrr/cleanup.c
@@ -55,9 +55,6 @@ static int __initdata				nr_range;
 
 static struct var_mtrr_range_state __initdata	range_state[RANGE_NUM];
 
-static int __initdata debug_print;
-#define Dprintk(x...) do { if (debug_print) pr_debug(x); } while (0)
-
 #define BIOS_BUG_MSG \
 	"WARNING: BIOS bug: VAR MTRR %d contains strange UC entry under 1M, check with your system vendor!\n"
 
@@ -79,12 +76,11 @@ x86_get_mtrr_mem_range(struct range *range, int nr_range,
 		nr_range = add_range_with_merge(range, RANGE_NUM, nr_range,
 						base, base + size);
 	}
-	if (debug_print) {
-		pr_debug("After WB checking\n");
-		for (i = 0; i < nr_range; i++)
-			pr_debug("MTRR MAP PFN: %016llx - %016llx\n",
-				 range[i].start, range[i].end);
-	}
+
+	Dprintk("After WB checking\n");
+	for (i = 0; i < nr_range; i++)
+		Dprintk("MTRR MAP PFN: %016llx - %016llx\n",
+			 range[i].start, range[i].end);
 
 	/* Take out UC ranges: */
 	for (i = 0; i < num_var_ranges; i++) {
@@ -112,24 +108,22 @@ x86_get_mtrr_mem_range(struct range *range, int nr_range,
 		subtract_range(range, RANGE_NUM, extra_remove_base,
 				 extra_remove_base + extra_remove_size);
 
-	if  (debug_print) {
-		pr_debug("After UC checking\n");
-		for (i = 0; i < RANGE_NUM; i++) {
-			if (!range[i].end)
-				continue;
-			pr_debug("MTRR MAP PFN: %016llx - %016llx\n",
-				 range[i].start, range[i].end);
-		}
+	Dprintk("After UC checking\n");
+	for (i = 0; i < RANGE_NUM; i++) {
+		if (!range[i].end)
+			continue;
+
+		Dprintk("MTRR MAP PFN: %016llx - %016llx\n",
+			 range[i].start, range[i].end);
 	}
 
 	/* sort the ranges */
 	nr_range = clean_sort_range(range, RANGE_NUM);
-	if  (debug_print) {
-		pr_debug("After sorting\n");
-		for (i = 0; i < nr_range; i++)
-			pr_debug("MTRR MAP PFN: %016llx - %016llx\n",
-				 range[i].start, range[i].end);
-	}
+
+	Dprintk("After sorting\n");
+	for (i = 0; i < nr_range; i++)
+		Dprintk("MTRR MAP PFN: %016llx - %016llx\n",
+			range[i].start, range[i].end);
 
 	return nr_range;
 }
@@ -164,13 +158,6 @@ static int __init enable_mtrr_cleanup_setup(char *str)
 }
 early_param("enable_mtrr_cleanup", enable_mtrr_cleanup_setup);
 
-static int __init mtrr_cleanup_debug_setup(char *str)
-{
-	debug_print = 1;
-	return 0;
-}
-early_param("mtrr_cleanup_debug", mtrr_cleanup_debug_setup);
-
 static void __init
 set_var_mtrr(unsigned int reg, unsigned long basek, unsigned long sizek,
 	     unsigned char type)
@@ -267,7 +254,7 @@ range_to_mtrr(unsigned int reg, unsigned long range_startk,
 			align = max_align;
 
 		sizek = 1UL << align;
-		if (debug_print) {
+		if (mtrr_debug) {
 			char start_factor = 'K', size_factor = 'K';
 			unsigned long start_base, size_base;
 
@@ -542,7 +529,7 @@ static void __init print_out_mtrr_range_state(void)
 		start_base = to_size_factor(start_base, &start_factor);
 		type = range_state[i].type;
 
-		pr_debug("reg %d, base: %ld%cB, range: %ld%cB, type %s\n",
+		Dprintk("reg %d, base: %ld%cB, range: %ld%cB, type %s\n",
 			i, start_base, start_factor,
 			size_base, size_factor,
 			(type == MTRR_TYPE_UNCACHABLE) ? "UC" :
@@ -714,7 +701,7 @@ int __init mtrr_cleanup(void)
 		return 0;
 
 	/* Print original var MTRRs at first, for debugging: */
-	pr_debug("original variable MTRRs\n");
+	Dprintk("original variable MTRRs\n");
 	print_out_mtrr_range_state();
 
 	memset(range, 0, sizeof(range));
@@ -746,7 +733,7 @@ int __init mtrr_cleanup(void)
 
 		if (!result[i].bad) {
 			set_var_mtrr_all();
-			pr_debug("New variable MTRRs\n");
+			Dprintk("New variable MTRRs\n");
 			print_out_mtrr_range_state();
 			return 1;
 		}
@@ -766,7 +753,7 @@ int __init mtrr_cleanup(void)
 
 			mtrr_calc_range_state(chunk_size, gran_size,
 				      x_remove_base, x_remove_size, i);
-			if (debug_print) {
+			if (mtrr_debug) {
 				mtrr_print_out_one_result(i);
 				pr_info("\n");
 			}
@@ -790,7 +777,7 @@ int __init mtrr_cleanup(void)
 		gran_size <<= 10;
 		x86_setup_var_mtrrs(range, nr_range, chunk_size, gran_size);
 		set_var_mtrr_all();
-		pr_debug("New variable MTRRs\n");
+		Dprintk("New variable MTRRs\n");
 		print_out_mtrr_range_state();
 		return 1;
 	} else {
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index e5c5192d8a28..58a3848435c4 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -41,7 +41,7 @@ struct cache_map {
 	u64 fixed:1;
 };
 
-static bool mtrr_debug;
+bool mtrr_debug;
 
 static int __init mtrr_param_setup(char *str)
 {
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index ec8670bb5d88..767bf1c71aad 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -332,7 +332,7 @@ static int mtrr_check(unsigned long base, unsigned long size)
 {
 	if ((base & (PAGE_SIZE - 1)) || (size & (PAGE_SIZE - 1))) {
 		pr_warn("size and base must be multiples of 4 kiB\n");
-		pr_debug("size: 0x%lx  base: 0x%lx\n", size, base);
+		Dprintk("size: 0x%lx  base: 0x%lx\n", size, base);
 		dump_stack();
 		return -1;
 	}
@@ -423,8 +423,7 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
 			}
 		}
 		if (reg < 0) {
-			pr_debug("no MTRR for %lx000,%lx000 found\n",
-				 base, size);
+			Dprintk("no MTRR for %lx000,%lx000 found\n", base, size);
 			goto out;
 		}
 	}
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
index 8385d7d3a865..5655f253d929 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
@@ -10,6 +10,9 @@
 #define MTRR_CHANGE_MASK_VARIABLE  0x02
 #define MTRR_CHANGE_MASK_DEFTYPE   0x04
 
+extern bool mtrr_debug;
+#define Dprintk(x...) do { if (mtrr_debug) pr_info(x); } while (0)
+
 extern unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
 
 struct mtrr_ops {
-- 
2.35.1


[    0.000000] microcode: updated early: 0x710 -> 0x718, date = 2019-05-21
[    0.000000] Linux version 6.4.0-rc1+ (root@gondor) (gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.1) #1 SMP PREEMPT_DYNAMIC Wed May 31 13:45:34 CEST 2023
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-6.4.0-rc1+ root=/dev/sda7 ro earlyprintk=ttyS0,115200 console=ttyS0,115200 console=tty0 ras=cec_disable root=/dev/sda7 log_buf_len=10M resume=/dev/sda5 no_console_suspend ignore_loglevel mtrr=debug
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
[    0.000000] signal: max sigframe size: 1776
[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000018ebafff] usable
[    0.000000] BIOS-e820: [mem 0x0000000018ebb000-0x0000000018fe7fff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000018fe8000-0x0000000018fe8fff] usable
[    0.000000] BIOS-e820: [mem 0x0000000018fe9000-0x0000000018ffffff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000019000000-0x000000001dffcfff] usable
[    0.000000] BIOS-e820: [mem 0x000000001dffd000-0x000000001dffffff] ACPI data
[    0.000000] BIOS-e820: [mem 0x000000001e000000-0x00000000ac77cfff] usable
[    0.000000] BIOS-e820: [mem 0x00000000ac77d000-0x00000000ac77ffff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac780000-0x00000000ac780fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac781000-0x00000000ac782fff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac783000-0x00000000ac7d9fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac7da000-0x00000000ac7dafff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac7db000-0x00000000ac7dcfff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac7dd000-0x00000000ac7e7fff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac7e8000-0x00000000ac7f1fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac7f2000-0x00000000ac7f5fff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac7f6000-0x00000000ac7f9fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac7fa000-0x00000000ac7fafff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac7fb000-0x00000000ac803fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac804000-0x00000000ac810fff] type 20
[    0.000000] BIOS-e820: [mem 0x00000000ac811000-0x00000000ac813fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ac814000-0x00000000ad7fffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000b0000000-0x00000000b3ffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fed20000-0x00000000fed3ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fed50000-0x00000000fed8ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000ffa00000-0x00000000ffa3ffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000044fffffff] usable
[    0.000000] printk: bootconsole [earlyser0] enabled
[    0.000000] printk: debug: ignoring loglevel setting.
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] efi: EFI v2.0 by American Megatrends
[    0.000000] efi: ACPI 2.0=0x1dffff98 SMBIOS=0xac811018 
[    0.000000] efi: Remove mem57: MMIO range=[0xb0000000-0xb3ffffff] (64MB) from e820 map
[    0.000000] e820: remove [mem 0xb0000000-0xb3ffffff] reserved
[    0.000000] efi: Not removing mem58: MMIO range=[0xfed20000-0xfed3ffff] (128KB) from e820 map
[    0.000000] efi: Remove mem59: MMIO range=[0xfed50000-0xfed8ffff] (0MB) from e820 map
[    0.000000] e820: remove [mem 0xfed50000-0xfed8ffff] reserved
[    0.000000] efi: Remove mem60: MMIO range=[0xffa00000-0xffa3ffff] (0MB) from e820 map
[    0.000000] e820: remove [mem 0xffa00000-0xffa3ffff] reserved
[    0.000000] SMBIOS 2.6 present.
[    0.000000] DMI: Dell Inc. Precision T3600/0PTTT9, BIOS A13 05/11/2014
[    0.000000] tsc: Fast TSC calibration using PIT
[    0.000000] tsc: Detected 3591.179 MHz processor
[    0.000747] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.007211] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.012717] last_pfn = 0x450000 max_arch_pfn = 0x400000000
[    0.018124] MTRR default type: uncachable
[    0.022074] MTRR fixed ranges enabled:
[    0.025768]   00000-9FFFF write-back
[    0.029290]   A0000-BFFFF uncachable
[    0.032812]   C0000-FFFFF write-protect
[    0.036592] MTRR variable ranges enabled:
[    0.040543]   0 base 000000000000000 mask 0003FFC00000000 write-back
[    0.046814]   1 base 000000400000000 mask 0003FFFC0000000 write-back
[    0.053086]   2 base 000000440000000 mask 0003FFFF0000000 write-back
[    0.059357]   3 base 0000000AE000000 mask 0003FFFFE000000 uncachable
[    0.065629]   4 base 0000000B0000000 mask 0003FFFF0000000 uncachable
[    0.071899]   5 base 0000000C0000000 mask 0003FFFC0000000 uncachable
[    0.078172]   6 disabled
[    0.080664]   7 disabled
[    0.083155]   8 disabled
[    0.085645]   9 disabled
[    0.088140] original variable MTRRs
[    0.091574] reg 0, base: 0GB, range: 16GB, type WB
[    0.096299] reg 1, base: 16GB, range: 1GB, type WB
[    0.101026] reg 2, base: 17GB, range: 256MB, type WB
[    0.105923] reg 3, base: 2784MB, range: 32MB, type UC
[    0.110906] reg 4, base: 2816MB, range: 256MB, type UC
[    0.115975] reg 5, base: 3GB, range: 1GB, type UC
[    0.120617] After WB checking
[    0.123536] MTRR MAP PFN: 0000000000000000 - 0000000000450000
[    0.129208] After UC checking
[    0.132127] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.137798] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.143468] After sorting
[    0.146044] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.151714] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.157385] total RAM covered: 16352M
[    0.160994] rangeX: 0000000000000000 - 00000000ae000000
[    0.166148] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    0.172332] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    0.178690] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    0.185305] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    0.191835] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    0.198363] rangeX: 0000000100000000 - 0000000450000000
[    0.203518] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    0.209703] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    0.215887] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    0.222159] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    0.228603] After WB checking
[    0.231522] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.237193] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.242864] After UC checking
[    0.245784] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.251455] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.257126] After sorting
[    0.259702] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.265371] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.271042]  gran_size: 64K 	chunk_size: 64K 	num_reg: 9  	lose cover RAM: 0G

[    0.279545] rangeX: 0000000000000000 - 00000000ae000000
[    0.284700] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    0.290886] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    0.297243] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    0.303857] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    0.310386] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    0.316915] rangeX: 0000000100000000 - 0000000450000000
[    0.322069] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    0.328254] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    0.334440] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    0.340712] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    0.347155] After WB checking
[    0.350076] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.355745] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.361414] After UC checking
[    0.364335] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.370005] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.375675] After sorting
[    0.378253] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.383922] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.389593]  gran_size: 64K 	chunk_size: 128K 	num_reg: 9  	lose cover RAM: 0G

[    0.398184] rangeX: 0000000000000000 - 00000000ae000000
[    0.403338] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    0.409524] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    0.415881] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    0.422497] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    0.429027] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    0.435555] rangeX: 0000000100000000 - 0000000450000000
[    0.440710] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    0.446897] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    0.453083] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    0.459355] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    0.465798] After WB checking
[    0.468719] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.474442] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.480114] After UC checking
[    0.483034] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.488705] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.494377] After sorting
[    0.496954] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.502626] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.508297]  gran_size: 64K 	chunk_size: 256K 	num_reg: 9  	lose cover RAM: 0G

[    0.516888] rangeX: 0000000000000000 - 00000000ae000000
[    0.522044] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    0.528230] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    0.534588] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    0.541206] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    0.547736] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    0.554266] rangeX: 0000000100000000 - 0000000450000000
[    0.559420] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    0.565605] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    0.571791] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    0.578062] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    0.584506] After WB checking
[    0.587427] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.593097] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.598768] After UC checking
[    0.601690] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.607360] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.613031] After sorting
[    0.615607] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.621278] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.626949]  gran_size: 64K 	chunk_size: 512K 	num_reg: 9  	lose cover RAM: 0G

[    0.635541] rangeX: 0000000000000000 - 00000000ae000000
[    0.640695] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    0.646882] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    0.653240] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    0.659856] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    0.666386] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    0.672916] rangeX: 0000000100000000 - 0000000450000000
[    0.678073] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    0.684260] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    0.690447] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    0.696719] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    0.703164] After WB checking
[    0.706085] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.711754] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.717425] After UC checking
[    0.720347] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.726018] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.731689] After sorting
[    0.734266] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.739937] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.745608]  gran_size: 64K 	chunk_size: 1M 	num_reg: 9  	lose cover RAM: 0G

[    0.754027] rangeX: 0000000000000000 - 00000000ae000000
[    0.759182] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    0.765368] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    0.771726] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    0.778342] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    0.784872] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    0.791402] rangeX: 0000000100000000 - 0000000450000000
[    0.796558] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    0.802745] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    0.808931] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    0.815204] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    0.821648] After WB checking
[    0.824569] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.830240] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.835910] After UC checking
[    0.838831] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.844503] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.850173] After sorting
[    0.852752] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.858422] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.864093]  gran_size: 64K 	chunk_size: 2M 	num_reg: 9  	lose cover RAM: 0G

[    0.872514] rangeX: 0000000000000000 - 00000000ae000000
[    0.877668] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    0.883855] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    0.890212] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    0.896827] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    0.903357] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    0.909885] rangeX: 0000000100000000 - 0000000450000000
[    0.915040] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    0.921226] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    0.927411] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    0.933682] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    0.940126] After WB checking
[    0.943047] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.948717] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.954387] After UC checking
[    0.957308] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.962978] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.968648] After sorting
[    0.971225] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    0.976895] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    0.982565]  gran_size: 64K 	chunk_size: 4M 	num_reg: 9  	lose cover RAM: 0G

[    0.990985] rangeX: 0000000000000000 - 00000000ae000000
[    0.996139] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    1.002324] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    1.008682] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    1.015297] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    1.021826] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    1.028356] rangeX: 0000000100000000 - 0000000450000000
[    1.033512] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    1.039699] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    1.045884] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    1.052156] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    1.058599] After WB checking
[    1.061520] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.067189] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.072859] After UC checking
[    1.075781] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.081451] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.087122] After sorting
[    1.089700] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.095371] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.101041]  gran_size: 64K 	chunk_size: 8M 	num_reg: 9  	lose cover RAM: 0G

[    1.109462] rangeX: 0000000000000000 - 00000000ae000000
[    1.114616] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    1.120803] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    1.127160] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    1.133775] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    1.140304] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    1.146834] rangeX: 0000000100000000 - 0000000450000000
[    1.151989] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    1.158175] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    1.164361] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    1.170634] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    1.177078] After WB checking
[    1.179998] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.185668] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.191339] After UC checking
[    1.194259] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.199930] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.205601] After sorting
[    1.208178] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.213848] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.219517]  gran_size: 64K 	chunk_size: 16M 	num_reg: 9  	lose cover RAM: 0G

[    1.228021] rangeX: 0000000000000000 - 00000000ae000000
[    1.233177] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    1.239362] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    1.245720] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    1.252336] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    1.258864] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    1.265393] rangeX: 0000000100000000 - 0000000450000000
[    1.270547] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    1.276732] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    1.282917] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    1.289189] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    1.295632] After WB checking
[    1.298552] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.304221] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.309892] After UC checking
[    1.312812] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.318483] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.324151] After sorting
[    1.326728] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.332399] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.338070]  gran_size: 64K 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[    1.346574] range0: 0000000000000000 - 00000000b0000000
[    1.351728] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    1.357914] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    1.364273] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    1.370888] hole: 00000000ae000000 - 00000000b0000000
[    1.375872] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    1.382401] rangeX: 0000000100000000 - 0000000450000000
[    1.387557] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    1.393742] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    1.399929] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    1.406200] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    1.412643] After WB checking
[    1.415564] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    1.421234] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.426904] After UC checking
[    1.429825] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.435494] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.441165] After sorting
[    1.443742] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.449412] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.455084]  gran_size: 64K 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[    1.463591] range0: 0000000000000000 - 00000000b0000000
[    1.468746] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    1.474932] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    1.481290] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    1.487906] hole: 00000000ae000000 - 00000000b0000000
[    1.492889] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    1.499513] rangeX: 0000000100000000 - 0000000450000000
[    1.504669] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    1.510855] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    1.517041] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    1.523312] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    1.529757] After WB checking
[    1.532676] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    1.538347] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.544018] After UC checking
[    1.546939] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.552610] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.558280] After sorting
[    1.560858] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.566529] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.572199]  gran_size: 64K 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[    1.580791] range0: 0000000000000000 - 00000000b0000000
[    1.585945] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    1.592131] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    1.598489] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    1.605103] hole: 00000000ae000000 - 00000000b0000000
[    1.610086] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    1.616615] rangeX: 0000000100000000 - 0000000450000000
[    1.621770] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    1.627957] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    1.634142] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    1.640413] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    1.646858] After WB checking
[    1.649777] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    1.655449] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.661120] After UC checking
[    1.664041] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.669711] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.675382] After sorting
[    1.677959] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.683631] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.689301]  gran_size: 64K 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[    1.697894] range0: 0000000000000000 - 00000000c0000000
[    1.703049] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    1.709235] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    1.715421] hole: 00000000ae000000 - 00000000c0000000
[    1.720405] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    1.726935] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    1.733550] range0: 0000000100000000 - 0000000460000000
[    1.738705] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    1.744891] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    1.751078] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    1.757351] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[    1.763797] hole: 0000000450000000 - 0000000460000000
[    1.768781] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[    1.775483] After WB checking
[    1.778404] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    1.784076] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[    1.789747] After UC checking
[    1.792668] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.798339] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.804010] After sorting
[    1.806587] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.812258] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.817929]  gran_size: 64K 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[    1.826522] range0: 0000000000000000 - 00000000c0000000
[    1.831678] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    1.837864] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    1.844052] hole: 00000000ae000000 - 00000000c0000000
[    1.849035] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    1.855565] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    1.862183] range0: 0000000100000000 - 0000000480000000
[    1.867338] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    1.873525] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    1.879711] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    1.885985] hole: 0000000450000000 - 0000000480000000
[    1.890968] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    1.897670] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    1.904372] After WB checking
[    1.907294] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    1.912965] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[    1.918636] After UC checking
[    1.921557] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.927228] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.932899] After sorting
[    1.935476] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    1.941147] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    1.946820]  gran_size: 64K 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[    1.955239] range0: 0000000000000000 - 0000000100000000
[    1.960394] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[    1.966580] hole: 00000000ae000000 - 0000000100000000
[    1.971564] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[    1.978094] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[    1.984710] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[    1.990896] range0: 0000000100000000 - 0000000480000000
[    1.996051] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    2.002237] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    2.008423] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    2.014694] hole: 0000000450000000 - 0000000480000000
[    2.019677] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    2.026378] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    2.033078] After WB checking
[    2.036000] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[    2.041672] After UC checking
[    2.044591] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.050261] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.055931] After sorting
[    2.058508] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.064178] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.069849]  gran_size: 64K 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[    2.078268] rangeX: 0000000000000000 - 00000000ae000000
[    2.083422] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    2.089608] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    2.095965] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    2.102581] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    2.109109] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    2.115640] rangeX: 0000000100000000 - 0000000450000000
[    2.120795] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    2.126982] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    2.133168] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    2.139441] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    2.145884] After WB checking
[    2.148805] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.154476] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.160147] After UC checking
[    2.163068] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.168739] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.174409] After sorting
[    2.176986] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.182657] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.188327]  gran_size: 128K 	chunk_size: 128K 	num_reg: 9  	lose cover RAM: 0G

[    2.197003] rangeX: 0000000000000000 - 00000000ae000000
[    2.202158] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    2.208342] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    2.214699] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    2.221315] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    2.227845] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    2.234373] rangeX: 0000000100000000 - 0000000450000000
[    2.239528] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    2.245714] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    2.251901] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    2.258173] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    2.264617] After WB checking
[    2.267537] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.273207] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.278877] After UC checking
[    2.281798] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.287470] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.293140] After sorting
[    2.295716] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.301387] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.307058]  gran_size: 128K 	chunk_size: 256K 	num_reg: 9  	lose cover RAM: 0G

[    2.315734] rangeX: 0000000000000000 - 00000000ae000000
[    2.320889] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    2.327075] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    2.333431] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    2.340047] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    2.346574] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    2.353104] rangeX: 0000000100000000 - 0000000450000000
[    2.358258] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    2.364443] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    2.370628] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    2.376900] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    2.383343] After WB checking
[    2.386264] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.391934] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.397604] After UC checking
[    2.400525] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.406195] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.411865] After sorting
[    2.414442] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.420112] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.425783]  gran_size: 128K 	chunk_size: 512K 	num_reg: 9  	lose cover RAM: 0G

[    2.434458] rangeX: 0000000000000000 - 00000000ae000000
[    2.439613] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    2.445797] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    2.452154] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    2.458769] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    2.465298] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    2.471828] rangeX: 0000000100000000 - 0000000450000000
[    2.476982] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    2.483168] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    2.489352] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    2.495623] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    2.502067] After WB checking
[    2.504987] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.510656] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.516327] After UC checking
[    2.519247] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.525022] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.530692] After sorting
[    2.533268] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.538938] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.544607]  gran_size: 128K 	chunk_size: 1M 	num_reg: 9  	lose cover RAM: 0G

[    2.553112] rangeX: 0000000000000000 - 00000000ae000000
[    2.558265] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    2.564450] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    2.570808] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    2.577424] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    2.583952] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    2.590482] rangeX: 0000000100000000 - 0000000450000000
[    2.595637] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    2.601824] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    2.608009] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    2.614281] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    2.620725] After WB checking
[    2.623646] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.629318] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.634987] After UC checking
[    2.637909] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.643578] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.649251] After sorting
[    2.651827] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.657498] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.663168]  gran_size: 128K 	chunk_size: 2M 	num_reg: 9  	lose cover RAM: 0G

[    2.671674] rangeX: 0000000000000000 - 00000000ae000000
[    2.676828] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    2.683014] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    2.689374] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    2.695990] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    2.702520] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    2.709050] rangeX: 0000000100000000 - 0000000450000000
[    2.714205] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    2.720391] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    2.726578] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    2.732851] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    2.739295] After WB checking
[    2.742217] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.747888] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.753558] After UC checking
[    2.756481] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.762152] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.767823] After sorting
[    2.770401] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.776071] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.781742]  gran_size: 128K 	chunk_size: 4M 	num_reg: 9  	lose cover RAM: 0G

[    2.790250] rangeX: 0000000000000000 - 00000000ae000000
[    2.795404] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    2.801592] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    2.807950] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    2.814566] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    2.821096] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    2.827626] rangeX: 0000000100000000 - 0000000450000000
[    2.832781] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    2.838969] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    2.845154] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    2.851427] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    2.857871] After WB checking
[    2.860792] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.866462] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.872133] After UC checking
[    2.875053] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.880723] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.886394] After sorting
[    2.888971] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.894642] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.900312]  gran_size: 128K 	chunk_size: 8M 	num_reg: 9  	lose cover RAM: 0G

[    2.908816] rangeX: 0000000000000000 - 00000000ae000000
[    2.913972] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    2.920157] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    2.926515] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    2.933131] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    2.939660] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    2.946189] rangeX: 0000000100000000 - 0000000450000000
[    2.951344] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    2.957530] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    2.963717] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    2.969988] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    2.976433] After WB checking
[    2.979354] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.985025] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    2.990695] After UC checking
[    2.993617] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    2.999289] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.004960] After sorting
[    3.007536] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.013206] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.018877]  gran_size: 128K 	chunk_size: 16M 	num_reg: 9  	lose cover RAM: 0G

[    3.027470] rangeX: 0000000000000000 - 00000000ae000000
[    3.032624] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    3.038811] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    3.045169] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    3.051786] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    3.058316] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    3.064846] rangeX: 0000000100000000 - 0000000450000000
[    3.070001] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    3.076188] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    3.082375] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    3.088646] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    3.095090] After WB checking
[    3.098011] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.103681] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.109352] After UC checking
[    3.112274] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.117944] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.123615] After sorting
[    3.126193] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.131864] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.137534]  gran_size: 128K 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[    3.146126] range0: 0000000000000000 - 00000000b0000000
[    3.151280] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    3.157467] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    3.163823] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    3.170439] hole: 00000000ae000000 - 00000000b0000000
[    3.175422] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    3.181950] rangeX: 0000000100000000 - 0000000450000000
[    3.187105] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    3.193291] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    3.199476] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    3.205748] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    3.212191] After WB checking
[    3.215112] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    3.220782] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.226451] After UC checking
[    3.229372] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.235042] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.240714] After sorting
[    3.243291] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.248961] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.254633]  gran_size: 128K 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[    3.263224] range0: 0000000000000000 - 00000000b0000000
[    3.268379] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    3.274565] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    3.280925] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    3.287541] hole: 00000000ae000000 - 00000000b0000000
[    3.292524] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    3.299054] rangeX: 0000000100000000 - 0000000450000000
[    3.304209] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    3.310395] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    3.316581] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    3.322853] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    3.329298] After WB checking
[    3.332219] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    3.337889] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.343561] After UC checking
[    3.346481] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.352152] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.357824] After sorting
[    3.360401] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.366071] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.371742]  gran_size: 128K 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[    3.380419] range0: 0000000000000000 - 00000000b0000000
[    3.385575] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    3.391762] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    3.398120] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    3.404736] hole: 00000000ae000000 - 00000000b0000000
[    3.409719] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    3.416250] rangeX: 0000000100000000 - 0000000450000000
[    3.421405] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    3.427591] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    3.433778] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    3.440050] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    3.446493] After WB checking
[    3.449414] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    3.455084] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.460756] After UC checking
[    3.463676] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.469348] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.475019] After sorting
[    3.477595] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.483266] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.488937]  gran_size: 128K 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[    3.497614] range0: 0000000000000000 - 00000000c0000000
[    3.502768] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    3.508955] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    3.515139] hole: 00000000ae000000 - 00000000c0000000
[    3.520122] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    3.526652] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    3.533267] range0: 0000000100000000 - 0000000460000000
[    3.538422] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    3.544609] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    3.550875] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    3.557146] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[    3.563588] hole: 0000000450000000 - 0000000460000000
[    3.568571] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[    3.575273] After WB checking
[    3.578193] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    3.583864] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[    3.589535] After UC checking
[    3.592455] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.598126] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.603797] After sorting
[    3.606374] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.612045] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.617716]  gran_size: 128K 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[    3.626392] range0: 0000000000000000 - 00000000c0000000
[    3.631546] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    3.637732] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    3.643917] hole: 00000000ae000000 - 00000000c0000000
[    3.648900] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    3.655430] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    3.662045] range0: 0000000100000000 - 0000000480000000
[    3.667201] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    3.673387] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    3.679573] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    3.685842] hole: 0000000450000000 - 0000000480000000
[    3.690826] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    3.697527] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    3.704228] After WB checking
[    3.707149] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    3.712819] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[    3.718490] After UC checking
[    3.721409] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.727080] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.732750] After sorting
[    3.735327] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.740997] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.746667]  gran_size: 128K 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[    3.755173] range0: 0000000000000000 - 0000000100000000
[    3.760328] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[    3.766513] hole: 00000000ae000000 - 0000000100000000
[    3.771497] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[    3.778026] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[    3.784642] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[    3.790827] range0: 0000000100000000 - 0000000480000000
[    3.795982] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    3.802168] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    3.808354] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    3.814625] hole: 0000000450000000 - 0000000480000000
[    3.819608] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    3.826310] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    3.833012] After WB checking
[    3.835933] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[    3.841604] After UC checking
[    3.844525] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.850195] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.855865] After sorting
[    3.858443] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.864113] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.869784]  gran_size: 128K 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[    3.878289] rangeX: 0000000000000000 - 00000000ae000000
[    3.883444] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    3.889630] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    3.895988] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    3.902603] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    3.909131] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    3.915660] rangeX: 0000000100000000 - 0000000450000000
[    3.920814] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    3.927000] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    3.933187] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    3.939457] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    3.945902] After WB checking
[    3.948823] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.954492] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.960164] After UC checking
[    3.963085] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.968754] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.974425] After sorting
[    3.977001] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    3.982672] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    3.988341]  gran_size: 256K 	chunk_size: 256K 	num_reg: 9  	lose cover RAM: 0G

[    3.997018] rangeX: 0000000000000000 - 00000000ae000000
[    4.002173] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    4.008357] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    4.014715] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    4.021329] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    4.027860] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    4.034388] rangeX: 0000000100000000 - 0000000450000000
[    4.039542] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    4.045728] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    4.051913] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    4.058184] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    4.064626] After WB checking
[    4.067547] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.073217] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.078887] After UC checking
[    4.081808] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.087477] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.093149] After sorting
[    4.095725] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.101396] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.107066]  gran_size: 256K 	chunk_size: 512K 	num_reg: 9  	lose cover RAM: 0G

[    4.115742] rangeX: 0000000000000000 - 00000000ae000000
[    4.120898] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    4.127085] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    4.133443] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    4.140059] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    4.146589] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    4.153119] rangeX: 0000000100000000 - 0000000450000000
[    4.158273] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    4.164460] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    4.170645] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    4.176916] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    4.183361] After WB checking
[    4.186281] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.191951] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.197621] After UC checking
[    4.200543] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.206212] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.211883] After sorting
[    4.214460] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.220131] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.225802]  gran_size: 256K 	chunk_size: 1M 	num_reg: 9  	lose cover RAM: 0G

[    4.234307] rangeX: 0000000000000000 - 00000000ae000000
[    4.239462] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    4.245648] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    4.252004] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    4.258620] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    4.265149] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    4.271679] rangeX: 0000000100000000 - 0000000450000000
[    4.276834] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    4.283020] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    4.289205] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    4.295477] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    4.301918] After WB checking
[    4.304839] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.310509] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.316179] After UC checking
[    4.319100] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.324771] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.330441] After sorting
[    4.333018] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.338688] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.344361]  gran_size: 256K 	chunk_size: 2M 	num_reg: 9  	lose cover RAM: 0G

[    4.352866] rangeX: 0000000000000000 - 00000000ae000000
[    4.358021] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    4.364207] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    4.370566] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    4.377180] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    4.383712] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    4.390242] rangeX: 0000000100000000 - 0000000450000000
[    4.395397] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    4.401584] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    4.407771] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    4.414043] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    4.420488] After WB checking
[    4.423408] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.429079] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.434749] After UC checking
[    4.437670] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.443342] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.449012] After sorting
[    4.451591] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.457261] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.462932]  gran_size: 256K 	chunk_size: 4M 	num_reg: 9  	lose cover RAM: 0G

[    4.471436] rangeX: 0000000000000000 - 00000000ae000000
[    4.476591] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    4.482776] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    4.489133] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    4.495749] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    4.502278] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    4.508807] rangeX: 0000000100000000 - 0000000450000000
[    4.513962] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    4.520148] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    4.526335] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    4.532606] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    4.539051] After WB checking
[    4.541973] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.547644] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.553314] After UC checking
[    4.556235] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.561905] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.567576] After sorting
[    4.570197] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.575867] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.581538]  gran_size: 256K 	chunk_size: 8M 	num_reg: 9  	lose cover RAM: 0G

[    4.590043] rangeX: 0000000000000000 - 00000000ae000000
[    4.595197] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    4.601384] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    4.607741] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    4.614358] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    4.620887] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    4.627418] rangeX: 0000000100000000 - 0000000450000000
[    4.632573] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    4.638759] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    4.644946] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    4.651219] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    4.657663] After WB checking
[    4.660583] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.666254] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.671924] After UC checking
[    4.674846] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.680518] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.686189] After sorting
[    4.688766] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.694437] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.700109]  gran_size: 256K 	chunk_size: 16M 	num_reg: 9  	lose cover RAM: 0G

[    4.708701] rangeX: 0000000000000000 - 00000000ae000000
[    4.713857] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    4.720043] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    4.726400] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    4.733016] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    4.739546] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    4.746076] rangeX: 0000000100000000 - 0000000450000000
[    4.751231] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    4.757417] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    4.763602] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    4.769872] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    4.776315] After WB checking
[    4.779236] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.784906] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.790576] After UC checking
[    4.793497] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.799169] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.804840] After sorting
[    4.807416] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.813087] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.818758]  gran_size: 256K 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[    4.827348] range0: 0000000000000000 - 00000000b0000000
[    4.832502] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    4.838688] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    4.845045] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    4.851659] hole: 00000000ae000000 - 00000000b0000000
[    4.856642] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    4.863172] rangeX: 0000000100000000 - 0000000450000000
[    4.868327] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    4.874512] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    4.880698] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    4.886970] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    4.893414] After WB checking
[    4.896335] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    4.902005] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.907675] After UC checking
[    4.910596] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.916267] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.921936] After sorting
[    4.924514] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    4.930185] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    4.935856]  gran_size: 256K 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[    4.944447] range0: 0000000000000000 - 00000000b0000000
[    4.949603] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    4.955789] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    4.962147] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    4.968763] hole: 00000000ae000000 - 00000000b0000000
[    4.973746] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    4.980276] rangeX: 0000000100000000 - 0000000450000000
[    4.985431] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    4.991617] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    4.997804] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    5.004074] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    5.010518] After WB checking
[    5.013439] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    5.019110] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.024779] After UC checking
[    5.027700] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.033370] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.039040] After sorting
[    5.041618] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.047287] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.052958]  gran_size: 256K 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[    5.061635] range0: 0000000000000000 - 00000000b0000000
[    5.066790] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    5.072975] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    5.079333] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    5.085950] hole: 00000000ae000000 - 00000000b0000000
[    5.090933] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    5.097462] rangeX: 0000000100000000 - 0000000450000000
[    5.102616] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    5.108802] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    5.114988] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    5.121261] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    5.127705] After WB checking
[    5.130625] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    5.136295] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.141967] After UC checking
[    5.144886] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.150558] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.156227] After sorting
[    5.158806] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.164475] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.170146]  gran_size: 256K 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[    5.178825] range0: 0000000000000000 - 00000000c0000000
[    5.183981] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    5.190167] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    5.196352] hole: 00000000ae000000 - 00000000c0000000
[    5.201336] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    5.207865] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    5.214479] range0: 0000000100000000 - 0000000460000000
[    5.219634] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    5.225819] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    5.232005] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    5.238277] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[    5.244720] hole: 0000000450000000 - 0000000460000000
[    5.249703] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[    5.256405] After WB checking
[    5.259326] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    5.264995] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[    5.270667] After UC checking
[    5.273586] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.279256] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.284926] After sorting
[    5.287505] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.293175] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.298846]  gran_size: 256K 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[    5.307523] range0: 0000000000000000 - 00000000c0000000
[    5.312678] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    5.318864] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    5.325050] hole: 00000000ae000000 - 00000000c0000000
[    5.330033] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    5.336564] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    5.343179] range0: 0000000100000000 - 0000000480000000
[    5.348334] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    5.354520] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    5.360706] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    5.366978] hole: 0000000450000000 - 0000000480000000
[    5.371961] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    5.378662] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    5.385364] After WB checking
[    5.388284] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    5.393955] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[    5.399626] After UC checking
[    5.402545] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.408216] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.413887] After sorting
[    5.416463] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.422134] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.427805]  gran_size: 256K 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[    5.436311] range0: 0000000000000000 - 0000000100000000
[    5.441465] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[    5.447651] hole: 00000000ae000000 - 0000000100000000
[    5.452634] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[    5.459164] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[    5.465782] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[    5.471966] range0: 0000000100000000 - 0000000480000000
[    5.477122] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    5.483309] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    5.489495] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    5.495767] hole: 0000000450000000 - 0000000480000000
[    5.500750] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    5.507454] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    5.514156] After WB checking
[    5.517078] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[    5.522749] After UC checking
[    5.525670] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.531339] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.537011] After sorting
[    5.539589] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.545259] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.550931]  gran_size: 256K 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[    5.559437] rangeX: 0000000000000000 - 00000000ae000000
[    5.564592] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    5.570778] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    5.577137] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    5.583751] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    5.590281] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    5.596907] rangeX: 0000000100000000 - 0000000450000000
[    5.602063] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    5.608249] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    5.614435] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    5.620709] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    5.627152] After WB checking
[    5.630073] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.635744] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.641414] After UC checking
[    5.644337] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.650007] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.655678] After sorting
[    5.658255] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.663925] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.669596]  gran_size: 512K 	chunk_size: 512K 	num_reg: 9  	lose cover RAM: 0G

[    5.678275] rangeX: 0000000000000000 - 00000000ae000000
[    5.683429] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    5.689615] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    5.695973] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    5.702589] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    5.709119] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    5.715649] rangeX: 0000000100000000 - 0000000450000000
[    5.720806] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    5.726992] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    5.733178] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    5.739451] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    5.745895] After WB checking
[    5.748815] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.754486] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.760157] After UC checking
[    5.763078] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.768748] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.774420] After sorting
[    5.776997] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.782668] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.788339]  gran_size: 512K 	chunk_size: 1M 	num_reg: 9  	lose cover RAM: 0G

[    5.796844] rangeX: 0000000000000000 - 00000000ae000000
[    5.801998] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    5.808185] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    5.814543] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    5.821158] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    5.827689] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    5.834218] rangeX: 0000000100000000 - 0000000450000000
[    5.839374] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    5.845560] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    5.851745] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    5.858017] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    5.864460] After WB checking
[    5.867381] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.873052] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.878723] After UC checking
[    5.881644] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.887314] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.892986] After sorting
[    5.895563] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.901233] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.906906]  gran_size: 512K 	chunk_size: 2M 	num_reg: 9  	lose cover RAM: 0G

[    5.915411] rangeX: 0000000000000000 - 00000000ae000000
[    5.920565] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    5.926751] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    5.933110] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    5.939725] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    5.946255] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    5.952785] rangeX: 0000000100000000 - 0000000450000000
[    5.957940] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    5.964126] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    5.970311] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    5.976583] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    5.983028] After WB checking
[    5.985949] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    5.991620] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    5.997292] After UC checking
[    6.000213] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.005883] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.011556] After sorting
[    6.014132] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.019803] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.025473]  gran_size: 512K 	chunk_size: 4M 	num_reg: 9  	lose cover RAM: 0G

[    6.033980] rangeX: 0000000000000000 - 00000000ae000000
[    6.039134] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    6.045321] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    6.051679] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    6.058295] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    6.064825] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    6.071353] rangeX: 0000000100000000 - 0000000450000000
[    6.076508] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    6.082693] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    6.088879] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    6.095152] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    6.101594] After WB checking
[    6.104516] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.110187] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.115856] After UC checking
[    6.118778] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.124446] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.130116] After sorting
[    6.132693] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.138363] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.144033]  gran_size: 512K 	chunk_size: 8M 	num_reg: 9  	lose cover RAM: 0G

[    6.152537] rangeX: 0000000000000000 - 00000000ae000000
[    6.157692] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    6.163877] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    6.170234] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    6.176848] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    6.183376] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    6.189904] rangeX: 0000000100000000 - 0000000450000000
[    6.195059] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    6.201244] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    6.207429] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    6.213699] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    6.220142] After WB checking
[    6.223062] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.228732] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.234402] After UC checking
[    6.237322] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.242993] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.248662] After sorting
[    6.251240] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.256909] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.262580]  gran_size: 512K 	chunk_size: 16M 	num_reg: 9  	lose cover RAM: 0G

[    6.271173] rangeX: 0000000000000000 - 00000000ae000000
[    6.276327] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    6.282515] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    6.288873] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    6.295489] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    6.302018] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    6.308549] rangeX: 0000000100000000 - 0000000450000000
[    6.313704] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    6.319890] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    6.326076] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    6.332348] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    6.338793] After WB checking
[    6.341712] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.347382] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.353054] After UC checking
[    6.355975] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.361645] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.367315] After sorting
[    6.369892] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.375563] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.381233]  gran_size: 512K 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[    6.389825] range0: 0000000000000000 - 00000000b0000000
[    6.394980] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    6.401166] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    6.407524] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    6.414140] hole: 00000000ae000000 - 00000000b0000000
[    6.419123] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    6.425653] rangeX: 0000000100000000 - 0000000450000000
[    6.430809] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    6.436996] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    6.443182] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    6.449453] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    6.455898] After WB checking
[    6.458818] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    6.464489] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.470160] After UC checking
[    6.473081] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.478751] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.484421] After sorting
[    6.486999] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.492670] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.498342]  gran_size: 512K 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[    6.506933] range0: 0000000000000000 - 00000000b0000000
[    6.512088] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    6.518276] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    6.524634] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    6.531249] hole: 00000000ae000000 - 00000000b0000000
[    6.536231] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    6.542761] rangeX: 0000000100000000 - 0000000450000000
[    6.547916] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    6.554103] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    6.560290] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    6.566562] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    6.573005] After WB checking
[    6.575927] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    6.581598] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.587268] After UC checking
[    6.590188] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.595859] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.601530] After sorting
[    6.604108] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.609779] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.615449]  gran_size: 512K 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[    6.624220] range0: 0000000000000000 - 00000000b0000000
[    6.629376] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    6.635561] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    6.641919] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    6.648535] hole: 00000000ae000000 - 00000000b0000000
[    6.653518] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    6.660048] rangeX: 0000000100000000 - 0000000450000000
[    6.665202] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    6.671388] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    6.677574] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    6.683844] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    6.690290] After WB checking
[    6.693211] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    6.698880] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.704552] After UC checking
[    6.707473] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.713143] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.718814] After sorting
[    6.721391] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.727062] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.732733]  gran_size: 512K 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[    6.741410] range0: 0000000000000000 - 00000000c0000000
[    6.746566] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    6.752751] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    6.758937] hole: 00000000ae000000 - 00000000c0000000
[    6.763920] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    6.770449] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    6.777064] range0: 0000000100000000 - 0000000460000000
[    6.782219] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    6.788405] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    6.794593] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    6.800866] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[    6.807310] hole: 0000000450000000 - 0000000460000000
[    6.812294] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[    6.818997] After WB checking
[    6.821918] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    6.827589] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[    6.833261] After UC checking
[    6.836181] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.841853] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.847523] After sorting
[    6.850100] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.855771] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.861444]  gran_size: 512K 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[    6.870120] range0: 0000000000000000 - 00000000c0000000
[    6.875277] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    6.881463] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    6.887651] hole: 00000000ae000000 - 00000000c0000000
[    6.892634] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    6.899165] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    6.905783] range0: 0000000100000000 - 0000000480000000
[    6.910938] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    6.917125] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    6.923312] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    6.929584] hole: 0000000450000000 - 0000000480000000
[    6.934568] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    6.941270] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    6.947974] After WB checking
[    6.950894] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    6.956565] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[    6.962236] After UC checking
[    6.965157] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.970827] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.976499] After sorting
[    6.979076] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    6.984747] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    6.990418]  gran_size: 512K 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[    6.998925] range0: 0000000000000000 - 0000000100000000
[    7.004080] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[    7.010267] hole: 00000000ae000000 - 0000000100000000
[    7.015251] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[    7.021781] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[    7.028398] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[    7.034583] range0: 0000000100000000 - 0000000480000000
[    7.039739] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    7.045923] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    7.052110] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    7.058382] hole: 0000000450000000 - 0000000480000000
[    7.063365] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    7.070067] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    7.076768] After WB checking
[    7.079690] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[    7.085361] After UC checking
[    7.088280] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.093952] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.099623] After sorting
[    7.102200] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.107871] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.113541]  gran_size: 512K 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[    7.122048] rangeX: 0000000000000000 - 00000000ae000000
[    7.127203] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    7.133388] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    7.139747] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    7.146364] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    7.152895] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    7.159426] rangeX: 0000000100000000 - 0000000450000000
[    7.164580] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    7.170767] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    7.176955] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    7.183225] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    7.189670] After WB checking
[    7.192590] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.198261] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.203933] After UC checking
[    7.206853] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.212524] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.218195] After sorting
[    7.220772] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.226443] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.232114]  gran_size: 1M 	chunk_size: 1M 	num_reg: 9  	lose cover RAM: 0G

[    7.240447] rangeX: 0000000000000000 - 00000000ae000000
[    7.245602] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    7.251788] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    7.258146] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    7.264762] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    7.271292] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    7.277822] rangeX: 0000000100000000 - 0000000450000000
[    7.282978] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    7.289165] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    7.295352] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    7.301625] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    7.308069] After WB checking
[    7.310990] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.316663] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.322335] After UC checking
[    7.325257] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.330929] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.336602] After sorting
[    7.339179] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.344850] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.350522]  gran_size: 1M 	chunk_size: 2M 	num_reg: 9  	lose cover RAM: 0G

[    7.358859] rangeX: 0000000000000000 - 00000000ae000000
[    7.364014] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    7.370201] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    7.376561] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    7.383178] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    7.389708] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    7.396239] rangeX: 0000000100000000 - 0000000450000000
[    7.401393] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    7.407581] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    7.413769] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    7.420042] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    7.426487] After WB checking
[    7.429409] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.435081] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.440752] After UC checking
[    7.443673] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.449345] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.455017] After sorting
[    7.457594] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.463265] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.468936]  gran_size: 1M 	chunk_size: 4M 	num_reg: 9  	lose cover RAM: 0G

[    7.477270] rangeX: 0000000000000000 - 00000000ae000000
[    7.482425] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    7.488612] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    7.494971] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    7.501587] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    7.508119] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    7.514650] rangeX: 0000000100000000 - 0000000450000000
[    7.519805] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    7.525993] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    7.532178] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    7.538451] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    7.544896] After WB checking
[    7.547817] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.553490] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.559160] After UC checking
[    7.562081] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.567753] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.573424] After sorting
[    7.576001] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.581672] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.587343]  gran_size: 1M 	chunk_size: 8M 	num_reg: 9  	lose cover RAM: 0G

[    7.595678] rangeX: 0000000000000000 - 00000000ae000000
[    7.600832] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    7.607018] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    7.613377] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    7.619993] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    7.626523] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    7.633053] rangeX: 0000000100000000 - 0000000450000000
[    7.638207] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    7.644488] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    7.650672] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    7.656944] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    7.663389] After WB checking
[    7.666311] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.671981] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.677652] After UC checking
[    7.680573] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.686243] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.691915] After sorting
[    7.694491] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.700162] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.705834]  gran_size: 1M 	chunk_size: 16M 	num_reg: 9  	lose cover RAM: 0G

[    7.714254] rangeX: 0000000000000000 - 00000000ae000000
[    7.719408] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    7.725595] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    7.731953] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    7.738568] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    7.745099] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    7.751628] rangeX: 0000000100000000 - 0000000450000000
[    7.756782] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    7.762969] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    7.769154] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    7.775428] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    7.781872] After WB checking
[    7.784793] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.790463] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.796134] After UC checking
[    7.799056] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.804726] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.810397] After sorting
[    7.812975] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.818646] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.824317]  gran_size: 1M 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[    7.832737] range0: 0000000000000000 - 00000000b0000000
[    7.837892] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    7.844078] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    7.850436] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    7.857052] hole: 00000000ae000000 - 00000000b0000000
[    7.862036] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    7.868566] rangeX: 0000000100000000 - 0000000450000000
[    7.873720] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    7.879906] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    7.886094] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    7.892365] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    7.898809] After WB checking
[    7.901730] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    7.907401] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.913071] After UC checking
[    7.915992] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.921663] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.927334] After sorting
[    7.929911] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    7.935582] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    7.941252]  gran_size: 1M 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[    7.949671] range0: 0000000000000000 - 00000000b0000000
[    7.954826] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    7.961011] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    7.967368] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    7.973984] hole: 00000000ae000000 - 00000000b0000000
[    7.978967] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    7.985496] rangeX: 0000000100000000 - 0000000450000000
[    7.990650] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    7.996837] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    8.003023] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    8.009295] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    8.015739] After WB checking
[    8.018662] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    8.024332] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.030004] After UC checking
[    8.032925] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.038595] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.044267] After sorting
[    8.046844] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.052516] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.058187]  gran_size: 1M 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[    8.066694] range0: 0000000000000000 - 00000000b0000000
[    8.071850] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    8.078036] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    8.084394] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    8.091010] hole: 00000000ae000000 - 00000000b0000000
[    8.095994] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    8.102525] rangeX: 0000000100000000 - 0000000450000000
[    8.107681] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    8.113867] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    8.120055] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    8.126327] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    8.132772] After WB checking
[    8.135693] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    8.141362] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.147033] After UC checking
[    8.149955] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.155625] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.161297] After sorting
[    8.163874] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.169545] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.175217]  gran_size: 1M 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[    8.183723] range0: 0000000000000000 - 00000000c0000000
[    8.188877] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    8.195063] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    8.201249] hole: 00000000ae000000 - 00000000c0000000
[    8.206232] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    8.212762] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    8.219378] range0: 0000000100000000 - 0000000460000000
[    8.224532] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    8.230720] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    8.236907] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    8.243180] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[    8.249624] hole: 0000000450000000 - 0000000460000000
[    8.254607] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[    8.261307] After WB checking
[    8.264229] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    8.269898] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[    8.275569] After UC checking
[    8.278489] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.284160] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.289829] After sorting
[    8.292407] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.298077] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.303747]  gran_size: 1M 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[    8.312252] range0: 0000000000000000 - 00000000c0000000
[    8.317407] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    8.323593] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    8.329779] hole: 00000000ae000000 - 00000000c0000000
[    8.334763] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    8.341294] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    8.347909] range0: 0000000100000000 - 0000000480000000
[    8.353064] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    8.359252] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    8.365438] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    8.371711] hole: 0000000450000000 - 0000000480000000
[    8.376693] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    8.383395] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    8.390099] After WB checking
[    8.393019] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    8.398692] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[    8.404363] After UC checking
[    8.407284] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.412955] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.418625] After sorting
[    8.421203] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.426874] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.432547]  gran_size: 1M 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[    8.440881] range0: 0000000000000000 - 0000000100000000
[    8.446036] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[    8.452222] hole: 00000000ae000000 - 0000000100000000
[    8.457205] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[    8.463737] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[    8.470351] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[    8.476537] range0: 0000000100000000 - 0000000480000000
[    8.481693] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    8.487880] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    8.494066] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    8.500340] hole: 0000000450000000 - 0000000480000000
[    8.505323] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    8.512025] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    8.518727] After WB checking
[    8.521649] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[    8.527320] After UC checking
[    8.530240] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.535911] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.541581] After sorting
[    8.544160] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.549830] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.555501]  gran_size: 1M 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[    8.563835] rangeX: 0000000000000000 - 00000000ae000000
[    8.568990] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    8.575176] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    8.581532] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    8.588149] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    8.594679] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    8.601209] rangeX: 0000000100000000 - 0000000450000000
[    8.606365] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    8.612551] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    8.618738] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    8.625010] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    8.631454] After WB checking
[    8.634376] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.640046] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.645717] After UC checking
[    8.648638] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.654308] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.659979] After sorting
[    8.662556] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.668278] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.673948]  gran_size: 2M 	chunk_size: 2M 	num_reg: 9  	lose cover RAM: 0G

[    8.682282] rangeX: 0000000000000000 - 00000000ae000000
[    8.687436] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    8.693624] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    8.699982] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    8.706598] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    8.713127] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    8.719658] rangeX: 0000000100000000 - 0000000450000000
[    8.724815] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    8.731001] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    8.737188] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    8.743463] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    8.749908] After WB checking
[    8.752830] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.758502] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.764174] After UC checking
[    8.767095] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.772766] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.778439] After sorting
[    8.781017] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.786688] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.792359]  gran_size: 2M 	chunk_size: 4M 	num_reg: 9  	lose cover RAM: 0G

[    8.800695] rangeX: 0000000000000000 - 00000000ae000000
[    8.805850] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    8.812037] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    8.818397] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    8.825013] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    8.831544] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    8.838076] rangeX: 0000000100000000 - 0000000450000000
[    8.843231] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    8.849418] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    8.855605] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    8.861879] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    8.868323] After WB checking
[    8.871244] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.876917] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.882588] After UC checking
[    8.885509] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.891180] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.896850] After sorting
[    8.899429] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.905101] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    8.910772]  gran_size: 2M 	chunk_size: 8M 	num_reg: 9  	lose cover RAM: 0G

[    8.919108] rangeX: 0000000000000000 - 00000000ae000000
[    8.924264] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    8.930451] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    8.936809] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    8.943425] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    8.949955] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    8.956486] rangeX: 0000000100000000 - 0000000450000000
[    8.961642] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    8.967829] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    8.974017] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    8.980290] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    8.986735] After WB checking
[    8.989656] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    8.995329] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.001000] After UC checking
[    9.003923] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.009594] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.015266] After sorting
[    9.017843] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.023514] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.029187]  gran_size: 2M 	chunk_size: 16M 	num_reg: 9  	lose cover RAM: 0G

[    9.037607] rangeX: 0000000000000000 - 00000000ae000000
[    9.042764] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    9.048951] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    9.055310] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    9.061927] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    9.068458] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    9.074989] rangeX: 0000000100000000 - 0000000450000000
[    9.080145] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    9.086331] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    9.092518] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    9.098791] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    9.105237] After WB checking
[    9.108157] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.113829] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.119501] After UC checking
[    9.122422] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.128094] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.133765] After sorting
[    9.136342] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.142014] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.147685]  gran_size: 2M 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[    9.156105] range0: 0000000000000000 - 00000000b0000000
[    9.161261] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    9.167447] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    9.173807] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    9.180423] hole: 00000000ae000000 - 00000000b0000000
[    9.185406] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    9.191936] rangeX: 0000000100000000 - 0000000450000000
[    9.197091] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    9.203279] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    9.209465] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    9.215737] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    9.222181] After WB checking
[    9.225103] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    9.230774] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.236446] After UC checking
[    9.239367] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.245038] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.250709] After sorting
[    9.253287] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.258960] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.264631]  gran_size: 2M 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[    9.273053] range0: 0000000000000000 - 00000000b0000000
[    9.278207] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    9.284394] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    9.290752] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    9.297369] hole: 00000000ae000000 - 00000000b0000000
[    9.302352] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    9.308882] rangeX: 0000000100000000 - 0000000450000000
[    9.314038] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    9.320227] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    9.326413] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    9.332686] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    9.339132] After WB checking
[    9.342054] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    9.347726] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.353397] After UC checking
[    9.356318] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.361991] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.367663] After sorting
[    9.370241] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.375913] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.381585]  gran_size: 2M 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[    9.390093] range0: 0000000000000000 - 00000000b0000000
[    9.395249] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    9.401437] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    9.407796] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[    9.414414] hole: 00000000ae000000 - 00000000b0000000
[    9.419398] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[    9.425928] rangeX: 0000000100000000 - 0000000450000000
[    9.431085] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    9.437273] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    9.443459] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    9.449732] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[    9.456177] After WB checking
[    9.459098] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[    9.464770] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.470442] After UC checking
[    9.473364] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.479036] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.484708] After sorting
[    9.487286] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.492957] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.498628]  gran_size: 2M 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[    9.507136] range0: 0000000000000000 - 00000000c0000000
[    9.512292] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    9.518480] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    9.524667] hole: 00000000ae000000 - 00000000c0000000
[    9.529651] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    9.536182] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    9.542798] range0: 0000000100000000 - 0000000460000000
[    9.547953] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    9.554141] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    9.560328] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[    9.566602] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[    9.573048] hole: 0000000450000000 - 0000000460000000
[    9.578032] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[    9.584736] After WB checking
[    9.587658] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    9.593330] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[    9.599003] After UC checking
[    9.601924] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.607596] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.613269] After sorting
[    9.615846] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.621517] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.627191]  gran_size: 2M 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[    9.635698] range0: 0000000000000000 - 00000000c0000000
[    9.640854] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    9.647040] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[    9.653228] hole: 00000000ae000000 - 00000000c0000000
[    9.658212] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[    9.664743] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[    9.671360] range0: 0000000100000000 - 0000000480000000
[    9.676516] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    9.682701] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    9.688889] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    9.695246] hole: 0000000450000000 - 0000000480000000
[    9.700230] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    9.706934] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    9.713636] After WB checking
[    9.716557] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[    9.722228] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[    9.727901] After UC checking
[    9.730820] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.736493] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.742163] After sorting
[    9.744742] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.750414] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.756085]  gran_size: 2M 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[    9.764419] range0: 0000000000000000 - 0000000100000000
[    9.769576] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[    9.775763] hole: 00000000ae000000 - 0000000100000000
[    9.780748] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[    9.787277] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[    9.793894] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[    9.800081] range0: 0000000100000000 - 0000000480000000
[    9.805237] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[    9.811425] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[    9.817612] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[    9.823885] hole: 0000000450000000 - 0000000480000000
[    9.828869] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[    9.835571] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[    9.842273] After WB checking
[    9.845196] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[    9.850868] After UC checking
[    9.853789] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.859461] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.865132] After sorting
[    9.867710] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.873380] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.879052]  gran_size: 2M 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[    9.887388] rangeX: 0000000000000000 - 00000000ae000000
[    9.892544] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[    9.898730] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[    9.905090] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[    9.911707] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[    9.918238] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[    9.924769] rangeX: 0000000100000000 - 0000000450000000
[    9.929925] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[    9.936111] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[    9.942300] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[    9.948571] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[    9.955017] After WB checking
[    9.957938] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.963610] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.969282] After UC checking
[    9.972204] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.977874] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.983547] After sorting
[    9.986124] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[    9.991795] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[    9.997468]  gran_size: 4M 	chunk_size: 4M 	num_reg: 9  	lose cover RAM: 0G

[   10.005803] rangeX: 0000000000000000 - 00000000ae000000
[   10.010959] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   10.017146] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   10.023505] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   10.030121] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   10.036651] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[   10.043182] rangeX: 0000000100000000 - 0000000450000000
[   10.048337] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[   10.054522] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[   10.060708] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[   10.066981] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[   10.073424] After WB checking
[   10.076346] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.082016] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.087687] After UC checking
[   10.090610] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.096280] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.101951] After sorting
[   10.104528] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.110199] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.115871]  gran_size: 4M 	chunk_size: 8M 	num_reg: 9  	lose cover RAM: 0G

[   10.124205] rangeX: 0000000000000000 - 00000000ae000000
[   10.129361] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   10.135547] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   10.141905] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   10.148523] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   10.155054] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[   10.161584] rangeX: 0000000100000000 - 0000000450000000
[   10.166740] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[   10.172927] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[   10.179115] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[   10.185387] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[   10.191832] After WB checking
[   10.194753] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.200425] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.206097] After UC checking
[   10.209018] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.214690] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.220363] After sorting
[   10.222939] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.228611] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.234282]  gran_size: 4M 	chunk_size: 16M 	num_reg: 9  	lose cover RAM: 0G

[   10.242703] rangeX: 0000000000000000 - 00000000ae000000
[   10.247858] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   10.254044] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   10.260403] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   10.267019] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   10.273550] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[   10.280079] rangeX: 0000000100000000 - 0000000450000000
[   10.285235] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[   10.291422] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[   10.297608] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[   10.303880] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[   10.310326] After WB checking
[   10.313248] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.318917] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.324587] After UC checking
[   10.327509] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.333179] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.338851] After sorting
[   10.341429] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.347099] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.352770]  gran_size: 4M 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[   10.361191] range0: 0000000000000000 - 00000000b0000000
[   10.366347] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   10.372533] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   10.378892] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   10.385509] hole: 00000000ae000000 - 00000000b0000000
[   10.390492] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   10.397024] rangeX: 0000000100000000 - 0000000450000000
[   10.402179] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   10.408365] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   10.414551] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   10.420823] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   10.427267] After WB checking
[   10.430190] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   10.435860] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.441532] After UC checking
[   10.444452] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.450124] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.455795] After sorting
[   10.458374] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.464043] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.469715]  gran_size: 4M 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[   10.478136] range0: 0000000000000000 - 00000000b0000000
[   10.483292] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   10.489478] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   10.495834] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   10.502450] hole: 00000000ae000000 - 00000000b0000000
[   10.507435] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   10.513964] rangeX: 0000000100000000 - 0000000450000000
[   10.519120] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   10.525308] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   10.531494] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   10.537766] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   10.544212] After WB checking
[   10.547133] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   10.552804] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.558475] After UC checking
[   10.561398] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.567069] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.572740] After sorting
[   10.575319] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.580988] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.586661]  gran_size: 4M 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[   10.595167] range0: 0000000000000000 - 00000000b0000000
[   10.600323] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   10.606510] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   10.612869] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   10.619486] hole: 00000000ae000000 - 00000000b0000000
[   10.624468] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   10.630999] rangeX: 0000000100000000 - 0000000450000000
[   10.636155] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   10.642343] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   10.648529] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   10.654801] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   10.661247] After WB checking
[   10.664169] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   10.669839] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.675512] After UC checking
[   10.678432] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.684104] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.689777] After sorting
[   10.692354] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.698026] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.703698]  gran_size: 4M 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[   10.712205] range0: 0000000000000000 - 00000000c0000000
[   10.717406] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   10.723593] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   10.729780] hole: 00000000ae000000 - 00000000c0000000
[   10.734764] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[   10.741296] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   10.747913] range0: 0000000100000000 - 0000000460000000
[   10.753070] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   10.759258] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   10.765445] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   10.771718] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[   10.778163] hole: 0000000450000000 - 0000000460000000
[   10.783146] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[   10.789850] After WB checking
[   10.792771] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   10.798443] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[   10.804115] After UC checking
[   10.807036] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.812708] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.818381] After sorting
[   10.820959] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.826629] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.832302]  gran_size: 4M 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[   10.840809] range0: 0000000000000000 - 00000000c0000000
[   10.845965] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   10.852153] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   10.858340] hole: 00000000ae000000 - 00000000c0000000
[   10.863324] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[   10.869857] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   10.876474] range0: 0000000100000000 - 0000000480000000
[   10.881629] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   10.887816] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   10.894003] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   10.900277] hole: 0000000450000000 - 0000000480000000
[   10.905261] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   10.911964] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   10.918667] After WB checking
[   10.921589] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   10.927260] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[   10.932933] After UC checking
[   10.935853] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.941525] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.947198] After sorting
[   10.949775] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   10.955448] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   10.961120]  gran_size: 4M 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[   10.969456] range0: 0000000000000000 - 0000000100000000
[   10.974611] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[   10.980799] hole: 00000000ae000000 - 0000000100000000
[   10.985782] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[   10.992314] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[   10.998930] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[   11.005118] range0: 0000000100000000 - 0000000480000000
[   11.010273] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   11.016461] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   11.022648] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   11.028922] hole: 0000000450000000 - 0000000480000000
[   11.033906] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   11.040608] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   11.047310] After WB checking
[   11.050231] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[   11.055902] After UC checking
[   11.058822] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.064493] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.070166] After sorting
[   11.072744] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.078416] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.084089]  gran_size: 4M 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[   11.092423] rangeX: 0000000000000000 - 00000000ae000000
[   11.097579] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   11.103767] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   11.110126] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   11.116743] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   11.123273] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[   11.129804] rangeX: 0000000100000000 - 0000000450000000
[   11.134961] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[   11.141147] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[   11.147335] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[   11.153609] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[   11.160054] After WB checking
[   11.162975] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.168647] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.174319] After UC checking
[   11.177241] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.182913] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.188584] After sorting
[   11.191163] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.196834] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.202506]  gran_size: 8M 	chunk_size: 8M 	num_reg: 9  	lose cover RAM: 0G

[   11.210843] rangeX: 0000000000000000 - 00000000ae000000
[   11.215999] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   11.222185] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   11.228545] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   11.235161] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   11.241692] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[   11.248224] rangeX: 0000000100000000 - 0000000450000000
[   11.253380] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[   11.259568] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[   11.265756] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[   11.272030] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[   11.278476] After WB checking
[   11.281397] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.287068] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.292740] After UC checking
[   11.295662] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.301333] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.307004] After sorting
[   11.309581] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.315252] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.320923]  gran_size: 8M 	chunk_size: 16M 	num_reg: 9  	lose cover RAM: 0G

[   11.329344] rangeX: 0000000000000000 - 00000000ae000000
[   11.334500] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   11.340686] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   11.347046] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   11.353662] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   11.360193] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[   11.366725] rangeX: 0000000100000000 - 0000000450000000
[   11.371880] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[   11.378067] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[   11.384256] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[   11.390529] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[   11.396973] After WB checking
[   11.399895] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.405566] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.411237] After UC checking
[   11.414158] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.419829] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.425501] After sorting
[   11.428079] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.433750] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.439421]  gran_size: 8M 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[   11.447843] range0: 0000000000000000 - 00000000b0000000
[   11.452998] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   11.459185] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   11.465542] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   11.472158] hole: 00000000ae000000 - 00000000b0000000
[   11.477142] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   11.483671] rangeX: 0000000100000000 - 0000000450000000
[   11.488826] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   11.495012] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   11.501197] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   11.507469] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   11.513913] After WB checking
[   11.516834] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   11.522504] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.528176] After UC checking
[   11.531097] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.536767] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.542438] After sorting
[   11.545015] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.550686] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.556357]  gran_size: 8M 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[   11.564777] range0: 0000000000000000 - 00000000b0000000
[   11.569931] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   11.576117] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   11.582475] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   11.589091] hole: 00000000ae000000 - 00000000b0000000
[   11.594075] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   11.600604] rangeX: 0000000100000000 - 0000000450000000
[   11.605760] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   11.611947] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   11.618133] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   11.624406] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   11.630850] After WB checking
[   11.633772] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   11.639441] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.645113] After UC checking
[   11.648035] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.653705] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.659376] After sorting
[   11.661953] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.667623] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.673294]  gran_size: 8M 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[   11.681799] range0: 0000000000000000 - 00000000b0000000
[   11.686954] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   11.693139] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   11.699498] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   11.706114] hole: 00000000ae000000 - 00000000b0000000
[   11.711098] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   11.717629] rangeX: 0000000100000000 - 0000000450000000
[   11.722784] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   11.728968] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   11.735154] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   11.741452] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   11.747895] After WB checking
[   11.750816] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   11.756485] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.762156] After UC checking
[   11.765076] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.770747] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.776418] After sorting
[   11.778995] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.784666] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.790337]  gran_size: 8M 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[   11.798844] range0: 0000000000000000 - 00000000c0000000
[   11.804000] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   11.810186] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   11.816372] hole: 00000000ae000000 - 00000000c0000000
[   11.821357] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[   11.827889] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   11.834504] range0: 0000000100000000 - 0000000460000000
[   11.839661] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   11.845849] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   11.852037] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   11.858309] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[   11.864754] hole: 0000000450000000 - 0000000460000000
[   11.869739] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[   11.876443] After WB checking
[   11.879364] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   11.885035] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[   11.890709] After UC checking
[   11.893629] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.899301] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.904972] After sorting
[   11.907550] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   11.913221] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   11.918892]  gran_size: 8M 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[   11.927399] range0: 0000000000000000 - 00000000c0000000
[   11.932553] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   11.938740] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   11.944927] hole: 00000000ae000000 - 00000000c0000000
[   11.949910] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[   11.956440] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   11.963056] range0: 0000000100000000 - 0000000480000000
[   11.968211] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   11.974398] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   11.980585] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   11.986855] hole: 0000000450000000 - 0000000480000000
[   11.991840] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   11.998542] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   12.005244] After WB checking
[   12.008166] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   12.013836] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[   12.019507] After UC checking
[   12.022429] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.028100] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.033771] After sorting
[   12.036348] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.042021] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.047692]  gran_size: 8M 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[   12.056028] range0: 0000000000000000 - 0000000100000000
[   12.061184] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[   12.067369] hole: 00000000ae000000 - 0000000100000000
[   12.072353] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[   12.078883] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[   12.085499] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[   12.091687] range0: 0000000100000000 - 0000000480000000
[   12.096842] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   12.103029] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   12.109216] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   12.115488] hole: 0000000450000000 - 0000000480000000
[   12.120471] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   12.127174] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   12.133876] After WB checking
[   12.136797] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[   12.142469] After UC checking
[   12.145389] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.151061] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.156734] After sorting
[   12.159310] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.164982] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.170653]  gran_size: 8M 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[   12.178988] rangeX: 0000000000000000 - 00000000ae000000
[   12.184141] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   12.190326] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   12.196684] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   12.203299] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   12.209830] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[   12.216360] rangeX: 0000000100000000 - 0000000450000000
[   12.221515] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[   12.227701] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[   12.233889] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[   12.240159] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[   12.246604] After WB checking
[   12.249525] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.255196] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.260867] After UC checking
[   12.263789] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.269459] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.275131] After sorting
[   12.277708] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.283379] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.289049]  gran_size: 16M 	chunk_size: 16M 	num_reg: 9  	lose cover RAM: 0G

[   12.297556] rangeX: 0000000000000000 - 00000000ae000000
[   12.302711] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   12.308897] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   12.315255] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   12.321871] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   12.328401] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[   12.334931] rangeX: 0000000100000000 - 0000000450000000
[   12.340087] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[   12.346273] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[   12.352458] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[   12.358731] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[   12.365175] After WB checking
[   12.368096] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.373766] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.379437] After UC checking
[   12.382358] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.388028] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.393700] After sorting
[   12.396277] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.401948] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.407620]  gran_size: 16M 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[   12.416126] range0: 0000000000000000 - 00000000b0000000
[   12.421280] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   12.427466] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   12.433825] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   12.440440] hole: 00000000ae000000 - 00000000b0000000
[   12.445422] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   12.451952] rangeX: 0000000100000000 - 0000000450000000
[   12.457107] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   12.463291] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   12.469477] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   12.475748] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   12.482191] After WB checking
[   12.485113] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   12.490782] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.496452] After UC checking
[   12.499373] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.505043] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.510713] After sorting
[   12.513289] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.518960] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.524631]  gran_size: 16M 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[   12.533136] range0: 0000000000000000 - 00000000b0000000
[   12.538290] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   12.544477] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   12.550835] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   12.557451] hole: 00000000ae000000 - 00000000b0000000
[   12.562433] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   12.568964] rangeX: 0000000100000000 - 0000000450000000
[   12.574118] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   12.580305] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   12.586491] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   12.592763] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   12.599207] After WB checking
[   12.602128] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   12.607800] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.613469] After UC checking
[   12.616391] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.622061] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.627733] After sorting
[   12.630310] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.635981] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.641653]  gran_size: 16M 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[   12.650245] range0: 0000000000000000 - 00000000b0000000
[   12.655401] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   12.661587] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   12.667945] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   12.674559] hole: 00000000ae000000 - 00000000b0000000
[   12.679542] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   12.686071] rangeX: 0000000100000000 - 0000000450000000
[   12.691226] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   12.697413] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   12.703598] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   12.709869] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   12.716313] After WB checking
[   12.719234] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   12.724904] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.730575] After UC checking
[   12.733496] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.739167] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.744838] After sorting
[   12.747414] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.753084] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.758756]  gran_size: 16M 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[   12.767388] range0: 0000000000000000 - 00000000c0000000
[   12.772542] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   12.778728] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   12.784913] hole: 00000000ae000000 - 00000000c0000000
[   12.789896] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[   12.796424] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   12.803040] range0: 0000000100000000 - 0000000460000000
[   12.808193] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   12.814380] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   12.820564] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   12.826836] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[   12.833279] hole: 0000000450000000 - 0000000460000000
[   12.838262] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[   12.844964] After WB checking
[   12.847887] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   12.853556] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[   12.859228] After UC checking
[   12.862149] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.867819] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.873491] After sorting
[   12.876067] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.881738] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   12.887408]  gran_size: 16M 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[   12.895999] range0: 0000000000000000 - 00000000c0000000
[   12.901154] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   12.907340] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   12.913526] hole: 00000000ae000000 - 00000000c0000000
[   12.918510] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[   12.925040] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   12.931656] range0: 0000000100000000 - 0000000480000000
[   12.936810] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   12.942997] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   12.949183] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   12.955455] hole: 0000000450000000 - 0000000480000000
[   12.960440] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   12.967142] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   12.973844] After WB checking
[   12.976765] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   12.982436] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[   12.988109] After UC checking
[   12.991030] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   12.996701] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.002373] After sorting
[   13.004951] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.010623] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.016296]  gran_size: 16M 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[   13.024717] range0: 0000000000000000 - 0000000100000000
[   13.029872] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[   13.036060] hole: 00000000ae000000 - 0000000100000000
[   13.041044] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[   13.047575] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[   13.054191] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[   13.060379] range0: 0000000100000000 - 0000000480000000
[   13.065534] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   13.071722] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   13.077909] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   13.084183] hole: 0000000450000000 - 0000000480000000
[   13.089166] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   13.095868] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   13.102570] After WB checking
[   13.105491] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[   13.111164] After UC checking
[   13.114085] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.119757] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.125427] After sorting
[   13.128005] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.133675] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.139347]  gran_size: 16M 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[   13.147769] rangeX: 0000000000000000 - 00000000ae000000
[   13.152925] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   13.159111] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   13.165471] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   13.172088] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   13.178618] Setting variable MTRR 4, base: 2752MB, range: 32MB, type WB
[   13.185149] rangeX: 0000000100000000 - 0000000450000000
[   13.190305] Setting variable MTRR 5, base: 4GB, range: 4GB, type WB
[   13.196493] Setting variable MTRR 6, base: 8GB, range: 8GB, type WB
[   13.202681] Setting variable MTRR 7, base: 16GB, range: 1GB, type WB
[   13.208956] Setting variable MTRR 8, base: 17GB, range: 256MB, type WB
[   13.215400] After WB checking
[   13.218322] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.223994] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.229666] After UC checking
[   13.232588] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.238260] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.243931] After sorting
[   13.246509] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.252181] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.257854]  gran_size: 32M 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G

[   13.266361] range0: 0000000000000000 - 00000000b0000000
[   13.271517] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   13.277704] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   13.284064] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   13.290682] hole: 00000000ae000000 - 00000000b0000000
[   13.295666] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   13.302196] rangeX: 0000000100000000 - 0000000450000000
[   13.307351] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   13.313539] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   13.319725] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   13.325998] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   13.332443] After WB checking
[   13.335365] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   13.341037] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.346708] After UC checking
[   13.349629] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.355300] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.360973] After sorting
[   13.363550] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.369221] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.374893]  gran_size: 32M 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G

[   13.383400] range0: 0000000000000000 - 00000000b0000000
[   13.388555] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   13.394743] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   13.401100] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   13.407717] hole: 00000000ae000000 - 00000000b0000000
[   13.412701] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   13.419232] rangeX: 0000000100000000 - 0000000450000000
[   13.424387] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   13.430574] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   13.436760] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   13.443033] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   13.449477] After WB checking
[   13.452398] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   13.458068] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.463740] After UC checking
[   13.466660] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.472331] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.478003] After sorting
[   13.480580] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.486252] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.491922]  gran_size: 32M 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 0G

[   13.500516] range0: 0000000000000000 - 00000000b0000000
[   13.505671] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   13.511859] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   13.518217] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   13.524833] hole: 00000000ae000000 - 00000000b0000000
[   13.529817] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   13.536348] rangeX: 0000000100000000 - 0000000450000000
[   13.541503] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   13.547689] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   13.553876] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   13.560149] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   13.566593] After WB checking
[   13.569515] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   13.575185] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.580856] After UC checking
[   13.583778] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.589448] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.595120] After sorting
[   13.597698] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.603370] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.609040]  gran_size: 32M 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 0G

[   13.617633] range0: 0000000000000000 - 00000000c0000000
[   13.622789] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   13.628974] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   13.635161] hole: 00000000ae000000 - 00000000c0000000
[   13.640145] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[   13.646676] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   13.653294] range0: 0000000100000000 - 0000000460000000
[   13.658450] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   13.664637] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   13.670824] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   13.677098] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[   13.683542] hole: 0000000450000000 - 0000000460000000
[   13.688525] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[   13.695227] After WB checking
[   13.698149] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   13.703820] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[   13.709492] After UC checking
[   13.712414] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.718086] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.723757] After sorting
[   13.726334] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.732007] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.737678]  gran_size: 32M 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 0G

[   13.746271] range0: 0000000000000000 - 00000000c0000000
[   13.751427] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   13.757613] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   13.763799] hole: 00000000ae000000 - 00000000c0000000
[   13.768784] Setting variable MTRR 2, base: 2784MB, range: 32MB, type UC
[   13.775314] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   13.781931] range0: 0000000100000000 - 0000000480000000
[   13.787142] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   13.793328] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   13.799516] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   13.805790] hole: 0000000450000000 - 0000000480000000
[   13.810774] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   13.817477] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   13.824179] After WB checking
[   13.827100] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   13.832772] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[   13.838443] After UC checking
[   13.841364] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.847035] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.852707] After sorting
[   13.855284] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.860955] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.866627]  gran_size: 32M 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 0G

[   13.875048] range0: 0000000000000000 - 0000000100000000
[   13.880203] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[   13.886391] hole: 00000000ae000000 - 0000000100000000
[   13.891374] Setting variable MTRR 1, base: 2784MB, range: 32MB, type UC
[   13.897905] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[   13.904521] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[   13.910707] range0: 0000000100000000 - 0000000480000000
[   13.915864] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   13.922050] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   13.928238] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   13.934510] hole: 0000000450000000 - 0000000480000000
[   13.939495] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   13.946198] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   13.952900] After WB checking
[   13.955822] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[   13.961493] After UC checking
[   13.964414] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.970087] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.975757] After sorting
[   13.978336] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   13.984007] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   13.989678]  gran_size: 32M 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 0G

[   13.998100] rangeX: 0000000000000000 - 00000000ac000000
[   14.003256] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   14.009443] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   14.015803] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   14.022419] Setting variable MTRR 3, base: 2688MB, range: 64MB, type WB
[   14.028952] rangeX: 0000000100000000 - 0000000450000000
[   14.034107] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   14.040294] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   14.046483] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   14.052756] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   14.059202] After WB checking
[   14.062123] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.067795] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.073468] After UC checking
[   14.076389] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.082059] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.087730] After sorting
[   14.090308] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.095979] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.101649]  gran_size: 64M 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 32M

[   14.110241] range0: 0000000000000000 - 00000000b0000000
[   14.115395] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   14.121581] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   14.127940] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   14.134555] hole: 00000000ac000000 - 00000000b0000000
[   14.139538] Setting variable MTRR 3, base: 2752MB, range: 64MB, type UC
[   14.146068] rangeX: 0000000100000000 - 0000000450000000
[   14.151223] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   14.157410] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   14.163596] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   14.169868] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   14.176312] After WB checking
[   14.179233] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   14.184904] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.190575] After UC checking
[   14.193496] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.199166] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.204837] After sorting
[   14.207413] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.213084] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.218755]  gran_size: 64M 	chunk_size: 128M 	num_reg: 8  	lose cover RAM: 32M

[   14.227432] range0: 0000000000000000 - 00000000b0000000
[   14.232588] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   14.238774] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   14.245132] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   14.251748] hole: 00000000ac000000 - 00000000b0000000
[   14.256732] Setting variable MTRR 3, base: 2752MB, range: 64MB, type UC
[   14.263263] rangeX: 0000000100000000 - 0000000450000000
[   14.268418] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   14.274604] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   14.280790] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   14.287062] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   14.293506] After WB checking
[   14.296428] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   14.302099] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.307768] After UC checking
[   14.310690] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.316362] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.322033] After sorting
[   14.324611] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.330282] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.335953]  gran_size: 64M 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 32M

[   14.344631] range0: 0000000000000000 - 00000000c0000000
[   14.349787] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   14.355974] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   14.362160] hole: 00000000ac000000 - 00000000c0000000
[   14.367142] Setting variable MTRR 2, base: 2752MB, range: 64MB, type UC
[   14.373671] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   14.380289] range0: 0000000100000000 - 0000000460000000
[   14.385443] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   14.391629] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   14.397817] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   14.404090] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[   14.410534] hole: 0000000450000000 - 0000000460000000
[   14.415516] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[   14.422217] After WB checking
[   14.425139] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   14.430810] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[   14.436480] After UC checking
[   14.439400] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.445071] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.450742] After sorting
[   14.453318] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.458989] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.464660]  gran_size: 64M 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 32M

[   14.473337] range0: 0000000000000000 - 00000000c0000000
[   14.478493] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   14.484679] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   14.490865] hole: 00000000ac000000 - 00000000c0000000
[   14.495849] Setting variable MTRR 2, base: 2752MB, range: 64MB, type UC
[   14.502380] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   14.508998] range0: 0000000100000000 - 0000000480000000
[   14.514153] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   14.520339] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   14.526526] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   14.532799] hole: 0000000450000000 - 0000000480000000
[   14.537782] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   14.544484] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   14.551186] After WB checking
[   14.554107] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   14.559777] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[   14.565448] After UC checking
[   14.568367] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.574038] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.579709] After sorting
[   14.582287] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.587956] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.593626]  gran_size: 64M 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 32M

[   14.602132] range0: 0000000000000000 - 0000000100000000
[   14.607287] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[   14.613474] hole: 00000000ac000000 - 0000000100000000
[   14.618458] Setting variable MTRR 1, base: 2752MB, range: 64MB, type UC
[   14.624988] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[   14.631603] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[   14.637788] range0: 0000000100000000 - 0000000480000000
[   14.642944] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   14.649129] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   14.655315] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   14.661588] hole: 0000000450000000 - 0000000480000000
[   14.666569] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   14.673271] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   14.679972] After WB checking
[   14.682892] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[   14.688563] After UC checking
[   14.691483] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.697153] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.702823] After sorting
[   14.705400] MTRR MAP PFN: 0000000000000000 - 00000000000ac000
[   14.711071] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.716741]  gran_size: 64M 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 32M

[   14.725246] rangeX: 0000000000000000 - 00000000a8000000
[   14.730402] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   14.736587] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   14.742944] Setting variable MTRR 2, base: 2560MB, range: 128MB, type WB
[   14.749559] rangeX: 0000000100000000 - 0000000450000000
[   14.754714] Setting variable MTRR 3, base: 4GB, range: 4GB, type WB
[   14.760900] Setting variable MTRR 4, base: 8GB, range: 8GB, type WB
[   14.767085] Setting variable MTRR 5, base: 16GB, range: 1GB, type WB
[   14.773357] Setting variable MTRR 6, base: 17GB, range: 256MB, type WB
[   14.779799] After WB checking
[   14.782720] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   14.788389] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.794059] After UC checking
[   14.796980] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   14.802650] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.808321] After sorting
[   14.810932] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   14.816601] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.822271]  gran_size: 128M 	chunk_size: 128M 	num_reg: 7  	lose cover RAM: 96M

[   14.831033] range0: 0000000000000000 - 00000000b0000000
[   14.836188] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   14.842374] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   14.848732] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   14.855348] hole: 00000000a8000000 - 00000000b0000000
[   14.860331] Setting variable MTRR 3, base: 2688MB, range: 128MB, type UC
[   14.866946] rangeX: 0000000100000000 - 0000000450000000
[   14.872100] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   14.878287] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   14.884473] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   14.890745] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   14.897189] After WB checking
[   14.900110] MTRR MAP PFN: 0000000000000000 - 00000000000b0000
[   14.905780] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.911451] After UC checking
[   14.914372] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   14.920042] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.925713] After sorting
[   14.928290] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   14.933960] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   14.939631]  gran_size: 128M 	chunk_size: 256M 	num_reg: 8  	lose cover RAM: 96M

[   14.948394] range0: 0000000000000000 - 00000000c0000000
[   14.953548] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   14.959733] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   14.965919] hole: 00000000a8000000 - 00000000c0000000
[   14.970902] Setting variable MTRR 2, base: 2688MB, range: 128MB, type UC
[   14.977516] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   14.984132] range0: 0000000100000000 - 0000000460000000
[   14.989287] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   14.995472] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   15.001658] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   15.007929] Setting variable MTRR 7, base: 17GB, range: 512MB, type WB
[   15.014373] hole: 0000000450000000 - 0000000460000000
[   15.019356] Setting variable MTRR 8, base: 17664MB, range: 256MB, type UC
[   15.026058] After WB checking
[   15.028979] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   15.034649] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[   15.040318] After UC checking
[   15.043240] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   15.048910] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.054580] After sorting
[   15.057158] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   15.062826] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.068496]  gran_size: 128M 	chunk_size: 512M 	num_reg: 9  	lose cover RAM: 96M

[   15.077261] range0: 0000000000000000 - 00000000c0000000
[   15.082415] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   15.088603] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   15.094789] hole: 00000000a8000000 - 00000000c0000000
[   15.099772] Setting variable MTRR 2, base: 2688MB, range: 128MB, type UC
[   15.106388] Setting variable MTRR 3, base: 2816MB, range: 256MB, type UC
[   15.113004] range0: 0000000100000000 - 0000000480000000
[   15.118159] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   15.124344] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   15.130530] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   15.136802] hole: 0000000450000000 - 0000000480000000
[   15.141785] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   15.148485] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   15.155187] After WB checking
[   15.158109] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   15.163778] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[   15.169451] After UC checking
[   15.172370] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   15.178041] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.183712] After sorting
[   15.186289] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   15.191961] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.197630]  gran_size: 128M 	chunk_size: 1G 	num_reg: 9  	lose cover RAM: 96M

[   15.206221] range0: 0000000000000000 - 0000000100000000
[   15.211376] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[   15.217562] hole: 00000000a8000000 - 0000000100000000
[   15.222545] Setting variable MTRR 1, base: 2688MB, range: 128MB, type UC
[   15.229160] Setting variable MTRR 2, base: 2816MB, range: 256MB, type UC
[   15.235775] Setting variable MTRR 3, base: 3GB, range: 1GB, type UC
[   15.241962] range0: 0000000100000000 - 0000000480000000
[   15.247117] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   15.253303] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   15.259488] Setting variable MTRR 6, base: 16GB, range: 2GB, type WB
[   15.265760] hole: 0000000450000000 - 0000000480000000
[   15.270743] Setting variable MTRR 7, base: 17664MB, range: 256MB, type UC
[   15.277444] Setting variable MTRR 8, base: 17920MB, range: 512MB, type UC
[   15.284148] After WB checking
[   15.287069] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[   15.292740] After UC checking
[   15.295659] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   15.301330] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.306999] After sorting
[   15.309577] MTRR MAP PFN: 0000000000000000 - 00000000000a8000
[   15.315246] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.320917]  gran_size: 128M 	chunk_size: 2G 	num_reg: 9  	lose cover RAM: 96M

[   15.329507] rangeX: 0000000000000000 - 00000000a0000000
[   15.334661] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   15.340847] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   15.347205] rangeX: 0000000100000000 - 0000000450000000
[   15.352359] Setting variable MTRR 2, base: 4GB, range: 4GB, type WB
[   15.358545] Setting variable MTRR 3, base: 8GB, range: 8GB, type WB
[   15.364730] Setting variable MTRR 4, base: 16GB, range: 1GB, type WB
[   15.371002] Setting variable MTRR 5, base: 17GB, range: 256MB, type WB
[   15.377447] After WB checking
[   15.380368] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.386039] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.391709] After UC checking
[   15.394630] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.400302] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.405971] After sorting
[   15.408548] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.414220] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.419890]  gran_size: 256M 	chunk_size: 256M 	num_reg: 6  	lose cover RAM: 224M

[   15.428738] rangeX: 0000000000000000 - 00000000a0000000
[   15.433891] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   15.440077] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   15.446433] range0: 0000000100000000 - 0000000460000000
[   15.451587] Setting variable MTRR 2, base: 4GB, range: 4GB, type WB
[   15.457773] Setting variable MTRR 3, base: 8GB, range: 8GB, type WB
[   15.463958] Setting variable MTRR 4, base: 16GB, range: 1GB, type WB
[   15.470228] Setting variable MTRR 5, base: 17GB, range: 512MB, type WB
[   15.476672] hole: 0000000450000000 - 0000000460000000
[   15.481654] Setting variable MTRR 6, base: 17664MB, range: 256MB, type UC
[   15.488354] After WB checking
[   15.491276] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.496945] MTRR MAP PFN: 0000000000100000 - 0000000000460000
[   15.502614] After UC checking
[   15.505535] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.511205] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.516876] After sorting
[   15.519454] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.525123] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.530794]  gran_size: 256M 	chunk_size: 512M 	num_reg: 7  	lose cover RAM: 224M

[   15.539642] range0: 0000000000000000 - 00000000c0000000
[   15.544796] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   15.550982] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   15.557169] hole: 00000000a0000000 - 00000000c0000000
[   15.562151] Setting variable MTRR 2, base: 2560MB, range: 512MB, type UC
[   15.568767] range0: 0000000100000000 - 0000000480000000
[   15.573922] Setting variable MTRR 3, base: 4GB, range: 4GB, type WB
[   15.580107] Setting variable MTRR 4, base: 8GB, range: 8GB, type WB
[   15.586293] Setting variable MTRR 5, base: 16GB, range: 2GB, type WB
[   15.592565] hole: 0000000450000000 - 0000000480000000
[   15.597548] Setting variable MTRR 6, base: 17664MB, range: 256MB, type UC
[   15.604249] Setting variable MTRR 7, base: 17920MB, range: 512MB, type UC
[   15.610951] After WB checking
[   15.613872] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   15.619542] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[   15.625213] After UC checking
[   15.628132] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.633803] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.639474] After sorting
[   15.642051] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.647721] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.653393]  gran_size: 256M 	chunk_size: 1G 	num_reg: 8  	lose cover RAM: 224M

[   15.662071] range0: 0000000000000000 - 0000000100000000
[   15.667226] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[   15.673413] hole: 00000000a0000000 - 0000000100000000
[   15.678396] Setting variable MTRR 1, base: 2560MB, range: 512MB, type UC
[   15.685012] Setting variable MTRR 2, base: 3GB, range: 1GB, type UC
[   15.691199] range0: 0000000100000000 - 0000000480000000
[   15.696355] Setting variable MTRR 3, base: 4GB, range: 4GB, type WB
[   15.702541] Setting variable MTRR 4, base: 8GB, range: 8GB, type WB
[   15.708727] Setting variable MTRR 5, base: 16GB, range: 2GB, type WB
[   15.714999] hole: 0000000450000000 - 0000000480000000
[   15.719982] Setting variable MTRR 6, base: 17664MB, range: 256MB, type UC
[   15.726684] Setting variable MTRR 7, base: 17920MB, range: 512MB, type UC
[   15.733386] After WB checking
[   15.736307] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[   15.741979] After UC checking
[   15.744899] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.750570] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.756239] After sorting
[   15.758816] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.764487] MTRR MAP PFN: 0000000000100000 - 0000000000450000
[   15.770158]  gran_size: 256M 	chunk_size: 2G 	num_reg: 8  	lose cover RAM: 224M

[   15.778835] rangeX: 0000000000000000 - 00000000a0000000
[   15.783991] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   15.790178] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   15.796537] rangeX: 0000000100000000 - 0000000440000000
[   15.801692] Setting variable MTRR 2, base: 4GB, range: 4GB, type WB
[   15.807879] Setting variable MTRR 3, base: 8GB, range: 8GB, type WB
[   15.814065] Setting variable MTRR 4, base: 16GB, range: 1GB, type WB
[   15.820339] After WB checking
[   15.823260] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.828931] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   15.834711] After UC checking
[   15.837556] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.843229] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   15.848899] After sorting
[   15.851477] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.857150] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   15.862820]  gran_size: 512M 	chunk_size: 512M 	num_reg: 5  	lose cover RAM: 480M

[   15.871671] range0: 0000000000000000 - 00000000c0000000
[   15.876826] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   15.883013] Setting variable MTRR 1, base: 2GB, range: 1GB, type WB
[   15.889199] hole: 00000000a0000000 - 00000000c0000000
[   15.894183] Setting variable MTRR 2, base: 2560MB, range: 512MB, type UC
[   15.900799] rangeX: 0000000100000000 - 0000000440000000
[   15.905953] Setting variable MTRR 3, base: 4GB, range: 4GB, type WB
[   15.912139] Setting variable MTRR 4, base: 8GB, range: 8GB, type WB
[   15.918326] Setting variable MTRR 5, base: 16GB, range: 1GB, type WB
[   15.924598] After WB checking
[   15.927520] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   15.933191] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   15.938863] After UC checking
[   15.941783] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.947454] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   15.953125] After sorting
[   15.955702] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   15.961373] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   15.967045]  gran_size: 512M 	chunk_size: 1G 	num_reg: 6  	lose cover RAM: 480M

[   15.975722] range0: 0000000000000000 - 0000000100000000
[   15.980877] Setting variable MTRR 0, base: 0GB, range: 4GB, type WB
[   15.987063] hole: 00000000a0000000 - 0000000100000000
[   15.992047] Setting variable MTRR 1, base: 2560MB, range: 512MB, type UC
[   15.998662] Setting variable MTRR 2, base: 3GB, range: 1GB, type UC
[   16.004850] range0: 0000000100000000 - 0000000480000000
[   16.010005] Setting variable MTRR 3, base: 4GB, range: 4GB, type WB
[   16.016191] Setting variable MTRR 4, base: 8GB, range: 8GB, type WB
[   16.022379] Setting variable MTRR 5, base: 16GB, range: 2GB, type WB
[   16.028652] hole: 0000000440000000 - 0000000480000000
[   16.033636] Setting variable MTRR 6, base: 17GB, range: 1GB, type UC
[   16.039908] After WB checking
[   16.042830] MTRR MAP PFN: 0000000000000000 - 0000000000480000
[   16.048502] After UC checking
[   16.051423] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   16.057093] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   16.062764] After sorting
[   16.065340] MTRR MAP PFN: 0000000000000000 - 00000000000a0000
[   16.071011] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   16.076679]  gran_size: 512M 	chunk_size: 2G 	num_reg: 7  	lose cover RAM: 480M

[   16.085356] rangeX: 0000000000000000 - 0000000080000000
[   16.090512] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   16.096698] rangeX: 0000000100000000 - 0000000440000000
[   16.101853] Setting variable MTRR 1, base: 4GB, range: 4GB, type WB
[   16.108039] Setting variable MTRR 2, base: 8GB, range: 8GB, type WB
[   16.114227] Setting variable MTRR 3, base: 16GB, range: 1GB, type WB
[   16.120500] After WB checking
[   16.123421] MTRR MAP PFN: 0000000000000000 - 0000000000080000
[   16.129093] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   16.134766] After UC checking
[   16.137687] MTRR MAP PFN: 0000000000000000 - 0000000000080000
[   16.143358] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   16.149030] After sorting
[   16.151607] MTRR MAP PFN: 0000000000000000 - 0000000000080000
[   16.157279] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   16.162950]  gran_size: 1G 	chunk_size: 1G 	num_reg: 4  	lose cover RAM: 992M

[   16.171458] rangeX: 0000000000000000 - 0000000080000000
[   16.176613] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   16.182800] range0: 0000000100000000 - 0000000480000000
[   16.187956] Setting variable MTRR 1, base: 4GB, range: 4GB, type WB
[   16.194143] Setting variable MTRR 2, base: 8GB, range: 8GB, type WB
[   16.200331] Setting variable MTRR 3, base: 16GB, range: 2GB, type WB
[   16.206603] hole: 0000000440000000 - 0000000480000000
[   16.211589] Setting variable MTRR 4, base: 17GB, range: 1GB, type UC
[   16.217862] After WB checking
[   16.220783] MTRR MAP PFN: 0000000000000000 - 0000000000080000
[   16.226454] MTRR MAP PFN: 0000000000100000 - 0000000000480000
[   16.232127] After UC checking
[   16.235048] MTRR MAP PFN: 0000000000000000 - 0000000000080000
[   16.240720] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   16.246391] After sorting
[   16.248970] MTRR MAP PFN: 0000000000000000 - 0000000000080000
[   16.254641] MTRR MAP PFN: 0000000000100000 - 0000000000440000
[   16.260312]  gran_size: 1G 	chunk_size: 2G 	num_reg: 5  	lose cover RAM: 992M

[   16.268820] rangeX: 0000000000000000 - 0000000080000000
[   16.273975] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   16.280163] rangeX: 0000000100000000 - 0000000400000000
[   16.285318] Setting variable MTRR 1, base: 4GB, range: 4GB, type WB
[   16.291506] Setting variable MTRR 2, base: 8GB, range: 8GB, type WB
[   16.297694] After WB checking
[   16.300615] MTRR MAP PFN: 0000000000000000 - 0000000000080000
[   16.306288] MTRR MAP PFN: 0000000000100000 - 0000000000400000
[   16.311960] After UC checking
[   16.314882] MTRR MAP PFN: 0000000000000000 - 0000000000080000
[   16.320554] MTRR MAP PFN: 0000000000100000 - 0000000000400000
[   16.326225] After sorting
[   16.328804] MTRR MAP PFN: 0000000000000000 - 0000000000080000
[   16.334475] MTRR MAP PFN: 0000000000100000 - 0000000000400000
[   16.340147]  gran_size: 2G 	chunk_size: 2G 	num_reg: 3  	lose cover RAM: 2016M

[   16.348740] Found optimal setting for mtrr clean up
[   16.353552]  gran_size: 64K 	chunk_size: 64M 	num_reg: 8  	lose cover RAM: 0G
[   16.360598] range0: 0000000000000000 - 00000000b0000000
[   16.365754] Setting variable MTRR 0, base: 0GB, range: 2GB, type WB
[   16.371942] Setting variable MTRR 1, base: 2GB, range: 512MB, type WB
[   16.378301] Setting variable MTRR 2, base: 2560MB, range: 256MB, type WB
[   16.384918] hole: 00000000ae000000 - 00000000b0000000
[   16.389902] Setting variable MTRR 3, base: 2784MB, range: 32MB, type UC
[   16.396433] rangeX: 0000000100000000 - 0000000450000000
[   16.401587] Setting variable MTRR 4, base: 4GB, range: 4GB, type WB
[   16.407773] Setting variable MTRR 5, base: 8GB, range: 8GB, type WB
[   16.413959] Setting variable MTRR 6, base: 16GB, range: 1GB, type WB
[   16.420232] Setting variable MTRR 7, base: 17GB, range: 256MB, type WB
[   16.426676] New variable MTRRs
[   16.429684] reg 0, base: 0GB, range: 2GB, type WB
[   16.434324] reg 1, base: 2GB, range: 512MB, type WB
[   16.439136] reg 2, base: 2560MB, range: 256MB, type WB
[   16.444205] reg 3, base: 2784MB, range: 32MB, type UC
[   16.449189] reg 4, base: 4GB, range: 4GB, type WB
[   16.453829] reg 5, base: 8GB, range: 8GB, type WB
[   16.458468] reg 6, base: 16GB, range: 1GB, type WB
[   16.463195] reg 7, base: 17GB, range: 256MB, type WB
[   16.468094] MTRR map: 6 entries (3 fixed + 3 variable; max 23), built from 10 variable MTRRs
[   16.476427]   0: 0000000000000000-000000000009ffff write-back
[   16.482098]   1: 00000000000a0000-00000000000bffff uncachable
[   16.487769]   2: 00000000000c0000-00000000000fffff write-protect
[   16.493698]   3: 0000000000100000-00000000adffffff write-back
[   16.499369]   4: 00000000ae000000-00000000afffffff uncachable
[   16.505041]   5: 0000000100000000-000000044fffffff write-back
[   16.510712] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
[   16.517781] After WB checking
[   16.520546] MTRR MAP PFN: 0000000000000000 - 00000000000c0000
[   16.526218] MTRR MAP PFN: 0000000000100000 - 0000000000110000
[   16.531889] MTRR MAP PFN: 0000000000200000 - 0000000000450000
[   16.537560] After UC checking
[   16.540481] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   16.546152] MTRR MAP PFN: 0000000000100000 - 0000000000110000
[   16.551823] MTRR MAP PFN: 0000000000200000 - 0000000000450000
[   16.557493] MTRR MAP PFN: 00000000000b0000 - 00000000000c0000
[   16.563164] After sorting
[   16.565742] MTRR MAP PFN: 0000000000000000 - 00000000000ae000
[   16.571414] MTRR MAP PFN: 00000000000b0000 - 00000000000c0000
[   16.577085] MTRR MAP PFN: 0000000000100000 - 0000000000110000
[   16.582755] MTRR MAP PFN: 0000000000200000 - 0000000000450000
[   16.588427] e820: update [mem 0xae000000-0xafffffff] usable ==> reserved
[   16.595042] e820: update [mem 0xc0000000-0xffffffff] usable ==> reserved
[   16.601659] e820: update [mem 0x110000000-0x1ffffffff] usable ==> reserved
[   16.608447] WARNING: BIOS bug: CPU MTRRs don't cover all of memory, losing 3840MB of RAM.
[   16.616524] update e820 for mtrr
[   16.619710] modified physical RAM map:
[   16.623397] modified: [mem 0x0000000000000000-0x0000000000000fff] reserved
[   16.630184] modified: [mem 0x0000000000001000-0x000000000009ffff] usable
[   16.636800] modified: [mem 0x0000000000100000-0x0000000018ebafff] usable
[   16.643416] modified: [mem 0x0000000018ebb000-0x0000000018fe7fff] ACPI NVS
[   16.650202] modified: [mem 0x0000000018fe8000-0x0000000018fe8fff] usable
[   16.656818] modified: [mem 0x0000000018fe9000-0x0000000018ffffff] ACPI NVS
[   16.663606] modified: [mem 0x0000000019000000-0x000000001dffcfff] usable
[   16.670222] modified: [mem 0x000000001dffd000-0x000000001dffffff] ACPI data
[   16.677096] modified: [mem 0x000000001e000000-0x00000000ac77cfff] usable
[   16.683712] modified: [mem 0x00000000ac77d000-0x00000000ac77ffff] type 20
[   16.690415] modified: [mem 0x00000000ac780000-0x00000000ac780fff] reserved
[   16.697202] modified: [mem 0x00000000ac781000-0x00000000ac782fff] type 20
[   16.703904] modified: [mem 0x00000000ac783000-0x00000000ac7d9fff] reserved
[   16.710694] modified: [mem 0x00000000ac7da000-0x00000000ac7dafff] type 20
[   16.717397] modified: [mem 0x00000000ac7db000-0x00000000ac7dcfff] reserved
[   16.724185] modified: [mem 0x00000000ac7dd000-0x00000000ac7e7fff] type 20
[   16.730889] modified: [mem 0x00000000ac7e8000-0x00000000ac7f1fff] reserved
[   16.737678] modified: [mem 0x00000000ac7f2000-0x00000000ac7f5fff] type 20
[   16.744380] modified: [mem 0x00000000ac7f6000-0x00000000ac7f9fff] reserved
[   16.751169] modified: [mem 0x00000000ac7fa000-0x00000000ac7fafff] type 20
[   16.757871] modified: [mem 0x00000000ac7fb000-0x00000000ac803fff] reserved
[   16.764661] modified: [mem 0x00000000ac804000-0x00000000ac810fff] type 20
[   16.771363] modified: [mem 0x00000000ac811000-0x00000000ac813fff] reserved
[   16.778152] modified: [mem 0x00000000ac814000-0x00000000ad7fffff] usable
[   16.784768] modified: [mem 0x00000000fed20000-0x00000000fed3ffff] reserved
[   16.791557] modified: [mem 0x0000000100000000-0x000000010fffffff] usable
[   16.798173] modified: [mem 0x0000000110000000-0x00000001ffffffff] reserved
[   16.804962] modified: [mem 0x0000000200000000-0x000000044fffffff] usable
[   16.811579] last_pfn = 0x450000 max_arch_pfn = 0x400000000
[   16.816992] last_pfn = 0xad800 max_arch_pfn = 0x400000000
[   16.823948] found SMP MP-table at [mem 0x000f1dd0-0x000f1ddf]
[   16.829484] Using GB pages for direct mapping
[   16.846482] printk: log_buf_len: 16777216 bytes
[   16.850803] printk: early log buf free: 77040(29%)
[   16.855528] Secure boot could not be determined
[   16.860169] RAMDISK: [mem 0x36ff3000-0x377f0fff]
[   16.864724] ACPI: Early table checksum verification disabled
[   16.870310] ACPI: RSDP 0x000000001DFFFF98 000024 (v02 DELL  )
[   16.875978] ACPI: XSDT 0x000000001DFFEE18 00006C (v01 DELL   CBX3     06222004 MSFT 00010013)
[   16.884401] ACPI: FACP 0x0000000018FF0C18 0000F4 (v04 DELL   CBX3     06222004 MSFT 00010013)
[   16.892820] ACPI: DSDT 0x0000000018FA9018 006373 (v01 DELL   CBX3     00000000 INTL 20091112)
[   16.901239] ACPI: FACS 0x0000000018FFDF40 000040
[   16.905791] ACPI: FACS 0x0000000018FF1F40 000040
[   16.910344] ACPI: APIC 0x000000001DFFDC18 000158 (v02 DELL   CBX3     06222004 MSFT 00010013)
[   16.918764] ACPI: MCFG 0x0000000018FFED18 00003C (v01 A M I  OEMMCFG. 06222004 MSFT 00000097)
[   16.927185] ACPI: TCPA 0x0000000018FFEC98 000032 (v02                 00000000      00000000)
[   16.935607] ACPI: SSDT 0x0000000018FEFA98 000306 (v01 DELLTP TPM      00003000 INTL 20091112)
[   16.944028] ACPI: HPET 0x0000000018FFEC18 000038 (v01 A M I   PCHHPET 06222004 AMI. 00000003)
[   16.952448] ACPI: BOOT 0x0000000018FFEB98 000028 (v01 DELL   CBX3     06222004 AMI  00010013)
[   16.960869] ACPI: SSDT 0x0000000018FB0018 037106 (v02 INTEL  CpuPm    00004000 INTL 20091112)
[   16.969290] ACPI: SLIC 0x0000000018FEEC18 000176 (v03 DELL   CBX3     06222004 MSFT 00010013)
[   16.977711] ACPI: Reserving FACP table memory at [mem 0x18ff0c18-0x18ff0d0b]
[   16.984671] ACPI: Reserving DSDT table memory at [mem 0x18fa9018-0x18faf38a]
[   16.991631] ACPI: Reserving FACS table memory at [mem 0x18ffdf40-0x18ffdf7f]
[   16.998592] ACPI: Reserving FACS table memory at [mem 0x18ff1f40-0x18ff1f7f]
[   17.005553] ACPI: Reserving APIC table memory at [mem 0x1dffdc18-0x1dffdd6f]
[   17.012515] ACPI: Reserving MCFG table memory at [mem 0x18ffed18-0x18ffed53]
[   17.019474] ACPI: Reserving TCPA table memory at [mem 0x18ffec98-0x18ffecc9]
[   17.026434] ACPI: Reserving SSDT table memory at [mem 0x18fefa98-0x18fefd9d]
[   17.033394] ACPI: Reserving HPET table memory at [mem 0x18ffec18-0x18ffec4f]
[   17.040355] ACPI: Reserving BOOT table memory at [mem 0x18ffeb98-0x18ffebbf]
[   17.047314] ACPI: Reserving SSDT table memory at [mem 0x18fb0018-0x18fe711d]
[   17.054275] ACPI: Reserving SLIC table memory at [mem 0x18feec18-0x18feed8d]
[   17.061280] No NUMA configuration found
[   17.065015] Faking a node at [mem 0x0000000000000000-0x000000044fffffff]
[   17.071634] NODE_DATA(0) allocated [mem 0x44b7f8000-0x44b7fbfff]
[   17.077601] Zone ranges:
[   17.080054]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[   17.086154]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[   17.092256]   Normal   [mem 0x0000000100000000-0x000000044fffffff]
[   17.098357] Movable zone start for each node
[   17.102567] Early memory node ranges
[   17.106091]   node   0: [mem 0x0000000000001000-0x000000000009ffff]
[   17.112277]   node   0: [mem 0x0000000000100000-0x0000000018ebafff]
[   17.118465]   node   0: [mem 0x0000000018fe8000-0x0000000018fe8fff]
[   17.124651]   node   0: [mem 0x0000000019000000-0x000000001dffcfff]
[   17.130838]   node   0: [mem 0x000000001e000000-0x00000000ac77cfff]
[   17.137025]   node   0: [mem 0x00000000ac814000-0x00000000ad7fffff]
[   17.143214]   node   0: [mem 0x0000000100000000-0x000000010fffffff]
[   17.149401]   node   0: [mem 0x0000000200000000-0x000000044fffffff]
[   17.155589] Initmem setup node 0 [mem 0x0000000000001000-0x000000044fffffff]
[   17.162551] On node 0, zone DMA: 1 pages in unavailable ranges
[   17.162574] On node 0, zone DMA: 96 pages in unavailable ranges
[   17.168833] On node 0, zone DMA32: 301 pages in unavailable ranges
[   17.174628] On node 0, zone DMA32: 23 pages in unavailable ranges
[   17.183711] On node 0, zone DMA32: 3 pages in unavailable ranges
[   17.189602] On node 0, zone DMA32: 151 pages in unavailable ranges
[   17.195922] On node 0, zone Normal: 10240 pages in unavailable ranges
[   17.214913] ACPI: PM-Timer IO Port: 0x408
[   17.225104] IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
[   17.231871] IOAPIC[1]: apic_id 2, version 32, address 0xfec3f000, GSI 24-47
[   17.238744] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[   17.245016] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[   17.251548] ACPI: Using ACPI (MADT) for SMP configuration information
[   17.257903] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[   17.262976] TSC deadline timer available
[   17.266839] smpboot: Allowing 32 CPUs, 24 hotplug CPUs
[   17.271938] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
[   17.279387] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000fffff]
[   17.286862] PM: hibernation: Registered nosave memory: [mem 0x18ebb000-0x18fe7fff]
[   17.294339] PM: hibernation: Registered nosave memory: [mem 0x18fe9000-0x18ffffff]
[   17.301815] PM: hibernation: Registered nosave memory: [mem 0x1dffd000-0x1dffffff]
[   17.309292] PM: hibernation: Registered nosave memory: [mem 0xac77d000-0xac77ffff]
[   17.316768] PM: hibernation: Registered nosave memory: [mem 0xac780000-0xac780fff]
[   17.324243] PM: hibernation: Registered nosave memory: [mem 0xac781000-0xac782fff]
[   17.331718] PM: hibernation: Registered nosave memory: [mem 0xac783000-0xac7d9fff]
[   17.339194] PM: hibernation: Registered nosave memory: [mem 0xac7da000-0xac7dafff]
[   17.346669] PM: hibernation: Registered nosave memory: [mem 0xac7db000-0xac7dcfff]
[   17.354146] PM: hibernation: Registered nosave memory: [mem 0xac7dd000-0xac7e7fff]
[   17.361621] PM: hibernation: Registered nosave memory: [mem 0xac7e8000-0xac7f1fff]
[   17.369098] PM: hibernation: Registered nosave memory: [mem 0xac7f2000-0xac7f5fff]
[   17.376575] PM: hibernation: Registered nosave memory: [mem 0xac7f6000-0xac7f9fff]
[   17.384052] PM: hibernation: Registered nosave memory: [mem 0xac7fa000-0xac7fafff]
[   17.391527] PM: hibernation: Registered nosave memory: [mem 0xac7fb000-0xac803fff]
[   17.399004] PM: hibernation: Registered nosave memory: [mem 0xac804000-0xac810fff]
[   17.406481] PM: hibernation: Registered nosave memory: [mem 0xac811000-0xac813fff]
[   17.413957] PM: hibernation: Registered nosave memory: [mem 0xad800000-0xfed1ffff]
[   17.421433] PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfed3ffff]
[   17.428910] PM: hibernation: Registered nosave memory: [mem 0xfed40000-0xffffffff]
[   17.436388] PM: hibernation: Registered nosave memory: [mem 0x110000000-0x1ffffffff]
[   17.444037] [mem 0xad800000-0xfed1ffff] available for PCI devices
[   17.450051] Booting paravirtualized kernel on bare hardware
[   17.455553] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[   17.470292] setup_percpu: NR_CPUS:256 nr_cpumask_bits:32 nr_cpu_ids:32 nr_node_ids:1
[   17.480176] percpu: Embedded 78 pages/cpu s282624 r8192 d28672 u524288
[   17.486483] pcpu-alloc: s282624 r8192 d28672 u524288 alloc=1*2097152
[   17.492748] pcpu-alloc: [0] 00 01 02 03 [0] 04 05 06 07 
[   17.497989] pcpu-alloc: [0] 08 09 10 11 [0] 12 13 14 15 
[   17.503231] pcpu-alloc: [0] 16 17 18 19 [0] 20 21 22 23 
[   17.508472] pcpu-alloc: [0] 24 25 26 27 [0] 28 29 30 31 
[   17.513733] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.4.0-rc1+ root=/dev/sda7 ro earlyprintk=ttyS0,115200 console=ttyS0,115200 console=tty0 ras=cec_disable root=/dev/sda7 log_buf_len=10M resume=/dev/sda5 no_console_suspend ignore_loglevel mtrr=debug
[   17.536134] Unknown kernel command line parameters "BOOT_IMAGE=/boot/vmlinuz-6.4.0-rc1+", will be passed to user space.
[   17.548193] Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
[   17.556753] Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
[   17.564724] Fallback order for Node 0: 0 
[   17.564730] Built 1 zonelists, mobility grouping on.  Total pages: 3150281
[   17.575317] Policy zone: Normal
[   17.578420] mem auto-init: stack:off, heap alloc:off, heap free:off
[   17.584606] software IO TLB: area num 32.
[   17.624396] Memory: 12307176K/12801796K available (14336K kernel code, 2459K rwdata, 5720K rodata, 3056K init, 14580K bss, 494364K reserved, 0K cma-reserved)
[   17.638296] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=32, Nodes=1
[   17.644723] Kernel/User page tables isolation: enabled
[   17.649865] ftrace: allocating 40212 entries in 158 pages
[   17.661197] ftrace: allocated 158 pages with 5 groups
[   17.666159] Dynamic Preempt: full
[   17.669489] Running RCU self tests
[   17.672684] Running RCU synchronous self tests
[   17.677078] rcu: Preemptible hierarchical RCU implementation.
[   17.682738] rcu: 	RCU lockdep checking is enabled.
[   17.687465] rcu: 	RCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=32.
[   17.694167] 	Trampoline variant of Tasks RCU enabled.
[   17.699150] 	Rude variant of Tasks RCU enabled.
[   17.703617] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
[   17.711178] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=32
[   17.717947] Running RCU synchronous self tests
[   17.725299] NR_IRQS: 16640, nr_irqs: 1088, preallocated irqs: 16
[   17.731287] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[   17.738091] Console: colour dummy device 80x25
[   17.742330] printk: console [tty0] enabled
[   17.746360] printk: bootconsole [earlyser0] disabled
[   17.751300] printk: console [ttyS0] enabled
[   35.841161] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[   35.848843] ... MAX_LOCKDEP_SUBCLASSES:  8
[   35.852903] ... MAX_LOCK_DEPTH:          48
[   35.857050] ... MAX_LOCKDEP_KEYS:        8192
[   35.861371] ... CLASSHASH_SIZE:          4096
[   35.865690] ... MAX_LOCKDEP_ENTRIES:     32768
[   35.870096] ... MAX_LOCKDEP_CHAINS:      65536
[   35.874504] ... CHAINHASH_SIZE:          32768
[   35.878910]  memory used by lock dependency info: 6365 kB
[   35.884264]  memory used for stack traces: 4224 kB
[   35.889015]  per task-struct memory footprint: 1920 bytes
[   35.894404] ACPI: Core revision 20230331
[   35.898530] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
[   35.907626] APIC: Switch to symmetric I/O mode setup
[   35.912767] x2apic: IRQ remapping doesn't support X2APIC mode
[   35.918524] Switched APIC routing to physical flat.
[   35.923934] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[   35.934621] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x33c3c65886c, max_idle_ns: 440795268982 ns
[   35.945081] Calibrating delay loop (skipped), value calculated using timer frequency.. 7182.35 BogoMIPS (lpj=3591179)
[   35.946070] pid_max: default: 32768 minimum: 301
[   35.955171] Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[   35.956095] Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[   35.957631] CPU0: Thermal monitoring enabled (TM1)
[   35.958118] process: using mwait in idle threads
[   35.959077] Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8
[   35.960069] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32, 1GB 0
[   35.961073] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[   35.962071] Spectre V2 : Mitigation: Retpolines
[   35.963069] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[   35.964069] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
[   35.965069] Spectre V2 : Enabling Restricted Speculation for firmware calls
[   35.966071] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[   35.967069] Spectre V2 : User space: Mitigation: STIBP via prctl
[   35.968070] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
[   35.969073] MDS: Mitigation: Clear CPU buffers
[   35.970069] MMIO Stale Data: Unknown: No mitigations
[   35.985005] Freeing SMP alternatives memory: 36K
[   35.985483] Running RCU synchronous self tests
[   35.986078] Running RCU synchronous self tests
[   35.987264] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-1620 0 @ 3.60GHz (family: 0x6, model: 0x2d, stepping: 0x7)
[   35.997096] cblist_init_generic: Setting adjustable number of callback queues.
[   35.998069] cblist_init_generic: Setting shift to 5 and lim to 1.
[   36.000095] cblist_init_generic: Setting shift to 5 and lim to 1.
[   36.002081] Running RCU-tasks wait API self tests
[   36.114073] Performance Events: PEBS fmt1+, SandyBridge events, 16-deep LBR, full-width counters, Intel PMU driver.
[   36.116088] ... version:                3
[   36.117069] ... bit width:              48
[   36.118081] ... generic registers:      4
[   36.119069] ... value mask:             0000ffffffffffff
[   36.120069] ... max period:             00007fffffffffff
[   36.121069] ... fixed-purpose events:   3
[   36.122074] ... event mask:             000000070000000f
[   36.129109] Estimated ratio of average max frequency by base frequency (times 1024): 1052
[   36.133070] rcu: Hierarchical SRCU implementation.
[   36.134069] rcu: 	Max phase no-delay instances is 400.
[   36.177103] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
[   36.195085] smp: Bringing up secondary CPUs ...
[   36.204077] x86: Booting SMP configuration:
[   36.205077] .... node  #0, CPUs:        #1
[   36.248070] Callback from call_rcu_tasks_rude() invoked.
[   36.256079]   #2  #3
[   36.320090] Callback from call_rcu_tasks() invoked.
[   36.330080]   #4
[   36.345078] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[   36.355090]   #5  #6  #7
[   36.413081] smp: Brought up 1 node, 8 CPUs
[   36.415070] smpboot: Max logical packages: 4
[   36.416101] smpboot: Total of 8 processors activated (57458.86 BogoMIPS)
[   36.442109] devtmpfs: initialized
[   36.459083] ACPI: PM: Registering ACPI NVS region [mem 0x18ebb000-0x18fe7fff] (1232896 bytes)
[   36.463097] ACPI: PM: Registering ACPI NVS region [mem 0x18fe9000-0x18ffffff] (94208 bytes)
[   36.469079] Running RCU synchronous self tests
[   36.470102] Running RCU synchronous self tests
[   36.478092] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[   36.480079] futex hash table entries: 8192 (order: 8, 1048576 bytes, linear)
[   36.506096] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[   36.517094] DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
[   36.519076] DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[   36.520082] DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[   36.527080] thermal_sys: Registered thermal governor 'step_wise'
[   36.527084] thermal_sys: Registered thermal governor 'user_space'
[   36.529100] cpuidle: using governor ladder
[   36.532107] cpuidle: using governor menu
[   36.535081] Simple Boot Flag at 0xf3 set to 0x1
[   36.536115] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[   36.541091] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xb0000000-0xb3ffffff] (base 0xb0000000)
[   36.543086] PCI: not using MMCONFIG
[   36.544084] PCI: Using configuration type 1 for base access
[   36.546073] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[   36.568093] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
[   36.582129] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[   36.713123] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
[   36.844117] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[   36.977118] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
[   37.128097] ACPI: Added _OSI(Module Device)
[   37.129071] ACPI: Added _OSI(Processor Device)
[   37.130075] ACPI: Added _OSI(3.0 _SCP Extensions)
[   37.131070] ACPI: Added _OSI(Processor Aggregator Device)
[   39.056108] ACPI: 3 ACPI AML tables successfully acquired and loaded
[   39.481075] ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
[   39.570093] ACPI: Interpreter enabled
[   39.572104] ACPI: PM: (supports S0 S1 S3 S4 S5)
[   39.574088] ACPI: Using IOAPIC for interrupt routing
[   39.576093] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xb0000000-0xb3ffffff] (base 0xb0000000)
[   39.632085] [Firmware Info]: PCI: MMCONFIG at [mem 0xb0000000-0xb3ffffff] not reserved in ACPI motherboard resources
[   39.634091] PCI: MMCONFIG at [mem 0xb0000000-0xb3ffffff] reserved as EfiMemoryMappedIO
[   39.635103] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[   39.637069] PCI: Using E820 reservations for host bridge windows
[   39.650117] ACPI: Enabled 7 GPEs in block 00 to 3F
[   40.231098] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f])
[   40.233089] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
[   40.239112] acpi PNP0A08:00: _OSC: platform does not support [AER PCIeCapability LTR]
[   40.245091] acpi PNP0A08:00: _OSC: not requesting control; platform does not support [PCIeCapability]
[   40.247084] acpi PNP0A08:00: _OSC: OS requested [PME AER PCIeCapability LTR]
[   40.248070] acpi PNP0A08:00: _OSC: platform willing to grant [PME]
[   40.249070] acpi PNP0A08:00: _OSC: platform retains control of PCIe features (AE_SUPPORT)
[   40.271080] PCI host bridge to bus 0000:00
[   40.272078] pci_bus 0000:00: root bus resource [io  0x0000-0x03af window]
[   40.273074] pci_bus 0000:00: root bus resource [io  0x03e0-0x0cf7 window]
[   40.274077] pci_bus 0000:00: root bus resource [io  0x03b0-0x03df window]
[   40.275070] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[   40.276070] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000dffff window]
[   40.277070] pci_bus 0000:00: root bus resource [mem 0xb0000000-0xfbffffff window]
[   40.278078] pci_bus 0000:00: root bus resource [bus 00-1f]
[   40.280111] pci 0000:00:00.0: [8086:3c00] type 00 class 0x060000
[   40.284110] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold
[   40.291090] pci 0000:00:01.0: [8086:3c02] type 01 class 0x060400
[   40.294089] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[   40.305075] pci 0000:00:01.1: [8086:3c03] type 01 class 0x060400
[   40.308090] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
[   40.316106] pci 0000:00:02.0: [8086:3c04] type 01 class 0x060400
[   40.320089] pci 0000:00:02.0: PME# supported from D0 D3hot D3cold
[   40.328097] pci 0000:00:03.0: [8086:3c08] type 01 class 0x060400
[   40.331070] pci 0000:00:03.0: enabling Extended Tags
[   40.333101] pci 0000:00:03.0: PME# supported from D0 D3hot D3cold
[   40.342086] pci 0000:00:05.0: [8086:3c28] type 00 class 0x088000
[   40.347094] pci 0000:00:05.2: [8086:3c2a] type 00 class 0x088000
[   40.352089] pci 0000:00:05.4: [8086:3c2c] type 00 class 0x080020
[   40.353102] pci 0000:00:05.4: reg 0x10: [mem 0xf332d000-0xf332dfff]
[   40.359090] pci 0000:00:11.0: [8086:1d3e] type 01 class 0x060400
[   40.362112] pci 0000:00:11.0: PME# supported from D0 D3hot D3cold
[   40.370108] pci 0000:00:16.0: [8086:1d3a] type 00 class 0x078000
[   40.372097] pci 0000:00:16.0: reg 0x10: [mem 0xf332c000-0xf332c00f 64bit]
[   40.376069] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
[   40.380079] pci 0000:00:16.3: [8086:1d3d] type 00 class 0x070002
[   40.381090] pci 0000:00:16.3: reg 0x10: [io  0xf0a0-0xf0a7]
[   40.382082] pci 0000:00:16.3: reg 0x14: [mem 0xf332a000-0xf332afff]
[   40.387106] pci 0000:00:19.0: [8086:1502] type 00 class 0x020000
[   40.389090] pci 0000:00:19.0: reg 0x10: [mem 0xf3300000-0xf331ffff]
[   40.391083] pci 0000:00:19.0: reg 0x14: [mem 0xf3329000-0xf3329fff]
[   40.392083] pci 0000:00:19.0: reg 0x18: [io  0xf040-0xf05f]
[   40.394100] pci 0000:00:19.0: PME# supported from D0 D3hot D3cold
[   40.401103] pci 0000:00:1a.0: [8086:1d2d] type 00 class 0x0c0320
[   40.403086] pci 0000:00:1a.0: reg 0x10: [mem 0xf3328000-0xf33283ff]
[   40.406074] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[   40.413083] pci 0000:00:1b.0: [8086:1d20] type 00 class 0x040300
[   40.414094] pci 0000:00:1b.0: reg 0x10: [mem 0xf3320000-0xf3323fff 64bit]
[   40.418076] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
[   40.425088] pci 0000:00:1c.0: [8086:1d16] type 01 class 0x060400
[   40.428089] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[   40.435092] pci 0000:00:1c.2: [8086:1d14] type 01 class 0x060400
[   40.438089] pci 0000:00:1c.2: PME# supported from D0 D3hot D3cold
[   40.445111] pci 0000:00:1d.0: [8086:1d26] type 00 class 0x0c0320
[   40.447105] pci 0000:00:1d.0: reg 0x10: [mem 0xf3327000-0xf33273ff]
[   40.451081] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[   40.458085] pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401
[   40.465114] pci 0000:00:1f.0: [8086:1d41] type 00 class 0x060100
[   40.476078] pci 0000:00:1f.2: [8086:1d02] type 00 class 0x010601
[   40.477089] pci 0000:00:1f.2: reg 0x10: [io  0xf090-0xf097]
[   40.478083] pci 0000:00:1f.2: reg 0x14: [io  0xf080-0xf083]
[   40.479077] pci 0000:00:1f.2: reg 0x18: [io  0xf070-0xf077]
[   40.480077] pci 0000:00:1f.2: reg 0x1c: [io  0xf060-0xf063]
[   40.481077] pci 0000:00:1f.2: reg 0x20: [io  0xf020-0xf03f]
[   40.482077] pci 0000:00:1f.2: reg 0x24: [mem 0xf3326000-0xf33267ff]
[   40.484073] pci 0000:00:1f.2: PME# supported from D3hot
[   40.490091] pci 0000:00:1f.3: [8086:1d22] type 00 class 0x0c0500
[   40.491092] pci 0000:00:1f.3: reg 0x10: [mem 0xf3325000-0xf33250ff 64bit]
[   40.492091] pci 0000:00:1f.3: reg 0x20: [io  0xf000-0xf01f]
[   40.499091] pci 0000:00:01.0: PCI bridge to [bus 01]
[   40.501110] pci 0000:00:01.1: PCI bridge to [bus 02]
[   40.505084] pci 0000:03:00.0: [10de:10d8] type 00 class 0x030000
[   40.506086] pci 0000:03:00.0: reg 0x10: [mem 0xf2000000-0xf2ffffff]
[   40.507084] pci 0000:03:00.0: reg 0x14: [mem 0xf4000000-0xf7ffffff 64bit pref]
[   40.508085] pci 0000:03:00.0: reg 0x1c: [mem 0xf8000000-0xf9ffffff 64bit pref]
[   40.509075] pci 0000:03:00.0: reg 0x24: [io  0xe000-0xe07f]
[   40.510075] pci 0000:03:00.0: reg 0x30: [mem 0xf3000000-0xf307ffff pref]
[   40.511085] pci 0000:03:00.0: enabling Extended Tags
[   40.512102] pci 0000:03:00.0: BAR 3: assigned to efifb
[   40.514087] pci 0000:03:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[   40.519088] pci 0000:03:00.1: [10de:0be3] type 00 class 0x040300
[   40.520085] pci 0000:03:00.1: reg 0x10: [mem 0xf3080000-0xf3083fff]
[   40.521105] pci 0000:03:00.1: enabling Extended Tags
[   40.527108] pci 0000:00:02.0: PCI bridge to [bus 03]
[   40.529078] pci 0000:00:02.0:   bridge window [io  0xe000-0xefff]
[   40.530078] pci 0000:00:02.0:   bridge window [mem 0xf2000000-0xf30fffff]
[   40.531080] pci 0000:00:02.0:   bridge window [mem 0xf4000000-0xf9ffffff 64bit pref]
[   40.533107] pci 0000:00:03.0: PCI bridge to [bus 04]
[   40.537097] pci 0000:05:00.0: [8086:1d6b] type 00 class 0x010700
[   40.539099] pci 0000:05:00.0: reg 0x10: [mem 0xfa800000-0xfa803fff 64bit pref]
[   40.541090] pci 0000:05:00.0: reg 0x18: [mem 0xfa400000-0xfa7fffff 64bit pref]
[   40.542084] pci 0000:05:00.0: reg 0x20: [io  0xd000-0xd0ff]
[   40.543096] pci 0000:05:00.0: enabling Extended Tags
[   40.546099] pci 0000:05:00.0: reg 0x164: [mem 0x00000000-0x00003fff 64bit pref]
[   40.548070] pci 0000:05:00.0: VF(n) BAR0 space: [mem 0x00000000-0x0007bfff 64bit pref] (contains BAR0 for 31 VFs)
[   40.556093] pci 0000:00:11.0: PCI bridge to [bus 05]
[   40.557079] pci 0000:00:11.0:   bridge window [io  0xd000-0xdfff]
[   40.558079] pci 0000:00:11.0:   bridge window [mem 0xf3200000-0xf32fffff]
[   40.559082] pci 0000:00:11.0:   bridge window [mem 0xfa400000-0xfa8fffff 64bit pref]
[   40.561105] pci 0000:00:1c.0: PCI bridge to [bus 06]
[   40.565103] pci 0000:07:00.0: [1033:0194] type 00 class 0x0c0330
[   40.567104] pci 0000:07:00.0: reg 0x10: [mem 0xf3100000-0xf3101fff 64bit]
[   40.572102] pci 0000:07:00.0: PME# supported from D0 D3hot D3cold
[   40.581106] pci 0000:00:1c.2: PCI bridge to [bus 07]
[   40.583081] pci 0000:00:1c.2:   bridge window [mem 0xf3100000-0xf31fffff]
[   40.584103] pci_bus 0000:08: extended config space not accessible
[   40.588078] pci 0000:00:1e.0: PCI bridge to [bus 08] (subtractive decode)
[   40.589079] pci 0000:00:1e.0:   bridge window [io  0x0000-0x03af window] (subtractive decode)
[   40.590070] pci 0000:00:1e.0:   bridge window [io  0x03e0-0x0cf7 window] (subtractive decode)
[   40.591070] pci 0000:00:1e.0:   bridge window [io  0x03b0-0x03df window] (subtractive decode)
[   40.592070] pci 0000:00:1e.0:   bridge window [io  0x0d00-0xffff window] (subtractive decode)
[   40.593077] pci 0000:00:1e.0:   bridge window [mem 0x000a0000-0x000dffff window] (subtractive decode)
[   40.594070] pci 0000:00:1e.0:   bridge window [mem 0xb0000000-0xfbffffff window] (subtractive decode)
[   40.621105] ACPI: PCI: Interrupt link LNKA configured for IRQ 11
[   40.626076] ACPI: PCI: Interrupt link LNKB configured for IRQ 10
[   40.630077] ACPI: PCI: Interrupt link LNKC configured for IRQ 5
[   40.634072] ACPI: PCI: Interrupt link LNKD configured for IRQ 10
[   40.638072] ACPI: PCI: Interrupt link LNKE configured for IRQ 3
[   40.642072] ACPI: PCI: Interrupt link LNKF configured for IRQ 0
[   40.643070] ACPI: PCI: Interrupt link LNKF disabled
[   40.647072] ACPI: PCI: Interrupt link LNKG configured for IRQ 11
[   40.651076] ACPI: PCI: Interrupt link LNKH configured for IRQ 0
[   40.652070] ACPI: PCI: Interrupt link LNKH disabled
[   40.661092] ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-ff])
[   40.662073] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
[   40.667103] acpi PNP0A08:01: _OSC: platform does not support [AER PCIeCapability LTR]
[   40.673083] acpi PNP0A08:01: _OSC: not requesting control; platform does not support [PCIeCapability]
[   40.674071] acpi PNP0A08:01: _OSC: OS requested [PME AER PCIeCapability LTR]
[   40.675070] acpi PNP0A08:01: _OSC: platform willing to grant [PME]
[   40.676070] acpi PNP0A08:01: _OSC: platform retains control of PCIe features (AE_SUPPORT)
[   40.677094] acpi PNP0A08:01: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-3f] only partially covers this bridge
[   40.689082] PCI host bridge to bus 0000:20
[   40.690071] pci_bus 0000:20: root bus resource [io  0x03b0-0x03df window]
[   40.691070] pci_bus 0000:20: root bus resource [mem 0x000a0000-0x000bffff window]
[   40.692070] pci_bus 0000:20: root bus resource [bus 20-ff]
[   40.704098] iommu: Default domain type: Translated 
[   40.705070] iommu: DMA domain TLB invalidation policy: lazy mode 
[   40.712082] SCSI subsystem initialized
[   40.717070] libata version 3.00 loaded.
[   40.718103] ACPI: bus type USB registered
[   40.720113] usbcore: registered new interface driver usbfs
[   40.723072] usbcore: registered new interface driver hub
[   40.725075] usbcore: registered new device driver usb
[   40.727079] pps_core: LinuxPPS API ver. 1 registered
[   40.728070] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[   40.729092] PTP clock support registered
[   40.738111] efivars: Registered efivars operations
[   40.751104] PCI: Using ACPI for IRQ routing
[   40.786096] PCI: Discovered peer bus 3f
[   40.787075] PCI: root bus 3f: using default resources
[   40.788077] PCI: Probing PCI hardware (bus 3f)
[   40.790095] PCI host bridge to bus 0000:3f
[   40.791071] pci_bus 0000:3f: root bus resource [io  0x0000-0xffff]
[   40.792070] pci_bus 0000:3f: root bus resource [mem 0x00000000-0x3fffffffffff]
[   40.793076] pci_bus 0000:3f: No busn resource found for root bus, will use [bus 3f-ff]
[   40.794077] pci_bus 0000:3f: busn_res: can not insert [bus 3f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 20-ff])
[   40.795095] pci 0000:3f:08.0: [8086:3c80] type 00 class 0x088000
[   40.799071] pci 0000:3f:08.3: [8086:3c83] type 00 class 0x088000
[   40.803083] pci 0000:3f:08.4: [8086:3c84] type 00 class 0x088000
[   40.807085] pci 0000:3f:09.0: [8086:3c90] type 00 class 0x088000
[   40.810101] pci 0000:3f:09.3: [8086:3c93] type 00 class 0x088000
[   40.815077] pci 0000:3f:09.4: [8086:3c94] type 00 class 0x088000
[   40.819089] pci 0000:3f:0a.0: [8086:3cc0] type 00 class 0x088000
[   40.822087] pci 0000:3f:0a.1: [8086:3cc1] type 00 class 0x088000
[   40.825088] pci 0000:3f:0a.2: [8086:3cc2] type 00 class 0x088000
[   40.828091] pci 0000:3f:0a.3: [8086:3cd0] type 00 class 0x088000
[   40.831091] pci 0000:3f:0b.0: [8086:3ce0] type 00 class 0x088000
[   40.834085] pci 0000:3f:0b.3: [8086:3ce3] type 00 class 0x088000
[   40.837088] pci 0000:3f:0c.0: [8086:3ce8] type 00 class 0x088000
[   40.840084] pci 0000:3f:0c.1: [8086:3ce8] type 00 class 0x088000
[   40.843089] pci 0000:3f:0c.6: [8086:3cf4] type 00 class 0x088000
[   40.846084] pci 0000:3f:0c.7: [8086:3cf6] type 00 class 0x088000
[   40.849084] pci 0000:3f:0d.0: [8086:3ce8] type 00 class 0x088000
[   40.852090] pci 0000:3f:0d.1: [8086:3ce8] type 00 class 0x088000
[   40.855088] pci 0000:3f:0d.6: [8086:3cf5] type 00 class 0x088000
[   40.858091] pci 0000:3f:0e.0: [8086:3ca0] type 00 class 0x088000
[   40.861086] pci 0000:3f:0e.1: [8086:3c46] type 00 class 0x110100
[   40.864099] pci 0000:3f:0f.0: [8086:3ca8] type 00 class 0x088000
[   40.868115] pci 0000:3f:0f.1: [8086:3c71] type 00 class 0x088000
[   40.872115] pci 0000:3f:0f.2: [8086:3caa] type 00 class 0x088000
[   40.877074] pci 0000:3f:0f.3: [8086:3cab] type 00 class 0x088000
[   40.881072] pci 0000:3f:0f.4: [8086:3cac] type 00 class 0x088000
[   40.884115] pci 0000:3f:0f.5: [8086:3cad] type 00 class 0x088000
[   40.888111] pci 0000:3f:0f.6: [8086:3cae] type 00 class 0x088000
[   40.892098] pci 0000:3f:10.0: [8086:3cb0] type 00 class 0x088000
[   40.897070] pci 0000:3f:10.1: [8086:3cb1] type 00 class 0x088000
[   40.900114] pci 0000:3f:10.2: [8086:3cb2] type 00 class 0x088000
[   40.904114] pci 0000:3f:10.3: [8086:3cb3] type 00 class 0x088000
[   40.908113] pci 0000:3f:10.4: [8086:3cb4] type 00 class 0x088000
[   40.913073] pci 0000:3f:10.5: [8086:3cb5] type 00 class 0x088000
[   40.917069] pci 0000:3f:10.6: [8086:3cb6] type 00 class 0x088000
[   40.920113] pci 0000:3f:10.7: [8086:3cb7] type 00 class 0x088000
[   40.924111] pci 0000:3f:11.0: [8086:3cb8] type 00 class 0x088000
[   40.928098] pci 0000:3f:13.0: [8086:3ce4] type 00 class 0x088000
[   40.932089] pci 0000:3f:13.1: [8086:3c43] type 00 class 0x110100
[   40.935086] pci 0000:3f:13.4: [8086:3ce6] type 00 class 0x110100
[   40.938087] pci 0000:3f:13.5: [8086:3c44] type 00 class 0x110100
[   40.941087] pci 0000:3f:13.6: [8086:3c45] type 00 class 0x088000
[   40.944094] pci_bus 0000:3f: busn_res: [bus 3f-ff] end is updated to 3f
[   40.945070] pci_bus 0000:3f: busn_res: can not insert [bus 3f] under domain [bus 00-ff] (conflicts with (null) [bus 20-ff])
[   40.947101] PCI: pci_cache_line_size set to 64 bytes
[   40.951091] e820: reserve RAM buffer [mem 0x18ebb000-0x1bffffff]
[   40.952084] e820: reserve RAM buffer [mem 0x18fe9000-0x1bffffff]
[   40.953077] e820: reserve RAM buffer [mem 0x1dffd000-0x1fffffff]
[   40.954070] e820: reserve RAM buffer [mem 0xac77d000-0xafffffff]
[   40.955082] e820: reserve RAM buffer [mem 0xad800000-0xafffffff]
[   40.967107] pci 0000:03:00.0: vgaarb: setting as boot VGA device
[   40.968067] pci 0000:03:00.0: vgaarb: bridge control possible
[   40.968067] pci 0000:03:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[   40.968100] vgaarb: loaded
[   40.974090] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
[   40.976070] hpet0: 8 comparators, 64-bit 14.318180 MHz counter
[   40.982099] clocksource: Switched to clocksource tsc-early
[   40.988240] pnp: PnP ACPI init
[   40.991521] system 00:00: [mem 0xfc000000-0xfcffffff] has been reserved
[   40.998103] system 00:00: [mem 0xfd000000-0xfdffffff] has been reserved
[   41.004680] system 00:00: [mem 0xfe000000-0xfeafffff] has been reserved
[   41.011255] system 00:00: [mem 0xfeb00000-0xfebfffff] has been reserved
[   41.017829] system 00:00: [mem 0xfed00400-0xfed3ffff] could not be reserved
[   41.024751] system 00:00: [mem 0xfed45000-0xfedfffff] has been reserved
[   41.031728] system 00:01: [io  0x0680-0x069f] has been reserved
[   41.037622] system 00:01: [io  0x0800-0x080f] has been reserved
[   41.043502] system 00:01: [io  0xffff] has been reserved
[   41.048782] system 00:01: [io  0xffff] has been reserved
[   41.054071] system 00:01: [io  0x0400-0x0453] has been reserved
[   41.059957] system 00:01: [io  0x0458-0x047f] has been reserved
[   41.065852] system 00:01: [io  0x0500-0x057f] has been reserved
[   41.071736] system 00:01: [io  0x164e-0x164f] has been reserved
[   41.077906] system 00:03: [io  0x0454-0x0457] has been reserved
[   41.085021] pnp: PnP ACPI: found 8 devices
[   41.098764] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[   41.107647] NET: Registered PF_INET protocol family
[   41.112764] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[   41.123763] tcp_listen_portaddr_hash hash table entries: 8192 (order: 7, 589824 bytes, linear)
[   41.132498] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
[   41.140210] TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[   41.148382] TCP bind hash table entries: 65536 (order: 11, 9437184 bytes, vmalloc hugepage)
[   41.159219] TCP: Hash tables configured (established 131072 bind 65536)
[   41.166013] UDP hash table entries: 8192 (order: 8, 1310720 bytes, linear)
[   41.173258] UDP-Lite hash table entries: 8192 (order: 8, 1310720 bytes, linear)
[   41.180980] NET: Registered PF_UNIX/PF_LOCAL protocol family
[   41.186659] pci 0000:00:01.0: PCI bridge to [bus 01]
[   41.191608] pci 0000:00:01.1: PCI bridge to [bus 02]
[   41.196553] pci 0000:00:02.0: PCI bridge to [bus 03]
[   41.201486] pci 0000:00:02.0:   bridge window [io  0xe000-0xefff]
[   41.207543] pci 0000:00:02.0:   bridge window [mem 0xf2000000-0xf30fffff]
[   41.214285] pci 0000:00:02.0:   bridge window [mem 0xf4000000-0xf9ffffff 64bit pref]
[   41.221982] pci 0000:00:03.0: PCI bridge to [bus 04]
[   41.226931] pci 0000:05:00.0: BAR 7: assigned [mem 0xfa804000-0xfa87ffff 64bit pref]
[   41.234647] pci 0000:00:11.0: PCI bridge to [bus 05]
[   41.239585] pci 0000:00:11.0:   bridge window [io  0xd000-0xdfff]
[   41.245639] pci 0000:00:11.0:   bridge window [mem 0xf3200000-0xf32fffff]
[   41.252386] pci 0000:00:11.0:   bridge window [mem 0xfa400000-0xfa8fffff 64bit pref]
[   41.260079] pci 0000:00:1c.0: PCI bridge to [bus 06]
[   41.265023] pci 0000:00:1c.2: PCI bridge to [bus 07]
[   41.269960] pci 0000:00:1c.2:   bridge window [mem 0xf3100000-0xf31fffff]
[   41.276711] pci 0000:00:1e.0: PCI bridge to [bus 08]
[   41.281652] pci_bus 0000:00: resource 4 [io  0x0000-0x03af window]
[   41.287796] pci_bus 0000:00: resource 5 [io  0x03e0-0x0cf7 window]
[   41.293936] pci_bus 0000:00: resource 6 [io  0x03b0-0x03df window]
[   41.300074] pci_bus 0000:00: resource 7 [io  0x0d00-0xffff window]
[   41.306214] pci_bus 0000:00: resource 8 [mem 0x000a0000-0x000dffff window]
[   41.313041] pci_bus 0000:00: resource 9 [mem 0xb0000000-0xfbffffff window]
[   41.319872] pci_bus 0000:03: resource 0 [io  0xe000-0xefff]
[   41.325410] pci_bus 0000:03: resource 1 [mem 0xf2000000-0xf30fffff]
[   41.331630] pci_bus 0000:03: resource 2 [mem 0xf4000000-0xf9ffffff 64bit pref]
[   41.338804] pci_bus 0000:05: resource 0 [io  0xd000-0xdfff]
[   41.344341] pci_bus 0000:05: resource 1 [mem 0xf3200000-0xf32fffff]
[   41.350568] pci_bus 0000:05: resource 2 [mem 0xfa400000-0xfa8fffff 64bit pref]
[   41.357746] pci_bus 0000:07: resource 1 [mem 0xf3100000-0xf31fffff]
[   41.363973] pci_bus 0000:08: resource 4 [io  0x0000-0x03af window]
[   41.370109] pci_bus 0000:08: resource 5 [io  0x03e0-0x0cf7 window]
[   41.376252] pci_bus 0000:08: resource 6 [io  0x03b0-0x03df window]
[   41.382391] pci_bus 0000:08: resource 7 [io  0x0d00-0xffff window]
[   41.388533] pci_bus 0000:08: resource 8 [mem 0x000a0000-0x000dffff window]
[   41.395363] pci_bus 0000:08: resource 9 [mem 0xb0000000-0xfbffffff window]
[   41.402341] pci_bus 0000:20: resource 4 [io  0x03b0-0x03df window]
[   41.408484] pci_bus 0000:20: resource 5 [mem 0x000a0000-0x000bffff window]
[   41.415369] pci_bus 0000:3f: resource 4 [io  0x0000-0xffff]
[   41.420902] pci_bus 0000:3f: resource 5 [mem 0x00000000-0x3fffffffffff]
[   41.427523] pci 0000:00:05.0: disabled boot interrupts on device [8086:3c28]
[   41.436046] pci 0000:03:00.1: extending delay after power-on from D3hot to 20 msec
[   41.443731] pci 0000:03:00.1: D0 power state depends on 0000:03:00.0
[   41.450516] pci 0000:07:00.0: enabling device (0000 -> 0002)
[   41.456378] PCI: CLS 64 bytes, default 64
[   41.460417] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[   41.460652] Unpacking initramfs...
[   41.466819] software IO TLB: mapped [mem 0x00000000a877d000-0x00000000ac77d000] (64MB)
[   41.468717] Initialise system trusted keyrings
[   41.482767] workingset: timestamp_bits=56 max_order=22 bucket_order=0
[   41.489466] ntfs: driver 2.1.32 [Flags: R/W].
[   41.493806] fuse: init (API version 7.38)
[   41.498164] 9p: Installing v9fs 9p2000 file system support
[   41.503767] Key type asymmetric registered
[   41.507860] Asymmetric key parser 'x509' registered
[   41.512741] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
[   41.527502] ACPI: \_PR_.CP00: Found 4 idle states
[   41.532304] ACPI: \_PR_.CP01: Found 4 idle states
[   41.537103] ACPI: \_PR_.CP02: Found 4 idle states
[   41.541902] ACPI: \_PR_.CP03: Found 4 idle states
[   41.546698] ACPI: \_PR_.CP04: Found 4 idle states
[   41.551499] ACPI: \_PR_.CP05: Found 4 idle states
[   41.556296] ACPI: \_PR_.CP06: Found 4 idle states
[   41.561095] ACPI: \_PR_.CP07: Found 4 idle states
[   41.597891] Freeing initrd memory: 8184K
[   41.619224] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[   41.625632] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[   41.633871] serial 0000:00:16.3: enabling device (0000 -> 0003)
[   41.640546] 0000:00:16.3: ttyS1 at I/O 0xf0a0 (irq = 17, base_baud = 115200) is a 16550A
[   41.649256] Linux agpgart interface v0.103
[   41.653478] ACPI: bus type drm_connector registered
[   41.658694] nouveau 0000:03:00.0: vgaarb: deactivate vga console
[   41.664805] nouveau 0000:03:00.0: NVIDIA GT218 (0a8c00b1)
[   41.783852] nouveau 0000:03:00.0: bios: version 70.18.83.00.08
[   41.791350] nouveau 0000:03:00.0: fb: 512 MiB DDR3
[   41.947445] nouveau 0000:03:00.0: DRM: VRAM: 512 MiB
[   41.952387] nouveau 0000:03:00.0: DRM: GART: 1048576 MiB
[   41.957685] nouveau 0000:03:00.0: DRM: TMDS table version 2.0
[   41.963399] nouveau 0000:03:00.0: DRM: DCB version 4.0
[   41.968518] nouveau 0000:03:00.0: DRM: DCB outp 00: 02000360 00000000
[   41.974925] nouveau 0000:03:00.0: DRM: DCB outp 01: 02000362 00020010
[   41.981329] nouveau 0000:03:00.0: DRM: DCB outp 02: 028003a6 0f220010
[   41.987733] nouveau 0000:03:00.0: DRM: DCB outp 03: 01011380 00000000
[   41.994136] nouveau 0000:03:00.0: DRM: DCB outp 04: 08011382 00020010
[   42.000534] nouveau 0000:03:00.0: DRM: DCB outp 05: 088113c6 0f220010
[   42.006939] nouveau 0000:03:00.0: DRM: DCB conn 00: 00101064
[   42.012569] nouveau 0000:03:00.0: DRM: DCB conn 01: 00202165
[   42.020740] nouveau 0000:03:00.0: DRM: MM: using COPY for buffer copies
[   42.027341] stackdepot: allocating hash table of 1048576 entries via kvcalloc
[   42.038133] [drm] Initialized nouveau 1.3.1 20120801 for 0000:03:00.0 on minor 0
[   42.089491] fbcon: nouveaudrmfb (fb0) is primary device
[   42.222287] Console: switching to colour frame buffer device 210x65
[   42.237704] nouveau 0000:03:00.0: [drm] fb0: nouveaudrmfb frame buffer device
[   42.267022] megasas: 07.725.01.00-rc1
[   42.270935] st: Version 20160209, fixed bufsize 32768, s/g segs 256
[   42.277457] ahci 0000:00:1f.2: version 3.0
[   42.282257] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x3 impl SATA mode
[   42.290465] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ems apst 
[   42.303392] scsi host0: ahci
[   42.307226] scsi host1: ahci
[   42.311111] scsi host2: ahci
[   42.314950] scsi host3: ahci
[   42.318743] scsi host4: ahci
[   42.322567] scsi host5: ahci
[   42.325740] ata1: SATA max UDMA/133 abar m2048@0xf3326000 port 0xf3326100 irq 32
[   42.333262] ata2: SATA max UDMA/133 abar m2048@0xf3326000 port 0xf3326180 irq 32
[   42.340762] ata3: DUMMY
[   42.343325] ata4: DUMMY
[   42.345880] ata5: DUMMY
[   42.348442] ata6: DUMMY
[   42.351205] e1000e: Intel(R) PRO/1000 Network Driver
[   42.356285] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[   42.362803] e1000e 0000:00:19.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[   42.455510] e1000e 0000:00:19.0 0000:00:19.0 (uninitialized): registered PHC clock
[   42.494241] tsc: Refined TSC clocksource calibration: 3591.345 MHz
[   42.500474] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x33c4635c383, max_idle_ns: 440795314831 ns
[   42.510514] clocksource: Switched to clocksource tsc
[   42.552496] e1000e 0000:00:19.0 eth0: (PCI Express:2.5GT/s:Width x1) 90:b1:1c:7b:da:e7
[   42.560820] e1000e 0000:00:19.0 eth0: Intel(R) PRO/1000 Network Connection
[   42.568045] e1000e 0000:00:19.0 eth0: MAC: 10, PHY: 11, PBA No: 7041FF-0FF
[   42.576375] xhci_hcd 0000:07:00.0: xHCI Host Controller
[   42.582333] xhci_hcd 0000:07:00.0: new USB bus registered, assigned bus number 1
[   42.590349] xhci_hcd 0000:07:00.0: hcc params 0x014042cb hci version 0x96 quirks 0x0000000000000004
[   42.602536] xhci_hcd 0000:07:00.0: xHCI Host Controller
[   42.602647] ehci-pci 0000:00:1a.0: EHCI Host Controller
[   42.608342] xhci_hcd 0000:07:00.0: new USB bus registered, assigned bus number 2
[   42.613250] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 3
[   42.621064] xhci_hcd 0000:07:00.0: Host supports USB 3.0 SuperSpeed
[   42.628018] ehci-pci 0000:00:1a.0: debug port 2
[   42.629138] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.04
[   42.638445] ehci-pci 0000:00:1a.0: irq 16, io mem 0xf3328000
[   42.639756] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   42.647241] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[   42.650658] usb usb1: Product: xHCI Host Controller
[   42.650665] usb usb1: Manufacturer: Linux 6.4.0-rc1+ xhci-hcd
[   42.650669] usb usb1: SerialNumber: 0000:07:00.0
[   42.652419] hub 1-0:1.0: USB hub found
[   42.660282] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[   42.665620] hub 1-0:1.0: 2 ports detected
[   42.667343] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[   42.669920] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
[   42.675530] ata1.00: ATA-8: ST2000DM001-1CH164, CC24, max UDMA/133
[   42.678188] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 6.04
[   42.682908] ata2.00: ATAPI: PLDS DVD+/-RW DS-8A9SH, ED11, max UDMA/100
[   42.684161] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   42.694439] ata2.00: configured for UDMA/100
[   42.697217] usb usb2: Product: xHCI Host Controller
[   42.765158] usb usb2: Manufacturer: Linux 6.4.0-rc1+ xhci-hcd
[   42.771427] usb usb2: SerialNumber: 0000:07:00.0
[   42.778718] hub 2-0:1.0: USB hub found
[   42.783042] hub 2-0:1.0: 2 ports detected
[   42.788048] usbcore: registered new interface driver usb-storage
[   42.788444] usb usb3: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.04
[   42.789304] usbcore: registered new interface driver usbserial_generic
[   42.795156] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   42.795165] usb usb3: Product: EHCI Host Controller
[   42.797255] usbserial: USB Serial support registered for generic
[   42.797531] usb usb3: Manufacturer: Linux 6.4.0-rc1+ ehci_hcd
[   42.798172] i8042: PNP: PS/2 Controller [PNP0303:PS2K] at 0x60,0x64 irq 1
[   42.798587] usb usb3: SerialNumber: 0000:00:1a.0
[   42.799069] i8042: PNP: PS/2 appears to have AUX port disabled, if this is incorrect please boot with i8042.nopnp
[   42.800360] hub 3-0:1.0: USB hub found
[   42.808768] i8042: Warning: Keylock active
[   42.809209] hub 3-0:1.0: 3 ports detected
[   42.809779] serio: i8042 KBD port at 0x60,0x64 irq 1
[   42.811128] ehci-pci 0000:00:1d.0: EHCI Host Controller
[   42.817677] mousedev: PS/2 mouse device common for all mice
[   42.817762] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 4
[   42.818566] usbcore: registered new interface driver synaptics_usb
[   42.819135] ehci-pci 0000:00:1d.0: debug port 2
[   42.819757] input: PC Speaker as /devices/platform/pcspkr/input/input1
[   42.824537] ehci-pci 0000:00:1d.0: irq 17, io mem 0xf3327000
[   42.828478] rtc_cmos 00:02: RTC can wake from S4
[   42.836151] ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00
[   42.841430] rtc_cmos 00:02: registered as rtc0
[   42.847709] usb usb4: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.04
[   42.854387] rtc_cmos 00:02: setting system clock to 2023-05-31T17:20:58 UTC (1685553658)
[   42.858159] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[   42.860212] rtc_cmos 00:02: alarms up to one year, y3k, 242 bytes nvram, hpet irqs
[   42.860700] usb usb4: Product: EHCI Host Controller
[   42.861280] fail to initialize ptp_kvm
[   42.861830] usb usb4: Manufacturer: Linux 6.4.0-rc1+ ehci_hcd
[   42.874856] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[   42.878162] usb usb4: SerialNumber: 0000:00:1d.0
[   42.886364] intel_pstate: Intel P-state driver initializing
[   42.887690] hub 4-0:1.0: USB hub found
[   42.888386] ata1.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 32), AA
[   42.893302] hub 4-0:1.0: 3 ports detected
[   42.895483] ata1.00: configured for UDMA/133
[   42.899361] hid: raw HID events driver (C) Jiri Kosina
[   42.903588] scsi 0:0:0:0: Direct-Access     ATA      ST2000DM001-1CH1 CC24 PQ: 0 ANSI: 5
[   42.903975] NET: Registered PF_INET6 protocol family
[   42.905834] sd 0:0:0:0: Attached scsi generic sg0 type 0
[   42.905836] sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[   42.905844] sd 0:0:0:0: [sda] 4096-byte physical blocks
[   42.905863] sd 0:0:0:0: [sda] Write Protect is off
[   42.905865] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[   42.905894] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[   42.905935] sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
[   42.911734] Segment Routing with IPv6
[   42.924083] scsi 1:0:0:0: CD-ROM            PLDS     DVD+-RW DS-8A9SH ED11 PQ: 0 ANSI: 5
[   42.925549] In-situ OAM (IOAM) with IPv6
[   43.036891]  sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 sda8 sda9 sda10 sda11 sda12 sda13 sda14 sda15 >
[   43.037046] mip6: Mobile IPv6
[   43.040321] sd 0:0:0:0: [sda] Attached SCSI disk
[   43.042199] NET: Registered PF_PACKET protocol family
[   43.059171] usb 3-1: new high-speed USB device number 2 using ehci-pci
[   43.060717] 9pnet: Installing 9P2000 support
[   43.138146] usb 4-1: new high-speed USB device number 2 using ehci-pci
[   43.168224] microcode: Microcode Update Driver: v2.2.
[   43.168245] IPI shorthand broadcast: enabled
[   43.185467] sr 1:0:0:0: [sr0] scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray
[   43.185909] sched_clock: Marking stable (24988003896, 18197114398)->(40270383848, 2914734446)
[   43.186161] cdrom: Uniform CD-ROM driver Revision: 3.20
[   43.196244] registered taskstats version 1
[   43.196667] usb 3-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00
[   43.196677] usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[   43.197524] hub 3-1:1.0: USB hub found
[   43.197636] hub 3-1:1.0: 6 ports detected
[   43.240176] Loading compiled-in X.509 certificates
[   43.259441] sr 1:0:0:0: Attached scsi CD-ROM sr0
[   43.264879] sr 1:0:0:0: Attached scsi generic sg1 type 5
[   43.265646] printk: console [netcon0] enabled
[   43.276045] netconsole: network logging started
[   43.282842] clk: Disabling unused clocks
[   43.291169] Freeing unused decrypted memory: 2036K
[   43.298182] Freeing unused kernel image (initmem) memory: 3056K
[   43.301540] usb 4-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00
[   43.313352] usb 4-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[   43.324184] Write protecting the kernel read-only data: 20480k
[   43.328613] hub 4-1:1.0: USB hub found
[   43.335408] Freeing unused kernel image (rodata/data gap) memory: 424K
[   43.340672] hub 4-1:1.0: 8 ports detected
[   43.347007] Run /init as init process
[   43.351180]   with arguments:
[   43.354643]     /init
[   43.357428]   with environment:
[   43.361060]     HOME=/
[   43.363920]     TERM=linux
[   43.367134]     BOOT_IMAGE=/boot/vmlinuz-6.4.0-rc1+
[   44.634498] random: crng init done
[   45.164347] process '/usr/bin/fstype' started with executable stack
[   45.434574] EXT4-fs (sda7): mounted filesystem 6aef0462-c7e4-45ca-9a68-4e435300595e with ordered data mode. Quota mode: disabled.
[   47.352248] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input2
[   47.361715] ACPI: button: Power Button [PWRB]
[   47.367255] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
[   47.379434] ACPI: button: Power Button [PWRF]
[   47.449607] acpi-cpufreq: probe of acpi-cpufreq failed with error -17
[   47.617916] i801_smbus 0000:00:1f.3: enabling device (0000 -> 0003)
[   47.626336] i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
[   47.639363] i2c i2c-14: 4/4 memory slots populated (from DMI)
[   47.658951] iTCO_vendor_support: vendor-support=0
[   47.719940] iTCO_wdt iTCO_wdt.1.auto: Found a Patsburg TCO device (Version=2, TCOBASE=0x0460)
[   47.733500] iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
[   47.746305] RAPL PMU: API unit is 2^-32 Joules, 2 fixed counters, 163840 ms ovfl timer
[   47.755374] RAPL PMU: hw unit of domain pp0-core 2^-16 Joules
[   47.762241] RAPL PMU: hw unit of domain package 2^-16 Joules
[   47.813203] cryptd: max_cpu_qlen set to 1000
[   47.881441] AVX version of gcm_enc/dec engaged.
[   47.887212] AES CTR mode by8 optimization enabled
[   50.016398] EDAC MC: Ver: 3.0.0
[   50.022299] EDAC DEBUG: edac_mc_sysfs_init: device mc created
[   50.092319] EDAC DEBUG: sbridge_init: 
[   50.097607] EDAC sbridge: Seeking for: PCI ID 8086:3ca0
[   50.104497] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3ca0
[   50.113044] EDAC sbridge: Seeking for: PCI ID 8086:3ca0
[   50.121202] EDAC sbridge: Seeking for: PCI ID 8086:3ca8
[   50.127596] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3ca8
[   50.134472] EDAC sbridge: Seeking for: PCI ID 8086:3ca8
[   50.141058] EDAC sbridge: Seeking for: PCI ID 8086:3c71
[   50.147247] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3c71
[   50.154406] EDAC sbridge: Seeking for: PCI ID 8086:3c71
[   50.161081] EDAC sbridge: Seeking for: PCI ID 8086:3caa
[   50.167359] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3caa
[   50.174411] EDAC sbridge: Seeking for: PCI ID 8086:3caa
[   50.180169] EDAC sbridge: Seeking for: PCI ID 8086:3cab
[   50.185916] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cab
[   50.192971] EDAC sbridge: Seeking for: PCI ID 8086:3cab
[   50.198676] EDAC sbridge: Seeking for: PCI ID 8086:3cac
[   50.205154] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cac
[   50.211707] EDAC sbridge: Seeking for: PCI ID 8086:3cac
[   50.218146] EDAC sbridge: Seeking for: PCI ID 8086:3cad
[   50.224107] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cad
[   50.230940] EDAC sbridge: Seeking for: PCI ID 8086:3cad
[   50.236631] EDAC sbridge: Seeking for: PCI ID 8086:3cb8
[   50.243107] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cb8
[   50.249656] EDAC sbridge: Seeking for: PCI ID 8086:3cb8
[   50.255337] EDAC sbridge: Seeking for: PCI ID 8086:3cf4
[   50.261334] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cf4
[   50.267899] EDAC sbridge: Seeking for: PCI ID 8086:3cf4
[   50.273587] EDAC sbridge: Seeking for: PCI ID 8086:3cf6
[   50.279279] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cf6
[   50.285880] EDAC sbridge: Seeking for: PCI ID 8086:3cf6
[   50.291823] EDAC sbridge: Seeking for: PCI ID 8086:3cf5
[   50.297918] EDAC DEBUG: sbridge_get_onedevice: Detected 8086:3cf5
[   50.304453] EDAC sbridge: Seeking for: PCI ID 8086:3cf5
[   50.310169] EDAC DEBUG: sbridge_probe: Registering MC#0 (1 of 1)
[   50.316914] EDAC DEBUG: sbridge_register_mci: MC: mci = 00000000fc1d5f67, dev = 00000000b83d72de
[   50.326137] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3ca0, bus 63 with dev = 000000006237880f
[   50.336538] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3ca8, bus 63 with dev = 00000000feac8781
[   50.346671] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3c71, bus 63 with dev = 0000000049e1a2e9
[   50.356802] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3caa, bus 63 with dev = 00000000050d2144
[   50.367137] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cab, bus 63 with dev = 00000000816523e1
[   50.377506] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cac, bus 63 with dev = 0000000021323927
[   50.388133] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cad, bus 63 with dev = 00000000c918e29b
[   50.398223] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cb8, bus 63 with dev = 0000000082a365be
[   50.408321] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cf4, bus 63 with dev = 0000000024e1afac
[   50.418676] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cf6, bus 63 with dev = 00000000b24847b5
[   50.428820] EDAC DEBUG: sbridge_mci_bind_devs: Associated PCI 8086:3cf5, bus 63 with dev = 0000000097fcc79e
[   50.438952] EDAC DEBUG: get_dimm_config: mc#0: Node ID: 0, source ID: 0
[   50.446162] EDAC DEBUG: get_dimm_config: Memory mirroring is disabled
[   50.453181] EDAC DEBUG: get_dimm_config: Lockstep is disabled
[   50.459837] EDAC DEBUG: get_dimm_config: address map is on open page mode
[   50.466986] EDAC DEBUG: __populate_dimms: Memory is registered
[   50.473817] EDAC DEBUG: __populate_dimms: mc#0: ha 0 channel 0, dimm 0, 4096 MiB (1048576 pages) bank: 8, rank: 2, row: 0x8000, col: 0x400
[   50.486836] EDAC DEBUG: __populate_dimms: mc#0: ha 0 channel 1, dimm 0, 4096 MiB (1048576 pages) bank: 8, rank: 2, row: 0x8000, col: 0x400
[   50.499636] EDAC DEBUG: __populate_dimms: mc#0: ha 0 channel 2, dimm 0, 4096 MiB (1048576 pages) bank: 8, rank: 2, row: 0x8000, col: 0x400
[   50.512389] EDAC DEBUG: __populate_dimms: mc#0: ha 0 channel 3, dimm 0, 4096 MiB (1048576 pages) bank: 8, rank: 2, row: 0x8000, col: 0x400
[   50.525149] EDAC DEBUG: get_memory_layout: TOLM: 2.812 GB (0x00000000b3ffffff)
[   50.532968] EDAC DEBUG: get_memory_layout: TOHM: 17.312 GB (0x0000000453ffffff)
[   50.540626] EDAC DEBUG: get_memory_layout: SAD#0 DRAM up to 17.250 GB (0x0000000450000000) Interleave: [8:6] reg=0x000044c3
[   50.552318] EDAC DEBUG: get_memory_layout: SAD#0, interleave #0: 0
[   50.559608] EDAC DEBUG: get_memory_layout: TAD#0: up to 2.750 GB (0x00000000b0000000), socket interleave 1, memory interleave 4, TGT: 0, 1, 2, 3, reg=0x0002b3e4
[   50.574277] EDAC DEBUG: get_memory_layout: TAD#1: up to 17.250 GB (0x0000000450000000), socket interleave 1, memory interleave 4, TGT: 0, 1, 2, 3, reg=0x001133e4
[   50.589045] EDAC DEBUG: get_memory_layout: TAD CH#0, offset #0: 0.000 GB (0x0000000000000000), reg=0x00000000
[   50.599583] EDAC DEBUG: get_memory_layout: TAD CH#0, offset #1: 1.250 GB (0x0000000050000000), reg=0x00000500
[   50.609863] EDAC DEBUG: get_memory_layout: TAD CH#1, offset #0: 0.000 GB (0x0000000000000000), reg=0x00000000
[   50.620382] EDAC DEBUG: get_memory_layout: TAD CH#1, offset #1: 1.250 GB (0x0000000050000000), reg=0x00000500
[   50.630651] EDAC DEBUG: get_memory_layout: TAD CH#2, offset #0: 0.000 GB (0x0000000000000000), reg=0x00000000
[   50.641173] EDAC DEBUG: get_memory_layout: TAD CH#2, offset #1: 1.250 GB (0x0000000050000000), reg=0x00000500
[   50.652138] EDAC DEBUG: get_memory_layout: TAD CH#3, offset #0: 0.000 GB (0x0000000000000000), reg=0x00000000
[   50.662616] EDAC DEBUG: get_memory_layout: TAD CH#3, offset #1: 1.250 GB (0x0000000050000000), reg=0x00000500
[   50.668122] raid6: sse2x4   gen() 16567 MB/s
[   50.672887] EDAC DEBUG: get_memory_layout: CH#0 RIR#0, limit: 3.999 GB (0x00000000fff00000), way: 2, reg=0x9000000e
[   50.688328] EDAC DEBUG: get_memory_layout: CH#0 RIR#0 INTL#0, offset 0.000 GB (0x0000000000000000), tgt: 0, reg=0x00000000
[   50.695132] raid6: sse2x2   gen() 18515 MB/s
[   50.699738] EDAC DEBUG: get_memory_layout: CH#0 RIR#0 INTL#1, offset 0.000 GB (0x0000000000000000), tgt: 1, reg=0x00010000
[   50.716064] EDAC DEBUG: get_memory_layout: CH#1 RIR#0, limit: 3.999 GB (0x00000000fff00000), way: 2, reg=0x9000000e
[   50.717125] raid6: sse2x1   gen() 13790 MB/s
[   50.727383] EDAC DEBUG: get_memory_layout: CH#1 RIR#0 INTL#0, offset 0.000 GB (0x0000000000000000), tgt: 0, reg=0x00000000
[   50.731584] raid6: using algorithm sse2x2 gen() 18515 MB/s
[   50.732073] EDAC DEBUG: get_memory_layout: CH#1 RIR#0 INTL#1, offset 0.000 GB (0x0000000000000000), tgt: 1, reg=0x00010000
[   50.749122] raid6: .... xor() 9361 MB/s, rmw enabled
[   50.749425] EDAC DEBUG: get_memory_layout: CH#2 RIR#0, limit: 3.999 GB (0x00000000fff00000), way: 2, reg=0x9000000e
[   50.750140] raid6: using ssse3x2 recovery algorithm
[   50.750512] EDAC DEBUG: get_memory_layout: CH#2 RIR#0 INTL#0, offset 0.000 GB (0x0000000000000000), tgt: 0, reg=0x00000000
[   50.793914] EDAC DEBUG: get_memory_layout: CH#2 RIR#0 INTL#1, offset 0.000 GB (0x0000000000000000), tgt: 1, reg=0x00010000
[   50.805528] EDAC DEBUG: get_memory_layout: CH#3 RIR#0, limit: 3.999 GB (0x00000000fff00000), way: 2, reg=0x9000000e
[   50.805651] xor: automatically using best checksumming function   avx       
[   50.806070] EDAC DEBUG: get_memory_layout: CH#3 RIR#0 INTL#0, offset 0.000 GB (0x0000000000000000), tgt: 0, reg=0x00000000
[   50.835451] EDAC DEBUG: get_memory_layout: CH#3 RIR#0 INTL#1, offset 0.000 GB (0x0000000000000000), tgt: 1, reg=0x00010000
[   50.846975] EDAC DEBUG: edac_mc_add_mc_with_groups: 
[   50.852997] EDAC DEBUG: edac_create_sysfs_mci_device: device mc0 created
[   50.860424] EDAC DEBUG: edac_create_dimm_object: device dimm0 created at location channel 0 slot 0 
[   50.870348] EDAC DEBUG: edac_create_dimm_object: device dimm3 created at location channel 1 slot 0 
[   50.879983] EDAC DEBUG: edac_create_dimm_object: device dimm6 created at location channel 2 slot 0 
[   50.889619] EDAC DEBUG: edac_create_dimm_object: device dimm9 created at location channel 3 slot 0 
[   50.899294] EDAC DEBUG: edac_create_csrow_object: device csrow0 created
[   50.906987] EDAC MC0: Giving out device to module sb_edac controller Sandy Bridge SrcID#0_Ha#0: DEV 0000:3f:0e.0 (INTERRUPT)
[   50.918737] EDAC sbridge:  Ver: 1.1.2 
[   51.219795] Btrfs loaded, zoned=no, fsverity=no
[   51.242568] BTRFS: device label leap42.1 devid 1 transid 970 /dev/sda12 scanned by systemd-udevd (400)
[   51.365772] BTRFS: device label leap15 devid 1 transid 673 /dev/sda15 scanned by systemd-udevd (387)
[   51.388483] BTRFS: device label 12sp2 devid 1 transid 345 /dev/sda13 scanned by systemd-udevd (376)
[   51.414242] BTRFS: device label SLE12SP3 devid 1 transid 47084 /dev/sda9 scanned by systemd-udevd (394)
[   51.447800] BTRFS: device label 12sp1 devid 1 transid 268 /dev/sda14 scanned by systemd-udevd (386)
[   51.996348] e1000e 0000:00:19.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[   52.010023] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   52.273823] Adding 33554428k swap on /dev/sda5.  Priority:-2 extents:1 across:33554428k 
[   52.388663] EXT4-fs (sda7): re-mounted 6aef0462-c7e4-45ca-9a68-4e435300595e. Quota mode: disabled.
[   53.583074] EXT4-fs (sda6): mounting ext3 file system using the ext4 subsystem
[   53.869852] EXT4-fs (sda6): mounted filesystem 6b02369b-7362-4920-b703-8ba36125139f with ordered data mode. Quota mode: disabled.
[   53.940759] FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
[   53.969227] BTRFS info (device sda9): using crc32c (crc32c-intel) checksum algorithm
[   53.980221] BTRFS info (device sda9): disk space caching is enabled
[   54.715670] SGI XFS with ACLs, security attributes, quota, no debug enabled
[   54.751050] XFS (sda10): Deprecated V4 format (crc=0) will not be supported after September 2030.
[   54.780009] XFS (sda10): Mounting V4 Filesystem b62c870e-d204-498e-999b-5a0ea7c560cd
[   54.913313] XFS (sda10): Ending clean mount
[   54.920531] xfs filesystem being mounted at /mnt/kernel supports timestamps until 2038-01-19 (0x7fffffff)
[   55.121930] EXT4-fs (sda11): mounted filesystem a1428eb4-29da-4a1f-bbde-e9dc1081fb27 with ordered data mode. Quota mode: disabled.






-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Wed May 31 17:51:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 17:51:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541917.845184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Pym-0004aC-S6; Wed, 31 May 2023 17:51:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541917.845184; Wed, 31 May 2023 17:51:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4Pym-0004a5-Od; Wed, 31 May 2023 17:51:24 +0000
Received: by outflank-mailman (input) for mailman id 541917;
 Wed, 31 May 2023 17:51:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LOFA=BU=tibco.com=avallejo@srs-se1.protection.inumbo.net>)
 id 1q4Pyl-0004Zx-OD
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 17:51:23 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c1414dcc-ffdb-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 19:51:22 +0200 (CEST)
Received: by mail-wm1-x334.google.com with SMTP id
 5b1f17b1804b1-3f6042d610fso62654015e9.1
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 10:51:22 -0700 (PDT)
Received: from localhost.localdomain (default-46-102-197-194.interdsl.co.uk.
 [46.102.197.194]) by smtp.gmail.com with ESMTPSA id
 x4-20020a05600c21c400b003f0aefcc457sm25322982wmj.45.2023.05.31.10.51.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 10:51:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1414dcc-ffdb-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1685555482; x=1688147482;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=mua9IhVSN1OtRUUfoVQkh1/qlL0MpIq8e2mssvEX7SI=;
        b=RHqXNyePvzey2ogCJkNO6uqvvpqkm1iFCi3nA0zJCX2bLqpQQ0YNIg/vZ+7GbraBkS
         JzTYlZna6OQ4NsLI/mlsGI5pcWJF9xguqb2LJL69ehoqiBWIGzsMy0hsRkZO75QQ/b/f
         Pi6XctmR4dETXH1FgZJTF8npdCMS67lKzPh2Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685555482; x=1688147482;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=mua9IhVSN1OtRUUfoVQkh1/qlL0MpIq8e2mssvEX7SI=;
        b=dRW1M+NysPqRUF5cNVOaX7m4sSX55QaqwiphHvvpfvorsHhu0Ctj4JiMgcTv9TXnu7
         G87uefJi9tCbDG+eKrFS6JFyxihbn732rYzv9NJodSq7P6UP0JQlo1+F2oVJwNyqN1qB
         0ZMDELd39pU+XaYjG6kJPMkZXt93IlNGRgsiB80p6Y/cNxftBqRjzywil9FiGXQozXmk
         v/GLlWruzlGv2RGI7BDxcIVlScVgl8u1FVKc8uy8C91b5trU3kfjhXVz/rHJ7PwYXIiQ
         VX/IoZey9VdIbqA7TFI8K/socu6TXtVOrK3L7i2f9OfAtyrbw/p4EMkYu004NvDzBrCZ
         HVtA==
X-Gm-Message-State: AC+VfDzz/dtvVpv3kmzurYxWVbwaRloOkFLHr8ehg/aJ7mGqtLtVQpe2
	CkGGeGsVfrJB4owFnyxvdeWtSTx2KyWtYu+5gmM=
X-Google-Smtp-Source: ACHHUZ4sFxg4zHRjRHHQq7oKOt2KFUbEVgewfmEwDCdJXxazOJ98aQv4Rpakwovg8iEuMUJ09+bZzg==
X-Received: by 2002:a7b:cbd6:0:b0:3f6:cdf7:a741 with SMTP id n22-20020a7bcbd6000000b003f6cdf7a741mr5157859wmi.25.1685555481838;
        Wed, 31 May 2023 10:51:21 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] x86/microcode: Prevent attempting updates known to fail
Date: Wed, 31 May 2023 18:51:19 +0100
Message-Id: <20230531175119.10830-1-alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If IA32_MSR_MCU_CONTROL exists, then it's possible a CPU may be unable to
perform microcode updates. This is controlled through the DIS_MCU_LOAD bit.

This patch checks that the CPU that got the request is capable of doing an
update. If it is, then we let the procedure go through. While not enough
for the general case (different CPUs with different settings), this patch
copes with the far more common scenario of all CPUs being locked.

Note that for the uncommon general case, we already have some logic in
place to emit a message in xl dmesg in order to notify the admin that they
should reboot the machine ASAP.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
 xen/arch/x86/cpu/microcode/core.c     | 27 +++++++++++++++++++++++++++
 xen/arch/x86/include/asm/cpufeature.h |  1 +
 xen/arch/x86/include/asm/msr-index.h  |  5 +++++
 3 files changed, 33 insertions(+)

diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c
index cd456c476f..e507945932 100644
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -697,6 +697,17 @@ static long cf_check microcode_update_helper(void *data)
     return ret;
 }
 
+static bool this_cpu_can_install_update(void)
+{
+    uint64_t mcu_ctrl;
+
+    if ( !cpu_has_mcu_ctrl )
+        return true;
+
+    rdmsrl(MSR_MCU_CONTROL, mcu_ctrl);
+    return !(mcu_ctrl & MCU_CONTROL_DIS_MCU_LOAD);
+}
+
 int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
 {
     int ret;
@@ -708,6 +719,22 @@ int microcode_update(XEN_GUEST_HANDLE(const_void) buf, unsigned long len)
     if ( !ucode_ops.apply_microcode )
         return -EINVAL;
 
+    if ( !this_cpu_can_install_update() )
+    {
+        /*
+         * This CPU can't install microcode, so it makes no sense to try to
+         * go on. We're implicitly trusting firmware sanity in that all
+         * CPUs are expected to have a homogeneous setting. If, for some
+         * reason, another CPU happens to be locked down when this one
+         * isn't then unpleasantness will follow. In particular, some CPUs
+         * will be updated while others will not. A very stern message will
+         * be displayed in xl dmesg in that case, strongly advising to reboot the
+         * machine.
+         */
+        printk("WARNING: microcode not installed due to DIS_MCU_LOAD=1\n");
+        return -EACCES;
+    }
+
     buffer = xmalloc_flex_struct(struct ucode_buf, buffer, len);
     if ( !buffer )
         return -ENOMEM;
diff --git a/xen/arch/x86/include/asm/cpufeature.h b/xen/arch/x86/include/asm/cpufeature.h
index ace31e3b1f..0118171d7e 100644
--- a/xen/arch/x86/include/asm/cpufeature.h
+++ b/xen/arch/x86/include/asm/cpufeature.h
@@ -192,6 +192,7 @@ static inline bool boot_cpu_has(unsigned int feat)
 #define cpu_has_if_pschange_mc_no boot_cpu_has(X86_FEATURE_IF_PSCHANGE_MC_NO)
 #define cpu_has_tsx_ctrl        boot_cpu_has(X86_FEATURE_TSX_CTRL)
 #define cpu_has_taa_no          boot_cpu_has(X86_FEATURE_TAA_NO)
+#define cpu_has_mcu_ctrl        boot_cpu_has(X86_FEATURE_MCU_CTRL)
 #define cpu_has_fb_clear        boot_cpu_has(X86_FEATURE_FB_CLEAR)
 
 /* Synthesized. */
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 2749e433d2..5c1350b5f9 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -165,6 +165,11 @@
 #define  PASID_PASID_MASK                   0x000fffff
 #define  PASID_VALID                        (_AC(1, ULL) << 31)
 
+#define MSR_MCU_CONTROL                     0x00001406
+#define  MCU_CONTROL_LOCK                   (_AC(1, ULL) <<  0)
+#define  MCU_CONTROL_DIS_MCU_LOAD           (_AC(1, ULL) <<  1)
+#define  MCU_CONTROL_EN_SMM_BYPASS          (_AC(1, ULL) <<  2)
+
 #define MSR_UARCH_MISC_CTRL                 0x00001b01
 #define  UARCH_CTRL_DOITM                   (_AC(1, ULL) <<  0)
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 18:04:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 18:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541923.845200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4QBa-0006Cy-2y; Wed, 31 May 2023 18:04:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541923.845200; Wed, 31 May 2023 18:04:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4QBZ-0006Cr-VC; Wed, 31 May 2023 18:04:37 +0000
Received: by outflank-mailman (input) for mailman id 541923;
 Wed, 31 May 2023 18:04:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QBY-0006CP-1N; Wed, 31 May 2023 18:04:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QBX-0003Qt-Un; Wed, 31 May 2023 18:04:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QBX-0004zU-Hs; Wed, 31 May 2023 18:04:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QBX-0006iz-HQ; Wed, 31 May 2023 18:04:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=+irzMzEaAhRfhLFP4CUGcqipvAx+5mISxhQMl07Y588=; b=OeZ099wxpsSJYhW+esUdKHlYwg
	WbY3aiPL+B2htdNLGZ8UjJbvfFoc07lSpJTSUZz9gjZV7POVZYpqLj9TRcfkag6YVJAp5XK2aAcMV
	qO6znk3KfxY7zWpDV836HL3W2f9BGXHfcaWyzjp9YbKcEacbR4zz2MeEo4UvxyCZyU94=;
To: xen-devel@lists.xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-amd64
Message-Id: <E1q4QBX-0006iz-HQ@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 18:04:35 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-amd64
testid xen-build

Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  465217b0f872602b4084a1b0fa2ef75377cb3589
  Bug not present: 445fdc641e304ff41a544f8f5926a13b604c08ad
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/181050/


  commit 465217b0f872602b4084a1b0fa2ef75377cb3589
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Wed May 31 12:01:11 2023 +0200
  
      vPCI: account for hidden devices
      
      Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
      console) are associated with DomXEN, not Dom0. This means that while
      looking for overlapping BARs such devices cannot be found on Dom0's list
      of devices; DomXEN's list also needs to be scanned.
      
      Suppress vPCI init altogether for r/o devices (which constitute a subset
      of hidden ones).
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Tested-by: Stefano Stabellini <sstabellini@kernel.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build --summary-out=tmp/181050.bisection-summary --basis-template=181018 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-amd64 xen-build
Searching for failure / basis pass:
 181035 fail [host=himrod2] / 181018 [host=himrod0] 181016 [host=himrod0] 180963 ok.
Failure / basis pass flights: 181035 / 180963
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 465217b0f872602b4084a1b0fa2ef75377cb3589
Basis pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#8c51cd970509b97d8378d175646ec32889828158-8c51cd970509b97d8378d175646ec32889828158 git://xenbits.xen.org/xen.git#f54dd5b53ee516fa1d4c106e0744ce0083acfcdc-465217b0f872602b4084a1b0fa2ef75377cb3589
Loaded 5002 nodes in revision graph
Searching for test results:
 180963 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
 181016 [host=himrod0]
 181018 [host=himrod0]
 181037 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 8347d6bb29bfd0c3b5acdc078574a8643c5a5637
 181031 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 465217b0f872602b4084a1b0fa2ef75377cb3589
 181032 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
 181034 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 465217b0f872602b4084a1b0fa2ef75377cb3589
 181038 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 e66003e7be1996c9dd8daca54ba34ad5bb58d668
 181039 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 0d74fc2b2f85586ceb5672aedc79c666e529381d
 181040 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 0f80a46ffa6bfd5d111fc2e64ee5983513627e4d
 181035 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 465217b0f872602b4084a1b0fa2ef75377cb3589
 181043 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 445fdc641e304ff41a544f8f5926a13b604c08ad
 181045 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 465217b0f872602b4084a1b0fa2ef75377cb3589
 181046 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 445fdc641e304ff41a544f8f5926a13b604c08ad
 181047 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 465217b0f872602b4084a1b0fa2ef75377cb3589
 181048 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 445fdc641e304ff41a544f8f5926a13b604c08ad
 181050 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 465217b0f872602b4084a1b0fa2ef75377cb3589
Searching for interesting versions
 Result found: flight 180963 (pass), for basis pass
 For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 445fdc641e304ff41a544f8f5926a13b604c08ad, results HASH(0x55dc07d44900) HASH(0x55dc07d3b210) HASH(0x55dc07d328c8) For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 0f80a46ffa6bfd5d111fc2e64ee5983513627e4d, results HASH(0x55dc07d40448) For basis failure, parent search stopping at 3d273dd05e51e5a1\
 ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 0d74fc2b2f85586ceb5672aedc79c666e529381d, results HASH(0x55dc07d3e440) For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 e66003e7be1996c9dd8daca54ba34ad5bb58d668, results HASH(0x55dc07d3c438) For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 8347d6bb29bfd0c3b5acdc078574a8643c5a56\
 37, results HASH(0x55dc07d35df8) For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 f54dd5b53ee516fa1d4c106e0744ce0083acfcdc, results HASH(0x55dc07d32bc8) HASH(0x55dc07d3b390) Result found: flight 181031 (fail), for basis failure (at ancestor ~1498)
 Repro found: flight 181032 (pass), for basis pass
 Repro found: flight 181034 (fail), for basis failure
 0 revisions at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8c51cd970509b97d8378d175646ec32889828158 445fdc641e304ff41a544f8f5926a13b604c08ad
No revisions left to test, checking graph state.
 Result found: flight 181043 (pass), for last pass
 Result found: flight 181045 (fail), for first failure
 Repro found: flight 181046 (pass), for last pass
 Repro found: flight 181047 (fail), for first failure
 Repro found: flight 181048 (pass), for last pass
 Repro found: flight 181050 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  465217b0f872602b4084a1b0fa2ef75377cb3589
  Bug not present: 445fdc641e304ff41a544f8f5926a13b604c08ad
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/181050/


  commit 465217b0f872602b4084a1b0fa2ef75377cb3589
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Wed May 31 12:01:11 2023 +0200
  
      vPCI: account for hidden devices
      
      Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
      console) are associated with DomXEN, not Dom0. This means that while
      looking for overlapping BARs such devices cannot be found on Dom0's list
      of devices; DomXEN's list also needs to be scanned.
      
      Suppress vPCI init altogether for r/o devices (which constitute a subset
      of hidden ones).
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
      Tested-by: Stefano Stabellini <sstabellini@kernel.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
181050: tolerable ALL FAIL

flight 181050 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/181050/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   6 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed May 31 18:14:39 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 18:14:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541931.845212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4QL8-0007hu-0s; Wed, 31 May 2023 18:14:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541931.845212; Wed, 31 May 2023 18:14:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4QL7-0007hn-UR; Wed, 31 May 2023 18:14:29 +0000
Received: by outflank-mailman (input) for mailman id 541931;
 Wed, 31 May 2023 18:14:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QL6-0007hd-2O; Wed, 31 May 2023 18:14:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QL5-0003kC-Sg; Wed, 31 May 2023 18:14:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QL5-0005MW-Dz; Wed, 31 May 2023 18:14:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QL5-0002Bz-DZ; Wed, 31 May 2023 18:14:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nc0/cEN31Df9u0VnJzrkoowJpASosrl+FAMTxE9wJ90=; b=aoW+y+r0qvWndhUcjWjivjsBSk
	+v4fMRBTtuRdPGjPMZd6TK9zA8sY4hcaXd9uwWex2IySNsNo6luFngYdLpOVqSxBU1Rqd5oF90QGg
	YeOiDy1l2TCZkiWHLFQ8AYNkwOlOUnrzA8pjBrgf4bUD1jA+B3vraFS1GjstsKugSWqM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181044-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 181044: regressions - trouble: blocked/broken/pass
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:<job status>:broken:regression
    xen-unstable-smoke:build-armhf:host-build-prep:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=04f25e9048c375898430a58e1c570806896252cb
X-Osstest-Versions-That:
    xen=94200e1bae07e725cc07238c11569c5cab7befb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 18:14:27 +0000

flight 181044 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181044/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 build-armhf                   5 host-build-prep          fail REGR. vs. 181018

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  04f25e9048c375898430a58e1c570806896252cb
baseline version:
 xen                  94200e1bae07e725cc07238c11569c5cab7befb7

Last test of basis   181018  2023-05-30 20:00:24 Z    0 days
Failing since        181031  2023-05-31 11:00:27 Z    0 days    3 attempts
Testing same since   181044  2023-05-31 16:01:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bobby Eshleman <bobbyeshleman@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  broken  
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

------------------------------------------------------------
commit 04f25e9048c375898430a58e1c570806896252cb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed May 31 16:04:30 2023 +0200

    vPCI: fix test harness build
    
    The earlier commit introduced two uses of is_hardware_domain().
    
    Fixes: 465217b0f872 ("vPCI: account for hidden devices")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 7a2f0ba0d08562fc09c6dd865c6cb3468185be1f
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed May 31 16:04:12 2023 +0200

    vPCI: add test harness entry to ./MAINTAINERS
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 465217b0f872602b4084a1b0fa2ef75377cb3589
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed May 31 12:01:11 2023 +0200

    vPCI: account for hidden devices
    
    Hidden devices (e.g. an add-in PCI serial card used for Xen's serial
    console) are associated with DomXEN, not Dom0. This means that while
    looking for overlapping BARs such devices cannot be found on Dom0's list
    of devices; DomXEN's list also needs to be scanned.
    
    Suppress vPCI init altogether for r/o devices (which constitute a subset
    of hidden ones).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Tested-by: Stefano Stabellini <sstabellini@kernel.org>

commit 445fdc641e304ff41a544f8f5926a13b604c08ad
Author: Juergen Gross <jgross@suse.com>
Date:   Wed May 31 12:00:40 2023 +0200

    xen/include/public: fix 9pfs xenstore path description
    
    In xen/include/public/io/9pfs.h the name of the Xenstore backend node
    "security-model" should be "security_model", as this is how the Xen
    tools are creating it and qemu is reading it.
    
    Fixes: ad58142e73a9 ("xen/public: move xenstore related doc into 9pfs.h")
    Fixes: cf1d2d22fdfd ("docs/misc: Xen transport for 9pfs")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 0f80a46ffa6bfd5d111fc2e64ee5983513627e4d
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 12:00:13 2023 +0200

    xen/riscv: remove dummy_bss variable
    
    After the introduction of initial pagetables there is no longer
    any need for the dummy_bss variable, as the bss section will no
    longer be empty.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>

commit 0d74fc2b2f85586ceb5672aedc79c666e529381d
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 12:00:05 2023 +0200

    xen/riscv: setup initial pagetables
    
    The patch does two things:
    1. Setup initial pagetables.
    2. Enable the MMU, which ends up executing the code in
       cont_after_mmu_is_enabled().
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>

commit ec337ce2e972b70619f5a076b20910a2ff4fea7a
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 11:59:53 2023 +0200

    xen/riscv: align __bss_start
    
    The bss clearing loop requires proper alignment of __bss_start.
    
    The ALIGN(PAGE_SIZE) before "*(.bss.page_aligned)" in xen.lds.S
    was removed, as any contribution to "*(.bss.page_aligned)" has to
    specify proper alignment itself.
    
    Fixes: cfa0409f7cbb ("xen/riscv: initialize .bss section")
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>
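The alignment requirement in the commit above can be illustrated with a minimal sketch (the function name and buffer handling here are invented for illustration, not taken from the Xen sources): a word-wise .bss clearing loop only works if both boundary symbols are word-aligned, which is why __bss_start must carry proper alignment.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative word-wise clear loop, of the kind run between
 * __bss_start and __bss_end at boot: if __bss_start were not
 * word-aligned, every store in this loop would be misaligned. */
static void clear_words(uint64_t *start, uint64_t *end)
{
    while ( start < end )
        *start++ = 0;
}
```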

commit e66003e7be1996c9dd8daca54ba34ad5bb58d668
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 11:55:58 2023 +0200

    xen/riscv: introduce setup_initial_pages
    
    The idea was taken from xvisor, but the following changes
    were made:
    * Use only a minimal part of the code, enough to enable the MMU.
    * Rename the {_}setup_initial_pagetables functions.
    * Add an argument to setup_initial_mapping to allow setting
      PTE flags.
    * Update the setup_initial_pagetables function to map sections
      with the correct PTE flags.
    * Rewrite enable_mmu() in C.
    * Map the linker address range to the load address range without
      a 1:1 mapping. It will be 1:1 only in the case where
      load_start_addr is equal to linker_start_addr.
    * Add safety checks such as:
      * Xen's size is less than a page size.
      * The linker address range doesn't overlap the load address
        range.
    * Rework the macros {THIRD,SECOND,FIRST,ZEROETH}_{SHIFT,MASK}.
    * Change PTE_LEAF_DEFAULT to RW instead of RWX.
    * Remove phys_offset as it is no longer used.
    * Remove the alignment of {map, pa}_start &= XEN_PT_LEVEL_MAP_MASK(0);
      in setup_initial_mapping() as they should already be aligned.
      Add a check that {map, pa}_start are aligned.
    * Remove clear_pagetables() as the initial pagetables will be
      zeroed during bss initialization.
    * Remove __attribute__((section(".entry"))) for setup_initial_pagetables()
      as there is no such section in xen.lds.S.
    * Update the argument of pte_is_valid() to "const pte_t *p".
    * Add a check that Xen's load address is aligned to a 4k boundary.
    * Refactor setup_initial_pagetables() so it maps the linker
      address range to the load address range. Afterwards set the
      needed permissions for specific sections (such as .text,
      .rodata, etc.); otherwise RW permissions are set by default.
    * Add a function to check that the requested SATP_MODE is supported.
    
    Origin: git@github.com:xvisor/xvisor.git 9be2fdd7
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>
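One of the safety checks listed in the commit above (the linker address range must not overlap the load address range) reduces to a standard half-open interval overlap test. A hedged sketch, with names invented for illustration rather than taken from the actual patch:

```c
#include <stdbool.h>
#include <stdint.h>

/* Two half-open ranges [a_start, a_end) and [b_start, b_end) overlap
 * iff each starts before the other ends. The 1:1 case (identical
 * ranges) trivially overlaps and is handled separately by the caller. */
static bool ranges_overlap(uint64_t a_start, uint64_t a_end,
                           uint64_t b_start, uint64_t b_end)
{
    return a_start < b_end && b_start < a_end;
}
```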

commit efadb18dd58abaa0c6102e04f1c25ac94c273853
Author: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Date:   Wed May 31 11:55:46 2023 +0200

    xen/riscv: add VM space layout
    
    An explanation of why the top VA bits are ignored was also added.
    
    Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Acked-by: Bobby Eshleman <bobbyeshleman@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 31 18:24:32 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 18:24:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541939.845222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4QUb-0000pu-2C; Wed, 31 May 2023 18:24:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541939.845222; Wed, 31 May 2023 18:24:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4QUa-0000pn-VT; Wed, 31 May 2023 18:24:16 +0000
Received: by outflank-mailman (input) for mailman id 541939;
 Wed, 31 May 2023 18:24:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QUZ-0000pd-Q1; Wed, 31 May 2023 18:24:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QUZ-0003u9-Hi; Wed, 31 May 2023 18:24:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QUZ-0005tP-Bk; Wed, 31 May 2023 18:24:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4QUZ-0006ib-BH; Wed, 31 May 2023 18:24:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HM2P7vXvt/pVevWFG7CAPzcwmPWq1BFF1/zT7wCIAqQ=; b=jHDdlxwh9O/kUJc/BCiV0WnKuc
	Ap4r5pK4mKSMogxcfDP7JnWsDep3oRCpWOyAQ3y2ZeWRnKW6BQfNQtQ1Sz3vqlH7ELjtodSezxYY3
	AWXROwIcX+afmmVfVBpzBHWDyUBE0eBSXc3MRldeX2XhROor9jA3NR8mD1ND/cZ+2+F8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181023-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 181023: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    libvirt:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=9222f35dc6917f00d166be3bb69ac4e5ff8536f0
X-Osstest-Versions-That:
    libvirt=e35b5df3f561ea5678a21aa1b39f14308fc6363c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 18:24:15 +0000

flight 181023 libvirt real [real]
flight 181049 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181023/
http://logs.test-lab.xenproject.org/osstest/logs/181049/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail pass in 181049-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 180985
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 180985
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 180985
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              9222f35dc6917f00d166be3bb69ac4e5ff8536f0
baseline version:
 libvirt              e35b5df3f561ea5678a21aa1b39f14308fc6363c

Last test of basis   180985  2023-05-28 04:18:52 Z    3 days
Testing same since   181023  2023-05-31 04:21:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Privoznik <mprivozn@redhat.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   e35b5df3f5..9222f35dc6  9222f35dc6917f00d166be3bb69ac4e5ff8536f0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 31 19:51:16 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 19:51:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541952.845239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4RqS-0001WF-57; Wed, 31 May 2023 19:50:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541952.845239; Wed, 31 May 2023 19:50:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4RqS-0001W8-2B; Wed, 31 May 2023 19:50:56 +0000
Received: by outflank-mailman (input) for mailman id 541952;
 Wed, 31 May 2023 19:50:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nj2T=BU=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1q4RqQ-0001W2-I1
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 19:50:54 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 711bab0c-ffec-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 21:50:51 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 us-mta-621-5fq1aeImPn-bTRYXg3IKzw-1; Wed, 31 May 2023 15:50:45 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CA493800888;
 Wed, 31 May 2023 19:50:44 +0000 (UTC)
Received: from localhost (unknown [10.39.192.127])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 14743C154D7;
 Wed, 31 May 2023 19:50:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 711bab0c-ffec-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1685562649;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=yidvGRAT/OnP5PqG+xEcTx82SlvRwvyy1Kdgkf5/4pY=;
	b=Q7PZT/zcH91sp9tX066gyFRABstWY8y3ff0JiXFKnbJEO2pO9MJZqVVeQ864we0YD3L2B7
	sk0HnaZcNZ964WiASrU6OqAJvMSWZNOoyaDj1PEk8FOXeg1jI0QNiBGtN7YyKUmuuOKdVH
	gIieZXyJ7XUdMrlmtltQ8RquGOiPvUE=
X-MC-Unique: 5fq1aeImPn-bTRYXg3IKzw-1
Date: Wed, 31 May 2023 15:50:42 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: kwolf@redhat.com
Cc: Paolo Bonzini <pbonzini@redhat.com>, eblake@redhat.com,
	Hanna Reitz <hreitz@redhat.com>, Fam Zheng <fam@euphon.net>,
	sgarzare@redhat.com, qemu-block@nongnu.org,
	xen-devel@lists.xenproject.org, Julia Suvorova <jusual@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	Paul Durrant <paul@xen.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Aarushi Mehta <mehta.aaru20@gmail.com>, qemu-devel@nongnu.org
Subject: Re: [PATCH v3 0/6] block: add blk_io_plug_call() API
Message-ID: <20230531195042.GA1509371@fedora>
References: <20230530180959.1108766-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="RoKjmCOI6qwAyphs"
Content-Disposition: inline
In-Reply-To: <20230530180959.1108766-1-stefanha@redhat.com>
X-Scanned-By: MIMEDefang 3.1 on 10.11.54.8


--RoKjmCOI6qwAyphs
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Kevin,
Do you want to review the thread-local blk_io_plug() patch series or
should I merge it?

Thanks,
Stefan

--RoKjmCOI6qwAyphs
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmR3pRIACgkQnKSrs4Gr
c8h2Xgf/fXnY9mbKX8MwRoUNXIaProbM9HoJz8wrU/R4i3OpVIfIGHwbXa/Xpt7G
/Oyn/gXv+daEcC7yfOf8B+PzcCucvadYTcN/DvioPKz0ve4czwEjxsJnQznutoUx
kfTVC1OTkgkijEi4GpUNt83ghjVEmp6oos0ggfWSCyQZYWR6MK1/Lh5svV8viiR1
GJq+d2LBDnbC5eCk0dCqdZemAh0tuPr5nSR8edI5WUG830VgsaQnE0gzvF5IsfN5
9VxD3iP6bQHS59zn/hoOV91tqA+ohhwP/qHsUAQQLpQLo5dfH6d6fZ9Y3Y3n53HG
sLBl2KWOYj4e5Q4tSgmBoUplfPCdvQ==
=ndb5
-----END PGP SIGNATURE-----

--RoKjmCOI6qwAyphs--



From xen-devel-bounces@lists.xenproject.org Wed May 31 20:07:03 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 20:07:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541959.845255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4S5y-0003At-IL; Wed, 31 May 2023 20:06:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541959.845255; Wed, 31 May 2023 20:06:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4S5y-0003Am-FG; Wed, 31 May 2023 20:06:58 +0000
Received: by outflank-mailman (input) for mailman id 541959;
 Wed, 31 May 2023 20:06:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X43w=BU=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1q4S5w-0003Ag-Bx
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 20:06:56 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b09bd5fe-ffee-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 22:06:55 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2af278ca45eso1036251fa.1
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 13:06:54 -0700 (PDT)
Received: from [192.168.202.197] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 q10-20020a2e968a000000b002ab2184a9basm3380329lji.109.2023.05.31.13.06.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 13:06:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b09bd5fe-ffee-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685563614; x=1688155614;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=tgsrApDL9dgyD9ANjfTSA4bxrsQ0ncubH9yaAJaq4Wo=;
        b=eR+gqR77LqziY3tVuDPKoI4KRDG0iRqQWNTkTtPKW3yKPcU4BQIinwBsqPWZGQVJGJ
         wRhyBZU5hCOl/DDv2GEpXxsrxHupEl4gURmgwa12YWLVpeppD6vGnh0xE9eHMxJngCc2
         rW8cxSANyxel2SZyr+KfwMbfQws6owznZeED35QWC3eg2KeduqOH7g9/WgEF2UqkJVXL
         SOYKWwGBVolzNxuESwBTH1caU3TM/qsn8y4OG7cZZ1+mbN3h9Cpy0Dm9/yXFHjQIFrL5
         3Qbq4tPMsAXtTrQxz2R6fqlCNEqlJA/LF1zRE32IqpxOdyrrSUK+g82XIFCZAv61ThK2
         CVZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685563614; x=1688155614;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=tgsrApDL9dgyD9ANjfTSA4bxrsQ0ncubH9yaAJaq4Wo=;
        b=KvyCJ4oWLSYUwQG0AZzN2UGgsGlfEwz1eW4qdK9zFTduxiBofc7cQx2vWKCuNabzmL
         Jusd8lCl8s4L1d1ERH2yOcQvMPIM37/B34j4jtYU0aUgPfF5mY+HTBb8S0bpfyAwrhHT
         U2lBhVORpoT5nX5BOtolpJCto7Im0YjjZfntN0wfBJtyP8jAVL7lqcxFkSi5cnB2GlE9
         +fuZlTdaNfrxHIM00xg3U3vrN4i4OsCtcNv7N9s6fSHE4aUsMRcl8RRrshL67RN3JPvU
         hVLgzKf/Zm/erLI/3CXze8uflcSUFe1s0L0Y7heLFXP8eTc2Lwi2l5KWRLOdCc4TwL8P
         meNQ==
X-Gm-Message-State: AC+VfDwHKZYFQIsQEaFtxO3pjA47p1r9ZzOMC3sEKE4AGiikPipWqB0c
	O/TXrMlUMCZhH4E0xyZd4ItMxWfTAHA=
X-Google-Smtp-Source: ACHHUZ4f9Hf9kz/XyQh4O0me7whV2krYZ6u4igJyWdgzvZcf4Yh2iiOJcNDeUe2RoVit0C9DwWs30A==
X-Received: by 2002:a2e:8782:0:b0:2ab:e50:315a with SMTP id n2-20020a2e8782000000b002ab0e50315amr3431050lji.51.1685563614008;
        Wed, 31 May 2023 13:06:54 -0700 (PDT)
Message-ID: <4073258b3a3c6a0cb19843f02941d1e62e6f882d.camel@gmail.com>
Subject: Re: [PATCH v6 5/6] xen/riscv: introduce an implementation of macros
 from <asm/bug.h>
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  Stefano Stabellini <sstabellini@kernel.org>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
 <alistair.francis@wdc.com>, Connor Davis <connojdavis@gmail.com>, 
 xen-devel@lists.xenproject.org
Date: Wed, 31 May 2023 23:06:52 +0300
In-Reply-To: <92580a6f-e97a-c4a9-435c-bd95a84d4306@suse.com>
References: <cover.1685359848.git.oleksii.kurochko@gmail.com>
	 <bd2dd42c778714f25e7e98f74ff5e98eee1cd0a5.1685359848.git.oleksii.kurochko@gmail.com>
	 <92580a6f-e97a-c4a9-435c-bd95a84d4306@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.48.2 (3.48.2-1.fc38) 
MIME-Version: 1.0

On Tue, 2023-05-30 at 18:00 +0200, Jan Beulich wrote:
> > +static uint32_t read_instr(unsigned long pc)
> > +{
> > +    uint16_t instr16 = *(uint16_t *)pc;
> > +
> > +    if ( GET_INSN_LENGTH(instr16) == 2 )
> > +        return (uint32_t)instr16;
> > +    else
> > +        return *(uint32_t *)pc;
> > +}
> 
> As long as this function is only used on Xen code, it's kind of okay.
> There you/we control whether code can change behind our backs. But as
> soon as you might use this on guest code, the double read is going to
> be a problem
Will it be enough to add a comment that read_instr() should be used
only on Xen code, or is it necessary to introduce some lock?

> (I think; I wonder how hardware is supposed to deal with
> the situation: Maybe they indeed fetch in 16-bit quantities?).
I thought that it reads an amount of bytes corresponding to the
i-cache size, and then the pipeline tracks whether an instruction is
16 or 32 bits.

At least something similar is done for BOOM RISC-V CPU [1].

[1]
https://github.com/riscv-boom/riscv-boom/blob/master/docs/sections/instruction-fetch-stage.rst#id64
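For what it's worth, the double read Jan points out could in principle be avoided by fetching the instruction in 16-bit parcels, each read from memory exactly once. The sketch below is purely illustrative (the function name and structure are invented, not the posted patch); it relies on the architectural rule that the low two bits of the first parcel are 0b11 exactly for instructions of 32 bits or longer, and handles only the 16/32-bit cases:

```c
#include <stdint.h>

/* Illustrative only: a parcel with low bits != 0b11 is a 16-bit
 * compressed instruction; otherwise read one more parcel for the
 * 32-bit case. Each halfword is read exactly once, so the result is
 * always a self-consistent instruction even if memory changes
 * between parcels (longer encodings are not handled here). */
static uint32_t read_insn_once(const uint16_t *pc)
{
    uint32_t low = pc[0];                  /* first (or only) parcel */

    if ( (low & 0x3) != 0x3 )              /* compressed instruction */
        return low;

    return low | ((uint32_t)pc[1] << 16);  /* second parcel */
}
```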


From xen-devel-bounces@lists.xenproject.org Wed May 31 20:24:22 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 20:24:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541964.845265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4SMa-0005as-0Z; Wed, 31 May 2023 20:24:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541964.845265; Wed, 31 May 2023 20:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4SMZ-0005al-SU; Wed, 31 May 2023 20:24:07 +0000
Received: by outflank-mailman (input) for mailman id 541964;
 Wed, 31 May 2023 20:24:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2KTs=BU=flex--seanjc.bounces.google.com=346x3ZAYKCUY0mivrkowwotm.kwu5mv-lm3mttq010.5mvxzwrmk1.wzo@srs-se1.protection.inumbo.net>)
 id 1q4SMZ-0005af-Aq
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 20:24:07 +0000
Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com
 [2607:f8b0:4864:20::1149])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16083b4e-fff1-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 22:24:05 +0200 (CEST)
Received: by mail-yw1-x1149.google.com with SMTP id
 00721157ae682-568ab5c813eso99147b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 13:24:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16083b4e-fff1-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20221208; t=1685564643; x=1688156643;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:from:to:cc:subject:date:message-id
         :reply-to;
        bh=APftvx9keMYUt1Xk5prtdAgJw5++/nh9oJ2TjaQ+44Q=;
        b=PX/4QCBDJ9PLCDibCE4LoOf2U0DE4t/KWyS05NGIZcGzqK/qwcSjxPluWcaqwv1Stk
         rMyraQASzxyWuHCUByRiPyI4r+TKzg18DQi23LqLbo8WEM0cjd44YF28bRWYO1A5spCy
         C8uoUDWMPuMiaxHglFY6t+vlO8uFRPkrpIUD9WU6y/eI2MWpQZdCjDnBrvtPK2foPDM8
         2b0pUD4qsLCE5pd4FfnukjL1T9+czSe/1aJuv1AeB5D9vgahqEpjGWaUrfGv4vRCGUgK
         7d8Ob7qWLSyhaFm1bWvZd46hX/2rCmhkQhiUf9f78KTYzvmJ6/SxSLGkVtXf5wgn2kFJ
         IIHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685564643; x=1688156643;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=APftvx9keMYUt1Xk5prtdAgJw5++/nh9oJ2TjaQ+44Q=;
        b=alpC5BEYLdv4RY3S49od0hk2qsjnte1GUmAmCrMrPNNjFM7S78HUAJ8OoD4+yTd9iP
         7cZStlrM8cBKVUF+gfQ3Nq96Cu36pnfr514ppmrDyMpCO/n9Re3aKOziQXuciQ1qcoAv
         eSs/l1JzXkaaH9Q4IskU224r6dShKZ6rFTYtg8yKpNvTbOQLgIFTbazVuHiN1/9+Ag9N
         NeFIVNeDdMGuxMLL17Q7pKcEqkvJpgU/yPxqZeZvc6nPtAtTdEkMFnF71KDznzSTp8e3
         GfYCgFYLNZ/bc88tZ6JKSkS/md3PLheiI9GWEX2YdA56looOcQ9DciUMQX1Jw2e8qcma
         WxOg==
X-Gm-Message-State: AC+VfDw4Tbs8ApZli5sB4gh51VjlGsi7bmtKH5Znr+0iCryMC/0eEF2T
	B5jmTow6TpCLEZ5dRuAu2khkuTWi9s0=
X-Google-Smtp-Source: ACHHUZ4Rs9okzQlJSZpcwd3Z5dIegCtWutSFaW8eVk3gk4ZcP95GwA04TMkwmyC49mplI5lk+ocRWvQTqeQ=
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a81:e305:0:b0:55d:ea61:d8e9 with SMTP id
 q5-20020a81e305000000b0055dea61d8e9mr4086306ywl.7.1685564643426; Wed, 31 May
 2023 13:24:03 -0700 (PDT)
Date: Wed, 31 May 2023 13:24:01 -0700
In-Reply-To: <fd1dd8bcc172093ad20243ac1e7bb8fce45b38ef.camel@intel.com>
Mime-Version: 1.0
References: <20230505152046.6575-1-mic@digikod.net> <93726a7b9498ec66db21c5792079996d5fed5453.camel@intel.com>
 <facfd178-3157-80b4-243b-a5c8dabadbfb@digikod.net> <58a803f6-c3de-3362-673f-767767a43f9c@digikod.net>
 <fd1dd8bcc172093ad20243ac1e7bb8fce45b38ef.camel@intel.com>
Message-ID: <ZHes4a73Zg+6JuFB@google.com>
Subject: Re: [RFC PATCH v1 0/9] Hypervisor-Enforced Kernel Integrity
From: Sean Christopherson <seanjc@google.com>
To: Rick P Edgecombe <rick.p.edgecombe@intel.com>
Cc: "mic@digikod.net" <mic@digikod.net>, 
	"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>, "bp@alien8.de" <bp@alien8.de>, 
	"keescook@chromium.org" <keescook@chromium.org>, "hpa@zytor.com" <hpa@zytor.com>, 
	"mingo@redhat.com" <mingo@redhat.com>, "tglx@linutronix.de" <tglx@linutronix.de>, 
	"pbonzini@redhat.com" <pbonzini@redhat.com>, "wanpengli@tencent.com" <wanpengli@tencent.com>, 
	"vkuznets@redhat.com" <vkuznets@redhat.com>, "kvm@vger.kernel.org" <kvm@vger.kernel.org>, 
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>, "liran.alon@oracle.com" <liran.alon@oracle.com>, 
	"marian.c.rotariu@gmail.com" <marian.c.rotariu@gmail.com>, Alexander Graf <graf@amazon.com>, 
	John S Andersen <john.s.andersen@intel.com>, 
	"madvenka@linux.microsoft.com" <madvenka@linux.microsoft.com>, 
	"ssicleru@bitdefender.com" <ssicleru@bitdefender.com>, "yuanyu@google.com" <yuanyu@google.com>, 
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, 
	"tgopinath@microsoft.com" <tgopinath@microsoft.com>, 
	"jamorris@linux.microsoft.com" <jamorris@linux.microsoft.com>, 
	"linux-security-module@vger.kernel.org" <linux-security-module@vger.kernel.org>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "will@kernel.org" <will@kernel.org>, 
	"dev@lists.cloudhypervisor.org" <dev@lists.cloudhypervisor.org>, 
	"mdontu@bitdefender.com" <mdontu@bitdefender.com>, 
	"linux-hardening@vger.kernel.org" <linux-hardening@vger.kernel.org>, 
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>, 
	"virtualization@lists.linux-foundation.org" <virtualization@lists.linux-foundation.org>, 
	"nicu.citu@icloud.com" <nicu.citu@icloud.com>, "ztarkhani@microsoft.com" <ztarkhani@microsoft.com>, 
	"x86@kernel.org" <x86@kernel.org>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

On Tue, May 30, 2023, Rick P Edgecombe wrote:
> On Fri, 2023-05-26 at 17:22 +0200, Mickaël Salaün wrote:
> > > > Can the guest kernel ask the host VMM's emulated devices to DMA into
> > > > the protected data? It should go through the host userspace mappings I
> > > > think, which don't care about EPT permissions. Or did I miss where you
> > > > are protecting that another way? There are a lot of easy ways to ask
> > > > the host to write to guest memory that don't involve the EPT. You
> > > > probably need to protect the host userspace mappings, and also the
> > > > places in KVM that kmap a GPA provided by the guest.
> > >
> > > Good point, I'll check this confused deputy attack. Extended KVM
> > > protections should indeed handle all ways to map guests' memory.  I'm
> > > wondering if current VMMs would gracefully handle such new restrictions
> > > though.
> >
> > I guess the host could map arbitrary data to the guest, so that needs to
> > be handled, but how could the VMM (not the host kernel) bypass/update the
> > EPT initially used for the guest (and potentially later mapped to the host)?
>
> Well, traditionally both QEMU and KVM accessed guest memory via host
> mappings instead of the EPT. So I'm wondering what is stopping the
> guest from passing a protected gfn when setting up the DMA, and QEMU
> being enticed to write to it? The emulator as well would use these host
> userspace mappings and not consult the EPT, IIRC.
>
> I think Sean was suggesting host userspace should be more involved in
> this process, so perhaps it could protect its own alias of the
> protected memory, for example mprotect() it as read-only.

Ya, though "suggesting" is really "demanding, unless someone provides super
strong justification for handling this directly in KVM".  It's basically the
same argument that led to Linux Security Modules: I'm all for KVM providing
the framework and plumbing, but I don't want KVM to get involved in defining
policy, threat models, etc.


From xen-devel-bounces@lists.xenproject.org Wed May 31 20:32:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 20:32:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541969.845274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4SUQ-00073l-Nd; Wed, 31 May 2023 20:32:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541969.845274; Wed, 31 May 2023 20:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4SUQ-00073e-L4; Wed, 31 May 2023 20:32:14 +0000
Received: by outflank-mailman (input) for mailman id 541969;
 Wed, 31 May 2023 20:32:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LP7+=BU=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1q4SUP-00073Y-Ea
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 20:32:13 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20600.outbound.protection.outlook.com
 [2a01:111:f400:fe5b::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 373023d8-fff2-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 22:32:10 +0200 (CEST)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by CH2PR12MB4037.namprd12.prod.outlook.com (2603:10b6:610:7a::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6455.22; Wed, 31 May
 2023 20:32:04 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::246d:4776:b460:9277]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::246d:4776:b460:9277%5]) with mapi id 15.20.6455.020; Wed, 31 May 2023
 20:32:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 373023d8-fff2-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J3PImYiGzkXjfUBlaarJR69Fpjsz+2+nQQYUU8e4oBmm/L37lB6I1AKIjdPrI6DCc+CK342d1gnKdRAYawBgALEQtco0qOBaopm1K+CvufeegN8UpEgMH7J/DtkYYCBCApT8Bjl9JMT0nphNr2IryoiUJ2HCub1cWFVCoTXwr4HNX1nFRNXgYCDmefOuuz2Amlxgt0LX6DrZFlk3xSOHBRAczIp/bvvP7waESEfNp5B5zFChKP2IEp5KzCIvZTrp4TOykdjzxaoRG5PHfrxnExrlmFlbfXW9eAnpyCWXCnNNGu6FhVhPxPEJ7MCtONTYwbjpB6bO31wuP5To4TVCIw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oO0AMN45eB6SUi2xKnMcHd4wNF11b/l4mcCntD3psbo=;
 b=AlQIgDi4SIo8FRmZ5/ZIrcC6GQ2HYvDQC9Cb7SR5MZOYDVatwONZCv+Egp0U5aXXPfb55581ZPLLDShydL5/RUtbpuObxygDvoaB52Gl5kU7YgTP05ip2MbBw3sqxLerR97TGUeX4HNKwn/zf+40c1Dp+fSseJ0traXG9AcBPgLGS8Ad083d9e+6RQpmAy0FiAuVU793CuDeMqGTOp71Irf2DgIFeH0IE/qqj4CEiUw5NHIujhmUaSq9ne8E0Fe39w+VwZMSpDgflYATHQ+M6pn8UNvY0GFqYnYFrxHLq8C1PriLIfTOGUHXGnvcFr40eDDzvI+iRRCcgRB+ERT4yA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oO0AMN45eB6SUi2xKnMcHd4wNF11b/l4mcCntD3psbo=;
 b=1JnBHLg4lX5YA2CosixGDaRgDblLFaLNekXZhKkEN4Jqyhck5cbYiUbfszyXrrLzRkNFv1SjU5LvB7YSqm7RetnepEmtzY8jTeSedgzPiUHTPHLRYyJhfwYF571MV8RR0gFThn0TabVosTCiLM+vNvaI0DX0Rol0wy7E1BoIQ9Q=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <f854bfc4-1f3c-19df-ba22-89c8859cfe6d@amd.com>
Date: Wed, 31 May 2023 13:32:01 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [XEN][PATCH v6 02/19] common/device_tree: handle memory
 allocation failure in __unflatten_device_tree()
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, Julien Grall <julien@xen.org>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-3-vikram.garhwal@amd.com>
 <57937e19-e038-b36a-73aa-c2a95de7e525@amd.com>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <57937e19-e038-b36a-73aa-c2a95de7e525@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SJ0PR03CA0179.namprd03.prod.outlook.com
 (2603:10b6:a03:338::34) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|CH2PR12MB4037:EE_
X-MS-Office365-Filtering-Correlation-Id: e54924e0-061e-44c1-bc4b-08db62161837
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e54924e0-061e-44c1-bc4b-08db62161837
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 20:32:04.0826
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: AsKjGTf5lO5+ilWUUrr9yY//0Od2Wqc0/a9MDzkaewY2DyxJ+mippWTu/Fi/e5oBejW+jTn+VO5Y3er4EVVMHw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4037

Hi Michal,

On 5/5/23 2:38 AM, Michal Orzel wrote:
> On 03/05/2023 01:36, Vikram Garhwal wrote:
>>
>> Change __unflatten_device_tree() return type to integer so it can propagate
>> memory allocation failure. Add panic() in dt_unflatten_host_device_tree() for
>> memory allocation failure during boot.
>>
>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
> I think we are missing a Fixes tag.
Like the below line?
Fixes: fb97eb6 ("xen/arm: Create a hierarchical device tree")

Original patch for your reference: 
https://github.com/xen-project/xen/commit/fb97eb614acfbcc812098bbbe5dde99271fe0a0d

Regards,
Vikram

>> ---
>>   xen/common/device_tree.c | 13 ++++++++++---
>>   1 file changed, 10 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
>> index 5f7ae45304..fc38a0b3dd 100644
>> --- a/xen/common/device_tree.c
>> +++ b/xen/common/device_tree.c
>> @@ -2056,8 +2056,8 @@ static unsigned long unflatten_dt_node(const void *fdt,
>>    * @fdt: The fdt to expand
>>    * @mynodes: The device_node tree created by the call
>>    */
>> -static void __init __unflatten_device_tree(const void *fdt,
>> -                                           struct dt_device_node **mynodes)
>> +static int __init __unflatten_device_tree(const void *fdt,
>> +                                          struct dt_device_node **mynodes)
>>   {
>>       unsigned long start, mem, size;
>>       struct dt_device_node **allnextp = mynodes;
>> @@ -2078,6 +2078,8 @@ static void __init __unflatten_device_tree(const void *fdt,
>>
>>       /* Allocate memory for the expanded device tree */
>>       mem = (unsigned long)_xmalloc (size + 4, __alignof__(struct dt_device_node));
>> +    if ( !mem )
>> +        return -ENOMEM;
>>
>>       ((__be32 *)mem)[size / 4] = cpu_to_be32(0xdeadbeef);
>>
>> @@ -2095,6 +2097,8 @@ static void __init __unflatten_device_tree(const void *fdt,
>>       *allnextp = NULL;
>>
>>       dt_dprintk(" <- unflatten_device_tree()\n");
>> +
>> +    return 0;
>>   }
>>
>>   static void dt_alias_add(struct dt_alias_prop *ap,
>> @@ -2179,7 +2183,10 @@ dt_find_interrupt_controller(const struct dt_device_match *matches)
>>
>>   void __init dt_unflatten_host_device_tree(void)
>>   {
>> -    __unflatten_device_tree(device_tree_flattened, &dt_host);
>> +    int error = __unflatten_device_tree(device_tree_flattened, &dt_host);
> NIT: there should be a blank line between definitions and rest of the code
>
>> +    if ( error )
>> +        panic("__unflatten_device_tree failed with error %d\n", error);
>> +
>>       dt_alias_scan();
>>   }
>>
>> --
>> 2.17.1
>>
>>
> FWICS, patches 2 and 4 are not strictly related to DTBO; they are fixing issues
> and propagating errors, which is always good. Therefore, by moving them to the
> start of the series, they could be merged right away, reducing the number of
> patches to review. At the moment they can't be, because patch 3, placed
> in-between, is strictly related to the series.
>
> @julien?
>
> ~Michal
>



From xen-devel-bounces@lists.xenproject.org Wed May 31 20:36:12 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 20:36:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541975.845284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4SY9-0007ik-C5; Wed, 31 May 2023 20:36:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541975.845284; Wed, 31 May 2023 20:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4SY9-0007id-92; Wed, 31 May 2023 20:36:05 +0000
Received: by outflank-mailman (input) for mailman id 541975;
 Wed, 31 May 2023 20:36:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4SY7-0007iT-RR; Wed, 31 May 2023 20:36:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4SY7-0006jD-HY; Wed, 31 May 2023 20:36:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4SY7-0002vm-7F; Wed, 31 May 2023 20:36:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4SY7-0004IN-6n; Wed, 31 May 2023 20:36:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nz8fisi7/zhVqzFLhNjzN6uvE0XNaMpfcvcf62hibEI=; b=ijAOZ9EbhJmgQvIphJxK90sOqF
	9isM0e46GKXKIpkSl7zm3pw4JDcO+NzRvgFD2LzZvxk6R+F2F2ahs2NPKjSMVUvvvrHm2SE4I0WAd
	Pnh5bYXXI3sJRuin6F4SRWYTz6c6w4kCQKV+MW37M1n5YBRZMl5q14rsVfjyyHEhLa+I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181041-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 181041: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-armhf:<job status>:broken:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:leak-check/check/src_host:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:leak-check/check/dst_host:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:leak-check/check:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:leak-check/check:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:leak-check/check:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:leak-check/check:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:leak-check/check/src_host:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:leak-check/check/dst_host:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:leak-check/check:fail:regression
    qemu-mainline:build-armhf:host-build-prep:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:leak-check/check:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-vhd:leak-check/check:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-raw:leak-check/check:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:leak-check/check:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=51bdb0b57a2d9e84d6915fbae7b5d76c8820cf3c
X-Osstest-Versions-That:
    qemuu=6972ef1440a9d685482d78672620a7482f2bd09a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 20:36:03 +0000

flight 181041 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181041/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                     <job status>                 broken
 test-amd64-amd64-libvirt-pair 30 leak-check/check/src_host fail REGR. vs. 180691
 test-amd64-amd64-libvirt-pair 31 leak-check/check/dst_host fail REGR. vs. 180691
 test-amd64-i386-libvirt      23 leak-check/check         fail REGR. vs. 180691
 test-amd64-amd64-libvirt-xsm 23 leak-check/check         fail REGR. vs. 180691
 build-arm64-xsm               6 xen-build                fail REGR. vs. 180691
 build-arm64                   6 xen-build                fail REGR. vs. 180691
 test-amd64-amd64-libvirt     23 leak-check/check         fail REGR. vs. 180691
 test-amd64-i386-libvirt-xsm  23 leak-check/check         fail REGR. vs. 180691
 test-amd64-i386-libvirt-pair 30 leak-check/check/src_host fail REGR. vs. 180691
 test-amd64-i386-libvirt-pair 31 leak-check/check/dst_host fail REGR. vs. 180691
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 21 leak-check/check fail REGR. vs. 180691
 build-armhf                   5 host-build-prep          fail REGR. vs. 180691
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 21 leak-check/check fail REGR. vs. 180691
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 180691
 test-amd64-i386-xl-vhd       24 leak-check/check         fail REGR. vs. 180691
 test-amd64-i386-libvirt-raw  22 leak-check/check         fail REGR. vs. 180691
 test-amd64-amd64-xl-qcow2    24 leak-check/check         fail REGR. vs. 180691

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-vhd       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 180691
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 180691
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 180691
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 180691
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 180691
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                51bdb0b57a2d9e84d6915fbae7b5d76c8820cf3c
baseline version:
 qemuu                6972ef1440a9d685482d78672620a7482f2bd09a

Last test of basis   180691  2023-05-17 10:45:22 Z   14 days
Failing since        180699  2023-05-18 07:21:24 Z   13 days   51 attempts
Testing same since   181041  2023-05-31 15:02:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Afonso Bordado <afonsobordado@gmail.com>
  Akihiko Odaki <akihiko.odaki@daynix.com>
  Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Graf <graf@amazon.com>
  Alistair Francis <alistair.francis@wdc.com>
  Ani Sinha <ani@anisinha.ca>
  Ani Sinha <anisinha@redhat.com>
  Anton Johansson <anjo@rev.ng>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bernhard Beschow <shentey@gmail.com>
  Bin Meng <bin.meng@windriver.com>
  Brian Cain <bcain@quicinc.com>
  Brice Goglin <Brice.Goglin@inria.fr>
  Camilla Conte <cconte@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Clément Chigot <chigot@adacore.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Cédric Le Goater <clg@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniil Kovalev <dkovalev@compiler-toolchain-for.me>
  David Hildenbrand <david@redhat.com>
  Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Enze Li <lienze@kylinos.cn>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric DeVolder <eric.devolder@oracle.com>
  Erico Nunes <ernunes@redhat.com>
  Eugenio Pérez <eperezma@redhat.com>
  Fabiano Rosas <farosas@suse.de>
  Fan Ni <fan.ni@samsung.com>
  Fiona Ebner <f.ebner@proxmox.com>
  Francisco Iglesias <frasse.iglesias@gmail.com>
  Gavin Shan <gshan@redhat.com>
  Gregory Price <gourry.memverge@gmail.com>
  Gregory Price <gregory.price@memverge.com>
  Hao Zeng <zenghao@kylinos.cn>
  Hawkins Jiawei <yin31149@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Ira Weiny <ira.weiny@intel.com>
  Jason Wang <jasowang@redhat.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  John Snow <jsnow@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Juan Quintela <quintela@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Lei Yang <leiyang@redhat.com>
  Leonardo Bras <leobras@redhat.com>
  Maksim Davydov <davydov-max@yandex-team.ru>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Marco Liebel <quic_mliebel@quicinc.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Mostafa Saleh <smostafa@google.com>
  Nicholas Piggin <npiggin@gmail.com>
  Nicolas Saenz Julienne <nsaenz@amazon.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daudé <philmd@linaro.org>
  Raghu H <raghuhack78@gmail.com>
  Rene Engel <ReneEngel80@emailn.de>
  Richard Henderson <richard.henderson@linaro.org>
  Richard Purdie <richard.purdie@linuxfoundation.org>
  Ricky Zhou <ricky@rzhou.org>
  Ryan Wendland <wendland@live.com.au>
  Sebastian Ott <sebott@redhat.com>
  Sergio Lopez <slp@redhat.com>
  Sid Manning <sidneym@quicinc.com>
  Song Gao <gaosong@loongson.cn>
  Stefan Hajnoczi <stefanha@redhat.com>
  Steve Sistare <steven.sistare@oracle.com>
  Taylor Simpson <tsimpson@quicinc.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Weißschuh <thomas@t-8ch.de>
  timothee.cocault@gmail.com <timothee.cocault@gmail.com>
  Timothée Cocault <timothee.cocault@gmail.com>
  Tommy Wu <tommy.wu@sifive.com>
  Viktor Prutyanov <viktor@daynix.com>
  Vitaly Cheptsov <cheptsov@ispras.ru>
  Vivek Kasireddy <vivek.kasireddy@intel.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Volker Rümelin <vr_qemu@t-online.de>
  Xinyu Li <lixinyu20s@ict.ac.cn>
  Zeng Hao <zenghao@kylinos.cn>
  Zhenyu Zhang <zhenyzha@redhat.com>
  Zhenzhong Duan <zhenzhong.duan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  broken  
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               blocked 
 test-amd64-amd64-xl-qcow2                                    fail    
 test-arm64-arm64-libvirt-raw                                 blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-libvirt-raw                                  fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-arm64-arm64-xl-vhd                                      blocked 
 test-armhf-armhf-xl-vhd                                      blocked 
 test-amd64-i386-xl-vhd                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-armhf broken

Not pushing.

(No revision log; it would be 10808 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 31 21:19:25 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:19:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.541986.845306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TDo-0003k1-PD; Wed, 31 May 2023 21:19:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 541986.845306; Wed, 31 May 2023 21:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TDo-0003ju-MT; Wed, 31 May 2023 21:19:08 +0000
Received: by outflank-mailman (input) for mailman id 541986;
 Wed, 31 May 2023 21:19:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4TDm-0003jk-V2; Wed, 31 May 2023 21:19:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4TDm-0007lc-L9; Wed, 31 May 2023 21:19:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4TDl-0004GL-TE; Wed, 31 May 2023 21:19:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4TDl-0003C6-Sl; Wed, 31 May 2023 21:19:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=od7p3H64ArN/+HRanIo/fY5YXB4s7xdWma/eq2GFsz4=; b=1pqc8eX7SZUYqB8iuRV1QTCR0H
	tX3fQivPrUZADAQAjgWJZNA6mTnlX/kvidBpqnBiTmQ7N+pDuGhzz3gyDfcalm3FOMBmMn/BX/tyd
	nq9hLVKYXcf14QV2fmwrHu5rrIesIq7h1gCexm4KoAmmCZZ0MkFqAjT7tkufQ1G8d7ts=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181027-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 181027: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:debian-fixup:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=94200e1bae07e725cc07238c11569c5cab7befb7
X-Osstest-Versions-That:
    xen=f54dd5b53ee516fa1d4c106e0744ce0083acfcdc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 21:19:05 +0000

flight 181027 xen-unstable real [real]
flight 181056 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/181027/
http://logs.test-lab.xenproject.org/osstest/logs/181056/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-amd 13 debian-fixup       fail pass in 181056-retest
 test-amd64-amd64-libvirt-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail pass in 181056-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail like 180976
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 181007
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 181007
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 181007
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 181007
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 181007
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 181007
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check   fail like 181007
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 181007
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 181007
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 181007
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 181007
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 181007
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-raw  14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  94200e1bae07e725cc07238c11569c5cab7befb7
baseline version:
 xen                  f54dd5b53ee516fa1d4c106e0744ce0083acfcdc

Last test of basis   181007  2023-05-30 01:53:56 Z    1 days
Failing since        181019  2023-05-30 20:07:02 Z    1 days    2 attempts
Testing same since   181027  2023-05-31 08:39:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alejandro Vallejo <alejandro.vallejo@cloud.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Cyril Rébert <slack@rabbit.lu>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Luca Fancellu <luca.fancellu@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stewart Hildebrand <stewart.hildebrand@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-i386-examine-bios                                 pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-libvirt-raw                                  pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-i386-examine-uefi                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-arm64-arm64-xl-vhd                                      pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-i386-xl-vhd                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f54dd5b53e..94200e1bae  94200e1bae07e725cc07238c11569c5cab7befb7 -> master


From xen-devel-bounces@lists.xenproject.org Wed May 31 21:30:46 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:30:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542005.845361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TOz-0006cY-Hc; Wed, 31 May 2023 21:30:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542005.845361; Wed, 31 May 2023 21:30:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TOz-0006cR-EQ; Wed, 31 May 2023 21:30:41 +0000
Received: by outflank-mailman (input) for mailman id 542005;
 Wed, 31 May 2023 21:30:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LWp+=BU=kernel.org=helgaas@srs-se1.protection.inumbo.net>)
 id 1q4TOy-0006cL-Ei
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:30:40 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 61267d00-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:30:37 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id A2F996117B;
 Wed, 31 May 2023 21:30:30 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AE7E1C433D2;
 Wed, 31 May 2023 21:30:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61267d00-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685568630;
	bh=gW9waI0b8IiYUwk0vfOCM7pXk2nDw1QSAznIy/h3ghs=;
	h=Date:From:To:Cc:Subject:In-Reply-To:From;
	b=KtBbCN/ixHzj8ZulEY/5zCSpe/VVKBpSZFDZuNCSDCoNNZ4IeoA+28lWLQWvSlii6
	 83BDJpZxFh6HJCKpfPSOLONuS2M6Vumh+ceOlzxWjyrmYHupNvr4DztdjDLJl3iqmN
	 E3araGx+1PyOqST2zluz35yOcgzUDjix8glDLTAN/wUxtojSsJeinZtS0BNCLJVOXb
	 hkXJLKswE11TqxiL8vaPqm931ggJJ71GILWlC61yGzVhgB56FeGO9+L1ZryEUY7qlP
	 EPZxdEvOEEyvpu7VdWJnY4Z4bFlhgqbJcQ6tWlmZqEjiu2TPM90LCU6E74dCFHn04m
	 J5aGwHVGTvHag==
Date: Wed, 31 May 2023 16:30:28 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: Jonas Gorski <jonas.gorski@gmail.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Krzysztof =?utf-8?Q?Wilczy=C5=84ski?= <kw@linux.com>,
	linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
	=?iso-8859-1?Q?Micka=EBl_Sala=FCn?= <mic@digikod.net>,
	Rich Felker <dalias@libc.org>, linux-sh@vger.kernel.org,
	Dominik Brodowski <linux@dominikbrodowski.net>,
	Andrew Lunn <andrew@lunn.ch>, sparclinux@vger.kernel.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Gregory Clement <gregory.clement@bootlin.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Russell King <linux@armlinux.org.uk>, linux-acpi@vger.kernel.org,
	Miguel Ojeda <ojeda@kernel.org>, xen-devel@lists.xenproject.org,
	Matt Turner <mattst88@gmail.com>,
	Anatolij Gustschin <agust@denx.de>,
	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Niklas Schnelle <schnelle@linux.ibm.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	linux-arm-kernel@lists.infradead.org,
	Juergen Gross <jgross@suse.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@linaro.org>,
	linuxppc-dev@lists.ozlabs.org, Randy Dunlap <rdunlap@infradead.org>,
	linux-mips@vger.kernel.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	linux-alpha@vger.kernel.org,
	Pali =?iso-8859-1?Q?Roh=E1r?= <pali@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	"Maciej W. Rozycki" <macro@orcam.me.uk>
Subject: Re: [PATCH v8 0/7] Add pci_dev_for_each_resource() helper and update
 users
Message-ID: <ZHe8dKb3f392MfBO@bhelgaas>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAOiHx==5YWhDiZP2PyHZiJrmtqRzvqCqoSO59RwuYuR85BezBg@mail.gmail.com>

On Wed, May 31, 2023 at 08:48:35PM +0200, Jonas Gorski wrote:
> ...

> Looking at the code I understand where coverity is coming from:
> 
> #define __pci_dev_for_each_res0(dev, res, ...)                         \
>        for (unsigned int __b = 0;                                      \
>             res = pci_resource_n(dev, __b), __b < PCI_NUM_RESOURCES;   \
>             __b++)
> 
> res will be assigned before __b is checked for being less than
> PCI_NUM_RESOURCES, making it point one element past the end of the
> array on the final loop test.
> 
> Rewriting the test expression as
> 
> __b < PCI_NUM_RESOURCES && (res = pci_resource_n(dev, __b));
> 
> should avoid the (coverity) warning by making use of lazy evaluation.
> 
> It probably makes the code slightly less performant, as res will now
> also be checked for being non-NULL (which will always be true), but I
> doubt it will be significant (or show up in any hot paths).

Thanks a lot for looking into this!  I think you're right, and I think
the rewritten expression is more logical as well.  Do you want to post
a patch for it?

Bjorn


From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542008.845371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPV-0006zd-Ro; Wed, 31 May 2023 21:31:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542008.845371; Wed, 31 May 2023 21:31:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPV-0006zW-NA; Wed, 31 May 2023 21:31:13 +0000
Received: by outflank-mailman (input) for mailman id 542008;
 Wed, 31 May 2023 21:31:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPV-0006zB-2p
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:13 +0000
Received: from mail-yw1-x1130.google.com (mail-yw1-x1130.google.com
 [2607:f8b0:4864:20::1130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 75d6fa66-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:31:10 +0200 (CEST)
Received: by mail-yw1-x1130.google.com with SMTP id
 00721157ae682-565e6beb7aaso732537b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:10 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75d6fa66-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568670; x=1688160670;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=Wjb7mkxds2Oxu0tDA//mCadvcwpJnafWf0cD82pJKQE=;
        b=nYeBH5p3+Qt6FUJ38UmFIxO7emaaMGST1TXgppBSGlquoGi0Vu88EMjm0R/stdK090
         rAkGmKWr2iOmn9u+Pb/kM3qRzmk4mZo0E/qCXj+6fXwb5xT2cyvDopKHOf2mbUaUTVkW
         IZJWedT+tZ8ojnhcwbCb2MsICPKO3NlZEBtylSK1ToQ7f9kqjkh5/U5nwImpNeor9qu5
         IqgnJLTK4zE+DQC6rph9ulLyMVpd89EilRZG10pjBGo7K+adbGLVnpAlHyiTcC9wvrmA
         MF9mV19ie8UWuPHO/B2dxtGuMvy+vvJd/sRqVo6b1JEZ5k0xW/bO1/+Ig0OmF8ty3QFr
         I20Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568670; x=1688160670;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=Wjb7mkxds2Oxu0tDA//mCadvcwpJnafWf0cD82pJKQE=;
        b=eKo0Kjvt0Ft7MNcsCxRdPmYjfUutf6PfTzLTai3z0cTLMeUKT3/aP5DwtsKxr/jlli
         h0y8pxZqwcDX2q+LMAgXvu0Rfzcc9jMWuMbyrI6otjvXwBmreP44hz0PpNgW09zvTdkv
         09T1esOJrBsYVRJpQqDgeUbxQBBKJREDJy27gze+9kTecVautP54suLXyE4zY4RO3F+Q
         HNjaYdrbJ+XZMyHptRd7uLSf2WtH4Kb4eZZZv1Rx0NrY4gRys8TFoGZJCsH8LZk1FGCr
         GAneJSyllYTl/mKVx1WIGvEAtkwwJOqWq5nHLaDw5Nm1Gf6ppTMGwxIWJtZodOiowFfR
         d4HA==
X-Gm-Message-State: AC+VfDwQVV2dRL0W7p2gjJy/Iko+uWNZiptNa316zLUk3rd72PPlHZKH
	mDQCDnYFn/1gVvMI+wSFBIg=
X-Google-Smtp-Source: ACHHUZ5BOuY9ns1UExUd8Gqiz8IGTSQZuNKoj5s1mEJ0QIzwi0ShIzlk0w6m3uPiq99qwG8NM2Rw1w==
X-Received: by 2002:a0d:df51:0:b0:55a:3560:8ee0 with SMTP id i78-20020a0ddf51000000b0055a35608ee0mr6881175ywe.20.1685568669613;
        Wed, 31 May 2023 14:31:09 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Dinh Nguyen <dinguyen@kernel.org>,
	Jonas Bonn <jonas@southpole.se>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	"David S. Miller" <davem@davemloft.net>,
	Richard Weinberger <richard@nod.at>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Arnd Bergmann <arnd@arndb.de>
Subject: [PATCH v3 00/34] Split ptdesc from struct page
Date: Wed, 31 May 2023 14:29:58 -0700
Message-Id: <20230531213032.25338-1-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The MM subsystem is trying to shrink struct page. This patchset
introduces a memory descriptor for page table tracking - struct ptdesc.

This patchset introduces ptdesc, splits ptdesc from struct page, and
converts many callers of the page table constructors/destructors to use
ptdescs.

Ptdesc is a foundation to further standardize page tables, and eventually
allow for dynamic allocation of page tables independent of struct page.
However, the use of pages for page table tracking is quite deeply
ingrained and varied across architectures, so there is still a lot of
work to be done before that can happen.

This is rebased on next-20230531.

v3:
  Picked up an Acked-by
  Fixed the arm64 compilation issue
  Renamed some ptdesc utility functions to pagetable_* instead
  Added comments to functions describing their uses

Vishal Moola (Oracle) (34):
  mm: Add PAGE_TYPE_OP folio functions
  s390: Use _pt_s390_gaddr for gmap address tracking
  s390: Use pt_frag_refcount for pagetables
  pgtable: Create struct ptdesc
  mm: add utility functions for ptdesc
  mm: Convert pmd_pgtable_page() to pmd_ptdesc()
  mm: Convert ptlock_alloc() to use ptdescs
  mm: Convert ptlock_ptr() to use ptdescs
  mm: Convert pmd_ptlock_init() to use ptdescs
  mm: Convert ptlock_init() to use ptdescs
  mm: Convert pmd_ptlock_free() to use ptdescs
  mm: Convert ptlock_free() to use ptdescs
  mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
  powerpc: Convert various functions to use ptdescs
  x86: Convert various functions to use ptdescs
  s390: Convert various gmap functions to use ptdescs
  s390: Convert various pgalloc functions to use ptdescs
  mm: Remove page table members from struct page
  pgalloc: Convert various functions to use ptdescs
  arm: Convert various functions to use ptdescs
  arm64: Convert various functions to use ptdescs
  csky: Convert __pte_free_tlb() to use ptdescs
  hexagon: Convert __pte_free_tlb() to use ptdescs
  loongarch: Convert various functions to use ptdescs
  m68k: Convert various functions to use ptdescs
  mips: Convert various functions to use ptdescs
  nios2: Convert __pte_free_tlb() to use ptdescs
  openrisc: Convert __pte_free_tlb() to use ptdescs
  riscv: Convert alloc_{pmd, pte}_late() to use ptdescs
  sh: Convert pte_free_tlb() to use ptdescs
  sparc64: Convert various functions to use ptdescs
  sparc: Convert pgtable_pte_page_{ctor, dtor}() to ptdesc equivalents
  um: Convert {pmd, pte}_free_tlb() to use ptdescs
  mm: Remove pgtable_{pmd, pte}_page_{ctor, dtor}() wrappers

 Documentation/mm/split_page_table_lock.rst    |  12 +-
 .../zh_CN/mm/split_page_table_lock.rst        |  14 +-
 arch/arm/include/asm/tlb.h                    |  12 +-
 arch/arm/mm/mmu.c                             |   6 +-
 arch/arm64/include/asm/tlb.h                  |  14 +-
 arch/arm64/mm/mmu.c                           |   7 +-
 arch/csky/include/asm/pgalloc.h               |   4 +-
 arch/hexagon/include/asm/pgalloc.h            |   8 +-
 arch/loongarch/include/asm/pgalloc.h          |  27 ++-
 arch/loongarch/mm/pgtable.c                   |   7 +-
 arch/m68k/include/asm/mcf_pgalloc.h           |  41 ++--
 arch/m68k/include/asm/sun3_pgalloc.h          |   8 +-
 arch/m68k/mm/motorola.c                       |   4 +-
 arch/mips/include/asm/pgalloc.h               |  31 +--
 arch/mips/mm/pgtable.c                        |   7 +-
 arch/nios2/include/asm/pgalloc.h              |   8 +-
 arch/openrisc/include/asm/pgalloc.h           |   8 +-
 arch/powerpc/mm/book3s64/mmu_context.c        |  10 +-
 arch/powerpc/mm/book3s64/pgtable.c            |  32 +--
 arch/powerpc/mm/pgtable-frag.c                |  46 ++--
 arch/riscv/include/asm/pgalloc.h              |   8 +-
 arch/riscv/mm/init.c                          |  16 +-
 arch/s390/include/asm/pgalloc.h               |   4 +-
 arch/s390/include/asm/tlb.h                   |   4 +-
 arch/s390/mm/gmap.c                           | 222 +++++++++++-------
 arch/s390/mm/pgalloc.c                        | 126 +++++-----
 arch/sh/include/asm/pgalloc.h                 |   9 +-
 arch/sparc/mm/init_64.c                       |  17 +-
 arch/sparc/mm/srmmu.c                         |   5 +-
 arch/um/include/asm/pgalloc.h                 |  18 +-
 arch/x86/mm/pgtable.c                         |  46 ++--
 arch/x86/xen/mmu_pv.c                         |   2 +-
 include/asm-generic/pgalloc.h                 |  62 +++--
 include/asm-generic/tlb.h                     |  11 +
 include/linux/mm.h                            | 155 ++++++++----
 include/linux/mm_types.h                      |  14 --
 include/linux/page-flags.h                    |  20 +-
 include/linux/pgtable.h                       |  61 +++++
 mm/memory.c                                   |   8 +-
 39 files changed, 665 insertions(+), 449 deletions(-)

-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542009.845381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPX-0007F0-2V; Wed, 31 May 2023 21:31:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542009.845381; Wed, 31 May 2023 21:31:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPW-0007Er-V5; Wed, 31 May 2023 21:31:14 +0000
Received: by outflank-mailman (input) for mailman id 542009;
 Wed, 31 May 2023 21:31:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPV-0006xu-K2
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:13 +0000
Received: from mail-yw1-x1132.google.com (mail-yw1-x1132.google.com
 [2607:f8b0:4864:20::1132])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 76ea81ac-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:12 +0200 (CEST)
Received: by mail-yw1-x1132.google.com with SMTP id
 00721157ae682-568900c331aso733787b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:12 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76ea81ac-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568671; x=1688160671;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bX969DJmm0QSNKJIBN0sjsA5M6jg25Ht5D0PDUGbnFg=;
        b=P7FbSxrmAPv3mGtl9KCze7nT8iCFSFWQSZd60mX+cJFZkb27WYcyDR38HUvSUNwOfA
         WyleVMjeIqtLEEwRc2YnbZ7txjSr2kC1Bu6eSjk/V4zQ7iY1ikjarw2unChyDBFaqyHl
         qrkXBVTcGnA2v9yMUpvE/svfRz2vIL9d0Nbx7Bw2sQnHGn9MhsASWeYBsHvbspQGObju
         1ipMDLV38bpj6HIH1n6AHSjFxILtRbDR5cBiuIg4YWigaXfk1Rbe6tGcUyluemp1MpHz
         Go7YaG8Vo7KSrA3KfxzMvgw5BO+v7vQIsvpOcy7enjsShHfHLHc+oYi1vHe+rSK3PBLh
         b6cg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568671; x=1688160671;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=bX969DJmm0QSNKJIBN0sjsA5M6jg25Ht5D0PDUGbnFg=;
        b=BFAiwLovTVwb+JP/HHeOttuQOvaMfn72A5nGX6HO51I+QaY2XCTvgW4NFCpFkd1Zac
         8hILcmx2sH4uWwi+oZEVeOMFoHuD1QYJFOdDN2nRtofenTWJU8kR+HfQrztPYO8W/efV
         8SeNGwzvpHxXkq51k9oDGw9F9NShKoJcZfuIe4AILaAU6h3WAQSbRn3yQkbN7hZvK6Dd
         wDYAnGck0w3QkLEqrxhWip2d4lLFqjqa8EA3JbjgVOpyAkMC4vMZOJBdUuGb9d87q07h
         cBmpzcTHl9gNfHEvplTGoNpyWPWsDBxFHC59skmkJzvA83WP67dJqjewjy3HavBeURed
         MvOg==
X-Gm-Message-State: AC+VfDz7h4XivDCS2xMYI7X6bmchshPOYcSvDd+P083/z5EjVotPXSrZ
	cbb31i6+MhJ5iXZp62emIsI=
X-Google-Smtp-Source: ACHHUZ49VjuEJEQYDlxCOib/9Ri8bRDKMaFSJKO9JbdYMtejkSn38dasbkcATHh4yJ4tcm7dGKQETg==
X-Received: by 2002:a0d:e7c3:0:b0:565:a081:a925 with SMTP id q186-20020a0de7c3000000b00565a081a925mr7102903ywe.29.1685568671501;
        Wed, 31 May 2023 14:31:11 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 01/34] mm: Add PAGE_TYPE_OP folio functions
Date: Wed, 31 May 2023 14:29:59 -0700
Message-Id: <20230531213032.25338-2-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

No folio equivalents for page type operations have been defined, so
define them for later folio conversions.

Also change the Page##uname macros to take a const struct page *, since
we only read the memory here.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/page-flags.h | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 92a2063a0a23..e99a616b9bcd 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -908,6 +908,8 @@ static inline bool is_page_hwpoison(struct page *page)
 
 #define PageType(page, flag)						\
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
+#define folio_test_type(folio, flag)					\
+	((folio->page.page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
 
 static inline int page_type_has_type(unsigned int page_type)
 {
@@ -920,20 +922,34 @@ static inline int page_has_type(struct page *page)
 }
 
 #define PAGE_TYPE_OPS(uname, lname)					\
-static __always_inline int Page##uname(struct page *page)		\
+static __always_inline int Page##uname(const struct page *page)		\
 {									\
 	return PageType(page, PG_##lname);				\
 }									\
+static __always_inline int folio_test_##lname(const struct folio *folio)\
+{									\
+	return folio_test_type(folio, PG_##lname);			\
+}									\
 static __always_inline void __SetPage##uname(struct page *page)		\
 {									\
 	VM_BUG_ON_PAGE(!PageType(page, 0), page);			\
 	page->page_type &= ~PG_##lname;					\
 }									\
+static __always_inline void __folio_set_##lname(struct folio *folio)	\
+{									\
+	VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio);		\
+	folio->page.page_type &= ~PG_##lname;				\
+}									\
 static __always_inline void __ClearPage##uname(struct page *page)	\
 {									\
 	VM_BUG_ON_PAGE(!Page##uname(page), page);			\
 	page->page_type |= PG_##lname;					\
-}
+}									\
+static __always_inline void __folio_clear_##lname(struct folio *folio)	\
+{									\
+	VM_BUG_ON_FOLIO(!folio_test_##lname(folio), folio);		\
+	folio->page.page_type |= PG_##lname;				\
+}									\
 
 /*
  * PageBuddy() indicates that the page is free and in the buddy system
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542010.845391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPZ-0007X1-AH; Wed, 31 May 2023 21:31:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542010.845391; Wed, 31 May 2023 21:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPZ-0007Wk-6r; Wed, 31 May 2023 21:31:17 +0000
Received: by outflank-mailman (input) for mailman id 542010;
 Wed, 31 May 2023 21:31:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPX-0006xu-Ki
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:15 +0000
Received: from mail-yw1-x112a.google.com (mail-yw1-x112a.google.com
 [2607:f8b0:4864:20::112a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7826ea95-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:14 +0200 (CEST)
Received: by mail-yw1-x112a.google.com with SMTP id
 00721157ae682-5689335d2b6so710547b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:14 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7826ea95-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568673; x=1688160673;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=UyShpFpgVR6i39SmTUNSXyIOGqdGil6rpZQyINgvaNc=;
        b=f7BrGYQnWDGWjfOB31IyWypGIm3sBSR5GrTxoSkB0QeDnrAwpQMBLgz7x4dss7x6Kx
         z8I+j44YnjDGYTGStotDdtxsa/waxh6ZUSCH9KN770hNpMjA5qx26THJhAKA84XsHIjg
         d+FA15tJG3X7p4Dd/zl9mNqFRBR4ZfIxhRyfWuEXj4DIKjNUhORsqh8m4YkFTh1vwyAw
         iyd+M4+f1yeXh77RBvmJ2epMefw6jWv7Z4UtGXoYcPC17GRuLf30ygXQ6kXFWrAWSjot
         BuS0sa6bTNmUL+CA5lHvLCEjjuA11WZtCvGqambv0ftqLgXPYZ2IrVgUzrzFrI+OwUke
         NJBg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568673; x=1688160673;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=UyShpFpgVR6i39SmTUNSXyIOGqdGil6rpZQyINgvaNc=;
        b=TDq9F35Mq9qqesfevdM+cgmVaoa9NLr1NeevdCOP4pdMLMDOKLocy0ZivbHCgtZBwv
         Qym/ytbGu137WUcztuFRzo302gTDtuTDAOkZ2ayB5HTckyzwHxQHCN4QXZwy5HvZaVeG
         6vv9rRSVM0gVPhYRMBS3H8Y2QUPPXmTf6ZRuFjbwoKLy6MHb3Oswm4vbLE03zeiGyumE
         P+sazI+SWbq19LUhxzJRisd9mgjImYDLDtPzYPh411yfO+cIBhQWyMTChfPlNr30uPT6
         BbA8ujXSvoSvU7fTwHa3LRVc3+iK91MViX7vMuyl74940pGzdhvVOF1ODPCnMqJD/18N
         qcXw==
X-Gm-Message-State: AC+VfDzqGDdG4Bi3AvxQVLw/2xSM1xlnz0pwBJ+/fzliGCScLeXC6/K9
	r+nFOAE5n178ouSZNElpRfs=
X-Google-Smtp-Source: ACHHUZ6mRQVnU5urUM106lddsvyZ49x5l1bfNIRf6BQNoX3kHicchX9XznOVWCanWsbzCT3HXivh6Q==
X-Received: by 2002:a81:86c1:0:b0:559:e235:5f65 with SMTP id w184-20020a8186c1000000b00559e2355f65mr7058986ywf.37.1685568673506;
        Wed, 31 May 2023 14:31:13 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>
Subject: [PATCH v3 02/34] s390: Use _pt_s390_gaddr for gmap address tracking
Date: Wed, 31 May 2023 14:30:00 -0700
Message-Id: <20230531213032.25338-3-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

s390 uses page->index to keep track of page tables for the guest address
space. In an attempt to consolidate the usage of page fields in s390,
rename _pt_pad_2 to _pt_s390_gaddr and use it in place of page->index in
gmap.

This will help with the splitting of struct ptdesc from struct page, and
will also allow s390 to use pt_frag_refcount for fragmented page table
tracking.

Since page->_pt_s390_gaddr aliases with page->mapping, ensure it is set
to NULL before freeing the pages as well.

This also reverts commit 7e25de77bc5ea ("s390/mm: use pmd_pgtable_page()
helper in __gmap_segment_gaddr()"), which had s390 use
pmd_pgtable_page() to get a gmap page table; pmd_pgtable_page() should
be reserved for generic process page tables.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/gmap.c      | 56 +++++++++++++++++++++++++++-------------
 include/linux/mm_types.h |  2 +-
 2 files changed, 39 insertions(+), 19 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index dc90d1eb0d55..81c683426b49 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -70,7 +70,7 @@ static struct gmap *gmap_alloc(unsigned long limit)
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		goto out_free;
-	page->index = 0;
+	page->_pt_s390_gaddr = 0;
 	list_add(&page->lru, &gmap->crst_list);
 	table = page_to_virt(page);
 	crst_table_init(table, etype);
@@ -187,16 +187,20 @@ static void gmap_free(struct gmap *gmap)
 	if (!(gmap_is_shadow(gmap) && gmap->removed))
 		gmap_flush_tlb(gmap);
 	/* Free all segment & region tables. */
-	list_for_each_entry_safe(page, next, &gmap->crst_list, lru)
+	list_for_each_entry_safe(page, next, &gmap->crst_list, lru) {
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
+	}
 	gmap_radix_tree_free(&gmap->guest_to_host);
 	gmap_radix_tree_free(&gmap->host_to_guest);
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
 		/* Free all page tables. */
-		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
+		list_for_each_entry_safe(page, next, &gmap->pt_list, lru) {
+			page->_pt_s390_gaddr = 0;
 			page_table_free_pgste(page);
+		}
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
 		gmap_put(gmap->parent);
@@ -318,12 +322,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 		list_add(&page->lru, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
-		page->index = gaddr;
+		page->_pt_s390_gaddr = gaddr;
 		page = NULL;
 	}
 	spin_unlock(&gmap->guest_table_lock);
-	if (page)
+	if (page) {
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
+	}
 	return 0;
 }
 
@@ -336,12 +342,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 static unsigned long __gmap_segment_gaddr(unsigned long *entry)
 {
 	struct page *page;
-	unsigned long offset;
+	unsigned long offset, mask;
 
 	offset = (unsigned long) entry / sizeof(unsigned long);
 	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
-	page = pmd_pgtable_page((pmd_t *) entry);
-	return page->index + offset;
+	mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
+	page = virt_to_page((void *)((unsigned long) entry & mask));
+
+	return page->_pt_s390_gaddr + offset;
 }
 
 /**
@@ -1351,6 +1359,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	/* Free page table */
 	page = phys_to_page(pgt);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	page_table_free_pgste(page);
 }
 
@@ -1379,6 +1388,7 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		/* Free page table */
 		page = phys_to_page(pgt);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		page_table_free_pgste(page);
 	}
 }
@@ -1409,6 +1419,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	/* Free segment table */
 	page = phys_to_page(sgt);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1437,6 +1448,7 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 		/* Free segment table */
 		page = phys_to_page(sgt);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1467,6 +1479,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	/* Free region 3 table */
 	page = phys_to_page(r3t);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1495,6 +1508,7 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 		/* Free region 3 table */
 		page = phys_to_page(r3t);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1525,6 +1539,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	/* Free region 2 table */
 	page = phys_to_page(r2t);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }
 
@@ -1557,6 +1572,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 		/* Free region 2 table */
 		page = phys_to_page(r2t);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1762,9 +1778,9 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = r2t & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_r2t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1814,6 +1830,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -1846,9 +1863,9 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = r3t & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_r3t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1898,6 +1915,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -1930,9 +1948,9 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = sgt & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_sgt = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1982,6 +2000,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -2014,9 +2033,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
 		/* Shadow page tables are full pages (pte+pgste) */
 		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE;
+		*pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
 		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE);
+		*fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
 		rc = 0;
 	} else  {
 		rc = -EAGAIN;
@@ -2054,9 +2073,9 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	page = page_table_alloc_pgste(sg->mm);
 	if (!page)
 		return -ENOMEM;
-	page->index = pgt & _SEGMENT_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_pgt = page_to_phys(page);
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
@@ -2101,6 +2120,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	page_table_free_pgste(page);
 	return rc;
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 306a3d1a0fa6..6161fe1ae5b8 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -144,7 +144,7 @@ struct page {
 		struct {	/* Page table pages */
 			unsigned long _pt_pad_1;	/* compound_head */
 			pgtable_t pmd_huge_pte; /* protected by page->ptl */
-			unsigned long _pt_pad_2;	/* mapping */
+			unsigned long _pt_s390_gaddr;	/* mapping */
 			union {
 				struct mm_struct *pt_mm; /* x86 pgds only */
 				atomic_t pt_frag_refcount; /* powerpc */
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542011.845396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPZ-0007eB-S3; Wed, 31 May 2023 21:31:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542011.845396; Wed, 31 May 2023 21:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPZ-0007d6-Nr; Wed, 31 May 2023 21:31:17 +0000
Received: by outflank-mailman (input) for mailman id 542011;
 Wed, 31 May 2023 21:31:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPY-0006xu-Qd
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:16 +0000
Received: from mail-yw1-x112a.google.com (mail-yw1-x112a.google.com
 [2607:f8b0:4864:20::112a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7969a384-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:16 +0200 (CEST)
Received: by mail-yw1-x112a.google.com with SMTP id
 00721157ae682-5689335d2b6so710907b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:16 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7969a384-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568676; x=1688160676;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=V5BTNYEJrMvEdgPHE9qghL8qlsXVOst+iBLMKLVFrSI=;
        b=ZOrjRpztCykn5S2UQynBNzBa3MWmWRXtFOidHPb8ljWg9idLGbr9+vMldhGKYgTruN
         1934LTH9c2Wh9xXG+tZ6quTSzUJuhszO9LCf/AYBJsPXuQjHo4TfwO3yqkVoV4JnI7mh
         lK7Zjtt0YSLV9vHzTQP3W+nJ0auBkrTwfRrAwKmblLPqRjLTgj5BxN/zgR0xbn+upf0o
         apfiugG50HJ8VEu2RIDT+46U7OJf8q3uoGBfoRDouzvlyJiG8pcaDr4mVE+vzfndDM01
         /Jv+1rxOJiA9ppMDbAjm8+j8FmFsNys3enhgZaJmy9pp0lvroiQaVp9oa4sPbWaYs1n7
         Zxdg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568676; x=1688160676;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=V5BTNYEJrMvEdgPHE9qghL8qlsXVOst+iBLMKLVFrSI=;
        b=begzDfcsW6XbMI+NOAXapE0XWTrMHXoABiOFotMQysGRmEOhF07uwGUEED+u9LZe4h
         YjKIyO1eTCCG+UY9HEEOHMF9kYSxAzOQLvLYJE8KCBpXPLeuAwV86BlM6BXi6s/Gx2VN
         KepIh0xs+JLYAfYDslSnhkMnd95/hC9rKzpfbd8i30nsosHq85I6sfgHeI4Ba/Mwh2X5
         ikmSpO9GAdKhNBO6JKjNEKZT+lJ3TuGfUxe5ww53W7xitoUKJCOGZjtwb/z/P9lGRLuD
         otHt1IZrBhn1PlTRtAzhN1k2VYoA7AIh2FR78T4TIJXbd144YRJshfR+xEQlLjNTs6mx
         YJ4w==
X-Gm-Message-State: AC+VfDzzqu8J5iz9PMOb7CWN+1KZtfSUFSFP5sVKBcUPhtfp6KcRT6Th
	gd1vCdXuLkcq1vU65vaj4HQ=
X-Google-Smtp-Source: ACHHUZ7A4AW0g1EJo54eGQeBH+bfqFv0MB7wzzUtb0w1cODVt+jOCsqaj2lzyysVEFaa+9MnJMXYag==
X-Received: by 2002:a0d:df81:0:b0:561:bd01:9ff with SMTP id i123-20020a0ddf81000000b00561bd0109ffmr7412356ywe.28.1685568675532;
        Wed, 31 May 2023 14:31:15 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>
Subject: [PATCH v3 03/34] s390: Use pt_frag_refcount for pagetables
Date: Wed, 31 May 2023 14:30:01 -0700
Message-Id: <20230531213032.25338-4-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

s390 currently uses the _refcount field to identify fragmented page
tables. The page table struct already has a member pt_frag_refcount,
used by powerpc, so have s390 use that instead of _refcount. This
improves safety for both _refcount and the page table tracking.

This also lets us simplify the tracking, since we can once again use the
lower byte of pt_frag_refcount instead of the upper byte of _refcount.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/pgalloc.c | 38 +++++++++++++++-----------------------
 1 file changed, 15 insertions(+), 23 deletions(-)

diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 66ab68db9842..6b99932abc66 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -182,20 +182,17 @@ void page_table_free_pgste(struct page *page)
  * As follows from the above, no unallocated or fully allocated parent
  * pages are contained in mm_context_t::pgtable_list.
  *
- * The upper byte (bits 24-31) of the parent page _refcount is used
+ * The lower byte (bits 0-7) of the parent page pt_frag_refcount is used
  * for tracking contained 2KB-pgtables and has the following format:
  *
  *   PP  AA
- * 01234567    upper byte (bits 24-31) of struct page::_refcount
+ * 01234567    lower byte (bits 0-7) of struct page::pt_frag_refcount
  *   ||  ||
  *   ||  |+--- upper 2KB-pgtable is allocated
  *   ||  +---- lower 2KB-pgtable is allocated
  *   |+------- upper 2KB-pgtable is pending for removal
  *   +-------- lower 2KB-pgtable is pending for removal
  *
- * (See commit 620b4e903179 ("s390: use _refcount for pgtables") on why
- * using _refcount is possible).
- *
  * When 2KB-pgtable is allocated the corresponding AA bit is set to 1.
  * The parent page is either:
  *   - added to mm_context_t::pgtable_list in case the second half of the
@@ -243,11 +240,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		if (!list_empty(&mm->context.pgtable_list)) {
 			page = list_first_entry(&mm->context.pgtable_list,
 						struct page, lru);
-			mask = atomic_read(&page->_refcount) >> 24;
+			mask = atomic_read(&page->pt_frag_refcount);
 			/*
 			 * The pending removal bits must also be checked.
 			 * Failure to do so might lead to an impossible
-			 * value of (i.e 0x13 or 0x23) written to _refcount.
+			 * value (i.e. 0x13 or 0x23) written to
+			 * pt_frag_refcount.
 			 * Such values violate the assumption that pending and
 			 * allocation bits are mutually exclusive, and the rest
 			 * of the code unrails as result. That could lead to
@@ -259,8 +257,8 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->_refcount,
-							0x01U << (bit + 24));
+				atomic_xor_bits(&page->pt_frag_refcount,
+							0x01U << bit);
 				list_del(&page->lru);
 			}
 		}
@@ -281,12 +279,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 	table = (unsigned long *) page_to_virt(page);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_xor_bits(&page->_refcount, 0x03U << 24);
+		atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_xor_bits(&page->_refcount, 0x01U << 24);
+		atomic_xor_bits(&page->pt_frag_refcount, 0x01U);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
 		list_add(&page->lru, &mm->context.pgtable_list);
@@ -323,22 +321,19 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		 * will happen outside of the critical section from this
 		 * function or from __tlb_remove_table()
 		 */
-		mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
 		if (mask & 0x03U)
 			list_add(&page->lru, &mm->context.pgtable_list);
 		else
 			list_del(&page->lru);
 		spin_unlock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit);
 		if (mask != 0x00U)
 			return;
 		half = 0x01U << bit;
 	} else {
 		half = 0x03U;
-		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
 	}
 
 	page_table_release_check(page, table, half, mask);
@@ -368,8 +363,7 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	 * outside of the critical section from __tlb_remove_table() or from
 	 * page_table_free()
 	 */
-	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
-	mask >>= 24;
+	mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
 	if (mask & 0x03U)
 		list_add_tail(&page->lru, &mm->context.pgtable_list);
 	else
@@ -391,14 +385,12 @@ void __tlb_remove_table(void *_table)
 		return;
 	case 0x01U:	/* lower 2K of a 4K page table */
 	case 0x02U:	/* higher 2K of a 4K page table */
-		mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24));
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4);
 		if (mask != 0x00U)
 			return;
 		break;
 	case 0x03U:	/* 4K page table with pgstes */
-		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
-		mask >>= 24;
+		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
 		break;
 	}
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:20 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542012.845411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPc-00086R-5C; Wed, 31 May 2023 21:31:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542012.845411; Wed, 31 May 2023 21:31:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPc-00086H-1V; Wed, 31 May 2023 21:31:20 +0000
Received: by outflank-mailman (input) for mailman id 542012;
 Wed, 31 May 2023 21:31:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPb-0006zB-C5
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:19 +0000
Received: from mail-yw1-x1130.google.com (mail-yw1-x1130.google.com
 [2607:f8b0:4864:20::1130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a65048a-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:31:17 +0200 (CEST)
Received: by mail-yw1-x1130.google.com with SMTP id
 00721157ae682-565e6beb7aaso734007b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:17 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a65048a-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568677; x=1688160677;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/2bE9CDG9Oxw3pEA2fSuUshC33VMwoQ2Xz6qScjQL3E=;
        b=iYqqA4KZmoroGYhynV/jiNkWzAMHN4TlFNnsupyXM0HztFgnmGpbYhflGvCI4PnTEH
         qsb+x4h+Ra/DHJZpmJ+cDpCt0WFO8DN9RsHpGbhgd8UZD1C59aXCIdDYpdL/DDlqjtME
         QAC1ynKbnXEL2BUoeiDKJEOTEmjHPJwx20Oocm0d9VYWRam4vpslVsanNLIqZtenI7il
         z8Y4AX5Qo3hIFiKRtt14OfPeMVmgXXgQiLO70vbxlL1MgUQYvu/5R1IEb6sabFTUiQkk
         WKmmoKw4Cu+qJUco9/5c4L88Blpl8Ohrj5Udd50ZlUMrgKFkg8PwQhuutxVbJdY53ahQ
         RhOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568677; x=1688160677;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/2bE9CDG9Oxw3pEA2fSuUshC33VMwoQ2Xz6qScjQL3E=;
        b=kH3K8ygNTZQ+KVREVrArW3x8eVmSYpgBmu+pSzc6stbiryLJuvlabnxCJ+knhW/rJR
         DfIff7uziJTIGmYdJvXHG+uKU26VE0X2f/dYc22O+Ko3WdTWaaVoeO8fCdpOrJ/83QNP
         HrZmnJMcuh3tjpWo0hy683TQxxv9EjhZ/c/Uw2T8VBehNw0b7tESKXJwf+ZKS3DFLRja
         dY6Ci8IEAwFahLXMjs7Ip0hco7TFz9GsuH3GeVVJPc5mntUjXu8MfZfx8sLh6sgNOZf9
         fc6ItnCQwDyVWf+W0zer8yDD4Z7PFCyF9ivSRKZa7YxeLt/lQ40zhgLyyaV05wJVvns2
         xYGw==
X-Gm-Message-State: AC+VfDyxvml26+BSAovMc4/Pbcpq8gkCcYVr3u29bG4wqeNJcjwUlfGj
	t98LTeYlzv10ivx5d2JCXmk=
X-Google-Smtp-Source: ACHHUZ6pTbgk5oakG8rCTNTHmfkc+dDnZpqxrm9LdrARtP0jWcbiwYUD1VVJhesMtUI+Yz5eprOqiQ==
X-Received: by 2002:a0d:d816:0:b0:565:bb04:53fa with SMTP id a22-20020a0dd816000000b00565bb0453famr7385514ywe.10.1685568677391;
        Wed, 31 May 2023 14:31:17 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 04/34] pgtable: Create struct ptdesc
Date: Wed, 31 May 2023 14:30:02 -0700
Message-Id: <20230531213032.25338-5-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, page table information is stored within struct page. As part
of simplifying struct page, create struct ptdesc to hold that
information instead.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/pgtable.h | 52 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index c5a51481bbb9..c997e9878969 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -975,6 +975,58 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
 #endif /* __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION */
 #endif /* CONFIG_MMU */
 
+
+/**
+ * struct ptdesc - Memory descriptor for page tables.
+ * @__page_flags: Same as page flags. Unused for page tables.
+ * @pt_list: List of used page tables. Used for s390 and x86.
+ * @_pt_pad_1: Padding that aliases with page's compound head.
+ * @pmd_huge_pte: Protected by ptdesc->ptl, used for THPs.
+ * @_pt_s390_gaddr: Aliases with page's mapping. Used for s390 gmap only.
+ * @pt_mm: Used for x86 pgds.
+ * @pt_frag_refcount: For fragmented page table tracking. Powerpc and s390 only.
+ * @ptl: Lock for the page table.
+ *
+ * This struct overlays struct page for now. Do not modify without a good
+ * understanding of the issues.
+ */
+struct ptdesc {
+	unsigned long __page_flags;
+
+	union {
+		struct list_head pt_list;
+		struct {
+			unsigned long _pt_pad_1;
+			pgtable_t pmd_huge_pte;
+		};
+	};
+	unsigned long _pt_s390_gaddr;
+
+	union {
+		struct mm_struct *pt_mm;
+		atomic_t pt_frag_refcount;
+		unsigned long index;
+	};
+
+#if ALLOC_SPLIT_PTLOCKS
+	spinlock_t *ptl;
+#else
+	spinlock_t ptl;
+#endif
+};
+
+#define TABLE_MATCH(pg, pt)						\
+	static_assert(offsetof(struct page, pg) == offsetof(struct ptdesc, pt))
+TABLE_MATCH(flags, __page_flags);
+TABLE_MATCH(compound_head, pt_list);
+TABLE_MATCH(compound_head, _pt_pad_1);
+TABLE_MATCH(pmd_huge_pte, pmd_huge_pte);
+TABLE_MATCH(mapping, _pt_s390_gaddr);
+TABLE_MATCH(pt_mm, pt_mm);
+TABLE_MATCH(ptl, ptl);
+#undef TABLE_MATCH
+static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
+
 /*
  * No-op macros that just return the current protection value. Defined here
  * because these macros can be used even if CONFIG_MMU is not defined.
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:23 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542014.845421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPf-0008TH-Fb; Wed, 31 May 2023 21:31:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542014.845421; Wed, 31 May 2023 21:31:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPf-0008T5-Bl; Wed, 31 May 2023 21:31:23 +0000
Received: by outflank-mailman (input) for mailman id 542014;
 Wed, 31 May 2023 21:31:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPe-0006zB-96
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:22 +0000
Received: from mail-yw1-x112f.google.com (mail-yw1-x112f.google.com
 [2607:f8b0:4864:20::112f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b9f937b-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:31:20 +0200 (CEST)
Received: by mail-yw1-x112f.google.com with SMTP id
 00721157ae682-568a1011488so969497b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:20 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b9f937b-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568679; x=1688160679;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ODlFnqFbwCYj1Eu7Q59WYAsHTUoZZ7gjL38Gt/grehE=;
        b=NGZceTIR2lqEzgrDeWjSiexk6w9g3s1TlD144+ZFhyEk2n6+p9UC+6qVvvVLXzp2su
         K9oJ0eJw0ezplOB9R0ifZq19iQ285sb1OM4mJ1UElkWFp36T3wq89+HQwlMRGwOKmFO7
         z1tP0exNbwqfKUldCdliSyHLB7+WYxs5iLiptM0KEACbqVBdid/Ntr98KvLnXyv2Phff
         Kkrj2cY5I52akM7rhkuqjxOeMf+L5Rld4rmHhMpoy2yk5X8KAowKfrZQsw0r7hKhSbaY
         cfZsGjnDLb5nI+L/8w5H9J8F2OZUMVJi9RmJunsGnB8M2co0leG7lQkvBoLTt5HxAMxc
         OBEQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568679; x=1688160679;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ODlFnqFbwCYj1Eu7Q59WYAsHTUoZZ7gjL38Gt/grehE=;
        b=JYJhmhWMlPke8MI9TrOWa4LxAQGuwap4eSmRWOQy0bOpTK1DqpptmULa1xkHQv6Rpx
         smljmD0ZhhyZnQQIxA0oECPF03WCxstFpi/95B2/SDFcfVzYTtiT40HLvEnMxKu2dFOC
         YgpSpiqeranxTFvAjutejTpseYZ1RvaJa1n+AcyF47UW5Qlxj0P+9mdhxDj/SLTM49lQ
         +jz2BcQaFyQZzsBZrBjYePWhcr4ZXQOnb9O1VqlKxI9XDn/mXqTj5gHLd7XB7r5CFTVa
         7uab8XdivVPt80/bPx9ZSQlB/ETSHG0dOoQzQYquttrdZ4aYHE4tFZsHn56MwgFjhfaN
         IsVA==
X-Gm-Message-State: AC+VfDxW5tDeanzRUE3sZ838dAYb3C2bTqMkULHJCxA2UnrTLW9L017c
	YcEGd7XlDJIBAGUsOM53nq4=
X-Google-Smtp-Source: ACHHUZ65n2/E4xpD/Goiwc5ceCegWo1qrr/SWnaz3wgfEMIe1tJLwwNDUTgx+izUNHv+/w4oBsm9SQ==
X-Received: by 2002:a81:a0c1:0:b0:568:b10a:e430 with SMTP id x184-20020a81a0c1000000b00568b10ae430mr7264559ywg.25.1685568679304;
        Wed, 31 May 2023 14:31:19 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 05/34] mm: add utility functions for ptdesc
Date: Wed, 31 May 2023 14:30:03 -0700
Message-Id: <20230531213032.25338-6-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce utility functions setting the foundation for ptdescs. These
will also assist in the splitting out of ptdesc from struct page.

Functions that focus on the descriptor are prefixed with ptdesc_* while
functions that focus on the pagetable are prefixed with pagetable_*.

pagetable_alloc() is defined to allocate new ptdesc pages as compound
pages. This standardizes ptdescs by allowing for one allocation function
and one free function, in contrast to the current two of each.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/asm-generic/tlb.h | 11 +++++++
 include/linux/mm.h        | 61 +++++++++++++++++++++++++++++++++++++++
 include/linux/pgtable.h   | 12 ++++++++
 3 files changed, 84 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b46617207c93..6bade9e0e799 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
 }
 
+static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
+{
+	tlb_remove_table(tlb, pt);
+}
+
+/* Like tlb_remove_ptdesc, but for page-like page directories. */
+static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
+{
+	tlb_remove_page(tlb, ptdesc_page(pt));
+}
+
 static inline void tlb_change_page_size(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 42ff3e04c006..620537e2f94f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2747,6 +2747,62 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
 }
 #endif /* CONFIG_MMU */
 
+static inline struct ptdesc *virt_to_ptdesc(const void *x)
+{
+	return page_ptdesc(virt_to_page(x));
+}
+
+static inline void *ptdesc_to_virt(const struct ptdesc *pt)
+{
+	return page_to_virt(ptdesc_page(pt));
+}
+
+static inline void *ptdesc_address(const struct ptdesc *pt)
+{
+	return folio_address(ptdesc_folio(pt));
+}
+
+static inline bool pagetable_is_reserved(struct ptdesc *pt)
+{
+	return folio_test_reserved(ptdesc_folio(pt));
+}
+
+/**
+ * pagetable_alloc - Allocate pagetables
+ * @gfp:    GFP flags
+ * @order:  desired pagetable order
+ *
+ * pagetable_alloc allocates a page table descriptor as well as all pages
+ * described by it.
+ *
+ * Return: The ptdesc describing the allocated page tables.
+ */
+static inline struct ptdesc *pagetable_alloc(gfp_t gfp, unsigned int order)
+{
+	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+
+	return page_ptdesc(page);
+}
+
+/**
+ * pagetable_free - Free pagetables
+ * @pt:	The page table descriptor
+ *
+ * pagetable_free frees a page table descriptor as well as all page
+ * tables described by said ptdesc.
+ */
+static inline void pagetable_free(struct ptdesc *pt)
+{
+	struct page *page = ptdesc_page(pt);
+
+	__free_pages(page, compound_order(page));
+}
+
+static inline void pagetable_clear(void *x)
+{
+	clear_page(x);
+}
+
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
@@ -2973,6 +3029,11 @@ static inline void mark_page_reserved(struct page *page)
 	adjust_managed_page_count(page, -1);
 }
 
+static inline void free_reserved_ptdesc(struct ptdesc *pt)
+{
+	free_reserved_page(ptdesc_page(pt));
+}
+
 /*
  * Default method to free all the __init memory into the buddy system.
  * The freed pages will be poisoned with pattern "poison" if it's within
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index c997e9878969..5f12622d1521 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1027,6 +1027,18 @@ TABLE_MATCH(ptl, ptl);
 #undef TABLE_MATCH
 static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
 
+#define ptdesc_page(pt)			(_Generic((pt),			\
+	const struct ptdesc *:		(const struct page *)(pt),	\
+	struct ptdesc *:		(struct page *)(pt)))
+
+#define ptdesc_folio(pt)		(_Generic((pt),			\
+	const struct ptdesc *:		(const struct folio *)(pt),	\
+	struct ptdesc *:		(struct folio *)(pt)))
+
+#define page_ptdesc(p)			(_Generic((p),			\
+	const struct page *:		(const struct ptdesc *)(p),	\
+	struct page *:			(struct ptdesc *)(p)))
+
 /*
  * No-op macros that just return the current protection value. Defined here
  * because these macros can be used even if CONFIG_MMU is not defined.
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:24 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542015.845431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPg-0000JB-R1; Wed, 31 May 2023 21:31:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542015.845431; Wed, 31 May 2023 21:31:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPg-0000I4-Mj; Wed, 31 May 2023 21:31:24 +0000
Received: by outflank-mailman (input) for mailman id 542015;
 Wed, 31 May 2023 21:31:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPf-0006xu-6N
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:23 +0000
Received: from mail-yw1-x112a.google.com (mail-yw1-x112a.google.com
 [2607:f8b0:4864:20::112a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ceb407c-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:22 +0200 (CEST)
Received: by mail-yw1-x112a.google.com with SMTP id
 00721157ae682-566586b180fso959697b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:22 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ceb407c-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568681; x=1688160681;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=cQi1r9n8ZvdAYjhQrP0RtpOHDwCF0T3SkT9pOHS3grY=;
        b=I4ZK3ufVe7mv71vTYogtgymswU5NJnWubvNrBVwVZ1xQ0v/p7EI5y5eDyiHJRrHKpG
         azRQJKWH7AViYgdCK9GdEYZ8omA3oIlbOzs852fLZAEvIF+ozvEvZkK1/lCOTaAOLLWg
         oij0mtTKcWvOTyQV92r6jSH492LqXNF3KyTROTbV0zOPwz64hVwHxQg5gRw6g2twpMr+
         Syr51H68T+2e3gvOiE/6C7bywaJcNLAGV+Y3hFA7u25fKQ4NEmZwbyYlQc7gCZ6R/kVr
         pYNZAd+SPL7/3au6VqcXubefeyJmThujQisY5D7sD9GWsMGa+7uty95WSWwZ8guF3AKi
         4svg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568681; x=1688160681;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=cQi1r9n8ZvdAYjhQrP0RtpOHDwCF0T3SkT9pOHS3grY=;
        b=DWD63bHsk4XnbPxf0G7VA37ES0uQ4tE+ulpXsnolh6y1uVKWJYdxmQtvoukzeh/0vC
         9BH+5a7aiITZZ1VEc/jhtRSBlkIeN/UKQAnQ0qvs0kQ6Bye5eK/xKKFGDZCKpWRAp48x
         alE7TxIo+KLfvFYk2kc9MmLNVtS0RGPMqGoyapIu+XikXgJCjpfL4QIQgQZOud5iUJyx
         dD/WhZGMMd3jYahDU5JUY6+OIvSA9IC/MJtzH3mMA2VwpKbLYbtBU1J0Pli60GT/qhbL
         ciwwfgTj0QDcIuFaHMK516BJTmCgLz8JyvEK1iftkMlEeLP2VCGMURZkdSsP2LsoUKN0
         NKJg==
X-Gm-Message-State: AC+VfDw3SNo7bV8JL7TsjvNielHIFsjkbyI6ORyF/RT0iG4RE98U9VUg
	AYAksn9CgjT6+YrEpGm+TwU=
X-Google-Smtp-Source: ACHHUZ4MgiQhTnDT0ikSenPgoz+0X6zm2wOF95JZP/LWMfbyCLY8nB5YKCMBE1n0Fb7UhukNcNCVlw==
X-Received: by 2002:a0d:f5c6:0:b0:561:1c14:b8df with SMTP id e189-20020a0df5c6000000b005611c14b8dfmr6643798ywf.47.1685568681477;
        Wed, 31 May 2023 14:31:21 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 06/34] mm: Convert pmd_pgtable_page() to pmd_ptdesc()
Date: Wed, 31 May 2023 14:30:04 -0700
Message-Id: <20230531213032.25338-7-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Convert pmd_pgtable_page() to pmd_ptdesc() and convert all its callers.
This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 620537e2f94f..3a9c40e90dd7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2912,15 +2912,15 @@ static inline void pgtable_pte_page_dtor(struct page *page)
 
 #if USE_SPLIT_PMD_PTLOCKS
 
-static inline struct page *pmd_pgtable_page(pmd_t *pmd)
+static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
 {
 	unsigned long mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
-	return virt_to_page((void *)((unsigned long) pmd & mask));
+	return virt_to_ptdesc((void *)((unsigned long) pmd & mask));
 }
 
 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_pgtable_page(pmd));
+	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
 }
 
 static inline bool pmd_ptlock_init(struct page *page)
@@ -2939,7 +2939,7 @@ static inline void pmd_ptlock_free(struct page *page)
 	ptlock_free(page);
 }
 
-#define pmd_huge_pte(mm, pmd) (pmd_pgtable_page(pmd)->pmd_huge_pte)
+#define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
 
 #else
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:28 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542017.845441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPk-0000lt-Bg; Wed, 31 May 2023 21:31:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542017.845441; Wed, 31 May 2023 21:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPk-0000lh-8L; Wed, 31 May 2023 21:31:28 +0000
Received: by outflank-mailman (input) for mailman id 542017;
 Wed, 31 May 2023 21:31:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPj-0006zB-2v
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:27 +0000
Received: from mail-yw1-x1130.google.com (mail-yw1-x1130.google.com
 [2607:f8b0:4864:20::1130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e0b6392-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:31:24 +0200 (CEST)
Received: by mail-yw1-x1130.google.com with SMTP id
 00721157ae682-565a63087e9so745227b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:24 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e0b6392-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568683; x=1688160683;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=27lyfpZ04FD2UDRY/I33l74zvzgaqoXO3tIGNjTPBPg=;
        b=ekb2h5vYZpG+cwgCVg4fQK7EHYCXZwnN3tICBhvbAXV/qaJ4duvuh8+ArMRtJnIp/1
         3RGo21yGJSysDJalslm0N3KJ+xBemufNmCM4QUS+AlLps3g1asXmgD446fAwgWFlRe+6
         a7aoyacEmBOwDL/dieiMjv/UIxc7tBtkwjPRg8HdgpDriWcSnd2JpyypDy8aN9IwIMR1
         dAM+p/G53wM0julinDJp886bQVRAOW6SvsbsPDW7Xp4BxlToFfeY++p5U7tcCmWgdlyl
         aRBYNbpeHC4ndh8fG8DxUBYpHsoQ7lFfl1tkv5LwqehXHAbOBlEVGGEVzchpWOIR1fN9
         uokg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568683; x=1688160683;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=27lyfpZ04FD2UDRY/I33l74zvzgaqoXO3tIGNjTPBPg=;
        b=guzH1zzM/QzJiTRuoeqqdQ8FvI1dG7UdFBmzbEIPtTWdXKNNfs6c+iFGV1gxkBYsga
         Oy/i932nzlq1uGydhNRTLKHuyZr4IJDUjZE29cmjMnMCuDQmwMXyau5Kcm0WyOE+UkMj
         lX4CORodyq1c/0cHGKJnULsT+aIjkQI6tD5PGWP0CdzehmN9kV2egqqiE/KqRiafz/bf
         ZZSg+IY+WXZliyk9DybL9jxSZtqQpbcWGcqnmpwVQX0d/bLswRAC/EYAh8IUQCg1IzIB
         5oCaxNDRSTMG1CzoWoxCvg8cp6x1ClhQy0XogTk0RwrhuXizy5Vuv0rWxJmgivecpfRl
         mKrg==
X-Gm-Message-State: AC+VfDwYnVkfIT8f0tO8dnaiQXyIu877vy31+cDpfx3K/JwP4WzKOrrO
	YmyC/szn55+J0dlBCnBVmUw=
X-Google-Smtp-Source: ACHHUZ6dr8MPeM7AL/jHaXXQ52piABkpCgdGqBTW+dyh+LgB10RRsXu3Y1HlJGFoY/78HDuWSu9FGg==
X-Received: by 2002:a0d:d741:0:b0:565:dff1:d1e2 with SMTP id z62-20020a0dd741000000b00565dff1d1e2mr7923571ywd.18.1685568683472;
        Wed, 31 May 2023 14:31:23 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 07/34] mm: Convert ptlock_alloc() to use ptdescs
Date: Wed, 31 May 2023 14:30:05 -0700
Message-Id: <20230531213032.25338-8-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 6 +++---
 mm/memory.c        | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3a9c40e90dd7..1fd16ac96036 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2806,7 +2806,7 @@ static inline void pagetable_clear(void *x)
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
-extern bool ptlock_alloc(struct page *page);
+bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);
 
 static inline spinlock_t *ptlock_ptr(struct page *page)
@@ -2818,7 +2818,7 @@ static inline void ptlock_cache_init(void)
 {
 }
 
-static inline bool ptlock_alloc(struct page *page)
+static inline bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	return true;
 }
@@ -2848,7 +2848,7 @@ static inline bool ptlock_init(struct page *page)
 	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
-	if (!ptlock_alloc(page))
+	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
 	spin_lock_init(ptlock_ptr(page));
 	return true;
diff --git a/mm/memory.c b/mm/memory.c
index 8358f3b853f2..8d37dd302f2f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5938,14 +5938,14 @@ void __init ptlock_cache_init(void)
 			SLAB_PANIC, NULL);
 }
 
-bool ptlock_alloc(struct page *page)
+bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	spinlock_t *ptl;
 
 	ptl = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
 	if (!ptl)
 		return false;
-	page->ptl = ptl;
+	ptdesc->ptl = ptl;
 	return true;
 }
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:29 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542018.845447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPk-0000qX-SJ; Wed, 31 May 2023 21:31:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542018.845447; Wed, 31 May 2023 21:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPk-0000pl-JV; Wed, 31 May 2023 21:31:28 +0000
Received: by outflank-mailman (input) for mailman id 542018;
 Wed, 31 May 2023 21:31:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPj-0006xu-70
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:27 +0000
Received: from mail-yw1-x1134.google.com (mail-yw1-x1134.google.com
 [2607:f8b0:4864:20::1134])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7f27acc4-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:26 +0200 (CEST)
Received: by mail-yw1-x1134.google.com with SMTP id
 00721157ae682-56896c77434so1035227b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:26 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f27acc4-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568685; x=1688160685;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TzKQXlz5IDfPvgDKTOeCKkz+dXM61NoKgwUhpTupiUw=;
        b=hIfmjb5X06SZ3BaD9NYBS6hgA+ayAquIKaVHr2BHnLBOPZKQ/J4ZAsHZVuEaw1F1C0
         DLhGZpjTZZFs02UVgouMcVD7Om43dfavrY/9B0xqLArccnIFv4HyErh7ebqkxAVmYpAt
         c44ATu4lPRNkYKf1axyVO8gXjPrDMUnXOwNjgt4KrmrL4IpwwDNTjXfR8pLFfm5hWJH3
         A3an9It/QF6RuKMl0LJM9Di6sd8T8h9NIoEvAXZjtMgMuUIFQhhIMF5orERUADpDZzUC
         JdcxeDPS4ad7hb6jzXYz7QYZfeE2bjaq1vVtDaxxKS7PVXsoR09l6Ff83vLOpPnhM1WL
         QcSQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568685; x=1688160685;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=TzKQXlz5IDfPvgDKTOeCKkz+dXM61NoKgwUhpTupiUw=;
        b=izskBxoFeZz+yuFhUwnVUY1pA7VfYdg4yQKUEj8qa4vwyqAEHJDdOFr9/+nQl+H9K+
         9Oy8yvbqUUvBImpXO74n1u2Ag5aueQRKN1k4W39O534t57KHYty1tn6j8AxaWtMItUPZ
         XjB8kow6ADcHJJs95t3l05mH92DNo/+zwG4Yv4GHWIfGkogfnGgRWKEjPeZxJZSIs0lA
         YatKm/kbp8rfIavcStLh0N1ZJK+6QGQ+yijUwlrQt3n7WCbooBymMO9x6+e+2wBUIQZg
         HzWegDWl5jK1t/HASy2WIVIuBnWbNiM/g6Vj+zIIQSO2GWWu39CHkLuaHY5o6BVNA5/l
         8JoA==
X-Gm-Message-State: AC+VfDytU0AxS/kOtwfIvN/9Aqt7JJIghrkA735K23sRIECNQ6MwQp4X
	5+GSINJ2QGwtKLDhyZ3ACzk=
X-Google-Smtp-Source: ACHHUZ4penqUBoef5VZ7RS39HNNXGjV3amJwKYqX+6tsdiojUfn+sW0Rg5ayNGX0C78Cc9bsT/rYzg==
X-Received: by 2002:a81:a113:0:b0:54f:752e:9e60 with SMTP id y19-20020a81a113000000b0054f752e9e60mr6993857ywg.37.1685568685351;
        Wed, 31 May 2023 14:31:25 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 08/34] mm: Convert ptlock_ptr() to use ptdescs
Date: Wed, 31 May 2023 14:30:06 -0700
Message-Id: <20230531213032.25338-9-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/xen/mmu_pv.c |  2 +-
 include/linux/mm.h    | 14 +++++++-------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index b3b8d289b9ab..f469862e3ef4 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -651,7 +651,7 @@ static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm)
 	spinlock_t *ptl = NULL;
 
 #if USE_SPLIT_PTE_PTLOCKS
-	ptl = ptlock_ptr(page);
+	ptl = ptlock_ptr(page_ptdesc(page));
 	spin_lock_nest_lock(ptl, &mm->page_table_lock);
 #endif
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1fd16ac96036..6f7263fcd821 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2809,9 +2809,9 @@ void __init ptlock_cache_init(void);
 bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);
 
-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return page->ptl;
+	return ptdesc->ptl;
 }
 #else /* ALLOC_SPLIT_PTLOCKS */
 static inline void ptlock_cache_init(void)
@@ -2827,15 +2827,15 @@ static inline void ptlock_free(struct page *page)
 {
 }
 
-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return &page->ptl;
+	return &ptdesc->ptl;
 }
 #endif /* ALLOC_SPLIT_PTLOCKS */
 
 static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_page(*pmd));
+	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }
 
 static inline bool ptlock_init(struct page *page)
@@ -2850,7 +2850,7 @@ static inline bool ptlock_init(struct page *page)
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
 	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
-	spin_lock_init(ptlock_ptr(page));
+	spin_lock_init(ptlock_ptr(page_ptdesc(page)));
 	return true;
 }
 
@@ -2920,7 +2920,7 @@ static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
 
 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
+	return ptlock_ptr(pmd_ptdesc(pmd));
 }
 
 static inline bool pmd_ptlock_init(struct page *page)
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:31 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542019.845461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPn-0001PJ-4S; Wed, 31 May 2023 21:31:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542019.845461; Wed, 31 May 2023 21:31:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPn-0001P4-0O; Wed, 31 May 2023 21:31:31 +0000
Received: by outflank-mailman (input) for mailman id 542019;
 Wed, 31 May 2023 21:31:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPm-0006zB-F4
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:30 +0000
Received: from mail-yw1-x112b.google.com (mail-yw1-x112b.google.com
 [2607:f8b0:4864:20::112b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8045b566-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:31:28 +0200 (CEST)
Received: by mail-yw1-x112b.google.com with SMTP id
 00721157ae682-565ca65e7ffso991967b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:28 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8045b566-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568687; x=1688160687;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=M8eY/8E6JF8s/V5NNW8TZkaDOnTKuaiqrBMcG8UrW5A=;
        b=F74YoiyArmyQwVgOOUW2kfK/7YQSeytT5Qdq4W7jztqrHFW+OxsR62y2z0sZiEV/Pj
         CHUjkMgQ+tHkrFh4S6EPBTtuP0+8wu5BQZyij+e15SnoNrkShPqGZvWdjF5BKJMv2Qv9
         d//2kHJUZF8y/wk4PqE5bOyyg7AT91rLlQi2SzidEtRHvmTf0yi6wamtogBgC9EY9N9P
         uw5EAPHRhzcCUSce1FYm7EE8CHhgk2SNgdWG+thFtb+Fd9K9ZtkE1HL6Lqu8zbrIjBxa
         OBo4gwjzqO/AwYTPpiaC4MdC5Mq0f0RkoLNLDnNUh3I4ll9vsxHX0eJNI6QQ67EeV90U
         TdMw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568687; x=1688160687;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=M8eY/8E6JF8s/V5NNW8TZkaDOnTKuaiqrBMcG8UrW5A=;
        b=XNdW3u9bRAKu4MIvDXyh7ULsp+wh6Pcn+fOfE+k+yPxQvQPK+zy9RipQ1VoJW76bJS
         rqg3ewrpt2B09+InCXy7t+b+IjKFnu2W1FxrJprS2ifFv0JMFrB0I6Q/yLvkoqUiZ9RN
         +R6fPhCT8mSKx1OGTjW41QqnKnKXQ/Suw0FcumIWyefrhTlisMjuuGadyW4H0bQLQg5x
         fPK7zanvR7Uq6PYeNzWvgcco+ByZiiJyDti2zcHtTftOLp00qJ3kIndZRSfbQB3ROMOR
         k3XR+N9JP2zuE1D8Hj3u13TAcRtfpdM4Wq/GHNEUrv0kAwC1G0RJqt0V6yNBvtNSiGDi
         kbTw==
X-Gm-Message-State: AC+VfDynwrL9XeN6Y1cvmV3NrUlxabjoBdbdeZuUVpyxCrHnWlJ49zXI
	lr1BZvKBz0nABTTsKzqpX9o=
X-Google-Smtp-Source: ACHHUZ7+UFYbIzpavTgJ7f/w4fvGAPRL0PNa3hATkUDplqTdJkdO52BS/n5X3DMJ5Uv/pNitevHZ/Q==
X-Received: by 2002:a0d:dd92:0:b0:568:be91:c2c0 with SMTP id g140-20020a0ddd92000000b00568be91c2c0mr6597562ywe.6.1685568687247;
        Wed, 31 May 2023 14:31:27 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 09/34] mm: Convert pmd_ptlock_init() to use ptdescs
Date: Wed, 31 May 2023 14:30:07 -0700
Message-Id: <20230531213032.25338-10-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6f7263fcd821..8e63e60c399c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2923,12 +2923,12 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(pmd_ptdesc(pmd));
 }
 
-static inline bool pmd_ptlock_init(struct page *page)
+static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	page->pmd_huge_pte = NULL;
+	ptdesc->pmd_huge_pte = NULL;
 #endif
-	return ptlock_init(page);
+	return ptlock_init(ptdesc_page(ptdesc));
 }
 
 static inline void pmd_ptlock_free(struct page *page)
@@ -2948,7 +2948,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 
-static inline bool pmd_ptlock_init(struct page *page) { return true; }
+static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void pmd_ptlock_free(struct page *page) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
@@ -2964,7 +2964,7 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 
 static inline bool pgtable_pmd_page_ctor(struct page *page)
 {
-	if (!pmd_ptlock_init(page))
+	if (!pmd_ptlock_init(page_ptdesc(page)))
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:33 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542020.845471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPp-0001qe-Ir; Wed, 31 May 2023 21:31:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542020.845471; Wed, 31 May 2023 21:31:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPp-0001qC-Cn; Wed, 31 May 2023 21:31:33 +0000
Received: by outflank-mailman (input) for mailman id 542020;
 Wed, 31 May 2023 21:31:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPn-0006zB-Pf
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:31 +0000
Received: from mail-yw1-x1130.google.com (mail-yw1-x1130.google.com
 [2607:f8b0:4864:20::1130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 81617c7b-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:31:30 +0200 (CEST)
Received: by mail-yw1-x1130.google.com with SMTP id
 00721157ae682-568ba7abc11so707437b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:30 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81617c7b-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568689; x=1688160689;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xowfOyB08CJw8MKGIIgRKSCkyhh2uF8MvUFj9KXpTSw=;
        b=L8ygLqSRY2eqB8nseiiMf24A2f/spwq3tiGOrBuR5p7dj9EWG9ImFwe46ZZwXRtvRX
         /V86q8xSGkCuVPlvQOJ2SMa4rsoFmPrmu5autDJLeyd3yfLXEA6bbO2/4Yqm1LlWBfgW
         xJxuutRSGFBvTral/NTreMAMCGbSkBu5TZ5qxdEeFHUTponOUVV1jiviowhgii7tWLBq
         MEHuHS4DCf2z1UfCtRdUmDITSqJKFDRP0C3+jLAFaOsLApRcLJ8PsQ+aNZLN1wbuZUBY
         Px85P6DC+zY+Gwn3bCa+FGLrsmclVsP0ccqay7jLihOMLhj3/zlw+zFLVxYDPwbySfXO
         VKRw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568689; x=1688160689;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=xowfOyB08CJw8MKGIIgRKSCkyhh2uF8MvUFj9KXpTSw=;
        b=ZekeVHtadc+FJRIo3m1j+rZDgGK79Wuf6dA7IwRy7tYSQLnB/NDF9l3Rpvtpa4TLkC
         nIo2uoJs88bHz3HU++8Ej8v+wO8NKTcFcWhOHX5unF+iHAyU6Bw/QBZ2AyAf5y3iiqBx
         iwOSpi4a7JFfXBQZC4Tgf23VTQlGEU/hjbYtnQkrMyrhIuTGCcsJcVWUiIFxbYQDyzFW
         GjonhiyWMbyEhYG8wc6OjnDEgQHpngnrzr0Qy7s7kmOewPM4k7JCQtpFjMmEfwedETBM
         ODIdY5idVkjwve0N6XUWf6be7bxeEtG7H8/uIesxUEakIx1t6xC2KMg0vUBtbbawCFxE
         CkTw==
X-Gm-Message-State: AC+VfDzwkDqzyQ/LzNJPNA736JAgkJszlA/cnIEFGMQrRaA1wKg2PE3O
	fyGwpgb/oUioNby5p/eZJ/g=
X-Google-Smtp-Source: ACHHUZ4HC/p6n5oc2YZUDdXfXImpqPK7MSZMtRTu8JGDzMkRUKKOQ8wmCGx7YESYze8Eib1xLonoHg==
X-Received: by 2002:a81:5357:0:b0:561:c5c3:9d79 with SMTP id h84-20020a815357000000b00561c5c39d79mr7593955ywb.45.1685568689113;
        Wed, 31 May 2023 14:31:29 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 10/34] mm: Convert ptlock_init() to use ptdescs
Date: Wed, 31 May 2023 14:30:08 -0700
Message-Id: <20230531213032.25338-11-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8e63e60c399c..bc2f139de4e7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2838,7 +2838,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }
 
-static inline bool ptlock_init(struct page *page)
+static inline bool ptlock_init(struct ptdesc *ptdesc)
 {
 	/*
 	 * prep_new_page() initialize page->private (and therefore page->ptl)
@@ -2847,10 +2847,10 @@ static inline bool ptlock_init(struct page *page)
 	 * It can happen if arch try to use slab for page table allocation:
 	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
-	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
-	if (!ptlock_alloc(page_ptdesc(page)))
+	VM_BUG_ON_PAGE(*(unsigned long *)&ptdesc->ptl, ptdesc_page(ptdesc));
+	if (!ptlock_alloc(ptdesc))
 		return false;
-	spin_lock_init(ptlock_ptr(page_ptdesc(page)));
+	spin_lock_init(ptlock_ptr(ptdesc));
 	return true;
 }
 
@@ -2863,13 +2863,13 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 static inline void ptlock_cache_init(void) {}
-static inline bool ptlock_init(struct page *page) { return true; }
+static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void ptlock_free(struct page *page) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
 static inline bool pgtable_pte_page_ctor(struct page *page)
 {
-	if (!ptlock_init(page))
+	if (!ptlock_init(page_ptdesc(page)))
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
@@ -2928,7 +2928,7 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	ptdesc->pmd_huge_pte = NULL;
 #endif
-	return ptlock_init(ptdesc_page(ptdesc));
+	return ptlock_init(ptdesc);
 }
 
 static inline void pmd_ptlock_free(struct page *page)
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:34 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542021.845476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPq-0001v5-4t; Wed, 31 May 2023 21:31:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542021.845476; Wed, 31 May 2023 21:31:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPp-0001u4-QC; Wed, 31 May 2023 21:31:33 +0000
Received: by outflank-mailman (input) for mailman id 542021;
 Wed, 31 May 2023 21:31:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPn-0006xu-Se
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:31 +0000
Received: from mail-yw1-x1132.google.com (mail-yw1-x1132.google.com
 [2607:f8b0:4864:20::1132])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 827cee1c-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:31 +0200 (CEST)
Received: by mail-yw1-x1132.google.com with SMTP id
 00721157ae682-568900c331aso736367b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:31 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 827cee1c-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568691; x=1688160691;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=G1jEnioq7k9B+WnYjAw3S6OIwBriiVkAgPsVN7nSKAQ=;
        b=sRB3lx4kAmnM8+wptcE+ajVYWk3XjheLnACKkDukTCUf9xFbPcDCYhPah6bex/BYdh
         giGFY1Br6w10nUQCd9DWblMM8GH+uaGuqnqROXiLO3UcERmYA2lDrQkgyUuZp7Z8kkqA
         nuRpQ/9G4vNwfZmkO1bs5HdxOej2dcQ5VFdd4ryduclGMMN3123XdTOk5+I53fDJyOM9
         XY0ZeZ9oP8N1MEbhuB5it+QgLHnwrzlk6MNwsUUfvNiqVb/KKB+BbxPk/D/IjrOXLBJg
         QnLP5YOwWWUD0KCage2oW1q2o8nxOiUGc6Utker8eoaHUzZhr+sfeHDiQePcAYy+fUoc
         EHhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568691; x=1688160691;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=G1jEnioq7k9B+WnYjAw3S6OIwBriiVkAgPsVN7nSKAQ=;
        b=UG310g5QLkQlX5bmjjzTRQbEStub79yoFSChNGQe6fTRuIlww392dvKO+r9vVDhJJ3
         +V/y+4ek89y3P3qHVkZisdnpBbtSvMgBkznvVVGkPnO9hnWHk8DhY1iG2lOtGTcNyrgr
         Qrqsj8s7wF+19NONCdqDK5Fn/rWBcQerrCbxp+zCEZUhFzDpEq+1Yi0e6+Edb0oAkjio
         9FvkDKYYHCAY4Hg/p6iw/8MUYToZYEmc2wr/KkU/GZsGbp2MEqAnwOcO7vzAeT9Rpcz9
         ZgHJ1ANFQJHaYZZ34uyJzvdlr3uuByhJPHx96mjozhondVsJgtfp0vrYfZ8sUdb2FqoI
         bA0Q==
X-Gm-Message-State: AC+VfDx4oT+/DImFfSfi/QVDhwGS/8sSn60pEpv1JzN6jycE9deRm2RU
	L7cvWrQrkiWJSTNriUECM9XPANv476q4lw==
X-Google-Smtp-Source: ACHHUZ5VBnMU9jyAiESUYWnGmIowgKT6gU4T3B/pZNfl2GNdmiRLLxEPQJ4tr+ESELmCnMZcerU5cA==
X-Received: by 2002:a81:49cf:0:b0:561:c5d4:ee31 with SMTP id w198-20020a8149cf000000b00561c5d4ee31mr7393100ywa.38.1685568690985;
        Wed, 31 May 2023 14:31:30 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 11/34] mm: Convert pmd_ptlock_free() to use ptdescs
Date: Wed, 31 May 2023 14:30:09 -0700
Message-Id: <20230531213032.25338-12-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc2f139de4e7..ffc82355fea6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2931,12 +2931,12 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 	return ptlock_init(ptdesc);
 }
 
-static inline void pmd_ptlock_free(struct page *page)
+static inline void pmd_ptlock_free(struct ptdesc *ptdesc)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
+	VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc));
 #endif
-	ptlock_free(page);
+	ptlock_free(ptdesc_page(ptdesc));
 }
 
 #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
@@ -2949,7 +2949,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 }
 
 static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; }
-static inline void pmd_ptlock_free(struct page *page) {}
+static inline void pmd_ptlock_free(struct ptdesc *ptdesc) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
 
@@ -2973,7 +2973,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)
 
 static inline void pgtable_pmd_page_dtor(struct page *page)
 {
-	pmd_ptlock_free(page);
+	pmd_ptlock_free(page_ptdesc(page));
 	__ClearPageTable(page);
 	dec_lruvec_page_state(page, NR_PAGETABLE);
 }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:31:36 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:31:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542022.845491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPs-0002fz-NF; Wed, 31 May 2023 21:31:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542022.845491; Wed, 31 May 2023 21:31:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TPs-0002el-GI; Wed, 31 May 2023 21:31:36 +0000
Received: by outflank-mailman (input) for mailman id 542022;
 Wed, 31 May 2023 21:31:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPq-0006zB-Sy
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:34 +0000
Received: from mail-yw1-x1130.google.com (mail-yw1-x1130.google.com
 [2607:f8b0:4864:20::1130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 83a17619-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:31:33 +0200 (CEST)
Received: by mail-yw1-x1130.google.com with SMTP id
 00721157ae682-565e6beb7aaso736827b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:33 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83a17619-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568693; x=1688160693;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=JALdXD/M7jBu0Q8jfR4m4Tx9IinYQtlxZ5YXkrMJ3RM=;
        b=SDECzrupkY8P71D5Bx9Ena6q3wieNwuGBpSv8ettDWf4PjEIxkW1kzC+/sNtzTCciF
         08YPq8+A8ip8H8kzMs94GMf7GyNCyBFe+MQWmq4tDFaYpFFNOE6UTqgnOULDbhNW0X9R
         mptdXbgDm5NvfVfzaMpwLm1cS6nmZ7EARb4uqJGF+rFwJSp8Gd0Dc+nu6W5F2EiHrPlp
         wvBO17QGrS6gAkojPK4/lKp5/gxat5ehsK5W06KJyleVWDfyXvKL/KtB/UxjcTPMijxH
         kcvx1/i/I9fdgW5R6GrMiCaj6gwLScpZ5XR6LtwZa6L23gJoAxE1JEulvsNvUD9DStHW
         I/CA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568693; x=1688160693;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=JALdXD/M7jBu0Q8jfR4m4Tx9IinYQtlxZ5YXkrMJ3RM=;
        b=MXUIL3E5bZ8wqC/fWcNMaZjLSWSpzoWpChN2w8GBWoXyO7Hn9JXlDLeEIUppwsJtAD
         ZyHAKACBAggoNkQGRmkpWd5eD6NU3vkacskBi8yviyNGDJere6M95YfubNBPEoXWN7VA
         g6gL4c3Fvi4LZ/f5cetvuHsTJEJ/XcOIzpLB3+T2epS8gweUx2A4DUy8a1k/8DCqurnU
         zWy9PriWjC6Ftx3i5SQ+ypiQCN34677IwxjxGgkji02ZsZY9aiXsJOJUHjDmvtsiZ6ot
         gyhWY81bGDbOAqAW6+eqibgeva1maKOSCuqpQovXAdpeA87gaC0T6vFAFlhYrnxv/2Hq
         mZsg==
X-Gm-Message-State: AC+VfDwNBczsG6be4126xE1ZAuVUn0IGUXICrxeiERL5BFBKYUZ467nO
	hL4Okyy907sK/8ZN+ac8go8=
X-Google-Smtp-Source: ACHHUZ7Dif7B1ZpmhmssBZICB8FP9/M3ln6TJpLdw+EDe49hetyZ/pJac8jXPpLostqsKq4KAza0XA==
X-Received: by 2002:a81:4ed2:0:b0:568:4ef1:ba63 with SMTP id c201-20020a814ed2000000b005684ef1ba63mr7706752ywb.14.1685568692854;
        Wed, 31 May 2023 14:31:32 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 12/34] mm: Convert ptlock_free() to use ptdescs
Date: Wed, 31 May 2023 14:30:10 -0700
Message-Id: <20230531213032.25338-13-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 10 +++++-----
 mm/memory.c        |  4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ffc82355fea6..72725aa6c30d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2807,7 +2807,7 @@ static inline void pagetable_clear(void *x)
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
 bool ptlock_alloc(struct ptdesc *ptdesc);
-extern void ptlock_free(struct page *page);
+void ptlock_free(struct ptdesc *ptdesc);
 
 static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
@@ -2823,7 +2823,7 @@ static inline bool ptlock_alloc(struct ptdesc *ptdesc)
 	return true;
 }
 
-static inline void ptlock_free(struct page *page)
+static inline void ptlock_free(struct ptdesc *ptdesc)
 {
 }
 
@@ -2864,7 +2864,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 }
 static inline void ptlock_cache_init(void) {}
 static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
-static inline void ptlock_free(struct page *page) {}
+static inline void ptlock_free(struct ptdesc *ptdesc) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
 static inline bool pgtable_pte_page_ctor(struct page *page)
@@ -2878,7 +2878,7 @@ static inline bool pgtable_pte_page_ctor(struct page *page)
 
 static inline void pgtable_pte_page_dtor(struct page *page)
 {
-	ptlock_free(page);
+	ptlock_free(page_ptdesc(page));
 	__ClearPageTable(page);
 	dec_lruvec_page_state(page, NR_PAGETABLE);
 }
@@ -2936,7 +2936,7 @@ static inline void pmd_ptlock_free(struct ptdesc *ptdesc)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc));
 #endif
-	ptlock_free(ptdesc_page(ptdesc));
+	ptlock_free(ptdesc);
 }
 
 #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
diff --git a/mm/memory.c b/mm/memory.c
index 8d37dd302f2f..df0251243dfa 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5949,8 +5949,8 @@ bool ptlock_alloc(struct ptdesc *ptdesc)
 	return true;
 }
 
-void ptlock_free(struct page *page)
+void ptlock_free(struct ptdesc *ptdesc)
 {
-	kmem_cache_free(page_ptl_cachep, page->ptl);
+	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542045.845505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVf-0006A8-SI; Wed, 31 May 2023 21:37:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542045.845505; Wed, 31 May 2023 21:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVf-00069T-NH; Wed, 31 May 2023 21:37:35 +0000
Received: by outflank-mailman (input) for mailman id 542045;
 Wed, 31 May 2023 21:37:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQL-0006zB-FN
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:05 +0000
Received: from mail-yw1-x1132.google.com (mail-yw1-x1132.google.com
 [2607:f8b0:4864:20::1132])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 957a0a3a-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:32:04 +0200 (CEST)
Received: by mail-yw1-x1132.google.com with SMTP id
 00721157ae682-5689335d2b6so718917b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:32:03 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.32.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:32:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 957a0a3a-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568723; x=1688160723;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=QSGr1AIBRcRb+Ur7L3hOWyc2qTqn+mhf5/YLNQmZcmA=;
        b=P+sFLMkiyDDyXlnEG12Vca4wXTlsNP1tjr5mpQ3xYowZkV29Gi4DHzR6qoKyncOySw
         jnRT0Uzpd7rJ5SnFzozOgbTxqzpBvgdV5deGMByJA1EX8U9bDlcdVF5nVlhFFf1pyZ0O
         KrWMXXKGFgo38kAOB8ncfn/SPFyRIIMw4Td8wl18EIsvccYdq2AsnvZ0dkbvnVIaV8oQ
         lw4pGC9Zhdztp77vkJB22l9Fos3IjWhzJKFNlsAyyHAnjGjJYghL1MXToen9jCOsOslF
         dBNcDAuc1Cy34XA+B9s+UNYDNSEGNOwzJVHJrEoMCND5EfDLqD8QbuDF0jEbofHqgPPQ
         P6qg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568723; x=1688160723;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=QSGr1AIBRcRb+Ur7L3hOWyc2qTqn+mhf5/YLNQmZcmA=;
        b=gpJf43wNV9+m6ftOxcBHrULyODctrSNd3f827AuDw/waUz2ke9FAoDN6AFbjqiAOXR
         6sxJO5EqV3JEXRWYe6e1C3StNoy99OR3yu8MQ91eAa8eHWAVQKbVtZ7dnbfYY1fYkz/9
         zGdF7DLw1xL67BjVTm2Tf6Gm+ak/A8hp8L1oRZvkfDL54ZdrvxMalUqEjhtZO01CssHj
         UXRX0C7kPYcnfJiMUuKtjRfi+RjGE5Mye/EtaFmQhl3rCxB5+5NRtnK74cJAVJseLRSz
         liU5T/rFsIGNjQp/Xi7Ra+EUfqVHaQ5s+Q1h55jDJ4PulTBd8c/6EQQaoMs3bW1QwpLY
         +D2A==
X-Gm-Message-State: AC+VfDz+wT3afHPZADavFVOMaWLQGKWE8zZunE322MQ9ymDnoRQdibB8
	oT6/JzScIagyopI/ptrjc64=
X-Google-Smtp-Source: ACHHUZ6lpvw1UsG9eqv+xSZaIjITC9PhoF1JcriZ34z9zriWEJuaLEkcr/e1lknRm4p3R1WVQNwsxA==
X-Received: by 2002:a81:4ed2:0:b0:561:7ec:cf90 with SMTP id c201-20020a814ed2000000b0056107eccf90mr7025257ywb.42.1685568722698;
        Wed, 31 May 2023 14:32:02 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Dinh Nguyen <dinguyen@kernel.org>
Subject: [PATCH v3 27/34] nios2: Convert __pte_free_tlb() to use ptdescs
Date: Wed, 31 May 2023 14:30:25 -0700
Message-Id: <20230531213032.25338-28-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/nios2/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h
index ecd1657bb2ce..ce6bb8e74271 100644
--- a/arch/nios2/include/asm/pgalloc.h
+++ b/arch/nios2/include/asm/pgalloc.h
@@ -28,10 +28,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
 
-#define __pte_free_tlb(tlb, pte, addr)				\
-	do {							\
-		pgtable_pte_page_dtor(pte);			\
-		tlb_remove_page((tlb), (pte));			\
+#define __pte_free_tlb(tlb, pte, addr)					\
+	do {								\
+		pagetable_pte_dtor(page_ptdesc(pte));			\
+		tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 	} while (0)
 
 #endif /* _ASM_NIOS2_PGALLOC_H */
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542050.845533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVh-0006d5-IA; Wed, 31 May 2023 21:37:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542050.845533; Wed, 31 May 2023 21:37:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVh-0006ZX-4s; Wed, 31 May 2023 21:37:37 +0000
Received: by outflank-mailman (input) for mailman id 542050;
 Wed, 31 May 2023 21:37:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQ4-0006xu-1m
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:48 +0000
Received: from mail-yw1-x1131.google.com (mail-yw1-x1131.google.com
 [2607:f8b0:4864:20::1131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8c099398-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:47 +0200 (CEST)
Received: by mail-yw1-x1131.google.com with SMTP id
 00721157ae682-565cdb77b01so924977b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:47 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c099398-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568707; x=1688160707;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=P4oKpN44FwSHo9/T7M2wxV2695OWbA5/gOtyDCnuhZU=;
        b=sqsXs3WYNhTysc3aTx28iPtHSvphFkiMR2eHXkB7o2KtkKwUS1n7i2swDh0KUO31WT
         vavq1N/QnPwK/noQMjdRqJgHjHH9y1SbXtcnoH8vWPU6cfMxQg3fey32EPBGxlhGICtg
         d8p/rta/A6M/whTQ0SC5TRg4TVCyj1UgJEz0AD/6Adc/3sAai7pLjPybThunWDVH0699
         DJojWygC0Hk+BqL+vYwU7r82IBRSNnvoC+HA8BiJR1lfy1uKogoYj7KNe4JDvOtpNfpw
         fDp/K8vIfXRi0kNe3liDJnJyivYwfRujVeTE2teBbN8ilRdSN+O8021GFlawVUTXhajU
         nKiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568707; x=1688160707;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=P4oKpN44FwSHo9/T7M2wxV2695OWbA5/gOtyDCnuhZU=;
        b=RczLT2Dt+bprrsnCfrkTlDm5gSEQssihQ49em4mVp+80We3HBp4o9g2KWfSdhXFBNM
         rF8OAXdlySdeSzgwhiAsAogKW3tH32LKxqq+Lvz7fB2QnXFcuzGern/L1uOiwK8DOMff
         KC+JAFK0H1amqlmMjqXh/mrEr/79lPCbmxG+vbhSxjq1+1oD1cBhR+bYhzzgPPnGKHVb
         zMq0wTs2Qirjl9RZJwIsO7jt5lFMDm81WF6vkW32dFZyZr7m1Wi75tKScZR0FvxoIevC
         JbKGxRZ33I641vSw79zU24Pw4xah9jKJdC8BdgRztyAk0FHVcjRWLLDMeSUnBsVSJZjg
         zjKg==
X-Gm-Message-State: AC+VfDwoiaAFTRh4PCh8t6gyK5Sn9UA+YHT02eYU+dTvAO3ecq5sQ1lp
	mJjZA05ABZGJNmhNSfocsDlItsdPRQBmTg==
X-Google-Smtp-Source: ACHHUZ4cUrGzzAXtFYmG5Rt72zlr1Ku9VNItY9WpWFz1hTE12ha2fk+XvRIynyT8hJNvgfTr9E++sw==
X-Received: by 2002:a25:aa03:0:b0:ba8:5fd6:e657 with SMTP id s3-20020a25aa03000000b00ba85fd6e657mr7836976ybi.49.1685568706939;
        Wed, 31 May 2023 14:31:46 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Arnd Bergmann <arnd@arndb.de>
Subject: [PATCH v3 19/34] pgalloc: Convert various functions to use ptdescs
Date: Wed, 31 May 2023 14:30:17 -0700
Message-Id: <20230531213032.25338-20-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use pagetable_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/asm-generic/pgalloc.h | 62 +++++++++++++++++++++--------------
 1 file changed, 37 insertions(+), 25 deletions(-)

diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index a7cf825befae..1e5799ba2e56 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -18,7 +18,11 @@
  */
 static inline pte_t *__pte_alloc_one_kernel(struct mm_struct *mm)
 {
-	return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL);
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL, 0);
+
+	if (!ptdesc)
+		return NULL;
+	return (pte_t *)ptdesc_address(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
@@ -41,7 +45,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
  */
 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	free_page((unsigned long)pte);
+	pagetable_free(virt_to_ptdesc(pte));
 }
 
 /**
@@ -49,7 +53,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
  * @mm: the mm_struct of the current context
  * @gfp: GFP flags to use for the allocation
  *
- * Allocates a page and runs the pgtable_pte_page_ctor().
+ * Allocates a ptdesc and runs the pagetable_pte_ctor().
  *
  * This function is intended for architectures that need
  * anything beyond simple page allocation or must have custom GFP flags.
@@ -58,17 +62,17 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
  */
 static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
 {
-	struct page *pte;
+	struct ptdesc *ptdesc;
 
-	pte = alloc_page(gfp);
-	if (!pte)
+	ptdesc = pagetable_alloc(gfp, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(pte)) {
-		__free_page(pte);
+	if (!pagetable_pte_ctor(ptdesc)) {
+		pagetable_free(ptdesc);
 		return NULL;
 	}
 
-	return pte;
+	return ptdesc_page(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PTE_ALLOC_ONE
@@ -76,7 +80,7 @@ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
  * pte_alloc_one - allocate a page for PTE-level user page table
  * @mm: the mm_struct of the current context
  *
- * Allocates a page and runs the pgtable_pte_page_ctor().
+ * Allocates a ptdesc and runs the pagetable_pte_ctor().
  *
  * Return: `struct page` initialized as page table or %NULL on error
  */
@@ -98,8 +102,10 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
  */
 static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
 {
-	pgtable_pte_page_dtor(pte_page);
-	__free_page(pte_page);
+	struct ptdesc *ptdesc = page_ptdesc(pte_page);
+
+	pagetable_pte_dtor(ptdesc);
+	pagetable_free(ptdesc);
 }
 
 
@@ -110,7 +116,7 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
  * pmd_alloc_one - allocate a page for PMD-level page table
  * @mm: the mm_struct of the current context
  *
- * Allocates a page and runs the pgtable_pmd_page_ctor().
+ * Allocates a ptdesc and runs the pagetable_pmd_ctor().
  * Allocations use %GFP_PGTABLE_USER in user context and
  * %GFP_PGTABLE_KERNEL in kernel context.
  *
@@ -118,28 +124,30 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
  */
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	gfp_t gfp = GFP_PGTABLE_USER;
 
 	if (mm == &init_mm)
 		gfp = GFP_PGTABLE_KERNEL;
-	page = alloc_page(gfp);
-	if (!page)
+	ptdesc = pagetable_alloc(gfp, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pmd_page_ctor(page)) {
-		__free_page(page);
+	if (!pagetable_pmd_ctor(ptdesc)) {
+		pagetable_free(ptdesc);
 		return NULL;
 	}
-	return (pmd_t *)page_address(page);
+	return (pmd_t *)ptdesc_address(ptdesc);
 }
 #endif
 
 #ifndef __HAVE_ARCH_PMD_FREE
 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
+
 	BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
-	free_page((unsigned long)pmd);
+	pagetable_pmd_dtor(ptdesc);
+	pagetable_free(ptdesc);
 }
 #endif
 
@@ -149,11 +157,15 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 
 static inline pud_t *__pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-	gfp_t gfp = GFP_PGTABLE_USER;
+	gfp_t gfp = GFP_PGTABLE_USER | __GFP_ZERO;
+	struct ptdesc *ptdesc;
 
 	if (mm == &init_mm)
 		gfp = GFP_PGTABLE_KERNEL;
-	return (pud_t *)get_zeroed_page(gfp);
+	ptdesc = pagetable_alloc(gfp, 0);
+	if (!ptdesc)
+		return NULL;
+	return (pud_t *)ptdesc_address(ptdesc);
 }
 
 #ifndef __HAVE_ARCH_PUD_ALLOC_ONE
@@ -175,7 +187,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 static inline void __pud_free(struct mm_struct *mm, pud_t *pud)
 {
 	BUG_ON((unsigned long)pud & (PAGE_SIZE-1));
-	free_page((unsigned long)pud);
+	pagetable_free(virt_to_ptdesc(pud));
 }
 
 #ifndef __HAVE_ARCH_PUD_FREE
@@ -190,7 +202,7 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 #ifndef __HAVE_ARCH_PGD_FREE
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_page((unsigned long)pgd);
+	pagetable_free(virt_to_ptdesc(pgd));
 }
 #endif
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542046.845512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVg-0006Dd-8u; Wed, 31 May 2023 21:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542046.845512; Wed, 31 May 2023 21:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVf-0006Bd-WE; Wed, 31 May 2023 21:37:36 +0000
Received: by outflank-mailman (input) for mailman id 542046;
 Wed, 31 May 2023 21:37:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQQ-0006zB-L6
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:10 +0000
Received: from mail-yw1-x112d.google.com (mail-yw1-x112d.google.com
 [2607:f8b0:4864:20::112d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 98ff2382-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:32:09 +0200 (CEST)
Received: by mail-yw1-x112d.google.com with SMTP id
 00721157ae682-565ba53f434so709207b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:32:09 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.32.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:32:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 98ff2382-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568729; x=1688160729;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=viYddHqy4Zdgp0J9O3iaWpIktRFOpc+3TNsEHN2SX9A=;
        b=hVqCB7aFrL5Nthi8pAlbjPvcInjFZt1qHkddABmx8gtVThNjktDfpfcNDFDlw5CG2+
         3xRmURAs/k2Nz9kNjLGLhiem3PMoX9/E6HDghH1XQlgENBdym8j3RSqFBZDSSy/b34yh
         AE2QmRwS6oPUlHWjPpeT/6u7AJu6WV2M048WkEp+vu4WP+h6zxRkbzSalddZaQ9VOuvn
         LGPsCzdTJ+5x5K0wsh8kq5o2Vj7pKu44k/mXiACQfaNUMTBzpYonaoHUeQuALtzvZCNz
         9M8tSd6AM43Q88HJNwRerrYrxc+HBnHCgqChwQ1i4KI3ds/XwizMZlg5u8MI061tV0gV
         yEdA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568729; x=1688160729;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=viYddHqy4Zdgp0J9O3iaWpIktRFOpc+3TNsEHN2SX9A=;
        b=lvsCKOhxudiyLMG8ZFYopz6IoqdeddTGhhcJDITvrgl2Q+eisv/zt1AM8Lt/LD347k
         ZpRvn4HkaMic9qnZAcraz2R3l987W8GvhsPI4cux/KZluwSv5JmovDf4KFYzmRafKl+B
         WUqT/EAbaLVyV1Errz8sDQQjPyU/qsDmm4lsG1OU+x79poD0pJwu+LRc5CNsKlUkXhtK
         vdoVjthDZvt58heYWOx/7tpNYXqbN4GB3bztYp4ownvvge8LhM8ZmKzjvCrVNbFARmPW
         YrUNnKrXLWoCKeToULj5oHRFWNvaf1RPhy1xgsEMiek5T7p/78IR2emR3v+A+kWPKvkK
         5XtQ==
X-Gm-Message-State: AC+VfDxxleNsOJeqqpyK3kqBftOFSCOdLyinAErm0JfyYWCkQu0X4O5F
	x9U6Yp+Xn7oxbAEYvyWhul0=
X-Google-Smtp-Source: ACHHUZ7dIRO2Itz1UIyUgFPRBSeGYhHu2aqfX+t+ZClzgP9Es2W2C4cR0F0LltUImCNhGQXKVAT26w==
X-Received: by 2002:a81:48ce:0:b0:565:62eb:db6 with SMTP id v197-20020a8148ce000000b0056562eb0db6mr28101ywa.42.1685568728711;
        Wed, 31 May 2023 14:32:08 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>
Subject: [PATCH v3 30/34] sh: Convert pte_free_tlb() to use ptdescs
Date: Wed, 31 May 2023 14:30:28 -0700
Message-Id: <20230531213032.25338-31-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents. Also cleans up some spacing issues.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/sh/include/asm/pgalloc.h | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h
index a9e98233c4d4..5d8577ab1591 100644
--- a/arch/sh/include/asm/pgalloc.h
+++ b/arch/sh/include/asm/pgalloc.h
@@ -2,6 +2,7 @@
 #ifndef __ASM_SH_PGALLOC_H
 #define __ASM_SH_PGALLOC_H
 
+#include <linux/mm.h>
 #include <asm/page.h>
 
 #define __HAVE_ARCH_PMD_ALLOC_ONE
@@ -31,10 +32,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 	set_pmd(pmd, __pmd((unsigned long)page_address(pte)));
 }
 
-#define __pte_free_tlb(tlb,pte,addr)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), (pte));			\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	pagetable_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #endif /* __ASM_SH_PGALLOC_H */
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542047.845518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVg-0006KN-K9; Wed, 31 May 2023 21:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542047.845518; Wed, 31 May 2023 21:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVg-0006I0-9e; Wed, 31 May 2023 21:37:36 +0000
Received: by outflank-mailman (input) for mailman id 542047;
 Wed, 31 May 2023 21:37:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQO-0006xu-Dd
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:08 +0000
Received: from mail-yw1-x112b.google.com (mail-yw1-x112b.google.com
 [2607:f8b0:4864:20::112b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 97d02189-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:32:07 +0200 (CEST)
Received: by mail-yw1-x112b.google.com with SMTP id
 00721157ae682-565bdae581eso792347b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:32:07 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.32.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:32:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97d02189-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568727; x=1688160727;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lnBVr922X+M85c+6VJKDI+Caz3ny3LZ2fwcpIuiP1/4=;
        b=XZ3kJddTkoKufk0kwBdOVZQZoSVtcE5HG6sX6rRRHvCe7Om9x7xY0K4IiCL0nHsHxQ
         9eGr6MZvVeUtc5fgyKg5P1qBSWAOsfbcmSPQlXFPghUVu6fp2sIDmm+5U3bq4g7lRHpB
         Zld4cy93/rMGN07+QSO9q6jJf8+GwNN3p+hTYG0BRIQZI6tNs0RI+r52u0EH3fUAXWUT
         KW3t0eM8fAoeH75fwVt/7iugc0BvPA69n2m/eo4szwTaUcjtnQ0rOZaZxJ3/gvL6spZ8
         hnUdu2Pp/aCFMWHQdwhDEFfy8i7DytzgyU5VaM/fnvmpMyEanQ53VlCHmZFkS4f807YX
         H9xg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568727; x=1688160727;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=lnBVr922X+M85c+6VJKDI+Caz3ny3LZ2fwcpIuiP1/4=;
        b=NuA7cfU3HB93Sn3lfk/nUBbqd4lrwDr2dkkTjPO+Js0FO8pHUN8uhHWhV5/uvQJCx4
         r9jvMjpjOMS5WCntLGUV6mLUXgHH2p027012u5+lQBghx6qv7pffmCfVHOAfGlePHpoP
         IOMxs8cFeL1NBlRF93cuA+ITcLXxNJPMr4mva9+sXLjY/qnkAg8HbskYwHYYbkXKN2JZ
         HU5rvHlbjjHcIYPgnnujmP4xv6qX/a5q73v1fgXqP0IeAOznKYR5O2R0dEQuntjb4x2B
         g+IiIPlA3xmU0xxHBhUgYKKaKY4iFwRSABvoMEmSdCC+Ym5iBEEL+TLs0Jt5wCklp3Cr
         HYgw==
X-Gm-Message-State: AC+VfDyWYwGZoPaM2sPX83sw2ReqnGWLdIzyL0s/Eyr2jLUDwtVMyPRt
	iQ1Z90Xn0y7lBCyghTobxlw=
X-Google-Smtp-Source: ACHHUZ5mU9zEYCdDSqHlmf5fji+LZR6GnrVvINr+QLg2em3BFf1b9EHyVp5oT6zaLyXHcWvIl6C56g==
X-Received: by 2002:a0d:e212:0:b0:55d:a4fb:864a with SMTP id l18-20020a0de212000000b0055da4fb864amr6942187ywe.14.1685568726728;
        Wed, 31 May 2023 14:32:06 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@rivosinc.com>
Subject: [PATCH v3 29/34] riscv: Convert alloc_{pmd, pte}_late() to use ptdescs
Date: Wed, 31 May 2023 14:30:27 -0700
Message-Id: <20230531213032.25338-30-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use pagetable_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
---
 arch/riscv/include/asm/pgalloc.h |  8 ++++----
 arch/riscv/mm/init.c             | 16 ++++++----------
 2 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index 59dc12b5b7e8..d169a4f41a2e 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -153,10 +153,10 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-#define __pte_free_tlb(tlb, pte, buf)   \
-do {                                    \
-	pgtable_pte_page_dtor(pte);     \
-	tlb_remove_page((tlb), pte);    \
+#define __pte_free_tlb(tlb, pte, buf)			\
+do {							\
+	pagetable_pte_dtor(page_ptdesc(pte));		\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));\
 } while (0)
 #endif /* CONFIG_MMU */
 
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 2f7a7c345a6a..2fe6ca1b1f95 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -353,12 +353,10 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va)
 
 static phys_addr_t __init alloc_pte_late(uintptr_t va)
 {
-	unsigned long vaddr;
-
-	vaddr = __get_free_page(GFP_KERNEL);
-	BUG_ON(!vaddr || !pgtable_pte_page_ctor(virt_to_page((void *)vaddr)));
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, 0);
 
-	return __pa(vaddr);
+	BUG_ON(!ptdesc || !pagetable_pte_ctor(ptdesc));
+	return __pa((pte_t *)ptdesc_address(ptdesc));
 }
 
 static void __init create_pte_mapping(pte_t *ptep,
@@ -436,12 +434,10 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)
 
 static phys_addr_t __init alloc_pmd_late(uintptr_t va)
 {
-	unsigned long vaddr;
-
-	vaddr = __get_free_page(GFP_KERNEL);
-	BUG_ON(!vaddr || !pgtable_pmd_page_ctor(virt_to_page((void *)vaddr)));
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, 0);
 
-	return __pa(vaddr);
+	BUG_ON(!ptdesc || !pagetable_pmd_ctor(ptdesc));
+	return __pa((pmd_t *)ptdesc_address(ptdesc));
 }
 
 static void __init create_pmd_mapping(pmd_t *pmdp,
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:48 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542048.845525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVg-0006RG-Vf; Wed, 31 May 2023 21:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542048.845525; Wed, 31 May 2023 21:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVg-0006P9-L7; Wed, 31 May 2023 21:37:36 +0000
Received: by outflank-mailman (input) for mailman id 542048;
 Wed, 31 May 2023 21:37:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQW-0006zB-Hl
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:16 +0000
Received: from mail-yw1-x1132.google.com (mail-yw1-x1132.google.com
 [2607:f8b0:4864:20::1132])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9c7e113a-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:32:15 +0200 (CEST)
Received: by mail-yw1-x1132.google.com with SMTP id
 00721157ae682-561c1436c75so810627b3.1
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:32:15 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.32.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:32:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c7e113a-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568734; x=1688160734;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=cQ/uV998f7GBK3U8fEwXlOSI96PW+5wzkUNMHiphdSU=;
        b=Gco7uaZ8T0f7tmyth6uZeLtIanv6GpulpDiuQwIsnzqnPkjzJkIu0h1PZGQ4iQfI5A
         Wsvmi3eI4+D34lVcMDUsWRCETH5qcZJ6o4mHbMVq/3LTC/HNm7eQ6q2KfFVCA86f3d3I
         LxBFjI2qJ2MPH41lEvrCJnwlKDxe1mBh52b4jr4phbiXUV/rBKhdQfCzS+0QvKIx+CwJ
         I9nD3g0MgWbwsz1cldEhhb6oFNl81Iu/IWeIlG3kHhqR9NOaQpioUXIEm19YojgZVH7b
         YKNDEnmfXzRXtSrjS8AB8ePFyFPpDGIfkcu+Z1fbAVxcvdW9nHAWNVZ44Hj2Se7PAnUx
         U49A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568734; x=1688160734;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=cQ/uV998f7GBK3U8fEwXlOSI96PW+5wzkUNMHiphdSU=;
        b=HueF+KiXFhMxRS5gTYVvNzu9LlxgYXw7Roe1g/0xYmKJM6kDA6CqBmma8MG2c4/h0k
         HzQDE5KMsooVhzHZsaxOIJa/gk3/fLu/uGZs7mhRAZ/SPxM0yQC2dVJtioED91zgkYup
         qCgkBN/4ePh8I2PDT2zy5QoKGleD9syPbWSoKcOnfr1rko3nOrKFD+o+W0j6WTSu53wH
         AuTdocJ07Ku3Y1bSHISsffQWDeiy3WUVGwLfXuUhRs3GajA7/p7Hz/WCaqXyfWcfeJX1
         Tx8XaCstRrJbVsDmHnPcda4anVna6/MXBgdcR2w10ATQzxAe1noLRDiCK5IM97vxWPR9
         qM5w==
X-Gm-Message-State: AC+VfDxvFaigeimJFc3URKhX4wwNBELZ0FjOdVYkdwiJpMKq9rEOqJxy
	ciLzK5Ba6FUPzweLMvBSCBx1tot5CVR9gQ==
X-Google-Smtp-Source: ACHHUZ5gIOI1fuO3+omWI3ZFi04M+ls0ECM5yJW/hGOTfIhuUEs0QHIWlyyE93+ta0AMRd/Bn9r+fw==
X-Received: by 2002:a0d:db15:0:b0:562:6c0:c4d3 with SMTP id d21-20020a0ddb15000000b0056206c0c4d3mr7636270ywe.13.1685568734582;
        Wed, 31 May 2023 14:32:14 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Richard Weinberger <richard@nod.at>
Subject: [PATCH v3 33/34] um: Convert {pmd,pte}_free_tlb() to use ptdescs
Date: Wed, 31 May 2023 14:30:31 -0700
Message-Id: <20230531213032.25338-34-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert {pmd,pte}_free_tlb() to use ptdescs. This also
cleans up some spacing issues.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/um/include/asm/pgalloc.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h
index 8ec7cd46dd96..de5e31c64793 100644
--- a/arch/um/include/asm/pgalloc.h
+++ b/arch/um/include/asm/pgalloc.h
@@ -25,19 +25,19 @@
  */
 extern pgd_t *pgd_alloc(struct mm_struct *);
 
-#define __pte_free_tlb(tlb,pte, address)		\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb),(pte));			\
+#define __pte_free_tlb(tlb, pte, address)			\
+do {								\
+	pagetable_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #ifdef CONFIG_3_LEVEL_PGTABLES
 
-#define __pmd_free_tlb(tlb, pmd, address)		\
-do {							\
-	pgtable_pmd_page_dtor(virt_to_page(pmd));	\
-	tlb_remove_page((tlb),virt_to_page(pmd));	\
-} while (0)						\
+#define __pmd_free_tlb(tlb, pmd, address)			\
+do {								\
+	pagetable_pmd_dtor(virt_to_ptdesc(pmd));			\
+	tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd));	\
+} while (0)
 
 #endif
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542043.845501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVf-00066n-Id; Wed, 31 May 2023 21:37:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542043.845501; Wed, 31 May 2023 21:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVf-00066g-Fn; Wed, 31 May 2023 21:37:35 +0000
Received: by outflank-mailman (input) for mailman id 542043;
 Wed, 31 May 2023 21:37:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQH-0006xu-K7
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:01 +0000
Received: from mail-yw1-x1133.google.com (mail-yw1-x1133.google.com
 [2607:f8b0:4864:20::1133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 91e7afc7-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:57 +0200 (CEST)
Received: by mail-yw1-x1133.google.com with SMTP id
 00721157ae682-565c9109167so788487b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:57 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91e7afc7-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568717; x=1688160717;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GKvAry2ucxH/XDx2KcG3A/KGUGQDtp2kelP1IwoS25g=;
        b=Y3FtUHVtc1fwM5WsLpUiDBg6/rzXML83Sf1SpHo45QrbJM4hkZ47EVZ9Jzx+YqMx//
         BHSo5V7SZU5qK6dRGkjND3aVt+qVlirBx9EWbYVMgnm+L6iMxxMix1wZ5PVLi/ijI+Ey
         kjhf04CYPPVWH8GpyPUK1r6JzDlEPiRQAM9NSrBfw3+VwNYXW3xAcUEWGbHBzHelo4Pu
         bF0T8t1NtshEajz0mqHjS6WkCiELQh858kC5qsY7VIk5maPoFZeAKP9V+myVulhgPyK+
         zsaBkoumThJ6EqDdX5m4PI3pGkx/pRFHiZHFkTEVJ/iNtjtCRgnvofJL1HddDTqxb4fu
         tjHw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568717; x=1688160717;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=GKvAry2ucxH/XDx2KcG3A/KGUGQDtp2kelP1IwoS25g=;
        b=DA6QZmFRpKc3EMBXsG6UnsrqCxvA13Pc7/qcg3EsMaI8MqAvc/oy++AFYtVN02niic
         DZ17uRO4PY4J1pOrLqUwDApLj0LowMIHe+NyPcJR5jUsL8MnOOzWsZKXjuz6zz5e45rR
         GnZKKYuS6XXW8XNeGq4MQTx62mmB6W6EqIgcsrKQsHmyLKIgRRz48r/pzWGBNcc16Gv2
         ZIufiAv+4Fjl4kCWIHzIllaJDpX0NfjtUfOsgNtpTCbc2As7dnB9AFtUaJOj9jDg9RDj
         8Vz/kKLfUUwf6LD+zm/9KS1UZ5qbCa09XHTvWxdc33Ie+W93wQLG+rcGvCISwR54+pxI
         AnLg==
X-Gm-Message-State: AC+VfDz6vPp6JfK4fm9136ZB+jOU5SYr20up1oMLSfW9IO3gNs0VOU/q
	YlauOouNA9sBNRAqq5fF/cQ=
X-Google-Smtp-Source: ACHHUZ4onSH62Fs1UXCoMXHHySj60ubOs649jT/DcV3u5CJS2oOZodlnCpjMB+cFBzv/roEimAjFGg==
X-Received: by 2002:a0d:df10:0:b0:55a:6100:c0e6 with SMTP id i16-20020a0ddf10000000b0055a6100c0e6mr6709224ywe.47.1685568716762;
        Wed, 31 May 2023 14:31:56 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Huacai Chen <chenhuacai@kernel.org>
Subject: [PATCH v3 24/34] loongarch: Convert various functions to use ptdescs
Date: Wed, 31 May 2023 14:30:22 -0700
Message-Id: <20230531213032.25338-25-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use pagetable_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/loongarch/include/asm/pgalloc.h | 27 +++++++++++++++------------
 arch/loongarch/mm/pgtable.c          |  7 ++++---
 2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h
index af1d1e4a6965..5f5afbf1f10c 100644
--- a/arch/loongarch/include/asm/pgalloc.h
+++ b/arch/loongarch/include/asm/pgalloc.h
@@ -45,9 +45,9 @@ extern void pagetable_init(void);
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
 
 #define __pte_free_tlb(tlb, pte, address)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), pte);			\
+do {								\
+	pagetable_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));	\
 } while (0)
 
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -55,18 +55,18 @@ do {							\
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pmd_t *pmd;
-	struct page *pg;
+	struct ptdesc *ptdesc;
 
-	pg = alloc_page(GFP_KERNEL_ACCOUNT);
-	if (!pg)
+	ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, 0);
+	if (!ptdesc)
 		return NULL;
 
-	if (!pgtable_pmd_page_ctor(pg)) {
-		__free_page(pg);
+	if (!pagetable_pmd_ctor(ptdesc)) {
+		pagetable_free(ptdesc);
 		return NULL;
 	}
 
-	pmd = (pmd_t *)page_address(pg);
+	pmd = (pmd_t *)ptdesc_address(ptdesc);
 	pmd_init(pmd);
 	return pmd;
 }
@@ -80,10 +80,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pud_t *pud;
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, 0);
 
-	pud = (pud_t *) __get_free_page(GFP_KERNEL);
-	if (pud)
-		pud_init(pud);
+	if (!ptdesc)
+		return NULL;
+	pud = (pud_t *)ptdesc_address(ptdesc);
+
+	pud_init(pud);
 	return pud;
 }
 
diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
index 36a6dc0148ae..cdba10ffc0df 100644
--- a/arch/loongarch/mm/pgtable.c
+++ b/arch/loongarch/mm/pgtable.c
@@ -11,10 +11,11 @@
 
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-	pgd_t *ret, *init;
+	pgd_t *init, *ret = NULL;
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, 0);
 
-	ret = (pgd_t *) __get_free_page(GFP_KERNEL);
-	if (ret) {
+	if (ptdesc) {
+		ret = (pgd_t *)ptdesc_address(ptdesc);
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init(ret);
 		memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:49 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542053.845556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVj-0007Lr-Gd; Wed, 31 May 2023 21:37:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542053.845556; Wed, 31 May 2023 21:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVj-0007KG-5T; Wed, 31 May 2023 21:37:39 +0000
Received: by outflank-mailman (input) for mailman id 542053;
 Wed, 31 May 2023 21:37:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQJ-0006zB-Od
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:03 +0000
Received: from mail-yw1-x1136.google.com (mail-yw1-x1136.google.com
 [2607:f8b0:4864:20::1136])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 94408fb0-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:32:02 +0200 (CEST)
Received: by mail-yw1-x1136.google.com with SMTP id
 00721157ae682-565a3cdba71so1033807b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:32:02 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:32:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94408fb0-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568721; x=1688160721;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=BVDfvhMIj7WJxaEIn3M83QWohhaKgCSnIk9ScAfRPU4=;
        b=sZVMueiD/ZRZMwtJ1uBmpVp21br88K7Ts0BJnaG9K42MiUpWTkvlqtGrjbXbMcnnig
         ltism/PzSzsT+R/to8b3yQRLDRn38PbBDdqNHRciyJGDmOy9mh7D1lXAJvcHy0PMN5jp
         su04PaNazxgOqRipoXzoczHkJmES+frsRN0f23ykUUyQqSuCIK+AAi0RHbkHHQVi5qyL
         3FwLmSeStb28bpTQ1/hpnTMSIeCoZ802rEkgE91uQZODe47s5OoFtBuE2chhEZiF5V8U
         obYLBCRSZ+tlt5wA8er/wOg+Jr+QyZpX7NHf8eGMHCX/qwho850Ox9fTY+JInMAzI852
         Fi7Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568721; x=1688160721;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=BVDfvhMIj7WJxaEIn3M83QWohhaKgCSnIk9ScAfRPU4=;
        b=ElfXBQYSnmh1QNh6rsV768R53Cb8un+cmicHDna+n1XdaWyQaB+oKbZi8GKIHgUdZM
         k1hV6Dj+S1+SAfGDJBixmcKBbvYaCRrdGka4eFfhwLJspfSy+tsswmW572blglQO9F6K
         4XyU6D99XV7nhl08hcv51/kzcgmx/ChYJr+6OF7n1BJP5NnyzKU5EXSe2AZX5fjQo+3S
         ZSRjBP6B14W88+YOIKQ8NY/Q9BJ+eG1OMaQQ204pdGyEi3ePa2UDaSzSnrSpB+HcINlz
         ngsnhwMMHoV/apPXcSoh53Ems512InjOK48Wq+Qnn/o5BVdOH4BcZJXLCi49OY2sYlUE
         RD2Q==
X-Gm-Message-State: AC+VfDyjaUGuIL6SQw/ctsi1QqSiwq6qxqnNHLgNiRWsWCKXIsw48XQm
	iBAQC+KyCf3tAf3MqM+PQhg=
X-Google-Smtp-Source: ACHHUZ5JiP1u+Csf8JFW+4lOHEoJa4kJJO4bSkCJf3taGF1GbHqDT+hHsEbp8gGuVkWnLydQNajjtA==
X-Received: by 2002:a0d:cc47:0:b0:568:e7e6:4199 with SMTP id o68-20020a0dcc47000000b00568e7e64199mr4171591ywd.6.1685568720746;
        Wed, 31 May 2023 14:32:00 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Subject: [PATCH v3 26/34] mips: Convert various functions to use ptdescs
Date: Wed, 31 May 2023 14:30:24 -0700
Message-Id: <20230531213032.25338-27-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use pagetable_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/mips/include/asm/pgalloc.h | 31 +++++++++++++++++--------------
 arch/mips/mm/pgtable.c          |  7 ++++---
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
index f72e737dda21..3ba1fdb06502 100644
--- a/arch/mips/include/asm/pgalloc.h
+++ b/arch/mips/include/asm/pgalloc.h
@@ -51,13 +51,13 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm);
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_pages((unsigned long)pgd, PGD_TABLE_ORDER);
+	pagetable_free(virt_to_ptdesc(pgd));
 }
 
-#define __pte_free_tlb(tlb,pte,address)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), pte);			\
+#define __pte_free_tlb(tlb, pte, address)			\
+do {								\
+	pagetable_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));	\
 } while (0)
 
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -65,18 +65,18 @@ do {							\
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pmd_t *pmd;
-	struct page *pg;
+	struct ptdesc *ptdesc;
 
-	pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);
-	if (!pg)
+	ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);
+	if (!ptdesc)
 		return NULL;
 
-	if (!pgtable_pmd_page_ctor(pg)) {
-		__free_pages(pg, PMD_TABLE_ORDER);
+	if (!pagetable_pmd_ctor(ptdesc)) {
+		pagetable_free(ptdesc);
 		return NULL;
 	}
 
-	pmd = (pmd_t *)page_address(pg);
+	pmd = (pmd_t *)ptdesc_address(ptdesc);
 	pmd_init(pmd);
 	return pmd;
 }
@@ -90,10 +90,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	pud_t *pud;
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, PUD_TABLE_ORDER);
 
-	pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_TABLE_ORDER);
-	if (pud)
-		pud_init(pud);
+	if (!ptdesc)
+		return NULL;
+	pud = (pud_t *)ptdesc_address(ptdesc);
+
+	pud_init(pud);
 	return pud;
 }
 
diff --git a/arch/mips/mm/pgtable.c b/arch/mips/mm/pgtable.c
index b13314be5d0e..6be3493d7722 100644
--- a/arch/mips/mm/pgtable.c
+++ b/arch/mips/mm/pgtable.c
@@ -10,10 +10,11 @@
 
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-	pgd_t *ret, *init;
+	pgd_t *init, *ret = NULL;
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, PGD_TABLE_ORDER);
 
-	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER);
-	if (ret) {
+	if (ptdesc) {
+		ret = (pgd_t *) ptdesc_address(ptdesc);
 		init = pgd_offset(&init_mm, 0UL);
 		pgd_init(ret);
 		memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:50 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542058.845562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVj-0007V5-UY; Wed, 31 May 2023 21:37:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542058.845562; Wed, 31 May 2023 21:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVj-0007TW-Ln; Wed, 31 May 2023 21:37:39 +0000
Received: by outflank-mailman (input) for mailman id 542058;
 Wed, 31 May 2023 21:37:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQ0-0006xu-Gr
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:44 +0000
Received: from mail-yw1-x1131.google.com (mail-yw1-x1131.google.com
 [2607:f8b0:4864:20::1131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 887f6898-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:42 +0200 (CEST)
Received: by mail-yw1-x1131.google.com with SMTP id
 00721157ae682-565cdb77b01so924127b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:42 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 887f6898-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568701; x=1688160701;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2hmT3cFhGadOVawWrEtUkNd5SWK58SDjV2Pgih+1szI=;
        b=qCuT/++aTu5voiOP9lxbYa2wX1Gs7GIE5mehWhJHti5ThLraOVyVXWhmaJR+v1y54N
         IdQiU+NYuWik6bEUHkLb+g6oYQD3CK8mhDz9okE0XoL9PJ0a4IOIU4jm+WBCUbGhTi5J
         LPz/hx+cTj9Ik3l+Z/jIxTWrPQib0OpunlrmFAXw2EoLBC+PSBOIkFblIUU3Uldliuas
         34nU1GYaVtloDwpUq0BE+J83fN8diAshP9siyaUnXYIV03TEKnqBK0cGcqDqFHua09oq
         9NmL5CjMURKedbmhu8fIXa5Q5PYVs28f7s/ci56qHScpOoc9ZWUmg8pVIdqPilPQz7j4
         pqDA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568701; x=1688160701;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2hmT3cFhGadOVawWrEtUkNd5SWK58SDjV2Pgih+1szI=;
        b=e0xG6AnyKgOJoMEnJPuVuDyKdAkDdgCHjuurGyk/CRZ0Qp+GugiEk04t1Xf4GPMI9j
         IBFJSSMHjcP69Hfihz+JrpWyxeQ2CekR7R6pn9hDPHYBH9H++fVqUCSIIMxk3/etFBei
         e0wwHePvr+t1THD0dtxIRYiao0m5tv9gopIVUO191Lr4GXJRgTG/CXcf6uA4PvXGGMRV
         L0UjFX/5bg+5ozAkCUa1JxUnbYGH2f+PMsHZvFaqpG30oe+qBL3SPrbnjBgG9TorsGVT
         PYb/FO6DYXU3SyGhpvqv6LCFe8OZTEP77gno3gFGCo2mVtH4Ew4JCDQi7Iz6Sf2R+5kE
         W1iA==
X-Gm-Message-State: AC+VfDyXiRF7jpVUrohUd+DhrJ8TvqOPsnYG68E+HsRzV3GZtHEeEwQq
	LDE5THkR859Q/HQpjJIwp/4=
X-Google-Smtp-Source: ACHHUZ4sIXnW5weoNzN6tBrBSictW4o5z5lE7fK29Fh96qR3U+ZwXD72yfAHya2YAGrW5tYvlOMUrA==
X-Received: by 2002:a0d:df81:0:b0:561:bd01:9ff with SMTP id i123-20020a0ddf81000000b00561bd0109ffmr7413365ywe.28.1685568700953;
        Wed, 31 May 2023 14:31:40 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>
Subject: [PATCH v3 16/34] s390: Convert various gmap functions to use ptdescs
Date: Wed, 31 May 2023 14:30:14 -0700
Message-Id: <20230531213032.25338-17-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to split struct ptdesc from struct page, convert various
functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use pagetable_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/mm/gmap.c | 230 ++++++++++++++++++++++++--------------------
 1 file changed, 128 insertions(+), 102 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 81c683426b49..010e87df7299 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -34,7 +34,7 @@
 static struct gmap *gmap_alloc(unsigned long limit)
 {
 	struct gmap *gmap;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long *table;
 	unsigned long etype, atype;
 
@@ -67,12 +67,12 @@ static struct gmap *gmap_alloc(unsigned long limit)
 	spin_lock_init(&gmap->guest_table_lock);
 	spin_lock_init(&gmap->shadow_lock);
 	refcount_set(&gmap->ref_count, 1);
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		goto out_free;
-	page->_pt_s390_gaddr = 0;
-	list_add(&page->lru, &gmap->crst_list);
-	table = page_to_virt(page);
+	ptdesc->_pt_s390_gaddr = 0;
+	list_add(&ptdesc->pt_list, &gmap->crst_list);
+	table = ptdesc_to_virt(ptdesc);
 	crst_table_init(table, etype);
 	gmap->table = table;
 	gmap->asce = atype | _ASCE_TABLE_LENGTH |
@@ -181,25 +181,25 @@ static void gmap_rmap_radix_tree_free(struct radix_tree_root *root)
  */
 static void gmap_free(struct gmap *gmap)
 {
-	struct page *page, *next;
+	struct ptdesc *ptdesc, *next;
 
 	/* Flush tlb of all gmaps (if not already done for shadows) */
 	if (!(gmap_is_shadow(gmap) && gmap->removed))
 		gmap_flush_tlb(gmap);
 	/* Free all segment & region tables. */
-	list_for_each_entry_safe(page, next, &gmap->crst_list, lru) {
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+	list_for_each_entry_safe(ptdesc, next, &gmap->crst_list, pt_list) {
+		ptdesc->_pt_s390_gaddr = 0;
+		pagetable_free(ptdesc);
 	}
 	gmap_radix_tree_free(&gmap->guest_to_host);
 	gmap_radix_tree_free(&gmap->host_to_guest);
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
-		/* Free all page tables. */
-		list_for_each_entry_safe(page, next, &gmap->pt_list, lru) {
-			page->_pt_s390_gaddr = 0;
-			page_table_free_pgste(page);
+		/* Free all ptdesc tables. */
+		list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list) {
+			ptdesc->_pt_s390_gaddr = 0;
+			page_table_free_pgste(ptdesc_page(ptdesc));
 		}
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
@@ -308,27 +308,27 @@ EXPORT_SYMBOL_GPL(gmap_get_enabled);
 static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 			    unsigned long init, unsigned long gaddr)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long *new;
 
 	/* since we dont free the gmap table until gmap_free we can unlock */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	new = page_to_virt(page);
+	new = ptdesc_to_virt(ptdesc);
 	crst_table_init(new, init);
 	spin_lock(&gmap->guest_table_lock);
 	if (*table & _REGION_ENTRY_INVALID) {
-		list_add(&page->lru, &gmap->crst_list);
+		list_add(&ptdesc->pt_list, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
-		page->_pt_s390_gaddr = gaddr;
-		page = NULL;
+		ptdesc->_pt_s390_gaddr = gaddr;
+		ptdesc = NULL;
 	}
 	spin_unlock(&gmap->guest_table_lock);
-	if (page) {
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+	if (ptdesc) {
+		ptdesc->_pt_s390_gaddr = 0;
+		pagetable_free(ptdesc);
 	}
 	return 0;
 }
@@ -341,15 +341,15 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
  */
 static unsigned long __gmap_segment_gaddr(unsigned long *entry)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long offset, mask;
 
 	offset = (unsigned long) entry / sizeof(unsigned long);
 	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
 	mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
-	page = virt_to_page((void *)((unsigned long) entry & mask));
+	ptdesc = virt_to_ptdesc((void *)((unsigned long) entry & mask));
 
-	return page->_pt_s390_gaddr + offset;
+	return ptdesc->_pt_s390_gaddr + offset;
 }
 
 /**
@@ -1345,6 +1345,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	unsigned long *ste;
 	phys_addr_t sto, pgt;
 	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	ste = gmap_table_walk(sg, raddr, 1); /* get segment pointer */
@@ -1358,9 +1359,11 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 	/* Free page table */
 	page = phys_to_page(pgt);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	page_table_free_pgste(page);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	page_table_free_pgste(ptdesc_page(ptdesc));
 }
 
 /**
@@ -1374,9 +1377,10 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 				unsigned long *sgt)
 {
-	struct page *page;
 	phys_addr_t pgt;
 	int i;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	for (i = 0; i < _CRST_ENTRIES; i++, raddr += _SEGMENT_SIZE) {
@@ -1387,9 +1391,11 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_pgt(sg, raddr, __va(pgt));
 		/* Free page table */
 		page = phys_to_page(pgt);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		page_table_free_pgste(page);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		page_table_free_pgste(ptdesc_page(ptdesc));
 	}
 }
 
@@ -1405,6 +1411,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	unsigned long r3o, *r3e;
 	phys_addr_t sgt;
 	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r3e = gmap_table_walk(sg, raddr, 2); /* get region-3 pointer */
@@ -1418,9 +1425,11 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 	/* Free segment table */
 	page = phys_to_page(sgt);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	pagetable_free(ptdesc);
 }
 
 /**
@@ -1434,9 +1443,10 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 				unsigned long *r3t)
 {
-	struct page *page;
 	phys_addr_t sgt;
 	int i;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION3_SIZE) {
@@ -1447,9 +1457,11 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 		/* Free segment table */
 		page = phys_to_page(sgt);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		pagetable_free(ptdesc);
 	}
 }
 
@@ -1465,6 +1477,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	unsigned long r2o, *r2e;
 	phys_addr_t r3t;
 	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r2e = gmap_table_walk(sg, raddr, 3); /* get region-2 pointer */
@@ -1478,9 +1491,11 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 	/* Free region 3 table */
 	page = phys_to_page(r3t);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	pagetable_free(ptdesc);
 }
 
 /**
@@ -1495,8 +1510,9 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 				unsigned long *r2t)
 {
 	phys_addr_t r3t;
-	struct page *page;
 	int i;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION2_SIZE) {
@@ -1507,9 +1523,11 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 		__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 		/* Free region 3 table */
 		page = phys_to_page(r3t);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		pagetable_free(ptdesc);
 	}
 }
 
@@ -1525,6 +1543,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	unsigned long r1o, *r1e;
 	struct page *page;
 	phys_addr_t r2t;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r1e = gmap_table_walk(sg, raddr, 4); /* get region-1 pointer */
@@ -1538,9 +1557,11 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	__gmap_unshadow_r2t(sg, raddr, __va(r2t));
 	/* Free region 2 table */
 	page = phys_to_page(r2t);
-	list_del(&page->lru);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+
+	ptdesc = page_ptdesc(page);
+	list_del(&ptdesc->pt_list);
+	ptdesc->_pt_s390_gaddr = 0;
+	pagetable_free(ptdesc);
 }
 
 /**
@@ -1558,6 +1579,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 	struct page *page;
 	phys_addr_t r2t;
 	int i;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	asce = __pa(r1t) | _ASCE_TYPE_REGION1;
@@ -1571,9 +1593,11 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 		r1t[i] = _REGION1_ENTRY_EMPTY;
 		/* Free region 2 table */
 		page = phys_to_page(r2t);
-		list_del(&page->lru);
-		page->_pt_s390_gaddr = 0;
-		__free_pages(page, CRST_ALLOC_ORDER);
+
+		ptdesc = page_ptdesc(page);
+		list_del(&ptdesc->pt_list);
+		ptdesc->_pt_s390_gaddr = 0;
+		pagetable_free(ptdesc);
 	}
 }
 
@@ -1770,18 +1794,18 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_r2t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	/* Allocate a shadow region second table */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_r2t = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_r2t = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 4); /* get region-1 pointer */
@@ -1802,7 +1826,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 		 _REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r2t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1830,8 +1854,8 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc->_pt_s390_gaddr = 0;
+	pagetable_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_r2t);
@@ -1855,18 +1879,18 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_r3t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	/* Allocate a shadow region second table */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_r3t = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_r3t = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 3); /* get region-2 pointer */
@@ -1887,7 +1911,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 		 _REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r3t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1915,8 +1939,8 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc->_pt_s390_gaddr = 0;
+	pagetable_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_r3t);
@@ -1940,18 +1964,18 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_sgt;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg) || (sgt & _REGION3_ENTRY_LARGE));
 	/* Allocate a shadow segment table */
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_sgt = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_sgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 2); /* get region-3 pointer */
@@ -1972,7 +1996,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 		 _REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= sgt & _REGION_ENTRY_PROTECT;
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -2000,8 +2024,8 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc->_pt_s390_gaddr = 0;
+	pagetable_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_sgt);
@@ -2024,8 +2048,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 			   int *fake)
 {
 	unsigned long *table;
-	struct page *page;
 	int rc;
+	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	spin_lock(&sg->guest_table_lock);
@@ -2033,9 +2058,10 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
 		/* Shadow page tables are full pages (pte+pgste) */
 		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
+		ptdesc = page_ptdesc(page);
+		*pgt = ptdesc->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
 		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
+		*fake = !!(ptdesc->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
 		rc = 0;
 	} else  {
 		rc = -EAGAIN;
@@ -2064,19 +2090,19 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 {
 	unsigned long raddr, origin;
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t s_pgt;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg) || (pgt & _SEGMENT_ENTRY_LARGE));
 	/* Allocate a shadow page table */
-	page = page_table_alloc_pgste(sg->mm);
-	if (!page)
+	ptdesc = page_ptdesc(page_table_alloc_pgste(sg->mm));
+	if (!ptdesc)
 		return -ENOMEM;
-	page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
+	ptdesc->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
-	s_pgt = page_to_phys(page);
+		ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
+	s_pgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */
@@ -2094,7 +2120,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	/* mark as invalid as long as the parent table is not protected */
 	*table = (unsigned long) s_pgt | _SEGMENT_ENTRY |
 		 (pgt & _SEGMENT_ENTRY_PROTECT) | _SEGMENT_ENTRY_INVALID;
-	list_add(&page->lru, &sg->pt_list);
+	list_add(&ptdesc->pt_list, &sg->pt_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_SEGMENT_ENTRY_INVALID;
@@ -2120,8 +2146,8 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	page->_pt_s390_gaddr = 0;
-	page_table_free_pgste(page);
+	ptdesc->_pt_s390_gaddr = 0;
+	page_table_free_pgste(ptdesc_page(ptdesc));
 	return rc;
 
 }
@@ -2814,11 +2840,11 @@ EXPORT_SYMBOL_GPL(__s390_uv_destroy_range);
  */
 void s390_unlist_old_asce(struct gmap *gmap)
 {
-	struct page *old;
+	struct ptdesc *old;
 
-	old = virt_to_page(gmap->table);
+	old = virt_to_ptdesc(gmap->table);
 	spin_lock(&gmap->guest_table_lock);
-	list_del(&old->lru);
+	list_del(&old->pt_list);
 	/*
 	 * Sometimes the topmost page might need to be "removed" multiple
 	 * times, for example if the VM is rebooted into secure mode several
@@ -2833,7 +2859,7 @@ void s390_unlist_old_asce(struct gmap *gmap)
 	 * pointers, so list_del can work (and do nothing) without
 	 * dereferencing stale or invalid pointers.
 	 */
-	INIT_LIST_HEAD(&old->lru);
+	INIT_LIST_HEAD(&old->pt_list);
 	spin_unlock(&gmap->guest_table_lock);
 }
 EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
@@ -2854,7 +2880,7 @@ EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
 int s390_replace_asce(struct gmap *gmap)
 {
 	unsigned long asce;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	void *table;
 
 	s390_unlist_old_asce(gmap);
@@ -2863,10 +2889,10 @@ int s390_replace_asce(struct gmap *gmap)
 	if ((gmap->asce & _ASCE_TYPE_MASK) == _ASCE_TYPE_SEGMENT)
 		return -EINVAL;
 
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return -ENOMEM;
-	table = page_to_virt(page);
+	table = ptdesc_to_virt(ptdesc);
 	memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
 
 	/*
@@ -2875,7 +2901,7 @@ int s390_replace_asce(struct gmap *gmap)
 	 * it will be freed when the VM is torn down.
 	 */
 	spin_lock(&gmap->guest_table_lock);
-	list_add(&page->lru, &gmap->crst_list);
+	list_add(&ptdesc->pt_list, &gmap->crst_list);
 	spin_unlock(&gmap->guest_table_lock);
 
 	/* Set new table origin while preserving existing ASCE control bits */
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:51 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:51 +0000
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>
Subject: [PATCH v3 21/34] arm64: Convert various functions to use ptdescs
Date: Wed, 31 May 2023 14:30:19 -0700
Message-Id: <20230531213032.25338-22-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/arm64/include/asm/tlb.h | 14 ++++++++------
 arch/arm64/mm/mmu.c          |  7 ++++---
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index c995d1f4594f..2c29239d05c3 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -75,18 +75,20 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
 				  unsigned long addr)
 {
-	pgtable_pte_page_dtor(pte);
-	tlb_remove_table(tlb, pte);
+	struct ptdesc *ptdesc = page_ptdesc(pte);
+
+	pagetable_pte_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
 }
 
 #if CONFIG_PGTABLE_LEVELS > 2
 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 				  unsigned long addr)
 {
-	struct page *page = virt_to_page(pmdp);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmdp);
 
-	pgtable_pmd_page_dtor(page);
-	tlb_remove_table(tlb, page);
+	pagetable_pmd_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
 }
 #endif
 
@@ -94,7 +96,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 				  unsigned long addr)
 {
-	tlb_remove_table(tlb, virt_to_page(pudp));
+	tlb_remove_ptdesc(tlb, virt_to_ptdesc(pudp));
 }
 #endif
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index af6bc8403ee4..5867a0e917b9 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -426,6 +426,7 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
 static phys_addr_t pgd_pgtable_alloc(int shift)
 {
 	phys_addr_t pa = __pgd_pgtable_alloc(shift);
+	struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa));
 
 	/*
 	 * Call proper page table ctor in case later we need to
@@ -433,12 +434,12 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 	 * this pre-allocated page table.
 	 *
 	 * We don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK if pmd is
-	 * folded, and if so pgtable_pmd_page_ctor() becomes nop.
+	 * folded, and if so pagetable_pmd_ctor() becomes nop.
 	 */
 	if (shift == PAGE_SHIFT)
-		BUG_ON(!pgtable_pte_page_ctor(phys_to_page(pa)));
+		BUG_ON(!pagetable_pte_ctor(ptdesc));
 	else if (shift == PMD_SHIFT)
-		BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa)));
+		BUG_ON(!pagetable_pmd_ctor(ptdesc));
 
 	return pa;
 }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:52 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:52 +0000
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 23/34] hexagon: Convert __pte_free_tlb() to use ptdescs
Date: Wed, 31 May 2023 14:30:21 -0700
Message-Id: <20230531213032.25338-24-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert hexagon's __pte_free_tlb() to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/hexagon/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h
index f0c47e6a7427..55988625e6fb 100644
--- a/arch/hexagon/include/asm/pgalloc.h
+++ b/arch/hexagon/include/asm/pgalloc.h
@@ -87,10 +87,10 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
 		max_kernel_seg = pmdindex;
 }
 
-#define __pte_free_tlb(tlb, pte, addr)		\
-do {						\
-	pgtable_pte_page_dtor((pte));		\
-	tlb_remove_page((tlb), (pte));		\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	pagetable_pte_dtor((page_ptdesc(pte)));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #endif
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:53 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:53 +0000
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>
Subject: [PATCH v3 17/34] s390: Convert various pgalloc functions to use ptdescs
Date: Wed, 31 May 2023 14:30:15 -0700
Message-Id: <20230531213032.25338-18-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of these functions use the *get*page*() helper functions. Convert
them to use pagetable_alloc() and ptdesc_address() instead, to further
standardize page table handling.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/s390/include/asm/pgalloc.h |   4 +-
 arch/s390/include/asm/tlb.h     |   4 +-
 arch/s390/mm/pgalloc.c          | 108 ++++++++++++++++----------------
 3 files changed, 59 insertions(+), 57 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 17eb618f1348..00ad9b88fda9 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -86,7 +86,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr)
 	if (!table)
 		return NULL;
 	crst_table_init(table, _SEGMENT_ENTRY_EMPTY);
-	if (!pgtable_pmd_page_ctor(virt_to_page(table))) {
+	if (!pagetable_pmd_ctor(virt_to_ptdesc(table))) {
 		crst_table_free(mm, table);
 		return NULL;
 	}
@@ -97,7 +97,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
 	if (mm_pmd_folded(mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	pagetable_pmd_dtor(virt_to_ptdesc(pmd));
 	crst_table_free(mm, (unsigned long *) pmd);
 }
 
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index b91f4a9b044c..383b1f91442c 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -89,12 +89,12 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
 {
 	if (mm_pmd_folded(tlb->mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	pagetable_pmd_dtor(virt_to_ptdesc(pmd));
 	__tlb_adjust_range(tlb, address, PAGE_SIZE);
 	tlb->mm->context.flush_mm = 1;
 	tlb->freed_tables = 1;
 	tlb->cleared_puds = 1;
-	tlb_remove_table(tlb, pmd);
+	tlb_remove_ptdesc(tlb, pmd);
 }
 
 /*
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 6b99932abc66..eeb7c95b98cf 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -43,17 +43,17 @@ __initcall(page_table_register_sysctl);
 
 unsigned long *crst_table_alloc(struct mm_struct *mm)
 {
-	struct page *page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
-	arch_set_page_dat(page, CRST_ALLOC_ORDER);
-	return (unsigned long *) page_to_virt(page);
+	arch_set_page_dat(ptdesc_page(ptdesc), CRST_ALLOC_ORDER);
+	return (unsigned long *) ptdesc_to_virt(ptdesc);
 }
 
 void crst_table_free(struct mm_struct *mm, unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	pagetable_free(virt_to_ptdesc(table));
 }
 
 static void __crst_table_upgrade(void *arg)
@@ -140,21 +140,21 @@ static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits)
 
 struct page *page_table_alloc_pgste(struct mm_struct *mm)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	u64 *table;
 
-	page = alloc_page(GFP_KERNEL);
-	if (page) {
-		table = (u64 *)page_to_virt(page);
+	ptdesc = pagetable_alloc(GFP_KERNEL, 0);
+	if (ptdesc) {
+		table = (u64 *)ptdesc_to_virt(ptdesc);
 		memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	}
-	return page;
+	return ptdesc_page(ptdesc);
 }
 
 void page_table_free_pgste(struct page *page)
 {
-	__free_page(page);
+	pagetable_free(page_ptdesc(page));
 }
 
 #endif /* CONFIG_PGSTE */
@@ -230,7 +230,7 @@ void page_table_free_pgste(struct page *page)
 unsigned long *page_table_alloc(struct mm_struct *mm)
 {
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned int mask, bit;
 
 	/* Try to get a fragment of a 4K page as a 2K page table */
@@ -238,9 +238,9 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		table = NULL;
 		spin_lock_bh(&mm->context.lock);
 		if (!list_empty(&mm->context.pgtable_list)) {
-			page = list_first_entry(&mm->context.pgtable_list,
-						struct page, lru);
-			mask = atomic_read(&page->pt_frag_refcount);
+			ptdesc = list_first_entry(&mm->context.pgtable_list,
+						struct ptdesc, pt_list);
+			mask = atomic_read(&ptdesc->pt_frag_refcount);
 			/*
 			 * The pending removal bits must also be checked.
 			 * Failure to do so might lead to an impossible
@@ -253,13 +253,13 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			 */
 			mask = (mask | (mask >> 4)) & 0x03U;
 			if (mask != 0x03U) {
-				table = (unsigned long *) page_to_virt(page);
+				table = (unsigned long *) ptdesc_to_virt(ptdesc);
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->pt_frag_refcount,
+				atomic_xor_bits(&ptdesc->pt_frag_refcount,
 							0x01U << bit);
-				list_del(&page->lru);
+				list_del(&ptdesc->pt_list);
 			}
 		}
 		spin_unlock_bh(&mm->context.lock);
@@ -267,27 +267,27 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			return table;
 	}
 	/* Allocate a fresh page */
-	page = alloc_page(GFP_KERNEL);
-	if (!page)
+	ptdesc = pagetable_alloc(GFP_KERNEL, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!pagetable_pte_ctor(ptdesc)) {
+		pagetable_free(ptdesc);
 		return NULL;
 	}
-	arch_set_page_dat(page, 0);
+	arch_set_page_dat(ptdesc_page(ptdesc), 0);
 	/* Initialize page table */
-	table = (unsigned long *) page_to_virt(page);
+	table = (unsigned long *) ptdesc_to_virt(ptdesc);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_xor_bits(&page->pt_frag_refcount, 0x01U);
+		atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x01U);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
-		list_add(&page->lru, &mm->context.pgtable_list);
+		list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		spin_unlock_bh(&mm->context.lock);
 	}
 	return table;
@@ -309,9 +309,8 @@ static void page_table_release_check(struct page *page, void *table,
 void page_table_free(struct mm_struct *mm, unsigned long *table)
 {
 	unsigned int mask, bit, half;
-	struct page *page;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
-	page = virt_to_page(table);
 	if (!mm_alloc_pgste(mm)) {
 		/* Free 2K page table fragment of a 4K page */
 		bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
@@ -321,39 +320,38 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		 * will happen outside of the critical section from this
 		 * function or from __tlb_remove_table()
 		 */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit);
 		if (mask & 0x03U)
-			list_add(&page->lru, &mm->context.pgtable_list);
+			list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		else
-			list_del(&page->lru);
+			list_del(&ptdesc->pt_list);
 		spin_unlock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x10U << bit);
 		if (mask != 0x00U)
 			return;
 		half = 0x01U << bit;
 	} else {
 		half = 0x03U;
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 	}
 
-	page_table_release_check(page, table, half, mask);
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	pagetable_pte_dtor(ptdesc);
+	pagetable_free(ptdesc);
 }
 
 void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 			 unsigned long vmaddr)
 {
 	struct mm_struct *mm;
-	struct page *page;
 	unsigned int bit, mask;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	mm = tlb->mm;
-	page = virt_to_page(table);
 	if (mm_alloc_pgste(mm)) {
 		gmap_unlink(mm, table, vmaddr);
 		table = (unsigned long *) ((unsigned long)table | 0x03U);
-		tlb_remove_table(tlb, table);
+		tlb_remove_ptdesc(tlb, table);
 		return;
 	}
 	bit = ((unsigned long) table & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
@@ -363,11 +361,11 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	 * outside of the critical section from __tlb_remove_table() or from
 	 * page_table_free()
 	 */
-	mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
+	mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit);
 	if (mask & 0x03U)
-		list_add_tail(&page->lru, &mm->context.pgtable_list);
+		list_add_tail(&ptdesc->pt_list, &mm->context.pgtable_list);
 	else
-		list_del(&page->lru);
+		list_del(&ptdesc->pt_list);
 	spin_unlock_bh(&mm->context.lock);
 	table = (unsigned long *) ((unsigned long) table | (0x01U << bit));
 	tlb_remove_table(tlb, table);
@@ -377,7 +375,7 @@ void __tlb_remove_table(void *_table)
 {
 	unsigned int mask = (unsigned long) _table & 0x03U, half = mask;
 	void *table = (void *)((unsigned long) _table ^ mask);
-	struct page *page = virt_to_page(table);
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	switch (half) {
 	case 0x00U:	/* pmd, pud, or p4d */
@@ -385,18 +383,18 @@ void __tlb_remove_table(void *_table)
 		return;
 	case 0x01U:	/* lower 2K of a 4K page table */
 	case 0x02U:	/* higher 2K of a 4K page table */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, mask << 4);
 		if (mask != 0x00U)
 			return;
 		break;
 	case 0x03U:	/* 4K page table with pgstes */
-		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
+		mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U);
 		break;
 	}
 
-	page_table_release_check(page, table, half, mask);
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	pagetable_pte_dtor(ptdesc);
+	pagetable_free(ptdesc);
 }
 
 /*
@@ -424,16 +422,20 @@ static void base_pgt_free(unsigned long *table)
 static unsigned long *base_crst_alloc(unsigned long val)
 {
 	unsigned long *table;
+	struct ptdesc *ptdesc;
 
-	table =	(unsigned long *)__get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
-	if (table)
-		crst_table_init(table, val);
+	ptdesc = pagetable_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);
+	if (!ptdesc)
+		return NULL;
+	table = ptdesc_address(ptdesc);
+
+	crst_table_init(table, val);
 	return table;
 }
 
 static void base_crst_free(unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	pagetable_free(virt_to_ptdesc(table));
 }
 
 #define BASE_ADDR_END_FUNC(NAME, SIZE)					\
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:55 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542069.845603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVo-0000Km-WC; Wed, 31 May 2023 21:37:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542069.845603; Wed, 31 May 2023 21:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVo-0000I2-Bi; Wed, 31 May 2023 21:37:44 +0000
Received: by outflank-mailman (input) for mailman id 542069;
 Wed, 31 May 2023 21:37:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQU-0006zB-Ff
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:14 +0000
Received: from mail-yw1-x1132.google.com (mail-yw1-x1132.google.com
 [2607:f8b0:4864:20::1132])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9b44bb20-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:32:13 +0200 (CEST)
Received: by mail-yw1-x1132.google.com with SMTP id
 00721157ae682-561c1436c75so810367b3.1
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:32:12 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.32.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:32:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b44bb20-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568732; x=1688160732;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jxwTfn+1WulDyktO0TSvPUm3DeZmyyKIR9rdY3Tk+MA=;
        b=L4jsN9kH5t72hj91BgcxrEjxGpFvV+qs5I3bkyZBlk1og0Opgec9qEsIK7FrraEHx2
         1n1HXytspMZRcqAu6IErIg00HRKEQlSU5ge+vaCFc1Gqlt4Qy7JKfOaB6sQJmWGwoAk1
         ee2W8IRVTtbl/idUF0z1LRlcPjUT7a6tY+VOsk0Iu1C+hZh22D5V07hzjpJteBVx3v1W
         ZI/MBAiirrh/1wAfUDPooe/ZARW8FaFD8bCYxc4nZtEdbMeu9sESBKz6utwpFa1EL+di
         5AtPWE2kRx/FV4woLekgALaQYhp02WG5GHPy9VCjhu9orzHIF3v5vTBp+/fCu2h+nW96
         ROmg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568732; x=1688160732;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=jxwTfn+1WulDyktO0TSvPUm3DeZmyyKIR9rdY3Tk+MA=;
        b=JXJoLW0OMGFckLzQhnFt1DnH3OCl5n91RKq3l6jfCZHyCgnJQTj7i93ZQOv3OkmNFK
         IE0druhIgGgtj879iP+c6AmEXrSj5c5UljW2pSUmQIWU9ih7sMfrZ2UasXseERcvEVAC
         vppttEC9HvCDrA7TEkVslQg/CLjBFKuwgIf4Aq3JlOZ8F0vmVuwQq3y7N9N9oxRinxwA
         /8ijlw050PZEKG+zlr9T+fyx9jtly7vsAb/QBgUptHxQ9O485L5Ie+gRSMXiYFkHt/2a
         rBAtcAUp/tcOPiv2tefozMmWkLJZ66w/rHR3tOoGPnkFlj5YwNg1lRq6lsfjHGf6cfEU
         mvMQ==
X-Gm-Message-State: AC+VfDxE+FdoYgxqY73vn3OFpOIrRQPboXfE6ywolP/2Bu9So1fyyWkF
	7SAlLIDekWB6v2GPEK6038o=
X-Google-Smtp-Source: ACHHUZ7+/nVY5dy5ww7MvRpp2ZVq2XpqSkLFoFJGuLgsQ62NeKnra3AVED33Ph7R+tcYIq7VRl7QLw==
X-Received: by 2002:a0d:eb83:0:b0:561:3fb7:1333 with SMTP id u125-20020a0deb83000000b005613fb71333mr7209246ywe.43.1685568732564;
        Wed, 31 May 2023 14:32:12 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: [PATCH v3 32/34] sparc: Convert pgtable_pte_page_{ctor, dtor}() to ptdesc equivalents
Date: Wed, 31 May 2023 14:30:30 -0700
Message-Id: <20230531213032.25338-33-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is part of the conversions to replace the pgtable PTE
constructor/destructors with their ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/sparc/mm/srmmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 13f027afc875..8393faa3e596 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -355,7 +355,8 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
 		return NULL;
 	page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
 	spin_lock(&mm->page_table_lock);
-	if (page_ref_inc_return(page) == 2 && !pgtable_pte_page_ctor(page)) {
+	if (page_ref_inc_return(page) == 2 &&
+			!pagetable_pte_ctor(page_ptdesc(page))) {
 		page_ref_dec(page);
 		ptep = NULL;
 	}
@@ -371,7 +372,7 @@ void pte_free(struct mm_struct *mm, pgtable_t ptep)
 	page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
 	spin_lock(&mm->page_table_lock);
 	if (page_ref_dec_return(page) == 1)
-		pgtable_pte_page_dtor(page);
+		pagetable_pte_dtor(page_ptdesc(page));
 	spin_unlock(&mm->page_table_lock);
 
 	srmmu_free_nocache(ptep, SRMMU_PTE_TABLE_SIZE);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:56 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542072.845609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVq-0000bR-6J; Wed, 31 May 2023 21:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542072.845609; Wed, 31 May 2023 21:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVp-0000XS-JT; Wed, 31 May 2023 21:37:45 +0000
Received: by outflank-mailman (input) for mailman id 542072;
 Wed, 31 May 2023 21:37:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQZ-0006zB-KG
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:19 +0000
Received: from mail-yw1-x112a.google.com (mail-yw1-x112a.google.com
 [2607:f8b0:4864:20::112a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9dbb48c1-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:32:17 +0200 (CEST)
Received: by mail-yw1-x112a.google.com with SMTP id
 00721157ae682-565c3aa9e82so788977b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:32:17 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.32.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:32:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9dbb48c1-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568736; x=1688160736;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0GxeAY3mx9RNm+XPUbEc94ZOjIpLa+PLxcZvSX9iQwI=;
        b=L93fUoh1n2yrqqKzpBnTktVcBXm2pCKywXktnW0z1yiLQvKxqvmxEchYgF8OV+6QTv
         0bzTf1A4OlAuxa/9dZerxpZsbwsI4TjqwMyAQcFuTYSuZJaeWyYJuEWFU5GYKhF/PssT
         BviFKPz7vL/Fi2iPZINEIUsA7v2Occp7X58VFjDz42JLNThFo7xdGCGk1w6ybVnEIlHs
         b72Ur/xMHtdJmjaw3M4Bie9pileXuOl23g9fXNIiRCDxPyqPbOwfW5Xk2DEao1nEkBYA
         Y/k6knxGTbApmWF3bdrPaXzT+XUQklhHF/wdKHs0p7V2Zyj9EYf6Jf8ACh3np5J3xStV
         TZFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568736; x=1688160736;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=0GxeAY3mx9RNm+XPUbEc94ZOjIpLa+PLxcZvSX9iQwI=;
        b=N81WaI8N2rGQz4tjvXC8STFMsc5toW8/Rg/xuTdzkeJiaQhu8pR8iO5zp9JRkVww39
         8FKuRzZO3n1k4RrP5TIuxjL+A50DpFyme+4nsE3tqZ9fKyIrTSZnvLIR6WLiFG+9LseS
         WFUEXChglithDOdyZkxDPzIiPx3ZbsYSFp8YclzlEo80WthFFvsr+7FgjZdBX+yLazwN
         nVskcvXZyi1dI8g8k4Zo4EDh8CV10hTU9WjH2rTWIoR6HXAJcqInkY0TGpuiXNiB0OYm
         9WrMa02A4Z9MrBaUvVHizYy/LbPdBTXBTWdC511jDjaqXwWrJIM7AtaQRwCrqiPnY6za
         b+nA==
X-Gm-Message-State: AC+VfDzLWTMTU/cvZZCiAgncLU/43qh62Zws8cmPTRwJ1uTLIPLvfSa+
	Xlv55jp12faArA6Ts6ZfURk=
X-Google-Smtp-Source: ACHHUZ5gNCrFPXlbEmiVTeSbnpNfi3te/9mQ7ozGamtiEd+qz1ysxTVS5h/CUcEwYIfry/bpNOrmaQ==
X-Received: by 2002:a81:4f13:0:b0:565:ee73:7711 with SMTP id d19-20020a814f13000000b00565ee737711mr6829660ywb.46.1685568736548;
        Wed, 31 May 2023 14:32:16 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 34/34] mm: Remove pgtable_{pmd, pte}_page_{ctor, dtor}() wrappers
Date: Wed, 31 May 2023 14:30:32 -0700
Message-Id: <20230531213032.25338-35-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

These functions are no longer necessary. Remove them and clean up the
Documentation that references them.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 Documentation/mm/split_page_table_lock.rst    | 12 +++++------
 .../zh_CN/mm/split_page_table_lock.rst        | 14 ++++++-------
 include/linux/mm.h                            | 20 -------------------
 3 files changed, 13 insertions(+), 33 deletions(-)

diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index 50ee0dfc95be..4bffec728340 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -53,7 +53,7 @@ Support of split page table lock by an architecture
 ===================================================
 
 There's no need in special enabling of PTE split page table lock: everything
-required is done by pgtable_pte_page_ctor() and pgtable_pte_page_dtor(), which
+required is done by pagetable_pte_ctor() and pagetable_pte_dtor(), which
 must be called on PTE table allocation / freeing.
 
 Make sure the architecture doesn't use slab allocator for page table
@@ -63,8 +63,8 @@ This field shares storage with page->ptl.
 PMD split lock only makes sense if you have more than two page table
 levels.
 
-PMD split lock enabling requires pgtable_pmd_page_ctor() call on PMD table
-allocation and pgtable_pmd_page_dtor() on freeing.
+PMD split lock enabling requires pagetable_pmd_ctor() call on PMD table
+allocation and pagetable_pmd_dtor() on freeing.
 
 Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and
 pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing
@@ -72,7 +72,7 @@ paths: i.e X86_PAE preallocate few PMDs on pgd_alloc().
 
 With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK.
 
-NOTE: pgtable_pte_page_ctor() and pgtable_pmd_page_ctor() can fail -- it must
+NOTE: pagetable_pte_ctor() and pagetable_pmd_ctor() can fail -- it must
 be handled properly.
 
 page->ptl
@@ -92,7 +92,7 @@ trick:
    split lock with enabled DEBUG_SPINLOCK or DEBUG_LOCK_ALLOC, but costs
    one more cache line for indirect access;
 
-The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in
-pgtable_pmd_page_ctor() for PMD table.
+The spinlock_t allocated in pagetable_pte_ctor() for PTE table and in
+pagetable_pmd_ctor() for PMD table.
 
 Please, never access page->ptl directly -- use appropriate helper.
diff --git a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
index 4fb7aa666037..a2c288670a24 100644
--- a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
+++ b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
@@ -56,16 +56,16 @@ Hugetlb特定的辅助函数:
 架构对分页表锁的支持
 ====================
 
-没有必要特别启用PTE分页表锁：所有需要的东西都由pgtable_pte_page_ctor()
-和pgtable_pte_page_dtor()完成，它们必须在PTE表分配/释放时被调用。
+没有必要特别启用PTE分页表锁：所有需要的东西都由pagetable_pte_ctor()
+和pagetable_pte_dtor()完成，它们必须在PTE表分配/释放时被调用。
 
 确保架构不使用slab分配器来分配页表：slab使用page->slab_cache来分配其页
 面。这个区域与page->ptl共享存储。
 
 PMD分页锁只有在你有两个以上的页表级别时才有意义。
 
-启用PMD分页锁需要在PMD表分配时调用pgtable_pmd_page_ctor()，在释放时调
-用pgtable_pmd_page_dtor()。
+启用PMD分页锁需要在PMD表分配时调用pagetable_pmd_ctor()，在释放时调
+用pagetable_pmd_dtor()。
 
 分配通常发生在pmd_alloc_one()中，释放发生在pmd_free()和pmd_free_tlb()
 中，但要确保覆盖所有的PMD表分配/释放路径：即X86_PAE在pgd_alloc()中预先
@@ -73,7 +73,7 @@ PMD分页锁只有在你有两个以上的页表级别时才有意义。
 
 一切就绪后，你可以设置CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK。
 
-注意：pgtable_pte_page_ctor()和pgtable_pmd_page_ctor()可能失败--必
+注意：pagetable_pte_ctor()和pagetable_pmd_ctor()可能失败--必
 须正确处理。
 
 page->ptl
@@ -90,7 +90,7 @@ page->ptl用于访问分割页表锁，其中'page'是包含该表的页面struc
    的指针并动态分配它。这允许在启用DEBUG_SPINLOCK或DEBUG_LOCK_ALLOC的
    情况下使用分页锁，但由于间接访问而多花了一个缓存行。
 
-PTE表的spinlock_t分配在pgtable_pte_page_ctor()中，PMD表的spinlock_t
-分配在pgtable_pmd_page_ctor()中。
+PTE表的spinlock_t分配在pagetable_pte_ctor()中，PMD表的spinlock_t
+分配在pagetable_pmd_ctor()中。
 
 请不要直接访问page->ptl - -使用适当的辅助函数。
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2c7d27348ea9..218cad2041a6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2878,11 +2878,6 @@ static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc)
 	return true;
 }
 
-static inline bool pgtable_pte_page_ctor(struct page *page)
-{
-	return pagetable_pte_ctor(page_ptdesc(page));
-}
-
 static inline void pagetable_pte_dtor(struct ptdesc *ptdesc)
 {
 	struct folio *folio = ptdesc_folio(ptdesc);
@@ -2892,11 +2887,6 @@ static inline void pagetable_pte_dtor(struct ptdesc *ptdesc)
 	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
-static inline void pgtable_pte_page_dtor(struct page *page)
-{
-	pagetable_pte_dtor(page_ptdesc(page));
-}
-
 #define pte_offset_map_lock(mm, pmd, address, ptlp)	\
 ({							\
 	spinlock_t *__ptl = pte_lockptr(mm, pmd);	\
@@ -2987,11 +2977,6 @@ static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc)
 	return true;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
-{
-	return pagetable_pmd_ctor(page_ptdesc(page));
-}
-
 static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc)
 {
 	struct folio *folio = ptdesc_folio(ptdesc);
@@ -3001,11 +2986,6 @@ static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc)
 	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
-static inline void pgtable_pmd_page_dtor(struct page *page)
-{
-	pagetable_pmd_dtor(page_ptdesc(page));
-}
-
 /*
  * No scalability reason to split PUD locks yet, but follow the same pattern
  * as the PMD locks to make it easier if we decide to.  The VM should not be
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:57 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542073.845619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVr-0000m2-DQ; Wed, 31 May 2023 21:37:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542073.845619; Wed, 31 May 2023 21:37:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVq-0000jT-Fn; Wed, 31 May 2023 21:37:46 +0000
Received: by outflank-mailman (input) for mailman id 542073;
 Wed, 31 May 2023 21:37:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQ7-0006xu-ID
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:51 +0000
Received: from mail-yw1-x1134.google.com (mail-yw1-x1134.google.com
 [2607:f8b0:4864:20::1134])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8d364f37-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:49 +0200 (CEST)
Received: by mail-yw1-x1134.google.com with SMTP id
 00721157ae682-5659d85876dso736147b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:49 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d364f37-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568709; x=1688160709;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=llblHDjuRzdwObovgJjTe//ioNSPoyRpZegKgEWGdOs=;
        b=iH3Uuo4ydabF86Ae3Pt9mqJWCo9Mdl0v853LngJ++ms5qQhJIV2nJMOwvvj1jdSolp
         wkug3c5kE5OyC8B7qbo9EtOWW91dAFU5qbdW1jXEhkmIJBuTnUynbfXIOO6t9HnXXwkE
         IN/6ruRl/K/MZEBNhZ/CMDwD4Ey5RSIokBkrylG9gJyzM17uataBRfYABiymq5L1JBem
         AiofKm4cV+Hpu3+dW3nZQBqxHmZ0U8w75MqODX8zjymWwzwAB9aGsVd720nTIfH2C9wd
         s6EKfpFxPPWRO5hvsFIezwgeyXUB9GISVAQy5eO7rADF0ZyAn64v+X5gTI2GfoMJv3eF
         AD6g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568709; x=1688160709;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=llblHDjuRzdwObovgJjTe//ioNSPoyRpZegKgEWGdOs=;
        b=h9zyTFDp9erEvM5pICEL5HLKZVekOwd+4ZSQmwexF28Ot6cDM9gfaqh3szAMxkBoU6
         Il0YLAovs9KTquSQ3mlTcvv7YIaLD5VkpN78ejZ1TaJHFI6oYyU5EPGcOqY0+HP35NIe
         pcYlm35CxuoDHNUJ33BdCIRlndRVwlM1C4S2ZmPLUT8l0hzlMsm6v2RIMdVcvvCi3wW5
         upbYkYYgYJ3OnEDxN0sTlBiZ/N/Ss5IIB0x51eE5/xzQSB2lFXBBzoxckW2WMShy94pF
         HiQEWumcKPkDP0qbQqE9Aks/bxo00I4nXWS0GseLHRcNAjXT+ucH2l5nAwP9/UWbvUFP
         eqAg==
X-Gm-Message-State: AC+VfDyAGpru6/FsbxUdb0/QRHGXpXtAu8iTVciM4JOwVnh7tv8o2wxY
	uYKSlnB/WAH4jlW3Gzj0Y5w=
X-Google-Smtp-Source: ACHHUZ6yD6O1sbMXJdkzsJg9s7w6GLhFX5Kt5hWPrM1g6MrxSM/fcoQvLUKDHg0FsgcJPRMkk3z8xg==
X-Received: by 2002:a0d:e647:0:b0:568:bca8:de50 with SMTP id p68-20020a0de647000000b00568bca8de50mr6918459ywe.17.1685568708928;
        Wed, 31 May 2023 14:31:48 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>
Subject: [PATCH v3 20/34] arm: Convert various functions to use ptdescs
Date: Wed, 31 May 2023 14:30:18 -0700
Message-Id: <20230531213032.25338-21-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

late_alloc() also uses the __get_free_pages() helper function. Convert
this to use pagetable_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/arm/include/asm/tlb.h | 12 +++++++-----
 arch/arm/mm/mmu.c          |  6 +++---
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index b8cbe03ad260..f40d06ad5d2a 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -39,7 +39,9 @@ static inline void __tlb_remove_table(void *_table)
 static inline void
 __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr)
 {
-	pgtable_pte_page_dtor(pte);
+	struct ptdesc *ptdesc = page_ptdesc(pte);
+
+	pagetable_pte_dtor(ptdesc);
 
 #ifndef CONFIG_ARM_LPAE
 	/*
@@ -50,17 +52,17 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr)
 	__tlb_adjust_range(tlb, addr - PAGE_SIZE, 2 * PAGE_SIZE);
 #endif
 
-	tlb_remove_table(tlb, pte);
+	tlb_remove_ptdesc(tlb, ptdesc);
 }
 
 static inline void
 __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
 {
 #ifdef CONFIG_ARM_LPAE
-	struct page *page = virt_to_page(pmdp);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmdp);
 
-	pgtable_pmd_page_dtor(page);
-	tlb_remove_table(tlb, page);
+	pagetable_pmd_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
 #endif
 }
 
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 22292cf3381c..294518fd0240 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -737,11 +737,11 @@ static void __init *early_alloc(unsigned long sz)
 
 static void *__init late_alloc(unsigned long sz)
 {
-	void *ptr = (void *)__get_free_pages(GFP_PGTABLE_KERNEL, get_order(sz));
+	void *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL, get_order(sz));
 
-	if (!ptr || !pgtable_pte_page_ctor(virt_to_page(ptr)))
+	if (!ptdesc || !pagetable_pte_ctor(ptdesc))
 		BUG();
-	return ptr;
+	return ptdesc;
 }
 
 static pte_t * __init arm_pte_alloc(pmd_t *pmd, unsigned long addr,
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:37:58 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:37:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542075.845624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVs-0000wj-50; Wed, 31 May 2023 21:37:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542075.845624; Wed, 31 May 2023 21:37:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVr-0000sK-DJ; Wed, 31 May 2023 21:37:47 +0000
Received: by outflank-mailman (input) for mailman id 542075;
 Wed, 31 May 2023 21:37:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LP7+=BU=amd.com=vikram.garhwal@srs-se1.protection.inumbo.net>)
 id 1q4TQI-0006zB-QL
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:02 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on2062d.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 92f5eb6d-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:32:00 +0200 (CEST)
Received: from MW3PR12MB4409.namprd12.prod.outlook.com (2603:10b6:303:2d::23)
 by SA1PR12MB5640.namprd12.prod.outlook.com (2603:10b6:806:238::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.24; Wed, 31 May
 2023 21:31:55 +0000
Received: from MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::246d:4776:b460:9277]) by MW3PR12MB4409.namprd12.prod.outlook.com
 ([fe80::246d:4776:b460:9277%5]) with mapi id 15.20.6455.020; Wed, 31 May 2023
 21:31:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92f5eb6d-fffa-11ed-8611-37d641c3527e
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dbBArU04kAQpB2JVuvBHrdYiuMDxjT3WU2qvdXjOFbFFTY60q/YWmpGr1m8pP5IOygWSifkntF3MAfLUYxAPygdUY3RPjsEcnpBUBL3XAIxFBdNQWa6v8o93qPxsYYcRjvDe04oFJI5CbMAxyzT33iSXC4zhOe8qRg0zs4XvaMs1fcOmA8paGlRMGRR9eXrjCifht/5KcLUe9sK9EyzIx0DXo+07rKvTdkHAkc5mUIHATTuT1MvbxnV+h2tGOkn7mG2LIQ3L0wIwpgIyyuEjcS95LG8iJNvlG9+AxRmKdyn6T0Pw0Oou8fAy3bie9x1zqjXL+ZyjML9GAptEcNu8Ug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=QZqgq69XDgY/WCCBtf8GsCnxxbOsGmopbsOOxTJh9WQ=;
 b=JMdmgAJsFW5uj81cmQH/CYQsQuwpUU6qsLNmAJisNzKiadYErJeBhUAb+e3+BVlxPBiV2pqD70t/cYvUdOmT9dvfSVfU/w+adts46VVrCFvs+dyC6tyhCxjx2/HGXaDftn1B5xDONxeV1W/AaX+SCdnAgzChDDge/+vekH4COxq+Irb7BU5TYuP6scrJ9kyIUGcyO5B/QS6Gf+hlLuqcT0vhgT0fHG7rbTwuHOLLq6digfEiNrtsIlEa8WHdvBodvVx54hGXg+TfUJUfvHqFqe86iRTPdx9TwxpnxTE82bPp748n+pdFPQnVh/PjQbj6itwoc1iAFIrCpk+AAGeeFg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QZqgq69XDgY/WCCBtf8GsCnxxbOsGmopbsOOxTJh9WQ=;
 b=b6A1/OrtMHbyBrF7YlFQlBspJiLiHuyOGQcPhEyL3yXTZ4C2DEYXohFh4PwRgqgDjEf7m+q43RzPPTy4jmZORw9Avn5E0fh/yTF4S1sFbrH20YuojwGSXsh/7k54K9Lg+vhee1sEYqMT+1ppI7dr4PklUFBi4sTyw/s5czd4I2o=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <99b0bdf6-b215-f65f-aad9-3ae74a14f66e@amd.com>
Date: Wed, 31 May 2023 14:31:53 -0700
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.11.0
Subject: Re: [XEN][PATCH v6 08/19] xen/device-tree: Add
 device_tree_find_node_by_path() to find nodes in device tree
Content-Language: en-US
To: Michal Orzel <michal.orzel@amd.com>, Henry Wang <Henry.Wang@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>
References: <20230502233650.20121-1-vikram.garhwal@amd.com>
 <20230502233650.20121-9-vikram.garhwal@amd.com>
 <AS8PR08MB79910CFF4439E503046660EF926D9@AS8PR08MB7991.eurprd08.prod.outlook.com>
 <080bd1fe-a58c-5bdf-eef5-995420001ca4@amd.com>
From: Vikram Garhwal <vikram.garhwal@amd.com>
In-Reply-To: <080bd1fe-a58c-5bdf-eef5-995420001ca4@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: SJ0PR03CA0192.namprd03.prod.outlook.com
 (2603:10b6:a03:2ef::17) To MW3PR12MB4409.namprd12.prod.outlook.com
 (2603:10b6:303:2d::23)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: MW3PR12MB4409:EE_|SA1PR12MB5640:EE_
X-MS-Office365-Filtering-Correlation-Id: 6d347481-9d47-4933-f7ef-08db621e74a2
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nSTcedYxnGWPSbZ65XDunT+hSDdywXn2ahHHEoU2PSdMGIyHLKg5EO3AefyXu24I+woqv8vvXbNJnL3FvV5dEuMY9AC3ZOrJdla6Y3ZDhdWXPDiNLbGy0BisIGYBAEMLe3IUzESVxytgt33G0ScgZ/qFdTP//lk2McjjB9I6p12eHG+WdPgOaT8R2MuqpQ14zfN8lQocrsN43h1LJ73kxNJeH0pEqOWDUkLXYhC/bmLb8/Rr5HaqIcE/EFpYML08Pa3q3cZSRxylVKWSdnlDWWHseEnYEycpab7LoE6STwnN7JiZ+F1eJwxmLiNW1zX2Ot51/Gx/NzOLdbk8AUsf+QLcrh88DaM4WMRqLkdyjn1TMsr5c0WEu+OC55At9rsEoozKipmE3doiVg+aZftsj/zhAThOeVRZ9oPvVGiP2HbbPjcXlpUSTLXtIqonMWZuq2aOxcrn3Qqq/4sKVxVXJAckUCLPqZi1AV8wcLREQaYtdn+bQPo6YFqsHsJfho+edANoWqCCw8y8hTWwxzfZyHCv+t3w9wdfiRCXVvbcsYaUB/e3oy4upw2sLb+orwQdjgPE2e8p2jHWc8hJnZEtS/SuAygIl4s6ti21yx7SA6idbzkjnjLy400PGWJHT6FE8eHb5bxKl+bWXUV0xLIg08kXClt6JaNatUrwUuE4NQo=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MW3PR12MB4409.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230028)(4636009)(346002)(376002)(39860400002)(366004)(136003)(396003)(451199021)(66476007)(5660300002)(36756003)(4326008)(66946007)(66556008)(478600001)(110136005)(8676002)(54906003)(31686004)(6486002)(316002)(44832011)(41300700001)(38100700002)(8936002)(86362001)(2616005)(6512007)(53546011)(186003)(83380400001)(31696002)(6506007)(26005)(2906002)(37363002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0:
	=?utf-8?B?NUJKdHNOTnQ2NFkzS1JKcElzWlE2djl2VytIRWlFREdRVkhtY1BZNmpndnFl?=
 =?utf-8?B?T1U2MGdwWi9qVFlwa0tRWEFmWnEvNUsyVjh5Yy9vUUNGWUxsMDdqcUNFakFZ?=
 =?utf-8?B?aFBBdWxCZU9zWEhrT1RRcnRsdjNMVkoyc1VKZk95bGY5WVhqM3VIZWdQZVFI?=
 =?utf-8?B?clJSQXFRK1VPUEg0M3YrUFY2TDlTTjMweXkxVWplK3hVUXdoOWdRMjNCSm0x?=
 =?utf-8?B?YjJNdkY2WG1DSVhwYTJKeVBOVGFaRHM0NlNndGk1UzlIcmtJYUZkMjh2dXFw?=
 =?utf-8?B?eVFuZjJWL0xzbW94OTU1ejJLVUtUaXVxOHhya1BMa0toem8yV1ljRzhXRWF1?=
 =?utf-8?B?N0lMTHlESkRwMk9BNFdUNHZJMms3VytJTTlBZDlkMjAzSjBDaTZRMGJnTnd4?=
 =?utf-8?B?ZUErZG4xRHBRME5md2MyMlQxbW91SFVyZ1ZYaUFVOHRxa0hzc0V5aDdzdi9K?=
 =?utf-8?B?WVl6OEZYNDgwbXlPYUlCRUhxNVgyd1VzYkdkRWluYkt6VjFiQ3c3ekhOaWhG?=
 =?utf-8?B?OXdSaXh2U0krRHlIRnRFOEFWcmk0SVo2dStkNXVSRWxWejVsTjhOSVBYM1VQ?=
 =?utf-8?B?VzRzTUhDd1RPemtGUlg1b1lVa0I5bWZyYjBlOVNMZG9aZUFWYzRIUXNjQlJM?=
 =?utf-8?B?S2ZzWUFPMzV0Vzd5dHlpMUFOcE5nSjVKc1diamNHTms4dmV1SVY0N3Z1OXNZ?=
 =?utf-8?B?Q0RDS1RmQS9tdW1TOWxMRllxYWo2R2hBWjViK1g5SU10bkg0bzJrL0ZOLzhF?=
 =?utf-8?B?MUttRWFhSStWRW5iN1JpSFFCVjZPRlM2UkhpWlpoRVV6UEUvYmhaMlVXd3ow?=
 =?utf-8?B?L0lQTjRjalgyc3NCcUs4V3lsVjF5LzJYNkhpNzdKaVJmOWZUSDBvNHJuUmY2?=
 =?utf-8?B?eDQydVJoME0vaEhWd1NlSG55N3Z4UTUzS3NDc1NPS2IwSGxKblIxTXE0UHpQ?=
 =?utf-8?B?WG14Ly8xYnFTc25sNTFGa2xlSDdxbTBLVjZrMkNzbUNpbkFzeE44TnVpblpa?=
 =?utf-8?B?Tk1vSDNqeDl1TmRmYStkaUdUZG5wb0N2Y2pCKzl1Z2pQaHU1U3Q3NHVaRUhO?=
 =?utf-8?B?TndVcFhxR2VobDh0SWREWTdlNm80MmwrcGk4a3JaRFV3RjIwSVc0QXpidThv?=
 =?utf-8?B?Q1VudE9FNTBUbmFHWC9HUVZOd2ZMd1hMUlJqa2pCblN3b3dITVdFcjRFakkz?=
 =?utf-8?B?a0lLZENCQktzYU1sdkU2TnNmMHJRRGU0blZnSDRjdGZWSU13L3hSVi9MRmRj?=
 =?utf-8?B?bmVJdHU3SVJQV011RURMY0cxczFHRC8rSjFQV1VFUXhzR2pkUDVlTTFJc1M2?=
 =?utf-8?B?Yy9NK1Exc2NRWkptUFJuZ1psbFlscXlLdWxYeFE0aXFENC9aL1p5NXlJdjcx?=
 =?utf-8?B?UnpXQjZsYThkcUc5VksrdWpsQ1U2aUd6NnYwdlZvSm9IeFVrb3R4RG5aOVJS?=
 =?utf-8?B?NlZrZkZlaVNQRTFsckc5SGswelhLWHIydVFCMHBsV1Ira2xLeFYzMWlhMHhX?=
 =?utf-8?B?cnhlVkxoaGRUVEgrRFlNZjU4TTZGNm03eVZSZDcxcDJVWjAvNEFSMmgvZ0Nz?=
 =?utf-8?B?STcvQjlqZFMrYTNjempkdmhSd0dSNmVwczErN1o5N0QvVzg2NTdqeDk0MEw3?=
 =?utf-8?B?SWVVQmxKR05rQUdLVHAxa2FjKzNIdytvT1N6em82YlZ4MDdVdURWQ21FMHNF?=
 =?utf-8?B?ZUdyY3JsbmN1TVNsZjlMWENQQ1VHVzRRUFNZSEVzeU5hSmw0SmI5dnYzR0E1?=
 =?utf-8?B?OTkzNmIxMlRkSUpQOEl5clhHSHZWTGdvS2FVNnBISnpXWVk2THB5dnkyeGRp?=
 =?utf-8?B?aGRIdCtOeTR1Skhiajl4RkZWWHhqclhTMjZwZlBiRWR3eThIdDF0NTFTNVRE?=
 =?utf-8?B?NzVtVFM4UXRpdWZ6bHpqdnhtYmtQdjNOMnNjS3lBczczZXY2S01ZK2hSMEpM?=
 =?utf-8?B?Z2ZtdFRRdW1keGt0ckVBYzFkcy85UGVJRVRwRnd1Qm1hRnFtUzZVTXkvVmF0?=
 =?utf-8?B?TjBlSTdzYlZMMDJVSW9HMWliS29VdWt5MXYzUG5WWE8xOVBqOU9VeHBoRGw3?=
 =?utf-8?B?eHIxUVRiaHpTbFdmL09Wc1lBR2RDKzBYRzB6RWtzeC9kdTNCSHRyRmNNSDh2?=
 =?utf-8?Q?o2HNFqMOCpgW3PQoKOyrHJeg7?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6d347481-9d47-4933-f7ef-08db621e74a2
X-MS-Exchange-CrossTenant-AuthSource: MW3PR12MB4409.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 May 2023 21:31:55.1327
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 4p0mFqyu4e60hYjYm2U71XHBsfKoq8BSE8DBzCKyB4Ohvp0Q0DsfOXp0ryWQJHBF2XWxENsMF/QH78CiyMbgXA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB5640

Hi Henry & Michal,


On 5/9/23 4:29 AM, Michal Orzel wrote:
>
> On 04/05/2023 06:23, Henry Wang wrote:
>>
>> Hi Vikram,
>>
>>> -----Original Message-----
>>> Subject: [XEN][PATCH v6 08/19] xen/device-tree: Add
>>> device_tree_find_node_by_path() to find nodes in device tree
>>>
>>> Add device_tree_find_node_by_path() to find a matching node with path for
>>> a
>>> dt_device_node.
>>>
>>> Reason behind this function:
>>>      Each time overlay nodes are added using .dtbo, a new fdt(memcpy of
>>>      device_tree_flattened) is created and updated with overlay nodes. This
>>>      updated fdt is further unflattened to a dt_host_new. Next, we need to find
>>>      the overlay nodes in dt_host_new, find the overlay node's parent in dt_host
>>>      and add the nodes as child under their parent in the dt_host. Thus we need
>>>      this function to search for node in different unflattened device trees.
>>>
>>> Also, make dt_find_node_by_path() static inline.
>>>
>>> Signed-off-by: Vikram Garhwal <vikram.garhwal@amd.com>
>>> ---
>>>   xen/common/device_tree.c      |  5 +++--
>>>   xen/include/xen/device_tree.h | 17 +++++++++++++++--
>>>   2 files changed, 18 insertions(+), 4 deletions(-)
>>>
>> [...]
>>
>>>   /**
>>> - * dt_find_node_by_path - Find a node matching a full DT path
>>> + * device_tree_find_node_by_path - Generic function to find a node
>>> matching the
>>> + * full DT path for any given unflatten device tree
>>> + * @dt_node: The device tree to search
>> I noticed that you missed Michal's comment here about renaming the
>> "dt_node" here to "dt" to match below function prototype...
> This is one thing. The other is that in v5 you said this is to be a generic function
> where you can search from a middle of a device tree. This means that the parameter should be
> named "node" or "from" and the description needs to say "The node to start searching from" +
> seeing the lack of ->allnext you can mention that this is inclusive (i.e. the passed node will also be searched).
Changed this for v7. Will send it out soon.

@Henry, I didn't carry the Reviewed-by tag since the patch changed a bit with
the renaming. Could you please review v7 and share your feedback?
>
> ~Michal



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:38:01 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542077.845635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVv-0001b8-DF; Wed, 31 May 2023 21:37:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542077.845635; Wed, 31 May 2023 21:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVu-0001U9-7H; Wed, 31 May 2023 21:37:50 +0000
Received: by outflank-mailman (input) for mailman id 542077;
 Wed, 31 May 2023 21:37:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQN-0006zB-DH
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:07 +0000
Received: from mail-yw1-x1134.google.com (mail-yw1-x1134.google.com
 [2607:f8b0:4864:20::1134])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 969a615b-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:32:05 +0200 (CEST)
Received: by mail-yw1-x1134.google.com with SMTP id
 00721157ae682-568a1011488so975227b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:32:05 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.32.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:32:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 969a615b-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568725; x=1688160725;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=K1x1J4vur13K6ienQiLVxExje9aLbKUsjVFvIhcqO10=;
        b=R4JOyFF5oGis8NxGU8pSpePc9JPXMsijgUHIihoBFDMX7vxfyfSgK9gS7L9vsvgMnj
         MoFYSUO2hucvt6EXtvpP7cclM0FlcdhnlFMXLdspy+TrRtUW4byNtMBbPaBMOtheTLzo
         JyXpw6dclXVn5TBmIUwO4ll+Dwj2sxioBm1tcLfW7tR6/lFa3tmkjwZ+b2UxBU9kd/xU
         9674/17mKjyw9zKVkY7JTqJdm8j6Ukk6MApKOQVf0MMoClbBMESl6AhiCYUKK1tGclZh
         a/IF0OWr+we0YPVHDLrOdoGrQ5ob2G14WJkmdrRtqm2pP0cB8C0ATsRWLlC+f5L22Z3U
         c7HQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568725; x=1688160725;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=K1x1J4vur13K6ienQiLVxExje9aLbKUsjVFvIhcqO10=;
        b=FGjWt5b+RoXcyDZ5F2vIItGv2Mt9DWqJyqCCaC148MbmzwFLREZZ4bOMbDUQL8zVpb
         4aY8bt+NmkS/Wg8F0I3dPYxFtANM28SHqK4BsdtNmvw7LaH/ujDwce/C2DHT9kdDF2qp
         Og/v+KHr56ubnuUf76sAis7hZylheu5DrqZ5FNRFlGzw6f1X0csYEVAI34DnQliHEvGc
         g05/hxHFGycJSFt0LJFOqy8fSQ3t5F5FXH5MMkSkyMcU6XAt02t1DFlxqDJUZJHgxRjm
         5sAWIXyEhHryM3MOa9ujmGIv/rjitrLFV+4Bhlq6wRe3TckyObtd2djCaMEDsmqWW4cv
         HgIw==
X-Gm-Message-State: AC+VfDxJ7yGvRo6XzZgfvaz5a7soyluiQ8nJkzIi+KX2RxCTIJZKKoQk
	G3ZuZCWSkua4heSoAMeXzDs=
X-Google-Smtp-Source: ACHHUZ7OmIDubddXjmsfBoe/XagN9Nq+I2v1lq5kjayLp/xLt6wDO/g4P/XIV7x6OHxU5ctdplxcBg==
X-Received: by 2002:a81:d246:0:b0:565:9d27:c5e0 with SMTP id m6-20020a81d246000000b005659d27c5e0mr8010590ywl.2.1685568724694;
        Wed, 31 May 2023 14:32:04 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Jonas Bonn <jonas@southpole.se>
Subject: [PATCH v3 28/34] openrisc: Convert __pte_free_tlb() to use ptdescs
Date: Wed, 31 May 2023 14:30:26 -0700
Message-Id: <20230531213032.25338-29-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/openrisc/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h
index b7b2b8d16fad..c6a73772a546 100644
--- a/arch/openrisc/include/asm/pgalloc.h
+++ b/arch/openrisc/include/asm/pgalloc.h
@@ -66,10 +66,10 @@ extern inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm);
 
-#define __pte_free_tlb(tlb, pte, addr)	\
-do {					\
-	pgtable_pte_page_dtor(pte);	\
-	tlb_remove_page((tlb), (pte));	\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	pagetable_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));	\
 } while (0)
 
 #endif
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:38:05 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:38:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542083.845651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVz-0002gs-9O; Wed, 31 May 2023 21:37:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542083.845651; Wed, 31 May 2023 21:37:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TVy-0002e7-K0; Wed, 31 May 2023 21:37:54 +0000
Received: by outflank-mailman (input) for mailman id 542083;
 Wed, 31 May 2023 21:37:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQS-0006zB-Qb
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:12 +0000
Received: from mail-yw1-x1134.google.com (mail-yw1-x1134.google.com
 [2607:f8b0:4864:20::1134])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a32ac6e-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:32:11 +0200 (CEST)
Received: by mail-yw1-x1134.google.com with SMTP id
 00721157ae682-568a1011488so976197b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:32:11 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.32.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:32:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a32ac6e-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568731; x=1688160731;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1GRmBDGAiDB5ohg5xpMsC/Ie0xVPC3qwwrjfZtpy6VU=;
        b=TzIQcYP8UmEvu1XoUXTmQnAashbo2T6eYaCRckVyeMmLMro7gHhzasHCkMcWFCcICP
         qASPopGO4jPsUUoq2oo54YJvtjYAvzC57SSS0Yqri4mv3NjvTdLMSdkLvvM7oA4Ee24y
         wtM1hRp99610fYgP6iNO/jZdKl6qkgz0kh/qsgBmVzQ/w0d6rCW5SOQFK0BhVT7CHUOG
         qPxfe0g3o4O3N6ywZpv0i38BDLHQS9YQOgCA2owyv5y3xv1tuRhfQmBHrNK0am7f5K2A
         BF7Ua5SoJjDL0sDtWpclX2w1tSSbohGXIW608mTIM4bjGCvoZIGoxKxNqCAe4MAT2CZW
         ovbw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568731; x=1688160731;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1GRmBDGAiDB5ohg5xpMsC/Ie0xVPC3qwwrjfZtpy6VU=;
        b=V3XsIo9dPGA7ebNUEKIarlEHN6PR8/26sN69KgGsN1ryqr4SyeZaNyhBC+WeULDNfQ
         FL2wM6KRgfS3ErETTdJTyO1gdMQ62Opkw2RVJyce0gym4XfRLEYyv1fHzoXZcGtUX9y4
         YvYjGuYiKNUhENYCoketUuKyut3XWSFPP7a1/mh5Ut+TgsiYFWk2+A2qKDs7JTbhuieu
         2KeyJnJoLF/sL+3mjz84TM+KifuCDGoO/iZ0jVqFVFhI2k831QFl10c19UNssbQ43jo8
         cuIe36+aVeuokPjIG2iZop+llEP6AR1QgG1ohKYLKxl2pBviMUREwkHhFiTw2KNSs4ES
         PvEQ==
X-Gm-Message-State: AC+VfDwYnkReSB4OH03JQEzwqgXqZDIcWOtX5MmcK8wjjLkaGiGCIIJw
	VBgpOeraDzEquKZxJGIfKrE=
X-Google-Smtp-Source: ACHHUZ5LYpeqVXEBxKdhnKxPXFX2lgcS9ekTnfnLE12pRifhXXzQCAWJFiLd7o1EIw9HUUaJU74F+Q==
X-Received: by 2002:a0d:e24d:0:b0:561:949d:a7fd with SMTP id l74-20020a0de24d000000b00561949da7fdmr7343037ywe.45.1685568730646;
        Wed, 31 May 2023 14:32:10 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: [PATCH v3 31/34] sparc64: Convert various functions to use ptdescs
Date: Wed, 31 May 2023 14:30:29 -0700
Message-Id: <20230531213032.25338-32-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/sparc/mm/init_64.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 04f9db0c3111..8a1618c3b435 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2893,14 +2893,15 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
 
 pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-	if (!page)
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!pagetable_pte_ctor(ptdesc)) {
+		pagetable_free(ptdesc);
 		return NULL;
 	}
-	return (pte_t *) page_address(page);
+	return (pte_t *) ptdesc_address(ptdesc);
 }
 
 void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
@@ -2910,10 +2911,10 @@ void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 
 static void __pte_free(pgtable_t pte)
 {
-	struct page *page = virt_to_page(pte);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pte);
 
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	pagetable_pte_dtor(ptdesc);
+	pagetable_free(ptdesc);
 }
 
 void pte_free(struct mm_struct *mm, pgtable_t pte)
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:38:07 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:38:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542085.845661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW1-000352-4J; Wed, 31 May 2023 21:37:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542085.845661; Wed, 31 May 2023 21:37:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW0-00031q-7b; Wed, 31 May 2023 21:37:56 +0000
Received: by outflank-mailman (input) for mailman id 542085;
 Wed, 31 May 2023 21:37:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPy-0006xu-GV
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:42 +0000
Received: from mail-yw1-x1133.google.com (mail-yw1-x1133.google.com
 [2607:f8b0:4864:20::1133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 873beee1-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:40 +0200 (CEST)
Received: by mail-yw1-x1133.google.com with SMTP id
 00721157ae682-565c7399afaso815237b3.1
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:40 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 873beee1-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568699; x=1688160699;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=CiuD1fnPRO8jyev+m9HAl1NKcgSXjs7PYAEQ9fPxEwI=;
        b=dwKPCZ9sOqelwYr8F/cSipEa3jHhRdlxXOeKLR/uKctFcyWbknhl7PTpjdfvG2wdAw
         pMFXdWvJKkAwifmK+eF/WF2c+CTwD32ESs3xYPpfMCjwyoVuEtX6esdP6hVyOjImmyUk
         Oh+dQbUYFzz+RNVUajzvmNf19c5I4sagTuDGjtO3TygkKRoR+ykzPoVUyEPrbcpAn52e
         rU0Chmm96Re5mM9CLdg1318OX9ZTyh5zkMgv9dD9FfM+Aps24GBiAtG6k7NQXS+y1kVH
         W5R3JCSdW+dNmMUo9S420mdoMO5JugaoZBL/4nDQ/iinPbgBaddQ0udYo8DlZN/IM0qe
         FbCQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568699; x=1688160699;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=CiuD1fnPRO8jyev+m9HAl1NKcgSXjs7PYAEQ9fPxEwI=;
        b=gyhBTNltWv5ppnidOclxmzrmOEE2ChyxPYPtnccsUNejuGiIWwDssCaLs4QE8Y/rEn
         IZ7XzfE7Oicdm4KGsfgMnhvVeZfOcULSb5hpuhSo6hVe36d9Rj1YuRI7BzpjCPCysHRF
         YAyzvoNR0bWfo+5h3YPhUaPhAFOKyHcNMrUR5+lo3NNFdqxdT/EHhoDT22CFBILKunUg
         DDoZ4CDx1dt9YkeqGrPZPIRA1lykWc4jkv7EjG+ljXjs/HlGmwhU2FgwRo77/diNyQv4
         DoQCCe7WY5VRS2nK0eTttkquW6t4F6pjqRe8Q/IIIiVUXWo/2SktAcS7V9y7B1mnSfq/
         DGpw==
X-Gm-Message-State: AC+VfDxHslLsaO1CMW7J3+JknmllIISeWoIwdle90pvdYHuo17pS+ood
	PgvuRB0rubNL8FmhJJB7RPY=
X-Google-Smtp-Source: ACHHUZ68ZyyZPWIoZr6cEDE/Fc/GjANr19wlevcJlVACZTiXrZelT9pLwANtLCgEsMseNvfv3aa+eg==
X-Received: by 2002:a81:4e8b:0:b0:561:d6dd:bc84 with SMTP id c133-20020a814e8b000000b00561d6ddbc84mr7760695ywb.48.1685568698919;
        Wed, 31 May 2023 14:31:38 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Dave Hansen <dave.hansen@linux.intel.com>
Subject: [PATCH v3 15/34] x86: Convert various functions to use ptdescs
Date: Wed, 31 May 2023 14:30:13 -0700
Message-Id: <20230531213032.25338-16-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to split struct ptdesc from struct page, convert various
functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use pagetable_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/mm/pgtable.c | 46 +++++++++++++++++++++++++------------------
 1 file changed, 27 insertions(+), 19 deletions(-)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index e4f499eb0f29..79681557fce6 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -52,7 +52,7 @@ early_param("userpte", setup_userpte);
 
 void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 {
-	pgtable_pte_page_dtor(pte);
+	pagetable_pte_dtor(page_ptdesc(pte));
 	paravirt_release_pte(page_to_pfn(pte));
 	paravirt_tlb_remove_table(tlb, pte);
 }
@@ -60,7 +60,7 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 #if CONFIG_PGTABLE_LEVELS > 2
 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 {
-	struct page *page = virt_to_page(pmd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
 	paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT);
 	/*
 	 * NOTE! For PAE, any changes to the top page-directory-pointer-table
@@ -69,8 +69,8 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 #ifdef CONFIG_X86_PAE
 	tlb->need_flush_all = 1;
 #endif
-	pgtable_pmd_page_dtor(page);
-	paravirt_tlb_remove_table(tlb, page);
+	pagetable_pmd_dtor(ptdesc);
+	paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc));
 }
 
 #if CONFIG_PGTABLE_LEVELS > 3
@@ -92,16 +92,16 @@ void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d)
 
 static inline void pgd_list_add(pgd_t *pgd)
 {
-	struct page *page = virt_to_page(pgd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgd);
 
-	list_add(&page->lru, &pgd_list);
+	list_add(&ptdesc->pt_list, &pgd_list);
 }
 
 static inline void pgd_list_del(pgd_t *pgd)
 {
-	struct page *page = virt_to_page(pgd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgd);
 
-	list_del(&page->lru);
+	list_del(&ptdesc->pt_list);
 }
 
 #define UNSHARED_PTRS_PER_PGD				\
@@ -112,12 +112,12 @@ static inline void pgd_list_del(pgd_t *pgd)
 
 static void pgd_set_mm(pgd_t *pgd, struct mm_struct *mm)
 {
-	virt_to_page(pgd)->pt_mm = mm;
+	virt_to_ptdesc(pgd)->pt_mm = mm;
 }
 
 struct mm_struct *pgd_page_get_mm(struct page *page)
 {
-	return page->pt_mm;
+	return page_ptdesc(page)->pt_mm;
 }
 
 static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd)
@@ -213,11 +213,14 @@ void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd)
 static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count)
 {
 	int i;
+	struct ptdesc *ptdesc;
 
 	for (i = 0; i < count; i++)
 		if (pmds[i]) {
-			pgtable_pmd_page_dtor(virt_to_page(pmds[i]));
-			free_page((unsigned long)pmds[i]);
+			ptdesc = virt_to_ptdesc(pmds[i]);
+
+			pagetable_pmd_dtor(ptdesc);
+			pagetable_free(ptdesc);
 			mm_dec_nr_pmds(mm);
 		}
 }
@@ -232,16 +235,21 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count)
 		gfp &= ~__GFP_ACCOUNT;
 
 	for (i = 0; i < count; i++) {
-		pmd_t *pmd = (pmd_t *)__get_free_page(gfp);
-		if (!pmd)
+		pmd_t *pmd = NULL;
+		struct ptdesc *ptdesc = pagetable_alloc(gfp, 0);
+
+		if (!ptdesc)
 			failed = true;
-		if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd))) {
-			free_page((unsigned long)pmd);
-			pmd = NULL;
+		if (ptdesc && !pagetable_pmd_ctor(ptdesc)) {
+			pagetable_free(ptdesc);
+			ptdesc = NULL;
 			failed = true;
 		}
-		if (pmd)
+		if (ptdesc) {
 			mm_inc_nr_pmds(mm);
+			pmd = (pmd_t *)ptdesc_address(ptdesc);
+		}
+
 		pmds[i] = pmd;
 	}
 
@@ -838,7 +846,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 
 	free_page((unsigned long)pmd_sv);
 
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	pagetable_pmd_dtor(virt_to_ptdesc(pmd));
 	free_page((unsigned long)pmd);
 
 	return 1;
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:38:08 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542089.845669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW2-0003WA-LN; Wed, 31 May 2023 21:37:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542089.845669; Wed, 31 May 2023 21:37:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW1-0003PK-SN; Wed, 31 May 2023 21:37:57 +0000
Received: by outflank-mailman (input) for mailman id 542089;
 Wed, 31 May 2023 21:37:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQD-0006xu-Jb
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:57 +0000
Received: from mail-yw1-x1136.google.com (mail-yw1-x1136.google.com
 [2607:f8b0:4864:20::1136])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8f8c314f-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:54 +0200 (CEST)
Received: by mail-yw1-x1136.google.com with SMTP id
 00721157ae682-5664b14966bso877027b3.1
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:53 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f8c314f-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568713; x=1688160713;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=twP3X2a+WodLh3389MnIMBhN0jbH10mGEhh4PvePsM0=;
        b=PfN/YuXqEhOYfWlhCizTVYhB0yTtEnGBcp1Rf+NGxnIYb5NMfNjpOJp7si8Gqd9u2Z
         fVUf7A/NR+jmOjDUmFOcEU9lVtXwdNmW/KajF2dl5HDRHfmf/W944R+zyeedwYfvy537
         14PZNj7A3uugHcuLzifkUVqBMDywmdqymi8GehMXhWnbSeCktA8GuayP2Gw4BIeku+KV
         EWgr5cqL7Y75WorTm+eBQ331vxvi6DIr56AVXpyFUK5xmnNLyeer0KWpXh+3ggToS823
         Egfk0bra3X4pY1o6khSRW89GKMQyeeVJx3oX9CNStxK0FHLZpfrALdAzG2hgMTcdbX0M
         +9og==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568713; x=1688160713;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=twP3X2a+WodLh3389MnIMBhN0jbH10mGEhh4PvePsM0=;
        b=fd6qlZLqHUr1HumI8fARZXz6batzeRw6xWfSyabKypyTx+G3f9EMlRo3kn7TSiQwDk
         w1MenDPYQWUr1shCpOFJ382jAKcKkWOiya0V+X4vYjZFTYTZ0MuN38xzP7BHPdxQkYFD
         +zhKCPFyihK/R94VVGwOXgDCCg5hidg0J/YBWhKKZzdk9QPZwkwjaksMvd25VKo0taoU
         l//exPEVZTFHuUxMpoVGElProfahaKXiaUfbUGENaguTDk4RXevzvYBZBpdvtMdghj7m
         MnSOY+flvVc7oHEbcLBETpsuypNL+Bb/Q4wyGKzTfGaCfcR1mqip0NJ4m6fhGxsqx8KY
         85Hw==
X-Gm-Message-State: AC+VfDxyrCtUq5AdS9Ik3Z/4SoPhNlFxozPP2jUi6HUYSb70AKiuTp1r
	abr1bwtg3jgL2MMMRPVr3Cc=
X-Google-Smtp-Source: ACHHUZ7FD6AVojP8YF17yPS22sW4VLLKSMzYdsUMVLL7wG9gS5evh1bJZ6trRixkrytBmJNTyQ8JCA==
X-Received: by 2002:a0d:d684:0:b0:566:386b:75fc with SMTP id y126-20020a0dd684000000b00566386b75fcmr7598602ywd.18.1685568712866;
        Wed, 31 May 2023 14:31:52 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 22/34] csky: Convert __pte_free_tlb() to use ptdescs
Date: Wed, 31 May 2023 14:30:20 -0700
Message-Id: <20230531213032.25338-23-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructors/destructors
with ptdesc equivalents, convert __pte_free_tlb() to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/csky/include/asm/pgalloc.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h
index 7d57e5da0914..9c84c9012e53 100644
--- a/arch/csky/include/asm/pgalloc.h
+++ b/arch/csky/include/asm/pgalloc.h
@@ -63,8 +63,8 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 
 #define __pte_free_tlb(tlb, pte, address)		\
 do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page(tlb, pte);			\
+	pagetable_pte_dtor(page_ptdesc(pte));		\
+	tlb_remove_page_ptdesc(tlb, page_ptdesc(pte));	\
 } while (0)
 
 extern void pagetable_init(void);
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:38:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:38:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542096.845679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW5-00046W-DM; Wed, 31 May 2023 21:38:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542096.845679; Wed, 31 May 2023 21:38:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW4-00042T-ND; Wed, 31 May 2023 21:38:00 +0000
Received: by outflank-mailman (input) for mailman id 542096;
 Wed, 31 May 2023 21:37:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPt-0006zB-TB
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:37 +0000
Received: from mail-yw1-x1130.google.com (mail-yw1-x1130.google.com
 [2607:f8b0:4864:20::1130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 84e22f42-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:31:36 +0200 (CEST)
Received: by mail-yw1-x1130.google.com with SMTP id
 00721157ae682-565cfe4ece7so747477b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:36 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84e22f42-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568695; x=1688160695;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1oaubVECbDcpncGofy3SyRxI7NQgpixW3FZHbyn9Yjk=;
        b=ditcnS71SBC9ni8k286RH+lbOQgfG7PCRFPFB/+gsGtd91WJDFwTiqS8wSFxx5SqNm
         NDEu55kAliwqkv4+OeAiBx4Dtlk6APsI2JGOnAuJmD3QcfvBggdvrn23yK3wDSKM6a8t
         e32asmJ/Rn3K4ua1Ni58ZwRhYlStsE+77Viu6GCYO1u0Ujf908XsQZtJlTqrHFJAkKkK
         Mjajw5BTeegwx0KghNxbnl2uzVS9ZiBwBGYnqna7EVzWEplqQfDV4S8WEKdXyIryNNy8
         tJSCeZKUks8IOUQQh+yhMBSuygAiQUvt/2CrjYMsfZwcI4YczZhuoKfku3jgAV+GNYVx
         ktNA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568695; x=1688160695;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1oaubVECbDcpncGofy3SyRxI7NQgpixW3FZHbyn9Yjk=;
        b=UaGcElmkN4qnsWmye+cNX+ex/+cDN7WCA0WZF8mKGjss3pcBJrn+NwBwDiB92HF9hW
         zbh2M4tPTVxO3XROQV5j6s+JCYlIPiw7AGAIHPkmYv4ECCfIeQjwBnoFhbZm2xBjD7JG
         NlHZigsQXNqYMqlDlIKO1VOxDbetr6PpF8nh5Bq94+9EN2asmPjl90CyM3FeTpYCbWjc
         YIqWKM+BIHoLnwl05cY5HBX33b4xlfOo26l4kA4vtCsJdpFfy/TZeZopFppjXduS9+bu
         lwmrnsJQQOcuJlPZgbRmQQ7qk11q0bLU9iFy9+lQ2oJkegi/uhdvocTRvz7m5Lxwkr/U
         bJxg==
X-Gm-Message-State: AC+VfDzHzftinvoqLt5oqw+/u070lUKWadjWtsB0RPDV2PNm/XxxEU7D
	T1wMcY+ociCxZXP3YCBzy5IYpFmxw54A5g==
X-Google-Smtp-Source: ACHHUZ5aCiPljYuheD8NnJvwTTWKzTm6/fHh7z9wHXOX5DNM+ib+INiEbpn3KPIafpnwazjaJRxT7w==
X-Received: by 2002:a81:5294:0:b0:561:e540:b1b3 with SMTP id g142-20020a815294000000b00561e540b1b3mr7813627ywb.38.1685568694951;
        Wed, 31 May 2023 14:31:34 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 13/34] mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
Date: Wed, 31 May 2023 14:30:11 -0700
Message-Id: <20230531213032.25338-14-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Create pagetable_pte_ctor(), pagetable_pmd_ctor(), pagetable_pte_dtor(),
and pagetable_pmd_dtor(), and make the original pgtable
constructors/destructors wrappers around them.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 56 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 72725aa6c30d..2c7d27348ea9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2867,20 +2867,34 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void ptlock_free(struct ptdesc *ptdesc) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
-static inline bool pgtable_pte_page_ctor(struct page *page)
+static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc)
 {
-	if (!ptlock_init(page_ptdesc(page)))
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	if (!ptlock_init(ptdesc))
 		return false;
-	__SetPageTable(page);
-	inc_lruvec_page_state(page, NR_PAGETABLE);
+	__folio_set_table(folio);
+	lruvec_stat_add_folio(folio, NR_PAGETABLE);
 	return true;
 }
 
+static inline bool pgtable_pte_page_ctor(struct page *page)
+{
+	return pagetable_pte_ctor(page_ptdesc(page));
+}
+
+static inline void pagetable_pte_dtor(struct ptdesc *ptdesc)
+{
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	ptlock_free(ptdesc);
+	__folio_clear_table(folio);
+	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
+}
+
 static inline void pgtable_pte_page_dtor(struct page *page)
 {
-	ptlock_free(page_ptdesc(page));
-	__ClearPageTable(page);
-	dec_lruvec_page_state(page, NR_PAGETABLE);
+	pagetable_pte_dtor(page_ptdesc(page));
 }
 
 #define pte_offset_map_lock(mm, pmd, address, ptlp)	\
@@ -2962,20 +2976,34 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 	return ptl;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
+static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc)
 {
-	if (!pmd_ptlock_init(page_ptdesc(page)))
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	if (!pmd_ptlock_init(ptdesc))
 		return false;
-	__SetPageTable(page);
-	inc_lruvec_page_state(page, NR_PAGETABLE);
+	__folio_set_table(folio);
+	lruvec_stat_add_folio(folio, NR_PAGETABLE);
 	return true;
 }
 
+static inline bool pgtable_pmd_page_ctor(struct page *page)
+{
+	return pagetable_pmd_ctor(page_ptdesc(page));
+}
+
+static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc)
+{
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	pmd_ptlock_free(ptdesc);
+	__folio_clear_table(folio);
+	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
+}
+
 static inline void pgtable_pmd_page_dtor(struct page *page)
 {
-	pmd_ptlock_free(page_ptdesc(page));
-	__ClearPageTable(page);
-	dec_lruvec_page_state(page, NR_PAGETABLE);
+	pagetable_pmd_dtor(page_ptdesc(page));
 }
 
 /*
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:38:14 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:38:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542102.845697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW8-0004pf-6Y; Wed, 31 May 2023 21:38:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542102.845697; Wed, 31 May 2023 21:38:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW7-0004nH-J2; Wed, 31 May 2023 21:38:03 +0000
Received: by outflank-mailman (input) for mailman id 542102;
 Wed, 31 May 2023 21:38:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQ2-0006xu-HD
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:46 +0000
Received: from mail-yw1-x1134.google.com (mail-yw1-x1134.google.com
 [2607:f8b0:4864:20::1134])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8acfa527-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:46 +0200 (CEST)
Received: by mail-yw1-x1134.google.com with SMTP id
 00721157ae682-5659d85876dso735417b3.2
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:46 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8acfa527-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568705; x=1688160705;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=o9eRM/zDkfdSk+8A/Hewm27be7ojppibpTW/4yMTPrs=;
        b=cEdd+3JqSM8NFS7GIxJNDGEwvf5zLrjyKSP/sMa+M4iyqruEv0NeTXFNM3FuEOve0w
         BIidhyYZPlXr46Gag37ah0Z9lK+/zubpEH8qTvbGBEkRrc++Uy/9vmo7kzodV2wrSQQX
         I30LmfVEDViclfTqhhCm9JCK+j60eB88X4OE4kx5hV9tTa2skQgGrRsYDfbJW0ibMSeL
         XGmJa+8fFqRfTTRXmPxLl8Ng1lVV80Cc71vk49EsIYa6/DzcwK4zi4KQyIaDphcDsbUr
         cdzc/H0c6XAN15Wd27KyDlOf+VJeKUYgNs/w5+avyyLu4TkMCo4jGnCkWr4fOUgF3X5G
         tX9Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568705; x=1688160705;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=o9eRM/zDkfdSk+8A/Hewm27be7ojppibpTW/4yMTPrs=;
        b=do8pG4QuhwYJtztrp83sjKVzaWiS77lrGIsoEUgJVuOC0vMZuxcANiRAMv6HgEqims
         3ywJIccF5ZOWODeXN6VCXvuZDRsrbLbQO9sz8MUBMg/mTLPp7IzoH5Rl8gh6Y/4nyTPL
         Nv/sXq/yWRUP6gj2783JxI87ouP/5tokdMn576/YfoByezQwEe0zG4yjP+SzGJMsttyR
         h9vVvy5414GwXse03Y8ibJyv9s6PmUDfo9BTIz/kaizvDfW5z0szJjaVPs2snvp37QX9
         1q5Dz8eYQXOqeCtuPlgxB1UrqZYoBAcTkKQZKxi4gx8FOvOcPU7ik29YHmcqpfPAKeim
         TSGQ==
X-Gm-Message-State: AC+VfDxj7ur5mqnb5Q4ZRZNbonNvA9F9qrgLpB1tnnb5g7i9RtxGMunO
	XX6I2QtqDiDbDIR3NPswd28=
X-Google-Smtp-Source: ACHHUZ6qsHeUoao2+s8uy7ftoyOjIWcEzXHhSNjnyPVGfYnJHG4o2q2q2wEf6DyMGhXFqCryp1fDIQ==
X-Received: by 2002:a81:778b:0:b0:565:ee47:5844 with SMTP id s133-20020a81778b000000b00565ee475844mr7319022ywc.38.1685568704868;
        Wed, 31 May 2023 14:31:44 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 18/34] mm: Remove page table members from struct page
Date: Wed, 31 May 2023 14:30:16 -0700
Message-Id: <20230531213032.25338-19-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The page table members are now split out into their own ptdesc struct.
Remove them from struct page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm_types.h | 14 --------------
 include/linux/pgtable.h  |  3 ---
 2 files changed, 17 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6161fe1ae5b8..31ffa1be21d0 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -141,20 +141,6 @@ struct page {
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
 		};
-		struct {	/* Page table pages */
-			unsigned long _pt_pad_1;	/* compound_head */
-			pgtable_t pmd_huge_pte; /* protected by page->ptl */
-			unsigned long _pt_s390_gaddr;	/* mapping */
-			union {
-				struct mm_struct *pt_mm; /* x86 pgds only */
-				atomic_t pt_frag_refcount; /* powerpc */
-			};
-#if ALLOC_SPLIT_PTLOCKS
-			spinlock_t *ptl;
-#else
-			spinlock_t ptl;
-#endif
-		};
 		struct {	/* ZONE_DEVICE pages */
 			/** @pgmap: Points to the hosting device page map. */
 			struct dev_pagemap *pgmap;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 5f12622d1521..3b89dd028973 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1020,10 +1020,7 @@ struct ptdesc {
 TABLE_MATCH(flags, __page_flags);
 TABLE_MATCH(compound_head, pt_list);
 TABLE_MATCH(compound_head, _pt_pad_1);
-TABLE_MATCH(pmd_huge_pte, pmd_huge_pte);
 TABLE_MATCH(mapping, _pt_s390_gaddr);
-TABLE_MATCH(pt_mm, pt_mm);
-TABLE_MATCH(ptl, ptl);
 #undef TABLE_MATCH
 static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
 
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:38:15 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:38:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542103.845701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW9-00050J-5f; Wed, 31 May 2023 21:38:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542103.845701; Wed, 31 May 2023 21:38:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TW8-0004xf-I8; Wed, 31 May 2023 21:38:04 +0000
Received: by outflank-mailman (input) for mailman id 542103;
 Wed, 31 May 2023 21:38:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TPv-0006zB-TH
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:31:39 +0000
Received: from mail-yw1-x112d.google.com (mail-yw1-x112d.google.com
 [2607:f8b0:4864:20::112d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8619285f-fffa-11ed-8611-37d641c3527e;
 Wed, 31 May 2023 23:31:38 +0200 (CEST)
Received: by mail-yw1-x112d.google.com with SMTP id
 00721157ae682-565ba53f434so704127b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:38 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8619285f-fffa-11ed-8611-37d641c3527e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568697; x=1688160697;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=+FIbG7hZZYadBjxNE3W8iUcLLESpTErkJHs37ORZhp8=;
        b=QQW4eJbw08fzp5u0C3EApsqhSDScPxqfpSujd2Fftsdk0NV+UqjO1XJqJdVLi66zFu
         cinpPfeEqgmR/O1Ran8+YgWTwfmK//QUf637v77XMU/TckkC/xhkbCW+xajO3Dbn3frP
         kO/L8KSj4HsUJRKxWikCzzefHwxQle3fdbiwKYTLnSneYpsNE6F2bSphOyyvkSKnD24S
         ROPR0bP/wgQqTMXtaw/FJstwqXMiiPsr72NT7QRMUoF9N1ccw4pckpdGN9TFHK/dLUHW
         re4TxQzZRWiv1DfYJnXE2tUZsXuB7byvBA7x2TcImuqAH62S99tmu9Nc/6bmZRLPJwEN
         bi+g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568697; x=1688160697;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=+FIbG7hZZYadBjxNE3W8iUcLLESpTErkJHs37ORZhp8=;
        b=O8HH8f4oKOVb/kbx4aWPIZEK21ijz0/uUjtw2pM5nQv7KcnnnS2x85Q6b/qXxjrSse
         s2QD2b1H+UiAZepi17UWAW5I8ZSc4LeIDn1YdUARH52o1/lVr5Z4HtHND93k+DGTMXSN
         AExwZAYj7NCfxeCg1FASC27Dzi9AIvUW2CDmGPS5lZQcO1wPFzeJb4uQ152AIvi6ETIP
         LRhxaBHW23g4N3BRUUr5fsTfdm/rLdZr0r4gkflpWBu5h3U01aKRZ55SqDbxILATSwKD
         0URkJh0PTrV71g/ZsjS0jMTk08pbH858Vvx0/2KowpfGQ81hEYPy/cB9yIewsRAtPQB9
         llVQ==
X-Gm-Message-State: AC+VfDw5v42+x09KWLKBzG4O1XDOYl3tjfmcqFT7yjgsMrr0HMAaZpUE
	zsL3TpX+fpOvIIYWu7wKtgE=
X-Google-Smtp-Source: ACHHUZ6MqVeBd30TPNz1jGF+qHEusksPOw1TO03qOyBH3mOgAXGsgyOhQYPLdxwJyABP8iCNsexq+Q==
X-Received: by 2002:a81:66d6:0:b0:559:eae4:9671 with SMTP id a205-20020a8166d6000000b00559eae49671mr7863818ywc.14.1685568696959;
        Wed, 31 May 2023 14:31:36 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH v3 14/34] powerpc: Convert various functions to use ptdescs
Date: Wed, 31 May 2023 14:30:12 -0700
Message-Id: <20230531213032.25338-15-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to split struct ptdesc from struct page, convert various
functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/powerpc/mm/book3s64/mmu_context.c | 10 +++---
 arch/powerpc/mm/book3s64/pgtable.c     | 32 +++++++++---------
 arch/powerpc/mm/pgtable-frag.c         | 46 +++++++++++++-------------
 3 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
index c766e4c26e42..1715b07c630c 100644
--- a/arch/powerpc/mm/book3s64/mmu_context.c
+++ b/arch/powerpc/mm/book3s64/mmu_context.c
@@ -246,15 +246,15 @@ static void destroy_contexts(mm_context_t *ctx)
 static void pmd_frag_destroy(void *pmd_frag)
 {
 	int count;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
-	page = virt_to_page(pmd_frag);
+	ptdesc = virt_to_ptdesc(pmd_frag);
 	/* drop all the pending references */
 	count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT;
 	/* We allow PTE_FRAG_NR fragments from a PTE page */
-	if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) {
-		pgtable_pmd_page_dtor(page);
-		__free_page(page);
+	if (atomic_sub_and_test(PMD_FRAG_NR - count, &ptdesc->pt_frag_refcount)) {
+		pagetable_pmd_dtor(ptdesc);
+		pagetable_free(ptdesc);
 	}
 }
 
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 85c84e89e3ea..1212deeabe15 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -306,22 +306,22 @@ static pmd_t *get_pmd_from_cache(struct mm_struct *mm)
 static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
 {
 	void *ret = NULL;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO;
 
 	if (mm == &init_mm)
 		gfp &= ~__GFP_ACCOUNT;
-	page = alloc_page(gfp);
-	if (!page)
+	ptdesc = pagetable_alloc(gfp, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pmd_page_ctor(page)) {
-		__free_pages(page, 0);
+	if (!pagetable_pmd_ctor(ptdesc)) {
+		pagetable_free(ptdesc);
 		return NULL;
 	}
 
-	atomic_set(&page->pt_frag_refcount, 1);
+	atomic_set(&ptdesc->pt_frag_refcount, 1);
 
-	ret = page_address(page);
+	ret = ptdesc_address(ptdesc);
 	/*
 	 * if we support only one fragment just return the
 	 * allocated page.
@@ -331,12 +331,12 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
 
 	spin_lock(&mm->page_table_lock);
 	/*
-	 * If we find pgtable_page set, we return
+	 * If we find ptdesc_page set, we return
 	 * the allocated page with single fragment
 	 * count.
 	 */
 	if (likely(!mm->context.pmd_frag)) {
-		atomic_set(&page->pt_frag_refcount, PMD_FRAG_NR);
+		atomic_set(&ptdesc->pt_frag_refcount, PMD_FRAG_NR);
 		mm->context.pmd_frag = ret + PMD_FRAG_SIZE;
 	}
 	spin_unlock(&mm->page_table_lock);
@@ -357,15 +357,15 @@ pmd_t *pmd_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr)
 
 void pmd_fragment_free(unsigned long *pmd)
 {
-	struct page *page = virt_to_page(pmd);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
 
-	if (PageReserved(page))
-		return free_reserved_page(page);
+	if (pagetable_is_reserved(ptdesc))
+		return free_reserved_ptdesc(ptdesc);
 
-	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
-	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
-		pgtable_pmd_page_dtor(page);
-		__free_page(page);
+	BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0);
+	if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) {
+		pagetable_pmd_dtor(ptdesc);
+		pagetable_free(ptdesc);
 	}
 }
 
diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
index 20652daa1d7e..8961f1540209 100644
--- a/arch/powerpc/mm/pgtable-frag.c
+++ b/arch/powerpc/mm/pgtable-frag.c
@@ -18,15 +18,15 @@
 void pte_frag_destroy(void *pte_frag)
 {
 	int count;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
-	page = virt_to_page(pte_frag);
+	ptdesc = virt_to_ptdesc(pte_frag);
 	/* drop all the pending references */
 	count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;
 	/* We allow PTE_FRAG_NR fragments from a PTE page */
-	if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) {
-		pgtable_pte_page_dtor(page);
-		__free_page(page);
+	if (atomic_sub_and_test(PTE_FRAG_NR - count, &ptdesc->pt_frag_refcount)) {
+		pagetable_pte_dtor(ptdesc);
+		pagetable_free(ptdesc);
 	}
 }
 
@@ -55,25 +55,25 @@ static pte_t *get_pte_from_cache(struct mm_struct *mm)
 static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
 {
 	void *ret = NULL;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
 	if (!kernel) {
-		page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT);
-		if (!page)
+		ptdesc = pagetable_alloc(PGALLOC_GFP | __GFP_ACCOUNT, 0);
+		if (!ptdesc)
 			return NULL;
-		if (!pgtable_pte_page_ctor(page)) {
-			__free_page(page);
+		if (!pagetable_pte_ctor(ptdesc)) {
+			pagetable_free(ptdesc);
 			return NULL;
 		}
 	} else {
-		page = alloc_page(PGALLOC_GFP);
-		if (!page)
+		ptdesc = pagetable_alloc(PGALLOC_GFP, 0);
+		if (!ptdesc)
 			return NULL;
 	}
 
-	atomic_set(&page->pt_frag_refcount, 1);
+	atomic_set(&ptdesc->pt_frag_refcount, 1);
 
-	ret = page_address(page);
+	ret = ptdesc_address(ptdesc);
 	/*
 	 * if we support only one fragment just return the
 	 * allocated page.
@@ -82,12 +82,12 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
 		return ret;
 	spin_lock(&mm->page_table_lock);
 	/*
-	 * If we find pgtable_page set, we return
+	 * If we find ptdesc_page set, we return
 	 * the allocated page with single fragment
 	 * count.
 	 */
 	if (likely(!pte_frag_get(&mm->context))) {
-		atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR);
+		atomic_set(&ptdesc->pt_frag_refcount, PTE_FRAG_NR);
 		pte_frag_set(&mm->context, ret + PTE_FRAG_SIZE);
 	}
 	spin_unlock(&mm->page_table_lock);
@@ -108,15 +108,15 @@ pte_t *pte_fragment_alloc(struct mm_struct *mm, int kernel)
 
 void pte_fragment_free(unsigned long *table, int kernel)
 {
-	struct page *page = virt_to_page(table);
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
-	if (PageReserved(page))
-		return free_reserved_page(page);
+	if (pagetable_is_reserved(ptdesc))
+		return free_reserved_ptdesc(ptdesc);
 
-	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
-	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+	BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0);
+	if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) {
 		if (!kernel)
-			pgtable_pte_page_dtor(page);
-		__free_page(page);
+			pagetable_pte_dtor(ptdesc);
+		pagetable_free(ptdesc);
 	}
 }
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 21:38:17 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 21:38:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542105.845711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TWB-0005Vi-C5; Wed, 31 May 2023 21:38:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542105.845711; Wed, 31 May 2023 21:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4TWA-0005Sg-Re; Wed, 31 May 2023 21:38:06 +0000
Received: by outflank-mailman (input) for mailman id 542105;
 Wed, 31 May 2023 21:38:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jp8B=BU=gmail.com=vishal.moola@srs-se1.protection.inumbo.net>)
 id 1q4TQI-0006xu-K7
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 21:32:02 +0000
Received: from mail-yw1-x1134.google.com (mail-yw1-x1134.google.com
 [2607:f8b0:4864:20::1134])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 92f508ae-fffa-11ed-b231-6b7b168915f2;
 Wed, 31 May 2023 23:31:59 +0200 (CEST)
Received: by mail-yw1-x1134.google.com with SMTP id
 00721157ae682-561b7729a12so22050707b3.1
 for <xen-devel@lists.xenproject.org>; Wed, 31 May 2023 14:31:59 -0700 (PDT)
Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::46])
 by smtp.googlemail.com with ESMTPSA id
 t63-20020a0dd142000000b0055aafcef659sm658905ywd.5.2023.05.31.14.31.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 31 May 2023 14:31:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92f508ae-fffa-11ed-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20221208; t=1685568718; x=1688160718;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iEZT4iY8EEl9pa7al7MLo6hk70lgBLqPfTLtEBGVb+s=;
        b=PGO1theuZfi6ZrSPVpMFRQAeOHsn5pMjC8xgKa8aRHHFy/i2vTnGPbNlxfLx+6Euaq
         Hf4qVvFQ61uRrkfUN+YNchpjFmVCPYUXGh7NblhXFTi/GX4kJQBqzqLIMTqMj4VWjiVr
         t8NWR43kuHnjQ8iMMng7kHUvIdpNqX/tRzg++sr2gZfQOVzgSjuKT3bzIA3uSG7kWgDQ
         17UYBKLxvl6bX7QLV22k75B/kUNvf5SVMTA3n26j13v0+L9xXBnD6yck43hrFvtcREzQ
         KMCTc+ZOGPqah06KW6AsjVfXFVLRcSZafJ7+MXnceNvSPC9LkpMpq/Z7sCtXDDCHjgXH
         Kgmw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20221208; t=1685568718; x=1688160718;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=iEZT4iY8EEl9pa7al7MLo6hk70lgBLqPfTLtEBGVb+s=;
        b=gjB4HHzN2kuhQjs6V/GRAr+kxng9w0PNrLfan4wReHJbx9DuFW/iI1qeA9C2IXIK3z
         yVLCojOYrVzLU8BXl9UHvrJNN3qcyp6tpLNcjgymGSf09sN7/pUzvjLMzmR8JIlCGTwN
         M2r9OoGSO7VOJ9ywZlhnyGvPEhgCJes8hCca9Xkm/hEiYsRiHEmCrJ9vliBiziwvCefu
         z5NwD58fPLORHFKLHAQnkAhlXs3J0h9wQIhLxAfXmxWS7Rv8qH+Efq/RS9lulDG1CV9p
         iXRaWoxRxU0i4Nqft356LwriOu9EKM+WomAy2U2HbXxKotkoUY8SoVIK6ZQQFpXiPlXU
         6ztg==
X-Gm-Message-State: AC+VfDy5Xs5xGV+qfxQ+N5KdKy12/iyE78Cd156MJHH6l89w2zkBeIwC
	kHVCq2ikrtfKp9lZKvfaOJM=
X-Google-Smtp-Source: ACHHUZ4Otln+kVOEgUySV1ArKpIYvBeruzYUd+lmycqZnK7GGCWSTwkmzaWpGSc76/XUk7nK9Exgbg==
X-Received: by 2002:a81:4f05:0:b0:561:987e:27a with SMTP id d5-20020a814f05000000b00561987e027amr7848466ywb.10.1685568718603;
        Wed, 31 May 2023 14:31:58 -0700 (PDT)
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org,
	linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>
Subject: [PATCH v3 25/34] m68k: Convert various functions to use ptdescs
Date: Wed, 31 May 2023 14:30:23 -0700
Message-Id: <20230531213032.25338-26-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230531213032.25338-1-vishal.moola@gmail.com>
References: <20230531213032.25338-1-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert
these to use pagetable_alloc() and ptdesc_address() instead to help
standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/m68k/include/asm/mcf_pgalloc.h  | 41 ++++++++++++++--------------
 arch/m68k/include/asm/sun3_pgalloc.h |  8 +++---
 arch/m68k/mm/motorola.c              |  4 +--
 3 files changed, 27 insertions(+), 26 deletions(-)

diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h
index 5c2c0a864524..9eb4ef9e6d77 100644
--- a/arch/m68k/include/asm/mcf_pgalloc.h
+++ b/arch/m68k/include/asm/mcf_pgalloc.h
@@ -7,20 +7,19 @@
 
 extern inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	free_page((unsigned long) pte);
+	pagetable_free(virt_to_ptdesc(pte));
 }
 
 extern const char bad_pmd_string[];
 
 extern inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
 {
-	unsigned long page = __get_free_page(GFP_DMA);
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_DMA | __GFP_ZERO, 0);
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
 
-	memset((void *)page, 0, PAGE_SIZE);
-	return (pte_t *) (page);
+	return (pte_t *) (ptdesc_address(ptdesc));
 }
 
 extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
@@ -35,36 +34,36 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable,
 				  unsigned long address)
 {
-	struct page *page = virt_to_page(pgtable);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
 
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	pagetable_pte_dtor(ptdesc);
+	pagetable_free(ptdesc);
 }
 
 static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-	struct page *page = alloc_pages(GFP_DMA, 0);
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_DMA, 0);
 	pte_t *pte;
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!pagetable_pte_ctor(ptdesc)) {
+		pagetable_free(ptdesc);
 		return NULL;
 	}
 
-	pte = page_address(page);
-	clear_page(pte);
+	pte = ptdesc_address(ptdesc);
+	pagetable_clear(pte);
 
 	return pte;
 }
 
 static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
 {
-	struct page *page = virt_to_page(pgtable);
+	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
 
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	pagetable_pte_dtor(ptdesc);
+	pagetable_free(ptdesc);
 }
 
 /*
@@ -75,16 +74,18 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-	free_page((unsigned long) pgd);
+	pagetable_free(virt_to_ptdesc(pgd));
 }
 
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	pgd_t *new_pgd;
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_DMA | __GFP_NOWARN, 0);
 
-	new_pgd = (pgd_t *)__get_free_page(GFP_DMA | __GFP_NOWARN);
-	if (!new_pgd)
+	if (!ptdesc)
 		return NULL;
+	new_pgd = (pgd_t *) ptdesc_address(ptdesc);
+
 	memcpy(new_pgd, swapper_pg_dir, PTRS_PER_PGD * sizeof(pgd_t));
 	memset(new_pgd, 0, PAGE_OFFSET >> PGDIR_SHIFT);
 	return new_pgd;
diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h
index 198036aff519..ff48573db2c0 100644
--- a/arch/m68k/include/asm/sun3_pgalloc.h
+++ b/arch/m68k/include/asm/sun3_pgalloc.h
@@ -17,10 +17,10 @@
 
 extern const char bad_pmd_string[];
 
-#define __pte_free_tlb(tlb,pte,addr)			\
-do {							\
-	pgtable_pte_page_dtor(pte);			\
-	tlb_remove_page((tlb), pte);			\
+#define __pte_free_tlb(tlb, pte, addr)				\
+do {								\
+	pagetable_pte_dtor(page_ptdesc(pte));			\
+	tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));	\
 } while (0)
 
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index c75984e2d86b..594575a0780c 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -161,7 +161,7 @@ void *get_pointer_table(int type)
 			 * m68k doesn't have SPLIT_PTE_PTLOCKS for not having
 			 * SMP.
 			 */
-			pgtable_pte_page_ctor(virt_to_page(page));
+			pagetable_pte_ctor(virt_to_ptdesc(page));
 		}
 
 		mmu_page_ctor(page);
@@ -201,7 +201,7 @@ int free_pointer_table(void *table, int type)
 		list_del(dp);
 		mmu_page_dtor((void *)page);
 		if (type == TABLE_PTE)
-			pgtable_pte_page_dtor(virt_to_page((void *)page));
+			pagetable_pte_dtor(virt_to_ptdesc((void *)page));
 		free_page (page);
 		return 1;
 	} else if (ptable_list[type].next != dp) {
-- 
2.40.1



From xen-devel-bounces@lists.xenproject.org Wed May 31 23:33:11 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 23:33:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542178.845737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4VJF-0005On-20; Wed, 31 May 2023 23:32:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542178.845737; Wed, 31 May 2023 23:32:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4VJE-0005Og-Ur; Wed, 31 May 2023 23:32:52 +0000
Received: by outflank-mailman (input) for mailman id 542178;
 Wed, 31 May 2023 23:32:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4VJE-0005OW-Ir; Wed, 31 May 2023 23:32:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4VJE-0002Ia-B6; Wed, 31 May 2023 23:32:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1q4VJE-0002Av-2v; Wed, 31 May 2023 23:32:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1q4VJE-0002SO-2S; Wed, 31 May 2023 23:32:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aqx5hl7Nt0wb2PyfF9J0zKew6BrCvBt4HohtFg5cevI=; b=lTbY1JRwKpBU0wSujJPyqXKbm6
	NUvHdtji9XhkdJPy58PwCS8JokRo/PHjYhYkicHPHhuYnFG086WXim7BpemJR0Y4vODXadCovCahe
	LeSYPDs0CjHGdWYxcc5TXYAlQDFd8zgLgY1z71ghhHXyPFVqu94eI8A6P1b6VKnwKqEU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-181054-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 181054: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dc98fa74446e5abe417e5ba9a6a632b50444cfa1
X-Osstest-Versions-That:
    xen=94200e1bae07e725cc07238c11569c5cab7befb7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 31 May 2023 23:32:52 +0000

flight 181054 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/181054/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dc98fa74446e5abe417e5ba9a6a632b50444cfa1
baseline version:
 xen                  94200e1bae07e725cc07238c11569c5cab7befb7

Last test of basis   181018  2023-05-30 20:00:24 Z    1 days
Failing since        181031  2023-05-31 11:00:27 Z    0 days    4 attempts
Testing same since   181054  2023-05-31 19:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bobby Eshleman <bobbyeshleman@gmail.com>
  George Dunlap <george.dunlap@cloud.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   94200e1bae..dc98fa7444  dc98fa74446e5abe417e5ba9a6a632b50444cfa1 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 31 23:49:35 2023
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 31 May 2023 23:49:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.542185.845750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4VZK-00075n-FC; Wed, 31 May 2023 23:49:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 542185.845750; Wed, 31 May 2023 23:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1q4VZK-00075g-CF; Wed, 31 May 2023 23:49:30 +0000
Received: by outflank-mailman (input) for mailman id 542185;
 Wed, 31 May 2023 23:49:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RcVj=BU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1q4VZI-00075G-ST
 for xen-devel@lists.xenproject.org; Wed, 31 May 2023 23:49:28 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c672212f-000d-11ee-b231-6b7b168915f2;
 Thu, 01 Jun 2023 01:49:26 +0200 (CEST)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by dfw.source.kernel.org (Postfix) with ESMTPS id 8310460EBB;
 Wed, 31 May 2023 23:49:25 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 426F2C433D2;
 Wed, 31 May 2023 23:49:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c672212f-000d-11ee-b231-6b7b168915f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1685576965;
	bh=V0LVJpm6gqhdWdM3hxlzoeWek7HH89EI9IWkTveLUA8=;
	h=From:To:Cc:Subject:Date:From;
	b=NVeHRxP/H2HTrRscF/S9BiyU/iL8QTU8boHd0nmyVUm/dW+JnWghGdRIA23/sxOwg
	 dzYWJl8YsmGGVaV3/zMCeDLfZfTqcLRIVIwK2BXtJ8uYC66IzaZkNHKe/gPvkpJsGG
	 RrUnS+4mxazmrrKqWvrm+D5K197ZBwRQZqqSAw8n59x/OrlascWtbBZP9bIOamVb2+
	 1MXu362X4YtBYDYNui0vdXq5+5CBXdwakrQADBcLkNTiMjAjz2hImvTlvp92bIYPqY
	 muAdK/sbIQf61g9Q+XycNzhvc2npjPwKcLNiUTetWuOgw1tL4Q3ZCUFb+z1OK3uuDS
	 KrqKE7krTvI6g==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	marmarek@invisiblethingslab.com,
	Stefano Stabellini <stefano.stabellini@amd.com>
Subject: [PATCH] automation: zen3 dom0pvh test
Date: Wed, 31 May 2023 16:49:21 -0700
Message-Id: <20230531234921.2291367-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Stefano Stabellini <stefano.stabellini@amd.com>

Add a PVH Dom0 test for the zen3 runner.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
 automation/gitlab-ci/test.yaml | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index fbe2c0589a..d5cb238b0a 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -202,6 +202,14 @@ zen3p-smoke-x86-64-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.12-gcc-debug
 
+zen3p-smoke-x86-64-dom0pvh-gcc-debug:
+  extends: .zen3p-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh dom0pvh 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.12-gcc-debug
+
 zen3p-pci-hvm-x86-64-gcc-debug:
   extends: .zen3p-x86-64
   script:
-- 
2.25.1



